| anchor | positive | source |
|---|---|---|
Lamp filaments in a circuit | Question:
My answer to this question was:
"The brightness of each lamp would increase due to more voltage available to spread across the now 39 lamps (since one burns out)"
This, however, is incorrect. Instead, the answer is given as "the entire circuit of lamps do not light"
Could someone explain why this is so?
Answer: When an incandescent bulb burns out, the filament inside breaks, so it is essentially an open switch, i.e. it "fails open". Since the bulbs are in series, once any of them fails, no current can flow and no bulb will light. | {
"domain": "physics.stackexchange",
"id": 88081,
"tags": "homework-and-exercises, electric-circuits"
} |
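The two failure modes behind the question above can be contrasted with a quick numerical sketch (the supply voltage, bulb resistance, and 40-bulb count are illustrative assumptions, not values from the question): if a bulb failed *short*, the remaining 39 would indeed get brighter, as the asker reasoned; because an incandescent filament fails *open*, the current, and hence every bulb's power, drops to zero.

```python
# Illustrative numbers only -- not from the original question.
V, R, n = 120.0, 10.0, 40   # supply voltage, per-bulb resistance, bulb count

def power_per_bulb(n_bulbs):
    # Series circuit: one current through all bulbs, P = I^2 * R per bulb.
    if n_bulbs == 0:
        return 0.0
    current = V / (n_bulbs * R)
    return current**2 * R

p40 = power_per_bulb(40)

# "Fails short": the broken bulb still conducts, so 39 bulbs share the full
# voltage and each remaining bulb gets brighter -- the asker's intuition.
p39 = power_per_bulb(39)

# "Fails open": the filament breaks the only current path, so I = 0 and
# every bulb goes dark -- the textbook's answer.
p_open = 0.0

print(p40, p39, p_open)
```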
If the decision problem can be solved in poly time, show the optimization problem also can | Question: Here is a problem I am trying to solve:
The bin packing decision problem is defined as follows: given an unlimited number of
bins, each of capacity equal to $1$, and $n$ objects with sizes $s_1$, $s_2$, $\dots$, $s_n$ ($0 < s_i \le 1$), do the objects fit in $k$ bins (where $k$ is a given
integer)? The bin packing optimization problem is to find the smallest
number of bins into which the objects can be packed. Show that if the
decision problem can be solved in polynomial time, then the
optimization problem can also be solved in polynomial time.
I know what it is asking but I don't know what the "optimization" problem for this is. Is it the grouping of all the objects into different bins (for instance, $s_1$, $s_3$, and $s_6$ are in bin #$1$, $s_2$, $s_4$, $s_5$ are in bin #$2$, etc.)? Or is it simply the number of bins you need to store them? I feel like the number of bins is the decision problem...
Answer: In the decision problem, $k$ is given, so the answer is yes, or no (a decision). In the optimization problem $k$ is unknown, and we have to find out how small it can be (the optimal solution). The bigger question asks if there is a logical connection between possible solutions to these two problems, such that the complexity of the respective algorithms is correlated. | {
"domain": "cs.stackexchange",
"id": 4760,
"tags": "np-complete, optimization, decision-problem, np"
} |
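The reduction asked about above can be sketched in a few lines of Python. This is a hedged illustration: `fits_in_k_bins` stands in for the hypothetical polynomial-time decision procedure (the brute-force stand-in below is exponential and exists only so the sketch runs). Since a packing into $k$ bins is also a packing into $k+1$ bins, feasibility is monotone in $k$, so a binary search over $1 \le k \le n$ finds the optimum with $O(\log n)$ oracle calls.

```python
from itertools import product

def min_bins(sizes, fits_in_k_bins):
    """Solve the optimization problem with O(log n) calls to the decision
    oracle: the answer lies between 1 and n (one item per bin always works),
    and feasibility is monotone in k, so binary search applies."""
    lo, hi = 1, len(sizes)
    while lo < hi:
        mid = (lo + hi) // 2
        if fits_in_k_bins(sizes, mid):
            hi = mid            # mid bins suffice; try fewer
        else:
            lo = mid + 1        # need more than mid bins
    return lo

def brute_force_decision(sizes, k):
    # Exponential stand-in for the assumed poly-time decision procedure:
    # try every assignment of items to the k bins.
    return any(
        all(sum(s for s, b in zip(sizes, assign) if b == bin_i) <= 1
            for bin_i in range(k))
        for assign in product(range(k), repeat=len(sizes)))

print(min_bins([0.5, 0.7, 0.5, 0.2, 0.4, 0.2], brute_force_decision))  # 3
```

Recovering an actual packing (not just the optimal count) also needs only polynomially many oracle calls, e.g. by fixing item placements one at a time and re-querying, but the count alone answers the exercise as stated.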
Loaded Truss problem | Question:
Hi, I'm having some problems with this truss question that I have in my homework. So far these are my solutions.
I worked out my solutions to the reaction forces but don't know if they're correct.
The next step is to find the internal forces ZB, YB, SU, SV, TV, RT, RS, QS, JL, JK, NP, MP, HO, HM, IO, and MO.
Is there an easier way to find all these? E.g. are some of them the same? Wouldn't using either the method of joints or the method of sections be too tedious here?
Thanks.
Answer: It may be worth reconsidering your deduced zero force members. But for this question it is not really necessary to determine which are zero force members in advance as this will come out in the analysis.
It is not important if the truss is determinate or not in order to get the member forces you are interested in (highlighted in red). Approach this question using the method of sections. I have marked sensible section cuts in green. You can get the member forces of JL and JK by considering the equilibrium of joint J. It is worth taking note that the set of member forces requested hints at the appropriate solution method.
For example, in order to get the member forces RT, RS, and QS make cut 3:
Notice that since you have the reaction forces at A already the member forces RT, RS, QS can be solved by equilibrium.
Getting member forces for NP, MP, and MO is a little bit more tricky, but once you have member force for MH (by making cut 4) you can make the cut marked in blue.
It is a bit tedious using method of sections, but there isn't a faster method that I am aware of. Even modelling this in a structural analysis software package would probably take longer ... | {
"domain": "engineering.stackexchange",
"id": 477,
"tags": "civil-engineering, structures, statics"
} |
How to model a humidity control system inside a box? | Question: I am a computer engineering student and am currently working on my undergraduate thesis. My thesis is about a chicken egg incubator, and I need to control the humidity inside the incubator.
Right now, I don't know how to calculate the transfer function of the humidity control system. Could someone please point me in the right direction?
Is it possible to get the transfer function if I can give a step input (turn on the humidifier) and then get the s-domain of the output, then divide the output over the input, because transfer function = output/input? Or is this method difficult considering I'm only an undergraduate computer engineering student?
Answer: If a system can be approximated as a (stable) linear time invariant system, then any bounded input which contains all frequencies (for digital signals, limited to the Nyquist frequency) can be used to identify the system.
The Laplace transform of a step function is $s^{-1}$, so high frequency behavior might be more prone to be hidden below a noise floor. So you would need more measurements in order to average away this noise, if you are interested in those high frequencies.
Once you have multiple measurements of input data, which contain "all frequencies", and output/system-response data, then you can approximate the frequency response function of the system, denoted with $P(s)$, using,
$$
P(s) \approx \frac{\sum Y_i(s)\,\bar{U}_i(s)}{\sum U_i(s)\,\bar{U}_i(s)},
$$
where $Y_i(s)$ is the fast Fourier transform (FFT) of the $i$th measurement of the output, $U_i(s)$ the FFT of the $i$th input data, and $\bar{U}_i(s)$ means taking the complex conjugate of $U_i(s)$ (which will help reduce the amount of noise in $P(s)$ the more measurements you use). Usually you would also apply a window function to each time domain signal, before calculating the FFT. A common choice is the Hanning window. | {
"domain": "engineering.stackexchange",
"id": 1099,
"tags": "thermodynamics, control-engineering, chemical-engineering"
} |
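The averaging formula for $P(s)$ above can be sketched in plain Python. This is a toy illustration, not incubator data: the 3-tap FIR "plant" and the noise level are made up, the input is treated as periodic (so circular convolution makes $Y_k = H_k U_k$ exact and no window is needed; for real measured records you would window, e.g. with a Hanning window, before the FFT), and a naive $O(N^2)$ DFT stands in for an FFT to keep the sketch dependency-free.

```python
import cmath
import random

def dft(x):
    # Naive O(N^2) DFT, standing in for an FFT.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Toy "plant": a 3-tap FIR filter; the taps are made up for illustration.
h = [0.5, 0.3, 0.2]

def simulate(u):
    # Circular convolution: the input is treated as one period of a
    # periodic excitation, so Y(k) = H(k) U(k) holds exactly.
    n = len(u)
    return [sum(h[j] * u[(t - j) % n] for j in range(len(h))) for t in range(n)]

N, trials = 64, 20
random.seed(0)
num = [0j] * N   # running sum of Y_i * conj(U_i)
den = [0j] * N   # running sum of U_i * conj(U_i)
for _ in range(trials):
    u = [random.gauss(0, 1) for _ in range(N)]               # broadband input
    y = [v + random.gauss(0, 0.05) for v in simulate(u)]     # noisy response
    U, Y = dft(u), dft(y)
    for k in range(N):
        num[k] += Y[k] * U[k].conjugate()
        den[k] += U[k] * U[k].conjugate()

P = [num[k] / den[k] for k in range(N)]         # estimated frequency response
H_true = dft(h + [0.0] * (N - len(h)))          # exact response of the toy plant
max_err = max(abs(P[k] - H_true[k]) for k in range(N))
print("max |P - H| over all bins:", max_err)
```

More averaging drives `max_err` down, which is exactly the point of the conjugate-weighted sums: the noise term averages toward zero while the system term does not.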
Vision: what is the difference between on-off ganglion cells and lateral inhibition? | Question: Is 'lateral inhibition' just a term for the biological basis of the functioning of the on-center (or off-center) ganglion cells?
Or do these terms describe separate processes?
Answer: Short answer
Retinal center-surround receptive fields are an example of lateral inhibition. It occurs elsewhere in the nervous system too.
Background
Center-surround receptive fields are indeed an example of lateral inhibition, where the ON field suppresses the OFF field through lateral inhibition. The center-surround connectivity in the retina (Fig. 1) is the most well-known example of this kind of circuitry. However, lateral inhibition occurs in other sensory systems too, for example auditory and olfactory neurons (Bakshi & Ghosh, 2017). For more information on the retinal circuitry underlying center/surround inhibition see this answer.
References
- Bakshi & Ghosh (2017), Handbook of Neural Computation: 487-513.
Fig. 1. Retinal circuitry underlying ON-OFF center-surround connectivity in ganglion cells. Source: New York University | {
"domain": "biology.stackexchange",
"id": 10265,
"tags": "neuroscience, neurophysiology, vision"
} |
Pytorch mat1 and mat2 shapes cannot be multiplied | Question: The error message shows
RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x32768 and 512x256)
I have built the following model:
def classifier_block(input, output, kernel_size, stride, last_layer=False):
    if not last_layer:
        x = nn.Sequential(
            nn.Conv2d(input, output, kernel_size, stride, padding=3),
            nn.BatchNorm2d(output),
            nn.LeakyReLU(0.2, inplace=True)
        )
    else:
        x = nn.Sequential(
            nn.Conv2d(input, output, kernel_size, stride),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
    return x
class Classifier(nn.Module):
    def __init__(self, input_dim, output):
        super(Classifier, self).__init__()
        self.classifier = nn.Sequential(
            classifier_block(input_dim, 64, 7, 2),
            classifier_block(64, 64, 3, 2),
            classifier_block(64, 128, 3, 2),
            classifier_block(128, 256, 3, 2),
            classifier_block(256, 512, 3, 2, True)
        )
        print('CLF: ', self.classifier)
        self.linear = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, output)
        )
        print('Linear: ', self.linear)

    def forward(self, image):
        print('IMG: ', image.shape)
        x = self.classifier(image)
        print('X: ', x.shape)
        return self.linear(x.view(len(x), -1))
The input images are of size 512x512. Here is my training block:
loss_train = []
loss_val = []
for epoch in range(epochs):
    print('Epoch: {}/{}'.format(epoch, epochs))
    total_train = 0
    correct_train = 0
    cumloss_train = 0
    classifier.train()
    for batch, (x, y) in enumerate(train_loader):
        x = x.to(device)
        print(x.shape)
        print(y.shape)
        output = classifier(x)
        loss = criterion(output, y.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print('Loss: {}'.format(loss))
Any advice would be much appreciated.
Answer: In forward, the image first passes through some convolutional layers (i.e. self.classifier), then it is flattened, then passed through some linear layers (i.e. self.linear).
The problem is that the dimensionality of the flattened tensor does not match the expected input for self.linear. The last dimension of the flattened tensor is expected to be 512 (see the in_features parameter of nn.Linear), but it actually is 32768, according to the error you get.
You may change nn.Linear(512, 256) to nn.Linear(32768, 256), matching the size reported by the error, to fix the mismatch. | {
"domain": "datascience.stackexchange",
"id": 11581,
"tags": "neural-network, cnn, convolutional-neural-network, image-classification, pytorch"
} |
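One way to see where the 32768 in the error above comes from is to trace the feature-map sizes through the convolutions by hand. A minimal sketch, assuming the standard conv/pool output-size formula, with the layer parameters copied from the model above: a true 512x512 input ends at 9x9 maps (512 * 9 * 9 = 41472), while the reported 32768 equals 512 * 8 * 8, which suggests the tensors actually reaching the model are smaller than 512x512 (448x448 inputs, for example, would produce exactly that).

```python
def out_size(n, kernel, stride, padding=0):
    # Standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1.
    return (n + 2 * padding - kernel) // stride + 1

def flattened_features(n):
    # The four non-final blocks: Conv2d(..., padding=3) with the given kernels/strides.
    for kernel, stride in [(7, 2), (3, 2), (3, 2), (3, 2)]:
        n = out_size(n, kernel, stride, padding=3)
    n = out_size(n, 3, 2)        # last block's conv (no padding)
    n = out_size(n, 3, 2, 1)     # its MaxPool2d(3, stride=2, padding=1)
    return 512 * n * n           # 512 output channels

print(flattened_features(512))   # 41472
print(flattened_features(448))   # 32768, matching the error message
```

Computing the size this way (or with a dummy forward pass) avoids hard-coding a number that silently breaks when the input resolution changes.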
Fuerte + Ubuntu 12.04 + Kinect errors | Question:
Dear all,
I am trying to access the point cloud from a Kinect device using ROS Fuerte on Ubuntu 12.04 64-bit. The command roslaunch openni_launch openni.launch gives me several errors. I also found several ROS Answers users with my same problems, but no one was able to find a complete solution.
Basically I found 3 kinds of problems.
The first:
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored process[camera_base_link3-21]: started with pid [9074]
This error and the similar ones are related to this Python bug http://bugs.python.org/issue14308 but I was not able to use the patch they proposed. However, it seems a minor bug that should not change anything (I hope!).
The second:
[ERROR] [1340647948.750262621]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters]
[ERROR] [1340647948.758634172]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/theora/set_parameters]
It is related to a cyclic reference inside the files included by openni.launch. To solve this issue it is enough to comment inside the file (ros_stack)/openni_launch/launch/includes/depth_registered.launch the following code:
<!-- Get all the usual depth topics -->
<include file="$(find openni_launch)/launch/includes/depth.launch"
ns="$(arg depth_registered)">
<arg name="manager" value="$(arg manager)" />
<arg name="points_xyz" value="false" /> <!-- Suppress XYZ point cloud -->
<arg name="respawn" value="$(arg respawn)" />
</include>
so, this is ok.
At last:
[ INFO] [1340647956.288019996]: Number devices connected: 1
[ INFO] [1340647956.288166663]: 1. device on bus 002:08 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'A00365910150107A'
[ INFO] [1340647956.289262201]: Searching for device with index = 1
nodelet: /usr/include/boost/smart_ptr/shared_ptr.hpp:412: boost::shared_ptr<T>::reference boost::shared_ptr<T>::operator*() const [with T = xn::NodeInfo, boost::shared_ptr<T>::reference = xn::NodeInfo&]: Assertion `px != 0' failed.
[camera_nodelet_manager-1] process has died [pid 8573, exit code -6, cmd /opt/ros/fuerte/stacks/nodelet_core/nodelet/bin/nodelet manager __name:=camera_nodelet_manager __log:=/home/salvo/.ros/log/1c33ed90-bee0-11e1-b0ec-f46d04509546/camera_nodelet_manager-1.log].
log file: /home/salvo/.ros/log/1c33ed90-bee0-11e1-b0ec-f46d04509546/camera_nodelet_manager-1*.log
I was completely unable to solve this. Could you help me? Do you have any ideas? Some workaround? I found other answers here about this, but in this case no one was able to solve it.
Best,
Salvo
Originally posted by Salvo on ROS Answers with karma: 41 on 2012-06-25
Post score: 3
Original comments
Comment by Salvo on 2012-06-25:
no one can help?
Answer:
The first two errors are not really fatal, so I'm ignoring them on my system.
The third one seems similar to what I experience on a fresh install of Lubuntu 12.04 64-bit and ROS Fuerte.
I solved it by reverting to older versions of openni-dev and ps-engine. Using binaries for newer versions that were posted here on Answers only caused more problems (i.e. my camera wasn't detected at all).
step 1:
sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
step 2:
sudo apt-get update
step 3:
Open synaptic (install it if you have to) then search for openni and remove openni-dev completely. You'll probably see version 1.5.2.23 installed.
Now select openni-dev and choose the menu item Package, Force version. Select version 1.3.2.1-4+precise1 from the menu (provided by the ppa you just added).
Install it.
Then go to Package and Lock version. You should see a red bar over the now installed openni-dev.
Then choose ps-engine (it should appear when you search for openni). Force the version for this package to 5.0.3.3-3+precise1.
install it and lock the version.
You can now install the packages ros-fuerte-openni-launch and ros-fuerte-openni-camera. The functionality should be the same as what you used to have in electric.
Now let's just hope that whatever is causing the problem gets fixed soon.
Originally posted by Daniel Canelhas with karma: 465 on 2012-07-31
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 9940,
"tags": "kinect, ros-fuerte, ubuntu-precise, ubuntu"
} |
JS Clock widget | Question: I want to get some best practices and code corrections.
JS
function clock(){
    var currentTime = new Date()
    var hours = currentTime.getHours()
    var minutes = currentTime.getMinutes()
    var seconds = currentTime.getSeconds()
    var timeOfDay = ( hours < 12 ) ? "AM" : "PM";
    hours = ( hours > 12 ) ? hours - 12 : hours;
    hours = ( hours === 0 ) ? 12 : hours;
    minutes = ( minutes < 10 ? "0" : "" ) + minutes;
    seconds = ( seconds < 10 ? "0" : "" ) + seconds;
    var place = (hours + ":" + minutes + ":" + seconds + timeOfDay);
    document.getElementById('clock').innerHTML = place;
    setTimeout(clock,1000);
};
clock();
CSS
#clock {
    font-family: arial, helvetica;
    background: #FFF;
    box-shadow: 3px 3px 5px 6px #ccc;
    width: 86px;
    -webkit-box-shadow: 0 10px 6px -6px #777;
    -moz-box-shadow: 0 10px 6px -6px #777;
    box-shadow: 0 10px 6px -6px #777;
}
HTML
<div id="clock">Time</div>
Answer: initial findings:
You aren't using the init function, so why keep it. Whatever code you don't need: get rid of it.
Your clock function calls itself sort-of recursively with a 1 second interval, due to your using the setTimeout function.
You could easily avoid calling setTimeout every time by simply changing that to setInterval:
function clock()
{
    //code here
}
var clockInterval = setInterval(clock, 1000);
The result of this is that, through the return value of setInterval (which is the interval's ID) you can stop the code from running whenever you need to:
//some event handler, for example a pauseBtn.addEventListener('click'...)
clearInterval(clockInterval);
which gives you more flexibility, and will make things easier when you want to debug or indeed add functionality later on.
The same can be done with a timeout, of course:
var tId = setTimeout(function()
{
    console.log('~5 seconds have passed');
}, 5000);
clearTimeout(tId); //the callback won't ever be called, unless for some reason this statement takes more than 5 seconds to reach.
However, an interval's ID is the same throughout. If you decide to use setTimeout, you'll have to re-assign the return value of every setTimeout call to the same variable, simply because each timeout can, and most likely will, have a new ID. For that reason alone I think setInterval is the better choice in your case.
In the function, you have this document.getElementById('clock') DOM lookup that queries the DOM for the same element over and over again. The DOM API is what it is: clunky and slow, so wherever you can, saving on DOM queries is a good thing. The element in question is, as far as your function is concerned, pretty much a constant, is it not?
Why not use a closure, so you can query the DOM once, and use the same reference over and over? Something like this:
var clock = (function(clockNode)
{//clockNode will remain in scope, and won't be GC'ed
    return function()
    {//the actual clock function
        //replace document.getElementById('clock') with
        clockNode.innerHTML = place;
    };
}(document.getElementById('clock')));//pass DOM reference here
var clockInterval = setInterval(clock, 1000);
However, it's worth noting that innerHTML is quite slow, looks dated and is non-standard. The "correct" way of doing things would be:
clockNode.replaceChild(document.createTextNode(place), clockNode.childNodes.item(0));
Which, admittedly looks a tad more verbose and over-complicates things, but just thought you'd like to know.
Speaking of over-complicating things, and as phyrfox rightfully pointed out, you're essentially building a locale time string from the Date instance. You are re-inventing the wheel, considering the Date object comes with a ready-made method for just such a thing: toLocaleTimeString. This dramatically simplifies our code:
clockNode.replaceChild(
    document.createTextNode((new Date()).toLocaleTimeString()),
    clockNode.childNodes.item(0)
);
That's all you need!
here's a fiddle
Oh, and remember how I said that the interval ID returned by setInterval can be a useful thing to have?
Here's another fiddle that illustrates that point
The code you have can, therefore be re-written to be more standard-compliant and at the same time be made shorter. What you end up with is simply this:
var clock = (function (clockNode)
{
    return function()
    {
        clockNode.replaceChild(
            document.createTextNode(
                (new Date).toLocaleTimeString()
            ),
            clockNode.childNodes.item(0)
        );
    };
}(document.getElementById('clock')));
var clockInterval = setInterval(clock, 1000);
Conventions
Yes, semi-colons are optional in JS, but it is considered a good habit to write them nonetheless. That's an easy convention to follow, and it'll help you when you decide to learn, or have to switch to, other languages. Many of which see the semi-colon as being non optional.
It's a bit of a hang-up of mine, I admit, but try to adhere to the conventions as much as possible, like: avoiding too many var declarations in a block of code. You can easily compact them into a single statement, using a comma:
var currentTime = new Date(),
    hours = currentTime.getHours(),
    minutes = currentTime.getMinutes(),
    seconds = currentTime.getSeconds(),
    timeOfDay = ( hours < 12 ) ? "AM" : "PM";
You can do away with the temp var place altogether, and can even move the var declarations to the IIFE which we've used to store the clockNode DOM reference in:
var clock = (function(clockNode)
{
    var currentTime, hours, minutes, seconds, timeOfDay;
    return function(){};
}(document.getElementById('clock')));
Passing your code through the rather pedantic JSLint tool never hurts.
It'll probably complain about your keenness on using the ternary operator, too; though I'm not particularly opposed to the odd ternary, you do seem to be using it an awful lot. | {
"domain": "codereview.stackexchange",
"id": 6704,
"tags": "javascript, datetime"
} |
Massless particles in the Lagrangian formalism of special relativity | Question: In the Lagrangian picture of special relativity we usually define the action $$S = -mc^2 \int d\tau.$$ This is clearly 0 for massless particles, so it says absolutely nothing about their motion. Despite that, from this Lagrangian we obtain momentum and energy (after passing to the Hamiltonian) and eventually conclude $$E^2 = m^2c^4 + {\bf p}^2c^2$$ and then use this identity even for massless particles. Why is this fair?
Answer:
It seems overkill to use the Lagrangian formalism to prove the relativistic dispersion relation. It follows just from
(i) that the 4-momentum $p^{\mu}$ is a 4-vector and
(ii) that the invariant/rest mass times $c$ is equal to the length of the momentum 4-vector$^1$.
Nevertheless, if OP is not satisfied by a continuity argument $m\to 0$, and if OP wants to pursue the Lagrangian formalism, then one should use a manifestly Lorentz-covariant Lagrangian
$$L~=~-\frac{\dot{x}^2}{2e}-\frac{e (mc)^2}{2} \tag{L}$$
that works for both massless & massive point particles, cf. e.g. this Phys.SE post. One may show that the corresponding Hamiltonian Lagrangian is
$$L_H~=~p_{\mu}\dot{x}^{\mu}-\underbrace{\frac{e}{2}((mc)^2-p^2)}_{\text{Hamiltonian}}, \tag{H}$$
cf. e.g. this Phys.SE post. The EL equation for the auxiliary field $e$ yields precisely the sought-for mass-shell condition
$$p^2~=~(mc)^2,\tag{M}$$
even in the massless case.
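To spell out that last step (a short check, using only the quoted formulas): varying the Lagrangian (L) with respect to the einbein $e$ gives
$$0~=~\frac{\partial L}{\partial e}~=~\frac{\dot{x}^2}{2e^2}-\frac{(mc)^2}{2}\qquad\Leftrightarrow\qquad \dot{x}^2~=~e^2(mc)^2,$$
while the canonical momentum is $p_{\mu}~=~\partial L/\partial\dot{x}^{\mu}~=~-\dot{x}_{\mu}/e$, so that $p^2~=~\dot{x}^2/e^2~=~(mc)^2$, which is precisely eq. (M); for $m=0$ it simply states that the worldline is null, $\dot{x}^2~=~0$.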
--
$^1$ In this answer the Minkowski signature is assumed to be $(+,-,-,-)$. | {
"domain": "physics.stackexchange",
"id": 69782,
"tags": "special-relativity, lagrangian-formalism, momentum, vectors, dispersion"
} |
Morgenstern proof for FFT lower bound | Question: I looked at my notes from a class about the fast Fourier transform, where the professor proved in class a theorem due to Morgenstern. First he defined a linear algorithm as an algorithm that only uses multiplications by a scalar and additions; then he stated Morgenstern's theorem the following way:
Every linear algorithm that computes DFT (Discrete Fourier Transform) , in which the scalar are bounded in their absolute value by $C$ , takes at least $\Omega_C(n\log{n})$ addition operations.
I haven't understood his proof; if someone can give me guidelines to the proof I'll be more than happy :) [right now my copy of the proof looks like a mess]
p.s.: In class he said we will write in each step the coefficients of the linear function that was just computed, then he defined a "computation advancement function". I didn't understand his definition of this function; it was defined something like $\phi(k)=$ maximum determinant of $n\times n$ matrices that were computed until the k'th step [I probably copied it wrong]
Answer: Morgenstern first defines the notion of a linear algorithm. A linear algorithm gets as input $x_1,\ldots,x_n$ and its goal is to compute some $y_1,\ldots,y_m$, each of which is a (specific) linear combination of $x_i$s. The algorithm proceeds in steps, starting with step $n+1$. At step $t$, the algorithm computes $x_t = \lambda_t x_i + \mu_t x_j$ for some $i,j < t$. At the end of the computation, for each $i$, $y_i = x_j$ for some $j$.
For example, here is an algorithm computing the unnormalized DFT on 2 variables:
$$
x_3 \gets x_1 + x_2 \\
x_4 \gets x_1 - x_2
$$
Similarly, the unnormalized two dimensional DFT on $2^2$ variables is computed by:
$$
x_5 \gets x_1 + x_2 \\
x_6 \gets x_1 - x_2 \\
x_7 \gets x_3 + x_4 \\
x_8 \gets x_3 - x_4 \\
x_9 \gets x_5 + x_7 \\
x_{10} \gets x_5 - x_7 \\
x_{11} \gets x_6 + x_8 \\
x_{12} \gets x_6 - x_8
$$
We can view each $x_t$ as an $n$-dimensional vector which gives the linear combination of $x_1,\ldots,x_n$ producing $x_t$; call this vector $v_t$. The vectors $v_1,\ldots,v_n$ are just the $n$ basis vectors.
Morgenstern defines the quantity $\Delta_t$, which is the maximum magnitude of the determinant of any square submatrix of the matrix $V_t$ whose rows are $v_1,\ldots,v_t$.
Lemma. Let $c \geq 1/2$. If $|\lambda_s|,|\mu_s| \leq c$ for all $s$ then $\Delta_{n+t} \leq (2c)^t$.
Proof. The proof is by induction on $t$. When $t = 0$, this is easy to verify directly since $V_n$ is just the identity matrix. Consider now any $t > 0$. Every square submatrix of $V_t$ is either a square submatrix of $V_{t-1}$, in which case its determinant is at most $(2c)^{t-1} \leq (2c)^t$ by induction, or it involves the new row $v_t = \lambda_t v_i + \mu_t v_j$. In the latter case, we can write the square submatrix $A$ as $A = \lambda_t B + \mu_t C$, where $B,C$ are square submatrices of $V_{t-1}$ (replace the relevant part of $v_t$ by the corresponding parts of $v_i$ and $v_j$). Since the determinant is a linear function of any of its rows, $\det(A) = \lambda_t \det(B) + \mu_t \det(C)$. By induction, $|\det(B)|,|\det(C)| \leq (2c)^{t-1}$, and so $$|\det(A)| \leq |\lambda_t| |\det(B)| + |\mu_t| |\det(C)| \leq c(2c)^{t-1} + c(2c)^{t-1} = (2c)^t.$$
Corollary. Computing the DFT on $n$ variables using a linear algorithm with bounded coefficients requires $\Omega(n\log n)$ steps.
Proof. The determinant of the DFT matrix is $n^{n/2}$. Hence any linear algorithm computing the DFT in $t$ steps satisfies $\Delta_t \geq n^{n/2}$. If the bound on the coefficients is $c$, then the lemma shows that $(2c)^t \geq n^{n/2}$ and so $t = \Omega(n\log n)$.
Remark. Strassen has shown that any algebraic algorithm (algorithm involving $+,-,\cdot,/$) for computing the DFT can be transformed to a linear algorithm using the same number of steps. | {
"domain": "cs.stackexchange",
"id": 4180,
"tags": "lower-bounds, fourier-transform"
} |
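As a sanity check on the worked example in the answer above, the following sketch (plain Python; the input vector is arbitrary) verifies that the eight additions/subtractions compute the unnormalized two-dimensional DFT, i.e. the matrix $H \otimes H$ where $H$ is the $2\times 2$ DFT matrix:

```python
def linear_algorithm(x1, x2, x3, x4):
    # The eight steps from the example, each a single addition/subtraction.
    x5, x6 = x1 + x2, x1 - x2
    x7, x8 = x3 + x4, x3 - x4
    x9, x10 = x5 + x7, x5 - x7
    x11, x12 = x6 + x8, x6 - x8
    # x9, x11, x10, x12 hold the frequency components (0,0), (0,1), (1,0), (1,1).
    return [x9, x11, x10, x12]

H = [[1, 1], [1, -1]]
# Kronecker product H (x) H: the unnormalized 2-D DFT matrix on 4 points.
HH = [[H[i][k] * H[j][l] for k in range(2) for l in range(2)]
      for i in range(2) for j in range(2)]

x = [3, 1, 4, 1]
direct = [sum(row[m] * x[m] for m in range(4)) for row in HH]
print(linear_algorithm(*x), direct)  # both [9, 5, -1, -1]
```

The only subtlety is bookkeeping: the algorithm's output variables have to be matched to the right frequency indices, which is what the reordering in the return statement does.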
Would we be able to hear the sun if space were full of air? | Question: I was wondering if the sun could be audible from earth in an air-filled space scenario.
We can ignore all the other disastrous consequences!
Thanks!
Answer: Let me give a more detailed back-of-the-envelope approximation, which might actually be able to decide, given the conditions of the problem, if we would be able to hear the sound of Sun.
Assumptions:
The space between Earth and Sun is filled with uniform air. This is a non-physical assumption. It basically means we are ignoring the gravitational effects of both Sun and Earth; but then one should ask what keeps Sun and Earth from exploding into space. Anyway, the question doesn't make much sense without this assumption. So, the space is filled with air at $1\,\mathrm{atm}$ pressure (ask the OP how :)
According to Wikipedia:
For comparison purposes, the minimum level of a pure tone at 1000 Hz has been standardized at a sound pressure of 20 micropascals. It is approximately the quietest sound a young healthy human can detect.
Looking at some typical minimum audibility curves, one sees that the above standard limit is actually really close to the global minimum (dB is a logarithmic scale of intensity itself, so I'm being a little bit sloppy here. But at the end of my calculations, I hope a small constant coefficient won't matter much).
Sound intensity $\propto$ (amplitude)$^2 \propto (\Delta P)^2$
Some amount of spherical symmetry.
Calculations:
$$I_{\odot} 4 \pi R_{\odot}^2=I_{\oplus} 4 \pi r_{\oplus}^2 \\ \Delta P _{\odot} \approx \left( \frac{r_{\oplus}}{R_{\odot}} \right)\Delta P _{\oplus} \approx \frac{1500}{7}\times 20\,\mu\mathrm{Pa}\approx 4.3\,\mathrm{mPa}$$
I believe Sun is capable of creating pressure differences much higher than this, so one expects to hear Sun's noise loud and clear. One should be able to estimate typical pressure differences by looking at solar wind data.
But how loud is the Sun?
In order to estimate how loud the Sun is, we would have to concentrate on different things that might happen on the Sun. A typical aspect which might be interesting is solar flares. Let's try and estimate the sound of a solar flare on Earth. According to Wikipedia:
A solar flare is a sudden brightening observed over the Sun's surface or the solar limb, which is interpreted as a large energy release of up to $6 \times 10^{25}$ joules of energy (about a sixth of the total energy output of the Sun each second or 160,000,000,000 megatons of TNT equivalent, over 25,000 times more energy than released from the impact of Comet Shoemaker–Levy 9 with Jupiter).
So the intensity of a solar flare sound on Earth can be approximated with $1/6$ of the solar constant $\approx 227\,\mathrm{W/m^2}$. I have no idea how it will actually sound, but believe me that is well above a generic human's minimum hearing capability. | {
"domain": "physics.stackexchange",
"id": 62532,
"tags": "acoustics, space, sun, air"
} |
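The pressure estimate above can be redone numerically (the solar radius and Earth-Sun distance below are standard values assumed from memory, not given in the original post):

```python
# Numerical version of the back-of-the-envelope estimate.
R_sun = 6.96e8        # m, solar radius (assumed standard value)
r_earth = 1.496e11    # m, Earth-Sun distance, 1 AU (assumed standard value)
threshold = 20e-6     # Pa, standard hearing threshold at 1 kHz

# Energy conservation over spheres: I * 4*pi*r^2 = const, and I ~ (dP)^2,
# so the pressure amplitude falls off as 1/r.
dP_sun = (r_earth / R_sun) * threshold
print(f"pressure amplitude needed at the Sun's surface: {dP_sun:.1e} Pa")
```

A few millipascals at the solar surface is all that spherical spreading alone would demand for threshold audibility at Earth, which is why the answer concludes the Sun would be clearly audible.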
Folding of bulliform cells | Question: How do bulliform cells cause a leaf to fold in half when the leaf loses water? Also, how would these bulliform cells be arranged to cause the leaf to instead curl up?
Answer: Curling up is due to turgor pressure. Quoting from wikipedia
Loss of turgor pressure in these cells causes leaves to "roll up" during water stress
More scientific references: on morphology, on mechanism, on development, some genetics, and on physics. | {
"domain": "biology.stackexchange",
"id": 4530,
"tags": "botany, plant-physiology"
} |
How dangerous is taking a bath in Coca Cola? | Question: What would the effects be on a body immersed in Coca-Cola for a while? A friend commented to me that Coke is quite acidic (pH 2.5 if I recall properly), and we were guessing what the effects on the body could be after a time of immersion.
I read 20 Practical Uses for Coca Cola – Proof That Coke Does Not Belong In The Human Body and I see some things like:
E338 – Orthophosphoric Acid. This can cause irritation of the skin and
eyes.
But I am guessing: what are the problems of long contact of Coke with the body - not drinking it, but being in contact with it.
Answer: Cola and soft drink products are generally in the pH=2.5-3.5 range.
This is the same pH range as vinegar, lemon juice and most fruit juices. Note that apples themselves are around 3.0-3.5, so it is not some magical trick of the food industry to kill us all.
Does Cola dissolve your body traceless? No.
Do you want to soak your body in it for hours? Would you soak your body in vinegar for hours? Or even water... | {
"domain": "chemistry.stackexchange",
"id": 1641,
"tags": "everyday-chemistry, safety"
} |
Do bras and kets have dimensions? | Question: I'm trying to understand more intuitively what bras and kets are, but some aspects of them remain a mystery to me.
We usually think of $\psi (x)$ as having dimension of $[1/\sqrt{L}]$ so that squaring it and multiplying it with a distance differential would result in a dimensionless quantity. An example of this is:
$$\int_{-\infty}^{\infty} \mid\psi(x)\mid^2 dx= 1$$ for normalized wavefunctions.
I also know it is possible to write a ket in position basis as:
$$\mid \psi \rangle = \int_{-\infty}^{\infty} \psi(x) \mid x \rangle dx$$
I would like to believe that $ \mid \psi \rangle$ has no units; it can be represented in the position or momentum basis, so it having units doesn't make a whole lot of sense, but this leads me to the conclusion that $\mid x \rangle$ must have the same units as the wavefunction in order to cancel the length units of $dx$!
Is this correct? If so, what is its physical interpretation? do position and momentum kets have units after all?
Answer: This is a very interesting question. I don't know if there is a general and definitive answer, but I'll try to make some comments. I apologize if this ends up rambling; I'm finding this out as I write this answer.
Operators have dimensions, since their eigenvalues are physical quantities. For bras and kets it gets more complicated. First, you cannot in general say that they are dimensionless. To see why, consider a state with a certain position $|x\rangle$. Since $\langle x | x' \rangle = \delta(x-x')$ and the Dirac delta has the inverse dimension of its argument, it must be that $[ \langle x | ] \times [ | x \rangle ] = 1/L$. A similar relationship holds for momentum eigenstates. Of course, there are higher powers of $L$ in higher dimensions.
However, consider an operator with discrete spectrum, such as the energy in an atom or something like that. Then the appropriate equation is $\langle m | n \rangle = \delta_{mn}$, and since this delta is dimensionless, bras and kets must have inverse dimensions. This gets even weirder when you consider that the Hamiltonian for a hydrogen atom has both discrete and continuous eigenvalues, so the relationship between the bras and the kets' dimensions will be different depending on the energy (or whatever physical quantity is appropriate).
We have the equation $\langle x | p \rangle = \frac1{\sqrt{2\pi\hbar}} \exp(ipx/\hbar)$. I at first thought that this combined with $[\langle x |] \times [| p \rangle ] = [\langle p |] \times [| x \rangle ]$ would allow us to find the dimensions of $|x\rangle$ (and everything else), but it turns out that the normalization conditions of $|x\rangle$ and $|p\rangle$ force the dimensions of $\langle x | p \rangle$ to come out right. We can find that $[|p\rangle] = \sqrt{T/M} [|x \rangle]$, but we can't go any further. Similar relationships will apply for the eigenstates of your favourite operator.
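As a quick consistency check of that last claim (a check of my own, writing $[\hbar] = ML^2/T$ and using the relations $[\langle x |][| x \rangle] = 1/L$ and $[|p\rangle] = \sqrt{T/M}\,[|x\rangle]$ derived above):

```latex
[\langle x | p \rangle]
  = \left[\frac{1}{\sqrt{2\pi\hbar}}\right]
  = \sqrt{\frac{T}{ML^2}}
  = \frac{1}{L}\sqrt{\frac{T}{M}},
\qquad
[\langle x |]\,[| p \rangle]
  = [\langle x |]\,[| x \rangle]\,\sqrt{\frac{T}{M}}
  = \frac{1}{L}\sqrt{\frac{T}{M}}.
```

The two sides carry the same dimensions no matter which dimension is assigned to $|x\rangle$ itself, which is exactly why this equation cannot pin the dimensions down any further.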
Any given ket is a linear combination of eigenkets, but again there are subtleties depending on whether the spectrum is discrete or continuous. Suppose we have two observables $O_1$ and $O_2$ with discrete spectrum and eigenstates $|n\rangle_1$ and $|n\rangle_2$. Any state $|\psi\rangle$ can be expressed as a dimensionless linear combination of the eigenstates (dimensionless because, since $\langle n | n \rangle = 1$, the squares of the coefficients make up probabilities): $|\psi\rangle = \sum_n a_n |n\rangle_1 = \sum_n b_n |n\rangle_2$. This implies that the eigenkets of all observables with discrete spectrum have the same dimensions, and likewise for the eigenbras.
It gets trickier for observables with continuous spectrum such as $x$ and $p$, because of the integration measure. We have $|\psi\rangle = \int f(x) |x\rangle\ dx = \int g(p) |p\rangle\ dp$. $\langle \psi | \psi \rangle = 1$ implies $\int |f(x)|^2\ dx = 1$, so that $[f] = 1/\sqrt{L}$ and likewise $[g] = \sqrt{T/ML}$. This should be no surprise since $f$ and $g$ are Fourier transforms of each other, with an $1/\sqrt{\hbar}$ thrown in. From this we can deduce $[|p\rangle] = \sqrt{T/M} [|x \rangle]$, which we already knew, and $\sqrt{L} [|x \rangle] = [|n \rangle]$.
The conclusion seems to be the following. All eigenkets with discrete eigenvalues must have the same dimensions, but it looks like that dimension is arbitrary (so you could take them to be dimensionless). Furthermore, normalized states have that same dimension. Eigenstates with continuous spectrum are more complicated; if we have an observable $A$ (with continuous eigenvalues) with eigenvalues $a$, then we can use the fact that $|\psi\rangle$ can be written either as an integral over eigenstates of $A$ or as a sum over discrete eigenstates to find that $\sqrt{[a]} [|a\rangle] = [|n\rangle]$, where $|n\rangle$ is some discrete eigenket. So once you fix the dimensions of one ket, you fix the dimensions of every other ket. | {
"domain": "physics.stackexchange",
"id": 76168,
"tags": "quantum-mechanics, wavefunction, units, conventions, dimensional-analysis"
} |
Writing a new motion planner | Question:
I am writing my own planner, called SplineBasedPlanner. It takes in a set of waypoints and interpolates them with a special kind of spline (not the ones already implemented by ROS). The spline is designed so that a vehicle, whose parameters are specified in config files, can traverse it smoothly. I want this to plug into move_base.
However, I am confused about what kind of interface (i.e. API) I should provide to do so. Should SplineBasedPlanner resemble base_local_planner's interface? Or should it be more like TrajectoryPlanner? Are there multiple choices?
I'll be glad to provide more specific information if needed.
Update 1: From the responses below, I really want both local and global versions of the SplineBasedPlanner.
Originally posted by PKG on ROS Answers with karma: 365 on 2012-03-12
Post score: 1
Original comments
Comment by prince on 2012-03-13:
Is this planner targeted towards generating trajectories from one waypoint to next waypoint?? As DimitriProsser explained below, it is not clear.
Answer:
The base_global_planner is responsible for drawing paths to the goal and around obstacles, while the base_local_planner is responsible for calculating trajectories and issuing cmd_vel messages in order to achieve the plan drawn by the base_global_planner.
Based on what you've written, it's not entirely clear what category your planner would fall into. I'm going to guess and say that you want to adhere to the base_local_planner interface (TrajectoryPlannerROS), but you'd know better than me. Which of these two do you think it fits better?
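For reference, here is a rough pseudocode sketch of the plugin interface a move_base local planner exposes (method names follow the nav_core package as I recall it; treat the signatures as approximate, not authoritative):

```
class SplineBasedPlanner : public nav_core::BaseLocalPlanner {
    // called once by move_base, with access to the local costmap
    initialize(name, tf, costmap_ros)

    // receives the global plan's waypoints -- fit your spline here
    setPlan(global_plan) -> bool

    // called at the controller rate; sample the spline into a cmd_vel
    computeVelocityCommands(cmd_vel) -> bool

    isGoalReached() -> bool
}
```

A global version would instead implement nav_core::BaseGlobalPlanner, whose core method is makePlan(start, goal, plan).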
Originally posted by DimitriProsser with karma: 11163 on 2012-03-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8564,
"tags": "navigation, base-local-planner"
} |
How to split data into 3 parts in Python - training(70%), validation(15%) and test(15%) and each part have similar target rate? | Question: I'm working on a company project which I will need to do data partition into 3 parts - Train, Validation, and Test(holdout).
Does anyone know how I can split the data into the 3 parts above so that each part has a similar response variable (target rate), i.e. similar class proportions for classification and a similar mean of the response for regression?
I know how to split data into 3 parts by using train_test_split function from SKLEARN
from sklearn.model_selection import train_test_split
x, x_test, y, y_test = train_test_split(xtrain,labels,test_size=0.2,train_size=0.8)
x_train, x_cv, y_train, y_cv = train_test_split(x,y,test_size = 0.25,train_size =0.75)
But this does not give a similar target rate. Can someone help me?
Answer: For classification you can use the stratify parameter:
stratify: array-like or None (default=None)
If not None, data is split in a stratified fashion, using this as the class labels.
See sklearn.model_selection.train_test_split. For example:
x, x_test, y, y_test = train_test_split(xtrain,labels,test_size=0.2, stratify=labels)
This will ensure the class distribution is similar between train and test data.
(side note: I have tossed the train_size parameter since it will be automatically determined based on test_size)
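To get the 70/15/15 split the question asks for, you can simply chain two stratified calls: first `train_test_split(x, y, test_size=0.3, stratify=y)`, then split the held-out 30% in half, again passing `stratify=` with its labels. Below is a stdlib-only sketch of the same per-class idea, useful for seeing what stratification does under the hood (the function name and toy labels are mine):

```python
import random
from collections import defaultdict

def stratified_split_indices(y, val_frac=0.15, test_frac=0.15, seed=0):
    """Return (train, val, test) index lists that each keep y's class ratio.

    Sketch of what two chained train_test_split(..., stratify=...) calls
    achieve: the fractions are taken per class, so every part ends up with
    a similar target rate.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    train, val, test = [], [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        n_test = round(len(idx) * test_frac)
        n_val = round(len(idx) * val_frac)
        test.extend(idx[:n_test])
        val.extend(idx[n_test:n_test + n_val])
        train.extend(idx[n_test + n_val:])
    return train, val, test

# toy labels: 80% class 0, 20% class 1 -> each part stays ~80/20
y = [0] * 80 + [1] * 20
train, val, test = stratified_split_indices(y)
```

Each returned part then contains roughly the same fraction of positives as the full dataset.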
For regression there is, to my knowledge, no current implementation in scikit learn. But you can find a discussion and manual implementation here and here with regards to cross-validation. | {
"domain": "datascience.stackexchange",
"id": 6881,
"tags": "machine-learning, python, scikit-learn, overfitting"
} |
Wrapper for jquery ajax to ensure redirects occur on clientside | Question: Recently I had the need to make some ajax calls within an MVC 3.0 application using Razor and jQuery. After a bit of trial and error and refactoring, we discovered a number of different needs. These included:
The serverside code would be responsible for creating any redirect urls and we wanted to stick with the inherited controller RedirectToAction() method where possible in our controller actions.
We are using jQuery unobtrusive validation, and some situations meant that new forms were rendered onto the view via ajax. That meant we needed to ensure those new form fields were hooked up for validation and validated when required.
We wanted the flexibility to return html (PartialViews), json or tell the client side it was a redirect and the URL to redirect to (initiated via 1 above).
Before page redirects, at times we wanted to show div popups such as a success confirmation. Hence, although we were using RedirectToAction() in the backend, we wanted the javascript to actually perform the redirect.
With these in place we did a bit of serverside code that ensured we hijacked the RedirectToAction() result and returned a json object instead. I'm fairly happy that code works. What I'm not sure about is the javascript code that I use to perform the result.
Some notes:
The $.validator.unobtrusive.parseDynamicContent method will ensure the new html elements are rebound for validation. I did not write that code so have not added it for review.
For options 1, 2 and 3 above we didn't want the person writing the ajax call to always have to worry about these, so essentially we wanted them to just be handled, so to speak.
I'm definitely pretty raw in the Javascript department so any comments or improvements on the code below would be greatly appreciated.
$.ajaxWithRedirect = function (options) {
// If dataType wasn't specified in the options, default to 'html'
var dataType = (options.dataType !== undefined) ? options.dataType : 'html';
// jQuery AJAX object
$.ajax({
// Normal properties
type: options.type,
url: options.url,
data: options.data,
dataType: dataType,
cache: false,
// Global beforeSend wrapper with user defined function
beforeSend: function () {
// Execute user defined method
if (typeof options.beforeSend === 'function') {
options.beforeSend();
}
},
// Global success wrapper which will redirect if url specified is a json object
// with the RedirectUrl tag
success: function (data) {
var jData;
var redirected = false;
try {
if (data) {
jData = $.parseJSON(data);
if (jData && (typeof jData.RedirectUrl !== 'undefined' && jData != null && jData.RedirectUrl.length > 0)) {
var performRedirect = true;
if (typeof options.beforeRedirect !== 'undefined') {
// beforeRedirect returns true if the redirect is still to occur otherwise false
performRedirect = options.beforeRedirect(jData);
}
if (performRedirect) {
redirected = true;
// Using replace based off SO - http://stackoverflow.com/questions/503093/how-can-i-make-a-redirect-page-in-jquery-javascript
window.location.replace(jData.RedirectUrl);
}
}
}
} catch (e) {
// not json
}
// Execute user defined method
if (!redirected) {
if ((options.success && typeof options.success === 'function')) {
options.success(data);
}
// always done after the success to ensure any dynamic elements have been loaded
if (typeof options.dynamicValidation != 'undefined') {
$.validator.unobtrusive.parseDynamicContent(options.dynamicValidation);
}
}
},
error: function (xhr, status, err) {
if (typeof options.error === 'function') {
options.error(xhr, status, err);
}
if (xhr.status == 400) {
alert(err);
}
}
});
};
Example of usage is (Methods such as addUploadTableSpinner just included for examples):
$.ajaxWithRedirect({
beforeSend: addUploadTableSpinner(),
type: "POST",
url: '/MyUrl/Delete' + '?sessionId=' + sessionId,
dataType: "html",
beforeRedirect: function (data) {
closePopup();
// show the user success message
showDeleteSuccessfulPopup(data.RedirectUrl);
showSuccessfulPopup = false;
// don't do the redirect
return false;
},
success: function (data) {
if (data.length == 0) {
deleteRow.remove();
} else if (showSuccessfulPopup) {
deleteRow.html(data);
closePopup();
showDeleteSuccessfulPopup();
}
removeUploadTableSpinner();
}
});
Answer: It seems to me that you're duplicating some of what $.ajax() will do for you (such as calling beforeSend) and you're also giving up the deferred promise methods (then, done, fail, etc.) that jQuery's XHR object provides.
Here's a version that focusses solely on the redirection (i.e. it doesn't worry about the data type, the validation stuff, or doing its own error handling). It's just a transparent layer on top of the normal $.ajax() function.
The code's below, and here's a demo
$.ajaxWithRedirect = function (options) {
"use strict";
var deferred = $.Deferred(),
successHandler,
xhr;
// force no-cache
options.cache = false;
// get a copy of the success handler...
successHandler = options.success;
// ... and replace it with this one
options.success = function (data) {
var contentType = xhr.getResponseHeader("Content-Type"),
performRedirect = true,
redirectUrl = null,
args = [].slice.call(arguments, 0),
json;
// If response isn't there or isn't json,
// skip all the redirect logic
if( data && (/json/i).test(contentType) ) {
// If json was requested, and json received, jQuery will
// have parsed it already. Otherwise, we'll have to do it
if( options.dataType === 'json' ) {
redirectUrl = data.RedirectUrl;
} else {
try {
json = $.parseJSON(data);
redirectUrl = json.RedirectUrl;
} catch(e) {
// no-op
}
}
// check the redirect url
if( redirectUrl && typeof redirectUrl === 'string') {
// Is there a beforeRedirect handler?
if( typeof options.beforeRedirect === 'function' ) {
// pass all the arguments to the beforeRedirect handler
performRedirect = options.beforeRedirect.apply(null, args);
}
// unless strictly false, go ahead with the redirect
if( performRedirect !== false ) {
location.replace(redirectUrl);
// and stop here. No success and/or deferred handlers
// will be called since we're redirecting anyway
return;
}
}
}
// no redirect; forward everything to the success handler(s)
if( typeof successHandler === 'function' ) {
successHandler.apply(null, args);
}
deferred.resolve.apply(null, args);
};
// Make the request
xhr = $.ajax(options);
// Forward the deferred promise method(s)
xhr.fail(deferred.reject);
xhr.progress(deferred.notify);
// Replace the ones already on the xhr obj
deferred.promise(xhr);
return xhr;
};
Point is that you should be able to use $.ajaxWithRedirect exactly like you'd use vanilla $.ajax, including all the shiny deferred promise stuff. The only thing it does differently from the normal version is set options.cache = false, and that it has a beforeRedirect callback. | {
"domain": "codereview.stackexchange",
"id": 2586,
"tags": "javascript, jquery, asp.net-mvc-3"
} |
Method of images | Question:
We know this classic problem, shown in the figure, where an infinite conducting sheet has a charge q in front of it at a distance d.
We know how to solve this using the method of images for region A, where the charge q is.
but my doubt is
will the same procedure work for region B?
My intuition is that the electric field is zero in region B, as the infinite conducting sheet will shield the effect of charge q.
Is there any way to justify my guess using boundary conditions?
thanks.
Answer: Your intuition is right. The justification is that the field is zero inside the conductor, and since we can see that the sheet is grounded, the electric potential is zero in the sheet. Thus, if we require continuity of the potential, we get zero field in region B.
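Explicitly, taking the grounded sheet as the plane $z = 0$ and the charge $q$ at $(0, 0, d)$ (the coordinates are my choice), the two regions are solved separately:

```latex
% Region A (z > 0): real charge q at z = d plus image charge -q at z = -d
V_A(x,y,z) = \frac{q}{4\pi\varepsilon_0}\left(
  \frac{1}{\sqrt{x^2+y^2+(z-d)^2}}
  - \frac{1}{\sqrt{x^2+y^2+(z+d)^2}}\right)

% Region B (z < 0): no charge, V = 0 on the sheet and at infinity,
% so by the uniqueness theorem for Laplace's equation
V_B = 0 \quad\Longrightarrow\quad \mathbf{E}_B = -\nabla V_B = \mathbf{0}
```

Both pieces satisfy $V = 0$ on the sheet, and the uniqueness theorem guarantees that these are the solutions in their respective regions.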
We do not use the method of images to solve for region B. I mean, we could, but since there is no charge in region B, the method of images would yield zero field there. | {
"domain": "physics.stackexchange",
"id": 90550,
"tags": "electrostatics, method-of-images"
} |
Eigenvalue Equation : $ A |\psi\rangle = 0 |\phi\rangle$? | Question: A typical eigenvalue equation goes like: $ A |\psi\rangle = e\: |\psi\rangle$, where $|\psi\rangle$ is an eigenstate for operator $A$ with eigenvalue $e$.
Suppose that $e=0$ in the above equation, then we say that $|\psi\rangle$ is an eigenstate with eigenvalue $0$. Now, I encountered this problem wherein my calculations gave me an equation like: $ A |\psi\rangle = 0\: |\phi\rangle$.
I'm being told that even in this case $|\psi\rangle$ is an eigenstate with eigenvalue $0$. I cannot convince myself that it is true. My rationale is as follows:
We must get the same state on the other side of the equation, otherwise it's not even an eigenvalue equation!
If I do a physical measurement on a state and I get zero "eigenvalue" (actually, measurement value) corresponding to some operator (which is doing the measurement) with the state collapsing to a different state then it's not actually the "measurement". I know it's confusing but perhaps someone can understand.
Consider a scaled harmonic oscillator such that the ground state energy is 0. Now, \begin{align} H &= \hat{n}\ \hbar \omega \\ H |0\rangle &= \hat{n}\ \hbar \omega |0\rangle = \vec{0} \ne 0\\
a |0\rangle &= 0 \ne \vec{0}
\end{align}
This, to me, clearly says that $ A |\psi\rangle = 0\: |\phi\rangle$ is not an eigenvalue equation.
Can someone help me see the above?
Answer: This is a good question (or at least I think so because I struggled with the same question when I first studied quantum mechanics).
Notational Clarity
First of all, let's establish some notation. I will call the vector-space $\mathbb{V}$ and I will assume that it is finite-dimensional. Nothing of relevance hinges upon its finiteness and it'll save me some time. I will denote vectors in the vector-space as $\vert v\rangle$. Now, there are three different $0$s that you are playing with.
The scalar $0\in\mathbb{R}\subset\mathbb{C}$. In particular, this is the null element of the field over which the vector space is defined.
The null vector $\vert\Phi\rangle\in\mathbb{V}$ which is the null element of the vector-space itself, i.e., $\vert{\Phi}\rangle+\vert{v}\rangle=\vert{v}\rangle,\forall\vert{v}\rangle\in\mathbb{V}$.
The ground-state $\vert0\rangle$ of the Hamiltonian such that $\hat{H}\vert 0\rangle=E_{0}\vert0\rangle$ where $E_0\in\mathbb{R}$ is the ground-state energy of the Hamiltonian. If $E_0 = 0$ then $\hat{H}\vert0\rangle=0\vert0\rangle=\vert\Phi\rangle\neq 0$.
Notice here that multiplying a vector by a scalar gives you a vector, not a scalar. In particular, if you multiply a vector with the null element of the field over which the vector-space is defined then you get the null element of the vector-space, not the null element of the field (i.e., not the scalar zero).
I don't think this is something you are particularly confused about but in the interest of a broader audience, it should also be noted that $\vert 0\rangle\neq\vert\Phi\rangle$. The quickest way of seeing this is to note that $\langle 0\vert 0\rangle=1 $ and $\langle \Phi\vert\Phi\rangle=0$.
Onto Your Questions
We must get the same state on the other side of the equation, otherwise, it's not even an eigenvalue equation!
Mathematically, there is nothing too confusing here. As already pointed out in the comments, $0\vert v\rangle=\vert\Phi\rangle,\forall\vert v\rangle\in\mathbb{V}$. And thus, clearly, if $A\vert\psi\rangle=\vert \Phi\rangle$ the eigenvalue equation is satisfied by $\vert\psi\rangle$ with the eigenvalue being $0$.
Look at it this way if it helps: The eigenvalue equation says that if the two quantities $A\vert\psi\rangle$ and $e\vert\psi\rangle$ are equal then $\vert\psi\rangle$ is an eigenstate of $A$ with the eigenvalue $e$. OK, so, let's check if this criterion is satisfied for an eigenstate with zero eigenvalue: The first quantity is $A\vert\psi\rangle=\vert\Phi\rangle$ and the second quantity is $0\vert\psi\rangle = \vert \Phi\rangle$. Voila! The criterion is obviously satisfied.
Now, it is completely irrelevant that $0\vert \lambda\rangle=\vert \Phi\rangle$ for $\vert\lambda\rangle\neq\vert\psi\rangle$. The eigenvalue equation does not say that there cannot exist a $\vert\lambda\rangle\in\mathbb{V}$ such that $A\vert\psi\rangle=e\vert\lambda\rangle$ for $\vert\psi\rangle$ to be an eigenstate of $A$ with the eigenvalue $e$.
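The scalar-zero versus null-vector distinction can also be checked numerically on a toy finite-dimensional example (the matrix below is an arbitrary choice of mine with a nontrivial null space):

```python
# Toy 2x2 operator A with psi = (1, 1) in its null space: A psi is the
# zero *vector* (the null element |Phi>), not the scalar 0, so psi is
# an eigenvector of A with eigenvalue 0.
A = [[1, -1], [1, -1]]
psi = [1, 1]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

out = matvec(A, psi)                # [0, 0]: the null vector, not a scalar
zero_times_psi = [0 * c for c in psi]
assert out == zero_times_psi        # A|psi> = 0|psi> = |Phi>: the eigenvalue
                                    # equation holds with eigenvalue 0
```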
If I do a physical measurement on a state and I get zero "eigenvalue" (actually, measurement value) corresponding to some operator (which is doing the measurement) with the state collapsing to a different state then it's not actually the "measurement". I know it's confusing but perhaps someone can understand.
No, you're misunderstanding one of the two things here:
If $\vert\psi\rangle$ is an eigenstate of $A$ with the eigenvalue $0$ then it is true that $A\vert\psi\rangle=0\vert\lambda\rangle$ for $\vert\lambda\rangle\neq\vert\psi\rangle$. BUT, this does not make $\vert\lambda\rangle$ an eigenstate of $A$. For $\vert\lambda\rangle$ to be an eigenstate of $A$, it would have to satisfy the eigenvalue equation $A\vert\lambda\rangle=0\vert\lambda\rangle$, not $A\vert\psi\rangle=0\vert\lambda\rangle$. So, there is no reason the post-measurement state of a measurement that yielded an eigenvalue $0$ for $A$ would be anything other than $\vert\psi\rangle$ because $\vert\psi\rangle$ is the eigenstate.
Another thing to keep in mind is that the eigenvalue equation is not to be interpreted in the following way in quantum mechanics: you measure $A$ on a state $\vert\psi\rangle$ and that corresponds to the mathematical action $A\vert\psi\rangle$. And then, the output of what you get is whatever comes out on the righthand side of the mathematical expression when you write down $A\vert\psi\rangle=...$. This is not what quantum mechanics says. I can imagine that if one thinks this way then they might be confused as to what would happen in the case of zero eigenvalue because you can write down the right-hand side in many different ways.
Consider a scaled harmonic oscillator such that the ground state energy is 0. Now, \begin{align} H &= \hat{n}\ \hbar \omega \\ H |0\rangle &= \hat{n}\ \hbar \omega |0\rangle = \vec{0} \ne 0\\
a |0\rangle &= 0 \ne \vec{0}
\end{align}
This, to me, clearly says that $ A |\psi\rangle = 0\ |\phi\rangle$ is not an eigenvalue equation.
I am not sure how this tells you what you think it tells you but you are making a mistake of confusing the scalar and the vector zeroes. In particular, the correct version is $\hat{H}\vert 0\rangle = 0 \vert 0\rangle = \vert\Phi\rangle $ and $a\vert 0\rangle = \vert \Phi\rangle$. In your notation, my $\vert\Phi\rangle$ is $\vec{0}$. Hope this helps. | {
"domain": "physics.stackexchange",
"id": 85933,
"tags": "hilbert-space, operators, harmonic-oscillator, linear-algebra, eigenvalue"
} |
How do we express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$? | Question: The task (exercise 3.13 in the RL book by Sutton and Barto) is to express $q_\pi(s,a)$ as a function of $p(s',r|s,a)$ and $v_\pi(s)$.
$q_\pi(s,a)$ is the action-value function, that states how good it is to be at some state $s$ in the Markov Decision Process (MDP), if at that state, we choose an action $a$, and after that action, the policy $\pi(s,a)$ determines future actions.
Say that we are at some state $s$, and we choose an action $a$. The probability of landing at some other state $s'$ is determined by $p(s',r|s,a)$. Each new state $s'$ then has a state-value function that determines how good it is to be at $s'$ if all future actions are given by the policy $\pi(s',a)$, therefore:
$$q_\pi(s,a) = \sum_{s' \in S} p(s',r|s,a) v_\pi(s')$$
Is this correct?
Answer: Not quite. You are missing the reward at time step $t+1$.
The definition you are looking for is (leaving out the $\pi$ subscripts for ease of notation)
$$q(s,a) = \mathbb{E}[R_{t+1} + \gamma v(s') | S_t=s,A_t=a] = \sum_{r,s'}(r +\gamma v(s'))p(s',r|s,a)\;.$$
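On a toy tabular model this sum is direct to compute (the two-state MDP and the values below are entirely made-up, just to illustrate the formula):

```python
# Hypothetical model: p maps (s, a) to a list of (probability, s', r)
# triples, and v holds the state values v_pi(s').
gamma = 0.9
p = {
    ("s0", "go"):   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    ("s0", "stay"): [(1.0, "s0", 0.0)],
}
v = {"s0": 2.0, "s1": 5.0}

def q(s, a):
    """q_pi(s,a) = sum over (s', r) of p(s', r | s, a) * (r + gamma * v_pi(s'))."""
    return sum(prob * (r + gamma * v[s2]) for prob, s2, r in p[(s, a)])

print(q("s0", "go"))  # 0.8*(1.0 + 0.9*5.0) + 0.2*(0.0 + 0.9*2.0) = 4.76
```

Dropping the reward term, as in the question's formula, would instead give 0.8*4.5 + 0.2*1.8 here, which is why the two expressions differ.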
Because $q(s,a)$ relates to expected returns at time $t$, and returns are defined as $G_t = \sum_{b = 0}^\infty \gamma ^b R_{t+b+1}$, thus $R_{t+1}$ is also a random variable at time $t$ that we need to take expectation with respect to, not just the state that we transition into. | {
"domain": "ai.stackexchange",
"id": 2058,
"tags": "reinforcement-learning, definitions, value-functions, sutton-barto"
} |
Displaying JavaScript object's structure on page with HTML | Question: I wanted to reproduce the type of navigation system found in the console of Firefox and Chrome: you can explore an object's properties by unfolding boxes:
So I'm looking to:
display all of the object's properties (even if they're nested)
be able to fold/unfold the properties if they are themselves objects
Element handling
I have written two functions createElement and appendElement, I will need them later:
/**
* Creates an HTML element without rendering in the DOM
* @params {String} htmlString is the HTML string that is created
* @return {HTMLElement} is the actual object of type HTMLElement
*/
const createElement = htmlString => {
const div = document.createElement('div');
div.innerHTML = htmlString;
return div.firstChild;
}
/**
* Appends the given element to another element already rendered in the DOM
* @params {String or HTMLElement} parent is either a CSS String selector or a DOM element
* {String or HTMLElement} element is either a HTML String or an HTMLElement
* @return {HTMLElement} the appended child element
*/
const appendElement = (parent, element) => {
element = element instanceof HTMLElement ? element : createElement(element);
return (parent instanceof HTMLElement ? parent : document.querySelector(parent))
.appendChild(element);
}
Version 1
My initial attempt was to use a recursive approach. Essentially each level would call the function for each of its children which would, in turn, call the function for their own children.
In the end, it would result in having the complete tree displayed on the page.
const showContent = (object, parent) => {
Object.keys(object).forEach(e => {
if(object[e] && object[e].constructor == Object) {
showContent(object[e], appendElement(parent, `<div class="level fold"><div class="parent"><span>-</span> ${e}<div></div>`));
} else {
appendElement(parent, `<div class="level">${e}: <span>${object[e]}</span></div>`)
}
});
}
// display the object's property
showContent(obj, 'body');
// toggle function to fold/unfold the properties
const toggle = () => event.target.parentElement.classList.toggle('fold');
// listen to click event on each element
document.querySelectorAll('.parent span').forEach(e => e.parentElement.addEventListener('click', toggle));
Here are a few things I didn't like about this code:
there's an event listener on each property element; also, the listeners need to be set after the elements have been added to the DOM, so calling showContent before wiring up the event handlers doesn't feel natural.
this version doesn't support circular structures. For example:
let obj = {};
obj['obj'] = obj;
showContent(obj);
will fail...
So this won't work for me. I need something able to handle cycles without too much trouble, and that would not require adding an event listener each time a new property is unfolded.
Version 2
I came up with a better version that solved all these problems:
/**
* Shows all the object's properties with a depth of 1
* @params {Object} object, its first level properties are shown
* {String or HTMLElement} parent the element in which are displayed the properties
* @return {undefined}
*/
const showObject = (object, parent='body') => {
Object.entries(object).forEach(([key, value]) => {
if(value && value.constructor == Object) {
const element = appendElement(parent, `<div class="level fold"><div class="parent"><span>-</span> ${key}<div></div>`);
element.addEventListener('click', () => {showObject(value, element)}, {once: true});
} else {
appendElement(parent, `<div class="level">${key}: <span>${value}</span></div>`);
}
});
};
/**
* Toggles the CSS class .fold on the provided element
* @params {HTMLElement} element on which to toggle the class
* @return {undefined}
*/
const fold = element => element.classList.toggle('fold');
/**
* Document click listener
*/
document.addEventListener('click', function() {
const target = event.target;
const isFoldable = target.classList.contains('parent');
if(isFoldable) {
fold(target.parentElement);
}
});
Here it is seen working with a cycle:
let obj = {};
obj['obj'] = obj;
showObject(obj);
Questions
What do you think?
What could be improved (structure, naming, comments, programming style)?
Any advice?
Answer: Very neat project! Your code already looks pretty clean to me. I especially like the optimization of only running showObject once for each expansion.
Bug: If my object is { a: '<b>Bold</b>' }, the text displayed for property a will be Bold, not <b>Bold</b>. This is a great example of why building HTML with strings is a bad idea. The <template> element can be used to define an HTML structure and replicate it multiple times.
Missing implementation: The Chrome devtools handle Sets, Maps, and other data structures in a very nice way, making it possible to peek inside them. Your implementation will just display them as strings (e.g. [object Set]). If you are looking to extend this code, I'd recommend handling Sets, Maps, and Dates.
You might want to consider using the <details> element instead of a custom foldable structure.
In general, I avoid allowing multiple types of parameters. Instead of taking a string or an element for showObject's parent, I'd only take an element and change the default argument to document.body.
Enhancement: Getters should not be evaluated until explicitly requested. This lets the user avoid evaluating getters which mutate the state (yes, unfortunately some people do this...). You can check if a key is a getter with Reflect.getOwnPropertyDescriptor()
Demo (open inner, obj, inner):
let obj = {
inner: {
i: 0,
get x() { return obj.inner.i++ }
}
};
obj.obj = obj;
showObject(obj);
body>.level{position:relative;left:25%;width:50%}.level{background:lightgrey;border:2px solid brown;padding:4px;margin:4px;overflow:hidden}.level>span{color:#0366d5}.level>.parent{color:green;display:inline-block;width:100%}.fold{height:18px}
<script>const createElement=htmlString=>{const div=document.createElement('div');div.innerHTML=htmlString;return div.firstChild};const appendElement=(parent,element)=>{element=element instanceof HTMLElement?element:createElement(element);return(parent instanceof HTMLElement?parent:document.querySelector(parent)).appendChild(element)};const showObject=(object,parent='body')=>{Object.entries(object).forEach(([key,value])=>{if(value&&value.constructor==Object){const element=appendElement(parent,`<div class="level fold"><div class="parent"><span>-</span> ${key}<div></div>`);element.addEventListener('click',()=>{showObject(value,element)},{once:!0})}else{appendElement(parent,`<div class="level">${key}: <span>${value}</span></div>`)}})};const fold=element=>element.classList.toggle('fold');document.addEventListener('click',function(){const target=event.target;const isFoldable=target.classList.contains('parent');if(isFoldable){fold(target.parentElement)}})</script>
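For instance, the getter check mentioned above could look like this (a minimal sketch; the object and property names are made up for illustration):

```javascript
const sample = {
  plain: 1,
  get lazy() { return Math.random(); } // evaluating this could have side effects
};

// True for accessor properties (getters), false for ordinary data properties.
const isGetter = key =>
  typeof Reflect.getOwnPropertyDescriptor(sample, key).get === 'function';

console.log(isGetter('lazy'));  // true
console.log(isGetter('plain')); // false
```

With this check in place, `showObject` could display a placeholder like `(...)` for getters and only invoke them when the user clicks.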
Since this seemed very short, I decided to provide a simple example of one way to fix the HTML injection. I'll admit - in this case templates might be slightly overkill, but when working with even slightly more complex HTML structures, templates can be a lifesaver.
const showObject = (object, parent = document.body) => {
const keyValueTemplate = document.getElementById('keyValue');
const folderTemplate = document.getElementById('folder');
Object.entries(object).forEach(([key, value]) => {
if (value && value.constructor == Object) {
// Since this structure is really simple, just create the elements.
const element = document.createElement('details');
const summary = element.appendChild(document.createElement('summary'));
summary.textContent = key;
element.addEventListener('toggle', () => {
showObject(value, element)
}, { once: true });
parent.appendChild(element);
} else {
// Use a template since the structure is somewhat complex.
const element = document.importNode(keyValueTemplate.content, true);
element.querySelector('.key').textContent = key;
element.querySelector('.value').textContent = value;
parent.appendChild(element);
}
});
};
showObject({ a: "<b>Hi</b>", b: { c: 123, d: true }});
.property, details {
background: lightgrey;
border: 2px solid brown;
padding: 4px;
margin: 4px;
overflow: hidden;
}
.value {
color: #0366d5;
}
summary {
color: green;
}
<template id="keyValue">
<div class="property">
<span class="key"></span>:
<span class="value"></span>
</div>
</template> | {
"domain": "codereview.stackexchange",
"id": 31083,
"tags": "javascript, ecmascript-6"
} |
co-NP but not NP problems | Question: What are the problems that are in co-NP but not in NP?
i.e., those problems where incorrect strings can be deterministically verified in polynomial time but the correct strings can't be.
Answer: co-NP is the set of complements of problems that are in NP. So co-NP contains problems such as non-3-colourability, Boolean unsatisfiability and so on.
Most complexity theorists believe that NP$\,\neq\,$co-NP and one consequence of this is that the complement of any NP-complete problem would be in co-NP but not in NP.
Wikipedia has more information on co-NP. | {
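To make the asymmetry concrete, here is a small Python sketch (the graph encoding and helper names are mine, not standard): a claimed 3-colouring, which is a "yes" certificate for the NP problem, can be verified in polynomial time, while the only obvious way to confirm the co-NP statement "not 3-colourable" is to rule out every colouring.

```python
import itertools

def is_valid_coloring(edges, coloring):
    # NP-style verifier: checks a proposed 3-colouring certificate
    # in time polynomial in the size of the graph.
    return all(coloring[u] != coloring[v] for u, v in edges)

def not_3_colorable(edges, n):
    # For the co-NP complement no short certificate is known: this naive
    # check must rule out all 3**n colourings (exponential time).
    return not any(is_valid_coloring(edges, dict(enumerate(c)))
                   for c in itertools.product('rgb', repeat=n))

triangle = [(0, 1), (1, 2), (0, 2)]                        # 3-colourable
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]   # needs 4 colours
```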
"domain": "cs.stackexchange",
"id": 8504,
"tags": "p-vs-np, co-np"
} |
Infinite EMF produced in transformer | Question: If I have an ideal lossless transformer, where the ratio of voltages is proportional to the ratio of turns of wire, what, theoretically would stop infinite amplification of the EMF in one coil, and therefore infinite power being drawn from the circuit, if I increase the number of turns in the secondary coil towards infinity? Edit: I know that it's common knowledge that $V \cdot I$ is conserved from one coil to the other. I don't, however, see how this comes about from Faraday's law.
Answer: When you increase the induced EMF in the secondary winding of a transformer, the induced current in the winding is reduced by the same factor. So in an ideal transformer the product $V_j I_j$ (where the index $j$ labels the primary or secondary winding, and $V$ and $I$ are amplitudes of the voltage and current) is the same in the two windings. This product gives the power, and so rather than the power increasing towards infinity in the situation you consider, and thereby breaking the conservation of energy, it in fact remains constant.
To see the physical reason for this, consider the magnetic fields created by the primary and secondary. The magnetic field generated by the primary $B_p \propto N_p I_p$, and in the secondary $B_s \propto - N_s I_s$, where $N_j$ is the number of turns in the winding. For an ideal transformer the magnetic flux threading the primary all passes through the secondary, as shown in the figure.
If for simplicity we take the cross-sectional areas of the windings to be the same, then the constancy of the magnetic flux implies that $I_p N_p = - I_s N_s$. For a step-up transformer, $N_s > N_p$, and so $I_s$ is reduced in the same ratio to which $V_s$ is increased. This keeps the product $V I$ constant.
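A quick numerical sketch (the numbers are illustrative, not from the answer) shows that the flux argument conserves power rather than amplifying it:

```python
# Ideal transformer: V_s/V_p = N_s/N_p and I_s/I_p = N_p/N_s,
# so the product V*I is the same in both windings.
Np, Ns = 100, 1000     # turns in primary and secondary
Vp, Ip = 120.0, 5.0    # primary voltage (V) and current (A)

Vs = Vp * Ns / Np      # stepped-up voltage: 1200 V
Is = Ip * Np / Ns      # stepped-down current: 0.5 A

assert abs(Vp * Ip - Vs * Is) < 1e-9   # power is conserved, not amplified
```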
In practice the power in the secondary will always be less than the power in the primary due to losses in eddy currents, flux leakage, the resistance of the windings, and so on.
Taken from Wikipedia https://en.wikipedia.org/wiki/Transformer#/media/File:Transformer3d_col3.svg licensed under CC BY-SA 3.0. | {
"domain": "physics.stackexchange",
"id": 66686,
"tags": "electromagnetism, magnetic-fields, electric-circuits, electric-fields, electromagnetic-induction"
} |
Can I remap a topic from rosrun | Question:
Can I remap a topic from rosrun or must I set up a launch file.
Originally posted by rnunziata on ROS Answers with karma: 713 on 2014-01-08
Post score: 0
Original comments
Comment by dornhege on 2014-01-09:
Please don't close any questions unless there is a need to do so. If you find a correct answer click the small checkmark to the left. This will signal that a question was answered in the overview.
Comment by rnunziata on 2014-01-09:
In the close drop-down there is an item that says "question is answered, right answer accepted". What does this mean in light of your comment?
Comment by dornhege on 2014-01-09:
Only that clicking the checkmark is the preferred option to signal that a question was answered.
Answer:
As documented on the Remapping Arguments wiki page, you can easily remap arguments using rosrun from the command line (see example there).
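For example (the package and node names below are just an illustration), the remapping syntax from that wiki page works directly on the rosrun command line:

```sh
# remap the node's "chatter" topic to "my_chatter" without a launch file
rosrun rospy_tutorials talker chatter:=my_chatter
```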
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-01-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16613,
"tags": "ros"
} |
Where is the rotated angle actually located in fitEllipse method? | Question: Main task: I fit the ellipse using the fitEllipse() method and then I'd like to count the rotation angle between the horizontal axis and the major axis of the generated ellipse. I'm going to do this using the 3rd returned argument from the fitEllipse() method - $\theta$ (rotation angle).
Main issue: I can't find exact information about which axes this angle is located between.
Other: If I'm right, the lengths of the minor and major axes of the ellipse are the same as the lengths of the two sides of the rotated rectangle.
Sources:
From the documentation here (section no. 9) it seems that this angle is between the horizontal axis and the first side, as written in the CvBox2D section. So the angle can be between the horizontal axis and either the minor or the major axis.
But:
In this article (section 3) the first example is good, but in the 2nd example the angle should be between the horizontal axis and the height instead of the width
(referring to my aforementioned first point).
In this article it is shown that the angle is between the vertical axis and one side of the rectangle.
So, where is the rotated angle located in fitEllipse() method?
Any hints on how it works are welcome.
Answer: I've implemented the following simple code:
import cv2
import numpy as np
nr_im = 9876
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
colorText = (0, 0, 255)
thickness = 2
img = cv2.imread('testing/' + str(nr_im) + '.jpg')
original = img.copy()
blured_img = cv2.GaussianBlur(img,(17,17),5)
image = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 140], dtype="uint8")
upper = np.array([0, 0, 255], dtype="uint8")
mask = cv2.inRange(image, lower, upper)
# Morphological Closing: Get rid of the noise inside the object
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25)))
# Find contours
cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print(len(cnts))
cntsElps = []
for num_cnt, cnt in enumerate(cnts):
genEllipse= cv2.fitEllipse(cnt)
cntsElps.append(genEllipse)
cv2.ellipse(original,genEllipse,(0,255,0),2)
cv2.putText(original, str(num_cnt+1), (int(genEllipse[0][0]),int(genEllipse[0][1])), font, fontScale, colorText, thickness, cv2.LINE_AA)
print("Ellipse nb: " + str(num_cnt+1) + " has angle: " + str(genEllipse[2]) + "\n")
cv2.imwrite('testing/' + str(nr_im) + '_' + 'trash2' + '.png', original)
And I used this image as example:
I've got the following image result:
And the rotation angle for each ellipse was:
Ellipse nb: 1 has angle: 55.63788986206055
Ellipse nb: 2 has angle: 108.58539581298828
Ellipse nb: 3 has angle: 170.23861694335938
Ellipse nb: 4 has angle: 73.59089660644531
So, my conclusion is that the angle between the vertical axis and the major side of the rectangle (= major ellipse axis) is the rotation angle returned by the fitEllipse() method.
Addendum
If you look at this question from the point of view of how opencv-python defines axes (positive x-axis to the right, positive y-axis downwards), the angle is defined between the horizontal axis and the minor ellipse diameter. To demonstrate this, different angles are plotted on a white canvas
import math
import cv2
import numpy as np
# create white canvas
img = np.zeros([512, 512, 3], dtype=np.uint8)
img.fill(255)
xc = 256
yc = 256
angles = list(range(0, 360, 30))
# radii = np.linspace(30, 200, len(angles))
radii = [175] * len(angles)
for idx, (angle, radius) in enumerate(zip(angles, radii)):
xtop = xc + math.cos(math.radians(angle)) * radius
ytop = yc + math.sin(math.radians(angle)) * radius
cv2.line(img, (int(xtop), int(ytop)), (int(xc), int(yc)), (0, 0, 255), 1)
# Put the contour index in the ellipse
cv2.putText(img, f'{round(math.radians(angle) / math.pi, 2)} pi', (int(xtop), int(ytop)),
cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255),
2, cv2.LINE_AA)
cv2.imwrite('opencv_angles.jpg', img)
cv2.imshow('Definition of angle', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The text in the image shows the angles for that line in radians
The angle can also be plotted for the fitted ellipses, using the angle returned by fitEllipse.
(xc, yc), (width, height), angle = genEllipse
rminor = min(width, height) / 2
xtop = xc + math.cos(math.radians(angle)) * rminor
ytop = yc + math.sin(math.radians(angle)) * rminor
cv2.line(result, (int(xtop), int(ytop)), (int(xc), int(yc)), (0, 0, 255), 3)
Here you can see that the angle is between the horizontal axis and minor diameter. Remember the rotation angle for each ellipse:
Ellipse nb: 1 has angle: 55.63788986206055
Ellipse nb: 2 has angle: 108.58539581298828
Ellipse nb: 3 has angle: 170.23861694335938
Ellipse nb: 4 has angle: 73.59089660644531
Conclusion
In OpenCV, where positive x is to the right and positive y is downwards in images, the rotation angle for ellipses is best seen as the angle between the positive x-axis and the minor ellipse diameter (measured downwards towards the positive y-axis).
However, if you flip the axes (in your mind), so that positive y-axis is upwards and positive x-axis is to the right in the image, then you can also interpret the ellipse rotation angle as between the positive y-axis and major ellipse diameter (to the right towards the positive x-axis) | {
"domain": "datascience.stackexchange",
"id": 8583,
"tags": "python, opencv"
} |
dynamic queue implementation in C++ | Question: I've implemented a dynamic queue with a random-access iterator.
iterator:
#pragma once
#include <iterator>
namespace con {
template <class T> class rnd_iterator {
T *m_Ptr;
public:
/*
* type aliases
*/
using value_type = T;
using iterator_type = rnd_iterator<value_type>;
using iterator_category = std::random_access_iterator_tag;
using pointer = value_type *;
using const_pointer = const pointer;
using difference_type = std::ptrdiff_t;
using reference = value_type &;
using const_reference = const value_type &;
/*
* constructors
*/
rnd_iterator(const rnd_iterator<T> &other) noexcept : m_Ptr(other.m_Ptr) {}
rnd_iterator(T *p) noexcept : m_Ptr(p) {}
/*
* access operators
*/
reference operator*() { return *m_Ptr; }
reference operator[](std::size_t idx) { return m_Ptr[idx]; }
pointer operator->() { return m_Ptr; }
/*
* increment/decrement and assign operators
*/
rnd_iterator &operator=(pointer oth) {
m_Ptr = oth;
return *this;
}
rnd_iterator &operator+=(pointer oth) {
m_Ptr += oth;
return *this;
}
rnd_iterator &operator-=(pointer oth) {
m_Ptr -= oth;
return *this;
}
iterator_type &operator=(const iterator_type &rhs) {
m_Ptr = rhs.m_Ptr;
return *this;
}
friend iterator_type &operator+=(const iterator_type &lhs,
const iterator_type &rhs) {
lhs.m_Ptr += rhs.m_Ptr;
return lhs;
}
friend iterator_type &operator-=(const iterator_type &lhs,
const iterator_type &rhs) {
lhs.m_Ptr -= rhs.m_Ptr;
return lhs;
}
rnd_iterator operator++() {
++m_Ptr;
return *this;
}
rnd_iterator operator++(int) {
auto temp = *this;
m_Ptr++;
return temp;
}
rnd_iterator &operator--() {
--m_Ptr;
return *this;
}
rnd_iterator operator--(int) {
auto temp = *this;
m_Ptr--;
return temp;
}
/*
* comparison operators
*/
friend bool operator!=(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr != rhs.m_Ptr;
}
friend bool operator!=(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr != rhs;
}
friend bool operator==(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr == rhs.m_Ptr;
}
friend bool operator==(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr == rhs;
}
friend bool operator<(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr < rhs.m_Ptr;
}
friend bool operator<(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr < rhs;
}
friend bool operator<=(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr <= rhs.m_Ptr;
}
friend bool operator<=(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr <= rhs;
}
friend bool operator>(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr > rhs.m_Ptr;
}
friend bool operator>(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr > rhs;
}
friend bool operator>=(const iterator_type &lhs, const iterator_type &rhs) {
return lhs.m_Ptr >= rhs.m_Ptr;
}
friend bool operator>=(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr >= rhs;
}
friend difference_type operator+(const iterator_type &lhs,
const iterator_type &rhs) {
return lhs.m_Ptr + rhs.m_Ptr;
}
friend difference_type operator+(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr + rhs;
}
friend iterator_type operator+(const iterator_type &lhs,
difference_type rhs) {
return lhs.m_Ptr + rhs;
}
friend difference_type operator-(const iterator_type &lhs,
const iterator_type &rhs) {
return lhs.m_Ptr - rhs.m_Ptr;
}
friend iterator_type operator-(const iterator_type &lhs,
difference_type rhs) {
return lhs.m_Ptr - rhs;
}
friend difference_type operator-(const iterator_type &lhs, pointer rhs) {
return lhs.m_Ptr - rhs;
}
};
}
queue:
#pragma once
#include "iterator.hpp"
#include <algorithm>
#include <cassert>
#include <initializer_list>
#include <iostream>
#include <iterator>
#include <limits>
#include <memory>
#include <type_traits>
#include <utility>
namespace con {
template <class T, class Allocator = std::allocator<T>> class queue {
Allocator m_Alloc;
std::size_t m_Size, m_Capacity;
std::allocator_traits<Allocator> m_AllocTraits;
T *m_RawData;
void m_ReallocAnyway(std::size_t t_NewCapacity) {
std::size_t f_old = m_Capacity;
T *f_temp = m_Alloc.allocate(sizeof(T) * t_NewCapacity);
try {
for (std::size_t i = 0; i < m_Size; i++) {
new (&f_temp[i]) T(std::move_if_noexcept(m_RawData[i]));
m_AllocTraits.destroy(m_Alloc, std::addressof(m_RawData[i]));
}
m_Alloc.deallocate(m_RawData, f_old);
m_RawData = f_temp;
} catch (const std::exception &exc) {
m_Alloc.deallocate(f_temp, sizeof(T) * t_NewCapacity);
throw std::move(exc);
}
}
void m_Realloc(std::size_t t_NewCapacity) {
if (t_NewCapacity > m_Capacity) {
m_ReallocAnyway(t_NewCapacity);
} else {
return;
}
}
void m_ShiftToLeft() {
for (std::size_t i = 0; i < m_Size; i++) {
new (&m_RawData[i]) T(std::move_if_noexcept(m_RawData[i + 1]));
}
}
template <class F>
void m_ShiftFromTo(std::size_t from, std::size_t to, F &&func) {
for (; from < to; from++) {
new (&m_RawData[from])
T(std::move_if_noexcept(m_RawData[func(from, to)]));
}
}
template <class It> void m_ShiftRangeFromTo(It from, It to) {
for (; from != to; from++) {
new (std::addressof(*from))
T(std::move_if_noexcept(*(from + (to - from))));
}
}
template <class Iter> void m_DestroyRange(Iter beg, Iter end) {
for (; beg != end; beg++) {
m_AllocTraits.destroy(m_Alloc, std::addressof(*beg));
}
}
void m_CheckOrAlloc(std::size_t t_Size) {
if (t_Size >= m_Capacity) {
m_Realloc(m_Capacity * 2);
}
}
public:
using value_type = T;
using allocator_type = Allocator;
using size_type = decltype(m_Size);
using difference_type = std::ptrdiff_t;
using reference = value_type &;
using const_reference = const value_type &;
using pointer = typename std::allocator_traits<Allocator>::pointer;
using const_pointer =
typename std::allocator_traits<Allocator>::const_pointer;
using iterator = con::rnd_iterator<value_type>;
using const_iterator = const iterator;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
explicit queue(size_type cap = (sizeof(value_type) * 5),
const Allocator &alloc = Allocator{}) noexcept
: m_Alloc(alloc), m_Size(0), m_Capacity(cap),
m_RawData(m_Alloc.allocate(m_Capacity)) {}
explicit queue(const std::initializer_list<T> &init,
const Allocator &alloc = Allocator{}) noexcept
: m_Alloc(alloc), m_Size(init.size()), m_Capacity(sizeof(value_type) * 5),
m_RawData(m_Alloc.allocate(m_Capacity)) {
m_Size = init.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_copy(init.begin(), init.end(), m_RawData);
}
explicit queue(const queue<value_type> &oth) : queue() {
if (std::is_destructible<value_type>::value)
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_copy(oth.begin(), oth.end(), m_RawData);
}
explicit queue(queue<value_type> &&oth) noexcept : queue() {
if (std::is_destructible<value_type>::value)
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_move(oth.begin(), oth.end(), m_RawData);
}
template <class It> queue(It begin, It end) noexcept : queue() {
assert(begin <= end);
size_type f_size = std::distance(begin, end);
m_CheckOrAlloc(f_size);
m_Size = f_size;
std::uninitialized_copy(begin, end, m_RawData);
}
explicit queue(const queue<value_type> &&oth) = delete;
iterator begin() noexcept { return iterator(m_RawData); }
iterator end() noexcept { return iterator(m_RawData + size()); }
reverse_iterator rbegin() noexcept { return reverse_iterator(end()); }
reverse_iterator rend() noexcept { return reverse_iterator(begin()); }
const_iterator begin() const noexcept { return const_iterator(m_RawData); }
const_iterator end() const noexcept {
return const_iterator(m_RawData + size());
}
const_reverse_iterator rbegin() const noexcept {
return const_reverse_iterator(m_RawData + size());
}
const_reverse_iterator rend() const noexcept {
return const_reverse_iterator(m_RawData);
}
const_iterator cbegin() const noexcept { return const_iterator(m_RawData); }
const_iterator cend() const noexcept {
return const_iterator(m_RawData + size());
}
const_reverse_iterator crbegin() const noexcept { return rbegin(); }
  const_reverse_iterator crend() const noexcept { return rend(); }
bool empty() const noexcept { return size() == 0; }
size_type size() const noexcept { return m_Size; }
size_type capacity() const noexcept { return m_Capacity; }
size_type max_capacity() const noexcept {
return std::numeric_limits<size_type>::max();
}
const_pointer data() const { return m_RawData; }
void clear() requires(std::is_destructible<value_type>::value) {
for (size_type i = 0; i < size(); i++) {
m_AllocTraits.destroy(m_Alloc, std::addressof(m_RawData[i]));
}
m_Size = 0;
}
void reserve(size_type cp) { m_CheckOrAlloc(cp); }
void resize(size_type sz) {
m_Size = sz;
m_CheckOrAlloc(sz);
}
void erase(iterator val) {
if (val != end()) {
difference_type x = val - begin();
pointer p = m_RawData + x;
m_AllocTraits.destroy(m_Alloc, std::addressof(*val));
m_ShiftFromTo(std::distance(begin(), iterator(p)), size(),
[](auto l, [[maybe_unused]] auto _) { return l + 1; });
m_Size--;
} else {
return;
}
}
void erase(iterator first, iterator last) {
assert(first <= last && "queue::erase invalid range");
m_DestroyRange(first, last);
m_ShiftRangeFromTo(first, last);
m_Size -= std::distance(first, last);
}
void erase(reverse_iterator first, reverse_iterator last) {
assert(first <= last && "queue::erase invalid range");
m_DestroyRange(first, last);
m_ShiftRangeFromTo(first, last);
m_Size -= std::distance(first, last);
}
void erase(reverse_iterator val) {
if (val != rend()) {
m_AllocTraits.destroy(m_Alloc, std::addressof(*val));
m_ShiftFromTo(std::distance(val, rend()) - 1, size(),
[](auto l, [[maybe_unused]] auto _) { return l + 1; });
m_Size--;
} else {
return;
}
}
void erase(const value_type &obj) { erase(std::find(begin(), end(), obj)); }
void rerase(const value_type &obj) {
erase(std::find(rbegin(), rend(), obj));
}
iterator find(const value_type &obj) {
return std::find(begin(), end(), obj);
}
reverse_iterator rfind(const value_type &obj) {
return std::find(rbegin(), rend(), obj);
}
const_iterator find(const_reference obj) const {
return std::find(begin(), end(), obj);
}
const_reverse_iterator rfind(const value_type &obj) const {
return std::find(rbegin(), rend(), obj);
}
void enqueue(const value_type &oth) requires(
std::is_copy_constructible<value_type>::value) {
m_CheckOrAlloc(size());
new (&m_RawData[m_Size++]) value_type(oth);
}
void enqueue(value_type &&oth) requires(
std::is_move_constructible<value_type>::value) {
m_CheckOrAlloc(size());
new (&m_RawData[m_Size++]) value_type(std::move(oth));
}
[[nodiscard]] value_type
dequeue() requires(std::is_destructible<value_type>::value) {
--m_Size;
value_type temp = m_RawData[0];
    m_AllocTraits.destroy(m_Alloc, std::addressof(m_RawData[0]));
m_ShiftToLeft();
return temp;
}
template <class... Args> void emplace(Args &&...args) {
enqueue(value_type(std::forward<Args>(args)...));
}
value_type at(size_type index) const {
if (index >= size()) {
throw std::range_error("out of bounds queue"); // yes helpful error
} else {
return m_RawData[index];
}
}
reference at(size_type index) {
if (index >= size()) {
throw std::range_error("out of bounds queue"); // yes helpful error
} else {
return m_RawData[index];
}
}
value_type operator[](size_type index) const { return m_RawData[index]; }
reference operator[](size_type index) { return m_RawData[index]; }
queue<value_type> &operator=(const queue<value_type> &oth) {
if (&oth != this) {
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_copy(oth.begin(), oth.end(), m_RawData);
}
return *this;
}
queue<value_type> &operator=(queue<value_type> &&oth) {
if (&oth != this) {
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_move(oth.begin(), oth.end(), m_RawData);
oth.~queue();
}
return *this;
}
queue<value_type> &operator=(const queue<value_type> &&oth) = delete;
~queue() {
m_Alloc.deallocate(m_RawData, m_Capacity);
std::exchange(m_RawData, nullptr);
std::exchange(m_Size, 0);
}
~queue() requires(std::is_destructible<value_type>::value) {
clear();
m_Alloc.deallocate(m_RawData, m_Capacity);
std::exchange(m_RawData, nullptr);
std::exchange(m_Size, 0);
}
};
}
Answer: Despite the many criticisms in my review, I think this is an OUTSTANDING effort.
Design review
Iterators are part of range interfaces
The first issue that strikes me about your design is that you have separated the iterator from the container. That’s nonsense. Iterators can’t exist independently from a container. There is no such thing as a “general-purpose random-access iterator”.
Iterators are a part of a range. We talk about them as independent “things”, but they are not. In fact, you can’t even create an iterator without a range to create it from. By attempting to separate the iterator from the range, you’ve actually created numerous problems. It is possible (even dangerously easy) to create broken rnd_iterators (rnd_iterators that don’t actually reference a legitimate range).
What you’ve actually created with rnd_iterator is not really a random-access iterator. It looks like one, and it will work for some containers—it will work for std::vector, but not std::deque, for example—but it will crash with most random-access containers (like std::deque). It’s actually an iterator type that will work with contiguous ranges (std::vector is a contiguous range, and so is con::queue)… not random-access ranges… but it will work worse than the range’s own iterator type.
In fact, rnd_iterator is really nothing more than an alias for T*… except it’s more limited, less efficient, and lacks the well-understood semantics.
Dangerous conversions
rnd_iterator has a serious problem with dangerous conversions from raw pointers. That’s not a good thing; that’s very, very bad. There are NO situations, any time, ever, where you’d want client code to be able to freely convert raw pointers to queue iterators. And especially to have those conversions happen silently, by default.
The problem here is that because the iterator is separate from the container, you need a way to convert the container’s internal pointers to iterators. But by making that public, you’ve allowed random user code to be able to do the same thing. I can take any random pair of raw pointers, and create a pair of iterators… even if those pointers are not pointing to the same array. And in fact, even worse, random pointers will be automatically and silently converted to iterators, so at the slightest typo, suddenly I have what looks like a valid range. Hello, bugs. For example:
auto i = 0;
auto ptr = &i;
auto q = con::queue<int>{};
find(q.begin(), ptr, 42); // compiles without a warning, and probably crashes
You need for con::queue to be able to construct iterators from raw pointers… but that should only be an internal function. It should not be available to outside code. You could do this by friendship, I suppose, but however you do it, con::queue should be able to turn its internal pointers into iterators, but nothing else.
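One possible shape for that friendship (a minimal standalone sketch, not drop-in code for con::queue):

```cpp
#include <cassert>
#include <cstddef>

template <class T> class queue_sketch; // forward declaration for the friend

template <class T>
class rnd_iterator_sketch {
    T* ptr_;
    // private: client code cannot convert raw pointers to iterators,
    // silently or otherwise
    explicit rnd_iterator_sketch(T* p) : ptr_(p) {}
    friend class queue_sketch<T>;
public:
    T& operator*() const { return *ptr_; }
    rnd_iterator_sketch& operator++() { ++ptr_; return *this; }
    bool operator!=(const rnd_iterator_sketch& rhs) const { return ptr_ != rhs.ptr_; }
};

template <class T>
class queue_sketch {
    T data_[8] = {};
    std::size_t size_ = 0;
public:
    void enqueue(const T& v) { data_[size_++] = v; }
    // only the container can mint iterators from its internal pointers
    rnd_iterator_sketch<T> begin() { return rnd_iterator_sketch<T>(data_); }
    rnd_iterator_sketch<T> end() { return rnd_iterator_sketch<T>(data_ + size_); }
};
```

With this arrangement, `rnd_iterator_sketch<int> it = some_raw_pointer;` fails to compile, while the container can still hand out valid iterators.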
Incidentally, this also implies you should remove all the other operations with raw pointers:
// all these are bad
rnd_iterator &operator=(pointer oth);
rnd_iterator &operator+=(pointer oth);
rnd_iterator &operator-=(pointer oth);
friend bool operator!=(const iterator_type &lhs, pointer rhs);
friend bool operator==(const iterator_type &lhs, pointer rhs);
friend bool operator<(const iterator_type &lhs, pointer rhs);
friend bool operator<=(const iterator_type &lhs, pointer rhs);
friend bool operator>(const iterator_type &lhs, pointer rhs);
friend bool operator>=(const iterator_type &lhs, pointer rhs);
friend difference_type operator+(const iterator_type &lhs, pointer rhs);
There is NO situation where it would be a good thing to do comparisons or other operations between iterators and raw pointers. (And if you ever create a situation where you need to do that: 1) you should rewrite your algorithm; and 2) you could do it anyway, it would just be much more verbose, but that’s a good thing.)
Overblown interface
Now, you’re making a queue, which, by definition, exists to take elements in on one end, and pop them off from the other. Does the existing interface make sense for that?
Like, what is the purpose of being able to iterate through the queue… backwards? Seems a bit silly. I mean, one might ask why you need to iterate through the queue at all even, including forwards, because… that’s not how you use queues. You push, and you pop; those are the only operations a queue needs. But okay, having range access isn’t a bad thing because it can allow some massive optimizations (like rather than repeatedly popping items off the queue until it’s empty, you could iterate through the queue doing some operation, then clear the whole queue in a single step). But… backwards? Really?
And let’s be clear, if your queue can support reverse iteration, I’m not saying you should prevent it. But you don’t need to make it part of the public interface. You could remove rbegin()/rend() etc. and still have reverse iteration:
std::for_each(std::reverse_iterator{q.begin()}, std::reverse_iterator{q.end()}, func);
// or:
std::for_each(std::ranges::rbegin(q), std::ranges::rend(q), func);
// or:
for (auto&& element : q | std::views::reverse)
func(element);
All of the above work with just begin() and end(), and do the exact same thing, just as efficiently. So there’s no need to add more cruft to the queue’s interface, especially stuff that doesn’t directly relate to what a queue actually is.
Similar logic goes for the element access functions. Does it really make sense to provide random access to the middle of a queue? I don’t see why. If I want efficient random-access to a sequential data structure… well, that’s what std::vector is for (or std::deque for that matter). Again, even if you remove operator[size_type] and at(), it’s still possible to get efficient access to random elements in the queue (because the queue iterators are random-access iterators). So you’re not losing functionality by removing those functions. You’re just sending the message “that’s not how I intend for this type to be used (because it’s a queue)”.
A good interface should be:
Complete. It should be possible to do every operation necessary for the type with maximal efficiency.
Minimal. The more crap you add to an interface, the more unwieldy the class becomes both for maintainers and users (because now users have to learn and memorize more functions).
Logical. The interface should have all the operations that make semantic sense for what the type means… and no more than that (with exceptions made for the sake of usability and efficiency).
A queue’s basic interface should be nothing more than construction, destruction, moving, copying, pushing, and popping, and maybe peeking at the front of the queue, and maybe-maybe peeking at the back of the queue… and that… is… it. If you add ANYTHING ELSE, you need to seriously justify why it’s NECESSARY, either for usability or efficiency… because every single little thing you add to a class makes it that much more brittle, that much less maintainable, and that much more annoying to learn and use.
So I would suggest trimming down this interface quite a bit. I would suggest removing:
reverse_iterator and const_reverse_iterator.
rbegin(), rend(), crbegin(), crend().
max_capacity(), because it is not a standard container function; max_size() is.
All the element access functions. You can get at elements just as easily and efficiently with iterators.
All the erase functions that don’t take an iterator.
All the find functions.
I would also suggest adding a few functions:
Definitely shrink_to_fit().
At least front(), and maybe back() as well.
A dequeue_and_discard() function for when you want to pop but don’t care what you’re popping.
Maybe assign(), where you can assign from an iterator pair or initializer list.
Maybe a resize() (and perhaps assign()) overload that takes a source to copy for any new elements.
get_allocator() and max_size(), for container requirements.
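Put together, the trimmed-down interface might look something like the following sketch. To keep it self-contained I’ve implemented it on top of std::deque purely for illustration (the real class manages its own storage), and dequeue_and_discard() is my suggested name, not an existing function:

```cpp
#include <cstddef>
#include <deque>
#include <utility>

// Sketch of the trimmed-down interface, backed by std::deque just so the
// example is complete and runnable. Not the original implementation.
template <typename T>
class queue_sketch
{
    std::deque<T> m_data;  // stand-in for the real storage

public:
    using value_type = T;
    using size_type  = std::size_t;

    queue_sketch() = default;

    // pushing
    void enqueue(T value) { m_data.push_back(std::move(value)); }

    // popping
    T dequeue()
    {
        T result = std::move(m_data.front());
        m_data.pop_front();
        return result;
    }

    // pop when the value doesn't matter
    void dequeue_and_discard() { m_data.pop_front(); }

    // peeking
    T const& front() const { return m_data.front(); }
    T const& back()  const { return m_data.back(); }

    size_type size() const noexcept { return m_data.size(); }
    bool empty() const noexcept { return m_data.empty(); }
};
```

Everything else (iteration, searching, random access) falls out of begin()/end(), which would be added alongside.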
Unbalanced efficiency
The standard library has a queue—std::queue. It’s actually a container adapter: you supply a container, and it makes it act like a queue. The reason I’m mentioning all this is because the default container for a std::queue is std::deque… not std::vector.
Why is that relevant? Because your queue—con::queue—is actually built on an implementation that is basically std::vector.
Here’s why that’s an issue. When you are pushing to your queue, you get maximal efficiency. Let’s imagine you start with a queue that has capacity = 2, and is full, and you want to push a total of 8 elements:
Push element 3.
Allocate 4 spaces.
Move element 1 to new memory.
Move element 2 to new memory.
Push element 3 into new memory.
(Queue now has size = 3, capacity = 4)
Push element 4.
Push element 4 into existing capacity
(Queue now has size = 4, capacity = 4)
Push element 5.
Allocate 8 spaces.
Move element 1 to new memory.
Move element 2 to new memory.
Move element 3 to new memory.
Move element 4 to new memory.
Push element 5 into new memory.
(Queue now has size = 5, capacity = 8)
Push element 6.
Push element 6 into existing capacity
(Queue now has size = 6, capacity = 8)
Push element 7.
Push element 7 into existing capacity
(Queue now has size = 7, capacity = 8)
Push element 8.
Push element 8 into existing capacity
(Queue now has size = 8, capacity = 8)
That’s not bad! That’s actually highly efficient. There are two allocations during the process, but that’s not really avoidable (unless you reserve, of course), and during those two allocations, there are a bunch of extra moves… but for most of the pushes, it’s doing the absolute minimum work possible: just adding the element directly into the queue. (And if you actually reserved the required capacity up front, you’d really get the minimum work possible.)
But now look what happens when you start popping those elements back off:
Pop element.
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Move element 4 to position 3.
Move element 5 to position 4.
Move element 6 to position 5.
Move element 7 to position 6.
Move element 8 to position 7.
Pop element.
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Move element 4 to position 3.
Move element 5 to position 4.
Move element 6 to position 5.
Move element 7 to position 6.
Pop element
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Move element 4 to position 3.
Move element 5 to position 4.
Move element 6 to position 5.
Pop element.
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Move element 4 to position 3.
Move element 5 to position 4.
Pop element.
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Move element 4 to position 3.
Pop element.
Move element 1 to return slot.
Move element 2 to position 1.
Move element 3 to position 2.
Yikes. That’s not a problem with your implementation. In fact, if you used std::vector with std::queue you’d get the same behaviour. That’s why std::vector is not the default container to use with std::queue. Every time you pop an element off of a queue built on a vector-like container with N elements, you trigger a chain of N − 1 moves. If your queue has a million elements, popping an item off triggers 999,999 moves. Not great.
Here is what would happen with a std::queue using std::deque:
Pop element.
Move element 1 to return slot.
Pop element.
Move element 1 to return slot.
Pop element
Move element 1 to return slot.
Pop element.
Move element 1 to return slot.
Pop element.
Move element 1 to return slot.
Pop element.
Move element 1 to return slot.
Wow, big difference, eh?
Now I am NOT saying you should make your queue with a re-implementation of std::deque internally. In fact, I think you could do much better. Your implementation is already half-way there. The only thing I think you need to do differently is to not shift the entire internal array when you pop from the front.
Here’s one way you could do it:
Store the currently allocated block address.
Store a pointer to the head of the queue (in the currently allocated block).
Store the queue size.
Currently you store only m_RawPtr as BOTH the current block address AND the head of the queue… and that is where your problems arise. I recommend splitting them into two. That way, when you pop from the queue, you don’t need to move all the elements one position over… you just move the pointer to the queue head.
Here’s how that might look. Suppose you have a queue with size 6, capacity 8:
+---+---+---+---+---+---+---+---+
| A | B | C | D | E | F | _ | _ |
+---+---+---+---+---+---+---+---+
^
|
m_block -+
|
m_head --/
m_size = 6
m_capacity = 8
With your current design, when you pop, you get this:
+---+---+---+---+---+---+---+---+
| B | C | D | E | F | _ | _ | _ |
+---+---+---+---+---+---+---+---+
^
|
m_block -+
|
m_head --/
m_size = 5
m_capacity = 8
(Except of course, instead of m_block and m_head, you just have the one m_RawPtr.)
What I’m suggesting is that when you pop, you destroy the head element, and then just advance the head pointer:
+---+---+---+---+---+---+---+---+
| _ | B | C | D | E | F | _ | _ |
+---+---+---+---+---+---+---+---+
^ ^
| |
m_block -/ |
|
m_head ------/
m_size = 5
m_capacity = 7
The pro of this design is that popping now becomes MUCH faster, and there is no more chance of failure (which might happen if you have elements that are not nothrow-movable). The con is that you don’t recover capacity by popping. So if your use pattern is one-push-one-pop over and over… you’ll need to reallocate eventually. (With std::deque as the underlying container, that can be avoided because a deque is a bunch of chunks: if necessary, the deque can just move empty chunks around to avoid needing to reallocate at one end or the other.) HOWEVER, if your use pattern is to basically pump the queue empty every so often, then this design could be very efficient, because every time you empty the queue, you can just reset the head pointer to the beginning of the block, and recover all the capacity. Or, if the space at the beginning is larger than the size, you can safely copy the whole queue back to the beginning of the block. (And if the elements are nothrow-movable, you can do that safely any time, so you can always recover the capacity when you need it.)
Anyway, whatever design you choose, I just wanted to point out that you have an unbalanced efficiency issue: pushing is fast, popping is slow. If that’s what you want, well then fine; there’s nothing wrong with having fast pushes and slow pops. It’s a little surprising, but if you document that that’s the point, then that’s cool. That pattern does have real-world usage potential.
Indestructible elements?
Several functions in the queue class have a bizarre requirement: requires(std::is_destructible<value_type>::value). Now, for starters, it would probably make more sense to use the destructible concept: requires std::destructible<T>. But aside from that… what do you think it means for a type to not be destructible?
If a type cannot be destroyed, then how the hell are you supposed to pop elements from the queue? How the hell are you supposed to destroy the queue itself? (No, your answer—simply ignoring the destruction of the elements and deallocating their memory out from under them—is absolutely NOT the right answer. That’s a one-way ticket to UB-land, with bonus leakage along the way.)
If a type cannot be destroyed, it cannot be constructed. (At least, not without significant chicanery that’s not worth serious consideration.) If a type cannot be constructed… how the hell is it supposed to get into the queue? The only way to put things into the queue is either to move construct, copy construct, or otherwise directly (via emplace()) construct them in the queue. If you can’t destruct, you can’t construct, so none of those things should be possible (conceptually). So never mind getting things out of the queue; you can’t even put things into the queue.
So if a type is indestructible, you can’t put it into the queue, and even if you could, you couldn’t then take it out of the queue, or even destroy the queue itself.
So what do you think it means to have a queue full of indestructible objects? Do you have any actual use-cases for this?
From where I’m sitting, it looks like complete nonsense, but maybe I’m missing something.
Code review
rnd_iterator
To start, let me reiterate that what you’ve basically done is re-implemented T*… except not as efficiently. That being said, let’s dive in to the actual code.
#pragma once
#pragma once is non-standard, and it has serious problems that while rare, are nightmarish to deal with (which is why it’s never been standardized). Use include guards instead.
T *m_Ptr;
In C++, the convention is to put the pointer asterisk or reference ampersand with the type, not with the variable name. That’s because types matter more in C++… it’s more important to think of m_Ptr as a T* than it is to think of it as a pointer to a T.
T *m_ptr is C style.
T* m_ptr is C++ style.
This is true for references as well.
using value_type = T;
If you want this template to be used for both iterator and const_iterator, then you will need to remove the const here. This is as easy as using value_type = std::remove_cv_t<T>;.
using pointer = value_type *;
// ... [snip] ...
using difference_type = std::ptrdiff_t;
using reference = value_type &;
You have a problem where con::queue<T>::iterator::pointer may not be the same as con::queue<T>::pointer, because the former is typename std::allocator_traits<Allocator>::pointer while the latter is just T*. This is just one of a number of problems that arise because you have separated the iterator from the container.
using iterator_type = rnd_iterator<value_type>;
I don’t see the point of this type alias. It’s actually longer than just using rnd_iterator.
using const_pointer = const pointer;
// ...
using const_reference = const value_type &;
These two type aliases make no sense. They make sense in the context of the container… but not in the context of the iterator. iterator::const_reference makes no sense; what you’d really want is const_iterator::reference.
In any case, you have defined const_pointer incorrectly (although const_reference is correct).
rnd_iterator(const rnd_iterator<T> &other) noexcept : m_Ptr(other.m_Ptr) {}
There is no reason to write this constructor, because it’s not doing anything differently from the default, implicitly generated copy constructor. And, in fact, by writing it out, you have actually crippled the efficiency. See the rule of 3/5/0.
rnd_iterator(T *p) noexcept : m_Ptr(p) {}
All single-argument constructors should, by default, be declared explicit.
But as mentioned in the design overview, this constructor should also be private (if it even exists at all).
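Here’s a sketch of what “private, with the container as a friend” could look like. The names are mine, for illustration; the point is just that only the container can mint iterators from raw pointers:

```cpp
#include <cstddef>

template <typename T>
class array4;  // forward declaration of the (toy) container

template <typename T>
class rnd_iterator
{
    T* m_ptr = nullptr;

    // private: only the container may construct from a raw pointer
    explicit rnd_iterator(T* p) noexcept : m_ptr{p} {}

    friend class array4<T>;

public:
    rnd_iterator() = default;
    T& operator*() const { return *m_ptr; }
};

// toy container, just enough to hand out iterators
template <typename T>
class array4
{
    T m_data[4] = {};

public:
    rnd_iterator<T> begin() { return rnd_iterator<T>{m_data}; }
};
```

With that, users can get iterators only from the container’s begin()/end(), and can no longer conjure up iterators pointing who-knows-where.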
reference operator[](std::size_t idx) { return m_Ptr[idx]; }
This should actually be using the container’s size_type, not std::size_t. Again, this is why iterators should be defined with their containers. They don’t make sense independently.
Also, all of the access operators should be const. None of them change the iterator. You could use the returned pointer/reference to change whatever is pointed-to or referenced… but the iterator itself isn’t being changed.
rnd_iterator &operator=(pointer oth)
rnd_iterator &operator+=(pointer oth)
rnd_iterator &operator-=(pointer oth)
All of these operations should be deleted.
Also… what sense does adding two pointers make? What do you think will happen if you add a pointer to a pointer?
iterator_type &operator=(const iterator_type &rhs)
Like the copy constructor, this shouldn’t be explicitly defined.
friend iterator_type &operator+=(const iterator_type &lhs,
const iterator_type &rhs)
Adding pointers, or iterators, makes no sense.
friend iterator_type &operator-=(const iterator_type &lhs,
const iterator_type &rhs)
Subtracting iterators does make sense… but the result will be a difference_type… not an iterator. So operator-= with an iterator on the right-hand side makes no sense.
rnd_iterator operator++()
You have a bug; you forgot the & on the return type.
friend bool operator!=(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator!=(const iterator_type &lhs, pointer rhs)
friend bool operator==(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator==(const iterator_type &lhs, pointer rhs)
friend bool operator<(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator<(const iterator_type &lhs, pointer rhs)
friend bool operator<=(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator<=(const iterator_type &lhs, pointer rhs)
friend bool operator>(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator>(const iterator_type &lhs, pointer rhs)
friend bool operator>=(const iterator_type &lhs, const iterator_type &rhs)
friend bool operator>=(const iterator_type &lhs, pointer rhs)
Okay, first, all of the operations that take raw pointers should be removed.
That leaves you with only the operations between two iterators. But you’re using C++20, so all of the above can be reduced to one line:
constexpr auto operator<=>(rnd_iterator const&) const noexcept = default;
(I’ve added constexpr, though you don’t have it anywhere else. You could, though.)
friend difference_type operator+(const iterator_type &lhs,
const iterator_type &rhs)
friend difference_type operator+(const iterator_type &lhs, pointer rhs)
Adding iterators makes no sense. Adding a pointer to an iterator makes even less sense.
friend iterator_type operator-(const iterator_type &lhs, difference_type rhs)
friend difference_type operator-(const iterator_type &lhs, pointer rhs)
The operation with a raw pointer should be removed, but you have also forgotten all the addition operations. You can’t add two iterators… but you can add an iterator and a difference_type in either order.
A big problem that you will probably run into is that you haven’t given any thought to the relationship between rnd_iterator<T> and rnd_iterator<const T>. The former should be implicitly convertible to the latter… but the latter should not be convertible to the former. This will really become an issue when you try to use it as iterator and const_iterator for the container.
One more thing worth mentioning: you explicitly said you want a random-access iterator, and yeah, that’s what you have. However, you could have a contiguous iterator, if you wanted.
queue<T, Allocator>
Allocator m_Alloc;
std::size_t m_Size, m_Capacity;
std::allocator_traits<Allocator> m_AllocTraits;
T *m_RawData;
There are a couple issues with the way you’ve laid out your class.
First, when you are ordering a class’s data members, you should try to put the most important data members first. Why? Because the address of the very first member in a class is (usually!) the same as this, which means the moment you access this, you already have the first data member right there. Later data members might not be in cache yet, and may require a separate load.
In this case, the data member you probably want right up front is m_rawData. So that should be first. Next most important is maybe m_Size, followed by m_Capacity, with m_Alloc being the least important.
The second problem comes from the fact that allocators are often stateless, which means they have zero size. However, when you write an allocator data member like that, it has to take up at least 1 byte. And, unfortunately, the next type is std::size_t, which is usually 8 bytes, and has to be aligned on an 8-byte boundary… so m_Alloc will have to be padded with 7 extra bytes just to make things line up.
To fix that, you can use [[no_unique_address]], so if m_Alloc really is zero-sized, it will take up zero space in the class.
Never do this:
std::size_t m_Size, m_Capacity;
Each declaration should be on its own line.
And, finally, there is no need for m_AllocTraits. std::allocator_traits is a traits class. It is zero-sized by definition, and it is meant to be used statically. You’re never supposed to create an allocator_traits object, let alone store one in a class. m_AllocTraits.destroy(...) is just plain wrong. destroy() is not a non-static member function, it is a static member function. You have to call it like this: decltype(m_AllocTraits)::destroy(...). But of course, that makes no sense when you can just do std::allocator_traits<Allocator>::destroy(...).
So your data members should probably look more like this:
T* m_RawData = nullptr;
std::size_t m_Size = 0;
std::size_t m_Capacity = 0;
[[no_unique_address]] Allocator m_Alloc = {};
Note I added initializers, which is probably a good idea, too.
void m_ReallocAnyway(std::size_t t_NewCapacity)
This is a very good attempt at writing what is actually an EXTREMELY complicated operation.
You correctly use std::allocator_traits elsewhere in the code, but not here, which is a shame. Instead of calling m_Alloc.allocate(...) directly, you should do:
auto f_temp = std::allocator_traits<Allocator>::allocate(m_Alloc, t_NewCapacity);
Note also that it’s just t_NewCapacity… not sizeof(T) * t_NewCapacity. The allocator already knows about sizeof(T) (as well as alignof(T)).
Also, using std::addressof() is a little ridiculous. You know m_RawData is a T*. It doesn’t make sense to dereference the pointer, then use addressof() to get it back. Just do m_RawData + i.
Alright, now comes the really tricky part:
for (std::size_t i = 0; i < m_Size; i++) {
new (&f_temp[i]) T(std::move_if_noexcept(m_RawData[i]));
// ...
}
First, rather than a raw placement-new, you should be using std::allocator_traits<Allocator>::construct(m_Alloc, f_temp + i, std::move_if_noexcept(m_RawData[i])).
But now here comes the critical issue. Let’s say m_Size is 10; there are 10 elements in the queue. So you start the loop, and successfully copy-construct 5 elements… then, catastrophe, an exception is thrown copying the 6th element. What happens now?
Well, you bubble up to the catch block, deallocate f_temp, and then propagate the exception…
… but… hang on… you’ve missed something.
5 objects were constructed. Those 5 objects need to be destructed before you can deallocate the memory out from under them.
So what you actually need to do is something more like:
// allocate the memory
auto const f_temp = std::allocator_traits<Allocator>::allocate(m_Alloc, t_NewCapacity);
auto num_constructed = std::size_t{0};
try
{
// construct the objects in that memory
for (; num_constructed != m_Size; ++num_constructed)
{
std::allocator_traits<Allocator>::construct(
m_Alloc,
f_temp + num_constructed,
std::move_if_noexcept(m_RawData[num_constructed])
);
}
}
catch (...)
{
// destroy objects in reverse order
for (auto i = num_constructed; i != 0; --i)
{
std::allocator_traits<Allocator>::destroy(m_Alloc, f_temp + (i - 1));
}
// deallocate the memory
std::allocator_traits<Allocator>::deallocate(m_Alloc, f_temp, t_NewCapacity);
throw;
}
Now, your catch block is also wrong. You should never rethrow an exception the way you do. You catch by std::exception const&, but the actual exception object might be std::bad_alloc or some other type derived from std::exception. So when you do throw exc; (or throw std::move(exc);, which is pointless, because exc is a const reference… you can’t move from a const object, so it turns into a copy anyway) you slice the actual exception object down to plain std::exception. That’s bad.
Never rethrow an exception like throw exc;. Just do throw;.
Also, there is no reason you need to limit the exceptions you catch to types that derive from std::exception. If some T constructor throws some other exception type, you want to catch and rethrow that, too. So don’t limit the catch to std::exception const&. Catch everything with catch (...). Rethrow anything with throw;.
Finally, you’re asking for trouble interleaving the construction of the new elements with the destruction of the old ones. Consider the same scenario as before, where you’ve copied 5 out of 10 elements, and then there’s an exception. If you’ve been destroying the source elements as you go along… well, now you’re screwed. You have an array where the first 5 elements are invalid.
Instead, what you should aim to do is get all the dangerous stuff out of the way first, and then do cleanup. Allocating the new array and copy-constructing the new elements are the dangerous steps. Once those are done, destroying the old elements and deallocating the old array should be safe. So do those last. Something like:
void m_ReallocAnyway(std::size_t new_capacity)
{
using alloc_traits = std::allocator_traits<Allocator>;
// this might throw, but if it does, meh, we haven't done anything yet
auto const new_data = alloc_traits::allocate(m_Alloc, new_capacity);
auto num_constructed = std::size_t{0};
try
{
// any of these copy-constructions might fail
//
// if they do, the catch block will clean up everything done so far
for (; num_constructed != m_Size; ++num_constructed)
{
alloc_traits::construct(
m_Alloc,
new_data + num_constructed,
std::move_if_noexcept(m_RawData[num_constructed])
);
}
}
catch (...)
{
// all the clean-up from the potentially dangerous stuff goes here
for (auto i = num_constructed; i != 0; --i)
{
alloc_traits::destroy(m_Alloc, new_data + (i - 1));
}
alloc_traits::deallocate(m_Alloc, new_data, new_capacity);
throw;
}
// now new_data contains all the stuff from m_RawData
//
// from this point on, all we need to do is clean up the old stuff,
// which should be no-fail
// destroy old objects in reverse order
for (auto i = m_Size; i != 0; --i)
{
// shouldn't fail
alloc_traits::destroy(m_Alloc, m_RawData + (i - 1));
}
// deallocate old memory; shouldn't fail
alloc_traits::deallocate(m_Alloc, m_RawData, m_Capacity);
// and finally, set the class data members to the new values
//
// these also can't fail
m_RawData = new_data;
m_Capacity = new_capacity;
}
As an optimization to the above, you could put the entire try-catch block in an if constexpr block, so that you only need it when you don’t have no-fail moving. If moving is no-fail (as it is for most types), then there’s no need to worry about anything in that first loop failing, so there’s no need for any cleanup.
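Here’s a sketch of that optimization, pulled out as a free function for illustration (in the real class it would be the body of m_ReallocAnyway’s copy loop). When T’s move constructor can’t throw, the loop can’t fail partway through, so the whole try/catch cleanup path compiles away:

```cpp
#include <cstddef>
#include <memory>
#include <type_traits>
#include <utility>

// Move/copy `size` elements from src into raw memory at dst, with cleanup
// on failure only when failure is actually possible.
template <typename T, typename Allocator>
void relocate(Allocator& alloc, T* src, std::size_t size, T* dst)
{
    using alloc_traits = std::allocator_traits<Allocator>;

    if constexpr (std::is_nothrow_move_constructible_v<T>)
    {
        // no-fail path: plain moves, no cleanup machinery needed
        for (std::size_t i = 0; i != size; ++i)
            alloc_traits::construct(alloc, dst + i, std::move(src[i]));
    }
    else
    {
        std::size_t constructed = 0;
        try
        {
            for (; constructed != size; ++constructed)
                alloc_traits::construct(alloc, dst + constructed,
                                        std::move_if_noexcept(src[constructed]));
        }
        catch (...)
        {
            // unwind whatever was built before rethrowing
            for (auto i = constructed; i != 0; --i)
                alloc_traits::destroy(alloc, dst + (i - 1));
            throw;
        }
    }
}
```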
void m_Realloc(std::size_t t_NewCapacity) {
if (t_NewCapacity > m_Capacity) {
m_ReallocAnyway(t_NewCapacity);
} else {
return;
}
}
I mean, you don’t really need the else { return; } here. But I suppose that’s just a matter of personal style.
void m_ShiftToLeft() {
for (std::size_t i = 0; i < m_Size; i++) {
new (&m_RawData[i]) T(std::move_if_noexcept(m_RawData[i + 1]));
}
}
This is really wrong.
First, placement new constructs a new object in raw memory. You should never do that overtop of an existing object. If an object is already constructed, you need to destroy it before you can construct a new object over it. If you want to replace an existing object with another existing object… as you’re doing here… you have two options:
The hard way:
Destroy object 1
Use placement new to copy/move construct from object 2 over the old location of object 1
The easy way:
Just copy/move assign object 2 to object 1
The catch of option 2 is that it requires the type to be copy/move assignable, whereas option 1 only requires the type to be copy/move constructible (which you need anyway).
(By the way, I get that the only time this function is ever used, you’ve already destroyed the first object in the queue. So the first iteration of the loop is correct… though every subsequent iteration is still wrong. In any case, the whole plan is still terrible. But we’ll get to that later.)
If you actually want to shift everything in the queue to the left, the safest way to do it would be to use std::move_if_noexcept() to copy/move-assign every item to the previous one. If moving really is no-fail, then this will be perfectly safe. If it’s not… well, then there’s really no way to make this operation safe. (Well, there is one! But we’ll get to it later.)
Also, note that in your code, you have a bug. You loop from 0 to m_Size, which is fine, but then you access m_RawData[i + 1], which is one-past-the-end.
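For the assignment-based version, the standard library already has the right tool: std::move (the algorithm) move-assigns each element onto its left neighbour, never constructing over a live object. A small demo, on a vector purely for illustration:

```cpp
#include <algorithm>
#include <vector>

// Shift everything one slot to the left with move-*assignment* rather than
// placement new. The first element is overwritten; the last slot ends up
// in a moved-from state and is dropped.
std::vector<int> shifted_left(std::vector<int> data)
{
    if (!data.empty())
    {
        std::move(data.begin() + 1, data.end(), data.begin());
        data.pop_back();
    }
    return data;
}
```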
template <class F>
void m_ShiftFromTo(std::size_t from, std::size_t to, F &&func)
This whole function really doesn’t make any sense. All you ever use it for is to shift elements to the start N positions over starting at position I (and N is always 1, though it doesn’t need to be, conceivably). m_ShiftRangeFromTo() already covers that. There’s no need for all the extra complexity of a function object. Keep things simple, and you’ll have fewer bugs and less maintenance headache.
void m_CheckOrAlloc(std::size_t t_Size) {
if (t_Size >= m_Capacity) {
m_Realloc(m_Capacity * 2);
}
}
Are you sure this is what you want? If your capacity is 10, and someone wants to put 10 elements in the queue… you really want to reallocate to a capacity of 20 rather than just put the 10 elements in the existing capacity?
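A sketch of a fixed growth policy, as a free function with names of my own choosing: reallocate only when the request actually exceeds the capacity, and make the new capacity at least as large as the request, since doubling alone never satisfies a big jump (and doubling a capacity of zero goes nowhere):

```cpp
#include <algorithm>
#include <cstddef>

// Returns the capacity to allocate, or 0 if no reallocation is needed.
std::size_t grow_capacity(std::size_t requested, std::size_t capacity)
{
    if (requested <= capacity)
        return 0;  // fits already: no reallocation needed
    return std::max(requested, capacity * 2);
}
```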
using iterator = con::rnd_iterator<value_type>;
using const_iterator = const iterator;
const_iterator is incorrect. It should be con::rnd_iterator<value_type const> (assuming rnd_iterator correctly handles const value types… which it doesn’t, but should).
explicit queue(size_type cap = (sizeof(value_type) * 5),
const Allocator &alloc = Allocator{}) noexcept
: m_Alloc(alloc), m_Size(0), m_Capacity(cap),
m_RawData(m_Alloc.allocate(m_Capacity)) {}
This is your default constructor… among other things (which is not good!)… and you want it to be noexcept, which is a good idea… however, it can’t be, because you’ve crammed too much work into it. How can it possibly be noexcept when you’re allocating?
The first thing I would recommend is to not use default parameters. I think they’re a terrible idea in general, and it’s an especially bad idea here. At the very least, even if you insist on using default parameters, you’re going to need more constructors than this, because it should be possible to construct a queue with just an allocator, like so: auto q = queue<int>(alloc);.
So you need at least two constructors:
explicit queue(const Allocator& alloc = {}) noexcept;
explicit queue(size_type cap, const Allocator& alloc = {});
But this is still a wacky interface, because when I do auto q = queue<int>(5);, I expect that means to create a queue with 5 default-constructed elements in it. That’s what it means for every container in the standard library, after all. But, no, here I get an empty queue… with a capacity of 5. That’s just weird.
My advice is to just forget the constructor with the capacity. You don’t need it. This:
auto q = queue<int>{};
q.reserve(5);
… is clearer than:
auto q = queue<int>(5);
… and there’s no reason it couldn’t be just as efficient.
So that leaves you with just the default constructor (with optional allocator). But it’s still not noexcept so long as it’s allocating. So if you really want a noexcept default constructor (and you should!), you need to not do any allocating. How is that possible?
Well, one option is to say that a default-constructed queue has a size of 0 and a capacity of 0, and thus m_RawData is nullptr:
template <typename T, typename Allocator = std::allocator<T>>
class queue
{
T* m_RawData = nullptr;
std::size_t m_Size = 0;
std::size_t m_Capacity = 0;
    [[no_unique_address]] Allocator m_Alloc = {};
// ... [snip] ...
public:
constexpr explicit queue(Allocator const& alloc = {}) noexcept :
m_Alloc{alloc}
{}
// ...
Of course, since you now have a “null state”, you have to be careful in some of your member functions to account for it… but not that many, actually. (Mostly just the destructor and the functions that might do destruction in one form or another, like reserve().)
But having this no-fail default construction is enormously important, because it allows for no-fail moving as well, and thus, swapping. And you really, really want no-fail moving and no-fail swapping. You could do:
static constexpr auto _swap_data_with_equal_allocators(queue& to, queue&& from) noexcept
{
std::ranges::swap(to.m_RawData, from.m_RawData);
std::ranges::swap(to.m_Size, from.m_Size);
std::ranges::swap(to.m_Capacity, from.m_Capacity);
}
static constexpr auto _move_data_with_equal_allocators(queue& to, queue&& from) noexcept
{
return _swap_data_with_equal_allocators(to, std::move(from));
}
constexpr queue() noexcept = default;
constexpr explicit queue(Allocator const& alloc) noexcept :
m_Alloc{alloc}
{}
constexpr queue(queue&& other) noexcept :
m_Alloc{std::move(other.m_Alloc)}
{
_move_data_with_equal_allocators(*this, std::move(other));
}
constexpr queue(queue&& other, Allocator const& alloc)
    noexcept(std::allocator_traits<Allocator>::is_always_equal::value) :
m_Alloc{alloc}
{
    if constexpr (std::allocator_traits<Allocator>::is_always_equal::value)
{
_move_data_with_equal_allocators(*this, std::move(other));
}
else
{
if (m_Alloc == other.m_Alloc)
{
_move_data_with_equal_allocators(*this, std::move(other));
}
else
{
// need to allocate memory, then MOVE-CONSTRUCT elements
// from other
//
// afterwards, other should have its original size and
// capacity... but all the elements should be moved-from
}
}
}
constexpr auto operator=(queue&& other)
    noexcept(std::allocator_traits<Allocator>::propagate_on_container_move_assignment::value
        or std::allocator_traits<Allocator>::is_always_equal::value)
-> queue&
{
    if constexpr (std::allocator_traits<Allocator>::propagate_on_container_move_assignment::value)
{
// this is safe, because copying allocators is guaranteed to be
// noexcept, and of course moving a queue's data is noexcept
clear();
        m_Alloc = other.m_Alloc;
_move_data_with_equal_allocators(*this, std::move(other));
}
else
{
        if constexpr (std::allocator_traits<Allocator>::is_always_equal::value)
{
_move_data_with_equal_allocators(*this, std::move(other));
}
else
{
            if (m_Alloc == other.m_Alloc)
{
_move_data_with_equal_allocators(*this, std::move(other));
}
else
{
// need to make sure this has enough capacity, then
// MOVE-ASSIGN the elements from other into this
//
// take into account whether moving/copying is noexcept;
// if not, then maybe you need to use a temporary buffer
// to keep the strong exception guarantee
//
// afterwards, other should have its original size and
// capacity... but all the elements should be moved-from
}
}
}
return *this;
}
constexpr auto swap(queue& other)
    noexcept(std::allocator_traits<Allocator>::propagate_on_container_swap::value
        or std::allocator_traits<Allocator>::is_always_equal::value)
-> void
{
    if constexpr (std::allocator_traits<Allocator>::is_always_equal::value)
{
        if constexpr (std::allocator_traits<Allocator>::propagate_on_container_swap::value)
            std::ranges::swap(m_Alloc, other.m_Alloc);
_swap_data_with_equal_allocators(*this, std::move(other));
}
else
{
if (m_Alloc == other.m_Alloc)
{
_swap_data_with_equal_allocators(*this, std::move(other));
}
else
{
// ??? you're on your own here
//
            // swapping with unequal, non-propagating allocators is UB for
            // all standard containers; you could leave it undefined for
            // yours too, or detect it and throw or terminate
}
}
}
With the above, moving and swapping are guaranteed no-fail wherever the allocator allows the guarantee, and even where it can’t be made statically, the operations are still no-fail at runtime whenever the allocators compare equal.
But let’s get back to your constructor:
explicit queue(size_type cap = (sizeof(value_type) * 5),
const Allocator &alloc = Allocator{}) noexcept
Now you seem confused about what “capacity” means, and sizing in general. In the standard containers, a capacity of 5 means it can hold 5 Ts without reallocating… no matter what size the Ts are. Same goes for the size, basically: a size of 8 means it has 8 objects… not that it’s 8 bytes large.
So when you do… cap = (sizeof(value_type) * 5), you’re saying that the default capacity for a queue depends on the size of the objects it’s holding. If it’s a queue<std::byte>, then it has a capacity to hold 5 objects, so the total size in memory is 5 bytes… but if it's a queue<std::array<std::byte, 1000>>, then it has a capacity to hold 5,000 objects… so the total size in memory is 5,000,000 bytes. Clearly something has gone awry.
What you really want, I think, is just cap = 5. That means the default capacity is 5 objects, regardless of what those objects are.
Also, once again, don’t initialize multiple things on a single line. It makes your code damn near illegible.
explicit queue(const std::initializer_list<T> &init,
const Allocator &alloc = Allocator{}) noexcept
: m_Alloc(alloc), m_Size(init.size()), m_Capacity(sizeof(value_type) * 5),
m_RawData(m_Alloc.allocate(m_Capacity))
First, you should never take initializer lists by const&. They are made to be passed around by value.
Second, this constructor obviously can’t be noexcept, because it’s allocating.
Third, I don’t see the sense of allocating a capacity of 5 (or sizeof(T) * 5 even) when you know already how many objects are in the initializer list. If it’s greater, then you’ll need to throw away what you’ve just allocated and reallocate… which is silly. If it’s less, then you’ve got wasted capacity that you may never need, because if someone gave you an initializer list, that often means they already know all the data they need in the queue.
In fact, you might as well just delegate this constructor over to the iterator constructor:
constexpr queue(std::initializer_list<T> init, Allocator const& alloc = {}) :
queue(init.begin(), init.end(), alloc)
{}
Assuming you write the iterator constructor well, there will be no performance loss from doing this.
explicit queue(const queue<value_type> &oth) : queue() {
if (std::is_destructible<value_type>::value)
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_copy(oth.begin(), oth.end(), m_RawData);
}
First, let’s get the obvious issue out of the way: you are clearing a queue that you know has to be empty… because it’s just been constructed and nothing’s been put in it. I don’t think you thought this through.
But the real issue here is the test for is_destructible. As mentioned in the design section… this is gibberish.
Okay, that weirdness aside, you have an efficiency issue when you delegate to the default constructor, which allocates a default capacity, and then possibly reallocate with a new size. As with the initializer list constructor, it’s easier just to delegate to the iterator constructor:
constexpr queue(queue const& other) :
queue(other.begin(), other.end())
{}
constexpr queue(queue const& other, Allocator const& alloc) :
queue(other.begin(), other.end(), alloc)
{}
Once again, assuming the iterator constructor is properly written, this should cause no performance penalty.
explicit queue(queue<value_type> &&oth) noexcept : queue() {
if (std::is_destructible<value_type>::value)
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_move(oth.begin(), oth.end(), m_RawData);
}
Yikes, no, this is not how you move containers. You absolutely do not allocate a whole new buffer and then move construct a whole new set of objects there. I’ve already shown how to do a proper noexcept move constructor above.
template <class It> queue(It begin, It end) noexcept : queue() {
assert(begin <= end);
size_type f_size = std::distance(begin, end);
m_CheckOrAlloc(f_size);
m_Size = f_size;
std::uninitialized_copy(begin, end, m_RawData);
}
Now this is an important constructor to get right, because if it’s done well, so many other operations can be built on top of it.
The first problem here is that you don’t seem to understand iterator categories. assert(begin <= end); will only work for random-access or better iterators… and, frankly, it’s a pointless test anyway. But the real sneaky issue is that call to std::distance(). Because you use that, you are restricting the iterator category to forward iterators or better… meaning you can’t fill a queue with data from a file like this:
auto file = std::ifstream{"/path/to/data"};
auto q = queue<int>{std::istream_iterator<int>{file}, std::istream_iterator<int>{}};
// q now has all the ints that were in the data file
If you try the code above, it will compile and run… but the queue will be empty. That’s because istream_iterators are input iterators, which means you get only one pass. You blow through that one pass with std::distance(). Which means that by the time you get to std::uninitialized_copy()… there’s no more data. Your queue is now broken, because it says it has a certain size… but there will be nothing in it but uninitialized gibberish.
What you need to do is check the iterator category. If it’s forward or better, you can use std::distance() to preallocate the buffer. But if it’s just input iterator, then the best you can do is add elements one at a time, reallocating as you go:
template <std::input_iterator It, std::sentinel_for<It> Sen>
constexpr queue(It first, Sen last, Allocator const& alloc = {}) :
m_Alloc{alloc}
{
while (first != last)
push_back(*first++);
}
template <std::forward_iterator It, std::sentinel_for<It> Sen>
constexpr queue(It first, Sen last, Allocator const& alloc = {}) :
m_Alloc{alloc}
{
    auto const n = static_cast<size_type>(std::distance(first, last));
    reserve(n);
    std::ranges::uninitialized_copy(first, last, m_RawData, m_RawData + n);
    m_Size = n;
}
Note that I’m using iterator/sentinel pairs, rather than iterator/iterator pairs. That’s the C++20 way. Also note that I haven’t included any error handling. Let’s call that an exercise for the reader.
explicit queue(const queue<value_type> &&oth) = delete;
?
const_iterator cbegin() const noexcept { return const_iterator(m_RawData); }
const_iterator cend() const noexcept {
return const_iterator(m_RawData + size());
}
These can both just return begin() and end() respectively.
const_reverse_iterator crend() const noexcept { rend(); }
Missing return.
size_type max_capacity() const noexcept {
return std::numeric_limits<size_type>::max();
}
Standard containers don’t have max_capacity()… but they do have max_size(), which you don’t.
Also, assuming you can hold the max value of size_type seems optimistic. A better estimate would probably be std::numeric_limits<size_type>::max() / sizeof(T). But meh, I’ve never seen anyone actually use max_size().
const_pointer data() const { return m_RawData; }
If you’re providing this, you might as well provide the non-const overload.
void clear() requires(std::is_destructible<value_type>::value) {
for (size_type i = 0; i < size(); i++) {
m_AllocTraits.destroy(m_Alloc, std::addressof(m_RawData[i]));
}
m_Size = 0;
}
The constraint makes no sense, as discussed earlier.
The function should be noexcept (especially since it needs to be used in the destructor).
You are not using allocator traits correctly, as discussed earlier.
The addressof() is pointless.
void reserve(size_type cp) { m_CheckOrAlloc(cp); }
It’s good that you’re willing to delegate to helper functions, but…
m_CheckOrAlloc() checks whether the new capacity is larger, and if so delegates to m_Realloc().
m_Realloc() checks whether the new capacity is larger, and if so delegates to m_ReallocAnyway().
m_ReallocAnyway() finally just reallocates unconditionally.
Is all that dancing really necessary?
m_ReallocAnyway() is ONLY ever called from m_Realloc().
m_Realloc() is ONLY ever called from m_CheckOrAlloc().
m_CheckOrAlloc() is called from multiple places (good!)… but… it’s really just reserve().
I think AT MOST all you need is reserve(), and then maybe an internal, unconditional reallocation function. Keep it simple.
void resize(size_type sz) {
m_Size = sz;
m_CheckOrAlloc(sz);
}
This is just completely wrong. You allocate enough capacity, but you never actually construct or destruct any objects. You just set the size. You’re either going to truncate your queue with a bunch of inaccessible objects past the end, or the last few elements in your queue are going to be empty garbage.
void erase(iterator val)
void erase(iterator first, iterator last)
erase() is perhaps the trickiest function in std::vector to properly implement, and your queue is basically std::vector. In the std::vector version of erase() there are basically 2 paths:
erase(p, q), where q is equal to end().
erase(p, q), where q is not equal to end().
And the single-argument form of erase() just delegates to those two paths:
erase(p) where p is end()… just return.
erase(p) where p is end() - 1… go to path 1 above as erase(end() - 1, end()).
erase(p) where p is not end() - 1… go to path 2 above as erase(p, p + 1).
So the single argument version is easy:
void erase(const_iterator p)
{
if (p != end())
erase(p, p + 1);
}
Now, in the 2-argument version, if last is end(), you just destroy everything from first to end(). No biggie:
void erase(const_iterator first, const_iterator last)
{
if (last == end())
{
for (; first != end(); ++first)
// use allocator traits to destroy each element
// set m_Size
}
else
{
// ...
}
}
Simple.
If last is not end(), now you need to do the shifting. Something like:
void erase(const_iterator first, const_iterator last)
{
if (last == end())
{
// ...
}
else
{
std::ranges::move(last, end(), first);
for (; last != end(); ++last)
// use allocator traits to destroy each element
// set m_Size
}
}
You can simplify this, but when you do, you will find that it is the opposite of your implementation: first it does the shift, then it does the destroying. Why? Exception safety. If you destroy all the elements in the middle first, then you try to start the shifting, what happens if an exception is thrown midway through shifting? Now you have a hole of uninitialized memory in the middle of your queue.
void erase(reverse_iterator first, reverse_iterator last)
void erase(reverse_iterator val)
These functions just seem ridiculous. Who’s seriously going to want to erase elements from the queue… backwards? And if they don’t really want to erase elements backwards, but it’s just that they have reverse iterators and want to erase some stuff (but don’t really care about the order), then they can just do:
q.erase(rev_first.base(), rev_last.base());
Don’t put crap in your interface you don’t need. The more you over-complicate the plumbing, the easier it is to clog up the drain.
void erase(const value_type &obj) { erase(std::find(begin(), end(), obj)); }
Notice how the entirety of this function is implementable from the public interface, just as efficiently? Thus you don’t need it.
void rerase(const value_type &obj)
The Scooby-Doo version of erase(), I presume.
iterator find(const value_type &obj)
reverse_iterator rfind(const value_type &obj)
const_iterator find(const_reference obj) const
const_reverse_iterator rfind(const value_type &obj) const
All of these functions are pointless, because you can do the exact same job from the rest of the public interface, with no loss of efficiency. In fact, you can do MORE from the public interface. For example:
auto q1 = queue<std::string>{};
std::ranges::find(q1, "foo"sv); // can search with a string view,
// without having to construct a string
// could also use projections
auto q2 = queue<int>{};
std::find(std::execution::unseq, q2.begin(), q2.end(), 42); // vectorized find
You see, adding more functions to an interface doesn’t make it better. Making it easier to use existing algorithms is what really makes it better. And for that, all you really need are basics: begin(), end(), size() maybe, and so on. Less is more.
void enqueue(const value_type &oth) requires(
std::is_copy_constructible<value_type>::value) {
m_CheckOrAlloc(size());
new (&m_RawData[m_Size++]) value_type(oth);
}
void enqueue(value_type &&oth) requires(
std::is_move_constructible<value_type>::value) {
m_CheckOrAlloc(size());
new (&m_RawData[m_Size++]) value_type(std::move(oth));
}
You’ve actually introduced a subtle bug by being clever here. By incrementing m_Size in the same expression, if the copy/move construction fails, now the queue has the wrong size; the last element will be random garbage.
You shouldn’t increment m_Size until after the creation of the new element succeeds.
A word about using concepts: concepts are still very new technology, and we haven’t really established a good set of rules for when to use them or how. But one idea that seems to be taking root is that you shouldn’t treat concepts as simply assertions on operations. In other words, you shouldn’t say, “well, I’m move-constructing the element here… therefore I should add a move_constructible concept”. When you do that, you are limiting your options. As a somewhat silly example, let’s say that someone has a non-move-constructible type… but that type is noexcept default constructible, and noexcept move-assignable. In that case, you could still implement enqueue(value_type&&)… except if you already blocked it by requiring move construction, now you’re screwed.
That example may have been silly, but there’s actually a real case of it in your code here. Your enqueue(value_type&&) says that it requires move construction… except that it will still work without move construction. If I have a type that is copy constructible, but not move constructible, then move construction will fall back on copy construction. So enqueue(value_type&&) still works fine with types that are not move constructible. Your requires clause prevents perfectly good code from working.
The bottom line is: don’t use concepts unless you know you need them. You don’t need them here. They don’t actually improve anything. Even without the concepts, the functions will fail to compile for types that won’t work.
[[nodiscard]] value_type
dequeue() requires(std::is_destructible<value_type>::value) {
--m_Size;
value_type temp = m_RawData[0];
m_AllocTraits.destory(m_Alloc, std::addressof(m_RawData[0]));
m_ShiftToLeft();
return temp;
}
Now this requires clause is obviously unnecessary, as discussed earlier.
I’m also not keen on the [[nodiscard]] here. It’s not hard to imagine usage scenarios where I want to pop something off the front of the queue, but I don’t really care what it is. Forcing me to care about what I’m taking off the queue seems unnecessarily dictatorial. You should use [[nodiscard]] in situations where ignoring the return value is probably an error. I don’t see that that’s the case here.
You also have some exception safety issues. What happens if the copy construction to temp fails? You’ve already decremented m_Size… which means you’ve “lost” the last element of the queue. Seems safer to not change the size until after you’ve done the dangerous stuff.
(Also, you misspelled “destroy”.)
template <class... Args> void emplace(Args &&...args) {
enqueue(value_type(std::forward<Args>(args)...));
}
Ah, you’ve kinda missed the point of emplacing. The point of emplacing is to construct the new object in place. (Hence, “emplace”.) Not to just construct it somewhere random and then copy it in. Rather than writing emplace() in terms of enqueue(), it would make more sense to do the opposite.
value_type at(size_type index) const
value_type operator[](size_type index) const
These should return a const_reference, not a value_type.
queue<value_type> &operator=(const queue<value_type> &oth) {
if (&oth != this) {
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_copy(oth.begin(), oth.end(), m_RawData);
}
return *this;
}
This is a dangerous way to do copy assignment. If any of the copying at the end fails, you’ve lost the original data. A better way is to use the “copy and swap” idiom.
queue<value_type> &operator=(queue<value_type> &&oth) {
if (&oth != this) {
clear();
m_Size = oth.size();
m_CheckOrAlloc(m_Size);
std::uninitialized_move(oth.begin(), oth.end(), m_RawData);
oth.~queue();
}
return *this;
}
It’s usually possible, and much easier, to implement moving in terms of swapping.
But in any case, explicitly calling the destructor of oth is definitely wrong. Except in very rare, special-case situations, you should never manually call destructors.
queue<value_type> &operator=(const queue<value_type> &&oth) = delete;
Again, what is the purpose of this?
~queue() {
m_Alloc.deallocate(m_RawData, m_Capacity);
std::exchange(m_RawData, nullptr);
std::exchange(m_Size, 0);
}
~queue() requires(std::is_destructible<value_type>::value) {
clear();
m_Alloc.deallocate(m_RawData, m_Capacity);
std::exchange(m_RawData, nullptr);
std::exchange(m_Size, 0);
}
So, let’s set aside the whole is_destructible thing; only the second destructor is correct (sorta).
However, there’s no point in zeroing the data members at the end (and even less point in using std::exchange() to do it). It doesn’t matter if m_RawData is still pointing to now deallocated memory, because m_RawData is about to cease existing anyway. | {
"domain": "codereview.stackexchange",
"id": 40929,
"tags": "c++, beginner, queue"
} |
Inverting the equation for $T_{\mu\nu}$ in terms of $F_{\mu\nu}$ | Question: The Stress-Energy Tensor for electromagnetism is given by:
$$ T_{\mu \nu} = F_{\mu}\,^{\alpha}F_{\nu\alpha}-\frac{1}{4}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta} $$
How can I find $F_{\mu\nu}$ in terms of $T_{\mu\nu}$?
Rewriting the above equation using:
$$ T_{\mu\nu}=- F_{\mu \alpha} g^{\alpha\beta} F_{\beta\nu} + \frac{1}{4} g_{\mu \nu}g^{\alpha\beta}F_{\beta\delta}g^{\delta\gamma} F_{\gamma\alpha}$$
from which we can write the following $4\times4$ matrix equation for the three matrices $T,\,F,\,g$, where $T$ is symmetric, $F$ is anti-symmetric and $g$ is symmetric and invertible:
$$ T = -F g^{-1} F+\frac{1}{4}\left(\mathrm{Tr}\, \left[g^{-1}Fg^{-1}F\right]\right)\,g$$
The only way I can think of is writing down 10 equations (as there are free components in $T^{\mu\nu}$) and then trying to find the 6 unknowns (as there are free components of $F^{\mu\nu}$).
Is there a better way to do this?
Answer: See Edit below, the original answer is not completely correct.
There is no gauge freedom in $F$. $F$ is gauge invariant.
In fact, $F$ is completely measurable. Its components are the electric and magnetic fields, so you just go out with a set of test charges and measure $E$ and $B$ and you've got $F$.
One hint that $T$ and $F$ do not contain the same amount of information is that they have different numbers of independent components. $F$ has 6 independent components as an antisymmetric tensor, while $T$ has 10 as a symmetric one. This isn't a proof of anything, but a hint that they are capturing different things.
If you are working locally (ie, at a point), the simple way to see this explicitly is to use Lorentz transformations. The stress energy tensor has $10$ independent components since it is a symmetric tensor, we can use the $6$ Lorentz transformations to diagonalize $T$. Then we have 4 equations
\begin{eqnarray}
T_{00} &=& \frac{1}{2}\left(E^2 + B^2\right) \\
T_{ii} &=& (E_i^2 - \frac{1}{2}E^2) + (B_i^2 - \frac{1}{2}B^2)
\end{eqnarray}
There is no sum over $i$ implied in the second equation, it's just a quick way of writing the 3 spatial equations.
You can see that there is no way to solve these. For one thing, there are more components in $E$ and $B$ than there are in $T$ in this frame. For another, since the fields appear squared, there is no way to determine the sign of any of the components of $E$ or $B$.
Additionally you can't tell the difference between $E$ and $B$ (ie, given $T_{00}$, who is to say whether you had $E^2=0$ or $B^2=0$ or neither)? This last point is a consequence of the electromagnetic duality: in the absence of matter, the physics of E/M is invariant under $E\rightarrow B$, $B\rightarrow -E$.
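As a quick consistency check (my addition, not part of the original answer), applying the duality $E\rightarrow B$, $B\rightarrow -E$ to the diagonalized components above leaves every one of them unchanged:
$$
\begin{aligned}
T_{00} = \tfrac{1}{2}\left(E^2+B^2\right) &\;\longrightarrow\; \tfrac{1}{2}\left(B^2+(-E)^2\right) = T_{00},\\
T_{ii} = \left(E_i^2-\tfrac{1}{2}E^2\right)+\left(B_i^2-\tfrac{1}{2}B^2\right) &\;\longrightarrow\; \left(B_i^2-\tfrac{1}{2}B^2\right)+\left((-E_i)^2-\tfrac{1}{2}E^2\right) = T_{ii},
\end{aligned}
$$
so no local measurement of $T$ alone can distinguish the two dual field configurations.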
EDIT:
The above is not quite correct in detail (though I think the conclusion is correct). For whatever reason I neglected the fact that there are always 10 components of $T_{\mu\nu}$, so there are always 10 equations, even in the frame in which $T$ is diagonal. In particular, there are also conditions like
\begin{eqnarray}
0 &=& E_x E_y + B_x B_y \\
0 &=& E_x B_y - E_y B_x
\end{eqnarray}
So my counting argument, "There are more variables than equations," was incorrect. This fits with the idea that $T$ has more components than $E$--if anything based on counting you would think that computing $T$ from $E$ was the harder thing to do. (In fact this is generically true--the stress energy tensors you get from field theory are not the most general stress energy tensors you can write down. There are plenty of stress energy tensors you can write down that won't come from a lagrangian).
The real reason this won't work, as far as I can tell, is the electromagnetic duality as well as the fact that everything is squared. There just isn't a way to distinguish $E$ from $B$ if you write out all the components. In other words, the duality means that the equations have a degeneracy, so there are fewer equations than it naively appears, so you can't solve for all the components.
On the other hand, if you know $T$ everywhere, not just locally, that is a totally different story. That's because (1) if you know $T$ everywhere you can differentiate it, and (2) $\partial_\mu T^{\mu\nu}=0$ is just Maxwell's equations $\partial_\mu T^{\mu\nu}=\partial_\mu F^{\mu\nu}$, possibly up to an overall factor. So then, up to the usual caveats about needing to know the boundary conditions, if you know $T$ everywhere you can solve Maxwell's equations to obtain $F$.
Moral: don't believe everything you read on the internet. | {
"domain": "physics.stackexchange",
"id": 14373,
"tags": "field-theory, tensor-calculus, stress-energy-momentum-tensor"
} |
how to write subscriber and publisher for webcam? | Question:
Actually, I am new to ROS and its features. I want to know the code for subscriber and publisher for taking picture from my integrated laptop webcam and store in the hard drive. I am familiar with gscam feature to take pictures from my webcam.
But i want to use it as subscriber and publisher.
please help.
Originally posted by Prashant Kumar on ROS Answers with karma: 25 on 2015-06-22
Post score: 0
Answer:
See here:
http://wiki.ros.org/usb_cam
It is a V4L USB camera driver.
Install the package:
sudo apt-get install ros-indigo-usb-cam
source PATH_TO_YOUR_WS/devel/setup.bash
rosrun usb_cam usb_cam_node _video_device:=/dev/video0
and you have a video publisher
and some more code
https://rosstitchernode.wordpress.com/2014/06/05/ros-and-opencv-with-usb_cam/
Install UVC
sudo apt-get install ros-indigo-libuvc ros-indigo-libuvc-camera
sudo echo '# UVC cameras
SUBSYSTEMS=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="04f2", ATTRS{idProduct}=="b2eb", MODE="0666"' > /etc/udev/rules.d/99-uvc.rules
where the IDs (idVendor and idProduct) are those of your webcam
sudo rmmod snd-usb-audio
sudo rmmod uvcvideo
sudo udevadm trigger
Test
source PATH_TO_YOUR_WS/devel/setup.bash
roscore &
rosrun libuvc_camera camera_node &
rosrun image_view image_view image:=/image_raw
Originally posted by duck-development with karma: 1999 on 2015-06-22
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Prashant Kumar on 2015-06-23:
doesn't help..
can you give me a sample code for it?
Comment by duck-development on 2015-06-23:
http://pharos.ece.utexas.edu/wiki/index.php/How_to_Use_a_Webcam_in_ROS_with_the_usb_cam_Package
there is some exampel code for the usage ov usb_cam
Comment by Prashant Kumar on 2015-06-24:
if you are talking about image transport then I know it.. and I can use gscam for taking snaps but not as subscriber and publisher.
Rather, I want to take snaps from my webcam by subscriber and publisher. which is getting me in trouble.
Comment by Prashant Kumar on 2015-06-26:
in the link that you gave in the above comment,
"rosmake --rosdep-install" is not working.
secondly, it's not the code for subscriber and publisher, it's just a launch file code.
Comment by duck-development on 2015-06-26:
Say exactly what you would like to have: a program which publishes pictures? Write yourself a program which reads the camera and publishes the pictures.
For the subscriber:
you would like only to see the picture
you would like to work with the picture
you would like to trigger a photo with your subscriber and then process the picture
Comment by Prashant Kumar on 2015-06-27:
thanks for updated answer..but I am getting error as follows when I run the launch file:-
"[ERROR] [1435430293.861984376]: Webcam: expected picture but didn't get it..."
I can see my camera gets activated but no image is shown in the pop up window..its just full black..
any suggestions?
Comment by Prashant Kumar on 2015-06-27:
Oh I got it. I just had to change the pixel format.
thanks for the help.
can you please tell me how to do the same using uvc_cam/libuvc_cam. In this case also my camera gets activated but no image is there in the window. | {
"domain": "robotics.stackexchange",
"id": 21983,
"tags": "ros, gscam, webcam"
} |
Trying to install dynamixel_controllers in Fuerte getting error | Question:
I'm attempting to install the dynamixel_controllers package in Fuerte. This is a fresh installation of Fuerte on a 12.04 machine. I'm receiving this error after downloading the package and typing "rosmake dynamixel_controllers" I receive this error:
[ rosmake ] rosmake starting...
[ rosmake ] Packages requested are: ['dynamixel_controllers']
[ rosmake ] Logging to directory /home/joe/.ros/rosmake/rosmake_output-20130719-115348
[ rosmake ] Expanded args ['dynamixel_controllers'] to: ['dynamixel_controllers']
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-1] Starting >>> rostest [ make ]
[rosmake-0] Finished <<< roslang  No Makefile in package roslang
[rosmake-2] Starting >>> actionlib_msgs [ make ]
[rosmake-1] Finished <<< rostest  No Makefile in package rostest
[rosmake-0] Starting >>> rospy [ make ]
[rosmake-3] Starting >>> roscpp [ make ]
[rosmake-2] Finished <<< actionlib_msgs  No Makefile in package actionlib_msgs
[rosmake-4] Starting >>> diagnostic_msgs [ make ]
[rosmake-1] Starting >>> std_msgs [ make ]
[rosmake-2] Starting >>> trajectory_msgs [ make ]
[rosmake-0] Finished <<< rospy  No Makefile in package rospy
[rosmake-0] Starting >>> dynamixel_msgs [ make ]
[rosmake-4] Finished <<< diagnostic_msgs  No Makefile in package diagnostic_msgs
[rosmake-5] Starting >>> control_msgs [ make ]
[rosmake-3] Finished <<< roscpp  No Makefile in package roscpp
[rosmake-3] Starting >>> actionlib [ make ]
[rosmake-1] Finished <<< std_msgs  No Makefile in package std_msgs
[rosmake-4] Starting >>> dynamixel_driver [ make ]
[rosmake-2] Finished <<< trajectory_msgs  No Makefile in package trajectory_msgs
[rosmake-3] Finished <<< actionlib  No Makefile in package actionlib
[rosmake-5] Finished <<< control_msgs  ROS_NOBUILD in package control_msgs  No Makefile in package control_msgs
[rosmake-4] Finished <<< dynamixel_driver [PASS] [ 0.57 seconds ]
[rosmake-0] Finished <<< dynamixel_msgs [PASS] [ 1.08 seconds ]
[rosmake-0] Starting >>> dynamixel_controllers [ make ]
[ rosmake ] Last 40 lines (dynamixel_controllers: 3.6 sec)
[ 1 Active 12/13 Complete ]
{-------------------------------------------------------------------------------
    valid_packages = valid_packages + rospack.get_depends(package_context, implicit=True)
TypeError: can only concatenate list (not "set") to list
make[3]: *** [../srv_gen/lisp/StartController.lisp] Error 1
make[3]: *** [../srv_gen/lisp/SetTorqueLimit.lisp] Error 1
Traceback (most recent call last):
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 873, in <module>
    generate_srv(sys.argv[1])
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 822, in generate_srv
    write_srv_component(s, spec.request, spec)
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 697, in write_srv_component
    write_md5sum(s, spec, parent)
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 596, in write_md5sum
    compute_files=False)
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/gentools.py", line 314, in get_dependencies
    _add_msgs_depends(rospack, spec.response, deps, package)
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/gentools.py", line 75, in _add_msgs_depends
    valid_packages = valid_packages + rospack.get_depends(package_context, implicit=True)
TypeError: can only concatenate list (not "set") to list
Traceback (most recent call last):
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 873, in <module>
    generate_srv(sys.argv[1])
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 822, in generate_srv
    write_srv_component(s, spec.request, spec)
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 697, in write_srv_component
    write_md5sum(s, spec, parent)
  File "/opt/ros/fuerte/share/roslisp/rosbuild/scripts/genmsg_lisp.py", line 596, in write_md5sum
    compute_files=False)
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/gentools.py", line 314, in get_dependencies
    _add_msgs_depends(rospack, spec.response, deps, package)
  File "/opt/ros/fuerte/lib/python2.7/dist-packages/roslib/gentools.py", line 75, in _add_msgs_depends
    valid_packages = valid_packages + rospack.get_depends(package_context, implicit=True)
TypeError: can only concatenate list (not "set") to list
make[3]: *** [../srv_gen/lisp/SetCompliancePunch.lisp] Error 1
make[3]: *** [../srv_gen/lisp/SetSpeed.lisp] Error 1
make[3]: Leaving directory `/home/joe/fuerte/sandbox/trunk/dynamixel_controllers/build'
make[2]: *** [CMakeFiles/ROSBUILD_gensrv_lisp.dir/all] Error 2
make[2]: Leaving directory `/home/joe/fuerte/sandbox/trunk/dynamixel_controllers/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/joe/fuerte/sandbox/trunk/dynamixel_controllers/build'
[ rosmake ] Output from build of package dynamixel_controllers written to:
[ rosmake ] /home/joe/.ros/rosmake/rosmake_output-20130719-115348/dynamixel_controllers/build_output.log
[rosmake-0] Finished <<< dynamixel_controllers [FAIL] [ 3.63 seconds ]
[ rosmake ] Halting due to failure in package dynamixel_controllers.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 13 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/joe/.ros/rosmake/rosmake_output-20130719-115348
Originally posted by joe.s on ROS Answers with karma: 162 on 2013-07-19
Post score: 0
Answer:
I see that you are using Ubuntu 12.04, there should be packages available for dynamixel_motor stack on Fuerte. Any reason you are trying to compile from source?
Originally posted by arebgun with karma: 2121 on 2013-07-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by joe.s on 2013-07-22:
When I did a search using the ROS search bar, that was the package that came up. I did not see the dynamixel_motor stack until after troubleshooting.
I have since found it but haven't confirmed that things are working properly yet. I will post back once I have made sure everything is working properly. | {
"domain": "robotics.stackexchange",
"id": 14982,
"tags": "dynamixel, ros-fuerte"
} |
Pubish custom odometry information for robot_localization | Question:
Hello everyone. I am trying to publish custom odometry information for the robot_localization package. I have a sensor which gives me an x and y position, and I want to use that to locate my robot. Since I want to fuse this odometry information with the robot's wheel odometry, other fields like Twist can be zero, as I will use the robot's own odometry for that purpose. My sensor records the positioning information in a .txt file, and I will be reading those values to use. How should I modify things to get this running?
#!/usr/bin/env python
# license removed for brevity
import rospy
import time
from std_msgs.msg import String
from nav_msgs.msg import Odometry
def odom1():
    global x
    global y
    pub = rospy.Publisher('odom1', Odometry, queue_size=10)
    rospy.init_node('publisher_turtlebot', anonymous=True)
    rate = rospy.Rate(10) # 10hz
    while not rospy.is_shutdown():
        f = open("x.txt", "r")
        x = f.readlines()
        f1 = open("y.txt", "r")
        y = f1.readlines()
        odom1 = Odometry()
        odom1.header.stamp = rospy.Time.now()
        odom1.header.frame_id = "odom"
        odom1.pose.pose.position.x = x
        odom1.pose.pose.position.y = y
        pub.publish(odom1())
        rate.sleep()

if __name__ == '__main__':
    try:
        odom1()
    except rospy.ROSInterruptException:
        pass
One file holds x and the other holds y. I am getting this error:
Error: Odometry object is not callable.
What should I do? I am using Kinetic Kame with Ubuntu 16.04 and Python.
Originally posted by enthusiast.australia on ROS Answers with karma: 91 on 2019-09-17
Post score: 0
Original comments
Comment by PeteBlackerThe3rd on 2019-09-18:
It sounds like you're trying to integrate a GPS like sensor, those also produce position information but without heading information. Have you looked at the instructions for GPS here?
Answer:
You can take a look here, you can publish it as either Odometry message or as PoseWithCovarianceStamped. According to this frame_id should be map/odom and child_link should be base_link.
EDIT:
You need to make an odom object of type Odometry() and add information to it.
odom = Odometry()
odom.header.stamp = rospy.Time.now()
odom.header.frame_id = "odom"
odom.child_frame_id = "base_link"
odom.pose.pose.position.x = x # x you read through your text file
odom.pose.pose.position.y = y # y you read through your text file
pub.publish(odom)
Originally posted by Choco93 with karma: 685 on 2019-09-17
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by enthusiast.australia on 2019-09-17:
My frame id would be odom and child_link be base_link, But i am not sure how to add up this info. Should i format my data and the way it is recording or should i change my code? In either way, what should i do ?
because if i run above code, i get this error.
Invalid number of arguments., arguments should be [header, child_frame_id, pose, twist] args are [x:1.25254689 y:1.23585785 theta=0]
In case of using Odometry or PoseWithCovarianceStamped, then what things should i do so that i could use it in robot_localization?
Comment by Choco93 on 2019-09-18:
You cannot just read a text file and pass it to a publisher, when making publisher you specifically told it to expect Odometry type message, so you need to provide an Odometry message. Take a look at my edited answer.
Comment by enthusiast.australia on 2019-09-18:
I have changed my question and also have updated the code, but i am still having the problem. Any suggestions??
Comment by Choco93 on 2019-09-18:
change pub.publish(odom1()) to pub.publish(odom1)
Comment by enthusiast.australia on 2019-09-18:
thanks. got it. Silly mistake | {
"domain": "robotics.stackexchange",
"id": 33778,
"tags": "navigation, odometry, ros-kinetic, ubuntu, robot-localization"
} |
How can a Turing machine write the description of the n-th Turing machine? | Question: I am trying to interpret the following problem:
"Describe an algorithm for a Turing machine which receives the integer n as
input and proceeds to write the description of the n-th Turing machine from
the standard enumeration on its tape."
I am confused about how a Turing machine itself can represent another Turing machine (the n-th Turing machine).
Answer: The question is probably assuming that there is some arbitrary but agreed-upon convention that specifies how to write a Turing machine description with the symbols of some alphabet. Since all Turing machines have by definition a finite description, all valid outputs will be finite strings.
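To make this concrete, here is a toy sketch (my own, not from the question's course) of how a "standard enumeration" can work: fix an encoding alphabet, generate all strings in length-lexicographic order, keep only those that some validity check accepts as machine descriptions, and return the n-th survivor. The `is_valid_description` predicate is a stand-in for whatever syntactic convention is actually agreed upon.

```python
from itertools import count, product

ALPHABET = "01"  # assumed encoding alphabet

def is_valid_description(s):
    # Hypothetical syntactic check; a real convention would parse the string
    # as, say, a list of transition tuples. Here: nonempty and even length.
    return len(s) > 0 and len(s) % 2 == 0

def nth_machine(n):
    """Return the description of the n-th machine (1-indexed) in the
    standard (length-lexicographic) enumeration."""
    seen = 0
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            s = "".join(chars)
            if is_valid_description(s):
                seen += 1
                if seen == n:
                    return s

print(nth_machine(1))  # "00" under this toy convention
```

Since every valid description is a finite string that eventually appears in the length-lexicographic stream, the search always terminates.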
The exercise itself is an ordinary combinatorics problem. | {
"domain": "cs.stackexchange",
"id": 8277,
"tags": "turing-machines, computation-models"
} |
How to transform free field Hamiltonian from position to momentum space? | Question: I'm reading Srednicki's Quantum Field Theory.
The equation (3.1) says
$$
H=\int\mathrm{d}^3xa^\dagger(\boldsymbol{x})\left(-\frac{1}{2m}\nabla^2\right)a(\boldsymbol{x})
$$
will be transformed into
$$
H=\int\mathrm{d}^3p\frac{1}{2m}\boldsymbol{p}^2\tilde{a}^\dagger(\boldsymbol{p})\tilde{a}(\boldsymbol{p})
$$
using eq. (3.2):
$$\tilde{a}(\boldsymbol{p})=\int\frac{\mathrm{d}^3x}{(2\pi)^{3/2}}\mathrm{e}^{-\mathrm{i}\boldsymbol{p}\cdot\boldsymbol{x}}a(\boldsymbol{x})$$
I want to show this.
I tried to transform lower equation by substituting (3.2) and $\tilde{a}^\dagger(\boldsymbol{p})=\int\frac{\mathrm{d}^3x}{(2\pi)^{3/2}}\mathrm{e}^{-\mathrm{i}\boldsymbol{p}\cdot\boldsymbol{x}}a^\dagger(\boldsymbol{x})$, and canonical quantization $\boldsymbol{p}\to-\mathrm{i}\nabla,(\hbar\equiv1)$. I got equation
$$
H=\frac{-1}{2m\cdot(2\pi)^3}\int\mathrm{d}^3p\ \nabla^2\int\mathrm{d}^3x\mathrm{e}^{-\mathrm{i}\boldsymbol{p}\cdot\boldsymbol{x}}a^\dagger(\boldsymbol{x})\int\mathrm{d}^3x^\prime\mathrm{e}^{-\mathrm{i}\boldsymbol{p}\cdot\boldsymbol{x}^\prime}a(\boldsymbol{x}^\prime).
$$
I predicted Dirac's delta function and Leibniz's law will be needed for transformation. Though, I couldn't find what to do specifically.
What specific steps should I take next?
Answer: Start with the inverse Fourier transform:
$$
a(\mathbf x)=\int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}e^{i\mathbf{p\cdot x}}\ \tilde a(\mathbf p),
\qquad
a^\dagger(\mathbf x)=\int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}e^{-i\mathbf{p\cdot x}}\ \tilde a^\dagger(\mathbf p)
\\H=\int\mathrm{d}^3x\ a^\dagger(\mathbf{x})\left(-\frac{1}{2m}\nabla^2\right)a(\mathbf{x})
\\=\int\mathrm{d}^3x\ \int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}e^{-i\mathbf{p\cdot x}}\ \tilde a^\dagger(\mathbf p)\left(-\frac{1}{2m}\nabla^2\right)\int\frac{\mathrm d^3 q}{(2\pi)^{3/2}}e^{i\mathbf{q\cdot x}}\ \tilde a(\mathbf q)
\\=\int\mathrm{d}^3x\ \int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}\int\frac{\mathrm d^3 q}{(2\pi)^{3/2}}e^{-i\mathbf{p\cdot x}}\ \tilde a^\dagger(\mathbf p)\left(-\frac{1}{2m}\nabla^2\right)e^{i\mathbf{q\cdot x}}\ \tilde a(\mathbf q)
\\=\int\mathrm{d}^3x\ \int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}\int\frac{\mathrm d^3 q}{(2\pi)^{3/2}}e^{-i\mathbf{p\cdot x}}\ \tilde a^\dagger(\mathbf p)\left(\frac{\mathbf q^2}{2m}\right)e^{i\mathbf{q\cdot x}}\ \tilde a(\mathbf q)
\\=\int\mathrm{d}^3x\ \int\frac{\mathrm d^3 p}{(2\pi)^{3/2}}\int\frac{\mathrm d^3 q}{(2\pi)^{3/2}}\ \tilde a^\dagger(\mathbf p)\left(\frac{\mathbf q^2}{2m}\right)e^{i\mathbf{(q-p)\cdot x}}\ \tilde a(\mathbf q)
\\=\int\mathrm d^3 p\int\mathrm d^3 q\ \tilde a^\dagger(\mathbf p)\left(\frac{\mathbf q^2}{2m}\right)\delta(\mathbf{q-p})\ \tilde a(\mathbf q)
\\=\int\mathrm d^3 p\ \tilde a^\dagger(\mathbf p)\left(\frac{\mathbf p^2}{2m}\right)\ \tilde a(\mathbf p)
$$
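As a numerical sanity check (my addition, not part of the original answer), the same diagonalization can be seen on a discrete periodic grid: the unitary FFT plays the role of the Fourier transform, and the lattice Laplacian is diagonal in the FFT basis with eigenvalues $4\sin^2(\pi k/N)$ standing in for $\mathbf p^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 64, 1.0
a = rng.normal(size=N) + 1j * rng.normal(size=N)  # random complex "field" on a periodic grid

# Position space: H = sum_x a*(x) (-1/2m Laplacian) a(x), periodic lattice Laplacian
lap = np.roll(a, -1) - 2 * a + np.roll(a, 1)
H_x = np.sum(a.conj() * (-lap)).real / (2 * m)

# Momentum space: the unitary FFT diagonalizes -Laplacian, eigenvalue 4 sin^2(pi k / N)
a_tilde = np.fft.fft(a, norm="ortho")  # norm="ortho" makes the transform unitary
k = np.arange(N)
H_p = np.sum(4 * np.sin(np.pi * k / N) ** 2 * np.abs(a_tilde) ** 2) / (2 * m)

assert np.isclose(H_x, H_p)
```

The unitary (`norm="ortho"`) convention is the discrete analogue of the symmetric $(2\pi)^{3/2}$ normalization used above.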
The key step is the use of the delta function identity
$$
\delta(\mathbf k)=\int\frac{\mathrm d^3 x}{(2\pi)^3}\ e^{i\mathbf{k\cdot x}}
$$ | {
"domain": "physics.stackexchange",
"id": 79190,
"tags": "quantum-field-theory, fourier-transform, hamiltonian"
} |
What are chemical bonds made of, and how do they form? | Question: All atoms are linked together by chemical bonds, but what are the bonds themselves made of? I'm sorry that's a stupid question, but I'm only 14, and haven't entered high school yet, and my science class isn't really teaching me much I haven't learned. So please try to simplify your answer too.
Answer: I'll assume that you know that charges of opposite sign attract each other and charges of like sign repel each other. Every atom consists of a tiny, positively charged nucleus and one or more negatively charged electrons (such that the overall charge is balanced to neutral).
Let us talk through what happens when two hydrogen atoms come close to each other. The (single) electron of the first atom will be attracted to its "own" nucleus, but it will also be attracted to the second nucleus. The same happens vice versa. However, there is also repulsion between the electrons. It turns out, due to fundamental properties of the electrons (they are smeared out over space, you may have heard that in some experiments, they are like a wave), the repulsion between the electrons is a little less than their attraction to the nuclei. There is also nucleus-nucleus repulsion, but they are not that close.
Overall, it is a balance. When the distance between the nuclei becomes too short (the bond becomes too short), the repulsion between the nuclei will increase and the electrons will be pushed into too small a space, also leading to more repulsion between them. When the nucleus-nucleus distance becomes too large, the nuclei-electron attraction will become a little less, but the repulsive forces far more so, leading to a net attraction, which pulls the bond back together.
Of course, it is more complicated than what I have written here. In reality, electrons are indistinguishable from each other and do not really belong to a particular nucleus. In order to understand why hydrogen and hydrogen form a chemical bond, but helium and helium do not, some quantum physics (or at least results thereof) must become involved. | {
"domain": "chemistry.stackexchange",
"id": 13836,
"tags": "everyday-chemistry"
} |
order of features importance after make_column_transformer and pipeline | Question: I have a data preparation and model fitting pipeline that takes a dataframe (X_trn) and uses the ‘make_column_transformer’ and ‘Pipeline’ functions in sklearn to prepare the data and fit XGBRegressor.
The code looks something like this
xgb = XGBRegressor()
preprocessor = make_column_transformer(
( Fun1(),List1),
( Fun2(),List2),
remainder='passthrough',
)
model_pipeline = Pipeline([
('preprocessing', preprocessor),
('classifier', xgb )
])
model_pipeline.fit(X_trn, Y_trn)
Therefore, the training data fed into the XGBRegressor has no column labels, and its columns are reordered by the make_column_transformer function. Given this, how do I extract the feature importances using the XGBRegressor.get_booster().get_score() method?
Currently, the output of get_score() is a dictionary that looks like this:
{'f0': 123,
 'f10': 222,
 'f100': 334,
 'f101': 34,
 ...
 'f99': 12}
Can I assume that the order of the features provided by get_score() is identical to the order of features after the make_column_transformer function (i.e., I have to account for the feature reordering), such that 'f0' == 1st feature after make_column_transformer, 'f1' == 2nd feature after make_column_transformer, etc.?
Answer: Your assumption is correct. After the column transformation, columns usually lose their names and get default identifiers corresponding to their order.
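To make the mapping concrete, here is a small sketch (my own, not from the original answer): if you can recover the post-transform column order (in recent scikit-learn versions something like `preprocessor.get_feature_names_out()` after fitting, or by assembling `List1 + List2 +` the passthrough columns by hand in older ones), then `'f<i>'` is simply the i-th entry of that list. The names and scores below are made up.

```python
def name_importances(scores, feature_names):
    """Map XGBoost's {'f0': ..., 'f1': ...} dict onto the ordered list of
    column names that came out of the ColumnTransformer."""
    return {feature_names[int(key[1:])]: value for key, value in scores.items()}

# Toy example with hypothetical post-transform column names
scores = {"f0": 123, "f2": 334, "f1": 222}
names = ["age_scaled", "city_encoded", "income"]
print(name_importances(scores, names))
# {'age_scaled': 123, 'income': 334, 'city_encoded': 222}
```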
Additional Info:
You may try Eli5
from eli5 import show_weights,show_prediction
show_weights(model)
show_prediction(model,data_point)
The latter function shows the impact of each feature when predicting a data_point. | {
"domain": "datascience.stackexchange",
"id": 6919,
"tags": "xgboost, features, pipelines"
} |
Peskin and Schroeder confusion on promoting Classical Klein-Gordon equation to quantum field equation | Question: I am reading "An introduction to quantum field theory" by Peskin and Schroeder and I am confused. I appreciate your help.
Here's the context to my question: In chapter 2, the book introduces quantum field theory by first talking about classical field theory and Klein-Gordon equation. The book expands a typical classical field in momentum space as,
\begin{equation}
\phi(x, t)=\int\frac{d^3p}{(2\pi)^3}e^{ipx}\phi(p, t).
\end{equation}
The classical Klein-Gordon equation in momentum space becomes (equation (2.21)),
\begin{equation}
\left[\frac{\partial^2}{\partial t^2} + (|p|^2 + m^2)\right]\phi(p, t) = 0, \tag{2.21}
\end{equation}
which is the equation of motion of a (classical) simple harmonic oscillator with frequency $\omega_p=\sqrt{|p|^2 + m^2 }$. The book then claims that the solution to quantum harmonic oscillator is well known (equation (2.23)),
\begin{equation}
\phi = \frac{1}{\sqrt{2\omega}}(a + a^{\dagger}).\tag{2.23}
\end{equation}
Where $a$ is the ladder operator. If I understand the book correctly, it finally claims that if we use the solutions of quantum harmonic oscillator as the solution to Klein-Gordon equation in momentum space, i.e. $\phi(p,t) = \frac{1}{\sqrt{2\omega_p}}(a_p + a_p^\dagger)$, we obtain the solution to the quantum Klein-Gordon equation. Such solution satisfies the commutation relation automatically so I can see where the author is going with this idea. The solution to quantum Klein-Gordon equation is then (equation (2.25)),
\begin{equation}
\phi(x) = \int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}(a_pe^{ipx} + a^\dagger_pe^{-ipx})\tag{2.25}
\end{equation}
Here's my question: Claiming equation (2.23) as the solution to the equation (2.21) is a bit non-rigorous, don't you think? Put it this way, how can we prove that,
\begin{equation}
\left[\frac{\partial^2}{\partial t^2} + (|p|^2 + m^2)\right]\frac{1}{\sqrt{2\omega_p}}(a_p + a_p^{\dagger}) = 0 ???
\end{equation}
The disconnect I feel here stems from the fact that although expanding $\phi$ in ladder operators is useful for finding the eigenstates (and eigenvalues) of the harmonic oscillator Hamiltonian, equation (2.21) is just a differential equation that, at first glance, isn't about finding eigenstates. Operationally, finding eigenstates is so different from solving a differential equation that I think a mathematical proof is needed before we can claim that the solution of the quantum harmonic oscillator is also a solution of the equation of motion of the classical harmonic oscillator. I would kindly ask if you can provide a proof of my final equation to fill the gap in knowledge I currently have while reading the book.
Answer: Please first see this reference about solving the quantum simple harmonic oscillator with raising and lowering operators.
Next, realize that we are dealing with a free field theory, and the Hamiltonian (which governs the dynamics) is the free field Hamiltonian:
$$
H = \sum_\vec p \omega_p a^\dagger_{\vec p }a_{\vec p}\;,
$$
which is clearly a sum over individual simple harmonic oscillator Hamiltonians (one for each $\vec p$ value). (To put it another way, the free field is just a bunch of uncoupled simple harmonic oscillators.)
In quantum mechanics, the time dependence of an operator in the Heisenberg picture (or, once we add interactions, we will call this the interaction picture) is given by:
$$
a_{\vec p}(t) = e^{iHt}a_{\vec p}e^{-iHt}
$$
Taking the time derivative we find, as usual:
$$
\dot a_{\vec p} = i[H, a_{\vec p}(t)] = -i\omega_{\vec p}a_{\vec p}(t)\;.
$$
Take a second time derivative to find:
$$
\ddot a_{\vec p} = -\omega_p^2 a_{\vec p}\;.
$$
Or, using $\omega_{\vec p}^2 = |\vec p|^2 + m^2$ and rearranging:
$$
\ddot a_{\vec p} + (|\vec p|^2 + m^2)a_{\vec p} = 0\;,
$$
just like we want.
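If you want to see the algebra work outside of pen and paper, here is a quick numerical check (my own addition) using matrices for a single mode truncated to N levels. Because $H = \omega\, a^\dagger a$ is diagonal in the number basis, the commutator $[H, a] = -\omega a$ holds exactly even in the truncated basis, and iterating it gives the $\ddot a = -\omega^2 a$ statement.

```python
import numpy as np

N, omega = 8, 2.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator, N-level truncation
H = omega * (a.conj().T @ a)                # H = omega a^dag a, diagonal: diag(omega n)

comm = H @ a - a @ H                        # [H, a]
assert np.allclose(comm, -omega * a)        # so  a_dot = i[H, a] = -i omega a

comm2 = H @ comm - comm @ H                 # [H, [H, a]] = omega^2 a
assert np.allclose(comm2, omega**2 * a)     # so  a_ddot = i[H, a_dot] = -omega^2 a
```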
I am quite sure you can work out the analogous relation for $a^\dagger_{\vec p}$ on your own. | {
"domain": "physics.stackexchange",
"id": 89772,
"tags": "quantum-field-theory, fourier-transform, klein-gordon-equation"
} |
2D convolution in matlab | Question: function C = convolve_slow(A,B)
(file name is accordingly convolve_slow.m )
This routine performs convolution between an image A and a mask B.
Input: A - a grayscale image (values in [0,255])
B - a grayscale image (values in [0,255]) serves as a mask in the convolution.
Output: C - a grayscale image (values in [0,255]) - the output of the convolution.
C is the same size as A.
Method: Convolve A with mask B using zero padding. Assume the origin of B is at
floor(size(B)/2)+1.
Do NOT use matlab convolution routines (conv,conv2,filter2 etc).
Make the routine as efficient as possible: Restrict usage of for loops which
are expensive (use matrix multiplications and matlab routines such as dot etc).
To simplify and reduce ifs, you should pad the image with zeros before starting your convolution loop.
Do not assume the size of A nor B (B might actually be larger than A sometimes).
Here is my solution for this exercise. Please elaborate on any change you make or suggest, since I'm new to MATLAB and image processing.
function [ C ] = convolve_slow( A,B )
%This routine performs convolution between an image A and a mask B.
%
% Input: A - a grayscale image (values in [0,255])
% B - a grayscale image (values in [0,255]) serves as a mask in the convolution.
% Output: C - a grayscale image (values in [0,255]) - the output of the convolution.
% C is the same size as A.
%
% Method: Convolve A with mask B using zero padding. Assume the origin of B is at floor(size(B)/2)+1.
% init C to size A with zeros
C = zeros(size(A));
% make b xy-reflection and vector
vectB = reshape(flipdim(flipdim(B,1),2)' ,[] , 1);
% padding A with zeros
paddedA = padarray(A, [floor(size(B,1)/2) floor(size(B,2)/2)]);
% Loop over A matrix:
for i = 1:size(A,1)
    for j = 1:size(A,2)
        startAi = i;
        finishAi = i + size(B,1) - 1;
        startAj = j;
        finishAj = j + size(B,2) - 1;
        vectPaddedA = reshape(paddedA(startAi:finishAi, startAj:finishAj)', 1, []);
        C(i,j) = vectPaddedA * vectB;
    end
end
end
Answer: Rather than making four variables just to index the matrix, I would use two, like this:
paddedA(i :i_end,j:j_end) | {
"domain": "codereview.stackexchange",
"id": 2885,
"tags": "matlab"
} |
Can magnetic flux be negative | Question: I am studying magnetic flux linkage in an AC generator and it appears that the magnetic flux linkage is negative half the time; how can this be? Also, with Lenz's law, why is the emf defined as negative when the magnetic flux is increasing, and how does this relate to the direction of the current?
Answer: Yes, magnetic flux can be negative. It just depends on where the field is going. Say there is a sheet and magnetic field is going through it from front to the back, we can call the flux there as positive and negative when it's the other way round.
It is pretty clear from the statement of Lenz's Law why the induced emf is taken as negative: an induced electromotive force (emf) always gives rise to a current whose magnetic field opposes the original change in magnetic flux. (Wikipedia)
Basically, the magnetic field produced due to the induced current opposes the magnetic flux producing the current itself. | {
"domain": "physics.stackexchange",
"id": 12962,
"tags": "electromagnetism, electricity"
} |
Why do meteors explode? | Question: A report on the Chelyabinsk meteor event earlier this year states
Russian meteor blast injures at least 1,000 people, authorities say
My question is
Why do meteors explode?
Do all meteors explode?
Answer: Meteoroids come in a very large range of sizes, from specks of dust to many-kilometer-wide boulders. Explosions like that of the Chelyabinsk meteor are only found in meteors that are larger than a few meters in size but smaller than a kilometer.
Though the details are argued endlessly by those who study such phenomena (it is very hard to get good data when you don't know when/where the next meteor will occur), the following qualitative description gets much of the important ideas across.
The basic idea is that the enormous entry velocity into the atmosphere (on the order of $15\ \mathrm{km/s}$) places the object under quite a lot of stress. The headwind places a very large pressure in front of it, with comparatively little pressure behind or to the sides. If the pressure builds up too much, the meteor will fragment, with pieces distributing themselves laterally. This is known as the "pancake effect."
As a result, the collection of smaller pieces has a larger front-facing surface area, causing even more stresses to build up. In very short order, a runaway fragmentation cascade disintegrates the meteor, depositing much of its kinetic energy into the air all at once.
This is discussed in [1] in relation to the Tunguska event. That paper also gives some important equations governing this process. In particular, the drag force has magnitude
$$ F_\mathrm{drag} = \frac{1}{2} C_\mathrm{D} \rho_\mathrm{air} A v^2, $$
where $C_\mathrm{D} \sim 1$ is the geometric drag coefficient, $\rho_\mathrm{air}$ is the density of air, $A$ is the meteor's cross-sectional area, and $v$ is its velocity. Also, the change in mass due to ablation is
$$ \dot{m}_\text{ablation} = -\frac{1}{2Q} C_\mathrm{H} \rho_\mathrm{air} A v^3, $$
where $Q$ is the heat of ablation (similar to the heat of vaporization) of the material and $C_\mathrm{H}$ is the heat transfer coefficient. Since the mass-loss rate scales as $A \sim m^{2/3}$, sublinearly with mass, smaller objects will entirely ablate faster, setting a lower limit on the size of a meteor that can undergo catastrophic fragmentation before being calmly ablated.
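For a feel of the magnitudes involved (my own back-of-the-envelope numbers, not from the cited paper): the stagnation pressure $\sim \frac{1}{2}\rho_\mathrm{air} v^2$ during a Chelyabinsk-like entry exceeds the strength of stony material by orders of magnitude, which is why fragmentation wins once the meteor reaches denser air.

```python
rho_air = 1.2    # kg/m^3 at sea level; lower at the ~30 km burst altitude
v = 19e3         # m/s, roughly the Chelyabinsk entry speed
strength = 1e7   # Pa; ~10 MPa is already generous for a stony meteoroid

q = 0.5 * rho_air * v**2          # peak stagnation pressure, about 2.2e8 Pa
print(f"{q:.2e} Pa vs {strength:.0e} Pa material strength")
assert q > 10 * strength          # pressure beats strength by more than 10x
```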
Meteors that are too big, on the other hand, will cross the depth of the atmosphere and crash into the ground before a pressure wave (traveling at the speed of sound in the solid) can even get from the front to the back of the object. There simply isn't time for pressure-induced fragmentation of the entire object to occur, meaning the kinetic energy isn't dissipated until the entire body slams into Earth.
[1] Chyba et al. 1993. "The 1908 Tunguska explosion: atmospheric disruption of a stony asteroid." (link, PDF) | {
"domain": "physics.stackexchange",
"id": 34544,
"tags": "explosions, meteors"
} |
how to train a gene dataset with a nearest shrunken centroid classifier? | Question: I have a data file named "geneexp.csv".
The data contains information about gene expression in three different cell types (CD4, CD8 and CD19). I want to classify cells by performing nearest shrunken centroid classification of the training data, in which the threshold is chosen by cross-validation. I split the data (70% train and 30% test).
data = read.csv("geneexp.csv")
splitData <- function(data, trainRate) {
n <- dim(data)[1]
idxs <- sample(1:n, floor(trainRate*n))
train <- data[idxs,]
test <- data[-idxs,]
return (list(train = train, test = test))
}
split <- splitData(data, .7)
train <- split$train
test <- split$test
Then, using the pamr package, I tried to build the following model and plot:
y <- train[[ncol(train)]]
x <- t(train[,-ncol(train)])
mydata <- list(
x = x,
y = as.factor(as.factor(y)),
geneid = as.character(1:nrow(x)),
genenames = rownames(x)
)
# Training and cross-validating threshold
model <- pamr.train(mydata)
cvmodel <- pamr.cv(model, mydata)
pamr.plotcv(cvmodel)
but I can't make it work. I get the following error:
Error in contrasts<-(*tmp*, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more levels
I have already converted y to a factor. Can you help me? How can I fix it?
Answer: It's a bit different from usual R, but if you check the help:
data: The input data. A list with components: x, an expression matrix
(genes in the rows, samples in the columns)
So in your case, you need to transpose the matrix:
mydata <- list(
x = t(x),
y = as.factor(y),
geneid = as.character(1:nrow(x)),
genenames = rownames(x)
)
As I don't have your amazing data, I can only use iris below:
pamr.train(list(x=t(iris[,1:4]),y=iris[,5]))
pamr.train(data = list(x = t(iris[, 1:4]), y = iris[, 5]))
threshold nonzero errors
1 0.000 4 6
2 0.841 4 7
3 1.682 4 10
4 2.523 4 11
5 3.364 4 13
6 4.205 4 18
7 5.046 3 22
8 5.887 3 23 | {
"domain": "bioinformatics.stackexchange",
"id": 1661,
"tags": "r, gene, genome, machine-learning, data-preprocessing"
} |
Galilean covariance of the Schrodinger equation | Question: Is the Schrodinger equation covariant under Galilean transformations?
I am only asking this question so that I can write an answer myself with the content found here:
http://en.wikipedia.org/wiki/User:Likebox/Schrodinger#Galilean_invariance
and here:
http://en.wikipedia.org/wiki/User:Likebox/Schrodinger#Galilean_invariance_2
I learned about these pages in a comment to this answer by Ron Maimon. I think Ron Maimon is the original writer of this content.
This is creative commons, so it's ok to copy it here. It is not on any textbook on non-relativistic quantum mechanics that I know of, and I thought it would be more accessible (if to no one else, at least for myself) and safe here. I hope this type of question is not in disagreement with site policy.
Answer: Operator formalism
Galilean symmetry requires that $H(p)$ is quadratic in $p$ in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a $p$-independent phase factor, $px - Ht$ must have a very special form: translations in $p$ need to be compensated by a shift in $H$. This is only true when $H$ is quadratic.
The infinitesimal generator of boosts in both the classical and quantum case is
$$ B = \sum_i m_i x_i(t) - t \sum_i p_i $$
where the sum is over the different particles, and $B$, $x$, and $p$ are vectors.
The Poisson bracket/commutator of $B\cdot v$ with $x$ and $p$ generate infinitesimal boosts, with $v$ the infinitesimal boost velocity vector:
$$ [B \cdot v, x_i] = vt $$
$$ [B \cdot v, p_i] = v m_i $$
Iterating these relations is simple since they add a constant amount at each step. By iterating, the $dv$s incrementally sum up to the finite quantity $V$:
$$ x \rightarrow x_i + Vt $$
$$ p \rightarrow p_i + m_i V $$
$B$ divided by the total mass is the current center of mass position minus the time times the center of mass velocity:
$$ B = M X_\text{cm} - t P_\text{cm} $$
In other words, $B/M$ is the current guess for the position that the center of mass had at time zero.
The statement that $B$ doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.
Since $B$ is explicitly time-dependent, $H$ does not commute with $B$, rather:
$$ \frac{dB}{dt} = [H, B] + \frac{\partial B}{\partial t} = 0 $$
This gives the transformation law for $H$ under infinitesimal boosts:
$$ [B \cdot v, H] = - P_\text{cm} v $$
The interpretation of this formula is that the change in $H$ under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.
The two quantities $(H, P)$ form a representation of the Galilean group with central charge $M$, where only $H$ and $P$ are classical functions on phase-space or quantum mechanical operators, while $M$ is a parameter. The transformation law for infinitesimal $v$:
$$ P' = P + Mv $$
$$ H' = H - P \cdot v $$
can be iterated as before: $P$ goes from $P$ to $P + MV$ in infinitesimal increments of $v$, while $H$ changes at each step by an amount proportional to $P$, which changes linearly. The final value of $H$ is then changed by the value of $P$ halfway between the starting value and the ending value:
$$ H' = H - (P + \frac{MV}{2}) \cdot V = H - P \cdot V - \frac{MV^2}{2} $$
The factors proportional to the central charge $M$ are the extra wavefunction phases.
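The "iterate the infinitesimal increments" argument can also be checked mechanically (my own sketch): stepping $dH = -P\,dv$, $dP = M\,dv$ many times reproduces the closed forms $P' = P + MV$ and $H' = H - PV - \tfrac{1}{2}MV^2$ up to discretization error of order $MV^2/2n$.

```python
M, H0, P0, V = 2.0, 5.0, 3.0, 1.5
n = 100_000
dv = V / n
H, P = H0, P0
for _ in range(n):      # compose n infinitesimal boosts
    H -= P * dv         # dH = -P dv
    P += M * dv         # dP = M dv

assert abs(P - (P0 + M * V)) < 1e-8
assert abs(H - (H0 - P0 * V - 0.5 * M * V**2)) < 1e-3
```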
Boosts give too much information in the single-particle case since Galilean symmetry completely determines the motion of a single particle. Given a multiparticle time-dependent solution:
$$ \psi_t(x_1, x_2, ..., x_n) $$
with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:
$$ \psi'_t = \psi_t(x_1 + vt,...,x_n + vt) e^{i P_\text{cm} \cdot X_\text{cm} - \frac{M v_\text{cm}^2}{2}t} $$
For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored. | {
"domain": "physics.stackexchange",
"id": 89328,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, covariance, galilean-relativity"
} |
Given list of numbers find for each number in the list the next distinct number | Question: Given a sequence of numbers $l_1, \ldots, l_n$, I want to find for each index $i$ the next possible number to the right of $l_i$ (if any) that is different from $l_i$. What is the best time and space optimal solution to do this?
The numbers may be repeating and are in no particular order.
Answer: Here is pseudocode for an algorithm which outputs the entire answer array $a_1,\ldots,a_n$:
Set $\mathit{first} \gets 1$ (first index in the current run)
Set $\mathit{curr} \gets 1$ (index of the current element being scanned)
While $\mathit{curr} < n$:
Set $\mathit{curr} \gets \mathit{curr} + 1$
If $l_{\mathit{curr}} \neq l_{\mathit{first}}$:
Set $a_{\mathit{first}},\ldots,a_{\mathit{curr}-1} \gets l_{\mathit{curr}}$
Set $\mathit{first} \gets \mathit{curr}$
End If
End While
Set $a_{first},\ldots,a_n \gets \bot$ (no different element to the right)
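A direct Python transcription of the pseudocode (my own; `None` plays the role of $\bot$):

```python
def next_distinct(l):
    n = len(l)
    a = [None] * n          # None means no different element to the right
    first = 0               # first index of the current run of equal values
    for curr in range(1, n):
        if l[curr] != l[first]:
            for i in range(first, curr):
                a[i] = l[curr]
            first = curr
    return a

print(next_distinct([7, 7, 2, 2, 2, 5]))  # [2, 2, 5, 5, 5, None]
```

The inner assignment loop touches each index exactly once over the whole run, so the total work stays linear.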
This algorithm uses linear time and $O(1)$ auxiliary space. | {
"domain": "cs.stackexchange",
"id": 11399,
"tags": "algorithms, data-structures"
} |
how to extract a topic data from a text file using matlab? | Question:
I did the following
rostopic echo /topic_name > filename.txt
Then I got a text file that contains the data of the topic...
I want to save the data using matlab code in arrays
for example I have the following text file:
secs: 4113
nsecs: 565000000
frame_id: ''
pose:
position:
x: 5.0
y: 5.0
z: 5.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 0.0
---
header:
seq: 2544
stamp:
secs: 4113
nsecs: 590000000
frame_id: ''
pose:
position:
x: 5.0
y: 5.0
z: 5.0
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 0.0
---
I want to save the pose position.x = [ 5 2 ..... ]
and the same for y and z. I want to save the data in arrays using MATLAB.
Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2014-05-25
Post score: 0
Answer:
This will help:
http://www.cs.utah.edu/~germain/PPS/Topics/Matlab/textread.html
For this I don't think ROS is required!
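As an alternative (my own sketch, not from the original answer): the `rostopic echo` dump is regular enough that a few lines of Python can pull the position fields into arrays before the data ever reaches MATLAB. The state machine below only trusts the exact layout shown in the question.

```python
def parse_positions(dump):
    """Collect pose.position.{x,y,z} from `rostopic echo` text output."""
    xs, ys, zs, inside = [], [], [], False
    for line in dump.splitlines():
        s = line.strip()
        if s == "position:":
            inside = True
        elif inside and s.startswith("x: "):
            xs.append(float(s[3:]))
        elif inside and s.startswith("y: "):
            ys.append(float(s[3:]))
        elif inside and s.startswith("z: "):
            zs.append(float(s[3:]))
            inside = False  # z is the last field under position:
    return xs, ys, zs

sample = "pose:\n  position:\n    x: 5.0\n    y: 5.0\n    z: 5.0\n  orientation:\n    x: 0.0\n"
print(parse_positions(sample))  # ([5.0], [5.0], [5.0])
```

Because the `inside` flag is cleared after `z:`, the orientation fields that follow are ignored.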
Originally posted by Sudeep with karma: 460 on 2014-05-26
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18060,
"tags": "matlab"
} |
What is known about the sets enumerated by primitive recursive functions? | Question: Let's say that a set of natural numbers $S \subseteq \mathbb{N}$ is primitive recursively enumerable if there exists some primitive recursive function $f$ such that $S$ is the range of $f$. That is, we can enumerate $S$ by calculating $\{f(0), f(1) \ldots \}$.
What is known about this class of sets? Where does it stand in terms of computability? I suspect that it contains sets that are not context free, and that it does not contain all recursive sets.
Has this been studied? Does anyone have a reference for this?
Answer: In "Extensions of some theorems of Gödel and Church" it's shown by Barkley Rosser that these sets are exactly the (nonempty) recursively enumerable sets:
Corollary I. If a class can be enumerated (allowing repetitions) by a general recursive function, it can be enumerated (allowing repetitions) by a primitive recursive function.
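The "padding" idea behind the corollary can be illustrated with a toy example (mine, not Rosser's): enumerate the primes by a function whose only loops are bounded by the input, which is the shape a primitive recursive definition is allowed to take. Whenever no new prime appears below n, the function simply repeats its previous output.

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, m))

def g(n):
    """Output the largest prime <= max(n, 2): a bounded-search enumeration
    of the primes, with repetitions serving as padding."""
    best = 2
    for m in range(2, n + 1):   # bounded loop only: primitive-recursive shape
        if is_prime(m):
            best = m
    return best

print(sorted({g(n) for n in range(50)}))  # the primes below 50
```

The range of g over all n is exactly the set of primes, even though g outputs the same value many times in a row.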
Note that the crux here is repetitions: since we are enumerating sets, repeated outputs do not change the set being enumerated, so padding the enumeration with repeats is harmless. | {
"domain": "cs.stackexchange",
"id": 14523,
"tags": "computability, reference-request, primitive-recursion, chomsky-hierarchy"
} |
There are plans to develop a better definition of a "second". How does the current definition fall short? | Question: The current definition of a second is stated here and I found a presentation on the BIPM site which discusses plans to change to a "better" definition of a second. You can find the presentation here. The plan is to use a new definition based on "an optical transition". In what way does the current definition fall short? The BIPM presentation tries to explain why we need a new definition, but I don't have the background to understand it.
Answer: As a rule of thumb, the relative stability and precision which you can hope to achieve with any oscillator is limited by the number of periods you can observe your system for. For the current definition of a second the oscillator is a microwave transition at about $9\textrm{ GHz}\approx 10^{10} \textrm{ Hz}$. Since trapping the atoms shifts the energy levels, you need to chuck them up and measure them when they fall back down, which means that the effective interaction time is of the order of seconds.
On the other hand, using a transition in the optical part of the spectrum would keep observation times at about the same order but increase the frequency of the radiation up to about $10^{15}\textrm{ Hz}$. As Gill points out, this would mean uncertainties lower by two or three orders of magnitude, simply because you observe a much bigger number of periods.
The definition of a second is fine as it is for what we're doing now. However, cold-ion clock technology is indeed close to this fundamental limit. As the presentation shows (page 3), optical clocks have overcome many of the technical hurdles that made them difficult to work with, as well as some fundamental issues solved by the frequency comb, to catch up with fountain clocks. It is therefore time to ask whether we shouldn't make optical transitions the fundamental standard and stop worrying about calibrating them with a (less accurate) fountain clock. | {
"domain": "physics.stackexchange",
"id": 4169,
"tags": "time, si-units, metrology"
} |
AB5E type molecule | Question: In $\ce{AB5E}$-type molecules, why are the lone pairs in axial bonds?
If the lone pair is present in axial bonds it repels four other bonds. On the other hand if the lone pair is in equatorial bonds, it repels only two other bonds.
So what is the reason behind this?
Answer: Why? Because there is no other choice. Starting from the AB6 octahedral configuration, all six vertices of the octahedron are symmetric, so it doesn't matter whichever one you “choose” to replace by the lone pair. All will yield the same final configuration. | {
"domain": "chemistry.stackexchange",
"id": 12260,
"tags": "molecular-structure, symmetry, vsepr-theory"
} |
Regexes for Google App Engine | Question: I want to review the URL routing for my appengine webapp:
routes = [
(r'/', CyberFazeHandler),
(r'/vi/(eyes|mouth|nose)', CyberFazeHandler),
(r'/realtime', RealtimeHandler),
(r'/task/refresh-user/(.*)', RefreshUserHandler),
('/ai', FileUploadFormHandler),
('/serve/([^/]+)?', ServeHandler),
('/upload', FileUploadHandler),
('/generate_upload_url', GenerateUploadUrlHandler),
('/file/([0-9]+)', FileInfoHandler),
('/file/set/([0-9]+)', SetCategoryHandler),
('/file/([0-9]+)/download', FileDownloadHandler),
('/file/([0-9]+)/success', AjaxSuccessHandler),
]
app = webapp2.WSGIApplication(routes,
debug=os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'))
Does it look alright to you? Can you recommend an improvement? Should I use the r prefix for my regexes?
Answer: Have you considered using named Route templates instead of capturing regular expressions? It could make the code more readable. Consider
Route("/task/refresh-user/<username>", RefreshUserHandler)
instead of
(r'/task/refresh-user/(.*)', RefreshUserHandler)
for example. (Of course, I don't know what kwargs RefreshUser actually wants, but you can change the angle-bracketed part to the appropriate name.) | {
"domain": "codereview.stackexchange",
"id": 835,
"tags": "python, regex, google-app-engine"
} |
model.fit fails using keras sequential (slice index ### of dimension 0 out of bounds) | Question: this is the simplest model I can think of for my data, yet I can't use the fit function; it gives an error.
the desired procedure is to make a simple autoencoder: from 576 nodes to 64, then back to 576
why doesn't this work?
model = Sequential(name = 'sth')
model.add(Input(shape=(576,)))
model.add(Dense(576, activation='relu'))
model.add(Dense(64, activation='relu')) #bottleneck
model.add(Dense(576, activation='sigmoid'))
model summary gives :
┌─────────────────────────────────┬────────────────────────┬───────────────┐
│ Layer (type) │ Output Shape │ Param # │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ input (Dense) │ (None, 576) │ 332,352 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_4 (Dense) │ (None, 64) │ 36,928 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ output (Dense) │ (None, 576) │ 37,440 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 406,720 (1.55 MB)
Trainable params: 406,720 (1.55 MB)
Non-trainable params: 0 (0.00 B)
then compile and then fit :
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(x=Xdata, y=Xdata, validation_data=(X_val, X_val),
batch_size= 32, epochs= 50)
the Xdata and X_val are numpy arrays of shape (900, 576) and (100, 576) respectively
gives this error :
ValueError: slice index 576 of dimension 0 out of bounds. for '{{node strided_slice_576}} = StridedSlice[Index=DT_INT32, T=DT_FLOAT, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](ReadVariableOp_576, strided_slice_576/stack, strided_slice_576/stack_1, strided_slice_576/stack_2)' with input shapes: [576,64], [1], [1], [1] and with computed input tensors: input[1] = <576>, input[2] = <577>, input[3] = <1>.
Answer: there was a hard-to-trace mistake in one of my Dense layers: when creating the layer I passed the argument kernel_regularizer=HeNormal() instead of kernel_initializer=HeNormal()
I wasted 3 days tweaking everything, because every time I read that line of code I automatically read it as "initializer". | {
"domain": "ai.stackexchange",
"id": 4206,
"tags": "training, keras, autoencoders"
} |
Query about Bernoulli's principle | Question: We know that the lower atmosphere has high pressure and, as we go up, the pressure decreases. If that's so, then why don't all gases fly up from the lower atmosphere into the upper, following Bernoulli's theorem? I expect that the gravitational effect on gases isn't that notable. Do correct me if I'm mistaken!
Answer: If you have ever swum to the bottom of a swimming pool you'll know that in water the pressure increases as you go deeper. At a depth of about 10 metres the pressure is twice what it is at the surface, but the water 10 metres down doesn't burst up to the surface because it is held down by the weight of water above it. In fact the increase of pressure with depth is exactly the weight of water above.
Exactly the same is true of the atmosphere. The pressure at ground level is 101,325 Pa because each square metre of the ground has about 10,329 kg of air above it (10329 kg times the acceleration due to gravity 9.81 m/sec$^2$ = 101325 Pa). If you could magically remove the 100 km or so of atmosphere that's above some patch of air at ground level that air would indeed immediately expand upwards.
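As a quick sanity check on that arithmetic (a minimal sketch; the 10,329 kg figure is simply the one quoted above):

```python
g = 9.81                 # gravitational acceleration, m/s^2
air_mass_per_m2 = 10329  # kg of air above one square metre (figure from the text)

# The pressure at ground level is just the weight of the air column per square metre.
pressure = air_mass_per_m2 * g
print(round(pressure))   # ~101327 Pa, i.e. roughly standard atmospheric pressure
```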
Incidentally, Bernoulli's principle is unrelated to the problem. | {
"domain": "physics.stackexchange",
"id": 6707,
"tags": "fluid-dynamics"
} |
Does energy of photon change due to some external magnetic field? | Question: I came to know about the energy of photon changes (decreases) while going away from the emitter (even from earth) due to gravitational field effects.
Is there any change in energy/wavelength of a photon due to some external (artificial) magnetic field or electric field?
Answer: Electromagnetic fields generate gravity, and therefore the answer is yes: the energy of a photon changes as it moves through a non-homogeneous electromagnetic field. | {
"domain": "physics.stackexchange",
"id": 36928,
"tags": "electromagnetism, gravity, photons, magnetic-fields"
} |
Can one show NP-completeness by showing a reduction to 3SAT? | Question: The standard technique to show NP-completeness of $L$ seems to be to show that $L$ is in NP, and then to show that some NP-complete language can be reduced to it. What if one tried to show it the other way, i.e., if L $\leq $ 3SAT?
Wouldn't that be one one step way of showing that the language $L$ is in NP-complete?
Answer: Here is a counterexample to your proof method. The empty language reduces to 3SAT, yet it isn't NP-hard.
If you reduce $L$ to 3SAT, then you can conclude that $L$ is in NP, that's it. | {
"domain": "cs.stackexchange",
"id": 19099,
"tags": "np-complete, reductions, decision-problem"
} |
Find least probable path in graph | Question: I am working on a special case of the longest path problem. For a cyclic directed graph $G=(V, E)$, where the edge-weights are probability values (i.e., $P(\_) = w(s, q)$ with $s,q \in V$), my aim is to find the least 'probable' path between two vertices.
My initial approach is to generate a graph $G'$ where the weights are the complementary probabilities $1- w(s, q)$ (with strictly positive values), and compute Dijkstra's shortest path on $G'$. Is this reasoning sound? Or am I getting myself into an NP-hard disaster?
Answer: Your approach doesn't work. Presumably, you want to define the probability of a path to be the product of the probabilities on its edges. It sounds like you want to define the weight $w(e)$ on an edge $e$ to be $w(e)=1-p(e)$ (one minus its probability). However, this doesn't work: you want $w(e)+w(e')$ to correspond to $p(e) \times p(e')$, but it doesn't.
Instead, you should be taking logarithms. In particular, you should define the weight on edge $e$ by $w(e) = -\log p(e)$. Now addition of weights corresponds to multiplication of probabilities:
$$w(e) + w(e') = -(\log p(e) + \log p(e')) = - \log(p(e) \times p(e')),$$
as desired. At this point all of the weights in $G'$ will be non-negative (do you see why?), so you can use Dijkstra's algorithm to find the shortest path in $G'$, and that will correspond to the path in $G$ with highest probability. As you can see, this is not a longest-path problem at all; it is just a straightforward shortest-paths problem.
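The log-trick can be sketched with an ordinary Dijkstra implementation (a minimal sketch; the function name and the adjacency-dict format are mine, not from the answer):

```python
import heapq
import math

def most_probable_path(graph, src, dst):
    """Highest-probability path from src to dst via Dijkstra on w = -log p.

    graph: adjacency dict {u: [(v, p), ...]} with edge probabilities 0 < p <= 1,
    so every weight -log(p) is non-negative and Dijkstra applies.
    Returns the probability of the best path (0.0 if dst is unreachable)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return math.exp(-d)   # convert the summed -log weights back to a probability
        if d > dist.get(u, math.inf):
            continue              # stale queue entry
        for v, p in graph.get(u, ()):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return 0.0
```

For example, with `graph = {'a': [('b', 0.5), ('c', 0.9)], 'c': [('b', 0.9)]}`, the best path from `a` to `b` goes through `c`, with probability $0.9 \times 0.9 = 0.81$ rather than the direct $0.5$.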
The trick of taking logs to turn multiplication into addition is a standard one, including in many applications of graphs, so this is worth knowing. | {
"domain": "cs.stackexchange",
"id": 2178,
"tags": "algorithms, graphs, shortest-path"
} |
magnetic field formation with reference frames. | Question: Suppose in a reference system K' a charge and a body are moving with the same velocities v'. The charge should not produce a magnetic field.
But for the reference frame K (at rest relative to K') the charge produces a magnetic field.
Is there an absolute answer to this?(whether a magnetic field is produced or not) and how do we compute it? Or are there some basic conceptual errors in the assumptions?
Answer: The simple (relatively speaking) answer is that magnetism is fundamentally a relativistic phenomenon, and relativity is what unites electricity and magnetism.
What appears to be a purely electric field in the reference frame in which the charge and body are at rest turns out to be a combination of electric and magnetic fields in a reference frame in which they are moving. By the first postulate of special relativity, neither interpretation is more correct than the other.
Wikipedia has a good explanation of electromagnetism under special relativity, however as the Maxwell Equations are compatible with special relativity they (specifically the Ampere-Maxwell law) will generally be sufficient to calculate the magnetic field produced.
A common example is to consider the magnetic field produced by a current-carrying wire. In the laboratory frame, the wire is neutrally charged, however in a moving frame, the moving electrons and the protons will contract at a different rate and hence the wire will appear to be charged. The apparent electric field is equivalent to the magnetic field in the laboratory frame. | {
"domain": "physics.stackexchange",
"id": 50406,
"tags": "magnetic-fields, charge, relativity"
} |
Explanation for this type of (magic-trick) suspension? | Question: Well let's start off with that I'm not a physicist but I'd like some thoughts on something I came across in my hometown.
This guy:
Is it possible that, due to the electrical charge of magnets, this guy can create the illusion that he is floating? Or is this probably a cheap trick that fools the eye? I was standing there for quite some time watching the guy, and he kept moving his feet. The resistance that he appeared to have was from a magnetic force keeping him afloat. So after I passed this guy I did some physics searches on the web, and the first thing that caught my eye was the electrical charge of magnets.
So the question is : Is this related to the electrical charge of a magnet or a cheap trick ?
Answer: The "trick" is that the cane he is apparently holding is actually firmly attached to the platform. A rigid piece goes up his sleeve, then to a harness that holds his whole body up. For more about this type of magic trick device, google "broom suspension" or "aerial suspension harness".
No electric or magnetic fields were abused here.
Image Credit: TwentyTwoWords | {
"domain": "physics.stackexchange",
"id": 15719,
"tags": "forces, levitation"
} |
Ionization of Dielectric Material | Question: I'm reading Introduction to Electrodynamics by Griffiths and in chapter 4 when discussing induced dipoles and the effect of an electric field on an electrical insulator, it says the following:
These two regions of charge within the atom (positively charged nucleus and negatively charged electron cloud) are influenced by the field: the nucleus is pushed in the direction of the field, and the electrons the opposite way. In principle, if the field is large enough, it can pull the atom apart completely, "ionizing" it (the substance then becomes a conductor).
Emphasis mine and italicized comment added for clarity. I know it says in principle, indicating that this does not happen, but is it possible to ionize an insulator, say wood, with an extremely strong electric field? If it is theoretically possible, how strong would the electric field have to be in order for this to happen (orders of magnitude would be fine)? What would happen to a piece of wood in this kind of electric field? If it is not possible, is it due to limitations on the strength of electric field that we can actually create?
Answer:
but is it possible to ionize an insulator, say wood, with an extremely
strong electric field?
Yes it is possible. It's called dielectric breakdown.
If it is theoretically possible, how strong would the electric field
have to be in order for this to happen (orders of magnitude would be
fine)?
I don't know about wood, but most pure plastics (which are used as electrical insulation) have a dielectric strength in the range of 100 to 300 kV/cm. With additives it can be even higher (source: Polymer Properties Database). Actual values will vary depending on the test method used, especially the configuration of the test electrodes.
What would happen to a piece of wood in this kind of electric field?
I don't know. But you can probably Google up "Dielectric strength of wood" and find out. Be careful about wood, as it is hygroscopic (absorbs water), which reduces the strength.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 64053,
"tags": "electrostatics, electric-fields, estimation, dielectric"
} |
Efficient intersection of equivalence classes | Question: Having two equivalence relations, both as a union find data structure with the same number of elements, what is the most efficient way to find the equivalence relation that is the intersection of both relations?
For example, let's have $R_1=\{\{1,2,3\},\{4\}\}$ and $R_2=\{\{1\},\{2,3,4\}\}$. Then the intersection would be $R'=\{\{1\},\{2,3\},\{4\}\}$ as $2$ and $3$ are considered equivalent in both $R_1$ and $R_2$.
The "obvious" way would be $O(n^2)$: test equivalence for each pair in each of the given relations. But maybe some property of the given union-find data structure would allow for better complexity?
As far as I know, even the canonical representative of a class is not necessarily the same but depends on the sequence of class merges (that was one approach I had in mind, but it did not really work out).
Answer: For each equivalence class $X$ of $R_1$, find the representatives of all elements of $X$ in $R_2$. This tells you how the elements of $X$ split in $R_1 \cap R_2$. (You might need to use some efficient data structures or algorithms beyond union-find.) Repeat for all other equivalence classes. Total running time is $\tilde{O}(n)$. | {
"domain": "cs.stackexchange",
"id": 19547,
"tags": "algorithms, union-find"
} |
Build and Move a simple human character model in rViz | Question:
Hi Guys!
I'd like to build a simple human model in rViz and be able to move it.
Actually, something pretty similar to this little app which I've developed using Ogre and Qt:
http://www.youtube.com/watch?v=Q0WP0kd4sOU&feature=youtu.be
Here I could basically map the output from each inertial sensor,to the corresponding character body segment.
Is it possible to achieve the same in rViz?
Do you have any suggestion or sample code to share?
By the way, even a simple "stickman" model could be enough :)
Thanks everybody!
Luca
Originally posted by RagingBit on ROS Answers with karma: 706 on 2012-10-24
Post score: 0
Original comments
Comment by SL Remy on 2012-10-25:
Does the video this page (http://www.ros.org/wiki/skeleton_markers) count as a simple stickman model?
Answer:
I also think RVIZ might not be the best choice for this purpose. You can take a look at the MORSE simulator. It's a simulator based on Blender and comes with a human model that can be controlled using Kinect (also via ROS) like in this video:
https://www.youtube.com/watch?v=4qaLBoGTQEI (from 00:12s)
It's open-source, so you can also take a look at the code. You find more information here:
http://www.openrobots.org/wiki/morse
Originally posted by michikarg with karma: 2108 on 2012-10-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 11495,
"tags": "ros, kinect, rviz, model"
} |
Pressure exerted by a bar/beam inserted in a continuous medium (wall) | Question: Related to a DIY project, I'm facing the following question:
Assume a metal bar (beam) of square section, $a \times a$, is partially inserted into a continuous medium (let's say, a wall) in such a way that a length $h$ of the beam is inside the medium and a length $l$ is outside the wall, and a force $F=200\ N$ is applied at the other end of the bar, perpendicular to it. The question is: how is the pressure (the force per unit area, $N/mm^2$) distributed over the contact surfaces between the cantilever beam and the wall?
Note: we could say that the beam has no weight, no friction forces and the system is stable, no displacements. An infinite wall in width, height and thickness. If easier, we can consider a beam of circular section instead of square.
Inside the wall the beam has 5 surfaces: right, left, top, bottom and back. In the ideal case of no friction, I think the forces on the right and left surfaces will be null. I have no idea whether the forces on the back and top surfaces are also null. The force on the bottom must keep the whole system fixed, with an unknown distribution of pressure on it (uniform? maximum at depth $h$?).
I have no knowledge to solve it as a continuum problem. A strong simplification that came to my mind is to consider the system as equivalent to a lever with arm lengths $h$ and $l$, the support axis being the red line in the drawing. That means the force at the other end of the beam (green line in the drawing) will be $F' = F \frac {l}{h}$. For example, if h=5cm and l=50 cm, then $F'=200*50/5=2000N$.
Background: if we mount a TV of 20 kg using an $l=50\ cm$ beam, how do we estimate the forces on the wall?
Answer: This shows a combination of a steel rod and a concrete wall in 3D. The rod is embedded in the wall over 1/6 of its length, with a cross-section of 1x1. We take the applied force divided by the cross-sectional area as 1. The middle picture is the distribution of the vertical component of the deformations in the wall; the right-hand picture is the distribution of the $\sigma_{zz}$ stress component. It can be seen that in such a situation a stress amplification of up to 30 times can be obtained. The calculations were performed using FEM and Mathematica 12. | {
"domain": "physics.stackexchange",
"id": 61022,
"tags": "continuum-mechanics, applied-physics"
} |
Maximal edge weight clique of given size | Question: Let $G$ be an undirected fully connected weighted graph with $N=|V|$ vertices. Given $M<N$ we wish to choose $M$ vertices such that the sum of weights between the chosen vertices is maximal, i.e. we wish to find a set of vertices $S$,
$$\max_S\sum_{i\in S} \sum_{j \in S, j\neq i} w_{ij}\quad \text{s.t.}\quad |S|=M$$
Some questions regarding this problem assuming $M\ll N$:
The brute force algorithm for finding $S$ is simply to examine all $N \choose M$ possibilities, $O(2^N)$. Are there faster deterministic methods? I suspect not, but can we prove this?
A simple greedy approach would be to sort the edges by weight, and choosing the top $M$ vertices appearing in these pairs, $O(N^2 \log N)$. Are there known expected performance bounds for this approach? Are there more optimal non-deterministic approaches?
Answer: Unless the strong exponential time hypothesis fails, there is no deterministic algorithm that can solve your problem in time $N^{o(M)}$, since such an algorithm would immediately solve the $M$-clique problem in the same amount of time. See this paper.
Regarding your greedy algorithm: first of all, its time complexity is $\Omega(N^2)$ (although a variant that uses a max-heap only requires time $O(M^2 + M \log N)$). Moreover, it cannot provide any constant approximation ratio. Think of a graph consisting of a collection of $\lfloor M/2 \rfloor$ disjoint edges with weights $1 + \epsilon$ together with an $M$-clique in which each edge has weight $1$.
Complete the graph with edges of weight $0$.
Your algorithm would return a clique with a total weight of at most $\frac{M+ M\epsilon}{2}$, while the optimal solution has a weight of at least $\frac{M^2}{2}$. The ratio of these two quantities is $\frac{1+\epsilon}{M}$ which approaches $0$ when $M$ approaches $\infty$.
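Plugging concrete numbers into this counterexample makes the vanishing ratio visible (a sketch; the choices of $M$ and $\epsilon$ are arbitrary):

```python
M, eps = 10, 0.01

# The greedy picks the floor(M/2) heaviest edges: the disjoint ones of weight 1+eps.
# Its chosen vertex set therefore carries total weight at most:
greedy_total = (M // 2) * (1 + eps)

# The optimal solution takes the M-clique, whose M*(M-1)/2 edges each weigh 1:
optimal_total = M * (M - 1) / 2

ratio = greedy_total / optimal_total   # tends to 0 as M grows
print(greedy_total, optimal_total, ratio)
```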
I don't know what you mean by "more optimal non-deterministic approaches". It is very easy to come up with a linear-time non-deterministic algorithm for the decision version of your problem. Then polynomially-many invocations of such an algorithm suffice to find the optimal solution to the optimization version. | {
"domain": "cs.stackexchange",
"id": 19347,
"tags": "complexity-theory, graphs, approximation"
} |
Why does hydrogen phosphate act as a base? | Question: Let's look at question c:
a) Write a balanced equation for the reaction.
$$\ce{2 NaOH + H3PO4 -> Na2HPO4 + 2 H2O (l)}$$
b) When some crystals of $\ce{Na2HPO4}$ were dissolved in water, the $\mathrm{pH}$ of the resulting solution was found to be $9.5$. Calculate the hydrogen ion concentration of this solution.
$$[\ce{H+}] = 10^\mathrm{-pH}\, ;\qquad [\ce{H+}] = 10^{-9.5} = \pu{3.2e-10 mol L-1}$$
c) Write an equation for the reaction of the $\ce{HPO4^2-}$ ion with water to account for the measured $\mathrm{pH}$.
$$\ce{HPO4^2- + H2O (l) <=> H2PO4- + OH-}$$
So, I'm a bit confused with what's happening. It seems that water is acting as an acid in this reaction and donating protons to the ion. Can someone elaborate on this whole process, please, because I'm not really sure what is happening. Why is it $\ce{HPO4^2-}$ and not $\ce{H2PO4-}$ or $\ce{H3PO4}$ (as in the original equation)? Is it a stepwise process with only one step included?
Answer: When hydrogen phosphate salts are dissolved in water there are two main equilibria formed. This is based on the fact, that hydrogen phosphate can act as a Brønsted–Lowry base, i.e. accept protons, or as an acid, i.e. donate protons. For water the same is true. In addition to this it can react with itself, which is known as the autoprotolysis of water:
$$\ce{H2O + H2O <=> H3+O + {}^{-}OH}\tag1$$
With this knowledge you can write
\begin{align}
\ce{HPO4^2- + H2O &~<=> PO4^3- + H3+O}\tag2\\
\ce{HPO4^2- + H2O &~<=> H2PO4- + {}^{-}OH}\tag3\\
\end{align}
To a lesser extent there is also the following equilibrium happening:
$$\ce{HPO4^2- + 2H2O <=> H3PO4 + 2 {}^{-}OH}\tag4$$
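As a rough numerical cross-check (this uses the standard amphiprotic-species approximation $\mathrm{pH} \approx (\mathrm{p}K_\mathrm{a2} + \mathrm{p}K_\mathrm{a3})/2$, which is not part of the answer itself; the $\mathrm{p}K_\mathrm{a}$ values are common literature figures for phosphoric acid):

```python
# Assumed literature values for the 2nd and 3rd dissociations of H3PO4:
pKa2, pKa3 = 7.21, 12.35

# Amphiprotic-species approximation for a solution of Na2HPO4:
pH_estimate = (pKa2 + pKa3) / 2
print(round(pH_estimate, 2))   # ~9.78, reasonably close to the measured pH of 9.5 in part (b)
```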
From the acidity constant you know that $\ce{HPO4^2-}$ will react mostly as a base, i.e. $\mathrm{p}K_\mathrm{a}(\ce{Na2HPO4})=12.35$, so the equilibrium $(3)$ will be predominant. | {
"domain": "chemistry.stackexchange",
"id": 3864,
"tags": "inorganic-chemistry, acid-base, aqueous-solution, ph"
} |
std::string implementation attempt | Question: The following is my attempt at a string class that behaves like std::string. Any feedback would be appreciated.
my_string.h
#pragma once
#include <stdexcept>
//class that attempts to emulate the behavior of std::string
//uses allocator in order to be able to store uninitialized data
class my_string
{
private:
size_t m_size;
size_t m_space;
char* m_contents;
std::allocator<char> alloc;
//destroys and deallocates memory owned by m_contents
void cleanup();
//helper functions for my_string::insert
my_string& reserve_and_add(const size_t n, char c);
my_string& reserve_and_add(const size_t n, const char* s);
void shift_and_insert(size_t pos, size_t n, char c);
void shift_and_insert(size_t pos, const char* s, size_t s_size, size_t new_end);
void allocate_and_insert(size_t pos, size_t n, char c);
void allocate_and_insert(size_t pos, const char* s, size_t s_size, size_t new_end);
public:
using value_type = char;
using iterator = char*;
using const_iterator = const char*;
my_string();
//contents must be null-terminated. otherwise, behavior is undefined.
my_string(const char* contents);
//stores n elements with value c in m_contents
my_string(size_t n, char c);
//copy functions perform a deep copy on rhs. space of copy arg is not copied.
my_string(const my_string& rhs);
my_string& operator=(const my_string& rhs);
//like the copy functions, the move functions do not copy space of args.
my_string(my_string&& rhs) noexcept;
my_string& operator=(my_string&& rhs) noexcept;
size_t size() const noexcept
{
return m_size;
}
//size including terminating zero
size_t tot_size() const noexcept
{
return m_size == 0 ? 0 : m_size + 1;
}
size_t capacity() const noexcept
{
return m_space;
}
char& operator[](size_t n)
{
return m_contents[n];
}
const char& operator[](size_t n) const
{
return m_contents[n];
}
char& at(size_t n);
const char& at(size_t n) const;
iterator begin()
{
return &m_contents[0];
}
const_iterator begin() const noexcept
{
return &m_contents[0];
}
const_iterator cbegin() const noexcept
{
return &m_contents[0];
}
iterator end()
{
return &m_contents[m_size];
}
const_iterator end() const noexcept
{
return &m_contents[m_size];
}
const_iterator cend() const noexcept
{
return &m_contents[m_size];
}
//reserves space for n chars and copies old elements to new space.
void reserve(size_t n);
void resize(size_t n, char c);
void resize(size_t n) { resize(n, ' '); }
const char* c_str() const noexcept
{
return m_contents;
}
//inserts n elements with value c starting at index pos
my_string& insert(size_t pos, size_t n, char c);
//inserts C-style string s at index pos
my_string& insert(size_t pos, const char* s);
//erases count elements starting at index
my_string& erase(size_t index, size_t count);
//erases pos if it is found in m_contents
iterator erase(const_iterator pos);
void pop_back();
void push_back(char c);
my_string& operator+=(const my_string& rhs);
~my_string();
};
my_string.cpp
my_string::my_string()
:m_contents{ nullptr }, m_size{ 0 }, m_space{ 0 }{}
my_string::my_string(const char* contents)
: m_size{ my_strlen(contents) }, m_space{ tot_size()},
m_contents{ alloc.allocate(m_space) }
{
for (int i = 0; i < m_size; ++i)
{
alloc.construct(&m_contents[i], contents[i]);
}
alloc.construct(&m_contents[m_size], '\0');
}
my_string::my_string(size_t size, char c)
:m_size{ size }, m_space{ size + 1 },
m_contents{ alloc.allocate(m_space) }
{
for (int i = 0; i < m_size; ++i)
{
alloc.construct(&m_contents[i], c);
}
alloc.construct(&m_contents[m_size], '\0');
}
my_string::my_string(const my_string& rhs)
:m_size{ rhs.m_size }, m_space{ rhs.tot_size() },
m_contents{ alloc.allocate(m_space) }
{
for (int i = 0; i < m_size; ++i)
{
alloc.construct(&m_contents[i], rhs.m_contents[i]);
}
alloc.construct(&m_contents[m_size], '\0');
}
my_string& my_string::operator=(const my_string& rhs)
{
char* temp = alloc.allocate(rhs.tot_size());
for (int i = 0; i < rhs.m_size; ++i)
{
alloc.construct(&temp[i], rhs.m_contents[i]);
}
alloc.construct(&temp[rhs.m_size], '\0');
cleanup();
m_contents = temp;
m_size = rhs.m_size;
m_space = tot_size();
return *this;
}
my_string::my_string(my_string&& rhs) noexcept
:m_size{ rhs.m_size }, m_space{ rhs.tot_size() },
m_contents{ rhs.m_contents }
{
rhs.m_contents = nullptr;
rhs.m_size = rhs.m_space = 0;
}
my_string& my_string::operator=(my_string&& rhs) noexcept
{
cleanup();
m_contents = rhs.m_contents;
m_size = rhs.m_size;
m_space = tot_size();
rhs.m_contents = nullptr;
rhs.m_size = rhs.m_space = 0;
return *this;
}
char & my_string::at(size_t n)
{
if (n >= m_size) throw std::out_of_range{ "invalid index passed to my_string::at" };
return m_contents[n];
}
const char & my_string::at(size_t n) const
{
if (n >= m_size) throw std::out_of_range{ "invalid index passed to my_string::at" };
return m_contents[n];
}
//reserves new uninitialized space by reallocating. can only reserve
// more than the current space
void my_string::reserve(size_t n)
{
if (n <= m_space) return;
char* temp = alloc.allocate(n);
if (m_size)
{
for (int i = 0; i < tot_size(); ++i)
{
alloc.construct(&temp[i], m_contents[i]);
}
for (int i = 0; i < tot_size(); ++i)
{
alloc.destroy(&m_contents[i]);
}
}
alloc.deallocate(m_contents, m_space);
m_contents = temp;
m_space = n;
}
void my_string::resize(size_t n, char c)
{
if (n > m_space) reserve(n + 1);
for (int i = n; i < tot_size(); ++i) alloc.destroy(&m_contents[i]);
for (int i = m_size; i < n; ++i) alloc.construct(&m_contents[i], c);
alloc.construct(&m_contents[n], '\0');
m_size = n;
}
my_string & my_string::reserve_and_add(const size_t n, char c)
{
reserve(n + 1);
for (int i = 0; i < n; ++i) alloc.construct(&m_contents[i], c);
alloc.construct(&m_contents[n], '\0');
m_size += n;
return *this;
}
my_string & my_string::reserve_and_add(const size_t n, const char * s)
{
reserve(n + 1);
for (int i = 0; i < n + 1; ++i) alloc.construct(&m_contents[i], s[i]);
m_size += n;
return *this;
}
//the elements in the range [new_end, new_end - elems_moving)
//are the ones that will be shifted n spaces to the right for
//both shift_and_insert functions
void my_string::shift_and_insert(size_t pos, size_t n, char c)
{
const auto elements_moving = (tot_size()) - pos;
const auto new_end = m_size + n;
for (auto i = new_end; i > new_end - elements_moving; --i)
{
m_contents[i] = m_contents[i - n];
}
for (int i = 0; i < n; ++i)
{
m_contents[pos + i] = c;
}
}
void my_string::shift_and_insert(size_t pos, const char * s, size_t s_size, size_t new_end)
{
const int elements_moving = tot_size() - pos;
for (auto i = new_end; i > new_end - elements_moving; --i)
{
m_contents[i] = m_contents[i - s_size];
}
for (auto i = 0; i < s_size; ++i)
{
m_contents[pos + i] = s[i];
}
}
void my_string::allocate_and_insert(size_t pos, size_t n, char c)
{
//allocate more memory than needed to save for future insertion operations
char* temp = alloc.allocate(m_space * 2 + n);
//initialize elements before insertion, the insertion itself, then elements after
for (auto i = 0; i < pos; ++i) alloc.construct(&temp[i], m_contents[i]);
for (auto i = 0; i < n; ++i) alloc.construct(&temp[pos + i], c);
for (auto i = pos; i < m_size; ++i)
{
alloc.construct(&temp[i + n], m_contents[i]);
}
alloc.construct(&temp[size() + n], '\0');
cleanup();
m_contents = temp;
}
void my_string::allocate_and_insert(size_t pos, const char * s, size_t s_size, size_t new_end)
{
char* temp = alloc.allocate(tot_size() + s_size);
for (int i = 0; i < pos; ++i)
{
alloc.construct(&temp[i], m_contents[i]);
}
for (int i = 0; i < s_size; ++i)
{
alloc.construct(&temp[pos + i], s[i]);
}
for (auto i = pos; i < m_size; ++i)
{
alloc.construct(&temp[i + s_size], m_contents[i]);
}
alloc.construct(&temp[new_end], '\0');
m_contents = temp;
m_space = m_size + s_size + 1;
}
//inserts n elements starting at index pos with the value of c
//checks to see if there is already enough in the reserve;
//otherwise, allocates new memory
my_string & my_string::insert(size_t pos, size_t n, char c)
{
if (pos > size()) throw std::out_of_range{ "Invalid index arg to my_string::insert" };
if (size() == 0) return reserve_and_add(n, c);
if (n + m_size <= m_space) shift_and_insert(pos, n, c);
else allocate_and_insert(pos, n, c);
m_size += n;
return *this;
}
my_string& my_string::insert(size_t pos, const char* s)
{
if (pos > size()) throw std::out_of_range{ "Invalid index arg to my_string::insert" };
const int s_size = my_strlen(s);
if (size() == 0) return reserve_and_add(s_size, s);
const int new_end = size() + s_size;
if (s_size + tot_size() <= m_space) shift_and_insert(pos, s, s_size, new_end);
else allocate_and_insert(pos, s, s_size, new_end);
m_size += s_size;
return *this;
}
my_string & my_string::erase(size_t index, size_t count)
{
if (index >= m_size) throw std::out_of_range{ "out of range index to my_string::erase" };
if (m_size == 0 || count == 0) return *this;
//don't want to remove more elems than there are in the string
const auto num_elems_removing = min(m_size - index, count);
const auto num_elems_shifting = m_size - (index + num_elems_removing);
const auto new_size = m_size - num_elems_removing;
for (int i = 0; i < num_elems_shifting; ++i)
{
m_contents[i + index] = m_contents[i + index + num_elems_removing];
}
for (int i = new_size; i < tot_size(); ++i)
{
alloc.destroy(&m_contents[i]);
}
m_size = new_size;
alloc.construct(&m_contents[m_size], '\0');
return *this;
}
my_string::iterator my_string::erase(const_iterator pos)
{
auto elem = std::find(begin(), end(), *pos);
if (elem == end()) return elem;
//this loop also copies back the terminating zero
for (auto iter = elem; iter != end(); ++iter)
{
*iter = *(iter + 1);
}
--m_size;
return elem;
}
void my_string::pop_back()
{
if (m_size == 0) return;
m_contents[m_size - 1] = '\0';
alloc.destroy(&m_contents + m_size);
//destroy old terminating zero
alloc.destroy(&m_contents + m_size + 1);
--m_size;
}
void my_string::push_back(char c)
{
if (m_space == 0) reserve(8);
else if (tot_size() == m_space) reserve(2 * m_space);
alloc.construct(&m_contents[size()], c);
alloc.construct(&m_contents[size() + 1], '\0');
++m_size;
}
my_string & my_string::operator+=(const my_string & rhs)
{
return insert(m_size, rhs.c_str());
}
my_string::~my_string()
{
cleanup();
}
std::ostream & operator<<(std::ostream & os, const my_string & rhs)
{
return os << rhs.c_str();
}
void my_string::cleanup()
{
for (int i = 0; i < tot_size(); ++i) alloc.destroy(&m_contents[i]);
alloc.deallocate(m_contents, m_space);
}
size_t my_strlen(const char* str)
{
size_t size = 0;
while (*str)
{
++size;
++str;
}
return size;
}
Answer:
1. Where are your includes? I only see #include <stdexcept>. However, there should be a lot more: #include <memory> for std::allocator, #include <cstddef> for std::size_t, #include <ostream> for std::ostream, #include <algorithm> for std::find, etc. Add them, or a conforming compiler may refuse to accept your code.
2. Why do you even include stdexcept here? You don't use anything from it!
3. You don't forward-declare my_strlen, thus it shouldn't be visible further up in my_string.cpp. A conforming compiler actually has to reject your program because of this.
4. Maybe you wondered about me mentioning std::size_t instead of size_t without the prefix in point 1? C++ only guarantees that the legacy C types exist in the std namespace (provided that the right headers are included), whereas their existence in the global namespace is not mandatory. You should thus prefer the std:: versions of those names at all times.
5. What is the point of using std::allocator over normal new/delete here? Normally, standard containers support allocators through template parameters in order to facilitate the use of different allocation managers and schemes. However, your code doesn't take an allocator template parameter, so there is nothing useful you can do with allocators here.
6. Building on point 5: if you remove that useless std::allocator, you can actually simplify a lot of your code to use std::memset/std::memcpy/std::strncpy/etc. to move and copy data around.
7. Let's take a look at for (int i = 0; i < m_size; ++i). There is an issue here that you don't seem to have thought through thoroughly: m_size is of type std::size_t, which is not only unsigned but also larger than int on many common platforms (most importantly, x86-64). That means I can easily make your code exhibit undefined behavior, due to signed integer overflow, simply by creating a string bigger than std::numeric_limits<int>::max() characters. It is good practice in general to ensure that loop iteration variables are the same size as (or larger than) the loop bound type.
8. Utilize the copy-and-swap idiom for move assignment operators. Instead of calling cleanup() manually and then tediously reassigning values from one object to another, just swap the contents of each member variable with its equivalent on the moved-from side and have the destructor of the moved-from side handle the cleanup eventually.
9. If you are striving for noexcept correctness, the move constructor should be noexcept, as should both operator[]s, and begin and end, too.
10. void resize(size_t n) { resize(n, ' '); } seems dubious. If you are following the std::string specification, that code should probably be void resize(size_t n) { resize(n, 0); }, since the default value for type char is 0 (as for all other integral types).
"domain": "codereview.stackexchange",
"id": 30223,
"tags": "c++, strings, reinventing-the-wheel"
} |
Difficulty in developing certain vaccines | Question: I have a college level background in Biology, say at the level of Campbell. I am very curious to know why it's extremely difficult to develop vaccines for certain diseases. Two cases which I am really interested in are Malaria and AIDS. It would be great if someone can give a brief sketch of the major issues being faced without getting into extreme technicalities. Thanks in advance.
Answer: Not an easy question, especially since the reasons for both pathogens are different.
For HIV the problems are manyfold:
Insufficient knowledge about required immunity
It's not known which parameters are really important to raise adequate immunity.
Variation in HIV strains
HIV has a very high mutation rate, which results in many different subtypes. This makes the virus extremely successful, but also causes problems for making vaccines. We have a similar problem with vaccines for the flu, but there the number of subtypes is known and not very high. Still, we need a new vaccine for each season based on the current viruses. For HIV there are no known proteins against which immunity can be directed, as in the case of the flu.
HIV also recombines very fast, so new variants can come up pretty fast.
HIV targets the immune system
The HI virus targets cells of the human immune system (namely CD4 T-cells) which are important for developing a proper response to a vaccination. These cells need to be stimulated, but on the other hand are destroyed and specially targeted by the virus.
Lack of an effective animal model
To develop and test good vaccines, scientists need good animal models to test the pathogen and see the effect of a treatment.
Chimpanzees show some response to HIV infections, but they are expensive to keep as laboratory animals and also hard to obtain. The problem is that HIV is pretty well adapted to its human host.
Human subjects trials
This is more an ethical problem, but it still needs to be addressed. To test the efficiency of new drugs, double-blinded placebo-controlled tests are done. Afterwards you can compare the placebo group to the treatment group and analyze the outcome. How do you do this with a deadly disease?
More information can be found on this webpage from the NIH.
For malaria some of the problems are similar, others are different. The greatest problem for years was that not enough money was available for research, which has been overcome only quite recently:
Plasmodium goes through different stages
The parasite causing malaria goes through three different stages in its development in the human body. With each step it hides again from the immune system. The different stages are also present in different cell types; see this image from JCI ("Advances and challenges in malaria vaccine development"):
Antigen
It's not clear against which antigens such a vaccine should be directed. This is due to the problem with the different stages, and it hasn't been resolved yet.
Different pathogens
There are four forms of Plasmodium which definitely cause malaria: Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale and Plasmodium malariae. Newer results suggest that Plasmodium knowlesi is also a pathogen for humans.
Lack of animal models
More information can be found here. | {
"domain": "biology.stackexchange",
"id": 1771,
"tags": "microbiology, vaccination"
} |
Are there quantum-entangled particles in nature? | Question: Pairs of quantum entangled particles have been created in the lab, and then separated by some distance. But are there any non-manmade occurrences of quantum entanglement between distantly separated particles. Either on earth or elsewhere in the known universe. And how would we detect such entanglement?
Answer: When we create an entangled state in the lab, really what we are doing is creating a 'known' entangled state that we can then experiment with. As ACuriousMind said in the comment, most of the time things ARE entangled. Whether something counts as entangled depends on whether the observer has interacted with it or not. If an observer interacts with a dynamic system and then ceases to interact, the observer can say some of the particles in that system are entangled. I say 'dynamic system' because if it's a static system then we know its state, by definition.
"domain": "physics.stackexchange",
"id": 32019,
"tags": "quantum-entanglement"
} |
Program for converting a boring text into a stylish written text | Question: I have written a program that makes it possible to write as cool as the cool kids. All you have to do is enter the boring, normal text, which is then transformed into an exciting, stylish text.
Is there a possibility to make the code more performant without decreasing its readability?
Example Output:
I H@Ve wRITTen @ Pr0qR@M tH@T m@KEZ IT P0ZzIblE t0 wriTE @z C00L @z the c00L KIdZ :) All Y0U h@Ve T0 D0 iZ eNter thE B0rinQ, n0rm@L teXt, whIch iZ tHeN TR@nZF0RmED int0 @n exCiTInq, ztyLIZh TExT. :D
Source code
import java.util.Random;
import java.util.Scanner;
public class Program {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
System.out.print("Input: ");
String text = input.nextLine();
String trendyText = convertToTrendyText(text);
System.out.println(trendyText);
}
public static String convertToTrendyText(String string) {
string = string.replace("g", "q");
string = string.replace("s", "z");
string = string.replace("a", "@");
string = string.replace("o", "0");
string = string.replace(". ", " " + generateRandomSmiley() + " ");
string = string.replace("! ", " " + generateRandomSmiley() + " ");
string = string.replace("? ", " " + generateRandomSmiley() + " ");
string = string.concat(" " + generateRandomSmiley());
StringBuilder text = new StringBuilder(string);
Random random = new Random();
for (int i = 0; i < text.length(); i++) {
if (random.nextBoolean()) {
text.setCharAt(i, Character.toUpperCase(text.charAt(i)));
}
}
return text.toString();
}
public static String generateRandomSmiley() {
Random random = new Random();
switch (random.nextInt(10)) {
case 0: return ":)";
case 1: return ":D";
case 2: return ":*";
case 3: return "<3";
case 4: return "o.O";
case 5: return "x3";
case 7: return "xD";
case 8: return ":o";
default: return ";D";
}
}
}
Answer: Finally. A program to help me speak the same language as the cool kids. Thanks for that.
Use an array when it's good enough
The second part of convertToTrendyText needlessly uses a StringBuilder instance.
The power of StringBuilder is to efficiently build strings whose size is not known in advance.
In this method we do know in advance,
so there's no need for a StringBuilder,
an array from string.toCharArray() would be more than enough.
After replacing characters in the char[],
you could return it in a new String(...).
Destructive uppercasing
The first part of convertToTrendyText inserts some random smileys.
The second part of the method randomly uppercases some letters.
That risks ruining the following smileys: o.O, x3, xD.
I'm wondering if that's intended or not.
With this potential unintended side effect,
the text might become completely unreadable.
To avoid such destruction,
you could swap the first and second parts:
do the uppercasing first,
and insert smileys after.
That way the smileys will be unaffected by design.
One Random is enough
There's no need to create multiple instances of Random in a program.
It would be better to use just one.
That could be a good step in the direction of making the program testable,
because you will be able to set a seed to get reproducible output.
Getting a random value out of n values
The switch statement in generateRandomSmiley is a bit troublesome.
If you add a new smiley, you have to remember to increment the number in the random.nextInt(...) call, and add a correctly numbered case statement. Such a hassle.
If you use an array of smileys,
then the process of adding or removing values becomes a lot simpler,
more compact,
without having to worry about indexes.
private static final Random random = new Random();

private static final String[] SMILEYS = {
    ":)", ":D", ":*", "<3", "o.O", "x3", ";D", "xD", ":o", ";D"
};

public static String generateRandomSmiley() {
    return SMILEYS[random.nextInt(SMILEYS.length)];
}
Btw, did you notice that there is no case 6 line in your original switch statement?
That leads to getting the same value for it as the default case.
Not sure if that was intentional.
To preserve the behavior of the posted code,
I duplicated the default value ;D at index 6 (in addition to its natural index 9).
A word on performance
Every call like string = string.replace("...", "..."); has to iterate over all the content of the string.
That seems a bit wasteful.
In a toy program like this,
it doesn't really matter,
but it's worth keeping in mind. | {
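To make the performance point concrete, here is a hedged sketch of a single-pass variant. The class name TrendyTextSinglePass and the helper convertChars are invented for this example, and it covers only the character substitutions plus the random uppercasing, not the smiley insertion:

```java
import java.util.Random;

public class TrendyTextSinglePass {

    // Single pass: each character is examined once, instead of
    // rescanning the whole string for every replace() call.
    public static String convertChars(String input, Random random) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            char mapped;
            switch (c) {
                case 'g': mapped = 'q'; break;
                case 's': mapped = 'z'; break;
                case 'a': mapped = '@'; break;
                case 'o': mapped = '0'; break;
                default:  mapped = c;
            }
            // Random uppercasing, folded into the same pass
            // (toUpperCase is a no-op for '@' and '0').
            if (random.nextBoolean()) {
                mapped = Character.toUpperCase(mapped);
            }
            sb.append(mapped);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A seeded Random makes the output reproducible, which also helps testing.
        System.out.println(convertChars("boring text", new Random(42)));
    }
}
```

Each character is visited exactly once, whereas chained replace() calls rescan the entire string for every substitution.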
"domain": "codereview.stackexchange",
"id": 30423,
"tags": "java"
} |
Cheese-Burger-Waffles (aka Rock-Paper-Scissors) | Question: FizzBuzz was fun, and I got great feedback and learned a few things (which I hope I put in practice here), but only scratched the surface. I wanted to explore the LOLCODE language a bit more, so I implemented a little rock-paper-scissors, to play with functions, parameters, return values.. and user input.
Unfortunately I haven't figured out how to get compileonline to take more than a single value through STDIN, so I dropped the idea of making a GAEMLOOPZ and only made it a single round.
Also AFAICT there's no way of generating random numbers in LOLCODE (specs), so the LOLCAT always wins - it even has a seekrit move for invalid inputs!
HAI 1.2
I HAS A CHEEZ ITZ "CHEEZ" BTW "ROCK"
I HAS A BURGR ITZ "BURGR" BTW "PAPER"
I HAS A WAFLZ ITZ "WAFLZ" BTW "SCISSORS"
I HAS A SEEKRITMOOV ITZ "ZPOCK" BTW SHH, SEEKRIT
HOW IZ I GETAMOOV
VISIBLE "WHAT U PLAY?"
VISIBLE "[C]HEEZ [B]URGR [W]AFLZ"
I HAS A MOOV
GIMMEH MOOV
MOOV R I IZ VALID8MOOV YR MOOV MKAY
FOUND YR MOOV
IF U SAY SO
HOW IZ I VALID8MOOV YR MOOV
MOOV, WTF?
OMG "C", OMG "c"
MOOV R CHEEZ
GTFO
OMG "B", OMG "b"
MOOV R BURGR
GTFO
OMG "W", OMG "w"
MOOV R WAFLZ
GTFO
OMGWTF
ANY OF BOTH SAEM MOOV AN "ZPOCK" AN BOTH SAEM MOOV AN "zpock" MKAY
O RLY?
YA RLY, MOOV R SEEKRITMOOV
NO WAI
MOOV R SMOOSH "... WUTS " AN MOOV AN " ANYWAI" MKAY
OIC
OIC
FOUND YR MOOV
IF U SAY SO
HOW IZ I PIKAMOOV YR MOOV
I HAS A ULTIMAETMOOV
MOOV, WTF?
OMG "CHEEZ" BTW must be a constant value :(
ULTIMAETMOOV R BURGR
GTFO
OMG "BURGR" BTW must be a constant value :(
ULTIMAETMOOV R WAFLZ
GTFO
OMG "WAFLZ" BTW must be a constant value :(
ULTIMAETMOOV R CHEEZ
GTFO
OMG "ZPOCK" BTW must be a constant value :(
ULTIMAETMOOV R "Y U CHEAT? I"
GTFO
OMGWTF
ULTIMAETMOOV R SEEKRITMOOV
OIC
FOUND YR ULTIMAETMOOV
IF U SAY SO
HOW IZ I WINTEHGAEM YR MOOV AN YR ULTIMAETMOOV
VISIBLE SMOOSH "Y U PLAY " AN MOOV MKAY
VISIBLE SMOOSH "MAH " AN ULTIMAETMOOV AN " WINZ LOL" MKAY
IF U SAY SO
OBTW
was going to make a GAEMLOOP,
but stupidly I can't seem to figure out
how to pass multiple values to STDIN with this "IDE"...
so this program will only run a single round.
TLDR
HOW IZ I PLAYAMOOV
I HAS A MOOV
MOOV R I IZ GETAMOOV MKAY
I HAS A ULTIMAETMOOV
ULTIMAETMOOV R I IZ PIKAMOOV YR MOOV MKAY
I IZ WINTEHGAEM YR MOOV AN YR ULTIMAETMOOV MKAY
IF U SAY SO
VISIBLE "CHEEZ-BURGR-WAFLZ:)"
I IZ PLAYAMOOV MKAY
KTHXBYE
Outputs
Input: c or C
CHEEZ-BURGR-WAFLZ
WHAT U PLAY?
[C]HEEZ [B]URGR [W]AFLZ
Y U PLAY CHEEZ
MAH BURGR WINZ LOL
Input: b or B
CHEEZ-BURGR-WAFLZ
WHAT U PLAY?
[C]HEEZ [B]URGR [W]AFLZ
Y U PLAY BURGR
MAH WAFLZ WINZ LOL
Input: w or W
CHEEZ-BURGR-WAFLZ
WHAT U PLAY?
[C]HEEZ [B]URGR [W]AFLZ
Y U PLAY WAFLZ
MAH CHEEZ WINZ LOL
Input: zzz ..or pretty much anything else:
CHEEZ-BURGR-WAFLZ
WHAT U PLAY?
[C]HEEZ [B]URGR [W]AFLZ
Y U PLAY ... WUTS zzz ANYWAI
MAH ZPOCK WINZ LOL
Input: zpock or ZPOCK
CHEEZ-BURGR-WAFLZ
WHAT U PLAY?
[C]HEEZ [B]URGR [W]AFLZ
Y U PLAY ZPOCK
MAH Y U CHEAT? I WINZ LOL
I don't like that I'm hard-coding the strings in the WTF (switch) block in VALID8MOOV, but I don't think it's possible to extract just the first letter of a YARN (string)... or is it? Also there doesn't seem to be a way of defining compile-time constants, so I had to put literal strings in PIKAMOOV as well.
Answer: HAI 1.2
I HAZ A REVIEW ITZ "Following" BTW Let's roll ;)
I got a few points I want to make here, and the first is actually quite simple: don't write your variable names in INTORN3TZP34KZ...
Yes I know it's an esoteric language, and I know that all the instructions are defined to be "Internet-Speaks". But that doesn't mean you absolutely have to make your code follow the spec definitions.
(Who makes their Stack a class anyway..)
CHEEZ, BURGR, WAFLZ, SEEKRITMOOV, ... these names are suboptimal. (Pardon my non-french).
You had it pointed out to you in your previous question, that the documentation uses lowercase for variables. (Seems the specs changed that...)
I think you should adopt either that or at least anything
but SHOUTCASE! There is no need to give a darn about convention when convention is dumb ;)
Especially with the missing syntax highlighting it is hard to read your code and differentiate between instructions and identifiers.
Additionally to that I want to suggest using non-speakzified variable names. Simply because it's easier on the brain to not have to unwrap the variable name before being allowed to think about what the variable name means in the first place.
Moving swiftly on from names to... names..
PIKAMOOV - Is it just me, or does that call up associations with Pokémon in your head??
VALID8MOOV - What's the 8 anyways???
The names you chose are confusing. Might (wink wink) be because you actually made them Internetspeakz again :(
Next!
Whitespace / Newlines
OIC
FOUND YR ULTIMAETMOOV
IF U SAY SO
HOW IZ I WINTEHGAEM YR MOOV AN YR ULTIMAETMOOV
let's see how you'd put braces in Java (or C# for that matter) for this code:
}
return ultimateMoov;
}
void winthegaem (Object moov, Object ultimatemoov) {
You might want to additionally emphasize the difference between IF U SAY SO and HOW IZ I. I'd either add an additional newline between these or remove them on "closing the braces".
Comments
OBTW
was going to make a GAEMLOOP,
but stupidly I can't seem to figure out
how to pass multiple values to STDIN with this "IDE"...
so this program will only run a single round.
TLDR
Hmm.. Interesting. But completely irrelevant for the code. This comment is not helping. You did mention it in your explanation, so in the code it's meaningless.
Reading the code does not become easier with reading this comment. Remove it.
OMGWTF
Y U NO?
OMG "ZPOCK", OMG "zpock"
MOOV R SEEKRITMOOV
GTFO
OMGWTF
MOOV R SMOOSH "... WUTS " AN MOOV AN " ANYWAI" MKAY
OIC
KTHXBYE
BTW End of review ;) | {
"domain": "codereview.stackexchange",
"id": 9636,
"tags": "game, rock-paper-scissors, lolcode"
} |
Ricci theorem and 4-th divergence of Energy-momentum/ Einstein tensor | Question: I know that Ricci theorem says that absolute differential of metric $g_{ij}$ :
$D(g_{ij}) = (\nabla_{k} g_{ij}) \text{d}x^{k} = 0$
So we can write : $\nabla_{k} g_{ij} = 0$
with $k$ that can be equal to $i$ or $j$.
On the other side, in General relativity, we have the 4-th divergence of $G_{\mu\nu}$ and $T_{\mu\nu}$ which is equal to 0 :
$\nabla_{\mu}T_{\mu\nu} = \nabla_{\mu}G_{\mu\nu} = 0$
Can we make a link between the Ricci theorem for the metric $g_{ij}$ and these 4-th divergences equal to zero?
I mean, we cannot have (unlike with Ricci theorem) :
$\nabla_{k}T_{\mu\nu} = \nabla_{k}G_{\mu\nu} = 0$ for $k\neq \mu$ and $k\neq \nu$, isn't it ?
So, can we say, from a particular point of view, that Ricci theorem is more general, in the way that we have for any $k$ index (including $i$ or $j$) : $\nabla_{k} g_{ij} = 0$ ??
UPDATE 1:
You say that "The tensor equation $\nabla_{\rho} g_{\mu\nu} = 0$ ensures invariance of quantities such as $g_{\mu\nu}v^{\mu} w^{\nu}$ under parallel transportations": how to prove it?
Maybe If I take (with $\tau$ curvilinear abscissa) : $\nabla_{\rho} g_{\mu\nu} \dfrac{\text{d}x^{\rho}}{\text{d}\tau} = 0\quad\quad(1)$,
which would imply :
$(\partial_{\rho} g_{\mu\nu} - g_{\mu\alpha}\Gamma^{\alpha}_{\rho\nu} - g_{\nu\alpha}\Gamma^{\alpha}_{\rho\mu}) \dfrac{\text{d}x^{\rho}}{\text{d}\tau} = 0$
but how to introduce $v^{\mu}$ and $w^{\nu}$ ?
I know that elementary length (like $\text{d}\tau$ above) $\text{d}s^2 = g_{\mu\nu} \text{d}x^{\mu} \text{d}x^{\nu}$ is invariant, so we could have :
$1 = g_{\mu\nu} \dfrac{\text{d}x^{\mu}}{\text{d}s} \dfrac{\text{d}x^{\nu}}{\text{d}s} = g_{\mu\nu} v^{\mu} w^{\nu} = \text{constant}\quad\quad(2)$
but how to connect (2) and (1) ? or get (2) from (1) ?
Any help is welcome, regards
Answer: The ``divergence-free'' equations,
\begin{equation}
\nabla_{\mu}G^{\mu\nu} = \nabla_{\mu}T^{\mu\nu}=0
\end{equation}
are just conservation laws (remember that the repeated index $\mu$ is summed so the above equations are continuity equations that suggest local conservation).
The fact that $\nabla_{\rho}g_{\mu\nu}=0$ is a completely different statement. To make it more obvious, remember that the metric tensor is essentially nothing more than the inner product of your axes' unit vectors; it is a way of saying ``how perpendicular your coordinates are''. The tensor equation $\nabla_{\rho}g_{\mu\nu}=0$ ensures invariance of quantities such as $g_{\mu\nu}w^{\mu}v^{\nu}$ under parallel transportations, and it basically comes from the strong equivalence principle and the fact that the metric tensor is a ... tensor! (In flat spacetime $\eta_{\mu\nu}$, i.e. local inertial frames, the equation $\partial_{\rho}\eta_{\mu\nu} = 0$ holds, so it must hold in all frames of reference.)
To sum it up, $\nabla_{\rho}g_{\mu\nu}=0$ is a unique tensor equation holding for the metric tensor itself, but $\nabla_{\mu}G^{\mu\nu} = \nabla_{\mu}T^{\mu\nu}=0$ are just manifestations of conservation laws in curved spacetimes.
UPDATE 1
The invariance of a quantity $(\dots)$ under parallel transportation along a direction with tangent 4-vector $t^{\rho}$ is expressed by the condition $t^{\rho}\nabla_{\rho}(\dots)=0$. For the invariant quantity $g_{\mu\nu}w^{\mu}v^{\nu}$ this means,
\begin{equation}\begin{aligned}
t^{\rho}\nabla_{\rho} (g_{\mu\nu}w^{\mu}v^{\nu}) &= t^{\rho}\nabla_{\rho}g_{\mu\nu}w^{\mu}v^{\nu} + t^{\rho}g_{\mu\nu}\nabla_{\rho}w^{\mu}v^{\nu} + t^{\rho}g_{\mu\nu}w^{\mu}\nabla_{\rho}v^{\nu} \\
&= t^{\rho}\nabla_{\rho}g_{\mu\nu}w^{\mu}v^{\nu} = 0 \\
&\overset{\text{to hold for any } t^{\rho},\, w^{\mu},\, v^{\nu}}{\Rightarrow} \nabla_{\rho}g_{\mu\nu} = 0
\end{aligned}\end{equation}
where the fact that parallel transportation of 4-vectors also means $t^{\rho}\nabla_{\rho}w^{\mu} = t^{\rho}\nabla_{\rho}v^{\nu} = 0$ was used. | {
"domain": "physics.stackexchange",
"id": 47958,
"tags": "metric-tensor, tensor-calculus, stress-energy-momentum-tensor"
} |
Space complexity analysis of binary recursive sum algorithm | Question: I was reading page 147 of Goodrich and Tamassia, Data Structures and Algorithms in Java, 3rd Ed. (Google books).
It gives example of linear sum algorithm which uses linear recursion to calculate sum of all elements of the array:
Algorithm linearSum (arr , n)
if (n == 1)
return arr[0]
else
return linearSum (arr , n-1) + arr[n-1]
end linearSum
And the binary sum algorithm which uses binary recursion to calculate sum of all elements of the array:
Algorithm binarySum (arr, i, n)
if (n == 1)
return arr[i]
return binarySum (arr, i, ⌈n/2⌉) + binarySum (arr, i+⌈n/2⌉, ⌊n/2⌋)
end binarySum
It further says:
The value of parameter $n$ is halved at each recursive call binarySum(). Thus, the
depth of the recursion, that is, the maximum number of method
instances that are active at the same time, is $1 + \log_2 n$. Thus the
algorithm binarySum() uses $O(\log n)$ additional space. This is big
improvement over $O(n)$ needed by the linearSum() algorithm.
I did not understand how the maximum number of method instances that are active at the same time is $1 + \log_2 n$.
For example consider the below calls to method with method parameters given in rounded box:
Then in the two recursive calls of the second row from the top, $n = 8$. So, $1 + \log_2 8 = 4$. Now I don't get: what maximum limit does this 4 represent?
Answer:
In a given tree, all the vertices of this tree correspond to binarySum() calls.
The value of parameter n to binarySum() is halved at each recursive call.
Also, each recursive call finishes only after all its children finish. Thus at each recursive call, the number of active calls includes all the ancestor calls in the call sequence.
Thus when any binarySum() call corresponding to a leaf in the above call tree is active, its parent, grandparent and so on are still active as well.
Thus, the depth of the recursion (that is, the maximum number of method instances that are active at the same time, which is always equal to the height of the recursive call tree) is $1 + \log_2 n$.
For example, in the binarySum() recursive call tree above, with n = 8, at any of the calls corresponding to leaves, the leaf call and its three ancestors (up to the root) are active at the same time, giving $1 + \log_2 8 = 4$ active instances.
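As an illustration (a sketch in Python, not the book's code), the pseudocode above can be instrumented to record the maximum number of simultaneously active calls; the depth and tracker arguments are additions for measurement only:

```python
# Instrumented version of binarySum: `tracker` records the maximum
# recursion depth, i.e. the most calls active at the same time.
def binary_sum(arr, i, n, depth=1, tracker=None):
    if tracker is not None:
        tracker[0] = max(tracker[0], depth)
    if n == 1:
        return arr[i]
    half = (n + 1) // 2  # ceil(n / 2)
    return (binary_sum(arr, i, half, depth + 1, tracker)
            + binary_sum(arr, i + half, n - half, depth + 1, tracker))

arr = list(range(8))          # n = 8
tracker = [0]
total = binary_sum(arr, 0, len(arr), tracker=tracker)
print(total, tracker[0])      # prints: 28 4
```

With n = 8 the measured depth is 4, matching $1 + \log_2 8$.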
Why does the depth of recursion affect the space required by an algorithm? Each recursive function call usually needs to allocate some additional memory (for temporary data) to process its arguments. At the very least, each such call has to store some information about its parent call, just to know where to return after finishing. Let's imagine you are performing a task, and you need to perform a sub-task inside this first task; you need to remember (or write down on paper) where you stopped in the first task, to be able to continue it after you finish the sub-task. And so on, a sub-sub-task inside a sub-task... So, a recursive algorithm will require O(depth of recursion) space.
@randomA mentioned the Call Stack, which is normally used when a function invokes another function (including itself). The call stack is the part of the computer memory, where a recursive algorithm allocates its temporary data. | {
"domain": "cs.stackexchange",
"id": 3208,
"tags": "algorithm-analysis, space-complexity, recursion"
} |
mixed AJAX/Javascript form validation check | Question: I've got a form for which the "Send" button should only be available once every form field has been validated. For most of my checks, I call a function checkFormValue(), which gathers all the simple checks, gives a warning where a check failed, and disables the button if at least one check fails. The calls are made on the blur event for each field of the form.
function checkFormValue(){
var form = document.myform;
//initialize a few variables with checks, amongst them the following
//These are examples of checks which are pure JavaScript functions
var checkIDField = checkField(form.myids) && checkIDs();
//...
//This is an example of AJAX function
if(checkIDField){
checkIDValidity();
}
//Checks variables are gathered under one variable
var validform = checkIDField && CheckContactField && CheckPathsField;
var warnings = document.getElementsByClassName('warning');
var i;
var count = 0;
//Loop to check the number of warnings
for (i = 0; i < warnings.length; i++){
// Check whether this warning is currently displayed.
if(warnings[i].style.display===""){
count++
}
}
validform = validform && (count === 0);
document.form.send.disabled = !(validform);
}
A few checks need to be done on the server (SELECT queries, checking if some files exist, etc.). For those checks, I do AJAX calls, and I added in checkFormValue() a loop which counts the number of warnings displayed. If none are displayed, then the button is no longer disabled.
One of my fields should contain a series of IDs, in the form of a space-separated list. After checking that the list is correctly formatted (a list of integers separated by spaces, no duplicates), I want to see if those IDs exist in the database. The following function sets up the AJAX call:
function checkIDValidity(){
var xmlhttp;
if (window.XMLHttpRequest){
//code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}else{
// code for IE6, IE5
xmlhttp=new ActiveXObject('Microsoft.XMLHTTP');
}
xmlhttp.onreadystatechange=function(){
if (xmlhttp.readyState===4 && xmlhttp.status===200){
var existingIDs = xmlhttp.responseText;
var idcomponent = document.myform.myids;
var idwarningdiv = document.getElementById('nonexistingid');
if(existingIDs==="Existing"){
idcomponent.setAttribute('class', 'valid');
if(idwarningdiv.style.display !== 'none'){
idwarningdiv.style.display='none';
idwarningdiv.innerHTML='';
}
}else{
idcomponent.setAttribute('class', 'invalid');
if(idwarningdiv.style.display !== ''){
idwarningdiv.style.display='';
}
if(idwarningdiv.innerHTML.substr(51,existingIDs.length)!==existingIDs){
idwarningdiv.innerHTML='<br /><img src="img/warning2.png" alt="Warning!" />'+existingIDs+' is not an existing id.';
}
}
}
};
var idnumbers = document.myform.myids.value;
var d=new Date();
//d.toUTCString() is used so the result never get cached
xmlhttp.open('GET', 'checkidvalidity.php?id='+idnumbers+'&rand='+d.toUTCString(), true);
xmlhttp.send();
}
When I enter an ID which doesn't exist in the database, I get a warning, and if the ID is valid, the warning disappears. But the enabled/disabled status of the "Send" button is not immediately updated; on the contrary, I need to click twice on other fields in the form before I can see any change on the button.
Adding a listener on the warning div for non-existent IDs, triggering on the propertyChanged event and calling checkFormValue, at first did the trick, but that event is only supported in IE, so it didn't work in other browsers. Using the mutation event "DOMAttrModified" where it is supported helped for Firefox and Opera.
function formLink(objectid){
var xmlhttp;
if (window.XMLHttpRequest){
//code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}else{
// code for IE6, IE5
xmlhttp=new ActiveXObject('Microsoft.XMLHTTP');
}
xmlhttp.onreadystatechange=function(){
if (xmlhttp.readyState===4 && xmlhttp.status===200){
document.getElementById('terminal').innerHTML=xmlhttp.responseText;
if(isDOMAttrModifiedSupported()){
document.getElementById("nonexistingid").addEventListener("DOMAttrModified",
checkFormValue,
false);
}
checkFormValue();
}
};
xmlhttp.open('GET', 'myform.php?id='+objectid, true);
xmlhttp.send();
}
Details on the isDomAttrModifiedSupported function can be found on this post about detecting if the DomAttrModified event listener is supported.
But using that approach leaves me with no solution for the WebKit browsers (Chrome, Safari). Is there a way to refactor my code so that I can do my AJAX call and have it result in an update of the disabled status of the "Send" button, without discarding the results of my other simple checks? In other words, is my validation approach correct?
PS: I get some restraints on this project, and I am not allowed to use jQuery.
Answer: I would approach it slightly differently.
Have a function to call whenever a field's validated state changes:
function checkFormValues(){
var invalidFields = 0;
function updateValidFields(isValidField)
{
invalidFields += isValidField ? -1 : 1;
if(!invalidFields)
{
// Enable Submit Button
// ???
// Profit
}
else
{
// Disable submit Button
// Fail!
}
}
// etc
}
then your validation functions look something like:
function validateMe()
{
//Always assume invalid to start with
updateValidFields(false);
// do awesome field checking.
if(thisFieldIsValid)
{
updateValidFields(true);
}
}
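The enable-on-zero counter pattern here is language-agnostic. As a minimal sketch in Python (a hypothetical FormValidator class, not part of the page's JavaScript), the submit button is enabled exactly when the invalid-field count reaches zero:

```python
class FormValidator(object):
    """Track how many fields are currently invalid; the submit
    button should be enabled exactly when that count is zero."""

    def __init__(self, field_count):
        # Every field starts out unvalidated, i.e. invalid.
        self.invalid = field_count

    def update(self, became_valid):
        # A field reports a state change: valid decrements, invalid increments.
        self.invalid += -1 if became_valid else 1
        return self.invalid == 0  # True means "enable the submit button"

v = FormValidator(2)
assert v.update(True) is False   # one field valid, one still invalid
assert v.update(True) is True    # all fields valid: enable the button
assert v.update(False) is False  # an async check failed: disable again
```

In the JavaScript version, the AJAX callback simply becomes one more caller of the update function, so the Send button's state is recomputed as soon as the response arrives.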
then your checkValidID function would look like:
function checkIDValidity(){
updateValidFields(false);
and the readyState change function:
xmlhttp.onreadystatechange=function(){
if (xmlhttp.readyState===4 && xmlhttp.status===200){
var existingIDs = xmlhttp.responseText;
var idcomponent = document.myform.myids;
var idwarningdiv = document.getElementById('nonexistingid');
if(existingIDs==="Existing"){
// Do valid stuff
updateValidFields(true);
}else{
//do invalid stuff.
}
}
}; | {
"domain": "codereview.stackexchange",
"id": 680,
"tags": "php, javascript, ajax"
} |
Are there any theories why such an imbalance in chirality of molluscs? | Question: Most gastropods exhibit sinistral (right hand) winding of their shells, but very few species are anti-sinistral. Have there been any theories as to why such a great difference?
Answer: Why so many molluscs exhibit sinistral winding?
Estimates of the number of mollusc species vary greatly, between 50,000 and 200,000 species. Of those, about 70,000 are Gastropoda, the most diverse class within Mollusca.
The winding you describe is present in all Gastropoda and is often called torsion. So the answer to why so many molluscs undergo torsion is simply phylogenetic non-independence: torsion evolved only once, in the ancestor of the gastropods. The answer to the more specific question, why torsion is right-handed rather than left-handed in all gastropods, is phylogenetic non-independence again. Torsion evolved only once and was therefore either sinistral or anti-sinistral from the start. There is no need for a separate explanation of why they are all sinistral, because the observations are not independent.
Why did torsion evolve in the first place?
One may ask: but why did torsion evolve in the first place? I think the reasons are still to be discovered. The following is a summary of what I read on wikipedia (torsion#evolution).
"Why torsion is bad"
As a result of this torsion, the anus ends up next to the mouth, which is an obvious hygiene issue and therefore seems rather deleterious. Moreover, there is a whole set of issues with organs twisting around and becoming entwined. Ventilation also seems to be reduced by torsion, which is quite deleterious.
"Why torsion is good"
However, because there's no "hole" left in the posterior position, torsion may help keep sediment out. Some have suggested that torsion allowed sensory organs to move closer to the head. The most likely explanation is that torsion evolved as a defense mechanism against predation, as it allows an organism to hide its head behind its shell. Finally, citing from wikipedia:
The evolution of an asymmetrical conispiral shell allowed gastropods to grow larger but resulted in an unbalanced shell. Torsion allows repositioning of the shell, bringing the centre of gravity back to the middle of the gastropod’s body, and thus helps prevent the animal or the shell from falling over.
Note also that
Whatever original advantage resulted in the initial evolutionary success of torsion, subsequent adaptations linked to torsion have provided modern gastropods with further advantages.
Why sinistral rather than anti-sinistral
To repeat myself, we have only a single observation of torsion (as it evolved only once), and this observation is either sinistral or anti-sinistral. It therefore seems quite likely that stochastic processes drove the evolution of sinistral (rather than anti-sinistral) torsion. In other words, the first mutation allowing some degree of torsion probably caused a sinistral torsion, and that is it.
But there might eventually be a more functional reason why torsion evolved to be sinistral. The reasons would then be related to the already existing asymmetry of the organs. For example, anti-sinistral torsion might lead to more entwining between the gut and the respiratory system, or might squeeze too much the lung that is already the smaller of the two due to the presence of circulatory organs. I don't have enough knowledge of the anatomy of the molluscs' ancestors to have a good intuition about whether sinistral or anti-sinistral torsion would be more beneficial. | {
"domain": "biology.stackexchange",
"id": 4205,
"tags": "evolution, invertebrates"
} |
Monopoly simulator | Question: I was advised by a Reddit user to get my code reviewed on this site.
The complete code is on GitHub.
# Monopoly Simulator
# http://img.thesun.co.uk/aidemitlum/archive/01771/Monopoly2_1771742a.jpg
from random import randint
piece = 0
jail = 0
iteration = 0
brown1 = 0
brown2 = 0
light_blue1 = 0
light_blue2 = 0
light_blue3 = 0
pink1 = 0
pink2 = 0
pink3 = 0
orange1 = 0
orange2 = 0
orange3 = 0
red1 = 0
red2 = 0
red3 = 0
yellow1 = 0
yellow2 = 0
yellow3 = 0
green1 = 0
green2 = 0
green3 = 0
dark_blue1 = 0
dark_blue2 = 0
num = input("How many rolls do you want to simulate? ")
for h in range(0, num):
piece += randint(2, 12)
if piece > 40:
piece -= 40
iteration += 1
#print("Around the board %d times so far" %(iteration)) ### Optional
#print(piece) ### Optional
#Jail
if piece == 30:
piece = 10
jail += 1
#print("JAIL") ### Optional
#Brown
if piece == 1:
brown1 += 1
if piece == 3:
brown2 += 1
#Light Blue
if piece == 6:
light_blue1 += 1
if piece == 8:
light_blue2 += 1
if piece == 9:
light_blue3 += 1
#Pink
if piece == 11:
pink1 += 1
if piece == 13:
pink2 += 1
if piece == 14:
pink3 += 1
#Orange
if piece == 16:
orange1 += 1
if piece == 18:
orange2 += 1
if piece == 19:
orange3 += 1
#Red
if piece == 21:
red1 += 1
if piece == 23:
red2 += 1
if piece == 24:
red3 += 1
#Yellow
if piece == 26:
yellow1 += 1
if piece == 27:
yellow2 += 1
if piece == 29:
yellow3 += 1
#Green
if piece == 31:
green1 += 1
if piece == 32:
green2 += 1
if piece == 34:
green3 += 1
#Dark Blue
if piece == 37:
dark_blue1 += 1
if piece == 39:
dark_blue2 += 1
brown = brown1 + brown2
light_blue = light_blue1 + light_blue2 + light_blue3
pink = pink1 + pink2 + pink3
orange = orange1 + orange2 + orange3
red = red1 + red2 + red3
yellow = yellow1 + yellow2 + yellow3
green = green1 + green2 + green3
dark_blue = dark_blue1 + dark_blue2
#Prints all the Statistics
print("\n\n")
print("Brown = %d" %(brown))
print("Light Blue = %d" %(light_blue))
print("Pink = %d" %(pink))
print("Orange = %d" %(orange))
print("Red = %d" %(red))
print("Yellow = %d" %(yellow))
print("Green = %d" %(green))
print("Dark Blue = %d" %(dark_blue))
print("\n")
print("Brown 1 = %d" %(brown1))
print("Brown 2 = %d" %(brown2))
print("\n")
print("Light Blue 1 = %d" %(light_blue1))
print("Light Blue 2 = %d" %(light_blue2))
print("Light Blue 3 = %d" %(light_blue3))
print("\n")
print("Pink 1 = %d" %(pink1))
print("Pink 2 = %d" %(pink2))
print("Pink 3 = %d" %(pink3))
print("\n")
print("Orange 1 = %d" %(orange1))
print("Orange 2 = %d" %(orange2))
print("Orange 3 = %d" %(orange3))
print("\n")
print("Red 1 = %d" %(red1))
print("Red 2 = %d" %(red2))
print("Red 3 = %d" %(red3))
print("\n")
print("Yellow 1 = %d" %(yellow1))
print("Yellow 2 = %d" %(yellow2))
print("Yellow 3 = %d" %(yellow3))
print("\n")
print("Green 1 = %d" %(green1))
print("Green 2 = %d" %(green2))
print("Green 3 = %d" %(green3))
print("\n")
print("Dark Blue 1 = %d" %(dark_blue1))
print("Dark Blue 2 = %d" %(dark_blue2))
print("\n")
print("You've been jailed %d times" %(jail))
#The Board
#Calculating highest number of digits (for board formatting)
places = [brown1, brown2, light_blue1, light_blue2, light_blue3, pink1, pink2, pink3,
orange1, orange2, orange3, red1, red2, red3, yellow1, yellow2, yellow3,
green1, green2, green3, dark_blue1, dark_blue2]
digit = 0
temp = 0
for place in places:
while place / 10 >= 1:
place /= 10
temp += 1
temp += 1
if temp > digit:
digit = temp
temp = 0
#Creating Blanks & Spaces
blank = "-"
space = " "
for i in range(0, digit - 1):
blank += "-"
space += " "
#Formatting all the places, so that they have "temp" digits
formatted = []
placelen = 0
for place in places:
holder = place
form = 0
while holder / 10 >= 1:
holder /= 10
placelen += 1
placelen += 1
if placelen != digit:
form = format(place, "0%d" %(digit))
else:
form = str(place)
placelen = 0
formatted.append(form)
brown1 = formatted[0]
brown2 = formatted[1]
light_blue1 = formatted[2]
light_blue2 = formatted[3]
light_blue3 = formatted[4]
pink1 = formatted[5]
pink2 = formatted[6]
pink3 = formatted[7]
orange1 = formatted[8]
orange2 = formatted[9]
orange3 = formatted[10]
red1 = formatted[11]
red2 = formatted[12]
red3 = formatted[13]
yellow1 = formatted[14]
yellow2 = formatted[15]
yellow3 = formatted[16]
green1 = formatted[17]
green2 = formatted[18]
green3 = formatted[19]
dark_blue1 = formatted[20]
dark_blue2 = formatted[21]
#Making the Board
board = [
[blank, red1, blank, red2, red3, blank, yellow1, yellow2, blank, yellow3, blank],
[orange1, space, space, space, space, space, space, space, space, space, green1],
[orange2, space, space, space, space, space, space, space, space, space, green2],
[blank, space, space, space, space, space, space, space, space, space, blank],
[orange3, space, space, space, space, space, space, space, space, space, green3],
[blank, space, space, space, space, space, space, space, space, space, blank],
[pink3, space, space, space, space, space, space, space, space, space, blank],
[pink2, space, space, space, space, space, space, space, space, space, dark_blue1],
[blank, space, space, space, space, space, space, space, space, space, blank],
[pink1, space, space, space, space, space, space, space, space, space, dark_blue2],
[blank, light_blue1, light_blue2, blank, light_blue3, blank, blank, brown2, blank, brown1, "GO"]
]
#Drawing the Board
print("\n")
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[0][0],
board[0][1],
board[0][2],
board[0][3],
board[0][4],
board[0][5],
board[0][6],
board[0][7],
board[0][8],
board[0][9],
board[0][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[1][0],
board[1][1],
board[1][2],
board[1][3],
board[1][4],
board[1][5],
board[1][6],
board[1][7],
board[1][8],
board[1][9],
board[1][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[2][0],
board[2][1],
board[2][2],
board[2][3],
board[2][4],
board[2][5],
board[2][6],
board[2][7],
board[2][8],
board[2][9],
board[2][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[3][0],
board[3][1],
board[3][2],
board[3][3],
board[3][4],
board[3][5],
board[3][6],
board[3][7],
board[3][8],
board[3][9],
board[3][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[4][0],
board[4][1],
board[4][2],
board[4][3],
board[4][4],
board[4][5],
board[4][6],
board[4][7],
board[4][8],
board[4][9],
board[4][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[5][0],
board[5][1],
board[5][2],
board[5][3],
board[5][4],
board[5][5],
board[5][6],
board[5][7],
board[5][8],
board[5][9],
board[5][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[6][0],
board[6][1],
board[6][2],
board[6][3],
board[6][4],
board[6][5],
board[6][6],
board[6][7],
board[6][8],
board[6][9],
board[6][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[7][0],
board[7][1],
board[7][2],
board[7][3],
board[7][4],
board[7][5],
board[7][6],
board[7][7],
board[7][8],
board[7][9],
board[7][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[8][0],
board[8][1],
board[8][2],
board[8][3],
board[8][4],
board[8][5],
board[8][6],
board[8][7],
board[8][8],
board[8][9],
board[8][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[9][0],
board[9][1],
board[9][2],
board[9][3],
board[9][4],
board[9][5],
board[9][6],
board[9][7],
board[9][8],
board[9][9],
board[9][10]
))
print(" %s | %s | %s | %s | %s | %s | %s | %s | %s | %s | %s " %(board[10][0],
board[10][1],
board[10][2],
board[10][3],
board[10][4],
board[10][5],
board[10][6],
board[10][7],
board[10][8],
board[10][9],
board[10][10]
))
I've only been programming for about a month now, so I know that there are many ways to improve and optimize my code. Just looking through it, I seem to be repeating some code so abstracting them using functions could be a possibility.
I should mention that I am aware of the mistake of simply doing randint(2, 12). On my Github page, I have changed it so that I do randint(1, 6) twice and add them together. This is important because it changes the probability.
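That distribution difference is easy to verify empirically. A quick Python 3 sketch, separate from the script under review:

```python
from collections import Counter
from random import randint, seed

seed(0)  # fixed seed so the comparison is reproducible
N = 100000

# One call: every sum from 2 to 12 is equally likely (1/11 each).
flat = Counter(randint(2, 12) for _ in range(N))

# Two dice: sums are weighted; 7 has probability 6/36, while 2 has only 1/36.
dice = Counter(randint(1, 6) + randint(1, 6) for _ in range(N))

print("P(7), one call :", flat[7] / float(N))  # close to 1/11, about 0.091
print("P(7), two dice :", dice[7] / float(N))  # close to 6/36, about 0.167
```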
I appreciate any help and suggestions that you can offer!
Answer: You really need to remove some of your global variables.
As an example I'd change all your browns to use a list instead.
brown = [0, 0]
As you do a lot of logic over brown, cyan, pink, etc, I'd make a dictionary.
Dictionaries are like lists, but they can have (almost) any value as a key, and they have a few extra features that I'll use here.
And so I'd use:
places = {
'Brown': [0, 0],
'Cyan': [0, 0, 0],
'Pink': [0, 0, 0],
'Orange': [0, 0, 0],
'Red': [0, 0, 0],
'Yellow': [0, 0, 0],
'Green': [0, 0, 0],
'Blue': [0, 0, 0]
}
A lot of your logic is based around where these places are.
To reduce duplicate information I'd make another dictionary, telling us which piece is there.
And so I'd use:
board = {
1: ('Brown', 0),
3: ('Brown', 1),
6: ('Cyan', 0),
8: ('Cyan', 1),
9: ('Cyan', 2),
11: ('Pink', 0),
13: ('Pink', 1),
14: ('Pink', 2),
16: ('Orange', 0),
18: ('Orange', 1),
19: ('Orange', 2),
21: ('Red', 0),
23: ('Red', 1),
24: ('Red', 2),
26: ('Yellow', 0),
27: ('Yellow', 1),
29: ('Yellow', 2),
31: ('Green', 0),
32: ('Green', 1),
34: ('Green', 2),
37: ('Blue', 0),
39: ('Blue', 1),
}
As you should see, this is easier to use than a list, as we don't have to provide placeholder entries for squares that hold nothing.
I would then go on to change almost all your logic to use these.
It will significantly reduce the amount of lines in your code and will make your code easier to read and understand.
First things first, I'd change your first loop to use these. I'd not change iteration and jail, as they don't fit into the objects we created above.
However I'd change all your other ifs to use dictionary.get.
With a bit of tuple unpacking you can simplify them all to:
house_set, place = board.get(piece, (None, None))
if house_set is not None:
places[house_set][place] += 1
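The get-with-a-tuple-default idiom can be tried in isolation. Here is a tiny two-square board (made-up values, purely for illustration):

```python
board = {1: ('Brown', 0), 3: ('Brown', 1)}
places = {'Brown': [0, 0]}

for piece in (1, 2, 3):
    house_set, place = board.get(piece, (None, None))
    if house_set is not None:
        places[house_set][place] += 1

# Square 2 fell through to the (None, None) default and was skipped.
print(places)  # {'Brown': [1, 1]}
```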
I'd then go on to change how you calculate and display the totals.
Using a dictionary comprehension and sum.
You want the sum of the 'house_set' and to know the 'place'.
And to get both you need to go through dict.items().
totals = {place: sum(house_set) for place, house_set in places.items()}
To then display this you can use a for loop over totals.items().
Where you display the place and the amount.
for place, amount in totals.items():
print('{} = {}'.format(place, amount))
After this you display all the numbers, from brown 1 to blue 2.
I'd use the same as above but using places rather than totals.
However this returns a list that we'll have to loop through.
I'd loop through this and display them as we did above but you need to know it's 'brown 1' rather than 'brown'.
And so I'd use enumerate with its optional start argument to simplify the logic of getting these numbers.
I'd then use:
for place, house_set in places.items():
for i, amount in enumerate(house_set, 1):
print('{} {} = {}'.format(place, i, amount))
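The second argument to enumerate starts the counter at 1 rather than 0, which is what produces the human-friendly labels. In isolation, with made-up counts:

```python
places = {'Brown': [12, 15]}  # hypothetical counts for one colour group

lines = []
for place, house_set in places.items():
    for i, amount in enumerate(house_set, 1):
        lines.append('{} {} = {}'.format(place, i, amount))

print(lines)  # ['Brown 1 = 12', 'Brown 2 = 15']
```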
Finally you print the amount of times you've been jailed before you print a nice looking board.
This is roughly the same as before, but since I use str.format rather than %, I'd change it to keep things consistent.
print("You've been jailed {} times".format(jail))
I'd drastically change how you display the board. I'm not saying this is the best way;
it's just simpler than typing space and blank a lot.
It will also show that the board is really just a line.
Currently you use a while loop and divide by ten repeatedly to find the number of digits a number has.
This works, but it's not as clear as converting the number to a string and taking its length.
You also find the largest number to get this length from.
You do this by manually writing out places; instead I'd use a generator expression.
As we will go through (my) places again but we don't need the name you can use dict.values() to get the values.
And so would result in:
digit = len(str(max(amount for house_set in places.values() for amount in house_set)))
After this you make blank and space this size manually. Instead you can use " " * digit.
This will allow you to duplicate the string that many times. And so simplifies the code:
blank = "-" * digit
space = " " * digit
After this you make a formatted list. I'd instead format places.
However to be able to keep the original I'd use a dictionary comprehension and a list comprehension.
This will be like the for loops we used earlier to display the numbers.
I'd also use str.format the format you are using is simple in this mini-language.
You want the number to be preceded by zeros, and so you'd use something like {:0>digit}.
However you want to pass the digit. To simplify the logic you can use {{:0>{}}}.format(digit).format(data).
Where data is the data we will be formatting, as you did before.
And so I'd use:
place_format = '{{:0>{}}}'.format(digit)
formatted = {
place: [place_format.format(amount) for amount in house_set]
for place, house_set in places.items()
}
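The doubled braces are how literal { and } are escaped inside str.format, so the first format call builds an ordinary format string that the second call then applies:

```python
digit = 3
place_format = '{{:0>{}}}'.format(digit)  # the outer braces survive as literals
print(place_format)            # {:0>3}
print(place_format.format(7))  # 007
```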
Finally I'll use three more steps where you used two.
This is to reduce duplicated logic, but adds some code that is harder to read.
As we constructed the board's positions earlier in board, we know where to place things in the board's line.
We also know that this line runs around the outside of the square board, and you chose to put the origin at the bottom right.
Before this gets too complicated, I'll explain how I'd make the board's line.
You know there are 40 places on the board.
Of these 40 you have said all the places where houses/streets are.
This allows you to use the dictionary we constructed before with board.get again.
If there is no piece at that position in the dictionary, then we know that it should be blank.
Finally no matter what, the first item on the board is 'go'.
This allows you to do:
board_line = []
for index in range(40):
place = board.get(index, None)
if place is None:
value = blank
else:
value = formatted[place[0]][place[1]]
board_line.append(value)
board_line[0] = 'Go'
After this we can translate that list to a board.
This works by computing the x and y position on the board from each of the 40 indexes.
As this is mostly maths I'll just dump the code.
But I initialize the entire board as space and then overwrite the outer ring with the line built above.
This is with:
board = [[space] * 11 for _ in range(11)]
for index in range(40):
x, y = 0, 0
if index < 11:
x = 10 - index
y = 10
elif index < 21:
x = 0
y = 10 - (index - 10)
elif index < 31:
x = index - 20
else:
x = 10
y = index - 30
board[y][x] = board_line[index]
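That index arithmetic can be property-checked: the 40 board positions should map to 40 distinct cells, all on the outer ring of the 11x11 grid. A self-contained check using the same branch logic:

```python
def to_xy(index):
    # Same branches as the board-filling loop above.
    x, y = 0, 0
    if index < 11:
        x = 10 - index
        y = 10
    elif index < 21:
        x = 0
        y = 10 - (index - 10)
    elif index < 31:
        x = index - 20
    else:
        x = 10
        y = index - 30
    return x, y

cells = [to_xy(i) for i in range(40)]
assert len(set(cells)) == 40  # no square is mapped twice
assert all(x in (0, 10) or y in (0, 10) for x, y in cells)  # all on the edge
```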
Finally, I'd simplify the prints.
You should loop through each line of the board and print it.
As you add some space either side of the board I'd use str.format again to wrap the result of str.join.
And as you want to join each square of the board with | you should use ' | '.join(line).
This can result in:
for line in board:
print(' {} '.format(' | '.join(str(p) for p in line)))
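For example, one three-cell row would render like this:

```python
line = ['GO', '--', '12']
row = ' {} '.format(' | '.join(str(p) for p in line))
print(row)  # " GO | -- | 12 " (quotes added here to show the padding spaces)
```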
Here's all the code:
# Monopoly Simulator
# http://img.thesun.co.uk/aidemitlum/archive/01771/Monopoly2_1771742a.jpg
from random import randint
board = {
1: ('Brown', 0),
3: ('Brown', 1),
6: ('Cyan', 0),
8: ('Cyan', 1),
9: ('Cyan', 2),
11: ('Pink', 0),
13: ('Pink', 1),
14: ('Pink', 2),
16: ('Orange', 0),
18: ('Orange', 1),
19: ('Orange', 2),
21: ('Red', 0),
23: ('Red', 1),
24: ('Red', 2),
26: ('Yellow', 0),
27: ('Yellow', 1),
29: ('Yellow', 2),
31: ('Green', 0),
32: ('Green', 1),
34: ('Green', 2),
37: ('Blue', 0),
39: ('Blue', 1),
}
places = {
'Brown': [0, 0],
'Cyan': [0, 0, 0],
'Pink': [0, 0, 0],
'Orange': [0, 0, 0],
'Red': [0, 0, 0],
'Yellow': [0, 0, 0],
'Green': [0, 0, 0],
'Blue': [0, 0, 0]
}
piece = 0
jail = 0
iteration = 0
num = input("How many rolls do you want to simulate? ")
for h in range(num):
piece += randint(1, 6) + randint(1, 6)
if piece > 40:
piece -= 40
iteration += 1
if piece == 30:
piece = 10
jail += 1
house_set, place = board.get(piece, (None, None))
if house_set is not None:
places[house_set][place] += 1
totals = {place: sum(house_set) for place, house_set in places.items()}
for place, amount in totals.items():
print('{} = {}'.format(place, amount))
for place, house_set in places.items():
for i, amount in enumerate(house_set, 1):
print('{} {} = {}'.format(place, i, amount))
print("You've been jailed %d times" %(jail))
digit = len(str(max(amount for house_set in places.values() for amount in house_set)))
blank = "-" * digit
space = " " * digit
place_format = '{{:0>{}}}'.format(digit)
formatted = {
place: [place_format.format(amount) for amount in house_set]
for place, house_set in places.items()
}
board_line = []
for index in range(40):
place = board.get(index, None)
if place is None:
value = blank
else:
value = formatted[place[0]][place[1]]
board_line.append(value)
board_line[0] = 'Go'
board = [[space] * 11 for _ in range(11)]
for index in range(40):
x, y = 0, 0
if index < 11:
x = 10 - index
y = 10
elif index < 21:
x = 0
y = 10 - (index - 10)
elif index < 31:
x = index - 20
else:
x = 10
y = index - 30
board[y][x] = board_line[index]
for line in board:
print(' {} '.format(' | '.join(str(p) for p in line))) | {
"domain": "codereview.stackexchange",
"id": 20243,
"tags": "python, beginner, game, python-2.x, dice"
} |
Replace all occurrences of REP [block] with the value of the block | Question: I have a block with undefined data and a pair of REP [code]:
arr: [
REP ["code1"]
| 'something
"f"
| [ f | 3 ]
REP [40 + 2]
]
The pair can appear anywhere in arr.
I replace every occurrence of a REP [code] pair with the value obtained from evaluating the code.
t: true
while [t == true] [
f: find arr 'REP
either f [
t: true
change/part f do first next f 2
][
t: false
]
]
Does this code follow common best practices? Does it have any security issues? Are there any performance problems with this code?
Answer:
Does it have some security issues?
Well, you're doing an evaluation of code. If that code comes from a foreign source somehow, that could be a problem. If it's your code, it's no more dangerous than anything else.
But as you wrote it, it does have the property that it can infinite loop within itself. Because after your substitution you start at the beginning again:
arr: [REP [quote REP]]
That's a stable state, but you could keep growing without bound too.
My answers will assume you didn't want this.
Does this code follow common best practices?
It's C-like. Rebol is a great language for writing bad C code. :-)
Using COMPOSE is the obvious answer here:
>> compose [
("code1")
| 'something
"f"
| [ f | 3 ]
(40 + 2)
]
== ["code1" | 'something "f" | [ f | 3 ] 42]
And if you're worried your content contains parens that you want to leave in place, you can quote them:
>> compose [1 (quote (2 3)) 4]
== [1 (2 3) 4]
But if you want to work with the input as written, there's a lot of approaches.
The most "sensible" way--that the wiki regarding features in R3-Alpha suggests would work (but doesn't seem to)--would be:
>> parse arr [
any [
change ['REP blk: block!] (do blk/1)
|
skip
]
]
Which would more or less capture the essence of the task. Any number of times, try to match a pattern of 'REP followed by a BLOCK!, marking the position prior to the block. Then substitute the result of do blk/1 for that code. Otherwise skip and keep looking.
It gets almost there but misses the execution. So it splices in do blk/1 literally:
== [do blk/1 | 'something "f" | [f | 3] do blk/1]
Try reporting a bug to...whoever works on that. Or maybe to Red. :-)
I've seen you mention you don't like UNTIL, so here's a FORALL-based solution...
forall arr [
if arr/1 = 'REP [
remove arr
arr: back change arr do arr/1
]
]
(Note: I think forall should be called FOR-NEXT, because all it does is advance the argument via NEXT until the end. You can modify the series during the loop, and it will then run NEXT again. The position is reset at the end.)
As a talking point on your code: Note that there are two FALSE? values (NONE! and LOGIC! false), and everything else is TRUE? (except unset which is neither). So it's not that common to work with literal true and false. NONE is almost always better as a contrast to some value you are interested in.
Just to show what kinds of tricks are available in an "imperative-series-assembly-language" with a WHILE approach: if we are walking along the array replacing, we might use the position back from FIND not being NONE! to cue continuation:
head while [arr: find arr 'REP] [
arr: change/part arr do arr/2 2
]
It's able to work because change is designed for chaining, so it returns the point after the replacement. Also because WHILE returns the last evaluation of the body. So even though the condition wipes out the value of arr when the find comes back with NONE!, it can be recovered because the last time the body evaluated it was the end position of the CHANGE...and you can seek back to the head from that.
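For comparison, roughly the same find-and-splice loop can be written in Python. This is only a sketch: Python has no REP dialect, so the deferred code blocks are modeled as callables, and the example array is simplified from the one in the question:

```python
arr = ['REP', lambda: "code1", '|', 'something', 'f',
       'REP', lambda: 40 + 2]

i = 0
while True:
    try:
        i = arr.index('REP', i)    # like FIND arr 'REP
    except ValueError:
        break                      # no marker left: done
    arr[i:i + 2] = [arr[i + 1]()]  # like CHANGE/PART ... 2
    i += 1                         # resume after the spliced value

print(arr)  # ['code1', '|', 'something', 'f', 42]
```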
So there are a few possibilities. | {
"domain": "codereview.stackexchange",
"id": 17588,
"tags": "rebol"
} |
About self-learning general relativity | Question: I am an actuarial science graduate who also minored in physics. I have a physics background of around a year 3 undergraduate physics student, and I've been meaning to self-learn general relativity someday.
I've heard much about how difficult the math of GR is and I'd like to know more about what I should know first before I properly tackle this subject.
I'm a pretty mathy person and I'd like to understand GR from a more rigorous and mathematical standpoint. Here's a brief list of what I know so far:
Physics:
Classical mechanics. Studied this subject up to the level of Morin's classical mechanics textbook. I'm fine with Lagrangians and all that.
Electrodynamics. Made it through half of Griffiths, enough to cover all of Maxwell's equations.
Quantum Mechanics. Not sure if this will relate to GR, but I've also made it halfway through Griffiths for this.
Special relativity. I've learnt all my SR from Morin, and so far I'm comfortable with ideas like events and lorentz transformations, as well as all the fundamental effects. I still need to read about 4-vectors though.
Math
Calculus. Quite familiar with all the computational algorithms for finding derivatives and integrals. Pretty much had all of those drilled into my brain since high school. This includes vector calculus as well.
Analysis. I quite enjoy analysis, and so far I'm learning analysis for single variable calculus.
Set theory. I've delved a bit into the foundations of mathematics from studying analysis, so I am comfortable with set theory and set notation. Don't think this is useful for physics at all, but I find it useful to ground all my mathematical knowledge on a rigorous foundation.
Linear algebra. I think I understand this subject well enough. Things like vector space axioms, inner products, orthogonality. Had to learn these things for QM.
What I want to ask now is whether there's anything I should add to this list? Do I need concepts like topology? Or do you think I'm good to go?
I'd also like to know if there is a mathematically rigorous textbook I could use for GR. If I place Einstein on one end of a spectrum, for people who care more about physics than math, and Hilbert on the other end, for people who care more about math than physics, I'd say I lean pretty heavily towards Hilbert, so I would appreciate a textbook based more on rigour than intuition.
(Thanks in advance for taking the time to read through my question.)
Answer: I've literally seen first-year undergrads tackling GR with a basic knowledge of Linear Algebra and Calculus, so I'd say you are quite good to go. Most books on GR will cover the basic math you need. If you want to dive deep into all this math before tackling GR (it will probably take a long time, but it is up to you to choose), the "basic math" consists of General Topology, Differential Geometry (you will need a little Diff. Topology for this, but most books on DG will cover it), and some multilinear algebra (namely, tensors). These are pretty much the essentials.
You could take a look at the 1973 book by Hawking & Ellis' The Large Scale Structure of Space-time. It is a beautiful, mathematically-oriented treatise on General Relativity which also discusses a lot of Physics as well. It starts from the very beginning (for example, covering the notions you'll need about Topology, Differential Geometry, and everything else) and is a maths book (Proposition, proof, Lemma, proof, Lemma, proof, Theorem, proof). Physics students will usually take a look after learning the basics somewhere else, but since you are more interested in the formalism I think it could be interesting. This is far from being the only option, and there's a post with many other excellent suggestions as well.
I will also mention Wald's General Relativity. While it is definitely a Physics book, it usually has quite some care with maths, and might be easier to tackle than H&E or other Mathematical Relativity texts. Wald's is the go-to book for Relativists and it might come in handy if the other texts are too mathy. In my opinion it is quite clear from a mathematical standpoint, but it won't put every theorem in a box.
I should also mention the excellent lectures by Frederic Schuller at the WE Heraeus International Winter School on Gravity and Light (see this link for a link to the YouTube Channel and for some typed notes). Schuller's approach is quite careful, clear, and depends essentially on just a good understanding of multivariable calculus and linear algebra. Every time I've seen these lectures mentioned on Phys.SE or on Reddit, someone would comment on how amazing they are, and I've never seen anyone complaining about them. They would provide an excellent, mathematically clear overview of the theory. It might be everything that you wanted to learn about GR, or it might be a great starting point for you to know what you want to see next. | {
"domain": "physics.stackexchange",
"id": 84534,
"tags": "general-relativity, resource-recommendations, education"
} |
mathematically accurate definition of the binary independence model | Question: I have a hard time understanding the exact mathematical meaning behind the binary independence model. On wikipedia we can see the following definition, or a similar one in the book by Manning and Schütze,
it claims that
The probability P(R|d,q) that a document is relevant derives from the probability of relevance of the terms vector of that document P(R|x,q). By using the Bayes rule we get:
$P(R|x,q) = \dfrac{P(x|R,q)P(R|q)}{P(x|q)}$
Now, Bayes rule is as follows:
$P(A|B) = \dfrac{P(B|A)P(A)}{P(B)}$
If you set $A := R$ and $B := x,q$ you get:
$P(A|B) = P(R|x,q) = \dfrac{P(x,q|R)P(R)}{P(x,q)}$
If you compare the terms, you'll notice why I am confused:
$x=x,q$
$R,q=q$
$R|q=R$
$x|q=x,q$
This result has nothing to do with the initial claim.
I think that I am missing the definition of the 'comma' in this context. I am not aware of multi-dimensional probabilities. As I understand it, a probability measure $P$ is always defined over a $\sigma$-algebra on an event space $\Omega$.
In order to understand what's the idea behind the formula above, here a few things that could help:
What does the comma precisely mean in the formula above (in mathematical notation)?
What is the underlying $\Omega$ ? If there is a probability $P$, then there must be an Omega $\Omega$ which serves as the space on which we define probabilities. It's not clear at all what this space is. If a document is a vector $x \in \{{0,1}\}^n$ and the query is a vector $q \in \{{0,1}\}^n$, then defining a space like $\Omega := \{{0,1}\}^n \times \{{0,1}\}^n$ could make sense. In this case it's not clear what $R$ is. Maybe the intent is to use $\Omega := \{{0,1}\}^n \times \{{0,1}\}^n \times \{{relevant, nonrelevant}\}$
Do $R$, $x$ or $q$ have anything to do with random variables? If yes, then it would help to see their domain, e.g: $R : \Omega \mapsto {0,1} $
Because the conditional probability is defined between 2 sets, $R$ and $x,q$, as well as $x$ and $R,q$ or $R|q$ should represent sets. If $R$ is a random variable, then maybe in the formula above the term $R$ represents the set $\{\omega \in \Omega\ | R(\omega)=relevant\}$. What is then the set for $x$ or for $q$ ?
Answer: It's better to talk about x, q and R as (random) events - sets of outcomes of the random experiment. x and q will be one-element sets, but R is an event denoting that x is relevant to q, and thus it is a subset of the Cartesian product X × Q (pairs of a document and a query).
The comma then denotes set conjunction (intersection), and P(A,B) = P(A ^ B), which equals P(A) * P(B) when A is independent of B.
The wiki statement 'by using the Bayes rule' is a bit of a shortcut, since you need to apply the rule several times. To derive the above formula for P(R|x,q), I would start with the definition of conditional probability (which is the root of Bayes' rule):
P(A|B) = P(A ^ B) / P(B)
Then:
P(R|x,q) = P(R ^ x ^ q) / P(x,q) = P(x|R,q) * P(R,q) / [ P(x|q) * P(q) ] =
= P(x|R,q) * P(R|q) / P(q) / [ P(x|q) * P(q) ]
When you divide the numerator and denominator by P(q), you obtain
P(R|x,q) = P(x|R,q) * P(R|q) / P(x|q) | {
"domain": "datascience.stackexchange",
"id": 3692,
"tags": "nlp, probability, information-retrieval"
} |
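The identity derived in that answer is pure probability algebra, so it holds for any strictly positive joint distribution over (x, q, R). A small Python sketch with a made-up joint table (the weights are arbitrary, chosen only to be positive) checks it numerically:

```python
from itertools import product

# A made-up, strictly positive joint distribution over (x, q, r).
# Any such table works, since the identity is pure probability algebra.
weights = {k: w for k, w in zip(product((0, 1), repeat=3), range(1, 9))}
total = sum(weights.values())
P = {k: w / total for k, w in weights.items()}

def p(pred):
    """Probability of the event described by pred(x, q, r)."""
    return sum(v for (x, q, r), v in P.items() if pred(x, q, r))

def check(x0, q0):
    """Return (P(R|x,q), P(x|R,q) * P(R|q) / P(x|q)) for x=x0, q=q0."""
    lhs = p(lambda x, q, r: (x, q, r) == (x0, q0, 1)) / p(lambda x, q, r: (x, q) == (x0, q0))
    p_x_given_rq = p(lambda x, q, r: (x, q, r) == (x0, q0, 1)) / p(lambda x, q, r: (q, r) == (q0, 1))
    p_r_given_q = p(lambda x, q, r: (q, r) == (q0, 1)) / p(lambda x, q, r: q == q0)
    p_x_given_q = p(lambda x, q, r: (x, q) == (x0, q0)) / p(lambda x, q, r: q == q0)
    return lhs, p_x_given_rq * p_r_given_q / p_x_given_q

print(check(1, 1))  # the two numbers agree
```

Both sides reduce algebraically to P(x,q,R)/P(x,q), which is why the agreement is exact up to floating-point error.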
Is the velocity of a string that is being rotated around a central point the same at any point on the string? | Question: Say that you have a weight attached to a 2 metre long string, and you are rotating the weight at 5 m/s. Is every point on the string going to be rotating at that same velocity of 5 m/s, or is the velocity of the string going to change according to how far away you are from the centre of rotation?
I'm looking at the linear velocity.
Answer: For different points of the string to have the same velocities, their directions of movement and their speeds would have to be the same as well. If the string is taut then all points are moving in the same direction at any given time, but it's easy to show that the speeds are not the same: if you have your 2 meter long string, the endpoint will move a total distance of 2·π·(2 m) ≈ 12.6 meters during one full revolution. If you specify the speed of 5 m/s, one revolution would then take 4π/5 s ≈ 2.5 seconds.
Clearly one full revolution takes the same time for all points on the string (considering it taut and rigid). But while the endpoint travels more than 12 meters in one revolution, points closer to the rotational axis move a smaller distance, since any closer point traces a circle smaller than the one drawn by the endpoint. Thus the points closer in travel a smaller distance in the exact same time, and hence their speed is less.
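The shared-period argument above can be put in numbers: fix the period from the endpoint's 5 m/s, then compute the speed at a few radii (a quick sketch of v = 2πr/T):

```python
import math

# The endpoint of a 2 m string moving at 5 m/s fixes the period of rotation:
T = 2 * math.pi * 2.0 / 5.0      # ~2.5 s, the same for every point on the string

def speed(r):
    """Linear speed of a point at radius r, sharing the common period T."""
    return 2 * math.pi * r / T   # equivalently omega * r, with omega = 2*pi/T

print(speed(2.0))  # 5.0 m/s at the endpoint
print(speed(1.0))  # 2.5 m/s halfway along the string
```

The speed scales linearly with the radius, which is exactly the v = ωr relation for rigid rotation.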
(Linear) Velocities therefore can't be the same. | {
"domain": "physics.stackexchange",
"id": 59592,
"tags": "newtonian-mechanics, velocity, centripetal-force"
} |
Feature normalization (scaling) for hyperspectral images | Question: I have been struggling with the concept of feature normalization for hyperspectral images. With respect to my problem, I have attached a picture which clearly shows the issue I have.
Answer: There are different methods for normalization. If it is for visual graphs, you can divide the data into bands which can be defined in real time: you'd write a program which presents visual graphs of the frequency range A->B, where A and B can be changed by the user. That helps you display it graphically, using the X and Y axes of the mouse to define the A->B bandwidth. You can then scan backwards and forwards through all the frequencies interactively while also normalizing them, using an amplitude meter beside the graph to show, say, the overall max amplitude of that bandwidth.
If the normalizing process is for high fidelity, balanced post processing and statistics, then it's best to find the highest value at any frequency and divide all the pixel amplitudes by that high value so that it equals 1. | {
"domain": "dsp.stackexchange",
"id": 7200,
"tags": "image-processing, classification, array-signal-processing"
} |
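The second option the answer describes, dividing every pixel by the single highest amplitude so it becomes 1, is easy to sketch. This is a toy, pure-Python version (the nested-list cube format is an assumption; real pipelines would use array libraries):

```python
def normalize_global(cube):
    """Scale a hyperspectral cube (bands -> rows -> pixel values) so that
    the single highest amplitude across all bands becomes 1, preserving
    the relative amplitudes between bands."""
    peak = max(v for band in cube for row in band for v in row)
    return [[[v / peak for v in row] for row in band] for band in cube]

# Two tiny 2x2 "bands"; the global peak is 8.0, so every value is divided by 8.
cube = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.0, 8.0], [2.0, 2.0]]]
print(normalize_global(cube))
```

Note the contrast with per-band normalization: dividing each band by its own maximum would destroy the relative amplitude information between bands, which is often exactly what a classifier needs.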
Camera view occluded by robot body, how to deal for SLAM? | Question:
On my mobile robot, I have an RGBD camera (ZED) which, even at the lowest resolution/FOV setting, captures part of the robot body. When performing visual SLAM using RTAB-Map, the body is unfortunately interpreted as part of the environment.
Besides just cropping the camera stream, which adds latency I'd like to avoid, is there any other way to deal with this?
Originally posted by mugetsu on ROS Answers with karma: 195 on 2020-04-16
Post score: 0
Answer:
One option is to pre-filter the camera data for robot parts based on your URDF model. Examples of packages that provide such functionality are
https://github.com/blodow/realtime_urdf_filter
MoveIt's depth self filter, see for instance here.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2020-04-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34778,
"tags": "slam, navigation, ros-melodic, rtabmap-ros"
} |
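The URDF-based filters linked above do this properly per frame; the core idea, though, is just to replace depth readings at pixels known to show the robot body with an "invalid" value that the SLAM stack treats as no-reading. A toy sketch (list-of-lists "image"; the function name and mask source are hypothetical):

```python
def filter_self(depth, body_mask, invalid=0.0):
    """Replace depth readings at pixels known to show the robot body with
    an 'invalid' value (many stacks treat 0 or NaN depth as no-reading).
    body_mask is a per-pixel boolean image; for a rigidly mounted camera
    it could be computed once (e.g. by projecting the URDF model) rather
    than per frame."""
    return [[invalid if m else d for d, m in zip(drow, mrow)]
            for drow, mrow in zip(depth, body_mask)]
```

Unlike cropping, this keeps the image geometry (and camera intrinsics) unchanged, so no reprojection or resizing latency is added.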
Implementation of the DDA algorithm with raycasting | Question: I am making a simple game like Wolfenstein 3D in C using raycasting. To calculate the length of the rays, I use the DDA algorithm. Below is the part of the code that calculates the length of the rays and the size of the wall on the vertical line where the ray hit. How can I optimize and improve my code?
/*
** Function: void calculate()
**
** Arguments: main struct and i
**
** return: void
**
** Description: The raycasting loop is a for loop that goes through every x,
** so there is a calculation for every vertical stripe of the screen.
*/
void calculate(t_cub3d *cub)
{
int i;
i = 0;
while (i < cub->window.res_width)
{
calculate_cam(cub, &i);
// field_x and field_y represent the current square of the map the ray is in.
cub->field.field_x = cub->player.x_pos;
cub->field.field_y = cub->player.y_pos;
calculate_ray_dir(cub);
calculate_step(cub);
calculate_wall(cub);
calculate_height(cub);
draw(cub, i);
i++;
}
calculate_sprite(cub);
}
/*
** Function: void calculate_cam()
**
** Arguments: main struct, variable counter(width)
**
** return: void
**
** Description: x_camera is the x-coordinate on the camera plane
** that the current x-coordinate of the screen represents, done this way
** so that the right side of the screen will get coordinate 1, the center
** of the screen gets coordinate 0, and the left side of the screen gets coordinate -1
*/
void calculate_cam(t_cub3d *cub, int *i)
{
cub->camera.x_camera = 2 * *i / (double)(cub->window.res_width) - 1;
cub->ray.dir_ray_x = cub->player.x_dir + cub->camera.x_plane * \
cub->camera.x_camera;
cub->ray.dir_ray_y = cub->player.y_dir + cub->camera.y_plane * \
cub->camera.x_camera;
}
/*
** Function: void calculate_ray_dir()
**
** Arguments: main struct
**
** return: void
**
** Description: x_deltaDist and y_deltaDist are the distance the ray
** has to travel to go from 1 x-side to the next x-side, or from 1 y-side
** to the next y-side.
*/
void calculate_ray_dir(t_cub3d *cub)
{
if (cub->ray.dir_ray_y == 0)
cub->ray.x_deltadist = 0;
else
{
if (cub->ray.dir_ray_x == 0)
cub->ray.x_deltadist = 1;
else
cub->ray.x_deltadist = fabs(1 / cub->ray.dir_ray_x);
}
if (cub->ray.dir_ray_x == 0)
cub->ray.y_deltadist = 0;
else
{
if (cub->ray.dir_ray_y == 0)
cub->ray.y_deltadist = 1;
else
cub->ray.y_deltadist = fabs(1 / cub->ray.dir_ray_y);
}
}
/*
** Function: void calculate_step()
**
** Arguments: main struct
**
** return: void
**
** Description: x_sideDist and y_sideDist are initially the distance
** the ray has to travel from its start position to the first x-side and
** the first y-side.
*/
void calculate_step(t_cub3d *cub)
{
if (cub->ray.dir_ray_x < 0)
{
cub->ray.x_ray_step = -1;
cub->ray.x_sidedist = (cub->player.x_pos - (double)(cub->field.field_x))
* cub->ray.x_deltadist;
}
else
{
cub->ray.x_ray_step = 1;
cub->ray.x_sidedist = (((double)(cub->field.field_x) + 1.0 - \
cub->player.x_pos) * cub->ray.x_deltadist);
}
if (cub->ray.dir_ray_y < 0)
{
cub->ray.y_ray_step = -1;
cub->ray.y_sidedist = (cub->player.y_pos - (double)(cub->field.field_y))
* cub->ray.y_deltadist;
}
else
{
cub->ray.y_ray_step = 1;
cub->ray.y_sidedist = ((double)(cub->field.field_y) + 1.0 - \
cub->player.y_pos) * cub->ray.y_deltadist;
}
}
/*
** Function: void calculate_wall()
**
** Arguments: main struct
**
** return: void
**
** Description: DDA algorithm. It's a loop that increments the ray with 1 square
** every time, until a wall is hit.
*/
void calculate_wall(t_cub3d *cub)
{
int is_wall;
is_wall = 0;
cub->window.side = 0;
while (is_wall == 0)
{
if (cub->ray.x_sidedist < cub->ray.y_sidedist)
{
cub->ray.x_sidedist += cub->ray.x_deltadist;
cub->field.field_x += cub->ray.x_ray_step;
cub->window.side = 0;
}
else
{
cub->ray.y_sidedist += cub->ray.y_deltadist;
cub->field.field_y += cub->ray.y_ray_step;
cub->window.side = 1;
}
if (cub->field.map[cub->field.field_y][cub->field.field_x] == '1')
is_wall = 1;
}
calculate_distto_wall(cub);
}
/*
** Function: void calculate_distto_wall()
**
** Arguments: main struct
**
** return: void
**
** Description: calculate the distance of the ray to the wall
*/
void calculate_distto_wall(t_cub3d *cub)
{
if (cub->window.side == 0)
{
cub->ray.wall_dist = ((double)(cub->field.field_x) - cub->player.x_pos \
+ (1 - cub->ray.x_ray_step) / 2) / cub->ray.dir_ray_x;
}
else
{
cub->ray.wall_dist = ((double)(cub->field.field_y) - cub->player.y_pos \
+ (1 - cub->ray.y_ray_step) / 2) / cub->ray.dir_ray_y;
}
}
/*
** Function: void calculate_height()
**
** Arguments: main struct
**
** return: void
**
** Description: calculate the height of the line that has to be
** drawn on screen
*/
void calculate_height(t_cub3d *cub)
{
cub->window.height_ln = (int)(cub->window.res_height / cub->ray.wall_dist);
cub->window.top_wall = (cub->window.height_ln * -1) / 2 + \
cub->window.res_height / 2;
if (cub->window.top_wall < 0)
cub->window.top_wall = 0;
cub->window.bottom_wall = cub->window.height_ln / 2 + \
cub->window.res_height / 2;
if (cub->window.bottom_wall >= cub->window.res_height)
cub->window.bottom_wall = cub->window.res_height - 1;
}
Answer: Use the Doxygen language to document your functions
It looks like you use some ad-hoc way to document your functions. It's very good to add that documentation, but there are tools out there that make it even more useful, like Doxygen. Once you reformat your comments to follow the Doxygen syntax, you can use the Doxygen tools to generate documentation in PDF, HTML and various other formats. It can also check whether your documentation is correct, for example whether you have documented all the parameters. It would have already found an error in the first function: you mention i in the description of the function arguments, but there is no such parameter.
Pass integers by value
Why are you passing i by reference to calculate_cam()? You should just pass it by value; otherwise you pay for unnecessary dereferencing of the pointer.
Naming things
You should try to give more descriptive names to functions. Naming everything calculate_something() is not very useful, as calculating is what your computer is basically doing all the time. It looks like calculate() is actually causing everything to be drawn, since it calls draw(). So maybe it should be named something like draw_scene() instead? And draw() should probably be draw_column()?
Why is there x_pos and x_plane but field_x? Having a consistent ordering of words makes it easier to read your code. But then also, there is a redundancy in field.field_x, maybe rename it so you can just write field.x?
What is cub? Is it representing a cube? If so, don't arbitrarily remove a single letter from a word, you might regret it later.
Furthermore, if you have a variable holding a coordinate, use x and y instead of i.
Have functions return values
Your functions don't return anything, but rather modify the object pointed to. Although you might think that keeps things together, it actually makes them harder to use, and will likely cause lots of temporary variables to be put into t_cub3d, which will waste memory. Try to have functions return their results, and only pass those variables to them that they need to use.
For example, calculate_cam() seems to only calculate x_camera as a temporary value, and the only thing used by other code (as far as I can see) is the ray's direction. So, assuming cub->ray has type t_ray, I would write:
t_ray calculate_cam(const t_cub3d *cub, int x)
{
double x_camera = 2.0 * x / cub->window.res_width - 1;
t_ray ray;
ray.dir_ray_x = cub->player.x_dir + cub->camera.x_plane * x_camera;
ray.dir_ray_y = cub->player.y_dir + cub->camera.y_plane * x_camera;
return ray;
}
Use bool where appropriate
When you have a variable that holds a value that should mean "true" or "false", use bool from <stdbool.h>. For example, is_wall is a good candidate for that.
Prefer for over while where appropriate
The advantage of a for-statement is that you can clearly put the initial value, the end condition and the increment at the top of the loop. So in calculate(), I would write:
for (int i = 0; i < cub->window.res_width; i++)
{
...
} | {
"domain": "codereview.stackexchange",
"id": 40697,
"tags": "algorithm, c, raycasting"
} |
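Separately from the C style points above, it can help to see the DDA loop stripped of the struct plumbing. This is a compact Python sketch of the same grid traversal (the string-grid format with '1' for walls is an assumption for the demo; a zero direction component is handled with an infinite delta distance, since 0.0/inf cases would otherwise divide by zero):

```python
import math

def cast_ray(grid, px, py, dx, dy):
    """Minimal DDA: step cell-by-cell until a '1' wall is hit.
    Returns (map_x, map_y, side, perpendicular wall distance)."""
    map_x, map_y = int(px), int(py)
    delta_x = abs(1.0 / dx) if dx != 0 else math.inf
    delta_y = abs(1.0 / dy) if dy != 0 else math.inf
    if dx < 0:
        step_x, side_x = -1, (px - map_x) * delta_x
    else:
        step_x, side_x = 1, (map_x + 1.0 - px) * delta_x
    if dy < 0:
        step_y, side_y = -1, (py - map_y) * delta_y
    else:
        step_y, side_y = 1, (map_y + 1.0 - py) * delta_y
    while True:
        # Advance along whichever axis has the nearer grid boundary.
        if side_x < side_y:
            side_x += delta_x
            map_x += step_x
            side = 0
        else:
            side_y += delta_y
            map_y += step_y
            side = 1
        if grid[map_y][map_x] == '1':
            break
    if side == 0:
        dist = (map_x - px + (1 - step_x) / 2) / dx
    else:
        dist = (map_y - py + (1 - step_y) / 2) / dy
    return map_x, map_y, side, dist

grid = ["111",
        "1.1",
        "111"]
print(cast_ray(grid, 1.5, 1.5, 1.0, 0.0))  # hits the wall cell (2, 1), half a cell away
```

Note how the function takes plain arguments and returns its results, which is exactly the "have functions return values" refactoring the review recommends for the C version.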
Use older versions of OpenCV with Groovy? | Question:
Is it possible to use an older version of OpenCV with ROS Groovy (current version is 2.4.4)? If so, how could I change to a different version?
Originally posted by ryeakle on ROS Answers with karma: 36 on 2013-05-12
Post score: 0
Original comments
Comment by Mac on 2013-05-13:
Out of curiosity, why do you want to do this?
Comment by ryeakle on 2013-05-13:
My team was using the cvblobslib to do some object localization. When we used 2.4.4, everything builds, but the blob detection (cvblobslib) doesn't work. Compiled on our old machine with Fuerte and OpenCV 2.4.2, it does work. We ended up re-writing our program to work with a different blob library.
Comment by Mac on 2013-05-13:
Huh. If you haven't, it might be good to file a bug against cvblobslib, just to let them know that OpenCV changed out from under them.
Answer:
If you want a different version of OpenCV you should create a workspace with all packages which depend on it and the older version of OpenCV and compile from source.
Originally posted by tfoote with karma: 58457 on 2013-05-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14147,
"tags": "opencv, ros-groovy"
} |
Heuristics for space-efficient storing of Unordered Finite Sets in a DFA | Question: I've got an algorithm I'm working on that generates, stores, and iterates through a large number of finite sets. I'm finding that memory is a bottleneck long before time is.
The finite sets are subsets of the vertices $V$ in my graph, so each finite set is small, there's just a lot of them.
In an effort to save space, I've started representing the finite sets as binary words of length $|V|$, with a 0 indicating the element is not in the set, a 1 indicating that it is. I'm storing the collection of these words as an acyclic deterministic automaton (also known as DAWG, directed acyclic word graph).
However, this requires a fixed ordering of the potential elements, which is fine, but arbitrary. If a different ordering were more likely to produce a smaller output set, I'd be happy to use it.
I'm wondering:
Is there a known, efficient algorithm for finding the permutation which gives the smallest DFA representing a set of finite sets?
If not, has any research been done on heuristics for orderings which have been shown to often produce smaller DFAs?
Answer: It sounds like you are describing a binary decision diagram, also known as a BDD. BDDs have been studied extensively, so you might take a look at the literature.
In particular, the question of finding the best permutation is known as the "variable ordering" problem for BDDs. Finding the optimal variable ordering is NP-hard. There are some heuristics, but they don't always work. See https://en.wikipedia.org/wiki/Binary_decision_diagram#Variable_ordering. Probably the easiest approach is to use an existing BDD library (the standard ones will typically incorporate some variable ordering heuristics).
You might also be interested in ZDDs. | {
"domain": "cs.stackexchange",
"id": 5343,
"tags": "formal-languages, graphs, data-structures, finite-automata, data-compression"
} |
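The ordering sensitivity can be seen even on the DFA/DAWG view the question uses, without a full BDD package. For a finite set of equal-length bit strings, the minimal DFA size equals the number of distinct residual languages (Myhill-Nerode classes); counting those for the classic function x1·y1 + x2·y2 under the interleaved versus separated variable orders shows the separated order already costs extra states (a small sketch, not an efficient implementation, and the gap grows exponentially with more variable pairs):

```python
from itertools import product

def residual_count(words):
    """Number of states of the minimal DFA of a finite set of equal-length
    bit tuples = number of reachable distinct residual languages."""
    seen, stack = set(), [frozenset(words)]
    while stack:
        r = stack.pop()
        if r in seen:
            continue
        seen.add(r)
        for b in (0, 1):
            stack.append(frozenset(w[1:] for w in r if w and w[0] == b))
    return len(seen)

def language(order):
    """Accepted bit strings of f = x1&y1 | x2&y2, read in the given order."""
    names = ('x1', 'y1', 'x2', 'y2')
    words = set()
    for bits in product((0, 1), repeat=4):
        env = dict(zip(names, bits))
        if (env['x1'] and env['y1']) or (env['x2'] and env['y2']):
            words.add(tuple(env[v] for v in order))
    return words

good = residual_count(language(('x1', 'y1', 'x2', 'y2')))  # interleaved order
bad = residual_count(language(('x1', 'x2', 'y1', 'y2')))   # separated order
print(good, bad)  # the separated order needs more states
```

With n variable pairs the interleaved order stays linear in n while the separated order blows up exponentially, which is the standard motivating example for BDD variable-ordering heuristics.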
Why does N₂ react with O₂ to Form NO at high temperatures? | Question: This also raises questions that I have about the Haber Process which produces ammonia ($\ce{NH3}$) from molecular nitrogen ($\ce{N2}$) and hydrogen ($\ce{H2}$).
I have heard multiple times that the bond in diatomic nitrogen is one of the strongest bonds in nature, due to the fact that it is a triple covalent bond that fills the valence shells of both atoms.
I understand that at high temperatures it is possible to break this bond, but I don't understand why the resulting Nitrogen atoms wouldn't simply return to their previous bonds as the temperature cooled.
For example, I read that lightning can result in this reaction: $\ce{N2 + O2 -> 2NO}$
Why would the atoms not return to their original bonds since they would be more stable in that manner? Is bonding indiscriminate at high energy levels? Completely random and dependent on luck?
Answer: $\Delta G = \Delta H - T \Delta S$
In the case of the $\ce{N2 + O2 -> 2NO}$ , $\Delta H$ and $\Delta S$ are both positive, so the reaction is thermodynamically favorable at high temperature (such as in lightning) but not at low temperature.
If the temperature drops to room temperature after NO is formed, it is thermodynamically favorable for NO to decompose to nitrogen and oxygen.
However, that NO is unstable at room temperature tells us nothing about the rate of the decomposition reaction. In fact, there was an interesting 40-year study showing very little decomposition of NO sealed in glass tubes over that time period. The authors' calculations show that without a catalyst the timescale of decomposition could be $10^{29}$ years! | {
"domain": "chemistry.stackexchange",
"id": 2418,
"tags": "inorganic-chemistry, thermodynamics, bond, reactivity"
} |
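The sign flip of ΔG with temperature can be made concrete with standard-state values from typical thermodynamic tables (these numbers are assumed here; they are not given in the post):

```python
# Approximate standard-state values for N2 + O2 -> 2 NO (assumed, from
# typical tables): dH from 2 x (+90.25 kJ/mol) formation enthalpy of NO,
# dS from standard molar entropies 2(210.8) - 191.6 - 205.2 J/(mol K).
dH = 180.5e3   # J per mol of reaction
dS = 24.8      # J/(mol K)

def delta_g(T):
    """Gibbs free energy change of the reaction at temperature T (kelvin)."""
    return dH - T * dS

print(delta_g(298.15) > 0)   # unfavorable at room temperature
print(delta_g(8000) < 0)     # favorable at lightning-like temperatures
print(dH / dS)               # crossover temperature, roughly 7300 K
```

Since both ΔH and ΔS are positive, ΔG changes sign at T = ΔH/ΔS, consistent with NO forming in lightning channels but being thermodynamically unstable (though kinetically inert) once things cool down.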
Copying ranges from multiple Excel sheets into a main sheet | Question: I have VBA code which copies the same data from multiple sheets and then pastes it into the "Main" sheet. It then auto-fills the blank cells with values from above, and then it deletes all the rows where column H is blank. However, being a novice in VBA, I feel my code has too many loops, which makes it run slower. Moreover, if the "Main" sheet has a formatted table, the code does not delete any row where H is blank. However, it works if "Main" is blank and not formatted.
Another thing I found out is that after the code is executed, the Excel sheet becomes less responsive. I cannot select cells quickly or change between sheets.
Please advise if anything can be improved to make it run more efficiently.
Private Sub CopyRangeFromMultiWorksheets1()
'Fill in the range that you want to copy
'Set CopyRng = sh.Range("A1:G1")
Dim sh As Worksheet
Dim DestSh As Worksheet
Dim rng As Range
Dim Last As Long
Dim CopyRng1 As Range
Dim CopyRng2 As Range
Dim CopyRng3 As Range
Dim CopyRng4 As Range
Dim CopyRng5 As Range
Dim CopyRng6 As Range
Dim CopyRng7 As Range
Dim cell As Range
Dim Row As Range
Dim LastrowDelete As Long
With Application
.ScreenUpdating = False
.EnableEvents = False
End With
'Delete the sheet "RDBMergeSheet" if it exist
'Application.DisplayAlerts = False
On Error Resume Next
'ActiveWorkbook.Worksheets("RDBMergeSheet").Delete
On Error GoTo 0
'Application.DisplayAlerts = True
'Add a worksheet with the name "RDBMergeSheet"
Set DestSh = Sheets("Main")
'Set DestSh = ActiveWorkbook.Worksheets.Add
' DestSh.Name = "RDBMergeSheet"
'loop through all worksheets and copy the data to the DestSh
For Each sh In ActiveWorkbook.Worksheets
If sh.Name <> DestSh.Name And sh.Name <> "PAYPERIOD" And sh.Name <> _
"TECHTeamList" Then
'Find the last row with data on the DestSh
Last = LastRow(DestSh)
'Fill in the range that you want to copy
Set CopyRng1 = sh.Range("B3")
Set CopyRng2 = sh.Range("C3")
Set CopyRng3 = sh.Range("D3")
Set CopyRng4 = sh.Range("G3")
Set CopyRng5 = sh.Range("C5")
Set CopyRng6 = sh.Range("A8:j25")
Set CopyRng7 = sh.Range("A28:j45")
'Test if there enough rows in the DestSh to copy all the data
If Last + CopyRng1.Rows.Count > DestSh.Rows.Count Then
MsgBox "There are not enough rows in the Destsh"
GoTo ExitTheSub
End If
'This example copies values/formats, if you only want to copy the
'values or want to copy everything look at the example below this macro
CopyRng1.Copy
With DestSh.Cells(Last + 1, "A")
.PasteSpecial xlPasteValues
'Application.CutCopyMode = False
End With
CopyRng2.Copy
With DestSh.Cells(Last + 1, "B")
.PasteSpecial xlPasteValues
'Application.CutCopyMode = False
End With
CopyRng3.Copy
With DestSh.Cells(Last + 1, "C")
.PasteSpecial xlPasteValues
'Application.CutCopyMode = False
End With
CopyRng4.Copy
With DestSh.Cells(Last + 1, "D")
.PasteSpecial xlPasteValues
'Application.CutCopyMode = False
End With
CopyRng5.Copy
With DestSh.Cells(Last + 1, "E")
.PasteSpecial xlPasteValues
'Application.CutCopyMode = False
End With
CopyRng6.Copy
With DestSh.Cells(Last + 1, "F")
.PasteSpecial Paste:=xlPasteValuesAndNumberFormats
'Application.CutCopyMode = False
End With
'Refresh the Lastrow used so that the values start from
'underneath copyrng6
Last = LastRow(DestSh)
CopyRng7.Copy
With DestSh.Cells(Last + 1, "F")
.PasteSpecial Paste:=xlPasteValuesAndNumberFormats
'Application.CutCopyMode = False
End With
Application.CutCopyMode = False
End If
Next
ExitTheSub:
Application.Goto DestSh.Cells(1)
'AutoFit the column width in the DestSh sheet
DestSh.Columns.AutoFit
'Autofill the rang A2:E for values from above looking at the last row of F
With Range("A2:E" & Range("F" & Rows.Count).End(xlUp).Row)
.SpecialCells(xlBlanks).FormulaR1C1 = "=R[-1]C"
.Value = .Value
End With
'Delete Entire rows where H is Blank
Application.ScreenUpdating = False
Columns("H:H").SpecialCells(xlCellTypeBlanks).EntireRow.Delete
Application.ScreenUpdating = True
With Application
.ScreenUpdating = True
.EnableEvents = True
End With
End Sub
Function LastRow(sh As Worksheet) As Long
On Error Resume Next
LastRow = sh.Cells.Find(What:="*", _
After:=sh.Range("A1"), _
Lookat:=xlPart, _
LookIn:=xlFormulas, _
SearchOrder:=xlByRows, _
SearchDirection:=xlPrevious, _
MatchCase:=False).Row
On Error GoTo 0
End Function
Answer: Try replacing your copies with this. Does this improve performance?
DestSh.Cells(Last + 1, "A").Resize(CopyRng1.Rows.Count, CopyRng1.Columns.Count).Value = CopyRng1.Value
DestSh.Cells(Last + 1, "B").Resize(CopyRng2.Rows.Count, CopyRng2.Columns.Count).Value = CopyRng2.Value
DestSh.Cells(Last + 1, "C").Resize(CopyRng3.Rows.Count, CopyRng3.Columns.Count).Value = CopyRng3.Value
DestSh.Cells(Last + 1, "D").Resize(CopyRng4.Rows.Count, CopyRng4.Columns.Count).Value = CopyRng4.Value
DestSh.Cells(Last + 1, "E").Resize(CopyRng5.Rows.Count, CopyRng5.Columns.Count).Value = CopyRng5.Value
DestSh.Cells(Last + 1, "F").Resize(CopyRng6.Rows.Count, CopyRng6.Columns.Count).Value = CopyRng6.Value
Last = LastRow(DestSh)
DestSh.Cells(Last + 1, "F").Resize(CopyRng7.Rows.Count, CopyRng7.Columns.Count).Value = CopyRng7.Value | {
"domain": "codereview.stackexchange",
"id": 31778,
"tags": "performance, vba, excel"
} |
Are the lone pairs in water equivalent? | Question: I've read that the oxygen atom in water is $\mathrm{sp^2}$ hybridized, such that one of the oxygen lone pairs should be in an $\mathrm{sp^2}$ orbital and the other should be in a pure p atomic orbital.
First, am I correct about the lone pairs being non-equivalent?
Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons?
Lastly, if it turns out the lone pairs are actually inequivalent, can this be reconciled with the traditional explanation (due to VSEPR theory) that oxygen is $\mathrm{sp^3}$ and the lone pairs are equivalent?
Answer: Water, as simple as it might appear, has quite a few extraordinary things to offer. Most does not seem to be as it appears.
Before diving deeper, a few cautionary words about hybridisation. Hybridisation is an often misconceived concept. It only is a mathematical interpretation, which explains a certain bonding situation (in an intuitive fashion). In a molecule the equilibrium geometry will result from various factors, such as steric and electronic interactions, and furthermore interactions with the surroundings like a solvent or external field. The geometric arrangement will not be formed because a molecule is hybridised in a certain way, it is the other way around, i.e. a result of the geometry or more precise and interpretation of the wave function for the given molecular arrangement.
In molecular orbital theory linear combinations of all available (atomic) orbitals will form molecular orbitals (MO). These are spread over the whole molecule, or delocalised, and in a quantum chemical interpretation they are called canonical orbitals. Such a solution (approximation) of the wave function can be unitarily transformed to form localised molecular orbitals (LMO). The solution (the energy) does not change under this transformation. These can then be used to interpret a bonding situation in a simpler theory.
Each LMO can be expressed as a linear combination of the atomic orbitals, hence it is possible to determine the coefficients of the atomic orbitals and describe these also as hybrid orbitals. It is absolutely wrong to assume that there are only three types of spx hybrid orbitals.
Therefore it is very well possible, that there are multiple different types of orbitals involved in bonding for a certain atom. For more on this, read about Bent's rule on the network.[1]
Let's look at water; Wikipedia is so kind as to provide us with a schematic drawing of the molecule (not reproduced here):
The bonding angle is quite close to the ideal tetrahedral angle, so one would assume that the involved orbitals are sp3 hybridised. There is also a connection between bond angle and hybridisation, called Coulson's theorem, which lets you approximate the hybridisation.[2] In this case the orbitals involved in the bonds would be sp4 hybridised. (Close enough.)
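Coulson's relation for two equivalent hybrids at an angle θ reads 1 + λ² cos θ = 0, so the hybridisation index follows directly from the bond angle. A quick numeric sketch (the 104.5° value is the standard experimental H-O-H angle, an assumption not stated in this paragraph):

```python
import math

def hybridisation_index(theta_deg):
    """Coulson's relation for two equivalent hybrid orbitals at angle theta:
    1 + lambda^2 * cos(theta) = 0, hence lambda^2 = -1 / cos(theta)."""
    return -1.0 / math.cos(math.radians(theta_deg))

print(hybridisation_index(104.5))   # ~4  -> sp^4, as quoted in the text
print(hybridisation_index(109.47))  # ~3  -> sp^3 at the tetrahedral angle
```

Only angles above 90° give a positive λ², which is why the relation applies to equivalent hybrids on the same centre.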
Let us also consider the symmetry of the molecule. The point group of water is C2v. Because there are mirror planes, in the canonical bonding picture π-type orbitals[3] are necessary. We have an orbital with appropriate symmetry, which is the p-orbital sticking out of the bonding plane. This interpretation is not only valid it is one that comes as the solution of the Schrödinger equation.[4] That leaves for the other orbital a hybridisation of sp(2/3).
If we make the reasonable assumption, that the oxygen hydrogen bonds are sp3 hybridised, and the out-of-plane lone pair is a p orbital, then the maths is a bit easier and the in-plane lone pair is sp hybridised.[5]
A calculation on the MO6/def2-QZVPP level of theory gives us the following canonical molecular orbitals:
(Orbital symmetries: $2\mathrm{A}_1$, $1\mathrm{B}_2$, $3\mathrm{A}_1$, $1\mathrm{B}_1$)[6,7]
Since the interpretation with hybrid orbitals is equivalent, I used the natural bond orbital theory to interpret the results. This method transforms the canonical orbitals into localised orbitals for easier interpretation.
Here is an excerpt of the output (core orbital and polarisation functions omitted) giving us the calculated hybridisations:
(Occupancy) Bond orbital / Coefficients / Hybrids
------------------ Lewis ------------------------------------------------------
2. (1.99797) LP ( 1) O 1 s( 53.05%)p 0.88( 46.76%)d 0.00( 0.19%)
3. (1.99770) LP ( 2) O 1 s( 0.00%)p 1.00( 99.69%)d 0.00( 0.28%)
4. (1.99953) BD ( 1) O 1- H 2
( 73.49%) 0.8573* O 1 s( 23.41%)p 3.26( 76.25%)d 0.01( 0.31%)
( 26.51%) 0.5149* H 2 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%)
5. (1.99955) BD ( 1) O 1- H 3
( 73.48%) 0.8572* O 1 s( 23.41%)p 3.26( 76.27%)d 0.01( 0.30%)
( 26.52%) 0.5150* H 3 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%)
-------------------------------------------------------------------------------
As we can see, that pretty much matches the assumption of sp3 oxygen hydrogen bonds, a p lone pair, and a sp lone pair.
Does that mean that the lone pairs are non-equivalent?
Well, that is at least one interpretation. And we only deduced all that from a gas-phase point of view. When we go towards the condensed phase, things will certainly change. Hydrogen bonds will break the symmetry, dynamics will play an important role, and in the end both lone pairs will probably behave quite similarly or even identically.
Now let's get to the juicy part:
Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons?
Well the first part is a bit tricky to answer, because that is dependent on a lot more conditions. But the part in parentheses is easy. It is measurable with photoelectron spectroscopy. There is a nice orbital scheme correlated to the orbital ionisation potential on the homepage of Michael K. Denk for water.[8] Unfortunately I cannot find license information, or a reference to reproduce, hence I am hesitant to post it here.
However, I found a nice little publication on the photoelectron spectroscopy of water in the bonding region.[9] I'll quote some relevant data from the article.
$\ce{H2O}$ is a non-linear, triatomic molecule consisting of an oxygen atom covalently bonded to two hydrogen atoms. The ground state of the $\ce{H2O}$ molecule is classified as belonging to the $C_\mathrm{2v}$ point group and so the electronic states of water are described using the irreducible representations $\mathrm{A}_1$, $\mathrm{A}_2$, $\mathrm{B}_1$, $\mathrm{B}_2$. The electronic configuration of the ground state of the $\ce{H2O}$ molecule is described by five doubly occupied molecular orbitals:
$$\begin{align}
\underbrace{(1\mathrm{a}_1)^2}_{\text{core}}&&
\underbrace{(2\mathrm{a}_1)^2}_{\text{inner-valence orbital}}&&
\underbrace{
(1\mathrm{b}_2)^2 (3\mathrm{a}_1)^2 (1\mathrm{b}_1)^2
}_{\text{outer-valence orbital}}&&
\mathrm{X~^1A_1}
\end{align}$$
[..]
In addition to the three band systems observed in HeI PES of $\ce{H2O}$, a fourth band system in the TPE spectrum close to 32 eV is also observed. As indicated in Fig. 1, these band systems correspond to the removal of a valence electron from each of the molecular orbitals $(1\mathrm{b}_1)^{-1}$, $(3\mathrm{a}_1)^{-1}$, $(1\mathrm{b}_2)^{-1}$ and $(2\mathrm{a}_1)^{-1}$ of $\ce{H2O}$.
As you can see, it fits quite nicely with the calculated data. From the image I would say that the difference between $(1\mathrm{b}_1)^{-1}$ and $(3\mathrm{a}_1)^{-1}$ is about 1-2 eV.
TL;DR
As you see your hunch paid off quite well. Photoelectron spectroscopy of water in the gas phase confirms that the lone pairs are non-equivalent. Conclusions for condensed phases might be different, but that is a story for another day.
Notes and References
What is Bent's rule?
Utility of Bent's Rule - What can Bent's rule explain that other qualitative considerations cannot?
Formal theory of Bent's rule, derivation of Coulson's theorem (Wikipedia).
Worked example for cyclopropane by ron.
A π orbital has one nodal plane collinear with the bonding axis, it is asymmetric with respect to this plane. A bit more explanation in my question What would follow in the series sigma, pi and delta bonds?
Within the approximation that molecular orbitals are a linear combination of atomic orbitals (MO = LCAO).
The terminology we use for hybridisation actually is just an abbreviation:
$$\mathrm{sp}^{x} = \mathrm{s}^{\frac{1}{x+1}}\mathrm{p}^{\frac{x}{x+1}}$$
In theory $x$ can have any value; since it is just a unitary transformation the representation does not change, hence
\begin{align}
1\times\mathrm{s}, 3\times\mathrm{p}
&\leadsto 4\times\mathrm{sp}^3 \\
&\leadsto 3\times\mathrm{sp}^2, 1\times\mathrm{p} \\
&\leadsto 2\times\mathrm{sp}, 2\times\mathrm{p} \\
&\leadsto 2\times\mathrm{sp}^3, 1\times\mathrm{sp}, 1\times\mathrm{p} \\
&\leadsto \text{etc. pp.}\\
&\leadsto 2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)}
\end{align}
There are virtually infinite possibilities of combination.
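To make the fractions in this abbreviation concrete, here is a small numerical sketch (purely illustrative, not part of the original answer): it computes the s character of an $\mathrm{sp}^x$ hybrid and, using Coulson's orthogonality relation from note 3 applied to two equivalent O-H bonding hybrids, the hybridisation implied by the experimental bond angle of water.

```python
import math

def s_character(x):
    """Fractional s character of an sp^x hybrid: sp^x = s^(1/(x+1)) p^(x/(x+1))."""
    return 1.0 / (x + 1.0)

def hybridisation_from_angle(theta_deg):
    """Coulson's relation for two equivalent hybrids i, j:
    1 + lambda_i * lambda_j * cos(theta_ij) = 0, hence lambda^2 = -1/cos(theta).
    Returns lambda^2, i.e. the x in sp^x for the bonding hybrids."""
    return -1.0 / math.cos(math.radians(theta_deg))

x_bond = hybridisation_from_angle(104.5)  # experimental H-O-H angle of water
print(f"bonding hybrids: sp^{x_bond:.2f} with s character {s_character(x_bond):.2f}")
```

The bonding hybrids come out close to $\mathrm{sp}^4$ (about 20% s character each), leaving roughly 60% s character for the remaining two hybrids; compare the $2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)}$ combination listed above.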
This and the next footnote address a couple of points that were raised in a comment by DavePhD. While I already extensively answered that there, I want to include a few more clarifying points here. (If I do it right, the comments become obsolete.)
What is the reason for concluding 2 lone pairs versus 1 or 3? For example Mulliken has in table V the b1 orbital being a definite lone pair (no H population) but the two a1 orbitals both have about 0.3e population on H. Would it be wrong to say only one of the PES energy levels corresponds to a lone pair, and the other 3 has some significant population on hydrogen? Are Mulliken's calculations still valid? – DavePhD
The article Dave refers to is R. S. Mulliken, J. Chem. Phys. 1955, 23, 1833., which introduces Mulliken population analysis. In this paper Mulliken analyses wave functions on the SCF-LCAO-MO level of theory. This is essentially Hartree Fock with a minimal basis set. (I will address this in the next footnote.) We have to understand that this was state-of-the-art computational chemistry back then. What we take for granted nowadays, calculating the same thing in a few seconds, was revolutionary back then. Today we have a lot fancier methods. I used density functional theory with a very large basis set. The main difference between these approaches is that the level I use recovers a lot more of electron correlation than the method of Mulliken. However, if you look closely at the results it is quite impressive how well these early approximations perform.
On the M06/def2-QZVPP level of theory the geometry of the molecule is optimised to have an oxygen hydrogen distance of 95.61 pm and a bond angle of 105.003°. This is quite close to the experimental results.
The contribution to the orbitals are given as follows. I include the orbital energies (OE), too. The contributions of the atomic orbitals are given to 1.00 being the total for each molecular orbital. Because the basis set has polarisation functions the missing parts are attributed to this. The threshold for printing is 3%. (I also rearranged the Gaussian Output for better readability.)
Atomic contributions to molecular orbitals:
2: 2A1 OE=-1.039 is O1-s=0.81 O1-p=0.03 H2-s=0.07 H3-s=0.07
3: 1B2 OE=-0.547 is O1-p=0.63 H2-s=0.18 H3-s=0.18
4: 3A1 OE=-0.406 is O1-s=0.12 O1-p=0.74 H2-s=0.06 H3-s=0.06
5: 1B1 OE=-0.332 is O1-p=0.95
We can see that there is indeed some contribution by the hydrogens to the in-plane lone pair of oxygen. On the other hand we see that there is only one orbital where there is a large contribution by hydrogen. One could here easily come up with the theory of one or three lone pairs of oxygen, depending on your own point of view. Mulliken's analysis is based on the canonical orbitals, which are delocalised, so we will never have a pure lone pair orbital. When we refer to orbitals as being of a certain type, then we imply that this is the largest contribution. Often we also use visual aids like pictures of these orbitals to decide if they are of bonding or anti-bonding nature, or if their contribution is on the bonding axis.
All these analyses are strongly biased by one's point of view. There is no right or wrong when it comes to separation schemes, and no hard evidence for any of them is obtainable. They are mathematical interpretations that, at best, help us understand bonding better. Thus deciding whether water has one, two or three (or even four) lone pairs is somewhat like playing with numbers until something seems to fit. Bonding is too complicated to compress into simple pictures. (That is why I am cautious about the use of Lewis structures.)
The NBO analysis is another separation scheme, one that aims to transform the obtained canonical orbitals into a Lewis-like picture for a better understanding. This transformation does not change the wave function and is in this way just as valid a representation as the other approaches. What you lose by this approach are the orbital energies, since you break the symmetry of the wave function, but explaining that would take us much too far here. In a nutshell, the localisation scheme aims to transform the delocalised orbitals into orbitals that correspond to bonds.
From a quite general point of view, Mulliken's calculations (he actually only interpreted the results of others) and conclusions hold up to a certain point. Nowadays we know that his population analysis has severe problems, but within a minimal basis it still produces justifiable results. The popularity of the method comes mainly from the fact that it is very easy to perform. See also: Which one, Mulliken charge distribution and NBO, is more reliable?
Mulliken used a SCF-LCAO-MO calculation by Ellison and Shull and was so kind to include the main results into his paper. The oxygen hydrogen bond distance is 95.8 pm and the bond angle is 105°. I performed a calculation on the same geometry on the HF/STO-3G level of theory for comparison. It obviously does not match perfectly, but well enough for a little bit of further discussion.
NO SYM HF/STO-3G : N(O) N(H2) | Mulliken : N(O) N(H2)
1 1A1 -550.79 2.0014 -0.0014 | -557.3 2.0007 -0.0005
2 2A1 -34.49 1.6113 0.3887 | -36.2 1.688 0.309
3 1B2 -16.82 1.0700 0.9300 | -18.6 0.918 1.080
4 3A1 -12.29 1.6837 0.3163 | -13.2 1.743 0.257
5 1B1 -10.63 2.0000 0.0000 | -11.8 2.000
As a side note: I was completely unable to read the Mulliken analysis output by Gaussian, so I used MultiWFN instead. The comparison is also not exactly equivalent, because they expressed the hydrogen atoms with group orbitals.
The results don't differ by much. The basic approach of Mulliken is to split the overlap population to the orbitals symmetric between the elements. That is a principal problem of the method as the contributions to that MO can be quite different. Resulting problematic points are occupation values larger than two or smaller than zero, which have clearly no physical meaning. The analysis is especially ruined for diffuse functions.
Mulliken certainly could not have known at the time what we are able to do today, or under which conditions his approach breaks down, which makes sentences like the following amusing to read today.
Actually, very small negative values occasionally occur [...]. [...] ideally to the population of the AO [...] should never exceed the number 2.00 of electrons in a closed atomic sub-shell. Actually, [the orbital population] in some instances does very slightly exceed 2.00 [...]. The reason why these slight but only slight imperfections exist is obscure. But since they are only slight, it appears that the gross atomic populations calculated using Eq. (6') may be taken as representing rather accurately the "true" populations in various AOs for an atom in a molecule. It should be realized, of course, that fundamentally there is no such thing as an atom in a molecule except in an approximate sense.
For much more on this I found an explanation of the Gaussian output along with the reference to F. Martin, H. Zipse, J. Comp. Chem. 2005, 26, 97 - 105, available as a copy. I have not read it though.
Scroll down until the bottom of the page for the image, read for more information: CHEM 2070, Michael K. Denk: UV-Vis & PES. (University of Guelph) If dead: Wayback Machine
S.Y. Truong, A.J. Yencha, A.M. Juarez, S.J. Cavanagh, P. Bolognesi, G.C. King, Chemical Physics 2009, 355 (2–3), 183-193. Or try this mirror. | {
"domain": "chemistry.stackexchange",
"id": 5609,
"tags": "water, hybridization, vsepr-theory"
} |
What happens to gravitational force when separation between 2 objects is very small? | Question: I recently learned about gravitational force and found out the equation for gravitational force on a object by an object according to Newton's law of universal gravitation.
$$ F = \frac{Gm_1m_2}{r^2}. $$
$r^2$ denotes the square of the separation between the two objects. So the question is: what happens when this separation is very small? For example, when someone is standing on the ground, the separation between him and the Earth is very small (essentially zero).
Does the gravitational force $F$ then become $\infty$?
Answer: The $r$ value in this equation represents the separation of the two bodies' centers of mass.
So, when you're standing on the surface of the Earth, the value of $r$ is equal to $r_E$, the radius of the Earth, which is $6378\ \mathrm{km}$, or $6.378\times10^{6}\ \mathrm{m}$.
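As a quick numerical sanity check (a sketch; the Earth's mass is an assumed standard value, and 70 kg is an arbitrary example mass):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg (assumed standard value)
r_earth = 6.378e6    # m (the radius quoted above)
m_person = 70.0      # kg, arbitrary example mass

# Newton's law with r equal to the Earth's radius
F = G * M_earth * m_person / r_earth**2
print(f"F = {F:.0f} N")  # roughly m*g = 70 kg * 9.81 m/s^2 = 687 N
```

As expected, using the centre-to-centre distance recovers the familiar weight $mg$ rather than an infinite force.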
To answer your more general question, the force between two masses does, indeed, increase as separation decreases, but as separation gets smaller and smaller, other forces start to dominate.
The electrostatic forces between molecules and atoms are millions of times stronger than the gravitational forces between them.
The strong nuclear forces between nucleons are even stronger still.
Still, if you have enough mass, gravity can overcome all of these forces. When a large enough star collapses to an ever-smaller point, you get a black hole. While Physics tends to try to avoid talking about infinities, the nature of a black hole is such that you can, in theory, get arbitrarily close to its center of mass.
However, at this point, you are lost to the universe and what happens inside a black hole, stays inside a black hole. Physicists are not really sure what happens there. | {
"domain": "physics.stackexchange",
"id": 85001,
"tags": "forces, newtonian-gravity, singularities"
} |
pH range outside conventional 0-14 | Question: Is a pH value outside 0 - 14 possible?
I asked my teacher who said: yes, it is, but very difficult to achieve.
Then, on the internet, I found multiple answers: one saying it is possible, but that because of limitations of the glass pH electrode we can't measure it reliably; the other saying it is not possible because of the intrinsic tendency of water to ionize.
As you can see, I'm getting mixed answers from different sources and I'm really confused now. Could somebody here give a concise answer whether or not it is possible? A deep argument will be valuable.
Answer: Let me add a bit more to the answers already given. As has been said, $\mathrm{pH}$ is nothing but a measure of the activity of protons ($\ce{H+}$) in a solvent - $\displaystyle\mathrm{pH} = -\log_{10} \ce{\ a_{H_{\text{solvated}}^{+}}}$. In dilute solutions, a solute's activity is approximately equal to its concentration, and so you can get away with saying $\mathrm{pH} = -\log_{10} \ce{[H_{solvated}^{+}]}$.
The "normal" aqueous $\mathrm{pH}$ scale that goes from 0-14 delimits the region where both $\ce{[H^{+}]}$ and $\ce{[OH^{-}]}$ are lower than $\pu{1 mol L^{-1}}$, which is about the upper limit where concentrations and activities of solutes in solution are approximately the same and they can be used interchangeably without introducing too much error.
(The reason this is true is because a solute in a very dilute solution behaves as if it surrounded by an infinite amount of solvent; there are so few solute species that they interact very little with each other on average. This view breaks down at high concentrations because now the solute species aren't far enough apart on average, and so the intermolecular interactions aren't quite the same.)
Not only is it rarer to talk about pH for very highly concentrated solutions, because activity has a much more subtle definition than concentration, but also because you have to produce quite concentrated solutions to get unusual values of pH. For example, very approximately, to get a litre of an aqueous solution of $\mathrm{pH} = -1$ using hydrogen perchlorate ($\ce{HClO4}$, a very strong monoprotic acid) you would need to dissolve around $\pu{1 kg}$ of $\ce{HClO4}$ in a few hundred $\pu{mL}$ of water. A litre of $\mathrm{pH} = 15$ aqueous solution using potassium hydroxide ($\ce{KOH}$) would also require several hundred grams of the hydroxide in about as much water. It's so much acid/base in a small amount of solvent that many acids/bases can't even dissolve well enough to reach the required concentrations.
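Those mass estimates are easy to verify under the stated approximation that activity equals concentration (the molar masses below are standard values):

```python
M_HClO4 = 100.46   # g/mol, molar mass of HClO4
M_KOH = 56.11      # g/mol, molar mass of KOH

c_H = 10.0 ** 1    # pH = -1  ->  [H+] = 10 mol/L
c_OH = 10.0 ** 1   # pH = 15  ->  pOH = 14 - 15 = -1  ->  [OH-] = 10 mol/L

print(f"HClO4 needed per litre: {c_H * M_HClO4:.0f} g")  # about 1 kg
print(f"KOH needed per litre:   {c_OH * M_KOH:.0f} g")   # several hundred grams
```

At such concentrations the activity/concentration approximation has of course long broken down, which is exactly the point made above; the numbers only show the order of magnitude involved.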
There is also another way to come across unusual values of pH. We almost always talk about acids in bases in water, but they also exist in other media. Notice that in the definition of pH, no direct reference is made to water. It just happens that, for water, we have:
$$\ce{H2O (l) <=> H^+ (aq) + OH^{-} (aq)} \\ K^{\pu{25^\circ C}}_{\text{autodissociation}}=k^{\pu{25^\circ C}}_\mathrm{w}=a_{\ce{H^+(aq)}} \times a_{\ce{OH^- (aq)} } \simeq \ce{[H^{+}]} \times \ce{[OH^{-}]}
= 1.01\times 10^{-14}$$
$$\mathrm{pH}+\mathrm{pOH}=-\log\ k_\mathrm{w} \simeq 14 \ (25^\circ C)$$
However, for liquid ammonia, one has:
$$\ce{NH3{(l)} <=> H^+{(am)} + NH2^{-}{(am)}}
\\
K^{\pu{-50^\circ C}}_{\text{autodissociation}}=k^{\pu{-50^\circ C}}_\mathrm{am}=a_{\ce{H^+(am)}} \times a_{\ce{NH2^- (am)} } \simeq \ce{[H^{+}]} \times \ce{[NH2^{-}]} = 10^{-33}$$
$$\mathrm{pH}+\mathrm{p}\ce{NH2}=-\log\ k_{\mathrm{am}} \simeq 33 \ (-\pu{50^\circ C})$$
($\ce{(am)}$ stands for a substance solvated by ammonia). Hence, in liquid ammonia at $-50^\circ C$, the pH scale can easily go all the way from 0 to 33 (of course, it can go a little lower and higher still, but again now activities become important), and that neutral pH is actually 16.5.
For pure liquid hydrogen sulfate (some difficulties arise as $\ce{H2SO4}$ is a diprotic acid and tends to decompose itself when pure, but putting that aside and looking only at the first dissociation):
$$\ce{H2SO4{(l)} <=> H^+{(hs)} + HSO4^{-}{(hs)}}\\
K^{\pu{25^\circ C}}_{\text{autodissociation}}=k^{\pu{25^\circ C}}_\mathrm{hs}=a_{\ce{H^+(hs)}} \times a_{\ce{HSO4^- (hs)} } \simeq \ce{[H^{+}]} \times \ce{[HSO4-]} \simeq 10^{-3}$$
$$\mathrm{pH}+\mathrm{p}\ce{HSO4}=-\log\ k_{\mathrm{hs}} \simeq 3 \ (\pu{25^\circ C})$$
($\ce{(hs)}$ stands for a substance solvated by hydrogen sulfate). Thus, the pH in liquid hydrogen sulfate has a much smaller range than in water and ammonia, not going much further than interval from 0 to 3.
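The neutral point and the width of each scale follow directly from the autodissociation constants quoted above: neutrality is where the activities of the lyonium and lyate ions are equal, so $\mathrm{pH}_{\text{neutral}} = \mathrm{p}K_{\text{auto}}/2$. A quick sketch:

```python
# pK of autodissociation for the three solvents discussed above
pK_auto = {
    "water (25 C)":    14.0,
    "ammonia (-50 C)": 33.0,
    "H2SO4 (25 C)":     3.0,
}

for solvent, pk in pK_auto.items():
    # the scale runs roughly from 0 to pK; neutral pH sits in the middle
    print(f"{solvent:16s} pH range ~0..{pk:g}, neutral pH = {pk / 2:g}")
```

This reproduces the values above: neutral pH 7 in water, 16.5 in liquid ammonia, and 1.5 in liquid hydrogen sulfate.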
Notice also that not only does the range of $\mathrm{pH}$s change in each solvent, but there is no direct relationship between the values of pH between different solvents; formic acid in ammonia would behave as a strong acid and as such a $\pu{1 M}$ solution in ammonia would have a pH close to 0, but formic acid is actually a base in liquid hydrogen sulfate, and as such a $\pu{1 M}$ solution would have a pH above 1.5. An approximate comparison between the pH ranges and their relative positions in different solvents can be found in the figure below, from A Unified pH Scale for All Phases. | {
"domain": "chemistry.stackexchange",
"id": 722,
"tags": "acid-base, ph"
} |
Utilizing symmetry in statics problem | Question: I am given the structure below, and am supposed to find the forces occurring in $A, F, E, D, C$ and $B$. The task hints at using symmetry, as several forces are equal in magnitude.
Supposedly, $A=B$, and $G=H$, where the image below depicts the top beam, and these forces.
My question is therefore why $A=B$ and $H=G$. I have asked other students/T.A.s, and they say it is obvious or intuitive, but cannot really explain why. Can anyone help?
Answer: You seem to have no trouble identifying that the structure itself is symmetric, only how that automatically tells you that $A = B$ and $G = H$.
I'm going to take two approaches to this answer, one a bit blunter, the other a bit more visual.
The blunt approach
One could argue that when the structure (including loading) is symmetric, the onus should be on you to give a reason why the reactions shouldn't be symmetric.
If the reactions aren't symmetric, then the internal stresses and deflection aren't symmetric. So why should a symmetric structure have asymmetric deflection? Why should its left span sag more (for example) than the right span?
Wouldn't that imply that the left span is more flexible or under greater load than the right span (which we know isn't the case, since it's symmetric)?
The visual approach
Let's take a walk.
Initially, we are standing in a position such that we can see the structure exactly as shown in the problem statement, with A closer to us but slightly to our left and B in the distance, slightly to our right.
We then walk forwards and a bit to our right until we are exactly between the E and C supports and then we look at the rest of the structure. What we'll see is precisely what you drew: a horizontal beam that starts at A to our left, passes through G, then H, and ends at B. All those points have a 2-meter spacing between them, and the entire beam is under a distributed load $w$.1
Let's then say we sit down and work out all the reactions somehow. Now let's imagine that you get that $A \neq B$ and $G \neq H$. Specifically, let's say we get that $A > B$ and $G > H$. Or, put another way, we get that (left and right defined from our current position between C and E):
$$\begin{align}
\text{left-most reaction (A)} &> \text{right-most reaction (B)} \\
\text{left-inner reaction (G)} &> \text{right-inner reaction (H)}
\end{align}$$
We haven't thought about symmetry yet, so we just accept that result as correct and move on.
Now we stand up and walk around the entire structure until we are standing precisely between supports D and F, and we once again look at the rest of the structure. This time we see a horizontal beam that starts at B to our left, passes through H, then G, and ends at A. All those points have a 2-meter spacing between them, and the entire beam is under a distributed load $w$.
Now, do we need to sit down and calculate the results for this "new" beam? Of course not. It's obviously the same beam we calculated before, just from a different perspective.
But still, let's say we did go through the trouble of redoing our calculations. Assuming we did everything exactly the same way as the first time, we'd once again get that:
$$\begin{align}
\text{left-most reaction} &> \text{right-most reaction} \\
\text{left-inner reaction} &> \text{right-inner reaction}
\end{align}$$
But this time that means that $B > A$ and $H > G$...
So when we looked at the beam from one perspective (between C and E) we get a different result than we got from another (between D and F). That obviously makes no physical sense: the beam's true reactions don't care about where we are standing when we calculate them.
So we know that our result is incorrect. And in fact, we can ask ourselves: what's the only result that's independent of where we're standing? One where $A = B$ and $G = H$.2
This thought experiment only works when the beam is symmetrical. After all, when the beam is symmetrical, the change in perspective is simply an inversion of the labels. Why should beam A-G-H-B have different results than those from B-H-G-A?
If it isn't symmetric (say, if the spans are actually 1 m (A to G), 3 m (G to H) and 2 m (H to B), then our calculation standing between C and E will give us that:
$$\begin{align}
\text{left-most reaction (A)} &< \text{right-most reaction (B)} \\
\text{left-inner reaction (G)} &< \text{right-inner reaction (H)}
\end{align}$$
and the result standing between D and F will be consistent:
$$\begin{align}
\text{left-most reaction (B)} &> \text{right-most reaction (A)} \\
\text{left-inner reaction (H)} &> \text{right-inner reaction (G)}
\end{align}$$
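The actual four-support beam (with spring supports at G and H) is statically indeterminate, but the symmetry argument itself can be illustrated on a statically determinate two-support beam; a minimal sketch:

```python
def reactions(L, w, a, b):
    """Reactions of a beam of length L under uniform load w,
    resting on two supports at x = a and x = b (statics only)."""
    W = w * L                        # total load, acting at the centroid x = L/2
    # moment equilibrium about the support at a: R_b * (b - a) = W * (L/2 - a)
    R_b = W * (L / 2 - a) / (b - a)
    R_a = W - R_b                    # vertical force equilibrium
    return R_a, R_b

# Symmetric placement: walking around the beam merely swaps the labels,
# so the reactions must be equal.
print(reactions(L=6, w=1, a=1, b=5))  # (3.0, 3.0)

# Asymmetric placement: the label swap is no longer an identity,
# and the reactions differ (consistently, from either viewpoint).
print(reactions(L=6, w=1, a=1, b=4))  # (2.0, 4.0)
```

Swapping a and b in the symmetric case just exchanges the two (equal) values, which is exactly the "walk around the structure" argument in equation form.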
1 In the diagrams below, G and H should actually be springs; they've been marked as rigid supports to keep things simple and because this is irrelevant to the question at hand.
2 Obviously one where all reactions are equal would also be independent of where we're standing but would fail to satisfy the standard equilibrium equations. | {
"domain": "engineering.stackexchange",
"id": 4567,
"tags": "mechanical-engineering, civil-engineering, statics, beam, deflection"
} |
CSV/JSON converter | Question: This is my first Python script and I would like to hear the opinion of more experienced users, so I can do it right from the beginning instead of trying to correct my mistakes after months (years?) of coding.
#!/usr/bin/env python
import argparse
import csv
import json
import os
import sys
from pprint import pprint
__author__ = 'RASG'
__version__ = '2018.02.15.1843'
# ------------------------------------------------------------------------------
# Argumentos
# ------------------------------------------------------------------------------
arg_parser = argparse.ArgumentParser(
description = 'csv <-> json converter',
epilog = 'Output files: /tmp/rag-*',
formatter_class = lambda prog: argparse.RawTextHelpFormatter(prog, max_help_position=999)
)
arg_parser.add_argument('-v', '--version', action='version', version=__version__)
argslist = [
('-f', '--file', '(default: stdin) Input file', dict(type=argparse.FileType('r'), default=sys.stdin)),
('-ph', '--header', '(default: %(default)s) Print csv header', dict(action='store_true')),
]
for argshort, arglong, desc, options in argslist: arg_parser.add_argument(argshort, arglong, help=desc, **options)
args = arg_parser.parse_args()
# ------------------------------------------------------------------------------
# Funcoes
# ------------------------------------------------------------------------------
def get_file_ext(f):
file_name, file_ext = os.path.splitext(f.name)
return file_ext.lower()
def csv_to_json(csv_file):
csv_reader = csv.DictReader(csv_file)
output_file = open('/tmp/rag-parsed.json', 'wb')
data = []
for row in csv_reader: data.append(row)
json_data = json.dumps(data, indent=4, sort_keys=True)
output_file.write(json_data + '\n')
def json_to_csv(json_file):
json_reader = json.loads(json_file.read())
output_file = open('/tmp/rag-parsed.csv', 'wb')
campos = json_reader[0].keys()
csv_writer = csv.DictWriter(output_file, fieldnames=campos, extrasaction='ignore', lineterminator='\n')
if args.header: csv_writer.writeheader()
for j in json_reader: csv_writer.writerow(j)
# ------------------------------------------------------------------------------
# Executar
# ------------------------------------------------------------------------------
for ext in [get_file_ext(args.file)]:
if ext == '.csv':
csv_to_json(args.file)
break
if ext == '.json':
json_to_csv(args.file)
break
sys.exit('File type not allowed')
Answer: Before any criticism, great job. This is mature-looking code.
Alright, now some criticism ;)
Single-line if and for loops are not good style in Python: always break the body out into its own indented block, even if it's only a single line; for example:
for argshort, arglong, desc, options in argslist:
arg_parser.add_argument(argshort, arglong, help=desc, **options)
Add docstrings to your functions, to document what they do and their inputs/outputs (I prefer the Google format, but you're free to choose whatever works best for you):
def get_file_ext(f):
"""Retrieves the file extension for a given file path
Args:
f (file): The filepath to get the extension of
Returns:
str: The lower-case extension of that file path
"""
file_name, file_ext = os.path.splitext(f.name)
return file_ext.lower()
There are a couple more built-ins you could use. For example, get_file_ext(f) could be replaced by Path(f.name).suffix.lower() (using pathlib, if you're on an up-to-date Python; note the .lower() to match your original behaviour)
Unless there's some other good reason, use json.dump(data, open(...), ...) and json.load(open(...)) rather than reading/writing the file contents yourself, as in json.loads(open(...).read()). (Note that json.dump takes the object first and the file object second.) This way, you never need to store the JSON text in memory; it can be saved/read to/from disk lazily by the parser. It's also just cleaner. (Also, you don't need 'wb' mode, just 'w', since JSON is text, not an arbitrary byte stream.)
When you do want to manually open a file, it's better practice to use it as a context manager, which will automatically close the file at the proper time:
with open(...) as output_file:
output_file.write(...)
Wrap the body of your script in a __main__ block:
if __name__ == '__main__':
for ext in [...]: ...
or
def main():
for ext in [...]: ...
if __name__ == '__main__':
main()
That's more the style that's popular and standard in the Python community. You're clearly familiar with good coding, though, and it shows in your style. Good job, and welcome to Python! | {
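Putting several of these suggestions together (context managers, json.dump/json.load, pathlib, a guarded entry point), a cleaned-up version of the converter might look like the sketch below. The file layout and names are illustrative, not prescriptive:

```python
#!/usr/bin/env python3
"""CSV <-> JSON converter, reworked along the lines suggested above."""
import csv
import json
import sys
from pathlib import Path

def csv_to_json(csv_path, json_path):
    """Convert a CSV file into a JSON array of row objects."""
    with open(csv_path, newline='') as src, open(json_path, 'w') as dst:
        json.dump(list(csv.DictReader(src)), dst, indent=4, sort_keys=True)

def json_to_csv(json_path, csv_path, header=False):
    """Convert a JSON array of objects back into CSV."""
    with open(json_path) as src:
        rows = json.load(src)
    with open(csv_path, 'w', newline='') as dst:
        writer = csv.DictWriter(dst, fieldnames=rows[0].keys(),
                                extrasaction='ignore', lineterminator='\n')
        if header:
            writer.writeheader()
        writer.writerows(rows)

def main(argv):
    src = Path(argv[1])
    ext = src.suffix.lower()
    if ext == '.csv':
        csv_to_json(src, src.with_suffix('.json'))
    elif ext == '.json':
        json_to_csv(src, src.with_suffix('.csv'))
    else:
        raise SystemExit('File type not allowed')

if __name__ == '__main__' and len(sys.argv) > 1:
    main(sys.argv)
```

Note that json.dump(data, fp) takes the object first and the file object second, and that the with blocks close the files for you even if an exception occurs.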
"domain": "codereview.stackexchange",
"id": 33589,
"tags": "python, beginner, json, csv"
} |
How to calculate the max torque due to angular momentum of a beam given motor specs | Question: I have a non-rigid beam attached to a gearbox and motor assembly. The gearbox outputs $1000\ \mathrm{N \cdot m}$ at the shaft. The beam is rotated at $3000\ \mathrm{RPM}$ around its center. If the motor is switched off immediately and acts as a brake, is it possible that the torque generated from the momentum of the beam could be greater than the $1000\ \mathrm{N \cdot m}$ that the motor originally produced, and if so, how?
I am trying to work out the max torque a beam could create at the gearbox mount when the power is cut suddenly; however, we don't know any of the beam characteristics, such as where the point loads are, how long it takes to stop, etc. All I have are the gearbox and motor specifications.
Edit: This question is different from a similar one I asked here. In the other question I mentioned that the beam was uniformly distributed, however in this example I have no beam information. All I have to go off is the motor and gearbox specifications.
Answer:
is it possible that the torque generated from the momentum of the beam
I'm going to stop you right there. Momentum doesn't generate torque. Momentum is moment of inertia times speed. Torque is moment of inertia times acceleration. Just like speed and acceleration aren't the same thing, momentum and force aren't the same thing.
If you apply brakes, the torque that results is whatever the braking torque is. The load doesn't dictate what the braking torque is. The load will determine how quickly it decelerates for a given braking torque, but the brake is what sets the torque.
Again, momentum is $p = mv$, or $L = I\omega$. Torque affects acceleration. It changes the speed, so it changes momentum $\tau = dL/dt$, but that's just because again $L = I\omega$, so really what that's saying is that $\tau = (I)d\omega/dt$, or the more common expression for torque, $\tau = I\alpha$.
So again, to reiterate, the load doesn't set the torque. The brakes set the torque. The load reacts to torque.
I am trying to work out the max torque a beam could create at the gearbox mount when the power is cut suddenly, however we don't know any of the beam characteristics such as where the point loads are, how long it takes to stop etc.
If you want to calculate the load, but you don't know anything about the load, you're pretty well out of luck. | {
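To put numbers on "the brake sets the torque; the load reacts to it": if one assumes some beam properties (the question provides none, so the figures below are purely illustrative), the deceleration and stopping time follow from $\tau = I\alpha$:

```python
import math

# Assumed beam: uniform, 2 m long, 50 kg, spinning about its centre.
# (These numbers are illustrative; the question gives no beam data.)
m, L = 50.0, 2.0
I = m * L**2 / 12                # moment of inertia about the centre, kg m^2

omega = 3000 * 2 * math.pi / 60  # 3000 RPM converted to rad/s
tau_brake = 1000.0               # braking torque, set by the brake, N*m

alpha = tau_brake / I            # deceleration the load experiences, rad/s^2
t_stop = omega / alpha           # time to stop; the torque never exceeds tau_brake
print(f"I = {I:.2f} kg m^2, alpha = {alpha:.1f} rad/s^2, stops in {t_stop:.2f} s")
```

Changing the assumed beam changes only how long the stop takes, not the torque at the mount, which stays at whatever the brake applies.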
"domain": "engineering.stackexchange",
"id": 2288,
"tags": "dynamics"
} |
A Question About the Surface of a Black Hole Singularity | Question: In Kip Thorne's book, Black Holes and Time Warps, he states that the mass of the core of a star shrinks until quantum gravity takes over. And then discusses that at this distance, the singularity cannot be distinguished from quantum foam. I am confused. Is he stating that the singularity is covered in quantum foam or does the mass shrink so small it becomes part of the quantum foam. If it is the latter, how can the quantum gravity, affect the macroscopic world?
Is he discussing the surface is smooth, except for the fluctuations required by quantum gravity?
Sorry - I am not a scientist or student...
Answer: In Thorne's book, on page 477, it says "Because all conceivable curvatures and topologies are permitted inside the singularity, no matter how wild, one says that the singularity is made from a probabilistic foam. John Wheeler, who first argued that this must be the nature of space when the laws of quantum gravity hold sway, has called it quantum foam."
So what you are thinking of as a foam is really only a probabilistic foam. This is just a way of saying that every possible state in the singularity is only a probability. (Takes you back to the probabilistic nature of any particle which is a discussion far beyond this answer).
But remember Thorne's central point of the chapter, that time does not exist at the singularity. Space and time have separated from each other. Time stops at the event horizon, but space continues to dilate all the way down to the singularity. This quantum foam is Wheeler's way of describing the situation of an unknowable, but only probabilistic, nature of the singularity. | {
"domain": "physics.stackexchange",
"id": 83694,
"tags": "black-holes, spacetime, quantum-gravity, event-horizon, singularities"
} |
Typescript error handler | Question: I wrote a function in Typescript which takes required parameters, optional parameters and parameters with default value. I want to use this function for error handling of http requests. I am not sure, if my solution is good, so I wanted to know the opinion of the Code Review community.
Here is my function:
public handleError<T>(
operation: string = 'operation',
customErrorMessage: string,
valueToReturn: T,
logError = true,
showSnackbar?: boolean,
showHttpErrorResponse?: boolean
): (error: any) => Observable<any> {
return (error: any): Observable<T | Error> => {
if (logError) {
console.error(error.message);
}
if (showSnackbar) {
if (showHttpErrorResponse) {
this.snackbar.open(error.message, 'OK');
} else {
this.snackbar.open(customErrorMessage, 'OK');
}
}
return showHttpErrorResponse ? of(new Error(error.message)) : of(valueToReturn as T);
};
}
Here is an example of calling:
this.httpClient.get('/', { observe: 'response', responseType: 'text' }).pipe(
catchError(
this.errorService.handleError('getSysName', '', null, true, false, true)
)
);
My Questions:
How is the selections of the parameter types (usage of required, optional and default parameters)? If it is not good, how would you improve it?
How is the naming of the parameters?
How is the order of the parameters? If it is not good, how would you order the parameters?
Do you have also improvement suggestions for the body of the function?
Do you have completely different suggestions for improvement?
Answer: My understanding of your approach here is that it handles errors which may occur when sending a request to one of your servers. The function should log and/or display a snackbar depending on the parameters that were passed in. This leads us to your first three questions:
Answers to questions 1-3
operation: I noticed that operation is unused and can be removed. What is it even used for, and why is the default value 'operation'?
customErrorMessage: So the custom message could basically be an empty string (""), which would leave the snackbar that pops up empty except for, I guess, an 'OK' button. This seems like bad UX or a bug. You may want to use something like customErrorMessage: string | undefined and explicitly check that the string is not empty. Also see [1]
valueToReturn: We are handling errors that may occur; why should this method return a value, and which value anyway? Again, is null a good approach here (maybe see [1])? Also see [2]
logError: I don't like where this is placed: why is the first parameter a default parameter, and why is this default parameter placed in between the others (see [3])? What do you think about an optional parameter called disabledLog?: boolean? When passed true, it disables the logging; otherwise everything gets logged
showSnackbar, showHttpErrorResponse: Those are very self-explanatory; I like them. Also, the fact that these are the last parameters in the signature is good!
Answer to question 4
I'll concentrate on the function body for now, leaving out the rest.
The check whether or not to log is a good approach, this seems fine.
if (logError) {
console.error(error.message);
}
This looks like a bit of arrow code https://blog.codinghorror.com/flattening-arrow-code/.
if (showSnackbar) {
if (showHttpErrorResponse) {
this.snackbar.open(error.message, 'OK');
} else {
this.snackbar.open(customErrorMessage, 'OK');
}
}
Let's consider the following, which focusses on readability considering previously linked article:
if (showSnackbar) {
const content = showHttpErrorResponse ? error.message : customErrorMessage;
this.snackbar.open(content, 'OK');
}
The same as before, we could try to factor out the common parts
return showHttpErrorResponse ? of(new Error(error.message)) : of(valueToReturn as T);
Which could look like this:
const returnValue = showHttpErrorResponse ? new Error(error.message) : valueToReturn;
return of(returnValue);
A follow-up question here is: Why does the showHttpErrorResponse dictate whether we return a custom value or Error here? This is not clear from a function signature point of view.
Question 5:
I would like to use this question for two things. First, why is the return type of handleError at one point (error: any) => Observable<any> and later (error: any) => Observable<T | Error>? I'd suggest aligning those two. The last thing, to finalize this post: this is what I'd come up with, considering the things I've noted. On top of that, I've decided to change the signature to take an object, which makes the boolean parameters and null/undefined params more readable.
handleError<T>(prop: {
customErrorMessage: string | undefined,
customReturnValue: T | undefined,
showSnackbar?: boolean,
showHttpErrorResponse?: boolean,
disableLogging?: boolean }
): (error: any) => Observable<T | Error> {
const { customErrorMessage, customReturnValue, showSnackbar, showHttpErrorResponse, disableLogging } = prop;
return (error: any): Observable<T | Error> => {
if (!disableLogging) {
console.error(error.message);
}
if (showSnackbar) {
const content = showHttpErrorResponse ? error.message : customErrorMessage;
this.snackbar.open(content, 'OK');
}
const returnValue = showHttpErrorResponse ? new Error(error.message) : customReturnValue;
return of(returnValue);
};
}
With following usage:
this.errorService.handleError({
customErrorMessage: undefined,
customReturnValue: null,
showSnackbar: true,
showHttpErrorResponse: false,
disableLogging: false
})
------ Appendix ------
[1]: Checking that a string is not empty is also possible with typings to some extent. Something I scribbled quickly was something like:
type NotEmptyString<T extends string> = `${T}` extends "" ? never : T;
type X = NotEmptyString<""> // X resolves to never
type Y = NotEmptyString<" "> // Y resolves to " "
[2]: Returning null could be a problematic approach and could potentially cause Runtime exceptions. I for myself try to reduce the usage of null and rather explicitly type my functions to return either a value OR undefined.
[3]: In general (across multiple languages, independent of syntax) default parameters come last. This is to avoid signatures like: foo(undefined,undefined,undefined,requiredParamVal,undefined,undefined)
Consider this example
const foo = (a: number, bar = 1) => {}
foo(10)
const foofoo = (a = 1, bar: string) => {}
foofoo(undefined, "") | {
"domain": "codereview.stackexchange",
"id": 43309,
"tags": "error-handling, typescript, angular-2+, rxjs"
} |
SSH error when launching nodes on another machine | Question:
We added this line to our launch file in order to run a node on another machine:
<machine name="robot" address="robot.dynamic.edu" env-loader="/opt/ros/groovy/env.sh" user="turtlebot"/>
The problem is we get this error:
robot.dynamic.edu is not in your SSH known_hosts file.
Please manually:
ssh turtlebot@robot.dynamic.edu
then try roslaunching again.
If you wish to configure roslaunch to automatically recognize unknown hosts, please set the environment variable ROSLAUNCH_SSH_UNKNOWN=1
Our environment variable is set to:
#!/usr/bin/env sh
# generated from catkin/cmake/templates/env.sh.in
if [ $# -eq 0 ] ; then
/bin/echo "Usage: env.sh COMMANDS"
/bin/echo "Calling env.sh without arguments is not supported anymore. Instead spawn a subshell and source a setup file manually."
exit 1
else
. "/opt/ros/groovy/setup.sh"
exec "$@"
fi
Originally posted by Solmaz on ROS Answers with karma: 3 on 2013-08-12
Post score: 0
Answer:
Linux doesn't know what computer robot.dynamic.edu refers to, and so you have to tell it. You do so by editing your /etc/hosts file like they do in this explanation.
Originally posted by thebyohazard with karma: 3562 on 2013-08-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 15237,
"tags": "ros, roslaunch, multiplemachines"
} |
An N*N Tic Tac Toe Game | Question: Here is a simple Tic Tac Toe game. I would like to know how I can improve this code further.
#include <iostream>
#include <cctype>
#include <array>
#include <random>
enum struct Player : char
{
none = '-',
first = 'X',
second = 'O'
};
std::ostream& operator<<(std::ostream& os, Player const& p)
{
return os << std::underlying_type<Player>::type(p);
}
enum struct Type : int
{
row,
column,
diagonal
};
enum struct Diagonals : int
{
leftTopRightBottom,
rightTopleftBottom
};
template<std::size_t DIM>
class TicTacToe
{
public:
TicTacToe();
bool isFull() const;
void draw() const;
bool isWinner(Player player) const;
bool applyMove(Player player, std::size_t row, std::size_t column);
private:
std::size_t mRemain = DIM * DIM;
std::array<Player, DIM * DIM> mGrid;
};
template<int DIM>
struct Match
{
Match(Type t, int i)
: mCategory(t)
, mNumber(i)
{}
bool operator() (int number) const
{
switch (mCategory)
{
case Type::row:
return (std::abs(number / DIM) == mNumber);
case Type::column:
return (number % DIM == mNumber);
case Type::diagonal:
if (mNumber == static_cast<int>(Diagonals::leftTopRightBottom))
{
return ((std::abs(number / DIM) - number % DIM) == mNumber);
}
if (mNumber == static_cast<int>(Diagonals::rightTopleftBottom))
{
return ((std::abs(number / DIM) + number % DIM) == DIM - mNumber);
}
default:
return false;
}
}
Type mCategory;
int mNumber;
};
template<std::size_t DIM>
TicTacToe<DIM>::TicTacToe()
{
mGrid.fill(Player::none);
}
template<std::size_t DIM>
bool TicTacToe<DIM>::applyMove(Player player, std::size_t row, std::size_t column)
{
std::size_t position = row + DIM * column;
if ((position > mGrid.size()) || (mGrid[position] != Player::none))
{
return true;
}
--mRemain;
mGrid[position] = player;
return false;
}
template<std::size_t DIM>
bool TicTacToe<DIM>::isFull() const
{
return (mRemain == 0);
}
template<std::size_t DIM>
bool TicTacToe<DIM>::isWinner(Player player) const
{
std::array<bool, 2 * (DIM + 1)> win;
win.fill(true);
int j = 0;
for (auto i : mGrid)
{
int x = j++;
for (auto k = 0; k < DIM; ++k)
{
if (Match<DIM>(Type::column, k)(x))
{
win[k] &= i == player;
}
if (Match<DIM>(Type::row, k)(x))
{
win[DIM + k] &= i == player;
}
if (Match<DIM>(Type::diagonal, k)(x))
{
win[2 * DIM + k] &= i == player;
}
}
}
for (auto i : win)
{
if (i)
{
return true;
}
}
return false;
}
template<std::size_t DIM>
void TicTacToe<DIM>::draw() const
{
std::cout << ' ';
for (auto i = 1; i <= DIM; ++i)
{
std::cout << " " << i;
}
int j = 0;
char A = 'A';
for (auto i : mGrid)
{
if (j == 0)
{
std::cout << "\n " << A++;
j = DIM;
}
--j;
std::cout << ' ' << i << ' ';
}
std::cout << "\n\n";
}
struct Random
{
Random(int min, int max)
: mUniformDistribution(min, max)
{}
int operator()()
{
return mUniformDistribution(mEngine);
}
std::default_random_engine mEngine{ std::random_device()() };
std::uniform_int_distribution<int> mUniformDistribution;
};
class Game
{
public:
void run();
private:
void showResult() const;
void turn();
static const std::size_t mDim = 4;
int mNumberOfPlayers = 2;
TicTacToe<mDim> mGame;
std::array<Player, mNumberOfPlayers> mPlayers{ { Player::first, Player::second } };
int mPlayer = 1;
Random getRandom{ 0, mDim - 1 };
};
void Game::run()
{
while (!mGame.isWinner(mPlayers[mPlayer]) && !mGame.isFull())
{
mPlayer ^= 1;
mGame.draw();
turn();
}
showResult();
}
void Game::showResult() const
{
mGame.draw();
if (mGame.isWinner(mPlayers[mPlayer]))
{
std::cout << "\n" << mPlayers[mPlayer] << " is the Winner!\n";
}
else
{
std::cout << "\nTie game!\n";
}
}
void Game::turn()
{
char row = 0;
char column = 0;
for (bool pending = true; pending;)
{
switch (mPlayers[mPlayer])
{
case Player::first:
std::cout << "\n" << mPlayers[mPlayer] << ": Please play. \n";
std::cout << "Row(1,2,3,...): ";
std::cin >> row;
std::cout << mPlayers[mPlayer] << ": Column(a,b,c,...): ";
std::cin >> column;
column = std::toupper(column) - 'A';
row -= '1';
pending = column < 0 || row < 0 || mGame.applyMove(mPlayers[mPlayer], row, column);
if (pending)
{
std::cout << "Invalid position. Try again.\n";
}
break;
case Player::second:
row = getRandom();
column = getRandom();
pending = mGame.applyMove(mPlayers[mPlayer], row, column);
break;
}
}
std::cout << "\n\n";
}
int main()
{
Game game;
game.run();
}
Answer: Here are some things that may help you improve your code.
Avoid non-standard extensions
While your compiler may allow them, the use of non-standard extensions makes your code non-portable and less readable to others. In particular, we have this:
int mNumberOfPlayers = 2;
// ...
std::array<Player, mNumberOfPlayers> mPlayers{ { Player::first, Player::second } };
The problem is that mNumberOfPlayers is neither const nor static, so it is not a compile-time constant and cannot portably be used as the std::array size.
Reconsider the class roles
The division of the game in to Game and TicTacToe classes seems odd to me. I did not discern a logical reason for them not to be a single class. Maybe the idea was to separate the playing field from the game logic? If so, then it seems that perhaps applyMove and isWinner would more logically belong to the Game rather than the board. Match also seems to be superfluous.
Consider an array rather than an enum
The use of the enum for Player has some good features, such as type safety, but it doesn't lend itself to allowing more than two players, which seems to have been the idea behind having the mNumberOfPlayers instance variable. An array of player tokens might have been a better choice, with some tie between the number of players and the rest of the game. Similarly, the only places that Diagonals are used, they're used with a static_cast<int>. That would seem to be a sign that they shouldn't have been declared as enum.
Understand what auto does
In some instances, the use of auto is perfectly clear and well suited to the job, as when using a "range for":
for (auto i : mGrid)
However, other places it doesn't make as much sense:
for (auto i = 1; i <= DIM; ++i)
The variable i in the latter case will always be an int because 1 is an int. Some compilers will complain about comparing signed and unsigned values here.
Be consistent with template parameters
In some cases, DIM is an int, but in others it's a std::size_t. It's hard to imagine that was intentional and I can't think of a use, so it's probably better to declare them all the same way. Alternatively, see the next suggestion.
Use a constructor argument instead of a template parameter
Having the Match and TicTacToe use DIM as a template parameter instead of as a constructor argument means that the code must be recompiled to accommodate a different size board. With a bit of redesign, it could easily instead be a constructor argument, allowing flexible use without recompiling.
Reconsider member functions
The applyMove() member function does two things -- it checks to see if the move would be valid and then applies it if it's valid. I'd suggest that those two steps might be different functions since a function that checks but does not alter the board might be very handy for a smarter automatic player.
Consider allowing a std::ostream parameter for output
As it stands, the draw() function is only capable of sending its output to std::cout, but it could easily be made more flexible by allowing it to take an std::ostream as a parameter. In fact, I'd be inclined to refactor it as a friend like this:
std::ostream& operator<<(std::ostream& out, const TicTacToe& ttt) { //... }
Consider making Random a templated class
The use of the Random class is good and appropriate. One suggestion I have is that it might be a candidate for a template to allow a range of either int or unsigned values. It doesn't make much practical difference, but might be handy to quiet signed/unsigned type mismatch warnings if an unsigned were needed.
Be cautious with object construction/destruction
In the shortest possible 4x4 game (4 moves by each of 2 players) there are 1728 calls each to the Match constructor and destructor due to the way that class is used in isWinner. That is neither particularly efficient for the computer, nor does it particularly simplify the code for a human reader, so I would strongly advise rewriting the isWinner code.
Consider separating I/O from program logic
The showResult() and draw() functions are both clearly primarily related to I/O function and do nothing else. That's good design. The turn() function could similarly be refactored into the logic portion and the I/O portion, which would make it easier to see how to adapt the program to, say, a non-text GUI version, without altering the underlying game logic.
Allow rational reuse of objects
If we want to play the game twice, there is currently no provision to do so because the game state can't be reset using the existing interface. Adding a Game.reset() function would be simple to do and add to the usability of the object. | {
"domain": "codereview.stackexchange",
"id": 14618,
"tags": "c++, c++11, tic-tac-toe"
} |
Why is the electron magnetic moment always parallel to the spin for an electron? | Question: Consider the Hamiltonian
$$\hat{H}=\frac{1}{2}\omega\vec{B}\cdot \vec{\sigma}$$
where $\vec{\sigma}$ is the Pauli vector $=\begin{pmatrix}\sigma_x & \sigma_y & \sigma_z \end{pmatrix}$, $\omega$ is the frequency of the magnetic field $\vec{B}=\begin{pmatrix}B_x & B_y & B_z \end{pmatrix}$
The electron spin is described by a 2-dimensional Hilbert space. Suppose we pick the basis of this space as the spin up and spin down states $\{\lvert 1\rangle,\lvert 0\rangle\}$. Then any spin state can be written as follows, with the probability amplitudes $a,b$ subject to the usual normalisation constraints
$$\lvert s(t)\rangle=a(t)\lvert 1\rangle+b(t)\lvert 0\rangle$$
The probability of measuring +1 in some direction $\vec{\sigma}\cdot \hat{n}$, $t$ seconds later, can be easily determined by first evolving a given $\lvert s(0)\rangle$ with $e^{-\frac{i}{\hbar}\hat{H}t}$ to get $\lvert s(t)\rangle$ and then computing $||\langle \lambda_+\lvert s(t)\rangle||^2$, where $\lambda_+$ is the eigenvector corresponding to +1 for the measurement in the $\hat{n}$ direction. Therefore, we can easily see that the effect of the magnetic field's transformation is to rotate the spin state.
The magnetic moment of an electron is related to its spin by
$$\vec{\mu}=\frac{ge}{2m}\frac{\hbar}{2}\vec{\sigma}$$
The above calculation, however, does not seem to shed any light on how the magnetic moment is influenced by the Hilbert space of the spin state.
How do we know (experimentally and theoretically) that the spin must always align with the magnetic moment of the electron (assuming the electron is in a system such that its orbital angular momentum contribution is negligible)?
A brief look at the Dirac equation only explains why there is spin (it arises when we put quantum mechanics and relativity together, which results in a wavefunction with 4 components), but there is no mention of its relationship with the magnetic moment.
Answer: The key experiment to test this hypothesis would be the Einstein-de Haas experiment. If the magnetic moment did not align with the electron spin, the measurement would yield a Landé factor $g \neq 2$, in contradiction with the prediction of the Dirac (or linearized Pauli) equation.
"domain": "physics.stackexchange",
"id": 34130,
"tags": "quantum-mechanics, electrons, quantum-spin"
} |
Why is acceleration inversely proportional to the mass of an object, but directly proportional to force? | Question: Why is acceleration of an object inversely proportional to its mass, but directly proportional to force?
Answer: We know $F = ma$. Then you rearrange the equation:
$a = \frac{F}{m}$
Case 1: Considering $F$ to be constant:
Acceleration is inversely proportional to mass.
Case 2: Considering $m$ to be constant:
Acceleration is directly proportional to force.
Why $F = ma$ holds in the first place is a different question. You can check this answer of mine if you are interested:
https://physics.stackexchange.com/a/638836/287551
Hence proved. Do let me know if you have any difficulty. | {
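The two proportionalities follow directly from $a = F/m$; a minimal numerical sketch:

```python
def acceleration(force, mass):
    """Newton's second law rearranged: a = F / m."""
    return force / mass

# Case 1, constant force: doubling the mass halves the acceleration
assert acceleration(10.0, 4.0) == acceleration(10.0, 2.0) / 2

# Case 2, constant mass: doubling the force doubles the acceleration
assert acceleration(20.0, 2.0) == 2 * acceleration(10.0, 2.0)
```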
"domain": "physics.stackexchange",
"id": 79586,
"tags": "newtonian-mechanics, forces, mass, acceleration"
} |
how can I make a custom msg struct to publish? | Question:
I have a script that publishes a lot of system information like battery voltage/current, system status, and isready; all in all I have 13 different parameters I'm publishing. What I want is to tie all that data into a single struct and only publish once.
For example, something like the geometry_msgs odom msg: you know how it has msg.pose.pose.x? I want something similar, like msg.data1.data.
Originally posted by MisticFury on ROS Answers with karma: 5 on 2020-10-19
Post score: 0
Answer:
What you want is to create multiple custom msgs (for how to create custom msgs, see @JackB's answer).
Then, you need to nest the messages the way you want. Something along the line of the following msg definitions:
BatteryStatus.msg:
float32 current
float32 voltage
SystemStatus.msg:
int32 error_code
string error_msg
FullSystemInformation.msg:
BatteryStatus battery
SystemStatus system
And then you can access the fields like, e.g. msg.battery.current or msg.system.error_code.
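For illustration only: the Python classes that ROS generates from these .msg files behave roughly like nested plain objects. Here is a mock using standard-library dataclasses (not actual rospy-generated code) just to show the nested access pattern:

```python
from dataclasses import dataclass, field

@dataclass
class BatteryStatus:
    current: float = 0.0
    voltage: float = 0.0

@dataclass
class SystemStatus:
    error_code: int = 0
    error_msg: str = ""

@dataclass
class FullSystemInformation:
    battery: BatteryStatus = field(default_factory=BatteryStatus)
    system: SystemStatus = field(default_factory=SystemStatus)

msg = FullSystemInformation()
msg.battery.voltage = 12.6   # nested field access, as in the real message
msg.system.error_code = 0
print(msg.battery.voltage)
```

In an actual node you would instead import the generated class from your package (something like `from my_pkg.msg import FullSystemInformation`, with `my_pkg` being your package name) and publish one such message instead of 13 separate topics.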
Originally posted by mgruhler with karma: 12390 on 2020-10-19
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-10-19:
Or use sensor_msgs/BatteryState. | {
"domain": "robotics.stackexchange",
"id": 35654,
"tags": "ros, ros-melodic, std-msgs, python3"
} |
misunderstanding of spectral leakage | Question: I want to understand spectral leakage.
I understand that whenever we feed $N$ time-samples of a periodic, continuous signal into an FFT algorithm we are multiplying the true periodic, continuous signal in the time domain by a rectangular window, resulting in a convolution in the frequency domain of the true signal's FT with the rectangular window's FT. I also know that the FT of a square function is a sinc function.
I do not understand the idea of having an integer number of cycles within the time period covered by those $N$ time-samples of the continuous signal we feed into the FFT. I want to find out why if we have a (windowed by a square due to the finite value of $N$) pure sinusoid (one frequency only) as input to the FFT, and if this length-$N$-signal contains an integer number of sinusoid's cycles within the time range it covers, then there is no spectral leakage.
What I think is that in the frequency-domain we shall get a sinc function extending to $\pm \infty$, with its main lobe centered at the frequency of the sinusoid. So there is going to be non-zero counts in all the bins across the (horizontal) frequency axis. This is just how life is due to the finite $N$.
Questions:
Does each bin from across the returned-from-the-FFT frequency axis ''summons'' / ''displays'' a sinc function with its main lobe centered on that specific bin? That would be, if we gather $N=10$ bins from the FFT algo because we fed in a length $N=10$ signal, we will have a superposition of $N=10$ sinc functions in a spectrum plot?
If somehow I manage to make the returned-from-the-FFT frequency 1D array to contain in its components the frequency of my sinusoid, will I obtain ONLY 1 sinc function appearing on the spectrum plot? That sinc function will have its main lobe centered on the frequency of interest.
It seems from the above two points that there is going to be spectral leakage to other-than-the-center-lobes of that only sinc function appearing in the spectrogram, just by the nature of things. Is the explanation for not seeing this leakage in the spectrum plot the fact that the sinc function's side lobes which still contain a reasonable count-value (so which doesn't tend to 0, but is still visible with the eye) are at such frequencies that they don't reach the next nearby bin? In other words, are the very next bins (left and right of the sinusoid's frequency bin) so far away from the main (important) bin that the amount of counts they get is negligible, and is this why when we plot the spectrum we see a sharp peak at the important bin and "nothing" at other bins? Or is it actually a precise cancellation which is happening, and by magic all the other bins get exactly 0 counts?
If I don't manage to make the returned-from-the-FFT frequency 1D array contain the frequency of my sinusoid in its components, then the sinusoid's frequency will for sure be between 2 bins. Then 2 bins will each summon a sinc function and the spectrogram will show the superposition of 2 sinc functions. Is this correct?
I would greatly appreciate (material from anywhere) with equations and or graphs, in addition to a "wordy explanation". I don't mind reading another answer already posted, if that responds to my question, however I kind of searched with a filter through all DSP's questions and read the potentially useful ones, however I still don't understand my questions above ...
Thank you!
Answer:
whenever we feed N time-samples of a periodic, continuous, signal into a FFT algorithm
You can't feed a continuous signal into an FFT. You need to sample it first at a specific sample rate.
Sampling makes the time domain signal discrete and something that's discrete in one domain MUST be periodic in the other domain. The FFT implements the Digital Fourier Transform (DFT) NOT the Continuous Fourier Transform. Since the DFT transforms discrete signals into discrete signals, this also means that both time domain and frequency domain are periodic. The frequency domain period is the sample rate and the time domain period is the FFT length times the sample period.
There are two ways to think about spectral leakage
Sampling in the frequency domain creates periodic repetition in the time domain. If your sine wave does NOT have an integer number of periods, the repetition will create a discontinuity at the repetition point and your continuous time domain signal is not a sine wave anymore.
See the graph below. The picture shows two sine waves: 1 at 1 cycle per FFT and the other at 1.5 cycles per FFT. If you repeat them the 1.5 cycle sine wave will have a strong discontinuity at the period border. The 1 cycle sine wave stays a sine wave.
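The discontinuity at the repetition point is easy to measure numerically. A small sketch (assuming NumPy; it uses 5.3 cycles rather than 1.5 so that the value itself, not just the slope, jumps at the seam):

```python
import numpy as np

N = 64
n = np.arange(N)

def seam_excess(cycles):
    """Jump at the periodic-repetition point minus the largest ordinary
    sample-to-sample step inside the window."""
    x = np.sin(2 * np.pi * cycles * n / N)
    seam = abs(x[0] - x[-1])             # x[0] follows x[N-1] in the periodic extension
    normal = np.max(np.abs(np.diff(x)))  # biggest step between adjacent samples
    return seam - normal

print(seam_excess(5.0))  # <= 0: the periodic extension is still a smooth sine
print(seam_excess(5.3))  # > 0: a clear discontinuity appears at the seam
```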
Your interpretation of the sinc function also works. If you have an integer multiple of periods, then all the zeros of the sinc function in the frequency domain fall on the "other" FFT bins. The spectrum can still be determined by convolution with a sinc function; however, the contribution of the sinc function to all other FFT bins is zero.
Does each bin from across the returned-from-the-FFT frequency axis ''summons'' / ''displays'' a sinc function with its main lobe centered on that specific bin?
Kind of yes. It's just that a sinc centered on one FFT bin has all its zeros on the other bins
If somehow I manage to make the returned-from-the-FFT frequency 1D array to contain in its components the frequency of my sinusoid, will I obtain ONLY 1 sinc function appearing on the spectrum plot? That sinc function will have its main lobe centered on the frequency of interest.
Yes. Again: if the frequency of interest coincides with an FFT bin the sinc will be 0 on all other bins. If not, you get spectral leakage.
Or is it actually a precise cancellation which is happening and by magic all the other bins get exactly 0 counts?
That one. It's not magic, it's the definition of the sinc function.
Then 2 bins will each summon a sinc function and the spectogram will show the superposition of 2 sinc function now. Is this correct?
No. There is only one sinc centered at the frequency of the sine wave. That frequency does NOT coincide with an FFT bin, so the value at any FFT bin will be determined by the normalized difference between the bin frequency and the sine frequency.
Let's take a look at the normalized sinc function
$$\operatorname{sinc}(x) = \frac{\sin \pi x}{\pi x} $$
Here $x$ is normalized to the bin spacing of the FFT. Each bin "samples" the sinc function. If you sample at integers, all values of the sinc function are zero except the one at $x = 0$. If you sample on a non-integer grid, you will get non-zero values at all frequency bins.
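The whole discussion can be reproduced in a few lines with NumPy (a sketch; bin magnitudes are scaled so a full-scale sine reads 1.0):

```python
import numpy as np

N = 64
n = np.arange(N)

def bin_magnitudes(cycles):
    """Magnitude of each FFT bin for a sine with the given cycles per window."""
    x = np.sin(2 * np.pi * cycles * n / N)
    return np.abs(np.fft.rfft(x)) * 2 / N

on_grid  = bin_magnitudes(5.0)  # frequency falls exactly on bin 5
off_grid = bin_magnitudes(5.5)  # frequency halfway between bins 5 and 6

others = np.delete(on_grid, 5)
print(on_grid[5])    # ~1.0: all the energy sits in a single bin
print(others.max())  # numerically zero: the sinc zeros land exactly on every other bin
print((off_grid > 0.01).sum())  # many bins light up: spectral leakage
```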
"domain": "dsp.stackexchange",
"id": 10707,
"tags": "fft, fourier-transform, frequency-spectrum, sampling, dft"
} |
Nuclear Salt Water Rockets: viability and follow-up | Question: This is the original paper by R. Zubrin proposing the Nuclear Salt Water Rocket design.
Basically the design is that a capillary set of pipes stores a uranium salt-water solution inside a cadmium matrix, which helps to keep it below criticality. The fluid simply flows at high speed into a combustion chamber where it reaches criticality, heats quickly, and is expelled through some unspecified nozzle.
The paper is from 1991, and I'm sure that the design has been revisited many times, so there must be good critiques and discussion of the problems that would have to be addressed, but I could not find any serious follow-up literature about the viability of the design, just, you know, the usual blog and mailing-list informal ranting that you can find from a simple Google search.
Have there been any simulations or attempts to evaluate the design in more detail in the literature that I'm not aware of? The original paper doesn't go into much discussion about the temperatures, which I presume are quite high, and most of the informal viability analyses I've read focus on that aspect.
Answer: Although the paper was published in the early 1990s, I could identify only two citations of it:
McNutt Jr., R.L., Andrews, G.B., McAdams, J., Gold, R.E., Santo, A., Oursler, D., Heeres, K., Fraeman, M. & Williams, B. 2003, Low-cost interstellar probe, Acta Astronautica, vol. 52, no. 2-6, pp. 267-279.
R.L. McNutt Jr., G.B. Andrews, J.V. McAdams, R.E. Gold, A.G. Santo, Douglas A. Ousler, K.J. Heeres, M.E. Fraeman, B.D. Williams, A Realistic Interstellar Probe, In: Klaus Scherer, Horst Fichtner, Hans Jörg Fahr and Eckart Marsch, Editor(s), COSPAR Colloquia Series, Pergamon, 2001, vol. 11, pp. 431-434.
There are issues with citation databases and search engines; they are not conclusive, I know. But just two follow-up papers/abstracts/proceedings in 20 years is usually a sign that there has not been any further investigation, at least in civilian public research.
"domain": "physics.stackexchange",
"id": 5932,
"tags": "nuclear-engineering, rocket-science"
} |
Inactivity Timeout | Question: I made this piece of code to detect inactivity on different aspects of my app. It is part of a set of pieces to analyze user behavior.
Not saying it's ugly, but I'm looking for good constructive criticism to help me improve my code, which could also be useful to me for other similar components.
function InactivityTimeout(idle_time, callback) {
this.state = 0; // 0-new, 1=active, 2=idle
this.idle_time = idle_time;
this.callback = callback;
this.start();
}
InactivityTimeout.prototype.start = function() {
this.state = 1;
this.timer = setTimeout(this.timeout.bind(this), this.idle_time);
}
InactivityTimeout.prototype.activity = function() {
if (this.state == 1) {
clearTimeout(this.timer);
}
this.start();
}
InactivityTimeout.prototype.timeout = function() {
this.state = 2;
this.callback();
}
/// usage
var timer=new InactivityTimeout(5000, function() {
alert("idle reached");
});
var el = document.getElementById('btn');
el.onclick = function() {
timer.activity();
}
<button id="btn">Foo</button>
Answer: You could pass in an object into the constructor instead of using arguments. That way, you don't have to mind order. Additionally, you could add in defaults:
function InactivityTimer(options){
// Defaults
this.defaults = {
timeout : 10
};
// Merge options into defaults. Let's just say you use jQuery.
this.options = $.extend({}, this.defaults, options);
}
Now you wouldn't want to make this code run all the time, so I suggest you add in a start as well as a stop. You could also add in an autostart in options so it starts as soon as an instance is created.
InactivityTimer.prototype.start = function(){
if(this.timer) return;
this.timer = setTimeout(function(){
...
},this.options.timeout);
}
InactivityTimer.prototype.stop = function(){
clearTimeout(this.timer);
this.timer = null;
}
You wouldn't want to hard-code all the code that runs when activity or inactivity occurs. I suggest you extend from an EventEmitter object. I have a simple implementation of such, which you could just plug in. Now you emit events in your code, and code outside can listen.
function InactivityTimer(options){
EventEmitter.call(this); // Inherit properties
...
}
InactivityTimer.prototype = new EventEmitter();
InactivityTimer.prototype.start = function(){
if(this.timer) return;
this.timer = setTimeout(function(){
this.emit('idle');
}.bind(this),this.options.timeout);
this.emit('start');
}
InactivityTimer.prototype.stop = function(){
clearTimeout(this.timer);
this.timer = null;
this.emit('stop');
}
var idleCheck = new InactivityTimer({...});
idleCheck.on('start',function(){
// timer started
});
idleCheck.on('stop',function(){
// timer stopped
});
idleCheck.on('idle',function(){
// user became idle
});
idleCheck.on('active',function(){
// user became active
});
Just as you listen for events instead of hard-coding the reaction code into InactivityTimer, you might also want to extract the code that listens for activity. Activity can be anything: the consumer of your code might consider only clicks, or perhaps mouse movement, etc. I suggest you add observe to listen for certain activity.
InactivityTimer.prototype.observe = function(observer){
var instance = this;
// Call the observer, passing it a callback that should be called when an event happens
observer.call(this, function(){
clearTimeout(instance.timer); // clear the timer
instance.emit('active'); // inform listeners that timer became active
instance.start(); // start another timer
});
};
// Usage
var timer = new InactivityTimer({...});
timer.observe(function(activate){
$('body').on('click',activate); // Observe body clicks
$(window).on('scroll',activate); // Observe scrolling
}); | {
"domain": "codereview.stackexchange",
"id": 11449,
"tags": "javascript, event-handling, timer, callback"
} |
Capacitor with multiple dielectrics | Question: Let's say there is a capacitor as shown in the figure, $K_1$, $K_2$, $K_3$, $K_4$ being the dielectric constant of each quadrant. (the dimensions of all four parts are equal.)
Let $C_1$, $C_2$, $C_3$, $C_4$ be the capacitance of the respective dielectrics, $A$ be the area of cross section of all four dielectrics and $d$ be the width of all the four capacitors. I am supposed to find the equivalent capacitance of this combination.
I was able to come up with two ways to solve this question:
Take $K_1$, $K_2$ in series and $K_3$, $K_4$ in series, add the two equivalent dielectrics in parallel.
Take $K_1$, $K_3$ in parallel and $K_2$, $K_4$ in parallel, add the two equivalent dielectrics in series.
Should I solve it by method 1 or 2, or are both correct, or is there some other method that I am not aware of?
Answer: Both approaches replace the original configuration with a supposedly-equivalent circuit network of four capacitors, but they differ in the way the capacitors are interconnected.
In an electrostatic field, we can introduce a conductor along surfaces of equal potential, and the electrostatic field does not change at all. In a parallel-plate capacitor (as usual, ignoring the field distortion at the plate edges) that means that at any distance, we can introduce a separating pseudo-plate and treat the capacitor as a series combination of the two parts.
On the other hand, we can cut the surface area of a capacitor and treat it as a parallel-combination of the parts.
Now we have to combine these two operations for our problem.
Your first approach first cuts the surface in two parts (one being K1 and K2, and the other K3 and K4), and then introduces two separating plates, one between K1 and K2, and one between K3 and K4. That's perfectly valid.
The second approach differs only in that you now connect the two separating plates, assuming that they have the same potential. That generally isn't true; it holds only in the special case K1/K2 = K3/K4.
Take K1, K2 in series and K3, K4 in series, add the two equivalent dielectrics in parallel.
This is the correct approach. | {
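A quick numerical check illustrates why the two methods differ. This is a sketch: the common factor ε₀A/d is set to 1, so each quadrant's capacitance is just its dielectric constant, and the K values are arbitrary:

```python
def series(a, b):
    """Capacitance of two capacitors in series."""
    return a * b / (a + b)

def method1(k1, k2, k3, k4):
    # series within each half-area column, then the columns in parallel
    return series(k1, k2) + series(k3, k4)

def method2(k1, k2, k3, k4):
    # parallel within each half-gap layer, then the layers in series
    return series(k1 + k3, k2 + k4)

# Generic K values: the two methods disagree
print(method1(1, 2, 3, 4))  # 2.3809...
print(method2(1, 2, 3, 4))  # 2.4

# Special case K1/K2 == K3/K4: the separating plates really are
# at equal potential, and the two methods agree
print(method1(1, 2, 2, 4))  # 2.0
print(method2(1, 2, 2, 4))  # 2.0
```

The disagreement for generic K values is exactly the point of the answer: connecting the mid-plates (method 2) changes the field unless their potentials already match.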
"domain": "physics.stackexchange",
"id": 79402,
"tags": "capacitance, dielectric"
} |
Date prediction - periodic recurrence | Question: I have some data regarding the occurrence of an event on certain dates, plus some other variables about it (e.g., I have data on which dates it rained, and additional data like temperature, atmospheric pressure, etc.). Which model is most appropriate for predicting on which day the event will happen again? To be more precise, I'd like to predict the frequency of the event, i.e. in how many days it is going to occur again. I mostly use Python with the numpy and sklearn libraries, and I'm interested in which of their models fits my use case best.
Thank you!
Answer: You should read this:
http://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/
and take a look at this:
http://statsmodels.sourceforge.net/0.6.0/generated/statsmodels.tsa.arima_model.ARIMA.html
The endog argument is your time series and exog is your other (exogenous) data.
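Before reaching for ARIMA, a useful baseline for "in how many days will it occur again" is the mean interval between past occurrences. A sketch with hypothetical rain dates (standard library only; the dates are made up for illustration):

```python
from datetime import date

def mean_interval_days(event_dates):
    """Average number of days between consecutive event dates."""
    ds = sorted(event_dates)
    gaps = [(b - a).days for a, b in zip(ds, ds[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical dates on which it rained
rain = [date(2016, 1, 1), date(2016, 1, 4), date(2016, 1, 10), date(2016, 1, 13)]
print(mean_interval_days(rain))  # 4.0 -> expect rain roughly every 4 days
```

Any fancier model (ARIMA with exogenous regressors, for instance) should at least beat this baseline to be worth the complexity.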
"domain": "datascience.stackexchange",
"id": 957,
"tags": "machine-learning, python, scikit-learn, prediction"
} |
Limit ROS traffic to specific network interface | Question:
I have a computer that is acting as a ROS master that has 5 network interfaces, 1 WiFi and 4 ethernet ports. I'd like to limit ROS traffic to just one of the ethernet ports so that I can leave the wifi on for SSH, but not have to set ROS_LOCALHOST_ONLY=1, that way I am able to e.g. run RVIZ on another computer over ethernet without touching the wifi. Is this possible?
I'm running ROS2 Humble on Ubuntu 22.04 and using Cyclone DDS, though I may also need to use FastRTPS instead/as well.
Originally posted by Barty on ROS Answers with karma: 25 on 2022-08-30
Post score: 1
Answer:
Both CycloneDDS and FastRTPS have ways to restrict the comms to a given network interface.
https://dds-demonstrators.readthedocs.io/en/latest/Teams/1.Hurricane/setupCycloneDDS.html
https://fast-dds.docs.eprosima.com/en/latest/fastdds/transport/whitelist.html
On Ubuntu I use a small bash function to hide this:
# restrict fastrtps / Cyclone DDS to this network interface
ros_restrict()
{
if [[ $# -eq 0 ]]; then
echo "ros_restrict: give a network interface"
return
fi
# auto-detect if basic name
local interface=$1
if [[ $1 == "WIFI" ]]; then
local interface=$(for dev in /sys/class/net/*; do [ -e "$dev"/wireless ] && echo ${dev##*/}; done)
fi
if [[ $1 == "ETH" ]]; then
local interface=$(ip link | awk -F: '$0 !~ "lo|vir|wl|^[^0-9]"{print $2;getline}')
local interface=$(for dev in $interface; do [ ! -e /sys/class/net/"$dev"/wireless ] && echo ${dev##*/}; done)
fi
if [[ $1 == "lo" ]]; then
export ROS_LOCALHOST_ONLY=1
unset ROS_DOMAIN_ID
unset FASTRTPS_DEFAULT_PROFILES_FILE
# https://answers.ros.org/question/365051/using-ros2-offline-ros_localhost_only1/
export CYCLONEDDS_URI='<General>
<NetworkInterfaceAddress>lo</NetworkInterfaceAddress>
<AllowMulticast>false</AllowMulticast>
</General>
<Discovery>
<ParticipantIndex>auto</ParticipantIndex>
<MaxAutoParticipantIndex>100</MaxAutoParticipantIndex>
<Peers>
<Peer address="localhost"/>
</Peers>
</Discovery>'
return
fi
# Fast-DDS https://fast-dds.docs.eprosima.com/en/latest/fastdds/transport/whitelist.html
# needs actual ip for this interface
ipinet="$(ip a s $interface | egrep -o 'inet [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
echo "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>
<profiles xmlns=\"http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles\">
<transport_descriptors>
<transport_descriptor>
<transport_id>CustomUDPTransport</transport_id>
<type>UDPv4</type>
<interfaceWhiteList>
<address>${ipinet##inet }</address>
</interfaceWhiteList>
</transport_descriptor>
<transport_descriptor>
<transport_id>CustomTcpTransport</transport_id>
<type>TCPv4</type>
<interfaceWhiteList>
<address>${ipinet##inet }</address>
</interfaceWhiteList>
</transport_descriptor>
</transport_descriptors>
<participant profile_name=\"CustomUDPTransportParticipant\">
<rtps>
<userTransports>
<transport_id>CustomUDPTransport</transport_id>
</userTransports>
</rtps>
</participant>
<participant profile_name=\"CustomTcpTransportParticipant\">
<rtps>
<userTransports>
<transport_id>CustomTcpTransport</transport_id>
</userTransports>
</rtps>
</participant>
</profiles>" > /tmp/fastrtps_interface_restriction.xml
# tell where to look
export FASTRTPS_DEFAULT_PROFILES_FILE=/tmp/fastrtps_interface_restriction.xml
# Cyclone DDS https://dds-demonstrators.readthedocs.io/en/latest/Teams/1.Hurricane/setupCycloneDDS.html
export CYCLONEDDS_URI="<General><NetworkInterfaceAddress>$interface"
# we probably do not want to limit to localhost
unset ROS_LOCALHOST_ONLY
}
Originally posted by Olivier Kermorgant with karma: 280 on 2022-08-31
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 37947,
"tags": "ros, ros2, network"
} |
How much is 1 electron-volt (eV)? | Question: I am interested in knowing how much energy one eV is. Everywhere I look I find only the technical definitions. Can anybody please tell me how much this energy is in terms of something I can feel? How much work can I do with 1 eV? Could I drive a 1000 cc car for 1 hour? Any example in the context of real-life usage would be interesting.
Answer: An electronvolt is just the energy acquired when an electron falls through a potential of 1 volt, which means
$$1\: {\rm eV} = e \times 1\:{\rm V} = 1.6 \times 10^{-19}\: {\rm J}$$
When you lift your $2.5\:{\rm kg}$ laptop (a 15-inch Apple MacBook Pro, for example) by a foot, you do work of approximately $2.5\: {\rm kg} \times 10\: {\rm m\,s^{-2}} \times 0.3 \: {\rm m} = 7.5 \:{\rm J}$ which is about $4.7 \times 10^{19}\:{\rm eV}$. So an $\rm eV$ is a really low energy scale by everyday standards.
One $\rm TeV$ (a teraelectronvolt) is about the energy of motion of a flying mosquito. | {
"domain": "physics.stackexchange",
"id": 2720,
"tags": "energy, everyday-life, units, unit-conversion"
} |