| anchor | positive | source |
|---|---|---|
Finding EMF of a galvanic cell without standard potentials | Question:
For the galvanic cell
$$\ce{Ag | AgCl(s), KCl (\pu{0.2 M}) || KBr (\pu{0.001 M}), AgBr(s) | Ag}$$
find the EMF generated given $K_\mathrm{sp}(\ce{AgCl}) = \pu{2.8e-10},$ $K_\mathrm{sp}(\ce{AgBr}) = \pu{3.3e-13}.$
This is the question from JEE exam (1992).
How to start solving the problem since the $E^\circ$ of individual half reactions is not given? How to write the $E^\circ$ for the cell without it? Or is it not needed?
Answer:
How to start solving the problem since the $E^\circ$ of individual half reactions is not given?
This is a concentration cell, i.e. the half reactions at the anode and at the cathode are the same (except for the direction).
AgCl(s) electrode
$$\ce{AgCl(s) <=> Ag+(aq) + Cl-(aq)}$$
$$\ce{Ag+(aq) + e- -> Ag(s)} $$
AgBr(s) electrode
$$\ce{Ag(s) -> Ag+(aq) + e-}$$
$$\ce{Ag+(aq) + Br-(aq) <=> AgBr(s)}$$
The standard reduction potentials will cancel out, i.e. $E^\circ (\mathrm{cell}) = 0$.
Further thoughts
[Comment by EdV] The Ag|AgCl electrodes I have are commercial, but making them in the lab is just a matter of oxidizing Ag wire in a chloride solution, so the electrode is Ag wire with an adherent coating of AgCl... I have never seen one of these made by just sticking an Ag wire in the Ag halide, but I guess it would work.
I added more to the answer prompted by that thoughtful comment.
[...my own comment] I pictured the silver electrode submerged in the solution, with the solid halide on the bottom. I am puzzled now too. Does it make a difference if the electrode touches the solid, the liquid, or both?
What are the actual reduction potentials?
$$\ce{AgCl(s) + e− <=> Ag(s) + Cl−}\ \ \ \ E^\circ_\mathrm{red} = \pu{+0.22233 V}\tag{1}$$
$$\ce{AgBr(s) + e− <=> Ag(s) + Br−}\ \ \ \ E^\circ_\mathrm{red} = \pu{ +0.07133 V}\tag{2}$$
$$\ce{Ag+ + e− <=> Ag(s)}\ \ \ \ E^\circ_\mathrm{red} = \pu{ +0.7996 V}\tag{3}$$
Are they related?
If you subtract (3) from (1), you get the dissolution reaction of AgCl, if you subtract (3) from (2), you get the dissolution reaction of AgBr. Thus, standard reduction potentials for (1) and (3) should be different by
$$ -\frac{RT}{zF} \ln K_\mathrm{sp}(\ce{AgCl})$$
and standard reduction potentials of (2) and (3) should be different by
$$ -\frac{RT}{zF} \ln K_\mathrm{sp}(\ce{AgBr})$$
Finally, standard reduction potentials (1) and (2) should be different by
$$ -\frac{RT}{zF} \ln \frac{K_\mathrm{sp}(\ce{AgBr})}{K_\mathrm{sp}(\ce{AgCl})}$$
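These three relations can be checked numerically against the quoted potentials. A quick sketch (assuming $T = \pu{298.15 K}$ and $z = 1$; the $E^\circ$ values are those in (1)-(3) above):

```python
import math

R, T, F = 8.314, 298.15, 96485.0          # J/(mol K), K, C/mol; z = 1
E1, E2, E3 = 0.22233, 0.07133, 0.7996     # standard reduction potentials, V

# Subtracting (3) from (1) gives the AgCl dissolution reaction,
# so E1 - E3 = (RT/F) ln Ksp(AgCl); likewise for AgBr.
Ksp_AgCl = math.exp((E1 - E3) * F / (R * T))
Ksp_AgBr = math.exp((E2 - E3) * F / (R * T))
print(Ksp_AgCl, Ksp_AgBr)                 # roughly 1.74e-10 and 4.89e-13

# And the (1)-(2) gap is fixed by the ratio of the solubility products.
assert abs((E1 - E2) + (R * T / F) * math.log(Ksp_AgBr / Ksp_AgCl)) < 1e-9
```

These are the derived solubility products used later in the answer.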
Numerical answer using half reactions (1) and (2)
$$\ce{AgCl(s) + Br-(aq) <=> AgBr(s) + Cl-(aq)}$$
$$Q = \frac{[\ce{Cl-}]}{[\ce{Br-}]} = 200$$
$$E_\mathrm{cell} = E^\circ_\mathrm{cell} - \frac{R T}{z F} \ln(Q)$$
$$= \pu{(0.22233 V− 0.07133 V) - 0.13612 V = 0.0149 V}$$
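The same arithmetic in a few lines of Python (a sketch; $T = \pu{298.15 K}$ assumed):

```python
import math

R, T, F, z = 8.314, 298.15, 96485.0, 1
E0_cell = 0.22233 - 0.07133        # difference of E° for half reactions (1) and (2), V
Q = 0.2 / 0.001                    # [Cl-]/[Br-] = 200
E_cell = E0_cell - (R * T) / (z * F) * math.log(Q)
print(round(E_cell, 4))  # 0.0149
```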
Numerical answer using half reaction (3) twice
$$\ce{Ag+(c) + Ag(b) <=> Ag(c) + Ag+(b)}$$
"c" stands for chloride side, and "b" stands for bromide side. For consistency, I am using the following values for the solubility products (derived from difference of standard reduction potentials of half reactions (1), (2) and (3)).
$$K_\mathrm{sp}(\ce{AgCl}) = \pu{1.74e−10}$$
$$K_\mathrm{sp}(\ce{AgBr}) = \pu{4.89e−13}$$
$$[\ce{Ag+}]_c = K_\mathrm{sp}(\ce{AgCl}) / [\ce{Cl-}] = \pu{8.70e−10} $$
$$[\ce{Ag+}]_b = K_\mathrm{sp}(\ce{AgBr}) / [\ce{Br-}] = \pu{4.89e−10} $$
$$ Q = \frac{[\ce{Ag+}]_b}{[\ce{Ag+}]_c} \approx 0.56 $$
$$E_\mathrm{cell} = E^\circ_\mathrm{cell} - \frac{R T}{z F} \ln(Q)$$
$$\pu{= 0 - (-0.0149 V) = 0.0149 V}$$ | {
"domain": "chemistry.stackexchange",
"id": 12324,
"tags": "physical-chemistry, electrochemistry, solubility"
} |
Velocity of approach which changes continuously | Question:
There are two particles A and B which are moving with constant speed $v$ and $u$ such that $v$ is always directed towards B. At $t = 0$, the separation between A and B is $l$ and $u$ is perpendicular to the line joining A and B. The velocity of B is constant, i.e., it does not change its direction. Also $v>u$. Find the time when they will collide.
Here's my approach
The angle which A makes with the horizontal changes continuously from $0$ to $\theta$ (say). At any time, the angle which $v$ makes with the horizontal will be a function of time. So $\alpha = f(t)$, where $\alpha$ lies between $0$ and $\theta$. Let the time when they collide be $t$ seconds.
So, $$\int_0^t v\cos(f(t)) dt = l$$
And
$$\int_0^t v\sin(f(t)) dt = ut$$
Now I am stuck what to do next
Answer: I think you have the right idea about the setup, but as long as you are fixated on angles and endpoints per se you will have trouble. What you actually want to write would seem to be the coupled equations
$$\dot x(t) = v\cos\alpha=v\frac{u t - x}{\sqrt{y^2 + (x-u t)^2}}\\
\dot y(t) = v\sin\alpha=-v\frac{y}{\sqrt{y^2 + (x-u t)^2}}
$$
And then maybe you can solve for this. So all I have done here is substitute in the definitions of sine and cosine for this triangle.
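As a sanity check on the pursuit dynamics (a sketch, not part of the original answer, with assumed sample values $v=2$, $u=1$, $l=1$), a simple Euler integration recovers the known closed-form collision time $t = lv/(v^2-u^2)$ for this classic problem:

```python
import math

# Assumed sample values with v > u; A starts a distance l from B,
# and B moves perpendicular to the initial line AB, as in the problem.
v, u, l = 2.0, 1.0, 1.0
x, y = 0.0, l                # A's position: a distance l "above" B
bx = 0.0                     # B's x coordinate; B moves along +x with speed u
t, dt = 0.0, 1e-5

# Euler integration: A's velocity has magnitude v and always points at B.
d = math.hypot(bx - x, 0.0 - y)
while d > 1e-3:
    dx, dy = bx - x, 0.0 - y
    x += v * dx / d * dt
    y += v * dy / d * dt
    bx += u * dt
    t += dt
    d = math.hypot(bx - x, 0.0 - y)

print(t)  # close to l*v/(v**2 - u**2) = 2/3 for these values
```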
That looks complicated, though, and I wonder if it would be much easier to consider a particle with variable velocity $v(t)$ traveling toward the origin, then transform this into the desired reference frame to just get an equation for $v(t)$ which makes the transformed velocity constant. | {
"domain": "physics.stackexchange",
"id": 57972,
"tags": "homework-and-exercises, kinematics, relative-motion"
} |
Reading strings into a vector, without using namespace std | Question: I have been told that using namespace std is a bad practice (especially in header files). But wouldn't this make the program less readable?
int main()
{
std::string text;
std::vector<std::string> svec;
while(std::cin >> text)
svec.push_back(text);
for(std::vector<std::string>::size_type i = 0; i < svec.size(); i++)
std::cout << svec[i] + " " << svec[i].size() << "\n";
return 0;
}
Does this not make the for loop difficult to read, even for a simple program as this? Is there a better way to do this?
Answer: I'll assume that you've read Why is using namespace std considered bad practice? so you understand the reasons that this can cause problems for your code. I'm not going to address that any further in this answer.
You have alternatives that reduce the risk, compared to using namespace std; at file scope:
Selectively import names you're going to use, in the scope you're going to use them:
#include <string>
#include <vector>
#include <iostream>
int main()
{
using std::string;
using std::vector;
string text;
vector<string> svec;
while (std::cin >> text)
svec.push_back(text);
for (vector<string>::size_type i = 0; i < svec.size(); i++)
std::cout << svec[i] + " " << svec[i].size() << "\n";
return 0;
}
This is an important technique when using free functions such as begin() and end() in generic code, where you want the std:: implementations to be available, but for argument-dependent lookup to prefer a more specific (local namespace) override if available.
Reduce the need to actually write typenames, with greater use of auto. For example, your output loop can be
for (const auto& element: svec)
std::cout << element << " " << element.size() << "\n";
Using auto more liberally can be very helpful when you decide that a method needs to be generic; you won't need to trawl through it updating the types to match the new signature.
As an aside, note that some namespaces (e.g. std::literals and its contained namespaces) are intended to be imported with a using namespace directive; you may still want to restrict their scope to just that part of your code that needs it, though - and never in a header file! | {
"domain": "codereview.stackexchange",
"id": 24646,
"tags": "c++, io, vectors, namespaces"
} |
What is topological in Kitaev Chain | Question: What is topological in the Kitaev chain? Real space, the space of Pauli spins, or the space of fermions?
My Understanding
I understand that Majorana zero modes, which are spatially separated, are protected in one phase. I am confused by the very notion of 'topology'! When talking about topology, people show a donut and a coffee mug, which are topologically equivalent. What equivalence or inequivalence are we talking about in the context of the Kitaev chain?
Answer: On very general grounds, the idea behind topological order is to classify a system by means of a topological, and as such non-local, invariant. The latter describes the bulk of your phase without needing to consider the system's boundaries. However, this rather abstract invariant acquires physical meaning when it comes to interfaces between topologically trivial and topologically nontrivial phases (which you indeed distinguish by means of this topological invariant). That's the bulk-boundary correspondence.
When it comes to the Kitaev chain, there are several ways to define a topological bulk invariant, whose value reveals whether the first and the last Majorana fermion of your chain (I'm assuming open boundary conditions) are left unpaired, so as to form a zero-energy fermionic mode.
If you are wondering how this is related to the notion of topology made clear by the usual coffee-mug, donut and pretzel arguments, just think of the phase in which the topological invariant has one of two possible values as a sphere, a frisbee or whatever object without handles, and of the other, topologically nontrivial, phase as a donut. As you can't mould a frisbee into a donut without hollowing it, you can't go from the non-trivial phase to the trivial phase (or vice versa) without going through a quantum phase transition, i.e. a critical point for which the energy spectrum of your many-body system becomes gapless. | {
"domain": "physics.stackexchange",
"id": 71878,
"tags": "condensed-matter, superconductivity, topology, topological-phase, majorana-fermions"
} |
CNOT Gate on Entangled Qubits | Question: I was trying to generate the Greenberger-Horne-Zeilinger (GHZ) state for $N$ qubits using quantum computing, starting with $|000...000\rangle$ ($N$ times)
The proposed solution is to first apply a Hadamard transformation on the first qubit, and then apply a loop of CNOT gates from the first qubit to each of the others.
I am unable to understand how I can perform CNOT($q_1,q_2$) if $q_1$ is a part of an entangled pair, like the Bell state $B_0$ which forms here after the Hadamard transformation.
I know how to write the code for it, but algebraically why is this method correct and how is it done? Thanks.
Answer:
I am unable to understand how I can perform CNOT($q_1,q_2$) if $q_1$
is a part of an entangled pair, like the Bell state $B_0$ which forms
here after the Hadamard transformation.
The key is to notice what happens to the computational basis states (or, for that matter, any other complete set of basis states) upon applying the relevant quantum gate(s). It doesn't matter whether the state is entangled or separable. This method always works.
Let's consider the $2$-qubit Bell state (of two qubits $A$ and $B$):
$$|\Psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$
$|\Psi\rangle$ is formed by an equal linear superposition of the computational basis states $|00\rangle$ & $|11\rangle$ (which can be expressed as $|0\rangle_A\otimes|0\rangle_B$ and $|1\rangle_A\otimes|1\rangle_B$ respectively). We need not worry about the other two computational basis states, $|01\rangle$ and $|10\rangle$, as they are not part of the Bell state superposition $|\Psi\rangle$. A CNOT gate basically flips (i.e. does either one of the two mappings $|0\rangle \mapsto |1\rangle$ or $|1\rangle\mapsto |0\rangle$) the state of the qubit $B$ in case the qubit $A$ is in the state $|1\rangle$, or else it does nothing at all.
So basically CNOT will keep the computational basis state $|00\rangle$ as it is. However, it will convert the computational basis state $|11\rangle$ to $|10\rangle$. From the action of CNOT on $|00\rangle$ and $|11\rangle$, you can deduce the action of CNOT on the superposition state $|\Psi\rangle$ now:
$$\operatorname{CNOT}|\Psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |10\rangle)$$
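This action can be verified with a small matrix calculation (a sketch in plain Python; basis ordered $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, with qubit $A$ as the control):

```python
import math

# Basis ordering |00>, |01>, |10>, |11>; qubit A (the first) is the control.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

s = 1 / math.sqrt(2)
bell = [s, 0, 0, s]  # |Psi> = (|00> + |11>)/sqrt(2)

# Matrix-vector product: apply CNOT to the state vector.
out = [sum(CNOT[i][j] * bell[j] for j in range(4)) for i in range(4)]
print(out)  # amplitudes of (|00> + |10>)/sqrt(2), i.e. [s, 0, s, 0]
```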
Edit:
You mention in the comments that you want one of the two qubits of the entangled state $|\Psi\rangle$ to act as control (and the NOT operation will be applied on a different qubit, say $C$, depending upon the control).
In that case too, you can proceed in a similar way as above.
Write down the $3$-qubit combined state:
$$|\Psi\rangle\otimes |0\rangle_C = \frac{1}{\sqrt{2}}(|0\rangle_A\otimes |0\rangle_B + |1\rangle_A\otimes|1\rangle_B)\otimes |0\rangle_C$$ $$= \frac{1}{\sqrt{2}}(|0\rangle_A\otimes |0\rangle_B\otimes |0\rangle_C+ |1\rangle_A\otimes|1\rangle_B\otimes|0\rangle_C)$$
Let's say $B$ is your control qubit.
Once again we will simply check the action of the CNOT on the computational basis states (for a 3-qubit system), i.e. $|000\rangle$ & $|110\rangle$. In the computational basis state $|000\rangle = |0\rangle_A\otimes|0\rangle_B\otimes|0\rangle_C$ notice that the state of the qubit $B$ is $|0\rangle$ and that of qubit $C$ is $|0\rangle$. Since qubit $B$ is in state $|0\rangle$, the state of qubit $C$ will not be flipped. However, notice that in the computational basis state $|110\rangle = |1\rangle_A\otimes|1\rangle_B\otimes|0\rangle_C$ the qubit $B$ is in state $|1\rangle$ while qubit $C$ is in state $|0\rangle$. Since the qubit $B$ is in state $|1\rangle$, the state of the qubit $C$ will be flipped to $|1\rangle$.
Thus, you end up with the state:
$$\frac{1}{\sqrt{2}}(|0\rangle_A\otimes|0\rangle_B\otimes|0\rangle_C + |1\rangle_A\otimes|1\rangle_B\otimes|1\rangle_C)$$
This is the Greenberger–Horne–Zeilinger state for your $3$ qubits! | {
"domain": "quantumcomputing.stackexchange",
"id": 288,
"tags": "quantum-gate, entanglement, quantum-state"
} |
Photons and EM Fields | Question: I started learning the basic ideas of QFT in an intuitive manner (without any math, only with mental videos and pictures) some days ago, and I'm finding it completely beautiful [the idea of forgetting the notion of particles is so alleviating].
But I have a little doubt and I wondered if you guys could help me figure it out.
EM Fields
Perturbations in the electron field [what we can picture as an electron particle], if they are "standing still" in space, interact with (generate and get influenced by) an EM field that has only the E component.
Furthermore, perturbations in the electron field that are "moving" with uniform velocity in space interact with (generate and get influenced by) an EM field that has only the B component.
Finally, perturbations in the electron field that are "accelerating" in space interact with (generate and get influenced by) an EM field that has both the E as well as the B component.
Photons ?
In this last case, where our charge is accelerating, the idea of photons becomes really clear: they become the discrete quanta that make up in energy the totality of the interacting EM field, and the energy of each chunk is related to the acceleration of the charge [the frequency spectrum of the propagated EM wave]. Each of those quanta that make up the EM field can collapse at any given time.
But I'm really in doubt about the relation between photons (quanta of the EM field) and the case where the EM field has only the E component (or only the B component), namely, the case where our perturbation in the electron field is in inertial motion in space.
Are photons again the quanta of those EM fields (that happen to have only the E component, or only the B component), or is the idea of quanta of the EM field, namely photons, intrinsically related to the idea of accelerating charges?
Sorry for writing this question in a really layman style and probably ignoring some important aspects; it's just that I don't yet know anything close to the math needed to formalize it. Furthermore, feel free to correct me if I wrote anything that is just intrinsically wrong (even on the intuitive level).
Thanks in advance
Answer: Yes, photons are the quanta of the electromagnetic field.
In this picture, two charges create a disturbance in the EM field (which can also be called the photon field). A virtual photon mediates their repulsion. There is more information here. | {
"domain": "physics.stackexchange",
"id": 21401,
"tags": "quantum-field-theory"
} |
Encode and Decode TinyURL | Question: The task
is taken from leetcode
TinyURL is a URL shortening service where you enter a URL such as
https://leetcode.com/problems/design-tinyurl and it returns a short
URL such as http://tinyurl.com/4e9iAk.
Design the encode and decode methods for the TinyURL service. There is
no restriction on how your encode/decode algorithm should work. You
just need to ensure that a URL can be encoded to a tiny URL and the
tiny URL can be decoded to the original URL.
My solution
const url = {};
const ALPHANUMERIC = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
const PREFIX = 'http://tinyurl.com/';
/**
* Encodes a URL to a shortened URL.
*
* @param {string} longUrl
* @return {string}
*/
function encode(longUrl) {
let id = createIdOfSize(6);
while(url[id]) {
id = createIdOfSize(6);
}
url[id] = longUrl;
return PREFIX + id;
};
function createIdOfSize(n) {
let i = n;
let id = '';
while(i-- > 0) {
id += ALPHANUMERIC[(Math.random() * ALPHANUMERIC.length) | 0];
}
return id;
};
/**
* Decodes a shortened URL to its original URL.
*
* @param {string} shortUrl
* @return {string}
*/
function decode(shortUrl) {
return url[shortUrl.split(PREFIX)[1]];
};
/**
* Your functions will be called as such:
* decode(encode(url));
*/
Answer: Review
On reading the problem I thought it needed to be a lossless compression algorithm to generate a unique id.
The most basic compression is run-length encoding on the bits that make up the url, or a dictionary compression on bit patterns such as Lempel-Ziv-Welch compression, AKA LZW or LZ compression. However, there is some storage overhead, and for short URLs you will have trouble getting a smaller url than the original.
Random id?
Your solution stores the original url mapped to a random index. There is no need for compression or even a hash, as all you need is a unique id per URL.
To generate a unique id you need only the number line. A JS Number can generate 9e15 unique ids; that's a new id every millisecond for the next quarter million years (twice that if you include negatives).
You can also compress the id by converting it to base 36 (the max base for Number.toString(radix)); thus the longest encoding is "2gosa7pa2gv" and the shortest is "0", and you need to store only one 64-bit double for the short url.
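In JS that round trip is just id.toString(36) and parseInt(s, 36); for illustration, the same conversions written out in Python (a sketch):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base36(n):
    # Mirrors JavaScript's Number.prototype.toString(36) for non-negative integers.
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def from_base36(s):
    # Mirrors JavaScript's parseInt(s, 36).
    return int(s, 36)

print(to_base36(9 * 10**15))  # an 11-character id even at the top of the range
```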
See Example
Bug?
Your function returns a different short url for the same long url.
Example
Simple index encoding, there is room for lots of optimization in storage.
Produces safe urls using only a-z0-9 to encode.
There is a limit as it does rely on the hashing function (Chance of a clash) of the JS engine.
Checks if a url has already been encoded
You can avoid the JS hash functions (and a possible clash) by using an array, or a set of arrays if the number of urls is high, but that comes with a huge CPU cost: locating a URL when encoding will require a linear search of the stored urls if the data set is big. Decoding will still be fast, as the encoded url contains the index of the original url.
const PREFIX = 'http://tinyurl.com/';
const urls = {
short: new Map(), // can be array use id to index
long: new Map(), // can be array and use find rather than has and get
value: 0,
get id() { return urls.value++ },
};
function encode(url) {
if (urls.long.has(url)) { return PREFIX + urls.long.get(url).id.toString(36) }
const urlEntry = { url, id: urls.id };
urls.long.set(url, urlEntry);
urls.short.set(urlEntry.id, urlEntry);
return PREFIX + urlEntry.id.toString(36);
}
function decode(url) {
return urls.short.get(parseInt(url.split(PREFIX)[1], 36)).url;
} | {
"domain": "codereview.stackexchange",
"id": 34657,
"tags": "javascript, algorithm, programming-challenge, ecmascript-6"
} |
How to make doubly linked list design more efficient? | Question: I have played recently and implemented doubly linked list with merge sort algorithm. I was wondering if there is any way of improving the algorithm further to make it as fast as possible (I know about implementing list as array of void* pointers but I want to implement that as a separate structure).
I've gone from something like the merge sort in Learn C the Hard Way and added a list_split function to split the list in half, and improved merging so it only links lists together without calling the list_push/list_pop procedures, which uselessly call calloc and free. That way I was able to get from ~3.3 s to 0.9 s needed for sorting 1,000,000 integers in the list.
Also any comments about style and hints for possible bugs are welcome.
src/dllist.c:
#include <stdlib.h>
#include <assert.h>
#include "dllist.h"
struct node {
struct node *next;
struct node *prev;
void *data;
};
struct list {
unsigned int count;
struct node *first;
struct node *last;
};
List list_create()
{
List l;
l = calloc(1, sizeof(*l));
assert(l);
return l;
}
void list_destroy(List l)
{
free(l);
}
static inline void list_bond(struct node *first, struct node *second)
{
first->next = second;
second->prev = first;
}
void list_push_first(List l, void *data)
{
struct node *n;
n = calloc(1, sizeof(*n));
assert(n);
n->data = data;
if(l->count > 1) {
list_bond(n,l->first);
l->first = n;
} else {
l->first = n;
l->last = n;
}
l->count++;
}
void list_push_last(List l, void *data)
{
struct node *n;
n = calloc(1, sizeof(*n));
assert(n);
n->data = data;
if(l->count > 1) {
list_bond(l->last, n);
l->last = n;
} else {
l->first = n;
l->last = n;
}
l->count++;
}
/* if count == 0 return NULL! */
static struct node *list_find_nth(List l, unsigned int c)
{
struct node *tmp;
unsigned int i;
assert(c <= l->count+1);
if(l->count == 0)
return NULL;
if(c<=l->count/2) {
tmp = l->first;
for(i = 1; i<c; i++) {
tmp = tmp->next;
}
} else {
tmp = l->last;
for(i = l->count; i>=c; i--) {
tmp = tmp->prev;
}
}
return tmp;
}
void list_append_nth(List l, void *data, unsigned int c)
{
struct node *n;
struct node *tmp;
n = calloc(1, sizeof(*n));
assert(n);
n->data = data;
assert(c <= l->count+1);
tmp = list_find_nth(l, c);
if(tmp != NULL) {
n->prev = tmp;
n->next = tmp->next;
tmp->next = n;
if(n->next != NULL)
n->next->prev = n;
} else {
l->first = n;
l->last = n;
}
l->count++;
}
void *list_pop_first(List l)
{
void *data = NULL;
struct node *tmp;
if(l->count != 0) {
tmp = l->first;
data = tmp->data;
if (l->count == 1) {
l->first = NULL;
l->last = NULL;
} else {
l->first = l->first->next;
l->first->prev = NULL;
}
free(tmp);
l->count--;
}
return data;
}
void *list_pop_last(List l)
{
void *data = NULL;
struct node *tmp;
if(l->count != 0) {
tmp = l->last;
data = tmp->data;
if(l->count > 1) {
l->last = l->last->prev;
l->last->next = NULL;
} else {
l->last = NULL;
l->first = NULL;
}
free(tmp);
l->count--;
}
return data;
}
void *list_pop_nth(List l, unsigned int c)
{
void *data = NULL;
struct node *tmp;
if(l->count != 0) {
tmp = list_find_nth(l, c);
data = tmp->data;
if(l->count > 1) {
tmp->prev->next = tmp->next;
tmp->next->prev = tmp->prev;
} else {
l->last = NULL;
l->first = NULL;
}
free(tmp);
l->count--;
}
return data;
}
/* split lists in half */
static void list_split(List left, List right)
{
struct node *n;
if(left->count > 2) {
n = list_find_nth(left, left->count/2);
right->first = n->next;
right->last = left->last;
right->count = left->count - left->count/2;
left->count = left->count/2;
left->last = n;
n->next->prev = NULL;
n->next = NULL;
} else {
right->last = left->last;
right->first = left->last;
left->last = left->first;
left->last->next = NULL;
right->first->prev = NULL;
left->count = 1;
right->count = 1;
}
}
/* print data in list */
void list_print(List l, print f)
{
unsigned int i;
struct node *tmp;
if(l->count != 0) {
tmp = l->first;
for(i = 0;i<l->count;i++)
{
f(tmp->data);
tmp = tmp->next;
}
}
}
/*
* DO NOT DELETE (yet)
* for debugging info about list
*/
void list_debug(List l, print f)
{
struct node *tmp;
int i;
i = 0;
tmp = l->first;
printf("List: count %d, first %d, last %d\n", l->count, l->first, l->last);
while (tmp!= NULL) {
printf("%d. node. Data: ",i);
f(tmp->data);
printf("at address %d. Next: %d Prev: %d \n",tmp,tmp->next,tmp->prev);
tmp = tmp->next;
i++;
}
printf("\n");
}
static inline List list_merge(List left, List right, List_compare cmp)
{
struct node *tmp_right;
struct node *tmp_left;
List result;
void *val = NULL;
result = list_create();
/* get first element (smallest or biggest debending on cmp) of two lists and add it to result */
if (cmp(left->first->data, right->first->data) <= 0) {
result->first = left->first;
result->last = left->first;
tmp_right = result->first->next;
tmp_left = right->first;
} else {
result->first = right->first;
result->last = right->first;
tmp_right = result->first->next;
tmp_left = left->first;
}
/* Run through left and right lists to append another nodes to result till one of them is empty */
while(tmp_right != NULL && tmp_left != NULL) {
if(cmp(tmp_right->data, tmp_left->data) <= 0) {
list_bond(result->last, tmp_right);
result->last = tmp_right;
tmp_right = tmp_right->next;
} else {
list_bond(result->last, tmp_left);
result->last = tmp_left;
tmp_left = tmp_left->next;
}
}
/* append the remaining nodes of one of the lists */
if(tmp_right == NULL) {
list_bond(result->last, tmp_left);
result->last = left->last;
}
if(tmp_left == NULL) {
list_bond(result->last, tmp_right);
result->last = right->last;
}
result->count = left->count +right->count;
list_destroy(left);
list_destroy(right);
return result;
}
List list_merge_sort(List left, List_compare cmp)
{
List right, sort_left, sort_right;
if(left->count <= 1) {
return left;
}
right = list_create();
list_split(left,right);
sort_left = list_merge_sort(left, cmp);
sort_right = list_merge_sort(right, cmp);
return list_merge(sort_left, sort_right, cmp);
}
src/dllist.h
#ifndef dllist_h
#define dllist_h
/* basic list type */
typedef struct list *List;
typedef int (*List_compare)(const void *a, const void *b);
typedef void (print)(void *);
/* function prototypes generated by cproto */
#include "dllist.p"
#endif
src/test_list.c
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include <values.h>
#include <time.h>
#include "dllist.h"
#define TEST_SIZE (1000000)
int compare(const void *a, const void *b)
{
return (*(const int *)a > *(const int *)b);
}
void printnum(void *a)
{
printf("%d ",*(int *)a);
}
int
main(int argc, char **argv)
{
List l;
int *data;
int i;
int a;
int *r;
time_t t;
(void)argc;
(void)argv;
srand((unsigned) time(&t));
data = calloc(TEST_SIZE, sizeof(*data));
a = rand()%1000;
for(i = 0;i<TEST_SIZE;i++)
{
data[i] = rand()%1000;
}
l = list_create();
for(i = TEST_SIZE/2-1;i>=0;i--)
{
list_push_first(l,&data[i]);
}
for(i = TEST_SIZE/2;i<TEST_SIZE;i++)
{
list_push_last(l,&data[i]);
}
l = list_merge_sort(l, &compare);
for(i = 0;i<TEST_SIZE;i++)
{
r = list_pop_first(l);
}
list_destroy(l);
free(data);
return 0;
}
src/Makefile
# variables. SOURCES is exported from makefile one folder up and so is DEBUG
OBJECTS=$(SOURCES:.c=.o)
PROTO=$(SOURCES:.c=.p)
STATPROTO=$(SOURCES:.c=.sp)
DEPS=$(SOURCES:.c=.d)
# Generate and include dependencies:
all: $(DEPS) $(OBJECTS)
%.o: %.c %.d
$(CC) $(WARNINGS) $(DEBUG) $(COMPILER) -c -o $@ $<
%.d: %.c
$(CC) -MM -MG -MF $@ $<
# Generate function prototypes
%.p: %.c
cproto $< > $@
%.sp: %.c
cproto -S $< > $@
.PHONY: clean
clean:
rm -f $(OBJECTS) $(DEPS) $(PROTO) $(STATPROTO)
-include $(DEPS)
and Makefile
# IMPORTANT variables
PROGRAMNAME=Game
SOURCES=test_list.c dllist.c
#locations
SRCDIR= src
OUTDIR= bin
TESTDIR=tests
OBJECTS= $(addprefix $(SRCDIR)/, $(SOURCES:.c=.o))
PROGRAM= $(addprefix $(OUTDIR)/, $(PROGRAMNAME))
WARNINGS= -W -Wall -ansi -Wextra -pedantic -Wstrict-overflow=5 -Wshadow -Wpointer-arith -Wcast-qual -Wstrict-prototypes # turn on all possible warnings
COMPILER= -std=gnu89 -s -Os -Ofast -ffunction-sections -fdata-sections # strip, optimize for size and for performance. After that place all functions and data to separate sections
LINKER= -Wl,-Map=$(PROGRAM).map,--cref,--gc-section -Wl,--build-id=none # and with linker delete unneeded ones
DEBUG= -g
export SOURCES
export DEBUG
export COMPILER
export WARNINGS
all: $(PROGRAM)
.PHONY: objects clean
objects:
$(MAKE) -C $(SRCDIR) all
$(PROGRAM): objects
cc -o $@ $(OBJECTS) $(DEBUG) $(LINKER)
strip -S --strip-all --remove-section=.note.gnu.gold-version --remove-section=.comment --remove-section=.note --remove-section=.note.gnu.build-id --remove-section=.note.ABI-tag $@
clean:
$(MAKE) -C $(SRCDIR) clean
$(MAKE) -C $(TESTDIR) clean
rm -f $(PROGRAM) $(PROGRAM).map
Answer: Once you get under a second for whole-process execution, it's better to start timing inside the program using an accurate timestamp function. This keeps the C runtime initialization and printing from dominating the measurement.
One suggestion I had was keeping a struct node *freelist so you can easily grab a new node by doing
struct node *newNode = freelist;
freelist = freelist->next;
And when you free a node you can add it to the free list by doing:
oldNode->next = freelist;
freelist = oldNode;
This avoids having to call calloc and free over and over.
If you destroy a non-empty list you leak all nodes still in the list. Similarly, the right list in list_split just gets its values overwritten. | {
"domain": "codereview.stackexchange",
"id": 19659,
"tags": "performance, c, linked-list, mergesort"
} |
Proof of $f(n) + ο(f(n)) = \Theta(f(n))$ | Question: Can you please help me prove this? I am trying to set $o(f(n))= g(n)$ and then try to solve the equation $f(n) + g(n) = \Theta(f(n))$ but I don't know if it is the correct way, and if it is I don't know how to continue my solution. Thank you
Answer: Obviously, $f(n)+o(f(n))=\Omega(f(n))$ (clearly, I'm assuming all functions are positive), so you only need to prove that $f(n)+o(f(n))=O(f(n))$. But a function in $o(f(n))$ is eventually smaller than $f(n)$, so for sufficiently large $n$ you have $f(n)+o(f(n))\leq 2f(n)=O(f(n))$.
Actually, in the same way you can easily prove also this stronger claim: $f(n)+O(f(n))=\Theta(f(n))$.
If $h(n)=o(f(n))$, then $\lim_{n\to+\infty} \frac{h(n)}{f(n)}=0$, so there exists $n_0$ such that for all $n\geq n_0$ you have $h(n)\leq f(n)$ (actually, we can prove a stronger statement, namely that for every $\varepsilon>0$ we have $h(n)< \varepsilon f(n)$ for sufficiently large $n$, but here we do not need it). This means that for all $n\geq n_0$ you also have $f(n)+h(n)\leq 2f(n)$, and then $f(n)+h(n)=O(f(n))$.
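A quick numerical illustration of the bounded-ratio argument (a sketch with the assumed concrete instances $f(n)=n^2$ and $h(n)=n$, so that $h(n)=o(f(n))$):

```python
# Concrete instances: f grows like n^2, h like n, so h(n) = o(f(n)).
f = lambda n: n * n
h = lambda n: n

# For n >= 1 the ratio (f + h)/f is squeezed between 1 and 2,
# which is exactly the Theta(f(n)) sandwich from the proof above.
for n in (10, 1000, 10**6):
    print(n, (f(n) + h(n)) / f(n))  # the ratio tends to 1 as n grows
```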
If you don't want to explicitly assume positiveness of the functions involved, I suggest using limits (I like limits).
Remember that $f(n)=O(g(n))$ means
$$
\limsup_{n\to+\infty} \frac{|f(n)|}{g(n)}<+\infty
$$
while
$f(n)=\Omega(g(n))$ means
$$
\liminf_{n\to+\infty} \frac{|f(n)|}{g(n)}>0
$$
(use $|g(n)|$ instead of $g(n)$ if you want to take into account also negative functions). Notice that I'm not assuming that $\lim_{n\to+\infty} \frac{f(n)}{g(n)}$ exists, indeed $\liminf$ and $\limsup$ are used to deal with this case.
Now, as above, let $h(n)$ be an $o(f(n))$, e.g. $\lim_{n\to+\infty} \frac{h(n)}{f(n)}=0$.
Finally
$$
\limsup_{n\to+\infty} \frac{|f(n)+h(n)|}{|f(n)|}\leq\limsup_{n\to+\infty} \frac{|f(n)|+|h(n)|}{|f(n)|}=\limsup_{n\to+\infty}\left( 1+\frac{|h(n)|}{|f(n)|}\right)=1<\infty
$$
while
$$
\liminf_{n\to+\infty} \frac{|f(n)+h(n)|}{|f(n)|}=\liminf_{n\to+\infty} \left|1+\frac{h(n)}{f(n)}\right|\geq\liminf_{n\to+\infty}\left(1-\frac{|h(n)|}{|f(n)|}\right)=1>0.
$$
Also observe that in the first limit we only need $\limsup_{n\to+\infty} \frac{|h(n)|}{|f(n)|}$ to be finite, so $h(n)$ can also be an $O(f(n))$. | {
"domain": "cs.stackexchange",
"id": 18186,
"tags": "algorithms"
} |
Ways to convert textual data to numerical data | Question: I've been looking for ways to wrangle my data which contains both text and numerical attributes.
There are of course several algorithms for numerical data, but I am looking for suggestions regarding how to deal with textual data, for instance: for sorting based on K-means clustering and predicting missing data using kNN. I would really appreciate any thoughts regarding that. I am using scikit-learn.
Answer: Do you really mean a textual attribute, or just a categorical attribute? E.g. if an attribute has three values $a$, $b$ and $c$, that does not mean you need to work with text, only with categories. Here I assume you really have a textual attribute, e.g.
| attr1: Age | attr2: msg |
| ------------- |---------------------------------------|
| 45 | I do NLP |
| 21 | I do math |
| 34 | I am a mathematician who does NLP |
In this case you have 2 options: either go the classic way with a Bag of Words representation of the text data, counting the frequency of terms (words/bi-grams/tri-grams/etc.) and seeing how that works, or take a shortcut if you have a specific corpus with specific info to be extracted, e.g. in the example above a "Keyword Extraction" after a stemming step will give you a vocabulary of fields, in each of which a person can get a value ($0,1$).
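The bag-of-words option above can be sketched in a few lines of plain Python (scikit-learn's `CountVectorizer` automates exactly this kind of counting; the toy messages below are the ones from the table):

```python
from collections import Counter

# Minimal bag-of-words sketch: each message becomes a numeric
# term-frequency vector over a shared vocabulary.
msgs = ["I do NLP", "I do math", "I am a mathematician who does NLP"]

# Build a fixed vocabulary from all messages (lowercased unigrams).
vocab = sorted({w.lower() for m in msgs for w in m.split()})

def to_vector(msg):
    counts = Counter(w.lower() for w in msg.split())
    return [counts[w] for w in vocab]

vectors = [to_vector(m) for m in msgs]
print(vocab)
print(vectors)
```

These vectors are plain numbers, so they can be fed directly to numerical algorithms such as k-means or kNN.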
For k-means, one should keep in mind that it works with numerical data. So if you want to use it, you first need to embed your text into an n-dimensional space using text embedding algorithms (word2vec, doc2vec, etc.) or even using term frequencies (as they are numbers). The problem with text embeddings is that unless you want to train your own neural network (which needs a decent amount of data), you have to use pre-trained NNs, which might not work well if your corpus is about a very specific kind of text.
The other point is that the kNN algorithm is supervised while k-means is unsupervised. So be careful to understand the underlying concepts properly, otherwise you will end up with improper results.
"domain": "datascience.stackexchange",
"id": 2532,
"tags": "scikit-learn, clustering, text-mining"
} |
Service server and client in python following tutorial does not work | Question:
I'm trying to make a service that I could call to change a pose of a robotic arm using MoveIt!. I follow this tutorial, but I get an error:
Traceback (most recent call last):
File "/home/linux/catkin_ws/src/robotic_arm_algorithms/scripts/set_joint_states_client.py", line 24, in <module>
set_joint_states(joint_states)
File "/home/linux/catkin_ws/src/robotic_arm_algorithms/scripts/set_joint_states_client.py", line 17, in set_joint_states
resp = set_joint_states(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in __call__
return self.call(*args, **kwds)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 495, in call
service_uri = self._get_service_uri(request)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 444, in _get_service_uri
raise TypeError("request object is not a valid request message instance")
TypeError: request object is not a valid request message instance
This seems to be a problem purely in the client code, so I show the client code and the service definition below. I have defined the service as 4 float numbers, each with a specific name. So when I pass the client four numbers (via terminal arguments), I expect to be able to simply assign those values to the service message as shown below. But something is not right. Can you point out the mistake for me?
srv/SetJointStates.srv:
std_msgs/Float32 forearm_0
std_msgs/Float32 forearm_1
std_msgs/Float32 arm_0
std_msgs/Float32 arm_1
---
script/set_joint_states_client.py:
#!/usr/bin/env python
import sys
import rospy
from robotic_arm_algorithms.srv import SetJointStates
def set_joint_states(joint_states):
rospy.wait_for_service('set_joint_states')
try:
set_joint_states = rospy.ServiceProxy('set_joint_states', SetJointStates)
msg = SetJointStates()
msg.forearm_0 = joint_states[0]
msg.forearm_1 = joint_states[1]
msg.arm_0 = joint_states[2]
msg.arm_1 = joint_states[3]
resp = set_joint_states(msg)
except rospy.ServiceException, e:
print "Service call failed: %s"%e
if __name__ == "__main__":
if len(sys.argv) == 5:
joint_states = [float(sys.argv[1]), float(sys.argv[2]), float(sys.argv[3]), float(sys.argv[4])]
set_joint_states(joint_states)
else:
print("not enough arguments. Four arguments required: forearm 0, forearm 1, arm 0, arm 1")
sys.exit(1)
EDIT 1
When I put print (dir(SetJointStates)) into set_joint_states method of the client source code, following strings prints on terminal:
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_md5sum', '_request_class', '_response_class', '_type']
Originally posted by kump on ROS Answers with karma: 308 on 2019-04-29
Post score: 1
Answer:
[..]
from robotic_arm_algorithms.srv import SetJointStates
def set_joint_states(joint_states):
rospy.wait_for_service('set_joint_states')
try:
set_joint_states = rospy.ServiceProxy('set_joint_states', SetJointStates)
msg = SetJointStates()
[..]
Can you point out the mistake for me?
Services have a Request and a Response type. SetJointStates is most likely a module containing both the Request and Response types, but isn't a service request or response itself (note how the tutorial you link to imports *, not AddTwoInts).
If you'd do a print (dir(SetJointStates)) we should be able to verify that.
In any case: you'll probably want to use the SetJointStatesRequest, or just use the ServiceProxy directly.
Another problem with the code is that you've named your ServiceProxy variable set_joint_states, while you already have a method called set_joint_states. That is not going to work.
Edit:
When I put print (dir(SetJointStates)) into set_joint_states method of the client source code, following strings prints on terminal:
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_md5sum', '_request_class', '_response_class', '_type']
As you see, there is a _request_class and a _response_class field present.
That would seem to confirm what I wrote above (i.e.: SetJointStates is not a request or a response class itself, but the composite one encapsulating both).
As I wrote in my earlier answer, you'll want to use the SetJointStatesRequest class instead. Be sure to import that.
Edit 2: we should try to avoid duplicating these Q&As but I can't find one quickly, so here is an example of some code that will probably work:
[..]
from robotic_arm_algorithms.srv import *
def set_joint_states(joint_states):
rospy.wait_for_service('set_joint_states')
try:
# bind our local service proxy
svc_set_joint_states = rospy.ServiceProxy('set_joint_states', SetJointStates)
# create the service request
req = SetJointStatesRequest()
# populate the fields of the request
req.forearm_0 = joint_states[0]
req.forearm_1 = joint_states[1]
req.arm_0 = joint_states[2]
req.arm_1 = joint_states[3]
# invoke the service, passing it the request
resp = svc_set_joint_states(req)
# 'resp' is here of type 'SetJointStatesResponse'
# perhaps do something with the response
except rospy.ServiceException, e:
print "Service call failed: %s"%e
[..]
Note that in addition to using the Request class, I've also renamed the service proxy variable to svc_set_joint_states to avoid shadowing your set_joint_states(..) method.
Originally posted by gvdhoorn with karma: 86574 on 2019-04-29
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by kump on 2019-04-30:
@gvdhoorn I added the output of print (dir(SetJointStates)) into the question. I don't know what is it telling me.
Comment by kump on 2019-04-30:
@gvdhoorn Could you level down your explanation? I don't understand what are you saying. What is SetJointStatesRequest? There is no mention about such class on the internet. What am I supposed to do with it?
Comment by gvdhoorn on 2019-04-30:
What is SetJointStatesRequest? There is no mention about such class on the internet.
SetJointStates is the name of your service. You defined it yourself in srv/SetJointStates.srv.
See wiki/rospy/Overview/Services: Service definitions, request messages, and response messages for more info on how rospy generates the respective classes for your .srv. | {
"domain": "robotics.stackexchange",
"id": 32954,
"tags": "ros-kinetic, rosservice"
} |
What happens if we throw the observer in a black hole? | Question: Sorry if this sounds like a silly question, but what would happen if a scientist observes Schrodinger's cat alive, but is then thrown into a black hole before he has leaked any information to the environment?
Then later a second scientist observes the cat: can he observe it as dead this time, since the first scientist and his information can't leave the black hole to contradict him?
That is, is a wavefunction allowed to show two different faces to two observers so long as they are never allowed to compare notes because of an event horizon? Is exiling information behind an event horizon the same as erasing it?
Answer: The two measurement results must be consistent, so the second scientist will see the cat alive. It doesn't matter whether the first scientist's knowledge leaks into the environment or not, only that it exists.
Tossing the first scientist into a black hole makes their knowledge inaccessible, but doesn't erase it or undo the measurement. | {
"domain": "physics.stackexchange",
"id": 98658,
"tags": "black-holes, quantum-information, observers, event-horizon, quantum-measurements"
} |
Difference between pressure suits and anti-g suits | Question: This question might come across as a newbie one, but are pressure suits and anti-g suits two names for the same thing? Google didn't turn up differences between them specifically (if there are any), and Wikipedia has separate pages for both of them. I found an old patent paper which says it combines both pressure suits and anti-g suits, though I couldn't glean much from it. Their working and effects seemed similar, but I couldn't find any confirmation whether they are different. So are they different or not?
Answer: A pressure suit is a protective suit worn by high-altitude pilots who may fly at altitudes where the air pressure is too low for an unprotected person to survive, even breathing pure oxygen at positive pressure. Such suits may be either full-pressure (i.e. a space suit) or partial-pressure (as used by aircrew). Partial-pressure suits work by providing mechanical counter-pressure to assist breathing at altitude.
A g-suit, or anti-g suit, is a flight suit worn by aviators and astronauts who are subject to high levels of acceleration force (g). It is designed to prevent a black-out and g-LOC (g-induced loss of consciousness) caused by the blood pooling in the lower part of the body when under acceleration, thus depriving the brain of blood. Black-out and g-LOC have caused a number of fatal aircraft accidents.
"domain": "engineering.stackexchange",
"id": 4149,
"tags": "pressure, aerospace-engineering, terminology"
} |
Change in earth mass since the time of the dinosaurs | Question: Is there a significant difference in the mass of the Earth between now and the time of the dinosaurs (250 - 75 million years ago)? I was just wondering if the force of gravity on dinosaurs would have been significantly less when they were around.
I gather, but may be wrong, that the mass of earth at present increases by around $10^8$ kg/day. All else being equal, one would expect the earth to have gained a mass, since 75 million years ago, of:
$$ 75^6 \times 365 \times 10^8 kg = 6.496 \times 10^{21} kg $$
The current mass of Earth according to Wikipedia is $5.972\times10^{24}$ kg, so if my math is correct, the change since the time the dinosaurs went extinct would be less than 0.1%.
Is my math correct here? If it is, would the change in the mass of the Earth over this period of time have had any noticeable effect on weight from its gravity?
Answer:
I gather, but may be wrong, that the mass of earth at present increases by around $10^8$ kg/day. All else being equal, one would expect the earth to have gained a mass, since 75 million years ago, of:
$$ 75^6 \times 365 \times 10^8 kg = 6.496 \times 10^{21} kg $$
First off, that should be $75\times10^6$, not $75^6$. That alone makes your estimate high by a factor of 2373. Correcting this error, the mass gain is only $2.7\times10^{18}$ kg, rather than $6.5\times10^{21}$ kg.
Secondly, I've never seen anywhere close to that magnitude of mass gain. The article you cited in a comment gives a range of 5 to 300 metric tons per day. Your value of $10^8$ kilograms per day is 100000 metric tons per day, which is high by at least a factor of 333 (and up to 20000). With this, the mass gain is between $1.5\times10^{14}$ and $8.2\times10^{15}$ kg. This seems big, but it's tiny compared to the mass of the Earth.
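These first two corrections are easy to verify with a few lines of arithmetic (an illustrative sketch; the Wikipedia mass figure is rounded):

```python
# Check the corrections above with direct arithmetic.
day_gain_kg = 1e8                    # the question's (too high) 10^8 kg/day
days = 75e6 * 365                    # 75 million years of days

wrong_factor = 75**6 / 75e6          # 75^6 instead of 75 * 10^6
corrected_gain = days * day_gain_kg  # mass gained at 10^8 kg/day

earth_mass_kg = 5.972e24
fraction = corrected_gain / earth_mass_kg  # still a tiny fraction of Earth

print(round(wrong_factor), f"{corrected_gain:.2e}", f"{fraction:.2e}")
```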
Thirdly, you are ignoring atmospheric losses. Almost all of the atmospheric losses are in the form of hydrogen and helium. Most of the lost hydrogen results from water vapor that makes its way from the surface of the Earth into the stratosphere and is dissociated. Most of the lost helium results from radioactive decay in the Earth. Both forms of atmospheric loss represent a decrease in the mass of the Earth. Estimates on this mass loss also vary widely, from 100 to 300 metric tons per day.
Putting these widely varying estimates together means the mass of the Earth is increasing by as much as 200 metric tons per day (+300 mass gain, -100 mass loss), or decreasing by as much as 300 metric tons per day (+5 mass gain, -300 mass loss, and since the numbers vary so widely, that's just -300). Either way, it's small potatoes, even over the course of 75 million years. This means g was between $13\times10^{-9}\,\text{m/s}^2$ smaller than it is now and $9\times10^{-9}\,\text{m/s}^2$ larger than it is now. Both of those numbers are very small. | {
"domain": "earthscience.stackexchange",
"id": 405,
"tags": "geophysics, earth-history, gravity"
} |
Is the performance of electronic devices affected by their age? | Question: I always wondered what happens inside an electronic component during its lifetime. I am aware that passive or active electronic devices can have their performance decrease over a long period of activity, due to exposure to excessively hot/cold temperatures, humidity, corrosion, etc., but how can an electronic component "get old" if this has been taken care of and it has been operated in an optimal environment?
Example: My Android-based phone, which is about $1.5$ years old, started to experience a bit of lag even after being restored to factory settings, with the same initial operating system version and no other applications installed. It's clearly slowed down by something, and it's surely not CPU throttling due to battery performance, as the battery is still in good shape. I noticed this behavior on almost all Android devices I've owned so far, but never on iOS-based devices.
My question is: how can two electronic devices (Android and iOS), of approximately the same age and both restored to factory settings, behave so differently in terms of performance after a few years of use?
As it's clearly not a software-related issue, and excluding the battery, is the quality of the hardware components really so important when comparing two manufacturers? What exactly defines an "aged" component? Dried-out capacitors or faulty resistors/transistors? Would these really be the cause of Android devices slowing down over time?
Answer: In electronic devices of modern manufacture, the three hardware component classes that actually degrade with age are 1) batteries, 2) electrolytic capacitors, and 3) wire-to-board interconnects.
Degraded batteries exhibit progressively less charge capacity and eventually will not take a charge at all, but this would not in and of itself cause the device to "run slow".
When electrolytic capacitors degrade, they slowly stop behaving as charge storage devices and begin acting as resistors instead. This causes gross circuit failures (for example, an electronically-tuned FM receiver that no longer responds to tuning commands) but again, this would not in and of itself cause a device to "run slow" either.
When interconnects degrade, signals that are sent through them are degraded by the introduction of noise. This can cause data transfer faults which are remedied by automatically retransmitting the data packet until it happens to be fault-free, which slows down the device, but not in a consistently repeatable way.
The most common reason for a digital device to "run slow" is when its free memory is used up or when you are trying to run a new app on an old, overdue-for-upgrade operating system. | {
"domain": "physics.stackexchange",
"id": 69397,
"tags": "electric-circuits, electronics, electrical-engineering"
} |
Calculating Forces using equations of equilibrium | Question: The homogeneous bar shown in Fig. P-106 is supported by a smooth pin at C and a cable that runs from A to B around the smooth peg at D. Find the stress in the cable if its diameter is 0.6 inch and the bar weighs 6000 lb.
When using the moment equilibrium equation at point C, the tension in the cable is found to be 2957.13 lb.
6000(5) = 5(T) + 10(T)*(3/(34)^(1/2))
I tried to use the equation of equilibrium for Vertical Forces.
6000 = T + T(3/(34)^(1/2))
T = 3961.714 lb
Is there any mistake in applying the equilibrium vertical condition? I'm having trouble finding any faults. The diagram for forces is given below
Answer: You're on the right track but as noted by commenter Sam, the free body diagram for beam AC should include the support reactions from the pin at point C.
Then, the general procedure using the equations of equilibrium will be:
Sum moments about point C to solve for cable tension T
$$\Sigma M_C = -T(5m) - T \frac{3}{\sqrt{34}}(10m) + W(5m) = 0$$
Sum forces in vertical direction to solve for the vertical reaction at point C, (Cy)
$$\Sigma F_y = T + T\frac{3}{\sqrt{34}} - W + C_y = 0$$
Sum forces in the horizontal direction to solve for the horizontal reaction at point C, (Cx)
$$\Sigma F_x = T\frac{5}{\sqrt{34}} - Cx = 0$$
Note that I just assumed directions for Cx and Cy in the sketch. If the equilibrium equations produce a negative value for either Cx or Cy it simply means the force acts in the opposite direction to the one I assumed. | {
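As a numerical sanity check, the three equilibrium equations above can be evaluated directly (an illustrative sketch, W = 6000 lb; the last step also computes the cable stress the original question asks for, assuming the stated 0.6 in diameter):

```python
import math

# Numeric check of the three equilibrium equations, with W = 6000 lb.
W = 6000.0
s = 3 / math.sqrt(34)   # vertical direction cosine of the inclined cable leg
c = 5 / math.sqrt(34)   # horizontal direction cosine

# Moments about C:  T*(5) + T*s*(10) = W*(5)  ->  solve for T
T = 5 * W / (5 + 10 * s)

# Vertical equilibrium:  T + T*s - W + Cy = 0  ->  solve for Cy
Cy = W - T - T * s

# Horizontal equilibrium:  T*c - Cx = 0  ->  solve for Cx
Cx = T * c

# Cable stress: tension over cross-sectional area
# (diameter 0.6 in -> radius 0.3 in).
stress = T / (math.pi * 0.3 ** 2)

print(round(T, 2), round(Cy, 2), round(Cx, 2), round(stress))
```

The tension matches the 2957.13 lb value obtained in the question's moment equation.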
"domain": "engineering.stackexchange",
"id": 2914,
"tags": "stresses"
} |
Edge-partitioning cubic graphs into claws and paths | Question: Again an edge-partitioning problem whose complexity I'm curious about, motivated by a previous question of mine.
Input: a cubic graph $G=(V,E)$
Question: is there a partition of $E$ into $E_1, E_2, \ldots, E_s$, such that the subgraph induced by each $E_i$ is either a claw (i.e. $K_{1,3}$, often called a star) or a $3$-path (i.e. $P_4$)?
I think I saw a paper one day where this problem was proven to be NP-complete, but I cannot find it anymore, and I don't remember whether that result applied to cubic graphs. On a related matter, I'm aware that edge-partitioning a bipartite graph into claws is NP-complete (see Dyer and Frieze). Does anyone have a reference for the problem I describe, or something related (i.e. the same problem on another graph class, that I could then try to reduce to cubic graphs)?
Answer: It seems I had missed this other paper by Dyer and Frieze, where they prove that partitioning the edges of a planar bipartite graph into connected components with $k$ edges is NP-complete for any fixed $k\geq 3$ (Theorem 3.1 p. 145). They then comment that the problem can be shown to remain NP-complete for $k=3$ if all vertices have degree $2$ or $3$, which unless I parsed this the wrong way includes my problem (since the input is bipartite, the graph contains no odd cycle and in particular no triangle, which means that the only subgraphs we can use are claws and paths).
This wasn't actually the end of the story: if the cubic graph is bipartite, then it's easy to partition its edge set using only claws, by selecting one set of the bipartition and making it a set of "claw centers". The general problem is indeed hard, which can be proved using a reduction from CUBIC PLANAR MONOTONE 1-IN-3 SATISFIABILITY. All details are freely accessible on arxiv. | {
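The easy bipartite case mentioned above can be sketched concretely; here is an illustrative check on $K_{3,3}$ (the graph and names are just for demonstration):

```python
# In a cubic bipartite graph, taking every vertex of one side as a
# "claw center" partitions the edge set into claws, because each edge
# has exactly one endpoint on that side. Illustrated on K_{3,3}.
left = ["a", "b", "c"]
right = ["x", "y", "z"]
edges = {(u, v) for u in left for v in right}   # cubic and bipartite

# One claw per left vertex: that vertex together with its three edges.
claws = {u: {(u, v) for v in right} for u in left}

# The claws are pairwise disjoint and cover every edge.
covered = set().union(*claws.values())
print(len(edges), [len(c) for c in claws.values()], covered == edges)
```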
"domain": "cstheory.stackexchange",
"id": 363,
"tags": "cc.complexity-theory, graph-theory, np-hardness, co.combinatorics"
} |
Same instances for different problems | Question: According to the formal definition of an instance, it is a set of input data containing the values of the parameters of some problem (What is an instance of NP complete problem?).
So, two problems may formally have the same class of instances, as long as they share the same parameters, even though they might have, e.g., different objective functions?
Answer: Yes, two problems can be unrelated but have the same set of instances.
For example: given a graph $G$, you can consider the problems of computing a global minimum cut of $G$, computing a vertex cover of $G$, computing a dominating set of $G$, computing a maximum clique in $G$, computing a maximum independent set of $G$, computing the chromatic number of $G$, determining whether $G$ is $2$-edge-connected, determining whether $G$ is chordal, determining whether $G$ is triangle-free, etc...
"domain": "cs.stackexchange",
"id": 19157,
"tags": "optimization, theory, definitions"
} |
Are there any implications of the automorphism group in QECC? | Question: We often see that, classically, the automorphism group of an error-correcting code plays a crucial role in many computational problems. Are there any important implications that depend on this in the quantum case?
One application I could find was fault-tolerant computing. In that case, the automorphism groups of the codes underlying CSS codes are mainly considered.
I am looking for more such applications of automorphism groups of QECCs. Also, in the case of stabilizer codes, does the automorphism group have any direct relation to the stabilizer group, and is there a standard definition of "automorphism group" of relevance?
Answer: In the case of stabilizer codes, one generally starts with the group, $\cal{G}$, of tensor products of single-qubit Pauli operators on $n$ qubits. On one qubit the applicable group is the Pauli group, which has order 16; call it $\cal{G}_0$. So at the general level $\cal{G} = \bigotimes_{i=1}^n \cal{G}_0$. Simplifying assumptions are made in many treatments that make it difficult (for me at least) to pin down the exact discrete groups, and irreps of those groups, being invoked for different stabilizer code applications.
The stabilizer subgroup, $\cal{S} < \cal{G}$, is an Abelian subgroup of $\cal{G}$ that fixes the codespace, $\mathbf{T}$. Note that $\mathbf{T}$ doesn't necessarily have a group structure; it is a subspace, not a subgroup. Since $\mathbf{T}$ is the space of vectors fixed by $\cal{S}$, the action of $\cal{S}$ on $\mathbf{T}$ is the trivial automorphism of $\mathbf{T}$ (i.e. the identity).
As indicated by the name, it's generally the structure of $\cal{S}$ and $\text{Aut}(\cal{S})$ that are most interesting and useful. $\text{Aut}(\cal{S})$ determines the set of valid fault-tolerant encoded operations, so codes with large $\text{Aut}(\cal{S})$ are desirable. Since $\cal{S}$ is Abelian there are no non-trivial inner automorphisms of $\cal{S}$, and $\text{Aut}({\cal{S}})=\text{Out}(\cal{S})$.
So at least in the general theory, the most interesting automorphisms are the trivial automorphisms of the codespace, which defines the stabilizer subgroup, and the outer automorphisms of the stabilizer subgroup, which enable fault tolerant operations. The best reference for all of this that I have found is Gottesman's Thesis, which reads like a textbook on the subject.
As a final note, conventional stabilizer QECCs are a special case of operator stabilizer QECCs. In the context of OQECCs, gauge symmetries are exploited to make codes more efficient, so the normalizer of $\cal{S}$ plays an important role. The standard reference for OQECC from Poulin is also very helpful in understanding the group structure of conventional stabilizer codes.
"domain": "quantumcomputing.stackexchange",
"id": 2056,
"tags": "error-correction, stabilizer-code"
} |
Launch node based on parameters from parameter server | Question:
Hello, is it possible to achieve conditional launching of nodes based on parameters (not arguments!) on roslaunch XML files?
A workaround I think of might be a dummy Python node that would eventually launch the given node after evaluating parameters, but being able to do so in the launch file directly would be great.
Originally posted by peci1 on ROS Answers with karma: 1366 on 2015-08-12
Post score: 0
Answer:
This will probably never be supported in ROS 1. There was a series of pull requests that added the support, but it was not merged: https://github.com/ros/ros_comm/pull/2097 .
Originally posted by peci1 with karma: 1366 on 2022-11-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22441,
"tags": "roslaunch"
} |
Do mutations occur while growing virus for preparing inactivated viral vaccine? | Question: Mutations in viruses are reported to arise during replication, especially for an RNA virus like SARS-CoV-2:
Viruses that encode their genome in RNA, such as SARS-CoV-2, HIV and influenza, tend to pick up mutations quickly as they are copied inside their hosts, because enzymes that copy RNA are prone to making errors
My question is do mutations occur while mass manufacturing inactivated viral vaccines?
Simply because the process of manufacturing inactivated vaccines replicates the virus in large quantities, there might be a large number of mutations occurring. Will this make each batch different from the others? Or is the replication process controlled to limit the mutations?
Veritasium did a recent video on the study of mutations in the long-term evolution experiment with E. coli. It is clear that mutations occur even in a standard controlled environment.
Answer: Very simply, mutations do occur, as they do for any cultured organism. This is a well recognized problem in many fields of biology where organisms are cultured and remains in particular a problem for cultured mammalian cell research.
As far as I know there is no method for slowing or altering the rate of mutation as this is an inherent part of the RNA-dependent RNA polymerase. This happens irrespective of the environment the virus is grown in and is one of the ways that viruses of this sort combat the immune system and ecosystem changes - they produce so many mutations over the generations of viral replication that some of the progeny will be more fit in the environment in which they are replicating.
In fact, they produce so many mutations at each replication cycle that these viruses cannot be called a single "species"; they are actually what is referred to as a quasispecies.
It is part of the difficulty of producing vaccines against things like influenza and related viruses as they can change from the phenotype seen in the wild. However, we have methods of checking that the viruses are the same or similar enough antigenically that it makes no difference for the immune response to the vaccine. | {
"domain": "biology.stackexchange",
"id": 11461,
"tags": "virology, mutations, coronavirus, vaccination"
} |
What are some examples of undirected weighted networks in ecology? | Question: I'm a math major with a current interest towards network theory. A network can be considered as a collection of nodes, and edges between these nodes signifying some relation between them. The most ubiquitous example would be a social network like Facebook with nodes representing people, and an edge connecting two nodes if they're friends.
A network is called directed if the edges have a direction. This means that if there's an edge directed from A to B, it's not the same as an edge from B to A. An example of this would be a food web network, where the nodes represent species and an edge from species A to B means that A eats B. Obviously, this does not necessarily mean that there's an edge from B to A (rarely does the prey eat the predator).
A network is called undirected if it is not directed, i.e. none of the edges have any direction; if there's an edge from A to B, it necessarily means that there's an edge from B to A.
Now, one can assign weights (numerical values) to the edges in a network to signify the relative importance of edges. In the Facebook example above, this can be considered akin to assigning values to edges which indicate the strength of friendship/distance between the residences of friends, etc.
I wish to study ecological networks to see if I can make interesting inferences from my study of weighted networks.
So can you give me some examples of undirected, weighted ecological networks?
Answer: Epidemiology, the spreading of diseases, is probably one of the most famous applications of network theory in biology, going all the way back to John Snow.
A more relevant example would maybe be something like graph models of habitat mosaics. Populations are often both spread out over landscapes and clumped together in smaller pockets of suitable habitat. While fragmented, these populations still have a certain amount of contact, resulting in gene flow and the creation of what is often called a meta-population. These things can happen on a vast range of scales, from your back yard to whole continents, but by modeling them as graphs they can all be studied comparatively and tested for resilience against disturbances, habitat loss, etc.
You mention social networks, but there is no reason why you can't study the same in animal populations. Say one individual learns something useful, maybe the location of, and route to, a rich feeding ground. That information can then be shared with other individuals, either by show-and-tell, or maybe by some kind of abstract communication. The information spreads in familiar exponential fashion until everyone knows; then any one of those individuals can relay it, or be "eavesdropped" on, by a neighbouring population (it doesn't even have to be the same species in some cases), and the information flows again. As with the habitat mosaics, the benefit of these models is that very different systems can be compared and tested on an equal footing; anything from fish, to birds, mammals, even some social insects, lend themselves well to this.
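As a concrete (and entirely hypothetical) illustration of an undirected, weighted habitat-mosaic network, here is a small sketch with the kind of connectivity check used in resilience tests; the patch names and weights are invented:

```python
from collections import deque

# Toy undirected, weighted habitat-mosaic graph (hypothetical patches);
# weights could represent dispersal rates or corridor quality.
graph = {
    "pond": {"meadow": 0.8, "forest": 0.3},
    "meadow": {"pond": 0.8, "forest": 0.5},
    "forest": {"pond": 0.3, "meadow": 0.5, "hedge": 0.2},
    "hedge": {"forest": 0.2},
}

def connected(g, removed=frozenset()):
    """BFS check: is the mosaic still one component if some patches vanish?"""
    nodes = [n for n in g if n not in removed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nbr in g[queue.popleft()]:
            if nbr not in removed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

# Removing the "forest" patch isolates the hedge, a simple resilience test.
print(connected(graph), connected(graph, removed={"forest"}))
```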
"domain": "biology.stackexchange",
"id": 4692,
"tags": "network, biological-networks"
} |
Quantum Angular Momentum Covariance | Question: Angular momentum is quantized in quantum mechanics; it can only take multiples of $\hbar$ https://en.wikipedia.org/wiki/Angular_momentum_operator#Quantization
The previous statement seems to apply to any inertial reference system in which the particle is measured; however, the angular momentum is a quantity that transforms covariantly as a tensor together with the dynamic mass moment N=mx-tp https://en.wikipedia.org/wiki/Relativistic_angular_momentum
According to this transformation, the angular momentum cannot be 1 ($\hbar$) in all systems, even if the dynamic mass moment is zero because we measure the particle from its center of mass.
My question, therefore is, how does the quantum angular momentum transform?
Answer: You are combining the angular momentum into a 4-vector which changes covariantly under the full group of Lorentz transformations, however, the core of what is confusing you does not require this. In what follows I will simplify the question to only an angular momentum vector $L^i$ which transforms covariantly under the usual three dimensional rotations. Here the same confusion should exist without the need to involve special relativity.
The historical path quantum theory took was filled with many false starts and dead ends. Putting philosophical discussion about the interpretation of quantum mechanics aside, when the dust settled there was a relatively well-agreed-upon procedure for obtaining a quantum mechanical theory from a classical one, which we call "quantization". The main idea is that particle states are no longer represented by phase space coordinates (x,p), but rather by an element $|\psi\rangle$ of a vector space $\mathcal{H}$. Observable quantities, like position, momentum, or anything that was classically defined as a function of them (such as angular momentum), are now represented by operators which act on the vector space. When you observe these quantities in an ideal setting, you will obtain one of the eigenvalues of that operator probabilistically, with the probability distribution determined by the state $|\psi\rangle$ the particle was in.
Now, it so happens that the operator corresponding to angular momentum has finitely many distinct eigenvalues and this is what we mean when we say that it is quantized: you can only ever measure one of those discrete values. When we do a coordinate rotation $R$ it will now act on the state vector in a unitary representation $U(R)$ (unitary is necessary to make sure the probability of measuring anything remains 1):
$$
|\psi\rangle\rightarrow U(R)|\psi\rangle\equiv|\psi\rangle_R.
$$
This can alternatively be viewed as an action on the space of operators, such as the "vector of angular momentum operators" $\hat{L}^i$, as:
$$
\hat{L}^i\rightarrow U(R)^\dagger \hat{L}^i U(R)
$$
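As a quick numerical sanity check that a conjugation of this form leaves eigenvalues unchanged, here is a pure-Python sketch (the 2x2 matrix helpers are illustrative, not from any library): the spin-1/2 operator $\sigma_z$ is conjugated by a real orthogonal rotation $U(\theta)$, so $U^\dagger = U^T$.

```python
import math

# A 2x2 toy model: the spin-1/2 operator sigma_z conjugated by a real
# orthogonal rotation U(theta). The helper names below are illustrative,
# not from any library.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

def eigenvalues_2x2(a):
    # Roots of x^2 - tr(a) x + det(a) = 0 (fine for a real symmetric 2x2).
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

sigma_z = [[1.0, 0.0], [0.0, -1.0]]   # eigenvalues +1 and -1 (units of hbar/2)

theta = 0.7                           # arbitrary rotation angle
c, s = math.cos(theta), math.sin(theta)
U = [[c, -s], [s, c]]                 # real orthogonal, so U^dagger = U^T

rotated = matmul(transpose(U), matmul(sigma_z, U))   # U^dagger sigma_z U

print(eigenvalues_2x2(sigma_z))       # [-1.0, 1.0]
print(eigenvalues_2x2(rotated))       # again [-1.0, 1.0], up to rounding
```

The entries of `rotated` depend on $\theta$, but the measurable spectrum does not.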
These will still look like the usual continuous transformations on vectors, but now in the context of the operators $\hat{L}^i$. At the end of the day, this transformation will not change the operator's eigenvalues, so you can only ever measure one of the same discrete set of angular momentum eigenvalues, with the probability of measuring a certain value depending on the state $|\psi\rangle$. | {
"domain": "physics.stackexchange",
"id": 94642,
"tags": "quantum-mechanics, operators, angular-momentum, covariance"
} |
Using setJointValueTarget and setPoseTarget gives different results | Question:
The following code works if the target is defined by joint values. But if I define the same point by pose (as in the commented section), the planned path is different every time I run the code, and sometimes the robot is not able to find a path...
I am using Hydro on a UR10, MoveIt version 0.5.20, Ubuntu 12.04. I get the position and orientation values of 'goal' by using the command rosrun tf tf_echo base_link ee_link when the end effector reaches 'joints'.
So basically, 'goal' and 'joints' are the same position. Why is setJointValueTarget working while setPoseTarget is not?
#include <map>
#include <string>

#include <ros/ros.h>
#include <moveit/move_group_interface/move_group.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char **argv)
{
ros::init(argc, argv, "move_group_test");
ros::AsyncSpinner spinner(1);
spinner.start();
moveit::planning_interface::MoveGroup group("manipulator");
group.setPlannerId("ESTkConfigDefault");
std::map<std::string, double> joints;
joints["shoulder_pan_joint"] = 0;
joints["shoulder_lift_joint"] = -0.5648;
joints["elbow_joint"] = -1.06;
joints["wrist_1_joint"] = -1.69;
joints["wrist_2_joint"] = -1.76;
joints["wrist_3_joint"] = 8.572;
group.setJointValueTarget(joints);
group.move();
/*
geometry_msgs::Pose goal;
goal.position.x = 0.555;
goal.position.y = 0.146;
goal.position.z = 1.156;
goal.orientation.w = 0.991428;
goal.orientation.x = 0.0085588;
goal.orientation.y = -0.087665;
goal.orientation.z = -0.0964952;
group.setPoseTarget(goal);
group.move();
*/
}
Originally posted by ami on ROS Answers with karma: 11 on 2016-06-02
Post score: 1
Original comments
Comment by silent07 on 2017-01-17:
I have a similar issue, did you find any answers?
Answer:
If you only specify the pose of the end-effector, Inverse Kinematics will be invoked internally to compute joint value targets that satisfy your request.
This might fail and might find different joint solutions each time.
If you are using the default KDL kinematics plugin, it probably helps to switch to TracIK.
See http://docs.ros.org/kinetic/api/moveit_tutorials/html/doc/trac_ik_tutorial.html
Originally posted by v4hn with karma: 2950 on 2017-01-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by silent07 on 2017-01-18:
I found out my issue. It was indeed related to kinematics. Sometimes the IK would give me an elbow up solution and other time elbow down solution. This was with Track_IK. I will try filter the IK solutions and then feed it for planning.
Comment by silent07 on 2017-01-18:
I tried the whole thing once again. Again setPoseTarget(Pose) generates some crazy trajectories, while setJointValueTarget(Pose) is always good. Both have the IK issue above but the plan they generate for similar IK solutions is very different.
Comment by gvdhoorn on 2017-01-18:
You could see whether TracIK configured with the distance metric would result in more consistent solutions. It would depend on what your seed is, but it could help. | {
"domain": "robotics.stackexchange",
"id": 24797,
"tags": "ros, moveit, path"
} |
Upgrade to ros-melodic from ros-kinetic | Question:
I want to upgrade from Ubuntu 16.04 to Ubuntu 18.04. Since I know ROS Kinetic is not supported on Ubuntu 18.04, I'm ready to use ROS Melodic. But the question is: before upgrading Ubuntu to 18.04, should I uninstall ros-kinetic first?
Originally posted by rosbie0602 on ROS Answers with karma: 1 on 2020-02-05
Post score: 0
Answer:
I actually recently did this transition myself for my home machines.
If you attempt to go through the Ubuntu upgrade system to change to 18.04, you'll get about 80% done and it'll complain about some really esoteric things that couldn't be changed. I don't remember off hand. After a few hours of investigating, I figured it may be ROS because that's the only non-vanilla thing I have on my machine.
After uninstalling ROS, and attempting again, it worked fine. So yes, I would recommend fully uninstalling ROS Kinetic. After the upgrade, go through a fresh Melodic install.
Originally posted by stevemacenski with karma: 8272 on 2020-02-05
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34391,
"tags": "ros-melodic"
} |
Another Rock Paper Scissors Lizard Spock implementation | Question: The Challenge
Create an implementation of "Rock - Paper - Scissors - Lizard - Spock".
The rules:
Scissors cuts paper, paper covers rock, rock crushes lizard, lizard poisons Spock, Spock smashes scissors, scissors decapitate lizard, lizard eats paper, paper disproves Spock, Spock vaporizes rock. And as it always has, rock crushes scissors. -- Dr. Sheldon Cooper
My approach
I wanted to make the game flexible and general. It should not be necessary to change much to create a "normal" Rock-Paper-Scissors implementation. It should also be possible to add elements such as Water balloon if you'd like. Technically, it should also be possible without too much effort to modify the elements at runtime (this is currently not supported in the implementation below, but there aren't many changes needed to make it a reality).
Players. All players have a score that gets increased when they win. A player should have a method to return which item the player chooses; the implementation can vary (a human can type input, an AI can return something random, some other AIs should perhaps always choose SPOCK...)
Code
Because I am lazy, I have put all the classes/interfaces in the same file. Of course, all of them could be placed in their own files as public. The code is stand-alone, just copy it and paste it to your favorite IDE (Eclipse) and run it as a JUnit test case.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.Closeable;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Random;
import java.util.Scanner;
import java.util.Set;

import org.junit.Before;
import org.junit.Test;

interface IItem {
/**
* To allow configuration "from both sides", use an int instead of a boolean.<br>
* For example, if SCISSORS have -1 edge against ROCK, it will lose as long as ROCK doesn't have an edge below -1 against SCISSORS.<br>
* This way, it is possible to configure both "beats" and "gets beaten by". It is also possible to return a randomized value here to allow for more complex game styles.
*/
int edge(IItem opponent);
}
abstract class ItemPlayer {
public abstract IItem chooseOne(IItem[] possibles);
private int score = 0;
public void wonAGame() {
score++;
}
public int getScore() {
return score;
}
}
enum Items implements IItem {
SCISSORS, PAPER, ROCK, LIZARD, SPOCK;
private final Set<IItem> beats = new HashSet<IItem>();
void beat(IItem... items) {
this.beats.addAll(new HashSet<IItem>(Arrays.asList(items)));
}
@Override
public int edge(IItem opponent) {
return beats.contains(opponent) ? 1 : 0;
}
}
public class StackExchange {
private static final IItem NO_WINNER = null;
public static IItem fight(IItem first, IItem second) {
int firstEdge = first.edge(second) - second.edge(first);
if (firstEdge == 0)
return NO_WINNER;
return firstEdge > 0 ? first : second;
}
public static class AIInput extends ItemPlayer {
private final Random random = new Random();
@Override
public IItem chooseOne(IItem[] possibles) {
if (possibles.length == 0)
throw new IllegalArgumentException("Possibles needs to contain at least one element");
return possibles[random.nextInt(possibles.length)];
}
@Override
public String toString() {
return "AI";
}
}
public static class HumanInput extends ItemPlayer implements Closeable {
private final Scanner scanner;
public HumanInput() {
this.scanner = new Scanner(System.in);
}
@Override
public void close() {
this.scanner.close();
}
@Override
public IItem chooseOne(IItem[] possibles) {
if (possibles.length == 0)
throw new IllegalArgumentException("Possibles needs to contain at least one element");
do {
System.out.println("Choose one of the following: " + Arrays.toString(possibles));
String str = scanner.nextLine();
for (IItem item : possibles) {
if (item.toString().equals(str)) {
return item;
}
}
System.out.println("Incorrect input.");
}
while (true);
}
@Override
public String toString() {
return "Human";
}
}
@Before
public void setup() { // This configures what beats what.
Items.SCISSORS.beat(Items.PAPER);
Items.PAPER.beat(Items.ROCK);
Items.ROCK.beat(Items.LIZARD);
Items.LIZARD.beat(Items.SPOCK);
Items.SPOCK.beat(Items.SCISSORS);
Items.SCISSORS.beat(Items.LIZARD);
Items.LIZARD.beat(Items.PAPER);
Items.PAPER.beat(Items.SPOCK);
Items.SPOCK.beat(Items.ROCK);
Items.ROCK.beat(Items.SCISSORS);
}
@Test
public void assertions() {
assertEquals(NO_WINNER, fight(Items.SCISSORS, Items.SCISSORS));
assertEquals(Items.SPOCK, fight(Items.SPOCK, Items.SCISSORS));
assertTrue(Items.ROCK.edge(Items.SCISSORS) > 0);
assertTrue(Items.SCISSORS.edge(Items.ROCK) == 0);
assertEquals(Items.ROCK, fight(Items.SCISSORS, Items.ROCK));
}
@Test
public void challenge() {
HumanInput human = new HumanInput();
ItemPlayer comp = new AIInput();
final int GAMES = 42; // we play 42 games, just because it's 42 of course.
for (int i = 1; i <= GAMES; i++) {
System.out.println("Game " + i + " of " + GAMES);
// choose the items
IItem first = human.chooseOne(Items.values());
IItem second = comp.chooseOne(Items.values());
// determine which item wins
IItem fightResult = fight(first, second);
// show result
System.out.println(first + " vs. " + second + ": " + fightResult);
ItemPlayer winner = null;
if (fightResult == NO_WINNER)
System.out.println("Tie!");
else {
winner = fightResult == first ? human : comp;
winner.wonAGame();
System.out.println("Winner is: " + winner);
}
System.out.println("Score is now " + human.getScore() + " - " + comp.getScore());
System.out.println();
}
human.close();
}
}
Answer: ItemPlayer is a really good idea. Keeping the logic and score centralized in an abstract class for the AI and Human player is a good design, but you have not taken the idea far enough.
The abstract class should have a name field, and a constructor that takes the name:
abstract class ItemPlayer {
private final String name;
ItemPlayer(String name) {
this.name = name;
}
public String getName() {
return name;
}
}
Using the toString() on the concrete classes to get the name is not a great option. Instead, for example, the AI class should be:
public static class AIInput extends ItemPlayer {
AIInput() {
super("AI");
}
....
}
Similarly, the HumanInput can be changed. With the Human input, I like how you've made it Closeable... but then, why didn't you use a try-with-resources structure in your main loop?
Finally, I like the use of the enum for the valid moves (and I did something similar), but having the rules for the moves outside of the enum is cumbersome. Using a static initializer block inside the enum class would solve that problem instead of relying on the setup() method call. | {
"domain": "codereview.stackexchange",
"id": 5190,
"tags": "java, game, community-challenge, rock-paper-scissors"
} |
Using the de Bruijn sequence to find the $\lceil\log_2 v \rceil$ of an integer $v$ | Question: Sean Anderson published bit twiddling hacks containing Eric Cole's algorithm to find the $\lceil\log_2 v \rceil$ of an $N$-bit integer $v$ in $O(\lg(N))$ operations with multiply and lookup.
The algorithm relies on a "magic" number from De Bruijn sequence.
Can anybody explain fundamental math properties of the sequence used here?
uint32_t v; // find the log base 2 of 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = MultiplyDeBruijnBitPosition[(uint32_t)(v * 0x07C4ACDDU) >> 27];
Answer: First note that this algorithm only computes $\lceil \log_2 v \rceil$, and as the code is written, it works only for $v$ that fit in a $32$-bit word.
The sequence of shifts and or-s that appears first has the function of propagating the leading 1-bit of $v$ all the way down to the least significant bit. Numerically, this gives you $2^{\lceil \log_2 v \rceil} - 1$.
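Before unpacking the de Bruijn part, it may help to run the whole algorithm. Here is a direct Python port of the C code from the question (a sketch: the lookup table and the constant 0x07C4ACDD are copied verbatim from the question, while the function name is mine). It returns the index of the highest set bit of a 32-bit $v$:

```python
# The lookup table and magic constant, copied verbatim from the question.
MULTIPLY_DEBRUIJN_BIT_POSITION = [
    0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
    8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31,
]

def log2_debruijn(v):
    # Smear the leading 1-bit downward: v becomes one less than the next
    # power of two, i.e. a block of ones ending at the old leading bit.
    v |= v >> 1
    v |= v >> 2
    v |= v >> 4
    v |= v >> 8
    v |= v >> 16
    # The multiply shifts a distinct 5-bit window of the de Bruijn sequence
    # into the top bits; Python ints are unbounded, so emulate uint32 wrap.
    return MULTIPLY_DEBRUIJN_BIT_POSITION[((v * 0x07C4ACDD) & 0xFFFFFFFF) >> 27]

for v in (1, 2, 3, 255, 256, 0x80000000, 0xFFFFFFFF):
    assert log2_debruijn(v) == v.bit_length() - 1
print("ok")
```

The assertion checks the result against Python's `int.bit_length()`, which gives the same bit-index answer for every 32-bit input.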
The interesting part is the de Bruijn trick, which comes from this paper of Leiserson, Prokop and Randall (apparently MIT professors spend time doing bit hacks :) ). What you need to know about de Bruijn sequences is that they represent all possible sequences of a given length in a way that's as compressed as possible. Precisely, a de Bruijn sequence over the alphabet $\{0, 1\}$ is a binary string $s$ of length $2^k$ such that each length-$k$ binary string appears exactly once as a contiguous substring (wrap-around is allowed). The reason this is useful is that if you have a number $X$ whose bit representation is a de Bruijn sequence (padded with $k$ zeros), then the top $k$ bits of $2^i X$ uniquely identify $i$ (as long as $i < 2^k$). | {
"domain": "cstheory.stackexchange",
"id": 3959,
"tags": "nt.number-theory, comp-number-theory"
} |
Why are some fungi poisonous? | Question: There are many poisonous fungi in nature. For example, Amanita phalloides.
What could a fungus need poison for? Some species, like venomous snakes, use venom to kill prey. But what about fungi? I can't think of any purpose for poison in fungi. If the poison has no real function, shouldn't evolution get rid of it?
Answer: The same reason some plants are poisonous: to stop animals from eating them.
The visible part of the fungus is called, rather misleadingly, the fruiting body. It exists to produce and spread spores and thus produce the next fungal generation. Getting eaten, rather obviously, inhibits its ability to do this. Being poisonous discourages animals from eating the fruiting body and thus permits it to complete its life cycle. | {
"domain": "biology.stackexchange",
"id": 1256,
"tags": "evolution, mycology"
} |
Is it possible to add a topic to Rviz from the command line? | Question:
I want my topics to be added automatically to Rviz. Is it possible?
I wanted to know whether there are commands to add topics.
Originally posted by shashankbhat on ROS Answers with karma: 3 on 2017-03-14
Post score: 0
Answer:
No, that is not possible.
But you can tell RViz to load a particular configuration at startup, using the -d command line arg.
See rviz -h for more info.
Originally posted by gvdhoorn with karma: 86574 on 2017-03-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by shashankbhat on 2017-03-14:
Thanks a lot @gvdhoorn | {
"domain": "robotics.stackexchange",
"id": 27310,
"tags": "ros, rviz, topics"
} |
What's the point in conserving electricity if the AC current produced cannot be stored? | Question: If we cannot store the AC current produced in a battery or some other device, the way we can store DC, then there seems to be no point in conserving it, because current once produced must be consumed somehow!
Answer: The torque on electricity generators is continuously adjusted to keep them running at a constant speed (e.g. 60Hz in the US and 50Hz in the UK). When you turn on some electrical item the current it consumes places a greater load on some electricity generator somewhere and this reduces the speed. To counter this, at the generating station more torque is applied to the generator to keep the speed constant. At a conventional generator this means increasing the steam flow, which means increasing the amount of coal burnt and increasing the amount of carbon dioxide released.
The converse is also true. When you turn off an electrical item it decreases the load on the generator, and this means less coal has to be burnt.
In practice the effect on the generator of you turning one device on or off is unlikely to be noticeable. However, when millions of people turn things on and off it makes a large difference. This is the origin of the notorious TV pickup effect.
The point is that you're starting with the wrong assumption. Power companies don't just generate power and then not care if anyone uses it or not. They only generate just enough power to supply the demand. | {
"domain": "physics.stackexchange",
"id": 18375,
"tags": "electricity, energy-conservation, electric-current"
} |
Probability calibration is worsening my model performance | Question: I'm using RandomForest and XGBoost for binary classification, and my task is to predict probabilities for each class. Since tree-based models are bad at outputting usable probabilities, I imported sklearn.calibration's CalibratedClassifierCV, trained the RF on 40k samples, then trained the CCV on a separate 10k samples (with the cv="prefit" option); my metric (Area Under ROC) is showing a huge drop in performance. Is it normal for probability calibration to alter the base estimator's behavior?
Edit: Since I'm minimizing logloss on my XGBClassifier, its output probabilities aren't that bad compared to the RF's outputs.
Answer: The probability calibration is just stacking a logistic or isotonic regression on top of the base classifier. The default is logistic, and since the sigmoid is a strictly increasing function, the rank-ordering of samples will be unaffected, and so AUC should not change at all.
(With isotonic regression, it's actually piecewise-constant, so within a span where the function is constant, all samples will have their scores made equal, and so your ROC curve will become more coarse, which would affect the AUC; but these effects ought to be small, as long as the isotonic fit produces sufficiently many/short constant segments.)
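The rank-preservation argument is easy to check numerically. The following pure-Python sketch (the `auc` and `sigmoid` helpers are simplified stand-ins, not sklearn's implementations, and the data is synthetic) shows that a strictly increasing sigmoid applied to the scores leaves the rank-based AUC unchanged:

```python
import math
import random

def auc(scores, labels):
    # Probability that a random positive outranks a random negative,
    # with ties counting 1/2: the rank definition of ROC AUC.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sigmoid(x, a=3.0, b=-1.0):
    # A stand-in for a fitted Platt scaler; a > 0 keeps it strictly increasing.
    return 1.0 / (1.0 + math.exp(-(a * x + b)))

random.seed(0)
labels = [random.randint(0, 1) for _ in range(200)]
scores = [random.random() * 0.5 + 0.4 * y for y in labels]  # noisy raw scores

raw = auc(scores, labels)
calibrated = auc([sigmoid(s) for s in scores], labels)
assert abs(raw - calibrated) < 1e-12
print("AUC unchanged:", raw)
```

Because the sigmoid never swaps the order of two scores, every pairwise comparison inside `auc` gives the same result before and after calibration.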
Also, gradient boosting like XGBoost produces scores biased toward the extremes, not away from them like random forests, so the logistic calibration is unlikely to work out well. | {
"domain": "datascience.stackexchange",
"id": 5789,
"tags": "machine-learning, classification, probability-calibration"
} |
"rosdep update" error in Kinetic | Question:
Hello all;
I am installing Ros Kinetic in a new machine with Ubuntu 16.04.
Everything goes fine, until I try to do a "rosdep update" command.
I obtain the following errors repeatedly:
reading in sources list data from /etc/ros/rosdep/sources.list.d
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml]:
Failed to download target platform data for gbpdistro:
<urlopen error no host given>
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml
ERROR: error loading sources list:
<urlopen error <urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml)>
I have already read the questions and answers related to this issue, but none of them fully applies to my problem.
EDIT (links to questions and answers):
https://answers.ros.org/question/162996/cannot-update-rosdep/
https://answers.ros.org/question/277395/rosdep-update-the-read-operation-timed-out/
https://github.com/ros-infrastructure/bloom/issues/386
https://github.com/ros/rosdistro/issues/9721
https://github.com/ros-infrastructure/rosdep/issues/576
http://blog.csdn.net/gddxz_zhouhao/article/details/72861266
The rosdep update output that I am obtaining is:
a4blue@a4blue-HP-EliteDesk-800-G3-TWR:~$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
-------------------
url: 'https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml'
----------------
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml)
-------------------
url: 'https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml'
----------------
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml)
------------------
url: 'https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml'
----------------
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml)
-------------------
url: 'https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml'
----------------
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml]:
<urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml)
ERROR: unable to process source [https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml]:
Failed to download target platform data for gbpdistro:
<urlopen error no host given>
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml
ERROR: error loading sources list:
<urlopen error <urlopen error no host given> (https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml)>
a4blue@a4blue-HP-EliteDesk-800-G3-TWR:~$
Can anyone give a clue about this issue?
Thank you all very much in advance,
Best,
Alberto
Originally posted by altella on ROS Answers with karma: 149 on 2018-03-07
Post score: 0
Original comments
Comment by gvdhoorn on 2018-03-07:
Can you link us to the Q&As you've already looked at? Just to avoid forum members from suggesting the same solutions.
You can use the q#nnnnnn format to link to questions, don't need to copy-paste the complete url.
Comment by gvdhoorn on 2018-03-07:
Probably start with the regular things to check:
ping github.com works?
are you behind a (company) firewall or proxy server?
can you wget any of those URLs?
is your computers clock and date set correctly?
Comment by altella on 2018-03-07:
Everything is ok. ping works, I can do wget, clock and date correct. In the same company network, I have installed other previous versions of ROS without problems.
Comment by gvdhoorn on 2018-03-07:\
urlopen error no host given
this seems to suggest that there is no host part in the url passed to urlopen. If you can, it would probably be easiest to instrument the rosdep code to get it to print whatever URL it is about to pass to urllib and see whether something is wrong there.
Comment by gvdhoorn on 2018-03-07:\
I have installed other previous versions of ROS without problems.
that doesn't necessarily mean that something couldn't have changed that now prevents you from doing the same of course.
Comment by altella on 2018-03-07:
I know... I am going to try to install Kinetic on another machine just to check. How can I make rosdep print the information you are referring to?
Comment by gvdhoorn on 2018-03-07:\
I am going to try to install Kinetic in another machine just to check
that might be a bit drastic. What about installing Docker on a machine (if it doesn't already have it) and then using the ros:kinetic image see if rosdep wants to play nice? You could even do that on your own machine.
Comment by gvdhoorn on 2018-03-07:\
How can I make rosdep to print the information you a referring to?
by editing the sources. I don't think there is a built-in facility for this unfortunately.
Comment by altella on 2018-03-08:
I have installed Kinetic in a clean VirtualBox image of Ubuntu 16.04 LTS, and everything works perfectly. The machine with VirtualBox is in the same LAN, so the installation was under the same conditions. The PC that fails also has freeling, stardog and xampp installed. Could that affect it?
Comment by gvdhoorn on 2018-03-08:
this is where the download is attempted. Searching for the error (no host given) turns up a lot of results where spurious \n characters at the end of URLs made urllib ..
Comment by gvdhoorn on 2018-03-08:
.. fail. It's a long shot, but can you check that the files in /etc/ros/rosdep/sources.list.d are ok? No windows line-endings fi?
You could add something like:
print ("\n\n-------------------\n\nurl: '{}'\n\n----------------\n\n".format(url))
right above the line I linked to see ..
Comment by gvdhoorn on 2018-03-08:
.. what URL rosdep is really trying to download.
Comment by altella on 2018-03-08:
there is only one file. Seems ok, line endings correct.
I have modified the file and the urls seem also ok. I paste the output in the following link:
[https://drive.google.com/open?id=1rQWJd1wkvph1jJAlKuUM_zdWgGYdcbla]
Comment by gvdhoorn on 2018-03-08:
Please edit your original question. Comments are not suited for this.
Comment by gvdhoorn on 2018-03-08:
One thing to check could be the following:
create a Python virtualenv, be sure to use the --no-site-packages option
source /path/to/your/virtualenv/bin/activate
install rosdep using pip: pip install rosdep
sudo rm -rf /etc/ros/rosdep
sudo rosdep init
..
Comment by gvdhoorn on 2018-03-08:
.. then finally a rosdep update. If that does succeed, it could be that some Python library has been updated on your system that is interfering (somehow) with rosdep.
Comment by gvdhoorn on 2018-03-08:
Also check your PYTHONPATH, and whether your setting it explicitly in your .bashrc (fi).
Note that we're just checking everything, as without being able to reproduce your issue, it's hard to diagnose.
Comment by altella on 2018-03-08:
PYTHONPATH in the machine I have been given is empty.
In the machine kinetic works is set to:
/opt/ros/kinetic/lib/python2.7/dist-packages
Could this be the problem?
Installation from my side has been identical concerning ROS. But the machine had previous SW installed..
Comment by gvdhoorn on 2018-03-08:
PYTHONPATH pointing to /opt/ros/.. probably means that the machine you're checking against does a source /opt/ros/kinetic/setup.bash in its .bashrc. That should not matter.
Comment by gvdhoorn on 2018-03-08:
Check whether you have Python pkgs installed through pip:
find /usr/local/lib/python2.7/dist-packages -maxdepth 2 -name __init__.py | xargs realpath | LANG= xargs dpkg -S 2>&1 | grep 'no path found' | sed "s/.*\/\([^\/]*\)\/__init__\.py.*/\1/"
these may be interfering.
Answer:
Hi all,
Problem solved. On the machine where I was trying to install Kinetic, at the end of the .bashrc file, there were the following lines:
export http_proxy=''
export https_proxy=''
export ftp_proxy=''
export socks_proxy=''
which were making rosdep fail. After commenting these lines, installation has gone ok as always.
Thanks a lot for the fast received help.
Originally posted by altella with karma: 149 on 2018-03-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2018-03-09:\
are you behind a (company) firewall or proxy server?
so it was a proxy after all :)
The no host specified error could then have come from urllib(2) complaining that *_proxy variables were set, but did not specify "a host". | {
"domain": "robotics.stackexchange",
"id": 30227,
"tags": "ros-kinetic"
} |
Thrust produced by a propeller? | Question: I am trying to calculate how much thrust a propeller will have to generate so that the drone can attach itself to any mass $M$ and pull it up under a gravitational field that is half as strong as Earth’s. But I found this on how to find the thrust generated by a propeller:
Simple Momentum Theory
Turning to the math, the thrust $F$ generated by the propeller disk is equal to the pressure jump $\Delta p$ times the propeller disk area $A$:
$$F = \Delta p \cdot A$$
A units check shows that:
$$\text{force} = \frac{\text{force}}{\text{area}} \cdot\text{area}$$
We can use Bernoulli's equation to relate the pressure and velocity ahead of and behind the propeller disk, but not through the disk. Ahead of the disk the total pressure $pt_0$ equals the static pressure $p_0$ plus the dynamic pressure $0.5 \cdot r\cdot V_0^2$.
$$pt_0 = p_0 + 0.5 \cdot r \cdot V_0^2$$
where $r$ is the air density and $V_0$ is the velocity of the aircraft. Downstream of the disk,
$$pt_e = p_0 + 0.5 \cdot r \cdot V_e^2$$
where $pt_e$ is the downstream total pressure and $V_e$ is the exit velocity. At the disk itself the pressure jumps
$$\Delta p = pt_e - pt_0$$
Therefore, at the disk,
$$\Delta p = 0.5 \cdot r \cdot [V_e^2 - V_0^2]$$
Substituting the values given by Bernoulli's equation into the thrust equation, we obtain
$$F = 0.5 \cdot r \cdot A \cdot [V_e^2 - V_0^2]$$
We still must determine the magnitude of the exit velocity. A propeller analysis based on the momentum equation provides this value.
Note that this thrust is an ideal number that does not account for many losses that occur in practical, high speed propellers, like tip losses. The losses must be determined by a more detailed propeller theory, which is beyond the scope of these pages. The complex theory also provides the magnitude of the pressure jump for a given geometry. The simple momentum theory, however, provides a good first cut at the answer and could be used for a preliminary design.
Here we need to know the velocity of the drone (or in this case the aircraft) to find the thrust. But how do I do that when I am trying to find that very thing? I need to know the amount of thrust required to produce a force which accelerates my drone at more than $5\,\text{m}/\text{s}^2$. How do I then find the thrust without knowing the velocity?
Answer: Drag increases as airspeed increases while propeller thrust decreases as airspeed increases. At some point there is an equilibrium where the drag has increased to the same value that the propeller thrust has decreased to. That is the equilibrium speed the aircraft will accelerate to.
You should have gleaned from this that both drag and propeller thrust change with airspeed, which means that for a fixed propeller with constant RPM, you cannot have a constant acceleration, as the net force accelerating the drone decreases as it goes faster and faster. That means the way you framed your specifications is a problem. You say it needs to accelerate the drone at $5\mathrm{m/s^2}$, but at what airspeed?
You also need to formulate a drag vs. speed curve and add that force to the force required to support the mass against gravity and to accelerate it upward.
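To make the force-balance part concrete, here is a numeric sketch for the static (zero-airspeed, zero-drag) case, using the momentum-theory formula quoted in the question. All the numbers (a 2 kg drone, a 3 kg payload, a 0.3 m propeller, sea-level air density) are invented for illustration:

```python
import math

# Illustrative numbers only (assumptions, not from the question): a 2 kg drone
# plus a 3 kg payload, a 0.3 m diameter propeller, sea-level air density.
# At zero airspeed (V0 = 0) the momentum-theory formula reduces to
# F = 0.5 * rho * A * Ve^2, which we can invert for the exit velocity.

g_half = 9.81 / 2          # gravity half as strong as Earth's, m/s^2
a = 5.0                    # required upward acceleration, m/s^2
mass = 2.0 + 3.0           # drone + payload, kg
rho = 1.225                # air density, kg/m^3
A = math.pi * 0.15 ** 2    # disk area of a 0.3 m diameter propeller, m^2

# Static force balance (drag is zero at zero airspeed):
F = mass * (g_half + a)    # thrust needed, N

# Invert F = 0.5 * rho * A * Ve^2 for the required exit velocity:
Ve = math.sqrt(2 * F / (rho * A))

print(f"thrust required: {F:.1f} N, exit velocity: {Ve:.1f} m/s")
```

At nonzero airspeed the same balance would also need the drag term, and $F = 0.5 \cdot r \cdot A \cdot [V_e^2 - V_0^2]$ would be inverted with $V_0 \neq 0$.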
(I assume that when you say "drone" you really mean "quadcopter". The mainstream public uses the terms interchangeably, but it is terrible terminology, since there are many kinds of UAVs and drones and not all of them are hovering quadcopters. Many are fixed-wing.)
That gives you the thrust required for your propeller to accelerate at $5\mathrm{m/s^2}$ at some particular airspeed. | {
"domain": "physics.stackexchange",
"id": 81517,
"tags": "aerodynamics"
} |
Does AlphaZero use Q-Learning? | Question: I was reading the AlphaZero paper Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, and it seems they don't mention Q-Learning anywhere.
So does AZ use Q-Learning on the results of self-play or just a Supervised Learning?
If it's a Supervised Learning, then why is it said that AZ uses Reinforcement Learning? Is "reinforcement" part primarily a result of using Monte-Carlo Tree Search?
Answer: Note: you mentioned in the comments that you are reading the old, pre-print version of the paper describing AlphaZero on arXiv. My answer will be for the "official", peer-reviewed, more recent publication in Science (which nbro linked to in his comment). I'm not only focusing on the official version of the paper just because it is official, but also because I found it to be much better / more complete, and I would recommend reading it instead of the preprint if you are able to (I understand it's behind a paywall so it may be difficult for some to get access). The answer to this specific question would probably be identical for the arXiv version anyway.
No, AlphaZero does not use $Q$-learning.
The neural network in AlphaZero is trained to minimise the following loss function:
$$(z - \nu)^2 - \pi^{\top} \log \mathbf{p} + c \| \theta \|^2,$$
where:
$z \in \{-1, 0, +1\}$ is the real outcome observed in a game of self-play.
$\nu$ is a predicted outcome / value.
$\pi$ is a distribution over actions derived from the visit counts of MCTS.
$\mathbf{p}$ is the distribution over actions output by the network / the policy.
$c \| \theta \|^2$ is a regularisation term, not really interesting for this question/answer.
In this loss function, the term $(z - \nu)^2$ is exactly what we would have if we were performing plain Monte-Carlo updates, based on Monte-Carlo returns, in a traditional Reinforcement Learning setting (with function approximation). Note that there is no "bootstrapping", no combination of a single-step observed reward plus predicted future rewards, as we would have in $Q$-learning. We play out all the way until the end of a game (an "episode"), and use the final outcome $z$ for our updates to our value function. That's what makes this Monte-Carlo learning, rather than $Q$-learning.
As for the $- \pi^{\top} \log \mathbf{p}$ term, this is definitely not $Q$-learning (we're directly learning a policy in this part, not a value function).
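As a sketch, the loss can be evaluated directly for a toy three-action position (all numbers below are invented for illustration, not values from the paper):

```python
import math

# Hypothetical numbers for a three-action toy position.
z, v = 1.0, 0.8                # observed outcome and predicted value
pi = [0.7, 0.2, 0.1]           # MCTS visit-count distribution
p = [0.6, 0.3, 0.1]            # network's policy output
theta = [0.5, -0.3]            # stand-in for the network weights
c = 1e-4                       # regularisation strength

value_term = (z - v) ** 2                                     # Monte-Carlo value loss
policy_term = -sum(q * math.log(r) for q, r in zip(pi, p))    # cross-entropy to pi
reg_term = c * sum(w * w for w in theta)                      # L2 regularisation
loss = value_term + policy_term + reg_term
```

The value term is a plain squared Monte-Carlo error, and the policy term is the familiar cross-entropy from supervised classification, which is exactly why the training step looks supervised even though the targets come from self-play.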
If it's a Supervised Learning, then why is it said that AZ uses Reinforcement Learning? Is "reinforcement" part primarily a result of using Monte-Carlo Tree Search?
The policy learning looks very similar to Supervised Learning; indeed it's the cross-entropy loss which is frequently used in Supervised Learning (classification) settings. I'd still argue it is Reinforcement Learning, because the update targets are completely determined by self-play, by the agent's own experience. There are no training targets / labels that are provided externally (i.e. no learning from a database of human expert games, as was done in 2016 in AlphaGo). | {
"domain": "ai.stackexchange",
"id": 1221,
"tags": "reinforcement-learning, q-learning, monte-carlo-tree-search, supervised-learning, alphazero"
} |
Is there a "standard" Newton? | Question: Basic SI units have definitions through experiments that seems to imply a pretty obvious setup.
Is there a standard experiment for calibrating Newtons?
The definition is the force needed to cause a 1m/s² acceleration of a 1 kg mass. I can imagine obtaining a 1 kg mass.
However, it seems error-prone to try and directly measure acceleration. If you mark out 2 consecutive meters (points A, B, and C) and then measure the time the object crosses each, then you can say you have 1 Newton if $\tfrac{1}{T(C)-T(B)}=1+\tfrac{1}{T(B)-T(A)}$.
But there are several objections to be raised about defining a Newton in terms of a mass, two distances and three times.
Back to the question: How do you (theoretically) calibrate Newtons from first definitions? Thanks
Answer: Absolute accelerometers (or gravimeters) do measure the acceleration of a dropped weight. But instead of timing it between two points directly, the speed is measured by having it reflect a laser into an interferometer and timing the interference fringes. Commercial devices are available that can measure local acceleration to one part in $10^{-8}$.
Wikipedia article on Gravimeters
Once the local acceleration of gravity is known, force can be read from calibrated masses.
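A rough sketch of both steps, with made-up fringe rates rather than real instrument data: each interference fringe corresponds to the falling mirror moving half a laser wavelength, so the fringe rate gives the instantaneous speed, two speeds give $g$, and $g$ times a calibrated mass gives a force.

```python
# Illustrative numbers only, not a real gravimeter record.
lam = 632.8e-9                   # He-Ne laser wavelength, m

def speed_from_fringe_rate(f_fringe):
    # One fringe per half-wavelength of mirror travel.
    return 0.5 * lam * f_fringe  # m/s

# Hypothetical fringe rates measured 0.1 s apart during the drop:
t1, f1 = 0.10, 3.1008e6          # Hz
t2, f2 = 0.20, 6.2016e6          # Hz

v1 = speed_from_fringe_rate(f1)
v2 = speed_from_fringe_rate(f2)
g_local = (v2 - v1) / (t2 - t1)  # local acceleration, m/s^2

# With g known, a calibrated 1 kg mass realizes a force F = m * g:
force = 1.0 * g_local            # newtons
```

With these invented rates the sketch recovers a $g$ near 9.81 m/s²; a real instrument fits many fringe timings, not just two.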
Conference paper on Force and Torque measurements | {
"domain": "physics.stackexchange",
"id": 54333,
"tags": "newtonian-mechanics, forces, definition, si-units, metrology"
} |
Can there be Electron and/or Proton Stars? | Question:
What happens to all of the electrons and protons in the material of a neutron star?
Could there ever be an electron star or a proton star?
Answer: If a dense, spherical star were made of uniformly charged matter, there'd be an attractive gravitational force and a repulsive electrical force. These would balance for a very small net charge:
$$
dF = \frac1{r^2}\left( - GM_\text{inside} dm + \frac1{4\pi\epsilon_0}Q_\text{inside} dq
\right)
$$
which balances if
$$
\frac{dq}{dm} = \frac{Q_\text{inside}}{M_\text{inside}} = \sqrt{G\cdot 4\pi\epsilon_0} \approx 10^{-18} \frac{e}{\mathrm{GeV}/c^2}.
$$
This is approximately one extra fundamental charge per $10^{18}$ nucleons, or a million extra charges per mole — not much. Any more charge than this and the star would be unbound and fly apart.
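The quoted ratio can be checked numerically; this sketch uses standard textbook values for the constants:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9            # Coulomb constant 1/(4*pi*eps0), N m^2 C^-2
e = 1.602e-19            # elementary charge, C
GeV_per_c2 = 1.783e-27   # one GeV/c^2 expressed in kg

# sqrt(G * 4*pi*eps0) = sqrt(G / k_e), in coulombs per kilogram
dq_dm = math.sqrt(G / k_e)

# Convert to fundamental charges per GeV/c^2 (roughly per nucleon)
ratio = dq_dm * GeV_per_c2 / e
```

The result comes out close to $10^{-18}$ charges per GeV/c², matching the figure above.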
What actually happens is that the protons and electrons undergo electron capture to produce neutrons and electron-type neutrinos. | {
"domain": "physics.stackexchange",
"id": 17838,
"tags": "astrophysics, stars, neutron-stars"
} |
Custom caching class in Swift | Question: I'm using Swift 4.0 and I'm using UserDefaults in my project for storing Strings like login or something. I have a class:
class Cache: UserDefaultsManager {
static var currentProfileID: Int? {
get {
return value(forKey: Constants.Cache.CurrentProfileID) as? Int
}
set {
set(newValue, forKey: Constants.Cache.CurrentProfileID)
}
}
static var login: String? {
get {
return value(forKey: Constants.Cache.Login) as? String
}
set {
set(newValue, forKey: Constants.Cache.Login)
}
}
// ... and there are more of this kind
, where UserDefaultsManager looks like this:
class UserDefaultsManager {
private static var uds: UserDefaults {
return UserDefaults.standard
}
static func value(forKey: String) -> Any? {
if let encoded = uds.object(forKey: forKey) as? Data {
return NSKeyedUnarchiver.unarchiveObject(with: encoded)
}
return nil
}
static func set(_ data: Any?, forKey: String) {
let encodedData: Data = NSKeyedArchiver.archivedData(withRootObject: data as AnyObject)
uds.set(encodedData, forKey: forKey)
if (!uds.synchronize()) {
NSLog("Failed sync UserDefaults?")
}
}
}
Constants.Cache is a structure that holds my keys. My question is:
What would be a good approach to make my Cache class lighter with
less code?
I imagine the final product should look like:
class Cache {
static var hash: UDSObject<String>?
static var currentProfileID: UDSObject<Int>?
static var login: UDSObject<String>?
static var selectedWindowID: UDSObject<Int>?
}
Answer: Your Cache class cannot be lighter than it already is. The most you can do is introduce generics to remove the redundant casting every time you fetch from user defaults.
However, I see a major problem with the implementation here:
Is Cache also a UserDefaultsManager? Do you see Cache as something that extends the functionalities of the UserDefaultsManager class, or does your Cache use the functionalities of the UserDefaultsManager? I feel that composition is the way to go here than inheritance. I'd refactor this code to something like this:
protocol CacheHandling {
func value<T>(forKey: String) -> T?
func set<T>(_ data: T?, forKey: String)
}
extension CacheHandling {
func value<T>(forKey: String) -> T? {
let userDefaults = UserDefaults.standard
if let encoded = userDefaults.object(forKey: forKey) as? Data {
return NSKeyedUnarchiver.unarchiveObject(with: encoded) as? T
}
return nil
}
func set<T>(_ data: T?, forKey: String) {
let userDefaults = UserDefaults.standard
let encodedData: Data = NSKeyedArchiver.archivedData(withRootObject: data as AnyObject)
userDefaults.set(encodedData, forKey: forKey)
if (!userDefaults.synchronize()) {
NSLog("Failed sync UserDefaults?")
}
}
}
struct Cache : CacheHandling {
var currentProfileID: Int? {
get {
return value(forKey: Constants.Cache.CurrentProfileID)
}
set {
set(newValue, forKey: Constants.Cache.CurrentProfileID)
}
}
var login: String? {
get {
return value(forKey: Constants.Cache.Login)
}
set {
set(newValue, forKey: Constants.Cache.Login)
}
}
} | {
"domain": "codereview.stackexchange",
"id": 31279,
"tags": "swift, cache"
} |
Build order from a dependency graph (Python) | Question: Below is my attempt at the challenge found here. The challenge is "Find a build order given a list of projects and dependencies." Already given are the classes Node and Graph (which I can add if anyone needs them to help review my code), and also
class Dependency(object):
def __init__(self, node_key_before, node_key_after):
self.node_key_before = node_key_before
self.node_key_after = node_key_after
I came up with the below:
class BuildOrder(object):
def __init__(self, dependencies):
self.dependencies = dependencies
self.graph = Graph()
self._build_graph()
def _build_graph(self):
for dependency in self.dependencies:
self.graph.add_edge(dependency.node_key_before,
dependency.node_key_after)
def find_build_order(self):
processed_nodes = [n for n in self.graph.nodes.values() if n.incoming_edges == 0]
if not processed_nodes:
return None
num_processed = len(processed_nodes)
num_nodes = len(self.graph.nodes)
while num_processed < num_nodes:
for node in [n for n in processed_nodes if n.visit_state == State.unvisited]:
node.visit_state = State.visited
for neighbor in list(node.adj_nodes.values()):
node.remove_neighbor(neighbor)
processed_nodes += [n for n in self.graph.nodes.values() if n.incoming_edges == 0 and
n.visit_state == State.unvisited]
if len(processed_nodes) == num_processed:
return None
else:
num_processed = len(processed_nodes)
return processed_nodes
When I timed it against the solution given here, mine was ~4.5x faster. This was surprising, as throughout completing the challenges in the repo, I usually have approx same runtime as the given solution, or worse.
I'd like to know if the above is pythonic, is it readable and how would you improve it? Also, I have deliberately left out comments - which lines in this should probably have a comment to help others understand it? Thanks!
Answer: "Pythonicness"
The code is easy to read and understandable - a good start.
Remove (object) when declaring classes, it is not needed :)
Use type annotations!
from typing import List
def __init__(self, node_key_before: Node, node_key_after: Node):
def __init__(self, dependencies: List[Dependency]):
def find_build_order(self) -> List[Node]:
The statement:
if len(processed_nodes) == num_processed:
return None
else:
num_processed = len(processed_nodes)
Can be rewritten as:
if len(processed_nodes) == num_processed:
return None
num_processed = len(processed_nodes)
That is it for this section (for now - I might edit later).
Algorithm and runtime
Let's talk complexity. This algorithm runs in O(V * V + E), it can be seen from the example of the following graph:
a -> b -> c -> d -> ... (V times)
You will iterate over the V nodes, and for each of them you will search (twice, in fact) over all the V nodes. The reason we add + E is that you will also delete all edges of the graph.
Now, the correct algorithm can do this in O(V + E); the analysis is just a little bit more complex, but you can do it yourself.
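For comparison, a minimal O(V + E) version (Kahn's algorithm) can be sketched like this; for brevity it assumes integer node keys 0..V-1 and plain (before, after) pairs instead of the repo's Node/Graph/Dependency classes:

```python
from collections import deque

def build_order(num_nodes, dependencies):
    """Kahn's algorithm: O(V + E) topological sort.

    `dependencies` is a list of (before, after) pairs; returns a valid
    build order, or None if the graph contains a cycle.
    """
    adj = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for before, after in dependencies:
        adj[before].append(after)
        indegree[after] += 1

    # Start with every node that has no unresolved dependencies.
    queue = deque(n for n in range(num_nodes) if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in adj[node]:
            indegree[neighbor] -= 1
            if indegree[neighbor] == 0:
                queue.append(neighbor)
    # If some nodes were never freed, there was a cycle.
    return order if len(order) == num_nodes else None
```

Each node enters the queue at most once and each edge is examined once, which is where the O(V + E) bound comes from.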
Why is my algorithm faster in tests, then?
I cannot answer for sure, but two explanations are possible:
The graph in the test has many edges and E is close to V * V, so the worse time complexity is not relevant in practice.
After a quick look at the code in the answer, you can see it has more functions and more dictionaries; this is probably the reason your code is faster with smaller graphs. Caching probably plays a role here as well.
I might add some more stuff. This is it for now. | {
"domain": "codereview.stackexchange",
"id": 41797,
"tags": "python-3.x, graph"
} |
Cutting of DNA strand | Question: Would EcoRI cut the following sequence??
5'CTCGAGTTCGAG3'
3'GAGCTCAAGCTC5'
What is the logic of cutting? Please explain how restriction enzymes work.
Answer: No, it will not cut it, because EcoRI recognizes the palindromic sequence GAATTC, and it will cut at a particular site only if the whole recognition sequence is present. The duplex in the question contains no GAATTC on either strand, so EcoRI leaves it uncut. You can find this information for all restriction enzymes on the providers' sites, for example at the New England Biolabs site https://www.neb.com : just enter the name of the enzyme you are interested in. | {
"domain": "biology.stackexchange",
"id": 8382,
"tags": "molecular-biology, dna-sequencing, restriction-enzymes"
} |
Get raw network data of a ros topic | Question:
Hey guys,
I try to get the raw data of a topic in ros for some days now. My Problem is, that I receive a complex Message Type from a topic and I have to serialize its data to a <uint8_t>vector to fill the payload of a network package and send it to another application.
If I subscribe to the topic, I only get a ConstPtr& to the specific message type, and because it's a complex type which changes its size at runtime I can't simply use memcpy() to serialize the data. Of course I can serialize the data with some for loops, but this isn't a smart way to do it.
Instead of deserializing the data by the subscriber, is it possible to get the serialized raw frames directly from the topic?
So I have my subscriber:
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("topic", 1000, &callback);
and my callback function:
void Listener::callback(const SpecialDataType::ConstPtr& msg)
{ // should serialize data and forward it }
If it is possible to just get the raw eth frame of the topic, I'll only need to copy the raw frame into my new payload and send it.
Do you have an idea how to achieve a good solution to such a problem?
Originally posted by Felix394 on ROS Answers with karma: 5 on 2018-08-06
Post score: 0
Answer:
It's still unclear to me why you want to do this (but then again, perhaps that is not important): but if I understand you correctly, you could take a look at ShapeShifter to create a generic subscriber. That subscriber has access to the raw data bytes.
See also the ros_type_introspection package for some examples (such as How to create a generic Topic Subscriber) and convenience classes that make this sort of thing somewhat easier to do.
Edit:
If it is possible to just get the raw eth frame of the topic [..]
You're probably aware that TCP streams != ethernet frames, so what you suggest is not directly possible without some knowledge of how the network stack on the publishing side fragmented the stream.
Originally posted by gvdhoorn with karma: 86574 on 2018-08-07
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by gvdhoorn on 2018-08-07:
Note also that this question has been asked multiple times. Try a google for: generic subscriber site:answers.ros.org.
Comment by Felix394 on 2018-08-08:
It looks like a solution, which will fix my problem! Thanks a lot! :) | {
"domain": "robotics.stackexchange",
"id": 31472,
"tags": "ros-kinetic, rostopic"
} |
C++20 "FixedArray" Container (dynamically allocated fixed-size array) | Question: I've wrote a FixedArray to fit seamlessly (function-wise) with the default C++ containers, including working with algorithms/etc. It is a dynamically allocated, but fixed size array.
Code
#pragma once
#include <cstddef>
#include <initializer_list>
#include <algorithm>
#include <limits>
#include <concepts>
#include <iterator>
#include <stdexcept>
namespace Util {
template<typename C, typename T=typename C::value_type>
concept Container = std::input_iterator<typename C::iterator> && std::is_same_v<typename C::value_type,T> && requires(C c,const C cc,size_t i) {
{ c.begin() } -> std::same_as<typename C::iterator>;
{ c.end() } -> std::same_as<typename C::iterator>;
{ cc.begin() } -> std::same_as<typename C::const_iterator>;
{ cc.end() } -> std::same_as<typename C::const_iterator>;
{ c.size() } -> std::same_as<typename C::size_type>;
};
template<typename T, bool _reassignable=false, typename Alloc=std::allocator<T>>
class FixedArray {
public:
using value_type=T;
using allocator_type=Alloc;
using size_type=size_t;
using difference_type=std::ptrdiff_t;
using reference=T&;
using pointer=T*;
using iterator=T*;
using reverse_iterator=std::reverse_iterator<T*>;
using const_reference=const T&;
using const_pointer=const T*;
using const_iterator=const T*;
using const_reverse_iterator=std::reverse_iterator<const T*>;
private:
value_type * _data;
size_type _size;
[[no_unique_address]] Alloc allocator;
void _cleanup() { // destroy all objects, deallocate all memory, leaves container in a valid empty state
if(_data) {
for(size_t i=0;i<_size;i++) {
_data[i].~value_type();
}
allocator.deallocate(_data,_size);
_data=nullptr;
}
_size=0;
}
template<typename C>
void _container_assign(const C &container) { // allocate memory and copy contents from generic container of type C
if(container.size()==0) {
_size=0;
_data=nullptr;
} else {
_size=container.size();
_data=allocator.allocate(_size);
typename C::const_iterator it=container.begin();
for(size_t i=0;i<_size;i++,it++){
new(_data+i) value_type(*it);
}
}
}
public:
FixedArray() : _data(nullptr), _size(0) {
}
FixedArray(std::nullptr_t) : _data(nullptr), _size(0) {
}
FixedArray(std::initializer_list<T> data) requires std::is_copy_constructible_v<value_type> {
_container_assign(data);
}
template<typename U>
FixedArray(std::initializer_list<U> data) requires (!std::is_same_v<T,U>) && requires (U x){ T(x); }
{
_container_assign(data);
}
FixedArray(const FixedArray & other) requires std::is_copy_constructible_v<value_type> {
_container_assign(other);
}
FixedArray(FixedArray && other) : _data(other._data), _size(other._size) {
other._data=nullptr;
other._size=0;
}
template<typename C>
FixedArray(const C& container) requires Container<C,value_type> && std::is_copy_constructible_v<value_type> {
_container_assign(container);
}
template<typename C>
FixedArray(const C& container) requires Container<C> && (!std::is_same_v<T,typename C::value_type>) && requires (typename C::value_type v) { T(v); }
{
_container_assign(container);
}
FixedArray(size_t n) requires std::is_default_constructible_v<value_type> : _data(nullptr), _size(n) {
if(_size>0){
_data=allocator.allocate(_size);
for(size_t i=0;i<_size;i++){
new(_data+i) value_type();
}
}
}
~FixedArray() {
_cleanup();
}
// --- optional reassignment methods ---
void operator=(std::nullptr_t) requires _reassignable {
_cleanup();
}
void clear() requires _reassignable {
_cleanup();
}
FixedArray & operator=(std::initializer_list<T> data) requires _reassignable {
_cleanup();
_container_assign(data);
return *this;
}
template<typename U>
FixedArray & operator=(std::initializer_list<U> data) requires _reassignable && requires (U x){ T(x); }
{
_cleanup();
_container_assign(data);
return *this;
}
FixedArray & operator=(const FixedArray & other) requires _reassignable {
_cleanup();
_container_assign(other);
return *this;
}
FixedArray & operator=(FixedArray && other) requires _reassignable {
_cleanup();
_data=other._data;
_size=other._size;
other._data=nullptr;
other._size=0;
return *this;
}
template<typename C>
FixedArray & operator=(const C& container) requires _reassignable && Container<C,value_type> {
_cleanup();
_container_assign(container);
return *this;
}
template<typename C>
FixedArray & operator=(const C& container) requires _reassignable && Container<C> && (!std::is_same_v<T,typename C::value_type>) && requires (typename C::value_type v) { T(v); }
{
_cleanup();
_container_assign(container);
return *this;
}
void swap(FixedArray & other) requires _reassignable {
value_type * temp_data=_data;
size_t temp_size=_size;
_data=other._data;
_size=other._size;
other._data=temp_data;
other._size=temp_size;
}
static void swap(FixedArray & lhs,FixedArray & rhs) requires _reassignable {
lhs.swap(rhs);
}
// --- container boilerplate after this ---
template<typename C>
bool operator==(const C& container) const {
return _size==container.size()&&std::equal(container.begin(),container.end(),begin(),end());
}
template<typename C>
bool operator!=(const C& container) const {
return _size!=container.size()||!std::equal(container.begin(),container.end(),begin(),end());
}
size_type size() const noexcept {
return _size;
}
static size_type max_size() noexcept {
return (std::numeric_limits<size_type>::max()/sizeof(value_type))-1;
}
size_type empty() const noexcept {
return _size==0;
}
iterator begin() noexcept {
return _data;
}
iterator end() noexcept {
return _data+_size;
}
const_iterator begin() const noexcept {
return _data;
}
const_iterator end() const noexcept {
return _data+_size;
}
const_iterator cbegin() const noexcept {
return _data;
}
const_iterator cend() const noexcept {
return _data+_size;
}
reverse_iterator rbegin() noexcept {
return std::make_reverse_iterator(end());
}
reverse_iterator rend() noexcept {
return std::make_reverse_iterator(begin());
}
const_reverse_iterator rbegin() const noexcept {
return std::make_reverse_iterator(end());
}
const_reverse_iterator rend() const noexcept {
return std::make_reverse_iterator(begin());
}
const_reverse_iterator crbegin() const noexcept {
return std::make_reverse_iterator(end());
}
const_reverse_iterator crend() const noexcept {
return std::make_reverse_iterator(begin());
}
pointer data() noexcept {
return _data;
}
const_pointer data() const noexcept {
return _data;
}
reference front() noexcept {
return _data[0];
}
const_reference front() const noexcept {
return _data[0];
}
reference back() noexcept {
return _data[_size-1];
}
const_reference back() const noexcept {
return _data[_size-1];
}
reference operator[](size_t i) noexcept {
return _data[i];
}
const_reference operator[](size_t i) const noexcept {
return _data[i];
}
reference at(size_t i) {
if(i>=_size) throw std::out_of_range("Index "+std::to_string(i)+" is out of range for FixedArray of size "+std::to_string(_size));
return _data[i];
}
const_reference at(size_t i) const {
if(i>=_size) throw std::out_of_range("Index "+std::to_string(i)+" is out of range for FixedArray of size "+std::to_string(_size));
return _data[i];
}
};
}
After testing it, it seems to work without any problems, but a second pair of eyes could help find anything i might have overlooked.
Answer: Consider not using this class
Your class basically reimplements almost all of std::vector, except the functions that resized the container. The only feature I can see that it has over regular std::vector is that you can easily assign another container to it without using a pair of iterators. In this case, I would recommend considering not using this class, but just a regular std::vector, so that you don't have to maintain this code.
Alternatively, you could perhaps consider deriving your class from std::vector, but delete all the functions that resize. For example:
template<typename T, typename Alloc = std::allocator<T>>
class FixedArray: public std::vector<T, Alloc> {
public:
// Delete functions you don't want:
void resize(size_t) = delete;
...
// Pull in the constructors of the parent class:
using std::vector<T, Alloc>::vector;
// Add more constructors/functions if needed:
template<typename C>
FixedArray(const C& container) : std::vector<T, Alloc>(std::begin(container), std::end(container)) {}
...
};
Inheriting from a container class this way has its own drawbacks, so I wouldn't recommend this either, but I do think this approach will require much less code to be written.
For the rest of the review I'm assuming you do want to use your own class as it is.
Make better use of existing concepts
Instead of defining your own concept Container, make use of the concepts from the Ranges library, like std::ranges::range, or even better the more specific ones like std::ranges::input_range.
Also, instead of writing requires (typename C::value_type v) { T(v); }, consider using requires std::is_constructible_v<T, typename C::value_type>.
Use concepts directly inside a template parameter list
A nice thing about concepts is that you can use them instead of typename in a template parameter list. This avoids some typing, and makes the code a little easier to read. For example, instead of:
template<typename C>
FixedArray(const C& container) requires Container<C> && (!std::is_same_v<T,typename C::value_type>) && requires (typename C::value_type v) { T(v); }
{
_container_assign(container);
}
You can write:
template<std::ranges::input_range C>
FixedArray(const C& container) requires (!std::is_same_v<T, typename C::value_type> && std::is_constructible_v<T, typename C::value_type>)
{
_container_assign(container);
}
Use default member value initialization
You can avoid some typing by using member value initialization:
pointer _data{};
size_type _size{};
This avoids repeating the default initialization in several of the constructors. In fact, this way the default constructor can really be made default:
FixedArray() = default;
Avoid code duplication
There are several ways you can reduce the amount of code you have written:
Reuse your own functions
You can make use of your own member functions and iterators. For example, you can use range-for to loop over the elements of _data:
void _cleanup() { // destroy all objects, deallocate all memory, leaves container in a valid empty state
if(_data) {
for(auto &el: *this) {
el.~value_type();
}
allocator.deallocate(_data, _size);
_data = nullptr;
}
_size = 0;
}
Also, when moving data from another container, make use of your own swap() function. Combined with the default member initialization, this removes a lot of lines, for example the move constructor becomes:
FixedArray(FixedArray && other) {
swap(other);
}
Merge (template) overloads where possible
There are several places where you have multiple overloads for dealing with other containers of the same value_type or of another, when they can be merged into one. For example, the constructors taking initializer lists can be merged into this single one:
template<typename U>
FixedArray(std::initializer_list<U> data) requires std::is_constructible_v<value_type, U>
{
_container_assign(data);
}
Since this will also handle the case of a std::initializer_list<value_type> just fine. The same goes for the other cases where you used !std::is_same_v<>.
Define operator!= in terms of operator==
It's always best to define one of those operators in terms of the other, so you cannot have a small mistake give subtly different behaviour. So:
template<std::ranges::input_range C>
bool operator!=(const C& container) const {
return !operator==(container);
}
Be consistent using std::size(), std::begin() and std::end()
To ensure your class works with all possible containers, don't use member functions .size(), .begin() and .end() directly, but use the free-standing std::size() and related functions. For example:
template<std::ranges::input_range C>
bool operator==(const C& container) const {
return std::equal(std::begin(container), std::end(container), begin(), end());
}
Note that you also don't need to use std::size() in this example, std::equal() will already check whether the sizes match.
Use your own type aliases consistently
If you define an alias, you should use it yourself as well. This ensures that if you need to change a type, you only need to do it on one line. So replace size_t with size_type everywhere, use value_type instead of T in a few places, and so on.
Use consistent code formatting
Your code does not use consistent formatting, in some places there are spaces around operators, in other places there are none. I recommend using a code formatting tool to do this for you, for example Artistic Style or ClangFormat. | {
"domain": "codereview.stackexchange",
"id": 45099,
"tags": "c++, array, collections, c++20"
} |
Algorithm for generating random incrementing numbers up to a limit | Question: I'm trying to write a code to generate incremental sequences of numbers such as:
0 + 1 + 3 + 5 + 1 = 9
0 + 5 + 1 + 1 + 1 = 8
0 + 1 + 1 + 2 + 1 = 5
I have 3 constraints:
1) I need to have limited number of addends (here it is =5)
2) final sum must be smaller than certain limit ( here limit is <9)
3) addends must be random
As for now, I generate sequences randomly and select only suitable ones. For 2-digit numbers and for long sequences (>8) my algorithm takes significant time.
Is there a better algorithm for such problem?
As least could you tell me, what branch of CS is studying such problems?
UPDATE (algorithm):
0) array = [0,]; // initial array
1) if sum(array) > 99, go to 6)
2) generate random number in [1..99], let's say rand = 24
3) rand = array[-1] + rand // add random number to last value of array
4) array.push(rand) // add the random number to array
5) goto 1)
6) if length(array) < 5, goto 0) // 5 is desired sequence length
Answer: A simple remedy
The reason why your algorithm produces desired sequences in a very low rate might be that you are generating random numbers that are so large on average that it is not easy for the sum of them to be smaller than the limit 100. A simple remedy is to change line 2) of your pseudocode to
2) generate random number between 1 and 55, let's say rand = 24
Here the upper limit, 55 is a number near 48 = 99//4 *2, where 4 = 5 - 1 is the number of numbers to be generated (it seems you require the first number must be 0) and 99//4 is about the average of each number. Then the algorithm will be much more likely succeed without going back to line 0). It is possible that the upper limit should be slightly bigger than 2 times the average to approximate the maximum rate of production. I profiled a few times so as to determine that 55 is the fastest number. You can experiment to find what is the best limit.
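A sketch of the amended algorithm in Python; the limit of 100, the sequence length of 5, and the tuned per-addend bound of 55 follow the discussion above:

```python
import random

def random_sequence(length=5, limit=100, upper=55):
    """Rejection sampling: first addend is 0, the rest are drawn
    uniformly from [1, upper]; retry until the total is under `limit`."""
    while True:
        seq = [0] + [random.randint(1, upper) for _ in range(length - 1)]
        if sum(seq) < limit:
            return seq
```

Lowering `upper` raises the acceptance rate per attempt but narrows how large any single addend can be, so there is a sweet spot worth measuring.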
As successive differences
Here is another way to generate the desired sequences with length $m$ and sum less than $n$. It works well if $n$ is not large, in which case it does not take long to complete the sorting step.
Generate $m$ random numbers between 0 inclusive and $n$ exclusive.
Sort them so we obtain $a[0]\le a[1]\le\cdots\le a[m-1]$.
Return sequence $b[0], b[1], \cdots, b[m-1]$, where $b[0]=a[0]$ and $b[i]=a[i]-a[i-1]$ for all $i\ge1$.
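In Python, this successive-differences method might look like the following (non-negative integer addends assumed; the sum of the returned numbers equals the largest draw, so it is always below $n$):

```python
import random

def random_sum_bounded(m, n):
    """m non-negative integers whose sum is < n, via sorted differences."""
    a = sorted(random.randrange(n) for _ in range(m))
    # Successive differences of a sorted sample telescope back to a[m-1] < n.
    return [a[0]] + [a[i] - a[i - 1] for i in range(1, m)]
```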
Scaling technique, the fastest way
Generate m random numbers between 0 inclusive and 1, a[0], a[1], ..., a[m-1].
Let s be the sum of them. Compute the scaling factor f = u/s, where u is the given upper limit of the sum.
Let b[i] be a[i] multiplied by f, chopping off the fraction part.
Return sequence b[0], b[1], ..., b[m-1].
You can also try generating m + k of them, following by the same scaling step, but returning only the first m numbers. You can either fix k or even let k be random.
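A sketch of the scaling technique; truncating the fraction part keeps the integer sum at or below the limit:

```python
import random

def random_sum_scaled(m, u):
    """m non-negative integers whose sum is at most u, by scaling."""
    a = [random.random() for _ in range(m)]
    f = u / sum(a)                    # scale so the real-valued sum is exactly u
    return [int(x * f) for x in a]    # chopping fractions can only shrink the sum
```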
Note that different algorithms generate sequences with different "randomness", which may or may not be significant for your purpose.
You can search for how to generate random sequences with given sum for more methods. If you prefer sum to be less than a given limit, just generate a longer sequence with a sum that is bigger than the given limit. Then take the first m numbers. | {
"domain": "cs.stackexchange",
"id": 14176,
"tags": "algorithms"
} |
Rotate the sub-list of a linked list from position M to N to the right by K places | Question: Given a linked list and two positions m and n. The task is to rotate the sub-list from position m to n, to the right by k places. (link to GeeksforGeeks)
Examples
Input:
list 1->2->3->4->5->6, m = 2, n = 5, k = 2
Output: 1->4->5->2->3->6
Explanation: Rotate the sub-list 2 3 4 5 towards right 2 times
then the modified list is: 1 4 5 2 3 6
Input: list = 20->45->32->34->22->28, m = 3, n = 6, k = 3
Output: 20->45->34->22->28->32
Explanation: Rotate the sub-list 32 34 22 28 towards right 3 times
then the modified list is: 20 45 34 22 28 32
// Rotate the sub-list of a linked list from position M to N to the right by K places
#include <iostream>
using namespace std;
// structure of the node in the linked list
struct Node {
int data;
Node* next;
};
// inserting at the beggining
void push( Node **head, int data ) {
Node *newNode = new Node();
newNode->data = data;
if((*head) == NULL) {
(*head) = newNode;
return;
}
Node *temp = (*head);
while(temp->next != NULL)
temp = temp->next;
temp->next = newNode;
newNode->next = NULL;
}
// display
void display(Node *head) {
cout << "The list : \t";
while (head != NULL) {
cout << head->data << " ";
head = head->next;
}
}
// Rotate the linked list by k
void rotate(Node *head, int m, int n, int k) {
Node *temp1 = head ;
Node *temp3 = NULL, *ref = NULL, *last = NULL;
for (int i = 0; i < m - 1 ; i++)
temp1 = temp1->next;
ref = temp1->next;
temp3 = ref ;
for (int i = 0; i < n - m; i++)
temp3 = temp3->next;
last = temp3->next;
temp3->next = ref ;
k = k%(n - m + 1);
for(int i = 0; i < k; i++) {
temp3 = ref;
ref = ref->next;
}
temp3->next = last;
// if the starting point is not 1st element
if (m == 0)
head = ref;
else
temp1->next = ref;
display(head);
}
// Driver function
int main() {
Node *head = NULL;
push(&head, 1);
push(&head, 2);
push(&head, 3);
push(&head, 4);
push(&head, 5);
push(&head, 6);
push(&head, 7);
display(head);
cout << "\n\n";
int m, n, k;
cout << "m, n, k = \t";
cin >> m >> n >> k ;
rotate(head, m - 1, n - 1, k);
return 0;
}
Answer: I see a number of things that may help you improve your program.
Don't abuse using namespace std
Putting using namespace std at the top of every program is a bad habit that you'd do well to avoid.
Omit return 0
When a C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no reason to put return 0; explicitly at the end of main.
Make sure the comments don't mislead
The code currently includes this comment and function:
// inserting at the beggining
void push( Node **head, int data ) {
However, that comment is false (and spelled incorrectly). In fact, the data is appended to the end of the linked list. For that reason, I'd suggest changing the name to append.
Use nullptr rather than NULL
Modern C++ uses nullptr rather than NULL. See this answer for why and how it's useful.
Return something useful from functions
It is not very useful to have every function return void. Instead, I'd suggest that push (now renamed to append per previous suggestion) could return a pointer to the head of the list. Here's one way to write that:
// append data to end of linked list
Node* append(Node *head, int data) {
auto newNode = new Node{data, nullptr};
if (head == nullptr) {
return newNode;
}
auto temp{head};
while (temp->next) {
temp = temp->next;
}
temp->next = newNode;
return head;
}
Don't leak memory
This code calls new but never delete. This means that the routines are leaking memory. It would be much better to get into the habit of using delete for each call to new and then assuring that you don't leak memory.
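As a hedged sketch of such a cleanup (the function name `freeList` and the tiny stand-in `Node` here are mine, not part of the reviewed program), releasing the whole list before `main` returns might look like:

```cpp
// Stand-in for the Node struct in the reviewed code.
struct Node {
    int data;
    Node* next;
};

// Walk the list and delete every node; call this once the list
// is no longer needed (e.g. just before main returns).
void freeList(Node* head) {
    while (head != nullptr) {
        Node* next = head->next;  // save the link before deleting
        delete head;
        head = next;
    }
}
```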
Use objects
The Node object is a decent start, but I'd recommend an actual linked list object to better take care of memory management and node initialization.
Sanitize user input
If the user enters a negative number, a number larger than the length of the list, or numbers that are not in the right order, bad things happen in the current program. In general, it's better to be very wary of user input and to test for and handle any bad input.
Fix the bug
If the subsequence includes the first node, the output is incorrect. That's a bug.
Rethink the problem
There are two essential parts to the algorithm. First, we identify the subsequence and then we do the rotation. With some thought, you can figure out that either 0 or 3 next links will need to be changed. We can immediately understand that if \$k \mod (n-m+1) = 0\$ then no links need to change and we're done. Otherwise 3 links need to change. Let's number each of the nodes, starting from 1. So the first node is \$N[1]\$, and the second is \$N[2]\$, etc.
Now the first node that needs to be altered is \$N[m-1]\$. It needs to point to \$N[m + k]\$.
Next \$N[n]\$ (the last node of the subset) points to \$N[m]\$ (the first of the subset).
Then \$N[m + k - 1]\$ must point to \$N[n+1]\$
With that in mind, it should be apparent that only a single pass through the data structure is needed with only two temporary pointers. I'll leave it to you to write the code for that. | {
"domain": "codereview.stackexchange",
"id": 32982,
"tags": "c++, programming-challenge, linked-list"
} |
Circular doubly linked list with templates | Question: I tried to write some code namely "Circular doubly linked list" and I would like to use templates to do this.
This is the first time I use templates and throws some exceptions.
I would like to know if sth is badly writen, sth must be changed or I need to rewrite everything.
Last but not least, what is the best framework to unit test in C++ (and mostly without too much effort to implement in IDE, I use CLion) like jUnit in Java?
ListNode class
#ifndef CYCLIC_LIST_LISTNODE_H
#define CYCLIC_LIST_LISTNODE_H
template<typename DT>
class List;
template<typename DT>
class ListNode{
private:
DT dataItem;
ListNode *prior, *next;
ListNode(const DT & data, ListNode *prior = nullptr, ListNode *next = nullptr)
: dataItem(data), prior(prior), next(next) {}
ListNode() : dataItem(), prior(nullptr), next(nullptr) {}
friend class List<DT>;
};
#endif //CYCLIC_LIST_LISTNODE_H
List class
#ifndef CYCLIC_LIST_LIST_H
#define CYCLIC_LIST_LIST_H
#include "ListNode.h"
#include <iostream>
#include <stdexcept>
#include <exception>
template<typename DT>
class ListNode;
template<typename DT>
class List{
private:
ListNode<DT> *head;
ListNode<DT> *cursor;
public:
List() : head(nullptr), cursor(nullptr) {}
~List();
void insert(const DT &newDataItem) throw(std::bad_alloc);
void remove() throw(std::logic_error);
void replace(const DT & newDataItem) throw(std::logic_error);
void clear();
bool isEmpty() const;
void gotoBeginning() throw(std::logic_error);
void gotoEnd() throw(std::logic_error);
void gotoNext() throw(std::logic_error);
void gotoPrior() throw(std::logic_error);
DT getCursor() const throw(std::logic_error);
void showStructure() const;
};
template<typename DT>
void List<DT>::replace(const DT & newDataItem) throw(std::logic_error){
if(cursor == nullptr || isEmpty())
throw std::logic_error("Empty list!");
cursor->dataItem = newDataItem;
}
template<typename DT>
void List<DT>::clear() {
if(!isEmpty())
{
ListNode<DT>* cursor2;
cursor2 = head->prior;
cursor = cursor2->prior;
while(cursor != head)
{
delete cursor2;
cursor2 = cursor;
cursor = cursor->next;
}
delete cursor;
}
head = nullptr;
cursor = nullptr;
}
template<typename DT>
void List<DT>::remove() throw(std::logic_error) {
if (isEmpty())
throw std::logic_error("Empty list!");
ListNode<DT> * temp;
temp = cursor->prior;
(cursor->prior)->next = cursor->next;
(cursor->next)->prior = cursor->prior;
if(cursor == head)
{
if(cursor->next == head)
head = nullptr;
else
head = cursor->next;
}
delete cursor;
cursor = temp;
}
template<typename DT>
void List<DT>::insert(const DT &newDataItem) throw(std::bad_alloc) {
try {
ListNode<DT> *newNode = new ListNode<DT>(newDataItem);
if (isEmpty()) {
head = newNode;
cursor = newNode;
newNode->next = head;
newNode->prior = head;
} else {
newNode->next = cursor->next;
newNode->prior = cursor;
cursor->next = newNode;
(newNode->next)->prior = newNode;
gotoNext();
}
}
catch (std::bad_alloc) {
throw std::bad_alloc();
}
}
template<typename DT>
bool List<DT>::isEmpty() const {
return head == nullptr;
}
template<typename DT>
void List<DT>::gotoBeginning() throw(std::logic_error) {
if (isEmpty())
throw std::logic_error("Empty list!");
cursor = head;
}
template<typename DT>
void List<DT>::gotoEnd() throw(std::logic_error){
if (isEmpty())
throw std::logic_error("Empty list!");
cursor = head->prior;
}
template<typename DT>
void List<DT>::gotoNext() throw(std::logic_error) {
if (isEmpty())
throw std::logic_error("Empty list");
cursor = cursor->next;
}
template<typename DT>
void List<DT>::gotoPrior() throw(std::logic_error) {
if (isEmpty())
throw std::logic_error("Empty list");
cursor = cursor->prior;
}
template<typename DT>
DT List<DT>::getCursor() const throw(std::logic_error) {
if (isEmpty())
throw std::logic_error("Empty list");
return cursor->dataItem;
}
template< class DT>
void List<DT>::showStructure () const
{
ListNode<DT> *temp;
for(temp = head; temp->next != head; temp = temp->next)
std::cout << temp->dataItem << " ";
std::cout<< temp->dataItem << std::endl;
}
template <typename DT >
List<DT>::~List()
{
clear();
}
#endif //CYCLIC_LIST_LIST_H
Answer: Review
ListNode:
Are you sure you want this constructor?
ListNode() : dataItem(), prior(nullptr), next(nullptr) {}
The problem I see is that it default constructs the data object. Not all data objects can be default constructed. The other question I would ask is: Can you create a node with no item in it?
In C++11 we introduced move semantics so I would expect to see a move constructor on this class.
In C++11 we introduced perfect forwarding. So, though not required, it is sometimes nice to build an item (of type DT) in place from the constructor's parameters. You can achieve this using perfect forwarding. If you look at std::vector you can see this in practice in the emplace_back() method.
List
You don't need to forward declare a class if you have already included the header file:
#include "ListNode.h"
// This is not needed.
template<typename DT>
class ListNode;
Design
The cursor concept. I see your logic.
ListNode<DT> *cursor;
void gotoBeginning() throw(std::logic_error);
void gotoEnd() throw(std::logic_error);
void gotoNext() throw(std::logic_error);
void gotoPrior() throw(std::logic_error);
DT getCursor() const throw(std::logic_error);
But this means all code is using the same cursor. This makes threaded code impossible and even single threaded code hard if you are using the list in more than one place.
In C++ we tend to use the concept of an iterator. The iterator is an object itself that tracks the current position in the list. It can be copied so you can save a position; incremented and decremented to move along the container; and de-referenced to get the element value. It is then used in conjunction with the methods of the container for insertion/deletion. But the important point: it is a separate object.
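As an illustrative sketch only (not a drop-in addition to the reviewed class, whose node members are private), a minimal bidirectional iterator over such a circular list might look like:

```cpp
// Stand-in for the reviewed ListNode (members public here only so
// the sketch is self-contained).
template <typename DT>
struct ListNode {
    DT dataItem;
    ListNode* prior;
    ListNode* next;
};

// Each iterator carries its own position, so several pieces of code
// can walk the same list independently of one another.
template <typename DT>
class ListIterator {
public:
    explicit ListIterator(ListNode<DT>* node) : node(node) {}

    DT& operator*() const { return node->dataItem; }

    ListIterator& operator++() { node = node->next;  return *this; }
    ListIterator& operator--() { node = node->prior; return *this; }

    bool operator==(const ListIterator& o) const { return node == o.node; }
    bool operator!=(const ListIterator& o) const { return node != o.node; }

private:
    ListNode<DT>* node;   // the saved position
};
```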
Exception Specifications
Most languages have decided that exception specifications are a bad idea. In Java they devolve (over time) to throw Exception and in C++ the only useful one we found was the specification that said this function does not throw.
void insert(const DT &newDataItem) throw(std::bad_alloc);
void remove() throw(std::logic_error);
void replace(const DT & newDataItem) throw(std::logic_error);
void gotoBeginning() throw(std::logic_error);
void gotoEnd() throw(std::logic_error);
void gotoNext() throw(std::logic_error);
void gotoPrior() throw(std::logic_error);
DT getCursor() const throw(std::logic_error);
As a result, C++11 deprecated the throw specifier on functions/methods and introduced the noexcept specifier. So either your function is exception safe and throws nothing, or it can potentially throw.
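A small sketch of the difference (the class here is a stand-in, not the reviewed List): the dynamic `throw(...)` specifier is simply dropped, and only members that genuinely cannot throw get marked noexcept.

```cpp
#include <stdexcept>

// Stand-in class showing the C++11 style.
struct ListSketch {
    void gotoNext() {                   // may throw; no specifier needed
        if (empty) throw std::logic_error("Empty list");
    }

    bool isEmpty() const noexcept {     // promised never to throw
        return empty;
    }

    bool empty = true;
};
```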
Sentinel
Your code uses a nullptr head to specify an empty list. The problem is that your code basically has two versions of every function: code to handle the empty-list situation and code to handle the non-empty list.
There is a technique that uses a sentinel object. This is a fake ListNode object (with no data). That is always in the list (the sentinel). This simplifies the code tremendously as you no longer need to check for the empty list when removing or adding elements.
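A hedged sketch of the idea (a stand-in type, not a rewrite of the reviewed class): because the sentinel always exists and initially points at itself, insertion and traversal never need an empty-list branch or a nullptr check.

```cpp
// Circular doubly linked list built around a sentinel node. The
// sentinel carries no useful data and is always present, so "empty"
// simply means the sentinel points at itself.
struct SNode {
    int data;
    SNode* prior;
    SNode* next;
};

struct SentinelList {
    SNode sentinel;

    SentinelList() { sentinel.prior = sentinel.next = &sentinel; }

    bool isEmpty() const { return sentinel.next == &sentinel; }

    // Insert before the sentinel (i.e. at the end). The same four
    // pointer updates work whether or not the list is empty.
    void pushBack(int value) {
        SNode* n = new SNode{value, sentinel.prior, &sentinel};
        sentinel.prior->next = n;
        sentinel.prior = n;
    }

    ~SentinelList() {
        SNode* cur = sentinel.next;
        while (cur != &sentinel) {
            SNode* next = cur->next;
            delete cur;
            cur = next;
        }
    }
};
```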
Move Semantics
Your list does not support move semantics or emplace semantics. This can make the code very inefficient.
// COPY (current implementation)
template<typename DT>
void List<DT>::replace(const DT & newDataItem) throw(std::logic_error)
{
if(cursor == nullptr || isEmpty())
throw std::logic_error("Empty list!");
cursor->dataItem = newDataItem;
}
// Move
template<typename DT>
void List<DT>::replace(const DT&& newDataItem) throw(std::logic_error)
{
if(cursor == nullptr || isEmpty())
throw std::logic_error("Empty list!");
// Uses the objects move assignment operator
// to move the object over the current object.
// For large objects this can be significantly more efficient).
cursor->dataItem = std::move(newDataItem);
}
Bug
template<typename DT>
void List<DT>::clear() {
if(!isEmpty())
{
ListNode<DT>* cursor2;
cursor2 = head->prior; // I believe this is a bug.
// As a result you will probably only
// remove one item from the list.
Bug
template<typename DT>
void List<DT>::remove() throw(std::logic_error) {
// STUFF
delete cursor;
cursor = temp; // This is a bug if head is nullptr.
}
Biggest Bug
You don't implement the rule of three/five.
Your class manages resources (hence the destructor). But you have not defined the copy/move semantics of the class. As a result the compiler has provided default implementations of each (which don't work when you are managing resources). Hence the rule.
{
List<int> l1;
l1.insert(5);
List<int> l2(l1); // Makes a copy of l1
}
// Both objects are destroyed.
// But because you do not implement copy semantics the
// compiler generated version is used which does a shallow copy
// and thus both objects have the same pointer. This may not seem
// like a problem but both destructors are going to use that pointer
// to head to delete the list and as a result the list will be
// deleted twice. | {
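To make the rule concrete, here is a sketch with a stand-in resource-owning class (not the reviewed List itself): the copy operations are deleted, so the double-delete above cannot even compile, and the move operations transfer ownership so the resource is freed exactly once.

```cpp
#include <utility>

// Rule-of-five sketch. The int* is a stand-in for the Node* head
// that the real class owns.
class OwningList {
public:
    OwningList() : head(nullptr) {}
    explicit OwningList(int* p) : head(p) {}
    ~OwningList() { delete head; }                      // 1. destructor

    OwningList(const OwningList&) = delete;             // 2. no shallow copy
    OwningList& operator=(const OwningList&) = delete;  // 3.

    OwningList(OwningList&& other) noexcept             // 4. move: steal...
        : head(other.head) {
        other.head = nullptr;                           // ...and disarm source
    }
    OwningList& operator=(OwningList&& other) noexcept {// 5.
        std::swap(head, other.head);
        return *this;
    }

    bool empty() const noexcept { return head == nullptr; }

private:
    int* head;
};
```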
"domain": "codereview.stackexchange",
"id": 23602,
"tags": "c++, linked-list, error-handling, template, circular-list"
} |
Project RGB with Switches | Question: I am working on a project in which I need to display different colors on an RGB LED. I am using PWM to drive the different colors on the LED. The PIC I am working with is a PIC24FJ64GA004. The basic concept of this project is to use a switch to control the colors.
The colors on the RGB LED will be set according to the day and month of the year. For that I am using a 7-segment LED display with a switch to count days and months.
The problem is that I have only one switch on my board. I have designed the hardware, and its bits have been tested as well, so the hardware works fine.
At the moment my problem is the PIC code.
I am stuck on the switch handling now. I have one switch on the board, and I need to display how many times the switch is pressed on the 7-segment display. The problem is that I am new to PIC code and confused as well. First the switch status will be checked. If the switch is pressed for two seconds it will go to days mode; otherwise it will stay in months mode. It will then display the month or day count accordingly. I have pasted my code here; please give me some positive advice.
I would like to know whether this code is going in the right direction or not.
void switch_function(void)
{
if (PORTAbits.RA4==1) // is SW1 pressed?
{
IEC0bits.T1IE = 1; // Enable Output Compare interrupts
T1CONbits.TON = 1; // Start Timer1 with assumed settings
if (modecounter==0) // Checking status of month
{
if (PORTAbits.RA4==1)
{
counter++;
if (counter== 12)
{
counter = 0;
}
}
}
else if(modecounter ==1) // Checking status of days
{
if (PORTAbits.RA4==1)
{
counter++;
if (counter== 32)
{
counter = 0 ;
}
}
}
else
{
}
}
else
{
counter =0;
}
return ;
}
//***** Timer1 interrupt *****//
void __attribute__((interrupt, auto_psv)) _T1Interrupt(void)
{
timer_counter++;
if (timer_counter== 2)
{
if (PORTAbits.RA4==1)
{ // is SW1 still has pressed status after 2sec delay?
modecounter = 1; // switch to days
}
else
{
modecounter = 0; // switch to months
}
}
else
{
}
IFS0bits.T1IF=0; /* clear timer1 interrupt flag */
return;
}
//***** Timer2 interrupt *****//
void __attribute__ ((__interrupt__, no_auto_psv)) _T2Interrupt(void)
{
IFS0bits.T2IF = 0; /* clear timer2 interrupt flag */
return;
}
Answer: A few comments:
Why check PORTAbits.RA4==1 twice in the same if-block?
Following can be rewritten:
counter++;
if (counter== 12)
{
counter = 0;
}
to:
if(++counter == 12)
{
counter = 0;
}
or:
counter = (counter + 1 == 12) ? 0 : counter + 1;
Same here
if (PORTAbits.RA4==1)
{ // is SW1 still has pressed status after 2sec delay?
modecounter = 1; // switch to days
}
else
{
modecounter = 0; // switch to months
}
to:
modecounter = (PORTAbits.RA4==1) ? 1 : 0; | {
"domain": "codereview.stackexchange",
"id": 3180,
"tags": "c, algorithm, performance"
} |
Why doesn't local anesthesia affect muscles? | Question: While having a dental surgery, I've got this question that why I can still talk, open/close my mouth, move my lips and so on while I can't feel anything at all in my mouth?
If the local anesthesia interrupts the alert that goes through brain, how is the muscle still functional? Is it some sort of one-way interruption which only disables the alert that go "from" the nerve "to" the brain? Or are there different nerves for this task?
Answer: @Nicolai's answer is not entirely correct.
Background
The most common local anesthetics are all the "-caine" drugs - like novocaine, lidocaine, - and even cocaine has some of the same pharmacology, as well as other effects (in movies this is why cops might rub some white powder on their lips - if it's cocaine, it will make their lips tingle; this is not standard training for non-movie police; also not advised for drug sellers or purchasers, because for exactly this reason cocaine is often cut with other -caines or other compounds that give a tingly feeling).
What all of these drugs do is to block voltage-gated sodium channels. These are the channels that propagate action potentials through axons and excitable dendrites. Other local anesthetics that aren't used in the clinic, like tetrodotoxin from pufferfish, block the same types of channels. At sufficient concentrations, local anesthetics block all nervous activity: unlike @Nicolai's answer, they do shut down nerves (as a brief note for terminology: nerves are bundles of axons containing many many individual fibers).
So back to your original question: how come the muscle isn't impaired during dental surgery?
Geometry of innervation of the face
Take a look at this picture of the nerves innervating the facial muscles and the muscles themselves: https://bmc.med.utoronto.ca/cranialnerves/wp-content/images/c_07/Facial-vii12_labelled768.jpg (note: I wasn't able to find a good image that has a license allowing me to reproduce it here; if someone finds one and wants to edit it in, please do - thanks!)
They mostly come in from the sides of the face, and so they are further away from the site of local anesthetic injection. The muscles themselves also extend further away. It may be that the muscle is partly anesthetized, but you don't realize it.
Limited diffusion and location of injections
Diffusion of local anesthetics is limited because they actually act inside cells, and only the un-ionized form can diffuse across cell membranes. Once inside the cell, they tend to become ionized and therefore get "stuck". As a result, they don't typically diffuse nearly as far as other compounds of similar size, especially lipophilic ones.
The nerve endings that sense pain, on the other hand, have to be at the surface of the skin/mucous membranes. That is, they are exactly where the local anesthetic is being injected. The objective of local anesthesia is often to inject enough to affect only those surface tissues, trying to spare the deeper tissues.
Effects on nerves
However, it isn't too uncommon to overdo it and to paralyze the underlying muscles or nerves. Even nerves far away can be affected, for example see this paper discussing cases where the local anesthetic travels near the eyes and impairs eye movement. There are also other non-dental procedures where block of a particular nerve is the intended result: this can be seen as an alternative to general anesthesia. One example is the use of an epidural (essentially into the space around the spinal cord) injection during childbirth or surgery. The extent of paralysis depends on the dose, however, and usually the intent is to avoid major paralysis and use other drugs like opiates to assist in preventing pain. | {
"domain": "biology.stackexchange",
"id": 11647,
"tags": "neuroscience"
} |
Is methanol optically active? | Question: Strictly on theoretical basis, not based on experimental results.
I can very well be wrong, but I think that methanol should be
optically inactive because it has a plane of symmetry passing through it.
My teacher however insists that it is optically active as you can create two non-superimposable mirror images out of it.
Who is correct and why?
Answer: Just to sum up what has already been pointed out in the comments: methanol can have infinitely many conformations in free space due to free rotation of the C-H and C-OH sigma bonds. Hence, for any particular spatial conformation you can think of for the molecule, it can always develop a perfect mirror image of the same almost instantly (in case you are wondering about some time lag between them), as conformers are in chemical equilibrium.
So what this essentially means is: suppose some conformation A of methanol has polarized light in some direction (assuming the right-handed coordinate system, let's say the positive X direction) in some fashion (let's say circularly polarized, clockwise). Then this particular interference pattern of light will be internally compensated by a conformation B of the molecule, which will circularly polarize the light in the anticlockwise direction in the negative X direction. So the emergent light ray will appear to be unpolarized to the observer.
"domain": "chemistry.stackexchange",
"id": 12258,
"tags": "optical-properties"
} |
Calculating Chinese zodiac signs based on birthdays | Question: I am attempting to write a function to calculate a user's Chinese zodiac sign based on his or her birthday for Drupal.
I originally had a function for calculating based on the Gregorian calendar (1924 = Rat, 1925 = Ox). This was easy to write, but several of my users have complained that the Chinese zodiac is actually based on the lunar calendar, and a list of relevant start and end dates can be found here.
I started writing the code as follows. The big problem is that I have an if, else if, and else per year, and at this rate I will have to manually code 100 years worth of birthdays. On the other hand, I'm unsure of how to use a loop to iterate over this, because the day the lunar year starts and ends each year is different. Any insight on how to make this more efficient would be highly appreciated.
function mymodule_calculate_chinese_zodiac($birthdate) {
// Drupal MySQL data looks like this: 1981-07-30 00:00:00
$year = substr($birthdate, 0, 4);
$month = substr($birthdate, 5, 2);
$day = substr($birthdate, 8, 2);
if ($year == 1924) {
if ($month > 2) {
// Rat
$sign = "1";
}
else if (($month == 2) && ($day > 4)) {
$sign = "1";
}
else {
// Pig
$sign = "12";
}
}
else if ($year == 1925) {
if ($month > 1) {
// Ox
$sign = "2";
}
else if (($month == 1) && ($day > 23)) {
$sign = "2";
}
else {
$sign = "1";
}
}
else if ($year == 1926) {
if ($month > 2) {
// Tiger
$sign = "3";
}
else if (($month == 2) && ($day > 12)) {
$sign = "3";
}
else {
$sign = "2";
}
}
//All years until 2020
return $sign;
}
Answer: There's a few ways this could be improved:
Use DateTime objects rather than fiddling with individual date parts.
Use an array to store the edges between each Chinese year.
Search through the array until you find a match then return the index of that element mod 12. (plus 1 since you want values ranging from 1 - 12).
I'd recommend something like this:
$target = new DateTime($birthdate);
$dates = array (
new DateTime('1924-02-05'),
new DateTime('1925-01-24'),
new DateTime('1926-02-13'),
new DateTime('1927-02-02'),
// all years until 2020
);
foreach ($dates as $i => $value) {
$diff = $target->diff($value);
if ($diff->invert == 0) {
return (string)(($i % 12) + 1);
}
}
According to the documentation, as of PHP 5.2.2 you can do this:
foreach ($dates as $i => $value) {
    if ($target <= $value) {
return (string)(($i % 12) + 1);
}
}
You can also use a binary search (since this data is sorted) to speed up the search. My PHP is a bit rusty, so some one else might be able to improve this further, but I think it would look like this:
$min = 0;
$max = count($dates) - 1;
while ($max >= $min) {
$mid = (int)(($max + $min) / 2);
$value = $dates[$mid];
if ($target < $value) {
$max = $mid - 1;
}
else if ($target > $value) {
$min = $mid + 1;
}
else {
break;
}
}
if ($target > $value) {
$mid++;
}
return (string)(($mid % 12) + 1); | {
"domain": "codereview.stackexchange",
"id": 3954,
"tags": "php, datetime"
} |
Does the k-clique problem became easier on sparse graphs? | Question: Some definitions, just to not create confusion:
A sparse graph is a graph whose number of edges is less than or equal to its number of vertices.
In $k$-clique problem we are given a graph and an integer $k$, and the task is to decide whether the graph contains a $k$-clique.
I think that even if we have a sparse graph, nothing changes for the $k$-clique problem (it always has complexity $O(n^k)$ if $k$ is less than the number of vertices, and complexity $O(n^n)$ if $k$ is equal to the number of vertices), but I'm not so sure about this.
Is a $k$-clique easier to find if the graph is sparse?
Answer: Given a graph $G$ on $n$ vertices, you can always add $\binom{n}{2}$ new vertices not connected to anything to get a new sparse graph $G'$ with the same clique number. So assuming that the graph is sparse doesn't make it much easier to find its largest clique. In particular, determining whether a sparse graph has a clique of given size (which is part of the input) is NP-complete. | {
"domain": "cs.stackexchange",
"id": 10646,
"tags": "complexity-theory, graphs, clique"
} |
Grid_search (RandomizedSearchCV) extremely slow with SVM (SVC) | Question: I'm testing hyperparameters for an SVM; however, when I use GridSearchCV or RandomizedSearchCV, I haven't been able to get a result because the processing time runs into hours.
My dataset is relatively small: 4303 rows and 67 attributes, with four classes (classification problem)
Here are the tested parameters:
params =[{'C': [0.1,1, 10, 100],
'kernel': ['poly','sigmoid','linear','rbf'],
'gamma': [1,0.1,0.01,0.001]}
]
sv = SVC()
clf = RandomizedSearchCV(estimator=sv,
cv = 3,
param_distributions=params,
scoring='accuracy',
n_jobs = -1,
verbose=1)
clf.fit(X, y)
print("Best parameters:", clf.best_params_)
print("better accuracy: ", (clf.best_score_)**(1/2.0))
I've already reduced the number of parameters and the number of cvs, but I still can't get a result that doesn't take hours of processing.
Is it possible to optimize this process? Am I making a mistake regarding gridsearch or SVM?
Answer: It looks like your current approach is taking a long time because you are trying to search a large space of hyperparameters. One way to make the hyperparameter search more efficient is to use a smaller number of values for each hyperparameter, as this will reduce the total number of combinations that need to be tried.
There are several ways to optimize the hyperparameter tuning process for an SVM, including the following:
Use a smaller sample of the dataset for hyperparameter tuning, as the processing time will be proportional to the size of the dataset.
Use a more efficient algorithm for hyperparameter tuning, such as Bayesian optimization or genetic algorithms, which can find the optimal hyperparameters in a more efficient manner.
Use a more efficient implementation of the SVM algorithm, such as the LibSVM library, which can be faster than the default SVM implementation in scikit-learn.
Try different combinations of hyperparameters manually, rather than using grid search or randomized search, which can be computationally intensive.
Use a more efficient kernel, such as the linear kernel, which can be faster to train than more complex kernels such as the polynomial or RBF kernels.
Use a smaller number of hyperparameters, as the processing time will be proportional to the number of hyperparameters being tuned.
Use a coarser grid for hyperparameter tuning, such as increasing the stepsize for the values of the hyperparameters, as this can reduce the number of combinations to be tested.
Overall, it is important to carefully select and optimize the hyperparameters for an SVM to improve its performance and reduce the processing time.
Also, check - SVC classifier taking too much time for training | {
"domain": "datascience.stackexchange",
"id": 11982,
"tags": "python, scikit-learn, svm, hyperparameter-tuning, grid-search"
} |
Unknown CMake command "add_service_files" | Question: These are the code I include in my CMakeLists.txt
add_service_files(
FILES
AddTwoInts.srv
)
However, when I try to build with catkin_make, the error says Unknown CMake command "add_service_files", pointing at the code above. I can't find anything wrong with these three lines of code.
Answer: I forgot to include message_generation in the find_package() call.
The complete section of code should be like below:
find_package(catkin REQUIRED COMPONENTS
roscpp
rospy
std_msgs
message_generation
) | {
"domain": "robotics.stackexchange",
"id": 38824,
"tags": "ros, ros-noetic"
} |
Multi-Client Socket Communication with Thread Pool in C++ | Question: I've been working on implementing a multi-client socket communication system with a thread pool in C++. The system comprises three main components: logger.h, socket.h, and thread.h, which handle logging, socket operations, and thread pooling respectively. Additionally, I have a test suite tests_multisock.cpp that verifies the functionality of the system.
The code has been designed to facilitate communication between multiple clients and a server using sockets. I'd greatly appreciate your expertise in reviewing my code for any potential issues, optimizations, or areas of improvement. I've outlined a brief summary of the components and included a snippet of the test code for context. If you have any suggestions or feedback, I'm eager to hear them.
Components:
logger.h: A logging class for capturing events and errors.
socket.h: A socket class to manage socket communication, supporting functions like creating, binding, listening, accepting, and sending/receiving data.
thread.h: A thread pool class to handle concurrent execution of tasks.
Test Suite:
tests_multisock.cpp: A suite of tests that validates the functionality of the multi-client socket communication and thread pool system.
include.h
#pragma once
#include <iostream>
#include <vector>
#include <queue>
#include <thread>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <ctime>
#include <string>
#include <fstream>
#include <thread>
#include <WinSock2.h>
#include <Ws2tcpip.h>
logger.h
#pragma once
#include "include.h"
enum class LogLevel
{
DEBUG,
INFO,
WARNING,
ERR
};
class Logger
{
public:
Logger(LogLevel minLogLevel = LogLevel::INFO, const std::string& fileName = "default_log.txt")
: minLogLevel(minLogLevel), fileName(fileName)
{
SetLogFile(fileName);
}
~Logger()
{
if (logFile.is_open())
{
logFile.close();
}
}
void SetLogFile(const std::string& filename)
{
logFile.open(filename, std::ios::app);
if (!logFile.is_open())
{
std::cerr << "Failed to open log file: " << filename << std::endl;
}
}
void Log(LogLevel level, const char* file, int line, const std::string& message)
{
if (level >= minLogLevel)
{
std::string logEntry = GetTimeStamp() + " [" + LogLevelToString(level) + "] " + message +
" [" + file + ":" + std::to_string(line) + "]" + "\n";
std::cout << logEntry;
if (logFile.is_open())
{
logFile << logEntry;
logFile.flush();
}
}
}
std::string GetLogFile() const
{
return fileName;
}
private:
LogLevel minLogLevel;
std::ofstream logFile;
std::string fileName;
std::string LogLevelToString(LogLevel level) const
{
switch (level)
{
case LogLevel::DEBUG: return "DEBUG";
case LogLevel::INFO: return "INFO";
case LogLevel::WARNING: return "WARNING";
case LogLevel::ERR: return "ERROR";
default: return "UNKNOWN";
}
}
std::string GetTimeStamp() const
{
std::time_t now = std::time(nullptr);
char timestamp[20];
struct tm timeinfo;
#ifdef _WIN32
localtime_s(&timeinfo, &now);
#else
localtime_r(&now, &timeinfo);
#endif
std::strftime(timestamp, sizeof(timestamp), "%Y-%m-%d %H:%M:%S", &timeinfo);
return timestamp;
}
};
thread.h
#include "logger.h"
class ThreadPool {
public:
ThreadPool(int numThreads, Logger& logger) : logger(logger), stop(false) {
for (int i = 0; i < numThreads; ++i) {
threads.emplace_back([this]() { ThreadFunction(); });
}
}
~ThreadPool() {
{
std::unique_lock<std::mutex> lock(mutex);
stop = true;
}
condition.notify_all();
for (std::thread& thread : threads) {
thread.join();
}
}
void Enqueue(std::function<void()> task) {
{
std::unique_lock<std::mutex> lock(mutex);
tasks.push(task);
}
condition.notify_one();
}
private:
void ThreadFunction() {
while (true) {
std::function<void()> task;
{
std::unique_lock<std::mutex> lock(mutex);
condition.wait(lock, [this]() { return stop || !tasks.empty(); });
if (stop && tasks.empty()) {
return;
}
task = tasks.front();
tasks.pop();
}
try {
logger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Task started.");
task();
logger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Task completed.");
}
catch (const std::exception& ex) {
logger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Task error: " + std::string(ex.what()));
}
}
}
private:
Logger& logger; // Reference to your Logger instance
std::vector<std::thread> threads;
std::queue<std::function<void()>> tasks;
std::mutex mutex;
std::condition_variable condition;
bool stop;
};
socket.h
#include "logger.h"
class Socket
{
public:
Socket(Logger& logger) : logger(logger) {}
bool Create()
{
WSADATA wsaData;
if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
{
logger.Log(LogLevel::WARNING, __FILE__, __LINE__, "WSAStartup failed");
return false;
}
m_socket = socket(AF_INET, SOCK_STREAM, 0);
if (m_socket == INVALID_SOCKET)
{
logger.Log(LogLevel::WARNING, __FILE__, __LINE__, "Failed to create socket: " + std::to_string(WSAGetLastError()));
return false;
}
return true;
}
bool Bind(int port)
{
sockaddr_in hint;
hint.sin_family = AF_INET;
hint.sin_port = htons(port);
hint.sin_addr.s_addr = INADDR_ANY;
return bind(m_socket, (sockaddr*)&hint, sizeof(hint)) != SOCKET_ERROR;
}
bool Listen()
{
return listen(m_socket, SOMAXCONN) != SOCKET_ERROR;
}
bool Accept(Socket& clientSocket)
{
SOCKET client = accept(m_socket, nullptr, nullptr);
if (client != INVALID_SOCKET)
{
clientSocket.m_socket = client;
return true;
}
return false;
}
bool Connect(const char* ipAddress, int port)
{
sockaddr_in hint;
hint.sin_family = AF_INET;
hint.sin_port = htons(port);
if (inet_pton(AF_INET, ipAddress, &hint.sin_addr) <= 0)
{
// Handle error, unable to convert IP address
return false;
}
return connect(m_socket, (sockaddr*)&hint, sizeof(hint)) != SOCKET_ERROR;
}
int Send(const char* data, int dataSize)
{
return send(m_socket, data, dataSize, 0);
}
int Receive(char* buffer, int bufferSize)
{
return recv(m_socket, buffer, bufferSize, 0);
}
void Close()
{
if (m_socket != INVALID_SOCKET)
{
closesocket(m_socket);
m_socket = INVALID_SOCKET;
logger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Socket closed.");
}
}
bool SendHttpRequest(const std::string& url, int port, const std::string& httpRequest, std::string& httpResponse)
{
if (!Create())
{
return false;
}
if (!Connect(url.c_str(), port))
{
return false;
}
if (Send(httpRequest.c_str(), httpRequest.size()) != static_cast<int>(httpRequest.size()))
{
return false;
}
const int bufferSize = 1024;
char recvBuffer[bufferSize];
httpResponse.clear();
int bytesRead = 0;
do
{
bytesRead = Receive(recvBuffer, bufferSize);
if (bytesRead > 0)
{
httpResponse.append(recvBuffer, bytesRead);
}
} while (bytesRead > 0);
return true;
}
private:
SOCKET m_socket;
Logger& logger; // Reference to the Logger instance
};
tests_multisock.cpp
#include "logger.h"
#include "socket.h"
#include "thread.h"
#include <gtest/gtest.h>
class MultiClientCommunicationTest : public testing::Test {
protected:
Logger logger;
void SetUp() override {}
void TearDown() override {}
};
TEST_F(MultiClientCommunicationTest, MultipleClients) {
Logger serverLogger(LogLevel::DEBUG, "server_log.txt");
Logger clientLogger(LogLevel::DEBUG, "client_log.txt");
// Start a server thread
std::thread serverThread([&serverLogger]() {
Socket serverSocket(serverLogger);
ASSERT_TRUE(serverSocket.Create());
ASSERT_TRUE(serverSocket.Bind(12345));
ASSERT_TRUE(serverSocket.Listen());
serverLogger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Server listening");
ThreadPool threadPool(50, serverLogger);
for (int i = 0; i < 50; ++i) {
threadPool.Enqueue([&serverSocket, i, &serverLogger]() {
Socket clientSocket(serverLogger);
ASSERT_TRUE(serverSocket.Accept(clientSocket));
serverLogger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Server accepted client connection");
const char* message = "Hello from server!";
ASSERT_EQ(clientSocket.Send(message, static_cast<int>(strlen(message) + 1)), static_cast<int>(strlen(message) + 1));
serverLogger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Server sent message to client");
});
}
});
// Start client threads
std::vector<std::thread> clientThreads;
for (int i = 0; i < 50; ++i) {
clientThreads.emplace_back([&clientLogger]() {
Socket clientSocket(clientLogger);
ASSERT_TRUE(clientSocket.Create());
ASSERT_TRUE(clientSocket.Connect("127.0.0.1", 12345));
clientLogger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Client socket connected to server");
char recvBuffer[1024] = { 0 };
ASSERT_EQ(clientSocket.Receive(recvBuffer, sizeof(recvBuffer)), strlen("Hello from server!") + 1);
clientLogger.Log(LogLevel::DEBUG, __FILE__, __LINE__, "Client received message from server");
ASSERT_STREQ(recvBuffer, "Hello from server!");
});
}
// Wait for server and client threads to finish
serverThread.join();
for (auto& thread : clientThreads) {
thread.join();
}
}
int main(int argc, char** argv) {
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Answer: Use of a thread pool
What was the point of creating a thread pool with 50 threads if you are going to submit exactly 50 tasks to it? You could have just used std::vector<std::thread> like you did for clientThreads.
The thread pool only makes sense if you are going to submit more tasks than you have threads, and you either want to avoid the overhead of creating and destroying lots of threads, or you have CPU-intensive tasks and you want to avoid running more threads concurrently than you have CPU cores. In the case of network communication, it's probably not the latter. But then you have to wonder: what if 100 clients connect and the thread pool only has 50 threads? The first 50 clients will be accepted, but do the next 50 have to wait? What if you did more than just send "Hello from server!", and instead had a long-running task for each connection?
More importantly though, if you only add 50 tasks that each accept one connection, how will you ever be able to service 100 clients? Instead of having each task call Accept(), you probably want to create one thread dedicated to accepting connections, and as soon as a connection is accepted, then create a thread that will further handle that connection.
Partial send()/recv() is not a bug
In your code you are assuming that send() will send the whole message in one go, and that the call to recv() will receive the whole message. However, there is no such guarantee, not even for blocking sockets. Make sure you check the return value (don't just use an assert), and use a loop that calls send() repeatedly until either the whole message has been sent or an error has occurred.
For receiving you have to do the same, but it can even be more complicated if you don't know the size of the message you will receive in advance.
Validate the data that you receive
The sender might send something different than you expect. You are calling Receive() with a buffer of 1024 bytes. What if the sender sends 1024 bytes, none of which are a NUL byte? Luckily you only accept a string containing "Hello from server!", but if you didn't know the length and contents of the message, you can no longer trust that the buffer will be NUL-terminated.
Manometer Reading from Fluid Dynamics | Question: so I have been working on this problem :
Diameter at wide end: 8 cm; V1 = 1.56 m/s
Diameter at narrow end: 3 cm; V2 = 11.094 m/s
Find the manometer reading
I know that due to Bernoulli's Equation the pressure where velocity is at 11.094m/s is much lower than at the other end. This can also be seen from the Mercury(Hg) level.
I have taken $P_2$ as 0 because it is open to the atmosphere -- I am unsure whether this should be taken as 0 or 1.
I have used the following equation to work out P1 :
P1+(1/2 ρv1^2)=P2+(1/2 ρv2^2) and I end up with P1 = 60.321kPa
Can I use $P_1 - P_2 = (\rho_2 - \rho_1)gh$ in order to find the height? Or is this equation only valid for a differential manometer that has a constant cross-sectional area?
I am still unsure how to proceed in this case. Any help or tips would be appreciated.
EDIT This question is not related to finding any sort of velocity as asked in the possibly related question. I am interested in the manometer reading and whether the equation $P_1 - P_2 = (\rho_2 - \rho_1)gh$ is relevant in this case.
Answer: For a problem like this, you first calculate the velocity in the two sections - the ratio of the velocities is the inverse of the ratio of the areas for an incompressible fluid.
The manometer will read a pressure that depends on the difference in density between mercury and water, so
$$\Delta p = \Delta \rho g h$$
The absolute pressure doesn't matter here, just the difference. So you can rewrite this as
$$\frac12 \rho_w (v_2^2 - v_1^2) = (\rho_m - \rho_w) g h$$
You asked:
Can I use $P_1 - P_2 = (\rho_2 - \rho_1)gh$ in order to find the height?
The short answer is "yes", assuming you are careful with your signs. $P_1<P_2$, but $\rho_1 < \rho_2$ so in your expression you would get a negative $h$ but the way you drew it, the value looks positive. But that's easy enough to get right.
Arriving at the $\big(\pi_\ell,P_\ell(\mathbb{C}^2)\big)$ representation of $\mathfrak{su}(2)$ | Question: I think I'm really close, but confused on applying the multivariable chain rule and untangling the result. The $(\Pi_\ell,P_\ell(\mathbb{C}^2))$ representation of $SU(2)$ induced from the fundamental representation $(\Pi,\mathbb{C}^2)$ is given by
$$(\Pi_\ell(A)\hspace{.5mm}p)(v)\equiv p(\Pi(A^{-1})v)=p(A^{-1}v)$$
for $A\in SU(2), v\in \mathbb{C}^2$, and $p\in P_\ell(\mathbb{C}^2)$, the set of all $\ell$ degree polynomials with basis $\{z_1^{\ell-k} z_2^k\}_{k=0}^{\ell}$. From this, we can determine the induced Lie algebra representation $(\pi_\ell,P_\ell(\mathbb{C}^2))$ of $\mathfrak{su}(2)$ from
$$(\pi_\ell(S_i)\hspace{.5mm}p)(v)=\frac{d}{dt}(\Pi_\ell(e^{tS_i})\hspace{.5mm}p)(v)|_{t=0}=\frac{d}{dt}p(e^{-tS_i}v)|_{t=0}=\sum_{j=1}^2\frac{\partial p}{\partial(e^{-tS_i}v)^j}(-S_iv)^j$$
where I'm denoting $\mathbb{C}^2$ components with $j$. This is where I'm stuck, for two reasons: 1) not clear on how to interpret the denominator of that partial, and 2) not clear on pulling out the representation of $\pi_\ell(S_i)$ from this $v$ dependent form.
Closure: Just wanted to take this a bit further so everything is in one place, as well as make the connection to physics. As @ACuriousMind showed, this representation is
$$\pi_\ell(S_i)=(-S_iv)_1\frac{\partial}{\partial z_1}+(-S_iv)_2\frac{\partial}{\partial z_2}$$
with $v=(z_1,z_2)^T$, so the representations of $S_k=-\frac{i}{2}\sigma_k$ become
$$\pi_\ell(S_x)=\frac{i}{2}(z_2\frac{\partial}{\partial z_1}+z_1\frac{\partial}{\partial z_2})\\\pi_\ell(S_y)=\frac{1}{2}(z_2\frac{\partial}{\partial z_1}-z_1\frac{\partial}{\partial z_2})\\\pi_\ell(S_z)=\frac{i}{2}(z_1\frac{\partial}{\partial z_1}-z_2\frac{\partial}{\partial z_2}).$$
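A quick consistency check (worked here for completeness, not part of the original post) confirms that these differential operators obey the same $\mathfrak{su}(2)$ bracket as the matrices $S_k$, e.g. $[S_x,S_y]=S_z$:

```latex
[\pi_\ell(S_x),\,\pi_\ell(S_y)]
  = \tfrac{i}{4}\left[z_2\partial_{z_1}+z_1\partial_{z_2},\;
                      z_2\partial_{z_1}-z_1\partial_{z_2}\right]
  = -\tfrac{i}{2}\left[z_2\partial_{z_1},\, z_1\partial_{z_2}\right]
  = \tfrac{i}{2}\left(z_1\partial_{z_1}-z_2\partial_{z_2}\right)
  = \pi_\ell(S_z)
```

using $[A+B,\,A-B]=-2[A,B]$ and $[z_2\partial_{z_1},\,z_1\partial_{z_2}] = z_2\partial_{z_2}-z_1\partial_{z_1}$.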
Operating $\pi_\ell(S_z)$ on the basis $\mathcal{B}=\{z_1^{\ell-k}z_2^k\}_{k=0}^\ell$ gives
$$\pi_\ell(S_z)(z_1^{\ell-k}z_2^k) = i(\frac{\ell}{2}-k)z_1^{\ell-k}z_2^k.$$
These elements are eigenvectors, so the matrix representation in this basis has the form
$$[\pi_\ell(S_z)]_\mathcal{B}=i\pmatrix{\ell/2\\&\ell/2-1\\&&\ddots\\&&&-\ell/2}$$
and setting $s=\ell/2$ identifies this as the same form of $S_z$ acting on the Hilbert space of a spin $s$ particle. This also happens to be the same matrix form as the symmetric tensor representation $S^\ell\pi(S_z)$ on the standard basis, setting up a correspondence between symmetric $(0,\ell)$ tensors and degree $\ell$ polynomials.
Answer: Let's write the multivariate chain rule a little bit differently: With $f(t) = \mathrm{e}^{-tS_i}v$, we have
$$ \frac{\mathrm{d}}{\mathrm{d}t}p(f(t))\vert_{t=0} = D_{f(0)}p \mathop{\circ} D_0f $$
where $D_x$ means the Jacobian evaluated at $x$. $D_0 f = -S_i v$ and since $f(0) = v$
$$D_{f(0)}p = \begin{pmatrix} \frac{\partial p}{\partial z_1}, \frac{\partial p}{\partial z_2} \end{pmatrix}(v)$$
so
$$ D_{f(0)}p \mathop{\circ} D_0f = -(S_i v)_1\frac{\partial p}{\partial z_1}(v) - (S_i v)_2\frac{\partial p}{\partial z_2}(v).$$
Since the $\partial_i p$ are polynomials of degree $\ell - 1$ and $(S_i v)_j$ is linear in $v$, this is altogether still a polynomial of degree $\ell$, so that checks out.
If you express the $(S_i v)_j$ as linear polynomials $s_{ij}(z_1,z_2)$, you get
$$ \pi_\ell(S_i) = \sum_j -s_{ij}(z_1,z_2)\frac{\partial}{\partial z_j},$$
but you can't really simplify this any further.
Has a beneficial mutation ever been documented? | Question: I am trying to find a case/study where scientists documented a mutation in an animal or human that was to the benefit of the host.
The closest thing I have been able to find is sickle cell anemia (SCA) helping to fight malaria. However, the life expectancy for people with SCA is 40 – 60 years, and in 1973 it was only 14 years (source). I am looking for another case, preferably one that is not life-threatening.
Are there any other cases where a beneficial mutation — one where the good outweighs the bad — was documented?
By beneficial I simply mean that it helps or protects the host in some way, while not causing substantial harm. As in my example of SCA it can benefit the host if the host lives in an area with malaria. However, it is also life threatening and reduces the life expectancy of the host.
If — for example — SCA would only cause pain and not be life-threatening, then it would (in my opinion) be a beneficial mutation. While not purely beneficial, it would still increase the life expectancy of people living in an area with a high occurrence of malaria.
Answer: Lactase persistence
This is a somewhat unusual example but has been well studied, and would seem to satisfy the criteria of the question. Let me start by quoting the Wikipedia entry for those unfamiliar with the phenomenon:
Lactase persistence is the continued activity of the enzyme lactase in adulthood. Since lactase’s only function is the digestion of lactose in milk, in most mammal species, the activity of the enzyme is dramatically reduced after weaning.[1] In some human populations, though, lactase persistence has recently evolved[2] as an adaptation to the consumption of nonhuman milk and dairy products beyond infancy.
Studies on the geographical distribution of lactase persistence (e.g. AJHG (2014) vol 94, pp. 496–510) show that lactase persistence is associated with cultures which practice pastoralism (specifically herding bovines), supporting the hypothesis that the trait evolved because of the benefit it conveyed to such populations in allowing them to use bovine (or caprine or ovine) milk to survive in adulthood.
There is no doubt that lactase persistence is a heritable — i.e. genetic — trait, as can be seen from consulting the OMIM (Online Mendelian Inheritance in Man) entry. This has extensive documentation, including a description of the base changes associated with the trait:
Enattah et al. (2002) found a complete association between biochemically verified lactase nonpersistence in Finnish families and a C/T(-13910) polymorphism of the MCM6 gene (601806.0001) roughly 14 kb upstream from the lactase gene locus (LCT; 603202), located on 2q21. It was the C allele that associated with hypolactasia.
The molecular mechanism of lactase nonpersistence (the putative original human condition) — affected by mutation at this position — is still not completely clear. Recent work suggests:
Epigenetically controlled regulatory elements accounted for the differences in lactase mRNA levels among individuals, intestinal cell types and species.
Finally, I refer to the evidence for the Wikipedia statement that “lactase persistence has recently evolved[2]”. This is a paper by Bersaglieri et al. in The American Journal of Human Genetics (Am. J. Hum. Genet. 74:1111–1120, 2004). I am not a population geneticist, so I shall reproduce the relevant section of their summary unedited:
In northern European–derived populations, two alleles that are tightly associated with lactase persistence (Enattah et al. 2002) uniquely mark a common (∼77%) haplotype that extends largely undisrupted for 11 Mb. We provide two new lines of genetic evidence that this long, common haplotype arose rapidly due to recent selection: (1) by use of the traditional FST measure and a novel test based on pexcess, we demonstrate large frequency differences among populations for the persistence-associated markers and for flanking markers throughout the haplotype, and (2) we show that the haplotype is unusually long, given its high frequency—a hallmark of recent selection. We estimate that strong selection occurred within the past 5,000–10,000 years, consistent with an advantage to lactase persistence in the setting of dairy farming; the signals of selection we observe are among the strongest yet seen for any gene in the genome.
Finding the median in a heap | Question: Given a binary heap $H$, indexed in $[1..n]$, whose elements are integers, is there a way to quickly find its median in $O(\log n)$-time?
Answer: It takes $\Omega(n)$ time to find the median of a heap in the worst case. The reason is that the lowest levels of the heap (the leaves and their close ancestors) can be very disordered, and they make up the majority of the heap. As a result, if you can find the median on a heap, you can find a median of the $\Omega(n)$ unordered elements near the leaves. Since finding the median in an unordered list takes $\Omega(n)$ time, so does finding the median in a binary heap.
For instance, if finding the median in an unordered array of size $n$ takes a given amount of time, presumably it shouldn't be any faster to find the median of that array if we prepend to it an array of size $n$ consisting of only $-\infty$ and append to it an array of size $n$ consisting of only $+\infty$. Yet, this new array of size $3n$ is a valid binary heap, and its median is also the median of the original unordered array of size $n$. Thus, if we can find the median of the heap in $O(\log 3n) = O(\log n)$ time, then we can also find the median of an unordered array in $O(\log n)$ time.
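The padding construction in the previous paragraph can be demonstrated concretely (a hypothetical illustration using `INT_MIN`/`INT_MAX` as stand-ins for $\mp\infty$ and a min-heap ordering):

```cpp
#include <algorithm>
#include <climits>
#include <cstddef>
#include <vector>

// Surrounds an arbitrary array of size n with n copies of "-infinity" in
// front and n copies of "+infinity" behind. The 3n-element result satisfies
// the min-heap property regardless of the order of the middle elements,
// and its median is the median of the original data.
std::vector<int> PadToHeap(const std::vector<int>& data)
{
    std::size_t n = data.size();
    std::vector<int> heap(n, INT_MIN);                  // n leading "-infinity"
    heap.insert(heap.end(), data.begin(), data.end());  // unordered middle
    heap.insert(heap.end(), n, INT_MAX);                // n trailing "+infinity"
    return heap;
}
```

Every parent of a middle element lies in the $-\infty$ block, and every parent of a $+\infty$ element is at most $+\infty$, which is why the heap property holds for any middle contents; `std::is_heap` with `std::greater` can verify this.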
Since we know that it takes $\Omega(n)$ time to find the median of an unordered array of size $n$, it must also take $\Omega(n) = \Omega(3n)$ time to find the median of a heap of size $3n$.
Why is kerosene preferred over other non-polar solvents to store sodium in? | Question: I think we all heard this again and again in our early schooling days:
Sodium is always stored under kerosene so that it doesn't react with air, and corrode itself.
But lately I've been wondering, why kerosene? Why not benzene? Why not cyclohexane? Why not $\ce{CS2}$? What is that makes kerosene preferred?
Is kerosene the cheapest non-polar liquid available? Or is there any particular reason that I'm not aware of?
Answer: Kerosene is a fraction from the distillation of petroleum, the one with boiling point 180° to 230°, containing hydrocarbons from $\ce{C_11}$ to $\ce{C_12}$. In reaction to what has been said in the commentaries (below the question), I note that mineral oil is a cloudy expression whereas kerosene seems to be the precise name of the more or less broader base material of this cloud.
The best means to store alkali metals, which are all strong reductants, is an alkane, because alkanes are inert (as indicated by the name "paraffin") and cannot be reduced.$^1$ The reason why one does not take the lower-boiling fraction (gasoline, 40 °C to 180 °C, $\ce{C_6}$ to $\ce{C_10}$), cyclohexane (b.p. 81 °C) or hexane (b.p. 69 °C) is obvious: it would be too volatile, which is dangerous, since evaporation, for example when someone forgets to close the bottle properly, would lead to spontaneous ignition of the alkali metal (especially in the case of potassium) by contact with moist air. (Additionally, cyclohexane would in any case be unnecessarily expensive.) The reason why the higher-boiling fraction (light gas oil, 230 °C to 305 °C, $\ce{C_13}$ to $\ce{C_17}$) is not used is that it is much greasier than kerosene.
It is strange that benzene and carbon disulfide are proposed in the question, since they are both more expensive, very toxic, and low-boiling. In the case of carbon disulfide this would be a paradigm of a very dangerous choice for three reasons: 1. its boiling point (46 °C) makes it extremely flammable, 2. it is extremely toxic, and 3. it is not inert against reductants.
Footnote:
$^1$ In the case of lithium, which is much less reactive than the other alkali metals, the container should ideally be filled level with the top, since lithium floats on the kerosene. But even if it is not submerged, the kerosene remains as a protective film on the lithium, which hinders its reaction with moist air.
Metric tensor from hyperbolic PDE | Question: It is clear that when a differential equation is composed of the second partial derivatives only, it could be written in the form
$$
g^{\mu\nu} \frac{\partial^2 \psi}{\partial x^\mu \partial x^\nu} = 0
$$
with $g^{\mu\nu}$ denoting the metric tensor. Is there any general way of obtaining coefficients of $g^{\mu\nu}$ from hyperbolic PDE containing derivatives of lower orders? Probably by means of changing variables.
My naive intuition suggests that the dispersion brought by the lower order terms should be "transferable" to the metric tensor (possibly nonconstant in time and space).
Answer: Suppose that we have an equation of the form
$$
g^{\rho \sigma} \frac{\partial^2 \psi}{\partial x^\rho \partial x^\sigma} + A^\tau \frac{\partial \psi}{\partial x^\tau} = 0 \tag{1}
$$
and we suspect that under a coordinate transformation it is secretly the wave equation. In general, the scalar d'Alembertian in terms of coordinate derivatives is
$$
\Box \psi=g^{\rho \sigma}(\partial_\rho \partial_\sigma \psi -\Gamma^\tau {}_{\rho\sigma}\partial_\tau \psi)
$$
which means that if equation (1) is truly the wave equation, we must have $A^\tau = - g^{\rho \sigma} \Gamma^{\tau} {}_{\rho \sigma}$ for this to work. But it is also not too hard to show that
$$
-g^{\rho \sigma} \Gamma^{\tau} {}_{\rho \sigma} = \frac{1}{\sqrt{|g|}} \partial_\lambda (\sqrt{|g|} g^{\tau\lambda})
$$
and since the metric in terms of the coordinates $x^\mu$ is known (it can be "read off" of the higher-derivative term), it should be easy to simply calculate this quantity and see if it is equal to $A^\tau$. It is also easy to envision situations where we have $A^\tau \neq - \frac{1}{\sqrt{|g|}} \partial_\lambda (\sqrt{|g|} g^{\tau\lambda})$, and so equation (1) is not actually equivalent to the wave equation in any set of coordinates.
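The quoted identity follows from the divergence form of the scalar d'Alembertian; a short derivation (added here, not in the original answer):

```latex
\Box\psi
  = \frac{1}{\sqrt{|g|}}\,\partial_\mu\!\left(\sqrt{|g|}\, g^{\mu\nu}\,\partial_\nu\psi\right)
  = g^{\mu\nu}\partial_\mu\partial_\nu\psi
    + \frac{1}{\sqrt{|g|}}\,\partial_\lambda\!\left(\sqrt{|g|}\, g^{\tau\lambda}\right)\partial_\tau\psi
```

Comparing the first-derivative term with $\Box \psi=g^{\rho \sigma}(\partial_\rho \partial_\sigma \psi -\Gamma^\tau {}_{\rho\sigma}\partial_\tau \psi)$ immediately gives $-g^{\rho \sigma} \Gamma^{\tau} {}_{\rho \sigma} = \frac{1}{\sqrt{|g|}} \partial_\lambda (\sqrt{|g|} g^{\tau\lambda})$.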
The one wrinkle to this argument is that we could allow for a conformal rescaling of equation (1), so that we instead have
$$
\tilde{g}^{\rho \sigma} \frac{\partial^2 \psi}{\partial x^\rho \partial x^\sigma} + \tilde{A}^\tau \frac{\partial \psi}{\partial x^\tau} = 0 \tag{2}
$$
where $\tilde{g}^{\rho \sigma} = \Omega^2 g^{\rho \sigma}$ and $\tilde{A}^\tau = \Omega^2 A^\tau$ for some scalar function $\Omega > 0$. This is entirely equivalent to (1), but allows us a bit more freedom to find a set of coordinates. Going through the same logic as above, equation (2) will be equivalent to the wave equation if we have
$$
\tilde{A}^\tau = - \frac{1}{\sqrt{|\tilde{g}|}} \partial_\lambda \left( \sqrt{|\tilde{g}|} \tilde{g}^{\tau\lambda} \right) \\
\Omega^2 A^\tau = - \frac{\Omega^{D}}{\sqrt{|g|}} \partial_\lambda \left( \Omega^{-D+2} \sqrt{|g|} g^{\tau\lambda} \right) \\
A^\tau = - \frac{1}{\sqrt{|g|}} \partial_\lambda (\sqrt{|g|} g^{\tau\lambda}) + \left( D-2 \right) g^{\tau \lambda} \partial_\lambda (\ln \Omega)
$$
and so equation (1) is equivalent to the wave equation if there exists a scalar function $\Omega$ satisfying
$$
\left( D-2 \right) \partial_\lambda (\ln \Omega) = A_\lambda + \frac{1}{\sqrt{|g|}} g_{\lambda \tau} \partial_\sigma (\sqrt{|g|} g^{\tau\sigma}).
$$
It is possible to imagine situations where we can solve this for $\Omega$, but it is also possible to imagine situations in which no such $\Omega$ exists. In particular, the "curl" of the left-hand side must be zero, since $\partial_{[\mu} \partial_{\nu]} (\ln \Omega)$ must vanish; but it would not be hard to find instances where the curl of the right-hand side is not zero. And in the case of a two-dimensional manifold, this whole rigamarole doesn't get us anywhere because the wave equation is conformally invariant; if it doesn't work for the equation as written, it won't work at all.
Length contraction of distances between two unconnected point particles moving at same velocities | Question: The first part of my question is meant to confirm my understanding of length contraction in the context of the following simple thought experiment:
Imagine three particles O(the observer), A and B in a one dimensional space. Initially, they are all stationary with respect to each other. Let the distance between the particle A and B, as observed by O (and consequently, as observed by A or B) be $L$.
Now let's imagine that the particle O starts to move with the velocity $v$. In the observer O's frame of reference, the distance between A and B should now be measured as $L\sqrt{1-\frac{v^2}{c^2}}$. The distance between A and B as observed by A and B should of course still be $L$.
Now let's imagine the observer O is stationary, as in the initial setup, and at a certain time instant $t$ in the observer O's frame of reference, both particles A and B start moving from rest with the velocity $v$ at once. My understanding leads me to conclude that the distance between A and B as measured by observer O is still $L$. After all, two identically accelerating point objects in the observer's frame of reference cannot come closer to each other. The distance between A and B, as observed by A or B, would be $L\sqrt{1-\frac{v^2}{c^2}}$, wouldn't it? (Please confirm.)
This leads us to the second part of the question. The particles A and B were unconnected in the above example, which is why the distance between A and B as measured by O remained $L$. What if the particles were connected?
Common explanations on the internet propound that an object (which may be approximated as a collection of some point particles bound by some force that maintains the distance between them) that begins to accelerate in a frame of reference will be seen by the observer to contract in its length.
What is the reason that the 'observed distance' between two connected particles may undergo length contraction, but the 'observed distance' between two unconnected particles may remain the same?
Answer: All of the following holds whether or not $A$ and $B$ are connected:
Say the particles are lined up in the order $O$, $A$, $B$ and are stationary in what I'll call the "lab frame".
1) Suppose $O$ starts moving to the left in the lab frame. Then:
1a) In $O$'s new frame: $O$ is stationary. $A$ started moving to the right at (say) 12PM and $B$ started moving to the right at (say) 12:01PM (both at the same velocity).
Between 12PM and 12:01 PM, only $A$ was moving, so the distance between $A$ and $B$ was shrinking, say from $L$ to $L'$. Now that they're both moving, the distance remains $L'$.
If $A$ and $B$ are connected by a rod, the length of that rod has shrunk from $L$ to $L'$.
1b) In the lab frame: $O$ is moving. $A$ and $B$ are still stationary. Obviously the distance between them has not changed. If there's a rod connecting them, its length remains as before.
2) Suppose $A$ and $B$ start moving to the right, simultaneously in the lab frame.
2a) In the lab frame: the distance between $A$ and $B$ has (obviously) not changed. Therefore neither has the length of any rod that might be connecting them.
2b) In the new frame of $A$ and $B$: $A$ and $B$ used to be moving leftward. At (say) 12PM $B$ stopped moving, and at (say) 12:01PM, $A$ stopped moving. Between noon and 12:01, the distance between them grew, say from $L$ to $L''$. Now that they're both moving, the distance remains at $L''$. If there's a rod connecting them, it has stretched.
Note that in both cases 1) and 2), once things are underway, $O$ sees the rod connecting $A$ with $B$ as shorter than $A$ and $B$ do.
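Both distances can be read off from the Lorentz transformation $x' = \gamma(x - vt)$ with $\gamma = 1/\sqrt{1-v^2/c^2}$ (a supplementary check, not part of the original answer). In the lab frame $A$ and $B$ are always separated by $\Delta x = L$, so for case 2:

```latex
\Delta x' = \gamma\,(\Delta x - v\,\Delta t)\Big|_{\Delta t = 0,\;\Delta x = L}
          = \gamma L = L''
```

and since $A$ and $B$ are at rest in the primed frame, $\Delta x'$ is their separation there. In case 1 the roles are reversed: $L$ is the proper separation of the stationary pair, so the moving observer $O$ measures $L' = L/\gamma = L\sqrt{1 - v^2/c^2}$.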
As to your question in comments about whether an accelerating object finds itself increasing in length, the answer depends entirely on the nature of the acceleration. If, at each instant in the rod's instantaneous frame, all points of the rod are accelerated identically, then the length of the rod won't change. If all points of the rod are accelerated identically in some other frame, then they won't be accelerated identically in the rod's frame. And of course all of this is very easily analyzed with special relativity, so the comment saying otherwise is incorrect.
Paint rectangles to canvas using mouse | Question: Just wanted to post this here to see if anyone has any critiques of my code, if I'm drawing the background image most efficiently, or if I'm doing anything which could benefit from obvious improvements.
RectangleMover
import java.awt.BorderLayout;
import java.awt.EventQueue;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionListener;
import java.io.File;
import java.io.IOException;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class RectangleMover {
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
final JFrame frame = new JFrame();
frame.setSize(FRAME_WIDTH, FRAME_HEIGHT);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setContentPane(new JPanel(new BorderLayout()) {
/**
*
*/
private static final long serialVersionUID = 1L;
public void paintComponent(Graphics g) {
try {
super.paintComponent(g);
final Image backgroundImage = javax.imageio.ImageIO
.read(new File(
"/Users/langer/Desktop/7923.jpg"));
g.drawImage(backgroundImage, 0, 0, FRAME_WIDTH,
FRAME_HEIGHT, null);
} catch (IOException e) {
e.printStackTrace();
}
}
});
frame.setVisible(true);
frame.setLocationRelativeTo(null);
final RectangleComponent b = new RectangleComponent();
frame.add(b);
frame.addMouseMotionListener(new MouseMotionListener() {
public void mouseDragged(MouseEvent e) {
b.addRect(e.getX(), e.getY());
frame.setVisible(true);
}
public void mouseMoved(MouseEvent e) {
}
});
}
});
}
private static final int FRAME_WIDTH = 800;
private static final int FRAME_HEIGHT = 1000;
}
RectangleComponent
import java.awt.BasicStroke;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.util.ArrayList;
import javax.swing.JComponent;
/**
* This component displays a rectangle that can be moved.
*/
public class RectangleComponent extends JComponent {
private static final long serialVersionUID = 6596614645871812990L;
private ArrayList<Rectangle> boxes = new ArrayList<Rectangle>();
public void paint(Graphics g) {
Graphics2D g2 = (Graphics2D) g;
super.paintComponent(g);
for (int i = 0; i < boxes.size(); i++) {
float thickness = 2;
g2.setStroke(new BasicStroke(thickness));
g2.drawRect(boxes.get(i).x, boxes.get(i).y - 29,
boxes.get(i).width, boxes.get(i).height);
}
}
public void addRect(int x, int y) {
boxes.add(new Rectangle(x, y, BOX_WIDTH, BOX_HEIGHT));
repaint();
}
private static final int BOX_WIDTH = 42;
private static final int BOX_HEIGHT = 60;
}
Answer: Don't load the same resource repeatedly
I don't have much time to give a full review but I can point out one big thing that I saw while looking through the code.
public void paintComponent(Graphics g) {
try {
super.paintComponent(g);
final Image backgroundImage = javax.imageio.ImageIO
.read(new File(
"/Users/langer/Desktop/7923.jpg"));
g.drawImage(backgroundImage, 0, 0, FRAME_WIDTH,
FRAME_HEIGHT, null);
} catch (IOException e) {
e.printStackTrace();
}
}
You shouldn't load and parse the background image in the paint method. This method can be called frequently even though your background image doesn't change. If you try to resize your window you'll see how you reload the background several times per second slowing down your application.
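The fix is to load the image once and reuse it on every repaint. Here is a language-agnostic sketch of that idea in Python (illustrative only; the class and names are mine) — in the Java code it would mean loading the image into a field once, e.g. in the constructor, and only calling drawImage inside paintComponent:

```python
class Background:
    """Load an expensive resource once and reuse it on every paint call."""

    def __init__(self, loader):
        self.load_count = 0   # tracks how often the expensive load actually ran
        self._loader = loader
        self._image = None

    def image(self):
        # Lazily load on first access; every later paint reuses the cached copy.
        if self._image is None:
            self._image = self._loader()
            self.load_count += 1
        return self._image

    def paint(self):
        # Stands in for: g.drawImage(self.image(), ...)
        return self.image()

bg = Background(lambda: "decoded image bytes")
for _ in range(100):   # 100 repaints, e.g. while resizing the window
    bg.paint()
print(bg.load_count)   # the resource was decoded only once
```

The repaints become cheap because the costly decode happens exactly once, no matter how often the component is painted.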
Prefer to create a program instance
Doing so allows you to encapsulate variables in the class instead of keeping them in main or inlined. It also makes your code more readable.
public class RectangleMover {
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
final JFrame frame = new JFrame();
frame.setSize(FRAME_WIDTH, FRAME_HEIGHT);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setContentPane(new JPanel(new BorderLayout()) {
/**
*
*/
private static final long serialVersionUID = 1L;
should be something like this:
public class RectangleMover {
private final Image background;
public RectangleMover(){
background = loadimage();
final JFrame frame = new JFrame();
...
}
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
new RectangleMover();
}});
    }
}
"domain": "codereview.stackexchange",
"id": 13294,
"tags": "java, swing, event-handling, graphics"
} |
Solving a bivariate recurrence equation | Question: I'm dealing with this recurrence equation:
$\qquad\displaystyle T(m, n) = T(m/2, n/2) + T(m, n/2) + O(1)$.
Any idea how to solve this? I've looked in to few advanced resources but the literature on two dimensional recurrence equation seems to be very thin.
Answer: Repeated substitution
Repeatedly substitute in, and look for a pattern:
$$\begin{align*}
T(m,n) &= T(m/2,n/2) + T(m, n/2) + 1\\
&= T(m/4,n/4) + 2 T(m/2,n/4) + T(m, n/4) + 3\\
&= T(m/8,n/8) + 3 T(m/4,n/8) + 3 T(m/2,n/8) + T(m, n/8) + 7\\
&\vdots\\
\end{align*}$$
Do you see the pattern? The coefficients $1,3,3,1$ should remind you of binomial coefficients. The pattern is
$$T(m,n) = 2^k-1 + \sum_{i=0}^k {k \choose i} T(m/2^i, n/2^k).$$
You can prove that this holds for all $k$ (by induction on $k$). Then, setting $k=\lg n$, we find
$$T(m,n) = n-1 + \sum_{i=0}^{\lg n} {\lg n \choose i} T(m/2^i, 1).$$
So it suffices to know how to evaluate $T(m,1)$ for all $m$. That information cannot be deduced from the recurrence you gave us -- it needs to be supplied as additional base cases.
For instance, suppose we are given the base cases $T(m,1) = 1$ for all $m$. Then we find
$$T(m,n) = n-1 + \sum_{i=0}^{\lg n} {\lg n \choose i} = 2n-1 = \Theta(n).$$
However, if we were given different base cases, the solution might be different.
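The closed form is easy to sanity-check numerically. This Python sketch (an illustration, not from the original answer) evaluates the recurrence directly with the base case $T(m,1)=1$, using integer halving to stand in for $m/2$ and $n/2$ on powers of two, and compares the result against $2n-1$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(m, n):
    """T(m, n) = T(m/2, n/2) + T(m, n/2) + 1, with base case T(m, 1) = 1."""
    if n == 1:
        return 1
    return T(m // 2, n // 2) + T(m, n // 2) + 1

for k in range(11):
    n = 2 ** k
    # With T(m, 1) = 1 for all m, the solution is 2n - 1, independent of m.
    assert T(1024, n) == 2 * n - 1
```

As the answer notes, this $\Theta(n)$ behavior is tied to the chosen base case; different base cases would yield a different solution.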
Guess-and-check
Another approach is to somehow guess the solution (for instance, by playing with a bunch of examples and looking for patterns), and then check that it holds. For your example, if we assume $T(m,n) = T(m/2,n/2) + T(m,n/2) + 1$ and we assume $T(m,1)=1$ for all $m$, then rubbing our rabbit's foot for luck, we might get lucky and guess
$$T(m,n) = 2n-1.$$
It is then easy to plug in and confirm that this is a valid solution to the recurrence you gave.
However, in general, guess-and-check is often very hard -- you can try many guesses, but sometimes it can be very difficult to find a guess that turns out to be correct. | {
"domain": "cs.stackexchange",
"id": 5016,
"tags": "recurrence-relation"
} |
Assimp and RViz will not import .scene file | Question:
I am trying to load a collision object into my hydraulic arm's planning scene, but it doesn't work with the pillar example provided for the baxter robot:
[ WARN] [1412647012.600856139]: Assimp reports no scene in file:///home/controller/Downloads/baxter_pillar.scene
There is a box.stl from the MoveIt! tutorial example that loads perfectly well.
Why is Assimp not loading these text scene files? All I want to do is have a cylinder that my robot is not allowed to collide with when executing a path.
Kind Regards
Bart
Originally posted by bjem85 on ROS Answers with karma: 163 on 2014-10-06
Post score: 0
Answer:
I think Assimp is only capable of loading 3D models in a number of 'standardised' formats (stl, obj, etc). I doubt .scene is one of those.
From the tutorial you linked (section "Introducing an environment representation for planning", emphasis mine):
We will now create a scene object in a text file to be imported into our environment.
and:
You can now import this scene from the Scene Geometry field selecting Import From Text
Are you sure you clicked the Import from text button in the MoveIt RViz plugin, and not Import File?
Originally posted by gvdhoorn with karma: 86574 on 2014-10-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bjem85 on 2014-10-12:
That was it. Out of interest how come there are two ways of importing the files?
Comment by gvdhoorn on 2014-10-13:
There are more, but in this case I'm guessing that there was a need to separate the import of a single mesh, versus a .scene file that may contain an entire world (although I don't think it was meant for that). | {
"domain": "robotics.stackexchange",
"id": 19647,
"tags": "ros, moveit, planning-scene"
} |
How do you run multiple hector_quadrotors without separate urdf files? | Question:
I'm currently trying to launch multiple hector_quadrotors with a single roslaunch command with ros-hydro. I have got it "working", I have two drones that hover and listen to different 'cmd_vel's.
But to get this to work I had to make my own versions of:
aw_quadrotor.urdf.xacro
aw_quadrotor_base.urdf.xacro
aw_quadrotor_plugins.gazebo.xacro
aw_quadrotor_sensors.gazebo.xacro
aw_quadrotor_simple_controller.gazebo.xacro
And one of these for each drone.
drone0_aw_quadrotor.gazebo.xacro
drone1_aw_quadrotor.gazebo.xacro
When I look at some of the commits, it looks like hector_quadrotor should support multiple drones without having to edit all the urdf files.
https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor/commits/hydro-devel
What am I missing? How can I make sure the plugins for gazebo don't confuse drones with the same base_link name with each other? Using namespaces doesn't seem to propagate down into the xacro macros/gazebo plugins.
My files currently look like this.
aw_spawn_two_drones_gazebo.launch
<launch>
<param name="/use_sim_time" value="true"/>
<include file="$(find gazebo_ros)/launch/empty_world.launch">
</include>
<group ns="drone0">
<include file="$(find aw_hector_quadrotor)/launch/spawn_quadrotor.launch">
<arg name="name" value="drone0"/>
<arg name="pos_x" value="0.2"/>
<arg name="namespace_arg" value="drone0"/>
</include>
</group>
<group ns="drone1">
<include file="$(find aw_hector_quadrotor)/launch/spawn_quadrotor.launch">
<arg name="name" value="drone1"/>
<arg name="pos_x" value="2.0"/>
<arg name="namespace_arg" value="drone1"/>
</include>
</group>
</launch>
spawn_quadrotor.launch
<launch>
<!-- push robot_description to factory and spawn robot in gazebo -->
<arg name="name" default="quadrotor"/>
<arg name="pos_x" default="0.0"/>
<arg name="pos_y" default="0.0"/>
<arg name="pos_z" default="0.5"/>
<arg name="namespace_arg" default=""/>
<arg name="model" default="$(find aw_hector_quadrotor)/urdf/$(arg namespace_arg)_aw_quadrotor.gazebo.xacro"/>
<!-- send the robot XML to param server -->
<param name="robot_description" command="$(find xacro)/xacro.py '$(arg model)'" />
<param name="tf_prefix" value="$(arg namespace_arg)" />
<node name="spawn_robot" pkg="gazebo_ros" type="spawn_model"
args="-param robot_description
-urdf
-x $(arg pos_x)
-y $(arg pos_y)
-z $(arg pos_z)
-model $(arg name)"
respawn="false" output="screen">
</node>
</launch>
drone0_aw_quadrotor.gazebo.xacro
<robot name="quadrotor" xmlns:xacro="http://ros.org/wiki/xacro">
<xacro:include filename="$(find aw_hector_quadrotor)/urdf/aw_quadrotor.urdf.xacro" />
<xacro:include filename="$(find aw_hector_quadrotor)/urdf/aw_quadrotor_plugins.gazebo.xacro" />
<!-- Instantiate quadrotor_base_macro once (has no parameters atm) -->
<xacro:quadrotor_base_macro prefix="drone0"/>
<xacro:quadrotor_sensors_gazebo prefix="drone0"/>
<xacro:quadrotor_sensors/>
<xacro:quadrotor_simple_controller prefix="drone0" state_topic="ground_truth/state" imu_topic="raw_imu"/>
</robot>
aw_quadrotor_base.urdf.xacro
<robot xmlns:xacro="http://ros.org/wiki/xacro">
<xacro:include filename="$(find aw_hector_quadrotor)/urdf/sonar_sensor.urdf.xacro" />
<xacro:property name="pi" value="3.1415926535897931" />
<!-- Main quadrotor link -->
<xacro:macro name="quadrotor_base_macro" params="prefix" >
<link name="${prefix}/base_link">
<inertial>
<mass value="1.477" />
<origin xyz="0 0 0" />
<inertia ixx="0.01152" ixy="0.0" ixz="0.0" iyy="0.01152" iyz="0.0" izz="0.0218" />
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<mesh filename="package://hector_quadrotor_description/meshes/quadrotor/quadrotor_base.dae"/>
</geometry>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0" />
<geometry>
<mesh filename="package://hector_quadrotor_description/meshes/quadrotor/quadrotor_base.stl"/>
</geometry>
</collision>
</link>
<!-- Sonar height sensor -->
<xacro:sonar_sensor name="${prefix}/sonar" parent="${prefix}/base_link" ros_topic="sonar_height" update_rate="10" min_range="0.03" max_range="3.0" field_of_view="${40*pi/180}" ray_count="3">
<origin xyz="-0.16 0.0 -0.012" rpy="0 ${90*pi/180} 0"/>
</xacro:sonar_sensor>
</xacro:macro>
</robot>
Edit:
Thanks, got it working now without my modified urdfs. I'm not sure what fixed it but things I did:
sudo apt-get install ros-hydro-gazebo-ros-pkgs ros-hydro-gazebo-ros-control
sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/ros/xacro.git
And got these from source:
hector_quadrotor
hector_gazebo
hector_localization
hector_models
And rebuilt everything.
Originally posted by Anders on Gazebo Answers with karma: 3 on 2013-08-17
Post score: 0
Answer:
ROS groovy:
Normally the namespace in which the spawn_model node is run in should be propagated to all plugins automatically. For some reason this feature is commented out in the ROS API plugin (see gazebo_ros_api_plugin.cpp#532) in the current version 1.7.12 released in groovy. There is no other option than to modify each plugin description and add the <robotNamespace> tags manually.
ROS hydro:
In the gazebo_ros_pkgs repository, which replaces simulator_gazebo in hydro, the namespace feature is implemented correctly and I am able to launch two quadrotors in different namespaces with a simple additional launch file:
<launch>
<arg name="model" default="$(find hector_quadrotor_description)/urdf/quadrotor.gazebo.xacro" />
<group ns="uav1">
<include file="$(find hector_quadrotor_gazebo)/launch/spawn_quadrotor.launch">
<arg name="name" value="uav1" />
<arg name="tf_prefix" value="uav1" />
<arg name="model" value="$(arg model)" />
<arg name="y" value="-1.0" />
</include>
</group>
<group ns="uav2">
<include file="$(find hector_quadrotor_gazebo)/launch/spawn_quadrotor.launch">
<arg name="name" value="uav2" />
<arg name="tf_prefix" value="uav2" />
<arg name="model" value="$(arg model)" />
<arg name="y" value="1.0" />
</include>
</group>
</launch>
I also committed this version (with slight additional modifications in spawn_quadrotor.launch) in ef43f5d today.
The namespace attribute specified in <group ns="..."> specifies the namespace used for nodes, topics, services and parameters. The name argument is the name of the quadrotor in the gazebo model list. The tf_prefix is the prefix used for all non-global tf frames. x, y, z and model arguments should be self-explanatory.
Note: The tf_prefix parameter has been deprecated in ROS hydro. You currently need an updated version of the message_to_tf package from the hector_localization stack. I hope that backwards compatibility will be reestablished until the final release. hector_common is no longer maintained in hydro.
Originally posted by Johannes Meyer with karma: 196 on 2013-08-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Anders on 2013-08-21:
Thanks, got it working now without my modified urdfs. I'm not sure what fixed it but things I did:
sudo apt-get install ros-hydro-gazebo-ros-pkgs ros-hydro-gazebo-ros-control
sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/ros/xacro.git
git clone https://github.com/tu-darmstadt-ros-pkg/hector_localization.git
And got these from source:
hector_quadrotor
hector_gazebo
hector_localization
hector_models
And rebuilt everything.
Comment by koellsch on 2013-09-25:
Hi Johannes, can you give me an example how to modify the plugin description and add the tags manually? I want to use 2 youbots but because of the namespace-bug I can't.
Comment by Yonggun on 2014-08-15:
Is there anyway for fuerte too? I am working on tum_simulator right now. However, I was able to spawn second ardrone, but I can't control that. Could you give me any recommendation approach idea that I can solve this problem? I really want to control second ardrone too. First ardrone works, but second ardrone doesn't work after I spawn that.
Comment by fermendi on 2016-01-31:
I work on Indigo and Johannes' launch file works fine; the problem appears when you try to visualize the sonar sensors in rviz, because the robots' namespaces don't propagate automatically to the sonar sensors' frameId. An error appears:
Transform [sender=unknown_publisher]
For frame [sonar_link]: Frame [sonar_link] does not exist
I resolved the problem by modifying the plugin, manually adding the robot's namespace in line 63 of sonar_sensor.urdf.xacro. I suspect there is a more elegant way to do that.
Comment by trA on 2018-01-26:
Did you manage to fix the sensor problem? I can't get the cameras to work | {
"domain": "robotics.stackexchange",
"id": 3432,
"tags": "gazebo"
} |
Snake Game - within worksheet - cells as pixels | Question: Since my rather mediocre attempt at making a Space Invaders game, I stumbled on a cache of Visual Basic for Applications games written by Japanese excel wizards. I have even seen someone create Zelda?! What an inspiration! Making complete, beautiful, fun arcade / GameBoy style games inside of an Excel spreadsheet is possible.
This is my first crack at recreating the old game Snake.
Classes:
Snake Part:
Option Explicit
Private Type Properties
row As Long
column As Long
End Type
Private this As Properties
Public Property Let row(ByVal value As Long)
this.row = value
End Property
Public Property Get row() As Long
row = this.row
End Property
Public Property Let column(ByVal value As Long)
this.column = value
End Property
Public Property Get column() As Long
column = this.column
End Property
Public Sub PropertiesSet(ByVal row As Long, ByVal column As Long)
this.row = row
this.column = column
End Sub
TimerWin64:
Option Explicit
Private Declare PtrSafe Function QueryPerformanceCounter Lib "kernel32" (lpPerformanceCount As LongInteger) As Long
Private Declare PtrSafe Function QueryPerformanceFrequency Lib "kernel32" (lpFrequency As LongInteger) As Long
Private Type LongInteger
First32Bits As Long
Second32Bits As Long
End Type
Private Type TimerAttributes
CounterInitial As Double
CounterNow As Double
PerformanceFrequency As Double
End Type
Private Const MaxValue_32Bits = 4294967296#
Private this As TimerAttributes
Private Sub Class_Initialize()
PerformanceFrequencyLet
End Sub
Private Sub PerformanceFrequencyLet()
Dim TempFrequency As LongInteger
QueryPerformanceFrequency TempFrequency
this.PerformanceFrequency = ParseLongInteger(TempFrequency)
End Sub
Public Sub TimerSet()
Dim TempCounterIntital As LongInteger
QueryPerformanceCounter TempCounterIntital
this.CounterInitial = ParseLongInteger(TempCounterIntital)
End Sub
Public Function CheckQuarterSecondPassed() As Boolean
CounterNowLet
If ((this.CounterNow - this.CounterInitial) / this.PerformanceFrequency) >= 0.25 Then
CheckQuarterSecondPassed = True
Else
CheckQuarterSecondPassed = False
End If
End Function
Public Function CheckFiveSecondsPassed() As Boolean
CounterNowLet
If ((this.CounterNow - this.CounterInitial) / this.PerformanceFrequency) >= 10 Then
CheckFiveSecondsPassed = True
Else
CheckFiveSecondsPassed = False
End If
End Function
Public Sub PrintTimeElapsed()
CounterNowLet
If CounterInitalIsSet = True Then
Dim TimeElapsed As Double
TimeElapsed = (this.CounterNow - this.CounterInitial) / this.PerformanceFrequency
Debug.Print Format(TimeElapsed, "0.000000"); " seconds elapsed "
Dim TicksElapsed As Double
TicksElapsed = (this.CounterNow - this.CounterInitial)
Debug.Print Format(TicksElapsed, "#,##0"); " ticks"
End If
End Sub
Private Function CounterNowLet()
Dim TempTimeNow As LongInteger
QueryPerformanceCounter TempTimeNow
this.CounterNow = ParseLongInteger(TempTimeNow)
End Function
Private Function CounterInitalIsSet() As Boolean
If this.CounterInitial = 0 Then
MsgBox "Counter Initial Not Set"
CounterInitalIsSet = False
Else
CounterInitalIsSet = True
End If
End Function
Private Function ParseLongInteger(ByRef LongInteger As LongInteger) As Double
Dim First32Bits As Double
First32Bits = LongInteger.First32Bits
Dim Second32Bits As Double
Second32Bits = LongInteger.Second32Bits
If First32Bits < 0 Then First32Bits = First32Bits + MaxValue_32Bits
If Second32Bits < 0 Then Second32Bits = First32Bits + MaxValue_32Bits
ParseLongInteger = First32Bits + (MaxValue_32Bits * Second32Bits)
End Function
Worksheet Code:
Option Explicit
Public Enum Direction
North = 1
South = 2
East = 3
West = 4
End Enum
Public ws As Worksheet
Public snakeParts As Collection
Public currentRow As Long
Public currentColumn As Long
Public directionSnake As Direction
Sub RunGame()
Set ws = ActiveWorkbook.Sheets("Game")
Set snakeParts = New Collection
Dim gameOver As Boolean
gameOver = False
Dim TimerGame As TimerWin64
Set TimerGame = New TimerWin64
Dim TimerBlueSquare As TimerWin64
Set TimerBlueSquare = New TimerWin64
Dim TimerYellowSquare As TimerWin64
Set TimerYellowSquare = New TimerWin64
Dim SnakePartNew As snakepart
Set SnakePartNew = New snakepart
GameBoardReset
DirectionSnakeInitialize
StartPositionInitalize
StartGameBoardInitalize
TimerGame.TimerSet
TimerBlueSquare.TimerSet
TimerYellowSquare.TimerSet
ws.cells(currentRow, currentColumn).Select
Do While gameOver = False
If TimerGame.CheckQuarterSecondPassed = True Then
CurrentCellUpdate
ws.cells(currentRow, currentColumn).Select
If SnakePartOverlapItself(currentRow, currentColumn) = True Then
gameOver = True
Exit Do
ElseIf SnakePartYellowSquareOverlap = True Then
gameOver = True
Exit Do
ElseIf SnakePartBlueSquareOverlap = True Then
Call SnakePartAdd(currentRow, currentColumn)
Call SnakePartAdd(currentRow, currentColumn)
Call SnakePartAdd(currentRow, currentColumn)
Call SnakePartRemove
ws.cells(currentRow, currentColumn).Select
TimerGame.TimerSet
Else
Call SnakePartAdd(currentRow, currentColumn)
Call SnakePartRemove
ws.cells(currentRow, currentColumn).Select
TimerGame.TimerSet
End If
End If
If TimerBlueSquare.CheckFiveSecondsPassed = True Then
BlueSquareAdd
TimerBlueSquare.TimerSet
End If
If TimerYellowSquare.CheckFiveSecondsPassed = True Then
YellowSquareAdd
TimerYellowSquare.TimerSet
End If
gameOver = OutOfBounds
DoEvents
Loop
End Sub
Private Sub GameBoardReset()
ws.cells.Interior.Color = RGB(300, 300, 300)
End Sub
Private Sub DirectionSnakeInitialize()
directionSnake = East
End Sub
Private Sub StartPositionInitalize()
currentRow = 96
currentColumn = 64
End Sub
Private Sub StartGameBoardInitalize()
Call SnakePartAdd(currentRow, currentColumn - 6)
Call SnakePartAdd(currentRow, currentColumn - 5)
Call SnakePartAdd(currentRow, currentColumn - 4)
Call SnakePartAdd(currentRow, currentColumn - 3)
Call SnakePartAdd(currentRow, currentColumn - 2)
Call SnakePartAdd(currentRow, currentColumn - 1)
Call SnakePartAdd(currentRow, currentColumn)
End Sub
Private Sub SnakePartAdd(ByVal row As Long, ByVal column As Long)
Dim SnakePartNew As snakepart
Set SnakePartNew = New snakepart
SnakePartNew.PropertiesSet row, column
SnakePartAddToCollection SnakePartNew
SnakePartAddToGameBoard SnakePartNew
End Sub
Private Sub SnakePartAddToCollection(ByRef snakepart As snakepart)
snakeParts.add snakepart
End Sub
Private Sub SnakePartAddToGameBoard(ByRef snakepart As snakepart)
ws.cells(snakepart.row, snakepart.column).Interior.Color = RGB(0, 150, 0)
End Sub
Private Sub SnakePartRemove()
SnakePartRemoveFromGameBoard
SnakePartRemoveFromCollection
End Sub
Private Sub SnakePartRemoveFromCollection()
snakeParts.Remove 1
End Sub
Private Sub SnakePartRemoveFromGameBoard()
ws.cells(snakeParts.Item(1).row, snakeParts.Item(1).column).Interior.Color = RGB(300, 300, 300)
End Sub
Private Function OutOfBounds() As Boolean
If currentRow < 9 Or _
currentRow > 189 Or _
currentColumn < 21 Or _
currentColumn > 108 Then
OutOfBounds = True
MsgBox "GameOver"
Else
OutOfBounds = False
End If
End Function
Private Function SnakePartOverlapItself(ByVal row As Long, ByVal column As Long) As Boolean
If ws.cells(row, column).Interior.Color = RGB(0, 150, 0) Then
MsgBox "GameOver"
SnakePartOverlapItself = True
Else
SnakePartOverlapItself = False
End If
End Function
Private Sub BlueSquareAdd()
Dim TopLeftCornerRow As Long
Dim TopLeftCornerColumn As Long
TopLeftCornerRow = Application.WorksheetFunction.RandBetween(9, 189)
TopLeftCornerColumn = Application.WorksheetFunction.RandBetween(21, 108)
ws.cells(TopLeftCornerRow, TopLeftCornerColumn).Interior.Color = RGB(0, 0, 150)
ws.cells(TopLeftCornerRow, TopLeftCornerColumn + 1).Interior.Color = RGB(0, 0, 150)
ws.cells(TopLeftCornerRow + 1, TopLeftCornerColumn).Interior.Color = RGB(0, 0, 150)
ws.cells(TopLeftCornerRow + 1, TopLeftCornerColumn + 1).Interior.Color = RGB(0, 0, 150)
End Sub
Private Function SnakePartBlueSquareOverlap() As Boolean
If ws.cells(currentRow, currentColumn).Interior.Color = RGB(0, 0, 150) Then
SnakePartBlueSquareOverlap = True
Else
SnakePartBlueSquareOverlap = False
End If
End Function
Private Sub YellowSquareAdd()
Dim TopLeftCornerRow As Long
Dim TopLeftCornerColumn As Long
TopLeftCornerRow = Application.WorksheetFunction.RandBetween(9, 189)
TopLeftCornerColumn = Application.WorksheetFunction.RandBetween(21, 108)
ws.cells(TopLeftCornerRow, TopLeftCornerColumn).Interior.Color = RGB(255, 140, 0)
ws.cells(TopLeftCornerRow, TopLeftCornerColumn + 1).Interior.Color = RGB(255, 140, 0)
ws.cells(TopLeftCornerRow + 1, TopLeftCornerColumn).Interior.Color = RGB(255, 140, 0)
ws.cells(TopLeftCornerRow + 1, TopLeftCornerColumn + 1).Interior.Color = RGB(255, 140, 0)
End Sub
Private Function SnakePartYellowSquareOverlap() As Boolean
If ws.cells(currentRow, currentColumn).Interior.Color = RGB(255, 140, 0) Then
MsgBox "GameOver"
SnakePartYellowSquareOverlap = True
Else
SnakePartYellowSquareOverlap = False
End If
End Function
Private Sub CurrentCellUpdate()
Select Case directionSnake
Case Is = Direction.North
currentRow = currentRow - 1
Case Is = Direction.South
currentRow = currentRow + 1
Case Is = Direction.East
currentColumn = currentColumn + 1
Case Is = Direction.West
currentColumn = currentColumn - 1
End Select
End Sub
Private Sub SnakeCollectionUpdate(ByRef snakeParts As Collection)
snakeParts.add currentRow
End Sub
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
'rowSwitch
If directionSnake = East Or directionSnake = West Then
If Target.column = currentColumn Then
If Target.row <> currentRow Then
If Target.row = currentRow - 1 Then
directionSnake = North
ElseIf Target.row = currentRow + 1 Then
directionSnake = South
End If
End If
End If
End If
'columnSwitch
If directionSnake = North Or directionSnake = South Then
If Target.row = currentRow Then
If Target.column <> currentColumn Then
If Target.column = currentColumn + 1 Then
directionSnake = East
ElseIf Target.column = currentColumn - 1 Then
directionSnake = West
End If
End If
End If
End If
End Sub
Answer: I could take this as a not-so-subtle reminder that I need to finish my Excel Tetris implementation... :-P
I am a little curious why you seem to have abandoned the OOP approach since your last game - this code is completely procedural (the presence of classes doesn't mean that it's object oriented).
A discussion of the architecture would basically entail a top-down re-write, so I'll leave that for other reviewers.
Indentation
This is, well, ...weird. I initially thought it was simply a markdown problem in the question itself, but as I went through the code further, it seems more and more intentional. Why are your procedures creeping to the right? I originally thought that it had something to do with the scope, (Public members indented one level, Private two), but that doesn't jive with this:
Private this As TimerAttributes
Private Sub Class_Initialize()
PerformanceFrequencyLet
End Sub
Private Sub PerformanceFrequencyLet()
Dim TempFrequency As LongInteger
QueryPerformanceFrequency TempFrequency
this.PerformanceFrequency = ParseLongInteger(TempFrequency)
End Sub
Public Sub TimerSet()
Dim TempCounterIntital As LongInteger
QueryPerformanceCounter TempCounterIntital
this.CounterInitial = ParseLongInteger(TempCounterIntital)
End Sub
This is incredibly distracting, and is completely "non-standard" (I've never seen this done in any language). The last thing you want when somebody else is looking at your code is to distract them with the formatting. It's also generally meaningless in that I can just look at the access modifier (assuming it has something to do with scope). My brain is telling me that I'm in a procedure when I'm not, and it was disorienting to the point that I had to run an indenter on this before I continued the review.
API Functions
Your declarations of QueryPerformanceCounter and QueryPerformanceFrequency are incorrect. From the documentation of QueryPerformanceCounter, it is defined as:
BOOL WINAPI QueryPerformanceCounter(
_Out_ LARGE_INTEGER *lpPerformanceCount
);
Furthermore, the documentation states "On systems that run Windows XP or later, the function will always succeed and will thus never return zero", so unless you are intending to support pre-XP versions of Windows (which would likely require a pre-compile directive to get rid of the PtrSafe keyword anyway), this can simply be declared as a Sub. The same applies to QueryPerformanceFrequency:
BOOL WINAPI QueryPerformanceFrequency(
_Out_ LARGE_INTEGER *lpFrequency
);
You are also never checking the return value anyway, so if you're using them as Sub's (discarding the otherwise deterministic return value), declare them as a Sub's:
Private Declare PtrSafe Sub QueryPerformanceCounter Lib "kernel32" (ByRef lpPerformanceCount As LongInteger)
Private Declare PtrSafe Sub QueryPerformanceFrequency Lib "kernel32" (ByRef lpFrequency As LongInteger)
Note that I've also explicitly declared the parameters ByRef. I'd get in the habit of doing this for out parameters of API declarations because it makes the usage clear without consulting the documentation.
Your LongInteger struct is also misleadingly named, in that a "long int" has a different meaning when you're thinking in API terms. It means "at least 32 bits". This is why the LARGE_INTEGER struct exists (it's technically a union). I'd use the API naming and simply call it a LargeInteger to avoid confusion. I'll propose what I'd consider a better option below.
The ParseLongInteger function performs so much work to handle the unsigned low DWORD that makes me wonder if it's really worth using at all for the additional resolution that it provides. The maximum resolution you require is quarter-second accuracy. On top of that, you're performing a fairly dirty cast when you coerce the value into a Double in order to handle the return value on a 32-bit machine (it's a simple LongLong in 64-bit Office). If you intend to support both platforms, I'd suggest going simple and using GetTickCount and GetTickCount64 (conditionally compiled) instead. Or, you could use a game loop similar to what I suggested on your Space Invader Style Game question.
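For reference, the unsigned fix-up that ParseLongInteger attempts can be expressed quite compactly. Here is an illustrative Python sketch (names are mine, not from the code under review) of reassembling a LARGE_INTEGER from its two signed 32-bit halves; note, incidentally, that the original's second branch adds MaxValue_32Bits to First32Bits instead of Second32Bits, which looks like a copy-paste slip:

```python
TWO_32 = 2 ** 32

def combine_halves(low, high):
    """Combine the two signed 32-bit halves of a LARGE_INTEGER into one number.

    The low half arrives as a *signed* 32-bit value, so when it is negative it
    must be re-interpreted as unsigned before adding high * 2**32 to it.
    """
    if low < 0:
        low += TWO_32
    if high < 0:
        high += TWO_32  # only matters for counts past 2**63 ticks
    return low + TWO_32 * high

# 5_000_000_000 ticks splits into high = 1, low = 705_032_704:
print(combine_halves(705_032_704, 1))   # -> 5000000000
# A low half of all one-bits reads as -1 when treated as signed:
print(combine_halves(-1, 0))            # -> 4294967295
```

That said, for quarter-second resolution none of this machinery is necessary, which is why the simpler tick-count approach below is preferable.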
Procedure Signatures
You have functions with no return values, such as this one:
Private Function CounterNowLet()
Dim TempTimeNow As LongInteger
QueryPerformanceCounter TempTimeNow
this.CounterNow = ParseLongInteger(TempTimeNow)
End Function
This always returns Empty, and the "return value" is never checked. You're using it like it's a Sub, so declare it as a Sub. As it stands now, it appears to be a bug even though it isn't.
Sub RunGame() is missing an access modifier. You have them explicitly defined elsewhere, and this is implicitly public. Make it explicit.
You're requiring passing module level variables around as arguments all over the place in the worksheet, i.e.
Private Function SnakePartOverlapItself(ByVal row As Long, ByVal column As Long) As Boolean
...which is always called with the arguments currentRow and currentColumn - both of which are module level. They can be omitted entirely.
Scope
Direction is not used outside of the worksheet it's declared in (more on that below). It also has no meaning outside of the context of the game and uses a very common word for an identifier - it's not hard to imagine a bunch of other ways it could potentially be used in other projects. Make it Private so it can't create namespace conflicts in code you don't own. In general, you should be declaring things with the smallest possible scope.
There is absolutely no reason for these members of the worksheet to be Public:
Public ws As Worksheet
Public snakeParts As Collection
Public currentRow As Long
Public currentColumn As Long
Public directionSnake As Direction
If they need to be used like class members, make them Private - as it stands now they break encapsulation.
Miscellaneous
This is a run-time error waiting to happen:
Set ws = ActiveWorkbook.Sheets("Game")
What if the active workbook doesn't contain a worksheet named "Game"? What if it contains a chart named "Game"? I'd either get rid of this entirely and use the code name of the sheet explicitly or (more likely for this purpose) just create a new worksheet for the game to run on with the understanding that the user will just delete it afterward.
This code likely doesn't belong in a worksheet at all - it looks like it wants to be in its own class with a single public RunGame(target As Worksheet) method. I suspect that it's currently in a worksheet because of the Worksheet_SelectionChange handler, but there's nothing that says a user class can't hold a Worksheet member WithEvents.
This is a meaningless assignment:
Dim gameOver As Boolean
gameOver = False
The default value of a Boolean is False.
Range.Select should never be used in a loop that calls DoEvents without checking the ActiveWorkbook. If the intention is that it should re-focus the game worksheet if the user selects something else (like sets focus to a different worksheet or workbook), you should handle that with an event handler. If another workbook becomes active, this is pretty much an instant error 1004.
The Call keyword is ancient history (and only exists for backward compatibility), and you're using it inconsistently. There is no reason to use it at all, so I'd recommend getting rid of it.
snakepart might call itself a class, but it's really just a glorified Type used to hold two dimensional coordinates. I'd consider re-architecting this to just store the entire game state in a two dimensional array.
The calls to MsgBox "GameOver" belong in the RunGame() method instead of sprinkled all over the tests for game ending conditions. Just put a single call after your loop exits - there's no other way to exit the loop, so that seems like the more logical place for it.
Related to the above, your flow control within the loop is kind of contorted. Your exit condition is Do While gameOver = False, and you have multiple checks for that condition here:
If SnakePartOverlapItself(currentRow, currentColumn) = True Then
gameOver = True
Exit Do
ElseIf SnakePartYellowSquareOverlap = True Then
gameOver = True
Exit Do
So, you're testing for True, then setting your exit flag to True, then explicitly exiting the loop with Exit Do.
I'm also struggling to see the need for 3 separate game timers - they are always initialized one right after the other, so they should only be milliseconds apart (unless you're stepping through with the debugger). The entire loop could be simplified to something more like this:
Do
If TimerGame.CheckFiveSecondsPassed Then
BlueSquareAdd
YellowSquareAdd
End If
If TimerGame.CheckQuarterSecondPassed Then
CurrentCellUpdate
ws.Cells(currentRow, currentColumn).Select
Dim part As Long
For part = 1 To IIf(SnakePartBlueSquareOverlap, 3, 1)
SnakePartAdd
Next
SnakePartRemove
ws.Cells(currentRow, currentColumn).Select
TimerGame.TimerSet
End If
DoEvents
Loop Until SnakePartOverlapItself Or SnakePartYellowSquareOverlap Or OutOfBounds
MsgBox "Game Over" | {
"domain": "codereview.stackexchange",
"id": 33746,
"tags": "beginner, game, vba, excel"
} |
An iterator returning all possible partitions of a list in Java | Question: Given a list of \$n\$ objects and an integer \$k \in \{ 1, 2, \dots, n \}\$, this iterator generates all possible ways of partitioning the elements in the list into exactly \$k\$ disjoint, non-empty blocks (partitions).
There are exactly \$S(n, k)\$ such partitions; see https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind.
My code is as follows:
PartitionIterable.java:
package net.coderodde.util;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
/**
* This class implements an {@code Iterable} over all partitions of a given
* list.
*
* @author Rodion "rodde" Efremov
* @version 1.6 (Feb 14, 2016 a.k.a. Friend Edition)
* @param <T> The actual element type.
*/
public class PartitionIterable<T> implements Iterable<List<List<T>>> {
private final List<T> allElements = new ArrayList<>();
private final int blocks;
public PartitionIterable(List<T> allElements, int blocks) {
checkNumberOfBlocks(blocks, allElements.size());
this.allElements.addAll(allElements);
this.blocks = blocks;
}
@Override
public Iterator<List<List<T>>> iterator() {
return new PartitionIterator<>(allElements, blocks);
}
private void checkNumberOfBlocks(int blocks, int numberOfElements) {
if (blocks < 1) {
throw new IllegalArgumentException(
"The number of blocks should be at least 1, received: " +
blocks);
}
if (blocks > numberOfElements) {
throw new IllegalArgumentException(
"The number of blocks should be at most " +
numberOfElements + ", received: " + blocks);
}
}
private static final class PartitionIterator<T>
implements Iterator<List<List<T>>> {
private List<List<T>> nextPartition;
private final List<T> allElements = new ArrayList<>();
private final int blocks;
private final int[] s;
private final int[] m;
private final int n;
PartitionIterator(List<T> allElements, int blocks) {
this.allElements.addAll(allElements);
this.blocks = blocks;
this.n = allElements.size();
s = new int[n];
m = new int[n];
if (n != 0) {
for (int i = 0; i < n - blocks + 1; ++i) {
s[i] = 0;
m[i] = 0;
}
for (int i = n - blocks + 1; i < n; ++i) {
s[i] = m[i] = i - n + blocks;
}
loadPartition();
}
}
@Override
public boolean hasNext() {
return nextPartition != null;
}
@Override
public List<List<T>> next() {
if (nextPartition == null) {
throw new NoSuchElementException("No more partitions left.");
}
List<List<T>> partition = nextPartition;
generateNextPartition();
return partition;
}
private void loadPartition() {
nextPartition = new ArrayList<>(blocks);
for (int i = 0; i < blocks; ++i) {
nextPartition.add(new ArrayList<>());
}
for (int i = 0; i < n; ++i) {
nextPartition.get(s[i]).add(allElements.get(i));
}
}
private void generateNextPartition() {
for (int i = n - 1; i > 0; --i) {
if (s[i] < blocks - 1 && s[i] <= m[i - 1]) {
s[i]++;
m[i] = Math.max(m[i], s[i]);
for (int j = i + 1; j < n - blocks + m[i] + 1; ++j) {
s[j] = 0;
m[j] = m[i];
}
for (int j = n - blocks + m[i] + 1; j < n; ++j) {
s[j] = m[j] = blocks - n + j;
}
loadPartition();
return;
}
}
nextPartition = null;
}
}
public static void main(String[] args) {
List<String> list = Arrays.asList("A", "B", "C", "D");
int row = 1;
for (int blocks = 1; blocks <= list.size(); ++blocks) {
for (List<List<String>> partition :
new PartitionIterable<>(list, blocks)) {
System.out.printf("%2d: %s\n", row++, partition);
}
}
}
}
For example, all partitions of the set \$\{ A, B, C, D \}\$ are
1: [[A, B, C, D]]
2: [[A, B, C], [D]]
3: [[A, B, D], [C]]
4: [[A, B], [C, D]]
5: [[A, C, D], [B]]
6: [[A, C], [B, D]]
7: [[A, D], [B, C]]
8: [[A], [B, C, D]]
9: [[A, B], [C], [D]]
10: [[A, C], [B], [D]]
11: [[A], [B, C], [D]]
12: [[A, D], [B], [C]]
13: [[A], [B, D], [C]]
14: [[A], [B], [C, D]]
15: [[A], [B], [C], [D]]
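As a sanity check, the counts above line up with the Stirling numbers of the second kind via the standard recurrence \$S(n, k) = k\,S(n-1, k) + S(n-1, k-1)\$. A minimal sketch (not part of the reviewed code):

```python
def stirling2(n, k):
    # S(n, k): number of ways to partition n elements into k non-empty blocks.
    # Recurrence: the n-th element either joins one of the k blocks of a
    # partition of n-1 elements, or forms a new singleton block.
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# For {A, B, C, D}: S(4,1) + S(4,2) + S(4,3) + S(4,4) = 1 + 7 + 6 + 1 = 15
print(sum(stirling2(4, k) for k in range(1, 5)))  # 15
```

which matches the 15 rows printed by the iterator.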
Is there anything to improve here?
Answer: Your code looks really good, but there are always ways to improve :)
Add a printout of the current \$k\$ to the main method. For me the output was unclear without it, and I couldn't compare the output with the "golden" table.
I would rename allElements to elements because the all- prefix doesn't add any information.
Strings in IllegalArgumentException constructors may be changed to String.format("...text...", params...) it'll simplify i18n of such strings in the future
Use java.util.Arrays#fill for filling arrays instead of loops, or even remove the zeroing entirely, as the Java spec guarantees arrays are zero-filled on initialization.
Extract this initialization into a separate method (remember SRP)
Make the parameter checks in checkNumberOfBlocks and PartitionIterator consistent (or even remove them from PartitionIterator entirely, as it's a private implementation)
Split generateNextPartition into separate methods, each with a single responsibility and an appropriate name. E.g., for me it is unclear how generateNextPartition works now. Remember: empty lines within a method body are a sign of breaking SRP and a "code smell" in most cases | {
"domain": "codereview.stackexchange",
"id": 19319,
"tags": "java, algorithm, combinatorics, iteration"
} |
Dirac bracket and second class constraints in first-order gravity formalism | Question: In the first order formulation of general relativity, the frame field $e_{\mu}^a$ and $\mathrm{SO}(3,1)$ spin connection $\omega_{\mu c}^b$ are independent variables. In the Hamiltonian formulation of this theory, one finds that there are second-class constraints.
According to Dirac, the way to deal with these second-class constraints when quantising is to first define the Dirac bracket, which is essentially a new Poisson bracket that 'respects the constraints', in the sense that the Dirac bracket of any two constraints is another constraint, and then proceed with the quantisation procedure.
After looking a little bit in the literature, I have been unable to find any paper that actually attempts to construct the Dirac bracket for the first-order formulation of general relativity. And indeed it seems people go to lengths to reformulate gravity so that it doesn't have any second class constraints from the get-go (e.g. using the Ashtekar variables). My question is, has the Dirac bracket for first-order gravity been constructed? If so, a reference would be great.
Answer: D.G.C. McKeon, The Canonical Structure of the First Order Einstein-Hilbert Action, arXiv:1005.3001,
Journal-ref: Int.J.Mod.Phys.A25:3453-3480,2010. | {
"domain": "physics.stackexchange",
"id": 18594,
"tags": "general-relativity, quantum-gravity, quantization, constrained-dynamics"
} |
Improving Modeling of Thermal Noise Propagation Through a Signal Chain | Question: Background
I recently asked this question over on Electrical Engineering Stack Exchange. On the advice of some commenters there, I've broken off those pieces which are appropriate for asking as smaller questions here.
I'm attempting to write a physics simulation code, one portion of which involves the simulation of the voltage observed at a set of \$N\$ radio antennae immersed in some medium due to thermal (or "Johnson-Nyquist") noise, and which may expand to include other noise sources (e.g. triboelectric, anthropogenic) in the future.
For now, I do so by modeling thermal noise simplistically as Gaussian-distributed white noise centered on \$0\$ and with \$V_{RMS} = \sqrt{4k_BTBR}\$. The voltage "waveforms" are produced by drawing \$N\$ sets of \$\textrm{sampling rate} \times \textrm{duration}\$ samples from the Gaussian distribution.
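For reference, the baseline model described above can be sketched as follows (the function name and the parameter values are illustrative, not taken from the actual simulation code):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant in J/K

def thermal_noise_waveforms(n_antennas, temperature, bandwidth, resistance,
                            sampling_rate, duration, rng=None):
    """Gaussian white-noise voltage traces with V_rms = sqrt(4 k_B T B R)."""
    rng = np.random.default_rng() if rng is None else rng
    v_rms = np.sqrt(4.0 * K_B * temperature * bandwidth * resistance)
    n_samples = int(round(sampling_rate * duration))
    return rng.normal(0.0, v_rms, size=(n_antennas, n_samples))

# e.g. 3 antennas at 300 K, 400 MHz bandwidth, 50 ohm, 1 GS/s for 1 microsecond
waveforms = thermal_noise_waveforms(3, 300.0, 400e6, 50.0, 1e9, 1e-6)
```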
After some discussion with faculty advisors, I've decided that I'd like to improve upon this simplistic model (largely because we find it insufficiently close to data). One consideration I'd like to model is the signal chain, and that's what I'll discuss here.
What I'd Like to Achieve
While I do have some hardware that I'm modeling against, I'd like to generalize this to both other current and future sets of hardware. In general, the hardware consists of:
The antennae. For each antenna, I know the polarization (vertical or horizontal, with respect to our coordinate system), dimensions, material composition, response as a function of frequency, and may also know the radiation pattern. I'd like to incorporate these where relevant.
A cable of known length and resistance, for each antenna.
Filters. A Butterworth band-pass filter, and in some cases a notch filter (so, we effectively have two Butterworth band-pass filters), for each antenna. I know that I can let $B$ in the $V_{RMS}$ calculation above be the integral under the filter transfer function across the pass-band (as-is, it assumes a perfect, rectangular pass-band).
LNA. A low-noise amplifier with known noise figure, for each antenna.
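To illustrate the point about letting \$B\$ be the integral under the filter transfer function, here is a sketch for a Butterworth low-pass (a low-pass is used for simplicity, and the order and cutoff are made-up values; a band-pass version follows the same idea with the band-pass transfer function):

```python
import numpy as np

def butterworth_noise_bandwidth(f_cut, order, f_max=None, n_points=200_000):
    """Noise bandwidth: integral of |H(f)|^2 = 1/(1 + (f/f_cut)^(2*order))."""
    f_max = 20.0 * f_cut if f_max is None else f_max
    f = np.linspace(0.0, f_max, n_points)
    h2 = 1.0 / (1.0 + (f / f_cut) ** (2 * order))
    return float(np.sum(h2) * (f[1] - f[0]))  # simple Riemann sum

# 4th-order low-pass, 400 MHz cutoff: the result is slightly wider than the
# rectangular pass-band (closed form: f_cut * (pi/(2n)) / sin(pi/(2n)))
b_noise = butterworth_noise_bandwidth(400e6, 4)
```

The result (about 410 MHz here) then replaces the rectangular-band \$B\$ in \$V_{RMS} = \sqrt{4k_BTBR}\$.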
How can I implement more sophisticated hardware modeling, taking into account the detailed properties of the antennae and signal chain?
I understand that this is likely a rather involved task, and that what is sufficiently complex in any physics model is subjective. I'm looking for (ideally mathematically-motivated) suggestions for how to move forward, and resources for further reading (ideally accessible to a (perhaps slightly advanced) upper-level undergraduate). As of now, I simply don't know where to begin (nor what would be a good framework to begin with that would allow for additional layers of complexity to be easily added on).
Answer: The hardware modelling for the propagation of thermal noise in a receiver is well established as the process of determining the cascaded noise figure. Noise figure establishes the receiver sensitivity, and similarly cascaded analysis is done for linearity metrics such as 1 dB compression and two-tone third-order intercept, which help to establish the strongest signal before non-linearities disrupt reception. A complete analysis also includes the effect of phase noise, and cross-modulation effects in frequency translation stages.
Here are other existing posts where cascaded noise figure and thermal noise propagation in a receiver is detailed further:
Detection Bandwidth for Noise Power Calculation
Is noise figure dependent on input noise power?
noise floor of attenuator
How to calculate a mixer noise? | {
"domain": "dsp.stackexchange",
"id": 12104,
"tags": "filters, noise, butterworth, math"
} |
Use paste function with apply families | Question: I am trying to assign different window sizes to my SNPs dataset to identify regions under selection.
this is the head of my data
head(snp_ids)
snp_id chr pos
Chr01__912 1 912
Chr01__944 1 944
Chr01__1107 1 1107
Chr01__1118 1 1118
Chr01__1146 1 1146
Chr01__1160 1 1160
class(snp_ids)
data.frame
I have chosen 4 different window sizes, win_size <- c(15000, 30000, 50000, 100000).
I have assigned each of these different window sizes to my snp_ids dataset to identify how many SNPs are distributed within each window by looping through each window size
for (i in 1:length(win_size)){
windows <- sapply(snp_ids$pos, function(x) (ceiling(x/win_size[i])))
}
Answer: It's not entirely clear what you're trying to do, but here is a solution using lapply that should replicate your existing for loop.
# example input data
snp_ids <- read.table(text = "
snp_id chr pos
Chr01__912 1 912
Chr01__944 1 944
Chr01__1107 1 1107
Chr01__1118 1 1118
Chr01__1146 1 1146
Chr01__1160 1 1160", header = TRUE)
# window sizes to loop through
win_size <- c(15000, 30000, 50000, 100000)
res <- cbind(snp_ids,
data.frame(
lapply(setNames(win_size, paste("window", win_size, sep = "_")), function(w)
as.numeric(paste(snp_ids$chr, ceiling(snp_ids$pos/w), sep = "."))
)))
# result
res
# snp_id chr pos window_15000 window_30000 window_50000 window_1e.05
# 1 Chr01__912 1 912 1.1 1.1 1.1 1.1
# 2 Chr01__944 1 944 1.1 1.1 1.1 1.1
# 3 Chr01__1107 1 1107 1.1 1.1 1.1 1.1
# 4 Chr01__1118 1 1118 1.1 1.1 1.1 1.1
# 5 Chr01__1146 1 1146 1.1 1.1 1.1 1.1
# 6 Chr01__1160 1 1160 1.1 1.1 1.1 1.1
Edit: If we really want to use a for loop, then try the following:
for (i in win_size){
windows <- ceiling(snp_ids$pos/i)
snp_ids[, paste0("window_", i)] <- as.numeric(paste(snp_ids$chr, windows, sep = "."))
} | {
"domain": "bioinformatics.stackexchange",
"id": 305,
"tags": "r"
} |
Radon or cosmic ray? | Question: My student constructed a cloud chamber to detect the background radiation. It is very simple and efficient.
But the question is, is the radiation mostly from the Radon gas or the cosmic ray?
See some video here:
https://www.youtube.com/watch?v=xky3f1aSkB8
Answer: Unless your student is purposely placing a radioactive source into the chamber, or there are sources near the chamber, the more likely suspect is a cosmic ray if you are seeing condensation trails.
To see radon-induced trails from the environment would likely mean you have dangerous radon levels in your environment. Without an intentional source placed, and unless you are observing in a uranium mine or in a building with known radon outgassing from the floors, the more likely source of the trails you are seeing is cosmic rays.
"domain": "physics.stackexchange",
"id": 38380,
"tags": "experimental-physics, home-experiment, radioactivity, particle-detectors, cosmic-rays"
} |
Is IP a property of the device one uses or the router through which one connects to the internet? | Question: Let's say I visit an IP checker online: is the IP shown there my IP or the IP of my router? I can't seem to find a conclusive authoritative answer online.
Answer: It depends on how your network is configured.
Since you talk about a router, most domestic users have a public IPv4 assigned to their router and then a private subnetwork for all the home devices. The router is used as a gateway to the internet and it handles translating the private addresses to the public address through a mechanism called NAT.
In this case you'd see the public IP assigned to your router.
The situation could be different for IPv6 addresses. For example your provider can assign a /64 IPv6 prefix to your router, which then advertises it to your network. Each device builds its own (public) IPv6 address by combining the prefix with a 64-bit device-specific suffix (for simplicity, think of it as the MAC address of the network interface card, although this is not always the case).
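For illustration, combining a /64 prefix with a 64-bit interface identifier can be sketched with Python's ipaddress module (the prefix and the suffix value here are made up):

```python
import ipaddress

# Hypothetical /64 prefix advertised by the router, plus a device-specific
# 64-bit interface identifier (e.g. derived from the NIC's MAC address).
prefix = ipaddress.ip_network("2001:db8:abcd:12::/64")
interface_id = 0x0211_22ff_fe33_4455  # EUI-64-style suffix, made up here

# The device's global address is the prefix's network bits plus the suffix.
address = ipaddress.ip_address(int(prefix.network_address) | interface_id)
print(address)  # 2001:db8:abcd:12:211:22ff:fe33:4455
```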
Keep in mind that these are just two examples. Only you know what your network looks like.
"domain": "cs.stackexchange",
"id": 20224,
"tags": "ip"
} |
Why is the infinite norm of state vectors in QFT a result of an infinite spacetime volume? | Question: A somewhat confusing aspect of QFT is that the particle states we usually work with,
$$|{1_{\vec{k}}} \rangle\equiv a^\dagger(\vec{k}) |0 \rangle, $$
are not normalizable since
$$ \langle{1_{\vec{k}}|1_{\vec{k}'}}\rangle = (2 \pi)^3 \delta(\vec k-\vec k') .$$
In particular, we have
$$ \langle{1_{\vec{k}}|1_{\vec{k}'}}\rangle = (2 \pi)^3 \delta(0) \, .$$
This, of course, doesn't look very promising. In almost all QFT textbooks with little to no further motivation it is argued that these factors $\delta(0)$ (that also appear, for example, in our scattering amplitudes) are a result of the fact that we work with an infinite spacetime volume. Formulated differently, $\langle{1_{\vec{k}}|1_{\vec{k}'}}\rangle \propto \delta(0)$ is an infrared divergence. By introducing a finite spacetime volume $V$ and using an integral representation of $\delta(x)$ all occurrences of $\delta(0)$ are replaced by $V$. This way the divergences are cancelled out.
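Explicitly, the replacement follows from the integral representation of the delta function: setting $\vec k = \vec k'$ and cutting the spatial integral off at a finite volume $V$ gives

$$(2\pi)^3 \delta^{(3)}(\vec k - \vec k') = \int \mathrm{d}^3x \; e^{i(\vec k - \vec k')\cdot \vec x} \quad \Longrightarrow \quad (2\pi)^3 \delta^{(3)}(0) = \int_V \mathrm{d}^3x = V .$$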
Is there any way to motivate (beyond the formal statement using an integral representation of $\delta(x)$) why the norm of our states should be related to the spacetime volume $V$?
Answer: Momentum eigenstates are non-normalizable in QFT for precisely the same reason that they are non-normalizable in regular old nonrelativistic quantum mechanics - they are pure plane waves which fill the entire space.
Integrating $|e^{ikx}|^2=1$ across all of space yields the volume of the space. I’m not sure to what extent this needs to be motivated, insofar as it is pretty clear once you have a handle on what one-particle states actually are. Explicitly,
$$\langle k|k\rangle = \int dx \ \langle k|x\rangle\langle x|k\rangle = \int dx \ e^{-ikx} e^{ikx}$$
$$=\int dx = V$$ | {
"domain": "physics.stackexchange",
"id": 62173,
"tags": "quantum-field-theory"
} |
Simple equation parser and evaluator | Question: I am relatively new to C++ coming from languages such as Java and would like tips to make my code more C++ like and doing things in the C++ way.
The header file:
#pragma once
#include <string>
#include <memory>
class Expression
{
public:
std::string symbol;
std::shared_ptr<Expression> left;
std::shared_ptr<Expression> right;
float eval();
static Expression parse(std::string s);
private:
Expression(std::string symbol, Expression *left, Expression *right);
static Expression parseRec(std::string s);
static float evalRec(const Expression &e);
};
cpp file:
#include "Expression.hpp"
#include <unordered_map>
Expression::Expression(std::string symbol, Expression *left, Expression *right)
: symbol(symbol), left(left), right(right) {}
float Expression::eval()
{
return Expression::evalRec(*this);
}
float Expression::evalRec(const Expression &e)
{
switch (e.symbol[0])
{
case '+':
return evalRec(*e.left) + evalRec(*e.right);
case '-':
return evalRec(*e.left) - evalRec(*e.right);
case '*':
return evalRec(*e.left) * evalRec(*e.right);
case '/':
return evalRec(*e.left) / evalRec(*e.right);
default:
return std::stoi(e.symbol);
}
}
Expression Expression::parse(std::string s)
{
//Remove whitespace
std::string output;
for (auto &i : s)
{
if (i != ' ')
output += i;
}
return parseRec(output);
}
Expression Expression::parseRec(std::string s)
{
const std::unordered_map<char, int> precedence = {{'+', 1}, {'-', 1}, {'*', 10}, {'/', 10}};
int indexOfLowest = -1;
int i = 0;
for (auto &&j : s)
{
switch (j)
{
case '+':
case '-':
case '*':
case '/':
if (indexOfLowest == -1)
indexOfLowest = i;
else if (precedence.at(j) <= precedence.at(s[indexOfLowest]))
indexOfLowest = i;
break;
default:
break;
}
i++;
}
if (indexOfLowest == -1)
{
return Expression(s, NULL, NULL);
}
else
{
return Expression(std::string(1, s[indexOfLowest]),
new Expression(parseRec(s.substr(0, indexOfLowest))),
new Expression(parseRec(s.substr(indexOfLowest + 1, s.length() - indexOfLowest - 1))));
}
}
and a simple main:
#include <iostream>
#include "Expression.hpp"
int main(int argc, char const *argv[])
{
//Parsed expression tree
Expression e = Expression::parse("5 + 10 * 2 + 6 / 3");
std::cout << e.eval() << std::endl;
return 0;
}
Answer: Here are some things that may help you improve your code.
Use include guards
There should be an include guard in each .h file. That is, start the file with:
#ifndef EXPRESSION_H
#define EXPRESSION_H
// file contents go here
#endif // EXPRESSION_H
The use of #pragma once is a common extension, but it's not in the standard and thus represents at least a potential portability problem. See SF.8
Consider using standard routines
C++ provides a great number of useful utilites such as std::remove that you could use instead of your own hand-written function to remove whitespace. They're almost always faster and less buggy and by using library software, you can develop your own code more quickly and accurately. So instead of this:
//Remove whitespace
std::string output;
for (auto &i : s)
{
if (i != ' ')
output += i;
}
return parseRec(output);
One could use this:
s.erase(std::remove(s.begin(), s.end(), ' '), s.end());
return parseRec(s);
Note that this uses s rather than output per the next suggestion.
Understand pass-by-value
One thing that is quite different about C++ as compared with Java is how parameter passing is done. In general, there is pass-by-value and pass-by-reference. The pass-by-value is what we get when we write things like this:
static Expression parse(std::string s);
The gives parse its own copy of s. If we use pass-by-reference, it would generally look something like this:
static Expression parse(const std::string& s);
Now we are giving parse a reference (indicated by &) rather than copying it. In this example, we use the very common idiom of passing a const reference. This indicates that the called function parse may examine but not alter the value. Unlike Java, there is no automatic garbage collector in C++, so it's up to you to manage memory. Generally, passing by reference is faster and uses less memory, but in cases where we would be making a copy anyway, passing by value makes sense.
Prefer private to public where practical
The Expression class has its only data members as public members. Rather than do it that way, it would be better to keep them private. See C.9 for details.
Eliminate redundant functions
When a non-static member function is called in a C++ class, *this is implicitly passed. So the result is that this function is entirely useless:
float Expression::eval()
{
return Expression::evalRec(*this);
}
Instead, just rename evalRec as eval and write it as a member function:
float Expression::eval()
{
switch (symbol[0])
{
case '+':
return left->eval() + right->eval();
case '-':
return left->eval() - right->eval();
case '*':
return left->eval() * right->eval();
case '/':
return left->eval() / right->eval();
default:
return std::stoi(symbol);
}
}
Simplify your code by making full use of standard library structures
The parseRec routine uses a std::unordered_map and also a switch to find the operator with the lowest precedence. I suggest that when you see that you have repeated the operators in multiple places, this is a hint that perhaps you could use a different approach to avoid this duplication. In this case, my inclination would be to use std::min_element. This requires beginning and ending iterators and a comparison operation. First, let's write a comparison operation:
enum class op_precedence { LOW, MED, NON_OP };
static constexpr op_precedence precedence(char ch) {
op_precedence val{op_precedence::NON_OP};
switch (ch) {
case '+':
case '-':
val = op_precedence::LOW;
break;
case '*':
case '/':
val = op_precedence::MED;
break;
default:
break;
}
return val;
}
Now we can use std::min_element:
std::string::iterator lowest{std::min_element(s.begin(), s.end(),
[](const char &a, const char &b){
return precedence(a) < precedence(b);
})
};
This uses a lambda to actually effect the comparison. So now the whole function looks like this:
Expression Expression::parseRec(std::string s) {
std::string::iterator lowest{std::min_element(s.begin(), s.end(),
[](const char &a, const char &b){
return precedence(a) < precedence(b);
})};
if (precedence(*lowest) == op_precedence::NON_OP) {
return Expression(s, nullptr, nullptr);
} else {
return Expression(std::string(1, *lowest),
new Expression(parseRec(s.substr(0, lowest-s.begin()))),
new Expression(parseRec(s.substr(lowest-s.begin() + 1)))
);
}
}
Note, too, that this uses the modern nullptr rather than the old C-style NULL.
Dive deeper into C++
I know you're just beginning with C++ (and it looks like a promising beginning!) but to whet your appetite for more, we can actually go a little further with the concept mentioned above. Specifically, we have a similar duplication in eval, so we could further consolidate information. Here's one way to do it:
enum class op_precedence { LOW, MED, NON_OP };
struct op {
op_precedence precedence;
float (*operation)(float, float);
};
static const std::unordered_map<char, op> operation = {
{ '+', { op_precedence::LOW, [](float a, float b){return a+b;}} },
{ '-', { op_precedence::LOW, [](float a, float b){return a-b;}} },
{ '*', { op_precedence::MED, [](float a, float b){return a*b;}} },
{ '/', { op_precedence::MED, [](float a, float b){return a/b;}} },
};
static op_precedence precedence(char ch) {
auto myop{operation.find(ch)};
return myop == operation.end() ? op_precedence::NON_OP : myop->second.precedence;
}
Now the rest of the program gets shorter and more concise:
float Expression::eval()
{
auto myop{operation.find(symbol[0])};
if (myop == operation.end()) {
return std::stoi(symbol);
}
return myop->second.operation(left->eval(), right->eval());
}
Expression Expression::parse(std::string s)
{
s.erase(std::remove(s.begin(), s.end(), ' '), s.end());
return parseRec(s);
}
Expression Expression::parseRec(std::string s)
{
std::string::iterator lowest{std::min_element(s.begin(), s.end(), [](const char &a, const char &b){
return precedence(a) < precedence(b); })};
if (precedence(*lowest) == op_precedence::NON_OP)
{
return Expression(s, nullptr, nullptr);
}
else
{
return Expression(std::string(1, *lowest),
new Expression(parseRec(s.substr(0, lowest-s.begin()))),
new Expression(parseRec(s.substr(lowest-s.begin() + 1)))
);
}
}
Don't use std::endl if you don't really need it
The difference betweeen std::endl and '\n' is that '\n' just emits a newline character, while std::endl actually flushes the stream. This can be time-consuming in a program with a lot of I/O and is rarely actually needed. It's best to only use std::endl when you have some good reason to flush the stream and it's not very often needed for simple programs such as this one. Avoiding the habit of using std::endl when '\n' will do will pay dividends in the future as you write more complex programs with more I/O and where performance needs to be maximized.
About return 0;
All compliant compilers will generate the equivalent of return 0 at the end of main, so I generally omit that. Others prefer to write it explicitly for stylistic reasons. Whichever you choose, it's useful to know.
Use the appropriate smart pointer type
The code, as written, abuses std::shared_ptr because no two objects should ever actually share the same Expression. For that reason, what you really should use instead is std::unique_ptr, but that makes things a bit more complex. In particular, we can no longer freely make copies (or they wouldn't be unique!) so we must use C++ move semantics. It's all perhaps a bit much for a beginner, but here's how one would rewrite the relevant portions of the class declaration.
std::unique_ptr<Expression> left;
std::unique_ptr<Expression> right;
Expression(std::string symbol, std::unique_ptr<Expression> &&left, std::unique_ptr<Expression> &&right);
Now the constructor looks like this:
Expression::Expression(std::string symbol, std::unique_ptr<Expression> &&left, std::unique_ptr<Expression> &&right)
: symbol(symbol), left(std::move(left)), right(std::move(right)) {}
And instead of using new (which is rather rare in modern C++), we use std::make_unique instead:
return Expression(std::string(1, *lowest),
std::make_unique<Expression>(parseRec(s.substr(0, lowest-s.begin()))),
std::make_unique<Expression>(parseRec(s.substr(lowest-s.begin() + 1)))
); | {
"domain": "codereview.stackexchange",
"id": 40510,
"tags": "c++"
} |
What is the connection between exercise and muscle growth | Question: From a physiological perspective, all that is done during exercise is the expending of energy in the form of ATP to fuel muscle contraction and extension.
When I looked up why muscle grow due to exercise, there is the claim that recovery of micro lesions formed during exercise enables slight growth of muscle. I am not sure how micro lesions are formed and intuitively, these micro lesions should cause atrophy of the muscle rather than strengthening of the muscle fibers.
Further there is no logical connection as to why muscle fibers would be strengthened rather than just restored to its previous form during rest.
Can someone help me make the mental leap from the consumption of ATP through exercise to muscle growth?
Answer: Muscle growth and development is mediated by "trophic factors" not biochemical reactions (ATP) or muscle contraction directly. The best method of releasing tropic factors is exercise and not electrical stimulation or biochemical reactions (ATP) or biomechanical contraction.
Nerve ending release growth or tropic factors which stimulate muscle growth and for example if the nerves die off either due to injury or disease (ALS) the muscles will atrophy.
Not all of the tropic factors are as of yet known, but it is these tropic factors that stimulate muscle growth, not the biochemical or biomechanical reaction of the muscle (contraction). One can not as of yet stimulate muscle growth by delivering electrical energy to the muscle, for example (it has been tried to in an effort to maintain muscle mass in critically ill patients).
There is a review of the trophic factors here - http://neuromuscular.wustl.edu/lab/trophic.htm
And a review of electrical stimulation - http://www.medword.com/MedwordStore/PCP/EMS_truth.html
There are many claims of benefit for electrical stimulation, such as regeneration of the facial nerve in facial paralysis, such as Bell's palsy.
The "problem" is that the prognosis in Bell's palsy is very good, even without electrical stimulation, and so it is difficult to "prove" benefit with electrical stimulation:
With or without treatment, most individuals begin to get better within 2 weeks after the initial onset of symptoms and most recover completely, returning to normal function within 3 to 6 months.
http://www.ninds.nih.gov/disorders/bells/detail_bells.htm
Statistically, the natural history without treatment was described in a study of 1011 patients in 1982 [8]. One-third had an incomplete paralysis, and two-thirds had complete paralysis. Overall, 85 percent showed signs of recovery within three weeks, 71 percent had complete recovery, 13 percent had slight sequelae, and 16 percent had residual weakness, synkinesis and/or contracture. Patients with incomplete lesions had a 94 percent rate of return to normal function, while only 60 percent of those with clinically complete lesions returned to normal function. | {
"domain": "biology.stackexchange",
"id": 3839,
"tags": "physiology, muscles, growth"
} |
JavaScript Dropdown Selector | Question: I am just getting into front-end web development and to practice, I made a dropdown selector that allows for more styling than the regular <select> element.
Please let me know what I can improve on (examples are appreciated, but not strictly necessary). I am mainly looking at the JavaScript, but if I am structuring things poorly in the HTML or there are problems in the CSS, please let me know.
I also have the project on Codepen if it is easier to see it in action.
HTML:
<!DOCTYPE html>
<html>
<head>
<title>Selector Styling</title>
<link rel="stylesheet" href="css/hex-selector-0.9.9.css">
<style>
.container {
width: 800px;
margin: 0 auto;
}
</style>
</head>
<body>
<div class="container">
<div class="hex-selector">
<div class="hex-selection" tabindex="1">
<span value="null">Select a Place</span>
</div>
<div class="hex-content">
<ul class="hex-list">
<li value="#">Afghanistan</li>
<li value="#">Albania</li>
<li value="#">Algeria</li>
...
<li value="#">Zimbabwe</li>
</ul>
</div>
</div>
<div class="hex-selector">
<div class="hex-selection" tabindex="1">
<span value="null">Select a Color</span>
</div>
<div class="hex-content">
<ul class="hex-list">
<li value="#">Red</li>
<li value="#">Orange</li>
<li value="#">Yellow</li>
<li value="#">Green</li>
<li value="#">Blue</li>
<li value="#">Violet</li>
</ul>
</div>
</div>
</div>
<script src="scripts/jquery-1.11.2.js"></script>
<script src="scripts/hex-selector-0.9.9.js"></script>
</body>
</html>
CSS:
*,
*:after,
*:before {
box-sizing: border-box;
margin: 0;
padding: 0;
}
.hex-selector {
position: relative;
display: inline-block;
margin: 10px;
width: 350px;
color: #fff;
font-weight: bold;
font-family: arial, sans-serif;
background: #0099FF;
border: 1px solid #3079ed;
border-radius: 7px;
box-shadow: 0 1px 1px rgba(50,50,50,0.1);
cursor: pointer;
outline: none;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.hex-selection:after {
position: absolute;
right: 15px;
top: 50%;
margin-top: -3px;
width: 0;
height: 0;
border-width: 6px 6px 0 6px;
border-style: solid;
border-color: #fff transparent;
content: "";
}
.hex-selector:hover,
.hex-selector.active { background: #005CE6; }
.hex-selector.active > .hex-content {
opacity: 1;
pointer-events: auto;
}
.hex-selection { padding: 10px; }
.hex-selection:focus { box-shadow: 0 0 10px rgba(50,50,250,0.5); }
.hex-content {
position: absolute;
top: 110%;
left: 0;
right: 0;
opacity: 0;
pointer-events: none;
}
.hex-list {
max-height: 250px;
font-weight: normal;
list-style: none;
background: white;
border: 1px solid rgba(0,0,0,0.17);
border-radius: inherit;
box-shadow: 0 0 5px rgba(0,0,0,0.1);
overflow-y: scroll;
}
.hex-list > li {
display: block;
padding: 10px;
color: #8aa8bd;
border-bottom: 1px solid #e6e8ea;
box-shadow: inset 0 1px 0 rgba(255,255,255,1);
}
.hex-list > li.current { background: #f3f8f8; }
JavaScript:
/*
* Hexlan Selector Replacer v0.9.0
*
* Author: Reyer Swengel
* Date: 2015-4-16
*/
(function () {
var CURRENT = "current",
SELECTED = "selected",
ACTIVE = "active",
HEX_SELECTOR = ".hex-selector",
HEX_SELECTION = ".hex-selection",
HEX_LIST = ".hex-list";
var clickTarget = false,
mouseIgnore = false,
typingReset = 0,
searchString = "";
/**
* Used as a callback for mouse events, allows mouse selection to be overridden by key navigation
* @params {jQuery} target - element to highlight in dropdown
*/
var mouseSelection = function(target) {
if(!mouseIgnore) {
$(HEX_LIST+" > li.current").removeClass(CURRENT);
target.addClass(CURRENT);
}
}
/**
* Positions the item either at the top or bottom of the dropdown through scrolling
* @params {jQuery} target - item to position in the dropdown
* @params {jQuery} position - either 'top' or 'bot' referencing where it should be positioned in the dropdown
* @params {jQuery} container - The .hex-selector containing the target
*/
var scrollTo = function(target, position, container) {
var temp_hex_list = $(HEX_LIST, container);
if (position === "top") { temp_hex_list.scrollTop(temp_hex_list.scrollTop() + target.position().top); }
else if(position === "bot") { temp_hex_list.scrollTop(temp_hex_list.scrollTop() - temp_hex_list.height() + (target.height() + 1)*2 + target.position().top); }
}
/**
* Removes .selected from previous element and adds it to target
* If the dropdown is open, it also updates .current
* @params {jQuery} target - Item to change to .selected
* @params {jQuery} container - The .hex-selector containing the target
*/
var selectItem = function(target, container) {
if($("."+SELECTED, container).length) { $("."+SELECTED, container).removeClass(SELECTED); }
target.addClass(SELECTED);
$("span", container).text(target.text());
$("span", container).attr("value", target.attr("value"));
if(container.hasClass(ACTIVE)) { updateCurrent(target, container); }
}
/**
* Toggles the dropdown for the target. Jumps to the selected element if exists
* @params {jQuery} target - .hex-selector to toggle
*/
var toggleSelector = function(target) {
if(target.hasClass(ACTIVE)) {
target.removeClass(ACTIVE);
$("."+CURRENT, target).removeClass(CURRENT);
} else {
target.addClass(ACTIVE);
if($("."+SELECTED, target).length) { $("."+SELECTED, target).addClass(CURRENT); }
else { $(HEX_LIST+" li:first-child", target).addClass(CURRENT); }
scrollTo($("."+CURRENT, target), "top", target);
}
};
/**
* Removes .current from previous element and adds it to target
* @params {jQuery} target - Item to change to .current
* @params {jQuery} container - The .hex-selector containing the target
*/
var updateCurrent = function(target, container) {
$("."+CURRENT, container).removeClass(CURRENT);
target.addClass(CURRENT);
}
/**
* Jumps up the length of the dropdown to the next item
* @params {jQuery} currentElement - Item to jump up from
* @params {jQuery} container - The .hex-selector containing this dropdown
* @returns {jQuery}: The element jumped to
*/
var pageUp = function(currentElement, container) {
var nextPos = currentElement.position().top + currentElement.height() - container.height();
while(currentElement.position().top > nextPos) {
if(currentElement.prev().length) { currentElement = currentElement.prev() }
else { break; }
}
return currentElement;
}
/**
* Jumps down the length of the dropdown to the next item
* @params {jQuery} currentElement - Item to jump down from
* @params {jQuery} container - The .hex-selector containing this dropdown
* @returns {jQuery}: The element jumped to
*/
var pageDown = function(currentElement, container) {
var nextPos = currentElement.position().top - currentElement.height() + container.height();
while(currentElement.position().top < nextPos) {
if(currentElement.next().length) { currentElement = currentElement.next() }
else { break; }
}
return currentElement;
}
$(document).on('mousedown', function(e){ click = $(e.target).closest(HEX_SELECTOR); });
$(document).on('mousemove', function(){ mouseIgnore = false; });
$(document).on('keydown', function(event) {
var TAB_KEY = 9,
ENTER_KEY = 13,
ESC_KEY = 27,
SPACE_KEY = 32,
PGUP_KEY = 33,
PGDOWN_KEY = 34,
END_KEY = 35,
HOME_KEY = 36,
UP_KEY = 38,
DOWN_KEY = 40;
// When in a dropdown, selects current item and closes dropdown
if(event.which == TAB_KEY) {
var temp_hex_selector = $(HEX_SELECTOR);
for(var i = 0; i < temp_hex_selector.length; i++){
var selector = $(temp_hex_selector[i]);
if(selector.hasClass(ACTIVE)) {
event.preventDefault();
selectItem(selector.find("."+CURRENT), selector);
toggleSelector(selector);
}
}
}
// When in a dropdown, selects current item and closes dropdown
// Otherwise, opens the dropdown
if(event.which == ENTER_KEY) {
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)){ selectItem(selector.find("."+CURRENT), selector); }
toggleSelector(selector);
}
}
}
// Closes an open dropdown without selecting an item
if(event.which == ESC_KEY) {
var temp_hex_selector = $(HEX_SELECTOR);
for(var i = 0; i < temp_hex_selector.length; i++){
var selector = $(temp_hex_selector[i]);
if(selector.hasClass(ACTIVE)) { toggleSelector(selector); }
}
}
// Selects current item without closing the dropdown
if(event.which == SPACE_KEY) {
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)){ selectItem(selector.find("."+CURRENT), selector); }
}
}
}
// Jumps the length of dropdown to next selection
if(event.which == PGUP_KEY || event.which == PGDOWN_KEY) {
mouseIgnore = true;
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
event.preventDefault();
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)){
var currentElement = selector.find("."+CURRENT);
if(event.which == PGUP_KEY) {
currentElement = pageUp(currentElement, currentElement.closest(HEX_LIST));
scrollTo($(currentElement, selector), "top", selector);
} else {
currentElement = pageDown(currentElement, currentElement.closest(HEX_LIST));
scrollTo($(currentElement, selector), "bot", selector);
}
} else {
var currentElement = $(HEX_LIST+" li:first-child", selector);
if(selector.find("."+SELECTED).length)
{
currentElement = selector.find("."+SELECTED);
if(event.which == PGUP_KEY) { currentElement = pageUp(currentElement, currentElement.closest(HEX_LIST)); }
else { currentElement = pageDown(currentElement, currentElement.closest(HEX_LIST)); }
}
}
selectItem(currentElement, selector);
}
}
}
// Jumps to first/last element of dropdown
if(event.which == END_KEY || event.which == HOME_KEY) {
mouseIgnore = true;
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
event.preventDefault();
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
var currentElement = $(HEX_LIST+" li:last-child", selector);
if(event.which == HOME_KEY) { currentElement = $(HEX_LIST+" li:first-child", selector); }
selectItem(currentElement, selector);
if(selector.hasClass(ACTIVE))
{
if(event.which == END_KEY) { scrollTo($(currentElement, selector), "bot", selector); }
else { scrollTo($(currentElement, selector), "top", selector); }
}
}
}
}
// Moves to next/previous item in dropdown
if(event.which == UP_KEY || event.which == DOWN_KEY) {
mouseIgnore = true;
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
event.preventDefault();
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)){
var currentElement = selector.find("."+CURRENT);
if(event.which == UP_KEY) {
if(currentElement.prev().length) { currentElement = currentElement.prev(); }
if(currentElement.position().top < 0) { scrollTo($(currentElement, selector), "top", selector); }
} else {
if(currentElement.next().length) { currentElement = currentElement.next(); }
if(currentElement.position().top > 211) { scrollTo($(currentElement, selector), "bot", selector); }
}
} else {
var currentElement = $(HEX_LIST+" li:first-child", selector);
if(selector.find("."+SELECTED).length) { currentElement = selector.find("."+SELECTED); }
if(event.which == UP_KEY) {
if(currentElement.prev().length) { currentElement = currentElement.prev(); }
} else {
if(currentElement.next().length) { currentElement = currentElement.next(); }
}
}
selectItem(currentElement, selector);
}
}
}
// Tracks users input and attempts to jump to matching entry in dropdown
// user input resets after 1s of no input
if(event.which > 59 && event.which < 91) {
mouseIgnore = true;
var currentTime = new Date().getTime();
if(currentTime - typingReset < 1000) { searchString = searchString + String.fromCharCode(event.keyCode); }
else { searchString = String.fromCharCode(event.keyCode); }
var temp_hex_selection = $(HEX_SELECTION);
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
event.preventDefault();
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
var currentElement = $(HEX_LIST+" li:first-child", selector);
while(currentElement.length){
if(currentElement.text().toLowerCase().match("^" + searchString.toLowerCase())) {
selectItem(currentElement, selector);
if($(HEX_LIST, selector).hasClass(ACTIVE)) {
if(currentElement.position().top < 0) { scrollTo($(currentElement, selector), "top", selector); }
else if(currentElement.position().top > 211) { scrollTo($(currentElement, selector), "bot", selector); }
}
break;
}
currentElement = currentElement.next();
}
}
}
typingReset = new Date().getTime();
}
});
$(HEX_LIST+" > li").on("click", function(event) {
var container = $(this).closest(HEX_SELECTOR)
selectItem($(this), container);
container.find(HEX_SELECTION).focus()
});
$(HEX_LIST+" > li").on("mouseenter", function() { mouseSelection($(this)); });
$(HEX_LIST+" > li").on("mousemove", function() { mouseSelection($(this)); });
$(HEX_SELECTION).on("blur", function() {
var container = $(this).closest(HEX_SELECTOR);
if(container.hasClass(ACTIVE) && container[0] !== click[0]) {
container.removeClass(ACTIVE);
}
});
$(HEX_SELECTOR).on("click", function(){ toggleSelector($(this)); });
}());
Answer: To help ensure your site displays well across browsers, you should validate your HTML at the W3C validator.
It is actually very good, but there are two problems: <span value="null"> and <li value="#"> are invalid - you should remove the value attributes.
You can validate your CSS at a W3C validator too. Other than the unknown vendor extensions, it appears to be valid.
Your JavaScript looks neat and clean for the most part, but you have irregular indentation in a few places:
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)){ selectItem(selector.find("."+CURRENT), selector); }
}
}
That should be indented to easily see what is part of what, like this:
for(var i = 0; i < temp_hex_selection.length; i++) {
if($(temp_hex_selection[i]).is(":focus")) {
var selector = $(temp_hex_selection[i]).closest(HEX_SELECTOR);
if(selector.hasClass(ACTIVE)) {
selectItem(selector.find("."+CURRENT), selector);
}
}
}
I would also put a space around my operators to make it easy to see the separate parts of the equations:
scrollTo($("."+CURRENT, target), "top", target);
Becomes:
scrollTo($("." + CURRENT, target), "top", target);
Right here:
if(event.which == UP_KEY) {
if(currentElement.prev().length) { currentElement = currentElement.prev(); }
if(currentElement.position().top < 0) { scrollTo($(currentElement, selector), "top", selector); }
}
I would not indent my if blocks like that. I would write them like this:
if(event.which == UP_KEY) {
if(currentElement.prev().length) {
currentElement = currentElement.prev();
}
if(currentElement.position().top < 0) {
scrollTo($(currentElement, selector), "top", selector);
}
} | {
"domain": "codereview.stackexchange",
"id": 13141,
"tags": "javascript, jquery, html, css"
} |
Question about the place of definition of the metric | Question: I was reading the book "Dynamical Systems in Cosmology" of the author J. Wainwright. He says that, in order to specify the space-time geometry you need a Lorentzian metric $g$ defined on a manifold $M$.
I agree with that, but in this paper (to be specific, above the equation number (5)) they define the metric over a domain $\Omega \subset \mathbb{R}^{d}$, is this right?
I was thinking that this could only work if you have a flat space-time.
Answer:
Locally, there is no problem. The chart neighborhood $\Omega\subseteq\mathbb{R}^d$ is merely isomorphic to (a subset of) the manifold, and $\Omega$ imports the metric (and curvature) from the manifold via that isomorphism. The standard flat metric on $\mathbb{R}^d$ plays no role in the construction.
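A standard concrete example (mine, not from the paper): for the unit two-sphere, take the chart $\Omega = (0,\pi)\times(0,2\pi) \subset \mathbb{R}^2$ with
$$x(\theta,\phi) = (\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta).$$
The metric that $\Omega$ imports through this map is
$$g = d\theta^{2} + \sin^{2}\theta\, d\phi^{2},$$
which has constant curvature $+1$ even though $\Omega$ is a subset of $\mathbb{R}^2$; the flat metric of $\mathbb{R}^2$ never enters.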
Globally, there is a problem. The action principle in the paper describes only spacetimes that are homeomorphic to $\Omega$. One solution is to give up globally defined actions.
"domain": "physics.stackexchange",
"id": 64256,
"tags": "general-relativity, differential-geometry, metric-tensor, coordinate-systems, curvature"
} |
Knock down tall buildings | Question: I'm trying to solve the following problem:
As you know Bob likes to destroy the buildings.
There are N buildings in a city, ith building has ai floors.
Bob can do the following operation.
Choose a building, and destroy its uppermost floor with cost h. Where h is building's height before removing the floor.
You can do this operation any number of times but total cost should be less than or equal to K.
Since you don't like tall buildings you want to decrease their heights.
You have to minimise the maximum height of buildings and report this height (height of tallest among them after operations).
(Use long long data type instead of int to avoid overflow errors)
Input Format
T
N1 K1
a1 a2 a3 ..... aN1
N2 K2
a1 a2 a3 ..... aN2 .
.
.
NT KT
a1 a2 a3 ..... aNT
Constraints
1<=T<=1000 Number of test cases
1<=N<=100000 Number of buildings
1<=K<=10^15
0<=ai<=1000000
Sum of N over all test cases is at most 200000.
Output Format
ans1
ans2
ans3
.
.
.
ansT
Sample Input 0
3
5 23
1 3 2 4 5
5 3000000
0 1000000 2 5 99999
3 9
3 3 3
Sample Output 0
2
999997
2
Here we can only use the iostream library, so all the other functions need to be written by me. For each of the T iterations in a while loop, we first read in an array and merge sort it before proceeding to the helper function.
In the helper function we introduce recursion: as long as K is at least the height of the tallest building, we reduce K by that height and remove the building's top floor.
Whenever K is greater than or equal to the height of the tallest building, the function reduces that height by 1, sorts the array of building heights, and updates K accordingly. Finally it returns the height of the tallest building once K falls below it.
My code is:
#include <iostream>
using namespace std;
void merge(long long array[], int const left, int const mid,
int const right)
{
int const subArrayOne = mid - left + 1;
int const subArrayTwo = right - mid;
auto *leftArray = new long long[subArrayOne],
*rightArray = new long long[subArrayTwo];
for (auto i = 0; i < subArrayOne; i++)
leftArray[i] = array[left + i];
for (auto j = 0; j < subArrayTwo; j++)
rightArray[j] = array[mid + 1 + j];
auto indexOfSubArrayOne = 0, indexOfSubArrayTwo = 0;
int indexOfMergedArray = left;
while (indexOfSubArrayOne < subArrayOne
&& indexOfSubArrayTwo < subArrayTwo) {
if (leftArray[indexOfSubArrayOne]
<= rightArray[indexOfSubArrayTwo]) {
array[indexOfMergedArray]
= leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
}
else {
array[indexOfMergedArray]
= rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
}
indexOfMergedArray++;
}
while (indexOfSubArrayOne < subArrayOne) {
array[indexOfMergedArray]
= leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
indexOfMergedArray++;
}
while (indexOfSubArrayTwo < subArrayTwo) {
array[indexOfMergedArray]
= rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
indexOfMergedArray++;
}
delete[] leftArray;
delete[] rightArray;
}
void mergeSort(long long array[], int const begin, int const end)
{
if (begin >= end)
return;
int mid = begin + (end - begin) / 2;
mergeSort(array, begin, mid);
mergeSort(array, mid + 1, end);
merge(array, begin, mid, end);
}
long long minimizeMaxHeight(long long heights[], int N, int K){
if(K<0) return heights[N-1]+1;
if(K==0) return heights[N-1];
if(K>0){
if(heights[N-1]>=heights[N-2]){
heights[N-1] = heights[N-1]-1;
return minimizeMaxHeight(heights, N, K-heights[N-1]);
}
else{
mergeSort(heights, 0, N - 1);
return minimizeMaxHeight(heights, N, K);
}
}
return 0;
}
long long solve(long long heights[],int n,int k){
//Recursive solution
if(heights[n-1]<=k){
heights[n-1]-=1;
mergeSort(heights,0,n-1);
return solve(heights,n,k-heights[n-1]);
}
return heights[n-1];
}
int main() {
int T;
std::cin >> T;
while (T--) {
int N;
long long K;
std::cin >> N >> K;
long long heights[N];
for (int i = 0; i < N; ++i) {
std::cin >> heights[i];
}
mergeSort(heights, 0, N - 1);
long long ans = solve(heights, N, K);
std::cout << ans << std::endl;
}
return 0;
}
The above is the recursive approach and there's another iterative approach which I'm trying:
#include <iostream>
using namespace std;
int main() {
int T;
std::cin >> T;
while (T--) {
int N;
long long K;
std::cin >> N >> K;
long long total[1000001] = {0};
long long heights[N];
for (int i = 0; i < N; ++i) {
std::cin >> heights[i];
total[heights[i]]++;
}
long long ans = 0;
for(int i = 1000000; i>0;){
if(total[i]==0) {
i--;
continue;
}
else if(total[i]>0&&K>=i){
total[i]--;
total[i-1]++;
K=K-i;
}
else if(total[i]>0&&K<i) {
ans = i;
break;
}
}
std::cout << ans << std::endl;
}
return 0;
}
For the recursive approach the code passes only one test case, and the iterative approach fails two test cases.
For all the failed test cases in both the recursive and the iterative approach the issue is: Time Limit Exceeded
Answer: Algorithm
You remove one floor from one building at a time. The complexity of the (iterative) code is therefore proportional to the total amount of floors, which is prohibitive. Better think this way:
Let's say the tallest building has a height \$H_0\$, and there are \$N_0\$ of them. The next tallest building has a height \$H_1\$, and there are \$N_1\$ of such buildings. To equalize heights, you have to remove \$H_0 - H_1\$ floors by the cost of
\$C = (H_1 + 1) + (H_1 + 2) + ... + H_0 = (H_0 - H_1)*(H_0 + H_1 + 1)/2\$
(do you see an arithmetic progression?) from \$N_0\$ buildings, giving the total cost of \$T = C * N_0\$. If you cannot afford this, you are done. If you can, you end up with \$N_0 + N_1\$ buildings of height \$H_1\$. Adjust the balance: \$K = K - T\$, and keep going.
To be really effective, you need a slightly more elaborate data structure: a (sorted) list of (height, count) pairs.
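A minimal sketch of this level-by-level scheme (my own illustration; names are hypothetical), which also strips residual whole floors in the gap where the budget runs out:

```python
from collections import Counter

def min_max_height(heights, k):
    """Greedy: equalize the tallest group down to the next height using the
    arithmetic-series cost, then spend any leftover budget a level at a time."""
    levels = sorted(Counter(heights).items(), reverse=True)  # tallest first
    levels.append((0, 0))                                    # sentinel floor
    h0, n0 = levels[0]
    for h1, n1 in levels[1:]:
        # Cost to bring all n0 tallest buildings from h0 down to h1:
        # each sheds floors h1+1, h1+2, ..., h0 (an arithmetic series).
        cost = n0 * (h0 - h1) * (h0 + h1 + 1) // 2
        if cost > k:
            # Can't reach h1; strip whole levels while the budget allows.
            # (This residual loop could also be done in closed form.)
            while h0 > h1 and n0 * h0 <= k:
                k -= n0 * h0
                h0 -= 1
            return h0
        k -= cost
        h0, n0 = h1, n0 + n1
    return 0

print(min_max_height([1, 3, 2, 4, 5], 23))  # 2, matching the sample output
```

On the question's three samples this returns 2, 999997 and 2; each gap between distinct heights is handled with O(1) arithmetic.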
Review
using namespace std is a very poor practice. Besides, you use std:: prefix anyway.
There is no need for heights array. You never use it. One long long variable is just enough.
More whitespace, please. total[i]>0&&K>=i is barely readable. Consider (total[i] > 0) && (K >= i)
There is no need to test for total[i] > 0 in the else/else if branches. | {
"domain": "codereview.stackexchange",
"id": 45002,
"tags": "c++, recursion, time-limit-exceeded, iteration, knapsack-problem"
} |
Is it possible to avoid encoding the state in streaming ANS? | Question: Regular ANS collects all the data into one big integer and provides optimal compression. In cases where all the symbol probabilities are powers of two, it is just as good as Huffman coding (because Huffman coding is also optimal in this case).
Streaming ANS lets us avoid the need for a BigInteger or other expensive operations, at the price of a small compression loss. Trying to understand it, I wrote a simple encoder:
const FREQ: &[u64] = &[1, 1, 2]; //frequency of each symbol
const T: u64 = 1; //encoding precision, increasing increases compression but decreases performance
pub const CUMUL: [u64; FREQ.len() + 1] = { //cumulative sums of previous frequencies
let mut r = [0; FREQ.len() + 1];
let mut i = 1;
while i < r.len() {
r[i] = r[i-1] + FREQ[i-1];
i += 1;
}
r
};
pub const M: u64 = CUMUL[CUMUL.len() - 1];
#[derive(Debug)]
pub struct RansState {
value: u64,
bitstream: Vec<bool>,
}
impl RansState {
pub fn new() -> Self {
RansState {
value: M * T,
bitstream: Vec::new(),
}
}
pub fn encode(&mut self, symbol: usize) {
self.stream_out(symbol);
let block_id = self.value / FREQ[symbol];
let slot = self.value % FREQ[symbol] + CUMUL[symbol];
self.value = block_id * M + slot;
}
pub fn decode(&mut self) -> usize {
let block_id = self.value / M;
let slot = self.value % M;
let symbol = CUMUL.iter().position(|x| slot < *x).unwrap() - 1;
self.value = block_id * FREQ[symbol] + slot - CUMUL[symbol];
self.stream_in();
symbol
}
pub fn finish(mut self) -> Vec<bool> { //write the current state into the bitstream and output it
assert!(self.value >= M * T); //this assertion holding means it's possible to elide one bit, but how could you elide all of them?
while self.value > 0 {
self.bitstream.push((self.value & 1) != 0);
self.value >>= 1;
}
self.bitstream
}
pub fn start(bitstream: Vec<bool>) -> Self { //extract the initial state from the bitstream
let mut r = Self { value: 0, bitstream, };
while r.value < M * T {
r.value <<= 1;
r.value += r.bitstream.pop().unwrap_or(false) as u64;
}
r
}
fn stream_out(&mut self, symbol: usize) {
while self.value >= FREQ[symbol] * 2 * T {
self.bitstream.push((self.value & 1) != 0);
self.value >>= 1;
}
}
fn stream_in(&mut self) {
while self.value < M * T {
self.value <<= 1;
self.value += self.bitstream.pop().unwrap_or(false) as u64;
}
}
}
It seems to work, and it approaches optimal compression for large streams (provided T is also sufficiently large). But I couldn't figure out how to get around the need to encode the final state of the encoder (which is the initial state of the decoder).
I noticed it's possible to elide one bit from the encoding because we know that M * T <= value < M * T * 2, but I can't see how all bits could be elided.
I looked around at some other implementations and they all seem to do a similar thing. But this seems to contradict ANS having strictly better compression than Huffman: if we have symbol frequencies that are powers of two, the ANS encoding is always a few bits larger than the Huffman encoding.
Answer: That extra word is the equivalent of having an additional symbol that represents the end-of-file marker, or storing the size of the decoded block in a header. The "overhead" of $O(1)$ bits can also be understood as $o(1)$ bits per symbol, and is therefore negligible for the purpose of understanding the compression rate.
This is the sense in which ANS is always better than or equal to Huffman coding.
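To see the $O(1)$ overhead concretely, here is a toy big-integer ANS in Python (my own sketch, mirroring the question's table FREQ = [1, 1, 2], so M = 4). With dyadic probabilities the block [0, 1, 2, 2] carries exactly 6 bits of entropy, and the final state's bit length stays a constant one bit above $nH$ no matter how many blocks are encoded:

```python
def encode(symbols, freq=(1, 1, 2)):
    # Cumulative frequency table and total M, as in the question.
    cumul = [0]
    for f in freq:
        cumul.append(cumul[-1] + f)
    m = cumul[-1]
    x = 1  # initial state: this single word is the whole O(1) overhead
    for s in symbols:
        x = (x // freq[s]) * m + cumul[s] + (x % freq[s])
    return x

def decode(x, n, freq=(1, 1, 2)):
    cumul = [0]
    for f in freq:
        cumul.append(cumul[-1] + f)
    m = cumul[-1]
    out = []
    for _ in range(n):  # n plays the role of the end-of-stream marker
        slot = x % m
        s = max(i for i in range(len(freq)) if cumul[i] <= slot)
        out.append(s)
        x = (x // m) * freq[s] + (slot - cumul[s])
    out.reverse()
    return out

seq = [0, 1, 2, 2] * 50          # 200 symbols, exactly 300 bits of entropy
state = encode(seq)
assert decode(state, len(seq)) == seq
print(state.bit_length())        # 301: n*H plus one bit of constant overhead
```

The decoder needs the symbol count n (or, equivalently, an end marker), which is exactly the extra word the answer refers to.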
It doesn't necessarily mean that ANS is more appropriate than Huffman coding for your specific application; sometimes, for extremely low-bandwidth applications (e.g. submarine communication), the precise number of bits in each message matters a great deal. | {
"domain": "cs.stackexchange",
"id": 21372,
"tags": "information-theory, data-compression, entropy, huffman-coding"
} |
Geodesic deviation equation - why does the ordinary second derivative give the correct answer? | Question: I've calculated the correct answer to my problem, but don't understand one of the assumptions I made when doing so.
I used the geodesic deviation equation $$\frac{D^{2}\xi^{\mu}}{D\lambda^{2}}+R_{\phantom{\mu}\beta\alpha\gamma}^{\mu}\xi^{\alpha}\frac{dx^{\beta}}{d\lambda}\frac{dx^{\gamma}}{d\lambda}=0$$
to show that on the surface of a unit sphere two particles separated by initial distance $d$, starting from the equator and travelling north (ie on lines of constant $\phi$) will have a separation $s$ after time $t$ equal to $$s=\xi^{\phi}=d\sin\theta=d\cos\left(vt\right).$$
This is similar to Geodesic devation on a two sphere except that question was solved using simple spherical geometry.
The assumption I made was that the second absolute derivative wrt $t$ equals the second ordinary derivative, ie
$$\frac{D^{2}\xi^{\mu}}{dt{}^{2}}=\frac{d^{2}\xi^{\mu}}{dt{}^{2}}.$$
My question is, why am I allowed to make this assumption?
I've been told on another physics forum that the answer is because the problem is framed in terms of Riemann normal coordinate (because the distance the cars travel along their separate geodesics is a linear function of time $t$). I can only assume that in some way this makes the connection coefficients disappear in the absolute derivative equation$$\frac{DV^{\alpha}}{d\lambda}=\frac{dV^{\alpha}}{d\lambda}+V^{\gamma}\Gamma_{\gamma\beta}^{\alpha}\frac{dx^{\beta}}{d\lambda},$$
but I can't see why this is. As I noted in a comment below, I understand it is possible to choose coordinates at a point where the connection coefficients vanish, but I used the ordinary polar coordinates $\phi$ and $\theta$ to calculate the correct answer. To use two different sets of coordinates like this seems like a case of "having your cake and eating it".
The calculation, by the way, is here (my answer to my question): Geodesic deviation on a unit sphere
Answer: The first reason is that your "distance" between geodesics is measured by a parallelly propagated direction $\partial/\partial \phi$. If you take a look at the sphere, the difference $\Delta \phi$ does not correspond to the distance between the points on the geodesics. The distance between them would be measured by arc-lengths of great circles. But you are using $\theta=const$ circles, which are not great circles unless $\theta=\pi/2$.
The second reason is you are working on a space of constant curvature. See below.
Say you take a vector $\zeta^\mu$ and propagate it parallelly along your geodesic to get $\zeta^\mu(\lambda)$, i.e. you solve $$\frac{D \zeta^\mu}{d \lambda} = 0$$ Thanks to that, a straightforward application of the Leibniz rule now gives $$\frac{D^2(\xi^\mu \zeta_\mu)}{d^2 \lambda} = \frac{D^2\xi^\mu}{d^2 \lambda} \zeta_\mu$$
But $\xi^\mu \zeta_\mu$ is a scalar which is propagated trivially, so you actually also get $D/d\lambda \to d/d\lambda$. Now you can project your equation of geodesic equation into $\zeta^\mu$ to get
$$\frac{d^2(\xi^\mu \zeta_\mu)}{d^2 \lambda} + R^\mu_{\;\nu \kappa \lambda} \xi^\kappa u^\nu u^\lambda \zeta_\mu = 0 \; \; \; \;(*)$$
Now to use the fact you are on a space of constant curvature. In such a space, you can express the curvature tensor as $$R_{\mu \nu \kappa \lambda} = K(g_{\mu \kappa} g_{\nu \lambda} - g_{\mu \lambda} g_{\nu \kappa})$$
When you plug this into the geodesic deviation equation, you get $$\frac{D^2 \xi^\alpha}{d\lambda^2} + u^2 K \xi^\alpha = 0$$ where $u = dx/d\lambda$ and we choose $\xi$ orthogonal to it (as is also your case). When you make the projection onto the parallelly propagated $\zeta_\alpha$, you get
$$\frac{d^2 (\xi^\alpha \zeta_\alpha)}{d^2\lambda} + u^2 K \xi^\alpha \zeta_\alpha = 0$$
I.e., if you are investigating only $\Delta \phi = \xi^\alpha \zeta_\alpha$ where $\zeta = \partial/\partial \phi$, you can use even this strictly linear equation.
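A quick numerical check of that linear equation on the unit sphere ($K = 1$), my own sketch with arbitrary values $d = 0.01$, $v = 1$: integrating $\ddot\xi + v^2\xi = 0$ from $\xi(0) = d$, $\dot\xi(0) = 0$ reproduces the $d\cos(vt)$ of the original problem.

```python
import math

def deviation(d, v, t_end, steps=20000):
    # Velocity-Verlet integration of xi'' = -v^2 * K * xi with K = 1.
    dt = t_end / steps
    xi, xidot = d, 0.0
    for _ in range(steps):
        a = -v * v * xi
        xi += xidot * dt + 0.5 * a * dt * dt
        xidot += 0.5 * (a + (-v * v * xi)) * dt
    return xi

d, v, t = 0.01, 1.0, 1.2
print(deviation(d, v, t), d * math.cos(v * t))  # the two values agree closely
```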
Note that without using the constant curvature of the space you would end up with equation $(*)$ which doesn't give you much of a clue why you should be able to find $\xi^\alpha \zeta_\alpha$ in the sum with the curvature tensor. So your space is special and your measure of deviation is special - both are necessary ingredients for the assumption. | {
"domain": "physics.stackexchange",
"id": 17072,
"tags": "general-relativity, differential-geometry, geodesics"
} |
Angular Momentum Expectation Values in Spherical Coordinates | Question: I have a homework problem that asks:
Using the spherical harmonics calculate $\langle J_x \rangle$, $\langle J_y \rangle$, $\langle J_z \rangle$ in the state $|l,m\rangle$. Use the derivative forms of the $J_{i}$ in spherical coordinates.
I'm not looking for a direct answer, but I am trying to understand how to approach this problem. I have the representations of the $J_i$ in spherical coordinates, but how do I use these to calculate the requested expectation values?
I've asked for help for a specific part of this derivation in Mathematics stackexchange at: https://math.stackexchange.com/questions/1265825/integrating-associated-legendre-polynomials
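To make the target concrete before setting up the general case, here is a brute-force numerical sketch (my own illustration, with a hand-coded $Y_{1,1}$ and $\hbar = 1$) of the kind of integral involved; it recovers $\langle J_z\rangle = m\hbar$ for $l = m = 1$:

```python
import cmath
import math

def Y11(theta, phi):
    # Spherical harmonic Y_{1,1} in the Condon-Shortley convention.
    return -math.sqrt(3.0 / (8.0 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)

def expect_Jz(Y, n_theta=200, n_phi=200):
    """<J_z>/hbar = integral of Y* (-i d/dphi) Y over the sphere (midpoint rule)."""
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    h = 1e-6  # finite-difference step for d/dphi
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth
        for j in range(n_phi):
            ph = (j + 0.5) * dph
            dY = (Y(th, ph + h) - Y(th, ph - h)) / (2.0 * h)
            total += (Y(th, ph).conjugate() * (-1j) * dY).real * math.sin(th)
    return total * dth * dph

print(round(expect_Jz(Y11), 2))  # 1.0, i.e. <J_z> = m*hbar with m = 1
```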
Answer: You have to compute $\int d\Omega\ Y_{lm}^*(\theta,\phi)\hat{J}_iY_{lm}(\theta,\phi)$ where $d\Omega=\sin\theta d\theta d\phi$, $J_i$ are the angular momenta operators represented in position space and $Y_{lm}$ are the wavefunctions for the state $|lm\rangle$, i.e. spherical harmonics. | {
"domain": "physics.stackexchange",
"id": 21744,
"tags": "quantum-mechanics, homework-and-exercises"
} |
independent stereo cameras | Question:
Hi all,
My robot has two cameras for eyes that are in fixed locations but can rotate (pitch & yaw) independently of one another.
I've gotten the stereo camera node to work and have run my calibration successfully while the eyes are in fixed positions.
Is there a package that can handle the changes in viewing angles while the eyes are moving?
Thanks for your time.
Originally posted by DocSmiley on ROS Answers with karma: 127 on 2012-06-20
Post score: 3
Answer:
Short answer: I don't think so.
Long answer: This is a really hard problem. In a calibrated stereo system, lookup tables are generated that undistort and rectify pixels very fast. If you want to allow changing angles you would have to produce these maps for every possible angle. If you want to support a specific set of angles you could do a separate calibration for each of these and switch between them when the cameras move. In that case your system would need very precise repeatability.
You could also try to setup a self-calibrating system which needs either information about the environment or some movement of the cameras.
Anyways, I am not aware of any package in ROS that supports anything of the aforementioned.
Originally posted by Stephan with karma: 1924 on 2013-01-23
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 9871,
"tags": "stereo, camera"
} |
Ammonium Hydroxide Name | Question: Why is an aqueous solution of $\ce{NH3}$ (ammonia) often referred to as $\ce{NH4OH}$?
I know that dissolved ammonia gas reacts with water in the following equilibrium:
$$\ce{NH3 + H2O <=> NH4+ + OH-}$$
However, $K_\mathrm b = 1.8\times10^{-5}$. Since only a small amount of $\ce{NH3}$ dissociates, why is the whole solution called "ammonium hydroxide"?
Is ammonium hydroxide soluble, or is it an actual aqueous species?
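To put a number on "only a small amount dissociates": a quick equilibrium calculation (the 0.1 M concentration is my own choice for illustration; the $K_\mathrm b$ is the one given above):

```python
import math

Kb = 1.8e-5   # base dissociation constant of NH3 (from the question)
c0 = 0.1      # assumed initial NH3 concentration in mol/L

# NH3 + H2O <=> NH4+ + OH-  with  Kb = x^2 / (c0 - x)
# Solve the quadratic x^2 + Kb*x - Kb*c0 = 0 for x = [OH-]:
x = (-Kb + math.sqrt(Kb**2 + 4 * Kb * c0)) / 2

fraction = x / c0   # degree of dissociation, roughly 1.3 %
```

So around 99 % of the dissolved ammonia stays undissociated, which is what makes the name "ammonium hydroxide" misleading.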
Answer: The fictitious substance “ammonium hydroxide” $(\ce{NH4OH})$ was suggested in accordance with the Arrhenius theory. In 1882–1883, Svante Arrhenius had researched into the conductivity of electrolytes. His theory of electrolytic dissociation was discussed in his dissertation in 1884 and published in refined form in 1887. In 1903, the Nobel Prize in Chemistry was awarded to Svante Arrhenius “in recognition of the extraordinary services he has rendered to the advancement of chemistry by his electrolytic theory of dissociation”.
According to the Arrhenius definition, acids are compounds that dissociate and release hydrogen ions $(\ce{H+})$ into the solution:
$$\ce{HCl -> H+ + Cl-}$$
Bases are defined as compounds that dissociate and release hydroxide ions $(\ce{OH-})$ into the solution:
$$\ce{NaOH -> Na+ + OH-}$$
The need for a hydroxide group in bases according to the Arrhenius definition led to the proposal of $\ce{NH4OH}$ (i.e. “ammonium hydroxide”) as the formula for hydrated ammonia in water. (Note that this problem does not exist in the definition of bases according to the Brønsted–Lowry theory, which was proposed in 1923.)
However, ammonium hydroxide cannot be isolated; the fictitious solid compound does not exist. Nevertheless, the misleading traditional name “ammonium hydroxide” is still widely used for solutions of ammonia in water.
Actually, a solid compound (melting point: −79 °C) with the correct atomic composition (molecular formula) can be crystallized from aqueous solutions of ammonia. But even this compound is not ammonium hydroxide $(\ce{NH4OH})$. It actually is a real hydrate $(\ce{NH3.H2O})$ in which the molecules are linked by hydrogen bonding. | {
"domain": "chemistry.stackexchange",
"id": 5310,
"tags": "acid-base"
} |
Can the absence of information provide which-way knowledge? | Question: This seems an incredibly basic question, but one I've been unable to find an answer to on PSE; if this is a duplicate please point me in the right direction.
Concerning a simple Young's double-slit setup:
A sensor of some type is placed by one of the slits, such that if an electron were to pass through this slit, the sensor would register the passing and thus any possibility of seeing an interference pattern after many runs would be destroyed. The other slit has no such sensor.
Electrons are then fired one at a time. After each electron is detected at the downrange detection plate, a note is made whether the sensor positioned by the slit was triggered or not. In this way, two populations of detections may be built up: Marks on the downrange detection plate that were associated with the slit sensor being triggered $A$, and marks on the detection plate that had no associated triggering of the slit sensor $B$.
Now, if I observe the pattern of marks created by population $A$, I would expect to see no signs of interference as I have very clear which-way path information thanks to my sensor.
My question is this:
If I choose to observe the pattern of marks created by population $B$ only, will I observe an interference pattern or not?
It seems my expectations could go both ways:
I can argue that I should indeed observe an interference pattern since these electrons have not interacted with any other measuring device at all between the electron source and the detection plate, between which lie my double slits.
I can argue that the very fact that my sensor at the one slit did not trigger a priori gives me which-way information, in that I now infer that my electron must have gone through the other slit thanks to the absence of which-way information through my sensor-equipped slit.
Which one of these assumptions aligns with reality would seem to have huge ramifications: the first implies that measurement is truly physical interaction of any kind, whereas the second implies that knowledge is measurement, even if that knowledge is obtained without physically interacting with the system (if my detector isn't triggered I cannot see how one could argue it interacted, so perhaps a more accurate statement would be there must be a different kind of interaction that may support non-epistemic views of the wavefunction).
Put another way more succinctly: It is one thing to understand that physical interaction destroys superposition. It is another to understand that a lack of interaction with a measuring device (generally pursued to preserve superposition) may also destroy it if it yields which-way information.
Given this I'm hoping the answer to my question will be #1, but expecting it to be #2.
Answer: The OP's confusion seems to stem from the incorrect assumption that
if my detector isn't triggered I cannot see how one could argue it interacted [with the electron]
Just because the detector sometimes does not click does not mean that there is no interaction at all.
A good way to think about this is in terms of continuous measurement. This and this are good (albeit quite involved) references for further reading on this topic.
You know that, uprange of the detector, the electron probability amplitude (or if you insist, the Dirac field) is delocalised in space. In particular, there is some amplitude for the electron to be found at the position of the detector. So in fact, the detector is always interacting with the electron (continuously measuring it). However, this interaction is weak because the detector doesn't cover all of space. Therefore the electron-detector interaction is not strong enough to cause the detector to "click" (i.e. trigger it) with 100% probability on a single run of the experiment.
More precisely, at the end of the experiment the detector and the electron (or if you insist, the Dirac field) are in the entangled state (roughly speaking)
$$ \lvert \psi \rangle = \lvert A\rangle_e \lvert \mathrm{click}\rangle_d + \lvert B\rangle_e \lvert \mathrm{no~click}\rangle_d,$$
where $e,d$ label the states of the $e$lectron (or if you insist, the Dirac field) and $d$etector.
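A toy numerical illustration of why this entanglement kills the fringes (the plane-wave path amplitudes are my own simplification, not a model of the real apparatus): because $\lvert \mathrm{click}\rangle$ and $\lvert \mathrm{no~click}\rangle$ are orthogonal, the probabilities of the two paths add with no cross term, whereas the coherent sum $\lvert A\rangle_e + \lvert B\rangle_e$ produces interference.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)          # position along the detection plate
amp_A = np.exp(+20j * x) / np.sqrt(2)    # toy amplitude for path A
amp_B = np.exp(-20j * x) / np.sqrt(2)    # toy amplitude for path B

# Detector entangled with the path: orthogonal detector states remove
# the cross term, so probabilities (not amplitudes) add -> flat pattern.
P_which_way = np.abs(amp_A)**2 + np.abs(amp_B)**2

# No detector: amplitudes add coherently -> interference fringes.
P_coherent = np.abs(amp_A + amp_B)**2
```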
You can see already that there is an interaction, because the presence of the electron changes the state of the detector (which was initialised in the pure state $\lvert \mathrm{no~click}\rangle$). You run into conceptual difficulty only if you believe that the state of the detector and the electron can be described independently of each other: in QM probability amplitudes refer to the state of the system as a whole. If you do not observe the detector to click on a given run of the experiment, the state of the electron is correctly described by $\lvert B\rangle_e$. However, in order to see interference, the electron (or if you insist, the Dirac field) must instead be in the state $\lvert A\rangle_e + \lvert B\rangle_e$. Therefore there is no interference. | {
"domain": "physics.stackexchange",
"id": 28251,
"tags": "quantum-mechanics, quantum-information, double-slit-experiment"
} |
In what order does Qiskit assign parameter values? | Question: It seems that the "assign_parameters" method assigns the values of the parameters in alphabetical order of the different "ParameterVector"s' names. Here is a simple example:
import qiskit
from qiskit import QuantumCircuit
qc = QuantumCircuit(2)
b = qiskit.circuit.ParameterVector('beta', 2)
a = qiskit.circuit.ParameterVector('alfa', 2)
d = qiskit.circuit.ParameterVector('delta', 1)
qc.rx(b[0],0)
qc.rx(b[1],1)
qc.ry(a[0],0)
qc.ry(a[1],1)
qc.rz(d[0],0)
values= [1,2,3,4,5]
print(qc.parameters)
qc.assign_parameters(values, inplace=True)
qc.draw("mpl")
Is there a way to introduce them in the temporal order of the circuit diagram? Is the same order applied when introducing parameters as an argument in Estimator().run(circuit, observable, parameters) ?
Thank you in advance!
Answer: Since the "temporal order" is not unique if we compile the circuit (or e.g. if operations commute), Qiskit assigns the parameters sorted by parameter name. This remains unique under circuit transformation and therefore binding values at different compilation stages always gives the same result. See also the original PR for more info.
If you want to enforce a "temporal order", you could insert barriers after each timeslice, which will ensure the compiler doesn't move the operations around (note that this will also prevent some optimizations). Then you could name the parameters after the timeslices, something like Parameter("<slice_num>_...").
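A Qiskit-free sketch of the binding rule (the names mirror the ParameterVectors in the question's example; this mimics the behavior rather than calling Qiskit): positional values are paired with parameter names in sorted order, not circuit order.

```python
# Parameters in the order they appear in the circuit above:
circuit_order = ['beta[0]', 'beta[1]', 'alfa[0]', 'alfa[1]', 'delta[0]']
values = [1, 2, 3, 4, 5]

# Qiskit pairs positional values with the names sorted alphabetically:
binding = dict(zip(sorted(circuit_order), values))
# -> alfa[0]=1, alfa[1]=2, beta[0]=3, beta[1]=4, delta[0]=5
```

If you want full control over which value goes where, you can also pass `assign_parameters` a dictionary mapping each parameter to its value (e.g. `qc.assign_parameters({a[0]: 3, a[1]: 4, b[0]: 1, b[1]: 2, d[0]: 5})`), which is unambiguous regardless of any ordering.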
Is the same order applied when introducing parameters as an argument in Estimator().run(circuit, observable, parameters) ?
Yes, since this code internally calls circuit.assign_parameters(parameters). | {
"domain": "quantumcomputing.stackexchange",
"id": 4856,
"tags": "qiskit"
} |
Possible to solve a combinatorial game with integer programming? | Question: I recently had the idea that it would be neat if it were possible to make a SAT solver play combinatorial games. To start, I'm trying a relatively simple case of solving single-stack Misère Nim through integer programming: there's a fixed limit $L$ with a set of moves $N$, where a move is added to the state each turn. Each player is trying to avoid making the state equal the limit.
I managed to encode this with integer programming with the following system:
A move at each time $t$ is either played or not played: $0\leq m_{n,t}\leq 1$
A game is either in progress or over: $0\leq p_t\leq 1$
The state is between one and the goal: $1\leq s_t \leq L$
The game starts at state 1 and is in progress, and ends at limit and not in progress: $s_0=1, p_0=1, s_{final}=L, p_{final}=0$
Exactly one move of the possible moves can be played iff the game is in progress: $\sum_{n\in N}m_{n,t}=p_t$
The game cannot resume once stopped: $p_t\leq p_{t-1}$
The current state is the previous state plus the current move: $s_t=s_{t-1}+\sum_{n\in N}nm_{n, t}$
I believe this encoding successfully represents the set of legal games, and enables with minimal additions asking questions like "What is the shortest/longest game" and "Is there a legal game with these constraints".
Where I'm struggling is handling adversarial behavior. I'm not sure how to ask questions like "can this player force a win" via integer programming. I suspect it may not be possible since SAT is in NP but many combinatorial games are in EXP, so the language might not be able to express (cleanly) the idea of an optimal sequence of play. But it may be possible that special cases like Nim (definitely not EXP) yield clever solutions. I can't find any literature on this.
Hence the question: Is it possible to express optimal play for a combinatorial game as integer programming constraints? If so, how? A general solution for zero-sum combinatorial games is preferred if possible, but even just a solution for the special case of (single-stack) Nim would be interesting.
Answer: I believe you can't, for many/most combinatorial games. In particular, I believe optimal play for two-player combinatorial games is often a PSPACE-complete problem (sometimes a EXP-complete problem), while a SAT solver can only express NP problems (and a polynomial-time algorithm equipped with ability to call a SAT solver can only solve problems in $P^{NP}$). Since it is believed that PSPACE is much bigger than NP (or $P^{NP}$), it is unlikely that a SAT solver will be enough to solve all such game problems.
See, e.g., Playing Games with Algorithms: Algorithmic Combinatorial Game Theory, Erik D. Demaine and Robert A. Hearn, Games of No Chance 3, volume 56, pages 3-56, 2009; or David Eppstein's page, or Wikipedia's page (also here and here).
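That said, the asker's special case needs no solver at all: single-stack misère Nim yields to plain backward induction over the state space. A sketch (`limit` and `moves` correspond to the question's $L$ and $N$; the player who makes the state reach the limit loses):

```python
def can_force_win(limit, moves):
    """win[s] is True iff the player to move at state s can force a win.

    A move that lands exactly on `limit` loses immediately, so a state is
    winning iff some move reaches a state strictly below the limit from
    which the opponent cannot force a win.
    """
    win = [False] * (limit + 1)
    for s in range(limit - 1, 0, -1):
        win[s] = any(s + n < limit and not win[s + n] for n in moves)
    return win

# Example: limit 10, moves {1, 2, 3}: the losing states are 1, 5, 9,
# so the first player to move from the start state 1 loses.
win = can_force_win(10, {1, 2, 3})
```

This runs in time polynomial in $L$, which is consistent with the answer above: the hardness is a property of general combinatorial games, not of this particular one.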
Another way to think about it is that reasoning about adversarial play often involves formulas of the form $\forall x_1 \exists y_1 \forall x_2 \exists y_2 \forall x_3 \exists y_3 \cdots \varphi(x_1,y_1,\cdots)$. Here the $x_1,x_2,\dots$ variables represent one player's moves, and the $y_1,y_2,\dots$ variables represent the other player's moves. In general, such formulas are PSPACE-complete to solve: see TQBF. Therefore, it is unlikely that they can be solved with a SAT solver (or an ILP solver). | {
"domain": "cs.stackexchange",
"id": 20488,
"tags": "satisfiability, integer-programming, game-theory, optimal-strategy"
} |
How fast does Monte Carlo tree search converge? | Question: How fast does Monte Carlo Tree Search converge? Is there a proof that it converges?
How does it compare to temporal-difference learning in terms of convergence speed (assuming the evaluation step is a bit slow)?
Is there a way to exploit the information gathered during the simulation phase to accelerate MCTS?
Sorry if too many questions, if you have to choose one, please choose the last question.
Answer: Yes, Monte Carlo tree search (MCTS) has been proven to converge to optimal solutions, under assumptions of infinite memory and computation time. That is, at least for the case of perfect-information, deterministic games / MDPs.
Maybe some other problems were covered too by some proofs (I could intuitively imagine the proofs holding up for non-deterministic games as well, depending on implementation details)... but the classes of problems I mentioned above are what I'm sure about. The initial, classic proofs can be found in:
Bandit based Monte-Carlo Planning by Levente Kocsis, Csaba Szepesvári
Improved Monte-Carlo Search by Levente Kocsis, Csaba Szepesvári, Jan Willemson
Much more recently the paper On Reinforcement Learning Using Monte Carlo Tree Search with Supervised Learning: Non-Asymptotic Analysis appeared on arXiv, in which I saw it is mentioned that there may have been some flaws in those original papers, but they also seem to be able to fix it and add more theory for the more "modern" variants which combine (deep) learning approaches inside MCTS.
It should be noted that, as is typically the case, all those convergence proofs are for the case where you spend an infinite amount of time running your algorithm. In the case of MCTS, you can intuitively think of the proofs only starting to hold once your algorithm has managed to build up the complete search tree, and then, on top of that, has had sufficient time to run through all the possible paths in the tree sufficiently often for the correct values to backpropagate. This is unlikely to be realistic for most interesting problems (and if it is feasible, a simpler breadth-first search algorithm may be a better choice).
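For reference, the guarantee in the Kocsis–Szepesvári line of work is for UCT, which selects children in the tree with the UCB1 rule. A minimal sketch (function names and structure are my own):

```python
import math

def ucb1(total_value, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: exploitation term plus exploration bonus.

    Unvisited children score +inf so each child is tried at least once;
    the log(parent)/visits ratio guarantees every child keeps being
    revisited, which is what drives the convergence argument.
    """
    if visits == 0:
        return float('inf')
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children):
    """children: list of (total_value, visits) tuples under one parent."""
    parent_visits = sum(v for _, v in children)
    scores = [ucb1(t, v, parent_visits) for t, v in children]
    return scores.index(max(scores))
```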
How does it compare to Temporal Difference learning in terms of convergence speed (assuming the evaluation step is a bit slow)?
If you're thinking of a standard, tabular TD learning approach like Sarsa... such approaches actually turn out to be very closely related to MCTS. In terms of convergence speed, I'd say the important differences are:
MCTS focusses on "learning" for a single state, the root state; all efforts are put towards obtaining an accurate value estimate for that node (and its direct children), whereas typical TD implementations are about learning immediately for the complete state-space. I suppose the "focus" of MCTS could improve its convergence speed for that particular state...
but the fact that the search tree (which can be viewed as its "table" of $Q$-values as you'd see in Sarsa or $Q$-learning) only slowly grows can also be a disadvantage, in comparison to tabular TD learning approaches which start out with a complete table that covers the complete state space.
Note that papers such as the last one I linked above show how MCTS can also actually use Temporal Difference learning for its backing up of values through the tree... so looking at it from a "MCTS vs TD learning" angle doesn't really make too much sense when you consider that TD learning can be used inside MCTS.
Is there a way to exploit the information gathered during the simulation phase to accelerate MCTS?
There are lots and lots of ideas like that which tend to improve performance empirically. It will be difficult to say much about them in theory, though. Some examples off the top of my head:
All Moves As First (AMAF)
Rapid Action Value Estimation (RAVE, also see GRAVE)
Move Average Sampling Technique (MAST)
N-Gram Selection Technique (NST)
Last-Good-Reply policy
...
Many of them can be found in this survey paper, but it is somewhat old now (from 2012), so it doesn't include all the latest stuff. | {
"domain": "ai.stackexchange",
"id": 1121,
"tags": "reinforcement-learning, comparison, monte-carlo-tree-search, convergence, temporal-difference-methods"
} |
Why there are no approximation algorithms for SAT and other decision problems? | Question: I have an NP-complete decision problem. Given an instance of the problem, I would like to design an algorithm that outputs YES, if the problem is feasible, and, NO, otherwise. (Of course, if the algorithm is not optimal, it will make errors.)
I cannot find any approximation algorithms for such problems. I was looking specifically for SAT and I found in the Wikipedia page about Approximation Algorithms the following: Another limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, although it is often possible to ...
Why do we not, for example, define the approximation ratio to be something proportional to the number of mistakes that the algorithm makes? How do we actually solve decision problems in a greedy, sub-optimal manner?
Answer: Approximation algorithms are only for optimization problems, not for decision problems.
Why don't we define the approximation ratio to be the fraction of mistakes an algorithm makes, when trying to solve some decision problem? Because "the approximation ratio" is a term with a well-defined, standard meaning, one that means something else, and it would be confusing to use the same term for two different things.
OK, could we define some other ratio (let's call it something else -- e.g., "the det-ratio") that quantifies the number of mistakes an algorithm makes, for some decision problem? Well, it's not clear how to do that. What would be the denominator for that fraction? Or, to put it another way: there are going to be an infinite number of problem instances, and for some of them the algorithm will give the right answer and others it will give the wrong answer, so you end up with a ratio that is "something divided by infinity", and that ends up being meaningless or not defined.
Alternatively, we could define $r_n$ to be the fraction of mistakes the algorithm makes on problem instances of size $n$. Then, we could compute the limit of $r_n$ as $n \to \infty$, if such a limit exists. This would be well-defined (if the limit exists). However, in most cases, this might not be terribly useful. In particular, it implicitly assumes a uniform distribution on problem instances. However, in the real world, the actual distribution on problem instances may not be uniform -- it is often very far from uniform. Consequently, the number you get in this way is often not as useful as you might hope: it often gives a misleading impression of how good the algorithm is.
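By contrast, the optimization version of satisfiability, MAX-SAT, does fit the standard framework: for clauses with 3 distinct variables, a uniformly random assignment satisfies each clause with probability $7/8$, giving a $7/8$-approximation in expectation. A small check of that expectation by exhaustive enumeration (the formula here is an arbitrary example of mine):

```python
from itertools import product

# 3-CNF over variables 1..4; a negative literal means negation.
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, -3, 4)]
n_vars = 4

def satisfied(clause, assign):
    # assign maps variable number -> bool
    return any(assign[abs(l)] == (l > 0) for l in clause)

total = 0
for bits in product([False, True], repeat=n_vars):
    assign = dict(enumerate(bits, start=1))
    total += sum(satisfied(c, assign) for c in clauses)

# Expected number of satisfied clauses under a random assignment:
average = total / 2**n_vars
```

Each clause with 3 distinct variables is falsified by exactly $1/8$ of all assignments, so `average` comes out to exactly $\frac{7}{8} \cdot 4 = 3.5$ here.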
To learn more about how people deal with intractability (NP-hardness), take a look at Dealing with intractability: NP-complete problems. | {
"domain": "cs.stackexchange",
"id": 7665,
"tags": "algorithms, satisfiability, approximation, decision-problem"
} |
ROS2: How to publish cv Mat image? | Question:
I'm having trouble understanding how to publish a cv::Mat image in C++, and I can find very little documentation about this.
Here's my publish code. I based it on this tutorial: https://docs.ros.org/en/galactic/Tutorials/Writing-A-Simple-Cpp-Publisher-And-Subscriber.html
#include <chrono>
#include <memory>
#include "cv_bridge/cv_bridge.h"
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"
#include "std_msgs/msg/header.hpp"
#include <opencv2/opencv.hpp>
#include <stdio.h>
// for Size
#include <opencv2/core/types.hpp>
// for CV_8UC3
#include <opencv2/core/hal/interface.h>
// for compressing the image
#include <image_transport/image_transport.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std::chrono_literals;
class MinimalPublisher : public rclcpp::Node {
public:
MinimalPublisher() : Node("minimal_publisher"), count_(0) {
publisher_ =
this->create_publisher<sensor_msgs::msg::Image>("topic", 10);
timer_ = this->create_wall_timer(
300ms, std::bind(&MinimalPublisher::timer_callback, this));
}
private:
void timer_callback() {
cv_bridge::CvImagePtr cv_ptr;
cv::Mat img(cv::Size(1280, 720), CV_8UC3);
cv::randu(img, cv::Scalar(0, 0, 0), cv::Scalar(255, 255, 255));
sensor_msgs::msg::Image::SharedPtr msg =
cv_bridge::CvImage(std_msgs::msg::Header(), "bgr8", img)
.toImageMsg();
publisher_->publish(msg);
std::cout << "Published!" << std::endl;
}
rclcpp::TimerBase::SharedPtr timer_;
rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr publisher_;
size_t count_;
};
int main(int argc, char *argv[]) {
printf("Starting...");
rclcpp::init(argc, argv);
rclcpp::spin(std::make_shared<MinimalPublisher>());
rclcpp::shutdown();
return 0;
}
The publisher_->publish(msg) line throws the following error:
Starting >>> cpp_pubsub
--- stderr: cpp_pubsub
/app/tests/ros2_cpp_test/src/cpp_pubsub/src/publisher_member_function.cpp: In member function ‘void MinimalPublisher::timer_callback()’:
/app/tests/ros2_cpp_test/src/cpp_pubsub/src/publisher_member_function.cpp:67:26: error: no matching function for call to ‘rclcpp::Publisher<sensor_msgs::msg::Image_<std::allocator<void> > >::publish(sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr&)’
publisher_->publish(msg);
^
In file included from /opt/ros/galactic/install/include/rclcpp/topic_statistics/subscription_topic_statistics.hpp:31:0,
from /opt/ros/galactic/install/include/rclcpp/subscription.hpp:50,
from /opt/ros/galactic/install/include/rclcpp/any_executable.hpp:25,
from /opt/ros/galactic/install/include/rclcpp/memory_strategy.hpp:25,
from /opt/ros/galactic/install/include/rclcpp/memory_strategies.hpp:18,
from /opt/ros/galactic/install/include/rclcpp/executor_options.hpp:20,
from /opt/ros/galactic/install/include/rclcpp/executor.hpp:36,
from /opt/ros/galactic/install/include/rclcpp/executors/multi_threaded_executor.hpp:26,
from /opt/ros/galactic/install/include/rclcpp/executors.hpp:21,
from /opt/ros/galactic/install/include/rclcpp/rclcpp.hpp:156,
from /app/tests/ros2_cpp_test/src/cpp_pubsub/src/publisher_member_function.cpp:19:
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:187:3: note: candidate: void rclcpp::Publisher<MessageT, AllocatorT>::publish(std::unique_ptr<MessageT, typename std::conditional<std::is_same<typename std::allocator_traits<typename std::allocator_traits<_Alloc>::rebind_traits<MessageT>::allocator_type>::rebind_alloc<MessageT>, typename std::allocator<void>::rebind<_Tp1>::other>::value, std::default_delete<_Tp>, rclcpp::allocator::AllocatorDeleter<typename std::allocator_traits<_Alloc>::rebind_traits<MessageT>::allocator_type> >::type>) [with MessageT = sensor_msgs::msg::Image_<std::allocator<void> >; AllocatorT = std::allocator<void>; typename std::conditional<std::is_same<typename std::allocator_traits<typename std::allocator_traits<_Alloc>::rebind_traits<MessageT>::allocator_type>::rebind_alloc<MessageT>, typename std::allocator<void>::rebind<_Tp1>::other>::value, std::default_delete<_Tp>, rclcpp::allocator::AllocatorDeleter<typename std::allocator_traits<_Alloc>::rebind_traits<MessageT>::allocator_type> >::type = std::default_delete<sensor_msgs::msg::Image_<std::allocator<void> > >]
publish(std::unique_ptr<MessageT, MessageDeleter> msg)
^~~~~~~
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:187:3: note: no known conversion for argument 1 from ‘sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr {aka std::shared_ptr<sensor_msgs::msg::Image_<std::allocator<void> > >}’ to ‘std::unique_ptr<sensor_msgs::msg::Image_<std::allocator<void> >, std::default_delete<sensor_msgs::msg::Image_<std::allocator<void> > > >’
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:211:3: note: candidate: void rclcpp::Publisher<MessageT, AllocatorT>::publish(const MessageT&) [with MessageT = sensor_msgs::msg::Image_<std::allocator<void> >; AllocatorT = std::allocator<void>]
publish(const MessageT & msg)
^~~~~~~
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:211:3: note: no known conversion for argument 1 from ‘sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr {aka std::shared_ptr<sensor_msgs::msg::Image_<std::allocator<void> > >}’ to ‘const sensor_msgs::msg::Image_<std::allocator<void> >&’
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:228:3: note: candidate: void rclcpp::Publisher<MessageT, AllocatorT>::publish(const rcl_serialized_message_t&) [with MessageT = sensor_msgs::msg::Image_<std::allocator<void> >; AllocatorT = std::allocator<void>; rcl_serialized_message_t = rcutils_uint8_array_t]
publish(const rcl_serialized_message_t & serialized_msg)
^~~~~~~
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:228:3: note: no known conversion for argument 1 from ‘sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr {aka std::shared_ptr<sensor_msgs::msg::Image_<std::allocator<void> > >}’ to ‘const rcl_serialized_message_t& {aka const rcutils_uint8_array_t&}’
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:234:3: note: candidate: void rclcpp::Publisher<MessageT, AllocatorT>::publish(const rclcpp::SerializedMessage&) [with MessageT = sensor_msgs::msg::Image_<std::allocator<void> >; AllocatorT = std::allocator<void>]
publish(const SerializedMessage & serialized_msg)
^~~~~~~
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:234:3: note: no known conversion for argument 1 from ‘sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr {aka std::shared_ptr<sensor_msgs::msg::Image_<std::allocator<void> > >}’ to ‘const rclcpp::SerializedMessage&’
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:248:3: note: candidate: void rclcpp::Publisher<MessageT, AllocatorT>::publish(rclcpp::LoanedMessage<MessageT, AllocatorT>&&) [with MessageT = sensor_msgs::msg::Image_<std::allocator<void> >; AllocatorT = std::allocator<void>]
publish(rclcpp::LoanedMessage<MessageT, AllocatorT> && loaned_msg)
^~~~~~~
/opt/ros/galactic/install/include/rclcpp/publisher.hpp:248:3: note: no known conversion for argument 1 from ‘sensor_msgs::msg::Image_<std::allocator<void> >::SharedPtr {aka std::shared_ptr<sensor_msgs::msg::Image_<std::allocator<void> > >}’ to ‘rclcpp::LoanedMessage<sensor_msgs::msg::Image_<std::allocator<void> >, std::allocator<void> >&&’
make[2]: *** [CMakeFiles/talker.dir/src/publisher_member_function.cpp.o] Error 1
make[1]: *** [CMakeFiles/talker.dir/all] Error 2
make: *** [all] Error 2
---
Failed <<< cpp_pubsub [22.8s, exited with code 2]
Summary: 0 packages finished [23.8s]
1 package failed: cpp_pubsub
1 package had stderr output: cpp_pubsub
Originally posted by zjeffer on ROS Answers with karma: 46 on 2022-03-07
Post score: 1
Answer:
I fixed it by dereferencing the shared pointer, so the message is passed to the publish() overload that takes a const reference (as the compiler output shows, none of the overloads accept a SharedPtr):
publisher_->publish(*msg.get());
Originally posted by zjeffer with karma: 46 on 2022-03-07
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 37489,
"tags": "ros2, c++, opencv, cv-bridge"
} |
Prove that the language of non-prime numbers written in unary is not regular | Question: I'm trying to prove that the following language is not regular.
$$\text{Notprime} = \{a^n \mid n \text{ is not prime}\}
= \{\epsilon, a, aaaa, aaaaaa, aaaaaaaa, \ldots\}$$
Here's what I have:
"If Notprime were regular, then its complement would be regular also. However, the complement of Notprime is the language Prime, hence Notprime is non-regular."
Is this the right way of proving it? Any help is appreciated!
Answer: This proof is correct: the complement of a regular language is indeed a regular language (and dually the complement of a non-regular language is non-regular). Your proof relies on the assumption that Prime is not regular. If this was given as an exercise and there wasn't an earlier question about Prime, then the intent of the exercise was probably that you prove that Prime is not regular (or directly prove that Notprime is not regular, which is about as easy).
The usual method in elementary exercises to prove that a language is not regular is the pumping lemma. It works for this example. See How to prove that a language is not regular? for more methods, and questions tagged pumping-lemma for examples of using the pumping lemma.
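For intuition on why the unary structure fails here: no finite union of arithmetic progressions can eventually match the primes (or their complement), because prime gaps can be made arbitrarily long. In particular $n!+2, \dots, n!+n$ are all composite, since $k$ divides $n!+k$. A quick check:

```python
import math

def is_prime(n):
    # simple trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 8
f = math.factorial(n)
# f + k is divisible by k for 2 <= k <= n, so this run is prime-free
gap = [f + k for k in range(2, n + 1)]
```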
For an alphabet with a single letter, there is a general form for all regular languages: they are the languages that, for sufficiently long words, consist of the union of several arithmetic progressions with the same coefficient: $\{a^{ak+b} \mid k\in\mathbb{N}, b \in B\}$ for some fixed $a$ and some finite set $B$. See What are the possible sets of word lengths in a regular language? for a more precise description and a proof. The set of primes, or the set of non-primes, does not have this structure, since there are arbitrarily large gaps between primes (for instance, $n!+2, \dots, n!+n$ are all composite). | {
"domain": "cs.stackexchange",
"id": 545,
"tags": "regular-languages, finite-automata"
} |
Please help identify this physics apparatus! | Question:
This was my grandfather's and I have no idea what it is, only that it is some piece of physics equipment!
The main black cylinder doesn't seem like it wants to rotate, but I'm not sure if it should?
Answer: It looks like an induction coil with the make and break device at the bottom and a switch right at the bottom. If you connect it up to an accumulator, be very, very careful as the output between the two balls, when separate, could be lethal. Also the electrical insulation elsewhere may be poor and you might get a shock just by touching the switch.
Use with very great care and preferably have somebody who knows about such devices with you. | {
"domain": "physics.stackexchange",
"id": 54201,
"tags": "electromagnetism, electricity, electromagnetic-induction, induction, instrument"
} |
Kinematic Problem, two balls thrown down from building at different times, when will they meet and what distance will they hit at? | Question: You drop a ball from a building on the road $40\, \mathrm{m}$ below. Your friend beside you throws a second ball 1 second later at $25\,\mathrm{m/s}$, trying to hit your ball in mid-air. Assuming the aim is good, how high does it hit?
So I started by stating my variables
initial distance of ball one and two: $40\, \mathrm{m}$
final distance of ball one and two: $0$
Initial velocity of one: $0\,\mathrm{ m/s}$
Initial velocity of two: $25\,\mathrm{m/s}$
Final velocity of one: ? Final velocity of two: ?
acceleration of one and two: $9.81\,\mathrm{m/s^2}$
This is all I know. I started by using a kinematic equation on ball one:
$$2ad=v_f^2-v_i^2$$
subbing what I know gives me $2(9.81)(40)=v_f^2 - 0^2\to v_f=28.01\,\mathrm{m/s}$
I used this value along with other knowns in the another equation to find time:
$v_f=v_i+a t\to 28.01=0+9.81 \,t \to t= 2.855\,\mathrm{s}$
Ok, now I have the time the first ball took to fall, now I will use the exact same process for the second ball:
$$2ad=v_f^2-v_i^2 \to 2(9.81)(40)=v_f^2-25^2 \to v_f=37.54 \mathrm{m/s}$$
Now I'll plug that number into the other kinematic equation
$$v_f=v_i+a t \to 37.54=25+9.81\,t \to t=1.278\,\mathrm{ s}$$
Since this ball was thrown a second after the first, I'll add one second to the time above: $1.278+1= 2.278$.
Since this time is less than the time for the first ball, they must meet in mid-air. But how do I find where? I tried setting kinematic equations equal to each other but got nowhere since all the distances cancel out. Please help.
Answer: They both must travel the same displacement in order to collide with each other. Try using the kinematic equation $s=ut+\frac{1}{2}at^2$. Remember that as Socre said, one will have travelled the displacement in $x$ seconds, while other in $(x-1)$ seconds as it was thrown 1 second after and that the displacement of the ball 1 = the displacement of the ball 2 as they collide. | {
"domain": "physics.stackexchange",
"id": 25523,
"tags": "homework-and-exercises, kinematics"
} |
When the decay constant is not constant. Limit definition of the exponential of an integral? | Question: In radioactive decay (for example) the probability for a particle to decay per unit time is $\Gamma$. When this is a constant the probability to not decay after time $T$, $P(T)$, is derivable by splitting $T$ into $n$ timesteps of length $dt = T/n$ and using
$$P(T) = \lim_{n \rightarrow \infty}\left( 1 - \Gamma \frac{T}{n}\right)^n = e^{-\Gamma T},$$
using the limit definition of the exponential.
I am interested in the case when $\Gamma \rightarrow \Gamma(t)$ is a function of time. I suspect the answer is now
$$P(T) = e^{-\int_0^T \Gamma(t) dt}.$$
I am wondering if one can prove this again by the limit definition of an exponential. Discretising time into $n$ steps again I believe
$$P(T) = \lim_{n\rightarrow \infty}\prod_{k=0}^{n-1} \left( 1 - \Gamma(k T/n) \frac{T}{n}\right).$$
Is this known to be an alternative definition of $e^{-\int_0^T \Gamma(t) dt}$?
Is there a better way to prove this?
Answer: If you consider a small time interval $dt$, the change of probability to not decay ($dP(t)$) is given by the product of probability to decay per unit time ($\Gamma$) times time interval ($dt$) times the current probability that the particle has not decayed yet ($P(t)$):
$$dP(t) = -\Gamma(t) dt P(t).$$
So, basically we derived a differential equation on $P(t)$. Your proposed
$$P(T) = e^{-\int_0^T \Gamma(t)dt}$$
is indeed a solution to this equation. Moreover, it obeys the correct initial condition:
$$P(T=0) = 1$$
which means the particle begins to decay at the moment $T=0$.
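As a numerical sanity check (a quick sketch, not part of the proof; the rate $\Gamma(t) = 1 + t$ is an arbitrary choice), the asker's discretized product does converge to this exponential:

```python
import math

def product_limit(gamma, T, n):
    # Discretized survival probability: product over k of (1 - gamma(k*T/n) * T/n)
    dt = T / n
    p = 1.0
    for k in range(n):
        p *= 1.0 - gamma(k * dt) * dt
    return p

gamma = lambda t: 1.0 + t   # arbitrary time-dependent decay rate
T = 1.0
exact = math.exp(-1.5)      # exp(-integral of (1+t) dt from 0 to 1) = exp(-3/2)

print(abs(product_limit(gamma, T, 100_000) - exact))  # tends to 0 as n grows
```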
This logic could be used to prove your relation for the exponential of an integral as well, even though we don't need to use it in this approach. | {
"domain": "physics.stackexchange",
"id": 77883,
"tags": "quantum-mechanics, nuclear-physics, radiation, probability"
} |
How hard are PSPACE-complete problems? | Question: There are already good answers from several perspectives regarding the "hardness" of $PSPACE$-complete problems, such as this: What is practical difference between NP and PSPACE-complete?
But what are the practical implications when we actually try to solve (decide) a $PSPACE$-complete problem on a deterministic Turing machine? If $PSPACE \neq NP$ (or $PSPACE \neq P$, should it make a difference) then obviously it will take super-polynomial (exponential?) time. But what about space? Is there some $k>1$ that we know of such that it will definitely take $\Omega(n^k)$ space? And do we know for sure that it won't take more than $O(n^l)$ space for some $l$?
(Or is the question itself stated badly for some reason?)
Answer: By definition, PSPACE consists of all languages decided by some Turing machine using polynomial space. So every language in PSPACE can be decided by some Turing machine using space $O(n^\ell)$ for some $\ell$. The space hierarchy theorem ensures moreover that for any $\ell$ there is a language in PSPACE which requires space $\Omega(n^\ell)$.
Any PSPACE-hard language requires space $\Omega(n^k)$ for some $k$, since otherwise any language in PSPACE would be decided by a Turing machine using subpolynomial space, and this contradicts the space hierarchy theorem.
For any $k > 0$ there is a PSPACE-hard language which can be decided in space $O(n^k)$. Indeed, take any PSPACE-complete language $L$. It can be decided using space $O(n^\ell)$ for some $\ell$. Consider now the language $L' = \{(x,0^{|x|^{\ell/k}}) : x \in L\}$, which is PSPACE-hard by reduction from $L$. An algorithm for $L$ can be converted to one for $L'$ which uses $\log n$ space to verify the input structure and $O(|x|^\ell)$ for the rest. Since $n \geq |x|^{\ell/k}$, $O(|x|^\ell) = O(n^{(k/\ell)\ell}) = O(n^k)$. | {
"domain": "cs.stackexchange",
"id": 8352,
"tags": "space-complexity"
} |
What would happen to wet laundry in the vacuum of space? | Question: Imagine for a second that I was in a college dorm room frustrated that all the dryers were broken or in use. I'm impatient and not a jerk so I don't wait or take out someone else's laundry, I shoot it into space with the ability to get it back and land it successfully on my campus. Ignoring practicality of course:
How would my clothes react? Would the water in them freeze? Would the water just dissipate into the rest of space? Would it boil if it were in the light of the sun?
I'm curious about three cases:
One, where the clothes are in a general vacuum regardless of the effect of the sun.
Two, where my clothes are specifically behind the Earth and so they are eclipsed by the Earth.
Three, where my clothes are specifically between the Earth and the sun.
For those willing to make the argument that this is a silly topic: would it or would it not have implications with space suits?
Thanks!
Answer: 1) Most materials you use in everyday life contain far more moisture than you might believe. This is a major reason materials meant to be exposed to space are specially designed and tested. In a general vacuum, most fabrics and many plastics will outgas - all of the absorbed moisture and oils will work their way to the surface and boil off - which is a major source of contamination for sensitive equipment. With a wet shirt in a general vacuum, the water would boil away extremely quickly; as would the oils and even some of the lower quality waxes (depending on the material).
2) When behind Earth, the temperature does drop significantly. However, because the shirt has its initial heat from the ground and because the only way of losing that heat in space is through radiative and evaporative heat loss, the water on the shirt will explosively boil off almost instantly. The small water drops that might fly off the shirt will freeze to ice pellets almost immediately, but the shirt will be more or less dry.
3) With the Sun shining on it, the shirt will again "insta-dry" but all of the water will vaporize; there will not be small ice pellets flying away from the shirt.
Overall though, I don't recommend doing it. The moisture and oils that outgas from your clothes will put large stresses on the fabrics and might ruin your clothes. Also, you could shrink your new sweater and that was a gift from your grandmother; you wouldn't want that right? | {
"domain": "physics.stackexchange",
"id": 11734,
"tags": "water, space, earth, sun, vacuum"
} |
Anything special about the internal structure of Carbon-12? | Question: In trying to understand the various structures carbon forms, I'm wondering what, if anything, is so special about having 6 neutrons and 6 protons in the nucleus. I'm aware there are permutations possible (in general) with respect to the specific arrangement of nucleons.
On the surface of the issue it seems like there is something about the internal structures possible that is peculiar... It isn't a rational train of thought but it is tempting to ask if there is something more to the internal structure - a lot of 3's and 2's appearing suggesting a geometric or numeric answer...
I've tried to think of it as a sphere packing problem, knowing that the analogy wouldn't be entirely appropriate, haven't gotten far yet. Also wondering if there is any relation to icosahedra, having 12 vertices and a plethora of interesting geometrical properties.
In short, is there anything to be said about the internal structure of Carbon-12 that's remarkable or distinct to that isotope?
Answer: Carbon-12 is an "alpha-cluster nucleus," with even proton number $Z$, even neutron number $N$, and $N=Z$. The alpha-cluster nuclei up to argon or so are slightly more stable than their "mirror nuclei" neighbors at $Z-2,N+2$, and tend to be concentrated in stellar nucleosynthesis.
Nuclear structure is a big subject where lots of different approaches are good at explaining various phenomena.
The cluster model is one approach (or at least, a phenomenon that should arise from a good microscopic nuclear model).
The shell model follows the same sort of four-quantum-number ruleset that leads to the electron structure of the periodic table. For subtle reasons the nucleon shells fill differently than electron shells do: the noble gases have $2,10,18,36,\cdots$ electrons, while the "magic nuclei" have $8,20,28,50,\cdots$ protons and/or neutrons.
For heavy nuclei you can kind of gloss over the details of what's happening inside and model the nucleus as a liquid drop.
Each of these approaches has strengths and weaknesses.
I'm not aware of an approach that involves icosahedra.
Note that whether your model includes nucleons or clusters, the nucleus is profoundly quantum-mechanical.
The wavelength of each nucleon is comparable to the size of the nucleus, and the overlap and interference between them is a fundamental part of the nuclear dynamics.
This is very much unlike atoms packed into a crystal lattice in a solid; it's more like the ocean of nonlocalized conduction electrons in a metal. | {
"domain": "physics.stackexchange",
"id": 21816,
"tags": "quantum-mechanics, particle-physics"
} |
how to edit the ~/.bashrc file | Question:
The default editor for rosed is vim. To set the default editor to something else edit your ~/.bashrc file to include:
export EDITOR='emacs -nw'
But I don't know how to edit the ~/.bashrc file.
Originally posted by hangzhang on ROS Answers with karma: 88 on 2013-10-06
Post score: 2
Answer:
To edit it:
vi ~/.bashrc
and add that line. You could also use (as SaiHV answered)
gedit ~/.bashrc
Make sure to
source ~/.bashrc
after editing it.
Originally posted by jdorich with karma: 211 on 2013-10-06
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by hangzhang on 2013-10-06:
but how to add that line when I input: vi ~/.bashrc? I am new at this, could you give me the details, thanks so much.
Comment by hangzhang on 2013-10-06:
Thanks. I have solved the problem. | {
"domain": "robotics.stackexchange",
"id": 15769,
"tags": "ros"
} |
A Game Of Simon | Question: This is one of the projects from freeCodeCamp. It's a game of Simon (a memory game). I would like a review on my code.
var generatedPattern = [];
var playerPattern = [];
var patternLength = 1;
var redRef = 1;
var greenRef = 2;
var blueRef = 3;
var yellowRef = 4;
function generateNextPatternValue() {
if (patternLength == 20) {
gameReset();
} else {
var next = getRandomNum(1, 4);
if (next === generatedPattern[generatedPattern.length - 1] && next == 4)
next--;
else if (next === generatedPattern[generatedPattern.length - 1])
next++;
generatedPattern.push(next);
animateGeneratedPattern();
}
}
function getRandomNum(min, max) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
function animateRed(lightup) {
if (lightup) {
var audio = new Audio("https://s3.amazonaws.com/freecodecamp/simonSound1.mp3");
audio.play();
}
$("#1").toggleClass("animateRed");
}
function animateGreen(lightup) {
if (lightup) {
var audio = new Audio("https://s3.amazonaws.com/freecodecamp/simonSound2.mp3");
audio.play();
}
$("#2").toggleClass("animateGreen");
}
function animateBlue(lightup) {
if (lightup) {
var audio = new Audio("https://s3.amazonaws.com/freecodecamp/simonSound3.mp3");
audio.play();
}
$("#3").toggleClass("animateBlue");
}
function animateYellow(lightup) {
if (lightup) {
var audio = new Audio("https://s3.amazonaws.com/freecodecamp/simonSound4.mp3");
audio.play();
}
$("#4").toggleClass("animateYellow");
}
function animateGeneratedPattern() {
var i = 0;
function animateNextPattern(lightup) {
if (!generatedPattern || generatedPattern.length === i) {
return;
}
switch (generatedPattern[i]) {
case 1:
animateRed(lightup);
break;
case 2:
animateGreen(lightup);
break;
case 3:
animateBlue(lightup);
break;
case 4:
animateYellow(lightup);
break;
}
if (lightup) {
// Long delay before turning light off
setTimeout(function() {
animateNextPattern(false);
}, 1000);
} else {
// Small delay before turning on next light
i++;
setTimeout(function() {
animateNextPattern(true);
}, 100);
}
}
animateNextPattern(true);
}
function gameReset() {
if (patternLength == 20)
alert("You Have Won");
document.getElementById("start").disabled = false;
playerPattern = [];
generatedPattern = [];
patternLength = 1;
updateLength();
restoreColors();
}
function restoreColors() {
$("#1").removeClass("animateRed");
$("#2").removeClass("animateGreen");
$("#3").removeClass("animateBlue");
$("#4").removeClass("animateYellow");
}
function updateLength() {
$("#length").empty();
var txtNode = document.createTextNode(patternLength);
document.getElementById("length").appendChild(txtNode);
}
function animateScreen(turnOn) {
if (turnOn) {
var audio = new Audio("http://www.sounds.beachware.com/2illionzayp3may/illwavul/CLNKBEEP.mp3");
audio.play();
}
$("body").toggleClass("wrongSeries");
}
function wrongChoice() {
animateScreen(true);
setTimeout(function() {
animateScreen(false);
}, 100);
}
function unbind() {
$("#1").unbind("click");
$("#2").unbind("click");
$("#3").unbind("click");
$("#4").unbind("click");
}
function bind() {
$("#1").bind("click", function() {
if (document.getElementById("start").disabled) {
playerPattern.push(1);
if (checkSeries()) {
animateRed(true);
setTimeout(function() {
animateRed(false);
}, 500);
}
}
});
$("#2").bind("click", function() {
if (document.getElementById("start").disabled) {
playerPattern.push(2);
if (checkSeries()) {
animateGreen(true);
setTimeout(function() {
animateGreen(false);
}, 500);
}
}
});
$("#3").bind("click", function() {
if (document.getElementById("start").disabled) {
playerPattern.push(3);
if (checkSeries()) {
animateBlue(true);
setTimeout(function() {
animateBlue(false);
}, 500);
}
}
});
$("#4").bind("click", function() {
if (document.getElementById("start").disabled) {
playerPattern.push(4);
if (checkSeries()) {
animateYellow(true);
setTimeout(function() {
animateYellow(false);
}, 500);
}
}
});
}
function checkSeries() {
if (playerPattern[playerPattern.length - 1] !== generatedPattern[playerPattern.length - 1]) {
if (document.getElementById("strict").checked) {
wrongChoice();
gameReset();
document.getElementById("start").disabled = true;
setTimeout(generateNextPatternValue, 1500);
} else {
unbind();
wrongChoice();
playerPattern = [];
setTimeout(animateGeneratedPattern, 1500);
setTimeout(bind, 1500 * patternLength);
}
return false;
} else if (playerPattern.length == patternLength) {
unbind();
patternLength++;
updateLength();
playerPattern = [];
setTimeout(generateNextPatternValue, 1000);
setTimeout(bind, 1500 * patternLength);
return true;
}
return true;
}
window.addEventListener("load", updateLength);
window.addEventListener("load", bind);
$("#start").on("click", function() {
document.getElementById("start").disabled = true;
generateNextPatternValue();
});
$("#reset").on("click", gameReset);
Full code here.
Answer: Duplicate Code
A lot of your code is really similar to other parts. The duplication is generally found when working with your colors. For example, take a look at your animateRed/Green/Blue/Yellow functions. The only differences between them are the audio file, the html element, and the CSS class. You could shorten your code while making it easier to edit by merging the similar parts together:
function animateColor(color) { // your lightup boolean controls the sound?
color.audio.play();
color.element.toggleClass(color.animateClass);
}
function stopAnimation(color) {
color.element.toggleClass(color.animateClass);
}
animateColor({
audio: new Audio("http url red"),
element: $("#1"),
animateClass: "animateRed"
});
You should consider making an array of all your colors instead of giving each their own variable. This allows you to iterate through the colors as opposed to explicitly executing functions on each one:
var colors = [
{
audio: new Audio("http red"), // this also caches the audio file, more on that later
element: $("#1"),
animateClass: "animateRed"
},
{
audio: new Audio("http green"),
element: $("#2"),
animateClass: "animateGreen"
},
...
];
With the array you can iterate through each element using a for loop or the Array.forEach method:
function unbind() {
colors.forEach(function (color) {
color.element.unbind("click");
});
}
function bind() {
colors.forEach(function (color, index) {
color.element.bind("click", function() {
if (document.getElementById("start").disabled) {
playerPattern.push(index);
if (checkSeries()) {
animateColor(color);
setTimeout(function () {
stopAnimation(color);
}, 500);
}
}
});
});
}
See how much shorter this has become? Now if you want to change how any of these methods work, you only need to make that change in one place. Additionally, if you ever decide to add another color to the game, you'd only need to add it to the array.
Randomly Selecting a Color
Your algorithm for randomly selecting the next color is weighted heavily for the color one greater than the last one chosen (except yellow which is 1 less). For example, if blue was the last to flash, there's a 50% chance the next chosen color is yellow.
Proof: if blue(3) was the last chosen, then the random number from 1
to 4 has the following cases:
1 => Red, 2 => Green, 3 => Yellow, 4 => Yellow
Since each number has a 25% chance of being chosen, Yellow has a 50%
chance of being the next color.
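That proof can be checked mechanically by pushing each of the four equally likely draws through the original selection rule (a small sketch of that rule in Python; the game's actual code is JavaScript):

```python
from collections import Counter

def next_color(draw, last):
    # The original rule: a draw equal to the last color is bumped up,
    # or down when it is already 4 (yellow)
    if draw == last and draw == 4:
        draw -= 1
    elif draw == last:
        draw += 1
    return draw

last = 3  # blue was the last color flashed
dist = Counter(next_color(d, last) for d in (1, 2, 3, 4))
print(dist)  # yellow (4) comes up twice as often as red (1) or green (2)
```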
If you decide to use the array of colors, you could randomly select one without repetition like so:
function getNextIndex(arrayLength, lastIndex) {
var nextIndex = lastIndex + getRandomNum(1, arrayLength - 1);
return nextIndex % arrayLength;
}
How it works: the three other indices are 1, 2, or 3 steps away from
lastIndex, modulo 4 (think of the colors as cycling around, e.g.
yellow + 1 = red)
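That argument is easy to verify exhaustively (a Python sketch mirroring getNextIndex, with offset standing in for the getRandomNum(1, arrayLength - 1) draw):

```python
def next_index(array_length, last_index, offset):
    # offset is 1 .. array_length - 1, so the result can never equal last_index
    return (last_index + offset) % array_length

n = 4  # four colors
for last in range(n):
    reachable = {next_index(n, last, off) for off in range(1, n)}
    # each of the other three indices is hit by exactly one offset
    assert reachable == set(range(n)) - {last}
print("every other color reachable, never a repeat")
```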
Better User Interaction
I tried out your codepen and had a few problems with the user interface. The flashing was hard to see for some colors, and there was an inconsistent delay between clicking and hearing the audio play.
I think you ought to make the colors flash brighter:
.animateRed {
background: #FFCCCC;
}
.animateGreen {
background: #CCFFCC;
}
.animateBlue {
background: #CCCCFF;
}
.animateYellow {
background: #FFFFCC;
}
And to eliminate the sound delay, you should cache your sounds, so your code doesn't load a new Audio object for each animation:
var sounds = {
red: new Audio("http link"),
blue: new Audio(...),
....
};
sounds.red.play();
sounds.red.play();
sounds.red.play(); // each call uses the same audio
This will make your game more responsive and enjoyable to play.
Your project looks like it's turning out pretty well! Keep up the hard work. | {
"domain": "codereview.stackexchange",
"id": 20286,
"tags": "javascript, performance, jquery, programming-challenge, simon-says"
} |
Get every information about the Reservation | Question: I'm developing a plugin for Revit (a software to make 3D buildings).
The goal is simple to understand. When there is an intersection between a Wall and a Duct I create an object called Reservation at this location. I need to extract the Curve of the Ducts and the Faces of the Walls in order to calculate this intersection.
My algorithm is working fine and fast with a small building (3 Ducts, 10 Walls and 8 intersections), but when I want to launch it on a real project (around 10 000 Ducts) the code is way too slow due to many foreach loops. Here is the sample which causes the issue:
foreach (Duct d in ducts)
{
Curve ductCurve = FindDuctCurve(d);
curves.Add(ductCurve);
foreach (Wall w in walls)
{
wallFaces = FindWallFace(w);
foreach (Curve c in curves)
{
foreach (Face f in wallFaces)
{
foreach (KeyValuePair<XYZ, Wall> pair in FindInterWalls(c, f, walls))
{
Reservation.Res res = new Reservation.Res();
res.RoundCenter = new XYZ(Math.Round(pair.Key.X), Math.Round(pair.Key.Y), Math.Round(pair.Key.Z));
res.WallWidth = pair.Value.Width;
bool containsItemX = resList.Any(itemX => itemX.RoundCenter.DistanceTo(res.RoundCenter) < res.WallWidth + 1);
if (containsItemX == false)
{
res.AssociatedWall = pair.Value;
res.Radius = 1;
res.AssociatedDuct = d;
res.Center = pair.Key;
resList.Add(res);
model.Reservations.Add(new Reservation { ResList = resList });
}
}
}
}
}
}
The custom methods I'm using also contain loops. I wonder if LINQ would be faster but I don't really know how to use it.
In a nutshell, I want to get all the information about Reservations without losing so much time stuck in so many foreach loops.
Here is my Reservation Class :
public sealed class Reservation
{
public List<Res> ResList { get; set; }
public class Res
{
public XYZ Center { get; set; }
public XYZ RoundCenter { get; set; }
public Duct AssociatedDuct { get; set; }
public Wall AssociatedWall { get; set; }
public double WallWidth { get; set; }
public int Radius { get; set; }
}
public Reservation()
{
ResList = new List<Res>();
}
}
A Curve is a straight line in the center of a Duct (each Duct contains one Curve), and a Face is a side of a Wall (each Wall contains 6 Faces).
Answer: I hope I have understood this now.
Instead of iterating over all the Duct items and adding the related Curve to curves on each iteration, you should create another class like
public class DuctCurve
{
public Duct Duct {get; private set; }
public Curve Curve {get; private set; }
public DuctCurve(Duct duct, Curve curve)
{
Duct = duct;
Curve = curve;
}
}
now we iterate once over all of the Ducts and find the related Curve, which we will add to a List<DuctCurve> like so
List<DuctCurve> ductCurves = new List<DuctCurve>();
foreach (Duct d in ducts)
{
ductCurves.Add(new DuctCurve(d, FindDuctCurve(d)));
}
then we need to adjust the remaining code to use the ductCurves and use the ! operator instead of using containsItemX == false like so
foreach (Wall w in walls)
{
wallFaces = FindWallFace(w);
foreach (DuctCurve dc in ductCurves)
{
foreach (Face f in wallFaces)
{
foreach (KeyValuePair<XYZ, Wall> pair in FindInterWalls(dc.Curve, f, walls))
{
Reservation.Res res = new Reservation.Res();
res.RoundCenter = new XYZ(Math.Round(pair.Key.X), Math.Round(pair.Key.Y), Math.Round(pair.Key.Z));
res.WallWidth = pair.Value.Width;
bool containsItemX = resList.Any(itemX => itemX.RoundCenter.DistanceTo(res.RoundCenter) < res.WallWidth + 1);
if (!containsItemX)
{
res.AssociatedWall = pair.Value;
res.Radius = 1;
res.AssociatedDuct = dc.Duct;
res.Center = pair.Key;
resList.Add(res);
model.Reservations.Add(new Reservation { ResList = resList });
}
}
}
}
} | {
"domain": "codereview.stackexchange",
"id": 20514,
"tags": "c#, performance"
} |
Moment of Inertia | Question: The Question is
A thin uniform rod of mass $M$ and length $L$ is bent at its center so that the two segments are now
perpendicular to each other. Find its moment of inertia about an axis perpendicular to its plane and
passing through
(a) the point where the two segments meet, and
(b) the midpoint of the line connecting its two ends.
Here is my question, where does the one over twelve come from in the third sentence of the solution?
Answer: Neither is the question good, nor is my answer.
But this may help you:
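For the $1/12$ specifically: it is the moment of inertia of a uniform thin rod about a perpendicular axis through its own center. Assuming the rod has mass $M$, length $L$, and uniform linear density $\lambda = M/L$, the defining integral gives

$$I_{\text{cm}} = \int x^2\,\mathrm{d}m = \frac{M}{L}\int_{-L/2}^{L/2} x^2\,\mathrm{d}x = \frac{M}{L}\cdot\frac{L^3}{12} = \frac{1}{12}ML^2.$$

In the bent-rod problem each half is itself a uniform rod of mass $M/2$ and length $L/2$, so the same factor shows up for each segment (combined with the parallel axis theorem to shift to the required axis).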
I cannot help better than this. | {
"domain": "physics.stackexchange",
"id": 39141,
"tags": "homework-and-exercises, newtonian-mechanics, moment-of-inertia"
} |
Difference between subshell and sub-subshell? | Question: I consider the s, p, d, f orbitals as subshells, but I have heard about the term sub-subshell. What is the difference between these two?
Answer: As you pointed out, the s, p, d, and f denote subshells of atoms.
The term sub-subshells is very uncommon. I did find a couple of uses.
Materials Science
By G. K. Narula, K. S. Narula, V. K. Gupta
pdf file about Modern Atomic Mechanical Theory, page 24
What the authors are noting is that the 2p subshell will have three orbitals which can support two electrons each. Each of these three orbitals is being referred to as a sub-subshell.
As I said it seems a very uncommon term, and I'd suggest you avoid using it. | {
"domain": "chemistry.stackexchange",
"id": 8710,
"tags": "orbitals"
} |
Exception handling, et al - How do I make this web downloader not "poor"? | Question: I haven't done Java coding in years, but I thought I would give it a shot for a job interview. I have a few questions:
Why is this error handling considered poor? How am I supposed to be doing it?
What error catching should I be doing in a finally clause?
How am I masking exceptions?
package jGet;
/**
* Problem
*
* Create a very simple Java class that will retrieve the resource of any URL (using the HTTP protocol)
* and save the contents as seen by the browser, to a file.
*
* Restrictions
*
* You are free to use any library/technique, except for java.net.Url, java.net.URI or java.net.UrlConnection.
* Solutions using these classes will not be accepted.
* You are free to change the class signature for better error handling and readability.
*
* Initial Class outline
*
* public JGet extends Object {
* public JGet( String urlToPage, String saveToFilename ) {
* }
* public Object getContents() {
* }
* }
*
* Sample Test cases
*
* The class should be able to download the following sample URL's to a file:
* http://www.bing.com/
* http://www.aw20.co.uk/images/logo.png
*
* Time Allowance
*
* This took me a long time to complete (longer than 30 minutes).
* I returned the completed .java file to the person who invited me to take this, and
* this is the feedback I got back:
* 1) Poor exception handling
* 2) No finally clause in case of errors
* 3) getContents() returns a void
* 4) You should never throw a NullPointerException
* 5) Exception masking
* 6) Constructor calls the function getContents() no matter what
* 7) Poor layout
*
*/
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.commons.io.IOUtils;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
public class JGet {
private final String urlToPage;
private final String saveToFileName;
/**
* @param urlToPage
* @param saveToFilename
*/
public JGet(String urlToPage, String saveToFilename) {
if (urlToPage == null) {
throw new IllegalArgumentException("The URL must not be null");
}
if (saveToFilename == null) {
throw new IllegalArgumentException(
"The name of the destination file must not be null");
}
if (urlToPage.length() == 0) {
throw new IllegalArgumentException("The URL must not be blank");
}
if (saveToFilename.length() == 0) {
throw new IllegalArgumentException(
"The name of the destination file must not be blank");
}
this.saveToFileName = saveToFilename;
this.urlToPage = urlToPage;
try {
getContents();
} catch (IOException e) {
throw new IllegalArgumentException("The ability to save to the destination file must not be blocked");
} catch (IllegalArgumentException e) {
throw new IllegalArgumentException("The URL must not be malformed");
}
}
/**
* @throws IOException
* @throws IllegalArgumentException
*/
public void getContents() throws IOException, IllegalArgumentException {
HttpClient httpClient = new DefaultHttpClient();
HttpGet httpGet = new HttpGet(this.urlToPage);
HttpResponse response = httpClient.execute(httpGet);
HttpEntity entity = response.getEntity();
IOUtils.copy(entity.getContent(), new FileOutputStream(this.saveToFileName));
}
/**
* @param args
*/
public static void main(String[] args) {
new JGet(null, "bing.html");
new JGet("http://www.bing.com/", null);
new JGet("", "bing.html");
new JGet("http://www.bing.com/", "");
new JGet("http://www.bing.com/", "readonly.html");
new JGet("Malformed URL", "bing.html");
new JGet("http://www.bing.com/", "bing.html");
new JGet("http://www.aw20.co.uk/a/img/logo.png", "logo.png");
}
}
Answer: Let me start off by saying how ridiculous the restrictions are:
You are free to use any library/technique, except for java.net.Url, java.net.URI
or java.net.UrlConnection.
Solutions using these classes will not be accepted.
The fact is that the apache libraries which you are using internally use java.net.Url and java.net.URI.
There are a few things that I'd like to point out about your code:
There is some redundancy that can be avoided in the argument validation:
if (urlToPage == null) {
throw new IllegalArgumentException("The URL must not be null");
}
if (saveToFilename == null) {
throw new IllegalArgumentException(
"The name of the destination file must not be null");
}
if (urlToPage.length() == 0) {
throw new IllegalArgumentException("The URL must not be blank");
}
if (saveToFilename.length() == 0) {
throw new IllegalArgumentException(
"The name of the destination file must not be blank");
}
Why don't we express this repeated logic in a validation method:
private static void validate(Object argument, Object illegal, String message){
// reference == to avoid NullPointerException
if (argument == illegal || argument.equals(illegal))
throw new IllegalArgumentException(message);
}
This cleans up that part of the constructor considerably:
validate(urlToPage, null, "The URL must not be null");
validate(saveToFileName, null, "The name of the destination file must not be null");
validate(urlToPage.length(), 0, "The URL must not be blank");
validate(saveToFileName.length(), 0, "The destination file name must not be blank");
The criticism "Constructor calls the function getContents() no matter what" is justified in the sense that your constructor shouldn't do any complex work (such as file downloading) at all. So just set the two fields and be done. The file download should be explicitly started after creation of an instance.
The try-catch at the end of your constructor does hide IOException as IllegalArgumentException. Avoid ever throwing IllegalArgumentException if it isn't right at the start of your method, or you will confuse your callers. Remove all this as you shouldn't be doing the download in the constructor anyway.
You are right: Object is not an adequate return type for getContents. In this situation it would have been important to demand an explanation as to what kind of return is expected!
However: if you change the method to return void, you also need to change the name as get is no longer an accurate verb. I'd simply go with download.
This may be a bit subjective, but I wouldn't specify Exceptions in the throws clause unless they are checked exceptions and therefore have to be included. So I'd leave out the throws IllegalArgumentException.
The finally criticism was presumably referring to the unclosed output stream:
public void download() throws IOException {
HttpClient httpClient = new DefaultHttpClient();
HttpResponse response = httpClient.execute(new HttpGet(urlToPage));
HttpEntity entity = response.getEntity();
FileOutputStream stream = null;
try {
stream = new FileOutputStream(this.saveToFileName);
IOUtils.copy(entity.getContent(), stream);
}
finally{
if (stream != null)
stream.close();
}
}
All in all, I don't consider this exercise to be effective in showing if someone is a good programmer or not. The object-orientation is contrived; a static utility class would be a lot more suitable for task. Who wants to instantiate a new object for every download, especially when none of the features of object-orientation (inheritance, polymorphism, ...) are used beneficially? | {
"domain": "codereview.stackexchange",
"id": 2356,
"tags": "java, interview-questions, exception-handling, http"
} |