anchor | positive | source |
|---|---|---|
Processor failures in distributed computing that are not crash or Byzantine | Question: There are two main types of processor failures in distributed computing models:
(1) Crash failures: a processor stops, and never starts again.
(2) Byzantine failures: processors behave adversarially, maliciously.
My question is:
What are some other types of processor failures that have been studied, that do not reduce to crash or Byzantine failures?
Also, a more specific question:
Has a model been studied where, with some probability, a process is on at time step $t$, and otherwise off? So each process is winking on and off, as it were.
I am most interested in how these failures relate to consensus and other distributed agreement problems.
Thank you.
Answer: Copied from the comments on the question, as requested.
I took a course on the theory of distributed computing with Michel Raynal, and he described a third model in which messages can be dropped randomly. In that model a message can silently fail to be delivered, but that does not necessarily mean the node has failed. This is the "fair lossy channel" model, which is about link failures rather than node failures. You can read more about it here: Quiescent Uniform Reliable Broadcast as an Introductory Survey to Failure Detector Oracles - Michel Raynal (ftp.irisa.fr/techreports/2000/PI-1356.ps.gz) | {
"domain": "cstheory.stackexchange",
"id": 1122,
"tags": "reference-request, dc.distributed-comp"
} |
Should a superconductor act as a perfect mirror? | Question: I have been told that metals are good reflectors because they are good conductors. Since an electric field in a conductor causes the electrons to move until they cancel out the field, there really can't be electric fields inside a conductor.
Since light is an EM wave, the wave cannot enter the conductor, and its energy is conserved by being reflected (I'm guessing in a similar fashion to a mechanical wave being reflected when it reaches a medium it cannot travel through, like a wave on a rope tied to a wall, for example).
I imagine then that more conductive materials are better reflectors of light. Wouldn't a perfect conductor then, such as a superconductor, be a perfect reflector of light? (Or reach some sort of reflective limit?)
Answer: Yes and no. Below the superconducting gap, a superconductor is a near-perfect reflector, and superconductivity has its say in it.
Reflectivity at normal incidence is given by the equation:
$$ R = \left| \frac{1-\sqrt{\varepsilon}}{1+\sqrt{\varepsilon}} \right|^2 $$
where $\varepsilon$ is the complex-valued frequency-dependent dielectric function of the reflective material. Let's look at the dielectric function of a superconductor above and below the superconducting transition temperature:
This is a plot of the real part of the optical conductivity (in arbitrary units) in the normal state (blue) and the superconducting state (orange). The relationship between the real part of the optical conductivity and the imaginary part of the dielectric function is given by $\varepsilon_0 \mathrm{Im}(\varepsilon) \omega = \mathrm{Re}(\sigma)$
The area under the curve must be conserved; therefore the missing part of the area is hidden in a delta function at zero frequency (we must take it into account to perform a Kramers-Kronig transformation properly). This is important, because the delta function in the conductivity (that's the manifestation of dissipationless dc current!) leads to a $-a/\omega^2$ term in the real part of the dielectric function. Large-magnitude values of the dielectric function give a reflection coefficient close to unity.
The other part of the dielectric function is $\mathrm{Re}(\varepsilon)$ and is obtained by doing a Kramers-Kronig transformation:
Now this can be plugged into the expression for reflectivity:
As you can see, and this is due to the vastly negative real part of the dielectric function, reflectivity below the gap is near 100%.
EDIT - actually, since the real part of $\varepsilon$ is negative, and within the superconducting gap the imaginary part is exactly zero (with caveats, such as s-wave vs. d-wave superconductors) $R$ would be exactly 100%.
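The reflectivity formula above makes this easy to check numerically; a minimal sketch (Python, with illustrative values of $\varepsilon$ that are not taken from any real material):

```python
import cmath

def reflectivity(eps):
    """Normal-incidence reflectivity R = |(1 - sqrt(eps)) / (1 + sqrt(eps))|^2."""
    n = cmath.sqrt(eps)  # complex refractive index
    return abs((1 - n) / (1 + n)) ** 2

# Below the gap: Re(eps) large and negative, Im(eps) exactly zero, so
# sqrt(eps) is purely imaginary; then |1 - n| == |1 + n| and R == 1.
print(reflectivity(-1e4 + 0j))   # ~ 1.0
# Any dissipation (nonzero Im(eps), as in a normal metal) pulls R below 1.
print(reflectivity(-1e4 + 50j))
```

A purely imaginary refractive index makes the numerator and denominator of the reflection amplitude complex conjugates of each other, which is exactly the "exactly 100%" statement of the edit.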
Above the gap, the reflectivity is actually slightly worse. But because the superconducting gap lies at energies far lower than those of visible light, the reflectivity for visible light is barely affected. As superconductors are often bad conductors in their normal state, their visible-light reflectivity leaves much to be desired. Stick to silver. | {
"domain": "physics.stackexchange",
"id": 68286,
"tags": "optics, electromagnetic-radiation, reflection, superconductivity"
} |
Sensitivity analysis of $MST$ edges | Question: I am working on the following exercise:
Consider an undirected graph $G = (V,E)$. Let $T^* = (V,E_{T^*})$ be a $MST$ and let $e$ be an edge in $E_{T^*}$. We define the set of all values that can be assigned to $w_e$ such that $T^*$ remains a MST as $I_e$.
Show that $I_e$ is an interval.
Devise an efficient algorithm to calculate $I_e$ for a given edge $e$.
Devise an efficient algorithm that determines all $I_e$ in one step. It should be more efficient than repeatedly using the algorithm from 2.
I did the following:
Consider an edge $e$ in $E_{T^*}$. Delete it from $G$ and find the new lowest weighted edge connecting the resulting components, say $e'$. The upper bound for $I_e$ is $w(e')$. The lower bound would be $-\infty$.
Use the algorithm sketched in 1.
I do not know what to do here. Could you help me?
Answer: The solution in short. For each edge $e$ not in the tree, check the path between its endpoints in the tree. The weight of each edge on this path is upper-bounded by the weight of $e$. Keep track of the smallest upper bound for each tree edge while iterating over all edges not in the tree. Following is a sketch of the correctness.
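As an aside, the bound-collection loop just described transcribes directly into code; a naive $O(mn)$ Python sketch (identifier names and the graph representation are my own choices, not from the answer):

```python
from collections import defaultdict
import math

def mst_edge_upper_bounds(edges, tree_edges):
    """For every MST edge e, find sup I_e: the minimum weight over all
    non-tree edges whose tree path between endpoints contains e.
    edges and tree_edges are lists of (u, v, w) triples."""
    adj = defaultdict(list)
    for u, v, _ in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def tree_path(s, t):
        # The path between s and t in a tree is unique; recover it via parents.
        parent = {s: None}
        stack = [s]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in parent:
                    parent[y] = x
                    stack.append(y)
        path = set()
        while t != s:
            path.add(frozenset((t, parent[t])))
            t = parent[t]
        return path

    upper = {frozenset((u, v)): math.inf for u, v, _ in tree_edges}
    for u, v, w in edges:
        if frozenset((u, v)) in upper:
            continue  # skip tree edges themselves
        for e in tree_path(u, v):
            upper[e] = min(upper[e], w)
    return upper  # I_e = (-inf, upper[e]] for each tree edge e

# Triangle example: MST = {(0,1), (1,2)}; the only non-tree edge has weight 5,
# so each tree edge's weight may be raised up to 5 before the MST changes.
edges = [(0, 1, 1), (1, 2, 2), (0, 2, 5)]
tree = [(0, 1, 1), (1, 2, 2)]
print(mst_edge_upper_bounds(edges, tree))
```

The correctness sketch that follows explains why these minima are exactly the right upper ends of the intervals $I_e$.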
Let us call an edge in a simple cycle Heavy for this cycle if its weight is maximum among all edges in the cycle. We claim that each edge $e$ in $G\setminus T^*$ is Heavy for some cycle in $G$.
Proof. Adding the edge $e$ to the tree creates a cycle together with the path in the tree between its endpoints. If there were an edge in this cycle with strictly larger weight, we could remove that edge and keep $e$, constructing a lighter tree, which contradicts the assumption that the given tree is minimum.
On the other hand, it is not hard to prove that an edge is in an MST of a given graph if it is not Heavy for any simple cycle containing it. The correctness of the sketched algorithm follows directly from this statement. | {
"domain": "cs.stackexchange",
"id": 14997,
"tags": "minimum-spanning-tree"
} |
adding shared objects to catkin package | Question:
Hi,
I'm writing a node that has to call some functions from a .so library. The node, called test_service_server, has to use functions from libtest.so. This lib is NOT a catkin lib; it was created outside of ROS entirely.
Inside my CMakeLists.txt I have the following:
include_directories( ${catkin_INCLUDE_DIRS} lib)
set(EXTRALIB_BIN ${PROJECT_SOURCE_DIR}/lib/libtest.so)
add_executable(test_service_server src/test_service_server.cpp)
target_link_libraries( test_service_server ${catkin_LIBRARIES} ${EXTRALIB_BIN})
I still get the error: no rule to make target.
Can someone give me a few tips about linking shared objects to catkin packages?
EDIT:
Just to explain my problem better:
I have a shared object lib with ca. 200 functions my node needs (in the /lib folder of the package).
These functions are declared in header files (ca. 10, in the /include folder of the package).
I can't get the source of this library to try and build it as a catkin lib.
How do I properly link my .so in CMakeLists.txt?
Originally posted by Reiner on ROS Answers with karma: 61 on 2015-05-08
Post score: 2
Original comments
Comment by Reiner on 2015-06-02:
FYI: found a solution:
target_link_libraries( test_service_server ${catkin_LIBRARIES} ${PROJECT_SOURCE_DIR}/lib/libtest.so.1)
This use of target_link_libraries works perfectly, in case someone stumbles upon the question.
Answer:
Even though @Wolf's answer should work for you, I'll add some references to earlier questions that asked the same thing:
Is it possible to create a catkin package to provide precompiled libraries?
export a prebuilt library in catkin
The accepted answer to the second question also has some links to pkgs that do the same thing.
Originally posted by gvdhoorn with karma: 86574 on 2015-05-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Reiner on 2015-05-11:
thanks, those two links were a huge help for me:) | {
"domain": "robotics.stackexchange",
"id": 21639,
"tags": "catkin"
} |
detecting/tracking nano particle | Question: I am new to nano-sized particles and am conducting a very simple experiment using nanoparticles. In my experiment I have magnetic nanoparticles in water, and they move due to an applied magnetic field. I am looking for a method to track or detect the motion of the nanoparticles. Is there any easy way?
Answer: There are a few ways to do nanoparticle tracking, although I'm not sure they will exactly apply to your system.
Dynamic light scattering or DLS has been used to get the distribution of nanoparticle sizes in solution/suspension over time. Generally, the data gives information on the ensemble, not single-particle tracking.
Nanoparticle tracking analysis or NTA also relies on scattered light, but gives frame-to-frame analysis of each particle's trajectory.
There are other related techniques... Another approach is to create an Anti-Brownian ELectrokinetic (ABEL) trap, which monitors the particle or protein dynamics and applies forces to push the particle back. This technique was developed by the Moerner lab at Stanford; see, for example, Acc. Chem. Res., 2012, 45 (11), pp 1955–1964 | {
"domain": "chemistry.stackexchange",
"id": 9360,
"tags": "nanotechnology, magnetism, electromagnetic-radiation"
} |
Extraction of Sn from SnO2 | Question: In the extraction of $\ce{Sn}$ from $\ce{SnO2}$ by carbon reduction method, why is $\ce{CO}$ formed instead of $\ce{CO2}$ as per the following reaction?
$\ce{SnO2 + 2C -> Sn + 2CO}$
Answer: Above $1000\ \mathrm K$, the $\Delta_\mathrm fG$ of $\ce{CO}$ formation from $\ce C$ is more negative than that of $\ce{CO2}$ formation from $\ce C$. Therefore, during smelting, when coke ($\ce C$) reacts with $\ce{SnO2}$, the formation of $\ce{CO}$ rather than $\ce{CO2}$ is thermodynamically preferable.
Below that temperature, the reduction with carbon is not possible, since $\Delta_\mathrm fG(\ce{SnO2})\lt\Delta_\mathrm fG(\ce{CO2})\lt\Delta_\mathrm fG(\ce{CO})$. Therefore, at all temperatures where this reduction is possible, carbon monoxide formation is thermodynamically more favourable than carbon dioxide formation. To see for yourself how $\Delta_\mathrm fG$ varies for various compounds, have a look at Ellingham diagrams, a link to a comprehensive set of which is given here.
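This argument can be laid out as an explicit combination of formation reactions; a sketch in LaTeX (using mhchem, with the free energies kept symbolic rather than numeric):

```latex
\begin{align*}
\ce{Sn + O2 -> SnO2}       &\qquad \Delta G_1 = \Delta_\mathrm{f}G(\ce{SnO2})\\
\ce{2C + O2 -> 2CO}        &\qquad \Delta G_2 = 2\,\Delta_\mathrm{f}G(\ce{CO})\\
\ce{SnO2 + 2C -> Sn + 2CO} &\qquad \Delta G   = \Delta G_2 - \Delta G_1
\end{align*}
```

The third line is the second minus the first, so the smelting reaction is feasible ($\Delta G < 0$) exactly when the line for $\ce{2C + O2 -> 2CO}$ lies below the line for $\ce{Sn + O2 -> SnO2}$ on an Ellingham diagram, which happens above roughly $1000\ \mathrm K$.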
$\Delta_\mathrm fG$ is the Gibbs free energy of formation. | {
"domain": "chemistry.stackexchange",
"id": 647,
"tags": "redox, metallurgy"
} |
Implementation of a Resizing Array Queue | Question: Below is a problem from Sedgewick's Algorithms and my solution. Any thoughts or suggestions for improvement would be much appreciated.
Develop a class that implements the queue abstraction with
a fixed-sized array, and then extend your implementation to use array
resizing.
package chapter_1_3_bagsQueuesStacks;
import java.util.Arrays;
import java.util.Iterator;
// Exercise 1.3.14 | pg. 163
public class ArrayQueue<E> implements Iterable<E> {
private E[] a = (E[]) new Object[1];
private int head;
private int tail;
private int N;
public boolean isEmpty() {
return N == 0;
}
private boolean isFull() {
return N == a.length;
}
public int size() {
return N;
}
private void resize(int cap) {
E[] temp = (E[]) new Object[cap];
int curr = head;
for (int i = 0; i < N; i++) {
temp[i] = a[curr];
if (curr == a.length-1) {
curr = 0;
} else {
curr++;
}
}
a = temp;
}
public void enqueue(E element) {
if (isFull()) {
resize(a.length*2);
head = 0;
tail = N-1;
}
if (isEmpty()) {
head = tail = 0;
} else if (tail == a.length-1) {
tail = 0;
} else {
tail++;
}
a[tail] = element;
N++;
}
public E dequeue() {
if (isEmpty())
throw new RuntimeException();
E element = a[head];
a[head] = null;
N--;
if (head == a.length-1) {
head = 0;
} else {
head++;
}
if (N == a.length/4) {
resize(a.length/2);
head = 0;
tail = N-1;
}
return element;
}
@Override
public Iterator<E> iterator() {
return new ArrayIterator();
}
private class ArrayIterator implements Iterator<E> {
int curr = head;
@Override
public boolean hasNext() {
return a[curr] != null;
}
@Override
public E next() {
E element = a[curr++];
return element;
}
}
@Override
public String toString() {
String formatStr = "HEAD: %s - TAIL: %s - %s";
return String.format(formatStr, this.head, this.tail, Arrays.toString(this.a));
}
}
Answer: Advice 1
private E[] a = (E[]) new Object[1];
I suggest that you start from a larger power of two (say 8), since it is not common to deal with only one element in a data structure. Also, a capacity that is a power of two will allow you to omit remainder operator % and use bit operations instead. Namely, say, index % size is the same as index & (size - 1) whenever size is a power of two.
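This identity is easy to sanity-check; a quick sketch (Python for brevity, even though the queue itself is Java):

```python
# When size is a power of two, size - 1 is a mask of the low bits
# (e.g. 8 - 1 == 0b111), and index % size simply keeps those low bits:
size = 8
for index in range(1000):
    assert index % size == index & (size - 1)

print(25 % 8, 25 & (8 - 1))  # both are 1

# The trick does NOT hold for other sizes: 7 % 6 == 1, but 7 & (6 - 1) == 5.
```

The same masking appears throughout the alternative implementation below as (head + i) & (array.length - 1).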
Advice 2
private int N;
Usually, Java developers start field names with a lower case character. However, I suggest that you rename it to size.
Advice 3
private void resize(int cap) {
E[] temp = (E[]) new Object[cap];
int curr = head;
for (int i = 0; i < N; i++) {
temp[i] = a[curr];
if (curr == a.length-1) {
curr = 0;
} else {
curr++;
}
}
a = temp;
}
You can write this as
private void resize(int capacity) {
@SuppressWarnings("unchecked")
E[] newArray = (E[]) new Object[capacity];
for (int i = 0; i < size; ++i) {
newArray[i] = array[(head + i) & (array.length - 1)];
}
this.array = newArray;
this.head = 0;
this.tail = size;
}
Note how the fields are updated in the method itself, so you don't have to repeat yourself at the call sites when growing or shrinking the array.
Advice 4
I suggest you add a modification count in your iterator so that it fails as soon as another thread interferes.
Alternative implementation
package chapter_1_3_bagsQueuesStacks;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Scanner;
public class ArrayQueue<E> implements Iterable<E> {
private static final int MINIMUM_CAPACITY = 4;
@SuppressWarnings("unchecked")
private E[] array = (E[]) new Object[MINIMUM_CAPACITY];
private int head;
private int tail;
private int size;
private int modificationCount;
public boolean isEmpty() {
return size == 0;
}
public int size() {
return size;
}
public void enqueue(E element) {
if (isFull()) {
resize(2 * array.length);
}
array[tail] = element;
tail = (tail + 1) & (array.length - 1);
size++;
modificationCount++;
}
public E dequeue() {
if (isEmpty()) {
throw new RuntimeException("ArrayQueue is empty.");
}
if (size < array.length / 4 && size >= 2 * MINIMUM_CAPACITY) {
resize(array.length / 2);
}
E element = array[head];
head = (head + 1) & (array.length - 1);
size--;
modificationCount++;
return element;
}
@Override
public Iterator<E> iterator() {
return new ArrayQueueIterator();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder("[");
String separator = "";
for (int i = 0; i < size; ++i) {
sb.append(separator);
separator = ", ";
sb.append(array[(head + i) & (array.length - 1)]);
}
return sb.append("] capacity = " + array.length).toString();
}
private boolean isFull() {
return size == array.length;
}
private void resize(int capacity) {
@SuppressWarnings("unchecked")
E[] newArray = (E[]) new Object[capacity];
for (int i = 0; i < size; ++i) {
newArray[i] = array[(head + i) & (array.length - 1)];
}
this.array = newArray;
this.head = 0;
this.tail = size;
}
private final class ArrayQueueIterator implements Iterator<E> {
private int iterated = 0;
private final int expectedModificationCount =
ArrayQueue.this.modificationCount;
@Override
public boolean hasNext() {
checkModificationCount();
return iterated < size;
}
@Override
public E next() {
if (!hasNext()) {
throw new NoSuchElementException(
"No more elements to iterate.");
}
return array[(head + iterated++) & (array.length - 1)];
}
private void checkModificationCount() {
if (expectedModificationCount !=
ArrayQueue.this.modificationCount) {
throw new ConcurrentModificationException();
}
}
}
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);
ArrayQueue<String> queue = new ArrayQueue<>();
for (int i = 0; i < 16; ++i) {
queue.enqueue("" + i);
}
while (true) {
String command = scanner.nextLine();
String[] tokens = command.split("\\s+");
switch (tokens.length) {
case 1:
switch (tokens[0].trim().toLowerCase()) {
case "print":
System.out.println(queue);
break;
case "pop":
System.out.println(queue.dequeue());
break;
case "quit":
return;
}
break;
case 2:
if (tokens[0].trim().toLowerCase().equals("push")) {
queue.enqueue(tokens[1].trim());
}
}
}
}
}
Hope that helps. | {
"domain": "codereview.stackexchange",
"id": 27506,
"tags": "java, algorithm, queue, circular-list"
} |
Analyzing network architecture with TDA | Question: I'm doing Neural Architecture Search (NAS) by varying the number of layers and neurons per layer for a neural network (connections are feed-forward throughout), and then training it on a fixed task to test its performance. The end goal is to perform topological data analysis (TDA) on the best and worst architectures (graphs) to spot the persistent structures in a good/bad architecture.
My question is: am I right in my assumption that TDA could spot such persistent structures? If so, which metric should I use? Just the connection weights?
P.S.: My background in (computational) topology is quite minimal.
Answer: For the previously interested and for those to come, I'd recommend reading through this: https://arxiv.org/abs/1812.09764. The authors open-source their code as well. AFAIK, this is limited to feed-forward networks. | {
"domain": "cs.stackexchange",
"id": 21342,
"tags": "machine-learning, network-topology"
} |
AlwaysUpdate attribute for Entity Framework Code First POCO | Question: Entity Framework will only save properties it detects have changed via its proxy classes. I have a situation where I want a property to always be saved, whether it has changed or not.
I wrote a blank attribute called AlwaysUpdate, which I apply to the property. I then overrode SaveChanges to scan for these attributes and mark the field as Modified. Is this code all right?
public override int SaveChanges()
{
var changeSet = ChangeTracker.Entries();
if (changeSet != null)
{
foreach (var entry in changeSet.Where( c=> c.State == System.Data.EntityState.Modified ))
{
foreach (var prop in entry.Entity.GetType().GetProperties().Where(p => p.GetCustomAttributes(typeof(AlwaysUpdateAttribute), true).Count() > 0).Select( p => p.Name ))
{
if (entry.State == System.Data.EntityState.Added || entry.State == System.Data.EntityState.Unchanged) continue;
// Try catch removes the errors for detached and unchanged items
try
{
entry.Property(prop).IsModified = true;
}
catch { }
}
}
}
return base.SaveChanges();
}
Answer:
Instead of checking .Count() > 0, which needs to iterate over the whole IEnumerable, you can just use .Any(), which only checks whether the IEnumerable contains at least one item.
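The short-circuiting point generalizes beyond LINQ; the sketch below (Python, purely illustrative) counts how many elements each approach actually inspects:

```python
probed = []

def is_modified(x):
    # Record every element the predicate actually inspects.
    probed.append(x)
    return x >= 3

items = [1, 2, 3, 4, 5]

count = sum(1 for x in items if is_modified(x))  # inspects all 5 items
calls_for_count = len(probed)

probed.clear()
found = any(is_modified(x) for x in items)       # stops at the first hit, x == 3
print(calls_for_count, len(probed))  # 5 3
```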
you should extract the result of typeof(AlwaysUpdateAttribute) to a variable.
Type desiredType = typeof(AlwaysUpdateAttribute);
you can restrict the outer loop by adding the "continue condition" to the Where() method. For readability we will do this outside of the loop
changeSet = changeSet.Where( c=>
c.State == System.Data.EntityState.Modified
&& !(c.State == System.Data.EntityState.Added
|| c.State == System.Data.EntityState.Unchanged)
);
you shouldn't shorten any variable names -> prop should be propertyName
Refactoring
Implementing all above will lead to
public override int SaveChanges()
{
var changeSet = ChangeTracker.Entries();
if (changeSet != null)
{
changeSet = changeSet.Where( c=>
c.State == System.Data.EntityState.Modified
&& !(c.State == System.Data.EntityState.Added
|| c.State == System.Data.EntityState.Unchanged)
);
Type desiredType = typeof(AlwaysUpdateAttribute);
foreach (var entry in changeSet)
{
foreach (var propertyName in entry.Entity.GetType().GetProperties().Where(p => p.GetCustomAttributes(desiredType , true).Any()).Select( p => p.Name ))
{
// Try catch removes the errors for detached and unchanged items
try
{
entry.Property(propertyName).IsModified = true;
}
catch { }
}
}
}
return base.SaveChanges();
}
| {
"domain": "codereview.stackexchange",
"id": 11199,
"tags": "c#, entity-framework, poco"
} |
Training own classifiers using trainCascadeObjectDetector for face detection | Question: I want to train a cascade classifier for face detection. I read this page. But I won't use the trainingImageLabeler app, because I already have a face and non-face database sized 24x24. How do I use the trainCascadeObjectDetector function with my own database?
Is there any example except this you know?
Answer: If you want to train the classifiers with your own database, you will only need the 'trainCascadeObjectDetector' function; feed your images into the proper arguments (positive and negative images). The output classifier will be written to your 'outputXMLFilename' as in traincascadeobjectdetector
trainingImageLabeler is a helpful function for labeling positive/negative images. Try it later when you work with a larger database. | {
"domain": "dsp.stackexchange",
"id": 2441,
"tags": "matlab, computer-vision, machine-learning, face-detection, matlab-cvst"
} |
Stars in star clusters in SMC and LMC | Question: Is there any catalog, or any paper published in a journal, that lists the discovered stars of the Small Magellanic Cloud (SMC) under whichever star cluster they belong to? There is one for the Large Magellanic Cloud (LMC) in the paper Efremov 2003, "Cepheids in LMC clusters and the period-age relation", but I can't find any for the SMC.
It would be better still if the stars happen to be Cepheids.
Answer: Finally, after quite a deal of searching, I found a paper that aptly answers the question.
The paper, published in 1999 by Pietrzynski and Udalski in Acta Astronomica, lists the Cepheids in the star clusters of the Magellanic Clouds. | {
"domain": "astronomy.stackexchange",
"id": 443,
"tags": "star, star-cluster, cepheids, dwarf-galaxy, magellanic-cloud"
} |
Mitotic crossover happens in G1? | Question: I was reading this article on Wikipedia and came across this:
It has been suggested that recombination takes place during G1, when
the DNA is in its 2-strand phase, and replicated during DNA synthesis.
For example: allele A is present on chromosome 1 and allele a on its homolog 1', and recombination happens such that a is now on 1 and A on 1', and then both get duplicated during S phase.
So how are the daughter cells going to be different from one another and from the mother cell in terms of phenotype? (obviously the alleles around these alleles are going to change)
How can this result in twin spotting in drosophila or any other trait occurring due to coming together of homozygous alleles by mitotic crossovers ?
Answer: First off, this is called genetic mosaicism, and indeed mitotic recombination is a contributing factor.
Mitotic crossover events involve the exchange, by homologous recombination, of regions of chromosomes. About 60% of homologous recombination events might occur during G1, and 40% occur after the chromosomes are replicated (see this paper). For twin spotting to occur, you need the homologous recombination to happen on replicated chromosomes (i.e. you need 4 chromatids).
For you question on twin spotting. The following picture illustrates well what are the outcomes of homologous recombination involving two alleles and replicated chromosomes (each line being a chromatid).
If the cell is heterozygous (normal phenotype) for the two recessive alleles, here called y and sn (respectively the "yellow" and "singed bristles" alleles), then two recombination scenarios are possible, and they differ based on where the recombination occurred. Note that y+ and sn+ are the wild-type alleles, and carrying those alleles does not produce a particular phenotype. The phenotypes of the resulting daughter cells are:
Twin spots: Yellow but not singed spots (y/sn+) and singed but not yellow spots (y+/sn) for recombination after the y/sn locus
Single yellow spots: Yellow but not singed spots (y/sn+) and no spots (y+/sn+) for recombination between y and sn
Here is the actual picture of what happens:
Therefore this should make clear why daughter cells can show phenotypes that differ from the original parent cell, i.e. the combination of alleles (especially for recessive ones) changes after mitotic crossover events. The twin spot phenotype in Drosophila melanogaster is an excellent example for illustrating that. | {
"domain": "biology.stackexchange",
"id": 3694,
"tags": "dna, mitosis, recombination, allele"
} |
Thermodynamic limit of a system of atoms | Question: Consider $N$ atoms, each with magnetic moment $\mu$ in a zero field $H = 0$. Given the assumption that each moment is equally likely, what is the magnetization in the thermodynamic limit?
The idea of magnetization is fairly new to me, but from reading online, I think that it is analogous to density. Furthermore, I know that the thermodynamic limit is when $N \rightarrow \infty$ and $V\rightarrow \infty$ with $N/V$ held constant.
I know that magnetization is defined as $dm/dV$, where $m$ is the elementary magnetic moment, and $V$ is the volume. When $V \rightarrow \infty$, the denominator of the $dm/dV$ term approaches $0$; so, is my answer just $0$?
Answer: First of all, since the dipole distribution is uniform, you can rewrite the magnetization as $m/V$, where by $m$ I mean the total magnetic moment of the $N$ atoms. Without losing any generality, let us assume that the magnetic field $\vec H$ is applied along the $z$ axis. Then the part of the Hamiltonian that depends on the magnetic field (the energy of a dipole in a field is minus the scalar product of the field and the dipole) is:
\begin{equation}
\mathcal{H_m}= -\mu H_z\sum\limits_{i=1}^N \cos{\theta_i}.
\end{equation}
The total magnetization is (see eq. (6.1.12) of Schwabl, "Statistical Mechanics", or eq. (52.1) of Landau and Lifshitz, "Statistical Physics, Part 1, Vol. 5", for a discussion of the Hamiltonian and magnetization):
\begin{equation}
m=-\langle{\partial \mathcal{H_m}}/{\partial H_z}\rangle=\mu \sum\limits_{i=1}^N \langle\cos{\theta_i}\rangle=\frac{\mu N}{\pi}\int\limits_0^\pi d\theta \, \cos{\theta}=\frac{\mu N}{\pi}\big[\sin{\theta}\big]_0^\pi=0,
\end{equation}
where I've used $N\gg1$ to justify rewriting the sum as an integral ($\pi/N$ becomes $d\theta$, with $\theta$ uniformly distributed on $[0,\pi]$). Since $\int_0^\pi \cos{\theta}\,d\theta = 0$, the desired magnetization $m/V$ is $0$ in the thermodynamic limit, exactly as the symmetry of the zero-field problem suggests. | {
"domain": "physics.stackexchange",
"id": 51998,
"tags": "electromagnetism, thermodynamics"
} |
Converting any PHP function toString() like in JS | Question: In JavaScript, any function is basically an object on which you can call (function(){}).toString() to get its underlying code as a string.
I'm working on a function aimed at doing the same job in PHP. The intended goal is to convert code from PHP into other languages, such as JavaScript.
It looks like this so far:
function fn_to_string($fn, $strip_comments = true) {
static $contents_cache = array();
static $nl = "\r\n"; # change this to how you want
if(!is_callable($fn)) return ''; # it should be a function
if(!class_exists('ReflectionFunction')) return ''; # PHP 5.1 I think
# get function info
$rfn = new ReflectionFunction($fn);
$file = $rfn->getFileName();
$start = $rfn->getStartLine();
$end = $rfn->getEndLine();
if(!is_readable($file)) return ''; # file should be readable
# cache file contents for subsequent reads (in case we use multiple fns defined in the same file)
$md5 = md5($file);
if(!isset($contents_cache[$md5]))
$contents_cache[$md5] = file($file, FILE_IGNORE_NEW_LINES);
if(empty($contents_cache[$md5])) return ''; # there should be stuff in the file
$file = $contents_cache[$md5];
# get function code and tokens
$code = "<?php ". implode($nl, array_slice($file, $start-1, ($end+1)-$start));
$tokens = token_get_all( $code);
# now let's parse the code;
$code = '';
$function_count = 0;
$ignore_input = false; # we use this to get rid of "use" or function name
$got_header = false;
$in_function = false;
$braces_level = 0;
foreach($tokens as $token){
# get the token name or string
if(is_string($token)){
$token_name = $token;
}elseif(is_array($token) && isset($token[0]) ){
$token_name = token_name($token[0]);
$token = isset($token[1]) ? $token[1] : "";
}else{
continue;
}
# strip comments
if( 1
&& $strip_comments
&& ($token_name == "T_COMMENT" || $token_name == "T_DOC_COMMENT" || $token_name == "T_ML_COMMENT")
){
# but put back the new line
if(substr($token,-1) == "\n")
$code.=$nl;
continue;
}
# let's decide what to do with it now
if($in_function){
# nesting level
if($token_name == "{"){
$braces_level++;
# done ignoring `use`
$ignore_input = false;
}
# append
if( 1
&& $function_count==1
&& ( 0
# skip function names
|| ( $ignore_input && $token_name == "(" && !$got_header && (!($ignore_input=false)) )
# skip function () use (...) in closures functions
|| ( $braces_level == 0 && !$got_header && $token_name == ")" && ($ignore_input=true) && ($got_header=true) )
# this fall-through is intentional
|| !$ignore_input
)
) {
$code .= $token;
}
# ending "}"
if($token_name == "}"){
$braces_level--;
# done collecting the function
if($braces_level == 0)
$in_function = false;
}
}elseif($token_name == "T_FUNCTION"){
$function_count++;
$in_function = true;
$ignore_input = true;
$braces_level = 0;
$code.=$token;
# we can't detect this properly so bail out
if($function_count>1){
$code = '';
break;
}
}
}
return $code;
}
The function uses the ReflectionFunction class to determine where the passed function was declared, and token_get_all() to process the different parts of the declaration.
This works as intended:
Handles function names passed as strings
Handles variable functions
Handles closures and lambdas
Can even handle itself
Can strip out comments
However,
It relies on the undocumented-yet class, ReflectionFunction
Fails if it can't read its own source files
Fails if there are multiple functions declared on the same line(s) where the passed function was declared:
function a(){} function b(){} fn_to_string('a'); // fails
Cannot determine scope or context so it strips out function names and the use keyword to avoid future problems
I'm trying to determine if something like this is ready for the real world, so my questions are:
Are there any reasons for which using this approach may not be a good idea?
Are there any foreseeable performance issues?
Are there any better alternatives?
Are there any overlooked cases which the function doesn't cover?
Are there server settings in which a script may not be able to read itself
is_readable(__FILE__)===false
Answer:
It relies on the undocumented-yet class, ReflectionFunction
ReflectionFunction is not undocumented.
Fails if it can't read its own source files
Seems unavoidable.
Fails if there are multiple functions declared on the same line(s) where the passed function was declared
Why not just return everything in the range of lines ReflectionFunction gives us, since that's as accurate as it can get? Which leads to the inevitable question, what is the use case for all of this?
Cannot determine scope or context so it strips out function names and the use keyword to avoid future problems
What problems? If you explained how you planned to use this, that might make more sense (context!).
Are there any reasons for which using this approach may not be a good idea?
Can you first explain what reasons you are doing this for in the first place?
Are there any foreseeable performance issues?
Parsing the function (token_get_all) seems unnecessary when all you want to do is get the function's source.
Using the MD5 hash of the filename as a cache key seems unnecessary, why not just use the filename itself?
Are there any better alternatives?
Are there any overlooked cases which the function doesn't cover?
Maybe, depending on your intended use case.
Are there server settings in which a script may not be able to read itself
is_readable(__FILE__)===false
Code run in the PHP interactive shell is one case where is_readable(__FILE__)===false.
Some miscellaneous notes:
Using # for comments is unusual; you usually only see # used in the shebang line.
Reusing the $file variable for both a file name and an array of lines from the file is confusing.
Stripping comments does not seem necessary given your stated goal (acting like JavaScript's Function.prototype.toString).
Without knowing more about your intended use case, I'd suggest something much simpler, along the lines of:
function fn_to_string($fn) {
$r = new ReflectionFunction($fn);
$file = $r->getFileName();
if (!is_readable($file)) {
return '';
}
$lines = file($file);
$start = $r->getStartLine() - 1;
$length = $r->getEndLine() - $start;
return implode('', array_slice($lines, $start, $length));
} | {
"domain": "codereview.stackexchange",
"id": 8020,
"tags": "javascript, php, strings, functional-programming, converting"
} |
Can the axial length of the human eye decrease? | Question: I understand that the axial length of the eyeball grows until you are around 20 years of age, which is why hypermetropia decreases with age but myopia doesn't. My question is: can the axial length of the eyeball decrease, and does it do so naturally?
I know that certain conditions can cause the axial length to decrease, such as very low pressure or disease, but my question is regarding a normal healthy eye.
Weihua Meng, Jacqueline Butterworth, François Malecaze, Patrick Calvas; Axial Length of Myopia: A Review of Current Research. Ophthalmologica 1 March 2011; 225 (3): 127–134. https://doi.org/10.1159/000317072
This study by Meng et al. (2011) says that with current research we KNOW that the axial length decreases with age. The same point was also made in a different study based on the idea of an emmetropizing mechanism for the adult eye, but many people have criticised that study, so apparently its findings should be taken with a grain of salt. If that is the case, though, why do many articles still say that the axial length reduces with age? Is there another study that shows this?
Scott A. Read, Michael J. Collins, Beata P. Sander; Human Optical Axial Length and Defocus. Invest. Ophthalmol. Vis. Sci. 2010;51(12):6262-6269. doi: https://doi.org/10.1167/iovs.10-5457.
This study by Read et al. (2010) basically used lenses to create hyperopic and myopic defocus in participants, and measured the axial length of the eyeball after exposure to the blur. They found that the axial length did decrease in the case of myopic defocus and increased in the case of hyperopic defocus, in order to create a clearer image by focusing the image on the retina. But if this article provides definitive proof that the eye does in fact change its axial length, then why is it that people still say it doesn't happen?
Also do we know which mechanisms allow the eye to determine whether it is myopic defocus or hyperopic defocus that is presented to the retina?
Lastly, if the eye's axial length is able to reduce wouldn't myopia have a cure or at least a method to reduce it based on the reduction of the axial length of the eyeball?
https://www.quora.com/How-can-the-axial-length-of-an-eye-decrease-as-they-say-that-it-does-with-ageing
Answer: Generally the axial length doesn't decrease.
Although there is discussion of various conditions where it has been observed to decrease, i.e. nanophthalmos, microphthalmos, and retinoblastoma... A review of current research is here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5501611/ quote:
It is considered that the axial length reaches adult length by the age of 13 years, showing no further increase in length. Recent cross-sectional studies found that the axial length in older adults tended to be shorter, suggesting that it may decrease with aging. However, in a recent longitudinal study by He et al., slight increase of the axial length in adults (mean refractive error, −0.44 ± 2.21 diopters) was reported. Conversely, an increase in the axial length in adults with highly myopic eyes is common and has been previously shown in longitudinal studies [5–7]. However, to the best of our knowledge, there have been no studies directly comparing the increase in the axial length in adults with non-highly myopic eyes or highly myopic eyes. In addition, because previous studies on the axial length in highly myopic eyes have not compared between eyes with and without macular complications, their influence on the axial length remains unknown.
Can myopia be treated with a reduction in axial length? Presumably yes, if opticians had figured out a way to do it. Do you suggest that a drug or a medical intervention could flatten the entire eyeball a bit? Perhaps a drug could, but it would shrink the entire eyeball, and you'd have to find a way to administer it. Apparently doctors haven't researched that option, but perhaps they will in future. Laser currently seems like a better way.
https://www.google.com/search?hl=en&biw=1706&bih=951&sxsrf=ALeKk01GuiHpl6-_fRp-Bg_UdsYOdqVzbg%3A1596509065944&ei=icsoX9aRObLMgweX6KboCQ&q=%22decrease+in+axial+length%22+of+optical
https://www.google.com/search?hl=en&biw=1706&bih=951&sxsrf=ALeKk03b75zOsA6CN_Ypqdv5v7hqfVWLbA%3A1596509091797&ei=o8soX8ySMIyqUP2dtvgI&q=%22reduction+in+axial+length%22+of+optical&oq=%22reduction+in+axial+length%22+of+optical | {
"domain": "biology.stackexchange",
"id": 10792,
"tags": "human-biology, human-anatomy, brain, development, human-eye"
} |
Climate change impact / risk assessment | Question: I work for a local council in the UK. We want to include climate change as a mandatory consideration in all of our decision making. This would require project managers to look at the risk (what adaptation measures we need to consider) and impact (how the project would affect our emissions budget) of any large decisions.
I expect that this has been done before, but I can't find a guide or discussion of it anywhere. Please can you suggest sources of information / existing implementations?
Sorry if this isn't the most appropriate place to post this, if anybody knows of a more appropriate forum, I'd be happy to go there instead!
Answer: The UK Government already publishes information on this at the national level, which may be a useful guide for how to proceed at the local government level.
The last UK Climate Change Risk Assessment was published in 2017 and the next one is imminent in mid-2021. They follow risk assessment methods set out in things like the HM Treasury Green Book, which, despite the name, is not about environmental assessment specifically (there are Orange, Aqua and Magenta Books too). The Green Book
has a shorter, climate-specific supplement that may be a good place to start. In particular, Annex B of the supplement has lots of links to other resources and tools for doing the assessment.
There’s also a local government website that hosts a slightly random selection of plans that various regions have put together over the last few years.
This all looks like a daunting amount of paperwork to a scientist like me, so good luck! | {
"domain": "earthscience.stackexchange",
"id": 2175,
"tags": "climate-change"
} |
Motion of a rod struck at one end | Question: Imagine a strong metal rod of uniform density and thickness floating in a weightless environment. Imagine it lies on an X-Y plane, with one end (A) lying at 0,0, and the other end (B) at 0,1. Then it is struck at A in the direction of increasing X. Please describe the trajectory of the bar. In particular, will the point B move in the opposite direction (i.e. decreasing X) momentarily? Is there a website or article that goes through the physics of this problem?
EDIT: some specific questions: What would be the trajectory of the centre of gravity? Would it just travel horizontally along the line y = 0.5? What would be the ratio of the velocity of the centre of gravity to the rotational velocity of the ends of the bar?
EDIT: Seven years later I find this! https://www.youtube.com/watch?v=uNPDrLhXC9k
Answer: You need Euler's laws of motion for Rigid Bodies.
Sum of vector force impulse acting on a rigid body equal to the change of linear momentum through the center of gravity.
Sum of vector torque impulse acting on the center of gravity of a rigid body equal to the change of angular momentum.
If the hammer impulse is $\vec{J}$ along the $\hat{x}$-axis, the location of $A$ is $\vec{r}_A = (0,0,0)$ and the center of gravity $C$ is $\vec{r}_C = (0,\frac{L}{2},0)$, where $L$ is the length of the rod, then the above equations are
$$ \vec{J} = m \Delta \vec{v}_C $$
$$ (\vec{r}_A-\vec{r}_C)\times\vec{J} = I_C\,\Delta\vec\omega $$
The change in motion of point $B$ is $$\Delta\vec{v}_B = \Delta\vec{v}_C + \Delta\vec{\omega}\times (\vec{r}_B-\vec{r}_C ) $$ and point $A$ is $$\Delta\vec{v}_A = \Delta\vec{v}_C + \Delta\vec{\omega}\times (\vec{r}_A-\vec{r}_C ) $$
If you put it all together then you will find the motions of different parts of the rod (at least in direction and sign).
Note that the mass moment of inertia $I_C$ for a thin slender uniform rod of mass $m$ and length $L$ is $I_C = \frac{m}{12} L^2 $.
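A quick numeric check of these formulas in standard-library Python (with $J$, $m$ and $L$ set to 1 for illustration):

```python
# Impulse J along +x applied at end A of a uniform rod A=(0,0,0), B=(0,L,0).
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v): return tuple(ui - vi for ui, vi in zip(u, v))
def add(u, v): return tuple(ui + vi for ui, vi in zip(u, v))

m, L = 1.0, 1.0
J = (1.0, 0.0, 0.0)
rA, rB, rC = (0.0, 0.0, 0.0), (0.0, L, 0.0), (0.0, L / 2, 0.0)
Ic = m * L**2 / 12                      # thin slender rod

dvC = tuple(Ji / m for Ji in J)         # linear:  J = m * dvC
tau = cross(sub(rA, rC), J)             # angular: (rA - rC) x J = Ic * dw
dw = tuple(ti / Ic for ti in tau)

dvA = add(dvC, cross(dw, sub(rA, rC)))
dvB = add(dvC, cross(dw, sub(rB, rC)))

# The struck end A moves at 4 J/m, the centre at J/m, and the far end B
# initially moves the OPPOSITE way at -2 J/m.
assert abs(dvA[0] - 4.0) < 1e-9
assert abs(dvC[0] - 1.0) < 1e-9
assert abs(dvB[0] + 2.0) < 1e-9
```

So yes: immediately after the blow at $A$, point $B$ does move in the direction of decreasing $x$, and the centre of gravity translates along $y = L/2$ at $J/m$.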
Here is a url of a website that describes what you need to solve this kind of problem.
"domain": "physics.stackexchange",
"id": 3615,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics"
} |
DRCSim 1.2/1.3 With Gazebo Crashes often when moving the base using cmd_vel topic | Question:
Using either DRCSim 1.2 or 1.3 compiled from source with Gazebo 1.3.1 I experience gazebo exiting when I move the robot's pinned pelvis around using the /cmd_vel topic. (1.3 is /atlas/cmd_vel)
I've noticed that this seems to usually happen when the robot is slowing down and almost stopped, but that is probably just a coincidence. I've tried running gazebo in gdb but it just says that the process exits cleanly, so I can't get any sort of stack trace.
Edit: I also think its worth mentioning that the Hokuyo is spinning while the robot is moving around.
Edit2: Okay, I found the reason I wasn't getting a stacktrace is because the gdb was attached to the gzclient and not gzserver (since I was running gdb gazebo, and it forks both off). I've been able to generate the crash with the stack trace posted here: http://pastebin.com/LYv3tt8t
Edit3: I'm pretty sure this is a bug, and I've reported it. https://bitbucket.org/osrf/gazebo/issue/339/laserscan-causes-gazebo-to-crash-when
The crash is in heightfield.cpp, and I should mention that I am using the drc_sim_v0.launch launch file in atlas_utils.
Originally posted by jhoare on Gazebo Answers with karma: 15 on 2013-01-03
Post score: 1
Answer:
It's become a filed bug: https://bitbucket.org/osrf/gazebo/issue/339/laserscan-causes-gazebo-to-crash-when
Originally posted by gerkey with karma: 1414 on 2013-01-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 2895,
"tags": "drcsim, gazebo-1.3"
} |
Suggestions for improving error handling | Question: I am interested in finding out what is the correct way of implementing error handling for this situation in C#.
The system tries to do an operation. If the operation succeeds (returns a non-error code), the system logs in a log database that the operation was successful. If the operation returns an error code or something causes an exception before the operation, the system will log an error. However, if an error is caught while logging, the system shouldn't log an error in the database.
I am using C#. The following code is what I have used until now, but I don't know if it is the best practice in this situation.
int response = -1; //some error code
try
{
//some code to prepare the operation - may cause exceptions
response = DoOperation();
//some code to clean after the operation - may cause exceptions
}
catch
{
//error handling
}
try
{
if (response > 0) //non-error code
//log event in database
else
//log event error in database
}
catch
{
//logging error handling
}
Do you have any suggestions for improving my code?
Note: The catch blocks in the original code include handling for specific errors; I just used general catch blocks for simplicity in my question.
Answer: You could combine the try-catch blocks:
int response = -1; //some error code
var logging = false;
try
{
//some code to prepare the operation - may cause exceptions
response = DoOperation();
//some code to clean after the operation - may cause exceptions
logging = true;
if (response > 0) //non-error code
//log event in database
else
//log event error in database
}
catch
{
if(logging)
//logging error handling
else
//error handling
}
Even better, if you know the logging could throw a particular type of exception that the other operations will not, then catch just that exception and handle it as a logging exception, and let all others trickle down to a general catch:
int response = -1; //some error code
try
{
//some code to prepare the operation - may cause exceptions
response = DoOperation();
//some code to clean after the operation - may cause exceptions
if (response > 0) //non-error code
//log event in database
else
//log event error in database
}
catch(LoggingSpecificException)
{
//logging error handling
}
catch
{
//error handling
}
As far as the general strategy, it doesn't look too bad. Getting a "return status code" from a method is more than a little outdated (try-throw-catch was designed to replace this style of method return), but there's a lot of "legacy code" out there, and some built-in methods still return values that indicate failure. The return value of DoOperation(), if any, should be its conceptual "product"; data produced by DoOperation as the result of a computation. If DoOperation, conceptually, doesn't make any such calculation, it would be better conceptually (and thus from an understandability standpoint) to return void and instead throw exceptions on errors. However, you'd have to throw and catch multiple types of exceptions, or keep track of multiple states instead of just "logging".
Also, I don't know if you simply omitted it for the operation, but I would think that knowledge of exactly what went wrong might be good to know, and so any general catch should be catch(Exception ex) or similar to allow use of exception data when handling the error. | {
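The same "catch the logging-specific exception first, let everything else fall through" structure can be sketched outside C# as well. In this Python analogue all names (LoggingError, do_operation, log_event) are hypothetical stand-ins for the poster's operations:

```python
# Catch the logging-specific exception type first so it gets its own
# handler; every other failure falls through to the general handler.
class LoggingError(Exception):
    pass

events = []

def do_operation():
    return 1                                      # a non-error status code

def log_event(success):
    events.append("ok" if success else "error")   # may raise LoggingError

def run():
    try:
        response = do_operation()
        log_event(response > 0)
    except LoggingError:
        events.append("logging failed")           # logging error handling
    except Exception:
        events.append("operation failed")         # general error handling

run()
assert events == ["ok"]
```

The key design point carries over directly: exception handlers are tried in order, so the more specific type must appear before the catch-all.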
"domain": "codereview.stackexchange",
"id": 2130,
"tags": "c#, exception, error-handling"
} |
Complex signal low pass filter | Question: I'm trying to implement a low pass filter in python for a complex signal but the output doesn't look right. I've created a simple example below where I've mixed a 15Hz complex sine wave and a 30Hz complex sine wave so that I get a signal with components (30+15=45Hz and 30-15=15Hz).
What I'm trying to do is filter this signal with a low pass filter to just give the resultant 15Hz component. When I look at the plot of the filtered signal though the imaginary component seems to be almost identical (same phase) as the real signal. It doesn't look right.
Example of code to reproduce then plot below:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, lfilter
import scipy.signal as sig
#Sample rate.
rate = 5000
# Create a 15 Hz complex signal.
t = np.arange(0,5000)/rate
f1 = 15
msg = np.exp(1j* (2*np.pi * f1 * t))
# Create a 30 Hz complex signal.
f2 = 30
bb_lo = np.exp(1j * (2*np.pi * f2 * t))
# Create a low pass filter to remove the upper frequency.
nyq = 0.5 * rate
cut = 30
low = cut / nyq
b, a = sig.butter(5, low, btype='low')
# Mix the low frequency and high frequency signal together.
# Will result in a signal with freq components of f1+f2 and f2 - f1.
# Then apply the filter to real and imaginary parts.
signalFilt = lfilter(b,a , bb_lo.real*msg.real) + 1j*lfilter(b,a , bb_lo.imag*msg.imag)
plt.plot(signalFilt.real)
plt.plot(signalFilt.imag)
plt.grid()
plt.show()
The frequency of the filtered signal looks right at approximately 15 Hz, but I don't think the real and imaginary parts in the plot should be the same. What am I doing wrong here?
Thanks
Answer: You're not actually mixing two complex exponentials, because otherwise you'd get a complex exponential with the sum of the frequencies of the individual complex exponentials. That was already mentioned in a comment by Marcus Müller.
What you are doing is compute a signal with real part $\cos(\omega_0t)\cos(\omega_1t)$ and with imaginary part $\sin(\omega_0t)\sin(\omega_1t)$. If you lowpass filter that signal, only the components with the difference of the two frequencies remain, but these components are the same for the real and for the imaginary part, because
$$\cos(\omega_0t)\cos(\omega_1t)=\frac12\big[\cos[(\omega_0-\omega_1)t]+\cos[(\omega_0+\omega_1)t]\big]\tag{1}$$
and
$$\sin(\omega_0t)\sin(\omega_1t)=\frac12\big[\cos[(\omega_0-\omega_1)t]-\cos[(\omega_0+\omega_1)t]\big]\tag{2}$$
Clearly, the components with the difference frequency are the same.
Also, if you want to filter out a component with frequency $\omega_x$ (in your case the component with the sum frequency), it is not wise to use a filter with cut-off frequency $\omega_c=\omega_x$ because the attenuation at $\omega_c$ is usually quite modest, in your case $3$dB. Choose a frequency that is much closer to the lower frequency component to get a decent attenuation of the higher frequency component. | {
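A standard-library sketch of the point above: proper complex mixing is a single complex multiply, while the question's "real×real, imag×imag" construction necessarily leaves identical low-frequency content in both parts, per Eqs. (1) and (2):

```python
import cmath
import math

w0, w1 = 2 * math.pi * 30, 2 * math.pi * 15
for t in (0.0, 0.001, 0.0123, 0.2):
    # Correct mixing: a complex multiply gives ONE tone at the sum frequency.
    proper = cmath.exp(1j * w0 * t) * cmath.exp(1j * w1 * t)
    assert abs(proper - cmath.exp(1j * (w0 + w1) * t)) < 1e-9

    # What the question's code builds instead:
    re = math.cos(w0 * t) * math.cos(w1 * t)   # real part:  real * real
    im = math.sin(w0 * t) * math.sin(w1 * t)   # imag part:  imag * imag
    lo = 0.5 * math.cos((w0 - w1) * t)         # difference-frequency term
    hi = 0.5 * math.cos((w0 + w1) * t)         # sum-frequency term
    assert abs(re - (lo + hi)) < 1e-9          # Eq. (1)
    assert abs(im - (lo - hi)) < 1e-9          # Eq. (2)
    # After a lowpass removes `hi`, both parts reduce to the SAME `lo`.
```

In the original NumPy code, one way to get only the difference-frequency component is to mix with the conjugate, e.g. `np.conj(bb_lo) * msg`, which produces the $f_1 - f_2$ exponential directly and leaves no sum term to filter out.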
"domain": "dsp.stackexchange",
"id": 9296,
"tags": "filters, lowpass-filter, complex"
} |
catkin_make returns the error '/usr/bin/ld: cannot find -lRTIMULib' | Question:
I'm attempting to catkin_make the imu package from this link
But I'm getting the error:
/usr/bin/ld: cannot find -lRTIMULib
As far as I can tell, the library is included and linked in the CMakeLists.txt
Does the location of the library file matter? It's currently in ~/catkin_ws/src/i2c_imu-master/src
Thanks for any help; the CMakeLists.txt is as follows:
cmake_minimum_required(VERSION 2.8.3)
project(i2c_imu)
find_package(catkin REQUIRED COMPONENTS
sensor_msgs
roscpp
tf
angles
)
find_library(RTIMULib libRTIMULib.so)
message(STATUS "RTIMULib: ${RTIMULib }")
catkin_package(
CATKIN_DEPENDS sensor_msgs roscpp tf angles
)
include_directories(
${catkin_INCLUDE_DIRS}
)
add_executable(i2c_imu_node src/i2c_imu_node.cpp)
target_link_libraries(i2c_imu_node
RTIMULib
${catkin_LIBRARIES}
)
EDIT:
The output of catkin_make after adding message(STATUS "RTIMULib: ${RTIMULib }") after find_library(RTIMULib ..): in the CMakeList.txt.
Base path: /home/nvidia/catkin_ws
Source space: /home/nvidia/catkin_ws/src
Build space: /home/nvidia/catkin_ws/build
Devel space: /home/nvidia/catkin_ws/devel
Install space: /home/nvidia/catkin_ws/install
####
#### Running command: "cmake /home/nvidia/catkin_ws/src -
DCATKIN_DEVEL_PREFIX=/home/nvidia/catkin_ws/devel -
DCMAKE_INSTALL_PREFIX=/home/nvidia/catkin_ws/install -G Unix Makefiles" in "/home/nvidia/catkin_ws/build"
####
-- Using CATKIN_DEVEL_PREFIX: /home/nvidia/catkin_ws/devel
-- Using CMAKE_PREFIX_PATH: /home/nvidia/catkin_ws/devel;/opt/ros/kinetic
-- This workspace overlays: /home/nvidia/catkin_ws/devel;/opt/ros/kinetic
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/nvidia/catkin_ws/build/test_results
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.7.8
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - i2c_imu
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'i2c_imu'
-- ==> add_subdirectory(i2c_imu-master)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at i2c_imu-master/CMakeLists.txt:19 (message):
Syntax error in cmake code at
/home/nvidia/catkin_ws/src/i2c_imu-master/CMakeLists.txt:19
when parsing string
RTIMULib: ${{RTIMULib }}
syntax error, unexpected cal_SYMBOL, expecting }} (22)
-- Configuring incomplete, errors occurred!
See also "/home/nvidia/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/nvidia/catkin_ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
Originally posted by TheMilkman on ROS Answers with karma: 17 on 2018-02-12
Post score: 0
Original comments
Comment by gvdhoorn on 2018-02-13:
Can you add the following line to the CMakeLists.txt and add whatever gets printed to your OP? Add this after find_library(RTIMULib ..):
message(STATUS "RTIMULib: ${RTIMULib}")
Comment by TheMilkman on 2018-02-18:
Thanks for your reply! The question has been updated with the information you asked for
Comment by gvdhoorn on 2018-02-19:
You have a space in there after the variable name. CMake might not like that.
And as @ahendrix writes: you probably need to install the library. The message(STATUS ..) was just to see whether the library is actually found or not.
Answer:
It looks like that ROS package is just a wrapper for an IMU library, RTIMULib2. You should probably install that library.
Originally posted by ahendrix with karma: 47576 on 2018-02-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by TheMilkman on 2018-03-11:
Thanks @ahendrix! You're 100% right | {
"domain": "robotics.stackexchange",
"id": 30024,
"tags": "ros-kinetic"
} |
Why can mRNA come out of the nucleus but not enter it? | Question: I am a mechatronics engineer who stopped learning biology after high school - but this is bothering me.
mRNA is, if I recall correctly, created in the nucleus of the cells and migrates out of the nucleus inside the cytoplasm where it will be translated by ribosomes.
mRNA vaccines inject mRNA molecules inside the body, which are apparently transported by their nanolipidic particle coating across the cell membrane, correct?
I understand that the DNA→mRNA transcription is probably not reversible, but I do not understand how the mRNA is only able to cross the nucleus-cytoplasm membrane in one direction.
Care to explain? Bonus kudos for clarifications on the rest of my doubts.
Answer: Nuclear pores control what gets in and out of the nucleus. In general, mRNAs are only allowed out, they don't go back in. Reverse transcriptases, of course, will put mRNA back into DNA, but only some viruses, like HIV, have those enzymes.
https://portlandpress.com/biochemj/article-abstract/477/1/23/221793/Into-the-basket-and-beyond-the-journey-of-mRNA?redirectedFrom=fulltext | {
"domain": "biology.stackexchange",
"id": 12438,
"tags": "cell-biology, dna, mrna"
} |
Why is the order reversed on measurement? | Question: Why is the order reversed on measurement?
from qiskit import(
QuantumCircuit,
execute,
Aer)
from qiskit.visualization import plot_histogram
# Use Aer's qasm_simulator
simulator = Aer.get_backend('qasm_simulator')
# Create a Quantum Circuit acting on the q register
circuit = QuantumCircuit(3, 3)
# Add a X gate on qubit 0
circuit.x(0)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1
circuit.cx(0, 1)
circuit.barrier()
# Map the quantum measurement to the classical bits
circuit.measure([0,1,2], [0,1,2])
# Execute the circuit on the qasm simulator
job = execute(circuit, simulator, shots=1000)
# Grab results from the job
result = job.result()
# Returns counts
counts = result.get_counts(circuit)
print("\nTotal count:",counts)
# Draw the circuit
circuit.draw()
Got result:
Total count for 00 and 11 are: {'011': 1000}
But I'm expecting '110'.
Answer: I still run into this issue too. If you consider $|q0\rangle$ to be the most significant bit (MSB), you have to map it to the most significant classical bit as well, which in your case is bit no. 2. Or you can flip your quantum circuit upside down; then $|q0\rangle$ becomes the least significant bit (LSB) and the measurement will meet your expectation.
A code
circuit.measure([0,1,2], [0,1,2])
is valid in case $|q0\rangle$ is LSB and
circuit.measure([0,1,2], [2,1,0])
in case $|q0\rangle$ is MSB.
I think that the reason for this arrangement is simply a convention, so you can choose whether $|q0\rangle$ is MSB or LSB and set the measurement procedure accordingly. | {
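The convention can be illustrated without running Qiskit at all. Qiskit prints counts keys little-endian, i.e. classical bit 0 is the rightmost character, so a key reads $c_2 c_1 c_0$. With X on $q_0$ and CX(0,1) the qubit values are $q_0=1$, $q_1=1$, $q_2=0$, and the two measurement mappings give the two strings discussed above (a plain-Python sketch; `counts_key` is a hypothetical helper, not a Qiskit function):

```python
qubit_values = {0: 1, 1: 1, 2: 0}    # q0=1, q1=1, q2=0 after X and CX(0,1)

def counts_key(bit_of_clbit):
    # Build the counts string: classical bit n-1 first, classical bit 0 last.
    n = len(bit_of_clbit)
    return "".join(str(bit_of_clbit[n - 1 - i]) for i in range(n))

# measure([0,1,2], [0,1,2]): classical bit i holds qubit i   -> key '011'
direct = counts_key({q: qubit_values[q] for q in range(3)})
# measure([0,1,2], [2,1,0]): classical bit 2-i holds qubit i -> key '110'
swapped = counts_key({2 - q: qubit_values[q] for q in range(3)})

assert (direct, swapped) == ("011", "110")
```

This is why the question's run printed `'011'`: the state really is $q_2 q_1 q_0 = 011$ in Qiskit's ordering, and reversing the classical-bit mapping recovers the expected `'110'`.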
"domain": "quantumcomputing.stackexchange",
"id": 2673,
"tags": "quantum-gate, programming, qiskit"
} |
Difference between any arene and an aromatic compound? | Question: I can't seem to find what the difference between these two is. In my text book, it says that 'arenes are aromatic hydrocarbons containing one or more benzene rings' which to me suggests that arenes are a subgroup of aromatic hydrocarbons and aromatic hydrocarbons need not contain one or more benzene rings. However later on in my book it says that 'any compound with the benzene ring is classified as an aromatic compound', which to me is the exact same definition as that given for an arene. Is there a difference between arenes and aromatic compounds?
Thank you in advance :)
Answer: Arene — Compound which contains a benzene ring
Aromatic Compounds — Compounds having aroma (wait, the modern definition is different, I know)
Every arene is an aromatic compound but every aromatic compound need not be an arene.
Mathematically,
$$\text{arene} \subset \text{aromatic compounds}$$
therefore anything which contains a benzene ring has to be aromatic
Some examples of arenes:
Getting into aromatic compounds, the modern criteria for aromaticity are:
The molecule is cyclic (a ring of atoms)
The molecule is planar (all atoms in the molecule lie in the same plane)
The molecule is fully conjugated (p orbitals at every atom in the ring)
The molecule has $4n+2$ π electrons ($n=0$ or any positive integer)*
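The $4n+2$ electron-count criterion can be written as a tiny predicate. This is only a toy sketch: it checks the electron count alone, not the cyclic/planar/fully-conjugated conditions in the list above:

```python
def satisfies_huckel(pi_electrons: int) -> bool:
    """True if pi_electrons = 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

assert satisfies_huckel(6)        # benzene (n = 1)
assert satisfies_huckel(2)        # cyclopropenyl cation (n = 0)
assert satisfies_huckel(10)       # naphthalene's 10 pi electrons (n = 2)
assert not satisfies_huckel(4)    # cyclobutadiene: 4n count, antiaromatic
assert not satisfies_huckel(8)    # planar cyclooctatetraene would be 4n
```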
Examples (non-benzenoids):
Now, benzenoids (compounds which contain benzene) do match with this. Therefore, benzene derivatives are aromatic compounds. | {
"domain": "chemistry.stackexchange",
"id": 2822,
"tags": "aromatic-compounds"
} |
How many qubits are simulable with a normal computer and freely accessible simulators? | Question: I want to simulate an arbitrary isolated quantum circuit acting on $n$ qubits (i.e. a pure state of $n$ qubits).
As far as I know, RAM is the bottleneck for quantum simulators. You can consider a "normal" computer to have between $4$ and $8$ Gio of RAM; all the other components are considered sufficiently powerful not to be the bottleneck.
With this definition of a "normal" computer,
What is the maximum value of $n$ (the number of qubits) for which an arbitrary quantum circuit is simulable in a reasonable time ($<1\text{h}$) with a normal computer and freely accessible simulators?
Answer: This answer doesn't directly answer the question (I have little experience of real simulators with practical overheads etc.), but here's a theoretical upper bound.
Let's assume that you need to store the whole state vector of $n$ qubits in memory. There are $2^n$ elements that are complex numbers. A complex number requires 2 real numbers, and a real number occupies 24 bytes in Python. Let's say we want to cram this into $4\times 10^9$ bytes of RAM (probably leaving a few over for your operating system etc.). Hence,
$$
48\times 2^n\leq 4\times 10^9
$$
Rearrange for $n$ and you have $n\leq26$ qubits.
Note that applying gates in a quantum circuit is relatively inexpensive memory-wise. See the "Efficiency Improvements" section in this answer. From that strategy, one should be able to estimate the time it takes to apply a single one- or two-qubit gate to an $n$-qubit system, and hence how many gates you might expect to fit within some time limit (an hour is very modest, but would certainly serve for illustrative purposes).
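The arithmetic above as a two-line check (assuming the 48-bytes-per-amplitude estimate):

```python
import math

bytes_per_amplitude = 48    # two 24-byte Python floats per complex amplitude
ram_bytes = 4e9             # ~4 GB budget

# Largest n with 48 * 2**n <= 4e9:
n = int(math.floor(math.log2(ram_bytes / bytes_per_amplitude)))

assert n == 26              # 48 * 2**26 ~ 3.2 GB fits; 48 * 2**27 does not
assert bytes_per_amplitude * 2**n <= ram_bytes < bytes_per_amplitude * 2**(n + 1)
```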
"domain": "quantumcomputing.stackexchange",
"id": 303,
"tags": "simulation"
} |
Protected Majorana Zero modes in Kitaev Chain | Question: Kitaev's one-dimensional p-wave superconductor Hamiltonian${}^\dagger$ is
\begin{equation}
{\cal H}_{JW}=-J\sum\limits_i(c_{i}^\dagger c_{i+1} + c_{i+1}^\dagger c_{i} + c_{i}^\dagger c_{i+1}^\dagger + c_{i+1} c_{i} - 2gc_{i}^\dagger c_{i}+g)
\end{equation}
After Fourier transformation ($c_k=\frac{1}{\sqrt{N}}\sum\limits_j c_je^{ikx_j}$) hamiltonian becomes
\begin{equation}\label{afterfourier}
{\cal H}_f= \sum\limits_k(2[Jg-J\cos(ka)]c_{k}^\dagger c_{k}+iJ\sin(ka)[c_{-k}^\dagger c_{k}^\dagger + c_{-k}c_{k}]-Jg)
\end{equation}
If I am not wrong, by ignoring constant term, above Hamiltonian can also be written in standard Bogoliubov-de Gennes form
\begin{equation}\label{bdgequation}
{\cal H}_{BdG} = J\sum\limits_k\Psi_k^\dagger \begin{pmatrix}g-\cos k & -i \sin k\\ i\sin k & -g+\cos k \end{pmatrix}\Psi_k
\end{equation}
where
$$\Psi_k = \begin{pmatrix}
c_{-k}\\
c_k^\dagger
\end{pmatrix}
$$
The energy spectrum for particle-hole symmetry is symmetric about zero. For hole, it is $-\epsilon_k/2$ and for electron it is $\epsilon_k/2$. Where
$$\epsilon_k=2J\sqrt{1+g^2-2g\cos(ka)}$$
If we do a Bogoliubov transformation of the Fourier-transformed Hamiltonian, we get
\begin{equation}\label{eq:BVtrans}
{\cal H}=\sum\limits_k\epsilon_k(\gamma_k^\dagger \gamma_k-1/2)
\end{equation}
My Question
How particle-hole symmetric Hamiltonian is protecting the Majorana-zero-mode in one phase.
${}^\dagger$In the special case when $t=\Delta$
Answer: The particle-hole energy spectrum is symmetric around zero energy.
When $g\to 0$, we have two zero energy levels, corresponding to the Majorana zero modes, which are localized far away from each other and separated by a gapped medium. It is not possible to move these levels from zero energy individually (as one needs to respect particle-hole symmetry). The only way to split the Majorana modes in energy is to first close the bulk energy gap. For more, refer to this.
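A standard-library check of the bulk picture, using the dispersion from the question with $J = a = 1$: the bulk gap $\min_k \epsilon_k = 2J|1-g|$ closes only at the critical point $g = 1$, which separates the topological phase hosting the protected Majorana end modes ($g < 1$) from the trivial phase ($g > 1$):

```python
import math

def eps(k, g, J=1.0, a=1.0):
    # Bulk dispersion eps_k = 2J * sqrt(1 + g^2 - 2 g cos(ka))
    return 2 * J * math.sqrt(1 + g * g - 2 * g * math.cos(k * a))

ks = [2 * math.pi * m / 400 for m in range(400)]   # Brillouin-zone sample
gaps = {g: min(eps(k, g) for k in ks) for g in (0.5, 1.0, 1.5)}

for g, gap in gaps.items():
    assert abs(gap - 2 * abs(1 - g)) < 1e-9        # gap = 2J|1 - g|
assert gaps[1.0] < 1e-9                            # gap closes at g = 1
```

This is exactly why the zero modes are protected: while the bulk gap is open, particle-hole symmetry pins the $\pm$ pair of levels to zero, and splitting them requires driving $g$ through 1.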
"domain": "physics.stackexchange",
"id": 69251,
"tags": "condensed-matter, superconductivity"
} |
Can there be a torque-less net force on a dipole in a magnetic field? | Question: I realise that a magnetic dipole moment is essentially defined on the basis of torque, which also seems to imply that a magnetic field imposed on a current-carrying closed loop can only induce a torque, and never a net force.
Are there any exceptions to this?
Answer:
Can there be a torque-less net force on a dipole in a magnetic field?
Yes, such as when there is a point in the field where the magnetic field is zero but its directional derivative in some direction is not zero. This is because
$$\vec\tau=\vec m\times\vec B$$
but
$$\vec F=(\vec m\cdot\vec\nabla)\vec B.$$
You can also make the torque be zero, even if the field isn’t zero anywhere, by aligning the dipole parallel or antiparallel to the field. But to get a force you have to have a nonuniform field.
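A minimal numeric illustration of this case in standard-library Python. The field below is an assumed divergence-free example (not from the post): a dipole aligned with a nonuniform field feels zero torque but a nonzero force.

```python
def B(x, y, z, B0=1.0, b=0.3):
    # Divergence-free, nonuniform field: div B = -b/2 - b/2 + b = 0
    return (-0.5 * b * x, -0.5 * b * y, B0 + b * z)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

m = (0.0, 0.0, 2.0)                 # dipole parallel to B on the z-axis
tau = cross(m, B(0.0, 0.0, 0.0))    # torque  m x B

h = 1e-6                            # force (m . grad)B, central difference
F = tuple(m[2] * (bp - bm) / (2 * h)
          for bp, bm in zip(B(0.0, 0.0, h), B(0.0, 0.0, -h)))

assert tau == (0.0, 0.0, 0.0)       # no torque: m is parallel to B here
assert all(abs(Fi - ei) < 1e-6 for Fi, ei in zip(F, (0.0, 0.0, 0.6)))
```

Here $(\vec m\cdot\vec\nabla)\vec B = m_z\,\partial_z\vec B = (0, 0, m_z b)$, so the dipole is pulled along $+\hat z$ with no torque at all.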
a magnetic dipole moment is essentially defined on the basis of torque
Although this is how Wikipedia says it “can be” defined, this definition is generally used only in introductory treatments. At a more advanced level, one can find a general multipole expansion of the magnetic vector potential for a current loop using the equations
$$\vec A(\vec r)=\frac{I}{c}\oint\frac{d\vec\ell’}{|\vec r-\vec r’|}$$
and
$$\frac{1}{|\vec r-\vec r’|}=\frac1r\sum_{n=0}^\infty\left(\frac{r’}{r}\right)^nP_n(\cos\theta)$$
where $P_n(x)$ is a Legendre polynomial and $\theta$ is the angle between $\vec r$ and $\vec r’$.
See this lecture for how the $n=1$ term leads to the following definition of the dipole moment for an arbitrarily-shaped current loop, simply in terms of where current is flowing in space:
$$\vec m\equiv\frac{I}{2c}\oint \vec r’\times d\vec\ell’.$$
The $n=2$ term gives the loop’s quadrupole moment and vector potential, the $n=3$ term the octupole, etc.
More general treatments might work with a current density in space rather than a current confined to a wire, but a multipole expansion based on powers of $1/r$ and Legendre polynomials (or spherical harmonics) would be similar.
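As a numeric sanity check of the $n=1$ definition above, a discretized line integral over a circular loop reproduces the textbook result $\vec m = (I\pi a^2/c)\,\hat z$. This is my own illustrative script (the values of $I$, $c$ and $a$ are arbitrary), not part of the linked lecture:

```python
import numpy as np

# Discretized check of m = (I / 2c) * closed integral of r' x dl' for a
# circular loop of radius a in the x-y plane (Gaussian-style units). I, c
# and a are arbitrary illustration choices; the expected closed-form
# answer is m = (I * pi * a**2 / c) * z_hat.
I, c, a = 2.0, 1.0, 0.5
phi = np.linspace(0.0, 2.0 * np.pi, 20001)
r = np.column_stack([a * np.cos(phi), a * np.sin(phi), np.zeros_like(phi)])
dl = np.diff(r, axis=0)              # small segment vectors along the loop
mid = 0.5 * (r[:-1] + r[1:])         # evaluate r' at segment midpoints
m = (I / (2.0 * c)) * np.cross(mid, dl).sum(axis=0)
```

The $x$ and $y$ components cancel by symmetry, and the $z$ component converges to $I\pi a^2/c$ as the segmentation is refined.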
Note: These equations are in Gaussian, not SI, units. | {
"domain": "physics.stackexchange",
"id": 69810,
"tags": "magnetic-fields, torque, dipole, dipole-moment"
} |
Insert values into two different but similar tables | Question: I have a combobox with two items. And if one is selected I want to write data to "salary" table and if selected another, to "other" table.
The difference between two code blocks is only one word. I would like to know how I could avoid repeating same code twice.
string myConnection = "datasource=localhost; port=3306; username=root;password=root";
if (comboBox4.SelectedItem.ToString() == "Salary")
{
string insert = "insert into budget.salary (name, suma) values (@name, @price);";
using (var conDataBase = new MySqlConnection(myConnection))
using (var cmdDataBase = new MySqlCommand(insert, conDataBase))
{
cmdDataBase.Parameters.AddWithValue("@name", name);
cmdDataBase.Parameters.AddWithValue("@price", suma);
conDataBase.Open();
cmdDataBase.ExecuteNonQuery();
MessageBox.Show("Saved");
}
}
if (comboBox4.SelectedItem.ToString() == "Other")
{
string insert = "insert into budget.other (name, suma) values (@name, @price);";
using (var conDataBase = new MySqlConnection(myConnection))
using (var cmdDataBase = new MySqlCommand(insert, conDataBase))
{
cmdDataBase.Parameters.AddWithValue("@name", name);
cmdDataBase.Parameters.AddWithValue("@price", suma);
conDataBase.Open();
cmdDataBase.ExecuteNonQuery();
MessageBox.Show("Saved");
}
}
Answer: Declare a method:
public void InsertInto(string table) {
string myConnection = "datasource=localhost; port=3306; username=root;password=root";
string insert = "insert into budget." + table + " (name, suma) values (@name, @price);";
using (var conDataBase = new MySqlConnection(myConnection))
using (var cmdDataBase = new MySqlCommand(insert, conDataBase)) {
cmdDataBase.Parameters.AddWithValue("@name", name);
cmdDataBase.Parameters.AddWithValue("@price", suma);
conDataBase.Open();
cmdDataBase.ExecuteNonQuery();
MessageBox.Show("Saved");
}
}
And change your code to call the method:
if (comboBox4.SelectedItem.ToString() == "Salary") {
InsertInto("salary");
}
if (comboBox4.SelectedItem.ToString() == "Other") {
InsertInto("other");
}
Declaring a method that does what you want and then calling the method multiple times instead of repeating your code multiple times is the standard way to avoid duplicating code. This principle is known as encapsulation and is very necessary for writing clean, readable, and reusable code. You should get used to using this technique whenever possible.
More info: https://en.wikipedia.org/wiki/Encapsulation_(computer_programming) | {
"domain": "codereview.stackexchange",
"id": 20995,
"tags": "c#, mysql"
} |
Spin-forbidden transition of bromine | Question: In one of the lectures of my spectroscopy course, the professor claimed that there is a strong absorption band in the UV/Vis spectrum of bromine, associated with a singlet-state to triplet-state transition. He asked us to think about why this spin-forbidden transition occurs.
So far I haven't really come up with a satisfactory answer. Nevertheless, here are two of my ideas:
Broadly speaking, spin-forbidden does not necessarily mean that a transition is not observed, but rather that the intensity of the spectral line will be considerably diminished. Hence, a spin-forbidden transition could be observed in a spectrum.
An electronic transition is usually associated with a vibrational transition. Through the vibrational excitation the geometry of the molecule is changed such that the spin selection rules are "relaxed".
Am I on the right track with these ideas?
Answer: The selection rules are related to the transition dipole moment that determines the intensity of a given transition. If a given selection rule holds perfectly, all that it says is that a forbidden transition has zero intensity and an allowed transition has non-zero intensity.
If the selection rule does not hold exactly, just as in the case of the spin selection rule for heavy atoms, a forbidden transition can be intense. Some examples: the $\text{B}^3\Pi_u-\text{X}^1\Sigma^+_g$ electronic transition in $\ce{Br2}$ and $\ce{I2}$, that are strong when compared to the same transition in $\ce{F2}$ and $\ce{Cl2}$; and the strong $^3P_1-^1S_0$ transition in $\ce{Hg}$, while the analogous one in $\ce{He}$ is very weak. As $L$ and $S$ are not good quantum numbers when there is strong spin-orbit coupling, the selection rules involving them can't be used.
The second answer you came up with makes sense, but not for spin selection rule. Consider the parity selection rule for molecules with an inversion center. As the molecule vibrates (in an asymmetric mode) it can be distorted into a geometry that does not have an inversion center, so the selection rule does not hold. In this case, this vibronic transition will be allowed. | {
"domain": "chemistry.stackexchange",
"id": 11597,
"tags": "inorganic-chemistry, spectroscopy, uv-vis-spectroscopy"
} |
What is wrong with this sinc interpolation? (Zero padding in frequency domain) | Question: I'm trying to pad $N=16$ zeros to the DFT of a $16$ point sinewave, but something is wrong either with my code or with my method. The method is this : I create a new $32$ point DFT where $F'(0),...,F'(N/2 - 1)$ are the elements of the original DFT, $F(0),...,F(N/2 - 1)$. I start to add $16$ zeros starting from $F'(N/2)$. The remaining values of $F'(k)$ need to have the proper conjugate symmetry, so that $F'(k) = (F'(N' - k))^*$ holds. I am using Python and numpy.
import numpy as np
from numpy import pi

# parameters
f_s = 1
f_1 = 0.25
f_2 = 0.4
t_s = 1/f_s
N = 16
# define sinewave
def sinewave(x):
return np.sin(2 * pi * f_1 * x * t_s) + 1.5*np.sin(2 * pi * f_2 * x * t_s)
y = np.zeros(N)
for i in range(0, N):
y[i] = sinewave(i)
# zero-pad in frequency domain by 16 points
zero_pad = 16
zero_padded_transform = np.zeros(N + zero_pad, dtype = complex)
zero_padded_transform[0 : 8] = np.fft.fft(y)[0 : 8] # 8 since N/2 = 16/2 = 8
zero_padded_transform[8 + zero_pad + 1:] = np.fft.fft(y)[9:]
The symmetry is correct as the inverse transform has zero imaginary part, but the number of zeros is not correct. As such, this method does not seem to interpolate the sinewave correctly in the time domain.
Answer: Zero padding the frequency samples (just as the OP has done in padding out the center of the array and maintaining symmetry) and using the inverse DFT is almost identical to interpolating with a Sinc in the time domain. To be precise, the result in time is the convolution of the original time domain samples with the Dirichlet Kernel (which is identical to an aliased Sinc function). As $N$ the total number of samples gets larger, the Dirichlet Kernel approaches a Sinc function. This is also consistent how for small $N$ we will have larger error due to time domain aliasing.
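To make the prescription concrete, the padding recipe can be sketched as below. This is a hedged illustration (in particular, splitting the Nyquist bin between the two halves of the padded spectrum and rescaling by the padding factor are details the question's code misses), not the answer author's own code:

```python
import numpy as np

def zero_pad_interp(y, factor=2):
    """Interpolate a real signal of even length N by zero padding the centre
    of its DFT, keeping conjugate symmetry. The Nyquist bin Y[N/2] is split
    between the two halves, and the result is rescaled by `factor`."""
    N = len(y)
    Y = np.fft.fft(y)
    M = N * factor
    Z = np.zeros(M, dtype=complex)
    Z[:N // 2] = Y[:N // 2]
    Z[N // 2] = Y[N // 2] / 2          # half the Nyquist bin here ...
    Z[M - N // 2] = Y[N // 2] / 2      # ... and half at its mirror position
    Z[M - N // 2 + 1:] = Y[N // 2 + 1:]
    return factor * np.fft.ifft(Z).real
```

Interpolation done this way passes exactly through the original samples, so `zero_pad_interp(y, 2)[::2]` reproduces `y` up to rounding.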
Zero padding in frequency will interpolate more samples in time of the Discrete Frequency Inverse Fourier Transform (DFIFT) which is discrete in frequency and continuous in time. This would be the Frequency to Time equivalent of the DTFT which is discrete in time and continuous in frequency.
Plots comparing the magnitude of the ideal Sinc to the Dirichlet Kernel for different total number of samples $N$ are shown below: | {
"domain": "dsp.stackexchange",
"id": 12483,
"tags": "python, dft, interpolation, zero-padding"
} |
Get values between two dates | Question: I have a search form where users can enter ValidFrom and ValidTo dates,
There are three conditions:
ValidFrom and ValidTo NOT NULL
Action: Get values between two dates
ValidFrom is NOT NULL and ValidTo is NULL
Action: Get values where date is greater than ValidFrom
ValidFrom is NULL and ValidTo is NOT NULL
Action: Get values where date is less than ValidTo
This is what I have got:
if (model.ValidFrom != null && model.ValidTo != null)
{
var dateFrom = (DateTime)model.ValidFrom;
var dateTo = (DateTime)model.ValidTo;
dalList = dalList.Where(s =>
DbFunctions.TruncateTime(s.ValidFrom) >= dateFrom.Date
&& DbFunctions.TruncateTime(s.ValidTo) <= dateTo.Date).AsQueryable();
}
if (model.ValidFrom != null && model.ValidTo == null)
{
var dateFrom = (DateTime)model.ValidFrom;
dalList = dalList.Where(s => DbFunctions.TruncateTime(s.ValidFrom) >= dateFrom.Date).AsQueryable();
}
if (model.ValidFrom == null && model.ValidTo != null)
{
var dateTo = (DateTime)model.ValidTo;
dalList = dalList.Where(s => DbFunctions.TruncateTime(s.ValidTo) <= dateTo.Date).AsQueryable();
}
Is there a better way to write this type of search?
Answer: Although the answer from Heslacher is a really good one, here's also what you might do. Create two boolean variables to check for null on the model.ValidFrom and model.ValidTo:
bool modelValidFromIsNull = model.ValidFrom == null;
bool modelValidToIsNull = model.ValidTo == null;
Now, like Heslacher mentioned, place a guard to rule out if they are both null.
if (modelValidFromIsNull && modelValidToIsNull) { return null; }
Now you can use these bool variables to do some magic. In the Where() clause of the query, use these flags to check or ignore the condition you want to apply. If the flag is true, the condition will be skipped and if it is false, the condition will be evaluated. A small example:
var numbers = new[] { 1, 2, 3, 4, 5, 6 };
var remaining = numbers.Where(x => false || x > 3);
The variable remaining will now contain 4, 5 and 6. If you were to change that false to true, the variable remaining would contain all the numbers from the original source. [That's why you need the guard :) ]
Now you can apply this logic to your code. When the model.ValidFrom is not null, your variable modelValidFromIsNull will be false. This means that in the where clause the left part is false and the second part will be evaluated. The same goes for model.ValidTo.
This is the resulting code:
bool modelValidFromIsNull = model.ValidFrom == null;
bool modelValidToIsNull = model.ValidTo == null;
if (modelValidFromIsNull && modelValidToIsNull) { return null; }
var dateFrom = modelValidFromIsNull ? DateTime.MinValue : ((DateTime)model.ValidFrom).Date;
var dateTo = modelValidToIsNull ? DateTime.MaxValue : ((DateTime)model.ValidTo).Date;

var result = dalList.Where(s => modelValidFromIsNull || DbFunctions.TruncateTime(s.ValidFrom) >= dateFrom)
                    .Where(s => modelValidToIsNull || DbFunctions.TruncateTime(s.ValidTo) <= dateTo);
"domain": "codereview.stackexchange",
"id": 11782,
"tags": "c#, datetime, linq, search"
} |
Sean Carroll GR - Ex.3.6 (b) & (c) | Question: I'm working in the newtonian limit of GR with the metric
$$
ds^2 = -(1+2\Phi)dt^2 + (1-2\Phi)dr^2 +r^2d\theta^2+r^2sin^2\theta\;d\phi^2
$$
where
$$\Phi = -\frac{GM}{r}.$$
We are first asked to compare the time dilation between two different radii R1 and R2 which was straightforward. My problem is with (b) and (c):
(b) Solve for a geodesic corresponding to a circular orbit around the equator of the Earth ($\theta = \pi/2$). What is $d\phi/dt$?
(c) How much proper time elapses while a satellite at Radius R1 completes one orbit? (to first order in $\Phi$).
I can find the Christoffel symbols, but I don't understand how we arrive at an equation which is not degenerate. I always end up with $0\; \partial\phi/\partial t = 0$ which is obviously not the desired answer.
I don't even know where to begin for part (c)
Edit: Thinking about it some more, when we set the radius($r=R$) and angle ($\theta = \pi/2$) constant the metric reduces to $$
ds^2 = -(1+2\Phi)dt^2 +R^2d\phi^2
$$
Is this not correct?
Edit2: Using this and calculating the connection and plugging the geodesic equation I compute
$$
\frac{d^2\phi}{d\tau^2} = -\frac{1}{R^2}\frac{d\Phi}{d\phi}
$$
With the only non-zero geodesic component being
$$
\frac{d^2\phi}{d\tau^2}+\frac{1}{R^2}\frac{\partial\Phi}{\partial{\phi}}\left(\frac{\partial{t}}{\partial{\tau}}\right)^2 = 0
$$
What am I doing wrong here?
Edit 3:
Okay so $\frac{\partial{\Phi}}{\partial{\phi}} = 0$ hence $\frac{d^2\phi}{d\tau^2} = 0$ which means $\frac{d\phi}{d\tau} = \omega_c$.
Now I'm still not sure how to go about part (c). Is it the period of the function I just derived?
Answer: I can't comment yet, so I have to offer this hint as an answer. Have you tried calculating $\frac{d\Phi}{d\phi}$? | {
"domain": "physics.stackexchange",
"id": 58530,
"tags": "homework-and-exercises, general-relativity, metric-tensor, geodesics"
} |
Null Geodesics in Anti-de Sitter space time | Question:
Would anyone be able to explain how the step was taken in getting the final equation with $R \tan(t/R)$
I understand the steps before where we are finding the null geodesic equation for the AdS space time but not sure how the final equation is produced.
Note: This question is using a coordinate transformed metric of AdS with transformation $r=R \sinh \rho$ as shown at the top.
Answer: Separate the differential relation
$$(\cosh\rho)\,\dot t=R\dot\rho$$
between $t$ and $\rho$ to get
$$\frac{dt}{R}=\frac{d\rho}{\cosh\rho}.$$
Then integrate to get
$$\frac{t}{R}=\tan^{-1}{(\sinh\rho)}$$
or
$$\sinh\rho=\tan\frac{t}{R}.$$ | {
"domain": "physics.stackexchange",
"id": 64227,
"tags": "general-relativity, geodesics, anti-de-sitter-spacetime"
} |
Why does a system whose equations of movement are $\lambda^2U^{\alpha} + \partial_{\mu}F^{\mu \alpha} = 0$ have three degrees of freedom? | Question: I'm trying to understand the solution of a problem where I have to study a field ($U^\mu$) which Lagrangian is:
$$\mathscr{L} = - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} + \frac{1}{2} \lambda^2 U_{\mu} U^{\mu}$$
Where $ F_{\mu \nu}=\partial_{\mu} U_{\nu} - \partial_{\nu} U_{\mu}$. I got the equations of movement:
$$\lambda^2U^{\alpha} + \partial_{\mu}F^{\mu \alpha} = 0$$
But here my professor told us that if $\lambda \neq 0$ then we have three degrees of freedom in the problem. Why is that so? I haven't been able to grasp it so far.
Answer: Because if $\lambda \neq 0$ you are treating the massive Vector Field, namely you're talking about vector particles (spin $1$) with mass: $W^+$, $W^-$ and $Z^0$ bosons.
Indeed by using the identity you wrote; $F_{\mu \nu}=\partial_{\mu} U_{\nu} - \partial_{\nu} U_{\mu}$, you can write your Lagrangian in the form
$$\mathcal{L} = -\frac{1}{2}\partial_{\mu}U_{\nu}\partial^{\mu}U^{\nu} + \frac{1}{2}\left(\partial_{\mu}U^{\mu}\right)^2 - \frac{1}{2}\lambda^2 U_{\mu}U^{\mu}$$
which is similar to the Klein-Gordon Lagrangian but with one term more. Varying $U_{\mu}$ you obtain the equation of motion (after writing the action $\mathcal{S} = \int \mathcal{L}\ d^4x$) getting
$$\partial^{\mu}\partial^{\nu}F_{\mu\nu} - \lambda^2 \partial^{\nu}U_{\nu} = 0$$
and, since $\partial^{\mu}\partial^{\nu}F_{\mu\nu}$ vanishes identically by the antisymmetry of $F_{\mu\nu}$, for $\lambda\neq 0$ you have the constraint
$$\partial^{\mu}U_{\mu} = 0$$
Using those facts, the equations of motion can be written in the form:
$$(\Box - \lambda^2)U_{\mu} = 0$$
$$\partial^{\mu}U_{\mu} = 0$$
This is what tells us that of the four $U_{\mu}$ fields, only three of them are independent fields, and they describe in a covariant way the three associated polarizations of a spin $1$ particle. | {
"domain": "physics.stackexchange",
"id": 27012,
"tags": "lagrangian-formalism, vector-fields, degrees-of-freedom"
} |
Why is there a cap on the speed at which we can attain? | Question: Also, If you are traveling 1 mph under the speed of light on a train and throw a baseball in front of you at 20mph what would a viewer outside the train see?
Answer: The observer standing by the tracks would observe that the baseball is travelling faster than the train, but less than the speed of light.
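A quick numeric sketch of the Einstein velocity-addition formula $u' = (v+u)/(1+vu/c^2)$ makes this concrete (illustrative values in SI units; this is my own script, not part of the original answer):

```python
def add_velocities(v, u, c=299792458.0):
    """Relativistically combine collinear speeds v and u: (v + u) / (1 + v*u/c^2)."""
    return (v + u) / (1.0 + v * u / c**2)

mph = 0.44704                 # one mile per hour in m/s
c = 299792458.0
train = c - 1.0 * mph         # train moving 1 mph under c
ball = 20.0 * mph             # ball thrown forward at 20 mph, in the train frame
ground_speed = add_velocities(train, ball)
```

The ground-frame speed of the ball comes out only about $3\times 10^{-8}$ m/s above the train's speed — still below $c$; the extra 20 mph is almost entirely absorbed by the denominator.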
There is an equation for the addition of velocities: see
See http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/einvel.html | {
"domain": "physics.stackexchange",
"id": 29096,
"tags": "special-relativity, visible-light, speed-of-light"
} |
Determining the number of days from today | Question: Today I've got a pretty simple snippet to be reviewed. The function is extremely simple, as all it does is retrieve some entities from Core Data and then compares the dueDate attribute to today. I'm here asking about two questions:
Is this code reliable?
Is this the most efficient way (fewest lines of code) to accomplish this task?
The output will be separated into different arrays as seen by the comments inside the various if statements. The categories include:
Greater than One Month
One Month
One Week
Tomorrow
Today
func fetchAssignments() {
let appDelegate =
UIApplication.sharedApplication().delegate as! AppDelegate
let managedContext = appDelegate.managedObjectContext!
let fetchRequest = NSFetchRequest(entityName:"Assignment")
var error: NSError?
let fetchedResults =
managedContext.executeFetchRequest(fetchRequest,
error: &error) as! [NSManagedObject]?
if let results = fetchedResults {
for item in results {
let dueDate = item.valueForKey("dueDate") as! NSDate
let todayDate = NSDate()
let calendar: NSCalendar = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)!
let calendarComponents: NSDateComponents = calendar.components(NSCalendarUnit.CalendarUnitYear | NSCalendarUnit.CalendarUnitMonth | NSCalendarUnit.CalendarUnitDay, fromDate: todayDate, toDate: dueDate, options: .allZeros)
println("Days: " + calendarComponents.day.description + " Months: " + calendarComponents.month.description)
if calendarComponents.month > 0 && calendarComponents.day > 0 {
// Greater Than One Month
return
} else if calendarComponents.day > 7 {
// One Month
return
} else if calendarComponents.day > 1 {
// One Week
return
} else if calendarComponents.day == 1 {
// Tomorrow
return
} else {
// Today
return
}
}
} else {
println("Could not fetch \(error), \(error!.userInfo)")
}
}
Answer: Let's start with the main error that I see: You compute the timespan between the "current date" and the "due date". If the current date is today at 11am, and the due date is tomorrow at 10am then the difference
is 0 days and 23 hours, so this is reported as "today".
This is probably not what you wanted, therefore todayDate must be
set to the start of the current day. This is easily done with
let todayDate = calendar.startOfDayForDate(NSDate())
Then todayDate and calendar should be computed only once before the
loop starts, and not for each loop iteration.
There are also problems in the analysis of the computed date components:
if calendarComponents.month > 0 && calendarComponents.day > 0 { ...
What if the difference is exactly 2 (or more) months and 0 days?
} else if calendarComponents.day > 7 {
What if the difference is exactly one month and 0 days?
The first two if-conditions should therefore be
if calendarComponents.month >= 2 || (calendarComponents.month == 1 && calendarComponents.day > 0) {
// More than a month
} else if calendarComponents.month == 1 || calendarComponents.day > 7 {
// Between 8 days and one month
} else if ...
Other things that can be improved:
If you create a NSManagedObject subclass for your entity then
you can use the property accessors instead of valueForKey():
let fetchedResults = managedContext.executeFetchRequest(fetchRequest,
error: &error) as! [Assignment]?
// ... later ...
let dueDate = item.dueDate
The advantage is that you don't need the cast as! NSDate anymore
and the compiler can check for correct types.
Some type annotations are not necessary, e.g.
let calendarComponents: NSDateComponents = calendar.components(NSCalendarUnit.CalendarUnitYear | NSCalendarUnit.CalendarUnitMonth | NSCalendarUnit.CalendarUnitDay,
fromDate: todayDate, toDate: dueDate, options: .allZeros)
can be simplified to
let calendarComponents = calendar.components(.CalendarUnitYear | .CalendarUnitMonth | .CalendarUnitDay,
fromDate: todayDate, toDate: dueDate, options: nil)
And
println("Days: " + calendarComponents.day.description + " Months: " + calendarComponents.month.description)
can be simplified with string interpolation:
println("Days: \(calendarComponents.day) Months: \(calendarComponents.month)") | {
"domain": "codereview.stackexchange",
"id": 13334,
"tags": "datetime, ios, swift"
} |
Trouble with fixed-point arithmetic 2nd order IIR implementation in c | Question: DSP newcomer here!
I am tinkering around with a TI DSP and am trying to implement a second order IIR filter in C. Input is 16 bit 2's complement, as is the output, the accumulator is 32 bit wide.
I have tried several structures: Direct form I, Transposed direct form II, Gold & Rader structure. However, I can't seem to get anything to work and I am not sure whether it is a typo in the code of all of my filter implementations or whether it is because of me misunderstanding the principle of fixed-point arithmetic. I assume - or better: hope - that it's the same mistake that is affecting all implementations.
Here's my code for the transposed df II implementation:
// state variables
int w[3] = {0};
short filter_dfii(short value) {
w[0] = ((value * sos_filter_coeffs_b[0] + w[1]) >> 15 ) & 0xffff;
w[1] = value * sos_filter_coeffs_b[1]
- w[0] * sos_filter_coeffs_a[1]
+ w[2];
w[2] = value * sos_filter_coeffs_b[2]
- w[0] * sos_filter_coeffs_a[2];
return (short) w[0];
}
My df I implementation looks like this:
int last_value[2] = {0};
short filter_dfi(short value) {
w[0] =
((value * sos_filter_coeffs_b[0]
+ last_value[0] * sos_filter_coeffs_b[1]
+ last_value[1] * sos_filter_coeffs_b[2]
- w[1] * sos_filter_coeffs_a[1]
- w[2] * sos_filter_coeffs_a[2])
>> 15) & 0xffff;
last_value[1] = last_value[0];
last_value[0] = value;
w[2] = w[1];
w[1] = w[0];
return (short) w[0];
}
The filter coefficients are:
int sos_filter_coeffs_a[] = {32767, -51150, 21015};
int sos_filter_coeffs_b[] = {658, 1316, 658};
How did I get those coefficients? I utilized the Matlab 'butter' function and specified a 2nd order Butterworth lowpass with a cutoff frequency of 0.05*fs, or 2400 Hz in my specific case. Then I multiplied each coefficient with 32767 and rounded it to an integer value. If I call Matlab's freqz function with those quantized coefficients, I get a bode plot with the expected result, so I believe quantization error is not the issue here.
However, the result is not a lowpass but just garbage. Unfortunately, I lack the proper testing equipment to describe the result scientifically, but it sounds like digital oscillation that is slightly modulated by the actual input signal and is present even when there's no input at all.
So, in my opinion, this can't be an unstable filter since the poles are well within the unit circle. It can't be overflow limit cycle either because then it shouldn't be happening when there's only a low input level or no input at all... right?
If I alter the coefficients to
int sos_filter_coeffs_a[] = {32767, 0, 0};
int sos_filter_coeffs_b[] = {32767, 0, 0};
input gets returned basically unprocessed as expected, so my issues most likely aren't caused by any external factors - there must be something wrong with my coefficients, with the function itself, or both.
Any help is much appreciated!
Answer: The " & 0xffff" operation is questionable. It works for positive numbers but for negative numbers it will mask out the sign bit as well, so you will turn everything into positive numbers.
For the DF1, it would be better to delay the shift and mask until you return the actual output and do the conversion to short. This way you can keep the state variable in full double precision. | {
"domain": "dsp.stackexchange",
"id": 5176,
"tags": "filters, infinite-impulse-response, c, fixed-point"
} |
Append a controlled Initialize in Qiskit | Question: I would like to create a controlled initialize, I have the following code:
a = QuantumRegister(2,"a")
b = QuantumRegister(2,"b")
circuit = QuantumCircuit(a,b)
circuit.initialize(state)
and I would like to encode state only if qubits a and b are both 1.
I first tried with isometry but then I discovered there is a new simple method like this:
controlled_gate = StatePreparation(state).control()
The problem is that I can't understand how to append it to the circuit and how to set the control qubits.
Answer: The way to append gate instances to a QuantumCircuit is with the append method.
circuit.append(controlled_gate, circuit.qubits)
Here is a full example with explanations:
You mentioned:
I would like to encode state only if qubit a and b are 1
As such, you can create two single-qubit registers $a$ and $b$; and a register for the state to encode:
from qiskit import QuantumCircuit, QuantumRegister
circuit = QuantumCircuit(QuantumRegister(1, "a"),
QuantumRegister(1, "b"),
QuantumRegister(2, "state"))
Then, in order to control on two qubits ($a$ and $b$), you need to set control(2).
from qiskit.circuit.library import StatePreparation
import numpy as np
state = [0, 1/np.sqrt(2), -1.j/np.sqrt(2), 0]
controlled_gate = StatePreparation(state).control(2)
Finally, the append. The second parameter describes how to wire the gate, in this case circuit.qubits $= [a, b, state_0, state_1]$:
circuit.append(controlled_gate, circuit.qubits)
circuit.draw()
a: ──────────────────────■──────────────────────
│
b: ──────────────────────■──────────────────────
┌─────────────────────┴─────────────────────┐
state_0: ┤0 ├
│ State Preparation(0,0.70711,-0.70711j,0) │
state_1: ┤1 ├
└───────────────────────────────────────────┘ | {
"domain": "quantumcomputing.stackexchange",
"id": 3985,
"tags": "qiskit, quantum-gate, quantum-state"
} |
What determines rock temperature inside mountains? | Question: Wikipedia article says about the Gotthard Base Tunnel:
"It is the deepest railway tunnel in the world, with a maximum depth of 2,450 m (8,040 ft), comparable to that of the deepest mines on Earth. Without ventilation, the temperature inside the mountain reaches 46 °C (115 °F)."
More authoritative source with detailed temperature profile, a paper by L. Rybach and A. Busslinger (pdf) gives the following diagram:
What is the cause of the elevated temperature? Why does the temperature track the depth below the surface, even though the tunnel itself sits at a virtually uniform height above sea level?
Edit
In a quite simplified model I have homogeneous rock (the same thermal properties, geothermal gradient, etc.) with heat flow (conduction only) from the bottom, kept at some high temperature (say 1000 °C), to the surface, with an average annual temperature of say 10 °C. The 50 °C isotherm is at some depth below the horizontal surface (left diagram, not to scale). Does the isotherm follow the surface even in mountains?
Answer: You are on track to answering this yourself. Temperature is a continuous field, so near any surface it has to follow the boundary conditions, with the effect of any surface topography decaying away with depth.
For steady-state heat flow in a homogeneous medium, the heat equation reduces to the Laplace equation, which is easy to solve numerically. For example, here are the steady-state temperature contours for a 4 km tall square "mountain" with a 10°C surface temperature being heated from below by a 40 km deep 1000°C mantle boundary, assuming homogeneous rock.
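A minimal sketch of that numerical solution in the flat (1-D) limit, with assumed illustrative parameters: the steady state of the Laplace equation between fixed boundary temperatures is just a linear gradient, which Jacobi relaxation recovers. The same few lines applied to a 2-D grid shaped like the mountain produce contours like the ones described above.

```python
import numpy as np

# Steady-state conduction (Laplace equation) by Jacobi relaxation for a flat
# homogeneous 40 km column: 10 degC at the surface, 1000 degC at the bottom.
n = 81
T = np.zeros(n)
T[0], T[-1] = 10.0, 1000.0            # fixed-temperature boundary conditions
for _ in range(40000):                # relax until converged
    T[1:-1] = 0.5 * (T[:-2] + T[2:])  # each point -> average of its neighbours

# Thermal diffusion time d**2/kappa for d = 4 km of rock with a typical
# diffusivity of 1.2e-6 m**2/s, converted to years:
tau_years = 4000.0**2 / 1.2e-6 / 3.156e7
```

The relaxed profile matches the analytic linear gradient, and `tau_years` comes out around $4\times10^{5}$ years — the "half-million years" order of magnitude.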
The real profile, of course, also depends on the details of the subsurface geology and history.
Different rocks have different thermal conductivities, so even if the surface is flat, changes in rock types along a transit will change the underground temperature profiles. An example of this is the temperature dip at kilometre 135 in the Gotthard Base Tunnel that is due to porous rocks allowing cold surface water to penetrate deep underground.
The time it takes for heat to diffuse into topography can be comparable to the time it takes for the topography to form, so the geological history matters as well. For example, a typical rock thermal diffusivity is $\kappa~\sim 1.2\times 10^{-6}\,\mathrm{m^2/s}$, so the thermal diffusion time ($d^2/\kappa$) into a 4 km mountain is about a half-million years. According to "Temperature Predictions and Predictive Temperatures in Deep Tunnels", the time of topography evolution in the Alps is about two million years. (This seems reasonable given their current uplift and erosion rates are on the order of millimetres per year.) | {
"domain": "earthscience.stackexchange",
"id": 2769,
"tags": "mountains, geothermal-heat"
} |
Miller Indices: How to deal with some weird(-ish) cases? | Question: I've recently been doing a spot of self-learning (Crystallography), and some of the examples provided in the Wikipedia article for Miller Indices have stumped me.
I can't for the world figure out how they were obtained!
Quick introduction to Miller Indices (the basic stuff):
Including this here, because it was what I was referring to when learning about this stuff
Miller Indices (hkl)
The orientation of a surface or a crystal plane may be defined by considering how the plane (or indeed any parallel plane) intersects the main crystallographic axes of the solid. The application of a set of rules leads to the assignment of the Miller Indices , (hkl) ; a set of numbers which quantify the intercepts and thus may be used to uniquely identify the plane or surface.
The following treatment of the procedure used to assign the Miller Indices is a simplified one (it may be best if you simply regard it as a "recipe") and only a cubic crystal system (one having a cubic unit cell with dimensions $a$ x $a$ x $a$ ) will be considered.
The procedure is most easily illustrated using an example so we will first consider the following surface/plane:
Step 1 : Identify the intercepts on the x- , y- and z- axes.
In this case the intercept on the x-axis is at x = a ( at the point (a,0,0) ), but the surface is parallel to the y- and z-axes - strictly therefore there is no intercept on these two axes but we shall consider the intercept to be at infinity ( ∞ ) for the special case where the plane is parallel to an axis. The intercepts on the x- , y- and z-axes are thus: $a$ , $∞$ , $∞$ (in that order).
Step 2 : Specify the intercepts in fractional co-ordinates
Co-ordinates are converted to fractional co-ordinates by dividing by the respective cell-dimension - for example, a point (x,y,z) in a unit cell of dimensions $a$ x $b$ x $c$ has fractional co-ordinates of ( x/a , y/b , z/c ). In the case of a cubic unit cell each co-ordinate will simply be divided by the cubic cell constant , $a$.
This gives: $a/a$, $∞/a$, $∞/a$; i.e. $1$, $∞$, $∞$.
Step 3 : Take the reciprocals of the fractional intercepts
This final manipulation generates the Miller Indices which (by convention) should then be specified without being separated by any commas or other symbols. The Miller Indices are also enclosed within standard brackets (…) when one is specifying a unique surface such as that being considered here.
The reciprocals of 1 and ∞ are 1 and 0 respectively, thus yielding the Miller Indices ($100$)
So the surface/plane illustrated is the ($100$) plane of the cubic crystal.
Further on, it is mentioned:
If any of the intercepts are at negative values on the axes then the negative sign will carry through into the Miller indices; in such cases the negative sign is actually denoted by overstriking (i.e. adding a little bar/dash over) the relevant number.
Now I was going to through the Wiki article on Miller Indices (linked earlier on in this post), and the following examples were provided:
The corresponding Miller Indices have been provided in the boxes right under each entry. Also, the little "T"s are actually "1"s with a bar on top... kinda small, so it's easy to see it wrong.
Now the first six entries are pretty straightforward (I count left-right), but it's the others that are confusing.
Take for example, the seventh entry:
Okay, the third axis ($a_3$) lies along the orange-colored plane, so the plane has no intercept with the $a_3$ axis (i.e. the intercept here is taken to be $∞$, as the orange plane and the axis in question are parallel), which would naturally result in the Miller index $0$ (third digit), the reciprocal of $∞$.
Alright, no issues there.
But what's the deal with the intercepts on the first two axes ( $a_1$ and $a_2$ )?
The "intercept at infinity" logic doesn't work here (since the orange plane isn't parallel to $a_1$ or $a_2$).
According to the (box under the) entry, the orange plane makes an intercept of $1$ (second digit) with the $a_2$ axis. Moreover, it also (apparently) makes an intercept of $-1$ (first digit) with the $a_1$ axis.
This is incredible...I am totally lost.
The eighth and ninth entries, too, are made in the same vein (well, obviously... because the seventh, eighth and ninth entries are essentially the same thing)
My question:
How exactly, are the Miller Indices for the seventh entry (the one with the orange plane), obtained?
I've just asked for the rationale behind the Miller Indices of the seventh entry because I can easily work out the Indices for the eighth and ninth entries after that.
Answer: IFF I remember correctly...
Sometimes you have to "move" the plane, in order to see where the axis crosses.
Take example 7: if you look at the example plane, the one actually shown, it is (0 ∞ ∞), not very useful. The ones making the examples have "moved the planes" (not really, but it is a useful fiction) in order to get the plane into the unit cell so they can show it. If they hadn't moved it, it wouldn't be visible.
The plane (-110) crosses the $a_2$ axis at $a_2 = 1$ and you see, using your spatial imagination, that it then needs to cross $a_1$ at $-1$ as well. It will never cross the $a_3$ axis, so $0$. The 8th and 9th are much harder to visualize but it is basically the same thing over again. The planes won't fit into the unit cell unless you "move" them.
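For what it's worth, the whole intercepts → reciprocals → smallest-integers recipe is easy to automate. Below is a small Python sketch (the function name and interface are my own, not from any of the sources above) that reproduces both the ($100$) worked example and the orange-plane case:

```python
from fractions import Fraction
from functools import reduce
from math import gcd, inf

def miller_indices(intercepts):
    """Intercepts in fractional coordinates; use math.inf where the plane
    is parallel to an axis (so its reciprocal becomes 0)."""
    recips = [Fraction(0) if x == inf else 1 / Fraction(x).limit_denominator()
              for x in intercepts]
    # Clear denominators to integers, then divide out any common factor.
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (r.denominator for r in recips), 1)
    hkl = [int(r * lcm) for r in recips]
    common = reduce(gcd, (abs(h) for h in hkl)) or 1
    return tuple(h // common for h in hkl)

print(miller_indices([1, inf, inf]))  # (1, 0, 0)  -- the worked example above
print(miller_indices([-1, 1, inf]))   # (-1, 1, 0) -- the "orange plane", i.e. (-110)
```

Using `math.inf` for a parallel axis makes the "reciprocal of infinity is zero" convention explicit; a negative intercept carries straight through as a negative (barred) index.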
Disclaimer: the (hkl) represents the set of all planes of the same err... "orientation", so it is technically not moving anything at all, it is merely choosing a representation of that said plane. If two planes in a certain crystal are equivalent, they can be represented with curly brackets. The sets (100),(010) and (001) are equivalent in a cubic crystal and can be represented collectively as {100}. Source: Basic Solid State Chemistry, A.R West 2nd ed. (1999) | {
"domain": "chemistry.stackexchange",
"id": 17627,
"tags": "physical-chemistry, crystal-structure, crystallography"
} |
Finding the number of pairs in an integer Array | Question: This function returns the number of pairs of the same elements in the array
It takes the array length and the array as parameters
n: Length of array
ar: Array of integers for example
4 6 7 8 7 6 6 7 6 4
returns 4
fun getNumberOfPairs(n: Int, ar: Array<Int>): Int {
val enteries = HashSet<Int>()
var pairs = 0
for (i in 0 until n) {
if (!enteries.contains(ar[i])) {
enteries.add(ar[i])
} else {
pairs++
enteries.remove(ar[i])
}
}
println(pairs)
return pairs
}
How can we write this code in a better way for readability/performance?
Answer: Overall it's very readable and fast already. Good job.
I have some suggestions for possible improvements:
As n must be equal to ar.size, you could drop that parameter from the method and use ar.size in place of n within the method body.
This method is a pure function except for the side effect of printing the result. Being a "pure function" is often a good thing, so you can move the printing of the result outside the method. Printing something is also quite time-consuming.
Your method could easily support more than Int, it doesn't have to be restricted by a specific type. You could check for duplicates of any type so we can make this method generic.
As you are iterating over elements you could use for (e in ar) instead of iterating over the indexes with for (i in 0 until n). This would make it more efficient for data structures that don't have a \$O(1)\$ lookup-time, for example LinkedList.
The method HashSet.add returns false if the value already exists, so you don't need the call to .contains.
There is a typo in the name enteries, it should be called entries.
ar could be called input to make it more readable.
After applying all of the above, this is what you would end up with:
fun <T> getNumberOfPairs(input: Array<T>): Int {
val entries = HashSet<T>()
var pairs = 0
for (e in input) {
if (!entries.add(e)) {
pairs++
entries.remove(e)
}
}
return pairs
} | {
"domain": "codereview.stackexchange",
"id": 34761,
"tags": "array, kotlin"
} |
What is difference between a Data Scientist and a Data Analyst? | Question: https://www.datacamp.com/community/tutorials/learn-data-science-infographic
https://www.datacamp.com/community/blog/data-engineering-vs-data-science-infographic
These links contain almost everything but not the difference between data science and data analytics.
Is data analytics a part of the data science workflow? Is data analytics a subset of data science?
Answer: Please visit this and read the difference between Data analyst and Data scientist.
You will find the above link very interesting, as it should fulfill your need; you will also find the words of a data scientist at LinkedIn.
Important lines from the above link:
Data analysts are masters in SQL and use regular expression to slice and dice the data. With some level of scientific curiosity data analysts can tell a story from data. A data scientist on the other hand possess all the skills of a data analysts with strong foundation in modelling, analytics, math, statistics and computer science. What differentiates a data scientist from a data analyst is the strong acumen along with the ability to communicate the findings in the form of a story to both IT leaders and business stakeholders in such a way that it can influence the manner in which a company approaches a business challenge. | {
"domain": "datascience.stackexchange",
"id": 11020,
"tags": "data"
} |
Extracting Public Page Posts from Facebook | Question: I am a data science student working on my capstone research project. I am trying to collect posts from a number of public organization pages on Facebook. I am looking at the Graph API and it does not appear to have an endpoint for this use case.
The page/feed endpoint requires either moderation access to the pages (which I do not have) or public page read access, which requires a business application.
The CrowdTangle process is not accepting researchers outside of very specific subjects at this time.
Is there a process to get these posts that does not involve scraping page data or having people manually record each post? I do not wish to break any terms of service.
Answer: As of today (2/24/2022), Facebook has two ways to get data: CrowdTangle and the Graph API.
CrowdTangle is made for researchers and is not taking new people
The Graph API has no avenue for research data.
There is no official solution to access public page posts on Facebook, as per a Facebook App Review staff member whom I got in touch with.
"domain": "datascience.stackexchange",
"id": 10616,
"tags": "data, web-scraping, api"
} |
Image transport to android consumes high memory | Question:
Hello
I tried to run simple image_transport tutorial on my android 4.0 phone to display image captured by camera. It connects with the master, it displays the image, but the frame rate is extremely low and eventually device hangs itself. It seems that the app is using a lot of memory and therefore GC_FOR_ALLOC pauses the execution of the program. I suppose that each image received is stored instead of being removed after the frame is used.
Can anybody help me to fix this problem?
Originally posted by grzebyk on ROS Answers with karma: 141 on 2013-06-14
Post score: 0
Answer:
The solution is here:
http://code.google.com/p/rosjava/issues/detail?id=141
It is needed to make RosImageView a SurfaceView instead of an ImageView.
Originally posted by grzebyk with karma: 141 on 2013-06-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14571,
"tags": "rosjava, android"
} |
How do I interpret the chemical potential for hydrogen atoms in a neutral gas of hydrogen, protons and electrons? | Question: I am currently self-studying the lecture notes of David Tong on Statistical Physics. In question 6 of problem sheet 2 (https://www.damtp.cam.ac.uk/user/tong/statphys/omg2.pdf) we are asked to find a formula for the proportion of particles in the gas given an equilibrium condition $\mu_H = \mu_e + \mu_p$. Now, I have got the required answer by treating the hydrogen gas in the grand canonical ensemble and setting the chemical potential equal to $\mu = \mu_H - \Delta$ (with $-\Delta$ being the binding energy of hydrogen). What I don't understand is why the $-\Delta$ must be added to the chemical potential. I understand that an extra hydrogen atom will mean the gas loses energy $\Delta$ since an electron and proton must bind. However, my understanding is that this would already be taken into account in $\mu_H$ without the need to subtract $\Delta$. So my question is: What is the interpretation of $\mu_H$ in this case if it is not accounting for the change in potential $\Delta$?
Answer: The chemical potential is defined as $\mu=u-Ts+Pv$, where $u$, $s$, and $v$ are the internal energy, entropy, and volume per particle. Now, for the calculation of $\mu_e$ and $\mu_p$, you used the formula for an ideal gas, $u_{ideal}-Ts_{ideal}+Pv$. When you use the same formula for hydrogen atoms, you must note that the internal energy of each hydrogen atom is $\Delta$ lower than that of the ideal gas (which is just the kinetic energy), i.e. $H_H=\frac{p_H^2}{2m_H}-\Delta$, hence $u_H=u_{ideal}-\Delta$. The calculation of the entropy will not change in any way, so the chemical potential of hydrogen is $u_{ideal}-\Delta-Ts+Pv$. The quantity $u_{ideal}-Ts+Pv$ is what you would have calculated using the grand canonical ensemble, and so you have $\mu=\mu_H-\Delta$. Hope this helps
"domain": "physics.stackexchange",
"id": 96061,
"tags": "homework-and-exercises, statistical-mechanics, chemical-potential"
} |
Confusion in Maxwell's derivation of Ampere's Force Law - Part III | Question: Note: This is not a duplicate of my previous questions with same title asked last year.
I am reading Maxwell's a treatise on electricity and magnetism, Volume 2, page 156 about "Ampere's Force Law".
Edit:
I apologise for not being able to write my whole analysis of the Treatise. It is because it is too long. However I would write down the important equations within the question
\begin{equation}
\begin{matrix}
P'Q'=ds',\\ PQ=ds\\
r=\xi\hat{i}+\eta\hat{j}+\zeta\hat{k}\\=(x-x')\hat{i}+(y-y')\hat{j}+(z-z')\hat{k}\\ \\
\end{matrix}
\end{equation}
\begin{align}
&\dfrac{d\xi}{ds}=\dfrac{dx}{ds}=l,\\
& \dfrac{d\eta}{ds}=\dfrac{dy}{ds}=m,\\
& \dfrac{d\zeta}{ds}=\dfrac{dz}{ds}=n,\\
& -\dfrac{d\xi}{ds'}=\dfrac{dx'}{ds'}=l',\\
& -\dfrac{d\eta}{ds'}=\dfrac{dy'}{ds'}=m',\\
& -\dfrac{d\zeta}{ds'}=\dfrac{dz'}{ds'}=n',\\
\end{align}
$P, B$ and $C$ are unknown functions of $r$.
$ Q=-\int_{0}^{r} C dr$ is also a function of $r$.
By making use of the definition of $l'$ equation (19) in book becomes:
\begin{align}
\dfrac{dX}{ds}
&= \biggl[P \xi^{2}-Q \biggl]_{0}^{s^{\prime}}-\int_{0}^{s^{\prime}} (2Pr-B-C)\frac{l' \xi}{r}ds'\\
&=\biggl[P \xi^{2}-Q \biggl]_{0}^{s^{\prime}}+\int_{0}^{s^{\prime}} (2Pr-B-C)\frac{ \xi}{r}d\xi'\\
\end{align}
Now since $P,\xi$ and $Q$ are functions of $r$, they disappear after closed integration around $s'$ and the first term is zero. However if we look carefully, in the second term, the integrand and the variable are all functions of $r$. Therefore this term should also disappear.
But it's written, below equation (19), that "the second term will not in general disappear". Why is this so, even though the second term is a function of $r$?
Answer: It is true that the integrand is function of $r$, but the variable is an infinitesimal length on the $x$ axis. So it need not necessarily be a function of $r$. Also, the path of integration is not along $r$. It is along the curve $s'$. Therefore the variable is $s'$ and not $r$. Therefore $\dfrac{dX}{ds}$ remains a function of $s'$.
In your $\dfrac{dX}{ds}$ equation, integration w.r.t $d\xi'$ means integration along projection of path of circuit $s'$ on $x$-axis and not along projection of $\vec{r}$ on $x$ axis.
Therefore it is better to write integration w.r.t $dx'$ than $d\xi'$ | {
"domain": "physics.stackexchange",
"id": 38228,
"tags": "electromagnetism, forces, electric-current, classical-electrodynamics"
} |
Given a known total stopping distance, how can I calculate the initial speed? | Question: This would be my first question on Physics.stackexchange. As I looked closely if I would not double my question, I try dare to ask the question.
I am doing some calculations together with my kids. They had two physics questions on school which I find particularly interesting.
First question:
Calculate the total stopping distance for a car knowing the initial speed v in km/h, the friction $\mu$ and the thinking time t of 1 second.
I solved this one pretty easy using the formula:
$$
d = (v * t) + \dfrac{v^{2}}{2*\mu*g}
$$
So given the example that the car is travelling with a speed of 144 km/h, assuming g of 9.81 and a friction of 0.3 this will yield:
$$
d = ( 40 * 1) + \dfrac{1600}{2*0.3*9.81} => 311.83146449201496
$$
The second question, however, is where I made a mistake in my math. This question is: given a known stopping distance d, a friction $\mu$, and still assuming a thinking time of 1 second, what is the initial speed in km/h?
By putting this initial speed and $\mu$ into the first equation of the first question, the same total stopping distance should be proven.
What would be the formula for that last question?
I am assuming that using:
$$
v = \sqrt{2 * \mu\ * g\ * d }
$$
but that does not include the thinking time, right? Hence my outcome is not correct.
Answer: Well, assuming your first formula is correct, you should be able to use that. We already know the relationship $$d = v t + \frac{v^2}{2 \mu g}$$ and if we rearrange slightly we get $$ \frac{v^2}{2 \mu g} + v t - d = 0$$ If you look at that equation you may notice that it is just a quadratic equation in the variable $v$, meaning it can be solved using the quadratic formula $$x=\frac{-b \pm \sqrt{ b^2 - 4ac}}{2a}$$ where the equation has the format $$0 = ax^2 + bx + c$$ (which is the format I rearranged the equation to).
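As a quick numerical check of this inversion, here is a small Python sketch (the function names are mine); it keeps the '+' root of the quadratic, since the other root is negative:

```python
import math

def stopping_distance(v, mu, g=9.81, t=1.0):
    """Thinking distance plus braking distance, with v in m/s."""
    return v * t + v**2 / (2 * mu * g)

def initial_speed(d, mu, g=9.81, t=1.0):
    """Positive root of v^2/(2*mu*g) + t*v - d = 0 via the quadratic formula."""
    a = 1.0 / (2 * mu * g)
    return (-t + math.sqrt(t**2 + 4 * a * d)) / (2 * a)

d = stopping_distance(40.0, 0.3)  # 40 m/s = 144 km/h
print(d)                          # ≈ 311.83146449201496, the value from the question
print(initial_speed(d, 0.3))      # recovers ≈ 40.0 m/s
```

Round-tripping through both functions recovers the original 40 m/s (144 km/h), confirming that the thinking time is now accounted for.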
This will give you 2 answers due to the $\pm$ in the equation; but one should be physically insignificant (like a negative velocity). I'll leave you to work out the actual numbers. | {
"domain": "physics.stackexchange",
"id": 37559,
"tags": "homework-and-exercises, newtonian-mechanics, friction, speed, distance"
} |
Tic Tac Toe in object oriented JavaScript | Question: I've been working on a course assignment for 'The Odin Project' and was hoping to get a review of my code.
As per the help guide, the code is fully functional.
The criteria for this were pretty straightforward: try to have as little global code as possible. (At this point I've learned about classes and modules.)
I feel that I'm breaking some rules when it comes to clean code.
Live example:
https://ablueblaze.github.io/TicTacToe/
Repository:
https://github.com/ablueblaze/TicTacToe
JavaScript:
document.addEventListener("DOMContentLoaded", () => {
// Event listener for all the cells
document.querySelectorAll(".cell").forEach((cell) =>
cell.addEventListener("click", (e) => {
gameControls.makePlay(e, player1, player2, testBoard);
})
);
// Event listener for when the modal is active
document
.querySelector("[data-modal]")
.addEventListener("click", displayControls.toggleModal);
// Event listener for the new game button
document.getElementById("new-game").addEventListener("click", () => {
gameControls.newGameBtn(testBoard);
});
// Event listener for the clear scores button
document.querySelector("#clear-scores").addEventListener("click", () => {
gameControls.clearAllScoresBtn(player1, player2);
});
});
class GameBoard {
constructor() {
this.controlBoard = [1, 2, 3, 4, 5, 6, 7, 8, 9];
this.displayBoard = ["", "", "", "", "", "", "", "", ""];
}
isFull() {
let count = 0;
for (let i = 1; i < 10; i++) {
if (this.controlBoard.includes(i)) {
count++;
}
}
if (count === 0) {
return true;
}
}
markBoard(play, playerMark) {
if (this.controlBoard.includes(play)) {
let index = this.controlBoard.findIndex((e) => e === play);
this.controlBoard[index] = 0;
this.displayBoard[index] = playerMark;
return true;
}
}
boardReset() {
this.controlBoard = [1, 2, 3, 4, 5, 6, 7, 8, 9];
this.displayBoard = ["", "", "", "", "", "", "", "", ""];
}
}
// A player class that will hold the players marker, and all the plays that they have made
class Player {
constructor(marker, plays = [], score = 0) {
this.marker = marker;
this.plays = plays;
this.score = score;
}
addPlay(play) {
this.plays.push(play);
}
clearPlays() {
this.plays = [];
}
clearScore() {
this.score = 0;
}
scoreUp() {
this.score++;
}
// returns true if player has a winning hand
didIWin() {
const winingHands = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[1, 4, 7],
[2, 5, 8],
[3, 6, 9],
[1, 5, 9],
[3, 5, 7],
];
let count = 0;
for (let i of winingHands) {
for (let n of i) {
if (this.plays.includes(n)) {
count++;
}
}
if (count === 3) {
return true;
}
count = 0;
}
return false;
}
}
const testBoard = new GameBoard();
let currentBoard = testBoard.displayBoard;
const player1 = new Player("X");
const player2 = new Player("O");
const displayControls = (() => {
// Updates the pages game board
function boardUpdate(displayBoard) {
for (let i = 0; i < displayBoard.length; i++) {
let activeCell = document.querySelector(`[data-cell-value="${i + 1}"]`);
activeCell.textContent = displayBoard[i];
}
}
// Updates the pages score board
function scoreUpdate(player1Score, player2Score) {
const pXScore = document.querySelector("#player-one-score");
const pOScore = document.querySelector("#player-two-score");
pXScore.textContent = player1Score;
pOScore.textContent = player2Score;
}
// Update the current player banner, by flipping the return from current player
function showCurrentPlayer(currentPLayerMarker) {
const playerSpan = document.querySelector("#current-player");
if (currentPLayerMarker === "X") {
playerSpan.textContent = "O";
return;
}
playerSpan.textContent = "X";
}
// Set the Winner text in modal
function setWinner(winningPlayerMarker) {
const winner = document.querySelector(".winner");
winner.textContent = `Winner: ${winningPlayerMarker}`;
}
// Set the Winner text in modal to Tie
function showTie() {
const winner = document.querySelector(".winner");
winner.textContent = "It's a Draw!";
}
function toggleModal() {
const modal = document.querySelector("[data-modal]");
if (modal.className === "modal active") {
modal.classList.remove("active");
return;
}
modal.classList.add("active");
}
return {
boardUpdate,
scoreUpdate,
showCurrentPlayer,
setWinner,
showTie,
toggleModal,
};
})();
const gameControls = (() => {
// Toggle between the given players
let playCounter = 1;
function togglePlayer(player1, player2) {
playCounter++;
if (playCounter % 2 == 0) {
return player1;
} else {
return player2;
}
}
function resetCounter() {
playCounter = 1;
}
function newGameBtn(activeBoard) {
resetCounter();
activeBoard.boardReset();
displayControls.boardUpdate(activeBoard.displayBoard);
displayControls.showCurrentPlayer("O");
}
function clearAllScoresBtn(player1, player2) {
player1.clearScore();
player2.clearScore();
displayControls.scoreUpdate(player1.score, player2.score);
}
function clearPlays(player1, player2, activeBoard) {
player1.clearPlays();
player2.clearPlays();
activeBoard.boardReset();
}
// Runs after a game is finished
function nextGame(player1, player2, activeBoard) {
clearPlays(player1, player2, activeBoard);
displayControls.boardUpdate(activeBoard.displayBoard);
displayControls.toggleModal();
}
// Check to see if a game is done
function endGame(currentPlayer, player1, player2, activeBoard) {
if (currentPlayer.didIWin()) {
currentPlayer.scoreUp();
displayControls.setWinner(currentPlayer.marker);
displayControls.scoreUpdate(player1.score, player2.score);
nextGame(player1, player2, activeBoard);
} else if (activeBoard.isFull()) {
displayControls.showTie();
clearPlays(player1, player2, activeBoard);
nextGame(player1, player2, activeBoard);
}
}
function makePlay(event, player1, player2, activeBoard) {
let currentPlayer = togglePlayer(player1, player2);
let cellValue = parseFloat(event.target.dataset.cellValue);
activeBoard.markBoard(cellValue, currentPlayer.marker);
currentPlayer.addPlay(cellValue);
displayControls.boardUpdate(activeBoard.displayBoard);
displayControls.showCurrentPlayer(currentPlayer.marker);
endGame(currentPlayer, player1, player2, activeBoard);
}
return { makePlay, clearPlays, newGameBtn, clearAllScoresBtn };
})();
Thanks A bunch!
Answer: Welcome to Code Review, your game looks pretty nice!
There is quite a lot of code to go through but here are some initial observations.
Constructor
You are using exactly the same code in the GameBoard constructor and in boardReset. You should rather simply call boardReset from the constructor:
constructor() {
this.boardReset();
}
isFull
Note that includes does not work in all browsers, not that anybody should be using IE9 anymore =).
This can be simplified by checking if there are any empty spaces left:
isFull() {
return this.displayBoard.indexOf("") == -1;
}
markBoard
Here you first use contains to check if the item is in the list and then indexOf to get the index. You could have simplified this like so:
markBoard(play, playerMark) {
let index = this.controlBoard.findIndex((e) => e === play);
if (index != -1) {
this.controlBoard[index] = 0;
this.displayBoard[index] = playerMark;
return true;
}
} | {
"domain": "codereview.stackexchange",
"id": 42510,
"tags": "javascript, beginner, ecmascript-6, tic-tac-toe"
} |
Why can an event on the upper sheet of the timelike hyperboloid never be transformed onto the lower one? | Question: Currently reading about special relativity in Griffiths E&M, more specifically the invariant interval between two events. Suppose just two spatial dimensions $x$ and $y$ and a temporal dimension $t$. If one event is set to the origin, then the locus of all points in the Minkowski diagram with the same timelike interval $I \equiv -c^2 t^2 + x^2 + y^2 < 0$ will be a hyperboloid with an "upper sheet" and a "lower sheet".
Now the author claims that any point on the upper sheet can be carried to any other point on the upper sheet by Lorentz transformation, since $I$ is invariant under Lorentz transformations. However, he claims that no point on the upper sheet can ever to Lorentz transformed onto the lower sheet, thereby saving the principle of causality (since all observers will have to agree on the order between two events, when the interval is timelike and they are able to interact). Is there any way to prove this inability to bring a point on the upper sheet to the lower sheet, which is so important to causality? It certainly does not violate the invariance of $I$.
Answer: First, the upper hyperboloid actually can be transformed into the lower one and vice-versa: by the time reversal transform $t\to-t$, which is also considered a Lorentz transform in the widest sense (i.e. one that leaves the spacetime metric invariant). However, a Lorentz transform that relates two inertial systems moving with a relative speed of $v$ is a member of the simply connected subgroup that also contains the identity transform, called the proper orthochronous Lorentz group. See Lorentz group. This component does not contain the time reversal transform (nor the space inversion, but that is another story), and hence is not able to change causality (what came before/after what).
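This is also easy to check numerically in 2+1 dimensions (a sketch with $c=1$; the construction below is mine, not from the question): for a point on the upper sheet $|x| < t$, so a boost $t' = \gamma(t - \beta x)$ with $|\beta| < 1$ can never flip the sign of $t$.

```python
import numpy as np

def boost_x(beta):
    """Proper orthochronous boost along x in (t, x, y) coordinates, c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma, -gamma * beta, 0.0],
                     [-gamma * beta, gamma, 0.0],
                     [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = rng.uniform(-1.0, 1.0, size=2)
    t = np.hypot(x, y) + rng.uniform(0.01, 2.0)   # upper sheet: t > sqrt(x^2 + y^2)
    event = boost_x(rng.uniform(-0.999, 0.999)) @ np.array([t, x, y])
    assert event[0] > 0                            # never lands on the lower sheet
print("all boosted events stayed on the upper sheet")
```

The boosts also preserve the interval $I$, so every boosted event stays on the same hyperboloid; only the discrete transform $t\to-t$ connects the two sheets.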
In colloquial terms, a Lorentz transform that you can achieve by burning a rocket from rest (i.e. "connected to identity") will not allow time reversal. Those transforms (called boosts as opposed to mere spatial rotations) can be identified with points on the upper hyperboloid (i.e. final speeds). So, the inability to reverse causality is more or less just expressed by the fact that both hyperboloids cannot be connected by a continuous curve (that starts from the identity, i.e. v=0 for example). | {
"domain": "physics.stackexchange",
"id": 85479,
"tags": "electromagnetism, special-relativity"
} |
shared_ptr and make_shared implementations (for learning) | Question: Recently, I've been going through Scott Meyer's Effective Modern C++ and found the discussion on shared_ptr really interesting. I also watched Louis Brandy's Curiously Recurring C++ Bugs at Facebook talk which also had some details about how shared_ptr works, and I thought it would be fun to implement my own to see if I actually understood it.
Here is my implementation of shared_ptr and make_shared: https://godbolt.org/z/8Yec9K
#include <iostream>
#include <functional>
#include <vector>
template <typename T>
class SharedPtr {
public:
using DeleteFunctionType = std::function<void(T*)>;
explicit SharedPtr():
val_(nullptr),
ctrlBlock_(nullptr)
{
}
explicit SharedPtr(std::nullptr_t,
DeleteFunctionType = [](T* val) {
delete val;
}):
val_(nullptr),
ctrlBlock_(nullptr)
{
}
explicit SharedPtr(T* val,
DeleteFunctionType deleteFunction = [](T* val) {
delete val;
}):
val_(val),
ctrlBlock_(new ControlBlock(1, std::move(deleteFunction)))
{
}
~SharedPtr() {
if (val_ == nullptr) {
return;
}
if (--ctrlBlock_->refCount <= 0) {
ctrlBlock_->deleteFunction(val_);
delete ctrlBlock_;
val_ = nullptr;
ctrlBlock_ = nullptr;
}
}
SharedPtr(const SharedPtr& rhs):
val_(rhs.val_),
ctrlBlock_(rhs.ctrlBlock_)
{
if (ctrlBlock_ != nullptr) {
++ctrlBlock_->refCount;
}
}
SharedPtr& operator=(SharedPtr rhs) {
swap(rhs);
return *this;
}
void swap(SharedPtr& rhs) {
using std::swap;
swap(val_, rhs.val_);
swap(ctrlBlock_, rhs.ctrlBlock_);
}
bool operator==(const SharedPtr& rhs) const {
return val_ == rhs.val_ && ctrlBlock_ == rhs.ctrlBlock_;
}
T* get() const {
return val_;
}
T& operator*() const {
return *val_;
}
T* operator->() const {
return val_;
}
friend void swap(SharedPtr& lhs, SharedPtr& rhs) {
lhs.swap(rhs);
}
operator bool() const {
return val_ != nullptr;
}
private:
struct ControlBlock {
ControlBlock(int cnt, DeleteFunctionType fnc):
refCount(cnt),
deleteFunction(fnc)
{
}
int refCount;
DeleteFunctionType deleteFunction;
};
T* val_;
ControlBlock* ctrlBlock_;
};
template <typename T, typename... Args>
SharedPtr<T> MakeShared(Args&&... args) {
return SharedPtr<T>(new T(std::forward<Args>(args)...));
}
struct Foo {
Foo(int a, int b) : a_(a), b_(b) {}
int a_;
int b_;
void sayHello() {
std::cout << "Hello from " << *this << "\n";
}
friend std::ostream& operator<<(std::ostream& os, const Foo& rhs) {
os << "Foo(" << rhs.a_ << ", " << rhs.b_ << ")";
return os;
}
};
int main() {
{
// Basic usage
SharedPtr<Foo> c; // Default constructor
SharedPtr<Foo> a(new Foo(1,2)); // Constructor with value
auto b = a; // Copy constructor
c = b; // Assignment operator
}
{
// using custom delete
constexpr int arrSize = 10;
SharedPtr<int> a(new int[arrSize], [](auto p) {
delete[] p;
}); // custom deleter
auto b = a; // copy constructor -- make sure the custom deleter is propogated
SharedPtr<int> c;
c = a; // copy assignment
}
{
// nullptr
SharedPtr<Foo> a(nullptr);
auto b = a;
SharedPtr<Foo> c;
c = a;
}
{
// Make shared -- basic usage
SharedPtr<Foo> c; // Default constructor
auto a = MakeShared<Foo>(2, 3);
auto b = a; // Copy constructor
c = b; // Assignment operator
std::cout << "*c = " << *c << "\n";
c->sayHello();
}
{
std::vector<SharedPtr<std::vector<int>>> v;
v.emplace_back();
v.emplace_back();
v.emplace_back();
v.emplace_back();
v.emplace_back();
std::cout << v.size() << "\n";
}
}
Any and all review comments would be greatly appreciated. In particular, I had a few questions when comparing my implementation against the STL's:
I noticed that in the STL implementation I looked at, the class and constructor are templated on different types (i.e. the constructor is implemented like: template <typename T> class shared_ptr { public: template <typename U> explicit shared_ptr(U* val); };) I was wondering why the both the class and the constructor need to be templated?
The following compiles: std::shared_ptr<int[]> a(new int[10]);, but the similar idea doesn't compile with my implementation. How can I fix this?
Is there a downside to my using an std::function to store the custom deleter? I noticed that the STL implementation I looked at doesn't do this, but the type erasure that std::function provides seems to fit in really well with how the custom deleter is supposed to work.
I know that std::make_shared is supposed to use only one allocation to allocate both the control block and the T object. I couldn't figure out how to do it in an easy way though. Is there an easy way to implement that with what I have now?
I'm sure there's a lot of other bugs and mistakes with my code, and I would greatly appreciate any and all feedback. Thanks for your time!
Answer: General stuff
The implementation as-is does not attempt to be thread-safe in any way. I guess this is intentional, but might cause problem and/or confusion if not communicated well.
There is no std::weak_ptr-like analogue. Might be a new challenge to implement later ;)
Move constructor (and assignment operator) would be nice for SharedPtr.
I was first confused why the SharedPtr(nullptr_t, DeleteFunctionType) did not store the deleter function, but then I realized that there is no reset-like functionality.
Many member functions of SharedPtr could be marked noexcept.
Other than that, I can't spot anything obvious to improve. Well done!
Q & A
For one, the standard mandates it. And the standard mandates it, because it allows some metaprogramming tricks, in this case removing the constructor from overload resolution in case U* is not convertible to T*. (In general, this allows for better error messages, and might enable another overload to be a better fit in case there are equally fit overloads).
Add a specialization for array types: template<typename T> class SharedPtr<T[]> { ... };, with the corresponding array versions of the member functions.
Yes, it might cause a third allocation (e.g. if the deleter is a lambda with non-empty capture clause). The std::shared_ptr avoids this by using type erasure on the control block.
Well, not easy per se. It would require some design changes, mainly on the control block, and letting it actually control the lifetime of the T (currently, it only controls the deleter, not the T object itself).
Basically, define a ControlBlockBase<T> class, which provides the control block interface. Then create a derived ControlBlockImmediate<T> which has an additional T member and necessary member functions. (This can also be used to fix the third possible allocation by std::function). Example:
template<typename T>
class ControlBlockBase
{
public:
virtual ~ControlBlockBase() = default;
virtual T* get() const = 0;
void addReference() { ++refCount_; }
void removeReference() { --refCount_; }
size_t countReferences() { return refCount_; }
private:
size_t refCount_ = 0u;
};
template<typename T>
class ImmediateControlBlock : public ControlBlockBase<T>
{
public:
template<typename... Args>
ImmediateControlBlock(Args&&... args)
: immediate_(std::forward<Args>(args)...)
{
this->addReference(); // 'this->' required here: the base class is a dependent type
}
T* get() const override { return &immediate_; }
private:
mutable T immediate_; // mutable so the const get() can hand back a non-const T*
};
template<typename T, typename Deleter>
class ControlBlock : public ControlBlockBase<T>
{
public:
ControlBlock(T* ptr, Deleter deleter) : ptr_{ptr}, deleter_{deleter}
{
if (ptr_) this->addReference();
}
~ControlBlock()
{
assert(this->countReferences() == 0u); // needs <cassert>
if(ptr_) deleter_(ptr_);
}
T* get() const override { return ptr_; }
private:
T* ptr_;
Deleter deleter_;
};
Note: This could potentially be further optimized by using template metaprogramming (e.g. removing the need for virtual function calls), but should give an easy introduction to the concept. | {
"domain": "codereview.stackexchange",
"id": 40608,
"tags": "c++, c++11, c++17, pointers"
} |
Symmetries of double dual of Riemann curvature tensor | Question: The definition of the double dual of Riemann is as follows:
$$G^{\alpha\beta}{}{}_{\gamma\delta} = \frac{1}{2}\epsilon^{\alpha\beta\mu\nu}R^{\rho\sigma}{}{}_{\mu\nu}\frac{1}{2}\epsilon_{\rho\sigma\gamma\delta} $$
Now my question is how to prove that this tensor satisfies: $G^{\alpha}{}_{[\beta\gamma\delta]} = 0$
Answer: For simplicity, I use the Latin letters, and I rewrite the double dual of Riemann tensor as
$$
G_{abcd} = \frac{1}{4} \epsilon_{ab}{}^{ef} \, R_{efgh} \, \epsilon_{cd}{}^{gh} .
$$
Now, we want to see whether $G_{a [bcd]} = 0$. To do so, we calculate
$$
G_{abcd} + G_{acdb} + G_{adbc} = \frac{1}{4} \left( \epsilon_{ab}{}^{ef} \epsilon_{cd}{}^{gh} + \epsilon_{ac}{}^{ef} \epsilon_{db}{}^{gh} + \epsilon_{ad}{}^{ef} \epsilon_{bc}{}^{gh} \right)R_{efgh}
\,.
$$
Using the property of the Levi-Civita tensor; i.e.,
$$
\epsilon_{ab}{}^{ef} \epsilon_{cd}{}^{gh}
=
(-1)^q \delta^{efgh}_{abcd} \,,
$$
where $q$ is the number of negatives in the metric signature (in GR this is equal to one) and $\delta^{efgh}_{abcd} := 4! \delta^{efgh}_{[abcd]}$ is the generalised Kronecker delta, we obtain
\begin{equation}
G_{abcd} + G_{acdb} + G_{adbc} =
-\frac{1}{4} \left( \delta^{efgh}_{abcd} + \delta^{efgh}_{acdb} + \delta^{efgh}_{a dbc} \right) R_{efgh} . \tag{1}
\end{equation}
The expression in the bracket is a cyclic (symmetric) permutation within a totally antisymmetric object (the generalised Kronecker delta), i.e., $\delta^{efgh}_{[a(bcd)]}$. But this vanishes, as you may show by explicit calculation!
Alternatively:
One could use the definition of a totally antisymmetric tensor given by the generalised Kronecker delta. For a tensor $T_{abcd}$ one has
$$
T_{[abcd]} = \frac{1}{4!} \delta^{efgh}_{abcd} \, T_{efgh}.
$$
Then, the equation (1) becomes:
$$
G_{abcd} + G_{acdb} + G_{adbc}
=
- 3! \left( R_{[abcd]} + R_{[acdb]} + R_{[adbc]} \right).
$$
This obviously vanishes. To show the result one can expand the expressions and use the algebraic symmetries of Riemann tensor. | {
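For the reader who wants the explicit expansion: using the algebraic symmetries $R_{abcd}=-R_{bacd}=-R_{abdc}=R_{cdab}$, the 24 permutations in the total antisymmetrization fall into three classes of 8 equal terms, which gives the standard identity
$$
R_{[abcd]} = \tfrac{1}{3}\left( R_{abcd} + R_{acdb} + R_{adbc} \right),
$$
and the right-hand side vanishes by the first Bianchi identity $R_{a[bcd]}=0$.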
"domain": "physics.stackexchange",
"id": 67139,
"tags": "homework-and-exercises, general-relativity, symmetry, tensor-calculus, curvature"
} |
User registration API | Question: Would love some feedback on this simple API implementation with Ruby on Rails. Have I used proper naming conventions, readable/manageable code, optimal approaches, etc? As I'm still learning Ruby on rails, I'd love to hear what other fellow engineers have to say about this :)
A simple Register API:
require 'net/http'
require 'uri'
require 'json'
class V1::UsersController < ApplicationController
  protect_from_forgery with: :null_session

  def register
    name = params[:name]
    email = params[:email]
    password = params[:password]
    @existing_user = User.find_by(email: params[:email])
    if @existing_user == nil
      @user = User.new(
        name: name,
        email: email,
        password: password,
        plan: "FREE"
      )
      # save user
      if @user.save!
        # generate auth token
        @auth = save_new_auth @user.id
        render :json => {
          :user => @user.as_json(:except => [:created_at, :updated_at, :password, :stripe_id, :subscription_id, :id, :email]),
          :auth => @auth
        }
      else
        render json: {"error" => "Unprocessable Entity"}
      end
    else
      render json: { error: { type: "UNAUTHORIZED", message: "Looks like you already have an account. Please login " } }, status: 403
    end
  end
Login API:
def login
  email = params[:email]
  password = params[:password]
  @auth = nil
  @user = User.find_by(email: params[:email], password: password)
  if @user == nil
    render json: { error: { type: "UNAUTHORIZED", message: "Invalid Login Credentials " } }, status: 401
  else
    @auth = UserAuth.find_by(user_id: @user.id)
    if @auth == nil
      @auth = save_new_auth @user.id
    end
    render :json => {
      :user => @user.as_json(:except => [:created_at, :updated_at, :password, :stripe_id, :subscription_id, :id, :email]),
      :auth => @auth
    }
  end
end
Access token Generator:
def save_new_auth (user_id)
  @auth = UserAuth.new(
    access_token: generate_token,
    user_id: user_id,
    status: true
  )
  @auth.save
  return @auth
end

def generate_token(size = 28)
  charset = %w{ 2 3 4 6 7 9 A C D E F G H J K M N P Q R T V W X Y Z }
  (0...size).map { charset.to_a[rand(charset.size)] }.join
end
Answer: Here are a couple of suggestions
Non default CRUD action in Controller
It's generally recommended to only have CRUD actions index, show, new, edit, create, update, destroy in your controllers. So e.g. you should think about renaming your register and login actions and extract according controllers.
For example you could have a UserRegistrationsController and a UserLoginsController which each have a create action.
class V1::UserRegistrationsController < ApplicationController
  def create
  end
end

class V1::UserLoginsController < ApplicationController
  def create
  end
end
Here is a good blog article explaining this concept
http://jeromedalbert.com/how-dhh-organizes-his-rails-controllers/
Handling user not found
Instead of using find_by you could use the bang method find_by! which will raise a RecordNotFound error.
User.find_by!(email: params[:email])
Now RecordNotFound errors will get automatically translated to a 404 response code. But you could overwrite the default behaviour as well with your own exception app or rescue_from
https://github.com/rails/rails/blob/main/actionpack/lib/action_dispatch/middleware/public_exceptions.rb#L14
https://stackoverflow.com/questions/62843373/how-to-deal-with-general-errors-in-rails-api/62846970#62846970
Another possibility is to add a validation with scope so you can only have one email address.
class V1::UserRegistrationsController < ApplicationController
  def create
    @user = User.new(
      name: name,
      email: email,
      password: password,
      plan: "FREE"
    )
    if @user.save
      render @user
    else
      render @user.errors
    end
  end
end
Strong parameters
User strong parameters instead of plucking the values from the params hash.
def user_params
  params.require(:user).permit(:email, :password, :name)
end
https://edgeguides.rubyonrails.org/action_controller_overview.html#strong-parameters
Use a PORO or Service object
A lot of people will have different opinions of using a Plain Old Ruby Object vs a Service object etc. But I think a lot of people would agree that it's a good idea to move logic from the controller to a dedicated object.
Example
class UserRegistration
  def self.create(params)
    new(params).save
  end

  def initialize(params)
    @params = params
  end

  def save
    user = create_user
    auth = UserAuth.new(
      access_token: generate_token,
      user_id: user.id,
      status: true
    )
    auth if auth.save
  end

  private

  attr_reader :params

  def create_user
    User.create(
      name: params[:name],
      email: params[:email],
      password: params[:password],
      plan: "FREE"
    )
  end

  def generate_token(size = 28)
    charset = %w{ 2 3 4 6 7 9 A C D E F G H J K M N P Q R T V W X Y Z }
    (0...size).map { charset.to_a[rand(charset.size)] }.join
  end
end
And then you can just do in your controller
class V1::UserRegistrationsController < ApplicationController
  def create
    if (auth = UserRegistration.create(params))
      render auth
    else
      render :error
    end
  end
end
Generally speaking I think there is too much logic in your controllers which makes it hard to read and test. | {
"domain": "codereview.stackexchange",
"id": 42362,
"tags": "ruby, ruby-on-rails"
} |
Caterpillar resembling a snake | Question: I found a very interesting caterpillar. It's 5 cm long and looks like a snake. I discovered it in a greenhouse on a vine in the Czech Republic.
I found two types: a green one and a brown one, both of the same length.
What species is it?
Answer: Looks like the larva of an elephant hawk moth (Deilephila elpenor).
Green Variant. ©2005 Henk Wallays (CC BY-NC 3.0)
In the Sphingidae family (like the snake-mimic caterpillar in this post).
Description: typically brown-gray color with black dots along length of body. Young larvae are yellowish-white or green, but some mature larvae also can be green colored. Fully-grown caterpillars reach 7.62 cm in length.
Have a backward curving spine or "horn" on the final abdominal segment.
D. elpenor color variation. Source: A.R. Pittaway & I.J. Kitching
Range: Most common in central Europe and is distributed throughout the palearctic region.
Habitat: a variety, including rough grassland, heathland, sand dunes, hedgerows, woodland, open countryside, and even urban gardens.
Source: Wikipedia | {
"domain": "biology.stackexchange",
"id": 8885,
"tags": "species-identification, zoology, entomology, lepidoptera"
} |
Is $\{Z, CZ\}$ a universal quantum gate set for diagonal gates with eigenvalues +1 and -1? | Question: Consider diagonal quantum gates with eigenvalues $\pm 1$, i.e. all diagonal elements are either $+1$ or $-1$.
Can these gates always be decomposed into a finite number of Z and controlled-Z gates?
My gut feeling says yes, but I don't know how to prove it.
Edit: As @glS has mentioned, an overall phase of $-1$ may be needed if the upper-left element is $-1$. Still, after adjusting for this $-1$ phase, my question remains.
Answer: No. This way, you can only get (a subset of) stabilizer circuits, but there are clearly gates which are not stabilizer circuits.
An example is the three-qubit gate
$$
\begin{pmatrix}1\\ &1\\ &&1\\&&&1\\&&&&1\\&&&&&1\\&&&&&&1\\&&&&&&&-1
\end{pmatrix}
$$
You can see that this is not a stabilizer gate by applying a Hadamard to the third qubit on both sides: then you get a Toffoli gate, which is not a stabilizer gate (in fact, together with stabilizers it allows universal quantum computation).
Another way to see that this is impossible is to just count the number of diagonal gates with $\pm1$ entries -- there are $2^{(2^N)}$ of those -- and compare them to the number of gates you can build with your recipe -- since they all commute, there are $2^{(N^2)}$ ways of putting the CZs, and $2^N$ ways of putting the $Z$'s, which is exponentially less.
Note that the counting argument does, in fact, give a more general impossibility argument for realizing such gates using only a limited class of diagonal gates. | {
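The counting argument can be made concrete for $N=3$ by brute force. A product of $Z$ and $CZ$ gates, with an optional global sign, has diagonal entries $(-1)^{g+\sum_i a_i x_i+\sum_{i<j} b_{ij} x_i x_j}$ (a linear-plus-quadratic exponent over $\mathbb{F}_2$), while the gate above has entries $(-1)^{x_1 x_2 x_3}$, a cubic exponent. A quick enumeration (a sketch, not from the original answer; all names are made up) confirms that no choice of exponents matches:

```python
from itertools import product

def matches_ccz(a, b, g):
    """Check whether (-1)^(g + a.x + sum b_ij x_i x_j) equals (-1)^(x1 x2 x3) for all x."""
    for x in product((0, 1), repeat=3):
        phase = (g + sum(ai * xi for ai, xi in zip(a, x))
                 + b[0] * x[0] * x[1] + b[1] * x[0] * x[2] + b[2] * x[1] * x[2]) % 2
        if phase != x[0] * x[1] * x[2]:
            return False
    return True

# Enumerate all Z placements (a), CZ placements (b), and a global sign (g).
found = any(matches_ccz(a, b, g)
            for a in product((0, 1), repeat=3)
            for b in product((0, 1), repeat=3)
            for g in (0, 1))
print(found)  # False: the gate cannot be written as a product of Z's and CZ's
```

This is just the statement that a degree-3 polynomial over $\mathbb{F}_2$ is never equal to a degree-2 one.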
"domain": "physics.stackexchange",
"id": 79401,
"tags": "quantum-information, quantum-computer"
} |
Optimization of pandas row iteration and summation | Question: I'm wondering if anyone can provide some input on improving the speed and calculations of a pandas result.
What i am trying to obtain is a summation of IDs in one table (player table) based on each row of a second table (UUID). Functionally each row needs to sum the total of the players table rows that are contained in its Active row and assign the UUID as the index to that row.
My initial thought was to loop row by row and calculate my results, but that has produced quite a slow result, and I suspect it is not the optimal way this could be accomplished. In the version below, my estimated total time for the full dataset would be around 66 minutes. Running on a subsample of 10,000 takes around 20 seconds.
Would anyone have a better solution to calculating these results?
Thanks in advance!
UUID Table
This is a subset of the whole table
shape = (2060590, 2)
Player ID Table
This is a subset of the whole table
shape = (39,8)
Final Table
Code
# executes in ~20 seconds
df = None
for ix, i in enumerate(uuid_df[["UUID", "Active"]].sample(10000).itertuples(index=False)):
    # Get UUID for row
    _uuid = i[0]
    # Get list of "Active items" (these are the ones that will be summed)
    _active = i[1]
    # Create new frame by selecting everything from points table where the ID is in the Active List.
    # Sum selected values, convert to a dataframe with UUID as index and transpose
    _dff = points_table_df.loc[points_table_df.index.isin(_active)].sum().to_frame(_uuid).T
    # Check if first dataframe, if not concat to existing one
    if df is None:
        df = _dff
    else:
        df = pd.concat([df, _dff])
Answer: This could actually be done quickly and intuitively using linear algebra.
So consider your player data as a label-binarized array (this can be done with MultiLabelBinarizer), so you would expect an array of size (2060590, 39) containing 0s and 1s. Rearrange the columns to match the ordering of your player table (or the other way around, whichever is easier), basically such that the first column of your new matrix corresponds to the first player in the player table. Finally, just apply matrix multiplication, and done.
This is an example using a generated sample, but hopefully you get the idea of how to do this.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
sample_active = pd.Series([[100,50,150,200],
[100,50,150],
[100,50],
[100]])
sample_df = pd.DataFrame()
sample_df['id'] = ['fadfsadsa', 'dsafsadf', 'dfsafsda', 'dasfasdfsaf']
sample_df['active'] = sample_active
## sample_df should look close to your original df
classes = [50,100,150,200]
player_df = pd.DataFrame({cl : np.random.uniform(0,1,size=5) for cl in classes}).T
player_df.columns = ['A','B','C','D','E']
sample_transformed = mlb.fit_transform(sample_active.values) ##apply multilabel binarizer
output = sample_transformed.dot(player_df.loc[mlb.classes_]) ##matrix multiply and get your required answer, use loc so the order will be similar as your binarized matrix.
new_df = pd.concat([sample_df['id'], pd.DataFrame(output)], axis = 1)
new_df.columns = ['id'] + list(player_df.columns)
For your case I think this should work :
mlb = MultiLabelBinarizer()
active_transformed = mlb.fit_transform(uuid_df['Active'])
output = active_transformed.dot(points_table_df.loc[mlb.classes_])
df = pd.concat([uuid_df[['UUID']], output], axis = 1)
df.columns = ['UUID'] + list(points_table_df.columns)
Try it! | {
"domain": "datascience.stackexchange",
"id": 6388,
"tags": "pandas, optimization, dataframe, processing"
} |
Nutrition facts label OCR | Question: My team is working on a program that can read nutrition facts labels using OCR, much like NutriScanner (though by its reviews it doesn't seem like it works particularly well).
I've seen other questions noting that it'd be a good idea to straighten out the image prior to running OCR on it. Is there a good image library that can automatically do that? I've looked around and found numerous Photoshop tutorials on how to do it manually, but that's not what I'm looking for.
I know OCR engines like ABBYY have some pre-processing features built-in, but I'd prefer to piece together a solution using Tesseract and a free library for the pre-processing.
I haven't found much in the way of leveraging the positioning of the items on the label to improve scanning accuracy, but any suggestions would be appreciated.
Answer: Been there, done that. One of our on-going projects is to create a process for processing nutrition labels from iPhone camera for a health-tracking app. My company decided to develop a solution for this particular application - extracting data from US nutrition labels. This solution will be used for this client, but we decided to pre-package a few other flexible capabilities along the way, to be used by a wider audience. The solution will be available to general public in about 3 weeks. (I'll come back here and post an update.)
Main goal that will make or break the entire idea is to get usable images. Curved images, heavy shading, low resolution, and blur will all substantially decrease OCR quality, often to no quality. In apps, we found that user training and guidance is the most helpful method. Then there are some technical tools such as quickly detecting quality of image and suggesting to re-take it if needed. In general, if you can achieve "high quality fuel" (images) you can expect high performance from your "machine".
Next, the OCR. Tesseract can do a good job reading clear and simple text such as pages from books. Commercial products such as ABBYY distinguish themselves by working well on tough images - shading, distortions, small prints, etc. And unfortunately, if mobile images are used, they are more often on the bad side than the good side.
Next, pick your approach to locating and extracting data. Please see my answer on text parsing vs specialized text extraction tools:
https://stackoverflow.com/questions/3070732/processing-ocred-text
For our project we'll be using ABBYY FlexiCapture for targeted nutrition data. It has special tools even when OCR makes mistakes to still find appropriate data (sort of controlled fuzzy search), and that was an important factor in selecting it for the task. | {
"domain": "dsp.stackexchange",
"id": 309,
"tags": "image-processing, ocr"
} |
How can telescopes see anything at all? | Question: I'm impressed that we have any telescope imagery at all. Take the images we have from the "Pillars of Creation".
The Pillars of Creation is in the Eagle Nebula, some 7,000 light-years away from Earth. Its size is estimated to be about 4 light-years.
Does that mean that across those 7,000 light-years there's absolutely nothing in the way that obscures the pillars? Given their angular size in arcseconds, I'd expect that any kind of disturbance would make it simply impossible to visualize them.
Also, does that also mean that the pillars occupy that portion (albeit minuscule) of the sky permanently? What I wonder is, what is behind the ever-growing portion of the sky being hidden by those 4 light-years of size?
Answer: Yes, space is very empty. There is not nothing between us and the Eagle Nebula, but little enough that we can still get a reasonable view of it.
The pillars are ephemeral, they are evolving on timescales that, whilst slow on human timescales, are quite rapid on the timescales of galactic evolution. There probably won't be much evidence of them in a few million years.
The pillars do "hide" a portion of space behind them (and inside them). This is why you often see observations taken at infrared wavelengths because light at this wavelength can penetrate the gas and (mainly) dust making up the pillars more easily than visible wavelengths and we can see into them and behind them. See here for example. | {
"domain": "astronomy.stackexchange",
"id": 7137,
"tags": "telescope, observable-universe"
} |
How to unify the electrons and holes in Feynman Diagram? | Question: As we know, the following diagram describes electron-hole pair excitation. Namely, the right-going line describes creating electrons propagating with energy $(\omega_n+\nu_n)$, and the left-going line describes creating holes propagating with energy $\nu_n$. Thus, the total energy of such a "virtual excitation" in the system is $(\omega_n+2\nu_n)$.
However,we can also alter the arrow of the lower line, which gives:
I think the physical meaning of the lower line is now: creating electrons with energy $-\nu_n$ that propagate, but then the total energy of such a virtual excitation will just be $\omega_n$; or maybe it cannot even be considered a virtual excitation.
I am confused by the discrepancy here; there must be some mistake I have made.
Addition: Comments say that if I just switch the arrow as in the second figure, quantum-number conservation is not consistent. But does there exist some standard procedure if I really want to change the "label arrow" (for standardized automated calculations)?
Answer: The energy in the first diagram is conserved; it means that the total energy (or Matsubara frequency) is $i\omega$ in all parts of the diagram. In the middle it is $(i\omega+i\nu)-i\nu$, where the second term is subtracted because the corresponding arrow is directed to the left. So I don't think the quantity $i\nu$ is somehow connected with the virtual excitation energy.
To look at virtual excitations we need to take into account the exact stationary states $\{\psi_a\}$ available in the system, where the system can spend some time. In order to do it, we can use the spectral representation for the Green function
$$
G(\mathbf{r}_1,\mathbf{r}_2,i\nu)=\sum_a\frac{\psi_a(\mathbf{r}_1)\psi^*_a(\mathbf{r}_2)}{i\nu-E_a}.
$$
Thus you diagram can be written as (if the points on the left and on the right are $\mathbf{r}_1$ and $\mathbf{r}_2$)
$$
T\sum_\nu G(\mathbf{r}_1,\mathbf{r}_2,i\omega+i\nu)G(\mathbf{r}_2,\mathbf{r}_1,i\nu)=\\
T\sum_{ab\nu}
\frac{\psi_a(\mathbf{r}_1)\psi^*_a(\mathbf{r}_2)}{i\omega+i\nu-E_a}\frac{\psi_b(\mathbf{r}_2)\psi^*_b(\mathbf{r}_1)}{i\nu-E_b}.
$$
After frequency summation we get
$$
\sum_{ab}\psi_a(\mathbf{r}_1)\psi^*_a(\mathbf{r}_2)
\psi_b(\mathbf{r}_2)\psi^*_b(\mathbf{r}_1)
\frac{f_b-f_a}{i\omega+E_b-E_a},
$$
where $f_a$, $f_b$ are occupation numbers.
Here we can see the virtual excitations $b\rightarrow a$, where the particle in the $a$ state and the hole in the $b$ state are created with the total energy $E_a-E_b$. The denominator $i\omega-(E_a-E_b)$ has the typical resonant form of the difference between the actual and virtual energies.
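For completeness, here is a sketch of the frequency summation step used above (assuming a fermionic Matsubara frequency $i\nu$, a bosonic $i\omega$, and the standard sum $T\sum_\nu (i\nu-E)^{-1}=f(E)$ with the usual convergence factor understood). Partial fractions give
$$
\frac{1}{(i\omega+i\nu-E_a)(i\nu-E_b)}
=\frac{1}{i\omega+E_b-E_a}\left(\frac{1}{i\nu-E_b}-\frac{1}{i\omega+i\nu-E_a}\right),
$$
so that
$$
T\sum_\nu\frac{1}{(i\omega+i\nu-E_a)(i\nu-E_b)}
=\frac{f(E_b)-f(E_a-i\omega)}{i\omega+E_b-E_a}
=\frac{f_b-f_a}{i\omega+E_b-E_a},
$$
where the last step uses that $f(E_a-i\omega)=f(E_a)$ for a bosonic frequency $i\omega$.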
If you want to transform one of the particles into the hole, you may introduce the hole energy $\tilde{E}_b=-E_b$, wave function $\tilde\psi_b=\psi^*_b$, and the occupation number $\tilde{f}_b=1-f_b$ to get the diagram in the explicit form of particle-hole pair propagator:
$$
\sum_{ab\nu}\psi_a(\mathbf{r}_1)\tilde\psi_b(\mathbf{r}_1)\psi^*_a(\mathbf{r}_2)
\tilde\psi^*_b(\mathbf{r}_2)
\frac{1-f_a-\tilde{f}_b}{i\omega-(E_a+\tilde{E_b})}.
$$ | {
"domain": "physics.stackexchange",
"id": 65437,
"tags": "quantum-field-theory, condensed-matter, feynman-diagrams, greens-functions"
} |
Exponential backoff generator | Question: Exponential backoff in the context of various networking protocols looks something like this:
When a collision first occurs, send a “Jamming signal” to prevent further data being sent.
Resend a frame after either 0 seconds or 51.2μs, chosen at random.
If that fails, resend the frame after either 0s, 51.2μs, 102.4μs, or 153.6μs.
If that still doesn't work, resend the frame after k · 51.2μs, where k is a random integer between 0 and 2^3 − 1.
In general, after the c-th failed attempt, resend the frame after k · 51.2μs, where k is a random integer between 0 and 2^c − 1.
I've written a generator that handles this:
def exponential_backoff(k):
    num_failed = 0
    while True:
        suceeded = yield k*random.randint(0, 2**num_failed-1)
        num_failed = (num_failed + 1) if not suceeded else 0
Usage:
backoff_generator = exponential_backoff(TIME_FRAME)
try:
    send_message("super cool message")
except MessageSendFailed:
    time.sleep(backoff_generator.send(False))
else:
    backoff_generator.send(True)
Does this seem like a reasonable way to handle things? The goal was to have a simple method of getting the amount of time to wait, without having to maintain too much state in the application itself, without adding an unreasonable amount of extra processing time, and without too much kruft.
Answer: A few suggestions:
Possibly I’m using the given code incorrectly, but it doesn’t seem to work for me. I defined a send_message() function that would always fail:
class MessageSendFailed(Exception):
    pass

def send_message(msg):
    raise MessageSendFailed
but when I run it, it immediately fails with the following error:
$ python expofailed.py
Traceback (most recent call last):
  File "expofailed.py", line 29, in <module>
    time.sleep(backoff_generator.send(False))
TypeError: can't send non-None value to a just-started generator
Alternatively, if I have a copy of send_message() that never throws an error, I get the same error.
I’m not a big fan of the foo = bar if condition else baz style of Python ternary operator, because it tends towards unreadability by cramming everything onto a single line. I prefer splitting it into an explicit if block like so:
if succeeded:
    num_failed = 0
else:
    num_failed += 1
And now you can extend those branches more easily, and write a comment about why each branch behaves as it does (because it's not entirely obvious to me).
Use a better variable name than k as the argument to your function – perhaps interval? Don’t skimp on variable names – characters are cheap.
Your generator should have a docstring and a comment.
You’ve misspelt “succeeded”. | {
"domain": "codereview.stackexchange",
"id": 20761,
"tags": "python, python-3.x, generator, timeout"
} |
I tried two approaches and obtained different conclusions about the stability of the system's transfer function | Question: We want to judge whether the system is stable or not.
Given the transfer function below.
$$ H\left( z \right) =\frac{\left( 1+2 z^{-1} \right) }{\left( 2+z^{-1} \right) } $$
$$ H\left( z \right) =2-\frac{3}{2+z^{-1} } $$
$$ 2+\frac{-3}{2+z^{-1} } $$
$$S:=\frac{-3}{2+z^{-1} }$$
$$ S ~~\text{is the sum of each element of the geometric sequence.} $$
$$ -3 ~~ \leftarrow~~ \text{initial term} $$
$$ z^{-1} ~~ \leftarrow~~ \text{common ratio} $$
$$ i \geq1 \rightarrow \text{ith term} = -3 \cdot \left( z^{-1} \right) ^{i-1} $$
$$ = -3 \cdot z^{1-i} =-3 \cdot z^{-0}~,~-3z^{-1}~,~-3 z^{-2} ~,~ \cdot\cdot\cdot $$
$$ H\left( z^{} \right) =\sum_{ n=-\infty }^{ \infty } h\left[ n \right] z^{-n} $$
$$ \sum_{ n=-\infty }^{ \infty } \left( 2 \delta\left[n \right] +\left( -3 \right) u\left[ n \right] \right)z^{-n} $$
$$\displaystyle \therefore ~~ h\left[ n \right] =2 \delta\left[n \right] +\left( -3 \right) u\left[ n \right] $$
$$ \sum_{ n=-\infty }^{ \infty } \left| h\left[ n \right] \right| $$
$$ = \sum_{ n=-\infty }^{ \infty } \left| 2 \delta\left[n \right] +\left( -3 \right) u\left[ n \right]\right| = \infty $$
$$ \therefore ~~ ~~\text{The system is unstable.}~~ $$
However from the another approach,
$$ H\left( z \right) =2-\frac{3}{2+z^{-1} } $$
$$ = \frac{2 \left( 2+ z^{-1} \right) -3}{2+z^{-1} } $$
$$2+z^{-1} = 0 $$
$$ 2+\frac{1}{z^{} } =0 $$
$$ \frac{1}{z^{} } =-2 $$
$$ z=-\frac{1}{2} =-0.5 ~~ \leftarrow~~ ~~\text{pole}~~ $$
$$ ~~\text{Since }~~ \left| \text{pole} \right| =\left| -0.5 \right| =\left| 0.5 \right| <1 ~~\text{is held, the system is stable.}~~ $$
Why were different conclusions obtained?
What have I been missing?
Answer: The system is stable. Your mistake is in the application of the formula for the geometric series:
$$\begin{align}-\frac{3}{2+z^{-1}}&=-\frac32\frac{1}{1+\frac12z^{-1}}\\&=-\frac32\sum_{n=0}^{\infty}\left(-\frac12\right)^nz^{-n}\end{align}$$
Hence, the system's impulse response is
$$h[n]=2\delta[n]-\frac32\left(-\frac12\right)^nu[n]$$ | {
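A quick numerical check confirms this (a sketch, not part of the original answer: it iterates the difference equation $2y[n] + y[n-1] = x[n] + 2x[n-1]$ implied by $H(z)$, and compares against the closed form above):

```python
def impulse_response(n_samples):
    """Impulse response of H(z) = (1 + 2 z^-1) / (2 + z^-1) via its difference equation."""
    x = [1.0] + [0.0] * (n_samples - 1)  # unit impulse
    y = []
    for n in range(n_samples):
        x_prev = x[n - 1] if n > 0 else 0.0
        y_prev = y[n - 1] if n > 0 else 0.0
        # 2 y[n] + y[n-1] = x[n] + 2 x[n-1]
        y.append((x[n] + 2.0 * x_prev - y_prev) / 2.0)
    return y

h = impulse_response(10)
closed_form = [2.0 * (1 if n == 0 else 0) - 1.5 * (-0.5) ** n for n in range(10)]
print(max(abs(a - b) for a, b in zip(h, closed_form)))  # 0.0: exact agreement
```

The impulse response decays geometrically with ratio $-1/2$, which is the absolutely summable behavior the pole location predicts.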
"domain": "dsp.stackexchange",
"id": 10175,
"tags": "discrete-signals, z-transform, transfer-function, stability, time-domain"
} |
How to calculate Altitude from IMU? | Question: How do I calculate altitude from an IMU?
For example, what are the mathematical equations?
Answer: Altitude is usually determined from pressure and temperature sensors of an IMU. You can see a formula here.
However, you must realize that raw data from a sensor is NEVER RELIABLE. Sensors do not always give a correct reading. Instead, they give you a value somewhat close to the true value, but with some random "noise" added to it. So, instead of just using the data that you obtain from the sensor, you should always filter out the noise using some form of Kalman filter.
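As a sketch of the kind of formula involved (this is the commonly quoted standard-atmosphere / hypsometric approximation, e.g. seen in barometer datasheets, not necessarily the exact formula on the linked page; the 44330 m scale and 1/5.255 exponent come from standard-atmosphere parameters):

```python
def pressure_altitude(pressure_pa, sea_level_pa=101325.0):
    """Approximate altitude in meters from barometric pressure (standard atmosphere)."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

print(pressure_altitude(101325.0))   # 0.0 at sea-level pressure
print(pressure_altitude(89874.6))    # roughly 1000 m
```

In practice the raw pressure reading fed into such a formula is exactly the noisy input that should first be smoothed or fused with accelerometer data, as noted above.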
"domain": "robotics.stackexchange",
"id": 442,
"tags": "imu"
} |
How can a Hamiltonian determine the Hilbert space? | Question: Sometimes, when discussing quantum field theory, people speak as if a Hamiltonian determines what the Hilbert space is. For example, in this answer AccidentalFourierTransform says
Imagine an $H_0$ that depends on the phase space variables $P$,$X$. [...] If you add the perturbation $\vec{L} \cdot \vec{S}$, with $\vec{S}$ the spin of the particle, then you change the Hilbert space, because the new space has three phase space variables $P,X,S$, and you cannot span the latter with a basis of the former.
This kind of language also pops up when introducing the free scalar field -- lots of lecture notes and textbooks speak of 'building' or 'constructing' the Hilbert space, or 'finding' the 'Hilbert space of the Hamiltonian'.
This kind of reasoning seems exactly backwards to me. How can one possibly define a Hamiltonian, i.e. an operator on a Hilbert space, if we don't know the Hilbert space beforehand? Without a Hilbert space specified, isn't $H = p^2/2m + V(x)$ just a meaningless string of letters with no mathematical definition? I find this shift of perspective so bewildering that I feel like I missed a lecture that everybody else went to.
For example, when dealing with the harmonic oscillator, it is possible to show that the Hilbert space must contain copies of $\{|0 \rangle, |1 \rangle, \ldots \}$ using only the commutation relations. But there's no way to pin down how many copies there are unless we use the fact that the Hilbert space is actually $L^2(\mathbb{R})$ which shows that $a |0 \rangle = 0$ determines a unique state. Similarly I would imagine for quantum fields we should start with a Hilbert space where the individual states are classical field configurations, but I've never seen this done in practice -- there seems to be no input but the Hamiltonian itself. How could that possibly be enough?
Answer: The point is that sometimes one starts from a more or less explicit algebraic formalism where only algebraic manipulations of algebra elements are initially used. Here operators are not operators on a precise Hilbert space, but just elements of a unital $^*$-algebra, and only compositions (multiplication by scalars, sum, and the algebra product) and the involution operation (formal adjoint) are used. Next one sees whether this algebra, with further technical conditions (some operator must be self-adjoint, some representation must be irreducible) or physical requirements (a suitable state exists), uniquely determines a Hilbert space where this algebra is faithfully represented in terms of operators with suitable domains.
For instance, the algebra of $a,a^*$ completely determines the standard harmonic oscillator representation in $L^2(\mathbb R, dx)$, assuming that $a^*a$ is essentially self-adjoint on a dense common invariant domain and that the arising representation is irreducible.
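As a small numerical illustration of how the commutation relation alone fixes the spectrum (a sketch, not part of the original answer: a finite $N$-dimensional truncation of the number basis, where the last diagonal entry of the commutator deviates from $1$ purely as a truncation artifact):

```python
import math

N = 8  # truncation dimension

# Lowering operator in the number basis: a|n> = sqrt(n) |n-1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
a_dag = [[a[j][i] for j in range(N)] for i in range(N)]  # real transpose

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

commutator = [[x - y for x, y in zip(rx, ry)]
              for rx, ry in zip(matmul(a, a_dag), matmul(a_dag, a))]
number_op = matmul(a_dag, a)

print([round(commutator[i][i], 9) for i in range(N - 1)])  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print([round(number_op[i][i], 9) for i in range(N)])       # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Away from the cutoff, $[a, a^\dagger]=1$ holds and the number operator has spectrum $0,1,2,\dots$, exactly the structure fixed by the algebra plus the lowest-weight condition $a|0\rangle=0$.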
In QFT, as soon as one has an algebraic version of the field operators and a state, one constructs the Hilbert space representation through, for instance, the GNS reconstruction theorem.
Sometimes some hypotheses turn out to be incompatible, as happens in the case of Haag's theorem.
In summary, the Hamiltonian itself, viewed as an element of a unital $^*$-algebra, is not enough to determine a Hilbert space where the theory can be implemented in the standard way; the whole algebra should be fixed, and more information is usually necessary.
"domain": "physics.stackexchange",
"id": 43790,
"tags": "quantum-field-theory, hilbert-space, hamiltonian"
} |
All pair shortest path in a tripartite graph | Question: I have a tripartite graph with three sets of vertices: source, bridge, and destination nodes. I want to find the shortest path from every vertex in the source set to every vertex in the destination set. Also, all vertices in the source set are connected to all vertices in the bridge set, and all vertices in the destination set are connected to all vertices in the bridge set.
Let $n$ be the cardinality of both the source and the destination set, and $m$ be the cardinality of the bridge set.
A naive algorithm can compute these in $\mathcal{O}(n^{2}m)$ comparisons.
On the other hand, since we are computing $n^{2}$ quantities, the complexity will be at least $\mathcal{O}(n^{2})$.
My guess is that it can be done in $\mathcal{O}(n^{2}\log m)$ using priority-queue.
So question is, whether my guess is correct and if not, then what is the complexity of solving this problem?
Edit:
Note that shortest path between a vertex in the source set to any vertex in the bridge set is the direct link, and same holds for the shortest path between the bridge and destination set.
Answer: Your problem is essentially $(\min,+)$ matrix multiplication, also known as tropical matrix multiplication. This is because you're computing
$$
C_{ik} = \min_j (A_{ij} + B_{jk}),
$$
which is the same formula as usual matrix multiplication, with $\min$ replacing sum and $+$ replacing product.
Usual fast matrix multiplication algorithms cannot be used for this task. Furthermore, the APSP hypothesis in fine-grained complexity states that there is no $O(n^{3-\epsilon})$ algorithm for APSP. Since APSP reduces to logarithmically many tropical matrix multiplications, this implies that the latter also has no $O(n^{3-\epsilon})$ algorithms.
On the other hand, some $n^{o(1)}$ factors can be shaved; see, for example, the paper of Ryan Williams, Faster all-pairs shortest paths via circuit complexity. | {
"domain": "cs.stackexchange",
"id": 13367,
"tags": "graphs, asymptotics, dynamic-programming, shortest-path"
} |
Lagrangian of 2D square lattice of point masses connected by springs | Question: Zee's QFT book mentions the Lagrangian of a square 2D horizontal lattice of point masses, connected by springs, and considering only vertical displacements $q_{i}$, as
$ L = \frac{1}{2} \sum\limits_{a}(m \dot{q}_{a}^{2}) - \frac{1}{2} \sum\limits_{ab}(k_{ab}q_{a}q_{b}) - \frac{1}{2} \sum\limits_{abc}(g_{abc}q_{a}q_{b}q_{c}) - ...$
I have done elementary exercises in Lagrangian mechanics, using $\frac{1}{2}k(l-l_{0})^{2}$ as the potential energy of the springs, but, after naively trying to derive (by which I mean "to build") that Lagrangian by myself, I suspect I must be missing some kind of additional cross-contributions to the potential energy (and I have no idea where those triple products $q_{a}q_{b}q_{c}$ emerged from...).
I know this is the ABC of solid state physics, which gives rise to phonons and other interesting stuff, but I am almost completely ignorant in that area. Can anybody at least put me on the right track on how to derive (i.e. to build, departing from some given assumptions) that Lagrangian?
NOTE: In other words, say you want to build that Lagrangian, considering only vertical movements of the masses. The kinetic energy term is obvious, but for the potential energy, is it enough to naively sum $\frac{1}{2}k(l-l_{0})^{2}$ of all the springs? (of course written as a function of the $q_{i}$). Or, perhaps, is there any additional contribution to the potential energy that comes from the fact that the springs are somehow having some influence on each other?
EDIT with some remarks:
Remark 1:
A somewhat similar approach to what I am looking for, can be found for a linear chain of atoms, here (Ben Simons, Notes on Quantum Condensed Matter Field Theory, chapter 1)
Remark 2:
Thanks very much for correcting my misuse of the english word "derive". Ok, a Lagrangian is not derived. When I say "to derive a Lagrangian" I want to mean "to build a Lagrangian departing from some assumptions" like is the usual approach. For example, I can build the Lagrangian from a double pendulum from the assumptions that the masses of the rods can be neglected and there is no friction, and so I simply add the kinetic energy of the two masses and subtract their gravitational potential energy.
Answer: I'm not sure if a Condensed Matter book is going to give you what you want: as pointed out by commenters, you cannot derive a Lagrangian, you can only justify it because it represents the correct physics. But here is a simple interpretation of the 3rd order term. For small deformations, Hooke's law holds and the restoring force $F_{a}=-k_{ab}q_b$. (For isotropic systems this reduces to the familiar $F_a = -kq_a$.) But for larger deformations (beyond the proportionality limit) you get non-linear corrections to the spring constant $\delta k_{ab} \sim g_{abc} q_c$, where $g_{abc}$ is some material-dependent constant. So the constant $g_{abc}$ quantifies how much the stress acting on the springs alters their springiness.
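Where such terms come from can be sketched by Taylor-expanding a generic interatomic potential about its equilibrium configuration (a generic expansion, not specific to the square lattice; the identification with Zee's couplings is up to factor conventions):
$$
U(q) = U_{0} + \frac{1}{2}\sum_{ab}\left.\frac{\partial^{2} U}{\partial q_{a}\,\partial q_{b}}\right|_{0} q_{a} q_{b} + \frac{1}{3!}\sum_{abc}\left.\frac{\partial^{3} U}{\partial q_{a}\,\partial q_{b}\,\partial q_{c}}\right|_{0} q_{a} q_{b} q_{c} + \dots
$$
The linear term vanishes at equilibrium; the second derivatives play the role of $k_{ab}$ and the third derivatives the role of $g_{abc}$, so the cubic term is simply the first anharmonic correction to the harmonic approximation.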
This makes perfect sense: the presence of excitations of the field (mattress) alters the way the excitations move, i.e. a self-interaction of the field (or once you quantise, quanta of the field interacting with each other). In particular, you can see that if two wavepackets collide, the increased amplitude of the deformation will alter the effective spring constant at the impact point, resulting in scattering effects. In the absence of the non-harmonic terms the wavepackets would just pass right through each other. | {
"domain": "physics.stackexchange",
"id": 5700,
"tags": "quantum-field-theory, solid-state-physics, lagrangian-formalism, phonons"
} |
Is there a conservative force acting on a spring that oscillates in SHM? | Question: If I were to stretch a spring, then I am doing positive work in order to increase the potential energy stored in the spring. Since the equation for the potential energy in this case is given by $$U(x)=\frac{1}{2}kx^2 $$ i.e, the potential energy depends only on the displacement from a reference point. Then would there be a conservative force doing negative work on the spring in order to increase the potential energy or not since $U \propto x^2$ rather than $x$?
Answer: The force pulling the spring back into equilibrium (due to the tension in the spring) is a conservative force - the very definition of a conservative force is that it's minus the gradient of some potential, i.e. $\overrightarrow{F}(x,y,z)=-\nabla U(x,y,z)$. In the case of a spring in the $xy$-plane being stretched along the $x$ axis, the equation reduces to $\overrightarrow{F}=-\frac{\text{d}U}{\text{d}x}\overrightarrow{e_x}$, which is indeed the case, since the right hand side is $-kx\overrightarrow{e_x}$ so that $\overrightarrow{F}=-kx\overrightarrow{e_x}$, Hooke's law.
Another way to define a conservative force is that $\oint \vec{F}\cdot\text{ d}\vec{s}$ must equal zero. In other words, the work done by this force along any closed path must equal zero. This is certainly the case here, since the work done by the spring force depends only on the end points, not on the path between them.
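This can be checked numerically: summing $\vec F\cdot\Delta\vec s$ for $\vec F = -k\,\vec r$ around a closed elliptical path gives zero (a sketch; the path and constants are arbitrary). With the midpoint rule, each segment's contribution telescopes exactly, up to floating-point rounding:

```python
import math

def loop_work(k=2.0, steps=100_000):
    """Work of F = -k*r around the closed ellipse x = 3 cos t, y = sin t."""
    work = 0.0
    for i in range(steps):
        t0 = 2 * math.pi * i / steps
        t1 = 2 * math.pi * (i + 1) / steps
        x0, y0 = 3 * math.cos(t0), math.sin(t0)
        x1, y1 = 3 * math.cos(t1), math.sin(t1)
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)  # midpoint of segment
        # F . ds at the midpoint of the segment
        work += -k * xm * (x1 - x0) - k * ym * (y1 - y0)
    return work
```

The result is zero to machine precision, as expected for a conservative force.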
At any rate, the work done by the spring would indeed be minus the work done by you, since your force is anti-parallel to the spring's force. | {
"domain": "physics.stackexchange",
"id": 49353,
"tags": "forces, work, potential-energy, harmonic-oscillator, spring"
} |
How to solve difficult resistance puzzles? | Question: How does one solve really difficult resistance puzzles? These are presented in the form of a diagram with multiple resistances in various combinations.
I have done some puzzles by reducing the puzzle into multiple series/parallel combinations. I know about Kirchhoff's laws and how to use them.
P.S. And while you are at it, you may try solving this incredibly hard question.
Answer: First of all, if you are dealing with a network of a finite number of resistors, try redrawing it in some form in which you'll be able to recognize the parallel or series connections.
Secondly, take a look at Delta-Y Transform which might be really helpful in some cases.
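For reference, the delta-to-Y conversion itself is just three formulas (a sketch; the naming convention assumes delta resistors $R_{ab}, R_{bc}, R_{ca}$ between nodes $a$, $b$, $c$):

```python
def delta_to_y(r_ab, r_bc, r_ca):
    """Replace a delta of resistors with the equivalent Y (star).

    Returns (r_a, r_b, r_c): the star resistor attached to each node is
    the product of the two delta resistors meeting at that node,
    divided by the sum of all three.
    """
    total = r_ab + r_bc + r_ca
    return (r_ab * r_ca / total,   # r_a
            r_ab * r_bc / total,   # r_b
            r_bc * r_ca / total)   # r_c
```

For example, a balanced delta of 3 Ω on each side becomes a Y of 1 Ω per leg.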
If these fail, turn to Kirchhoff's laws, i.e. put a test generator between the points between which you're calculating the resistance, and take your time solving the circuit. Sometimes circuit theorems might help.
Also, watch out for a high degree of symmetry (take a look at this problem).
The problem you posted is quite difficult and fun. It's both a maths and a physics problem, and that makes it even more fun. I will take a shot at it and post an answer if I succeed. | {
"domain": "physics.stackexchange",
"id": 25112,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance"
} |
Does the discreteness of spacetime in canonical approaches imply good bye to STR? | Question: In all the canonical approaches to the problem of quantum gravity (e.g. loop variables), spacetime is thought to have a discrete structure. One question that immediately occurs to an outsider of this approach is whether it picks a privileged frame of reference, thereby violating the key principle of special relativity at ultra-small scales. But if this violation is tolerated, doesn't that imply some amount of viscosity within spacetime itself? Or am I writing complete nonsense here? Can anybody with some background in these approaches clear up these issues?
Answer: Here is how I see the problem. The best way to understand it is to think about what happens with rotations and quantum theory. Suppose that a certain vector quantity $V=(V_1,V_2,V_3)$ is such that its components are always multiples of a fixed quantity $d$. Then one is tempted to say that obviously rotational invariance is broken, because if I take the vector $V=(d,0,0)$ and rotate it a bit, I get $V=(\cos(\phi) d, \sin(\phi) d,0)$, and $\cos(\phi) d$ is smaller than $d$. Therefore, either rotational invariance is broken, or the vector components can be smaller. Right? No, wrong. Why? Because of quantum theory. Suppose now that the quantity $V$ is the angular momentum of an atom. Then, since the atom is quantum mechanical, you cannot measure all 3 components together. If you measure one, you can get, say, either $0$, $\hbar$, or $2\hbar$, ..., that is, precisely multiples of a fixed quantity.
Now suppose you have measured that the $x$ component of the angular momentum was $\hbar$. Rotate the atom slowly. Do you then measure something a bit smaller than $\hbar$? No! You measure again either zero, or $\hbar$, ...; what changes continuously is not the eigenvalue, namely the quantity that you measure, but rather the probabilities of measuring one or the other of those eigenvalues.
Same with the Planck area in LQG. If you measure an area (and if LQG is correct, which is all to be seen, of course!) you get a certain discrete number. If you boost the system, you do not measure the Lorentz-contracted eigenvalues of the area: you measure one or the other of the same eigenvalues, with continuously changing probabilities.
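For concreteness, the area spectrum usually quoted in LQG is (stated here as a standard result, not derived; $\gamma$ is the Immirzi parameter and the half-integers $j_i$ label the spin-network links puncturing the surface):
$$
A = 8\pi\gamma\,\ell_P^{2}\sum_i \sqrt{j_i(j_i+1)},
$$
a discrete set of eigenvalues, in direct analogy with the angular momentum example above.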
And, by the way, of course areas are observables. For instance, any CERN experiment that measures a total scattering cross-section is measuring an area. Cross-sections are in $\mathrm{cm}^2$, that is, they are areas. | {
"domain": "physics.stackexchange",
"id": 352,
"tags": "special-relativity, loop-quantum-gravity"
} |
Schrödinger evolution for a Klein-Gordon equation | Question: I have a problem with the transition from quantum relativistic wave equations (specifically the Klein-Gordon equation) to QFT, since a lot of assumptions seem implicit. For example, I have a problem with the time evolution operator, which is crucial in deriving the perturbative expansion $-$ the main tool in QFT, I believe.
So here's what I have a problem with: when we make the leap from the Schrödinger equation to a Klein-Gordon equation, we get a second-order time derivative, and hence lose the simple concepts from nonrelativistic QM like the Hamiltonian, the time evolution operator, etc.
But for a scalar quantum field we can make a Lagrangian density:
$$
\mathcal{L}(x) = \hbar^2 c^2 g^{\mu \nu} \partial_\mu \phi \partial_\nu \phi^* - m^2c^4 \phi \phi^*
$$
and perform the "second quantization", from which we get a Hamiltonian, canonical commutation relations and the ability to use pictures (Schrödinger's, Heisenberg's...).
So how does this work? Before, there was no Hamiltonian in principle, and now there is. Is this the Hamiltonian we plug into the perturbative expansion formulas? What changed, compared to the single-solution wave equation in the beginning?
Answer: The first thing you should realize is that while $\phi$ has an equation of motion with second time derivatives, it is not the wave function, and therefore there is no problem with QM. The field is just an operator (more or less), not a state. Acting with the fields on the vacuum state, you generate the other states, which do evolve with a Hamiltonian built out of operators such as $\phi$ itself. The operators evolve according to the usual Heisenberg equation of motion $[H,\phi(t,x)]=-i\partial_t \phi(t,x)$ (and, by Lorentz symmetry, $[P_j,\phi(t,x)]=-i\partial_j \phi(t,x)$, with $P_\mu=(H,P_i)$ a Lorentz 4-vector). From this Heisenberg picture you can move to the Schrödinger picture, in which, as in non-relativistic QM, the Hamiltonian gives rise to the time evolution of the states, $H\rightarrow -i\partial_t$.
The fact that the theory is Lorentz invariant just adds other (important) things, but does not change what QM says. QFT implements the principles of QM for a system with infinitely many degrees of freedom that can change the number of particles.
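Concretely, in units $\hbar=c=1$ and with standard free-field conventions (a sketch; normalization factors vary between textbooks), expanding the free field in momentum modes brings the Hamiltonian to oscillator form:
$$
H = \int \frac{d^3k}{(2\pi)^3}\,\omega_k\, a^\dagger_{\vec k} a_{\vec k} + \text{const}, \qquad \omega_k^2 = k^2 + m^2,
$$
with $[a_{\vec k}, a^\dagger_{\vec k'}] = (2\pi)^3\,\delta^3(\vec k - \vec k')$.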
All should become very clear if you realize that the Lagrangian for a free scalar boson gives the Hamiltonian for a collection of harmonic oscillators, one harmonic oscillator for every momentum $k$, with frequency (aka energy) $\omega^2=k^2+m^2$. In case I find more time, I will add more details to this answer. | {
"domain": "physics.stackexchange",
"id": 14814,
"tags": "quantum-mechanics, quantum-field-theory, lagrangian-formalism, hamiltonian-formalism, second-quantization"
} |
Differential equation in RL-circuit | Question: I am self-studying electromagnetism right now (by reading University Physics 13th edition) and for some reason I always want to understand things in a crystal-clear way and in depth. Now look at this simple RL-circuit:
I am assuming the wire is superconducting and suppose at $t=0$ the switch is turned on. My problem is that I want to come up with this differential equation for this circuit:
$V-I\cdot R=-L\cdot \frac{dI}{dt}$. I know that I can't use Kirchhoff's voltage law $(\oint \vec{E}\cdot d\vec{l}=0)$, because there is a non-conservative electric field near the inductor. Therefore I use Faraday's law of induction:
\begin{equation}
\mathcal{E}=\oint_{\Gamma} \vec{E}\cdot d\vec{l}=-\frac{d}{dt} \int_{S} \vec{B} \cdot \hat{n}\,dA
\end{equation}
where $\Gamma$ is a closed loop and $S$ is the open surface attached to the loop. Now, my textbook says:
In general, the total field $\vec{E}$ at a point in space can be the
superposition of an electrostatic field $\vec{E_c}$ caused by a distribution of charges at rest and a magnetically induced, nonelectrostatic field $\vec{E_n}$. That is $\vec{E}=\vec{E_c}+\vec{E_n}$.
Inserting this in Faraday's law I get:
$$\oint_{\Gamma} \vec{E}\cdot d\vec{l}=\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}+\oint_{\Gamma} \vec{E_n}\cdot d\vec{l}=-L\cdot \frac{dI}{dt}$$
Now by definition: $\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}=0$ since $\vec{E_c}$ is conservative. This gives: $\oint_{\Gamma} \vec{E_n}\cdot d\vec{l}=-L\cdot \frac{dI}{dt}$, which is perfectly fine and consistent with my textbook. However it also gives (going counterclockwise in the circuit): $\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}=-V+I\cdot R=0$, which it shouldn't. I think my flaw is here, but I've been thinking about this all day with no success.
I have my inspiration from Walter Lewin's video: http://youtu.be/LzT_YZ0xCFY?t=26m31s but I still can't get the equation by Faraday's law.
Any help is greatly appreciated.
Answer: When dealing with inductors Sears & Zemansky state that "we need to develop a general principle analogous to Kirchhoff's loop rule".
With an inductor present in the circuit they state that there is a non-conservative electric field within the coils $\vec E_n$ as well a conservative electric field $\vec E_c$.
Assuming that the inductor has negligible resistance, they then state that the net electric field within the inductor is zero, i.e. $\vec E_c + \vec E_n =0$.
If the ends of the inductor are labelled $a$ and $b$, they then state that instead of writing $\oint \vec E_n \cdot d \vec l = - L \frac {dI}{dt} $ for the complete circuit, they can just consider the path within the coil, as that is where the magnetic flux is changing, which then gives $\int_b^a \vec E_n \cdot d \vec l = - L \frac {dI}{dt} $.
So $\int_b^a \vec E_c \cdot d \vec l = + L \frac {dI}{dt} = V_{ab}$ which is the potential of point $a$ relative to point $b$.
Their final statement is "we conclude that there is a genuine potential difference between the terminals of the inductor, associated with conservative, electrostatic forces, despite the fact that the electric field associated with the magnetic induction effect is nonconservative".
So their aim was to be able to use $\oint \vec E_c \cdot d \vec l = 0$ for a complete circuit with no changing magnetic flux through it which is the essence of what Walter Lewin discusses towards the end of his video with the potential difference across the inductor as part of the line integral.
You might find this Yale video useful in which Ramamurti Shankar starts with a mutual inductance with the secondary open circuited.
So where did you go wrong?
However it also gives (going counterclockwise in circuit):
$\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}=-V+I\cdot R=0$ which it
shouldn't.
Look at this equation:
$$\oint_{\Gamma} \vec{E}\cdot d\vec{l}=\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}+\oint_{\Gamma} \vec{E_n}\cdot d\vec{l}=-L \frac{dI}{dt}$$
Split it term by term:
$$\oint_{\Gamma} \vec{E}\cdot d\vec{l}=\int_{\text{battery}} \vec E_c \cdot d \vec l + \int_{\text{resistor}} \vec E_c \cdot d \vec l + \int_{\text{inductor}} (\vec E_c + \vec E_n) \cdot d \vec l = -L \frac{dI}{dt} $$
$$\Rightarrow \oint_{\Gamma} \vec{E}\cdot d\vec{l}=\int_{\text{battery+resistor+inductor}} \vec E_c \cdot d \vec l + \int_{\text{inductor}} \vec E_n \cdot d \vec l = -L \frac{dI}{dt} $$
$$\Rightarrow \oint_{\Gamma} \vec{E}\cdot d\vec{l}=\oint_{\Gamma} \vec E_c \cdot d \vec l + \int_{\text{inductor}} \vec E_n \cdot d \vec l = -L \frac{dI}{dt} $$
Using your statement: "Now by definition: $\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}=0$"
$$\oint_{\Gamma} \vec{E_c}\cdot d\vec{l}=-V+I\cdot R + L \frac {dI}{dt}=0$$
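Rearranged, the last equation gives $L\,\frac{dI}{dt} = V - IR$, whose solution starting from $I(0)=0$ is $I(t)=\frac{V}{R}\bigl(1-e^{-Rt/L}\bigr)$. A quick numerical integration confirms this (a sketch; component values are arbitrary):

```python
import math

def euler_rl(V=10.0, R=5.0, L=2.0, t_end=1.0, steps=200_000):
    """Forward-Euler integration of L dI/dt = V - I*R, starting at I(0) = 0."""
    dt = t_end / steps
    current = 0.0
    for _ in range(steps):
        current += dt * (V - current * R) / L
    return current

def exact_rl(V=10.0, R=5.0, L=2.0, t=1.0):
    """Closed-form solution I(t) = (V/R) * (1 - exp(-R t / L))."""
    return (V / R) * (1.0 - math.exp(-R * t / L))
```

The two agree to a small discretization error, and for large $t$ the current approaches the steady-state value $V/R$.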
Going around clockwise or anticlockwise does not matter; the only difference is a change of sign for each term. | {
"domain": "physics.stackexchange",
"id": 31996,
"tags": "electromagnetism, electric-circuits, electrical-resistance, inductance"
} |
Counting duration values | Question: I have durations as such:
[{'duration': 3600}, {'duration': 3600}]
And I want the output to be {3600: 2}, where 2 is the number of occurrences of 3600.
My first attempt is use a loop, as such:
var count = {};
for (var d of durations) {
var length = d.duration;
if (length in count) {
count[length] += 1;
}
else {
count[length] = 1;
}
}
console.log(count);
The second attempt uses reduce() in lodash:
function foo(result, value) {
var length = value.duration;
if (length in result) {
result[length] += 1;
}
else {
result[length] = 1;
}
return result;
}
var reduced = _.reduce(durations, foo, {});
console.log(reduced);
As can be seen, this second attempt is still as verbose as before.
Is there a way to write the iteratee function foo more conform to functional programming?
Answer: First point
Don't add quotes around property names when defining JS objects.
[{'duration': 3600}, {'duration': 3600}]
// should be
[{duration: 3600}, {duration: 3600}]
You state
"And I want the output to be {3600: 2}, where 2 is the number of occurrences of 3600."
Which is not that clear. Going by your code I assume you want [{duration: 60}, {duration: 60}, {duration: 10}] converted to {60: 2, 10: 1}.
I will also assume that all array items contain an object with the property named duration
Using these assumptions for the rest of the answer.
Rewrite
Taking your first snippet and turning it into a function.
You don't need the if (length in count) {, you can just use if (count[d.duration]) {
The object count and d should be constants as you don't change them.
Using a ternary you can reduce the 6 lines of the if else to one line.
Code
function countDurations(durations) {
const counts = {};
for (const {duration: len} of durations) {
counts[len] = counts[len] ? counts[len] + 1 : 1;
}
return counts;
}
Or a little less verbose as an arrow function
const countDur = dur => dur.reduce((c, {duration: l}) => (c[l] = c[l]? c[l] + 1: 1, c), {});
or
const countDurs = durs =>
durs.reduce((counts, {duration: len}) =>
(counts[len] = counts[len] ? counts[len] + 1 : 1, counts)
, {}
);
Rewrite 2
The second snippet is algorithmically the same, it just uses a different iteration method.
JavaScript has Array.reduce, so you don't need to use an external library.
Separating the iterator inner block into a function count
Code
const count= (cnt, {duration: len}) => (cnt[len] = cnt[len] ? cnt[len] + 1 : 1, cnt);
const countDurations = durations => durations.reduce(count, {});
Or encapsulating the count function via closure
const countDurations = (()=> {
const count = (cnt, {duration: len}) => (cnt[len] = cnt[len] ? cnt[len] + 1 : 1, cnt);
return durations => durations.reduce(count, {});
})();
Functional?
"Is there a way to write the iteratee function foo more conform to functional programming?"
JavaScript is not designed to be a functional language (it is impossible to create pure functions), so it is best to use functional style as a guide rather than a must.
Almost pure
The second rewrite is more along the lines of the functional approach, but a functional purist would complain that the reducer function has side effects by modifying the counter. You can create a copy of the counts on each call as follows:
const counter = (c, {duration: len}) => (c = {...c}, (c[len] = c[len] ? c[len] + 1 : 1), c);
const countDurations = durations => durations.reduce(counter, {});
However, this adds significant memory and processing overhead with zero benefit (apart from a functional pat on the back).
Note
This answer uses destructuring property name aliases, so check for browser compatibility. | {
"domain": "codereview.stackexchange",
"id": 34543,
"tags": "javascript, functional-programming, comparative-review, lodash.js"
} |
What species is this bushy thorny yellow-flowered angiosperm from Morocco? | Question: What species is this bushy thorny yellow-flowered angiosperm from Morocco? Photographed in spring in a dried up river bed. The bush is about 1 metre high.
Answer: I happened to come across a photo of a similar leafless shrub (without flowers) that also had 120-degree branches and looked thorny, or spiny is probably a better description. It helped me to identify the plant in your photo as Launaea arborescens. It is an unusual species of chicory in the dandelion family, Asteraceae. Being tolerant of arid locations, it is native to northwest Africa and southern Spain.
http://encyclopaedia.alpinegardensociety.net/plants/Launaea/arborescens
https://en.wikipedia.org/wiki/Launaea
https://en.wikipedia.org/wiki/Launaea_arborescens | {
"domain": "biology.stackexchange",
"id": 8597,
"tags": "species-identification, botany"
} |
What is the advantage of a tensorflow.data.Dataset over a tensorflow.Tensor? | Question: I have my own input data class. It has x and y as well as test and train values (1 Tensor for each combination). I noticed there is a Dataset class built in to TensorFlow. What is the advantage of this class over a regular Tensor? Is it mainly around handling large datasets / laziness? It doesn't appear to have features tailored for x vs y data, or test vs train. All my data fits into memory so I am not clear it would be beneficial to use the built in class over my current one. Of course, the first assumption is it would be foolish not to use the built in class.
Answer: The main advantage is in domains where you can't fit all of your data into memory.
However, I've seen improvements in performance even in cases where I have all my data in memory. I think two reasons contribute to this:
One is caching, where some operations (e.g. a mapping op) will be cached and performed only in the first epoch. This, obviously, is applicable if you have such a function.
Another one is prefetching. While the model is being trained on a batch on the GPU, the CPU loads and prepares the next batch. This can help save a lot of time.
Other capabilities include the vectorization of user-defined functions (e.g. for data augmentation) and their parallelization.
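The prefetching idea can be illustrated in plain Python (a conceptual sketch using a background thread and a bounded queue — not TensorFlow's actual implementation):

```python
import queue
import threading

def prefetch(iterable, buffer_size=2):
    """Yield items from `iterable`, preparing up to `buffer_size` items
    ahead in a background thread while the consumer is busy."""
    buf = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking the end of the stream

    def producer():
        for item in iterable:
            buf.put(item)  # blocks when the buffer is full
        buf.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is done:
            return
        yield item
```

In TensorFlow itself this role is played by `dataset.prefetch(...)`, typically placed at the end of the input pipeline.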
You can take a look at some benchmarks here. They are a bit unrelated, as they refer to cases where the dataset isn't all loaded into memory, but they might interest you nevertheless. | {
"domain": "datascience.stackexchange",
"id": 9172,
"tags": "tensorflow"
} |
Unlike rotation, why a $3\times 3$ translation matrix cannot be written in 3D? or can it be? | Question: The effect of rotation in 3d on a vector, $\vec{r}=x\hat{x}=y\hat{y}+z\hat{z}$ is given in the form a matrix product:$$\vec{r}\to O\vec{r}$$ where $O$ is a $3\times3$ proper orthogonal matrix. Can we define a $3\times3$ translation matrix $T$ in 3-d so that its action gives $\vec{r}\to\vec{r}+\vec{a}$: $$\vec{r}\to T\vec{r}=\vec{r}+\vec{a}?$$ If yes, what property should $T$ satisfy for example O satisfies $O^TO=OO^T=$identity. I never find it in books. The discussion here gives a $4\times 4$ translation matrix not $3\times 3$.
Answer: Matrices represent linear transformations on the space on which they act.
Translations by a vector $T_\vec{a}\,\vec{x}=\vec{x}+\vec{a}$ don't fall in that class, since
$$ T_\vec{a}(\vec{x}+\vec{y}) \neq T_\vec{a}\,\vec{x} + T_\vec{a}\,\vec{y} $$ | {
"domain": "physics.stackexchange",
"id": 54208,
"tags": "reference-frames, coordinate-systems, group-theory, geometry, galilean-relativity"
} |
Prime factorization of a number | Question: I'm trying to learn Python through the excellent online book Dive Into Python 3. I have just finished chapter 2 about data types and felt like trying to write a program on my own.
The program takes an integer as input and factorizes it into prime numbers:
$ python main.py 6144
6144 -> (2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3)
$ python main.py 253789134856
253789134856 -> (2, 2, 2, 523, 60657059)
I'd like to know if I could improve it in any way. Or if I've used any bad practice and so on.
#!/usr/bin/python
import sys
import math
def prime_factorize(n):
factors = []
number = math.fabs(n)
while number > 1:
factor = get_next_prime_factor(number)
factors.append(factor)
number /= factor
if n < -1: # If we'd check for < 0, -1 would give us trouble
factors[0] = -factors[0]
return tuple(factors)
def get_next_prime_factor(n):
if n % 2 == 0:
return 2
# Not 'good' [also] checking non-prime numbers I guess?
# But the alternative, creating a list of prime numbers,
# wouldn't it be more demanding? Process of creating it.
for x in range(3, int(math.ceil(math.sqrt(n)) + 1), 2):
if n % x == 0:
return x
return int(n)
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <integer>" % sys.argv[0])
exit()
try:
number = int(sys.argv[1])
except ValueError:
print("'%s' is not an integer!" % sys.argv[1])
else:
print("%d -> %s" % (number, prime_factorize(number)))
Edit:
@JeffMercado: Interesting link as I haven't encountered this before. And yes, the memoization technique should be easy to implement in my case since it is basically the same as the Fibonacci example.
In the Fibonacci example, they have a map (dictionary in Python?) outside the function, but should I do that? Global variables “are bad”? If that's the case, where does it belong? In the function prime_factorize() and I pass the map as an argument? I'm having a hard time deciding things like this, but I guess it gets easier with experience...
@WinstonEwert:
Prime factorization really only makes sense for integers. So you really use abs not fabs.
I couldn't find abs when I ran help(math), only fabs. I thought fabs was my only choice but I just found out that abs doesn't even reside in the math module. Fixed.
Tuples are for heterogeneous data. Keep this a list. It is conceptually a list of numbers, and so it should be stored in a Python list, not a tuple.
You write that tuples are for heterogeneous data. I searched SO and many seems to be of this opinion. However, in the book I'm following, the author gives a few points for using a tuple:
Tuples are faster than lists. If you’re defining a constant set of values and all you’re ever going to do with it is iterate through it, use a tuple instead of a list.
It makes your code safer if you “write-protect” data that doesn’t need to be changed. Using a tuple instead of a list is like having an implied assert statement that shows this data is constant, and that special thought (and a specific function) is required to override that.
That is why a returned a tuple. Is the author's points valid or just not in this case?
Your expression, int(math.ceil(math.sqrt(n)) + 1), seems to be more complicated than it needs to be. Couldn't you get by with int(math.sqrt(n) + 1)?
The upper value of range was unnecessarily complicated. Fixed.
Again, why are you trying to support something that isn't an int?
I was returning int(n) from get_next_prime_factor(n) since the number that is passed in to the function becomes a float when I divide it (in prime_factorize), so if I return just n from the function, I return a float which gets added to the list 'factors'. When I then print the factors, I get e.g. '11.0' as the last factor for the number 88.
Any other way to fix that?
I'd pass an exit code to indicate failure
Should I just exit(1) or something like that?
Answer: #!/usr/bin/python
import sys
import math
def prime_factorize(n):
factors = []
number = math.fabs(n)
Prime factorization really only makes sense for integers. So you really use abs not fabs.
while number > 1:
factor = get_next_prime_factor(number)
factors.append(factor)
number /= factor
if n < -1: # If we'd check for < 0, -1 would give us trouble
factors[0] = -factors[0]
return tuple(factors)
Tuples are for heterogeneous data. Keep this a list. It is conceptually a list of numbers, and so it should be stored in a Python list, not a tuple.
def get_next_prime_factor(n):
if n % 2 == 0:
return 2
# Not 'good' [also] checking non-prime numbers I guess?
# But the alternative, creating a list of prime numbers,
# wouldn't it be more demanding? Process of creating it.
for x in range(3, int(math.ceil(math.sqrt(n)) + 1), 2):
Your expression, int(math.ceil(math.sqrt(n)) + 1), seems to be more complicated than it needs to be. Couldn't you get by with int(math.sqrt(n) + 1)?
if n % x == 0:
return x
return int(n)
Again, why are you trying to support something that isn't an int?
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: %s <integer>" % sys.argv[0])
exit()
I'd pass an exit code to indicate failure
try:
number = int(sys.argv[1])
except ValueError:
print("'%s' is not an integer!" % sys.argv[1])
else:
print("%d -> %s" % (number, prime_factorize(number)))
Algorithm-wise, it will be more efficient to keep a list. The key is to keep information about the primes used between calls. In particular, you want a list that indicates the smallest prime factor, for example
factors[9] = 3, factors[15] = 3, factors[42] = 2
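Such a smallest-prime-factor table can be filled in one sieve pass and then reused across calls for fast factorizations (a sketch; names are illustrative):

```python
def build_spf(limit):
    """spf[n] = smallest prime factor of n, for 2 <= n <= limit."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:  # p is prime
            for multiple in range(p * p, limit + 1, p):
                if spf[multiple] == multiple:
                    spf[multiple] = p
    return spf

def factorize(n, spf):
    """Peel off the smallest prime factor until n is fully factored."""
    factors = []
    while n > 1:
        factors.append(spf[n])
        n //= spf[n]
    return factors
```

For example, `factorize(88, build_spf(100))` gives `[2, 2, 2, 11]`, and the table entry for 42 is 2.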
A simple modification of the Sieve of Eratosthenes should suffice to fill that in. Then easy lookups into the list should tell you exactly which prime is next. | {
"domain": "codereview.stackexchange",
"id": 1704,
"tags": "python, primes"
} |
can't set any settings for uvc_camera | Question:
Hi guys,
I want to use a webcam with the uvc_camera package. I'm using Ubuntu 11.10 server and compiled the package from source (because I use an ARM processor). It compiled without any problems, but when I try to run the launch file (camera_node.launch) I get the following failure message. I can see the video, but it's really slow, and all settings (color, saturation, white balance, etc.) are wrong; the picture looks almost cross-processed.
camera_node.launch:
PARAMETERS
* /uvc_camera/device
* /rosdistro
* /uvc_camera/fps
* /uvc_camera/height
* /uvc_camera/camera_info_url
* /uvc_camera/frame
* /rosversion
* /uvc_camera/width
NODES
/
uvc_camera (uvc_camera/camera_node)
auto-starting new master
process[master]: started with pid [3736]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to c57a2f6c-b5ae-11e1-bd3e-2e607c264e01
process[rosout-1]: started with pid [3751]
started core service [/rosout]
process[uvc_camera-2]: started with pid [3754]
[ INFO] [1339629806.595916290]: using default calibration URL
[ INFO] [1339629806.597411651]: camera calibration URL: file:///home/panda/.ros/camera_info/camera.yaml
[ERROR] [1339629806.598754424]: Unable to open camera calibration file [/home/panda/.ros/camera_info/camera.yaml]
[ WARN] [1339629806.599486846]: Camera calibration file /home/panda/.ros/camera_info/camera.yaml not found.
[ INFO] [1339629806.612029571]: camera calibration URL: file:///home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml
[ERROR] [1339629806.612853545]: Unable to open camera calibration file [/home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml]
[ WARN] [1339629806.613372344]: Camera calibration file /home/panda/ros_workspace/camera_umd/uvc_camera/example.yaml not found.
opening /dev/video2
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
discrete: 640x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x90: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x120: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 176x144: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x180: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 352x288: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 432x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 640x360: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x448: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x600: 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 864x480: 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 960x720: 1/15 1/10 2/15 1/5
discrete: 1024x576: 1/15 1/10 2/15 1/5
discrete: 1280x720: 1/10 2/15 1/5
discrete: 1600x896: 2/15 1/5
discrete: 1920x1080: 1/5
discrete: 2304x1296: 1/2
discrete: 2304x1536: 1/2
pixfmt 1 = ' ' desc = '34363248-0000-0010-8000-00aa003'
discrete: 640x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x90: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x120: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 176x144: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x180: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 352x288: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 432x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 640x360: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x448: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x600: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 864x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 960x720: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1024x576: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1280x720: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1600x896: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1920x1080: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
pixfmt 2 = 'MJPG' desc = 'MJPEG'
discrete: 640x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x90: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 160x120: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 176x144: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x180: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 320x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 352x288: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 432x240: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 640x360: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x448: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 800x600: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 864x480: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 960x720: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1024x576: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1280x720: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1600x896: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
discrete: 1920x1080: 1/30 1/24 1/20 1/15 1/10 2/15 1/5
int (Brightness, 0, id = 980900): 0 to 255 (1)
int (Contrast, 0, id = 980901): 0 to 255 (1)
int (Saturation, 0, id = 980902): 0 to 255 (1)
bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
int (Gain, 0, id = 980913): 0 to 255 (1)
menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
0: Disabled
1: 50 Hz
2: 60 Hz
int (White Balance Temperature, 0, id = 98091a): 2000 to 6500 (1)
int (Sharpness, 0, id = 98091b): 0 to 255 (1)
int (Backlight Compensation, 0, id = 98091c): 0 to 1 (1)
menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
int (Exposure (Absolute), 0, id = 9a0902): 3 to 2047 (1)
bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)
int (Pan (Absolute), 0, id = 9a0908): -36000 to 36000 (3600)
[ 418.948883] uvcvideo: Failed to query (SET_CUR) UVC control 10 on unit 3: -32 (exp. 2).
int (Tilt (Absolute), 0, id = 9a0909): -36000 to 36000 (3600)
int (Focus (absolute), 0, id = 9a090a): 0 to 250 (5)
bool (Focus, Auto, 0, id = 9a090c): 0 to 1 (1)
current value of 10094851 is 1
current value of 10094849 is 3
current value of 9963776 is 128
current value of 9963777 is 128
current value of 9963788 is 1
current value of 9963802 is 4000
unable to set control: Input/output error
ERROR: could not set some settings.
unable to set control
When I use the nodelet launchfile it doesn't even start at all.
camera_nodelet.launch:
PARAMETERS
* /uvc_camera/device
* /rosdistro
* /uvc_camera/fps
* /uvc_camera/height
* /uvc_camera/camera_info_url
* /uvc_camera/frame
* /rosversion
* /uvc_camera/width
NODES
/
camera_process (nodelet/nodelet)
uvc_camera (nodelet/nodelet)
auto-starting new master
process[master]: started with pid [966]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to e9f6652a-b5bf-11e1-8fa8-2e607c264e01
process[rosout-1]: started with pid [981]
started core service [/rosout]
process[camera_process-2]: started with pid [984]
process[uvc_camera-3]: started with pid [985]
[ INFO] [1339637168.735869637]: Loading nodelet /uvc_camera of type uvc_camera/Camera to manager /camera_process with the following remappings:
[ INFO] [1339637168.761138191]: waitForService: Service [/camera_process/load_nodelet] has not been advertised, waiting...
[ INFO] [1339637169.621276130]: waitForService: Service [/camera_process/load_nodelet] is now available.
[ERROR] [1339637169.630278816]: Failed to load nodelet [/uvc_camera] of type [uvc_camera/Camera]: According to the loaded plugin descriptions the class uvc_camera/Camera with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_proc/crop_decimate image_proc/debayer image_proc/rectify image_view/disparity image_view/image openni_camera/OpenNINodelet openni_camera/driver pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/TestListener pcl/TestPingPong pcl/TestTalker pcl/VFHEstimation pcl/VoxelGrid stereo_image_proc/disparity stereo_image_proc/point_cloud stereo_image_proc/point_cloud2 test_nodelet/ConsoleTest test_nodelet/Plus uvc_camera/CameraNodelet uvc_camera/StereoNodelet
[FATAL] [1339637169.662230720]: Service call failed!
[uvc_camera-3] process has died [pid 985, exit code 255].
log files: /home/panda/.ros/log/e9f6652a-b5bf-11e1-8fa8-2e607c264e01/uvc_camera-3*.log
Originally posted by dinamex on ROS Answers with karma: 447 on 2012-07-07
Post score: 3
Answer:
The uvc_camera wiki page states that dynamic control of the parameters is still on the TODO list (see here). A quick run through the code reveals that the parameters are being set starting at line 158 in src/uvc_cam.cpp.
The best way to change these parameters would be to expose these calls through dynamic reconfigure. The driver already prints out the available controls and their ranges, and you can use these as a guide to set the appropriate values in the dynamic reconfigure GUI.
Originally posted by piyushk with karma: 2871 on 2012-07-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by DocSmiley on 2012-08-17:
I've been having a similar issue. I have gone in and changed the control ranges in the area specified but it hasn't had an effect. According to the camera specs, I should not be setting the White Balance Temperature (Auto is there and set). The line is commented out but it's shown in the output.
Comment by RafBerkvens on 2013-04-12:
I also looked at the line suggested by @piyushk. The set_control() function calls a different id than the one printed: 9963776 would be brightness, while printed says brightness is 980900. However, my output suggests that the value set in the code is used, not the one printed. Very confusing.
Comment by lucasw on 2014-08-20:
https://github.com/ericperko/uvc_cam/blob/master/src/uvc_cam/uvc_cam.cpp#L233 (a fork) has some commented out lines like
set_control(V4L2_CID_BRIGHTNESS, 135);
Also from the command line:
uvcdynctrl -d /dev/video0 -s 'Brightness' 135 | {
"domain": "robotics.stackexchange",
"id": 10086,
"tags": "libuvc-camera, camera, uvc-cam, webcam, usb-cam"
} |
Turtlebot teleop running, no response | Question:
I have run the following commands...
roscore
roslaunch turtlebot_bringup minimal.launch
roslaunch turtlebot_teleop keyboard_teleop.launch
Everything appears to be working, but when pressing different movement keys (m, n, etc.) there is no response from the TurtleBot (i.e. it is not moving). When changing the different speeds with the e, q, etc. keys messages are being sent saying that the speeds have changed. So, I know there is some sort of communication between the terminal and the TurtleBot.
Yes, everything is plugged in correctly and charged.
Please help!
Originally posted by mkm_marquette on ROS Answers with karma: 1 on 2013-04-24
Post score: 0
Original comments
Comment by jorge on 2013-05-03:
Do you have TurtleBot 1 or TurtleBot 2? I would check also whether something gets published on velocity topic (rostopic echo /mobile_base/commands/velocity)
Comment by bit-pirate on 2013-05-26:
Just a side note: If you use roslaunch, you don't need to manually start up a roscore. roslaunch is doing this for you: it starts one, if it can't find a running roscore.
Answer:
Try using turtlebot dashboard. It will confirm whether communication between robot and laptop has been established correctly or not.
Originally posted by ayush_dewan with karma: 1610 on 2013-04-24
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13945,
"tags": "ros, turtlebot, keyboard-teleop"
} |
Horn clause on cnf | Question: Recall that a CNF formula is Horn if each clause contains at most one positive literal.
Is it true that any unsatisfiable Horn CNF formula has a polynomial-size tree-like resolution refutation? Does a proof exist in any paper?
Answer: Yes, any unsatisfiable Horn CNF has a tree-like resolution refutation with a linear number of clauses.
Consider the standard poly-time Horn-SAT algorithm, which works as follows. First, set all variables to false. Repeat: pick an unsatisfied clause; if it's negative, answer UNSATISFIABLE; otherwise, set its positive literal to true. If all clauses are satisfied, answer SATISFIABLE.
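The propagation loop just described can be sketched in Python (the clause encoding — a pair of positive/negative variable sets per clause — is an assumption for illustration):

```python
def horn_sat(clauses):
    """Decide a Horn formula; each clause is a (positives, negatives) pair of
    variable sets with len(positives) <= 1."""
    true_vars = set()                    # start with every variable set to false
    changed = True
    while changed:
        changed = False
        for pos, neg in clauses:
            # A clause is unsatisfied iff all its negated variables are true
            # and its (at most one) positive variable is false.
            if neg <= true_vars and not (pos & true_vars):
                if not pos:              # purely negative clause
                    return False, None
                true_vars |= pos         # force the positive literal true
                changed = True
    return True, true_vars

# (x0) and (~x0 or x1) and (~x0 or ~x1)  -> unsatisfiable
assert horn_sat([({0}, set()), ({1}, {0}), (set(), {0, 1})])[0] is False
# (x0) and (~x0 or x1)                   -> satisfiable with x0, x1 true
assert horn_sat([({0}, set()), ({1}, {0})]) == (True, {0, 1})
```

In the unsatisfiable case, the order in which `true_vars` grows is exactly the sequence $x_0,\dots,x_{t-1}$ used in the refutation below.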
Assume the input formula is unsatisfiable. If you look at the order in which the algorithm sets variables to true, as well as the offending clauses that make it so, you obtain a list of pairwise distinct variables
$$x_0,\dots,x_{t-1}$$
and input clauses
$$C_0,\dots,C_t$$
such that
$$C_i\subseteq\{\neg x_j:j<i\}\cup\{x_i\},\qquad i<t,$$
and
$$C_t\subseteq\{\neg x_j:j<t\}.$$
Construct a tree-like resolution refutation as follows: resolve $D_t:=C_t$ and $C_{t-1}$ over the $x_{t-1}$ variable, obtaining $D_{t-1}$; resolve $D_{t-1}$ and $C_{t-2}$ over $x_{t-2}$ to get $D_{t-2}$; etc., in the last step resolving $D_1$ and $C_0$ over $x_0$ to get $D_0$. By reverse induction on $i\le t$, we see that
$$D_i\subseteq\{\neg x_j:j<i\}.$$
In particular, $D_0$ is the empty clause.
Thus, you obtain a refutation of the input formula $\phi$ whose number of clauses is at most twice the number of distinct variables positively occurring in $\phi$, and whose total size is thus at most quadratic in the size of $\phi$. | {
"domain": "cstheory.stackexchange",
"id": 5844,
"tags": "reference-request, proof-complexity, resolution"
} |
Increment all values in a Map by 1 | Question: What's the neatest way to increment all values in a HashMap by 1? The map is <String, Integer>, but the key doesn't matter as every value will be incremented.
Is it "cleaner" to use lambdas with forEach/compute/etc. or just loop through the entries?
HashMap<String, Integer> map = mobCounter.get(mob);
for (Entry<String, Integer> e : map.entrySet()) {
map.put(e.getKey(), e.getValue() + 1);
}
This doesn't look too messy to me but I'm wondering if people like seeing lambdas more.
Answer: You must not modify the keys of a HashMap while iterating over it. You are just lucky it worked.
Instead of the put(...), write e.setValue(e.getValue() + 1).
If you have some kind of Multiset available (e.g. when you are using Guava), you could replace your HashMap with a Multiset<String>, which will make your intention clearer.
Using lambdas, your code would look like:
map.replaceAll((k, v) -> v + 1);
This looks very nice to me. It cannot get any shorter. | {
"domain": "codereview.stackexchange",
"id": 23137,
"tags": "java, lambda, hash-map"
} |
What is the explanation for the following result of sequential three Stern Gerlach? | Question: Consider the following three Stern Gerlach configurations from Spin and Quantum Measurement
I understand why a and b act so, but can you please explain why c behaves the way it does?
Will the particle in c still end up in the upper detector if only one particle had been shot?
Answer:
2) Yes.
1) These cases are why the Stern-Gerlach experiment is the fundamental experiment explaining spin.
The initial beam is unpolarized. It is a mixed state that cannot be described by a wave-function; rather, it requires a density matrix:
$$ \rho_1 = \frac 1 2\left(\begin{array}{cc}1&0\\0&1\end{array}\right)=
\frac 1 2 |\uparrow\rangle\langle \uparrow|
+\frac 1 2 |\downarrow\rangle\langle \downarrow|$$
Which is a "classical" mixture of half up and half down. Note that if you rotate the $z$-axis, $\rho_1$ remains unchanged, as it must.
The first device projects out spin up via:
$$ p_1 = \left(\begin{array}{cc}1&0\\0&0\end{array}\right)$$
so that:
$$ \rho_2 =p_1\rho_1p_1^{\dagger} = \left(\begin{array}{cc}\frac 1 2&0\\0&0\end{array}\right) = \frac 1 2 |\uparrow\rangle\langle \uparrow|=\psi_2\bar{\psi}_2$$
which is half the time nothing, and half the time a pure state:
(It's a little unclear why you declare '100' here, and not 50, but OK. It seems spin down is blocked).
The pure state can be written in the $x$-basis:
$$ \psi_2 =|\uparrow\rangle = \frac 1{\sqrt 2}[
|\leftarrow\rangle + |\rightarrow\rangle]$$
So in (a) and (b) you project out the pure $\pm x$ states so that:
$$ \psi_3^a=\frac 1{\sqrt 2}|\rightarrow\rangle=\frac 1 2[|\uparrow\rangle + |\downarrow\rangle]$$
$$ \psi_3^b=\frac 1{\sqrt 2}|\leftarrow\rangle=\frac 1 2[|\uparrow\rangle - |\downarrow\rangle]$$
and the final device just selects the 1/2 that is spin-up ($z$-coordinate).
In device c, you're adding both wave functions coherently:
$$ \psi_3^c = \psi_3^a + \psi_3^b = \frac 1 2\big(
[|\uparrow\rangle + |\downarrow\rangle]+
[|\uparrow\rangle - |\downarrow\rangle]
\big) = |\uparrow\rangle $$
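A quick numerical sanity check of cases (a) and (c), as a sketch with numpy (variable names are illustrative):

```python
import numpy as np

# z-basis kets as column vectors
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right = (up + down) / np.sqrt(2)   # +x eigenstate
left  = (up - down) / np.sqrt(2)   # -x eigenstate

def project(ket, onto):
    """Unnormalized projection |onto><onto|ket> (keeps branch amplitudes)."""
    return onto * np.dot(onto, ket)

psi2 = up  # beam after the first Z-device (spin-down blocked)

# Case (a): keep only the +x beam, then apply the last Z-device.
psi_a = project(psi2, right)
p_up_a = np.dot(up, psi_a) ** 2 / np.dot(psi_a, psi_a)   # conditional P(up) = 1/2

# Case (c): recombine both x-beams coherently before the last Z-device.
psi_c = project(psi2, right) + project(psi2, left)
all_up = np.allclose(psi_c, up)   # the full beam exits spin-up
```

The coherent sum in case (c) reconstructs $|\uparrow\rangle$ exactly, while the measured (blocked) beams in (a) and (b) each split 50/50 at the final Z-device.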
(Major) Edit: The OP asked to formulate this answer with density matrices, and I think it is a good idea, although there are some problems.
First: Given a pure state $|\psi\rangle$, the density matrix is:
$$\rho = |\psi\rangle\langle\psi| = p_{\psi}$$
and it is equal (in form) to the projection operator $p_\psi$ onto $|\psi\rangle$.
Second: With
$$|\uparrow\rangle=\left(\begin{array}{c}1\\0\end{array}\right)$$
$$|\downarrow\rangle=\left(\begin{array}{c}0\\1\end{array}\right)$$
The density matrices of the relevant pure state eigenstates are:
$$\rho_{+z}=\left(\begin{array}{cc}1&0\\0&0\end{array}\right) = p_{+z}$$
$$\rho_{-z}=\left(\begin{array}{cc}0&0\\0&1\end{array}\right)=p_{-z}$$
$$\rho_{\pm x}=\frac 1 2\left(\begin{array}{cc}1&\pm 1\\\pm 1&1\end{array}\right)=p_{\pm x}$$
where it is explicitly stated that they are numerically equal to projection operators for those spin states.
Third: When a state described by $\rho$ passes through a device with projector $p$, the transmitted state is:
$$\rho' = (p|\psi\rangle)(\langle\psi|p^{\dagger})=p\rho p^{\dagger}$$
Fourth: The key to the Stern-Gerlach experiment is that it entangles position and spin. We write the total "state" [note: this is tricky, $\rho$ is not a wave-function] as a column matrix representing the up and lower beams:
$$ \rho = \left[\begin{array}{c}\rho_{\rm top}\\\rho_{\rm bot}\end{array}\right]$$
Here I am using $[$ and $]$ to indicate this array is in position space, not spin-space.
We now have the tools to master the problem.
The beam starts unpolarized:
$$ \rho_0 = \left(\begin{array}{cc}\frac 1 2&0\\0&\frac 1 2\end{array}\right)=\frac 1 2 I_2$$
It passes through a $Z$-device, creating a spin/space entangled state:
$$\rho_1 = \left[\begin{array}{c}p_{+z}\rho_0p_{+z}^{\dagger} \\p_{-z}\rho_0p_{-z}^{\dagger}\end{array}\right]$$
$$\rho_1 = \left[\begin{array}{c}\left(\begin{array}{cc}\frac 1 2 &0\\0&0\end{array}\right)\\\left(\begin{array}{cc}0&0\\0&\frac 1 2\end{array}\right)\end{array}\right]$$
At this point, we block the lower path and renormalize:
$$\rho'_1 = \left(\begin{array}{cc}1&0\\0&0\end{array}\right)$$
Now run this through the $X$-device:
$$\rho_2 = \left[\begin{array}{c}p_{+x}\rho_1'p_{+x}^{\dagger} \\p_{-x}\rho_1'p_{-x}^{\dagger}\end{array}\right]$$
$$\rho_2 = \left[\begin{array}{c}\frac 1 4\left(\begin{array}{cc}1&1\\1&1\end{array}\right)\\ \frac 1 4\left(\begin{array}{cc}1&-1\\-1&1 \end{array}\right)\end{array}\right]$$
Case A:
We take $\rho_{2,{\rm top}}$ and run it through a $Z$-device:
$$\rho_A = \left[\begin{array}{c}p_{+z}\rho_{2,{\rm top}}p_{+z}^{\dagger} \\p_{-z}\rho_{2,{\rm top}}p_{-z}^{\dagger}\end{array}\right]$$
$$\rho_A = \left[\begin{array}{c}\frac 1 4\left(\begin{array}{cc}1&0\\0&0\end{array}\right)\\ \frac 1 4\left(\begin{array}{cc}0&0\\0&1 \end{array}\right)\end{array}\right]\rightarrow
\left[\begin{array}{c}\frac 1 4 |\uparrow\rangle\\\frac 1 4 |\downarrow\rangle\end{array}\right]
$$
Where the last term is formulated as pure states.
Case B:
We take $\rho_{2,{\rm bot}}$ and run it through a $Z$-device:
$$\rho_B = \left[\begin{array}{c}p_{+z}\rho_{2,{\rm bot}}p_{+z}^{\dagger} \\p_{-z}\rho_{2,{\rm bot}}p_{-z}^{\dagger}\end{array}\right]$$
$$\rho_B = \left[\begin{array}{c}\frac 1 4\left(\begin{array}{cc}1&0\\0&0\end{array}\right)\\ \frac 1 4\left(\begin{array}{cc}0&0\\0&1 \end{array}\right)\end{array}\right]\rightarrow
\left[\begin{array}{c}\frac 1 4 |\uparrow\rangle\\\frac 1 4 |\downarrow\rangle\end{array}\right]$$
All is well.
Case C:
Here we coherently recombine the beams emerging from the $X$-device. Recall that:
$$ \rho_{2,{\rm top}}=\frac 1 2[|\rightarrow\rangle\langle\rightarrow|] $$
$$ \rho_{2,{\rm bot}}=\frac 1 2[|\leftarrow\rangle\langle\leftarrow|] $$
where:
$$ |\rightarrow\rangle=\frac 1{\sqrt 2}\left(\begin{array}{c}1\\1\end{array}\right)$$
$$ |\leftarrow\rangle=\frac 1{\sqrt 2}\left(\begin{array}{c}1\\-1\end{array}\right)$$
in the Z-basis column vectors. If we add the density matrices, we are by definition creating a mixed state...thereby destroying coherency. We can't do that. We need to consider the entangled state coming out of the $X$-device:
$$\Psi_2 = \left[\begin{array}{c}
\left(\begin{array}{cc}\frac 1 2\\\frac 1 2\end{array}\right)\\
\left(\begin{array}{cc}\frac 1 2\\-\frac 1 2\end{array}\right)
\end{array}\right]=
\left[\begin{array}{c}\psi_{2,{\rm top}}\\\psi_{2,{\rm bot}}
\end{array}\right]
$$
The final Z-device is a linear projection, so it yields:
$$\Psi_C=\left[\begin{array}{c}
p_{z+}\psi_{2,\rm top}\\
p_{z-}\psi_{2,\rm top}
\end{array}\right]
+
\left[\begin{array}{c}
p_{z+}\psi_{2,\rm bot}\\
p_{z-}\psi_{2,\rm bot}
\end{array}\right] =
\left[\begin{array}{c}
p_{z+}(\psi_{2,\rm top}+\psi_{2,\rm bot})\\
p_{z-}(\psi_{2,\rm top}+\psi_{2,\rm bot})
\end{array}\right]
$$
$$
\Psi_C=\left[\begin{array}{c}
\left(\begin{array}{c}1\\0\end{array}\right)\\
\left(\begin{array}{c}0\\0\end{array}\right)
\end{array}\right]
$$ | {
"domain": "physics.stackexchange",
"id": 80632,
"tags": "quantum-mechanics"
} |
What is $E$ with respect to the relation between an electric field and its polarization density? | Question: To avoid confusion let:-
$E_0$ be electric field we applied, $E_p$ be electric field caused by polarization and $E_n$ be net electric field i.e. $E_n = E_0 - E_p$
In the relation
$P=\epsilon_o\chi_eE$ which E is used?
Answer: The $\mathbf{E}$ in the formula is the electric field at equilibrium $\mathbf{E}_\mathrm{total}$ (In the following answer I shall use $\mathbf{E}_\mathrm{total}$ in place of $\mathbf{E}_\mathrm{n}$).
What happens when we apply an external field $\mathbf{E}_0$ is that the material become polarised, and produces a polarisation field $\mathbf{E}_{\mathrm{p},1}$ whose direction is opposite that of the applied field. This field contributes to the total field, which in turn modifies the polarisation to produce another polarisation field $\mathbf{E}_{\mathrm{p},2}$ ... until the process ends at some combination of $\mathbf{E}_\mathrm{total}$ and $\mathbf{E}_\mathrm{p}$ such that $\mathbf{P} = \epsilon_0\chi_\mathrm{e}\mathbf{E}_\mathrm{total}$.
To increase the polarisation, we will first need to increase the applied field $\mathbf{E}_0$, and get a response $\mathbf{E}_\mathrm{p}$ that won’t be as strong, resulting in a net increase in $\mathbf{E}_\mathrm{total}$. The best thing about linear dielectrics is that $\mathbf{P}$ happens to be proportional to $\mathbf{E}_\mathrm{total}$.
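The iterative picture above can be illustrated numerically. As a sketch (assuming a toy slab geometry where the polarisation field is simply $E_\mathrm{p} = \chi_\mathrm{e} E_\mathrm{total}$, and $\chi_\mathrm{e} < 1$ so the naive iteration converges):

```python
# Fixed-point iteration for the equilibrium field in a toy dielectric slab.
# Assumptions (illustrative): E_p = chi * E_total, and chi < 1.
E0, chi = 1.0, 0.5
E_total = E0
for _ in range(100):
    E_p = chi * E_total      # polarisation responds to the current net field
    E_total = E0 - E_p       # net field including the new response

# The iteration settles at the closed-form equilibrium E0 / (1 + chi).
```

At equilibrium, $P = \epsilon_0\chi_\mathrm{e}E_\mathrm{total}$ is proportional to the net field, exactly as the formula states.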
As a sidenote, the relation $\mathrm{\mathbf{P}} = \epsilon_0\chi_\mathrm{e} \mathrm{\mathbf{E}} $ is only true for a homogeneous isotropic linear dielectric (which for the sake of brevity is usually referred to as a linear dielectric), when the external field is not too strong. The general form of polarisation $\mathrm{\mathbf{P}}$ is
$$ \frac{P_i}{\epsilon_0} = \chi^{(1)}_{ij}E_j \; + \chi^{(2)}_{ijk}E_jE_k\;+\chi^{(3)}_{ijkl}E_jE_kE_l\;+ \;\cdots\;, $$
where the Einstein summation convention is used, $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ is the second-order susceptibility tensor, $\chi^{(3)}$ is the third-order susceptibility tensor, and so on. | {
"domain": "physics.stackexchange",
"id": 98410,
"tags": "electrostatics, electric-fields, polarization, dielectric"
} |
rospack command not found | Question:
Hello all,
while trying to build my package in ROS, after typing sudo make in the terminal, i have:
make: rospack: Command not found
Makefile:1: /cmake.mk: No such file or directory
make: *** No rule to make target `/cmake.mk'. Stop.
I think this could be a permissions problem, such that root cannot see rospack. Can anyone help me with that, please? How shall I solve this?
Thanks
Originally posted by Yami on ROS Answers with karma: 1 on 2013-08-29
Post score: 0
Original comments
Comment by Yami on 2013-08-29:
ls -l
total 212
drwxr-xr-x 13 root root 4096 Aug 13 13:13 build
drwxr-xr-x 2 nazemi nazemi 4096 Aug 29 15:25 Desktop
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Documents
drwxr-xr-x 3 nazemi nazemi 4096 Aug 27 12:53 Downloads
-rw-r--r-- 1 nazemi nazemi 8445 Aug 13 10:16 examples.desktop
drwxrwxr-x 8 nazemi nazemi 4096 Aug 16 13:37 fuerte
-rw-rw-r-- 1 nazemi nazemi 147512 Aug 13 17:32 iniKarte.pgm
-rw-rw-r-- 1 nazemi nazemi 136 Aug 13 17:32 iniKarte.yaml
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Music
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Pictures
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Public
drwxr-xr-x 16 nazemi nazemi 4096 Aug 29 16:30 ros_stacks
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Templates
drwxr-xr-x 2 nazemi nazemi 4096 Aug 13 10:21 Videos
drwxr-xr-x 20 nazemi nazemi 4096 Aug 13 10:51 youbot_driver
Comment by Martin Günther on 2013-08-29:
You shouldn't be doing make as root.
Comment by Yami on 2013-08-30:
In this case I just have a pile of errors and warnings, as below:
$ make
[rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/info: boost::filesystem::directory_iterator::construct: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info"
[rospack] Warning: error while looking for /opt/ros/fuerte/stacks/openni_camera/info/rospack_nosubdirs: boost::filesystem::status: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info/rospack_nosubdirs"
[rospack] Warning: error while looking for /opt/ros/fuerte/stacks/openni_camera/info/manifest.xml: boost::filesystem::status: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info/manifest.xml"
[rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/info: boost::filesystem::directory_iterator::construct: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info"
[rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/launch: boost::filesystem::directory_iterato
Comment by Martin Günther on 2013-08-30:
If you have run make as root somewhere in your /opt/ros/fuerte folder, you've probably messed up the installation. Your best bet of recovery is to uninstall all ros-fuerte-* packages (using Synaptic, aptitude, whatever) and reinstall them. Then never run make as root again.
Comment by Yami on 2013-08-30:
no i did not do that in /opt/ros/fuerte folder, but in the folder that my new package is placed:
~/ros_stacks/simple_navigation_goals$
Answer:
Why do you need to execute "make" as root?
When you do "sudo make", all the environment variables needed for ROS to work (set by /opt/ros/groovy/setup.bash) are no longer set; that is why rospack cannot be found.
You should, either:
-Not run make as root (recommended)
or
-Set /opt/ros/groovy/setup.bash for the root user
Originally posted by Martin Peris with karma: 5625 on 2013-08-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Yami on 2013-08-30:
Thanks for your answers first of all, but running not as root does not help. I also found this:
http://answers.ros.org/question/30554/rospack-command-not-found-in-makefile-when-trying-to-build/
but have no idea how he has the problem solved ! :(
Comment by Martin Günther on 2013-08-30:
I'd seriously advise against option (2). Only do that when you know exactly what you're doing and have a good reason to and made sure you're not accidentally changing files in /opt/ros/*. A good rule of thumb is that if you know enough to safely do this, you probably know a better way, too. :)
Comment by Yami on 2013-08-30:
As said $ make produces:
[rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/info: boost::filesystem::directory_iterator::construct: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info" [rospack] Warning: error while looking for /opt/ros/fuerte/stacks/openni_camera/info/rospack_nosubdirs: boost::filesystem::status: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info/rospack_nosubdirs" [rospack] Warning: error while looking for /opt/ros/fuerte/stacks/openni_camera/info/manifest.xml: boost::filesystem::status: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info/manifest.xml" [rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/info: boost::filesystem::directory_iterator::construct: Permission denied: "/opt/ros/fuerte/stacks/openni_camera/info" [rospack] Warning: error while crawling /opt/ros/fuerte/stacks/openni_camera/launch: boost::filesystem::directory_iterato
Comment by Yami on 2013-08-30:
I also should admit i'm totally new to Ubuntu and ROS
Comment by Martin Günther on 2013-08-30:
@Yami: If you reinstall ros, everything in /opt/ros should be readable by your normal user. Whenever you want to modify something, copy it to your home dir instead and put that dir on your ROS_PACKAGE_PATH. That way you can safely edit stuff.
Comment by Yami on 2013-08-30:
Thanks Martin.
Now I have:
~/ros_stacks/simple_navigation_goals$ ls -l
drwxrwxr-x 2 nazemi nazemi 4096 Aug 30 08:50 bin
drwxrwxr-x 3 nazemi nazemi 4096 Aug 30 12:26 build
-rw-rw-r-- 1 nazemi nazemi 1517 Aug 30 10:23 CMakeLists.txt
drwxrwxr-x 3 nazemi nazemi 4096 Aug 29 16:30 include
-rw-rw-r-- 1 nazemi nazemi 128 Aug 29 16:30 mainpage.dox
-rw-rw-r-- 1 nazemi nazemi 41 Aug 29 16:30 Makefile
-rw-rw-r-- 1 nazemi nazemi 404 Aug 30 09:37 manifest.xml
drwxrwxr-x 2 nazemi nazemi 4096 Aug 30 09:00 src
Comment by Yami on 2013-08-30:
drwxrwxr-x 2 nazemi nazemi 4096 Aug 30 08:50 bin
Comment by Yami on 2013-08-30:
drwxrwxr-x 3 nazemi nazemi 4096 Aug 30 12:26 build
Comment by Yami on 2013-08-30:
So you think I should reinstall ros? to be able to change build?
"domain": "robotics.stackexchange",
"id": 15389,
"tags": "ros"
} |
what's color dropping meaning in data augmentation? | Question: I'm reading about contrastive learning paper, they use data augmentation as method.
By the way, I have some questions about data augmentation. What is color dropping?
color jittering is changing HSL(Hue, saturation, lightness) on image. But I don't know color dropping.
I search on google, but it doesn't help. I assumed color dropping as ...
bluring?
remove color on image? then show as grayscale or binary image?
add some colored water drops on image? like raining windows which rain contained color water.
change the color some part on image?
change a image as pointillism art?
I really don't know color dropping on data augmentation. could you help me?
Thank you.
Answer: By dropping color channels, they mean replacing the color channel with noise.
For example:
import numpy as np
import cv2
import matplotlib.pyplot as plt
test_image_path = '4.2.07.tiff' # peppers from https://sipi.usc.edu/database/database.php
# read image
img = cv2.imread(test_image_path, cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = img.astype(float)
img = (img / 255.0) - 0.5
plt.imshow(img + 0.5)
Then drop channels (the output will vary based on the parameters of the Gaussian noise):
# keep the first channel, drop the other two
img[:, :, 1] = np.random.normal(0, 0.1, (img.shape[:2]))
img[:, :, 2] = np.random.normal(0, 0.1, (img.shape[:2]))
plt.imshow(img + 0.5) | {
"domain": "ai.stackexchange",
"id": 3902,
"tags": "data-augmentation"
} |
What is a slow-roll field? | Question: I am studying inflation reading this article http://lanl.arxiv.org/abs/hep-ph/0406191 and in section 3 it states:
This inflaton field may evolve slowly down its effective potential, or not. While an approximately constant energy density seems to be required, a slow-roll field is only a simplifying assumption. Non-slow-roll models of inflation exist and for the moment make predictions that are compatible with observations.
I don't know what a slow-roll field is, and I am not able to find any good definition.
Answer: The most straightforward theories of inflation assume there exists some scalar field $\phi$ that permeates the Universe and drives inflation. Over time this scalar field changes, and the rate of change is given by $\dot{\phi}$. There is also some "potential energy" associated with the scalar field, which is given by some function $V(\phi)$. The specific functional form of $V(\phi)$ is given by whatever theory of inflation that a theorist has postulated.
The "slow roll approximation" simply states that $\frac{1}{2}\dot{\phi}^2 \ll V(\phi)$. In other words, the "kinetic energy" of the field is negligible compared to the "potential energy" of the field. Under this assumption, it is possible to show that the equation of motion of the expansion of the Universe leads to approximately exponential expansion, i.e., inflation.
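As a rough numerical illustration (a toy sketch, not from the article: units with $M_p = 1$, a quadratic potential $V = \frac{1}{2}m^2\phi^2$, and simple Euler steps), the field quickly settles onto the slow-roll attractor where the kinetic energy is a small fraction of $V$:

```python
import math

m, phi, v, dt = 1.0, 15.0, 0.0, 1e-3      # toy parameters, illustrative only

def V(f):                                  # quadratic inflaton potential
    return 0.5 * m ** 2 * f ** 2

for _ in range(5000):                      # Euler steps for phi'' + 3*H*phi' + V'(phi) = 0
    H = math.sqrt((0.5 * v ** 2 + V(phi)) / 3.0)   # Friedmann equation with M_p = 1
    v += -(3.0 * H * v + m ** 2 * phi) * dt
    phi += v * dt

ratio = 0.5 * v ** 2 / V(phi)              # kinetic / potential: small on the attractor
```

With the kinetic term subdominant, $H$ is approximately constant and the scale factor grows nearly exponentially.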
The slow roll approximation simplifies the equations of motion and guarantees that inflation will occur. However, as the text you have quoted states, it is also possible to have inflation without resorting to the slow roll approximation. | {
"domain": "physics.stackexchange",
"id": 30147,
"tags": "quantum-field-theory, cosmology, cosmological-inflation"
} |
Sum of squares of two largest of three numbers | Question: Given the following problem (SICP Exercise 1.3):
Define a procedure that takes three
numbers as arguments and returns the
sum of squares of the two largest
numbers.
I wrote the following (somewhat clumsy) solution in Scheme. How can I make it better?
(define (greatest-two a b c)
(cond ((> a b) (cond ((> b c) (list a b))
(else (list a c))))
((> a c) (cond ((> c b) (list a c))
(else (list a b))))
(else (list b c))))
(define (square x) (* x x))
(define (sum-of-squares a b) (+ (square a) (square b)))
(define (f a b c)
(apply sum-of-squares (greatest-two a b c)))
Answer: Scheme is a lot more functional than Common Lisp. The way you can apply that to this situation is by making more use of passing functions around (to the point that this problem is almost a one-liner). For the puzzle as written, I'd do something like
(define (big-squares a b c)
(apply + (map (lambda (n) (* n n))
(take (sort (list a b c) >) 2))))
If you wanted to decompose it properly into named functions
(define (square num) (expt num 2))
(define (sum num-list) (apply + num-list))
(define (two-biggest num-list) (take (sort num-list >) 2))
(define (big-squares a b c) (sum (map square (two-biggest (list a b c)))))
If you wanted to go completely overboard, also toss in
(define (squares num-list) (map square num-list))
which would let you define big-squares as
(sum (squares (two-biggest (list a b c))))
(code above in mzscheme) | {
"domain": "codereview.stackexchange",
"id": 17837,
"tags": "lisp, scheme, sicp"
} |
Compile is successful, but no executable generated? | Question:
Hi guys,
I have created my own package in which I do some image processing with OpenCV and PCL. After compiling, I get no errors and the build is successful. However, no executable is generated in the /bin folder. Note: this is not my first package; I have created other nodes with no problem.
In the Cmakelist.txt, I have added the following lines
rosbuild_add_executable(myNode src/myNode.cpp)
rosbuild_add_library(myNode src/myNode.cpp)
Here is the output of the compilation:
[ rosmake ] Results:
[ rosmake ] Built 31 packages with 0 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/samme/.ros/rosmake/rosmake_output-20130322-160141
When I run the command $ rosrun firefly_mv myNode I get: [rosrun] Couldn't find executable named myNode below /home/samme/fuerte_workspace/firefly_mv.
I am not sure why the executable is not being generated here, I have created many other nodes and never got this problem. Any help would be greatly appreciated.
Thanks and regards,
Khalid
Originally posted by K_Yousif on ROS Answers with karma: 735 on 2013-03-21
Post score: 0
Original comments
Comment by DamienJadeDuff on 2013-03-22:
Strange. Can you post the result of going into your "build" directory and there running "cmake ..;VERBOSE=1 make" - should show what it's doing while compiling. Peace.
Answer:
You are declaring a library with the same name as the executable AFTER the executable. My guess is that only the last one is generated.
You should not build the library, try commenting that out.
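For reference, the suggested CMakeLists.txt change might look like this (a sketch: it assumes the node does not actually need a library target; if it does, the library must get a different target name):

```cmake
# Keep only the executable target; the library target below used the
# same name ("myNode") and clobbered the executable, so comment it out.
rosbuild_add_executable(myNode src/myNode.cpp)
# rosbuild_add_library(myNode src/myNode.cpp)
```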
Originally posted by Claudio with karma: 859 on 2013-03-22
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 13481,
"tags": "ros, compilation, cmake, rosbuild"
} |
Tools for rosbag analysis - Others that are used? | Question:
What tools have people developed and use for rosbag analysis?
I know of rqt_bag (which is an absolutely awesome tool), which allows you to browse the topics in a concise timeline, and using that in conjunction with rqt_plot to plot data. However, it's still a little cumbersome to extract data for more detailed analysis.
I've created a small package using numpy that will parse a rosbag given a set of topic paths (consisting of fields and optionally indices, much like extracting data for plotting) and return simple multi-variable data sets for ease of plotting and analysis, which includes interpolating data sets so that they are on a time scale (with the same spacing). This allows basic errors to be computed. If you also published the same data, you can use this to potentially see delays between the different publishers.
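The interpolation step described above can be sketched with plain NumPy (illustrative names; this is not the package itself): each topic's samples are resampled onto one evenly spaced time grid, after which point-wise errors or delays can be compared directly.

```python
import numpy as np

def to_common_grid(series, t_start, t_stop, dt):
    """Resample (timestamps, values) pairs onto one evenly spaced time
    grid so all data sets share the same spacing.  Note that np.interp
    clamps to the edge values outside each series' own time range."""
    t = np.arange(t_start, t_stop, dt)
    return t, [np.interp(t, ts, vs) for ts, vs in series]

# two "topics" sampled at different, unaligned times
a = (np.array([0.0, 1.0, 2.0]), np.array([0.0, 10.0, 20.0]))
b = (np.array([0.5, 1.5]),      np.array([5.0, 15.0]))
t, (ai, bi) = to_common_grid([a, b], 0.0, 2.0, 0.5)
print(np.max(np.abs(ai - bi)))  # point-wise comparison is now trivial
```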
You can always run the conversion, then save the final datasets to Matlab to plot and examine as well. Really easy to do in Spyder.
Will upload the package in a bit.
EDIT:
Googled and saw Pandas from this presentation: https://speakerdeck.com/nzjrs/managing-complex-experiments-automation-and-analysis-using-robot-operating-system
Looks interesting, will start looking at this.
Originally posted by eacousineau on ROS Answers with karma: 174 on 2013-10-16
Post score: 3
Answer:
If you are interested in using pandas. I have a simple somewhat inefficient python package to get a bag file into a pandas dataframe indexed by the bag record time.
https://github.com/aktaylor08/ros_pandas
Originally posted by aktaylor08 with karma: 56 on 2014-06-13
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by eacousineau on 2014-06-18:
Thanks! I'll check that out.
Comment by xaedes on 2014-09-19:
This is awesome. To get a dataframe with a common time index (no interleaved data) use this on top of ros_pandas: http://pastebin.com/hJFR7HbA | {
"domain": "robotics.stackexchange",
"id": 15884,
"tags": "rosbag"
} |
How can dopamine modulate synaptic strength? | Question: Does dopamine act on G protein coupled receptor, leading to more Ca2+ channels on the postsynaptic knob?
Also, how is the specificity of the location (of the brain) that dopamine acts on controlled? Or is it controlled at all? Is this chemical associated with addiction?
Answer: Yes, dopamine receptors (D1–D5) are G protein coupled receptors, which act primarily on adenylate cyclase (changing the cAMP concentration, which activates protein kinase A, which in turn phosphorylates other proteins such as CREB... you can find the pathway in the following article). D1 receptors activate adenylate cyclase; D2 receptors inhibit its activity.
I wouldn't say it directly alters the Ca2+ channel concentration per se, the article says however it can modify their state.
According to this article, the G proteins of these receptors can also have other effects (such as activating phospholipase C, which produces IP3, which triggers Ca2+ release from the ER into the cytoplasm, which in turn activates protein kinase C), and new research suggests that dopamine could also act through G protein-independent mechanisms via interactions with ion channels.
specificity of the location (of the brain) that dopamine acts on controlled?
The idea is that cells using a certain neurotransmitter may induce growth of cells with the same transmitter, thus forming specific circuits. I don't know, however, how exactly this happens and how the cells communicate to form specific connections (like the dopaminergic connection from the substantia nigra to the basal ganglia) during development. I'd bet on some morphogens. Someone else might know better :).
Yes, dopamine is associated with addiction, as it's part of the brain's reward system. Many drugs inhibit its transport from the synaptic cleft back into cells, which prolongs its activity; the brain registers this as the same positive feeling you get when you've done something good (a reward). | {
"domain": "biology.stackexchange",
"id": 5216,
"tags": "biochemistry, molecular-biology, neuroscience"
} |
How hot must a star get before it is considered to be a star? | Question: How hot must a star get before it actually becomes a star? Why does it need to get so hot? Please find an official site to quote from, if you can.
Answer: Star temperature is an interesting question, since temperature varies a lot in a star. I think that the more relevant temperature to this question is the core temperature of the star: a star is born when it starts to burn hydrogen in its core.
Finally, hydrogen begins to fuse in the core of the star, and the rest of the enveloping material is cleared away. This ends the protostellar phase and begins the star's main sequence phase on the H–R diagram.
(See this Wikipedia page)
The temperature needed for hydrogen burning is about 10 million kelvin, so that's how hot a star's core must get for it to be considered a star. It needs to get so hot because otherwise it will fail to burn hydrogen and will become a "failed star": a brown dwarf.
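A rough order-of-magnitude estimate (standard textbook values, my addition rather than a quote) shows why the core must be this hot:

```latex
% Mean thermal energy per particle at the core temperature:
k_{B} T \approx (8.6\times 10^{-5}\,\mathrm{eV/K}) \times 10^{7}\,\mathrm{K}
        \approx 0.9\,\mathrm{keV}
% Coulomb barrier between two protons at nuclear distances (~1 fm):
E_{C} \approx \frac{e^{2}}{4\pi\varepsilon_{0} r}
      \approx \frac{1.44\,\mathrm{MeV\,fm}}{1\,\mathrm{fm}}
      \approx 1.4\,\mathrm{MeV}
% E_C is roughly a thousand times k_B T, so classically fusion could
% not occur; quantum tunnelling plus the high-energy tail of the
% Maxwell-Boltzmann distribution make hydrogen burning possible
% near 10^7 K.
```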
Edit:
Surface temperature can be misleading, since the temperature ranges in which stars lie are populated not only by stars, but also by other objects such as hot Jupiters, whose surface temperatures range from 1000 to 3000 K.
"domain": "astronomy.stackexchange",
"id": 1144,
"tags": "star, heat"
} |
C99 - An alphanumeric random char generator | Question: I have built a very small program, a command line utility called randchars, for generating alphanumeric sequences of a given length (up to a maximum length).
$ randchars 12
Prints twelve pseudo-random upper-/lower-case alphanumeric characters to stdout. The command line options -l and -u can be given, if somebody wishes to have only lower- or uppercase alphanumeric characters.
Here is the code, this is my first project while reading K&R. I know that in C99 I do not need to declare the variables on top of the functions, but I somehow grew to like it this way^^
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#define MAX_CHARS 32
int is_lowercase(char c);
int is_uppercase(char c);
int is_digit(char c);
enum OPTS {
MIXED, // default
UPPERCASE, // -u
LOWERCASE, // -l
};
int main(int argc, char *argv[])
{
unsigned i, size, char_ok;
srand(time(0));
char chars[MAX_CHARS + 1];
char rchar, c;
enum OPTS mode;
// get options
if (argc < 2 || 3 < argc)
goto error_argc;
// size is last arg
size = atoi(argv[argc - 1]);
if (size == 0)
goto usage;
if (size > MAX_CHARS)
goto error_size;
mode = MIXED;
if (argc == 3) {
if (strcmp(argv[1], "-l") == 0)
mode = LOWERCASE;
else if (strcmp(argv[1], "-u") == 0)
mode = UPPERCASE;
else
goto error_opt;
}
// do actual computation of random chars
for (i = 0; i < size; i++) {
do {
c = rand() % ('z' + 1);
chars[i] = c;
switch(mode) {
case MIXED:
char_ok = is_digit(c) || is_uppercase(c) || is_lowercase(c);
break;
case LOWERCASE:
char_ok = is_digit(c) || is_lowercase(c);
break;
case UPPERCASE:
char_ok = is_digit(c) || is_uppercase(c);
break;
}
} while (!char_ok); // repeat if char is not ok
}
chars[size] = '\0';
printf("%s\n", chars);
return 0;
error_size:
printf("error: maximum sequence length: %u\n", MAX_CHARS);
goto usage;
error_opt:
printf("error: invalid option: %s\n", argv[1]);
goto usage;
error_argc:
printf("error: invalid amount of arguments: %u\n", argc);
goto usage;
usage:
printf("usage:\n\t%s [ -u | -l ] <n>\n", argv[0]);
return 1;
}
int is_lowercase(char c)
{
if ('a' <= c && c <='z')
return 1;
else
return 0;
}
int is_uppercase(char c)
{
if ('A' <= c && c <='Z')
return 1;
else
return 0;
}
int is_digit(char c)
{
if ('0' <= c && c <='9')
return 1;
else
return 0;
}
Answer: First of all, this is well written and organized code. Except for one part. A dreadful part. A part that was and is the cause of many issues, even today.
Don't use goto
Using goto brings multiple problems. While I won't repeat the statements from the linked question, I can certainly tell that it slowed me down: to understand your code, I had to repeatedly jump between lines 32+ and lines 72+.
Compare this
// size is last arg
size = atoi(argv[argc - 1]);
if (size == 0)
goto usage;
if (size > MAX_CHARS)
goto error_size;
to this:
// size is last arg
size = atoi(argv[argc - 1]);
if (size == 0) {
usage();
return 1;
}
if (size > MAX_CHARS) {
fprintf(stderr, "error: maximum sequence length: %u\n", MAX_CHARS);
usage();
return 1;
}
The latter one does not force me to read other parts of the code, I immediately know what will happen, and I also know all effects around the used variables. goto does not yield any benefit here.
Don't use rand() % NUM for random numbers
That's covered by the C FAQ. Instead, use something along
c = '0' + rand() / (RAND_MAX / ('z' - '0' + 1) + 1)
This has the nice effect that all except 13 of the possible cs are already alphanumeric.
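A small self-contained sketch of that suggestion (names like rand_index and kAlphabet are illustrative, not from the original): drawing directly from a 62-character table with an unbiased index removes the rejection loop entirely.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* 62 alphanumeric characters; index into this table instead of
 * generating arbitrary bytes and rejecting most of them. */
static const char kAlphabet[] =
    "0123456789"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz";

/* C FAQ division method: maps rand() onto [0, n) without the
 * low-order-bit bias of rand() % n. */
static int rand_index(int n)
{
    return (int)(rand() / (RAND_MAX / n + 1));
}

/* Fill buf with len random alphanumeric characters plus a NUL. */
static void random_chars(char *buf, size_t len)
{
    size_t n = strlen(kAlphabet); /* 62 */
    for (size_t i = 0; i < len; i++)
        buf[i] = kAlphabet[rand_index((int)n)];
    buf[len] = '\0';
}
```

In main, calling srand(time(0)) once and then random_chars(chars, size) would replace the whole do/while rejection loop.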
Use stderr for error messages
For errors, you want to use stderr instead of stdout.
And that's all. As I said, the code itself is well-written. I would prefer late declared variables instead of the K&R-style early one, but that's personal preference. For argument parsing, you might want to have a look at getopt. | {
"domain": "codereview.stackexchange",
"id": 42664,
"tags": "c, console, c99"
} |
Generating Julia set | Question: So, I just wanted to post something on Rosetta Code, and I found this task of generating and plotting a Julia set: http://www.rosettacode.org/wiki/Julia_set. There was already one solution but it was quite inefficient and not Pythonic. Here is my attempt on this:
"""
This solution is an improved version of an efficient Julia set solver
from:
'Bauckhage C. NumPy/SciPy Recipes for Image Processing:
Creating Fractal Images. researchgate. net, Feb. 2015.'
"""
import itertools
from functools import partial
from numbers import Complex
from typing import Callable
import matplotlib.pyplot as plt
import numpy as np
def douady_hubbard_polynomial(z: Complex,
*,
c: Complex):
"""
Monic and centered quadratic complex polynomial
https://en.wikipedia.org/wiki/Complex_quadratic_polynomial#Map
"""
return z ** 2 + c
def julia_set(*,
mapping: Callable[[Complex], Complex],
min_coordinate: Complex,
max_coordinate: Complex,
width: int,
height: int,
iterations_count: int = 256,
threshold: float = 2.) -> np.ndarray:
"""
As described in https://en.wikipedia.org/wiki/Julia_set
:param mapping: function defining Julia set
:param min_coordinate: bottom-left complex plane coordinate
:param max_coordinate: upper-right complex plane coordinate
:param height: pixels in vertical axis
:param width: pixels in horizontal axis
:param iterations_count: number of iterations
:param threshold: if the magnitude of z becomes greater
than the threshold we assume that it will diverge to infinity
:return: 2D pixels array of intensities
"""
imaginary_axis, real_axis = np.ogrid[
min_coordinate.imag: max_coordinate.imag: height * 1j,
min_coordinate.real: max_coordinate.real: width * 1j]
complex_plane = real_axis + 1j * imaginary_axis
result = np.ones(complex_plane.shape)
for _ in itertools.repeat(None, iterations_count):
mask = np.abs(complex_plane) <= threshold
if not mask.any():
break
complex_plane[mask] = mapping(complex_plane[mask])
result[~mask] += 1
return result
if __name__ == '__main__':
mapping = partial(douady_hubbard_polynomial,
c=-0.7 + 0.27015j) # type: Callable[[Complex], Complex]
image = julia_set(mapping=mapping,
min_coordinate=-1.5 - 1j,
max_coordinate=1.5 + 1j,
width=800,
height=600)
plt.axis('off')
plt.imshow(image,
cmap='nipy_spectral',
origin='lower')
plt.show()
I think it looks good, and it is definitely more efficient. There was just one thing I was not sure about. I was thinking of moving the creation of complex_plane into a separate function and passing it as a parameter to julia_set. But in that case julia_set wouldn't be a pure function, as it would mutate the complex_plane. And I prefer my functions not to have any side effects. So I decided to leave it as is.
Any comments on this matter or anything else are welcome.
Here are some examples of output:
Answer: 1. Review
Some of the variable names could be improved:
complex_plane is an array of \$z\$ values for each pixel in the image, so naming it z would help the reader relate it to the z in douady_hubbard_polynomial.
imaginary_axis and real_axis are only used once in the very next line, so there is no need for them to have long and memorable names. I would use something short like im and re.
result is an array of iteration counts, so it could be named something like iterations.
mask is a Boolean array selecting pixels that have not yet diverged to infinity, so something like not_diverged or live would convey this better.
On each iteration, the iteration counts of the escaped pixels are incremented. This means that some pixels get incremented many times, for example a pixel that escapes on the first iteration gets its count incremented 256 times. It would be more efficient to set the iteration count for each pixel just once. A convenient time to do this is when it escapes.
As the number of iterations goes up, the number of pixels that have not escaped to infinity gets smaller and smaller. But the masking operations are always on the whole array. It would be more efficient to keep track of the indexes of the pixels that have not escaped, so that subsequent operations are on smaller and smaller arrays.
2. Revised code
im, re = np.ogrid[min_coordinate.imag: max_coordinate.imag: height * 1j,
min_coordinate.real: max_coordinate.real: width * 1j]
z = (re + 1j * im).flatten()
live, = np.indices(z.shape) # indexes of pixels that have not escaped
iterations = np.empty_like(z, dtype=int)
for i in range(iterations_count):
z_live = z[live] = mapping(z[live])
escaped = abs(z_live) > threshold
iterations[live[escaped]] = i
live = live[~escaped]
iterations[live] = iterations_count - 1
return (iterations_count - iterations).reshape((height, width))
Notes
This is about three times as fast as the code in the post.
Because we are maintaining an array of indexes, it is convenient to flatten the z array and then reshape iterations to two dimensions before returning it. If we left the array two-dimensional, there would need to be two arrays of indexes, live_i and live_j.
Pixels that don't escape are given the value iterations_count - 1 in order to match the code in the post. It would make more sense to use iterations_count or a larger value here.
The subtraction iterations_count - iterations is only there so that the returned values match the code in the post. The subtraction could be omitted if you reverse the colour map. | {
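To see the index-tracking bookkeeping in isolation, here is a toy sketch in the same spirit as the revised loop (function and variable names are illustrative, not from the post):

```python
import numpy as np

def escape_counts(z, c, max_iter=50, threshold=2.0):
    """Iteration counts for z -> z**2 + c, updating only the pixels
    that have not yet diverged past the threshold."""
    z = np.asarray(z, dtype=complex).flatten()
    live = np.arange(z.size)           # indexes of non-escaped pixels
    counts = np.full(z.size, max_iter, dtype=int)
    for i in range(max_iter):
        z[live] = z[live] ** 2 + c
        escaped = np.abs(z[live]) > threshold
        counts[live[escaped]] = i      # record escape iteration once
        live = live[~escaped]          # shrink the working set
        if live.size == 0:
            break
    return counts

# 0 never escapes for c = 0; 3 escapes on the very first iteration
print(escape_counts([0.0 + 0.0j, 3.0 + 0.0j], 0.0))
```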
"domain": "codereview.stackexchange",
"id": 35297,
"tags": "python, numpy, fractals"
} |
Very small start to MVC refactor in iOS app | Question: I started to refactor my app to MVC today. I wasted a lot of time because of simple mistakes.
One was that I kept trying to access the getter method inside of my custom getter method, which just causes an endless loop of gets.
Even when I figured it out, I repeated this mistake several times. I think it's safe to say I'll never use self.iVar inside of its own getter method ever again.
The thing I sort of want advice on is not just whether I did an ok MVC refactor, but whether I approached things the right way.
One of the things that is really bugging me is my implementation of the phoneNumbers getter method in HALContact.
When I first created it, my nested for loops in my New View Controller Snippet would run forever and just keep adding the same phone number instead of moving on.
I finally just got frustrated and added the removeAllObjects method at the beginning of the phoneNumbers getter. As much as it works fine, I'm not sure if that's the way it should be.
And this is how the getter now looks and works fine:
- (NSArray *)phoneNumbers {
// The difference is I remove all of the objects
if (_phoneNumbers) {
[_phoneNumbers removeAllObjects];
}
if (!_phoneNumbers) {
//Create _phoneNumbers array
_phoneNumbers = [[NSMutableArray alloc]init];
}
ABMultiValueRef *phoneNumberRef = ABRecordCopyValue(_contactRef, kABPersonPhoneProperty);
NSString *phoneNumber = [[NSString alloc] init];
// Make sure multi value ref exists
if (phoneNumberRef) {
CFIndex numberOfPhoneNumbers = ABMultiValueGetCount(phoneNumberRef);
for (CFIndex i = 0; i < numberOfPhoneNumbers; i++) {
phoneNumber = (__bridge_transfer NSString *)ABMultiValueCopyValueAtIndex(phoneNumberRef, i);
CFStringRef label = ABMultiValueCopyLabelAtIndex(phoneNumberRef, i);
if (label) {
[_phoneNumbers addObject:phoneNumber];
NSLog(@"phoneNumbers count in method: %d", _phoneNumbers.count);
}
CFRelease(label);
}
CFRelease(phoneNumberRef);
}
return _phoneNumbers;
}
As far as the general MVC refactor, I threw up all of my new and old files on Pastebin which I will link to in parentheses if you'd rather view them in separate tabs.
Here's a snippet of what the View Controller looks like after the refactor. It now uses 2 model classes I made to handle the address book and contacts data:
// Instantiate our address book instance
self.addressBook = [[HALAddressBook alloc]init];
// Instantiate our contact instance
self.currentContact = [[HALContact alloc]init];
//Ask for access to Address Book.
BOOL result = [self.addressBook requestAccess];
if (!result) {
[self performSegueWithIdentifier:@"addFriendsToMediaCaptureSegue" sender:self];
}
if (result) {
// Loop through all address book contacts
for (int index = 0; index < self.addressBook.allContacts.count; index++) {
// Create contactRef from current contact
self.currentContact.contactRef = (__bridge ABRecordRef)(self.addressBook.allContacts[index]);
// Loop through all of the current contact's phone numbers
int index2;
for (index2 = 0; index2 < self.currentContact.phoneNumbers.count; index2++) {
if (self.currentContact.firstName) {
// Add current contact's phone number to array
[self.potentiaFriendsPhoneNumberArray addObject:self.currentContact.phoneNumbers[index2]];
// Add current contact's first name to array
[self.potentiaFriendsNotInParseFirstNamesArray addObject:self.currentContact.firstName];
}
}
}
}
And here are the 2 model classes that I added. One for the address book and one for the contact:
1A. HALAddressBook header
1B. HALAddressBook implementation
2A. HALContact header
2B. HALContact implementation
Answer: if (_phoneNumbers) {
[_phoneNumbers removeAllObjects];
}
if (!_phoneNumbers) {
//Create _phoneNumbers array
_phoneNumbers = [[NSMutableArray alloc]init];
}
Instead of this, we can replace all these lines with simply:
_phoneNumbers = [NSMutableArray array];
The best advantage of this, is we don't run into any problems of emptying an array that someone else might have a reference to. This could cause problems if, for example, you were using the phone numbers array as the data for a table view. If you all of the sudden removed all the objects, but didn't call reloadData on the table view, the next time it tried grabbing a cell, it could be an index out of bounds.
ABMultiValueRef *phoneNumberRef = ABRecordCopyValue(_contactRef, kABPersonPhoneProperty);
NSString *phoneNumber = [[NSString alloc] init];
// Make sure multi value ref exists
if (phoneNumberRef) {
CFIndex numberOfPhoneNumbers = ABMultiValueGetCount(phoneNumberRef);
for (CFIndex i = 0; i < numberOfPhoneNumbers; i++) {
phoneNumber = (__bridge_transfer NSString *)ABMultiValueCopyValueAtIndex(phoneNumberRef, i);
CFStringRef label = ABMultiValueCopyLabelAtIndex(phoneNumberRef, i);
if (label) {
[_phoneNumbers addObject:phoneNumber];
NSLog(@"phoneNumbers count in method: %d", _phoneNumbers.count);
}
CFRelease(label);
}
CFRelease(phoneNumberRef);
}
I think this section is problematic. I don't know enough about AddressBook to really tell you for sure, so for now I'm just mentioning it and reserving this section for the future, in case I come back and figure out what's wrong with it. The CFRelease on phoneNumberRef seems particularly suspicious.
BOOL result = [self.addressBook requestAccess];
if (!result) {
[self performSegueWithIdentifier:@"addFriendsToMediaCaptureSegue" sender:self];
}
if (result) {
This could and probably should be refactored into:
if (![self.addressBook requestAccess]) {
[self performSegueWithIdentifier:@"addFriendsToMediaCaptureSegue" sender:self];
} else {
But... with that said, I did take a look at the requestAccess method in the link. Strictly speaking, it's not in the question and so it's not strictly up for review, but, I'm quite certain it will always return NO, as the return is synchronous but the variable being returned is only set in an asynchronous block, and as such, will always be set too late.
int index2;
for (index2 = 0; index2 < self.currentContact.phoneNumbers.count; index2++) {
if (self.currentContact.firstName) {
// Add current contact's phone number to array
[self.potentiaFriendsPhoneNumberArray addObject:self.currentContact.phoneNumbers[index2]];
// Add current contact's first name to array
[self.potentiaFriendsNotInParseFirstNamesArray addObject:self.currentContact.firstName];
}
}
First, we can declare our iterator variable in the for loop initialization statement. Remember, you did this a few lines up. But perhaps more importantly, why don't we use a forin?
for (id phoneNumber in self.currentContact.phoneNumbers) {
[self.potentialFriendsPhoneNumberArray addObject:phoneNumber];
[self.potentialFriendsNotInParseFirstNamesArray addObject:self.currentContact.firstName];
}
We should also move the if check outside the loop so that the entire loop can be circumvented in the case that we're never going to add anything anyway. Embed the loop within the if.
Also, it doesn't make a ton of sense to me for there to be two separate arrays. I understand that you'll want a name associated with all the phone numbers, however it's better to do this with a single array, either by filling it with dictionaries with a key for the phone number and a key for the name, or by creating a custom object that will hold both of these parts. | {
"domain": "codereview.stackexchange",
"id": 8441,
"tags": "mvc, objective-c, ios"
} |
Best way to copy data to a new sheet and reorganize it (VBA) | Question: I'm writing a VBA program which copies and organizes data from one master sheet into numerous other sheets. One of the recipient sheets unifies all the data from the master sheet which holds the same id number into a single row. For this operation, I am looping through the master sheet for each id number, copying each row which holds the current id number into a new sheet purely used for calculations and organizing, and rearranging the data in this sheet into the new row. The resultant row is copied into the recipient sheet. This process of organizing data for every id number takes a long time to process, especially given the very large size of this sheet and the processing time of the other recipient sheets. I'm wondering if there is a better way to organize and copy data without using an intermediate calculation sheet.
The below code is the main sub, which calls another sub OrganizeAndCopyToPal, which organizes the data in the calculation sheet and copies the result into the recipient sheet.
Sub PalletAssemblyLog()
Dim allidNum As Range
Dim curridNum As Range
Dim rowCount As Long
Dim idNum
Dim I As Long
Dim j As Long
Dim machineLoc As String
Dim calc As Worksheet
Dim full As Worksheet
Dim pal As Worksheet
Set calc = Sheet3
Set full = Sheet4
Set pal = Sheet1
For I = 2 To rowCount
idNum = full.Cells(I, 17).Value
For j = 2 To rowCount
If full.Cells(j, 17).Value = idNum Then
If allidNum Is Nothing Then
Set allidNum = full.Cells(j, 17)
Else
Set allidNum = Union(allidNum, full.Cells(j, 17))
End If
End If
Next j
Set curridNum = allidNum.EntireRow
calc.Activate
calc.Cells.Clear
full.Activate
curridNum.Copy calc.Range("A1")
OrganizeAndCopyToPal curridNum
Next I
End Sub
The below sub organizes and copies the data for each id number. The final sub to copy the data isn't related to the matter of simplifying this task so I'm not including it.
Sub OrganizeAndCopyToPal(curridNum)
Dim calc As Worksheet
Dim pal As Worksheet
Set calc = Sheet3
Set pal = Sheet1
calc.Activate
Dim rowCount As Long
rowCount = calc.Cells(Rows.Count, "A").End(xlUp).Row
Dim palRow As Long
palRow = rowCount + 2
Dim partRow As Long
partRow = palRow + 2
Dim currPartCount As Range
Dim assembly As String
Dim id As String
Dim location As String
Dim machType As String
Dim machLoc As String
Dim currPart As String
Dim link As String
Dim tot As Long
tot = 0
With calc
.Cells(1, 1).Copy .Cells(palRow, 2)
assembly = .Cells(1, 1).Value
.Cells(1, 2).Copy .Cells(palRow, 5)
id = .Cells(1, 17).Value
asArray = SplitMultiDelims(id, "|-")
'MsgBox asArray(0) & " " & asArray(1) & " " & asArray(2)
machArray = Split(.Cells(1, 8), "-")
machType = machArray(0)
.Cells(palRow, 3) = machType
machLoc = .Cells(1, 8).Value
.Cells(palRow, 4) = machLoc
.Cells(1, 17).Copy .Cells(palRow, 10)
location = Cells(1, 9)
.Cells(palRow, 1) = location
For I = 1 To rowCount
partArray = Split(.Cells(I, 16).Value, ",")
For j = 0 To UBound(partArray)
partArray2 = Split(partArray(0), "-")
partPrefix = partArray2(0)
If j = 0 Then
currPart = partArray(j)
Else
currPart = partPrefix & "-" & CStr(partArray(j))
End If
tf = 1
For k = 0 To tot
If Cells(partRow + k, 1).Value = currPart Then
tf = 0
Exit For
End If
Next k
If tf = 1 Then
.Cells(partRow + tot, 1).Value = currPart
tot = tot + 1
End If
Next j
Next I
For I = 1 To tot
Cells(palRow, 10 + I).Value = Cells(partRow + I - 1, 1)
Next I
End With
CopyToPal curridNum, palRow
End Sub
Thank you for any tips or help that you can offer.
Answer: Some comments that you hopefully find useful:
(Best Practice) Declare "Option Explicit" at the top of every module. This option requires that every variable used in the module is explicitly declared. Doing so avoids numerous errors, not the least of which are new-variables-declared-by-typo, which can be hard to spot. Declaring it at the top of the provided code resulted in the need to add 9 declarations.
(Best Practice) Explicitly declare types for your variables and parameters. Dim idNum implicitly declares idNum as a Variant. It is probably a Long - but now the reader has to look through the code to know for sure. Sub OrganizeAndCopyToPal(curridNum) => parameter curridNum is, by default, declared as a Variant - but it is a Range. Sub OrganizeAndCopyToPal(curridNum As Range) removes all ambiguity.
Naming things.
You can change the code name of worksheets (e.g., Sheet3). So there is no need for Dim calc As Worksheet, Set calc = Sheet3. Simply rename Sheet3 to calc in the Properties Window. Now you do not need to declare and assign calc in your code - you can just use it directly as a Worksheet object. Same comment for full and pal.
Use meaningful names. Single character names are non-descriptive and (IMO) make code harder to read. Even loop and array index variables are easier to interpret if given names like 'idxRow', 'rowNum', etc. Descriptive names will not slow down your code or take up too much memory. What a descriptive name will do is allow you to avoid lots of re-interpretation time when you want to update this code after a long absence.
Don't Repeat Yourself (DRY) and magic numbers:
As an example, PalletAssemblyLog repeats the expression full.Cells(j, 17) 3 times in 5 lines. This expression is both repeated and contains a 'magic number' - 17. 17 must be an important column in the full worksheet...give it a name! (full could use a more descriptive name as well). Private Const idNumberColumn As Long = 17 will not slow down the code, but it is much more readable...and - most importantly, as soon as you need to insert a new column prior to column 17, you only have to change the column number in one location.
Sub OrganizeAndCopyToPal(curridNum) uses lots of magic numbers that need a name: 2,5,4,8,10,16. Give them all names and assign them as constant values in one location. You'll thank yourself in the future when the calc worksheet is eventually re-organized.
Single Responsibility Principle (SRP): Each procedure should have a single purpose (or, have a single reason to change)
The OrganizeAndCopyToPal procedure by its name, betrays that it does two things: Organizes and Copies. In fact, the passed-in argument curridNum is not used until the end of OrganizeAndCopyToPal when it is a parameter in the expression CopyToPal curridNum, palRow. There is no need to pass curridNum as a parameter because the subroutine does not need to know the curridNum in order to determine the palRow. Calculating palRow is a single responsibility - consider making OrganizeAndCopyToPal a function like 'Function DetermineRowTarget() As Long'.
Don't hesitate to break out blocks of code from procedures that can be explained/documented using a function name. Within PalletAssemblyLog, there is a nested loop that gathers all ranges related to the same id number. Rather than sifting through the loop logic to discover what does, it could be better self-documenting by making it a Function that returns palRow. In this case it receives bonus points for a reduction in loop nesting.
Speed
Within the main loop, you are activating worksheets multiple times. It is not clear to me that you need to make the various sheets 'Active' for the modification code that you have. Simply reducing/eliminating anything that causes a redraw within a loop will speed things up.
It looks as though you are processing the full set of idNum rows every time you increment the idNum. If true, this means you are repeating the operations many, many more times than needed. Change the logic to ensure you only process each idNum once. This should greatly speed up your process. One way to do this is to cache the Range result for each idNum. So, the next time you encounter the idNum, you can skip it. Also, the inner loop should start at the row + 1 of the 'new' idNum. This avoids iterating through previously evaluated rows. The example below uses a Dictionary to cache the Range results. Once all the Ranges for each idNum are determined, it runs through each idNum to Organize as before.
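The dictionary-caching idea is language-agnostic. As a rough illustration outside VBA (Python, with hypothetical names), the single-pass grouping looks like this:

```python
def rows_by_id(rows):
    """Group row numbers by id in a single pass over (row, id) pairs,
    so each id is handled once instead of re-scanning the sheet per id."""
    groups = {}
    for row_num, id_num in rows:
        groups.setdefault(id_num, []).append(row_num)
    return groups
```

Each value in the returned dict plays the role of the cached Range built up for one idNum in the VBA version.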
During the operations, temporarily turn off screen updating and calculations (if the operation does not depend on the calculations)
Below is the code with some of the edits described above.
Option Explicit
Private Const idNumberColumn As Long = 17
Sub PalletAssemblyLog()
Dim allidNum As Range
Dim curridNum As Range
Dim rowCount As Long
Dim idNum As Long
Dim I As Long
Dim j As Long
Dim machineLoc As String
'Dictionary requires a reference to the 'Microsoft Scripting Runtime'. From Tools menu: Tools -> References
Dim processedIdNumbers As Dictionary
Set processedIdNumbers = New Dictionary
Dim rowIdx As Long
rowCount = full.Cells(Rows.Count, idNumberColumn).End(xlUp).Row 'find the last used row of the id column
For rowIdx = 2 To rowCount
idNum = full.Cells(rowIdx, idNumberColumn).Value
If Not processedIdNumbers.Exists(idNum) Then
Set curridNum = GetAggregatedRangeForIdNumber(idNum, rowIdx, rowCount) 'start at rowIdx so the first matching row is included
processedIdNumbers.Add idNum, curridNum
End If
Next rowIdx
Dim vKey As Variant
For Each vKey In processedIdNumbers.Keys
Dim idRange As Range
Set idRange = processedIdNumbers(vKey)
calc.Activate
calc.Cells.Clear
full.Activate
idRange.Copy calc.Range("A1")
Dim palRow As Long
palRow = DetermineRowTarget()
CopyToPal idRange, palRow
Next
End Sub
Private Function GetAggregatedRangeForIdNumber(idNumber As Long, startRow As Long, rowCount As Long) As Range
Dim allidNum As Range
Dim nextRange As Range
Dim rowIdx As Long
For rowIdx = startRow To rowCount
Set nextRange = full.Cells(rowIdx, idNumberColumn)
If nextRange.Value = idNumber Then
If allidNum Is Nothing Then
Set allidNum = nextRange
Else
Set allidNum = Union(allidNum, nextRange)
End If
End If
Next rowIdx
Set GetAggregatedRangeForIdNumber = allidNum.EntireRow
End Function
'formerly OrganizeAndCopyToPal
'Contains some magic numbers to assign names
Function DetermineRowTarget() As Long
calc.Activate
Dim rowCount As Long
rowCount = calc.Cells(Rows.Count, "A").End(xlUp).Row
Dim palRow As Long
'*******************************
' code truncated for brevity
'*******************************
DetermineRowTarget = palRow
End Function
{
"domain": "codereview.stackexchange",
"id": 41148,
"tags": "vba, excel"
}
Frequency Sampling Filters (FSFs) NOT the Frequency Sampling Method

Question: A class of digital filters called Frequency Sampling Filters (FSF) has been mentioned in a side note by Richard Lyons in another related question comparing IIR digital filter design using mapping techniques. There is very little information on this technique available through a simple web search (obscured by the "Frequency Sampling Method", which I would not recommend); I did find details on it in chapter 7 (Specialized Lowpass FIR Filters) of Richard's great book Understanding Digital Signal Processing, but nowhere else. There he mentions that it predated optimized FIR design techniques (least squares and Parks-McClellan) but suggests they may have practical modern use for highly efficient linear phase filters in certain use cases.
My question is not about the details of how this filter works (Richard covers that quite well in his book), but rather: are there other references to this class of digital filters, and who coined the term "Frequency Sampling Filter"? (It is unfortunate that the terminology is so close to, yet unrelated to, the "Frequency Sampling Method".) Also, can anyone concisely summarize, with a good demonstration, a case where such a filter would be a superior choice over the optimized design techniques for linear-phase filters using Parks-McClellan (PM, equiripple) or Least Squares? (A comparison of those two techniques to mapping of classic analog filters is covered in the other question referenced above.) Any comparison against efficient PM or Least Squares implementations should account for the significant complexity reduction achievable with proper decimation/interpolation structures, including polyphase implementations.
Please avoid a response that only states what I would call unsubstantiated opinion or rumor. I can already see how this filter structure may be superior in complexity for similar performance at smaller ratios of passband to sampling rate. I'm looking for a substantiated comparison that shows the advantage if this is the case, or shows that this filter structure was indeed driven to obscurity for good reason. (If no one has such a comparison, I am curious enough that I will eventually post one here as my own answer when time allows.)
Answer: @Dan Boschen What I call "frequency sampling filters" (FSFs) is a fascinating subject. (FSF filters are very briefly mentioned in five or six DSP textbooks that I have on my bookshelf.) As far as I know FSFs were first described in the literature in the late 1960s. Here are a couple of references:
Rabiner, L. "Techniques For Designing Finite-duration-impulse-response Digital Filters," IEEE Trans. on Communication Technology, Vol. COM 19, Apr. 1971, pp. 188-195.
Rabiner, L. and Schafer, R. "Recursive and Nonrecursive Realizations of Digital Filters Designed by Frequency Sampling Techniques," IEEE Trans. Audio Electroacoust., Vol. AU 19, Sept. 1971, pp. 200-207.
In the 2nd & 3rd editions of my "Understanding DSP" book I show an example of my "Type-IV" lowpass linear-phase FIR FSF being more computationally efficient than a Parks-McClellan designed tapped-delay line filter.
To show the computational efficiency value of an FSF, an example of the performance of a linear-phase FIR bandpass FSF can be found in Figure 1 at: https://filterdesigner.com/index/bank. That FSF requires 22 multiplications and 25 additions per output sample. A Parks-McClellan designed tapped-delay line FIR bandpass filter with the same frequency magnitude response requires 72 multiplications and 143 additions per output sample!
The Parks-McClellan FIR filter algorithm allows us to specify filter bandwidth, passband ripple, stopband attenuation, and transition region width. FSF design doesn't give us such detailed control of filter performance. But if you're able to design an FSF that meets your filtering requirements, I maintain that the FSF will be more computationally efficient than an equivalent-performance Parks-McClellan designed tapped-delay line filter.
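For readers without the book, here is a minimal pure-Python sketch of the classic recursive FSF structure: a comb filter (1 - z^-N) cascaded with a bank of complex resonators on the unit circle. The frequency samples used below are an arbitrary illustrative choice, not a practical design; the sketch just shows that the recursive structure reproduces the impulse response of the equivalent N-tap FIR filter.

```python
import cmath

def fsf_filter(x, Hk):
    """Recursive frequency-sampling-filter structure: a comb filter
    (1 - z^-N) cascaded with a bank of first-order complex resonators,
    one per frequency sample H(k), summed and scaled by 1/N."""
    N = len(Hk)
    poles = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    states = [0j] * N        # one resonator state per frequency bin
    delay = [0.0] * N        # comb-filter delay line of length N
    y = []
    for n, xn in enumerate(x):
        comb = xn - delay[n % N]   # x[n] - x[n-N]
        delay[n % N] = xn
        acc = 0j
        for k in range(N):
            states[k] = comb + poles[k] * states[k]
            acc += Hk[k] * states[k]
        y.append(acc / N)
    return y

def fir_taps(Hk):
    """Equivalent direct-form FIR taps: the inverse DFT of the H(k) samples."""
    N = len(Hk)
    return [sum(Hk[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]
```

The comb zeros sit exactly on the resonator poles, so the cascade is FIR overall: its impulse response equals the inverse DFT of the frequency samples for the first N outputs and is zero afterward.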
I've been thinking about writing a blog to introduce FSFs to people who do not have a copy of my "Understanding DSP" book. But such a blog would take quite a while to write. (TEASER: I now know a way to make FSFs even more computationally efficient than I've previously described them to be. If I write a "FSF blog" I'll describe those "even more efficient" FSFs.)
{
"domain": "dsp.stackexchange",
"id": 10725,
"tags": "filter-design, finite-impulse-response, reference-request, linear-phase"
}
Printing the date of any day of the week as it occurs between two set dates

Question: This is a block of code taken from a program I am working on that creates multiple files. This part of the code creates files and sorts them by date by appending the date to the file name. The user chooses the first and last date and which day(s) he needs the files named for. So if someone wanted to create a Microsoft Word document for every Mon, Wed, and Fri in the year, they could do it in one shot.
I am trying to see if there is any way to optimize this. I started with bash scripting a few months ago and recently started learning Python; this is my first project. I am trying to figure out how to condense all the different blocks for each day of the week into one, if that's possible, so that I can have just one progress bar. There are some variables in the block that are assigned earlier in the program by user input; I did not include that part due to length.
start = np.datetime64(input("Start with: "))
end = np.datetime64(input("End with : "))
print()
print(" *************************** ")
print("Choose Day(s) | All Days |")
print(" | ------------------------- |")
print("E.g. | S | M | T | W | T | F | S |")
print(" | ------------------------- |")
print("(1=On, 0=Off) | 1 | 1 | 1 | 1 | 1 | 1 | 1 |")
print(" | |")
print(" *************************** ")
print("S M T W T F S")
sun, mon, tue, wed, thu, fri, sat = map(int, input().split())
print()
if sun == 1:
first_sunday = np.busday_offset(start, 0, roll='forward', weekmask='Sun')
last_sunday = np.busday_offset(end, 0, roll='preceding', weekmask='Sun')
sun_count = np.busday_count(first_sunday, last_sunday, weekmask='Sun') + 1
for i in trange(sun_count, desc="Sunday"):
touch.touch(file_name + " " + datetime.strptime(str(first_sunday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_sunday += np.timedelta64(7, 'D')
if mon == 1:
first_monday = np.busday_offset(start, 0, roll='forward', weekmask='Mon')
last_monday = np.busday_offset(end, 0, roll='preceding', weekmask='Mon')
mon_count = np.busday_count(first_monday, last_monday, weekmask='Mon') + 1
for i in trange(mon_count, desc="Monday"):
touch.touch(file_name + " " + datetime.strptime(str(first_monday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_monday += np.timedelta64(7, 'D')
if tue == 1:
first_tuesday = np.busday_offset(start, 0, roll='forward', weekmask='Tue')
last_tuesday = np.busday_offset(end, 0, roll='preceding', weekmask='Tue')
tue_count = np.busday_count(first_tuesday, last_tuesday, weekmask='Tue') + 1
for i in trange(tue_count, desc="Tuesday"):
touch.touch(file_name + " " + datetime.strptime(str(first_tuesday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_tuesday += np.timedelta64(7, 'D')
if wed == 1:
first_wednesday = np.busday_offset(start, 0, roll='forward', weekmask='Wed')
last_wednesday = np.busday_offset(end, 0, roll='preceding', weekmask='Wed')
wed_count = np.busday_count(first_wednesday, last_wednesday, weekmask='Wed') + 1
for i in trange(wed_count, desc="Wednesday"):
touch.touch(file_name + " " + datetime.strptime(str(first_wednesday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_wednesday += np.timedelta64(7, 'D')
if thu == 1:
first_thursday = np.busday_offset(start, 0, roll='forward', weekmask='Thu')
last_thursday = np.busday_offset(end, 0, roll='preceding', weekmask='Thu')
thu_count = np.busday_count(first_thursday, last_thursday, weekmask='Thu') + 1
for i in trange(thu_count, desc="Thursday"):
touch.touch(file_name + " " + datetime.strptime(str(first_thursday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_thursday += np.timedelta64(7, 'D')
if fri == 1:
first_friday = np.busday_offset(start, 0, roll='forward', weekmask='Fri')
last_friday = np.busday_offset(end, 0, roll='preceding', weekmask='Fri')
fri_count = np.busday_count(first_friday, last_friday, weekmask='Fri') + 1
for i in trange(fri_count, desc="Friday"):
touch.touch(file_name + " " + datetime.strptime(str(first_friday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_friday += np.timedelta64(7, 'D')
if sat == 1:
first_saturday = np.busday_offset(start, 0, roll='forward', weekmask='Sat')
last_saturday = np.busday_offset(end, 0, roll='preceding', weekmask='Sat')
sat_count = np.busday_count(first_saturday, last_saturday, weekmask='Sat') + 1
for i in trange(sat_count, desc="Saturday"):
touch.touch(file_name + " " + datetime.strptime(str(first_saturday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_saturday += np.timedelta64(7, 'D')
Answer: Each of your if blocks does the same thing, but with different inputs.
if sun == 1:
first_sunday = np.busday_offset(start, 0, roll='forward', weekmask='Sun')
last_sunday = np.busday_offset(end, 0, roll='preceding', weekmask='Sun')
sun_count = np.busday_count(first_sunday, last_sunday, weekmask='Sun') + 1
for i in trange(sun_count, desc="Sunday"):
touch.touch(file_name + " " + datetime.strptime(str(first_sunday), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first_sunday += np.timedelta64(7, 'D')
can be replaced with
abbrev= 'Sun'
name = 'Sunday'
if sun == 1:
first = np.busday_offset(start, 0, roll='forward', weekmask=abbrev)
last = np.busday_offset(end, 0, roll='preceding', weekmask=abbrev)
count = np.busday_count(first, last, weekmask=abbrev) + 1
for i in trange(count, desc=name):
touch.touch(file_name + " " + datetime.strptime(str(first), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first += np.timedelta64(7, 'D')
This allows you to fold the days into a for loop, like so:
# Input the value for each day into a single list rather than individual variables
day_input = (int(val) for val in input().split())
NAMES = ('Sunday','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday')
for val, name in zip(day_input, NAMES):
# Skip any day that was inputted as 0
if not val: # integer 0 is equivalent to boolean False
continue
abbrev = name[:3]
first = np.busday_offset(start, 0, roll='forward', weekmask=abbrev)
last = np.busday_offset(end, 0, roll='preceding', weekmask=abbrev)
count = np.busday_count(first, last, weekmask=abbrev) + 1
for i in trange(count, desc=name):
touch.touch(file_name + " " + datetime.strptime(str(first), '%Y-%m-%d').strftime('%m-%d-%Y') + "." + file_type)
first += np.timedelta64(7, 'D')
I replaced your call to map with a generator comprehension, which is generally considered the more aesthetic way of getting the same result. If you plan on using the values more than once you should use a list comprehension instead.
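As a side note (my own variant, not required by the answer above): if the numpy dependency isn't otherwise needed, the weekly iteration can be sketched with just the standard library.

```python
from datetime import date, timedelta

def weekly_dates(start, end, weekday):
    """Yield every date from start to end (inclusive) falling on the
    given weekday (0=Monday ... 6=Sunday, matching date.weekday())."""
    current = start + timedelta(days=(weekday - start.weekday()) % 7)
    while current <= end:
        yield current
        current += timedelta(days=7)
```

Each yielded date can then be formatted with current.strftime('%m-%d-%Y') when building the file name.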
Also, you can simplify your initial set of print statements by using a multiline string:
print(
"""
***************************
Choose Day(s) | All Days |
| ------------------------- |
E.g. | S | M | T | W | T | F | S |
| ------------------------- |
(1=On, 0=Off) | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| |
***************************
S M T W T F S""")
{
"domain": "codereview.stackexchange",
"id": 40683,
"tags": "python, python-3.x, datetime"
}
How do mutations of viruses lead to drug resistance?

Question: For instance, after starting zidovudine monotherapy against HIV, resistance develops against the drug because of a point mutation in the reverse transcriptase enzyme to which the drug binds.
So how does the virus ‘know’ to mutate this particular enzyme?
Answer: It doesn't. Viruses don't "know" anything. Mutations occur at random. Most of them don't do anything, or have a slight negative effect on the ability of the virus to infect and reproduce. However, there are billions and billions of viruses. Once in a while a random mutation will offer a significant advantage like immunity to an anti-viral drug. The viruses that have that beneficial mutation will then massively out-reproduce the viruses that don't have it. Eventually the population of viruses will consist mostly of individual viruses that have that mutation.
{
"domain": "biology.stackexchange",
"id": 10219,
"tags": "genetics, microbiology, molecular-genetics, virology"
}
Christoffel symbols manipulations

Question: Why is it that $$\Gamma^\lambda_{\lambda\tau}\Gamma^\tau_{\mu\nu} = 0?$$
The same goes for $$\Gamma^\lambda_{\nu\tau}\Gamma^\tau_{\mu\lambda},$$ which was set equal to zero by the author.
Answer: They're not zero in general. For example, take flat Euclidean space in two dimensions, in polar coordinates:
$$
ds^2=dr^2+r^2d\phi^2
$$
for which the nonzero Christoffel symbols are
$$
\Gamma^r_{\phantom\phi\phi\phi}=-r,\quad \text{and} \quad \Gamma^\phi_{\phantom\phi \phi r}=\Gamma^\phi_{\phantom\phi r \phi}=\frac1r
$$
Then $\Gamma^\lambda_{\phantom\phi \lambda r}=1/r$, and $\Gamma^\lambda_{\phantom\phi \lambda \phi}=0$, so
$$
\Gamma^\lambda_{\phantom\phi \lambda \tau}\Gamma^\tau_{\phantom\phi \phi\phi}=\frac1r\Gamma^r_{\phantom\phi \phi\phi}=-1
$$
for example. Similarly, $\Gamma^\lambda_{\phantom\phi \nu \tau}\Gamma^\tau_{\phantom\phi \mu\lambda}$ can be nonzero in this case: I make it $1/r^2$ when $\mu=\nu=r$, and $-2$ when $\mu=\nu=\phi$.
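These contractions are easy to cross-check numerically at a sample radius. Here is a quick pure-Python sketch for the polar-coordinate example above (index convention of my own choosing: Gamma[(upper, lower1, lower2)], with 0 = r and 1 = phi):

```python
# Nonzero Christoffel symbols of flat 2D space in polar coordinates,
# evaluated at a sample radius r.
r = 2.0
R, PHI = 0, 1
Gamma = {(R, PHI, PHI): -r, (PHI, PHI, R): 1.0 / r, (PHI, R, PHI): 1.0 / r}

def G(a, b, c):
    """Gamma^a_{bc}; zero unless listed above."""
    return Gamma.get((a, b, c), 0.0)

def contracted_trace(tau):
    """Sum over lambda of Gamma^lambda_{lambda tau}."""
    return sum(G(lam, lam, tau) for lam in (R, PHI))

def double_contraction(mu, nu):
    """Sum over lambda, tau of Gamma^lambda_{nu tau} Gamma^tau_{mu lambda}."""
    return sum(G(lam, nu, tau) * G(tau, mu, lam)
               for lam in (R, PHI) for tau in (R, PHI))

# First expression in the question, with mu = nu = phi:
value = sum(contracted_trace(tau) * G(tau, PHI, PHI) for tau in (R, PHI))
# value == -1 for any r, as computed above
```

Changing the sample radius r confirms which results depend on the coordinate point and which do not.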
It's likely that the author of whatever you're reading is talking about a specific spacetime in some specific set of coordinates, and has computed these, which just happen to be zero in that case.
{
"domain": "physics.stackexchange",
"id": 16938,
"tags": "general-relativity, differential-geometry"
}
Is it possible to change the PID parameters of 6 motors simultaneously (in a single launch file)?

Question:
Hi everyone, I'm trying to control a 6-DOF robot arm using 6 Dynamixel motors (MX series). The only way I know to tune PID parameters is to launch dynamixel_workbench_single_manager. However, that only lets me tune the parameters one motor at a time, which wastes time. So I'm searching for a way to tune the parameters of multiple motors at once. Is this possible, and where can I find source code showing how to do it?
Originally posted/) by ryan on ROS Answers with karma: 3 on 2018-02-25
Post score: 0
Original comments
Comment by Delb on 2018-02-26:
Can you be more specific ? Like do you want to share the same configuration with all your motors ? It looks like you have to create a launch file to define all your parameters.
Answer:
Hello :)
Thank you for your inquiry about dynamixel_workbench.
You can change the parameters of multiple Dynamixels using the syncWrite function.
If you look at line 56 in the torque control example, the 'Goal_Current' parameter is added to a syncWrite handler. You can change it to 'P_Gain' or 'D_Gain' and write the gain to all the Dynamixels simultaneously with the syncWrite function (line 152 in the same example).
Best regards
Darby
Originally posted by Darby Lim with karma: 811 on 2018-02-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Delb on 2018-02-26:
Well this launch isn't doing what you want ?
Comment by ryan on 2018-03-01:
Thanks a lot :)) However, I have just encountered another error. When I launch the torque_control file, the process dies immediately after the model and ID are printed. How come?? :(
Comment by Delb on 2018-03-02:
What are you launching exactly ? torque_control.cpp or torque_control.launch ?
Comment by ryan on 2018-03-02:
torque_control.launch exactly.
Comment by Delb on 2018-03-02:
Well since it's just a simple launcher for the torque node it means that you might have a problem with your parameters, check with ls /dev/input if your devices are well connected.
{
"domain": "robotics.stackexchange",
"id": 30149,
"tags": "dynamixel, ros-indigo"
}
How to approximate minimum clique edge cover

Question: I'd like to take an undirected graph and express it (meaning all of its edges) using only cliques (ideally minimizing their sum cardinality).
It's clear that actually finding the minimum solution is at least as hard as the set-cover problem, but set-cover has good approximation algorithms.
Does anyone know a good approximation algorithm for this?
Particularly, if I start with each edge as its own clique and then arbitrarily choose two cliques from my list (whose union also forms a clique) and merge them and do this over and over until I can't merge any more, what's the worst case (as a ratio / the minimal solution)? Is there some heuristic I should use to determine what to merge next?
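For concreteness, the merge procedure described above can be sketched as follows (a naive implementation; the names and the arbitrary merge order are my own):

```python
from itertools import combinations

def greedy_clique_cover(edges):
    """Start with every edge as a 2-clique, then repeatedly merge any
    two cliques whose union is also a clique, until no merge applies."""
    edge_set = {frozenset(e) for e in edges}
    cliques = [set(e) for e in edge_set]

    def is_clique(nodes):
        return all(frozenset(p) in edge_set for p in combinations(nodes, 2))

    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(cliques)), 2):
            union = cliques[i] | cliques[j]
            if is_clique(union):
                cliques[j] = union
                del cliques[i]
                merged = True
                break
    return cliques
```

On a triangle this collapses the three edges into a single 3-clique; on a path it cannot merge anything and returns the edges themselves.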
Answer: Here is a paper that deals with this problem:
John M. Ennis, Charles M. Fayle, and Daniel M. Ennis: Assignment-minimum clique coverings. Journal of Experimental Algorithmics 17(1), 2012.
It also gives some heuristics and experimental results. They don't give worst-case approximation ratios; minimizing the number of cliques instead of the sum of their sizes has only fairly weak approximation guarantees, so I guess it would be similar here.
{
"domain": "cstheory.stackexchange",
"id": 1915,
"tags": "graph-algorithms, approximation-algorithms, set-cover, clique"
}
general openrave usage

Question:
Hey guys!
I'm attempting to get an arm up and running with openrave (originally I was going to do it the ros manipulation pipeline way, which looked a bit like overkill to me). A friend introduced me to openrave, which looks faster and easier to set up. I have played around with basic python scripts (non-ros) to control openrave, but I am somewhat confused about what the recommended way to use it with ROS is.
What is your guys general setup for using ROS together with Openrave?
My slight confusion begins with the different openrave_planning sub packages:
openraveros
openrave_robot_control
openrave_actionlib
I think Im missing the overview of the above. How do they work together/can one combine them? What are the typical sorts of usecases for each? What are the other things I need to implement (and with which restrictions?)
As far as I understood, openraveros is basically openrave running with its core functionality exposed to the ros network through services, and openrave_robot_control a way to interface openrave directly with a robot? I see it has a simulation server and a TF sender too. I originally thought that openrave_robot_control was a more specific implementation of openraveros, but it's missing lots of the general openrave functionality (i.e. services) that openraveros provides. Where does openrave_actionlib fit into place/integrate with the rest?
What's the recommended way to actually move the arm? For now, the python scripts I played with output a trajectory (time stamp + joint angles). Is it recommended that I parse this and send the hardware these angles, or is there a better way?
What are your guys general setups?
Any hints, ideas, etc that you have learnt from experience, what do you recommend?
If you could start over, would you implement it the same way?
I realise I have many vague questions, I purposely left it rather general.
I thought I (and anyone else about to dive into the world of ROS&OpenRave) would attempt to learn from your guys experience before I get started (would probably just go with openraveros and use it as I did with the non-ros python scripts).
Any comments appreciated, thanks!
Thanks!
-Ollie
PS: I would add "openrave_robot_control" as a tag, but the length is limited to 20 chars.
Originally posted by omwdunkley on ROS Answers with karma: 21 on 2011-03-14
Post score: 2
Answer:
openrave_robot_control provides a controller that you can plugin to the rest of your system. it is meant for people writing robot drivers to get their robot working.
openrave_actionlib is meant to offer the actionlib controllers to openrave.
openraveros allows openrave to be controlled through the ROS network, but in reality this is rarely necessary for beginners. Most likely, you will create the openrave core environment, load your scene, and process ROS services or send ROS topics.
The orrosplanning package recently got 3 new scripts that connect it to the ROS world and offer planning services: armplanning_openrave.py, graspplanning_openrave.py, and ik_openrave.py. There are test files in the orrosplanning/test folder for each service.
Basically think of openrave as node that can process ROS services like motion planning.
Originally posted by Rosen Diankov with karma: 516 on 2011-03-14
This answer was ACCEPTED on the original site
Post score: 3
{
"domain": "robotics.stackexchange",
"id": 5059,
"tags": "ros, openrave"
}
How to reduce drift in system time using Chrony?

Question:
I currently have 2 devices communicating with one another. A Roscube-X acting as the master and a Jetson Orin as its slave.
I'm running an NTP server on the roscube with the Orin acting as a client using chrony.
However, the system time drifts more and more as time goes on.
Time drift at launch after restarting chrony using chronyc tracking:
Time drift after a few minutes:
While extremely minor, the drift increases over time. Is there a way to further reduce/delay the drift?
My chrony.conf has the following:
server <roscube ip> minpoll 0 maxpoll 5 maxdelay .000000005
initstepslew 2 <roscube ip>
dumponexit
dumpdir /var/lib/chrony
pidfile /var/run/chronyd.pid
logchange 0.5
sched_priority 1
local stratum 10
allow 127.0.0.1/8
It is only connected to the ROSCube's NTP terminal as a source.
Originally posted by bc524 on ROS Answers with karma: 43 on 2022-09-19
Post score: 0
Original comments
Comment by Mike Scheutzow on 2022-09-19:
ros works fine if the host time-of-day clocks are within 0.001 second. Please explain what ros problem you are having that requires synchronization many orders of magnitude better than 1e-3.
Comment by gvdhoorn on 2022-09-19:
Isn't 0.000000003 seconds, 3 nanoseconds?
I'd agree with @Mike Scheutzow: unless you are dealing with very fast mobile robots (or sensor data which encodes phenomena where delta-t must be very small as state variables change very fast), a 3 nanosecond difference between two clocks would seem like it should be OK.
Comment by bc524 on 2022-09-19:
@Mike Scheutzow @gvdhoorn
Sorry, I'm still new to to ros. Planning to link multiple sensors (camera, lidar) to the two devices for use in mapping, so I was unsure if the drift would cause an issue in the long run as it kept increasing over time. Would this still be ok?
Comment by gvdhoorn on 2022-09-20:
Not really an answer, but only you would be able to determine whether a drift of 3 nanoseconds over 3 minutes would be acceptable.
If you don't have any requirements which tell you otherwise right now, I'd suggest to perform some tests and determine whether the performance is sufficient for whatever use-case(s) you have. I doubt anyone here would be able to tell you whether things "would [..] be ok" like this.
Although, 3 ns per 3 minutes drift would lead to 60 ns per hour, which is still well below what would probably become problematic for almost all use-cases. And typical time-synchronisation setups implement periodic syncing, so there would likely not even be the cumulative drift you appear to expect.
Comment by bc524 on 2022-09-20:
@gvdhoorn
alright, thanks for the information. I only left it for a few hours at most and it kept going up, so I thought it was cumulative.
I tried to test it out by playing a bagfile on one device while it was consumed on the other. Having some issues when it comes to IMU data, but all the other message types seem to be working fine.
I tried to test it out and play a bagfile on one device and is played on the other. Having some issue when it comes to IMU data, but all the other types seems to be working fine.
Thanks.
update: the IMU issue was solved by lowering the rosbag play rate.
Answer:
If your time service is working, the time difference will not keep increasing forever. On my network, the tools report time-of-day differences of up to tens of microseconds.
For typical ros activities, I would not be concerned unless the difference grows larger than 1e-3. If the difference exceeds that, you need to debug why the time service is not working.
Originally posted by Mike Scheutzow with karma: 4903 on 2022-09-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bc524 on 2022-09-20:
@Mike Scheutzow
Having some issues with IMU data in testing, but the rest seem to be transmitting ok. Thanks for the info.
{
"domain": "robotics.stackexchange",
"id": 37984,
"tags": "ros-melodic"
}