anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Are all fields on spacetime spinor-valued? | Question: I'm trying to understand the values that fields can take. For fermions, my understanding is that fields on spacetime take values as Dirac Spinors, which are $\mathbb{C}^4$ vectors. The vector space of Dirac Spinors is the one acted on by the matrix ring generated by the Gamma Matrices. Gamma Matrices are pairwise tensor products of Pauli Matrices, which generate the quaternions $\mathbb{H}$. Incidentally, the Clifford Algebra for spacetime $Cl_{1,3}(\mathbb{R}) \cong M(2,\mathbb{H})$, which is isomorphic to the matrix ring generated by the Gamma Matrices. $M(2,\mathbb{H})$ acts on $\mathbb{C}^4$, and therefore so do the Gamma Matrices. As a result, fields $\psi(x,t)$ for fermions on spacetime take values in the space of $\mathbb{C}^4$ spinors.
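The Clifford-algebra fact in the paragraph above can be checked numerically. Below is a sketch in plain Python (no numpy) using the Dirac representation of the gamma matrices; it verifies the defining relation $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu} I_4$ for the mostly-minus metric. The particular choice $\gamma^0 = \sigma_z \otimes I$, $\gamma^i = (i\sigma_y) \otimes \sigma_i$ is one common way of writing the gamma matrices as pairwise tensor products of Pauli matrices:

```python
# Check the Clifford relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4
# for the Dirac representation, using plain lists of complex numbers.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * x for x in row] for row in A]

def kron(A, B):
    # 2x2 (x) 2x2 -> 4x4 Kronecker (tensor) product
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# Dirac representation: gamma^0 = sigma_z (x) I, gamma^i = (i sigma_y) (x) sigma_i
gammas = [kron(sz, I2)] + [kron(scal(1j, sy), s) for s in (sx, sy, sz)]
eta = [1, -1, -1, -1]  # mostly-minus Minkowski metric

def anticommutator(A, B):
    return add(matmul(A, B), matmul(B, A))

for mu in range(4):
    for nu in range(4):
        ac = anticommutator(gammas[mu], gammas[nu])
        expected = 2 * eta[mu] if mu == nu else 0
        assert all(abs(ac[i][j] - (expected if i == j else 0)) < 1e-12
                   for i in range(4) for j in range(4))
print("Clifford relation verified")
```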
Taking a slightly more dubious route... The spacetime invariants for fermions under special relativity are given by the Minkowski Metric. The Clifford Algebra corresponding to the Minkowski Metric is $Cl_{1,3}(\mathbb{R}) \cong M(2,\mathbb{H})$, which acts on the vector space $\mathbb{C}^4$. Every relativistic field $\psi(x,t)$ on spacetime preserves the invariants of spacetime, and therefore $\psi(x,t)$ must take values in some subspace of $\mathbb{C}^4$ acted on by some sub-algebra of $Cl_{1,3}(\mathbb{R})$. For fermions, $\psi(x,t)$ takes values in the full subspace $\mathbb{C}^4$ itself.
I'm wondering, first, if that dubious route is valid. If it is valid, I'm wondering if, as a consequence, all relativistic fields on spacetime, even for bosons, take values in some subspace of $\mathbb{C}^4$ spinors.
Answer: I think you are mixing up some basic facts about four-vectors and $\mathbb{C}^4$ spinors. So let us review some of them here (this is more of a long comment than an answer):
$\mathbb{C}^4$ spinors transform under $\text{Spin}^\mathbb{C}(1,3)$, the complexification of the double cover of $\text{SO}(1,3)$. The latter is the transformation group of 4-vectors. See https://en.wikipedia.org/wiki/Spin_group#Complex_case
Not every wavefunction with values in $\mathbb{C}^4$ (in the sense that all four components are complex) is spinor-valued. For example, if you come to describe the $W^+$ and $W^-$ bosons with regular QM, you will have to use four complex components, which form a complex 4-vector, not a spinor.
The obstructions to their construction are different. While for vectors the obstruction is the non-vanishing of the first Stiefel-Whitney class, for spinors it is the second Stiefel-Whitney class that has to vanish. See https://en.wikipedia.org/wiki/Spin_structure#SpinC_structures
Their spins are different. Restricting ourselves to the subalgebras of $\text{Spin}^\mathbb{C}(1,3)$ and $\text{SO}(1,3)$ describing rotations, their Casimir invariants are those of $\mathfrak{spin}(3)$ in the $1/2$ representation and $\mathfrak{so}(3)$ in the $1$ representation respectively. While it is true that these two algebras happen to be isomorphic, it truly is the Casimir invariant that gives us the spin of the particles we are describing. See https://en.wikipedia.org/wiki/Representation_theory_of_SU(2)#Most_important_irreducible_representations_and_their_applications and https://en.wikipedia.org/wiki/3D_rotation_group#A_note_on_Lie_algebras
This one is more of a physicist's fact: Their relativistic equations are not the same. On one hand, you have the Dirac equation for spin 1/2 particles, and on the other hand, you have the Proca equations for spin 1 particles. These two equations are fundamentally different since the first is to be thought of as being the "square root" of the second (more precisely the " square root" of the Klein-Gordon equation). See https://en.wikipedia.org/wiki/Dirac_equation#Mathematical_formulation
Within the Wightman axioms, there is the spin-statistics theorem, which tells you that spinor-valued wavefunctions do not obey the same statistics as vector-valued ones. See https://en.wikipedia.org/wiki/Spin%E2%80%93statistics_theorem#Consequences.
All these six points (which are not completely independent) indicate that 4-vector-valued wavefunctions are not spinor-valued ones. Again, this is more of a long comment than an answer, and I hope someone better than me will give you a real answer. | {
"domain": "physics.stackexchange",
"id": 92493,
"tags": "spacetime, field-theory, dirac-equation, spinors, clifford-algebra"
} |
Is commutation relation an equivalence relation? | Question: I'm now learning quantum mechanics with Liboff. The book deals with "a complete set of mutually compatible observables" in order to make a state maximally informative. How can one find such a set? It seems very hard unless the commutation relation is an equivalence relation. Is the commutation relation an equivalence relation? That is, if $A, B, C$ are Hermitian operators, then does $AB=BA, BC=CB$ imply $AC=CA$?
Answer: Commuting is not an equivalence relation. All components of angular momentum commute with $J^2$ but they don't commute with each other.
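The counterexample is easy to verify numerically. Here is a sketch in plain Python with spin-1/2 matrices $J_i = \sigma_i/2$ (taking $\hbar = 1$): $A = J_x$ and $C = J_y$ both commute with $B = J^2$, yet $[J_x, J_y] = iJ_z \neq 0$.

```python
# Commuting is not transitive: [Jx, J^2] = [Jy, J^2] = 0, yet [Jx, Jy] != 0.
# Spin-1/2 angular momentum: J_i = sigma_i / 2 (hbar = 1), 2x2 complex matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def is_zero(M):
    return all(abs(x) < 1e-12 for row in M for x in row)

Jx = [[0, 0.5], [0.5, 0]]
Jy = [[0, -0.5j], [0.5j, 0]]
Jz = [[0.5, 0], [0, -0.5]]

# J^2 = Jx^2 + Jy^2 + Jz^2  (equal to (3/4) I for spin 1/2)
sq = [matmul(J, J) for J in (Jx, Jy, Jz)]
J2 = [[sum(M[i][j] for M in sq) for j in range(2)] for i in range(2)]

assert is_zero(commutator(Jx, J2))      # [Jx, J^2] = 0
assert is_zero(commutator(Jy, J2))      # [Jy, J^2] = 0
assert not is_zero(commutator(Jx, Jy))  # but [Jx, Jy] = i Jz != 0
```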
How to find a complete set of mutually commuting observables is a difficult problem and I don't think you can give an algorithmic answer. It depends very much on the specific problem. An observable that commutes with the Hamiltonian is conserved, this can be a good starting point. For example, angular momentum
is conserved when the Hamiltonian is rotationally symmetric. | {
"domain": "physics.stackexchange",
"id": 30386,
"tags": "quantum-mechanics, measurement-problem, commutator, observables"
} |
Implemented heapsort algorithm in Java | Question: I am learning algorithms using Java. Here is an implementation of heapsort. Let me know if anything is wrong, and any suggestions to improve the algorithm.
public class HeapSort {
public int arr[];
public HeapSort(int[] arr) {
this.arr = arr;
}
// to make a heap
public void makeHeap() {
for (int i = 0; i < arr.length; i++) {
// index of child element
int index = i;
while (index != 0) {
int parent = (index - 1) / 2;
if (arr[index] <= arr[parent]) break;
// swap child element to its parent one
int temp = arr[index];
arr[index] = arr[parent];
arr[parent] = temp;
index = parent;
}
}
}
// to remove item from the top of the binary tree -> arr[0]
public void removeTopItem(int count) {
int a = arr[0];
arr[0] = arr[count];
arr[count] = a;
int index = 0;
count--;
// to remake binary tree
while (true) {
int leftChild = index * 2 + 1;
int rightChild = index * 2 + 2;
// check the boundary
if (rightChild > count) break;
if (arr[index] > arr[leftChild] && arr[index] > arr[rightChild]) break;
// to get greater parent
int parentGreat = arr[rightChild] > arr[leftChild] ? rightChild : leftChild;
// swap current item to its parent one
int temp = arr[index];
arr[index] = arr[parentGreat];
arr[parentGreat] = temp;
index = parentGreat;
}
}
// sort using by heap
public int[] heapSort() {
// make a heap
makeHeap();
// sorting
for (int i = arr.length - 1; i >= 0; i--) {
removeTopItem(i);
}
return arr;
}
}
Answer: I did test it, and it mostly works. However,
System.out.println(Arrays.toString(
new HeapSort(new int[] {3, 2, 1}).heapSort()));
isn't working.
document/comment your code.
You commented (all public method) members, see above for standard tool support.
For every non-trivial piece of code, what is the reason it is there, in the first place?
Starting at the class-, if not package-level:
/** HeapSort as a <code>java.util.function.UnaryOperator<T></code>.
* Java & coding beginner's exercise
* in type design and algorithm implementation.
* The general idea is to turn the array into a max-heap
* and repeatedly move the max item
* from the shrinking heap to its current end.
*/
class HeapSort<T> implements In_placeSorter<T> {
@Override
public T sort(T toBeSorted) {
if (!(toBeSorted instanceof int[]))
throw new IllegalArgumentException("int[] only, for now");
comparables = (int[]) toBeSorted;
return (T) sort();
} …
(note sort() not (doc)commented here: inherits from In_placeSorter<T>)
name things for what use they are. arr is horribly unspecific (used comparables instead).
don't make things more visible than they need to be. When in doubt, start with default/not specifying visibility:
int comparables[];
pay attention to boundaries:
neither the loop in makeHeap() nor the one in heapSort() need to handle index 0.
More importantly, while it is true that a right child needs to exist to be handled: what about left?
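To make the boundary point concrete, here is a sketch of a correct sift-down (in Python for brevity, so not a drop-in replacement): note the separate comparison for a node that has only a left child, which is exactly the case the `{3, 2, 1}` input hits.

```python
# A correct max-heap sift-down: note that a node with only a left child is
# still compared, whereas the reviewed removeTopItem() breaks out as soon as
# rightChild > count and so never inspects a lone left child.

def sift_down(a, index, end):
    """Restore the max-heap property for a[index..end] (inclusive)."""
    while True:
        left, right = 2 * index + 1, 2 * index + 2
        largest = index
        if left <= end and a[left] > a[largest]:
            largest = left
        if right <= end and a[right] > a[largest]:  # right child may not exist
            largest = right
        if largest == index:
            return
        a[index], a[largest] = a[largest], a[index]
        index = largest

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # bottom-up heapify: start mid-array
        sift_down(a, i, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]      # move current max to its final slot
        sift_down(a, 0, end - 1)
    return a

assert heap_sort([3, 2, 1]) == [1, 2, 3]   # the failing case from the review
```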
trying to implement "the general idea" without looking what&how others have done is a great step in learning —
please precede it by considering how do I know/check the specification is met?
asking how to improve the procedure is another great step in programming
(professionals prefix a when and). Start with algorithm;
Just don't try to force it - if nothing suggests itself, turn to something else temporarily, sleep on it, or now look what others have done and how, for heap-sort:
why do heapify()s start at the middle of the array?
what is to be gained from not comparing the item to be (re-)inserted while there are two children?
Continue with coding: e.g., there are several instances where you exchange/swap elements: "factor out" a method/procedure
handle petty concerns last, if at all:
Java arrays & java.util.Arrays don't come with a swap()? Use java.util.Collections.swap(java.util.List, int, int) as a template
using a for-loop:
for (int index = i, parent ; index != 0 ; index = parent) {…
Just in case:
/** Rearranges items in ascending "natural" order. */
interface In_placeSorter<T> extends java.util.function.UnaryOperator<T> {
/** Rearranges items in ascending "natural" order.
* @param toBeSorted items to be sorted
* @return <code>toBeSorted</code> */
T sort(T toBeSorted);
@Override
default T apply(T t) { return sort(t); }
} | {
"domain": "codereview.stackexchange",
"id": 28902,
"tags": "java, algorithm, sorting, heap-sort"
} |
Statistical Mechanics: "Pressure" in a one-dimensional quantum system | Question: For statistical mechanics homework I have to solve this problem:
Consider a single quantum-mechanical particle in an infinite
one-dimensional well of width L. From elementary quantum mechanics, we
know that the spectrum of allowed energies is given by
$$E(n) = \frac{n^2 \hbar^2\pi^2}{2mL^2}$$
Where $n$ is an integer greater than 0.
Calculate the partition function, and use it to find the internal energy $U$, the heat capacity $c_L$, and the equation of state $f(T,P,L) = 0$, in both the high temperature and low temperature limits.
I started with the low temperature limit; haven't got to the high temperature part yet. Here's what I have so far:
Letting $a = \frac{\hbar^2\pi^2}{2mL^2}$, $\beta = \frac{1}{T}$ (with $k_B = 1$ for tidiness), the partition function is:
$$Z = \sum_{n=1}^\infty \exp(-\beta a n^2)$$
I tried getting a closed-form solution to this but it gave me a headache, so let $\gamma = \exp(-\beta a)$:
$$Z = \gamma + \gamma^4 + \gamma^9 + \dots$$
$$Z = \gamma(1 + \gamma^3 + \gamma^8 \dots)$$
We want $\ln{Z}$:
$$\ln{Z} = \ln{\gamma} + \ln{(1 + \gamma^3 + \gamma^8 + \dots)}$$
$$\ln{Z} = -\beta a + \ln{(1 + \gamma^3 + \gamma^8 + \dots)}$$
In the low temperature limit, $\gamma$ is very small, so expanding $\ln{(1 + \gamma^3)}$:
$$\ln{Z} = -\beta a + \gamma^3 = -\beta a + \exp(-3 \beta a)$$
The energy is
$$U = - \frac{\partial \ln{Z}}{\partial \beta} = a[1 + 3 \exp(-3 \beta a)]$$
The heat capacity is
$$c_L = \frac{dU}{dT} = 9a^2\beta^2 \exp(-3 \beta a)$$
(there's a $k_B$ somewhere in there but that's not a priority right now)
(Hopefully there aren't any errors in the above)
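As a quick numerical sanity check of the expansion above (a sketch in Python with the illustrative choices $a = 1$, $k_B = 1$ and $\beta = 5$, none of which come from the problem statement):

```python
import math

# Low-temperature check: compare U from the full series
# Z = sum_{n>=1} exp(-beta*a*n^2) (via a numerical derivative of ln Z)
# against the expansion U ~ a * (1 + 3*exp(-3*beta*a)).

a = 1.0          # stands in for hbar^2 pi^2 / (2 m L^2)
beta = 5.0       # low temperature: beta*a >> 1

def lnZ(b, nmax=200):
    return math.log(sum(math.exp(-b * a * n * n) for n in range(1, nmax + 1)))

h = 1e-5
U_numeric = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)   # U = -d(lnZ)/d(beta)
U_series = a * (1 + 3 * math.exp(-3 * beta * a))

assert abs(U_numeric - U_series) < 1e-5   # the two agree at low temperature
```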
Anyway, all that is just to give context to the part I'm confused about, which is that I don't have an intuition for what "pressure" means in a system like this. In my notes, I have that the pressure is defined by the relation:
$$P = \frac{1}{\beta} \frac{\partial \ln{Z}}{\partial V}$$
I can understand how this dimensionally gives a pressure quantity (Energy/Volume) for a 3-D system, but I don't know whether it applies for this system. The size of this system is characterized by a 1-D quantity $L$, and naively using
$$P = \frac{1}{\beta} \frac{\partial \ln{Z}}{\partial L}$$
gives a quantity with units of Energy/Distance. Is this the correct analogue for "pressure" in this 1-D system? Like a "linear pressure"? Or do I need to substitute $V = L^{3}$ and use the first formula with the $\frac{\partial}{\partial V}$ in it?
Answer: The intensive parameter that partners with (extensive) length in one-dimensional systems is often called "tension" and given symbols like $\mathcal{T}$ or $\tau$ (the examples of such systems used in traditional texts like Zemansky are generally stretched wires and similar things).
As such it has dimensions of $\text{energy}/\text{length} = \text{force}$, and generally has a sign convention such that your particle-in-a-box system will have negative tension. | {
"domain": "physics.stackexchange",
"id": 40398,
"tags": "quantum-mechanics, thermodynamics, statistical-mechanics, pressure, dimensional-analysis"
} |
Digging depth and heat | Question: I have a question about the underground heat at different distances from the core of the Earth.
I was wondering if there was a depth at which you can dig that the decrease in heat (as a result of being lower) and the increase in heat due to the depth that you dug that the loss of heat was the same as the gain in heat.
Here are the assumptions that I am using:
The lower an object is, the cooler the air around it and therefore the cooler the object can become. Cold air tends to sink below heated air. (I'm thinking like a basement is cooler than the top floor during summer without air conditioning).
The deeper you dig towards the centre of the earth, the hotter your surroundings become. (While there may be some initial decrease in temperature, I'm assuming as you dig towards the mantle or core the hotter the ground around you is).
I am holding constant the heat from the sun and atmosphere. Is there a defined depth at which the loss of heat (from falling cool air) = gain in heat (from the heat of the earth)? Does the shape of the hole dug affect the depth at which this occurs?
Answer: The heat equilibrium depth will vary for different locations due to:
Differences in geothermal gradient at different locations (heat emitted by rock). It is generally accepted that the global geothermal gradient is between $25\,^{\circ}\mathrm{C/km}$ and $30\,^{\circ}\mathrm{C/km}$. At some locations the gradient can vary between $15.4\,^{\circ}\mathrm{C/km}$ and $102.6\,^{\circ}\mathrm{C/km}$.
The temperature of the air will increase with depth due to auto-compression of the air (also known as the lapse rate) [Environmental Engineering of South African Mines, 1989, pp 403-404]. The increase in dry-bulb temperature of air due to auto-compression is $9.66\,^{\circ}\mathrm{C/km}$, if there is no change to the humidity of the air.
Moisture content of the air. The greater the humidity, the greater
the heat in the air.
Ventilation of the hole being dug and how much heat the moving air is
removing from the bottom of the hole.
Temperature of the air entering the hole from the Earth's surface
The presence or absence of other sources of heat such as machinery, electrical equipment, warm water springs entering the hole and warm bodies.
The age of the hole and its exposure to ventilating air. New holes expose fresh warm rock to air in the atmosphere and the air eventually carries away the heat via ventilation. New holes, or new extensions to holes, are warmer/hotter than old holes.
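To make the interplay of the first two factors concrete, here is a sketch where rock and hole-air temperatures are modelled as straight lines in depth. The surface temperatures ($30\,^{\circ}$C air, $15\,^{\circ}$C near-surface rock) are assumed illustrative values, not measurements:

```python
# Depth at which linearly increasing rock temperature overtakes the
# auto-compressed air temperature.  Surface temperatures are assumed
# (illustrative) values; the gradients are the ones quoted above.

geothermal_gradient = 27.5   # deg C per km (midpoint of the 25-30 range)
lapse_rate = 9.66            # deg C per km, dry-bulb auto-compression

T_air_surface = 30.0         # assumed surface air temperature, deg C
T_rock_surface = 15.0        # assumed near-surface rock temperature, deg C

def rock_T(d_km):
    return T_rock_surface + geothermal_gradient * d_km

def air_T(d_km):
    return T_air_surface + lapse_rate * d_km

# Solve rock_T(d) = air_T(d) for d: the crossing of two straight lines.
crossover_km = (T_air_surface - T_rock_surface) / (geothermal_gradient - lapse_rate)
print(f"rock heat dominates below ~{crossover_km:.2f} km")

assert abs(rock_T(crossover_km) - air_T(crossover_km)) < 1e-9
```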
The shape of the hole will be determined more by its function and geotechnical conditions, such as ground stability, than temperature. | {
"domain": "earthscience.stackexchange",
"id": 477,
"tags": "temperature, geothermal-heat, core"
} |
Extracting the last component (basename) of a filesystem path | Question: fn basename<'a>(path: &'a str, sep: char) -> Cow<'a, str> {
let pieces = path.split(sep);
match pieces.last() {
Some(p) => p.into(),
None => path.into(),
}
}
Usage:
println!("'{}'", basename("foo", '/')); // outputs 'foo'
println!("'{}'", basename("bob/", '/')); // outputs ''
println!("'{}'", basename("/usr/local/bin/rustc", '/')); // outputs 'rustc'
I think the split() into a match on last() is kind of elegant.
I know there is some work needed to handle both str and String, I am not sold on the use of Cow and needing to define a lifetime for the string.
I am not sold on Cow because later on I need to extract from it.
let prog = basename(&args[0], '/').into_owned();
It feels like I am working too hard.
Answer: Firstly, you should use rsplit and next rather than split and last, as it starts at the more appropriate end:
fn basename<'a>(path: &'a str, sep: char) -> Cow<'a, str> {
let mut pieces = path.rsplit(sep);
match pieces.next() {
Some(p) => p.into(),
None => path.into(),
}
}
Secondly, you shouldn’t be using strings for this; you should be using paths, because that’s semantically what you’re dealing with.
The easiest way to get a path tends to be to take a &Path or a generic parameter implementing AsRef<Path> and calling .as_ref() on it; str, String, Path, PathBuf and more implement it.
You can get the base name from a &Path with file_name; this admittedly produces a Option<&OsStr>, so if you want to display the path you’d need to convert it back towards a string with e.g. .and_then(|s| s.to_str()).
Anyway, the point of this latter part is just that for something that is semantically a path, you should be handling it specially, as a rule; a path need not be Unicode. Think on it more. | {
"domain": "codereview.stackexchange",
"id": 32917,
"tags": "strings, memory-management, url, rust"
} |
set gravity on each object Gazebo7 | Question:
I am trying to set the gravity on each individual object in Gazebo. I am spawning a robot model that needs to have zero gravity, but the objects it needs to interact with need to have gravity, so they don't float away. I am not sure how to accomplish this; I can turn gravity on/off for the entire world, but not for individual models. Here is my world file,
<?xml version="1.0" ?>
<sdf version="1.4">
<world name="default">
<include>
<uri>model://sun</uri>
</include>
<include>
<uri>model://ground_plane</uri>
</include>
<include>
<pose>0 10 0 0 0 0</pose>
<gravity>0 0 -9.81</gravity>
<uri>model://button_panel</uri>
</include>
<gravity>0 0 0</gravity>
</world>
</sdf>
I would greatly appreciate any guidance.
Originally posted by nagoldfarb on Gazebo Answers with karma: 3 on 2016-09-30
Post score: 1
Original comments
Comment by Peter Mitrano on 2016-10-02:
lol i see u
Answer:
One should probably switch from disabling gravity, to making the robot static. It's still a temporary solution, but static is meant to be at the robot level, not world level. Having different gravity on different objects is a pretty bizarre request.
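A minimal sketch of the static approach in the world file (the model name `my_robot` is a placeholder): SDF allows overriding the flag inside the `<include>` element:

```xml
<include>
  <uri>model://my_robot</uri>  <!-- placeholder model name -->
  <static>true</static>        <!-- robot no longer falls; other models keep world gravity -->
</include>
```

Note that a static model is fixed in place entirely and will not respond to forces, which is why this is described as a temporary solution.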
Originally posted by Peter Mitrano with karma: 768 on 2016-10-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 3997,
"tags": "ros, ros-indigo"
} |
How are the number of bytes less than the number of pixels in an image? | Question: Let's take, for example, this jpeg image here
The image there is 400 x 300, or 120000 pixels, and the file size of the image (on my computer) is shown to be 65171 bytes.
This means the computer stores about 2 pixels in every byte. How does it do this? Is it due to the same color pixels repeating in the image, so it only has to save the RGB data once, or is there some other trickery going on here?
Answer: A colour image is typically digitized using 256 levels for each of the 3 RGB channels. That gives 3 bytes per pixel. The trick to attain a smaller file size is to apply some compression, to take advantage of the redundancy (neighbouring pixels tend to have similar colours). How much is gained depends on the image format. For example, the PNG format applies a "lossless" compression (no information lost). The JPEG format applies a more aggressive lossy compression (some information is lost). BMP is a format that does not (by default) compress; you can try to open your image and save it in BMP format and check that the size is approximately the expected one (3 bytes per pixel). Another thing you can try is to generate a big "pure noise" image, and check that the size is also about 3 bytes per pixel (compression does not work here). | {
"domain": "cs.stackexchange",
"id": 7057,
"tags": "image-processing, data-compression, storage"
} |
Explicit form of Dirac field creation/annihilation operators? | Question: The explicit form of the creation and annihilation operators for the complex scalar field seems to be shown in all QFT lecture notes, but not those for the Dirac field (instead they tend to only give the anticommutation relation).
What is the explicit form of the Dirac field creation/annihilation operators?
Answer: You can recover them from the mode expansion of the field:
$$\psi(x) = \int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2E_p}}\sum\limits_s[a^s_p u^s(p) e^{-ipx} + b^{s\dagger}_p v^s(p) e^{ipx}]$$
You can verify that:
$$a^s_p = \frac{i}{2m}\int d^3 x \frac{\bar{u}^s(p)}{\sqrt{2E_p}} (e^{ipx} \partial_0 \psi - \psi\partial_0e^{ipx})$$
The verification can be done by plugging in the mode expansion, using orthogonality relations (page 48 Peskin and Schroeder), and using Fourier Transform properties (to get delta functions). | {
"domain": "physics.stackexchange",
"id": 98729,
"tags": "quantum-field-theory, operators, fourier-transform, dirac-equation"
} |
Computation of concordant and discordant pairs | Question: I would like to compute the number of concordant and discordant pairs of two large vectors. This function is a nice try, and works quite efficiently.
pairs <- function(x,y){
n <- length(x)
ix <- order(x)
x <- x[ix]
y <- y[ix]
Nc <- sum(sapply(1:(n-1),function(i) sum(x[i]<x[(i+1):n] & y[i]<y[(i+1):n])))
Nd <- sum(sapply(1:(n-1),function(i) sum(x[i]<x[(i+1):n] & y[i]>y[(i+1):n])))
return(list(Nc=Nc,Nd=Nd))
}
x <- runif(10000)
y <- runif(10000)
system.time(pairs(x,y))
Do you have any idea on how it would be possible to boost the function?
Answer: These sorts of operations are tough to vectorize because you need to compare every element to all later elements. If the counting operation is a performance bottleneck in your code, then one possibility would be to implement it in C++ using the Rcpp package:
library(Rcpp)
cppFunction(
"IntegerVector CDcount(NumericVector x, NumericVector y) {
IntegerVector counts(2, 0);
int n = x.size();
for (int i=0; i < n; ++i) {
for (int j=i+1; j < n; ++j) {
counts[0] += (x[i] < x[j]) && (y[i] < y[j]);
counts[1] += (x[i] < x[j]) && (y[i] > y[j]);
}
}
return counts;
}")
pairs2 <- function(x, y) {
n <- length(x)
ix <- order(x)
x <- x[ix]
y <- y[ix]
counts <- CDcount(x, y)
return(list(Nc=counts[1], Nd=counts[2]))
}
For your test data, this results in a 50 times speedup on my system.
x <- runif(10000)
y <- runif(10000)
system.time(pairs(x,y))
# user system elapsed
# 4.892 1.447 6.584
system.time(pairs2(x, y))
# user system elapsed
# 0.119 0.001 0.120
identical(pairs(x, y), pairs2(x, y))
# [1] TRUE
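If it helps to have an independent reference implementation to test optimized versions against, here is a brute-force sketch in Python (same $O(n^2)$ pair logic as the original R code; ties count as neither concordant nor discordant):

```python
import itertools

# Brute-force concordant/discordant pair counts, useful for cross-checking
# optimized implementations on small inputs.

def concordant_discordant(x, y):
    nc = nd = 0
    for (xi, yi), (xj, yj) in itertools.combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            nc += 1   # pair ordered the same way in x and y
        elif s < 0:
            nd += 1   # pair ordered oppositely
    return nc, nd

assert concordant_discordant([1, 2, 3], [1, 2, 3]) == (3, 0)
assert concordant_discordant([1, 2, 3], [3, 2, 1]) == (0, 3)
```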
The Rcpp package takes some time to compile the CDcount function, so this option is probably only helpful if you are operating on very large vectors or if you are repeating the operation many times. | {
"domain": "codereview.stackexchange",
"id": 26463,
"tags": "r, performance"
} |
Ford-Fulkerson pseudo-polynomial | Question: Can somebody please explain why the Ford-Fulkerson algorithm has pseudo-polynomial complexity? I understand that the complexity in this case strongly depends on the capacities of the edges of the network, since we have $O(|E| \cdot f_{max})$, but why is the algorithm still not polynomial?
Answer: Let us recall that the input for flow problems is a graph with capacities on its edges. Let $G(V,E)$, where $V$ denotes the vertex set of the graph and $E$ denotes the edge set, and let $f_{\text{max}}$ denote the maximum possible flow. It is an easy calculation that the input size is $\mathcal{O}(V+E \log (f_{\text{max}}))$ bits, as writing $f_{\text{max}}$ takes $\mathcal{O}(\log f_{\text{max}})$ bits. Since $f_{\text{max}}$ may be arbitrarily high (it depends on neither $V$ nor $E$), the running time $O(|E| \cdot f_{\text{max}})$ could be arbitrarily high as a function of the input size in bits.
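The standard worst-case example makes this concrete. Below is a Python sketch of the "diamond" network with two capacity-$C$ paths and a capacity-1 cross edge; if the augmenting paths are chosen badly (always through the cross edge), Ford-Fulkerson needs $2C$ augmentations, a count that grows with the numeric value of the capacities rather than with the graph size:

```python
# Ford-Fulkerson on the diamond network s->{a,b}->t with edge capacities C
# and a cross edge a->b of capacity 1.  Alternating between the two paths
# that use the cross edge pushes only 1 unit of flow per augmentation.

def ff_alternating(C):
    residual = {('s', 'a'): C, ('s', 'b'): C, ('a', 'b'): 1,
                ('a', 't'): C, ('b', 't'): C}
    paths = [['s', 'a', 'b', 't'], ['s', 'b', 'a', 't']]
    flow = iterations = 0
    while True:
        path = paths[iterations % 2]
        edges = list(zip(path, path[1:]))
        bottleneck = min(residual.get(e, 0) for e in edges)
        if bottleneck == 0:
            break
        for (u, v) in edges:
            residual[(u, v)] = residual.get((u, v), 0) - bottleneck
            residual[(v, u)] = residual.get((v, u), 0) + bottleneck  # reverse edge
        flow += bottleneck
        iterations += 1
    return flow, iterations

assert ff_alternating(50) == (100, 100)   # 2C augmentations for a max flow of 2C
```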
An algorithm is said to have a polynomial running time if its runtime is polynomial in the input size in bits, whereas a pseudo-polynomial runtime means the runtime of the algorithm is polynomial in the numeric value of the input (see Pseudo-polynomial time) | {
"domain": "cs.stackexchange",
"id": 15457,
"tags": "ford-fulkerson"
} |
sound travelling through a tube | Question: I am an artist who is designing some interactive play equipment for a playground. One of the playground components is a talk-tube feature. The child will speak into a flower sculpture (a cone-shaped flower attached to a stem made of hollow pipe), and underground tubing will carry the sound to another flower approximately 15-20 feet away. I will be using 1-1/2" flexible smooth PVC pipe. My question is: will the zig-zag design of the stems adversely affect the sound transmission? I am attaching a picture of the design so far [image: view showing the angles of the talk tube].
Answer: For the shapes you have drawn, the angles don't look much different from the old speaking tubes of ships and the grand houses where they were used to summon the maid. The length of the pipe is pretty short so I don't think you will have any problems, as long as the bends underground are not too sharp.
It's an imaginative idea, children's voices are high pitched and the early ship speaking tubes were designed with whistles in mind :)
I hope you get an answer with more detail but I can't think of an obvious problem. | {
"domain": "physics.stackexchange",
"id": 36839,
"tags": "acoustics, vibrations"
} |
Are these if-statements too fancy? | Question: I just rewrote this:
if (budgetRemaining != 0 || totalOpenInvoices != 0)
{
}
Like this:
if (new[] { budgetRemaining, totalOpenInvoices }.Any(c => c != 0))
{
}
If I had seen that before I ramped up on Linq, it would have confused me. Now that I've been learning functional programming and using Linq, it seems natural, but is it sacrificing simplicity?
Answer: Seems to be swatting a fly with a Buick to me. The first form seems pretty concise and the variable names are quite descriptive. The second form creates a new object (the array) which will eventually have to be GC'd and introduces a new lambda variable, c which doesn't seem descriptive any more. | {
"domain": "codereview.stackexchange",
"id": 1088,
"tags": "c#, linq"
} |
What do the numbers in the Ising sampleset mean? | Question: I am trying to create a portfolio optimization with the DWave Quantum Computer. I wrote some code trying to somehow reconstruct the following Ising model paper:
Ai is the maximum amount of money that can be invested in the i-th asset.
B is the total budget.
Ri denote the random variable representing the return from asset i.
This is how I tried to code it:
import datetime
import pandas as pd
import fix_yahoo_finance as yf
import pandas_datareader.data as web
import numpy as np
import neal
import dimod
from dwave.system import DWaveSampler
import random
import hybrid
def cov(a,b):
return a.cov(b)
def hi(price, returns, cov):
#mean price
Ai = np.mean(price)
#mean expected return
E = np.mean(returns)
# hi = -(1/2)((1/3)*cov(Ri,Rj) + (1/3)Ai^2 - (1/3)E(Ri) - 2B(1/3)*Ai)
h = (-(1/2)*((1/3)*cov + (1/3)* (Ai ** 2) - (1/3)* E - 2*100*(1/3)*Ai))
return h
yf.pdr_override()
start = datetime.datetime(2018,1,3)
end = datetime.datetime(2021,1,1)
all_data = {ticker: web.get_data_yahoo(ticker,start,end)
for ticker in ['AAPL','IBM','MSFT','GOOGL']} #Note: GOOG has become GOOGL
price = pd.DataFrame({ticker:data['Adj Close']
for ticker,data in all_data.items()})
volume = pd.DataFrame({ticker:data['Volume']
for ticker,data in all_data.items()})
returns = price.pct_change() #calculate the percentage of the price
returns = returns.dropna()
print(returns.tail())
a = cov(returns['AAPL'], returns['IBM'])
b = cov(returns['IBM'], returns['MSFT'])
c = cov(returns['MSFT'], returns['GOOGL'])
d = cov(returns['GOOGL'], returns['AAPL'])
apple = hi(price['AAPL'],returns['AAPL'], a)
ibm = hi(price['IBM'],returns['IBM'], b)
microsoft = hi(price['MSFT'],returns['MSFT'], c)
google = hi(price['GOOGL'],returns['GOOGL'], d)
sampler = neal.SimulatedAnnealingSampler()
#qpu = DWaveSampler()
h = {apple: 0.0, ibm: 0.0, microsoft: 0.0, google: 0.0}
#energy changes when bias value changes
J = {(apple, ibm): 0.0, (ibm, microsoft): 0.0, (google, apple): 0.0, (apple, microsoft): 0.0, (ibm, google): 0.0}
sampleset = sampler.sample_ising(h, J, num_reads=10, annealing_time=2000)
print(sampleset)
And this is the output sampleset:
I was wondering what the numbers on top meant, so the -224463.77916595488 1414.5773363996423 etc. and if this is correct
Answer: I am not familiar with the application you are trying to implement, but I see a general misunderstanding in the setting of the terms h_i and J_i,j. The numbers at the top of the output are the names of the variables you have defined: you are using the computed values of apple, ibm, microsoft and google as variable names. The biases h_i and coupling strengths J_i,j can be set as dictionaries, but you have to use strings as the variable names (keys), with the computed term as the actual value. So instead of h[apple]=0.0, you need to use something like h["apple"]=apple. The same goes for J_i,j.
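A minimal sketch of the intended layout, with string names as keys and the computed coefficients as values (the numbers below are placeholders, not real portfolio quantities), together with the Ising energy that such a pair of dictionaries defines:

```python
# Correct structure for an Ising problem: string variable names as keys,
# computed coefficients as values.  (The coefficient numbers below are
# placeholders, not real portfolio data.)

h = {"apple": -1.2, "ibm": 0.4, "microsoft": -0.7, "google": 0.9}
J = {("apple", "ibm"): 0.3, ("ibm", "microsoft"): -0.5,
     ("google", "apple"): 0.1}

def ising_energy(sample, h, J):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, with s_i in {-1, +1}."""
    energy = sum(bias * sample[v] for v, bias in h.items())
    energy += sum(coupling * sample[u] * sample[v]
                  for (u, v), coupling in J.items())
    return energy

sample = {"apple": 1, "ibm": -1, "microsoft": 1, "google": -1}
print(ising_energy(sample, h, J))
```

With h and J structured this way, the columns of the sampleset printout are headed by the variable names apple, ibm, etc., rather than by stray floating-point numbers.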
In your current implementation all terms are set to 0.0. This is why all "solutions" have an energy of zero and are trivially minimal. | {
"domain": "quantumcomputing.stackexchange",
"id": 2816,
"tags": "hamiltonian-simulation, d-wave, annealing"
} |
Forbidden capacitance in LC circuit connected to AC source | Question: While solving problems on AC circuits, I came across one that had a circuit connection as shown below.
The question was to find the value of "the capacitance that can never be used in this circuit". The AC source used was 220V, 50Hz and it was assumed, $\pi^{2}=10$.
As far as the solution was concerned,
First, I assumed that $I = Ae^{jwt}$, where $j=\sqrt{-1}$ and $A \in \mathbb{C}$.
Then by KVL(and assuming that initially charge on capacitor is 0),
$\frac{\int_{0}^{t} I \,dt}{C} + L\frac{dI}{dt} = Ee^{jwt} $
Where $E=220V$,
And then on differentiating and substituting the value of I, obtained,
$ A j(wL-\frac{1}{wC}) = E $
If $C=10\mu F$,
the coefficient $(wL-\frac{1}{wC})$ on the LHS would vanish, which would cause an exception here; this led me to believe this was the capacitance which could not be used, and the book does say that it is the correct answer.
However, can anyone shed light upon what's happening here, physically?
Note : As I am pretty new to AC circuits I assumed that current would always vary as $I=Ae^{jwt}$
, without any prior justification. I would appreciate it if someone clarified the validity of this assumption.
Answer: It's an odd question. What you've correctly found is that for the idealized whiteboard problem you have a perfect resonance that draws a divergent current from the source. But so what? Divergence doesn't cause the whiteboard to fail: it's a perfectly reasonable mathematical result.
In real physics, the components won't be perfect: the LC resonance won't be perfectly tuned to 50 Hz, and there will be resistance in the circuit. So, you'll draw a relatively large current from the source, but it won't diverge in a realistic model. Circuits like this are employed when drawing a large current from the source is desired. So, "can never be used" is ridiculous.
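To see the near-divergence numerically, here is a sketch. The inductance $L = 1\,\mathrm{H}$ is not given in the question's text; it is inferred here as an assumption from the book's answer via $C = 1/(w^2 L)$ with $w = 100\pi$:

```python
import math

# Current amplitude |A| = E / |w*L - 1/(w*C)| for the series LC circuit,
# scanned over C.  L = 1 H is an assumed (hypothetical) value, chosen to be
# consistent with the "forbidden" capacitance of about 10 uF at 50 Hz.

E = 220.0
omega = 2 * math.pi * 50      # w = 100*pi rad/s
L = 1.0                       # assumed inductance

C_resonant = 1 / (omega**2 * L)   # capacitance at which the reactance vanishes

def current_amplitude(C):
    reactance = omega * L - 1 / (omega * C)
    return E / abs(reactance)

print(f"resonant C = {C_resonant * 1e6:.2f} uF")   # ~10 uF (with pi^2 ~ 10)
for C in (8e-6, 9e-6, 9.9e-6, 9.99e-6):
    # the closer C is to resonance, the larger the drawn current
    print(f"C = {C*1e6:5.2f} uF -> |I| = {current_amplitude(C):10.1f} A")
```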
As for using a complex current as a tool to solve the system, see phasors. Your approach is fine, and is a common approach for problems like this. Congratulations on figuring it out for yourself. | {
"domain": "physics.stackexchange",
"id": 97580,
"tags": "homework-and-exercises, electric-circuits, electric-current, capacitance, inductance"
} |
Is there something like acceleration constraint in a rigid body? | Question: Suppose we have two points on a body which having acceleration a1 and a2 direction we know and magnitude as well , is there some sort of relation between them we can establish , like for velocities along the line joining velocities must be same sort of ? For example a rod which has two points of acceleration value known and directions too suppose angle theta1 for a1 and theta 2 for a2 from the rod , is there a constraint which can be established on that ? As we know velocity along the line joining the rod should be same everywhere , otherwise it fails to consider rigid body . Hope now clarity is given
[Is there some relation between them which we can deduce? That's what I am asking.] The body just initially has such acceleration vectors, so initially the angular velocity is zero and is therefore not shown in the figure.
Answer: Of course, there is. All velocities/accelerations on the extended rigid body are a result of the same motion of the rigid body, which we refer to as the rotating frame.
Just as the velocities of two points on a rigid body are related by $$ \boldsymbol{v}_A + \boldsymbol{r}_A \times \boldsymbol{\omega} = \boldsymbol{v}_B + \boldsymbol{r}_B \times \boldsymbol{\omega}$$
where $\boldsymbol{r}_A$ is the position vector of point A, $\boldsymbol{r}_B$ the position vector of point B, $\boldsymbol{v}_A$ and $\boldsymbol{v}_B$ the velocity vectors of the points, and $\boldsymbol{\omega}$ the rotational velocity of the rigid body.
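As a quick numerical sanity check of this relation (my own sketch, with arbitrary vectors), note that $\boldsymbol{v}_P + \boldsymbol{r}_P \times \boldsymbol{\omega}$ evaluates to the same vector for every material point, and the acceleration constraint quoted at the end of the answer holds as well:

```python
import numpy as np

rng = np.random.default_rng(0)
w      = rng.normal(size=3)    # angular velocity of the body
wdot   = rng.normal(size=3)    # angular acceleration
v_O    = rng.normal(size=3)    # velocity of the body-fixed origin
a_O    = rng.normal(size=3)    # acceleration of the body-fixed origin
r_A, r_B = rng.normal(size=3), rng.normal(size=3)   # two material points

# Rigid-body velocity and acceleration fields:
v = lambda r: v_O + np.cross(w, r)
a = lambda r: a_O + np.cross(wdot, r) + np.cross(w, np.cross(w, r))

# Velocity constraint: v_A + r_A x w == v_B + r_B x w
assert np.allclose(v(r_A) + np.cross(r_A, w), v(r_B) + np.cross(r_B, w))

# Acceleration constraint: a_A == a_B + wdot x d + w x (w x d), d = r_A - r_B
d = r_A - r_B
assert np.allclose(a(r_A), a(r_B) + np.cross(wdot, d) + np.cross(w, np.cross(w, d)))
print("both rigid-body constraints hold")
```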
Take the time derivative of the above using the product rule to get
$$ \boldsymbol{a}_A + \boldsymbol{r}_A \times \boldsymbol{\dot \omega} + \boldsymbol{v}_A \times \boldsymbol{\omega} = \boldsymbol{a}_B + \boldsymbol{r}_B \times \boldsymbol{\dot \omega} + \boldsymbol{v}_B \times \boldsymbol{\omega}$$
Which is the kinematic constraint between two points. The above is commonly rewritten as follows, using the relative position vector $\boldsymbol{d} = \boldsymbol{r}_A - \boldsymbol{r}_B$
$$\boldsymbol{v}_A = \boldsymbol{v}_B + \boldsymbol{\omega} \times \boldsymbol{d}$$
$$\boldsymbol{a}_A = \boldsymbol{a}_B + \boldsymbol{\dot \omega} \times \boldsymbol{d} + \boldsymbol{\omega} \times ( \boldsymbol{\omega} \times \boldsymbol{d}) $$ | {
"domain": "physics.stackexchange",
"id": 85566,
"tags": "newtonian-mechanics, rotational-dynamics, acceleration, rigid-body-dynamics, constrained-dynamics"
} |
Ros with Labview: can see topics but no data | Question:
Hello everybody,
I've got a ROS node running in LabVIEW on my PC (Windows 10, 192.168.1.129). It is publishing some data on a topic. ROS is installed on a Raspberry Pi 3 with Ubuntu (18.04, IP: 192.168.1.130). I don't have ROS installed on Windows, so obviously the master is running on the Raspberry Pi 3. My PC and Raspberry Pi are connected to the same network and can communicate through ssh; pinging each other works. But when I run the publisher in LabVIEW (on Windows), i.e. the node that publishes on a topic, from the Ubuntu terminal I can see the topic with "rostopic list", but with "rostopic echo" I can't see anything. So I tried to ping the publishing LabVIEW node with "rosnode ping", but it says: "ERROR: Communication with node[http://169.254.201.168:50436] failed!". I've already done everything suggested in other similar questions: I've edited the etc/hosts file on my PC adding the Ubuntu hostname and IP, and done the same on Ubuntu, adding my PC hostname and IP. I've turned off firewalls on Windows. I've exported ROS_MASTER_URI and ROS_IP. Nothing works. Any ideas?
Originally posted by Lol on ROS Answers with karma: 16 on 2020-05-13
Post score: 0
Answer:
I just solved it, if someone has the same problem, using "ROS for LabVIEW Software" from GitHub, entering "/user.lib/ROS for LabVIEW Software/ROS/Code/ROS_Tools/ROSTerminal.vi", you only have to change the node IP inside it!
Originally posted by Lol with karma: 16 on 2020-05-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by junwen on 2022-05-05:
Hello,I have a same question now.But I don't understand the way you solve. Can you explain in more detail?
Comment by beluga on 2022-10-26:
The source distribution should be set under build specs in the project in Lv. There is a pdf file under the "Help" folder of "ROS for LabView Software" folder (from github). Check "myRIO and roboRIO Help" named file.
Comment by Lol on 2022-12-07:
Go to this path or look for the ROSTerminal.vi file, and set MANUALLY your IP directly there. | {
"domain": "robotics.stackexchange",
"id": 34951,
"tags": "ros, ros-melodic, raspberrypi, rasbperrypi, multiple-machines"
} |
Equivalent circuit for an arbitrary receiving antenna | Question: This Wikipedia entry tells me that the Thevenin equivalent circuit for an arbitrary receiving antenna on which an electric field $E_b$ is incident is a voltage source $V_a$ in series with an impedance $R_a + j X_a$ where (I have re-arranged the terms a bit to frame my question...)
$$
V_a = E_b\;
\frac{\cos {\psi}}{\sqrt{\pi Z_0}} \;
\left( \lambda \sqrt{R_a G_a} \right)
$$
... given that $G_a$ is the directive gain of the antenna in the direction of incidence, and $\psi$ is the angle by which the electric field is 'misaligned' with the antenna.
The article does mention that this is derived from reciprocity, from which I assume that there should be some reasoning beginning with the Rayleigh-Carson theorem:
$$
\iiint_{V_1} \vec{J}_1 \cdot \vec{E}_2 \;dV
=
\iiint_{V_2} \vec{J}_2 \cdot \vec{E}_1 \;dV
$$
I am trying to understand how I can apply this, and in fact how I can approach an arbitrary antenna structure in general (I do understand how a dipole and a loop can be analyzed)
Unfortunately the article itself doesn't point out any sources where this relationship is derived, so I was wondering if anyone could point me to any textbook or paper where this derivation may be found?
My motivation is something like this -- the relation mentioned in the Wikipedia article is actually for a sinusoidal input -- and the frequency determines $\lambda$, $R_a$, and $G_a$ in the expression (and $X_a$ in the equivalent circuit). I am trying to understand if any insight can be obtained about the equivalent voltage source $V\left(t\right)$ given an arbitrary $E\left(t\right)$ -- maybe, for example, as a differential or integral equation? The $X_a$ can be replaced by frequency independent $C_a$ and $L_a$ in series -- and for the voltage source I would integrate over $\lambda$ -- but I don't know how to deal with $R_a\left(\lambda\right)$ (which is ideally only the radiation resistance) and $G_a\left(\lambda\right)$ for arbitrary antenna geometries. So I was hoping that the derivation would offer me some clues...
Update
OK, so it seems that I went on the wrong track here -- it is actually quite easy. I am answering my own question below.
Answer: A slight variant on your fine answer...
A reference is Ramo et al, Fields and Waves in Communication Electronics, chapter 12.
First, reciprocity: $Z_{21}=Z_{12}$ tells you that (assuming a conjugate-matched load):
$$ g_{dt} A_{er} = g_{dr} A_{et}$$
For both transmitting (subscript t) and receiving (r) antennas, $g_d$ is the antenna directional gain.
$A_{er}$ is the effective area of the receiving antenna, defined as the ratio of useful power removed from the receiving antenna $W_r$ to average power density $P_{av}$ in the incoming radiation.
Thus the ratio $g_d/A_e$ is the same for both transmitting and receiving antennas.
For large aperture antennas, it can be shown that the maximum possible gain satisfies:
$$ \frac{(g_d)_{max}}{A_e} = \frac{4 \pi}{\lambda^2} $$
For other geometries, $A_e$ is defined to give the same result. For example, for a Hertzian dipole, with a maximum directivity of 1.5:
$$ (A_e)_{max} = \frac{\lambda^2}{4 \pi} (g_d)_{max} = \frac{3}{8 \pi} \lambda^2 $$
Anyway, for the problem at hand, as you deduced, the useful power removed from the receiving antenna is:
$$ W_r = P_{av} A_{er} \text{, with the power density } P_{av} = \frac{E_b^2}{2 Z_o} , Z_o=377 \text{ ohms} $$
(Here, electric field and voltage are sinusoids measured as peak values.)
With a conjugate-matched load with real part $R_L$, equating load power dissipated with power delivered gives for the receiving antenna's Thevenin equivalent source voltage $V_a$:
$$\frac{(V_a/2)^2}{2 R_L} = \frac{E_b^2}{2 Z_o} A_{er} $$
$$ V_a = 2 \sqrt{A_{er}} \sqrt{\frac{R_L}{Z_o}} \, E_b $$
Substituting for $A_{er}$ from the reciprocity relation, the maximum voltage $V_{a,max}$ is:
$$ V_{a,max} = \sqrt{\frac{(g_{dr})_{max}}{\pi }} \sqrt{\frac{R_L}{Z_o}} \,\, \lambda E_b $$
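As a numerical cross-check (all values assumed purely for illustration: a Hertzian dipole with $g_d = 1.5$, $\lambda = 3$ m, a conjugate-matched 73-ohm load, and a 1 mV/m incident field), the combined formula above agrees with going through the effective area first:

```python
import math

Z0  = 377.0          # ohms, impedance of free space
g_d = 1.5            # max directivity of a Hertzian dipole
lam = 3.0            # wavelength in m (assumed: 100 MHz)
E_b = 1e-3           # incident field, V/m (assumed)
R_L = 73.0           # load resistance, ohms (assumed conjugate-matched)

# Route 1: via the effective area A_er = g_d * lam^2 / (4 pi)
A_er = g_d * lam**2 / (4 * math.pi)
V1 = 2 * math.sqrt(A_er) * math.sqrt(R_L / Z0) * E_b

# Route 2: the combined formula V = sqrt(g_d / pi) * sqrt(R_L / Z0) * lam * E_b
V2 = math.sqrt(g_d / math.pi) * math.sqrt(R_L / Z0) * lam * E_b

print(V1, V2)        # the two routes give the same source voltage
```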
I'm cautious about the $\cos \psi$ factor because beam patterns differ for different antennas. | {
"domain": "physics.stackexchange",
"id": 10341,
"tags": "electromagnetism, antennas"
} |
Creating msg messages | Question:
Can you please give instructions for creating a package from scratch, or rather a description of what you need to specify in CMakeLists.txt and package.xml?
I understood how to create my overlay and build a package (I build it using ament_cmake),
but I do not understand what needs to be specified in the above files when adding my own nodes and msg files.
I create packages using ros2 pkg create. Can you please describe this in more detail, as I did not find this information in the official manual?
Originally posted by D0l0RES on ROS Answers with karma: 23 on 2019-01-29
Post score: 0
Answer:
Take a look at https://github.com/ros2/demos/tree/master/pendulum_msgs for a minimal example
cmake_minimum_required(VERSION 3.5)
project(pendulum_msgs)
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
set(CMAKE_CXX_STANDARD 14)
endif()
if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(builtin_interfaces REQUIRED)
find_package(rosidl_default_generators REQUIRED)
rosidl_generate_interfaces(pendulum_msgs
"msg/JointState.msg"
"msg/JointCommand.msg"
"msg/RttestResults.msg"
DEPENDENCIES builtin_interfaces
)
ament_package()
and
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>pendulum_msgs</name>
<version>0.6.2</version>
<description>Custom messages for real-time pendulum control.</description>
<maintainer email="michael@openrobotics.org">Michael Carroll</maintainer>
<license>Apache License 2.0</license>
<author>Jackie Kay</author>
<author>Mikael Arguedas</author>
<buildtool_depend>ament_cmake</buildtool_depend>
<build_depend>builtin_interfaces</build_depend>
<build_depend>rosidl_default_generators</build_depend>
<exec_depend>builtin_interfaces</exec_depend>
<exec_depend>rosidl_default_runtime</exec_depend>
<member_of_group>rosidl_interface_packages</member_of_group>
<export>
<build_type>ament_cmake</build_type>
</export>
</package>
Originally posted by lucasw with karma: 8729 on 2019-01-29
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 32362,
"tags": "ros, ros2, nodes, msg, ros-crystal"
} |
unable to build gazebo package in ROS | Question:
While running rosmake gazebo, I get the following error:
/home/pragyan/tredext/biome_simulator/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc:105:49: error: ‘avcodec_decode_audio2’ was not declared in this scope
I tried commenting out the use of this function and building again, but the next error was related to OREG. Can anyone tell me the solution to this problem?
Thanks in advance :)
Originally posted by pntripathi9417 on ROS Answers with karma: 66 on 2011-11-11
Post score: 0
Original comments
Comment by karthik on 2011-11-11:
Could you give more details like OS, ROS version and details of error in preformatted text format.
Answer:
I just downloaded the latest version of Gazebo and it worked fine. Thanks for your reply :)
Originally posted by pntripathi9417 with karma: 66 on 2011-11-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7279,
"tags": "gazebo"
} |
Why was the challenger deep so small? | Question: According to James Cameron, the submarine barely had any space for him to move around etc., and as he approached the bottom of the Mariana Trench, the submarine shrank by a few inches (due to pressure).
Why was the Deepsea Challenger (the submersible, as opposed to the Challenger Deep itself) designed to be just big enough for James Cameron to fit in?
Answer: The design of a submarine used for such a purpose depends on the following (arranged from least important to most):
1. Cost - The amount used to build the submarine. With respect to Solar Mike's and Fred's comments, the larger the submarine, the more money you will need to fund the project.
2. Buoyancy - The larger the submarine, the harder it gets to make it remain submerged. Remember that the buoyant force is equal to the weight of the fluid displaced by the object; therefore, a bigger submarine requires bigger engines and thus more fuel. Fuel efficiency is very important, as the depth is about 11 km below MSL.
3. Design strength - The walls of the submarine shall be designed to resist 107,910 kPa of pressure (gauge), because pressure is directly proportional to depth. Stress = PD / 4t. That means that as the diameter becomes bigger, the thicker the vessel must be to resist the pressure. If the submarine becomes too thick, this goes back to item no. 1 (cost efficiency) and no. 2 (the submarine now becomes very heavy and will need a lot of power to propel upwards).
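Using the thin-walled sphere formula Stress = PD/4t quoted above, a back-of-the-envelope sketch of the required wall thickness looks like this (all numbers below are assumed purely for illustration):

```python
# Thin-walled spherical shell: stress = P * D / (4 * t)  =>  t = P * D / (4 * sigma)
P     = 107_910e3     # design pressure, Pa (about 11 km of seawater)
D     = 1.1           # pilot-sphere diameter, m (assumed)
sigma = 800e6         # allowable stress, Pa (assumed high-strength steel)

t = P * D / (4 * sigma)
print(f"required wall thickness ~ {t*1000:.0f} mm")
# Doubling the diameter doubles the required thickness (and far more than
# doubles the shell mass), which is the cost/weight argument above.
```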
Hope this helps. | {
"domain": "engineering.stackexchange",
"id": 1686,
"tags": "pressure"
} |
Is it computable to find the cardinality of intersection of two recursively enumerable sets? | Question: I am well aware that recursively enumerable sets (which are subsets of $\mathbb N$) are closed under intersection. What is more interesting is whether or not the cardinality of the intersection is computable/decidable?
That is, given two recursively enumerable sets $A$, $B$, is $|A \cap B|$ always decidable/computable (provided we know what $|A|$ and $|B|$ are, even if they are infinite)?
My first hunch is that this is equivalent to the halting problem for Turing Machines, but how would I show this?
Answer: Interesting question.
Claim: A finite recursively-enumerable set is decidable.
Proof: In fact, a finite set is decidable.
Claim: It is computable to find $|A\cap B|$ given two finite sets $A$ and $B$.
Proof: List all elements in $A$. For each one of them, check whether it is in $B$. Return the number of all elements that are found in $B$.
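The procedure in this proof is a one-liner in practice (a trivial sketch):

```python
def intersection_size(A, B):
    """List all elements of A; count those also found in B (both finite)."""
    B_set = set(B)
    return sum(1 for a in set(A) if a in B_set)

print(intersection_size([1, 2, 3, 5], [2, 3, 8]))  # prints 2
```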
Claim: It is not computable to find $|A\cap B|$ given two recursively-enumerable sets $A$ and $B$ such that $|A|$ is finite and $|B|$ is infinite.
Proof: For the sake of contradiction suppose algorithm $M$ finds $|A\cap B|$ given such $A$ and $B$. Let us solve the halting problem, as you expected.
Let $X$ be an arbitrary Turing machine and $w$ be an arbitrary input.
Let $M_1$ be a Turing machine that halts only when the input is $w$. Let $M_\infty$ be a Turing machine that always halts except when the input is $w$, on which it behaves the same as $X$ upon input $w$. Note that $|L(M_1)|=1$ is finite and $|L(M_\infty)|$ is infinite.
Apply $M$ to $L(M_1)$ and $L(M_\infty)$. If $|L(M_1)\cap L(M_\infty)|=1$, then $X$ halts upon $w$. Otherwise, $|L(M_1)\cap L(M_\infty)|=0$, and $X$ loops forever upon $w$. $\quad\checkmark$
Corollary: It is not computable to find $|A\cap B|$ given two recursively-enumerable sets $A$ and $B$ with known cardinalities. | {
"domain": "cs.stackexchange",
"id": 20388,
"tags": "computability, undecidability, decidability"
} |
how to access depth values form /camera/depth/image_raw? | Question:
Hey,
I am using ROS Hydro and Ubuntu 12.04 (LTS). I need to know how to convert the 8-bit depth data which I receive from the Kinect on the topic /camera/depth/image_raw into the depth of individual points in mm. I know that we have to use the encoding and is_bigendian values, but I don't know how to use them. Can someone please guide me on how to use them?
Note: I didn't want to use the point cloud data type because it was a bit complicated compared to /camera/depth/image_raw, and I only need depth data, so I thought this was better. If you feel that point clouds are better for accessing depth data, please guide me on how to obtain the depth data from a point cloud.
Thank you for your patience.
Originally posted by hunterkaushik on ROS Answers with karma: 3 on 2014-04-08
Post score: 0
Original comments
Comment by BennyRe on 2014-04-08:
Do you mean something like this http://answers.ros.org/question/141741/calculate-depth-from-point-cloud-xyz/?answer=141756#post-id-141756
Comment by hunterkaushik on 2014-04-09:
No, I want to know how to get the values x, y, z in the first place. Also, the time taken for my robot to read point cloud data is very high, like a 4 s difference between one input and the other. How do I rectify it? Please help me out.
Answer:
Maybe this Q&A helps: http://answers.ros.org/question/125241/how-to-use-depth-value-from-cameradepthimage/#127121
Originally posted by Wolf with karma: 7555 on 2014-04-10
This answer was ACCEPTED on the original site
Post score: 2
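As a supplement to the linked Q&A, here is how the raw bytes of a depth frame can be decoded with plain NumPy using the encoding and is_bigendian fields mentioned in the question (a sketch assuming the common 16UC1 encoding, i.e. unsigned 16-bit depths in millimetres; Kinect depth images are normally 16-bit rather than 8-bit):

```python
import struct
import numpy as np

def depth_image_to_mm(data, height, width, encoding="16UC1", is_bigendian=0):
    """Convert the raw byte buffer of a sensor_msgs/Image depth frame
    into a (height, width) array of depths in millimetres."""
    if encoding != "16UC1":
        raise ValueError("sketch only handles 16UC1 (unsigned 16-bit, mm)")
    dtype = np.dtype(np.uint16).newbyteorder(">" if is_bigendian else "<")
    return np.frombuffer(data, dtype=dtype).reshape(height, width)

# Tiny fabricated 1x2 frame: depths 500 mm and 1234 mm, little-endian.
buf = struct.pack("<2H", 500, 1234)
print(depth_image_to_mm(buf, 1, 2))   # [[ 500 1234]]
```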
Original comments
Comment by askkvn on 2021-06-25:
It works perfectly for the /camera/depth/image_raw rostopic.
Comment by askkvn on 2021-06-25:
My answer with complete code (c++) : https://answers.ros.org/question/141741/calculate-depth-from-point-cloud-xyz/?answer=381085#post-id-381085 | {
"domain": "robotics.stackexchange",
"id": 17591,
"tags": "ros, ros-hydro, depth-image, sensor-msgs#image"
} |
Is there any relationship between gauge field and spin connection? | Question: For a spinor on curved spacetime, the covariant derivative $D_\mu$ for fermionic fields is
$$D_\mu = \partial_\mu - \frac{i}{4} \omega_{\mu}^{ab} \sigma_{ab}$$
where $\omega_\mu^{ab}$ is the spin connection.
And the transformation of the spin connection is very similar to that of a gauge field.
So is there any relationship between them? If there is any good textbook or reference covering this area, please cite it. Thanks!
Answer: A gauge field for a particular group $G$ can be thought of as a connection, or a $G$ Lie algebra valued differential form. If we recall the Riemann curvature,
$$R(u,v)w = \left( \nabla_u \nabla_v - \nabla_v \nabla_u -\nabla_{[u,v]}\right)w$$
If $[u,v]=0$ the expression simplifies to the usual tensor in general relativity. Similarly, we may think of the field-strength of a gauge field as a curvature - it's essentially a commutator of covariant derivatives and attempts to quantify the effect of parallel transport on tensorial objects. For a $U(1)$ field,
$$F=\mathrm{d}A $$
with no additional terms, because the analogue of the $\nabla_{[u,v]}$ term vanishes as $U(1)$ is abelian and all structure constants of the group vanish. The relation to the curvature tensor becomes even clearer as we express the field-strength in explicit index notation,
$$F=\partial_\mu A_\nu - \partial_\nu A_\mu$$
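For a general non-abelian group, the analogue of the $\nabla_{[u,v]}$ term no longer vanishes, and the field strength picks up a commutator (a standard fact, stated here for completeness, up to coupling-constant conventions):
$$F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$$
which is the direct gauge-theory analogue of the curvature expression above.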
In gravitation, the gauge group is the group of diffeomorphisms $\mathrm{Diff}(M)$, infinitesimally these are vector fields which shift the coordinates; the binary operation of the group is the Lie bracket, and the metric changes by a Lie derivative, namely,
$$g_{ab}\to g_{ab}+\mathcal{L}_\xi g_{ab}$$
where $\xi$ is our vector field. The Lorentz group $SO(1,3)$ is a subgroup of the diffeomorphism group. In addition, the Killing vectors are those which produce no gauge perturbation of the metric, i.e.
$$\nabla_\mu X_\nu +\nabla_\nu X_\mu=0$$
The Killing vectors may close under commutation into the Lie algebra of a Lie group $G$; the generators $T_a$ of a Lie group $G$ allow us to define the structure constants,
$$[T_a,T_b]=f^{c}_{ab}T_c$$
where $f$ are the structure constants, modulo some constants according to convention. | {
"domain": "physics.stackexchange",
"id": 13646,
"tags": "quantum-field-theory, general-relativity, mathematical-physics, gauge-theory, spinors"
} |
A variant of #POSITIVE-2-DNF | Question: Let $G=(V,E)$ be an undirected graph. I call a valuation of $G$ a function $\nu: V \to E$ that maps every node $x \in V$ to an edge incident to $x$ (so that there are $\prod_{x \in V} d(x)$ valuations of $G$, where $d(x)$ is the degree of node $x$). I say that $\nu$ is satisfying if there exists an edge $e\in E$ such that both endpoints of $e$ are mapped to $e$ by $\nu$. I am interested in the following problem:
INPUT: An undirected graph $G$
OUTPUT: The number of satisfying valuations of $G$
My question: What is the complexity of this problem, and does it already have a name?
My guess is that it is #P-hard, even for bipartite graphs. A closely related #P-hard problem is #POSITIVE-2-DNF, or even [#PARTITIONED-POSITIVE-2-DNF][1]. Indeed, you can see an instance of #(PARTITIONED-)POSITIVE-2-DNF as a (bipartite) graph $G$, and you say that a valuation of $G$ either maps a node $x$ to all of its incident edges or to none of them. So my problem is somewhat a variant of #POSITIVE-2-DNF, but where valuations map variables to a single clause in which they occur, instead of mapping them to $0$ or $1$.
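For small instances the count can be checked by brute force (my own sketch, exponential in $|V|$, so useful only as a sanity check of the definitions):

```python
from itertools import product

def count_satisfying_valuations(vertices, edges):
    """Count valuations nu: V -> incident edges such that some edge e
    has both of its endpoints mapped to e.  Edges are frozensets {u, v}."""
    incident = {x: [e for e in edges if x in e] for x in vertices}
    count = 0
    for choice in product(*(incident[x] for x in vertices)):
        nu = dict(zip(vertices, choice))
        if any(all(nu[x] == e for x in e) for e in edges):
            count += 1
    return count

# Triangle K3: every vertex has degree 2, so 2^3 = 8 valuations in total,
# of which 6 are satisfying.
K3_V = ["a", "b", "c"]
K3_E = [frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]]
print(count_satisfying_valuations(K3_V, K3_E))   # prints 6
```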
==== UPDATE ====
As a3nm showed in his answer, the problem is hard on 3-regular graphs with multi-edges. My answer shows that the problem is also hard on $2$-$3$ regular simple graphs. There is the minor question of knowing whether it is hard on $3$-regular simple graphs. I don't really care about it, but I still leave it here for completeness.
Answer: Note: this reduction is written in the wrong direction, and when fixed it only works for multigraphs. See explanations in the edit to the original question.
I think the problem is #P-hard already on 3-regular graphs using the results of Cai, Lu and Xia, Holographic Reduction, Interpolation and Hardness, 2012. I will do this by showing the #P-hardness of counting the non-satisfying valuations of $G$, i.e., the valuations $\nu$ where for every edge $e$ at least one of the endpoints of $e$ is not mapped to $e$ by $\nu$. Indeed, counting this reduces in PTIME to counting the satisfying valuations of $G$ as you ask: this uses the fact that the total number of valuations (both satisfying and non-satisfying) can be computed in PTIME, using the closed-form formula in your question.
To show the hardness of counting non-satisfying valuations on 3-regular graphs, consider a 3-regular graph $G = (V, E)$, and construct the bipartite graph $G' = (V \cup E, W)$ between $V$ and $E$: it is a 2-3-regular graph in the sense that vertices in $V$ all have degree $3$ and vertices in $E$ all have degree $2$. Now, a non-satisfying valuation of $G$ in your sense amounts to picking one edge of $W$ incident to each vertex of $V$ in $G'$, so that we never pick the two edges of $W$ incident to a vertex of $E$. In other words, I'm claiming that counting the non-satisfying valuations of $G$ is exactly counting the subsets $W'$ of $W$ such that each vertex of $V$ has exactly one incident edge in $W'$ (= we pick one edge for each vertex of $V$), and each vertex of $E$ has 0 or 1 incident edges in $W'$ (= no edge of $E$ has both its endpoints selected).
If I'm not mistaken, this is precisely the problem #[1,1,0][0,1,0,0] in the notation of Valiant used in the paper I quote: note that there's a hopefully legible explanation in Appendix D of this paper (which, incidentally, we co-authored ;-P). Now looking at the table on page 23 of Cai, Lu and Xia, we see that #[1,1,0][0,1,0,0] is #P-hard.
As for the problem having an established name more palatable than #[1,1,0][0,1,0,0], I don't know, but maybe this can be one direction in which to look. | {
"domain": "cstheory.stackexchange",
"id": 4780,
"tags": "reference-request, counting-complexity"
} |
Root password for default Turtlebot install | Question:
I bought a Turtlebot assembled from Clearpath Robotics and it arrived today (yay!). It came with Ubuntu and ROS already configured, however, I don't know the root password.
Following the set up instructions, I logged in with turtlebot/turtlebot successfully. However, when needing root access (changing the Time Settings in the GUI, for example), I'm prompted for the root user password. It's not "turtlebot" or "root".
I'm asking the question here instead of from Clearpath's support in case others run into the same issue.
Originally posted by baalexander on ROS Answers with karma: 233 on 2011-08-12
Post score: 0
Answer:
The user "turtlebot" has sudo access. I was able to change the root password with:
sudo passwd
Originally posted by baalexander with karma: 233 on 2011-08-12
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 6404,
"tags": "ros, installation, turtlebot"
} |
Riddle about speed | Question: This stems from a riddle I read in a magazine perhaps 20 years ago so I apologise for the imprecise recollection.
A dog that can run infinitely fast is placed on an infinitely large flat surface and an alarm clock is tied to his tail. The dog has been trained to double the speed he is running when he hears the bell go off.
So this dog sets out running at, let's say, 5 m/s, and every 10 seconds the alarm goes off.
The question is: how fast is he running after two minutes?
Also can someone find the actual riddle? I have not been able to.
I have cloaked the rest of the question in case you want to answer that one first.
So the tricky part of this is of course that the dog supposedly stops
doubling his speed after he doubles past the speed of sound, outrunning the alarm
My question is, would he not intercept previous sound waves and start doubling again?
Thanks
Answer: Only a finite number of waves escaped the dog. So he will double a couple more times, and then he will reach his final speed.
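A toy event-driven version of such a simulation might look like this (my own sketch with explicit modeling assumptions: a ring is heard instantly at emission while the dog is subsonic, its forward wavefront then races ahead, and overtaking a stored wavefront later counts as hearing that bell once more; different choices about what counts as "hearing", which are exactly the riddle's ambiguity, give different final speeds):

```python
C = 343.0   # speed of sound in m/s

def final_speed(T=120.0, ring_every=10.0, v0=5.0, dt=1e-3):
    v, x, t = v0, 0.0, 0.0
    fronts = []                 # forward wavefronts currently ahead of the dog
    next_ring = ring_every
    while t < T:
        x += v * dt
        fronts = [f + C * dt for f in fronts]
        t += dt
        # overtaking an old wavefront counts as hearing the bell again
        still_ahead = []
        for f in fronts:
            if x >= f:
                v *= 2
            else:
                still_ahead.append(f)
        fronts = still_ahead
        if t >= next_ring:
            next_ring += ring_every
            if v < C:           # subsonic: the bell on the tail is heard at once
                v *= 2
                fronts.append(x)
            # supersonic: the new front immediately falls behind, never heard
    return v

print(final_speed())   # some power-of-two multiple of 5 m/s, well above Mach 1
```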
I wrote a small Python simulation for this. The output of the program is also on the same gist page.
To run it, download dog.py and call it with python dog.py, assuming you have a python interpreter on your machine.
It starts off with 6 waves catching the dog, and then the dogs catching the waves.
So I think there indeed is one final speed, the program suggests 20480 m/s. This is only true, if the dog can hear the faint and differently pitched sounds. | {
"domain": "physics.stackexchange",
"id": 15343,
"tags": "acoustics, speed"
} |
true three index tensors | Question: Is such a tensor, $T_{\alpha\beta\, \gamma}$, possible such that
$$T_{\alpha\beta\, \gamma}=T_{\beta\alpha\, \gamma}=-T_{\alpha\gamma\, \beta}=-T_{\gamma\beta\, \alpha}$$
That is, symmetric under exchange of the first two indices, but antisymmetric under exchange of the third index with either of the first two. If so, can it be built up by a linear combination and "multiplication" of 4-vectors?
Thanks,
Answer: Sadly your tensor would have to obey,
$$T_{abc} = -T_{cba} = -T_{bca} = T_{acb} = -T_{abc},$$
and therefore would have to be equal to zero. | {
"domain": "physics.stackexchange",
"id": 2929,
"tags": "tensor-calculus"
} |
Is the filter $1/(1-s)$ anti-causal? | Question: The filter with the response function
$$
H(s) = \frac{1}{1 - s}
$$
produces a positive phase shift and a negative group delay for all frequencies.
Is it anti-causal? Is there a way to deduce such information from the frequency response of the system?
Answer: From the transfer function alone it is generally impossible to say whether a system is causal or not. Only in combination with a given region of convergence (ROC), or, equivalently, with an assumption about stability, can we know for sure if a given system is causal or not.
The given transfer function has a pole at $s=1$. There are two possible time domain functions (impulse responses) that correspond to this transfer function. For the ROC to the right of the pole, i.e., $\operatorname{Re}\{s\}>1$, the system is causal but unstable. It is unstable because the ROC does not include the imaginary axis ($\omega$-axis). The other system that is described by the same transfer function is obtained by assuming that the ROC is to the left of the pole, i.e., $\operatorname{Re}\{s\}<1$. Now the imaginary axis is inside the ROC, so the corresponding filter is stable. However, it is anti-causal because the ROC is a left half-plane.
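Concretely, the anti-causal stable choice corresponds to the impulse response $h(t) = e^{t}u(-t)$, nonzero only for $t \le 0$ and decaying into the past. A quick numerical check (a sketch) confirms that its Fourier transform matches $H(j\omega) = 1/(1-j\omega)$:

```python
import numpy as np

# h(t) = e^t for t <= 0.  Substituting tau = -t, its Fourier transform is
#   H(jw) = integral_0^inf e^{-tau} e^{+j*w*tau} d tau = 1 / (1 - j*w),
# which is H(s) = 1/(1 - s) evaluated on the imaginary axis.
tau = np.linspace(0.0, 40.0, 400_001)    # e^{-40} makes the tail negligible
d = tau[1] - tau[0]

for w in [0.0, 0.5, 2.0]:
    integrand = np.exp(-tau) * np.exp(1j * w * tau)
    H_num = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * d   # trapezoid rule
    H_ref = 1.0 / (1.0 - 1j * w)
    assert abs(H_num - H_ref) < 1e-4
print("numerical transform of e^t u(-t) matches 1/(1 - jw)")
```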
By evaluating the transfer function on the imaginary axis (i.e., by plotting magnitude and phase), you imply that you're dealing with a stable system, i.e., you choose the ROC that includes the $\omega$-axis ($\operatorname{Re}\{s\}<1$), which means that the system you're looking at is indeed anti-causal.
"domain": "dsp.stackexchange",
"id": 10724,
"tags": "filters, continuous-signals, linear-systems, causality, group-delay"
} |
What makes us move in time? | Question: Time is considered to be a dimension, and we are moving at a certain rate in one direction in time. What force makes us move in time? I mean, it must be either time moving or us moving in time, so there has to be some force that 'pushes'/'pulls'? Was this 'time inertia' acquired during the Big Bang, or is nothing moving and I am just being silly?
Answer: Take a landscape. It can be modeled by a function f(x,y,z). If all the derivatives, df/dx, df/dy, df/dz are zero, the landscape is flat to infinity and nothing interesting exists in the landscape.
If one of the derivatives is different from zero, then we perceive a shape, and generally a landscape has a shape. As an example, suppose that we have a cone for this landscape, and that there exists a funny "life" that lives at a given distance from the center of the cone. All in one snapshot for us: birth is at the tip of the cone, middle age is at some distance, and death is where f is zero.
In a similar manner we can think of time for each of us as starting at birth making a four dimensional shape and ending at death. Another life form will see us as I explain in the example with the cone. Thus time as a dimension for human perception is a df/dt. If nothing changed, there would be an uninteresting landscape .
Now we have developed means of studying what at first is a fourth dimension time axis, because all matter exists and has a df/dt in the four dimensional space. We have concluded from our observations that time has an arrow, i.e. one cannot "move" in the negative direction, from observing how nature behaves thermodynamically and microscopically. Entropy always increases, and that defines an arrow of time independent of the human perception.
The "motion" of time is the motion of our perception. When we look at a three dimensional landscape we can perceive it from zero to infinity. The landscape is not moving. With time we are at a specific time=t_0 sequentially and the four dimensional landscape opens to our perception in slices ultimately controlled by the rate of increase in entropy in our surroundings. The "force" is the usual statistical mechanics and quantum statistical mechanics that rules the nature of matter. | {
"domain": "physics.stackexchange",
"id": 15722,
"tags": "time"
} |
Least present k-mers in the human genome | Question: What are the least present k-mers in the human genome at different sizes?
Starting with k=4 and going up in size until k=10, what are the k-mers least seen (or not at all) in the human genome? I am only interested in the reference human genome, so I am discounting SNPs/Indels in the population.
If this is not pre-calculated somewhere, what tool is recommended to use that starts with the GRCh38 reference as input?
Answer: You can use the Jellyfish software to calculate the k-mer profiles up to length 31.
From the instructions in the user guide:
The basic command to count all k-mers is as follows:
jellyfish count -m 21 -s 100M -t 10 -C reads.fasta
To compute the histogram of the k-mer occurrences, use the histo subcommand (see section 3.1):
jellyfish histo mer_counts.jf
To query the counts of a particular k-mer, use the query subcommand (see section 3.3):
jellyfish query mer_counts.jf AACGTTG
To output all the counts for all the k-mers in the file, use the dump subcommand (see section 3.2):
jellyfish dump mer_counts.jf > mer_counts_dumps.fa | {
"domain": "bioinformatics.stackexchange",
"id": 297,
"tags": "human-genome, sequence-analysis"
} |
Numeric quadrature vs summation of running costs in model predictive control | Question: Usually, an MPC consists of discretizing the optimal control problem in time using some numerical quadrature scheme. So the infinite-dimensional OCP reads
$$\begin{aligned}J(\vec{u}) &= \varphi\left(\vec{x}(t_i+t_{hor})\right) + \int_{t_i}^{t_i+t_ {hor}}l(\vec{x},\vec{u})\text{d}t \\
\text{s.t.}\\
\dot{\vec{x}}&=\vec{f}(\vec{x},\vec{u}), \quad \vec{x}(t_i)=\hat{\vec{x}}_i\end{aligned}$$
which can be transcribed to a static nonlinear program using, e.g., the trapezoidal rule for the integral and some other (or does it have to be the same?) integration scheme, e.g., the explicit Euler for the ODE, i.e.
$$\begin{aligned}
c(\vec w) &= \varphi\left(\vec{x}_N\right)+\frac{t_{hor}}{N}\sum_{k=0}^{N-1}\frac{l(\vec{x}_k,\vec{u}_k)+l(\vec{x}_{k+1},\vec{u}_{k+1})}{2}\\
\text{s.t.}\\
\vec{x}_{k+1}&=\vec{x}_{k}+\frac{t_{hor}}{N}\vec{f}(\vec{x}_k,\vec{u}_k), \qquad k=0,...,N-1\\
\vec{x}_0&=\vec{\hat{x}}_i
\end{aligned}$$
where $\vec{w}=[\vec{x}_0^T\ ... \ \vec{x}_N^T \ \vec{u}_0^T\ ... \ \vec{u}_N^T ]^T$ are the decision variables, $t_{hor}$ is the MPC horizon, and $N$ is the number of discretization steps.
But often it can be observed that instead of the above sum, the cost function simply reads
$$c(\vec w) = \varphi\left(\vec{x}_N\right)+ \sum_{k=0}^{N}l(\vec{x}_k,\vec{u}_k).$$
My question is, under what conditions is this possible? Can I always do this? I guess that it depends on whether the actual value of the integral matters or not, but this would imply that the choice of integration scheme does not change the optimal decision variables $\vec w^\ast$, since it amounts to just a scaling of the cost function. I could also imagine that it has to do with the function $l(\cdot)$ itself: maybe this only works if $l(\cdot)$ is at most quadratic in the decision variables, although I have seen this notation in nonlinear MPC.
EDIT 1: added ODE equality constraint and final costs
EDIT 2: After adding the final cost, I can see that one might need both discretizations to be the same (or none at all for the continuous integral). In the discretized version, $\varphi\left(\vec{x}_N\right)$ depends on the final state, which is the result of the ODE integration, which, in this example, was done using the explicit Euler scheme, while the continuous integral has been approximated using the trapezoidal rule, leading to different accuracies in the solution when they should probably be of the same order of accuracy. Therefore, just using the second sum amounts to adding up the values of the running cost term evaluated at the discretization steps, which, in turn, depend on the ODE discretization. So this seems to make more sense to me than having two different integration schemes.
Answer: MPC finds the optimal input $u^*$, which is the input that minimizes the cost function $J$ or $c$. This means that regardless of what the actual minimum value is, it is proven to be the minimum. As such, multiplying the cost function by any constant value does not change where this minimum lies, it just scales the value. Therefore, $\frac{t_{hor}}{N}$ does not affect the optimal input and can be eliminated. If there is a terminal cost present, the terminal state should also be computed using this trapezoidal rule and is also multiplied by this constant.
Secondly, taking the cost over the actual sampled values of $x_k$ or taking the average between two samples is nearly a matter of perspective. In fact, if you write out both sums, you discover that the difference is something like this:
$$\frac{1}{2}\sum_{k=0}^{N-1}\left[l(x_k,u_k) + l(x_{k+1}, u_{k+1})\right] = \frac{1}{2}\left(l(x_0,u_0)+ l(x_{N}, u_{N})\right) + \sum_{k=1}^{N-1}l(x_k,u_k)$$
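This identity is easy to verify numerically. Here is a quick sketch in Python, with random stage-cost values standing in for $l(x_k,u_k)$; the constant $t_{hor}/N$ factor is dropped since, as noted, it only rescales the cost:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
l = rng.random(N + 1)  # stage costs l(x_k, u_k) at the N+1 grid points

# trapezoidal sum (without the constant t_hor/N factor, which only rescales)
trap = 0.5 * sum(l[k] + l[k + 1] for k in range(N))

# plain sum with the endpoint correction from the identity
plain = 0.5 * (l[0] + l[N]) + sum(l[1:N])

assert np.isclose(trap, plain)
```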
Additionally, if this cost function happens to have an initial cost term and a terminal cost term, you can imagine it being fully equal to the other one (albeit having a different initial and terminal cost value). However, if the $l(.)$ in your cost function is a continuous-time one (I assume it is not really linear in the states and inputs), I should note that the standard cost function probably uses $l(.)$ in a discretized way.
Hope this got you somewhere, I have not encountered your trapezoidal approach ever, so that makes it kinda interesting. | {
"domain": "engineering.stackexchange",
"id": 3617,
"tags": "control-engineering, control-theory, optimal-control, nonlinear-control"
} |
Making filters based on checkboxes | Question: I have a scenario where I build filters depending on which checkboxes are checked. Right now I have only 2 checkboxes and I need to cover all scenarios with if/else conditionals like:
if (!chkProjectTechs.Checked && !chkTeamLeader.Checked)
{
foreach (DataRowView list in lstTech.SelectedItems)
{
var selectedEmpGuid = (Guid)list[0];
EmpGuid.Add(selectedEmpGuid);
}
parameters = ToDataTable(EmpGuid);
}
else if (!chkTeamLeader.Checked && chkProjectTechs.Checked)
{
foreach (var technician in projectTechnicians)
{
EmpGuid.Add(technician.EmpGuid);
}
parameters = ToDataTable(EmpGuid);
}
else if (!chkProjectTechs.Checked && chkTeamLeader.Checked)
{
foreach (var teamLeader in teamLeaders)
{
EmpGuid.Add(teamLeader.EmpGuid);
}
parameters = ToDataTable(EmpGuid);
}
else if (chkProjectTechs.Checked && chkTeamLeader.Checked)
{
foreach (var technician in projectTechnicians)
{
EmpGuid.Add(technician.EmpGuid);
}
parameters = ToDataTable(EmpGuid);
foreach (var teamLeader in teamLeaders)
{
EmpGuid.Add(teamLeader.EmpGuid);
}
parameters = ToDataTable(EmpGuid);
}
But I need to add more checkboxes. For each checkbox I add to my form, I need to add it to each conditional, and at the end of the day I will get very long code. Is there another way to do this?
Answer: It's not completely clear from the code, but assuming your intent is to display the combination of all selected sets, or the result of a separate list box if no sets are selected, then you can do it with one check per set.
Assuming EmpGuid is a List or similar that starts out empty (if not, modify as appropriate), and using some Linq to clean it up:
if(chkProjectTechs.Checked)
{
EmpGuid.AddRange(projectTechnicians.Select(x => x.EmpGuid));
}
if(chkTeamLeader.Checked)
{
EmpGuid.AddRange(teamLeaders.Select(x => x.EmpGuid));
}
// Other cases here ...
if(!EmpGuid.Any())
{
EmpGuid.AddRange(lstTech.SelectedItems.Select(row => (Guid)row[0]));
}
parameters = ToDataTable(EmpGuid); | {
"domain": "codereview.stackexchange",
"id": 31597,
"tags": "c#"
} |
How to prove Weyl spinors transform as a representation of Lorentz group? | Question: In my QFT lecture notes, it is written that the Lorentz group elements can be written as
\begin{equation*}
\Lambda = e^{i\vec{\theta}\cdot\vec{J} + i\vec{\eta}\cdot\vec{K}}
\end{equation*}
where $\Big\{\vec{J}, \vec{K}\Big\}$ are the generators of the Lorentz algebra.
After this, they write that Weyl spinors transform under a representation of the Lorentz group, as
\begin{equation*}
\phi' = e^{i\frac{\vec{\sigma}}{2}\cdot\left(\vec{\theta} - i\vec{\eta}\right)}\phi
\end{equation*}
Here, as $\Big\{\frac{\vec{\sigma}}{2}, -i\frac{\vec{\sigma}}{2}\Big\}$ indeed satisfies the Lorentz algebra commutation relations, it is indeed a representation of the Lorentz algebra $\Big\{\vec{J}, \vec{K}\Big\}$. However, not all representations of Lie algebras lead to a representation of the Lie group by exponentiation. So, for the case of the Weyl spinor, how can we show that the transformation rule is indeed a representation of the Lorentz group?
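For reference, the commutation-relation claim itself is easy to check numerically. A quick sketch with numpy (my own check, not from the notes), verifying the standard relations $[J_i,J_j]=i\epsilon_{ijk}J_k$, $[J_i,K_j]=i\epsilon_{ijk}K_k$, $[K_i,K_j]=-i\epsilon_{ijk}J_k$:

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
J = [m / 2 for m in s]        # rotation generators in the Weyl rep
K = [-1j * m / 2 for m in s]  # boost generators in the (left) Weyl rep

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1, -1

for i in range(3):
    for j in range(3):
        JJ = sum(eps[i, j, k] * J[k] for k in range(3))
        KK = sum(eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), 1j * JJ)   # [J_i, J_j] = i eps_ijk J_k
        assert np.allclose(comm(J[i], K[j]), 1j * KK)   # [J_i, K_j] = i eps_ijk K_k
        assert np.allclose(comm(K[i], K[j]), -1j * JJ)  # [K_i, K_j] = -i eps_ijk J_k
```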
Answer: TL;DR: To discuss non-projective group representations of spinors we need to go to the universal covering group.
In detail:
First define a (left) Weyl spinor $\phi$ to transform in the defining group representation of $SL(2,\mathbb{C})$, which is the double cover of the restricted Lorentz group $SO^+(1,3;\mathbb{R})$.
Only thereafter, we should identify the corresponding Lie algebra $sl(2,\mathbb{C})\cong so(1,3;\mathbb{R})$, the Lie algebra representation, and their 6 generators of boosts and rotations. | {
"domain": "physics.stackexchange",
"id": 83068,
"tags": "special-relativity, group-theory, representation-theory, lie-algebra, spinors"
} |
How can I get both tf and tf2 working in ROS | Question:
-- +++ processing catkin package: 'tf'
-- ==> add_subdirectory(geometry/tf)
CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package):
Could not find a configuration file for package angles.
Set angles_DIR to the directory containing a CMake configuration file for
angles. The file will have one of the following names:
anglesConfig.cmake
angles-config.cmake
Call Stack (most recent call first):
geometry/tf/CMakeLists.txt:7 (find_package)
-- tf: 1 messages, 1 services
-- +++ processing catkin package: 'tf_conversions'
-- ==> add_subdirectory(geometry/tf_conversions)
-- Eigen found (include: /usr/include/eigen3)
-- Configuring incomplete, errors occurred!
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
ubuntu@ubuntu-armhf:~/catkin_ws$ sudo aptitude install ros-hydro-tf
No candidate version found for ros-hydro-tf
I have a working version on a BBB where I see both tf and tf2* in /opt/ros/hydro/share. I have a non-working version with no tf in share. How do I build or install tf so it either gets to share or is built in a catkin workspace? I have geometry built in the catkin workspace, and it lists tf as being there.
Originally posted by DrBot on ROS Answers with karma: 147 on 2014-09-02
Post score: 0
Original comments
Comment by tfoote on 2014-09-02:
As of hydro tf uses tf2 under the hood. There should be no conflict between them.
Answer:
The cmake error you posted indicates that you don't have the angles package installed. Have you tried installing it?
sudo apt-get install ros-hydro-angles
More generally, if you have a workspace where the dependencies may be missing, you can usually install them with rosdep:
rosdep install --from-paths src -i
Originally posted by ahendrix with karma: 47576 on 2014-09-02
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by kaanoguzhan on 2020-05-27:
Thank you !!
I wish your more general option would be mentioned on more places !
I guess even in 6 years this important info is still missing from many places | {
"domain": "robotics.stackexchange",
"id": 19271,
"tags": "ros, tf2, transform"
} |
Why is Avogadro's hypothesis true? | Question: Why is the number of molecules in a standard volume of gas at a standard temperature and pressure a constant, regardless of the molecule's composition or weight?
Let's say I have a closed box full of a heavy gas, one meter on a side. It has a certain number of molecules inside. I want to be able to add a lighter gas to the box without changing its internal pressure (or temperature), so I connect a cylinder to the side of the box, which holds a frictionless piston for expansion (the piston has a constant force applied to it, to maintain a constant pressure inside the box and allow the volume of the gas to grow as new gas is introduced into the box).
Now I add Helium to the box. The piston moves back to maintain constant pressure, but why does the number of molecules in the box proper stay constant?
My mental image of this is that it would be like adding water to a bucket of marbles, and that, evidently, is wrong, but why is it wrong?
Answer: Dear Wade, your good question is easily answered if you consider "pressure" to be a derived quantity, and let us derive it.
An average molecule (or atom) of an ideal gas - and your proposition only holds for an ideal gas - has kinetic energy equal to
$$mv^2/2=3kT/2$$
It's because every degree of freedom carries $kT/2$ and there are three degrees of freedom in translations. Note that lighter molecules will move faster than the heavier ones.
How do we calculate the pressure? Well, put the molecule in a cubic box of volume $a^3$. It will hit the walls - the total surface of the cube is $6a^2$. We need to compute the average (over molecules) transfered momentum per unit time - this is called the force - and divide it by the area to compute the pressure of one molecule.
The force on the wall may be in $x,y,z$ directions. Let's consider the $x$ direction. The velocity of a particular molecule in the $x$-direction is $v_x$, so it takes $a/v_x$ of time to get from one side to the other side of the box in the $x$ direction. Once it gets to the other side, it bounces off the wall and changes the sign of the $x$-component of the velocity (and momentum). At this moment, the momentum $p_x$ clearly changes the sign - i.e. changes by $2p_x$.
So the change of momentum $p_x$ per unit time is
$$F_x = 2p_x / (a/v_x) = 2p_x/(am/p_x) = 2p_x^2/am$$
That's equal to $4/a$ times the kinetic energy $K_x$ in the $x$-direction. The pressure is
$$p = (F_x+F_y+F_z) / 6a^2 = 3\times 4/a\times K_x / 6a^2 = 4K/a / 6a^2 = 2K/3a^3 $$
But as I have said, the average kinetic (motion) energy per molecule is $K=3kT/2$ where $T$ is the absolute temperature, so the pressure is
$$p = 2K/3a^3 = kT/a^3 = kT/V$$
Note that we have just derived $pV=kT$ for one molecule or $pV=NkT$ for $N$ molecules - which is what we wanted. The mass of the molecule canceled: if the molecule is heavier, the average velocity at a given temperature is slower. But that doesn't matter - because the molecule has a greater momentum (because of the higher mass) which is compensated by the longer time it needs to get from one side to the other to transfer this momentum.
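The mass cancellation can be made concrete in a few lines of Python (a sketch; the helium/xenon masses and the box size are just illustrative numbers, and the bookkeeping uses one wall with a round-trip time $2a/v_x$, which gives the same $p=kT/a^3$ as the derivation above):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # K
a = 1e-6            # cube side, m

def pressure_one_molecule(m):
    # equipartition: K_x = kT/2, so the typical x-speed is v_x = sqrt(kT/m)
    v_x = math.sqrt(k_B * T / m)
    # one wall is hit every round trip of duration 2a/v_x, transferring 2*m*v_x
    F_x = (2 * m * v_x) / (2 * a / v_x)
    return F_x / a**2   # pressure = force on one wall / wall area

m_He = 6.6e-27   # kg, helium atom (illustrative)
m_Xe = 2.2e-25   # kg, xenon atom, much heavier (illustrative)
print(pressure_one_molecule(m_He), pressure_one_molecule(m_Xe))
# both equal k_B*T/a**3: the mass cancels
```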
So for a fixed temperature, the pressure is independent of the molecule type. | {
"domain": "physics.stackexchange",
"id": 422,
"tags": "statistical-mechanics, physical-chemistry, pressure"
} |
Question about centre of mass | Question:
Example 9.4 in the essential university physics: Jumbo, a 4.8-t elephant, is standing near one end of a 15-t railcar, which is at rest, all by itself, on a frictionless horizontal track. Jumbo walks 19m toward the other end of car. How far does the car move?
I have several questions about this example
If the system including jumbo and the car, then the centre of mass of this system is not moving since there is no external net force. So is the cm not moving relative to the ground?
Does the car move because the elephant exerts a friction to the car toward left?
If taking the car itself as the system, then does it have an external net force, which is the friction? If it does, then will it run to the left forever? If it will, how can the centre of mass not be moving?
Thanks ahead.
Answer: Yes, the centre of mass of the system (elephant and railcar) does not change (it stays at the same point with respect to the ground) as there is no net external force on it.
The elephant is able to move towards the right due to the friction between it and the surface of the railcar. The force due to friction acts towards the right on the elephant and towards the left on the railcar. This causes both of them to move. However, as these are internal forces for the system, there is no net change in momentum, and hence the position of the centre of mass remains the same (the elephant and the railcar will have equal and opposite momentum). Note that although the net momentum hasn't changed, the kinetic energy of the system has increased.
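Plugging in the numbers from the quoted example (a sketch; I am assuming the 19 m is Jumbo's displacement relative to the car, the usual reading of the problem):

```python
m_e = 4.8   # t, Jumbo
m_c = 15.0  # t, railcar
L = 19.0    # m, walked relative to the car (assumed)

# Centre of mass fixed: m_e * (L - x) = m_c * x, where x is the car's displacement
x = m_e * L / (m_e + m_c)
print(round(x, 2))  # 4.61 m
```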
If, on reaching the end of the railcar, the elephant jumps down and continues its rightward motion, both the elephant and the railcar will keep on moving endlessly (assuming that no other forces act on them, such as friction between them and the ground). The velocity of the centre of mass of a system is defined as $$\frac{\sum_{i=1}^n m_i\vec{v_i}}{\sum_{i=1}^nm_i}$$ which is nothing but the sum of the momenta of the individual bodies in the system divided by the total mass of the system. This is how we can mathematically show that the centre of mass is not moving. In this example, as the elephant comes to rest after reaching the end of the railcar, the velocity of the railcar also becomes zero. | {
"domain": "physics.stackexchange",
"id": 45029,
"tags": "newtonian-mechanics, forces"
} |
xsense ros kinetic package error-/usr/bin/ld: cannot find | Question:
Hi. I am trying to install the Xsens MTi 3 IMU driver. When I do catkin_make, I get an error:
[ 30%] Built target robot_pose_ekf_generate_messages_nodejs
[ 31%] Built target robot_pose_ekf_generate_messages_cpp
[ 34%] Built target amcl_sensors
/usr/bin/ld: cannot find -lxscontroller
/usr/bin/ld: cannot find -lxscommon
/usr/bin/ld: cannot find -lxstypes
collect2: error: ld returned 1 exit status
xsens_ros_mti_driver-master/CMakeFiles/xsens_mti_node.dir/build.make:169: recipe for target '/home/bfd/catkin_opt/devel/lib/xsens_mti_driver/xsens_mti_node' failed
make[2]: *** [/home/bfd/catkin_opt/devel/lib/xsens_mti_driver/xsens_mti_node] Error 1
CMakeFiles/Makefile2:13711: recipe for target 'xsens_ros_mti_driver-master/CMakeFiles/xsens_mti_node.dir/all' failed
make[1]: *** [xsens_ros_mti_driver-master/CMakeFiles/xsens_mti_node.dir/all] Error 2
These folders (xscontroller, xscommon, xstypes) are in the xsens package, but /usr/bin/ld cannot find them. How can we solve that?
Originally posted by bfdmetu on ROS Answers with karma: 85 on 2020-08-06
Post score: 0
Answer:
Check out the README in the package. it has some instructions.
Make sure the permissions are set to o+rw on your files and directories.
sudo chmod -R o+rw xsens_ros_mti_driver *
then build xspublic from your catkin workspace:
pushd src/xsens_ros_mti_driver/lib/xspublic && make && popd
after that run your catkin_make and it should work!
Originally posted by gogrady with karma: 56 on 2020-08-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bfdmetu on 2020-08-07:
It works. Thanks | {
"domain": "robotics.stackexchange",
"id": 35378,
"tags": "ros, sensor, ros-kinetic"
} |
How does dimensionality reduction occur in a Self-Organizing Map (SOM)? | Question: We have an n-dimensional input to a SOM and the output is a 2-D map of clusters. How does this happen?
Answer: SOM (Self-Organinizing Map) is a type of artificial neural network (ANN), introduced by the Finnish professor Teuvo Kohonen in the 1980s, that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map, and is therefore a method to do dimensionality reduction.
SOM produces a mapping from a multidimensional input space onto a lattice of clusters, i.e. neurons, in a way that preserves their topology, so that
neighboring neurons respond to “similar” input patterns.
It uses three basic processes:
Competition
Cooperation
Adaptation
In competition, each neuron is assigned a weight vector with the same
dimensionality d as the input space. Any given input pattern is compared to the weight vector of each neuron and the closest neuron is declared the winner.
In cooperation, the activation of the winning neuron is spread to neurons in its immediate neighborhood, and as a result this allows topologically close neurons to become sensitive to similar patterns. The size of the neighborhood is initially large, but shrinks over time, where an initially large neighborhood promotes a topology-preserving mapping and smaller neighborhoods allows neurons to specialize in the latter stages of training.
In adaptation, the winner neuron and its topological neighbors are adapted to make their weight vectors more similar to the input pattern that caused the activation.
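The three processes can be sketched in a minimal numpy implementation (my own illustrative code, with exponentially decaying learning rate and neighborhood width):

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.random((grid_w * grid_h, d))          # one weight vector per neuron
    # fixed 2-D grid coordinates of the neurons, used by the neighborhood function
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)
    t, t_max = 0, epochs * n
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * np.exp(-t / t_max)               # decaying learning rate
            sigma = sigma0 * np.exp(-t / t_max)         # shrinking neighborhood
            # competition: the best-matching unit is the neuron closest to x
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # cooperation: Gaussian neighborhood around the BMU on the grid
            g = np.exp(-np.linalg.norm(coords - coords[bmu], axis=1)**2 / (2 * sigma**2))
            # adaptation: pull weights toward x, scaled by the neighborhood
            weights += lr * g[:, None] * (x - weights)
            t += 1
    return weights
```

The returned `weights` are the "prototypes": each input is represented by its nearest prototype, which is the dimensionality reduction.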
So, the aim of a Self-Organizing Map (SOM) is to encode a large set of input vectors $\textbf{x}$ by finding a smaller set of "representatives" or "prototypes" or "code-book vectors" $\textbf{w}$ that provide a good approximation to the original input space. This is the basic idea of vector quantization theory, the motivation of which is dimensionality reduction or data compression. Performing a gradient-descent-style minimization on SOM's loss function (e.g. the sum of the Euclidean distances between the input sample and each neuron) does lead to the SOM weight update algorithm, which confirms that it is generating the best possible discrete low-dimensional approximation to the input space (at least assuming it does not get trapped in a local minimum of the error function).
To answer your question, you should take into consideration that dimensionality reduction takes place in fields that deal with large numbers of observations and/or large numbers of variables. Thus, SOM helps find good "prototypes", in a way that each input pattern belongs to exactly one of them. As a result, the training instances are mapped to the training "prototypes", and the whole training set is mapped to a new one with fewer instances.
In addition, the "prototypes" neurons resulted by SOM can often be used as good centers in RBF networks or to classify patterns with the LVQ family algorithms. | {
"domain": "ai.stackexchange",
"id": 2347,
"tags": "machine-learning, unsupervised-learning, self-organizing-map"
} |
How can I better the readability of inline test data? | Question: [TestCase(new[] { 1, 2 }, 1, Result = 2)]
[TestCase(new[] { 1, 2 }, 2, Result = 1)]
[TestCase(new[] { 1, 2, 3 }, 2, Result = 2)]
[TestCase(new[] { 1, 2, 3, 4 }, 2, Result = 2)]
[TestCase(new[] { 1, 2, 3, 4 }, 10, Result = 1)]
public int TotalPageCountIsAccurate(int[] testSequence, int pageSize)
{
var pagedList = new PagedList<int>(testSequence, 0, pageSize);
return pagedList.TotalPageCount;
}
I wrote this test about five minutes ago and I can barely understand it - I dread to think how hard it will be to read in a few weeks. How can I refactor the TestCase attribute to improve readability?
I thought about using named arguments or (maybe) introducing some well-named fields, but neither of these ideas appears to be feasible.
Answer: If I'm right you could do at least two things:
Generate the array and pass only the number of items to the test method.
Rename testSequence to something more descriptive, like items, elements etc.
[TestCase(2, 1, Result = 2)]
[TestCase(2, 2, Result = 1)]
[TestCase(3, 2, Result = 2)]
[TestCase(4, 2, Result = 2)]
[TestCase(4, 10, Result = 1)]
public int TotalPageCountIsAccurate(int itemCount, int pageSize)
{
// generates the item array with itemCount items
var items = Enumerable.Range(0, itemCount).ToArray();
var pagedList = new PagedList<int>(items, 0, pageSize);
return pagedList.TotalPageCount;
}
(I can't check whether it is valid C# or not, but I hope you get the idea.)
I would also create a test with an empty array. | {
"domain": "codereview.stackexchange",
"id": 8358,
"tags": "c#, unit-testing, nunit"
} |
Why does strength of bond matter if melting only affects the intermolecular forces of the molecules | Question: So, as I've been told, when a substance melts, the actual bonds of the substance aren't broken, only the IMFs (intermolecular forces). So why do the melting points of metals decrease going down a group, when they should be increasing: more shells exist, thus the ions in the lattice will be larger, so there are more IMFs, and therefore it should take more energy to overcome them? Instead, I'm being told that since the metallic bonds get weaker due to less attraction between the electron sea and the positive ions, the melting point decreases.
Could someone please help me figure out what I'm missing here?
Answer: Depending whether you have a molecular, metallic or ionic compound, the independently moving particles in the liquid state are molecules, atoms and ions, respectively.
So for a molecular solid, you have to break the intermolecular forces to turn it into a liquid (or make them non-persistent so that the interaction partners can change over time).
For a metallic solid, you have to break the metal bonds intermittently so that the atoms can move in the liquid.
For an ionic solid, you have to break the ionic interactions intermittently so that the ions can move in the liquid.
Finally, for a network covalent solid, you would have to break covalent bonds to liquify it. For this reason, there is no liquid state of diamonds.
So why is it that metal groups decrease in melting points going down [...]
According to Jim Clark:
The strength of a metallic bond depends on three things:
The number of electrons that become delocalized from the metal
The charge of the cation (metal).
The size of the cation. | {
"domain": "chemistry.stackexchange",
"id": 13255,
"tags": "inorganic-chemistry, molecular-structure, melting-point"
} |
Shortcut to find $\hat{p}^2$ expectation value | Question: I have been going through several calculations where I am asked to calculate $\langle p^2 \rangle$ and the task is proving to be pretty tedious. Does anyone know of a shortcut for this? Such as with $\langle p \rangle$ where:
$$ \langle p \rangle = m\frac{d\langle x\rangle}{dt} $$
I have seen a few specific examples where it can be done knowing the energy eigenvalues and the potential...
$$ \frac{\langle p^2 \rangle}{2m} + \langle V \rangle = E_n $$
But I was hoping for something more fruitful than this.
I guess another way would be to utilize the Hermiticity of $\hat{p}$.
Answer: I haven't found a really good shortcut, but the following can make the integration much simpler in some cases. The time independent Schrodinger Equation:
$$ \frac{\hat{p}^2}{2m}\Psi+V\Psi=E\Psi $$
$$ \frac{\hat{p}^2}{2m}\Psi=(E-V)\Psi $$
$$ \hat{p}^2\Psi=2m(E-V)\Psi $$
So....
$$ \langle p^2\rangle = \int\Psi^*\hat{p}^2\Psi\, dx = \int\Psi^*[2m(E-V)\Psi]\, dx $$
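As a sanity check, here is a sketch with sympy applying the trick to the harmonic-oscillator ground state (my choice of example), where both routes should give $\langle p^2\rangle = m\hbar\omega/2$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, w, hbar = sp.symbols('m omega hbar', positive=True)

# harmonic-oscillator ground state, its energy, and the potential
psi = (m * w / (sp.pi * hbar))**sp.Rational(1, 4) * sp.exp(-m * w * x**2 / (2 * hbar))
E = hbar * w / 2
V = m * w**2 * x**2 / 2

# direct route: <p^2> = integral of psi * (-hbar^2 d^2/dx^2) psi  (psi is real here)
p2_direct = sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))

# shortcut: <p^2> = 2m <E - V>
p2_trick = sp.integrate(psi * 2 * m * (E - V) * psi, (x, -sp.oo, sp.oo))

print(sp.simplify(p2_direct), sp.simplify(p2_trick))  # both m*hbar*omega/2
```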
I thought that this was a nice little trick. Hopefully someone can get use out of it. | {
"domain": "physics.stackexchange",
"id": 14171,
"tags": "quantum-mechanics, operators, momentum"
} |
Sampling a distribution (from a galaxy model) | Question: I am reading the following article:
http://www.kof.zcu.cz/st/dis/schwarzmeier/galaxy_models.html
and am currently at section 5.6 (positions of bodies in a galaxy).
I am trying to redo the simulations myself in Python, but I have a questions regarding sampling of the distribution.
Given a distribution function (Hernquist distribution):
$$\rho(r)=\dfrac{M}{2\pi}\cdot \dfrac{a}{r(a+r)^3}$$
the article states that to simulate the distribution, one has to calculate the mass within a circle of radius $r$ like follows:
$$m(r) = \int_{0}^{r} 4 \pi r'^2 \rho(r') dr'$$
which is the cumulative mass distribution function.
The article states that this formula represents the PDF. However, looking at the shape, this appears to me to be a CDF: approaching infinity, the function approaches 1.0, which is for me a clear indication of a CDF.
To sample this distribution, the article cites the Von Neumann method, where one has to generate an $r$ and an $m$ value, scale them accordingly, and check whether or not they fall below the $m(r)$ graph. If they do, they are accepted; otherwise they are rejected.
Am I completely off by thinking this is wrong? If I do this, I end up with the majority of stars at the higher radii.
I have the feeling I am sampling a CDF here instead of a PDF. To get accurate results (e.g. having the majority of the stars in the center), it seems I have to perform the Von Neumann method with the $\rho(r)$ function instead.
I am unable to contact the author of the article, so that's why I am asking here.
Answer: The article looks indeed wrong. In fact, there are two mistakes.
First, you're right that the acceptance-rejection method has to be applied to $\rho(r)$, and not to $m(r)$. To understand how this idea works, suppose we want to generate a one-dimensional normalized distribution function $p(y)$. Now, let's assume we can rewrite this distribution function in terms of a variable $x$, such that it takes the form of a uniform distribution. That is,
$$
p(x) =
\begin{cases}
1& \text{for $0\leqslant x \leqslant 1,$}\\
0& \text{elsewhere}.
\end{cases}
$$
Given $p(y)$, what is $x$? We have the Jacobian transformation
$$
p(y)dy = p(x(y))\left|\frac{dx}{dy}\right|dy = \left|\frac{dx}{dy}\right|dy,
$$
which implies
$$
p(y) = \frac{dx}{dy},
$$
assuming that $x(y)$ is an increasing function. Thus
$$
x = \int_0^y p(y')dy' = F(y).
$$
In other words, the integral of $p(y)$ (or equivalently, the area under the curve) follows a uniform distribution. With this in mind, there are essentially two ways to perform a Monte-Carlo simulation.
The first way is the acceptance-rejection method: plot the curve $p(y)$ and uniformly generate a pair of numbers $(a,b)$ in the interval $([0,y_\max],[0,p_\max])$, where $y_\max$ and $p_\max$ are the upper bounds of $y$ and $p(y)$. If the coordinate $(a,b)$ lies under the curve $p(y)$, accept it; otherwise, reject it. If the coordinate is accepted, $y=a$ is the generated point.
There are major drawbacks to this method: $y_\max$ and $p_\max$ can be infinite, so one would need a cut-off. And if $p(y)$ has a sharp peak, one ends up rejecting a lot of points.
A far more efficient method is to uniformly generate $x$, and calculate the corresponding $y$ by inverting $x=F(y)$:
$$
y = F^{-1}(x).
$$
This automatically fills up the area under the curve, without rejecting points.
If the calculation of $F^{-1}(x)$ is too numerically involved, one can use a combination of both methods: introduce another (simpler) function $f(y)$ that lies everywhere above $p(y)$. Apply the inversion method to $f(y)$, generating a point $y$. Then uniformly generate a value $b$ in the interval $[0,f(y)]$. If $b\leqslant p(y)$, accept $y$; otherwise, reject it.
Now, consider the Hernquist distribution. Since it has a cusp at the origin, and the cumulative mass $m(r)$ is a simple function
$$
m(r) = M\frac{r^2}{(a+r)^2},
$$
I'd definitely recommend the inversion method. But there is an important caveat here, and that's the second mistake in the article: $\rho(r)$ is not really a one-dimensional distribution. Instead it is a distribution in 3-dimensional space, and it is only a function of one variable due to spherical symmetry. In order to apply the Monte-Carlo method, we have to express $\rho$ as a truly one-dimensional distribution function, which we can do by expressing it in terms of the volume
$$
y = \frac{4\pi}{3}r^3.
$$
Now we have
$$
p(y) = \rho(y) = \frac{M}{2\pi}\frac{a\,(3y/4\pi)^{-1/3}}{\left[a + (3y/4\pi)^{1/3}\right]^3},\\
F(y) = m(y) = \int_0^y\rho(y')dy' = M\frac{(3y/4\pi)^{2/3}}{\left[a + (3y/4\pi)^{1/3}\right]^2}.
$$
Once we generated a point $y$, the corresponding radius is
$$r=\left(\frac{3y}{4\pi}\right)^{1/3}.$$
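In code, the inversion method for the Hernquist profile boils down to a few lines (a sketch: since $m(r)/M = r^2/(a+r)^2$, uniformly sampling the enclosed-mass fraction $u$ and solving for $r$ gives $r = a\sqrt{u}/(1-\sqrt{u})$, which is the same as inverting $F(y)$ and then applying the radius formula):

```python
import numpy as np

def sample_hernquist_radii(n, a=1.0, seed=0):
    """Inverse-CDF sampling of Hernquist radii: r = a*sqrt(u)/(1 - sqrt(u))."""
    u = np.random.default_rng(seed).random(n)  # u in [0, 1), so r stays finite
    s = np.sqrt(u)
    return a * s / (1.0 - s)

r = sample_hernquist_radii(100_000, a=1.0)
# the fraction of particles inside r = a should match m(a)/M = 1/4
print(np.mean(r <= 1.0))  # close to 0.25
```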
There is an important consequence: there are likely more particles at large radii than around the centre, even though $\rho(r)$ is much larger at small radii. The reason is that particles between two radii $r$ and $(r+\Delta r)$ occupy a shell with volume
$$V = \frac{4\pi}{3}\left[(r+\Delta r)^3-r^3\right].$$
The larger the radius $r$, the larger the volume of the shell, which means you need more particles to fill it and get $\rho(r)$. This is obvious in the case of a constant density, but it is also true for general densities. | {
"domain": "physics.stackexchange",
"id": 11283,
"tags": "computational-physics, galaxies, models"
} |
Please identify this moth from Bangladesh | Question: Sorry for the bad resolution of the picture. It was taken on the JU campus, Savar.
Answer: I think I have found the identification. It is a planthopper from the family Ricaniidae, most likely Ricanula stigmatica.
Image source: https://en.m.wikipedia.org/wiki/Ricanula_stigmatica | {
"domain": "biology.stackexchange",
"id": 7026,
"tags": "species-identification, entomology, lepidoptera"
} |
Does a Helmholtz resonator have a standing wave inside the volume at the resonant frequency? | Question: If one were to excite a resonator at the resonant frequency, and you were able to measure the pressure at all points within the volume at a particular moment in time, would the pressure be uniform at all points within the volume, or would there be points where the pressure was higher than at others? If it is not uniform, and you graphed the pressure within the volume, would it appear to be a standing wave within the vessel?
To me it seems like it would need to be a pressure wave and it seems like it would be a standing wave but I am not a physicist so I thought I would ask one!
Answer: There is no standing wave (with nodes and antinodes). The wavelength does not fit inside the cavity. It is a mass-spring system, where the inertia is in the neck of the cavity and the springiness is provided by the air volume.
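Consistent with the mass-spring picture, the resonance frequency follows the standard formula $f = \frac{c}{2\pi}\sqrt{A/(VL)}$. A sketch (the bottle-like numbers are purely illustrative, and neck end-corrections are ignored):

```python
import math

def helmholtz_frequency(c, neck_area, neck_length, volume):
    """f = (c / (2*pi)) * sqrt(A / (V * L)): the air plug in the neck is the
    mass, the enclosed air volume is the spring (end corrections ignored)."""
    return c / (2 * math.pi) * math.sqrt(neck_area / (volume * neck_length))

# a wine-bottle-like geometry (illustrative numbers only)
f = helmholtz_frequency(c=343.0, neck_area=math.pi * 0.01**2,
                        neck_length=0.05, volume=7.5e-4)
print(round(f))  # about 158 Hz
```

The resulting wavelength (a couple of metres here) is far larger than the cavity, which is why no standing-wave pattern fits inside the volume.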
http://hyperphysics.phy-astr.gsu.edu/hbase/Waves/cavity.html | {
"domain": "physics.stackexchange",
"id": 50788,
"tags": "acoustics"
} |
Lamport Timestamps: When to Update Counters | Question: In the timepiece (excuse the pun) that is Time, Clocks and the Ordering of Events, Lamport describes the logical clock algorithm as the following:
Each process $P_i$ increments $C_i$ between any two successive events.
If event $a$ is the sending of a message $m$ by process $P_i$, then the message $m$ contains a timestamp $T_m = C_i(a)$.
Upon receiving a message $m$, process $P_i$ sets $C_i$ greater than or equal to its present value and greater than $T_m$.
However, the algorithm as it is described on Wikipedia (and other websites) is a little different:
A process increments its counter before each event in that process.
When a process sends a message, it includes its counter value with the message.
On receiving a message, the receiver process sets its counter to be greater than the maximum of its own value and the received value before it considers the message received.
This leaves me with the following questions:
Should we increment the counter before sending a message, since the sending of a message is itself an event? This incremented timestamp is the value that is sent with the message.
When a message is received by process $P_i$, Lamport states that $P_i$'s logical clock should be set to $\max(T_m + 1, C_i)$. However, the Wikipedia article says that this should be $\max(T_m, C_i) + 1$. Is Wikipedia wrong?
Answer: Considering that any local action (e.g. increasing a counter) done by a process is an event, the Wikipedia sentence
"A process increments its counter before each event in that process."
does not make any sense to me. Let me try to answer your questions:
Should we increment the counter before sending a message, as the sending of a message is itself an event? This incremented timestamp is the value that is sent with the message.
Both actions (i.e. increasing the counter and sending the message) happen atomically in the same event. The same is true when a message is received: The receive event already includes the counter update.
When a message is received by process Pi Lamport states that Pi logical clock should be set to max(Tm+1,Ci). However, the Wikipedia article says that this should be max(Tm,Ci)+1. Is Wikipedia wrong?
Note that, according to Lamport's paper, the logical clocks must satisfy the following property:
If $a$ happens before $b$ then $C(a) < C(b)$.
In particular, this means that clock values (of events) at the same process must be strictly increasing.
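These rules can be sketched in a few lines (class and method names here are illustrative, not from any library):

```python
# Minimal sketch of a Lamport clock following the rules discussed above.
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        # tick once per event at this process
        self.time += 1
        return self.time

    def send(self):
        # sending is itself an event: tick, then attach the new timestamp
        self.time += 1
        return self.time

    def receive(self, t_msg):
        # max(Tm, Ci) + 1 keeps clock values strictly increasing per process
        self.time = max(t_msg, self.time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
tm = p.send()          # p's clock: 1
q.local_event()        # q's clock: 1
print(q.receive(tm))   # max(1, 1) + 1 = 2
```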
Therefore, the correct update rule is $\max(T_m, C_i)+1$, as otherwise two subsequent events at the same process might have the same value. | {
"domain": "cs.stackexchange",
"id": 3480,
"tags": "distributed-systems, synchronization, clocks"
} |
Pandas merge a "%Y%M%D" date column with a "%H:%M:%S" time column | Question: I have a dataframe with a column composed of date objects and a column composed of time objects.
I have to merge the two columns.
Personally, I think the following solution is rather ugly.
Why do I have to cast to str?
I crafted my solution based on this answer
#importing all the necessary libraries
import pandas as pd
import datetime
#I have only to create a Minimal Reproducible Example
time1 = datetime.time(3,45,12)
time2 = datetime.time(3,49,12)
date1 = datetime.datetime(2020, 5, 17)
date2 = datetime.datetime(2021, 5, 17)
date_dict= {"time1":[time1,time2],"date1":[date1,date2]}
df=pd.DataFrame(date_dict)
df["TimeMerge"] = pd.to_datetime(df.date1.astype(str)+' '+df.time1.astype(str))
Answer: We can let pandas handle this for us and use DataFrame.apply and datetime.datetime.combine like this:
df["TimeMerge"] = df.apply(lambda row: datetime.datetime.combine(row.date1, row.time1), axis=1)
Although the following approach is more explicit and might therefore be more readable if you're not familiar with DataFrame.apply, I would strongly recommend the first approach.
You could also manually map datetime.datetime.combine over a zip object of date1 and time1:
def combine_date_time(d_t: tuple) -> datetime.datetime:
    return datetime.datetime.combine(*d_t)
df["TimeMerge"] = pd.Series(map(combine_date_time, zip(df.date1, df.time1)))
You can also inline it as an anonymous lambda function:
df["TimeMerge"] = pd.Series(map(lambda d_t: datetime.datetime.combine(*d_t), zip(df.date1, df.time1)))
This is handy for simple operations, but I would advise against this one-liner in this case.
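As a side note, a vectorized alternative (a sketch — it still goes through strings once, but avoids per-row Python calls) is to represent the times as timedeltas and add them to the dates:

```python
import datetime
import pandas as pd

# Rebuild the question's example frame.
time1 = datetime.time(3, 45, 12)
time2 = datetime.time(3, 49, 12)
date1 = datetime.datetime(2020, 5, 17)
date2 = datetime.datetime(2021, 5, 17)
df = pd.DataFrame({"time1": [time1, time2], "date1": [date1, date2]})

# str() of a datetime.time gives "HH:MM:SS", which to_timedelta parses,
# and adding a timedelta Series to a datetime Series is vectorized.
df["TimeMerge"] = df["date1"] + pd.to_timedelta(df["time1"].astype(str))
print(df["TimeMerge"].iloc[0])  # 2020-05-17 03:45:12
```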
By the way, the answer you were looking for can also be found under the question you linked. | {
"domain": "codereview.stackexchange",
"id": 41305,
"tags": "python, datetime, pandas"
} |
Parabolic or Hyperbolic? | Question: How can astronomers find the difference between a parabolic and a hyperbolic comet ? What are the criteria that helps them distinguish these ? Can a parabolic comet switch over to become a hyperbolic one and the vice-versa ?
Answer: The orbits of comets are distinguished, as you note, by their eccentricities or equivalently by their orbital energies or orbital velocities.
If the comet is approaching or receding from the Sun at less than escape velocity, it is on an elliptical trajectory and will end up in a regular elliptic orbit, or just crashing into the Sun. Elliptical comets, which make up the majority of known comets, are divided into "short period" (<200 yrs) and "long period" (>200 yrs). The eccentricity of an elliptical comet is on average less than 1, but can have instantaneous values > 1.
If the comet is approaching or receding from the Sun at escape velocity, it is on a parabolic trajectory. It is in the process of being either captured by or ejected from the Sun. As @Vince Mulhollon indicates in his answer, an exactly parabolic orbit is a fleeting state.
If the comet is approaching or receding from the Sun at greater than escape velocity, it is on a hyperbolic trajectory and won't be back. There is some dispute as to whether hyperbolic comets originating outside the solar system have truly been observed, and if we really know the periods of certain extremely long-term elliptical comets.
As you ask, a parabolic comet can "switch over" to being a hyperbolic comet (and vice versa) if something adds or removes energy from the orbit. For example, gravitational interaction with another solar system body could add energy to a parabolic orbit, and eject the comet from the solar system along a hyperbolic trajectory.
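The escape-speed comparisons above can be turned into a small classification sketch (the constant and tolerance are illustrative; a real classification would use full orbital elements):

```python
import math

# Classify a heliocentric trajectory from one state vector by comparing
# speed with the local escape speed (SI units; mu = GM_sun).
MU_SUN = 1.32712440018e20  # m^3 s^-2

def classify(r, v, mu=MU_SUN, tol=1e-9):
    """Specific orbital energy eps = v^2/2 - mu/r decides the conic:
    eps < 0 elliptic, eps ~ 0 parabolic, eps > 0 hyperbolic."""
    eps = 0.5 * v * v - mu / r
    if abs(eps) < tol * mu / r:   # tolerance: exactly parabolic is fleeting
        return "parabolic"
    return "elliptic" if eps < 0 else "hyperbolic"

r = 1.496e11                        # 1 au in metres
v_esc = math.sqrt(2 * MU_SUN / r)   # ~42 km/s at 1 au
print(classify(r, 0.9 * v_esc))     # elliptic
print(classify(r, v_esc))           # parabolic
print(classify(r, 1.1 * v_esc))     # hyperbolic
```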
As to how to tell the difference, just take a set of accurate enough position measurements spaced far enough apart, and calculate the orbital elements. Software will help. | {
"domain": "physics.stackexchange",
"id": 3720,
"tags": "cosmology, astrophysics, solar-system"
} |
Why is skidding considered kinetic friction when braking on a car? | Question: So according to my textbook, applying the brakes hard enough to essentially lock up the wheels causes skidding, which is kinetic friction, so the braking distance is longer. However, if the brakes were "pumped" the car would not skid and it would be considered static friction, so the braking distance is shorter. I UNDERSTAND that the coefficient of static friction has a greater magnitude than kinetic friction, so it makes sense as to why static friction gives a greater deceleration and shorter braking distance. HOWEVER, I do not understand why skidding is considered kinetic friction and not static friction, and why "pumping" the brakes causes static friction. Can someone please explain how this is so?
Answer: Kinetic friction is all about trying to stop one surface from skidding against another surface. When you have two things such as the wheel and the ground sliding against each other, this is kinetic friction. However, when the wheels are rotating, there is static friction between the ground and the wheel. This is because the wheel is rolling and not sliding against the ground. A point on the wheel only contacts the ground for a very very small instant per revolution, so this is static because there isn't sliding between the ground and the wheel. | {
"domain": "physics.stackexchange",
"id": 47707,
"tags": "friction, statics"
} |
Can we create a small black hole? | Question: Suppose we create very large spherical body by using gamma rays generator and they will concentrate on a single point at the centre of sphere.We will place this spherical body thousands of kms above the sun so that it can draw energy from sun.Can the centre of the sphere create a Mini Black hole by the energy from the concentration of gamma rays?
[Just applying my 14 yrs old brain..if im wrong then please correct me...]
Answer: Yes, taking any object and decreasing its volume, while keeping the mass constant, will eventually result in creating a black hole - the question is how long it is going to "live", because there is such a thing as evaporation due to Hawking radiation. For example, turning Earth into a black hole requires squeezing it down into a ball with a radius of about $9\times 10^{-3}$ meters (the Schwarzschild radius $r_s = 2GM/c^2$). | {
"domain": "physics.stackexchange",
"id": 8911,
"tags": "black-holes, gamma-rays"
} |
Rotational motion centre of mass | Question: When any body like a pen is given a gentle hit why does it rotate about its center of mass?
I hit my pen on its left end and it executed circular motion about its center of mass. Why is that?
Answer: It doesn't have to rotate about its COM during the push. But once you let go there is no net force acting on the body. Therefore, the COM cannot be accelerating. The solution to
$\ddot{\mathbf x}=\mathbf a=0$ is $\mathbf x(t)=\mathbf v_0t+\mathbf x_0$, where $\mathbf x_0$ and $\mathbf v_0$ are the position and velocity respectively of the center of mass when the push stops. So the COM will just move in a straight line while the other points of the body rotate about the COM. | {
"domain": "physics.stackexchange",
"id": 75221,
"tags": "newtonian-mechanics, rotational-dynamics, torque"
} |
Prove the existence of a language L over the alphabet Σ = {1} such that L ∉ RE and L ∉ coRE | Question: I attempted to create a language $L_1 = \{\langle M\rangle \mid L(M) = 1^*\}$ and prove using reductions that $L_1 \notin \mathrm{RE}$ and $L_1 \notin \mathrm{coRE}$ by showing that $HP ≤ L_1$ and $\overline{HP} ≤ L_1$.
But my instructor said this isn't correct.
Answer: Both reductions look fine and are easy to come up with. You need to ask your instructor what is wrong with your suggested reductions.
A reduction from $HP$ to $L_1$ operates as follows. Given input $\langle M, w\rangle$, the reduction outputs $\langle T\rangle$, where $T$ is a machine that rejects every input not of the form $1^*$, and for every input $x$ of the form $1^*$, it simulates the run of $M$ on $w$, and accepts $x$ only when $M$ halts on $w$.
Then, a reduction from $\overline{HP}$ to $L_1$ operates as follows. Given input $\langle M, w\rangle$, the reduction outputs $\langle T\rangle$, where $T$ is a machine that operates as follows. On input $x$ for $T$, $T$ simulates the run of $M$ on $w$ for $|x|$ steps. If within the simulation, $M$ does not reach $q_{acc}$ or $q_{rej}$, then $T$ accepts $x$. Otherwise, $T$ rejects $x$.
I leave correctness to you, and note that both reductions are valid when $T$ is a machine over an unary alphabet $\Sigma = \{ 1\}$. I think one of your suggested reduction assumes that the alphabet of $T$ is not unary, and I suspect it is the one from $\overline{HP}$, but I cannot tell for sure without seeing your solution. | {
"domain": "cs.stackexchange",
"id": 21776,
"tags": "formal-languages, turing-machines"
} |
What is the format of the data from the JPL's HORIZONS system? | Question: I'm looking for accurate positions and velocities of the planets in the solar system over several decades. I want to simulate their trajectories using Newtonian laws of motion, and compare the precession of Mercury's perihelion with observed data. I used the JPL's HORIZONS system but I can't manage to find these positions and velocities in the output data.
Could you explain how the data is formatted? (The HORIZONS web-interface Tutorial didn't help.)
Answer: WebGeocalc (http://wgc.jpl.nasa.gov:8080/webgeocalc/) is actually a
fairly useful tool, and the next best thing to downloading the CSPICE
libraries, so let me provide a slightly more detailed answer to your
question.
When you first visit WebGeocalc, you will see something like this:
Because you are looking for perihelions, which involve distance,
scroll down and choose "distance finder":
Since you want to find the perihelion of Mercury, fill out the top
half of the form like this:
For this example, I'll find all perihelions from 1900 to 2201. A
perihelion is a minimum of distance, and since you're looking for all
perihelions, choose "is local minimum" on the lower half of the form,
so it looks as follows:
Scroll down once again (you can leave the remaining form values as is)
and click the "Calculate" button:
When you get your results, click "Save All Intervals" to save the
times of Mercury's perihelions. Note that each result is an interval,
but the interval is 0 seconds long, so it's actually a point in time:
Now click on "Calculation Menu" to get back to the main menu:
We'll now find Mercury's position at all of these perihelions so you
can see how the perihelion position changes.
To find Mercury's position at these times, first click on "State
Vector", which will take you to a form. Fill out the top half of the
form as follows:
Fill out the lower half of the form as follows, dragging the results
window (for the times of Mercury's perihelions) into the input times
window:
Click "Calculate" to receive your results:
See also http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/Tutorials/pdf/individual_docs/36_webgeocalc.pdf
NOTE: if you're going to do extensive calculations, you'll probably
want to download the C SPICE library and kernels:
http://naif.jpl.nasa.gov/naif/tutorials.html
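To address the original question about the text format directly: HORIZONS brackets the ephemeris rows between the markers `$$SOE` (start of ephemeris) and `$$EOE` (end of ephemeris), and with CSV output a vector table typically lists the Julian date, a calendar date string, and then the state components. A minimal parser sketch (the sample rows below are fabricated for illustration, not real output):

```python
# Sketch of parsing a HORIZONS text ephemeris (vector table, CSV format).
SAMPLE = """\
...header text...
$$SOE
2451544.5, A.D. 2000-Jan-01 00:00:00.0000, -0.1300, -0.4472, -0.0246, 0.0214, -0.0066, -0.0025,
2451545.5, A.D. 2000-Jan-02 00:00:00.0000, -0.1083, -0.4529, -0.0270, 0.0220, -0.0048, -0.0024,
$$EOE
...trailer text...
"""

def parse_vectors(text):
    """Return (jd, x, y, z, vx, vy, vz) tuples from a CSV vector table."""
    rows, inside = [], False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped == "$$SOE":
            inside = True
        elif stripped == "$$EOE":
            inside = False
        elif inside:
            parts = [p.strip() for p in line.split(",")]
            # parts[0] = Julian date, parts[1] = calendar date string,
            # parts[2:8] = x, y, z, vx, vy, vz
            rows.append(tuple(float(parts[i]) for i in (0, 2, 3, 4, 5, 6, 7)))
    return rows

states = parse_vectors(SAMPLE)
print(len(states))  # 2
```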
EDIT: I emailed Jon D. Giorgini (jdg@tycho.jpl.nasa.gov) re the open-sourceness of HORIZONS, and got this answer (which I'm not sure is helpful, but...):
Horizons output is copyrighted in the sense one can't take a
table output by Horizons and re-publish it as one's own.
JPL is a contractor organization and so the output is not prepared
by an officer or employee of the US government, and it is not
transferred to the US government, so copyright remains relevant.
The data values themselves, say the RA/DEC of Pluto at
some instant, are not legally copyrighted, though it would
be unethical to publish it without citation.
Legal copyright protection is for fixed representations
analogous to a book; the book has copyright protection, but
an individual word extracted from the book (i.e., into a
different representation) is not itself protected. Up to
a point.
Note that Horizons reads SPICE files for planets, natural
satellites, and spacecraft, then derives results it outputs
from scratch from that basis. But for comets and asteroids,
it creates all information on demand from scratch.
Further, we (JPL Solar System Dynamics) put the planet/satellite/
spacecraft information into the SPICE format to begin with. So
a SPICE file might be like the blank pages of a book; not
copyrighted until the original work is written into it, but not
itself copyrighted (beyond the source code, of course).
But if you need a more trustworthy legal opinion for a specific
situation, I can put you in touch with legal staff. If this is an
actual issue and not just a point of curiosity, that's the way to go,
since perhaps other rules would apply to your actual situation. | {
"domain": "astronomy.stackexchange",
"id": 1101,
"tags": "solar-system, mercury"
} |
Investigating Circular Motion using a Whirling Bung | Question:
This experiment is shown in my book.
The book says 'a mass of weight 1.0N is attached on the nylon thread. This creates a centripetal force F of 1.0N on the bung.'
However, doesn't the weight of the rubber bung also contribute to the tension and therefore the centripetal force?
Answer: Indeed it does! The tension in the thread is affected both by the weight of the central mass and by the weight of the rubber bung; as you spin the rubber bung faster, the tension in the rope on the bung end tilts further and further askew from the direction of gravity, meaning the tension must grow larger so that its vertical component balances gravity. This is happily in line with physical intuition. | {
"domain": "physics.stackexchange",
"id": 36435,
"tags": "newtonian-mechanics, centripetal-force"
} |
Is the force of a lifting arm due to a piston an internal force? | Question: When I was analyzing an excavator, I was wondering if the force that the piston exerts on the lifting arm is an internal or external force. I am a bit confused because the geometry of the system changes when this force is exerted and this seems weird for an internal force.
Answer: Of course that's an internal force.
Let's see what you are confused about:
I am a bit confused because the geometry of the system changes when this force is exerted and this seems weird for an internal force.
Well, you cannot tell whether a force is internal or external by whether the geometry of the system change or not.
Here's an example:
Let's say you are "floating" in outer space and feel no external force. When you raise your hand or move your head, the geometry of the system (i.e. your body) changes, but the centre of mass of your body will not move until some external force acts on you, no matter how you change your posture. | {
"domain": "physics.stackexchange",
"id": 19796,
"tags": "classical-mechanics, forces"
} |
Counting characters inside of a textarea using Angular | Question: I have a textarea and need to calculate how many characters have been typed into it.
Both of the following solutions work correctly—which would be preferable, and why? Is there an even better way to implement it?
Option A:
<textarea
id="descriptionEducation"
name="description"
rows="5"
[(ngModel)]="fruit.description"></textarea>
<div class="form-row__description">
You have written {{ fruit.description.length }} characters
</div>
Option B:
<textarea
id="descriptionEducation"
name="description"
rows="5"
[ngModel]="fruit.description" #descriptionFruit></textarea>
<div class="form-row__description">
You have written {{ descriptionFruit.value.length }} characters
</div>
Answer: You don't need to declare an extra reference variable. I would suggest the first approach i.e. using [(ngModel)] for two-way data binding.
However, in the first approach, you must initialize the model value before using it in your template, i.e. fruit.description should not be undefined, otherwise you'll get the following TypeError:
Cannot read property 'length' of undefined
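One way to avoid that error is to give the model a default value in the component class (a sketch — the class and interface names here are illustrative, not from the original post):

```typescript
// Initialize the model so that fruit.description is never undefined.
interface Fruit {
  description: string;
}

class FruitFormComponent {
  // An empty-string default makes {{ fruit.description.length }} safe to
  // evaluate before the user has typed anything.
  fruit: Fruit = { description: '' };
}

const cmp = new FruitFormComponent();
console.log(cmp.fruit.description.length); // 0
```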
Using the second approach, you won't get that error. | {
"domain": "codereview.stackexchange",
"id": 27232,
"tags": "javascript, comparative-review, form, angular-2+"
} |
rospy message_converter outputs zero while in the terminal I got different output for the same topic | Question:
ROS Kinetic
Ubuntu 16.04
Python 3.5
I am using diff_drive_controller to publish rwheel_angular_vel_motor and lwheel_angular_vel_motor topics. My goal is to take messages published by those topics and transmit them via tcp/ip.
Currently, I am trying to convert ros msgs into a json file by using following script.
from rospy_message_converter import json_message_converter
from std_msgs.msg import Float32
while True:
    message = Float32()
    json_str = json_message_converter.convert_ros_message_to_json(message)
    print(json_str)
When I run it I always see {"data": 0.0} in the output. But when I use:
rostopic echo rwheel_angular_vel_motor
I am able to see the data. I also used rostopic info rwheel_angular_vel_motor command and got the following output:
Type: std_msgs/Float32
Publishers:
/gopigo_controller (http://R2:34603/)
Subscribers:
/gopigo_state_updater (http://R2:45111/)
So anyone knows why I am getting 0 as output?
Originally posted by wallybeam on ROS Answers with karma: 57 on 2019-11-12
Post score: 0
Original comments
Comment by mali on 2019-11-12:
it seems that you define Float32 msg and convert it without assigning any value. where do you subscribe to the topic rwheel_angular_vel_motor in your code?
Comment by wallybeam on 2019-11-12:
Actually, I am not subscribing to that topic. I am only reading std_msg/FLoat32. I thought since rwheel_angular_vel_motor publishes that message, I don't need to subscribe to that topic or the from std_msgs.msg import Float32 part handles it itself.
Comment by mali on 2019-11-12:
from std_msgs.msg import Float32 imports the definition of the message so that you can use it in your code. To access the data published on any topic, you should subscribe to that topic first. so, you need to write a subscriber to rwheel_angular_vel_motor and then assign the coming data to a variable and then convert the data.
Answer:
Thanks to mali's comment I changed my code. The following script works for me. My topic name is rwheel_angular_vel_motor and it publishes std_msgs/Float32 messages. It turns out that json_message_converter is not necessary for converting messages to JSON.
import rospy
from std_msgs.msg import Float32
import json
def callback(data):
    rospy.loginfo("I heard %s", data.data)
    dict_data = {'rwheel': data.data}
    # serialize the dict itself so the output is a JSON object
    json_data = json.dumps(dict_data)
    print(json_data)

def listener():
    rospy.init_node('node_name')
    rospy.Subscriber("rwheel_angular_vel_motor", Float32, callback)
    # spin() simply keeps python from exiting until this node is stopped
    rospy.spin()

if __name__ == '__main__':
    listener()
Originally posted by wallybeam with karma: 57 on 2019-11-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34001,
"tags": "ros, rosmessage, ros-kinetic, ubuntu, rospy"
} |
Does entropy decrease if time is reversed? | Question: Entropy increases if we let Newton's equations work their magic.
Since Newton's equations are time reversible, I would assume that in a closed isolated system, solving the differential equations and running time backwards would increase (and NOT decrease) the entropy of the system.
Is that true?
Answer: Simple answer: in our universe, definitely no.
You're hitting here on an idea known as Loschmidt's Paradox[1]: given that microscopic laws are time reversible, entropy should have the same tendency to increase whether we run a system forwards or backwards in time, exactly as you understand.
The fact that this understanding is manifestly against experimental observation can be explained if we observe that the universe began (i.e. found itself at the time of the big bang) in an exquisitely low entropy state, so that almost any random walk in the universe's state space tends to increase entropy. Likewise, in the everyday world, things "happen" when a system is not in its maximum entropy state: it spontaneously wanders towards the maximum entropy state, thus changing its state and undergoing observable changes. Sir Roger Penrose calls this notion the "Thermodynamic Legacy" of the big bang and you could read the chapter entitled "The Big Bang and its Thermodynamic Legacy" in his "Road to Reality". In summary, we have a second law of thermodynamics simply by dint of the exquisitely low entropy state of the early universe.
[1] Loschmidt's own name for it is "reversal objection" (umkehreinwand), not "paradox". Paradoxes, i.e. genuine logical contradictions cannot arise in physics, otherwise they could not be experimentally observed. | {
"domain": "physics.stackexchange",
"id": 67110,
"tags": "entropy"
} |
Shorten dict comprehension with repeated operation | Question: dc = {line.split('=')[0]: line.split('=')[1] for line in txt}
Below avoids duplication but is even longer:
dc = {k: v for line in txt for k, v in
zip(*map(lambda x: [x], line.split('=')))}
Any better way? Just without any imports.
Context: with open('config.txt', 'r') as f: txt = f.read().split('\n')
a=1
bc=sea
>>> {'a': '1', 'bc': 'sea'} # desired output; values should be string
Answer: dc = dict(line.split('=') for line in txt) | {
"domain": "codereview.stackexchange",
"id": 39622,
"tags": "python, python-3.x, iterator"
} |
How does one mathematically derive the damping coefficient of a theoretical viscous dashpot? | Question: I am very well aware of how to get the damping coefficient experimentally by observing a system in action.
Given the dimensions and fluid properties of a theoretical viscous fluid dashpot, how does one calculate the damping coefficient?
I found this website, which seems to have a calculator that does just that, but I cannot find where they get their formula: http://www.tribology-abc.com/calculators/damper.htm
I want to design an damper with a specific damping coefficient, and would love to be pointed in the right direction.
Answer: It seems pretty straightforward how they derived the hidden formula for the damper constant in their idealized design. The volumetric flow rate of fluid displaced by the piston would be $$Q=VA$$ where V is the piston velocity and A is the cross sectional area of the piston. This is also the volumetric flow rate of fluid through the small tube of diameter d. The force on the piston is $$F=A\Delta P$$ where $\Delta P$ is the pressure difference across the piston. The relationship between the pressure drop and the volumetric flow rate through the small tube is determined by the Hagen–Poiseuille equation for laminar flow in a tube:
$$\Delta P=\frac{128 QL}{\pi d^4}\mu$$ where L is the thickness of the piston and $\mu$ is the fluid viscosity. Combining these equations, $F = A\,\Delta P = \frac{128\mu L A^2}{\pi d^4}V$, so the damping coefficient is $c = F/V = \frac{128\mu L A^2}{\pi d^4}$. | {
"domain": "physics.stackexchange",
"id": 52055,
"tags": "fluid-dynamics, friction, vibrations, viscosity"
} |
Finding concentrations in a voltaic cell | Question:
A voltaic cell with $\ce{Ni/Ni^2+}$ and $\ce{Co/Co^2+}$ half cells has the initial concentrations of $\ce{Ni^2+}$ $\pu{0.80 M}$ and $\ce{Co^2+}$ $\pu{0.20 M}$.
(a) Find initial cell potential;
(b) Find concentration of $\ce{Ni^2+}$ when $E_\mathrm{cell}$ reaches $\pu{0.03 V}$
In part (a), I correctly found the initial $E_\mathrm{cell}$ to be $\pu{0.05 V}$.
In part (b), the book doesn't have any other examples that are explained showing how to get one of the concentrations, it's supposed to be $\ce{0.50 M}$, but I'm not sure how they get to that. From my tables, I found $E$ half cell for $\ce{Ni}$ to be $\pu{-0.25 V}$ and $E$ half cell for $\ce{Co}$ to be $\pu{-0.28 V}$, and the overall then to be $\pu{0.03 V}$.
Tried plugging in everything I have into the Nernst equation, but that still leaves me coming up short.
Answer: I think you were on the right track!
You were right that you need the Nernst equation, which must be how you determined the initial $E_\text{cell}$.
$$E_\text{cell}=E_\text{cell}^\circ-\frac{RT}{nF}\ln Q$$
You correctly identified that your $\ce{Co | Co^2+ || Ni^2+ | Ni}$ cell has the standard cell potential
$$E_\text{cell}^\circ=E_\text{cathode}^\circ-E_\text{anode}^\circ=-0.25\ \mathrm V-(-0.28\ \mathrm V)=0.03\ \mathrm V$$
So the question is essentially asking what concentrations are needed for your cell to have the standard potential. Again looking at the Nernst equation, you can solve for the reaction quotient $Q$ when $E_\text{cell}=E_\text{cell}^\circ$
$$E_\text{cell}=E_\text{cell}^\circ-\frac{RT}{nF}\ln Q$$
$$E_\text{cell}-E_\text{cell}^\circ=0=\frac{RT}{nF}\ln Q$$
$$\ln Q=0$$
$$Q=1$$
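This condition can be checked numerically with a short sketch (the standard potential and concentrations are from the problem statement; the function name is illustrative):

```python
import math

# Nernst equation for this cell, with Q = [Co2+]/[Ni2+].
E_STD = 0.03          # V, E° = E°(Ni2+/Ni) - E°(Co2+/Co) = -0.25 - (-0.28)
R, T, F, n = 8.314, 298.15, 96485.0, 2

def e_cell(ni, co):
    """Cell potential given the two ion concentrations (mol/L)."""
    return E_STD - (R * T) / (n * F) * math.log(co / ni)

print(round(e_cell(0.80, 0.20), 2))  # 0.05  (part a, the initial potential)
print(round(e_cell(0.50, 0.50), 2))  # 0.03  (Q = 1, so E = E°)
```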
At this point, you can solve for the concentration of $\ce{Ni^2+}$ with simple algebra: the cell reaction $\ce{Ni^2+ + Co -> Ni + Co^2+}$ consumes $\ce{Ni^2+}$ and produces $\ce{Co^2+}$ one-for-one, so $[\ce{Ni^2+}] = 0.80 - x$ and $[\ce{Co^2+}] = 0.20 + x$. Setting $Q = \frac{0.20+x}{0.80-x} = 1$ gives $x = 0.30$, hence $[\ce{Ni^2+}] = 0.50\ \mathrm M$. | {
"domain": "chemistry.stackexchange",
"id": 3900,
"tags": "electrochemistry"
} |
Increasing opacity based on an element's location | Question: I've got a piece of code that takes an element, checks where it is, and if it's beyond a set place in the viewport, make the opacity increase to 1. I've made the code so that it only runs the checks if they're needed, the problem is that the code looks atrocious, and I suspect there's a better way for me to be doing it.
var $wheresThisAt = viewportHeight*4;
if (scrolled > ($wheresThisAt))
{
//there has to bve a better way...
$('.salafoot').css('opacity', '0');
$('#salamander-1').css('opacity', '1');
if (scrolled > (($wheresThisAt)+20)){
$('#salamander-2').css('opacity', '1');
if (scrolled > (($wheresThisAt)+40)){
$('#salamander-3').css('opacity', '1');
if (scrolled > (($wheresThisAt)+60)){
$('#salamander-4').css('opacity', '1');
if (scrolled > (($wheresThisAt)+80)){
$('#salamander-5').css('opacity', '1');
if (scrolled > (($wheresThisAt)+100)){
$('#salamander-6').css('opacity', '1');
if (scrolled > (($wheresThisAt)+120)){
$('#salamander-7').css('opacity', '1');
if (scrolled > (($wheresThisAt)+140)){
$('#salamander-8').css('opacity', '1');
if (scrolled > (($wheresThisAt)+160)){
$('#salamander-9').css('opacity', '1');
if (scrolled > (($wheresThisAt)+180)){
$('#salamander-10').css('opacity', '1');
if (scrolled > (($wheresThisAt)+200)){
$('#salamander-11').css('opacity', '1');
if (scrolled > (($wheresThisAt)+220)){
$('#salamander-12').css('opacity', '1');
if (scrolled > (($wheresThisAt)+240)){
$('#salamander-13').css('opacity', '1');
if (scrolled > (($wheresThisAt)+260)){
$('#salamander-14').css('opacity', '1');
if (scrolled > (($wheresThisAt)+280)){
$('#salamander-15').css('opacity', '1');
if (scrolled > (($wheresThisAt)+300)){
$('#salamander-16').css('opacity', '1');
if (scrolled > (($wheresThisAt)+320)){
$('#salamander-17').css('opacity', '1');
if (scrolled > (($wheresThisAt)+340)){
$('#salamander-18').css('opacity', '1');
if (scrolled > (($wheresThisAt)+360)){
$('#salamander-19').css('opacity', '1');
if (scrolled > (($wheresThisAt)+380)){
$('#salamander-20').css('opacity', '1');
if (scrolled > (($wheresThisAt)+400)){
$('#salamander-21').css('opacity', '1');
if (scrolled > (($wheresThisAt)+420)){
$('#salamander-22').css('opacity', '1');
if (scrolled > (($wheresThisAt)+440)){
$('#salamander-23').css('opacity', '1');
if (scrolled > (($wheresThisAt)+460)){
$('#salamander-24').css('opacity', '1');
if (scrolled > (($wheresThisAt)+480)){
$('#salamander-25').css('opacity', '1');
if (scrolled > (($wheresThisAt)+500)){
$('#salamander-26').css('opacity', '1');
if (scrolled > (($wheresThisAt)+520)){
$('#salamander-27').css('opacity', '1');
if (scrolled > (($wheresThisAt)+540)){
$('#salamander-28').css('opacity', '1');
if (scrolled > (($wheresThisAt)+560)){
$('#salamander-29').css('opacity', '1');
if (scrolled > (($wheresThisAt)+580)){
$('#salamander-30').css('opacity', '1');
if (scrolled > (($wheresThisAt)+600)){
$('#salamander-31').css('opacity', '1');
if (scrolled > (($wheresThisAt)+620)){
$('#salamander-32').css('opacity', '1');
if (scrolled > (($wheresThisAt)+640)){
$('#salamander-33').css('opacity', '1');
if (scrolled > (($wheresThisAt)+660)){
$('#salamander-34').css('opacity', '1');
if (scrolled > (($wheresThisAt)+680)){
$('#salamander-35').css('opacity', '1');
if (scrolled > (($wheresThisAt)+700)){
$('#salamander-36').css('opacity', '1');
}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} // close all if statements before if (scrolled > ($wheresThisAt))
} // end if (scrolled > ($wheresThisAt))
Here's a sample of the HTML that's being affected by the code
<div class="salafoot" id="salamander-8" style="opacity: 0;"></div>
As you can see I reuse almost identical code a lot of times, and if I want to change the number of steps (say to 37) I need to add another line. The if statements all get left open until the end, so that it only runs the check on the next line if it's useful (only checks +40 if the +20 has already been approved). All of the #salamander-(n) IDs are tied to the .salafoot class. The expected output is footprints that appear one at a time as you scroll down the page.
Answer: var top = Math.max(Math.min(Math.ceil((scrolled-$wheresThisAt)/20),35),1);
for (var i = 1; i < top; i++) {
$('#salamander-' + i).css('opacity', '1');
}
This is similar to Amon's answer, but avoids the if structure. | {
"domain": "codereview.stackexchange",
"id": 6816,
"tags": "javascript, optimization, jquery"
} |
Installing "mods" with Python | Question: A little disclaimer: you're going to have a field day with this. It's horrible. Looking at it makes me want to vomit, and don't ask me to explain it, because I've long forgotten what I was thinking when I wrote it. That being said, I can't really find a way to shorten it of make it make sense. Perhaps someone could help me out?
A bit of a background:
This program is meant to install mods for a game called Supreme Commander 2. It is meant to install ANY mod. The following is true of all the mods:
The only archive type is the .scd. SCD's have the same encoding as .zip's, but with a 0% compression rate.
The way to install the mod is simple: There will always be at least one scd inside the first, and there may be more scd's inside that one. Any scd with folders inside it should be moved to
C:\Program Files (x86)\Steam\steamapps\common\supreme commander 2\gamedata
There may be more than one scd to be moved per mod.
And that's it. This is something of a "programming challenge," but I am really looking for improvements upon the following code:
import os, zipfile

def installfromdownload(filename):
    global scdlist
    targetfolder = r"C:\Program Files (x86)\Mod Manager\Mods\f_" + filename[:-4]
    lentf = len(targetfolder)
    unzip(r"C:\Program Files (x86)\Mod Manager\Mods\z_" + filename + ".scd", targetfolder)
    if checkdirs(targetfolder) == False:
        os.system("copy C:\Program Files (x86)\Mod Manager\Mods\z_" + filename + ".scd" " C:\Program Files (x86)\Steam\steamapps\common\supreme commander 2\gamedata")
    else:
        newfolderonelist = []
        getscds(targetfolder)
        for file in scdlist:
            newfolderone = targetfolder + "\f_" + file[:-4]
            filepath = targetfolder + "\\" + file
            unzip(filepath, newfolderone)
            newfolderonelist.append(newfolderone)
        for newfolders in newfolderonelist:
            if os.checkdirs(newfolders) == False:
                newfolders = newfolders[lentf + 1:]
                for scds in scdlist:
                    if newfolder in scds == True:
                        scdpath = targetfolder + "\\" + scds
                        os.system("copy " + scdpath + " C:\Program Files (x86)\Steam\steamapps\common\supreme commander 2\gamedata")
Note that there is no error handling here, that is done when I call the function. The function unzip() is here:
def unzip(file, target):
    modscd = zipfile.ZipFile(file)
    makepath(target)
    modscd.extractall(target)
The function checkdirs() is here (note this is pretty bad, it only checks for periods in the file name, so please submit suggestions):
def checkdirs(target):
    isfile = 0
    targetlist = os.listdir(target)
    for files in targetlist:
        for letters in files:
            if letters == ".":
                isfile = isfile + 1
    if isfile == len(os.listdir(target)):
        return False
    else:
        return True
The getscds() function just checks a filename for ".scd".
Answer: Your main function is hard to follow, but it seems to repeat its logic with subfolders. Generally something like that should be implemented as a recursive function. In pseudo-code, the basic structure of the recursive function would be:
def copy_scds(scd, target_folder):
    unzip scd to target_folder
    if target_folder has scds:
        for each inner_scd:
            subfolder = new folder under target_folder
            copy_scds(inner_scd, subfolder)
    else:
        copy scd to gamedata
As you can see, the unzip and copy parts are only written once, as opposed to repeated in nested for loops. The additional scds and subfolders are simply passed in as new arguments for scd and target_folder.
The main function then only needs to determine the initial "root" scd and target_folder and begin the process:
def install(filename):
    scd = r"C:\Program Files (x86)\Mod Manager\Mods\z_" + filename + ".scd"
    target_folder = r"C:\Program Files (x86)\Mod Manager\Mods\f_" + filename[:-4]
    copy_scds(scd, target_folder)
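To make the pseudo-code concrete, here is a minimal runnable sketch of that recursion. The helper names are hypothetical, `shutil.copy` stands in for the `os.system("copy ...")` calls, and the folders are passed in as parameters rather than hard-coded:

```python
import os
import shutil
import zipfile


def find_scds(folder):
    # Names of .scd archives sitting directly inside `folder`
    return [f for f in os.listdir(folder) if f.lower().endswith(".scd")]


def copy_scds(scd, target_folder, gamedata_folder):
    # Unzip `scd` into `target_folder`; recurse into any nested .scd files,
    # and copy leaf archives (no .scd inside) to the gamedata folder.
    os.makedirs(target_folder, exist_ok=True)
    with zipfile.ZipFile(scd) as archive:
        archive.extractall(target_folder)
    inner = find_scds(target_folder)
    if inner:
        for name in inner:
            copy_scds(os.path.join(target_folder, name),
                      os.path.join(target_folder, "f_" + name[:-4]),
                      gamedata_folder)
    else:
        shutil.copy(scd, gamedata_folder)
```

This mirrors the pseudo-code exactly: the unzip and copy steps appear once, and nesting is handled by the recursive call rather than by duplicated loops.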
My only other suggestion other than what was given in the other answers is to store the paths to make them more easily configurable and eliminate repetition in the code. For example:
mods_folder = r"C:\Program Files (x86)\Mod Manager\Mods"
zip_prefix = "z_"
output_prefix = "f_"
gamedata_folder = r"C:\Program Files (x86)\Steam\steamapps\common\supreme commander 2\gamedata"
In fact, it may be possible to completely remove the need to configure the game folder. Steam knows how to launch any game based only on a unique AppID, so theoretically there must be some way to determine the path automatically. | {
"domain": "codereview.stackexchange",
"id": 6102,
"tags": "python, python-3.x, installer"
} |
Since the Earth accelerates, does a stationary charge on Earth produce EM waves? | Question: Since the Earth is revolving around the Sun, it is accelerating. This implies that any stationary charge on Earth is also accelerating. However, I have never heard anyone say anything about that. Why is this so? Do charges on Earth not emit electromagnetic radiation?
Answer: Yes... if that's the only charge in the system. However, to a good approximation, the Earth is globally neutral, which means that if you collect a charge $+Q$ in some location, you're creating regions with a total charge $-Q$ elsewhere on Earth, with the same acceleration. In that scenario, while your charge $+Q$ formally radiates, its field will get cancelled with that of the (equal and opposite) radiation from the opposite charge, and the global radiation becomes negligible.
That said, the Earth does have some nonzero charge, however small. The linked question does not have definitive numbers, but this question puts the net charge of the Sun at 77 C, so as a rough estimate, let's put a net charge of 1C on the Earth. How much does this radiate? We can find that via the Larmor radiation formula, which predicts a power dissipation of
$$
P_\mathrm{yearly}
=\frac {q^{2}a^{2}}{6\pi \varepsilon _{0}c^{3}}
\approx 8\times10^{-21}\:\rm W.
$$
This is absolutely tiny, because it depends on the square of the orbital acceleration, and therefore on the fourth power of the orbital period, $T^4\approx 10^{30}\:\rm s^4$. Moreover, that radiation is at frequencies of the order of 1/year, which is much lower than you can realistically detect.
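As a quick sanity check on that order of magnitude, the Larmor estimate can be evaluated numerically, assuming a 1 C net charge and Earth's orbital parameters:

```python
import math

eps0 = 8.854e-12    # vacuum permittivity, F/m
c = 2.998e8         # speed of light, m/s
q = 1.0             # assumed net charge, C
r = 1.496e11        # Earth-Sun distance, m
T = 3.156e7         # orbital period (one year), s

a = (2 * math.pi / T) ** 2 * r                  # centripetal acceleration, m/s^2
P = q**2 * a**2 / (6 * math.pi * eps0 * c**3)   # Larmor power, W
print(f"a = {a:.2e} m/s^2, P = {P:.1e} W")      # P is of order 8e-21 W
```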
That said, you can do a good deal better by separating your positive 1C charge on one side of the Earth and putting the negative 1C on the other side, so that you get an electric dipole that oscillates daily and therefore radiates. If you do that, you get a much higher Larmor power dissipation, of roughly
$$
P_\mathrm{daily} \approx 10^{-10} \: \rm W,
$$
i.e. something that's still negligible. Moreover, once you get into the electric and magnetic dynamics of charge at the scale of the Earth, you get a much more dynamic environment with movement of charges and currents, from thunderstorms to the geomagnetic field to the solar wind to the van Allen belts, all of which produce nontrivial radiation in that frequency range, i.e. radio noise at ultralong wavelengths. | {
"domain": "physics.stackexchange",
"id": 43018,
"tags": "electromagnetic-radiation, acceleration, charge"
} |
The vapour density of $N_2O_4$ at a certain temperature is 30. Calculate the percentage of dissociation of $N_2O_4$ at this temperature? | Question: The vapour density of $N_2O_4$ at a certain temperature is 30. Calculate the percentage of dissociation of $N_2O_4$ at this temperature. $\ce{N2O4_{(g)} <=> 2NO2_{(g)}}$?
I am unable to understand the concept behind the vapour density of a mixture.
Currently I understand that:
2 × vapour density = molar mass
vapour density = mass of n molecules of gas ÷ mass of n molecules of hydrogen
vapour density = molar mass of gas ÷ molar mass of H2
I am unable to apply the above formulas because a mixture does not have a molar mass.
And I am also not able to understand that, if 2 × vapour density = molar mass, then the molar mass of $N_2O_4$ at the given temperature would be 60 instead of 92.
The correct answer is 53.33%
Answer: The big mistake you made was assuming a mixture does not have a molar mass.
Molar mass for a mixture is calculated by using the mole fractions and molar masses of each constituent.
Let:
$A$ represent $\ce{N2O4}$
$C$ represent $\ce{NO2}$
Then the reaction becomes:
$$\ce{A<=>2C}$$
First, we calculate the molar mass of the mixture using the given vapor density:
$$M=2v=(30)(2)=60\;g/mol$$
Then, we can set up the following system of equations in terms of molar masses and mole fractions:
$$X_A\;M_A+X_C\;M_C=M$$
$$X_A+X_C=1$$
Substituting all known values, the system looks like this:
$$92\;X_A+46\;X_C=60$$
$$X_A+X_C=1$$
Solving this system, we get the equilibrium molar fractions:
$$X_A=0.3043$$
$$X_C=0.6957$$
Then, we can calculate $K_X$, the equilibrium constant in terms of molar fractions:
$$K_X=\frac{X_C^2}{X_A}=\frac{0.6957^2}{0.3043}=1.5905$$
Finally, we use the relationship between $K_X$ and dissociation fraction $\alpha$ for this reaction to calculate it:
$$K_X=\frac{(2\alpha)^2}{1-\alpha^2}=1.5905$$
Solving for $\alpha$:
$$\alpha=\sqrt{\frac{K_X}{4+K_X}}=\sqrt{\frac{1.5905}{4+1.5905}}$$
$$\alpha=0.5333$$ | {
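The chain of calculations above is easy to verify numerically. Note also the shortcut: one mole of $\ce{N2O4}$ becomes $(1+\alpha)$ moles of mixture with the same mass, so $M_\text{mix}=M_{\ce{N2O4}}/(1+\alpha)$ gives $\alpha$ directly:

```python
M_A, M_C, M_mix = 92.0, 46.0, 60.0   # g/mol: N2O4, NO2, mixture (2 x 30)

# Solve 92*X_A + 46*(1 - X_A) = 60 for the mole fractions
X_A = (M_mix - M_C) / (M_A - M_C)
X_C = 1 - X_A

K_X = X_C**2 / X_A
alpha = (K_X / (4 + K_X)) ** 0.5

# Shortcut: M_mix = M_A / (1 + alpha)  =>  alpha = M_A / M_mix - 1
alpha_direct = M_A / M_mix - 1

print(round(alpha, 4), round(alpha_direct, 4))  # both routes give 0.5333
```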
"domain": "chemistry.stackexchange",
"id": 16995,
"tags": "vapor-pressure"
} |
Real-time video processing on video feed from a drone's camera | Question: I am working on a project where I want to run some computer vision algorithms (e.g. face recognition) on the live video stream coming from a flying drone.
There are many commercial drones out there that offer video streams, like
http://www.flyzano.com/mens/
https://www.lily.camera/
etc..
But none of them seem to give access to the video feed for real-time processing.
Another idea is to have the drone carry a smartphone, and do the processing on the phone through the phone's camera. Or just use a digital camera and an arduino that are attached to the drone.
Although these ideas are feasible, I would rather access the video-feed of the drone itself. So my question is that are there any drones out there that offer this feature? or can be hacked somehow to achieve this?
Answer: The AR.Drone provides an SDK, so you can access the images in real time. It is fully compatible with Linux. They also have examples for smartphones, I believe Android and iPhone. It has two cameras. I've bought it and its price is reasonable; at the time, it was roughly 272 CAD. Of course, the price is now higher than before, but I believe it is still affordable. You need to know C programming and Makefiles.
"domain": "robotics.stackexchange",
"id": 1622,
"tags": "computer-vision, cameras"
} |
Router for personal use | Question: This is a router I created for personal use; however, I wish to refactor this code and make it more robust. I would really appreciate anyone who can:
Point out how to make this code more robust, clean and flexible in terms of reusability, where the user of this module can easily alter the configuration etc. to his/her needs without altering the source code.
Point out where code can be refactored and possibly how.
Point out how the code could be made more efficient.
Point out places where I have breached any principles of OOP.
Implementation:
$router = new Router\Router();
/**
* The arguments for the method createRoute:
* First Argument - URI
* Second Argument - Namespace or Classname of which the method will be executed.
* Third Argument - The method to be run in the specified namespace/class in the second argument.
*/
$router->createRoute('/', 'PageController', 'showHome');
$result = $router->runRouter();
Source Code:
<?php
namespace Router;
class Router
{
private $routes = array();
private $patternForURIs = '/\{[a-zA-Z0-9]*\}/';
public function createRoute($uri, $class, $method)
{
$result = $this->checkWhetherParametersAreRequired($uri);
if($result != false) {
$this->routes[] = [$uri, $class, $method, $result];
return true;
}
$this->routes[] = [$uri, $class, $method];
return true;
}
public function checkWhetherParametersAreRequired($uri)
{
if(preg_match_all($this->patternForURIs, $uri, $matches)) {
return $matches[0];
}
return false;
}
public function runRouter()
{
$routesWhichExpectParameters = array();
foreach($this->routes as $route) {
if(isset($route[3])) {
$routesWhichExpectParameters[] = $route;
}
if($route[0] === $_SERVER["REQUEST_URI"]) {
return $route;
}
}
foreach($routesWhichExpectParameters as $route) {
$route[0] = ltrim($route[0], '/');
$route[0] = explode('/', $route[0]);
$partsToConfirm = count($route[0]);
$requestURI = ltrim($_SERVER["REQUEST_URI"], '/');
$requestURI = explode('/', $requestURI);
$partsAvailable = count($requestURI);
if($partsToConfirm != $partsAvailable) {
continue;
}
$confirmedParts = 0;
$numberOfParametersToDetect = count($route[3]);
$numberOfParametersDetected = 0;
$detectedParameters = array();
for($iterator = 0; $iterator < $partsToConfirm; $iterator++) {
if($route[0][$iterator] == $requestURI[$iterator]) {
$confirmedParts++;
continue;
} else if(preg_match($this->patternForURIs, $route[0][$iterator])) {
$confirmedParts++;
$detectedParameters[] = array(
$route[3][$numberOfParametersDetected] => $requestURI[$iterator]
);
$numberOfParametersDetected++;
continue;
}
}
if(($numberOfParametersDetected == $numberOfParametersToDetect) && ($partsToConfirm == $confirmedParts)) {
$route[4] = $detectedParameters;
return $route;
break;
}
}
return false;
}
}
Answer: You need to decide on the Domain Objects that represent concepts in your current model.
A Route object should encapsulate the data to do with a route. If you look at Symfony's Route you'll see that it is simply an object with some properties, getters and setters, and that's about it (apart from a few select helper functions). This is also known as an Entity, the end object that you get back.
So, now you know that you want to get back some Route entities (@return Route[] in phpdoc), you should:
Create a Route object which merely represents a route
Take a look at the Factory Pattern and create a factory for these
The object API I would then expect to be using would be:
$router = new Router(new RouteFactory);
/** @var Route $route */
$route = $router->createRoute('/', 'PageController', 'showHome');
Somewhere in the Router, it would call RouteFactory::create($param1, $param2 ...) to actually make a Route object.
Don't expect the user to remember to call createRoute() followed by runRoute(). If one can't be run before the other, then it looks like you're trying to implement the builder creational pattern (which is for optional parameters). Your createRoute() method should return a Route object.
In the context of routing itself, you'll likely want something called a Resolver. That is, something you pass a Route to, and it instantiates the thing you are routing to. In an 'MVC' framework, you have a ControllerResolver - so you would create a Route, call ControllerResolver::resolve(Route $route) and that would contain the logic to decide what controller to create based on the contents of the Route object (so it'd do things like class_exists($controllerName) etc.
It looks like you're trying to design some generic router, but it is doing too many things. Focus on making the following seperately:
An object representing your route (Route)
An object responsible for creating a route from $_POST etc (RouteFactory::createFromGlobals($_POST) (or similar))
An object responsible for taking your Route and doing what you want to do with it (ControllerResolver (or similar))
Separate the actions out that you want to do, contextually, and you'll achieve better SoC.
Also, you shouldn't be accessing any superglobals ($_*) within this class, you should pass them in instead. This means that during testing you fake (or 'mock') the data you pass in with relative ease and test that your object still functions as you expect over different scenarios. | {
"domain": "codereview.stackexchange",
"id": 13327,
"tags": "php, object-oriented, php5"
} |
Single Linked-List Implementation in C++ | Question: I haven't done much C++ coding the last few years, so I've been reviewing in preparation for upcoming interviews. I wrote a minimally functional singly linked-list. I'm using my implementation for practicing interview problems and hopefully later to build more complex data structures.
Currently, the code is not templatized since I've been focusing on the basics more than wide usability. I've made the head and tail pointers protected because of how I code up practice problems - each one is a derived class where I implement the required functions and they often require modifying head and/or tail.
Style-wise, I've at least tried to use the Google C++ Style Guide. Please provide feedback on the overall design, improvements, style, and others.
Header:
#ifndef LINKED_LIST_H
#define LINKED_LIST_H
struct Node
{
int data;
Node *next = nullptr;
};
class LinkedList
{
public:
LinkedList();
LinkedList(const LinkedList &list);
~LinkedList();
LinkedList & operator=(const LinkedList &other);
LinkedList & operator+=(const LinkedList &rhs);
friend LinkedList operator+(const LinkedList &lhs, const LinkedList &rhs);
bool IsEmpty() const;
void Append(int value); // add to end
void Insert(int value); // add to beginning
void Print() const;
protected:
// Protected so that derived class can modify these directly
Node *head;
Node *tail; // allows O(1) append
private:
void swap(LinkedList &other);
};
#endif
Implementation:
#include <iostream>
#include "linked_list.h"
LinkedList::LinkedList(): head(nullptr), tail(nullptr)
{
}
LinkedList::LinkedList(const LinkedList &list): LinkedList() // call base constructor
{
Node *curr = list.head;
while(curr != nullptr)
{
this->Append(curr->data);
curr = curr->next;
}
}
LinkedList::~LinkedList()
{
Node *curr = head;
while (curr != nullptr)
{
Node *next = curr->next;
delete curr;
curr = next;
}
head = tail = nullptr;
}
void LinkedList::swap(LinkedList &other)
{
using std::swap;
swap(this->head, other.head);
swap(this->tail, other.tail);
}
// Operator '=': use copy-and-swap idiom
LinkedList & LinkedList::operator=(const LinkedList &other)
{
LinkedList temp(other); // Use copy constructor to make deep local copy
this->swap(temp); // Swap contents, destroy local list with old data
return *this;
}
LinkedList & LinkedList::operator+=(const LinkedList &rhs)
{
LinkedList *addition = new LinkedList(rhs); // make deep copy of passed in list
if (this->IsEmpty())
head = addition->head;
else
tail->next = addition->head;
tail = addition->tail;
// Destroy deep copy members but not what they point to
addition->head = addition->tail = nullptr;
delete addition;
return *this;
}
LinkedList operator+(const LinkedList &lhs, const LinkedList &rhs)
{
LinkedList result(lhs); // make deep local copy
result += rhs;
return result;
}
bool LinkedList::IsEmpty() const
{
return head == nullptr;
}
void LinkedList::Append(int data)
{
Node *new_node = new Node;
new_node->data = data;
if (IsEmpty())
{
head = tail = new_node;
return;
}
tail->next = new_node;
tail = new_node;
}
void LinkedList::Insert(int data)
{
Node *new_node = new Node;
new_node->data = data;
new_node->next = head;
head = new_node;
if (tail == nullptr)
tail = head;
}
void LinkedList::Print() const
{
Node *curr = head;
while(curr != nullptr)
{
std::cout << curr->data << "-->";
curr = curr->next;
}
std::cout << "nullptr";
std::cout << " [head = " << ((head == nullptr) ? (-1) : (head->data)) << "], [tail = " <<
((tail == nullptr) ? (-1) : (tail->data)) << "]" << std::endl;
}
Answer: If, instead of just writing new, you write new (std::nothrow), then if new fails, it will return a nullptr instead of throwing an exception. Then check if the result was nullptr. Some basic error checking like this isn't going to slow your program down if you're already using linked lists, which are inherently of the devil when it comes to performance.
Also, I don't think an employer would like to see code that might cause bugs. They'd rather see slower code than code that will cost them development time, I suspect. (I am definitely not an expert on that subject, though.)
I'm also not sure why you keep writing this-> in lots of places, such as this->Append(curr->data);. In case you didn't know, this-> is implicit, so you're only cluttering the code.
At the end of the destructor, you've written head = tail = nullptr;. I'm not sure what the point is, since those member variables will be gone the very next moment, as the object is deleted. Just a thought. Perhaps you have some reason for it, I'm just pointing it out as I see it.
Your assignment operators do not check for self-assignment. At the start, check to see if this == &rhs. Even if it still "works", you're making a copy of "yourself", and then swapping yourself with the copy, which is clearly useless.
It would also probably be a good idea to see if the list you're trying to add in operator+= is actually empty, in which case you shouldn't do anything.
Lastly, you should mark all functions that do not throw any exceptions as noexcept, similarly to how you mark some functions as const. That is, unless you're using some older version of C++, for some reason.
I would personally make struct Node a member structure of LinkedList, but that is a design choice, really, and not of much relevance.
Other than this, it looks pretty neat and good. Good work, I'd say. If I missed anything, perhaps others will care to point those things out. | {
"domain": "codereview.stackexchange",
"id": 19583,
"tags": "c++, object-oriented, linked-list, overloading"
} |
Is there a potential difference across an inductor | Question: Walter Lewin in his lectures says there is no potential difference across an inductor, but all books and online resources go against him. Who is correct?
Answer: You probably misunderstood what Lewin is saying. I watched one of his lectures where he deals with inductor in a circuit and he gets the potential differences correctly, including the inductor. Here is the video of the lecture:
https://www.youtube.com/watch?v=cZN0AyNR4Kw
What he is criticizing is a different thing - that teachers and textbook authors explain Kirchhoff's law in circuits with inductors incorrectly: they assume
$$
\oint \mathbf E\cdot d\mathbf s = 0
$$
is valid even for a circuit with an inductor (it is not) and then rewrite this using partial integrals across the elements of the circuit, assuming (again incorrectly) that the integral over the wire of the inductor is $+L\,dI/dt$ (in fact, the integral is much lower, and for an ideal coil it is zero, since there is no field inside a perfect conductor).
The Kirchhoff Voltage Law is really a practical rule to formulate circuit equations rather than a law of physics or a specific condition valid only in some cases. KVL is based on the fact that since potential is single-valued function of position, sum of drops of potential in a closed path is zero. KVL is not and does not derive from integral of total electric field being zero. Even if $\oint \mathbf E \cdot d\mathbf s \neq 0$, the sum of potential drops in a circuit is still 0. This is because electrostatic potential is a function of position, it does not depend on path, so one must get to the same value after a round-trip. In usual circuits, including low frequency circuits with inductors, difference of electrostatic potential is measurable and Kirchhoff's voltage law is valid. Drop of voltage across the inductor is $+L dI/dt$. | {
"domain": "physics.stackexchange",
"id": 50173,
"tags": "electric-circuits, electromagnetic-induction, inductance"
} |
Dependency on python module `sys` | Question:
I am working on a ROS noetic package which includes a node which imports the python sys module (in Ubuntu 20.04 with python3).
Is it necessary to declare the dependency on the sys module in the package.xml file?
If yes, which <exec_depend> is required? I had a look at the documentation on python module dependencies, but I have not been able to find the <exec_depend> for sys in the rosdistro repository.
If no, why is it not necessary to declare dependency on this import?
Originally posted by CoffeeKangaroo on ROS Answers with karma: 65 on 2021-12-01
Post score: 0
Answer:
Is it necessary to declare the dependency on the sys module in the package.xml file?
No.
sys is a part of the Python standard library, which means it comes installed by-default whenever you install a version of the Python interpreter. Apart from that, you cannot "install" sys yourself as a stand-alone component or library for Python.
What you should exec_depend on would be rospy. This already depends on the correct version of Python.
Technically, your package should not depend on one of its dependencies to bring in one of its other direct dependencies (ie: if A depends on B and C, and B depends on C, A should not just depend on B, but also on C itself). But as rospy will almost certainly never drop its own dependency on Python, it would be safe to just depend on rospy.
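As a concrete sketch (the package name, description, maintainer, and license below are hypothetical), a minimal format-2 package.xml for a rospy node that imports sys would declare rospy but nothing for sys:

```xml
<?xml version="1.0"?>
<package format="2">
  <name>my_node_pkg</name>
  <version>0.0.1</version>
  <description>Example node that imports the Python sys module</description>
  <maintainer email="dev@example.com">Dev</maintainer>
  <license>BSD</license>
  <buildtool_depend>catkin</buildtool_depend>
  <!-- rospy already depends on the correct Python interpreter,
       which ships sys as part of its standard library -->
  <exec_depend>rospy</exec_depend>
</package>
```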
Originally posted by gvdhoorn with karma: 86574 on 2021-12-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2021-12-01:
Compare this to regular Python packaging: would you add a dependency on sys in your pyproject.toml and/or setup.py/setup.cfg? If the answer is "no", then you wouldn't need to add it to your ROS package.xml.
Comment by CoffeeKangaroo on 2021-12-02:
@gvdhoorn: Thanks for the detailed clarification.
So, can "would you add a dependency on xyz in your pyproject.toml?" be taken as a rule of thumb for dependencies on python packages which have to be defined in package.xml?
Comment by gvdhoorn on 2021-12-02:
Any time you use something which is not part of the standard library it will have to be installed separately.
Anything that needs to be installed / present for your node to function should be expressed as a dependency. | {
"domain": "robotics.stackexchange",
"id": 37200,
"tags": "ros, python, python3"
} |
Noise spectrum of two systems and interacting Hamiltonian | Question: I've been discovering recently the concept of noise spectrum, defined as:
$$S_{xx}[\omega] = \int dt \langle x(t)x(0)\rangle \text{e}^{-i\omega t}$$
Roughly, the Fourier transform of the two-point function.
Apparently it represents the probability that the system has to absorb (or emit) energy for positive (negative) $\omega$. I am not familiar with this object, but let's say that I look at a composite system, for example an oscillator in a bath. The Hamiltonian picture says that the exchange of energy between the two is caused by the existence of an interaction Hamiltonian:
$$H_{int} = g\,\hat{F}\cdot\hat{x}$$
quantum in this example. From what I said about the noise spectrum, I would say that the exchanges of energy would be driven by an overlap of the two noise spectrum functions $S_{xx}$ and $S_{FF}$. However, I don't yet see how to reconcile these two points.
Answer: The quantity you are describing ($S(\omega)$) is called the power spectral density. I can't say if your interpretation of the power spectral density in this case is mistaken or not, because I haven't encountered it myself.
But in context of a stationary physical process, the power spectral density describes how the total power in the system is distributed over various frequencies. Here power is taken to mean the square of the signal, i.e, if the signal is $f(t)$, then the total power is given as $$ P = \frac{1}{T} \int_{0}^{T} |f(t)|^2 dt$$
The function that you have described is actually the Fourier transform of the autocorrelation function, a result given by the Wiener-Khinchin theorem.
If your $x(t)$ is a stochastic variable, then $S(\omega)$ is the spectral power distribution for that variable/process. | {
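As a small numerical illustration of the Wiener-Khinchin relation for a sampled signal: the FFT of the circular autocorrelation reproduces the periodogram $|X(\omega)|^2/N$ computed directly from the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # a sample "signal" x(t)
N = len(x)

# Periodogram: power spectral density estimated directly from the signal
psd_direct = np.abs(np.fft.fft(x)) ** 2 / N

# Wiener-Khinchin: FFT of the circular autocorrelation <x(t) x(t+tau)>
acf = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N
psd_wk = np.fft.fft(acf).real

assert np.allclose(psd_direct, psd_wk)   # the two estimates agree
```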
"domain": "physics.stackexchange",
"id": 5803,
"tags": "quantum-mechanics, harmonic-oscillator, hamiltonian-formalism"
} |
Lower bound on the covering radius of a code | Question: Let $C$ be a $[n,k]$ linear code over $\mathbb{F}_q$.
Suppose that $\rho$ is the covering radius .
I want to show that $\rho \geq \frac{n-k}{1+ \log_q{(n)}}$.
Could you give me a hint how we could show this?
Answer: The idea is to use a sphere packing argument, only for covering. Let $V_q(\rho)$ be the volume of a Hamming ball of radius $\rho$. The Hamming balls around all codewords cover the entire space, so $V_q(\rho) q^k \geq q^n$, or $V_q(\rho) \geq q^{n-k}$. To deduce the bound, use an approximation for $V_q(\rho)$. | {
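A quick numerical sanity check of that argument: compute the smallest $\rho$ satisfying $V_q(\rho)\,q^k \ge q^n$ and compare it with the claimed bound $\rho \ge \frac{n-k}{1+\log_q n}$.

```python
from math import comb, log


def ball_volume(q, n, rho):
    # V_q(rho): number of words within Hamming distance rho of a fixed word
    return sum(comb(n, i) * (q - 1) ** i for i in range(rho + 1))


def sphere_covering_bound(q, n, k):
    # Smallest rho with V_q(rho) * q^k >= q^n; the covering radius of
    # any [n, k] code over F_q must be at least this large.
    rho = 0
    while ball_volume(q, n, rho) * q**k < q**n:
        rho += 1
    return rho


# e.g. a binary [23, 12] code: the sphere-covering bound gives rho >= 3
# (attained by the perfect Golay code), while (n-k)/(1+log_q n) is ~1.99
print(sphere_covering_bound(2, 23, 12), (23 - 12) / (1 + log(23, 2)))
```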
"domain": "cs.stackexchange",
"id": 6362,
"tags": "coding-theory"
} |
Why don't (or why do) current carrying wires attract a stationary charge placed at a distance? | Question: I've learned that moving charges produce magnetic fields which in turn affect other charges in motion. After seeing explanations that point to special relativity, I am kind of confused. Can ALL magnetic fields be accounted as some kind of electric field from a particular reference frame?
And if there is relative motion between the electrons of the wire and the charge at rest (in the lab frame), then will it not experience a magnetic force in the electrons' reference frame? I am not sure if that is the actual case, so even if the stationary charge is attracted to the wire, can it be accounted for as an electrostatic force in the lab frame due to length contraction, and as a magnetic force from the electrons' point of view?
I am not even completely clear with even how to phrase the ambiguity I have in my mind. Detailed answers are very much appreciated :)
Answer:
Can ALL magnetic fields be accounted as some kind of electric field from a particular reference frame?
No. Relativity really tells us that electric and magnetic fields are on an equal footing. In some situations, you can find a frame where there's only an electric field. In others, you can find a frame where there's only a magnetic field. But most of the time, you can't do either.
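One standard way to make this precise is through the two Lorentz invariants of the electromagnetic field,

```latex
\mathbf{E}\cdot\mathbf{B}
\qquad\text{and}\qquad
E^2 - c^2 B^2 ,
```

both of which take the same value in every inertial frame. A frame with only an electric field exists when $\mathbf{E}\cdot\mathbf{B}=0$ and $E^2-c^2B^2>0$; a frame with only a magnetic field exists when $\mathbf{E}\cdot\mathbf{B}=0$ and $E^2-c^2B^2<0$. Whenever $\mathbf{E}\cdot\mathbf{B}\neq 0$, no boost can make either field vanish.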
And if there is relative motion between the electrons of the wire and the charge at rest(from the lab frame), then will it not experience a magnetic force from the electron's reference frame? I am not sure if that is the actual case, so even if the stationary charge is attracted to the wire, can it be accounted as an electrostatic force from the lab frame due to length contraction and as a magnetic force from the electrons POI?
I'm not sure if this is getting at your confusion, but recall some basic examples in relativity. For example, suppose that in your frame a spaceship passes by you. In your frame, this can happen really quickly because the spaceship is length contracted. In the spaceship's frame, it happens really quickly according to you because your time is dilated. So what's really going on? Is it really time dilation or is it really length contraction? Of course, the point is that the two frames are on an equal footing. Time dilation in one frame can be equivalently described as length contraction in another, and neither is inherently more correct.
Similarly, in some situations, what can be described as a magnetic force due to motion in a magnetic field in one frame, could be described as an electric force due to an electric field in another frame. In each individual frame, absolutely everything works as usual: Maxwell's equations are true, the Lorentz force expression holds, and so on. So, for example, in a frame where a charge is still, it experiences no magnetic force, even if it might in a different frame where it is moving. The description of what is going on changes between different frames, but neither frame is more "correct".
Saying that magnetic forces are "always really just because of electric forces due to a charge imbalance due to length contraction in a different frame" doesn't make sense. It doesn't work in general, and it's kind of like saying "time dilation doesn't really exist, only length contraction does". It's actually the exact opposite of the spirit of relativity. | {
"domain": "physics.stackexchange",
"id": 86154,
"tags": "electromagnetism, special-relativity, magnetic-fields, electric-fields, electric-current"
} |
Trace of a linear operator in Dirac notation | Question: I've been banging my head against a wall trying to find a proof for:
$$\operatorname{Tr}(X) = \sum_j \langle j|X|j\rangle.$$
This is supposedly fundamental knowledge. Can anyone help with the proof or direct me to a resource that has it?
Answer: The trace of an operator $X$ is defined as the sum of its diagonal components. That is, $$Tr(X)=\sum_j \langle j|X|j\rangle = \langle 1|X|1\rangle + \langle 2|X|2\rangle + \langle 3|X|3\rangle \ldots $$
Not sure what you mean by a proof since as stated above, this is strictly a definition and requires no proof. | {
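For a finite-dimensional space, the definition is easy to check numerically, with $|j\rangle$ the standard basis vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# <j|X|j>: sandwich X between each standard basis vector |j>
basis = np.eye(4)
diag_sum = sum(basis[j].conj() @ X @ basis[j] for j in range(4))

assert np.isclose(diag_sum, np.trace(X))   # equals the sum of diagonal entries
```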
"domain": "physics.stackexchange",
"id": 86343,
"tags": "quantum-mechanics, operators, definition, linear-algebra, trace"
} |
How to model "name similarity"? | Question: I am new to machine learning. But have basic understanding of the concepts.
Problem statement: We humans won't perceive much difference between "Tim Cook" and "Tim C0ok" (the letter "o" has been replaced with a zero). I am trying to model this.
The task is to come up with a model which can predict whether two names are visually similar. The system will be configured with a predefined set of names. When the system is online, it should predict whether a given name is similar to one of the configured names or not similar to any of them.
My current approach: I'm trying to come up with a two-stage model. The first stage is a binary classifier which predicts whether the name is known to the system or not. The second stage is a multi-class classifier, which tells which configured name the given name is similar to.
Features: Using string distance measures (Levenshtein Distance, Damerau-Levenshtein Distance, Jaro Distance, Jaro-Winkler Distance and Hamming Distance) as the feature vector. The feature vector is computed between a reference string (e.g. 'aaaaaaaaa') and the name.
Answer: There are existing dictionaries that list pairs of characters that are visually similar. See, e.g.,
https://security.stackexchange.com/q/36257/971
https://security.stackexchange.com/q/128286/971
https://stackoverflow.com/q/4846365/781723
Therefore, I suggest you try modified edit distance: compute the edit distance, but with a cost function that treats the distance between a pair of visually similar characters as zero or much smaller than the distance between a pair of different characters.
I recommend using the Damerau–Levenshtein edit distance. It includes transpositions among the changes it considers. There's some reason to believe this corresponds better to mistakes made by humans. So, maybe it'll correspond better to what humans consider similar.
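To make the modified-cost idea concrete, here is a sketch in Python (the `SIMILAR` pair list and the 0.1 weight are illustrative assumptions, not a real confusables dictionary, and for brevity it is plain Levenshtein without the transposition step of Damerau-Levenshtein): a standard dynamic-programming edit distance where substituting a visually similar pair is nearly free.

```python
# Illustrative pairs of visually similar characters; a real system would use
# a full confusables dictionary like the ones linked above.
SIMILAR = {("0", "o"), ("1", "l"), ("5", "s")}

def sub_cost(a: str, b: str) -> float:
    if a == b:
        return 0.0
    a, b = a.lower(), b.lower()
    if (a, b) in SIMILAR or (b, a) in SIMILAR:
        return 0.1  # visually similar: nearly free substitution
    return 1.0

def visual_edit_distance(s: str, t: str) -> float:
    # Standard Levenshtein dynamic program with the weighted substitution cost.
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,                               # deletion
                d[i][j - 1] + 1.0,                               # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
    return d[m][n]

print(visual_edit_distance("Tim Cook", "Tim C0ok"))  # small: only 0 vs o
print(visual_edit_distance("Tim Cook", "Tim Dean"))  # large: genuinely different
```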
Another plausible approach would be to render the two strings to images (using whatever font you think will be used to display them), then compare the images.
To compare how visually similar the images are, a basic starting point would be to use L1 or L2 distance (both have been shown to correspond well to human perception in some situations). This probably won't be enough because it will fail badly if the images aren't perfectly aligned. To address that, I suggest you apply a sort of edit distance that allows inserting or deleting entire columns of pixels (a full-height column), in addition to changing individual pixels.
It's not clear whether machine learning will be useful here. The obvious approach to try first would be to compute the edit distance and compare it to some threshold. That might be sufficient. | {
"domain": "cs.stackexchange",
"id": 6982,
"tags": "machine-learning, approximation, strings"
} |
Enum string parse and get custom attribute assign to array | Question: I wrote code which:
- parse string to my enum
- get description from enum
- assign to string[] array
private readonly string[] allowed;
public AuthorizeRoles(params object[] arrayParam)
{
allowed = (arrayParam.Select(myField => Enum.Parse(typeof (MyEnum), myField.ToString()))
.Select(temp => temp.GetType().GetField(temp.ToString()))
.Select(field => GetCustomAttribute(field, typeof (DescriptionAttribute)))).OfType<DescriptionAttribute>()
.Select(attribute => attribute.Description)
.ToArray();
}
Could you tell me if it is possible to improve this code for better efficiency and cleaner code?
Answer: Overall the code seems OK, so I'll highlight just a few minor issues (and propose a slightly bigger change to simplify the code).
1.
public AuthorizeRoles(params object[] arrayParam)
arrayParam is not a descriptive name: what is it? A list of roles? Make it clear by changing the name to, for example, roles.
2.
arrayParam.Select(myField => Enum.Parse(typeof(MyEnum), myField.ToString()))
You're converting each arrayParam item to a string, however you're not doing any check; if one of them is null then the code will fail with a NullReferenceException somewhere in a long line packed with a lot of code. Not so helpful. You may check in advance with:
if (arrayParam.Any(x => x == null))
throw new ArgumentException("...");
Otherwise just let Enum.Parse() fail and use Convert.ToString(myField), which won't throw for null input. Note that if the type is restricted to be string or MyEnum then you may change your function definition accordingly:
public AuthorizeRoles(params string[] arrayParam)
public AuthorizeRoles(params MyEnum[] arrayParam)
Note that a weird user may have this:
object[] pars = null; // It may be a function argument...
AuthorizeRoles(pars);
Then you should also check if arrayParam is null itself:
if (arrayParam == null)
throw new ArgumentNullException("arrayParam");
3.
.Select(field => GetCustomAttribute(field, typeof(DescriptionAttribute))))
.OfType<DescriptionAttribute>()
In this case you do not need OfType<DescriptionAttribute>() because the values are all of one type. It's a minor optimization, but I think it's better not to confuse future readers: use Cast<DescriptionAttribute>(), or cast directly with (DescriptionAttribute) before GetCustomAttribute().
Now, if I'm allowed to change your code a little bit, I think it may be simplified. Pick this part:
arrayParam.Select(myField => Enum.Parse(typeof(MyEnum), myField.ToString()))
.Select(temp => temp.GetType().GetField(temp.ToString()))
First you convert the input values to strings, then you parse them to get MyEnum values, and finally you convert them back to strings to use with GetField(). One step can be omitted:
arrayParam.Select(x => Convert.ToString(x))
.Select(x => typeof(MyEnum).GetField(x))
GetCustomAttribute() is a custom helper function, if you change it to this prototype:
private static T GetCustomAttribute<T>(FieldInfo field)
where T : Attribute
Then the last few lines of your code may be simplified (the cast is not required and the last .Select() may be merged into the previous one). Note that with the generic argument T we don't even need a Type parameter (you can obtain the type using typeof(T)).
Then if we rewrite whole function for an overview we have something like this:
public AuthorizeRoles(params object[] roles)
{
if (roles == null)
throw new ArgumentNullException("roles");
if (roles.Any(x => x == null))
throw new ArgumentException("...");
allowed = roles.Select(x => Convert.ToString(x))
.Select(x => typeof(MyEnum).GetField(x))
.Select(x => GetCustomAttribute<DescriptionAttribute>(x).Description)
.ToArray();
}
The two .Select() calls may be merged, but I'd not sacrifice readability just to keep the code shorter for a minor performance gain (after all, we're working with Reflection here...). You may also want to add some checks before the second .Select() or inside GetCustomAttribute<T>() to throw a meaningful error message if one of the given values is not a valid enum element. | {
"domain": "codereview.stackexchange",
"id": 20492,
"tags": "c#, performance, array, .net, enum"
} |
fatal error: ros/ros.h: No such file or directory | Question:
I have created a package using catkin_create_pkg on a PC where I previously installed Ubuntu 16.04 LTS and ROS Kinetic. But when I try to compile the cpp file included in the created package, I get the following error:
my@my-MS-7972:~/catkin_ws/src/sample_opencv/src$ g++ test_cv_bridge.cpp
test_cv_bridge.cpp:1:21: fatal error: ros/ros.h: no such file or directory
compilation terminated.
It looks like it does not see any of the header files included in my cpp file.
Edit: This is what I added to my package.xml
<buildtool_depend>catkin</buildtool_depend>
<build_depend>cv_bridge</build_depend>
<build_depend>image_transport</build_depend>
<build_depend>roscpp</build_depend>
<build_depend>sensor_msgs</build_depend>
<build_depend>std_msgs</build_depend>
<build_export_depend>cv_bridge</build_export_depend>
<build_export_depend>image_transport</build_export_depend>
<build_export_depend>roscpp</build_export_depend>
<build_export_depend>sensor_msgs</build_export_depend>
<build_export_depend>std_msgs</build_export_depend>
<exec_depend>cv_bridge</exec_depend>
<exec_depend>image_transport</exec_depend>
<exec_depend>roscpp</exec_depend>
<exec_depend>sensor_msgs</exec_depend>
<exec_depend>std_msgs</exec_depend>
here is the Cmakelist.txt:
cmake_minimum_required(VERSION 2.8.3)
project(sample_opencv)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
cv_bridge
image_transport
roscpp
sensor_msgs
std_msgs
)
find_package(OpenCV REQUIRED)
## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)
## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()
################################################
## Declare ROS messages, services and actions ##
################################################
## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
## your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
## * add a build_depend tag for "message_generation"
## * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
## * If MSG_DEP_SET isn't empty the following dependency has been pulled in
## but can be declared for certainty nonetheless:
## * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
## * add "message_generation" and every package in MSG_DEP_SET to
## find_package(catkin REQUIRED COMPONENTS ...)
## * add "message_runtime" and every package in MSG_DEP_SET to
## catkin_package(CATKIN_DEPENDS ...)
## * uncomment the add_*_files sections below as needed
## and list every .msg/.srv/.action file to be processed
## * uncomment the generate_messages entry below
## * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)
## Generate messages in the 'msg' folder
# add_message_files(
# FILES
# Message1.msg
# Message2.msg
# )
## Generate services in the 'srv' folder
# add_service_files(
# FILES
# Service1.srv
# Service2.srv
# )
## Generate actions in the 'action' folder
# add_action_files(
# FILES
# Action1.action
# Action2.action
# )
## Generate added messages and services with any dependencies listed here
# generate_messages(
# DEPENDENCIES
# sensor_msgs# std_msgs
# )
################################################
## Declare ROS dynamic reconfigure parameters ##
################################################
## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
## * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
## * add "dynamic_reconfigure" to
## find_package(catkin REQUIRED COMPONENTS ...)
## * uncomment the "generate_dynamic_reconfigure_options" section below
## and list every .cfg file to be processed
## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
# cfg/DynReconf1.cfg
# cfg/DynReconf2.cfg
# )
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
# INCLUDE_DIRS include
# LIBRARIES sample_opencv
# CATKIN_DEPENDS cv_bridge image_transport roscpp sensor_msgs std_msgs
# DEPENDS system_lib
)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
${OpenCV_INCLUDE_DIRS}
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
# add_library(${PROJECT_NAME}
# src/${PROJECT_NAME}/sample_opencv.cpp
# )
## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
add_executable(sample_opencv_node src/test_cv_bridge.cpp)
## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")
## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
## Specify libraries to link a library or executable target against
target_link_libraries(sample_opencv_node
${OpenCV_LIBRARIES}
${catkin_LIBRARIES}
)
#############
## Install ##
#############
# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# install(PROGRAMS
# scripts/my_python_script
# DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark executables and/or libraries for installation
# install(TARGETS ${PROJECT_NAME} ${PROJECT_NAME}_node
# ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
# RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )
## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
# DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
# FILES_MATCHING PATTERN "*.h"
# PATTERN ".svn" EXCLUDE
# )
## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
# # myfile1
# # myfile2
# DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )
#############
## Testing ##
#############
## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_sample_opencv.cpp)
# if(TARGET ${PROJECT_NAME}-test)
# target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()
## Add folders to be run by python nosetests
# catkin_add_nosetests(test)
and here is how Im including my header files
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
Originally posted by Tommy Hay on ROS Answers with karma: 1 on 2018-08-11
Post score: 0
Original comments
Comment by lalten on 2018-08-11:
Did you
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash
See http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment
Comment by Tommy Hay on 2018-08-11:
I do these step ,still have same problem
Comment by gvdhoorn on 2018-08-12:
@PeteBlackerThe3rd is correct.
Also: this post is a duplicate of #q295301.
Answer:
If you look at the Building Packages tutorial here, you need to use catkin_make to build a package (or a node within it). You're trying to use g++ with only the standard C++ headers and libraries, so the compiler cannot find the ROS headers.
Since ROS is a very complex multi-language system, the catkin build tool is used to automate the process.
Hope this helps.
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-08-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31513,
"tags": "ros-kinetic, cv-bridge"
} |
Dropping a ball in a fast moving train | Question: If I drop a ball in a train moving at a constant speed, will it land on the spot I aimed it at or a little away as the train has moved while it was in air? If it lands away, will the observer not know that he is in a moving frame of reference? If it lands on the intended spot, how did the ball know it is inside a train?
Answer: Assuming the train doesn't accelerate during the ball's fall, it will land in the spot you aimed at. Think about it this way. Before you drop the ball, it is moving along with the train (i.e. it has some horizontal speed). When you drop it, the ball still has this speed, and since an object in motion tends to stay in motion unless you exert a force on it (Newton's first law), the ball will continue to move at this horizontal speed as it falls. Thus it will land exactly where it would if the train were at rest and so the observer won't be able to figure out he is in a moving reference frame.
This actually speaks to something much deeper: namely that physics behaves the same in any inertial reference frame (a reference frame moving with constant velocity). There is thus no concept of "absolute motion." The train is moving with respect to the earth, but that is no different from the train being at rest and the earth moving underneath it.
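A quick numerical illustration of the first paragraph (a sketch; the speed and height are made-up numbers): simulate the ball and the aimed-at spot in the ground frame and check that they land together.

```python
g = 9.8         # m/s^2
v_train = 30.0  # m/s, constant train speed (made-up number)
h = 1.2         # m, height the ball is dropped from

# Ground-frame simulation: the ball keeps the train's horizontal speed
# (Newton's first law: no horizontal force acts on it after release),
# while the aimed-at spot on the floor rides along with the train.
dt = 1e-5
y, vy = h, 0.0
x_ball, x_spot = 0.0, 0.0
while y > 0:
    x_ball += v_train * dt   # ball: unchanged horizontal velocity
    x_spot += v_train * dt   # spot: moves with the train
    vy -= g * dt
    y += vy * dt

print(abs(x_ball - x_spot) < 1e-9)  # True: the ball lands on the aimed spot
```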
Of course this all assumes that the train doesn't accelerate during the ball's fall. If the train accelerates, then the ball will still move as it would have if the train had not accelerated. Thus in this case the ball may land elsewhere than where you aimed it, and an observer can figure out in this case that the train is accelerating. | {
"domain": "physics.stackexchange",
"id": 63229,
"tags": "newtonian-mechanics, newtonian-gravity, reference-frames, projectile, relative-motion"
} |
If $f$ reduces $L_1$ to $L$ and also $L_2$ to $L$, is $L_1=L_2$? | Question: If the same $f$ reduces $L_1$ to $L$ and also $L_2$ to $L$, does it imply that $L_1=L_2$?
My intuition says no, but I couldn't find a counterexample.
Answer: Using the definition of reduction,
$x\in L_1\iff f(x)\in L \iff x\in L_2$
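A toy illustration in Python (my own example, not from the original answer): for a fixed $f$ and $L$, any language that $f$ reduces to $L$ is exactly the preimage $\{x : f(x) \in L\}$, so it is uniquely determined.

```python
# Toy universe: integers 0..19. L = multiples of 3, f(x) = x + 1.
UNIVERSE = range(20)
L = {x for x in UNIVERSE if x % 3 == 0}

def f(x):
    return x + 1

# If f reduces L1 to L and also L2 to L, then by definition
#   x in L1  <=>  f(x) in L  <=>  x in L2,
# so both are forced to equal the preimage of L under f.
L1 = {x for x in UNIVERSE if f(x) in L}
L2 = {x for x in UNIVERSE if f(x) in L}

print(L1 == L2)  # True
```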
And thus $L_1=L_2$. | {
"domain": "cs.stackexchange",
"id": 18006,
"tags": "computability, reductions"
} |
Why does laboratory glassware shatter into big pieces? | Question: What is laboratory glassware made of? This question came to mind because once I accidentally dropped a test tube and it shattered into big pieces. I observed that it didn't break into pieces on the first impact with the floor; rather, it bounced and broke into pieces on the second impact. Ordinary glass would shatter into small pieces on the very first impact with the floor. Does this type of glass possess elasticity?
Answer: Kitchen glassware is made to contain large internal stresses, so it shatters into small, non-dangerous pieces. That is hard to achieve with Duran etc., because that stuff has a small thermal expansion coefficient, just so that it does not break under large temperature changes.
I expect that the kind of kitchen glassware that is supposed to withstand large temperature changes also breaks into rather large pieces.
And yes, glass is highly elastic. Elasticity is effectively the only mechanical property it does possess, before breaking. ;-) | {
"domain": "chemistry.stackexchange",
"id": 4313,
"tags": "equipment"
} |
Cat Scoring app | Question: Can you please review my code and suggest why it may or may not be professional-looking code? I am just starting out as a programmer; I am able to write code, but I don't really know where I am right and where I am wrong.
Also, I have heard that using the MVC approach is a smart way to structure code. Please suggest what changes this code would need in order to comply with the MVC approach.
About the Code: This application is part of a cat scoring app. You score the cats by clicking on them. This way the most liked cat can be determined. This code does not currently store the score. So, as soon as the refresh button is clicked, the score is lost.
This app is also available on CodePen.
const imageBasePath = "https://raw.githubusercontent.com/smartcoder2/CatClickerApp/master/images/";
const imageNameArrary = [
"tom.jpg",
"jack.jpeg",
"zoe.jpeg",
"simba.jpg",
"george.jpeg"
];
let catScore = [0, 0, 0, 0, 0]; // this keeps the score of each cat. index of array determines the cat
let htmlUpdate;
let ddl;
const imageVar = document.getElementById("cat-image");
const textVar = document.getElementById("show-click-value");
imageVar.addEventListener("click", incrementClickVar);
function incrementClickVar() {
ddl = document.getElementById("select-cat");
catScore[ddl.selectedIndex]++;
htmlUpdate =
catScore[ddl.selectedIndex] == 0 ?
"zero" :
catScore[ddl.selectedIndex];
textVar.innerHTML = htmlUpdate;
//
}
function validate() {
ddl = document.getElementById("select-cat");
htmlUpdate =
catScore[ddl.selectedIndex] == 0 ?
"zero" :
catScore[ddl.selectedIndex];
textVar.innerHTML = htmlUpdate;
let selectedValue = ddl.options[ddl.selectedIndex].value;
imageVar.src = imageBasePath + imageNameArrary[ddl.selectedIndex];
}
.outer-box {
height: 100vh;
display: grid;
grid-template-columns: 1fr 1fr 1fr;
grid-template-rows: 1fr 1fr 1fr;
grid-gap: 2vw;
align-items: center;
}
.outer-box>div,
img {
max-width: 25vw;
min-height: 10vh;
max-height: 44vh;
justify-self: center;
}
.outer-box>div {
text-align: center;
}
#show-click-value {
font-size: 6vh;
}
<html>
<head>
<link rel="stylesheet" href="styles\style.css" />
</head>
<body>
<div class="outer-box">
<div class="item1"></div>
<div class="item2">
<label class="cat-label" for="select-cat">Select a cat</label>
<select id="select-cat" name="cats" onchange="validate()">
<option value="Tom">Tom</option>
<option value="Jack">Jack</option>
<option value="Zoe">Zoe</option>
<option value="Simba">Simba</option>
<option value="George">George</option>
</select>
<br />
</div>
<div class="item3"></div>
<div class="item4"></div>
<img id="cat-image" src="https://raw.githubusercontent.com/smartcoder2/CatClickerApp/master/images/tom.jpg" alt="image not loaded" />
<div class="item6"></div>
<div class="item7"></div>
<div id="show-click-value">zero</div>
<div class="item9"></div>
</div>
<!-- srr-to-do-later: position the image and other elements properly -->
</body>
</html>
Answer: Group cat data together in object, have a single source.
You have two arrays and an element, all joined together by a common index. I would move things around so all the details are coming from a single source. This means if you want to add a cat, you add it in one place. It also allows for an easy upgrade if you wanted to start storing this data somewhere that is not hard-coded.
I have created a new cats array. This contains all the cats and their data:
const cats = [{
name: "Tom",
image: "tom.jpg",
score: 0
}];
For now we will stick with hard-coded data, however this could just as easily be a response from an API. For example:
fetch('https://example.com/cats.json')
.then(function (response) {
return response.json();
})
.then(function (cats) {
//... do something with the cats
});
The cats array stores all the details we need to populate our dropdown:
const catSelect = document.getElementById("cat-select");
cats.forEach(function(cat, index) {
const option = document.createElement("option");
option.value = index;
option.text = cat.name;
catSelect.add(option);
});
Handling user interaction
You have the right idea with storing the element lookup in a variable; however, you are redefining it on every user interaction (ddl = document.getElementById("select-cat");). Instead, let's define some constants. I have also added a new variable to keep track of the currently selected cat.
const catImage = document.getElementById("cat-image");
const catScore = document.getElementById("cat-score");
const catSelect = document.getElementById("cat-select");
let selectedCat;
Now we can use these in our functions. I have modified your two functions and also added one new one:
/*
This simple function is just to update the display of the score.
I thought it was nicer to put it in a function to avoid duplicating the code.
If you wanted to change it in the future, you would only have to change it in one place.
*/
function displayCatScore(score) {
catScore.innerText = score == 0 ? "zero" : score;
}
/*
This simply updates the score of the currently selected cat and then displays it.
*/
function incrementSelectedCatScore() {
displayCatScore(++selectedCat.score);
}
/*
This function updates `selectedCat` and displays that cat's score and image.
`cat` is the index of the cat that should be displayed.
*/
function displayCat(cat) {
selectedCat = cats[cat];
displayCatScore(selectedCat.score);
catImage.src = imageBasePath + selectedCat.image;
catImage.alt = selectedCat.name;
}
Now for event listeners - I have moved them from your HTML to the Javascript. It's easier when everything is kept together and in my opinion, it is just much cleaner.
catImage.addEventListener("click", incrementSelectedCatScore);
catSelect.addEventListener("change", function() {
displayCat(this.value); // this.value will be the index of the selected cat
});
displayCat(0); // Display the first cat
Full working example:
function displayCatScore(score) {
catScore.innerText = score == 0 ? "zero" : score;
}
function incrementSelectedCatScore() {
displayCatScore(++selectedCat.score);
}
function displayCat(cat) {
selectedCat = cats[cat];
displayCatScore(selectedCat.score);
catImage.src = imageBasePath + selectedCat.image;
catImage.alt = selectedCat.name;
}
const imageBasePath = "https://raw.githubusercontent.com/smartcoder2/CatClickerApp/master/images/";
const cats = [{
name: "Tom",
image: "tom.jpg",
score: 0
},
{
name: "Jack",
image: "jack.jpeg",
score: 0
},
{
name: "Zoe",
image: "zoe.jpeg",
score: 0
},
{
name: "Simba",
image: "simba.jpg",
score: 0
},
{
name: "George",
image: "george.jpeg",
score: 0
}
];
const catImage = document.getElementById("cat-image");
const catScore = document.getElementById("cat-score");
const catSelect = document.getElementById("cat-select");
let selectedCat;
cats.forEach(function(cat, index) {
const option = document.createElement("option");
option.value = index;
option.text = cat.name;
catSelect.add(option);
});
catImage.addEventListener("click", incrementSelectedCatScore);
catSelect.addEventListener("change", function() {
displayCat(this.value);
});
displayCat(0);
.outer-box {
height: 100vh;
display: grid;
grid-template-columns: 1fr 1fr 1fr;
grid-template-rows: 1fr 1fr 1fr;
grid-gap: 2vw;
align-items: center;
}
.outer-box>div,
img {
max-width: 25vw;
min-height: 10vh;
max-height: 44vh;
justify-self: center;
}
.outer-box>div {
text-align: center;
}
#show-click-value {
font-size: 6vh;
}
<html>
<head>
<link rel="stylesheet" href="styles\style.css" />
</head>
<body>
<div class="outer-box">
<div class="item1"></div>
<div class="item2">
<label class="cat-label" for="cat-select">Select a cat</label>
<select id="cat-select" name="cats"></select>
<br />
</div>
<div class="item3"></div>
<div class="item4"></div>
<img id="cat-image">
<div class="item6"></div>
<div class="item7"></div>
<div id="cat-score"></div>
<div class="item9"></div>
</div>
<!-- srr-to-do-later: position the image and other elements properly -->
</body>
</html>
If you have any questions or would like any further clarification, please let me know and I will be happy to help :) | {
"domain": "codereview.stackexchange",
"id": 35009,
"tags": "javascript, beginner, html, css"
} |
How is the state-value function expressed as a product of sums? | Question: The state-value function for a given policy $\pi$ is given by
$$\begin{align}
V^{\pi}(s) &=E_{\pi}\left\{r_{t+1}+\gamma r_{t+2}+\gamma^{2} r_{t+3}+\cdots \mid s_{t}=s\right\} \\
&=E_{\pi}\left\{r_{t+1}+\gamma V^{\pi}\left(s_{t+1}\right) \mid s_{t}=s\right\} \tag{4.3}\label{4.3} \\
&=\sum_{a} \pi(s, a) \sum_{s^{\prime}} \mathcal{P}_{s s^{\prime}}^{a}\left[\mathcal{R}_{s s^{\prime}}^{a}+\gamma V^{\pi}\left(s^{\prime}\right)\right] \tag{4.4}\label{4.4}
\end{align}$$
It is given in section 4.1 of the first edition of Sutton and Barto's book (equations 4.3 and 4.4).
I don't understand how equation \ref{4.4} derives from equation \ref{4.3}. How can I get the product in equation \ref{4.4} from the expectation in equation \ref{4.3}?
Answer: A quick review of resolving expectations: If you know that a discrete random variable $X$, drawn from set $\mathcal{X}$ has probability distribution $p(x) = \mathbf{Pr}\{X=x \}$, then
$$\mathbb{E}[X] = \sum_{x \in \mathcal{X}} xp(x)$$
This equation is the core of what is going on when resolving the expectation in your quoted equation.
Resolving the expectation to show how the value function of a state relates to the possible next rewards and future states means summing up all possible rewards and next states. There are two components to the distribution over the single step involved - the policy $\pi(a|s)$, and the state progression $P^a_{ss'}$. As they are both independent probabilities, they need to be multiplied to establish the combined probability of any specific trajectory.
So, looking at only a single trajectory starting from state $s$, the trajectory of selecting action $a$ and ending up in state $s'$ has a probability of:
$$p_{\pi}(a,s'|s) = \pi(a|s) P^a_{ss'}$$
Iterating over all possible trajectories to get the expected value of some function of the end of the trajectory $f(s,a,s')$ looks like this:
$$\mathbb{E}[f(S_t, A_t, S_{t+1})|S_t=s] = \sum_a \pi(a|s)\sum_{s'}P_{ss'}^a f(s,a,s')$$
It is important to note that the sums are nested here, not separately resolved then multiplied. This is standard notation, but you could add some brackets to show it:
$$\mathbb{E}[f(S_t, A_t, S_{t+1})|S_t=s] = \sum_a \pi(a|s)\left(\sum_{s'}\left(P_{ss'}^a f(s,a,s')\right)\right)$$
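As a numerical sanity check of the nested-sum form (a sketch on a made-up two-action, two-state step; all numbers are illustrative), the nested sums agree with a Monte Carlo estimate of the same expectation:

```python
import random

# Toy one-step MDP with actions {0, 1} and next states {0, 1}, for a fixed s.
pi = {0: 0.3, 1: 0.7}                     # pi(a|s)
P = {0: {0: 0.9, 1: 0.1},                 # P[a][s2] = P^a_{s s2}
     1: {0: 0.2, 1: 0.8}}
f = {(0, 0): 1.0, (0, 1): -1.0,           # f(s, a, s2), keyed by (a, s2)
     (1, 0): 0.5, (1, 1): 2.0}

# Nested-sum form of the expectation: sum_a pi(a|s) sum_s2 P[a][s2] f(s,a,s2)
expected = sum(pi[a] * sum(P[a][s2] * f[(a, s2)] for s2 in (0, 1))
               for a in (0, 1))

# Monte Carlo estimate of the same expectation by sampling trajectories
random.seed(0)
total, n = 0.0, 200_000
for _ in range(n):
    a = 0 if random.random() < pi[0] else 1
    s2 = 0 if random.random() < P[a][0] else 1
    total += f[(a, s2)]

print(abs(expected - total / n) < 0.02)  # True: they agree to sampling error
```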
In the equation from the book, $f(s,a,s') = R_{ss'}^a + \gamma v_{\pi}(s')$ | {
"domain": "ai.stackexchange",
"id": 2818,
"tags": "reinforcement-learning, math, value-functions, books, expectation"
} |
How to save a snapshot from my Kinect to disk? | Question:
I'm trying to debug a problem with my Kinect not working right.
In the tutorial, there is a pretty standard way to view the camera stream:
rosrun image_view image_view image:=/camera/rgb/image_color
Is there a similarly simple way to save a snapshot to disk?
Originally posted by Murph on ROS Answers with karma: 1033 on 2011-03-10
Post score: 3
Answer:
Left clicking on the image in image_view will save a screenshot to the directory from which you started image_view in c-turtle and below. In Diamondback, you have to right click for this behavior; this prevents a lot of accidental frame grabs. :)
For example, if you were in /home/murph/code/topSecret in your terminal and you ran rosrun image_view image_view image:=/camera/rgb/image_color, the image would be saved in that directory.
Images are titled frame0000.jpg, frame0001.jpg, and so on.
To read more, visit the image_view wiki page.
Originally posted by Chad Rockey with karma: 4541 on 2011-03-10
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 5030,
"tags": "kinect, image-view, openni-camera"
} |
How does Inverse QFT work in Quantum Phase Estimation? | Question: I'm trying to implement Quantum Phase Estimation from the Qiskit textbook.
Below is the implementation circuit taken from the above-mentioned site:
The output at position 2 will be as follows:
$$|\psi _2⟩ = \frac{1}{2^{\frac{n}{2}}} \sum_{k=0}^{2^{n}-1} e^{2\pi i \theta k} |k⟩ ⊗ |\psi⟩ $$
and after applying inverse QFT, the state becomes:
$$ | \psi _3⟩ = \frac{1}{2^{n}} \sum_{x=0}^{2^{n}-1} \sum_{k=0}^{2^{n}-1} e^{- \frac{2 \pi i k}{2^{n}}(x-2^n \theta)} |x⟩ ⊗| \psi⟩ $$
However, the next step claims that the above expression peaks near $ x = 2^n \theta $ which is my point of doubt, why is this the case? Wouldn't the maximum amplitude be when $ x = 0 $ based on simple calculus?
Answer: The expression you obtain after applying the QFT contains sums of powers of the $2^n$-th root of unity, $e^{2\pi i/2^n}$, which sum up to 0 if you sum over the full range of $2^n$ terms:
$$
\sum_{k=0}^{2^n - 1} e^{\frac{2\pi i}{2^n} k} = 0
$$
See here for explanations why this is the case.
Now the inner sum in the amplitudes will also sum up to 0 if $(x - 2^n\theta)$ is a nonzero integer, because then $e^{\frac{2\pi i}{2^n}(x - 2^n\theta)}$ is again a root of unity and the sum-to-zero rule still holds.
$$
\sum_{x=0}^{2^n - 1} \sum_{k=0}^{2^n - 1} e^{\frac{2\pi i}{2^n} k (x - 2^n \theta)}
$$
But: if $(x - 2^n \theta) = 0$, each term in the inner sum is $e^0 = 1$, so for that value of $x$ the inner sum collapses to
$$
\sum_{k=0}^{2^n - 1} 1 = 2^n
$$
Therefore, after the $1/2^n$ normalization, the amplitudes are 0 if $x \neq 2^n \theta$ and 1 if $x = 2^n \theta$.
Note that the derivation is more complicated if $2^n\theta$ is not an integer.
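As a quick numerical sanity check (a small numpy sketch of the amplitude formula above, with $n=3$ and $\theta = 0.25$ chosen so that $2^n\theta = 2$ is an integer), the amplitudes indeed peak at $x = 2^n\theta$:

```python
import numpy as np

n = 3                      # counting qubits
N = 2 ** n
theta = 0.25               # chosen so that 2^n * theta = 2 is an integer

def amplitude(x):
    """Amplitude of |x> after the inverse QFT (the |psi> register factors out)."""
    k = np.arange(N)
    return np.sum(np.exp(-2j * np.pi * k * (x - N * theta) / N)) / N

amps = np.array([amplitude(x) for x in range(N)])
print(np.round(np.abs(amps), 6))   # 1 at x = 2, 0 elsewhere
```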
The QFT and its derivation are also very well explained in Nielsen and Chuang, chapter 5.1, page 217; I would recommend looking it up there if you have any questions! | {
"domain": "quantumcomputing.stackexchange",
"id": 1451,
"tags": "qiskit, quantum-fourier-transform, quantum-phase-estimation, phase-kickback"
} |
What is the speed of a neutral pion and how is it measured? | Question: (As pointed out by responders the speed of a neutral pion in a lab setting depends on the nature of the reaction which produced it. So the remaining question is "how is the speed determined?".)
Alvager et al 1964 reported evidence against Ritz's emitter theory in an experiment that generated neutral pions ($\pi_0$'s) with a velocity of $v \approx 0.99c$. How is the velocity of a neutral pion determined? Does the determination invoke special relativity?
Alvager et al (http://mysite.verizon.net/cephalobus_alienus/papers/Alvager_et_al_1964.pdf) directed a pulsing beam of high energy ($19.2\ \mathrm{GeV}$) protons at a beryllium crystal and measured arrival times of resulting emitted $\gamma$ ray photons (energy $\geq 6\ \mathrm{GeV}$) at downstream detectors. The photons were emitted from the decay of $\pi_0$'s produced from collisions of the protons with the beryllium nuclei. The $\pi_0$'s had speeds $v = \beta c$. They state that $1/(1-\beta^2) > 45$ and so $\beta \geq 0.989$. Later they use the value $\beta=0.99975$.
Time intervals between the peaks of successive photon bursts (interval ~105 ms) detected at two separate detectors were interpreted as indicating that, if the photons were travelling at some speed other than $c$, given by $v = c + k\beta c$, then the value of $k$ must be very small ($(-3 \pm 13) \times 10^{-5}$). (I think that this formula for altered photon velocity harks back to Fizeau experiments investigating the effect of moving water on the velocity of light. Applying simple galilean relativity in the Alvager et al experiments would predict a range of photon velocities $v$ between $c - \beta c$ and $c + \beta c$.)
In a contemporary paper, Velocity of Gamma Rays from a Moving Source (T. A. Filippas and J. G. Fox, Phys. Rev. 135, B1071, 1964), the authors used $\pi_0$'s with velocity $0.2c$ which decayed to produce $68\ \mathrm{MeV}$ $\gamma$ ray photons. They declare that "independently of relativity, and indeed of nuclear theory, there can be no reasonable doubt about the velocity of the ($\pi_0$) source(s)" based on the reported aberration angle and Doppler energy shift for $\pi_0$'s produced by the same reaction ($\pi^- + p \rightarrow \pi_0$).
I pose the question because it seems that Alvager et al were trying to confirm special relativity (SR) by assuming pion velocities which were themselves determined assuming SR.
Regarding the interpretation of the Alvager experiment the following note is interesting: (http://worldnpa.org/pipermail/memberschat_worldnpa.org/attachments/20090115/db6f6bc5/attachment.pdf)(since deleted).
The answer by AnnaV is very helpful. I made a follow-up question in which I was led to obtain a formula for $v$ myself: $v=\sqrt{2KE/m_0}$.
The community declared this solution correct but then deemed the question a pointless homework question and deleted/hid it. As it happens, the formula does not seem to be correct: when I try to apply the Filippas & Fox 1964 data ($\pi_0$ rest mass $m_0=135\ \mathrm{MeV}/c^2$, $KE=68\ \mathrm{MeV}$), it gives $v_{calc} = 1.004c$ instead of the authors' value $v=0.2c$. Applying the Alvager et al data ($KE= 6000\ \mathrm{MeV}$) gives $v_{calc}=9.428c$. So I am still in the dark on the special relativity approach. (The aberration method mentioned by Filippas & Fox 1964 is relatively simple for me to understand.)
Answer: If you go to this link you will see that the lifetime of the pi0 is orders of magnitude shorter than of the charged pions.
8.4 ± 0.6 × 10^−17 seconds, a time characteristic of electromagnetic reactions.
It decays to two photons, which can be measured in the laboratory.
If it is produced with some energy in the laboratory system, its speed can be estimated by measuring the four momenta of the photons and equating their sum to the four momentum of the pi0. Its speed can then be found for that individual measurement. There is no general "speed" of the pi0, as there is no general speed of any elementary particle; their four momenta depend on the interaction that produced them and are highly variable.
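A minimal sketch of this four-momentum reconstruction (with made-up photon measurements for a pion at $\beta = 0.99$; the rest-mass value and decay geometry are illustrative assumptions, not data from the papers):

```python
import math

m_pi0 = 134.98          # MeV, neutral pion rest mass (illustrative value)
beta_true = 0.99
gamma = 1.0 / math.sqrt(1.0 - beta_true**2)

# Photon four-momenta (E, px, py, pz) for a pi0 moving along z,
# decaying to two photons emitted transversely in its rest frame,
# then boosted to the lab frame.  Units: MeV.
E_gamma = gamma * m_pi0 / 2
pz_gamma = gamma * beta_true * m_pi0 / 2
photon1 = (E_gamma,  m_pi0 / 2, 0.0, pz_gamma)
photon2 = (E_gamma, -m_pi0 / 2, 0.0, pz_gamma)

# Sum the photon four-momenta and recover the pion's speed: beta = |p| / E
E  = photon1[0] + photon2[0]
px = photon1[1] + photon2[1]
py = photon1[2] + photon2[2]
pz = photon1[3] + photon2[3]
beta_reconstructed = math.sqrt(px**2 + py**2 + pz**2) / E
print(beta_reconstructed)   # ≈ 0.99
```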
For a speed that is a given fraction of the speed of light, any pion or other elementary particle must have an energy given by the relativistic formulae. Have a look here, where they calculate the energy necessary for a velocity of 1% of the speed of light for various particles. | {
"domain": "physics.stackexchange",
"id": 9438,
"tags": "particle-physics, experimental-physics, nuclear-physics, pions"
} |
Calculate the minimum absolute difference between the maximum and minimum number from the triplet of 3 arrays | Question: Question is:
Calculate the minimum absolute difference between the maximum and minimum number from the triplet a, b, c such that a, b, c belong to arrays A, B, C respectively.
There is another question: Find i, j, k such that :
max(abs(A[i] - B[j]), abs(B[j] - C[k]), abs(C[k] - A[i])) is minimized.
Return the minimum max(abs(A[i] - B[j]), abs(B[j] - C[k]), abs(C[k] - A[i])). A, B & C are sorted arrays.
Reading the solution approach here and here, I realize that the solution approach for both these questions is same, to:
minimize abs( max(a,b,c) - min(a,b,c) )
I'm not sure why this follows and how we should go about relating these two questions and their algorithms. Why are these two questions identical?
Answer: Can you see the following equality?
abs( max(a,b,c) - min(a,b,c) ) = max(abs(a - b), abs(b - c), abs(c - a))
Here is a hint to prove the equality.
Without loss of generality, we can assume a >= b >= c.
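Separate from the proof of the equality, once it is accepted both questions admit the same classic three-pointer scan over the sorted arrays; a sketch (the function name is mine):

```python
def min_max_diff(A, B, C):
    """Minimize max(a, b, c) - min(a, b, c) over a in A, b in B, c in C.
    A, B, C must be sorted in ascending order."""
    i = j = k = 0
    best = float('inf')
    while i < len(A) and j < len(B) and k < len(C):
        lo = min(A[i], B[j], C[k])
        hi = max(A[i], B[j], C[k])
        best = min(best, hi - lo)
        # Advancing the pointer at the current minimum is the only move
        # that can possibly shrink the spread.
        if A[i] == lo:
            i += 1
        elif B[j] == lo:
            j += 1
        else:
            k += 1
    return best

print(min_max_diff([1, 4, 10], [2, 15, 20], [10, 12]))  # 5, from the triplet (10, 15, 10)
```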
By the way, the first abs is not necessary since max(a, b, c) is always no less than min (a, b, c). | {
"domain": "cs.stackexchange",
"id": 11985,
"tags": "algorithms, arrays"
} |
Simple multi-dimensional Array class in C++11 | Question: The new version of the code can be reviewed in Simple multi-dimensional Array class in C++11 - follow-up.
The following code implements a simple multi-dimensional array class (hyper_array::array).
I modeled most (if not all) the feature on the orca_array.hpp header by Pramod Gupta after watching his talk at this year's cppcon.
I think that orca_array is fine. However, a more generic implementation might prove interesting as well (e.g. reducing code repetition, gaining more efficiency through compile-time computations/verification and allowing more dimensions (even though the relevance of the latter feature is debatable)).
The element type and the number of dimensions is given at compile-time. The length along each dimension is specified at run-time.
As a start, there are 2 configuration options:
HYPER_ARRAY_CONFIG_Check_Bounds: controls run-time checking of index bounds,
HYPER_ARRAY_CONFIG_Overload_Stream_Operator: enables/disables the overloading of operator<<(std::ostream&, const hyper_array::array&).
The implementation requires some C++11 features and uses very basic template (meta)programming and constexpr computation when possible.
I have mainly the following goals:
Self-contained (no dependency to external libraries), single-header implementation
Minimal code repetition (if any) and clarity/"readability" of implementation
Conforming to "good" programming practices in modern C++ while developing a solution that might prove interesting to use for others
Clean API "that makes sense" to the user
As much compile-time computation/evaluation/input validation as possible (template-metaprogramming, constexpr?)
Maximum efficiency while remaining written in standard C++11 (I made an exception once for std::make_unique(), but I'll probably remove it),
Allow inclusion in STL containers while still being efficient
My current concerns are:
I'm sure that many computations could be done and for loops could be unwound at compile time, but I haven't wrapped my head around template metaprogramming yet to come up with an appropriate solution,
Performance,
The API: it's still very basic, but I'm open to suggestions for making it more relevant (orca_array was just my starting point).
Run The Code Online
hyper_array.hpp
#pragma once
// make sure that -std=c++11 or -std=c++14 ... is enabled in case of clang and gcc
#if (__cplusplus < 201103L) // C++11 ?
#error "hyper_array requires a C++11-capable compiler"
#endif
// <editor-fold desc="Configuration">
#ifndef HYPER_ARRAY_CONFIG_Check_Bounds
/// Enables/disables run-time validation of indices in methods like [hyper_array::array::at()](@ref hyper_array::array::at())
/// This setting can be overridden by defining `HYPER_ARRAY_CONFIG_Check_Bounds` before the inclusion
/// of this header or in the compiler arguments (e.g. `-DHYPER_ARRAY_CONFIG_Check_Bounds=0` in gcc and clang)
#define HYPER_ARRAY_CONFIG_Check_Bounds 1
#endif
#ifndef HYPER_ARRAY_CONFIG_Overload_Stream_Operator
/// Enables/disables `operator<<()` overloading for hyper_array::array
#define HYPER_ARRAY_CONFIG_Overload_Stream_Operator 1
#endif
// </editor-fold>
// <editor-fold desc="Includes">
// std
//#include <algorithm> // used during dev, replaced by compile-time equivalents in hyper_array::internal
#include <array> // std::array for hyper_array::array::dimensionLengths and indexCoeffs
#include <memory> // unique_ptr for hyper_array::array::_dataOwner
#if HYPER_ARRAY_CONFIG_Overload_Stream_Operator
#include <ostream> // ostream for the overloaded operator<<()
#endif
#if HYPER_ARRAY_CONFIG_Check_Bounds
#include <sstream> // stringstream in hyper_array::array::validateIndexRanges()
#endif
#include <type_traits> // template metaprogramming stuff in hyper_array::internal
// </editor-fold>
/// The hyper_array lib's namespace
namespace hyper_array
{
// <editor-fold defaultstate="collapsed" desc="Internal Helper Blocks">
/// Helper functions for hyper_array::array's implementation
/// @note Everything related to this namespace is subject to change and must not be used by user code
namespace internal
{
/// Checks that all the template arguments are integral types using `std::is_integral`
template <typename T, typename... Ts>
struct are_integral
: std::integral_constant<bool,
std::is_integral<T>::value
&& are_integral<Ts...>::value>
{};
template <typename T>
struct are_integral<T>
: std::is_integral<T>
{};
/// Compile-time sum
template <typename T>
constexpr T ct_plus(const T x, const T y)
{
return x + y;
}
/// Compile-time product
template <typename T>
constexpr T ct_prod(const T x, const T y)
{
return x * y;
}
/// Compile-time equivalent to `std::accumulate()`
template
<
typename T, ///< result type
std::size_t N, ///< length of the array
typename O ///< type of the binary operation
>
constexpr T ct_accumulate(const ::std::array<T, N>& arr, ///< accumulate from this array
const size_t first, ///< starting from this position
const size_t length, ///< accumulate this number of elements
const T initialValue, ///< let this be the accumulator's initial value
const O& op ///< use this binary operation
)
{
// https://stackoverflow.com/a/33158265/865719
return (first < (first + length))
? op(arr[first],
ct_accumulate(arr,
first + 1,
length - 1,
initialValue,
op))
: initialValue;
}
/// Compile-time equivalent to `std::inner_product()`
template
<
typename T, ///< the result type
typename T_1, ///< first array's type
size_t N_1, ///< length of the first array
typename T_2, ///< second array's type
size_t N_2, ///< length of the second array
typename O_SUM, ///< summation operation's type
typename O_PROD ///< multiplication operation's type
>
constexpr T ct_inner_product(const ::std::array<T_1, N_1>& arr_1, ///< perform the inner product of this array
const size_t first_1, ///< from this position
const ::std::array<T_2, N_2>& arr_2, ///< with this array
const size_t first_2, ///< from this position
const size_t length, ///< using this many elements from both arrays
const T initialValue, ///< let this be the summation's initial value
const O_SUM& op_sum, ///< use this as the summation operator
const O_PROD& op_prod ///< use this as the multiplication operator
)
{
// same logic as `ct_accumulate()`
return (first_1 < (first_1 + length))
? op_sum(op_prod(arr_1[first_1], arr_2[first_2]),
ct_inner_product(arr_1, first_1 + 1,
arr_2, first_2 + 1,
length - 1,
initialValue,
op_sum, op_prod))
: initialValue;
}
}
// </editor-fold>
/// A multi-dimensional array
/// Inspired by [orca_array](https://github.com/astrobiology/orca_array)
template
<
typename ElementType, ///< elements' type
size_t Dimensions ///< number of dimensions
>
class array
{
// Types ///////////////////////////////////////////////////////////////////////////////////////
public:
using SizeType = size_t; ///< used for measuring sizes and lengths
using IndexType = size_t; ///< used for indices
// Attributes //////////////////////////////////////////////////////////////////////////////////
// <editor-fold desc="Static Attributes">
public:
static constexpr SizeType dimensions = Dimensions;
// </editor-fold>
// <editor-fold desc="Class Attributes">
public:
// ::std::array's are used here mainly because they are initializable
// from `std::initialzer_list` and they support move semantics
// cf. hyper_array::array's constructors
// I might replace them with a "lighter" structure if it satisfies the above 2 requirements
const ::std::array<SizeType, Dimensions> dimensionLengths; ///< number of elements in each dimension
const SizeType dataLength; ///< total number of elements in [data](@ref data)
const ::std::array<SizeType, Dimensions> indexCoeffs; ///< coefficients to use when computing the index
///< C_i = \prod_{j=i+1}^{n-2} L_j if i in [0, n-2]
///< | 1 if i == n-1
///<
///< where n : Dimensions - 1 (indices start from 0)
///< | C_i : indexCoeffs[i]
///< | L_j : dimensionLengths[j]
///< @see at()
private:
/// handles the lifecycle of the dynamically allocated data array
/// The user doesn't need to access it directly
/// If the user needs access to the allocated array, they can use [data](@ref data) (constant pointer)
std::unique_ptr<ElementType[]> _dataOwner;
public:
/// points to the allocated data array
ElementType* const data;
// </editor-fold>
// methods /////////////////////////////////////////////////////////////////////////////////////
public:
/// It doesn't make sense to create an array without specifying the dimension lengths
array() = delete;
/// no copy-construction allowed (orca_array-like behavior)
array(const array&) = delete;
/// enable move construction
/// allows inclusion of hyper arrays in e.g. STL containers
array(array<ElementType, Dimensions>&& other)
: dimensionLengths (std::move(other.dimensionLengths))
, dataLength {other.dataLength}
, indexCoeffs (std::move(other.indexCoeffs))
, _dataOwner {other._dataOwner.release()} // ^_^
, data {_dataOwner.get()}
{}
/// the usual way for constructing hyper arrays
template <typename... DimensionLengths>
array(DimensionLengths&&... dimensions)
: dimensionLengths{{static_cast<SizeType>(dimensions)...}}
, dataLength{internal::ct_accumulate(dimensionLengths,
0,
Dimensions,
static_cast<SizeType>(1),
internal::ct_prod<SizeType>)}
, indexCoeffs([this] {
::std::array<SizeType, Dimensions> coeffs;
coeffs[Dimensions - 1] = 1;
for (SizeType i = 0; i < (Dimensions - 1); ++i)
{
coeffs[i] = internal::ct_accumulate(dimensionLengths,
i + 1,
Dimensions - i - 1,
static_cast<SizeType>(1),
internal::ct_prod<SizeType>);
}
return coeffs; // hopefully, NRVO should kick in here
}())
#if (__cplusplus < 201402L) // C++14 ?
, _dataOwner{new ElementType[dataLength]} // std::make_unique() is not part of C++11 :(
#else
, _dataOwner{std::make_unique<ElementType[]>(dataLength)}
#endif
, data{_dataOwner.get()}
{
// compile-time input validation
// can't put them during dimensionLengths' initialization, so they're here now
static_assert(sizeof...(DimensionLengths) == Dimensions,
"The number of dimension lengths must be the same as "
"the array's number of dimensions (i.e. \"Dimensions\")");
static_assert(internal::are_integral<
typename std::remove_reference<DimensionLengths>::type...
>::value,
"The dimension lengths must be of integral types");
}
/// Returns the length of a given dimension at run-time
SizeType length(const size_t dimensionIndex) const
{
#if HYPER_ARRAY_CONFIG_Check_Bounds
if (dimensionIndex >= Dimensions)
{
throw std::out_of_range("The dimension index must be within [0, Dimensions-1]");
}
#endif
return dimensionLengths[dimensionIndex];
}
/// Compile-time version of [length()](@ref length())
template <size_t DimensionIndex>
SizeType length() const
{
static_assert(DimensionIndex < Dimensions,
"The dimension index must be within [0, Dimensions-1]");
return dimensionLengths[DimensionIndex];
}
/// Returns the element at the given index tuple
/// Usage:
/// @code
/// hyper_array::array<double, 3> arr(4, 5, 6);
/// arr.at(3, 1, 4) = 3.14;
/// @endcode
template<typename... Indices>
ElementType& at(Indices&&... indices)
{
return data[rawIndex(std::forward<Indices>(indices)...)];
}
/// `const` version of [at()](@ref at())
template<typename... Indices>
const ElementType& at(Indices&&... indices) const
{
return data[rawIndex(std::forward<Indices>(indices)...)];
}
/// Returns the actual index of the element in the [data](@ref data) array
/// Usage:
/// @code
/// hyper_array::array<int, 3> arr(4, 5, 6);
/// assert(&arr.at(3, 1, 4) == &arr.data[arr.rawIndex(3, 1, 4)]);
/// @endcode
template<typename... Indices>
IndexType rawIndex(Indices&&... indices) const
{
#if HYPER_ARRAY_CONFIG_Check_Bounds
return rawIndex_noChecks(validateIndexRanges(std::forward<Indices>(indices)...));
#else
return rawIndex_noChecks({static_cast<IndexType>(indices)...});
#endif
}
private:
#if HYPER_ARRAY_CONFIG_Check_Bounds
template<typename... Indices>
::std::array<IndexType, Dimensions> validateIndexRanges(Indices&&... indices) const
{
// compile-time input validation
static_assert(sizeof...(Indices) == Dimensions,
"The number of indices must be the same as "
"the array's number of dimensions (i.e. \"Dimensions\")");
static_assert(internal::are_integral<
typename std::remove_reference<Indices>::type...
>::value,
"The indices must be of integral types");
// runtime input validation
::std::array<IndexType, Dimensions> indexArray = {{static_cast<IndexType>(indices)...}};
// check all indices and prepare an exhaustive report (in oss)
// if some of them are out of bounds
std::ostringstream oss;
for (size_t i = 0; i < Dimensions; ++i)
{
if ((indexArray[i] >= dimensionLengths[i]) || (indexArray[i] < 0))
{
oss << "Index #" << i << " [== " << indexArray[i] << "]"
<< " is out of the [0, " << (dimensionLengths[i]-1) << "] range. ";
}
}
// if nothing has been written to oss then all indices are valid
if (oss.str().empty())
{
return indexArray;
}
else
{
throw std::out_of_range(oss.str());
}
}
#endif
IndexType rawIndex_noChecks(::std::array<IndexType, Dimensions>&& indexArray) const
{
// I_{actual} = \sum_{i=0}^{N-1} {C_i \cdot I_i}
//
// where I_{actual} : actual index of the data in the data array
// N : Dimensions
// C_i : indexCoeffs[i]
// I_i : indexArray[i]
return internal::ct_inner_product(indexCoeffs, 0,
indexArray, 0,
Dimensions,
static_cast<IndexType>(0),
internal::ct_plus<IndexType>,
internal::ct_prod<IndexType>);
}
};
// <editor-fold desc="orca_array-like declarations">
template<typename ElementType> using array1d = array<ElementType, 1>;
template<typename ElementType> using array2d = array<ElementType, 2>;
template<typename ElementType> using array3d = array<ElementType, 3>;
template<typename ElementType> using array4d = array<ElementType, 4>;
template<typename ElementType> using array5d = array<ElementType, 5>;
template<typename ElementType> using array6d = array<ElementType, 6>;
template<typename ElementType> using array7d = array<ElementType, 7>;
// </editor-fold>
}
#if HYPER_ARRAY_CONFIG_Overload_Stream_Operator
/// Pretty printing to STL streams
/// Should print something like
/// @code
/// [Dimensions:1];[dimensionLengths: 5 ];[dataLength:5];[indexCoeffs: 1 ];[data: 0 1 2 3 4 ]
/// @endcode
template <typename T, size_t D>
std::ostream& operator<<(std::ostream& out, const hyper_array::array<T, D>& ha)
{
out << "[Dimensions:" << ha.dimensions << "]";
out << ";[dimensionLengths: ";
for (auto& dl : ha.dimensionLengths)
{
out << dl << " ";
}
out << "]";
out << ";[dataLength:" << ha.dataLength << "]";
out << ";[indexCoeffs: ";
for (auto& ic : ha.indexCoeffs)
{
out << ic << " ";
}
out << "]";
out << ";[data: ";
for (typename hyper_array::array<T, D>::IndexType i = 0; i < ha.dataLength; ++i)
{
out << ha.data[i] << " ";
}
out << "]";
return out;
}
#endif
Test program
// g++ -std=c++11 -fdiagnostics-show-option -Wall -Wextra -Wpedantic -Werror -Wconversion hyper_array_playground.cpp -o hyper_array_playground
#include <iostream>
#include <vector>
#include "hyper_array/hyper_array.hpp"
using namespace std;
int main()
{
// 3d array
{
hyper_array::array3d<double> a{2, 3, 4};
int c = 0;
for (size_t i = 0; i < a.length<0>(); ++i) // hyper_array
{ // should
for (size_t j = 0; j < a.length<1>(); ++j) // probably
{ // implement
for (size_t k = 0; k < a.length<2>(); ++k) // some
{ // kind
a.at(i, j, k) = c++; // of
} // iterator
} // to prevent
} // so much typing
cout << a << endl;
cout << "(a.length(1) == a.length<1>()): " << (a.length(1) == a.length<1>()) << endl;
}
// 1D array
{
hyper_array::array1d<double> a{5};
int c = 0;
for (size_t i = 0; i < a.length<0>(); ++i)
{
a.at(i) = c++;
}
cout << a << endl;
}
// size w.r.t. std::array
{
constexpr size_t elementCount = 10;
hyper_array::array1d<double> aa{hyper_array::array1d<double>{elementCount}};
// 40 bytes bigger than std::array...
cout << "sizeof(aa): " << (sizeof(aa) + (elementCount*sizeof(double))) << endl;
cout << "sizeof(std::array): " << sizeof(std::array<double, elementCount>) << endl;
}
// in STL containers (e.g. std::vector)
{
vector<hyper_array::array2d<double>> v;
v.emplace_back(hyper_array::array2d<double>{1,2});
v.push_back(hyper_array::array2d<double>{2,1});
}
cout << "done" << endl;
}
New versions of hyper_array can now be found on Github.
Answer: I think your class isn't nearly as useful as it could be due to your choices in member variables. Furthermore, it's less efficient than it could be. Also, the code is written in a style that overcomplicates the problem.
Member Variables
Your members are:
std::unique_ptr<ElementType[]>
ElementType* const
const std::array<SizeType, Dimensions>
const SizeType
const std::array<SizeType, Dimensions>
This choice makes the class noncopyable and nonassignable. But why? There's nothing inherent about a multidimensional array that suggests it shouldn't be assignable or copyable. You make some members public. There's no reason to do that. Particularly bad is data - which is redundant with _dataOwner.
You should strive to make your class as generic as possible. To that end I suggest you simply have two members, both private:
ElementType* data;
std::array<size_t, Dimensions> dimensions;
You can derive dataLength and indexCoeffs from dimensions if need be, and since you'd have to iterate over the array to do anything anyway, I don't see what precomputing saves you.
This also allows you to support copying and moving.
Forwarding References
Forwarding references are a great choice for function templates when you can take objects by lvalue or rvalue and do the cheapest correct thing possible in all cases. However, everywhere that you are using them, the objects getting passed in must be integral types (I don't see you checking this, but you should). There is no difference between copying and moving an integral type, so simply take everything by value. That saves you from having to do all of the std::forward<>-ing. For example:
template <class... Indices>
ElementType& at(Indices... indices)
{
return data[rawIndex(indices...)];
}
Bounds Checking
You introduce a macro for whether or not to do bounds checking. However, convention from the standard library suggests that we just provide functions that DO range checking and functions that don't. at() should throw std::out_of_range, and operator() should never throw:
template <typename... Indices>
ElementType& operator()(Indices... indices) {
// nothrow implementation
}
template <typename... Indices>
ElementType& at(Indices... indices)
{
some_range_checking(indices...);
return (*this)(indices...);
}
Compile time checking
First, a cleaner way to write are_integral would be to use the bool_pack trick:
template <bool... > struct bool_pack { };
template <bool... b>
using all_true = std::is_same<bool_pack<true, b...>, bool_pack<b..., true>>;
With:
template <typename... T>
using are_all_integral = all_true<std::is_integral<T>::value...>;
And you should actually use that metafunction as part of the signature of every function! That is preferred to a simple static_assert since any reflection-style operations on your class would actually yield the correct result:
template <typename... DimensionLengths,
typename = std::enable_if_t<are_all_integral<DimensionLengths...>::value && sizeof...(DimensionLengths) == Dimensions>
>
array(DimensionLengths... )
{ ... }
Otherwise, you would get something weird like std::is_constructible<array<int, 4>, std::string> being true.
Iterators
An important part of writing a container is writing iterators for it. You may provide general iterators that simply traverse the whole array from front to back. Or you may want to support iterating over a single dimension and provide a proxy object to a multi-dimensional array of one dimension less. Either would be good to have. | {
"domain": "codereview.stackexchange",
"id": 16267,
"tags": "c++, performance, c++11, array, template-meta-programming"
} |
What is the output of IFFT operations, continuous or discrete time? | Question: Assume we have a sequence of symbols
$$S= \{0, 0, 1+j, -1-j, 0 ,0 , 1+j,-1+j\}$$
and that this sequence is a frequency domain sequence and will be input to an IFFT operation.
Is the output a continous time or discrete time signal?
Thanks
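For concreteness, here is the sequence passed to a numerical inverse-DFT routine such as numpy's `ifft`:

```python
import numpy as np

# frequency-domain symbol sequence from the question
S = np.array([0, 0, 1 + 1j, -1 - 1j, 0, 0, 1 + 1j, -1 + 1j])

s = np.fft.ifft(S)   # inverse DFT (the IFFT is just a fast algorithm for it)
print(s.shape)       # (8,) -- a finite set of complex samples
```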
Answer: The FFT is just a fast algorithm for evaluating the discrete Fourier transform (DFT). The domains for the DFT are discrete in both time and frequency. So, the result of an inverse DFT using the inputs you showed above will be a discrete-time signal. | {
"domain": "dsp.stackexchange",
"id": 2717,
"tags": "digital-communications"
} |
Determining to terminate at a reward or not | Question: I am practicing the Bellman equation on Grid world examples and in this scenario, there are numbered grid squares where the agent can choose to terminate and collect the reward equal to the amount inside the numbered square or they can choose to not do this and just move to the state grid square.
Since this is a determinisitc grid, I have utilised the following Bellman equation:
$$V(s) = \max_a\left(R(s,a)+\gamma V(s')\right)$$
Where $\gamma=0.5$ and any movement reward is $0$, since this will allow the agent to have a balance of thinking long-term and short-term.
I am trying to understand how you would determine whether it is better for the agent to terminate at the state with the number $3$ or to continue to the state with the number $4$ to collect the larger reward. I have marked with (X) the terminal states where, based on my current calculations, I feel the agent should exit.
Answer:
I am trying to understand how you would determine whether it is better for the agent to terminate at the state with the number 3 or to continue to the state with a number 4 to collect the more reward?
Which is better is determined by looking at the expected return from either choice, with higher expected returns being better.
The return from travelling one time step to the "4 on exit" state, is 2 as you have shown, due to discounting. That is assuming the 4 is gained by taking a separate "exit" action once in that position. That is, there is no combined "move and exit" action that only takes one time step - whether or not such actions exist in this environment makes a large change to your example and what will be optimal, so it is really important to be clear about that.
The return from exiting immediately in the "3 on exit" state is 3.
3 is larger than 2, so if the agent finds itself in the "3 on exit" state, then it should exit immediately to get the best expected return.
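That comparison of discounted returns can be written out directly (a tiny sketch using the numbers above and $\gamma = 0.5$):

```python
gamma = 0.5

# Option A: exit immediately in the "3 on exit" state
return_exit_now = 3

# Option B: move one step (movement reward 0), then exit for 4 one step later
return_move_then_exit = 0 + gamma * 4

print(return_exit_now, return_move_then_exit)  # 3 vs 2.0 -> exiting now is better
```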
Possibly what might be making this harder to understand is the role that discounting takes. Sometimes it appears to be used to "fix" infinite rewards from continuing environments. However, discounting is part of the definition of return, and it changes what counts as optimal. With a low discount factor, such as $0.5$, it can be optimal to take a lower reward sooner as opposed to a larger reward later. The value of the discount factor allows you to make that comparison exactly. | {
"domain": "ai.stackexchange",
"id": 3301,
"tags": "reinforcement-learning, deep-rl, bellman-equations, deterministic-policy, stopping-conditions"
} |
Why does the Moon's terminator look "wrong" in this image? | Question: There is a picture of the moon in National Public Radio's on-line article Get Ready For Halloween By Watching The Moon's 'Occultation' Tonight. It looks wrong to me - specifically, the brightness gradation near the terminator - or lack thereof.
The image is credited "A waning gibbous moon occultation will be visible Tuesday night in parts of the United States. JPL/NASA" but I wonder if this is true. Something just looks "wrong" here.
In the original article the JPL/NASA contains a link to this page, which currently contains this image also shown below. It looks more like an actual photograph.
And here is a NASA/JPL image from https://svs.gsfc.nasa.gov/4404 for 2016-10-19 04:00 UT, which if I understand correctly is not actually a photograph, but simulated from LRO data. It also illustrates that the terminator is expected to be graded from light to dark and contain contrast from shadowing.
above: terminator crops from all three images. The realistic terminators on the right show graded intensity and strong shadowing from the highly oblique incident light.
Answer: The reason that moon image looks wrong is because it is wrong. It is not a real image of the moon $-$ at least the terminator is not real.
The original article you cite has a link just below their image indicating the source of their image of the moon. That source is the night sky planner, hosted by JPL. You'll find the same image on that website, albeit slightly darker (it seems the NPR people lightened up the image a bit).
If you do some more digging, you'll see that the night sky planner got its image of that moon from someone else. Within the html code, they have the image defined as:
<img alt="the moon" src="http://api.usno.navy.mil/imagery/moon.png">
Clearly you can see that the image was taken from the United States Naval Observatory.
After a bit of digging, I found out what exactly is going on here. The purpose of this site is to show you the current phase of the moon. To do this, they take a single image of the full moon and artificially shade out a region to make it appear as the current phase of the moon. You can see their process here and how it was done by some guy named R. Schmidt. They've broken down the moon phases into 181 images which you can download here, if you're interested.
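A toy illustration of the difference (my own sketch, not the USNO's actual procedure): across a horizontal strip of the lunar disk, the shaded-full-moon trick yields a hard step in brightness, while genuine oblique illumination yields a smooth Lambertian gradient that fades to zero at the terminator.

```python
import math

def hard_phase_mask(x, terminator=0.2):
    """The 'artificially shaded full moon' trick: full brightness on one
    side of a terminator line, near-black on the other -- no gradient."""
    return 1.0 if x < terminator else 0.05

def lambertian(x, sun_angle=1.37):
    """Lambertian shading along the equator of a sphere: brightness is the
    cosine of the local solar incidence angle, fading smoothly to zero."""
    # x in [-1, 1] across the disk; surface normal is (x, 0, sqrt(1 - x^2))
    # and the sun direction is (sin(sun_angle), 0, cos(sun_angle)).
    mu = x * math.sin(sun_angle) + math.sqrt(max(0.0, 1 - x * x)) * math.cos(sun_angle)
    return max(0.0, mu)

xs = [i / 10 for i in range(-10, 11)]
print([round(hard_phase_mask(x), 2) for x in xs])  # abrupt step at the terminator
print([round(lambertian(x), 2) for x in xs])       # smooth fade to zero
```

The first list is a step function with exactly two brightness levels; the second grades continuously, which is what a real photograph of an oblique terminator looks like.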
As you can see, the terminator on that image is wrong because it is not a real image of the current phase of the moon, but rather a computer generated "shading" of the full moon to indicate the current moon phase. | {
"domain": "astronomy.stackexchange",
"id": 1905,
"tags": "photography, the-moon"
} |
Why is v lowercase and D uppercase in $v=H_0D$? | Question: Why is v almost always written in lowercase and D in uppercase in $v=H_0D$?
OK, v is in lowercase, as usual, but then why is D in uppercase? What's so different/special about it?
In my physics school textbook and on Wikipedia it is written as v=H0D, and I always assumed that was the correct form. Yes, I have seen it written as v=H0d, but I always thought that was a mistake. I think I assumed it was D because both my sources pointed out that it is a proper distance, as opposed to a co-moving distance. I cannot believe it is just arbitrary. Physics symbols are case sensitive. Was v=H0d considered too confusing!?
Answer: The question is built on a false premise and $v = H_0 D$ is not a universal convention.
If a child asked you, you could point them first to the BBC bitesize revision notes on Hubble's law, which uses $v = H_0 d$.
Then hyperphysics notes, which uses $v = H_0 r$.
(These were two of the first three non-pdf hits I got when I googled "Hubble's law")
I would then explain to the child that any algebraic symbol can have any meaning you choose to attach to it. | {
"domain": "astronomy.stackexchange",
"id": 7289,
"tags": "observational-astronomy, astrophysics, expansion, mathematics, hubble-constant"
} |
Is using defvar for a non-global variable ok? | Question: I am calling defvar in the middle of a function definition. So far I've only ever seen it used, together with defparameter, for global variables like *error-output* or *standard-output*.
(defun consume-socket-reply (socket end-test-form)
(do* ((line "" (read-line (usocket:socket-stream socket) nil))
(text "" (concatenate 'string text line)))
((funcall end-test-form) text)))
(defun read-http-content (socket)
(defvar line)
(consume-socket-reply socket (lambda () (not line))))
Is there a better way to write what I am trying to do? That is: being able to pass the end-test-form to the final inner loop.
Answer: Ok I've modified my code that way, and removed the call to defvar:
(defun consume-socket-reply (socket end-test-form)
(do* ((line (read-line (usocket:socket-stream socket) nil)
(read-line (usocket:socket-stream socket) nil))
(text line (concatenate 'string text line)))
((funcall end-test-form line) text)))
(defun read-http-header (socket)
(consume-socket-reply socket (lambda (line) (equal line ""))))
(defun read-http-content (socket)
(consume-socket-reply socket (lambda (line) (not line)))) | {
"domain": "codereview.stackexchange",
"id": 5246,
"tags": "lisp, common-lisp"
} |
By convention, when observing an exoplanet, is the observer's line of sight (on Earth) perpendicular to the plane of sky? | Question: This is just a quick question. In this picture of the orbit of an exoplanet, is the observer's line of sight perpendicular to the plane of reference, which is in the +z direction on the diagram? In the case when the inclination equals to $90^{\circ}$, it is ideal for us to measure the radial velocity. Is that right?
Answer: Yes, the reference plane for exoplanet orbital inclination is our sky plane.
From the NASA Exoplanet Archive documentation:
The Observed Inclination is the orbital inclination with respect to the plane of the sky....
0 degrees correspond to an orbit in the plane of the sky, face on with respect to our line of sight.
90 degrees corresponds to an orbit that is edge-on with respect to our line of sight, perpendicular to the plane of the sky.
If you query the archive for exoplanets with discovery method "Transit," you get inclinations mostly between 85° and 90°.
If the only available observation is a fluctuation in the star's radial velocity, then the planet's mass m and orbital inclination i are unknown, but a minimum mass m sin i can be computed.
The actual mass is greater unless i = 90°.
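As a sketch of that computation (assuming a circular or near-circular orbit and $m \ll M_*$; the 51 Peg b numbers below are approximate literature values used only as a sanity check):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg

def minimum_mass(K, P_days, M_star_msun, e=0.0):
    """m*sin(i) in kg from the RV semi-amplitude K (m/s), orbital period
    (days) and stellar mass (solar masses), assuming m << M_star."""
    P = P_days * 86400.0
    M_star = M_star_msun * M_SUN
    return K * (P * M_star**2 / (2.0 * math.pi * G))**(1.0 / 3.0) * math.sqrt(1.0 - e**2)

# 51 Peg b, approximate values: K ~ 55.9 m/s, P ~ 4.23 d, M_* ~ 1.06 M_sun
msini = minimum_mass(55.9, 4.2308, 1.06)
print(msini / M_JUP)  # roughly 0.46 Jupiter masses
```

This reproduces the familiar ~0.5 Jupiter-mass minimum for 51 Peg b; the true mass is larger by a factor $1/\sin i$.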
The orbit diagram from Wikipedia can apply to extrasolar systems with one change: the reference direction is celestial north instead of ♈. | {
"domain": "astronomy.stackexchange",
"id": 3961,
"tags": "orbit, astrophysics, exoplanet, binary-star"
} |
Can all regular tree types be expressed as $\mu$ types? | Question: In "Types and Programming Languages", Pierce gives a translation from recursive types ($\mu$ types) to types expressed as regular trees: possibly infinite trees, but with finitely many distinct subtrees.
I'm wondering, is the converse true? Can every regular tree type be expressed using the $\mu$ fixpoint notation? It seems obvious that this can be done if you have mutually recursive types: you have a type for each subtree of the regular tree type. But can it be done with singly recursive types?
Answer: You can reduce mutual recursion to a single recursion: see Bekić's Theorem, e.g. Section 10.1 of Winskel (1), where it is worked out for programs rather than types. Note however that the details depend on the exact nature of the formal system in question. See A. Bauer's answer in (2) for the case of types.
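As a sketch of the reduction (the notation here is mine, not Winskel's): given two mutually recursive type equations, Bekić's identity expresses each component of the simultaneous fixpoint using only nested single $\mu$-binders:

```latex
% Two mutually recursive type equations T = F(T, U), U = G(T, U).
% Bekic's identity gives each component of the simultaneous fixpoint
% as a nested single fixpoint:
T \;=\; \mu t.\, F\bigl(t,\ \mu u.\, G(t, u)\bigr),
\qquad
U \;=\; \mu u.\, G\bigl(\mu t.\, F(t, u),\ u\bigr)
```

Iterating this collapses any finite system of mutually recursive equations into singly recursive $\mu$ types, at the cost of duplicating subterms.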
G. Winskel, The Formal Semantics of Programming Languages.
CSTheory Stackexchange, Can Isorecursive types capture mutually recursive data types? | {
"domain": "cs.stackexchange",
"id": 14022,
"tags": "programming-languages, lambda-calculus, type-theory, types-and-programming-languages, tree-automata"
} |
Sign in the photon propagator | Question: The Klein Gordon propagator is given (I use Peskin and Schroeder's conventions, if it matters...),
\begin{equation}
\frac{ i }{ p ^2 - m ^2 + i \epsilon }
\end{equation}
The photon propagator (using Feynman gauge) is
\begin{equation}
\frac{ - i \eta^{\mu\nu}}{ k ^2 + i \epsilon }
\end{equation}
The time-like component of the photon field propagates with a different sign than the scalar field, and the spatial components propagate with the same sign.
Is there a physical significance to the difference in sign between the two or it is just a consequence of our conventions?
Answer: Yes, there is a physical significance. The timelike mode $A^0$ is pure gauge; it does not propagate (in other words, the equation for $A^0$ is a constraint [Gauss's law], not an equation of motion, and its canonical momentum is identically 0, meaning we cannot impose canonical commutation relations on it). Some of the spatial modes do propagate, so they should have the same sign propagator as the scalars. This is really the easiest way to remember the sign of the photon propagator. The wrong sign in the $A^0$ propagator is at the heart of many problems in QFT. It marks the conflict between the positive definite norm of quantum mechanics and the indefinite norm on Minkowski space that we are forced to deal with because we want a manifestly local, Lorentz-invariant formulation of the theory. | {
"domain": "physics.stackexchange",
"id": 12088,
"tags": "quantum-field-theory, photons, gauge-theory, path-integral"
} |
What does python_qt_binding.loadUi's 3rd arg do in PyQt binding? | Question:
For example, in rqt_bag loadUi takes a python dictionary as its 3rd arg like this:
ui_file = os.path.join(rp.get_path('rqt_bag'), 'resource', 'bag_widget.ui')
loadUi(ui_file, self, {'BagGraphicsView': BagGraphicsView})
Now tracking back loadUi, what the PySide binding does makes sense looking at the source code -- it really does use the dictionary as a dictionary:
def loadUi(uifile, baseinstance=None, custom_widgets=None):
from PySide.QtUiTools import QUiLoader
from PySide.QtCore import QMetaObject
:
def createWidget(self, class_name, parent=None, name=''):
# don't create the top-level widget, if a base instance is set
if self._base_instance is not None and parent is None:
return self._base_instance
if class_name in self._custom_widgets:
widget = self._custom_widgets[class_name](parent)
But what about PyQt binding?
def loadUi(uifile, baseinstance=None, custom_widgets=None):
from PyQt4 import uic
return uic.loadUi(uifile, baseinstance=baseinstance)
Is it just ignored? If so, what's the purpose of letting the PySide binding have the 3rd arg?
Originally posted by 130s on ROS Answers with karma: 10937 on 2013-02-26
Post score: 0
Answer:
PyQt uses the "promoted classes" specified in the UI file and therefore does not need the 3rd argument.
PySide on the other hand does not support that, so you have to explicitly pass the mapping to the loadUi function.
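A self-contained sketch of that difference (no Qt required; GraphicsView / BagGraphicsView are stand-ins for the real Qt classes in the question): PySide's loader has to be handed an explicit name-to-class mapping to instantiate promoted widgets, which is exactly what the 3rd argument carries.

```python
# Stand-alone illustration of why PySide's loader needs the mapping.

class GraphicsView:
    """Plain widget class that the .ui file 'promotes' to a custom class."""

class BagGraphicsView(GraphicsView):
    """Custom subclass the application actually wants instantiated."""

def create_widget_pyside_style(class_name, custom_widgets):
    # PySide's QUiLoader has no notion of promoted classes, so loadUi must
    # be handed an explicit name -> class mapping (the 3rd argument).
    if class_name in custom_widgets:
        return custom_widgets[class_name]()
    raise KeyError("unknown custom widget: " + class_name)

w = create_widget_pyside_style('BagGraphicsView',
                               {'BagGraphicsView': BagGraphicsView})
print(type(w).__name__)  # BagGraphicsView
```

Under PyQt the same information is read out of the .ui file itself, so the mapping is simply dropped.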
Originally posted by Dirk Thomas with karma: 16276 on 2013-02-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by 130s on 2013-02-27:
I see. Given that PySide requires the 3rd arg to incorporate custom classes, I think python_qt_binding.loadUi as an API should require the 3rd arg. I'll open a ticket.
Comment by Dorian Scholz on 2013-02-27:
This was certainly true for PySide 1.0.6, but has anyone checked if it changed in more recent versions?
Comment by 130s on 2013-02-28:
Ticketed. Let's move further discussion there. | {
"domain": "robotics.stackexchange",
"id": 13080,
"tags": "ros, pyqt, rqt"
} |
Conjugate nuclei | Question: I'm trying to find an explanation for why the difference in binding energies between conjugate nuclei is governed by the Coulomb interaction. I was looking at the semi-empirical mass formula and trying to draw some conclusions.
We have that:
$$ R = R_0 A^{1/3}$$
and:
$$ Z = A-N$$
so for the Coulomb term I find that it gets an $A^{5/3}$ dependence, and of all the terms in the formula that one dominates in terms of $A$ dependence, but it's not entirely clear why the Coulomb interaction governs the difference in binding energies for conjugate nuclei. How do we justify this?
Answer: In the semi-empirical mass formula
$$E_B(N,Z)=a_VA-a_SA^{2/3}-a_C\frac{Z(Z-1)}{A^{1/3}}-a_A\frac{(N-Z)^2}{A}+\delta(N,Z)$$
the volume and surface terms only depend on $A$, so they give the same value for a nucleus and its mirror nucleus, since the mirror nucleus is obtained by changing $N\leftrightarrow Z$, so $A$ is fixed. The asymmetry term depends on $(N-Z)^2/A$, so it's also the same when $N$ and $Z$ are interchanged. The pairing term $\delta(N,Z)$ is either $0$ or only depends on $A$. So the only term that changes between a nucleus and its conjugate is the Coulomb term and the difference in their binding energies $\Delta E_B=E_B(N,Z)-E_B(Z,N)$ must come from this term. | {
"domain": "physics.stackexchange",
"id": 79914,
"tags": "nuclear-physics, binding-energy"
} |
Simulating dynamic polymorphism in C | Question: This code snippet is a research attempt to find means for dealing with dynamic polymorphism (in C): two or more related "classes" share a common interface. Here is my best shot:
#include <stdio.h>
#include <stdlib.h>
#define CALL(INTERFACE_PTR, METHOD_NAME, ...) \
INTERFACE_PTR->METHOD_NAME(INTERFACE_PTR->object, ##__VA_ARGS__)
#define DESTRUCT(INTERFACE_PTR) INTERFACE_PTR->destruct(INTERFACE_PTR->object)
/*******************************************************************************
* Dummy data structures used for the demonstration. *
*******************************************************************************/
typedef struct int_triple {
int a;
int b;
int c;
} int_triple;
typedef struct int_pointer_triple {
int* a;
int* b;
int* c;
} int_pointer_triple;
void int_triple_init(int_triple* it)
{
it->a = 0;
it->b = 0;
it->c = 0;
}
void int_pointer_triple_init(int_pointer_triple* ipt)
{
ipt->a = malloc(sizeof(int));
ipt->b = malloc(sizeof(int));
ipt->c = malloc(sizeof(int));
*ipt->a = 0;
*ipt->b = 0;
*ipt->c = 0;
}
void int_triple_set(void* int_triple_ptr, int a, int b, int c)
{
int_triple* it = (int_triple*) int_triple_ptr;
it->a = a;
it->b = b;
it->c = c;
printf("int_triple_set(%d, %d, %d)\n", a, b, c);
}
void int_pointer_triple_set(void* int_triple_pointer_ptr, int a, int b, int c)
{
int_pointer_triple* ipt = (int_pointer_triple*) int_triple_pointer_ptr;
*ipt->a = a;
*ipt->b = b;
*ipt->c = c;
printf("int_pointer_triple_set(%d, %d, %d)\n", a, b, c);
}
int int_triple_get_sum(void* int_triple_ptr)
{
int_triple* it = (int_triple*) int_triple_ptr;
puts("int_triple_get_sum()");
return it->a + it->b + it->c;
}
int int_pointer_triple_get_sum(void* int_pointer_triple_ptr)
{
int_pointer_triple* ipt = (int_pointer_triple*) int_pointer_triple_ptr;
puts("int_pointer_triple_get_sum()");
return *ipt->a + *ipt->b + *ipt->c;
}
int int_triple_get_product(void* int_triple_ptr)
{
int_triple* it = (int_triple*) int_triple_ptr;
puts("int_triple_get_product()");
return it->a * it->b * it->c;
}
int int_pointer_triple_get_product(void* int_pointer_triple_ptr)
{
int_pointer_triple* ipt = (int_pointer_triple*) int_pointer_triple_ptr;
puts("int_pointer_triple_get_product()");
return *ipt->a * *ipt->b * *ipt->c;
}
void int_triple_destruct(void* int_triple_ptr)
{
free(int_triple_ptr);
puts("int_triple_destruct()");
}
void int_pointer_triple_destruct(void* int_pointer_triple_ptr)
{
int_pointer_triple* ipt = (int_pointer_triple*) int_pointer_triple_ptr;
free(ipt->a);
free(ipt->b);
free(ipt->c);
free(ipt);
puts("int_pointer_triple_destruct");
}
/*******************************************************************************
* Interface stuff. *
*******************************************************************************/
typedef struct int_triple_interface {
void* object;
void (*set) (void*, int, int, int);
int (*get_sum) (void*);
int (*get_product) (void*);
void (*destruct) (void*);
} int_triple_interface;
int_triple_interface* new_int_triple()
{
int_triple_interface* interface = malloc(sizeof(*interface));
int_triple* it = malloc(sizeof(*it));
int_triple_init(it);
interface->object = it;
interface->set = int_triple_set;
interface->get_sum = int_triple_get_sum;
interface->get_product = int_triple_get_product;
interface->destruct = int_triple_destruct;
return interface;
}
int_triple_interface* new_int_pointer_triple()
{
int_triple_interface* interface = malloc(sizeof(*interface));
int_pointer_triple* ipt = malloc(sizeof(*ipt));
int_pointer_triple_init(ipt);
interface->object = ipt;
interface->set = int_pointer_triple_set;
interface->get_sum = int_pointer_triple_get_sum;
interface->get_product = int_pointer_triple_get_product;
interface->destruct = int_pointer_triple_destruct;
return interface;
}
int main() {
int_triple_interface* interface;
int sum;
int product;
puts("--- int_triple ---");
interface = new_int_triple();
CALL(interface, set, 2, 3, 4);
sum = CALL(interface, get_sum);
product = CALL(interface, get_product);
DESTRUCT(interface);
printf("sum = %d, product = %d.\n", sum, product);
puts("\n--- int_pointer_triple ---");
interface = new_int_pointer_triple();
CALL(interface, set, 5, 6, 7);
sum = CALL(interface, get_sum);
product = CALL(interface, get_product);
DESTRUCT(interface);
printf("sum = %d, product = %d.\n", sum, product);
}
The output from the main is as follows:
--- int_triple ---
int_triple_set(2, 3, 4)
int_triple_get_sum()
int_triple_get_product()
int_triple_destruct()
sum = 9, product = 24.
--- int_pointer_triple ---
int_pointer_triple_set(5, 6, 7)
int_pointer_triple_get_sum()
int_pointer_triple_get_product()
int_pointer_triple_destruct
sum = 18, product = 210.
Critique request
I would like to hear comments on:
naming conventions,
coding conventions,
overall comments,
design pattern,
anything else.
Answer: Your approach has at least two unfortunate aspects:
Your approach to interface->object uses an extra heap-allocation.
Adding a new "method" to int_triple_interface requires increasing the size of every "int triple" "object" in the program by sizeof(void(*)()).
Nit: Your DESTRUCT(x) macro doesn't actually free(x).
The first problem could be solved by allocating the "object data" directly after the "interface data", with allowances for alignment/padding:
void int_triple_destruct(void* int_triple_ptr)
{
puts("int_triple_destruct() is now a no-op");
}
int_triple_interface* new_int_triple()
{
int_triple_interface* interface = malloc(sizeof(*interface) + sizeof(int_triple));
int_triple* it = (void*)(interface + 1);
int_triple_init(it);
interface->object = it;
interface->set = int_triple_set;
interface->get_sum = int_triple_get_sum;
interface->get_product = int_triple_get_product;
interface->destruct = int_triple_destruct;
return interface;
}
The second problem could be solved by storing all your function pointers away in a static data table: one table per object type, instead of one table per object instance.
#define CALL(INTERFACE_PTR, METHOD_NAME, ...) \
INTERFACE_PTR->vtable->METHOD_NAME(INTERFACE_PTR->object, ##__VA_ARGS__)
#define DESTRUCT(INTERFACE_PTR) INTERFACE_PTR->vtable->destruct(INTERFACE_PTR->object)
typedef struct {
void (*set) (void*, int, int, int);
int (*get_sum) (void*);
int (*get_product) (void*);
void (*destruct) (void*);
} int_triple_vtable_t;
int_triple_vtable_t int_triple_vtable = {
int_triple_set,
int_triple_get_sum,
int_triple_get_product,
int_triple_destruct,
};
typedef struct {
void* object;
int_triple_vtable_t* vtable;
} int_triple_interface;
int_triple_interface* new_int_triple()
{
int_triple_interface* interface = malloc(sizeof(*interface) + sizeof(int_triple));
int_triple* it = (void*)(interface + 1);
int_triple_init(it);
interface->object = it;
interface->vtable = &int_triple_vtable;
return interface;
}
You probably don't really need the object pointer at all (since now it just points to (char *)interface + k for some known offset k), but I haven't thought about it too much.
In general, the question "how do I do OOP in C" can be answered by looking at what C++ does and then slavishly copying it. That's where I got the idea of "vtables" here, for example. Your object pointer is roughly equivalent to a C++ "virtual base class". And if you really want to do efficient, type-safe OOP in C, you're eventually going to have to reckon with the concepts of "copy construction" and "move construction", both of which you can find implemented in C++. | {
"domain": "codereview.stackexchange",
"id": 22699,
"tags": "c, polymorphism"
} |
How to write a matrix $\mathcal{M}$ such that $\mathcal{M} \boldsymbol{x}=\boldsymbol{\omega}\times\boldsymbol{x}$? | Question: As is well known, it is possible to use the $\nabla$ operator as if it were a vector. Some consider this an abuse of notation, but it is one that works well and is very useful. Similarly, how is it possible to treat the operator $\boldsymbol{\omega}\times$ as a matrix? How does one build a matrix $\mathcal{M}$ such that $\boldsymbol{\omega} \times \boldsymbol{x} = \mathcal{M} \boldsymbol{x}$?
Answer: $$ [\mathbf{a}]_{\times} = \begin{bmatrix}
\,\,0 & \!-a_3 & \,\,\,a_2 \\
\,\,\,a_3 & 0 & \!-a_1 \\
\!-a_2 & \,\,a_1 & \,\,0
\end{bmatrix}, $$
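A quick numerical check of this identity (plain Python, no external libraries) that the skew-symmetric matrix built from $\boldsymbol{\omega}$ reproduces the cross product:

```python
# Verify that skew(w) @ x equals w x x (cross product).
def skew(a):
    ax, ay, az = a
    return [[0.0, -az,  ay],
            [ az, 0.0, -ax],
            [-ay,  ax, 0.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

w, x = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(matvec(skew(w), x))  # [-3.0, 6.0, -3.0]
print(cross(w, x))         # [-3.0, 6.0, -3.0]
```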
Source: https://en.wikipedia.org/wiki/Skew-symmetric_matrix#Cross_product | {
"domain": "physics.stackexchange",
"id": 88100,
"tags": "operators, rotation, linear-algebra"
} |
The complementary variable to the qubit and spin-1/2 | Question: The qubit is a big topic of quantum information theory. A qubit is a single quantum bit. Physical examples of qubits include the spin-1/2 of an electron, for example, see page 39 of Preskill:
http://www.theory.caltech.edu/people/preskill/ph229/notes/chap5.pdf
In quantum mechanics, two variables are called complementary if knowledge of one implies no knowledge whatsoever of the other. The usual example is position and momentum. If one knows the position exactly, then the momentum cannot be known at all. And to the extent that a situation can exist where we know something about both, there is a restriction, Heisenberg's uncertainty principle, that relates the accuracy of our knowledge:
$$\sigma_x\sigma_p \ge \hbar/2$$
where $\sigma_x$ and $\sigma_p$ are the RMS errors in the position and momentum, and $\hbar$ is Planck's constant $h$ divided by $2\pi$. The same relationship obtains for other pairs of complementary variables.
The units of $\hbar$ are that of angular momentum. Since spin-1/2 has units of angular momentum, it's natural that its complementary variable has no units. This is typically taken to be angle. That is, the usual assumption of quantum mechanics is that the complementary variable to spin is angle. For example, see Physics Letters A Volume 217, Issues 4-5, 15 July 1996, Pages 215-218, "Complementarity and phase distributions for angular momentum systems" by G. S. Agarwal and R. P. Singh, http://arxiv.org/abs/quant-ph/9606015
At the same time, in quantum information theory, the concept of "mutually unbiased bases" has to do with complementary variables in a finite Hilbert space. The usual example of this is that spin-1/2 in the $x$ or $y$ direction is complementary to spin in the z direction. In other words:
in quantum information theory, the usual complementary variable to spin is not taken to be angle, but instead is taken to be spin itself.
For example, see J. Phys. A: Math. Theor. 43 265303, "Mutually Unbiased Bases and Complementary Spin 1 Observables" by Paweł Kurzyński, Wawrzyniec Kaszub and Mikołaj Czechlewski, http://arxiv.org/abs/0905.1723
But according to the Heisenberg uncertainty principle, spin can only be its own complementary variable if we have $\hbar=1$. Of course it's possible to choose coordinates with $\hbar=1$, this is common in elementary particles, but what I'm asking about is this:
Is there a compatible way to interpret the two different choices for the complementary variable to spin angular momentum? For example, can we also interpret spin as an angle?
Answer: "Composite" observables such as the angular momentum have no "unique" complementary observables. In fact, the three components of the angular momentum are not really "independent" in the sense of spanning a proper configuration space. Only two functions of the three angular momentum components - conventionally $j^2$ and $j_z$ - may be selected into a basis of mutually commuting observables.
Once you have this basis, you may talk about other observables that don't commute with them. There are many. Of course, they include the other components, e.g. $j_x, j_y$, of the angular momentum. If you want a treatment that is analogous to the treatment of the momenta and positions, of course that the angles $\theta,\phi$ are the natural dual variables to $j,m$. You may either use the basis of spherical harmonics, $Y_{lm}$, or you may choose the continuous basis of delta-functions located at particular values of $(\theta,\phi)$.
Clearly, for internal spin - especially the half-integer-valued spin - there is no orbital rotation so there is no $(\theta,\phi)$ basis of the Hilbert space.
The reason why there's no unique answer is that the spin is "composite" and the full Lagrangian can't be written as a function of the angular momentum only - and even if it could, there are many ways to do so. In particular, a radial motion away from the origin carries $\vec j=0$ but it is still nontrivial. For such motion, the parameterization via the angular momentum would go singular. You would need both the angular momentum and the ordinary one - but then the variables would be redundant.
For higher-dimensional space, it becomes even more clear that the angular momentum cannot describe a proper basis of the configuration or phase space because the angular momentum has many components - $d(d-1)/2$ of them - which becomes (much) higher than the actual number of components describing the motion of a point-like particle, $d$.
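As a numerical aside, the spin-1/2 mutually-unbiased-bases fact quoted in the question is easy to verify directly (both eigenbases below happen to be real, so complex conjugation is omitted in the overlap):

```python
import math

s = 1 / math.sqrt(2)
z_basis = [[1.0, 0.0], [0.0, 1.0]]   # eigenvectors of sigma_z
x_basis = [[s, s], [s, -s]]          # eigenvectors of sigma_x

def overlap_sq(u, v):
    """|<u|v>|^2 for real 2-component vectors (conjugation omitted)."""
    return abs(sum(a * b for a, b in zip(u, v))) ** 2

for u in x_basis:
    for v in z_basis:
        print(round(overlap_sq(u, v), 3))  # 0.5 every time
```

Every overlap squared is exactly 1/2: knowing the spin along $z$ gives no information about the spin along $x$, which is the finite-dimensional sense of "complementary" used in quantum information theory.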
To summarize, the basic assumption that there exists "the" dual variable for an arbitrary observable you choose in any theory, is incorrect. Nevertheless, if you want the most accurate analogy of the relationship of the momentum- and position-basis, it's the basis of spherical harmonics and the delta-functions on the sphere and it only works for orbital angular momentum. | {
"domain": "physics.stackexchange",
"id": 38424,
"tags": "quantum-information, quantum-spin, heisenberg-uncertainty-principle"
} |