| anchor | positive | source |
|---|---|---|
Darzens reaction on allylic position | Question: Question
Attempt
I thought the products would come from a simple Darzens reaction, so I replaced the $\ce{OH}$ with $\ce{Cl}$. But my answer seems to be way off. Please give some tips on what might be happening in the reaction.
Answer: This reaction is a special case of the Darzens halogenation. I assume you know the mechanism of the Darzens reaction; if not, you can check it here. It occurs through a six-membered cyclic intermediate. I'll show you the first reaction; you should then be able to answer the second one in the same way. | {
"domain": "chemistry.stackexchange",
"id": 13734,
"tags": "organic-chemistry, reaction-mechanism, alcohols, halides, nucleophilic-substitution"
} |
Speeding up Dijkstra's algorithm | Question: I have Dijkstra's algorithm:
# ==========================================================================
# We will create a dictionary to represent the graph
# =============================================================================
graph = {
'a' : {'b':3,'c':4, 'd':7},
'b' : {'c':1,'f':5},
'c' : {'f':6,'d':2},
'd' : {'e':3, 'g':6},
'e' : {'g':3, 'h':4},
'f' : {'e':1, 'h':8},
'g' : {'h':2},
'h' : {'g':2}
}
def dijkstra(graph, start, goal):
    shortest_distance = {}  # dictionary recording the cost to reach each node; we constantly update this as we move along the graph
    track_predecessor = {}  # dictionary keeping track of the predecessor that led to each node
    unseenNodes = graph.copy()  # to iterate through all nodes
    infinity = 99999999999  # infinity can be considered a very large number
    track_path = []  # list recording the path as we trace back our journey

    # Initially we assign 0 as the cost to reach the source node and infinity as the cost of all other nodes
    for node in unseenNodes:
        shortest_distance[node] = infinity
    shortest_distance[start] = 0

    # The loop keeps running until we have entirely exhausted the graph, i.e. until we have seen all the nodes.
    # To iterate through the graph, we need to determine the min_distance_node every time.
    while unseenNodes:
        min_distance_node = None
        for node in unseenNodes:
            if min_distance_node is None:
                min_distance_node = node
            elif shortest_distance[node] < shortest_distance[min_distance_node]:
                min_distance_node = node

        # From the minimum node, what are our possible paths?
        path_options = graph[min_distance_node].items()

        # Calculate the cost of each path we can take and update it only if it is lower than the existing cost
        for child_node, weight in path_options:
            if weight + shortest_distance[min_distance_node] < shortest_distance[child_node]:
                shortest_distance[child_node] = weight + shortest_distance[min_distance_node]
                track_predecessor[child_node] = min_distance_node

        # Pop the node we have just visited so that we don't iterate over it again
        unseenNodes.pop(min_distance_node)

    # Once we have reached the destination node, trace back our path and report the total accumulated cost
    currentNode = goal
    while currentNode != start:
        try:
            track_path.insert(0, currentNode)
            currentNode = track_predecessor[currentNode]
        except KeyError:
            print('Path not reachable')
            break
    track_path.insert(0, start)

    # If the cost is still infinity, the node was never reached
    if shortest_distance[goal] != infinity:
        print('Shortest distance is ' + str(shortest_distance[goal]))
        print('And the path is ' + str(track_path))
It works fine if I have a small number of nodes (like in the code), but I have a graph with around 480 000 nodes, and by my estimate, on such a large graph it will take 7.5 hours, and that is only one way! How could I make it faster? OSM, for instance, calculates this in seconds!
Answer: PEP-8 Violations
Variables (unseenNodes, currentNode) should be snake_case, not camelCase.
Infinity
99999999999 is nowhere near infinity. If you want to use infinity, use the real infinity: float("+inf"), or in Python 3.5+, math.inf
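As a quick illustration, both spellings are the same IEEE-754 positive infinity and compare greater than any finite number, including the magic constant above:

```python
import math

# float("+inf") and math.inf are the same value; unlike 99999999999,
# they are larger than every finite number
assert math.inf == float("+inf")
assert math.inf > 99999999999
assert math.inf > 10 ** 100
```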
Conversion to strings
Python's print() function automatically converts any non-string argument to a string for printing. You don't have to. This:
print('Shortest distance is ' + str(shortest_distance[goal]))
print('And the path is ' + str(track_path))
can be written as:
print('Shortest distance is', shortest_distance[goal])
print('And the path is', track_path)
which is slightly shorter to type. As a bonus, it is slightly more efficient, as it does not need to create, and subsequently deallocate, a new string object which is the concatenation of the strings.
If you are using Python 3.6+, you might want to use f-strings:
print(f'Shortest distance is {shortest_distance[goal]}')
print(f'And the path is {track_path}')
which interpolates values directly into the strings.
Return Value
Your function finds the path, prints the result, and does not return anything. This is not useful if you wish to use the discovered path in any other fashion. The function should compute the result and return it. The caller should be responsible for printing the result.
Document the code
True, you have plenty of comments. But the caller can only guess at what the function's arguments are supposed to be, and what the function returns. You should document this with type-hints and """doc-strings""". Something like:
from typing import Any, Mapping, Tuple, List
Node = Any
Edges = Mapping[Node, float]
Graph = Mapping[Node, Edges]
def dijkstra(graph: Graph, start: Node, goal: Node) -> Tuple[float, List]:
    """
    Find the shortest distance between two nodes in a graph, and
    the path that produces that distance.

    The graph is defined as a mapping from Nodes to a Map of nodes which
    can be directly reached from that node, and the corresponding distance.

    Returns:
        A tuple containing
        - the distance between the start and goal nodes
        - the path as a list of nodes from the start to goal.

        If no path can be found, the distance is returned as infinite, and the
        path is an empty list.
    """
Avoid multiple lookups
In this code:
min_distance_node = None
for node in unseenNodes:
    if min_distance_node is None:
        min_distance_node = node
    elif shortest_distance[node] < shortest_distance[min_distance_node]:
        min_distance_node = node
you continuously look up shortest_distance[min_distance_node]. In compiled languages, the compiler may be able to perform data-flow analysis and determine that the value only needs to be looked up again if min_distance_node changes. In an interpreted language like Python, where a lookup can execute user-defined code and change the value, each and every lookup operation must be executed. shortest_distance[min_distance_node] is two variable lookups plus a dictionary indexing operation. Compare with:
min_distance_node = None
min_distance = infinity
for node in unseenNodes:
    distance = shortest_distance[node]
    if min_distance_node is None:
        min_distance_node = node
        min_distance = distance
    elif distance < min_distance:
        min_distance_node = node
        min_distance = distance
This code will run faster, due to fewer lookups of shortest_distance[min_distance_node] and shortest_distance[node].
But finding the minimum of a list is such a common operation, that Python has a built-in function for doing this: min(iterable, *, key, default). The key argument is used to specify an ordering function ... in this case, a mapping from node to distance. The default can be used to prevent a ValueError if there are no nodes left, which is unnecessary in this case.
min_distance_node = min(unseenNodes, key=lambda node: shortest_distance[node])
In the same vein:
for child_node, weight in path_options:
    if weight + shortest_distance[min_distance_node] < shortest_distance[child_node]:
        shortest_distance[child_node] = weight + shortest_distance[min_distance_node]
        track_predecessor[child_node] = min_distance_node
repeatedly looks up shortest_distance[min_distance_node]; again, two variable lookups and a dictionary indexing operation. Again, we can move this out of the loop:
min_distance = shortest_distance[min_distance_node]
for child_node, weight in path_options:
    if weight + min_distance < shortest_distance[child_node]:
        shortest_distance[child_node] = weight + min_distance
        track_predecessor[child_node] = min_distance_node
Reducing the Working Set
The code to find the min_distance_node: how many nodes does it check? In your toy graph "a" to "h", on the first iteration, it needs to search 8 nodes. With 480 000 nodes, it would need to search 480 000 nodes! In the second iteration, one node has been removed from unseenNodes, so it would search one node fewer. 7 nodes is fine, but 479 999 nodes is a huge number of nodes.
How many nodes does "a" connect to? Only 3. The min_distance_node will become one of those 3 nodes. Searching the remaining nodes (with infinite distances) isn't necessary. If you added to the unseenNodes only the nodes which can be reached at each step of the algorithm, your search space would reduce from several thousand nodes to a couple of hundred.
Moreover, if you maintained these unseenNodes in a sorted order by distance, the min_distance_node would always be the first node in this “priority queue”, and you wouldn’t need to search through the unseenNodes at all.
Maintaining the unseen nodes in a priority queue is easily done through a min-heap structure, which is built into Python (heapq):
from math import inf, isinf
from heapq import heappush, heappop
from typing import Any, Mapping, Tuple, List
Node = Any
Edges = Mapping[Node, float]
Graph = Mapping[Node, Edges]
def dijkstra(graph: Graph, start: Node, goal: Node) -> Tuple[float, List]:
    """
    Find the shortest distance between two nodes in a graph, and
    the path that produces that distance.

    The graph is defined as a mapping from Nodes to a Map of nodes which
    can be directly reached from that node, and the corresponding distance.

    Returns:
        A tuple containing
        - the distance between the start and goal nodes
        - the path as a list of nodes from the start to goal.

        If no path can be found, the distance is returned as infinite, and the
        path is an empty list.
    """
    shortest_distance = {}
    predecessor = {}
    heap = []
    heappush(heap, (0, start, None))
    while heap:
        distance, node, previous = heappop(heap)
        if node in shortest_distance:
            continue
        shortest_distance[node] = distance
        predecessor[node] = previous
        if node == goal:
            path = []
            while node:
                path.append(node)
                node = predecessor[node]
            return distance, path[::-1]
        else:
            for successor, dist in graph[node].items():
                heappush(heap, (distance + dist, successor, node))
    else:
        return inf, []

if __name__ == '__main__':
    graph = {
        'a': {'b': 3, 'c': 4, 'd': 7},
        'b': {'c': 1, 'f': 5},
        'c': {'f': 6, 'd': 2},
        'd': {'e': 3, 'g': 6},
        'e': {'g': 3, 'h': 4},
        'f': {'e': 1, 'h': 8},
        'g': {'h': 2},
        'h': {'g': 2},
    }
    distance, path = dijkstra(graph, 'a', 'e')
    if isinf(distance):
        print("No path")
    else:
        print(f"Distance = {distance}, path={path}")
OSM
By 'OSM' do you mean "Open Street Maps"? If so, you are using the wrong algorithm. Map nodes have coordinates, which can be used as "hints" to direct the search in a given direction. See A* Search Algorithm | {
"domain": "codereview.stackexchange",
"id": 37185,
"tags": "python, python-3.x, pathfinding"
} |
Can you still see Polaris even if you are in the south pole? | Question: I haven't been to south pole but can the Polaris still be viewed if the viewer is in the south pole? Or this question makes no sense at all?
Answer: Currently Polaris is at a declination of a bit over 89 degrees, which means that no one south of 1 degree south latitude can see Polaris. That's almost all of the Southern hemisphere, let alone the South Pole.
Polaris won't be the North Star forever, thanks to axial precession. In about 13000 years or so, Polaris will have a declination of about 46 degrees or so (twice the 23 degree axial tilt). Polaris will thus be visible in 13000 years or so as a wintertime star to all of Africa, all of Australia, and most of South America, but none of Antarctica.
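The latitude limits quoted here follow from simple horizon geometry: ignoring refraction, a star of declination δ never rises for observers south of latitude δ − 90°. A throwaway sketch (the function name is my own):

```python
def southern_visibility_limit(declination_deg):
    # Latitude (degrees, negative = south) below which a star of the
    # given declination never rises, ignoring atmospheric refraction.
    return declination_deg - 90.0

# Polaris today, declination ~ +89 deg: invisible south of ~1 deg S
assert southern_visibility_limit(89.0) == -1.0
# Polaris in ~13000 years, declination ~ +46 deg: visible down to ~44 deg S
assert southern_visibility_limit(46.0) == -44.0
```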
After millions of years, proper motion may make Polaris visible over Antarctica. But then again, being a yellow supergiant, it's unlikely that Polaris will be visible anywhere (without a telescope). It will instead be dead. | {
"domain": "astronomy.stackexchange",
"id": 1673,
"tags": "amateur-observing, star-gazing"
} |
I want to get tilt sensor value for a given time t_0, but the tilt sensor data points are discrete. How do I interpolate between the discrete points? | Question: I have 3-dimensional readings from a tilt sensor (specifically, rotational angles about the X, Y, and Z axes) over time. Let's call these angles S. I want to infer S at a specific time t_0, but the readings are discrete and fall at different time points across the axes. Here is graphic1 to illustrate.
graphic1
I'm thinking the solution will be something like:
Construct a window in time centred on t_0, then smooth/interpolate between the discrete points within that time window to obtain the 3-dimensional value at t_0, like in graphic2.
graphic2
However, I'm not sure on the following:
How wide should the window be such that no information is lost? (maybe something about Nyquist theorem?)
What smoothing algorithm to use?
The readings themselves are noisy.
Am I thinking about this problem the right way? I'm open to other alternatives.
Also worth noting that I have to write an implementation in TypeScript.
Answer: Assuming the signal is prefiltered and uniformly sampled according to Shannon–Nyquist, the textbook solution is to upsample it using some approximation of the ideal (frequency-domain) brickwall lowpass filter, equivalent to a (windowed) sin(x)/x function in the time domain.
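A minimal sketch of that windowed-sinc idea (NumPy here for brevity; the port to TypeScript is mechanical — the function name, the Hann window choice, and the tap count are all illustrative assumptions, not a prescription):

```python
import numpy as np

def sinc_interp(t0, times, values, half_width=8):
    # Estimate the signal at time t0 from uniformly sampled (times, values)
    # using a Hann-windowed sinc kernel with half_width taps on each side.
    T = times[1] - times[0]                  # sample period (uniform sampling assumed)
    k = int(np.argmin(np.abs(times - t0)))   # nearest sample to t0
    lo, hi = max(0, k - half_width), min(len(times), k + half_width + 1)
    u = (t0 - times[lo:hi]) / T              # offsets in units of sample periods
    w = np.sinc(u) * (0.5 + 0.5 * np.cos(np.pi * u / (half_width + 1)))
    return float(np.sum(values[lo:hi] * w) / np.sum(w))
```

Each of the three tilt axes would be interpolated independently at the same t0.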
The problem is similar to showing a smooth (zoomed-in) waveform of discretely sampled audio in an audio editor. Perhaps you can find the code you need in e.g. Audacity?
If the signal contains noise, you might be able to trade noise suppression for signal distortion by knowing the statistics of both and using a Wiener filter.
As this seems to be a control systems project, perhaps you can use a Kalman filter. | {
"domain": "dsp.stackexchange",
"id": 8908,
"tags": "discrete-signals, noise, smoothing, sensor"
} |
Aliasing when interpolating with DFT? | Question: I'm coming from an understanding of the continuous-time Fourier Transform, and the effects of doing a DFT and the inverse DFT are mysterious to me.
I have created a noiseless signal as:
import numpy as np
def f(x):
    return x*(x-0.8)*(x+1)
X = np.linspace(-1,1,50)
y = f(X)
Now, if I were to perform a continuous Fourier transform on the function $f$ given above, restricted to $[-1,1]$, I would expect the sum of the first few Fourier basis components to give a reasonable approximation to the function $f$ (this is an observation specific to our $f$, since it is approximately sine-wavey over $[-1,1]$). The discrete Fourier transform is an approximation to the continuous one, so assuming that my points y are sampled noiselessly from $f$ (which they are by design), then the DFT coefficients should approximate the CFT coefficients (I think). So, I obtain a DFT like so (formulae employed):
def DFT(y):
    # the various frequencies
    terms = np.tile(np.arange(y.shape[0]), (y.shape[0], 1))
    # the various frequencies cross the equi-spaced "X" values
    terms = np.einsum('i,ij->ij', np.arange(y.shape[0]), terms)
    # the "inside" of the sum in the DFT formula
    terms = y * np.exp(-1j*2*np.pi*terms/y.shape[0])
    # sum up over all points in y
    return np.sum(terms, axis=1)

def iDFT_componentwise(fy, X):
    # This function returns the various basis function components of y, sampled at X.
    # The result is a len(X) x len(fy) matrix with each
    # row corresponding to a point in X and each
    # column corresponding to a particular frequency.
    terms = np.tile(np.arange(len(fy)), (X.shape[0], 1))
    terms = fy * np.exp(1j*2*np.pi*np.einsum('i,ij->ij', np.arange(X.shape[0])*fy.shape[0]/X.shape[0], terms)/fy.shape[0])
    return terms/fy.shape[0]

def iDFT(fy, X):
    # summing the Fourier components over all frequencies gives back the original function
    return np.sum(iDFT_componentwise(fy, X), axis=1)
I am interested in inspecting the various basis functions that comprise my signal, so I oversample the domain to get a better-resolved picture:
oversampled_X = np.linspace(-1,1,100)
and proceed to check out my components:
fy = DFT(y)
y_f_components = iDFT_componentwise(fy, oversampled_X)
The positive-frequency components look as expected.
import matplotlib.pyplot as plt
plt.plot(oversampled_X, y_f_components[:,1],c='r')
plt.plot(X,y)
plt.show()
However, the negative frequency components look all weird:
plt.plot(oversampled_X, y_f_components[:,49],c='r')
plt.plot(X,y)
plt.show()
This last image looks like it has problems with aliasing. This, in turn, causes problems when I try to reconstitute the function from the Fourier components (see image below)
plt.plot(oversampled_X, iDFT(fy,oversampled_X),c='r')
plt.plot(X,y)
plt.show()
This problem does not occur when I truncate the continuous time Fourier transform of the function to include the same number of terms (see image below):
import sympy
from sympy import fourier_series
from sympy.abc import x
from sympy.utilities.lambdify import lambdify
f = x*(x-0.8)*(x+1)
fourier_f = fourier_series(f, (x, -1, 1))
lambda_fourier_f = lambdify(x,fourier_f.truncate(25),'numpy')
reconstructed_y = lambda_fourier_f(oversampled_X)
plt.plot(oversampled_X,reconstructed_y,c='r')
plt.plot(X,y)
tl;dr
My oversampled inverse Discrete Fourier Transform has a terrible aliasing problem as illustrated here:
The oversampled inverse Discrete Transform:
As opposed to the oversampled inverse Continuous Transform (truncated to the number of terms in the discrete version).
What is the intrinsic property of the DFT that causes this? If the DFT coefficients approximate the CFT coefficients, then why doesn't the CFT have this problem?
Update: The spectrum
As requested, here is the spectrum of $f$. Note that since $f$ is real, the discrete spectrum (excepting the constant term) is symmetric about n/2. I have not attempted to fix the units.
Update2: Extending the function
Update2: Extending the function
Per @robertbristow-johnson's suggestion, I decided to check out a slightly different function: $x(x-1)(x+1)$ on $[-1,1]$ (so that the "ends" agree), and I have "repeated" the data a number of times end-to-end. The thought was that this would alleviate some of the weird effects. However, the exact same features appear. (One may wish to open this figure by itself in a new window to enable zooming.)
Answer: Let me summarize my understanding of what you're trying to do. You have a real-valued sequence $x[n]$, obtained by sampling a real-valued continuous function, and you computed its DFT $X[k]$. The sequence can be expressed in terms of its DFT coefficients:
$$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi nk/N},\qquad n\in[0,N-1]\tag{1}$$
where $N$ is the length of the sequence.
Now you want to interpolate that sequence, and I believe you're trying to do this in the following way:
$$\tilde{x}[m]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi mk/M},\qquad m\in[0,M-1],\quad M>N\tag{2}$$
This, however, doesn't work. If $M$ happens to be an integer multiple of $N$, then $\tilde{x}[nM/N]=x[n]$ is satisfied, but the other values of $\tilde{x}[m]$ are by no means interpolated values of $x[n]$. Note that these values are not even real-valued.
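This is easy to verify numerically; with an assumed toy sequence, the substitution in $(2)$ reproduces the original samples exactly but produces complex values in between:

```python
import numpy as np

x = np.array([0.0, 1.0, 0.0, -1.0])   # toy real sequence, N = 4
N, M = 4, 8
X = np.fft.fft(x)
m = np.arange(M)
# Eq. (2): naive substitution of m*k/M into the inverse DFT
xt = (X * np.exp(2j * np.pi * np.outer(m, np.arange(N)) / M)).sum(axis=1) / N
assert np.allclose(xt[::2], x)        # original samples come back exactly...
assert not np.allclose(xt.imag, 0.0)  # ...but the in-between values are complex
```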
What you can do is approximately compute the Fourier coefficients of the original continuous function using the (length $N$) DFT of the sampled function, and then approximately reconstruct samples of the function on a dense grid (of length $M>N$):
$$\tilde{x}[m]=\frac{1}{N}\sum_{k=-K}^KX[k]e^{j2\pi mk/M},\qquad m\in[0,M-1]\tag{3}$$
Note that in $(3)$ the summation indices are symmetric, and the number $K$ cannot exceed $N/2$ because that's the number of independent DFT coefficients you have due to conjugate symmetry of $X[k]$ (because $x[n]$ is assumed to be real-valued).
Eq. $(3)$ is just equivalent to zero-padding in the frequency domain, which corresponds to interpolation in the time domain. Note, however, that the zero padding is done in such a way that conjugate symmetry is retained, i.e., the zeros are inserted around the Nyquist frequency, and not simply appended to the DFT coefficients.
With $X[-k]=X[N-k]$ and $X[k]=X^*[N-k]$, Eq. $(3)$ can be rewritten as
$$\begin{align}\tilde{x}[m]&=\frac{1}{N}X[0]+\frac{1}{N}\sum_{k=1}^K\left(X[k]e^{j2\pi mk/M}+X[-k]e^{-j2\pi mk/M}\right)\\&=\frac{1}{N}X[0]+\frac{1}{N}\sum_{k=1}^K\left(X[k]e^{j2\pi mk/M}+X^*[k]e^{-j2\pi mk/M}\right)\\&=\frac{1}{N}X[0]+\frac{2}{N}\textrm{Re}\left\{\sum_{k=1}^KX[k]e^{j2\pi mk/M}\right\},\qquad m\in[0,M-1]\end{align}\tag{4}$$
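For comparison, the zero-padding view of $(3)$ can be sketched directly in NumPy (a rough sketch: real input, even $N$, and no special care taken with the Nyquist bin):

```python
import numpy as np

def fft_interp(x, M):
    # Interpolate a real length-N sequence to length M by zero-padding
    # its DFT around the Nyquist frequency (conjugate symmetry preserved).
    N = len(x)
    X = np.fft.fft(x)
    Xp = np.zeros(M, dtype=complex)
    half = N // 2
    Xp[:half] = X[:half]      # DC and positive frequencies
    Xp[-half:] = X[-half:]    # negative frequencies
    return np.real(np.fft.ifft(Xp)) * (M / N)
```

For a pure tone this is exact: doubling the length of a sampled cosine period yields the same cosine on the finer grid.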
The following Matlab/Octave code illustrates the above:
N = 100;
t = linspace (-1,1,N);
M = 200;
ti = linspace (-1,1,M);
x = t .* (t - 0.8) .* (t + 1);
x = x(:);
X = fft(x);
X = X(:);
Nc = 20; % # Fourier coefficients (must not exceed N/2)
x2 = X(1) + 2*real( exp( 1i * 2*pi/M * (0:M-1)' * (1:Nc-1) ) * X(2:Nc) );
x2 = x2 / N;
plot(t,x,ti,x2)
Note that the approximation of the blue curve by the green curve in the above figure is two-fold: first, there's only a finite number of Fourier coefficients, and second, the Fourier coefficients are only approximately computed from samples of the original function. | {
"domain": "dsp.stackexchange",
"id": 7825,
"tags": "discrete-signals, fourier-transform, continuous-signals, fourier-series"
} |
Behaviour of water at exactly 0 °C | Question: If I had a beaker of water (i.e. many molecules), at exactly 0 °C and at atmospheric pressure, what phase would be encountered?
And would the answer to this also apply to other substances at their freezing/melting point, and out of interest, would the same effect be observed at their other "phase intercepts" too?
Answer: It depends on how you achieved water at 0 °C. If you cooled water down to 0 °C, then it will remain water, unless you remove more heat, at which point it will start to freeze. Water needs to be at 0 °C to freeze.
If you warm ice up from below freezing to 0 °C, then it will remain ice, unless you add more heat to melt the ice. Ice needs to be at 0 °C to melt.
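To make "remove more heat" quantitative: freezing water that is already at 0 °C requires removing its latent heat of fusion, roughly 334 kJ per kilogram; the beaker size below is an assumed value:

```python
L_f = 334e3  # J/kg, latent heat of fusion of water (approximate)
m = 0.25     # kg of water in the beaker (assumed)
Q = m * L_f  # heat to remove, at a constant 0 degC, to freeze it all
assert Q == 83500.0
```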
It will be the same for all substances. | {
"domain": "chemistry.stackexchange",
"id": 1362,
"tags": "water, phase, temperature"
} |
Lazy prime number generator | Question: There are multiple Project Euler problems (e.g. 7, 10) that deal with prime numbers. I decided make a reusable solution that could be used for most of them.
I wrote a method to generate an infinite* prime number sequence that enumerates lazily (that is, computing values only when requested) using yield return. At this point I can use the whole power of LINQ, MoreLINQ, and any other IEnumerable extension method.
It doesn't perform well, though - finding the prime numbers below 2 million takes over a second. I don't think I can use a sieve, as I don't know the upper bound. What could be done to improve it? Any other non-performance improvements?
*The sequence isn't really infinite, only up to 2^63 - 1 or until an OutOfMemoryException.
The prime sequence generator:
class PrimeSequenceGenerator : ISequenceGenerator
{
    public IEnumerable<long> GetSequence()
    {
        var primeCache = new List<long>();
        long currentNumber = 2;
        while (true)
        {
            var isPrime = true;
            var currentRoot = Math.Sqrt(currentNumber);
            foreach (var cachedPrime in primeCache)
            {
                if (currentNumber % cachedPrime == 0)
                {
                    isPrime = false;
                    break;
                }
                if (cachedPrime > currentRoot)
                {
                    break;
                }
            }
            if (isPrime)
            {
                primeCache.Add(currentNumber);
                yield return currentNumber;
            }
            currentNumber++;
        }
    }
}
An interface for all my sequence generators:
interface ISequenceGenerator
{
    IEnumerable<long> GetSequence();
}
Usage:
var generator = new PrimeSequenceGenerator();
// Euler problem 7 - Find the 10001st prime.
var problem7Solution = generator.GetSequence().ElementAt(10001 - 1);
// Euler problem 10 - Find the sum of all primes below two million.
var problem10Solution = generator.GetSequence().TakeUntil(x => x > 2000000).SkipLast(1).Sum();
Answer: For small primes - i.e. up to 2^32 and maybe up to 2^40..2^50ish regions - you can improve speed by several orders of magnitude if you use a Sieve of Eratosthenes.
In order to get decent speed, the factor primes - i.e. those up to sqrt(limit), whatever your limit might be - should be sieved up front and stored in an array, along with the current working offset for each prime (initialised to the square of the prime). The sieving of the small factor primes can itself be done with a - simpler - Sieve of Eratosthenes.
The size of the sieve should be equal to the L1 cache size of the expected target machine (usually 32 KByte), and for maximum efficiency you should use a packed odds-only bitmap. BitArray is slow as molasses, so either use bool[1<<15], or byte[1<<15] with manual bit indexing (i.e. sieve[i>>3] |= 1<<(i&7) to set bit i). Using wheels like mod30 can improve efficiency further, but only in connection with major unrolling of loops and lots of other complications.
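To make the bit-twiddling concrete, here is the same packed odds-only indexing in a minimal (unsegmented) sieve, sketched in Python rather than C# for brevity — bit i stands for the odd number 2i + 1:

```python
def odd_sieve(limit):
    # Packed odds-only Sieve of Eratosthenes: bit i represents 2*i + 1.
    nbits = (limit + 1) // 2
    sieve = bytearray((nbits + 7) // 8)            # one bit per odd number
    i = 1
    while (2 * i + 1) ** 2 <= limit:
        if not (sieve[i >> 3] >> (i & 7)) & 1:     # 2*i + 1 is prime
            p = 2 * i + 1
            for j in range(p * p // 2, nbits, p):  # stepping p bits == stepping 2p numbers
                sieve[j >> 3] |= 1 << (j & 7)      # set bit j: composite
        i += 1
    primes = [2] if limit >= 2 else []
    primes += [2 * i + 1 for i in range(1, nbits)
               if not (sieve[i >> 3] >> (i & 7)) & 1]
    return primes
```

The segmented, presieved C# version below applies the same `j >> 3` / `j & 7` indexing, one cache-sized window at a time.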
Another performance boost is offered by presieving. Instead of clearing the sieve prior to sieving a new window's worth of primes, you simply blast over the sieve a pre-computed bit pattern that corresponds to sieving by a certain number of (odd) small primes. Presieving up to 13 hits a sweet spot in a situation like this, but going up to 17 or 19 can realise further (tiny) speed gains at the cost of much increased size of the precomputed pattern.
In order to illustrate what the code would look like and how the various techniques connect, here's the actual sieving function from my SPOJ PRIME1 submission:
internal static byte[] presieve_pattern;
internal static int presieve_modulus;
internal static int first_sieve_prime_index;
internal static byte[] sieve;
internal static int[] result;
internal static int result_size;
// ...
internal static void sieve_primes_between_ (int m, int n)
{
    if (m > n)
        return;
    Trace.Assert(first_sieve_prime_index == 0 || m > small_odd_primes[first_sieve_prime_index - 1]);
    Trace.Assert((m & 1) != 0 && (n & 1) != 0);
    Trace.Assert(m > 1);
    int sqrt_n = (int)Math.Sqrt(n);
    int max_bit = (n - m) >> 1; // (the +1 is redundant here since both are odd)
    Trace.Assert(max_bit < sieve.Length * 8);
    if (first_sieve_prime_index == 0) // without presieving
        Array.Clear(sieve, 0, (max_bit >> 3) + 1);
    else
    {
        int base_bit = (m >> 1) % presieve_modulus;
        while ((base_bit & 7) != 0)
            base_bit += presieve_modulus;
        Buffer.BlockCopy(presieve_pattern, base_bit >> 3, sieve, 0, (max_bit >> 3) + 1);
    }
    for (int i = first_sieve_prime_index, e = small_odd_primes.Length; i < e; ++i)
    {
        int prime = small_odd_primes[i];
        if (prime > sqrt_n)
            break;
        int start = prime * prime, stride = prime << 1;
        if (start < m)
        {
            start = (stride - 1) - (m - start - 1) % stride;
        }
        else
            start -= m;
        for (int j = start >> 1; j <= max_bit; j += prime)
            sieve[j >> 3] |= (byte)(1 << (j & 7));
    }
    Trace.Assert(m > 3 && m % 3 != 0);
    for (int i = 0, d = 3 - m % 3, e = (n - m) >> 1; i <= e; i += d, d ^= 3)
        if ((sieve[i >> 3] & (1 << (i & 7))) == 0)
            result[result_size++] = m + (i << 1); // m is odd already
}
The code should answer quite a few questions that I simply glossed over in the text. This version is geared towards experimentation with various aspects (like number of presieve primes and so on), and thus it lacks a lot of - uglifying and code-bloating - optimisations.
The wrapper function needs to take care of things like non-sieve primes and normalising arguments, which is why I'm showing it here as well:
internal static void sieve_primes_between (int m, int n)
{
    result_size = 0;
    if (m < small_odd_primes[first_sieve_prime_index])
    {
        for (int i = -1; i < first_sieve_prime_index; ++i)
        {
            int prime = i < 0 ? 2 : small_odd_primes[i];
            if (m <= prime && prime <= n)
            {
                result[result_size++] = prime;
            }
        }
        m = small_odd_primes[first_sieve_prime_index];
    }
    n -= 1 - (n & 1);        // make it odd
    m |= 1;                  // make it odd
    m += m % 3 == 0 ? 2 : 0; // skip if multiple of 3
    sieve_primes_between_(m, n);
}
This illustrates the kind of complications that you have to deal with when certain optimisations are applied, like special handling for the wheel primes (in this case only the number 2) and the presieve primes.
What is not shown here - because it does not fit the profile of the PRIME1 task - is the remembering of offsets. I don't have working C# code that uses the technique at the moment, because I just started learning the language. Remembering the working offsets shouldn't be too difficult to implement, despite the peculiarities of C# (such as the lack of C++-style reference variables).
The code takes 3..4 milliseconds to sieve 10 x 100,000 numbers just below the PRIME1 limit (10^9). A version using bool[] could be marginally faster for the low PRIME1 limit but certain further optimisations require the use of byte[] or int[], especially as C# has major performance problems with indexing.
This type of sieve performs decently in the 32-bit range. It can be modified to operate beyond 32 bits but in higher regions of the 64-bit range it gets bogged down by the sheer number of potential factor primes (hundreds of millions, at the end of the range) each of which needs to be considered for sieving each window unless a technique like bucketing is used. | {
"domain": "codereview.stackexchange",
"id": 18990,
"tags": "c#, algorithm, programming-challenge, primes, lazy"
} |
Free falling elevator question | Question: The question is about inertial and non-inertial frames of reference.
The question:
And my solution for this question:
My question about this problem is, what if $\vec{a}_{0} < \vec{g}$? Then would not the mass fall to the floor?
Answer: Yes, when $\vec{a}_0 < \vec{g}$ (in magnitude), the object will definitely fall to the floor; the question is incorrect. In the non-inertial frame, the ball falls as long as there is a net downward acceleration. However, the mass will take more time to fall than it would on the ground. | {
"domain": "physics.stackexchange",
"id": 62506,
"tags": "reference-frames, inertial-frames"
} |
Why does work depend on distance? | Question: So the formula for work is$$
\left[\text{work}\right] ~=~ \left[\text{force}\right] \, \times \, \left[\text{distance}\right]
\,.
$$
I'm trying to get an understanding of how this represents energy.
If I'm in a vacuum, and I push a block with a force of $1 \, \mathrm{N},$ it will move forwards infinitely. So as long as I wait long enough, the distance will keep increasing. This seems to imply that the longer I wait, the more work (energy) has been applied to the block.
I must be missing something, but I can't really pinpoint what it is.
It only really seems to make sense when I think of the opposite scenario: when slowing down a block that is (initially) going at a constant speed.
Answer: You have to use the distance over which the force acts. Once you stop applying the force, no further work is done, since there is no longer a force acting on the body. | {
"domain": "physics.stackexchange",
"id": 86614,
"tags": "newtonian-mechanics, forces, work, definition, distance"
} |
How do I build the ros-indigo-pointgrey-camera-driver aptitude package for the Ubuntu trusty armhf platform? | Question:
It's pretty simple. It seems that only the ros-jade-pointgrey-camera-driver aptitude package is available for armhf. I'd rather continue to use Indigo if possible, and I know that ros-indigo-pointgrey-camera-driver is available on other platforms.
Originally posted by Bryce on ROS Answers with karma: 1 on 2015-05-13
Post score: 0
Original comments
Comment by ahendrix on 2015-05-13:
Why do you want to build the apt package? Why not just install the package from source?
Comment by Bryce on 2015-05-13:
I have actually tried that, and I get errors when I run the camera.launch file:
Could not find library corresponding to plugin pointgrey_camera_driver/PointGreyCameraNodelet.
Answer:
When I was building Indigo packages for armhf six months ago, the pointgrey driver included a binary driver that only supported x86, so it didn't build on ARM.
It looks like this has been resolved in a recent release ( https://github.com/ros-drivers/pointgrey_camera_driver/issues/32 ), but the fix hasn't made it back to the Indigo builds for armhf yet.
You should be able to check out the latest version from source, build and run it on armhf. Perhaps you just forgot to build it before you tried to run it?
Originally posted by ahendrix with karma: 47576 on 2015-05-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Bryce on 2015-05-14:
Thanks, turns out I'm not used to the new catkin_make system. There's some setup.bash script that catkin_make generates that I needed to source in order for all the paths to line up. That said, I'm not too sure this package is working well on armhf, but that's another conversation. | {
"domain": "robotics.stackexchange",
"id": 21680,
"tags": "ros, pointgrey, armhf"
} |
How to colorize shell while using rosmake? | Question:
Hello All,
I am a ROS user, and I compile and build packages using the "rosmake" command.
I would be more than happy to know how to see the warnings and the errors coloured in my shell (if possible, using different colours).
Thank you all in advance,
Felix.
Originally posted by Felix Tr on ROS Answers with karma: 378 on 2011-10-11
Post score: 3
Answer:
Are you aiming specifically at rosmake (i.e. not make in a ROS package)?
The normal make with CMake is colored from CMake and to get colored warnings and errors from g++ I use colorgcc.
1. Install colorgcc by sudo apt-get install colorgcc
2. Define a directory where you will have executables in your path, e.g. $HOME/bin (create bin in your home directory)
3. Add this directory to your path, i.e. `export PATH=$HOME/bin:$PATH`. Preferably put that line in your .bashrc or similar, so you only have to do this once
4. In that directory create symlinks to colorgcc, i.e.:
~/bin$ ln -s /usr/bin/colorgcc c++
~/bin$ ln -s /usr/bin/colorgcc g++
~/bin$ ln -s /usr/bin/colorgcc gcc
~/bin$ ln -s /usr/bin/colorgcc cc
For CMake only the first line is required.
Originally posted by dornhege with karma: 31395 on 2011-10-11
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Felix Tr on 2011-10-16:
Hello, the last answer was very helpful, thanks a lot!!! I will try it, and I hope there will be no problems :)
Comment by dornhege on 2011-10-13:
None. Just call make, it will be colorized. I've cleared up the description how to make that happen. As a side effect all calls that you do to c++/g++ (also non-ROS) will be colored - which for me is a desired effect.
Comment by Felix Tr on 2011-10-12:
Thank you again, but I still don't get it.
If I use this line in order to compile my code "make" (inside of the needed package).
What are the changes I should do in the command line in order to receive coloured output of the "make" command?
Thank you again.
Comment by dornhege on 2011-10-12:
My answer works only for 2. which I usually do. It seems quicker and during development dependencies don't change, but yes rosmake package would be better for that.
Comment by Felix Tr on 2011-10-12:
Hello, first of all I want to thank you for your answer. The second thing is that it is not clear for me :). I can compile my code in 2 different ways: 1. Just enter the shell and then use "rosmake my_package_name". 2. Enter into the package Ĩ want to build and using "make". I prefer the first way because in such way (as far as I know) dependencies are solved. I would be grateful to receive an answer "How to colourise errors and warnings in this case"? Thanks in advance.
Comment by Susannah on 2013-03-07:
I've followed your instructions but there is still no color output. Rosmake should color its output by default because compiling errors are not red (and thus hard to find).
Comment by dornhege on 2013-03-07:
This only colors cmake, not rosmake. If you build you packages normally with make everything should be colored. When you use catkin instead of rosmake everything should be colored. | {
"domain": "robotics.stackexchange",
"id": 6931,
"tags": "rosmake"
} |
Simple factory pattern for cooking a pizza | Question: I have developed a command line application which prompts the user to initially select an oven and then requests that they cook a pizza. The oven affects the pizza's cooking time. The pizza's information will be printed to the command line as well as the altered cooking time.
I wanted to use the best design principles I knew to ensure future changes were possible.
I would massively appreciate critique on:
My actual implementation of the pattern
Whether or not I chose a suitable design pattern to achieve a satisfactory result
Any obvious sore-thumbs in my code
OvenFactory.php
namespace PizzaShop\Oven;
class OvenFactory
{
/**
* @var array
*/
private $ovens = [
'Gas Oven' => 'PizzaShop\Oven\GasOven',
'Stone Oven' => 'PizzaShop\Oven\StoneOven',
'Hair Dryer' => 'PizzaShop\Oven\HairDryer',
];
/**
* @return array
*/
public function getOvens() {
return $this->ovens;
}
/**
* @param $name
* @return $oven
*/
public function create($name)
{
if (isset($this->ovens[$name]))
{
$oven = new $this->ovens[$name];
return $oven;
}
return null;
}
/**
* @param $name
* @return $oven
*/
public function printOven($name) {
if (isset($this->ovens[$name]))
{
$oven = $this->create($name);
print('--- ' . $oven->name . ' ---') . PHP_EOL;
$alters = 'Decreases';
if($oven->time > 0) {
$alters = 'Increases';
}
print($alters . ' cooking time by: ' . $oven->time . ' Minutes') . PHP_EOL;
return $oven;
}
return null;
}
}
StoneOven.php
<?php
namespace PizzaShop\Oven;
class StoneOven extends Oven
{
function __construct()
{
parent::setName('Stone Oven');
parent::setTime(-10);
}
}
GasOven.php
<?php
namespace PizzaShop\Oven;
class GasOven extends Oven
{
function __construct()
{
parent::setName('Gas Oven');
parent::setTime(0);
}
}
Oven.php
<?php
namespace PizzaShop\Oven;
abstract class Oven
{
/**
* @var
*/
public $time;
/**
* @var
*/
public $name;
/**
* @return mixed
*/
public function getTime() {
return $this->time;
}
/**
* @param float $time
*/
public function setTime($time)
{
$this->time = $time;
}
/**
* @return string
*/
public function getName() {
return $this->name;
}
/**
* @param $name
*/
public function setName($name) {
$this->name = $name;
}
/**
* @param $cookingtime
* @return float $time
*/
protected function getFinalTime($cookingtime) {
$oventime = $this->getTime();
if($oventime > 0) {
$time = $cookingtime + $oventime;
} else {
$time = $cookingtime - abs($oventime);
}
return $time;
}
/**
*
* @param null $pizza
*/
public function bake($pizza) {
print('*************************') . PHP_EOL;
print('--- ' . $pizza->name . ' ---') . PHP_EOL;
print('*************************') . PHP_EOL;
PHP_EOL;
print('Dough rolled out.') . PHP_EOL;
print($pizza->sauce . ' sauce spread') . PHP_EOL;
foreach ($pizza->toppings as $topping) {
print($topping . ' added.') . PHP_EOL;
}
print('Baked in ' .$this->name . ' for ' . $this->getFinalTime($pizza->defaulttime) . ' minutes') . PHP_EOL;
return;
}
}
PizzaFactory.php
<?php
namespace PizzaShop\Pizza;
class PizzaFactory
{
/**
* @var array
*/
private $pizzas = [
'Cheese and Tomato' => 'PizzaShop\Pizza\CheeseAndTomato',
'Chicken Supreme' => 'PizzaShop\Pizza\ChickenSupreme',
'Meat Feast' => 'PizzaShop\Pizza\MeatFeast',
];
public function getPizzas()
{
return $this->pizzas;
}
public function create($name)
{
if (isset($this->pizzas[$name])) {
$pizza = new $this->pizzas[$name];
return $pizza;
}
return null;
}
public function printPizza($name) {
$pizza = $this->create($name);
print('*************************') . PHP_EOL;
print('--- ' . $pizza->name . ' ---') . PHP_EOL;
print('*************************') . PHP_EOL;
PHP_EOL;
}
}
CheeseAndTomato.php
<?php
namespace PizzaShop\Pizza;
class CheeseAndTomato extends Pizza
{
function __construct()
{
parent::setName('Cheese and Tomato');
parent::setSauce('Tomato');
parent::setToppings(['Tomatoes']);
parent::setDefaultTime(20);
}
}
MeatFeast.php
<?php
namespace PizzaShop\Pizza;
class MeatFeast extends Pizza
{
function __construct()
{
parent::setName('Meat Feast Pizza');
parent::setSauce('Tomato');
parent::setToppings(['Chicken','Beef']);
parent::setDefaultTime(25);
}
}
Pizza.php
<?php
namespace PizzaShop\Pizza;
abstract class Pizza {
/**
* @var string
*/
public $name;
/**
* @var array
*/
public $toppings = [];
/**
* @var string
*/
public $sauce;
/**
* @var int
*/
public $time;
/**
* @var object
*/
public $oven;
/**
* @var float
*/
public $defaulttime;
/**
* @return string
*/
public function getName()
{
return $this->name;
}
/**
* @param string $name
*/
public function setName($name)
{
$this->name = $name;
}
/**
* @return array
*/
public function getToppings()
{
return $this->toppings;
}
/**
* @param array $toppings
*/
public function setToppings($toppings)
{
$this->toppings = $toppings;
}
/**
* @return string
*/
public function getSauce() {
return $this->sauce;
}
/**
* @param $sauce
*/
public function setSauce($sauce) {
$this->sauce = $sauce;
}
/**
* @return mixed
*/
public function getOven()
{
return $this->oven;
}
/**
* @param mixed $oven
*/
public function setOven($oven)
{
$this->oven = $oven;
}
/**
* @return int
*/
public function getTime()
{
return $this->time;
}
/**
* @param int $time
*/
public function setTime($time)
{
$this->time = $time;
}
/**
* @return float
*/
public function getDefaultTime()
{
return $this->defaulttime;
}
/**
* @param float $defaulttime
*/
public function setDefaultTime($defaulttime)
{
$this->defaulttime = $defaulttime;
}
}
make-pizza.php (My client)
<?php
require 'vendor/autoload.php';
$ovenFactory = new \PizzaShop\Oven\OvenFactory();
$pizzaFactory = new \PizzaShop\Pizza\PizzaFactory();
//List pizzas and ovens.
$ovenNames = $ovenFactory->getOvens();
$pizzaNames = $pizzaFactory->getPizzas();
//Select an oven
$counter = 1;
foreach ($ovenNames as $ovenName => $key) {
print('Selection: ' . $counter) . PHP_EOL;
$ovenFactory->printOven($ovenName);
$counter++;
}
$ovenNames = array_keys($ovenNames);
$counter = 1;
$ovenSelection = readline("Select your Oven: ");
foreach ($pizzaNames as $pizzaName => $key) {
print('Selection: ' . $counter) . PHP_EOL;
$pizzaFactory->printPizza($pizzaName);
$counter++;
}
$pizzaSelection = readline("Select your Pizza: ");
$pizzaNames = array_keys($pizzaNames);
$pizza = $pizzaFactory->create($pizzaNames[$pizzaSelection-1]);
$oven = $ovenFactory->create($ovenNames[$ovenSelection-1]);
$oven->bake($pizza);
Answer: A few thoughts:
As an academic exercise, I generally like what you have done. Your code is clear, concise, understandable, largely documented (though doc blocks are inexplicably missing in some cases), and applies a reasonable design/inheritance model.
I would encourage you in the future to always strive to model your objects, interfaces, etc. in terms that are as near to the real world as possible. For example, it is not really the type of oven that determines cooking time, but rather the temperature, along with the properties of whatever is being cooked in the oven (size, weight, etc.).
So perhaps if you want to model this in more real-world terms, you could generally say that any food object that you want to use with the oven would implement a Bakeable interface.
And say that Bakeable could have methods like:
// what temp should food be cooked at ideally?
public static function getIdealBakingTemperature();
// determine cooking time for given temperature
public static function getBakingTimeForTemp($temp);
// bake the object, perhaps transforming it - flagging it as done,
// adding flavor from oven (smoke from wood oven, burnt hair from hair dryer).
// Perhaps returns baking time.
public function bake($temp);
// etc.
Any class such as Pizza would need to implement these methods if they wanted to be used with an oven.
Now, perhaps your Oven objects must all implement a CanBake interface which has methods such as:
public static function getMaximumTemp();
public static function getMinimumTemp();
// structured data return of min/max temps so perhaps your caller can determine
// which oven is needed
public static function getTempRange();
// does the oven add any flavor to food?
public static function getFlavor();
// set temp on oven
public function setTemp();
// etc.
Cooking times are now no longer a property of the oven but rather a property of the food, which makes more real-world sense, especially when you want to, for example, implement a DeepDishPizza class that takes longer to cook than your regular pizza. Down the line, if you want to make your pizza Microwaveable, Boilable, Grillable, etc. you would simply implement similar interfaces.
Also, you decouple yourself from having to have an Oven do the baking. In your example, a hair dryer might not be best modeled as an Oven but rather as HairDryer that implements CanBake. You could apply this same interface to a PopcornPopper, a Campfire, a Car (on a hot day), or anything else you might use for the purposes of baking.
This may be more complexity than what is needed for this simple exercise, but I hope you can see how in a more complex, real-world application, you might need to think in more real-world terms.
I don't know that what you have is truly a factory pattern, as it is simply providing a selection interface. When I think of a factory, I am thinking of a class that can get a set of requirements from the caller and return an object that fulfills that contract. So extending your example, perhaps a true oven factory would accept a requirements such as ($temp === 500, getFlavor() === smoke), and return an object implementing CanBake interface that meets the requirements. Oftentimes factories do let callers specifically choose their class of object as well, which you do here, but if that is all the "factory" is doing, what value does it really add?
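To make that idea concrete, a requirements-based factory might look something like this (a sketch in Python for brevity; the class names, `max_temp` attribute, and `min_temp` requirement are illustrative, not part of the reviewed code):

```python
# Sketch: a factory that selects by capability (the caller's requirements)
# rather than by concrete class name. All names here are hypothetical.

class GasOven:
    max_temp = 250

class StoneOven:
    max_temp = 450

class OvenFactory:
    ovens = [GasOven, StoneOven]

    @classmethod
    def create(cls, min_temp):
        # return an instance of the first oven satisfying the requirement
        for oven in cls.ovens:
            if oven.max_temp >= min_temp:
                return oven()
        # fail loudly instead of returning null
        raise LookupError("no oven can reach %d degrees" % min_temp)

oven = OvenFactory.create(min_temp=400)
print(type(oven).__name__)  # StoneOven
```

The caller states what it needs and the factory decides which class fulfills the contract — which is where a factory adds value over a plain selector.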
Perhaps these classes might better be called OvenSelector, PizzaSelector or similar. For this simple application and use case, I think your pattern is OK, regardless as to name.
Consider throwing exceptions in your factory methods if you are unable to instantiate a class (it doesn't match expected value) not just returning null.
In general, you are not doing any data validation inside your public methods. You should think about logging errors and/or throwing exceptions when you get unexpected parameters passed to a method.
I have some question as to some aspects of your approach to inheritance.
Take this example:
class StoneOven extends Oven
{
function __construct()
{
parent::setName('Stone Oven');
parent::setTime(-10);
}
}
Calls to parent::* seem unnecessary. Why not $this->setName()? After all, this class will inherit (and possibly want to override) this method from base class. Invoke the method for this class.
If for some reason you do want to override setName() method in a class and need to reference the parent method, do so within the context of the setName() method. So for example:
public function __construct()
{
$this->setName('Stone Oven');
$this->setTime(-10);
}
public function setName($name) {
// we override parent method to cast name to upper case
$name = strtoupper($name);
parent::setName($name);
}
Or better yet, why not simply override inherited properties since that is truly what you are doing here, not overriding the methods. PHP allows late static binding (as of 5.3). You should consider taking advantage of it.
That might make your class like this:
class StoneOven extends Oven
{
public static $name = 'Stone Oven';
public static $time = -10;
}
There really is no reason to override the constructor in this case since really what is changing for this specific inheritance case are the properties of the oven.
You would of course need to change Oven class to have those properties you want to override declared as static and change all references to these properties to be static. This also might require some wider rethinking of your class hierarchies, as you would need to begin thinking about which properties should really be static (i.e. they are common to every instance of a class) vs. which should be instance-based.
I don't think you are really considering static properties and methods in your current code, when perhaps you should be.
Get in the habit of specifying visibility for every property and method (including constructors) on your classes.
I am not sure why you are using getters and setters as widely as you are, when the properties themselves are all public. My guess is that many of these properties really should not be public. You always need to think of your property visibility in terms of how you want the caller to interact with your classes.
For example:
public => callers can do whatever they want to your properties, getting and setting them without the class having the ability to intervene. There is no reason to have getters/setters with this visibility as they can be circumvented anyway. I find myself having very few practical use cases for this level of visibility when implementing classes that are in production systems and need to truly enforce contract with caller.
protected => callers can only interact with properties via getters/setters or other class methods. If "property" should be visible a getter should be provided. If property should be mutable by caller, then a setter is needed. I find myself using this visibility most often.
private => same as protected from a visibility standpoint, but would only apply to cases where the property should only be available to this specific class (and not to inheriting classes).
__get() and __set() magic methods => only are triggered for cases where direct access is attempted against protected/private properties. They can allow for simple public-like property access to the caller (i.e. $obj->prop), but these come with a cost (see this link for discussion), and I generally don't prefer to use them. | {
"domain": "codereview.stackexchange",
"id": 22457,
"tags": "php, object-oriented, design-patterns, factory-method"
} |
Embedding an RSS feed into a page after it's already loaded | Question: I'm working on reworking our website from a WYSIWYG editor to MVC. I'm all right with server side code, but complete rubbish when it comes to client side Javascript, so I'd appreciate any/all feedback.
The goal
Embed this rss feed from our blog into a page on our main website.
So that it looks like this:
Design choices/rationale
I had originally written this code to go get the RSS feed in the News controller, and bind it to the News view. This caused unacceptable load times for the page. Obviously, the page couldn't render anything until it had gone across the network to a page that may or may not be available and retrieved the RSS feed. It took upwards of 5-6 seconds for anything at all to render in my browser.
So, I decided to try my hand at retrieving this and loading it client side after allowing the "container" of the page to load. I extracted a BlogFeed partial view that is retrieved and inserted by an Ajax call after the page is ready. This allows initial rendering to happen quickly, only later filling in the content that takes time to retrieve.
I considered either creating a separate BlogFeedService class, a separate News controller, or both, but it felt like over engineering a simple site that could almost be served up as static HTML. If either the Home controller, or News code grows any more, I will likely extract a few classes.
I used MVC so that I could extend the site with more dynamic behavior later. I intend on adding a preview of some of our project's functionality through a webapp section of this site.
Questions
Is this a good approach? I'll admit that I basically copy/pasta'd the jquery code, then monkey patched it with a Url.Action to make sure I wasn't hardcoding Urls into it.
Did I do the async/await stuff correctly? I'm never quite sure I've gotten that correct either. Secondarily, was using async await here overkill? It felt right, because reaching out over the network is potentially blocking, but there's not much for the rest of the code to do while it waits.
Again, any/all feedback is appreciated. I've only been doing web development for a few months.
Note: The Twitter code was generated by Twitter and simply embedded into the view. It's not mine.
News.cshtml
@{
ViewBag.Title = "News";
}
<section class="row lead">
<h2>What's up duck?</h2>
<p>
When we have an announcement to make, we publish on <a href="https://rubberduckvba.wordpress.com/" target="_blank"><strong>Rubberduck News</strong></a>.
If you're following us on Twitter or WordPress, you'll be notified right away.
Here's what you might have been missing lately...
</p>
</section>
<div class="row">
<h3>Rubberduck News</h3>
<section id="blogFeed" class="col-md-9">
<!--We'll load this client side to speed up inital page load-->
</section>
@section scripts{
<script>
$(document).ready(function() {
//Url will look something like this '/home/BlogFeed/'
$.get(@String.Concat(Url.Action("BlogFeed"), "/"), function(data) {
$('#blogFeed').html(data);
});
});
</script>
}
<div id="twitterFeed" class="col-md-3">
<a class="twitter-timeline" href="https://twitter.com/rubberduckvba" data-widget-id="689036212724723712" height="1200">Tweets by @@rubberduckvba</a>
<script>
!function(d, s, id) {
var js, fjs = d.getElementsByTagName(s)[0], p = /^http:/.test(d.location) ? 'http' : 'https';
if (!d.getElementById(id)) {
js = d.createElement(s);
js.id = id;
js.src = p + "://platform.twitter.com/widgets.js";
fjs.parentNode.insertBefore(js, fjs);
}
}(document, "script", "twitter-wjs");
</script>
</div>
</div>
BlogFeed.cshtml
@model IEnumerable<System.ServiceModel.Syndication.SyndicationItem>
@if (Model == null)
{
<p class="alert alert-danger">Aww snap! We couldn't retrieve the RSS Feed!</p>
}
else
{
foreach (var post in Model)
{
<div class="row feedSnippet col-md-12">
<h4>
<!--Id is the permalink-->
<a href="@post.Id" target="_blank">@post.Title.Text</a> <small>@post.PublishDate.Date.ToLongDateString()</small>
</h4>
<p>@Html.Raw(post.Summary.Text)</p>
</div>
}
}
HomeController.cs
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.ServiceModel.Syndication;
using System.Threading.Tasks;
using System.Web.Mvc;
using System.Xml;
namespace RubberduckWeb.Controllers
{
public class HomeController : Controller
{
public ActionResult Index()
{
return View();
}
public ActionResult Features()
{
return View();
}
public ActionResult About()
{
return View();
}
public ActionResult Contact()
{
return View();
}
public ActionResult News()
{
return View();
}
public async Task<PartialViewResult> BlogFeed()
{
return PartialView(await GetBlogFeedItemsAsync());
}
private async Task<IEnumerable<SyndicationItem>> GetBlogFeedItemsAsync()
{
const string rssUri = "https://rubberduckvba.wordpress.com/feed/";
using (var httpClient = new HttpClient())
{
var response = await httpClient.GetAsync(new Uri(rssUri));
if (!response.IsSuccessStatusCode)
{
return null;
}
using (var reader = XmlReader.Create(await response.Content.ReadAsStreamAsync()))
{
var feed = SyndicationFeed.Load(reader);
return feed?.Items;
}
}
}
}
}
Answer: A couple of really quick comments:
jQuery provides a load function which simplifies your code. There's also a shorthand for $(document).ready(function() {}); which is simply $(function() {});
Putting that together:
$(function() {
$('#blogFeed').load('@Url.Action("BlogFeed")');
});
That's a bit nicer!
I'd prefer Razor server side commenting. E.g.
<!--Id is the permalink-->
becomes:
@* Id is the permalink *@
That way the comment is never sent to the client browser.
Your comment about async and await:
It felt right, because reaching out over the network is potentially blocking, but there's not much for the rest of the code to do while it waits.
By not blocking, the framework can respond to more client requests! It's definitely the right thing to be doing.
Update: String.Concat explanation:
$.get(@String.Concat(Url.Action("BlogFeed"), "/"),
This code will render the following js:
$.get(/BlogFeed/,
Which is a regex literal but being treated like a string. However, this only works because it's on the default controller and so doesn't have any slashes. If it had been on its own controller e.g. 'News/BlogFeed' it wouldn't have worked.
"domain": "codereview.stackexchange",
"id": 18051,
"tags": "c#, jquery, asp.net-mvc, async-await, rss"
} |
Validity of exponentiation in a polynomial time reduction | Question: I asked this question 10 days ago on cs.stackexchange here but I didn'y have any answer.
In a very famous paper (in the networking community), Wang & Crowcroft present some $\mathsf{NP}$-completeness results of path computation under several additive/multiplicative constraints. The first problem is the following :
Given a directed graph $G=(V,A)$ and two weight metrics $w_1$ and $w_2$ over the edges, define, for a path $P$, $w_i(P)=\sum_{a\in P}w_i(a)$ ($i=1,2$). Given two nodes $s$ and $t$, the problem is to find a path $P$ from $s$ to $t$ s.t. $w_i(P)\leq W_i$, where the $W_i$ are given positive numbers (example: Delay constraint and cost in a network).
The authors prove that this problem is $\mathsf{NP}$-complete by providing a polynomial reduction from PARTITION.
Then they present the same problem except that the metrics are multiplicative, i.e., $w'_i(P)=\prod_{a\in P}w'_i(a)$. In order to prove the multiplicative version is $\mathsf{NP}$-complete, they provide a "polynomial" reduction from the additive version just by putting $w'_i(a)=e^{w_i(a)}$ and $W'_i=e^{W_i}$.
I am very puzzled by this reduction. Since $W'_i$ and $w'_i(a)$ are part of the input (in binary, I guess), then the $|w'_i(a)|$ and $|W'_i|$ are not polynomial in $|w_i(a)|$ and $|W_i|$. Thus the reduction is not polynomial.
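The blow-up is easy to see with a small sketch (using base 2 in place of $e$ purely so the transformed weights stay integral — an assumption for illustration): writing $w$ takes about $\log_2 w$ bits, while writing $2^w$ takes $w+1$ bits, i.e. exponentially many in the input size.

```python
# Encoding blow-up sketch: 2**w stands in for e**w so values stay integral.
for w in [10, 100, 1000]:
    in_bits = w.bit_length()          # bits needed to write w in binary
    out_bits = (2 ** w).bit_length()  # bits needed to write 2**w in binary
    print(w, in_bits, out_bits)       # out_bits = w + 1, exponential in in_bits
```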
Am I missing something trivial or is there a flaw in the proof? My doubt is about the validity of the proof, even if the result is clearly true.
Paper reference : Zheng Wang, Jon Crowcroft. Quality-of-Service Routing for Supporting Multimedia Applications. IEEE Journal on Selected Areas in Communications 14(7): 1228-1234 (1996).
Answer: The proof as presented in the paper is not conclusive.
However, the stated result itself is correct. It can easily be derived by slightly changing the reduction and by using SUBSET PRODUCT instead of SUBSET SUM.
A useful link for the SUBSET PRODUCT problem:
https://cs.stackexchange.com/questions/7907/is-the-subset-product-problem-np-complete | {
"domain": "cstheory.stackexchange",
"id": 3873,
"tags": "np-hardness, reductions"
} |
When a rubber band is stretched, it returns slightly larger. What is this effect called? Is there a material/property that doesn't exhibit this? | Question: A rubber band has a 5 cm diameter (assume a perfect circle). The rubber band is then stretched and released. The rubber band returns to roughly its original shape, but now has a 6 cm diameter. What is this effect called?
Is there a material that returns to its original shape exactly and always, or is that impossible?
Answer: This effect is called the elastic limit: https://en.wikipedia.org/wiki/Yield_(engineering)#Definition
Elastic limit (yield strength)
Beyond the elastic limit, permanent deformation will occur. The elastic limit is therefore the lowest stress at which permanent deformation can be measured. This requires a manual load-unload procedure, and the accuracy is critically dependent on the equipment used and operator skill. For elastomers, such as rubber, the elastic limit is much larger than the proportionality limit. Also, precise strain measurements have shown that plastic strain begins at low stresses.
As stated in the Wikipedia article, as the rubber band comes under higher degrees of stress, it will experience a stretching (strain) which it will recover from if the strain does not surpass the elastic limit.
Also remember that the strain experienced by the material is time-dependent, so if your rubber band is under constant stress (stretched around something for long time) it will be more deformed than if stressed the same amount for only a moment.
For more information on strain accumulating over time: https://en.wikipedia.org/wiki/Creep_(deformation)
Creep (deformation)
The rate of deformation is a function of the material properties, exposure time, exposure temperature and the applied structural load. Depending on the magnitude of the applied stress and its duration, the deformation may become so large that a component can no longer perform its function.
All materials will experience this creep. If a material can be strained by stress (can be stretched), it can be deformed by experiencing a large enough cumulative strain. | {
"domain": "physics.stackexchange",
"id": 43759,
"tags": "elasticity, stress-strain"
} |
Is any particle's energy quantized? | Question:
In the above picture, the author is trying to summarize the correlation between particle and wave packet. In doing so, he assumes that frequency is related to energy as: $E=h\nu$. Is this apparent assumption correct?
Because as far as I know, the quantization is only for oscillators on blackbody surface. And Einstein extended it for light waves.
But when it comes to matter waves, is it still true that the particle's energy can only be an integral multiple of the frequency associated with its matter wave times Planck's constant?
In fact, the slide doesn't even mention the integral multiple. So where am I missing out?
Answer: Any particle has an energy given by the relativistic equation for the energy:
$$ E^2 = p^2 c^2 + m^2 c^4 \tag{1} $$
In this equation the variable $m$ is the rest mass and $p$ is the (relativistic) momentum. The momentum is related to the de Brogie wavelength by:
$$ p = \frac{h}{\lambda} \tag{2} $$
For photons the rest mass is zero, so equation (1) for the energy simplifies to:
$$ E = pc = \frac{hc}{\lambda} = h\nu $$
And that's why the energy of a photon is always equal to $h\nu$. With a massive particle the same equation applies, but with two big differences. Firstly the rest mass is no longer zero, and secondly the phase velocity is not $c$. So we can write the energy of the particle as:
$$ E = \sqrt{p^2c^2 + m^2c^4} = \sqrt{\frac{h^2c^2}{\lambda^2} + m^2c^4} $$
which sadly is a rather less elegant expression.
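As a quick numerical check (a sketch using approximate SI values for $h$, $c$ and the electron mass), a photon and an electron with the same de Broglie wavelength have very different energies, because the electron's rest energy dominates:

```python
import math

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron rest mass, kg

lam = 1e-10  # 0.1 nm de Broglie wavelength, same for both particles

p = h / lam                                              # de Broglie relation
E_photon = p * c                                         # massless: E = pc = h*nu
E_electron = math.sqrt((p * c) ** 2 + (m_e * c ** 2) ** 2)

# The electron's energy is dominated by its rest energy m*c^2 here,
# so E_electron >> E_photon even though the wavelengths are equal.
print(E_photon, E_electron)
```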
However we can define a matter wave frequency:
$$ \nu = \frac{E}{h} $$
and this automatically makes the energy equal to $h\nu$. Proceed with caution though as unlike light, with massive particles this frequency is not something that is directly measured. We can directly measure the de Broglie wavelength by diffracting the particles with some suitable grating, but the de Broglie frequency is rather more abstract. | {
"domain": "physics.stackexchange",
"id": 51472,
"tags": "quantum-mechanics, energy, wave-particle-duality, discrete"
} |
Can the variation of count rate over distance for a disk radioactive source be treated as an electric field of a disk charge problem | Question: So I did an experiment in which the count rate of a disk source of Sr-90 is measured using a plane detector, with the distance varied from ~0.7 cm to 100 cm. Multiplying the count rate by the distance squared (i.e. $\frac{N}{\Delta t}d^2$) I got the following:
So the dip at small distances is mainly due to the inverse square law (which only applies for a point source) failing as a result of the extendedness of the Sr-90 disk source, and the linearly falling region is due to the ionisation of beta particles with air (Sr-90 decays both gamma at 1.76MeV, and Beta at 2.27MeV & 0.57MeV).
I am trying to come up with a geometric correction for the inverse square law accounting for the extendedness of the Sr-90 disk, and tried using the classical electric field at a point z from the centre of a disk charge:
that is since $E_z=2\pi k\sigma[1-\frac{z}{\sqrt{z^2+R^2}}]$
can I use this as the geometric correction to my plot? Like (excluding the ionisation of beta)
$$\frac{N}{\Delta t}(d) \propto [1-\frac{d}{\sqrt{d^2+R^2_{Sr-90-Source}}}]$$
I came across a paper https://www.sciencedirect.com/science/article/abs/pii/S0969804306002314 that uses something entirely different (equation (2) in the paper) $$\frac{N}{\Delta t}(d) \propto \frac{d^2}{R^2}ln(1+\frac{R^2}{d^2})$$ which I cannot derive from first principles, and seems very different to the electric field treatment. Or is diffraction important?
Answer: If you ignore all attenuation and specifics of the detector, then yes. It's the same math. The solution of integrating a $1/r^2$ source over a disk is not anything specific to electrostatics.
The paper you've linked to is specifically interested in both this attenuation and specifics of the detector, so of course they will come up with a different result.
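As a quick numerical sketch (plain Python; R and the d values are arbitrary): if I normalise the electric-field-style correction by $2d^2/R^2$ so that, like the paper's expression, it tends to 1 far from the disk, the two agree at large $d$ but differ appreciably for $d \lesssim R$ — consistent with them modelling different detector responses:

```python
import math

def field_style(d, R):
    # 1 - d/sqrt(d^2 + R^2), rescaled by 2 d^2 / R^2 so it -> 1 as d -> infinity
    return (2 * d**2 / R**2) * (1 - d / math.sqrt(d**2 + R**2))

def paper_style(d, R):
    # (d^2/R^2) * ln(1 + R^2/d^2) from the linked paper; also -> 1 as d -> infinity
    return (d**2 / R**2) * math.log(1 + R**2 / d**2)

R = 1.0  # arbitrary disk radius, in the same units as d
for d in (0.5, 2.0, 20.0):
    print(f"d = {d:4.1f}:  field-style = {field_style(d, R):.4f},  "
          f"paper = {paper_style(d, R):.4f}")
```

Neither normalisation changes the physics of the fit; it just makes the small-$d$ disagreement between the two candidate corrections easy to see.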
Note that your formula assumes that your detector is a point as well. If you want to do better, you should assume that both the source and the detector have some finite extent. | {
"domain": "physics.stackexchange",
"id": 86928,
"tags": "optics, nuclear-physics, radiation, diffraction, radioactivity"
} |
Attach an object to already attached object | Question:
Hi,
I want to know if it is possible to attach an object to an object that is already attached to the robot.
I tried to attach a second object to the first object (already attached to the robot) but got the error Link 'first_object' not found in model 'robot'
I attached the second object as a planning scene diff with these details :
attached_object.link_name = "first_object";
planning_scene.robot_state.attached_collision_objects.push_back(attached_object);
planning_scene.robot_state.is_diff = true;
Originally posted by bhavyadoshi26 on ROS Answers with karma: 95 on 2016-12-14
Post score: 0
Answer:
Like you already guessed, that is not possible. The error Link 'first_object' not found in model 'robot' is coming from the robot_model.cpp. The planning scene updater looks for a LinkModel to attach the object to but cannot find it.
However, if you attach to the link of the robot that the first object is attached to, you will get the exact same result. As the transform between the Link and first_object will be static, and between Link and second_object as well (once you attach them), the transform between first_object and second_object will also be static.
Originally posted by rbbg with karma: 1823 on 2016-12-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by bhavyadoshi26 on 2016-12-14:
I thought of that method but the problem in my case is that I have a host of objects of different sizes that I have to attach. So attaching them to the link the first_object was attached to works for some but not all (The others get displaced from where they are supposed to be).
Comment by bhavyadoshi26 on 2016-12-14:
Can I attach the second_object to the link but assign the TF of the first_object ?
Comment by gvdhoorn on 2016-12-14:
The others get displaced from where they are supposed to be
that would seem to imply that your objects either have their origins not where they should be, or you're not accounting for the additional offset that you need to include between Link and your new objects.
Comment by bhavyadoshi26 on 2016-12-19:
Yes, I had manually shifted the origin of the second_object so that they would be oriented properly when attached to the link. But after reading your comment, I got another idea of how to do it. So I set the origins at the right places and set up some parameters and now it works fine. Thank you! | {
"domain": "robotics.stackexchange",
"id": 26481,
"tags": "moveit"
} |
Elastic Collision: question on Wiki equation | Question: I'm confused over the Wiki equation for an elastic collision. Does anyone know how equations (1) and (2) are formed to result in (3)? I think I'm overlooking some simple algebra.
Consider particles 1 and 2 with masses $m_1$, $m_2$, and velocities $u_1$, $u_2$ before collision, $v_1$, $v_2$ after collision. The conservation of the total momentum before and after the collision is expressed by:
$$m_{1}u_{1}+m_{2}u_{2} \ =\ m_{1}v_{1} + m_{2}v_{2} \tag 1$$
Likewise, the conservation of the total kinetic energy is expressed by:
$$\tfrac12 m_1u_1^2+\tfrac12 m_2u_2^2 \ =\ \tfrac12 m_1v_1^2 +\tfrac12 m_2v_2^2 \tag 2$$
These equations may be solved directly to find $v_1,v_2$ when $u_1,u_2$ are known.
$$
\begin{array}{ccc}
v_1 &=& \dfrac{m_1-m_2}{m_1+m_2} u_1 + \dfrac{2m_2}{m_1+m_2} u_2 \\[.5em]
v_2 &=& \dfrac{2m_1}{m_1+m_2} u_1 + \dfrac{m_2-m_1}{m_1+m_2} u_2
\end{array} \tag 3
$$
Answer: Note that the following procedure is valid only for the one-dimensional elastic collision. If elastic collision happens in two dimensions, then conservation of momentum (your Eq. (1)) gives 2 equations while conservation of energy (your Eq. (2)) gives only 1 equation, in which case you have 3 equations and 4 unknowns (x and y components for two velocities). This means that in the two-dimensional elastic collision you need to know at least one magnitude or one angle for one of the two velocities after the collision in order to be able to solve for other unknowns.
One-dimensional elastic collision
From your Eq. (1) it follows
$$m_1 (u_1 - v_1) = -m_2 (u_2 - v_2) \tag {1a}$$
From your Eq. (2) it follows
$$\frac{1}{2} m_1 (u_1^2 - v_1^2) = -\frac{1}{2} m_2 (u_2^2 - v_2^2)$$
$$m_1 (u_1 - v_1) (u_1 + v_1) = -m_2 (u_2 - v_2) (u_2 + v_2) \tag{2a}$$
From my Eqs. (1a) and (2a) it follows
$$u_1 + v_1 = u_2 + v_2$$
$$\boxed{u_1 - u_2 = -(v_1 - v_2)} \tag {3a}$$
This equation tells you that the relative velocity before and after an elastic collision has the same magnitude and opposite direction, which is an important property of elastic collisions.
You can now combine my Eqs. (1a) and (3a) to reach your Eq. (3)
$$v_2 = u_1 - u_2 + v_1 \qquad \text{and} \qquad v_2 = u_2 + \frac{m_1}{m_2} (u_1 - v_1) \tag {4a}$$
$$\text{give} \qquad v_1 = \frac{m_1 - m_2}{m_1 + m_2} u_1 + \frac{2 m_2}{m_1 + m_2} u_2$$
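These closed-form results are easy to sanity-check numerically against Eqs. (1), (2) and (3a); the masses and velocities below are arbitrary:

```python
def elastic_1d(m1, m2, u1, u2):
    """Final velocities for a 1-D elastic collision, per Eq. (3)."""
    v1 = (m1 - m2) / (m1 + m2) * u1 + 2 * m2 / (m1 + m2) * u2
    v2 = 2 * m1 / (m1 + m2) * u1 + (m2 - m1) / (m1 + m2) * u2
    return v1, v2

m1, m2, u1, u2 = 2.0, 3.0, 1.5, -0.5  # arbitrary test values
v1, v2 = elastic_1d(m1, m2, u1, u2)

# Conservation of momentum (Eq. 1), kinetic energy (Eq. 2), and Eq. (3a)
assert abs(m1*u1 + m2*u2 - (m1*v1 + m2*v2)) < 1e-12
assert abs(0.5*m1*u1**2 + 0.5*m2*u2**2 - (0.5*m1*v1**2 + 0.5*m2*v2**2)) < 1e-12
assert abs((u1 - u2) + (v1 - v2)) < 1e-12
print(v1, v2)
```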
Procedure is similar to get the expression for velocity $v_2$. | {
"domain": "physics.stackexchange",
"id": 87270,
"tags": "energy, momentum, collision"
} |
Multiple if conditions in one block - looking for faster and more efficient code | Question: I am looking for a faster and more efficient way to write this code. What the code does is simply compare the message context (user, application, service and operation) against some rules.
private boolean match(Context messageContext, ContextRule contextRule) {
if(contextRule.getMessageContext().getUser().equals(ContextRuleEvaluator.WILDCARD)
|| (contextRule.getMessageContext().getUser().equals(messageContext.getUser()))) {
if(contextRule.getMessageContext().getApplication().equals(ContextRuleEvaluator.WILDCARD)
|| (contextRule.getMessageContext().getApplication().equals(messageContext.getApplication()))) {
if(contextRule.getMessageContext().getService().equals(ContextRuleEvaluator.WILDCARD)
|| (contextRule.getMessageContext().getService().equals(messageContext.getService()))) {
if(contextRule.getMessageContext().getOperation().equals(ContextRuleEvaluator.WILDCARD)
|| (contextRule.getMessageContext().getOperation().equals(messageContext.getOperation()))) {
return true;
}
}
}
}
return false;
}
Edit: Context and ContextRule are an interfaces
public interface Context {
public String getUser();
public void setUser(String user);
public String getApplication();
public void setApplication(String application);
public String getService();
public void setService(String service);
public String getOperation();
public void setOperation(String operation);
}
public interface ContextRule {
public Context getMessageContext();
public int getAllowedConcurrentRequests();
}
Any suggestions appreciated :)
Answer: First of all, your formatting seems messy, please fix it.
Did you rip it out or is there really zero documentation in there (JavaDoc)? Consider adding some, it will make your life and that of others easier.
First step would be to add an overload to match() which accepts two strings, like this:
private boolean match(Context messageContext, ContextRule contextRule) {
return match(contextRule.getMessageContext().getUser(), messageContext.getUser())
&& match(contextRule.getMessageContext().getApplication(), messageContext.getApplication())
&& match(contextRule.getMessageContext().getService(), messageContext.getService())
&& match(contextRule.getMessageContext().getOperation(), messageContext.getOperation());
}
private boolean match(String value, String contextValue) {
return value.equals(ContextRuleEvaluator.WILDCARD) || value.equals(contextValue);
}
This makes it already a lot easier to read. Naming of the variables is a little bit suboptimal, that's because I could not come up with better names right now.
Ultimately you should see if you can't extend and change Context to contain a match() function. This would require to change Context into an abstract class, so it might not be possible.
public abstract class Context {
public abstract String getUser();
public abstract void setUser(String user);
public abstract String getApplication();
public abstract void setApplication(String application);
public abstract String getService();
public abstract void setService(String service);
public abstract String getOperation();
public abstract void setOperation(String operation);
/**
* Matches this context against the given Context.
*
* @param context
* @return
*/
public boolean match(Context context) {
return match(getUser(), context.getUser())
&& match(getApplication(), context.getApplication())
&& match(getService(), context.getService())
&& match(getOperation(), context.getOperation());
}
private boolean match(String value, String otherValue) {
return value.equals(ContextRuleEvaluator.WILDCARD) || value.equals(otherValue);
}
}
That would allow you to do this in your code:
return contextRule.getMessageContext().match(messageContext);
Which is very readable, even if you don't pack it into a function but call it directly. | {
"domain": "codereview.stackexchange",
"id": 4904,
"tags": "java"
} |
Navigation with Gmapping | Question:
I'm using navigation with gmapping and wanted to know: in global_costmap_params.yaml, should the static map be true or false? I have it set to false and it's working fine. The problem is that when I look in rxgraph, the move_base node DOES NOT subscribe to /map but to /tf published by gmapping. Can anyone explain to me why the /map message does not appear in rxgraph?
Originally posted by ctguell on ROS Answers with karma: 63 on 2014-04-03
Post score: 1
Original comments
Comment by ctguell on 2014-04-04:
@demmeln when I look in rxgraph I only see the move_base node related to navigation; I don't see any costmap, but it appears that the costmap subscribes to /map. So why is there no costmap node, and is /map only useful for localisation, as @AbuIbra said? I have to explain what the use of /map is.
Answer:
You definitely should be running the costmap with a static global map for gmapping. If you're using a laser scanner for global navigation, you will get artifacts and potentially false openings/obstacles whenever gmapping decides to relocalize itself.
Depending on your ros distribution (i.e. hydro and layered costmaps), you can also subscribe to updates on the /map topic, which would make it work properly with gmapping.
Originally posted by paulbovbel with karma: 4518 on 2014-04-04
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by ctguell on 2014-04-04:
@paulbovbel when i put the static global map true the navigation does not work and i get the following error.
You have set map parameters, but also requested to use the static map. Your parameters will be overwritten by those given by the map server. So I don't know what to do. Also, the /tf message that goes from gmapping to move_base — what does it send? @demmeln
Comment by paulbovbel on 2014-04-05:
move_base will need transforms from your robot frame to your costmaps' global frames. That's usually odom for local, and map for global.
Comment by paulbovbel on 2014-04-05:
Even though you set a width, height and resolution for your costmap, those will be overwritten by the map received from gmapping, so you can probably leave those blank.
Comment by ctguell on 2014-04-05:
@paulbovbel thanks. When you say there's a tf from map to global, what's its use? To use map as the global costmap? Thanks
"domain": "robotics.stackexchange",
"id": 17528,
"tags": "navigation, mapping, rostopic, gmapping"
} |
Is there a formal definition of sub-instances or sub-problems? | Question: A decision problem is denoted as a language $L \subseteq \Sigma^{*}$.
For every instance $x \in \Sigma^{*}$, we say $x$ is a yes-instance if $x \in L$ and a no-instance if $x \not\in L$.
For some algorithms (e.g. devide-and-conquer, dynamic programming), we may consider some sub-instances firstly, given an instance $x$.
I want to know whether I can let the substrings or the prefixes (suffixes) of $x$ be (all of) the sub-instances.
Is there a formal definition of sub-instances?
Answer: To add to the other good answers already here consider how subinstances are used in practice. Given an instance (formally a string $x$) we construct another instance (this too we could formally describe as a string $y$). This new string representing our subinstance is the output of some (computable) function of the original instance (We could say $y = f(x)$ for some function $f$). Generally, we assume our original string $x$ has some structure (can be interpreted as a specific type of problem). You could define a language $L' \supseteq L$ that is the set of strings that could be interpreted as the input to the particular type of problem. We know (or at least can check if) $x \in L'$ and then you could say that a "subinstance" should also be in $L'$ (after all, a subinstance should be the same kind of problem). Of course, to make it a "sub"-instance, the new string $y$ should probably be shorter than $x$. Thus, one could define a subinstance of $x \in L'$ to be any string $y \in L'$ such that $|y| < |x|$ and there exists some computable function $f$ such that $f(x) = y$.
However, this definition seems too broad to actually help us do or say anything. But it is kind of fun to think about. | {
"domain": "cs.stackexchange",
"id": 17110,
"tags": "formal-languages, substrings"
} |
Subscribing to ROS and Gazebo topic in one node (ROS callback failing) | Question:
Hi,
I have a node which successfully subscribes to a contacts topic in Gazebo. Now I want the same node to subscribe to a ROS topic which of course runs through a separate callback. Unfortunately the ROS callback is not being run even though the topic is available and publishing. Only the Gazebo callback is running. Is there something I'm missing here? Are the publishers/subscribers from ROS and Gazebo conflicting?
Of course I asked this question on the Gazebo answers page as well, but seeing as it is the ROS subscriber that is not working, I was hoping someone could give me a tip as to why the callback might not be running.
Here is my code:
#include <gazebo/transport/transport.hh>
#include <gazebo/msgs/msgs.hh>
#include <gazebo/gazebo.hh>
#include <nav_msgs/Odometry.h>
#include <ros/ros.h>
#include <geometry_msgs/Vector3.h>
#include <iostream>
ros::Publisher pub;
bool airborne;
// Forces callback function
void forcesCb(ConstContactsPtr &_msg){
geometry_msgs::Vector3 msgForce;
// What to do when callback
if ( _msg->contact_size()!= 0){
// Now publish
msgForce.x = _msg->contact(0).wrench().Get(0).body_1_wrench().force().x();
msgForce.y = _msg->contact(0).wrench().Get(0).body_1_wrench().force().y();
msgForce.z = _msg->contact(0).wrench().Get(0).body_1_wrench().force().z();
pub.publish(msgForce);
} else if (airborne == false) {
msgForce.x = 0;
msgForce.y = 0;
msgForce.z = 0;
} else {
msgForce.x = 0;
msgForce.y = 0;
msgForce.z = 0;
}
pub.publish(msgForce);
}
// Position callback function
void positionCb(const nav_msgs::Odometry::ConstPtr& msg2){
if (msg2->pose.pose.position.z > 0.3) {
airborne = true;
} else {
airborne = false;
}
}
int main(int _argc, char **_argv){
// Set variables
airborne = false;
// Load Gazebo & ROS
gazebo::setupClient(_argc, _argv);
ros::init(_argc, _argv, "force_measure");
// Create Gazebo node and init
gazebo::transport::NodePtr node(new gazebo::transport::Node());
node->Init();
// Create ROS node and init
ros::NodeHandle n;
pub = n.advertise<geometry_msgs::Vector3>("forces", 1000);
// Listen to Gazebo contacts topic
gazebo::transport::SubscriberPtr sub = node->Subscribe("~/quadrotor/base_link/quadrotor_bumper/contacts", forcesCb);
// Listen to ROS for position
ros::Subscriber sub2 = n.subscribe("ground_truth/state", 1000, positionCb);
// Busy wait loop...replace with your own code as needed.
while (true)
gazebo::common::Time::MSleep(20);
// Spin ROS (needed for publisher)
ros::spinOnce();
// Mayke sure to shut everything down.
gazebo::shutdown();
}
Update:
CMakeLists.txt:
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
project(force_measure)
include (FindPkgConfig)
if (PKG_CONFIG_FOUND)
pkg_check_modules(GAZEBO gazebo)
endif()
include(FindBoost)
find_package(Boost ${MIN_BOOST_VERSION} REQUIRED system filesystem regex)
find_package(Protobuf REQUIRED)
find_package(catkin REQUIRED COMPONENTS roscpp gazebo_msgs geometry_msgs nav_msgs)
catkin_package()
include_directories(${GAZEBO_INCLUDE_DIRS})
link_directories(${GAZEBO_LIBRARY_DIRS})
add_executable(listener listener.cpp)
add_executable(listener2 listener2.cpp)
target_link_libraries(listener ${GAZEBO_LIBRARIES} ${Boost_LIBRARIES} ${PROTOBUF_LIBRARIES} pthread)
target_link_libraries(listener2 ${catkin_LIBRARIES} ${GAZEBO_LIBRARIES} ${Boost_LIBRARIES} ${PROTOBUF_LIBRARIES} pthread)
Originally posted by niall on ROS Answers with karma: 47 on 2015-07-29
Post score: 0
Original comments
Comment by cyborg-x1 on 2015-07-29:
Did you check rqt_graph? So that your node is connected to the right topic?
You do not have a "/" in the beginning so there could be: <possible namespace>/ground_truth/state or something like that
Comment by niall on 2015-07-29:
Good questions, thanks! I did check rqt_graph indeed, and force_measure (this node) is connected to (and receiving) the topic /ground_truth/state from the ground_truth node. I also did try the "/"; in fact I had it that way before, but that also didn't help.
Comment by polde on 2017-10-24:
Hi. I know this may be an old question.
How to make "#include <gazebo/transport/transport.hh>" generate no error?
What do I have to modify in order to let this library properly work?
Thanks
Comment by jayess on 2017-10-24:
@polde You'll probably have better luck getting an answer by asking a new question.
Comment by niall on 2017-10-25:
@polde What are you trying to do? This include was never the issue for me (as explained below). I presume you're not including the library in exactly that way (with all the extra quotes)?
Comment by AyaO on 2018-11-21:
Hi, Can you share the CMakeLists.txt of your program ? because I want to implement a similar program but I have a problem when executing catkin_make caused by the CmakeLists.txt
Comment by niall on 2018-11-21:
Sure. But I'll have to dig it up out of a zip far far away. Give me a few days.
In the meantime, what "problem" is catkin_make giving you?
Comment by niall on 2018-12-01:
@AyaO
See below:
https://pastebin.com/f5n6B3za
Comment by jayess on 2018-12-01:
@niall I updated your question to include the code from the pastebin in order to keep the question self-contained in case that site goes down or the pastebin disappears.
Answer:
Your problem is here:
// Busy wait loop...replace with your own code as needed.
while (true)
gazebo::common::Time::MSleep(20); //<<< That is your only while loop ;-)
// Spin ROS (needed for publisher)
ros::spinOnce();//<< executed at the end only once...
try:
// Busy wait loop...replace with your own code as needed.
while (true)
{
gazebo::common::Time::MSleep(20);
// Spin ROS (needed for publisher) // (nope its actually for subscribers-calling callbacks ;-) )
ros::spinOnce();
...
}
Originally posted by cyborg-x1 with karma: 1376 on 2015-07-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by niall on 2015-07-29:
Simple and effective, stupid mistake.. Thanks a bunch!
Comment by cyborg-x1 on 2015-07-29:
Welcome ;-) | {
"domain": "robotics.stackexchange",
"id": 22329,
"tags": "ros, ros-indigo"
} |
Is it ok to use Thread.Sleep and Thread.Interrupt for pausing and resuming Thread like this? | Question: I need to observe a ConcurrentQueue, but to minimize resource usage I want to pause the thread if the queue is empty and resume it from another thread when there is a new entry in the queue. I implemented pausing and resuming of a thread like this:
My Worker, where the DoWork() method is called in a new Thread, looks like this:
public static class Worker
{
public static bool Running = true;
public static void DoWork()
{
while (Running)
{
try
{
Thread.Sleep(Timeout.Infinite);
}
catch (ThreadInterruptedException)
{
DoActualWork();
}
}
}
private static void DoActualWork()
{
//Do something
}
}
I start the thread like this:
Thread workerThread = new Thread(Worker.DoWork);
workerThread.Start();
I interrupt the Thread like this:
workerThread.Interrupt();
I stop the Thread like this:
Worker.Running = false;
Everything is working as expected but I'm not sure if this is how it should be implemented.
Is this best practice?
What can go wrong?
Is there a problem with the static class and the members? (It has to be static because the thread has to be interruptible by different threads.)
Answer: This is pretty bad.
Instead why not use BlockingCollection. Then the loop will be:
try
{
while(true)
{
doSomethingWith(queue.Take());
}
}
catch(InvalidOperationException e)
{
// ignore and cleanup
}
And to stop the thread you need to call CompleteAdding() on the queue. | {
"domain": "codereview.stackexchange",
"id": 12326,
"tags": "c#, multithreading"
} |
Number of Components of a Spinor | Question: I'm trying to develop my understanding of spinors. In quantum field theory I've learned that a spinor is a 4 component complex vector field on Minkowski space which transforms under the chiral representation of the Lorentz group.
Now I've been reading that we can derive spinor representations by looking at the universal covering group of the proper orthochronous Lorentz group, which is $SL(2,\mathbb{C})$. Now $SL(2,\mathbb{C})$ acts on $\mathbb{C}^2$ by the fundamental representation. My book (Costa and Fogli) then calls elements of $\mathbb{C}^2$ spinors.
But the second type of spinor has a different number of components from the first! What is going on here? Could someone clearly explain the link between these two concepts in a mathematically rigorous way? I come from a maths background of group theory and topology, but don't know much representation theory at present.
Many thanks in advance!
Answer: There are a number of mathematical imprecisions in your question and your answer. Some advice: you will be less confused if you take more care to avoid sloppy language.
First, the term spinor either refers to the fundamental representation of $SU(2)$ or one of the several spinor representations of the Lorentz group. This is an abuse of language, but not a bad one.
A particularly fussy point: What you've described in your first paragraph is a spinor field, i.e., a function on Minkowski space which takes values in the vector space of spinors.
Now to your main question, with maximal pedantry: Let $L$ denote the connected component of the identity of the Lorentz group $SO(3,1)$, aka the proper orthochronous subgroup. Projective representations of $L$ are representations of its universal cover, the spin group $Spin(3,1)$. This group has two different irreducible representations on complex vector spaces of dimension 2, conventionally known as the left- and right- handed Weyl representations. This is best understood as a consequence of some general representation theory machinery.
The finite-dimensional irreps of $Spin(3,1)$ on complex vector spaces are in one-to-one correspondence with the f.d. complex irreps of the complexification $\mathfrak{l}_{\mathbb{C}} = \mathfrak{spin}(3,1) \otimes \mathbb{C}$ of the Lie algebra $\mathfrak{spin}(3,1)$ of $Spin(3,1)$. This Lie algebra $\mathfrak{l}_{\mathbb{C}}$ is isomorphic to the complexification $\mathfrak{k} \otimes \mathbb{C}$ of the Lie algebra $\mathfrak{k} = \mathfrak{su}(2) \oplus \mathfrak{su}(2)$. Here $\mathfrak{su}(2)$ is the Lie algebra of the real group $SU(2)$; it's a real vector space with a bracket.
I'm being a bit fussy about the fact that $\mathfrak{su}(2)$ is a real vector space, because I want to make the following point: If someone gives you generators $J_i$ ($i=1,2,3$) for a representation of $\mathfrak{su}(2)$, you can construct a representation of the compact group $SU(2)$ by taking real linear combinations and exponentiating. But if they give you two sets of generators $A_i$ and $B_i$, then by taking certain linear combinations with complex coefficients and exponentiating, you get a representation of $Spin(3,1)$, aka a projective representation of $L$. If memory serves, the 6 generators are $A_i + B_i$ (rotations) and $-i(A_i - B_i)$ (boosts). See Weinberg Volume I, Ch 5.6 for details.
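Those commutation relations can be sanity-checked numerically in the simplest case — the left-handed Weyl representation $(1/2,0)$, taking $A_i = \sigma_i/2$ and $B_i = 0$, so $J_i = \sigma_i/2$ and $K_i = -i\sigma_i/2$. This sketch assumes those combinations are the right ones and uses bare 2×2 complex matrices, no external libraries:

```python
# Check the so(3,1) commutation relations in the (1/2, 0) Weyl rep,
# assuming J_i = sigma_i / 2 and K_i = -i sigma_i / 2.

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def comm(X, Y):  # [X, Y] = XY - YX
    XY, YX = mul(X, Y), mul(Y, X)
    return tuple(tuple(XY[i][j] - YX[i][j] for j in range(2)) for i in range(2))

def scal(c, X):
    return tuple(tuple(c * X[i][j] for j in range(2)) for i in range(2))

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

sigma = (((0, 1), (1, 0)),        # sigma_1
         ((0, -1j), (1j, 0)),     # sigma_2
         ((1, 0), (0, -1)))       # sigma_3
J = [scal(0.5, s) for s in sigma]
K = [scal(-0.5j, s) for s in sigma]

assert close(comm(J[0], J[1]), scal(1j, J[2]))    # [J1, J2] =  i J3  (rotations)
assert close(comm(J[0], K[1]), scal(1j, K[2]))    # [J1, K2] =  i K3  (K is a vector)
assert close(comm(K[0], K[1]), scal(-1j, J[2]))   # [K1, K2] = -i J3  (boosts don't close)
print("Lorentz algebra relations hold in the (1/2, 0) representation")
```

Repeating the check with $A_i = 0$, $B_i = \sigma_i/2$ should flip the sign of $K_i$ and still close the algebra — that is the right-handed Weyl representation.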
The upshot of all this is that complex projective irreps of $L$ are labelled by pairs of half-integers $(a,b) \in \frac{1}{2}\mathbb{Z} \times \frac{1}{2}\mathbb{Z}$. The complex dimension of the representation labelled by $a$, $b$ is $(2a + 1)(2b+1)$.
The left-handed Weyl-representation is $(1/2,0)$. The right-handed Weyl representation is $(0,1/2)$. The Dirac representation is $(1/2,0)\oplus(0,1/2)$. The defining vector representation of $L$ is $(1/2,1/2)$.
The Dirac representation is on a complex vector space, but it has a subrepresentation which is real, the Majorana representation. The Majorana representation is a real irrep, but in 4d it's not a subrepresentation of either of the Weyl representations.
This whole story generalizes beautifully to higher and lower dimensions. See Appendix B of Vol 2 of Polchinski.
Figuring out how to extend these representations to full Lorentz group (by adding parity and time reversal) is left as an exercise for the reader. One caution however: parity reversal will interchange the Weyl representations.
Sorry for the long rant, but it raises my hackles when people use notation that implies that some vector spaces are spheres. (If it's any consolation, I know mathematicians who get very excited about the difference between a representation $\rho : G \to Aut(V)$ and the "module" $V$ on which the group acts.) | {
"domain": "physics.stackexchange",
"id": 5768,
"tags": "quantum-field-theory, quantum-spin, group-theory, group-representations, spinors"
} |
Correct way to add AWGN to a signal | Question: I have a signal, S whose bandwidth is bw Hertz and is sampled at fs Hertz.
To contaminate it with a noise corresponding to SNR x dB, I used the matlab function
out = awgn(S,x);
But I have came across some codes doing it instead like the following.
out = awgn(S,corrected_x);
Where corrected_x = x + 10*log(bw/fs);
Which is the correct method and why?
Answer: Short answer
Add 10*log10(bw/fs) to take the oversampling into account (note that in MATLAB log is the natural logarithm, so log10 is the one that gives dB), because the awgn() function specifies the signal-to-noise ratio per sample, in dB.
Longer answer
The discrete time AWGN model is
$$Y = X+N$$
where X is data from continuous time $X(t)$, N is noise sequence from AWGN process $N(t)$ and Y is receive symbols.
If $X(t)$ is characterized by its baseband equivalent limited between $[-W/2, +W/2] \textrm{ (Hz)}$, then we can identify $X(t)$ by observing $Y$ at a rate $W$ symbols per second. See chapter 2, sampling theorem and Theorem of irrelevance.
Call $P$ the average power (joules per second). The energy per sample is $E_s = P/W$ and the noise energy per sample is $N_0$. The signal-to-noise ratio per symbol is defined as $\mathrm{SNR} = \frac{P}{W N_0} = E_s/N_0$.
If the complex baseband signal is oversampled by a factor $\alpha = f_s/W$, the noise sample power is still $N_0$ while the data sample power is reduced $\alpha$ times, thus $E_s/N_0 = \mathrm{SNR} \times W/f_s$.
In dB, $E_s/N_0 = \mathrm{SNR} + 10\log_{10}{(W/f_s)}$.
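A tiny sketch of that correction in plain Python (not MATLAB; the function name and values are illustrative):

```python
import math

def per_sample_snr_db(snr_db, bw, fs):
    """E_s/N0 in dB for a signal of bandwidth bw sampled at fs (fs >= bw).

    Oversampling by fs/bw spreads the signal power over more samples,
    so the per-sample SNR drops by 10*log10(bw/fs), a negative quantity.
    """
    return snr_db + 10 * math.log10(bw / fs)

# Example: 1 kHz bandwidth sampled at 8 kHz, target SNR of 20 dB
print(per_sample_snr_db(20.0, 1e3, 8e3))
```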
The awgn() function adds AWGN at the previously defined $E_s/N_0$. | {
"domain": "dsp.stackexchange",
"id": 5609,
"tags": "matlab, noise, gaussian, snr, bandwidth"
} |
Probabilistic vs Statistical interpretation of Double Slit experiment | Question: Why is it assumed that the results seen in the double slit experiment are probabilistic, and not just a statistical result of some unknown variable or set of variables within the system?
Answer:
Ever since the origination of quantum mechanics, some theorists have searched for ways to incorporate additional determinants or "hidden variables" that, were they to become known, would account for the location of each individual impact with the target.
Wikipedia
In my opinion, the "were they to become known" is the tricky bit (to put it mildly). And, as things stand, for prediction purposes one might as well assume an inherently probabilistic nature.
(I'll add to this later.) | {
"domain": "physics.stackexchange",
"id": 9968,
"tags": "quantum-mechanics, statistical-mechanics, double-slit-experiment, probability"
} |
Time period of piston performing SHM | Question: Suppose I have a closed piston-cylinder system which contains an ideal gas, and it is being compressed adiabatically by the piston, which has a vertical orientation; in doing so, the piston is actually undergoing SHM. Now if I use a similar piston in a horizontal orientation (keeping in mind the dimensions of the cylinder), would the time period of the SHM be affected? Would anything change in this orientation? The equation $pV^\gamma=\text{constant}$ would still be valid, but would anything change at all?
Answer: The equilibrium position of the piston would change due to the weight of the piston. However, this would not affect the time period or frequency, since the weight acts throughout the process. In the second orientation the equilibrium position would simply be different. For more accuracy you could also take into account the change in external pressure between the two cases; yet this too has no effect on the time period, only on the equilibrium position. | {
"domain": "physics.stackexchange",
"id": 65412,
"tags": "thermodynamics, pressure, work, gas, volume"
} |
Read message structure at runtime from advertised topic | Question:
I have been looking in the roscore source code with no success. I want to "read" the structure of a given message (by topic name for example). By structure I mean a tree showing the fields a message has and their data types. The format doesn't really matter (XML, JSON etc.).
Is this possible to do in C++? One of my goals would be to dynamically generate a message container at runtime in c++ (maybe eventually a subscriber too).
The only solution I have found right now is to run rosmsg show MSG and parse its output.
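For what it's worth, that fallback is only a few lines — a sketch of parsing the indentation of rosmsg show-style output into a nested tree. The sample message text below is made up, and message constants (e.g. FOO=1) and array markers are not handled:

```python
import json

# Hypothetical rosmsg-show-style listing: "type name" lines,
# two spaces of indentation per nesting level.
SAMPLE = """\
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
float64 x
float64 y
"""

def parse_rosmsg(text):
    root = {}
    stack = [(-1, root)]  # (indent, container) pairs
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        ftype, fname = line.split()[:2]
        # pop back to the enclosing container for this indent level
        while indent <= stack[-1][0]:
            stack.pop()
        node = {"type": ftype, "fields": {}}
        stack[-1][1][fname] = node
        stack.append((indent, node["fields"]))
    return root

print(json.dumps(parse_rosmsg(SAMPLE), indent=2))
```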
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2017-09-07
Post score: 0
Answer:
No easy core support for this afaik, but I believe ros_type_introspection can do what you want.
Originally posted by gvdhoorn with karma: 86574 on 2017-09-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mehdi. on 2017-09-07:
Thanks, that is exactly what I needed. | {
"domain": "robotics.stackexchange",
"id": 28786,
"tags": "ros, c++, message-generation, xmlrpc, rosmsg"
} |
PassThrough nodelet with more than one field | Question:
Dear ROS users, I am using the passthrough nodelet (link text) on a point cloud. Setting one field parameter (like in the tutorial) works. What I want to do is to use the nodelet to filter all the fields (x, y, z) at the same time. Since there is only one "filter_field_name" param, how can I achieve that?
Thanks!
Originally posted by rastaxe on ROS Answers with karma: 620 on 2016-02-17
Post score: 1
Original comments
Comment by arifzaman on 2021-05-14:
Respected member, I am very new to ROS and I want to filter x, y, z all at once — do you have a launch file for this? Kindly share it; I would be very thankful.
Answer:
Although it doesn't seem to be currently documented on the wiki, the pcl_ros package also has a nodelet available for a crop box filter (feel free to add your own documentation!). You can see the description of the crop box filter here, and the dynamic reconfigure config file for the crop box filter should tell you what parameters to set and what they do.
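For anyone wanting a concrete starting point (as the comment above asks): a sketch of what loading the crop box nodelet might look like in a launch file. The pcl/CropBox nodelet type and the min_*/max_* parameter names are assumptions taken from the dynamic reconfigure config; the input topic here is just a placeholder:

```xml
<launch>
  <!-- Sketch only: topic names are placeholders, parameter names assumed
       from the CropBox dynamic reconfigure config. -->
  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen"/>

  <node pkg="nodelet" type="nodelet" name="crop_box"
        args="load pcl/CropBox pcl_manager" output="screen">
    <remap from="~input" to="/camera/depth/points"/>
    <rosparam>
      min_x: -1.0
      max_x:  1.0
      min_y: -1.0
      max_y:  1.0
      min_z:  0.0
      max_z:  2.0
    </rosparam>
  </node>
</launch>
```

If the usual pcl_ros nodelet convention holds, the filtered cloud should then appear on crop_box/output.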
Originally posted by jarvisschultz with karma: 9031 on 2016-02-18
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 23809,
"tags": "pcl"
} |
What publication first introduced the concept of a non-deterministic Turing machine? | Question: What publication first introduced the concept of a non-deterministic Turing machine?
Turing did not define the concept in his 1936 paper.
Answer: This will be a partially inconclusive answer, unfortunately. Hopefully someone more knowledgeable can chime in and confirm the missing details. Let me summarize what I could find out so far.
Nondeterministic automata were introduced by Rabin and Scott in 1959. They also define two-way automata (which is a prelude to the LBAs introduced by Myhill the following year), but I could not find any mention of a nondeterministic two-way automaton (let alone a Turing machine) there.
The next paper (in chronological order) in which both nondeterminism and Turing machines appear is "On computability by certain classes of restricted Turing machines" by Fischer, 1963. In it, he defines general machines and also allows them to be nondeterministic. However, in the proof of Theorem 5 there he curiously states that some results (regarding deterministic and nondeterministic machines) are "well-known" to be equivalent for Turing machines. Hence, one is led to believe the notion must have arisen sometime between the Rabin and Scott paper and Fischer's paper in 1963. Fortunately, Fischer gives a clue as to where he got those results from:
The author is indebted to Dr. J. Evey for many of the ideas contained herein. He has worked closely with the author in this area and, in particular, is responsible for the results constituting parts (c) and (d)
of Theorem 3 and for the main construction in part (e).
Theorem 3 in question is the computational equivalence of DTMs and NTMs.
Indeed, roughly a month after Fischer's paper Evey published "Application of pushdown-store machines," in which we also find nondeterminism applied to general machines (including Turing machines), and in which he proves the result that Fischer mentions (and extending it to some other types of machines). Fischer's work is actually listed in the bibliography there as well, but as far as I can tell there is no explicit citation of it in the text. The paper is most likely a summary of Evey's doctoral thesis (similarly titled), but I could not find it online; maybe someone with access to it can confirm that.
What I conclude then is that, unless I have made a gross oversight somewhere, the notion of an NTM must have just been evident to the research community once Rabin and Scott had brought nondeterminism to the table. Evey deserves credit for proving the computational equivalence of DTMs and NTMs, but it is hard to tell from the evidence at hand if he was the first to mention NTMs as a concept too (possibly in his doctoral thesis). | {
"domain": "cs.stackexchange",
"id": 14472,
"tags": "turing-machines, reference-request, nondeterminism"
} |
Can this sum be done without integrating? | Question: By using conservation of mechanical energy, we can just be concerned with the initial and final positions, and the velocity can be found out. Am I not right?
But I am not getting the correct answer. Can someone explain when to integrate to find the velocity (and similar quantities), and when to just directly use conservation of energy?
Answer: This question can be done using conservation of energy. The only point to be careful about here is that the change in potential energy has to be computed with respect to the centre of mass of the hanging part, as this is a continuous body.
Also the potential energy of rope is only due to the hanging part of the rope, so the mass has to be taken carefully.
Energy conservation can be used anywhere keeping in mind that you are considering the work of all forces involved in the given system. | {
"domain": "physics.stackexchange",
"id": 56760,
"tags": "homework-and-exercises, newtonian-mechanics, energy-conservation, conservation-laws, string"
} |
Psipred installation | Question: I am trying to install psipred in a unix server.
Psipred is installed, but fails because it cannot find the correct database files. I have downloaded uniref90.fasta. Psipred also loads the old legacy NCBI BLAST-2.2.26 and BLAST+.
runpsipred and runpsipredplus have been patched to call the correct version of BLAST.
I have run a few commands to index the uniref90 database:
pfilt uniref90.fasta > uniref90filt
formatdb -t uniref90filt -i uniref90filt
makeblastdb -in uniref90.fasta -dbtype prot -input_type fasta -out uniref90.fasta
I then ran psipred and got this error:
$ ./psipred/example/example.fasta
Running PSI-BLAST with sequence ./psipred/example/example.fasta ...
[blastpgp] WARNING: Unable to open uniref90.pin
[blastpgp] WARNING: Unable to open uniref90.pin
FATAL: Error whilst running blastpgp - script terminated!
makeblastdb created 26 pin files, but not uniref90.pin.
Answer: Try with:
makeblastdb -in uniref90.fasta -dbtype prot -input_type fasta -out uniref90
The `-out` is the name of the DB. So if you use `uniref90.fasta`, it creates `uniref90.fasta.pin`, while the script seems to be expecting `uniref90.pin`. | {
"domain": "bioinformatics.stackexchange",
"id": 638,
"tags": "software-installation"
} |
Is the earth expanding? | Question: I recently saw this video on youtube:
http://www.youtube.com/watch?v=oJfBSc6e7QQ
and I don't know what to make of it. It seems as if the theory has enough evidence to be correct, but where would all the water have appeared from? Would that much water have appeared over 60 million years? Also, what would cause it to expand? The video suggests that since the time of the dinosaurs the earth has doubled in volume; how much of this is, and can be, true?
[could someone please tag this, I don't know what category this should come under]
Answer: It's not that the earth is expanding; instead the primary effect is that the earth's surface is shrinking. The effect occurs because the earth's surface doesn't remain flat. Instead, it gets tilted and folded. Some parts of the crust get subducted; once they disappear, what's left appears smaller.
Over any appreciable distance, rock has good compressive strength but negligible tensile strength. Consequently, when rock is pushed together it becomes thicker and taller (thereby decreasing the surface area; see the demonstration here: http://en.wikipedia.org/wiki/Mountain_building ). But when rock is pulled apart it instead tends to form cracks that are obviously new crust (and so are not counted when trying to determine if the earth's surface is expanding).
The overall effect is that the size of the older portion of the earth's surface is smaller than the actual current surface of the earth but this is only an effect of standard geology. | {
"domain": "physics.stackexchange",
"id": 98580,
"tags": "mass, earth, geophysics"
} |
PRNG for generating numbers with n set bits exactly | Question: I'm currently writing some code to generate binary data. I specifically need to generate 64-bit numbers with a given number of set bits; more precisely, the procedure should take some $0 < n < 64$ and return a pseudo-random 64-bit number with exactly $n$ bits set to $1$, and the rest set to 0.
My current approach involves something like this:
Generate a pseudorandom 64-bit number $k$.
Count the bits in $k$, storing the result in $b$.
If $b = n$, output $k$; otherwise go to 1.
This works, but it seems inelegant. Is there some kind of PRNG algorithm which can generate numbers with $n$ set bits more elegantly than this?
Answer: What you need is a random number between 0 and ${ 64 \choose n } - 1$. The problem then is to turn this into the bit pattern.
This is known as enumerative coding, and it's one of the oldest deployed compression algorithms. Probably the simplest algorithm is from Thomas Cover. It's based on the simple observation that if you have a word that is $n$ bits long, where the set bits are $x_k \ldots x_1$ in most-significant bit order, then the position of this word in the lexicographic ordering of all words with this property is:
$$\sum_{1 \le i \le k} { x_i \choose i}$$
So, for example, for a 7-bit word:
$$i(0000111) = { 2 \choose 3 } + {1 \choose 2 } + {0 \choose 1} = 0$$
$$i(0001011) = { 3 \choose 3 } + {1 \choose 2 } + {0 \choose 1} = 1$$
$$i(0001101) = { 3 \choose 3 } + {2 \choose 2 } + {0 \choose 1} = 2$$
...and so on.
To get the bit pattern from the ordinal, you just decode each bit in turn. Something like this, in a C-like language:
uint64_t decode(uint64_t ones, uint64_t ordinal)
{
    uint64_t bits = 0;
    for (uint64_t bit = 63; ones > 0; --bit)
    {
        uint64_t nCk = choose(bit, ones);
        if (ordinal >= nCk)
        {
            ordinal -= nCk;
            bits |= 1ULL << bit;  /* 1ULL, not 1: a plain int 1 << bit overflows for bit >= 31 */
            --ones;
        }
    }
    return bits;
}
Note that since you only need binomial coefficients up to 64, you can precompute them.
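To make the whole pipeline concrete, here is a small Python transcription of the same unranking idea (my own sketch; the function and variable names are mine, not from Cover's paper): draw a uniform ordinal below $\binom{64}{n}$ and decode it into a bit pattern.

```python
import random
from math import comb

def decode(n_bits, ones, ordinal):
    # Unrank `ordinal` (0-based, lexicographic) into an n_bits-bit word
    # with exactly `ones` set bits, mirroring the C-like sketch above.
    bits = 0
    for bit in range(n_bits - 1, -1, -1):
        if ones == 0:
            break
        nCk = comb(bit, ones)
        if ordinal >= nCk:
            ordinal -= nCk
            bits |= 1 << bit
            ones -= 1
    return bits

# The worked 7-bit examples from above:
assert decode(7, 3, 0) == 0b0000111
assert decode(7, 3, 1) == 0b0001011
assert decode(7, 3, 2) == 0b0001101

# A uniformly random 64-bit word with exactly n set bits:
n = 20
word = decode(64, n, random.randrange(comb(64, n)))
assert bin(word).count("1") == n
```

Since the map from ordinals to patterns is a bijection, a uniform ordinal gives a uniform pattern, with no rejection loop at all.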
Cover, T., Enumerative Source Encoding. IEEE Transactions on Information Theory, Vol IT-19, No 1, Jan 1973. | {
"domain": "cs.stackexchange",
"id": 18896,
"tags": "algorithms, information-theory, coding-theory, pseudo-random-generators"
} |
Is this a Black Hole? | Question: While researching about galaxies in Google sky, I came upon a strange area.
Co-ordinates: 11h 6m 15.0 seconds | -77° 22' 9.8"
A lot of space around this body is empty, is it possible that the body is a black hole?
The reason I'm asking is because of this image -
This is a photograph, and the readings are of a real black hole.
It looks like the one on the co-ordinates which I observed with all the dust around it.
The co-ordinates of the real black hole are - 03h 19m 47.60s | +41° 30' 37.00"
Answer: Going by those coordinates, the dark area you're seeing in Google Sky is the Chamaeleon I cloud. It's a nearby star-forming region that is part of the Chamaeleon complex (see also the "Notable features" section of this article on the Chamaeleon constellation). The star nearest to the coordinates you provided is the T Tauri star TYC 9414-787-1. | {
"domain": "astronomy.stackexchange",
"id": 3487,
"tags": "black-hole, observational-astronomy, coordinate"
} |
Multiplication Exercise Generator | Question: I'm new to programming in Python, and I am trying to create a multiplication exercise generator. I've got the following code for the solution, but I'm wondering if there is any more elegant solutions than this one.
Note: I have not covered lists, tuples, or dictionaries as of yet so I'm using the absolute basics to create this program.
#solution.py
#A script that allows a student to learn multiplication using the random
#module. Answers will be made and the script must display to try again
#or to present the next questions.

from random import randrange

answer = 0

def generate_question():
    global answer
    #Random numbers
    number_one = randrange(1, 13)
    number_two = randrange(1, 13)
    #Correct answer
    answer = number_one * number_two
    #Display question
    print "What is %d times %d?" % (number_one, number_two)

def check_answer(user_input):
    global answer
    #Answer check
    if user_input == answer:
        print "Very good!"
        generate_question()
    else:
        print "Try again!"

#Script Section
generate_question()
while True:
    #Answer to question
    user_input = int(raw_input("Answer: "))
    check_answer(user_input)
Answer: You're using functions, that's good. But your program would be easier to understand, and better, if it didn't use global state.
As a rule of thumb: don't put things in global scope.
Whether it be via global or by not putting your code in a function.
Otherwise the global scope becomes unreliable and you stop knowing what your code is doing.
I would say that you should move check_answer into your while loop.
This is as rather than using generate_question you should use break.
This at the moment would end the program, but if you add another while True loop you can get the same functionality.
This is better as then you are starting to separate generating a question and checking if it's correct.
As we now only rely on the outer while to correctly setup the inner while, rather than changing globals to get things to work.
And so I'd use:
from random import randrange

def generate_number():
    return randrange(1, 13)

def main():
    while True:
        num1, num2 = generate_number(), generate_number()
        answer = num1 * num2
        print "What is %d times %d?" % (num1, num2)
        while True:
            user_input = int(raw_input("Answer: "))
            if user_input == answer:
                print "Very good!"
                break
            else:
                print "Try again!"
You should notice that this is your code. With a second while loop and a break.
But you should notice it's easier to understand what it's doing, when coming as a third party.
As at first, I didn't notice your code asks more than one question.
Just so you know you're already using a tuple! In print "What is %d times %d?" % (num1, num2).
The (num1, num2) is creating a tuple.
If you decided to use this rather than num1 and num2 then you would change num1 to num[0] and num2 to num[1].
This is as in most programming languages they start lists with zero rather than one.
Lists actually work the same way as tuples, but are created using [] rather than ().
The only difference is you can change num[0] when using a list, but you can't with a tuple.
You should also learn what try-except is. If I enter abc rather than 123 into your program it will exit abruptly.
What these do is try to run the code; if an error occurs, the except code runs instead.
But I'll warn you now: never use a 'bare except'. Which is an except that doesn't say what it's protecting against.
This is as if the code raises a different error than you expect, you'll handle it as if it were another, which can hide bugs.
And makes those bugs really hard to find.
And so I'd further change the code to use:
from random import randrange

def generate_number():
    return randrange(1, 13)

def main():
    while True:
        num1, num2 = generate_number(), generate_number()
        answer = num1 * num2
        print "What is %d times %d?" % (num1, num2)
        while True:
            try:
                user_input = int(raw_input("Answer: "))
            except ValueError:
                print "That is not a valid number."
                continue
            if user_input == answer:
                print "Very good!"
                break
            else:
                print "Try again!"

main() | {
"domain": "codereview.stackexchange",
"id": 23078,
"tags": "python, beginner, python-2.x, random, quiz"
} |
Calculating aerodynamic force on open surface | Question: I've been having quite a heated debate with a friend of mine on a subject that seems so simple, but we can't find a definitive agreement.
In the case of a backwards facing step, a lower pressure zone is created behind the step. On the backward facing face, the pressure coefficient (P-Pinf)/Qinf (Pinf: farfield pressure; P: local pressure: Qinf: farfield dynamic pressure) is negative. So far we both agree.
In the figure describing the problem, the step is considered solid and infinite in all directions
Where we disagree is here: What is the direction of the aerodynamic force on the step?
Two possibilities:
The load is a pressure load and therefore is applied on the face of the step, resulting in a load towards the left of the figure (sure, the pressure is less than the atmospheric pressure, but it still "pushes" on the wall, and therefore the load is towards the left).
The load is a function of (P-Pinf) in which case it is directed towards the right of the figure (in this case the depression tries to "suck" the step towards the right).
Any help would be quite appreciated. All the better if some sort of source material is available :)
Thanks a lot
Answer:
In the figure describing the problem, the step is considered solid and infinite in all directions
I presume that the solid body extends infinitely in the leftward and rightward directions (in addition to the direction perpendicular to the plane of drawing). In that case the only vertical face in the entire solid body is that of the step. So the only horizontal force on the solid body is that due to pressure acting on that vertical face of the step. Since $p>0$ the solid body experiences a leftward force on it. The fact that $p-p_\infty<0$ is irrelevant; it only says that the leftward force acting on the solid body is less than would be the case if we assumed the pressure in the vicinity of the step to be equal to far-stream pressure $p_\infty$.
If however the solid body were finite in the leftward direction then a second step would form there and thus a second vertical face would be present for action of pressure. On this vertical face the pressure would be higher than $p_\infty$, and then the body experiences a net rightward force.
P.S. As far as I know, the technique of calculating force on the body due to a pressure field by first subtracting a uniform pressure everywhere on the surface of the body works only for finite closed bodies (or control volumes). This does not therefore apply to your case, i.e. you are not allowed to integrate $p-p_\infty$ over the body's surface to obtain the load on it. | {
"domain": "physics.stackexchange",
"id": 45332,
"tags": "fluid-dynamics, pressure, aerodynamics"
} |
Optimization of sieve of Erathosthenes | Question: #include <iostream>
#include <conio.h>
#include <windows.h>
#include <math.h>
#include <string.h> // for memset
using namespace std;
#define RUNS 1000
char z[100000];
int i,j,k,c;
void main(void)
{
    DWORD starttime, endtime;
    float totaltime;
    starttime = GetTickCount(); //get start time
    for (k = 0; k < RUNS; k++)
    {
        c = 0;
        //for (i=0;i<10000;i++) z[i]=0; //clear array
        memset(z, 0, 100000);
        z[0] = 1; // 0 is not a prime
        z[1] = 1; // 1 is not a prime
        //now remove all evens from 4 up
        for (i = 4; i < 100000; i = i + 2) z[i] = 1; //remove evens
        //now loop through remaining odds up to square root of max number
        for (i = 3; i < 316; i = i + 2)
        {
            if (z[i] == 0) for (j = 2 * i; j < 100000; j = j + i) z[j] = 1;
        }
    }
    endtime = GetTickCount(); //get finish time
    //calc time
    for (i = 0; i < 100000; i++)
    {
        if (z[i] == 0) { cout << i << " "; c++; }
    }
    cout << "primes found=" << c << endl;
    totaltime = ((float)endtime - (float)starttime) / (1000.0 * RUNS); //calculate total time in secs
    cout << "Totaltime=" << totaltime << " sec\n";
    printf("Press any key to end");
    getch();
}
I'm trying to find any optimization for my sieve of Eratosthenes code for finding the primes below 100000.
The program first marks all the even numbers, then sieves the remaining odd numbers up to the square root of the maximum.
The program already takes only a fraction of a second to find these primes, but I'm looking for any optimization to make it even quicker.
Answer: if (z[i] == 0)
for (j = 2 * i; j < 100000; j = j + i)
z[j] = 1;
can be changed to
if (z[i] == 0)
for (j = i * i; j < 100000; j = j + 2 * i)
z[j] = 1;
as
k * i (with k < i) is already set to 1 by sieve k.
when j is odd, j + i is even and so already set by even sieve.
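If you want to convince yourself the optimized inner loop marks exactly the same composites, a quick brute-force check is easy (sketched in Python rather than C++ purely for brevity; the function and flag names are mine):

```python
N = 100000

def primes(start_at_square, double_step):
    # Mirrors the C++ sieve: strike 0, 1 and the evens, then sieve odd i.
    composite = [False] * N
    composite[0] = composite[1] = True
    for i in range(4, N, 2):
        composite[i] = True
    for i in range(3, 316, 2):
        if not composite[i]:
            start = i * i if start_at_square else 2 * i   # optimized vs original start
            step = 2 * i if double_step else i            # optimized vs original step
            for j in range(start, N, step):
                composite[j] = True
    return [i for i in range(N) if not composite[i]]

assert primes(False, False) == primes(True, True)   # both variants find the same primes
assert len(primes(True, True)) == 9592              # pi(100000) = 9592
```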
For readability, you may split into function and do some renaming, to have something like:
const int N = 100000;
const int SQRT_N = 316;

void shieve(char (&isNotPrime)[N])
{
    memset(isNotPrime, 0, N);
    isNotPrime[0] = 1; // 0 is not a prime
    isNotPrime[1] = 1; // 1 is not a prime
    // now remove all evens from 4 up
    for (int i = 4; i < N; i = i + 2) {
        isNotPrime[i] = 1; // remove evens
    }
    // now loop through remaining odds up to square root of max number
    for (int i = 3; i < SQRT_N; i = i + 2) {
        if (!isNotPrime[i]) {
            for (int j = i * i; j < N; j = j + 2 * i) {
                isNotPrime[j] = 1;
            }
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 12176,
"tags": "c++, optimization, primes, sieve-of-eratosthenes"
} |
How to use c-gap problems to prove inapproximability? | Question: Suppose there is a specific set function with some properties, $f : 2^V \to \mathbb{R}$. It is known that the following problem is NP-Hard: Find $S\subseteq V, |S|\leq k$ such that $f(S)$ is maximized. My goal is to show that designing a constant factor approximation algorithm in polynomial time is NP-Hard.
Correct me if I am wrong: As per these notes on gap reductions, we can design a c-gap problem with $OPT$ being the maximum value of $f$. The problem takes as input $\beta$. The goal is to answer YES if $OPT\geq \beta$ and NO if $OPT< c\cdot \beta$ (here $c<1$).
To show the desired inapproximability, it would suffice to show the above c-gap problem is NP-Hard. My question is: why is this decision problem designed for one $\beta$ when the optimum value for the maximization problem is not known? Is the idea that, if you have a solver for the c-gap problem, you can call it with multiple values of $\beta$? If so, how many calls are needed?
Also any additional references on proving inapproximability would be greatly appreciated.
Answer: One way to show that there is no polynomial time approximation algorithm with ratio $c$ for a certain problem X (assuming P≠NP) is to give a reduction $f$ from an NP-hard problem Y to X such that:
If $x$ is a Yes instance then the optimal value for X is at least $\beta(x)$, where $\beta(x)$ can be computed from $x$ in polynomial time.
If $x$ is a No instance then the optimal value for $X$ is less than $c\beta(x)$ (for the same $\beta(x)$).
Any approximation algorithm for X whose approximation ratio is $c$ can be used to solve Y via the reduction $f$. Therefore, there cannot be any such efficient approximation algorithm unless P=NP.
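Spelled out (my paraphrase of the argument), the decision procedure uses only the single threshold $\beta(x)$ computed from the instance itself, so a single call to the approximation algorithm suffices and no search over $\beta$ values is needed:

```latex
Given a polynomial-time $c$-approximation algorithm $\mathrm{ALG}$ for X,
decide Y on input $x$ by
\[
  \text{answer Yes} \iff \mathrm{ALG}(f(x)) \ge c\,\beta(x).
\]
If $x$ is a Yes instance, then $OPT \ge \beta(x)$, hence
$\mathrm{ALG}(f(x)) \ge c \cdot OPT \ge c\,\beta(x)$.
If $x$ is a No instance, then
$\mathrm{ALG}(f(x)) \le OPT < c\,\beta(x)$.
```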
I'll let you figure out the proof of these two statements. | {
"domain": "cs.stackexchange",
"id": 14484,
"tags": "np-complete, np-hard, approximation"
} |
Mathematical form of four hybrid orbitals | Question:
The mathematical form of the four $\ce{sp^3}$ hybrid orbitals are given below.
\begin{align}
\tag{a} \mathrm{sp^3} &= 1/2\,\mathrm{s} + 1/2\,\mathrm{p}_x + 1/2\,\mathrm{p}_y + 1/2\,\mathrm{p}_z\\
\tag{b} \mathrm{sp^3} &= 1/2\,\mathrm{s} + 1/2\,\mathrm{p}_x - 1/2\,\mathrm{p}_y - 1/2\,\mathrm{p}_z\\
\tag{c} \mathrm{sp^3} &= 1/2\,\mathrm{s} - 1/2\,\mathrm{p}_x + 1/2\,\mathrm{p}_y - 1/2\,\mathrm{p}_z\\
\tag{d} \mathrm{sp^3} &= 1/2\,\mathrm{s} - 1/2\,\mathrm{p}_x - 1/2\,\mathrm{p}_y - 1/2\,\mathrm{p}_z
\end{align}
For each hybrid orbital the number in front of the $\ce{s}$ and three $\ce{p}$ functions, called a coefficient, describes the contribution and relative ratio of each canonical orbital to the hybrid wave function. Add up the coefficients and prove to that these orbitals are $\ce{s^1p^3}$.
Yes, I know that the grammar is terrible on that last sentence, that is how it is written. So, I know that for an $\ce{sp^3}$ orbital it is one part $\ce{s}$ orbital and three parts $\ce{p}$ orbital. However, I have no idea what this question wants me to do. Adding up the coefficients gives me $2$, $0$, $0$, and $-1$, but I have no idea if that is how it wants me to add it up (you could ignore the sign and get $2$ every time, and show that $1/2$ is a quarter of $2$, thus proving that each component contributes a quarter to the hybridized orbital, but this seems too simple and incorrect).
Answer: If a molecular orbital $\psi$ (in this case, a hybrid orbital) is constructed from an orthonormal basis set of atomic orbitals $\{\phi_n\}$ via a linear combination
$$\psi = \sum_n c_n \phi_n$$
then loosely speaking, the "amount" of $\phi_n$ in $\psi$ is given not by $c_n$, but rather by $\left|c_n\right|^2$. (See, for example, Griffiths, Introduction to Quantum Mechanics, 2nd ed., Section 3.4.)
Therefore, summing the four values of $c_n$ for each AO has no physical meaning.
Instead let's take wavefunction (b) for example. The coefficient in front of the s orbital is $1/2$, so the "amount" of s-orbital character is simply $\left|1/2\right|^2 = 1/4$. The coefficient in front of the $\mathrm{p}_y$ orbital is $-1/2$, so there is $\left|-1/2\right|^2 = 1/4$ $\mathrm{p}_y$-character.
Armed with this knowledge, it's incredibly easy to see that each orbital has exactly $1/4$ s-character and a total of $3/4$ p-character ($1/4$ from each p orbital), because none of the $\pm$ signs matter (the minus signs are removed by the absolute value). | {
"domain": "chemistry.stackexchange",
"id": 6385,
"tags": "orbitals, theoretical-chemistry"
} |
Why can't I observe a voltage between two capacitor plates when only one of the plates is connected to a battery? | Question: Let's say I have a battery and a capacitor that is neutral. Now I connect the battery's positive terminal to only one plate of the capacitor. From what I know, there is a potential difference between the terminal and the plate, so there should be an electric field that causes some of the negative charges of the connected plate to move to the terminal (through a conducting wire) to reduce the potential difference to zero. This leaves the connected plate positively charged, which in turn causes the unconnected plate to become negatively charged, because the electric field between the plates causes negative charges to gather on the unconnected plate. After this happens, I expect a potential difference between the plates of the capacitor. But with a multimeter connected to the plates I observe zero voltage. Why is that? What am I missing?
Answer: There is no potential difference because the connection of the positive battery terminal simply causes a redistribution of charge on both plates, but no net charge on either plate. That would require current in a complete circuit involving the other battery terminal.
The free electrons of the connected plate move towards the surface of the plate connected to the positive battery terminal. That, in turn, induces movement of free electrons on the non connected plate towards the surface nearest the connected plate. But the end result is the net charge on the two plates remains zero for a potential difference of zero between the plates.
See FIG 1 below.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 91807,
"tags": "electric-fields, electricity, voltage, capacitance, batteries"
} |
How does isotropy of free space imply $L(v^2)$ for a free particle? | Question: From Mechanics; Landau and Lifshitz, it's stated on page 5:
Since space is isotropic, the Lagrangian must also be independent of the direction of $ \mathbf{v}$, and is therefore a function only of its magnitude, i.e. of $ \mathbf{v} \cdot \mathbf{v}=v^2$:
$L = L(v^2)$
How is he able to exclude $L = L(| \mathbf{v}|)$?
Answer: For any function $f$, we have $f(|v|) = f(\sqrt{v^{2}}) = g(v^{2})$, where $g = f\circ \sqrt{\;}$, so we lose no generality by assuming that the Lagrangian is a function of the square. Also, it generalizes more nicely to vector spaces, where $v^{2} ={\vec v}\cdot {\vec v}$ is defined, but it is not necessarily the case that the absolute value function is defined. | {
"domain": "physics.stackexchange",
"id": 12379,
"tags": "classical-mechanics, lagrangian-formalism"
} |
The atom, solid or liquid? | Question: Now I know what you may be thinking: "buddy, do you realize that the quality of solid or liquid is made by the sum total of atoms in a thing, therefore an atom can't be a solid or liquid??" This, good people, is not what I am asking exactly. My intended meaning is more accurately said like so:
How would touching a proton or neutron feel? Would it feel as hard as, say, a steel NY building compressed to the size of a marble? Would it be metallic and slick? Would it feel like gelatin which cannot be bent? Or would it be translucent and untouchable, such as a ghost? Using the current scientific model of the atom, can you infer how it might feel to the senses?
Answer: I think this is a very non-trivial question and I can't give you all the answers, but I can make a few general remarks. First of all, when atoms "touch", it's really their electronic orbitals overlapping. Those interactions are, of course, governed by quantum mechanics but it is very hard to solve multi-electron atoms and complete molecules with a completely quantum mechanical treatment the way we are solving the hydrogen problem. As it turns out, one can simplify the equations significantly and for many applications still get reasonable approximations by using so called mean-field potentials.
In its most simple implementation mean field theory assumes that one can average the quantum mechanical movement of electrons around nuclei and derive a classical potential for the effective force between the nuclei. The physical reason why this works reasonable well is that the nuclei are, on average, a couple thousand times heavier than the electrons, i.e. their movement is much slower than the time scale of the electronic wave function. As a consequence it can make sense to describe molecular binding by the average distances of their nuclei and to approximate the actual dynamics of electronic orbitals with a non-linear distance and angle dependent classical force.
In general even these mean field forces depend very strongly on distance between the nuclei and on angles (and to smaller extent on the spins of the electrons), which makes for a complex multi-dimensional potential.
The radial dependence of these potentials has two general properties: at short distances the electrostatic repulsion of the nuclei (and the electrons) will be very strong and the potential has to diverge to infinity. At infinite distance, however, there is no force between the molecules and the potential is zero. At some intermediate distance there may be a potential minimum, in which case a stable chemical bond of that length can form.
I think this is where your question comes into play. One could interpret the average molecular forces between the nuclei in form of a human "touch" sensation. At a distance one wouldn't feel anything, going closer there would either be a soft repulsion that would be getting stronger very quickly, or a stickiness, maybe similar to two magnets that are attracting each other, and eventually, even closer in, there would be a rather hard surface, which one could not penetrate.
The "surfaces" of these molecular orbitals would feel incredibly slippery, because there wouldn't be any friction and there would be a constant trembling from quantum mechanical fluctuations. Depending on the combination of molecular "fingertip" and "molecular ball", the balls would either stick or constantly trying to get away. And depending on the reaction energy of the sticky ones it could be very hard to impossible to remove them without also ripping off a "finger tip" or two! In short, we would get all of the variability of chemical reactions "at hand"! It would certainly be a wonderful tool for chemists to explore chemical potential landscapes like that and I think I have heard about some folks in the virtual reality department who are working on tools like that, but I would have to find the article. | {
"domain": "physics.stackexchange",
"id": 16973,
"tags": "atoms, perception"
} |
Regioselectivity of acid-catalyzed ring-opening of epoxides | Question: Not to be confused with what is the mechanism of acid-catalyzed ring opening of epoxides.
What is the correct order of regioselectivity of acid-catalyzed ring-opening of epoxides: $3^\circ$ > $2^\circ$ > $1^\circ$ or $3^\circ$ > $1^\circ$ > $2^\circ$? I am getting ready to teach epoxide ring-opening reactions, and I noticed that my textbook has something different to say about the regioselectivity of acid-catalyzed ring-opening than what I learned. My textbook does not agree with 15 other introductory texts I own, but it does agree with one. None of my Advanced Organic Chemistry texts discuss this reaction at all. Thus, I have no ready references to go read.
Edit: My textbook is in the minority, and it is a first edition. Is it wrong, or do the other 15 texts (including some venerable ones) oversimplify the matter?
What I learned
Although the acid-catalyzed ring-opening of epoxides follows a mechanism with SN2 features (inversion of stereochemistry, no carbocation rearrangements), the mechanism is not strictly a SN2 mechanism. The transition state has more progress toward the C-LG bond breaking than an SN2, but more progress toward the C-Nu bond forming than SN1. There is significantly more $\delta ^+$ character on the carbon than in SN2, but not as much as in SN1. The transition states of the three are compared below:
In a More O’Ferrall-Jencks diagram, the acid-catalyzed ring-opening of epoxides would follow a pathway between the idealized SN2 and SN1 pathways.
Because of the significant $\delta ^+$ character on the carbon, the reaction displays regioselectivity inspired by carbocation stability (even though the carbocation does not form): the nucleophile preferentially attacks at the more hindered position (or the position that would produce the more stable carbocation if one formed). If a choice between a primary and a secondary carbon is presented, the nucleophile preferentially attacks at the secondary position. If a choice between a primary and a tertiary carbon is presented, the nucleophile preferentially attacks at the tertiary position. If a choice between a secondary and a tertiary carbon is presented, the nucleophile preferentially attacks at the tertiary position.
The overall order of regioselectivity is $3^\circ$ > $2^\circ$ > $1^\circ$.
What my textbook says
My text agrees that the mechanism is somewhere in between the SN2 and SN1 mechanisms, but goes on to say that because it is in between, electronic factors (SN1) do not always dominate. Steric factors (SN2) are also important. My text says that in the comparison between primary and secondary, primary wins for steric factors. In other words, the difference between the increased stabilization of the $\delta ^+$ on secondary positions over primary positions is not large enough to overcome the decreased steric access at secondary positions. For the comparison of primary and tertiary, tertiary wins. The increased electronic stabilization at the tertiary position is enough to overcome the decreased steric access at the tertiary position. The comparison between secondary and tertiary is not directly made, but since $3^\circ$ > $1^\circ$ and $1^\circ$ > $2^\circ$, it is implied that $3^\circ$ > $2^\circ$.
If this pattern is true, then other cyclic "onium" ions (like the bromonium ion and the mercurinium ion) should also behave this way. They don't.
Typical of introductory texts, no references are provided. A Google search did not yield satisfactory results and the Wikipedia article on epoxides is less than helpful.
Since I have 15 other introductory texts on my bookshelf, I consulted all of them on this reaction. The following is a summary of my findings. Only two of the texts (the one I am using and one other) describe the regioselectivity as $3^\circ$ > $1^\circ$ > $2^\circ$. All of the other books support the other pattern, including Morrison and Boyd (which lends credence to the pattern that I learned).
Books that have $3^\circ$ > $2^\circ$ > $1^\circ$
Brown, Foote, Iverson, and Anslyn
Hornback
Ege
Wade
Bruice
Smith
Fessenden and Fessenden
Vollhardt and Schore
Solomons and Fryhle
Jones
Baker and Engel
Ouellette and Rawn
Carey
Morrison and Boyd
Streitwieser and Heathcock
Books that have $3^\circ$ > $1^\circ$ > $2^\circ$
Klein (the text I am using)
McMurry
I also surveyed my various Advanced Organic texts (March, Smith, Carey and Sundberg, Wyatt and Warren, Lowry and Richardson, etc.). Interestingly, none of them even mention acid-catalyzed ring-opening of epoxides (either by Brønsted or Lewis acids). I suspect that these omissions mean that this reaction 1) has difficult-to-predict regioselectivity (despite the predominance of introductory books that suggest otherwise), and thus 2) is synthetically useless. If #2 is true, then why is this reaction in introductory organic texts?
Answer: First part
It won't decide the issue but the Organic Chemistry text by Clayden, Greeves, Warren and Wothers also mentions that the matter might not be as clear-cut as the majority of your textbooks make it seem. This might strengthen the position of the textbook you're using a bit. But again, there are no references given. Here is the relevant passage (especially the last two paragraphs):
Second Part
I have found the following passage on the formation of halohydrins from epoxides in the book by Smith and March (7th Edition), chapter 10-50, page 507:
Unsymmetrical epoxides are usually opened to give mixtures of regioisomers. In a typical reaction, the halogen is delivered to the less sterically hindered carbon of the epoxide. In the absence of this structural feature, and in the absence of a directing group, relatively equal mixtures of regioisomeric halohydrins are expected. The phenyl is such a group, and in 1-phenyl-2-alkyl epoxides reaction with $\ce{POCl3}/\ce{DMAP}$ ($\ce{DMAP}$ = 4-dimethylaminopyridine) leads to the chlorohydrin with the chlorine on the carbon bearing the phenyl.${}^{1231}$ When done in an ionic liquid with $\ce{Me3SiCl}$, styrene epoxide gives 2-chloro-2-phenylethanol.${}^{1232}$ The reaction of thionyl chloride and poly(vinylpyrrolidinone) converts epoxides to the corresponding 2-chloro-1-carbinol.${}^{1233}$ Bromine with a phenylhydrazine catalyst, however, converts epoxides to the 1-bromo-2-carbinol.${}^{1234}$ An alkenyl group also leads to a halohydrin with the halogen on the carbon bearing the $\ce{C=C}$ unit.${}^{1235}$ Epoxy carboxylic acids are another example. When $\ce{NaI}$ reacts at pH 4, the major regioisomer is the 2-iodo-3-hydroxy compound, but when $\ce{InCl3}$ is added, the major product is the 3-iodo-2-hydroxy carboxylic acid.${}^{1236}$
References:
${}^{1231}$ Sartillo-Piscil, F.; Quinero, L.; Villegas, C.; Santacruz-Juarez, E.; de Parrodi, C.A. Tetrahedron Lett. 2002, 43, 15.
${}^{1232}$ Xu, L.-W.; Li, L.; Xia, C.-G.; Zhao, P.-Q. Tetrahedron Lett. 2004, 45, 2435.
${}^{1233}$ Tamami, B.; Ghazi, I.; Mahdavi, H. Synth. Commun. 2002, 32, 3725.
${}^{1234}$ Sharghi, H.; Eskandari, M.M. Synthesis 2002, 1519.
${}^{1235}$ Ha, J.D.; Kim, S.Y.; Lee, S.J.; Kang, S.K.; Ahn, J.H.; Kim, S.S.; Choi, J.-K. Tetrahedron Lett. 2004, 45, 5969.
${}^{1236}$ Fringuelli, F.; Pizzo, F.; Vaccaro, L. J. Org. Chem. 2001, 66, 4719. Also see Concellón, J.M.; Bardales, E.; Concellón, C.; García-Granda, S.; Díaz, M.R. J. Org. Chem. 2004, 69, 6923. | {
"domain": "chemistry.stackexchange",
"id": 891,
"tags": "organic-chemistry, reaction-mechanism, erratum, regioselectivity"
} |
Mutual capacitance upper limit | Question: I am having trouble making an analog for mutual capacitance from mutual inductance. In circuits with magnetic coupling, there is an upper limit established on mutual inductance due to energy conservation principles:
$M \leq \sqrt{L_1 * L_2} $
Where $L_1$ and $L_2$ are the two coupled inductors.
This makes sense intuitively as well because the mutual inductance represents magnetic flux generated by one inductor and coupling with another, and you can't share more flux than is produced by a single inductor. It seems like a similar relationship would hold for Mutual Capacitance, but I can believe there should be a case where the mutual capacitance between two components is greater than the geometric mean of their respective capacitances...
Does anyone know of an energy bound upper limit for mutual capacitance? Thank you in advance and I apologize if my question is poorly formed, most of my confusion comes from reconciling circuit interpretations of mutual capacitance and the physics of how the mutual capacitance is manifested.
Answer: Yes, there is a constraint also for the mutual capacitance. As I briefly reviewed in this answer, the capacitance matrix of a system of conductors is a symmetric and positive definite matrix, exactly like the inductance matrix of a system of inductors. Since the relationship $M\le \sqrt{L_1L_2}$ can be directly obtained from this property, which, in turn, can be derived from the conservation of energy, it's not difficult to prove the analogue for a system of three conductors where one is taken as the reference conductor (ground).
Consider the system of two conductors with ground shown below (beware the sign of $C_{12}$: what you called "mutual capacitance" is actually $-C_{12}$, $C_{12}=C_{21}$ being the off-diagonal element of the capacitance matrix):
For such a system, the capacitance matrix is a 2-by-2 symmetric matrix with $C_{11} = C_1-C_{12}$, $C_{22} = C_2-C_{12}$ and $C_{12}<0$ ($C_{11}$ and $C_{22}$ are the total capacitances from each conductor to the other conductors). Such a matrix is further positive definite if and only if its trace and determinant are positive (see e.g. this question on Math SE). From the condition on the determinant, we thus have
$$C_{11}C_{22}-C_{12}^2\ge 0,$$
that is,
$$|C_{12}|\le\sqrt{C_{11}C_{22}}.$$
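As a quick numeric sanity check of the determinant condition, here is a minimal sketch; the capacitance values are illustrative and not taken from the answer:

```python
def coupling_bound_ok(C11, C22, C12):
    """Positive (semi)definiteness test for [[C11, C12], [C12, C22]]:
    trace > 0 and det >= 0, where the determinant condition is exactly
    |C12| <= sqrt(C11 * C22)."""
    return (C11 + C22 > 0) and (C11 * C22 - C12 ** 2 >= 0)

# Illustrative values in pF; C12 is negative, as noted in the answer.
print(coupling_bound_ok(10.0, 20.0, -5.0))    # True:  5 < sqrt(200) ~ 14.1
print(coupling_bound_ok(10.0, 20.0, -15.0))   # False: 15 > sqrt(200)
```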
For further details and derivations for the general case, see:
R. M. Fano, L. J. Chu, and R. B. Adler, Electromagnetic fields, energy, and forces, MIT Press, 1968. | {
"domain": "physics.stackexchange",
"id": 62171,
"tags": "energy, capacitance, inductance"
} |
Measuring voltage output of a Wimshurst machine | Question: I see that there are a few variables that affect the voltage output of the Wimshurst machine: the size of the disk, the number of conductive strips on the disk, and obviously the spark gap itself.
Here's my question: if the air breakdown voltage is 3000 V per mm, then if a spark jumps 50 mm (2"), is it 150,000 V? Or is it the electrostatic build-up around the two electrodes that allows the spark to jump easily? If I am right, yay. If I am wrong, what must be taken into the equation for figuring out the output voltage?
And this is less important, but I would still like to know: how can I calculate the maximum power my Wimshurst machine will have, based on the disk size, conductive plates, and Leyden jars?
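A quick arithmetic check of the estimate in the question; the 3 kV/mm breakdown field is the question's own assumption, and the 1 nF Leyden-jar capacitance below is purely an illustrative value:

```python
breakdown_v_per_mm = 3000          # assumed uniform-field breakdown of air
gap_mm = 50
spark_voltage = breakdown_v_per_mm * gap_mm
print(spark_voltage)               # 150000, i.e. the 150 kV guess

# Energy stored just before the spark, for an assumed capacitance C:
C = 1e-9                           # 1 nF, illustrative only
energy_j = 0.5 * C * spark_voltage ** 2   # E = (1/2) C V^2
```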
Answer: Your disk size and how many conductive strips you have affect 2 things: how close together the strips are and how many there are will determine whether you need to self-start the machine or not, and will also determine how much voltage can be produced. The gauge of the wire will tell you how much voltage can be carried over it, so use a thick wire. The most important thing is the Leyden jar. This will determine how much voltage you can hold before a spark, or before you ruin your Leyden jars. | {
"domain": "physics.stackexchange",
"id": 78348,
"tags": "electrostatics, voltage, power"
} |
Autocompleting console input | Question: Yesterday I stumbled upon a Stack Overflow question that asked if it was possible to implement Tab-triggered auto-completion in a console application.
I thought that was an interesting idea, so I went ahead and wrote a small program that does just that, given a hard-coded string[] - I might eventually refactor it into its own class and reuse it in my own projects if I ever actually need auto-completion in a console app, but before I do that I'd like some feedback on the way it's implemented; static aside, the logic itself is pretty much the way I'll have it regardless of whether it's in a dedicated class as part of something much bigger, or right there in a console application that does nothing but verify that the code works.
Pressing Tab when there's no input will change the current line to say Bar; because there's a word in data that starts with Bar, pressing Tab again will autocomplete to Barbec; another Tab will make it Barbecue, and then any subsequent Tab will have no effect, because nothing in the data starts with Barbecue - but then you could Backspace until the input is Ba, type a t to make it Bat, and when you press Tab then it autocompletes to Batman.
In other words, it all works exactly as it should. But does it look right?
class Program
{
static void Main(string[] args)
{
var data = new[]
{
"Bar",
"Barbec",
"Barbecue",
"Batman",
};
var builder = new StringBuilder();
var input = Console.ReadKey(intercept:true);
while (input.Key != ConsoleKey.Enter)
{
if (input.Key == ConsoleKey.Tab)
{
HandleTabInput(builder, data);
}
else
{
HandleKeyInput(builder, data, input);
}
input = Console.ReadKey(intercept:true);
}
Console.Write(input.KeyChar);
}
/// <remarks>
/// https://stackoverflow.com/a/8946847/1188513
/// </remarks>
private static void ClearCurrentLine()
{
var currentLine = Console.CursorTop;
Console.SetCursorPosition(0, Console.CursorTop);
Console.Write(new string(' ', Console.WindowWidth));
Console.SetCursorPosition(0, currentLine);
}
private static void HandleTabInput(StringBuilder builder, IEnumerable<string> data)
{
var currentInput = builder.ToString();
var match = data.FirstOrDefault(item => item != currentInput && item.StartsWith(currentInput, true, CultureInfo.InvariantCulture));
if (string.IsNullOrEmpty(match))
{
return;
}
ClearCurrentLine();
builder.Clear();
Console.Write(match);
builder.Append(match);
}
private static void HandleKeyInput(StringBuilder builder, IEnumerable<string> data, ConsoleKeyInfo input)
{
var currentInput = builder.ToString();
if (input.Key == ConsoleKey.Backspace && currentInput.Length > 0)
{
builder.Remove(builder.Length - 1, 1);
ClearCurrentLine();
currentInput = currentInput.Remove(currentInput.Length - 1);
Console.Write(currentInput);
}
else
{
var key = input.KeyChar;
builder.Append(key);
Console.Write(key);
}
}
}
Answer: Testing
Personally I think it’s conceptually easier to test this code if you are returning a string – rather than a void: what if you decided to change the output device from a console window to your printer? It would not be easy to make that change. What if sometimes you wanted a console window and other times you wanted to send the same thing via text message? I guess it’s easier to separate what is being done vs how it is being done. E.g. outputting a key is the abstract concept, but this could be done a number of different ways: (i) to the console, or (ii) through a voice synthesiser, etc. I wrote some tests but then I threw them out once they broke when I changed the API.
Naming
What is data? It’s hard to think of a more abstract name. I’ve changed this to keywords.
Personally I like to see the type along with the variable declaration.
Refactoring the code:
I simplified and extracted some methods.
The HandleKeyInput and the HandleTabInput both seem to be doing conceptually similar things: a key is inputted and then a result is obtained. If it’s a tab then do X, but if it’s a key then do Y. There’s a common abstraction here that can be extracted.
What I want to do is to make the two methods: HandleTabInput and HandleKeyInput to look and be absolutely identical.
Data (renamed to keywords) can be a field so there’s no need to pass it in as a variable.
I want to make two of your methods to basically be the same. They’re kinda doing this same things. After some refactoring this is what I got:
public void HandleTabInput(StringBuilder builder, ConsoleKeyInfo keyInput)
{
// Perform calculation
string match = ExtractMatch(builder);
// Alter the builder
builder.Clear();
builder.Append(match);
// Print Results
PrintMatch(match);
}
And then this:
private void HandleKeyInput(StringBuilder builder, ConsoleKeyInfo keyInput)
{
if (keyInput.Key == ConsoleKey.Backspace && builder.ToString().Length > 0)
{
// Perform Calculation (nothing here)
// Alter the builder
builder.Remove(builder.Length - 1, 1);
// Print Results
ClearCurrentLine();
Console.Write(builder.ToString().Remove(builder.ToString().Length - 1));
}
else
{
// Perform calculation (nothing here)
// Alter the Builder
var key = keyInput.KeyChar;
builder.Append(key);
// Print Results
Console.Write(key);
}
}
We can abstract out the calculation, and then the printing. After some work this was the result. It’s starting to look pretty similar:
private void HandleKeyInput(StringBuilder builder, ConsoleKeyInfo keyInput)
{
if (keyInput.Key == ConsoleKey.Backspace && builder.ToString().Length > 0)
{
Program.KeyInput backSpaceKey = new Program.KeyInput.BackspaceInput(builder, keywords);
backSpaceKey.AlterBuilder();
backSpaceKey.PrintResult();
}
else
{
KeyInput input = new KeyInput.StandardKeyInput(builder, keywords, keyInput.KeyChar);
input.AlterBuilder();
input.PrintResult();
}
}
You could probably tidy things up a little more.
The final Result:
public void RunProgram()
{
StringBuilder builder = new StringBuilder();
ConsoleKeyInfo capturedCharacter = Console.ReadKey(intercept: true);
while (EnterIsNotThe(capturedCharacter))
{
KeyInput key = KeyInput.GetKey(builder, keywords, capturedCharacter);
builder = key.UpdateBuilder();
key.Print();
capturedCharacter = Console.ReadKey(intercept: true);
}
Console.Write(capturedCharacter.KeyChar);
}
Concluding Thoughts
It reads marginally better. Still, if you want to reuse the code you’ll have to make changes, but at least it will be a little easier to do when the time comes.
Encapsulation could be a little better – typically speaking you want the KeyInput class to be instantiated via the factory only.
A link to see the full code. | {
"domain": "codereview.stackexchange",
"id": 29426,
"tags": "c#, console, autocomplete"
} |
Why normalize the data set before applying Direct Linear Transform | Question: Direct Linear Transform (DLT for short) is a method of homography estimation; it solves the overdetermined homogeneous linear system $$Ah=0$$ via SVD to find a solution $h$ under the constraint $\|h\|=1$. In effect it finds the least-squares solution, i.e. the $h$ that minimizes $\|Ah\|$ subject to $\|h\|=1$.
I understand the basic idea of this algorithm, but it is recommended to normalize the data set before applying DLT on it, and here is a intro about how to do the normalization. It is lectured that data normalization is important to DLT, without normalization the results from DLT is not stable.
I wonder why? Just because DLT involves solving the linear system using SVD and $A$ might be singular?
Answer: The normalization is basically a preconditioning to decrease the condition number of the matrix $A$ (the larger the condition number, the nearer the matrix is to a singular matrix).
The normalizing transform is also represented by a matrix in the case of homography estimation, and this happens to be usable as a good preconditioner matrix. The reason why is that is more elaborate and is explained briefly in H&Z book (4.4.4, p. 107: Why is normalization essential?) or in more detail in the paper "In Defense of the Eight-point Algorithm".
Put it simply, the matrix $A$ consists of products of image coordinates which can have different scale. If the scale differs by factor of $10$, the products differ by a factor of $10^2$.
The source and target coordinate data are usually noisy. Without normalization, the data from the source can have two orders of magnitude larger variance than the data from the target (or vice versa).
The homography estimation usually finds parameters in a least-squares sense - hence the best statistical estimate is found only if variances of the parameters are the same (or known beforehand, but it is more practical just to normalize the input).
Direct solvers do not like poorly scaled problems because numerical instabilities appear (e.g. dividing very large number by a very small number easily leads to numerical overflow).
Iterative solvers struggle with badly conditioned matrices by needing more iterations.
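The normalization itself (Hartley's isotropic scaling: translate the centroid to the origin and scale so the mean distance from it is $\sqrt{2}$) can be sketched as follows; this is a generic formulation, not code from the question:

```python
import math

def normalize_points(pts):
    """Hartley normalization of 2D points: translate the centroid to the
    origin and scale so the mean distance from the origin is sqrt(2).
    Returns the 3x3 similarity transform T and the normalized points."""
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    mean_dist = sum(math.hypot(p[0] - cx, p[1] - cy) for p in pts) / n
    s = math.sqrt(2.0) / mean_dist
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    normed = [(s * (p[0] - cx), s * (p[1] - cy)) for p in pts]
    return T, normed
```

Applying the corresponding transform to both point sets before building $A$, and de-normalizing the estimated homography afterwards, is the usual recipe.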
So normalization is essential not only for numerical stability, but also for more accurate estimation in presence of noise and faster solution (in case of iterative solver). | {
"domain": "dsp.stackexchange",
"id": 1200,
"tags": "image-processing, homography, normalization"
} |
How much information do I need for a Lorentz transformation? | Question: If I use Lorentz transformations,
\begin{align}
x' &= \gamma (x-vt), \\
t' &=\gamma \left(t-\frac{vx}{c^2}\right),
\end{align}
I need $x,v,t$ to calculate $x'$ and $t'$. If I only know, say for example, $x$ and proper time $t'$, I can calculate the relative velocity of the frames, $x'$ and $t$ by using length contraction and $v=x/t=x'/t'$. But how do I derive these quantities directly with Lorentz transformation? Is this possible? Even if I try to use the constant space-time-distance, it doesn't work out. In general I'm confused why Lorentz transformations are so important because it seems to me like one can calculate the same things with less effort by length contraction and time dilatation?
Answer:
In general I'm confused why Lorentz transformations are so important because it seems to me like one can calculate the same things with less effort by length contraction and time dilatation?
This is not correct. The Lorentz transformations include length contraction, time dilation, and the relativity of simultaneity. Most of the so-called "paradoxes" of SR center around the relativity of simultaneity. So if you use only length contraction and time dilation then you will get most of the "paradoxes" wrong.
The Lorentz transform is an essential tool for SR, and (in my opinion) the simplified length contraction and time dilation formulas should be avoided for new students. They frequently misuse them and there is no need for them since they automatically drop out of the Lorentz transform whenever appropriate.
I can calculate the relative velocity of the frames, x′ and t by using length contraction and v=x/t=x′/t′.
No, in general it is not true that $v=x/t$. If you happen to know that it is true for a specific scenario then you can use that fact also, but you cannot assume it in general.
In general, it depends on what you want to know. You have two equations in 5 variables ($c$ is not a variable and $\gamma$ is just a function of $v$ so it isn't an independent variable). So if you want to determine the coordinates of a specific single event $(t',x')$ then you need three pieces of information. However, if you only want to determine, for example, the coordinates of a worldline $(t',f(t'))$ then you may only need two.
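As a numeric sketch of this counting (the event and velocity values below are arbitrary): the three inputs $(x, t, v)$ fully determine the primed event, and the interval $c^2t^2 - x^2$ comes out frame-independent, as it must:

```python
import math

def lorentz(x, t, v, c=299_792_458.0):
    """Return (x', t') for an event (x, t) under the Lorentz transform
    quoted in the question, for frame velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c ** 2)

c = 299_792_458.0
x, t, v = 4.0e8, 2.0, 0.6 * c          # three pieces of information
xp, tp = lorentz(x, t, v)
# The spacetime interval is invariant under the boost:
assert math.isclose(c**2 * t**2 - x**2, c**2 * tp**2 - xp**2, rel_tol=1e-9)
```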
Of course, the problem itself may introduce new unknowns such as equations of motion or other new variables. There is thus no one universal answer to the question.
Small nitpick: the $t'$ used in the Lorentz transform is not proper time, it is just coordinate time in the primed frame. | {
"domain": "physics.stackexchange",
"id": 86379,
"tags": "special-relativity, spacetime, time-dilation"
} |
Bound states for the Delta Function Well | Question: In "Introduction to Quantum Mechanics" by Griffiths we discussed the delta potential well. They speak about bound and scattering states for $E<0$ and $E>0$ respectively. But before that (Problem 2.2) we proved that $E$ must exceed the minimum value of $V(x)$. Clearly $V(x)$ has minimum value 0. How, then, can there exist bound states?
Answer: The one-dimensional delta function well $V(x)= -\lambda \delta(x)$ can be constructed by taking $\delta(x)= \lim_{\epsilon\to 0} \delta_\epsilon(x)$, where $\delta_\epsilon(x)$ is a square potential of width $\epsilon$ and height $1/\epsilon$. Thus the minimum value of $V(x)$ is $-\lambda/\epsilon$, which is heading to minus infinity. | {
"domain": "physics.stackexchange",
"id": 41327,
"tags": "quantum-mechanics, energy, wavefunction, schroedinger-equation, potential"
} |
How long does it take to register different impacts in the Double-Slit Experiment? | Question: Consider a Double-Slit Experiment with a lens between the slits and the screen. The lens focuses the interference patterns in such a way that R01 + R02 = D0. When a measuring device is put at one of the slits the interference pattern disappears and on the screen we can see R03 + R04 = D0.
Because of the lens there is no way to tell just by looking at the screen if it is a combination of the interference patterns or a combination of the lumps.
Frequency of impacts at positions:
What would a plot of the time taken to get to the screen at positions look like? Would it look the same for when there are lumps and when there is interference?
Answer: Photons always have their wave properties (and particle properties). If you want to observe the wave properties the double slit is a good experiment. (The single slit or a dichroic filter is another way to observe wave properties).
The small source, double slit and screen combine to constrain the path of the photons to a few visible choices .... the "interference pattern" shows bright areas where photons can go, the dark areas that have no photons. Per Feynman " a photon considers all paths" and chooses "the most probable". Or stated another way as .. every photon determines its own path.
In the QEE (quantum eraser experiment) a source photon creates 2 entangled photons; the creation of the entangled pair will not occur unless both photons have a path. Per wikipedia "Thus, when a pair of entangled photons is created, but one of the two is blocked by a polarizer and lost, the remaining photon will be filtered out of the data set as if it was one of the many non-entangled photons. When viewed this way, it is not surprising that making changes to the upper path can have an impact to measurements taken on the lower path ...."
In general the QEE was said to allow information faster than the speed of light but this has been disproven.
In the top arm of the apparatus the detector D0 (per QEE diagram on wikipedia) detects both the interfered type photons as well as the regular (single slit like) photons. In this arm photons take a similar path for both types, the time for this path is the same for both types. In the bottom arm the path is 8ns longer (per wikipedia), it is in the lower arm where the photons interact with the first set of beam splitters (50/50 random chance) ... when they have the d3/d4 path, the partner (idler) photon shows no interference. When the partner (idler) photon takes the d1/d2 path it has a choice of d1 or d2 ... thus interference. | {
"domain": "physics.stackexchange",
"id": 81818,
"tags": "quantum-mechanics, quantum-information, quantum-eraser"
} |
Dimensions of Transformer - dmodel and depth | Question: Trying to understand the dimensions of the Multihead Attention component in Transformer referring the following tutorial https://www.tensorflow.org/tutorials/text/transformer#setup
There are 2 unknown dimensions - depth and d_model - which I don't understand.
For example, if I fix the dimensions of Q, K, V as 64, the number_of_attention_heads as 8, and the input_embedding as 512, can anyone please explain what depth and d_model are?
Answer:
d_model is the dimensionality of the representations used as input to the multi-head attention, which is the same as the dimensionality of the output. In the case of normal transformers, d_model is the same size as the embedding size (i.e. 512). This naming convention comes from the original Transformer paper.
depth is d_model divided by the number of attention heads (i.e. 512 / 8 = 64). This is the dimensionality used for the individual attention heads. In the tutorial you linked, you can find this as self.depth = d_model // self.num_heads. Each attention head projects the original representation into a smaller representation of size depth, then computes the attention, and then all the attention head results are concatenated together, so that the final dimensionality is again d_model. You can find more details on the individual computations in this other answer.
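The dimension bookkeeping can be sketched for a single token vector (plain Python, ignoring the batch and sequence axes that the tutorial's split_heads also handles):

```python
d_model, num_heads = 512, 8
depth = d_model // num_heads   # 64, the per-head dimensionality

# "Splitting heads": one d_model-sized vector becomes num_heads chunks
# of size depth, one per attention head.
vec = list(range(d_model))
heads = [vec[h * depth:(h + 1) * depth] for h in range(num_heads)]
assert len(heads) == 8 and all(len(h) == 64 for h in heads)

# Concatenating the per-head results restores a d_model-sized vector,
# which is why the multi-head output is again of size d_model.
merged = [x for h in heads for x in h]
assert merged == vec
```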
Note that the implementation of the multi-head attention in the tutorial is not a straightforward implementation from the original paper but it is equivalent: in the original paper, there are different matrices $W_i^Q, W_i^K, W_i^V$ for each attention head $i$, while in the implementation of the tutorial there are combined matrices $W^Q, W^K, W^V$ that compute the projection for all attention heads, which is then split into the separate heads by means of the function split_heads. | {
"domain": "datascience.stackexchange",
"id": 9465,
"tags": "deep-learning, neural-network, keras, tensorflow, transformer"
} |
robot_pose_ekf with only odometry information | Question:
Hi rossers ;).
My robot delivers only very noisy odometry data. The worst characteristic of that noise is sudden peaks of extremely unlikely data and sudden accelerations. I thought about using some kind of Bayesian filter to just filter that data based on an expected variance. Can I use the robot_pose_ekf node for that? Of course, it would not fuse any data. But can it just serve as a basic probabilistic filter? Will it also work on one data source to create a speed hypothesis?
Thanks for your help
Originally posted by ct2034 on ROS Answers with karma: 862 on 2015-01-29
Post score: 0
Answer:
robot_pose_ekf is not terribly easy to adapt to new situations or use cases. If you understand how to implement the bayesian filter that you need, it will probably be easier to write and debug your own filter.
You may also want to look into the robot_localization and graft packages for more general-purpose kalman filter nodes.
Originally posted by ahendrix with karma: 47576 on 2015-01-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20734,
"tags": "navigation, odometry, robot-pose-ekf"
} |
Why use fit when already have fit_transform? | Question: This is a follow up question to: What's the difference between fit and fit_transform in scikit-learn models?
I want to know why we should use fit at all when we have fit_transform which is much faster than using fit and transform separately. After all, we will always transform the training data after fitting it. Is there any use for fit all by itself?
Answer: It probably is fairly rare to need to use fit and not instead fit_transform for a sklearn transformer. It nevertheless makes sense to keep the method separate: fitting a transformer is learning relevant information about the data, while transforming produces an altered dataset. Fitting still makes sense for sklearn predictors, and only some of those (particularly clusterers and outlier detectors) provide a combined fit_predict.
I can think of at least one instance where a transformer gets fitted but does not (immediately) transform data, but it is internal. In KBinsDiscretizer, if encode='onehot', then an internal instance of OneHotEncoder is created, and at fit time for the discretizer, the encoder is fitted (to dummy data) just to prepare it to transform future data. Transforming the data given to KBinsDiscretizer.fit would be wasteful at this point.
Finally, one comment on your post:
we have fit_transform which is much faster than using fit and transform separately
In most (but not all) cases, fit_transform is literally the same as fit(X, y).transform(X), so this should not be faster. | {
"domain": "datascience.stackexchange",
"id": 10932,
"tags": "python, scikit-learn"
} |
Can a difference in the "speed of time" introduce acceleration? | Question: Hypothetically, let's say we have a space divided equally into two adjacent areas where (somehow) in one of the areas time goes by at half the speed of the other area. Or specifically, when a clock in the fast area shows 1 minute having gone by, a clock in the slow area shows only 30 seconds having elapsed.
In order for the speed of light to be constant, would lengths in the fast half be compressed to half the size of the slow half? My guess: Yes
(assuming (1) is correct) Let's create two identical objects, one in each area, positioned as close together as possible without passing into the other area, each given an equal velocity in a direction parallel to the boundary between the two spaces. Q: Over time would the distance between the objects increase? My guess: No
OK, (2) was all kinds of lame. So to make it more interesting let's put a strip of space between (but not touching) the two objects where time flows at the average rate between the two areas (so 75% as fast as the "fast" area). Then let's connect the two objects with a rigid bar of negligible size and mass that goes right through the "average speed" area. Then at the exact center of the bar (equidistant between the two objects) we give the compound object a specific velocity - again in a direction parallel to the boundaries between the areas. Q: Would the compound object rotate? My guess: yes - but it's just a hunch
(if (3) is correct) Depending on the direction of the rotation, since you would constantly get the two objects changing which area they are in, would the compound object just keep spinning faster, accelerate toward the slow area, or accelerate toward the fast area?
I've totally been ignoring mass on purpose. I realize that the same things that cause time dilation also cause a change in mass. Would taking mass into account change the results of any of the above?
(Edit below)
I realize that this is all hypothetical and is a situation that cannot exist naturally (which is why I tagged it as a thought-experiment). What I want to know is whether the time-dilation portion of a space-time curvature introduces an acceleration above and beyond that caused by gravity, or whether it gets cancelled out by the length distortion, or if the curvature of space-time is what causes the acceleration of gravity and it would make no sense to talk about the effects of time-dilation separately. The thought experiment is just my effort to figure out exactly what the interactions between time and gravity are.
Answer: As it stands your question is rather hypothetical. You introduce a difference in the time without describing the physics behind it, and without any mathematical model to describe the phenomenon it's hard to make any useful comments.
However something like what you describe happens in the real world, and yes it does cause acceleration. In General Relativity the trajectory of a freely falling body is described by the geodesic equation:
$$ {d^2 x^\mu \over d\tau^2} + \Gamma^\mu_{\alpha\beta} {dx^\alpha \over d\tau} {dx^\beta \over d\tau} = 0 $$
The $\Gamma^\mu_{\alpha\beta}$ terms are the Christoffel symbols. If we stick to Cartesian coordinates$^1$ the Christoffel symbols are only non-zero when spacetime is curved so for flat spacetime the geodesic equation simplifies to:
$$ {d^2 x^\mu \over d\tau^2} = 0 $$
which just gives us a straight line in spacetime so there is no acceleration. Offhand I can't think of a (realistic) metric where only the time coordinate is curved and the spatial coordinates are flat. However in such a metric some of the Christoffel symbols involving time would be non-zero, and the result would be that the geodesic would no longer be a straight line i.e. the freely falling object would accelerate.
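To see concretely how a position-dependent time-time metric component produces acceleration, here is a minimal numerical sketch in the weak-field, slow-motion limit, where for a metric with $g_{tt} = -(1+2\phi(x))$ and flat spatial part the geodesic equation reduces to $\ddot{x} \approx -\partial_x \phi$. The linear potential $\phi(x) = 0.1x$ is an arbitrary illustrative choice, not derived from the question's setup.

```python
# Weak-field geodesic sketch: only g_tt varies with position,
# g_tt = -(1 + 2*phi(x)). In the slow-motion limit the geodesic
# equation gives d^2x/dt^2 = -dphi/dx (from the Christoffel term Gamma^x_tt).

def geodesic_trajectory(dphi_dx, x0=0.0, v0=0.0, dt=0.01, steps=100):
    """Euler-integrate d^2x/dt^2 = -dphi/dx(x)."""
    x, v = x0, v0
    for _ in range(steps):
        v += -dphi_dx(x) * dt
        x += v * dt
    return x, v

# phi(x) = 0.1*x: clocks tick faster at larger x, so a particle
# released at rest drifts toward smaller x, the slower-time region.
x, v = geodesic_trajectory(lambda x: 0.1)
print(x, v)  # both negative
```

This is only the Newtonian limit of the geodesic equation above, but it illustrates the point: a gradient in the rate of time alone is enough to make a freely falling object accelerate.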
$^1$ as Chris points out, in polar coordinates some of the Christoffel symbols will be non-zero even in flat space. | {
"domain": "physics.stackexchange",
"id": 14622,
"tags": "time-dilation, thought-experiment"
} |
Friction drag force | Question: When an object moves through air, the air closest to the object’s surface is dragged along with it, pulling or rubbing at the air that it passes. This rubbing exerts a force on the object opposite to the direction of motion—friction drag.
The thin layer of air closest to the surface of a moving object is called the boundary layer. This is where friction drag occurs.
Reference
What is the difference between this drag and the drag that appears when an object is in free fall? If it is the same, how can a molecule of a fluid that is stuck to the object produce friction and thus heat?
Thank you
Answer: That is the same force, both in free fall through the air/atmosphere and, say, in wind tunnel.
Heat is transferred to the object in free fall/wind tunnel via transfer of momentum from the air. Momentum is transferred through collisions of stationary particles (the surface of the object and the molecules stuck to it) with surrounding particles. | {
"domain": "physics.stackexchange",
"id": 22782,
"tags": "fluid-dynamics, drag"
} |
mass defect - total mass loss of one pellet | Question: I have a task where there is a uranium pellet (mass=10g) with 3.5% U-235 and 96.5% U-238.
Now I should calculate how much mass the pellet loses. I shall only consider the decay of U-235 and consider that 0.27g of U-235 decayed.
U-235 gets split according to following formula: U-235 + neutron -> 100Zr + 133Te + 3*neutrons
I now calculate the number of uranium atoms in those 0.27 g: N = 0.00027 kg / m(U-235). Since the free neutrons decay on their own and should not contribute to the mass of the pellet, I assume that the mass loss per split is Δm = m(U-235) - m(Zr-100) - m(Te-133). Now I multiply that with the number of decays I calculated earlier and I should get the mass loss of the pellet. However, it seems as if the proposed solution of the task wants me to simply use the normal mass defect equation (using the neutrons). My teacher explained it confusingly, something about "since I also subtract the neutrons from the Uranium mass they are considered to be gone (what I wanted to achieve with my approach)". So it seems as if the correct mass loss would be Δm = m(U-235) - m(Zr-100) - m(Te-133) - 2*m(neutron).
I don't understand that, since that mass difference Δm would be tinier than the one I calculated earlier. And shouldn't it be bigger, since neutrons are missing? Or am I overthinking this, and should I ignore that neutrons decay, since they decay into a proton, which effectively has almost the same mass as a neutron, so their mass still contributes to the mass of the pellet?
Answer: It sounds like you’re doing a calculation like this one, where you are computing the mass lost due to the difference in binding energy between the uranium nucleus and its daughter products. However, you have made the clever and complicating observation that the neutrons don’t hang around in the fuel pellet: the neutrons’ mean free path is longer than the pellet. In a working reactor the fuel pellets are embedded in some “moderator” (often water) whose purpose is to exchange heat with the neutrons, slowing them down so that they are more likely to engage in capture reactions, and this matrix is shot through with “control rods” which contain some neutron absorber or “neutron poison” (often boron) so that the reaction does not run away.
A fission event releases three-ish very fast neutrons, each of which instantly leaves the fuel pellet. Some of the neutrons will scatter back into the pellet and trigger another fission; some of them will scatter back into the pellet and participate in a non-fission capture reaction; some of them will scatter into the control rods and capture there. (The half-life for free neutron decay is about fifteen minutes; nearly every neutron in the reactor will capture on some other nucleus instead of decaying to a free proton.) The number of neutrons per fission which trigger a new fission is called $k_\text{effective}$. Reactor operators modulate the amount of neutron poison to maintain $k_\text{eff} \approx 1$, so that the rate of fission is stable rather than growing or shrinking. A device with $k_\text{eff} \approx 2$ is a bomb; a device with $k_\text{eff} \approx \frac12$ is shutting down.
So you’ve identified two related issues here, both of which have to be resolved to know how much the pellet’s mass changes:
The binding energy issue: each fission releases something like 200 MeV of energy, which is equivalent to a mass change thanks to $E=mc^2$.
An actual issue of mass flow: neutrons leave the fuel pellet and end up in the control rods. (If I cut a wooden plank in half, the mass of the halves doesn’t add up to the mass of the whole; I have to weigh the sawdust.)
I’m a little murky on which approach is yours and which is your teacher’s. The discovery of point #1 is one of the major events in 20th-century physics, and is enormously important, so most homework assignments stop there for pedagogical purposes. But if your job were actually to weigh a fuel pellet, #2 is a bigger effect by a factor of ten. (The neutron’s mass is about $1000\,\mathrm{MeV}/c^2$.)
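For concreteness, here is a quick numerical sketch of both effects for the 0.27 g of fissioned U-235. The nuclide masses (in u) are approximate values assumed for illustration; treat them, and the resulting figures, as order-of-magnitude only.

```python
# Approximate nuclide masses in atomic mass units (assumed values).
m_U235, m_n = 235.0439, 1.00866
m_Zr100, m_Te133 = 99.9178, 132.9109

u_to_kg = 1.66054e-27
N_A = 6.022e23

# Mass defect per fission: U-235 + n -> Zr-100 + Te-133 + 3n,
# i.e. binding energy is released and 2 net neutrons leave the pellet.
dm_binding = (m_U235 + m_n) - (m_Zr100 + m_Te133 + 3 * m_n)  # in u
E_per_fission_MeV = dm_binding * 931.494  # on the order of the usual ~200 MeV

N_fissions = 0.27 / m_U235 * N_A  # fissioned nuclei in 0.27 g

loss_binding_kg = N_fissions * dm_binding * u_to_kg   # effect #1: E = mc^2
loss_neutrons_kg = N_fissions * 2 * m_n * u_to_kg     # effect #2: neutrons leave

print(loss_binding_kg, loss_neutrons_kg)  # roughly 2e-7 kg vs 2e-6 kg
```

With these assumed masses the neutron-flow loss comes out about ten times the binding-energy loss, matching the factor-of-ten comparison above.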
If you were my student, I’d tell you to just do the binding-energy calculation; we might call it the mass loss, per fuel pellet, of the entire reactor assembly. | {
"domain": "physics.stackexchange",
"id": 78310,
"tags": "homework-and-exercises, mass, nuclear-physics, mass-energy, binding-energy"
} |
The inverse square law of sound through solids? | Question: We all know about the inverse square law of sound. In short the power of the wave will get evenly spread on an ever increasing spherical expansion and this will dissipate the power of the wave at a rate of inverse squared to the distance travelled.
But what about sound waves travelling through a solid like a rectangular table? Is there such a law that relates the power of a signal to the distance travelled? Even a approximation of the relationship would help; is this an inverse square relations as well?
Answer: The trouble is that your table, or whatever object it is, will act as a waveguide. That's because the sound waves will (partially) reflect off the wood/air surface, then travel back into the table and interfere with other waves. The result is going to be hideously complicated to calculate.
As Luboš says in a comment, if the thickness of the table is much less than the wavelength of the sound then the system effectively becomes two dimensional. If we can ignore energy loss from the flat surfaces then when you bang the middle of the table then the energy will spread out in rings and fall off as $\tfrac{1}{r}$ rather than $\tfrac{1}{r^2}$. But even in this simple case as soon as the sound waves hit the edges of the table they will reflect back and start interfering with other waves and you can get all sorts of complicated patterns.
So I'm afraid that in general for a finite sized object the sound intensity isn't going to decrease in any simple way. | {
"domain": "physics.stackexchange",
"id": 15484,
"tags": "waves, acoustics, power, density, vibrations"
} |
POSIX shell function for asking questions | Question: I've written a tiny function for asking questions intended for my POSIX shell scripts, where I often need user input.
The function takes 2+ arguments, where:
$1 is a string containing the question.
$2, $3, ... = arguments containing the right answers (case insensitive)
I also changed my style a bit: I no longer use ${var}, just $var.
The requirement was simple: Check for exactly the answers given. So, not matching yeahh if given answer is yeah.
I also included a maybe performance-wise quick test if the user answer is in the list of answers, so answering no will not make the script iterate through all the answers and just dies at that check.
#!/bin/sh
set -u
confirmation ()
# $1 = a string containing the question
# $2,.. = arguments containing the right answers (case insensitive)
{
question=$1; shift
correct_answers=$*
correct_answers_combined=$( printf '%s' "$correct_answers" | sed 's/\( \)\{1,\}/\//g' )
printf '%b' "$question\\nPlease answer [ $correct_answers_combined ] to confirm (Not <Enter>): "
read -r user_answer
# this part is optional in hope it would speed up the whole process
printf '%s' "$correct_answers" | grep -i "$user_answer" > /dev/null 2>&1 ||
return 1
# this part iterates through the list of correct answers
# and compares each as the whole word (actually as the whole line) with the user answer
for single_correct_answer in $correct_answers; do
printf '%s' "$single_correct_answer" | grep -i -x "$user_answer" > /dev/null 2>&1 &&
return 0
done
# this might be omitted, needs verification, or testing
return 1
}
# EXAMPLE usage, can be anything, DO NOT review this part please
if confirmation 'Is dog your favorite pet?' y yes yep yeah
then
tput bold; tput setaf 2; echo 'TRUE: You just love dogs! :)'; tput sgr0
else
tput bold; tput setaf 1; echo 'FALSE: Dog hater, discontinuing! :('; tput sgr0
exit 1
fi
# do other stuff here in TRUE case
echo 'And here comes more fun...'
Answer: printf '%b' will expand backslash escapes in $question and in $correct_answers_combined. It's not obvious that both of those are desirable.
I'd probably re-write that to expand only $question, and to avoid an unnecessary pipeline:
printf '%b\nPlease answer [ ' "$question"
printf '%s ' "$@"
printf '] to confirm (Not <Enter>): '
You almost certainly want fgrep (or grep -F) rather than standard regular-expression grep, and it would be simpler to search through the items one per line, rather than using a for loop:
read -r user_answer
printf '%s\n' "$@" | grep -qFx "$user_answer"
If this is the last command in the function, then the return status will be that of the grep command, which is just what we need.
Finally, be aware that read can fail (e.g. when it reaches EOF). If you don't want that to be an automatic "no", then make provision for that. I don't know what the right behaviour is for this application so I'll leave that as an open issue for you to address appropriately.
Modified version
Here's what I ended up with:
# $1 = a string containing the question
# $2,.. = arguments containing the right answers (case insensitive)
confirmation()
{
question=$1; shift
printf '%b\nPlease answer [ ' "$question"
printf '%s ' "$@"
printf '] to confirm (Not <Enter>): '
read -r user_answer
printf '%s\n' "$@" | grep -qFxi "$user_answer"
} | {
"domain": "codereview.stackexchange",
"id": 34529,
"tags": "validation, console, shell, sh, posix"
} |
How to prove correctness of a shuffle algorithm? | Question: I have two ways of producing a list of items in a random order and would like to determine if they are equally fair (unbiased).
The first method I use is to construct the entire list of elements and then do a shuffle on it (say a Fisher-Yates shuffle). The second method is more of an iterative method which keeps the list shuffled at every insertion. In pseudo-code the insertion function is:
insert( list, item )
list.append( item )
swap( list.random_item, list.last_item )
I'm interested in how one goes about showing the fairness of this particular shuffling. The advantages of this algorithm, where it is used, are enough that even if slightly unfair it'd be okay. To decide I need a way to evaluate its fairness.
My first idea is that I need to calculate the total permutations possible this way versus the total permutations possible for a set of the final length. I'm a bit at a loss however on how to calculate the permutations resulting from this algorithm. I also can't be certain this is the best, or easiest approach.
Answer: First, let us make two maybe obvious, but important assumptions:
_.random_item can choose the last position.
_.random_item chooses every position with probability $\frac{1}{n+1}$.
In order to prove correctness of your algorithm, you need an inductive argument similar to the one used here:
For the singleton list there is only one possibility, so it is uniformly chosen.
Assuming that the list with $n$ elements was uniformly chosen (from all permutations), show that the one with $n+1$ elements obtained by your technique is uniformly chosen.
From here on, the proof is wrong. Please see below for a correct proof; I leave this here because both the mistake and the following steps (which are sound) might be educational.
It is useful to derive a local (i.e. element-wise) property that has to hold, because arguing about the whole permutation is painful. Observe that a permutation is uniformly chosen if every element has equal probability of being at each position, i.e.
$\qquad \displaystyle \mathop{\forall}\limits_{\pi \in \mathrm{Perm}_n} \operatorname{Pr}(L = \pi) = \frac{1}{n!} \quad \Longleftrightarrow \quad \mathop{\forall}\limits_{i=1}^n\ \mathop{\forall}\limits_{j=1}^n \operatorname{Pr}(L_i = j) = \frac{1}{n} \qquad (1)$
where $n = |L|$ and we assume for the sake of notational simplicity that we insert $\{1,\dots,n\}$ into the list.
Now, let us see what your technique does when inserting the $n+1$st element. We have to consider three cases (after the swap):
One of the elements in the list, not swapped, i.e. $i \in \{1,\dots,n\}$ and $j \in \{1,\dots,n\}$
One of the elements in the list, swapped, i.e. $i = n+1$ and $j \in \{1,\dots,n\}$
The new element, i.e. $i \in \{1,\dots,n+1\}$ and $j = n+1$
For each case, we compute the probability of element $j$ being at position $i$; all have to turn out to be $\frac{1}{n+1}$ (which is sufficient because of $(1)$). Let $p_n = \frac{1}{n}$ be the probability of one of the first $n$ elements being at any position in the old list (induction hypothesis), and $p_s = \frac{1}{n+1}$ the probability of any position being chosen by random_item (assumptions 1, 2). Note that the choice of the list with $n$ elements and picking the swap position are independent events, so the probabilities of joint events factor, e.g.
$\qquad \displaystyle \operatorname{Pr}(L_i=j, i \text{ swapped}) = \operatorname{Pr}(L_i=j)\cdot \operatorname{Pr}(i \text{ swapped}) = p_np_s$
for $i,j \in \{1,\dots,n\}$. Now for the calculations.
We only consider the old $n$ elements. Such an element $j$ is at position $i$ if and only if it was there before the last insertion and $i$ is not selected as swap position, that is
$\quad \displaystyle \operatorname{Pr}(L_i = j) = p_n(1-p_s) = \frac{1}{n}\cdot\frac{n}{n+1} = \frac{1}{n+1}$.
Here we consider that one of the old elements is swapped to the last position. Element $j$ could have been at any of the old positions, so we sum over all probabilities that $j$ was at position $i$ and $i$ is chosen as swap position, that is
$\quad \displaystyle \operatorname{Pr}(L_{n+1} = j) = \sum_{i=1}^n p_np_s = \sum_{i=1}^n \frac{1}{n}\cdot\frac{1}{n+1} = \frac{1}{n+1}$.
The new element ends up at position $i$ if and only if $i$ is chosen as swap position, that is
$\quad \displaystyle \operatorname{Pr}(L_i = j) = p_s = \frac{1}{n+1}$.
All turned out well, your insertion strategy does indeed preserve uniformity. By the power of induction, that proves that your algorithm creates uniformly distributed permutations.
A word of warning: this proof breaks down if the inserted elements are not pairwise different resp. distinguishable, because then the very first equation is no longer valid. But your algorithm is still valid; every permutation with duplicates is generated by the same number of random executions. You can prove this by marking duplicates (i.e. making them distinguishable), performing the above proof and removing the markings (virtually); the last step collapses equal-sized sets of permutations to the same permutation.
As Steven has remarked correctly in the comments, the above proof is fundamentally flawed as $(1)$ does not hold; you can construct distributions on the set of permutations that fulfill the right-hand, but not the left-hand side¹.
Therefore, we will have to work with probabilities of permutations, which turns out to be not that bad after all. The assumptions on random_item and the inductive structure outlined in the very beginning of the post remain in place, we continue from there. Let $L^{(k)}$ denote the list after $\{1,\dots,k\}$ have been inserted.
Let $\pi' \in \mathrm{Perm}_{n+1}$ an arbitrary permutation of $\{1,\dots,n+1\}$. It can be written uniquely as
$\qquad \displaystyle \pi' = (\pi(1), \pi(2), \dots, \pi(i-1), n+1, \pi(i+1), \dots, \pi(n), \pi(i))$
for some $\pi \in \mathrm{Perm}_n$ and $i \in \{1,\dots,n+1\}$. By induction hypothesis, $\operatorname{Pr}(L^{(n)} = \pi) = \frac{1}{n!}$. Furthermore, random_item picks position $i$ with probability $\frac{1}{n+1}$ by assumption. As the random choices of $\pi$ and $i$ are (stochastically) independent, we get
$\qquad \displaystyle \operatorname{Pr}(L^{(n+1)} = \pi') = \operatorname{Pr}(L^{(n)} = \pi) \cdot \operatorname{Pr}(i \text{ swapped}) = \frac{1}{(n+1)!}$
which we had to show. By the power of induction, that proves that your algorithm creates uniformly distributed permutations.
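The uniformity claim can also be checked mechanically for small $n$ by enumerating every possible sequence of swap positions (each sequence is equally likely, since each position is chosen independently and uniformly) and counting the resulting permutations. A sketch:

```python
from itertools import product
from collections import Counter

def insert_shuffle(n, swap_choices):
    """Build a list of 1..n, swapping the appended item with the
    chosen position after each insertion."""
    lst = []
    for item, pos in zip(range(1, n + 1), swap_choices):
        lst.append(item)
        lst[pos], lst[-1] = lst[-1], lst[pos]
    return tuple(lst)

n = 4
# Each choice sequence (one swap position per insertion) occurs with equal
# probability 1/n!, so uniformity <=> every permutation appears equally often.
counts = Counter(insert_shuffle(n, c)
                 for c in product(*[range(k) for k in range(1, n + 1)]))
print(len(counts), set(counts.values()))  # 24 {1}
```

All $4! = 24$ permutations appear, each produced by exactly one choice sequence, which is the bijection the corrected proof constructs.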
For example, assign every permutation in $\{(1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3)\}$ probability $\frac{1}{4}$ and all others $0$. There are also examples that assign every permutation a non-zero probability. | {
"domain": "cs.stackexchange",
"id": 262,
"tags": "algorithms, proof-techniques, randomized-algorithms, correctness-proof, randomness"
} |
Which is better practice for this if elif… else statement? | Question: If below piece of code has to be restructured to use but one single if-elif block, which is better? 1. or 2.?
A = int(input("Enter an integer(1~30): "))
if A % 6 == 0:
print("A is even")
print("A is even and a multiple of 6 (6, 12, 18, 24, 30).")
elif A % 10 == 0:
print("A is even")
print("A is even and a multiple of 10 (10, 20).")
elif A % 2 == 0:
print("A is even")
print("A is even and a number out of 2, 4, 8, 14, 16, 22, 26, 28.")
elif A % 3 == 0:
print("A is odd")
print("A is odd and a multiple of 3 (3, 9, 15, 21, 27).")
else:
print("A is odd")
print("A is 1 or a number out of 1, 5, 7, 11, 13, 17, 19, 23, 25, 29.")
if A % 2 == 0 and A % 3 == 0:
print("A is even")
print("A is even and a multiple of 6 (6, 12, 18, 24, 30).")
elif A % 2 == 0 and A % 5 == 0:
print("A is even")
print("A is even and a multiple of 10 (10, 20).")
elif A % 2 == 0 and A % 3 != 0 and A % 5 != 0:
print("A is even")
print("A is even and a number out of 2, 4, 8, 14, 16, 22, 26, 28.")
elif A % 2 != 0 and A % 3 == 0:
print("A is odd")
print("A is odd and a multiple of 3 (3, 9, 15, 21, 27).")
else:
print("A is odd")
print("A is 1 or a number out of 1, 5, 7, 11, 13, 17, 19, 23, 25, 29.")
This is part of a question in a school assignment and the problem is to reformat some code into the structure of
if
…
elif
…
elif
…
else
…
There are obviously better structures to avoid multiplicating print("A is even") but the question is to format it in the structure above.
So if the two above are the only options which is better and why?
Answer: The important fact first: You are required to refactor a reasonable piece of code into an evil one. An if-elif-else-rake is very seldom the best solution. That said I'll try to give two general rules.
When testing for "a multiple of 6 (6, 12, 18, 24, 30)"
if A % 6 == 0:
is the more readable and less error prone solution compared to
if A % 2 == 0 and A % 3 == 0:
Writing good code is not about showing off math knowledge.
When we do an if-elif-else rake we do not do
if a:
...
elif not a and b:
...
elif not a and not b and c:
...
elif not a and not b and not c and d:
...
we do
if a:
...
elif b:
...
elif c:
...
elif d:
...
The not conditions are implicitly there.
Again the reason is readability, simplicity and thus less probability of errors. Maintainability counts, think of inserting a clause later. It is also avoiding multiple evaluation of tests.
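As a sanity check (my own, not part of the assignment), the branch conditions of the two versions can be verified equivalent over the stated input range with a short loop:

```python
# Verify branch-by-branch equivalence of version 1 and version 2
# over the allowed inputs 1..30.
for A in range(1, 31):
    assert (A % 6 == 0) == (A % 2 == 0 and A % 3 == 0)
    assert (A % 10 == 0) == (A % 2 == 0 and A % 5 == 0)
    # Version 1's bare `A % 2 == 0` branch is only reached when the first
    # two tests failed, which matches version 2's explicit exclusions.
print("conditions agree on 1..30")
```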
So your first code example is a little less evil as there is no code duplication and better readability. | {
"domain": "codereview.stackexchange",
"id": 39473,
"tags": "python, comparative-review"
} |
Entanglement universality at criticality | Question: In $(d+1)$-dimensional quantum systems described by conformal field theories at criticality, I have been under the impression that the entanglement entropy is described by a non-universal area law plus universal logarithmic corrections.
This area+log is true in $(1+1)$-dimensional systems, which have
$$S(l) = \frac{c}{3}\ln \left(\frac{L}{\pi} \sin{\frac{\pi l}{L} }\right) +C$$
as seen in the transverse field Ising model in another question, though please see another question of mine about this when $c$ is large compared to the local degrees of freedom. Note that the area law in $1d$ is just a constant law.
Fradkin and Moore in "Hearing the Shape of a Quantum Drum" showed that in $(2+1)$-dimensional systems, the area+log statement is still true, though the log correction can sometimes vanish depending on the geometry of the region:
$$S(R) = 2f_s(l/a) + \alpha c \log(l/a) + O(1)$$
where $l$ is the length of the boundary of the region $R$, $a$ is the ultraviolet cutoff, $f_s$ is the non-universal leading area-law term, and there is the requisite universal $c \log(l/a)$ term. In said paper, they speculate that the $O(1)$ term is not universal. $\alpha$ here depends on the shape of the region $R$ and can be $0$.
However, I was recently perusing a paper that noted that for a $(d-1+1)$-dimensional theory, the entanglement entropy of a spherically shaped region with radius $r$ is
$$S(r) = \mu_{d-2}r^{d-2} + \mu_{d-4}r^{d-4} + ... + \begin{cases}
(-1)^{d/2-1}4A\log(r/a), & \text{if $d$ is even} \\
(-1)^{(d-1)/2}F, & \text{if $d$ is odd}
\end{cases}
$$
where this last term is the universal term. Note that the above predicts a logarithmic term in the case $(1+1)$-dimensions with $d=2$ but not in $(2+1)$ dimensions with $d=3$.
This seems to partially contradict Fradkin and Moore's result above. While it's likely that the lack of a log term in $(2+1)$-dimensions for a sphere comes from that particular geometry suffering $\alpha=0$ in Fradkin and Moore's result, it seems that the $O(1)$ piece is indeed universal.
Is it that in $(2+1)$-dimensions both the logarithmic and constant terms are universal? Or is the constant term only universal in some geometries of the region? How about for other dimensions?
Answer: Indeed, it has to do with the geometry of the entangling region. In (2+1)d, when the boundary of the region is smooth (e.g. a disk region in (2+1)d), the subleading correction to the area law is a universal constant and is related to the free energy of the theory on $S^3$. If the region contains sharp corners (e.g. rectangle), the subleading term is logarithmic in $l$. In that case, the constant piece is not universal as it can be absorbed into $\ln (l/a)$, by redefining $a$. But the coefficient of the log is a universal function of the opening angles of the corners.
The higher-dimensional result you quote applies to sphere boundary, which is all smooth. When there are singularities on the boundary additional logarithmic corrections occur. | {
"domain": "physics.stackexchange",
"id": 84694,
"tags": "quantum-entanglement, conformal-field-theory, critical-phenomena"
} |
If L is regular so is the language of compressed doubles | Question: Suppose L is a regular language over the alphabet $\Sigma$. I need to prove that
$$ L'=\{x_0\cdots x_n:x_0x_0x_1x_1\cdots x_nx_n\in L, \ \ x_i\in \Sigma\}$$ is also regular.
I thought I could take a DFA which computes L, and take each accepting state, together with the state before it along an edge labeled 0, and the state before that along an edge labeled 0, and make an edge that skips over the middle state. Then do likewise with edges labeled 1.
If you either cut out the middle state or if you leave it and regard the result as an NFA, I can't seem to show that the resulting automaton computes L'.
Answer: When you want to prove your construction correct you have to be precise in your construction. The new automaton has the same states, and the same initial and final states as the original one. Then, as you suggest, for every pair of consecutive edges with the same label in the original automaton, the new automaton will have one edge with that label from the first state to the third one. We cannot delete the middle state, as what is a middle state in one pair of edges might be begin or end state in another.
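The construction can be sketched directly on transition tables: assuming a complete DFA `delta`, the new automaton (still deterministic) is `delta2(q, a) = delta(delta(q, a), a)`. The example DFA below, for L = (00|11)*, is my own illustration:

```python
# DFA for L = (00|11)*: the strings that are letter-doublings of some word.
delta = {
    ('s', '0'): 'q0', ('s', '1'): 'q1',      # 's' is start and accepting
    ('q0', '0'): 's', ('q0', '1'): 'dead',
    ('q1', '1'): 's', ('q1', '0'): 'dead',
    ('dead', '0'): 'dead', ('dead', '1'): 'dead',
}
start, accepting = 's', {'s'}

def delta2(q, a):
    # one edge in the new automaton = two same-letter edges in the old one
    return delta[(delta[(q, a)], a)]

def accepts(trans, w):
    q = start
    for a in w:
        q = trans(q, a) if callable(trans) else trans[(q, a)]
    return q in accepting

# w is accepted by the new automaton iff its letter-doubling is in L
for w in ['', '0', '01', '0110', '10']:
    doubled = ''.join(a + a for a in w)
    assert accepts(delta2, w) == accepts(delta, doubled)
print("construction agrees with L' on sample inputs")
```

For this particular L the new automaton accepts everything, as expected: L' is all of $\Sigma^*$.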
The proof of correctness is now a simple observation on the construction. There is a path $(q_0,a_1,q_1)(q_1,a_2,q_2)\dots(q_{n-1},a_n,q_n)$ in the new automaton if and only if there is a 'duplicated path' $(q_0,a_1,p_1)(p_1,a_1,q_1)(q_1,a_2,p_2)(p_2,a_2,q_2)\dots(q_{n-1},a_n,p_n)(p_n,a_n,q_n)$ in the original one. For the corresponding languages it is now sufficient to consider 'accepting' paths with initial state $q_0$ and accepting state $q_n$. | {
"domain": "cs.stackexchange",
"id": 19164,
"tags": "formal-languages, regular-languages, finite-automata, regular-expressions"
} |
Kinematics of shooting a projectile at a target | Question: I want to determine the initial velocity given an angle at which a projectile must be launched in order to hit a target located at a distance $D$ and height $Y$ from the origin.
Following the discussion here I have tried to solve for the velocity and came up with the following equation for the velocity:
$$v^2 = \frac{5D^2}{D \sin\theta \cos\theta - Y \cos^2\theta}$$
This was derived from $$ D = vt\cos\theta $$
and $$ Y = vt\sin\theta - 5t^2 $$
where I rearranged the equation for $D$ for $t$ and subsituted into the equation for $Y$.
Unfortunately, this seems to fail for values where:
$$Y = D, \quad \theta \leq 45$$
Why does my method fail for specific values of $Y$ and $\theta$?
Answer: Actually, your method fails for $\theta\leq 45$, not $\theta\gt45$.
Mathematically, this is because for your situation the denominator reduces to $\tan\theta-1$, which becomes negative for $\theta\leq 45$. This is a problem when subsequently taking the square root.
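Numerically, the failure shows up as a non-positive denominator. A small solver sketch (using $g = 10\ \mathrm{m/s^2}$, i.e. the $5t^2$ term in the question):

```python
import math

def launch_speed(D, Y, theta_deg):
    """Speed needed to hit (D, Y) at launch angle theta, or None if
    the target is unreachable at that angle (denominator <= 0)."""
    th = math.radians(theta_deg)
    denom = D * math.sin(th) * math.cos(th) - Y * math.cos(th) ** 2
    if denom <= 0:
        return None  # the trajectory can only pass under the target
    return math.sqrt(5 * D ** 2 / denom)

print(launch_speed(10, 0, 45))   # 10.0 m/s for a flat shot at 45 degrees
print(launch_speed(10, 10, 45))  # None: Y = D with theta = 45 is unreachable
```

Raising the angle above $\arctan(Y/D)$ makes the denominator positive again, e.g. `launch_speed(10, 10, 60)` returns a finite speed.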
Physically, consider what happens for your specific values of $Y=D$; the target is located at distance of $\sqrt{2}D$ at exactly an angle of $\theta=45$ from the origin. If you now shoot your projectile at the exact same angle (or less) you will never reach the target because the tiniest bit of gravity will reduce the $y$-velocity such that it will curve under the target. | {
"domain": "physics.stackexchange",
"id": 52283,
"tags": "homework-and-exercises, newtonian-mechanics, kinematics, velocity"
} |
How to create a 2D array in ROS | Question:
I want to create a custom msg as a 2D array which has 4 variables: x, y, velocity, and angle. Each variable will be set to a value as a 2D array. Below is the code:
matlab_file_dic = scipy.io.loadmat(map_file)
map_data_values = matlab_file_dic['mapData'][0][0][2]
map_data = np.reshape(map_data_values,(1000,9), 'F')
for k in range(9):
for i in range(1000):
par_name = 'Parameters.mapData.FunctionValues[{}]'.format(k*1000 + i)
self.model.set(par_name, map_data[i, k])
self.model.set('Parameters_INI_VALUES_COURSE_ANGLE', map_data[start_sim, 2])
Originally posted by m0gha on ROS Answers with karma: 3 on 2022-09-14
Post score: 0
Original comments
Comment by ravijoshi on 2022-09-14:
The question is hard to understand. Do you want to create a custom message? Furthermore, I could not understand the relation of your code snippet with the title of your question. Can you please provide more information?
Comment by m0gha on 2022-09-18:
I want to create a custom message, the message should be a 2D array " map_data[i, k] "
Comment by gvdhoorn on 2022-09-19:
Just to make it extra clear: there is no such thing as an "Nd array" (with N > 1) in ROS msg IDL. It's not supported.
The best you can do is use one of the work-arounds suggested/described by @ravijoshi in his answer below.
Answer:
The following two approaches can be utilized in order to create a custom message that contains a 2D array, (map_data in your case):
Using nested .msg: In this approach, you define two .msg files as shown below:
Create Row.msg file with the following content:
int8[] data
Here, you create a row of your map.
Please note that, I am assuming that map contains data of type int8. Please change it as per your need.
Create Map.msg file with the following content:
Row[] row
Here, you append the above created row to your msg.
Please check the following link for more info. #q67273
Using flattened array: In this approach, you reshape your 2D array to make it a 1D array. This is the simplest approach and used by ROS to publish images (2D data). You are free to check the message definition of sensor_msgs/Image and read more about it.
Example
Consider you have the following 2D array:
In [1]: map_data
Out[1]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]], dtype=int8)
Approach #1: Create every row and append it to map as shown below:
# initialize map
map = Map()
# initialize row with 1D data
row1 = Row([ 0, 1, 2, 3])
# append row to map
map.row.append(row1)
row2 = Row([ 4, 5, 6, 7])
map.row.append(row2)
row3 = Row([ 8, 9, 10, 11])
map.row.append(row3)
1. NumPy Simulation:
In [2]: row1 = np.array([ 0, 1, 2, 3])
In [3]: row2 = np.array([ 4, 5, 6, 7])
In [4]: row3 = np.array([ 8, 9, 10, 11])
In [5]: map_data_received = np.vstack((row1, row2, row3))
In [6]: map_data_received
Out[6]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
Approach #2: Just flatten your data and publish it. Upon subscribing, reshape it back to 2D as shown in the NumPy simulation below:
In [6]: map_data.flatten()
Out[6]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [7]: map_data_received = map_data.flatten().reshape((3,-1))
In [8]: map_data_received
Out[8]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
On a side note, it seems that your variable map_data_values is already a 1D array. So, approach #2 can be applied directly.
Originally posted by ravijoshi with karma: 1744 on 2022-09-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by m0gha on 2022-09-20:
Thanks so much. It works :)
Comment by ravijoshi on 2022-09-20:
Glad you made it work. Please upvote the answer and accept it by clicking on mark this answer correct. | {
"domain": "robotics.stackexchange",
"id": 37978,
"tags": "ros, python3"
} |
Conditions for a node to be a leaf node on DFS tree | Question: Question:
Consider the DFS tree generated by a DFS on a connected graph. Write below the necessary and sufficient condition for a node v to be a leaf node on the DFS tree. The condition must be in term's of v's discovery time d[v] and finishing time f[v].
I honestly have no idea what the answer to this question from a past exam may be...
Answer: A node v is a leaf node on the DFS tree if and only if f[v] = d[v] + 1.
The proof is obvious once you follow the DFS, marking each node's discovery time and finishing time. A leaf is defined as a node that has no children.
Let $v$ be a node.
Suppose $v$ has children in the final DFS tree. That is, we will go forward to a child $u$ of $v$ right after we have discovered $v$. That is, $d[v]+1 = d[u]$. The finishing time of $v$ must come after that. So $f[v] \gt d[v] + 1$.
Suppose $v$ has no children in the final DFS tree. That is, we will go backtracking once we have discovered $v$. That is, we will finish $v$ right after we have discovered it. That is, f[v] = d[v] + 1.
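To make the claim concrete, here is a small self-check (my own sketch, not part of the original answer), using the usual convention that a global clock ticks once at each discovery and once at each finish:

```python
# Run a DFS recording discovery time d[v] and finishing time f[v], and
# verify that the leaves of the DFS tree are exactly the nodes with
# f[v] == d[v] + 1.
def dfs_times(graph, start):
    d, f, children = {}, {}, {v: [] for v in graph}
    t = 0
    def visit(v):
        nonlocal t
        d[v] = t
        t += 1
        for u in graph[v]:
            if u not in d:          # tree edge: u becomes a child of v
                children[v].append(u)
                visit(u)
        f[v] = t
        t += 1
    visit(start)
    return d, f, children

g = {'a': ['b', 'c'], 'b': ['x'], 'c': [], 'x': []}
d, f, children = dfs_times(g, 'a')
for v in g:
    assert (children[v] == []) == (f[v] == d[v] + 1)
print("leaf <=> f[v] == d[v] + 1 holds on this graph")
```

The graph here is an arbitrary example; any connected graph reachable from the start node gives the same equivalence.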
For people who are not familiar with discovery time and finishing time of a node v associated to a DFS, you can check either Stanford course note or Carnegie Mellon course note. Of course, you may just search by yourself. | {
"domain": "cs.stackexchange",
"id": 11983,
"tags": "trees, graph-traversal"
} |
Why are gas giants colored the way they are? | Question: As I understand it, Jupiter, Saturn, Uranus, and Neptune are all made primarily from varying proportions of hydrogen and helium. Despite this, Jupiter is very red, Saturn is yellow, and Uranus and Neptune are blue. Why the difference in colors for planets with similar chemical makeups? Or do they not have similar chemical makeups?
If such an explanation is available, I would like specific descriptions of why each planet is colored the way it is.
Answer: You could just have Googled this question.
Post #5 from the first hit:
**First off, by "wondering what colors different gas giants can be", you are presumably asking about their light spectra through the visible range of wavelengths (380-720 nm), right?**
Light interacts primarily with electrons. It is scattered or absorbed
in the presence of electrons, which come in a variety of "phases".
Here are the most relevant:
1. free within an ionized gas (can absorb in the presence of the electric field of an ion)
2. attached to atoms and ions
3. attached to molecules
4. attached to molecules that have condensed into solid state aerosols and grains, or liquid droplets.
The most important thing to take away from this is that every type of
material, in terms of composition and "phase", absorbs and scatters
light uniquely.
The prevalence and importance of each of the above four "phases"
depend on (a) the elemental composition of the giant planet's
atmosphere (defined as that layer responsible for light
reflected/emitted by planet), and (b) its equation of state (how
pressure changes as a function of density and temperature). The first
of these provides the raw materials, and the second arranges them in
"phase". Very roughly speaking, one may assign decreasing temperatures
(T) to the above 4 "phases" moving down the list 1-->4. Pressure (P)
also plays a role, and in general one may place the above phases on a
P-T diagram. Physics and Chemistry are at work to determine what kind
of "stuff" is present as a function of depth through the giant planet
atmosphere. One rule of thumb is that chemistry is much more effective
at higher temperatures (to a point) and/or in the presence of
moderately energetic light.
Next, before proceeding, go back and read the bold statement, above.
To finish off this overly long post: Two giant planets of equal bulk
compositions will almost certainly appear differently if their P vs. T
profiles differ, or if their atmospheric compositions differ (e.g.,
due to convective mixing from the interior, mixing due to wind
currents, heterogeneous settling of heavier matter towards the center
over time). Two giant planets of equal bulk compositions, but
differing ages will appear differently, since a planet's interior
cools over time, affecting P-T relation within the planet as well as
its thermally emitted spectrum. The intensity and spectral shape of
the light incident from the parent star will affect the P-T diagram,
the chemistry and phase of the matter, the thermally emitted spectrum,
as well as the distribution of photons available for scattering.
From the second hit:
Jupiter is a giant gas planet with an outer atmosphere that is mostly
hydrogen and helium with small amounts of water droplets, ice
crystals, ammonia crystals, and other elements. Clouds of these
elements create shades of white, orange, brown and red. Saturn is also
a giant gas planet with an outer atmosphere that is mostly hydrogen
and helium. Its atmosphere has traces of ammonia, phosphine, water
vapor, and hydrocarbons giving it a yellowish-brown color. Uranus is a
gas planet which has a lot of methane gas mixed in with its mainly
hydrogen and helium atmosphere. This methane gas gives Uranus a
greenish blue color. Neptune also has some methane gas in its mainly
hydrogen and helium atmosphere, giving it a bluish color. | {
"domain": "astronomy.stackexchange",
"id": 745,
"tags": "solar-system, jupiter, gas-giants, saturn, uranus"
} |
What is hybridisation of XeF6 in solid state? | Question: According to the following formula:
Type of hybridisation/steric no. = (no. of sigma bonds + no. of lone pairs)
it should be $\ce{sp^3d^3}$; however, according to my textbook it is $\ce{sp^3d^2}$ in the solid state. That is how it was written (in my textbook), emphasising the fact that "in solid state it is $\ce{sp^3d^2}$". How does the solid state of a compound change its hybridisation?
Answer:
Forget about applying hybridization outside the second row, especially in 'hypervalent' compounds. I know that it is commonly used and sometimes works, but it is incorrect.
The $\ce{XeF6}$ molecule is a hard spot. While, indeed, experimental data suggest that it adopts distorted octahedral geometry in the gas phase, there is evidence that the minimum is very shallow.
The octahedral structure of $\ce{XeF6}$ (which is probably a local minimum, or at least found to be one in more than one calculation) can be pretty easily described in terms of three-center four-electron bonds.
There is no good agreement on the nature of the stereo-active electron pair in $\ce{XeF6}$. It seems that repulsion with core d-shells is important, but the recent articles I could get my hands on have little to no rationalization of the fact. Still, it must be noted that the minimum is very shallow, meaning that rough qualitative theories like VSEPR and MO LCAO are too rough for a good rationalization anyway.
The solid state typically involves a lot more interaction, and, as a result, compounds may adopt very different structures. The simplest example I can think of is $\ce{SbF5}$. A monomer adopts a trigonal-bipyramidal structure. However, in the solid state, a tetramer with octahedrally coordinated antimony and four bridging fluorines is formed. In terms of VSEPR, it is a move from $\mathrm{sp^3d}$ to $\mathrm{sp^3d^2}$. Similarly, $\ce{XeF6}$ has a crystal phase (one of six) that involves bridging atoms. | {
"domain": "chemistry.stackexchange",
"id": 8682,
"tags": "inorganic-chemistry, halides, vsepr-theory"
} |
In what way observables are a representation of the symmetry group? | Question: I was studying a course about Lie groups, Lie algebras and their representations (and classifications) when I encountered this statement :
When a physical system admits symmetry, the observables form a representation of the group concerned. The Lorentz and Poincaré groups are very important examples.
(It's originally in French so this translation might be a bit off)
This sounds like it's very important to understand the "big picture" of such theories but I don't really understand it. Is this only for quantum theories? If so, what is the link between representation theory and observable?
I'm in the first year of a master's, so I can understand concepts about Lie groups, Lie algebras, manifolds, QFT, etc.
Answer: If a quantum system admits a Lie symmetry group, this means that there exists a unitary strongly continuous representation of that Lie group acting in the Hilbert space of the system. The one-parameter subgroups are represented by one-parameter strongly continuous unitary (sub)groups. Stone's theorem proves that each such group is of the form $e^{-iaA}$ for a unique selfadjoint operator $A$. Selfadjoint operators are observables by definition. The set of all $A$ above forms a representation of the Lie algebra of the symmetry group. | {
"domain": "physics.stackexchange",
"id": 70351,
"tags": "quantum-mechanics, special-relativity, symmetry, group-theory, representation-theory"
} |
Why is the final temperature of irreversible adiabatic processes higher than that of reversible adiabatic processes? | Question: Suppose an irreversible adiabatic expansion process and a reversible adiabatic expansion process are starting from the same initial state, say, P1V1. Now, let both of these processes have equal pressure in their final state. My teacher said that the magnitude of the work done in the reversible process will be more than that done in the irreversible process. The reason behind this, my teacher said, was that the final temperature of irreversible process will always be higher than that of reversible process. But we were not taught why this happens. Is there a formula based proof (not graph based) that uses equations related to the first law to explain why this happens?
*I specifically say first law because we haven't been taught the other laws yet.
Answer: It is better to think of it the other way around and understand/derive the "2nd law" from the observation that in an adiabatic process the work done is maximum if it is reversible. When I say "derive" the 2nd law I mean making equivalent statements to that for this is at the heart of it.
Take that statement as being an axiom derived from observation and base the rest on that.
In general, you cannot avoid using concepts/statements that are observational and using them as a basis for the rest of physics. You can even phrase the observational content of your question by stating that no adiabatic irreversible cycle exists; all adiabatic cycles are reversible. At this level "reversibility" is almost a basic undefined concept in the sense of a point or line in Euclidean geometry, so that axioms/postulates "define" the concept. | {
"domain": "physics.stackexchange",
"id": 92357,
"tags": "thermodynamics, temperature, ideal-gas, reversibility, adiabatic"
} |
How Chern-Simons gauge field transform fermion to scalar? | Question: In A.Zee's "QFT in a Nutshell" book, page 324, after he wrote down the general Lagrangian, he said "in previous chapter, we learned that by introducing a Chern-Simons gauge field we can transform $\psi$ to a scalar field". However, I couldn't find any clarification of this argument in the previous chapter. Any comments or references are greatly appreciated.
Answer: Yes, this is explained in the previous chapter (Fractional Quantum Hall effect)
Surprisingly, when we have a filling factor $\nu$ of the Landau levels of form $\frac{1}{3}$ or $\frac{1}{5}$, the quantum Hall fluid appears incompressible. This seems curious, because the integer quantum Hall effect appears only for integer values, corresponding to $\nu$ Landau levels completely filled (they have the degeneracy $\frac{BA}{2\pi}$, where $B$ is the magnetic field, and $A$ is the area occupied by the electrons).
So, calling $N_e$ the number of electrons, something happens when $\dfrac{N_e}{(\frac{BA}{2\pi})}= \nu$, with $\nu^{-1}$ odd.
Now, you have to notice that $\frac{BA}{2\pi}$ is simply the number of flux quanta $N_\phi$
So, finally, the number of flux quanta by electron, is : $\dfrac{N_\phi}{N_e} = \nu^{-1}$, a odd integer.
Now, if we look at the statistics, exchanging $2$ particles, we have the $(-1)$ term coming from the Fermi statistics, now multiplied by another $(-1)^{\nu^{-1}}$ term coming from an Aharonov-Bohm effect.
So, finally, one has a $(+1)$ term, which signals Bose statistics.
Since a Dirac spinor in $(2+1)$ dimensions has $2$ degrees of freedom, we simply have to choose a bosonic field with $2$ degrees of freedom, that is, a complex scalar field. | {
"domain": "physics.stackexchange",
"id": 15451,
"tags": "quantum-field-theory, chern-simons-theory"
} |
How to fill a sensor_msgs/image without a memcpy? | Question:
Hi,
I am writing a ROS driver for our Tyzx DeepSea stereo camera. I am trying to adapt the driver for FireWire cameras, and I need advice on this part of the code:
right_image_.data.resize(image_size);
memcpy(&right_image_.data[0], DSIF->getRImage(), image_size);
right_image_ is a sensor_msg/Image , whose field data is a vector<uint_8,...>
image_size is the size of the image returned by my camera proprietary driver, and DSIF->getRImage() is a pointer to this image.
My question is :
Is it possible to avoid a memcpy with something like creating a vector with a custom allocator, using directly the pointer returned by DSIF->getRImage() as data and image_size as size (plus type if needed), and then use the set_data_vec() function of the image message?
Originally posted by Nicolas Turro on ROS Answers with karma: 11 on 2011-03-08
Post score: 1
Original comments
Comment by Eric Perko on 2011-03-11:
What's your use case that you want to avoid the memcpy? Seems like if you had the ROS image and the driver image sharing data, the driver's framerate could become dependent on how quickly you could publish ROS images (i.e. you'd have to lock to avoid changing the driver frame while publishing).
Comment by emrainey on 2016-11-02:
Ideally they would have a nodelet receiving the sensor_msgs::Image in a callback so the serialization would be skipped. However, the initial copy from the driver is still wasted cycles. They ought to be able to create a custom allocator.
Answer:
Hi Nicolas,
It's possible to avoid that memcpy, but probably not useful.
Consider what happens in detail when you publish your image message to a separate node:
You memcpy the data to a sensor_msgs/Image message.
When you publish the sensor_msgs/Image, ROS serializes its fields into a buffer for inter-process transport - another memcpy.
The subscriber node populates another sensor_msgs/Image by deserializing the received buffer. If the nodes are on the same machine (subscribed over loopback), I believe the kernel optimizes this pretty well, but it's another memcpy or two.
It's possible to reduce the first two steps to a single memcpy by defining your own custom image type, that simply points to the DSIF->getRImage() data, and registering it with roscpp using message traits. Essentially, your custom type pretends to be sensor_msgs/Image (by registering the same MD5 sum) and serializes to exactly the same over-the-wire format as sensor_msgs/Image, so any subscribing nodes can't tell the difference. I can point you to examples of this if you want.
However, after doing all that we're still talking about IPC and 2-3 memcpy's at best. Another issue is that you can't directly use image_transport to provide compressed topics anymore, because image_transport currently understands only sensor_msgs/Image.
A different approach is to write your camera driver and processing nodes as nodelets, and load them all in the same process. In that case you skip the serialization bottleneck (steps 2 and 3) entirely. The driver publishes a shared_ptr<sensor_msgs::Image>, and that gets passed directly to the in-process subscriber nodelets. You still pay the cost of the initial memcpy, but that's it.
There's a tutorial on porting nodes to nodelets that needs a little love, but gives you the idea. For a complete example of nodelet-ized camera driver, see Diamondback camera1394.
So my advice is: live with the memcpy, and consider turning your driver into a nodelet if (and only if) message serialization is causing performance problems.
Originally posted by Patrick Mihelich with karma: 4336 on 2011-04-01
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by Asomerville on 2011-05-12:
Side note: The memcopy for interprocess transport won't happen between nodelets if boost::shared_ptr is used. | {
"domain": "robotics.stackexchange",
"id": 4993,
"tags": "ros, optimization, sensor-msgs, image-transport"
} |
Two Knockout computed dependent on each other | Question: I have 3 fields:
Net Price (ex. tax)
tax amount
Total price (price ex. vat + tax amount)
The NetPrice and the Total are writable (i.e. you can change either of them and the other 2 values must be auto-calculated).
The way I've done it is using 3 observables and 2 computed Knockout objects, but I thought perhaps someone who knows Knockout a lot better could suggest a more efficient way to achieve this.
Working jsFiddle
HTML:
Net Price:
<input type="textbox" data-bind="value: NetPriceCalc" />
<br />Tax Amount:
<label data-bind="html: TaxAmt"></label>
<br />Total:
<input type="textbox" data-bind="value: TotalCalc" />
Script:
var viewModel = {
NetPrice: ko.observable(100),
TaxAmt: ko.observable(20),
Total: ko.observable(120),
TaxRate: 0.2
};
viewModel.updateTaxAmt = function (useNetPrice) {
if (useNetPrice) {
return this.TaxAmt(this.NetPrice() * this.TaxRate);
} else {
var total = Number(this.Total());
var taxAmt = total - total / (1 + this.TaxRate);
return this.TaxAmt(taxAmt);
}
};
viewModel.updateNetPrice = function () {
this.NetPrice(Number(this.Total()) - Number(this.TaxAmt()));
};
viewModel.updateTotal = function () {
this.Total(Number(this.NetPrice()) + Number(this.TaxAmt()));
};
viewModel.NetPriceCalc = ko.computed({
read: function () {
console.log("NetPriceCalc read");
return viewModel.NetPrice();
},
write: function (value) {
console.log("NetPriceCalc write");
viewModel.NetPrice(value);
viewModel.updateTaxAmt(true);
return viewModel.updateTotal();
}
});
viewModel.TotalCalc = ko.computed({
read: function () {
console.log("TotalCalc read");
return viewModel.Total();
},
write: function (value) {
console.log("TotalCalc write");
viewModel.Total(value);
viewModel.updateTaxAmt(false);
return viewModel.updateNetPrice();
}
});
ko.applyBindings(viewModel);
Answer: I will review and contrast the SO answer that I liked best:
function viewModel() {
var self = this;
self.NetPrice = ko.observable(100);
self.TaxRate = 0.2;
self.TaxAmt = ko.computed(function() {
return parseFloat(self.NetPrice()) * self.TaxRate;
});
self.Total = ko.computed({
read: function() {
return parseFloat(self.NetPrice()) + self.TaxAmt();
},
write: function(val){
var total = parseFloat(val);
var taxAmt = total - total / (1 + self.TaxRate);
self.NetPrice(total - taxAmt);
}
});
}
It is more standard to create a constructor than to create an object with object notation and then add functions.
Not too big a fan of using self everywhere instead of this in the SO answer
Do not use console.log in production code
Do not use calc as part of the variable name
Do not use a useNetPrice type of indicator, knockout can figure out the computed values on its own and knows when to recalculate what
write functions do not have to return anything | {
"domain": "codereview.stackexchange",
"id": 6664,
"tags": "javascript, knockout.js"
} |
How to remove heat from an IP66 enclosure? | Question: I have a small IP66 polycarbonate enclosure with a Zirconia-based oxygen sensor which produces significant amounts of heat (the internal sense element runs at 450C using a 1.5W heating element, sensor surface reaches 85C). The enclosure also contains sensitive analog electronics to read the sensor. The residual heat from the sensor is driving the ambient temperature in the enclosure up past 40C (measured on the outside of the enclosure front panel). The oxygen sensor and another gas sensor are only rated for ambient temperatures of 50C maximum, so I'm worried about keeping those in spec. I'm also concerned about needing temperature compensation for the analog circuits.
How can I remove the excess ambient heat from the enclosure while still maintaining an IP66 rating?
Answer: Considering the three methods of moving heat:
Convection - To keep the IP66 rating of the enclosure, you can't add any holes for exhaust fans.
Radiation - At the temperatures that you are talking about, radiation will not be removing much heat.
Conduction - This is a viable alternative since it could work through the walls of the enclosure.
You can add fins on the outside of the enclosure and rely on heat being conducted through the walls of the enclosure to the external heatsink. This will depend on the material that the enclosure is made from. If it is metal already, this could work as-is. If it is plastic, you may be able to retrofit a metal plate into a wall and still seal it to IP66.
Once the heat is outside of the enclosure, a fan could be added to help the heatsink function.
Another option may be to have an internal heatsink and then move the heat through the walls of the enclosure via either heat pipes or a circulation closed-loop cooling system. The wall penetrations could be sealed to still meet the IP66 requirements. | {
"domain": "engineering.stackexchange",
"id": 81,
"tags": "mechanical-engineering, electrical-engineering, heat-transfer, cooling"
} |
Would the oceans have boiled off if life (cyanobacteria) hadn't evolved on Earth? | Question: The emergence of cyanobacteria caused removal of carbon dioxide from the atmosphere, resulting in Earth becoming cooler. That perhaps implies that the Earth's oceans would have boiled off (in billions of years) if life like cyanobacteria didn't evolve. Is it true?
Answer: Short answer: no. Life does contribute to the amount of CO2 in the atmosphere and to the temperature on Earth, but mostly not via photosynthetic organisms.
The main mechanism, which regulates the temperature and removes CO2 from the atmosphere is silicate and carbonate weathering cycle. In simple model the more CO2 you have in the atmosphere, the more acidic is rain water. More acidic rain can dissolve more of silicate and carbonate minerals and the dissolved minerals are carried to the ocean, where they either precipitate or are used by living organisms to create shells of CaCO3 (calcite) or SiO2 (opal). When these organisms die, many shells are dissolved but some shells fall all the way to the sea floor and are buried.
In simple words: more CO2 -> more (acidic) rain -> more weathering and CO2 removal -> more mineral deposits -> less CO2 in the atmosphere -> lower temperature -> less rain
If you want more information with formulas, I suggest looking up a description of the carbonate–silicate geochemical cycle. | {
"domain": "earthscience.stackexchange",
"id": 1225,
"tags": "ocean"
} |
What is the connection between the Mersenne-Twister and Linear Congruential Generator? | Question: This video claims that the Mersenne-Twister is based on the Linear Congruential Generator. The math in the Mersenne-Twister original paper is a bit too much for me. I have not seen that claim explicitly made in the wikipedia article for Mersenne-Twister or in some of the other sources I consulted. Is the Mersenne-Twister based on the Linear Congruential Generator? High level, how so?
Answer: No it is not.
MT is based on a twisted generalized feedback shift register. It is defined on vector spaces over finite fields, specifically $GF(2)$.
The LCG is based on integer arithmetic over the integers modulo $n,$ say $\mathbb{Z}_n.$
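A minimal LCG sketch makes the contrast concrete (my own illustration; the multiplier and increment are the widely quoted Numerical Recipes parameters):

```python
# An LCG iterates x_{n+1} = (a*x_n + c) mod m in ordinary modular integer
# arithmetic, whereas MT19937's recurrence is linear over GF(2): it
# updates a 19937-bit state with XORs, shifts and bit masks.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(42)
print([next(gen) for _ in range(3)])  # first value is (1664525*42 + 1013904223) % 2**32
```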
The two algebraic structures are quite different so this statement is false. | {
"domain": "cs.stackexchange",
"id": 20445,
"tags": "pseudo-random-generators"
} |
Simulating a 1-dimensional wave on (a segment of) an infinite line | Question: I'm trying to numerically simulate a 1-dimensional wave with a chain of linked harmonic oscillators as described here (the result can be seen here). The simulation behaves like a wave on a finite line segment with fixed or free boundaries (depending on the boundary conditions you set): when the wave reaches the border it bounces back (as it should by conservation of energy). What I would like to do is make my queue of oscillators behave like a segment of an infinite queue, meaning there must be no bounces and the energy must flow away through the boundaries. I tried some strategies which turned out not to work, such as adding invisible extensions of the queue and making them more damped (actually it turned out to be equivalent to not adding the extension at all!). Any ideas?
Answer: So what should the boundary conditions be for a segment of an infinite line? Let us explore the options ($u$ is displacement, $u'$ is slope and $u''$ is curvature):
Fixed Ends: $u(0)=0$, $u(L)=0$
Mirror Ends: $u'(0)=0$, $u'(L)=0$
Free Ends: $u''(0)=0$, $u''(L)=0$
Periodic Displacements: $u(0)=u(L)$, $u'(0)=u'(L)$
Periodic Tensions: $u'(0)=u'(L)$, $u''(0)=u''(L)$
None of them seem to work to simulate an infinite line. In fact the only way to simulate an infinite line is to assume there is some kind of periodicity in the shape and apply boundary conditions to enforce it. So the answer is you cannot do it.
Remember the old rule of physics, that every symmetry is a manifestation of some kind of conservation law. The energy is not conserved in the segment so there cannot be a symmetry to apply.
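In practice the usual workaround is to pad the domain and damp the ends. A minimal sketch of that idea (my own illustration, not code from the original answer; the grid size, damping profile and time step are arbitrary choices):

```python
import numpy as np

# Solve the damped wave equation u_tt + gamma(x) u_t = c^2 u_xx, where
# gamma is zero in the displayed region and ramps up inside an absorbing
# "sponge" layer at each end of the padded domain.
N, pad = 200, 60                       # displayed points, sponge width
n = N + 2 * pad
c, dx, dt = 1.0, 1.0, 0.5              # CFL number c*dt/dx = 0.5 < 1
gamma = np.zeros(n)
gamma[:pad] = np.linspace(0.3, 0.0, pad)    # ramps down into the domain
gamma[-pad:] = np.linspace(0.0, 0.3, pad)   # ramps up towards the far end

x = np.arange(n)
u = np.exp(-0.01 * (x - n / 2.0) ** 2)      # Gaussian pulse in the middle
u_prev = u.copy()                           # zero initial velocity
for _ in range(2000):
    # Periodic ends via np.roll; the sponge absorbs the pulse before
    # wrap-around matters.
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    u_next = (2 * u - (1 - 0.5 * gamma * dt) * u_prev
              + (c * dt / dx) ** 2 * lap) / (1 + 0.5 * gamma * dt)
    u_prev, u = u, u_next

# The displayed (un-padded) part should now be essentially quiet.
print(np.abs(u[pad:-pad]).max())
```

The gentle ramp in gamma matters: an abrupt jump in damping reflects part of the wave back, which is exactly what the sponge is meant to avoid.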
The best solution is to create a really long segment with free end conditions, add dampers and light springs on the ends, and only display a short part of it away from the ends. | {
"domain": "physics.stackexchange",
"id": 9348,
"tags": "waves, simulations, computational-physics"
} |
Do solitons in QFT really exist? | Question: In general, solitons are single-crest waves which travel at constant speed and don't lose their shape (due to their non-dispersivity), and there are many examples of them in the real world.
Now in QFT a soliton can be defined as a single crest which travels through a potential that (when we consider two dimensions) has the form of a sinusoid in the x-direction and has a value in the y direction that is the same as the corresponding point in the x-direction, more or less like the surface of a frozen sea with sinusoid waves.
Now one part of the soliton lies between two crests of the sinusoid, while the other part lies between the next two crests of the sinusoid, and it's stretched and moves in the y-direction. The y-direction extends to infinity on both sides. For those interested in the math take a look at Sine Gordon equation.
A soliton in QFT can be represented by this photograph:
Both sides of the rod about which the little rods rotate extend to infinity.
Now does a soliton in QFT really exist, or is it a mathematical construction? Or more concrete, does the described potential really exist? And IF they exist, how do they make themselves detectable?
Answer: All solitons are mathematical constructions. By definition, a soliton is a solution of a nonlinear partial differential equation (PDE) in which the dispersion is exactly cancelled by the non-linearity, yielding a propagating, non-dispersing wave-like solution. There are many PDEs with this property and there are many physical systems that may be approximated by such PDE models. These models are never exact, so "real world" solitons are non-existent. Such models are nevertheless frequently used to gain valuable insights into the behaviors of nonlinear mechanical (see my answer to this question), optical (see here), and fluid (see here) systems.
When the properties of solitons were first discovered, their temporal permanence was recognized as analogous to that of elementary particles and there was considerable study of QFT models with soliton solutions. I am not aware of any of these models that were based upon theories that became part of the Standard Model, but you may be interested in this question and the comments that it elicited. | {
"domain": "physics.stackexchange",
"id": 40677,
"tags": "quantum-field-theory, solitons"
} |
What shape of track minimizes the time a ball takes between start and stop points of equal height? | Question: I was at my son's high school "open house" and the physics teacher did a demo with two curtain rail tracks and two ball bearings. One track was straight and on a slight slope. The beginning and end points of the second track were the same but it was bent into a dip with a descending slope and an ascending slope. The game was to guess which ball bearing wins the race. At first blush ("the shortest distance between two points is a straight line") one might think the straight wins, but in fact the bent track wins easily (because the ball is going very fast for most of the time). His point was that nature doesn't always work the way our antediluvian brains think it should.
So my question (and I do have one) is what shape of track gives the absolute shortest time? I think it's a calculus of variations problem or something. For simplicity, let's assume a) no friction b) a point mass and c) that the end point is a small delta below the start point. I played around with some MATLAB code and it seemed to give a parabola of depth about 0.4 times the distance between the start and stop point. The value of g and the mass didn't seem to matter. But I don't know if my code was right or wrong and I couldn't figure out the significance of 0.4. I Googled around for an answer but couldn't find one. Thanks for any pointers!
Answer: This is a special case of the brachistochrone problem, and has been known since at least ancient Greek times.
The solution is a cycloid.
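As a quick numerical companion (my own sketch, not part of the original answer): for start and end points at essentially equal height a horizontal distance d apart, the brachistochrone is one full arch of an inverted cycloid with rolling radius R = d/(2π), and integrating ds/v along it gives the frictionless descent time T = 2π√(R/g) = √(2πd/g).

```python
import math

# For the cycloid x = R(t - sin t), y = -R(1 - cos t) and speed
# v = sqrt(2g(y0 - y)), the integrand simplifies to dt = sqrt(R/g) dtheta,
# so the descent time over the full arch (theta from 0 to 2*pi) is
# T = 2*pi*sqrt(R/g) = sqrt(2*pi*d/g) with R = d/(2*pi).
def cycloid_time(d, g=9.81):
    return math.sqrt(2 * math.pi * d / g)

print(cycloid_time(1.0))  # roughly 0.80 s for points 1 m apart
```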
It is most easily found using the calculus of variations and the problem is often chosen as a first example of the technique. | {
"domain": "physics.stackexchange",
"id": 4719,
"tags": "classical-mechanics, variational-principle, brachistochrone-problem"
} |
Why is there destructive interference in one path in Wheeler's delayed choice experiment? | Question: In the diagram below I've drawn Wheeler's delayed choice experiment, in the case where a 2nd half-silvered mirror has been inserted near the detectors.
---->\-------->\
| |
v v
\-------->\--> D2 (0%)
|
v
D1 (100%)
Every summary of this experiment I've found says that in this case destructive interference will occur at D2 and constructive interference will occur at D1.
I understand why the two paths interfere constructively at D1:
One path has 1 reflection => phase change of 180$^\circ$
The other path has 3 reflections => phase change of 3 x 180$^\circ$ which equals 180$^\circ$ modulo 360$^\circ$
I don't understand why the two paths interfere destructively at D2:
One path has 2 reflections
The other path also has 2 reflections
Can anyone explain why interference is supposed to be destructive at D2?
Answer: For beam-splitters there is a phase difference of $\pm\pi/2$ between the transmitted and reflected beams. If you take that into account, along with the direction of the beam relative to the splitter, everything seems fine. Look here: https://arxiv.org/ftp/arxiv/papers/1509/1509.00393.pdf | {
"domain": "physics.stackexchange",
"id": 76198,
"tags": "quantum-mechanics, photons"
} |
Heating the air in a small box | Question: Background
I am working on a project that involves heating air inside a small box, then measuring the temperature over time. The purpose of the box is to be a basic test chamber for PID experiments.
Initial Test
I made a 10 cm * 10 cm * 10 cm box from 3 mm plywood. I then placed a small ~1 W heater inside with a temperature probe and observed the temperature rise... very slowly.
Proposed New Prototype
I am going to construct a new prototype once I decide how to calculate the approximate time to warm the air inside the box to a set-point. Some specifications and assumptions are laid out below:
The ~1 W heater consists of 4 parallel 100 ohm resistors and a 5 V power supply (there is also a 330 ohm resistor and an LED for visual indication). I don't want to change this heater design, as the intention is that this is easy to replicate and uses readily accessible 5 V power sources and electronic components.
Currently no fan has been used but I have a small 20 mm, 5 V fan on order, so will integrate this into the design at some point.
The box will be used at room temperature between 21–23 °C
Ideally a temperature rise up to 30 °C (or more) would be possible over a 5-10 minute time interval
The new proposed box size is 5 cm * 5 cm * 5 cm, 8 times smaller than the previous prototype. This is open to change depending on the previous 2 requirements of maximum temperature rise and time taken to achieve that change.
I am not after exact timings - I am aware that losses through the box will have an effect. But if this effect is minimal, then a simplified approximate solution is preferable as some design parameters may change slightly. I am looking for any guidance and calculations that will save me time instead of having to make different boxes and learn by trial and error.
Answer: A simple approach would be to just consider the heat added.
The specific heat equation gives the temperature change due to heating as
$$Q=mc\Delta T$$ where $Q$ is the heat added, $m$ is the mass, $c$ is the specific heat of the material in question (~1.005 kJ/(kg·K) for air at room temperature), and $\Delta T$ is the temperature change.
A 5x5x5 cm box will have an air mass of ~0.15 g (air density ~1.2 kg/m$^3$).
For a 1W (1 J/s) heater this gives a temperature change of
$$\Delta T = \frac{Q}{mc}= \frac{1}{0.00015*1005}= 6.6 K/s$$
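The arithmetic above is easy to reproduce (a minimal Python sketch using the same assumed numbers; the function name is mine):

```python
def heating_rate(power_w, side_m, c_air=1005.0, rho_air=1.2):
    """Lossless heating rate (K/s) of the air in a cubic box of given side length."""
    mass = rho_air * side_m**3        # kg of air in the box
    return power_w / (mass * c_air)   # dT/dt = P / (m c)

rate = heating_rate(1.0, 0.05)        # 1 W heater, 5 cm box -> about 6.6 K/s
```

The same call with `side_m=0.10` gives the rate for the original 10 cm prototype, one eighth as fast since the air mass is eight times larger.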
I suspect this is much higher than you observed with the larger box (even accounting for the increased volume), which suggests there is a factor not being considered.
Two factors leap to mind.
1) Heat losses from the box. Unless your box is well insulated it will lose heat via radiation and conduction. However, for temperatures near room temperature I would expect these losses to be low.
2) Heating the box. Inevitably, if you heat the air in the box, heat will transfer to the box itself. The box has a much larger mass, so it will require more energy to heat (probably 100x or more). Intuitively I would think the transfer of heat from the air to the box would be slow compared to the heating of the air. But I am not an expert in this so may be wrong.
More likely, I think the problem may be that most of the heat is going into the box initially if the heater is in contact with it, due to the much better thermal conductivity of wood compared to air. This is what I would look at to improve the rate of heating. | {
"domain": "engineering.stackexchange",
"id": 201,
"tags": "thermodynamics, heat-transfer, heat-exchange"
} |
Finding frequency respone of a differential/integral LTI system | Question: So suppose that we have an LTI system defined by the differential/integral equation below, where $x(t)$ and
$y(t)$ denote the system input and output, respectively. How would I find the frequency response of this systems?
I haven't been taught this yet, and the textbook has no examples of it, so I'm really confused. I'd really appreciate the help. This is the system:
$$
\frac{d}{dt}y(t) + 2y(t) + \int_{-\infty}^t 3y(\tau)d\tau + 5\frac{d}{dt}x(t)-x(t) = 0
$$
Answer: Frequency response deals with the steady-state behavior. Differentiating with respect to $t$ yields:
$$\frac{d^2y}{dt^2}+2\frac{dy}{dt}+3y(t)=\frac{dx}{dt}-5\frac{d^2x}{dt^2}$$
The Laplace transform is $$(s^2+2s+3)Y(s)=(s-5s^2)X(s)$$
and we have
$$H(s)=\frac{Y(s)}{X(s)}=\frac{s-5s^2}{s^2+2s+3}$$
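As a quick numerical check, $H(s)$ can be evaluated along the imaginary axis $s=j\omega$ with plain complex arithmetic (an illustrative sketch, not part of the original answer):

```python
def H(omega):
    """Frequency response H(j*omega) of the transfer function derived above."""
    s = 1j * omega
    return (s - 5 * s**2) / (s**2 + 2 * s + 3)

dc_gain = abs(H(0.0))    # 0: the response vanishes at DC
hf_gain = abs(H(1e6))    # approaches 5 at very high frequency
```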
I think you can easily do the rest. | {
"domain": "dsp.stackexchange",
"id": 4408,
"tags": "frequency-response, homework, fourier"
} |
A basic calculator in Python | Question: This is a simple calculator I created using my Python basic knowledge. I would like to know any coding techniques to simplify and enhance the code to make it efficient. I would also like to know some advanced techniques. Also, please specify where I can reduce duplication to make it DRY.
import math
import cmath
print("""Choose from
1. Perform basic BODMAS
2. Find squareroot
3. Solve quadratic equation
4. Find Trignometric values
5. Find logarithms""")
choose = input("enter required number \n")
if choose == '1':
def bodmas():
num1 = float(input("Enter number 1 \n"))
num2 = float(input("Enter number 2 \n"))
op= input("Enter add,sub,mul,div \n")
if op == "add":
res = num1+num2
elif op == "sub":
res = num1 - num2
elif op == "mul":
res = num1*num2
elif op == "div":
res = num1/ num2
else:
res = print("Error")
return res
print(bodmas())
elif choose == '2':
def sqrt_n():
num1 = float(input("Enter number"))
res = math.sqrt(num1)
return res
print(sqrt_n())
elif choose =='3':
def quad_s():
print("Standard form of Quadratic equation (Ax^2 + Bx + C)")
A = int(input("Enter coefficient A: "))
B = int(input("Enter coefficient B: "))
C = int(input("Enter coefficient C: "))
D = (B**2)-(4*A*C) # Calculate Discriminant
sol1 = (-B-cmath.sqrt(D))/(2*A)
sol2 = (-B+cmath.sqrt(D))/(2*A)
z =print("The solutions are {0} and {1}" .format(sol1 ,sol2))
return z
print(quad_s())
elif choose == '4':
choice = input("Choose sin,cos,tan: ")
if choice == "sin":
ang = float(input("Enter angle: "))
x = math.sin(math.radians(ang))
print (x)
elif choice == "cos":
ang = float(input("Enter angle: "))
x = math.cos(math.radians(ang))
print (x)
elif choice == "tan":
ang = float(input("Enter angle: "))
x = math.tan(math.radians(ang))
print (x)
else:
print("Error")
elif choose == "5":
z = int(input("Enter Value: "))
res = math.log10(z)
print (res)
else:
print("Error")
How can I use the functions in a more preferable and efficient way?
Answer: Welcome to Code Review :)
Here are some nits and tips to get you on your way.
It is much more Pythonic to define your functions outside the user flow and put the user flow inside of a main() function. The flow should look like this:
def bodmas():
...
def sqrt_n():
...
def main():
print("""Choose from
...
""")
choose = input("enter required number \n")
if choose == '1':
...
bodmas()
...
elif choose == '2':
...
Use an if __name__ == '__main__': guard to invoke your main() function.
if __name__ == '__main__':
main()
Do you want to handle and catch errors?
A = int(input("Enter coefficient A: "))
...
sol1 = (-B-cmath.sqrt(D))/(2*A)
What if the user enters a non-numeric coefficient? Or what if the user enters 0? This is also applicable if the user tries to divide by zero in the bodmas() function.
Consider following a style guide like PEP8 to make your code more readable and consistent.
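One more idea, aimed at the DRY concern in the question: the if/elif chains can be collapsed into a dictionary dispatch (an illustrative sketch, not code from the original post — this `bodmas` takes arguments instead of prompting):

```python
import operator

OPERATIONS = {
    'add': operator.add,
    'sub': operator.sub,
    'mul': operator.mul,
    'div': operator.truediv,
}

def bodmas(op, num1, num2):
    """Look the operation up in a table instead of walking an if/elif chain."""
    try:
        return OPERATIONS[op](num1, num2)
    except KeyError:
        raise ValueError(f"unknown operation: {op!r}")
```

The trigonometric branch collapses the same way with a table like `{'sin': math.sin, 'cos': math.cos, 'tan': math.tan}`.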
Happy coding! | {
"domain": "codereview.stackexchange",
"id": 27010,
"tags": "python, python-3.x, calculator"
} |
Calculating the moment of inertia for a circle with a point mass on its perimeter | Question: I want to calculate the tensor of the moment of inertia. Consider this situation:
The dot represents a point mass of size $\frac{5}{4}m$, where $m$ is the mass of the homogeneous circle. I'm trying to calculate the tensor of inertia. Because this is in two dimensions, all components but $I_{xx}$, $I_{yy}$, $I_{zz}$ and $I_{xy}$ are zero. ($\bar{I}$ denotes the inertia around the mass center.)
$I_{xx}=\bar{I}_{xx}+m_{tot}R^2=\frac{mR^2}{4}+\frac{5mR^2}{4}+(m+\frac{5}{4}m)R^2=\frac{15}{4}mR^2$
But this is not the right answer. The right answer is supposed to be $\frac{10}{4}mR^2$, why?
$I_{zz}$ and $I_{yy}$ I can get for some reason, by the above method. For those it works.
$I_{zz}=\frac{mR^2}{2}+\frac{5mR^2}{4}+(\frac{5}{4}+1)mR^2=4mR^2$
$I_{yy}=\frac{mR^2}{4}+\frac{5mR^2}{4}=\frac{3mR^2}{2}$
These are correct. The last one is a bit easier because the axis passes through the mass center.
And then there's $I_{xy}$ which I cannot figure out how to calculate since I don't know the x and y position of the point mass, so I can't use the formula $I_{xy}=md_xd_y$. How would I calculate it? The answer is supposed to be $I_{xy}=-\frac{5mR^2}{4}$.
Answer: You don't need to apply Steiner's theorem to the point mass. The point mass sits at a distance of (apparently) $R$ from the x-axis. Since the moment of inertia is an extensive quantity, you can simply add the individual moments of inertia.
There's the moment of inertia of the solid disk with respect to its diameter, which you have to 'Steiner' over a distance $R$. Then you need to add $\frac{5}{4} mR^2$, which is the moment of inertia of a point mass of mass $\frac{5}{4}m$ a distance $R$ away from the rotation axis.
Therefore,
$I_{xx} = \frac{mR^2}{4} + mR^2 + \frac{5}{4}mR^2 = \frac{10}{4}mR^2$
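The three terms are easy to check numerically (a trivial sketch with arbitrary values for $m$ and $R$):

```python
m, R = 2.0, 3.0                        # arbitrary test values

disk_about_diameter = m * R**2 / 4     # solid disk about a diameter through its center
steiner_shift = m * R**2               # parallel-axis shift of the disk axis by R
point_mass_term = (5 / 4) * m * R**2   # point mass a distance R from the x-axis

I_xx = disk_about_diameter + steiner_shift + point_mass_term   # = (10/4) m R^2
```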
From the drawing, I strongly suspect the point mass sits at $(R,R)$, therefore lying on the diagonal. You should be able to conclude what the moment of inertia is then. The minus sign you give for the supposed answer seems highly suspicious, as the moment of inertia is a sum of positive quantities. | {
"domain": "physics.stackexchange",
"id": 7617,
"tags": "homework-and-exercises, classical-mechanics, angular-momentum, rotational-dynamics"
} |
Speed of evolution | Question: I know that evolution can happen quite rapidly in single celled organisms. How about animals? Has there been any record of new speciations over the past 1000 years? In this video it is claimed that Darwin's finches arrived at Galápagos Islands just a few hundred years ago.
Answer: The video is a simplification of the speciation concept in order to convey principles of evolution using Darwin's finches as an example. Very rapid speciation (obvious speciation in 1000 years or less) for animals with much longer lifespans, is probably not documented. However, much of a potential answer to your question depends on how you define "speciations". Species is a very fluid term. When speciation is defined as major genetic changes resulting in obvious differences that distinguish two different populations, it is easier to say "yes" it can occur in shorter periods of time. For those differences to accumulate and increase to the point where animal phenotypes and genetic differences make reproduction (genetic exchange) completely impossible probably takes much longer. Here are two links that I found helpful.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC21824/
https://en.wikipedia.org/wiki/Species | {
"domain": "biology.stackexchange",
"id": 8568,
"tags": "evolution, speciation"
} |
Where do electrons come from in thermionic emission? | Question: Suppose we have an HV tube or CRT. The filament is connected to the secondary of the transformer used to supply the filament.
At time $t=0$ the wires and cathode are neutrally charged. Then, when we switch the power on, the filament begins to heat and free electrons are evaporated from the cathode. Now I would come to the conclusion that if the electron has left the metal there should remain a hole, and the cathode would become more and more positively charged, but this is not happening.
Are those electrons replaced? If yes, then from where?
Or if they aren't emitted at all, are they generated?
It might be clear from the picture that electrons emitted by the cathode never strike the anode. The anode is used to accelerate the electrons, not to recombine them.
A further question: where do the electrons in the anode come from? Accelerating the electron beam uses energy proportional to the kinetic energy of the electrons, but how does the current flow if the electrons never strike the anode?
Answer: Here is an image with the conducting layer all around the glass from the inside:
and here is the circuitry (from a different link):
As you see, the circuit closes with the conducting layer. The power supply provides the energy to keep the cathode negative and the anodes positive. In this diagram the heating of the cathode comes from a different power supply. The electron beam is part of the circuit current.
Here is another paper
It should be mentioned that the screen has a plate of some metal (often
aluminum) underneath the coating of phosphors; This plate is given a very
strong positive charge, on the magnitude of several thousand volts. This
positive charge pulls the electrons strongly toward the screen.
and of course, close the circuit with the power supply.
With the above in mind,
Now I would come to the conclusion that if the electron has left the metal there should remain a hole, and the cathode would become more and more positively charged, but this is not happening.
Are those electrons replaced? If yes, then from where?
because the power supply through the circuit replaces the charge
Or if they aren't emitted at all, are they generated?
Of course they are thermally emitted. If there were no thermally emitted beam of electrons, the circuit would represent an elaborate capacitor circuit, and no current would flow. | {
"domain": "physics.stackexchange",
"id": 25085,
"tags": "quantum-mechanics, electrons, elementary-particles"
} |
Justification for acoustic wave equation boundary conditions | Question: In analyzing the standing acoustic waves produced by a wind instrument, one usually assumes that the openings of the instrument are antinodes of the acoustic wave (as depicted below). What is the justification for this boundary condition?
Answer: Before I answer the question, first some explanation about what the graph shows. There are generally two ways to graphically depict a one dimensional pressure wave.
One is by showing how the pressure is distributed over space. When a standing wave is depicted in this way, the antinodes indicate large pressure variations while the nodes indicate a constant pressure.
The second method is by showing the local displacement of the air from its rest position. When a standing wave is depicted in this way, the antinodes indicate large displacement variations while the nodes indicate a spot where the air almost stands still.
At positions where the pressure has an antinode, the displacement has a node, and the other way around.
The graph you give uses the second method and shows the displacement of the air. You can see that there is a node at the end of the tube, because the air there does not move.
Now about the hole. The (very rough) assumption that is made in this model is that the hole creates a connection to the environment, thereby reducing the pressure to zero at that point. This will thus lead to a node in a pressure graph and an antinode in a displacement graph.
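Taken at face value, the zero-pressure (pressure-node) condition at an open end gives the familiar resonance formulas for a tube; a sketch with assumed numbers (not from the original answer):

```python
v = 343.0   # speed of sound in air, m/s (assumed)
L = 0.5     # tube length, m (assumed)

# Pressure node at both ends (open-open tube): f_n = n v / (2L)
open_open = [n * v / (2 * L) for n in (1, 2, 3)]

# Pressure node at the open end only (open-closed tube): f_n = (2n-1) v / (4L)
open_closed = [(2 * n - 1) * v / (4 * L) for n in (1, 2, 3)]
```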
This assumption is perfectly fine to get a feeling of how a flute works but, as Alephzero explains, is far from the complete story. | {
"domain": "physics.stackexchange",
"id": 39865,
"tags": "waves, acoustics, boundary-conditions"
} |
Reference for Levin's optimal factoring algorithm ? | Question: In Manuel Blum's "Advice to a Beginning Graduate Student":
LEONID LEVIN believes as I do that whatever the answer to the P=NP? problem, it won't be like anything you think it should be. And he has given some wonderful examples.
For one, he has given a FACTORING ALGORITHM that is proVably optimal, up to a multiplicative constant.
He proves that if his algorithm is exponential,
then every algorithm for FACTORING is exponential.
Equivalently, if any algorithm for factoring is poly-time,
then his algorithm is poly-time.
But we haven't been able to tell the running time of his algorithm because, in a strong sense, its running time is unanalyzable.
Levin's publications page returns a 404, DBLP shows nothing related to factoring, and a search for "leonid levin factoring" on Google Scholar returns nothing of interest that I could find. AFAIK the generalized sieve is the fastest algorithm known for factoring. What is Manuel Blum talking about? Can anyone link me to a paper?
Answer: Manuel Blum is talking about applying Levin's universal search algorithm to the Integer Factorization problem. The idea of Levin's Universal search algorithm is equally applicable to any problem in $NP$.
Here is a quote from lectures notes given by Blum on SECURITY and CRYPTOGRAPHY:
Leonid LEVIN's OPTIMAL NUMBER-SPLITTING (FACTORING) ALGORITHM.
Let SPLIT denote any algorithm that computes
INPUT: a positive composite (i.e. not prime) integer n.
OUTPUT: a nontrivial factor of n.
THEOREM: There exists an "optimal" number-splitting algorithm, which we
call OPTIMAL-SPLIT. This algorithm is OPTIMAL in the sense that:
for every number-splitting Algorithm SPLIT
there is a (quite large but fixed) constant C such that
for every positive composite integer input n,
the "running time" of OPTIMAL-SPLIT on input n is at most C times the
running time of SPLIT on input n.
Here is Levin's optimal factoring algorithm:
The OPTIMAL-SPLIT ALGORITHM:
BEGIN
Enumerate all algorithms in order of size, lexicographically within each size.
Run all algorithms so that at any moment in time, t, the ith algorithm
gets [1/(2^i)] fraction of the time to execute.
Whenever an algorithm halts with some output integer m in the range 1 < m
< n, check if m divides n (i.e. if n mod m = 0).
If so, return m.
END | {
"domain": "cstheory.stackexchange",
"id": 834,
"tags": "ds.algorithms, reference-request, factoring"
} |
Why is $C_v$ used in the adiabatic expansion of Carnot Cycle to calculate internal energy? | Question: When I took this class years ago, I simply accepted it as fact. However, now that I'm teaching it, the following bugs me....a lot.
Why do they use Cv to describe the change of internal energy during the carnot cycle?
http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/Thermodynamic_Cycles/Carnot_Cycle
There is obviously a volume change. But it is defined that $\Delta U = n*Cv*\Delta T$. How do you justify the use of Cv when there's a volume change?
Answer: It is always true that, for an ideal gas, $\Delta U = C_V \Delta T$, regardless of the process. Remember, we define $C_V=(\delta Q/dT)_V$. Since this is happening at constant volume (aka $\delta W=0$), we have $C_V=(\delta Q/dT)_V=(dU/dT)_V$. Then, since $U$ doesn't depend on volume for an ideal gas, we have that $C_V=dU/dT$ even if volume is changing. So $dU=C_V dT$.
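The process-independence is easy to verify numerically for, say, an isobaric expansion, where the volume certainly changes (a sketch assuming one mole of a monatomic ideal gas):

```python
R = 8.314              # gas constant, J/(mol K)
n = 1.0                # moles
Cv = 1.5 * R           # monatomic ideal gas
Cp = Cv + R
T1, T2 = 300.0, 400.0

# First law for an isobaric process: dU = Q - W
Q = n * Cp * (T2 - T1)
W = n * R * (T2 - T1)          # W = p dV = n R dT at constant pressure
dU_first_law = Q - W

# Same change straight from dU = n Cv dT, despite the volume change
dU_formula = n * Cv * (T2 - T1)
```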
Another way to think about it, just using algebra: for an ideal gas, $U=\alpha NkT$. Thus $\Delta U=\alpha Nk\Delta T$. But of course $\alpha Nk$ is just $C_V$ for an ideal gas, proving the statement. | {
"domain": "physics.stackexchange",
"id": 69676,
"tags": "thermodynamics, temperature"
} |
How do non-mechanical solid-state optical switches work? | Question: I am currently looking for a fiber-optical switch (FOS) in order to be able to change the light source of a spectrometer. As this will be used in harsh conditions, I was hoping to find a FOS with no moving parts.
Today I came across this, which appears to be an optical fiber switch with no moving parts. Although I will not be able to use this switch due to the wavelength range not being a good fit, I was wondering how these kinds of switches work. So far, I have only seen optomechanical ones which I assume to use some sort of actuated mirror assembly to redirect the light from the correct input fibers to the output fiber. On a smaller scale, I have seen piezoelectric devices which just move the physical output fiber by a tiny amount left and right.
I have read somewhere about magneto-optic crystals, which appear to be crystals whose transmission depends on the voltage applied to them. Is this how such a switch could be implemented? Are there other ways?
Answer: It looks like there are at least two ways to go about this:
Bistable MEMS
MEMS (Microelectromechanical systems) are very small structures, with features from $1-100\mu m$ in size, generally made with semiconductor-like processes. They can be made so they are bistable, if you apply a pulse of power, their state will change and remain stable until another pulse of power is applied to change it back. Patent US8203775B1 describes one of these structures used by Agiltron for at least some of their 1xN and NxN optical switches.
Here is the patent drawing of the two states the MEMS mirror can be in:
Here is a diagram (from Agiltron) showing how they are used to form a 1xN switch:
Magneto-optic effect
This can be achieved with materials that change the polarization of light, such as bismuth-substituted rare-earth iron garnet single crystals, which, depending on the polarity of the applied magnetic field, rotate the polarization by plus or minus 45 degrees. The light will then either be blocked or transmitted by a static polarizer. A magnetic field applied to the rare-earth iron garnet will magnetize it, so a continuous field is not required; you only need a pulse of power to change the polarity of the magnetization. Below is a diagram from patent US6577430B1 that shows the stack-up inside a bidirectional 1x2 switch. I believe this technique is used in Agiltron's CrystaLatch 1xN and NxN switches. | {
"domain": "physics.stackexchange",
"id": 76358,
"tags": "optics, experimental-technique, fiber-optics, optical-materials, experimental-technology"
} |
Fibonacci sequence implementation | Question: This was too slow for my computer:
int f(unsigned int x)
{
if(x <= 1) return 1;
else return f(x-1)+f(x-2);
}
/* main */
int main()
{
for(int i = 0; i < 50; i++) cout << f(i) << endl;
return 0;
}
So I've made a faster implementation. It works, but is it well-written? Is there some way to improve it?
void f(unsigned long now)
{
static int counter = 0;
static unsigned long last = 0, tmp = last;
if(counter++ == 50)
{
last = tmp = counter = 0;
return;
}
std::cout << last + now << std::endl;
tmp = last;
last = now;
f(now+tmp);
}
// calling f(1); in main
Answer: There's a few things that could be improved about the new function:
Most obviously, its interface is awkward: when you call f, you have to pass 1 and it prints the first 50 Fibonacci numbers. The old function was better in that respect: you call it with an argument n and get back the n'th Fibonacci number.
It uses an iterative algorithm, implemented recursively using a tail call. That's common in functional languages, but I think in C++ a loop is simpler.
It passes state from one invocation of the function to the next through static variables. That seems inelegant.
Here's a straightforward iterative implementation of the algorithm:
// return the n'th Fibonacci number
unsigned long fib(unsigned int n) {
if (n == 0) return 0;
unsigned long previous = 0;
unsigned long current = 1;
for (unsigned int i = 1; i < n; ++i) {
unsigned long next = previous + current;
previous = current;
current = next;
}
return current;
} | {
"domain": "codereview.stackexchange",
"id": 4126,
"tags": "c++, recursion, complexity, fibonacci-sequence"
} |
What is the actual physical actuated quantity when controlling the position of a servo? | Question: I'm trying to learn about servo control. I have seen that the most generic position control method for servos is PID, where the control input is position error. However, I am not sure about what is the actuated quantity. I am guessing that it is one of:
Voltage applied to the motor
Current applied to the motor
I am then guessing that the actuated quantity gets turned into one of:
Torque that the motor exerts
Angular velocity that the motor runs at
I haven't been able to get my hands on and explicitly control a physical servo so I cannot confirm that the actuated quantity is any of these. I know very little of the electronics that controls the motor. It might well be that the controlled quantities are different for different series servos.
My bet is on torque control. However, assume that the servo is holding a weight at a distance (so it is acting against gravity), which means an approximately constant torque load. In this case, if the position error is zero and the servo is at rest, then each of P, I and D components are zero, which means the exerted torque is zero. This would cause the weight to sink, which is countered by the error in its position causing P,I components to increase. Wouldn't this situation cause the lifted weight to oscillate and balance at a constant position which is significantly different from the goal position? This isn't the case with the videos of servos I have seen lifting weights. Or is this the case and friction is smoothing everything out? Please help me understand.
Answer: motor controllers
what is the actuated quantity
Yes, the output of the control electronics and the input of the motor, in the simplest case -- DC permanent-magnet motors -- is the voltage applied to the motor.
In other cases, the output of the control electronics is the duty cycle of a PWM voltage applied directly to the motor or indirectly to the "signal" wire of a radio control servo.
They work a little differently.
In yet other cases, many people control position using stepper motors.
They work very differently than DC permanent-magnet motors.
motors
the actuated quantity gets turned into
Your suggestion of "torque control" is approximately true when the motor is very slow or stopped.
The so-called "back-EMF" generated by the motor by "generator action" is proportional to its angular velocity.
This back-EMF allows motors to be used as generators, such as the motor/generators used in a few cars and the regenerative braking used in a few vehicles.
(Part of the back-EMF is caused by the "autoinductance" of the windings, but that part is usually negligible, so I will not mention it further -- the article you mentioned has a good explanation).
At any instant, the electrical current in the motor is proportional to the applied voltage minus the back-EMF.
Meanwhile, the mechanical torque generated by the motor is approximately proportional to that electric current.
Therefore at low speeds the mechanical torque generated by the motor is proportional to the applied voltage.
But at high positive speeds, the torque generated by the max positive voltage is less; the "max speed" is often defined as the speed where the max positive voltage gives zero torque.
PID
assume that the servo is holding a weight at a distance (so it is acting against gravity), which means an approximately constant torque load. In this case, if the position error is zero and the servo is at rest, then each of P, I and D components are zero, which means the exerted torque is zero. This would cause the weight to sink, which is countered by the error in its position causing P,I components to increase. Wouldn't this situation cause the lifted weight to oscillate and balance at a constant position which is significantly different from the goal position?
There are 2 different cases: the short-term state immediately after some heavy load is applied, and the long-term state after the arm is allowed to settle.
Please tell the person who told you that "if the position error is zero and the servo is at rest, then the I component is zero" to experiment with a PID controller, or read a little more about control systems (a, b, c, d, e), or both, to fill in the gaping hole in his knowledge of what the I component does in a PID controller.
PID with near-zero I component
In the short term, the P, I, and D components start out at approximately zero,
and so the exerted torque is approximately zero.
When Fred suddenly applies a heavy load, there is not enough torque to hold it in position, so it sinks.
The error in its position causes the P,I components to increase.
If, hypothetically, one had a controller where the I component was completely ignored,
then the arm would settle at some constant position, as you mentioned.
The arm would stabilize at the position where the voltage supplied by the controller (proportional to P, the error in position) was exactly enough to hold up the weight.
PID with significant I component
However, with the PID controller you mentioned, the I component increases as long as there is any error.
Eventually there would be enough I component accumulated that the controller would increase the voltage more than enough to hold up the weight, pushing the weight back up towards the zero-error point.
Whether the weight overshoots or not depends on how the PID controller is tuned, but as long as the P,I,D components are anywhere close to a reasonable value, the PID controller will eventually settle down to the state where:
the arm is stable at almost exactly the goal position (with practically zero error)
therefore the P and D components are practically zero
The I component is not zero -- it still has some large value that accumulated previously when the arm was below the desired position.
the control electronics (because the I component is not zero) drive the motor with some voltage
the motor converts that voltage into some torque that holds the weight up at the goal position.
Many robotic control systems are quick enough that they converge on this final state within a tenth of a second.
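The settling behavior described above — P control alone leaving a constant offset against gravity, the I term removing it — can be reproduced in a few lines (a toy simulation with made-up arm parameters, not from the original answer):

```python
def simulate_arm(kp, ki, kd, t_end=20.0, dt=0.001):
    """1-DOF arm holding a constant gravity torque; returns the final angle (rad)."""
    inertia, load, damping = 0.05, 0.5, 0.2   # assumed plant parameters
    target = 1.0                              # goal angle
    theta = omega = integral = 0.0
    prev_err = target
    for _ in range(int(t_end / dt)):
        err = target - theta
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        torque = kp * err + ki * integral + kd * deriv
        omega += (torque - load - damping * omega) / inertia * dt
        theta += omega * dt
    return theta
```

With `kp=5, ki=0` the arm settles near 0.9 rad (an offset of `load/kp = 0.1` below the goal); with `ki=2` it settles at the 1.0 rad goal, as the accumulated integral supplies the holding torque.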
When Fred (that prankster!) yanks the weight off the arm,
even though the arm is already at the goal position,
the high accumulated I component causes the arm to pop up.
That small error causes the accumulated I component to bleed off,
and (hopefully soon) the arm returns to almost exactly the goal position (with practically zero error). | {
"domain": "robotics.stackexchange",
"id": 173,
"tags": "otherservos, pid"
} |
Array length handling when serializing messages | Question:
I am trying to code a rosbag serializer, and I am trying to generate code from the messages, like https://github.com/ros/genmsg
I noticed there's an inconsistency in how to handle unbounded array lengths.
In some cases, it is assumed the array elements are preceded with the length of the array: CameraInfo
In some others, the length is calculated from other properties of the message: Image, PointCloud2
Then there are cases in which it's not clear whether the length needs to be handled based on the length of the whole message itself: CompressedImage.
So, are these cases handled manually? Or is there some sort of non-obvious rule?
I have noticed that in the definitions of some unbounded arrays, it is stated that the length needs to be computed from other parameters.
Originally posted by Vicente Penades on ROS Answers with karma: 130 on 2018-10-17
Post score: 0
Answer:
The serialization logic only considers the length of the actual array.
Any other fields in a message containing such length information are additional information and might (or might not) match the actual data. They are serialized independently.
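Concretely, ROS 1 serializes an unbounded array as a little-endian uint32 element count followed immediately by the packed elements; fields such as `width` or `height` are just ordinary fields serialized on their own. A minimal sketch of that wire layout (illustrative, not the generated code):

```python
import struct

def serialize_float32_array(values):
    """Unbounded float32[] field: uint32 count, then the elements, little-endian."""
    return struct.pack('<I', len(values)) + struct.pack(f'<{len(values)}f', *values)

def deserialize_float32_array(buf, offset=0):
    (count,) = struct.unpack_from('<I', buf, offset)
    return list(struct.unpack_from(f'<{count}f', buf, offset + 4))
```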
Originally posted by Dirk Thomas with karma: 16276 on 2018-10-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31919,
"tags": "rosbag, ros-jade"
} |
Is the non-perturbative approach to QFT a path integral approach? If so then how, given we don't have simple path integral formula for Dirac equation? | Question: Here is my understanding of the scenario. Please correct me if i go wrong somewhere.
Initially, the perturbative approach to QED (Feynman diagrams) was very successful. But the same approach to QCD proved cumbersome, and hence some sort of 'path integral' approach on vanishing lattice spacetime was developed. This approach was actually non-perturbative.
So, my understanding is that there are two approaches to QFT that everyone learns: the canonical (or perturbative, Feynman-diagram) approach and the path integral (or lattice spacetime) approach.
Now, having read Feynman and Hibbs's path integral book a little, I am under the impression that the path integral approach works only for the Schrodinger equation and becomes really difficult when we try to incorporate the relativistic Dirac equation. For the same reason, that book treats QED as an approximation, using the Schrodinger equation and avoiding Dirac's.
So I am surprised that in QCD the path integral approach returns. How did they manage this? How did they overcome the problem of providing the path integral equations for the Dirac equation? Did they?
Please clarify this.
Answer: All types of QFTs can be formulated using path integrals - relativistic, nonrelativistic, Schrodinger equation, QED and QCD.
Your confusion may lie in the order of historical development, which went (1) QED, (2) Feynman diagrams, (3) path integrals, then (4) QCD. But path integrals can be (and today often are) taught before QED, and QED can definitely be formulated in terms of a path integral, albeit with the subtlety that the fermion fields are Grassmann-number-valued. The path integral wasn't invented specifically for QCD, but to improve our general understanding of quantum mechanics (although in practice its main use lies in quantum field theory, not few-body nonrelativistic QM).
The Dirac equation is the classical equation of motion for the free Dirac Lagrangian $i \bar{\psi} \gamma^\mu \partial_\mu \psi - m \bar{\psi} \psi$, so in a sense it's not quantum-mechanical at all (although of course the complex numbers and the spin degrees of freedom carried by the spinors that it describes are most naturally interpreted in a quantum context). The QED Feynman diagrams come in when you consider the spinor's coupling to an EM gauge field. | {
"domain": "physics.stackexchange",
"id": 40749,
"tags": "quantum-electrodynamics, path-integral, feynman-diagrams, quantum-chromodynamics, non-perturbative"
} |
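The answer above notes that QED can be formulated as a path integral, with the subtlety that the fermion fields are Grassmann-valued. As a standard-textbook sketch (not part of the original answer), the QED generating functional reads:

```latex
Z \;=\; \int \mathcal{D}A_\mu \,\mathcal{D}\bar{\psi}\,\mathcal{D}\psi\;
\exp\!\left( i \int d^4x \left[\, \bar{\psi}\left( i\gamma^\mu D_\mu - m \right)\psi
\;-\; \tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} \,\right] \right),
\qquad D_\mu \;=\; \partial_\mu + i e A_\mu .
```

The free-fermion part of the action reproduces the Dirac Lagrangian $i \bar{\psi} \gamma^\mu \partial_\mu \psi - m \bar{\psi} \psi$ quoted in the answer; the only new ingredient relative to a bosonic path integral is that $\psi$ and $\bar{\psi}$ are integrated over as anticommuting Grassmann numbers.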
Should planar Euclidean graphs be planar straight-line graphs? | Question: A Euclidean graph, by definition, is
A weighted graph in which the weights are equal to the Euclidean
lengths of the edges in a specified embedding
and a graph is called planar if
it can be drawn in a plane without graph edges crossing
lastly, a planar straight-line graph (PSLG) is
embedding of a planar graph in the plane such that its edges are mapped into straight line segments.
From these three definitions, I cannot conclude whether a planar Euclidean graph must be a PSLG or not.
For instance, given an Euclidean non-planar graph:
I can convert this graph into a planar Euclidean graph
while sticking to the definitions. Given the definitions above, the former is a non-planar straight-line graph, whereas the latter is a planar graph. Assume that the length of each edge is preserved. Then is the latter still a Euclidean graph?
Answer: Fáry's theorem states that every planar graph can be drawn in such a way that its edges are (non-crossing) straight lines. Hence every planar graph is a planar straight-line graph.
However, this doesn't really have any bearing on the definition of a Euclidean graph. According to the definition you link to, the edges need not be straight. A Euclidean graph is a planar graph in which the edge weights are their lengths in some specific (planar) embedding. In particular, a non-weighted graph isn't Euclidean any more than a cucumber.
Finally, the first graph in your question is planar – it's just that your specific embedding isn't a planar embedding. | {
"domain": "cs.stackexchange",
"id": 6327,
"tags": "graphs, euclidean-distance"
} |
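Tangentially to the answer above, a quick sanity check one can run on a concrete graph is the edge-count bound that follows from Euler's formula: a simple connected planar graph with $V \ge 3$ vertices has at most $3V - 6$ edges. The helper below is a sketch of my own (not from the original post), and the bound is only a necessary condition, but it instantly rules out graphs such as $K_5$:

```python
def may_be_planar(num_vertices: int, num_edges: int) -> bool:
    """Necessary (not sufficient) planarity condition from Euler's formula:
    a simple planar graph with V >= 3 vertices has E <= 3V - 6 edges."""
    if num_vertices < 3:
        return True
    return num_edges <= 3 * num_vertices - 6

# K4 (4 vertices, 6 edges) passes the bound, and it is in fact planar:
print(may_be_planar(4, 6))    # True
# K5 (5 vertices, 10 edges) violates the bound, so it cannot be planar:
print(may_be_planar(5, 10))   # False
```

A graph passing this test still has to be checked with a real planarity algorithm (e.g. Hopcroft–Tarjan); a graph failing it is certainly non-planar.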
Exception AttributeError from '/usr/lib/python2.7/threading.pyc' when RUSCORE start! | Question:
Hi friends!
I don't know anything about Python. What should I do about these exceptions?
(I am on ROS Fuerte.)
Thanks!
mohsen@mohsen-ThinkPad-R500:~$ roscore
... logging to /home/mohsen/.ros/log/6d9214a4-a0f9-11e2-bd04-00216b5b4a7c/roslaunch-mohsen-ThinkPad-R500-4942.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://mohsen-ThinkPad-R500:57850/
ros_comm version 1.8.10
SUMMARY
========
PARAMETERS
* /rosdistro
* /rosversion
NODES
auto-starting new master
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[master]: started with pid [4958]
ROS_MASTER_URI=http://mohsen-ThinkPad-R500:11311/
setting /run_id to 6d9214a4-a0f9-11e2-bd04-00216b5b4a7c
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[rosout-1]: started with pid [4971]
started core service [/rosout]
Originally posted by Mohsen Hk on ROS Answers with karma: 139 on 2013-04-08
Post score: 0
Answer:
As far as I know, you can safely ignore this error. There are some patches I know of, but they only suppress the message.
Originally posted by yigit with karma: 796 on 2013-04-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13736,
"tags": "ros, ros-fuerte, python2.7, roscore"
} |
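The patches the answer alludes to are typically just a monkey-patch that silences a known CPython 2.7 shutdown bug in the `threading` module. A commonly circulated sketch looks like the following (an assumption on my part — the answer does not show a specific patch, and the exception is harmless either way):

```python
import threading

# Work around the CPython 2.7 shutdown bug where a _DummyThread lacks
# the private _Thread__block attribute (see CPython issue 14308).
# Replacing the private __stop method with a no-op suppresses the
# "ignored" AttributeError traceback; it does not change behaviour.
try:
    threading._DummyThread._Thread__stop = lambda self: None
except AttributeError:
    # Guard in case the private class is absent in other Python versions.
    pass
```

Since the error is printed at interpreter shutdown and explicitly marked "ignored", doing nothing at all is equally valid.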
Null conserved angular momentum | Question: If the angular momentum of a particle is conserved and it is also 0, then is it true that the particle moves along a line? If so, how can we derive the equation for the trajectory from both the above fact and knowledge of initial position?
(This question was inspired by the fact that if the angular momentum is conserved and nonzero, then the particle always moves in the same plane; nothing is said about the case where it is 0.)
As always, any hint or comment is highly appreciated!
Answer: Yes, if the angular momentum of a particle is conserved and it is zero, the particle must move along a straight line.
Indeed, by using the triple product identity, we have
$$
{\bf r} \times ( {\bf r} \times {\bf v} ) = ( {\bf r} \cdot {\bf v} ) {\bf r} - r^2 {\bf v}.
$$
Therefore, if the angular momentum is zero, we must have at every time $t$
$$
( {\bf r} \cdot {\bf v} ) {\bf r} = r^2 {\bf v}
$$
which means that (wherever ${\bf r} \neq 0$) the velocity is always parallel to the position vector. The direction of ${\bf r}$ therefore never changes, i.e., the motion stays on the line through the origin and the initial position. | {
"domain": "physics.stackexchange",
"id": 88274,
"tags": "newtonian-mechanics, rotational-dynamics, angular-momentum, reference-frames, conservation-laws"
} |
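The triple-product identity used in the answer is easy to verify numerically. The following sketch (plain Python, vectors chosen for illustration) checks $\mathbf{r} \times (\mathbf{r} \times \mathbf{v}) = (\mathbf{r} \cdot \mathbf{v})\,\mathbf{r} - r^2\,\mathbf{v}$ for a sample pair of vectors:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

r = [1.0, 2.0, 3.0]
v = [4.0, -5.0, 6.0]

# Left-hand side: r x (r x v)
lhs = cross(r, cross(r, v))
# Right-hand side: (r . v) r - |r|^2 v
rhs = [dot(r, v) * ri - dot(r, r) * vi for ri, vi in zip(r, v)]

print(lhs)  # [-44.0, 94.0, -48.0]
print(rhs)  # [-44.0, 94.0, -48.0]
```

Repeating this with random vectors confirms the identity holds term by term, which is all the answer's argument needs.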
Why add acid to get rid of soluble carbonate or hydroxide impurities? Will it not also react with the halide I'm testing for? | Question: When identifying metal halides with silver ions, the positive ion $\ce{Ag+}$ will form $\ce{AgX}$ with the halide in solution. However, $\ce{H+}$ is added (typically in the form of an acid like $\ce{HNO3}$) to the solution to get rid of $\ce{OH-}$ and $\ce{CO3^{2-}}$ ions first. Will the $\ce{H+}$ not also react with the halide $\ce{X-}$?
Answer: Hydrogen halides are strong acids, meaning that their conjugate bases $\ce{X-}$ are extremely weak. Thus, adding $\ce{H+}$ will neutralize stronger bases such as $\ce{OH-}$ first before $\ce{X-}$. | {
"domain": "chemistry.stackexchange",
"id": 8288,
"tags": "halides, hydrogen, silver"
} |
Finding the $k^{th}$ smallest in a row- and column-wise sorted $m\times n$ matrix | Question: Let $M$ be an $m\times n$ matrix in which the rows and columns of $M$ are sorted in increasing order.
Problem.
Find the $k^{th}$ smallest number in $O(m\log(mn))$ time.
My attempts.
I merged all rows into a single sorted array in time $O(mn\log(mn))$, but this idea isn't efficient, and I think there is a solution with running time $O(m\log(mn))$. After many hours of thinking about this problem, I found a solution with running time $O(k\log(mn))$; but $k$ can be $O(mn)$, which eventually gives the bound $O(mn\log(mn))$. Can anyone give me some hints about how I can achieve the desired time complexity?
Answer: There is a $O(n)$ time solution. See discussion here.
[1]: Selection in X+Y and Matrices with Sorted Rows and Columns. A. Mirzaian, E. Arjomandi, Information Processing Letters 20:13-17, 2 January 1985.
[2]: https://softwareengineering.stackexchange.com/questions/233031/given-two-sorted-array-in-ascending-order-with-same-length-n-calculate-the-kth | {
"domain": "cs.stackexchange",
"id": 20742,
"tags": "algorithms"
} |
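The question's own $O(k\log(mn))$ idea can be tightened slightly with a heap over rows: keep one frontier element per row and pop $k$ times, for $O(m + k\log m)$ time. This is still not the $O(n)$ selection algorithm of Mirzaian and Arjomandi cited in the answer, but it is a short, concrete baseline (a sketch of mine, not from the original post):

```python
import heapq

def kth_smallest(matrix, k):
    """k-th smallest (1-indexed) element of a matrix whose rows and
    columns are sorted in increasing order.  O(m + k log m) time
    using a min-heap with one frontier entry per row."""
    # Heap entries: (value, row index, column index), one per row.
    heap = [(row[0], i, 0) for i, row in enumerate(matrix)]
    heapq.heapify(heap)
    for _ in range(k - 1):
        value, i, j = heapq.heappop(heap)
        if j + 1 < len(matrix[i]):
            heapq.heappush(heap, (matrix[i][j + 1], i, j + 1))
    return heap[0][0]

m = [[1, 5, 9],
     [10, 11, 13],
     [12, 13, 15]]
print(kth_smallest(m, 8))  # 13
```

Each pop removes the current global minimum among the unseen elements, so after $k-1$ pops the heap top is the $k^{th}$ smallest; only column sortedness is used, so row sortedness is spare structure that the cited $O(n)$ algorithm exploits further.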