anchor | positive | source |
|---|---|---|
Can this graph search be optimized? | Question: I have the following which searches my graph to see if a vertex is reachable from the first vertex, which everything should be connected to. I do this to ensure there are no disconnected parts.
Unfortunately it is very slow.
Is there something I could do or store to optimize this?
I want to learn about graphs and generated cities, so I'm not using a real graph library.
private void removeDisconnectedSquares()
{
for(int i = 0; i < getNumXNodes(); ++i)
{
for(int j = 0; j < getNumYNodes(); ++j)
{
//removeDisconnectedSquare(i, j);
visitedNodes.clear();
if(!isNodeReachableFrom(getNodeAt(i, j), getNodeAt(0, 0)))
{
removeVertex(i, j);
}
}
}
}
private boolean isNodeReachableFrom(GraphNode node, GraphNode target)
{
if(node == null)
{
return false;
}
if(visitedNodes.contains(node))
{
return false;
}
else
{
visitedNodes.add(node);
}
if(node == target)
{
return true;
}
if(node.contains(target))
{
return true;
}
for(int i = 0; i < node.getSize(); ++i)
{
if(isNodeReachableFrom(node.at(i), target))
{
return true;
}
}
return false;
}
Answer: Rather than using recursion (which is pretty slow), you can do this using a Set of all reachable nodes and a Queue of nodes you still haven't searched. This is a basic implementation of Breadth-First Search.
private boolean isNodeReachableFrom(GraphNode start, GraphNode target) {
// These are GraphNodes whose connected GraphNodes we haven't looked at yet.
// Maybe the target is connected to one of these!
Queue<GraphNode> nodesToSearch = new LinkedList<GraphNode>();
nodesToSearch.add(start);
// These are GraphNodes which we have proved are reachable from the given node.
// If we are ever about to add the given target to this list, we return true.
Set<GraphNode> reachableNodes = new HashSet<GraphNode>();
// As long as there are still nodesToSearch, we could still find our target.
while (nodesToSearch.peek() != null) {
GraphNode node = nodesToSearch.remove();
// If we have already seen this node, we don't want to look at its connected
// nodes again, because we could get stuck in a cycle.
boolean isNewNode = reachableNodes.add(node);
// If this is a new node, see if the target is connected. If it is, we are
// done successfully. Otherwise, add all of the connected nodes to our
// list of nodesToSearch.
if (isNewNode) {
for (GraphNode connectedNode : getConnectedNodes(node)) {
if (connectedNode.equals(target)) {
return true;
}
nodesToSearch.add(connectedNode);
}
}
}
// There are no nodes we haven't searched, so the target is not reachable.
return false;
}
Note: I made up getConnectedNodes(node) because your code didn't show me how to do this with the GraphNode object.
Note that this is an implementation of isNodeReachableFrom. However, you seem to want to figure out the list of nodes which aren't reachable from a starting node. Notice that we build up the list of all reachableNodes in the call above, which is probably what you really want. You could write a function to return that reachableNodes structure, which would reuse the above logic without the target sections. Something like this:
public Set<GraphNode> getReachableNodes(GraphNode start) { ... } | {
"domain": "codereview.stackexchange",
"id": 2494,
"tags": "java"
} |
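The answer's closing suggestion, a getReachableNodes(start) helper, can be sketched self-contained. The GraphNode API isn't shown in the post, so this version assumes the graph is given as a plain adjacency map; with the real objects you would call getConnectedNodes(node) instead:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class Reachability {
    // BFS from start; returns every node reachable from it.
    // The adjacency map stands in for the GraphNode/getConnectedNodes API,
    // which the original post does not show.
    public static <T> Set<T> getReachableNodes(Map<T, List<T>> adjacency, T start) {
        Set<T> reachable = new HashSet<>();
        Queue<T> toSearch = new ArrayDeque<>();
        toSearch.add(start);
        while (!toSearch.isEmpty()) {
            T node = toSearch.remove();
            // add() returns false if we already visited this node
            if (reachable.add(node)) {
                toSearch.addAll(adjacency.getOrDefault(node, List.of()));
            }
        }
        return reachable;
    }
}
```

With this, removeDisconnectedSquares needs only a single traversal from getNodeAt(0, 0): remove every node that is not in the returned set. That is O(V+E) overall instead of running a fresh search for every square.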
How to integrate a Gaussian path integral of free particle using zeta function regularization? | Question: I am attempting to integrate this path integral in Euclidean variable $\tau $ (but this need not be the same as the $X^0$ field):
$$Z=\int _{X(0)=x}^{X(i)=x'}DX\exp \left(-\int _0^i d\tau \left[\frac{1}{2L}\partial _\tau X^\mu \partial _{\tau} X_{\mu} +\frac{L}{2}m^2\right ]\right).$$
The indices are being contracted in the Minkowski metric of dimension $D$. What I did so far was I moved the $X^\mu $ field to the left using integration by parts (dropping boundary terms) and pulled out the constant mass term:
$$Z=\exp \left(-\frac{L}{2}m^2\right)\int _{X(0)=x}^{X(i)=x'}DX \exp \left(\int _0^i d\tau \frac{1}{2L}X^\mu \partial _\tau ^2 X_\mu \right).$$
The answer might be related to:
$$Z=\frac{\exp \left(-\frac{L}{2}m^2\right)(D-1)\sqrt{2L}}{\sqrt{\det \partial _\tau ^2}}.$$
This is around where I got stuck. How does one go about finding the eigenvalues of $\partial _\tau ^2$? Should I have Fourier transformed earlier in the process? How does one incorporate the boundary fields $x$ and $x'$ into the answer?
Ideally, I would like to utilize zeta-function regularization. Supposing I can find and order the eigenvalues of $\partial _\tau ^2$ as $0<\lambda _1\leq \lambda _2\leq \ldots $ and $\lambda _n\rightarrow \infty $. Then, the zeta-function is defined as:
$$\zeta _{\partial _\tau ^2}(s)\equiv \sum _{n=1}^\infty \frac{1}{\lambda _n^s}.$$
(Is there a typo on the Wikipedia for the index?)
Then, differentiating term-by-term yields: $$\zeta '(s)=\frac{d\zeta }{ds}=\sum _{n=1}^\infty \frac{-\ln \lambda _n}{\lambda _n^s}.$$
Then, the determinant of the operator is: $\det \partial _\tau ^2=\exp \left[-\zeta _{\partial _\tau ^2}'(0)\right]$. This equation made sense as basically the determinant is the product of the eigenvalues however it seems really unclear to me how to use this process to regularize the path integral, as well as the issue of solving for the actual eigenvalues of the operator themselves.
Answer: Since $X^\mu$ does not vanish on the boundaries and is not periodic, you cannot just ignore boundary terms. You need to be more careful, and you can do so by moving to Fourier space.
We consider the path integral
\begin{equation}
\begin{split}
K ( x , x' ; T ) = \int\limits_{X(0) = x}^{X(T)=x'} [DX] \exp \left( - \int_0^T d \tau \left[ \frac{1}{2L} \partial_\tau X^\mu \partial_\tau X_\mu + \frac{L}{2} m^2 \right] \right) .
\end{split}
\end{equation}
Given the boundary conditions in the path integral, we can expand $X^\mu(\tau)$ in a Fourier series expansion as
$$
X^\mu(\tau) = x^\mu + \tau \left( \frac{x'^\mu - x^\mu}{T} \right) + L \sum_{n\neq0} X_n^\mu \exp \left( \frac{2\pi i n \tau}{T} \right) , \tag{1}
$$
with
$$
\sum_{n\neq0} X_n^\mu = 0 . \tag{2}
$$
Note that the sum is over all integers not equal to zero since the zero mode has been included in (1) separately. This zero mode is entirely fixed by the boundary conditions and we do not path integrate over this mode. Finally, a factor of $L$ has been included in (1) to make $X_n$ dimensionless.
The path integral measure is
\begin{equation}
\begin{split}
\int\limits_{X(0) = x}^{X(T)=x'} [DX] &= \int \frac{d^D p}{(2\pi)^D} \int \left( \prod_{n>0} (4\pi)^D d^D a_n d^D b_n e^{ i p \cdot a_n } \right) .
\end{split}
\end{equation}
Since the $X_n$'s are complex, we have broken up the integral into its real part $a_n$ and its imaginary part $b_n$. Further since $(X^\mu_n)^* = X^\mu_{-n}$ (due to reality of $X^\mu(\tau)$), we restrict only to $n > 0$. The integral over $p$ imposes the constraint (2).
Plugging in all of these explicit formulas into the path integral and evaluating, we find
\begin{equation}
\begin{split}
K ( x , x' ; T ) &= \exp \left( - \frac{1}{2} \left[ \frac{( x - x' )^2}{TL} + TL m^2 \right] \right) \prod_{n>0} \left( \frac{ 4 \pi T }{ n^2 L } \right)^{D/2} .
\end{split}
\end{equation}
The infinite product can be written as
\begin{equation}
\begin{split}
\prod_{n>0} \left( \frac{ 4 \pi T }{ n^2 L } \right)^{D/2} &= \exp \left[ \frac{D}{2} \sum_{n>0} \ln \frac{ 4 \pi T }{ n^2 L } \right] = \exp \left[ \frac{D}{2} \ln \frac{ 4 \pi T }{L} \sum_{n>0} 1 - D \sum_{n>0} \ln n \right]
\end{split}
\end{equation}
Each of these infinite sums can be regulated using the zeta function. In the usual physics way, we write
\begin{equation}
\begin{split}
\sum_{n>0} 1 \quad &\to \quad \left( \sum_{n>0} n^{-s} \right) \bigg|_{s=0} = \zeta(0) = - \frac{1}{2} , \\
\sum_{n>0} \ln n \quad &\to \quad - \frac{d}{ds} \left( \sum_{n>0} n^{-s} \right) \bigg|_{s=0} = - \zeta'(0) = \frac{1}{2} \ln (2\pi) .
\end{split}
\end{equation}
Thus,
\begin{equation}
\begin{split}
\prod_{n>0} \left( \frac{ 4 \pi T }{ n^2 L } \right)^{D/2} \quad \to \quad \left( \frac{ 16 \pi^3 T }{L} \right)^{-D/4} .
\end{split}
\end{equation}
The full result is therefore
\begin{equation}
\begin{split}
\boxed{ K ( x , x' ; T ) = \left( \frac{ 16 \pi^3 T }{L} \right)^{-D/4} \exp \left( - \frac{1}{2} \left[ \frac{( x - x' )^2}{TL} + TL m^2 \right] \right). }
\end{split}
\end{equation}
WARNING: I have not double checked this calculation. I could have easily gotten numerical factors incorrect. Please verify all of this yourself! | {
"domain": "physics.stackexchange",
"id": 97549,
"tags": "renormalization, path-integral, regularization, functional-determinants"
} |
Is RL applied to animal dispersion a valid approach? | Question: I have an agent which has a medium-sized, discrete set of actions $A$: $10<|A|<100$. The actions can be taken over an infinite horizon of 1 second per timestep $t$. The world is essentially pictures from a static camera and the states $S$ are the number of animals detected in the picture (assuming perfect detection for simplicity). We can safely say that $max(S) < 200$. Each action $a$ is meant to lower the number of detections because of dispersion or fear from the animals. The reward is the difference of detections from $s_{-1}$ to $s$.
Admittedly I am only a beginner in RL, so I haven't seen much more than MDP's. I believe that this isn't a Markov problem since the states aren't independent (past actions having an impact on the outcome of the current action). That being said, I'm wondering if there's a specific RL algorithm for this setup or if RL even is the right way to go ?
Answer:
Admittedly I am only a beginner in RL, so I haven't seen much more than MDP's. I believe that this isn't a Markov problem since the states aren't independent (past actions having an impact on the outcome of the current action).
If there are enough hidden variables, then this could be a real problem for you. A policy maps states to actions - and searching for an optimal policy will always map the same action to the same state. If this is some kind of automated scarecrow system, then the animals are likely to become habituated to the "best" action.
Two ways around that:
If the habituation is slow enough, you might get away with treating the environment as having the simple state that you suggest and have an agent which constantly learns and adapts to changes in what will trigger animals to leave. This would be an environment with non-stationary dynamics (the same action in the same state will, over time, drift in terms of its expected reward and next state). For a RL agent, this could just be a matter of sticking with a relatively high exploration rate and learning rate.
If the habituation is fast or adaptive, then you have to include some memory of recent actions used in the state representation. This will make the state space much larger, but is unavoidable. You could try to keep that memory inside the agent - using methods that work with Partially Observable MDPs (POMDPs) - it doesn't really make the problem any easier, but you may prefer that representation.
The reward is the difference of detections from $s_{-1}$ to $s$.
You need to re-think the reward system if habituation is fast/adaptive, as it will fail to differentiate between good and bad policies. All policies will end up with some distribution of detections, and all policies will have a mean total reward of 0 as a result. Your reward scheme will average zero in the long term for random behaviour, and also will average zero if the number of creatures remains at 200 the whole time, or is zero the whole time. Presumably having zero creatures would be ideal, whilst having 200 would be terrible - you need to be able to differentiate between those two scenarios.
You want to minimise the number of creatures in the area consistently, so a simple reward scheme is just the negative of the number of visible animals, per time step. Maybe scale this - e.g. divide by 100, or by some assumed background rate where you would get -1 reward per time step if the number of animals is the same on average as if the agent took no action. You could measure this average using the camera over a few days when the agent is not present or simply not active. It doesn't need to be super accurate; the scaling is just for convenience - you could think of an agent that gets a mean reward of -0.2 per time step as five times better than having no agent at all, whilst an agent that scores -1.5 per time step might be attracting creatures for all you know!
That being said, I'm wondering if there's a specific RL algorithm for this setup or if RL even is the right way to go ?
The problem does seem like a good match to RL. There are other ways of searching for good policy functions, such as genetic algorithms, that might also apply. However, you will still need to equip the agent with either continuous learning so it can re-find the best action as it changes, or with a memory of recent actions, depending on the speed of habituation. You may even need both as smart animals like birds or mammals can adapt in long and short term in different ways. | {
"domain": "datascience.stackexchange",
"id": 5752,
"tags": "reinforcement-learning"
} |
On NP and PP in RP? | Question: Does $NP\subseteq RP\implies NP=RP$?
Does $PP\subseteq RP\implies \oplus P=NP=RP$?
At least what additional minimal conditions will give truth of above?
Answer: To see why the first implication holds, note that you can use the sequence of coin tosses as a witness (if the input is not in the language, no accepting sequence exists).
As for the second implication, note that $\mathsf{\oplus P\subseteq P^{PP}}$, since you can find the exact number of accepting paths using binary search. If $\mathsf{PP=RP}$, then $\mathsf{\oplus P\subseteq P^{RP}}$. Since $\mathsf{PP}$ is closed under complement, you have $\mathsf{RP=coRP=ZPP}$, which means $\mathsf{\oplus P\subseteq P^{ZPP}\subseteq RP}$. I doubt that you can show $\mathsf{RP\subseteq \oplus P}$ in this case, since relativizing techniques won't do here. In "NP Might Not Be As Easy As Detecting Unique Solutions" by Beigel et al. they construct an oracle relative to which $\mathsf{ZPP=EXP}$ (implying $\mathsf{PP=RP}$) and $\mathsf{P = \oplus P}$, which means $\mathsf{RP\nsubseteq \oplus P}$. | {
"domain": "cs.stackexchange",
"id": 11161,
"tags": "complexity-theory, complexity-classes"
} |
How to find minimum number of k-input LUTs needed to express a n variable boolean function? | Question: A k-input LUT (look-up table) takes in at most k inputs and gives 1 output (which is a function of the k inputs). I need to devise an algorithm to find the minimum number of k-input LUTs required to express an n-variable boolean function.
For example, with n=8 and k=4, the 8 input variables are a0, a1, ..., a7:
1) f1(a0,a1,..a7) = a0*a1 + a3*a4 + a5*a7 + a6*a8. Requires 3 LUTs: LUT1: a0*a1 + a3*a4 = x1; LUT2: a5*a7 + a6*a8 = x2; LUT3: x1 + x2.
2) f2(a0,a1,..a7) = a0*a1 + a3*a4 + a5*a7 + a6. Requires 2 LUTs: LUT1: a0*a1 + a3*a4 = x1; LUT2: x1 + a5*a7 + a6.
NOTE: I want to solve this for the particular case of n=8 and k=4. Though, I would like to extend it to other cases like (n,k)= (8,2).
Answer: The following decision version of your problem is NP-complete:
Given a formula $f$, an integer $k$, and an integer $s$ (encoded in unary), determine whether $f$ can be represented using at most $s$ many $k$-input LUT's.
Indeed, you can easily solve SAT by asking whether the instance of SAT can be represented using at most one 0-input LUT. If you disallow $k=0$, you need to work a bit harder: a formula $\phi$ is satisfiable iff $\phi \oplus x$ (where $x$ is a fresh variable) can be represented using at most one 1-input LUT. | {
"domain": "cs.stackexchange",
"id": 9648,
"tags": "algorithms, optimization, boolean-algebra"
} |
Best practices: checking a path with Costmap2DROS | Question:
Hi,
I have a populated Costmap2DROS object and a vector of poses which make up the robot's path. I need to verify that on every pose, the robot is not in collision.
It should be sufficient to check whether the pose lies within the robot's inscribed radius, although if the computational load is light enough I could check the robot's footprint.
What are the best practices for doing this with a Costmap2DROS? I have found that I can make a copy of the underlying costmap, but this seems inefficient.
Thanks!
Originally posted by bkx on ROS Answers with karma: 145 on 2012-02-20
Post score: 0
Answer:
You should be able to use the CostmapModel class in the base_local_planner package to allow you to check if a given pose is in collision. You can find documentation on the class here.
Originally posted by eitan with karma: 2743 on 2012-02-21
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8304,
"tags": "ros, navigation, collision, costmap-2d, costmap"
} |
How can I calculate the total signal level (noise) of PC fans? | Question: I have a fan on a PC case running at 100% speed making 40dB/A of noise (sound pressure). Then I add a second fan right next to it running at 100% speed making 30dB/A of noise.
What is the combined noise level?
What happens if I add an extra 50dB/A fan?
And then 10 more 30dB/A fans?
Forget about space, treat them as sources coming from the same point in 3D space.
EDIT
The total signal level from sources with different strengths can be calculated as:
Lt = 10 log ((S1 + S2 ... + Sn) / Sref)
Where
Lt = total signal level (dB)
S = signal (signal unit)
Sref = signal reference (signal unit)
Source
I can't directly use this formula because it requires the sound power of the fans, whereas manufacturers such as Noctua typically provide sound pressure. How can I convert it? What will I need?
Answer: This is a pretty straightforward calculator: https://www.engineeringtoolbox.com/adding-decibel-d_63.html
Basically noise level is dictated by your 2 loudest sources, so 50 dB + 40 dB (assuming they are at or near the same location) would result in about 50.5 dB. | {
"domain": "engineering.stackexchange",
"id": 3299,
"tags": "mechanical-engineering, acoustics, audio-engineering, waves"
} |
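For incoherent sources treated as coming from the same point (as the question stipulates), the formula from the edit reduces to converting each level back to a linear power ratio, summing, and converting back; manufacturer sound-pressure figures can be combined the same way as long as they are all quoted at the same reference distance. A minimal sketch:

```java
public class DecibelSum {
    // Combine incoherent noise sources measured at the same point:
    // convert each level to a linear power ratio, sum, convert back.
    public static double combine(double... levelsDb) {
        double sum = 0.0;
        for (double level : levelsDb) {
            sum += Math.pow(10.0, level / 10.0);
        }
        return 10.0 * Math.log10(sum);
    }
}
```

For the question's numbers: combine(40, 30) gives about 40.4 dB (the quieter fan barely registers), combine(40, 30, 50) gives about 50.5 dB, and adding ten more 30 dB fans on top still only reaches about 50.8 dB, so the loudest source dominates, as the answer says; two equal 40 dB fans give the familiar +3 dB, i.e. about 43 dB.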
What are good ways to detect signal clipping in a recording? | Question: Given a recording I need to detect whether any clipping has occurred.
Can I safely conclude there was clipping if any (one) sample reaches the maximum sample value, or should I look for a series of subsequent samples at the maximum level?
The recording may be taken from 16 or 24-bit A/D converters, and are converted to floating point values ranging from $-1...1$. If this conversion takes the form of a division by $2^{15}-1$ or $2^{23}-1$, then presumably the negative peaks can be somewhat lower than -1, and samples with the value -1 are not clipped?
Obviously one can always create a signal specifically to defeat the clipping detection algorithm, but I'm looking at recordings of speech, music, sine waves or pink/white noise.
Answer: I was in the middle of typing an answer pretty much exactly like Yoda's. His is probably the most reliable, but I'll propose a different solution so you have some options.
If you take a histogram of your signal, you will more than likely see a bell- or triangle-like shape, depending on the signal type. Clean signals will tend to follow this pattern. Many recording studios add a "loudness" effect that causes a little bump near the top, but it is still somewhat smooth looking. Here is an example from a real song from a major musician:
Here is the histogram of signal that Yoda gives in his answer:
And now the case of their being clipping:
This method can be fooled at times, but it is at least something to throw in your tool bag for situations that the FFT method doesn't seem to be working for you or is too many computations for your environment. | {
"domain": "dsp.stackexchange",
"id": 47,
"tags": "audio, algorithms"
} |
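A cheap complement to the histogram view is the run-length test the question itself proposes: flag clipping only when several consecutive samples sit at full scale, so a lone full-scale peak does not count. A sketch, where the 0.999 threshold and the 3-sample minimum run are assumed values, not anything from the posts:

```java
public class ClippingDetector {
    // Flags clipping when at least `minRun` consecutive samples sit at or
    // above `threshold` of full scale (in absolute value). Requiring a run
    // rather than a single sample avoids false positives on a lone peak.
    public static boolean isClipped(float[] samples, float threshold, int minRun) {
        int run = 0;
        for (float s : samples) {
            if (Math.abs(s) >= threshold) {
                run++;
                if (run >= minRun) {
                    return true;
                }
            } else {
                run = 0;
            }
        }
        return false;
    }
}
```

For 16- or 24-bit sources converted by dividing by $2^{15}-1$ or $2^{23}-1$, the threshold would be applied to absolute values, which sidesteps the asymmetry around -1 the question mentions.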
Marginal and relevant operators that a $\phi^4$ theory should contain as an effective field theory | Question: Consider the Lagrangian of $\phi^4$ theory in 4-dimensions $$\mathcal{L}=\frac{1}{2}(\partial_\mu\phi)^2-\frac{1}{2}m^2\phi^2-\frac{\lambda}{4!}\phi^4\tag{1}$$ For a term in the Lagrangian of the form $C_{m,n}\phi^n(\partial_\mu\phi)^m$, the corresponding coefficient in 4-dimensions scales as $$C^\prime_{n,m}=b^{n+m-4}C_{n,m}\tag{2}.$$ For a reference, see Eq. 12.27, page 402 of Peskin and Schroeder.
The Lagrangian $\mathcal{L}$ in Eqn.(1), contains all relevant and marginal operators except $$\phi^2(\partial_\mu\phi)^2, \phi(\partial_\mu\phi)^3,\phi^3(\partial_\mu\phi),(\partial_\mu\phi)^4,\phi^3.\tag{3}$$ Even if we exclude the $\phi^3$ terms by demanding a symmetry under $\phi\to-\phi$, or by demanding the Hamiltonian to be bounded from below, one is still left with the other 4 possibilities.
If $\phi^4$ theory is regarded as a low-energy effective theory, I do not understand why the first 4 terms in (3) are they not considered? Is it just for simplicity?
Answer: Your power-counting is not correct. In $d=4$ a gradient $\partial_{\mu}$ counts like a field $\phi$. So, for instance, you should have $C_{m,n}'=b^{n+2m-4}C_{m,n}$.
"domain": "physics.stackexchange",
"id": 40822,
"tags": "quantum-field-theory, renormalization, effective-field-theory"
} |
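Spelling out the answer's correction: in $d=4$ the field and the gradient each carry mass dimension 1, so

```latex
% d = 4 mass dimensions: a gradient counts like a field
[\phi] = 1, \qquad [\partial_\mu] = 1
\quad\Longrightarrow\quad
\bigl[\phi^n (\partial_\mu \phi)^m\bigr] = n + 2m,
\qquad
C'_{n,m} = b^{\,n+2m-4}\, C_{n,m}.
```

With the corrected exponent, the four derivative operators in (3) carry $n+2m = 6, 7, 5, 8$ respectively, all greater than 4, so each is irrelevant rather than marginal or relevant; that is why the effective Lagrangian (1) omits them.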
Use SemaphoreSlim to control access to a resource | Question: We have an existing situation in an MVC ASP.NET app where it's possible for two threads to come back asynchronously, one from an external api(the payment gateway) and one from within the browser for each cart in an online shop. Only one should be able to run some further code, but they can come back almost simultaneously. We have something in place that kind of works, but is not working correctly all the time. The following code has been suggested to replace it.
using System;
using System.Collections.Generic;
using System.Threading;
namespace semaphoreTest.Semaphore
{
public class NxSemaphore : SemaphoreSlim
{
public NxSemaphore() : base(1, 1){}
public DateTime Created { get; set; }
public DateTime LastUsed { get; set; }
public string Key { get; set; }
}
public static class AppSemaphoreDict
{
private static Dictionary<string, NxSemaphore> _cartSemaphores = new Dictionary<string, NxSemaphore>();
private static object _lockObj = new object();
public static NxSemaphore GetForCart(string nxID)
{
lock (_lockObj)
{
NxSemaphore semaphore;
if (_cartSemaphores.TryGetValue(nxID, out semaphore))
{
semaphore.LastUsed = DateTime.Now;
}
else
{
semaphore = new NxSemaphore()
{
Created = DateTime.Now,
Key = nxID,
LastUsed = DateTime.Now
};
_cartSemaphores.Add(semaphore.Key, semaphore);
}
return semaphore;
}
}
}
}
The idea is that a semaphore should be created for each cart (which has a unique id - NxId) and would be used in a manner below
public async Task<string> DoStuffDict()
{
NxSemaphore semaphore = null;
try
{
//get a random cart id for testing
string NxGuid = SempahoreTest.Models.RandomNxGuid.GetRestrictedNxGuid(300);
semaphore = AppSemaphoreDict.GetForCart(NxGuid);
await semaphore.WaitAsync();
//Get some stuff from an api
NxApi api = new NxApi(NxRequestContext);
var data = await api.GetSomeStuff;
//write some stuff to the database
using (DataContext dbContext = new DataContext())
{
SemaphoreTest test = new SemaphoreTest();
test.Key = semaphore.Key;
test.DateTimeCreated = semaphore.Created;
test.DateTimeLastUsed = semaphore.LastUsed;
dbContext.SemaphoreTest.InsertOnSubmit(test);
dbContext.SubmitChanges();
}
return "All Good";
}
catch (Exception ex)
{
return ex.ToString();
}
finally
{
if (semaphore != null)
{
semaphore.Release();
}
};
}
I have run the above code using WebSurge with a bunch of threads, and it seems to pick up the correct existing SemaphoreSlim instance for the same cart id. Obviously there needs to be some cleanup run to remove old SemaphoreSlim instances from the in-memory Dictionary, which I have not added as yet. Not sure about the best way to go about this - any ideas? Also, thoughts on this generally as a pattern? Are there any glaring mistakes in there, or a better method to employ?
Edit
There will potentially be a number of calls to this function with different cart ids. The same id can be used multiple times (but usually at least twice). The line below is just simulating passing an id to the function. It is using a fixed list of 300 keys and grabbing one each time. When I hit this with a load tester using 20 threads, it picks up the correct SemaphoreSlim each time for the same id.
SempahoreTest.Models.RandomNxGuid.GetRestrictedNxGuid(300);
In practice, the cart key will be passed in to the function. Currently there may be a hundred or so sessions at any point in time, but they won't all be trying to hit the code at the same time, unless everyone is paying for their cart simultaneously. The crucial part is that for each different cart id, there may be two different threads hitting the function and only one of them should run the task. I also omitted in the code above a check that is made to see if the other thread has done the job (and saved a record in the database), in which case the second thread coming through does not need to do anything. If we just have one SemaphoreSlim for all carts, my worry was that it would block the others too much, as the api and database calls are quite heavy and may take around 500ms to complete.
Answer: Review
I would say, KISS (keep it stupid simple). Try going for a single lock and benchmark both normal expected throughput and peak scenarios. If you can manage with this setup, you're done.
In case you do require a mutex for every unique key, using a lock to acquire that key is good practice. You have an optimized lookup (TryGetValue vs Contains followed by a get) in if (_cartSemaphores.TryGetValue(nxID, out semaphore)), and all actions inside this lock are fast.
However, as mentioned in the comments, your cache might grow over time, and you'll eventually reach a point where most keys have become lingering. You need to think about how to perform some garbage collection on this cache. This can be tricky. Check out the post force-concurrentdictionary-in-a-singleton-registry-to-collect-removed-items-spac to see that C# collections are optimized for performance, not for memory management when cleaning up lingering items. So at some point you would probably want to clear the cache completely and re-assign a new instance of the cache. This could be expensive, but it is required when your server runs 24/7 and accumulates lots of new and stale semaphores.
One possible way of cleaning the cache is to have a scheduled task that periodically (you should figure out a good cycle window) clears the cache.
In C#, roughly:
lock (_lockObj)
{
    // acquire each semaphore so no in-flight request still holds it
    foreach (var semaphore in _cartSemaphores.Values)
    {
        semaphore.Wait();
    }
    // drop all entries and start over with a fresh dictionary
    _cartSemaphores = new Dictionary<string, NxSemaphore>();
}
One other thing, you are assigning a slightly different update time than creation time:
semaphore = new NxSemaphore()
{
Created = DateTime.Now,
Key = nxID,
LastUsed = DateTime.Now
};
This probably doesn't have a major impact, but it would be better to have the initial state consistent.
public NxSemaphore()
{
Created = DateTime.Now;
LastUsed = Created;
} | {
"domain": "codereview.stackexchange",
"id": 35788,
"tags": "c#, thread-safety, asp.net-mvc"
} |
How to find the correct expression for the gravitational field on a mass $m$? | Question: The condition is that the mass $m$ is placed between two concentric shells of masses $M_1$ and $M_2$ with radii $r_1$ and $r_2$ at some radius $r$.
Should only the mass $M_1$ be considered, given that the mass $m$ sits at $r_1<r<r_2$?
Answer: It can be shown that for a test mass $m$ inside a shell of mass $M$,
the net gravitational force from $M$ on $m$ is equal to zero.
Therefore in your example the mass $M_2$ is irrelevant. Now since $m$ is outside of $M_1$, the gravitational force from $M_1$ on $m$ is not zero.
In fact the gravitational force of $M_1$ on $m$ is the same as if we considered the whole shell $M_1$ to be concentrated in a point (namely the center of the mass shell).
See also here | {
"domain": "physics.stackexchange",
"id": 93321,
"tags": "homework-and-exercises, newtonian-gravity, symmetry"
} |
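Putting the answer's two facts together, the field seen by the test mass $m$ at radius $r$ is piecewise:

```latex
\vec{g}(r) =
\begin{cases}
\vec{0}, & r < r_1 \quad \text{(inside both shells)} \\[4pt]
-\dfrac{G M_1}{r^2}\,\hat{r}, & r_1 < r < r_2 \quad \text{(only the inner shell contributes)} \\[4pt]
-\dfrac{G (M_1 + M_2)}{r^2}\,\hat{r}, & r > r_2
\end{cases}
```

So for $r_1 < r < r_2$ the force on $m$ is $\vec{F} = -G M_1 m\,\hat{r}/r^2$, exactly as if $M_1$ were a point mass at the center.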
Given two $a^n b^n$-style languages, how do I find out if their intersection is regular? | Question: I have two languages as below.
$$L_1=\{a^ncb^n\}\cup\{a^mdb^{2m}\}$$
$$L_2=\{a^{2n}cb^{2m+1}\}\cup\{a^{2m+1}db^{2n}\}$$
Now, I wonder what is $L_1\cap L_2$. Is it a regular language? Is it context-free?
To solve the problem, I feel like I need to solve the following system of equations, but I'm not sure.
$$
\begin{cases}
2n=2m+1\\
2m+1=\frac{2n}{2}
\end{cases}
$$
Answer: $L_1=\{a^ncb^n\}\cup\{a^mdb^{2m}\}$
$L_2=\{a^{2n}cb^{2m+1}\}\cup\{a^{2m+1}db^{2n}\}$
If we say $\;L_1=L_{11}\cup L_{12}\;\:$and$\;L_2=L_{21}\cup L_{22}$
$L_1\cap L_2=((L_{11}\cap L_{21})\cup (L_{12}\cap L_{21}))\cup ((L_{11}\cap L_{22})\cup (L_{12}\cap L_{22})) $
$L_{11}\cap L_{22}\:,\:L_{12}\cap L_{21}\;$ will be empty because $d$ and $c$ are different symbols.
So $\;L_1\cap L_2=(L_{11}\cap L_{21})\cup (L_{12}\cap L_{22}) $
$L_{11}\cap L_{21}=\emptyset \:\;$because it is not possible for an even number to be equal to an odd number.
$\;L_1\cap L_2=L_{12}\cap L_{22}=L=\{a^{2m+1}db^{4m+2}\}$
So the language is not regular but context free. It is impossible to construct a finite automaton for this language because finite automaton has a memory that is fixed and cannot thereafter be expanded. ( It cannot store number of a's ) But it is easy to recognize that L is context free since $L=(a(aa)^*db^*)\cap \{a^ndb^{2n}|n\geq 0\}$ ( intersection of a CFL with a regular language is CFL ) And $ \{a^ndb^{2n}|n\geq 0\} $ is context free because there exists a push-down automaton that accepts L. When this automaton sees an $\:a\:$ it pushes 2 $\:a\:$'s into the stack and when $\:d\:$ is the input it changes it's state to a final state and then for each $\:b\:$ it pops an $\:a\:$ from the stack. | {
"domain": "cs.stackexchange",
"id": 8945,
"tags": "formal-languages, regular-languages, context-free"
} |
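The pushdown recognition described at the end of the answer can be mimicked with a plain counter; the counter is unbounded, which is exactly what a finite automaton lacks. A sketch of a recognizer for $L=\{a^{2m+1}db^{4m+2}\}$ (the class and method names are just illustrative):

```java
public class IntersectionLanguage {
    // Checks membership in L = { a^(2m+1) d b^(4m+2) : m >= 0 }:
    // an odd number of a's, a single d, then exactly twice as many b's.
    public static boolean inL(String w) {
        int i = 0;
        int as = 0;
        while (i < w.length() && w.charAt(i) == 'a') { as++; i++; }
        if (as % 2 == 0) return false;                 // a-count must be odd
        if (i >= w.length() || w.charAt(i) != 'd') return false;
        i++;                                           // consume the single d
        int bs = 0;
        while (i < w.length() && w.charAt(i) == 'b') { bs++; i++; }
        return i == w.length() && bs == 2 * as;        // b-count = 2 * a-count
    }
}
```

The counter plays the role of the stack in the answer's pushdown automaton: two pushes per a, one pop per b.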
Find Top 10 IP out of more than 5GB data | Question: I have a few files, and their total size is more than 5 GB. Each line of the files is an IP address, and looks like:
127.0.0.1 reset success
...
127.0.0.2 reset success
How can I find the top 10 most frequent IPs in 25 s, using at most 500 MB of memory?
Here is my code, but it takes about 64 s and uses more than 1 GB of memory.
package main;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.Charset;
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
public class TopK {
public TreeMap<String, Integer> tMap = new TreeMap<String, Integer>();
public static int MaxLength = 1024 * 1024 * 10;
public static void main(String[] args) {
ExecutorService executorService = Executors.newFixedThreadPool(12);
FileHelper fileHelper = new FileHelper(args[0]);
fileHelper.getFileList(args[0]);
System.out.println("begin submit: " + LocalTime.now());
TopK testCountWords = new TopK();
fileHelper.allFiles.parallelStream().forEach(file -> {
int count = (int) (file.length() / MaxLength + 1);
for (int i = 0; i < count; i++) {
if (i == count - 1) {
CountWords cw = new CountWords(testCountWords, file, i * MaxLength, file.length() - i * MaxLength);
executorService.execute(cw);
} else {
CountWords cw = new CountWords(testCountWords, file, i * MaxLength, MaxLength);
executorService.execute(cw);
}
}
});
executorService.shutdown();
System.out.println("end submit: " + LocalTime.now());
try {
while (!executorService.awaitTermination(10, TimeUnit.SECONDS))
;
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("end count: " + LocalTime.now());
List<IPAndCount> result = topK(testCountWords.tMap, 5);
result.stream().forEach(System.out::println);
}
public static List<IPAndCount> topK(Map<String, Integer> ips, int k) {
List<IPAndCount> ipAndCounts = new ArrayList<>();
for (Map.Entry<String, Integer> entry : ips.entrySet()) {
ipAndCounts.add(new IPAndCount(entry.getKey(), entry.getValue()));
}
return ipAndCounts.parallelStream().sorted((ic1, ic2) -> {
return ic2.getCount() - ic1.getCount();
}).collect(Collectors.toList()).subList(0, k);
}
public synchronized void AddKey(String word) {
if (tMap.containsKey(word)) {
tMap.put(word, tMap.get(word) + 1);
} else {
tMap.put(word, 1);
}
}
}
class CountWords implements Runnable {
private FileChannel fc;
private FileLock fl;
private MappedByteBuffer mbBuf;
private TopK countWords;
private RandomAccessFile accessFile;
public Pattern resetCommandPattern = Pattern.compile(
"\\s*((?:(?:25[0-5]|2[0-4]\\d|((1\\d{2})|([1-9]?\\d)))\\.){3}(?:25[0-5]|2[0-4]\\d|((1\\d{2})|([1-9]?\\d))))\\s+reset\\s+success",
Pattern.CASE_INSENSITIVE);
public CountWords(TopK testCountWords, File src, long start, long size) {
this.countWords = testCountWords;
try {
accessFile = new RandomAccessFile(src, "rw");
fc = accessFile.getChannel();
// Lock this portion of the file
fl = fc.lock(start, size, false);
// Memory-map this file segment; files that are too large are split into multiple segments
mbBuf = fc.map(FileChannel.MapMode.READ_ONLY, start, size);
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void run() {
String text = Charset.forName("UTF-8").decode(mbBuf).toString();
Matcher m = resetCommandPattern.matcher(text);
while (m.find()) {
countWords.AddKey(m.group(1));
}
try {
// Release the file lock
fl.release();
fc.close();
accessFile.close();
} catch (IOException e) {
e.printStackTrace();
} finally {
text = null;
}
return;
}
}
class IPAndCount {
private String ip;
private int count;
public IPAndCount(String ip, int count) {
this.setIp(ip);
this.setCount(count);
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public int getCount() {
return count;
}
public void setCount(int count) {
this.count = count;
}
public String toString() {
return ip + "," + String.valueOf(count);
}
}
Answer: Yours is an interesting problem, for sure, and problems like this can often be solved better by using more primitive data structures. Additionally, concurrency is often useful, but in IO based input operations your bottleneck is often the IO component, and not the processing.
So, multiple input files may seem like a good thing to parallelize, but the reality is that IO is often sequential. Admittedly, in the current age of SSDs and so on, random access is less of a concern, but I doubt that having parallel IO streams is helping you at all.
So, three major items for you to consider:
Remove all concurrency for this problem - your inputs are sequential, and are probably also your bottleneck. You can sequentially read 5GB of data at 100MB/s on a decent HDD in 50 seconds, and on an SSD you can get that done in 10 seconds.
Do the work in two operations - one to count each record, and the second to sort the results.
Use a better data structure. I would consider using a sparse trie-like primitive int[][][] array. See https://en.wikipedia.org/wiki/Trie
The nature of most network-based operation is that almost all traffic comes from a small set of subnets, so your data will be clustered in small number of high-frequency clusters. The array I would use to count in would be something like:
private int[][][] counters = new int[1 << 16][][];
That creates an array of 65K pointers to int[][] arrays. Now, you take your IP addresses, like 127.0.0.1, and the first two bytes 127.0 are the index into that array. 127.0 is, in hex, 0x7f00, so make sure that there's an array populated in that location:
/**
* Locate the counter for the given IP address (creating it if necessary) and increment it.
* @returns the newly incremented count for the given IP.
*/
private static int incrementCount(int[][][] counters, String ipString) throws UnknownHostException {
// .... ipString = "127.0.0.1"
byte[] ip = InetAddress.getByName(ipString).getAddress();
// Mask with 0xFF: Java bytes are signed, so the indices must be forced into 0..255.
// Note the parentheses: + binds tighter than <<, so ip[0] << 8 + ip[1] would be wrong.
int majorIndex = ((ip[0] & 0xFF) << 8) + (ip[1] & 0xFF);
int mid = ip[2] & 0xFF;
int leaf = ip[3] & 0xFF;
int[][] midTrie = counters[majorIndex];
if (midTrie == null) {
midTrie = new int[256][];
counters[majorIndex] = midTrie;
}
if (midTrie[mid] == null) {
midTrie[mid] = new int[256];
}
return ++midTrie[mid][leaf];
}
OK, so the above function will maintain the counts for each IP address in a primitive data structure, with little waste of memory. In a worst-case scenario, where you have traffic from every subnet on the planet, you will run out of memory... but you will have other problems at that point.
So, with the above, you can then build the full set of counters for your data, and after that, you can post-process it and extract the top-X counts for the IPs.
That topX count is done with a normal mechanism of managing a list of size X containing the IP and count of the largest values.... something like:
private static class Candidate {
private String ip;
private int count;
// ....
}
// walk the entire trie of counters, scanning it all.
List<Candidate> topX = new ArrayList<>(x);
for (int i = 0; i < counters.length; i++) {
if (counters[i] == null) {
continue;
}
for (int j = 0; j < counters[i].length; j++) {
if (counters[i][j] == null) {
continue;
}
for (int k = 0; k < counters[i][j].length; k++) {
int count = counters[i][j][k];
if (count == 0) {
continue;
}
checkCounter(topX, x, i, j, k, count);
}
}
}
Now, your checkCounter method just needs to see whether the count supplied is better than the smallest value already in the topX list, and, if it is, it needs to insert the new value at the correct (sorted) position, and if the list is now larger than size x, it should discard the smallest value.
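That bookkeeping can be sketched as follows (Python for brevity, since the mechanism is language-agnostic; the check_counter name and the (count, ip) tuple layout are illustrative choices):

```python
import bisect

def check_counter(top_x, x, ip, count):
    """Keep top_x as a list of (count, ip) sorted ascending by count,
    holding at most x entries with the largest counts seen so far."""
    if len(top_x) == x and count <= top_x[0][0]:
        return  # not better than the current smallest, nothing to do
    bisect.insort(top_x, (count, ip))   # insert at the correct sorted position
    if len(top_x) > x:
        top_x.pop(0)                    # discard the smallest value

top = []
for ip, count in [("127.0.0.1", 5), ("10.0.0.1", 9), ("10.0.0.2", 2), ("10.0.0.3", 7)]:
    check_counter(top, 2, ip, count)
# top now holds the 2 largest: [(7, '10.0.0.3'), (9, '10.0.0.1')]
```

Since the list is tiny (size X), the cost of the occasional sorted insert is negligible next to the full trie scan.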
These operations would all be encumbered by concurrency checking, etc. | {
"domain": "codereview.stackexchange",
"id": 22450,
"tags": "java, mapreduce"
} |
Python Machine Learning Experts | Question: I'd like to apply some of the more complex supervised machine learning techniques in python - deep learning, generalized addative models, proper implementation of regularization, other cool stuff I dont even know about, etc.
Any recommendations how I could find expert ML folks that would like to collaborate on projects?
Answer: You could try some competitions from kaggle.
Data Science courses from Coursera, edX, etc also provide forums for discussion.
Linkedin or freelance sites could be other possibilities. | {
"domain": "datascience.stackexchange",
"id": 138,
"tags": "machine-learning, python"
} |
Generate a large block diagonal covariance matrix with exponential decay | Question: I am implementing Kalman filtering in R. Part of the problem involves generating a really huge error covariance block-diagonal matrix (dim: 18000 rows x 18000 columns = 324,000,000 entries). We denote this matrix Q. This Q matrix is multiplied by another huge rectangular matrix called the linear operator, denoted by H.
I am able to construct these matrices but it takes a lot of memory and hangs my computer. I am looking at ways to make my code efficient or do the matrix multiplications without actually creating the matrices exclusively.
library(lattice)
library(Matrix)
library(ggplot2)
nrows <- 125
ncols <- 172
p <- ncols*nrows
#--------------------------------------------------------------#
# Compute Qf.OSI, the "constant" model error covariance matrix #
#--------------------------------------------------------------#
Qvariance <- 1
Qrho <- 0.8
Q <- matrix(0, p, p)
for (alpha in 1:p)
{
JJ <- (alpha - 1) %% nrows + 1
II <- ((alpha - JJ)/ncols) + 1
#print(paste(II, JJ))
for (beta in alpha:p)
{
LL <- (beta - 1) %% nrows + 1
KK <- ((beta - LL)/ncols) + 1
d <- sqrt((LL - JJ)^2 + (KK - II)^2)
#print(paste(II, JJ, KK, LL, "d = ", d))
Q[alpha, beta] <- Q[beta, alpha] <- Qvariance*(Qrho^d)
}
}
# dn <- (det(Q))^(1/p)
# print(dn)
# Determinant of Q is 0
# Sum of the eigen values of Q is equal to p
#-------------------------------------------#
# Create a block-diagonal covariance matrix #
#-------------------------------------------#
Qf.OSI <- as.matrix(bdiag(Q,Q))
print(paste("Dimension of the forecast error covariance matrix, Qf.OSI:")); print(dim(Qf.OSI))
It takes a long time to create the matrix Qf.OSI in the first place. Then I am looking at pre- and post-multiplying Qf.OSI with a linear operator matrix, H, which is of dimension 48 x 18000. The resulting HQf.OSIHt is finally a 48x48 matrix. What is an efficient way to generate the Q matrix? The above form for the Q matrix is one of many in the literature. In the image below you will see yet another form for Q (called the Balgovind form), which I haven't implemented but which I assume is equally time-consuming to generate in R.
Answer:
JJ <- (alpha - 1) %% nrows + 1
II <- ((alpha - JJ)/ncols) + 1
That looks likely to be buggy. I would guess that alpha is supposed to be an encoding for a pair (row, col), but in that case the same base should be used for the %% and the /.
I would also suggest that if you can't use 0-indexed matrices then you do the offset to 1-based when you access the matrices, and keep the values you manipulate 0-based. See how much simpler this is:
for (rowa in 0:(nrows-1))
{
for (cola in 0:(ncols-1))
{
a = rowa * ncols + cola
for (rowb in 0:(nrows-1))
{
for (colb in 0:(ncols-1))
{
b = rowb * ncols + colb
d = sqrt((rowa - rowb)^2 + (cola - colb)^2)
Q[a+1, b+1] <- Qvariance * (Qrho^d)
}
}
}
}
Incidentally, since Qvariance is multiplied into every single element you could pull that out and post-multiply the final \$48 \times 48\$ matrix instead.
Now, elimination of the matrix. We have \$(AB)_{i,j} = \sum_k A_{i,k} B_{k,j}\$, so $$(HQH^T)_{i,j} = \sum_k H_{i,k}(QH^T)_{k,j} = \sum_k H_{i,k} \sum_l Q_{k,l} H^T_{l,j} = \sum_k \sum_l H_{i,k} H_{j,l} Q_{k,l}$$ which allows you to restructure the code so as to avoid creating \$Q\$ in memory. However, it is at the cost of using the naïve algorithm for matrix multiplication, and your matrices are large enough that R is probably using a sub-cubic algorithm. So what you might want to do is to instead break it down into chunks: e.g. of size nrows \$\times\$ nrows. I don't know enough R to be certain, but I expect that its index range notation allows you to do this quite cleanly.
Following up on some comments, we can expand \$k = r_1 C + c_1\$, \$l = r_2 C + c_2\$ where \$C\$ is ncols, and get
$$(HQH^T)_{i,j} = \sum_{r_1} \sum_{c_1} \sum_{r_2} \sum_{c_2} H_{i,r_1 C + c_1} H_{j,r_2 C + c_2} Q_{r_1 C + c_1,r_2 C + c_2} \\
= \sigma \sum_{r_1=1}^R \sum_{r_2=1}^R \sum_{c_1=1}^C \sum_{c_2=1}^C H_{i,r_1 C + c_1} H_{j,r_2 C + c_2} \rho^{\sqrt{(r_1-r_2)^2 + (c_1-c_2)^2}} $$
Let \$Q^{(\delta)}\$ be a symmetric \$C \times C\$ matrix with \$Q^{(\delta)}_{i,j} = \rho^{\sqrt{\delta^2 + (i-j)^2}}\$. Then $$(HQH^T)_{i,j} = \sigma \sum_{r_1=1}^R \sum_{r_2=1}^R \sum_{c_1=1}^C \sum_{c_2=1}^C H_{i,r_1 C + c_1} Q^{(|r_1-r_2|)}_{c_1,c_2} H_{j,r_2 C + c_2} \\
HQH^T = \sigma \sum_{r_1=1}^R \sum_{r_2=1}^R H_{1..48,r_1 C .. (r_1+1)C} Q^{(|r_1-r_2|)} H_{1..48,r_2 C..(r_2+1)C}^T $$
and the sum can be regrouped by \$|r_1 - r_2|\$ to calculate each \$Q^{(\delta)}\$ only once. When calculating \$Q^{(\delta)}\$ you can exploit the symmetry without worrying too much about cache coherence, because the whole of \$Q^{(\delta)}\$ should fit in L2 cache. | {
"domain": "codereview.stackexchange",
"id": 35400,
"tags": "time-limit-exceeded, matrix, statistics, r, memory-optimization"
} |
Low Pass Filter - pixel correlation | Question: I am trying to understand low pass filters for image processing. I came across an article that stated that low pass filters affect noise more than the real data, because of the following:
Noise always changes rapidly from pixel to pixel because each pixel
generates its own independent noise. The image from the telescope
isn't "uncorrelated" in this fashion because real images are spread
over many pixels.
Does this mean that the filter affects noise more due to the noise being correlated from pixel to pixel?
Answer: Most images are inherently low-pass: the object that is displayed in the image generally doesn't change from pixel to pixel.
Most image pixel noise is generated by each individual pixel capturing the image. As a result this noise is generally assumed to be uncorrelated from pixel to pixel. That means the noise is (bandlimited) white noise, which has uniform components across all frequencies low to high.
So, when you apply a low pass filter to an image, most of the information about the objects imaged will pass straight through because they are in the low pass filter's pass band.
Conversely, noise will have its high frequency components attenuated by the low pass filter.
That is why the statement low pass filters effect noise more than the real data is generally true.
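A quick numerical sketch of that claim, assuming a 9-tap moving average as a stand-in low-pass filter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
signal = np.sin(np.linspace(0, 20 * np.pi, n))   # slowly varying, "image-like" content
noise = rng.normal(scale=0.5, size=n)            # white noise: power at all frequencies

kernel = np.ones(9) / 9                          # simple moving-average low-pass filter

def lowpass(x):
    return np.convolve(x, kernel, mode="same")

# The low-frequency signal passes through almost unchanged...
print(np.mean((lowpass(signal) - signal) ** 2))  # tiny distortion
# ...while the noise power drops by roughly the filter length (about 9x here).
print(noise.var() / lowpass(noise).var())
```

The same effect occurs in 2D with an image and a smoothing kernel; 1D just keeps the sketch short.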
To answer your explicit question:
Does this mean that the filter affects noise more due to the noise being correlated from pixel to pixel?
The filter affects the noise more due to the noise being uncorrelated from pixel to pixel. | {
"domain": "dsp.stackexchange",
"id": 3456,
"tags": "image-processing, computer-vision, noise, lowpass-filter"
} |
Expression of electrostatic field | Question: An electrostatic field is characterized by the fact that it depends only on $r$, isn't it? If that is true, I don't understand why this expression, given in cylindrical coordinates,
$${\bf E(r)}=\frac{\alpha}{z^2}{\bf u_r}-2 \frac{\alpha r}{z^3}{\bf u_z} $$
represents an electrostatic field.
Answer: An electric field is said to be static if it does not change with time, i.e. the charges that produced that field are stationary. This doesn't imply any constraint on its spatial dependence. In particular, no spherical symmetry is implicit in the definition of an electrostatic field, and that field may not depend only on $r$, as your example shows. This is common when you consider examples of fields produced by more than one point charge, e.g. an electric dipole:
$$\mathbf{E}(\mathbf{r})={3\mathbf{p}\cdot\hat{\mathbf{r}}\over 4\pi\varepsilon_0 r^3}\hat{\mathbf{r}}-{\mathbf{p}\over 4\pi\varepsilon_0 r^3}$$
depends on $\mathbf{r}$ and $\mathbf{p}$. | {
"domain": "physics.stackexchange",
"id": 9469,
"tags": "homework-and-exercises, electromagnetism, electric-fields, coordinate-systems"
} |
Training with data of different shapes. Is padding an alternative? | Question: I have a dataset of about 1k samples and I want to apply some unsupervised techniques in order to cluster and visualize these data.
The data can be interpreted as a spreadsheet table, but unfortunately it doesn't have a well-defined structure. The number of table rows varies, but the number of columns does not.
The data is structured like this:
sample 1:
{
"table1": {
"column1": [
"-",
"-",
"-"
],
"column2": [
"2017-04-16 10:00",
"2017-04-16 10:00",
"2017-04-16 10:00"
],
"column3": [
"-",
"-",
"-"
],
"column4": [
"name X",
"name Y",
"name Z"
],
"column5": [
"0",
"0",
"0"
],
}
}
sample 2:
{
"table1": {
"column1": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column2": [
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00",
"2017-04-10 22:00"
],
"column3": [
"-",
"-",
"-",
"-",
"-",
"-",
"-",
"-"
],
"column4": [
"name A",
"name Z",
"name B",
"name X",
"name C",
"name D",
"name E",
"name F"
],
"column5": [
"",
"",
"3",
"1",
"0",
"3",
"0",
"0"
]
}
}
These samples come from alarms generated by a system that collects information from many nodes (the nodes are named "name A", "name B", ...). My objective is to transform these data into a matrix (n_samples x n_features) to apply clustering and visualization algorithms.
How can I work with these data for unsupervised training? Is padding a way forward for this problem? If so, how can I apply padding in this case?
Answer: Whether or not padding is appropriate really depends on the entire structure of your dataset, how relevant the different variables/columns are, and also the type of model you want to run at the end.
Padding would be used, whereby you would have to fix the length of each sample (either to the length of the longest sample, or to a fixed length - longer samples would be trimmed or filtered somehow to fit into that length). Variables that are strings can be padded with empty strings, variables with number can be padded with zeros. There are however many other ways to pad, e.g. using the average of a numerical variable, or even model-based padding, padding with values that "make most sense" for filling gaps in that specific sample. Getting deep into it like that might more generally be called imputation, instead of padding - it is common in time series data, where gaps aren't always at one end of a sample.
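As a minimal sketch of the simplest variant described above, fixing each sample to one length by trimming or right-padding (the pad helper and its fill values are illustrative choices):

```python
def pad(seq, length, value=0):
    """Trim a sample down, or right-pad it with `value`, to a fixed length."""
    return (list(seq) + [value] * length)[:length]

print(pad(["3", "1"], 4, ""))   # shorter sample, padded: ['3', '1', '', '']
print(pad([1, 2, 3, 4, 5], 4))  # longer sample, trimmed: [1, 2, 3, 4]
```

Model-based imputation would replace the constant `value` with something computed per sample, but the fixed-length output is the same idea.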
Below I outline one approach to padding or standardizing the length of each sample. It it not specifically padding.
As you did not mention a programming language, I will give and code snippet in Python, but the same is easily achievable in other languages such as R and Julia.
The Approach
Based on the two examples you provide, it seems each example would be a calendar day, on which there are a variable number of observations.
There are also columns that are strings, and others are strings of numbers (e.g. column 5 in sample 2).
In time-series analysis in general, it is desirable to have a continuous frequency of data. That means have one day give one input. So my approach would be to make your data into a form that resembles a single input for each of the variables (i.e. columns) of each sample.
This is general approach, and you will have to try things out or do more research on how this would look in reality for your specific data at large.
Timestamps
I would use these as a kind of index, like the index in a Pandas DataFrame. One row = one timestamp. Multiple variables are then different columns.
Dealing with strings
I will assume that your dataset has a finite number of possible strings in each column. For example, that column 4 (holding names) will always hold a name from a given set. One could perform set(sample['table1']['column4']) to see which values there are (removing duplicates). Or even:
# Gather a single list containing all strings in column 4 of all samples
all_names = []
for sample in samples: # `samples` = your parsed sample dicts
    all_names.extend(sample['table1']['column4'])
# Check how many strings there are
from collections import Counter
counter = Counter(all_names)
print(len(counter)) # see how many unique values exist
print(counter) # see how many times each string appears
print(counter.most_common(5)) # see the most common 5 strings
Now assuming this shows there is a finite number (more than likely the case), You could look into using a sparse representation of each sample (that means for each day). For example, if all the words in the entire dataset were: ['hi', 'hello', 'wasup', 'yo', 'bonjour'] (duplicates removed), then for one single sample with column 4 holding e.g. ['hi', 'hello', 'yo', 'hi'], your sparse representation for this sample would be: [2, 1, 0, 1, 0], because the sample has two 'hi', one 'hello', zero 'wasup' and so on. This sparse representation would then be your single input for column 4 for the timestamp (that single sample).
It might be worth looking into something like the DictVectorizer and CountVectorizer from Scikit-Learn.
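A hand-rolled sketch of that sparse representation, reusing the example vocabulary above (in practice the vocab list would be collected from all samples, as sketched earlier):

```python
from collections import Counter

vocab = ['hi', 'hello', 'wasup', 'yo', 'bonjour']   # all distinct strings in the dataset

def sparse_counts(sample_strings, vocab):
    """One fixed-length feature vector per sample: occurrences of each vocab entry."""
    counts = Counter(sample_strings)
    return [counts[word] for word in vocab]

print(sparse_counts(['hi', 'hello', 'yo', 'hi'], vocab))   # -> [2, 1, 0, 1, 0]
```

Every sample then yields a vector of the same length regardless of how many rows it has, which is exactly what clustering algorithms need.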
Dealing with number columns
As I mentioned right at the beginning, you could pad these to a chosen length, perhaps matching the length of the string-based representation above (or not!), depending on your final model.
You can then pad the inputs with a value that makes sense for your model (see the kind of options I mentioned at the beginning of my answer).
This should land you with, once again, a single vector for the given day, containing the information in the numerical column. | {
"domain": "datascience.stackexchange",
"id": 3067,
"tags": "python, visualization, data-cleaning, data, unsupervised-learning"
} |
Let's speed that file sentence searching program | Question: Intro:
I've written a small piece of Python program which is looking for a given sentence in multiple sub directories of a given path.
I'm looking for improvements regarding the speed of my script.
Code:
from os import walk
from os.path import join
def get_magik_files(base_path):
"""
Yields each path from all the base_path subdirectories
:param base_path: this is the base path from where we'll start looking after .magik files
:return: yield full path of a .magik file
"""
for dirpath, _, filenames in walk(base_path):
for filename in [f for f in filenames if f.endswith(".magik")]:
yield join(dirpath, filename)
def search_sentence_in_file(base_path, sentence):
"""
Prints each file path, line and line content where sentence was found
:param base_path: this is the base path from where we'll start looking after .magik files
:param sentence: the sentence we're looking up for
:return: print the file path, line number and line content where sentence was found
"""
for each_magik_file in get_magik_files(base_path):
with open(each_magik_file) as magik_file:
for line_number, line in enumerate(magik_file):
if sentence in line:
print('[# FILE PATH #] {} ...\n'
'[# LINE NUMBER #] At line {}\n'
'[# LINE CONTENT #] Content: {}'.format(each_magik_file, line_number, line.strip()))
print('---------------------------------------------------------------------------------')
def main():
basepath = r'some_path'
sentence_to_search = 'some sentence'
search_sentence_in_file(basepath, sentence_to_search)
if __name__ == '__main__':
main()
Miscellaneous:
As you may have already figured out, the reason for my program being so slow resides in search_sentence_in_file(base_path, sentence), where I need to open each file, read it line by line and look for a specific sentence.
I know I could use a logging library instead of printing the results to see who matched what, but that wouldn't serve the purposes of the program. So I'm not looking for that (I'm building this to have a fast way of looking for certain classes / methods / slots definitions in multiple .magik files in a fast manner. Opening a log file won't satisfy me).
For whoever is interested in the Magik language, and as a bonus of taking your time to take a look at this question, here's a small introduction to Magik.
To sum up:
Is there any way of improving the speed of my program?
Do you have other suggestions regarding the way I'm searching for a sentence?
PS: I'm looking for answers which target a Windows environment.
Any other improvements are welcomed !
Answer: Yay, PEP 8
72 characters for docstrings, 79 for the code. The rest seems fine.
Separation of concerns
search_sentence_in_file should search, and return its results. Not print, it is the duty of the caller.
I feel it is also wrongly named, as it searches for a sentence in several files. So at least add the missing s at the end of the name. And to make it even more reusable, why not pass an iterable of filepaths (like the get_magik_files generator)?
Genericity
Besides search_sentence_in_file accepting an iterable, you could make get_magik_files more generic by passing the required extension as a parameter. This will let you extend your script to allow searching in various kinds of files.
First rewrite
from os import walk
from os.path import join, splitext
def get_files(base_path, extension=None):
"""
Yields each path from all the base_path subdirectories
:param base_path: this is the base path from where the
function start looking for relevant files
:param extension: filter files using provided extension.
If None, no filter is applied.
:return: yield full path of a requested file
"""
if extension is None:
def filter_files(filenames):
yield from filenames
else:
def filter_files(filenames):
for filename in filenames:
if splitext(filename)[1] == '.' + extension: # splitext keeps the leading dot
yield filename
for dirpath, _, filenames in walk(base_path):
for filename in filter_files(filenames):
yield join(dirpath, filename)
def search_sentence_in_files(files, sentence):
"""
Yield each file path, line and line content where
sentence was found.
:param files: iterable of files to search the sentence into
:param sentence: the sentence we're looking up for
:return: yield the file path, line number and line
content where sentence was found
"""
for filepath in files:
with open(filepath) as fp:
for line_number, line in enumerate(fp):
if sentence in line:
yield filepath, line_number, line.strip()
def main():
basepath = r'some_path'
sentence_to_search = 'some sentence'
files = get_files(basepath, 'magik')
results = search_sentence_in_files(files, sentence_to_search)
for filepath, line, content in results:
print('[# FILE PATH #]', filepath, '...')
print('[# LINE NUMBER #] At line', line)
print('[# LINE CONTENT #] Content:', content)
print('-'*80)
if __name__ == '__main__':
main()
Reusability
Your script make it hard to reuse for other purposes: different sentences, different kind of files. Better to add a CLI using argparse. Provide sensible default for your current usage but allows for customization at will.
from os import walk
from os.path import join, splitext
import argparse
def get_files(base_path, extension=None):
"""
Yields each path from all the base_path subdirectories
:param base_path: this is the base path from where the
function start looking for relevant files
:param extension: filter files using provided extension.
If None, no filter is applied.
:return: yield full path of a requested file
"""
if extension is None:
def filter_files(filenames):
yield from filenames
else:
def filter_files(filenames):
for filename in filenames:
if splitext(filename)[1] == '.' + extension: # splitext keeps the leading dot
yield filename
for dirpath, _, filenames in walk(base_path):
for filename in filter_files(filenames):
yield join(dirpath, filename)
def search_sentence_in_files(files, sentence):
"""
Yield each file path, line and line content where
sentence was found.
:param files: iterable of files to search the sentence into
:param sentence: the sentence we're looking up for
:return: yield the file path, line number and line
content where sentence was found
"""
for filepath in files:
with open(filepath) as fp:
for line_number, line in enumerate(fp):
if sentence in line:
yield filepath, line_number, line.strip()
def main(files, sentence):
results = search_sentence_in_files(files, sentence)
for filepath, line, content in results:
print('[# FILE PATH #]', filepath, '...')
print('[# LINE NUMBER #] At line', line)
print('[# LINE CONTENT #] Content:', content)
print('-'*80)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Search text in files')
parser.add_argument('sentence')
parser.add_argument('-p', '--basepath',
help='folder in wich files will be examinated',
default=r'some folder')
parser.add_argument('-e', '--extension',
help='extension of files to examine',
default='magik')
args = parser.parse_args()
files = get_files(args.basepath, args.extension)
main(files, args.sentence)
You can also allow for other improvements, such as the sentence being a regular expression. | {
"domain": "codereview.stackexchange",
"id": 23527,
"tags": "python, performance, python-3.x"
} |
Is the set of minimal DFA decidable? | Question: Let $\mathrm{MIN}_{\mathrm{DFA}}$ be the collection of all the encodings of DFAs that are minimal with respect to their number of states. I mean, if $\langle A \rangle \in \mathrm{MIN}_{\mathrm{DFA}}$ then for every DFA $B$ with fewer states than $A$, $L(A)\ne L(B)$ holds. I'm trying to figure out why $\mathrm{MIN}_{\mathrm{DFA}} \in R$. How come it is decidable?
What is it about this kind of DFA that makes it easy to decide?
Answer: Given two DFAs $A,B$, it is decidable (even efficient!) whether $L(A) = L(B)$: use the product construction to get a DFA for $L(A) \triangle L(B)$, and now use reachability analysis.
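A minimal sketch of that decision procedure (the (start, accepting, delta) triple used here to encode a DFA is just an illustrative choice):

```python
from collections import deque

def equivalent(dfa_a, dfa_b, alphabet):
    """Decide L(A) == L(B) by exploring the product automaton: the languages
    differ iff some reachable state pair is accepting in exactly one DFA,
    i.e. iff the symmetric difference automaton accepts something."""
    (sa, fa, da), (sb, fb, db) = dfa_a, dfa_b
    seen = {(sa, sb)}
    queue = deque([(sa, sb)])
    while queue:
        p, q = queue.popleft()
        if (p in fa) != (q in fb):        # accepted by exactly one DFA
            return False
        for c in alphabet:
            nxt = (da[p, c], db[q, c])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Parity DFAs over {0, 1}: "even number of 1s" vs "odd number of 1s".
delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
even = (0, {0}, delta)
odd = (0, {1}, delta)
print(equivalent(even, even, '01'))   # True
print(equivalent(even, odd, '01'))    # False
```

The reachability check runs in time proportional to the product of the two state counts, which is why the test is efficient.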
In order to determine whether $A$ is a minimal DFA, one can just try all DFAs $B$ with less states, looking for one satisfying $L(A) = L(B)$. Alternatively, there are algorithms for minimizing DFAs which are vastly more efficient than that approach. They are even more efficient if all you want to know is whether the DFA is minimal or not. | {
"domain": "cs.stackexchange",
"id": 405,
"tags": "formal-languages, computability, automata, finite-automata"
} |
Deriving the irreps of the d-orbitals under C3 and S6 operations in octahedral symmetry | Question: Contextual background: I am trying to reproduce the symmetries in the graph on this page (MO diagram for $\ce{ML6}$ complex). I have reproduced the metal s and p orbital representations as well as the linear combination of representations for the $\ce{L6}$ fragment. All that remain are the metal d orbitals.
So I know the five d-orbitals collectively reduce to the $\mathrm{e_g}$ + $\mathrm{t_{2g}}$ representation. I put my five d-orbitals at the origin and went through the operations for the reducible representation. Everything else adds up, except I can't find that $-1$ in the $C_3$ and $S_6$ operations.
Character table
$$\small\begin{array}{c|cccccccccc|cc}\hline
O_\mathrm{h} & E & 8C_3 & 6C_2 & 6C_4 & \begin{aligned}3C_2 \\ \scriptsize=C_4^2\end{aligned} & i & 6S_4 & 8S_6 & 3\sigma_\mathrm{h} & 6\sigma_\mathrm{d} & & \\ \hline
\mathrm{A_{1g}} & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & x^2+y^2+z^2 \\
\mathrm{A_{2g}} & 1 & 1 & -1 & -1 & 1 & 1 & -1 & 1 & 1 & -1 & & \\
\mathrm{E_g} & 2 & -1 & 0 & 0 & 2 & 2 & 0 & -1 & 2 & 0 & & \begin{aligned}(2z^2-x^2-y^2,\\ x^2-y^2)\,\,\,\,\,\, \end{aligned} \\
\mathrm{T_{1g}} & 3 & 0 & -1 & 1 & -1 & 3 & 1 & 0 & -1 & -1 & (R_x,R_y,R_z) & \\
\mathrm{T_{2g}} & 3 & 0 & 1 & -1 & -1 & 3 & -1 & 0 & -1 & 1 & & (xy,xz,yz) \\
\mathrm{A_{1u}} & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & & \\
\mathrm{A_{2u}} & 1 & 1 & -1 & -1 & 1 & -1 & 1 & -1 & -1 & 1 & & \\
\mathrm{E_u} & 2 & -1 & 0 & 0 & 2 & -2 & 0 & 1 & -2 & 0 & & \\
\mathrm{T_{1u}} & 3 & 0 & -1 & 1 & -1 & -3 & -1 & 0 & 1 & 1 & (x,y,z) & \\
\mathrm{T_{2u}} & 3 & 0 & 1 & -1 & -1 & -3 & 1 & 0 & 1 & -1 & & \\ \hline
\end{array}$$
Work
$$\begin{array}{c|cccccccccc} \hline
O_\mathrm{h} & E & 8C_3 & 6C_2 & 6C_4 & 3C_2 & i & 6S_4 & 8S_6 & 3\sigma_\mathrm{h} & 6\sigma_\mathrm{d} & & \\ \hline
\mathrm{E_g} & 2 & -1 & 0 & 0 & 2 & 2 & 0 & -1 & 2 & 0 \\
\mathrm{T_{2g}} & 3 & 0 & 1 & -1 & -1 & 3 & -1 & 0 & -1 & 1 \\ \hline
\Gamma_{\text{d-orbitals}} & 5 & ? & 1 & -1 & 1 & 5 & -1 & ? & 1 & 1 \\ \hline
\end{array}$$
As can be seen, everything matches up but the $C_3$ and $S_6$ operations. I tried using Molecule Viewer and rotating the d-orbitals, but I still don't see the $-1$. They all look like zero to me.
I would expect the $\mathrm d_{x^2-y^2}$ orbital to correspond to $E_g$ from the character table, but I only see it rotate into a "$\mathrm d_{y^2-z^2}$" orbital, with the orbitals along the y and z axes instead of x and y axes. The $\mathrm d_{z^2}$ orbital rotates into a "$\mathrm d_{x^2}$" orbital, lying along the x axis instead of the z axis.
Anyone know which d-orbital rotates into $-1$ for a $C_3$ operation? I assume a $-1$ for $S_6$ is the same orbital, but if not then does anyone which that one is as well?
Thank you
Answer: The $C_3$ operation rotates $x \rightarrow y$, $y \rightarrow z$ and $z \rightarrow x$.
Hence,
$$\phi_1=2z^2-(x^2+y^2)$$
rotates into
$$-(1/2)\phi_1 +(3/2)\phi_2$$ and
$$\phi_2=x^2-y^2$$
rotates into
$$-(1/2)\phi_1 -(1/2)\phi_2$$
so, in the basis
$( \phi_1, \phi_2 )$ the matrix that represents $C_3$ is
$$\left [\begin{matrix} -1/2 & -1/2 \\ 3/2 & -1/2 \end{matrix}\right]$$
Denoting this 2 by 2 matrix by M what we have is
$C_3(\phi_1 \phi_2) = (\phi_1 \phi_2)M$.
The trace of this matrix is $-1$.
This is what you find in the character table.
Since $C_3^3=I$ (identity) its eigenvalues are the cubic roots of unity:
$1, \lambda$ and $ \lambda^*$
where $\lambda = \exp(2\pi i/3)=-1/2+i\sqrt 3/2$,
and $\lambda^*$ is the complex conjugate of $\lambda $.
It is easy to find eigenvectors with the eigenvalue 1 [e.g., an s orbital or x+y+z].
You can find a linear combination of $\phi_1$ and $\phi_2$ that will be an eigenvector of $C_3$ with the eigenvalue $\lambda$ (transform into itself multiplied by $\lambda$)
and another linear combination that will have eigenvalue $\lambda^*$. Now, the matrix representing $C_3$ will be diagonal with $ \lambda$ and $\lambda^*$ along the diagonal. The trace remains $-1$.
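These trace and eigenvalue claims are easy to check numerically; here is a quick sketch using numpy (the matrix is the one derived above):

```python
import numpy as np

# The matrix representing C3 in the (phi_1, phi_2) basis, as derived above
M = np.array([[-0.5, -0.5],
              [ 1.5, -0.5]])

# The character is the trace
print(np.trace(M))  # -1.0

# C3 applied three times is the identity, so M cubed is the identity matrix
print(np.allclose(np.linalg.matrix_power(M, 3), np.eye(2)))  # True

# The eigenvalues are the two non-trivial cube roots of unity
lam = np.exp(2j * np.pi / 3)
eig = np.linalg.eigvals(M)
eig = eig[np.argsort(eig.imag)]                 # order by imaginary part for comparison
expected = np.array([np.conj(lam), lam])
print(np.allclose(eig, expected))  # True
```

The trace is basis-independent, which is why the character table entry survives the diagonalization.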
Something similar should work for $S_6$. | {
"domain": "chemistry.stackexchange",
"id": 7022,
"tags": "coordination-compounds, symmetry, group-theory"
} |
Splitting code, am I doing it right? | Question: I would like to have my code reviewed; I'm trying to get a general model I'm using for my stuff going, and I don't want to keep using wrong or inefficient code if I can avoid it.
My file structure is like this:
In conf there's a file sitting called database.conf.php with the following content:
<?php
$this->host = "localhost";
$this->user = "snowclouds_LoL";
$this->password = "****";
$this->database = "snowclouds_LoL";
?>
In functions I have my functions included like this in functions.php
<?php
//Constructor class for calling all function scripts.
require("database.php");
require("security.php");
require("show.php");
require("user.php");
require("misc.php");
?>
This is my database class:
<?php
class Database {
private $host;
private $user;
private $password;
private $database;
private $link;
public function connect(){
$this->link = mysqli_connect ($this->host, $this->user, $this->password) or die("Connection problem: " . mysqli_error());
mysqli_select_db($this->link, $this->database);
}
public function resultquery($query){
$result = mysqli_query($this->link, $query) or die(mysqli_error($this->link));
return $result;
}
public function insertquery($query){
if(mysqli_query($this->link, $query)){
return true;
}else{
return mysqli_error($this->link);
}
}
public function __construct(){
if(!file_exists("conf/database.conf.php")){
die("conf/database.conf.php file does not exist!");
}else{
include('conf/database.conf.php');
}
}
}
?>
Indenting is a little broken because pasting 4 spaces before every line is just time consuming :/
This is my index.php
$show->header();
$show->menu();
require("act.php");
$show->footer();
?>
This is my init.php
<?php
session_start();
ob_start();
require ("functions/functions.php");
//Check security for hackers.
$security = new Security();
$security->checkuri();
$user = new User();
$misc = new Misc();
$show = new Show();
?>
And in my act.php I write code like this:
if(isset($_GET['act']) & $_GET['act'] == "lostpass"){
if(isset($_SESSION['username'])){
$show->error("U bent al ingelogd met een bestaand account en u kunt uw wachtoowrd dus gewoon aanpassen op de aanpas pagina.");
}else{
include("view/lostpass.php");
}
}
The files in view contain mostly html code with an occasional while for tables.
So how is this structure?
Is it good or unbelievably bad? (I want to stay away from stuff like Smarty because I want my own model for this kind of stuff and I'm still learning).
Extra stuff:
This is my Misc class:
<?php
class Misc{
function check_email($email) {
// First, we check that there's one @ symbol,
// and that the lengths are right.
if (!ereg("^[^@]{1,64}@[^@]{1,255}$", $email)) {
// Email invalid because wrong number of characters
// in one section or wrong number of @ symbols.
return false;
}
// Split it into sections to make life easier
$email_array = explode("@", $email);
$local_array = explode(".", $email_array[0]);
for ($i = 0; $i < sizeof($local_array); $i++) {
if
(!ereg("^(([A-Za-z0-9!#$%&'*+/=?^_`{|}~-][A-Za-z0-9!#$%&
?'*+/=?^_`{|}~\.-]{0,63})|(\"[^(\\|\")]{0,62}\"))$",
$local_array[$i])) {
return false;
}
}
// Check if domain is IP. If not,
// it should be valid domain name
if (!ereg("^\[?[0-9\.]+\]?$", $email_array[1])) {
$domain_array = explode(".", $email_array[1]);
if (sizeof($domain_array) < 2) {
return false; // Not enough parts to domain
}
for ($i = 0; $i < sizeof($domain_array); $i++) {
if
(!ereg("^(([A-Za-z0-9][A-Za-z0-9-]{0,61}[A-Za-z0-9])|
?([A-Za-z0-9]+))$",
$domain_array[$i])) {
return false;
}
}
}
return true;
}
}
?>
It's just for stuff that's pretty annoying to fully put into everything each time I need it.
Security:
<?php
/**
* @author Jeffro
* @copyright 2011
*/
class Security{
var $privatekey = "thisshouldgoinaconfigfileaswell";
function mcryptexists()
{
if (function_exists("mcrypt_encrypt")){
return true;
}else{
return false;
}
}
function sslactive()
{
if ($_SERVER['HTTPS']) {
return true;
}else{
return false;
}
}
function safe_b64encode($string) {
$data = base64_encode($string);
$data = str_replace(array('+','/','='),array('-','_',''),$data);
return $data;
}
function safe_b64decode($string) {
$data = str_replace(array('-','_'),array('+','/'),$string);
$mod4 = strlen($data) % 4;
if ($mod4) {
$data .= substr('====', $mod4);
}
return base64_decode($data);
}
function encode($value){
if(!$value){return false;}
$text = $value;
$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256, MCRYPT_MODE_ECB);
$iv = mcrypt_create_iv($iv_size, MCRYPT_RAND);
$crypttext = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $this->privatekey, $text, MCRYPT_MODE_ECB, $iv);
return trim($this->safe_b64encode($crypttext));
}
function decode($value){
if(!$value){return false;}
$crypttext = $this->safe_b64decode($value);
$iv_size = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256, MCRYPT_MODE_ECB);
$iv = mcrypt_create_iv($iv_size, MCRYPT_RAND);
$decrypttext = mcrypt_decrypt(MCRYPT_RIJNDAEL_256, $this->privatekey, $crypttext, MCRYPT_MODE_ECB, $iv);
return trim($decrypttext);
}
}
?>
That's from my new project btw. (Plesk API system)
And here is my show class:
<?php
class show{
function header(){
include("view/header.php");
}
function menu(){
include("view/menu.php");
}
function footer(){
include("view/footer.php");
}
function error($message){
echo $message;
$this->footer();
die();
}
}
?>
User Class:
<?php
class User{
private $database;
function exists($user){
$this->database->connect();
if($result = $this->database->resultquery("SELECT * FROM users WHERE got_username='" . $user . "'")) {
$count = mysqli_num_rows($result);
if($count > 0){
return true;
}else{
return false;
}
}
}
function isfriend($user, $friend){
$this->database->connect();
if($result = $this->database->resultquery("SELECT * FROM friends INNER JOIN users ON friends.user_id = users.user_id WHERE users.got_username='" . $user . "' AND friends.friend_id='" . $friend . "'")) {
$count = mysqli_num_rows($result);
if($count > 0){
return true;
}else{
return false;
}
}
}
function addfriend($username, $friend){
$this->database->connect();
$user = $this->getuser($username);
$result = $this->database->insertquery("INSERT INTO friends (user_id, friend_id) VALUES ('" . $user['user_id'] . "', '" . $friend . "')");
return $result;
}
function delfriend($username, $friend){
$this->database->connect();
$user = $this->getuser($username);
$result = $this->database->insertquery("DELETE FROM friends WHERE user_id = '" . $user['user_id'] . "' AND friend_id = '" . $friend . "'");
return $result;
}
function updateuser($got_username, $eu_username, $us_username, $user_client, $play_ranked, $user_email, $user_password, $user_password_check, $comment, $ip, $date){
$this->database->connect();
$result = $this->database->insertquery("
UPDATE users SET EU_username='$eu_username', US_username='$us_username', user_client='$user_client', plays_ranked='$play_ranked', comment='$comment', user_email='$user_email', user_password='" . sha1($user_password) . "', last_login='$date', last_ip='$ip'
WHERE GoT_username='$got_username'");
return $result;
}
function updateusernopass($got_username, $eu_username, $us_username, $user_client, $play_ranked, $user_email, $comment, $ip, $date){
$this->database->connect();
$result = $this->database->insertquery("
UPDATE users SET EU_username='$eu_username', US_username='$us_username', user_client='$user_client', plays_ranked='$play_ranked', comment='$comment', user_email='$user_email', last_login='$date', last_ip='$ip'
WHERE GoT_username='$got_username'");
return $result;
}
function register($got_username, $eu_username, $us_username, $user_client, $play_ranked, $user_email, $user_password, $user_password_check, $comment, $ip, $date){
$result = $this->database->insertquery("
INSERT INTO users (GoT_username, EU_username, US_username, user_client, plays_ranked, comment, user_email, user_password, reg_date, last_login, last_ip)
VALUES ('$got_username', '$eu_username', '$us_username', '$user_client', '$play_ranked', '$comment', '$user_email', '". sha1($user_password) ."', '$date', '$date', '$ip')");
return $result;
}
function getusers(){
$this->database->connect();
$query = "SELECT * FROM users ORDER BY user_id DESC";
$result = $this->database->resultquery($query);
$array = array();
while($value = mysqli_fetch_array($result))
{
$array[] = $value;
}
return $array;
}
function inserthash($email, $hash){
$this->database->connect();
$query = "UPDATE users SET user_hash='$hash' WHERE user_email='$email'";
$result = $this->database->insertquery($query);
return $result;
}
function updatepassword($hash, $password){
$this->database->connect();
$password = sha1($password);
$query = "UPDATE users SET user_password='$password' WHERE user_hash='$hash'";
$this->database->insertquery("UPDATE users SET user_hash='' WHERE user_hash='$hash'");
$result = $this->database->insertquery($query);
return $result;
}
function existsbyemail($email){
$this->database->connect();
$query = "SELECT * FROM users WHERE user_email = '$email'";
$result = $this->database->resultquery($query);
$count = mysqli_num_rows($result);
if($count > 0){
return true;
}else{
return false;
}
}
function getuserbyemail($email){
$this->database->connect();
$query = "SELECT * FROM users WHERE user_email = '$email'";
$result = $this->database->resultquery($query);
$array = mysqli_fetch_array($result);
return $array;
}
function getuser($username){
$this->database->connect();
$query = "SELECT * FROM users WHERE GoT_username = '$username'";
$result = $this->database->resultquery($query);
$array = mysqli_fetch_array($result);
return $array;
}
function login($username, $password){
$this->database->connect();
$query = "SELECT * FROM users WHERE GoT_username = '$username' AND user_password = '". sha1($password) . "'";
$result = $this->database->resultquery($query);
$count = mysqli_num_rows($result);
if($count > 0){
return true;
}else{
return false;
}
}
function logout(){
unset($_SESSION['GoT_username']);
unset($_SESSION['users_password']);
}
public function __construct(){
$this->database = new Database();
}
}
?>
Answer: The only thing I see so far is that you have high Coupling between the Database class and the location of the config file. What you should do is either:
Create a config class that handles config parsing and setting
Or, at minimum, load the config file somewhere else
In both cases you will need to pass the parts that matter to the Database constructor. This reduces Coupling which, in turn, reduces the amount of rewrite needed during refactoring.
There is little more I can offer except questions. For example, what is Security doing? Is there only the one function? What else is declared in it? The same goes for Misc. Both of these classes sound like they don't really belong and are classes for the sake of being classes.
EDIT
... the config class sounds interesting. Do you have any good places where I should start looking for a decent way?
The Zend_Config documentation, part of the Zend Framework, would probably be a good place to see how people are using something like this.
This is my Misc class:
You could keep this I suppose, but Misc is a poor name for it. Calling some thing Misc is just asking for disorganization. If you really want this to be a function, and you can support PHP 5.3, I would wrap this in a namespace and treat the namespace similar to what most people would call a module.
Example:
<?php
namespace Utilities
{
function validateEmail( $email )
{
// Code
}
}
However, you should really consider refreshing yourself on the PHP documentation, specifically the newer stuff that's been added since 4.0. A perfect example of this is the use of the Filter extension. In your case, FILTER_VALIDATE_EMAIL will do wonders.
Security:
This class I won't really comment on, because I'm not sure of the use case or need. I'll assume it is necessary as is.
Similar to Database you have high Coupling by putting the private key directly into the class. Depending on the needs of the key, it could exist in a config file and be treated just like the Database config attributes. On initialization, you would pass the config var for passkey into Security::__constructor.
Doing so means that you could have a different passkey per user instead of a single passkey. Expanding this to a public+private key system in the future would also be easy (just add another parameter).
User:
This class is also plagued with high Coupling because of the self loading of Database. There are a few ways to handle dependency injection and I'm not going to sway you either way on this matter. However, I'd suggest you spend some time looking through the many articles your favorite search engine will provide (or Bing! and decide...lol).
I would also keep from naming functions in all lower case. It is common for people to name functions either in CamelCase or with underscores (_) between words. Not doing so can impact readability.
Aside from that, this class is the first I've seen you provide that is acting as a Model (thinking in terms of MVC). That's a good thing, so nice job. I would expand your understanding of MVC if this is the path you are headed for so that you don't fall into many of the common pitfalls.
"domain": "codereview.stackexchange",
"id": 155,
"tags": "php, classes"
} |
What is time measured against? | Question: Today I was observing a clock and its movement, every second is an exact second on every clock.
I was making a comparison between a second and a meter. I know in France there is a metal stick one meter long and is a basic measure valid for every place on earth.
So I was wondering what is time measured against?
I don't know if my question is clear enough, but how do you measure one second?
Answer: The "standard meter" in Paris you refer to is obsolete and an interesting historical relic only. It was the definition of the meter from 1889 (when the international prototype metre was sanctioned) until 1960, when the meter was redefined as a number of wavelengths of a certain emission line from krypton-86. Since 1983, the meter has been wholly defined in terms of the second: it is the length such that the universal Lorentz-covariant speed $c$ (experimentally found to be the speed of light) is precisely 299792458 meters per second. Otherwise put, it is the distance travelled by light in 1/299792458 second.
This leaves the second to be defined, as in your question's title. Currently the second is defined as (quoting from Wikipedia)
"... the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom ... at rest at a temperature of 0 K"
So it is defined in terms of a vibration frequency that can be measured relatively simply in any well kitted laboratory. Notice how the definition altogether gets rid of the need to keep a "standard length" or "standard clock" anywhere: the description suffices for anyone to reproduce the standard. | {
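Since both numbers in these definitions are exact by convention, a few derived quantities follow immediately; a small illustrative calculation:

```python
cs_hz = 9_192_631_770     # defined: periods of the Cs-133 hyperfine transition in one second
c = 299_792_458           # defined: metres travelled by light in one second

period_s = 1 / cs_hz      # duration of a single Cs oscillation, about 1.1e-10 s
wavelength_m = c / cs_hz  # the clock transition is microwave radiation, about 3.3 cm

print(period_s, wavelength_m)
```

The ~9.2 GHz transition falls in the microwave range, which is part of why it makes a practical laboratory standard: it can be counted directly with electronics.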
"domain": "physics.stackexchange",
"id": 25732,
"tags": "time, conventions, si-units, metrology"
} |
Efficient Representation of Qubits on a Digital Simulator | Question: I was wondering about quantum simulators recently, and I was thinking about how a qubit could be represented on a digital machine. This Stack Overflow post seems to say that one will need at least $a2^n$ bits to represent a qubit (albeit partially), where $n$ is the number of qubits represented, and $a$ is the bit size of the complex number used. This has to do with the matrix representation of a qubit. However, I was thinking of a more efficient way of representing a qubit as a point on a Bloch Sphere. This way, a qubit object would only need to store two units of information. For example, in C, this would look like:
#include <stdlib.h>
#include <math.h>
#define QUBIT_PI (1 << sizeof(short) * 8 - 1)
typedef struct QUBIT_STRUCT
{
unsigned short theta;
unsigned short phi;
} qubit_t;
qubit_t *qubit_from_angles(double theta, double phi)
{
qubit_t *ret = malloc(sizeof(struct QUBIT_STRUCT));
ret->theta = theta / M_PI * QUBIT_PI;
ret->phi = phi / M_PI * QUBIT_PI;
return ret;
}
and a quantum gate would be like:
qubit_t *qubit_X(qubit_t *qubit)
{
qubit->theta = QUBIT_PI - qubit->theta;
qubit->phi = -qubit->phi;
return qubit;
}
Is there anything wrong with storing qubit representations in this manner? Are there any quantum simulators that store qubits like this?
Answer: Be careful to differentiate a qubit from a quantum system. As seen here, a qubit can be written as:
$$|\psi\rangle=\cos\left(\frac{\theta}{2}\right)\begin{pmatrix}1\\0\end{pmatrix}+\mathrm{e}^{\mathrm{i}\varphi}\sin\left(\frac{\theta}{2}\right)\begin{pmatrix}0\\1\end{pmatrix}$$
As such, two real numbers seems to be enough to represent a single qubit.
But this doesn't feel right.
What about the following qubit?
$$|\chi\rangle=\frac{\mathrm{i}}{\sqrt{2}}\begin{pmatrix}1\\0\end{pmatrix}+\frac{\mathrm{i}}{\sqrt{2}}\begin{pmatrix}0\\1\end{pmatrix}$$
As an unitary vector of $\mathbb{C}^2$, it is a perfectly valid qubit, but still can't be represented using the previous representation. However, we can write it as:
$$|\chi\rangle=\mathrm{i}\left(\cos\left(\frac{\pi}{4}\right)\begin{pmatrix}1\\0\end{pmatrix}+\sin\left(\frac{\pi}{4}\right)\begin{pmatrix}0\\1\end{pmatrix}\right)$$
The $\mathrm{i}$ factor is a global phase, and can be safely removed. Two states equal up to a global phase behave the exact same way, no matter what your experiment is. Also, if you add another qubit, the global phases will multiply into another global phase. All in all, it seems that $2n$ real numbers are enough to represent an $n$-qubit system.
But this doesn't feel right.
If the previous fact was true, there would be no need for quantum computers at all, let's just simulate them!
The problem is that the previous fact is true only when you don't consider entanglement. For instance, the following $2$-qubit system:
$$|\varphi\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\0\\1\end{pmatrix}$$
As an unitary vector of $\mathbb{C}^{2^2}$, it is a valid $2$-qubit system. But what are the individual representations of its qubits? There are a lot of proofs that there is no way to write this system as:
$$|\varphi\rangle=|\alpha\rangle\otimes|\beta\rangle$$
Thus, the only way to represent this system is to store its $2^2$ components.¹ More generally, you have no choice but to store the $2^n$ components of a quantum system to perfectly simulate a general quantum algorithm. Each of these components being complex, this adds up to $a2^n$ bytes, where $a$ is the number of bytes needed to represent a complex number (though a small optimization may be applied using the normalization constraint).
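The impossibility of writing $|\varphi\rangle = |\alpha\rangle\otimes|\beta\rangle$ can also be checked numerically: reshape the 4-component vector into a 2×2 matrix; a product state always yields a rank-1 (outer-product) matrix, so rank 2 proves entanglement. A sketch with numpy:

```python
import numpy as np

# |phi> = (|00> + |11>)/sqrt(2), stored via its 4 complex components
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# |a> (x) |b| reshaped to 2x2 is the outer product a b^T, which has rank 1,
# so rank 2 means no product decomposition exists
rank = np.linalg.matrix_rank(phi.reshape(2, 2))
print(rank)  # 2 -> entangled

# Contrast with a separable state, e.g. |+> (x) |0>
plus_zero = np.kron(np.array([1, 1]) / np.sqrt(2), np.array([1, 0]))
print(np.linalg.matrix_rank(plus_zero.reshape(2, 2)))  # 1 -> separable
```

This rank is exactly the Schmidt rank of the bipartite state.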
The same fact applies to quantum gates: you have no choice but to store the $4^n$ complex components that defines it (though a small optimization may be performed using the unitary constraint).
Conclusion
All in all, it is possible to represent qubits the way you've described it. As long as you work with single-qubit states, your approach is valid. In fact, as long as your system is separable (that is, it can be written as a tensor product of single-qubit states, i.e. there's no entanglement), your approach would work.
But there is no quantum algorithm that provides a speedup that doesn't use entanglement. As such, your only choice is to store all of the complex components of your system.
1. It may be possible that this fact is not true, since I think it is possible to efficiently represent and compute with gates from the Clifford group only, which is the case of this state. Point is, a general quantum state must be represented with its $2^n$ components. | {
"domain": "quantumcomputing.stackexchange",
"id": 3616,
"tags": "programming, simulation, bloch-sphere"
} |
Should the total energy of a particle increase when work is done on that particle? | Question: Should the total energy of a particle (kinetic energy + potential energy) increase when positive work is done on that particle?
If the answer is yes, then why the total energy does not increase when work is done on a falling object by gravitational force?
Answer: Positive work done on a particle by a non-conservative (external) force does increase the particle's total mechanical energy ($KE+PE$), and the work done by such a force depends on the path followed by the particle. Gravity, on the other hand, is a conservative force: the work it does is path-independent and is already accounted for by the potential energy. The kinetic energy gained by a falling object is exactly matched by the drop in its potential energy, so the total $KE+PE$ stays constant.
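A quick numerical illustration for an object falling freely from rest (all values — 1 kg, 9.81 m/s², 10 m — are assumed for the example):

```python
m, g, h0 = 1.0, 9.81, 10.0               # mass (kg), gravity (m/s^2), drop height (m)

for d in (0.0, 2.5, 5.0, 10.0):          # distance fallen so far (m)
    work_by_gravity = m * g * d          # positive work done on the object (J)
    ke = 0.5 * m * (2 * g * d)           # kinetic energy, using v^2 = 2*g*d (J)
    pe = m * g * (h0 - d)                # potential energy left, zero at the ground (J)
    print(f"d={d:5.1f} m  W={work_by_gravity:6.2f} J  KE+PE={ke + pe:6.2f} J")
```

The positive work grows as the object falls and shows up entirely as kinetic energy, but the potential energy drops by exactly the same amount, so KE + PE stays at m·g·h0 = 98.1 J the whole way down.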
"domain": "physics.stackexchange",
"id": 67765,
"tags": "newtonian-mechanics, energy, work"
} |
Local Lorentz transformations | Question: If $\gamma^m$ denotes a tangent space gamma matrix, and $\gamma^\mu$ denotes a curved space gamma matrix, then they are related by
$$\gamma^\mu(x) = \gamma^m e_{m}^{\mu}(x)$$
where $e_{m}^{\mu}(x)$ is the vielbein. Under a local Lorentz transformation,
$$\delta_{lL}e_m^\mu(x) = {\lambda_m}^n(x) e_n^\mu(x)$$
Also, the gamma matrices with flat space indices satisfy $\{\gamma^m, \gamma^n\} = 2\eta^{mn}$ whereas the gamma matrices with curved space indices satisfy $\{\gamma^\mu(x), \gamma^\nu(x)\} = 2g^{\mu\nu}(x)$.
To my mind, $\gamma^m$ and $e_{m}^{\mu}(x)$ should transform oppositely under local Lorentz transformations, and hence $\gamma^\mu(x)$ should stay inert.
But then so should arbitrary products of curved space gamma matrices, in particular something like $\gamma^{\mu\nu\rho}$. However, when proving that the spin-3/2 Rarita-Schwinger Lagrangian density is invariant under local Lorentz transformations, one does encounter a term $\bar{\psi}_\mu \gamma^{\mu\nu\rho}D_{\nu}\psi_\rho$, which would involve $\delta_{lL}(\gamma^{\mu\nu\rho})$ (among other things). Is such a term zero?
What is the flaw in this hypothesis, if any?
EDIT: It is not true that $\gamma^m$ and $e_{m}^{\mu}$ transform oppositely under local Lorentz transformations. In fact, as the answers below show, $\gamma^m$ does not transform at all.
Answer: Comments to the question (v1):
There are three types of indices at play: (i) spinor indices, (ii) flat (vector) indices, and (iii) curved (vector) indices.
The gamma matrices with flat indices are constants. They don't transform under local Lorentz transformations (LLTs). They can be viewed as intertwiners between spinor indices and flat indices. (LLTs are also discussed in e.g. this Phys.SE post.)
Spinor indices and flat indices of (1) vielbeins and (2) spinor/vector/tensor fields (such as, e.g., the Rarita-Schwinger field) transform under LLTs, but not curved indices.
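To illustrate the point that the flat-index gamma matrices are fixed numerical constants, one can verify the Clifford algebra $\{\gamma^m, \gamma^n\} = 2\eta^{mn}\mathbb{1}$ directly. This sketch assumes the Dirac representation and the $(+,-,-,-)$ signature (both choices are conventions, not fixed by the post):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, s_i], [-s_i, 0]]
gammas = [np.block([[I2, Z2], [Z2, -I2]])]
gammas += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus flat metric (assumed convention)

for m in range(4):
    for n in range(4):
        anti = gammas[m] @ gammas[n] + gammas[n] @ gammas[m]
        assert np.allclose(anti, 2 * eta[m, n] * np.eye(4))
print("Clifford algebra verified")
```

A local Lorentz transformation only reshuffles the vielbein; these numerical matrices stay the same at every spacetime point.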
One may show that the Rarita-Schwinger Lagrangian density
$$\tag{A} {\cal L}_{RS}(e,\psi)~=~\bar{\psi}_\mu \gamma^{\mu\nu\rho}D_{\nu}\psi_\rho$$
is invariant under LLTs. Be aware that the spinor indices are implicitly understood in eq. (A). | {
"domain": "physics.stackexchange",
"id": 26279,
"tags": "general-relativity, differential-geometry, spinors, supergravity, dirac-matrices"
} |
Is heat from a stovetop transferred through convection, radiation, or conduction? | Question: It doesn't appear to be convection, as there are no moving objects (or are there?); probably not radiation, so is it conduction?
I am confused because it seems that the molecules in the stove do not move and therefore don't conduct.
Answer: When you put the pot on the stove, the heat from the stove is somehow getting to the pot, which gets hot.
The pot and the stove are obviously in contact with each other. Therefore conduction plays a role here. If you have an old pot, with a warped bottom, it will heat up slower, because the contact surface between pot and stove is smaller.
When you hold your hand over the stove (not touching it), you can feel the heat. The air above the stove is heated and, because it is a gas, moves upward. This is convection. The bottom of the pot and the surface of the stove are not 100% flat. That's why there will be little pockets of air underneath the pot, even if you place it on the stove.
If you heat up the stove as much as you can, it will glow red. This is a visible sign of radiation. I'd assume that even if not visibly glowing, the stove radiates heat, too. In those areas where the stove and the bottom of the pot are not in contact, radiation transports heat from the stove to the pot.
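To get a rough feel for the relative sizes of these mechanisms, here is a back-of-the-envelope sketch; every number in it (temperatures, emissivity, gap width) is an assumed, order-of-magnitude value, not a measurement:

```python
# Stefan-Boltzmann radiation from a ~300 C stove surface to a ~100 C pot
# bottom, per square metre (emissivity 0.9 assumed):
sigma, eps = 5.67e-8, 0.9                      # W/m^2/K^4, dimensionless
T_stove, T_pot = 300 + 273.15, 100 + 273.15    # kelvin
q_rad = eps * sigma * (T_stove**4 - T_pot**4)  # W/m^2

# Conduction across an assumed 0.1 mm air pocket under the pot
# (thermal conductivity of air ~0.03 W/m/K), same temperature difference:
k_air, gap = 0.03, 1e-4
q_cond = k_air * (T_stove - T_pot) / gap       # W/m^2

print(f"radiation : {q_rad:8.0f} W/m^2")
print(f"conduction: {q_cond:8.0f} W/m^2")
```

Even across a thin trapped air layer, conduction comes out roughly an order of magnitude larger than radiation here; through the direct metal-to-metal contact spots it is larger still.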
As you can see, all 3 kinds of heat transfer are involved. Conduction certainly does the most part, which is why you want to have pots with flat bottoms, to make best contact with the stove. | {
"domain": "physics.stackexchange",
"id": 21501,
"tags": "thermodynamics, radiation, convection, heat-conduction"
} |
Compiling win_ros with msvc2010 x64 | Question:
Hi,
So i found myself in the situation where i need to compile win_ros in x64.
And it seems that i've made some progress and succeeded in that task. I will try to outline the main steps i took.
I installed 64 bit version of Python2.7.5 and then went down the list of packages given here:
www.ros.org/wiki/win_python_build_tools/groovy
and installed 64 bit version where possible. In many of those packages i had to manually specify the Python installation directory because it couldn't find it in the registry. The only package i skipped was PySvn but that didn't seem to matter.
After that i compiled dependencies from sources in msvc2010 for x64:
boost 1.47
bzip2 1.0.6
log4cxx 0.10.0
tinyxml 2.6.2 (get it from sourceforge)
tinyxml gave me some trouble down the road so i had to go back and recompile it with the /MD (Multi-threaded DLL) flag in C++/Code Generation, and also make sure you add TIXML_USE_STL to Preprocessor Definitions
eigen3 (i just copied what i found in the include folder in rosdeps\groovy\x86)
i put all the compiled dependencies in c:/rosdeps/groovy/x86 and added c:/rosdeps/groovy/x86/bin and c:/rosdeps/groovy/x86/lib to $PATH. I copied c:/rosdeps/groovy/x86/shared from deps provided for x86 compilation. I also copied Boost.cmake and Boost-debug.cmake to c:/rosdeps/groovy/x86/lib (also taken from x86 deps). I had to modify Boost-debug.cmake to make sure that the names of the boost libraries correspond to the names of newly compiled boost libraries.
then i followed the same procedure for compiling win_ros:
www.ros.org/wiki/win_ros/groovy/Msvc%20Compiled%20SDK
I've made changes to setup.bat in ws folder to use msvc2010x64 toolset:
@call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" x64
And i think that's it. I hope i'm not forgetting anything. I will try to compile win_ros in Debug and also will try to compile it with msvc2012.
Originally posted by Eugene Simine on ROS Answers with karma: 51 on 2013-07-30
Post score: 0
Answer:
Can you join us on the https://groups.google.com/forum/#!forum/win-ros mailing list? Can be alot more interactive there and I'd love to hear how you go and possibly integrate this properly into win_ros.
Originally posted by Daniel Stonier with karma: 3170 on 2013-08-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15102,
"tags": "ros, compile, win-ros"
} |
How do visual obstructions impact the ability to localize using LIDAR? | Question: If a street is extremely crowded to an extent that the terrain is not visible from the point of view of the LIDAR (e.g. in google's self driving car), can it still manage to localize itself and continue to operate? I recall Sebastian Thrun saying that Google's car cannot navigate through snow filled roads since the onboard LIDAR cannot map the terrain beneath the snow (e.g. here).
[Edit : Based on the comments] Clarifying the context, here "not visible" means there is an obstacle between the LIDAR and the terrain
Answer: The snow problem you're referring to seems to be that the car can no longer tell where the edge of the road is.
Even in heavy traffic, a good driver still maintains enough space from the car in front that plenty of road is visible, and the side of the road is identifiable from LiDAR. This is not common practice, but I imagine autonomous cars would be programmed to do so. In that case, plenty of road is always visible for the car.
Google's car seems to have no problems in heavy traffic. | {
"domain": "robotics.stackexchange",
"id": 197,
"tags": "ugv, lidar"
} |
Pairing Labels with Images when building a UI | Question: In a prototype 4X space game that I am working on recently, I have two different scroll panes to show the ships and the stars of the player. The ships are displayed in a vertical scroll pane on the right side of the screen, and the stars are displayed in a horizontal scroll pane running along the bottom. Here is a picture for the sake of clarity:
With the vertical scroll pane, displaying the labels along with the images is very straightforward. When iterating over the list of ships, a label is added for each ship, then a row, then the ship image.
The horizontal scroll pane presents a problem, however. I want to have the label displayed above the corresponding image. Accomplishing this is a bit more complicated than simply adding a row between each element in the table.
I thought about having two lists, iterating over the first list and getting the correct element from the second list based on the index. Instead of doing this, I went with a Map<Label, Image>. I'm not sure about this decision, and would love feedback on this approach. It feels wrong to be using a Map here, because I will never be using it to look up the values based on the keys.
First, I have this method that creates the contents that will be used when building the scroll pane:
private void createContents() {
this.contents = new HashMap<Label, Image>();
for (Star star : this.game.getPlayerStars()) {
Image starImage = new Image(this.libGDXGame.starTextures.get(star.type));
starImage.addListener(new StarClickListener(star) {
@Override
public void clicked(InputEvent event, float x, float y) {
StarScrollPane.this.gameScreen.playerClickedStar(star);
}
});
Label starLabel = new Label(star.type.name(), this.libGDXGame.skin);
this.contents.put(starLabel, starImage);
}
}
Then I have another method that actually creates the table and adds the contents to it like this:
for (Label label : this.contents.keySet()) {
this.starTable.add(label);
}
this.starTable.row();
for (Image image : this.contents.values()) {
this.starTable.add(image).width(width).height(height);
}
By adding all of the labels to the table, then one row, then all of the images, they line up perfectly.
Answer: A HashMap does not guarantee any order whatsoever, so that is a bad idea to use.
Your Image and Label belongs together, so make a class where they actually do belong together.
public class StarView {
    final Label label;   // package-private so the outer code can read them directly
    final Image image;
    // constructor and stuff...
}
private void createContents() {
this.contents = new ArrayList<StarView>();
for (Star star : this.game.getPlayerStars()) {
Image starImage = new Image(this.libGDXGame.starTextures.get(star.type));
starImage.addListener(new StarClickListener(star) {
@Override
public void clicked(InputEvent event, float x, float y) {
StarScrollPane.this.gameScreen.playerClickedStar(star);
}
});
Label starLabel = new Label(star.type.name(), this.libGDXGame.skin);
this.contents.add(new StarView(starLabel, starImage));
}
for (StarView starView : this.contents) {
this.starTable.add(starView.label);
}
this.starTable.row();
for (StarView starView : this.contents) {
this.starTable.add(starView.image).width(width).height(height);
}
}
This could be combined with @Vogel's suggestion about changing the layout manager. Or you could make the StarView extend Actor and put it in a horizontal group. | {
"domain": "codereview.stackexchange",
"id": 17083,
"tags": "java, game, gui, libgdx"
} |
What is the flow of data from a wifi receiver to the application layer? | Question: I am not sure if I should ask this here but I am researching to build a receiver similar to a wifi one. I don't know here to look to find the flow that the data goes through.
For example, what I believe right now is that
the data stream gets captured by the antenna.
Now you have data to work with and it either gets put into packets in the receiver or in the computer.
Once in the computer, I assume it goes through a device driver for the wifi (usb for example).
Then from there I don't know where it goes.
If you can point me to a resource that would help with creating and programming a receiver for a different spectrum, that would be awesome. Thanks
Answer:
the data stream gets captured by the antenna.
No, that's not right. The antenna doesn't "capture data"; it just converts electromagnetic waves to electrical signal that your receiver can work with. It's still analog RF oscillations, and not digital "data".
The receiver then goes and converts that RF to baseband, i.e. moves it from 2.4 GHz to 0 Hz, for example, resulting in analog complex baseband signal.
That is then simply sampled (like a microphone-generated electrical signal with a sound card, but thousands of times faster) at regular intervals: it now becomes a digital signal, i.e. an "infinite" sequence of complex numbers in a chip.
These numbers are then analyzed by algorithms that – all within your Wifi chip –
look for the presence of a Wifi transmission
recognize the start time and exact frequency of that
correct that
divide the result into many parallel, smaller chunks of signal
on each of them, correct influences of the transmission over the air
then, decide which symbol (i.e. a complex number) was originally sent
convert the resulting symbol stream to a stream of code bits
process the stream of bits with an error-correcting code
pass the resulting info bits on to the next layer
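The downconversion, sampling, and symbol-decision steps above can be sketched numerically with a toy BPSK signal. Everything below (the normalized carrier, samples per symbol, the 5-tap filter) is an illustrative assumption, nothing Wifi-specific:

```python
import numpy as np

# Toy parameters: BPSK at a normalized carrier of 0.1 cycles/sample,
# 50 samples per symbol (assumed for the demo, not real Wifi numbers)
fc, sps = 0.1, 50
bits = np.array([1, 0, 1, 1, 0])
symbols = 2 * bits - 1                                 # map {0,1} -> {-1,+1}

n = np.arange(len(bits) * sps)
baseband_tx = np.repeat(symbols, sps)                  # rectangular pulses
passband = baseband_tx * np.cos(2 * np.pi * fc * n)    # the "RF" signal

# Receiver: mix down to 0 Hz -> complex baseband ...
mixed = passband * np.exp(-2j * np.pi * fc * n)
# ... then low-pass: a 5-tap moving average spans one period of the
# unwanted 2*fc image here, so it cancels it exactly within a symbol
lpf = np.convolve(mixed, np.ones(5) / 5, mode='same')

# Decide the symbol at each symbol's midpoint
decisions = np.sign(lpf[sps // 2::sps].real)
recovered = (decisions > 0).astype(int)
print(recovered)  # -> [1 0 1 1 0]
```

The filter length was chosen so it covers exactly one period of the $2f_c$ mixing image; a real receiver would use a proper matched filter and timing recovery.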
That next layer typically is still on the same IC; it now understands what bits belong to a data packet, checks whether the error correction has been successful (checksum), checks whether the packet is relevant to the receiver at all, and then hands it off to the host (over whatever link you use – USB, PCIe, SPI...).
There, a wifi driver takes the "surviving" packets and analyzes them to see whether they contain data that is of any relevance to the network stack of your computer.
Then from there I don't know where it goes.
Network stack.
If you can point me to a resource that would help with creating and programming a receiver for a different spectrum
You need to learn about Software-Defined Radio, then (book example). That will make little sense if you haven't had basics of communications; the book I linked to tries to bring these basics to you, but it's certainly not the only book. If you're currently enrolled at university, you'd want to study their course on "basics of communications technology" or so.
Your Wifi card will not help you very much - it's not meant to be anything other than a wifi receiver.
So, really, the hard part is not so much letting the data flow – your network stack of the operating system you're using will take care of that – but coming up with a receiver, which in essence is nothing but an estimator for the data sent by someone far away, whose analog signals were subject to a fiercely antagonistic channel. | {
"domain": "dsp.stackexchange",
"id": 8344,
"tags": "digital-communications"
} |
NL-Hardness of Target | Question: When revising for an upcoming exam in complexity theory, I came across this problem on the final part of a question, which I was unable to solve:
$\mathrm{TARGET} = \{\langle G, t\rangle : t \text{ is reachable from every other node in } G\}$
Now we are allowed to use any standardly known problems that are NL-Hard in this part of the question. I considered between reducing from $REACH$ or $\overline{REACH}$, its complement - where $REACH$ is the standard NL-Complete problem. I thought perhaps $\overline{REACH}$ would reduce more naturally to $TARGET$ as the negation of $REACH$'s membership logical formula would be "$<G,s,t>$ where for all paths $P$ originating from $s$, $t$ is not on $P$".
However, I did not get very far from here. Then again, it is quite late in the evening and perhaps I am missing an obvious reduction here.
Many thanks for any hints and pointers!
Answer: Let us reduce REACH to TARGET. Given an instance $(G,s,t)$ of REACH, add edges from all nodes other than $s$ to $s$ to form a new graph $G'$. If $t$ is reachable from $s$ in $G$ then it is reachable from all other nodes in $G'$ using the new edges. Conversely, if $t$ is reachable from all other nodes in $G'$, then in particular it is reachable from $s$ in $G'$. Even if this path uses any of the new edges, all they can do is bring it back to the starting point, and so $t$ is reachable from $s$ already in $G$.
Altogether, we have shown that $(G,s,t)$ is a yes instance of REACH iff $(G',t)$ is a yes instance of TARGET. | {
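As a sanity check, the reduction is easy to brute-force on small graphs. The helper functions and adjacency-dict encoding below are illustrative, not part of the original answer; note the key point of the proof: the added edges all point to $s$, so after a path's last visit to $s$ it uses only original edges.

```python
def reachable(adj, s, t):
    # standard DFS: is t reachable from s?
    seen, stack = set(), [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj.get(u, []))
    return False

def is_target(adj, nodes, t):
    # TARGET: t is reachable from every other node
    return all(reachable(adj, v, t) for v in nodes if v != t)

def reduce_reach_to_target(adj, nodes, s):
    # build G': add an edge v -> s from every node v != s
    adj2 = {v: list(adj.get(v, [])) for v in nodes}
    for v in nodes:
        if v != s:
            adj2[v].append(s)
    return adj2

# one concrete instance: s = 0, t = 2, path 0 -> 1 -> 2 exists
G = {0: [1], 1: [2], 2: []}
G2 = reduce_reach_to_target(G, [0, 1, 2], 0)
print(reachable(G, 0, 2), is_target(G2, [0, 1, 2], 2))  # -> True True
```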
"domain": "cs.stackexchange",
"id": 14140,
"tags": "complexity-theory, reductions, complexity-classes, space-complexity"
} |
robot_pose_ekf hardcoded topics and tf | Question:
Why are "odom", "imu_data" and "base_footprint" hardcoded in robot_pose_ekf? Why not pass these as parameters similar to "odom_combined"? see below:
nh_private.param("input_odom", input_odom_, std::string("odom"));
nh_private.param("input_imu", input_imu_, std::string("imu_data"));
nh_private.param("robot_frame", robot_frame_, std::string("base_footprint"));
The above parameters are not currently defined in robot_pose_ekf.
Originally posted by isura on ROS Answers with karma: 403 on 2013-01-15
Post score: 1
Answer:
Edit #1:
I think you are correct about base_footprint. I was a bit surprised to see it was not a parameter. You should file an enhancement ticket.
On the other hand, topics such as odom, imu_data and vo can be remapped on the command line and in roslaunch. See here for roslaunch remapping and here for command line remapping.
The original answer below does not answer the question.
Those are the default values of the parameters. You can still pass in these parameters and the default values will be overwritten. See the parameter server documentation and the C++ API.
Originally posted by piyushk with karma: 2871 on 2013-01-15
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by isura on 2013-01-15:
Those parameters are not defined in robot_pose_ekf. The code I posted was my suggestion.
Comment by piyushk on 2013-01-16:
I missed the last line of your question. See updated answer.
Comment by isura on 2013-01-16:
Thanks piyushk! actually the last line was an edit to clarify my question, I forgot to add an EDIT: mark above it. | {
"domain": "robotics.stackexchange",
"id": 12440,
"tags": "imu, navigation, odometry, parameters, robot-pose-ekf"
} |
Centripetal force disappearing when switching between inertial frames | Question: Suppose we have a particle moving in a circular motion governed by the equation $\vec{r} = \cos(t) \hat{i} + \sin(t) \hat{j}$, then we notice that velocity at $t=0$ is given as: $\vec{\dot{r}} = \hat{j}$, from this we can write centripetal acceleration as:
$$ \ddot{r_c} = \frac{| \hat{j}|^2}{ |\vec{r}|}$$
by the formula:
$$ \vec{\ddot{r_c} } = \frac{|\vec{\dot{r} }|^2}{|\vec{r}|}$$
But suppose we have a particle moving with velocity $\hat{j}$, located somewhere; in this particle's frame the velocity of the particle in circular motion would be zero, and hence the centripetal acceleration of the particle in circular motion would be zero.
So switching between frames seems to have cancelled out the force. This leads me to the question: in which frame do we put the velocity into the centripetal acceleration formula?
Answer:
from this we can write centripetal acceleration as:
$$ \ddot{r_c} = \frac{| \hat{j}|^2}{ |\vec{r}|}$$
by the formula:
$$ \vec{\ddot{r_c} } = \frac{|\vec{\dot{r} }|^2}{|\vec{r}|}$$
These formulas are not general formulas but only apply for the specific case of uniform circular motion. While $\vec{r} = \cos(t) \hat{i} + \sin(t) \hat{j}$ is uniform circular motion $\vec{R} = \cos(t) \hat{i} + (\sin(t)+t) \hat{j}$ is not. Therefore, the equations for uniform circular motion do not apply in the transformed frame since the motion is not uniform circular motion in that frame.
Note, although the motion is not uniform circular motion and therefore the acceleration is not centripetal, both frames do agree on the magnitude and the direction of the acceleration itself. So it is not that the force disappears, just that calling the force "centripetal" no longer makes sense. | {
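A quick symbolic check (using sympy; the boost construction below is just a sketch of the scenario in the question) confirms that both frames agree on the acceleration vector, even though only the original frame exhibits uniform circular motion:

```python
import sympy as sp

t = sp.symbols('t', real=True)
# The circular motion from the question, on the unit circle
r = sp.Matrix([sp.cos(t), sp.sin(t)])
# Same motion seen from a frame moving with constant velocity j-hat:
# a Galilean boost subtracts t * j-hat from the position
R = r - sp.Matrix([0, t])

a_r = r.diff(t, 2)   # acceleration in the original frame
a_R = R.diff(t, 2)   # acceleration in the boosted frame

# In the boosted frame R(t) is not uniform circular motion, so
# |dR/dt|^2 / |R| is NOT its acceleration; the acceleration itself,
# however, is identical in both frames:
print(sp.simplify(a_r - a_R))  # -> Matrix([[0], [0]])
```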
"domain": "physics.stackexchange",
"id": 77754,
"tags": "forces, centripetal-force"
} |
Why use two tower cranes on one construction site? | Question: I am fascinated by construction. My university is building a new facility, and they are using 2 tower cranes in the construction.
Why would a contractor choose to use 2 cranes on one construction project? Is it simply a matter of saving time by having the two cranes work simultaneously? What are some other considerations that might be taken into account?
EDIT: after checking your answers, I agree that it is most likely that using two cranes in this situation was mostly done to increase productivity/throughput, and the positioning of the cranes depends on the building design and ease of loading materials to be transported by the crane.
Answer: Tower cranes are a very significant cost on any construction site, and so builders will usually only use as many cranes as is necessary.
I work for a construction scheduling company, and one of the biggest factors for deciding on the number of cranes and their location is access. Obviously the crane needs to be able to access all parts of the building during construction, but the placement of the crane is also often determined by access to roads and materials on the ground. Other big restrictions such as site boundaries and roads also often come into play in urban areas. If the access limitations mean you need two cranes, then two cranes will be used.
Secondly, as you noted, is time. Time is money on a site, and if having a second crane can save you a couple of months off your finish date, then it is usually a cost effective idea. Other times, there may be a hard deadline for a project finish, in these cases more than one crane may be used to accelerate the program at the request of the client despite it being more expensive. | {
"domain": "engineering.stackexchange",
"id": 1311,
"tags": "civil-engineering, construction-management"
} |
How does projector Monte Carlo method work? | Question: Projector Monte Carlo states that if we have a trial wavefunction $|\phi\rangle$ which is not orthogonal to true ground-state $|\psi\rangle$ of the system then application of a projector
$$P=\exp{(-\tau H)}$$
$m$ times on $|\phi\rangle$ while $m$ goes to infinity will give us true ground state $|\psi\rangle$ i.e.
$$|\psi\rangle=\lim_{m\to\infty}P^m|\phi\rangle$$
My question is: how does the projector $P$ project out the true ground state when it is applied $m\to\infty$ times?
Answer: The projector operator is given by,
\begin{equation}
\hat{P} = e^{-\tau \hat{H}}
\end{equation}
We have to show that $ | \phi^{m} \rangle = (\hat{P})^{m} | \phi \rangle $ in the large $m$ limit gives,
$ | \phi^{m} \rangle = | \psi \rangle $.
\begin{align}
| \phi^{m} \rangle = (\hat{P})^{m} | \phi \rangle
& = \displaystyle\lim_{m \to \infty} \sum_{\alpha} e^{-m \tau \hat{H}} | \alpha \rangle \langle \alpha | \phi \rangle \\
& = \displaystyle\lim_{m \to \infty} \sum_{\alpha} e^{-m \tau E_{\alpha}} | \alpha \rangle \langle \alpha | \phi \rangle \\
& = | \psi \rangle
\end{align}
In the limit of large $m$ and fixed $\tau$, the projection will wipe out all the states with high energy and only the ground state (lowest energy, i.e. $E_{0}$) will survive. We have assumed that the ground state is non-degenerate (unique!). In the case of degenerate vacua, there is a critical slowdown.
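A small numerical sketch of this projection, with an assumed random $6\times 6$ symmetric matrix standing in for $H$ (the exact diagonalization is used only to build $e^{-\tau H}$ and to compare against the true ground state):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))
H = (H + H.T) / 2                  # random real symmetric "Hamiltonian"

E, V = np.linalg.eigh(H)           # exact spectrum, for comparison
ground = V[:, 0]                   # exact ground state (lowest eigenvalue)

tau = 0.5
# exp(-tau H) built in the eigenbasis (a stand-in for a matrix exponential)
P = V @ np.diag(np.exp(-tau * E)) @ V.T

phi = rng.standard_normal(6)       # trial state; generically <phi|psi> != 0
for _ in range(200):               # apply P repeatedly, renormalizing
    phi = P @ phi
    phi /= np.linalg.norm(phi)

# overlap with the true ground state; converges to 1 as m grows
print(abs(ground @ phi))
```

Renormalizing after each application keeps the amplitudes finite; the excited-state components decay like $e^{-m\tau(E_\alpha - E_0)}$ relative to the ground state.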
"domain": "physics.stackexchange",
"id": 51449,
"tags": "statistical-mechanics, condensed-matter, computational-physics, many-body"
} |
What is, diagrammatically, the 2-vertex $\Gamma^{(2)}$? | Question: I know that the 2-vertex $\Gamma^{(2)}$ is the second derivative of the effective action, but I fail to see what it is diagrammatically: is it the truncated 1PI diagram? The non-truncated one?
If this helps, the trouble comes from the identity, in massless $\lambda\phi^4$ theory, that states
$$\Gamma^{(2)}=G^{-1}=P^{-1}-\Sigma\tag{1}$$
where $G$ is the propagator, $P$ is the free propagator, and $\Sigma$ is the self-energy. I understand why
$$G^{-1}=P^{-1}-\Sigma\tag{2}$$
(using the geometric series), but I fail to see what
$$\Gamma^{(2)}=G^{-1}\tag{3}$$
means in terms of diagrams.
Answer: The 1PI effective action $\Gamma[\phi]$ generates the 1PI connected amputated Green's functions. Let us denote the connected fully resummed non-amputated vertices by shaded circles, and the 1PI amputated vertices by empty circles.
Diagrammatically:
Note that the row for $n=2$ is $G = G\Gamma^{(2)}G$, which is the same as $\Gamma^{(2)} = G^{-1}$.
If you think about it, this is just drawing what amputated means: $\Gamma^{(n)}$ is the fully resummed vertex with all the fully resummed propagators $G$ removed from the legs. | {
"domain": "physics.stackexchange",
"id": 84128,
"tags": "quantum-field-theory, feynman-diagrams, propagator, 1pi-effective-action, self-energy"
} |
tabletop_segmentation in rviz | Question:
Hi, I am trying to see the output of tabletop_segmentation in rviz. Would really appreciate a little help. Nothing fancy - just the plain vanilla segmented point cloud, dont need to recognize objects, etc.
Precise 12.04, Fuerte
Running openni.launch (XBOX 360 Kinect cam), rviz and tabletop_segmentation.launch
All I can see in rviz is the point cloud from Kinect. How can I see markers from segmentation like shown in the tutorial for tabletop_object_detector?
Ran rostopic - I can see a published topic: tabletop_segmentation_markers - but nothing is being published to it.
Looks like tabletop_segmentation node is not getting data from Kinect. In the startup logs, I can see it using cloud_in:=narrow_stereo_textured/points2. Is that same as Kinect? If not, do I have to remap topics somehow?
Also, Kinect's IR projector lamp does not turn on when I just run tabletop_segmentation - it turns on only when I start rviz - another reason to suspect that I am not hooking things up correctly.
Please help, been trying to get this working for several long days now ...
Thanks!
Greg
Originally posted by Greg S on ROS Answers with karma: 36 on 2013-05-23
Post score: 0
Answer:
The problem was the setting in tabletop_segmentation.launch
...
<arg name="tabletop_segmentation_points_in" default="/camera/depth_registered/points" />
<arg name="tabletop_segmentation_convert_to_base_link" **default="false"** />
...
This decouples tabletop segmentation from the PR2 coordinate system
Thanks to Matei for pointing it out to me offline!
Originally posted by Greg S with karma: 36 on 2013-06-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ctguell on 2013-09-12:
I have the same problem, specially with the kinect not turning on, but when i implement your answer i still get the same problem, do you have any idea why? Also when you want to see the segmentation in rviz what new display type do you add? Thanks | {
"domain": "robotics.stackexchange",
"id": 14273,
"tags": "ros, kinect, tabletop-object-detector"
} |
Type II Superstring and Open String | Question: I would like to trow away some confusion I'm accumulating.
1. Usually when superstrings are introduced (see for example Wikipedia) one says: Type I is a theory of open and closed strings, while Type II (A and B) is a theory of only closed strings. That is ok;
2. After some exploration one finds that one of the most important objects in Type II theory is the D-brane, where, by definition, open strings can end. So in Type II theory there are open strings. As you can see at page 53 of the PDF, Tong affirms:
For type II superstrings, the open strings and Dbranes
are necessary ingredients.
How can the apparent tension (if not contradiction) between points 1 and 2 be solved?
Answer: I think the idea is that there are no open strings in IIA, IIB in perturbation theory around the vacuum. As you say, we know D-branes exist in IIA and IIB, and these are defined as the submanifolds where open strings can end, so if a theory has D-branes it should also have open strings. But to have an open string means that you necessarily have a heavy, solitonic-like object (the D-brane), the D-brane tension goes like $T_p \propto 1/g_s$, so if $g_s \ll 1$ as it would be in perturbation theory, the D-branes are non-perturbative objects. Therefore you would not say that IIA, IIB has open strings in perturbation theory around the vacuum; they have open strings in perturbation theory around the various D-brane solutions. | {
"domain": "physics.stackexchange",
"id": 30024,
"tags": "string-theory, branes"
} |
Interpretation of statistical features in ML model | Question: I have a data like as shown below (working on classification problem using traditional classification and DL based approaches)
I see in feature engineering tutorials (and tools) here and here, they usually compute basic statistics features based on numeric column such as max(loan amount), min(loan amount), sum(loan amount),stddev(loan amount), average (loan amount) etc.
I understand all these are done in an attempt to increase the predictive power of the model.
However, my question is
what does it mean when max(loan amount) or std dev(loan amount) is an important feature? Can you help me understand what insight it conveys? How should I interpret this feature? Can you explain in simple English?
Let's assume we run a random forest model and in the feature importance we see that max(loan amount) is the top most feature. What does it mean? I am looking for meaning to understand the insight that it communicates. This question is not about the model. It's simply about the meaning of the term/feature std dev (loan amount) or max(loan amount) or min(loan amount)
Answer: First it's important to define the problem: let's assume that the goal is to predict for a person identified by their s_id the probability that they would default on a particular loan (or the risk that they would default in general).
So in this setting one instance represents one person. The available data contains information about the person's past loans. First there's a technical issue: this history can be any length, but with traditional feature representation we need a fixed length vector of features. Semantically, it's a matter of providing the model with the information contained in this history in a way which is useful to predict the target variable: intuitively, the exact amount of every loan is not essential since it's very specific (it might even even cause some overfitting if used directly).
For these reasons it makes sense to "summarize" the loan history as a fixed-size vector of stats. Typically number, mean, median, possibly quantiles, standard deviation, etc. These values make the instances comparable to each other, in the sense that the model can distinguish customers with different patterns in their history. In this particular case it would certainly make sense to create two series of stats: one for the loans paid and the other for the unpaid loans, since this is clearly something which can help the model. | {
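A minimal pandas sketch of such a fixed-size summary; the column names (`s_id`, `loan_amount`, `paid`) and values are hypothetical:

```python
import pandas as pd

# Toy loan history: several past loans per person (assumed schema)
df = pd.DataFrame({
    "s_id":        [1, 1, 1, 2, 2],
    "loan_amount": [100.0, 250.0, 400.0, 50.0, 60.0],
    "paid":        [True, True, False, True, True],
})

# One fixed-length feature vector per person, summarizing their history
feats = df.groupby("s_id")["loan_amount"].agg(["count", "min", "max", "mean", "std"])
print(feats)
```

As suggested above, in practice one would compute the same aggregates separately for paid and unpaid loans, e.g. by grouping on `["s_id", "paid"]` and pivoting the result into one row per person.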
"domain": "datascience.stackexchange",
"id": 10622,
"tags": "machine-learning, deep-learning, classification, predictive-modeling, feature-selection"
} |
How do cars move, it seems inexplicable (to me)? | Question: Confiteor, I have only a rudimentary knowledge of how machines work. I'll be discussing fossil fuel engines only.
First off, I have a block of iron, B. I find a pan balance. I place B in one of the pans. I then place X in the other pan. X > B. The arms will move in such a way that you can infer, B can't lift X. Now remove X, place Y in its place. Y = B. Equilibrium!!!
Conclusion: If X > B then B can't lift X (we're talking about weight here).
My question is, with a car, the weight of the fuel (even at full tank) is much, much, less than the weight of the car + fuel. and yet ... a fraction of the fuel "lifts" the car.
Quomodo?
Answer: The gasoline contains a tremendous amount of stored chemical energy per pound of weight. The engine in the car burns the gasoline to release the stored chemical energy as heat and converts it into shaft work, which turns the wheels of the car.
"domain": "engineering.stackexchange",
"id": 5373,
"tags": "energy, machine-elements, fuel"
} |
Does the Sun have permanent geographical features? | Question: Equivalent formulations of this question:
Would it make any sense to draw a map of the Sun?
Is the Sun heterogeneous with respect to latitude and longitude? (I know that it is heterogeneous with respect to altitude/depth)
I know that the Sun does not rotate uniformly, so any map would line up differently as time passes. However, this does not rule out a slice of the Sun at a given latitude having permanent geographical features, since this slice does rotate uniformly.
Of course, even geographical features on Earth are not really permanent, they change on long time scales. I don't expect the Sun to have any eternal features, I'm really just thinking about features that change on long time scales, or at least longer than a few months (e.g. not sunspots).
Answer: As you suspect, the Sun does not have any permanent surface features. Up till 1951 the longest actually observed sunspot group lasted 134 days (see http://adsabs.harvard.edu/full/1951ASPL....6..146P). That duration gives you an idea how long sunspot groups last. They are associated with coronal holes, which would last about as long. However, the structure (shape, size) of the group and any coronal hole would be constantly changing.
Temporary maps are made and there is a heliographic co-ordinate system, with a latitude and longitude.
The sunspots show the rotation of the Sun, and so a north and a south solar pole can be found (north is on the same side of the ecliptic as the Earth's north pole).
Solar longitude is much more arbitrary. The zero heliographic meridian is defined as the plane (solar pole to solar pole) passing through where the solar equator crossed the plane of the ecliptic (Earth's orbital plane) at Greenwich mean noon on January 1, 1854. see http://wso.stanford.edu/words/Coordinates.html
Since then astronomers have just used a constant rotation rate based upon the suns equatorial rotation rate. These are called Carrington Solar Coordinates
A similar system is used for the longitude of the gas giant planets like Jupiter and Saturn. | {
"domain": "astronomy.stackexchange",
"id": 2536,
"tags": "the-sun"
} |
Charge conjugation in Dirac equation | Question: I need to know the mathematical argument about why this relation is true $(C^{-1})^T\gamma ^ \mu C^T = - \gamma ^{\mu T} $ .
Where $C$ is defined by $U=C \gamma^0$ ; $U$= non singular matrix , $T$= transposition, $\gamma^0= $Dirac gamma matrix = $\beta$
I need to know the significance of these equations in charge conjugation.
Answer: First, $U$ is surely not "any non-singular matrix". For a given basis, $U$ is almost completely determined i.e. unique. It contains $\gamma_2$ because it's derived from the only imaginary Pauli matrix.
Because of the basic Dirac algebra
$$ \{ \gamma_\mu,\gamma_\nu \} = 2\cdot 1_{4\times 4} \cdot g_{\mu\nu} $$
one may see that $\gamma_0$ is Hermitian, $\gamma_0=\gamma_0^\dagger$, while the spatial ones are anti-Hermitian, $\gamma_i=-\gamma_i^\dagger$.
In your identity, you want to relate $\gamma^\mu$ to its transposition $\gamma^{\mu T}$. Up to the sign that depends on the spatial or temporal character of $\mu$, the transposition is the same thing as complex conjugation.
So a related problem is whether the complex conjugate matrices $\gamma^{\mu*}$ can be related to $\gamma^\mu$ by something like a conjugation. And the answer is Yes. The main fact behind the exercise is that $\sigma^2$ is the only imaginary Pauli matrix, so complex conjugation of Pauli matrices is equivalent to the conjugation by $\sigma^2$ with an extra sign. This may be easily generalized if you also include the temporal 0th component and if you use the normal basis.
You should check the identity you want to verify in a particular convenient basis, i.e. with an explicit form of the gamma matrices. The verification is most convenient if you write the gamma matrices in block form, with $2\times 2$ blocks being either multiples of Pauli matrices or the unit matrix.
In a more general representation, the Dirac gamma matrices differ from those in the particular basis you will have verified by a conjugation only, and this may only mean that $U$ is changed in the formula, but the essence of the conjugation is unchanged.
These equations are important because $C$ is related to the charge conjugation – the replacement of particles by antiparticles (e.g. exchange of electrons and positrons). Mathematically, the most important part of the charge conjugation is complex conjugation which is why we needed to express the "complex conjugate gamma matrices as some conjugations of the normal ones".
Theories with a symmetry between matter and antimatter are symmetric under C - the charge conjugation symmetry. Spinors are mapped to $\psi\to C\psi$ etc. and the only hard part of the symmetry of the Lagrangian is a step that requires you to conjugate the gamma matrices by $C$ which is why it's good that we have a way to simplify $C^{-1T} \gamma^\mu C^T$. | {
"domain": "physics.stackexchange",
"id": 5230,
"tags": "dirac-equation, spinors, dirac-matrices, charge-conjugation, cpt-symmetry"
} |
Modern chain of responsibility design pattern | Question: Based on this question, I tried to simplify the use case with simpler model,
Let me know if you have other suggests on optimizations potentially algorithmic.
#include <iostream>
#include <tuple>
#include <iostream>
class AtmHandler
{
static inline auto chain = std::make_tuple(
[](auto &amt)
{
if (amt % 50 == 0)
{
std::cout << "Number of 50 Dollar:" << amt / 50 << std::endl;
std::cout << "Request is completed so no need to forward it" << std::endl;
return true;
}
int numberOf50Dollar = amt / 50;
std::cout << "Number of 50 Dollar:" << numberOf50Dollar << std::endl;
amt %= 50;
return !amt;
},
[](auto &amt)
{
if (amt % 20 == 0)
{
std::cout << "Number of 20 Dollar:" << amt / 20 << std::endl;
std::cout << "Request is completed so no need to forward it" << std::endl;
return true;
}
int numberOf20Dollar = amt / 20;
std::cout << "Number of 20 Dollar:" << numberOf20Dollar << std::endl;
amt %= 20;
return !amt;
},
[](auto &amt)
{
if (amt % 10 == 0)
{
std::cout << "Number of 10 Dollar:" << amt / 10 << std::endl;
std::cout << "Request is completed so no need to forward it" << std::endl;
return true;
}
std::cout << "!!!!!!!!!!!!!!!!!!!Can Not with draw this amout please enter correct amount" << std::endl;
return false;
});
public:
bool parse(int value)
{
bool result;
auto handle = [&](auto &h)
{
return result = h(value);
};
std::apply([&](auto &&...xs)
{ (handle(xs) || ...); },
chain);
return result;
}
};
int main()
{
AtmHandler handler;
std::cout << handler.parse(530) << std::endl;
}
Answer: You wouldn't use the chain of responsibility for converting an amount of money into a number of bills, there are much better ways to solve that problem. But let's forget that for now. There are still some other improvements possible:
Use '\n' instead of std::endl
Prefer to use '\n' instead of std::endl; the latter is equivalent to the former, but also forces the output to be flushed, which is usually unnecessary and might impact performance.
Avoid code duplication
You already duplicated a lot of code by using the chain-of-responsibility pattern for this problem, as the three handlers look very much alike. However, even within one handler you have duplication. Here's one way to address all that duplication:
template<std::size_t Denomination>
static inline auto billHandler = [](auto &amount) {
std::size_t numberOfBills = amount / Denomination;
std::cout << "Number of " << Denomination << " Dollar bills: "
<< numberOfBills << '\n';
amount -= numberOfBills * Denomination;
if (!amount) {
std::cout << "Request is completed so no need to forward it.\n";
}
return !amount;
};
static inline auto chain = std::make_tuple(
billHandler<50>,
billHandler<20>,
billHandler<10>,
[](auto &amount) {
std::cout << "Cannot handle the change!\n";
return false;
}
);
Simplify parse()
Your parse() function is unnecessarily complex. There is no need for the lambda handle, and since std::apply() will return the return value of the function it calls, you can use that to directly generate the return value of parse():
bool parse(auto value)
{
return std::apply([&](const auto&... handler) {
return (handler(value) || ...);
}, chain);
}
There is no need to make a class
If there is only one member function, and no state is stored in an object of class AtmHandler, the class is unnecessary. Instead, you could write a free function parse(). | {
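For reference, one of those "much better ways" alluded to at the top: for this problem the whole handler chain collapses to a plain greedy loop over denominations (a Python sketch for brevity; for the 50/20/10 set greedy matches the chain's behaviour, though greedy is not correct for arbitrary denomination sets):

```python
def make_change(amount, denominations=(50, 20, 10)):
    # Greedy: take as many of each denomination as fits, largest first
    result = {}
    for d in denominations:
        result[d], amount = divmod(amount, d)
    # None signals an amount that cannot be dispensed with these bills
    return result if amount == 0 else None

print(make_change(530))  # -> {50: 10, 20: 1, 10: 1}
```

Each handler in the original chain is just one iteration of this loop, which is why the chain-of-responsibility machinery buys nothing here.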
"domain": "codereview.stackexchange",
"id": 44479,
"tags": "c++, design-patterns, c++20"
} |
Angular Velocity of a car | Question: Given a time $t$ in seconds, and a $\theta$ of the curve, and the radius $r$ in meters, how would I calculate the angular velocity of a car going through the curve in $t$ seconds?
I think it will be $\omega=\frac{\Delta\theta}{\Delta t}$.
What is $\Delta \theta$?
Answer:
What is $\Delta \theta$?
For a general time-dependent function $f(t)$:
$$f'(t)=\frac{\text{d}f}{\text{d}t}=\lim_{\Delta t \to 0} \frac{\Delta f}{\Delta t}$$
where:
$$\Delta f=f(t+\Delta t)-f(t)$$
So for $\theta(t)$, $\theta'=\frac{\text{d}\theta}{\text{d}t}$, where:
$$\boxed{\Delta \theta=\theta(t+\Delta t)-\theta(t)}$$
And:
$$\frac{\text{d}\theta}{\text{d}t}=\lim_{\Delta t \to 0} \frac{\theta(t+\Delta t)-\theta(t)}{\Delta t}$$
Example- let:
$$\theta(t)=\theta_0+\omega t$$
Then:
$$\frac{\text{d}\theta}{\text{d}t}=\lim_{\Delta t \to 0} \frac{\theta_0+\omega(t+\Delta t)-(\theta_0+\omega t)}{\Delta t}$$
$$=\lim_{\Delta t \to 0} \frac{\omega\Delta t}{\Delta t}=\omega$$ | {
"domain": "physics.stackexchange",
"id": 67276,
"tags": "homework-and-exercises, angular-velocity"
} |
How is electrostatic potential distributed along a circuit element? | Question: Suppose we have a diode circuit like that:
Suppose the voltage has magnitude $\varphi$ and one end of the wire has potential zero.
How will the potential be distributed throughout the diode?
Does $\varphi = \varphi_A = \varphi_B$?
Or will the potential be highest on the red line and then lower as we go up or down due to some electron-hole-ion configurations we have in our semiconductor that may have some effect on electric potential?
How to describe this mathematically?
Answer: The answer depends on the contact surface, which contains A and B.
If it is metallic, as it is in many (integrated) devices, $\phi = const$ along that surface. Current flow across the diode can be modeled as 1D. (This is also a nice and easy to obtain reference estimate for the next one.)
If A is a point contact and B is practically isolated, then your second description holds. In reality placing a needle on the semiconductors surface by a tester comes close, but can be a mixture with the first case, depending on the dimensions.
See my sketch below:
AB is high resistive (insulating)
red is my guess of the electric field inside (not much different than in vacuum)
pink is the electrical potential, which IS of course perpendicular to the field (as you can see ;)
the distances of the potential lines roughly indicate the distribution of resistance (voltage drops)
A "simple" way to model this is using resistor nets, mimicking the p and n zones. In fact, triangulation is exactly this, done in $2D$-simulators. Instead of triangles ($3$ nodes per triangle) you can also take $4$ nodes, and build such an electric network with an ordinary circuit simulator. (Try it manually, e.g. by dividing both zones into $3*3$ elements and use symmetry between A and O).
Or you can guess stripes of resistors (e.g. $3-4$ per half from my sketch), put them in parallel, estimate reasonable resistances and run your circuit simulator with it.
Or run your $2D$ physical simulator (if you have access to one).
Things like that.
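For the stripes-of-resistors estimate, the only arithmetic needed is series and parallel combination. A tiny helper sketch (the 300 Ω stripe value is a made-up placeholder, not taken from the sketch above):

```java
public class ResistorNet {
    // Total resistance of resistors in series: R = R1 + R2 + ...
    static double series(double... rs) {
        double total = 0;
        for (double r : rs) {
            total += r;
        }
        return total;
    }

    // Total resistance of resistors in parallel: 1/R = 1/R1 + 1/R2 + ...
    static double parallel(double... rs) {
        double inv = 0;
        for (double r : rs) {
            inv += 1.0 / r;
        }
        return 1.0 / inv;
    }

    public static void main(String[] args) {
        // e.g. three 300-ohm stripes per half, then the two halves in series
        double half = parallel(300, 300, 300); // 100 ohm
        System.out.println("total = " + series(half, half) + " ohm");
    }
}
```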
Here's a so-called lumped model at the coarsest useful granularity, which you can use with a circuit simulator:
Procedure:
calculate all voltage drops (i.e. potentials)
map them in $2D$, e.g. on paper
sketch reasonable curves of constant potential $\phi$
estimate field lines $\nabla \phi$ as a perpendicular vector field
increase granularity, i.e. introduce more nodes if needed and repeat.
You can model D either as an ideal diode $I = I_s \times \exp{\frac{V}{v_t}}$ or with a series resistance to take high voltage saturation into account. | {
"domain": "physics.stackexchange",
"id": 94331,
"tags": "electromagnetism, statistical-mechanics, electric-circuits, potential, semiconductor-physics"
} |
On Masses Related to a Greater Gravitational Force | Question: Small Introduction
Today I was speaking with my Physics teacher and we ended up discussing the Law of Universal Gravitation, essentially going from the three postulates it has (I am not sure of this affirmation), and the widely known proportionality:
$$F\propto \frac{m_1 m_2}{r^2}$$
To the known Newton's equation:
$$F=G\frac{m_1m_2}{r^2}$$
Question
Now from that equation I deduced that, since the gravitational force is proportional to the product of both masses, it follows that the greater the mass of a body is, the greater the constant g. But my professor told me that wasn't the case, that it depends on the planet. That sounds just wrong; is that correct? Because as far as I can see, the Sun has a greater g value than, for example, Jupiter (which already has a greater value than the Earth), and their masses correspond to their gravitational forces.
Answer: It depends on your phrasing. The statement under question is, "the greater the mass of a body is, the greater the constant g". But the real question is which body we're talking about.
The whole idea of having a little "g" is for the common circumstance in our daily lives, where one body is fixed in space and the other object is free to accelerate. Say that $g=G m_1/{R^2}$, $R$ being the radius of the planet and $m_1$ being the mass of the planet (the fixed object). Does "the mass of a body" refer to the fixed object $m_1$ or the moving object $m_2$? If it refers to the stationary object $m_1$ then the original statement is true. If it refers to the moving object then the original statement is false.
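As a numerical sanity check of $g = Gm_1/R^2$ for the fixed body, here is a small sketch; the masses and radii below are rough textbook values I supplied, not from the post:

```java
public class SurfaceGravity {
    static final double G = 6.674e-11; // m^3 kg^-1 s^-2

    // g = G * m1 / R^2 for the fixed (central) body
    static double g(double mass, double radius) {
        return G * mass / (radius * radius);
    }

    public static void main(String[] args) {
        double earth = g(5.972e24, 6.371e6);     // ~9.8 m/s^2
        double jupiter = g(1.898e27, 6.9911e7);  // ~25.9 m/s^2
        double sun = g(1.989e30, 6.957e8);       // ~274 m/s^2
        System.out.printf("Earth %.1f, Jupiter %.1f, Sun %.1f%n", earth, jupiter, sun);
    }
}
```

This matches the ordering mentioned in the question: Sun above Jupiter above Earth.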
So it is possible you are both correct and it was just a miscommunication.
In the case where both objects are free to move, things are much more complicated; you get things like the reduced mass $\frac{1}{\mu}=\frac{1}{m_1}+\frac{1}{m_2}$ and you have to talk in more precise/general terms, so it's good to avoid! | {
"domain": "physics.stackexchange",
"id": 40723,
"tags": "newtonian-gravity"
} |
Producer-Consumer using low level synchronization | Question: I saw a tutorial on a site that showed the use of ArrayBlockingQueue as a thread-safe concurrent data structure. Just for learning purposes, I tried to build something similar using synchronized and wait-notify methods in Java. I'm really very new to java.util.concurrent and I'm hoping someone could guide me to some improvements and point out the bad parts of this code I wrote:
import java.util.ArrayList;
import java.util.Random;
public class LowLevelProducerConsumer {
ArrayList<Integer> list = new ArrayList<Integer>();
private final int size = 10;
private int elems = 0;
Object lock = new Object();
Random r = new Random(System.currentTimeMillis());
public void producer() {
while (true) {
synchronized (lock) {
try {
while (elems != size) {
list.add(r.nextInt(100));
elems++;
}
} finally {
// allows consumer to remove an entry
lock.notify();
}
}
}
}
public void consumer() throws InterruptedException {
Thread.sleep(100); // any better way to ensure producer runs first ?
int ran = 0;
while (true) {
// consumer tries to acquire lock here , but it can do so only after
// producer has called notify
synchronized (lock) {
// do i need a lock.wait() here somewhere ?
ran = r.nextInt(10);
if (ran == 7) { // just an arbitrary condition
int loc = r.nextInt(list.size());
int data = list.remove(loc);
System.out.format(
"%d removed from index %d , size : %d\n",
data, loc, list.size());
// decrementing elems to let the producer work again
elems--;
Thread.sleep(300);
}
// release the lock so that producer can fill things up again
lock.notify();
}
}
}
public static void main(String[] args) {
final LowLevelProducerConsumer low = new LowLevelProducerConsumer();
Thread t1 = new Thread(new Runnable() {
@Override
public void run() {
low.producer();
}
});
Thread t2 = new Thread(new Runnable() {
@Override
public void run() {
try {
low.consumer();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
t1.start();
t2.start();
try {
t1.join();
t2.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
PS: I've kept my understanding of how the code works in the comments. Any corrections to them would be much appreciated.
Answer: Miscellaneous Remarks
Your elems variable is just list.size(), so why not just use list.size() everywhere and eliminate elems?
Your other size is a constant, and should be called SIZE by convention.
Instead of spreading out int ran = 0; ran = r.nextInt(10);, and if (ran == 7) { ... } over three lines, you should just do if (r.nextInt(10) == 7) { ... }. Putting it all on one line takes less mental effort to follow the code.
Threading
Both the producer and consumer threads are infinite loops. Neither thread will end normally, and therefore trying to .join() them is superfluous.
Rather than ensuring that the producer runs first, you should just design the consumer to correctly handle the case when there are no more values to consume. As @DanielR pointed out, when list.size() is zero, r.nextInt(list.size()) throws an IllegalArgumentException.
Your use of lock.notify() right now is superfluous, as there is no corresponding lock.wait() that expects a notification. A good place to insert a lock.wait() would be where you suspected. (Without the lock.wait(), the consumer would keep trying frantically to fetch a value, which could work, but is inefficient. Inserting lock.wait() makes the consumer yield the processor to the producer until the producer notifies the consumer that the values have been replenished.)
public void consumer() throws InterruptedException {
synchronized (lock) {
while (list.isEmpty()) {
// Nothing to consume for now; wait for more.
lock.wait();
}
...
}
}
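As a comparison point, this wait-when-empty handshake is essentially what the ArrayBlockingQueue from the tutorial mentioned in the question does internally: put() blocks while the queue is full and take() blocks while it is empty. A minimal sketch for comparison (Java 8 lambdas used for brevity; the structure is mine, not part of the code under review):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingQueueDemo {
    // Moves `items` values through a bounded queue and reports how many arrived.
    static int run(int items) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);
        AtomicInteger consumed = new AtomicInteger();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.take(); // blocks while the queue is empty
                    consumed.incrementAndGet();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return consumed.get();
    }

    public static void main(String[] args) {
        System.out.println("consumed " + run(100) + " values");
    }
}
```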
I think that the vocabulary is slightly off. To me, a "producer" and a "consumer" are both threads. I would say,
Thread producer = new Thread(new Runnable() {
@Override
public void run() {
low.produce();
}
});
Therefore, I would rename your methods as verbs produce() and consume().
In Java, every Object can act as a lock. There is no need to make a separate lock object; you could just synchronize on list or this instead. (It doesn't matter which object you choose as the lock, as long as the producer and consumer both agree on which object it is.) As a special case,
public void method() {
synchronized (this) {
...
}
}
can be written as
public synchronized void method() {
...
}
which is slightly more readable and also compiles to slightly tighter bytecode.
I would move the while-loops from produce() and consume() into the run() method. That would have the triple benefit of
keeping produce() and consume() simpler
making it more obvious that each Runnable is an infinite loop
moving the synchronized keyword into the method signatures, making it easier to see that produce() and consume() run mutually exclusively, and generating slightly tighter bytecode.
Proposed Solution
import java.util.ArrayList;
import java.util.Random;
public class LowLevelProducerConsumer {
ArrayList<Integer> list = new ArrayList<Integer>();
private final int SIZE = 10;
Random r = new Random(System.currentTimeMillis());
public synchronized void produce() {
try {
while (list.size() != SIZE) {
list.add(r.nextInt(100));
}
} finally {
// Notifies consumer that entries have been generated
this.notify();
}
}
public synchronized void consume() throws InterruptedException {
while (list.isEmpty()) {
// Nothing to consume for now; wait for more.
// System.err.println("Consumer waiting...");
this.wait();
}
if (r.nextInt(10) == 7) { // just an arbitrary condition
int loc = r.nextInt(list.size());
int data = list.remove(loc);
System.out.format(
"%d removed from index %d , size : %d\n",
data, loc, list.size());
Thread.sleep(300);
}
}
public static void main(String[] args) {
final LowLevelProducerConsumer low = new LowLevelProducerConsumer();
Thread producer = new Thread(new Runnable() {
@Override
public void run() {
while (true) {
low.produce();
}
}
});
Thread consumer = new Thread(new Runnable() {
@Override
public void run() {
while (true) {
try {
low.consume();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
});
producer.start();
consumer.start();
}
} | {
"domain": "codereview.stackexchange",
"id": 4757,
"tags": "java, multithreading, concurrency"
} |
Number of cliques in a graph | Question: I think the number of cliques in a graph is generally exponential in the number of vertices of that graph. Does anyone know any reference for that?
Answer: I am assuming you mean the number of maximal cliques, as the number of cliques of a complete graph is trivially $2^n$ (any subset of the vertices forms a clique).
For the number of maximal cliques, take the complement of a disjoint union of triangles. Since the number of maximal independent sets is exactly the same (in the complement), you can count the number of maximal independent sets in a graph that is a disjoint union of triangles. This number is $3^{n/3}$ (Moon and Moser, 1965).
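This bound is easy to verify by brute force for small $n$. The sketch below builds the complement of a disjoint union of triangles (vertices $u$ and $v$ are adjacent iff they lie in different triangles) and counts maximal cliques over all vertex subsets:

```java
public class MoonMoser {
    // Complement of disjoint triangles: u, v adjacent iff in different triangles
    static boolean adjacent(int u, int v) {
        return u / 3 != v / 3;
    }

    // Is the vertex subset encoded by `mask` a clique?
    static boolean isClique(int mask, int n) {
        for (int u = 0; u < n; u++)
            for (int v = u + 1; v < n; v++)
                if ((mask >> u & 1) == 1 && (mask >> v & 1) == 1 && !adjacent(u, v))
                    return false;
        return true;
    }

    // Counts maximal cliques by checking every nonempty vertex subset
    static int countMaximalCliques(int n) {
        int count = 0;
        for (int mask = 1; mask < (1 << n); mask++) {
            if (!isClique(mask, n)) continue;
            boolean maximal = true;
            for (int w = 0; w < n; w++)
                if ((mask >> w & 1) == 0 && isClique(mask | (1 << w), n))
                    maximal = false;
            if (maximal) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // 3^(n/3): n=6 -> 9, n=9 -> 27
        System.out.println(countMaximalCliques(6));
        System.out.println(countMaximalCliques(9));
    }
}
```

The counts match $3^{n/3}$ for $n=6$ and $n=9$.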
See also: The number of cliques in a graph: the Moon and Moser 1965 result. | {
"domain": "cs.stackexchange",
"id": 9437,
"tags": "graphs, reference-request, clique"
} |
Problem with image_transport callback | Question:
I have the next scenario,
class controller
{
public:
controller(ros::NodeHandle node);
~controller(void);
// The callbacks are implemented correctly
void imageCallback(const sensor_msgs::ImageConstPtr& msg);
void navdataCallback(const ardrone_autonomy::Navdata& data);
void foo(){
// Do Something
std::thread foo_thread(&controller::foo2, this);
foo_thread.join();
}
void foo2(){
// Do Something
while (something){
// Do Something
std::this_thread::sleep_for(std::chrono::milliseconds(20));
ros::spinOnce();
}
}
private:
//Other Stuff
};
and the main program is
int main(int argc, char **argv)
{
ros::init(argc,argv, "main");
ros::NodeHandle n;
controller control(n);
image_transport::ImageTransport it(n);
image_transport::Subscriber sub = it.subscribe("ardrone/image_raw", 1, &controller::imageCallback, &control);
ros::Subscriber navdata = n.subscribe("ardrone/navdata", 1000, &controller::navdataCallback, &control);
while(ros::ok()){
ros::spin();
}
return 0;
}
The problem is that when I process data in foo2() the navdata messages arrive fine, but I stop receiving image data until the end of the thread.
How can I resolve this? Is my approach correct? Maybe the ros::spinOnce is not placed correctly?
Originally posted by Abind on ROS Answers with karma: 16 on 2017-01-17
Post score: 0
Answer:
I will answer my own question: it is not a good idea to use ros::spin in different threads; you need to use multi-threaded spinning.
So, in order to solve this I put all the callbacks in a different thread, so I only need to use ros::spin in one thread and the problem is solved :)
Originally posted by Abind with karma: 16 on 2017-01-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26751,
"tags": "ros, callbacks, image-transport"
} |
Hangman program in Java | Question: Here's an implementation of Hangman using Java 6. I've split the code into 2 classes - a logic class & a GUI class. Is it OK to have so many static member variables? Please let me know if there are any design improvements that I could make.
Hangman logic - main class
import java.awt.*;
import java.io.*;
import java.util.*;
import java.util.List;
import javax.swing.*;
public class Hangman {
static String[] wordList;
static String secretWord;
static Set<Character> alphabet;
static Set<Character> lettersGuessed; // letters the user has guessed
static boolean[] lettersRevealed; // determines if the letter should be revealed or not
static int guessesRemaining;
public static void main(String[] args){
Hangman hangman = new Hangman();
hangman.createAlphabetSet();
hangman.readFile("words.txt");
HangmanGUI.buildGUI();
setUpGame();
}
// checkIfWon - sees if the user has won the game
static boolean checkIfWon(){
for(boolean isLetterShown : lettersRevealed){
if(!isLetterShown)
return false;
}
return true;
}
// checkUserGuess - get input from the user
static boolean checkUserGuess(String l){
if(l.length() == 1 && !lettersGuessed.contains(l.charAt(0)) && alphabet.contains(l.charAt(0))) {
HangmanGUI.setText(null);
lettersGuessed.add(l.charAt(0));
return true;
}else{
Toolkit.getDefaultToolkit().beep();
}
return false;
}
// chooseSecretWord - selects a word
private static String chooseSecretWord(String[] wordList){
return wordList[(int)(Math.random() * wordList.length)];
}
// createAlphabetSet - Creates the alphabet set that's used to ensure that the user's guess is not a number or a special character
private void createAlphabetSet(){
alphabet = new HashSet<Character>(26);
for(Character c = 'a'; c<='z'; c++){
alphabet.add(c);
}
}
// loseSequence - when the user runs out of guesses
static void loseSequence(){
for(int i = 0; i < lettersRevealed.length; i++){
lettersRevealed[i] = true;
}
HangmanGUI.drawSecretWord();
playAgain("Tough luck. The secret word was " + secretWord + ".\nWould you like to play another game of hangman?");
}
// playAgain - Allows the user to play another game of hangman
private static void playAgain(String message){
int ans = HangmanGUI.playAgainDialog(message);
if(ans == JOptionPane.YES_OPTION){
setUpGame();
}else{
System.exit(0);
}
}
// readFile - read in wordList
private String[] readFile(String loc){
BufferedReader input = null;
try{
input = new BufferedReader(new InputStreamReader(this.getClass().getClassLoader().getResourceAsStream(loc)));
wordList = input.readLine().split(" ");
}catch(IOException ioException) {
ioException.printStackTrace();
}finally{
try {
if (input != null) input.close();
}catch(IOException ioEx){
ioEx.printStackTrace();
}
}
return wordList;
}
// setUpGame - sets up the variables appropriately
private static void setUpGame(){
guessesRemaining = 5;
secretWord = chooseSecretWord(wordList);
lettersRevealed = new boolean[secretWord.length()];
Arrays.fill(lettersRevealed, false);
lettersGuessed = new HashSet<Character>(26); // 26 letters in alphabet
HangmanGUI.drawSecretWord();
HangmanGUI.drawLettersGuessed();
HangmanGUI.drawGuessesRemaining();
}
// updateSecretWord - updates which letters of the secret word have been revealed
static void updateSecretWord(String l){
List<Integer> changeBool = new ArrayList<Integer>();
if(secretWord.contains(l)){
// Searches through secretWord & notes down all letters that equal the user's guess
for(int i=0; i<secretWord.length(); i++){
if(secretWord.charAt(i) == l.charAt(0))
changeBool.add(i);
}
// Changes the boolean value for those letters @ their corresponding indexes
for(Integer idx : changeBool)
lettersRevealed[idx] = true;
}else{
guessesRemaining--;
HangmanGUI.drawGuessesRemaining();
}
}
// winSequence - when the user has correctly guessed the secret word
static void winSequence(){
playAgain("Well done! You guessed " + secretWord + " with " + guessesRemaining + " guesses left!\n" +
"Would you like to play another game of hangman?");
}
}
Hangman GUI
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;
public class HangmanGUI {
// GUI
static JFrame frame;
static JTextField textField;
static JLabel guessesRemainingLabel;
static JLabel lettersGuessedLabel;
static JLabel secretWordLabel;
// buildGUI - builds the GUI
static void buildGUI(){
SwingUtilities.invokeLater(
new Runnable(){
@Override
public void run(){
frame = new JFrame("Hangman");
// JLabels
guessesRemainingLabel = new JLabel("Guesses remaining: " + String.valueOf(Hangman.guessesRemaining));
lettersGuessedLabel = new JLabel("Already guessed: ");
secretWordLabel = new JLabel();
// TextField & checkButton
textField = new JTextField();
JButton checkButton = new JButton("Guess");
GuessListener guessListener = new GuessListener();
checkButton.addActionListener(guessListener);
textField.addActionListener(guessListener);
// Panel for all the labels
JPanel labelPanel = new JPanel();
labelPanel.setLayout(new BoxLayout(labelPanel, BoxLayout.PAGE_AXIS));
labelPanel.add(guessesRemainingLabel);
labelPanel.add(lettersGuessedLabel);
labelPanel.add(secretWordLabel);
// User panel
JPanel userPanel = new JPanel(new BorderLayout());
userPanel.add(BorderLayout.CENTER, textField);
userPanel.add(BorderLayout.EAST, checkButton);
labelPanel.add(userPanel);
// Add everything to frame
frame.add(BorderLayout.CENTER, labelPanel);
frame.setSize(250, 100);
frame.setLocationRelativeTo(null);
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.setVisible(true);
}
}
);
}
// drawGuessesRemaining - Outputs the guesses remaining
static void drawGuessesRemaining(){
final String guessesMessage = "Guesses remaining: " + String.valueOf(Hangman.guessesRemaining);
SwingUtilities.invokeLater(
new Runnable(){
@Override
public void run(){
guessesRemainingLabel.setText(guessesMessage);
guessesRemainingLabel.setAlignmentX(Component.LEFT_ALIGNMENT);
}
}
);
}
// drawLettersGuessed - Outputs the letters guessed
static void drawLettersGuessed(){
StringBuilder lettersBuilder = new StringBuilder();
for (Character el : Hangman.lettersGuessed) {
String s = el + " ";
lettersBuilder.append(s);
}
final String letters = lettersBuilder.toString();
SwingUtilities.invokeLater(
new Runnable() {
@Override
public void run() {
lettersGuessedLabel.setText("Already guessed: " + letters);
}
}
);
}
// drawSecretWord - draws the secret word with dashes & etc for user to use to guess the word with
static void drawSecretWord(){
StringBuilder word = new StringBuilder();
for(int i=0; i<Hangman.lettersRevealed.length; i++){
if (Hangman.lettersRevealed[i]) {
String s = Hangman.secretWord.charAt(i) + " ";
word.append(s);
} else {
word.append("_ ");
}
}
final String w = word.toString();
SwingUtilities.invokeLater(
new Runnable(){
@Override
public void run() {
secretWordLabel.setText(w);
}
}
);
}
// playAgainDialog - shows the play-again confirm dialog
static int playAgainDialog(String m){
return JOptionPane.showConfirmDialog(frame, m, "Play again?", JOptionPane.YES_NO_OPTION, JOptionPane.PLAIN_MESSAGE);
}
// GETTERS
private static String getText(){
return textField.getText();
}
// SETTERS
static void setText(final String t){
SwingUtilities.invokeLater(
new Runnable() {
@Override
public void run() {
textField.setText(t);
}
}
);
}
// ActionListener
private static class GuessListener implements ActionListener {
@Override
public void actionPerformed(ActionEvent ev){
String guess = getText();
if(Hangman.checkUserGuess(guess)) {
Hangman.updateSecretWord(guess);
drawSecretWord();
if(Hangman.lettersGuessed.size() != 0) // No letters have been guessed by the user at the beginning
drawLettersGuessed();
// Checks if the user has won or lost
if (Hangman.checkIfWon())
Hangman.winSequence();
else if (Hangman.guessesRemaining == 0)
Hangman.loseSequence();
}
}
}
}
Answer:
Is it ok to have so many static member variables?
No, it's almost always a sign of bad design (and not just because of the static variables, but also the static functions; only utility functions should be static). Another bad sign is that you import javax.swing.* in your logic class.
Your classes also do too many things, which makes them very static, hard to read, and hard to write automated tests for.
I would at least create:
a HangmanGame which contains the word that currently has to be guessed, the guessed letters, the revealed letters, and the remaining guesses. Maybe the alphabet set as well. But it doesn't deal with any user input or the like; it just stores the game data and handles the logic. The constructor would only take the word to be guessed, and maybe the remaining guesses. Methods might be public boolean guess(Character), public boolean isGameOver(), public boolean isGameWon(), String[] getWrongGuessesMade() (or create a Guess class, which then can have a field wrong, and return a list of those classes), etc. All these methods should not be static.
HangmanMain: the main game loop. Build gui, game, etc; manage play again, etc.
WordReader: gets the words
HangmanGUI: pretty much as before, but don't let it be responsible for getting the data it needs. drawGuessesRemaining() for example could look like drawGuessesRemaining(int guesses) and then be used like this in HangmanMain:
// init gui somewhere at the beginning:
HangmanGUI hangmanGUI = new HangmanGUI();
// somewhere else:
hangmanGUI.drawGuessesRemaining(5);
I would move the GuessListener to its own class.
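To make the suggestion concrete, here is a minimal sketch of such a HangmanGame. The method names follow the ones proposed above; the bodies are my own guesses at the details, not code from the review:

```java
import java.util.HashSet;
import java.util.Set;

public class HangmanGame {
    private final String secretWord;
    private final Set<Character> guessed = new HashSet<Character>();
    private int guessesRemaining;

    public HangmanGame(String secretWord, int guessesRemaining) {
        this.secretWord = secretWord;
        this.guessesRemaining = guessesRemaining;
    }

    // Returns true if the guess hit a letter of the secret word.
    public boolean guess(char c) {
        if (!guessed.add(c)) {
            return false; // already guessed; no penalty, no progress
        }
        if (secretWord.indexOf(c) >= 0) {
            return true;
        }
        guessesRemaining--;
        return false;
    }

    public boolean isGameWon() {
        for (char c : secretWord.toCharArray()) {
            if (!guessed.contains(c)) {
                return false;
            }
        }
        return true;
    }

    public boolean isGameOver() {
        return guessesRemaining <= 0 || isGameWon();
    }

    public int getGuessesRemaining() {
        return guessesRemaining;
    }

    public static void main(String[] args) {
        HangmanGame game = new HangmanGame("java", 5);
        game.guess('j'); // hit
        game.guess('x'); // miss
        System.out.println("remaining: " + game.getGuessesRemaining());
    }
}
```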
Misc
don't import *, but import all classes explicitly, so a reader knows exactly what you use.
use private (or public if it makes sense) instead of the default package private.
use curly brackets even for one line statements
use JavaDoc style comments for more readable method documentation. Also, some comments could be improved (e.g. chooseSecretWord - selects a word: a word for what? What happens with the selected word?).
don't hardcode magic numbers such as 5, because it makes it hard to see what the 5 stands for, and it makes it hard to reconfigure the game. Move these to static fields. This is especially bad for lettersGuessed = new HashSet<Character>(26), because it is independent of the 26 used in the createAlphabetSet function. So theoretically, I could reconfigure the game to generate a bigger alphabet set (because I want to include ß and ä), raise the guessesRemaining to 28 (because who likes losing?), and then get an exception (a bit of a hypothetical, but I think the idea is clear).
You don't have to declare the type twice when declaring something. List<Integer> changeBool = new ArrayList<>() works fine (since Java 7).
some of your variable names are rather short (s, el, idx, ans, etc). This is only in a small context, so it's not that bad, but I would prefer more expressive names
your spacing is sometimes off | {
"domain": "codereview.stackexchange",
"id": 12332,
"tags": "java, gui, hangman"
} |
Recent news about Tokamak | Question: I first heard about the Tokamak in highschool 10 years ago and was wondering how far the technology has come since then. Can it sustain a reaction for more than a few seconds? Are these devices still huge or have they been made on a smaller scale?
Answer:
Can it sustain a reaction for more than a few seconds?
Yes, at JET.
Lifetime of the plasma: 20–60 s
At ITER:
It will operate over a wide range of ITER plasma scenarios, from short plasma pulses (a few hundred seconds) with enlarged fusion power (700 MW) to long plasma pulses of 3,000 s
Are these devices still huge or have they been made on a smaller scale?
They have to be large in order to give out more energy than they consume. This is the ITER link, the one planned and already being built, to get the size idea. | {
"domain": "physics.stackexchange",
"id": 1306,
"tags": "experimental-physics, fusion"
} |
Relation between a Quasistatic and a reversible process | Question: Why is it that if a process is reversible, it is quasi-static? Does it mean that the process is also non-dissipative if it is quasistatic?
Answer: A reversible process is defined as an ideal process, without friction, losses and entropy production. In general, such an ideal system can be represented most closely by a process with very low velocity. For example, compressing a gas piston has to be done very slowly to minimize losses and approach an ideal reversible system. In practice however, this is virtually impossible, and losses (even if very small losses) are inevitable. | {
"domain": "physics.stackexchange",
"id": 11689,
"tags": "thermodynamics, statistical-mechanics, reversibility"
} |
Curvature Singularities in Geodesically Complete Manifolds | Question: Do there exist manifolds which are geodesically complete, and yet have a curvature singularity? While I don't believe this is the case, I have yet to find a proper proof of the same.
Answer: Well, it's no wonder the theorem is difficult to prove false: it's actually true. There do exist manifolds that are geodesically complete and have curvature singularities - a necessary condition for which is that the existence of a divergence in a curvature invariant does not imply geodesic incompleteness.
Heuristically, this is because curvature singularities can occur due to the divergence of curvature invariants which are higher-order in the metric than the first-order condition imposed on the tangent vectors of a geodesic (more specifically, it only depends on the continuously differentiable structure of the metric), so the geodesic cannot "see" the curvature singularity. This result is in some sense converse to the implications of the strong cosmic censorship hypothesis [1]. You may find a few explicit examples in [2], although it's debatable as to whether such solutions are realisable once we impose physically reasonable conditions.
If you are willing to go beyond the framework of GR, then [3] explicitly identifies classes of wormhole solution spacetimes with complete causal geodesics, finite energy density, and curvature singularities in quadratic Palatini $f(R)$ gravity.
References:
[1]: Choquet-Bruhat, General Relativity and the Einstein Equations (2009)
[2]: Geroch, What is a Singularity in General Relativity?
[3]: Bejarano, What is a singular black hole beyond General Relativity?
"domain": "physics.stackexchange",
"id": 77587,
"tags": "general-relativity, differential-geometry, curvature, singularities, geodesics"
} |
navfn compile error | Question:
Hello,
For writing my own global planner, I cloned the source of the navigation stack:
git clone https://github.com/ros-planning/navigation.git
I copied the navfn package to my own workspace and built it with catkin_make, but this weird error comes:
/home/largo/ws/src/navfn/src/navfn_node.cpp: In member function ‘void navfn::NavfnWithCostmap::poseCallback(const ConstPtr&)’:
/home/largo/ws/src/navfn/src/navfn_node.cpp:85:34: error: no matching function for call to ‘costmap_2d::Costmap2DROS::getRobotPose(geometry_msgs::PoseStamped&)’
cmap_->getRobotPose(global_pose);
I had built this package before with the same steps!!!
Any help will be appreciated.
Best regards
Originally posted by zeynep on ROS Answers with karma: 47 on 2018-08-23
Post score: 0
Answer:
Did you only copy the one package to your workspace? You probably want to build the whole navigation stack from a consistent version. Also -- pay attention to which branch you are using -- we've made a lot of API changes in the melodic-devel branch (the default branch when cloning). If you are on Kinetic, you want to "git checkout kinetic-devel" within the git repo to get the right version.
Originally posted by fergs with karma: 13902 on 2018-08-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by zeynep on 2018-08-23:
thank you very much,
you are right, it is because of the wrong branch, i use kinetic but cloned melodic-devel branch | {
"domain": "robotics.stackexchange",
"id": 31627,
"tags": "ros, compile, ros-kinetic, navfn"
} |
On the Twin Paradox Again | Question: The search based on the term "Twin Paradox" gave (today) 538 results.
In all the answers, the answerers explained the phenomenon by referring to arguments sort of falling out of the framework of special relativity. I saw answers referring to acceleration and deceleration, changing coordinate systems, etc. Even Einstein referred to General Relativity when explaining the TP...
THE QUESTION
Is it not possible to explain the phenomenon purely within Special Relativity and without having to change the frame of reference?
EDIT
... and without referring to acceleration and deceleration, without having one of the twins stop and turn around or stop and start again?
Answer:
Is it not possible to explain the phenomenon purely within Special Relativity and without having to change the frame of reference?
It is possible to obtain the correct answer to the amount of time accumulated by either of the twins by using any single reference frame, without changing that frame at any point in the analysis. Whether or not such a calculation constitutes an “explanation” is a matter of opinion. I would tend to say “no” because the “paradox” is precisely about what happens when you incorrectly change reference frames.
To obtain the amount of time accumulated by any traveler we write their worldline as a parametric function of some parameter (using units where c=1), for example $r(\lambda)=(t(\lambda),x(\lambda),y(\lambda),z(\lambda))$ where $r$ is the worldline and $t$, $x$, $y$, and $z$ are the coordinates of the traveler in some reference frame whose metric is given by $d\tau$. Then, for any reference frame for any spacetime for any traveler, the amount of time is given by $\Delta\tau=\int_R d\tau$ where R is the total path of interest (i.e. all of the $r(\lambda)$ of interest). Because this is a completely general formula it applies for an inertial traveler or for a non inertial traveler, it also applies for an inertial reference frame or for a non inertial reference frame, it also applies in the presence of gravity or not.
For the specific case of an inertial frame we have $d\tau^2=dt^2-dx^2-dy^2-dz^2$ from which we can easily obtain $$\frac{d\tau}{dt}=\sqrt{1-\frac{dx^2}{dt^2}-\frac{dy^2}{dt^2}-\frac{dz^2}{dt^2}}=\sqrt{1-v^2}$$ So then $$\Delta\tau=\int_R d\tau=\int_R \frac{d\tau}{dt} dt = \int_R \sqrt{1-v^2} dt$$
Note, this last paragraph assumes an inertial frame (any inertial frame is the same). The usual mistake is to use the inertial frame expression in a non inertial frame. A similar procedure can be used in a non inertial frame, but you must use the appropriate expression for $d\tau$ | {
"domain": "physics.stackexchange",
"id": 58842,
"tags": "general-relativity, special-relativity, relativity"
} |
Implementation of warning storage | Question: I implemented a type (UserRuleManager) that will apply a list of rules to the User entity. I need your code review and your advice.
Here is usage of that type
private void ProcessUser(User user)
{
if (!_userRuleManager.ApplyGeneralRules(user))
{
SetWarning();
}
else if (!_userRuleManager.ApplySpecificRules(user))
{
SetWarning();
}
else
{
CreateUser(user);
}
}
private void SetWarning()
{
foreach (var warningMessage in _userRuleManager.GetWarnings())
{
Warnings.Add(warningMessage);
}
}
I have 2 types of rule: general and specific. First, I try to apply the general rules (ApplyGeneralRules) and, if that returns false, I get the list of warnings. The same goes for the specific rules (ApplySpecificRules).
In implementation of UserRuleManager I have:
private list of warnings:
private readonly List<string> _warnings;
private property
private bool IsVerified
{
get { return _warnings.Count == 0; }
}
method to return list with warnings
public ICollection<string> GetWarnings()
{
return _warnings;
}
method for general rules
public bool ApplyGeneralRules(User user)
{
_warnings.Clear();
ICollection<Rule> rules = _userRepository.GetActiveGeneralRules(user.CustomerId);
foreach (Rule rule in rules.OrderBy(x => x.Order))
{
if (RuleChecker.IsVerified(rule, user))
{
_warnings.Add(string.Format("General rule : {0}", rule.TypeId));
}
}
return IsVerified;
}
and method for specific rules
public bool ApplySpecificRules(User user)
{
_warnings.Clear();
var questions = GetQuestions(user);
if (!questions.Any())
{
_warnings.Add(string.Format("No questions with code {0}", user.QuestionCodeId));
return false;
}
foreach (var question in questions.OrderBy(q => q.Order))
{
var rules = _userRepository.GetSpecificRules(user.CustomerId, question.Id);
foreach (var rule in rules)
{
if (!RuleChecker.IsVerified(rule, user))
{
_warnings.Add(string.Format("Specific rule: {0} -> {1} -> {2}", question.TypeId, question.CodeId, rule.TypeId));
}
}
}
return IsVerified;
}
Could you please advise me on how I can improve this, or maybe implement a more proper relation between these entities? What pattern can I use here?
Thank you in advance, and let me know if I should give more information.
UPDATE
public static bool IsVerified(Rule rule, User user) {
RuleParameterValue daysParameter = rule.RuleParameterValues.SingleOrDefault(v => v.RuleParameterTypeId == Rule.Parameter.Days);
if (daysParameter == null) {
return false;
}
int days = daysParameter.IntValue;
DateTime? lastLoginDateUtc = _userRepository.GetLastLoginDate(user.Id);
if (lastLoginDateUtc == null || lastLoginDateUtc.Value.AddDays(days) <= DateTime.UtcNow) {
return true;
} else {
return false;
}
}
Answer: I think I got it :) I removed my previous answer as it was not SOLID enough (too much dependency on subtypes). Here is how it can be:
var repository = new UserRepository();
IRule<User> rule = new Validator<User>(
new GeneralRules(repository),
new SpecificRules(repository));
var warnings = rule.Validate(new User());
if(!warnings.Any())
Console.WriteLine("Add user");
else
foreach (var warning in warnings)
Console.WriteLine(warning);
Where:
class Warning : IComparable<Warning>
{
public Warning(string message, int priority)
{
Message = message;
Priority = priority;
}
public override string ToString() => Message;
public int CompareTo(Warning other) =>
other.Priority.CompareTo(Priority);
string Message { get; }
int Priority { get; }
}
And:
interface IRule<T>
{
IEnumerable<Warning> Validate(T subject);
}
And:
interface IRuleSource<T>
{
IEnumerable<IRule<T>> Query(T subject);
}
And:
class Validator<T> : IRule<T>
{
public Validator(params IRuleSource<T>[] sources)
{
Sources = sources;
}
public IEnumerable<Warning> Validate(T subject) =>
Sources
.Select(s => s
.Query(subject)
.SelectMany(r => r.Validate(subject))
.OrderBy(w => w))
.FirstOrDefault(sw => sw.Any()) ??
Enumerable.Empty<Warning>();
IEnumerable<IRuleSource<T>> Sources { get; }
}
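For readers who find the LINQ chain in `Validate` dense, here is a hypothetical paraphrase in Python of the same idea (all names are mine): query each rule source in turn and return the first non-empty, priority-sorted batch of warnings.

```python
def validate(sources, subject):
    """Return warnings from the first rule source that produces any."""
    for source in sources:
        warnings = [w for rule in source(subject) for w in rule(subject)]
        if warnings:
            # Highest priority first, mirroring the reversed CompareTo.
            return sorted(warnings, key=lambda w: -w[0])
    return []

# A "source" maps a subject to rules; a rule maps a subject to
# (priority, message) warnings.
general = lambda user: [lambda u: [] if u["ok"] else [(10, "general rule failed")]]
specific = lambda user: [lambda u: [(5, "specific rule failed")]]

warnings = validate([general, specific], {"ok": False})
```

Just as in the C# version, later sources are never consulted once an earlier source has produced warnings.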
And:
class User
{
public string Name { get; set; }
public int CustomerId { get; set; }
public int QuestionCodeId { get; set; }
}
class Question
{
public int Id { get; set; }
}
interface IUserRepository
{
IEnumerable<IRule<User>> GetActiveGeneralRules(int customerId);
IEnumerable<IRule<User>> GetSpecificRules(int customerId, int questionId);
IEnumerable<Question> GetQuestions(int questionCodeId);
}
And:
class GeneralRules : IRuleSource<User>
{
public GeneralRules(IUserRepository repository)
{
Repository = repository;
}
public IEnumerable<IRule<User>> Query(User subject) =>
Repository.GetActiveGeneralRules(subject.CustomerId);
IUserRepository Repository { get; }
}
And:
class SpecificRules : IRuleSource<User>
{
static IRule<User> MissingsQuestions { get; } = new MissingsQuestionsRule();
public SpecificRules(IUserRepository repository)
{
Repository = repository;
}
public IEnumerable<IRule<User>> Query(User subject) =>
Repository.GetQuestions(subject.QuestionCodeId)
.SelectMany(q => Repository.GetSpecificRules(subject.CustomerId, q.Id))
.DefaultIfEmpty(MissingsQuestions);
IUserRepository Repository { get; }
}
Where:
class MissingsQuestionsRule : IRule<User>
{
public IEnumerable<Warning> Validate(User subject)
{
yield return new Warning($"No questions with code {subject.QuestionCodeId}", 1000);
}
} | {
"domain": "codereview.stackexchange",
"id": 20254,
"tags": "c#, object-oriented, design-patterns"
} |
Checking whether a table has modified rows | Question: The company I am with has a lot of old code migrated over from ALPHA Basic, written back in the 1980s. As development languages have evolved (should I say, "died off"?) the code has been migrated to newer languages.
The company also hires a lot of new graduates from a local college. Sometimes, the code is written in a textbook fashion. I see a lot of code that could be written differently.
Here is one piece of code that I have encountered in a project I maintain:
Public ReadOnly Property HasModifiedRows() As Boolean
Get
Dim dt As DataTable
dt = Nothing
Try
dt = Me.GetTableObj.GetChanges(DataRowState.Modified)
If dt IsNot Nothing Then
Return True
Else
Return False
End If
Finally
If dt IsNot Nothing Then
dt.Dispose()
dt = Nothing
End If
End Try
End Get
End Property
The code looks deliberately spelled out, step for step. This could be to help our new developers understand what is going on, or it could be code that was migrated over from older languages that required all variables be defined at the top.
I do not know why, but seeing all of the unnecessary lines of code is like fingernails on a chalkboard.
My planned rewrite
I am constantly compelled to rewrite pieces of code like that above to this:
Public ReadOnly Property HasModifiedRows() As Boolean
Get
Using table As DataTable = GetTableObj.GetChanges(DataRowState.Modified)
Return True
End Using
Return False
End Get
End Property
Question and concerns
Is one design pattern better than another?
Is there a performance difference in the two?
Would my version with the using syntax have any unanticipated results?
As I understand the VB Using Statement, if an object enters a using block, it cannot be null and it will always dispose of itself whenever it goes out of scope. I could of course be wrong though. I do not know how to guarantee this in .Net.
Should I leave the old code alone?
If there is any benefit to rewriting it, I am certainly happier with the streamlined look.
I'd really like to get some opinions on if I am doing any good by changing the code. If I am doing harm, I certainly would like to know!
Answer:
Is one design pattern better than another?
Every code statement is a potential defect. If you can express yourself in a shorter form without sacrificing readability, then by all means you should. The less code, the less to read and debug, the fewer suspicious elements.
Is there a performance difference in the two?
I don't actually know VB, but this is a common pattern in many languages, and it's clear that the performance should be the same, or the second might be even faster, thanks to some optimization magic by the compiler.
Would my version with the using syntax have any unanticipated results?
Increased understanding among readers? You changed not just the writing style, but you also gave the variables more meaningful names. It's intuitive and a lot easier to read than the first one. Both seem to accomplish exactly the same thing.
Should I leave the old code alone?
Probably. Don't fix what ain't broken. Although the original method is not implemented pretty, from the outside it doesn't really matter: it has a good name, and it does exactly what the name says and nothing else, it works. It's nice to make it nicer, but there isn't really a need. Now that you looked at it closely and know that it works, try to treat it as a black box, and remember not to open again. Use your energy for more creative tasks. | {
"domain": "codereview.stackexchange",
"id": 13908,
"tags": "vb.net, comparative-review"
} |
What is this weed? | Question: I first noticed this weed a few years ago, and saw its long, thin, V-shaped, bean-like seed pods on meter-high plants. The pods start out connected together at the far end, but eventually separate into an inverted V.
If I recall correctly, the thin pods split open to release very small seeds with a sail.
It somewhat resembles milkweed, but is branched, red stemmed and, of course, has these strange pods.
Location: My back yard, in Augusta County, in the Shenandoah Valley in Virginia.
Answer: My guess is Hemp Dogbane. The milkweed "family" has 140 species; all have milky sap. Add red stems, branching, downward-facing pods, and hemp dogbane comes up.
The flowers are white, not perfectly like common milkweed, and the pods hang down.
A photo with flowers and pods can be found here.
Hemp Dogbane is common throughout the country. | {
"domain": "biology.stackexchange",
"id": 7381,
"tags": "botany, species-identification"
} |
Largest binary gap | Question: I took a task from Codility to find longest sequence of zeros in binary representation of an integer.
For example, number 9 has binary representation 1001 and contains a binary gap of length 2. The number 529 has binary representation 1000010001 and contains two binary gaps: one of length 4 and one of length 3. The number 20 has binary representation 10100 and contains one binary gap of length 1. The number 15 has binary representation 1111 and has no binary gaps. The number 32 has binary representation 100000 and has no binary gaps.
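The stated examples are easy to verify with a short script (my own sketch, independent of the C++ code under review):

```python
def binary_gap(n):
    """Longest run of zeros bounded by ones in n's binary representation."""
    bits = bin(n)[2:].rstrip("0")       # trailing zeros are not a gap
    gaps = [g for g in bits.split("1") if g]
    return max(map(len, gaps), default=0)

for n, expected in [(9, 2), (529, 4), (20, 1), (15, 0), (32, 0)]:
    assert binary_gap(n) == expected
```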
I am a Java developer, but decided to implement it in C++. What could be improved? Is my approach correct? (The compiler provided by Codility supports C++14.)
int solution(int N) {
int longest_binary_gap = -1;
unsigned int mask = 1 << 31;
// Find the first occurrence of 1
for (; !(mask & N) && mask != 0; mask >>= 1);
while (mask != 0) {
// Move to the next binary digit
mask >>= 1;
int current_gap = 0;
// Count zeroes
for (; !(mask & N) && mask != 0; mask >>= 1, current_gap += 1);
// Check if the interval ends with 1 and if it is the longest found so far
if (mask != 0 && current_gap > longest_binary_gap) {
longest_binary_gap = current_gap;
}
}
return (longest_binary_gap < 0) ? 0 : longest_binary_gap;
}
Answer:
Using a signed type for an integer you want to do bit-manipulations on, or which cannot (must not) be negative is an obvious case of Java damage.
While in Java, an int is always a 32 bits 2's complement integer, C++ caters to a far wider range of targets. Admittedly, not all the options are still used (even rarely), but you should not assume the bit-width.
Java may not allow implicit conversion of integers to boolean, but C++ does. You seem to get that difference though. Sometimes. Maybe half the time.
Yes, you can start from the most significant bit. The most telling disadvantages are
that you will always traverse all bits,
that you have to use a mask, and
that you need the bit-width (look in <cstdint>/<stdint.h> for a macro, or better use std::numeric_limits from <limits>) for the mask.
Better start on the other end.
I really wonder why you initialize longest_binary_gap to -1 and then use a conditional operator to ensure you return at least 0. That's a lot of work for nothing.
You were exceedingly generous with long and descriptive names in your function, don't you think it's much more important for the function itself?
The function cannot throw by design, so make it noexcept.
There also doesn't seem to be a reason not to allow use in a compile-time constant expression, so mark it constexpr.
Re-coded:
constexpr int biggest_binary_gap(unsigned n) noexcept {
while (n && !(n & 1))
n >>= 1;
int r = 0;
for (;;) {
while (n & 1)
n >>= 1;
if (!n)
return r;
int m = 0;
while (!(n & 1)) {
++m;
n >>= 1;
}
if (r < m)
r = m;
}
} | {
"domain": "codereview.stackexchange",
"id": 33305,
"tags": "c++, programming-challenge, c++14, bitwise"
} |
Overwhelming confusion regarding electromagnetic induction in Step Up Transformers | Question: Suppose I have two step-up transformers $A$ and $B$, each with the same number of coils, the same length of wire, and the same material of input wire. But $A$ has an output wire of higher resistance than $B$.
According to the internet and my book, the induced emf doesn't depend on the resistance of material of the output wire. So as long as the power generator configuration and the number of coils in the output and input wires remain the same, the output induced emf will be equal in both $A$ and $B$. The output power cannot be more than input power since that violates the law of conservation of energy.
In transformer $A$, let the input power be $$P_I=V_{I}I_I$$ and the output power be $$P_O=V_OI_O$$ and since $P_I=P_O$(considering an ideal situation),
$$V_O > V_I$$ $$\therefore I_O<I_I$$
Now, in the second transformer $B$ with lower resistance output wire, $P'_I=V'_II'_I=P_I$ and the output voltage $V'_O=V_O$ since induced emf doesn't depend on resistance.
But by Ohm's Law, $$I'_O=\frac{V'_O}{R'_O}>I_O$$
Now, considering Lenz's law: if $R'_O$ decreases, $I'_O$ increases, so the opposing force increases, and for the same input power $V'_O$ decreases. But this means that $I'_O$ decreases now, so the resistive force decreases, so $V'_O$ increases again??? What exactly is happening, and how is conservation of energy being maintained? I'm getting overwhelmed, please help me with this confusion.
Answer: (a) A point about your terminology... Simple transformers have two coils, a primary coil and a secondary coil (or, if you like, an input and an output coil). Each coil consists of multiple turns. [You have been referring to a single turn as a 'coil'. Not a good idea.]
(b) Since there is a current in the A and B secondary coils they must be connected to loads, which we can take to be resistors. So when you say "But $A$ has an output wire of higher resistance than $B$", I'm interpreting this to mean that its load resistance is greater.
(c) You have $P_I'=V_I'I_I'=P_I$. But you shouldn't be equating $P_I'$ to $P_I$. For transformer B the load resistance is smaller so $I_O'>I_O$. So more power is taken by the load (since, as you point out, $V_O'=V_O$). Therefore more power is taken by the primary (input) coil, that is $P_I'>P_I$.
(d) I'm not going to try and unravel your last paragraph. I'll explain ... Transformer behaviour can be understood to a good approximation by assuming that
• $\frac{V_O}{V_I}=\frac{\text{number of turns on secondary}}{\text{number of turns on primary}}$
• Power in = Power out, so $V_OI_O=V_II_I$
This is the 'black box' approach that you have been using. The first of these 'bullet points' looks like a simple application of Faraday's law, but to understand fully what's going on in terms of electromagnetic induction requires a different approach. A key consideration has to be that transformers don't work unless the current is continually changing (we'll assume alternating sinusoidally). The $V_I, I_I$ etc. that you have been using in the black box approach are usually taken as rms values (a sort of average). When you apply Lenz's law, you need to apply it to the variations in 'instantaneous' current due to its alternating nature.
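To make point (c) concrete, here is a toy calculation with made-up numbers for an ideal transformer: halving the load resistance leaves $V_O$ unchanged but doubles both the output current and the power drawn through the primary.

```python
def ideal_transformer(v_in, turns_ratio, r_load):
    """V_o is fixed by the turns ratio, I_o by Ohm's law, and the
    primary current follows from power in = power out."""
    v_out = v_in * turns_ratio
    i_out = v_out / r_load
    i_in = v_out * i_out / v_in
    return v_out, i_out, i_in

# Transformer A: 10 V input, step-up ratio 5, 100 ohm load.
v_a, i_a, i_in_a = ideal_transformer(10.0, 5.0, 100.0)
# Transformer B: identical, but with a 50 ohm load.
v_b, i_b, i_in_b = ideal_transformer(10.0, 5.0, 50.0)
```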
But you seem to have in mind an additional change in the instantaneous output current, due to a step-change in the load resistance. As well as causing a back-emf in the secondary coil, a back-emf will also be induced in the primary. What happens just after you've changed the load resistance will indeed be quite complicated (especially since changes in pds and currents due to the alternating supply voltage are going on at the same time). However a steady state will soon be restored, with new values of currents in the bulleted equations. I would suggest – and it is only a suggestion – that you don't try to understand these equations in terms of Lenz's law unless you have specifically been asked to do so. | {
"domain": "physics.stackexchange",
"id": 86346,
"tags": "electromagnetism, electricity, electrical-resistance, voltage, electromagnetic-induction"
} |
1+1 Numerical Relativity Simulation | Question: I am trying to teach myself numerical relativity by starting with the simplest non-trivial scenario: a 1+1 vacuum spacetime with a non-trivial initial slice. Essentially, I am following this paper and trying to reproduce the results for the flat 2d (1+1) spacetime. The problem is also discussed on page 364 of "Introduction to 3+1 relativity".
Since there is only one spatial dimension, let $g\equiv g_{xx}$ and $K\equiv K_{xx}$. Also take the shift $\beta^{i}$ to be zero. The system of PDEs to be evolved is made first order by defining $D_{\alpha}\equiv\partial_{x}\ln \alpha$, $D_{g}\equiv\partial_{x}\ln g$, and $\tilde{K}\equiv\sqrt{g}K$.
Thus the system consists of the five evolving fields:
$$
\partial_{t}\alpha = -\alpha^{2}f\frac{\tilde{K}}{\sqrt{g}} \\
\partial_{t}g = -2\alpha\sqrt{g}\tilde{K} \\
\partial_{t}D_{\alpha} = -\partial_{x}\left(\alpha f\frac{\tilde{K}}{\sqrt{g}}\right) \\
\partial_{t}D_{g} = -\partial_{x}\left(2\alpha\frac{\tilde{K}}{\sqrt{g}}\right) \\
\partial_{t}\tilde{K} = -\partial_{x}\left(\frac{\alpha D_{\alpha}}{\sqrt{g}}\right)
$$
Here, I am using $f=1$ for the harmonic slicing condition.
The spacetime is a vacuum, but the problem is to study the gauge dynamics of using a non-trivial initial slice. Such a slice may be defined in the Minkowski coordinates:
$$t_{M}=h(x_{M})$$
Here, $h$ is chosen to be a Gaussian. The lapse $\alpha$ is taken to be initially 1 everywhere.
Therefore, the initial value problem is
$$
\alpha(0,x) = 1 \\
g(0,x) = 1 - h'^{2}\\
D_{\alpha}(0,x) = 0 \\
D_{g}(0,x) = \frac{2h'h''}{g} \\
K(0,x) = -\frac{h''}{g} \\
$$
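These initial conditions can be set up on a grid as follows (my own sketch; `sigma` and the sample points are illustrative choices, not values taken from the paper):

```python
import math

def initial_data(xs, sigma=10.0):
    """Initial slice t_M = h(x_M) with h(x) = exp(-x^2/sigma^2), lapse = 1."""
    alpha, g, D_alpha, D_g, K = [], [], [], [], []
    for x in xs:
        h = math.exp(-x * x / sigma**2)
        hp = -2.0 * x / sigma**2 * h                           # h'(x)
        hpp = (-2.0 / sigma**2 + 4.0 * x * x / sigma**4) * h   # h''(x)
        gx = 1.0 - hp * hp
        alpha.append(1.0)
        g.append(gx)
        D_alpha.append(0.0)
        D_g.append(2.0 * hp * hpp / gx)
        K.append(-hpp / gx)
    return alpha, g, D_alpha, D_g, K
```

Note that $g \le 1$ everywhere and $g > 0$ as long as $|h'| < 1$, i.e. as long as the initial slice stays spacelike.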
From this I discretize the system and advance all the fields simultaneously in an FTCS scheme. (I know it's unstable but I want to get it working before moving on to a more advanced scheme.)
The results are shown in the paper. Basically what is supposed to happen is that in every field, there develop two wave pulses propagating in either direction. They should travel at speed $\sqrt{f}=1$.
However, in my case, I get a sort of waveform that appears immediately but does not propagate, instead it just increases in amplitude. I am fairly sure there are no errors in my code, so I believe I am missing something conceptually. I have the right initial conditions which are also shown in the paper.
What puzzles me is that the author remarks that there should be "3 fields that propagate along the time lines (speed zero)". Two of these mentioned are $\alpha$ and $g$. Does this mean there is some sort of coordinate transformation that I need to make before trying to visualize the data?
Does anyone know of any explicit 1+1 numerical relativity routines that I could consult? I would like to see the actual code.
Answer: Well this is embarrassing. It turns out there was a very subtle typo in my code. So there is no real physical problem at all, and the approach described is accurate and there is no need for any coordinate transformations. However, I still found that I could not reproduce the same results in the paper. My propagating solutions were traveling much slower than what is expected, even when I used similar initial parameters for $h(x)$. I also discovered that the propagation speed in the lapse changed depending on my choice $\Delta t$ and $\Delta x$. Therefore, the problem stems from the first-order FTCS scheme chosen; it is not accurate enough.
For
$$h(x) = \mathrm{e}^{-\frac{x^{2}}{\sigma^{2}}}\\
\sigma = 10.0$$
And using the same discretization used in the paper, $\Delta t=0.125$ and $\Delta x=.25$, I find:
From this it is clear that the pulses do not travel at $\sqrt{f}=1$, instead it would appear to be closer to $4$. Note that since the lapse is simply a gauge function there would be no problem with the pulses traveling at light speed. | {
"domain": "physics.stackexchange",
"id": 67516,
"tags": "general-relativity, special-relativity, computational-physics, simulations"
} |
Is there any documentation available for rviz gui programming in python? | Question:
I have been trying to make a GUI using rviz in Python. The widgets I am creating have an rviz VisualizationFrame in them. There are GUI elements there, but there is no documentation anywhere that I could find, especially for the APIs to these elements.
Does anyone know any way I could solve this problem, apart from the many dir() calls I am currently making at each step to discover the callable functions?
I am not sure if this should be another question or can be included here, but is there any documentation for writing the config file for rviz (to load from)?
Originally posted by devesh on ROS Answers with karma: 104 on 2013-03-17
Post score: 0
Answer:
Depending on what you are doing, rqt may be a good solution.
It is designed to handle both Python and C++ plugins. In Groovy, there is an rviz plugin available for rqt.
Originally posted by joq with karma: 25443 on 2013-03-17
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by devesh on 2013-03-17:
Thanks, joq.
I did consider rqt too, but later stuck with pure rviz. The main thing I want here is a way to code using rviz despite the lack of documentation.
"domain": "robotics.stackexchange",
"id": 13405,
"tags": "ros, python, rviz, librviz"
} |
Minimal basis for Li | Question: Currently, I'm trying to get into the field of quantum chemistry and was reading about minimal basis sets. As I understand it, this is the minimal number of orbital/spatial wavefunctions needed for a specific number of electrons.
For example, H has one electron, so a minimal basis set would be $\mathrm{1s}$ (STO, STO-nG, or something else).
Li has 3 electrons, so my guess would be that we need $\mathrm{1s}$ and $\mathrm{2s}$. But in the literature, the $\mathrm{2p_x}$, $\mathrm{2p_y}$ and $\mathrm{2p_z}$ orbitals are also included in the minimal basis. Why is this the case? Together with the spin degrees of freedom, don't the three electrons already fit in the $\mathrm{1s}$ and $\mathrm{2s}$ orbitals?
Answer: A minimal basis set is a set that employs just one function to describe each orbital. For $\ce{H}$ you take into account one orbital ($\mathrm{1s}$), so you use one function, and for $\ce{Li}$ you consider five orbitals ($\mathrm{1s}$, $\mathrm{2s}$, $\mathrm{2p_x}$, $\mathrm{2p_y}$, $\mathrm{2p_z}$) and you use five functions, one for each.
What you described ("the minimal number of orbital/spatial wavefunctions needed for a specific number of electrons") would use a function for each occupied orbital - leaving unoccupied orbitals, even in a partially occupied subshell, out of the picture. This is a very bad idea - shells are a set of degenerate solutions for the hydrogen atom, so reducing the number of functions to describe them will seriously distort the depiction of the atom.
The usual approach is, instead, to always provide functions at least for the entire valence shell of the atom (and its inner shells). So for all 2nd period elements, you would use all 2nd shell orbitals ($\mathrm{2s}$, $\mathrm{2p_x}$, $\mathrm{2p_y}$ and $\mathrm{2p_z}$), even in a minimal basis set, no matter how many electrons actually occupy that shell. What still makes that basis set a minimal one is that you are only using one function for each orbital.
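This counting rule is easy to sketch in code (the shell occupancy thresholds below are hard-coded by me for the first 18 elements):

```python
# One basis function per orbital, for every shell up to and including
# the valence shell: n=1 has 1s; n=2 and n=3 each add s + three p.
ORBITALS_PER_SHELL = {1: 1, 2: 4, 3: 4}

def minimal_basis_size(z):
    """Minimal-basis function count for atomic number z <= 18."""
    shells = 1 if z <= 2 else 2 if z <= 10 else 3
    return sum(ORBITALS_PER_SHELL[n] for n in range(1, shells + 1))
```

This reproduces 1 function for H, 5 for Li (the case asked about), and 9 for Na.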
Similarly, a minimal basis set calculation for $\ce{Na}$ would include one function for each orbital in the 1st, 2nd and 3rd shells, and so on. | {
"domain": "chemistry.stackexchange",
"id": 9848,
"tags": "quantum-chemistry, basis-set"
} |
Configuring NLog | Question: I have this method:
public void AddNLogConfigurationTypeTagret()
{
var filePath = _loggerModel.file_path_pattern.Replace("\\YYYY\\MM\\DD", "");
var dateTime = DateTime.Now;
filePath += "\\" + dateTime.Year + "\\" + dateTime.Month.ToString() + "\\" + dateTime.Day;
var filePattern = _loggerModel.file_name_pattern.Split('.');
var dateTimeFormat = filePattern[1].Replace("{", "").Replace("}", "").ToString();
var fileName = filePattern[0] + '.' + dateTime.ToString(dateTimeFormat) + "." + filePattern[2];
var fileTargetWithStackTrace = new FileTarget();
fileTargetWithStackTrace.Layout = _loggerModel.layout + "|${stacktrace}";
fileTargetWithStackTrace.Name = FILE_WITH_STACK_TRACE;
fileTargetWithStackTrace.FileName = Path.Combine(filePath, fileName);
_nLogLoggingConfiguration.AddTarget(FILE_WITH_STACK_TRACE, fileTargetWithStackTrace);
var fileTargetWithoutStacktrace = new FileTarget();
fileTargetWithoutStacktrace.Name = FILE_WITHOUT_STACK_TRACE;
fileTargetWithoutStacktrace.Layout = _loggerModel.layout;
fileTargetWithoutStacktrace.FileName = Path.Combine(filePath, fileName);
_nLogLoggingConfiguration.AddTarget(FILE_WITHOUT_STACK_TRACE, fileTargetWithoutStacktrace);
}
It seems that there is some similarity inside the code, so I refactored it:
public void AddNLogConfigurationTypeTagret()
{
var filePath = _loggerModel.file_path_pattern.Replace("\\YYYY\\MM\\DD", "");
var dateTime = DateTime.Now;
filePath += "\\" + dateTime.Year + "\\" + dateTime.Month.ToString() + "\\" + dateTime.Day;
var filePattern = _loggerModel.file_name_pattern.Split('.');
var dateTimeFormat = filePattern[1].Replace("{", "").Replace("}", "").ToString();
var fileName = filePattern[0] + '.' + dateTime.ToString(dateTimeFormat) + "." + filePattern[2];
var fileTargetWithStackTrace = new FileTarget();
fileTargetWithStackTrace.Layout = _loggerModel.layout + "|${stacktrace}";
fileTargetWithStackTrace.Name = FILE_WITH_STACK_TRACE;
fileTargetWithStackTrace.FileName = Path.Combine(filePath, fileName);
AddFileTarget(fileTargetWithStackTrace);
var fileTargetWithoutStacktrace = new FileTarget();
fileTargetWithoutStacktrace.Name = FILE_WITHOUT_STACK_TRACE;
fileTargetWithoutStacktrace.Layout = _loggerModel.layout;
fileTargetWithoutStacktrace.FileName = Path.Combine(filePath, fileName);
AddFileTarget(fileTargetWithoutStacktrace);
}
private void AddFileTarget(FileTarget fileTarget)
{
_nLogLoggingConfiguration.AddTarget(fileTarget.Name, fileTarget);
}
Is it good enough?
Edit
By the comment, I made some changes.
public void AddNLogConfigurationTypeTagret()
{
var dateTime = DateTime.Now;
String.Format("\\yyyy\\MM\\dd", dateTime);
var filePath = _loggerModel.file_path_pattern.Replace("\\YYYY\\MM\\DD", "") + dateTime.ToString("\\\\yyyy\\\\MM\\\\dd", CultureInfo.InvariantCulture);
var filePattern = _loggerModel.file_name_pattern.Split('.');
var dateTimeFormat = filePattern[1].Replace("{", "").Replace("}", "").ToString();
var fileName = filePattern[0] + '.' + dateTime.ToString(dateTimeFormat, CultureInfo.InvariantCulture) + "." + filePattern[2];
var fileTargetWithStackTrace = new FileTarget();
fileTargetWithStackTrace.Layout = _loggerModel.layout + "|${stacktrace}";
fileTargetWithStackTrace.Name = FILE_WITH_STACK_TRACE;
fileTargetWithStackTrace.FileName = Path.Combine(filePath, fileName);
AddFileTarget(fileTargetWithStackTrace);
var fileTargetWithoutStacktrace = new FileTarget();
fileTargetWithoutStacktrace.Name = FILE_WITHOUT_STACK_TRACE;
fileTargetWithoutStacktrace.Layout = _loggerModel.layout;
fileTargetWithoutStacktrace.FileName = Path.Combine(filePath, fileName);
AddFileTarget(fileTargetWithoutStacktrace);
}
private void AddFileTarget(FileTarget fileTarget)
{
_nLogLoggingConfiguration.AddTarget(fileTarget.Name, fileTarget);
}
Answer: Some quick remarks:
I'm pretty sure filePath += "\\" + dateTime.Year + "\\" + dateTime.Month.ToString() + "\\" + dateTime.Day; can be replaced by dateTime.ToString() with an appropriate parameter.
Also, why are you doing a replace on _loggerModel.file_path_pattern when assigning its value to filePath and then concatenate a value to filePath? Do this in one go.
I would use string.Format instead of concatenation to create the filename.
You execute Path.Combine(filePath, fileName); twice. Is that an error? Because right now it looks like you're writing to the same file each time.
The creation of a FileTarget and the assignment of its Layout, Name and FileName should move to AddFileTarget (you'll need to change the parameters of that method, of course).
FILE_WITH_STACK_TRACE, FILE_WITHOUT_STACK_TRACE: constants should be PascalCase. | {
"domain": "codereview.stackexchange",
"id": 14320,
"tags": "c#, logging"
} |
Pending method invocation for a game | Question: I need to do pending method invocation in Java (Android) in my game project. A method represents actions in a scene.
Maybe you're familiar with Context.startActivity in Android, which does not immediately start the activity; the statements below it are still executed before the requested activity starts. I've found 3 ways, but I'm not sure which one I should choose, considering performance and PermGen issues (actually I don't know whether Android has a PermGen issue if there are many classes). These methods will not be called very frequently, maybe once in 5 seconds, but there may be very many of them (there are many scenes).
Please suggest the best way of doing this, considering memory usage (maybe PermGen too), performance, ease of coding, and freedom from bugs.
Using switch-case
I need to add each method call in switch-case.
public class MethodContainer {
public void invoke(int index) {
switch (index) {
case 0:
method0();
break;
.
.
.
case 100:
method100();
break;
}
}
private void method0() {
...
}
.
.
.
private void method100() {
...
}
}
Using for-loop and annotation/reflection
Like the above, but it makes coding easier (except for defining constants).
public class MethodContainer {
private static final int METHOD_0 = 0;
...
private static final int METHOD_100 = 100;
public void invoke(int index) {
for (Method m : MethodContainer.class.getMethods()) {
MyAnnotation annotation = m.getAnnotation(MyAnnotation.class);
if (annotation != null && annotation.value() == index) {
try {
m.invoke(this);
break;
} catch (...) {
...
}
}
}
}
@MyAnnotation(METHOD_0)
private void method0() {
...
}
.
.
.
@MyAnnotation(METHOD_100)
private void method100() {
...
}
}
Using inner classes
There's no need to declare constants and no reflection, but too many classes.
public class MethodContainer {
public void invoke(Runnable method) {
method.run();
}
private Runnable method0 = new Runnable() {
public void run() {
...
}
};
.
.
.
private Runnable method100 = new Runnable() {
public void run() {
...
}
};
}
Answer: Your first example (switch-case) is so poorly thought of in the Object Oriented community that there is a refactoring designed specifically to replace it with your third implementation: Replace Conditional With Polymorphism.
To be honest, I would have never thought of your second implementation. It isn't completely heinous, though I would probably use the constructor to create an array of method objects that you can index into directly - rather than run through a for loop every time you want to make a call.
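That "build the array once in the constructor" suggestion looks roughly like this (a hypothetical Python rendering; a Java version would index an array of Runnable objects the same way):

```python
class MethodContainer:
    def __init__(self):
        # Build the dispatch table once; every invoke() is then a
        # plain O(1) index instead of a reflection scan or a switch.
        self._methods = [self._method0, self._method1]

    def invoke(self, index):
        self._methods[index]()

    def _method0(self):
        self.last = "method0"

    def _method1(self):
        self.last = "method1"
```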
Between a revised second implementation (array with annotation/reflection) and the third implementation (inner classes) I would personally be more fond of the "inner class" version. I don't think that memory or performance is going to be a factor at this level and I think the third, inner class, version is much more readable (at least in Java - in other languages, you could implement something like the second approach directly). | {
"domain": "codereview.stackexchange",
"id": 2090,
"tags": "java, android"
} |
Should we expect clues from a gamma ray burst causing massive extinctions on Earth? | Question: I remember reading an article on the subject highlighting the periodicity of this event throughout history, supposing it could be caused by the Sun's orbit around the galaxy.
My question is: do we have any clues that can confirm this hypothesis? What should we expect to see? Shouldn't it leave some remains?
Answer: A gamma ray burst could cause an extinction event. There is no evidence that this has ever happened to the Earth.
Gamma ray bursts are rare events, we don't know exactly how rare, but to cause a mass extinction a GRB would need to be near (6000 light years or less is near in this context) and pointed right at Earth. GRBs are rare enough for it to be possible that this has never happened. It is possible that one or two of the extinctions in the last billion years were due to GRBs.
The mass extinction that is most likely to have been caused by a GRB is the Ordovician-Silurian Event, 444 million years ago. There is no positive evidence of a GRB, but it can't be excluded. The best that can be said is that the pattern of extinction is consistent with a GRB.
A gamma ray burst would leave few signs that could be used to prove it occurred. The radiation would strip the ozone layer, leaving the surface vulnerable to UV radiation. But it wouldn't leave a crater, or a geological layer.
Gamma ray bursts can occur when a hypernova occurs. There is no reason to think that the Earth is more vulnerable to a hypernova when it is in the galactic plane. The evidence of a 26 million year periodicity to mass extinctions is weak, and there is no evidence that closely ties this to the sun's orbit around the Milky Way. There is good reason to tie other extinctions to particular impacts; for example, there is no need to hypothesise that a GRB occurred 65 million years ago, at the K-T mass extinction.
Gamma ray bursts are one of the less likely causes of a mass extinction. Researcher David Thompson likens them to the danger of finding a polar bear in his closet. It could happen; it would be dangerous, but the chance of it happening is so low that it is not a risk worth worrying about. | {
"domain": "astronomy.stackexchange",
"id": 2300,
"tags": "gamma-ray-bursts"
} |
Mistake in eq. (10.7.19) Weinberg Vol I? | Question: In Weinberg Vol I, he writes in equation 10.7.19,
$$
\left\langle 0 | \phi (0) | {\mathbf{k}} \right\rangle = ( 2\pi ) ^{ - 3/2} \left( 2 \sqrt{ {\mathbf{k}} ^2 + m ^2 } \right) ^{ - 1/2} N,
\tag{10.7.19}$$
where $\phi $ is an unrenormalized scalar field, $ \left| {\mathbf{k}} \right\rangle $ is a one-particle state, and $ N $ is some constant. The $ {\mathbf{k}} $ dependence seems wrong. For one thing, this is often taken as the renormalization condition and set equal to $ 1 $. Furthermore, it's easy to show the braket should be $ {\mathbf{k}} $-independent by performing a Lorentz transform:
\begin{align*}
\left\langle 0 | \phi (0) | {\mathbf{k}} \right\rangle & = \left\langle 0 | U ( \Lambda ) ^\dagger \phi (0) U ( \Lambda ) | {\mathbf{k}} \right\rangle \\
& = \left\langle 0 | \phi (0) | \Lambda {\mathbf{k}} \right\rangle
\end{align*}
Since I am free to choose $ \Lambda $ as I please, the result can't be dependent on $ {\mathbf{k}} $. Is this a mistake or am I overlooking something?
Answer: I think the problem is you are taking the inner product to be Lorentz invariant. In Weinberg's convention the inner products are not Lorentz invariant; see Eqn. 2.5.19. For a covariant normalization the $\sqrt{2E}$ factor would be absent and your argument would go through. | {
"domain": "physics.stackexchange",
"id": 50487,
"tags": "quantum-field-theory, hilbert-space, normalization, textbook-erratum"
} |
Tic-Tac-Toe class written in Python | Question: How well is this Tic Tac Toe code written? Where may I improve it? This is meant to be a platform for a machine learning algorithm.
The code is written so any player may play first. Once a player plays first and starts the game, the code enforces turns. Overwrite protection is also enforced.
#! /usr/bin/python3
class xo:
    def __init__(self):
        self.board=[[0,0,0],[0,0,0],[0,0,0]];
        self.sym=[' ','0','X'];
        self.turn=0;
        self.modResX=-1;
        self.modResO=-1;
        self.won=False;
    def setPos(self,posx,posy,who):
        if (who>=0 & who<3):
            self.board[posx][posy]=who;
        return 0;
    def setX(self,posx,posy):
        # check if X is playing first.
        if self.turn==0:
            self.modResX=0;
            self.modResO=1;
        # check if X is not playing out of turn.
        if self.turn%2==self.modResX:
            # check if we are overwriting a position
            if (self.board[posx][posy]==0):
                self.board[posx][posy]=2;
                self.turn+=1;
                self.win(2);
                return 0;
            else:
                return -2;
        else:
            return -1;
    def setO(self,posx,posy):
        # check if O is playing first.
        if self.turn==0:
            self.modResX=1;
            self.modResO=0;
        # check if O is not playing out of turn.
        if self.turn%2==self.modResO:
            # check if we are overwriting a position
            if (self.board[posx][posy]==0):
                self.board[posx][posy]=1;
                self.turn+=1;
                self.win(1);
                return 0;
            else:
                return -2;
        else:
            return -1;
    def win(self,who):
        win=False;
        if self.board[0]==[who, who, who]:
            win=True;
        if self.board[1]==[who, who, who]:
            win=True;
        if self.board[2]==[who, who, who]:
            win=True;
        if ([self.board[0][0], self.board[1][1], self.board[2][2]]==[who,who,who]):
            win=True;
        transList=list(map(list, zip(*self.board)))
        if transList[0]==[who, who, who]:
            win=True;
        if transList[1]==[who, who, who]:
            win=True;
        if transList[2]==[who, who, who]:
            win=True;
        if ([transList[0][0], transList[1][1], transList[2][2]]==[who,who,who]):
            win=True;
        self.won=win;
        return win;
    def showBoard(self):
        for i in range(0, 3):
            for j in range (0,3):
                if j<2:
                    print (self.sym[self.board[i][j]],'| ',end="",flush=True)
                else:
                    print (self.sym[self.board[i][j]],end="",flush=True)
            print(end="\n")
        print(end="\n")
        return 0;
def main():
    print("Hello");
    g=xo();
    g.showBoard();
    print(g.setX(2,2));
    g.showBoard();
    print(g.won);
    print(g.setO(1,1));
    g.showBoard();
    print(g.won);
    print(g.setX(0,1));
    g.showBoard();
    print(g.won);
    print(g.setO(1,0));
    g.showBoard();
    print(g.won);
    print(g.setX(1,2));
    g.showBoard();
    print(g.won);
    print(g.setO(2,0));
    g.showBoard();
    print(g.won);
    # Playing out of turn
    print(g.setO(0,2));
    g.showBoard();
    print(g.won);
    # Overwriting a position
    print(g.setX(2,0));
    g.showBoard();
    print(g.won);
    print(g.setX(0,2));
    g.showBoard();
    print(g.won);
if __name__ == '__main__':
    main()
Readme
XO
A Python class for a game of X and O.
To use the class:
import xo
game=xo.xo();
game.showBoard()
This displays:
| X | 0
0 | 0 | X
0 | | X
game.setX(<posx>,<posy>)
This sets 'X' at a certain position. If successful, returns 0; else -1
game.setO(<posx>,<posy>)
This sets 'O' at a certain position. If successful, returns 0; else -1
game.won
After a move, this variable will be True if the last player who played has won. Else, it is False.
Updated Version
Followup question
Answer: Short answer: not terribly well! It's a good start (the class is a sensible idea and if __name__ == '__main__': is a good way to run the code), but you have made a couple of mistakes. Here are a few pointers.
Style
Python has a style guide, which you should read and follow. A few highlights:
Naming: classes should be CamelCase and function/method/attribute names should be lowercase_with_underscores.
Whitespace: you should put spaces around operators, e.g. self.board = [...] and after commas (e.g. self.board = [[0, 0, 0], ...]).
Semicolons: just don't!
Documentation
It's good to have a README, although it's not clear what format it's in, but much better is to include docstrings for modules, classes, methods and functions, so that the script has documentation built right into it. This can also be used to generate formatted documentation (see e.g. Sphinx) and by your IDE to provide tooltips and even e.g. type hinting.
DRY
Or "don't repeat yourself". For example, note that setX and setO are almost exactly the same; you could refactor this to a single method which is called by the other methods as appropriate.
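As an illustrative sketch of that refactoring (names like `_set` and the class-level constants are mine, not from the original code), the shared logic might look like:

```python
class XO:
    """Minimal sketch of the turn/overwrite logic with one shared method."""
    EMPTY, PLAYER_O, PLAYER_X = 0, 1, 2

    def __init__(self):
        self.board = [[self.EMPTY] * 3 for _ in range(3)]
        self.turn = 0
        self.first = None  # whoever moves first fixes the turn order

    def _set(self, x, y, player):
        if self.first is None:
            self.first = player
        # the player expected on this turn alternates, starting from `first`
        other = self.PLAYER_X + self.PLAYER_O - self.first
        expected = self.first if self.turn % 2 == 0 else other
        if player != expected:
            return -1  # out of turn
        if self.board[x][y] != self.EMPTY:
            return -2  # position already taken
        self.board[x][y] = player
        self.turn += 1
        return 0

    def set_x(self, x, y):
        return self._set(x, y, self.PLAYER_X)

    def set_o(self, x, y):
        return self._set(x, y, self.PLAYER_O)
```

This keeps the original's return-code convention (0 on success, -1 for out of turn, -2 for an occupied square) while removing the duplication.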
You could also significantly simplify main by using loops to reduce repetition, e.g.:
def main():
    print("Hello")
    g = xo()
    g.showBoard()
    now, nxt = g.setX, g.setO
    for x, y in [(2, 2), (1, 1), ...]:
        print(now(x, y))
        g.showBoard()
        print(g.won)
        now, nxt = nxt, now
"Magic numbers"
Like 1 and 2 (for X and O) and -1 and -2 (not clear) are best avoided - they don't provide any useful information to the reader and make refactoring very difficult. Instead, use named "constants", e.g.:
PLAYER_O = 1
PLAYER_X = 2
Then the code becomes clearer, e.g. self.win(PLAYER_X) tells you what's actually happening.
Naming
As well as the styling of the names, it's worth thinking about their meaning, too. It's best to avoid contractions (e.g. self.symbols is clearer than self.sym, and I've no idea what e.g. modResX really means). Better names means more readable code, which is a key aim of Python.
Functionality
It's conventional for a method to either change state and return None, or to return something but not change state - win returns something and changes state (although the return value is ignored anyway).
Rather than showBoard, I would be inclined to implement a __str__ method that returns the "friendly" representation of the board; then you can use print(g) rather than g.showBoard(). If you do retain the current functionality, note that again there's no need for a return value that's never used.
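For example, a sketch of the `__str__` idea (a standalone class for illustration, reusing the original's symbol list):

```python
class BoardView:
    SYMBOLS = ' 0X'  # index 0 = empty, 1 = '0', 2 = 'X', as in the original

    def __init__(self, board):
        self.board = board

    def __str__(self):
        # join each row's symbols with ' | ', and the rows with newlines
        return '\n'.join(' | '.join(self.SYMBOLS[cell] for cell in row)
                         for row in self.board)

print(BoardView([[0, 2, 1], [0, 0, 0], [1, 0, 2]]))
```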
0 is the default first argument to range, so you can write simply e.g. for i in range(3):, but note that it's neater to iterate over the board itself (i.e. for row in self.board:, etc.). | {
"domain": "codereview.stackexchange",
"id": 15518,
"tags": "python, python-3.x, tic-tac-toe"
} |
Why does this transfer function has a second zero | Question: I'm learning about $\mathcal Z$-transforms in DSP and I have a transfer function of the following form:
$$H(z)=\frac{2-3z^{-1}}{1-1.6z^{-1}+0.8z^{-2}}$$
When I calculate zeros and poles of this system by hand,
I get these poles from the equation
\begin{align}
\frac{z^2-1.6z+0.8}{z^2} &= 0\\
p_1&=0.8-0.4i\\
p_2&=0.8+0.4i
\end{align}
And a single zero from the equation
\begin{align}
\frac{2z-3}{z}&=0\\
z_1&=1.5+0i
\end{align}
However, when I use Wolfram Alpha to compute the poles and zeros, it also lists a second zero positioned at the origin. Or in MATLAB:
The question is, where does that second zero come from?
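(For what it's worth, the MATLAB/Wolfram result is easy to reproduce numerically — an illustrative numpy sketch, not from the original post:)

```python
import numpy as np

# H(z) = (2 - 3 z^-1) / (1 - 1.6 z^-1 + 0.8 z^-2).
# Multiplying numerator and denominator by z^2 to clear the negative powers
# gives numerator 2z^2 - 3z (note the trailing zero coefficient below)
# over denominator z^2 - 1.6z + 0.8.
zeros = np.roots([2, -3, 0])      # -> 1.5 and 0
poles = np.roots([1, -1.6, 0.8])  # -> 0.8 +/- 0.4j
print(zeros, poles)
```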
Answer: Maybe this way it is simpler to understand:
$$1-1.6z^{-1}+0.8z^{-2}=\frac{z^2-1.6z+0.8}{z^2}$$
Hence,
$$\frac{2-3z^{-1}}{1-1.6z^{-1}+0.8z^{-2}}=\frac{2-3z^{-1}}{\frac{z^2-1.6z+0.8}{z^2}}=\frac{z^2(2-3z^{-1})}{z^2-1.6z+0.8}$$
whose numerator is $2z^2-3z$. | {
"domain": "dsp.stackexchange",
"id": 4441,
"tags": "z-transform, transfer-function, poles-zeros"
} |
Understanding the Bennett Acceptance Ratio (BAR) method | Question: The Bennett acceptance ratio method (BAR) is a method for computing free energy differences. See also
Wikipedia
Alchemistry.org
Original publication
I have problems understand the very beginning of the derivation in the original publication:
...weighting function. Let $W(q_1,...,q_N)$ be an everywhere-finite function of the coordinates. It then follows easily that $$\frac{Q_0}{Q_1}=\frac{Q_0\int W \exp(-U_0-U_1)dq^N}{Q_1\int W \exp(-U_1-U_0)dq^N}=\frac{\langle W \exp(-U_0)\rangle_1}{\langle W \exp(-U_1)\rangle_0} \tag{6}$$
IIb. Optimized Acceptance Ratio Estimator--Large Sample Regime
Optimization of the free energy estimate is most easily carried out in the limit of large sample sizes. Let the available data consist of $n_0$ statistically independent configurations from the $U_0$ ensemble and $n_1$ from the $U_1$ ensemble, and let this data be used in Eq. (6) to obtain a finite sample estimate of the reduced free energy difference $\Delta A=A_1-A_0=\ln(Q_0/Q_1)$. For sufficiently large sample sizes, the error of this estimate will be nearly Gaussian, and its expected square will be $$\text{Expectation of } (\Delta A_{est}-\Delta A)^2 \approx \frac{\langle W^2 \exp(-2U_1)\rangle_0}{n_0[\langle W \exp(-U_1) \rangle_0]^2}+\frac{\langle W^2 \exp(-2U_0)\rangle_1}{n_1[\langle W \exp(-U_0) \rangle_1]^2}-\frac{1}{n_0}-\frac{1}{n_1}=\frac{\int ((Q_0/n_0)\exp(-U_1)+(Q_1/n_1)\exp(-U_0))W^2\exp(-U_0-U_1)dq^N}{[\int W\exp(-U_0-U_1)dq^N]^2}-(1/n_0)-(1/n_1)$$
Can someone explain to my how this formula is derived?
E.g. why are there no logarithms? (energy differences in terms of partition functions involve the natural logarithm ln, as written in the text above). Or why are there no sums in the formula? (The estimated value of the free energy difference should be a finite sum of weighted values).
Answer: Equation 1 shows what Bennett wrote in his paper as equation 7. From now on, I will call this equation 1.
\begin{equation}
\begin{aligned}
&\text{Expectation of} \ (\Delta A_{est} -\Delta A)^2\\
&\approx \frac{\langle W^2 \exp(-2U_1)\rangle _0}{n_0[\langle W\exp(-U_1)\rangle _0]^2}+\frac{\langle W^2\exp(-2U_0)\rangle _1}{n_1[\langle W\exp(-U_0)\rangle _1]^2}-\frac{1}{n_0}-\frac{1}{n_1}\\
&=\frac{\int((Q_0/n_0)\exp(-U_1)+(Q_1/n_1)\exp(-U_0))W^2\exp(-U_0-U_1)dq^N}{[\int W\exp(-U_0-U_1)dq^N]^2}\\
&\ \ \ \ -(1/n_0)-(1/n_1)
\end{aligned}
\tag{1}
\end{equation}
It is the first approximation ($\approx$) step where many, including myself, are confused. First of all, recall that Bennett earlier in the paper has shown the following equality,
\begin{equation}
\begin{aligned}
&\frac{Q_0}{Q_1} = \frac{Q_0\int W \exp(-U_0-U_1)dq^N}{Q_1\int W \exp(-U_1-U_0)dq^N} = \frac{\langle W \exp(-U_0)\rangle _1}{\langle W \exp(-U_1)\rangle _0},
\end{aligned}
\tag{2}
\end{equation}
which is numbered equation 6 in the paper. Recall that $\langle \rangle _1$ indicates that the values in the bracket are averaged over configurations at state 1, whereas $\langle \rangle _0$ indicates that the values in the bracket are averaged over configurations at state 0. The result here is exact, meaning that no approximation is used. Using this relation and the fact that the free energy difference can be written in the following form, $\Delta A = A_1-A_0=\ln(Q_0/Q_1)$, where $\beta$ ($\frac{1}{k_B T}$) is included in $A_1$ and $A_0$, the very first line of equation 1 can be written as follows.
\begin{equation}
\begin{aligned}
&\text{Expectation of} (\Delta A_{est} -\Delta A)^2\\
&= \langle (\Delta A_{est} -\Delta A)^2\rangle \\
&= \langle (\ln\frac{Q_{0,est}}{Q_{1,est}}-\ln\frac{Q_0}{Q_1})^2\rangle \\
&= \langle (\ln\frac{\frac{1}{n_1}\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j}))}{\frac{1}{n_0}\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}-\ln\frac{\langle W \exp(-U_0)\rangle _1}{\langle W \exp(-U_1)\rangle _0})^2\rangle
\end{aligned}
\tag{3}
\end{equation}
$Q_{0,est}$, and $Q_{1,est}$ indicate estimated $Q_0$ and $Q_1$ from simulation data, $n_0$ and $n_1$ are number of samples obtain at each state, 0 and 1, $r_{0,i}$ is the $i$th point obtained from simulation at state 0, and $r_{1,j}$ is the $j$th point obtained from simulation at state 1. Now, rearrange the final line in equation 3.
\begin{equation}
\begin{aligned}
&= \langle (\ln\frac{\frac{1}{n_1}\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j}))}{\langle W \exp(-U_0)\rangle _1}-\ln\frac{\frac{1}{n_0}\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}{\langle W \exp(-U_1)\rangle _0})^2\rangle
\end{aligned}
\tag{4}
\end{equation}
In the large sampling regime, the following approximations can be made.
\begin{equation}
\begin{aligned}
\frac{\frac{1}{n_1}\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j}))}{\langle W \exp(-U_0)\rangle _1} \approx 1\\
\frac{\frac{1}{n_0}\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}{\langle W \exp(-U_1)\rangle _0} \approx 1\\
\end{aligned}
\tag{5}
\end{equation}
Also, when $x \approx 1$, $\ln(x) \approx x-1$, and as a result equation 4 can be rewritten as follows.
\begin{equation}
\begin{aligned}
&= \langle (\frac{\frac{1}{n_1}\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j}))}{\langle W \exp(-U_0)\rangle _1}-\frac{\frac{1}{n_0}\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}{\langle W \exp(-U_1)\rangle _0})^2\rangle \\
&=\langle \frac{(\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j})))^2}{n_1^2\langle W \exp(-U_0)\rangle _1^2}+\frac{(\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i})))^2}{n_0^2\langle W \exp(-U_1)\rangle _0^2}\\
&\ \ \ \ -2\frac{\sum\limits^{n_1}_{j=1}W \exp(-U_0(r_{1,j}))\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}{n_0 n_1 \langle W \exp(-U_0)\rangle _1\langle W \exp(-U_1)\rangle _0}\rangle \\
&=\langle \frac{\sum\limits^{n_1}_{j=1}W^2 \exp(-2U_0(r_{1,j}))}{n_1^2\langle W \exp(-U_0)\rangle _1^2}\rangle +\langle \frac{\sum\limits^{n_1}_{j=1}\sum\limits^{n_1}_{j\neq j'}WW'\exp(-U_0(r_{1,j}))\exp(-U_0(r_{1,j'}))}{n_1^2\langle W \exp(-U_0)\rangle _1^2}\rangle \\
&+\langle \frac{\sum\limits^{n_0}_{i=1}W^2 \exp(-2U_1(r_{0,i}))}{n_0^2\langle W \exp(-U_1)\rangle _0^2}\rangle +\langle \frac{\sum\limits^{n_0}_{i=1}\sum\limits^{n_0}_{i\neq i'}WW'\exp(-U_1(r_{0,i}))\exp(-U_1(r_{0,i'}))}{n_0^2\langle W \exp(-U_1)\rangle _0^2}\rangle \\
&-2\langle \frac{\sum\limits^{n_1}_{j=1}W \exp (-U_0(r_{1,j}))\sum\limits^{n_0}_{i=1}W \exp(-U_1(r_{0,i}))}{n_0n_1\langle W \exp(-U_0)\rangle _1\langle W \exp(-U_1)\rangle _0}\rangle ,
\end{aligned}
\tag{6}
\end{equation}
where $W'$ is the value of function $W$ with configuration $r(1,j')$ or $r(0,i')$.
Now, let's take a closer look at the numerator on the second term of the last line on equation 6 which is,
\begin{equation}
\begin{aligned}
&\langle \sum\limits^{n_1}_{j=1}\sum\limits^{n_1}_{j\neq j'}WW'\exp(-U_0(r_{1,j}))\exp(-U_0(r_{1,j'}))\rangle \\
&=\sum\limits^{n_1}_{j=1}\sum\limits^{n_1}_{j\neq j'}\langle WW'\exp(-U_0(r_{1,j}))\exp(-U_0(r_{1,j'}))\rangle
\end{aligned}
\tag{7}
\end{equation}
The second summation is over $j'$ where $j'$ is not equal to $j$. With this in mind, it is safe to assume that the function that depends on $j$, $W \exp(-U_0(r_{1,j}))$, is uncorrelated with the function that depends on $j'$, $W' \exp(-U_0(r_{1,j'}))$. If, for instance, function $f$ and $g$ are uncorrelated, average of $f$ multiplied with $g$, $\langle f g\rangle $, are equal to multiple of average $f$ and average $g$.
\begin{equation}
\begin{aligned}
\langle f g \rangle = \langle f \rangle \langle g \rangle \ \text{if f and g are uncorrelated}
\end{aligned}
\tag{8}
\end{equation}
Then we can rearrange the last line on equation 7,
\begin{equation}
\begin{aligned}
&\sum\limits^{n_1}_{j=1}\sum\limits^{n_1}_{j\neq j'}\langle WW'\exp(-U_0(r_{1,j}))\exp(-U_0(r_{1,j'}))\rangle \\
&=\sum\limits^{n_1}_{j=1}\sum\limits^{n_1}_{j\neq j'}\langle W \exp(-U_0(r_{1,j}))\rangle \langle W \exp(-U_0(r_{1,j'}))\rangle \\
&=n_1(n_1-1)\langle W \exp(-U_0)\rangle _1^2.
\end{aligned}
\tag{9}
\end{equation}
Note that on the last line of equation 9, subscript 1 on the average ($\langle \rangle $) indicates that the average is taken over configurations at state 1, whereas the average on the second line of equation 9 does not have subscript 1. However, it was implied in the equation on the second line of equation 9 that the average is the average over state 1, as the configurations taken into account are the ones from state 1 ($r_{\textbf{1},j}$, $r_{\textbf{1},j'}$). By applying the same analogy to the 4th and 5th term on the last line of equation 6, and taking the summation signs out from the bracket ($\langle \rangle $), we get,
\begin{equation}
\begin{aligned}
&=\sum\limits^{n_1}_{j=1}\frac{1}{n_1^2}\frac{\langle W^2 \exp(-2U_0(r_{1,j}))\rangle }{\langle W \exp(-U_0)\rangle _1^2} + \frac{n_1(n_1-1)}{n_1^2}\frac{\langle W \exp(-U_0)\rangle _1^2}{\langle W \exp(-U_0)\rangle _1^2}\\
&+\sum\limits^{n_0}_{i=1}\frac{1}{n_0^2}\frac{\langle W^2 \exp(-2U_1(r_{0,i}))\rangle }{\langle W \exp(-U_1)\rangle _0^2} + \frac{n_0(n_0-1)}{n_0^2}\frac{\langle W \exp(-U_1)\rangle _0^2}{\langle W \exp(-U_1)\rangle _0^2}\\
&-2\frac{n_0n_1}{n_0n_1}\frac{\langle W \exp(-U_0)\rangle _1\langle W \exp(-U_1)\rangle _0}{\langle W \exp(-U_0)\rangle _1\langle W \exp(-U_1)\rangle _0}
\end{aligned}
\tag{10}
\end{equation}
After some basic algebra, we get,
\begin{equation}
\begin{aligned}
&=\frac{\langle W^2 \exp(-2U_0(r_{1,j}))\rangle }{n_1\langle W \exp(-U_0)\rangle _1^2}+\frac{n_1-1}{n_1}+\frac{\langle W^2 \exp(-2U_1(r_{0,i}))\rangle }{n_0\langle W \exp(-U_1)\rangle _0^2}+\frac{n_0-1}{n_0}-2\\
&=\frac{\langle W^2 \exp(-2U_0)\rangle _1}{n_1\langle W \exp(-U_0)\rangle _1^2}+\frac{\langle W^2 \exp(-2U_1)\rangle _0 }{n_0\langle W \exp(-U_1)\rangle _0^2}-\frac{1}{n_1}-\frac{1}{n_0},
\end{aligned}
\tag{11}
\end{equation}
which is exactly what Bennett wrote on the second line of equation 7 in the paper. | {
"domain": "physics.stackexchange",
"id": 54652,
"tags": "statistical-mechanics, simulations, physical-chemistry, molecular-dynamics"
} |
Why was the evolution of large, slowly reproducing organisms preferred? | Question: One of the very basic facts that Darwin pointed out was about the life span of organisms. Organisms with smaller lifespan reproduce quickly and hence variations are produced faster. This helps in faster evolution.
One of the main goals of living organisms is to reproduce, this fact being the backbone of neo-Darwinism.
For organisms with longer life span, say humans, the amount of time required for new generations to come up, as well as the energy requirement is high, much higher if compared to bacteria. Then why did evolution of complexity arise if bacterial life forms could survive in harsh conditions and reproduce and mutate faster?
Answer: The simple answer is that the evolution of large, slowly reproducing organisms is not preferred: it is simply not selected against.
The key mistake in your thinking is this statement:
One of the main goals of living organisms is to reproduce
Most living organisms have no such goal, they simply take actions that have, historically, led to the continuation of their lineage.
In a complex world, there are many different strategies that can lead to survival and propagation of an organism, and cooperation between cells is one of them. Even simple bacteria often form large multicellular aggregates, which can provide physical advantages for their members over dissociated individuals, such as resistance to physical damage and formation of protected microclimates. The inside of your body is another such protected microclimate. So multicellularity can be quite advantageous.
As for limiting reproduction: remember that the closest competitors every new organism has are its own relatives, who occupy the same space and compete for the same resources. Even bacteria will often limit their reproduction when resources are limited. Reproduction also takes significant resources. Faster reproduction, then, is also not necessarily advantageous for long-term survival.
Bottom line: neither large nor small, fast nor slow is preferred. Instead, the study of evolution predicts that we will see what we do see: a multiplicity of forms and strategies adapted for different niches in the ecosystem formed by their interactions with other species and the external environment. | {
"domain": "biology.stackexchange",
"id": 9919,
"tags": "evolution"
} |
Normal equation for linear regression | Question: I am going through the derivation of normal equation for multivariate linear regression. The equation is given by :
$\theta = (X^{T}X)^{-1}X^{T}Y$
The cost function is given by:
$J(\theta) = \frac{1}{2m}(X\theta-Y)^{T}(X\theta-Y)$
Simplifying,
$J(\theta) = \frac{1}{2m}(\theta^{T}X^{T}X\theta - 2(X\theta)^{T}Y + Y^{T}Y)$
Differentiating w.r.t $\theta$ and equating to zero
$\frac{dJ(\theta)}{d\theta} = \frac{d}{d\theta}(\theta^{T}X^{T}X\theta)-\frac{d}{d\theta}(2(X\theta)^{T}Y) = 0$
I want to specifically understand the differentiation of the left term:
$\frac{d}{d\theta}(\theta^{T}X^{T}X\theta) = X^{T}X\frac{d}{d\theta}(\theta^{T}\theta) $
$\frac{d}{d\theta}(\theta^{T}\theta) = [\frac{d}{d\theta_1}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2}),\frac{d}{d\theta_2}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2}) ,...., \frac{d}{d\theta_n}(\theta_1^{2}+\theta_2^{2}+...\theta_n^{2})]$
$\frac{d}{d\theta}(\theta^{T}\theta) = [2\theta_1,2\theta_2,...,2\theta_n] $
$\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta^{T} $
But the final equation is obtained by using $\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta$
How is $\frac{d}{d\theta}(\theta^{T}\theta) = 2\theta$ and not $2\theta^{T}$
Answer: It is basically a matter of convention, which becomes a bit more clear if you write the whole thing in terms of elements, rather than vectors. Consider
$$ \theta^T \theta = \sum_{n=1}^N \theta_n \theta_n = \sum_{n=1}^N \theta_n^2 $$
What is usually meant if you write $\frac{df}{d\theta}$ is that you take the gradient of a scalar $f$, i.e. you get a vector $\frac{df}{d\theta}$ where each element $i$ is the derivative with respect to the respective coordinate $\theta_i$:
$$ \left( \frac{df}{d\theta} \right)_i = \frac{df}{d\theta_i} $$
Let's apply this to $\theta^T\theta$:
$$ \left( \frac{d}{d\theta} \theta^T\theta \right)_i = \frac{d}{d\theta_i} \sum_{n=1}^N \theta_n^2 = 2 \sum_{n=1}^N \theta_n \frac{d\theta_n}{d\theta_i} $$
and since $\frac{d\theta_n}{d\theta_i} = \delta_{i,n}$:
$$ \left( \frac{d}{d\theta} \theta^T\theta \right)_i = 2 \theta_i $$
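A quick numerical check of this component-wise result (an illustrative numpy sketch, not part of the derivation):

```python
import numpy as np

theta = np.array([1.0, -2.0, 3.0])

def f(t):
    return t @ t  # theta^T theta

# gradient by central finite differences, one coordinate at a time
eps = 1e-6
grad = np.array([(f(theta + eps * e) - f(theta - eps * e)) / (2 * eps)
                 for e in np.eye(3)])
print(grad)  # close to 2 * theta, i.e. [2., -4., 6.]
```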
If you interpret this as columns or rows is pretty much up to you. Commonly we would take $\theta_i$ as the elements of a column vector $\theta$ and for practicality we usually want the gradient to live in the same vector space as our coordinates, so $\left( \frac{df}{d\theta} \right)_i$ would also be a column vector. Hence
$$ \frac{d}{d\theta} \theta^T\theta = 2 \theta $$ | {
"domain": "datascience.stackexchange",
"id": 6148,
"tags": "machine-learning, linear-regression"
} |
Open access Adept-like dataset? (LLM-to-computer-input) | Question: Here's a demo for Adept ACT-1 for Transformers. I don't doubt that one could create a demo video using zero-shot; actually I tested just now and the basic chat.openai.com interface was able to do some web browsing stuff on the first try if I just prompted it with a list of API methods (e.g. el.click) and streamed the DOM to it.
But I assume that, to take their product to the next level, they must be training or fine-tuning on some actual multimodal data, e.g. session recordings labeled with natural language? Am I wrong?
Are there any open-access datasets like this that could be used for open-source models? I'm actually interested in native (e.g. pixel + natural language to game input) more than browser.
Answer: There's a couple:
Mind2Web: https://arxiv.org/pdf/2306.06070.pdf
Webshop: https://webshop-pnlp.github.io/
MiniWoB++: https://arxiv.org/pdf/2305.11854.pdf
These papers also reference other, similar datasets. | {
"domain": "ai.stackexchange",
"id": 3884,
"tags": "large-language-models, training-datasets"
} |
Did researchers evolve multicellular yeast or did they just turn on multicellularity? | Question: In this new paper "Experimental evolution of multicellularity" found via Ars Technica the researchers describe having developed multicellularity and apoptosis within 60 days from a unicellular yeast species.
Is it possible that what they have done is merely turn on an ability of that species that evolved previously and just lay dormant? Do we know that that yeast species and all its ancestors were never multicellular? The paper doesn't even mention the possiblity of this.
Answer: You're right:
Within the Fungi, simple linear multicellularity of hyphae occurs in all major clades (see below), but only Ascomycota and Basidiomycota display more complex two- and three-dimensional multicellularity in the form of sexual spore-producing fruiting bodies. In both of these groups, reversals to unicellular lifeforms have occurred, for example, Saccharomyces and many other related yeasts in the Saccharomycotina (Ascomycota) or Cryptococcus albidus and related species in the hymenomycete clade of Basidiomycota (de Hoog et al. 2000, p. 130).
Medina, M., A. G. Collins, J. W. Taylor, J. W. Valentine, J. H. Lipps, L. A. Amaral Zettler and M. L. Sogin (2003). "Phylogeny of Opisthokonta and the evolution of multicellularity and complexity in Fungi and Metazoa." International Journal of Astrobiology 2(3): 203-211. doi:10.1017/S1473550403001551
(PDF)
Update: The authors responded to criticism like this on The Loom, here's an excerpt:
Our yeast are not utilizing ‘latent’ multicellular genes and reverting back to their wild state. The initial evolution of snowflake yeast is the result of mutations that break the normal mitotic reproductive process, preventing daughter cells from being released as they normally would when division is complete. Again, we know from knockout libraries that this phenotype can be a consequence of many different mutations. This is a loss of function, not a gain of function. You could probably evolve a similar phenotype in nearly any microbe (other than bacteria, binary fission is a fundamentally different process). We find that it is actually much harder to go back to unicellularity once snowflake yeast have evolved, because there are many more ways to break something via mutation than fix it. | {
"domain": "biology.stackexchange",
"id": 103,
"tags": "evolution, cell-biology"
} |
Two-GMSK channels demodulation: 3dB difference in BER | Question: I am trying to demodulate individually two GMSK channels separated by 50kHz (one channel = 25kHz).
I am working on a limited resources hardware so I decided to make some optimization on the signal processing chain.
Here is the architecture: after sampling, my two channels are aliased at 4.6MHz and at 4.65MHz.
The architecture for one channel is as follow: a DDS (Direct Digital Synthesis) and a mixer bring my signal in baseband. I then have a down-conversion, filtering, the decoding algorithm and an error controller to compute a BER.
The basic architecture for demodulating two distinct signals would be to duplicate this processing. Unfortunately, my hardware cannot contain so much computing.
One of the idea I had was to put in common some DSP operations. The down-conversion uses a lot of resources so I decided to mutualize the process for both channels.
The down-conversion uses a CIC filter, it filters and decimates the signal.
The new signal path would be: DDS+MIXER (channel 1 brought to baseband, channel 2 centered around 50kHz), CIC + compensation filter (CIC response is not flat in passband), second DDS + mixer to bring channel 2 to baseband.
Here is a scheme of the processing:
The "tricky" thing is that on my second DDS I multiply both I and Q by cosine instead of cosine / sine (in that case, it does not work).
When doing this, I guessed that maybe I would lose something.
When measuring the BER, I can easily see that I have a 3dB loss between my two channels. My question: how can I "mathematically" explain this?
Is it because I am mixing two cosine instead of one cos and one sine; is half of the information lost?
Am I losing 3 dB on the SNR?
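To illustrate the effect being asked about, here is a small numpy sketch (values and names are illustrative, not the actual signal chain): a tone at offset $f_0$ mixed by a cosine on both rails splits into a half-amplitude baseband copy and an image at $2 f_0$, so half the signal amplitude at baseband is lost relative to a proper complex mix.

```python
import numpy as np

fs, f0, n = 1000.0, 100.0, 1000            # illustrative sample rate / channel offset
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t)            # stand-in for a channel centred at +f0

proper = x * np.exp(-2j * np.pi * f0 * t)  # complex (cos/sin) mix: all energy to DC
cos_only = x * np.cos(2 * np.pi * f0 * t)  # cos on both I and Q: DC term + image

X = np.abs(np.fft.fft(cos_only)) / n
print(X[0], X[200])                        # ~0.5 at DC and ~0.5 at 2*f0 = 200 Hz
print(np.abs(np.mean(proper)))             # 1.0 with the proper complex mix
```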
Edit: Here are the BER curves
Answer: Yes, that is correct: you are only getting half the signal. With cosine on both inputs, half the signal moves to baseband and the other half moves to twice the frequency offset. I suspect the reason the sine/cosine case is not working is that the sign is such that you translated the entire signal up to twice the frequency offset instead of down to baseband. If that is the case, you could invert the sine input and it should work without the 3 dB loss. | {
"domain": "dsp.stackexchange",
"id": 8411,
"tags": "digital-communications, digital, gmsk"
} |
N-Grams - how to predict further into the future and how far can we predict? | Question: I am using N-Grams to predict future character input. For n+1, it performs really well.
I've been looking, but I have not been able to find any information on the feasible maximum distance into the future that N-Grams are capable of predicting.
One way I could do this is to predict the next character (n+1) and then use that result to predict n+2, use n+2 to predict n+3... etc... but obviously an incorrect prediction will cascade and impact the future predictions as there is a dependency.
Is there a better (more accurate, reliable) way to do this? Also, I can easily use N-Grams to predict the next (n+1) character in an incoming stream, but what about n+2? How about n+3? When does it become unrealistic to predict further?
I'm sure there is some research on this, so I'd love it if you could link me to a paper or give me a brief overview to get me going.
Answer: There's no simple answer to "how far into the future can you predict?". Obviously, the further you go, the less your ability to predict. For English text, I would expect that ability to predict would decrease rapidly.
You asked if there's a better way to do this. You can of course consider a generalization of $n$-grams where you use the characters at positions $i-n+1,i-n+2,\dots,i-1,i$ to predict the character at position $i+k$ (instead of predicting the character at position $i+1$), for any fixed $k$. This will probably perform worse the larger $k$ is.
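This generalization is easy to prototype (illustrative code; the function names and conventions are mine): count, for each context, which character occurs $k$ positions later, then predict the most frequent one.

```python
from collections import Counter, defaultdict

def train_skip_ngram(text, n=3, k=1):
    """Use a context of n-1 characters to predict the character k positions
    after the context (k=1 recovers the ordinary n-gram model)."""
    model = defaultdict(Counter)
    ctx = n - 1
    for i in range(len(text) - ctx - k + 1):
        context = text[i:i + ctx]
        target = text[i + ctx + k - 1]
        model[context][target] += 1
    return model

def predict(model, context):
    counts = model.get(context)
    return counts.most_common(1)[0][0] if counts else None

model = train_skip_ngram("abcabcabcabc", n=3, k=1)
print(predict(model, "ab"))   # 'c'
```

With `k=2` the same training text makes `predict(model, "ab")` return `'a'`, the character two positions past the context.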
You might also be interested in skipgrams. | {
"domain": "cs.stackexchange",
"id": 9015,
"tags": "computational-linguistics"
} |
Elementary question on pion-proton scattering cross-section | Question:
Is $E_k$ the energy of the outgoing or incoming pion?
The first peak is supposed to be a delta baryon.
What does the graph tell us, experimentally? A pion of kinetic energy x comes in, then we look at the graph and find the cross-section, so then we have the probability of what happening? Or is $E_k$ the net loss in kinetic energy of the pion, and this graph tells us the distribution of its energy loss?
How is this graph constructed, what is measured?
Answer: This is a plot where the kinetic energy of the incoming pion beam is varied and the total interaction cross section is measured, showing a higher scattering cross section at the resonances. The interactions are measured by looking at how many pions are left in the beam direction after the beam has passed the target. The difference is due to their having interacted, anything from elastic scattering (which changes their angle) to the creation of more particles; either way, they are missing in the count.
This related point explains how similar plots can be made after the interaction:
In particle physics, $E_k$ is the kinetic energy of an outgoing pion from a pi-proton scattering. Such plots are usually constructed by scattering a beam of pions with fixed energy and looking at two-body interactions with the protons at rest, identifying the outgoing proton and pion by ionization. One would have plotted the invariant mass distribution and then the resonances would have been easily labeled.
"domain": "physics.stackexchange",
"id": 7523,
"tags": "particle-physics, scattering"
} |
How can Bound state energy be negative if the $V_{min}$ is positive? | Question: We know that energy must be negative for bound states (as the wavefunction must go to 0 at infinity), but when we are looking at potential wells we also say that $E$ must be greater than the minimum value of the potential ($V_{min}$) for a normalisable solution. So don't we have a contradiction when $V_{min}$ is positive or even 0?
I think I might've made a major mistake in my thought process but I can't figure out what that is, any insight will be greatly appreciated.
Answer: Since the potential is only defined up to an overall constant, you should be suspicious of any definitions that depend on the absolute value of $V(x)$. Thus, a statement like $E<0$ is meaningless, since only energy differences are physically meaningful quantities!
In general, bound states are those which have an energy less than the value of the potential at $\pm \infty$. Now, in some physical problems like the finite potential well and the hydrogen atom we assume that $V(x)\to 0$ as $x\to \pm \infty$, which is why it is often said that bound states have negative energy. However, it should be clear to you that this is certainly not the case in general!
Indeed, I am certain you already know of two famous counter-examples: the particle-in-a-box (the "infinite square well") and the harmonic oscillator. Both of these potentials have only bound states, and they certainly don't have negative energies! The fact that all their states are bound can now be explained using my earlier argument: since both these potentials blow up at $x\to\pm\infty$, all energies are less than $\lim_{x\to\pm\infty}V(x) = V_{\pm\infty}$, and therefore all states are bound.
To answer your specific question about the potential well, in order for your potential to be a "well" you would require it to have $V_\text{min} < V_{\pm\infty}$ (since otherwise it would be something like a "top-hat" potential which would have no bound states anyway). If you choose $V_{\pm\infty} = 0$, then $V_\text{min}$ must be less than zero in order for your potential to describe a "well". You could, of course, choose $V_{\pm\infty}=V_0>0$, but in this case bound states would be those that have an energy $E<V_0$.
See also the answers to these questions: What exactly is a bound state and why does it have negative energy? and Scattering vs bound states. | {
"domain": "physics.stackexchange",
"id": 79799,
"tags": "quantum-mechanics, energy, schroedinger-equation, potential-energy"
} |
How do we justify that work is a "transfer of energy" in the general case? | Question: By the work-energy theorem, we can justify that the work on a particle due to the net force equals the change in kinetic energy of the particle. In compact notation,
\begin{align}\tag{1}
W_{\text{net}} = \Delta KE.
\end{align}
This seems very useful. However, there are other contexts where work is used.
For example, we might want to find the potential energy of a certain configuration of charges in electrostatics. In that case, we imagine bringing charges from infinity together quasi-statically. By calculating the work needed to assemble the charge configuration, we say we have found the potential energy of the setup: $W = \Delta U$. But this time, we assume no kinetic energy is involved, because the process was quasi-static.
A more simple example is the case of trying to find gravitational potential energy, where we consider what it takes to lift something up quasi-statically (which can be done by applying a constant force $\vec{F} = mg\vec{e}_{z}$ to counteract $\vec{F}_{g}=-mg\vec{e}_{z}$), compute $W = \int_{0}^{h} mg\, dz = mgh$, and find $\Delta U = mgh$.
So now we might want to write
\begin{align}\tag{2}
W = \Delta KE + \Delta U.
\end{align}
I am aware we are no longer considering net work (which was crucial to the initial formulation of the work-energy theorem).
To make this more general, we might also consider internal energy (which will take into account thermal energy). So for example, maybe we have a system in which friction occurs internally, so there is work done by friction which "siphons" off energy into heat. In that case, if we define our system carefully, the work produced by a force external to the system will result in
\begin{align}\tag{3}
W_{\text{ext}} = \Delta KE + \Delta U + \Delta E_{\text{int}}.
\end{align}
My question is, are there theorems or (semi-)rigorous arguments that demonstrate the validity of equations $(2)$ and $(3)$? I suppose $(2)$ might be trivial if we qualify that the forces involved are conservative (so by definition we have $\vec{F} = -\nabla U$), but it's not clear exactly what rigorous argument can be made. And what about equation $(3)$? How is that rigorously justified?
Some related questions:
How does the work-energy theorem relate to the first law of thermodynamics?
Does work-energy theorem involve potentials?
Does work-energy theorem account for thermal energy?
Answer:
But this time, we assume no kinetic energy is involved,
because the process was quasi-static.
Actually, we say there is no change in kinetic energy because the net work done is zero in bringing the charges together. To bring like charges closer together positive work must be done by an external agent while an equal amount of negative work is done by the electrostatic field which takes the energy transferred to the charges by the external agent and stores it as electrostatic potential energy of the system of charge. It doesn't matter if the process is quasi-static as long as the difference between the initial and final kinetic energy is zero.
Similarly, for gravitational potential energy (GPE), if we lift an object of mass $m$ initially at rest and bring it to rest at some height $h$ where $g$ is constant, the change in KE is zero and the work we do is stored as GPE of $mgh$ in the Earth-Object system. It doesn't matter if we carry it out quasi-statically. All that matters is the initial and final state. Of course in order to accomplish this, we must exert an upward force greater than $mg$ to initiate motion and then exert an upward force less than $mg$ to bring it to rest at $h$, so that the net applied force is zero.
My question is, are there theorems or (semi-)rigorous arguments that
demonstrate the validity of equations $(2)$ and $(3)$?
Equation (2) is the general equation for the conservation of energy of a mechanical system. It only addresses KE and PE at the macroscopic level, i.e., the kinetic and potential energy of the motion and position of the system as a whole with respect to an external frame of reference.
Equation (3), by including $E_{int}$, by which I assume you mean the change in internal energy of the system, i.e. the motions and positions of the atoms and molecules at the microscopic level, gets you (almost) into the realm of thermodynamics and the general equation of the first law of thermodynamics. In this case, energy transfer can occur by both work and heat, so heat must be included. The equation for a closed system is generally written as
$$Q-W=\Delta U+\Delta KE+\Delta PE$$
Where $\Delta U$ is the change in internal energy, $Q$ is the net heat added to the system, and $W$ is the net work done on the system. Work can be boundary work, $w_b$ or other forms of work (electrical work, etc.)
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 95440,
"tags": "classical-mechanics, energy, energy-conservation, work"
} |
How do physicists mathematically define gravitational waves? | Question: When one first encounters gravitational waves in a standard GR lecture or a standard textbook like Carroll's "Spacetime and Geometry", they are often "defined" as follows: The metric $g$ can be split up into $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, where $\eta$ is the standard Minkowski metric and $h$ is a perturbation upon that flat background which satisifies a wave equation.
This is all nice and dandy but when one approaches the following metric this naïvely, one might think that it constitutes a gravitational wave:
$$g = \left(\eta_{\mu\nu} + h_{\mu\nu}\right)dx^\mu dx^\nu = -d\tau^2 + dx^2 + dy^2 + dz^2 +\left\lbrace-\cos\left(\tau - x\right)\left[2+ \cos\left(\tau - x\right)\right]d\tau^2 + \cos\left(\tau - x\right)\left[1+\cos\left(\tau-x\right)\right]\left(d\tau dx + dxd\tau\right) \\
- \cos\left(\tau -x\right)^2 dx^2\right\rbrace ,$$
after all $h$'s coefficients do satisfy a wave equation.
But psych! It's actually just Minkowski space hiding with weird coordinates as we get from the standard metric $\eta$ to this "wavy" metric by the coordinate transformation $\tau' = \tau + \sin\left(\tau - x\right)$ as one can easily check.
I am aware that in mathematics there are very precise and rigorous definitions of what constitutes a spacetime with gravitational waves; the spacetime having to be asymptotically of Petrov type $N$ and possessing a certain, 5-dimensional isometry group$^1$. While this is all very neat and tidy, I have yet to hear/read about it in any physics lecture or physics textbook. So my questions are:
How would a working theoretical physicist go about showing that a certain spacetime is in fact one containing gravitational waves? Or is it just that these mathematical definitions are known and used by everyone dealing with gravitational waves on the theoretical side but never put down in any paper or ever mentioned? If so, why?
$^1$ An excellent, concise source for those who want to know more about this, is this paper which goes into detail about the different mathematical conditions for a spacetime to contain gravitational waves.
Answer: The most straightforward way is to simply take the transverse-traceless (TT) part of $h_{ij}$. The TT part of the metric, denoted $h^{\mathrm{TT}}_{ij}$, contains precisely the two propagating degrees of freedom, which correspond to the two polarizations of gravitational waves. This enables coordinate effects to be removed and exposes the true propagating gravitational waves. It is possible to find a gauge transformation in which the only nonzero part of $h_{\mu\nu}$ is $h^{\mathrm{TT}}_{ij}$. This is known as the TT gauge.
The wavevectors can then be found by taking the Fourier transform of $h^{\mathrm{TT}}_{ij}$. If there is just one single gravitational wave with propagation direction $n^i$, it is possible to find $h^{\mathrm{TT}}_{ij}$ by defining $P_{ij} = \delta_{ij} - n_i n_j$. Then, given $h_{kl}$ in the Lorenz gauge,
$$h^{\mathrm{TT}}_{ij} = \left(P_{ik}P_{jl} - \frac{1}{2}P_{ij}P_{kl}\right)h_{kl}.$$
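As an illustrative numerical check (my own sketch, implementing the projector formula above): the projected perturbation comes out traceless and transverse to $n^i$, as a TT tensor must.

```python
import numpy as np

def tt_project(h, n):
    """TT part of a 3x3 symmetric perturbation h for propagation direction n."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    P = np.eye(3) - np.outer(n, n)          # P_ij = delta_ij - n_i n_j
    # (P_ik P_jl - 1/2 P_ij P_kl) h_kl  is  P h P - (1/2) P tr(P h)
    return P @ h @ P - 0.5 * P * np.trace(P @ h)

h = np.array([[1.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 0.5]])
h_tt = tt_project(h, [1.0, 0.0, 0.0])
print(np.trace(h_tt))            # ~0: traceless
print(h_tt @ [1.0, 0.0, 0.0])    # ~0 vector: transverse
```

For a perturbation whose only spatial component is $h_{xx}$ with propagation along $x$ (as in the question's example), this projection gives identically zero.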
In your example, since there is only one spatial term in your $h_{\mu\nu}$ and it is on the diagonal, it is immediately obvious that its TT part is zero. | {
"domain": "physics.stackexchange",
"id": 100300,
"tags": "general-relativity, metric-tensor, gauge-theory, gravitational-waves, linearized-theory"
} |
Bool state is incorrect when retrieved from param server | Question:
The code is:
ros::NodeHandle nh_private("~");
bool state=false;
nh_private.param<bool>("state", state, "false");
It turns out the "state" is "true". Why???
Originally posted by GuoliangLiu on ROS Answers with karma: 36 on 2014-09-29
Post score: 1
Original comments
Comment by gvdhoorn on 2014-09-29:
Following the answer from @kmhallen, this would seem to be expected (and correct) behaviour (you're just using the wrong type for your default value). Perhaps change your topic title to reflect this?
Answer:
template<typename T>
void param(const std::string& param_name, T& param_val, const T& default_val) const;
From the function prototype, param_val and default_val are both the template type, in this case bool.
The string value "false" is a pointer to the characters 'false', which is non-null. This is interpreted as true when used as a bool. Try without quotes around false:
nh_private.param<bool>("state", state, false);
Originally posted by kmhallen with karma: 1416 on 2014-09-29
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by GuoliangLiu on 2014-10-07:
great! thanks!
Comment by GPereira on 2020-07-27:
I am getting the following error on ROS Noetic. Any ideas?
error: expected primary-expression before ‘bool’
nh.getParam<bool>("dissemination_rules", dissemination_rules, true);
Comment by dwyackzan on 2022-07-11:
The getParam method does not have a default setting parameter (and does not require templating) like the param method.
So if you want to be able to set a default value then use the param method:
nh.param<bool>("dissemination_rules", dissemination_rules, true);
Or if you want to just use getParam without setting a default value:
nh.getParam("dissemination_rules", dissemination_rules);
"domain": "robotics.stackexchange",
"id": 19554,
"tags": "ros, parameter, param-server"
} |
Sitting to Standing Posture Transition Accelerometer Triaxial Data Explanation | Question: I don't have a physics background. (I'm a Computer Science student)
I have a waist-worn accelerometer provided by the school which is quite reliable. I've captured data of sitting still to standing still transitions from a person. Here is the result synchronised from many tests, the orange curve depicts data from the vertical axis.
Could anybody please explain to me the pattern of acceleration data?
Why does the acceleration's magnitude always increase first before decreasing? (The sign of value depends on the device's configuration so you don't need to worry about it)
Should it not be the other way around?
As far as I know about physics, Newton's 2nd Law states $F = m \cdot a$, so I would think that when the person stands up, a force in the upwards direction is created which is opposite to the Earth's gravitational force (which points down), so the resulting force must be lower than the original force (which only includes the Earth's gravitational force), leading to a reduction in the acceleration?
Also if possible could you let me know what possibly causes the sudden drop in acceleration right after that?
Many thanks.
Answer: The accelerometers on the Space Station, or on any other orbiting spacecraft, report an acceleration of nearly zero in all three axes except when the spacecraft is firing its thrusters. Accelerometers do not sense gravitation.
When an accelerometer is at rest with respect to the surface of the Earth, the forces acting on the accelerometer are gravitation pulling it Earthward and an upward force (typically the normal force) that keeps the accelerometer from falling Earthward. While the accelerometer doesn't sense gravitation, it does sense that upward force. Your accelerometer's output is in units of gees. You see an acceleration of -1 g on the second component of the accelerometer's output. This direction is "up".
To start moving in some direction, the net force on the accelerometer must necessarily be non-zero, at least momentarily. When you stand up from a sitting position, your waist initially accelerates upward and forward. The accelerations that the accelerometer can sense are an upward component that exceeds one g and a horizontal acceleration forward. This is exactly what your accelerometer registers.
"domain": "physics.stackexchange",
"id": 26726,
"tags": "gravity, newtonian-gravity, acceleration, sensor"
} |
Google Cartographer Successes on Turtlebot (Or Similar?) | Question:
Hi,
I was just wondering if anyone has had any success yet with the likes of TurtleBot (or similar robots) using Google's Cartographer.
I've seen TRI playing with it https://www.youtube.com/watch?v=cK6s7soVwws
Thanks
Mark
Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2016-11-02
Post score: 2
Original comments
Comment by lucasw on 2016-11-03:
There was also a roscon 2016 video https://vimeo.com/187699364, and a slide around 3:07 showing a link to http://robocup2017.org/eng/index.html, though I don't see code there.
Answer:
There is a cartographer_turtlebot repo with a dedicated cartographer config for use with the Turtlebot, so that's probably a good place to get started.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-11-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by fj138696 on 2017-05-25:
Can I use the package cartographer_turtlebot for iRobot Create2? | {
"domain": "robotics.stackexchange",
"id": 26128,
"tags": "ros, slam, navigation, cartographer"
} |
Statistics of 1D discrete random walks | Question: I have already asked this question in Math.SE.
Let $P(n)$ be a probability distribution on the integers. Suppose a random walker starts off at the origin and, at every positive integer time, takes a step of length $n$ with probability $P(n)$.
This walker will revisit the origin over and over. Let me call 'excursions' the steps they take between two consecutive visits to the origin. I am interested in the distribution of excursion 'duration' (time interval between consecutive visits to the origin) and excursion 'reach' (position farthest from the origin reached during excursion).
These are simple concepts, so I imagine they have been much discussed in the literature. Question is: where can I find good discussions of these topics?
Answer: First, by the Chung-Fuchs theorem, any mean-zero one-dimensional random walk is recurrent. This tells you what the proper assumption on the step-distribution $P$ is.
If, in addition, the step-distribution has finite variance $\sigma^2$, then the law of its excursions converges, after diffusive scaling, to the law of Brownian excursions (see, e.g., Annals of Probability 4(1), 1976, 115-121). From this (and the detailed results about Brownian excursions) you can extract the information you want on the "reach".
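For intuition, excursion durations are easy to sample in the simplest finite-variance case, the symmetric ±1 walk (illustrative code of mine; for this walk the probability that an excursion lasts exactly 2 steps is exactly 1/2):

```python
import numpy as np

rng = np.random.default_rng(42)

def excursion_durations(n_steps):
    """Times between consecutive visits to the origin of a ±1 random walk."""
    pos = np.cumsum(rng.choice([-1, 1], size=n_steps))
    return_times = np.flatnonzero(pos == 0) + 1   # the walk starts at the origin
    return np.diff(np.concatenate(([0], return_times)))

d = excursion_durations(1_000_000)
print(np.mean(d == 2))   # ≈ 0.5: half of all excursions last exactly 2 steps
```

The empirical tail of `d` decays like the $n^{-3/2}$ law discussed below, which is why the mean duration diverges even though returns are certain.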
Concerning the "duration", i.e. the random variable $T=\inf\{n>0:S_n=0\}$, Kesten first proved that, as $n\to\infty$,
$$
\mathbb{P}_0(T=n) = \frac{\sigma}{\sqrt{2\pi}} n^{-3/2}\,(1+o(1)).
$$
This result also holds under the assumption that the variance $\sigma^2$ is finite. There are certainly extensions to the case of infinite variance, as well as sharper results, but I don't know the state-of-the-art out of my head. Probably you can find that by looking at recent papers citing those. | {
"domain": "physics.stackexchange",
"id": 54052,
"tags": "statistical-mechanics, resource-recommendations, probability, brownian-motion, stochastic-processes"
} |
Proving that there is no solution to the PCP problem using induction | Question: I'm studying for the Algorithms and Computability course. I have encountered a problem that I cannot solve and cannot find any materials to help me solve it. It's the following PCP problem:
We have two sets:
A = (b, a, ca, bac)
B = (ca, ab, a, c).
The task is to find a solution to the given PCP problem or prove that there is none. I know that there will be no solutions, as after choosing elements of the sets with the following indices: 2, 1, 3, 2, 2, 1, 2... we'll end up in a situation where the string formed using set B will always be longer than the first one. Our lecturer wants us to prove it using induction, but unfortunately, I have no clue how to do that. I tried two times on exams and failed every time.
Could anybody show me what is the correct way to form such a proof, or link me to some source that shows it? I will be very grateful.
Answer: There is no solution.
Proof by induction
Here is a one-line summary: the bottom string is always longer than the top string by a string of the form $(abca\mid abcaa\mid abcaaab)^+$.
Suppose we try to construct a solution by adding dominos one by one, keeping the string on the top row (the top string) and the string on the bottom row (the bottom string) consistent all the time. "Consistent" means either the top string is a prefix of the bottom string or the bottom string is a prefix of the top string.
Initially both strings are empty. Then, however far we may try, as you have noticed, the bottom string is always longer than the top string.
However, we cannot prove that simple statement by straightforward induction since, when considered as the induction hypothesis, it is too weak to support the induction step.
Let us observe the overhang, the extra part of the bottom string. It changes as follows:
$b\to ca \to a\to ab\to bab\to abca\to abcaa\to\cdots$
Continuing, we find the overhang grows as follows:
$$\begin{aligned}
&\to^*\underline{abca}\\
&\to^*\underline{abcaa}\\
&\to^*\underline{abcaaab}\\
&\to^*\underline{abcaaab}\,\underline{abca}\\
&\to^*\underline{abca}\,\underline{abcaaab}\,\underline{abca}\\
&\to^*\underline{abcaaab}\,\underline{abca} \,\underline{abcaa}\\
&\to^*\cdots
\end{aligned}$$
where $o_1\to^* o_2$ means overhang $o_1$ is transformed to overhang $o_2$ in several steps without ever being empty.
Claim (decomposition of the overhang): The overhang is $abca$ when there are 6 dominos. Suppose the overhang is in the form $(abca\mid abcaa\mid abcaaab)^+$. Then it will be transformed to the same form when $3$, $4$ or $6$ dominos are added.
Proof: In the following $W$ stands for some string.
The overhang is transformed by the following rules.
If it is $aW$, the next domino can be and can only be
$\begin{bmatrix}a\\ ab\end{bmatrix}$.
The new overhang will be $Wab$.
If it is $caW$, the next domino can be and can only be
$\begin{bmatrix}ca\\ a\end{bmatrix}$.
The new overhang will be $Wa$.
If it is $bW$, where $W$ does not start with $ac$, the next domino can be and can only be
$\begin{bmatrix}b\\ ca\end{bmatrix}$.
The new overhang will be $Wab$.
Hence, we can verify that
if the overhang is $\underline{abca}W$, then it will become $W\underline{abcaa}$ when $3$ dominos are added.
if the overhang is $\underline{abcaa}W$, then it will become $W\underline{abcaaab}$ when $4$ dominos are added.
if the overhang is $\underline{abcaaab}W$, then it will become $W\underline{abcaaab}\,\underline{abca}$ when $6$ dominos are added.
With the claim, it is straightforward to use induction to show that the overhang is never empty. That means the bottom string is always longer than the top string. There is no solution to the PCP problem.
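The proof can also be cross-checked mechanically (a sketch of my own, not part of the proof): a bounded breadth-first search over consistent (leading side, overhang) states finds no match for this instance within any small number of dominos.

```python
from collections import deque

def pcp_bounded_search(A, B, max_dominos=25):
    """True iff some domino sequence of length <= max_dominos makes the
    concatenated top and bottom strings equal."""
    def extend(top, bot):
        # Keep the strings consistent: one must be a prefix of the other.
        if top.startswith(bot):
            return ("T", top[len(bot):])    # top row leads by the overhang
        if bot.startswith(top):
            return ("B", bot[len(top):])    # bottom row leads
        return None

    seen, queue = set(), deque()
    for a, b in zip(A, B):
        st = extend(a, b)
        if st is not None:
            if st[1] == "":
                return True
            if st not in seen:
                seen.add(st)
                queue.append((st, 1))
    while queue:
        (side, ov), depth = queue.popleft()
        if depth == max_dominos:
            continue
        for a, b in zip(A, B):
            top, bot = (ov + a, b) if side == "T" else (a, ov + b)
            st = extend(top, bot)
            if st is None:
                continue
            if st[1] == "":
                return True
            if st not in seen:
                seen.add(st)
                queue.append((st, depth + 1))
    return False

print(pcp_bounded_search(["b", "a", "ca", "bac"], ["ca", "ab", "a", "c"]))  # False
```

Of course a bounded search can only refute short solutions; the induction above is what rules out solutions of every length.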
Proof by counting
Let us call $\begin{bmatrix}b\\ ca\end{bmatrix},$
$\begin{bmatrix}a\\ ab\end{bmatrix},$
$\begin{bmatrix}ca\\ a\end{bmatrix},$
$\begin{bmatrix}bac\\ c\end{bmatrix}$ type 1, 2, 3, 4 respectively.
Lemma (more type-3 than type-4): Suppose the list of $n$ dominos $d_1, d_2, \cdots, d_n$, $n\ge3$ is a solution. Then there are more type-3 dominos than type-4 dominos in the list.
Proof: The first 3 dominos in the list must be
$$\begin{bmatrix}a\\ ab\end{bmatrix}
\begin{bmatrix}b\\ ca\end{bmatrix}
\begin{bmatrix}ca\\ a\end{bmatrix},$$
where the third domino must be $\begin{bmatrix}ca\\ a\end{bmatrix}$, since $\begin{bmatrix}bac\\ c\end{bmatrix}$ does not match the overhang.
Suppose $d_i$ is type-4. It presents a substring $bac$ on the top row. Since the given list is a solution, it corresponds to a substring $bac$ on the bottom row.
Consider that substring $bac$ on the bottom row. The letter $a$ in the middle is neither preceded by $c$ nor followed by $b$. Hence it can only be brought by some $d_{x(i)}$ of type-3, $1\le x(i)\le n$. $d_i\mapsto d_{x(i)}$ is a correspondence from type-4 dominos to type-3 dominos in the list. Note that a different $d_i$ must be mapped to a different $d_{x(i)}$. Moreover, the third domino, which is type-3 is not mapped to by any type-4 domino.
Claim: There is no solution to this PCP problem.
Proof. Suppose there is a solution. Let $\#1, \#2, \#3, \#4$ be the number of type-1,2,3,4 dominos in the solution. The number of $a$s on the top row and on the bottom row are $\#2+\#3+\#4$ and $\#1 +\#2+\#3$ respectively. The number of $c$s on the top row and the bottom row are $\#3+\#4$ and $\#1+\#4$ respectively. So we have
$$\begin{aligned}
\#2+\#3+\#4 &= \#1+\#2+\#3,\\
\#3+\#4 &= \#1+\#4,
\end{aligned}$$
which implies $\#3=\#4$.
On the other hand, there must be at least $3$ dominos in the solution. The lemma says $\#3>\#4$. This contradiction means we do not have a solution. | {
"domain": "cs.stackexchange",
"id": 20926,
"tags": "automata, undecidability, induction"
} |
What is a 'statistical operator' in quantum mechanics? | Question: What is a 'statistical operator' in quantum mechanics? How is it different from just an operator? Are there any operator properties (e.g., normal, Hermitian, unitary, etc.) universally attributable to statistical operators?
Or is it just an operator for which there's an expectation value with respect to some vector?
Answer: In the mathematical sense, we say that an operator $\rho$ is statistical if:
It is hermitian $\rho^\dagger = \rho$
It is positive. This means that condition 1 is satisfied and also $\langle \psi| \rho |\psi \rangle \geq 0$ for all $|\psi\rangle$
Its trace is equal to unity: $\mathrm{Tr} \; \rho =1$
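These three conditions are straightforward to check numerically (a small sketch of mine, not from the answer):

```python
import numpy as np

def is_statistical(rho, tol=1e-9):
    rho = np.asarray(rho, dtype=complex)
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    # For a Hermitian matrix, positivity = all (real) eigenvalues >= 0.
    positive = hermitian and np.all(np.linalg.eigvalsh(rho) >= -tol)
    unit_trace = abs(np.trace(rho) - 1) < tol
    return hermitian and positive and unit_trace

print(is_statistical(np.eye(2) / 2))               # True: maximally mixed qubit
print(is_statistical(np.array([[0., 1.], [1., 0.]])))  # False: trace is 0
```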
In quantum mechanics density matrix satisfies all three of these properties, so you will often hear that people refer to density matrix as a statistical operator (they basically use it as a synonym). | {
"domain": "physics.stackexchange",
"id": 67077,
"tags": "quantum-mechanics, terminology, definition, density-operator"
} |
ASUS Xtion 2 - how to integrate into ROS | Question:
Hi everyone,
was wondering if anyone has tried integrating Xtion 2 into ROS. I have recently obtained it and was wondering if anybody has experience with this new version of Xtion. Also, the provided SDK (OpenNI2) installs successfully, but the given samples crash with a segmentation fault. Any suggestions are welcome. Thanks.
Originally posted by aamir_khan_cvas on ROS Answers with karma: 11 on 2017-06-16
Post score: 1
Answer:
This is not a real solution, but can be useful for others looking for the same issues, as there is nothing on the web addressing this at the moment.
I came here looking for a fix for the same problem. I am not a ROS user, but I'm trying to use Xtion 2 on Linux. The provided examples, like NiViewer, crash. I tried on different computers with different Ubuntu versions; I managed to run it only on my computer, and only by lowering the bandwidth as they suggest in the manual (however, this solution doesn't work on any of the other computers I tested).
It is due to a bandwidth problem.
I am in contact with the support service and they told me that they are working for a solution, probably to be released as new firmware in August.
Originally posted by rok with karma: 36 on 2017-07-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aamir_khan_cvas on 2017-07-18:
thank you for sharing your experience.
for now, the workaround that has done the job is to build the OpenNI2 SDK provided on their website, then copy the resulting library files and OpenNI2 drivers over the ones used by the roslaunch files. I hope this helps whoever is working with Xtion 2 and ROS.
Comment by jastion on 2017-07-26:
Hi, I am a relatively new student user using ROS Kinetic on Ubuntu 16.04LTS. Could you elaborate more on which library files and openni2 drivers I need to copy from the roslaunch files into the SDK? I cloned from the openni2_camera and openni2_launch repos but I do not see the lib files to copy. | {
"domain": "robotics.stackexchange",
"id": 28133,
"tags": "ros, camera, xtion, asus, depth"
} |
Express the total mass of the galaxy in solar masses | Question: I have the following problem:
"The Sun has an orbital speed of about 220 km s−1 around the center
of the Galaxy, whose distance is 28 000 light years. Estimate the total
mass of the Galaxy in solar masses."
I know how to solve it in two different ways, considering the galaxy mass as the total mass inside the sphere that the sun is orbiting:
using that the gravitational force is equal to the centripetal force;
by Kepler's third law
But, in both cases, what I get is the total mass of the Galaxy, in kilograms.
To estimate this mass in solar masses, I need to know the solar mass - which is easily found, but it's not given in this problem - and then I divide the value that I find by it.
The question is: is there a way I can obtain this ratio without somehow knowing the solar mass? Is there a way to directly get an expression for the ratio M(galaxy)/M(sun), not depending on the value of the latter?
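For concreteness, here is the computation the question describes, with standard constant values filled in (these numbers are not given in the problem and have to be looked up, including the solar mass):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # m

v = 220e3              # orbital speed, m/s
r = 28_000 * LIGHT_YEAR

# Gravitational force = centripetal force: G M m / r^2 = m v^2 / r
M_galaxy = v**2 * r / G
print(M_galaxy / M_SUN)   # ≈ 1e11 solar masses
```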
Answer: The answer is no. The orbital speed of the Sun doesn't depend on its mass, so the mass of the galaxy which you obtain from the orbital speed doesn't depend on the mass of the Sun. So there's no hope that in their ratio the mass of the Sun cancels out. | {
"domain": "physics.stackexchange",
"id": 61050,
"tags": "homework-and-exercises, newtonian-gravity, orbital-motion, unit-conversion, galaxy-rotation-curve"
} |
What is the population limit that makes consanguinity an issue? | Question: A recent incident brought in the news one of the last uncontacted people - the Sentinelese:
the Sentinelese appear to have consistently refused any interaction
with the outside world.
There is significant uncertainty as to the group's size, with
estimates ranging between 40 and 500 individuals
If I understood correctly, the Sentinelese have had a rather small population for dozens if not hundreds of generations, and I am wondering if consanguinity is not an issue (e.g. serious childhood effects) for them.
Question: What is the population limit (lower bound) that makes consanguinity an issue?
Answer: I don't have great knowledge of population genetics, but I think your question can be answered by the relationship between loss of heterozygosity and the effective population size.
For an ideal asexually reproducing genetically diploid population, the heterozygosity is lost with increasing generations according to this equation:
$$H_t=\left(1-\frac{1}{2N}\right)^t H_0$$
where $H_t$ is the heterozygosity at the generation $t$, $H_0$ is the heterozygosity of initial population and $N$ is the population size.
If you analyse this equation, you'll note that heterozygosity decays exponentially at a rate inversely proportional to the population size.
For real populations you have to replace the population size with effective population size. Effective population size for a sexually reproducing population would be:
$$\frac{1}{N_e} = \frac{1}{4N_m} + \frac{1}{4N_f}$$
Where $N_e$ is effective population size, $N_m$ is number of males and $N_f$ is number of females.
You can find the derivation of these formulas in Principles of Population Genetics by Hartl and Clark.
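To get a feel for these two formulas, here is a small numerical sketch. The initial heterozygosity and the sex-ratio figures are invented for illustration, and the population sizes are just the two ends of the Sentinelese estimate quoted in the question:

```python
# Sketch of the two formulas above. All concrete numbers are illustrative.

def heterozygosity(h0, n, t):
    """Expected heterozygosity after t generations: H_t = (1 - 1/(2N))^t * H_0."""
    return (1 - 1 / (2 * n)) ** t * h0

def effective_size(n_males, n_females):
    """1/N_e = 1/(4*N_m) + 1/(4*N_f) for a sexually reproducing population."""
    return 1 / (1 / (4 * n_males) + 1 / (4 * n_females))

h0 = 0.5  # assumed initial heterozygosity
for n in (40, 500):  # the two ends of the population estimate
    h = heterozygosity(h0, n, 100)
    print(f"N = {n:3d}: H after 100 generations = {h:.3f} "
          f"({100 * h / h0:.0f}% retained)")

# A skewed sex ratio pushes N_e below the census size:
print(f"N_e for 150 males and 50 females: {effective_size(150, 50):.0f}")
```

At the low end of the estimate most of the initial variation is lost within a hundred generations; at the high end most of it survives.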
Other than that, the probability of extinction is also higher, the smaller the population is.
When consanguinity becomes an issue depends on other factors too, such as initial heterozygosity, the presence of deleterious alleles in the gene pool and other environmental factors. I don't think there is any mathematical/practical lower bound. The minimum viable population is approximated using simulations. | {
"domain": "biology.stackexchange",
"id": 9318,
"tags": "genetics, population-genetics"
} |
How are the Bell states entangled | Question: I've been trying to follow Qiskit Global Summer School 2020 and understood that if a pure state $S$ on systems $A$ and $B$ cannot be written as a tensor product of some state from $A$ and some state from $B$ then it is correlated and therefore entangled.
The thing is that on the same slide, the instructor shares a general formula for the Bell states that represents them as a tensor product, which seemed contradictory to me.
Course link: https://youtu.be/9MpSQglnqI0?t=327 minute 5:27
Answer: The term
$$(\mathbb{I} \otimes \sigma_{x}^j\sigma_{z}^i)|\psi^{00}\rangle$$
is the state $|\psi^{00}\rangle$ multiplied by $(\mathbb{I} \otimes \sigma_{x}^j\sigma_{z}^i)$. So, the formula doesn't represent $|\psi^{ij}\rangle$ as a tensor product. It shows how to get any of the four entangled forms starting from $|\psi^{00}\rangle$ by applying local operations. | {
"domain": "quantumcomputing.stackexchange",
"id": 4950,
"tags": "entanglement, bell-basis"
} |
Variance of Integral of a real white Gaussian Noise Process | Question:
In this question, is the answer not equal to infinity? The answer is given as 6, but my doubt is: can't we think of it as a linear combination of many independent random variables, each having infinite variance, so that the resulting random variable also has infinite variance? This was a question asked in the GATE exam conducted in India for the Electronics and Communication stream.
Answer: Since $W(t)$ is assumed to be zero-mean, the RV $Y$ is also zero-mean. Hence, the variance of $Y$ is given by
$$\begin{align}\sigma_Y^2&=E\left\{Y^2\right\}\\&=E\left\{\int_{-\infty}^{\infty}W(t_1)\phi(t_1)dt_1\int_{-\infty}^{\infty}W(t_2)\phi(t_2)dt_2\right\}\\&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\phi(t_1)\phi(t_2)E\big\{W(t_1)W(t_2)\big\}dt_1dt_2\tag{1}\end{align}$$
where $E\big\{W(t_1)W(t_2)\big\}$ is the auto-correlation function $R_W(t_2-t_1)$ of $W(t)$. Now you just have to figure out the expression for $R_W(\tau)$ and solve the integral $(1)$. | {
"domain": "dsp.stackexchange",
"id": 9644,
"tags": "noise, gaussian, random-process, integration, white-noise"
} |
How do neutrons escape nuclei? | Question: How do neutrons ejected from a nucleus gain kinetic energy if they don't repel electrically and the nuclear force only attracts? Does it have something to do with the weak force?
Answer: Neutrons are only "ejected" if the nucleus is excited.
One way to eject a neutron is by simply bumping them off. You could bombard nuclei with energetic particles that could transfer enough momentum to an individual neutron to dislodge it. So the "energy" used here comes from the momentum transfer.
However, there's also "spontaneous" neutron emission. This happens when a nucleus is very energetically excited and thus wants to relax into its ground state. This mechanism is called neutron evaporation. A nucleus could be left excited by a high-energy collision, for example. When the neutrons evaporate in this situation, their kinetic energy comes from the change of nucleon orbital configuration, where the system rearranges itself to reach a lower level. Nucleons are fermions and they have an "orbital" structure analogous to electrons in an atom. Much of this energy comes from the strong interaction, which governs neutrons and protons and is "blind" to charge.
Another way to think of it is the famous example of $\alpha$ particles being released through $\alpha$ decay. The $\alpha$ particle is caught in a strong-force well caused by the nucleus, but if it gains sufficient kinetic energy through excitations it can quantum-tunnel through the strong-force potential wall. The strong force is attractive, yet the $\alpha$ particle still manages to tunnel through, and in the same way a neutron can tunnel out through the potential wall.
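To put a rough number on the tunnelling picture, here is a toy estimate using the standard transmission coefficient for a 1D rectangular barrier. The barrier height, width and particle energy below are invented for illustration; a real alpha-decay calculation uses the Coulomb-plus-nuclear potential instead:

```python
import math

# Toy tunnelling estimate: textbook transmission through a 1D rectangular
# barrier, T ~ 16 (E/V0)(1 - E/V0) exp(-2*kappa*L), with
# kappa = sqrt(2 m c^2 (V0 - E)) / (hbar*c).
# V0, L and E below are made-up illustrative numbers.

HBARC = 197.327     # MeV * fm
M_ALPHA = 3727.38   # alpha-particle rest energy, MeV

def transmission(E, V0, L, mc2=M_ALPHA):
    """Transmission probability for a particle of energy E < V0."""
    kappa = math.sqrt(2 * mc2 * (V0 - E)) / HBARC  # in 1/fm
    return 16 * (E / V0) * (1 - E / V0) * math.exp(-2 * kappa * L)

for L in (3.0, 5.0, 7.0):  # barrier widths in fm
    print(f"L = {L} fm: T = {transmission(5.0, 30.0, L):.2e}")
```

The exponential sensitivity to the barrier width and height is what spreads observed decay half-lives over so many orders of magnitude.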
Furthermore, the strong force isn't only "attractive"; there are repulsive components to the strong force as well. It's hard to describe it trivially because QCD is a very complicated problem in general. | {
"domain": "physics.stackexchange",
"id": 23638,
"tags": "neutrons, weak-interaction"
} |
Period $T$ of oscillation with cubic force function | Question: How would I find the period of an oscillator with the following force equation?
$$F(x)=-cx^3$$
I've already found the potential energy equation by integrating over distance:
$$U(x)={cx^4 \over 4}.$$
Now I have to find a function for the period (in terms of $A$, the amplitude, $m$, and $c$), but I'm stuck on how to approach the problem. I can set up a differential equation:
$$m{d^2x(t) \over dt^2}=-cx^3,$$
$$d^2x(t)=-{cx^3 \over m}dt^2.$$
But I am not sure how to solve this. Wolfram Alpha gives a particularly nasty solution involving the hypergeometric function, so I don't think the solution involves differential equations. But I don't have any other leads.
How would I find the period $T$ of this oscillator?
Answer: Since
$$\frac1 2mv^2+U(x)=U(A)$$
We have
$$dt=\frac{dx}v=\frac{dx}{\sqrt{2(U(A)-U(x))/m}}=\frac{dx}{\sqrt{c(A^4-x^4)/(2m)}}$$
Then
$$\frac T4=\int_0^{\frac T4}dt=\int_0^A\frac{dx}{\sqrt{\frac{c}{2m}(A^4-x^4)}}$$
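(A quick numerical sanity check of this quarter-period integral, assuming $m = c = A = 1$; the substitution $x = A\sin\theta$ removes the integrable endpoint singularity:)

```python
import math

# Numerical sanity check of the quarter period, assuming m = c = A = 1.
# With x = A*sin(theta), the integral becomes smooth:
#   T/4 = sqrt(2m/c) * (1/A) * integral_0^{pi/2} d(theta)/sqrt(1 + sin^2(theta))

def quarter_period_quadrature(n=200_000):
    """Trapezoid rule on the substituted (singularity-free) integrand."""
    h = (math.pi / 2) / n
    total = 0.5 * (1.0 + 1.0 / math.sqrt(2.0))  # endpoint terms
    for i in range(1, n):
        s = math.sin(i * h)
        total += 1.0 / math.sqrt(1.0 + s * s)
    return math.sqrt(2.0) * h * total

def quarter_period_ode(dt=1e-4):
    """RK4 on x'' = -x^3 from x=1, v=0 until x first crosses zero."""
    def acc(x):
        return -x ** 3
    x, v, t = 1.0, 0.0, 0.0
    x_prev, t_prev = x, t
    while x > 0.0:
        x_prev, t_prev = x, t
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return t_prev + dt * x_prev / (x_prev - x)  # interpolate the crossing

print(quarter_period_quadrature())  # ~1.854
print(quarter_period_ode())         # ~1.854
```

Both methods agree (T/4 is about 1.854 in these units, i.e. T is about 7.42), which confirms the quarter-period expression.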
Thus
$$T=4\int_0^A\frac{dx}{\sqrt{\frac{c}{2m}(A^4-x^4)}}$$ | {
"domain": "physics.stackexchange",
"id": 9584,
"tags": "homework-and-exercises, newtonian-mechanics, oscillators, anharmonic-oscillators"
} |
What might be the half-life of observationally stable nuclei with energetically favorable decay modes? | Question: For example, a reaction
$${}^{132} \mathrm{Ba} \rightarrow {}^{128}\mathrm{Xe} + \alpha + 22.19\mathrm{keV}$$
would be energetically favorable, even though ${}^{132} \mathrm{Ba}$ is observationally stable. However, energetically favorable reactions do happen, just not necessarily quickly. There are also a lot of similar examples.
Does any estimate exist for what its half-life could be? How could it be calculated? I think it might depend on some potential-barrier calculation.
Answer: There is the empirically found Geiger-Nuttall law relating the energy of the $\alpha$ decay to its half-life:
$$\log T_{1/2} = \frac{A(Z)}{\sqrt{E}}+B(Z)$$
where $T_{1/2}$ is the half-life, $E$ the total kinetic energy (of the alpha particle and the daughter nucleus), and $A$ and $B$ are coefficients that depend on the isotope's atomic number $Z$.
(image from this question)
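As a sketch of how steep this law is, here is the formula with illustrative coefficients. The values of $A$ and $B$ below are hypothetical round numbers chosen only to reproduce the order of magnitude discussed here; real coefficients come from fits to measured $\alpha$ emitters:

```python
import math

# Geiger-Nuttall sketch: log10(T_half / seconds) = A(Z)/sqrt(E) + B(Z).
# A_COEFF and B_COEFF are hypothetical placeholders, not fitted values.
A_COEFF = 140.0  # MeV^(1/2), hypothetical
B_COEFF = -50.0  # dimensionless, hypothetical

def log10_half_life(E_MeV, a=A_COEFF, b=B_COEFF):
    """log10 of the half-life in seconds for total decay energy E in MeV."""
    return a / math.sqrt(E_MeV) + b

for E in (4.0, 6.0, 8.0):
    print(f"E = {E} MeV -> T_1/2 ~ 10^{log10_half_life(E):.1f} s")
```

A change of a few MeV in the decay energy swings the half-life by some twenty orders of magnitude, which is why low-energy decays are practically unobservable.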
You see, for decay energy less than $4$ MeV the half-life is longer than $10^{18}$ s ($\approx 10^{11}$ years), so that such a decay is hard to observe because it would require a very large number of atoms. | {
"domain": "physics.stackexchange",
"id": 96472,
"tags": "nuclear-physics, radioactivity, elements, half-life"
} |
Interested in Mathematical Statistics... where to start from? | Question: I have been working in the last years with statistics and have gone pretty deep into programming with R. I have, however, always felt that I wasn't completely grasping what I was doing, even though I understood all the passages and procedures conceptually.
I wanted to get a bit deeper into the math behind it all. I've been looking online for texts and tips, but all the texts I find start at a very high level. Any suggestions on where to start?
To be more precise, I'm not looking for an exhaustive list of statistical models and how they work; I kind of get those. I was looking for something like "Basics of statistical modelling".
Answer: When looking for texts to learn advanced topics, I start with a web search for relevant grad courses and textbooks, or background tech/math books like those from Dover.
To wit, Theoretical Statistics by Keener looks relevant:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7
And this:
"Looking for a good Mathematical Statistics self-study book (I'm a physics student and my class & current book are useless to me)"
http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/ | {
"domain": "datascience.stackexchange",
"id": 234,
"tags": "statistics, predictive-modeling"
} |
csview: A tiny utility to view csv files | Question: Seeing a csv in table form is nicer than viewing it as raw text. For example, the csv from Calculate food company sales for the year (with headers added by me) looks much nicer in this table form than in plain text:
<table border="1">
<tr>
<th> Kind</th>
<th> Brand</th>
<th> Sales in 2014</th>
<th> Sales in 2015</th>
</tr>
<tr>
<td> Cereal</td>
<td>Magic Balls</td>
<td>2200</td>
<td>2344
</tr>
<tr>
<td> Cereal</td>
<td>Kaptain Krunch</td>
<td>3300</td>
<td>3123
</tr>
<tr>
<td> Cereal</td>
<td>Coco Bongo</td>
<td>1800</td>
<td>2100
</tr>
<tr>
<td> Cereal</td>
<td>Sugar Munch</td>
<td>4355</td>
<td>6500
</tr>
<tr>
<td> Cereal</td>
<td>Oats n Barley</td>
<td>3299</td>
<td>5400
</tr>
<tr>
<td> Sugar Candy</td>
<td>Pop Rocks</td>
<td>546</td>
<td>982
</tr>
<tr>
<td> Sugar Candy</td>
<td>Lollipop</td>
<td>1233</td>
<td>1544
</tr>
<tr>
<td> Sugar Candy</td>
<td>Gingerbud</td>
<td>2344</td>
<td>2211
</tr>
<tr>
<td> Sugar Candy</td>
<td>Respur</td>
<td>1245</td>
<td>2211
</tr>
<tr>
<td> Chocolate</td>
<td>Coco Jam</td>
<td>3322</td>
<td>4300
</tr>
<tr>
<td> Chocolate</td>
<td>Larkspur</td>
<td>1600</td>
<td>2200
</tr>
<tr>
<td> Chocolate</td>
<td>Mighty Milk</td>
<td>1234</td>
<td>2235
</tr>
<tr>
<td> Chocolate</td>
<td>Almond Berry</td>
<td>998</td>
<td>1233
</tr>
<tr>
<td> Condiments</td>
<td>Peanut Butter</td>
<td>3500</td>
<td>3902
</tr>
<tr>
<td> Condiments</td>
<td>Hot Sauce</td>
<td>1234</td>
<td>1560
</tr>
<tr>
<td> Condiments</td>
<td>Jelly</td>
<td>346</td>
<td>544
</tr>
<tr>
<td> Condiments</td>
<td>Spread</td>
<td>2334</td>
<td>5644</tr></table>
To view a .csv file, I just translate it into HTML and call the browser on it.
The code is short because the task is simple, but I feel like it could be written better:
"""
This utility shows a csv file in table format
by translating it to html and calling the browser on the newly created file.
The html file is not deleted after being viewed
and has name `original_file.split('.')[0] + '.html'`
Example usage:
python3 csview.py example.csv
"""
import webbrowser
import sys
def csv_to_html(csv):
START = '''<table border="1">\n\n'''
END = '''</tr></table>'''
lines = [line for line in csv.split("\n") if line]
html_lines = ["<th>" + lines[0].replace(',', '</th>\n<th>') + '</th>'] +\
["<td>" + line.replace(',', '</td>\n<td>') for line in lines[1:]]
body = '<tr>\n' + '\n</tr>\n\n<tr>\n'.join(html_lines)
return START + body + END
if __name__ == "__main__":
html_filename = sys.argv[1].split('.')[0] + '.html'
with open(html_filename, "w+") as out_file:
with open(sys.argv[1]) as in_file:
out_file.write(csv_to_html(in_file.read()))
webbrowser.open(html_filename)
Answer: Use Modules
As a joke, I'd say that to write good Python, you need to be lazy. Python comes with batteries included, and using them usually has a very good impact on functionality, readability, security and performance while reducing workload.
For example, with the csv module, as suggested by others, it can parse several formats, your intent is clear, it supports separators and line breaks inside cells with quoting and is implemented in C.
I also suggest os.path.splitext and xml.etree.ElementTree. Once again, intent is made more clear, you gain in functionality (special support for funny file names for splitext, automatic escaping for etree), though you may argue with the performance argument for this specific case.
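For instance, the file-name point is easy to demonstrate: `split('.')[0]` (as used in the original) truncates at the first dot, while `os.path.splitext` strips only the final extension (the file name here is just an example):

```python
import os.path

name = "sales.2015.csv"  # a legal, multi-dot file name
print(name.split('.')[0])         # sales        (wrong base name)
print(os.path.splitext(name)[0])  # sales.2015   (what we want)
```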
import webbrowser
import sys
import xml.etree.ElementTree as eTree
import os.path
import csv
def csv_to_html(csvhandle):
# Init
table = eTree.Element("table", border="1")
reader = csv.reader(csvhandle)
# Header
headline = eTree.SubElement(table, "tr")
for column in next(reader):
elem = eTree.SubElement(headline, "th")
elem.text = column
# Content
for row in reader:
row_elem = eTree.SubElement(table, "tr")
for cell in row:
elem = eTree.SubElement(row_elem, "td")
elem.text = cell
return eTree.tostring(table, method="html", encoding="unicode")
if __name__ == "__main__":
csv_filename = sys.argv[1]
html_filename = os.path.splitext(csv_filename)[0] + ".html"
# With construct, just to remind you that several with_items are allowed.
# Comes in handy when dealing with a bigger number of nested with statements
with open(html_filename, "w+") as out_file, open(csv_filename) as in_file:
out_file.write(csv_to_html(in_file))
webbrowser.open(html_filename)
If you can, you could also use lxml instead of xml.etree, which produces better HTML and is faster, but the module is not standard. | {
"domain": "codereview.stackexchange",
"id": 18377,
"tags": "python, html, csv"
} |
Nomenclature of organic compounds containing complex side chains | Question: How do we decide the sequence of substituents in an organic compound if one of the substituents is complex? Must the complex chain always be written first in the IUPAC name, or are there other rules governing it?
The following IUPAC names got me confused:
7-(1,2-dimethylpentyl)-5-ethyltridecane
AND
5,5-dimethyl-6-(1,1-dimethylbutyl)-6-pentyltridecane
In the first compound the complex substituent is written first although p(pentyl) comes after e(ethyl).
In the second compound, the complex substituent is written later although m(methyl) comes after b(butyl).
Answer: Simple prefixes are arranged alphabetically disregarding any multiplicative prefixes. The multiplicative prefixes are inserted later and do not alter the alphabetical order.
For example, ‘1,2-dibromo-’ is considered to begin with ‘b’.
However, the name of a compound substituent is considered to begin with the first letter of its complete name.
For example, ‘1,2-dibromobutyl-’ is considered to begin with ‘d’.
On this matter, the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) reads as follows:
P-14.5 ALPHANUMERICAL ORDER
Alphanumerical order has been commonly called ‘alphabetical order’. As these ordering principles do involve ordering both letters and numbers, in a strict sense, it is best called ‘alphanumerical order’ in order to convey the message that both letters and numbers are involved
Alphanumerical order is used to establish the order of citation of detachable substituent prefixes (not the detachable saturation prefixes, hydro and dehydro), and the numbering of a chain, ring, or ring system when a choice is possible.
(…)
P-14.5.1 Simple prefixes (i.e., those describing atoms and unsubstituted substituents) are arranged alphabetically; multiplicative prefixes, if necessary, are then inserted and do not alter the alphabetical order already established.
P-14.5.2 The name of a prefix for a substituent is considered to begin with the first letter of its complete name.
Therefore, the correct alphanumerical order corresponds to the name 7-(1,2-dimethylpentyl)-5-ethyltridecane (not 5-ethyl-7-(1,2-dimethylpentyl)tridecane) since the compound substituent name ‘dimethylpentyl’ starts with ‘d’, and ‘d’ is earlier alphabetically than ‘e’.
(Nevertheless, the preferred IUPAC name is 5-ethyl-7-(3-methylhexan-2-yl)tridecane.)
In your second example, the correct alphanumerical order corresponds to the name 6-(1,1-dimethylbutyl)-5,5-dimethyl-6-pentyltridecane (not 5,5-dimethyl-6-(1,1-dimethylbutyl)-6-pentyltridecane) since the compound substituent name ‘dimethylbutyl’ starts with ‘d’, and ‘d’ is earlier alphabetically than ‘m’.
(Nevertheless, the preferred IUPAC name is 5,5-dimethyl-6-(2-methylpentan-2-yl)-6-pentyltridecane.) | {
"domain": "chemistry.stackexchange",
"id": 3382,
"tags": "organic-chemistry, nomenclature"
} |
Instantiating objects with many attributes | Question: I have a class with quite a few attributes, most of which are known when I create an instance of the object. So I pass all the values in the constructor:
$op = new OpenIdProvider($imgPath . $name . $ext, 'openid_highlight',
0, 0, 108, 68, 6, $info[0], $info[1], $name);
I'm finding that having this many parameters makes it confusing both when writing and reading the code, as it's not easy to determine which attribute each value corresponds to. Also, this has a bit of a code smell to it - seems like there should be a better way. Any suggestions?
Answer: Martin Fowler's bible book Refactoring does identify a smell called "Long Parameter List" (p78) and proposes the following refactorings:
Replace Parameter with Method (p292)
Introduce Parameter Object (p295)
Preserve Whole Object (p298)
Of these I think that "Introduce Parameter Object" would best suit:
You'd wrap the attributes up in their own object and pass that to the constructor. You may face the same issue with the new object if you choose to bundle all the values directly into its constructor, though you could use setters instead of parameters in that object.
To illustrate (sorry, my PHP-fu is weak):
$params = new OpenIDParams();
$params->setSomething( $imgPath . $name . $ext );
$params->setSomethingElse( 'openid_highlight' );
$params->setName( $name );
$op = new OpenIdProvider( $params );
This is a little more verbose but it addresses your concern about not being clear about the attributes' purpose / meaning. Also it'll be a little less painful to add extra attributes into the equation later. | {
"domain": "codereview.stackexchange",
"id": 29061,
"tags": "php, constructor"
} |
LALR(1) parsers and the epsilon transition | Question: I am having trouble getting my head wrapped around epsilon transitions while creating an LALR(1) parse table.
Here's a grammar that recognizes any number of 'a' followed by a 'b'. 'S' is an artificial start state. '$' is an artificial 'EOS' token.
0. S -> A $
1. A -> B b
2. B -> B a
3. B -> epsilon
Itemsets:
i0: S -> . A $
A -> .B b
B -> .B a
A -> B . b ! because B could -> epsilon
B -> B . a ! "
i1: S -> A . $
i2: S -> A $ .
i3: A -> B . b ! from i0
B -> B . a
i4: A -> B b . ! from i0 or i3; the LALR algorithm compresses identical states.
i5: B -> B a . ! from i0 or i3: the LALR algorithm compresses identical states.
I previously had a description on how this would work to parse a simple string. I removed it because I know less now than I did before. I can't even figure out a parse tree for 'ab'.
If someone could show me how I have mis-constructed my itemsets and how I'd reduce the epsilon transition I'd be grateful.
Answer: Your states and itemsets are not quite correct. The epsilon production must appear in relevant itemsets, and you have combined two states into one, which would produce a shift-reduce conflict if the epsilon production were added to the itemset (which should be done).
The following was generated with bison (using the --report=all command-line option); it differs from the theoretic model because the grammar has been "augmented" with an extra start symbol and an explicit end-of-input marker ($end). Also, it has done some table compression, so in the action tables, you can think of $default as meaning "either a or b".
It is worth explaining how State 0 comes about, since it shows how epsilon productions are handled (no differently from other productions).
We start with $accept: . S $end, by definition ($accept is the starting state). Then the closure rule is applied as long as possible. Remember the closure rule: if, in any item in the itemset, the . is immediately before a non-terminal, add all the productions for that non-terminal with an initial .. Hence we add:
S: . A
continuing with A:
A: . B 'b'
continuing with B:
B: . B 'a'
B: .
We can't apply closure any longer, so we're done. Since the state now has an item with the dot at the end (the epsilon production for B), a reduction is possible.
State 0
0 $accept: . S $end
1 S: . A
2 A: . B 'b'
3 B: . B 'a'
4 | .
$default reduce using rule 4 (B)
S go to state 1
A go to state 2
B go to state 3
State 1
0 $accept: S . $end
$end shift, and go to state 4
State 2
1 S: A .
$default reduce using rule 1 (S)
State 3
2 A: B . 'b'
3 B: B . 'a'
'b' shift, and go to state 5
'a' shift, and go to state 6
State 4
0 $accept: S $end .
$default accept
State 5
2 A: B 'b' .
$default reduce using rule 2 (A)
State 6
3 B: B 'a' .
$default reduce using rule 3 (B)
In State 0, the closure rule has added the epsilon production (line 4). Furthermore, no item in the state 0 itemset has the point before a terminal. So with any lookahead, the parser is forced to reduce the epsilon production, after which it will use the goto function for state 0 to decide to move to state 3. (In your state machine, states 0 and 3 are conflated, but I do not believe this is correct.) State 3 will definitely shift a terminal; with the input ab$end, it will shift the a and move to state 6, which will then reduce a B. And so on. | {
"domain": "cs.stackexchange",
"id": 3761,
"tags": "formal-grammars, parsers"
} |