anchor | positive | source |
|---|---|---|
Read contents of file with appropriate error handling (Rust) | Question: I am trying to learn basic Rust. Is this a good way to get the contents of a file? Am I handling errors the best way? Is there any way this could be more performant?
use std::fs::File;
use std::io::Read;
fn main() {
let path = "bar/foo.txt";
match read(path) {
Ok(contents) => println!("{}", contents),
Err(err) => println!("Unable to read '{}': {}", path, err),
}
}
fn read(file: &str) -> Result<String, std::io::Error> {
let mut file = match File::open(file) {
Ok(mut f) => f,
Err(err) => return Err(err),
};
let mut data = String::new();
match file.read_to_string(&mut data) {
Ok(_) => return Ok(data),
Err(err) => return Err(err),
}
}
Answer:
Your code gives a warning:
warning: variable does not need to be mutable
--> src/main.rs:15:12
|
15 | Ok(mut f) => f,
| ^^^^^
|
When you move a value from one binding to another, the two bindings don't have to agree on mutability. It may sound strange, but it's valid to move from an immutable binding to a mutable binding. It's perfectly safe because in order to be able to move a value, there must not be any pointers to it, which ensures that you have exclusive access to the value. Therefore, you could write Ok(f) here instead of Ok(mut f).
The match / return Err pattern is so frequent that Rust has a shorthand for it: the ? operator (and its predecessor, the try! macro). We could rewrite your program using the ? operator like this:
use std::fs::File;
use std::io::Read;
fn main() {
let path = "bar/foo.txt";
match read(path) {
Ok(contents) => println!("{}", contents),
Err(err) => println!("Unable to read '{}': {}", path, err),
}
}
fn read(file: &str) -> Result<String, std::io::Error> {
let mut file = File::open(file)?;
let mut data = String::new();
file.read_to_string(&mut data)?;
Ok(data)
}
The second match in read ends the function, so we don't need the ? operator. Instead, we could use map to replace the data in the Ok variant while keeping the Err the same, and then just return the result of map.
fn read(file: &str) -> Result<String, std::io::Error> {
let mut file = File::open(file)?;
let mut data = String::new();
file.read_to_string(&mut data).map(|_| data)
}
Which one you use is up to you. :)
File::open accepts more than just &str. Its signature is:
fn open<P: AsRef<Path>>(path: P) -> Result<File>
You could make your own read function more generic by introducing a type parameter.
use std::path::Path;
fn read<P: AsRef<Path>>(file: P) -> Result<String, std::io::Error> {
let mut file = File::open(file)?;
let mut data = String::new();
file.read_to_string(&mut data)?;
Ok(data)
}
Look at the list of implementations for AsRef<Path> to see what types you can now pass to your function. | {
"domain": "codereview.stackexchange",
"id": 28315,
"tags": "error-handling, file-system, rust"
} |
Age of Occator Crater | Question: Has any official information been published regarding the estimated age of the Occator crater on Ceres? I ran some quick searches but couldn't find anything putting a definitive (or even speculative) date on the moment of impact.
I ask because based upon my own very amateur analysis of the imagery:
...there seems to be a marked decrease in the amount of secondary cratering visible within a fairly uniform radius of the main crater.
My assumption is that when the impact occurred, the surrounding area was likely blanketed by material kicked up from the impact site, obscuring most pre-existing craters (smaller ones, in particular). Kind of like fresh snowfall, except made of rock.
So for a (small) secondary crater to be visible near Occator, the impact would have to have occurred after the impact that created Occator crater. Meaning that the relative lack of small craters near Occator would seem to imply a fairly recent impact ("recent" in terms of Ceres geological timescales), I believe?
Is there any official data that would confirm or refute this?
Answer: Five years have passed, and several papers have been published about this. The latest, Neesemann et al. (2019), compiles previous results and proposes a new age estimate. Or rather estimates, because dating such an object is tricky! It depends on which model you use (a lunar-derived chronology model versus an asteroid-derived chronology model, or ADM), and on which part of the crater you apply the model to (ejecta blanket versus interior lobate deposits). Hence the large interval in the result: they found an age of 1.6 to 63.7 Ma (ADM on the ejecta blanket), which they translated to "relatively young", geologically speaking. | {
"domain": "astronomy.stackexchange",
"id": 5027,
"tags": "ceres, crater"
} |
Is the function of adjacent genes correlated? | Question: Do genes that occupy a similar locus on the genome have correlated function, specifically in human beings? It is my understanding that adjacent genes are inherited together, and so location plays a role there. However, in terms of function, I don't know to what extent location plays a role. Furthermore, if say two adjacent genes have the same expression, does this necessarily mean that their function is correlated, or is that interpretation a stretch?
Answer: In bacteria, this is often true. This is because more than one gene is often transcribed onto a single RNA. This grouping of genes is called an operon. It is usually true that these have a related function because they are being translated to protein in very much the same proportion - a convenient way to regulate the function as a whole.
Once you get into eukaryotes, this is no longer true (except for very rare cases, most of which are viral genes): one mRNA transcript contains just one translation region. This is true even for yeast and other single-celled organisms. Gene regulation can be correlated, but the relationship on the genome has little to do with it.
There is some importance to the genomic relationship of two genes because of the crossing over that occurs in meiosis, but this is more of a relationship that is important in speciation and evolution; it doesn't have any recognized importance to how the genes act within the eukaryotic cell. | {
"domain": "biology.stackexchange",
"id": 3366,
"tags": "genetics, gene-expression, human-genetics"
} |
Expected value of function of operator | Question: I have some eigenstate $|\psi\rangle$, a smooth function $f$ and an observable $\hat{A}$. I want to compute the expected value of $f(\hat{A})$:
$$
\langle \psi|f(\hat{A})|\psi\rangle.
$$
I would do that by expanding $f(\hat{A})$ in a Taylor series. However my situation is quite simple and there might be a somewhat easier way of doing it. I have eigenstates of the form $|\ell, m\rangle$ such that $\langle \theta,\phi|\ell,m\rangle=Y_\ell^m(\theta,\phi)$. The observable in my case is $\hat{A} = \hat{y}/\hat{r} = \sin(\hat{\theta})\sin(\hat{\phi})$. I'm having trouble here. For instance, if $\hat{A}=\hat{\theta}$ then I would do:
$$
\langle \psi|\hat{\theta}|\psi\rangle = \langle \psi|\left(\int_0^\pi d\theta\, \theta\, |\theta\rangle\langle\theta|\right)|\psi\rangle.
$$
Answer: If the state $|\psi \rangle = |\ell, m\rangle$, where $\langle \theta,\phi | \ell, m\rangle = Y^m_\ell(\theta,\phi)$ are the normalised spherical harmonics, then the expectation value of an operator $\mathcal O$ is given by,
$$\langle \mathcal O \rangle = \int d\Omega \, \left[Y^{m}_\ell (\theta,\phi)\right]^* \, \mathcal{O} \, Y^m_\ell(\theta,\phi)$$
over the unit sphere, $d\Omega = \sin \theta \, d\theta d\phi$. In your case, applying the definition of the harmonics,
$$= \frac{(2\ell+1)}{4\pi}\frac{(\ell-m)!}{(\ell+m)!}\int \, d\Omega \, \left[P^{m}_\ell(\cos \theta)\right]^2 \, (\sin \theta \sin \phi) = 0$$
since luckily, the easier integration over $\phi$ is simply,
$$\int_0^{2\pi} d\phi \, \sin \phi = 0,$$
and there is no need to do the daunting integration over $\theta$. (Though, if you did need to do the integration over $\theta$, since Mathematica was unable to produce a closed-form answer, off the top of my head I would either replace the Legendre polynomials by a series representation, or maybe write them in terms of hypergeometric functions and make use of some of their identities.) | {
"domain": "physics.stackexchange",
"id": 35381,
"tags": "quantum-mechanics, homework-and-exercises, operators, hilbert-space"
} |
Did the initiation of the Higgs Field and the inflationary period occur simultaneously? | Question: My question is: Both of these concepts involve a phase transition and they both started, as far as I know, early in time, so are the above two ideas linked in any way?
My first source for this query is Inflationary Period
The inflationary epoch lasted from 10$^{−36}$ seconds after the Big Bang to sometime between 10$^{−33}$ and 10$^{−32}$ seconds. Following the inflationary period, the Universe continues to expand, but at a less rapid rate.
My second source is from Lisa Randall, who states, and I paraphrase her a bit here:
Early on, [After the Big Bang] particles had no mass, but later a phase transition occurred that gave some of the particles mass (i.e. the Higgs field turned on).
My assumptions about these effects are that the inflation idea solves the horizon problem and the flatness problem and that the Higgs Field provides mass, although I am aware that, in a proton for example, most of the mass does not come from the Higgs field, but from the "quark sea", as Randall puts it.
My initial guess is that they are connected in some way, but the probability of this diminishes, imo, if they did not occur simultaneously.
I self study, but I think I am at a level that, if I am lucky enough to receive any answers, I probably won't be able to completely follow them immediately. However, I may be able to understand them after a period of related reading, so being pushed a bit on implicit assumed knowledge is ok by me.
Answer: I expect you are familiar with the Big Bang model, seen here. It is a mathematical model using mathematical solutions from General Relativity and the Standard Model of particle physics.
The BB developed to describe astronomical observations and the SM developed to describe particle physics observations. The SM describes how particles/nature behaves as the energy available gets larger, and the Big Bang describes the available energy as the time from the origin of the BB increases.
The electroweak symmetry is unbroken above a certain interaction energy; as the energy available per interaction gets lower, the Higgs field appears, the particles get masses, and nucleation can later start. This happened
Between 10^−12 second and 10^−6 second after the Big Bang
Before the Cosmic Microwave Background data came out and the horizon problem appeared, electroweak symmetry breaking was the phase transition appearing in the Big Bang chronology.
The quantization of gravity# hypothesis for the very early universe could model the uniformity of the observed CMB spectrum. The inflationary period was introduced to model the CMB, in order to be consistent with the observation of a uniformity in the universe that could not be explained by thermodynamic arguments at any other period, except immediately after the BB.
The two phase transitions (end of the inflation period, electroweak breaking) are connected by the diminishing of the available energy per particle due to the expansion, but are independent of each other, with the electroweak breaking and the appearance of the Higgs field happening much later in the chronology of expansion.
# Please note that the quantization of gravity is, at the moment, an effective theory. There is research going on, but there are no definitive answers yet. | {
"domain": "physics.stackexchange",
"id": 25023,
"tags": "particle-physics, cosmology, big-bang, higgs, cosmological-inflation"
} |
From String Frame to Einstein Frame for 10D supergravity | Question: This question is related to but not answered in the post String frame and Einstein frame for a Dp-brane, so it should be treated as a separate question.
Beginning with the gravity action
$$S = \frac{1}{(2\pi)^7 l_s^8}\int d^{10}x \sqrt{-\gamma}\left[e^{-2\Phi}(R + 4(\nabla\Phi)^2) - \frac{1}{2}\left|F_{p+2}\right|^2\right]$$
in the string frame, I want to derive the action in the Einstein frame, which is
$$S = \frac{1}{(2\pi)^7 l_s^8 g_s^2}\int d^{10}x \sqrt{-g}\left[R - 4(\nabla\phi)^2 - \frac{1}{2}g_s^2 e^{(3-p)\phi/2}\left|F_{p+2}\right|^2\right]$$
where $e^{\Phi} = g_s e^{\phi}$, $g_{\mu\nu} = e^{-\phi/2}\gamma_{\mu\nu}$, and $|F_{p}|^2 = \frac{1}{p!}F_{\mu_1\mu_2\ldots\mu_p}F^{\mu_1\mu_2\ldots\mu_p}$.
I understand that
$$R_\gamma = e^{-\phi/2}\left[R_g - \frac{9}{2}\nabla^2\phi - \frac{9}{2}(\nabla\phi)^2\right]$$
(Note: the above expression for the Ricci scalar has been derived here: Curvature of Weyl-rescaled metric from curvature of original metric). The interpretation is that the derivative terms (gradient squared, and Laplacian) on the right hand side have been computed using the $g$ metric, and hence are "already" in Einstein frame form.
Now, I also understand that
$$\sqrt{-\gamma} = e^{5\phi/2}\sqrt{-g}$$
$$|F_{p+2}|^2_{\mbox{string frame}} = e^{-(p+2)\phi/2} |F_{p+2}|^2_{\mbox{Einstein frame}}$$
(for the particular normalization stated above) and
$$(\nabla\phi)^2_{\mbox{string frame}} = e^{-\phi/2}(\nabla\phi)^2_{\mbox{Einstein frame}}$$
but substituting all this into the first expression for the action still leaves behind the Laplacian term $\nabla^2\phi$, which does not appear in the (correct) expression for the Einstein frame action.
What am I missing here?
Answer: The previous answers are correct: in general you can ignore the Laplacian terms, since they are total derivatives (equations of motion are not affected by total derivatives). Perhaps your confusion is due to the fact that you should be using the covariant Laplacian, as on this wiki page:
$$ \Delta \phi = \frac{1}{\sqrt{-\det g}}\,\partial_\mu \left( \sqrt{-\det g}\; g^{\mu\nu}\, \partial_\nu \phi \right)$$
Much like for the divergence in this question, that part of the action gives:
$$\int d^{10} x \sqrt{-\det g}\,\frac{1}{\sqrt{-\det g}}\,\partial_\mu \left( \sqrt{-\det g}\; g^{\mu\nu} \partial_\nu \phi \right)=\int d^{10} x\, \partial_\mu \left( \sqrt{-\det g}\; g^{\mu\nu} \partial_\nu \phi \right) $$
which is clearly a total derivative.
Importantly, things are not as simple when you go from Einstein frame to string frame: you end up with a term $e^{-2 \phi} \Delta \phi$ which is NOT a total derivative. In that case, you need to use that $$e^{-2 \phi} \Delta \phi = -\frac12 \Delta e^{-2 \phi} + 2 e^{-2 \phi} \partial _ \mu \phi \partial ^\mu \phi$$
the first term on the right IS a total derivative and can be killed, but the last one contributes to give the right normalization for the dilaton in the string frame (a factor of $+4$). | {
"domain": "physics.stackexchange",
"id": 96952,
"tags": "general-relativity, string-theory, ads-cft, branes, supergravity"
} |
What exactly makes negative energy negative? | Question: Antimatter is the opposite of matter since it has opposite electric charge (e.g. proton -> positive whilst anti-proton -> negative).
So what exactly makes negative energy negative? What property does it have to make it the opposite of 'positive' or 'zero' energy?
Answer: Imagine that you're (infinitely) far away from Earth - define this point as your potential energy $E_p = 0$ as Earth does not affect you. Now imagine you are on the surface of Earth and want to get to that point - how do you do that? You need to add energy to yourself/your spaceship to counteract Earth's gravity and escape - so you add this energy to the potential energy you experience on the surface to get to the desired point. This means that while on Earth, your potential energy is negative. | {
"domain": "physics.stackexchange",
"id": 57566,
"tags": "energy, potential-energy, conventions"
} |
c# Fastest way to get values from string | Question: I have a C# app that receives the following commands, via tcp sockets.
{
key = "foo",
value = 1.6557,
}
I'm currently using this method to get the key-value pairs and store them in a class's auto properties.
private Regex _keyRegex = new Regex("\"(.)*\"");
private Regex _valueRegex = new Regex(@"\d*\.{1}\d*");
private MyClass CrappyFunction(string nomnom)
{
// Gets a match for the key
var key = _keyRegex.Match(nomnom);
// Gets a match for the value
var value = _valueRegex.Match(nomnom);
// Tests if got matches for both. If not, returns null.
if (!key.Success || !value.Success) return null;
// Found both values, so it creates a new MyClass and returns it
// Also removes the " chars from the key
return new MyClass(
key.ToString().Replace("\"", string.Empty),
value.ToString());
}
Even though it works, I have a really bad feeling looking at this particular piece of code. It's ugly, in the sense that I'm using two regex objects to achieve my goal.
Any suggestions will be appreciated.
Answer: If you don't want to use a JSON library, you can combine the 2 regex expressions into one and use named groups (?<name>expression).
private static Regex _regex =
new Regex(@"""(?<key>.*)"".*?(?<value>\d*\.\d*)", RegexOptions.Compiled);
Then you get the result with
var match = _regex.Match("{ key = \"foo\", value = 1.6557, }");
string key = match.Groups["key"].Value;
string value = match.Groups["value"].Value;
Note that in a verbatim string, double quotes must be escaped by doubling them. The named group key does not include the double quotes, so you get the key directly.
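As a quick cross-check of how the pattern behaves, here is the same idea sketched in Python, whose re module spells named groups as (?P<name>…); the input string is the message from the question. The second pattern shows what happens when the quantifier between the groups is greedy instead of lazy:

```python
import re

# Same pattern as the answer's, in Python's named-group spelling.
lazy = re.compile(r'"(?P<key>.*)".*?(?P<value>\d*\.\d*)')
text = '{ key = "foo", value = 1.6557, }'

m = lazy.search(text)
print(m.group("key"), m.group("value"))  # foo 1.6557

# With a greedy .* between the groups, backtracking stops at the decimal
# point, so the digits before it are swallowed into the .*.
greedy = re.compile(r'"(?P<key>.*)".*(?P<value>\d*\.\d*)')
print(greedy.search(text).group("value"))  # .6557
```

The greedy variant returning ".6557" is exactly the "swallows the digits before the decimal point" behaviour described here.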
So I basically have the regex
key_expression.*?value_expression
Both expressions are separated by .*?. The question mark tells the * to be lazy, i.e. to take as few characters as possible. If you don't make it lazy, it will swallow the digits before the decimal point. | {
"domain": "codereview.stackexchange",
"id": 34053,
"tags": "c#, regex"
} |
Generic multithreading solution for improving the performance of slow tasks | Question: I'm currently in the process of replacing an archaic multithreading solution using some of the newer C++ standard library features now that our software has been updated to use C++20.
Previously, most of our multithreading was implemented using thread managers that controlled the flow of the threads and allocated tasks to them. Typically this was done to chop up a very slow function into smaller "bites" of work that could be spread across available threads in order to improve performance.
I wanted to create a templated solution that removes the overhead of creating a unique thread manager class and that also reduces complexity for future multithreading implementations.
Here is the new solution I've come up with:
Threads.h
#include <chrono>
#include <future>
#include <vector>
using namespace std::chrono_literals;
template <class TPayload>
void DoTasksAsync(
std::function<void(const TPayload&)> asyncTask,
const std::vector<TPayload>& jobs,
std::function<bool()> shouldContinue)
{
int numWorkerThreads = std::thread::hardware_concurrency() - 1;
size_t jobsLeft = jobs.size();
std::vector<std::future<void>> futures;
do
{
// create a time point 1 microsecond from now
std::chrono::system_clock::time_point aLittleLater = std::chrono::system_clock::now() + 1us;
// remove all futures that have finished in space of 1 microsecond
std::erase_if(futures, [aLittleLater](std::future<void>& thr)
{
return thr.wait_until(aLittleLater) == std::future_status::ready;
});
// if there is space in the futures vector, begin another task in another thread
// with the next payload
while (futures.size() < numWorkerThreads && jobsLeft > 0)
{
TPayload payload = std::move(jobs[--jobsLeft]);
futures.push_back(
std::async(
std::launch::async,
[&asyncTask, payload = std::move(payload)]
{
asyncTask(payload);
})
);
}
} while (futures.size() > 0 && shouldContinue());
// ensure all threads have finished
// -- important when shouldContinue() breaks the loop while threads are still executing
for (std::future<void>& thr : futures)
{
if (thr.valid())
{
thr.wait();
}
else
{
throw std::future_error(std::future_errc::no_state);
}
}
}
Here is an example of how it might be used:
Sorter.cpp
#include "Threads.h"
#include <algorithm>
#include <iostream>
#include <vector>
enum SortingAlgorithm { Bubble, IntroSort };
template<class T>
class Sorter
{
public:
void Sort(std::vector<T>& vector, SortingAlgorithm algorithm)
{
switch (algorithm)
{
case Bubble:
DoBubble(vector);
break;
case IntroSort:
DoIntroSort(vector);
break;
}
}
private:
void DoBubble(std::vector<T>& vector)
{
size_t size = vector.size();
for (int i = 0; i < size - 1; i++)
{
for (int j = 1; j < size - i; j++)
{
if (vector[j] < vector[j - 1])
{
T temp = vector[j];
vector[j] = vector[j - 1];
vector[j - 1] = temp;
}
}
}
}
void DoIntroSort(std::vector<T>& vector)
{
std::sort(vector.begin(), vector.end());
}
};
struct SortVectorPayload
{
Sorter<int>* Sorter;
SortingAlgorithm Algorithm;
std::vector<int>* Vector;
};
static void SortVectorAsync(const SortVectorPayload& payload)
{
payload.Sorter->Sort(*payload.Vector, payload.Algorithm);
}
int main()
{
Sorter<int>* sorter = new Sorter<int>();
std::vector<SortVectorPayload> payloads;
// fill the list of payloads with some random stuff to do
std::vector<std::vector<int>*> lists;
for (int i = 0; i < 100; i++)
{
lists.push_back(new std::vector<int>(1000));
std::generate(lists.back()->begin(), lists.back()->end(), []() { return std::rand(); });
payloads.push_back(SortVectorPayload{ sorter, Bubble, lists.back() });
}
for (int i = 0; i < 100; i++)
{
lists.push_back(new std::vector<int>(1000));
std::generate(lists.back()->begin(), lists.back()->end(), []() { return std::rand(); });
payloads.push_back(SortVectorPayload{ sorter, IntroSort, lists.back() });
}
// sort the lists on available threads
DoTasksAsync(std::function(SortVectorAsync), payloads, []() { return true; });
for (std::vector<int>* list : lists)
{
delete list;
}
delete sorter;
}
So, here are my major concerns I'd like to get some help with:
I'm using two std::move() calls inside DoTasksAsync to put the payloads in the right location. Is there anything inherently wrong with using both of these calls when perhaps one would suffice, or am I using too few and copying more data than I need to?
The only way I could come up with for checking the status of all the futures that are currently running was to create a timepoint 1 microsecond in the future and calling wait_until() on each future. Is there a cleaner way of doing this that would also allow the main thread to continue checking the shouldContinue() function each iteration?
I never call get() on any of my futures. Will these leave futures floating around that are still waiting for something to get their result even after they've left the scope of the function?
I don't deal with return types at all and just use the payload to have each thread modify some shared data (with appropriate locks and such). What changes could I make to this function to enable return values from each thread's task?
And finally, is what I'm doing here even a good idea at all or am I completely barking up the wrong tree?
Answer: Use worker threads that pick jobs themselves
There are a few issues with your code. First, you call std::async() once for every job. With most implementations of the standard library, it means it creates a new std::thread for every job. While you limit the amount of concurrent threads, creating and destroying a std::thread still has some cost. So it would be better to just create numWorkerThreads std::threads, and have each thread pick jobs from jobs itself, until all jobs are finished.
Another issue is that in your implementation, you have a busy-loop waiting for futures to become ready. Sure, it waits up to a microsecond, but that's usually not enough for a CPU core to go into low-power idle mode, so effectively you are still burning power continuously just waiting for another core to finish a task. It's much better to use std::condition_variable to signal that something has finished. Or you could even avoid explicitly waiting entirely. Consider this implementation:
template <class TPayload>
void DoTasksAsync(
std::function<void(const TPayload&)> asyncTask,
const std::vector<TPayload>& jobs,
std::function<bool()> shouldContinue)
{
const std::size_t numWorkerThreads =
std::min(jobs.size(), static_cast<std::size_t>(std::thread::hardware_concurrency()));
std::atomic_size_t next_job = 0;
std::vector<std::jthread> threads;
threads.reserve(numWorkerThreads);
for (std::size_t i = 0; i != numWorkerThreads; ++i) {
threads.emplace_back([&]() {
std::size_t job;
while (shouldContinue() && (job = next_job++) < jobs.size()) {
std::invoke(asyncTask, jobs[job]);
}
});
}
}
Moving from a const container
You pass jobs by const reference, but later you try to std::move() items from jobs. Because jobs is const, it cannot actually do a real move, and will just perform a copy instead. You could pass jobs as a non-const reference, but even better is to not std::move() at all (as shown above), as no copy has to be made if asyncTask itself takes the payload parameter by const reference.
Handling the return value of asyncTask
This is definitely possible, you just need to ensure DoTasksAsync is also templated on the return value of asyncTask somehow, and you have to decide on some way to store the return values. You could for example use a std::vector for that:
template <class TResult, class TPayload>
auto DoTasksAsync(
std::function<TResult(const TPayload&)> asyncTask,
…)
{
…
std::vector<TResult> results(jobs.size());
…
results[job] = std::invoke(asyncTask, jobs[job]);
…
return results;
}
However, note that we now have to explicitly pass the type of the return value as a template parameter when calling DoTasksAsync(). Unfortunately, it cannot deduce this. The solution to this is not to use std::function for passing asyncTask, but rather use a template parameter for the function type, and then deducing the return type from that:
template <class TPayload, class TAsyncTask>
auto DoTasksAsync(
TAsyncTask asyncTask,
…)
{
using TResult = std::invoke_result_t<TAsyncTask, decltype(jobs.front())>;
…
std::vector<TResult> results(jobs.size());
…
}
Make it more generic
Your function requires that the payloads are stored in a std::vector. However, what if you had them in a std::list instead? Or a std::deque, or any other type of container? The only thing you care about is that you can iterate over the range of payloads. So consider writing the function like so:
template <class TJobs, class TAsyncTask>
auto DoTasksAsync(
TAsyncTask asyncTask,
TJobs&& jobs,
…)
{
…
}
You might also consider that the caller wants the result in something different than a std::vector. You could instead have it provide an output iterator to store the results in. This brings me to:
You are effectively implementing a parallel std::transform()
You are applying asyncTask to all elements of jobs. That's basically a parallel std::transform(). Consider looking at the interface of std::transform() and std::ranges::transform().
Even better, std::transform() can actually run things in parallel, if you pass it an execution policy as the first parameter. So maybe you don't need DoTasksAsync() at all?
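For readers who think better in code than in interface descriptions, here is the parallel-transform shape sketched in Python (an illustration of the idea only; in C++ the spelling would be std::transform with an execution policy):

```python
from concurrent.futures import ThreadPoolExecutor

def task(payload):
    # stand-in for the expensive per-job work
    return payload * payload

jobs = list(range(10))

# map applies one function to every job across a pool of worker threads
# and hands back the results in input order -- the DoTasksAsync shape.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, jobs))

print(results)
```

The key property shared with a parallel transform: callers supply one pure-ish function and one range of inputs, and ordering of the outputs is preserved regardless of which thread ran which job.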
Consider using a dynamic thread-safe queue for jobs
You have to pass a vector of a fixed size to DoTasksAsync(). However, if you look at your example code, you are building up payloads, but only once that is fully built can you start processing the payloads. Ideally, a thread could already start working on the first list right after you added it.
The usual solution to that is to use a thread-safe queue. The worker threads will pick items from that queue to process, and the main thread can push new jobs to that queue whenever it wants to. You can find many examples here on Code Review.
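As a minimal sketch of that queue-based design (written in Python for brevity, since its queue.Queue is thread-safe out of the box; a C++ version would guard a std::deque with a mutex and a condition variable):

```python
import queue
import threading

jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        payload = jobs.get()   # blocks until a job (or sentinel) arrives
        if payload is None:    # sentinel: no more work for this worker
            return
        with results_lock:
            results.append(payload * 2)   # stand-in for real processing

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# The producer keeps pushing jobs while the workers are already running.
for n in range(100):
    jobs.put(n)
for _ in threads:
    jobs.put(None)             # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))
```

The sentinel-per-worker shutdown is one common convention; a stop flag or closing the queue are equally valid alternatives.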
Avoid manual memory management
In your example code, you do a lot of manual memory management. This is error-prone: every new must be matched by exactly one delete on every code path, or you get a memory leak. Either avoid using pointers altogether, use smart pointers, and/or use better containers. For example, there is no need to allocate sorter on the heap:
Sorter<int> sorter;
And to get a stable array of lists:
std::deque<std::vector<int>> lists;
This works since std::deque will never move its elements around in memory. | {
"domain": "codereview.stackexchange",
"id": 45573,
"tags": "c++, multithreading, c++20"
} |
Tests on sodium peroxoborate | Question: When I prepared sodium perborate by mixing sodium hydroxide with metaborate ($\ce{NaBO2}$) and hydrogen peroxide and cooling it, I applied a test to it by acidifying it with $\ce{HCl}$ and adding $\ce{KMnO4}$. A precipitate appeared and it turned black. What is the reason?
I tested another sample of sodium perborate by acidifying it with $\ce{NH4Cl}$ and then adding $\ce{KI}$, and this turned white. Again, why?
Answer: In your first test, the $\ce{KMnO4}$ probably reacted with the $\ce{H2O2}$ excess remaining after your preparation. The ionic equation for the reaction is
$$\ce{2MnO4- + 3H2O2 -> 2MnO2 + 2H2O +3O2 + 2OH-}$$
The $\ce{MnO2}$ formed is an insoluble black precipitate. Although there is an $\ce{OH-}$, the reaction apparently occurs in neutral and faintly acidic medium as well.
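A quick sanity check of that equation's bookkeeping (a throwaway tally in Python, not part of the original answer): the atoms of each element and the net charge must match on both sides of $\ce{2MnO4- + 3H2O2 -> 2MnO2 + 2H2O + 3O2 + 2OH-}$.

```python
# Tally each element and the net charge on both sides of
#   2 MnO4^- + 3 H2O2 -> 2 MnO2 + 2 H2O + 3 O2 + 2 OH^-
left  = {"Mn": 2, "O": 2*4 + 3*2, "H": 3*2, "charge": 2 * (-1)}
right = {"Mn": 2, "O": 2*2 + 2*1 + 3*2 + 2*1, "H": 2*2 + 2*1, "charge": 2 * (-1)}

assert left == right   # 2 Mn, 14 O, 6 H, net charge -2 on each side
print("balanced:", left)
```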
The result of your second test doesn't seem to be very clear. Sodium perborate apparently reacts very quickly with any water so if you added any water during the second test, it must have reacted to form oxygen gas bubbles (which appear white). | {
"domain": "chemistry.stackexchange",
"id": 364,
"tags": "inorganic-chemistry"
} |
Method that compares the scores of two players | Question: I tackled this problem today and was wondering how I could possibly improve this code.
The objective of the method is to return an array with the respective scores of each player. The method takes three values from 2 users. a0,a1,a2 are all scores from player a. b0,b1,b2 are all scores of player b.
A player is awarded a point if their score in a given category is bigger than the other player's score in the same category.
For example, to determine who got a better first score we must compare a0 with b0, and so forth.
Here is my code:
static int[] solve(int a0, int a1, int a2, int b0, int b1, int b2){
int p1 = 0;
int p2 = 0;
if(a0 > b0){
p1++;
}else if(a0 < b0){
p2++;
}
if(a1 > b1){
p1++;
}else if(a1 < b1){
p2++;
}
if(a2 > b2){
p1++;
}else if(a2 < b2){
p2++;
}
return new int[]{p1,p2};
}
Answer: As Timothy mentioned, your naming of the variables are not so good. I would recommend you have a look at Java Naming Conventions. It will be helpful for writing easily readable codes.
Secondly, try using private for your method. I know it is not a big program, but you will get used to it.
Thirdly, the beauty of code lies in shortness and effectiveness. Here I took Timothy's code and wrote it with a for loop:
int finalScoreOfA = 0;
int finalScoreOfB = 0;
int[] scoresOfPlayerA = new int[]{a0, a1, a2};
int[] scoresOfPlayerB = new int[]{b0, b1, b2};
for (int i = 0; i < scoresOfPlayerA.length; i++){
if (scoresOfPlayerA[i] > scoresOfPlayerB[i]){
finalScoreOfA++;
} else if (scoresOfPlayerA[i] < scoresOfPlayerB[i]) {
finalScoreOfB++;
}
}
return new int[]{finalScoreOfA, finalScoreOfB};
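The same pairwise-comparison loop condenses nicely in other languages too; here is a Python sketch using zip and generator expressions (purely illustrative, not a Java suggestion):

```python
def solve(a, b):
    # One point per category won; ties score for neither player.
    score_a = sum(x > y for x, y in zip(a, b))
    score_b = sum(x < y for x, y in zip(a, b))
    return [score_a, score_b]

print(solve([5, 6, 7], [3, 6, 10]))  # [1, 1]
```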
Good luck with your future codes. | {
"domain": "codereview.stackexchange",
"id": 26340,
"tags": "java"
} |
At what depth does the underground begin to warm up? | Question: Spring water comes out colder because it has been underground. But if you go deeper, the temperature goes up. At what depth does the underground stop getting colder and begin getting warmer?
Is there a map for that point, like a geothermal table map? I ask this because I want to build an underground water-cooling air conditioner, in the same manner that pipes are heated on rooftops.
Answer: Your question has an incorrect assumption built in. Near-surface ground water temperatures are not generally colder, but rather reflect the average annual temperature. This will be colder than surface temps in summer, but warmer in winter. There is an additional effect if your rainfall isn't evenly distributed over the course of the year, as percolating rainfall will be initially warmed by warm soil temps in summer. As an example, in central Alberta shallow well water temps run about 10 C, while our average temp for the year is around 4-5 C.
Start here: https://www.builditsolar.com/Projects/Cooling/EarthTemperatures.htm
A graph on that page shows how soil temps vary with depth depending on seasonal variation, and moisture levels. Ball park: 30 feet gets you out of the seasonal variation, but 12 feet eliminates about 2/3 of the annual variation.
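Those ball-park figures are consistent with the textbook damped heat-wave solution for a sinusoidal surface temperature: the annual swing at depth z is reduced by a factor exp(-z/d), where the damping depth is d = sqrt(2*alpha/omega). A minimal sketch, assuming a typical soil thermal diffusivity of 0.05 m^2/day (that value is my assumption, not from the answer):

```python
import math

def annual_amplitude_fraction(depth_m, diffusivity_m2_day=0.05):
    """Fraction of the annual surface temperature swing that survives at depth.

    Damped heat-wave solution for sinusoidal forcing: amplitude ~ exp(-z/d),
    with damping depth d = sqrt(2 * alpha / omega) for the annual frequency omega.
    """
    omega = 2 * math.pi / 365.0                    # annual cycle, rad/day
    d = math.sqrt(2 * diffusivity_m2_day / omega)  # damping depth, metres
    return math.exp(-depth_m / d)

# 12 ft is about 3.66 m, 30 ft is about 9.14 m
print(annual_amplitude_fraction(3.66))  # most of the swing is gone
print(annual_amplitude_fraction(9.14))  # seasonal swing essentially eliminated
```

With this assumed diffusivity, roughly three quarters of the swing is gone at 12 feet and nearly all of it at 30 feet, in line with the ball-park figures quoted above.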
If your climate is cold enough in winter, it may be worth your while to store coolth in the form of brine. In effect build a well insulated building with insulated foundation with what amounts to a swimming pool in it filled with either salt water brine, or calcium chloride brine. Have a few thousand feet of 1/2" plastic pipe (drip irrigation pipe is cheap) both outside and in the pool, and circulate antifreeze through the pipe when winter temps are below the brine temp. In the summer, a secondary loop circulates between the pool and the heat exchangers for the house.
If you are in a desert climate, you may be able to do something similar on a smaller scale. Under a calm clear sky, you can often get frost on the side of an insulator facing the dark sky at night. Modifying this idea, create a large bottom insulated surface coated with a good IR emitter (Titanium dioxide white is one such.) , and run shallow water over it. If you can routinely get 30 C colder than day time air temps, then you only have to store a few days worth of coolth instead of a years worth.
You can make the system above colder by pre-chilling the water with an evaporative cooler.
Edit: One source claims 1 degree F per 30 feet starting at around 100 feet. Temperatures above this depth, however, are highly variable due to water movement.
"domain": "earthscience.stackexchange",
"id": 2764,
"tags": "geology, geothermal-heat, mapping, natural-conditions, underground-water"
} |
Does there always exist a reduction between two NP-hard problems? | Question: Let $A$ and $B$ be NP-hard problems. For all tuples $(A, B)$ does there exist a polynomial time reduction from $A$ to $B$ and from $B$ to $A$?
Context: I want to prove some problem is NP-hard. Can I pick any problem in NP-hard to reduce from?
Answer: A decision problem $L$ is NP-hard when for every problem $H$ in NP, there is a polynomial-time reduction from $H$ to $L$. This is important because it does not necessarily go the other way: you cannot conclude there is a polynomial-time reduction from $L$ to $H$. This is because a problem can be NP-hard without being in NP. If $A$ and $B$ in your question are both outside NP, then there may not be a reduction between them.
"domain": "cs.stackexchange",
"id": 8877,
"tags": "reductions, np-hard"
} |
Set id in constructor, or generate it when asked for? | Question: I'm using PHP, I have a class that generates an id based on the calling class, so the id of the class won't change after it has been initialised. Now I'm wondering which of my two methods is preferable.
Set the id in the constructor
Generate the id when asked for
/* Set the id in the constructor */
abstract class Badge
{
protected $activities;
protected $id;
abstract public function check();
public function __construct(array $activities)
{
$this->activities = $activities;
$this->id = $this->getId();
}
public function getId(): string
{
$class = explode('\\', get_class($this));
$classWithoutNamespace = array_pop($class);
$snakeCased = strtolower(preg_replace('/(?<!^)[A-Z]/', '_$0', $classWithoutNamespace));
return "{$snakeCased}_{$this->target}";
}
public function toArray(): array
{
return [
'id' => $this->id,
];
}
}
/* Generate the id when asked for */
abstract class Badge
{
protected $activities;
abstract public function check();
public function __construct(array $activities)
{
$this->activities = $activities;
}
public function getId(): string
{
$class = explode('\\', get_class($this));
$classWithoutNamespace = array_pop($class);
$snakeCased = strtolower(preg_replace('/(?<!^)[A-Z]/', '_$0', $classWithoutNamespace));
return "{$snakeCased}_{$this->target}";
}
public function toArray(): array
{
return [
'id' => $this->getId(),
];
}
}
Answer: I would go with lazy loading. Generate the id when requested
private $id;
public function getId()
{
if ($this->id === null) {
$this->id = //your logic to generate the id here
}
return $this->id;
}
and don't use $this->id anywhere else, not even inside the class. Use $this->getId() everywhere.
Using $this->getId() everywhere will ensure that you always get the generated id (the same one every time) and that you never get the null initial value.
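For comparison only, since the question is about PHP: the same lazy-initialization idea sketched in Python. The `$target` suffix from the original is omitted here, and the class names are made up for illustration:

```python
import re

class Badge:
    """Compute the id on first request and cache it; always go through get_id()."""

    def __init__(self, activities):
        self.activities = activities
        self._id = None  # not generated yet

    def get_id(self):
        if self._id is None:
            # snake_case the concrete class name, mirroring the PHP getId()
            name = type(self).__name__
            self._id = re.sub(r'(?<!^)([A-Z])', r'_\1', name).lower()
        return self._id

class GoldStar(Badge):
    pass

badge = GoldStar([])
print(badge.get_id())                    # gold_star
print(badge.get_id() is badge.get_id())  # cached: the very same string object
```

The `is` check illustrates the point of the pattern: after the first call, every caller sees the one cached value, never the uninitialized sentinel.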
"domain": "codereview.stackexchange",
"id": 39240,
"tags": "php, constructor"
} |
Coriolis force at small scale | Question: I would premise that I am not an expert (just an enthusiast) and my mathematical background is limited. That said, I am trying to figure out if the Coriolis force of a big system (our planet, for example) has an effect even at microscopic scale, assuming there is no friction or other interaction inside the fluid (e.g. no polar interaction).
My gut feeling was that what matters is the difference in angular velocity of the observed fluids, so if the scale is really small (compared to the system) the difference in angular velocity should be negligible, and so should the Coriolis force.
I am dubious about that because the Coriolis force appears on Earth even at small scale, like a sink... so I was wondering what I am missing.
Answer:
... coriolis force appear on earth even at small scale, like a sink ...
The idea that the Coriolis force determines whether a sink or bathtub drains clockwise or anti-clockwise is a misconception. On such a small scale, the Coriolis force due to the Earth’s rotation is negligible, and the direction of draining is determined by asymmetries in the geometry of the sink or bathtub. | {
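A rough order-of-magnitude estimate backs this up. For water creeping toward a drain at an assumed v of about 0.1 m/s at mid-latitude (both values are illustrative assumptions, not from the answer), the horizontal Coriolis acceleration 2*Omega*v*sin(latitude) is around a million times smaller than the gravity-scale accelerations at play in a sink:

```python
import math

OMEGA_EARTH = 7.292e-5  # Earth's rotation rate, rad/s
G = 9.81                # gravitational acceleration, m/s^2

def coriolis_acceleration(speed_m_s, latitude_deg=45.0):
    """Horizontal Coriolis acceleration 2 * Omega * v * sin(latitude)."""
    return 2 * OMEGA_EARTH * speed_m_s * math.sin(math.radians(latitude_deg))

a_coriolis = coriolis_acceleration(0.1)  # assumed ~0.1 m/s flow toward the drain
print(f"Coriolis acceleration: {a_coriolis:.2e} m/s^2")
print(f"Ratio to g:            {a_coriolis / G:.2e}")
```

Any slight initial swirl or asymmetry in the basin easily dominates an acceleration this small, which is why the drain direction is not set by the Earth's rotation.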
"domain": "physics.stackexchange",
"id": 80974,
"tags": "fluid-dynamics, vortex, coriolis-effect"
} |
A stream version of Fizz Buzz | Question: I'm trying to learn streams and hashmaps, in doing so I want to make a very scalable, very clean version of FizzBuzz while adhering to the book Clean Code by Robert Cecil Martin.
public class Main {
public static void fizzBuzzHundredTimes(Map<String, Integer> fizzDivisor) throws ArithmeticException{
String output; // Avoid recreating String each iteration
for (int i = 1; i <= 100; i++) {
final int finalI = i; // Variables used in lambda functions need to be final.
output = fizzDivisor.entrySet()
.stream()
.filter(entry -> finalI % entry.getValue() == 0)
.map(entry -> entry.getKey())
.collect(Collectors.joining(""));
if (output.length() > 0) {
// the value was divisible.
System.out.println(output);
} else {
System.out.println(i);
}
}
}
public static void main(String[] args) {
Map<String, Integer> fizzMap = new HashMap<>();
fizzMap.put("Fizz",3);
fizzMap.put("Buzz",5);
fizzMap.put("Fuzz",7);
fizzMap.put("Bizz",11);
fizzMap.put("Biff",13);
try{
// Avoid negative numbers
fizzBuzzHundredTimes(fizzMap);
} catch(Exception e){
System.err.println(e + "Was thrown");
}
}
}
Answer:
You can use output.isEmpty() instead of checking its length.
You might want to use a LinkedHashMap, which preserves the insertion order of the elements as well. Right now the ordering will be seemingly random (Will it be BuzzFizzFuzzBiffBizz? Or FizzFuzzBiffBizzBuzz?) (technically it's deterministic, but it's based on the hashcodes of the strings, which is seemingly random)
Your exception handling System.err.println(e + "Was thrown"); doesn't tell you anything about why the exception happened. I would recommend using a logger (Slf4j / Log4j) and log the exception properly, or use e.printStackTrace() to make it easier for you to debug the problems.
In your final version however, you should not need any exception handling as all the bugs that could cause them to appear should have been fixed - as this is an application based on pure logic and no network calls and stuff.
String output; // Avoid recreating String each iteration technically, the string is recreated each iteration anyway. There's nothing you can do to avoid that.
You're only declaring it once. It's best practice to declare it in a small scope as possible, so I would recommend declaring it only when you initialize it.
You could create another method to return a single string for a single number, instead of having one method to process all the 100 numbers.
// Avoid negative numbers I don't see how that comment is relevant at the location it is written. Negative numbers are prevented by the for-loop in your code. | {
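To make the ordering point concrete: here is the same label-joining idea sketched in Python (purely for illustration, since the question is Java), where a plain dict preserves insertion order (3.7+) and so plays the role of LinkedHashMap:

```python
def fizzbuzz_word(n, words):
    """Join the labels of every divisor of n, in the order they were registered."""
    out = ''.join(word for word, divisor in words.items() if n % divisor == 0)
    return out or str(n)

words = {"Fizz": 3, "Buzz": 5, "Fuzz": 7}  # insertion order is the output order
print(fizzbuzz_word(15, words))    # FizzBuzz, never BuzzFizz
print(fizzbuzz_word(105, words))   # FizzBuzzFuzz
print(fizzbuzz_word(4, words))     # 4
```

With an order-preserving map, the output for a number divisible by several divisors is fully determined by the order the entries were added, which is what you want for FizzBuzz.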
"domain": "codereview.stackexchange",
"id": 35450,
"tags": "java, stream, fizzbuzz"
} |
Finding missing numbers in an array | Question: Is it possible to make this faster? Any suggestions are more than welcome
I have used JavaScript to write this code
PROBLEM
You will get an array of numbers.
Every preceding number is smaller than the one following it.
Some numbers will be missing, for instance:
[-3,-2,1,5] // missing numbers are: -1,0,2,3,4
Your task is to return an array of those missing numbers:
[-1,0,2,3,4]
SOLUTION
const findTheMissing = (target) => {
// final result list
let result = [];
// array will go from min to max value present in the array
const min = target[0];
const max = target[target.length - 1];
// will maintain the track of index of target array
// will start from the 2nd element of the array because we need a number to subtract from
let pivotIndex = 1;
for (let index = min; index < max; index++) {
// value to the index
let pivotValue = target[pivotIndex];
// dif of the value
let diff = pivotValue - index;
// diff means its time to move the pivot to next :P
if (diff > 0) {
// not going to current index at exists in the target array
if (index === target[pivotIndex - 1])
index++;
// YO!! WE FOUND THE MISSING
result.push(index);
}
else {
pivotIndex++;
}
}
return result; // got all missing numbers
}
RESULT
let source = [-5, 0, 2, 5, 7];
console.log(findTheMissing(source));
// [ -4, -3, -2, -1, 2, 3, 4, 6 ]
Answer: I think there is an error:
If I enter the values:
[-3,-2,1,5]
I get the result:
[-2,0,2,3,4]
but it should be:
[-1,0,2,3,4]
The error occurs when there are two adjacent numbers in the array (here -3 and -2).
Naming:
Index is not an index, but the actual value in the valid sequence (from min to max). I would call it value or curValue or something like that.
I don't like the name target either, because the target is actually the sequence of numbers from min to max. input would be better IMO.
Analysis
Basically you want to compare two sequences of numbers index by index. If they differ then save the target value, if not then increment to the next value of the input array. The index of the target sequence (here from min to max) is always incremented. Instead of the target sequence you can just increment a value starting from input[0] and ending on input[input.length - 1].
All in all it could be done something like this:
function findMissing(input) {
var result = [];
for (var inputIndex = 0, targetValue = input[0]; targetValue <= input[input.length - 1]; targetValue++) {
if (input[inputIndex] != targetValue) {
result.push(targetValue);
}
else {
inputIndex++;
}
}
return result;
} | {
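As a sanity check of the corrected logic (translated into Python purely for illustration; the JavaScript above is the actual fix), the failing case from the question now comes out right:

```python
def find_missing(values):
    """Two-pointer walk: advance a counter from min to max, and advance the
    input index only when the counter matches the current input value."""
    result = []
    input_index = 0
    target = values[0]
    while target <= values[-1]:
        if values[input_index] != target:
            result.append(target)
        else:
            input_index += 1
        target += 1
    return result

print(find_missing([-3, -2, 1, 5]))    # [-1, 0, 2, 3, 4], the case that broke the original
print(find_missing([-5, 0, 2, 5, 7]))  # [-4, -3, -2, -1, 1, 3, 4, 6]
```

Note that the second result differs from the output printed in the question: 2 is present in the input and so must not be reported, while 1 is genuinely missing.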
"domain": "codereview.stackexchange",
"id": 30311,
"tags": "javascript, programming-challenge"
} |
Help to understand the code for dipeptide composition calculation (in python) | Question: Dipeptide composition of a protein sequence is the number of times a particular dipeptide (e.g. Arginine-Histidine) occurs in a sequence divided by the total number of dipeptides in the sequence (which is the length of the sequence - 1)
I have found the following code for this:
import re
def DPC(fastas):
    AA = 'ACDEFGHIKLMNPQRSTVWY'
    encodings = []
    diPeptides = [aa1 + aa2 for aa1 in AA for aa2 in AA]
    header = ['#'] + diPeptides
    encodings.append(header)
    AADict = {}
    for i in range(len(AA)):
        AADict[AA[i]] = i
    for i in fastas:
        name, sequence = i[0], re.sub('-', '', i[1])
        code = [name]
        tmpCode = [0] * 400
        for j in range(len(sequence) - 2 + 1):
            tmpCode[AADict[sequence[j]] * 20 + AADict[sequence[j+1]]] = tmpCode[AADict[sequence[j]] * 20 + AADict[sequence[j+1]]] + 1
        if sum(tmpCode) != 0:
            tmpCode = [i/sum(tmpCode) for i in tmpCode]
        code = code + tmpCode
        encodings.append(code)
    return encodings
There is another file (main) from where the argument fastas comes, which is basically a list of strings where every string is a protein sequence. I am finding it really difficult to understand the nested loops that were used:
for i in fastas:
    name, sequence = i[0], re.sub('-', '', i[1])
    code = [name]
    tmpCode = [0] * 400
    for j in range(len(sequence) - 2 + 1):
        tmpCode[AADict[sequence[j]] * 20 + AADict[sequence[j+1]]] = tmpCode[AADict[sequence[j]] * 20 + AADict[sequence[j+1]]] + 1
    if sum(tmpCode) != 0:
        tmpCode = [i/sum(tmpCode) for i in tmpCode]
    code = code + tmpCode
    encodings.append(code)
I have spent 4-5 hours on this but I am still finding it really difficult to follow. I would really appreciate if anyone can explain the steps involved in these loops. Thank you!
Edit: The github link to the files are :
DPC.py
iFeature.py
The github link to the python toolkit: iFeature
Answer: A few additional hints that may help clarify the algorithm:
The 20 magic number is because there are twenty amino acids. The 400 magic number is because there are 20 * 20 potential dipeptides.
These peptides are then hashed by converting each letter to a number in the range [0, 20). You can then multiply one of them by 20 and add the other to get a number in [0, 400). In the case of this function the first amino acid "code" is multiplied by 20. This number can be converted back to the dipeptide sequence when outputting the data. The translation table they use is generated here: diPeptides = [aa1 + aa2 for aa1 in AA for aa2 in AA].
The nested loops are to loop through each entry and then each sequence to get every dipeptide sequence, which can be done in linear time.
Just as a quick note on hashing DNA and peptide sequences. You can generally convert them to an array index by using increasing powers of the alphabet size multiplied by the letter index of the alphabet. For example, for DNA you can convert A=0, C=1, G=2, T=3, and you can hash fourmers by using 4^3 * a[i] + 4^2 * a[i + 1] + 4^1 * a[i + 2] + 4^0 * a[i + 3]. In code you may see this performed with bitshifts for DNA since the alphabet size is a power of 2.
The advantage of this hashing scheme is that each sequence is guaranteed to have a unique index. The problem with this method is that if you wish to use longer k-mers, the memory usage grows exponentially. For example, for 16-mers you need 4 billion combinations, which is the size of a 32-bit unsigned integer.
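The index scheme described above is small enough to sketch and verify directly; nothing below is taken from the original code except the alphabet and the 20*i + j rule:

```python
AA = 'ACDEFGHIKLMNPQRSTVWY'
AA_INDEX = {aa: i for i, aa in enumerate(AA)}  # same role as the AADict loop in DPC

def dipeptide_to_index(dipeptide):
    """Map a two-letter peptide into [0, 400): first residue * 20 + second residue."""
    return AA_INDEX[dipeptide[0]] * 20 + AA_INDEX[dipeptide[1]]

def index_to_dipeptide(index):
    """Invert the mapping to recover the two-letter sequence."""
    return AA[index // 20] + AA[index % 20]

print(dipeptide_to_index('AC'))   # A=0, C=1, so 0*20 + 1 = 1
print(index_to_dipeptide(399))    # the last slot, 'YY'
print(index_to_dipeptide(dipeptide_to_index('RH')))  # round-trips to 'RH'
```

Because every dipeptide gets a distinct index, `tmpCode` can be a flat list of 400 counters, which is exactly what the inner loop increments.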
"domain": "bioinformatics.stackexchange",
"id": 1946,
"tags": "python, featurecounts"
} |
Is it ok to struggle with mathematics while learning AI as a beginner? | Question: I have a decent background in Mathematics and Computer Science. I started learning AI from Andrew Ng's course one month ago. I understand the logic and intuition behind everything taught, but if someone asks me to write or derive mathematical formulas related to back propagation I will fail to do so.
I need to complete object recognition project within 4 months.
Am I on right path?
Answer: I think the key part of your question is "as a beginner". For all intents and purposes you can create a state of the art (SoTA) model in various fields with no knowledge of the mathematics whatsoever.
This means you do not need to understand back-propagation, gradient descent, or even mathematically how each layer works. Instead, you could just know that there exists an optimizer and that different layers generally do different things (convolutions are good at picking up local dependencies; fully connected layers are good at picking up connections among your neurons, in an expensive manner, when you hold no previous assumptions), etc. Follow some common intuitions and architectures built upon in the field and your ability to model will follow (thanks to the amazing work on open-source ML frameworks -- looking at you, Google and Facebook)! But this is only a stop-gap.
A Newton quote that I'm about to butcher: "If I have seen further it's because I'm standing on the shoulders of giants". In other words, he saw further because he didn't just use what people before him did; he utilized it to expand even further. So yes, I think you can finish your object detection project on time with little understanding of the math (look at the Google object detection API: it does wonders and you don't even need to know anything about ML to use it, you just need to have data). But, and this is a big but, if you ever want to extend into a realm that isn't particularly touched upon or push the envelope in any meaningful way, you will probably have to learn the math, learn the basics, learn the foundations.
"domain": "ai.stackexchange",
"id": 1323,
"tags": "math, getting-started"
} |
What do we know of superconductivity in thin layers? | Question: Motivated by another question, I wonder if there are special properties of superconductivity when restricted to 2D or very thin layers, related to the effective permittivity as a function of frequency $\epsilon(\omega)$ near the first-order transitions between the superconducting and normal phases. Any references about the main state of the art would be appreciated.
Edit: I've noticed that most research papers on superconductivity report on the current transport properties, but they usually don't talk much about the permittivity. Is this because it's harder to measure?
Answer: The obvious case for superflow in 2D is, well, superflow in a 2DEG (2D electron gas). One of the many ways this manifests is in the quantum Hall effect. There, one observes a step-like behavior for the resistivity of a 2D sample w.r.t. externally controllable parameters such as the magnetic field or carrier concentration. The resistivity is quantized in integer or fractional units as shown in the plot below (courtesy of D.R. Leadley, Warwick University 1997).
When the magnetic field is such that the system lies on the "plateaus", in between the quantized resistance values, the 2DEG in the bulk of the sample is in a superflow state exhibiting dissipationless transport. This is also depicted by the green line in the plot which measures the longitudinal resistance $\rho_{xx}$ of the sample. We see that this quantity vanishes precisely when the system is on a plateau, indicating the presence of superflow in the longitudinal direction.
Edit: As pointed out by @wsc and @4tnmele in the comments below, it is not quite accurate to describe the plateaus as being in a superconducting state. However on the plateaus the hall strip does exhibit dissipationless flow in the longitudinal direction - even though this is primarily due to edge currents. The bulk remains insulating. So while it might be correct to describe this state as exhibiting "superflow" it is incorrect to call it a "superconductor". I have modified the language in my answer to reflect this change. I am not deleting my answer because I feel that it is still relevant in the context of the OP's question. | {
"domain": "physics.stackexchange",
"id": 648,
"tags": "superconductivity, boundary-conditions"
} |
Selection Sort Analysis | Question: I'm having difficulty understanding the big-O analysis of the selection sort algorithm. Here is my pseudocode (with line numbers):
procedure SELECTION (A(n), limit)
1. for j <- 0 to limit - 1 do
2. min_index <- j
3. for k <- j + 1 to limit do
4. if A(k) < A(min_index)
5. min_index <- k
6. end-if
7. end-for
8. temp <- A(min_index)
9. A(min_index) <- A(j)
10. A(j) <- temp
end-for
end-SELECTION
Our professor wants us to work from the inside out; in other words, analyze the statements the furthest away from the conceptual vertical line that denotes hierarchies. Therefore, I start at line 5, and work out from there. Here's what I've understood so far:
The time complexity of lines 4-6 is O(1) (constant).
Because lines 4-6 are "contained" in the for loop on line 3, you must use the rule of products to multiply line 3 and lines 4-6. In other words, if line 3 is a program fragment f(x) and lines 4-6 are a program fragment g(x), then the time complexity of lines 3 - 6 is f(x) * g(x).
This is where I get confused. Because var limit is referencing the size of the array, wouldn't the for loop on line 3 run limit-1 times, because k is equal to 1 and is running to limit? To put it another way, if k were equal to zero, wouldn't the loop run limit times? Every video and website I've looked at has not given me a clear representation of that expression, because they've analyzed it differently.
On top of this question, how do I determine which asymptotic notation to use to describe this problem? Is it Big-Oh, Big-Omega or Big-Theta?
Thank you.
Answer: The number of iterations of the loop 3–7 depends on the value of $j$: it is $\mathit{limit} - j$. Therefore the running time of lines 3–7 is $O(\mathit{limit} - j)$. Lines 8–10 run in $O(1)$ time, so the body of the outer loop (lines 2–10) runs in time $O(\mathit{limit}-j) + O(1) = O(\mathit{limit}-j)$ (using $j < \mathit{limit}$). Finally, the total running time is
$$
O\left(\sum_{j=0}^{\mathit{limit}-1} \mathit{limit}-j\right) =
O\left(\sum_{j=1}^{\mathit{limit}} j\right) =
O(\mathit{limit}^2),
$$
using the formula $\sum_{j=1}^n j = \frac{n(n+1)}{2}$. | {
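The closed form can also be checked empirically. Below is a small Python sketch of the pseudocode that counts executions of the line-4 comparison; for any input of length n it performs exactly n(n-1)/2 comparisons. Since the count does not depend on the input order, the same bound holds for Big-Oh, Big-Omega, and hence Big-Theta:

```python
def selection_sort_comparisons(values):
    """Selection sort following the pseudocode; returns (sorted copy, #comparisons)."""
    a = list(values)
    comparisons = 0
    for j in range(len(a) - 1):              # line 1
        min_index = j                        # line 2
        for k in range(j + 1, len(a)):       # line 3
            comparisons += 1                 # counts executions of line 4
            if a[k] < a[min_index]:
                min_index = k                # line 5
        a[j], a[min_index] = a[min_index], a[j]  # lines 8-10
    return a, comparisons

for n in (5, 10, 50):
    _, count = selection_sort_comparisons(list(range(n, 0, -1)))
    print(n, count, n * (n - 1) // 2)  # the two counts always agree
```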
"domain": "cs.stackexchange",
"id": 12205,
"tags": "algorithms, algorithm-analysis, time-complexity, runtime-analysis, sorting"
} |
How To Think About Sharp Shadows In Terms Of Diffraction | Question: I tried to implement diffraction as explained here. I used the formula on page 4, right before the simplification to the Rayleigh-Sommerfeld diffraction formula,
$$u(x_0)=\frac{1}{4\pi}\int_A u(x)\frac{\partial g_{x_0}}{\partial\nu}\Bigg|_x dS(x)$$
with
$$\frac{\partial g_{x_0}}{\partial\nu}\Bigg|_x=-2\left(ik-\frac{1}{\|x-x_0\|}\right)G(x-x_0)\cos(\theta(x, x_0))$$
because they say
Since we are interested in the regime where the observation point $x_0$ is far from the aperture
and I wanted to play around with various distances, so I think the formula should be just fine?
The only thing which I don't understand, from reading through the lectures, it seems like we are just modelling monochromatic electromagnetic waves, so we should also get all kinds of real world situations where diffraction is not visible.
When I set up a monochromatic point source behind some large aperture, I get a sharp shadow from the aperture on the screen. If I move the point source around (along the $XY$-plane), the shadow would also move around on the $XY$-plane. But I'm neither able to get this behaviour out of my simulation, nor do I see how this would work, as I compute a large number of spherical wavefronts travelling in the z-direction from each point on the aperture, bundling their energy along their travelling direction, which is the $\cos\theta(x, x_0)$-term.
So here is how I picture the current process:
And here is the other real life situation which should also have an explanation in terms of electromagnetic waves:
Is the correct way of thinking about this that a sharp shadow only results from spherical wavefronts cancelling each other precisely behind the egde of the shadow (and my simulation not providing enough resolution for this phenomenon)?
Answer: As usual in wave optics and quantum mechanics, the observations known from ray optics and classical mechanics, i.e. sharp shadows, are the result of high wavenumbers combined with macroscopic obstacles/apertures (i.e. large compared to wavelength).
If you increase wavenumber in your simulation by one or two orders of magnitude, you should be able to get a high frequency oscillation in the lit part of the screen and rapid decay of illuminance in the shaded part. | {
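This is easy to reproduce with even a crude numerical experiment. The sketch below is not the Rayleigh-Sommerfeld integral from the question, just a 1D Huygens sum over point sources filling a slit; all geometry and wavenumber values are arbitrary illustrative choices. As k grows, the fringe-averaged intensity deep in the geometric shadow drops relative to the lit region, i.e. the shadow gets sharper:

```python
import cmath
import math

def slit_intensity(k, x0, z=5.0, half_width=1.0, samples=4000):
    """|field|^2 at screen point (x0, z) from point sources filling a 1D slit [-w, w]."""
    dx = 2 * half_width / samples
    field = 0j
    for i in range(samples):
        x = -half_width + (i + 0.5) * dx
        r = math.hypot(x - x0, z)
        field += cmath.exp(1j * k * r) / r * dx  # superposed spherical wavelets
    return abs(field) ** 2

def shadow_to_lit_ratio(k):
    """Shadow intensity (averaged over a few points to wash out fringes) over lit intensity."""
    lit = slit_intensity(k, x0=0.0)              # behind the slit centre
    shadow_points = (3.5, 3.75, 4.0, 4.25, 4.5)  # deep in the geometric shadow
    shadow = sum(slit_intensity(k, x) for x in shadow_points) / len(shadow_points)
    return shadow / lit

for k in (5.0, 50.0):
    print(f"k = {k}: shadow/lit ratio = {shadow_to_lit_ratio(k):.2e}")
```

The residual shadow field comes from the slit edges and falls off with k, which is the quantitative version of "high wavenumbers give ray-optics shadows".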
"domain": "physics.stackexchange",
"id": 89657,
"tags": "simulations, diffraction, huygens-principle"
} |
Why do protein solutions have to be alkalised in biuret test? | Question: I’ve read that CuSO4 solution reacts with peptide bonds that connect amino acids to create a violet colour, but the instructions always tell me to add NaOH solution to the protein solution before I add CuSO4. How is alkalising the protein solution before adding CuSO4 solution an aid to this process?
Answer: This is probably to prevent precipitation of copper hydroxide (see Itzhaki & Gill, 1964 - they suggest adding dilute copper sulfate slowly to the NaOH solution to avoid this). If you have the protein already alkalized and ready to react you'll get the color reaction before precipitate forms.
Commercial premixed solutions, like @canadianer mentioned in a comment, have tartrate present to prevent this (Geiger & Bessman, 1972 mention this).
It doesn't seem like it's strictly necessary to add NaOH first, but that's probably the most reliable way to do the assay without using an additional stabilizing agent.
Itzhaki, R. F., & Gill, D. M. (1964). A micro-biuret method for estimating proteins. Analytical biochemistry, 9(4), 401-410.
Geiger, P. J., & Bessman, S. P. (1972). Protein determination by Lowry's method in the presence of sulfhydryl reagents. Analytical biochemistry, 49(2), 467-473. | {
"domain": "biology.stackexchange",
"id": 9757,
"tags": "proteins, lab-techniques, food-chemistry"
} |
Count the frequency of n-grams in a random Wikipedia corpus | Question: This code counts the frequency of n-grams in a random Wikipedia corpus. As of now, it downloads everything, then performs all the counting. In your opinion, is there a way to perform downloading and counting simultaneously while keeping the code reasonably simple, to improve the performance?
import wikipedia as wk
import time
from itertools import combinations
import string
def get_random_wikipedia_corpus(articles_number, verbose=False):
    contents = []
    i = 0
    while i < articles_number:
        try:
            page = wk.random()
            content = wk.page(page).content
            contents.append(content)
            i += 1
        except (wk.DisambiguationError, wk.PageError):
            pass
        time.sleep(1)  # Avoid DDOSing Wikipedia :)
        if verbose:
            print(f"Iteration {i}, adding {page.title}")
    return ''.join(contents)

def all_ngrams(n):
    return (''.join(t) for t in combinations(string.ascii_lowercase, n))

def count_all(text, fragments, forbidden_counts=[]):
    return [(t, c/len(text)) for t in fragments if (c := text.count(t)) not in forbidden_counts]

if __name__ == "__main__":
    s = time.time()
    CORPUS_ARTICLES = 10
    corpus = get_random_wikipedia_corpus(CORPUS_ARTICLES).lower()
    corpus = ''.join(char for char in corpus if char in string.ascii_lowercase)
    print(time.time() - s, len(corpus))
    onegrams = all_ngrams(1)
    bigrams = all_ngrams(2)
    trigrams = all_ngrams(3)
    quadrigrams = all_ngrams(4)
    onegrams_counts = count_all(corpus, onegrams, (0, 1))
    bigrams_counts = count_all(corpus, bigrams, (0, 1))
    trigrams_counts = count_all(corpus, trigrams, (0, 1))
    quadrigrams_counts = count_all(corpus, quadrigrams, (0, 1))
    onegrams_counts.sort(key=lambda pair: pair[1], reverse=True)
    bigrams_counts.sort(key=lambda pair: pair[1], reverse=True)
    trigrams_counts.sort(key=lambda pair: pair[1], reverse=True)
    quadrigrams_counts.sort(key=lambda pair: pair[1], reverse=True)
    print(onegrams_counts)
    print(bigrams_counts)
    print(trigrams_counts)
    print(quadrigrams_counts)
Answer: I propose an optimization based on the multiprocessing module,
to parallelize the requests. Unfortunately, if you want to get the results
as soon as possible, you will have to "DDOS" Wikipedia at some point,
and the sleep needs to be removed.
Some notes and suggestions
In the following piece of code, the page variable might not be defined in the print,
the pass statement should probably be replaced by a continue:
try:
    page = wk.random()
    ...
except (wk.DisambiguationError, wk.PageError):
    pass # <- continue?
time.sleep(1) # Avoid DDOSing Wikipedia :)
if verbose:
    print(f"Iteration {i}, adding {page.title}")
(It seems that page.title is a method and not a property or an attribute in
the version of the wikipedia package I use (1.4.0), so it needs to be called
in order to display it correctly)
The default value of the forbidden_counts argument of the count_all function
should be an empty tuple instead of an empty list, as it is immutable, and it
would be more consistent with the parameters you send when using this function.
You could add a main function
To handle the different sizes of "n-grams" and to avoid code duplication, wouldn't a for loop be more appropriate?
Optimization part
In this proposed solution, a new function has been extracted from the
get_random_wikipedia_corpus function. This new function requests exactly one
valid article. The get_random_wikipedia_corpus only calls it multiple times
by using a multiprocessing.Pool object. Here is the solution with the suggestions:
import multiprocessing
import string
import time
from itertools import combinations, repeat

import wikipedia as wk

CORPUS_ARTICLES = 10

def get_random_wikipedia_article(iteration: int, verbose: bool = False) -> str:
    while True:
        page = wk.random()
        try:
            content = wk.page(page).content
        except (wk.DisambiguationError, wk.PageError):
            continue
        if verbose:
            print(f"Request {iteration}, adding '{page.title()}'")
        return content

def get_random_wikipedia_corpus(articles_number, verbose=False):
    pool = multiprocessing.Pool()
    return ''.join(pool.starmap(get_random_wikipedia_article, zip(range(articles_number), repeat(verbose))))

def all_ngrams(n):
    return (''.join(t) for t in combinations(string.ascii_lowercase, n))

def count_all(text, fragments, forbidden_counts=()):
    return [(t, c / len(text)) for t in fragments if (c := text.count(t)) not in forbidden_counts]

def main():
    s = time.time()
    corpus = get_random_wikipedia_corpus(CORPUS_ARTICLES, verbose=True).lower()
    corpus = ''.join(char for char in corpus if char in string.ascii_lowercase)
    print(time.time() - s, len(corpus))
    for letter_count in (1, 2, 3, 4):
        ngrams = all_ngrams(letter_count)
        ngrams_counts = count_all(corpus, ngrams, (0, 1))
        ngrams_counts.sort(key=lambda pair: pair[1], reverse=True)
        print(ngrams_counts)

if __name__ == "__main__":
    main()
Displayed metrics before optimization: 31.928 seconds for 30287 characters
Displayed metrics after optimization: 2.154 seconds for 11948 characters
It would be more meaningful to compare those durations for the same requested pages. | {
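One correctness point that neither the question nor the answer raises: `all_ngrams` builds its candidates with `itertools.combinations`, which only yields strictly increasing letter tuples. That means repeated-letter and "descending" n-grams are never counted: for n=2 only 325 of the 676 possible bigrams are generated, and even 'th' and 'he', the most common English bigrams, are missing. If every n-gram is intended, `itertools.product` is probably what was meant:

```python
import string
from itertools import combinations, product

def ngrams_combinations(n):
    """What the reviewed code generates: strictly increasing letter tuples only."""
    return [''.join(t) for t in combinations(string.ascii_lowercase, n)]

def ngrams_product(n):
    """Every possible n-gram over the alphabet: repeats and any order allowed."""
    return [''.join(t) for t in product(string.ascii_lowercase, repeat=n)]

print(len(ngrams_combinations(2)), len(ngrams_product(2)))  # 325 vs 676
print('th' in ngrams_combinations(2))  # False: 't' comes after 'h'
print('th' in ngrams_product(2))       # True
```

Swapping `combinations` for `product(..., repeat=n)` is a one-line change in `all_ngrams` and does not affect the rest of the pipeline.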
"domain": "codereview.stackexchange",
"id": 42792,
"tags": "python, performance, wikipedia"
} |
Sensors in Gazebo | Question:
Hi all!
I'd like to add an hokuyo laser to the pioneer that I'm simulating with gazebo. But I don't know how to do it.
I've seen that with Gazebo you can simulate an erratic robot with a laser on it.
So I added:
<include filename="$(find erratic_description)/urdf/erratic_hokuyo_laser.xacro" />
<!-- BASE LASER ATTACHMENT -->
<erratic_hokuyo_laser parent="base_link">
<origin xyz="0.2 0 0.001" rpy="0 0 0" />
</erratic_hokuyo_laser>
to the pioneer3dx.xacro.
But when I run the simulation I can't see the laser in the robot model and /scan in the topic list.
Any advice?
Originally posted by camilla on ROS Answers with karma: 255 on 2012-12-17
Post score: 2
Answer:
I solved the problem changing the lines added this way:
<include filename="$(find p2os_urdf_mod)/defs/erratic_hokuyo_laser.xacro"/>
<xacro:erratic_hokuyo_laser parent="base_link">
<origin xyz="0.01 0 0.23" rpy="0 0 0" />
</xacro:erratic_hokuyo_laser>
and in the "erratic_hokuyo_laser.xacro" I added the keyword xacro: before the macro tag:
<xacro:macro name="erratic_hokuyo_laser" params="parent *origin">
...
</xacro:macro>
Now I can see the /scan topic and map its values with rviz.
Originally posted by camilla with karma: 255 on 2012-12-20
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 12133,
"tags": "gazebo, erratic, gazebo-simulator, pioneer-3dx, erratic-gazebo"
} |
Intuition of LDA | Question: Can anyone explain how the LDA-topic model assigns words to topics?
I understand the generative property of the LDA model, but how does the model recognize that "Labrador" and "dog" are similar words / in the same cluster/topic? Is there a kind of similarity measure? The learning parameters of LDA are the assignment of words to topics, the topic-words probabilities vector and the document-topic probabilities vector. But HOW is it learned?
Answer: You are right, LDA is not very intuitive. It involves a lot of mathematics and concepts. However, this video should help you:
https://youtu.be/3mHy4OSyRf0
Also this article
“Intuitive Guide to Latent Dirichlet Allocation” by Thushan Ganegedara https://link.medium.com/texozcnAc6 | {
"domain": "datascience.stackexchange",
"id": 7420,
"tags": "machine-learning, nlp, unsupervised-learning, topic-model, lda"
} |
Angular variables like $\omega$, $\alpha$, etc are properties of the moving particle or the observer observing them? | Question: Angular displacement ($\theta$) of a moving particle is the angle through which its position vector rotates, with respect to a reference point.
Angular velocity ($\omega$) of the particle is the rate at which its position vector rotates. Its physical meaning is "how quickly an observer (i.e., the reference point) observing the particle turns his head" while observing the motion of the particle.
Similarly, angular acceleration ($\alpha$) is the rate at which angular velocity changes. Physical meaning : The rate at which observer's head slows down or speeds up as it turns, to keep observing the moving particle.
My question is, are these angular variables properties of the particle? Or the observer's?
Answer:
...are these angular variables properties of the particle? Or the observer's?
Depends on the frame of reference of the observer. If the observer is observing the motion through a translatory frame of reference (no matter whether inertial or not), all the angular parameters of the body will be the same as observed in the ground frame.
However, if the observer is observing through a rotatory frame of reference, then the angular parameters will come out to be different from the corresponding ones when measured from the ground frame.
But what does a "translatory" or "rotatory" frame of reference mean?
A translatory frame of reference is the one which does not have any angular velocity when observed in the ground frame. Note that here we are talking about the angular velocity of a frame, precisely, we are talking about the angular velocity of the co-ordinate axes which constitute that frame. Examples of translatory frame of reference:
a frame moving with a constant velocity (and zero angular velocity)
the frame of a block (connected to a spring) undergoing simple harmonic motion
a frame moving in circles such that the direction of the co-ordinate axes is constant in time (imagine sitting on the London Eye)
A rotatory reference frame is one which has a nonzero angular velocity with respect to the ground frame. Again, we are actually seeing the angular velocity of the co-ordinate axes which make up that frame, and not of the frame itself. Examples:
the frame of reference attached to a person sitting on a fan
the frame of reference attached to a spinning ballerina
the frame of reference attached to a spinning top
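The distinction can be stated compactly using the standard composition of angular velocities (a sketch; all quantities measured about a common origin): if the observer's coordinate axes rotate with angular velocity $\vec{\Omega}$ relative to the ground, then for any particle
$$\vec{\omega}_{\text{observed}} = \vec{\omega}_{\text{ground}} - \vec{\Omega}$$
so a translatory frame ($\vec{\Omega} = 0$) measures the same angular parameters as the ground frame, while a rotatory frame ($\vec{\Omega} \neq 0$) does not.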
Note: In all the above examples, there were actually two choices (about the type of frames) available to us. Either we could make our frame rotate with the ballerina (case 1) (in which case it would be a rotatory frame of reference), or we could just fix the directions of our axes and just fix the frame to the ballerina's position and not her rotation (case 2) (in which case it would be a translatory frame of reference). Since the above examples are of rotatory frame of reference, thus we are considering the case 1 of choosing the frame of reference. | {
"domain": "physics.stackexchange",
"id": 66987,
"tags": "newtonian-mechanics, kinematics, angular-velocity"
} |
Parallelization of number factors using OpenMP | Question: For a simple try at parallelization on my own outside of school, I've created a number factors calculator. I hope to eventually come up with something more creative.
Since I don't have access to parallel computers at this time, I'm using OpenMP provided by my compiler (gcc 4.8.1) and running it on my laptop (Intel Core i3-2330M). I'm using a maximum of four threads, which was determined from a call to omp_get_max_threads().
I've conducted four runs, each with four billion values and from one to four threads:
#include <cstdint>
#include <cstdlib>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <map>
#include <omp.h>
void displayCompTime(std::clock_t start, std::clock_t end, std::int64_t integer, int threads)
{
double elapsed = static_cast<double>(end - start) / CLOCKS_PER_SEC;
std::cout << integer << " values and " << threads << " thread(s): "
<< std::setprecision(4) << std::fixed << elapsed << "s\n";
}
void calcFactors(std::int64_t integer, int threads)
{
std::map<std::int64_t, std::int64_t> factors;
std::int64_t i;
#pragma omp parallel for num_threads(threads) default(none) \
shared(factors, integer), private(i)
for (i = 2; i <= integer; i++)
{
if (integer % i == 0)
{
factors[i] = integer / i;
}
}
}
int main()
{
const std::int64_t integer = 4000000000;
const int runs = 4;
for (int i = 0; i < runs; i++)
{
std::clock_t start = std::clock();
int threads = i + 1;
calcFactors(integer, threads);
std::clock_t end = std::clock();
displayCompTime(start, end, integer, threads);
}
}
Output:
4000000000 values and 1 thread(s): 67.7330s
4000000000 values and 2 thread(s): 40.7640s
4000000000 values and 3 thread(s): 32.5630s
4000000000 values and 4 thread(s): 29.7640s
Based on these results, this code doesn't appear to scale very well. I don't know if using a non-default static schedule would give faster times, and anything else would just incur additional overhead. Fortunately, I didn't need to include atomic or critical.
Would avoiding a lot of division help? I didn't try for anything else yet as this is only a start. I also wanted to see how well my laptop could handle parallelization.
Other than performance, I'm okay with any general OpenMP advice. I was sure to use some good practices for that, such as default(none) for explicitly listing the variables.
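One change that would dwarf any thread-count gains, sketched here in Python for brevity (this is not from the post): trial division only has to run up to sqrt(n), because every divisor i <= sqrt(n) pairs with the cofactor n / i. That shrinks the loop from ~n iterations to ~sqrt(n).

```python
import math

def factor_pairs(n):
    """Return {divisor: cofactor} for every divisor of n greater than 1."""
    factors = {}
    for i in range(2, math.isqrt(n) + 1):   # only ~sqrt(n) iterations
        if n % i == 0:
            factors[i] = n // i             # the small divisor...
            factors[n // i] = i             # ...and its large cofactor
    return factors

# factor_pairs(4000000000) needs ~63,000 trial divisions instead of 4 billion
```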
Answer: I have now run this on a different machine which supports a lot more threads. Apart from some minor changes, I have utilized std::lldiv to do the division and modulo operations in one step instead of (possibly) two. To easily run it with more threads, especially with a divisible number of them, I have added command-line options to take this value and perform only one run.
for (i = 2; i <= integer; i++)
{
std::lldiv_t result = std::lldiv(integer, i);
if (result.rem == 0)
{
factors[i] = result.quot;
}
}
These are my new runtimes:
4000000000 values and 1 thread(s): 63.1260s
4000000000 values and 2 thread(s): 32.0790s
4000000000 values and 4 thread(s): 15.9060s
4000000000 values and 8 thread(s): 8.3521s
4000000000 values and 16 thread(s): 6.0289s
4000000000 values and 32 thread(s): 6.1347s
The serial runtime still appears to be slow because of the division, but it's at least slightly better than the original. However, the next three runtimes appear to be quite efficient with respect to the thread counts. After that, runtimes only improve a little, but are no longer efficient. In addition, branch prediction may also be increasing runtimes, but I don't know how to get around that. | {
"domain": "codereview.stackexchange",
"id": 12693,
"tags": "c++, performance, openmp, factors"
} |
Nielsen and Chuang 2.2.3: state of a qubit after measurement | Question: (Apologies for the poor typesetting of kets. I haven't been able to figure out how to do that on this site).
On page 85 of Nielsen and Chuang's textbook, they write that the probability of obtaining the result $m$ after experiment $M_m$ conducted on qubit $\psi$ is given by
\begin{equation}
p(m) = \langle \psi \mid M_m^\dagger M_m\mid \psi\rangle
\end{equation}
This is the first they said about measurements, so I'll take their word for this; no problem. My confusion comes later down the page, when they write that the state after the measurements $M_0$ and $M_1$, respectively, are
\begin{align}
\frac{M_0\mid\psi\rangle}{\mid a\mid} &= \frac{a}{\mid a\mid}\mid0\rangle \\
\frac{M_1\mid\psi\rangle}{\mid b\mid} &= \frac{b}{\mid b\mid}\mid1\rangle
\end{align}
In the first case, with $M_0$: as long as $\mid a \mid \neq 0$, then the qubit will be guaranteed to be in state $\mid 0 \rangle$. Similarly, if $\mid b \mid \neq 0$ then the qubit is guaranteed to be in state $\mid 1\rangle$.
This doesn't seem like a measurement to me. All that's being measured is whether or not the amplitude for one of the basis vectors is nonzero.
For example, if you have the state $\psi \equiv \left(\mid0\rangle + \mid1\rangle\right)/\sqrt{2}$ , then using the measurement $M_0$ will always result in the qubit being in state $\mid 0\rangle$, and measurement $M_1$ always leads to $\mid 1 \rangle$. This doesn't seem very useful.
What am I missing here?
Editing to add: Moreover, these measurement operators aren't even applicable in the special case where the qubit is just a classical bit (one of the amplitudes equal to one). For instance, how can you even apply the operator $M_0 = \mid 0 \rangle \langle 0 \mid$ to the state $\mid 1 \rangle$? This would just give the state $0\mid 0 \rangle + 0\mid 1 \rangle$, which makes no sense.
Answer: $M_0$ and $M_1$ correspond to two outcomes of the same measurement, not two different measurements.
If the operator $M_0$ is applied to the state |1⟩, you do indeed get 0|0⟩+0|1⟩. The way to interpret this is that the probability you get the outcome $M_0$ if you have the state |1⟩ is 0, that is, you never get the outcome $M_0$ on state |1⟩.
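Both facts are easy to check numerically; here is a short NumPy sketch using the computational-basis measurement operators under discussion (the state vectors are the examples from the question):

```python
import numpy as np

# Measurement operators M_0 = |0><0| and M_1 = |1><1| as 2x2 matrices
M0 = np.array([[1, 0], [0, 0]], dtype=complex)
M1 = np.array([[0, 0], [0, 1]], dtype=complex)

def outcome_probs(psi):
    """p(m) = <psi| M_m^dagger M_m |psi> for m = 0, 1."""
    return tuple(np.vdot(psi, M.conj().T @ M @ psi).real for M in (M0, M1))

# The state |1>: outcome 0 never occurs
p0, p1 = outcome_probs(np.array([0, 1], dtype=complex))
# The equal superposition (|0> + |1>)/sqrt(2): each outcome has probability 1/2
q0, q1 = outcome_probs(np.array([1, 1], dtype=complex) / np.sqrt(2))
```

Note that np.vdot conjugates its first argument, so it implements the bra ⟨ψ| directly.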
If you have the state |ψ⟩≡(|0⟩+|1⟩)/$\sqrt{2}$, you observe $M_0$ and $M_1$ with probability $\frac{1}{2}$ each. | {
"domain": "physics.stackexchange",
"id": 53415,
"tags": "quantum-mechanics, quantum-information, measurements, information"
} |
Charge density of line of length $L$ expressed with Dirac delta function in cylindrical coordinates | Question: I want to find the charge density of a line charge of length $L$ in cylindrical coordinates.
I suppose charge density is independent of $\phi$. The line charge is only defined for coordinates of $z$ between $L/2$ and $-L/2$. So I suppose we can express the charge density as:
$$\rho(r,\phi,z)=f(r,z)\delta(r)\theta(L/2-|z|)$$
I want to find $f(r,z)$, so I can try to use the volume integral to find it. However, something must be wrong in my definition, as this integral integrates to zero:
$$Q=\int^{\infty}_{-\infty} \int^{2\pi}_{0} \int^{\infty}_{-\infty}f(r,z)\delta(r)\theta(L/2-|z|)~r~dz~d\phi~dr\\=\int^{\infty}_{-\infty} \int^{2\pi}_{0} f(r,z)\theta(L/2-|z|)~0~dz~d\phi=0$$
I can't find my mistake, and I would really appreciate your help.
By the way, I already know that $\rho=\frac{Q}{L}$, but I want to prove it with the cylindrical coordinates and delta function that I mentioned.
$\theta$ is the step function.
Answer: In order to compensate for the $r$ in the volume element
$r\ dr\ d\phi\ dz$ you need a factor $\frac{1}{r}$ in
your charge density $\rho$.
And the over-all constant is adjusted so that the total
volume integral will be $Q$. You finally get
$$\rho(r,\phi,z)=\frac{Q}{2\pi rL}\delta(r)\theta(L/2-|z|)$$
The term $\frac{1}{2\pi r}\delta(r)$ might seem weird at first,
but it is actually equal to $\delta(x)\delta(y)$.
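As a quick check that this density reproduces the total charge (a sketch, using the radial-delta convention $\int_0^\infty \delta(r)\,dr = 1$):
$$\int_0^{\infty}\int_0^{2\pi}\int_{-\infty}^{\infty} \frac{Q}{2\pi r L}\,\delta(r)\,\theta(L/2-|z|)\,r\,dz\,d\phi\,dr = \frac{Q}{2\pi L}\cdot L\cdot 2\pi = Q$$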
See also the math question Dirac delta in polar coordinates
and its accepted answer. | {
"domain": "physics.stackexchange",
"id": 86846,
"tags": "electrostatics, charge, density, dirac-delta-distributions"
} |
What are the current best known upper and lower bounds on the (un)satisfiability threshold for random k-sat and/or 3-sat? | Question: I would like to know the current state of the phase transition for random k-SAT: given n variables and m clauses, what are the best known upper and lower bounds on c = m/n?
Answer: Dimitris Achlioptas covers this in a survey article from the first edition of the Handbook of Satisfiability (PDF of draft).
There is conjectured to be a single threshold $r_k$ for each $k \ge 3$, so that when $m/n > r_k$ then a random $k$-SAT formula with $m$ clauses and $n$ variables is unsatisfiable with high probability, and so that when $m/n < r_k$ then a random $m$-clause, $n$-variable $k$-SAT formula is satisfiable with high probability.
(More precisely, the conjecture is that in the limit as $n$ tends to infinity, the probability of satisfiability is 0 or 1 in these two regimes, respectively.)
Assuming that this Satisfiability Threshold Conjecture holds, the best known bounds for $r_k$ are
k                   3      4      5      7      10      20
Best upper bound    4.51   10.23  21.33  87.88  708.94  726,817
Best lower bound    3.52   7.91   18.79  84.82  704.94  726,809
(this table appears on the page indicated as 247 in the draft).
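As a numeric sanity check, the k = 20 row of the table tightly brackets the asymptotic expression $2^k \ln 2 - (1+\ln 2)/2$ (dropping the $o(1)$ term):

```python
import math

def r_k_asymptotic(k):
    """2^k ln 2 - (1 + ln 2)/2, ignoring the o(1) correction."""
    return 2**k * math.log(2) - (1 + math.log(2)) / 2

# r_k_asymptotic(20) is about 726816.6, inside the bracket [726809, 726817]
```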
In a more recent manuscript (arXiv:1411.0650), Jian Ding, Allan Sly and Nike Sun showed that for all sufficiently large $k$, there is in fact a single threshold $r_k = 2^k\ln 2 - (1+\ln 2)/2 + o(1)$, and the error term $o(1)$ in this expression goes to zero as $k$ tends to infinity. | {
"domain": "cstheory.stackexchange",
"id": 297,
"tags": "cc.complexity-theory, sat, lower-bounds, upper-bounds, phase-transition"
} |
Does a Haskell program count as an inductive proof? | Question: Is the following statement from [1] true?
"Since recursion is the main computational technique, a terminating pure Haskell program counts as an inductive proof of a theorem."
My intuition is that inductive proofs require a base case, assume the hypothesis for k, and prove the induction step for k+1.
I am not clear on how these steps occur in the execution of a program (function?). Also, what logic is employed in such a proof?
Regards,
Patrick Browne
[1] http://ebooks.iospress.com/volumearticle/44257
Thanks for your very helpful answers.
Would it be fair to say that the answer may be morally yes [1], but technically perhaps no [2].
Below are two Haskell programs together with what I consider to be equational proofs (not inductive) that they evaluate to a desired ground term.
The first program is not recursive. So does the quote in my original posting include non-recursive programs?
-- Prog 1 non-recursive
x, y, z :: Int
x = 1
y = x + 2
z = x + y
proveZ = z == 4
-- Equational Proof
[1]: (z)
---> (x + y)
[2]: (x + y)
---> (1 + y)
[3]: (1 + y)
---> (1 + (2 + x))
[4]: (1 + (2 + x))
---> (1 + (2 + 1))
[5]: (1 + (2 + 1))
---> (1 + 3)
[6]: (1 + 3)
---> (4)
-- Prog 2 recursive
data Vector = Empty | Add Vector Int
size Empty = 0
size (Add v d) = 1 + (size v)
proveSize = size (Add (Add Empty 1) 2) == 2
-- Equational Proof
[1]: (size (Add (Add Empty 1) 2))
---> (1 + (size (Add Empty 1)))
[2]: (1 + (size (Add Empty 1)))
---> (1 + (1 + (size Empty)))
[3]: (1 + (1 + (size Empty)))
---> (1 + (1 + 0))
[4]: (1 + (1 + 0))
---> (1 + 1)
[5]: (1 + 1)
---> (2)
The motivation for my original question is that I wish to understand the relationship between the evaluation of a Haskell program and the application of equational logic to the same Haskell program. While Haskell produces the correct answer, as would a non-functional language, is the computation a proof in equational logic? I imagine that Haskell cannot do any form of symbolic proof (e.g. id a == a).
In summary, is my yes/no opinion reasonable?
[1] Fast and Loose Reasoning is Morally Correct:
http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loose.pdf
[2] Formulating Haskell: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.1399
Answer: The statement is not true as stated. Even if we imagine a Haskell-like language where all functions terminate and values are non-bottom, only some programs would correspond to inductive proofs (as opposed to proofs not using induction), but it would be the ones that (directly or indirectly) use recursion. That said, what they are doing induction over may not at all be obvious. (For the OP: every inductive type gives rise to a notion of structural induction with "mathematical induction" just being structural induction over the naturals.) Many patterns of recursion in practical programs, e.g. quicksort, do not correspond to a direct structural induction over their inputs.
For real Haskell, for just about any program proving that it terminates for all inputs requires very strong restrictions on those inputs: all values must be fully defined and all passed in functions must terminate on at least the inputs exercised by the current program. The latter isn't checkable in general, but what we can do instead is reduce a value that stands for a putative proof to normal form, and if we succeed then we have a real proof. Unfortunately, Haskell doesn't evaluate things to normal form but only to weak head normal form. For some types we can force the weak head normal form of one expression to correspond to normal forms via rnf, but we can't do this for functions in general.
Even without restrictions, programs in Haskell correspond to proofs in some logic, albeit an inconsistent one. This leads to another aspect that they conveniently didn't elaborate on. They say that certain Haskell programs correspond to proofs of theorems, but they don't say what theorems you can state! Haskell (particularly with higher-rank polymorphism) is closely related to the polymorphic lambda calculus and that corresponds to intuitionistic, second-order propositional logic1 (SOPL) (not to be confused with second-order predicate logic which is usually what "second-order logic" refers to). While SOPL is uninteresting in the classical case, it turns out to be pretty powerful in the intuitionistic case, capable of embedding intuitionistic first-order predicate logic. The embedding is non-obvious and even with it you would not be able to prove something like the Fundamental Theorem of Arithmetic. In practice, most programs correspond to completely trivial theorems or (intutionistic) propositional tautologies.
From my (admittedly very brief) skimming of the paper, it seems that they are more interested in the Prolog-like reasoning the type class system enables than leveraging the propositions-as-types correspondence. So their comments seem to be a bit of a non sequitur anyway.
1 See Chapter 12. | {
"domain": "cs.stackexchange",
"id": 9994,
"tags": "proof-techniques, functional-programming"
} |
Does a charged particle travelling with uniform velocity induce a magnetic field? | Question: Does a charged particle, an electron say, travelling with uniform velocity induce a magnetic field? I believe it doesn't. In primary school, we all learned how to induce a magnetic field into an iron nail by wrapping coils of wire around the nail and then hooking it up to a DC battery, but if you do not coil the wire, the nail doesn't become magnetized. What's happening here? My only guess is that the electrons are accelerating; the magnitudes of their speeds aren't changing, but rather their directions. In the coil, a force must be applying itself to the electrons in order for them to make their spiralling paths; thus, they are said to be accelerating, and that is what causes the magnetic field to develop.
Answer: A straight wire does have a magnetic field. It circles around the wire instead of going in a straight line like in a coil.
Picture source: http://coe.kean.edu/~afonarev/physics/magnetism/magnetism-el.htm
On the left is a straight wire with the magnetic field curling around it. The middle shows a single loop of wire. Notice that the magnetic field still curls around the wire, but the fields from opposite ends of the loop add together to make a strong field. The right picture shows a multi-loop wire (a solenoid), which enhances the field compared to the single loop. The right picture is the kind of field you created with the wire and nail. For the same current, the solenoid creates a much stronger field, which is why it is used to magnetize the nail.
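For a single slowly moving charge the geometry is the same; in the non-relativistic limit its field is given by the standard point-charge (Biot-Savart-like) expression, stated here for completeness since the original answer only describes it in words:
$$\vec{B} = \frac{\mu_0}{4\pi}\,\frac{q\,\vec{v}\times\hat{r}}{r^2}$$
It vanishes for $\vec{v} = 0$ and curls around the line of motion, exactly like the field of the straight wire.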
To answer your original question, a single electron in motion does have a magnetic field that's similar to the straight wire (the field curls around the electron's path of motion) except that it gets weaker as you move farther away along the electron's path. | {
"domain": "physics.stackexchange",
"id": 48584,
"tags": "electromagnetism"
} |
Build Architecture Question | Question:
So I have been working on using IRI's segway library to implement a segway driver for ROS, and I have run into a situation where I don't know how to proceed.
IRI is setting up a port of their stuff for ROS, so at this point it is more educational for me.
So, the IRI segway library relies on some of their other libraries.
Here is the dependency Structure:
My ROS Node depends on iri_segway_rmp200.
iri_segway_rmp200 depends on iricomms and iriutils.
iricomms depends on iriutils.
All of the IRI stuff is in SVN and is LGPL licensed, my stuff can be considered MIT. They are setup to build dynamically linked libraries (.so on linux and .dylib on osx).
The way I see it there are a few ways to approach this setup in ROS: (Maybe there is a 4th, correct setup)
Put a script in rosdep.yaml to (svn co; make; sudo make install) each of the libraries from IRI.
This is good because the libraries are installed and no further configuration is required, but requires root access
Also, they install Find.cmake files in the cmake shared Modules folder, which is nice
Create a ROS pkg for each library and use rosmake to build them, using the svn_checkout.mk make script and patches to build and install them to the pkg dir
This is nice because the libraries are contained in the ros pkg
But, you need to export cpp flags to include and link directories
How do you handle runtime dynamic linking?
Download/Distribute and build the libraries in your ROS pkg and statically link your node to the libraries
Doesn't allow multiple nodes to link to a single source for the libraries (build the libs for each node that uses them)
So here are my questions:
What is the preferred method of build architecture?
How do you handle dynamic linking at runtime? (the .so file is in package A, and how does a binary in package B find it at runtime?)
How can you handle installing and referencing Find.cmake files that libraries have? (use manifest.xml's <depend .../> instead, is my guess)
What if any licensing concerns are there? (patches and static linking, etc)
Sorry if some of these questions are silly, I am still learning these more complicated build systems (rosmake/cmake).
Thanks in advance!
Originally posted by William on ROS Answers with karma: 17335 on 2011-02-14
Post score: 3
Answer:
Currently your option #2 is basically the recommended technique. The manifest provides a way to export the compile and linking flags.
Ken recently put together a Tutorial about Wrapping External Libraries It's a new tutorial and any feedback on clarity etc is always welcomed.
In our review of the build system targeted for E Turtle we are shooting for a solution more like option #1 but that's in progress still.
To answer your specific questions:
Option 2
You can use rpath to set the linking search directories. (See the Tutorial and examples linked from within)
We don't have a good solution right now. There are several ways this could be done using exports but we haven't used Find modules yet.
Linking against LGPL code requires that changes to the LGPL'd code itself be released, but not the works which link against the libraries. For example, glibc is under the LGPL license.
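For option 2, the compile/link export in the wrapping package's manifest.xml looks roughly like this (rosbuild-era syntax; the library name and flags are illustrative, not taken from the post):

```xml
<export>
  <!-- Flags handed to packages that depend on this one; the rpath entry
       bakes the runtime library search path into dependent binaries -->
  <cpp cflags="-I${prefix}/include"
       lflags="-L${prefix}/lib -Wl,-rpath,${prefix}/lib -liri_segway_rmp200"/>
</export>
```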
Originally posted by tfoote with karma: 58457 on 2011-02-14
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 4723,
"tags": "rosmake, cmake, rosbuild"
} |
determine background noise level via cross correlation | Question: The setup:
Sound is being played by the speaker on the iphone. At the same moment i record sound via the microphone. Obviously the speaker sound will be recorded in addition to all the external noise.
My goal:
To determine the level of all external sound which is recorded in addition to the sound from the speakers.
(no implementation yet)
So far my approach is to cross correlate the input stream with the output stream to get the delay between both of them and an integer representing the level of similarity at this point, which will be used to estimate external noise. (if the int drops below a given value the noise is too loud)
Does anyone have experience with this kind of thing? Will this approach be accurate enough to tell whether the external noise reaches a certain level of disturbance?
Or will I have to do further analysis in the frequency domain, via FFT for example?
I'm completely new to this kind of problem, so be kind if I'm totally wrong ;)
Answer: If you want to get an estimate of the noise, do the cross-correlation, and then multiply your transmitted sound by your cross-correlation peak and then subtract it from your received sounds. Make sure, of course, that you use the cross-correlation to line up the transmitted sound with when it was received.
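A minimal NumPy sketch of that procedure (all signals are synthetic, and it assumes the delayed copy of the played signal lies entirely inside the recording):

```python
import numpy as np

def estimate_noise(recorded, played):
    """Align `played` inside `recorded` via the cross-correlation peak,
    scale it by a least-squares gain, subtract, and return the residual
    (the external-noise estimate)."""
    corr = np.correlate(recorded, played, mode="full")
    lag = int(corr.argmax()) - (len(played) - 1)   # delay of played in recorded
    aligned = np.zeros_like(recorded, dtype=float)
    aligned[lag:lag + len(played)] = played        # assumes 0 <= lag and it fits
    gain = np.dot(recorded, aligned) / np.dot(aligned, aligned)
    return recorded - gain * aligned

# Check: a chirp played back at 0.8x level with a 30-sample delay
n = np.arange(100)
played = np.sin(0.002 * n * n)
recorded = np.zeros(200)
recorded[30:130] += 0.8 * played
residual = estimate_noise(recorded, played)        # ~zero: no external noise
```

With a real microphone the residual won't be exactly zero even in silence, since room reverb and speaker distortion also fail to match the clean output, so you would still threshold its level or compare it per frequency band.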
What remains after you subtract is the noise with some amount of (hopefully small) error. | {
"domain": "dsp.stackexchange",
"id": 512,
"tags": "audio, cross-correlation"
} |
Why does greater orbital overlap mean a stronger bond? | Question: According to valence bond theory, orbital overlap produces a bond. However, I don’t understand why having greater orbital overlap renders a bond stronger. It’s intuitive, I suppose, but I haven’t been able to find an actual explanation as to why that is. Are there quantum mechanical effects behind this?
Answer: An intuitive explanation is given in the comment above. Better orbital overlap generally indicates more constructive, favorable interaction between the two atoms.
Let's consider a homonuclear diatomic bond (e.g. $\ce{H2}$). The qualitative understanding is similar for two different atoms, but the math from quantum mechanics is a bit easier with a homonuclear bond.
So there's one bonding orbital ($\sigma$) and one anti bonding orbital ($\sigma^*$).
The energy of the orbitals will be:
$e_1 = \frac{H_{11} + H_{12}}{1 + S_{12}}$
$e_2 = \frac{H_{11} - H_{12}}{1 - S_{12}}$
You can put in some numbers and see what happens (e.g., as the overlap $S_{12}$ increases from 0 to 1, the bonding interaction becomes more and more energetically favorable).
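"Putting in some numbers" is easy to do in a few lines. One caveat: for the trend to appear, $H_{12}$ must grow with the overlap, so the sketch below uses the Wolfsberg-Helmholz approximation $H_{12} \approx K\,S_{12}\,H_{11}$ with $K \approx 1.75$ (this scaling assumption is mine, not stated above):

```python
H11 = -13.6   # eV; e.g. a hydrogen 1s orbital energy

def orbital_energies(S12, K=1.75):
    """Bonding/antibonding energies e1, e2 with H12 tied to the overlap."""
    H12 = K * S12 * H11
    e_bonding = (H11 + H12) / (1 + S12)
    e_antibonding = (H11 - H12) / (1 - S12)
    return e_bonding, e_antibonding

# S12 = 0.1 gives bonding ~ -14.5 eV; S12 = 0.5 gives bonding ~ -17.0 eV:
# more overlap, more stabilization relative to H11 = -13.6 eV
```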
N.B. There's a slight asymmetry - the anti bonding orbital is higher in energy than the bonding orbital is lower in energy. | {
"domain": "chemistry.stackexchange",
"id": 6081,
"tags": "bond, quantum-chemistry, atoms, orbitals, valence-bond-theory"
} |
How does gravitation propagate along curved spacetime? | Question: In this wikipedia article it is described how a beam of light, with its locally constant speed, can travel "faster than light". That is to say it travels a distance, which, from a special relativistic point of view, is surprisingly big.
I wonder if a gravitational wave on such a curved spacetime (of which the wave is actually part) behaves equally.
Does a gravitational wave also ride on expanding spacetime, just as light does? Do the nonlinearities of gravitation-gravitation interaction influence the propagation of a wave (like e.g. a plasma) such that light and gravity are effectively not equally fast?
If I want to send a fast signal in this expanding universe scenario, in what fashion do I decide to I send it?
Answer: The speed of light is only locally invariant. For example if you watch a beam of light falling onto a black hole event horizon you'll find it slows down, and if you waited an infinite time you'd see it stop at the event horizon. Whether the speed of light is really changing depends on your point of view. I'd say it's just your choice of co-ordinates that makes it look as if it's changing, but opinions will differ.
Anyhow, the "faster than $c$" motion referred to in your link is similar in that it's a matter of geometry and the co-ordinates you're using. Since it's just geometry, gravity waves will be affected in the same way as light, as indeed will everything that propagates in space, including objects with a non-zero rest mass.
"domain": "physics.stackexchange",
"id": 2825,
"tags": "general-relativity, waves, spacetime, faster-than-light, gravitational-waves"
} |
Change in kinetic energy of a charge is zero | Question: If I move a positive test charge from infinity to a point in the electric field of a positive charge until it reaches a certain point, and my external work made the change in kinetic energy zero, does that mean the charge will stay stationary at that point?
Answer: It does not. If there is a nonzero net electric field at that point, then the charge will begin to accelerate. | {
"domain": "physics.stackexchange",
"id": 51824,
"tags": "electrostatics"
} |
How do you tell if a variable star is periodic or not by its light curve? | Question: I have light curves for a particular star. I'm able to construct a periodogram of the stars' light curve, a plot of the power of the light curve vs. the signal's period. For a nonperiodic but irregular signal, I'd still expect to see some peaks and valleys on the periodogram just because of random noise. How could I distinguish the non-periodic stars from periodic signals given their light curves and/or periodograms?
Answer: There are some nice and well-tested routines to create periodograms and test the periodicities for significance in the astropy.timeseries package, containing the LombScargle class.
This class includes a method for calculating the "false alarm probability", which I think is almost what you are looking for. This gives the probability that you would see a periodogram peak as high as observed if the data had no periodicity at all, at that particular frequency.
To generalise this to whether you would detect a peak as high as observed at any frequency requires some sort of Monte-Carlo or bootstrapping technique; the latter is also implemented in the LombScargle class and can be conceptually understood as replacing each observation point with a value chosen randomly from your list of observations whilst keeping the time the same; then the periodogram is recomputed. Do this multiple times and you build up the distribution of peak heights in the periodogram for data which are essentially free of any periodic signal.
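The shuffle-and-recompute idea can be sketched with plain NumPy (a simple classical periodogram, P(f) = |sum_j v_j exp(-2 pi i f t_j)|^2 / N, stands in for Lomb-Scargle here, and the light curve is synthetic):

```python
import numpy as np

# Synthetic irregularly sampled light curve: period-3 sinusoid plus noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 30, 150))
y = np.sin(2 * np.pi * t / 3.0) + 0.3 * rng.normal(size=t.size)
freqs = np.linspace(0.05, 2.0, 500)                     # trial frequencies
E = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])   # phase matrix

def peak_power(values):
    """Highest peak of a simple periodogram for irregularly sampled data."""
    v = values - values.mean()
    return (np.abs(E @ v) ** 2).max() / len(v)

observed = peak_power(y)
# Bootstrap null: shuffle the magnitudes, keep the time stamps, recompute
null = np.array([peak_power(rng.permutation(y)) for _ in range(200)])
fap = np.mean(null >= observed)   # empirical false-alarm probability
```

A small fap means a peak this high is very unlikely from aperiodic data; turning that into the probability that the star itself is periodic still requires simulating signals through the full detection scheme, which is the distinction made next.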
None of the above quite answers the question - what is the probability that the underlying source is periodic, given these data? Nor can it give you the probability that the source is not periodic. Both of these questions need to be answered by simulating data and folding it through your periodicity detection scheme.
These techniques and issues are discussed in detail by Vanderplas (2018) (particularly section 7), which is highly recommended and should be mandatory reading for anyone doing time-series analysis of light-curves. | {
"domain": "astronomy.stackexchange",
"id": 6301,
"tags": "data-analysis, light-curve, variable-star"
} |
How does mass of a sound barrier affect the amplitude of sound waves penetrating the room? | Question: I am trying to learn what role mass of sound insulation plays in reducing amplitude of sound waves entering a room from the outside.
In a thought experiment, a person has persuaded an elephant to lean against the party wall with a noisy neighbour. Is it fair to reason that any sound wave entering the room through the party wall will make an attempt to 'move molecules of the elephant' during sound transmission, thus reducing the amplitude of incoming sound?
I suspect that sound waves will attempt to travel down a route that is 'easiest to vibrate', avoiding the elephant.
Would the elephant play any role at all in reducing the amplitude of incoming sound waves?
EDIT: there are no air gaps in our imaginary room.
Answer: I don't have the maths to argue this [I'm a sound engineer not a physicist], so let me try to cover it in broad strokes.
Sound isn't all one frequency. To humans it's frequencies from approximately 20Hz to 20kHz [less as you get older]. Higher frequencies tend not to bother humans as we just can't hear them at all. Lower frequencies turn from 'sound' into perceptible 'vibration' - we can feel it even if we can't actually hear it.
Let's consider this like playing back music on a good hifi.
At the top we have cymbals & hi-hats; the bright, fizzy noises.
Below that, the majority of the sounds; guitars, strings, brass, vocals etc.
Then right at the bottom, bass, bass drum [& other lower sounding drums].
The next thing to consider is that higher frequencies take far less energy to generate, but are considerably easier to block. At frequencies from maybe 4kHz & up, you can block them with your fingers in your ears or earplugs, or a wall, or a bit of foam or rockwool.
If you're ever in the market for 'sound absorbing' materials to block sound transmission, always be wary of claims. "Blocks >60dB" ..at what frequencies? 20Hz? Not a chance, mate.
OK, so lower frequencies take much more energy to generate, but as they're carrying more energy, they also take a lot more effort to block.
Consider the thunder that accompanies lightning. If you are quite close when it strikes [or like they make it sound in the movies], it is a massive 'crack' of pretty much 'all frequencies at once'. By the time you're a mile away [about 5 seconds between flash & sound] then already it's just a deep rumble - the high frequencies didn't even make it that far in clear air. The bass did, though.
A domestic interior wall made of wood frame & drywall [plasterboard] will stop higher frequencies pretty easily, but it will barely stop a human voice - right in the middle of our frequency range, as we discussed earlier, along with the guitars etc.
You'll know this if you have argumentative neighbours/spouse/children etc, or even just an over-enthusiastic dog - slamming the doors & stomping off to another room merely dulls the sound, it doesn't block it entirely.
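The frequency dependence described above can be put in rough numbers with the single-panel "mass law", a standard engineering approximation (not something quoted in the original answer; the drywall surface density below is an assumed ballpark figure):

```python
import math

def mass_law_tl(freq_hz, surface_density_kg_m2):
    """Approximate field-incidence transmission loss (dB) of a single panel.

    TL ~= 20*log10(f * m) - 47, with f in Hz and m in kg/m^2.
    A rough rule of thumb, valid only well above the panel's resonance
    and below its coincidence frequency.
    """
    return 20 * math.log10(freq_hz * surface_density_kg_m2) - 47

drywall = 10.0  # kg/m^2, roughly one sheet of 12.5 mm plasterboard (assumed)
for f in (63, 250, 1000, 4000):
    print(f"{f} Hz: ~{mass_law_tl(f, drywall):.0f} dB")
```

Even this idealized rule gives only about 9 dB of attenuation at 63 Hz versus about 45 dB at 4 kHz for the same panel, which is exactly the "the bass goes straight through" effect the answer describes.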
A light piece of drywall will itself have a resonant frequency, one at which it will vibrate in sympathy. If allowed the freedom to vibrate, it will then transmit that frequency out the other side, almost unimpeded. If we damp it, with foam or rockwool, that will make the energy dissipate, it will waste itself trying to make the rockwool vibrate. If we then add a wooden frame & another piece of drywall of a different thickness & density, this will want to resonate at a different frequency, yet will also be damped by the filler. Things like rockwool don't really have a resonant frequency, but what they will do is just allow very low frequencies to pass as though they weren't there. What they are good at is damping the vibrations of the larger resonant surfaces - in effect making them waste their energy.
This is the basic element of sound transmission prevention. You have layers of different densities, each dissipating energy at a different frequency. The very lowest frequencies need the greatest mass to damp. To do this for a domestic wall would actually take too many layers to be really sensible. Before long, the wood frame would actually be transmitting most of the noise. You'd have to go to additional lengths to prevent transmission through the basic frame the building is constructed on.
Recording studios use this principle, though, by 'floating' a room within a room - literally trying to isolate one structure from the rest of the building, sitting it on sound absorbing [ie non-resonant] rubber.
Concrete in a single 'lump' or 'sheet' also has a resonant frequency - but it also has a lot of mass, making it harder [though not impossible] to vibrate. You could still feel trucks going by, or the very lowest frequencies from a loud hifi. Concrete's 'problem' in this respect is that it's really all one piece, so it does have actual resonant frequencies, like a really thick piece of drywall.
Better than concrete is a wall made of lots of separate stones - non identical, not built in any particular mathematical or artistic pattern. This gives a lot of variation in frequency, each stone & cement joint all being excited by one frequency but damped by all the others around it, which want to vibrate at a different frequency.
Right… so, eventually down to your elephant. Sorry it's taken a while.
Your elephant is, in structure [apart from being alive & probably not too happy at being part of this experiment] most similar to the rough stone wall. It's a bag of liquid, with a lot of different density membranes and an overall skeleton of loosely jointed harder materials. Water will actually transmit sound similarly to air, but as each bit of water in our elephant is in its own tiny cell, the disparity of sizes & interspersed membranes breaks this up like our stone wall.
It has a lot of mass, too.
If you were to persuade your elephant to lean on a resonant wall, it would damp the wall to some extent, depending on how hard it squished itself against it. It would be acting like the world's biggest lump of rockwool or neoprene rubber. It wouldn't really be fabulously efficient used in this manner.
If, however, we were to replace the wall with entirely elephant, you would very probably have a really, really good sound insulator. 8ft of varying densities & substrates, a lot of mass & not a lot of cohesion between each component & its resonant frequency. If we were to get a bit 'icky' then its lungs & rib cage would probably be the most resonant part - so let's wedge him into the wall facing the noise, & we can sit quietly in the room his butt protrudes into.
Sonic bliss, if not olfactory ;)
Note: I've not even touched on reflectivity or diffusion - that would turn this into a full novel ;))
After comments
It is remarkably difficult to achieve total sonic separation in a domestic environment. I once built a 'room within a room' as a home recording studio, in a house basement, of course surrounded by earth - lots of mass. I did manage to achieve sufficient reduction that you could no longer hear anyone shouting or singing as loud as they possibly could, from the room above. My bass (guitar) amp, however, still went through it like a knife through butter. It took the extra attenuation of the building [double skin of old Victorian brickwork] & earth itself to fully damp it to below audibility. You couldn't hear it from outdoors unless it was particularly quiet in the street. | {
"domain": "physics.stackexchange",
"id": 93599,
"tags": "acoustics"
} |
How to open PCD file | Question:
Hi guys. I'm working with my Kinect to extract point clouds. So first I run openni_launch and then I run: rosrun pcl_ros pointcloud_to_pcd input:=/camera/depth_registered/points
So after this, ROS saves every point cloud that the Kinect gets. So my question is: how can I extract data from a PCD? I want to extract the fields described by sensor_msgs/PointField[].
Is there another method to get all the information about a PCD file?
thanks to all!
Originally posted by kalum on ROS Answers with karma: 3 on 2016-03-08
Post score: 0
Original comments
Comment by Reza1984 on 2016-03-09:
If you installed pcl library you can use
#include <pcl/io/pcd_io.h>
otherwise use
#include <pcl_ros/io/pcd_io.h>
with the following command
pcl::io::loadPCDFile<pcl::PointXYZ> ("test_pcd.pcd", *cloud)
Comment by kalum on 2016-03-10:
Thanks man!! I've another question. Is there a method to discover the distance between my image/object and my kinect? Thank you :)
Answer:
You can use PCL. Check this tutorial: http://pointclouds.org/documentation/tutorials/reading_pcd.php
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
int
main (int argc, char** argv)
{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
if (pcl::io::loadPCDFile<pcl::PointXYZ> ("test_pcd.pcd", *cloud) == -1) //* load the file
{
PCL_ERROR ("Couldn't read file test_pcd.pcd \n");
return (-1);
}
std::cout << "Loaded "
<< cloud->width * cloud->height
<< " data points from test_pcd.pcd with the following fields: "
<< std::endl;
for (size_t i = 0; i < cloud->points.size (); ++i)
std::cout << " " << cloud->points[i].x
<< " " << cloud->points[i].y
<< " " << cloud->points[i].z << std::endl;
return (0);
}
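Since the question specifically asks about the field list (the sensor_msgs/PointField-style metadata), it is worth noting that a PCD file begins with a plain-text header that can be inspected without PCL at all. A hedged sketch in Python (the file name is hypothetical):

```python
def read_pcd_header(path):
    """Parse the plain-text header of a PCD file into a dict.

    PCD headers are lines like 'FIELDS x y z', 'SIZE 4 4 4', ...,
    terminated by the 'DATA' line; the point data follows after that.
    """
    header = {}
    with open(path, "r", errors="ignore") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip comments and blank lines
                continue
            key, _, value = line.partition(" ")
            header[key] = value.split()
            if key == "DATA":  # header ends here; (possibly binary) data follows
                break
    return header

# Hypothetical file name; real captures come from pointcloud_to_pcd.
# header = read_pcd_header("cloud.pcd")
# print(header["FIELDS"])  # e.g. ['x', 'y', 'z', 'rgb']
```

This only reads metadata; to access the actual points you would still load the cloud with pcl::io::loadPCDFile as in the answer above.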
Originally posted by Akif with karma: 3561 on 2016-03-09
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by kalum on 2016-03-10:
Thanks man!! I've another question. Is there a method to discover the distance between my image/object and my kinect? Thank you :) | {
"domain": "robotics.stackexchange",
"id": 24034,
"tags": "ros, pcd"
} |
A non-blocking lock decorator in Python | Question: I needed to impose on a Python method the following locking semantics: the method can only be run by one thread at a time. If thread B tries to run the method while it is already being run by thread A, then thread B immediately receives a return value of None.
I wrote the following decorator to apply these semantics:
from functools import wraps
from threading import Lock
def non_blocking_lock(fn):
fn.lock = Lock()
@wraps(fn)
def locker(*args, **kwargs):
if fn.lock.acquire(False):
try:
return fn(*args, **kwargs)
finally:
fn.lock.release()
return locker
This works in my testing so far. Does anyone notice any gotchas or have any suggestions for improvements?
Revised version
After the suggestions by @RemcoGerlich, I have added a docstring and kept the lock local to the decorator:
from functools import wraps
from threading import Lock
def non_blocking_lock(fn):
"""Decorator. Prevents the function from being called multiple times simultaneously.
If thread A is executing the function and thread B attempts to call the
function, thread B will immediately receive a return value of None instead.
"""
lock = Lock()
@wraps(fn)
def locker(*args, **kwargs):
if lock.acquire(False):
try:
return fn(*args, **kwargs)
finally:
lock.release()
return locker
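A quick way to sanity-check the semantics (an illustrative sketch, not part of the original post): hold the lock from a worker thread while calling the function from the main thread; the second call should come back with None immediately. The decorator is repeated here, with an explicit else branch, so the snippet is self-contained.

```python
import threading
from functools import wraps
from threading import Lock

def non_blocking_lock(fn):
    """Same decorator as above, with an explicit else branch for clarity."""
    lock = Lock()
    @wraps(fn)
    def locker(*args, **kwargs):
        if lock.acquire(False):
            try:
                return fn(*args, **kwargs)
            finally:
                lock.release()
        else:
            return None
    return locker

started = threading.Event()  # worker signals that it holds the lock
release = threading.Event()  # main thread tells the worker to finish

@non_blocking_lock
def work():
    started.set()
    release.wait()
    return "done"

worker = threading.Thread(target=work)
worker.start()
started.wait()          # the worker is now inside work(), holding the lock
busy_result = work()    # rejected: returns None immediately, without blocking
release.set()
worker.join()
free_result = work()    # the lock is free again: returns "done"
```

The events make the interleaving deterministic, which is what makes this testable; without them the second call could race the worker for the lock.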
Answer: I think this will work fine and it's very close to how I would write it myself.
A few small things come to mind:
There's no documentation of any kind that explains what the semantics are, and they're not explicit either (the return None if the lock isn't acquired is entirely implicit). I would put the short explanation you put in this question into a docstring, and/or add an explicit else: return None to the if statement.
is there any reason why the lock object is exposed to the outside world by making it a property of the function (fn.lock) ? I would simply make it a local variable, so that it's hidden. But I'm not sure. | {
"domain": "codereview.stackexchange",
"id": 6299,
"tags": "python, concurrency"
} |
Is the diameter of earth's north pole similar to the south? (Think axis of rotation + axis of precession) | Question: Earth is an oblate spheroid, not a sphere, since its rotation around the axis causes it to bulge along the equator.
Is it really a symmetric oblate spheroid though, or not even that?
To explain: If we combine the effect due to the rotation around the axis, with the precession of the axis itself, is it correct to assume that there should also be some asymmetry between the geographic north and south poles?
Namely that there is at least a small difference between the diameter of the two?
If so, how large is it?
Part two: If there is such asymmetry, does it change over the duration of a Milankovitch cycle?
Answer: Yes. By definition, an axis is a straight line through the center. Therefore, the axes of rotation and precession make the same angle with each other whether you measure it to the north or to the south.
Of course, Earth is an oblate spheroid only to some approximation. In reality there are hills and mountains. So if you walked and swam along the imaginary circle spanned by those two axes, you would get more wet in the north, cold in the south, and overall walk a different distance, simply because your distance from the center of Earth was different.
"domain": "physics.stackexchange",
"id": 90790,
"tags": "astronomy, earth, precession"
} |
creating my first android app : DefaultNodeFactory symbol not found | Question:
Hi,
I followed the steps in http://ros.org/wiki/ApplicationsPlatform/Clients/Android and installed the Android developer tools as explained there. I also followed step 4 in creating your own Android app, but after running rosmake --threads=1 I get the following error on the 62nd package (which is my project package): DefaultNodeFactory cannot be found. I have copied the error part of the make output below. If I run rosmake on any of the tutorial packages they compile fine, but when I follow the steps explained on the website I keep getting this error, so if someone could help me I would really appreciate it.
[rosmake-1] Starting >>> android_test1 [ make ]
[ rosmake ] Last 40 linesdroid_test1: 6... [ 1 Active 62/63 Complete ]
{-------------------------------------------------------------------------------
[echo] Handling Resources...
[aapt] No changed resources. R.java and Manifest.java untouched.
-pre-compile:
-compile:
[javac] /home/vahid/ROS_DIR/rosjava_core/rosjava_bootstrap/android.xml:607: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 52 source files to /home/vahid/ROS_DIR/appmanandroid/library/bin/classes
[javac] RosActivity.java:48: cannot find symbol
[javac] symbol : class DefaultNodeFactory
[javac] location: package org.ros.internal.node
[javac] import org.ros.internal.node.DefaultNodeFactory;
[javac] ^
[javac] SendGoalDisplay.java:59: cannot find symbol
[javac] symbol : class DefaultNodeFactory
[javac] location: package org.ros.internal.node
[javac] import org.ros.internal.node.DefaultNodeFactory;
[javac] ^
[javac] RosActivity.java:570: cannot find symbol
[javac] symbol: class DefaultNodeFactory
[javac] node = new DefaultNodeFactory(nodeMainExecutor.getScheduledExecutorService()).newNode(config);
[javac] ^
[javac] SendGoalDisplay.java:206: cannot find symbol
[javac] symbol : class DefaultNodeFactory
[javac] location: class ros.android.views.SendGoalDisplay
[javac] Node newNode = new DefaultNodeFactory().newNode(nc);
[javac] ^
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 4 errors
BUILD FAILED
/home/vahid/ROS_DIR/appmanandroid/build_app.xml:116: The following error occurred while executing this line:
/home/vahid/android-sdk-linux_x86/tools/ant/build.xml:485: The following error occurred while executing this line:
/home/vahid/ROS_DIR/rosjava_core/rosjava_bootstrap/android.xml:587: The following error occurred while executing this line:
/home/vahid/ROS_DIR/rosjava_core/rosjava_bootstrap/android.xml:607: Compile failed; see the compiler error output for details.
Total time: 5 seconds
Executing command: ['ant']
[ rosmake ] Output from build of package android_test1 written to:
[ rosmake ] /home/vahid/.ros/rosmake/rosmake_output-20120212-194658/android_test1/build_output.log
[rosmake-1] Finished <<< android_test1 [FAIL] [ 61.48 seconds ]
[ rosmake ] Halting due to failure in package android_test1.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 63 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/vahid/.ros/rosmake/rosmake_output-20120212-194658
Originally posted by vahid on ROS Answers with karma: 31 on 2012-02-12
Post score: 1
Answer:
Maybe you are using an old version of rosjava. I don't see the class DefaultNodeFactory in my project.
Try comparing the source from your project with the source from my project.
/*
* Copyright (C) 2011 Google Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*/
package myandroid.test_and;
import ros.android.activity.RosAppActivity;
import android.os.Bundle;
import org.ros.node.Node;
import android.util.Log;
import android.widget.Toast;
import android.view.Menu;
import android.view.MenuInflater;
import android.view.MenuItem;
//TODO: search for all instances of TODO
/**
* @author damonkohler@google.com (Damon Kohler)
* @author pratkanis@willowgarage.com (Tony Pratkanis)
*/
public class AndroidTest extends RosAppActivity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
setDefaultAppName("AndroidTest");
setDashboardResource(R.id.top_bar);
setMainWindowResource(R.layout.main);
super.onCreate(savedInstanceState);
//TODO: add code
//Called on creation. ROS hasn't started yet, so don't start
//anything that depends on ROS. Instead, look up things like
//resources. Initialize your layout here.
}
/** Called when the node is created */
@Override
protected void onNodeCreate(Node node) {
super.onNodeCreate(node);
//TODO: Put your initialization code here
}
/** Called when the node is destroyed */
@Override
protected void onNodeDestroy(Node node) {
super.onNodeDestroy(node);
//TODO: Put your shutdown code here for things the reference the node
}
/** Creates the menu for the options */
@Override
public boolean onCreateOptionsMenu(Menu menu) {
MenuInflater inflater = getMenuInflater();
inflater.inflate(R.menu.AndroidTest_menu, menu);
return true;
}
/** Run when the menu is clicked. */
@Override
public boolean onOptionsItemSelected(MenuItem item) {
switch (item.getItemId()) {
case R.id.kill: //Shutdown if the user clicks kill
android.os.Process.killProcess(android.os.Process.myPid());
return true;
//TODO: add cases for any additional menu items here.
default:
return super.onOptionsItemSelected(item);
}
}
}
Originally posted by Alexandr Buyval with karma: 641 on 2012-02-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by vahid on 2012-02-20:
Thanks for the response, it actually was the same. Since I cannot leave a lot of comments here, I explained it down below. Thanks.
Comment by vahid on 2012-02-25:
Hi, I wanted to know if I could please get my question answered down below.
"domain": "robotics.stackexchange",
"id": 8205,
"tags": "ros, client-rosjava, rosjava, android"
} |
If the resistive forces are greater than the driving force, will the car decelerate or reverse? | Question:
Describe the motion of the cyclist if:
the resistive force is greater than the driving force.
the driving force is greater than the resistive force.
the driving force has the same magnitude as the resistive force.
Suggest two ways in which the cyclist could reduce the resistive force.
Why does the propulsive force from the car engine need to be greater than the net force (that causes acceleration)?
For bullet point 1's...
(1) Will the cycle decelerate or move in the opposite direction?
(2) Will the cycle speed up or simply move?
(3) Will the cycle keep moving in constant speed or stop moving at all?
(4) One way can be reducing the amount of friction. Is there another way?
For bullet point 2...
Is it because of the presence of the resistive forces? So does this make the answer to 1's (3) the second option?
Answer: Newton's 2nd Law answers it all:
$$\sum F=ma$$
And as you see in this law, no velocities are involved. The speed in whatever direction is not connected to the accelerations that might happen.
Net force is negative: $\sum F=ma<0$. In other words, the net force is pointing backwards. The acceleration is in the same direction as the net force and is so also backwards.
Net force is now positive: $\sum F=ma>0$. Acceleration is too and is thus forward.
Net force is zero: $\sum F=0=ma$. Nothing accelerates. Whatever motion it has does not change.
None of these three descriptions mentions speed. In all cases the speed could be either forwards or backwards; we don't know. For example, in the first bullet point, if the speed is forward, then it is slowing down; if the speed is backwards, then it is speeding up backwards.
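To make the first bullet point concrete with a worked example (all numbers are invented for illustration):

```python
# Illustrative numbers only: a cyclist moving forward while the
# resistive force exceeds the driving force.
mass = 80.0              # kg, rider plus bike (assumed)
driving_force = 200.0    # N, forward
resistive_force = 250.0  # N, backward

# Newton's 2nd law: the net force sets the acceleration, not the velocity.
net_force = driving_force - resistive_force  # -50 N, i.e. pointing backward
acceleration = net_force / mass              # -0.625 m/s^2

# A cyclist moving forward at 5 m/s decelerates; they do not instantly reverse.
v0 = 5.0                               # m/s, forward
v_after_2s = v0 + acceleration * 2.0   # 3.75 m/s: still forward, just slower
```

The sign of the acceleration is fixed by the net force, while the sign of the velocity is whatever it already was; the cyclist would only start moving backwards if the backward net force kept acting after the speed reached zero.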
The resistive force is the friction. The question is asking how you can reduce that, for example by pumping your tires harder and by not biking on a sandy beach. Other resistive forces could be friction in joints, gears and the like, but I assume that this question treats these as ideal.
For bullet point 2... Is it because of the presence of the resistive forces?
The question doesn't really make sense. Is the propulsion force larger than the net force? Yes of course, since the net force could for example be zero.
If you mean that the propulsion force must be bigger than the resistive force, then your answer is fine and correct. | {
"domain": "physics.stackexchange",
"id": 40447,
"tags": "homework-and-exercises, newtonian-mechanics, forces, friction"
} |
Shuffling a list of track indices | Question:
This question has a follow-up: Follow up to Shuffling a list of track indices
I have a bunch of paths in an ArrayList<String> and when a user has activated the shuffle button, I want to have a random order to play all of them. This class should help me get every index once. The methods getNext() and getPrevious() should help me to get the last / next song when the user clicks on either the next or previous button.
public class ShuffleGenerator {
private ArrayList<Integer> indexList;
private int currentIndex;
public ShuffleGenerator(){
indexList = new ArrayList<Integer>();
currentIndex = 0;
}
public void initalize( int seed, int size ){
if( !indexList.isEmpty() ){
indexList.clear();
}
if( size > 0 ){
//fill it with indices
ArrayList<Integer> indices = new ArrayList<Integer>();
for( int i = 0; i < size; i++ ){
indices.add(i);
}
Random rnd = new Random(seed);
do{
int index = rnd.nextInt(indices.size());
indexList.add( indices.get(index) );
indices.remove(index);
}while( indices.size() > 0 );
}
}
/**
* Get the next index.
* @return -1 if failed otherweise the index.
*/
public int getNext(){
//if currentIndex is invalid
if( currentIndex+1 >= indexList.size() || indexList.isEmpty() ){
return -1;
}
currentIndex++;
return indexList.get(currentIndex);
}
/**
* Get the previous index.
* @return -1 if failed otherwise the index.
*/
public int getPrevious(){
//if currentIndex is invalid
if( currentIndex-1 < 0 || indexList.isEmpty() ){
return -1;
}
//decrement the current index so we can get the previous one
currentIndex--;
return indexList.get(currentIndex);
}
}
I tested it twice and it seems like I get every index from 0 to size-1 in a random order. But is it really giving me every index? I might have made a mistake and just not noticed yet. And is there anything to improve?
Answer: Sorting Algorithm
Your sorting algorithm is fine. Note that as pointed out in the other answers, improvements could be made to it to speed it up. As it is right now, however, it does produce good results.
However, java.util.Random uses a long as seed. I suggest changing public void initalize( int seed, int size ) to public void initalize( long seed, int size ). Keep in mind that you should use a varying seed if you want to implement some sort of randomize function.
An optimization: indices.remove(int index) will return the removed element. This means you can replace
indexList.add( indices.get(index) );
indices.remove(index);
with
indexList.add( indices.remove(index) );
Additionally, checking if indexList is empty is not needed. You're free to clear the list regardless of whether it has content or not.
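For reference, the speed-up the other answers hint at is the in-place Fisher–Yates shuffle (what java.util.Collections.shuffle uses), which avoids the O(n) cost of ArrayList.remove inside the loop. A language-agnostic sketch in Python:

```python
import random

def shuffled_indices(size, seed):
    """Return the indices 0..size-1 in random order via Fisher-Yates, in O(n)."""
    rnd = random.Random(seed)
    indices = list(range(size))
    # Walk backwards, swapping each slot with a randomly chosen
    # slot from the not-yet-fixed prefix (including itself).
    for i in range(size - 1, 0, -1):
        j = rnd.randint(0, i)
        indices[i], indices[j] = indices[j], indices[i]
    return indices

order = shuffled_indices(10, seed=42)
print(order)  # a permutation of 0..9, reproducible for the same seed
```

Like the original initalize, this is deterministic for a given seed, so the same shuffle order can be reproduced, which is handy for resuming playback.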
Other functions
if( currentIndex+1 >= indexList.size() || indexList.isEmpty() ){//in getNextIndex()
You never allow currentIndex to get below 0. Collection.size() can't return a value lower than 0. Thus, the second condition will never evaluate to true.
Additionally, when shuffling, you don't reset currentIndex to 0. Is that intended?
Naming
initalize is a typo, I think. It should be initialize. Names with typos are hard to remember as you need to remember which way it is written exactly.
Use of built-in functions
while( indices.size() > 0 );//in initalize
You have .isEmpty() for that. I suggest you use it.
Comments
/**
* Get the next index.
* @return -1 if failed otherweise the index.
*/
(...)
/**
* Get the previous index.
* @return -1 if failed otherwise the index.
*/
This comment has typos and a grammatical issue. I recommend @return -1 if failed, otherwise the index. instead. Clarifying what the index is wouldn't be a bad idea either.
//if currentIndex is invalid
if( currentIndex+1 >= indexList.size() || indexList.isEmpty() ){
return -1;
}
"Invalid" is arbitrary. How about "if the next index is out of bounds"?
One of your public methods, initialize (or that's what it should be called after you've fixed your typos), is lacking javadoc. I suggest you add it - in particular, explain what the parameters are for. Lastly, you might want to explain the purpose of the class by placing javadoc at the top of the class. | {
"domain": "codereview.stackexchange",
"id": 8949,
"tags": "java, shuffle"
} |
Deleting millions of rows from a MSSQL server table | Question: This SQL query took 38 minutes to delete just 10K of rows. How can I optimize it?
Index already exists for CREATEDATE from table2.
declare @tmpcount int
declare @counter int
SET @counter = 0
SET @tmpcount = 1
WHILE @counter <> @tmpcount
BEGIN
SET ROWCOUNT 10000
SET @counter = @counter + 1
DELETE table1
FROM table1
JOIN table2
ON table2.DOC_NUMBER = table1.DOC_NUMBER
AND table2.DOC_YEAR = table1.DOC_YEAR
WHERE YEAR(CREATEDDATE) BETWEEN 2007 and 2009
IF @@ROWCOUNT = 0
BREAK
END
Answer: The performance problem is due to the non-sargable expression YEAR(CREATEDDATE). Applying a function to a column in a WHERE clause prevents efficient use of the index on the column. Below is an alternative technique, which uses an inclusive start and exclusive end for the datetime range.
Note the use of SET ROWCOUNT is deprecated for DELETE, INSERT and UPDATE statements. Use TOP instead.
DELETE TOP(10000) table1
FROM dbo.table1
JOIN dbo.table2
ON table2.DOC_NUMBER = table1.DOC_NUMBER
AND table2.DOC_YEAR = table1.DOC_YEAR
WHERE
CREATEDDATE >= '20070101'
AND CREATEDDATE < '20100101';
I'm not sure the purpose of @tmpcount in your script other than perhaps controlling the number of iterations for testing. | {
"domain": "codereview.stackexchange",
"id": 24090,
"tags": "sql, datetime, sql-server, time-limit-exceeded"
} |
Why do crystals grow in preferred directions? | Question: I want to know why snowflakes (and other crystals) grow symmetrically and I find the leading answer to the established question to be entirely unsatisfactory.
When water freezes, you get ice. Ice, like many solid materials, forms a crystalline structure. In the case of water, the crystalline structure may be attributed to the hydrogen bond, a special kind of an attractive interaction.
So a big chunk of ice will have a crystalline structure - preferred directions, translational symmetry, and some rotational symmetries.
Nature adds one water molecule at a time. The molecules always try to choose the most energetically favored position on the frozen body. Because these laws of creation of a snowflake are symmetric with respect to the rotational symmetries, it follows that any symmetry that exists at the beginning - a hexagonal symmetry of a small number of molecules in the initial crystal - will be preserved.
No, sorry, it doesn't follow at all. Something hugely important is missing from this answer -- causality.
Suppose the snowflake at one moment is perfectly symmetrical. Then suppose a random molecule gets added at point A. To maintain the symmetry, later on a random molecule gets added at a corresponding point B (and four others). But the motion of the water molecule is random and there is no reason for it to end up at B or anywhere else in particular.
Even considering theo's remarks about dendrite growth, why don't we see this?
I see absolutely no reason why any given additional water molecule cannot land in an energetically unfavourable position. Besides, if low energy considerations were a driving force, snowflakes would be approximately spherical.
So, what's really going on? How can a bump at A create an attractive force at a distant B?
Answer: Broadly speaking, the reason why a snowflake forms a flat crystal with 6-fold symmetry (as opposed to a sphere) is due to a combination of the underlying symmetry (order) of the ice crystal and a dynamic instability (chaos) resulting from the non-linear phenomena of solidification and dendrite formation as a function of temperature and humidity variations in the atmospheric clouds.
Each snowflake is a crystal which grows out from a central nucleus of ice which has a 'hexagonal' symmetry. The simplest 'snowflake' is therefore a hexagonal prism.
When molecules of water 'adhere' to the surface of the crystal, the non-linear combinations of humidity and temperature create conditions in which flat surfaces are inherently 'unstable'. As the crystal tosses and turns in the atmosphere, water molecules solidify on tiny 'bumps' like the corners of the hexagon crystal so that these 'bumps' grow faster than the adjacent 'flat' regions, thereby forming long 'needle' like structures. As the 'needles' get bigger, any small irregularities along the edge become unstable and water solidifies on these to form 'branch' like structures. The tendency is for the needles to grow from the rectangular 'prism facets' rather than the hexagonal facets, thereby resulting in snowflakes with a flat shape rather than a 'spiky ball'. A detailed description can be found in this paper on "the physics of a snow crystal"
Because the snowflake is very small and temperature and humidity vary relatively slowly, the conditions for solidification of the water at a given point on the crystal will be the same at the 6-fold symmetrical points, so that the snowflake 'grows' symmetrically. In this way, smaller needles grow on previous needles to form a flat plate with 'tree-like' structures (dendrites) around a hexagonal prism nucleus in a 6-fold symmetric manner.
According to Professor Ian Stewart of Warwick University, author of "what shape is a snowflake?":
An ice crystal grows when molecules of water adhere to its surface. Certain combinations of humidity and temperature create conditions in which flat surfaces are dynamically unstable, an effect called the Mullins-Sekerka instability. In these conditions, if a flat surface accidentally develops a tiny bump, the bump grows faster than other nearby regions, amplifying the irregularity. A big enough bump is nearly flat, and becomes unstable for the same reason, so new smaller bumps proliferate. This process of repeated ‘tip-splitting’ leads to a fernlike pattern known as a dendrite. Dendritic growth causes the enormous variety of shapes seen in snowflakes, because the branching patterns are extremely sensitive to slight changes in humidity and temperature.
A lecture on the topic of the mathematics of patterns and symmetry, accessible to the layman, can be found on Youtube. In this video (from minute 3:00 to 15:00), Ian Stewart gives an excellent introduction to the formation of symmetry within the snowflake.
A detailed description of the Mullins-Sekerka instability and the remarkable phenomena responsible for the formation of these beautiful patterns in snowflake crystals can be found in this paper by J.S. Langer. | {
"domain": "physics.stackexchange",
"id": 18963,
"tags": "water, crystals, ice"
} |
Violation of Pauli exclusion principle | Question: From hyperphysics (emphasis mine):
Neutron degeneracy is a stellar application of the Pauli Exclusion Principle, as is electron degeneracy. No two neutrons can occupy identical states, even under the pressure of a collapsing star of several solar masses. For stellar masses less than about 1.44 solar masses (the Chandrasekhar limit), the energy from the gravitational collapse is not sufficient to produce the neutrons of a neutron star, so the collapse is halted by electron degeneracy to form white dwarfs. Above 1.44 solar masses, enough energy is available from the gravitational collapse to force the combination of electrons and protons to form neutrons. As the star contracts further, all the lowest neutron energy levels are filled and the neutrons are forced into higher and higher energy levels, filling the lowest unoccupied energy levels. This creates an effective pressure which prevents further gravitational collapse, forming a neutron star. However, for masses greater than 2 to 3 solar masses, even neutron degeneracy can't prevent further collapse and it continues toward the black hole state.
How then, can they collapse, without violating the Pauli Exclusion Principle? At a certain point does it no longer apply?
Answer: The Pauli exclusion principle is being applied here to FREE neutrons. There are always free energy/momentum states for the neutrons to fill, even if they are compressed to ultra-high densities; these free states just have higher and higher energies (and momenta).
One way of thinking about this is in terms of the uncertainty principle. Each quantum state occupies approximately $h^3$ of position-momentum phase-space. i.e. $(\Delta p)^3 \times (\Delta x)^3 \geq h^3$. (Actually each momentum state can accommodate 2 fermions - spin up and spin down).
If you increase the density, $\Delta x$ becomes small, so $\Delta p$ has to become large. i.e. the neutrons will occupy higher and higher momentum states as the Fermi energy is increased. You can make the density as high as you like and the PEP is not violated because the particles gain higher momenta.
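To put rough numbers on this, here is a back-of-the-envelope sketch (not from the original answer), treating the neutrons as an ideal non-relativistic Fermi gas at an assumed typical neutron-star density:

```python
import math

hbar = 1.055e-34      # reduced Planck constant, J s
m_n = 1.675e-27       # neutron mass, kg
rho = 4e17            # mass density, kg/m^3 (assumed typical core value)

n = rho / m_n                                  # neutron number density, 1/m^3
p_f = hbar * (3 * math.pi**2 * n) ** (1 / 3)   # Fermi momentum
E_f_MeV = p_f**2 / (2 * m_n) / 1.602e-13       # Fermi energy, converted J -> MeV
print(E_f_MeV)  # of order tens of MeV: higher density forces higher momenta
```

Since the neutron rest energy is about 939 MeV, a Fermi energy of this order means the neutrons are already becoming mildly relativistic, consistent with the "saturation" described below.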
The increasing momenta of the neutrons supplies an increasing degeneracy pressure. However, there is a "saturation", because eventually all the neutrons become ultrarelativistic and so an increase in density does not lead to such a big increase in pressure. Technically - $P \propto \rho$ at extremely high densities.
It is then a bit of standard textbook astrophysics to show that a star supported by such an equation of state is not stable and will collapse given the slightest perturbation.
In reality neutron stars are not supported by ideal degeneracy pressure - there is a strong repulsive force when they are compressed beyond the nuclear saturation density, with something like $P \propto \rho^2$. Yet even here, an instability is reached at finite density because in General Relativity, the increasing pressure contributes (in addition to the density) to the extreme curvature of space and ultimately means that the star collapses at a finite density and pressure. | {
"domain": "physics.stackexchange",
"id": 21055,
"tags": "quantum-mechanics, astrophysics, stars, pauli-exclusion-principle, neutron-stars"
} |
cmd_vel topic from navigation stage not sending to raspberry robot | Question:
Hello
I have a problem with the navigation stage module. I have a robot with a Raspberry Pi and a Hokuyo lidar mounted on it, and a base station for mapping calculations.
The standard way I run ROS on the two machines is: 1. start roscore 2. start scan streaming 3. start reading cmd_vel on robot 4. start tf publisher 5. start rf2o 6. start hector_slam 7. start navigation_stage 8. start frontier_exploration
My problem is that when I set a 2D Nav goal somewhere in the room and at the same time run echo on the cmd_vel topic, I can see that navigation stage is publishing velocity data. But when I run cmd_vel echo on the RPi, there is no data.
The more interesting thing is that I wrote my manual control topic which is publishing to cmd_vel and this is working correctly and robot is moving.
It seems that for some reason when running navigation_stage data is only visible on local PC and not on RPi.
Can you help me find a solution to this problem?
Originally posted by bjurek5 on ROS Answers with karma: 3 on 2018-04-22
Post score: 0
Answer:
As @Rodolfo8 said it's probably a networking configuration problem. You have to make sure you've correctly configured the ROS_MASTER_URI variable from your PC and Raspberry.
Try echo $ROS_MASTER_URI from your PC and from the Raspberry. Do it from every terminal you are using to be sure they all share the same master.
Another thing is that by default you should have ROS_MASTER_URI=http://localhost:11311/ or something similar, but you should change it to http://192.168.1.XX:11311/ (the IP address of the raspberry or the PC) and then run export ROS_MASTER_URI=http://192.168.1.XX:11311/ from all your terminals.
Finally, you should also set the variable ROS_HOSTNAME to the corresponding IP address.
Originally posted by Delb with karma: 3907 on 2018-04-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2018-04-23:\
set the variable ROS_HOSTNAME to the corresponding IP adress.
I'd recommend sticking to hostnames for ROS_HOSTNAME, and use ROS_IP for IPs.
The former can exploit the flexibility that DNS provides (but you need a working DNS).
Comment by bjurek5 on 2018-04-23:
I will check in afternoon where I have configured IP's and where not. Does the console running roscore also need to have configured ROS_MASTER_URI before ?
Comment by Delb on 2018-04-23:
Yes, every console must have the same ROS_MASTER_URI
Comment by bjurek5 on 2018-05-07:
I added on host PC export ROS_MASTER_URI to .bashrc file, and the second thing where I had problem earlier was to have in virtualbox configuration only one bridged network card, for some reason I had two activated, after disabling everything runs well, thanks for support | {
"domain": "robotics.stackexchange",
"id": 30716,
"tags": "ros, navigation, communication, ros-kinetic"
} |
Function to perform checks on data | Question: I'm currently trying to do data quality checks in python primarily using Pandas.
The code below is fit for purpose. The idea is to do the checks in Python and export to an Excel file, for audit reasons.
I perform checks like the below in 4 different files, sometimes checking if entries in one are also in the other, etc.
df_symy is just another DataFrame that I imported inside the same class.
I'm wondering if there's a better way to do this task, for example:
create a function for each check
create a more meaningful way to check the data and present the results, etc
def run_sharepoint_check(self) -> pd.DataFrame:
"""Perform checks on data extracted from SharePoint.
Check 1: Amount decided is 0 or NaN
Check 2: Invalid BUS ID
Check 3: Duplicated entries. It also tags the latest entry
Check 4: SYMY Special not in SharePoint
"""
# Use this function to check the data we have from SharePoint
# and return DF with checks in the excel file
# Another function will be used to process the data and produce
# the final list
df = self.sharepoint_data.copy()
df_symy = self.symy_data
df_symy = df_symy.loc[df_symy['COMMITMENT_TYPE'].str[0] == 'S']
symy_cols = ['BUYER_NUMBER', 'Balloon ID',
'CLD_TOTAL_AMOUNT', 'POLICY_CURRENCY']
df = df.merge(right=df_symy[symy_cols],
how='outer',
left_on=['Balloon ID', 'BUS ID'],
right_on=['Balloon ID', 'BUYER_NUMBER'])
check_1 = (df['Amount decided'] == 0) | df['Amount decided'].isna()
df.loc[check_1, 'check_1'] = 'Amount decided is 0 or NaN'
check_2 = df['BUS ID'].isna()
df.loc[check_2, 'check_2'] = 'Invalid BUS ID'
check_3 = df.duplicated(subset=['Balloon ID', 'BUS ID'], keep=False)
df.loc[check_3, 'check_3'] = 'Duplicated entry'
check_3_additional = ~df.duplicated(
subset=['Balloon ID', 'BUS ID'], keep='first'
)
# Filter only the entries that are duplicated
# Out of those, the first one is the latest
df.loc[(check_3) & (check_3_additional),
'check_3'] = 'Duplicated entry (Latest)'
# Match Balloon+BUSID (SYMY) to SharePoint
check_4 = (~df.duplicated(subset=['Balloon ID', 'BUS ID'],
keep='first')) & (df['BUS ID'].isna())
df.loc[check_4, 'check_4'] = 'SYMY SA not in SharePoint'
check_cols = ['check_1', 'check_2', 'check_3', 'check_4']
# .fillna('OK') just for visual purposes in Excel.
df[check_cols] = df[check_cols].fillna('OK')
# self.data_checks_dfs in the init method is an empty dictionary.
self.data_checks_dfs['SharePoint_checks'] = df
return df
So, how can I improve this?
Does anyone have this type of task that has been automated using Python?
Answer: It seems difficult to comment on the meaningfulness of your proposed solution. You are the one to decide on that since you know the context around your problem. With that being said, I'm afraid there is not much to go on here.
Nevertheless, one observation I make about your code is that the checks use .loc to locate rows that receive a "non-OK", and finally, after all checks, you use a fill to turn nans into "OK". To make your code more concise and likely faster, consider using np.where. So for instance, your check_1 would be turned into:
df["check_1"] = np.where(
(df['Amount decided'] == 0) | (df['Amount decided'].isna()),
"Amount decided is 0 or NaN", "OK")
Also, I would rename check_N into something that actually means a little more, like amount_check for check_1 and so on. | {
"domain": "codereview.stackexchange",
"id": 41030,
"tags": "python, pandas"
} |
How to get publisher info from a message? | Question:
Can anyone tell me if it's possible to get publisher information in the callback of a particular message?
For the system I'm trying to setup, I've implemented a node that publishes odometry messages (type: nav_msgs/Odometry) and am using a launch file to start multiple instances of this node, under different name spaces. Another node type (call it the coordinator node) will listen for the odometry messages from the multiple publishers. The point of the coordinator node is that it will coordinate the movement of multiple robots based on their reported locations. I'm new to ROS, so I've started by implementing my nodes using the basic templates shown in the tutorials, but I'm now realizing that the example callback method doesn't allow for identifying which particular node broadcasted a given message.
For now, I'm programming in Java and using the client_rosjava_jni library. I was hoping there would be another callback class that would include the publisher info, along with the published message, but I'm not seeing that option, at least in the JNI wrappers. I can imagine getting around this by making my first node type publish a custom message type that includes the pedigree information along with the basic odometry message type, but it seems like this should already be available in the core system. Does anyone know if that's true? Am I not seeing this option because I'm using the JNI wrappers?
Thanks in advance for any help/suggestions.
Hung
Originally posted by HQ on ROS Answers with karma: 3 on 2012-12-16
Post score: 0
Original comments
Comment by HQ on 2012-12-17:
Thanks for the two suggestions, tfoote. I haven't had any success getting the remapping feature working, so that may be another short-coming of the Java JNI wrappers I'm using. Been thinking about switching to the full Java core implementation and it looks like it's time. Appreciate your help with both of my recent questions. :)
Answer:
The information is embedded in the header information and can be found through the C++ and python implementations. The Java implementation may or may not provide a way to get to the metadata.
Overall this breaks the anonymous publish subscribe abstraction which is core to ROS's design. A much cleaner solution is for you to remap your different robots onto different topics so that you know that everything coming in on a specific topic has the same semantic information, and you can provide specific or parameterized callbacks for each source.
Originally posted by tfoote with karma: 58457 on 2012-12-16
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 12131,
"tags": "ros, client-rosjava"
} |
MaxProductOfThree in C# | Question: This is my solution for the MaxProductOfThree from codility, it scores 100%, but I'm not a fan of the if if if structure it has going on.
Task description
A non-empty array A consisting of N integers is given. The product of
triplet (P, Q, R) equates to A[P] * A[Q] * A[R] (0 ≤ P < Q < R < N).
For example, array A such that:
A[0] = -3 A[1] = 1 A[2] = 2 A[3] = -2 A[4] = 5 A[5] = 6
contains the following example triplets:
(0, 1, 2), product is −3 * 1 * 2 = −6
(1, 2, 4), product is 1 * 2 * 5 = 10
(2, 4, 5), product is 2 * 5 * 6 = 60
Your goal is to find the maximal product of any triplet.
Write a function:
class Solution { public int solution(int[] A); }
that, given a non-empty array A, returns the value of the maximal
product of any triplet.
For example, given array A such that:
A[0] = -3 A[1] = 1 A[2] = 2 A[3] = -2 A[4] = 5 A[5] = 6
the function should return 60, as the product of triplet (2, 4, 5) is
maximal.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [3..100,000]; each element of array A
is an integer within the range [−1,000..1,000].
using System;
using System.Collections.Generic;
using System.Linq;
class Solution {
public int solution(int[] A)
{
List<int> B = A.ToList();
int product = 1;
int min1 = B.Min();
if (A.Length > 3)
{
if (min1 < 0)
{
B.RemoveAt(B.IndexOf(min1));
int min2 = B.Min();
if (min2 < 0)
{
int max1 = B.Max();
if (A.Length == 4)
{
product = min1 * min2 * max1;
}
else
{
product = Math.Max(min1 * min2 * max1, getProductOfMaximalsForMoreThanTwoElements(B));
}
}
else
{
product = getProductOfMaximalsForMoreThanTwoElements(B);
}
}
else
{
product = getProductOfMaximalsForMoreThanTwoElements(B);
}
}
else
{
foreach (int element in B)
{
product *= element;
}
}
return product;
}
private int getProductOfMaximalsForMoreThanTwoElements( List<int> B)
{
int max1 = B.Max();
B.RemoveAt(B.IndexOf(max1));
int max2 = B.Max();
B.RemoveAt(B.IndexOf(max2));
int max3 = B.Max();
return max1 * max2 * max3;
}
}
To be honest, the original idea was thought beforehand, but it took me a couple of tries until every test was passed, as with most of these exercises. Are there things to improve in terms of readability, logic and quality of this code?
Answer: This looks like a good attempt. Let's first start assuming we only have to deal with positive values.
private int getProductOfMaximalsForMoreThanTwoElements( List<int> B)
{
int max1 = B.Max();
B.RemoveAt(B.IndexOf(max1));
int max2 = B.Max();
B.RemoveAt(B.IndexOf(max2));
int max3 = B.Max();
return max1 * max2 * max3;
}
This method would provide the solution. So let's review it.
Starting with a couple of style choices, method names in C# should be PascalCase. Additionally, the name of the method is vague. The method provides the product of the 3 biggest numbers: GetProductOfTop3Numbers.
B.IndexOf(B.Max()) is doing unnecessary work. Max loops over the entire list to find the maximum value, and IndexOf does it all over again. That's a waste of effort. Sadly, System.Linq doesn't provide an ArgMax method, so we will have to spin our own:
public static int ArgMax(this IList<int> list) {
var max = int.MinValue;
var argmax = -1;
var index = 0;
foreach(var item in list) {
if (item > max) {
max = item;
argmax = index;
}
index++;
}
return argmax;
}
Here, instead of looping through the list to find the maximum value, and then search the list for that value, we can just return the index we found that biggest value at. The max is then easily retrievable through B[index].
The act of searching the list for the max, and popping it can be its own method:
public static int PopMax(List<int> list)
{
var argmax = list.ArgMax();
var max = list[argmax];
list.RemoveAt(argmax);
return max;
}
Our initial method would then be:
private static int GetProductOfTop3Numbers(List<int> list) {
var product = 1;
product *= PopMax(list);
product *= PopMax(list);
product *= PopMax(list);
return product;
}
Or, more generic:
private static int GetProductOfTopNNumbers(List<int> list, int n) {
var product = 1;
for(var times = 0; times < n; times++) {
product *= PopMax(list);
}
return product;
}
Now let's return to the actual problem: negative numbers. This is what caused you problems introducing a bunch of if-statements. Let's see if we can find something for that.
We want the highest possible product. If we simply take the top 3 items from the list, we might risk missing out on very negative numbers. Take for instance the list new [] { -5, -5, -3, -1, 0, 1, 2, 3 }. In this case, the naive solution would be 1 * 2 * 3, but actually -5 * -5 * 3 has a bigger product. Note that if we consider negative numbers in our solution, it will always have to be a pair of negative numbers, to cancel out the - signs.
This pair would then always be the two lowest entries in the list. The two most negative numbers together should provide the biggest product out of all negative numbers. The product of these can then be compared to the product of the biggest numbers, to decide which of the two sets to pick.
Furthermore, because we will never take 3 negative numbers (because the product would be negative), we can freely take the biggest number in the list. Should this number also be negative (because the input only consists of negative numbers), this is fine as well, because it would then be the least negative number, which still would result in the biggest product.
So, to recap:
Take the biggest number
Take the next two biggest numbers, or take the next two smallest numbers
Check which together has the biggest product
Done
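As a language-neutral cross-check of this recap (a Python sketch, separate from the C# review itself): after sorting once, the answer is the larger of two candidate products.

```python
def max_product_of_three(a):
    # Candidates: the three largest values, or the two smallest
    # (possibly very negative) values times the largest.
    s = sorted(a)
    return max(s[-1] * s[-2] * s[-3], s[0] * s[1] * s[-1])

print(max_product_of_three([-3, 1, 2, -2, 5, 6]))  # 60, matching the task example
```

The same two-candidate comparison is exactly what the PopMax/PopMin version computes, without the sort.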
For the mins, we need to provide similar ArgMin and PopMin implementations. Also realise that simultaneously popping items from the list at both ends might cause trouble if the list is very short, so be sure to copy the list:
private static int GetProductOfTopNNumbers(List<int> list, int n) {
var copiedList = new List<int>(list);
var product = 1;
for(var times = 0; times < n; times++) {
product *= PopMax(copiedList);
}
return product;
}
All in all, that leaves us with:
public int Solution(int[] A) {
var list = A.ToList();
var max = PopMax(list);
return Math.Max(max * GetProductOfMinNNumbers(list, 2), max * GetProductOfTopNNumbers(list, 2));
}
public static int ArgMin(this IList<int> list)
{
var min = int.MaxValue;
var argmin = -1;
var index = 0;
foreach (var item in list)
{
if (item < min)
{
min = item;
argmin = index;
}
index++;
}
return argmin;
}
public static int PopMin(List<int> list)
{
var argmin = list.ArgMin();
var min = list[argmin];
list.RemoveAt(argmin);
return min;
}
private static int GetProductOfMinNNumbers(List<int> list, int n)
{
var copiedList = new List<int>(list);
var product = 1;
for (var times = 0; times < n; times++)
{
product *= PopMin(copiedList);
}
return product;
} | {
"domain": "codereview.stackexchange",
"id": 36387,
"tags": "c#, performance, beginner, array, combinatorics"
} |
Why should the agent bounce the ball back and forth on the same side of the screen in Atari Breakout? | Question: The following is from page 17 of "Michael Hu, “The Art of Reinforcement Learning: Fundamentals, Mathematics, and Implementations with Python”, Apress, 2023"
https://link.springer.com/book/10.1007/978-1-4842-9606-6
An example of good reward engineering is in the game of Atari Breakout, where the goal of the agent is to clear all the bricks at the top of the screen by bouncing a ball off a paddle. One way to design a reward function for this game is to give the agent a positive reward for each brick it clears and a negative reward for each time the ball passes the paddle and goes out of bounds. However, this reward function alone may not lead to optimal behavior, as the agent may learn to exploit a loophole by simply bouncing the ball back and forth on the same side of the screen without actually clearing any bricks.
This part is not clear:
However, this reward function alone may not lead to optimal behavior, as the agent may learn to exploit a loophole by simply bouncing the ball back and forth on the same side of the screen without actually clearing any bricks.
Why should the agent bounce the ball back and forth on the same side of the screen? Is there a reward in this case?
Answer: There might be cases where, say, the agent clears one brick, obtains a positive reward, and then bounces the ball back and forth on the same side of the screen forever; this can yield a higher cumulative reward than trying to clear as many bricks as possible while accumulating many negative rewards that are significant compared to the positive ones. It's like a conservative game player who, once he wins a game, stops playing completely. Or, if a downvote on an answer cost as much as an upvote earns, many more people would be reluctant to answer any question.
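A toy illustration of the loophole (the reward values here are assumed, not taken from the book): with +1 per cleared brick and −5 per lost ball, "clear one brick, then stall forever" beats a policy that clears many bricks but loses the ball several times.

```python
def cumulative_reward(bricks_cleared, balls_lost):
    # Assumed reward scheme: +1 per cleared brick, -5 per ball out of bounds
    return bricks_cleared * 1 - balls_lost * 5

stall = cumulative_reward(bricks_cleared=1, balls_lost=0)        # then bounce forever
aggressive = cumulative_reward(bricks_cleared=10, balls_lost=3)  # clears more, loses balls
print(stall, aggressive)  # 1 -5
```

Under these numbers, the stalling policy dominates, which is exactly the incentive problem the quoted passage describes.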
Therefore appropriate reward engineering is crucial. In certain scenarios a poorly designed reward function might inadvertently incentivize the agent to adopt suboptimal strategies that maximize cumulative rewards without truly achieving the main objectives of the task. | {
"domain": "ai.stackexchange",
"id": 4165,
"tags": "reinforcement-learning, reward-design, atari-games"
} |
Annihilation and Creation operator for bosons properties in second quantization | Question: In second quantization the commutation relation of annihilation and creation operators of bosons is
\begin{equation}
[b,b^\dagger]=bb^\dagger-b^\dagger b=1
\end{equation}
I am wondering what the general commutation relation is:
\begin{equation}
[(b)^n,(b^\dagger)^m]=?
\end{equation}
I am able to find that $[(b)^n,b^\dagger]=n(b)^{n-1}+b^\dagger b^n$ and $[b,(b^\dagger)^m]=m(b^\dagger)^{m-1}+(b^\dagger)^mb$, but am having difficulties with the more general form
Answer: First, consider the expression $[f^n,h]$. The product rule for commutators is
$$
[fg,h]=[f,h]g+f[g,h] \\
$$
Applying this to some powers of f, we get:
$$
[f^2,h] = [f,h]f+f[f,h] \\
[f^3,h] = [f,h]f^2+f[f^2,h] \\
= [f,h]f^2+f[f,h]f+f^2[f,h]
$$
There is a clear pattern: the commutator $[f,h]$ ends up sandwiched between every possible split of the remaining $n-1$ factors of $f$, once each, so by induction
$$
[f^n,h] = \sum_{i=0}^{n-1} f^{i}[f,h]f^{(n-1)-i}
$$
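As a numerical sanity check (an illustration, not part of the original derivation): since $[b,b^\dagger]=1$ commutes with everything, repeated use of the product rule gives the standard relation $[b^n,b^\dagger]=n\,b^{n-1}$, which can be verified with truncated matrix representations of $b$ and $b^\dagger$.

```python
import numpy as np

N = 12  # truncated Fock-space dimension
b = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator: b|k> = sqrt(k)|k-1>
bd = b.conj().T                               # creation operator

def comm(A, B):
    return A @ B - B @ A

n = 3
lhs = comm(np.linalg.matrix_power(b, n), bd)
rhs = n * np.linalg.matrix_power(b, n - 1)
# Truncation corrupts entries involving the highest Fock states,
# so compare only the low-lying block.
k = N - n - 1
assert np.allclose(lhs[:k, :k], rhs[:k, :k])
```

The same trick works for checking any of the mixed-power commutators below, as long as the compared block stays away from the truncation boundary.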
Now, let us upgrade to the case where both operators are raised to some power:
$$
[f^n,h^m] = \sum_{i=0}^{n-1} f^{i}[f,h^m]f^{(n-1)-i} \\
[f^n,h^m] = \sum_{i=0}^{n-1} f^{i}\left(\sum_{j=0}^{m-1} h^{j}[f,h]h^{(m-1)-j}\right)f^{(n-1)-i}
$$
Where in the second line $[f,h^m]$ was expanded with the same rule as $[f^n,h]$, i.e. $[f,h^m] = \sum_{j=0}^{m-1} h^{j}[f,h]h^{(m-1)-j}$ (e.g. $[f,h^2] = [f,h]h+h[f,h]$).
Letting $f:=b$ and $h:=b^\dagger$ and noting that $[b,b^\dagger] = 1$, we obtain:
$$
[b^{n},b^{\dagger\, m}] = \sum_{i=0}^{n-1} \sum_{j=0}^{m-1} b^{i}\, b^{\dagger\,(m-1)}\, b^{(n-1)-i} = m \sum_{i=0}^{n-1} b^{i}\, b^{\dagger\,(m-1)}\, b^{(n-1)-i}
$$ | {
"domain": "physics.stackexchange",
"id": 62704,
"tags": "quantum-mechanics, quantum-field-theory, operators, commutator"
} |
Atoms vs molecule when talking about the avogadro's number? | Question: By far, my understanding is that a molecule is made up of atoms bonded together. For example, a molecule of water ($\ce{H2O}$) has 2 hydrogen atoms and one oxygen atom.
However, when it comes to Avogadro's number, I'm getting confused because it mixes the concept and treats it like it is the same.
Avogadro's number is defined as the number of elementary particles (molecules, atoms, compounds, etc.) per mole of a substance.
If I take 1 mole of water, with the molecules definition for the Avogadro's number, it has $ \mathrm{6.022×10^{23}} $ molecules of water. With each molecule having 3 atoms, it means that 1 mole of water has $\mathrm{18.066×10^{23}} $ atoms.
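That molecule-based count is simple arithmetic, sketched here for concreteness:

```python
N_A = 6.022e23            # Avogadro's number, elementary entities per mole
molecules = 1 * N_A       # molecules in 1 mol of water
atoms = 3 * molecules     # each H2O molecule contains 2 H + 1 O = 3 atoms
print(f"{atoms:.4e}")     # about 1.8066e+24 atoms
```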
If I take 1 mole of water with the atoms definition for Avogadro's number, it has $\mathrm{6.022×10^{23}}$ atoms of water.
What is the right way to think about this?
Answer: $1$ mole of water has $\pu{6.022E23}$ molecules of water, but not $\pu{6.022E23}$ atoms of water, because the expression "atoms of water" makes no sense. You are allowed to state that it contains $\pu{18.066E23}$ atoms. But you are not allowed to speak of "atoms of water": water does not have atoms of its own kind. | {
"domain": "chemistry.stackexchange",
"id": 15205,
"tags": "atoms, molecules, mole"
} |
How can an external force acting on a particle be a Newton 3 pair with a conservative force? | Question: If I particle is moved from a point A to a point B (both of which lie in a conservative force field - like a gravitational field for example), and to avoid confusion lets just say that B has a higher potential than A.
I understand that I am doing positive work on the particle - because I am transferring energy TO the particle. And in this case I understand that the conservative force must be doing negative work on the particle since $$\Delta U=-W=-\Delta T$$ where $\Delta T$ is the change in the kinetic energy of the particle, and if the particle is gaining potential $\implies$ $\Delta U$ is positive so negative work must be done.
But what I don't understand is how the force provided by me moving the particle from A to B forms a Newton 3 pair with the conservative force, since Newton 3 pairs only apply to two particles that are interacting with each other and must be forces of the same type - me moving a particle is not the same as the gravitational force on that particle (since the weight of the particle and the weight of the earth are Newton 3 pairs)
Are there other forces acting on the particle that link together the negative work done by the gravitational field and the positive work done by me that I have completely skipped out on? Or is my question completely wrong? Thanks!
Answer: Your reasoning about the work is nearly correct. You basically need to split up that equation into two equations and specify what work you are talking about.
With $\Delta U=-W$ , the work here is just referring to the work done by the conservative force. If the conservative force does positive work, then the particle is losing potential energy.
With $W=\Delta T$ , the work is the total work done on the particle. In our scenario this is the sum of the work done by us and the work done by the conservative force. This means that the particle can be gaining potential energy but maintain a constant kinetic energy if we supply an amount of work equal to the amount of potential energy gained.
As for the force pairs from Newton's third law, let's say that the conservative force is gravity from Earth. Then there is one force pair between the particle and the Earth (each exerts a gravitational force on the other). The other pair is between you and the particle. Whatever force you apply to the particle, the particle will push back with an equal but opposite force.
It seems like there is some confusion between these force pairs and work. If you are only interested in the work done on the particle, then you only focus on forces acting on that particle. Any forces that the particle exerts on other objects does not need to be considered in thinking about the work done just on the particle itself. | {
"domain": "physics.stackexchange",
"id": 49341,
"tags": "newtonian-mechanics, forces, energy, work, potential-energy"
} |
How to fuse odometry and AR slam using robot_localization? | Question:
I am trying to use robot_localization to fuse wheel odometry and fiducial_slam and I am a little confused by the transforms.
In the current system, we have the wheel odometry publishing odom->base_link, and fiducials publishing map->odom.
When I run the EKF in this system (with publish_tf: false) and look at the /odom/filtered topic in rviz, it looks great, the kalman filter pose seems to track the robot pose in the real world very well.
The problem comes when I turn off the map->odom publishing from fiducials, and turn on publish_tf for robot_localization. This causes the position of the robot to fly off wildly every time we move.
I think this is because the wheel odometry data is published relative to the /odom frame, but there is no way to transform this into the /map frame without the Kalman filter using its own output, and this cyclical flow is causing wild instability.
How should I go about trying to solve this issue? How do you fuse map relative and odom relative pose measurements.
Edit: localization config
frequency: 10
two_d_mode: true
map_frame: map
odom_frame: odom
base_link_frame: base_footprint
world_frame: map
publish_tf: true
# x, y, z,
# roll, pitch, yaw,
# vx, vy, vz,
# vroll, vpitch, vyaw,
# ax, ay, az
odom0: /odom
odom0_config: [true, true, false,
false, false, true,
false, false, false,
false, false, true,
false, false, false]
odom0_differential: false
odom0_relative: true
#imu0: /imu
#imu0_config: [false, false, false,
# false, false, true,
# false, false, false,
# false, false, true,
# false, false, false]
#imu0_differential: true
#imu0_relative: true
pose0: /fiducial_pose
pose0_config: [true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
pose0_rejection_threshold: 3
# x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az
process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.00, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.000, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.000, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.00, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.00, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.00, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.00, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.00, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
initial_estimate_covariance: [1e-2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-4, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
Originally posted by rohbotics on ROS Answers with karma: 13 on 2019-03-24
Post score: 1
Answer:
Please include sample messages for all sensor inputs, as well as the config for your odom frame EKF. Without those, I can't comment fully.
Without that information, I have other questions and comments.
When you turn on publish_tf for the second EKF instance, are you making sure to disable tf publication in whatever node is doing the fiducials? You can't have two nodes trying to publish the same transform.
You are correct in your statement: first, you are fusing two absolute sources of pose data, which is not advised, unless both are in agreement (think of, e.g., two GPS sensors), and moreover, you are using the EKF's own transform to transform data that it is trying to fuse. That's not really a huge deal, since the EKF uses its own state internally to do some transforms, but it will make your life more complicated. Any reason not to just fuse the velocity data from the wheel odometry? That's a common practice.
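As a sketch of that last suggestion (not a tested config), the odom0 input from the question could be switched to fuse velocities instead of absolute pose, using the same x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az flag order shown in the question's config:

```
odom0: /odom
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]
```

Here vx, vy, and vyaw are fused from the wheel odometry, while the pose0 fiducial input would remain the absolute (map-frame) reference.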
Originally posted by Tom Moore with karma: 13689 on 2019-05-15
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32735,
"tags": "navigation, ros-kinetic, robot-localization"
} |
Why work $W$ and heat $Q$ are different concepts? | Question: I understand heat as the flow of energy (through radiation, convection or conduction) from one body to another. When I think about conduction (for example) I visualize particles that jiggle a lot bouncing against particles that jiggle less and transferring heat to them progressively.
Thus, with these collisions and changes in momentum $\Delta p$ over a period of time $\Delta t$, we have a force $F$ and a displacement $d$ of particles. As work is defined as $W = F \times d$, why can't we consider heat the same as work at the atomic level?
If this kind of work is not what we define as heat, why don't we take this kind of energy into consideration when it comes to use the formula of internal energy $U = W + Q$ ?
Answer: You are right. Microscopically, work and heat are just about the same. Both involve molecular collisions that transfer energy from one object to the other.
Work involves a kind of "coherent" transfer in a manner of speaking, in which the collisions are predominantly, and to an extreme degree, in one direction. Also, typically the force is applied to one location on the object. And importantly, the boundary of the system is displaced. (E.g. translation or deformation)
On the other hand, transfer of energy by heat is "incoherent", many directions, and typically in all directions. And importantly, the boundary of the system is not displaced.
Finally, everyday phenomena fall into one or the other category, and they differ in their macroscopic behavior. Loosely speaking, when heat is transferred the temperature rises. When work is done, the boundary of the system changes. Of course, adding heat generally also results in the boundary moving [say, expanding], and work generally results in a temperature change [Joule's experiment]. I'm trying to motivate the macroscopic separation between work and heat without a lengthy discussion.
To the engineers who first worked all of this out, the very existence of atoms was unknown. To them, the separation between heat and work was very clear. They had little reason to view them as manifestations of the same microscopic process. They thought that heat was a physical fluid. In any event, their remarkable achievements have stood the test of time. | {
"domain": "physics.stackexchange",
"id": 58407,
"tags": "thermodynamics, work"
} |
Nodejs Module Pattern And Caching | Question: The question is about nodejs module pattern code structure.
My goal is to initialise the module once and export it using module.exports,
so that it's available in other files.
I took Twilio for explaining.
Here I want to initialize Twilio using credentials and export it as twilioClient, which I can reuse without reinitialising.
But I also need a dynamic Twilio client, ie, I give the credentials as params and create a client using that param.
That was my requirement and I came across this structure,
const twilio = require('twilio');
function init (config) {
const twilioSID = config.TWILIO_ACCOUNT_SID;
const twilioAuth = config.TWILIO_AUTH_TOKEN;
const client = twilio(twilioSID, twilioAuth);
return client;
}
const twilioSID = process.env.TWILIO_ACCOUNT_SID;
const twilioAuth = process.env.TWILIO_AUTH_TOKEN;
const client = twilio(twilioSID, twilioAuth);
module.exports = {
twilioClient: client,
twilioClientFn: init
};
Is this structure okay? Is there any room for improvement?
Thank You
Answer: Keep it D.R.Y.
Avoid repeating code. The function init has its task repeated below it. There is no need to repeat that code, use the function.
Avoid single use variables. The function is storing values in variables for no reason. Use the references directly, return the result of the call twilio rather than store it before returning it.
Aim to keep the code short; more code is more to read, understand and maintain. In this case, less code also means fewer CPU cycles needed to run it.
Rewrite
Does the same in less than half the code.
const twilio = require("twilio");
const twilioClientFn = env => twilio(env.TWILIO_ACCOUNT_SID, env.TWILIO_AUTH_TOKEN);
module.exports = {
twilioClient: twilioClientFn(process.env),
twilioClientFn,
}; | {
"domain": "codereview.stackexchange",
"id": 43959,
"tags": "javascript, design-patterns, node.js"
} |
Optimizing Java Anagram checker (compare 2 strings) | Question: An anagram is like a mix-up of the letters in a string:
pots is an anagram of stop
Wilma is an anagram of ilWma
I am going through the book Cracking the Coding Interview and in the basic string manipulation there's the problem:
write a method to check if two strings are anagrams of each other.
My method uses StringBuffer instead of String because you can .deleteCharAt(index) with StringBuffer/StringBuilder.
public boolean areAnagrams(StringBuffer s1b, StringBuffer s2b) {
for (int i=0; i<s1b.length(); ++i) {
for (int j=0; j<s2b.length(); ++j) {
if (s1b.charAt(i) == s2b.charAt(j)) {
s1b.deleteCharAt(i);
s2b.deleteCharAt(j);
i=0;
j=0;
}
}
}
if (s1b.equals(s2b)) {
return true;
} else
return false;
}
I iterate over every character in s1b and if I find a matching char in s2b I delete them both from each string and restart the loop (set i and j to zero) because the length of the StringBuffer objects changes when you .deleteCharAt(index).
I have two questions:
Should I use StringBuilder over StringBuffer (in Java)?
How can I make this faster?
In regards to fasterness:
This method is good in that it doesn't require any additional space, but it sorta destroys the data as you're working on it. Are there any alternatives that I have overlooked that could potentially preserve the strings but still see if they are anagrams without using too much external storage (i.e. copies of the strings are not allowed -- as a challenge)?
And, if you can use any sort of storage space in addition to this, can one lower the time complexity to \$O(n)\$ (technically \$O(2n)\$) instead of \$O(n^2)\$?
Also, the above code might not compile because I just wrote it from scratch in here; sorry if it's bugg'd.
Answer: Start with a simple, easy to understand version. Try to reuse API functions.
import java.util.Arrays;
...
public boolean areAnagrams(String s1, String s2) {
char[] ch1 = s1.toCharArray();
char[] ch2 = s2.toCharArray();
Arrays.sort(ch1);
Arrays.sort(ch2);
return Arrays.equals(ch1,ch2);
}
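(For the \$O(n)\$ follow-up in the question: with some extra storage, one counting pass per string replaces the sort entirely. A sketch of the idea, shown in Python for brevity since the idea is language-agnostic; in Java the equivalent is an int[] count array indexed by char.)

```python
from collections import Counter

def are_anagrams(s1: str, s2: str) -> bool:
    """O(n) check: count each character once per string and compare the counts."""
    if len(s1) != len(s2):
        return False  # different lengths can never be anagrams
    return Counter(s1) == Counter(s2)

print(are_anagrams("pots", "stop"))    # True
print(are_anagrams("Wilma", "ilWma"))  # True
print(are_anagrams("pots", "pott"))    # False
```

Note that, like the sorting version, this treats upper- and lower-case letters as distinct characters.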
Of course sorting is not the fastest way, but in 99% of cases it is "good enough", and you can see what's going on. I would not even consider juggling with things like deleting chars in a StringBuilder if there were no serious performance problem. | {
"domain": "codereview.stackexchange",
"id": 18106,
"tags": "java, strings, interview-questions"
} |
How would I search within an angular distance of an object corresponding to a distance in parsecs? | Question: I'm interested in the learning about the local densities around galaxies. I've found myself a bit confused on how to relate angular distances (arcseconds) with physical distances (parsecs) when thinking about distances between two distant galaxies. For example, let's say the average distance between galaxies within clusters is a few megaparsecs. If I had ra, dec coordinates of a particular galaxy and could hypothetically query an all-sky catalog, let's say I'd want to search within a few megaparsecs to get a feel for its density field. But generally when querying catalogs you specify a search radius in some arcseconds. How would these relate here?
Answer: Ken G's answer is essentially correct, but there is one important thing to keep in mind: The distance $r$ between the galaxies that is interesting for you is the distance they had at the time they emitted the light you see (you don't care about what has happened to them since that time and how far apart they are today; you care about the physical properties then). But the angle you observe them to span is not the angle they spanned when they emitted the light (since they were closer to you in the past, so the light they emitted in your direction at that time would "miss" you by the time it reached that place).
This means that, in your calculation, the distance from you to the galaxies should not be the "physical" distance $d$ (i.e. the actual distance right now to the galaxies, which also corresponds to the comoving distance), but the so-called angular diameter distance, $d_A$. These are related (in a flat universe) by
$$
d_A = \frac{d}{1+z},
$$
where $z$ is the redshift of the galaxies (I assume they have almost the same redshift; if they are too different, then the distance from each other is not simply given by the projected distance). Note that in a non-flat universe, the formula is slightly more complicated; see the link above.
The comoving distance, in turn, is obtained by integration:
$$
d(z) = \frac{c}{H_0} \int_0^z \frac{dz'}{\left( \Omega_\mathrm{M}(1+z')^3 + \Omega_\Lambda \right)^{1/2}},
$$
where $c$ is the speed of light, and $\Omega_\mathrm{M}$, $\Omega_\Lambda$, and $H_0$ are the present matter and dark energy density parameters and Hubble constant, respectively. Here, I've again assumed a flat universe, and neglected radiation (since when that was important, there were no galaxies anyway).
Obviously, this effect is not big for small redshifts, but $d_A$ has the interesting property that it increases with redshift only out to a certain point (roughly $z=1.6$), after which it decreases, because the galaxies were closer to us in the past.
So the physical distance $r$ between two galaxies spanning an angle $\theta$ (in radians) is
$$
r = \theta \, d_A
$$
or
$$
\boxed{
\frac{r}{\mathrm{kpc}} = \left( \frac{\theta}{\mathrm{arcsec}} \right) \left( \frac{d}{\mathrm{Mpc}} \right) \frac{1}{1+z}
\, \frac{10^{3} \times \pi}{180\times3600},
}
$$
where the last factor accounts for converting Mpc to kpc and arcseconds to radians.
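As a cross-check without astropy, the same arcsec-to-kpc conversion can be done with only the Python standard library by integrating the comoving distance numerically. This is a rough sketch; the flat Planck15-like parameters ($H_0 = 67.74\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_\mathrm{M} = 0.3089$) are my assumption, not stated explicitly in this answer:

```python
import math

# Assumed flat Planck15-like parameters (not given explicitly in the answer)
H0 = 67.74            # Hubble constant, km/s/Mpc
OM, OL = 0.3089, 0.6911
C = 299792.458        # speed of light, km/s

def comoving_distance_mpc(z, steps=100_000):
    """Trapezoidal integration of (c/H0) * dz' / E(z') for a flat universe."""
    E = lambda zp: math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    dz = z / steps
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    s += sum(1.0 / E(i * dz) for i in range(1, steps))
    return C / H0 * s * dz

z, theta_arcsec = 2.2, 12.3
d = comoving_distance_mpc(z)          # comoving distance, Mpc
d_A = d / (1.0 + z)                   # angular diameter distance, Mpc
r_kpc = theta_arcsec * (math.pi / (180 * 3600)) * d_A * 1e3
print(f"Distance between galaxies: {r_kpc:.1f} kpc")  # within ~1% of the astropy result
```

The astropy snippet below does the same thing with a single call and should agree closely.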
In Python, arbitrarily for $z=2.2$ and $\theta=12.3''$, it's as easy as
from astropy.cosmology import Planck15
from astropy import units as u
z = 2.2 # redshift of galaxies
theta = 12.3 * u.arcsec # angle
r_ang = Planck15.kpc_proper_per_arcmin(z) # phys. dist. per angle
r = r_ang * theta # physical distance
print('Distance between galaxies: ', r.to(u.kpc))
which will print Distance between galaxies: 104.22540165050975 kpc. | {
"domain": "astronomy.stackexchange",
"id": 3621,
"tags": "distances, parsec"
} |
Does a Berry phase operator exist? | Question: The closed-path Berry phase can have measurable effects and, if I am understanding correctly, is a measurable quantity in and of itself. If that is so, is there a Hermitian operator with Berry phases as its eigenvalues? What would it be, and what would its eigenstates be?
Answer: There is no single operator corresponding to the Berry phase because the Berry phase is not a property of a single quantum state, but of a full trajectory through state space. "Observables are Hermitian operators" does not mean "anything you can measure is a Hermitian operator", it means measurable properties of states are Hermitian operators acting on those states.
It is easy to construct plenty of other things you can "observe" that are not actually observables/Hermitian operators. For example, if you do repeated measurements on an identically prepared state, you can compute (an approximation of) the standard deviation of the observable being measured, but the standard deviation itself is not a linear operator on states, either.
"domain": "physics.stackexchange",
"id": 77525,
"tags": "operators, berry-pancharatnam-phase"
} |
MoveIt! commander with youbot | Question:
Hello MoveIt! Commander enthusiasts. We are having problems getting our youbot to move properly in simulation with gazebo. Our architecture is as follows:
MoveIt Commander ==> python_script ==> gazebo (headless) ==> rviz
We use Moveit Commander to select a named pose stored in the SRDF and then compute a plan. We then publish that plan to the youbot topic called "/arm_1/arm_controller/command". This topic works great with sending messages using the rostopic pub command with all joint positions set to valid values and velocity, acceleration, and effort set to zero. We can publish a JointTrajectory (JT) message with a single JointTrajectoryPoint (JTP) at 10 Hz and the robot moves just fine in gazebo.
When publishing from our Python script, the velocity, acceleration, and efforts are all populated. We publish a valid JT with more than one JTP computed by the MoveIt Commander plan() function at the same rate, and the robot does not move 1) unless we publish at least 3 times, and 2) as the robot moves, it appears to jerk to and from the starting position. Presumably this is because I have published the JT command three times and the LIFO design of ROS subscriber/publisher queuing causes the robot to get confused until the final JT command is received by gazebo.
Our python code is as follows. I apologize ahead of time for the look of our code, but we are just prototyping as this stage.
#!/usr/bin/env python
import sys
import copy
import rospy
import moveit_commander
import moveit_msgs.msg
import geometry_msgs.msg
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint
from std_msgs.msg import String
def get_random_waypoints(move_group=[], n=2):
''' Get a series of waypoints with the current pose first'''
geom_points = []
pose = move_group.get_current_pose().pose
geom_points.append(copy.deepcopy(pose))
try:
for ii in range(0,n):
pose = move_group.get_random_pose().pose
geom_points.append(copy.deepcopy(pose))
return copy.deepcopy(geom_points)
except MoveItCommanderException:
print "get_random_waypoints failed"
return False
def make_trajectory_msg(joint_trajectory_plan=[], seq=0, secs=0, nsecs=0, dt=2, frame_id='/base_link'):
jt = JointTrajectory()
jt.header.seq = seq
jt.header.stamp.secs = 0 #secs
jt.header.stamp.nsecs = 0 #nsecs
jt.header.frame_id = frame_id
jt.joint_names = joint_trajectory_plan.joint_trajectory.joint_names
njtp = len(joint_trajectory_plan.joint_trajectory.points)
for ii in range(0, njtp):
jtp = JointTrajectoryPoint()
jtp = copy.deepcopy(joint_trajectory_plan.joint_trajectory.points[ii])
jtp.time_from_start.secs = secs + dt*(ii+1)
jtp.time_from_start.nsecs = nsecs
jt.points.append(jtp)
return jt
def move_robot():
print "============ Starting tutorial setup"
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('my_moveit_client',
anonymous=False)
## Instantiate a RobotCommander object. This object is an interface to
## the robot as a whole.
robot = moveit_commander.RobotCommander()
## Instantiate a PlanningSceneInterface object. This object is an interface
## to the world surrounding the robot.
scene = moveit_commander.PlanningSceneInterface()
## Instantiate a MoveGroupCommander object. This object is an interface
## to one group of joints. In this case the group is the joints in the left
## arm. This interface can be used to plan and execute motions on the left
## arm.
print "Creating move group commander object"
group = moveit_commander.MoveGroupCommander("manipulator")
## We create this DisplayTrajectory publisher which is used below to publish
## trajectories for RVIZ to visualize.
display_trajectory_publisher = rospy.Publisher(
'/move_group/display_planned_path',
moveit_msgs.msg.DisplayTrajectory)
## Create a publisher to gazebo arm controller
gazebo_command_publisher = rospy.Publisher(
'/arm_1/arm_controller/command',
JointTrajectory,
queue_size=10 )
# setup the loop rate for the node
r = rospy.Rate(10) # 10hz
## Wait for RVIZ to initialize. This sleep is ONLY to allow Rviz to come up.
print "============ Waiting for RVIZ..."
#rospy.sleep(10)
print "============ Starting tutorial "
## Getting Basic Information
## ^^^^^^^^^^^^^^^^^^^^^^^^^
##
## We can get the name of the reference frame for this robot
print "============ Reference frame: %s" % group.get_planning_frame()
## We can also print the name of the end-effector link for this group
print "============ Reference frame: %s" % group.get_end_effector_link()
## We can get a list of all the groups in the robot
print "============ Robot Groups:"
print robot.get_group_names()
## Sometimes for debugging it is useful to print the entire state of the
## robot.
print "============ Printing robot state"
print robot.get_current_state()
print "============"
print "+++++++++++++++++++ selecting waypoints/pose"
end_effector_link = "gripper_eef"
group.set_named_target("gun")
plan = group.plan()
## We want the cartesian path to be interpolated at a resolution of 1 cm
## which is why we will specify 0.01 as the eef_step in cartesian
## translation. We will specify the jump threshold as 0.0, effectively
## disabling it.
jt = make_trajectory_msg(joint_trajectory_plan=plan, dt=0.2, frame_id='base_link' )
print "========= Here be the joint trajectory\n"
print "JTP length is " + str(len(jt.points)) + "\n"
print jt
print "My group object is\n"
print group
print "\n"
sent = 3
while not rospy.is_shutdown():
if sent > 0:
sys.stdout.write('.')
sys.stdout.flush()
gazebo_command_publisher.publish(jt)
sent -= 1
else:
sys.stdout.write('x')
sys.stdout.flush()
r.sleep()
## When finished shut down moveit_commander.
moveit_commander.roscpp_shutdown()
if __name__=='__main__':
try:
move_robot()
except rospy.ROSInterruptException:
pass
We are obviously doing something wrong, but cannot explain it. Please provide us guidance on how to make the robot move smoothly with a single JT message instead of three. Any other helpful hints would be great. Thanks!
Originally posted by Rick C NIST on ROS Answers with karma: 28 on 2014-06-25
Post score: 0
Answer:
"You have to publish 3 times":
Increase the delay in your MoveIt kinematics parameters -> kinematics_solver_timeout
"As the robot moves, it appears to jerk to and from the starting position":
In the Rviz parameters,
add RobotModel
MotionPlanning -> Scene Robot : uncheck "Show Robot Visual"
MotionPlanning -> Planning Request : uncheck "Query Start State"
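For reference, that timeout usually lives in the move group's kinematics.yaml. A hedged sketch — the solver plugin and values are illustrative, not taken from this setup; kinematics_solver_timeout is the parameter the answer refers to:

```yaml
# Hypothetical kinematics.yaml fragment for the "manipulator" group
manipulator:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05   # seconds; try increasing this value
```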
Originally posted by est_CEAR with karma: 170 on 2014-11-24
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18388,
"tags": "ros, gazebo, moveit, youbot, moveit-commander"
} |
Is a Dirac Engine in the realms of the conceivable? | Question: I'm quite happy to delete this question if it is felt to be rubbish or if someone can say "nothing original".
I have an idea about how humans might explore the universe and obtain ridiculously huge amounts of power from (literally) nothing.
Isn't one of Dirac's theories that vacuum is in fact an ocean of feverish activity (the Dirac sea), where matter and antimatter are constantly and spontaneously coming into being and cancelling one another out?
So... what about the idea of harnessing this and using the resultant tanks of matter and antimatter to power a spacecraft? ... applying a constant acceleration of 9.81 m/s²...?
If you could do that, I don't know how much time it would take (as perceived by those on board) to reach several of our nearest star systems... but it'd be pretty quick because you'd quickly get to 99% or closer to the speed of light... obviously the idea would be to accelerate to the mid-point and then decelerate from thereon.
Answer: That's a twist on the ol' vacuum thruster idea that I hadn't heard before — to try to make an antimatter rocket using antimatter captured from quantum vacuum fluctuations. But given our current understanding of physics, the answer is no, because it's not possible to get energy out of the vacuum. This other question and its answers are good reading on that front.
Of course, this is edging into the realm of physics where we can't really say that we understand what's going on. We define the vacuum to be the lowest energy state, and we have some evidence that it's "real", but maybe there are some subtleties about its interaction with gravity that we don't get, for example. So if you're writing sci-fi and want an antimatter vacuum rocket, go for it.
[Incidentally, I'm no historian, but I don't know why Dirac would be particularly associated with that idea; certainly Heisenberg is the name you hear most in association with vacuum fluctuations because of his uncertainty principle.] | {
"domain": "physics.stackexchange",
"id": 46879,
"tags": "antimatter, dirac-equation"
} |
Advanced and Detailed Minesweeper Probabilities | Question: In an earlier question, I showed my code for calculating the mine probability for each and every field in a Minesweeper board. But it doesn't stop there. In that very same Minesweeper Analyze project, there is also an algorithm for calculating the probability of each and every number for each and every field.
Motivation
In the multiplayer version of Minesweeper called Minesweeper Flags, you have to be careful not to reveal too much information to your opponent. This is often what separates a good player from an inexperienced one. If there is an 80% chance of being a mine, but a 20% chance of revealing an open field (which could potentially reveal 10 obvious mines to your opponent), would you take the risk? Calculation of Expected Value (which is an entirely different topic in itself) tells us that it is probably a bad idea.
Description
Consider this 8x7 Minesweeper board with a total of 6 mines:
When grouping the fields into FieldGroups (see the previous question for a definition of a field group), we find that there are:
3 fields connected only to the 1 (b)
4 fields connected to both the 1 and the 3 (c)
3 fields connected only to the 3 (d)
44 fields connected to neither (a)
When performing a mine-probability analyze on this board, we find that there are two Solution groups:
4c = 1, 44a = 3, 3b = 0, 3d = 2, 158928 combinations (4 fields in group c should have one mine, 44 fields in group a should have 3 mines, etc.)
4c = 0, 44a = 2, 3b = 1, 3d = 3, 2838 combinations
Now, what is the probability for the number '2' directly above the '1'? Let's call the field above the 1 x. It would be possible to calculate that by adding a FieldRule to the existing board for the neighbors of x (3a + 2b + 1c = 2), and a rule for that field not being a mine (x = 0). But if we were to do this for each and every field on the board, it would take quite a lot of time. Not to mention that we would have to do it for each and every number (0-8).
So, we can group the fields into probability groups according to which field groups they neighbor:
Field Groups Probability Groups
aaaaaaaa abbbbbba
aaaaaaaa bcdefghb
aabccdaa bijklmnb
aab13daa bop13qrb
aabccdaa bijklmnb
aaaaaaaa bcdefghb
aaaaaaaa abbbbbba
As per the above 'diagram', the probability group k is where our field x belongs; it is neighboring the Field Groups 3*a, 2*b, 1*c. This information can be stored in the GroupValues structure that was introduced in the previous question; it is essentially a Map<FieldGroup, Integer>. To improve lookup speed, its hashCode value is cached once it has been calculated for the first time.
We can see that there are actually two k's on the probability groups map, so we gain some performance by grouping them together, as they will have the same detailed probabilities. There are also 22 b's on the map, so there's a lot of performance to gain there by having them share probabilities. Note that every field in probability group b neighbors 5 fields of FieldGroup a.
Calculating the detailed probabilities for a Probability Group
Let's focus on this solution for now: 4c = 1, 44a = 3, 3b = 0, 3d = 2, 158928 combinations
We know that the x field (in probability group k) has these neighbors: 3*a, 2*b, 1*c.
First we deal with 1c (because that's what the Eclipse debugger chose). Either that specific neighbor has a mine, or it does not. Let's say that it does not have a mine. Then there are 2 combinations: of the four c fields, x itself and its c neighbor are mine-free, which leaves two c fields, one of which must have the mine.
Secondly, 3a. The a group has 44 fields in total, and in this solution it should have 3 mines. Our field is a neighbor to 3 of those fields. Let's say that 1 of those neighbors is a mine; then there are (according to the hypergeometric distribution) \$\binom{3}{1}*\binom{44 - 3}{3 - 1} = 2460\$ combinations.
Then, the neighbors 2b. We know from our solution though that there are no mines in b, so that's an easy one: One combination.
We're not neighbors with any d's, so there are 3 combinations for them (3 fields, 2 mines).
So, for no mines in c, one mine in a and no mines in b, there is a total of one mine among the neighbors, and \$2 * 2460 * 1 * 3 = 14760\$ combinations. As there is currently no found mine around the x field, we add 14760 to the combinations for our field x to be a 1.
This is done for all possible variations of how many mines are in each group. Once all the neighbor groups and solutions have been processed, the result is divided by the total number of combinations for the entire map to get the actual probabilities.
In the end, we get this double[] for the number probabilities for our field x:
[0.4004549781783564, 0.2998343285980985, 0.05172285894440117, 0.0023552538852416455, 1.854530618300508E-5, 0.0, 0.0, 0.0, 0.0]
That is, 40% risk of being open field, 29.9% chance for a 1, 5.1% for a 2, etc...
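These numbers can be reproduced by brute-force enumeration over the two solution groups. A sketch in Python (the project itself is Java; the group sizes, mine counts and neighbor counts below are taken from the worked example above, and math.comb stands in for the project's Combinatorics helpers):

```python
from math import comb
from itertools import product

# Groups around field x: group -> (fields excluding x, of which neighbors of x).
# x itself is in group c and is conditioned on NOT being a mine, so c shrinks to 3.
groups = {'a': (44, 3), 'b': (3, 2), 'c': (3, 1), 'd': (3, 0)}
# The two solution groups from the mine-probability analyze:
solutions = [{'a': 3, 'b': 0, 'c': 1, 'd': 2},   # 158928 combinations
             {'a': 2, 'b': 1, 'c': 0, 'd': 3}]   # 2838 combinations

number_combos = [0.0] * 9
total = 0.0
for sol in solutions:
    # Combinations of this solution over the whole board (all 4 c fields, x free):
    total += comb(44, sol['a']) * comb(3, sol['b']) * comb(4, sol['c']) * comb(3, sol['d'])
    # Enumerate how many of x's neighbors in each group carry a mine:
    per_group = [range(min(n, sol[g]) + 1) for g, (_, n) in groups.items()]
    for js in product(*per_group):
        ways = 1
        for (g, (size, n)), j in zip(groups.items(), js):
            # j mines among x's neighbors, the rest among the non-neighbors
            ways *= comb(n, j) * comb(size - n, sol[g] - j)
        number_combos[sum(js)] += ways

probabilities = [n / total for n in number_combos]
print(total)              # 161766.0
print(probabilities[:3])  # [0.4004..., 0.2998..., 0.0517...]
```

The printed vector matches the double[] above to floating-point precision.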
Speed reflection
There's one iteration over probability groups, one for solutions, and one for the neighboring FieldGroups, and then one recursive loop for a specific assignment. So it's something like \$O(probabilityGroups * solutions * K^{neighboringGroups})\$ Additionally, there's also some calculation of the actual combinations involved, using the binomial coefficient and hypergeometric distribution.
This causes huge performance issues when there are many solutions, many field groups, and/or many probability groups - such as in The Super board of death, causing such boards to be left practically unsolved as it would take too much time to solve them. Luckily, such situations rarely happen. When a bot is using this algorithm to play, it simplifies the analyze required when taking mines. And on other occasions where this algorithm is used, it is mostly used to analyze games where the players are somewhat smart, so the game rarely becomes too complex. And besides... this algorithm is faster than all other algorithms/approaches I know of.
Typical values:
Field Groups tend to be between 10 - 50, in some rare cases up to 100.
Solution Groups tend to be between 2 - 50, in some games where two bad players are playing each other, it can easily go up to 1500 (and beyond).
Probability Groups tend to be around 50, in some cases up to 100.
This algorithm times out when these values get too big.
As games are played on a 16x16 board (at the moment at least, might support bigger in the future), the absolute maximum for Field Groups and Probability Groups is 256. For Solution Groups, there is no known maximum as games with a large amount of solution groups take too long to analyze even with 'simple' analyze (mine probabilities only).
Class Summary (7 files)
DetailAnalyze: Entry-point, containing one static method to perform the analyze
DetailedResults: interface for accessing the results
DetailedResultsImpl: Implementation of the above interface
FieldProxy: Container for the detailed probabilities for a single field
NeighborFind: Interface to perform checking for neighbors and determining the current known mines around a field
ProbabilityKnowledge: The public interface of the FieldProxy class
ProxyProvider: Interface used while creating the results to access the data of other fields
Dependencies
AnalyzeResult, Combinatorics, FieldGroup, GroupValues, RuntimeTimeoutException, Solution from the other parts of my Minesweeper Analyze project
Code
The code is using Java 6, and can be found in my Minesweeper Analyze github repository
First of all, the modified version of GroupValues:
public class GroupValues<T> extends HashMap<FieldGroup<T>, Integer> {
private static final long serialVersionUID = -107328884258597555L;
private int bufferedHash = 0;
public GroupValues(GroupValues<T> values) {
super(values);
}
public GroupValues() {
super();
}
@Override
public int hashCode() {
if (bufferedHash != 0) {
return this.bufferedHash;
}
int result = super.hashCode();
this.bufferedHash = result;
return result;
}
public int calculateHash() {
this.bufferedHash = 0;
return this.hashCode();
}
}
Then, the rest of the code:
DetailAnalyze.java: (62 lines, 2139 bytes)
/**
* Creator of {@link DetailedResults} given an {@link AnalyzeResult} and a {@link NeighborFind} strategy
*
* @author Simon Forsberg
*/
public class DetailAnalyze {
public static <T> DetailedResults<T> solveDetailed(AnalyzeResult<T> analyze, NeighborFind<T> neighborStrategy) {
// Initialize FieldProxies
final Map<T, FieldProxy<T>> proxies = new HashMap<T, FieldProxy<T>>();
for (FieldGroup<T> group : analyze.getGroups()) {
for (T field : group) {
FieldProxy<T> proxy = new FieldProxy<T>(group, field);
proxies.put(field, proxy);
}
}
// Setup proxy provider
ProxyProvider<T> provider = new ProxyProvider<T>() {
@Override
public FieldProxy<T> getProxyFor(T field) {
return proxies.get(field);
}
};
// Setup neighbors for proxies
for (FieldProxy<T> fieldProxy : proxies.values()) {
fieldProxy.fixNeighbors(neighborStrategy, provider);
}
double totalCombinations = analyze.getTotal();
Map<GroupValues<T>, FieldProxy<T>> bufferedValues = new HashMap<GroupValues<T>, FieldProxy<T>>();
for (FieldProxy<T> proxy : proxies.values()) {
// Check if it is possible to re-use a previous value
FieldProxy<T> bufferedValue = bufferedValues.get(proxy.getNeighbors());
if (bufferedValue != null && bufferedValue.getFieldGroup() == proxy.getFieldGroup()) {
proxy.copyFromOther(bufferedValue, totalCombinations);
continue;
}
// Setup the probabilities for this field proxy
for (Solution<T> solution : analyze.getSolutionIteration()) {
proxy.addSolution(solution);
}
proxy.finalCalculation(totalCombinations);
bufferedValues.put(proxy.getNeighbors(), proxy);
}
int proxyCount = bufferedValues.size();
return new DetailedResultsImpl<T>(analyze, proxies, proxyCount);
}
}
DetailedResults.java: (46 lines, 1162 bytes)
/**
* Interface for retrieving more detailed probabilities, for example 'What is the probability for a 4 on field x?'
*
* @author Simon Forsberg
*
* @param <T> The field type
*/
public interface DetailedResults<T> {
Collection<ProbabilityKnowledge<T>> getProxies();
/**
* Get the number of unique proxies that was required for the calculation. As some can be re-used, this will always be less than or equal to <code>getProxyMap().size()</code>
*
* @return The number of unique proxies
*/
int getProxyCount();
/**
* Get a specific proxy for a field
*
* @param field The field to get the proxy for
* @return The {@link ProbabilityKnowledge} for that field
*/
ProbabilityKnowledge<T> getProxyFor(T field);
/**
* Get the underlying analyze that these detailed results was based on
*
* @return {@link AnalyzeResult} object that is the source of this analyze
*/
AnalyzeResult<T> getAnalyze();
/**
* @return The map of all probability datas
*/
Map<T, ProbabilityKnowledge<T>> getProxyMap();
}
DetailedResultsImpl.java: (46 lines, 1211 bytes)
public class DetailedResultsImpl<T> implements DetailedResults<T> {
private final AnalyzeResult<T> analyze;
private final Map<T, ProbabilityKnowledge<T>> proxies;
private final int proxyCount;
public DetailedResultsImpl(AnalyzeResult<T> analyze, Map<T, FieldProxy<T>> proxies, int proxyCount) {
this.analyze = analyze;
this.proxies = Collections.unmodifiableMap(new HashMap<T, ProbabilityKnowledge<T>>(proxies));
this.proxyCount = proxyCount;
}
@Override
public Collection<ProbabilityKnowledge<T>> getProxies() {
return Collections.unmodifiableCollection(proxies.values());
}
@Override
public int getProxyCount() {
return proxyCount;
}
@Override
public ProbabilityKnowledge<T> getProxyFor(T field) {
return proxies.get(field);
}
@Override
public AnalyzeResult<T> getAnalyze() {
return analyze;
}
@Override
public Map<T, ProbabilityKnowledge<T>> getProxyMap() {
return Collections.unmodifiableMap(proxies);
}
}
FieldProxy.java: (182 lines, 5711 bytes)
public class FieldProxy<T> implements ProbabilityKnowledge<T> {
private static int minK(int N, int K, int n) {
// If all fields in group are neighbors to this field then all mines must be neighbors to this field as well
return (N == K) ? n : 0;
}
private double[] detailedCombinations;
private double[] detailedProbabilities;
private final T field;
private int found;
private final FieldGroup<T> group;
private final GroupValues<T> neighbors;
public FieldProxy(FieldGroup<T> group, T field) {
this.field = field;
this.neighbors = new GroupValues<T>();
this.group = group;
this.found = 0;
}
void addSolution(Solution<T> solution) {
recursiveRemove(solution.copyWithoutNCRData(), 1, 0);
}
/**
* This field has the same values as another field, copy the values.
*
* @param copyFrom {@link FieldProxy} to copy from
* @param analyzeTotal Total number of combinations
*/
void copyFromOther(FieldProxy<T> copyFrom, double analyzeTotal) {
for (int i = 0; i < this.detailedCombinations.length - this.found; i++) {
if (copyFrom.detailedCombinations.length <= i + copyFrom.found) {
break;
}
this.detailedCombinations[i + this.found] = copyFrom.detailedCombinations[i + copyFrom.found];
}
this.finalCalculation(analyzeTotal);
}
/**
* Calculate final probabilities from the combinations information
*
* @param analyzeTotal Total number of combinations
*/
void finalCalculation(double analyzeTotal) {
this.detailedProbabilities = new double[this.detailedCombinations.length];
for (int i = 0; i < this.detailedProbabilities.length; i++) {
this.detailedProbabilities[i] = this.detailedCombinations[i] / analyzeTotal;
}
}
/**
* Setup the neighbors for this field
*
* @param neighborStrategy {@link NeighborFind} strategy
* @param proxyProvider Interface to get the related proxies
*/
void fixNeighbors(NeighborFind<T> neighborStrategy, ProxyProvider<T> proxyProvider) {
Collection<T> realNeighbors = neighborStrategy.getNeighborsFor(field);
this.detailedCombinations = new double[realNeighbors.size() + 1];
for (T neighbor : realNeighbors) {
if (neighborStrategy.isFoundAndisMine(neighbor)) {
this.found++;
continue; // A found mine is not, and should not be, in a fieldproxy
}
FieldProxy<T> proxy = proxyProvider.getProxyFor(neighbor);
if (proxy == null) {
continue;
}
FieldGroup<T> neighborGroup = proxy.group;
if (neighborGroup != null) {
// Ignore zero-probability neighborGroups
if (neighborGroup.getProbability() == 0) {
continue;
}
// Increase the number of neighbors
Integer currentNeighborAmount = neighbors.get(neighborGroup);
if (currentNeighborAmount == null) {
neighbors.put(neighborGroup, 1);
}
else neighbors.put(neighborGroup, currentNeighborAmount + 1);
}
}
}
@Override
public T getField() {
return this.field;
}
@Override
public FieldGroup<T> getFieldGroup() {
return this.group;
}
@Override
public int getFound() {
return this.found;
}
@Override
public double getMineProbability() {
return this.group.getProbability();
}
@Override
public GroupValues<T> getNeighbors() {
return this.neighbors;
}
@Override
public double[] getProbabilities() {
return this.detailedProbabilities;
}
private void recursiveRemove(Solution<T> solution, double combinations, int mines) {
if (Thread.interrupted()) {
throw new RuntimeTimeoutException();
}
// Check if there are more field groups with values
GroupValues<T> remaining = solution.getSetGroupValues();
if (remaining.isEmpty()) {
this.detailedCombinations[mines + this.found] += combinations;
return;
}
// Get the first assignment
Entry<FieldGroup<T>, Integer> fieldGroupAssignment = remaining.entrySet().iterator().next();
FieldGroup<T> group = fieldGroupAssignment.getKey();
remaining.remove(group);
solution = Solution.createSolution(remaining);
// Setup values for the hypergeometric distribution calculation. See http://en.wikipedia.org/wiki/Hypergeometric_distribution
int N = group.size();
int n = fieldGroupAssignment.getValue();
Integer K = this.neighbors.get(group);
if (this.group == group) {
N--; // Always exclude self because you can't be neighbor to yourself
}
if (K == null) {
// This field does not have any neighbors to that group.
recursiveRemove(solution, combinations * Combinatorics.nCr(N, n), mines);
return;
}
// Calculate the values and then calculate for the next group
int maxLoop = Math.min(K, n);
for (int k = minK(N, K, n); k <= maxLoop; k++) {
double thisCombinations = Combinatorics.NNKK(N, n, K, k);
recursiveRemove(solution, combinations * thisCombinations, mines + k);
}
}
@Override
public String toString() {
return "Proxy(" + this.field.toString() + ")"
+ "\n neighbors: " + this.neighbors.toString()
+ "\n group: " + this.group.toString()
+ "\n Mine prob " + this.group.getProbability() + " Numbers: " + Arrays.toString(this.detailedProbabilities);
}
}
NeighborFind.java: (30 lines, 718 bytes)
/**
* Interface strategy for performing a {@link DetailAnalyze}
*
* @author Simon Forsberg
*
* @param <T> The field type
*/
public interface NeighborFind<T> {
/**
* Retrieve the neighbors for a specific field.
*
* @param field Field to retrieve the neighbors for
*
* @return A {@link Collection} of the neighbors that the specified field has
*/
Collection<T> getNeighborsFor(T field);
/**
* Determine if a field is a found mine or not
*
* @param field Field to check
*
* @return True if the field is a found mine, false otherwise
*/
boolean isFoundAndisMine(T field);
}
ProbabilityKnowledge.java: (39 lines, 1031 bytes)
public interface ProbabilityKnowledge<T> {
/**
* @return The field that this object has stored the probabilities for
*/
T getField();
/**
* @return The {@link FieldGroup} for the field returned by {@link #getField()}
*/
FieldGroup<T> getFieldGroup();
/**
* @return How many mines has already been found for this field
*/
int getFound();
/**
* @return The mine probability for the {@link FieldGroup} returned by {@link #getFieldGroup()}
*/
double getMineProbability();
/**
* @return {@link GroupValues} object for what neighbors the field returned by {@link #getField()} has
*/
GroupValues<T> getNeighbors();
/**
* @return The array of the probabilities for what number this field has. The sum of this array + the value of {@link #getMineProbability()} will be 1.
*/
double[] getProbabilities();
}
ProxyProvider.java: (7 lines, 132 bytes)
public interface ProxyProvider<T> {
FieldProxy<T> getProxyFor(T field);
}
Usage / Test
Available on Github: https://github.com/Zomis/Minesweeper-Analyze/blob/master/src/test/java/net/zomis/minesweeper/analyze/DetailAnalyzeTest.java
To see results of analyze in action, follow these steps:
go to a random game on my Minesweeper Flags Stats page
go to a random situation within that game
click on the map
click on the analyze link at the bottom
then click on a random field on the map.
A popup element will show the detailed probabilities for that field.
Questions
In order of importance:
Does another implementation of this exist? Are there any existing libraries out there?
Any general comments welcome about this code and/or this approach
Can it be made even faster? (except for some optimizations, I doubt it myself, but I would really love it if it could be improved significantly)
Answer: @Pimgd was definitely on to something big in the last part of the answer, but did not take it far enough. Simply changing Solution<T> to GroupValues<T> did not help much.
However, what certainly did help is to use a List<Entry<FieldGroup<T>, Integer>> (as ugly as that type declaration looks), and to iterate through the List without modifying anything.
It turns out that this change alone made the code NINE (9) to TEN (10) times faster.
The new method is like this:
private void recursiveRemove(List<Entry<FieldGroup<T>, Integer>> solution, double combinations, int mines, int listIndex) {
if (Thread.interrupted()) {
throw new RuntimeTimeoutException();
}
// Check if there are more field groups with values
if (listIndex >= solution.size()) {
// TODO: or if combinations equals zero ?
this.detailedCombinations[mines + this.found] += combinations;
return;
}
// Get the assignment
Entry<FieldGroup<T>, Integer> fieldGroupAssignment = solution.get(listIndex);
FieldGroup<T> group = fieldGroupAssignment.getKey();
// Setup values for the hypergeometric distribution calculation. See http://en.wikipedia.org/wiki/Hypergeometric_distribution
int N = group.size();
int n = fieldGroupAssignment.getValue();
Integer K = this.neighbors.get(group);
if (this.group == group) {
N--; // Always exclude self because you can't be neighbor to yourself
}
if (K == null) {
// This field does not have any neighbors to that group.
recursiveRemove(solution, combinations * Combinatorics.nCr(N, n), mines, listIndex + 1);
return;
}
// Calculate the values and then calculate for the next group
int maxLoop = Math.min(K, n);
for (int k = minK(N, K, n); k <= maxLoop; k++) {
double thisCombinations = Combinatorics.NNKK(N, n, K, k);
recursiveRemove(solution, combinations * thisCombinations, mines + k, listIndex + 1);
}
}
And called like this:
recursiveRemove(new ArrayList<Entry<FieldGroup<T>, Integer>>(solution.copyWithoutNCRData().getSetGroupValues().entrySet()), 1, 0, 0);
Iterating instead of continuously copying and modifying. Why on earth did I not think of this before?
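The hypergeometric term that `Combinatorics.NNKK` computes (per the Wikipedia link in the code) can be sketched in a few lines; `nnkk` below is a hypothetical Python stand-in, assuming the Java helper returns C(K,k)·C(N−K,n−k). Vandermonde's identity then provides a sanity check that the loop over k distributes all C(N,n) combinations:

```python
from math import comb

def nnkk(N, n, K, k):
    """Number of ways to place n mines among N fields such that exactly
    k of them fall on the K fields neighboring the current field."""
    return comb(K, k) * comb(N - K, n - k)

# Vandermonde's identity: summing over all feasible k recovers C(N, n),
# so the recursion conserves the total combination count.
N, n, K = 10, 4, 3
total = sum(nnkk(N, n, K, k) for k in range(0, min(K, n) + 1))
assert total == comb(N, n)
```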
This means that doing this big analyze on The super board of death is now done in about 3.5 minutes instead of 35 minutes!! | {
"domain": "codereview.stackexchange",
"id": 11090,
"tags": "java, algorithm, combinatorics, minesweeper"
} |
With respect to what quantities do I vary Lagrangians in field theory? | Question: I have recently been wondering, with respect to which quantities (covariant or contravariant) one should vary QFT Lagrangians and whether there is some rule regarding this. Let me give an example which will hopefully clarify what my problem is.
Let's take the source-free EM Lagrangian:
$$
\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}
$$
One can now compute:
$$
\frac{\partial\mathcal{L}}{\partial(\partial_\mu A_\nu)}=F^{\nu\mu}\\
\frac{\partial\mathcal{L}}{\partial(\partial^\mu A^\nu)}=F_{\nu\mu}\\
\frac{\partial\mathcal{L}}{\partial(\partial^\mu A_\nu)}=F^{\nu}{}_{\mu}\\
\frac{\partial\mathcal{L}}{\partial(\partial_\mu A^\nu)}=F_{\nu}{}^{\mu}
$$
If I want to arrive at the correct equation of motion now, I have to choose $\partial _\mu$ or $\partial ^\mu$ according to how I varied $\mathcal{L}$ (btw: is "varied" the correct word here?).
My question is: is there a "correct" way to do this? Can I write the equation of motion in a way, which will work for all Lagrangians without me having to "manually" pick the $\partial$? Most textbooks seem to have:
$$
\partial_\mu \frac{\partial\mathcal{L}}{\partial(\partial_\mu \phi)}-\frac{\partial\mathcal{L}}{\partial \phi}=0
$$
The example above seems to suggest, that varying w.r.t. a covariant quantity returns a contravariant quantity so to say and vice versa. Is that true in general? I do unfortunately not have any theoretical background on this so far and am being thrown into the cold water.
Answer: Let us suppose that we are in a Minkowski (flat) space-time with a diagonal (symmetric) metric $\eta_{\mu\nu} = Diag(-1, 1, 1, 1)$, and $\eta^{\mu\nu} = Diag(-1, 1, 1, 1)$.
The standard Euler-Lagrange equations for a Lagrangian $\mathcal L(A_\nu, \partial_\mu A_\nu)$, is : $\partial_\mu \dfrac{\partial\mathcal{L}}{\partial(\partial_\mu A_\nu)}-\dfrac{\partial\mathcal{L}}{\partial A_\nu}=0$, because the field variables are $A_\mu$, and its first derivatives $\dfrac{\partial A_\nu}{\partial x^\mu} = \partial_\mu A_\nu$ .
The relation between contravariant ($v^\nu$) and covariant coordinates ($v_\mu$), is simply $v_\mu = \eta_{\mu\nu}v^\nu$, or $v^\nu = \eta^{\nu\mu} v_\mu$. This is the natural relation between "lower" and "upper" indices, and you may generalise it to tensors also, so, for instance your Lagrangian could be written :
$\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}= -\frac{1}{4} \eta^{\mu\alpha}\eta^{\nu\beta} F_{\alpha\beta}F_{\mu\nu}$
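These raising/lowering relations are easy to verify numerically; a small sketch with NumPy using the Diag(−1, 1, 1, 1) metric from above (the component values are arbitrary):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)

v_upper = np.array([2.0, 1.0, -3.0, 0.5])   # contravariant components v^mu
v_lower = eta @ v_upper                      # v_mu = eta_{mu nu} v^nu

# Raising the index again recovers the original components,
# since this eta is its own inverse.
assert np.allclose(eta @ v_lower, v_upper)

# The invariant v^mu v_mu is the same however we contract.
assert np.isclose(v_upper @ v_lower, v_upper @ eta @ v_upper)
```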
Now, you may transform the Euler-Lagrange relations in : $\partial_\mu \dfrac{\partial\mathcal{L}}{\partial(\partial_\mu A^\nu)}-\dfrac{\partial\mathcal{L}}{\partial A^\nu}=0$ or $\partial^\mu \dfrac{\partial\mathcal{L}}{\partial(\partial^\mu A_\nu)}-\dfrac{\partial\mathcal{L}}{\partial A_\nu}=0$ or $\partial^\mu \dfrac{\partial\mathcal{L}}{\partial(\partial^\mu A^\nu)}-\dfrac{\partial\mathcal{L}}{\partial A^\nu}=0$, if you want, but this will bring nothing new or useful to you, so it is better to stay with the "standard" expression of the Euler-Lagrange equations.
The "standard" expression of the Euler-Lagrange equations is more natural too, because $A_\mu$ (as a "connection") has a "natural" covariant nature (like $p_\mu$), and a coordinate $x^\mu$ has a natural contravariant nature, so a derivative relative to $x^\mu$, that is $\partial_\mu$, has a natural covariant nature. This will become important in curved space-times, where the relation between contravariant and covariant quantities is more complex. | {
"domain": "physics.stackexchange",
"id": 11039,
"tags": "lagrangian-formalism, variational-principle, covariance, variational-calculus"
} |
Why are only the extremely low frequencies of electromagnetic radiation able to penetrate the earth and sea? | Question: Wikipedia classifies ELF (Extremely Low Frequency) radiation as between 3 Hz and 300 Hz, and ULF (Ultra Low Frequency) from 300 Hz to 3 kHz. Both ranges can penetrate the earth and sea. What is the physics behind why this is so?
What would be the ideal frequency then to scan up to 500 feet underground for metal detection, a sort of underground metal detection radar?
Answer: The physics of the attenuation is that of electromagnetic waves in conductive materials. The skin depth in a conductor is (see Wikipedia at https://en.m.wikipedia.org/wiki/Skin_effect),
$d = \sqrt{2/(\mu_0\sigma\omega)}$
See also a simple derivation online at http://farside.ph.utexas.edu/teaching/315/Waves/node65.html
Seawater is still a good conductor, as shown there, with a conductivity of 5 (ohm·m)$^{-1}$. The skin depth arises because the charged particles in the medium tend to cancel the electric field, and eventually do. The higher the conductivity, clearly the smaller the skin depth. You have to solve the Maxwell equations with the approximations for a good conductor, and you get the wavelength dependence. The article shows the derivation, which is basic electromagnetism; it is standard in textbooks too, see it for instance in Jackson. An intuitive interpretation (though it is better to do the math, so you won't misinterpret) is that the electromagnetic energy absorbed per unit depth is smaller at longer wavelengths, since the field variations are partially scaled by wavelength.
Here $\sigma$ is the conductivity and $\omega = 2\pi f$, with f the frequency. For lower frequency f the skin depth is larger, meaning greater penetration, i.e. a greater distance before attenuation. $\mu_0$ is the vacuum permeability. Most or almost all of the absorption is from the electric field at ELF frequencies, so the magnetic properties of the conductor matter little. The derivation of the skin depth is in just about any electromagnetism book, also for the most general case.
The earth is also a conductor, a geological conductor, typically with strata that are arranged in a complex way. For low frequencies like ELF the small-scale arrangements have little effect because the wavelengths are a thousand or more kilometers, so an average conductivity is not too bad an approximation. I saw a good reference for the absorption through the earth by googling, but can't seem to find it again.
So again low frequencies like ELF propagate better.
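Plugging numbers into the skin-depth formula makes the frequency dependence concrete; a short sketch using the seawater conductivity quoted above:

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m
SIGMA_SEAWATER = 5.0          # conductivity, (ohm*m)^-1

def skin_depth(freq_hz, sigma=SIGMA_SEAWATER):
    """d = sqrt(2 / (mu0 * sigma * omega)) for a good conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (MU0 * sigma * omega))

# Lower frequency -> larger skin depth -> deeper penetration.
for f in (30, 300, 3000):     # ELF through the low end of the next bands
    print(f"{f:5d} Hz: {skin_depth(f):6.1f} m")
```

At 30 Hz the depth in seawater comes out around 40 m, dropping roughly as the square root of frequency; dry rock, with far lower conductivity, allows correspondingly deeper penetration.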
By the way, they also propagate pretty well above ground, with the ground and the ionosphere forming a duct. There are lots of papers on the physics and propagation of these waves. | {
"domain": "physics.stackexchange",
"id": 41761,
"tags": "electromagnetism, geophysics, metals, radio-frequency"
} |
Bluetooth communication problem with arduino over rosserial_arduino and rosserial_python | Question:
Hi,
I am using an Arduino Uno in combination with a Sparkfun Bluetooth Mate Gold modem to interface with my ROS hydro framework running on a host. The communication is established with the rosserial_python node on the host and the rosserial_arduino code uploaded to the Arduino. When I upload the blink_led example described in this tutorial, I get the following error after running the rosserial_python node on the host:
[ERROR] [WallTime: 1395914292.667126] Mismatched protocol version in packet: lost sync or rosserial_python is from different ros release than the rosserial client
[INFO] [WallTime: 1395914292.667822] Protocol version of client is unrecognized, expected Rev 1 (rosserial 0.5+)
The error is usually thrown after a few seconds, before the error is thrown everything works fine and I can blink the LED. I already tried the following measures without success:
Set a baudrate of 115200 instead of 57600 on the host side, the bluetooth modem and the Arduino UART.
Compile the rosserial code from source instead of using the latest debian binaries (results in a different error complaining about unexpected message lengths).
Recompile the ros_lib library in the Arduino sketchbook.
Increase the delay times in the Arduino code.
Remove all subscribers in the Arduino code and just init the node.
When I try to run the system over bluetooth, the modem is connected to the TX/RX pins of the arduino (TX to RX, RX to TX) and the USB cable removed. When I disconnect the bluetooth modem from the Arduino and connect the Arduino over a USB cable to the host, everything runs fine.
Can anyone help me with this issue?
Originally posted by Tones on ROS Answers with karma: 635 on 2014-03-27
Post score: 1
Answer:
In the end, I used the serial_node of the rosserial_server package instead of rosserial_python. According to the ROS wiki, rosserial_server is currently in an experimental state and has restricted functionality, but it works fine for me.
Originally posted by Tones with karma: 635 on 2014-04-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 17442,
"tags": "ros, arduino, communication, bluetooth, rosserial"
} |
How to trade poles of All pole filter with zeroes? | Question: I have filter of order 2000 IIR all-pole. I want to implement it via cascade IIR filter. Is there any way to reduces poles order and increase zeroes as its cost. To balance zero and poles. Since all pole cascade have lots of zero coefficient because of lack of zero, in this way I want to gain more computational and maybe space efficiency.
Is it possible to balance zeros and poles? Lossy or losslessly(preferred).
Answer: Yes, any IIR filter whose impulse response ultimately decays to insignificant values can be implemented as an FIR filter: this is the extreme case where all (non-zero) poles have been eliminated. This is because the coefficients of an FIR filter are the impulse response of the filter, so we can get the zeros by taking the sampled impulse response as the numerator of the transfer function and factoring that into its zeros.
The conversion can be done by just resampling a high-precision impulse response from the IIR to a lower rate. The lowest rate can be determined from the frequency response with regard to avoiding frequency-domain aliasing. My approach would be to use a simple zero-phase filter (‘filtfilt’ in Matlab and Python) to limit the bandwidth of the impulse response to something greater than the bandwidth of interest, and then resample the filtered response to at least 2x that bandwidth. If the sampling rate is already minimized then we can only reduce the number of zeros needed if its total time duration far exceeds the actual impulse response above a limit of significance.
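As a minimal sketch of the core idea (skipping the band-limiting/resampling step), here is an IIR truncated to an FIR by sampling its decayed impulse response, using a hand-rolled difference equation; the FIR then matches the IIR to within the truncation error:

```python
import numpy as np

def iir_filter(b, a, x):
    """Direct-form difference equation: a[0]y[n] = sum b[i]x[n-i] - sum a[j]y[n-j]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n >= i)
        acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n >= j)
        y[n] = acc / a[0]
    return y

# A stable all-pole IIR: conjugate pole pair at radius 0.8.
r, theta = 0.8, np.pi / 4
b = [1.0]
a = [1.0, -2 * r * np.cos(theta), r * r]

# The truncated impulse response IS the coefficient set of the equivalent FIR.
n_taps = 200
impulse = np.zeros(n_taps)
impulse[0] = 1.0
fir_taps = iir_filter(b, a, impulse)

# Both filters give nearly identical output on the same input.
x = np.sin(0.1 * np.arange(300)) + 0.5 * np.cos(0.37 * np.arange(300))
y_iir = iir_filter(b, a, x)
y_fir = np.convolve(x, fir_taps)[: len(x)]
assert np.allclose(y_iir, y_fir, atol=1e-10)
```

Note how many more taps (200) the FIR needs than the two poles it replaces, which is the usual FIR/IIR order trade-off mentioned at the end of this answer.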
To be clear, all systems ultimately have the same number of poles and zeros; it's just that some of them may be at infinity and so not visible on a plot we may make. With a causal FIR system all the poles are at the origin and are referred to as "trivial".
If we want to trade a smaller group of poles for zeros out of a higher-order filter with many poles, we can use the following procedure:
Factor the filter into the cascade of two filters, with the poles to be exchanged as a separate filter cascaded with the poles to keep.
Use the process above to convert those poles to zeros. Note that there will typically be many more zeros to achieve a similar impulse response as can be achieved with the poles. Consider how FIR filters typically are of much higher order than IIR implementations to achieve a given frequency response. | {
"domain": "dsp.stackexchange",
"id": 12138,
"tags": "filter-design, infinite-impulse-response, digital-filters"
} |
Why does HER not work with on-policy RL algorithms? | Question: I'm wondering because I don't appreciate what is wrong with just applying HER to an otherwise on-policy algorithm? Like if we do that will the training stability just fall apart? And if so why? My understanding is that on-policy is just a category created by humans meaning that "the default algorithm doesn't do off-policy optimization". But why does that prohibit adding off-policy elements?
Answer: On policy algorithms contain policy and/or value update calculations that assume data was generated by the current policy. Breaking that assumption will cause them to miscalculate, or not function at all without some kind of intervention.
As an example SARSA has the TD target (new estimate for $q(S_t,A_t)$) of:
$$R_{t+1} + \gamma q(S_{t+1},A_{t+1})$$
The main problem here is the value of $A_{t+1}$ (there's a couple of minor issues with $S_t, A_t$ as well, but those also apply to off policy and are why you usually want your behaviour policy to be close to the target policy).
In an experience replay table you will have historic data from when the policy was different. So the value of $A_{t+1}$ that is stored may not be the one that the current policy would take. Learning from that action would skew the value table toward whatever policy was just sampled, instead of more accurately improving knowledge of the current policy.
To fix this, you could resample the action choice, or even use an expected q value across all actions that SARSA could currently take. That fixes things, but actually it also changes your method to off-policy Expected SARSA . . .
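A minimal sketch of the issue and the fix, with hypothetical numbers: the stored action a' from an old policy gives a stale SARSA target, while averaging over the current ε-greedy policy (Expected SARSA) removes the dependence on the stored a':

```python
import numpy as np

gamma, eps = 0.9, 0.1
q = np.array([[0.0, 1.0],
              [2.0, 0.5]])   # q[state, action], 2 states x 2 actions

# A replayed transition (s, a, r, s', a') where a' was chosen by an OLD policy.
s, a, r, s_next, a_next_old = 0, 1, 1.0, 1, 1

# Naive SARSA target using the stored a' -- biased toward the old policy.
target_sarsa_stale = r + gamma * q[s_next, a_next_old]

# Expected SARSA target: average over the CURRENT eps-greedy policy instead.
n_actions = q.shape[1]
probs = np.full(n_actions, eps / n_actions)
probs[np.argmax(q[s_next])] += 1.0 - eps
target_expected = r + gamma * probs @ q[s_next]

print(target_sarsa_stale, target_expected)
```

With these numbers the stale target is 1.45 while the expected target is 2.7325, because the current policy would almost always pick the better action in s'.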
Similar issues apply to using experience replay tables with other on policy methods. They become inaccurate if used naively, and if you fix the issues with that, you reinvent off-policy methods (when possible, not all on policy methods have an off policy equivalent) | {
"domain": "ai.stackexchange",
"id": 3874,
"tags": "reinforcement-learning, off-policy-methods, on-policy-methods, hindsight-experience-replay"
} |
What is the definition of Normalised Eigenvectors in QM? | Question: I recently came across a problem:
"Normalise the three eigenvectors of the matrix $B$." The matrix $B$ was given by:
$$ B =
\begin{bmatrix}
2 & 0 & i\\
0 & 1 & 0\\
i & 3 & 2
\end{bmatrix}.
$$
I found the three eigenvalues and corresponding eigenvectors, and found two of them to be orthogonal.
My question is, does "Normalise" mean to convert these eigenvectors to unit vectors, or to form an orthonormal basis from them (for example, use the Gram-Schmidt procedure)?
Answer: If we calculate eigenvectors $\vec{s}_j$ of a matrix $\mathbf{B}$, they are solutions to the equation
$$ \mathbf{B}\vec{s}_j = \lambda_j \vec{s}_j$$
From the above equation you can clearly see that eigenvectors are only determined up to a factor, since $\vec{q}_j = a\vec{s}_j$, with an arbitrary scalar $a \neq 0$, also solves the above equation, as you can see by simply multiplying the whole equation by $a$. We can use this degree of freedom to ensure that $||\vec{s}_j|| = 1$, which is usually called normalization.
However, linear combinations of eigenvectors of different eigenvalues - which is what a Gram-Schmidt orthogonalization would do to the vectors in your example - are not eigenvectors of the matrix.
$$ \mathbf{B}(a\vec{s}_j + b\vec{s}_k) = a \lambda_j \vec{s}_j + b \lambda_k \vec{s}_k \neq \lambda (a\vec{s}_j + b\vec{s}_k)$$
So orthonormalizing them would break them being eigenvectors. Notice that this is not the case if the eigenvalue is degenerate, i.e. both eigenvectors have the same eigenvalue. This is easy to see in the above equation when setting $\lambda_j = \lambda_k$.
As someone stated above, if the matrix were hermitian, you can always choose the basis vectors to be orthonormal. The term "choose" comes from possible degeneracies: within a degenerate subspace you can in fact orthonormalize the vectors against each other, i.e. make them orthogonal and scale them to have norm 1. If there are no degeneracies, the eigenvectors are automatically orthogonal. | {
"domain": "physics.stackexchange",
"id": 40334,
"tags": "quantum-mechanics, operators, hilbert-space, eigenvalue, normalization"
} |
Singleton with readonly parameters | Question: The goal is to create a Singleton and pass it a parameter that is required for the construction and initialization of the class, then preventing any changes to be made to the passed parameter (just like a readonly field being set by an argument passed to a constructor).
For instance:
Sockets
Hosts
Databases
Repositories
(Any instance that requires at least one argument in order to construct)
I am having a tough time coming to terms with this design, and I am quite certain that there is a pitfall or a loose-end to this implementation of a Singleton combined with a Builder Pattern, to mimic readonly fields set by constructor arguments.
Example Implementation
In this example, I am trying to get a Singleton of Host, where I would like the enum EnvironmentTypes to be treated like a readonly field usually found in classes that have parameters passed into the constructor.
EnvironmentTypes Enum
public enum EnvironmentTypes
{
Production,
Staging,
Development
}
IHost Interface
public interface IHost
{
string Name { get; set; }
}
Host Class
public sealed class Host : IHost
{
#region Singleton
private static readonly Lazy<Host> _instance = new Lazy<Host>(() => new Host());
public static Host Instance { get { return _instance.Value; } }
#endregion
private static bool _isInstantiated;
private static EnvironmentTypes _environment;
private string _name;
internal static EnvironmentTypes Environment
{
get { return _environment; }
set
{
if (_isInstantiated) throw new InvalidOperationException(nameof(_environment) +" cannot be set once an instance is created.");
_environment = value;
}
}
public string Name
{
get { return _name; }
set { _name = value; }
}
static Host()
{
_isInstantiated = false;
_environment = EnvironmentTypes.Production;
}
private Host()
{
_isInstantiated = true;
_name = "My Server";
}
}
HostBuilder Class
public sealed class HostBuilder
{
private readonly EnvironmentTypes _environment;
private string _name;
public HostBuilder(EnvironmentTypes environment)
{
_environment = environment;
}
public HostBuilder SetName(string name)
{
_name = name;
return this;
}
public IHost Build()
{
Host.Environment = _environment;
Host host = Host.Instance;
host.Name = _name;
return host;
}
}
Implementation
class Foo
{
void UsingTheBuilder()
{
// probably over-kill
HostBuilder builder = new HostBuilder(EnvironmentTypes.Development)
.SetName("Bingo");
IHost host = builder.Build();
//host.Environment is not available, great!
host.Name = "Renamed Server"; // works as expected.
}
void ManualConfiguration()
{
Host.Environment = EnvironmentTypes.Development;
Host host = Host.Instance;
host.Name = "Bingo";
Host.Environment = EnvironmentTypes.Staging; // throws! Hoped to prevent
// the developer from doing this.
}
}
Random Notes: It would be great if I could restrict access from getting to the static properties of Host, so that I can totally avoid anyone trying to set the Host.Environment static property and throwing an exception -- note how the HostBuilder shields that from happening as it is a readonly field.
Answer: Disclaimer: I am biased towards singletons. I think it is an anti-pattern, that has no place in modern C#.
First, here is a great article on how singletons become a disaster when you try to unit test a code, that heavily relies on them. Your case is even more complex, because you also have to initialize additional parameters. And you can't change those. So you can't test Host with different "environments" unless you try to bypass your own exception with reflection.
I would just register a non-static Host class as a singleton inside an IoC container, and be done with it. It will solve all your problems:
Parameters of Host are no longer exposed.
The container guarantees that there is going to be a single instance of Host.
Host is exposed as a service (IHost) and not as an implementation (Host).
You can mock IHost in unit tests.
You can easily unit-test Host implementation with whatever parameters you want, because now it has public constructor and can be re-created as often as it is required by your tests.
Classes that depend on IHost will now require it as dependency, instead of secretly accessing it via global static property. | {
"domain": "codereview.stackexchange",
"id": 25809,
"tags": "c#"
} |
Integral Represation of Four-Current with proper time | Question: I have a question about the four current in covariant representation.
the four current is defined as
$$
J^{\alpha} = \binom{c\rho}{\vec{j}}
$$
and i'm having a point charge,
$$\rho(\vec{x},t)=e\delta(\vec{x}-\vec{r}(t)),$$
$$\vec{j}(\vec{x},t)=e\frac{d\vec{r}}{dt}\delta(\vec{x}-\vec{r}(t)).$$
Now, the four-vector $r=(r^0, \vec{r})$ is the space-time-point on the trajectory of my point charge and $x=(x^0, \vec{x})$ is the observation point.
With $r^{\alpha}(\tau)$ as function of proper time and the four-velocity $V^{\alpha}$ the four current can be written like
$$J^{\alpha} = e \int{d\tau V^{\alpha}\delta^{(4)}(x-r(\tau))}.$$
My Question: why and how? I just can't verify this expression; in particular I don't understand where the integral comes from, and I'm a little bit confused about what $r^{\alpha}(\tau)$ looks like. I mean, I know
$$r^{\alpha} = \binom{ct}{\vec{r}(t)},$$ but how do I handle this with proper time, and what about $V^{\alpha}(\tau)$ and $V^{\alpha}(t)$?
Answer: Note: $c=1$ in the following.
Every time-like worldline can be parametrized by its proper time. If you are given $\vec r(t)$, then the proper time at $t_0$ is given by
$$\tau(t_0) = \int_0^{t_0}\sqrt{1-\left(\frac{\mathrm{d}\vec r}{\mathrm{d}t}\right)^2}\mathrm{d}t $$
and inverting this expression to get $t(\tau)$ gives you the worldline $r^\mu = (t(\tau),\vec r(t(\tau))$ parametrized by its proper time.
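As a concrete check of this parametrization (with $c=1$ and a hypothetical constant-speed worldline): the integral reduces to $\tau(t_0) = t_0\sqrt{1-v^2} = t_0/\gamma$, which a crude numerical integration reproduces:

```python
import math

def proper_time(t0, v_of_t, steps=200000):
    """Midpoint-rule integration of tau = integral of sqrt(1 - v(t)^2) dt."""
    dt = t0 / steps
    return sum(math.sqrt(1.0 - v_of_t((i + 0.5) * dt) ** 2) * dt
               for i in range(steps))

# Constant speed v = 0.8: expect tau = t0 * sqrt(1 - v^2) = t0 / gamma.
v, t0 = 0.8, 10.0
tau = proper_time(t0, lambda t: v)
assert math.isclose(tau, t0 * math.sqrt(1 - v * v), rel_tol=1e-9)
```

For a genuinely accelerating worldline the same integral applies with a time-dependent v(t), and inverting the resulting tau(t) numerically gives t(tau).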
Now, insert $1 =\frac{\mathrm{d}t}{\mathrm{d}t}$ into your expression for $\rho$. Then, your four-current becomes
$$ j^\mu = e \frac{\mathrm{d}r^\mu}{\mathrm{d}t}\delta(\vec x-\vec r(t))$$
and using the chain rule
$$ j^\mu(\vec x,t) = e \frac{\mathrm{d}r^\mu}{\mathrm{d}\tau}\frac{\mathrm{d}\tau}{\mathrm{d}t}\delta(\vec x - \vec r(t)) = e u^\mu \frac{\mathrm{d}\tau}{\mathrm{d}t}\delta(\vec x - \vec r(t))$$
Now, we use
$$ j^\mu(\vec x,t') = \int j^\mu(\vec x,t)\delta(t'-t)\mathrm{d}t=\int e u^\mu \frac{\mathrm{d}\tau}{\mathrm{d}t}\delta(\vec x - \vec r(t))\delta(t'-t)\mathrm{d}t$$
and changing the integration variable from $t$ to $\tau$
$$ j^\mu(\vec x ,t') = \int e u^\mu \delta(\vec x - \vec r(t))\delta(t'-t)\mathrm{d}\tau$$
Finally, $t= r^0$, $t' = x^0$ and $\delta(x - r) = \delta(\vec x - \vec r)\delta(x^0 - r^0)$ give the formula you asked about. | {
"domain": "physics.stackexchange",
"id": 26179,
"tags": "electromagnetism, special-relativity, electric-current, dirac-delta-distributions"
} |
Lyonization and X-linked disorders? | Question: Lyonization or X-chromosome inactivation is the conversion of all but one, X-chromosomes in Females into non-coding heterochromatin (i.e. deactivated) leading to the formation of one or more Barr bodies. The selection of the X-chromosome to be inactivated is different in different animals. In female Marsupials, the inactivation is always of the Paternal X-chromosome while in placental mammals, the selection is random. (Although the extent of lyonization is not completely random and varies directionally with age).
My question is
If the X-chromosome to be inactivated is randomly selected in
placental mammals (including humans), then why do females
heterozygous for an X-linked recessive disorder not show the
phenotype of the disease, though the functional gene is equally likely
(compared to the defective one) to be the one lyonized?
There are some evidences of genes escaping inactivation and managing to exhibit themselves but I don't think they can account for all the recessive x-linked disorders.
Answer: The simple answer is that which X chromosome is inactivated varies in different cell lineages, so typically a female will have cells exhibiting either wild-type or mutant phenotypes. It was Mary Lyon's observation of mosaicism in heterozygous mouse coat colour that gave the phenomenon its name. So in the case of a recessive disease there will be a phenotype, but in many cases the 50% of cells expressing the normal gene will provide sufficient functional cells to get by. From the Wikipedia article about Mary Lyon:
Her research has allowed us to understand the genetic control mechanisms of chromosome X, which explains the absence of symptoms in numerous healthy women that are carriers of diseases associated with this chromosome.
Edit - response to comments:
Plug, I et al. (2006) Bleeding in carriers of hemophilia. Blood 108: 52-56
Abstract:
A wide range of factor VIII and IX levels is observed in heterozygous carriers of hemophilia as well as in noncarriers. In female carriers, extreme lyonization may lead to low clotting factor levels. We studied the effect of heterozygous hemophilia carriership on the occurrence of bleeding symptoms. A postal survey was performed among most of the women who were tested for carriership of hemophilia in the Netherlands before 2001. The questionnaire included items on personal characteristics, characteristics of hemophilia in the affected family members, and carrier testing and history of bleeding problems such as bleeding after tooth extraction, bleeding after tonsillectomy, and other operations. Information on clotting factor levels was obtained from the hospital charts. Logistic regression was used to assess the relation of carrier status and clotting factor levels with the occurrence of hemorrhagic events. In 2004, 766 questionnaires were sent, and 546 women responded (80%). Of these, 274 were carriers of hemophilia A or B. The median clotting factor level of carriers was 0.60 IU/mL (range, 0.05-2.19 IU/mL) compared with 1.02 IU/mL (range, 0.45-3.28 IU/mL) in noncarriers. Clotting factor levels from 0.60 to 0.05 IU/mL were increasingly associated with prolonged bleeding from small wounds and prolonged bleeding after tooth extraction, tonsillectomy, and operations. Carriers of hemophilia bleed more than other women, especially after medical interventions. Our findings suggest that not only clotting factor levels at the extreme of the distribution, resembling mild hemophilia, but also mildly reduced clotting factor levels between 0.41 and 0.60 IU/mL are associated with bleeding.
Bimler, D & Kirkland, J (2009) Colour-space distortion in women who are heterozygous for colour deficiency. Vision Research 49: 536-543
from the Introduction:
About 15% of women are heterozygous for some form of colour vision deficiency (CVD). That is, they possess a genetic abnormality on one of their two X chromosomes, affecting the photopigments (opsins) which subserve colour vision. The retina of a heterozygous woman is a mosaic in which some cone cells express the aberrant gene while others express the normal copy, depending on which X chromosome is active (inactivation of one X chromosome occurs randomly in retinal stem-cells at some stage of fetal development). The normal cells are sufficient to provide full trichromatic vision. | {
"domain": "biology.stackexchange",
"id": 1452,
"tags": "molecular-biology, molecular-genetics, gene-expression, cytogenetics"
} |
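The mosaic argument in the answer above can be illustrated with a small Monte Carlo sketch. The parameters here (number of carriers, number of independent cell lineages) are purely illustrative assumptions, not biological measurements: each simulated carrier inactivates one X at random in each lineage, and the fraction of lineages keeping the functional allele stands in for the relative clotting-factor level.

```python
import random

def simulate_carrier_factor_levels(n_carriers=10_000, n_lineages=50, seed=42):
    """For each simulated carrier, randomly inactivate one X chromosome in each
    of n_lineages independent cell lineages; the fraction of lineages keeping
    the functional allele active approximates the carrier's clotting-factor
    level relative to normal (all numbers are illustrative)."""
    rng = random.Random(seed)
    levels = []
    for _ in range(n_carriers):
        active_normal = sum(rng.random() < 0.5 for _ in range(n_lineages))
        levels.append(active_normal / n_lineages)
    return levels

levels = simulate_carrier_factor_levels()
mean_level = sum(levels) / len(levels)
extreme = sum(1 for x in levels if x < 0.25) / len(levels)  # "extreme lyonization"
print(f"mean relative level ~ {mean_level:.2f}")
print(f"fraction of carriers below 25% ~ {extreme:.4f}")
```

Most carriers cluster near 50% of the normal level, which matches the observation that heterozygous carriers are usually asymptomatic, while the rare tail of extreme lyonization corresponds to the low clotting-factor carriers described in the Plug et al. study.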
Cosmological constant in a semiclassical approximation of quantum gravity | Question: Why is it the case that, in a semiclassical description of quantum gravity, the cosmological constant is small in Planck units?
Why does this mean that
$$\ell \gg G$$
for $\Lambda = - 1/\ell^{2}$?
Answer: Semiclassical gravity (and basically anything known today) doesn't say anything about specific value of the cosmological constant. You can consider quantum corrections to the effective cosmological constant, you can look how it changes during phase transitions but there's always some constant term that is unconstrained by theory. At the present level of understanding we have to treat it as a free parameter with some value taken from observations.
This value is very strange: it has no obvious relation to any other known scale, especially once you take into account all the phase transitions and how it should be related to some high-energy physics. This is a major unsolved problem of modern physics. | {
"domain": "physics.stackexchange",
"id": 41272,
"tags": "dimensional-analysis, physical-constants, cosmological-constant, absolute-units"
} |
What does a quantum logic gate look like in a universal quantum computer? | Question: I am familiar with logic gates which are in classical computers. They are made up of semiconductor devices like diodes and transistors.
There are some companies which are working on the universal quantum computers such as Google, IBM, Honeywell.
What do quantum logic gates such as Pauli-X, Y, Z, Hadamard, CNOT, and CCNOT look like in the circuits of those universal quantum computers? I have referred to many white papers, but all of them represent these gates as matrices.
Are quantum logic gates even physically realized yet, or are they only theoretically described using matrices?
Can you please attach some real picture of these processors, and point out the quantum logic gates?
Answer: The quantum gates in a superconducting qubit chip are not devices located in space made out of metal. They are processes applied over time. They look like carefully choreographed microwave chirps travelling down wires attached to the superconducting loops that are the qubits.
Instead of moving the data through the operations, you move the operations through the data.
Figure is from https://arxiv.org/abs/1709.06678 | {
"domain": "quantumcomputing.stackexchange",
"id": 2894,
"tags": "experimental-realization"
} |
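To complement the answer above: physically the gates are choreographed pulses, but mathematically they act exactly as the matrices from the white papers, multiplying the state vector. A minimal pure-Python sketch of that matrix picture (illustrative only; this is not how control hardware represents gates):

```python
import math

s = 1 / math.sqrt(2)
X = [[0, 1], [1, 0]]          # Pauli-X (bit flip)
H = [[s, s], [s, -s]]         # Hadamard
CNOT = [[1, 0, 0, 0],         # controlled-NOT on two qubits
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def apply(gate, state):
    """Matrix-vector product: one gate 'pulse' acting on a state vector."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(state))]

ket0 = [1, 0]
print(apply(X, ket0))                          # flips |0> to |1>
plus = apply(H, ket0)                          # (|0> + |1>)/sqrt(2)
print(plus)
bell = apply(CNOT, [plus[0], 0, plus[1], 0])   # CNOT on |+>|0> gives a Bell state
print(bell)
```

The point of the answer stands: on hardware, each of these matrix multiplications is realized as a process applied over time to the qubits, not as a component sitting in the chip.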
Finding direction of electric force of 2 particles | Question: A molecule has its centre P of positive charge situated a distance of 2.8 × 10–10 m from
its centre N of negative charge, as illustrated in Fig.
The molecule is situated in a uniform electric field of field strength 5.0 × 104 V m–1. The
axis NP of the molecule is at an angle of 30° to this uniform applied electric field.
The magnitude of the charge at P and at N is 1.6 × 10–19 C.
I need to draw the direction of the force on P and N due to the electric field. Is it correct that the force on P points towards N and the force on N points towards P?
The next question asks to calculate the torque. I get a result using the perpendicular distance tan 30° × Ef, but in the answer sheet they use sin 30° × Ef.
What am I doing wrong?
Answer: This is basically a dipole-type problem. The direction of the dipole moment p is along the line joining the charges, from the -ve to the +ve charge. The torque is given by the cross product of p and the electric field, i.e. p×E. Note the angle between the dipole p and E: in this case it is 150°; had your P been N and your N been P, it would have been 30°. So the equation becomes pE sin 150° = pE sin 30° (not tan 30°), and p by definition is the magnitude of the charge (1.6×10⁻¹⁹ C) times the separation between the charges.
For the direction of the forces, take the force on the positive charge to be in the direction of the electric field lines, and the force on the negative charge opposite to that. | {
"domain": "physics.stackexchange",
"id": 17485,
"tags": "homework-and-exercises, electric-fields, dipole"
} |
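A quick numeric check of the torque formula from the answer, plugging in the values given in the question (variable names are illustrative):

```python
import math

q = 1.6e-19      # magnitude of each charge (C)
d = 2.8e-10      # separation of P and N (m)
E = 5.0e4        # field strength (V/m)

p = q * d                                  # dipole moment magnitude
tau = p * E * math.sin(math.radians(150))  # same as p E sin(30 deg)
print(f"torque = {tau:.3e} N m")
```

This reproduces the answer-sheet result, and confirms numerically that sin 150° = sin 30°, so either angle convention gives the same torque magnitude.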
Transformation of four-velocity in special relativity | Question: I am revising special relativity introducing more matrix form in the equation. Currently I am reading book in which transformation matrix is defined as
$${\Lambda=
\begin{bmatrix}
\gamma & -v\gamma & 0 & 0 \\
-v\gamma & \gamma & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}
}$$
and the four-velocity as
$${U^{\alpha}=\Lambda^{\alpha}_{\overline{\beta}}(\vec e_{\overline{0}})^{\overline{\beta}}=\Lambda^{\alpha}_{\overline{0}}}$$
The first thing that bothers me is that for positive ${v}$ I get negative component of the velocity vector
$${\vec U=
\begin{bmatrix}
\gamma \\
-v\gamma \\
0 \\
0
\end{bmatrix}
}$$
And the second thing is that when I apply this rule of transformation twice (for example when I have the following task:Ship is travelling with 0.6c relative to the Earth and another ship is travelling with 0.6c with respect to the first shipt. Find the four-velocity of the second ship relative to the Earth.(it is not homework)) For the four-velocity I get
$${
\vec U=
\begin{bmatrix}
\gamma^{2}+v^{2}\gamma^{2} \\
-v\gamma^{2}-v\gamma^{2} \\
0 \\
0\\
\end{bmatrix}
}$$
The second component is negative which definitely differs from the real vector (I checked this using Lorentz transformation, getting the equations, building the t''-x'' axes of the second ship's frame with respect to Earth). Where am I wrong?
(c=1)
Answer: Timaeus's answer could be correct. The $\Lambda$ matrix from your book may have been intended as a passive transformation (one that acts on the coordinate system) and you mistakenly used it as an active transformation (one that acts on the object). Alternatively, the $\Lambda$ matrix from your book may have been intended as the active transformation of a co-vector and my answer below addresses that.
Because boosts are not orthogonal matrices, you have to think about the different way co-vectors and contra-vectors transform. The $\Lambda$ matrix you wrote down is the +v boost for a co-vector. You then applied it to a vector and incorrectly interpreted the result as if it were a contra-vector. The co-vector $U_\alpha$ with +v velocity really does have $-v\gamma$ in it as you found. You must raise U's index with the metric to see what the contra-variant $U^\beta$ is, and you see the +v that you expected.
$$U^\beta = \eta^{\beta \alpha}U_\alpha$$
$$\begin{bmatrix}\gamma \\v\gamma \\0 \\0\end{bmatrix}=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 &-1 & 0 & 0 \\ 0 & 0 &-1 & 0 \\ 0 & 0 & 0 &-1\end{bmatrix} \begin{bmatrix}\gamma \\-v\gamma \\ 0 \\ 0\end{bmatrix}$$
A contra-vector transforms as
$$Unew^\alpha = \Lambda^\alpha _{-\beta} Uold^\beta$$
Then a co-vector transforms like
$$Unew_\alpha = Uold_\beta(\Lambda^{-1})^\beta _{-\alpha}$$
This matrix boosts contra-vectors by +v. I have redefined your $\Lambda$ to be the matrix that gives a +v boost to a contra-vector.
$${\Lambda=\begin{bmatrix}\gamma & v\gamma & 0 & 0 \\
v\gamma & \gamma & 0 & 0 \\0 & 0 & 1 & 0\\0 & 0 & 0 & 1
\end{bmatrix}}$$
$$\begin{bmatrix}\gamma \\v\gamma \\0 \\0\end{bmatrix}
=\begin{bmatrix}\gamma & v\gamma & 0 & 0 \\
v\gamma & \gamma & 0 & 0 \\0 & 0 & 1 & 0\\0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
This matrix boosts co-vectors by +v.
$${(\Lambda^{-1})^T=\begin{bmatrix}\gamma & -v\gamma & 0 & 0 \\
-v\gamma & \gamma & 0 & 0 \\0 & 0 & 1 & 0\\0 & 0 & 0 & 1
\end{bmatrix}}$$
$$\begin{bmatrix}\gamma \\-v\gamma \\0 \\0\end{bmatrix}
=\begin{bmatrix}\gamma & -v\gamma & 0 & 0 \\
-v\gamma & \gamma & 0 & 0 \\0 & 0 & 1 & 0\\0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
Notice that rotations R are orthogonal which means $(R^{-1})^T=R$ and the transformations for co-vectors and contra-vectors are the same. Hence we don't talk about co-vectors and contra-vectors when doing rotations. | {
"domain": "physics.stackexchange",
"id": 24241,
"tags": "special-relativity, vectors, tensor-calculus, linear-algebra"
} |
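The answer's point about using an active +v boost on contravariant vectors can be checked numerically for the two-ship example from the question. This pure-Python sketch (c = 1) applies the redefined Λ twice and reads off the resulting 3-velocity:

```python
def boost(v):
    """Active Lorentz boost (+v along x) for a contravariant vector, c = 1."""
    g = 1.0 / (1.0 - v * v) ** 0.5
    return [[g,     v * g, 0, 0],
            [v * g, g,     0, 0],
            [0,     0,     1, 0],
            [0,     0,     0, 1]]

def matvec(m, u):
    return [sum(m[i][j] * u[j] for j in range(4)) for i in range(4)]

rest = [1, 0, 0, 0]            # four-velocity in the object's rest frame
u1 = matvec(boost(0.6), rest)  # first ship relative to Earth
u2 = matvec(boost(0.6), u1)    # second ship: the boost applied twice
print(u2, "=> 3-velocity:", u2[1] / u2[0])
```

The composed four-velocity has a positive spatial component, and its 3-velocity u²/u⁰ = 1.2/1.36 ≈ 0.882c agrees with relativistic velocity addition, while u·u = 1 is preserved.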
Second and third level excitations of open string | Question: Following David Tong's lectures on string theory, I considered the second and third level excitations of an open string. For simplicity, let's set the spacetime dimension to 4, so the number of transverse dimensions is just 2.
The first excited states are $a_{-1}^1\left|{0}\right>$ and $a_{-1}^2\left|{0}\right>$ (superscripts label dimension). They are the two polarizations of the photon, or, the vector rep of SO(2).
The second excited states are $a_{-1}^1 a_{-1}^1 \left|{0}\right>$, $a_{-1}^1 a_{-1}^2 \left|{0}\right>$, $a_{-1}^2 a_{-1}^2 \left|{0}\right>$, $a_{-2}^1 \left|{0}\right>$, $a_{-2}^2 \left|{0}\right>$. These five states are recognized as the symmetric traceless rank-2 rep of SO(3). Hence, they are massive spin-2.
For the third level, I reckon there are 10 states. However, the symmetric traceless rank-3 tensors have dimension 7 not 10 (though symmetric rank-3 tensors have dimension 10). So, how to interpret these 10 states as spin-3?
Or it is a problem that it is not legit to simply work in D=4? (This also raises another general question, in D=26, what is the general formula for number of states at level n? And how to interpret level n as spin n?)
Answer: As far as you don't require Lorentz invariance, it is perfectly OK to work in four dimensions.
How many states do you have at each level? You can convince yourself, as an exercise, that at level $k$, the number of states is the coefficient of $x^k$ in the function $$\prod\limits_{m \geq 1} \frac{1}{(1-x^m)^2} = 1 + 2x + 5x^2 + 10x^3 + 20 x^4 + \dots $$
Now clearly those numbers do not coincide with the dimensions of the irreducible representations of $SO(3)$, which are the odd integers. The reason is simply that the representations to which the string states belong are not irreducible. The 10-dimensional representation is the sum of a spin 3 and a spin 1.
One way to see that is to refine the generating function given above. For that, include an additional variable $s$ that will "count the spin": $$\prod\limits_{m \geq 1} \frac{1}{(1-s^2x^m)(1-s^{-2}x^m)} = 1+\left(s^2+\frac{1}{s^2}\right)
x+\left(s^4+\frac{1}{s^4}+s^2+\frac{1}{s^2}+1\right)
x^2+\left(s^6+\frac{1}{s^6}+s^4+\frac{1}{s^4}+2
s^2+\frac{2}{s^2}+2\right)
x^3+\left(s^8+\frac{1}{s^8}+s^6+\frac{1}{s^6}+3
s^4+\frac{3}{s^4}+3 s^2+\frac{3}{s^2}+4\right)
x^4 \dots $$
The coefficient of $x^3$ is the sum of the character of a spin 3, $s^6+\frac{1}{s^6}+s^4+\frac{1}{s^4}+
s^2+\frac{1}{s^2}+1$, and a spin 1, $s^2+\frac{1}{s^2}+1$.
For another example, let us consider level 4. Here the character is $$s^8+\frac{1}{s^8}+s^6+\frac{1}{s^6}+3
s^4+\frac{3}{s^4}+3 s^2+\frac{3}{s^2}+4$$. To decompose it, we can start with the highest weight, $s^8$. This is spin 4. We remove the spin 4 character, $s^8+\frac{1}{s^8}+s^6+\frac{1}{s^6}+
s^4+\frac{1}{s^4}+s^2+\frac{1}{s^2}+1$ and we are left with $2
s^4+\frac{2}{s^4}+2 s^2+\frac{2}{s^2}+3$. We see that the highest weight is $2s^4$, so we have two spin 2. Removing $2 \left(
s^4+\frac{1}{s^4}+ s^2+\frac{1}{s^2}+1 \right)$, we are left with $1$, which is a spin 0. Conclusion: the level 4 is one spin 4, two spin 2 and one spin 0. The dimensions add up to $9 + 2 \cdot 5 + 1 = 20$.
You can play the same game at any level, and observe that except at level 1, you can always cast the spectrum into (in general reducible) representations of $SO(3)$.
Edit : how to tell which states give which spins
For that, you can refine even more the generating function. Let us take a different fugacity for each oscillator level,
$$\prod\limits_{m \geq 1} \frac{1}{(1-s^2x^m \alpha_{-m})(1-s^{-2}x^m \alpha_{-m})} =1+x \left(\alpha _{-1} s^2+\frac{\alpha _{-1}}{s^2}\right)+x^2
\left(\alpha _{-1}^2+\alpha _{-1}^2 s^4+\frac{\alpha
_{-1}^2}{s^4}+\alpha _{-2} s^2+\frac{\alpha
_{-2}}{s^2}\right)+x^3 \left(2 \alpha _{-2} \alpha _{-1}+\alpha
_{-1}^3 s^6+\frac{\alpha _{-1}^3}{s^6}+\alpha _{-2} \alpha _{-1}
s^4+\frac{\alpha _{-2} \alpha _{-1}}{s^4}+\alpha _{-1}^3
s^2+\alpha _{-3} s^2+\frac{\alpha _{-1}^3}{s^2}+\frac{\alpha
_{-3}}{s^2}\right)+ \dots$$
Let us look for instance at level 3. You see that the spin 3 is made by states of the form $\alpha_{-1}^3$ and $\alpha _{-2} \alpha _{-1}$, while the spin 1 is made of states of the form $\alpha_{-3}$ and $\alpha _{-2} \alpha _{-1}$. | {
"domain": "physics.stackexchange",
"id": 39702,
"tags": "homework-and-exercises, string-theory"
} |
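The generating-function count from the answer is easy to reproduce numerically. A small sketch (the helper name is made up) that expands $\prod_{m\geq 1} (1-x^m)^{-D}$ by repeated geometric-series multiplication, one factor $1/(1-x^m)$ per transverse dimension:

```python
def level_counts(max_level, transverse_dims=2):
    """Coefficients of prod_{m>=1} (1 - x^m)^(-D) up to x^max_level:
    the number of open-string states at each oscillator level for
    D transverse dimensions."""
    coeffs = [0] * (max_level + 1)
    coeffs[0] = 1
    for m in range(1, max_level + 1):
        for _ in range(transverse_dims):        # one 1/(1-x^m) factor per dimension
            for k in range(m, max_level + 1):   # multiply series by 1/(1-x^m) in place
                coeffs[k] += coeffs[k - m]
    return coeffs

print(level_counts(4))       # -> [1, 2, 5, 10, 20], matching the answer's expansion
print(level_counts(3, 24))   # D = 24 transverse dimensions of the bosonic string
```

This also answers the side question for D = 26: with 24 transverse dimensions the level counts are the coefficients of $\prod_m (1-x^m)^{-24}$.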
What is negative and positive potential energy in Newtonian Mechanics? | Question: In my latest post I became confused about why potential energy would ever be negative and what it even represents.
An individual who answered my question suggested that the potential cannot be positive. Yet if you take a look at the picture from my past question, you see that the effective potential of an orbiting body should be positive beyond a point. Can anyone clarify this for me? I am sorry if it is a stupid question, but I've neglected a large part of physics until now and focused more on maths; now that I am becoming more intrigued by this field, I am getting confused over these definitions. I've already had a post up for $4$ days and have become really fatigued over this question, so can anyone please help me out?
(Side note: Can't find anything on this)
Answer: Long comment. You are confused because the two body problem can be analyzed in two different ways. The way in the picture you reference, analyzes the motion as if it were one dimensional, you discard information on angle and retain only r. You are in a reference frame that rotates with the mass, and the potential is not just the gravitational potential, but it also includes the centrifugal force. That is why it is labeled $V_{eff}$, effective potential. This one can become positive.
The gravitational potential is an upside down hyperbola (V=-k/r), and can never be positive (if we define the potential to be zero when the particle is at infinity). | {
"domain": "physics.stackexchange",
"id": 81177,
"tags": "newtonian-mechanics, potential-energy, conventions"
} |
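The distinction drawn in the answer can be made concrete: the effective potential $V_{eff} = -k/r + L^2/(2mr^2)$ is positive inside $r = L^2/(2mk)$ (where the centrifugal barrier dominates) and negative outside, while the bare gravitational potential $-k/r$ is negative everywhere. A sketch with illustrative unit values:

```python
def V_eff(r, k=1.0, L=1.0, m=1.0):
    """Effective radial potential in the co-rotating picture:
    gravitational term plus the always-positive centrifugal barrier."""
    return -k / r + L**2 / (2 * m * r**2)

r_crossover = 0.5                # L**2 / (2*m*k) with k = L = m = 1
print(V_eff(0.25))               # positive: centrifugal term dominates
print(V_eff(1.0))                # negative: gravity dominates
print(V_eff(r_crossover))        # zero at the crossover radius
```

The constants k, L, m here are made-up unit values purely to show the sign change; the bare potential $-k/r$ evaluated at the same radii is negative in all three cases.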
Is the velocity of a fluid in a vertical tube constant? | Question: Here is how I concluded that:
Let there be two points $A$ and $B$ in a tube, $B$ being a little higher up.
Using the Bernoulli's equation:
$$P_B+\frac{\rho v'^2}{2}+\rho gh=(\rho gh+P_B)+\frac{\rho v^2}{2}$$
Here, the terms cancel and the velocity comes to be equal.
Have I made an incorrect assumption?
According to other answers, this is not true.
Answer: Assuming the vertical pipe is of constant cross-section, and because Bernoulli applies to incompressible fluids only, then the continuity equation tells us:
$$vA=v'A'$$
with $A=A'$, then:
$$v=v'$$
applies always.
You may be confused by the following situation:
Here, because the top reservoir is very large, then $v_0\approx 0$ and $v_1$ can then be calculated from Bernoulli ($v_1\approx \sqrt{2gh}$ with $h$ the length of the pipe)
Then $v_1\gg v_0$.
But the velocity inside the pipe is always the same, that is $v_1$, for the reason outlined higher up. And $v_0$ lays of course outside the tube. | {
"domain": "physics.stackexchange",
"id": 75225,
"tags": "fluid-dynamics, water, flow, bernoulli-equation"
} |
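A quick numerical restatement of the answer (g, h, and A are made-up example values): Torricelli's result from Bernoulli fixes the exit speed, and continuity then keeps the speed the same everywhere inside the constant-cross-section pipe:

```python
import math

g = 9.81     # m/s^2 (assumed)
h = 2.0      # pipe length / head in metres (hypothetical)
A = 1e-4     # pipe cross-section in m^2 (hypothetical)

v1 = math.sqrt(2 * g * h)   # exit speed from Bernoulli, with v0 ~ 0 in the reservoir
print(f"v1 = {v1:.2f} m/s")

# Continuity: v * A = v' * A'. With A = A' the volumetric flow rate is constant,
# so the speed at any other point of the same pipe equals v1.
flow = v1 * A
v_prime = flow / A
print(abs(v_prime - v1) < 1e-9)
```

This matches the answer: $v_1 \gg v_0$, but inside the pipe itself the velocity never changes.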
Why are there both scalar and vector mesons? | Question: I understand that the term "vector meson" denotes a meson particle (composite particle with a quark and an antiquark) which transforms as a vector under the Lorentz group. Since both the quark and the antiquark are Lorentz spinors, we have the tensor product
$$
\frac 12\otimes\frac 12 = 1\oplus 0\tag{1}
$$
so from a group theoretic viewpoint, it makes sense to me that a meson is either a scalar or a vector.
But what's the exact difference between the $\pi$ and $\rho$ mesons? How does a scalar $\bar ud$ differ from a vector $\bar ud$? I suspect it has to do with the Clebsch-Gordan decomposition in $(1)$, so the scalar $\bar ud$'s spin state must be a superposition $\frac{1}{\sqrt{2}} \left(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle\right)$
Is that correct so far?
If yes, then is it possible that a quark in a $\rho$ meson "flips its sign" such that the vector meson becomes a scalar meson? This would be some process like $\rho\to\pi+X$ with $X$ some other particle that must be there because of energy conservation, for example a photon.
Answer:
I suspect it has to do with the Clebsch-Gordan decomposition in $(1)$, so the scalar $\bar ud$'s spin state must be a superposition $\frac{1}{\sqrt{2}} \left(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle\right)$
Yes, I would agree with this (though there are multiple isospin scalar states that determine the charge of the $\pi$ meson)
Is it possible that a quark in a $\rho$ meson "flips its sign" such that the vector meson becomes a scalar meson?
Yes, you could say that one of the quarks in the meson can 'flip' its spin when it reacts with some other particle, causing the vector meson to become a scalar meson. It is also possible for the $\rho$ meson to decay into two pions; since the $\rho$ mesons carry charge that has to be conserved, decays like $\rho^\pm \to \pi^\pm + \pi^0$ or $\rho^0 \to \pi^+ + \pi^-$ are possible. | {
"domain": "physics.stackexchange",
"id": 38831,
"tags": "standard-model, group-theory, isospin-symmetry, mesons"
} |
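The Clebsch–Gordan statement in the question can be verified directly: building $S^2$ for two spin-1/2 constituents shows the antisymmetric combination has $S(S+1) = 0$ (the scalar, $\pi$-like state) and the symmetric one has $S(S+1) = 2$ (the vector, $\rho$-like state). A pure-Python sketch with $\hbar = 1$:

```python
def kron(a, b):
    """Kronecker product of two square matrices given as nested lists."""
    m = len(b)
    n = len(a) * m
    return [[a[i // m][j // m] * b[i % m][j % m] for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# Total spin components S_i = (sigma_i (x) I + I (x) sigma_i) / 2
S = [[[0.5 * (x + y) for x, y in zip(r1, r2)]
      for r1, r2 in zip(kron(s, I2), kron(I2, s))]
     for s in (sx, sy, sz)]

# S^2 = Sx^2 + Sy^2 + Sz^2, with eigenvalue S(S+1)
S2 = [[0] * 4 for _ in range(4)]
for Si in S:
    Si2 = matmul(Si, Si)
    for i in range(4):
        for j in range(4):
            S2[i][j] += Si2[i][j]

r = 2 ** -0.5
singlet = [0, r, -r, 0]     # (|ud> - |du>)/sqrt(2): spin 0, pi-like
triplet = [0, r, r, 0]      # (|ud> + |du>)/sqrt(2): spin 1 (m=0), rho-like
print(matvec(S2, singlet))  # annihilated by S^2: S(S+1) = 0
print(matvec(S2, triplet))  # eigenvector with S(S+1) = 2
```

This confirms the superposition guessed in the question: the antisymmetric spin combination is exactly the scalar channel of $\frac12 \otimes \frac12 = 1 \oplus 0$.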
lunar north node in heliocentric terms? | Question: I am wondering how to get the heliocentric longitude and latitude for the Lunar North Node, for example.
I have been able to extract from the Swiss Ephemeris api the geocentric and heliocentric longitude and latitude of the Earth's moon, as well as the geocentric Right Ascension and Declination of the Earth's moon.
Unfortunately, it seems to me that the Swiss Ephemeris api consistently reports a value of zero for the heliocentric and geocentric longitude and latitude of the Lunar North Node and South Node. I have been able to extract from the Swiss Ephemeris api the geocentric Right Ascension and Declination for the Lunar North Node and South Node, but the heliocentric values are zero.
I suspect that guidance has already been provided on this forum about how to transform Right Ascension and Declination values to Longitude and Latitude, but if I succeeded in doing that for the Lunar North Node, then I would expect my results to be expressed in terms of geocentric Longitude and Latitude only.
Does anyone have a suggestion about how to get the heliocentric Longitude and Latitude, or alternatively the heliocentric Right Ascension and Declination of the Lunar North Node?
Answer: The longitude of the ascending node is a standard orbital element, usually given the symbol $\Omega$ (upper-case omega). You can request osculating orbital elements of many solar system bodies from JPL Horizons.
For lunar elements, Horizons gives you two choices of reference plane: the ICRF frame XY plane, which is essentially the Earth equatorial plane of the J2000.0 epoch, or the J2000.0 ecliptic plane. If you use the ecliptic plane, the resulting longitude is the ecliptic longitude of the node. If you use the equatorial plane, the resulting longitude is the RA of the node.
Orbit elements are calculated using TDB (Barycentric Dynamical Time). Currently, delta_T = TDB - UT is ~69 seconds. You can query Horizons for a more accurate value of delta_T when requesting an Observer or Vector table, but not when requesting an Elements table.
Here is a typical Horizons batch file that requests the Moon's elements relative to the center of the Earth, using the ecliptic plane. The center of the Earth has ID number 399, the Moon has ID number 301 (the Earth-Moon barycenter has ID 3).
!$$SOF
MAKE_EPHEM = 'YES'
OBJ_DATA = 'NO'
COMMAND = '301'
CENTER = '@399'
EPHEM_TYPE = 'ELEMENTS'
CAL_TYPE = 'GREGORIAN'
CSV_FORMAT = 'YES'
REF_PLANE= 'Ecliptic'
START_TIME = '2023-Aug-1'
STOP_TIME = '2023-Aug-8'
STEP_SIZE = '1d'
!$$EOF
Here's some Python / Sage code that makes the request using the Horizons file API and prints the results. The code is mostly plain Python, it just uses some Sage features to get the input arguments. You can select geocentric or heliocentric data, and you can select the reference plane. And you can select the time span. Please see the Horizons docs for info on all of the time specification options.
""" Retrieve orbital elements of the Moon from Horizons,
relative to the Earth or the Sun.
Written by PM 2Ring 2023.08.13
"""
import re, requests
url = "https://ssd.jpl.nasa.gov/api/horizons_file.api"
api_version = "1.0"
base_cmd = """
MAKE_EPHEM = 'YES'
OBJ_DATA = 'NO'
COMMAND = '301'
EPHEM_TYPE = 'ELEMENTS'
CAL_TYPE = 'GREGORIAN'
CSV_FORMAT = 'YES'
"""
def fetch_data(center, plane, start, stop, step, verbose=False):
    cmd = f"""
!$$SOF
{base_cmd}
CENTER = '{center}'
REF_PLANE= '{plane}'
START_TIME = '{start}'
STOP_TIME = '{stop}'
STEP_SIZE = '{step}'
!$$EOF
"""
    #print(cmd)
    req = requests.post(url, data={'format': 'text'}, files={'input': ('cmd', cmd)})
    version = re.search(r"API VERSION:\s*(\S*)", req.text).group(1)
    if version != api_version:
        print(f"Warning: API version is {version}, but this script is for {api_version}")
    m = re.search(r"(?s)\$\$SOE(.*)\$\$EOE", req.text)
    if m is None:
        print("NO EPHEMERIS")
        print(req.text)
        return None
    if verbose:
        print(req.text)
    else:
        lines = req.text.splitlines()
        print("\n".join(lines[5:16]))
    return m.group(1)[1:]
@interact
def main(txt=HtmlBox("The Moon's orbital elements"),
        center=Selector([("@399", "Earth"), ("@10", "Sun")], selector_type='radio'),
        plane = Selector([("Ecliptic", "Ecliptic"), ("Frame", "Equator")], selector_type='radio'),
        start="2023-Aug-1", stop="2023-Aug-8", step="1d",
        verbose=True, auto_update=False):
    result = fetch_data(center, plane,
                        start.strip(), stop.strip(), step.strip(), verbose=verbose)
    if result is None:
        print("No ephemeris data found!")
        return
    print(result)
Here's a live version of that script, running on the SageMathCell server.
The query returns results in plain ASCII form. Here's the ephemeris portion of the results.
JDTDB, Calendar Date (TDB), EC, QR, IN, OM, W, Tp, N, MA, TA, A, AD, PR,
**************************************************************************************************************************************************************************************************************************************************************************************************************************************************
$$SOE
2460157.500000000, A.D. 2023-Aug-01 00:00:00.0000, 7.445758991458748E-02, 3.571365862619948E+05, 5.008652258592002E+00, 2.822256182533066E+01, 2.900634646762910E+02, 2.460158888058302E+06, 1.518410206123902E-04, 3.417899740459997E+02, 3.388674616251798E+02, 3.858673382984544E+05, 4.145980903349139E+05, 2.370900818158910E+06,
2460158.500000000, A.D. 2023-Aug-02 00:00:00.0000, 7.442018378647541E-02, 3.573039656944939E+05, 5.010438148654570E+00, 2.804632484702357E+01, 2.884738888530200E+02, 2.460158772626169E+06, 1.517435366605044E-04, 3.564256960231063E+02, 3.558398892112612E+02, 3.860325813458171E+05, 4.147611969971404E+05, 2.372423945841116E+06,
2460159.500000000, A.D. 2023-Aug-03 00:00:00.0000, 7.340284302854846E-02, 3.572496770205199E+05, 5.016123674133671E+00, 2.788500405977741E+01, 2.869024014201097E+02, 2.460158658978031E+06, 1.520284336503910E-04, 1.104703942319290E+01, 1.281728923915146E+01, 3.855501544901964E+05, 4.138506319598729E+05, 2.367978090387134E+06,
2460160.500000000, A.D. 2023-Aug-04 00:00:00.0000, 7.162883937549905E-02, 3.569891829431704E+05, 5.024686604357866E+00, 2.775623512602453E+01, 2.850820375305457E+02, 2.460158530548510E+06, 1.526321490973527E-04, 2.597197940124664E+01, 2.987693813420488E+01, 3.845328227376530E+05, 4.120764625321357E+05, 2.358611879142072E+06,
2460161.500000000, A.D. 2023-Aug-05 00:00:00.0000, 6.957637835472257E-02, 3.565535202363541E+05, 5.034473963517143E+00, 2.766974414675761E+01, 2.829331038624139E+02, 2.460158382784924E+06, 1.534193512592334E-04, 4.132003231720461E+01, 4.694188100651857E+01, 3.832163242006440E+05, 4.098791281649340E+05, 2.346509726740445E+06,
2460162.500000000, A.D. 2023-Aug-06 00:00:00.0000, 6.771479272345840E-02, 3.560319643831986E+05, 5.043614138526157E+00, 2.762519150265715E+01, 2.806633899532325E+02, 2.460158229386336E+06, 1.542182772497700E-04, 5.690361733634386E+01, 6.370248607985509E+01, 3.818916803617047E+05, 4.077513963402107E+05, 2.334353660409191E+06,
2460163.500000000, A.D. 2023-Aug-07 00:00:00.0000, 6.625425330791064E-02, 3.556004033547897E+05, 5.050458943166362E+00, 2.761268189373863E+01, 2.787078308606339E+02, 2.460158097611625E+06, 1.548623091365481E-04, 7.228451565776196E+01, 7.968276040122535E+01, 3.808321533078448E+05, 4.060639032608998E+05, 2.324645693372517E+06,
2460164.500000000, A.D. 2023-Aug-08 00:00:00.0000, 6.507275768816487E-02, 3.554955131298805E+05, 5.053938392861703E+00, 2.761547227317707E+01, 2.775232172597620E+02, 2.460158014454506E+06, 1.552250039753550E-04, 8.698050648988226E+01, 9.443786013757804E+01, 3.802386934953690E+05, 4.049818738608575E+05, 2.319213984733781E+06,
$$EOE
**************************************************************************************************************************************************************************************************************************************************************************************************************************************************
Here's a URL for the same request using the GET-based API:
https://ssd.jpl.nasa.gov/api/horizons.api?format=text&MAKE_EPHEM=YES&OBJ_DATA=NO&COMMAND=301&EPHEM_TYPE=ELEMENTS&CAL_TYPE=GREGORIAN&CSV_FORMAT=YES&CENTER=%40399&REF_PLANE=Ecliptic&START_TIME=2023-Aug-1&STOP_TIME=2023-Aug-8&STEP_SIZE=1d
Here's a slightly longer script that extracts the longitude of the node and converts it to a float. | {
"domain": "astronomy.stackexchange",
"id": 7051,
"tags": "the-sun"
} |
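On the side question of converting Right Ascension and Declination to longitude and latitude: this is a standard rotation by the obliquity of the ecliptic. A sketch of the geocentric transformation only (for heliocentric coordinates you still want an ephemeris service such as Horizons, as in the answer above; the obliquity value used here is the approximate J2000 mean obliquity):

```python
import math

def equatorial_to_ecliptic(ra_deg, dec_deg, obliquity_deg=23.4393):
    """Convert RA/Dec (degrees) to ecliptic longitude/latitude (degrees)
    via the standard spherical rotation about the equinox direction."""
    ra, dec, eps = map(math.radians, (ra_deg, dec_deg, obliquity_deg))
    lon = math.atan2(math.sin(ra) * math.cos(eps) + math.tan(dec) * math.sin(eps),
                     math.cos(ra))
    lat = math.asin(math.sin(dec) * math.cos(eps)
                    - math.cos(dec) * math.sin(eps) * math.sin(ra))
    return math.degrees(lon) % 360, math.degrees(lat)

# The June solstice point (RA 90 deg, Dec = +obliquity) lies on the ecliptic
print(equatorial_to_ecliptic(90.0, 23.4393))   # ~(90, 0)
```

Note this rotation only changes the reference plane, not the origin: applying it to geocentric RA/Dec yields geocentric ecliptic coordinates, which is why a heliocentric result for the node still has to come from heliocentric data.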
Utopian Tree in F# | Question: This is the Utopian Tree in F# (from HackerRank)
The Utopian tree goes through 2 cycles of growth every year. The first
growth cycle of the tree occurs during the monsoon, when it doubles in
height. The second growth cycle of the tree occurs during the summer,
when its height increases by 1 meter. Now, a new Utopian tree sapling
is planted at the onset of the monsoon. Its height is 1 meter. Can you
find the height of the tree after N growth cycles?
This helped me understand what to do:
//N = 0 ret initial height
//N = 1 ret double height
//N = 2 ret double height, +1
//N = 4 ret double, +1, double, + 1
Anywhere that I could improve this F# code? I'm trying to stay within the parameters of functional language programming. Per the doc on HackerRank, I had to take in n parameters, 0 being the number of test cases followed by the number of growth cycles:
[<EntryPoint>]
let main argv =
let tests = System.Int32.Parse(System.Console.ReadLine())
let cycles = [for i in 1..tests -> System.Int32.Parse(System.Console.ReadLine())]
let even h = h * 2
let odd h = h + 1
let rec height acc cycle n =
match cycle - cycle + n with
| c when c = cycle -> acc
| c when c % 2 = 0 -> height (even acc) cycle (n+1)
| _ -> height (odd acc) cycle (n+1)
cycles |> List.map (fun c -> height 1 c 0) |> List.map (fun x -> System.Console.WriteLine x)
0 // return an integer exit code
Example input/ouput:
3
0
1
4
1
2
7
Answer: A few points:
The value i on the second line is not used. It is customary to use underscore to name such values.
The expression cycle - cycle + n seems a bit redundant :-)
If you swap the last two arguments of height, you can use it in partially applied form as argument to List.map on the last line (i.e. List.map (height 1 0)), instead of introducing extra lambda.
The parameter cycle is improperly named: it doesn't signify a "cycle", but rather the "total number of cycles".
In the "real world" (e.g. if you were designing a library or something), you shouldn't expose your "temporary state" parameters (i.e. acc and n) to the consumer. Instead, you should expose just the function with one parameter ("total number of cycles"), and have the actual recursive function private.
Functions even and odd seem weirdly named: they don't do what they say. Either name them correctly (like, evenYearGrowth) or get rid of them altogether (personally I prefer the latter).
The very last lambda (the one calling WriteLine) is also redundant: it just passes its argument directly to another function, so it can be replaced with that function.
The last List.map call has a result of non-unit type (namely, List<unit>), which causes the compiler to produce a warning on that line (along the lines of "the result is being discarded, blah-blah-blah"). If you want to call a function on every element of the list solely for its side effects, use List.iter.
Depending on performance considerations and expected problem size, the use of List may be a problem: mapping over a list (i.e. List.map) allocates a new list, which you don't actually need, because you're just reading values and "passing them through a pipeline", so to say. You don't need to remember them. In these circumstances, I use seq.
On the other hand, the use of Seq all through will result in outputting each result right after receiving each input, which may or may not be the desired behavior. I don't know if it is, so I will leave the input acquisition as is.
The System.Int32.Parse(System.Console.ReadLine()) piece looks kinda scary and unclean. Plus, it's repeated twice. So I would make it a separate function.
Lastly (though this one is a matter of taste), I would use a string of ifs instead of the match. Seems shorter and cleaner.
So, having said all of the above, here is my version:
let main argv =
    let height =
        let rec h acc n totalCycles =
            if n = totalCycles then acc
            elif n % 2 = 0 then h (acc * 2) (n+1) totalCycles
            else h (acc + 1) (n+1) totalCycles
        h 1 0
    let readInt = System.Console.ReadLine >> System.Int32.Parse
    let tests = readInt()
    let cycles = [for _ in 1..tests -> readInt()]
    cycles
    |> Seq.map height
    |> Seq.iter System.Console.WriteLine
    0 // return an integer exit code
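For readers less familiar with F#, the accumulator logic of the height function above can be sketched in Python (a hypothetical translation for illustration only, not part of the original review):

```python
def height(total_cycles):
    # Mirrors the tail-recursive F# helper h: start at 1,
    # double on even-numbered cycles, add 1 on odd-numbered ones.
    acc = 1
    for n in range(total_cycles):
        if n % 2 == 0:
            acc *= 2
        else:
            acc += 1
    return acc
```

The explicit loop plays the role of the tail-recursive call with the accumulator argument.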
"domain": "codereview.stackexchange",
"id": 15715,
"tags": "programming-challenge, functional-programming, f#"
} |
Proper way to execute parameterized commands within a method | Question: This method allows for the execution of parameterized commands within a DAL class, and I tried to make it as reusable as possible. It works well but it feels a bit verbose to me. Is there a way to simplify this or is it good as is?
/// <summary>
/// Retrieves records from the specified data source.
/// </summary>
/// <param name="query">The command to be executed.</param>
/// <param name="dataSrc">The data source to use for the command.</param>
/// <param name="parameters">Optional parameters for a parameterized command.</param>
/// <returns>A System.Data.DataTable containing the returned records.</returns>
public static DataTable GetRecords(string query, DelConnection dataSrc, params SqlParameter[] parameters)
{
if (query == "") throw new ArgumentException("No command provided.", "query");
string conString = Dal.GetConnectionString(dataSrc);
using (SqlConnection con = new SqlConnection(conString))
using (SqlCommand cmd = new SqlCommand(query, con))
{
// Add parameters to command
foreach (SqlParameter p in parameters)
{
cmd.Parameters.Add(p);
}
// Fill data table
DataTable data = new DataTable();
using (SqlDataAdapter adap = new SqlDataAdapter(cmd))
{
adap.Fill(data);
}
return data;
}
} //// End GetRecords()
Answer: I would suggest few improvements:
Validate all parameters as early as possible. In your method, the validation of the parameters collection is not early enough. This is not really about performance, because 99.9% of the time no exception should be thrown; it is about grouping the validation logic together.
Make it an extension method. You can easily make this an extension method of DelConnection and then simplify your code at call site like dataSrc.GetRecords("some query here")
Consider returning an empty result instead of a null value, especially when you actually mean an empty result. My rule of thumb is that every method that returns some kind of collection should not return null at all.
Using params instead of concrete-type collection. Instead of using SqlParameterCollection, you can use params SqlParameter[] parameters instead. This will make it more convenient to call this method.
For the verbosity, your code is not verbose at all in my opinion. It is as short as it should be. | {
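A language-neutral sketch of the first and third suggestions above, i.e. validating all inputs up front and returning an empty collection rather than null (Python is used purely for illustration; all names here are hypothetical):

```python
def get_records(query, data_src, *parameters):
    # Group all validation at the top of the method, before any work is done.
    if not query:
        raise ValueError("No command provided.")
    if data_src is None:
        raise ValueError("No data source provided.")

    rows = []  # an empty result, never None, when nothing matches
    # ... execute the parameterized command against data_src and append rows ...
    return rows
```

Callers can then iterate over the result unconditionally, with no null check at the call site.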
"domain": "codereview.stackexchange",
"id": 3676,
"tags": "c#"
} |
Reversal of time arrow on IBM Q | Question: It is well known that a quantum computer is reversible. This means that it is possible to derive an input quantum state $|\psi_0\rangle$ from an output $|\psi_1\rangle$ of an algorithm described by a unitary matrix $U$ simply by applying transpose conjugate to $U$, i.e.
\begin{equation}
|\psi_0\rangle = U^\dagger|\psi_1\rangle
\end{equation}
In the article Arrow of Time and its Reversal on IBM Quantum Computer, an algorithm for time reversal, returning to the input state $|\psi_0\rangle$, is proposed. The steps of the algorithm are as follows:
Apply a forward time unitary evolution $U_\mathrm{nbit}|\psi_0\rangle = |\psi_1\rangle$
Apply an operator $U_\psi$ to change $|\psi_1\rangle$ to $|\psi_1^*\rangle$, where the new state $|\psi_1^*\rangle$ is complex conjugate to $|\psi_1\rangle$
Apply an operator $U_R$ to get "time-reversed" state $|R\psi_1\rangle$
Finally, apply again $U_\mathrm{nbit}$ to obtain the input state $|\psi_0\rangle$
According to the paper, the algorithm described above simulates reversal of the time arrow. Or in other words, it simulates a random quantum fluctuation causing a time reversal.
Clearly, when the algorithm is run on a quantum computer, it returns to the initial state, but without applying an inverse to each algorithm step. The algorithm simply goes forward.
My questions are these:
Why is it not possible to say that applying $U^\dagger$ to the output of an algorithm $U$ is a reversal of the time arrow in the general case?
It is true that the algorithm described above returns a quantum computer to its initial state, but it seems that the algorithm simply goes forward. So where can I see the reversal of the time arrow?
The authors of the article have found that as the number of qubits involved in the time reversal algorithm increases, the effect of time reversal diminishes:
How is it possible to reverse time for a few qubits and concurrently preserve the forward flow of time for the other qubits?
Does this mean that time flows differently for different qubits?
When do the qubits return to a common time frame so that they can be used in another calculation?
Answer: Of course if we have unitary evolution
$$|\psi_1\rangle = U|\psi_0\rangle$$
then
$$|\psi_0\rangle = U^\dagger|\psi_1\rangle$$
I did not read the paper, but evidently the authors do something different, based on the following: the Schrödinger equation
$$i\hbar\frac{\partial\Psi}{\partial t}=\hat{H}\Psi$$
changes its form if we substitute $t\rightarrow -t$ to complex conjugate:
$$-i\hbar\frac{\partial\Psi}{\partial t}=\hat{H}\Psi$$
and its solution is also complex conjugate.
So time reversal is antiunitary operator $U_RK$ where $U_R$ is unitary and $K$ is complex conjugation. | {
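The antiunitary structure can be checked numerically. The sketch below is an illustration, not taken from the paper: for a real symmetric Hamiltonian, the eigenvectors can be chosen real, so the time-reversal operator reduces to plain complex conjugation $K$ (the unitary part $U_R$ is the identity), and $\overline{e^{-iHt}} = e^{+iHt}$. Conjugating the evolved state and evolving forward again then recovers a real initial state:

```python
import numpy as np

rng = np.random.default_rng(42)

# Real symmetric Hamiltonian (hbar = 1). Its eigenvectors are real,
# so U_R is trivial and time reversal is just complex conjugation K.
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2
w, V = np.linalg.eigh(H)

t = 1.3
U = V @ np.diag(np.exp(-1j * w * t)) @ V.T  # U = exp(-i H t)

psi0 = rng.standard_normal(4)
psi0 = psi0 / np.linalg.norm(psi0)          # real initial state

psi1 = U @ psi0             # forward evolution
psi1_conj = np.conj(psi1)   # "time reversal": complex conjugation
psi_back = U @ psi1_conj    # forward evolution again

assert np.allclose(psi_back, psi0)             # initial state recovered
assert np.allclose(np.conj(U) @ U, np.eye(4))  # conj(U) = U^{-1} for real H
```

Note that the computer never applies $U^\dagger$: the state is recovered by running the same forward evolution on the conjugated state, which is exactly the structure of the algorithm in the question.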
"domain": "quantumcomputing.stackexchange",
"id": 1175,
"tags": "ibm-q-experience, experimental-realization, research"
} |
Abstracting and unit testing lookups in Excel table | Question: Background
I have a VBA solution that I use to ingest investment text reports and reformat them for analysis in Excel. It works, but the macros involve a lot of direct manipulation of Excel objects, and have no unit tests.
After finding RubberDuck, and reading several years' worth of excellent posts from @MathieuGuindon, I've decided to re-write the "brute force"-heavy solution as a way to learn these new concepts and techniques.
When ingesting from a report, I also pull additional attributes from Excel tables. I'm beginning my re-write with those lookup tables, the first of which I'm submitting here.
Initial goals:
Programming to Interfaces not classes
Making Services and Proxies rather than direct access to Excel sheets and ranges
Using the PredeclaredId attribute to enable a Create method
Thorough unit testing
Apart from general review, I also have some specific questions, which I'll post following the code.
Code
IAssetTableProxy -- abstracts reference to the "physical" excel table's data rows
'@Folder("Services.Interfaces")
Option Explicit
Public Function GetAssetTableData() As Variant()
End Function
AssetTableProxy -- Implementation
'@Folder("Services.Proxies")
Option Explicit
Implements IAssetTableProxy
Public Function IAssetTableProxy_GetAssetTableData() As Variant()
Dim tblName As String
tblName = "AssetInfoTable"
IAssetTableProxy_GetAssetTableData = Worksheets(Range(tblName).Parent.Name).ListObjects(tblName).DataBodyRange.value
End Function
AssetInfo -- a class to handle the three values for each row: Desc, Ticker, Type
'@PredeclaredId
'@Folder("Services")
Option Explicit
Private Type TAssetInfo
Desc As String
Ticker As String
AssetType As String
End Type
Private this As TAssetInfo
Public Property Get Desc() As String
Desc = this.Desc
End Property
Friend Property Let Desc(ByVal value As String)
this.Desc = value
End Property
Public Property Get Ticker() As String
Ticker = this.Ticker
End Property
Friend Property Let Ticker(ByVal value As String)
this.Ticker = value
End Property
Public Property Get AssetType() As String
AssetType = this.AssetType
End Property
Friend Property Let AssetType(ByVal value As String)
this.AssetType = value
End Property
Public Property Get Self() As AssetInfo
Set Self = Me
End Property
Public Function Create(ByVal theDesc As String, ByVal theTicker As String, ByVal theAssetType As String) As AssetInfo
With New AssetInfo
.Desc = theDesc
.Ticker = theTicker
.AssetType = theAssetType
Set Create = .Self
End With
End Function
IAssetInfoService -- holds a collection of AssetInfo objects and
provides the needed lookups to data from AssetTableProxy
'@Folder("Services.Interfaces")
Option Explicit
Public Function Create(ByRef assetTbl As IAssetTableProxy) As IAssetInfoService
End Function
Public Function GetAssetTypeForDesc(ByVal Desc As String) As String
End Function
Public Function GetTickerForDesc(ByVal Desc As String) As String
End Function
AssetInfoService -- implementation
'@PredeclaredId
'@Folder("Services")
Option Explicit
Option Base 1
Implements IAssetInfoService
Private Type TAssetsTable
AssetColl As Collection
End Type
Private this As TAssetsTable
Friend Property Get Assets() As Collection
Set Assets = this.AssetColl
End Property
Friend Property Set Assets(ByRef coll As Collection)
Set this.AssetColl = coll
End Property
Public Property Get Self() As IAssetInfoService
Set Self = Me
End Property
Public Function IAssetInfoService_Create(ByRef assetTbl As IAssetTableProxy) As IAssetInfoService
Dim twoDArr() As Variant
twoDArr = assetTbl.GetAssetTableData
With New AssetInfoService
Dim tempAsset As AssetInfo
Dim tempColl As Collection
Set tempColl = New Collection
Dim rw As Long
For rw = 1 To UBound(twoDArr, 1)
Set tempAsset = AssetInfo.Create(twoDArr(rw, 1), twoDArr(rw, 2), twoDArr(rw, 3))
tempColl.Add tempAsset, key:=tempAsset.Desc
Next rw
Set .Assets = tempColl
Set IAssetInfoService_Create = .Self
End With
End Function
Public Function IAssetInfoService_GetAssetTypeForDesc(ByVal Desc As String) As String
Dim tempTp As String
If Exists(this.AssetColl, Desc) Then
tempTp = this.AssetColl(Desc).AssetType
Else
tempTp = "Unknown Asset"
End If
IAssetInfoService_GetAssetTypeForDesc = tempTp
End Function
Public Function IAssetInfoService_GetTickerForDesc(ByVal Desc As String) As String
Dim tempTicker As String
If Exists(this.AssetColl, Desc) Then
tempTicker = this.AssetColl(Desc).Ticker
Else
tempTicker = "Unknown Asset"
End If
IAssetInfoService_GetTickerForDesc = tempTicker
End Function
Private Function Exists(ByRef coll As Collection, ByRef key As String) As Boolean
On Error GoTo ErrHandler
coll.Item key
Exists = True
ErrHandler:
End Function
Unit Testing
AssetTableTestProxy -- proxy implementation for testing w/o dependency on actual excel table
'@Folder("Services.Proxies")
Option Explicit
Option Base 1
Implements IAssetTableProxy
Public Function IAssetTableProxy_GetAssetTableData() As Variant()
Dim twoDArr(1 To 3, 1 To 3) As Variant
twoDArr(1, 1) = "Asset1"
twoDArr(1, 2) = "Tick1"
twoDArr(1, 3) = "Type1"
twoDArr(2, 1) = "Asset2"
twoDArr(2, 2) = "Tick2"
twoDArr(2, 3) = "Type2"
twoDArr(3, 1) = "Asset3"
twoDArr(3, 2) = "Tick3"
twoDArr(3, 3) = "Type3"
IAssetTableProxy_GetAssetTableData = twoDArr
End Function
TestAssetInfoService -- Unit tests for Asset Info Service
Option Explicit
Option Private Module
'@TestModule
'@Folder("Tests")
Private Assert As Object
Private Fakes As Object
Private assetTbl As IAssetTableProxy
'@ModuleInitialize
Public Sub ModuleInitialize()
'this method runs once per module.
Set Assert = CreateObject("Rubberduck.AssertClass")
Set Fakes = CreateObject("Rubberduck.FakesProvider")
Set assetTbl = New AssetTableTestProxy
End Sub
'@ModuleCleanup
Public Sub ModuleCleanup()
'this method runs once per module.
Set Assert = Nothing
Set Fakes = Nothing
Set assetTbl = Nothing
End Sub
'@TestInitialize
Public Sub TestInitialize()
'this method runs before every test in the module.
End Sub
'@TestCleanup
Public Sub TestCleanup()
'this method runs after every test in the module.
End Sub
'@TestMethod
Public Sub GivenAssetInTable_GetTicker()
On Error GoTo TestFail
'Arrange:
Dim tbl As IAssetInfoService
Set tbl = AssetInfoService.IAssetInfoService_Create(assetTbl)
'Act:
Dim tick As String
tick = tbl.GetTickerForDesc("Asset2")
'Assert:
Assert.AreEqual "Tick2", tick, "Tick was: " & tick
TestExit:
Exit Sub
TestFail:
Assert.Fail "Test raised an error: #" & Err.Number & " - " & Err.Description
End Sub
'@TestMethod
Public Sub GivenAssetInTable_GetAssetType()
On Error GoTo TestFail
'Arrange:
Dim tbl As IAssetInfoService
Set tbl = AssetInfoService.IAssetInfoService_Create(assetTbl)
'Act:
Dim assetTp As String
assetTp = tbl.GetAssetTypeForDesc("Asset2")
'Assert:
Assert.AreEqual "Type2", assetTp, "AssetTp was: " & assetTp
TestExit:
Exit Sub
TestFail:
Assert.Fail "Test raised an error: #" & Err.Number & " - " & Err.Description
End Sub
'@TestMethod
Public Sub GivenAssetNotInTable_GetUnknownAssetMsg()
On Error GoTo TestFail
'Arrange:
Dim tbl As IAssetInfoService
Set tbl = AssetInfoService.IAssetInfoService_Create(assetTbl)
'Act:
Dim tp As String
tp = tbl.GetAssetTypeForDesc("unsub")
'Assert:
Assert.AreEqual "Unknown Asset", tp
TestExit:
Exit Sub
TestFail:
Assert.Fail "Test raised an error: #" & Err.Number & " - " & Err.Description
End Sub
Module1 -- additional sub to play around with functions
Option Explicit
Sub TestAssetInfoTable()
Dim assetTbl As IAssetTableProxy
Dim testAssetTbl As AssetTableTestProxy
Set assetTbl = New AssetTableProxy
Set testAssetTbl = New AssetTableTestProxy
Dim assetSvc As IAssetInfoService
Dim testAssetSvc As IAssetInfoService
Set assetSvc = AssetInfoService.IAssetInfoService_Create(assetTbl)
Set testAssetSvc = AssetInfoService.IAssetInfoService_Create(testAssetTbl)
Dim tp As String
Dim tick As String
tp = assetSvc.GetAssetTypeForDesc("AMAZON COM INC (AMZN)")
tick = assetSvc.GetTickerForDesc("AMAZON COM INC (AMZN)")
MsgBox ("Real Svc: tp=" & tp & "; tick=" & tick)
tp = testAssetSvc.GetAssetTypeForDesc("Asset3")
tick = testAssetSvc.GetTickerForDesc("Asset3")
MsgBox ("Test Svc: tp=" & tp & "; tick=" & tick)
End Sub
Specific questions:
I initially had the "proxy" logic in the service class. But it felt like I was duplicating too many functions when I created the AssetInfoTestService class. Breaking it out to AssetTableProxy and AssetTableTestProxy allowed me to keep only one version of the service functions. But is this carrying things (abstraction?) too far?
Learning about interfaces, I believe I understand the following pieces:
the contract created by each Function mentioned in the interface;
the requisite coding of corresponding Interface_Function in the implementing class;
the Dim-ing of a class var "as Interface"; and
accessing the functions with classVar.Function
However there seems to be an exception here. In TestAssetInfoTable I dim assetSvc as IAssetInfoService. That interface has a Create function, and in the concrete class, I have IAssetInfoService_Create defined. But when I try to call AssetInfoService.Create(…) I get a compile error that only clears when I change to AssetInfoService.IAssetInfoService_Create. What am I missing there?
I see the "Option Base 1" thing. Since leaving C pointers behind long ago, I haven't really had a religious belief one way or the other on 0- vs 1-based arrays. I went with it here because, when I began playing with the (extremely handy) multiDimArray = Range, I noted the returned arrays are 1-based, and I kept tripping myself up between coding for those and coding for my own 0-based ones. So I just opted to go all 1-based. Rubberduck Code Inspections do always throw that decision back in my face though, so let me ask here: are there compelling reasons not to do that, or workarounds/tips for the multiDimArray = Range 1-based thing?
Answer: First off, nice work overall. It's apparent from the way you set up your interfaces and implementations that you "get it". Given that, most? of this can probably be classified as "nitpicking". I'm also not going to specifically address your second question, but the answers should be apparent based on the review itself (if not, feel free to ask in the comments). I'm completely ambivalent as to your 1st question (and can't really compare them without seeing the alternative structure), although others may have opinions there.
AssetInfoService's internal Collection is not properly encapsulated. You expose it like this...
Friend Property Get Assets() As Collection
Set Assets = this.AssetColl
End Property
...but that is relying on the caller to hold a reference to its interface instead of a hard reference to prevent a call like AssetInfoService.Assets.Remove or AssetInfoService.Assets.Add from anywhere in the same project. The Friend modifier obviously prevents other projects from doing this, but it isn't clear from the code provided why you would want a caller to be able to mess with the internals of the class like that. If the intention of the IAssetInfoService is to wrap a Collection (as evidenced by the Exists method), then I'd provide a complete wrapper.
Related to the above, I'd say it's overkill to provide an internal Type that contains a single member:
Private Type TAssetsTable
AssetColl As Collection
End Type
Private this As TAssetsTable
Nitpick, but I'd also prefer an empty line after End Type - that makes it more readable.
The factory Create methods are much, much clearer in the calling code if you implement them on the base class also. That's why you have to write code like this:
Set assetSvc = AssetInfoService.IAssetInfoService_Create(assetTbl)
Set testAssetSvc = AssetInfoService.IAssetInfoService_Create(testAssetTbl)
The best way to think of a class's implementation is the same way that it would be viewed in a COM TypeLib - internally, AssetInfoService is more or less treated as an implicit interface (let's call it _AssetInfoService to follow MS convention). Unlike .NET, the implemented interfaces are not aggregated back into the "base" interface implicitly - that's why you need to use the explicit interface version when you have an instance of the concrete class. If the intention is to have the procedure accessible from the implementing class, the standard way of doing this in VBA is to wrap the base method with the interface's implementation:
Public Function Create(ByRef assetTbl As IAssetTableProxy) As IAssetInfoService
Dim twoDArr() As Variant
twoDArr = assetTbl.GetAssetTableData
With New AssetInfoService
'... etc.
End Function
Public Function IAssetInfoService_Create(ByRef assetTbl As IAssetTableProxy) As IAssetInfoService
Set IAssetInfoService_Create = Me.Create(assetTbl)
End Function
That makes the calling code much more readable:
Set assetSvc = AssetInfoService.Create(assetTbl)
Set testAssetSvc = AssetInfoService.Create(testAssetTbl)
I don't see a reason for the Self properties of your factories to be public. If you're only intending to provide access to them via their interfaces, there isn't a reason to expose this on the concrete instances. The reason for this is that there is no restriction on "up-casting". This is perfectly legal:
Sub Foo()
Dim bar As IAssetInfoService
Set bar = AssetInfoService.IAssetInfoService_Create(assetTbl)
Dim upCast As AssetInfoService
Set upCast = bar
With upCast.Self
'Uhhhh...
End With
End Sub
The other side of this is related to the discussion of the "base" interface above. If for some reason a caller up-casts to AssetTableProxy, they'll find that it has no public members...
AssetTableProxy has what I would consider to be a bug. This code is implicitly using the ActiveWorkbook and the ActiveSheet:
Public Function IAssetTableProxy_GetAssetTableData() As Variant()
Dim tblName As String
tblName = "AssetInfoTable"
IAssetTableProxy_GetAssetTableData = Worksheets(Range(tblName).Parent.Name).ListObjects(tblName).DataBodyRange.value
End Function
If this is always supposed to reference the current workbook, I'd use ThisWorkbook.Worksheets (or the equivalent code name). The unqualified Range will throw if the ActiveSheet isn't a Worksheet, so your method of finding the ListObject this way puts you in kind of a catch 22 because you're only using the name of the table, which means that you need to get its parent worksheet to find... its worksheet? Just skip all of this and use the code name of the sheet directly. Also, tblName is functionally a constant. I'd declare it as one.
Private Const TABLE_NAME As String = "AssetInfoTable"
Public Function IAssetTableProxy_GetAssetTableData() As Variant()
'Replace Sheet1 with the actual code name of the worksheet.
IAssetTableProxy_GetAssetTableData = Sheet1.ListObjects(TABLE_NAME).DataBodyRange.value
End Function
Nitpick - I would remove the underscores in your test names (i.e. GivenAssetInTable_GetTicker()). The underscore has special meaning in VBA for procedure names - it's treated as kind of an "interface or event delimiter". This is probably our fault (as in Rubberduck's - I'm a project contributor) in that the "Add test module with stubs" used to do this when it was naming tests. This has been corrected in the current build, and TBH I'd like to see an inspection for use of an underscore in a procedure name that isn't an interface member or event handler (but I digress). The main take-away here is that when you see an underscore in a procedure name, you shouldn't need to ask yourself if it has meaning outside the name.
Another nitpick - there's no reason to Set assetTbl = Nothing in ModuleCleanup(). The reason that the Assert and Fakes are explicitly set to Nothing has to do with the internal architecture of Rubberduck's testing engine. In your case it doesn't matter in the least if the reference to your IAssetTableProxy isn't immediately freed.
Specifically regarding your third question. The reason Rubberduck suggests not using Option Base 1 is that it is a per module option that overrides the default array base of the language. If you specify the lower bound like you do here...
Option Explicit
Option Base 1
Implements IAssetTableProxy
Public Function IAssetTableProxy_GetAssetTableData() As Variant()
Dim twoDArr(1 To 3, 1 To 3) As Variant
'...
IAssetTableProxy_GetAssetTableData = twoDArr
End Function
...it is superfluous - you're always creating an array with base 1 and doing it explicitly. You should be doing this anyway if you're using a non-zero base because it's clear that the lower bound is "non-standard" without requiring the person looking at the code to scroll all the way to the top of the module and catch the fact that you have a non-standard option defined. I can see it at the point of the declaration.
The other place it appears is in AssetInfoService, but it is completely unneeded there also. The only place you are assigning an array is here...
Dim twoDArr() As Variant
twoDArr = assetTbl.GetAssetTableData
...and that module doesn't control the actual creation of the array. You can remove Option Base 1 everywhere in your code and it will have no effect whatsoever.
If you're using arrays from an external source (i.e. Excel), you should be using LBound anyway - VBA has a zero default, but a COM SAFEARRAY allows the lower bound to be an arbitrary number. Pedantically, this code...
For rw = 1 To UBound(twoDArr, 1)
...should be:
For rw = LBound(twoDArr, 1) To UBound(twoDArr, 1)
That decouples your interface from the representation of the array that is supplied by the IAssetTableProxy. This is just like any other form of coupling in that it makes the implementation "brittle" to the extent that it makes assumptions about the form of the data. | {
"domain": "codereview.stackexchange",
"id": 33798,
"tags": "vba, excel, rubberduck"
} |
Relation between optimal decompositions for entanglement of formation | Question: In the answer of this question, the last paragraph says that
If you know one decomposition which is optimal for Entanglement of Formation for a given state $\rho$, you can obtain the optimal decomposition for other states by simply shifting the weights $q_i$ in the optimal decomposition.
I'd like to know how to prove this or is there any references?
Answer: We will prove the following statement:
Let $\rho=\sum p_i \rho_i$ ($p_i>0$) be a decomposition of a given state $\rho$ which minimizes the cost function $C(\{p_i,\rho_i\})=\sum p_i f(\rho_i)$ for that state. Then,
$\rho'= \sum p_i' \rho_i$ is a decomposition which minimizes $C$ for the state $\rho'$.
Note. In the case of entanglement of formation, $f(\rho_i)=S(\mathrm{tr}_B\,\rho_i)$.
Proof. We will prove this by contradiction. Assume that there exists a decomposition $\rho'=\sum q_j \sigma_j$ for which
$$\sum q_j\,f(\sigma_j)<\sum p_i'\,f(\rho_i)\ .$$
Choose a $\lambda>0$ s.th. $r_i:=p_i-\lambda p_i'\ge0$ for all $i$. Then,
\begin{align}
C(\{p_i,\rho_i\}) & = \sum_i p_i f(\rho_i)\\
&=\lambda\left[\sum_i p_i'\, f(\rho_i)\right] + \sum_i r_i\, f(\rho_i)\\
&>\lambda\left[\sum_j q_j\,f(\sigma_j)\right]+ \sum_i r_i\, f(\rho_i)\\
&=C(\{w_k,\tau_k\})
\end{align}
with the ensemble $\{w_k,\tau_k\}$ the union of the ensembles $\{\lambda q_j,\sigma_j\}$ and $\{r_i,\rho_i\}$, which is in contradiction to the assumption that $\rho=\sum p_i\rho_i$ is the optimal decomposition for $\rho$.
$\Box$
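For completeness (a step the proof above leaves implicit), one can verify that $\{w_k,\tau_k\}$ is a legitimate ensemble for $\rho$: the weights are non-negative by the choice of $\lambda$, they sum to one, and the ensemble averages to $\rho$:

```latex
\begin{align}
\sum_k w_k &= \lambda\sum_j q_j + \sum_i r_i
            = \lambda + \sum_i\left(p_i - \lambda p_i'\right)
            = \lambda + 1 - \lambda = 1\,,\\
\sum_k w_k\,\tau_k &= \lambda\sum_j q_j\,\sigma_j + \sum_i r_i\,\rho_i
            = \lambda\rho' + \left(\rho - \lambda\rho'\right) = \rho\,.
\end{align}
```

Here we used $\sum_j q_j = \sum_i p_i = \sum_i p_i' = 1$, $\sum_j q_j\sigma_j = \rho'$, and $\sum_i p_i\rho_i = \rho$.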
As far as I am aware, this is a well-known result, but I don't know where this is written down.
Late edit: I just stumbled across the result: it is proven in Sec. IV B of K. G. H. Vollbrecht and R. F. Werner, Phys. Rev. A 64, 062307 (2001) (also quant-ph/0010095).
"domain": "physics.stackexchange",
"id": 18957,
"tags": "quantum-information, quantum-entanglement"
} |
What maintains constant voltage in a battery? | Question: I know there's lots of questions that address similar situations, (Batteries connected in Parallel,
Batteries and fields?,
Naive Question About Batteries,
and the oft-viewed
I don't understand what we really mean by voltage drop).
However, I have a question, and after examining just the battery structure, I have been wondering, exactly what structure/process maintains the constant voltage drop within batteries? I mean, certain chemical reactions are occurring in each half-cell, and electrolytes maintain charge conservation, I get that there's some motivation for the electron to move from one cell to the other.
But why is this motivation so constant? I really want to get this.
Answer: Consider for a moment, a cell that is not connected to a circuit, i.e., there is no path for current external to the cell.
The chemical reactions inside the cell remove electrons from the cathode and add electrons to the anode.
Thus, as the chemical reactions proceed, an electric field builds between the anode and cathode due to the differing charge densities.
It turns out that this electric field acts to reduce the rate of the chemical reactions within the cell.
At some point, the electric field is strong enough to effectively stop the chemical reactions within the cell.
The voltage across the terminals of the cell, due to this electric field, is then constant and this is the open-circuit voltage of the cell.
If an external circuit is connected to the cell, electrons flow from the anode through the external circuit and into the cathode, reducing the difference in charge densities which in turn reduces the electric field just enough such that the chemical reactions can once again take place to maintain the electric current through the circuit.
The larger the external current, the greater the required rate of chemical reactions and thus, the lower the voltage across the terminals.
As long as the circuit current is significantly less than the maximum current the chemicals reactions can sustain, the voltage across the battery terminals will be close to the open circuit voltage.
As the external current approaches the maximum current, the voltage across the terminals rapidly falls and when the voltage is zero, the cell is supplying maximum current. This current is called the short-circuit current. | {
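The last few paragraphs amount to a linear internal-resistance model of the cell: terminal voltage equals the open-circuit EMF minus a drop proportional to the drawn current, vanishing at the short-circuit current. A minimal numeric sketch (the specific values are illustrative assumptions, not from the answer):

```python
EMF = 1.5           # open-circuit voltage in volts, set by the cell chemistry
R_INT = 0.5         # effective internal resistance in ohms (reaction-rate limit)
I_SC = EMF / R_INT  # short-circuit current: terminal voltage reaches zero here

def terminal_voltage(i):
    # Terminal voltage sags linearly as the drawn current
    # approaches the short-circuit current.
    return EMF - i * R_INT

assert terminal_voltage(0) == EMF         # open circuit: full EMF
assert terminal_voltage(I_SC) == 0        # short circuit: zero terminal voltage
assert terminal_voltage(0.1) > 0.9 * EMF  # light load: close to open-circuit value
```

Real discharge curves are of course nonlinear, but this captures why the voltage stays near the open-circuit value for currents well below the short-circuit current.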
"domain": "physics.stackexchange",
"id": 15039,
"tags": "potential, potential-energy, voltage, batteries, chemical-potential"
} |
How many electrons are around the platinum in an ethenetriplatinate complex? | Question:
In $\ce{[Pt3(C2H4)]-}$, how many electrons are there around platinum?
While solving this problem I thought three $\ce{Pt}$ formed a triangle with ethylene above. But I was confused about how the $\ce{C2H4}$ gave electron pairs to $\ce{Pt}$. In addition, the answer gives $16$, but I only see the initial $10$ electrons around a $\ce{Pt}$ and $2$ given by another two $\ce{Pt}$.
What is the exact structure of this coordination compound, and how can it be deduced?
Answer: Looking up your proposed sum formula, I arrived at a quantum mechanical paper that used said complex (triangular $\ce{Pt3}$ moiety, whereonto an ethylene molecule coordinates onto the longer edge of the triangle) as a modelling study for the interaction of ethylene with platinum metal surfaces.[1]
As far as I can tell,[2] the authors in no way account for a surface of platinum; their three platinum atoms in vacuum are supposed to model an entire platinum surface. I cannot say whether that type of approach is typically used or whether it is considered good by quantum chemists.
Searching further, I also found a 1981 paper describing $\ce{[Pt3(C2H4)(cod)2(cot)2]}$, a neutral complex that includes two additional cycloocta-1,5-diene ($\ce{cod}$) and two additional cycloocta-1,3,5,7-tetraene ($\ce{cot}$) ligands.[3] In this complex, the bonding situation and electron count can be well answered, but that wasn’t your question.[4]
Furthermore, I find a complex $\ce{[Pt3(C2H4)]-}$ to be highly questionable: platinum is in the 10th group and ethylene is a singlet molecule, so adding a single additional electron will create a radical anion. This is another reason why your question and its given answer cannot be serious, since in no way can a radical end up at $16$ electrons. I would even go a step further and say that it is likely an unstable complex that would decompose, depending on which species are close to it. Note that the authors of the first paper calculate an uncharged complex $\ce{[Pt3(C2H4)]}$.
Notes and references
A. Cruz, V. Bertin, M. Castro, Int. J. Quantum Chem. 2000, 80, 298. DOI: 10.1002/1097-461X(2000)80:3<298::AID-QUA3>3.0.CO;2-2.
I am not a quantum chemist, and my last calculation efforts date back to a research project during my master’s studies. Those acquainted with methods will have to comment on how good the paper is.
N. M. Boag, J. A. K. Howard, J. L. Spencer, F. G. A. Stone, J. Chem. Soc., Dalton Trans. 1981, 1051. DOI: 10.1039/DT9810001051.
Two platinum atoms with 18 electrons in a square-planar environment, one with 16 electrons in a trigonal environment. The former coordinated by $\ce{cod}$ and $\ce{cot}$ (each contributing two double bonds), the latter coordinated by two $\ce{cot}$ molecules (each contributing one double bond) and an ethylene. The three platinum centres are too far apart to be considered bonded, according to the authors.
The bonding of the two 18-electron platinums to their respective $\ce{cot}$ unit is described as $\ce{cot^2-}$ and $\ce{Pt^2+}$, resulting in oxidation states of $\mathrm{+II}$ and $\mathrm{\pm 0}$ for platinum. | {
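The electron count summarized in note [4] can be written out explicitly. This bookkeeping is my own sketch of that note (not taken from the cited paper), counting neutral platinum as a 10-electron $d^{10}$ centre and each coordinated double bond as a 2-electron donor:

```latex
\begin{align}
\text{square-planar }\ce{Pt}:&\quad
    10\,(\ce{Pt}) + 4\,(\ce{cod},\ \text{two C=C}) + 4\,(\ce{cot},\ \text{two C=C})
    = 18\ \text{electrons}\,,\\
\text{trigonal }\ce{Pt}:&\quad
    10\,(\ce{Pt}) + 2\,(\ce{cot}) + 2\,(\ce{cot}) + 2\,(\ce{C2H4})
    = 16\ \text{electrons}\,.
\end{align}
```

The alternative $\ce{cot^2-}$/$\ce{Pt^2+}$ description in note [4] shifts formal charges but leaves these totals unchanged.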
"domain": "chemistry.stackexchange",
"id": 5523,
"tags": "coordination-compounds, electronic-configuration, molecular-structure"
} |
Why do coherent states behave semi-classically, but harmonic oscillator states do not? | Question: A coherent state of the quantum harmonic oscillator is defined as an eigenvector $|\alpha\rangle$ of the annihilation operator $\hat a$ with eigenvalue $\alpha$ or as spatial translations of the ground state of the QHO:$$T_{x_0}|0\rangle = \exp(-\frac i \hbar\hat px_0)|0\rangle:=|\bar x_0\rangle$$the definitions are equivalent when $\alpha$ is real. Coherent states exhibit certain semi-classical properties, such as the following:$$\langle \bar x_0|\hat x_H(t)|\bar x_0\rangle= x_0\cos\omega t,$$where $\omega$ is the angular frequency of the harmonic oscillator, and the $H$ subscript represents a Heisenberg operator. We have also $$\langle \bar x_0|\hat p_H(t)|\bar x_0\rangle = -m\omega x_0 \sin \omega t.$$ So that both the expectation value of position and momentum oscillate with time, in contrast to the energy eigenstates of the harmonic oscillator, which have vanishing expectation values for these operators. So, my question is: why do the coherent states actually behave like oscillators, when the harmonic oscillator energy eigenstates do not? What is a physical intuition for why the expectation values vanish for the QHO energy eigenstates?
Answer: The discovery and study of coherent states represents one aspect of one
of the biggest problems physicists have faced with the birth and the
subsequent development, supported by excellent experimental results, of
quantum mechanics: the search for a correspondence between the new theory, conceived
for the analysis of microscopic systems, and classical physics, still fully valid
for the description of the macroscopic world.
The history of coherent states begins immediately after the advent of quantum
mechanics: their conceptual introduction dates back to an
article published in 1926, in which Schrödinger reports the existence of a class of
states of the harmonic oscillator that show, in a certain sense, behavior
analogous to that of a classical oscillator: for these states the mean energy
corresponds to the classical value, and the position and momentum averages have
oscillatory forms in a constant phase relation.
Returning to Schrödinger's article, the "almost classical" states he
identified present, in addition to the characteristics already mentioned, an important aspect:
being represented by Gaussian wave packets that do not change shape in
time, they guarantee the minimization of the product of the uncertainties in
position and momentum, which is the condition closest to the possibility, allowed
by classical physics, of measuring both quantities simultaneously with
arbitrary precision.
So, starting from the following relations:
\begin{equation}
a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle \quad a|n\rangle=\sqrt{n}|n-1\rangle
\end{equation}
it is noted that, by virtue of the orthonormality of the stationary
states, the diagonal matrix elements of the position and momentum operators are
null in the energy representation, which means that the expectation values of
position and momentum in any stationary state are zero at every instant.
The stationary states just analyzed are characterized by probability
distributions for the position that are constant over time; the expectation values of
position and momentum are null at all times. This is a fundamental
difference from the states of the classical oscillator, for which, once the energy is fixed
(as long as it is different from zero), the position and momentum observables evolve in time
according to sinusoidal functions and are always in phase quadrature with each other. Moreover, if one
calculates the uncertainties in position and momentum for a stationary state with $n$ photons, one
obtains the uncertainty relation $\Delta x \Delta p=(n+1/2)\hbar$;
only for the ground state is it therefore possible to obtain the minimization of the product of the
uncertainties in momentum and position, which represents the maximum similarity with classical mechanics.
A state that is as similar as possible to the classical case must therefore
have the following characteristics:
1) The time evolution of the position and momentum expectation values must
be of a simple periodic type, with a constant phase relation between position and
momentum.
2) The wave functions must be as narrow as possible around the mean value
of the position, so that the probability distribution for the
position may tend, for appropriate values of the parameters, to a Dirac delta
function.
3) The product of the uncertainties in position and momentum must be minimal.
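The two contrasting behaviors discussed above (vanishing $\langle x\rangle$ for energy eigenstates, oscillating $\langle x(t)\rangle$ for coherent states) can be checked numerically in a truncated Fock basis. This sketch is my own addition, with $\hbar = m = \omega = 1$ and an arbitrarily chosen truncation size:

```python
import numpy as np
from math import factorial

N = 60                                        # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)                    # position operator with hbar = m = omega = 1

# Energy eigenstate |3>: the diagonal matrix element of x vanishes exactly
n3 = np.zeros(N); n3[3] = 1.0
x_eigenstate = n3 @ x @ n3

# Coherent state |alpha> with real alpha: <x(t)> follows sqrt(2)*alpha*cos(t)
alpha, t = 1.5, 0.7
ns = np.arange(N)
c = np.exp(-alpha**2 / 2) * alpha**ns / np.sqrt([float(factorial(n)) for n in ns])
psi_t = c * np.exp(-1j * (ns + 0.5) * t)      # evolve with E_n = n + 1/2
x_coherent = np.real(psi_t.conj() @ x @ psi_t)
```

The eigenstate expectation value comes out exactly zero, while the coherent-state value tracks the classical trajectory $\sqrt{2}\,\alpha\cos t$ to truncation accuracy.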
So you can see this "classical" behavior, exhibited by these particular states, as an intrinsic property of the QHO.
Furthermore, I should say that the harmonic oscillator energy eigenstates also actually behave like oscillators.
Maybe this answer is a bit too long, but I hope it can help you. | {
"domain": "physics.stackexchange",
"id": 85340,
"tags": "quantum-mechanics, harmonic-oscillator, coherent-states"
} |
Is this a relativistic paradox? | Question: In general relativity is postulated that $m_\mathrm{inertial} = m_\mathrm{gravitational}$. Suppose one has a very very intense uniform gravitational field parallel with the $z$-axis of a 3-dimensional Cartesian system. Suppose we also have 2 very light particles, of equal mass, initially at rest with respect to the Cartesian system, and initially with the same $z$ coordinate, but separated by a very large distance D (in the $xy$-plane), such that the gravitational interaction between them is very weak. Then the 2 particles are released simultaneously in the gravitational field. According to the general relativity, the reference frame associated with the freely falling particles is an inertial one, and in this frame, the particles barely move towards each other due to the weak gravitational attraction between them. However, for an observer associated with the initial Cartesian system which is at rest, these particles appear to be very massive in a very short time (since they accelerate very fast due to the very strong gravitational field they are freely falling in).
So, for a high intensity field, after a very short time (as measured in the Cartesian reference frame), the particles would collide due to the very intense gravitational attraction between them, generated by their huge dynamical masses. The higher the initial uniform gravitational field, the faster the collision. But a collision is a spacetime point and it also has to occur simultaneously in the frame of reference that is freely falling where it seems to occur only after a very very long time interval. The collision is due to the motion in the $xy$-plane in both frames of reference and it should not be affected by the motion along the $z$ axis. Is this a paradox? What goes wrong?
Answer: If "the initial Cartesian system which is at rest" is not free-falling along with the particles, then it must be accelerating upward very quickly to resist the very strong gravitational field, so it is a highly noninertial frame. An observer in this frame would feel strong acceleration, which would be equivalent to being deep in a strong gravity well resisting the force of gravity. They would therefore experience very strong time dilation, and the observer's clock would run much slower than the free-falling particles' clocks. The observer would therefore see the free-falling particle's time as running very quickly, and the very slow attraction that the free-falling particles observe would get "sped up" into very fast attraction from the noninertial observer's perspective. | {
"domain": "physics.stackexchange",
"id": 36517,
"tags": "general-relativity, special-relativity, gravity"
} |
Finding solution of given recursive equation? | Question: $T(n) = 1+ \sum_{j=0}^{n-1} T(j)$
I've proved $2^n$ to be the solution of this equation using induction. But is there any other way to find the solution? I only proved it, I didn't derive it.
Answer: The trick in this case is to consider differences:
$$
T(n)-T(n-1) = \left(1 + \sum_{j=0}^{n-1} T(j)\right) - \left(1 + \sum_{j=0}^{n-2} T(j)\right) = T(n-1).
$$
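As a quick numerical check (added for illustration; note the empty sum gives the base case $T(0)=1$), evaluating the definition directly shows the doubling:

```python
def T(n, _memo={0: 1}):
    # T(n) = 1 + sum_{j=0}^{n-1} T(j); the empty sum gives T(0) = 1
    if n not in _memo:
        _memo[n] = 1 + sum(T(j) for j in range(n))
    return _memo[n]

results = [T(n) for n in range(10)]
# results[n] equals 2**n, consistent with T(n) = 2^n * T(0) and T(0) = 1
```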
Hence $T(n) = 2T(n-1)$, and so $T(n) = 2^n T(0)$. | {
"domain": "cs.stackexchange",
"id": 9036,
"tags": "recurrence-relation"
} |
Why is rolling up our sleeves, more stable than tucking them in? | Question: When sleeves are longer than your arms, two possible non-destructive ways of making them shorter are to roll them up or to tuck them in.
Rolled up sleeves [source: wikihow]
Tucked in sleeves [source: MS paint+ wikihow]
An experiment was performed to determine which of the two was more stable. Sleeves were rolled up identically in both fashions, and arms were jerked with roughly equal force until the sleeves had unrolled completely. The number of jerks needed to unfold each configuration was measured, and the experiment was repeated until our arms were tired.
No. of jerks needed to unfold:
+-----------+----+----+----+----+----+
| Attempt   | 1  | 2  | 3  | 4  | 5  |
+-----------+----+----+----+----+----+
| Tucked in | 4  | 5  | 4  | 3  | 3  |
+-----------+----+----+----+----+----+
| Rolled up | 11 | 8  | 9  | 7  | 10 |
+-----------+----+----+----+----+----+
Result: Tucked-in sleeves only took 3.8 jerks on average. Rolled-up sleeves took 9 jerks on average, more than double that of tucked-in.
Conclusion: Rolled-up sleeves are energetically more stable than tucked-in ones.
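The averages quoted in the result can be reproduced directly from the table:

```python
tucked = [4, 5, 4, 3, 3]
rolled = [11, 8, 9, 7, 10]

avg_tucked = sum(tucked) / len(tucked)   # 3.8 jerks on average
avg_rolled = sum(rolled) / len(rolled)   # 9.0, more than double
```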
To demonstrate that this result is independent of the material, replicas of the sleeves were constructed using paper.
A. Folded outward
B. Folded inward
The effect became even more apparent with paper. B's fold was easy to undo, but A's was almost impossible to undo without tearing.
What makes the fold in A much stronger than that of B?
Answer: It is caused by what the material needs to do to become unfolded. When rolling up, the material needs to be stretched in order to roll it down. When tucking in, the material can crumple a little to become untucked. The same rationale holds for the paper. Crumpling the edge is easier than stretching out the edge or crumpling in the bulk, therefore the rolled up sleeves are more stable. | {
"domain": "physics.stackexchange",
"id": 92929,
"tags": "newtonian-mechanics, classical-mechanics, friction, everyday-life"
} |
How is downsampling and low-pass filtering done in MS-SSIM? | Question: I am just studying the Multiscale Strutural Similarity measure for image quality and going through the original paper [1].
I think I understood the basic ideas and "regular" structural similarity, but could someone clarify how the downsampling is done and which low-pass filter is applied when computing the MS-SSIM?
[1] Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik. "Multiscale structural similarity for image quality assessment." The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. Vol. 2. IEEE, 2003. Available online.
Answer: I assume the question is specifically the decimate by 2 as in this figure:
Decimation is the combination of a low-pass filter with a downsampler. The downsampling by 2, designated by the block that has a 2 with a down-arrow, is simply done by removing every other sample. This would fold in frequency everything in the upper part of the spectrum, so an anti-alias filter is required prior to the downsampling. This is no different from the requirement for an anti-alias filter prior to an A/D converter. So the ideal filter L would be a low-pass filter that passes everything in the first Nyquist zone defined by the new sampling rate after the downsampling, while completely rejecting all the high-frequency components that would otherwise fold in. (Such an ideal filter cannot be realized, but we approach it by trading complexity against allowable distortion.)
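As a concrete sketch of the decimate-by-2 block (my addition; the 2×2 box average used here as the low-pass filter L is a common implementation choice, not something the paper mandates):

```python
import numpy as np

def downsample2(img):
    # Crop to even dimensions, then take 2x2 block averages: this is a
    # box low-pass filter followed by keeping every other sample in each
    # direction (the down-arrow-2 block). The 2x2 average as filter L is
    # an assumption here; any adequate anti-alias low-pass would do.
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

img = np.arange(16, dtype=float).reshape(4, 4)
small = downsample2(img)   # each output pixel is one 2x2 block mean
```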
This graphic I have below demonstrates this with a decimate by 4 for a real signal with the first Nyquist zone extending from $0$ to $F_s/2$, where $F_s$ is the sampling rate (and $-F_s/2$ to $+F_s/2$ for complex signals).
With digital sampling (or resampling, as in this case), all the spectrum centered on $mF_s$ is aliased to be centered on $0$ in the first Nyquist zone, where $m$ is any integer. Here the new sampling rate is $1/4$ of the original sampling rate, so after the downsampling we see all the images at multiples of the new rate that were part of the original digital spectrum at the higher rate.
In this case a multi-band rejection filter is a good solution to minimize resources. Alternate approaches that are also quite efficient are Cascade-Integrator-Comb structures (CIC) and polyphase filters which are detailed in other posts on this site. | {
"domain": "dsp.stackexchange",
"id": 8633,
"tags": "lowpass-filter, image-processing, downsampling, image-compression"
} |
How does TF-IDF classify a document based on "Score" alloted to each word | Question: I understand how TF-IDF "score" is calculated for each word in a document, but I do not get how can it be used to classify a test document. For example, if the word "Mobile" occurs in two texts, in the training data, one about Business (like the selling of Mobiles) and the other about Tech, then how does the "score" for word "Mobile", in both training and test document over the given dataset, help the algorithm to classify whether the text (a new test document) belongs to "Business" category or "Tech" category? I'm new to NLP, thanks in advance!
Answer: It's not a single TFIDF score on its own which makes classification possible, the TFIDF scores are used inside a vector to represent a full document: for every single word $w_i$ in the vocabulary, the $i$th value in the vector contains the corresponding TFIDF score. By using this representation for every document in a collection (the same index always corresponds to the same word), one obtains a big set of vectors (instances), each containing $N$ TFIDF scores (features).
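A toy sketch of that document-vector construction (the two-document corpus and its words are hypothetical, chosen to echo the "mobile" example; a real pipeline would use a library vectorizer):

```python
import math

# hypothetical two-document training corpus (toy labels: Business, Tech)
docs = [
    "mobile sales revenue market",
    "mobile chip android software",
]
vocab = sorted({w for d in docs for w in d.split()})

def tfidf_vector(doc, corpus, vocab):
    words = doc.split()
    n = len(corpus)
    vec = []
    for w in vocab:
        tf = words.count(w) / len(words)             # term frequency
        df = sum(w in d.split() for d in corpus)     # document frequency
        idf = math.log(n / df) if df else 0.0
        vec.append(tf * idf)
    return vec

vectors = [tfidf_vector(d, docs, vocab) for d in docs]
# "mobile" occurs in every document, so its idf is log(2/2) = 0: that one
# feature cannot separate the classes; the classifier relies on the rest.
```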
Assuming we have some training data (labelled documents), we can use any supervised method to learn a model, for instance Naive Bayes, Decision Trees, SVM, etc. These algorithms differ from each other but they are all able to take into account all the features for a document in order to predict a label. So in the example you give maybe the word "mobile" only helps the algorithm eliminate the categories "sports" and "literature", but maybe some other words (or absence of other words) is going to help the algorithm decide between categories "Business" and "Tech". | {
"domain": "datascience.stackexchange",
"id": 8079,
"tags": "classification, nlp, tfidf"
} |
Unmet dependencies in ros-electric-desktop-full installation | Question:
I come up with the following error while installing ros-electric-desktop-full:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
ros-electric-desktop-full: Depends: ros-electric-vision-opencv (= 1.6.8-s1321009367~lucid) but it is not going to be installed
Depends: ros-electric-image-pipeline (= 1.6.3-s1321009993~lucid) but it is not going to be installed
Depends: ros-electric-image-transport-plugins (= 1.4.2-s1321009887~lucid) but it is not going to be installed
E: Broken packages
So I went ahead and tried to install ros-electric-vision-opencv on its own using "sudo apt-get install ros-electric-vision-opencv", but had the following error message:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
ros-electric-vision-opencv: Depends: libopencv2.3-dev (= 2.3.1+svn6514+branch23-8~lucid) but 2.3.1+svn6514+branch23-6~lucid is to be installed
E: Broken packages
Is there a problem with the dependencies for libopencv2.3-dev?
Thank you!
Edit1: Looking into the problem further, I noticed that http://packages.ros.org/ros/ubuntu/lists/ros-lucid-electric_lucid_main_i386_Packages (updated today) uses 2.3.1+svn6514+branch23-8~lucid for libopencv2.3-dev. However, http://packages.ros.org/ros/ubuntu/dists/lucid/main/binary-i386/Packages uses 2.3.1+svn6514+branch23-6~lucid. I think this is causing the problem. Should this be reported as a bug?
Originally posted by esha_umn on ROS Answers with karma: 81 on 2011-11-11
Post score: 8
Original comments
Comment by esha_umn on 2011-11-14:
The problem with dependencies has been sorted out. I managed to install electric-desktop-full successfully today.
Comment by Daniel Canelhas on 2011-11-11:
I found this to be the case, as well: The following packages have unmet dependencies:ros-electric-eigen : Depends: libeigen3-dev (= 3.0.1-1+ros4~natty) but 3.0.2-3+natty1 is to be installed
Answer:
I had the same problem as the OP (lucid and oneiric). The problem seems to (now) be fixed for me by switching from Freiburg mirror to the main mirror. So it might be fixed upstream but changes need a while to settle down.
Originally posted by demmeln with karma: 4306 on 2011-11-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 7277,
"tags": "ros, opencv, vision-opencv, ros-electric"
} |
Expenses Calculation Using OOP in PHP | Question: Raw Data & Analysis Objective
There is a company called Nerdina Entertainment (Nerdina for short). It's been decided to optimize the operation costs of 4 departments (see below). These have 4 types of specialists on staff. Each of the 4 types is characterized by three base properties: pay rate per month, gallons of coffee consumed per month and (just for the fun of it) the amount of code units (whatever this means) produced per month. Additionally, Nerdina employs a system of grades: each employee is assigned a grade that affects their monthly pay rate. Head of a department is a special status that alters all of the base stats. The summary of the available data is in the next sections.
The preliminary goal is to produce a report like this:
+--------------+-------+------------+--------------+------------+---------------+
| DEPARTMENT | STAFF | LABOR COST | COFFEE DRUNK | CODE UNITS | COST PER UNIT |
+--------------+-------+------------+--------------+------------+---------------+
| Analytics | 17 | 142,450 | 102 | 1,037 | 137.4 |
+--------------+-------+------------+--------------+------------+---------------+
| Training | 16 | 129,050 | 102 | 1,265 | 102 |
+--------------+-------+------------+--------------+------------+---------------+
| Development | 36 | 335,150 | 224 | 3,175 | 105.6 |
+--------------+-------+------------+--------------+------------+---------------+
| Sales | 28 | 218,450 | 131 | 1,045 | 209 |
+--------------+-------+------------+--------------+------------+---------------+
| TOTAL | 97 | 825,100 | 559 | 6,522 | 554 |
+--------------+-------+------------+--------------+------------+---------------+
| AVERAGE | 24.25 | 206,275 | 139.75 | 1,630.5 | 138.5 |
+--------------+-------+------------+--------------+------------+---------------+
Employee Types
Base stats. The figures for pay rate, coffee consumption and code units produced are per month.
+----------+---------+--------+------+
| TYPE | PAYRATE | COFFEE | CODE |
+----------+---------+--------+------+
| Manager | 7,000 | 5 | 75 |
+----------+---------+--------+------+
| Marketer | 6,600 | 4 | 5 |
+----------+---------+--------+------+
| Engineer | 8,300 | 8 | 200 |
+----------+---------+--------+------+
| Analyst | 7,500 | 12 | 125 |
+----------+---------+--------+------+
NOTE: Heads of the departments earn and drink two times the base figure and don't produce any code.
Grades
+-------+-----------+
| GRADE | PAYRATE |
+-------+-----------+
| 1 | base |
+-------+-----------+
| 2 | base×1.25 |
+-------+-----------+
| 3 | base×1.5 |
+-------+-----------+
Staff
E.g. 6×man3 translates to 6 Managers of Grade 3
+--------------+-------------------------------------------------+
| DEPARTMENT | STAFF |
+--------------+-------------------------------------------------+
| Analytics | 9×man1, 3×man2, 2×ana3, 2×mar1 + chief 1×man2 |
+--------------+-------------------------------------------------+
| Training | 8×man1, 3×mar1, 2×ana1, 2×eng2 + chief 1×man2 |
+--------------+-------------------------------------------------+
| Development | 12×man2, 10×mar1, 8×eng2, 5×ana3 + chief 1×eng3 |
+--------------+-------------------------------------------------+
| Sales | 13×man1, 11×mar2, 3×mar3 + chief 1×man1 |
+--------------+-------------------------------------------------+
My Solution
This is a long one.
<?php
/**
* input.php
* the input format is determined by me based on the given data.
* I've been told by the peers that the current one is way too complicated and
* the array should look like this:
* $input = [
* ['Analytics', 9, Employee::MANAGER, 1],
* ['Training', 8, Employee::MANAGER, 1],
* ...
* ];
* Please advise on this point
*/
$input = [
'Analytics' => [
[9, Employee::MANAGER, 1],
[3, Employee::MANAGER, 2],
[2, Employee::ANALYST, 3],
[2, Employee::MARKETER, 1],
[1, Employee::MANAGER, 2, true]
],
'Training' => [
[8, Employee::MANAGER, 1],
[3, Employee::MARKETER, 1],
[2, Employee::ANALYST, 1],
[2, Employee::ENGINEER, 2],
[1, Employee::MANAGER, 2, true]
],
'Development' => [
[12, Employee::MANAGER, 2],
[10, Employee::MARKETER, 1],
[8, Employee::ENGINEER, 2],
[5, Employee::ANALYST, 3],
[1, Employee::ENGINEER, 3, true]
],
'Sales' => [
[13, Employee::MANAGER, 1],
[11, Employee::MARKETER, 2],
[3, Employee::MARKETER, 3],
[1, Employee::MANAGER, 1, true]
]
];
/**
* padstring.php
* a function facilitating the report output later on
*/
function padString($string, $length, $side = "right", $pad = " ") {
if (strlen($string) == $length) {
return $string;
} else {
$charsNeeded = $length - strlen($string); // 5
$padding = str_repeat($pad, $charsNeeded);
($side == "right") ? ($string = $string . $padding) : ($string = $padding . $string);
return $string;
}
}
/**
* classes.php
*/
abstract class Employee {
const MANAGER = "Manager";
const MARKETER = "Marketer";
const ENGINEER = "Engineer";
const ANALYST = "Analyst";
protected int $grade;
protected bool $chief;
public function __construct(int $grade, bool $chief = false) {
$this->grade = $grade;
$this->chief = $chief;
}
/**
* the following methods are in place to make sure all subclasses
* include the base properties returned by these methods
*/
abstract public function getBaseRate();
abstract public function getBaseCoffeeConsumption();
abstract public function getBaseCodeProduced();
public function getActualPay(): float {
$rate = $this->getBaseRate();
if ($this->grade == 2) {
$rate *= 1.25;
} elseif ($this->grade == 3) {
$rate = $rate * 1.5;
}
return $this->chief ? $rate * 2 : $rate;
}
public function getActualCoffeeConsumption(): float {
return $this->chief ? $this->getBaseCoffeeConsumption() * 2 : $this->getBaseCoffeeConsumption();
}
public function getActualCodeProduced(): int {
return $this->chief ? 0 : $this->getBaseCodeProduced();
}
}
class Manager extends Employee {
protected $baseRate = 7000;
protected $baseCoffeeConsumption = 5;
protected int $baseCodeProduced = 75;
public function getBaseRate(): float {
return $this->baseRate;
}
public function getBaseCoffeeConsumption(): float {
return $this->baseCoffeeConsumption;
}
public function getBaseCodeProduced(): int {
return $this->baseCodeProduced;
}
}
class Marketer extends Employee {
protected $baseRate = 6600;
protected $baseCoffeeConsumption = 4;
protected int $baseCodeProduced = 5;
public function getBaseRate(): float {
return $this->baseRate;
}
public function getBaseCoffeeConsumption(): float {
return $this->baseCoffeeConsumption;
}
public function getBaseCodeProduced(): int {
return $this->baseCodeProduced;
}
}
class Engineer extends Employee {
protected $baseRate = 8300;
protected $baseCoffeeConsumption = 8;
protected int $baseCodeProduced = 200;
public function getBaseRate(): float {
return $this->baseRate;
}
public function getBaseCoffeeConsumption(): float {
return $this->baseCoffeeConsumption;
}
public function getBaseCodeProduced(): int {
return $this->baseCodeProduced;
}
}
class Analyst extends Employee {
protected $baseRate = 7500;
protected $baseCoffeeConsumption = 12;
protected int $baseCodeProduced = 125;
public function getBaseRate(): float {
return $this->baseRate;
}
public function getBaseCoffeeConsumption(): float {
return $this->baseCoffeeConsumption;
}
public function getBaseCodeProduced(): int {
return $this->baseCodeProduced;
}
}
class Department {
protected string $name;
protected array $staff;
public function __construct($name) {
$this->name = $name;
}
public function getName() {
return $this->name;
}
public function addToStaff(Employee $employee) {
$this->staff[] = $employee;
}
public function getStaffNumber() {
return count($this->staff);
}
public function getLaborCost() {
$laborCost = 0;
foreach ($this->staff as $employee) {
$laborCost += $employee->getActualPay();
}
return $laborCost;
}
public function getCoffeeConsumption() {
$coffee = 0;
foreach ($this->staff as $employee) {
$coffee += $employee->getActualCoffeeConsumption();
}
return $coffee;
}
public function getCodeProduced() {
$code = 0;
foreach ($this->staff as $employee) {
$code += $employee->getActualCodeProduced();
}
return $code;
}
public function getCostPerUnit() {
return round($this->getLaborCost() / $this->getCodeProduced(), 2);
}
}
class Company {
protected array $depts;
public function __construct(array $depts) {
$this->depts = $depts;
}
public function getDepts() {
return $this->depts;
}
public function getTotalStaffNumber() {
$staffNumber = 0;
foreach ($this->depts as $dept) {
$staffNumber += $dept->getStaffNumber();
}
return $staffNumber;
}
public function getTotalLaborCost() {
$laborCost = 0;
foreach ($this->depts as $dept) {
$laborCost += $dept->getLaborCost();
}
return $laborCost;
}
public function getTotalCoffeeConsumption() {
$coffee = 0;
foreach ($this->depts as $dept) {
$coffee += $dept->getCoffeeConsumption();
}
return $coffee;
}
public function getTotalCodeProduced() {
$code = 0;
foreach ($this->depts as $dept) {
$code += $dept->getCodeProduced();
}
return $code;
}
public function getTotalCostPerUnit() {
$cost = 0;
foreach ($this->depts as $dept) {
$cost += $dept->getCostPerUnit();
}
return $cost;
}
public function getAverageStaffNumber() {
return round($this->getTotalStaffNumber() / count($this->depts), 2);
}
public function getAverageLaborCost() {
return round($this->getTotalLaborCost() / count($this->depts), 2);
}
public function getAverageCoffeeConsumption() {
return round($this->getTotalCoffeeConsumption() / count($this->depts), 2);
}
public function getAverageCodeProduced() {
return round($this->getTotalCodeProduced() / count($this->depts), 2);
}
public function getAverageCostPerUnit() {
return round($this->getTotalCostPerUnit() / count($this->depts), 2);
}
/**
* should I use echo or is it better to put the entire report string in a variable
* and return it?
*/
public function printReport() {
$regcol = 15;
$widecol = 20;
echo padString('DEPARTMENT', $widecol) . padString('STAFF', $regcol, 'left') . padString('LABOR COST', $regcol, 'left') . padString('COFFEE DRUNK', $regcol, 'left') . padString('CODE UNITS', $regcol, 'left') . padString('COST PER UNIT', $regcol, 'left') . "\n";
echo padString('=', $widecol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . "\n";
foreach ($this->depts as $dept) {
echo padString($dept->getName(), $widecol) . padString($dept->getStaffNumber(), $regcol, 'left') . padString($dept->getLaborCost(), $regcol, 'left') . padString($dept->getCoffeeConsumption(), $regcol, 'left') . padString($dept->getCodeProduced(), $regcol, 'left') . padString($dept->getCostPerUnit(), $regcol, 'left') . "\n";
}
echo padString('=', $widecol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . padString('=', $regcol, 'right', '=') . "\n";
echo padString('TOTAL', $widecol) . padString($this->getTotalStaffNumber(), $regcol, 'left') . padString($this->getTotalLaborCost(), $regcol, 'left') . padString($this->getTotalCoffeeConsumption(), $regcol, 'left') . padString($this->getTotalCodeProduced(), $regcol, 'left') . padString($this->getTotalCostPerUnit(), $regcol, 'left') . "\n";
echo padString('AVERAGE', $widecol) . padString($this->getAverageStaffNumber(), $regcol, 'left') . padString($this->getAverageLaborCost(), $regcol, 'left') . padString($this->getAverageCoffeeConsumption(), $regcol, 'left') . padString($this->getAverageCodeProduced(), $regcol, 'left') . padString($this->getAverageCostPerUnit(), $regcol, 'left') . "\n";
}
}
/**
* main.php
*/
function makeDepts(array $input): array {
$depts = [];
foreach ($input as $dept => $staff) {
$currentDept = new Department($dept);
foreach ($staff as $employeeGroup) {
$quantity = $employeeGroup[0];
$type = $employeeGroup[1];
$grade = $employeeGroup[2];
$chief = isset($employeeGroup[3]) ? true : false;
for ($c = 0; $c < $quantity; $c++) {
$employeeObject = new $type($grade, $chief);
$currentDept->addToStaff($employeeObject);
}
}
$depts[] = $currentDept;
}
return $depts;
}
$depts = makeDepts($input);
$company = new Company($depts);
$company->printReport();
I'd appreciate any comments or suggestions!
Answer:
I find it odd that you have declared padString() despite php already offering str_pad().
I think I'd favor declaring grade-based rate multipliers as a configurable lookup array rather than a hardcoded condition block. This way you can maintain that logic without touching the method.
There is a lot of duplicated method logic in the Department class, this could be D.R.Y.ed out with a single summing method that is fed the correct method name by which to fetch the correct data.
The same advice applies to your Company class regarding the repeated summing and averaging methods.
I recommend that all rows in the $input array be declared with a consistent number of elements. In other words, you shouldn't need to check if $employeeGroup[3] is set. This way you can unpack the row values into readable individual variables from within the nested foreach() declaration. Demo
foreach ($input as $dept => $staff) {
foreach ($staff as [$quantity, $type, $grade, $chief]) {
...
}
} | {
"domain": "codereview.stackexchange",
"id": 37935,
"tags": "php, object-oriented"
} |
Getting error in fit | Question: I was wondering if anyone had run into the problem of trying to estimate errors in their signal processing on spectroscopic data. I know that many people use spectroscopic techniques to estimate concentrations of different materials, but I would like to know how accurate these measurements can be (i.e. I would like to get an output something like "species A is 20% +-1% of the sample given"). Further, I would like to know how to deal with very extreme cases, where several different types of materials are present, and may have observables which fall directly on top of each other.
A simple example may be the following:
You can see there are two species being fit to the data. If the areas are then transformed by calculation into percentages, (i.e. the sample taken is 48% B, 52% A) how can we be sure of this and how accurate is the fit? I know this will be dependent on the accuracy of the estimate of the positions of these peaks that are (perhaps) given by the user, so I am interested in a method that takes a known error in the parameters (say +-15 on the x axis for error in the peak center position, +-10 error in the width of the peak, etc.).
I suspect that the errors will become large when the observables overlap (i.e. the peak centers for two fitting functions are the same).
In addition, it is possible that these spectra have a large background, which may also have error, affecting all of the other species and their errors. I am not certain if this background would be treated differently than all of the other species, or if it could be treated within the same algorithm as all of the other species.
To illustrate my point further, here is an image of a spectroscopic measurement of several different materials:
On the top in red is the raw data given by optical absorption (the measured data), while the black is a calculated background, and the blue is a calculated sum of all of the species including the background.
On the bottom, the calculated background is subtracted (blue and red lines now DO NOT include the calculated background), while the several different colored lines below are each individual species being summed up to create the blue line.
These are the calculated measurements I am interested in estimating the error in.
As you can see, the error is enormously large in this example for most of the calculated measurements. Each species may or may not have several 'peaks' associated with it, which can be illustrated by the bolded yellow calculated line. In addition, you can see that several of the calculated peak centers fall around the same place, so this will likely reduce the certainty that the measurements are correct even if the calculated line falls directly upon the raw data.
I have calculated the mean squared displacement as a quick estimate of how good the fit is, but I know this doesn't address the larger question of the uncertainty in the calculated measurements themselves. The most I have done in statistics is standard deviations and calculations over several repeated measurements, but this is quite different, since it is about how confident you can be in a single measurement rather than in differences between multiple measurements. Is there a way to use confidence intervals and confidence levels on random variables that are not independent and identically distributed? (Again, I am very new to statistics and have never taken a course on it, so I apologize if this is elementary or trivial.)
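For concreteness, the quick goodness-of-fit estimate I mentioned looks roughly like this (`y_data`, `y_fit`, and `n_params` are placeholder values, not my actual data); the reduced chi-square next to it at least accounts for the number of fitted parameters:

```python
import numpy as np

# Placeholder arrays standing in for the measured spectrum and the fitted curve
y_data = np.array([1.0, 2.1, 2.9, 4.2, 5.0])
y_fit = np.array([1.1, 2.0, 3.0, 4.0, 5.1])
n_params = 2  # number of fitted parameters (hypothetical)

residuals = y_data - y_fit
mse = np.mean(residuals ** 2)  # quick mean-squared estimate of fit quality
# Reduced chi-square: residual sum of squares per degree of freedom
red_chi2 = np.sum(residuals ** 2) / (len(y_data) - n_params)
print(mse, red_chi2)
```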
Answer: Well, without knowing the specifics of your algorithm, I assume that you basically have multiple functions (e.g. Gaussian or Lorentzian peaks, each with position and FWHM parameters, plus perhaps some polynomials for the background) and you sum them all into one big "fit function" that you hand to an optimization algorithm, which jiggles the parameters around until it finds a good fit in the non-linear least-squares sense.
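As a minimal sketch of what I mean (synthetic data, two Gaussian peaks plus a constant background, all numbers made up for illustration), using `scipy.optimize.curve_fit` as the optimizer:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def two_peaks(x, a1, c1, w1, a2, c2, w2, bg):
    # Sum of two Gaussian peaks plus a constant background
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2) + bg

# Synthetic "measured" spectrum with a little noise
rng = np.random.default_rng(1)
x = np.linspace(0, 100, 500)
y = two_peaks(x, 4.0, 30.0, 5.0, 2.5, 60.0, 8.0, 0.3)
y = y + rng.normal(0, 0.05, x.size)

p0 = [3, 28, 4, 2, 62, 7, 0]   # user-supplied initial guesses
popt, pcov = curve_fit(two_peaks, x, y, p0=p0)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter errors from the covariance
print(popt)
print(perr)
```

The `pcov` returned here is already the variance-covariance matrix discussed below, so `perr` gives you exactly the asymptotic standard errors.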
What you can do then is compute the asymptotic standard errors. For this, you have to (most likely numerically) compute the Jacobian matrix of your fit function (the matrix of partial derivatives with respect to each parameter (columns) at each measurement point (rows)). From this you can compute the variance-covariance matrix and then the errors for each parameter. This approach is found very often in curve-fitting software. In fact, there is a quite comprehensive explanation at OriginLab (see the section on Parameter Standard Errors), and a more detailed overview can be found on arXiv.
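The recipe can be done by hand in a few lines; here is a sketch for a single Gaussian plus offset, with a finite-difference Jacobian (the best-fit parameters are assumed known here just to keep the example short):

```python
import numpy as np

def model(x, p):
    # Single Gaussian peak plus constant background:
    # p = [amplitude, center, width, offset]
    amp, cen, wid, off = p
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2) + off

def asymptotic_errors(x, y, p_best, eps=1e-6):
    """Asymptotic standard errors from a numerical Jacobian.

    J has one column per parameter and one row per measurement point;
    cov = s^2 * (J^T J)^-1, with s^2 the residual variance.
    """
    p_best = np.asarray(p_best, dtype=float)
    n, m = len(x), len(p_best)
    J = np.empty((n, m))
    for j in range(m):
        dp = np.zeros(m)
        dp[j] = eps * max(1.0, abs(p_best[j]))
        # Central finite difference for the j-th partial derivative
        J[:, j] = (model(x, p_best + dp) - model(x, p_best - dp)) / (2 * dp[j])
    resid = y - model(x, p_best)
    s2 = resid @ resid / (n - m)       # residual variance
    cov = s2 * np.linalg.inv(J.T @ J)  # variance-covariance matrix
    return np.sqrt(np.diag(cov))       # 1-sigma errors per parameter

# Synthetic example: one noisy Gaussian, best-fit parameters taken as known
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 200)
p = [5.0, 1.0, 2.0, 0.5]
y = model(x, p) + rng.normal(0, 0.1, x.size)
errs = asymptotic_errors(x, y, p)
print(errs)
```

Note that `(J^T J)^-1` blows up when columns of `J` become nearly linearly dependent, which is exactly the overlapping-peaks situation you suspected: the errors of the overlapping parameters grow accordingly.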
Other methods are discussed in this paper. They are certainly appropriate, and maybe even mandatory in some cases, but I find them a bit too complicated for most "real-world" applications.
If I were you, I would start with the asymptotic standard errors and see if they fit your needs. They are an easy-to-implement strategy that is, as already said, also widely found in software packages.