Calculating Poisson brackets in classical non-relativistic Hamiltonian field theory
Question: Summary of the question: How can I prove the equal-time Poisson bracket relations for classical Hamiltonian field theory? I.e. $$[q(x,t),H(t)]_\mathrm{PB}=\dot{q}(x,t)\tag{1}$$ for a field $q$ and $$[\Pi(x,t),H(t)]_\mathrm{PB}=\dot{\Pi}(x,t)\tag{2}$$ for the conjugate momentum field $\Pi$. Since we should end up with the right physics, I think these 'abstract' relations themselves are not enough; their specific compatibility with known physical laws (e.g. the Euler-Lagrange equation for fields) should also be addressed. Does that compatibility appear while addressing the previous question? Or are those 'abstract' relations alone enough to ensure that we are doing the right physics? Hello. As a preliminary step toward studying quantum field theory, I am looking at some classical field theories, focusing especially on their formulations. In the process, I am struggling with a problem regarding the Poisson bracket relations in Hamiltonian field theory. First, let's start from the Euler-Lagrange equation $$\frac{\partial}{\partial t}\left(\frac{\partial \mathcal{L}}{\partial(\partial_t q)}\right)+\frac{\partial}{\partial x}\left(\frac{\partial \mathcal{L}}{\partial(\partial_x q)}\right)-\frac{\partial \mathcal{L}}{\partial q}=0\tag{3}$$ for a Lagrangian density $\mathcal{L}$ involving a field variable $q(x,t)$. 
For a canonical momentum density $$\Pi(x,t):=\partial\mathcal{L}/\partial(\partial_t q)\tag{4}$$ conjugate to $q$, we define the Hamiltonian density $$\mathcal{H}(x,t):=\Pi(x,t)\frac{\partial q}{\partial t} - \mathcal{L}.\tag{5}$$ Since we are formulating Hamiltonian mechanics, we want the following equal-time Poisson bracket relations to hold: (A) $\quad[q(x,t),\,\Pi(x',t)]_\mathrm{PB}=\delta(x-x')$ (B) $\quad[q(x,t),\,q(x',t)]_\mathrm{PB}=[\Pi(x,t),\,\Pi(x',t)]_\mathrm{PB}=0.$ In some lecture notes and web pages, I found a definition of the Poisson bracket, $$[A(q,\Pi),\,B(q,\Pi)]_{\mathrm{PB}}:=\int dx \left( \frac{\delta A}{\delta q(x)}\frac{\delta B}{\delta \Pi(x)}-\frac{\delta A}{\delta \Pi(x)}\frac{\delta B}{\delta q(x)} \right)\tag{6}$$ where $\delta$ denotes the functional derivative. Meanwhile, based on the definition of the functional derivative, I derived some formulae like $$\frac{\delta q (x)}{\delta q(y)} = \frac{\delta \Pi (x)}{\delta \Pi(y)}=\delta(x-y),\tag{7}$$ $$\frac{\delta A(x)}{\delta q(y)}=\frac{\partial A(x)}{\partial q(x)} \delta(x-y),\quad \frac{\delta A(x)}{\delta \Pi(y)}=\frac{\partial A(x)}{\partial \Pi(x)} \delta(x-y)\tag{8}$$ (I'm not sure about the accuracy of these formulae, and I don't have a clear intuitive picture of them; to me, they are just a bunch of symbols providing some rules). Starting from the suggested definition of the Poisson bracket and using the formulae above, I think I somehow successfully checked that the basic Poisson bracket relations (A), (B) hold, given the definition of the PB. But here's my first problem: the definition of the Poisson bracket involves only one argument $x$, while we have to deal with two arguments $x$ and $x'$ when calculating the PB. This is very confusing to me, and I cannot be sure that my derivation below is rigorous enough. 
$$ [q(x,t),\Pi(x',t)]_\mathrm{PB}=\left[\int dx' q(x',t)\delta(x-x'),\,\,\Pi(x',t)\right]_\mathrm{PB}\\ =\int dy \left(\int dx' \frac{\delta q(x')}{\delta q(y)}\delta(x-x')\frac{\delta \Pi(x')}{\delta \Pi(y)} - \int dx' {\frac{\delta q(x')}{\delta \Pi(y)}}\delta(x-x'){\frac{\delta \Pi(x')}{\delta q(y)}}\right) \\ = \int dy \int dx' \delta(x'-y) \delta(x-x') \delta(x'-y) \\ =\delta(x-x').\tag{9} $$ To avoid the confusion above, in the second line, I introduced a new variable $y$ for the dummy index defining the Poisson bracket. Also, in the final step, dealing with the delta functions seems unsatisfactory. Is every step okay here? Next, I want to look at the PB relations involving the Hamiltonian $H=\int dx \mathcal{H}$ as an argument. I started from here: $$ [q(x,t),H(t)]_\mathrm{PB}=\int dx'\,\left[ q(x,t),\,\Pi(x',t)\frac{\partial q}{\partial t}(x',t)-\mathcal{L}\left( \partial_t q,\,\partial_x q,\,q \right) \right]_\mathrm{PB},\tag{10} $$ written from the definition of the Hamiltonian density. Using the distributive law of Poisson bracket and the formula $$[A,\,BC] = [A,B]C + B[A,C],\tag{11}$$ I could reach $$ =\int dx' [q(x,t),\Pi(x',t)]_\mathrm{PB} \frac{\partial q}{\partial t}(x',t) + \int dx' \Pi(x',t) \left[q(x,t),\frac{\partial q}{\partial t}(x',t) \right]_\mathrm{PB} \\ \quad\quad-\int dx'\left[ q(x,t),\mathcal{L}\left(\partial_t q(x',t),\partial_x q(x',t),q(x',t)\right) \right]_\mathrm{PB}.\tag{12} $$ In the first term, using the relation $[q(x,t),\Pi(x',t)]=\delta(x-x'),$ I think I can obtain $\partial q(x,t) / \partial t$. However, I cannot go further with the other terms. So here comes my second question: how can I derive the final PB relation for $q$, expectedly something like $[q,H]=\dot{q}$? In addition, how can I derive the similar PB relation for $\Pi$? 
Finally, in all of the steps above, we only dealt with the 'abstract' algebraic properties of the Poisson bracket relations, without specifically checking their compatibility with already-known physical laws (such as the Euler-Lagrange equations). Does that compatibility appear while addressing the previous questions? Or are those 'abstract' relations alone enough to ensure that we are doing the right physics? Answer: I think that your issues are not specifically due to classical field theory, but rather to Hamiltonian mechanics in general. Formally, there is no conceptual difference between the two: you just replace the Kronecker deltas of the finite-dimensional phase space with Dirac deltas in field theory. Hamilton's equations of motion are typically obtained from a variational principle. Your real trajectories are stationary points of the action: $$ S = \int p\,dq - H\,dt $$ Using the calculus of variations, the Euler-Lagrange equations of this action give Hamilton's equations of motion: $$ \dot q = \frac{\partial H}{\partial p} \\ \dot p = -\frac{\partial H}{\partial q} $$ which you can then rewrite more conveniently using Poisson brackets. Note that this is equivalent to the Euler-Lagrange equation of the corresponding Lagrangian thanks to the properties of the Legendre transform, even though the candidate trajectories in the Hamiltonian formalism are more diverse than in the Lagrangian setting. Therefore, if you capture the "right" physics in your Lagrangian (typically by symmetry considerations), then Hamilton's equations of motion will also capture the "right" physics due to this equivalence. Back in your field theory setting, the same method applies. 
From the variational principle using the action $S$: $$ S = \int \Pi\cdot dq - H\,dt \\ \Pi \cdot \dot q = \int \Pi(x) \dot q(x)\, d^nx $$ you obtain Hamilton's equations of motion: $$ \begin{align} \dot q(x) &= \frac{\delta H}{\delta \Pi(x)} \\ &= \frac{\partial \mathcal H}{\partial \Pi}(x) \\ \dot\Pi(x) &= -\frac{\delta H}{\delta q(x)} \\ &= -\frac{\partial \mathcal H}{\partial q}(x) \end{align} $$ where the second equalities assume that (as is often the case) the Hamiltonian is of the form: $$ H = \int \mathcal H(q,\Pi)(x)\, d^nx $$ The correct definition of the Poisson bracket of two functionals $A,B$ of $q,\Pi$ is: $$ [A,B] = \int \left(\frac{\delta A}{\delta q(x)}\frac{\delta B}{\delta \Pi(x)}-\frac{\delta A}{\delta \Pi(x)}\frac{\delta B}{\delta q(x)}\right) d^nx $$ The Poisson bracket of two functionals is again a functional of $q,\Pi$. Things simplify if you assume that they are of the form: $$ A = \int [\mathcal A(\Pi,q)](x)\, d^nx\\ B = \int [\mathcal B(\Pi,q)](x)\, d^nx $$ In which case: $$ \begin{align} \frac{\delta A}{\delta q(x)} &= \frac{\partial \mathcal A}{\partial q}(x) & \frac{\delta B}{\delta q(x)} &= \frac{\partial\mathcal B}{\partial q}(x) \\ \frac{\delta A}{\delta \Pi(x)} &= \frac{\partial\mathcal A}{\partial \Pi}(x) & \frac{\delta B}{\delta \Pi(x)} &= \frac{\partial\mathcal B}{\partial \Pi}(x) \\ [A,B] &= \int\left(\frac{\partial \mathcal A}{\partial q}\frac{\partial \mathcal B}{\partial \Pi}-\frac{\partial \mathcal B}{\partial q}\frac{\partial \mathcal A}{\partial \Pi}\right)(x)\, d^nx \end{align} $$ Even the elementary fields can be written this way: $$ q(x) = \int q(y)\delta(y-x)\,d^ny \\ \Pi(x) = \int \Pi(y)\delta(y-x)\,d^ny $$ Or more generally, this applies to local observables as well, which justifies the equations whose accuracy you were unsure about: $$ \mathcal A(x) = \int [\mathcal A(q,\Pi)] (y)\delta(y-x)\,d^ny \\ \mathcal B(x) = \int [\mathcal B(q,\Pi)] (y)\delta(y-x)\,d^ny $$ This is why you have: $$ \begin{align} [q(x),\Pi(y)] &= \delta(x-y) \\ [q(x),A] &= \frac{\partial \mathcal A}{\partial \Pi}(x) \\ [\Pi(x),A] &= -\frac{\partial \mathcal A}{\partial q}(x) \\ [\mathcal A(x),\mathcal B(y)] &= \frac{\partial \mathcal A}{\partial q}(x)\frac{\partial \mathcal B}{\partial \Pi}(x)\delta(x-y)-\frac{\partial \mathcal B}{\partial q}(x)\frac{\partial \mathcal A}{\partial \Pi}(x)\delta(x-y) \end{align} $$ Hope this helps.
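To tie this back to the relations (1) and (2) that were asked about: with the integral definition of the bracket and a Hamiltonian of the form $H=\int \mathcal H\, d^nx$, the computation is short (the cross terms vanish because $\delta q(x)/\delta \Pi(y) = \delta \Pi(x)/\delta q(y) = 0$): $$ [q(x,t),H]_\mathrm{PB} = \int d^ny\, \frac{\delta q(x)}{\delta q(y)}\frac{\delta H}{\delta \Pi(y)} = \int d^ny\, \delta(x-y)\frac{\delta H}{\delta \Pi(y)} = \frac{\delta H}{\delta \Pi(x)} = \dot q(x,t), $$ $$ [\Pi(x,t),H]_\mathrm{PB} = -\int d^ny\, \frac{\delta \Pi(x)}{\delta \Pi(y)}\frac{\delta H}{\delta q(y)} = -\frac{\delta H}{\delta q(x)} = \dot \Pi(x,t), $$ where the final equalities are Hamilton's equations. If $\mathcal H$ also depends on $\partial_x q$, the functional derivative picks up a gradient term (assuming boundary terms vanish under integration by parts), $$ \frac{\delta H}{\delta q(x)} = \frac{\partial \mathcal H}{\partial q}(x) - \partial_x\left(\frac{\partial \mathcal H}{\partial(\partial_x q)}\right)(x), $$ and this is exactly where the Euler-Lagrange structure re-enters: the 'abstract' bracket relations together with the Legendre transform reproduce the same field equations, so they do encode the right physics.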
{ "domain": "physics.stackexchange", "id": 97538, "tags": "lagrangian-formalism, hamiltonian-formalism, classical-field-theory, poisson-brackets, functional-derivatives" }
Improving efficiency of download code
Question: I'm in the process of reviewing some old code released by a former colleague. We have a cronjob that runs once an hour to download some thumbnails; the paths to the thumbnails are stored in a plain array. // $totalThumbs is usually less than 2000 (each thumbnail about 3kb) $i = 0; while ($i < $totalThumbs) { $imgName = basename($thumbs_array[$i]); $fgc = file_get_contents($thumbs_array[$i]); $currentFile = __DIR__ . "/" . $imgName; // if file_get_contents doesn't return false if ($fgc !== false) { // if the file is not in that directory, write the file if (!file_exists($currentFile)) { file_put_contents($currentFile, $fgc); clearstatcache(true, $currentFile); } } $i++; sleep(1); } This code works, but we can't, for example, use curl_multi_exec() because of the limited resources of our server. Is there a way to improve it (make it more efficient and/or more secure), considering our hardware limits? We don't need speed, but ideally lower memory consumption, because the same server is simultaneously busy with many other jobs. Thanks EDIT (for Mast): one important thing to note is that the current idea is to remove this part and use a cronjob that stores the array directly in a file, so that the script we are talking about only has to read that array: $dir = "https://mylocaldir"; $thumbs_array = []; // this one returns JSONP $raw_json = 'https://endpoint'; $json = file_get_contents($raw_json); // JSONP removal $json = str_replace("jsonOutreachFeed(", "", $json); $json = substr($json, 0, -1); $decoded_json = json_decode($json); $itm = $decoded_json->items; $totalThumbs = count($itm); for ($i = 0; $i < $totalThumbs; $i++) { $thumbs_array[] = $itm[$i]->media->m; } Answer: Since the code reads the contents of the file only to see whether there are contents in the file, an optimization for both speed and memory usage would be to use the filesize() function and not read the contents. Reading the contents of the file consumes both time and memory. 
Rather than writing the contents read in, copy or move the original file to the new location using either rename() or move_uploaded_file().
{ "domain": "codereview.stackexchange", "id": 38976, "tags": "performance, php, memory-optimization" }
How should I decide which principal units to use in dimensional analysis based on resulting pi's
Question: Consider an unknown equation involving six different variables: period $T$, in seconds; velocity $v$, in meters per second; pressure difference $\Delta p$, in pascals; length $L$, in meters; volume $V$, in cubic meters; density $\rho$, in kilograms per cubic meter. We want to find an expression for $T$. Applying dimensional analysis and Buckingham's Pi theorem, we need a basis of $3$ fundamental units and $3$ dimensionless $\pi$'s. Consider the basis $v$, $\Delta p$, $L$ (dimensions $L T^{-1}$, $M L^{-1} T^{-2}$, $L$). The $\pi$ which includes $\rho$ will be $$\pi_1 = \frac{\rho v^2}{\Delta p}$$ Now consider the basis $v$, $\rho$, $L$ (dimensions $L T^{-1}$, $M L^{-3}$, $L$). The $\pi$ which includes $\Delta p$ will be $$\pi_1 = \frac{\Delta p}{\rho v^2}$$ which is the inverse of the $\pi$ found with the previous basis. However, the remaining two $\pi$'s remain exactly the same! $$\pi_2 = \frac{Tv}{L}$$ $$\pi_3 = \frac{V}{L^3}$$ This leads to the following seemingly contradictory result: $$\frac{Tv}{L} = \phi\left(\frac{\rho v^2}{\Delta p}, \frac{V}{L^3}\right)$$ $$\frac{Tv}{L} = \phi\left(\frac{\Delta p}{\rho v^2}, \frac{V}{L^3}\right)$$ So does $T$ depend on $v^2$ or on $v^{-2}$? How do I know which one it is? Did I make a mistake, or is the answer to this question beyond the abilities of dimensional analysis? Answer: Assuming your dimensional analysis is correct, the functional dependence is of the form: $$\frac{Tv}{L} = k_1\left(\frac{\rho v^2}{\Delta p}\right)^{\alpha_1}\left(\frac{V}{L^3}\right)^{\beta_1}$$ or: $$\frac{Tv}{L} = k_2\left(\frac{\Delta p}{\rho v^2}\right)^{\alpha_2}\left(\frac{V}{L^3}\right)^{\beta_2}$$ The only way to determine the numerical values of the coefficients is either through an analytical result or through an experiment. In general you should find that $k_1=k_2=k$, $\alpha_1=-\alpha_2=\alpha$ and $\beta_1=\beta_2=\beta$, so it does not matter which form you use for the functional dependence.
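To make the answer's resolution explicit (a short check under the same power-law assumption): since the second basis's $\pi_1$ is the reciprocal of the first's, any power-law dependence in one form is a power-law dependence in the other with the sign of the exponent flipped, $$ \frac{Tv}{L} = k\left(\frac{\rho v^2}{\Delta p}\right)^{\alpha}\left(\frac{V}{L^3}\right)^{\beta} = k\left(\frac{\Delta p}{\rho v^2}\right)^{-\alpha}\left(\frac{V}{L^3}\right)^{\beta}. $$ The apparent "$v^2$ versus $v^{-2}$" contradiction is therefore only a difference in parametrization: dimensional analysis fixes the dimensionless groups but not the sign or value of $\alpha$, which must come from theory or experiment.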
{ "domain": "physics.stackexchange", "id": 45932, "tags": "fluid-dynamics, dimensional-analysis" }
How do we count charges in this example of a Redox reaction equation?
Question: My younger brother had a doubt regarding this chemistry textbook example, so I'm posting it here. Question: Balance the equation: $\ce{H+ + MnO4- + Fe^2+ -> Fe^3+ + Mn^2+}$ Solution steps (as given by the textbook): Step 1 : $$\ce{MnO4- -> Mn^2+}$$ Step 2 : $$\ce{MnO4- -> Mn^2+ + 4 H2O}$$ Step 3 : $$\ce{MnO4- + 8 H+ -> Mn^2+ + 4 H2O}$$ Step 4 : $$\ce{MnO4- + 8 H+ + 5 e- -> Mn^2+ + 4 H2O}\tag{1}$$ Step 5 : $$\ce{Fe^2+ -> Fe^3+ + e-}\tag{2}$$ Step 6 : $5\times(2) + (1)$ and thus we have the balanced equation: $$\ce{MnO4- + 8 H+ + 5 Fe^2+ -> 5 Fe^3+ + Mn^2+ + 4 H2O}$$ Explanation given in the textbook for step 4: we equalise the charge by adding 5 electrons on the left. My doubt regarding step 4: how do I know that the 5 electrons in step 4 actually equalize the charge in the given equation? More precisely, how do I know whether there is any surplus or deficit of charge in a given equation, at least in this case? Answer: If you look at the charges in step 4, the left side adds up to +7 and the right side adds up to +2. The $\ce{MnO4-}$ on the left has a $-1$ charge, and the eight $\ce{H+}$ add a $+8$ charge, giving a $+7$ charge on the left overall. The $\ce{Mn^2+}$ on the right has a $+2$ charge, so that side is $+2$. For the equation to be balanced, the two sides must carry equal charge. How can $+7$ be made equal to $+2$? By subtracting 5. In this case, adding 5 electrons contributes a $-5$ charge, which means both sides of the equation then have a $+2$ charge. You just need to add up the charges like this for a given equation to balance it.
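The charge bookkeeping in the answer can be checked mechanically. The following Python sketch (the species and charges are entered by hand for this particular half-reaction; it is not a general equation balancer) sums coefficient × charge on each side and reports how many electrons are needed:

```python
def total_charge(side):
    """Sum of coefficient * charge over all (coefficient, charge) pairs."""
    return sum(coeff * charge for coeff, charge in side)

# Step 3 of the textbook solution, before electrons are added:
# MnO4- + 8 H+  ->  Mn2+ + 4 H2O
left = [(1, -1), (8, +1)]   # one MnO4- (charge -1), eight H+ (charge +1)
right = [(1, +2), (4, 0)]   # one Mn2+ (charge +2), four neutral H2O

electrons_needed = total_charge(left) - total_charge(right)
print(total_charge(left), total_charge(right), electrons_needed)  # 7 2 5
```

A positive difference means the left side carries surplus positive charge, so that many electrons go on the left: adding 5 electrons brings both sides to +2, exactly as in step 4.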
{ "domain": "chemistry.stackexchange", "id": 6190, "tags": "homework, redox, stoichiometry" }
Natural merge sort
Question: Note: the next iteration is at Natural merge sort - follow-up. I have ported a natural merge sort algorithm from Java to C++. Natural merge sort sacrifices at most \$2N\$ extra work in order to recognize the runs in the input range. A run is a contiguous subsequence which is ascending or strictly descending. Every descending run is reversed. We don't allow non-strictly descending runs, since reversing them would rearrange equal elements and make the entire algorithm non-stable. After it has identified the runs, it puts them all into a queue. Then, while there are runs to merge, it pops two runs, merges them, and appends the new merged run to the tail of the queue. It continues in this manner until only one run is left in the queue, which corresponds to the sorted range. So, all in all, the time complexity of this sort is \$\Omega(N) \cap \mathcal{O}(N \log N)\$, with \$\Theta(N)\$ space complexity. Is this a good way of writing (potentially) reusable code? Are there some C++ idioms I should have adhered to? natural_merge_sort.h: #ifndef NATURAL_MERGE_SORT_H #define NATURAL_MERGE_SORT_H #include <algorithm> #include <iostream> #include <iterator> /******************************************************************************* * Implements a simple, array-based queue of integers. All three operations run * * in constant time. This queue, however, does not check for under-/overflow of * * underlying buffer because of performance considerations. * *******************************************************************************/ class UnsafeIntQueue { private: const size_t MINIMUM_CAPACITY = 256; size_t m_head; size_t m_tail; size_t m_size; size_t m_mask; size_t* m_buffer; /*************************************************************************** * Makes sure a capacity is at least 'MINIMUM_CAPACITY' and is a power of * * two. 
* ***************************************************************************/ size_t fixCapacity(size_t capacity) { capacity = std::max(capacity, MINIMUM_CAPACITY); size_t s = 1; while (s < capacity) { s <<= 1; } return s; } public: /*************************************************************************** * Constructs a new integer queue, which can accommodate 'capacit' amount * * integers. * ***************************************************************************/ UnsafeIntQueue(size_t capacity) : m_head{0}, m_tail{0}, m_size{0} { capacity = fixCapacity(capacity); m_mask = capacity - 1; m_buffer = new size_t[capacity]; } /*************************************************************************** * Destroys this queue, which releases the underlying buffer. * ***************************************************************************/ ~UnsafeIntQueue() { delete[] m_buffer; } /*************************************************************************** * Appends the input integer to the tail of this queue. * ***************************************************************************/ void enqueue(const size_t element) { m_buffer[m_tail & m_mask] = element; m_tail = (m_tail + 1) & m_mask; m_size++; } /*************************************************************************** * Removes and returns the integer at the head of this queue. * ***************************************************************************/ size_t dequeue() { const size_t ret = m_buffer[m_head]; m_head = (m_head + 1) & m_mask; m_size--; return ret; } /*************************************************************************** * Returns the amount of integers in this queue. 
* ***************************************************************************/ size_t size() const { return m_size; } }; /******************************************************************************* * Scans the range [first, lst) and returns the queue containing sizes of each * * run in the order they appear while scanning from left to right. * *******************************************************************************/ template<class RandomIt, class Cmp> std::unique_ptr<UnsafeIntQueue> build_run_size_queue(RandomIt first, RandomIt lst, Cmp cmp) { const size_t length = std::distance(first, lst); UnsafeIntQueue* p_q = new UnsafeIntQueue(length / 2 + 1); RandomIt head; RandomIt left = first; RandomIt right = left + 1; const RandomIt last = lst - 1; while (left < last) { head = left; if (cmp(*right++, *left++)) { // Reading a strictly descending run. while (left < last && cmp(*right, *left)) { ++left; ++right; } p_q->enqueue(right - head); std::reverse(head, right); } else { // Reading a ascending run. while (left < last && !cmp(*right, *left)) { ++left; ++right; } p_q->enqueue(left - head + 1); } ++left; ++right; } if (left == last) { // Handle the case of an orphan element at the end of the range. p_q->enqueue(1); } return std::unique_ptr<UnsafeIntQueue>(p_q); } /******************************************************************************* * Returns the amount of leading zeros in 'num'. * *******************************************************************************/ size_t leading_zeros(const size_t num) { size_t count = 0; for (size_t t = (size_t) 1 << (8 * sizeof(t) - 1); t; t >>= 1, ++count) { if ((t & num)) { return count; } } return count; } /******************************************************************************* * Returns the amount of merge passes needed to sort a range with 'run_amount' * * runs. 
* *******************************************************************************/ size_t get_pass_amount(size_t run_amount) { return 8 * sizeof(run_amount) - leading_zeros(run_amount - 1); } /******************************************************************************* * Implements the merging routine. Runs in in time O(n + m), where 'n' is the * * the length of the left chunk and 'm' is the length of the right chunk. * *******************************************************************************/ template<class RandomIt, class Cmp> void merge(RandomIt source, RandomIt target, const size_t offset, const size_t left_run_length, const size_t right_run_length, Cmp cmp) { RandomIt left = source + offset; RandomIt right = left + left_run_length; const RandomIt left_bound = right; const RandomIt right_bound = right + right_run_length; RandomIt target_iter = target + offset; while (left < left_bound && right < right_bound) { *target_iter++ = cmp(*right, *left) ? *right++: *left++; } std::copy(left, left_bound, target_iter); std::copy(right, right_bound, target_iter); } /******************************************************************************* * Implements the natural merge sort, which sacrifices one pass over the input * * range in order to establish an implicit queue of runs. A run is the longest * * consecutive subsequence, in which all elements are ascending or strictly * * descending. Every descending run is reversed to ascending run. We cannot * * consider non-strictly descending runs, since that would sacrifice the stabi- * * lity of the algorithm. After the run queue is establish, the algorithm re- * * moves two runs from the head of the queue, merges them into one run, which * * is then appended to the tail of the run queue. Merging continues until the * * queue contains only one run, which denotes that the entire input range is * * sorted. * * * * The best-case complexity is O(N), the average and worst-case complexity is * * O(N log N). 
Space complexity is O(N). * *******************************************************************************/ template<class RandomIt, class Cmp> void natural_merge_sort(RandomIt first, RandomIt last, Cmp cmp) { const size_t length = std::distance(first, last); if (length < 2) { // Trivially sorted. return; } typedef typename std::iterator_traits<RandomIt>::value_type value_type; // Scan the runs. std::unique_ptr<UnsafeIntQueue> p_queue = build_run_size_queue(first, last, cmp); // Request a buffer. RandomIt buffer = new value_type[length]; std::copy(first, last, buffer); // Count the amount of merge passes over the array required to bring order. const size_t merge_passes = get_pass_amount(p_queue->size()); RandomIt source; RandomIt target; // Make sure that after the last merge pass, all data ends up in the input // container. if ((merge_passes & 1) == 1) { source = buffer; target = first; } else { source = first; target = buffer; } size_t runs_left = p_queue->size(); size_t offset = 0; // While there is runs to merge, do... while (p_queue->size() > 1) { // Remove two runs from the head of the run queue. const size_t left_run_length = p_queue->dequeue(); const size_t right_run_length = p_queue->dequeue(); // Do the merge. merge(source, target, offset, left_run_length, right_run_length, cmp); // Append the merged run to the tail of the queue. p_queue->enqueue(left_run_length + right_run_length); runs_left -= 2; offset += left_run_length + right_run_length; // The current pass over the array is almost complete. switch (runs_left) { case 1: { const size_t single_length = p_queue->dequeue(); std::copy(source + offset, source + offset + single_length, target + offset); p_queue->enqueue(single_length); } // FALL THROUGH! 
case 0: { runs_left = p_queue->size(); offset = 0; RandomIt tmp = source; source = target; target = tmp; break; } } } delete[] buffer; } #endif main.cpp: #include <algorithm> #include <chrono> #include <iostream> #include <random> #include "natural_merge_sort.h" /******************************************************************************* * Prints an array. * *******************************************************************************/ template<class T> static void print_int_array(const T* begin, const T* last) { while (begin < last) { std::cout << *begin++ << " "; } std::cout << std::endl; } /******************************************************************************* * Checks that the input range is sorted (is in ascending order). * *******************************************************************************/ template<class T, class Cmp> bool is_sorted(T* begin, T* end, Cmp cmp) { while (begin < end - 1) { if (cmp(*(begin + 1), *begin)) { return false; } ++begin; } return true; } /******************************************************************************* * Creates a random integer array of length 'length', minimum integer * * 'minimum', maximum integer 'maximum', using seed 'seed'. * *******************************************************************************/ static int* get_random_int_array(const size_t length, const int minimum, const int maximum, const unsigned int seed) { std::default_random_engine generator(seed); std::uniform_int_distribution<int> distribution(minimum, maximum); int* array = new int[length]; for (size_t i = 0; i < length; ++i) { array[i] = distribution(generator); } return array; } /******************************************************************************* * Create an array of pointers to integers. 
* *******************************************************************************/ static int** get_random_int_pointer_array(const size_t length, const int minimum, const int maximum, const unsigned seed) { std::default_random_engine generator(seed); std::uniform_int_distribution<int> distribution(minimum, maximum); int** array = new int*[length]; for (size_t i = 0; i < length; ++i) { array[i] = new int(distribution(generator)); } return array; } /******************************************************************************* * Returns a strongly presorted array of integers. * *******************************************************************************/ static int* get_presorted_int_array(const size_t length) { int* array = new int[length]; int num = 0; for (size_t i = 0; i < length / 2; ++i) { array[i] = num++; } for (size_t i = length / 2; i < length; ++i) { array[i] = num--; } return array; } /******************************************************************************* * Returns the milliseconds since the Unix epoch. * *******************************************************************************/ static unsigned long long get_milliseconds() { return std::chrono::duration_cast<std::chrono::milliseconds>( std::chrono::system_clock::now().time_since_epoch()).count(); } /******************************************************************************* * Checks that the two ranges are of the same length and content. * *******************************************************************************/ template <class T> bool are_equal(const T* begin1, const T* end1, const T* begin2, const T* end2) { if (std::distance(begin1, end1) != std::distance(begin2, end2)) { return false; } while (begin1 < end1) { if (*begin1++ != *begin2++) { return false; } } return true; } /******************************************************************************* * Compares two integer pointers by the values they point to. 
* *******************************************************************************/ static bool compare_dereference(const int* a, const int* b) { return *a < *b; } /******************************************************************************* * Compares two integers. * *******************************************************************************/ static bool compare_int(const int a, const int b) { return a < b; } /******************************************************************************* * Profiles the 'std::stable_sort' agains the range ['begin', 'end') using the * * comparator 'cmp'. * *******************************************************************************/ template<class T, class Cmp> static void profile_stable_sort(T* begin, T* end, Cmp cmp) { unsigned long long ta = get_milliseconds(); std::stable_sort(begin, end, cmp); unsigned long long tb = get_milliseconds(); std::cout << "std::stable_sort in " << (tb - ta) << " milliseconds. " << "Sorted: " << is_sorted(begin, end, cmp) << std::endl; } /******************************************************************************* * Profiles the 'natural_merge_sort' agains the range ['begin', 'end') using * * the comparator 'cmp'. * *******************************************************************************/ template<class T, class Cmp> static void profile_natural_merge_sort(T* begin, T* end, Cmp cmp) { unsigned long long ta = get_milliseconds(); natural_merge_sort(begin, end, cmp); unsigned long long tb = get_milliseconds(); std::cout << "natural_merge_sort in " << (tb - ta) << " milliseconds. " << "Sorted: " << is_sorted(begin, end, cmp) << std::endl; } /******************************************************************************* * Profiles the sorting algorithms on a random integer array. 
* *******************************************************************************/ static void profile_on_random_array(const size_t sz, const int minimum, const int maximum, const unsigned seed) { int* array1 = get_random_int_array(sz, minimum, maximum, seed); int* array2 = new int[sz]; std::copy(array1, array1 + sz, array2); std::cout << "--- PROFILING ON RANDOM ARRAY OF LENGTH " << sz << " ---" << std::endl; profile_stable_sort(array1, array1 + sz, compare_int); profile_natural_merge_sort(array2, array2 + sz, compare_int); std::cout << "Same contents: " << are_equal(array1, array1 + sz, array2, array2 + sz) << std::endl << std::endl; } /******************************************************************************* * Profiles the sorting algorithms on an array of pointers to random integers. * *******************************************************************************/ static void profile_on_integer_pointer_array(const size_t sz, const int minimum, const int maximum, const unsigned seed) { std::cout << "--- PROFILING ON RANDOM POINTER ARRAY OF LENGTH " << sz << " ---" << std::endl; int** array1 = get_random_int_pointer_array(sz, minimum, maximum, seed); int** array2 = new int*[sz]; std::copy(array1, array1 + sz, array2); profile_stable_sort(array1, array1 + sz, compare_dereference); profile_natural_merge_sort(array2, array2 + sz, compare_dereference); std::cout << "Same contents: " << are_equal(array1, array1 + sz, array2, array2 + sz) << std::endl << std::endl; } /******************************************************************************* * Profiles the sorting algorithms on a presorted array. 
* *******************************************************************************/ static void profile_on_presorted_array(const size_t sz) { std::cout << "--- PROFILING ON PRESORTED ARRAY OF LENGTH " << sz << " ---" << std::endl; int* array1 = get_presorted_int_array(sz); int* array2 = new int[sz]; std::copy(array1, array1 + sz, array2); profile_stable_sort(array1, array1 + sz, compare_int); profile_natural_merge_sort(array2, array2 + sz, compare_int); std::cout << "Same contents: " << are_equal(array1, array1 + sz, array2, array2 + sz) << std::endl << std::endl; } /******************************************************************************* * The entry point to a demo program. * *******************************************************************************/ int main(int argc, const char * argv[]) { unsigned long long seed = get_milliseconds(); //// std::cout << "Seed: " << seed << std::endl << std::endl; //// const size_t length = 5000000; const int min_int = -100; const int max_int = 300; std::cout << std::boolalpha; profile_on_random_array(length, min_int, max_int, seed); profile_on_integer_pointer_array(length, min_int, max_int, seed); profile_on_presorted_array(length); return 0; } Answer: One thing that I find odd is that you duplicate a great deal of standard library code to implement your sorting algorithm: is_sorted can be replaced by std::is_sorted. are_equal can be replaced by std::equal. Note that since C++14, there is an overload taking 4 iterators, just like what you did, so that it can use a \$O(1)\$ std::distance with random-access iterators. compare_int can be replaced by std::less<int>, or even the simpler and more generic std::less<> if you have access to the C++14 library. 
Also, you shouldn't bother with a dedicated compare_dereference function when you can use a lambda instead (here, C++14 generic lambda): profile_stable_sort(array1, array1 + sz, [](auto* a, auto* b){ return *a < *b; }); If you use GCC or Clang, you could use __builtin_clzll, which may be way faster than manually counting leading zeros (the long long variant matches a 64-bit size_t; plain __builtin_clz takes an unsigned int, so the argument would be truncated). A simple macro check could help you to pick the most suitable implementation: size_t leading_zeros(const size_t num) { #if defined(__GNUC__) || defined(__clang__) return __builtin_clzll(num); #else size_t count = 0; for (size_t t = (size_t) 1 << (8 * sizeof(t) - 1); t; t >>= 1, ++count) { if ((t & num)) { return count; } } return count; #endif }
{ "domain": "codereview.stackexchange", "id": 13015, "tags": "c++, algorithm, c++11, sorting, mergesort" }
Simulating DFA operations on non-regular languages with finite final state set
Question: A language I'm investigating is not regular, since the minimal DFA for the language grows depending on input string size. However, while the number of non-final states increases, the final states are the same small, strictly finite core, which includes the initial state. Another question looks superficially similar, hinting that what I'm looking for might be an interpretation of this language as a generalization of a prefix code in order to ignore the infinite non-final states. Are there methods to treat this 'pseudo-regular' language as regular (or any other language class with efficient algorithms) for purposes of intersection and path finding from the small final state set back to the initial state, thus sidestepping the issue that the complete DFA is not finite in size? Answer: Your question is missing quite a few definitions, but here is my interpretation. A language $L \subseteq \Sigma^*$ is finite-regular if there exists $C$ such that for all $n$, $L \cap \Sigma^n$ is accepted by some DFA with at most $C$ final states. Note that if the minimal DFA for a language has $m$ accepting states, then every DFA for the language has at least $m$ accepting states (this follows from Myhill–Nerode theory). Therefore we obtain an equivalent definition if we replace "some DFA" with "the minimal DFA". Using the product construction, you can easily show that the class of finite-regular languages is closed under union and intersection, and more generally under monotone set operations.
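The product construction mentioned in the last sentence is mechanical enough to sketch. The following Python fragment (all names illustrative, not from the question) runs two DFAs in lockstep; choosing as final the pairs whose components are both final yields the intersection, and any monotone combination of the two finality bits works the same way:

```python
from itertools import product

def dfa_product(d1, d2, accept):
    """Product of two DFAs given as (states, alphabet, delta, start, finals).

    `accept` decides finality of a product state from the components'
    finality: lambda a, b: a and b gives intersection, a or b gives union.
    """
    s1, alpha, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = set(product(s1, s2))
    delta = {((a, b), c): (t1[(a, c)], t2[(b, c)])
             for (a, b) in states for c in alpha}
    finals = {(a, b) for (a, b) in states if accept(a in f1, b in f2)}
    return states, alpha, delta, (q1, q2), finals

def run(dfa, word):
    _, _, delta, q, finals = dfa
    for c in word:
        q = delta[(q, c)]
    return q in finals

# Two toy DFAs over {'a','b'}: "even number of a's" and "ends in b".
even_a = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({'x', 'y'}, {'a', 'b'},
          {('x', 'a'): 'x', ('y', 'a'): 'x', ('x', 'b'): 'y', ('y', 'b'): 'y'},
          'x', {'y'})

both = dfa_product(even_a, ends_b, lambda a, b: a and b)
```

For example, run(both, "aab") accepts (two a's, ends in b) while run(both, "ab") does not.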
{ "domain": "cs.stackexchange", "id": 9526, "tags": "regular-languages, finite-automata" }
MVC design pattern approaches
Question: I am more confused than clear on the concept. I have been developing a system and want to understand the MVC design pattern: which approach is best, and what are the concept and usability of the layers? In general, I have two approaches: 1. ViewModel layer (contains viewmodels), Models (contains models and datacontext), Repository (communicating to datacontext), Controller (communicating to view), Presentation (views) For example, one page, country (CRUD operations), will look like this: [AdminRoleAuthorize(Roles = "Admin")][CustomHandleError] public class CountriesController : BaseController { #region Members private IUnitOfWork unitOfWork; #endregion Members #region Constructor public CountriesController(iMSTUnitOfWork _unitOfWork) { unitOfWork = _unitOfWork; } #endregion Constructor #region Actions public ActionResult Index() { return View(); } public async Task<ActionResult> Details(int? id) { if (id == null) { return new HttpStatusCodeResult(HttpStatusCode.BadRequest); } Country country = await unitOfWork.CountryRepository.GetByIdAsync((int)id); if (country == null) { return HttpNotFound(); } return View(country); } public ActionResult Create() { return View(); } [HttpPost] [ValidateAntiForgeryToken] public async Task<ActionResult> Create(Country country) { try { if (ModelState.IsValid) { unitOfWork.CountryRepository.Insert(country); await unitOfWork.SaveAsync(); return RedirectToAction("Index"); } return View(country); } catch (Exception ex) { ModelState.HandleError(ex); return View(country); } } public async Task<ActionResult> Edit(int?
id) { if (id == null) { return new HttpStatusCodeResult(HttpStatusCode.BadRequest); } Country country = await unitOfWork.CountryRepository.GetByIdAsync((int)id); if (country == null) { return HttpNotFound(); } return View(country); } [HttpPost] [ValidateAntiForgeryToken] public async Task<ActionResult> Edit(Country country) { try { if (ModelState.IsValid) { var original= unitOfWork.CountryRepository.GetById(country.Id); unitOfWork.CountryRepository.Update(original); await unitOfWork.SaveAsync(); return RedirectToAction("Index"); } return View(country); } catch (Exception ex) { ModelState.HandleError(ex); return View(country); } } public async Task<ActionResult> Delete(int id) { await unitOfWork.CountryRepository.DeleteAsync(id); await unitOfWork.SaveAsync(); return RedirectToAction("Index"); } #endregion Actions #region DataTable public JsonResult GetJsonData(DTParameterModel param) { try { var results = GetCountry(param); return Json(results, JsonRequestBehavior.AllowGet); } catch (Exception ex) { return Json(new { error = ex.Message }); } } #endregion DataTable #region Methods public DTResult<Country> GetCountry(DTParameterModel param) { var errors = unitOfWork.CountryRepository.GetQueryable().Where(a => (a.Name.Contains(param.Search.Value) || param.Search.Value == null) ); return DataTableFiltering<Country>.GetResult(param, errors); } protected override void Dispose(bool disposing) { if (disposing) { // unitOfWork.Dispose(); } base.Dispose(disposing); } #endregion Methods } Please note that there is no business layer comes here and I am accessing repository directly. 2. 
ViewModel layer (contains viewmodels), Models (contains models and datacontext), Repository (communicating to datacontext), Controller (communicating to view), Presentation (views) Business layer In this scenario my code will look like this: public class ParticipationStatusController : BaseController { private readonly IParticipationStatusManager _participationStatusManager; public ParticipationStatusController(IParticipationStatusManager participationStatusManager, ISecurityManager securityManager) : base(securityManager) { _participationStatusManager = participationStatusManager; } #region NotUsed public async Task<ActionResult> ParticipationStatusListList() { var serviceResponse = new DataResponse<string> { ErrorCode = ErrorCodes.Failure }; var clientId = GetAuthenticatedClientId(); if (clientId == 0) { serviceResponse.Message = HttpStatusCode.Unauthorized + " :Set Authenticated Client Id"; return View(serviceResponse); } int languageId = await GetClientLanguageId(clientId); var response = await _participationStatusManager.GetParticipationStatus(languageId); if (!response.Exceptions.Any()) { if (response.Response != null) { var resultResponse = new DataResponse<IEnumerable<ParticipationStatusTranslateViewModel>> { Data = response.Response }; ErrorMessage.Getmessage(response.Response.Count(), "Participation Status", ref resultResponse); return Ok(resultResponse); } else return NotFound(); } foreach (var keyValuePair in response.Exceptions) { ModelState.AddModelError(keyValuePair.Key, new Exception(keyValuePair.Value)); } if (response.Response != null) return View(response.Response); else return BadRequest(ModelState); } #endregion } Please note that all the communications to database are going through the business layer which is communicating to the repository. Which is a good to use and what is the role of each layer? I do not have good experience developing system design. Also, should I use viewmodel only in presentation? 
and use automapper to communicate to the repository? Answer: Unit of Work What strikes me the most in your design is the invalid usage of the Unit of Work. You use it only to get the repository via its property, but a Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work. Like an Entity in Entity Framework that tracks changes. You might as well pass the repository directly via DI and it wouldn't change anything. Naming a Unit of Work a unit-of-work is also a bad practice. It's just a placeholder for a more specific type in your domain. Maybe a Country or something, but definitely not just a unit-of-work. I cannot say anything more than this because you didn't post more code, but its design doesn't look right. It's difficult to come up with an example. The only thing I would change about your code in the first place would be to use the repository instead of the invalid unit-of-work. So instead: public class CountriesController : BaseController { private IUnitOfWork unitOfWork; public CountriesController(iMSTUnitOfWork _unitOfWork) { unitOfWork = _unitOfWork; } } you'll have: public class CountriesController : BaseController { private readonly IRepository _repository; public CountriesController(IRepository repository) { _repository = repository; } } It might be a good idea to ask another question about the unit-of-work and repository that you use here. MVC The first design looks OK. I cannot say anything about the second one; it's too short and it doesn't interact with the model - perhaps it doesn't have to. I don't know. Minor issues Using regions like you use them is worse than not using them at all. I had to remove them first so that your code becomes readable. Personally I don't like them. Furthermore your code formatting is very strange with those multiple line breaks, which makes it hard to read too.
{ "domain": "codereview.stackexchange", "id": 22243, "tags": "c#, design-patterns, mvc" }
Variational Quantum Linear Solver (Hadamard test): circuit question
Question: Trying to understand the circuit/algorithm for VQLS and I found this diagram to show the high-level idea of doing the Hadamard test in this tutorial. But I am not quite sure why we need the two circuit blocks in the red box that I drew. I appreciate the insight. Answer: This circuit is used to calculate the coefficients $\mu_{l, l', j}$ which appear in the numerator of $C_ L$ \begin{align*}\mu_{l, l', j} = \langle 0| V^\dagger A_{l'}^\dagger U Z_j U^\dagger A_l V |0\rangle\end{align*} Hadamard test is used to calculate the expectation value $\langle\psi|{\bf Q}|\psi\rangle$. Now we have, $$\langle\psi|{\bf Q}|\psi\rangle\ = \langle 0| V^\dagger (A_{l'}^\dagger U Z_j U^\dagger A_l) V |0\rangle$$ That is, $|\psi\rangle\ = V |0\rangle$ and ${\bf Q} = A_{l'}^\dagger U Z_j U^\dagger A_l$.
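That identity is easy to sanity-check numerically. The sketch below (pure Python, with my own illustrative single-qubit state and Q = Z; none of it is from the tutorial) walks the Hadamard-test circuit by hand (H on the ancilla, controlled-Q, H again) and confirms that P(ancilla = 0) = (1 + Re⟨ψ|Q|ψ⟩)/2, which is how the real part of each μ coefficient is read off:

```python
import cmath
import math

def matvec(m, v):
    # 2x2 complex matrix times 2-vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def inner(u, v):
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

# |psi> = V|0> for some single-qubit V; Q is a unitary "observable" factor.
theta = 0.7
psi = [math.cos(theta), cmath.exp(0.3j) * math.sin(theta)]   # normalized
Q = [[1, 0], [0, -1]]                                        # Pauli Z as Q

# Hadamard test worked by hand: after H, controlled-Q, H the ancilla
# branches are  ancilla=0 -> (psi + Q psi)/2,  ancilla=1 -> (psi - Q psi)/2.
qpsi = matvec(Q, psi)
branch0 = [(a + b) / 2 for a, b in zip(psi, qpsi)]
p0 = sum(abs(a) ** 2 for a in branch0)

# P(0) = (1 + Re<psi|Q|psi>) / 2
expected = (1 + inner(psi, qpsi).real) / 2
assert abs(p0 - expected) < 1e-12
```

Replacing Q with the full $A_{l'}^\dagger U Z_j U^\dagger A_l$ product and $|\psi\rangle$ with $V|0\rangle$ on more qubits is the same computation at larger dimension.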
{ "domain": "quantumcomputing.stackexchange", "id": 2877, "tags": "quantum-algorithms, vqe, linear-algebra" }
Event Registration
Question: I am new to the world of coding, PHP as well as XHTML. As my first stab at object-oriented coding, I put together a basic registration form that is meant to serve as a means to collect user details. What this form does not do is the following: capture information in a file or database, as I have not yet gained enough experience with databases, sessions, cookies or files. I looked at using onchange to populate the days dropdown menu dynamically but despise the form reloading and hence avoided doing so. I am aware I can use AJAX; however, considering my lack of knowledge, I was hoping someone could advise other alternatives apart from JavaScript. Sanitize the data using htmlentities or htmlspecialchars. What I would like: I would appreciate any constructive criticism, what I can improve on, and what I could have done better. XHTML Code <?php require_once('includes/base.php'); require_once('functions.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta name="keywords" content="" /> <meta name="description" content= "" /> <meta name="author" content="" /> <title>Event Details</title> <link rel="stylesheet" type="text/css" href="web.css" /> </head> <body> <h1>Details</h1> <p>Please provide us with your details so that we can confirm your booking</p> <div> <?php $eventform = new eventform(); $eventform -> validateform(); ?> <fieldset> <legend>Your details</legend> <form name="eventdetails" method="post" action="event.php"> <label for="fname">First Name</label> <input type="text" name="fname" value="<?php echo isset($_POST['fname']) ? $_POST['fname'] : ''; ?>" tabindex="1" id="fname" /> <label for="lname">Last Name</label> <input type="text" name="lname" value="<?php echo isset($_POST['lname']) ?
$_POST['lname'] : ''; ?>" tabindex="2" id="lname" /> <label for="email">Email</label> <input type="text" name="email" value="<?php echo isset($_POST['email']) ? $_POST['email'] : ''; ?>" tabindex="3" id="email" /> <label for="cuisine">Cuisine</label> <?php $eventform = new eventform(); $eventform -> radiobuttons('cuisine'); ?> <label for="eventperiod">Please select an event day</label> <?php $eventform = new eventform(); ?> <select name="eventmonth" tabindex="8" id="eventmonth"> <?php echo $eventform -> dropdown('months'); ?> </select> <select name="eventday" tabindex="7" id="eventday"> <?php echo $eventform -> dropdown('days'); ?> </select> <select name="eventyear" tabindex="9" id="eventyear"> <?php echo $eventform -> dropdown('years'); ?> </select> <label> <input type="submit" name="register" value="register" tabindex="10" id="register" /> <input type="reset" name="reset" value="reset" tabindex="11" id="reset" /> </label> </form> </fieldset> </body> </html> CSS Code body{ font-family: arial, verdana, sans-serif; font-size: 0.8em; font-weight: normal; font-style: normal; } h1 { } p { } fieldset { border: 1px solid #D9D9D9; background-color: #F5F5F5; } fieldset legend { color: #FFFFFF; background-color: #8B8378; padding: 5px; padding-right: 9px; padding-left: 9px; } label { display: block; padding: 5px; } label.cuisine { display: inline; } Inlcudes - Base <?php $parentdirectory = dirname(dirname(__FILE__)); define('BASE_DIRECTORY', $parentdirectory); ini_set('include_path', $parentdirectory.'/includes/'); ?> Includes - Class <?php class eventform { //Declare data members/properties private $fname, $lname, $email, $cuisine, $eventday, $timezone, $eventmonth; //Generate months public function eventmonths($startingmonth=1, $endingmonth=12) { //Default timezone $this->timezone = date_default_timezone_set('UTC'); //Loop through months $months = range($startingmonth, $endingmonth); $returnmonths = ''; foreach($months as $key) { $returnmonths .= date('F', mktime(0, 0, 0, 
$key)).','; } return rtrim($returnmonths, ','); } //Generate years public function eventyears() { //Get current year $year = date('Y'); //Loop through years $startyear = $year; // $endyear = date('Y', strtotime('+5 years')); $endyear = date('Y', mktime(0,0,0,12,30,$year+5)); $returnyears = ''; for($startyear; $startyear < $endyear; $startyear=$startyear+1) { $returnyears .= $startyear.','; } return rtrim($returnyears, ',') ; } //Loop through days for a specific month and year public function eventdays($month, $year) { // return date('t', strtotime($someYear . '-' . $someMonth . '-01')); // return date('t', strtotime($someYear . '-' . $someMonth )); // $numberofdays = date('t', strtotime($Year. '-' .$Month)); $numberofdays = date('t', mktime(0,0,0,$month,1,$year)); $startdays = 1; $returndays = ''; for($startdays; $startdays <= $numberofdays; $startdays=$startdays+1) { $returndays .= $startdays.','; } return rtrim($returndays, ','); // return substr($returndays, 0, -1); } //Define the type of dropdown public function dropdown($type) { switch($type) { case('days'): $stringdays = $this->eventdays(1, 2011); $arraydays = explode(',', $stringdays); foreach($arraydays as $day) { echo '<option value="'.$day.'">'.$day.'</option>'; } break; case('months'): $stringmonths = $this->eventmonths(); $arraymonths = explode(',', $stringmonths); foreach($arraymonths as $month) { echo '<option value="'.date('n', strtotime($month)).'">'.$month.'</option>'; } break; case('years'): $stringyears = $this->eventyears(); $arrayyears = explode(',', $stringyears); foreach($arrayyears as $key) { echo '<option value="'.$key.'">'.$key.'</option>'; } break; } } //Define the type of radiobuttons public function radiobuttons ($type) { switch($type) { case('cuisine'): $cuisines = array('italian', 'russian', 'malaysian'); $cuisinetype = array_combine(range(1, count($cuisines)), $cuisines); foreach($cuisinetype as $key => $value) { if(isset($_POST['register']) && $_POST['register'] == 'register') { 
if(isset($_POST['cuisine']) && $_POST['cuisine'] == $value) { $status = 'checked=checked'; } else { $status = ''; } } else { $status = ''; } echo '<input type="radio" name="cuisine" value="'.$value.'" tabindex="'.$key.'" id="'.$value.'" '.$status.'>', '<label for="'.$value.'" class="cuisine">'.ucfirst($value).'</label>'; } break; } } //Validate form public function validateform() { if(isset($_POST['register']) && $_POST['register'] == 'register') { $this->fname = isset($_POST['fname']) ? $_POST['fname'] : ''; $this->lname = isset($_POST['lname']) ? $_POST['lname'] : ''; $this->email = isset($_POST['email']) ? $_POST['email'] : ''; $this->cuisine = isset($_POST['cuisine']) ? $_POST['cuisine'] : ''; $this->eventday = isset($_POST['eventday']) ? $_POST['eventday'] : ''; $errormsg = array(); if(empty($this->fname)) { $errormsg[0] = 'Please enter your first name'; } if(empty($this->lname)) { $errormsg[1] = 'Please enter your last name'; } if(empty($this->email)) { $errormsg[2] = 'Please specify an email address'; } if(empty($this->cuisine)) { $errormsg[3] = 'Please select a cuisine'; } if(empty($this->eventday)) { $errormsg[4] = 'Please select an event day'; } foreach($errormsg as $key => $value) { echo $value.'<br />'; } //Basic email validation if(!empty($this->email)) { if(!preg_match('/^[a-zA-Z0-9_\.\-]+[a-zA-Z0-9_\.\-]+@[a-zA-Z0-9_\.\-]+\.[a-zA-Z]{1,6}$/', $this->email)) { echo 'Please enter a valid email address'; } } } } } ?> Answer: Moving to an answer to post some code examples class file: public function radiobuttons ($type) { switch($type) { case('cuisine'): $cuisines = array('italian', 'russian', 'malaysian'); $cuisinetype = array_combine(range(1, count($cuisines)), $cuisines); $output = ''; foreach($cuisinetype as $key => $value) { if(isset($_POST['register']) && $_POST['register'] == 'register') { if(isset($_POST['cuisine']) && $_POST['cuisine'] == $value) { $status = 'checked=checked'; } else { $status = ''; } } else { $status = ''; } $output .= '<input 
type="radio" name="cuisine" value="'.$value.'" tabindex="'.$key.'" id="'.$value.'" '.$status.'>' . '<label for="'.$value.'" class="cuisine">'.ucfirst($value).'</label>'; } return $output; } //code here would run with a break // but not run with a return // so when reviewing the code you'd need to scroll down and check here if you use a break // but not if you use a return } xhtml file : <?php //makes it clear that it's outputting text echo $eventform -> radiobuttons('cuisine'); ?>
{ "domain": "codereview.stackexchange", "id": 333, "tags": "php, html, css, object-oriented" }
Van der Waals Radius clarifications
Question: I just read about van der Waals radius and had some questions about it. Would it be correct to say that in general, van der Waals radius decreases as we move from left to right in the periodic table? The van der Waals radius of the noble gas in a period is larger than the radius of halogen that precedes it. Why is it so? What is the difference between van der Waals radius for monoatomic (say neon) and non-monoatomic (say fluorine) gases? Answer: Let me start by providing a reference to a nice collection of van der Waals' data (actually the site provides a lot of other interesting data about the elements, you might want to poke around). As to your questions, Yes, that is a general trend, but as you can see there are many exceptions. The trend exists for the same reason that covalent radii generally shrink as we move from left to right across the periodic table, every time we add another proton and electron to an atom, the electrons do not effectively screen the outermost electrons from the positive nucleus so they become drawn in closer to the nucleus. & 3. There are a lot of different ways to measure the van der Waals radius of an atom, but in recent times the preferred method is to examine the crystal structure of a molecule. Using $\ce{F2}$ as an example, the crystal would be composed of fluorine molecules. if we measure the crystallographic distance between two adjacent, but non-bonded fluorine atoms and divide by two, we would have a single value that could be used to estimate the van der Waals radius for fluorine (we would probably want to measure a number of similar values for fluorine from other molecules, maybe using $\ce{CH3F, NF3, ClF,}$ etc., for example, and average them together). Our crystal might look something like the drawing below. Because diatomic and polyatomic molecules are polarizable, the bonded fluorine atoms will likely be slightly polarized when packed in the crystal in order to stabilize (lower the energy) the overall system. 
This polarization will cause non-bonded fluorines to draw a bit closer to each other than they would if there were no polarization. A crystal of a noble gas (let's say neon) will just be a collection of monatomic neon atoms, no bonds, no polarization. Therefore there is no "attractive" force to draw adjacent neon atoms closer together like there was in the case of fluorine. So even before we make our measurements we might expect the van der Waals radius for neon to be larger than that for fluorine, which turns out to be the case.
{ "domain": "chemistry.stackexchange", "id": 1519, "tags": "intermolecular-forces, atomic-radius" }
Tree heap Haskell code
Question: I'd like a review of Haskell tree heap code in Turning a tree into a heap in Haskell. module Heapify where data Tree a = Leaf a | Node (Tree a) a (Tree a) deriving Show ourTree = Node (Node (Leaf 8) 2 (Leaf 4)) 3 (Node (Leaf 1) 7 (Leaf 9)) atTop :: Tree a -> a atTop (Leaf a) = a atTop (Node _ a _) = a replaceTop :: Ord a => Tree a -> a -> Tree a replaceTop (Leaf _) a = Leaf a replaceTop (Node l _ r) a = heapify (Node l a r) adjustLeft :: Ord a => Tree a -> Tree a adjustLeft (Leaf a) = Leaf a -- But we shouldn't ask to do this. adjustLeft node@(Node l a r) | topL <= a = node | otherwise = Node (replaceTop l a) topL r where topL = atTop l adjustRight :: Ord a => Tree a -> Tree a adjustRight (Leaf a) = Leaf a -- But we shouldn't ask to do this. adjustRight node@(Node l a r) | topR <= a = node | otherwise = Node l topR (replaceTop r a) where topR = atTop r doTop :: Ord a => Tree a -> Tree a doTop (Leaf a) = Leaf a doTop node@(Node l a r) | atTop l > atTop r = adjustLeft node | otherwise = adjustRight node heapify :: Ord a => Tree a -> Tree a heapify (Leaf a) = Leaf a heapify (Node l a r) = doTop (Node (heapify l) a (heapify r)) Answer: Correctness? Unanticipated cases? module Heapify where data Tree a = Leaf a | Node (Tree a) a (Tree a) deriving Show I think this can't host an even number of elements. fromList [1,2] seems impossible. ourTree = Node (Node (Leaf 8) 2 (Leaf 4)) 3 (Node (Leaf 1) 7 (Leaf 9)) example data. atTop :: Tree a -> a atTop (Leaf a) = a atTop (Node _ a _) = a ok. replaceTop :: Ord a => Tree a -> a -> Tree a replaceTop (Leaf _) a = Leaf a replaceTop (Node l _ r) a = heapify (Node l a r) produces a heap. doesn't know what data was put on top, must re-heapify. Does not use the knowledge whether l/r are in fact heaps already or not. If it was called on a heap, both must have been heaps already. The usual flow would be for heapify to be called on arbitrary trees, but replaceTop to be called on heaps only.
This might have an impact on performance. adjustLeft :: Ord a => Tree a -> Tree a adjustLeft (Leaf a) = Leaf a -- But we shouldn't ask to do this. adjustLeft node@(Node l a r) | topL <= a = node | otherwise = Node (replaceTop l a) topL r where topL = atTop l assumes l was already a heap. If not, l might harbor some number yet bigger than topL and the otherwise clause could produce a non-heap (replaceTop fully heapifies its argument so its biggest number will get floated to its top). adjustRight :: Ord a => Tree a -> Tree a adjustRight (Leaf a) = Leaf a -- But we shouldn't ask to do this. adjustRight node@(Node l a r) | topR <= a = node | otherwise = Node l topR (replaceTop r a) where topR = atTop r similarly to the above, assumes r was a heap. Both functions assume both l and r were heaps actually, because r (corr., l) is kept unchanged. So both assume that only the top element can be out of place before the call, and produce a heap under that assumption. If the assumption does not hold, the produced value will not be a heap. doTop :: Ord a => Tree a -> Tree a doTop (Leaf a) = Leaf a doTop node@(Node l a r) | atTop l > atTop r = adjustLeft node | otherwise = adjustRight node assuming both l and r were heaps before call, produce a heap. heapify :: Ord a => Tree a -> Tree a heapify (Leaf a) = Leaf a heapify (Node l a r) = doTop (Node (heapify l) a (heapify r)) assuming heapify fulfills its promise, OK. Base case (Leaf a): OK. So, OK (except for the data definition deficiency). Call graph heapify (does not assume l/r were heaps) | |_____ heapify | |_____ doTop | |____ adjustLeft / adjustRight (assume l/r were heaps) | |____ replaceTop (does not assume l/r were heaps) | |_____ heapify Performance? So, heapify essentially is heapify (Leaf a) = Leaf a heapify (Node l a r) | a >= top = Node lh a rh | ltop > rtop = Node (replaceTop lh a) ltop rh -- lh is a heap! | otherwise = Node lh rtop (replaceTop rh a) -- rh is a heap! 
where lh = heapify l ----- superfluous, == id when called from (1) -- (2) rh = heapify r ----- superfluous, == id when called from (1) -- (2) ltop = atTop lh rtop = atTop rh top = max ltop rtop replaceTop (Leaf _) a = Leaf a replaceTop (Node l _ r) a = heapify (Node l a r) ------- (1) Looks a very heavy recursion. One way to fix this is to assume that heapify will be called only on heaps, not on arbitrary trees. Then the fix is simply to eliminate the two (2) calls. Then it becomes logarithmic. Heaps would need to be created from lists. The usual way is fold/insertElem. The data type definition needs to be adjusted to allow insertion of one element into a heap tree. The definitions will have to be readjusted. Probably easier done with small snippets, like in OP code. atTop would have to produce a Maybe value, as now empty heaps become a possibility (say, one of the children of a fromList [1,2] heap tree - now both subtrees are heaps, and one is empty). The whole balance/depth issue is untouched here. It's probably simpler to re-write replaceTop to essentially duplicate the above code, without the calls to heapify. It will now assume it's operating on heaps only, not on arbitrary trees. This will make replaceTop logarithmic. Leaves all the other problems unaddressed. replaceTop (Leaf _) a = Leaf a -- operates on heaps! replaceTop (Node lh _ rh) a -- lh, rh are heaps! | a >= top = Node lh a rh | ltop > rtop = Node (replaceTop lh a) ltop rh | otherwise = Node lh rtop (replaceTop rh a) where ltop = atTop lh rtop = atTop rh top = max ltop rtop According to master theorem for 2T(n/2) + O(log(n)) case, heapify will be O(n) then, assuming a balanced tree. A balanced tree can be built from a list in O(n) time. removeTop will most probably be O(log(n)).
{ "domain": "codereview.stackexchange", "id": 3838, "tags": "haskell, tree, heap" }
Transition Probability greater than 1? (In Perturbation)
Question: This topic already is widely discussed in this post, but I can't find a direct answer boiling down to my question. Question Considering a constant perturbation $V$ on a system in initial state $|\varphi_i>$ The transition probability $P$ for $|\varphi_i> \:\to\: |\varphi_f>$ (the system switching into a final state due to $V$) should be:$\\[12pt]$ $$P_{i\to f} = \dfrac{4\,|V_{fi}|^2}{\hbar^2}\,\dfrac{\sin^{2}\left(\omega_{fi}\,t/2\right)}{\omega_{fi}^{2}}\quad \text{where} \quad V_{fi}\: =\: <\varphi_f|V|\varphi_i>\quad \text{and} \quad \omega_{fi} =\dfrac{E_f-E_i}{\hbar}\\[12pt]$$ Now for a really long perturbation, thus $t\to\infty$, this should rewrite to: $\\[12pt]$ $$P_{i\to f} = t\cdot\dfrac{2\pi}{\hbar}\,|V_{fi}|^2\,\delta(E_f-E_i)\\[12pt]$$ I fairly get the interpretation: if you really wait a large amount of time, the probability will only be $\neq0$ if $E_f = E_i$. This is to say $|\varphi_f>$ will only adopt the energy of $|\varphi_i>$. $\left(\text{Even in accordance with uncertainty:} \Delta t\,\Delta E \geq \dfrac{\hbar}{2}\right)$ But the gist I don't grasp: how to even explain $P \to \infty$ for $E_f = E_i$? I always read the probability is less than 1. Answer: Your expression is a result valid to first order in the perturbation. For long times restricting to first order is a poor approximation and one should include higher order terms. A sign that keeping only the first order term is poor is precisely that the transition probability becomes unphysically greater than 1. This is similar to writing $\sin(\omega t)\approx \omega t$. This is only valid for small values of $\omega t$. It makes no sense to use this approximation when $\omega t$ is not small. Indeed: $$ \sin(\omega t)\approx \omega t -\frac{1}{6}(\omega t)^3 +\ldots $$ clearly showing that you can keep the first term only when $\frac{1}{6}(\omega t)^3\ll 1$.
If you keep only the first order correction then for sufficiently large $t$ you would get $\omega t>1$, something impossible for the sine function.
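To see concretely how the first-order expression overshoots, take a two-level system at resonance ($\omega_{fi} = 0$): the exact (Rabi) probability is $\sin^2(|V_{fi}|t/\hbar)$, whose small-time expansion is the first-order result $(|V_{fi}|t/\hbar)^2$. A short numeric sketch (illustrative values, $\hbar = 1$; the Rabi formula is standard textbook material, not part of the answer above):

```python
import math

V = 0.5   # coupling strength |V_fi|, hbar = 1 (illustrative)

def p_first_order(t):
    # First-order result at resonance (omega_fi -> 0): (V t)^2
    return (V * t) ** 2

def p_exact(t):
    # Exact two-level (Rabi) probability at resonance: sin^2(V t)
    return math.sin(V * t) ** 2

# Small times: the two agree.
assert abs(p_first_order(0.01) - p_exact(0.01)) < 1e-6
# Large times: first order blows past 1, the exact result never does.
assert p_first_order(10.0) > 1.0
assert max(p_exact(0.1 * k) for k in range(200)) <= 1.0
```

The unbounded growth of p_first_order is exactly the unphysical $P > 1$ the question asks about.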
{ "domain": "physics.stackexchange", "id": 93780, "tags": "quantum-mechanics, perturbation-theory" }
Spyder 4: changed behavior or "run cell" / run selected code
Question: I'm a user of Spyder. This weekend I updated to Spyder 4, which seems to have received many useful improvements; however, I have a problem with running selected code. The logic seems to have changed. Unfortunately for me it is very important that I can select code lines and run them ad hoc, without copying them to the shell each time. For older versions of Spyder this could be easily done by selecting the code and pressing [Ctrl]+[Enter], but now this seems to execute not just the code I selected. To be honest, I don't even know what it executes. The same applies if I select "run cell" from the menu after selecting code. Can anybody shed some light on this? How can I execute selected code in Spyder 4? For me this functionality is so important that I am really thinking about downgrading to a lower version, even though I would lose big improvements in the editor. Answer: [Ctrl]+[Enter] is for debugging mode and F9 is what you are looking for. See also this stackoverflow question.
{ "domain": "datascience.stackexchange", "id": 6557, "tags": "python, spyder" }
Machine Learning: Writing Poems
Question: I'm a student of machine learning, and these days I have been trying to learn how to use the TensorFlow library. I've gone through various tutorials and trial and error with TensorFlow, and I thought the best way to learn it for real would be to make use of it in a little project of my own. I've decided that I should attempt to make a program that writes poems. I'm not aiming for a top-end quality program; for my first model, I'd be happy with just a string of nonsense words grouped together in poem format. The problem is that I'm having trouble finding books or videos about machine learning programs that deal with writing sentence structures. Can you make any suggestions on what I could look for (even Google keywords are fine) to get the sample programs and basic knowledge that I need? Thank you. Answer: This is just the comment from Emre expanded, but yes, you should look into recurrent neural networks for generating text in the style of a given corpus. RNNs and LSTMs work really quite well for this. This writeup is widely cited and, to your question, shows how it's pretty easy to generate something like this, given the text of Shakespeare's plays: PANDARUS: Alas, I think he shall be come approached and the day When little srain would be attain'd into being never fed, And who is but a chain and subjects of his death, I should not sleep. Second Senator: They are away this miseries, produced upon my soul, Breaking and strongly should be buried, when I perish The earth and thoughts of many states. If you follow this you can easily run it on your local GPU too to generate text from whatever input poetry you like. I have had pretty good results with 0.1-0.5 dropout, 2 layers, and layers of size 512-1024.
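One small, GPU-free piece of the char-RNN recipe that is easy to show in code is sampling the next character from the model's output distribution with a temperature knob, which is what makes the generated text more or less adventurous. A sketch in plain Python (in the real model, probs would come from the network's softmax over the character vocabulary):

```python
import math
import random

def sample_with_temperature(probs, temperature=1.0, rng=random):
    """Re-weight a probability distribution and draw an index from it.

    temperature -> 0 approaches argmax (conservative, repetitive text);
    temperature > 1 flattens the distribution (more surprising text).
    """
    logits = [math.log(p + 1e-12) / temperature for p in probs]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

probs = [0.1, 0.7, 0.2]          # e.g. next-character distribution
cold = [sample_with_temperature(probs, 0.01) for _ in range(50)]
```

At temperature 0.01 essentially every draw picks the most likely index; at temperature well above 1 the three indices are drawn nearly uniformly.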
{ "domain": "datascience.stackexchange", "id": 1337, "tags": "machine-learning, neural-network, tensorflow" }
Net flux through surface
Question: If an equal number of electric field lines enters and leaves a surface, why isn't the net flux zero? Answer: The electric flux through an area is defined as the electric field multiplied by the area of the surface projected onto a plane perpendicular to the field. See the diagram below and the equation below for calculating the net flux over a closed surface. If the electric field lines are straight, equally spaced parallel lines, as shown in your diagram, $\vec E$ is constant and comes out of the integral. Integrating over the bottom (circular) surface of your diagram, the flux is $$\phi_{bottom}=-\pi r^{2} E$$ The flux is negative because $\cos 180^\circ =-1$. For the hemispherical surface, the area of the surface projected onto a plane perpendicular to the field is the circular area below, or $\pi r^2$. The electric flux is then the electric field times that area, or $$\phi_{top}=+\pi r^{2} E$$ The flux is positive since the angle $\theta$ between the field and the projected surface normal is zero. Then the net flux over the entire closed surface becomes $$\phi_{E}=\phi_{top}+\phi_{bottom}=0$$ It should be noted that the net flux for a closed surface will always be zero unless electric field lines either originate or terminate in the space enclosed by the surface. That will only occur if there is net charge enclosed by the surface. Then the net electrical flux for any closed surface will be, per Gauss' Law, the net charge enclosed by the surface divided by the electrical permittivity of the enclosed space, or, for free space, $$\phi_{E}=\frac{Q}{\epsilon_{o}}$$ Hope this helps
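The two surface integrals in the answer are easy to check numerically. The sketch below (the values of E and r are arbitrary) integrates E·n over the curved hemisphere with the midpoint rule and compares against the flat disk:

```python
import numpy as np

E, r = 2.0, 1.5        # arbitrary field strength (along z) and hemisphere radius

# Curved surface: n = r-hat, so E.n = E*cos(theta), dA = r^2 sin(theta) dtheta dphi.
# The phi integral just contributes 2*pi; do the theta integral by midpoint rule.
n = 100_000
dtheta = (np.pi / 2) / n
theta = (np.arange(n) + 0.5) * dtheta
flux_top = np.sum(E * np.cos(theta) * r**2 * np.sin(theta) * 2 * np.pi) * dtheta

flux_bottom = -np.pi * r**2 * E   # flat disk: field antiparallel to the outward normal

print(flux_top)                   # ~ pi * r^2 * E = 14.137...
print(flux_top + flux_bottom)     # ~ 0
```

The curved-surface integral lands on pi*r^2*E, exactly the projected-area result the answer quotes, and the total over the closed surface vanishes.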
{ "domain": "physics.stackexchange", "id": 84700, "tags": "electrostatics, gauss-law" }
putting sql-query response directly to foreach-loop?
Question: Is there any reason not to do this? Putting the query directly inside a foreach statement when the only place the result is going to be used is that location: <select> <?php foreach($dbh->query('SELECT id, name, so FROM Employees ORDER BY so') as $e): ?> <option value="<?=$e['so']?>">-- after "<?=$e['name']?>" --</option> <?php endforeach; ?> </select> It seems like I don't have to provide the fetchAll() when doing so either. In fact, if I just do the following, without adding any fetch() methods at all, I still get the result if I put $employees into a foreach loop: $employees = $dbh->query('SELECT id, name, so FROM Employees ORDER BY so'); foreach($employees as $e){ /* works same as above */ } When I do print_r($employees), I only get this string: PDOStatement Object ( [queryString] => SELECT id, name, so FROM Employees ORDER BY so ) Is that correct behaviour? These are my options for the connection: $options = array( PDO::ATTR_PERSISTENT => true, PDO::ATTR_EMULATE_PREPARES=>false, PDO::ATTR_DEFAULT_FETCH_MODE=>PDO::FETCH_ASSOC, PDO::MYSQL_ATTR_INIT_COMMAND=>'SET NAMES utf8', PDO::ATTR_ERRMODE=>PDO::ERRMODE_EXCEPTION, // _SILENT (pub) || _WARNING || _EXCEPTION (dev) ); Answer: The idea is very bad, for many reasons. Your query could fail, which will result in a torn design with only half the page rendered. Sometimes you must display a different text if the query returned no rows. Sometimes you need to send your data in a different format, for example not as HTML but as a JSON-formatted array in response to an AJAX call. A bowl of spaghetti made from HTML, PHP and SQL makes it hard to maintain. There should be a separation between the data manipulation and the data presentation. "These are my options for the connection:" Some of them are wrong as well. A cargo-cult persistent connection will do you no good. The charset should be set in the DSN. ERRMODE should NEVER be set to silent; on a live server it should be the same as on dev, i.e. set to EXCEPTION. Errors should only be hidden from the site user, not silenced completely.
{ "domain": "codereview.stackexchange", "id": 31861, "tags": "php, iterator, pdo" }
Are the IRV and ERV equal in amount?
Question: I am reading about Inspiratory Reserve Volume and Expiratory Reserve Volume in my textbook. My general question is: don't the two terms refer to the same amount of air? I would like to tell you why I think so. IRV tells us the maximum amount of air we can take in beyond a normal intake. So this extra amount is what goes inside our lungs, and I think this is the part which comes out in the form of ERV. So shouldn't the total capacity of the lungs be equal to (Tidal Volume + IRV + Residual Volume) only? Why do we need to include the term ERV in the above equation? Answer: You're misunderstanding a bit. Here are some definitions from Wikipedia, where I added some bold for emphasis: Expiratory reserve volume: the maximal volume of air that can be exhaled from the end-expiratory position. Inspiratory reserve volume: the maximal volume that can be inhaled from the end-inspiratory level. In plain terms, ERV is how much you can still breathe out after you've breathed out all you normally would. You can try it out yourself: when you're finished breathing out and ready to start breathing in, try to blow out a bit; you'll find there is still air there. You don't need to breathe in extra to be able to breathe out this extra. Similarly, IRV is how much you can still breathe in after you've breathed in all you normally would. If you were to do a "fullest possible breath", that would include inspiring as much extra as possible, plus the normal breath volume, plus exhaling the last little bit you can: IRV + Tidal Volume + ERV. There's still a little bit of residual air left over (RV), though, so: IRV + Tidal Volume + ERV + RV gives you the total capacity.
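To make the bookkeeping concrete, here is the same arithmetic with typical adult textbook values (illustrative only; real values vary from person to person):

```python
# Typical adult textbook values in millilitres (illustrative only)
tidal_volume = 500    # a normal breath
irv = 3000            # extra air you can still inhale after a normal inhale
erv = 1100            # extra air you can still exhale after a normal exhale
rv = 1200             # residual air that can never be exhaled

vital_capacity = irv + tidal_volume + erv          # fullest possible breath
total_lung_capacity = vital_capacity + rv

print(vital_capacity, total_lung_capacity)         # 4600 5800
```

Dropping ERV from the sum would undercount the total capacity by the full ERV, which is the point of the answer.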
{ "domain": "biology.stackexchange", "id": 10983, "tags": "respiration, cellular-respiration, breathing, lungs" }
How do I locate a specific object in an image?
Question: Some pictures contain an elephant, others don't. I know which of the pictures contain the elephant, but I don't know where it is or what it looks like. How do I make a neural network which locates the elephant in a picture if it contains one? There are no pictures with more than one elephant. Answer: So, assuming you're not allowed to use transfer methodologies (like taking an already existing elephant object detector), my recommendation is to train a CNN classifier (the labels are binary: elephant exists, elephant doesn't exist) and then use strategies like Grad-CAM. Note that there does exist a Grad-CAM++, but because you can be sure there's only one instance, it isn't necessary and is just more complicated. Note that since you just need the location and not pixel-level specificity, you don't even need to do the guided backprop, just the relation with respect to the last convolutional map. A quick description is that it uses the gradient of the class score w.r.t. the last feature map to see which locations helped make the classification, and from there you can upscale to the receptive field that those neurons touch. Hope this helped!
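The Grad-CAM step the answer sketches (pool the gradients over space to get one weight per channel, take a weighted sum of the feature maps, keep only the positive part) fits in a few lines of numpy. The shapes and random tensors below are stand-ins; in practice both arrays come out of the trained classifier:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (C, H, W) arrays from the last conv layer.
    Returns an (H, W) localisation map."""
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted sum of the feature maps, then ReLU to keep positive evidence.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    return np.maximum(cam, 0.0)

rng = np.random.default_rng(0)
fmap = rng.random((8, 7, 7))       # pretend activations of the last conv layer
grad = rng.normal(size=(8, 7, 7))  # pretend d(class score)/d(activations)
cam = grad_cam(fmap, grad)
print(cam.shape, cam.min() >= 0)   # (7, 7) True
```

The resulting low-resolution map is then upsampled to the input image size to point at the elephant.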
{ "domain": "ai.stackexchange", "id": 1203, "tags": "deep-learning, object-detection, semi-supervised-learning" }
Simple text parser using regexes
Question: I'm trying to write simple parser using regexes. This is what I currently have, it looks really messy. Any tips what can I change? from re import compile from typing import Dict, Iterator, List, NamedTuple, TextIO # PATTERNS registers_pattern = list(map(compile, [ r'(rax=[0-9a-f]{16}) (rbx=[0-9a-f]{16}) (rcx=[0-9a-f]{16})\n', r'(rdx=[0-9a-f]{16}) (rsi=[0-9a-f]{16}) (rdi=[0-9a-f]{16})\n', r'(rip=[0-9a-f]{16}) (rsp=[0-9a-f]{16}) (rbp=[0-9a-f]{16})\n', r' (r8=[0-9a-f]{16}) (r9=[0-9a-f]{16}) (r10=[0-9a-f]{16})\n', r'(r11=[0-9a-f]{16}) (r12=[0-9a-f]{16}) (r13=[0-9a-f]{16})\n', r'(r14=[0-9a-f]{16}) (r15=[0-9a-f]{16})\n', ])) flags_pattern = compile(r'iopl=[0-9a-f]+(?:\s+[a-z]{2}){8}\n') segments_pattern = compile(r'(?:[a-z]{2}=[0-9a-f]{4}\s+){6}efl=[0-9a-f]{8}\n') label_pattern = compile(r'[\w\+]+:\n') instruction_pattern = compile( r'[0-9a-f]{8}`[0-9a-f]{8}\s+(?P<ENCODING>[0-9a-f]+)\s+(?P<INSTRUCTION>.+?)\n?' ) class Instruction(NamedTuple): instruction: str encoding: str registers: Dict[str, str] def parse_trace(stream: TextIO) -> Iterator[Instruction]: """ TODO: some description """ iterator = iter(enumerate(stream, start=1)) for index, line in iterator: # Parse general-purpose registers registers: Dict[str, str] = {} for pattern in registers_pattern: if match := pattern.fullmatch(line): # Extract register values from regex match and go to the next line registers.update(group.split('=') for group in match.groups()) index, line = next(iterator) else: raise RuntimeError(f'Failed to parse line: {index}') if flags_pattern.fullmatch(line) is None: raise RuntimeError(f'Failed to parse line: {index}') if segments_pattern.fullmatch(next(iterator)[1]) is None: # TODO: here will be something raise RuntimeError(f'Failed to parse line: {index}') if label_pattern.fullmatch(next(iterator)[1]) is None: raise RuntimeError(f'Failed to parse line: {index}') if (match := instruction_pattern.fullmatch(next(iterator)[1])) is None: raise RuntimeError(f'Failed to parse line: 
{index}') yield Instruction(match.group('INSTRUCTION'), match.group('ENCODING'), registers) # Example of usage: from io import StringIO trace = StringIO("""rax=0000000000000000 rbx=0000000000000000 rcx=0000000000000000 rdx=0000000000000000 rsi=0000000000000000 rdi=0000000000000000 rip=000000000040100a rsp=0000000000000000 rbp=0000000000000000 r8=0000000000000000 r9=0000000000000000 r10=0000000000000000 r11=0000000000000000 r12=0000000000000000 r13=0000000000000000 r14=0000000000000000 r15=0000000000000000 iopl=0 nv up ei pl zr na po nc cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246 lol+0x1000: 00000000`00401000 48bba47868302035e80c mov rbx,0CE83520306878A4h rax=0000000000000000 rbx=0000000000000000 rcx=0000000000000000 rdx=0000000000000000 rsi=0000000000000000 rdi=0000000000000000 rip=000000000040100a rsp=0000000000000000 rbp=0000000000000000 r8=0000000000000000 r9=0000000000000000 r10=0000000000000000 r11=0000000000000000 r12=0000000000000000 r13=0000000000000000 r14=0000000000000000 r15=0000000000000000 iopl=0 nv up ei pl zr na po nc cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246 lol+0x1000: 00000000`00401000 48bba47868302035e80c mov rbx,0CE83520306878A4h""") for each in parse_trace(trace): print(each.instruction) ``` Answer: Instead of trying to match whole lines, it would be simpler to match smaller chunks of the input and the combine the smaller regexes. For example, one regex could match any of the registers, a different one could match the segment registers, etc. Order of the patterns may be important if one regex is be a prefix of another, but that doesn't seem to apply in this case. Use capture groups to get the important information, and MatchObject.lastgroup to see which part of the regex matched. The flags pattern probably needs to have other codes added. The (?x) in the pattern is for verbose mode, which lets you add whitespace and comments to the regex pattern. 
pattern = r"""(?x) (?P<REG>r\w{1,2})=(?P<RVAL>[0-9a-f]{16}) # registers |iopl=(?P<IOPL>[0-9a-f]+) # iopl |(?P<FLAGS>(\s+(?:nv|up|ei|pl|zr|na|po|nc))+) # flags |(?P<SREG>[csdefg]s|efl)=(?P<SVAL>[0-9a-f]+) # segments |(?P<LABEL>^[\w\+]+:) # label |[0-9a-f]{8}`[0-9a-f]{8}\s+(?P<ENCODING>[0-9a-f]+)\s+(?P<INSTRUCTION>.+) |(?P<NL>\n) """ regex = re.compile(pattern) registers = {} lineno = 1 for line in trace: print(f"line = {lineno}") for mo in regex.finditer(line): group_name = mo.lastgroup if group_name == 'NL': lineno += 1 else: if group_name == "RVAL": registers[mo['REG']] = mo['RVAL'] print(f" {mo['REG']} = {mo['RVAL']}") elif group_name == "IOPL": print(f" iopl = {mo['IOPL']}") elif group_name == "FLAGS": print(f" FLAGS = {mo['FLAGS'].strip().split()}") elif group_name == "SVAL": print(f" {mo['SREG']} = {mo['SVAL']}") elif group_name == "LABEL": print(f" LABEL = {mo['LABEL']}") elif group_name == "INSTRUCTION": print(f" {mo['INSTRUCTION']} = {mo['ENCODING']}") For the sample input, the code outputs: line = 1 rax = 0000000000000000 rbx = 0000000000000000 rcx = 0000000000000000 line = 2 rdx = 0000000000000000 rsi = 0000000000000000 rdi = 0000000000000000 line = 3 rip = 000000000040100a rsp = 0000000000000000 rbp = 0000000000000000 line = 4 r8 = 0000000000000000 r9 = 0000000000000000 r10 = 0000000000000000 line = 5 r11 = 0000000000000000 r12 = 0000000000000000 r13 = 0000000000000000 line = 6 r14 = 0000000000000000 r15 = 0000000000000000 line = 7 iopl = 0 FLAGS = ['nv', 'up', 'ei', 'pl', 'zr', 'na', 'po', 'nc'] line = 8 cs = 0033 ss = 002b ds = 002b es = 002b fs = 0053 gs = 002b efl = 00000246 line = 9 LABEL = lol+0x1000: line = 10 mov rbx,0CE83520306878A4h = 48bba47868302035e80c line = 11 rax = 0000000000000000 rbx = 0000000000000000 rcx = 0000000000000000 line = 12 rdx = 0000000000000000 rsi = 0000000000000000 rdi = 0000000000000000 line = 13 rip = 000000000040100a rsp = 0000000000000000 rbp = 0000000000000000 line = 14 r8 = 0000000000000000 r9 = 0000000000000000 r10 
= 0000000000000000 line = 15 r11 = 0000000000000000 r12 = 0000000000000000 r13 = 0000000000000000 line = 16 r14 = 0000000000000000 r15 = 0000000000000000 line = 17 iopl = 0 FLAGS = ['nv', 'up', 'ei', 'pl', 'zr', 'na', 'po', 'nc'] line = 18 cs = 0033 ss = 002b ds = 002b es = 002b fs = 0053 gs = 002b efl = 00000246 line = 19 LABEL = lol+0x1000: line = 20 mov rbx,0CE83520306878A4h = 48bba47868302035e80c Obviously, do something useful instead of just printing the information.
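The `MatchObject.lastgroup` dispatch this answer relies on works for any alternation of named groups; here is the mechanism in isolation, on a toy tokenizer (pattern and input are made up):

```python
import re

# One pattern, several alternatives; Match.lastgroup tells you which named
# group actually matched, so a single finditer loop can dispatch on token type.
token = re.compile(r"(?P<NUM>\d+)|(?P<WORD>[a-z]+)|(?P<OP>[+*])")

kinds = [(m.lastgroup, m.group()) for m in token.finditer("x+42*y")]
print(kinds)  # [('WORD', 'x'), ('OP', '+'), ('NUM', '42'), ('OP', '*'), ('WORD', 'y')]
```

Because exactly one alternative can match at a time, `lastgroup` is unambiguous, which is what makes the single-pattern approach in the answer work.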
{ "domain": "codereview.stackexchange", "id": 39316, "tags": "python, regex" }
Gauss Law for infinite plate - why $E$ is the same?
Question: I understand Gauss's law. I also understand that, by Gauss's law, for an infinite plate of charge (uniformly distributed), the electric field is the same everywhere, which means it doesn't depend on the distance. While I truly understand the proof of Gauss's law, its usage, and the proof of why E is the same everywhere for an infinite plane of charge, I still can't make sense of it logically in a physical sense. Imagine there's an infinite plate. Then, at distance P from it, we know E is $\sigma/2\epsilon_0$. But now, if we imagine a point at distance P+100 from the plate, we can easily see that there's a greater distance between P+100 and each charge than there was for the point P. So the electric field definitely must be smaller, but it's not, since however we proved it for point P, the same proof will give us exactly the same for P+100. What am I missing in terms of a logical explanation (no need to include formulas, I understand them)? UPDATE @J.Murray | @Jacob Stuligross, I think there are lots of approximations being made here. We assume the ring can be unrolled and used as a rectangle whose width is dx (the thickness of the ring) and whose length is 2πx (x is the radius); x is definitely the inner radius. Though it won't quite be a rectangle: if dx is the thickness, then after unrolling the ring, one side is 2πx and the other is 2π(x+dx). So if you still treat it as a rectangle, we definitely lose some very small rectangular areas, and are we sure the charge there is so small it won't matter? From the above point to the ring's points, a cone is assumed to be drawn. While I agree that the distances from the apex to each point of the outer ring's edge are all the same (important, since otherwise we can't treat it as if area * E_change holds true), and if they're the same then, true, all points of the ring are the same distance away from P, this assumption means that there should not be multiple charges in the shell area (outer - inner).
If there is more than one charge there, the distance from each to our reference point wouldn't be the same as the distance between the apex and the outer edge. Hence the calculation won't be fully correct; but since you bring dx in, I assume it's small, yet it still holds charges, and if it holds multiple charges, that's bad. Any thoughts? Answer: The contribution to the field due to any particular patch of charge on the plate has a component which is perpendicular to the plate and a component which is parallel to the plate. When you sum over the entire plate, the parallel components cancel each other out and the perpendicular components add together to give you your net field. Since the $\vec E_\parallel$'s all cancel out due to symmetry, the relevant contribution is $E_\perp$, given by $$E_\perp = \frac{dA}{r^2}\cos(\theta)=dA \frac{d}{(x^2+d^2)^{3/2}}$$ where I've ignored the Coulomb constant and charge density. When we make a tiny step $d\mapsto d+\delta d$, the change becomes $$\delta E_\perp = \frac{x^2-2d^2}{r^5} \delta d$$ Observe that for $x>d\sqrt{2}$, $E_\perp$ actually increases when you take a step away. Even though the magnitude of the force goes down, the change in angle more than compensates. Defining polar coordinates $(x,\phi)$ on the plane, we can write the total contribution as $$\int_0^{2\pi}\mathrm d\phi \int_0^\infty x\mathrm dx\ \delta E_\perp = 2\pi \delta d\left[\int_0^{d\sqrt{2}} \mathrm dx \ x\left(\frac{x^2-2d^2}{(x^2+d^2)^{5/2}}\right) + \int_{d\sqrt{2}}^\infty \mathrm dx \ x\left(\frac{x^2-2d^2}{(x^2+d^2)^{5/2}}\right)\right]$$ $$= 2\pi \delta d \left[-\frac{2}{3d\sqrt{3}} + \frac{2}{3d\sqrt{3}}\right] = 0$$ So the take-away is this: When you move away from the plate, the magnitude of the field from each patch decreases as you'd expect. However, for sufficiently far away patches, this results in an increase in the perpendicular component of the field due to the corresponding change in angle.
When you sum over the whole plate, the positive changes to $E_\perp$ (which correspond to patches with $x>d\sqrt{2}$) precisely cancel the negative changes (corresponding to patches with $x<d\sqrt{2}$), with the net effect being that the total field does not change. Now that the intuition is (hopefully) there, we can see this more cleanly by noting that the total perpendicular field can be written $$ E_\perp = \int_0^\infty \mathrm dx \frac{2\pi x d}{(x^2+d^2)^{3/2}}$$ If we define $u\equiv x/d$ and substitute, we see that the $d$-dependence completely drops out. Even more simply, observe that if we assume $$E_\perp = k_C^a\sigma^b d^c$$ then dimensional analysis yields immediately that $c=0$.
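The scale-invariance conclusion is also easy to verify numerically: evaluate the answer's integral for two different heights, in units where $k_C\sigma=1$, and both come out as $2\pi$. This is only a sanity-check sketch; the cutoff and grid below are arbitrary:

```python
import numpy as np

def E_perp(d, x_max=5_000.0, n=1_000_000):
    """Perpendicular field of a uniform infinite plane at height d,
    in units where k_C * sigma = 1 (exact answer: 2*pi, independent of d)."""
    dx = x_max / n
    x = (np.arange(n) + 0.5) * dx                        # midpoint rule
    integrand = 2 * np.pi * x * d / (x**2 + d**2) ** 1.5
    # Analytic remainder of the integral beyond the cutoff x_max.
    tail = 2 * np.pi * d / np.sqrt(x_max**2 + d**2)
    return np.sum(integrand) * dx + tail

print(E_perp(0.5), E_perp(5.0))   # both ~ 6.2832 = 2*pi
```

Numerically the height only rescales which radii x contribute most; the total never changes, matching the $u = x/d$ substitution.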
{ "domain": "physics.stackexchange", "id": 95885, "tags": "electrostatics, electric-fields, gauss-law" }
Number Guessing Game in Java
Question: This is my implementation of the famous Number Guessing Game. It is not object oriented, but the goal was to create a simple to read, procedural program. Have I accomplished this goal? import java.util.Random; import java.util.Scanner; class Main { private static int lowerRange = 1; private static int upperRange = 99; private static int tries = 6; private static Scanner input = new Scanner(System.in); private static Random random = new Random(); public static void main(String[] args) { showDescription(); boolean run = true; while (run) { play(); System.out.print("Play again? (Y / N): "); run = input.nextLine().equalsIgnoreCase("Y") ? true : false; } } private static void showDescription() { System.out.println("You have to guess a number between " + lowerRange + " and " + upperRange + "."); System.out.println("You have " + tries + " tries."); } private static void play() { int randomNumber = generateRandomNumber(lowerRange, upperRange); int triesSoFar = 0; boolean won = false; while (triesSoFar < tries && won == false) { System.out.print("Your try: "); int guess = Integer.parseInt(input.nextLine()); if (guess > randomNumber) { System.out.println("The secret number is smaller."); } else if (guess < randomNumber) { System.out.println("The secret number is higher."); } else { won = true; break; } triesSoFar++; } if (won) { System.out.println("You found the secret number!"); } else { System.out.println("You lose. The secret number was " + randomNumber + "."); } } private static int generateRandomNumber(int lowerRange, int upperRange) { return random.nextInt(upperRange - lowerRange) + lowerRange; } } Answer: You have showDescription directly printing out the message. Avoid printing in arbitrary functions whenever possible. It's much better to have things return Strings, then print at the call site as needed. 
I'd change that function to: private static String produceDescription() { return "You have to guess a number between " + lowerRange + " and " + upperRange + ".\n" + "You have " + tries + " tries."; } . . . System.out.println(produceDescription()); Why? Two highly related reasons: Whenever possible, functions should return the data that they produce. Can you guarantee that for a function like this you will always want to directly print that data? Let the caller decide how they want to use the data that the function produces. Forcing the data to be printed makes the function less useful in the long term. As an example... If you ever decide to adapt this program to use a full GUI, you're directly printing and will need to modify every function that is calling println. The fewer functions you have that are using data in a specific way, the easier it will be to alter your program later. I would not make everything static here. Again, what if in the future you wanted to run two games at the same time (like if you created a server that allows people to connect to it and play)? I'd get rid of static everywhere, and make everything plain instance methods/properties, then just instantiate an instance of the game in main. Main is a poor name for this class. Ideally, it should be a description of what the object accomplishes. What if you ever imported this class so it can be used elsewhere? A class called Main doesn't make it immediately obvious what it is used for. I'd change the name to something like NumberGuessingGame. run = input.nextLine().equalsIgnoreCase("Y") ? true : false; is redundant. Ternary expressions are cool, but they're often overused. Think about it, what does equalsIgnoreCase return? A bool (true or false). You're then using that bool as a condition to the ternary... to get the same thing that equalsIgnoreCase returned originally.
Just get rid of the ternary: run = input.nextLine().equalsIgnoreCase("Y"); And on the topic of conditions, a little later you have: while (triesSoFar < tries && won == false) { won == false certainly isn't wrong, but comparing against a boolean value directly is almost always unnecessary. Just write: while (triesSoFar < tries && !won) { Remember, ! is read as "not". "While tries so far is less than tries, and they haven't (not) won". tries is a bad name. It isn't representing the number of tries taken (that's triesSoFar), it's representing the max number of tries allowed. Change it to something like maxAllowedTries. Naming is very important. It is one of the key things that allows someone to be able to read your code and quickly know what's going on. Make sure your functions and variable names accurately describe what they do, or you'll make other people's lives more difficult when they need to read your code. You'll also make your own life more difficult if you ever come back to this program, because you will eventually forget parts of this program after some time. System.out.print("Your try: "); isn't actually guaranteed to print right away. print and println use a "buffer" to hold text while waiting to print. That buffer is only "flushed" (printed out) when the text exceeds a certain length, or a newline (\n) is reached. You have short text here, and because you're using print, no newline is being added to the buffer. You may find that if you made this text a little shorter, it wouldn't print until some other text had been printed as well, which will make your program confusing. If you use print instead of println, it can be a good idea to add a call to System.out.flush() after it to make sure everything is printed when you want it to be. Just to show a potentially appropriate use of a ternary, you have: if (won) { System.out.println("You found the secret number!"); } else { System.out.println("You lose.
The secret number was " + randomNumber + "."); } Which isn't bad, but it could be shortened a bit: String message = won ? "You found the secret number!" : ("You lose. The secret number was " + randomNumber + "."); System.out.println(message); I'm not necessarily advocating for this way, but I thought I'd show it. It allows you to get rid of the multiple calls to println. int guess = Integer.parseInt(input.nextLine()); if (guess > randomNumber) { System.out.println("The secret number is smaller."); } else if (guess < randomNumber) { System.out.println("The secret number is higher."); } else { won = true; break; } is a dense chunk of code. I prefer to add more spacing around things. I like blank lines above ifs and elses so it's easier to see at a glance the distinct blocks. I'd make it (the same code, with a blank line before each else): int guess = Integer.parseInt(input.nextLine()); if (guess > randomNumber) { System.out.println("The secret number is smaller."); } else if (guess < randomNumber) { System.out.println("The secret number is higher."); } else { won = true; break; } That generally makes it easier to pick things out by eye when scanning over a document. There's some more stuff, but unfortunately, I have to go. Good luck!
{ "domain": "codereview.stackexchange", "id": 35715, "tags": "java, beginner, game, number-guessing-game" }
Hybrid assembly versus polishing for hifi and illumina reads
Question: I will be carrying out an assembly project using HiFi reads, for which I already have Illumina reads, and I am wondering whether hybrid assembly or polishing would be the better option for this kind of data. I wanted to use hifiasm, which seems to be the gold standard, and then NextPolish2, but I am wondering if using, for example, SPAdes to do a hybrid assembly would lead to something better or worse. This question is only intended to see if someone has done such benchmarking before or not. Best regards, Answer: Usually, if you have HiFi reads, there is no way you can improve the assembly with short reads. Not only do they not have higher quality, they also map a lot worse. People sometimes use short reads for genome profiling and for QCing the whole thing using Merqury. Not sure if there is any benchmarking published, though.
{ "domain": "bioinformatics.stackexchange", "id": 2673, "tags": "assembly, illumina, pacbio" }
What Changes a Stress-Strain Curve?
Question: If stress is applied to a material but does not exceed the yield point, then when the stress is relieved, a material such as steel returns to its original size and does not experience any permanent deformation. However, if the stress exceeds the yield stress, the material deforms plastically. The method I see for determining this deformation is to shift the $\sigma=E\varepsilon$ line horizontally until it aligns with the maximum stress, then find where this line intercepts the $\sigma=0$ axis, as shown below. This is considered the plastic deformation and the new zero-stress size of the material. From here, I would reason that this same stress-strain curve could be used again. A simple shift by $\varepsilon_{plastic}$ and scaling of strain by $\frac{1}{1+\varepsilon_{plastic}}$ should suffice to obtain a new stress-strain curve from the original, like so: Is this a valid method for determining a new stress-strain curve, or does releasing the material allow it to restructure itself in some way that creates a different, non-trivial change in the stress-strain curve? More generally, materials with a given chemical composition and temperature history will, all else equal, have identical stress-strain curves. If multiple samples of this material with identical starting conditions are deformed plastically through multiple deformations and releases, will their final stress-strain curves be identical as long as the maximum stress each endured was the same? Or is the process path-dependent, in that releasing a material alters its stress-strain curve in some non-trivial manner, and therefore materials which experienced the same maximum stress but different intermediate stresses will have different stress-strain curves? Answer: It depends on the material and temperature. For steel at room temperature, the process can be considered mostly path-independent.
What can affect the outcome are thermally activated processes such as the diffusion of interstitial impurity atoms, which can migrate to stressed regions around dislocations. The dislocation density increases during plastic deformation, and that migration tends to increase the yield point. Hydrogen, for instance, can diffuse even at room temperature. On the other hand, increasing the temperature makes the self-diffusion of the lattice atoms relevant, changing the morphology of the dislocation arrays and releasing elastic energy, which tends to soften the material.
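The unloading construction in the question (slide the elastic line to the maximum-stress point and read off the intercept) amounts to eps_plastic = eps_max - sigma_max/E. A toy bilinear elastic/linear-hardening model makes this concrete; all material numbers below are made up for illustration:

```python
E = 200e3        # elastic modulus, MPa (steel-like, illustrative)
sigma_y = 250.0  # yield stress, MPa (made up)
H = 2e3          # hardening slope after yield, MPa (made up)

def stress(strain):
    """Bilinear stress-strain curve on first loading."""
    eps_y = sigma_y / E
    if strain <= eps_y:
        return E * strain
    return sigma_y + H * (strain - eps_y)

eps_max = 0.02                    # load to 2% strain, well past yield
sigma_max = stress(eps_max)
# Unloading is elastic: the residual (plastic) strain is the zero-stress
# intercept of a line of slope E through (eps_max, sigma_max).
eps_plastic = eps_max - sigma_max / E
print(sigma_max, eps_plastic)     # sigma_max = 287.5 MPa, eps_plastic ~ 0.0186
```

Within this idealised model, reloading retraces the elastic line up to sigma_max and then rejoins the hardening branch, which is exactly the shifted-curve picture in the question; the answer's point is that real materials add thermally activated effects on top of this.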
{ "domain": "physics.stackexchange", "id": 66883, "tags": "material-science, elasticity, stress-strain" }
about a way to change default global planner
Question: Hello, everyone might think that this question is the same as this article ( http://answers.ros.org/question/39347/how-to-change-global-planner-of-move_base/ ). Actually, following the answer there, I edited my move_base.launch.xml like this: <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <param name="base_global_planner" value="carrot_planner/CarrotPlanner" /> <rosparam file="$(find turtlebot_navigation)/param/move_base_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find turtlebot_navigation)/param/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find turtlebot_navigation)/param/local_costmap_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/global_costmap_params.yaml" command="load" /> <rosparam file="$(find turtlebot_navigation)/param/eband_planner_params.yaml" command="load" /> <param name="base_local_planner" value="eband_local_planner/EBandPlannerROS" /> <param name="global_costmap/map_topic" value="$(arg map_topic)" /> <remap from="cmd_vel" to="navigation_velocity_smoother/raw_cmd_vel"/> </node> The result is as follows. As shown, the global plan is set to elastic band. Elastic band is a local planner, not a global planner. My experimental environment is a TurtleBot 2 as the target, with ROS Hydro on Ubuntu 12.04. [ INFO] [1397253785.373263757]: Global plan set to elastic band for optimization [ INFO] [1397253785.374082648]: TrajectoryController: Goal reached with distance 0.00, 0.11 (od = -0.02); sending zero velocity Could anybody tell me how to change the global planner? Thank you in advance! Some additional information: RViz just seems to display the eband_local_planner as both the global and local planner.
Originally posted by Ken_in_JAPAN on ROS Answers with karma: 894 on 2014-04-11 Post score: 1 Original comments Comment by Ken_in_JAPAN on 2014-04-11: I should have read this article ( http://wiki.ros.org/navigation/Tutorials/Writing%20A%20Global%20Path%20Planner%20As%20Plugin%20in%20ROS ) Comment by Ken_in_JAPAN on 2014-04-11: Does the reason that another planner isn't displayed in RViz relate to a yaml file? Answer: The answer on this site ( http://answers.ros.org/question/39347/how-to-change-global-planner-of-move_base/ ) is correct. It's just a matter of adding the following line to move_base.launch.xml: <param name="base_global_planner" value="carrot_planner/CarrotPlanner" /> One way to check that the planner changed is to use this command: rostopic echo /move_base/parameter_updates, but the problem of the Global_Plan and Local_Plan topics in RViz remains unsolved. Originally posted by Ken_in_JAPAN with karma: 894 on 2014-04-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by hubery524 on 2019-12-11: When I use the CarrotPlanner there is no error; but in rviz move_base there is no CarrotPlanner. Did you encounter this problem?
{ "domain": "robotics.stackexchange", "id": 17629, "tags": "ros, base-global-planner, eband-local-planner" }
Wrapping JS file in function, and assigning that function to a variable
Question: I've been reading about wrapping JS files in functions to call them on load, etc. I also was looking at namespaces and how to avoid collisions with functions/objects/variables of the same name. What improvements can be made? I know the code works; I'm more concerned with best practices for wrapping JS files and using namespaces.

// codereview.js:
var codeReview = (function() {
    // basically "private" function? Will there be a collision with jQuery here?
    function $(id) {
        return document.getElementById(id);
    }

    // "public" functions
    return {
        fizzbuzz: function() {
            var newhtml = '';
            var output = 0;
            for (var i = 1; i <= 100; i++) {
                if (i % 3 == 0 && i % 5 == 0) {
                    output = "FizzBuzz";
                } else if (i % 3 == 0) {
                    output = "Fizz";
                } else if (i % 5 == 0) {
                    output = "Buzz";
                } else {
                    output = i;
                }
                newhtml += output + "<br />";
            }
            $('fizzbuzz-output').innerHTML = newhtml;
        },
        email: function() {
            var first = "test@em";
            var last = "ail.com";
            var email = first + last;
            $('my_email').innerHTML = email;
        }
    }
})();

Both of the "public" functions would be called via codeReview.fizzbuzz() or codeReview.email(). Also, codeReview.email() is basically just to click and reveal my email address. I did it this way thinking it couldn't get scraped; is that accurate, or did I just come up with a super complicated way to display an email address? Answer: From an organizing perspective you are doing fine; in essence you used the Module Pattern. I would just advise you to consider placing 'use strict'; at the beginning of your function. As you probably figured out, $ does not collide with jQuery. Though, for this piece of code I would not create a one-line function if you are only going to call that function once. For the email thing, I think it might fool a number of scrapers, so I'd keep it. For fizzbuzz, while your code is working, I think the goal is to use a minimum number of modulo statements; your code could be written with fewer of them.
My counter suggestion is very close to your code:

// codereview.js:
var codeReview = (function() {
    // "public" functions
    return {
        fizzbuzz: function() {
            var output, i;
            for (i = 1; i <= 100; i++) {
                output = '';
                if (i % 3 == 0) {
                    output = 'Fizz';
                }
                if (i % 5 == 0) {
                    output += 'Buzz';
                }
                output = output || i;
                console.log(output);
            }
        },
        email: function() {
            var first = 'test@em';
            var last = 'ail.com';
            var email = first + last;
            return document.getElementById('my_email').innerHTML = email;
        }
    }
})();
codeReview.fizzbuzz();

This was an interesting read, which provided an alternative inner loop which is more code-review friendly:

for (i = 1; i <= 100; i++) {
    if (i % 15 == 0) {
        output = 'FizzBuzz';
    } else if (i % 3 == 0) {
        output = 'Fizz';
    } else if (i % 5 == 0) {
        output = 'Buzz';
    } else {
        output = i;
    }
    console.log(output);
}
{ "domain": "codereview.stackexchange", "id": 9404, "tags": "javascript, fizzbuzz" }
Is astrochronology a proper method?
Question: Wikipedia's definition of astrochronology is: "Astrochronology is the dating of sedimentary units by calibration with astronomically tuned timescales, such as Milankovic cycles,[1][2] or even sunspot cycles. When used in concert with radiometric dating, it allows the resolution of timescales to a high degree of accuracy. If orbital precession cycles are identified, the dating error can be as low as 21,000 years." source: wikipedia Question: Is this method really used and does it have a good reputation among field geologists? Answer: Yes it is used and it works. I've heard varying viewpoints on its accuracy and dependability though. I remember my earth science lecturers were in three camps with it:

- one felt that the dates weren't consistently reliable; the data he'd gotten from astro-dating in the past had very poor independent confidence margins when there was no radiometric calibration. He implied that the results felt as though they had been pulled out of thin air.
- two would gather testable material for future study but didn't use astro-dating themselves. It was an interesting but immature technique that might be useful in future, but not to them.
- only one felt that it was really, immediately useful.

Do note that this is in New Zealand: we're a geologically young and active country, so much so that we rarely need longer range radiometric testing like Uranium-Lead or even Potassium-Argon; we can Carbon date most of the sites we're interested in. Fine calibration of longer range datings is much less of an issue here than in older, more settled, locations.
{ "domain": "earthscience.stackexchange", "id": 1565, "tags": "geology, geochronology, field-measurements" }
Bose Chaudhuri Hocquenghem (BCH) vs Reed Solomon (RS) Error correcting codes and sound transmission channel modeling
Question: I've implemented an acoustic based information transmission system using a bandwidth of 2 kHz and the band between 18.5 and 20.5 kHz. I'm trying to find the best error correcting code for it, in the sense that I could get the same error correction capability with the lowest redundancy added (I want to get the best speed possible). I have been able to transmit data between 2 smartphones at low rate (400 bps) and short distances (5-10 cm). At the moment I've designed a package that I send between peers, and I'm using 4 BCH(63,39,4) blocks there. I have also added some parity check bits (just for error detection) in case BCH fixed the incoming package in the wrong way (could happen if more than 4 errors occur in any of my 4 BCH blocks). I interleave the BCH coded bits before I modulate them, trying to distribute the error bursts equally among the 4 BCH blocks. Right now I'm capable of implementing both RS and BCH coding schemes, but I was wondering if there is some criterion I could follow in advance for choosing which one might perform better, and whether it would be worth the time implementing RS. I read this document which says that for a Rayleigh fading channel simulated in Matlab, BCH outperformed binary RS for a specific N, K, T selection. But they mention that this has a strong channel model dependency. My questions are:

1. What channel model fits data transmission over air best? I guess distortion generated by speakers should also be taken into account, especially at the frequencies at which I'm working.
2. Is there a general study comparing BCH vs RS in different channels?
3. Is RS theoretically better than BCH for the same code rate k/n? Non-binary RS may be considered; I'm just interested in the same error correcting capability with the greatest bit rate.

Independent answers to any of these questions are most welcome. Note: At this low bitrate CPU is not a bottleneck.
Anyway, the most time consuming task of my system is FFT based matched filter demodulation (especially if I use high order modulations), not BCH error decoding with such a small code length. Answer: What channel model fits data transmission over air best? I guess distortion generated by speakers should also be taken into account. Disclaimer: I'm really not an Audio person. There are a lot of audio DSP engineers on here that probably have a far better idea of what's happening on an acoustic channel. But:

- Frequency selectiveness will be pretty important. It's unlikely your phone speakers can go below 40 Hz, or far above 12 kHz.
- Audio typically might experience a bad case of multipath.
- Prepare yourself for what would be a relatively solid amount of Doppler if your communication partners, or significant reflectors, or the medium, are moving.

Is there a general study comparing BCH vs RS in different channels? Probably, hundreds. But: it's really a thing that needs to be compared under specific channel models, and since there are freely available implementations of both, as soon as you have a realistic channel model for your specific use case down, simulation should give you more application-specific results than any study might ever give you. I'd really go for channel sounding, in your case; send a known signal of suitable bandwidth, record the raw reception, try to extract a transfer function from that (assume LTI, and verify that assumption). Change the environment/channel, and do it again. Do that ten times (and write a paper), and come up with a stochastic channel model (e.g. "we have this general amplitude response for the ideal single channel, we have x~(PDF) multipath components, and their delays and weights follow the XYZ distribution", and write a paper).
Then write a channel simulator (at this point, I might, again, recommend GNU Radio, for which several channel sounders and channel simulators exist, but mainly for RF channels – but that's not inherent to its design, just to most of its userbase; obviously, write a paper), and test a few codes on that (and write a paper :))!
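The transfer-function extraction step of the channel sounding described above can be sketched in a few lines. This is a hedged illustration, not GNU Radio: the probe signal, channel taps, and lengths are all invented for the example, and the channel is simulated as a circular convolution so that the FFT division identity is exact (a real recording would need windowing, averaging, and noise handling):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096

# Known probe signal: white noise excites every frequency bin.
tx = rng.standard_normal(N)

# Invented LTI channel standing in for the acoustic path:
# a direct path plus one attenuated echo (multipath).
true_h = np.zeros(64)
true_h[0] = 1.0     # direct path
true_h[20] = 0.5    # echo, 20 samples later, half amplitude

# Simulate reception (circular convolution, so the FFT identity holds).
rx = np.fft.irfft(np.fft.rfft(tx) * np.fft.rfft(true_h, n=N), n=N)

# Transfer-function estimate: H(f) = RX(f) / TX(f) for an LTI channel.
H_est = np.fft.rfft(rx) / np.fft.rfft(tx)
h_est = np.fft.irfft(H_est, n=N)[:len(true_h)]
```

From several such measurements in different rooms one can then fit the tap delay and weight distributions for the stochastic channel model mentioned above.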
{ "domain": "dsp.stackexchange", "id": 4455, "tags": "digital-communications, sound, channelcoding, reed-solomon" }
Cleaning up Reverse Polish Notation evaluator
Question: I was wondering if anyone had some general clean up advice on my code. I believe that more helper functions would be the answer in my case, but I could be wrong. 30 lines maximum per method is my rule. I can't seem to figure out how to clean this up more, though.

Sample Input:
(1/3) (1/5) - (40/1) * #
(2/3) B * #

Sample Output:
Expression 1 is: (1/3)(1/5)-(40/1)*
The value is: (16/3)
Intermediate results: (2/15)(16/3)
Expression 2 is: (2/3)B
Invalid Expression
Intermediate results:

import java.util.Scanner;

public class RpnEvaluator {
    private final int MAX_TOKEN = 100;
    private Scanner stdin = new Scanner(System.in);
    private Queue myQueue = new Queue(MAX_TOKEN);
    private Stack myStack = new Stack(MAX_TOKEN);

    public void run() throws java.io.IOException {
        int count = 1;
        Fraction myInterMid;
        while(stdin.hasNext()) {
            runOnce(count++);
            System.out.print("Intermediate results: ");
            while(!myQueue.isEmpty()) {
                myInterMid = (Fraction)myQueue.remove();
                System.out.print(myInterMid.toString());
            }
            System.out.println();
            clear(myStack, myQueue);
        }
        System.out.println("Normal Termination of Program 3.");
    }

    private boolean isOperator(String input) {
        String[] oprtr = {"+", "-", "*"};
        for(String choice: oprtr)
            if(choice.equals(input))
                return true;
        return false;
    }

    private boolean isOperand(String input) {
        if(input.charAt(0) == '(')
            return true;
        return false;
    }

    private Fraction runOperation(String choice, Fraction op2, Fraction op1) {
        Fraction newFract = new Fraction();
        switch (choice) {
            case "*": newFract = new Fraction(op1.times(op2)); break;
            case "+": newFract = new Fraction(op1.plus(op2)); break;
            case "-": newFract = new Fraction(op1.minus(op2)); break;
        }
        return newFract;
    }

    private void runOnce(int count) {
        Fraction op1 = null;
        Fraction op2 = null;
        clear(myStack, myQueue);
        System.out.print("Expression " + count++ + " is: ");
        doTypeCheck(op1, op2);
    }

    private void clear(Stack myStack, Queue myQueue) {
        myStack.clear();
        myQueue.clear();
    }

    private void runTheOperator(Fraction op2, Fraction op1, String readIn) {
        op1 = (Fraction)myStack.pop();
        Fraction interMed = runOperation(readIn, op2, op1);
        myStack.push(interMed);
        myQueue.add(interMed);
    }

    private void doTypeCheck(Fraction op1, Fraction op2) {
        Fraction answer = null;
        String readIn = "";
        boolean valid = true;
        readIn = stdin.next();
        while(!readIn.equals("#") && valid == true) {
            if(!isOperator(readIn) && isOperand(readIn)) {
                processOperand(readIn);
                readIn = stdin.next();
            } else if(isOperator(readIn)) {
                System.out.print(readIn);
                if(myStack.isEmpty())
                    valid = false;
                else
                    op2 = (Fraction)myStack.pop();
                if(myStack.isEmpty()) {
                    valid = false;
                    throwLine(readIn);
                } else {
                    runTheOperator(op2, op1, readIn);
                    readIn = stdin.next();
                }
            } else {
                System.out.print(readIn);
                valid = false;
                throwLine(readIn);
            }
        }
        System.out.println();
        if(myStack.isEmpty())
            valid = false;
        else
            answer = (Fraction)myStack.pop();
        if(!myStack.isEmpty())
            valid = false;
        checkMessageValid(valid, answer);
    }

    private void checkMessageValid(boolean valid, Fraction answer) {
        if(valid == false)
            System.out.println("Invalid Expression");
        else
            System.out.println("The value is: " + answer.toString());
    }

    private void throwLine(String line) {
        while(!line.equals("#")) {
            line = stdin.next();
        }
    }

    private void processOperand(String readIn) {
        Fraction stringFract = null;
        Fraction myFract = null;
        stringFract = new Fraction(readIn);
        System.out.print(stringFract.toString());
        myFract = new Fraction(readIn);
        myStack.push(myFract);
    }
}

Answer:

- Reverse Polish Notation does not need parentheses, so that should actually be invalid input and should not be checked.
- To determine if something is an operand you should be able to use stdin.hasNextInt(). If that is false, then you should be able to use stdin.next() to get whatever the operator is. This could greatly simplify your type checking.
- Instead of terminating input with # you should just check for newlines.
- I'm not sure what the purpose of having myQueue is. You should only need a stack for processing RPN.
I would recommend reading up on RPN to understand it more (the Wikipedia page has an algorithm for interpreting RPN) and see if that gives you ideas on how you might rethink your approach.

Edit: Assuming that the input format and output values are requirements, there are several things you can do to improve the code.

- MAX_TOKEN should be private static final.
- You should rename your methods to better describe what they do. runOnce() is called in a loop, so it is not called once, and it doesn't explain what it does. I would recommend something like processNextCalculation(). doTypeCheck(), throwLine() and checkMessageValid() should also be renamed.
- You should move the output in checkMessageValid() to be in the same place as where you output the intermediate results. You will need to change some return values to propagate the results upward.
- isOperand() can be reduced to return input.charAt(0) == '(';
- In run() you don't use myInterMid elsewhere, so you can remove references to it and simply System.out.println(myQueue.remove());
- You should try to be consistent about using my as a prefix. Typically, it implies that the variable is an instance member variable, but you also use it for variables within methods. That is definitely confusing.
- I don't know what is in your Fraction class, but I imagine that the times(), plus(), and minus() methods are returning Fractions, so you don't need to new one up from the result.
- You don't need to declare op1 and op2 in runOnce(), since you can just pass two nulls to doTypeCheck(). For that matter, you don't need to even have parameters for doTypeCheck(). Once you've done that, you should bring the rest of the method up to run() so you can keep your text output in the same place.
- Since you can't have both isOperator() and isOperand() be true, you don't need to check both in your first if condition in doTypeCheck().
- You don't need to check myStack.isEmpty() twice; you should be able to check if myStack.size() is less than two. If your implementation of Stack doesn't have a size() method, well... I feel bad for you.
- Whenever myStack.isEmpty() is true, you can break out of the loop and your failure case will still show correctly.
- You only need one Fraction in processOperand() and you can initialize it on the same line as you declare it.
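For comparison, the single-stack algorithm from the Wikipedia page mentioned above fits in a few lines. This is a sketch in Python rather than Java, and it substitutes the standard library's Fraction for the custom Fraction class; both are assumptions made to keep the example self-contained. Note the tokens carry no parentheses, matching the point that RPN does not need them:

```python
from fractions import Fraction

# Operands are pushed; an operator pops two values and pushes the result.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            if len(stack) < 2:
                raise ValueError("Invalid Expression")
            b = stack.pop()
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(Fraction(tok))  # e.g. "1/3" -> Fraction(1, 3)
    if len(stack) != 1:  # leftover operands also mean a malformed input
        raise ValueError("Invalid Expression")
    return stack[0]

# The first sample expression from the question: (1/3) (1/5) - (40/1) *
result = eval_rpn(["1/3", "1/5", "-", "40/1", "*"])
```

With this structure the validity checks collapse into two places: an underfull stack at an operator, and anything but exactly one value left at the end.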
{ "domain": "codereview.stackexchange", "id": 6515, "tags": "java, beginner, math-expression-eval, rational-numbers" }
ROS2 how to programmatically set rosout logging handler
Question: Hi, In ROS2 there are different types of logging handlers. Command line arguments can be used to set up the loggers: these are passed to rclcpp::init() and then to rcl::init(). The command line arguments can do the following: enable stdout logging, enable rosout logging, enable an external logging library (log4cxx or spdlog). They do this by creating a logging_multiple_output_handler and setting it through rcutils_logging_set_output_handler. I found myself in a situation where I want to use my own version of logging_multiple_output_handler. This handler should always use 2 loggers: rosout and an additional one that I'm defining. For this reason, I created a function to be called right after rclcpp::init in order to re-initialize the logging handlers to what I need.

void set_ros2_logger_handler()
{
  auto allocator = rcl_get_default_allocator();
  rcl_logging_rosout_init(&allocator);

  auto logging_multiple_output_handler = [](
    const rcutils_log_location_t * location,
    int severity,
    const char * name,
    rcutils_time_point_value_t timestamp,
    const char * format,
    va_list * args) -> void
  {
    rcl_logging_rosout_output_handler(location, severity, name, timestamp, format, args);
    my_custom_handler(location, severity, name, timestamp, format, args);
  };

  rcutils_logging_set_output_handler(logging_multiple_output_handler);
}

However, this does not really work and I'm wondering what should be the best approach. The problems are the following:

1. rcl_logging_rosout_init is a hidden function (marked with RCL_LOCAL), so how can I know if rosout has been enabled without using it?
2. rcl_logging_rosout_output_handler is not declared as hidden, but still I can't link to it for some reason.
EDIT: apparently rcl_logging_rosout_output_handler should be accessible and the error is linked to the following line in rcl/CMakeLists.txt rcl_set_symbol_visibility_hidden(${PROJECT_NAME} LANGUAGE "C") TICKET: https://github.com/ros2/rcl/issues/476 DISCOURSE: https://discourse.ros.org/t/set-symbol-visibility-to-hidden-for-rmw-and-rcl-packages/7981/5?u=alsora Originally posted by alsora on ROS Answers with karma: 1322 on 2019-07-29 Post score: 0 Answer: The code I posted is supposed to work. rcl_logging_rosout_output_handler is marked as RCL_PUBLIC, but it was not accessible due to a bug addressed by this PR https://github.com/ros2/rcl/pull/478 rcl_logging_rosout_init should be marked with RCL_PUBLIC too since otherwise the handler is useless. https://github.com/ros2/rcl/pull/479 Originally posted by alsora with karma: 1322 on 2019-08-01 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 33547, "tags": "ros, ros2, rosout, logger" }
shear force calculation with velocity and mass
Question: I have made an image which is self-explanatory and contains the question. Suppose I have a metal plate that is travelling at V m/s. A stationary body acts as a shearing body. When the metal plate contacts the still body, it starts to shear off. The plane in which it will be sheared is shown in the diagram (red line). So I want to ask how much force will be applied for the shearing. This is just an idea that I'm working through. I haven't seen this in any book that I have read (I haven't looked through many books yet). This problem came to my mind yesterday while I was dreaming. Also I am assuming that the acceleration of the moving plate will be equal to the change in velocity (after shearing, the plate stops), so a = (0 - V) / 1 (unit time). So the question is: how much mass should be considered for the equation F = ma? Basically I want to calculate the force required. If I am wrong in some calculation then correct me. Answer: It seems like what you're really after is the force, not the mass. Now, if you're assuming that the plate will shear off, then the easiest way of calculating the force isn't through Newton's laws, but by calculating the force necessary to shear off the plate. Simplistically, that can be found by $$ F = A\tau $$ where $A$ is the transversal cross-section of the plate along the shearing plane (assuming thin-walled theory applies here) and $\tau$ is the shear strength of the chosen material. This doesn't consider dynamic effects (buckling, warping, possibility of the plate simply bouncing off instead of shearing), but that's impossible to do with the information presented.
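To put numbers on $F = A\tau$, here is a quick sketch with invented values: the plate dimensions and the shear strength (a round figure in the range often quoted for mild steel) are assumptions, not data from the question:

```python
# Quasi-static shear force estimate F = A * tau.
# Illustrative assumptions: a 100 mm wide, 2 mm thick plate sheared
# across its full cross-section, shear strength ~250 MPa (mild steel).
width = 0.100        # m
thickness = 0.002    # m
tau = 250e6          # Pa, assumed shear strength

area = width * thickness   # transversal cross-section along the shear plane, m^2
force = area * tau         # N

print(f"Shear area: {area * 1e6:.0f} mm^2")
print(f"Required shear force: {force / 1e3:.0f} kN")
```

Note this gives the force to shear the plate, independent of V; the plate's speed and mass only determine whether enough kinetic energy and impulse are available to actually deliver it.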
{ "domain": "engineering.stackexchange", "id": 673, "tags": "design" }
What are "proxy data sets" in machine learning?
Question: The paper Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration uses the term "proxy data sets" in this way To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines. I googled that term, but didn't find a definition of "proxy data sets". What are "proxy data sets" in machine learning? The paper Analysis of Manufacturing Process Sequences, Using Machine Learning on Intermediate Product States (as Process Proxy Data) mentions a similar term The advantage of the product state-based view is the focus on the product itself to structure the information and data involved throughout the process. Using the intermediate product states as proxy data for this purpose Does "proxy data" mean the same thing as "proxy data sets" does? Answer: In computer science, if you say "A is a proxy for B", then it means that "A replaces B" (temporarily or not), or that "A is used as an intermediary for B". The term "proxy" usually refers to a server, i.e. there are the so-called proxy servers, which intuitively do the same thing (i.e. they are used as intermediaries). The following picture is worth more than 1000 words. Let's go back to your context. In the following paragraph To develop DL techniques for synthesizing high-resolution realistic fundus images serving as proxy data sets for use by retinal specialists and DL machines. The deep learning techniques will be used to generate data that will be used as a replacement for real data for use by retinal specialists. In your second paragraph The advantage of the product state-based view is the focus on the product itself to structure the information and data involved throughout the process. 
Using the intermediate product states as proxy data for this purpose The word "intermediate" is even used, so there should not be any need for further explanation, although I admit that I have no idea what "product states" are in this context, but they will be used as "proxy data" for other data. Does "proxy data" mean the same thing as "proxy data sets" does? If yes, I will go through that paper. I guess so, but bear in mind that "data" is more general than "data set". I have found other slightly different uses of the expression "proxy data", not in the context of computer science, but in the context of paleoclimatology. For example, have a look at this web article What Are Proxy Data?. Just for completeness, here are some dictionary definitions of proxy:

- authority given to a person to act for someone else, such as by voting for them in an election
- a person who you choose to do something officially for you
- a situation, process, or activity to which another situation, etc. is compared
{ "domain": "ai.stackexchange", "id": 1833, "tags": "machine-learning, datasets, terminology" }
Getting Start with Kinect and ROS
Question: Hi all, I want to use a Gazebo world and a robot with a kinect on it with which I can identify objects in this world. I'm thinking of getting the kinect RGB images and processing them, but I'm a little confused about how to simulate the kinect on gazebo and pass the image data to ROS, so I was wondering if any of you have a quick start guide or tutorials to give me (even some tips). I know normally OpenNI and PCL are used, but I don't know exactly how to use them with ROS and Gazebo together. I searched the ROS forum but related questions were normally closed as Gazebo questions, so I thought maybe here was the best place to discuss it. Originally posted by cflavs on Gazebo Answers with karma: 1 on 2016-09-12 Post score: 0 Original comments Comment by kerem on 2016-10-07: I'm also looking for a similar solution, have you got any clues yet? Answer: Just add these lines to your sdf file with a model and launch it; after this you can run rostopic echo in another terminal to see the published topics. It will work exactly like a real kinect sensor, but if you want to make it more realistic you need to add some noise to it.
<!-- camera -->
<gazebo reference="camera_link">
  <sensor type="depth" name="camera1">
    <always_on>1</always_on>
    <visualize>true</visualize>
    <camera>
      <horizontal_fov>1.047</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <depth_camera>
      </depth_camera>
      <clip>
        <near>0.1</near>
        <far>100</far>
      </clip>
    </camera>
    <plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <alwaysOn>true</alwaysOn>
      <updateRate>10.0</updateRate>
      <cameraName>camera</cameraName>
      <frameName>camera_link</frameName>
      <imageTopicName>rgb/image_raw</imageTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
      <depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
      <pointCloudCutoff>0.4</pointCloudCutoff>
      <hackBaseline>0.07</hackBaseline>
      <distortionK1>0.0</distortionK1>
      <distortionK2>0.0</distortionK2>
      <distortionK3>0.0</distortionK3>
      <distortionT1>0.0</distortionT1>
      <distortionT2>0.0</distortionT2>
      <CxPrime>0.0</CxPrime>
      <Cx>0.0</Cx>
      <Cy>0.0</Cy>
      <focalLength>0.0</focalLength>
    </plugin>
  </sensor>
</gazebo>

You have to define a link, a collision element and a visual element to represent the kinect sensor, and add this sensor plugin to the model or link which you want to act as the kinect sensor. Originally posted by hari1234 with karma: 56 on 2016-12-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by aadityacr7 on 2018-05-01: I get this particular error message when I copy-paste this code inside the link tag: XML Element[gazebo], child of element[link] not defined in SDF. Ignoring[gazebo]. Any suggestions on why this could be happening? Thanks in advance. EDIT: Solved the problem. The Gazebo tag is used for URDF files, not for SDF files.
{ "domain": "robotics.stackexchange", "id": 3986, "tags": "ros, kinect" }
Help to recognize spider
Question: It is unusual to see spiders of such a size in Lithuania, so I was wondering what kind of spider it is and whether it is strong enough to bite through human skin. I've googled the biggest known spiders in Lithuania, but the few species mentioned don't look alike - mine has much thinner legs - but I'm not an expert. I've placed a 1 euro cent coin for size - its diameter is 16.25 mm. Answer: The long hairy legs make me think that this might be a spider belonging to the Tegenaria genus, or something related.
{ "domain": "biology.stackexchange", "id": 5940, "tags": "zoology, species-identification" }
Reducing text input size into word2vec without affecting performance too badly?
Question: So I am implementing Word2Vec for the first time, and I have a set of training data that I would like to train a word2vec model on. Predictably, the problem is the dataset is rather large, and I have more limited computational power than I would like. This is a very common problem of course, but are there any ways to minimize the input text without too horribly affecting the performance? For example, if I had the example sentences: Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much. They were the last people you’d expect to be involved in anything strange or mysterious, because they just didn’t hold with such nonsense. Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, beefy man with hardly any neck, although he did have a very large mustache. I could do something like take out the stop words... but as I googled, that's a bad idea according to one link and a good idea according to another... so which is it? I also looked into text summarization, but that seems harder than implementing word2vec. Another idea I naively had was to randomly take out X% of sentences from the text. But that would obviously be a performance hit in the model, and I'm not sure how big. So, are there any general methods for trying to do this, and is stop word removal a way to do it? Answer: I don't think there really is a right or wrong answer to the "removing stopwords" question. Some people will argue that throwing away information will reduce model performance, while others argue that keeping stopwords adds noise. I personally follow a simple rule of thumb: if my model depends on sentence structure, then I keep stopwords. If I'm modeling topics and am more interested in important phrases, then I remove them. This seems to work well. If you're looking for ways to reduce your matrix, then yes, removing stopwords is a perfectly acceptable idea.
Another thing you can do is apply common feature reduction techniques like LSA or chi2 to find the most important words and reduce your input space to the most meaningful words. However, doing this may dramatically affect the performance of your word2vec model. But if it is your only choice, then why not give it a go.
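A minimal version of the stop-word filtering pass discussed above, for concreteness. The stop-word list here is a tiny hand-picked sample (an assumption; in practice you would take a full list from a library such as NLTK):

```python
# Tiny illustrative stop-word set -- not a complete list.
STOPWORDS = {"and", "of", "the", "to", "that", "they", "were", "a",
             "in", "or", "you", "very", "was", "with", "he", "did"}

def preprocess(sentence):
    """Lowercase, strip trailing punctuation, drop stop words."""
    tokens = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in tokens if w and w not in STOPWORDS]

sentence = ("They were the last people you'd expect to be involved "
            "in anything strange or mysterious.")
tokens = preprocess(sentence)
```

Applied over the whole corpus, this both shrinks the training text and trims the vocabulary, which is where most of the memory and time goes.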
{ "domain": "datascience.stackexchange", "id": 2503, "tags": "nlp, word2vec, word-embeddings, automatic-summarization" }
What sustains the rotation of earth's core (faster than surface)?
Question: I recently read that the earth's core rotates faster than the surface. Well, firstly, it's easier to digest the concept of planetary bodies, stars, galaxies in rotation and/or orbital motion. But, what makes a planet's core rotate? And in the earth's case, faster than its surface? Secondly, I am aware that the core's rotation is what lends the earth its magnetic field, but... what keeps it going in the first place? Answer: The same thing that makes the surface rotate: the fact that it always rotated. Because angular momentum is (almost) conserved, our planet has no other option than to always rotate. (Of course in reality this picture is complicated by interactions with the rest of the universe, but those are just small corrections). You can ask then why it rotated in the first place, and this has to do with the origin of the Solar system. So, the main question is not what makes Earth rotate but what layers of Earth rotate at what speed. If Earth were a solid body then it would rotate with the same angular speed everywhere. But the interior of the Earth is liquid, so the picture is quite complicated. The only thing that is obvious is that every part of Earth must rotate (at some speed or other) because of friction: if one layer (say the crust) were rotating and another (say the mantle) weren't, then after a little while it would start to rotate anyway, because the atoms of the moving layer would drag the atoms of the static layer with them. To say anything more than this one would have to consider the precise materials and fluid equations in the interior of Earth. Hopefully someone else will come along and complete the picture.
{ "domain": "physics.stackexchange", "id": 67363, "tags": "classical-mechanics, earth, geophysics, magnetohydrodynamics, planets" }
Finding the longest path in an undirected node-weighted tree
Question: I have a tree where each node is assigned a weight (a real number that can be positive or negative). I need an algorithm to find a simple path of maximum total weight (that is, a simple path where the sum of the weights of the nodes in the path is maximum). There's no restriction on what node the path starts or ends at. I have a possible algorithm, but I am not sure it works and I am looking for a proof. Here it is:

1) Select an arbitrary node u and run DFS(u) to find the maximum weight simple path that starts at u. Let (u, v) be this path.
2) Run DFS(v) to find the maximum weight simple path that starts at v. Let this path be (v, z). Then (v, z) is a simple path of maximum weight.

This algorithm is linear in the size of the graph. Can anyone tell me if it works, and if so, give a proof? Note: The Longest Path Problem is NP-Hard for a general graph with cycles. However, I only consider trees here. Answer: No, your algorithm does not always work. Here is a counterexample. Let graph $G$ have five weighted nodes, $A\mapsto 1$, $B\mapsto -1$, $C\mapsto 0$, $D\mapsto 1$, $E\mapsto 1$. There are four edges, $AB$, $BC$, $CD$ and $CE$. Let us run your algorithm. Select node $B$ and run DFS($B$). We may get the maximum weight simple path $B, A$, whose weight is 0. Then run DFS($A$); we may find the maximum weight simple path that starts at $A$, which is $A, B, C, D$, whose weight is 1. That path is returned. However, the simple path with the maximum weight is actually $D, C, E$, whose weight is $2$.
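The counterexample is easy to verify mechanically. The sketch below uses only the weights and edges given above, and compares the two-DFS heuristic against a brute force over all starting nodes; the tie-breaking freedom in step 1 is exactly what lets the heuristic go wrong:

```python
weight = {"A": 1, "B": -1, "C": 0, "D": 1, "E": 1}
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
       "D": ["C"], "E": ["C"]}

def best_path_from(u):
    """Max-weight simple path starting at u: (weight, endpoint).
    Ties keep the endpoint found first, i.e. one of the choices the
    heuristic leaves unspecified."""
    best = (weight[u], u)
    def dfs(v, seen, total):
        nonlocal best
        if total > best[0]:
            best = (total, v)
        for w in adj[v]:
            if w not in seen:
                dfs(w, seen | {w}, total + weight[w])
    dfs(u, {u}, weight[u])
    return best

# Step 1: start at B; the DFS may settle on endpoint A (weight 0).
_, v = best_path_from("B")
# Step 2: best path from that endpoint.
heuristic_weight, _ = best_path_from(v)

# Ground truth: best over every possible starting node.
optimum = max(best_path_from(u)[0] for u in weight)
```

With this adjacency ordering the heuristic returns weight 1 while the optimum is 2, matching the argument above.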
{ "domain": "cs.stackexchange", "id": 12839, "tags": "algorithms, graphs" }
What is the difference between hydrogen chloride and hydrochloric acid?
Question: My teacher insists that hydrogen chloride is a misnomer, and that the proper nomenclature demands HCl to be named hydrochloric acid. However, I saw the name hydrogen chloride used in my textbook, but it was never further explained. Answer: Hydrogen chloride is a gas. It is the molecule made up of a hydrogen and a chlorine atom bonded by a single bond. It comes in gas cylinders for use in chemistry. Hydrochloric acid is a solution (no, not to your problem of a stubborn teacher). It is composed of water molecules and hydrogen chloride molecules. In terms of mass percentage, concentrated hydrochloric acid is often $37~\%\ \ce{HCl}$ in water. It comes in glass bottles (often brown glass for ease of bottle use; the chemical company does not have to distinguish between compounds that can live in clear bottles and those that must live in brown ones) for use in chemistry. Hydrogen chloride can also be dissolved in other solvents such as ether. In this case too, it may be termed hydrochloric acid, although a specification of the solvent should be added. Whether or not either of the two names is a misnomer depends on what you are addressing. Using hydrogen chloride for the liquid solution is as wrong as using hydrochloric acid for the gas.
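The 37 % mass figure converts to the molarity printed on lab bottles with one extra assumed datum — the density of concentrated hydrochloric acid, roughly 1.19 g/mL, which is not stated in the answer:

```python
mass_fraction = 0.37      # 37 % HCl by mass (from the answer)
density = 1.19            # g/mL, assumed typical for concentrated HCl
molar_mass = 36.46        # g/mol for HCl

# grams of HCl per litre of solution, divided by the molar mass
molarity = 1000 * density * mass_fraction / molar_mass
print(round(molarity, 1))  # about 12.1 mol/L, the familiar "12 M" of conc. HCl
```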
{ "domain": "chemistry.stackexchange", "id": 10027, "tags": "nomenclature" }
Electromagnetic Radiation by objects at rest
Question: When an electrically charged particle moves under acceleration, it emits electromagnetic radiation. So the question is: when an electrically charged particle, say a proton or an electron, is at rest, will it emit no EM radiation? Answer: Accelerating charged particles, electrons or protons, emit electromagnetic waves. This means that if they are moving with a constant velocity, or they are stationary, they will not emit electromagnetic radiation (photons). An electron in an atom (in an excited state) can emit a photon and drop to a lower energy level. But a free, isolated, non-accelerating electron will not spontaneously emit a photon, because conservation of momentum/energy would not hold. Also, a proton (in a nucleus) can go through a different radiative process, called beta decay, where a proton decays into a neutron with the emission of a positron and a neutrino. And a free proton does not spontaneously decay (as far as all experiments have shown). Also, at the macroscopic level, large material objects emit thermal electromagnetic waves, or thermal radiation. This is because, again, the charged protons and electrons (in the atoms that make up a macroscopic object) are constantly vibrating. When these charged particles vibrate, since they are being rapidly accelerated and decelerated, they emit these electromagnetic waves. Again, acceleration of charged particles is required for the emission of electromagnetic waves.
{ "domain": "physics.stackexchange", "id": 81552, "tags": "electromagnetism, electromagnetic-radiation, electric-fields" }
Multiply two huge base-10 numbers in assembly
Question: This post is a second part of my original post, Add two huge base-10 numbers, which deals with adding two huge base-10 numbers. However, in this case, I'm multiplying two non-negative whole numbers. In both cases, the input and output are represented as null-terminated ASCII strings, with digits in big-endian order. I use my own custom strlen function which only modifies rcx. I've tried to incorporate most of the feedback I've received and managed to shorten the code from 170 to 130 lines. Is there anything else I can do which will make it run faster? Here's the code: extern add_whole extern strlen section .text global _multiply_whole _multiply_whole: ; Input: ; - char *a -> rdi ; - char *b -> rsi ; - char *res -> rdx (allocate strlen(a) + strlen(b) + 1 (for null terminator) bytes) ; - char *buf1 -> rcx (temporary buffer for program to use) ; - char *buf2 -> r8 (both buf1 and buf2 have the same length as res) ; Registers used: ; - rax ; - rbx ; - r9 ; - r10 ; - r11 ; - r12 ; - r13 ; - r14 ; - r15 push rbx ; Push used callee-saved registers. push r12 push r13 push r14 push r15 mov r9, rdi ; Move char* a to r9. mov r10, rcx ; Save rcx since strlen() doesn't preserve it. call strlen ; rax = strlen(a) mov rbx, rax ; Save rax in order to prevent it from being overwritten. mov rdi, rsi ; strlen's first argument is now b. call strlen ; Call strlen with b. dec rax xchg rax, rbx ; rbx = strlen(b) - 1 mov rdi, r9 ; Restore rdi. mov rcx, r10 ; Restore rcx. xor r10d, r10d ; r10b is the carry. mov byte [rdx], '0' ; Set the default value of the result string to '0'. xor r14d, r14d ; The number of leading zeroes to add. .loop_1: lea r13, [rax+1] ; r13 = strlen(a) + 1 add rcx, rax ; rcx = &a[strlen(a)] mov r9, rax ; r9 = strlen(a) mov r15d, 10 ; The number we're dividing by. push rax ; Save rax since we use it for mul and div later. 
.loop_2: movzx r11d, byte [rsi+rbx] ; r11b is a byte of char *b sub r11b, '0' movzx eax, byte [rdi+r9-1] ; al is a byte of char *a sub al, '0' mul r11b mov r11b, al ; r11b is the multiplication of char *a and char *b. add r11b, r10b ; Add carry. push rdx mov al, r11b div r15b mov r10b, al ; carry = r11b / 10 mov al, ah mov byte [rcx], al add byte [rcx], '0' ; Add the remainder to the result string. pop rdx dec rcx dec r9 jnz .loop_2 ; Loop r9 times. mov byte [rcx], r10b ; Add the final carry (might be 0) add byte [rcx], '0' ; to the string. xor r10b, r10b ; Reset carry. xor r11d, r11d ; r11 = 0 test r14, r14 jle .after_loop_3 ; if r14 <= 0 then skip loop_3 .loop_3: ; Adds r14 trailing zeroes to buf1, r14 is the amount of times loop_1 has run. mov byte [rcx+r13], '0' ; buf1[r13] = '0' inc r13 ; r13++ inc r11 ; r11++ cmp r11, r14 js .loop_3 ; Loop r14 times. .after_loop_3: mov byte [rcx+r13], 0 ; Zero-terminate the string to pass it to add_whole(). push rcx ; Save used caller-saved registers. push rdi push rsi push rdx mov rdi, rdx ; param1 = rdx (char *res) mov rsi, rcx ; param2 = rcx (char *buf1) mov rdx, r8 ; param3 = r8 (char *buf2) mov r15, r8 ; Saving the r8 register. call add_whole ; buf2 = add_whole(res, buf1) (this doesn't actually return anything) mov r8, r15 ; Restoring the r8 register. pop rdx pop rsi xor r11d, r11d mov rdi, r8 call strlen ; r9 = strlen(buf2) (length of the addition result) mov r9, rax pop rdi pop rcx pop rax .loop_4: mov r15b, byte [r8+r11] mov byte [rdx+r11], r15b inc r11 cmp r11, r9 js .loop_4 mov byte [rdx+r9], 0 xor r11d, r11d .loop_5: mov byte [rcx+r11], 0 mov byte [r8+r11], 0 inc r11 cmp r11, r13 js .loop_5 inc r14 mov r15, rbx dec rbx test r15, r15 jg .loop_1 mov rdi, rdx call strlen mov byte [rdx+rax], 0 pop r15 pop r14 pop r13 pop r12 pop rbx ret This is basically the pen-and-paper multiplication algorithm implemented in assembly. 
I've used movzx in some cases and even avoided pushing and popping registers (instead, I've used two movs as suggested by https://stackoverflow.com/questions/73996728/is-moving-into-another-register-faster-or-slower-than-push-and-pop.) Currently, this code can calculate 1000! 27 times per second. I used this code for that benchmark. #include "multiply.h" #include <chrono> #include <iostream> std::string factorial(int x) { std::string answer = "1"; for (auto i = 2; i <= x; i++) { std::string toMultiply = std::to_string(i); char *partAns = (char *)calloc(answer.length() + toMultiply.length() + 1, 1); multiply(answer.c_str(), toMultiply.c_str(), partAns); answer = partAns; free(partAns); } return answer; } int main() { double time = 0; int x = 0; while (time < 1) { auto start = std::chrono::high_resolution_clock::now(); std::string s = factorial(1000); auto end = std::chrono::high_resolution_clock::now(); time += std::chrono::duration_cast<std::chrono::microseconds>(end - start) .count() * 1e-6; x++; } std::cout << "1000!'s per second: " << x << '\n'; } multiply.h: /// @brief Multiplies the first rational argument with the second rational /// argument and stores the result in the third argument. /// @param a The first non-negative rational number as a decimal. /// @param b The second non-negative rational number as a decimal. /// @param res Where a * b will be stored. 
void multiply(const char *a, const char *b, char *res) { extern size_t strlen(const char *str); size_t a_length = strlen(a); size_t b_length = strlen(b); char *a_copy = (char *)calloc(a_length + 1, 1); char *b_copy = (char *)calloc(b_length + 1, 1); size_t ptr = 0; int displacement = 0; unsigned char flag = 0; for (size_t i = 0; i < a_length; i++) { if (flag == 1) displacement++; if (a[i] == '.') flag = 1; else { a_copy[ptr] = a[i]; ptr++; } } ptr = 0; flag = 0; for (size_t i = 0; i < b_length; i++) { if (flag == 1) displacement++; if (b[i] == '.') flag = 1; else { b_copy[ptr] = b[i]; ptr++; } } size_t bufsize = strlen(a) + strlen(b) + 1; char *buf1 = (char *)calloc(bufsize, 1); char *buf2 = (char *)calloc(bufsize, 1); _multiply_whole(a_copy, b_copy, res, buf1, buf2); size_t reslength = strlen(res); for (size_t i = 0; i < displacement; i++) res[reslength - i] = res[reslength - i - 1]; if (displacement) res[reslength - displacement] = '.'; free(buf1); free(buf2); free(a_copy); free(b_copy); } Any way to improve this? Answer: Unfortunate compare/branch pair A cmp/js pair cannot macro-fuse on various Intel processors. Since it's meant to be a less-than comparison, usually it would be done with cmp/jl or cmp/jb (depending on signedness) both of which can macro-fuse. Also, cmp/js in general can go wrong in case of overflow. That case doesn't look very relevant here in this code, I just mention it so you can avoid it when it does matter. avoided pushing and popping registers Not entirely, but I'll leave that up to you. div and 8-bit registers Note that the dividend for div r15b is ax, not al. ah is zero at the time of the division, but seemingly by accident, and in such a way that it creates a situation where the high-byte register and the low-byte register may have gotten renamed separately and need to be "unified" with an implicit extra µop. That would only happen on microarchitectures that rename those registers separately. 
On others, writing to byte registers may have an implicit dependency on the corresponding 64-bit register, to "merge" the new value into it. movzx has been used in some cases, but there are still other operations that risk triggering the curse of partial register writes. The curse of partial register writes also applies to xor r10b, r10b for example, which (unlike xor r10d, r10d) is not a special zeroing idiom (so it is a real xor instruction, with associated execution cost, and not a dependency-breaker). Unless there is some overriding reason not to, you should zero the corresponding 32-bit register. Using xor r11d, r11d to set r11 to zero is good, keep doing that. By the way the div can be avoided, which is probably still a good idea, despite significant improvements to the performance of integer division in recent x86 processors. There are various ways to approach this. For example for a number in range 0..91 held in eax, division by 10 may be implemented like this: imul edx, eax, 0CDh ; approximately 1/10 * 2^11 shr edx, 11 which works for that entire range of numbers and actually more, the first time it goes wrong is when eax = 0x0405. Then the remainder can be determined by subtracting 10 times the quotient from the dividend, imul ecx, edx, -10 add eax, ecx ; this add may be merged with the conversion to ASCII into a 3-part lea Or two leas could be used to implement a multiplication by 10, but then there would be a sub which cannot be merged with the conversion to ASCII. I used some relatively arbitrary registers for these examples, I do not mean to imply that these snippets can be plugged into the code as-is. Please take them as descriptions of techniques rather than as concrete suggestions. mov byte [rcx], al add byte [rcx], '0' A read-modify-write operation is relatively "heavy" in various ways. Not super heavy, but heavy enough to think about avoiding it. 
It could be avoided by using lea to do a non-destructive addition, or even by adding '0' to eax (note that I'm choosing eax instead of al on purpose, to avoid the wrath of partial register writes), storing al, and then subtracting '0' again. That add and sub would be part of a loop-carried dependency chain, but that does not look significant to me compared to the rest of the loop body. Sweeping changes One of the biggest things you could do for performance, is multiplying several digits at once. The basic multiplication algorithm does a quadratic number of multiplications after all. Switching from base 10 to base 109 (so that each limb fits in 32 bits, and each product fits in 64 bits) would not reduce multiplications by a factor of 9, but rather a factor of 81[1]. I chose a power of ten for the base, since that most easily converts to and from the base ten input and output. Even with that, it is not so simple to do efficiently, or even to do it at all. The temporary results into which the additions take place, should remain in base 109 for as long as possible, since converting that to base ten is (while simpler than general base conversions, since each limb would map nicely to 9 output digits) actually not cheap. You could take this one step further and take advantage of 64bit-to-128bit multiplication to do it all in base 1018 for another 4x reduction in multiplications, but the resulting 128bit partial products are more complicated to deal with. I don't think you should do it. There are various more complex algorithms for big-integer multiplication that are worth using when numbers have about the size that you're already dealing with (over a thousand decimal digits), but they apply only when both inputs are sufficiently big, which is not the case when calculating a factorial. note 1: but that factor is lower when calculating a factorial, since one of the multiplicands is always small in that case.
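The reciprocal-multiplication trick from the answer is easy to verify exhaustively. Here is a Python sketch of the arithmetic the two-instruction sequence performs (the function name is mine):

```python
def div10(x):
    """Mirror of `imul edx, eax, 0CDh` / `shr edx, 11`, plus the remainder
    recovered via a multiply by -10 and an add, as the answer suggests."""
    q = (x * 0xCD) >> 11   # 0xCD = 205, approximately (1/10) * 2^11
    r = x - 10 * q
    return q, r

# Correct on 0..91, the range the answer cites for a digit product plus carry ...
assert all(div10(x) == divmod(x, 10) for x in range(92))
# ... and in fact on every input below 0x405, the first failing value.
assert all(div10(x) == divmod(x, 10) for x in range(0x405))
assert div10(0x405) != divmod(0x405, 10)
```

The failure at 0x405 = 1029 happens because 205/2048 slightly exceeds 1/10, and 1029 is the first input where the accumulated excess pushes the shifted product past the true quotient.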
{ "domain": "codereview.stackexchange", "id": 43997, "tags": "performance, linux, assembly, x86, bigint" }
Problem involving 1st law of thermo and ideal gas law
Question: Problem: $1.0 \text{ kg}$ of air at pressure $10^6 \text{ Pa}$ and temperature $398 \text{ K}$ expands to a five times greater volume. The expansion occurs such that in every instance the added heat is a quarter of the work done by the gas. Calculate the final pressure. $1 \text{ kmol}$ has a mass of $29 \text{ kg}$ and $C_V = \frac 52 R$. I solved the problem but I have some questions regarding the approaches I made. I interpreted heat being a quarter of the work done every instance as $\text{d}Q = \frac 14 \text{d}W$. By the 1st law of thermodynamics ($Q = W + \Delta U$), this yields $-\frac 34 \text{d}W = \text{d}U$. But for molar heat capacity we know that $\text{d}U = nC_V\text{d}T$, hence $-\frac 34 \text{d}W = nC_V\text{d}T$. By definition, $\text{d}W = p\text{d}V$ and therefore $-\frac 34 p\text{d}V = nC_V\text{d}T$. We treat the gas as ideal and thus we can substitute $\frac {nRT}V$ for $p$, arriving at $-\frac 34 \frac{nRT}V \text{d}V = nC_V\text{d}T$. Separation of variables yields $-\frac 34 \frac{nR}V \text{d}V = \frac {nC_V}T \text{d}T$. We take the definite integrals $ \displaystyle \int_{V_1}^{5V_1} \!\!\!\!\! -\frac 34 \frac{nR}V \text{d}V = \int_{398}^{T_2} \frac {nC_V}T \text{d}T$ and can then finally solve for $T_2$ (it turns out to be approximately $245 \text{ K}$). At last, we can determine the final pressure from $\displaystyle \frac {p_1V_1}{T_1} = \frac {p_2V_2}{T_2}$. (1) My main question is whether you guys know of a different way to solve this problem not involving having to solve differential equations? (2) I've seen the 1st law written as $\text{d}U = \text{d}Q + \text{d}W$ but if we use that expression we definitely won't get the right answer; what's up with the inconsistency? (3) I saw no other option than having to use the molar heat capacity equation $\text{d}U = nC_V\text{d}T$ but doesn't this assume constant volume? In our problem the volume is definitely changing, so is it not contradictory to use that very equation? 
Answer: (1) The first law is written in the form of differentials itself, so I think there may be no escape from using differential equations. (2) The way the first law is most commonly written is $dU=dQ-dW_{\text{work done by the system}}$. Here $dW$ is the work done by the system. However, in subjects other than physics, the more important quantity is the work done by the experimenter. (This is quite common in chemistry.) As the processes in thermodynamics are mostly "quasistatic" (http://en.wikipedia.org/wiki/Quasistatic_process), the container/piston is always in equilibrium. So, $\vec{F_{ext}}=-\vec{F_{int}}$ (they are equal and opposite), and then we have $dW_{\text{by the system}}=-dW_{\text{on the system}}$. And so the first law can be written as: $dU=dQ+dW_{\text{work done on the system}}$. Here $dW$ is the work done on the system. As the two $dW$s have different meanings, we won't get a different answer. (3) The internal energy of a gas is a state variable/state function (http://en.wikipedia.org/wiki/Functions_of_state). $U$ depends only on the final and initial states of the system and not on what process was used to get from the initial to the final state. So we can use a constant-volume process to get $dU$, which then can be used in any process without any modification.
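The integration in the question can be checked numerically. A minimal Python sketch (variable names mine), carrying out $\ln(T_2/T_1) = -\frac{3R}{4C_V}\ln 5$ and then the combined gas law:

```python
R = 8.314                 # J mol^-1 K^-1 (cancels out of the exponent)
T1, p1 = 398.0, 1.0e6
Cv = 2.5 * R
ratio = 5.0               # V2 / V1

# Integrating  -(3/4)(nR/V) dV = (n Cv / T) dT  from (V1, T1) to (5 V1, T2):
# ln(T2/T1) = -(3R)/(4 Cv) * ln(5), i.e. T2 = T1 * 5**(-0.3)
T2 = T1 * ratio ** (-3.0 * R / (4.0 * Cv))
p2 = p1 * (T2 / T1) / ratio   # from p1 V1 / T1 = p2 V2 / T2

print(round(T2, 1), round(p2))  # about 245.6 K and about 1.23e5 Pa
```

This reproduces the "approximately 245 K" quoted in the question and gives a final pressure of roughly $1.23 \times 10^5$ Pa.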
{ "domain": "physics.stackexchange", "id": 10082, "tags": "thermodynamics, ideal-gas, differential-equations" }
Reverse contraposition
Question: While it is trivial to prove contraposition ∀ A B: Prop, (A → B) → (~B → ~A) using Coq, is it equally trivial to prove the reversed form: ∀ A B: Prop, (~A → ~B) → (B → A) ? In particular, is it doable without using any additional axioms, e.g. from classical logic? Answer: It is not provable without additional axioms. In fact, it implies double negation elimination (take $B=\top$), which in turn is equivalent to the excluded middle.
{ "domain": "cstheory.stackexchange", "id": 2723, "tags": "coq" }
If the path integral formulation includes future events, why doesn't that imply retrocausality?
Question: I know that such events would cancel out in the math, but if an extreme event were to happen in the future (say a black hole forming or something on that par), would a particle in the present react to it? If not, why? Answer: The path integral is a broad idea, which comes in several different flavors. In non-relativistic quantum mechanics for one particle, you calculate the propagator matrix element $\langle {\bf x}_f| U(t_f,t_i) | {\bf x}_i \rangle$ by summing $\exp(i\, S[{\bf x}(t)]\, /\hbar)$ over paths connecting $(t_i, {\bf x}_i)$ to $(t_f, {\bf x}_f)$ which travel forward in time, so there's clearly no retrocausality. In relativistic quantum field theory, you typically use the LSZ formalism, which involves integrating $\exp(i S[\varphi(x)] / \hbar)$ with $S[\varphi(x)] := \int_{t_i}^{t_f} dt\, \int d^3 {\bf x}\, \mathcal{L}(\varphi(x), \partial_\mu \varphi(x); x)$ in the limit $t_i \to -\infty$ and $t_f \to +\infty$, so that the incoming and outgoing particles are thought of as far-separated and asymptotically noninteracting. (In practice, this limit is carried out through the use of the "$i\epsilon$ trick" in the denominator of the propagator, which sets the boundary conditions.) In neither case are paths that extend later than $t_f$ considered, so there is no retrocausality. (You do find acausal correlations across spacelike separations, but they cannot transmit acausal influences.)
{ "domain": "physics.stackexchange", "id": 51720, "tags": "quantum-mechanics, path-integral, boundary-conditions, causality" }
Why does sugar dissolve faster in hot water compared to cold water?
Question: Why does sugar dissolve faster in hot water compared to cold water? Answer: I am going to have to respectfully disagree with Pranjal Rana. The reason that sugar dissolves more quickly in hot water than in cold water has to do with the kinetic energy of the particles. The faster (hotter) the water molecules are moving, the more energy they give to the sugar molecules when they collide with them. Think about it this way: a sugar crystal is just a lot of sugar molecules stuck together. To dissolve the crystal, you need to hit the molecules with enough force to break them off of the rest of the crystal.
{ "domain": "physics.stackexchange", "id": 42097, "tags": "thermodynamics, water, physical-chemistry, states-of-matter" }
ekf_localization control command
Question: Hi! I just saw that it is now possible to add a control command to the filter, but looking at the code, this control applies to the accelerations directly, and not x(t + 1) = Ax(t) + Bu(t). What model is used in the filter? The command of my robot is linear velocity and angular velocity; how do I do it? Thanks. Originally posted by Porti77 on ROS Answers with karma: 183 on 2016-06-16 Post score: 0 Answer: I think the control_config can control the input of your integration, and I think its order is Vx, Vy, Vz, VRoll, Vpitch, Vyaw. Originally posted by asimay_y with karma: 255 on 2016-06-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24959, "tags": "ros, navigation, sensor-fusion, robot-localization, ekf-localization" }
Sinc function's energy
Question: Given this signal: $$4000\,\mbox{sinc}(4000t)+D1000\,\mbox{sinc}(1000t)$$ with $ D=6000$, find its energy. To find the energy of sinc I use this transform: $$A\,\mbox{sinc}(Bt) \rightarrow A\,\mbox{rect}(f/B)/B$$ and compute the energy $E$ using: $$E=\frac{A^2}{B}$$ In this case what is the right answer? A) $4000^2/4000+D^21000^2/10000$ or B) $1/4000+A^2/1000$ Answer: Neither A nor B. Hint: the Fourier transforms of the two sinc function can be added in the frequency domain due to the Fourier transform linearity property, but the energies do not since the spectrums overlap. Note that the energy formula comes from computing the product of the square of the frequency domain amplitude by the frequency range of a rectangular pulse. Applying your formula, the amplitude for a single $\mbox{sinc}$ is given by $A/B$ and covers a range of $B$, yielding an energy of $(A/B)^2 B = A^2/B$. The amplitude of the first rectangular function (corresponding to the $4000\mbox{sinc}(4000t)$ function in the time-domain) in the frequency-domain is $1$, and covers frequencies in the range $[-2000,+2000]$. Similarly, the amplitude of the second rectangular function (corresponding to the $D 1000\mbox{sinc}(1000t)$ function in the time-domain) in the frequency-domain is $D$ and covers frequencies in the range $[-500,+500]$. Now, computing the energy of more than one $\mbox{sinc}$ is a little trickier... You should realize that there is an overlap for frequencies in $[-500,+500]$, where the total amplitude is $1+D$. The energy in that range is thus $1000 (1+D)^2$. For the rest of the ${[-2000,-500], [+500,+2000]}$ frequencies, there is no overlap and the amplitude is that of the first rectangle function, i.e. $1$. The corresponding energy there is thus $3000$. This then gives you a total energy of $3000 + 1000(1+D)^2$. Substituting $D=6000$ gives $36,012,004,000$.
{ "domain": "dsp.stackexchange", "id": 5008, "tags": "signal-energy" }
Low temperature PCR
Question: We're trying to do emulsion PCR using HA-coated polystyrene beads and we're noticing that the beads suffer drastic thermal degradation above 90 °C. As PCR unfortunately requires a high temperature, I was curious what options are out there to reduce the temperature during PCR denaturation or to stabilize the beads. Answer: Perhaps isothermal amplification is possible (NASBA)? Amplification of DNA also seems possible (NASBA at biomerieux).
{ "domain": "biology.stackexchange", "id": 7781, "tags": "pcr, protocol" }
Why is it seemingly easier to resume torrent downloads than browser downloads?
Question: I really wonder how torrent downloads can be resumed at a later point in time. If such a technology exists, then why is it not possible in browsers? It is often not possible to pause a browser download so that it can be resumed at a later point in time. Often, the download will start again from the beginning. But in the case of a torrent download, you can resume anytime. One reason I could think of is that a browser makes an HTTP connection to the server which contains the file, and when this connection breaks, there is no data regarding how much of the file was saved, so no resume is possible. Is there a fundamental reason why torrent downloads are easier to resume than web downloads? Answer: The bittorrent protocol was designed to transfer large files out-of-order. It divides files into chunks (pieces in bittorrent terminology), and maintains a map of which participant holds which chunks. One of the elementary commands is for one participant to request a chunk from another participant. If a client crashes or disconnects, it can check which chunks it has already downloaded (the base data includes a cryptographic checksum for each chunk) and request only chunks that it does not already have. I think bittorrent includes a command to request part of a chunk, too, but if worst comes to worst only chunks that have not been fully downloaded need to be re-requested. The HTTP protocol was designed to transfer mainly small files and to be simple to implement. Its most basic command is to download one file with a minimum of fuss. A simple server may only understand one command, to download a file in full. Hence, if the download is interrupted, there is no choice but to download the whole file again. There is a way for a client to request only part of a file (with the Range: header). Not all servers implement it (because it is not a fundamental feature of HTTP). 
Web browsers typically don't bother with it (because they are primarily designed to download small files: web pages), but all download managers support it (because they are designed to load large files) and will use it if the server accepts.
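The bookkeeping contrast can be sketched in a few lines of Python (the function names, chunk size, and numbers are mine, purely illustrative — neither protocol library is involved):

```python
def range_header(bytes_on_disk):
    """HTTP-style resume: one request for everything after what we have."""
    return f"bytes={bytes_on_disk}-"

def missing_chunks(have, total_size, chunk_size):
    """BitTorrent-style resume: re-request only the chunks we don't hold."""
    n_chunks = -(-total_size // chunk_size)   # ceiling division
    return [i for i in range(n_chunks) if i not in have]

# HTTP: 700 bytes already saved -> ask the server for byte 700 onward
# (only works if the server implements the Range header).
assert range_header(700) == "bytes=700-"

# BitTorrent: a 10-chunk file where chunks 0-3 and 7 survived a crash;
# the chunk map tells us exactly what to fetch, in any order.
assert missing_chunks({0, 1, 2, 3, 7}, total_size=2500, chunk_size=256) == [4, 5, 6, 8, 9]
```

The key design difference is visible here: HTTP resume needs only a byte offset but depends on optional server support, while the torrent client's per-chunk map (plus per-chunk checksums) makes resuming a purely local decision.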
{ "domain": "cs.stackexchange", "id": 280, "tags": "computer-networks, communication-protocols" }
How do I find the equivalent resistance between positions A,B, C and D in this diagram?
Question: So I am doing a prelab assignment where I need to calculate the equivalent resistance between two assigned points. Here is my attempt so far: The thing that is troubling me is the points C and D, where the resistance R4 lies. I don't understand how to take this resistance into account in the circuit: Is it excluded from the circuit since no current passes through it? I'm really not sure about this. Can someone explain for me what happens for each pair of points (ex: A and B, A and C, etc.)? I've made an attempt at doing it but I am totally confused because I have never seen a circuit where the resistance is out of the loop.
{ "domain": "physics.stackexchange", "id": 46553, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, home-experiment" }
libstatistics_collector: Unknown CMake command "rosidl_get_typesupport_target" (galactic)
Question: Hello everyone, I am trying to build ROS2 from source following the galactic link. While building, I am getting the below error, Starting >>> shape_msgs --- stderr: libstatistics_collector CMake Error at CMakeLists.txt:100 (rosidl_get_typesupport_target): Unknown CMake command "rosidl_get_typesupport_target". make: *** [Makefile:1130: cmake_check_build_system] Error 1 --- Failed <<< libstatistics_collector [5.11s, exited with code 2] Aborted <<< shape_msgs [1.76s] Aborted <<< rclpy [2.80s] Aborted <<< map_msgs [4.19s] Summary: 176 packages finished [1min 34s] 1 package failed: libstatistics_collector 3 packages aborted: map_msgs rclpy shape_msgs 1 package had stderr output: libstatistics_collector 150 packages not processed I tried to comment it in "src/ros-tooling/libstatistics_collector/CMakeLists.txt" file, but it leads to some other failure. Need some help here. Thanks in advance. Originally posted by baila on ROS Answers with karma: 32 on 2021-08-07 Post score: 0 Answer: This is probably the same cause as is fixed by ros2/ros2#1166. Try changing the active branch for ros-tooling/libstatistics_collector to galactic. It's set to master here, but that isn't correct any more since ros-tooling/libstatistics_collector#116. I tried to comment it in "src/ros-tooling/libstatistics_collector/CMakeLists.txt" file, but it leads to some other failure. yes, don't do that. That function call is there for a reason. Originally posted by gvdhoorn with karma: 86574 on 2021-08-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by baila on 2021-08-07: Thanks a lot for the solution. It fixed the build error. :) Comment by mnokka on 2021-10-14: Thank you! I had similar build error; used ROS2 Foxy, but ROS2 tutorial's branch was rolling, when corrected the branch as suggested (foxy-devel in my case) this "rosidl_get_typesupport_target" build error disappeared.
{ "domain": "robotics.stackexchange", "id": 36779, "tags": "ros2" }
Does a metal being "soft" equal being able to scrape onto paper?
Question: When you scrape aluminum onto paper, it makes a faint mark. The question is, does softness of a metal correlate to how well a mark shows up when you scrape it onto a surface such as paper? Answer: «How well a mark shows up» is an ambiguous description. Yet, Does softness of a metal correlate to how easy material abrades? If you refer to Mohs' scale, yes. Modern test kits contain pens with synthetic alloys to offer reproducible access to Mohs' levels of hardness (credit, commercial source) like it is mostly known in art for silver, the silverpoint. Though similar to silver in Mohs' scale, I didn't hear (so far) about an aluminium point. It equally is one of the techniques in geology, to compare the colour of the mineral as such with the colour of the streak of this mineral e.g., on a white tile you carry with you in the field; the two needn't be equal. Is the trace of the material visually discernible from the paper? This depends on the paper. Paper often is sold bleached and containing fillers (e.g., $\ce{CaCO3}$, $\ce{TiO2}$) for a white appearance. Again returning to art, your paper may be darker from the get-go, then allowing to use the technique known as trois crayons to introduce heights and lows with pencils used: (example of a trois crayons, credit Wikipedia) Does the trace adhere well to the paper? Like e.g., for charcoal, pen, chalk, in part this depends on the roughness of the paper if small particles enter grooves of the substrate. Often, the adhesion to the substrate is enhanced after drawing by applying a coating called fixative. (Hair-fixing spray sometimes is a cheaper alternative.) In case of aluminum, you may bet that its metallic reflectance stays; on surface, it will oxidize to transparent, colourless $\ce{Al2O3}$. In case of e.g., silver, this is going to get dark (think $\ce{Ag2S}$).
{ "domain": "chemistry.stackexchange", "id": 16361, "tags": "physical-chemistry" }
Shouldn't General Relativity Predict a Maximum Temperature?
Question: I've seen a lot of questions about maximum temperature and “absolute hot” — several ask if special relativity places any limits on temperature (clearly not). (Also this discussion of absolute hot on a NOVA blog post.) But I haven't seen general relativity addressed in any of these discussions — shouldn't there be a point where increasing the temperature of a given system will cause it to exceed some critical threshold of energy density and consequently cause it to collapse into a black hole? And wouldn't that bound the upper limit of temperature? Answer: In principle yes, though the situation isn't as clear cut as you describe. If you could confine a volume of matter within some volume then gradually heat it by adding energy to it then at some point the total energy density would exceed the density required to form a black hole and at that point the matter would start to collapse into a black hole. However the density of a black hole depends on its size, so the maximum temperature would be dependent on the size of your assemblage of matter. I suppose you could argue that the smallest volume measurable would be a Planck volume, and you could base your calculation on this. However at such fantastically high energy densities it isn't obvious that temperature has much meaning.
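To put rough numbers on the size dependence the answer describes: a sphere of radius $R$ collapses once its mass reaches the Schwarzschild bound $M = Rc^2/2G$, which corresponds to a mean density $\rho = 3c^2/(8\pi G R^2)$. A small sketch of that order-of-magnitude estimate (purely illustrative; this is a flat-space bookkeeping exercise, not a GR calculation):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def critical_density(radius_m):
    """Mean mass density at which a sphere of the given radius
    equals its own Schwarzschild radius (order of magnitude only)."""
    return 3 * c**2 / (8 * math.pi * G * radius_m**2)

# The smaller the confined region, the higher the density it can hold
for r in (1.0, 1e-3, 1.6e-35):   # 1 m, 1 mm, ~Planck length
    print(f"R = {r:.1e} m -> rho_crit ~ {critical_density(r):.2e} kg/m^3")
```

The quadratic growth as $R$ shrinks is exactly why the "maximum temperature" depends on the size of the assemblage of matter, as the answer notes.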
{ "domain": "physics.stackexchange", "id": 28108, "tags": "general-relativity, temperature" }
Problems installing hydro from source
Question: Hi, I can successfully install the catkin (wet?) packages using the install from source on Ubuntu instructions found on the ROS website. However, I seem to still be missing packages that are installed when I install from binary. Looking at the Wiki for installing from source, it mentions "To utilize the things installed there simply source that file. Lets do that now before building the rest of ROS:". However, it never mentions how to build the rest of ROS. So I figured, let's look at the source install documentation from Groovy. It is very similar, but has how to build the ROS (dry) packages. However, when I try to follow those instructions, substituting hydro for groovy, it tells me that there are no dry packages for Hydro. I still don't understand why these files are not there when I compile from source, unless possibly they aren't actually part of ROS. They (apt-get packages) get installed as part of the dependency process when installing the HRATC2014 framework. See below: dime@hratc4:~$ dpkg -S /opt/ros/hydro/lib/libhusky_gazebo_plugins.so /opt/ros/hydro/lib/libcontroller_manager.so ros-hydro-husky-gazebo-plugins: /opt/ros/hydro/lib/libhusky_gazebo_plugins.so ros-hydro-controller-manager: /opt/ros/hydro/lib/libcontroller_manager.so These don't get installed as part of the source install, so do I need to look elsewhere for their source? Edit: You are correct, and this is the web page we are following. The problem is, currently robot_pose_ekf does not integrate GPS data. However, clearpath robotics has code (navigation stack) with a modified robot_pose_ekf that does integrate GPS data. Please Google "clearpath robotics robot_pose_ekf" (I can't post links yet due to karma). At the bottom it says "where to get the code", and you can see the github directory where we're getting the modified navigation stack. Note, that this is the entire navigation stack, not just robot_pose_ekf. 
Being new to the ROS ecosystem, we just assumed we could replace the navigation directory in the ROS source, build (which we did successfully) and run it. However, we still need extra packages provided by the HRATC2014 competition, and weren't able to integrate them with our compiled version -- this is where we are hung up...the ROS desktop-full is installed (along with needed .so files from HRATC2014) in /opt/ros. My custom build of ROS with the modified navigation stack is in home_dir/catkin_workspace1/install_isolated. Should I be doing the navigation source build in an "overlay" workspace? And by all means, if I'm simply asking something that's documented, just point me in the right direction, because I haven't been able to find it. Thanks for all your help. Originally posted by dime on ROS Answers with karma: 41 on 2014-04-02 Post score: 1 Answer: There seems to be some misconception about what is "ROS". When you installed ROS, probably you chose the desktop_full variant. This gives you maybe around 250 packages. However there are over 1000 packages released for ROS hydro. Just because they don't get installed with any of the core variants (base, robot, desktop, desktop_full, etc) does not mean they are not "part of ROS". As you noticed, these additional ROS packages seem not to be part of the packages needed for the "HRATC2014 framework" (I don't know what that is). You can of course also install those from source. rosinstall_generator is your friend here. I guess the question is really what you are trying to achieve. Usually it does not make sense to install everything from source on Ubuntu. It is much more economical to install almost everything with apt-get, and then create an overlay workspace with just the packages you want to modify and thus compile from source (including any packages you need that depend on the ones you modify). Again, rosdep, rosinstall_generator and wstool are your friends for this. Edit: In response to your comment.
It is probably best to install the core ROS system from APT, and then check out only the packages you need to customize from source in an overlay. In catkin, you should not manually modify environment variables like ROS_PACKAGE_PATH / LD_LIBRARY_PATH, but rather make sure you have sourced your underlay (the core install in /opt/ros/hydro/setup.bash) before invoking catkin_make on your overlay with the source packages for the first time. Even if you want to build everything from source, I would still advise installing the core packages in an underlay workspace, then creating a second overlay workspace for development. Otherwise configuration/compilation will be slow and you cannot use catkin_make due to some non-catkin packages. You should follow the tutorial at http://www2.isr.uc.pt/~embedded/events/HRATC2014/Simulator.html. Which packages on top of the three in the hratc2014_framework.rosinstall do you want to install from source exactly? Edit 2: In response to your update, you have multiple possibilities. I am assuming that you want the hratc 2014 packages on top of a ROS install where only the navigation stack is replaced. Option 1 The first option is to create a workspace with the hratc packages and the modified navigation stack. # set up workspace with hratc and custom navigation mkdir ~/hratc2014_workspace cd ~/hratc2014_workspace wstool init src https://raw.github.com/ras-sight/hratc2014_framework/master/hratc2014_framework.rosinstall -j8 wstool set navigation https://github.com/clearpathrobotics/navigation.git --git -v hydro-devel -t src wstool update -t src # install dependencies from debian rosdep install --from-path src -i --rosdistro hydro # source the underlay and compile the workspace . /opt/ros/hydro/setup.bash catkin_make This is the most convenient option, but there are some subtle pitfalls with this.
For example, anything you install from debians that depends on the navigation stack was compiled against the released packages, but you have now "shadowed" those with your custom ones. If the ABI is different, you might run into trouble. However this case might not arise for you. Option 2 The other way is to create a custom ROS install from source (desktop_full or whatever) where you replace the navigation stack with the version from clearpath. This is cleaner, because now you are not "shadowing" one version of the navigation stack with a custom one. You use this workspace as the underlay for your workspace with the hratc2014 packages. In order to add additional packages to the underlay (to fulfil all dependencies of the hratc2014 packages), you can include additional packages in the underlay with wstool and rosinstall_generator. Initialize the underlay with the desktop_full metapackage and our custom navigation mkdir ~/ros_ws cd ~/ros_ws wstool init src wstool set navigation https://github.com/clearpathrobotics/navigation.git --git -v hydro-devel -t src wstool update -t src -j8 Get a .rosinstall file for all dependencies of the navigation stack and all of ros in the desktop_full variant. CAVEAT: This only works if the clearpath fork of the navigation stack has the same dependencies as the released version. rosinstall_generator navigation desktop_full --deps --rosdistro hydro --deps-only --tar > deps.rosinstall Merge the deps into the workspace, fetch all sources and build everything. wstool merge deps.rosinstall -t src wstool update -t src -j8 ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release Prepare the overlay workspace with the hratc sources.
mkdir ~/hratc2014_ws cd ~/hratc2014_ws wstool init src https://raw.github.com/ras-sight/hratc2014_framework/master/hratc2014_framework.rosinstall -j8 We cannot use rosdep to find missing dependencies, and rosinstall_generator does not have support for the same kind of --from-paths option (see https://github.com/ros-infrastructure/rosinstall_generator/issues/25), so a little bit of manual work is needed. First source the underlay, then let rosdep tell us what is not there already. source ~/ros_ws/install_isolated/setup.bash rosdep check --from-paths src -i For me this gives the following output (on OS X): System dependencies have not been satisified: homebrew ros-hydro-husky-navigation homebrew ros-hydro-ros-control homebrew ros-hydro-husky-gazebo-plugins homebrew ros-hydro-ros-controllers homebrew ros-hydro-gazebo-ros-control homebrew ros-hydro-hector-gazebo-plugins homebrew ros-hydro-gazebo-plugins homebrew ros-hydro-gazebo-ros From that output you can deduce the following command to get a .rosinstall file for all missing dependencies. Take care to change - in the package names to _. rosinstall_generator husky_navigation ros_control husky_gazebo_plugins ros_controllers gazebo_ros_control hector_gazebo_plugins gazebo_plugins gazebo_ros --deps --rosdistro hydro --tar --exclude RPP > hratc-deps.rosinstall Now go back to the underlay and add the additional packages. cd ~/ros_ws wstool merge ~/hratc2014_ws/hratc-deps.rosinstall -t src wstool update -t src -j8 ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release Finally we are ready to build the overlay cd ~/hratc2014_ws catkin_make Originally posted by demmeln with karma: 4306 on 2014-04-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by dime on 2014-04-03: I pulled the source for ROS because we needed to include a third-party modification to parts of the navigation stack (and it was only available in source).
Then, I need to also use some dependencies that get installed when I use rosdep on the catkin workspace provided by the competition (HRATC2014). However, rosdep installs these in /opt/ros/hydro. My catkin install complained about mixed package types so we have to use catkin_make_isolated, which installed it into my home directory. I haven't been able to figure out how to make my local, source-compiled ROS know where the other libraries are. I tried ROS_PACKAGE_PATH / LD_LIBRARY_PATH, etc., but there's some concept I'm missing here. It still doesn't seem to integrate well (i.e. controller_manager/spawn -- one of the dependencies, is never found). I'd have thought there'd just be an environment variable that we could set equal to /opt/ros/hydro or /opt/ros/hydro/lib and it would just pick up the other libraries.
{ "domain": "robotics.stackexchange", "id": 17509, "tags": "ros, husky, build-from-source" }
Machine Learning on financial big data
Question: Disclaimer: although I know some things about big data and am currently learning some other things about machine learning, the specific area that I wish to study is vague, or at least appears vague to me now. I'll do my best to describe it, but this question could still be categorised as too vague or not really a question. Hopefully, I'll be able to reword it more precisely once I get a reaction. So, I have some experience with Hadoop and the Hadoop stack (gained via using CDH), and I'm reading a book about Mahout, which is a collection of machine learning libraries. I also think I know enough statistics to be able to comprehend the math behind the machine learning algorithms, and I have some experience with R. My ultimate goal is making a setup that would make trading predictions and deal with financial data in real time. I wonder if there're any materials that I can further read to help me understand ways of managing that problem; books, video tutorials and exercises with example datasets are all welcome. Answer: There are tons of materials on financial (big) data analysis that you can read and peruse. I'm not an expert in finance, but am curious about the field, especially in the context of data science and R. Therefore, the following are selected relevant resource suggestions that I have for you. I hope that they will be useful. Books: Financial analysis (general / non-R) Statistics and Finance: An Introduction; Statistical Models and Methods for Financial Markets. Books: Machine Learning in Finance Machine Learning for Financial Engineering (!) - seems to be an edited collection of papers; Neural Networks in Finance: Gaining Predictive Edge in the Market. Books: Financial analysis with R Statistical Analysis of Financial Data in R; Statistics and Data Analysis for Financial Engineering; Financial Risk Modelling and Portfolio Optimization with R Statistics of Financial Markets: An Introduction (code in R and MATLAB). 
Academic Journals Algorithmic Finance (open access) Web sites RMetrics Quantitative Finance on StackExchange R Packages the above-mentioned RMetrics site (see this page for general description); CRAN Task Views, including Finance, Econometrics and several other Task Views. Competitions MODELOFF (The Financial Modeling World Championships) Educational Programs MS in Financial Engineering - Columbia University; Computational Finance - Hong Kong University. Blogs (Finance/R) Timely Portfolio; Systematic Investor; Money-making Mankind.
{ "domain": "datascience.stackexchange", "id": 221, "tags": "machine-learning, bigdata, finance" }
How are poisons discovered? Does someone have to die/be poisoned from it first?
Question: How are poisons discovered? Does someone have to die/be poisoned from it first? Or are there other ways of discovering the harmfulness of a substance? Perhaps everything is tested on other animals prior to testing them on humans? Answer: Alle Dinge sind Gift, und nichts ist ohne Gift, allein die Dosis macht dass ein Ding kein Gift ist (The dose makes the poison) - Paracelsus Poisons (I'm going to use this as an umbrella term for "toxins" and "venom" as well. Bear in mind though, they are not the same thing) have been known since antiquity. Back in the good old days, you figured out if something was poisonous or not by eating/touching it (or getting someone else to do it), i.e- "discovering" a poison was simply a matter of chance. These chance encounters alone, led to the discovery of numerous poisons. With the advent of Chemistry, the gents in white lab-coats figured out that compounds that bear resemblance to the already well-known poisons, in terms of their functional groups and structures, are also toxic (albeit, to vastly different degrees). Poisons could now be identified a priori (you could tell it would end badly if you were exposed to such a substance, but not how badly). And no, you can't really quantify a poison's effects without testing it on something. Nor can you tell how much of something would be needed to kill or severely maim. This is where we bring in the idea of a median lethal dose $\pu{LD_{50}}$ which is, simply put, the minimum amount of a substance required to kill off half of all the animals in a particular test group. $\pu{LD_{50}}$ values for a particular substance depend on the animal used. Of course, the only way to get an accurate $\pu{LD_{50}}$ (which itself, is really a "mean/average" value of sorts) for a human is to actually poison someone, which doesn't sound very nice. 
So you do the next best thing: you measure it for a rat/chimp, couple it with your knowledge of the poison's mechanism of action, and extrapolate the value to something that would kill a person. Another way to establish degree of toxicity, one which doesn't involve killing animals, would be to expose a cell/tissue culture (rather than the entire animal) to the potential poison.
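The crude extrapolation step mentioned above can be sketched numerically. The snippet below is a toy illustration only: it scales a per-kilogram animal LD50 linearly by body mass, and both the 200 mg/kg figure and the linear scaling are invented assumptions, not real toxicology (real extrapolation also weighs mechanism of action and species differences):

```python
def naive_human_dose_mg(ld50_mg_per_kg, human_mass_kg=70.0):
    """Naively scale an animal LD50 (mg per kg body weight) to a total
    human dose by body mass alone -- a crude first approximation."""
    return ld50_mg_per_kg * human_mass_kg

# Hypothetical rat LD50 of 200 mg/kg -> estimated total dose for a 70 kg person
print(naive_human_dose_mg(200))  # -> 14000.0 mg, i.e. ~14 g
```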
{ "domain": "chemistry.stackexchange", "id": 9490, "tags": "everyday-chemistry, experimental-chemistry, toxicity, chemical-biology" }
Scan message must contain angles from -x to x
Question: I follow the tutorial turtlebot_simulator to make a map in a simulated world. But when I run roslaunch turtlebot_gazebo gmapping_demo.launch an error that the scan message must contain angles from -x to x occurs, as shown below. How do I deal with it? [ERROR] [1433813576.466253926, 24.270000000]: Scan message must contain angles from -x to x, i.e. angle_min = -angle_max Originally posted by Leejamin on ROS Answers with karma: 51 on 2015-06-08 Post score: 4 Original comments Comment by tosandip on 2015-07-10: I am facing the same issue, any possible solution? Comment by forinkzan on 2015-07-14: me too , do you have any possible solution Comment by tosandip on 2015-07-15: No i have not found any.. starting looking at jackal robot. Comment by chuong on 2015-07-15: I have the same problem. This is the new behaviour in the latest ros-indigo-gmapping-1.3.7 package. CHANGELOG.rst says: .... Added cfloat include Change arbitrary constant to FLT_EPSILON Added check that scan goes from -x to x ....... Answer: Hi all, thank you for sharing the data. A fix was merged upstream in 1.3.8: it just reached shadow fixed so please test. The fix centers the scans before sending them to gmapping, whether they're slightly / completely off-centered or even upside-down. Originally posted by Vincent Rabaud with karma: 1111 on 2015-07-31 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 21865, "tags": "navigation, turtlebot, message, scan, gmapping" }
Proving that the 3-current density corresponding to the global phase invariance vanishes at infinity
Question: The components $j^i$ of the 3-current density $\textbf{j}$ corresponding to the global phase invariance of the action of a complex scalar field $\phi$ i.e., $\phi\to e^{-iq\theta}\phi$ is given by $$j^i=iq[(\partial^i\phi)\phi^*-(\partial^i\phi^*)\phi].\tag{1}$$ Each component of the 3-current density must go to zero at spatial infinity for the corresponding Noether's charge $$Q=\int j^0 d^3\textbf{x}$$ to be conserved. So my questions are: Is there a way to prove that $j^i$ as given in equation (1) vanishes at spatial infinity? Even if we assume $j^i$ to be zero at spatial infinity what would it mean physically? Answer: You may have already seen this, but Schwartz says $$\partial_t Q = \int d^3 x \partial_t J_0 = \int d^3 x \vec{\nabla}\cdot \vec J = 0.\tag{3.28}$$ "In the last step we have assumed $\vec J$ vanishes at the spatial boundary, since, by assumption, nothing is leaving our experiment." Without some assumption about $\vec J$ at the spatial boundary, it is not true in general. Edit: To clarify this assumption, Schwartz states in section 3.2 "We will always make the physical assumption that our fields vanish on these asymptotic boundaries" (referring to spatial and temporal infinity).
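The role of the spatial boundary in the quoted step can be made explicit with the divergence theorem (written here with the same sign convention as Schwartz's equation above):

```latex
\partial_t Q \;=\; \int_V d^3x\,\vec{\nabla}\cdot\vec{J}
\;=\; \oint_{\partial V} \vec{J}\cdot d\vec{A},
```

so $\partial_t Q = 0$ whenever $|\vec J|$ falls off faster than $1/r^2$ as the surface $\partial V$ is pushed to spatial infinity. This makes the physical content clear: vanishing of $\vec J$ at the boundary is the statement that no charge flows out of the system; it is a boundary condition imposed on the fields, not something derivable from equation (1) alone.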
{ "domain": "physics.stackexchange", "id": 45646, "tags": "symmetry, field-theory, boundary-conditions, noethers-theorem, classical-field-theory" }
Relative Permittivity in a "Good Conductor"
Question: I have a question in which I need to show that sea water is effectively a "good conductor" when considering the propagation of radio waves of frequency $< 10^9\ \mathrm{Hz}$. We're given that the conductivity of sea water is around $5\ \mathrm{S\,m^{-1}}$ and that it has a refractive index of around $9$. It is my understanding that in order to show that the sea water is a "good conductor", you would need to show that: $\sigma\gg\omega\epsilon_r\epsilon_0$ $\sigma, \omega, \epsilon_0$ are trivial but I am not sure how you would get a value of $\epsilon_r$. Edit: I managed to solve the problem using Gilbert's help. Here's my proof: As $$v=\frac{1}{\sqrt{\mu\epsilon}},$$ $$v=\frac{c}{n},$$ $$\mu_r=1,$$ $$\implies v=\frac{1}{\sqrt{\mu_0\epsilon_r\epsilon_0}}=\frac{c}{n},$$ $$\implies \mu_0\epsilon_r\epsilon_0=\frac{n^2}{c^2},$$ $$\implies \epsilon_r=\frac{n^2}{c^2\mu_0\epsilon_0}.$$ As $\epsilon_0=\frac{1}{\mu_0c^2}$, $$\implies \epsilon_r=n^2.$$ Answer: For non-magnetic materials, $n=\sqrt{\epsilon_r}$.
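With $\epsilon_r = n^2 = 81$ in hand, the good-conductor criterion $\sigma \gg \omega\epsilon_r\epsilon_0$ can be checked numerically at a few frequencies; a quick sketch using the figures from the question:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
SIGMA = 5.0        # conductivity of sea water, S/m (given)
EPS_R = 9**2       # relative permittivity from n = 9

def conductor_ratio(freq_hz):
    """sigma / (omega * eps_r * eps0); values >> 1 mean 'good conductor'."""
    omega = 2 * math.pi * freq_hz
    return SIGMA / (omega * EPS_R * EPS0)

for f in (1e6, 1e8, 1e9):
    print(f"f = {f:.0e} Hz -> ratio ~ {conductor_ratio(f):.1f}")
```

Notably, at $10^9\ \mathrm{Hz}$ the ratio is only about 1, so the approximation is really only comfortable well below that frequency, consistent with the $< 10^9\ \mathrm{Hz}$ condition in the problem statement.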
{ "domain": "physics.stackexchange", "id": 54136, "tags": "electromagnetism, refraction, conductors" }
Examples of molecules with D4 and C9 point groups
Question: Trying to come up with examples of (hypothetical) molecules/shapes that have $D_4$ and $C_{9h}$ point groups. For $D_4$, I thought a cyclobutane structure, but with 4 substituents coming off of the ring all pointing in the same direction. However, I'm not sure if this is $D_4$ because in a book I am reading it says that all the $D_n$ point groups show chirality, which this molecule does not. For $C_{9h}$, I really can't think of any shape/structure that doesn't just end up being $D_{9h}$. $C_9$ would be a 9-bladed propeller shape, although I'm not really sure how to draw this, and I don't see how I could alter that to get $C_{9h}$. Thanks
{ "domain": "chemistry.stackexchange", "id": 6399, "tags": "physical-chemistry, group-theory" }
Bank ATM machine in Ruby
Question: I've made a simple implementation of a bank ATM machine in pure Ruby. I'd like to hear some recommendations to make my code more idiomatic. atm.rb This is the startup file. I didn't use a class and I'm getting the user input using gets.chomp and a case statement for the flow control. require_relative 'account' account = Account.new loop do puts "What you want to do?\n1- Deposit\n2- Withdraw\n3- Check balance\n4- Check statement\n5- Exit" option = gets.chomp case option when '1' puts "How many dollars do you want to deposit?" amount = gets.chomp.to_f account.deposit(amount) when '2' puts "How many dollars do you want to withdraw?" amount = gets.chomp.to_f account.withdraw(amount) when '3' puts "You have #{account.balance} USD." when '4' puts account.statement.join("\n") when '5' break else "Wrong option. Try again." end end account.rb I used an array to store the transactions. To check the balance, I'm iterating through the array using the inject function. require_relative 'transaction' class Account def initialize @transactions = [] end def balance @transactions.inject(0) { |sum, transaction| sum + transaction.value } end def deposit(amount) add_transaction(amount, :deposit) end def withdraw(amount) add_transaction(amount, :withdraw) end def statement @transactions.map { |transaction| transaction.to_s } end private attr_accessor :transactions def add_transaction(amount, type) @transactions << Transaction.new(amount, type) end end transaction.rb This is the transaction class. I don't know if it's ok to validate the constructor parameters the way I did. class Transaction attr_accessor :amount, :type, :timestamp def initialize(amount, type) raise 'Invalid type' unless POSSIBLE_TYPES.include? 
type @amount = amount @type = type @timestamp = Time.now.getutc end def value return @amount if type == :deposit -@amount end def to_s "#{@timestamp} - #{type} - #{amount} USD" end private POSSIBLE_TYPES = [:deposit, :withdraw] end I'm aware that I'm not validating the user input type or value, I'm ok with that. See it running in repl.it. Answer: This is my first review, so please bear with me. I'll start with atm.rb: require_relative 'account' account = Account.new loop do # This is totally opinion, but I think doing this by joining a string array looks neater. # puts "What you want to do?\n1- Deposit\n2- Withdraw\n3- Check balance\n4- Check statement\n5- Exit" puts "What you want to do?" options = [ "1- Deposit", "2- Withdraw", "3- Check balance", "4- Check statement", "5- Exit" ] puts options.join("\n") # These two lines can be condensed into one # option = gets.chomp # case option case gets.chomp when '1' puts "How many dollars do you want to deposit?" # You could merge these lines into account.deposit(gets.chomp.to_f) amount = gets.chomp.to_f account.deposit(amount) when '2' puts "How many dollars do you want to withdraw?" # You could merge these lines into account.withdraw(gets.chomp.to_f) amount = gets.chomp.to_f account.withdraw(amount) when '3' puts "You have #{account.balance} USD." when '4' puts account.statement.join("\n") when '5' break else "Wrong option. Try again." end end This is generally fine, just a couple things that I'd prefer for readability. I have larger ideas for structure for the other two files. For the Transaction class, you really don't need the @type instance variable if it can only be :deposit or :withdraw. You can really just make that information be held in the sign of @amount. If @amount > 0, it's a deposit, and if @amount < 0 it's a withdraw. So, with that in mind, here's my revised version: class Transaction # This was an attr_accessor. These attributes don't need to be editable, only readable.
attr_reader :amount, :timestamp def initialize(amount) @amount = amount.to_f @timestamp = Time.now.getutc end def to_s # @amount is an instance variable, and so I added the "@" before it # You don't really need type here, but if you want it, use the next line: type = @amount > 0 ? "deposit" : "withdraw" "#{@timestamp} - #{type} - #{@amount} USD" end end And now, we move to the final file, account.rb: require_relative 'transaction' class Account def initialize @transactions = [] end def balance # With our modifications to Transaction, we can do this more simply # @transactions.inject(0) { |sum, transaction| sum + transaction.value } @transactions.map(&:amount).sum end # In the next 2 methods, we don't need to pass type anymore. # We can also remove the private add_transaction method, because it doesn't really serve any purpose. If you wanted to modify the transaction creation process, you should do it in Transaction#initialize. # I've also added optional type handling, for fun def deposit(amount) # add_transaction(amount, :deposit) @transactions << Transaction.new(amount) end def withdraw(amount) # add_transaction(amount, :withdraw) @transactions << Transaction.new(-amount) end def statement # This can be simplified with the map(&:to_s) syntax # @transactions.map { |transaction| transaction.to_s } @transactions.map(&:to_s) # You may also want to make the call to #join here, instead of in the main loop, because a statement sounds like it should be a string. end end Edit: You could also get rid of the Transaction#to_s method and instead modify the Account#statement method to be: class Account def statement @transactions.map do |t| type = t.amount > 0 ? "deposit" : "withdraw" "#{t.timestamp} - #{type} - #{t.amount} USD" # Personally, I like the extra line for readability, but # you could also write it as: # "#{t.timestamp} - #{t.amount > 0 ? 'deposit' : 'withdraw'} - #{t.amount} USD" end end end
{ "domain": "codereview.stackexchange", "id": 28693, "tags": "ruby, finance" }
Fractional knapsack in C++11
Question: I am just picking up C++11 and decided to try some algorithmic problems to get better at it. One such problem is the fractional knapsack problem which can be solved by a greedy approach. The idea is to find the most valuable item and fill it in and move on to the next one till the knapsack is filled. Partial items are allowed. I hope this code can be reviewed for: Correctness Correct usage of C++11 features Code logic improvement Coding conventions struct Item { float value; float size; uint32_t index; }; uint32_t FillKnapsack(std::vector<Item> &Items, const uint32_t MaxCap) { // sorting is done based on value/size ratio of the item std::sort(Items.begin(), Items.end(), [](const Item& A, const Item& B) { return (A.value/A.size) > (B.value/B.size); }); float c = 0, v = 0; for (const auto& item: Items) { if (c + item.size <= MaxCap) { c += item.size; v += item.value; } else { v += (MaxCap - c)/item.size * item.value; c = MaxCap; } std::cout << "adding item " << item.index << " with value " << item.value << std::endl; std::cout << "c: " << c << " v: " << v << std::endl; // since the knapsack is filled we can break out. if (c >= MaxCap) break; } return v; } int main() { std::vector<Item> items = { { 10, 30, 0}, { 20, 20, 1}, { 30, 10, 2} }; std::cout << FillKnapsack(items, 35) << std::endl; return 0; } Answer: Return or Output I feel like FillKnapsack should either (a) simply tell you the max value or (b) give you the items that it used. Right now, you're returning (a) and logging, incompletely, (b). (Incomplete because if you use a fractional amount of an item, that's not indicated). I suggest you pick one or the other. If the former, basically just drop the couts. If the latter, you'll want to return something like a: std::vector<std::pair<Item, double>> or: struct ItemWithFraction { Item item; double fraction; }; std::vector<ItemWithFraction> Avoid single-letter variables What is c? What is v? 
Move the break into the branch We have two cases per item: either we can fit the whole item and have room to spare, or we can't. If we can't, then we break. So that can become: for (const auto& item : Items) { if (capacity + item.size < MaxCap) { value += item.value; capacity += item.size; } else { // last little bit left value += (MaxCap - capacity) / item.size * item.value; break; } } return value; Don't need to assign capacity in the else case either. Take items by copy The idea of the algorithm suggests that given this set of items, find me a value. It doesn't suggest to me that you should reorder the items I pass in just because that happens to make it easier to solve the problem. To avoid being destructive, I'd write the signature as: uint32_t FillKnapsack(std::vector<Item> Items, const uint32_t MaxCap) If the user doesn't care, they can always move() the items in and avoid the copy. Naming Typically, UpperCase naming are used for classes (e.g. Item). Functions and variables are typically either camelCase (with the first letter lower) or snake_case. So I'd prefer fillKnapsack(std::vector<Item> items, const uint32_t maxCapacity); and fill_knapsack(std::vector<Item> items, const uint32_t max_capacity); to what you have.
{ "domain": "codereview.stackexchange", "id": 18036, "tags": "c++, algorithm, c++11" }
Divide N sticks among M boys as homogeneously as possible (ignoring order)
Question: There are $N$ sticks. $N$ is an integer greater than zero. I want to divide it among $M$ boys. $M$ is also a positive integer. Partitioning $N$ among $M$ is easy, but doing it as evenly as possible is difficult for me to think through. Can someone suggest an algorithm? Similar questions exist on this site, but I could not find an answer that solves this problem, though it is possible that I overlooked something. EDIT: The sticks are all homogeneous, the same in every respect. What I mean by "as evenly as possible" is that if there are 6 sticks and 3 boys, the division algorithm should output 2-2-2. If there are 5 sticks among 3 boys, it should output 1-2-2. The disparity between the minimum stick holder and maximum stick holder should be minimized. Ordering does not matter (e.g. 1-2-2 is the same as 2-1-2). Answer: # Divide perfectly homogeneously, as large as possible for i = 0 to M array[i] = N / M # integer division, resulting in floor # Divide remainder for i = 0 to N modulus K array[i] += 1
{ "domain": "cs.stackexchange", "id": 15201, "tags": "algorithms, optimization, integer-partitions" }
When does a PDA split?
Question: In case of NFA, if the NFA is in a state and reads $\epsilon$ ( empty string ) the NFA splits in to two, with one being at the current state and other with the state along the $\epsilon$ transition. In case of PDA where transitions are of the type $a,b \to c$, with $a$ being the input alphabet being read, $b$ being the stack element being read and popped and $c$ being the stack element being pushed. In NFA I understood the splitting upon $\epsilon$ as the ability to guess. So I assumed that a PDA in a state $r$ with stack being $S$ splits into two PDA only when the transition is $\epsilon,\epsilon \to \epsilon \\$ ( that is when $a=b=c=\epsilon$ in the figure below the PDA splits into two with one being in state $r$ and other in $s$ with same stack $S$ ). But now I am a bit doubtful, about when does a PDA split. I feel I am wrong and am misunderstanding something trivial. ( The figure below just shows the part of a larger PDA ). How would it affect the power of a PDA if it were only allowed to split on $\epsilon,\epsilon \to \epsilon$ transition? Answer: A PDA (or an NFA) doesn't split. At any given point in time, it non-deterministically chooses a valid next step, if any. If there are several options, you can say that the PDA "splits" if you wish, trying all of them in parallel. A PDA (or an NFA) could have more than one possible move even if it has no $\epsilon$ moves, and conversely, there might be situations in which an $\epsilon$ move is the only possible move of the automaton (can you think of such a situation?). So your identifying $\epsilon$ transitions with "splitting" is wrong.
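One way to see that no literal "splitting" is needed: a simulator can simply track the set of all states the automaton could currently be in, taking ε-closures along the way. A rough Python sketch for the NFA case (all names are illustrative):

```python
def nfa_accepts(transitions, epsilon, start, accepting, word):
    """Subset simulation of an NFA: instead of splitting, keep the set
    of every state reachable so far, including via epsilon moves."""
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in epsilon.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    current = closure({start})
    for symbol in word:
        step = set()
        for s in current:
            step.update(transitions.get((s, symbol), ()))
        current = closure(step)
    return bool(current & accepting)
```

Note that a machine with `transitions = {(0, "a"): {0, 1}, (1, "b"): {2}}` is nondeterministic on the symbol "a" alone, with no ε-transition in sight, which matches the answer's point that ε-moves and nondeterministic choice are independent notions.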
{ "domain": "cs.stackexchange", "id": 5386, "tags": "automata, finite-automata, pushdown-automata, nondeterminism" }
Are penguins plantigrade or digitigrade?
Question: I'm trying to rig a 3D model of a penguin, but I don't know where to put the bones near the ankle because I can't tell if they're digitigrade or plantigrade. Nearly all birds are digitigrade, but penguins spend a lot of time walking and don't generally grasp or run with their talons. Answer: Plantigrade. The penguins are highly specialized for their flightless aquatic existence. The feet are located much farther back than those of other birds, with the result that the bird carries itself mostly upright; its walk can thus be described as plantigrade (i.e., on the soles). The sole comprises the whole foot instead of just the toes, as in other birds. The most notable characteristic of the group is the transformation of the forelimb into a paddle. This is accompanied by a body morphology particularly adapted to movement in a liquid medium. The thoracic (rib) cage is well developed, and the sternum bears a pronounced keel for the attachment of the pectoral muscles, which move the flippers. The flipper has the same skeletal base as the wing of flying birds but with its elements shortened and flattened, producing a relatively rigid limb covered with very short feathers—an ideal organ for rapid propulsion. The body plumage likewise consists of very short feathers, which minimize friction and turbulence. The density of the plumage and the layer of air that it retains provide almost complete insulation of the body. https://www.britannica.com/animal/penguin#ref3467
{ "domain": "biology.stackexchange", "id": 7190, "tags": "zoology" }
What is the first organic compound to be discovered? Urea or alloxan?
Question: It is widely accepted that urea is the first organic compound to be synthesized from inorganic chemicals. Wikipedia states that: In 1828, the German chemist Friedrich Wöhler obtained urea artificially by treating silver cyanate with ammonium chloride. $$\ce{AgNCO + NH4Cl → (H2N)2CO + AgCl}$$ This was the first time an organic compound was artificially synthesized from inorganic starting materials, without the involvement of living organisms. The results of this experiment implicitly discredited vitalism — the theory that the chemicals of living organisms are fundamentally different from those of inanimate matter. But, alloxan (a not-so popular organic compound!) was isolated in 1818 by Brugnatelli (according to Wikipedia) 10 years before the discovery of urea! So, is urea the first ever organic compound to be discovered or is it alloxan? I also found a link which claimed that urea is not the first organic compound to be discovered. Answer: The problem herein lies in the different possible definitions of organic chemistry. The IUPAC defines an organic compound as a structure containing at least one carbon atom that can be named using the principles of organic nomenclature. A less concise definition is All compounds containing carbon and at least one other element that are not carbon monoxide, carbon dioxide or the latter’s inorganic salts. Another one: All compounds that contain at least one $\ce{C-H}$ bond. Originally, the definition meant compounds isolated from organic materials i.e. organisms, living beings. You see that the exact definition of organic compounds is difficult. But it was once very clear, historically: Anything not isolated from organisms or a derivative thereof was inorganic. Note that isolation of organic compounds can be traced extremely far back in history: The ancient Egyptians isolated indigo as a colouring agent. However, they didn’t synthesise it and most attempts pre-19th century failed. 
Nobody questioned the possibility of isolating pure organic compounds from living matter, it was merely questioned whether they could be synthesised without involvement of organisms. Alloxan was synthesised by oxidising uric acid with nitric acid as Wikipedia states. Thus, it was synthesised from an organic compound and would have been considered to contain enough vis vitalis even for further synthetic steps. The compound your link states, 5-amino-3H-1,2,4-dithiazole-5-thione, is not described in any prime source as having been isolated from organisms. In fact, it is unknown to Wikipedia, so with all due skepticism, one can assume it to be of little to no relevance in living beings. Compounds we today would call organic have been synthesised even earlier. Ethylene was first discovered in 1669 and it was successfully chlorinated to give 1,2-dichloroethane in 1794 — way before 1818, 1821 or 1828. Due to its name Dutch oil it is possible that it was considered an inorganic chemical. If it was synthesised from ethene that was, in turn, synthesised from ethanol, that would have just meant it carried over the vis vitalis from the original ethanol. Wöhler’s discovery was that significant, because he used compounds that were (and are) unanimously declared inorganic (silver cyanate and ammonium chloride) and arrived directly at a compound that had previously been isolated from living material namely urea.
{ "domain": "chemistry.stackexchange", "id": 4352, "tags": "organic-chemistry, history-of-chemistry" }
Tidy vs. untidy data
Question: In R for Data Science, the authors explain their idea of tidy data. They give an example for data that is not tidy: #> # A tibble: 3 x 3 #> country `1999` `2000` #> * <chr> <int> <int> #> 1 Afghanistan 19987071 20595360 #> 2 Brazil 172006362 174504898 #> 3 China 1272915272 1280428583 A tidy version of the same data, according to the authors, would be: #> # A tibble: 6 x 3 #> country year population #> <chr> <chr> <int> #> 1 Afghanistan 1999 19987071 #> 2 Afghanistan 2000 20595360 #> 3 Brazil 1999 172006362 #> 4 Brazil 2000 174504898 #> 5 China 1999 1272915272 #> 6 China 2000 1280428583 I see that in the untidy version, one must already know that the data is population, otherwise it is impossible to understand what 1999 and 2000 mean. However, that could be derived from context, e.g. if the tibble is stored in a variable called population. Now, who am I to doubt their judgment -- I do not. But I would like to better understand their idea. What are the actual advantages of the second version? I cannot intuitively see them, e.g. calculation of mean etc. would be easy for both cases AFAICS, even if different functions need to be used. On the other hand, I would think that it is e.g. easier to calculate the correlation between the population of two years if the data is stored in the "untidy" form. What is my mistake here? Answer: As you mention, the first example is data in an "untidy" format which can make analysis more difficult for multiple reasons. The first one is the column names: as you mention, you do not know what the values in the columns actually represent. You mention that this can be derived from the context (e.g. the variable name), but what if the data is originally stored in a csv file called data.txt? In addition, how would this work if you have another type of value for which you also have data for 1999 and 2000, what would you name the columns? 
A second (and probably the biggest) advantage of using tidy data is that it allows for easier and more standardized analysis. I think the following from the original paper describes it well: Tidy data makes it easy for an analyst or a computer to extract needed variables because it provides a standard way of structuring a dataset. Compare Table 3 to Table 1: in Table 1 you need to use different strategies to extract different variables. This slows analysis and invites errors. If you consider how many data analysis operations involve all of the values in a variable (every aggregation function), you can see how important it is to extract these values in a simple, standard way. Tidy data is particularly well suited for vectorised programming languages like R, because the layout ensures that values of different variables from the same observation are always paired. Sections 4 and 5 of the paper give more in-depth information on how the tidy format works with existing functions within R and how to apply them.
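The wide-to-tidy reshape the authors describe (what tidyr's `gather`/`pivot_longer` does in R) is easy to mimic; here is a hypothetical plain-Python sketch of the same melt applied to the population table:

```python
def melt(rows, id_col, var_name, value_name):
    """Turn wide records like {'country': ..., '1999': ..., '2000': ...}
    into tidy rows: one observation per row."""
    tidy = []
    for row in rows:
        for key, value in row.items():
            if key != id_col:
                tidy.append({id_col: row[id_col], var_name: key, value_name: value})
    return tidy
```

Calling `melt([{"country": "Brazil", "1999": 172006362, "2000": 174504898}], "country", "year", "population")` yields exactly the two Brazil rows of the tidy tibble above.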
{ "domain": "datascience.stackexchange", "id": 10468, "tags": "data-cleaning" }
When do you use amortized time complexity and when to use unamortized?
Question: This is my guess: -Use amortized because we want to know the "averaged" complexity over n operations assuming the operation is going to be used frequently -Use unamortized when you know the operation is going to be used rarely Answer: You can find the motivation for the amortized analysis from this reference: The motivation for amortized analysis is that looking at the worst-case time per operation can be too pessimistic if the only way to produce an expensive operation is to "set it up" with a large number of cheap operations beforehand. Hence, use unamortized (worst-case) analysis when you need a guarantee on the cost of every individual operation, and amortized analysis when only the total cost of a whole sequence of operations matters.
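The "set it up with cheap operations" pattern is exactly what happens in a doubling dynamic array: most appends cost 1, but a rare append triggers a full copy. A small Python cost model (the accounting is illustrative, not a real allocator):

```python
def append_costs(n):
    """Per-operation cost of n appends to a doubling array:
    1 for the write, plus `size` copies whenever the array is full."""
    costs, size, capacity = [], 0, 1
    for _ in range(n):
        cost = 1
        if size == capacity:   # expensive, but rare: double and copy
            cost += size
            capacity *= 2
        costs.append(cost)
        size += 1
    return costs

costs = append_costs(1024)
worst = max(costs)                    # a single append can be very costly
amortized = sum(costs) / len(costs)   # yet the average stays near a constant
```

For 1024 appends the worst single operation costs 513 (copying 512 elements plus the write), while the amortized cost is about 2 per operation, which is why looking only at the worst case would be "too pessimistic" here.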
{ "domain": "cs.stackexchange", "id": 17212, "tags": "time-complexity, amortized-analysis" }
What compactifications of the Poincaré group have been studied?
Question: As we know, the Poincaré group is non-compact. Poincaré invariance has been observed at velocities and energies up to $10^{20}$ eV in cosmic rays. The other day, I was thinking about how the $SU(2)$ homomorphism to $SO(3)$ imposes a double cover, and I keep wondering if something like that could exist in the Poincaré group, but of course the main problem is that the group is not compact. I wonder if it is possible at all to make a compactification of the group that is consistent with low-energy physics, and still preserves some form of the isotropy of spacetime. For instance, I considered identifying the different connected components (either CP or PT inverted) of the group at some boundary consistent with energies of the order of $10^{28}$ eV, but with meaningful dimensional analysis, but have not succeeded in analyzing the symmetry properties and algebraic properties of the resulting manifold (is it still a Lie group after such an identification?) The physical interpretation of such an identification is up for discussion, but I think the concrete example compactification I gave would basically amount to a duality that continuously maps between particles with energies above $E_p$ (some arbitrary boundary energy) and particles with energy below $E_p$ and $P$ or $CP$ reversed. This would for instance make electric charge conservation an approximate symmetry. Has something like this been attempted? Or are there good reasons known why this could not work? Answer: You cannot embed the Poincare group or the Lorentz group into a compact Lie group $G$. Indeed, denote the Lie algebra of $G$ as $\mathfrak{g}$ and the Lorentz algebra as $\mathfrak{l}=\mathfrak{o}(1,3)\cong\mathfrak{sl}_2(\mathbf{C})$. The Killing form on $\mathfrak{g}$ is non-positive-definite, but then so is its restriction to $\mathfrak{l}$. 
Restriction of the Killing form on $\mathfrak{g}$ to $\mathfrak{l}$ is $\mathfrak{l}$-invariant and is therefore proportional to the Killing form on $\mathfrak{l}$, since the latter is a simple real Lie algebra. Finally, the Killing form on $\mathfrak{l}$ has signature $(3,3)$, contradiction. By the same reasoning, you cannot mod out by a discrete subgroup of the Lorentz group and get a compact group: the Lie algebra does not change, so the Killing form cannot become non-positive-definite. On the other hand, there are well-known 'compactifications' of the translation group. You can either mod out $\mathbf{R}/\mathbf{Z}=S^1$ or immerse $\mathbf{R}\rightarrow T^2$ as an irrational winding depending on what kind of compactifications you are interested in.
{ "domain": "physics.stackexchange", "id": 3390, "tags": "research-level, group-theory, compactification, poincare-symmetry" }
How do organic chemistry mechanisms become accepted?
Question: Now I have been learning chemistry for five years. I remember when I started organic chemistry, it was fun to draw arrows between molecules to show, as if in a mathematical demonstration, how the reactions occurred. In every lesson I had, teachers explained to us how a specific reaction (for example the Shapiro reaction) occurs step by step, explaining the chemistry of each group in each intermediate as if things were obvious (you know how teachers are). But I've been wondering for some weeks now how does a mechanism come to be considered as accepted or still discussed? If they use some, what kind of spectrometry techniques are used to measure the amount of each intermediate? If not how do they proceed? Do they use computational chemistry? Because for example for a reaction such like a $\mathrm{S_N2}$ it doesn't look too tricky to find how it works, whereas for Fries rearrangement (I don't know if the mechanism is considered as accepted or not) it seems to be more tricky. (Ref.) So can you explain the methods (at least the most used) to confirm a mechanism? I am aware that "confirm" does not mean that we are 100% sure, but rather that it is simply the best we have found so far. Answer: Great question! When I was teaching, Anslyn and Dougherty was a decent text for this. Here are some general comments: First, please note that you cannot be sure about a mechanism. That's the real killer. You can devise experiments that are consistent with the mechanism but because you cannot devise and run all possible experiments, you can never be sure that your mechanism is correct. It only takes one good experiment to refute a mechanism. If it's inconsistent with your proposed mechanism, and you're unable to reconcile the differences, then your mechanism is wrong (or incomplete at best). Writing mechanisms for new reactions is hard. 
Good thing we have a whole slew of existing reactions that people already have established (highly probable, but not 100% guaranteed) mechanisms for. Computational chemistry is pretty awesome now and provides some really good insights into how a specific reaction takes place. It doesn't always capture all relevant factors so you need to be careful. Like any tool, it can be used incorrectly. The types of reactions you run really depend heavily on the kind of reaction you're studying. Here are some typical ones: Labeling -- very good for complex rearrangements Kinetics (including kinetic isotope effects) -- good for figuring out rate-determining steps Stereochemistry -- Good for figuring out if steps are concerted (see this example mechanism I wrote for a different question) Capturing intermediates -- This can be pretty useful but some species that you capture aren't involved in the reaction, so be careful. Substitution effects and LFER studies -- Great for determining if charge build-up is accounted for in your mechanism For named reactions, the Kurti-Czako book generally has seminal references if you want to actually dig through the literature for experiments. For your specific reaction, what do we think the rate-determining step is? Probably addition into the acylium? You could try to capture the acylium intermediate. You could run the reaction with reactants that have two labelled oxygens and reactants that have no labelled oxygens. Do they mix? If not, it's fully intramolecular. Otherwise, there's an intermolecular component and the mechanism as written is incomplete. A quick Google search suggests that the boron trichloride mediated version has been studied via proton, deuterium, and boron NMR. I didn't follow up on this, but there's clearly some depth here. When I was T.A.ing for Greg Fu, he really liked to use an example with the von Richter reaction. I might be able to find those references...
{ "domain": "chemistry.stackexchange", "id": 7959, "tags": "organic-chemistry, reaction-mechanism, computational-chemistry" }
Holant problems and holographic reduction: simple graphs or multigraphs?
Question: From what I can understand, Holographic reductions for Holant problems are used to show #P-hardness or polynomial time computability of certain counting problems on undirected graphs that have very restricted forms (for instance, 3-regular graphs, or 2-3-regular bipartite graphs). My concern is about the confusion between the use of the word "graph" to mean either a) a simple graph, without self-loops and parallel edges or b) a multigraph, that can have self-loops and parallel edges (i.e., multiple edges between the same pair of vertices). I will first illustrate my problem with examples, then ask my questions more precisely. Examples: In the paper Cai, Lu and Xia, Holographic Reduction, Interpolation and Hardness, 2012, the authors provide a dichotomy for all problems of the form $\mathrm{Holant}([x,y,z][t_1,t_2,t_3,t_4])$, so over 2-3-regular bipartite graphs. They do not specify if they talk about simple graphs or multigraphs. A search for "self loops" reveals that they might be using graphs with self loops, but this is not very explicit (and what about parallel edges?). However, nowhere else in the paper is this distinction discussed, and all the gadgets they use seem to be simple graphs (in the Appendix). In Tyson Williams' thesis, by default the graphs considered are multigraphs (see page 4). He also says that Holant problems always consider multigraphs, suggesting that the first paper I cited considers multigraphs. In some of his results, he mentions the distinction between the two notions of graphs: for instance, Theorem 10.2.7 states that some problem is hard on 3-regular multigraphs, and he shows in Theorem 10.2.8 that the same problem is also hard on 3-regular simple graphs. By looking at the proof you see that the trick used is specific to this particular problem. In the paper Xia, Zhang and Zhao, Computational complexity of counting problems on 3-regular planar graphs, 2007, the abstract talks about 3-regular graphs. 
When I first read it I thought that they were considering simple graphs, but actually I did not find a place where they say this. At this point, I want to say that this post is not a criticism of the authors of all these papers (I know that this kind of detail can be annoying to present rigorously), the goal is just to clarify what is happening here. Questions: 1) Do all the PTIME results of, say, the first paper I cite, work for multigraphs? 2) Do all the hardness results of that paper work for 2-3-regular simple bipartite graphs? 3) If not, is there a simple general fix so that the hardness results work for simple graphs, or does one have to find an ad hoc patch for each one of these problems? (for example, as in Theorem 10.2.8 of Tyson Williams' thesis) 4) If yes, do they also work for 2-3-regular simple graphs that also satisfy this special property: no two nodes of the partition of degree 2 have the same neighbors. This condition would ensure that, when you take a 2-3-regular simple graph that satisfies this, then contracting the nodes in the partition of degree 2 would always give a 3-regular simple graph. (without this condition, we could end up with parallel edges.) This post is also related to this post and this one, where the same kind of confusion arose (a positive answer to question 4) would solve these problems). I decided to create a new question because I realized that the problem is more general than the ones exposed in these posts. Answer: My concern is about the confusion between the use of the word "graph" to mean either a) a simple graph, without self-loops and parallel edges or b) a multigraph, that can have self-loops and parallel edges (i.e., multiple edges between the same pair of vertices). In the literature about Holant problems, the word "graph" should be read as "multigraph" unless otherwise stated. As you noticed, I tried to be more explicit about this in my thesis. 
In my experience, different areas of math use terms mostly consistently within their own area but in ways that would be contradictory when considering multiple areas and read strictly. In your first example, the proper notation is $\mathrm{Holant}([x,y,z]|[t_1,t_2,t_3,t_4])$. This means that the input graphs are bipartite with one part having only vertices of degree 2 (and assigned the symmetric binary function $[x,y,z]$) and the other part having only vertices of degree 3 (and assigned the symmetric ternary function $[t_1, t_2, t_3, t_4]$). Also (and I am sure you know this, but to be clear), a bipartite graph cannot have self loops. So if a bipartite graph is not a simple graph, then it is because it has parallel edges. 1) Yes 2) Not immediately. I think you have correctly observed that their reductions work for simple graphs. It remains to verify if the hard problems that they were reduced from are also hard when further restricted to simple graphs. 3) There is no general fix. In general, you have to reduce from problems that are hard for simple graphs. 4) Gadget 2 does not satisfy your condition. Consider $N_1$. The only two vertices of degree 2 without dangling edges have the same neighbor set.
{ "domain": "cstheory.stackexchange", "id": 4720, "tags": "counting-complexity" }
Centripetal Force Terminology
Question: Is centripetal force the correct terminology? Why is the resultant (orthogonal) force component called "centripetal force"? In my view, it is just a component. Answer: The terminology is correct. It doesn't have to be the resultant force. You are correct that it may only be a component. "The centripetal force" is just a name we give to the sum of all forces that point towards the centre (thus the name) because only these forces cause turning. Regardless of other forces being present in other directions. Imagine driving around a round-about. Constant speed. Friction pulls you inwards as you turn your wheels and causes the turning. This is thus the centripetal force. In this case it is also the resultant force because all other forces (weight and normal force) cancel out. Now speed up. Accelerate. Your wheels push from the asphalt and a forwards friction force appears. The resultant force is now not towards the centre but at an angle. Only a portion of the total friction force points sideways towards the centre and causes turning. Another portion points forwards and causes speeding up. So only the centre-directed portion is called the centripetal force.
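The decomposition in the last paragraph is just a vector projection; a small Python sketch (numbers and names are mine) extracting the centre-directed component of the total friction force:

```python
import math

def centripetal_component(force, towards_centre):
    """Project `force` onto the unit vector pointing at the centre;
    only this component causes the turning."""
    fx, fy = force
    cx, cy = towards_centre
    norm = math.hypot(cx, cy)
    return (fx * cx + fy * cy) / norm
```

For example, a friction force (3, 4) on a car whose centre lies along (1, 0) has centripetal component 3; the remaining forward portion, 4, only speeds the car up.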
{ "domain": "physics.stackexchange", "id": 81767, "tags": "forces, terminology" }
What is an asymptotically tight upper bound?
Question: From what I have learned asymptotically tight bound means that it is bound from above and below as in theta notation. But what does asymptotically tight upper bound mean for Big-O notation? Answer: Saying that a big-O bound is "asymptotically tight" basically means that the author should have written $\Theta(-)$. For example, $O(x^2)$ means that it's no more than some constant times $x^2$ for all large enough $x$; "asymptotically tight" means it really is some constant times $x^2$ for large enough $x$ and not, say, some constant times $x^{1.999}$.
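A quick numerical way to see the difference, using $f(x) = 5x^2 + 3x$ as an illustrative example: against the tight bound $x^2$ the ratio settles at a nonzero constant, while against a loose bound like $x^3$ it vanishes.

```python
def f(x):
    return 5 * x ** 2 + 3 * x

x = 10 ** 6
tight_ratio = f(x) / x ** 2   # approaches 5: O(x^2) is asymptotically tight
loose_ratio = f(x) / x ** 3   # approaches 0: O(x^3) holds but is not tight
```

This matches the answer: an asymptotically tight $O(x^2)$ means $f$ really is some constant times $x^2$ for large $x$, i.e. the author could have written $\Theta(x^2)$.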
{ "domain": "cs.stackexchange", "id": 15727, "tags": "asymptotics" }
Work done by vertical force
Question: We know that work is defined by the dot product of force and displacement, which mathematically means $W=Fs\cos \theta$ in scalar form. But let's suppose an object is moving along the $x$ axis. Its vertical velocity $v_y=0$. If a force $F$ is applied in the $y$ direction, wouldn't $v_y$ increase as a result of acceleration $a_y$? If $v_y$ increases, that means the object will also start moving in the $y$ direction. So we have a motion in the $y$ axis which is along the direction of force. So even though $0=W=Fs\cos 90^\circ$ makes sense mathematically, how can $s$ even be $0$ in light of the above explanation? I would be very grateful if someone kindly clears the concept. Answer: Think of a train cart moving in its track. Push along with the track and force is parallel to displacement. Angle is zero and we get the largest possible work: $$W=\mathbf F\cdot \mathbf d=Fd\cdot\cos(0)=Fd.$$ Push perpendicular to the track and force is perpendicular to displacement. But the cart cannot move this way, so a normal force from the train rails will appear to balance out the force. We thus still have displacement only along the track as before. Angle is $90^\circ$ and we see that this force does no work: $$W=\mathbf F\cdot \mathbf d=Fd\cdot\cos(90^\circ)=Fd\cdot 0=0.$$ The fact that the cart is moving and having displacement is not due to this force $\mathbf F$. Pushing upwards is the same scenario: Also here, force and displacement are perpendicular. This time the counteracting force is gravity. Maybe you push hard enough to overcome gravity so the cart is moved upwards and lifted out of its tracks. Then you are turning its displacement vector $\mathbf d$, so that it is no longer perpendicular to the force, and thus you will get a work value done by this force. But if the force is not strong enough to overcome gravity and the cart thus does not move upwards, then the cart will just keep its usual displacement and the force is doing no work to help. 
This displacement is then not due to this force.
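The bookkeeping in the answer is just the dot product; a two-line Python sketch makes the perpendicular case explicit:

```python
def work(force, displacement):
    """W = F . d: only the force component along the displacement does work."""
    return sum(f * d for f, d in zip(force, displacement))
```

A vertical force on a cart displaced horizontally does no work, while the same magnitude of force along the track does the full $Fd$.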
{ "domain": "physics.stackexchange", "id": 91853, "tags": "newtonian-mechanics, forces, work" }
Understanding ConfusionMatrix for Google Prediction API
Question: I'm trying to analyze my training model, Google Prediction API provides analyze method to get insights for the model. Currently I want to improve confidence levels, for my predictions, I haven't found any documentation in how to read this ConfusionMatrix, any insights will be great: { "kind": "prediction#analyze", "id": "modelX", "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/projectX/trainedmodels/modelX/analyze", "dataDescription": { "outputFeature": { "text": [ { "value": "labelA", "count": "681" }, { "value": "labelB", "count": "127" }, { "value": "labelC", "count": "814" }, { "value": "labelD", "count": "427" } ] }, "features": [ { "index": "0", "text": { "count": "2049" } } ] }, "modelDescription": { "modelinfo": { "kind": "prediction#training" }, "confusionMatrix": { "labelA": { "labelA": "14.17", "labelB": "0.17", "labelC": "3.83", "labelD": "0.67" }, "labelB": { "labelA": "0.50", "labelB": "2.00", "labelC": "1.33", "labelD": "0.00" }, "labelC": { "labelA": "1.17", "labelB": "0.00", "labelC": "70.00", "labelD": "3.50" }, "labelD": { "labelA": "1.17", "labelB": "0.17", "labelC": "4.17", "labelD": "12.17" } }, "confusionMatrixRowTotals": { "labelA": "18.83", "labelB": "3.83", "labelC": "74.67", "labelD": "17.67" } } } Answer: As explained in the documentation: This shows an estimate for how this model will do in predictions. This is first indexed by the true class label. For each true class label, this provides a pair {predicted_label, count}, where count is the estimated number of times the model will predict the predicted label given the true label. If you are not sure what a confusion matrix, see Wikipedia, where the "actual class" refers to the same thing as the "true class" in the Google documentation.
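To make the row/column convention concrete, here is a hypothetical Python sketch that builds the same structure as the API's response, with rows indexed by the true label and cells counting how often that true label was predicted as each label:

```python
from collections import defaultdict

def confusion_matrix(true_labels, predicted_labels):
    """Return ({true_label: {predicted_label: count}}, {true_label: row_total})."""
    matrix = defaultdict(lambda: defaultdict(int))
    for t, p in zip(true_labels, predicted_labels):
        matrix[t][p] += 1
    row_totals = {t: sum(row.values()) for t, row in matrix.items()}
    return {t: dict(row) for t, row in matrix.items()}, row_totals
```

Reading the response above in this layout: of the roughly 18.83 items whose true label is labelA, the model is expected to predict labelA 14.17 times and labelC 3.83 times, so labelA is mostly confused with labelC.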
{ "domain": "datascience.stackexchange", "id": 771, "tags": "machine-learning, confusion-matrix, google-prediction-api" }
SOLID principles breaking in interview task (file parser)
Question: Originally asked on Stack Overflow I recently had an interview where interviewers asked me to Rewrite following code with SOLID principles. Add a new file handler (PriceFileHandler for example). Add supporting stream parsing (to one parser implementation) Here is the code which I had to change: <?php class FileHandler { public function parse(string $type, string $data) { switch ($type) { case 'xml': $parser = new XmlParser(); break; case 'csv': $parser = new CsvParser(); break; default: throw new InvalidArgumentException(); } return $parser->parse($data); } } class XmlParser { /** * @param string $data * @return SimpleXMLElement */ public function parse(string $data) { // ... } } class CsvParser { /** * @param string $data * @return string */ public function parse(string $data) { // ... } } as result I rewrite it to following code: <?php interface FileHandlerInterface { /** * @param string $type * @param string $data * * @return array >>>> always return parsed data in one format */ public function parseString(string $type, string $data); /** * @param string $type * @param resource $data * * @return array >>>> always return parsed data in one format */ public function parseStream(string $type, resource $data); } abstract class AbstractFileHandler implements FileHandlerInterface { /** * @var ParserInterface[] */ private $parsers = []; public function addParser(ParserInterface $parser) { $this->loaders[$parser->getAlias()] = $parser; } /** * @param string $alias * @return StringParserInterface * @throws Exception */ public function getStringParser(string $alias) { if (!isset($this->parsers[$alias])) { throw new Exception; } if (!($this->parsers[$alias] instanceof StringParserInterface)) { throw new Exception; } return $this->parsers[$alias]; } /** * @param string $alias * @return StreamParserInterface * @throws Exception */ public function getStreamParser(string $alias) { if (!isset($this->parsers[$alias])) { throw new Exception; } if 
(!($this->parsers[$alias] instanceof StreamParserInterface)) { throw new Exception; } return $this->parsers[$alias]; } } class PriceFileHandler extends AbstractFileHandler { /** * @param string $type * @param string $data * * @return array */ public function parseString(string $type, string $data) { $parser = $this->getStringParser($type); $parsedData = $parser->parse($data); // remove items with price <= 0 foreach ($parsedData as $key => $parsedItem) { if ($parsedItem['price'] <= 0) { unset($parsedData[$key]); } } return $parsedData; } /** * @param string $type * @param resource $data * * @return array */ public function parseStream(string $type, resource $data) { throw new Exception('doesn\'t support'); } } class FileHandler extends AbstractFileHandler { /** * @param string $type * @param string $data * * @return array */ public function parseString(string $type, string $data) { $parser = $this->getStringParser($type); return $parser->parseString($data); } /** * @param string $type * @param resource $data * * @return array */ public function parseStream(string $type, resource $data) { $parser = $this->getStreamParser($type); return $parser->parseStream($data); } } interface ParserInterface { } interface StringParserInterface { public function parseString(string $data); } interface StreamParserInterface { public function parseStream(resource $data); } class XmlParser implements ParserInterface, StringParserInterface, StreamParserInterface { /** * @param string $data * @return array */ public function parseString(string $data) { // ... } /** * @param resource $data * @return array */ public function parseStream(resource $data) { // ... } } class CsvParser implements ParserInterface, StringParserInterface { /** * @param string $data * @return array */ public function parseString(string $data) { // ... } } The interviewers said I broke these principles: SRP - perhaps about parsing from different sources in file handlers.... LSP - .... OCP - .... 
Could someone describe where and why I've made mistakes or what I've done wrong and how to do it in the correct way? I've read and re-read the principles with examples and don't see where I could have broken the LSP and OCP principles :( Answer: SRP As you noticed too, this one is fairly clear: your grouping of different sources of data under a single parser interface violates the single responsibility principle. Ideally, you should have a single interface which has a generic method (parse) which all your classes can implement. If that's not possible, as the signatures for each method might be different, you should create interfaces for each one and implement said interface. LSP LSP has been covered extensively in this SO post - which I advise that you take a look through. I have no intention to regurgitate what has previously been written, so again, I'll keep it brief. Liskov imposes certain standards, one of them being: No new exceptions should be thrown by methods of the subtype, except where those exceptions are themselves subtypes of exceptions thrown by the methods of the supertype. When you define the function parseStream in your interface FileHandlerInterface: /** * @param string $type * @param resource $data * * @return array >>>> always return parsed data in one format */ public function parseStream(string $type, resource $data); The return is specified as an array; it actually says 'always return parsed data in one format', so it seems that you were consciously aware of the principle but maybe in the heat of the moment you glanced over it further on. If we look in the PriceFileHandler class, you'll notice that you define the function with nothing but an exception as part of its body: /** * @param string $type * @param resource $data * * @return array */ public function parseStream(string $type, resource $data) { throw new Exception('doesn\'t support'); } Yet, the return type that is documented for this definition is an array too.
This falls under the undesired effects category that LSP warns against. As far as OCP goes, I can't see anything right now; however, I'll revisit this if I do see something.
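The interface-segregation fix the answer hints at can be sketched language-agnostically. Below is a minimal Python sketch (the names StringParser, StreamParser, and the toy CSV logic are illustrative, not from the original PHP): a handler depends only on the parsing capability it actually needs, so no subtype ever has to stub out an unsupported method with an exception — the LSP violation discussed above.

```python
from abc import ABC, abstractmethod

class StringParser(ABC):
    """Segregated capability: parse data held in a string."""
    @abstractmethod
    def parse_string(self, data: str) -> list: ...

class StreamParser(ABC):
    """Segregated capability: parse data from a stream."""
    @abstractmethod
    def parse_stream(self, stream) -> list: ...

class CsvParser(StringParser):
    # CSV has no stream support here, so it simply does not
    # implement StreamParser -- no stubbed method, no LSP issue.
    def parse_string(self, data: str) -> list:
        return [row.split(",") for row in data.splitlines() if row]

class PriceFileHandler:
    """Depends only on the capability it needs (StringParser),
    so it never throws a 'not supported' exception from a method
    the base type promised would return an array."""
    def __init__(self, parsers: dict):
        self._parsers = parsers  # alias -> StringParser

    def parse_string(self, alias: str, data: str) -> list:
        parsed = self._parsers[alias].parse_string(data)
        # drop items with price <= 0, as in the original handler
        return [item for item in parsed if float(item[1]) > 0]

handler = PriceFileHandler({"csv": CsvParser()})
rows = handler.parse_string("csv", "apple,1.50\nfree,0\npear,2.00")
```

With this shape, adding a new source type means adding a new capability interface and registering parsers that implement it — extension without modification, which is the OCP direction the interviewers were probing for.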
{ "domain": "codereview.stackexchange", "id": 34085, "tags": "php, object-oriented, design-patterns" }
Definition of Stable, Unstable and Neutral Equilibrium using Force
Question: So I have read that a particle is in Stable Equilibrium when the potential energy is at a minimum, Unstable Equilibrium when the potential energy is at a maximum, and Neutral Equilibrium when the potential energy is neither a maximum nor a minimum. All of these definitions have the condition that the net force on the particle = $0$ (or, equivalently, that the particle is at equilibrium, i.e. $\frac{dU}{dr} = 0$). Some extensions of these concepts, or different definitions of equilibrium, are: "A particle is said to be in Stable Equilibrium when a slight displacement of the particle from the equilibrium position makes it oscillate about that position (which means it returns back to the mean position), or the forces acting on the particle are in the opposite direction to the displacement. The particle is in Unstable Equilibrium when a slight displacement of the particle from the equilibrium position makes it go farther away from that position (which means it cannot come back to the mean position), or the forces acting on the particle are in the direction of the displacement." My doubt is: what is the condition on the force for neutral equilibrium (like, for Stable Equilibrium, force opposite to displacement, and for Unstable Equilibrium, force along the direction of displacement)? Answer: An example of stable equilibrium is a ball in a bowl. If the ball is displaced sideways, it is higher than the equilibrium position. It loses potential energy if it rolls back to equilibrium. Not coincidentally, the force on a displaced ball is toward the equilibrium position. You can think about how force is related to the derivative of potential energy to see why. A ball on a plane is an example of neutral equilibrium. No change in potential energy on displacement. No force. A ball on a hill is in unstable equilibrium. A ball at the inflection point of the curve $y=x^3$ illustrates why $\frac{dU}{dr}=0$ isn't a bulletproof definition of neutral equilibrium.
If displaced toward positive x, the force pushes it toward the "equilibrium" position. If displaced toward negative x, the force pushes it away. This is not a neutral equilibrium.
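The $y=x^3$ inflection-point example can be checked numerically. The sketch below (the helper force is an assumed name, just a central finite-difference derivative) confirms that $F=-dU/dx$ points back toward the equilibrium on one side and away from it on the other:

```python
def force(U, x, h=1e-6):
    """F = -dU/dx via a central finite difference."""
    return -(U(x + h) - U(x - h)) / (2 * h)

U = lambda x: x**3        # surface with an inflection point at x = 0

F_right = force(U, +0.1)  # ball displaced toward positive x
F_left  = force(U, -0.1)  # ball displaced toward negative x
# F_right < 0: force points back toward x = 0 (restoring)
# F_left  < 0: force points further into negative x (away)
```

Both displacements give $F=-3x^2<0$: restoring on the positive side, destabilizing on the negative side, so the point is neither stable, unstable, nor neutral in the usual sense.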
{ "domain": "physics.stackexchange", "id": 98224, "tags": "forces, potential-energy, equilibrium" }
Lorentz invariant integration measure and Heaviside step function
Question: I'm currently studying Klein-Gordon fields and I ran into the concept of the Lorentz invariant integration measure, namely: \begin{equation} \frac{d^3k}{(2\pi)^3\,2E_k} \end{equation} where $E_k=\sqrt{\boldsymbol{k}^2+m^2}$. I see from Lorentz Invariant Integration Measure that in my integral I should include $\theta(k_0)\delta(k^{\mu}k_{\mu}-m^2)$. I get the reason for the presence of the delta (I want on-shell relativistic particles) but I don't get why I want to select particles with positive energy and neglect the negative solutions. In my notes, I find that $k_0=\pm E_k$ and "Negative energies can't be neglected. They will eventually be interpreted as antiparticles with positive energy in QFT". So why, when integrating, do I select only particles with $k_0>0$? Answer: It is just mathematics, forget about the physics for a moment. What we want to do is to find an integration measure $\mathrm d\mu(\boldsymbol k)$ that is invariant under Lorentz transformations. It should be clear that $$ \mathrm d\mu(\boldsymbol k)=\mathrm d^4k\,\delta(k^2-m^2)\Theta(k^0) $$ does the trick, irrespective of what the factors represent. You may wonder why we choose this form for $\mathrm d\mu$ instead of some other possibility; the reason is that this measure is rather natural in this context. You can find a possible motivation for $\mathrm d\mu$ in this post of mine. By using $\Theta(k^0)$ you are not selecting particles. After all, you are solving the Klein-Gordon equation, at which point there are no particles yet. On the other hand, if you are using $\mathrm d\mu(\boldsymbol k)$ to derive a formula for the cross-section, the particles are already there! You cannot change the spectrum of the Hamiltonian by choosing some integration measure or other. The physics of a problem is independent of how you choose to solve the equations. Choosing this specific form for $\mathrm d\mu(\boldsymbol k)$ is convenient. Choosing other forms won't remove, nor add, particles from your system.
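For completeness, the one-line computation behind the quoted measure (consistent with the definitions above; the $2\pi$ factors follow the usual QFT normalization):

```latex
\delta(k^2-m^2)\,\Theta(k^0)
  =\frac{\delta(k_0-E_{\boldsymbol k})+\delta(k_0+E_{\boldsymbol k})}{2E_{\boldsymbol k}}\,\Theta(k^0)
  =\frac{\delta(k_0-E_{\boldsymbol k})}{2E_{\boldsymbol k}},
\qquad E_{\boldsymbol k}=\sqrt{\boldsymbol k^2+m^2},

\int\frac{d^4k}{(2\pi)^4}\,2\pi\,\delta(k^2-m^2)\,\Theta(k^0)\,f(k)
  =\int\frac{d^3k}{(2\pi)^3\,2E_{\boldsymbol k}}\,f\!\left(E_{\boldsymbol k},\boldsymbol k\right).
```

The $\Theta$-function simply picks the positive root of the on-shell delta before the $k_0$ integral collapses; nothing about the spectrum is being discarded.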
{ "domain": "physics.stackexchange", "id": 36173, "tags": "lorentz-symmetry, klein-gordon-equation" }
Inapproximability of graph problems on a restricted setting
Question: I am considering the following problem $\mathcal{P}$. $\mathcal{P}$: Given an undirected graph $G$, and an integer $k$, find a set of vertices $S \subseteq V(G)$, with $|S| = k$, such that the number of edges in the subgraph induced by $S$ is minimized. This is clearly NP-hard, as the answer is 0 iff there is an independent set of size $k$ in $G$. So I am interested in studying whether the problem can be approximated assuming $\mathrm{P} \neq \mathrm{NP}$ (assumption implicit from now on). More precisely, saying that $\mathcal{P}$ cannot be approximated by a factor better than $\rho$ means that there is no constant $\alpha < \rho$ for which a polytime algorithm $A$ can guarantee that $A(G, k) \leq \alpha \mathrm{OPT}(G,k)$ for every input $G, k$. Now, for the restricted setting, imagine $c > 0$ is an arbitrary constant, and then define $c\mathcal{P}$ as the same problem but with the restriction that $|V(G)| \geq c \cdot k$. The question I am wondering about is whether, given the following claims 1) and 2), it is true that 1) implies 2). 1) There is a constant $\rho > 1$ such that $\mathcal{P}$ cannot be approximated by a factor better than $\rho$. 2) There is a constant $\rho' > 1$ such that for any constant $c > 0$ the problem $c\mathcal{P}$ cannot be approximated by a factor better than $\rho'$. I would appreciate any help, or pointers to problems where something like that is proven. Answer: Yes, $(1) \implies (2)$. Suppose towards contradiction that $(1)$ is true but $(2)$ is false. Negating $(2)$ gives $\forall \rho' > 1 \exists c > 0$ such that the problem $c\mathcal{P}$ can be approximated by a factor better than $\rho'$ in time polynomial in the vertex count. (I added "in time polynomial in the vertex count", since without a restriction like this you could of course solve the problem exactly in exponential time.) Let $c_{\rho'}$ be a value of $c$ that works for $\rho'$ (the negation of $(2)$ makes this well-defined for each $\rho'$).
Now arbitrarily good $\rho'$-approximations can be made to the original problem in time polynomial in the number of vertices: Just keep adding vertices that are adjacent to every other vertex until $|V(G)| \ge c_{\rho'}k$ and then solve using the approximation algorithm for the restricted problem. If the solution contains any of the added vertices, swap each one with any original vertex -- this cannot make the solution worse. Note that the above approximation algorithm is polynomial in the original number $n$ of vertices even if $c_{\rho'}$ is very large, e.g., superexponential in $\rho'$, because for $k, \rho'$ fixed and $n$ large enough, $n \ge c_{\rho'}k$ so no vertices need to be added. The existence of a poly-time $\rho'$-approximation for the original unrestricted problem and for arbitrary $\rho'$ contradicts $(1)$ (specifically, you could choose, e.g., $\rho' = (1+\rho)/2$), so it must be that $(1) \implies (2)$. (Aside: It's not necessary for $(1)$ to hold for the above proof to go through, but if $\textrm P \ne \textrm{NP}$, it does anyway, since if a poly-time $\rho$-approximate algorithm for your problem for arbitrary $\rho > 1$ existed, you could use it to solve Maximum Independent Set exactly in poly-time: Choose any $\rho > 1$ and run the $\rho$-approximate algorithm for your problem inside a binary search on $k$ until we find the largest $k$ for which it reports an answer of 0. Since the approximation ratio is defined multiplicatively, the true answer for this $k$ must also be 0, and since $A(G, k) \ge OPT(G, k)$, the true answer for $k+1$ is $> 0$, so $k$ is the size of a largest IS.)
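The padding construction in the proof above can be prototyped. In this hedged sketch (all function names are invented; an exponential brute force stands in for the hypothetical polynomial-time approximation algorithm, so it only works on tiny graphs), universal vertices are added until $|V(G)| \ge c\,k$, and one can check that the optimum is unchanged — the heart of the swap argument:

```python
from itertools import combinations

def induced_edges(adj, S):
    """Number of edges of G inside the vertex set S."""
    return sum(1 for u, v in combinations(S, 2) if v in adj[u])

def opt(adj, k):
    """Brute-force OPT (exponential; stands in for the hypothetical
    approximation algorithm for the restricted problem)."""
    return min(induced_edges(adj, S) for S in combinations(adj, k))

def pad_universal(adj, c, k):
    """Add vertices adjacent to every other vertex until |V(G)| >= c*k."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    while len(adj) < c * k:
        w = max(adj) + 1
        adj[w] = set(adj)           # neighbours: all existing vertices
        for u in adj[w]:
            adj[u].add(w)
    return adj

# Path 0-1-2-3: for k = 2 the optimum is 0 (e.g. the set {0, 2})
G = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
padded = pad_universal(G, c=3, k=2)  # padded up to at least 6 vertices
# Padding leaves OPT unchanged: any added vertex in a solution can be
# swapped for an unused original vertex without making it worse.
```

Running opt on both the original and the padded instance returns the same value, which is exactly why an approximation guarantee for the restricted problem transfers back to the unrestricted one.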
{ "domain": "cs.stackexchange", "id": 18513, "tags": "complexity-theory, np-hard, approximation" }
Is it possible to convert .avi file into rosbag file in ROS_KINETIC_KAME
Question: I am not understanding how to run a video using ros_kinetic_kame. Please share me a video or complete process with all required packages that how to convert a .avi file into rosbag format. Edit: This is my error I found : $ roslaunch video_stream_opencv camera.launch video_stream_provider:="/home/test/Desktop/Prj/output.avi" ... logging to /home/test/.ros/log/98df6008-40c1-11e9-9e42-c81f6622f107/roslaunch-DH1CT5Z1-14580.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://DH1CT5Z1:34887/ SUMMARY ======== PARAMETERS * /camera/camera_stream/buffer_queue_size: 100 * /camera/camera_stream/camera_info_url: * /camera/camera_stream/camera_name: camera * /camera/camera_stream/flip_horizontal: False * /camera/camera_stream/flip_vertical: False * /camera/camera_stream/fps: 30.0 * /camera/camera_stream/frame_id: camera * /camera/camera_stream/height: 0 * /camera/camera_stream/loop_videofile: False * /camera/camera_stream/set_camera_fps: 30.0 * /camera/camera_stream/video_stream_provider: /home/test/Deskto... 
* /camera/camera_stream/width: 0 * /rosdistro: kinetic * /rosversion: 1.12.14 NODES /camera/ camera_stream (video_stream_opencv/video_stream) ROS_MASTER_URI=http://localhost:11311 process[camera/camera_stream-1]: started with pid [14597] [ INFO] [1551955599.176597852]: Resource video_stream_provider: /home/test/Desktop/Prj/output.avi [ INFO] [1551955599.176662227]: Getting video from provider: /home/test/Desktop/Prj/output.avi [ INFO] [1551955599.179496159]: Video stream provider type detected: videofile [ INFO] [1551955599.180594786]: Camera name: camera [ INFO] [1551955599.181613177]: Setting camera FPS to: 30 [ INFO] [1551955599.181655102]: Camera reports FPS: 20 [ INFO] [1551955599.182527461]: Setting buffer size for capturing frames to: 100 [ INFO] [1551955599.183486775]: Throttling to fps: 30 [ INFO] [1551955599.184338937]: Publishing with frame_id: camera [ INFO] [1551955599.185229180]: Provided camera_info_url: '' [ INFO] [1551955599.186110557]: Flip horizontal image is: false [ INFO] [1551955599.187001087]: Flip vertical image is: false [ INFO] [1551955599.191553137]: using default calibration URL [ INFO] [1551955599.191696665]: camera calibration URL: file:///home/test/.ros/camera_info/camera.yaml [ INFO] [1551955599.191759949]: Unable to open camera calibration file [/home/test/.ros/camera_info/camera.yaml] [ WARN] [1551955599.191787319]: Camera calibration file /home/test/.ros/camera_info/camera.yaml not found. [ INFO] [1551955599.191810010]: Opened the stream, starting to publish. [camera/camera_stream-1] process has finished cleanly log file: /home/test/.ros/log/98df6008-40c1-11e9-9e42-c81f6622f107/camera-camera_stream-1*.log all processes on machine have died, roslaunch will exit shutting down processing monitor... ... shutting down processing monitor complete done Originally posted by SUBHASH on ROS Answers with karma: 3 on 2019-03-05 Post score: 0 Original comments Comment by Jasmin on 2019-03-05: Yes, it is possible. 
Can you please tell us what you have tried and what exactly didn't work. As a start you can use video_stream_opencv; video_player is outdated. Comment by SUBHASH on 2019-03-07: How to get video_stream_opencv? I have tried to get its packages but all I find are errors about broken packages only Comment by Jasmin on 2019-03-07: I've posted an answer for your first question that should work once the package is installed correctly. Can you please share the dependency errors that you've got to help you solve them. Comment by Jasmin on 2019-03-07: did you use sudo apt-get install ros-kinetic-video-stream-opencv to install the package or install it from source? Answer: I've used video_stream_opencv once with ROS Indigo, installed it with sudo apt-get install ros-indigo-video-stream-opencv And ran it on an .avi video as follows: #check if the input video is fine: $ rosrun video_stream_opencv test_video_resource.py path_to_your_video/your_video.avi #run it: $ roslaunch video_stream_opencv camera.launch video_stream_provider:="path_to_your_video/your_video.avi" The output topic is /camera/image_raw; note that you may need to adjust some parameters if needed (fps, frame_id...) Once you have your topic ready you can record a bag of your video data directly with rosbag record. If what you need is images published to a ROS topic, we are already there! But if you just need the bag file and want a more direct conversion you may make use of this link and this question's answer. Originally posted by Jasmin with karma: 188 on 2019-03-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by SUBHASH on 2019-03-07: After executing $ rosrun video_stream_opencv test_video_resource.py home/Desktop/Prj/output.avi Error: Trying to open resource: home/Desktop/Prj/output.avi Error opening resource: home/Desktop/Prj/output.avi Maybe opencv VideoCapture can't open it Comment by SUBHASH on 2019-03-07: Please tell me the process to execute this code with ROS.
Does it need video_stream_opencv? Comment by Jasmin on 2019-03-07: Check the path to your video file again, I think that the user name is missing, it should be something like /home/user_name/Desktop/... Comment by SUBHASH on 2019-03-07: $ roslaunch video_stream_opencv camera.launch video_stream_provider:="path_to_your_video/your_video.avi" To launch this command roscore opened in another tab is required? Comment by Jasmin on 2019-03-07: It should run roscore automatically... Maybe, I'm not sure in this case. Did it work only when having a running roscore? Comment by SUBHASH on 2019-03-07: I got this error: $ roslaunch video_stream_opencv camera.launch video_stream_provider:="/hometest/Desktop/Prj/output.avi" ... logging to /home/test/.ros/log/98df6008-40c1-11e9-9e42-c81f6622f107/roslaunch-DH1CT5Z1-14432.log Checking log directory for disk usage. This may take awhile. goto next commen Comment by SUBHASH on 2019-03-07: This worked properly: rosrun video_stream_opencv test_video_resource.py path_to_your_video/your_video.avi Now I want to convert my .avi file to .bag file How to do that? $ roslaunch video_stream_opencv camera.launch video_stream_provider:="path_to_your_video/your_video.avi" above command gave me err Comment by Jasmin on 2019-03-07: what kind of error, can you post it here? Comment by Jasmin on 2019-03-07: This is not an error, it means that everything is working fine :). Can you tell me if you are able to visualize your video frames when running rosrun image_view image_view image:=/camera/image_raw in a new terminal just after running the previous command. Comment by Jasmin on 2019-03-07: Sorry I've just seen your answer containing the output messages. don't add them as answer you can edit your question to add them. did it say at some point [ERROR]... Failed to contact master at [localhost:11311]. Retrying... did you try running roscore first then camera.launch ? 
Comment by SUBHASH on 2019-03-07: $ rosrun image_view image_view image:=/camera/image_raw This worked I can see my video but now how to convert it into .bag format? Comment by Jasmin on 2019-03-07: great! you can use rosbag record for that as mentioned in my answer. Comment by SUBHASH on 2019-03-07: I got this error while recording on tab of $ roslaunch video_stream_opencv camera.launch video_stream_provider:="/home/test/Desktop/Prj/output.avi" this command: [ERROR] [1551958949.419922197]: Compressed Depth Image Transport - Compression requires single-channel 32bit-floating point or 16bit raw depth images (input format is: bgr8). OpenCV Error: Assertion failed (key_ != -1 && "Can't fetch data from terminated TLS container.") in getData, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/system.cpp, line 1532 terminate called after throwing an instance of 'cv::Exception' Comment by SUBHASH on 2019-03-07: But .bag file also created. Now how to see whether recorded video is the same or not? Comment by Jasmin on 2019-03-07: rosrun image_view image_view image:=/camera/image_raw and rosbag play your_bag.bag
{ "domain": "robotics.stackexchange", "id": 32590, "tags": "ros, ros-kinetic, catkin-ws" }
Will the upcoming solar eclipse stress the European electrical grid?
Question: From an article in the Financial Times today, about a solar eclipse on March 20th, 2015: An eclipse of the sun next month could disrupt Europe’s power supplies because so many countries now use solar energy, electricity system operators have warned. “The risk of incident cannot be completely ruled out,” the European Network Transmission System Operators for Electricity said on Monday, adding the eclipse on March 20 would be “an unprecedented test for Europe’s electricity system”. [...] ENTSO-E said the eclipse could play a bigger role in places such as Germany, Europe’s largest economy, which now gets more than a quarter of its electricity from renewable generators and like other EU nations is connected with neighbouring countries’ grid systems. The organisation also said it had been planning co-ordinated “countermeasures” for several months to help protect the continent’s power system from the eclipse [...] Patrick Graichen, executive director of Agora Energiewende, a Berlin renewable energy think-tank, said the March 20 eclipse was unlikely to cause any problems because there are several well-known ways of balancing power supplies and there has been plenty of time to plan. But the eclipse will still be a “stress test” of the flexibility of the European power system, he added, because it will have to adapt to a more abrupt shift in solar power generation than would normally occur, especially if it is a sunny day and all solar power stations were producing at full load. “Within 30 minutes the solar power production would decrease from 17.5 gigawatts to 6.2GW and then increase again up to 24.6GW. This means that within 30 minutes the system will have to adapt to a load change of -10GW to +15GW,” he said. The eclipse is supposed to last about 3 hours, but I am skeptical as to how this can be so much more impactful than normal sunlight reduction on a daily basis because of clouds, etc. 
Should the eclipse be expected to be much more impactful than a normal cloudy day? Answer: Any high slew rate (fast rate of change of power) stresses the grid. Lots of things cause high slew rates. People getting ready for work in the morning, having showers, turning lights and appliances on. Factories starting up at the same time. Faults on major international HVDC transmission lines. Safety shutdowns at nuclear reactors. Lots of people turning the kettle on at half-time when England are playing a widely-televised football match. Normally, exogenously-variable renewables such as photovoltaics and wind don't have rapid changes in power at the continental scale. Weather systems tend to be limited to 2000km across at most, which is smaller than Europe. However, an eclipse does affect a very large area in a very short space of time. If it's largely cloudless over Europe at the time of the eclipse, then there will be a large drop in PV, followed by a large increase. Each of these events would be a high slew rate. So, yes, the eclipse will be much more impactful than a normal cloudy day, because you never get a single cloud blocking out the sun from half of Europe, and than vanishing, all within half an hour. You've identified the article's quote from Patrick Graichen, who is executive director of the Berlin renewable energy think-tank Agora Energiewende: “Within 30 minutes the solar power production would decrease from 17.5 gigawatts to 6.2GW and then increase again up to 24.6GW. This means that within 30 minutes the system will have to adapt to a load change of -10GW to +15GW,” Now, given that continental European demand will be well over 150GW (I don't know exactly - somewhere in the range $250\pm100$ GW I suppose), that's not a big proportional change, but it's still a lot of gigawatts of other plant to turn up, and then down again, within half an hour. 
And it's not just a matter of balancing total demand and supply every second, but also managing the flows on all the major transmission links to make sure nothing gets overloaded for very long. The good news is that unlike many high-slew-rate events, the timing of this one is very very predictable. And that makes it much easier to manage. Training for grid operators tends to be based on the last few decades of experience. As we decarbonise the grid, the nature of unusual events will change: the next decade's "ordinary" will not look like the "ordinary" of the 1980s, and similarly for the "extraordinary". This eclipse is one of the new kind of "extraordinary". It's not that we don't have the technology to deal with it: Europe has many many dozens of gigawatts of power-plants that can be turned up or down very rapidly without ill effects (and as it doesn't need to be the same plants that are turned down as turned up, there's no need for individual plants to be stressed by such rapid cycling). And large consumers can be incentivised to adapt their demand to the changing availability of supply. So technically it's not hard. But it is something that's fairly unprecedented, and that always makes grid managers a bit wary.
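A back-of-envelope calculation, using only the figures quoted from Agora Energiewende above, shows why the slew rate rather than the total energy is the concern:

```python
# Figures quoted above for the 2015-03-20 eclipse (Agora Energiewende)
drop = 6.2 - 17.5        # GW swing as the moon covers the sun
rise = 24.6 - 6.2        # GW swing as it uncovers again
window_min = 30.0        # each swing happens within roughly 30 minutes

ramp_down = drop / window_min * 1000   # MW per minute
ramp_up   = rise / window_min * 1000
# Roughly -380 MW/min down, then +610 MW/min up: modest next to total
# European demand, but a steep, continent-wide, synchronized ramp.
```

A ramp of several hundred megawatts per minute, sustained for half an hour and synchronized across the whole interconnected grid, is what makes the eclipse different from an ordinary passing cloud bank.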
{ "domain": "physics.stackexchange", "id": 20012, "tags": "energy, renewable-energy, meteorology, solar-cells" }
Where can i find all the elements that are currently available to be used in the .world file?
Question: Hey, I want to create a sand surface in Gazebo, then design a wheel in SolidWorks and import it into Gazebo so as to be able to test the interactions of the two. My first task is to create the sand surface; to do that I thought I would create a box and turn it into sand (a testbed) by setting its physical properties. Where can I find all the elements that are available to be used in the .world file? If not all the elements that I want to use are available, can I define my own in any way? Also, any general suggestions/ideas regarding the whole project would be very helpful. Thanks in advance! Originally posted by Kyriakos on Gazebo Answers with karma: 5 on 2019-11-25 Post score: 0 Answer: Where can i find all the elements that are available to be used in the .world file? Here: http://sdformat.org/spec?ver=1.6&elem=world If not all the elements that i want to use are available, can i set my own ones by any way? You currently can't do that directly into <world>, but you could add them inside a <plugin> tag that loads your custom plugin and parses them. Originally posted by chapulina with karma: 7504 on 2019-11-25 This answer was ACCEPTED on the original site Post score: 0
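As a starting point for the sand surface, a minimal world per the SDF 1.6 spec linked above might look like the sketch below. The friction values are purely illustrative; real granular behaviour (sinkage, shear) goes beyond rigid-body contact parameters and would likely need a custom plugin, as the answer notes:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="sand_world">
    <include><uri>model://sun</uri></include>
    <model name="sand_patch">
      <static>true</static>
      <link name="link">
        <collision name="collision">
          <geometry><box><size>10 10 0.2</size></box></geometry>
          <surface>
            <friction>
              <!-- illustrative coefficients, not measured sand values -->
              <ode><mu>0.9</mu><mu2>0.9</mu2></ode>
            </friction>
          </surface>
        </collision>
        <visual name="visual">
          <geometry><box><size>10 10 0.2</size></box></geometry>
        </visual>
      </link>
    </model>
  </world>
</sdf>
```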
{ "domain": "robotics.stackexchange", "id": 4463, "tags": "ros-melodic, gazebo-9" }
What is the process for vetting the 4,175 candidate planets Kepler has discovered?
Question: In the National Geographic article about NASA's announcement of new planets discovered by the Kepler spacecraft, it states that there are 4,175 candidate planets. What is the process for vetting these candidate planets? How do we know they exist without knowing whether they are planets? Or are they candidates because the data hints that a planet might be there but we need more detailed data or more analysis before we can confirm it? Answer: Planet "candidates" are Kepler Objects of Interest (KOIs) that have a transit-like light curve and have passed a number of observational tests. They are candidates, because although they do show a transit in the light curve of the star in question, there is no independent confirmation of a planetary mass. One problem to overcome is that of "false positives". There are other astrophysical phenomena besides planets that can mimic an exoplanetary transit in a Kepler light curve. For instance, a grazing-incidence stellar binary can produce what looks like a transit in a light curve; so too can a chance alignment of a star with a background eclipsing binary star. There are also a number of instrumental anomalies, such that a signal in one part of a CCD or the Kepler field can produce a "ghost" at another position that might look like a transit. If you can obtain detailed spectroscopy including accurate radial velocity measurements you can usually rule out most of these false positives, largely by getting a mass constraint for the companion. Even where you have a very clean transit signal and can estimate a planetary radius, a further problem is that a wide range of masses produce objects with very similar radii, i.e. more massive brown dwarfs have very similar radii to exoplanets. Again, only a mass estimate can settle this, obtained either through radial velocity measurements or sometimes through "transit timing variations" if the object is in a multiple-exoplanet system.
Now the problem with the KOIs is that many of these stars are way too faint ($V>14$) for the kind of detailed spectroscopy that is easily possible on the much brighter exoplanet host stars found in, say, the HATNET or WASP ground-based surveys. So, what you can do is tackle the problem statistically, by identifying the kinds of false positives you might have, quantifying their influence, and throwing away suspect objects (see for instance Batalha 2012). Section 4 of this paper describes in detail some of the tests that are done: e.g. comparing the depths of odd and even numbered transits to look for asymmetries that would indicate grazing-incidence binaries; looking between transits for a secondary eclipse that would also indicate a stellar companion; the shape of the transit is diagnostic but cannot easily rule out a planetary candidate; searching for motion of the "photocenter" of the source - if the photocenter moves during a transit it could indicate a background, diluted, eclipsing binary lightcurve. An early paper by Morton & Johnson (2011) claimed, on the basis of a population synthesis approach, that astrophysical false positives were limited to less than 10%. However, a recent paper by Coughlin et al. (2014) discusses how instrumental effects can be tested for by comparing the transit periods with the periods of other known objects in the Kepler field of view. They claim that around 30% of the KOIs may in fact be false positives. Either way it looks like the big majority of the KOIs are indeed exoplanets, but identifying which ones aren't will require detailed follow-up.
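The odd/even transit-depth comparison mentioned above can be sketched as a toy calculation. The function name and the 10% tolerance below are illustrative, not actual Kepler pipeline values:

```python
def depths_consistent(depths, tolerance=0.1):
    """Toy odd/even transit-depth test: a grazing eclipsing binary
    detected at half its true period shows alternating deep and
    shallow events, while a planet's transits all look alike."""
    odd, even = depths[0::2], depths[1::2]
    mean_odd = sum(odd) / len(odd)
    mean_even = sum(even) / len(even)
    # relative difference between alternate-transit mean depths
    diff = abs(mean_odd - mean_even) / max(mean_odd, mean_even)
    return diff <= tolerance        # True -> consistent with a planet

planet_like = depths_consistent([0.010, 0.011, 0.010, 0.010])
binary_like = depths_consistent([0.010, 0.020, 0.011, 0.021])
```

The real vetting combines many such diagnostics (secondary eclipses, photocenter motion, period collisions) before an object survives as a candidate.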
{ "domain": "astronomy.stackexchange", "id": 706, "tags": "planet, exoplanet, kepler" }
Custom Exception.ToString() formatter
Question: I find the default Exception.ToString() formatting very difficult to read, especially if there are inner exceptions involved. I guess everyone knows how it looks: System.ArgumentException: This is a message. ---> UserQuery+TestException: You need to specify a valid id. at UserQuery.foo(Int32 id) in C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:line 56 at UserQuery.Main() in C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:line 37 --- End of inner exception stack trace --- at UserQuery.Main() in C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:line 41 So I thought I'd create my own formatting and later a base exception to get a better result - at least one that is more pleasing to my eyes ;-) TestException » You need to specify a valid id. { ExtraInfo: [ Bar: "Baz" ] StackTrace: [ System.String UserQuery.foo(System.Int32 id) » C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:56 System.Void UserQuery.Main() » C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:37 ] } ArgumentException » This is a message. { ExtraInfo: [ ParamName: "" ] StackTrace: [ System.Void UserQuery.Main() » C:\[...]\LINQPad5\_vheqmalf\query_bkavzy.cs:41 ] } It's similar to JSON but it isn't meant to be JSON. It should be easy to read. I'm not going to parse it. To keep the implementation short I chose to use mainly dynamic objects for the formatter.
It has two methods: Format that takes an exception and turns it into as string GetExceptionInfos that extracts the relevant data from each exception ExceptionStringFormatter class: internal class ExceptionStringFormatter : IExceptionFormatter { private const string Space = " "; private const string CurlyBracketLeft = "{"; private const string CurlyBracketRight = "}"; private const string SquareBracketLeft = "["; private const string SquareBracketRight = "]"; private const string RaquoSpacer = " » "; private const int IndentWidth = 3; public string Format<TException>(TException exception) where TException : Exception { var exceptionInfos = GetExceptionInfos(exception).Reverse(); var exceptionString = new StringBuilder(); foreach (dynamic exceptionInfo in exceptionInfos) { exceptionString .Append(exceptionInfo.ExceptionType.Name).Append(RaquoSpacer) .Append(exceptionInfo.ExceptionMessage ?? "<Message>") .AppendLine(); exceptionString.Append(CurlyBracketLeft).AppendLine(); // format properties exceptionString.Append(Indent(1)).Append("ExtraInfo:").AppendLine(); exceptionString.Append(Indent(1)).Append(SquareBracketLeft).AppendLine(); foreach (var property in (IDictionary<string, object>)exceptionInfo.CustomProperties) { exceptionString .Append(Indent(2)) .Append(property.Key + ": ") .Append("\"" + property.Value + "\"") .AppendLine(); } exceptionString.Append(Indent(1)).Append(SquareBracketRight).AppendLine(); // format stack trace exceptionString.Append(Indent(1)).Append("StackTrace:").AppendLine(); exceptionString.Append(Indent(1)).Append(SquareBracketLeft).AppendLine(); foreach (var stackFrame in exceptionInfo.StackTrace) { exceptionString .Append(Indent(2)) .Append(stackFrame.CallerSignature).Append(RaquoSpacer) .Append(stackFrame.FileName) .Append(":" + stackFrame.LineNumber) .AppendLine(); } exceptionString.Append(Indent(1)).Append(SquareBracketRight).AppendLine(); exceptionString.Append(CurlyBracketRight).AppendLine(); } exceptionString.ToString().Dump(); 
return string.Empty; } internal static IEnumerable<dynamic> GetExceptionInfos<TException>(TException exception) where TException : Exception { foreach (var ex in exception.AsEnumerable()) { var exceptionInfo = new ExpandoObject() as dynamic; exceptionInfo.ExceptionType = ex.GetType(); exceptionInfo.ExceptionMessage = ex.Message; exceptionInfo.CustomProperties = new ExpandoObject() as dynamic; foreach (var property in ex.GetCustomProperties()) { ((IDictionary<string, object>)exceptionInfo.CustomProperties)[property.Key] = property.Value; } var strackTrace = new StackTrace(ex, true); var stackFrames = strackTrace.GetFrames(); exceptionInfo.StackTrace = stackFrames?.Select(sf => { var stackFrame = new ExpandoObject() as dynamic; stackFrame.CallerSignature = sf.GetMethod().Format(); stackFrame.FileName = sf.GetFileName(); stackFrame.LineNumber = sf.GetFileLineNumber(); return stackFrame; }).ToList(); yield return exceptionInfo; } } private static string Indent(int depth) { return string.Empty.PadLeft(IndentWidth * depth); } } There are also two helper classes. One is the ExceptionExtensions. Its purpose is to enumerate the inner exceptions and to get the custom properties from each one. 
internal static class ExceptionExtensions { private static readonly HashSet<string> ExceptionPropertyNames = new HashSet<string>( typeof(Exception) .GetProperties(BindingFlags.Instance | BindingFlags.Public) .Select(p => p.Name)); public static IDictionary<string, object> GetCustomProperties<TException>(this TException exception) where TException : Exception { var currentExceptionProperties = exception.GetType() .GetProperties(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly) .Where(p => !ExceptionPropertyNames.Contains(p.Name)); var result = currentExceptionProperties.ToDictionary(property => property.Name, property => property.GetValue(exception)); return result; } public static IEnumerable<Exception> AsEnumerable(this Exception exception) { var currentException = exception; do { yield return currentException; currentException = currentException.InnerException; } while (currentException != null); } } and the other one is the MethodSignatureFormatter that creates method signature from method-info: internal static class MethodSignatureFormatter { public static string Format(this MethodBase method) { var methodInfo = method as MethodInfo; var signature = new StringBuilder() .Append(methodInfo.ReturnType.FullName) .Append(" ").Append(methodInfo.DeclaringType.FullName) .Append(".").Append(methodInfo.Name) .Append("(") .Append(string.Join(", ", methodInfo.GetParameters().Select(p => p.ParameterType.FullName + " " + p.Name))) .Append(")"); return signature.ToString(); } } Base exception with custom formatting: public abstract class FormattableException : Exception { protected FormattableException(string message, Exception innerException) : base(message, innerException) { } protected FormattableException(string message) : base(message) { } public IExceptionFormatter Formatter { get; set; } = new ExceptionStringFormatter(); public override string ToString() { return Formatter?.Format(this) ?? 
base.ToString(); } } Demo: void Main() { try { try { foo(9); } catch (Exception ex) { throw new ArgumentException("This is a message.", ex); } } catch (Exception ex) { Console.WriteLine(ex.ToString()); Console.WriteLine("-"); Console.WriteLine(new ExceptionStringFormatter().Format(ex)); } } string foo(int id) { throw new TestException("You need to specify a valid id.") { Bar = "Baz" }; } Answer: private const string Space = " "; There's little point in storing this in a variable: it's never going to change anyway and you're not clarifying any of the intent. I'm fully aware that is a space and { is a left curly bracket. Using dynamic is a very, very expensive action. By defaulting to it everywhere instead of using an intermediate type you're incurring heavy costs on yourself. .Append(exceptionInfo.ExceptionMessage ?? "<Message>") I'm not sure if <Message> is a default I'd like to see. It makes it feel as if something wasn't parsed correctly -- perhaps you can just leave it empty or give it an actual description like "No message found"? internal class ExceptionStringFormatter : IExceptionFormatter It has no instance fields so you might as well make it a static class with static members. That way you don't have to create a new object every time you want to format an exception string. exceptionString.ToString().Dump(); return string.Empty; Testing remnant? var exceptionInfo = new ExpandoObject() as dynamic; Might as well just write dynamic exceptionInfo = new ExpandoObject(); AsEnumerable(this Exception exception) Seems like a questionable name for what it does. Perhaps something like GetInnerExceptions() is more appropriate considering that's what it is actually implemented as and not just a projection of one exception to a list of exceptions. var methodInfo = method as MethodInfo; Use a direct cast or perform a null check. 
Right now you're just using the result directly, which could mean you get a NullReferenceException -- an InvalidCastException from a direct cast would be much more helpful in identifying the issue.
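The inner-exception walk that `AsEnumerable` performs translates almost directly to other languages. As a rough sketch (my own, not part of the reviewed code), here is the same idea in Python, where chained exceptions hang off `__cause__`/`__context__` instead of `InnerException`:

```python
def exception_chain(exc):
    """Yield an exception followed by its causes, outermost first
    (the Python analogue of walking Exception.InnerException)."""
    while exc is not None:
        yield exc
        exc = exc.__cause__ if exc.__cause__ is not None else exc.__context__

def format_chain(exc):
    # innermost exception first, like the reviewed formatter (it calls Reverse())
    return "\n".join(f"{type(e).__name__} » {e}"
                     for e in reversed(list(exception_chain(exc))))
```

The `» `-separated header line mirrors the question's target output; everything else (property dumps, stack frames) would layer on top of this walk the same way `GetExceptionInfos` does.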
{ "domain": "codereview.stackexchange", "id": 20547, "tags": "c#, strings, formatting, reflection, exception" }
Make this Ruby array pivot more efficient?
Question: A puzzle I was given: Description: Write a method that returns the "pivot" index of a list of integers. We define the pivot index as the index where the sum of the numbers on the left is equal to the sum of the numbers on the right. Given [1, 4, 6, 3, 2], the method should return 2, since the sum of the numbers to the left of index 2 is equal to the sum of numbers to the right of index 2 (1 + 4 = 3 + 2). If no such index exists, it should return -1. If there are multiple pivots, you can return the left-most pivot. You can write the method in any language. Make sure that the method: runs successfully handles all edge cases is as efficient as you can make it! Apparently my solution is not efficient enough: def pivot(arr) results = [] arr.each_with_index do |n,index| last = arr.size - 1 sum_left = arr[0..index].inject(:+) sum_right = arr[index..last].inject(:+) results << index if sum_left == sum_right end results.any? ? results.first : -1 end Answer: Ruby allows -1 as an index that means last, so you don't have to calculate it at all. Calculating the whole left_sum every time is repeating work since it is always the previous left_sum + arr[index-1] (except for when index = 0). Similarly the right_sum is always the previous right_sum - arr[index]. You don't have to gather all results, so you can terminate early on finding the leftmost solution, or as soon as sum_left > sum_right (assuming there are no negative numbers in arr?) you know there is no solution, so can return -1. 
For example (untested) def find_pivot(arr) sum_left = -arr[-1] sum_right = arr.inject(:+) arr.each_index do |i| sum_left += arr[i-1] sum_right -= arr[i] if sum_left == sum_right return i elsif sum_right < sum_left # assuming there are no negative numbers we already know there's no solution return -1 end end return -1 # in case we somehow reach the end without a solution or early termination end Initialising sum_left to -arr[-1] is a trick to save on having to add an if statement to detect and handle the first iteration of the loop differently, since it cancels out the effect of sum_left += arr[0-1], which would otherwise make sum_left jump to the last element of the array.
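For comparison — and since the Ruby above is flagged as untested — here is the same single-pass running-sum idea sketched in Python. It drops the early `sum_right < sum_left` bailout, so it also stays correct when the array contains negative numbers:

```python
def find_pivot(arr):
    """Return the left-most index where the sum of elements to the left
    equals the sum of elements to the right, or -1 if none exists."""
    total = sum(arr)
    left = 0
    for i, v in enumerate(arr):
        # the right-hand sum, excluding arr[i], is total - left - v
        if left == total - left - v:
            return i
        left += v
    return -1
```

One pass to compute `total`, one pass over the candidates: O(n) time, O(1) extra space, versus the O(n²) of the original nested `inject` calls.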
{ "domain": "codereview.stackexchange", "id": 6420, "tags": "ruby" }
Which linear algebra book should I read to understand vectorized operations?
Question: I am reading Goodfellow's book about neural networks, but I am stuck in the mathematical calculus of the back-propagation algorithm. I understood the principle, and some Youtube videos explaining this algorithm shown step-by-step, but now I would like to understand the matrix calculus (so not basic calculus!), that is, calculus with matrices and vectors, but especially everything related to the derivatives with respect to a matrix or a vector, and so on. Which math book could you advise me to read? I specify I studied 2 years after the bachelor in math school (in French: mathématiques supérieures et spéciales), but did not practice for years. Answer: If you already have two years of a bachelor's of mathematics, I recommend part I of the book that you're mentioning. That part of the book reviews the main mathematics used in the optimization of neural nets (in part 1), and then actually goes through the various models in detail in the later parts. The review is done at a level that is suitable for someone who has already studied these topics, but needs a refresher. The book Matrix Differential Calculus with Applications in Statistics and Econometrics covers more advanced topics, which might also be what you are looking for. There is also the related Wikipedia article.
{ "domain": "ai.stackexchange", "id": 1470, "tags": "neural-networks, reference-request, linear-algebra, books, calculus" }
what is square root of a density matrix power two?
Question: I know that in algebra for a variable we have $ \sqrt {x^2} = |x|$ What if $x$ is a density matrix? Please share resource for your answer. Answer: If $\rho$ is a density matrix, then $\sqrt{\rho^2} = \rho$. To see why this is, let's start with the definition of the square root of a matrix. Ordinarily, if $A$ is a square matrix, there may be multiple choices of a square matrix $B$ such that $B^2 = A$. However, if $P$ is a positive semidefinite matrix, then there is a unique choice of a positive semidefinite matrix $Q$ such that $Q^2 = P$, and when people write $\sqrt{P}$ for a positive semidefinite matrix $P$, this is what is most typically meant. You can find a proof of this claim (that there exists a unique positive semidefinite matrix $Q$ such that $Q^2 = P$) by taking $k=2$ in Theorem 7.2.6 of Horn and Johnson, Matrix Analysis. Once we have the definition of $\sqrt{\rho^2}$ in place, it's pretty trivial: $\rho$ is positive semidefinite and $\rho^2 = \rho^2$, so we have our unique positive semidefinite square root: $\sqrt{\rho^2} = \rho$.
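A quick numerical sanity check of $\sqrt{\rho^2}=\rho$ (my own sketch, not from the answer): for a $2\times 2$ positive-semidefinite matrix the principal square root has a closed form via the Cayley–Hamilton theorem, $\sqrt{A} = (A + sI)/\sqrt{\operatorname{tr}A + 2s}$ with $s = \sqrt{\det A}$, which lets us verify the claim in plain Python:

```python
import math

def sqrt_psd_2x2(A):
    """Principal (PSD) square root of a 2x2 positive-semidefinite matrix,
    using sqrt(A) = (A + s*I) / sqrt(tr(A) + 2s) with s = sqrt(det(A))."""
    (a, b), (c, d) = A
    s = math.sqrt(a * d - b * c)
    tau = math.sqrt(a + d + 2 * s)
    return [[(a + s) / tau, b / tau],
            [c / tau, (d + s) / tau]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rho = [[0.7, 0.2], [0.2, 0.3]]               # a valid density matrix: PSD, trace 1
recovered = sqrt_psd_2x2(matmul2(rho, rho))  # sqrt(rho^2) should give back rho
```

(The formula assumes $A \neq 0$ so the denominator is nonzero; for larger matrices one would diagonalize and take square roots of the eigenvalues, exactly as in the uniqueness proof the answer cites.)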
{ "domain": "quantumcomputing.stackexchange", "id": 4898, "tags": "mathematics, density-matrix, resource-request" }
How to convert a ROS message file to an IDL file?
Question: What is the terminal command to convert a ros message file to an idl file? Originally posted by StewartHemm74 on ROS Answers with karma: 46 on 2022-05-18 Post score: 0 Answer: The rosidl_adapter core package exposes both scripts and an API to perform these conversions. For your case, use the msg2idl.py script: ros2 run rosidl_adapter msg2idl.py /path/to/msg/file Originally posted by aprotyas with karma: 456 on 2022-05-18 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by StewartHemm74 on 2022-05-19: Thank you!
{ "domain": "robotics.stackexchange", "id": 37684, "tags": "ros, ros2, rosmessage, dds" }
2-SAT or 3-SAT or k-SAT in AC-0
Question: This may be an elementary question, but I'm new to circuit complexity. Does 2-SAT in CNF form belong to the complexity class AC$^0$? It seems simple enough to construct an AC$^0$ circuit of depth 2 of polynomial size for 2-SAT. $~$Given $n$ variables (which means $2n$ literals), there can be at most $\frac{(2n)(2n-1)}{2} = n(2n-1)$ number of OR gates at Level one, because, given $2n$ literals, there can be at most $n(2n-1)$ clauses. At Level 2, there is just one gate (an AND gate). But we can also extend this to 3-SAT. $~$The only difference is that at Level one, the maximum number of OR gates would now be ($2n$ choose $3$), which is $O(n^3)$, still polynomial in the number of variables. Doesn't this mean that 3-SAT is also a member of AC$^0$? Wouldn't this mean that 3-SAT is polynomially solvable? What am I doing wrong? Answer: 2SAT is the following problem: given $\varphi$ (a Boolean formula in 2-CNF), is $\varphi$ satisfiable? To show that 2SAT is in $\textsf{AC}^0$, you must show a constant-depth, polynomial-size circuit that can solve 2SAT. In other words, you need to come up with an algorithm to solve 2SAT, and then show that the algorithm can be implemented by an $\textsf{AC}^0$ circuit. In your question, you showed that $\varphi$ is in $\textsf{AC}^0$, but that's not what we need. We need to know that the algorithm to solve 2SAT (i.e., to check whether $\varphi$ is satisfiable) is in $\textsf{AC}^0$, not that $\varphi$ is in $\textsf{AC}^0$.
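To make the distinction concrete: an actual polynomial-time algorithm for 2SAT (the thing an $\textsf{AC}^0$ circuit would have to implement) is the classic implication-graph construction — each clause $(a \lor b)$ contributes edges $\lnot a \to b$ and $\lnot b \to a$, and $\varphi$ is unsatisfiable iff some variable lands in the same strongly connected component as its negation. A rough Python sketch of that algorithm (mine, not from the answer):

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.
    Literal +v means "variable v true", -v means "variable v false".
    Satisfiable iff no v shares an SCC with -v (Aspvall-Plass-Tarjan)."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b); rgraph[b].append(-a)   # (a or b) == (not a -> b)
        graph[-b].append(a); rgraph[a].append(-b)   #          == (not b -> a)
    nodes = [v for v in range(1, n + 1)] + [-v for v in range(1, n + 1)]
    # Kosaraju pass 1: record finish order with an iterative DFS
    order, seen = [], set()
    for start in nodes:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph[w])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Kosaraju pass 2: label SCCs on the reversed graph
    comp, label = {}, 0
    for u in reversed(order):
        if u in comp:
            continue
        label += 1
        stack, comp[u] = [u], label
        while stack:
            node = stack.pop()
            for w in rgraph[node]:
                if w not in comp:
                    comp[w] = label
                    stack.append(w)
    return all(comp[v] != comp[-v] for v in range(1, n + 1))
```

This runs in linear time — and the reachability computation at its heart is precisely the kind of step that is not known to be doable in $\textsf{AC}^0$, which is why the question's depth-2 circuit for the *formula* says nothing about the complexity of *deciding* it.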
{ "domain": "cs.stackexchange", "id": 4028, "tags": "circuits" }
Add odd/even harmonics to signal?
Question: How do I add odd or even harmonics to a floating point signal? Do I have to use tanh or sin? What I'm trying to do is achieve some very simple distortion effects, but I'm having a hard time finding exact references. What I'd like is something similar to what the Culture Vulture does by adding odd and even harmonics in its pentode and triode settings. The float value is a single sample in a sample flow. Answer: What your distortion box does is apply a non-linear transfer function to the signal: output = function(input) or y = f(x). You're just applying the same function to every individual input sample to get the corresponding output sample. When your input signal is a sine wave, a specific type of distortion is produced called harmonic distortion. All of the new tones created by the distortion are perfect harmonics of the input signal: If your transfer function has odd symmetry (can be rotated 180° about the origin), then it will produce only odd harmonics (1f, 3f, 5f, ...). An example of a system with odd symmetry is a symmetrically-clipping amplifier. If your transfer function has even symmetry (can be reflected across the Y axis), then the harmonics produced will only be even-order harmonics (0f, 2f, 4f, 6f, ...) The fundamental 1f is an odd harmonic, and gets removed. An example of a system with even symmetry is a full-wave rectifier. So yes, if you want to add odd harmonics, put your signal through an odd-symmetric transfer function like y = tanh(x) or y = x^3. If you want to add only even harmonics, put your signal through a transfer function that's even symmetric plus an identity function, to keep the original fundamental. Something like y = x + x^4 or y = x + abs(x). The x + keeps the fundamental that would otherwise be destroyed, while the x^4 is even-symmetric and produces only even harmonics (including DC, which you probably want to remove afterwards with a high-pass filter). 
Even symmetry: Transfer function with even symmetry: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing only even harmonics and no fundamental: Odd symmetry: Transfer function with odd symmetry: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing only odd harmonics, including fundamental: Even symmetry + fundamental: Transfer function with even symmetry plus identity function: Original signal in gray, with distorted signal in blue and spectrum of distorted signal showing even harmonics plus fundamental: This is what people are talking about when they say that a distortion box "adds odd harmonics", but it's not really accurate. The problem is that harmonic distortion only exists for sine wave input. Most people play instruments, not sine waves, so their input signal has multiple sine wave components. In that case, you get intermodulation distortion, not harmonic distortion, and these rules about odd and even harmonics no longer apply. For instance, applying a full-wave rectifier (even symmetry) to the following signals: sine wave (fundamental odd harmonic only) → full-wave rectified sine (even harmonics only) square wave (odd harmonics only) → DC (even 0th harmonic only) sawtooth wave (odd and even harmonics) → triangle wave (odd harmonics only) triangle wave (odd harmonics only) → 2× triangle wave (odd harmonics only) So the output spectrum depends strongly on the input signal, not the distortion device, and whenever someone says "our amplifier/effect produces more-musical even-order harmonics", you should take it with a grain of salt. (There is some truth to the claim that sounds with even harmonics are "more musical" than sounds with only odd harmonics, but these spectra aren't actually being produced here, as explained above, and this claim is only valid in the context of Western scales anyway. Odd-harmonic sounds (square waves, clarinets, etc.) 
are more consonant on a Bohlen–Pierce musical scale based around the 3:1 ratio instead of the 2:1 octave.) Another thing to remember is that digital non-linear processes can cause aliasing, which can be badly audible. See Is there such a thing as band-limited non-linear distortion?
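The symmetry rules above are easy to check numerically. The sketch below (my own illustration; the drive level `tanh(2x)` and the `x + x⁴` shaper are arbitrary choices matching the answer's examples) pushes one sine wave through each waveshaper and measures individual harmonics with a direct DFT:

```python
import math, cmath

N, CYCLES = 1024, 8  # 8 full cycles of the fundamental per analysis frame
x = [math.sin(2 * math.pi * CYCLES * n / N) for n in range(N)]

def harmonic_mag(sig, k):
    """Magnitude of the k-th harmonic of the fundamental (CYCLES cycles/frame)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * CYCLES * k * n / N)
                   for n, s in enumerate(sig))) / N

odd_out = [math.tanh(2 * s) for s in x]   # odd symmetry: odd harmonics only
even_out = [s + s ** 4 for s in x]        # identity + even-symmetric part
```

With this setup the odd-symmetric curve shows energy at harmonics 1, 3, 5, … and essentially none at 2 and 4, while `x + x⁴` keeps the fundamental and adds harmonics 2 and 4 — exactly the behavior described above (the `x` term preserving the fundamental, the `x⁴` term supplying the even harmonics plus DC).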
{ "domain": "dsp.stackexchange", "id": 586, "tags": "audio, signal-detection, c, distortion" }
Classical notion of trajectory
Question: Why is the classical notion of trajectory meaningless in quantum mechanics? I am asking here about the notion of trajectory from classical mechanics, and why in quantum mechanics we cannot use it, or why it is meaningless. Answer: In classical mechanics, each body has an exact position at all times (and hence an exact speed given by its derivative, and the equivalent quantities for rotary motion). To understand the difference to quantum mechanics, think of it as wave mechanics: Each body is described by a wave(function) and properties such as position or speed are only defined to the extent that they can be defined for a wave: Not arbitrarily well, at least not all such quantities simultaneously.
{ "domain": "physics.stackexchange", "id": 21266, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, measurement-problem, observables" }
Kinect problem - [/openni_camera] No devices connected
Question: Hi, there, I'm trying to connect Kinect with turtlebot, but it is not working. First of all, I was following instructions on how to "hack" kinect cable and connect it with power board which has gyro. I cut power cable as it was shown and made an adapter to connect with power board. Then, I've connected Kinect with my TrimSlice - I see green light on Kinect start blinking. I've plugged second cable into the power board and run: roslaunch turtlebot_calibration calibrate.launch After all processes have started I consistently see: [INFO] [WallTime: 1321466338.673699] Still waiting for scan [INFO] [WallTime: 1321466338.975991] Still waiting for scan [INFO] [WallTime: 1321466339.277746] Still waiting for scan [INFO] [WallTime: 1321466339.580285] Still waiting for scan [ INFO] [1321466339.599740633]: [/openni_camera] No devices connected.... waiting for devices to be connected ... Ctrl+C doesn't stop this process and I keep getting this output until I kill the process. I searched for the error on Google and here but didn't find anything I could apply to my situation. Most of the questions about the Kinect relate to it once it works, whereas I can't even start it. Any ideas what I was doing wrong and/or how I can fix it? Thanks. -Roman. Originally posted by Roman Burdakov on ROS Answers with karma: 131 on 2011-11-16 Post score: 0 Original comments Comment by Roman Burdakov on 2011-11-21: I don't know how but it finally got through. I was trying many different things and apparently one of them worked. Now I'm getting different errors but this is a subject for a separate topic. Thanks! Comment by mmwise on 2011-11-16: can you view images out of the kinect? that would be the first thing to check. when the calibration says waiting for scan, a scan is not being published which could mean in turn that a pointcloud isn't being published. Answer: can you view images out of the kinect? that would be the first thing to check.
when the calibration says waiting for scan, a scan is not being published which could mean in turn that a pointcloud isn't being published Originally posted by mmwise with karma: 8372 on 2012-02-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7320, "tags": "ros, kinect, turtlebot, robot-calibration, openni-camera" }
Find first non-repetitive char in a string
Question: I just had an interview for the C#.NET developer position and I was asked to find the first non repetitive char in a given string without using any library functions but just the basic constructs of the language such as fors and ifs. I presented this solution considering the quick time constraint. I request evaluation of the solution of how truly it serves the question. public static char findFirstNonRepetitiveChar(string input) { char c = '0'; for (int i = 0; i < input.Length; i++) { bool isdistinct = true; for (int j = 0; j < input.Length; j++) { if (input[i] == input[j] && i!=j) { isdistinct = false; break; } } if (isdistinct) { c = input[i]; break; } } return c; } Answer: first problem is, an empty string returns 0 as does :- findFirstNonRepetitiveChar("000"); this is going to take a looooooong time :- findFirstNonRepetitiveChar(new string('a', 500000) + new string('b', 500000)); Alternatives as suggested use hashtables which if you are going to use no library, are going to take a while to get going from scratch. An alternative, but still pretty fast is to remove repeats from the string as you find them, this then becomes dependent on the number of different characters used...which for most normal text isn't too much, but for a devious unicode string, it could add up, I'm presuming using string.Replace and StringBuilder is a no no :- public static string Strip(string s, char c) { var r = new char[s.Length]; int i=0; foreach (var b in s) { if (b != c) r[i++] = b; } return new string(r,0,i); } public static char? findFirstNonRepetitiveChar(string s) { int x = 1; while (x < s.Length) { for (x = 1; x < s.Length; x++) { if (s[x] == s[0]) { s = Strip(s, s[0]); } } } return (s.Length > 0)? (char?)s[0] : null; }
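A natural follow-up to the quadratic interview solution is the linear two-pass version: count every character, then scan again for the first count of one. The `'0'`-sentinel problem the answer points out also disappears if you return a real "nothing" value. A Python sketch of that approach (using `Counter`, which the interview's "no library functions" constraint would replace with a hand-rolled dict — the structure is the same):

```python
from collections import Counter

def first_non_repeating(s):
    """O(n): count each character, then return the first one occurring once.
    Returns None (rather than a sentinel like '0') when there is no answer."""
    counts = Counter(s)
    for c in s:
        if counts[c] == 1:
            return c
    return None
```

This keeps both of the answer's pathological cases honest: `"000"` is distinguishable from a string whose answer *is* the character `'0'`, and a million-character input needs only two linear scans instead of up to 10¹² comparisons.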
{ "domain": "codereview.stackexchange", "id": 22544, "tags": "c#, strings, interview-questions" }
Transfer function intuition
Question: What is the meaning of the transfer function of a filter? Please explain intuitively with an example if possible. Answer: Normally, in electrical engineering, we apply the terms "transfer function" and "filter" to an operation that belongs in the class we call Linear Time-Invariant systems (LTI). Sometimes you might read something from someone where they apply either term to a nonlinear operation. A "filter" in statistics usually means a different thing than what EEs mean. So the use of "filter" is deprecated in this EE context. Now either a linear or nonlinear operator will take an input signal $x(t)$ or $x[n]$ and will transform it into an output signal $y(t)$ or $y[n]$. The algorithm or rules or math of that operation is what defines what the operator is, linear or nonlinear, time-variant or time-invariant, deterministic or stochastic. If the operator is LTI, then I would call it either an "LTI system" or a "filter". For all practical purposes, what defines an operator as linear is $$\begin{align} \text{if } \quad y_1(t) &= \operatorname{LTI}\Big\{ x_1(t) \Big\} \\ \text{and } \quad y_2(t) &= \operatorname{LTI}\Big\{ x_2(t) \Big\} \\ \\ \text{then } \quad y_1(t)+y_2(t) &= \operatorname{LTI}\Big\{ x_1(t)+x_2(t) \Big\} \\ \end{align}$$ and what defines it as time-invariant is $$\begin{align} \text{if } \quad y(t) &= \operatorname{LTI}\Big\{ x(t) \Big\} \\ \\ \text{then } \quad y(t-\tau) &= \operatorname{LTI}\Big\{ x(t-\tau) \Big\} \quad \text{for any real } \tau \\ \end{align}$$ Now, if your operator satisfies those two conditions, I will call it a "filter". When a system is LTI, then we can apply all sorts of math on the operator and the signals going in and coming out of the operator. This includes the Fourier Transform and the Laplace Transform (for discrete-time signals, $x[n]$ and $y[n]$, this would be the Discrete-Time Fourier Transform and the Z Transform, instead).
Probably, the first result we get from Linear System Theory (also called "Signals and Systems" for some textbooks), is that the output $y(t)$ can always be related to the input $x(t)$ by the operation of convolution: $$\begin{align} y(t) &= \int\limits_{-\infty}^{\infty} h(u) x(t-u) \ \mathrm{d}u \\ &= \int\limits_{-\infty}^{\infty} h(t-u) x(u) \ \mathrm{d}u \\ \end{align}$$ $h(t)$ is the impulse response, and in a sense, fully defines the operation of the filter or LTI system. I.e. if you know how your LTI system responds to a Dirac impulse as an input, you know how your LTI system will respond to any input. Then, whether we use the Fourier Transform $$\begin{align} X(f) &= \int\limits_{-\infty}^{\infty} x(t) \, e^{-j 2 \pi f t} \ \mathrm{d}t \end{align}$$ or the (bilateral) Laplace Transform $$\begin{align} X(s) &= \int\limits_{-\infty}^{\infty} x(t) \, e^{-s t} \ \mathrm{d}t \end{align}$$ then the above convolution operation in the time domain becomes a simpler multiplication operation in the frequency domain (or sometimes the "transform domain"): $$\begin{align} Y(f) &= H(f) \, X(f) \\ \\ Y(s) &= H(s) \, X(s) \\ \end{align}$$ where $X(\cdot)$ and $Y(\cdot)$ and $H(\cdot)$ are all defined the same way from $x(t)$, $y(t)$ and $h(t)$. The thing that multiplies the (transform of the) input to become the (transform of the) output is the transfer function. That means $H(f)$ or $H(s)$ transfers the input $X(f)$ or $X(s)$ to the output $Y(f)$ or $Y(s)$. But that is only in the frequency domain. In the time domain, the convolution operator (convolving the input $x(t)$ with the impulse response $h(t)$) is what your filter does. Either the transfer function ($H(f)$ or $H(s)$) or the impulse response ($h(t)$) are sufficient to fully define your LTI system (or filter) from the POV of the input to output relationship. I.e. two different systems with identical impulse responses $h_1(t) = h_2(t)$ are indistinguishable from each other if both are placed in black boxes. 
The same can be said about their transfer functions $H_1(s) = H_2(s)$. If their transfer functions are identical, from just looking at the inputs and outputs, we cannot tell any difference between the two LTI systems.
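The time-domain/frequency-domain equivalence above can be demonstrated numerically: convolve $x$ with $h$ directly, then do the same job by multiplying DFTs (zero-padded to the full output length so circular convolution matches linear convolution). A small self-contained sketch with made-up signals:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def convolve(x, h):
    # direct time-domain convolution: y[m] = sum_k h[k] x[m-k]
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 2.0, 0.5, -1.0]   # arbitrary input signal
h = [0.5, 0.25, 0.125]      # impulse response of a toy filter
L = len(x) + len(h) - 1
X = dft(x + [0.0] * (L - len(x)))
H = dft(h + [0.0] * (L - len(h)))
y_freq = [v.real for v in idft([Xk * Hk for Xk, Hk in zip(X, H)])]
y_time = convolve(x, h)
```

`y_time` and `y_freq` agree to floating-point precision: multiplying by the transfer function $H$ in the frequency domain *is* convolving with the impulse response $h$ in the time domain.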
{ "domain": "dsp.stackexchange", "id": 6727, "tags": "filters, fourier-transform, transfer-function" }
Measurement uncertainty basics
Question: $$x = (\overline{x}-K)\pm \Delta x\tag{1}$$ $$\Delta x = s_{\overline{x}} = \frac{t}{\sqrt{n}} s_x \tag{2}$$ $$s_x = \sqrt{\frac 1 {n-1} \sum^{n}_{i=1}(x_i-\overline{x})^2} \tag{3}$$ $$\frac {\Delta x}{\overline{x}}\times 100\% \tag{4}$$ $$\text{Absolute: }U = 8.11V\pm0.02 V\tag{5}$$ $$\text{Relative: }U = 8.11(1\pm0.0025) V\tag{6}$$ 1. How does omitting the $t$ affect the $\Delta x$ (especially for small $n$)? The *physical formulae* sheet for my physics lecture omits the $t$ in the equation. Since in one of my problems the professor didn't use it either, I assume this is okay at least for this basic level course. I somewhat understand its meaning. 2. If the relative measurement uncertainty is defined as in $(4)$, how do you arrive at $(6)$ from $(5)$? These values were given in the solution to a problem without further explanation. (LaTeX question: Is there a way to make the overline for the x appear closer to the x?)
So there is a great deal of theory in deciding the appropriate scale factor (in this case a critical $t$ for the $t$-test). But, having said that, these scale factors don't vary too much and you can use the $\Delta x$ to get a feel for whether a deviation is physically meaningful: to find something one standard deviation away from the mean is not very uncommon, and wouldn't raise eyebrows, two SDs is a bit more uncommon, three SDs mean you really need to look at your null hypothesis carefully, whilst when we get to somewhere between 4.5 and 6 sigma, as in the data suggesting the Higgs Boson, you're on to something enough to award a Nobel Prize. In this last case, the null hypothesis is that the background experimental variability in the collider experiment could account for the little "bump" that bespeaks the Higgs Boson. It's possible, and it still IS possible. It's just that after so many reproductions of the result it's highly unlikely. Lastly, (6) is obtained from (5) by dividing the "absolute" uncertainty by the nominal mean ($0.0025 \approx 0.02 / 8.11$). "Proportional" or "Fractional" uncertainty would be a better word, but unfortunately "Relative" is also current usage: you can express it either as a percentage, or as an unadorned ratio, or as a "parts per million" or "billion".
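Both points reduce to one-line arithmetic, sketched here (variable names are mine): the standard error $\Delta x = t\,s_x/\sqrt{n}$ with and without the $t$ factor, and the absolute→relative conversion from (5) to (6):

```python
import math

def uncertainty(samples, t=1.0):
    """Mean and Delta-x = t * s_x / sqrt(n); t = 1 reproduces the
    lecture formula that simply omits the t factor."""
    n = len(samples)
    mean = sum(samples) / n
    s_x = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return mean, t * s_x / math.sqrt(n)

# (5) -> (6): divide the absolute uncertainty by the nominal value
mean_U, abs_unc = 8.11, 0.02
rel_unc = abs_unc / mean_U   # ~0.0025, so U = 8.11 * (1 +/- 0.0025) V
```

Since $t$ enters only as a multiplicative prefactor, omitting it just rescales $\Delta x$; the rescaling matters most for small $n$, where the critical $t$ is well above 1.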
{ "domain": "physics.stackexchange", "id": 9814, "tags": "measurements, error-analysis" }
Use the Euler-Lagrange equation to find the equations of motion
Question: I have to use the Euler-Lagrange equation $$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 $$ to go from $$ L = T - V = \sum_{n=1}^N \frac{m}{2} {\dot{q}_n}^2 - \sum_{n=1}^N \frac{K}{2} {\left( q_{n+1} - q_n \right)}^2 $$ to $$ m\ddot{q}_n + \frac{K}{2} \left[ 2 \left(q_n - q_{n-1}\right) - 2 \left(q_{n+1} - q_n\right) \right] = 0 $$ or $$ m\ddot{q}_n = K \left( q_{n+1} + q_{n-1} - 2q_n \right) $$ So I started with $$ \frac{\partial L}{\partial \dot{q}} = \frac{m}{2} \sum_{n=1}^N \frac{\partial}{\partial \dot{q}} \left( {\dot{q}_n}^2 \right) = \sum_{n=1}^N m\dot{q}_n $$ $$ \frac{d}{dt} \left( \sum_{n=1}^N m\dot{q}_n \right) = \sum_{n=1}^N m\ddot{q}_n $$ Then on the second sum I'm missing something $$ \frac{\partial L}{\partial q} = -\frac{K}{2} \sum_{n=1}^N \frac{\partial}{\partial q} {\left( q_{n+1} - q_n \right)}^2 $$ How does this get anywhere close to the 2nd term in the 3rd equation? I figured that perhaps I should recalculate for every n, so rewriting the equations: $$ \frac{\partial L}{\partial \dot{q}_n} = \frac{m}{2} \sum_{n=1}^N \frac{\partial}{\partial \dot{q}_n} \left( {\dot{q}_n}^2 \right) = \sum_{n=1}^N m\dot{q}_n $$ $$ \frac{d}{dt} \left( \sum_{n=1}^N m\dot{q}_n \right) = \sum_{n=1}^N m\ddot{q}_n $$ $$ \frac{\partial L}{\partial q_n} = -\frac{K}{2} \sum_{n=1}^N \frac{\partial}{\partial q_n} {\left( q_{n+1} - q_n \right)}^2 = -\frac{K}{2} \sum_{n=1}^N 2{\left( q_{n+1} - q_n \right)}(-1) $$ $$ \frac{\partial L}{\partial q_n} = \frac{K}{2} \sum_{n=1}^N 2{\left( q_{n+1} - q_n \right)} $$ So for n+1 $$ \frac{\partial L}{\partial \dot{q}_{n+1}} = \frac{m}{2} \sum_{n=1}^N \frac{\partial}{\partial \dot{q}_{n+1}} \left( {\dot{q}_n}^2 \right) = 0 $$ $$ \frac{\partial L}{\partial q_{n+1}} = -\frac{K}{2} \sum_{n=1}^N \frac{\partial}{\partial q_{n+1}} {\left( q_{n+1} - q_n \right)}^2 = -\frac{K}{2} \sum_{n=1}^N 2{\left( q_{n+1} - q_n \right)} $$ Now there are 2 (coupled?)
equations: $$ m\ddot{q}_n - \frac{K}{2} 2{\left( q_{n+1} - q_n \right)} = 0 $$ $$ \frac{K}{2} 2{\left( q_{n+1} - q_n \right)} = 0 $$ Which results in $$ m\ddot{q}_n + K \left[ {\left( q_{n+1} - q_n \right) - \left( q_{n+1} - q_n \right)} \right] = 0 $$ If I can rewrite one of the parentheses n+1 -> n and n -> n-1, then I can get the desired result. $$ m\ddot{q}_n + K \left[ {\left( q_{n} - q_{n-1} \right) - \left( q_{n+1} - q_n \right)} \right] = 0 $$ $$ m\ddot{q}_n + K {\left(- q_{n-1} - q_{n+1} + 2q_n \right)} = 0 $$ $$ m\ddot{q}_n = K {\left(q_{n-1} + q_{n+1} - 2q_n \right)} $$ But this sounds like BS, so I have no idea what I'm doing. Enlighten me. Answer: We have $$L= \frac{m}{2}\sum_{n~=~1}^N \dot{q}_n^2 - \frac{K}{2}\sum_{n~=~1}^N(q_{n+1}-q_n)^2$$ Consider the equations of motion for the $k$-th particle: $$ \frac{\partial L}{\partial q_k}-\frac{\mathrm d}{\mathrm dt}\frac{\partial L}{\partial \dot{q}_k} = 0 $$ The velocity term is easy: $$ \frac{\mathrm d}{\mathrm dt}\frac{\partial L}{\partial \dot{q}_k} = \frac{\mathrm d}{\mathrm dt}(m \dot{q}_k)=m\ddot{q}_k$$ With the positions, notice that each coordinate appears in two terms, so $$ \begin{align}\frac{\partial L}{\partial q_k} &= \frac{\partial}{\partial q_k}\left[-\frac{K}{2}(q_k-q_{k-1})^2-\frac{K}{2}(q_{k+1}-q_k)^2\right] \\ &= -K(q_k-q_{k-1})+K(q_{k+1}-q_k)=K(q_{k+1}+q_{k-1}-2q_k)\end{align} $$ So, the eom for the $k$-th particle is $$ m\ddot{q}_k = K(q_{k+1}+q_{k-1}-2q_k) $$ (obviously for $1<k<N$). Some comments: Notice that I have intentionally used an index $k$ for the equation of motion being derived that is distinct from the dummy index $n$ appearing in the sum. This is a very good practice when working with index notation that can help you avoid mistakes when performing these calculations.
If you are new to index notation, I recommend performing these computations explicitly for some given $N$ (say, $N=3$ or $4$), expanding the sums to actually see how it works and to convince yourself that the method is right.
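Following the answer's suggestion to check the computation explicitly, here is a small Python sketch (the values of `K` and `q` below are arbitrary illustrative choices, not from the post) that compares a central finite difference of the potential $V = \frac{K}{2}\sum_n (q_{n+1}-q_n)^2$ against the analytic derivative $\partial V/\partial q_k = K(2q_k - q_{k-1} - q_{k+1})$ for an interior particle:

```python
# Sketch: numerically verify dV/dq_k = K*(2*q_k - q_{k-1} - q_{k+1}).
# K and q are illustrative values, not taken from the original problem.

def potential(q, K):
    """V = (K/2) * sum over adjacent pairs of (q[n+1] - q[n])^2."""
    return 0.5 * K * sum((q[n + 1] - q[n]) ** 2 for n in range(len(q) - 1))

def grad_numeric(q, k, K, h=1e-6):
    """Central finite difference of V with respect to q[k]."""
    qp, qm = q[:], q[:]
    qp[k] += h
    qm[k] -= h
    return (potential(qp, K) - potential(qm, K)) / (2 * h)

def grad_analytic(q, k, K):
    """K*(2*q_k - q_{k-1} - q_{k+1}), valid for an interior index k."""
    return K * (2 * q[k] - q[k - 1] - q[k + 1])
```

Since $V$ is quadratic, the central difference agrees with the analytic expression up to floating-point rounding, which makes the sign pattern $K(q_{k+1}+q_{k-1}-2q_k)$ in the eom easy to check for any small $N$.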
{ "domain": "physics.stackexchange", "id": 34445, "tags": "homework-and-exercises, lagrangian-formalism, variational-calculus, coupled-oscillators" }
Project Euler 1 (sum of multiples of 3 or 5 under 1000)
Question: I solved this a while ago. In the moment I solved it I was learning Scheme and, well, I still am. I'm not looking at the best solution (I searched for it and coded it already, in Python), what I want is to improve this code, maintaining the same (brute) approach, so I can learn some best practices and tips in Scheme. Maybe there are some suggestions on how I wrote the routines, for example. #lang racket (define (check x) (or (= (modulo x 3) 0) (= (modulo x 5) 0))) (define (count-check x) (if (check x) x 0)) (define (sum-multiples-rec actual limit acc) (if (< actual limit) (sum-multiples-rec (+ actual 1) limit (+ (count-check actual) acc)) acc)) (define (sum-multiples lower-limit upper-limit) (sum-multiples-rec lower-limit upper-limit 0)) (sum-multiples 0 1000) Answer: So, any time you want to write a "helper" recursive function, the standard way to write that is to use a named let. So here's how I might restructure your program (but keeping the same algorithm): (define (sum-multiples start end) (let loop ((sum 0) (i start)) (cond ((>= i end) sum) ((or (zero? (modulo i 3)) (zero? (modulo i 5))) (loop (+ sum i) (add1 i))) (else (loop sum (add1 i)))))) Named let is a macro, which in this case expands to the same expression as: (define (sum-multiples start end) ((letrec ((loop (lambda (sum i) (cond ((>= i end) sum) ((or (zero? (modulo i 3)) (zero? (modulo i 5))) (loop (+ sum i) (add1 i))) (else (loop sum (add1 i))))))) loop) 0 start)) (I've taken the liberty of using (add1 i) instead of (+ i 1) since you're using Racket.) Here's a version that's even more Rackety, using for/sum to accumulate the sum rather than using a manual loop: (define (sum-multiples start end) (for/sum ((i (range start end))) (if (or (zero? (modulo i 3)) (zero? (modulo i 5))) i 0)))
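As a cross-check on the arithmetic (in Python rather than Scheme, purely as a reference sketch), the same sum can be computed both by the brute-force loop and in closed form via inclusion-exclusion over arithmetic series:

```python
def sum_multiples_brute(start, end):
    """Brute force, mirroring the Racket loop: sum of i in [start, end)
    divisible by 3 or 5."""
    return sum(i for i in range(start, end) if i % 3 == 0 or i % 5 == 0)

def sum_multiples_closed(end):
    """Inclusion-exclusion: multiples of 3 plus multiples of 5,
    minus multiples of 15 (which would otherwise be counted twice)."""
    def tri(k):
        n = (end - 1) // k            # how many positive multiples of k lie below end
        return k * n * (n + 1) // 2   # k * (1 + 2 + ... + n)
    return tri(3) + tri(5) - tri(15)
```

Both agree on the Project Euler answer for a limit of 1000, and the closed form runs in constant time, which is a handy sanity check when refactoring the loop.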
{ "domain": "codereview.stackexchange", "id": 6948, "tags": "beginner, scheme, programming-challenge, racket" }
Proving using Beta Reductions (Lambda Calculus)
Question: I am working on proving something by using Lambda Calculus and Beta Reductions. I was following along a tutorial on another problem and attempted to carry the knowledge over to a different problem, and I am a little confused on the last step. The problem is to show that $\textbf{or true false}$ reduces to $\textbf{true}$. Here are my steps (if there are any improvements or errors please let me know): Step 1: Begin by writing the identity for OR along with the parameters true and false. $(\lambda x. \lambda y.(x ~ true ~ y)) ~ true ~ false$ Step 2: Replace the $x$ terms with true. $(\lambda y.(true ~ true ~ y)) ~ false $ Step 3: Replace the $y$ terms with false. $ true ~ true ~ false \equiv (true ~ true) ~ false $ Step 4: (I am a little shaky on this step, however I saw it done this way.) Expand one of the terms by replacing it with its identity. So we break true into: $ (\lambda x. \lambda y. x) ~ true ~ false $ Step 5: Again, replace x with true. $(\lambda y.true) ~ false$ Step 6: $\textbf{this is the step I am most confused on}$ It looks as if, since there is no way to distribute the false to the y, we simply drop both of them, since it is not an applicable function. This then returns true, which stands to reason, since true or false is true. Can anyone shed light on Step 6? Or, if I am correct and that's the rationale, please let me know! Answer: First let: $$ true = \lambda x ~ y . x $$ $$ false = \lambda x ~ y . y$$ $$ or = \lambda x ~ y .((x ~ true) y) $$ So this is what you wrote: $$or ~ true ~ false \rhd_{\beta} true$$ $$ \Bigg ( \Big( \lambda x ~ y . (x ~ true) y \Big) true \Bigg) ~ false $$ $$ \Bigg ( \lambda y. \Big ((true ~ true) y \Big) \Bigg) false $$ $$ ((true ~ true) ~ false) $$ Unwrap $true$. $$ \Big( (\lambda x ~ y . x) ~ true \Big) ~ false $$ $$ \color{red}{(\lambda y . true) ~ false } \rhd true $$ In the red step, which is what's burdening you, you're just consuming the $false$. Why? Because there are no free $y$ variables in $true \equiv (\lambda x~ y.
x)$. This is rule-(e) for substitution. Definition 1.12 (Substitution) For any $M, N, x$, define $[N/x]M$ to be the result of substituting $N$ for every free occurrence of $x$ in $M$, and changing bound variables to avoid clashes. The precise definition is by induction on $M$, as follows (after [CF58, p.94]). (a) $[N/x]x \equiv N$ (b) $[N/x]a \equiv a$ for all atoms $a \not \equiv x$ (c) $[N/x](PQ) \equiv ([N/x]P)([N/x]Q)$ (d) $[N/x](\lambda x.P) \equiv (\lambda x.P)$ (e) $[N/x](\lambda y.P) \equiv P$ if $x \not \in FV(P)$. (f) $[N/x](\lambda y.P) \equiv \lambda y. [N/x]P$ if $x \in FV(P)$ and $y \not \in FV(N)$. (g) $[N/x](\lambda y.P) \equiv \lambda z. [N/x][z/y]P$ if $x \in FV(P)$ and $y \in FV(N)$. Note: $FV(P)$ stands for the set of free variables of $P$.
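The Church encodings above translate directly into executable functions. Here is a sketch in Python (ordinary Python evaluation is not a formal beta-reduction engine, but applying these curried lambdas mirrors the reduction steps, including the final "consuming the false"):

```python
# Church booleans: true selects its first argument, false its second.
TRUE = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

# or = \x y. ((x true) y): if x is true it picks TRUE, otherwise it picks y.
OR = lambda x: lambda y: x(TRUE)(y)
```

`OR(TRUE)(FALSE)` evaluates to `TRUE(TRUE)(FALSE)`, and since `TRUE` simply discards its second argument, the `FALSE` is consumed and `TRUE` is returned, just as in the red step.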
{ "domain": "cs.stackexchange", "id": 7793, "tags": "lambda-calculus" }
Project Euler Problem # 8 maximum product of sequences
Question: Is there any room for improvement? Your feedback would be helpful. Problem Statement: The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product? import operator number = """\ 73167176531330624919225119674426574742355349194934\ 96983520312774506326239578318016984801869478851843\ 85861560789112949495459501737958331952853208805511\ 12540698747158523863050715693290963295227443043557\ 66896648950445244523161731856403098711121722383113\ 62229893423380308135336276614282806444486645238749\ 30358907296290491560440772390713810515859307960866\ 70172427121883998797908792274921901699720888093776\ 65727333001053367881220235421809751254540594752243\ 52584907711670556013604839586446706324415722155397\ 53697817977846174064955149290862569321978468622482\ 83972241375657056057490261407972968652414535100474\ 82166370484403199890008895243450658541227588666881\ 16427171479924442928230863465674813919123162824586\ 17866458359124566529476545682848912883142607690042\ 24219022671055626321111109370544217506941658960408\ 07198403850962455444362981230987879927244284909188\ 84580156166097919133875499200524063689912560717606\ 05886116467109405077541002256983155200055935729725\ 71636269561882670428252483600823257530420752963450 """ # Find the thirteen adjacent digits in the 1000-digit number that have # the greatest product. # What is the value of this product? 
def partition(n): """returns a list of n series""" list_of_nums = list(number) all_partitions = [] while len(list_of_nums) != 0: count = 0 temp = '' while count <= n - 1: try: temp += list_of_nums[count] count += 1 except IndexError: return all_partitions all_partitions.append(temp) if len(list_of_nums) != 0: del list_of_nums[0] return all_partitions def get_max(all_partitions): """returns the maximum product of n series""" part_sum = [] for num in all_partitions: tot = 1 for digit in num: tot *= int(digit) if tot != 0: part_sum.append((num, tot)) return sorted(part_sum, key=operator.itemgetter(1), reverse=True)[0][1] if __name__ == '__main__': # Sanity check: sequence of length (4) partitions1 = partition(4) print(get_max(partitions1)) print() # Result: sequence of length (13) partitions2 = partition(13) print(get_max(partitions2)) Answer: Some suggestions: number and n could be parameters to this script. That way the whole thing would be reusable. You can use list comprehensions to partition your string (the annotation requires from typing import List): >>> from typing import List >>> def partition(digits: str, length: int) -> List[str]: ... return [digits[index:index + length] for index in range(len(digits) - length + 1)] ... >>> partition("12345", 3) ['123', '234', '345'] Multiplying N random digits is going to be slower than checking whether any of the digits are zero. So your first pass could be to exclude any partitions which contain zero. If there are no partitions left afterwards the max is zero, and you've done no multiplications at all. >>> partitions = partition("7316717653133062491", 13) >>> nontrivial_partitions = [partition for partition in partitions if "0" not in partition] >>> nontrivial_partitions ['7316717653133'] An optimization on the above is to immediately discard the next N - 1 digits as soon as you encounter a zero when generating partitions, since all of those partitions are also going to multiply to zero. Make sure you check for any zeros within those numbers as well to keep skipping.
It looks like you have a newline at the end of number.
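Putting the reviewer's suggestions together, a condensed version might look like the following sketch (the function name and the test strings are illustrative choices, not a drop-in replacement for the original script):

```python
from math import prod  # Python 3.8+

def max_adjacent_product(digits, length):
    """Greatest product of `length` adjacent digits in the string `digits`."""
    windows = (digits[i:i + length] for i in range(len(digits) - length + 1))
    best = 0
    for window in windows:
        if "0" in window:          # any zero forces the product to zero
            continue
        best = max(best, prod(int(d) for d in window))
    return best
```

On the crafted string "9989" with a window of 4 this reproduces the problem statement's example product 9 × 9 × 8 × 9 = 5832, and a window containing a zero never triggers a multiplication at all.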
{ "domain": "codereview.stackexchange", "id": 35262, "tags": "python, programming-challenge" }
Wave train 200 m/s meaning
Question: I ran a 60 m line intended for surficial seismic refraction. Energization was with a hammer: 12 geophones (10 Hz vertical, 5 m spaced) and 5 shots (15 m spaced). Record length 0.5 s. Thanks to a nearby borehole, strata of man-made gravelly silt (0.40 m) and very soft silty clay (down to 5 m) were identified, with the clay improving in consistency at depth. My seismic traces seemed to be OK, but back at the office the principal wave train gives 200 m/s, and the first-arrival refractions are so weak that they cannot be easily identified. I think the principal wave train shadows them. My question: What is the meaning of a 200 m/s wave train? Answer: I've seen data like this before. My research focused on acquiring/processing both vertical and horizontal component data in subsurface environments characterized mainly by saturated and/or unconsolidated facies, so it's entirely possible for that kind of surface wave to display such low apparent velocities. What I think you are seeing at that particular offset and traveltime is the Rayleigh wave beginning to 'disperse' - i.e. seismic dispersion beginning to manifest itself as a result of the wavefield convolving with layering and near-surface heterogeneity. As for a refraction arrival, if it is indeed present, it should be extremely weak and probably only discernible via some kind of gain (e.g. AGC).
{ "domain": "earthscience.stackexchange", "id": 1886, "tags": "geophysics, seismology, seismic" }
Beginner's pygame Conway's Game of Life
Question: Here's my implementation of Game of Life. I'm quite new to Python and would like to know how I could improve this code, especially in terms of performance, compactness and readability. import copy import sys import pygame from pygame import * import random #constants red = (255, 0, 0) black = (0, 0, 0) white = (255, 255, 255) #neighbour coordinates neighbours = [[-1,-1],[-1,0],[-1,+1], [0,-1], [0,+1], [+1,-1],[+1,0],[+1,+1],] class cell(object): def __init__(self, ngb, state): self.state = state self.ngb = 0 #2d array for storing cells cells = [[i for i in range(50)] for i in range(50)] #random field generation def generate(): print "Generating" for y in xrange(50): for x in xrange(50): cells[x][y] = cell(0, random.randint(0, 1)) print "DoneGen" #neighbour processing def update(): global cells2 #saving this turn's state cells2=copy.deepcopy(cells) for y in xrange(50): for x in xrange(50): cellv2=cells2[x][y] cellv2.ngb=0 cellv = cells[x][y] #processing for i in neighbours: #offsetting neighbour coordinates dy=i[0]+y dx=i[1]+x if dy < 0: dy = 49 if dy > 49: dy = 0 if dx < 0: dx = 49 if dx > 49: dx = 0 if cells2[dx][dy].state==1: cellv2.ngb+=1 #updating field if cellv2.state==1 and 2<=cellv2.ngb<=3: cellv.state=1 else: cellv.state=0 if cellv2.state==0 and cellv2.ngb==3: cellv.state=1 #main game function def play(): #initialization pygame.init() scrn = pygame.display.set_mode((500, 500)) mainsrf = pygame.Surface((500, 500)) mainsrf.fill(white) generate() #game cycle while 1: #tracking quitting for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() #drawing for y in xrange(50): for x in xrange(50): if cells[x][y].state==1: pygame.draw.rect(mainsrf, black, (x*10, y*10, 10, 10)) else: pygame.draw.rect(mainsrf, white, (x*10, y*10, 10, 10)) if cells[x][y].ngb==3: pygame.draw.rect(mainsrf, red, (x*10, y*10, 10, 10)) update() scrn.blit(mainsrf, (0, 0)) pygame.display.update() #running the game if __name__ == "__main__": play() Answer: Functions good
practices: Documentation, Parametrization, Single Purpose, Outside State Independence Documentation #random field generation def generate(): Documentation is usually written as a docstring in triple quotes under the function definition (this allows programmatic access via help): def generate(): """ Random field generation. """ It is also usually a complete phrase; I would write it a little differently: def generate(): """ Generates a random game of life board. """ Parametrization Your function can only generate boards of size \$50 * 50\$; if I were interested in other sizes I would need to modify the definition accordingly. I suggest taking x and y sizes as parameters: def generate(x_size, y_size): print "Generating" for y in xrange(y_size): for x in xrange(x_size): cells[x][y] = cell(0, random.randint(0, 1)) print "DoneGen" This way you gain more re-usability for your function (and you can test it more simply by giving small sizes). Single Purpose print "Generating" ... print "DoneGen" Your function prints to std-out in addition to building the board, and this behaviour cannot be turned off. If printing is not desired, the user will not be able to use this function. I would just delete these printing statements after the debugging is complete. Outside state independence You need a 2d list called cells for this function to work. You may instead create such a list and return it. Write functions to encapsulate logical units of action wrap_around(dx, dy, x_size, y_size) if dy < 0: dy = 49 if dy > 49: dy = 0 if dx < 0: dx = 49 if dx > 49: dx = 0 These 8 lines of code provide the program with wrap-around functionality (for example: going too far to the right leaves you all the way back on the left); it would be ideal if you wrote a function for this.
next_state(cell, neighbours) The same can be said for this block of code, which calculates the next state of a given cell: if cellv2.state==1 and 2<=cellv2.ngb<=3: cellv.state=1 else: cellv.state=0 if cellv2.state==0 and cellv2.ngb==3: cellv.state=1 decide_colour(cell) if cells[x][y].state==1: pygame.draw.rect(mainsrf, black, (x*10, y*10, 10, 10)) else: pygame.draw.rect(mainsrf, white, (x*10, y*10, 10, 10)) if cells[x][y].ngb==3: pygame.draw.rect(mainsrf, red, (x*10, y*10, 10, 10)) The 3 options differ only in the colour of the drawing, so you could also cut down code duplication by using such a function. Minor: use ALL_CAPS for constants As a widely accepted convention, constants are written in ALL CAPS to quickly discern them from ordinary variables. RED = (255, 0, 0) BLACK = (0, 0, 0) WHITE = (255, 255, 255) #neighbour coordinates NEIGHBOURS = [[-1,-1],[-1,0],[-1,+1], [0,-1], [0,+1], [+1,-1],[+1,0],[+1,+1],] Give meaningful names: ngb? self.ngb = 0 I cannot understand what the purpose of the ngb field is even after reading the code. This is a bad sign, indicating that a more descriptive name should be adopted.
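As a sketch of the two helper functions suggested above (the names `wrap` and `next_state` are illustrative choices, and `wrap` replaces the four if-statements with a single modulo):

```python
def wrap(index, size):
    """Wrap-around (toroidal) indexing: -1 -> size - 1 and size -> 0.
    Python's % operator already returns a non-negative result here."""
    return index % size

def next_state(state, live_neighbours):
    """Conway's rules: a live cell survives with 2 or 3 live neighbours;
    a dead cell becomes live with exactly 3."""
    if state == 1:
        return 1 if live_neighbours in (2, 3) else 0
    return 1 if live_neighbours == 3 else 0
```

With these in place, the update loop reduces to counting neighbours at `(wrap(x + dx, 50), wrap(y + dy, 50))` and assigning `next_state(...)` to the new grid, which makes the rules readable at a glance.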
{ "domain": "codereview.stackexchange", "id": 20406, "tags": "python, game-of-life, pygame" }
Find the number of topological sorts in a tree
Question: Find the number of topological sorts in a tree that has nodes that hold the size of their sub-tree, including itself. I've tried thinking about what would be the best form to define it, but couldn't get anything specific. Maybe $\mbox{Number of sorts =}\prod\limits_{x\in \mbox{children}}\mbox{Number of sorts}(x)$ Meaning that, starting at the root, I call the method recursively, multiplying each result by the previous children's results. When we reach a node with size 1, we assume that there's just 1 topological sort. If this is correct I'd really appreciate some help with proving correctness, and if not, an explanation why and a clue would be nice :) Answer: A clue then. It seems you have to multiply with a multinomial coefficient in each step. Here the size of the subtree comes in handy. Explanation: if the children have certain topsorts, then these sequences may be shuffled in the topsort for the parent. We 'color' the positions where the various topsorts for the respective children are placed, and at each color we write one of the possible topsorts for that child. Since the formula is used now in another problem, here it is explicitly. Assume the tree $t$ has $n+1$ nodes, and the subtrees $t_1, \dots, t_r$ of the children have $k_1, \dots, k_r$ nodes respectively (so $n=k_1+\dots+k_r$). Let $\operatorname{NoS}(t)$ be the number of topsorts of tree $t$. Then $$\operatorname{NoS}(t) = {n \choose k_1, k_2, \dots, k_r}\prod_{i=1}^r \operatorname{NoS}(t_i) $$
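The recurrence can be sketched directly in code, representing a tree as a list of child subtrees (a leaf is the empty list); the tree shapes used below are illustrative examples, not from the question:

```python
from math import factorial

def size(tree):
    """Number of nodes in the subtree, including its own root."""
    return 1 + sum(size(child) for child in tree)

def count_topsorts(tree):
    """NoS(t) = multinomial(n; k_1, ..., k_r) * prod_i NoS(t_i),
    where k_i are the child-subtree sizes and n = k_1 + ... + k_r."""
    n = sum(size(child) for child in tree)
    result = factorial(n)
    for child in tree:
        result //= factorial(size(child))   # exact: each partial quotient is an integer
    for child in tree:
        result *= count_topsorts(child)
    return result
```

For a root with two leaf children this gives $\binom{2}{1,1} = 2$ orderings, and for a path it gives 1, matching the base case in the question that a subtree of size 1 has exactly one topsort.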
{ "domain": "cs.stackexchange", "id": 1506, "tags": "graphs, algorithm-analysis, trees" }