Prim's Algorithm - Minimum Spanning Tree
Question: I have implemented Prim's Algorithm from Introduction to Algorithms. I have observed that the code is similar to Dijkstra's Algorithm, so I have used my Dijkstra's Algorithm implementation. Please review this code and suggest improvements. To compile on Linux: g++ -std=c++14 prims.cpp #include <iostream> #include <map> #include <limits> #include <list> #include <queue> class Graph { struct Vertex { std::size_t id; int distance = std::numeric_limits<int>::max(); Vertex * parent = nullptr; Vertex(std::size_t id) : id(id) {} }; using pair_ = std::pair<std::size_t, int>; std::vector<Vertex> vertices = {}; //adjacency list , store src, dest, and weight std::vector< std::vector< pair_> > adj_list; //to store unprocessed vertex min-priority queue std::priority_queue< pair_, std::vector<pair_>, std::greater<pair_> > unprocessed; public: Graph(std::size_t size); void add_edge(std::size_t src, std::size_t dest, int weight); void prim(std::size_t vertex); std::size_t minimum_cost() ; }; Graph::Graph(std::size_t size) { vertices.reserve(size); adj_list.resize(size); for (int i = 0; i < size; i++) { vertices.emplace_back(i); } } void Graph::add_edge(std::size_t src , std::size_t dest, int weight) { if(weight >= 0) { if (src == dest) { throw std::logic_error("Source and destination vertices are same"); } if (src < 0 || vertices.size() <= src) { throw std::out_of_range("Enter correct source vertex"); } if (dest < 0 || vertices.size() <= dest) { throw std::out_of_range("Enter correct destination vertex"); } int flag = 0, i = src; for (auto& it : adj_list[i]) { if (it.first == dest) { flag = 1; break; } } if (flag == 0) { adj_list[src].push_back( {dest, weight} ); } else { throw std::logic_error("Existing edge"); } } else { std::cerr << "Negative weight\n"; } } void Graph::prim(std::size_t vertex) { vertices[vertex].distance = 0; vertices[vertex].parent = &vertices[vertex]; unprocessed.push( std::make_pair(vertices[vertex].distance, vertex) ); while (!unprocessed.empty()) 
{ int curr_vertex_dist = unprocessed.top().first; std::size_t curr_vertex = unprocessed.top().second; unprocessed.pop(); for (auto& ver: adj_list[curr_vertex]) { auto& next_dist = vertices[ver.first].distance; const auto curr_dist = ver.second; if (curr_dist < next_dist) { next_dist = curr_dist; //make src vertex parent of dest vertex vertices[ver.first].parent = &vertices[curr_vertex]; unprocessed.push( std::make_pair(next_dist, ver.first)); } } } } std::size_t Graph::minimum_cost() { std::size_t cost = 0; for (auto vertex: vertices) { cost = cost + vertex.distance; } return cost; } int main() { Graph grp(9); grp.add_edge(0, 1, 4); grp.add_edge(0, 2, 8); grp.add_edge(1, 2, 11); grp.add_edge(1, 3, 8); grp.add_edge(3, 4, 2); grp.add_edge(4, 2, 7); grp.add_edge(2, 5, 1); grp.add_edge(5, 4, 6); grp.add_edge(3, 6, 7); grp.add_edge(3, 8, 4); grp.add_edge(5, 8, 2); grp.add_edge(6, 7, 9); grp.add_edge(6, 8, 14); grp.add_edge(7, 8, 10); grp.prim(0); std::cout << "The total cost is : " << grp.minimum_cost() << "\n"; } Answer: 1. Keeping references to potentially dangling pointers I can see a potential problem with the line vertices[ver.first].parent = &vertices[curr_vertex]; if the std::vector<Vertex> vertices were ever reorganised (for example, by growing). The address you take there isn't stable. A better solution might be to keep a std::vector<std::unique_ptr<Vertex>> vertices; instead of keeping copies of the Vertex instances. Then you can change vertices[ver.first].parent = &vertices[curr_vertex]; to vertices[ver.first].parent = vertices[curr_vertex].get(); Since all the vertices are in the private scope of the Graph class, the std::unique_ptr<Vertex> instances stored in the vertices vector will never be invalidated, as long as you guarantee that all child Vertex instances are removed whenever a parent Vertex instance is removed from the graph (that's not an operation in question here, but it needs to be considered for production code). 2.
Inconsistent error handling Here, else { std::cerr << "Negative weight\n"; } you merely report one kind of erroneous input to the console, while you throw exceptions for the other cases. For this condition you should instead either throw an exception, use an assert() call, or make the constraint clear in the function signature in the first place. If a negative weight is erroneous input, the signature can rule it out: void Graph::add_edge(std::size_t src , std::size_t dest, unsigned weight) // ^^^^^^^^ This way violations would be caught by the compiler, before runtime ever detects the flaw. 3. Storing unnecessary information In your code example, parent is never used beyond storing the information. You might have simplified the actual usage of parent for this review question, but in the code context you give, that member variable doesn't make sense.
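For comparison, the priority-queue pattern under review can be sketched compactly in Python. This is an illustrative re-implementation, not the poster's code: it treats the graph as undirected (which Prim's algorithm requires) and uses an explicit visited set with the "lazy deletion" trick, skipping stale heap entries instead of a decrease-key operation.

```python
import heapq

def prim_mst_cost(n, edges):
    """Total weight of a minimum spanning tree via lazy Prim's algorithm.

    edges: iterable of (u, v, w); the graph is treated as undirected.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    heap = [(0, 0)]  # (edge weight, vertex), starting from vertex 0
    cost = 0
    while heap:
        d, u = heapq.heappop(heap)
        if visited[u]:
            continue  # stale entry; skip it instead of decrease-key
        visited[u] = True
        cost += d
        for w, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (w, v))
    return cost

# The edge list from the question's main(); this is the CLRS example graph.
edges = [(0, 1, 4), (0, 2, 8), (1, 2, 11), (1, 3, 8), (3, 4, 2), (4, 2, 7),
         (2, 5, 1), (5, 4, 6), (3, 6, 7), (3, 8, 4), (5, 8, 2), (6, 7, 9),
         (6, 8, 14), (7, 8, 10)]
print(prim_mst_cost(9, edges))  # 37
```

The visited check is the piece the reviewed code is missing: without it, a vertex can be relaxed again after it has conceptually joined the tree.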
{ "domain": "codereview.stackexchange", "id": 36593, "tags": "c++, c++11, graph, c++14" }
Why are weak acids weak?
Question: What are weak acids? My chemistry and biology classes skimmed over them briefly but nothing in depth. I understand that an acid is weak if it partially dissociates and breaks down into ionic compounds, but they never stated why. So my question is why do weak acids break down in water? Do strong acids dissociate at all? What if they were in a different solution (or do they just stay themselves all the time?) Answer: What are weak acids? Examples of weak acids in water solution would be acetic acid (the main component of vinegar) and the ammonium ion (the conjugate acid of ammonia). I understand that an acid is weak if it partially dissociates and breaks down into ionic compounds but they never stated why. An acid is weak if not all of the acid molecules ionize into a proton and the conjugate base in a particular solvent system. Alternately, if we were to use the broader Brønsted definition, an acid is weak if it does not completely or nearly completely donate its proton to some base. Acetic acid is a weak acid in water. Put acetic acid into pure liquid ammonia and now acetic acid will fully dissociate. The proton will be lost from acetic acid to ammonia, and this is a favorable reaction. Makes sense, because the only thing that's changed is the solvent that's grabbing the proton - and ammonia is only a few billion times stronger as a base than water. That, and do strong acids dissociate at all? Like if they were in a different solution, or do they just stay themselves all the time? Strong acids by definition ionize to an extent of 100% or nearly 100%. Strong acids will give up their protons completely to the strongest base in the system. Note that in the above text I have used the word "ionize" to describe molecular acids because by definition when these acids react, they ionize, i.e., form ions.
On the other hand, salts and ionic bases such as sodium hydroxide are said to "dissociate," because ionic compounds are already made of ions, and it wouldn't make much sense to say that ions ionize or become ions.
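The "partial ionization" idea can be made quantitative. A back-of-the-envelope sketch for acetic acid in water, using the textbook value Ka ≈ 1.8×10⁻⁵ and an assumed 0.10 M solution (both values are illustrative):

```python
import math

Ka = 1.8e-5  # approximate literature value for acetic acid at 25 degrees C
C = 0.10     # mol/L, an assumed initial concentration

# For a weak acid HA with Ka << C, the equilibrium [H+] is approximately
# sqrt(Ka * C) (neglecting the change in [HA] and water autoionization).
h_plus = math.sqrt(Ka * C)
alpha = h_plus / C  # fraction of acid molecules that actually ionized

print(f"[H+] ~ {h_plus:.2e} M, pH ~ {-math.log10(h_plus):.2f}, "
      f"ionized: {100 * alpha:.1f}%")  # only about 1.3% ionized
```

So "weak" is not a small effect of wording: at this concentration, roughly 99% of the acetic acid molecules remain un-ionized at equilibrium.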
{ "domain": "chemistry.stackexchange", "id": 2034, "tags": "acid-base, equilibrium" }
Can the decoder in a transformer model be parallelized like the encoder?
Question: Can the decoder in a transformer model be parallelized like the encoder? As far as I understand, the encoder has all the tokens in the sequence to compute the self-attention scores. But for a decoder, this is not possible (in both training and testing), as self-attention is calculated based on previous timestep outputs. Even if we consider some techniques, like teacher forcing, where we concatenate the expected output with the obtained one, this still has a sequential input from the previous timestep. In this case, apart from the improvement in capturing long-term dependencies, is using a transformer decoder better than, say, an LSTM, when comparing purely on the basis of parallelization? Answer: Can the decoder in a transformer model be parallelized like the encoder? Generally NO: Your understanding is completely right. In the decoder, the output of each step is fed to the bottom decoder in the next time step, just like an LSTM. Also, like in LSTMs, the self-attention layer needs to attend to earlier positions in the output sequence in order to compute the output, which makes straightforward parallelisation impossible. However, when decoding during training, there is a frequently used procedure which doesn't take the previous output of the model at step t as input at step t+1, but rather takes the ground-truth output at step t. This procedure is called 'Teacher Forcing' and makes the decoder parallelisable during training. You can read more about it here. For a detailed explanation of how the Transformer works, I suggest reading this article: The Illustrated Transformer. Is using a transformer decoder better than, say, an LSTM when comparing purely on the basis of parallelization? YES: Parallelization is the main drawback of RNNs in general. Put simply, RNNs have the ability to memorize but not parallelize, while CNNs have the opposite. Transformers are so powerful because they combine both parallelization (at least partially) and memorizing.
In Natural Language Processing, for example, where RNNs used to be so effective, if you take a look at the GLUE leaderboard you will find that most of the world-leading algorithms today are Transformer-based (e.g. BERT by Google, GPT by OpenAI). For a better understanding of why Transformers are better than CNNs, I suggest reading this Medium article: How Transformers Work.
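Mechanically, teacher forcing enables parallel training through a causal (look-ahead) mask in the decoder's self-attention: all positions are processed in one batched pass, but each position may only attend to itself and earlier ones. A minimal numpy sketch (illustrative, not any particular library's API):

```python
import numpy as np

def causal_attention_weights(scores):
    """Mask out future positions in a (T, T) score matrix, then softmax rows."""
    T = scores.shape[0]
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True strictly above diagonal
    scores = np.where(mask, -1e9, scores)             # forbid attending ahead
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros((4, 4))  # uniform raw scores, just for illustration
w = causal_attention_weights(scores)
# Row t attends uniformly over positions 0..t and not at all beyond t.
print(np.round(w, 2))
```

All rows are computed simultaneously, which is exactly the training-time parallelism the answer describes; at inference time, tokens still have to be generated one at a time.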
{ "domain": "ai.stackexchange", "id": 4077, "tags": "deep-learning, comparison, long-short-term-memory, sequence-modeling, transformer" }
Leaving capacity of halogens
Question: Why is F a bad leaving group? Say in HF and HI, F is more electronegative than I, so shouldn't HF be the better acid? Answer: When HF loses a proton, the conjugate base formed is the fluoride anion, F-. Since fluorine is a small, compact atom, the negative charge of fluoride is concentrated in a small volume, making it a less stable conjugate base than the much larger iodide ion, which spreads the charge out. The happier (more stable) the conjugate base, the stronger the acid; hence HI is more acidic than HF. Even though fluorine is highly electronegative (it has a strong tendency to drag shared electrons toward itself in a covalent bond), its electron affinity is actually anomalously low because of its small size, which crowds the incoming electron against the ones already there.
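The trend the answer describes shows up directly in approximate aqueous pKa values (textbook figures; exact numbers vary slightly by source):

```python
# Approximate aqueous pKa values for the hydrogen halides (illustrative
# textbook figures; sources differ slightly, especially for the strong acids).
pKa = {"HF": 3.2, "HCl": -7, "HBr": -9, "HI": -10}

# Lower pKa means a stronger acid, i.e. a more stable conjugate base.
for acid, pk in sorted(pKa.items(), key=lambda kv: kv[1]):
    print(f"{acid}: pKa ~ {pk}")
```

HF, the only weak acid in the series, sits roughly thirteen pKa units (a factor of about 10^13 in Ka) above HI.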
{ "domain": "chemistry.stackexchange", "id": 13178, "tags": "organic-chemistry" }
Single value computation
Question: I'm writing code to compute a single-value answer from the numbers in an integer array, where a step sum is calculated repeatedly until only one number is left. For example: Input: 3,5,2,6,7 The first step is computed as: (3+5), (5+2), (2+6), (6+7) and this repeats until only a single number is left. 3 5 2 6 7 8 7 8 13 15 15 21 30 36 66 int[] inputarray = {3,5,2,6,7}; int inputlength = inputarray.length; while(inputlength != 0) { for(int i = 0; i < inputlength-1; i++) { inputarray[i] = inputarray[i+1] + inputarray[i]; } inputlength--; } System.out.println(inputarray[0]); This computes the result in quadratic time. Is there any way to compute the result more efficiently? What else can I do to improve it? Answer: So spoke Tartaglia An optimized solution should make use of Tartaglia's Triangle (known in English as Pascal's triangle), which is constructed like this: 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 You could generate it or memorize it, your choice. The solution is simply multiplying the inputs by the coefficients of the triangle's row with n entries, where n is the total number of inputs, and then adding everything together. Show, don't tell for a in range(10): for b in range(10): for c in range(10): assert tree_addition([a,b,c]) == a + b*2 + c for a in range(10): for b in range(10): for c in range(10): for d in range(10): assert tree_addition([a,b,c,d]) == a + b*3 + c*3 + d for a in range(10): for b in range(10): for c in range(10): for d in range(10): for e in range(10): assert tree_addition([a,b,c,d,e]) == a + b*4 + c*6 + d*4 + e This should explain my point more clearly; note that the code above always works.
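The answer's tree_addition function is never shown. A hypothetical implementation of the binomial-coefficient shortcut, which replaces the quadratic collapsing loop with a single linear pass, could look like this:

```python
from math import comb

def tree_addition(values):
    """Compute the value the question's collapsing loop produces, directly:
    the final number is the dot product of the inputs with row n-1 of
    Pascal's (Tartaglia's) triangle, where n = len(values)."""
    n = len(values)
    return sum(comb(n - 1, i) * v for i, v in enumerate(values))

print(tree_addition([3, 5, 2, 6, 7]))  # 66, matching the worked example
```

Computing each comb(n-1, i) incrementally (each coefficient from the previous one) keeps the whole thing O(n) time and O(1) extra space.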
{ "domain": "codereview.stackexchange", "id": 13083, "tags": "java, algorithm" }
find $f(n)$ for recurrence $T(n)=2T(\dfrac{n}{2})+\mathcal{O}(n\log{n})=\Theta(f(n))$
Question: We have the recurrence $T(n)=2T(\dfrac{n}{2})+\mathcal{O}(n\log{n})$ and assume $T(1)$ is a constant. Find asymptotically tight bounds $\Theta(f(n))$ for $T(n)$. There's something that confuses me. We know $\{n\log{n}, n, \sqrt{n}\}\subset \mathcal{O}(n\log{n})$. So with the Master Theorem applied: if $T(n)=2T(\dfrac{n}{2})+n\log{n}$, then $T(n)=\Theta(n\log^2{n})$ if $T(n)=2T(\dfrac{n}{2})+n$, then $T(n)=\Theta(n\log{n})$ if $T(n)=2T(\dfrac{n}{2})+\sqrt{n}$, then $T(n)=\Theta(\sqrt{n})$ If the asymptotically tight bound varies, then how can we provide an asymptotically tight bound for $T(n)=2T(\dfrac{n}{2})+\mathcal{O}(n\log{n})$? If we can, how? Can we prove it using induction? Answer: You are right, you can't find a tight bound with those hypotheses. I think you should interpret the question as: find $f(n)$ for the recurrence $T(n)=2T(\dfrac{n}{2})+\Theta(n\log{n})=\Theta(f(n))$ In this case, your first relation gives the right answer ($f(n) = n(\log n)^2$). Also, as noted in the comment, your third relation is wrong: it should be $\Theta(n)$, not $\Theta(\sqrt{n})$.
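For completeness, all three bounds follow from the master theorem with $a = b = 2$, so the watershed function is $n^{\log_b a} = n$; a sketch of the case analysis (using the answer's correction $\Theta(n)$ for the third case):

```latex
% Master theorem with a = 2, b = 2, so n^{\log_b a} = n:
\begin{align*}
f(n) &= n\log n = \Theta\!\left(n^{\log_2 2}\log n\right)
  &&\Rightarrow \text{(extended case 2, $k=1$):}\quad T(n) = \Theta(n\log^2 n),\\
f(n) &= n = \Theta\!\left(n^{\log_2 2}\right)
  &&\Rightarrow \text{(case 2):}\quad T(n) = \Theta(n\log n),\\
f(n) &= \sqrt{n} = O\!\left(n^{\log_2 2 - 1/2}\right)
  &&\Rightarrow \text{(case 1):}\quad T(n) = \Theta(n).
\end{align*}
```

The third line is case 1 because $\sqrt{n}$ is polynomially smaller than $n$, so the recursion tree's cost is dominated by the leaves, giving $\Theta(n)$ rather than $\Theta(\sqrt{n})$.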
{ "domain": "cs.stackexchange", "id": 21539, "tags": "time-complexity, algorithm-analysis, asymptotics, recurrence-relation, master-theorem" }
$SU(2)$ doublets from the transformation law of a matrix of scalar fields
Question: If we have a $2 \times 2$ $SU(2)_L$ and $SU(2)_R$ matrix $\Phi=\begin{bmatrix} a & c \\ b & d \end{bmatrix}$, where a, b, c and d are four complex Klein-Gordon fields, that under a gauge group $SU(2)_L \times SU(2)_R$ transforms as $\Phi \rightarrow U_L \Phi U_R^\dagger$, where $U_L$ and $U_R$ are the SU(2) matrices representing the transformations of $SU(2)_L$ and $SU(2)_R$, how can we find the doublets of $SU(2)_L$ and $SU(2)_R$ formed by the fields $a$, $b$, $c$ and $d$ and their complex conjugates? Answer: Each column of Ξ¦ is an $SU_L(2)$ doublet, since $U_L$ only scrambles rows. Each row of Ξ¦ is an $SU_R(2)$ doublet, since $U_R$ only scrambles columns. N.B. In response to your comment, $\Phi \sigma_x =\begin{bmatrix} c & a \\ d & b \end{bmatrix}$. So the first column has been interchanged with the second one, rigidly; in that sense it has been altered. Extra credit. Which bilinears of Ξ¦ and $\Phi^\dagger$ are $SU_L(2)$-invariant, and which $SU_R(2)$-invariant?
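The column/row statement can be checked numerically. A quick sketch with one sample SU(2) element (a real rotation, chosen only for illustration):

```python
import numpy as np

theta = 0.7
# A sample SU(2) matrix: the rotation exp(-i * theta * sigma_y / 2).
U_L = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
U_R = U_L  # any SU(2) element works for the illustration

Phi = np.array([[1 + 2j, 3 - 1j],
                [0 + 1j, 2 + 0j]])

# Left multiplication acts on each column independently:
left = U_L @ Phi
assert np.allclose(left[:, 0], U_L @ Phi[:, 0])
assert np.allclose(left[:, 1], U_L @ Phi[:, 1])

# Right multiplication by U_R^dagger acts on each row independently:
right = Phi @ U_R.conj().T
assert np.allclose(right[0, :], Phi[0, :] @ U_R.conj().T)
assert np.allclose(right[1, :], Phi[1, :] @ U_R.conj().T)

print("columns transform as SU(2)_L doublets; rows as SU(2)_R doublets")
```

This is just the statement that matrix multiplication on the left never mixes different columns, and on the right never mixes different rows.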
{ "domain": "physics.stackexchange", "id": 69158, "tags": "quantum-field-theory, particle-physics, gauge-theory, group-theory, klein-gordon-equation" }
A recursive_copy_if Template Function Implementation in C++
Question: This is a follow-up question for A recursive_transform for std::vector with various return type and A recursive_transform Template Function Implementation with std::invocable concept in C++. Besides the recursive version std::ranges::transform, I am trying to implement a recursive version std::ranges::copy_if. The experimental implementation The experimental implementation is as below. // recursive_copy_if function template <std::ranges::input_range Range, std::invocable<std::ranges::range_value_t<Range>> UnaryPredicate> constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate) { Range output{}; std::ranges::copy_if(std::ranges::cbegin(input), std::ranges::cend(input), std::inserter(output, std::ranges::end(output)), unary_predicate); return output; } template < std::ranges::input_range Range, class UnaryPredicate> requires (!std::invocable<UnaryPredicate, std::ranges::range_value_t<Range>>) constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate) { Range output{}; std::ranges::transform( std::ranges::cbegin(input), std::ranges::cend(input), std::inserter(output, std::ranges::end(output)), [&unary_predicate](auto&& element) { return recursive_copy_if(element, unary_predicate); } ); return output; } Test cases // std::vector<int> std::vector<int> test_vector = { 1, 2, 3, 4, 5, 6 }; recursive_print(recursive_copy_if(test_vector, [](int x) { return (x % 2) == 0; })); // std::vector<std::vector<int>> std::vector<decltype(test_vector)> test_vector2 = { test_vector, test_vector, test_vector }; recursive_print(recursive_copy_if(test_vector2, [](int x) { return (x % 2) == 0; })); // std::vector<std::string> recursive_print( recursive_copy_if( recursive_transform(test_vector, [](int x) { return std::to_string(x); }), [](std::string x) { return (x == "1"); } ) ); // std::vector<std::vector<std::string>> recursive_print( recursive_copy_if( recursive_transform(test_vector2, [](int x) { return 
std::to_string(x); }), [](std::string x) { return (x == "1"); } ) ); // std::deque<int> std::deque<int> test_deque; test_deque.push_back(1); test_deque.push_back(2); test_deque.push_back(3); test_deque.push_back(4); test_deque.push_back(5); test_deque.push_back(6); recursive_print(recursive_copy_if(test_deque, [](int x) { return (x % 2) == 0; })); // std::deque<std::deque<int>> std::deque<decltype(test_deque)> test_deque2; test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); recursive_print(recursive_copy_if(test_deque2, [](int x) { return (x % 2) == 0; })); // std::list<int> std::list<int> test_list = { 1, 2, 3, 4, 5, 6 }; recursive_print(recursive_copy_if(test_list, [](int x) { return (x % 2) == 0; })); // std::list<std::list<int>> std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list }; recursive_print(recursive_copy_if(test_list2, [](int x) { return (x % 2) == 0; })); Full Testing Code The full testing code: #include <algorithm> #include <array> #include <cassert> #include <chrono> #include <complex> #include <concepts> #include <deque> #include <execution> #include <exception> #include <functional> #include <iostream> #include <iterator> #include <list> #include <map> #include <mutex> #include <numeric> #include <optional> #include <ranges> #include <stdexcept> #include <string> #include <tuple> #include <type_traits> #include <utility> #include <variant> #include <vector> // recursive_copy_if function template <std::ranges::input_range Range, std::invocable<std::ranges::range_value_t<Range>> UnaryPredicate> constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate) { Range output{}; std::ranges::copy_if(std::ranges::cbegin(input), std::ranges::cend(input), std::inserter(output, std::ranges::end(output)), unary_predicate); return output; } template < std::ranges::input_range Range, class UnaryPredicate> requires (!std::invocable<UnaryPredicate, 
std::ranges::range_value_t<Range>>) constexpr auto recursive_copy_if(const Range& input, const UnaryPredicate& unary_predicate) { Range output{}; std::ranges::transform( std::ranges::cbegin(input), std::ranges::cend(input), std::inserter(output, std::ranges::end(output)), [&unary_predicate](auto&& element) { return recursive_copy_if(element, unary_predicate); } ); return output; } // recursive_print implementation // https://codereview.stackexchange.com/q/251208/231235 template<std::ranges::input_range Range> constexpr auto recursive_print(const Range& input, const int level = 0) { auto output = input; std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl; std::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output), [level](auto&& x) { std::cout << std::string(level, ' ') << x << std::endl; return x; } ); return output; } template<std::ranges::input_range Range> requires (std::ranges::input_range<std::ranges::range_value_t<Range>>) constexpr auto recursive_print(const Range& input, const int level = 0) { auto output = input; std::cout << std::string(level, ' ') << "Level " << level << ":" << std::endl; std::transform(std::ranges::cbegin(input), std::ranges::cend(input), std::ranges::begin(output), [level](auto&& element) { return recursive_print(element, level + 1); } ); return output; } // recursive_invoke_result_t implementation template<typename, typename> struct recursive_invoke_result { }; template<typename T, std::invocable<T> F> struct recursive_invoke_result<F, T> { using type = std::invoke_result_t<F, T>; }; template<typename F, template<typename...> typename Container, typename... 
Ts> requires ( !std::invocable<F, Container<Ts...>> && std::ranges::input_range<Container<Ts...>> && requires { typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type; }) struct recursive_invoke_result<F, Container<Ts...>> { using type = Container<typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type>; }; template<typename F, typename T> using recursive_invoke_result_t = typename recursive_invoke_result<F, T>::type; // recursive_transform implementation template <class T, std::invocable<T> F> constexpr auto recursive_transform(const T& input, const F& f) { return f(input); } template < std::ranges::input_range Range, class F> requires (!std::invocable<F, Range>) constexpr auto recursive_transform(const Range& input, const F& f) { recursive_invoke_result_t<F, Range> output{}; std::ranges::transform( std::ranges::cbegin(input), std::ranges::cend(input), std::inserter(output, std::ranges::end(output)), [&f](auto&& element) { return recursive_transform(element, f); } ); return output; } int main() { // std::vector<int> std::vector<int> test_vector = { 1, 2, 3, 4, 5, 6 }; recursive_print(recursive_copy_if(test_vector, [](int x) { return (x % 2) == 0; })); // std::vector<std::vector<int>> std::vector<decltype(test_vector)> test_vector2 = { test_vector, test_vector, test_vector }; recursive_print(recursive_copy_if(test_vector2, [](int x) { return (x % 2) == 0; })); // std::vector<std::string> recursive_print( recursive_copy_if( recursive_transform(test_vector, [](int x) { return std::to_string(x); }), [](std::string x) { return (x == "1"); } ) ); // std::vector<std::vector<std::string>> recursive_print( recursive_copy_if( recursive_transform(test_vector2, [](int x) { return std::to_string(x); }), [](std::string x) { return (x == "1"); } ) ); // std::deque<int> std::deque<int> test_deque; test_deque.push_back(1); test_deque.push_back(2); test_deque.push_back(3); test_deque.push_back(4); 
test_deque.push_back(5); test_deque.push_back(6); recursive_print(recursive_copy_if(test_deque, [](int x) { return (x % 2) == 0; })); // std::deque<std::deque<int>> std::deque<decltype(test_deque)> test_deque2; test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); recursive_print(recursive_copy_if(test_deque2, [](int x) { return (x % 2) == 0; })); // std::list<int> std::list<int> test_list = { 1, 2, 3, 4, 5, 6 }; recursive_print(recursive_copy_if(test_list, [](int x) { return (x % 2) == 0; })); // std::list<std::list<int>> std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list }; recursive_print(recursive_copy_if(test_list2, [](int x) { return (x % 2) == 0; })); return 0; } A Godbolt link is here. All suggestions are welcome. The summary information: Which question it is a follow-up to? A recursive_transform for std::vector with various return type and A recursive_transform Template Function Implementation with std::invocable concept in C++ What changes has been made in the code since last question? The implementation of recursive_copy_if template function is the main idea in this question. Why a new review is being asked for? If there is any possible improvement, please let me know. Answer: You don't need this line: requires (!std::invocable<UnaryPredicate, std::ranges::range_value_t<Range>>) And your test cases miss a std::vector<std::string> as input, but your recursive_copy_if() works fine on those as well.
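For readers less familiar with C++20 concepts, the dispatch between the two overloads (plain range vs. nested range) mirrors this hypothetical Python sketch:

```python
def recursive_copy_if(data, pred):
    """Filter a (possibly nested) list, preserving the nesting structure."""
    if data and isinstance(data[0], list):
        # Nested case: recurse into each element (the second C++ overload,
        # selected when the predicate is not invocable on the element type).
        return [recursive_copy_if(inner, pred) for inner in data]
    # Leaf case: a plain copy_if (the first C++ overload).
    return [x for x in data if pred(x)]

print(recursive_copy_if([1, 2, 3, 4, 5, 6], lambda x: x % 2 == 0))
print(recursive_copy_if([[1, 2, 3], [1, 2, 3]], lambda x: x % 2 == 0))
```

In the C++ version this branch is decided at compile time by the requires clause, rather than at runtime as here.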
{ "domain": "codereview.stackexchange", "id": 40367, "tags": "c++, recursion, lambda, c++20" }
Interference between two photons, tensor product of individual wave functions?
Question: I have learned that the wave function cannot be visualized as a real physical wave like for example the EM field, because for multi-particle systems, it is not a wave in $\mathbb{R}^3$ but in $\mathbb{R}^{3N}$. See this question if I haven't expressed myself clearly. If I understand things correctly, this is a consequence of the QM fact that the combination of two individual quantum systems with Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ is described by the tensor product $\mathcal{H}_1\otimes \mathcal{H}_2$. The wave function of two free particles is not a wave in $\mathbb{R}^3$, but on the configuration space $\mathbb{R}^3 \times \mathbb{R}^3 = \mathbb{R}^6$. I'm wondering how that fits into the following modified double-slit experiment: Imagine two light sources emitting photons that are as coherent and equal as possible (same frequency, same polarization, etc). Each light source is placed before one of the slits, and there is a wall between the light sources, so if hole 1 is closed, only photons from source 2 can pass to the detector. Let's define $\mathcal{H}_1$ as the Hilbert space of the quantum system where source 2 is off and only source 1 emits photons: Similarly, $\mathcal{H}_2$ describes photons from source 2 and no photons from source 1: In both systems, there is no interference pattern on the screen. In $\mathcal{H}_1$, we detect most of the photons around position A on the detector, and in $\mathcal{H}_2$ most of the photons land at position B. Now, for the Hilbert space $\mathcal{H}_c$ of the combined system, where both sources emit photons simultaneously, I assume that $\mathcal{H}_c = \mathcal{H}_1 \otimes \mathcal{H}_2$. Is that correct so far? Now, as far as I know, in $\mathcal{H}_c$ there should be an interference pattern of photons at position C of the detector. The photons from source 1 interfere with those from source 2. How can this interference be explained with respect to the tensor product?
The wave functions of the photons from source 1 do not "live in" the same $\mathbb{R}^3$ as those from source 2, so how can they interfere? EDIT: Continuation moved to another question. Answer: This answers the title question: Interference between two photons, tensor product of individual wave functions? Photons are quantum mechanical entities, described by wavefunctions which are solutions of quantum mechanical equations on which the boundary conditions are imposed. Photons have extremely weak interactions between them: photon-photon interactions proceed through box diagrams, and the probability of a photon interacting with another photon is minuscule. The classical interference pattern in the Young experiment showed the wave nature of light within classical electrodynamics. The observed single-photon-at-a-time interference demonstrates the quantum mechanical nature of photons. Classical light beams are emergent from an under-layer of innumerable photons. Though a tensor product of individual wave functions may be written down, it carries no interference-type information (phases) unless the wavefunctions come from the solution of the quantum mechanical equations with the same boundary conditions, i.e. are entangled. This can only happen at scales commensurate with $\hbar$, quantum mechanical dimensions, not the macroscopic dimensions shown in the figure. In addition, the velocity of light is enormous; there is no way that two independent light sources can be made synchronous in releasing individual photons.
{ "domain": "physics.stackexchange", "id": 25916, "tags": "quantum-mechanics, double-slit-experiment, tensor-calculus" }
Irobot Create encoders
Question: I am programming my iRobot Create robot with ROS, but I have not found the encoders. Is it possible to use the encoders of the iRobot Create robot using ROS? Originally posted by andreapatri on ROS Answers with karma: 26 on 2013-05-02 Post score: 0 Answer: It should be possible. Don't you see an /odom topic using rostopic list? http://pharos.ece.utexas.edu/wiki/index.php/Writing_A_Simple_Node_that_Moves_the_iRobot_Create_Robot Originally posted by davinci with karma: 2573 on 2013-05-02 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14038, "tags": "ros, irobot-create, irobot-create-2-1" }
Does the hydrogen ion actually exist?
Question: A chemistry test I did was an IGCSE test. One of the questions was Is Hydrogen Chloride ionic or covalent? Now, on a sheet I have been given, it states the hydrogen ion is $\ce{H+}$. So, it forms a positive ion, like metals. I'm fine with that. I know that hydrogen is strange and I also know that the (I)GCSE chemistry curriculum simplifies/lies a lot. The bonding in Hydrogen Chloride is covalent. Now, since a 'Hydrogen ion' is just a proton, I wanted to know whether in reality it actually exists or not. Answer: Yes free $\ce{H+}$ ions, protons, really exist. Protons are constantly emanating from the sun and reaching Earth. The proton flux is continuously monitored by satellite. However, in a solution such as water, instead of bare $\ce{H+}$ ions, they are $\ce{H3O+}$ or larger ions such as $\ce{H5O2+}$ or $\ce{H9O4+}$. When $\ce{HCl}$ dissolves, the proton is transferred to a water molecule to form $\ce{H3O+}$. $\ce{HCl}$ only has 17% ionic character according to Pauling's The Nature of the Chemical Bond, so it is covalent, but reacts with water to form separate ions in solution. For more information see Eigen or Zundel Ion: News from Calculated and Experimental Photoelectron Spectroscopy: The appearance of the hydronium ion was and is currently debated as several possible structures, three of them being $\ce{H3O+}$ (the Eigen cation), $\ce{H5O2+}$ (the Zundel cation) and $\ce{H9O4+}$ (the strongly solvated Eigen cation) and Both Zundel and Eigen Isomers Contribute to the IR Spectrum of the Gas-Phase $\ce{H9O4+}$ Cluster: The β€œEigen cation”, $\ce{H3O+(H2O)3}$, is the most prevalent protonated water structure in the liquid phase
{ "domain": "chemistry.stackexchange", "id": 4676, "tags": "bond, ions, hydrogen" }
wiimote installation
Question: Hi everybody I read on the ros.org wiki site (http://ros.org/wiki/wiimote#The_State_Message) about the wiimote, but on item number 4 which states "Check out the wiimote package, and rosmake to ready the package for operation. Plug the Bluetooth dongle into your machine's USB port.", what does it mean "Check out the wiimote package"? Originally posted by agrirobot-George on ROS Answers with karma: 1 on 2012-10-04 Post score: 0 Answer: I guess the wiki site is sort of outdated. I would not suggest to build the package from source since there are debian packages available. To install the stack joystick_drivers in fuerte, enter the following in a terminal: sudo apt-get install ros-fuerte-joystick-drivers Originally posted by Lorenz with karma: 22731 on 2012-10-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11231, "tags": "ros" }
How to obtain luminosity distance from a light curve
Question: I have some light curve data of a SN Ia from which I want to find the luminosity distance $D_L$. How can I do it mathematically? I'll then try to implement the answer using Python. Answer: You want to basically assume that the supernova is a "standard candle," i.e., if we know with confidence how bright it $\it{should}$ be, then we can estimate the distance after measuring its apparent brightness. In a simple, crude approach, one can just use the good ol' luminosity formula: assuming the source radiates with constant luminosity, its brightness will decrease with the inverse square law, and you invert the formula to get the luminosity distance, $$ D_{\rm L} = \sqrt{\frac{L}{4\pi F}}$$ where $L$ is the luminosity and $F$ is the flux. Here's a GUI you may or may not find useful (i.e., see item 13 here). Lots of helpful info here, and they also have an applet that you can download and play around with which uses the distance modulus (as answered here). Note this is only a (crude) approximation, since it assumes zero extinction due to dust and doesn't account for cosmology. If this is for a research project, you'll likely need to do something more sophisticated and should consult your advisor and the literature, i.e., see this nice python notebook, or this python module. I hope this helps!
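A minimal Python sketch of both routes: the inverse-square form above, and the equivalent distance-modulus form. The magnitude values are illustrative (SNe Ia peak near absolute magnitude M ≈ −19.3 in B band, neglecting extinction and light-curve-shape corrections):

```python
import math

def lum_distance_from_flux(L, F):
    """D_L from the inverse-square law: F = L / (4 pi D_L^2)."""
    return math.sqrt(L / (4 * math.pi * F))

def lum_distance_from_modulus(m, M):
    """D_L in parsecs from the distance modulus mu = m - M = 5 log10(D / 10 pc)."""
    return 10 ** ((m - M) / 5 + 1)

# Round-trip sanity check of the inverse-square form:
D, L = 2.0, 100.0
F = L / (4 * math.pi * D**2)
assert math.isclose(lum_distance_from_flux(L, F), D)

# Illustrative numbers: peak apparent magnitude m = 15.7, assumed M = -19.3,
# so mu = 35 and D_L = 10^8 pc = 100 Mpc.
print(lum_distance_from_modulus(15.7, -19.3))
```

In practice one fits the light curve to get the peak apparent magnitude (and a stretch/color correction to M) before applying either formula.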
{ "domain": "astronomy.stackexchange", "id": 6833, "tags": "observational-astronomy, distances, supernova, light-curve" }
What is reactive power?
Question: I am trying to understand what reactive power is. I have read that it has a relation with voltage, that it has a relation with the creation of a magnetic field in a motor, that it is coming and going between consumption and generators. But what exactly is the physical meaning of reactive power? Answer: But what exactly is the physical meaning of reactive power? Essentially, reactive power is the component of power that has zero time average. For example, consider a load consisting of a resistance $R$ in parallel with an inductance $L$ driven by a source with voltage $v_S(t) = V_S\cos \omega t$ Clearly, the source current is $$i_S(t) = \frac{V_S}{R}\cos \omega t + \frac{V_S}{\omega L}\sin\omega t$$ Thus, the instantaneous power delivered by the source is $$p_S(t) = v_S\cdot i_S = \frac{(V_S)^2}{R}\cos^2 \omega t + \frac{(V_S)^2}{\omega L}\sin \omega t\cos \omega t$$ Carefully note that the first term is never negative, which is to say that the flow of energy described by this term is always from source to load and never from load to source. However, the second term is positive over half of a cycle and negative the remaining half of a cycle. That is, this term describes energy that flows back and forth, in equal measure, between the source and load. Take the time average of the power over a period $T = \frac{2\pi}{\omega}$: $$\langle p_S\rangle = \frac{\omega}{2\pi}\int_0^{\frac{2\pi}{\omega}} p_S(\tau)\:\mathrm{d}\tau = \frac{1}{2}\frac{(V_S)^2}{R}$$ and see that only the first term has a non-zero time average; the second term does not contribute to the time average energy flow. In the context of phasor analysis, the real power (the real part of the complex power) is equal to the time average of the instantaneous power. The reactive power (the imaginary part of the complex power) is (proportional to) the amplitude of the second term in the instantaneous power. And this is the physical meaning of reactive power; it is a measure of the energy flow back and forth between source and load.
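As a sanity check on the averaging step, one can sample $p_S(t)$ over one period numerically (the component values below are arbitrary placeholders) and confirm that only the $\cos^2$ term contributes to the average:

```python
import numpy as np

# Numerical check of the time average derived above; component values
# are arbitrary placeholders, not taken from the question.
V_S, R, L = 10.0, 4.0, 0.05
omega = 2 * np.pi * 50.0            # 50 Hz source
T = 2 * np.pi / omega               # one period

t = np.linspace(0.0, T, 200000, endpoint=False)   # uniform samples over one period
p = (V_S**2 / R) * np.cos(omega * t)**2 \
    + (V_S**2 / (omega * L)) * np.sin(omega * t) * np.cos(omega * t)

avg = p.mean()                      # discrete time average over the period
print(avg, V_S**2 / (2 * R))        # both ~12.5: the sin*cos term averages to zero
```

Changing $L$ changes the amplitude of the back-and-forth term but leaves the average untouched, which is exactly the statement that reactive power has zero time average.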
{ "domain": "physics.stackexchange", "id": 72459, "tags": "electromagnetism, electric-circuits, terminology" }
Are neurons roughly a magnetic dipole?
Question: I would like to make a simulation (concerning genetic evolution) and I want to use a very simplistic model of neurons. I don't really care there are different kinds of neurons, I don't care much that the extracellular field is different at axons-dendrites and initially, I don't even care that the spike is transferred through the axon. What I really want to know, is whether I can approximate (very roughly) the field with a magnetic dipole pointing at the direction of the axon. Or are the directions of the MF and the axon unrelated? I have looked around for some image depicting such a relation, but I didn't find any. Is there any resource of some kind available? Answer: Yes, neurons can be modeled as magnetic dipoles. In fact, that is the basis of MEG. Here is an example of a paper that instantiates a computational model of magnetic dipoles in order to explain MEG data Neural mechanisms of transient neocortical beta rhythms: Converging evidence from humans, computational modeling, monkeys, and mice.
{ "domain": "biology.stackexchange", "id": 6847, "tags": "neuroscience, neurophysiology, computational-model" }
Do all particles with zero rest mass and finite energy have to travel with speed of light in vacuum?
Question: Suppose there exist particles with zero rest mass ($m_0=0$) which are not photons. Let them have a non-zero finite energy $E$. Do they also, just like photons, travel through space with the speed of light? Answer: Yes, it can be derived from the energy-momentum relation of special relativity: $$ E^2 - (pc)^2 = (m_0c^2)^2 $$ which for $m_0 = 0$ becomes $$ E^2 - (pc)^2 = 0$$ $$\implies \frac E p = c \quad(*)$$ Since $$ E = \gamma m_0c^2$$ $$ p = \gamma m_0v$$ $$\implies \frac E p =\frac{c^2}{v} \quad(**)$$ $(*) = (**)$ yields $$ v = c$$
{ "domain": "physics.stackexchange", "id": 66411, "tags": "special-relativity, mass, speed-of-light, mass-energy, speed" }
Is there a way to use a GPS message to set computer date and time
Question: I'm working on a robot with a GPS and an OBC. I need to set up the date/time of the OBC each time it boots. I would like to use the GPS ros driver/navsatfix message to do that. Is there a package or a node to do that? Originally posted by elgarbe on ROS Answers with karma: 106 on 2022-09-17 Post score: 1 Answer: Ahoy there! I be assumin' OBC refers to just a generic on-board-computer. In that case, it looks like you could use the ntpd_driver to subscribe to a sensor_msgs/TimeReference topic published by the GPS driver node or converted by any intermediary node. See this previous ros answers post for more info: https://answers.ros.org/question/372848/synchronize-linux-against-pps-clock-with-piksi-multi-rtk/?answer=380317#post-id-380317 Looks like the same package is supported in the latest LTS release of ROS2 as well: https://index.ros.org/r/ntpd_driver/#humble I'd just note to be sure you are using or converting to sensor_msgs/TimeReference message type as intended: https://answers.ros.org/question/320089/what-is-the-correct-standard-way-of-using-sensor_msgstimereference/ Originally posted by ruffsl with karma: 1094 on 2022-09-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by elgarbe on 2022-09-20: that's exactly what I'm looking for. Thanks
{ "domain": "robotics.stackexchange", "id": 37983, "tags": "gps, ros-melodic" }
Why do black holes warp spacetime so much more than stars that have the same mass?
Question: If I have a black hole with a mass that's exactly the same mass as a star, why does the black hole warp spacetime so much more (light can’t escape) than a star (light can escape) with the exact same mass? Is it due to the black hole having a singularity, or it being more dense than the star, or something else? Answer: They don't. The gravitational field is the same outside all spherical objects with a given mass. But a black hole is much much smaller than a star with the same mass, so you have access to regions much closer to the center, where the gravitational field is stronger. You can certainly try going inside a star to get close to its center, but then the field stops increasing because most of the mass is now outside you. To illustrate, a black hole with the mass of the Sun would have a radius of around 3 km, while the Sun's radius is 700000 km. You have the same mass concentrated in a ball which is 1/200000 of the size, leading to a gravitational field at the black hole's surface 200000² times stronger than that at the Sun's surface.
{ "domain": "physics.stackexchange", "id": 50814, "tags": "general-relativity, black-holes, spacetime, stars" }
Is space-time a special form of energy?
Question: I know space-time can be influenced by matter and energy, so it must be somehow mingled in with the mix of it all, but does space-time have a fundamental particle? Can we make a little bit of space-time with enough energy? Might the Planck length & time quantize space-time? Answer: The Einstein field equations relate matter to the deformation of spacetime, i.e. $$\underbrace{R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R}_{\text{geometry}} = \underbrace{8\pi G T_{\mu\nu}}_{\text{matter}}$$ However, $T_{\mu\nu}=0$ does not imply a trivial solution. A non-trivial solution such as the Schwarzschild metric which describes a spherical body, e.g. a black hole, is a solution for a totally vanishing stress-energy tensor. However, as indicated in another answer, we may associate a mass to the solution, $$M=\frac{R}{2G}$$ in natural units, where $R$ is the Schwarzschild radius (distance from the center to the event horizon) and $G$ is the four-dimensional gravitational constant. As expected, in the limit $M\to 0$, $g_{\mu\nu}$ reduces to $$\mathrm{d}s^2 = \mathrm{d}t^2 - \mathrm{d}x^2 -\mathrm{d}y^2 - \mathrm{d}z^2$$ which is flat ($R^{a}_{bcd}=0$) Minkowski spacetime. Does spacetime have a fundamental particle? Spacetime itself is a manifold, and we do not associate a particle which literally comprises spacetime. However, the graviton is a gauge boson of spin $2$ which is believed to act as the mediator of gravitation, which is represented or interpreted as the deformation of spacetime.
{ "domain": "physics.stackexchange", "id": 13614, "tags": "quantum-field-theory, general-relativity, energy, string-theory, spacetime" }
Is there a better-than-brute-force algorithm to generate a graph whose relation is string edit distance=1?
Question: I'm interested in creating a graph whose vertices are strings, and whose edges represent the relation of having an edit distance of 1 under a given string metric. An obvious approach is to make all $\frac{n^2-n}{2}$ comparisons among the $n$ vertices, giving us $O(n^2)$ time complexity. Excluding parallelizing the comparisons, is there a better algorithm in terms of time complexity? I'm interested in string metrics where strings of different length are allowed. Answer: In the worst case any such algorithm will take $\Omega(n^2)$ time because your graph can have $\Omega(n^2)$ edges. By the way, are you interested in some particular string metric?
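For concreteness, the brute-force $O(n^2)$ construction the question describes can be sketched like this, using Levenshtein distance as the string metric:

```python
from itertools import combinations

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance (two-row version)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                              # deletion
                         cur[j - 1] + 1,                           # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))     # substitution
        prev = cur
    return prev[n]

def edit_distance_graph(words):
    """All-pairs construction: the O(n^2) comparisons from the question."""
    return [(u, v) for u, v in combinations(words, 2)
            if edit_distance(u, v) == 1]

print(edit_distance_graph(["cat", "cart", "bat", "dog"]))
```

As the answer notes, no algorithm can beat $\Omega(n^2)$ in the worst case, since merely writing down a graph with $\Omega(n^2)$ edges already costs that much.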
{ "domain": "cs.stackexchange", "id": 16176, "tags": "algorithms, graphs, time-complexity, weighted-graphs, string-metrics" }
$\lambda$ parameter for U3 gate in qiskit Bloch sphere visualisation
Question: It is easy to see how $\theta$ (rotation from the positive z-axis) and $\varphi$ (rotation about the z-axis, measured from the positive x-axis) affect the initial state of the qubit when looking at the Bloch sphere, but I have been playing around with different $\lambda$ values and can't see a clear transformation. Geometrically, how does lambda transform the state? Answer: $\lambda$ is also a rotation around the $z$ axis. However, there is an ordering issue. There's a sequence of $z$-rotation ($\lambda$), $y$-rotation ($\theta$), $z$-rotation ($\varphi$).
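The ordering can be checked numerically: up to a global phase, $U3(\theta,\varphi,\lambda) = R_z(\varphi)\,R_y(\theta)\,R_z(\lambda)$, i.e. $\lambda$ rotates first. A small NumPy sketch, assuming the standard U3 matrix convention:

```python
import numpy as np

def u3(theta, phi, lam):
    # Standard U3 matrix convention (assumed here)
    return np.array([
        [np.cos(theta / 2),                     -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),   np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

theta, phi, lam = 0.7, 1.1, 2.3
lhs = u3(theta, phi, lam)
# lambda rotates first, then theta about y, then phi about z;
# the leading exponential is only a global phase.
rhs = np.exp(1j * (phi + lam) / 2) * rz(phi) @ ry(theta) @ rz(lam)
print(np.allclose(lhs, rhs))  # True
```

This also explains why $\lambda$ is invisible when acting on $|0\rangle$: the first $z$-rotation only changes the phase of $|0\rangle$, which is a global phase until $\theta$ tilts the state off the pole.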
{ "domain": "quantumcomputing.stackexchange", "id": 1408, "tags": "qiskit, bloch-sphere" }
Rosbridge Server gives warning when I try to unadvertise or unsubscribe
Question: Hi there, I have a ROS Kinetic running Raspberry Pi 3 B (I have installed ros as a ROSberry installation, from the source as suggested), which runs Rosbridge Udp Server over it. I am getting in contact with it via a Windows-10 Laptop which has a c++ program running to send JSON's to the Raspberry Pi. When I send advertisement or subscription requests, the system runs perfectly; I am able to publish/receive messages and create topics easily. After I am done with the messaging, I want to unadvertise a topic (and unsubscribe) so that the topic will be killed by the ROS Master since there will be no subscribers or publishers left. I am sending the following JSON's to unsubscribe and unadvertise: { "op": "unsubscribe", "id": "C-R", "topic": "top2" } //for unsubscribing { "op": "unadvertise", "id": "yp2", "topic": "top2" } //for unadvertising then I get the following warning. I specifically added ID parts to the JSON's just to make sure that ROS understands the request is coming from me and that I am asking ROS to make me unadvertise and unsubscribe. Why am I getting this warning and how can I fix it? [WARN] [1564562740.474205]: Could not process inbound connection: [/rosbridge_udp] is not a publisher of [/top2]. Topics are [['/client_count', 'std_msgs/Int32'], ['/rosout', 'rosgraph_msgs/Log']]{'message_definition': 'string data\n', 'callerid': '/rosbridge_udp', 'tcp_nodelay': '0', 'md5sum': '992ce8a1687cec8c8bd883ec73ca41d1', 'topic': '/top2', 'type': 'std_msgs/String'} [WARN] [1564562741.499281]: Could not process inbound connection: [/rosbridge_udp] is not a publisher of [/top2].
Topics are [['/client_count', 'std_msgs/Int32'], ['/rosout', 'rosgraph_msgs/Log']]{'message_definition': 'string data\n', 'callerid': '/rosbridge_udp', 'tcp_nodelay': '0', 'md5sum': '992ce8a1687cec8c8bd883ec73ca41d1', 'topic': '/top2', 'type': 'std_msgs/String'} [WARN] [1564562743.519575]: Could not process inbound connection: [/rosbridge_udp] is not a publisher of [/top2]. Topics are [['/client_count', 'std_msgs/Int32'], ['/rosout', 'rosgraph_msgs/Log']]{'message_definition': 'string data\n', 'callerid': '/rosbridge_udp', 'tcp_nodelay': '0', 'md5sum': '992ce8a1687cec8c8bd883ec73ca41d1', 'topic': '/top2', 'type': 'std_msgs/String'} Originally posted by mmp52 on ROS Answers with karma: 80 on 2019-07-31 Post score: 2 Answer: See here: https://github.com/RobotWebTools/rosbridge_suite/issues/138 It seems like it is actually a bug in rospy (which rosbridge_suite is written in). Apparently unpublishing has been broken since 2013 (https://github.com/ros/ros_comm/issues/111). So the issue does not arise when your client unpublishes/unsubscribes from the rosbridge server, but when the rosbridge_server itself unpublishes the ros topic. A workaround is to set the unregister_timeout argument to a huge value, like 1 year, so rosbridge_server never unpublishes any topics. If you are publishing/subscribing and unpublishing/unsubscribing from a lot of different topics, this might become a problem over time. This situation is quite disappointing in my opinion. Originally posted by AndreasVH with karma: 36 on 2019-09-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by gvdhoorn on 2019-09-19: This situation is quite disappointing in my opinion. This may not count as an excuse, but rosbridge_suite, as with just about everything in ROS, is an OSS project. It will depend on volunteers and maintainers to get things fixed. Unless someone spends some effort to fix this in rospy and submits that fix, nothing will change.
{ "domain": "robotics.stackexchange", "id": 33561, "tags": "ros, ros-kinetic, rosbridge, rosbridge-suite, rosbridge-server" }
Long wavelength limit of sound wave
Question: I have often heard the statement that the deformation of the system corresponding to a sound wave in the limit $k\rightarrow 0$ must go over into a displacement of the system as a whole. I don't understand this statement well. Is there any clear picture for this? Sorry for my stupid question. For example, see page 165 of Statistical Physics of Particles, Mehran Kardar, about vibration of solids. Answer: A Short Answer Sound wave (or crystal vibration) is described in terms of the displacement of each atom from its equilibrium position. This displacement has the form $$ u_n=Ae^{i(kx_n^0-\omega t)}$$ Then, the position of the $n^{th}$ atom is given by $x_n=x_n^0+u_n$ where $x_n^0$ is the equilibrium position of atom $n$. At $k=0$, the displacement is independent of the position; it is $$ u_n(k=0)=Ae^{-i\omega t}$$ which means that at any time $t$, all atoms are displaced by the same quantity, which you can think of as moving the whole solid by the same amount. You can take a look at Introduction to Solid State Physics, by Kittel, C. [chapter 4] or Solid State Physics, by Ashcroft and Mermin for more.
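A tiny numerical illustration of the $k\to 0$ statement (amplitude, frequency and lattice spacing below are arbitrary placeholders):

```python
import numpy as np

# Displacement field u_n = A * exp(i (k x_n^0 - w t)) for a 1D chain of atoms;
# at k = 0 every atom is displaced by the same complex amount.
A, omega, t = 0.1, 2.0, 0.3
x0 = np.arange(10) * 1.5          # equilibrium positions, lattice constant 1.5 (arbitrary)

def displacement(k):
    return A * np.exp(1j * (k * x0 - omega * t))

u_k0 = displacement(0.0)
print(np.allclose(u_k0, u_k0[0]))                            # True: rigid translation of the chain
print(np.allclose(displacement(0.8), displacement(0.8)[0]))  # False: finite k moves atoms differently
```

This is why the $k=0$ mode costs zero energy (and has $\omega\to 0$ for acoustic branches): translating the whole crystal does not stretch any bonds.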
{ "domain": "physics.stackexchange", "id": 86947, "tags": "condensed-matter, solid-state-physics, acoustics" }
Stim - Error Detector Model - Understanding approximate_disjoint_errors and decompose_errors
Question: I am looking for clarification on two arguments for the detector error models used in Stim, approximate_disjoint_errors and decompose_errors (i.e. calling circuit.detector_error_model(decompose_errors=True,approximate_disjoint_errors=True)). I am having a hard time understanding them from documentation alone. At a high level, is the simulation more accurate when either of these parameters is set to True? What would be the driving motivation to set these arguments to True? At a lower level, will an exception thrown as a result of crossing either parameter's threshold cause the simulation to halt? Or will the simulation continue to run with 0 or nans being returned as the logical error rates when using sinter? Finally, when running a circuit, does Stim or Sinter ever set these parameters to True outside of what has been passed to them by the user? I would appreciate any insight. Thank you for your time! Answer: is the simulation more accurate when either of these parameters are set to True Neither of these options will affect the accuracy of simulations driven by the circuit (circuit.compile_sampler() / circuit.compile_detector_sampler()). They could only affect simulations driven from the detector error model itself (circuit.detector_error_model().compile_sampler()). The decompose_errors=True option cannot affect accuracy, because it doesn't throw anything away. When I decompose error(0.01) D1 D2 D3 into error(0.01) D1 D2 ^ D3, you can still tell what the full error is. The approximate_disjoint_errors=True option can affect accuracy. Probably the most blatant example of this is heralded erasures gaining a small probability of not being heralded.
For example: import stim c = stim.Circuit(""" HERALDED_ERASE(0.1) 0 1 MZZ 0 1 DETECTOR rec[-1] # MZZ DETECTOR rec[-2] # Herald 0 DETECTOR rec[-3] # Herald 1 """) print(c.detector_error_model(approximate_disjoint_errors=True)) # error(0.05) D0 D1 # error(0.05) D0 D2 # error(0.05) D1 # error(0.05) D2 The four error mechanisms in the DEM are independent. So the D0 D1 error can occur at the same time as the D1 error, producing the symptom D0. But that corresponds to the MZZ failing without any heralding, which should be impossible because there are no unheralded errors in the circuit. Ignoring the disjointness has created a 0.0025 probability of a missing herald. will an exception thrown as a result of crossing either parameter's threshold cause the simulation to halt The simulation will not throw an exception. But I guess if you have a program analyzing the samples you took from a dem, instead of the circuit, the analysis may throw an exception when it sees impossible things like unheralded errors in a circuit that was supposed to have only heralded errors. when running a circuit, does Stim or Sinter ever set these parameters to True outside of what has been passed to them by the user? Stim will never turn these options on without being explicitly told to do so by the user. In general, Stim forces the user to opt into anything that could reduce accuracy or that might not work on all circuits. Sinter does use approximate_disjoint_errors=True when it infers a dem for a circuit. You can override this by specifying the detector error model in the sinter.Task instances you give to sinter.collect, instead of having sinter infer them from the circuit.
{ "domain": "quantumcomputing.stackexchange", "id": 5534, "tags": "error-correction, stim" }
Conformal algebra
Question: I am reading Dr. Joshua Qualls' lectures on conformal field theory. https://arxiv.org/abs/1511.04074 In section 2.4 Conformal group he defined the generators $$ \begin{aligned} J_{\mu,\nu}&=L_{\mu,\nu}\\ J_{-1,\mu} &=\frac{1}{2}(P_{\mu}-K_{\mu})\\ J_{0,\mu} &= \frac{1}{2}(P_{\mu}+K_{\mu})\\ J_{-1,0} &=D \end{aligned} \tag{2.28} $$ where $P,K,D$ and $L$ are the translation, special conformal transformation, dilatation and angular momentum generators. Then he gave this commutation relation $$ [J_{mn},J_{pq}]=i(\eta_{mq}J_{np}+\eta_{np}J_{mq}-\eta_{mp}J_{nq}-\eta_{nq}J_{mp}).\tag{2.29} $$ I was wondering how to derive this. In Weinberg's QFT 1, he considered the infinitesimal transformation of $U(\lambda,a)$ and then worked out the final commutation relation. How can I do this using the same method that Weinberg did, but now for the full conformal transformation? Or if you know any other derivation, please share it. Answer: The conformal algebra in $d$-dimensional Minkowski space is isomorphic to so(d-n+1, n+1), where $n$ is the number of time-like directions. So in 4d the conformal algebra can be recast as so(4,2). Just plug in the definitions and confirm the commutation relations hold. It’s an exercise that everyone studying CFTs should do.
{ "domain": "physics.stackexchange", "id": 48979, "tags": "group-theory, conformal-field-theory, lie-algebra" }
bno055 IMU doesn't give absolute orientation
Question: Hello, For my robot I am using the Adafruit bno055 9DOF absolute orientation IMU. However, the orientation given is relative to its starting position, not the magnetic north pole. I have tried using the rtimulib_ros package as well as ros-imu-bno055 but in both cases the result is the same. I know the magnetometer is working since I can see its output in one of the topics published by ros-imu-bno055 and those values change if I move a magnet close to it, but I'll admit that I don't really know how to interpret that data so I'm not sure what kind of value I should be seeing. The IMU is connected to a Jetson Nano via I2C. Thanks! Originally posted by anonymous58627 on ROS Answers with karma: 1 on 2020-08-26 Post score: 0 Answer: That is expected behavior. It's absolute orientation relative to startup. Some IMUs won’t even do that fusion for you. Originally posted by stevemacenski with karma: 8272 on 2020-08-26 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by anonymous58627 on 2020-08-27: I see. Is there an existing package that will allow me to use the magnetometer to set the origin to the magnetic north? Comment by flynneva on 2021-02-09: hi @jeremie, I've been working on a ROS2 bno055 package for awhile now with plans on adding I2C support relatively shortly. might be of interest to you: https://github.com/flynneva/bno055
{ "domain": "robotics.stackexchange", "id": 35467, "tags": "imu, ros-melodic" }
Why is the cost increasing in the linear regression method?
Question: I am implementing the linear regression model from scratch. #!/usr/bin/env python2 # -*- coding: utf-8 -*- """ Created on Thu Nov 16 14:40:53 2017 @author: user """ import os import random os.chdir('/home/user/Desktop/andrewng/machine-learning-ex1/ex1') import pandas as pd data = pd.read_csv('/home/user/Desktop/andrewng/machine-learning-ex1/ex1/ex1data1.txt',header=None) theta_0 = random.random() theta_1 = random.random() alpha = 0.001 print(len(data)) hist = -90 cost = 0 print('theta_0 + theta_1 ') while(cost-hist>0.001): hist = cost cost = 0 a = 0 b = 0 for i in range(len(data)): k = data.iloc[i] a = a + theta_0 + theta_1*k[0] - k[1] b = b + (theta_0 + theta_1*k[0] - k[1])*k[0] theta_0 = theta_0 - alpha*a/len(data) theta_1 = theta_1 - alpha*b/len(data) #print(str(theta_0)+' '+str(theta_1)) for j in range(len(data)): k = data.iloc[i] cost = cost + (theta_0 + theta_1*k[0] - k[1])**2 cost = cost/(2*len(data)) print(cost) #if(cost>hist): # print(str(theta_0)+' '+str(theta_1)) # break print(str(theta_0)+' '+str(theta_1)) According to the theory the cost should decrease at each iteration but for me the cost keeps increasing. (DATA) Answer: The reason is your learning rate alpha is too large for this optimization problem. Start with a really small value (< 0.000001) and you will observe a decrease in your cost function. Keep in mind that when the learning rate is too large, the gradient descent algorithm will miss the global minimum (global because MSE cost function is convex) and will diverge. This is why you have observed still increasing cost function values with alpha = 0.001.
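To illustrate the answer's point, here is a self-contained sketch of the same batch gradient descent on synthetic data (a stand-in for ex1data1.txt, which is not reproduced here): with a small enough alpha the cost decreases every iteration, while a large alpha makes it diverge.

```python
import numpy as np

# Synthetic stand-in for the course data: y roughly linear in x, with noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 20, 100)
y = 2.5 * x + 1.0 + rng.normal(0, 1, 100)

def descend(alpha, steps):
    """Batch gradient descent for y ~ t0 + t1*x; returns the cost after each step."""
    t0 = t1 = 0.0
    costs = []
    for _ in range(steps):
        err = t0 + t1 * x - y
        t0 -= alpha * err.mean()            # simultaneous update of both parameters
        t1 -= alpha * (err * x).mean()
        costs.append(((t0 + t1 * x - y) ** 2).mean() / 2)
    return costs

small = descend(1e-3, 200)   # small step: cost shrinks monotonically
large = descend(0.5, 10)     # step far above the stability limit: cost explodes
print(small[-1] < small[0])  # True
print(large[-1] > large[0])  # True
```

The stability threshold scales like the inverse of the largest eigenvalue of the data's Hessian (roughly 1/mean(x²) here), which is why rescaling the features also lets you use a larger learning rate.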
{ "domain": "datascience.stackexchange", "id": 7215, "tags": "machine-learning, python, linear-regression" }
Is Coulomb written/found on any object/product as its unit?
Question: Today I had physics class and the unit Coulomb made me wonder as to why it was even invented when I don't see it being used anywhere. When I asked where it may be shown just like bulbs have x Watts and y Volts, my teacher told me to search it up, and I did! But I couldn't find it anywhere. Please tell me where it might be seen, thanks in advance! :) (Q = It i.e. charge = current × time, its SI unit is Coulomb) Answer: The fact that you don't find a coulomb label on any product doesn't mean that the product isn't based on charge storage. For example, a rechargeable Ni battery with the label $2000~\text{mAh}$ stores (when fully charged) $7200~\text{C}$ of charge, or equivalently about $10^{22}$ electrons. The reason why battery labeling uses the $It$ notation rather than $Q$ is a practical one. For example, suppose that you have a music player which uses the above-mentioned battery and when playing music draws from it ~$20~\text{mA}$ of current; then you know that at this rate you can listen to your favorite music for $(2000~\text{mAh})/(20~\text{mA}) = 100~\text{h}$, or about 4 days. So it's more convenient to know ASAP how much time a fully charged battery can support a steady current of $x$ amperes. In addition, capacitors also store electric charge. They are used everywhere in electric circuits (your computer motherboard, for example). A capacitor, on the other hand, is measured by its capacitance: $$ C = \frac QV \tag 1 $$ i.e., how much charge it can store for a given voltage between its two terminals. That's why it is measured in farads, not in coulombs directly. But you can calculate the total charge a capacitor stores. For example, a capacitor with a capacitance of $4700~\mu F$ and a terminal voltage of $35~V$ will store $164~\text{mC}$ of charge on its plates.
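The arithmetic in the answer can be spelled out in a few lines (all numbers are taken directly from the answer above):

```python
# Battery: 2000 mAh expressed in coulombs (1 Ah = 3600 C).
mAh = 2000
charge_C = mAh * 1e-3 * 3600          # -> 7200 C
electrons = charge_C / 1.602e-19      # divide by the elementary charge, ~4.5e22

# Playback time at a steady 20 mA draw.
play_hours = mAh / 20.0               # -> 100 h

# Capacitor: Q = C * V for the 4700 uF, 35 V example.
cap_charge = 4700e-6 * 35             # -> ~0.1645 C = 164.5 mC
print(charge_C, electrons, play_hours, cap_charge)
```

The same $Q = It$ bookkeeping is what a phone's "battery percentage" estimate does continuously: it integrates the measured current over time (coulomb counting) even though the unit never appears on the label.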
{ "domain": "physics.stackexchange", "id": 98025, "tags": "charge, si-units, metrology" }
Can Le Chatelier's principle be derived?
Question: Le Chatelier's principle says: If a constraint (such as a change in pressure, temperature, or concentration of a reactant) is applied to a system in equilibrium, the equilibrium will shift so as to tend to counteract the effect of the constraint. This seems like an analogue of Newton's first law of motion, where we talk of inertia resisting a change in the state of rest or motion of a body. Now in physics we have General Relativity, whence we can derive Newton's laws. Is there a corresponding general theory in the domain of chemical kinetics too, based on sounder mathematical principles rather than just empirical observations, whence we can derive Le Chatelier's principle as a special case? Any suggestions are thankfully invited. Answer: Yes. To prove Le Chatelier's principle doesn't require a general theory of chemical kinetics, just an understanding of (thermodynamic!) fluctuations and response. As usual, Callen's Thermodynamics and an Introduction to Thermostatistics is a good reference. Ch. 8.5 is relevant, and I shall simply recapitulate Callen's development below. (Incidentally, I think this principle is closer to Newton's third law of action and reaction, but in general thermodynamic laws have a very different flavor from mechanical laws because of their underlying structure. Thermodynamics deals with nebulous relationships; mechanics with definite equations of motion.) In fact, we will prove an extra statement, the Le Chatelier-Braun principle. This principle states that the secondary effects induced by a perturbation also serve to curtail it. This augments the Le Chatelier principle, that the primary effect induced by a perturbation serves to curtail it. I will steal Callen's example of a system immersed within a pressure and temperature reservoir, with diathermal walls and a movable domain wall with which to control its volume. The wall is moved slightly outward, causing a positive volume change $\text{d}V$.
The primary effect is the decrease in pressure of the system, which leads to a driving force to decrease the system's volume. A secondary effect is the change in temperature $\text{d}T$ resulting from this change in volume, $$\text{d}T = \left(\frac{\partial T}{\partial V}\right)_S\text{d}V = -\frac{T\alpha}{Nc_v\kappa_T}\text{d}V.$$ The prefactor is unimportant; we care only about the sign of the result. All variables are positive except for $\alpha$, which is of variable sign. For now, we can assume it positive. The reduction of temperature in the system then drives heat flow into it, which will itself affect the pressure of the system: $$\text{d}P = \left(\frac{\partial P}{\partial S}\right)_V\frac{\text{d}Q}{T} = \frac{\alpha}{NT^2c_v\kappa_T}\text{d}Q.$$ This change in pressure is positive and diminishes the effect of the original perturbation. The same result is obtained if one takes $\alpha$ negative. Let us consider a thermodynamic system with first law $$\text{d}U = f_1\,\text{d}X_1 + f_2\,\text{d}X_2.$$ It is coupled to a reservoir, itself with first law $$\text{d}U' = f_1'\,\text{d}X_1' + f_2'\,\text{d}X_2' = -f_1'\,\text{d}X_1 - f_2'\,\text{d}X_2,$$ noting that $\text{d}X_i' = -\text{d}X_i$, because $X_i+X_i'$ is assumed fixed. A perturbation $\text{d}X_1^\text{pert}$ drives fluctuations $$\text{d}f_1^\text{fluc} = \frac{\partial f_1}{\partial X_1}\text{d}X_1^\text{pert} \quad \text{and} \quad \text{d}f_2^\text{fluc} = \frac{\partial f_2}{\partial X_1}\text{d}X_1^\text{pert}.$$ These fluctuations in intensive quantities themselves lead to responses $\text{d}X_1^\text{resp}$ and $\text{d}X_2^\text{resp}$. 
The signs of these responses can be determined by minimizing the total energy of the system and reservoir given the initial perturbation, $$\text{d}(U+U') = (f_1-f_1')\,\text{d}X_1^\text{resp} + (f_2-f_2')\,\text{d}X_2^\text{resp} = \text{d}f_1^\text{fluc}\text{d}X_1^\text{resp} + \text{d}f_2^\text{fluc}\text{d}X_2^\text{resp} \leq 0.$$ Because $X_1$ and $X_2$ are independent variables, each term in the sum must be negative, and we have $$\text{d}f_1^\text{fluc}\text{d}X_1^\text{resp} \leq 0 \quad \text{and} \quad \text{d}f_2^\text{fluc}\text{d}X_2^\text{resp} \leq 0.$$ The first equation yields $$0 \geq \text{d}f_1^\text{fluc}\text{d}X_1^\text{resp} \quad \Longleftrightarrow \quad 0 \geq \text{d}f_1^\text{fluc}\frac{\partial f_1}{\partial X_1}\text{d}X_1^\text{resp} = \text{d}f_1^\text{fluc}\text{d}f_1^{\text{resp}(1)},$$ where we have multiplied both sides of the inequality by $\partial f_1/\partial X_1$, which must be positive by stability. $\text{d}f_1^{\text{resp}(1)}$ is the response of $f_1$ to the fluctuation due to $X_1$ only, and so we have the Le Chatelier principle. The second equation yields $$0 \geq \text{d}f_2^\text{fluc}\text{d}X_2^\text{resp} = \frac{\partial f_2}{\partial X_1}\text{d}X_1^\text{pert}\text{d}X_2^\text{resp},$$ and hence that $$0 \geq \frac{\partial f_1}{\partial X_1}\text{d}X_1^\text{pert}\frac{\partial f_1}{\partial X_2}\text{d}X_2^\text{resp} = \text{d}f_1^\text{fluc}\text{d}f_1^{\text{resp}(2)},$$ where we have both used a Maxwell relation and multiplied both sides of the inequality by $\partial f_1/\partial X_1$. This is the Le Chatelier-Braun principle.
{ "domain": "chemistry.stackexchange", "id": 10948, "tags": "kinetics, theoretical-chemistry" }
Min & Max heap implementation
Question: As follows I have implemented Min and Max heap data structure, using an array for storing elements. I would need a code review, please. Please, spare recommendations for generic implementation as they are not useful in my current requirements. I have written down units tests and I'm sharing those, too. Abstract Class public abstract class AbstractHeap { #region internal properties private int Capacity { get; set; } internal int Size { get; set; } internal int[] Nodes { get; set; } #endregion #region constructors public AbstractHeap() { Capacity = 100; Size = 0; Nodes = new int[Capacity]; } #endregion #region helperMethods public void EnlargeIfNeeded() { if (Size == Capacity) { Capacity = 2 * Capacity; Array.Copy(Nodes, Nodes, Capacity); } } public int getLeftChildIndex(int parentIndex) { return 2 * parentIndex + 1; } public bool hasLeftChild(int parentIndex) { return getLeftChildIndex(parentIndex) < Size; } public int leftChild(int index) { return Nodes[getLeftChildIndex(index)]; } public int getRightChildIndex(int parentIndex) { return 2 * parentIndex + 2; } public bool hasRightChild(int parentIndex) { return getRightChildIndex(parentIndex) < Size; } public int rightChild(int index) { return Nodes[getRightChildIndex(index)]; } public int getParentIndex(int childIndex) { return (childIndex - 1) / 2; } public bool hasParent(int childIndex) { return getParentIndex(childIndex) >= 0; } public int parent(int index) { return Nodes[getParentIndex(index)]; } public void swap(int index1, int index2) { int temp = Nodes[index1]; Nodes[index1] = Nodes[index2]; Nodes[index2] = temp; } #endregion #region available public methods /// <summary> /// Gets the minimum element at the root of the tree /// </summary> /// <returns>Int value of minimum element</returns> /// <exception cref="">InvalidOperationException when heap is empty</exception> public int peek() { if (Size == 0) throw new InvalidOperationException("Heap is empty"); return Nodes[0]; } /// <summary> /// 
Removes the minimum element at the root of the tree /// </summary> /// <returns>Int value of minimum element</returns> /// <exception cref="">InvalidOperationException when heap is empty</exception> public int pop() { if (Size == 0) throw new InvalidOperationException("Heap is empty"); int item = Nodes[0]; Nodes[0] = Nodes[Size - 1]; Size--; heapifyDown(); return item; } /// <summary> /// Add a new item to heap, enlarging the array if needed /// </summary> /// <returns>void</returns> public void add(int item) { EnlargeIfNeeded(); Nodes[Size] = item; Size++; heapifyUp(); } #endregion #region abstract methods internal abstract void heapifyUp(); internal abstract void heapifyDown(); #endregion } Max Heap implementation using abstract class public class MaxHeap : AbstractHeap { #region constructors public MaxHeap() : base() { } #endregion #region internal methods internal override void heapifyDown() { int index = 0; while (hasLeftChild(index)) { int largerChildIndex = getLeftChildIndex(index); if (hasRightChild(index) && rightChild(index) > leftChild(index)) { largerChildIndex = getRightChildIndex(index); } if (Nodes[largerChildIndex] > Nodes[index]) swap(index, largerChildIndex); else break; index = largerChildIndex; } } internal override void heapifyUp() { int index = Size - 1; while (hasParent(index) && parent(index) < Nodes[index]) { swap(index, getParentIndex(index)); index = getParentIndex(index); } } #endregion } Min Heap implementation using abstract class public class MinHeap : AbstractHeap { #region constructors public MinHeap() : base() { } #endregion #region internal methods internal override void heapifyDown() { int index = 0; while (hasLeftChild(index)) { int smallerChildIndex = getLeftChildIndex(index); if (hasRightChild(index) && rightChild(index) < leftChild(index)) { smallerChildIndex = getRightChildIndex(index); } if (Nodes[smallerChildIndex] < Nodes[index]) swap(index, smallerChildIndex); else break; index = smallerChildIndex; } } internal override 
void heapifyUp() { int index = Size - 1; while (hasParent(index) && parent(index) > Nodes[index]) { swap(index, getParentIndex(index)); index = getParentIndex(index); } } #endregion } Unit tests for max heap [TestClass] public class MaxHeapTests { [TestMethod] public void AddEmptyRemove() { var heap = new MaxHeap(); heap.add(10); Assert.AreEqual(10, heap.peek()); int result = heap.pop(); Assert.AreEqual(10, result); heap.add(20); Assert.AreEqual(20, heap.peek()); } [TestMethod] public void AddMultipleCheckPeek() { var heap = new MaxHeap(); foreach (int item in new int[] { 10, 20, 2, 45, 7, 5, 12 }) heap.add(item); Assert.AreEqual(heap.peek(), 45); } [TestMethod] public void AddMultipleCheckPopThenPeek() { var heap = new MaxHeap(); foreach (int item in new int[] { 10, 20, 2, 45, 7, 5, 12 }) heap.add(item); int result = heap.pop(); Assert.AreEqual(45, result); Assert.AreEqual(20, heap.peek()); } [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void PeekPoopEmpty() { var heap = new MaxHeap(); heap.peek(); heap.pop(); } } Unit tests for min heap [TestClass] public class MinHeapTests { [TestMethod] public void AddEmptyRemove() { var heap = new MinHeap(); heap.add(10); Assert.AreEqual(10, heap.peek()); int result = heap.pop(); Assert.AreEqual(10, result); heap.add(20); Assert.AreEqual(20, heap.peek()); } [TestMethod] public void AddMultipleCheckPeek() { var heap = new MinHeap(); foreach (int item in new int[] { 10, 20, 2, 45, 7, 5, 12 }) heap.add(item); Assert.AreEqual(heap.peek(), 2); } [TestMethod] public void AddMultipleCheckPopThenPeek() { var heap = new MinHeap(); foreach (int item in new int[] { 10, 20, 2, 45, 7, 5, 12 }) heap.add(item); int result = heap.pop(); Assert.AreEqual(2, result); Assert.AreEqual(5, heap.peek()); } [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void PeekPoopEmpty() { var heap = new MinHeap(); heap.peek(); heap.pop(); } } Answer: public abstract class AbstractHeap { #region internal 
properties private int Capacity { get; set; } internal int Size { get; set; } internal int[] Nodes { get; set; } Capacity doesn't seem to serve any purpose at all. It just mirrors Nodes.Length and is a potential source of bugs. Why should subclasses be able to access the setters of Size and Nodes? I think they should be private. public void EnlargeIfNeeded() { if (Size == Capacity) { Capacity = 2 * Capacity; Array.Copy(Nodes, Nodes, Capacity); } } This is buggy. Add a unit test which inserts more than 100 elements into a heap, watch it turn red, and then fix it. public int getLeftChildIndex(int parentIndex) { return 2 * parentIndex + 1; } public bool hasLeftChild(int parentIndex) { return getLeftChildIndex(parentIndex) < Size; } public int leftChild(int index) { return Nodes[getLeftChildIndex(index)]; } Is there any particular reason for not using expression-bodied methods? Is there any particular reason for not following standard C# style and using UpperCamelCase for the method names? To me it seems a bit overkill to have three methods for each of left, right, and parent, but that's a question of style and I can see an argument that it's for readability of the upheap and downheap methods. On the other hand, why are all of these methods (and Swap) public? That's exposing far too much of the internal implementation. internal abstract void heapifyUp(); internal abstract void heapifyDown(); I really can't understand why these methods, which are the most complicated ones in the whole class, should be implemented twice. I would much rather implement them once, in the base class, and handle the differences by means of an IComparer field or an abstract method Compare(int a, int b).
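To sketch that last point concretely (in Python rather than C#, with illustrative names; in C# the analogue would be an IComparer<int> field or an abstract Compare(int a, int b)): one heap implementation whose ordering comes from a key function, so min- and max-heaps share all the logic:

```python
# One heap implementation; the ordering is supplied by `key`, so the
# min/max variants differ only in the key they pass in. (Python stand-in
# for the IComparer idea suggested above; names are illustrative.)

class Heap:
    def __init__(self, key=lambda x: x):
        self._key = key          # min-heap on key(x)
        self._nodes = []

    def add(self, item):
        self._nodes.append(item)
        self._heapify_up(len(self._nodes) - 1)

    def pop(self):
        if not self._nodes:
            raise IndexError("heap is empty")
        top = self._nodes[0]
        last = self._nodes.pop()
        if self._nodes:
            self._nodes[0] = last
            self._heapify_down(0)
        return top

    def _heapify_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._key(self._nodes[i]) < self._key(self._nodes[parent]):
                self._nodes[i], self._nodes[parent] = self._nodes[parent], self._nodes[i]
                i = parent
            else:
                break

    def _heapify_down(self, i):
        n = len(self._nodes)
        while True:
            left, right = 2 * i + 1, 2 * i + 2
            smallest = i
            if left < n and self._key(self._nodes[left]) < self._key(self._nodes[smallest]):
                smallest = left
            if right < n and self._key(self._nodes[right]) < self._key(self._nodes[smallest]):
                smallest = right
            if smallest == i:
                break
            self._nodes[i], self._nodes[smallest] = self._nodes[smallest], self._nodes[i]
            i = smallest

MinHeap = Heap                                  # natural order

def MaxHeap():
    return Heap(key=lambda x: -x)               # reversed order
```

With this shape, the min and max variants are one line each, and the sift-up/sift-down code exists exactly once.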
{ "domain": "codereview.stackexchange", "id": 25495, "tags": "c#, heap" }
Why do objects burn when they enter earth's atmosphere?
Question: Why do all objects burn when they enter our atmosphere? Is this because of our atmosphere's composition? And does this happen on other planets as well? Answer: You'll often hear that it's because of friction, but that's often not the main factor. For larger objects it's more likely the pressure they create. In both cases the reason is the enormous speed, often tens of kilometers per second. When a larger object enters the atmosphere at these speeds the air in front of it gets compressed so much that it becomes extremely hot. (Think of pumping up a tire; you're also compressing air and you can feel the valve becoming hot.) The compressed air will often disintegrate the object in the air, and then the debris may burn because of the heat. This is exactly what happened to the asteroid above Russia last year: it exploded with an enormous flash in the air, and left few traces on the ground. This happens on other planets as well, if they have a sufficiently dense atmosphere. In 1994 the comet Shoemaker–Levy 9 crashed into Jupiter. It disintegrated before entering Jupiter's atmosphere due to the strong gravitation, but when the fragments entered the atmosphere they could easily be seen lighting up as they burned up. edit Remember the Space Shuttle? It had heat-resistant tiles on the bottom of the craft to protect it from burning when it entered the atmosphere, even though its speed was only a fraction of a meteorite's speed when that enters the atmosphere. During the last launch of the Space Shuttle Columbia some material from the external fuel tank damaged this heat shield, and upon re-entry the heat and the highly pressurized air under the craft could enter it, causing the craft to disintegrate and killing the entire crew.
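A rough way to see how extreme this compression heating is: for an ideal gas, the stagnation temperature in front of the object is $T_0 = T\,(1 + \frac{\gamma-1}{2}M^2)$. The ambient temperature (250 K) and speed of sound (300 m/s) below are ballpark assumptions, and real entry flows dissociate and ionise, which caps the temperature far below this ideal-gas number — but it still comes out in the thousands of kelvin:

```python
# Back-of-the-envelope illustration of compression heating:
# ideal-gas stagnation temperature T0 = T * (1 + (gamma - 1)/2 * M^2).
# Real entries dissociate/ionise the air, which soaks up energy, so
# actual gas-cap temperatures are much lower -- but still enormous.

def stagnation_temperature(t_ambient_k, speed_m_s, gamma=1.4, sound_speed_m_s=300.0):
    mach = speed_m_s / sound_speed_m_s
    return t_ambient_k * (1.0 + (gamma - 1.0) / 2.0 * mach ** 2)

for v_km_s in (8, 20, 70):   # shuttle re-entry, typical asteroid, fast comet
    t0 = stagnation_temperature(250.0, v_km_s * 1000.0)
    print(f"{v_km_s:2d} km/s -> ideal-gas stagnation temperature ~ {t0:,.0f} K")
```

Even at the Shuttle's comparatively modest ~8 km/s, the ideal-gas figure is tens of thousands of kelvin, which is why re-entry needs a heat shield at all.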
{ "domain": "astronomy.stackexchange", "id": 271, "tags": "planetary-atmosphere" }
roscore runs only as root, Ubuntu 12.04 Fuerte
Question: Hi, I cannot start roscore without root rights. I am getting the following message: me/max/.ros/log/f45ff58e-a35a-11e1-9b99-001fc603da02/roslaunch-rue-3228.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. Traceback (most recent call last): File "/usr/local/bin/rosversion", line 4, in <module> import pkg_resources File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2711, in <module> parse_requirements(__requires__), Environment() File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: rospkg==1.0.2 Invalid <param> tag: Cannot load command parameter [rosversion]: command [rosversion ros] returned with code [1]. Param xml is <param command="rosversion ros" name="rosversion"/> Running the core with root rights works: sudo -i source /opt/ros/fuerte/setup.bash roscore I am using Ubuntu 12.04 32bit with ROS Fuerte and I installed everything based on the PPA server. Any ideas? greetings Originally posted by Markus Bader on ROS Answers with karma: 847 on 2012-05-21 Post score: 0 Original comments Comment by dornhege on 2012-05-21: In a non-root shell, what do you get for rosversion ros Comment by joq on 2012-05-21: Are you sourcing setup.bash in both cases? Comment by Eric Perko on 2012-05-21: PPA server? You mean either packages.ros.org or one of the mirrors listed on the wiki ( http://ros.org/wiki/ROS/Installation/UbuntuMirrors )? Comment by Markus Bader on 2012-05-21: @dornhege rosversion ros crashes as well; if I run it as root it gives me 1.8.7 @joq yes, I am sourcing the setup.bash in both cases @Eric With PPA server I mean packages.ros.org -> no mirror was used
The attempt places stuff under /usr/local/bin, and since I was executing some of the commands with sudo, the rights were set to root only. Installing the Fuerte ROS based on the PPA worked well, but old binaries under /usr/local/bin were already placed within the search path before the others and were executed instead. This led to the permission problems. The solution was to delete the related old stuff: cd /usr/local/bin rm -rf pip pip-2.7 rosco rosdep rosdep-gbp-brew rosdep-source rosinstall roslocate rosversion rosws virtualenv cd /usr/local/lib rm -rf python2.7 # sudo apt-get purge python-pip python-dev build-essential # should not be necessary but I did it anyway sudo apt-get install ros-fuerte-desktop Greetings Originally posted by Markus Bader with karma: 847 on 2012-05-22 This answer was ACCEPTED on the original site Post score: 2
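For anyone hitting a similar problem: the root cause here is PATH shadowing — the stale executable in /usr/local/bin sits earlier on PATH than the packaged one, so it wins. In a shell, `type -a rosversion` lists every candidate in order; the same check sketched in Python (the helper name is made up for illustration):

```python
# List every directory on PATH that contains an executable with the
# given name, in search order -- a small stand-in for `type -a NAME`.
# hits[0] is the copy the shell will actually run, so a stale
# /usr/local/bin entry shows up at the front of the list.
import os

def which_all(name, path=None):
    hits = []
    for d in (path or os.environ.get("PATH", "")).split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            hits.append(candidate)
    return hits
```

Running it for `rosversion` before and after the cleanup above would show the /usr/local/bin copy disappearing from the front of the list.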
{ "domain": "robotics.stackexchange", "id": 9479, "tags": "ubuntu, ubuntu-precise, root, roscore, sudo" }
Arcsine function for a real-time control program
Question: I have been developing control software for a three-phase induction motor. The main task of this C++ code is to control the torque of the motor. In one part of the algorithm (the control algorithm is executed with a period of 100 µs), I need to calculate the arcsine function. Here is my implementation. I would appreciate your assessment. Math.h #include <cstdint> class Math { public: /** * @brief Function calculates arcsine of a given argument. * @param x argument, \f$x\in\left<-1, 1\right>\f$ * @return arcsine value of x */ static float arcsine(float x); private: static const uint16_t kArcSineLutSize = 101; static const float arcsine_lut[kArcSineLutSize]; }; Math.cpp float Math::arcsine(float x) { bool neg_table_value = false; if (x < 0) { // arcsin(-x) = -arcsin(x) x = -x; neg_table_value = true; } uint16_t index = static_cast<uint16_t>(x * 100.0); float tmp = (arcsine_lut[index + 1] - arcsine_lut[index]) * (100.0 * x - index) + arcsine_lut[index]; if (neg_table_value) { return -tmp; } else { return tmp; } } const float Math::arcsine_lut[Math::kArcSineLutSize] = { 0.0000, 0.0100, 0.0200, 0.0300, 0.0400, 0.0500, 0.0600, 0.0701, 0.0801, 0.0901, 0.1002, 0.1102, 0.1203, 0.1304, 0.1405, 0.1506, 0.1607, 0.1708, 0.1810, 0.1912, 0.2014, 0.2116, 0.2218, 0.2321, 0.2424, 0.2527, 0.2630, 0.2734, 0.2838, 0.2942, 0.3047, 0.3152, 0.3257, 0.3363, 0.3469, 0.3576, 0.3683, 0.3790, 0.3898, 0.4006, 0.4115, 0.4225, 0.4334, 0.4445, 0.4556, 0.4668, 0.4780, 0.4893, 0.5007, 0.5121, 0.5236, 0.5352, 0.5469, 0.5586, 0.5704, 0.5824, 0.5944, 0.6065, 0.6187, 0.6311, 0.6435, 0.6561, 0.6687, 0.6816, 0.6945, 0.7076, 0.7208, 0.7342, 0.7478, 0.7615, 0.7754, 0.7895, 0.8038, 0.8183, 0.8331, 0.8481, 0.8633, 0.8788, 0.8947, 0.9108, 0.9273, 0.9442, 0.9614, 0.9791, 0.9973, 1.0160, 1.0353, 1.0552, 1.0759, 1.0973, 1.1198, 1.1433, 1.1681, 1.1944, 1.2226, 1.2532, 1.2870, 1.3252, 1.3705, 1.4293, 1.5708}; Answer: Use a namespace instead of a class There is no reason to use a class if you only have
static member functions and variables. Consider creating a namespace instead: namespace Math { static float arcsine(float x); } You don't even have to declare the private static members in the header file this way. What are the requirements? Why are there only 100 entries in the table? Why is every value only specified up to 4 decimals? This seems very arbitrary. The fewer points you have in your table and the less precise the points in the table are, the bigger the error is compared to the real arcsine. So you should ask yourself: what is the error budget? A quick check of your function versus std::asin() reveals that near 0, you have errors of about 0.06%, which is probably fine, but near ±1 you have an error of almost 2.4%. If that is not good enough for your purpose, you probably need to increase the number of points, use more precision for each point, and/or use something a bit more sophisticated than linear interpolation. I also have to wonder why you are using a look-up table instead of just using std::asin(). If your CPU is not fast enough to do this calculation every 100 µs, then it's a good approach. If it's fast enough but it doesn't come with a standard library that contains an arcsine function, then you could consider implementing your own. I would then recommend using Chebyshev polynomials. Calculate the table at compile-time Instead of hardcoding the values in the look-up table, you can have the compiler generate the table for you. For example: #include <array> #include <cmath> #include <cstddef> template<std::size_t N> consteval auto generate_arcsine_lut() { std::array<float, N + 1> values; for (std::size_t i = 0; i <= N; ++i) { values[i] = std::asin(i / static_cast<float>(N)); } return values; } static const auto arcsine_lut = generate_arcsine_lut<100>(); Note the N + 1 entries, so the table includes the endpoint asin(1) just like your 101-entry table does. See this on godbolt.org. Note however that std::asin() is not a constexpr function, and while GCC is fine with this, Clang does not want to compile it. The way around this is to provide your own constexpr implementation of asin() (as mentioned above).
Do you really need the arcsine? The main task of this C++ software is to control the torque of the motor. In a part of the control algorithm (the control algorithm is executed with 100 µs period.) I need to calculate the arcsine function. You might have some mathematical formulas with an angle in them, but in the end you are not interested in that angle, just in how much you have to energize each coil of the motor. So probably you have some values coming in (from Hall sensors or some rotary encoder?), from which you create a vector of the current direction the axis is pointing in. If you just want to rotate that vector a little bit, then instead of calculating the angle, adding something to the angle, and converting that back into a vector, you can just multiply the first vector with a rotation matrix. The matrix is constant, so once precalculated you can just apply it to a vector using a few multiplications and additions. This is less convenient if you want to vary the rotation you apply a lot, but if the rotation angle is always very small, then you can use the small-angle approximations of sin and cos to calculate that matrix very cheaply. Of course, keep your error budget in mind.
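For completeness, the error check mentioned above is easy to reproduce. This sketch (Python, since it is just a measurement script) regenerates a 100-interval table at full precision and measures the linear-interpolation error against the library arcsine; the exact numbers differ slightly from the 4-decimal hard-coded table, but the shape is the same: negligible error near 0, worst error in the last interval before 1, where the arcsine's slope blows up:

```python
# Measure the linear-interpolation error of a 100-interval arcsine table
# against math.asin. The lookup mirrors the C++ code under review.
import math

N = 100
lut = [math.asin(i / N) for i in range(N + 1)]

def arcsine_lut(x):
    """Linear interpolation into the table, sign handled by symmetry."""
    neg = x < 0
    x = abs(x)
    i = min(int(x * N), N - 1)
    y = lut[i] + (lut[i + 1] - lut[i]) * (x * N - i)
    return -y if neg else y

worst = max(abs(arcsine_lut(x / 10000) - math.asin(x / 10000))
            for x in range(1, 10000))
print(f"worst absolute error on (0, 1): {worst:.2e} rad")
```

The worst error lands in the interval [0.99, 1], a few hundredths of a radian — exactly the region where more table points (or a non-uniform grid) would pay off.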
{ "domain": "codereview.stackexchange", "id": 44767, "tags": "c++, mathematics, embedded" }
Sandia Hand passes through the steering wheel
Question: The simulation allows the fingers to pass through the steering wheel, which makes it difficult to hold the hand while trying to turn the wheel. Sandia collision break-through (video): at 0:35 it can be seen that there is some kind of "transparency" in the collision polygon, either in the hand or in the steering wheel, but this shows that some collision polygons are not 100% solid. Originally posted by Alberto Romay on Gazebo Answers with karma: 11 on 2013-03-20 Post score: 1 Answer: There is a pull request that is addressing this issue: https://bitbucket.org/osrf/gazebo_models/pull-request/25/polaris-raise-steering-wheel-add-collision/diff Originally posted by nkoenig with karma: 7676 on 2013-03-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Alberto Romay on 2013-03-20: This pull request addresses the issue of the location of some vehicle parts (steering wheel raised, seat lowered) but it is not clear to me if it also addresses the issue of the fingers passing through the steering wheel as if there is no collision. Example: The hand grabs the steering wheel (surrounds it with the fingers), then the robot pulls the hand and the steering wheel slips out through the fingers. I can make another video if the last one was not clear. Comment by nkoenig on 2013-03-20: The steering wheel collision object is changing from a series of cylinders to a triangle mesh.
{ "domain": "robotics.stackexchange", "id": 3143, "tags": "collision" }
Print initial letter of an entered name
Question: Just started learning Python and I'm solving some beginner problems. I wrote a solution for a question without handling edge cases like the user not entering anything, entering numbers, etc.: Question: Write a program to print initial letter of an entered name? Expected output: If the user inputs, Ram Chandra Giri -> R.C.Giri Ram chandra Giri -> R.C.Giri My solution works as expected but I would like insight on whether I am following best practice or not and if there is a better and easier way to do it. Question is from a Class 10 book for students to learn Q-BASIC. My solution: def initial_of_name(c): word_list = c.split(" ") i = (len(word_list) - 1) new_list = [] for w in range(0, i): # this loop leaves last name and creates a new list. word = word_list[w] word = word[0:1].upper() # sliced word into initial letter new_list.append(word) new_list.append(word_list[-1]) # added last name to the list before joining. initial = '.'.join(new_list) print("The initial of your name is {0}.".format(initial)) return name = input("Please enter your name?\n>") initial_of_name(name) Answer: In general, I think that's a great start - it's consistently formatted and sensibly structured. def initial_of_name(c): c isn't a great name for a parameter, it tells the reader nothing about what the function is expecting to get. It might also be helpful to add a docstring to provide a bit more information. word_list = c.split(" ") If you're splitting on spaces, note that you can just use .split(), which will also have the effect of counting consecutive whitespace as a single space. This is helpful if someone accidentally types "My  Name" (with two spaces) instead of "My Name"; with the current formulation you'd get ['My', '', 'Name']. i = (len(word_list) - 1) ... for w in range(0, i): word = word_list[w] Iterating over a list by index in Python is a bit of an anti-pattern; there are lots of better tools for dealing with iteration.
In this case, you could iterate over a slice of all but the last item in the list: for word in word_list[:-1]: Talking of slicing: word = word[0:1].upper() You don't need the 0, that's the default start to a slice, so it could be word[:1]. But what you want is the first character, which is perhaps more clearly expressed as word[0]. new_list = [] for ...: ... new_list.append(...) Creating an empty list, iterating and appending is often not the most efficient way to build a new list. Instead, consider a list comprehension (if this is unfamiliar to you, see e.g. Python for-in loop preceded by a variable). print("The initial of your name is {0}.".format(initial)) In general, I wouldn't use print in a function like this. This is part of separation of concerns; building the appropriate string and displaying the result to the user are two completely different tasks, so they shouldn't be in the same function. Additionally, printing rather than returning the result makes it harder to test the function. With all of that considered, here's how I would implement the same requirements. def initial_of_name(name): """Converts a name to initials and surname. Ensures all initials are capitalised, even if the first names aren't. Examples: >>> initial_of_name('Ram Chandra Giri') 'R.C.Giri' >>> initial_of_name('Ram chandra Giri') 'R.C.Giri' """ parts = name.split() initials = [part[0].upper() for part in parts[:-1]] return '.'.join(initials + [parts[-1]]) if __name__ == '__main__': name = input("Please enter your name?\n>") initial = initial_of_name(name) print("The initial of your name is {0}.".format(initial)) If the last part is unfamiliar, see What does if __name__ == "__main__": do? Note that including examples in the docstring in the form of an interactive Python session not only gives the reader a good idea of exactly how to use the function, but also allows doctest to validate the code and examples for you. 
If you save the above into a file initial.py: $ python -m doctest initial.py --verbose Trying: initial_of_name('Ram Chandra Giri') Expecting: 'R.C.Giri' ok Trying: initial_of_name('Ram chandra Giri') Expecting: 'R.C.Giri' ok 1 items had no tests: initial 1 items passed all tests: 2 tests in initial.initial_of_name 2 tests in 2 items. 2 passed and 0 failed. Test passed.
{ "domain": "codereview.stackexchange", "id": 27756, "tags": "python, beginner, strings, python-3.x" }
If A is poly-time reducible to B, is B poly-time reducible to A?
Question: Basically, is the following statement true? $A \leq_p B$ $\rightarrow$ $B \leq_p A$ Answer: Short answer: No. For one example, take $A$ to be the language $A = \{0, 1\}^*$, i.e. the language that contains every possible string. It is polytime reducible to pretty much anything, for example 3-SAT (the reduction outputs the formula $x\wedge y \wedge z$ on every input). But 3-SAT is definitely not reducible to $A$: there is nothing to map unsatisfiable instances to. Maybe a bit less silly. Take $A$ to be any problem in P. Take $B$ to be a problem complete for exponential time (for example the problem of deciding, given a Turing machine $M$, an input $x$, and a number $k$ written in binary, whether $M$ halts on $x$ within $k$ time steps). Because $B$ is complete for exponential time, $A$ is reducible to $B$. Because a polytime reduction from $B$ to $A$ would imply a polytime algorithm for $B$ and therefore for any exponential time problem, contradicting the time hierarchy theorem, such a polytime reduction does not exist.
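The first reduction is so trivial that it can be written out in a few lines. A sketch (the clause encoding — tuples of literal names, with a "~" prefix for negation — is an ad-hoc choice for illustration):

```python
# Constant reduction for A = {0,1}*: every input maps to the same
# satisfiable formula x AND y AND z, so w in A <=> f(w) in SAT holds
# vacuously, because A contains every string.
from itertools import product

def reduce_everything_to_3sat(w):
    return [("x",), ("y",), ("z",)]          # CNF as a list of clauses

def satisfiable(clauses):
    """Brute-force SAT check; a literal is a name, '~'-prefixed if negated."""
    names = sorted({lit.lstrip("~") for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(names)):
        assign = dict(zip(names, bits))
        if all(any(assign[lit.lstrip("~")] != lit.startswith("~") for lit in clause)
               for clause in clauses):
            return True
    return False
```

The reverse direction fails for exactly the reason stated above: a reduction from 3-SAT to A would need somewhere to send unsatisfiable formulas, and A has no "no"-instances.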
{ "domain": "cs.stackexchange", "id": 1284, "tags": "complexity-theory, reductions, polynomial-time" }
Find the sum of numbers from an array closest to a number, where repetition of the numbers is allowed
Question: I would like to find the sum of values from a given number array, where repetition of numbers is allowed, closest to a target, but the sum cannot exceed the target. If there are more solutions, I'd prefer the one with the minimum element count. Examples: 1) Given values: [500, 1000, 2000, 5000] Target: 7000 Result: [2000,5000] 2) Given values: [500, 1000, 2000, 5000] Target: 7990 Result: [500, 2000, 5000] 3) Given values: [222,333] Target: 444 Result: [222,222] 4) Given values: [222,333] Target: 777 Result: [222,222,333] Later on I would like to implement this algorithm in JavaScript, and make it run in a browser. I have tried: -Knapsack algorithm -Generating all combinations of the numbers and finding one with the minimum difference but both are very slow when implemented, and used with big numbers. Answer: This problem is NP-hard by a reduction from partition. Let $X = \{x_1, \dots, x_n\}$ be an instance of partition and let $2M = \sum_{x \in X} x$ (assume that $M$ is an integer as otherwise the instance is trivial). Assume that each $x_i$ is positive and let $\ell = \lceil \log 2M \rceil$. I will now construct some values based on $X$. I will denote a generic value by a tuple $v$ of $n+2$ entries $(v^{*}, v^{(0)}, v^{(1)}, \dots, v^{(n)})$, where all entries $v^{(i)}$, $i=0,\dots,n$ are between $0$ and $2^\ell - 1$. The tuple $v$ will actually correspond to the integer whose binary representation is obtained by encoding $v^{*}$ in binary (using a variable number of bits), each of $v^{(0)}, \dots, v^{(n)}$ using $\ell$ bits, and finally concatenating the resulting bitstrings, in order. An example of the tuple $v=(5, 2, 7, 9)$ with $\ell = 4$ is as follows: $$ (\underbrace{101}_{v^*} \underbrace{0010}_{v^{(0)}} \underbrace{0111}_{v^{(1)}} \underbrace{1001}_{v^{(2)}})_2 = (21113)_{10} $$ Notice that the value obtained in this way has an encoding of polynomial length w.r.t.
the length of the encoding of the original partition instance, as long as $v^{(*)}$ is small enough (as will be the case). The sum of two tuples is the tuple whose integer value equals the sum of the integer values of the summands. Let then $y_i = (1, x_i, 0, \dots, 0, 1, 0, \dots, 0)$, where the second entry set to $1$ is $y_i^{(i)}$. Similarly, let $z_i = (1, 0, 0, \dots, 0,1,0, \dots, 0)$, where $z_i^{(i)} = 1$. The instance of the problem in the question consists of the set of numbers $S = \{y_i : x_i \in X\} \cup \{z_i : x_i \in X\}$ and of the target value $T=(n,M,1, \dots, 1)$. Given a solution $X' \subseteq X$ to the partition problem, it is easy to construct a solution $S' \subseteq S$ for our problem such that $\sum_{x \in S'} x = T$. Simply select $S' = \{ y_i : x_i \in X'\} \cup \{ z_i : x_i \in X \setminus X' \}$ (notice that it suffices for $S'$ to be a set rather than a multi-set). I will now argue that if a (multi-)set $S' \subseteq S$ such that $\sum_{x \in S'} x = T$ exists, then the partition instance $X$ has answer "yes". From now on let $S'$ be such a set and notice that, by our choice of $T$, the overall multiplicity of all the elements of $S'$ cannot be larger than $n$ (as otherwise $T^{(*)}$ would be exceeded). Claim: Let $m(x)$ be multiplicity in $S'$ of $x \in S$. $\forall i=1,\dots,n$ we have $m(y_i) + m(z_i) = 1$. Proof: If $m(y_i) + m(z_i) = 0$ then, since $T^{(i)}=1$, $S'$ must contain at least $2^\ell \ge 2M > n$ elements $y_j$ or $z_j$ such that $j > i$, contradicting $\sum_{x \in S'} m(x) \le n$. Assume then that $m(y_i) + m(z_i) > 1$, and consider the tuple $h$ corresponding to the sum of exactly 2 copies of $y_i$ and/or $z_i$ in $S'$. We have $h^{(i)}=2$ and, since $T^{(i)}=1$, we conclude that there must be at least $2^\ell - 2$ other copies of some $y_j$ or $z_j$ for $j \ge i$ in $S'$. The total multiplicity of the elements in $S'$ is therefore at least $2 + 2^\ell - 2 > n$, a contradiction. 
$\;\; \square$ Let $X' = \{x_i : y_i \in S' \}$ and call $h = \sum_{x \in S'} x$. Since there can be at most one copy of each $z_i$ in $S'$ (see the above Claim), the only way to affect $h^{(0)}$ is through the values $y_i \in S'$. Since there can be at most one copy of each $y_i$ in $S'$ (see the above Claim), we must have $\sum_{x_i \in X'} x_i = \sum_{y_i \in S'} y_i^{(0)} = h^{(0)} = T^{(0)} = M$.
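The tuple-to-integer encoding used throughout this construction is easy to make concrete. A small sketch (`encode` is an illustrative name): pack $v^{*}$ in front, then each of the remaining entries into exactly $\ell$ bits. As long as no per-entry sum ever reaches $2^\ell$ — which the multiplicity bound above guarantees — integer addition adds tuples entry-wise:

```python
def encode(tup, ell):
    """Pack a tuple (v*, v0, ..., vn) into one integer: v* in front,
    then each entry in exactly `ell` bits, concatenated in order."""
    head, *rest = tup
    value = head
    for entry in rest:
        assert 0 <= entry < 2 ** ell, "each entry must fit in ell bits"
        value = (value << ell) | entry
    return value
```

This reproduces the worked example from the proof, (5, 2, 7, 9) with ℓ = 4 giving 21113, and shows the entry-wise addition property the argument relies on.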
{ "domain": "cs.stackexchange", "id": 14921, "tags": "algorithms, knapsack-problems, integers" }
SFR Breeders on Small Mars Moon feasibility
Question: Need help with a topic for a sci-fi book. Basic premise is 200 years in the future, a mining facility on the small Mars moon Phobos. The power source we would like is an SFR breeder, as we think this would be cost-effective for fuel. What I need to know: Phobos has $0.0057$ m/s$^2$ of gravity and I need to know how the heck liquid sodium would operate in that environment. I know without oxygen liquid sodium at high temp wouldn't react if released; however, is there a point the temp in an SFR reactor can get so hot it explodes, vice a standard meltdown? BTW I am a gas turbine tech and understand a good bit of tech, but space and nuclear are a little too far in left field for me. Answer: The place to put this is the Worldbuilding Forum. They consider this kind of question all the time there. The two new things you must deal with are lack of gravity, and vacuum. Without gravity, you can't depend on hot liquid to rise and cold liquid to fall. But you didn't depend on that anyway; you always use pumps. So that's OK. If you have a vessel with liquid and gas both, you can't depend on the liquid to stay at the bottom and the gas at the top. So you probably want a design that doesn't have mixed liquid and gas. Do you depend anywhere on a compressible gas to buffer volumes when the liquid volume changes? That could be an issue. Maybe replace that with a piston that can change volume. That's probably all that matters from low gravity. Except for all the little things we haven't thought of. Vacuum means that you are basically running your heat exchange inside a thermos bottle. It isn't easy to get rid of heat, and your reactor is a heat engine. You could have a bunch of fins to radiate heat away? Those would be less effective in sunlight, but the farther you are from the sun the less that matters. It might work better to conduct your heat into the nearby rock. The hotter rocks would conduct it to farther rocks, and hotter rocks at the surface would radiate it away.
You'd have to make sure that the rocks around your base don't get uncomfortably hot. Think about accidents. There will be an accident. The vacuum won't provide oxygen to burn the sodium, but if you use water as a secondary heat exchange, the water will make a mess. The sodium will make a mess regardless. Let's assume that the radioactive stuff won't cause a problem -- any problem it causes would be the same for other nuclear designs too. You can expect to get sodium on all sorts of things near the wrecked power plant. It will boil away in vacuum and spatter things. You wind up with solid sodium and (if there's water) sodium hydroxide etc littering the landscape. Anybody who goes outside anywhere near can expect to get sodium and sodium hydroxide particles on the outside of his protective gear, which will have amusing results when he goes indoors. You should probably arrange to thoroughly wash off anybody who comes in from outdoors, before they remove protective gear. Build that into the base design, on the assumption there will be an accident. It's a lot better than having to add it to the design later, after an emergency. In your story, you probably should have a nuclear accident in the background, at least one has already happened and people are dealing with the long-term results.
{ "domain": "physics.stackexchange", "id": 61959, "tags": "fluid-dynamics, space-expansion, nuclear-engineering" }
Are there any algorithms that learn to learn a function mapping?
Question: Typical algorithms involve learning and applying a single mapping e.g. $ f: X \mapsto Y$ Are there any algorithms that learn multiple mappings given an extra variable e.g. $ f(Z): X \mapsto Y$ (This feels like an abuse of notation.). Here we learn a mapping, given $Z$, that is then applied to $X$ to produce $Y$ One solution (although intractable) would be to train an algorithm (e.g. a neural network) whose output is then used as the weights of a second network. Any suggestions would be much appreciated and feel free to ask for more details. Thanks. Answer: This is equivalent to thinking of this as learning a function of two inputs, i.e., $g : (X,Z) \mapsto Y$ (i.e., $g(X,Z)=Y$). Any machine learning method for learning functions can be used for that; you just treat the feature vector (the input) as the concatenation of the vector $X$ and the vector $Z$, and then use standard ML techniques on the resulting instances. So, you don't need anything special or fancy to handle this -- you can use any existing machine learning method that you are comfortable with.
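To make the concatenation trick concrete, here is a minimal sketch with a deliberately simple learner (a memorising 1-nearest-neighbour in pure Python; the names and the toy data are illustrative). The learner only ever sees one flat feature vector, which happens to be $X$ and $Z$ glued together:

```python
# Any learner mapping a feature vector to an output can learn g(X, Z)
# if we feed it the concatenated vector (X, Z).
def concat(x, z):
    return tuple(x) + tuple(z)

class NearestNeighbor:
    """Minimal 1-NN 'learner' over plain tuples, for illustration only."""
    def __init__(self):
        self.memory = []

    def fit(self, features, labels):
        self.memory = list(zip(features, labels))

    def predict(self, f):
        # return the label of the closest stored feature vector
        return min(self.memory,
                   key=lambda fy: sum((a - b) ** 2 for a, b in zip(fy[0], f)))[1]

# toy training pairs ((X, Z) -> Y): here Y = sum(X) scaled by Z's first entry
data = [((1, 2), (3,)), ((0, 1), (2,)), ((4, 0), (1,))]
model = NearestNeighbor()
model.fit([concat(x, z) for x, z in data],
          [sum(x) * z[0] for x, z in data])
```

Swap the 1-NN for any regressor you like; the point is that nothing in the learner needs to know the input was ever two separate vectors.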
{ "domain": "datascience.stackexchange", "id": 2768, "tags": "machine-learning, neural-network" }
Does the equation CV = q break down when V is small or a wire is very long?
Question: I'm trying to understand how the equation $CV=q$ relating the voltage difference and charge stored on a charged capacitor breaks down under extreme circumstances. In most problem statements that I'm seeing, they might begin with a capacitor attached to a battery via a circuit and an open switch. They'll say that when the switch is open, the voltage difference between the two ends of the battery is $120 V$. Then, when the switch is closed and when the capacitor is fully charged the stored charge on each plate will have magnitude $|q| = 5 C$. Finally, the problem will ask me to calculate the capacitance of the capacitor. From what I understand, these problems assume that the potential difference between the two plates is the same as the original potential difference between the two ends of the battery. Is this a truly accurate assumption? Technically speaking, couldn't the redistribution of charge along the circuit change the electric field within and outside of the circuit and thus couldn't it also change the potential difference between the two conductors? (The two conductors being the two separately charged batteries and their corresponding wires/capacitor plates). If the closing of the switch and the charging of the capacitor can change the potential difference, when is this change in potential difference significant? Is it ever significant to an electrical engineer (even if such a circumstance is an outlier)? Or is it only detectable in the most extreme of circumstances, ones that no electrical engineer would ever have to deal with? (i.e. when V is very small or there is a very long wire over which the excess charge can redistribute itself?) Answer: When you ask questions about things "in the limit", the answer is almost always "It depends". In this case, the answer is "it depends". 
The equation $Q=CV$ assumes linear behavior of the capacitor - in reality the dielectric of most capacitors has hysteresis as well as a nonlinear component, so as you increase the voltage, the capacitance will change. This effect is particularly strong (and useful) in certain PIN diodes that can be used, when reverse polarized, as voltage-dependent capacitors (handy for electronically tuning RF circuits, for example). The equation also assumes that $Q$ is only the charge on the capacitor - that no charge accumulates elsewhere. Now the capacitance of a pair of 16 AWG wires separated by 1 cm is almost exactly 10 pF/m - see this handy calculator. So if your capacitor is attached to the battery with a sufficiently long wire, the wire itself will act as a non-negligible capacitor. Again, at RF frequencies (especially when you get up into the GHz range) these "parasitics" really start to add up, and in fact they dominate the behavior of circuits. They are in fact the basis for transmission lines, and definitely are considered important for electrical engineering. Back to your original question: if you have a battery connected to a capacitor with a pair of wires and a switch, then we usually consider the following simplifications: The battery voltage is the same before I close the switch and after The only charge of interest is on the capacitor No current flows in the circuit except at the instant that the switch is closed At high enough precision, none of these assumptions are true. Let's take them one at a time: When charge flows from a battery, the chemical reaction that takes place will cause the voltage to drop slightly. In your example, you used 5 C of charge - that is 1 A for 5 s, which is a not inconsiderable amount of current for a battery. A 120 V battery might consist of 80 1.5 V cells in series; if each cell drops 1 mV because of this transient load, the voltage of the battery would drop by 80 mV. 
As I showed above, the wires will collect some charge; how much charge will depend on the geometry. If the value $Q$ of interest is only the charge on the capacitor (rather than the charge that left the battery) this difference doesn't matter - but that was not made explicit in the question. A capacitor has finite resistance - charge will slowly bleed off. How quickly it bleeds depends on many factors: most notably the dielectric material used, and the electric field across it. Making a capacitor with high capacitance per unit volume requires a very thin dielectric and thus a high field and possibly high leakage. Precision capacitors tend to be low-value, and bulky. In your example, a charge of 5 C and a voltage of 120 V is possible with a capacitance of $C=\frac{5}{120}\,\mathrm{F} \approx 42\,\mathrm{mF}$. This is big, but not too unwieldy. I found one online that was 40 mF and 160 V rating; it was 76 mm diameter, by 150 mm long. As I said - quite big. I looked at the data sheet for this family of capacitors but could not find any information on leakage current. However, since this is an electrolytic capacitor, the leakage will be quite large. For example, I found a generic article on the leakage of aluminum electrolytic capacitors which included a leakage-current plot as its figure 3. As that plot shows, there is significant current flowing in large electrolytic capacitors - it was for a 1 mF, 35 V device, so you can imagine it will be quite a bit worse for 42 mF, 120 V. These differences are often much more important than the voltage drop of the battery (which your question was asking about). Long wires plus leakage currents could, in principle, result in a sustained voltage drop across the wires.
But as a rule of thumb, electrical engineers will design precision circuits (ones where a few mV on 120 V might matter) with short wires; and if you need a long run with a precise voltage at the end of it, you might use local regulation at the far end, or a 3- or 4-wire solution which allows sensing at the remote end so that the voltage source can compensate.
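The headline numbers in this answer are easy to check. The charge and voltage come from the question; the 80-cell battery and the ~1 mV per-cell droop are the answer's own illustrative assumptions, not measurements.

```python
# Values from the question/answer above; the battery-droop figures are the
# answer's illustrative assumptions.
Q = 5.0       # charge stored on the capacitor, in coulombs
V = 120.0     # battery voltage, in volts

C = Q / V     # capacitance needed to hold 5 C at 120 V
print(C * 1e3)  # in millifarads: about 41.7 mF, i.e. roughly "42 mF"

# Battery droop: 80 series cells at 1.5 V each, each sagging ~1 mV under load.
cells = 80
droop_per_cell = 1e-3               # volts
total_droop = cells * droop_per_cell
print(total_droop * 1e3)            # about 80 mV total
```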
{ "domain": "physics.stackexchange", "id": 22166, "tags": "electromagnetism, electrostatics, capacitance, electrical-engineering" }
rosdep 0.10.22 Error 404
Question: I have installed the newest version of rosdep and when I perform rosdep update I get the following output and the eventual error: reading in sources list data from /etc/ros/rosdep/sources.list.d Hit https://github.com/ros/rosdistro/raw/master/rosdep/osx-homebrew.yaml Hit https://github.com/ros/rosdistro/raw/master/rosdep/gentoo.yaml Hit https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml Hit https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml Hit https://github.com/ros/rosdistro/raw/master/rosdep/ruby.yaml Hit https://github.com/ros/rosdistro/raw/master/releases/fuerte.yaml Ignore legacy gbpdistro "groovy" Ignore legacy gbpdistro "hydro" Query rosdistro index https://raw.github.com/ros/rosdistro/rep137/releases/index.yaml ERROR: error loading sources list: HTTP Error 404: Not Found I have checked and the file https://raw.github.com/ros/rosdistro/rep137/releases/index.yaml does not exist. Further information: I have seen that what happens is that the script in /usr/share/pyshared/rosdep2/, when calling rosdistro.get_index_url(), takes the function from the file /usr/local/lib/python2.7/dist-packages/rosdistro/__init__.py, which has DEFAULT_INDEX_URL = 'https://raw.github.com/ros/rosdistro/rep137/releases/index.yaml' I think that the error would not happen if the file /usr/lib/pymodules/python2.7/rosdistro/__init__.py was taken, which has DEFAULT_INDEX_URL = 'https://raw.github.com/ros/rosdistro/master/index.yaml' Originally posted by Jordi Pages on ROS Answers with karma: 245 on 2013-10-01 Post score: 0 Answer: What are the contents of /etc/ros/rosdep/sources.list.d/20-default.list? From the error message I would guess that you have an extra line in there referring to that url. Edit: Ahh, you have a version from PIP overlaying the Debian package. Please repeatedly run sudo pip uninstall rosdep until it says nothing to uninstall. You should also uninstall any other ros tools in /usr/local installed via pip using the same procedure.
Originally posted by tfoote with karma: 58457 on 2013-10-01 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Jordi Pages on 2013-10-02: It does not seem so. The contents are: os-specific listings first yaml https://github.com/ros/rosdistro/raw/master/rosdep/osx-homebrew.yaml osx yaml https://github.com/ros/rosdistro/raw/master/rosdep/gentoo.yaml gentoo generic yaml https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml yaml https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml yaml https://github.com/ros/rosdistro/raw/master/rosdep/ruby.yaml gbpdistro https://github.com/ros/rosdistro/raw/master/releases/fuerte.yaml fuerte gbpdistro https://github.com/ros/rosdistro/raw/master/releases/groovy.yaml groovy gbpdistro https://github.com/ros/rosdistro/raw/master/releases/hydro.yaml hydro
{ "domain": "robotics.stackexchange", "id": 15720, "tags": "rosdep" }
Can we always find a unitary operation connecting qubit states with given eigendecompositions?
Question: Consider the density matrices $\rho_0 = |0 \rangle \langle 0|$ and $\rho_1 = |1 \rangle \langle 1|$. Let $\{p_1, p_2\}$ and $\{p_3, p_4\}$ be two probability distributions, that is, $$0 \leq p_1, p_2, p_3, p_4 \leq 1$$ $$p_1 + p_2 = 1$$ $$~\text{and}~ p_3 + p_4 = 1.$$ These probability distributions define two mixed states $\rho = p_1 \rho_0 + p_2 \rho_1$ and $\rho' = p_3 \rho_0 + p_4 \rho_1$. Now I have the following questions: Is there a unitary matrix $U$ such that $\rho' = U^\dagger \rho U$? How to calculate $U$ when all $p_1, p_2, p_3$ and $p_4$ are known, if $U$ exists? Can we represent $U$ with a quantum circuit with qubits if $U$ exists? Answer: Since conjugation of the state $\rho$ with unitaries, i.e. $\rho \mapsto U \rho U^\dagger$, preserves eigenvalues, this could only be possible in your case if $p_1 = p_3$ or $p_1 = p_4$. In those cases the unitary transformation required is the identity and $\sigma_x$, respectively.
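The eigenvalue argument is easy to verify numerically. This is a small sketch with arbitrary example probabilities (not from the question), showing the $p_1 = p_4$ case where $U = \sigma_x$ works:

```python
import numpy as np

# rho = p1|0><0| + p2|1><1| is diagonal; conjugating by sigma_x swaps the
# diagonal entries, realizing the p1 == p4 case from the answer.
p1, p2 = 0.3, 0.7          # arbitrary example distribution
rho = np.diag([p1, p2])

sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
rho_prime = sigma_x @ rho @ sigma_x.conj().T

print(np.diag(rho_prime))  # [0.7, 0.3]: the populations are swapped

# Conjugation by any unitary preserves eigenvalues, which is why no unitary
# exists unless {p1, p2} and {p3, p4} agree as sets.
print(np.allclose(np.linalg.eigvalsh(rho), np.linalg.eigvalsh(rho_prime)))
```

As a circuit, this $U$ is just a single X (NOT) gate on the qubit.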
{ "domain": "quantumcomputing.stackexchange", "id": 1852, "tags": "quantum-gate, quantum-state, unitarity" }
Why did whales end up larger than most sauropods and what is a whale's limits?
Question: I understand that due to the square-cube law (when something doubles in length, its surface area grows 4 times and its volume 8 times, and so on) sauropods were massive, but not as massive as whales, because whales got around the square-cube law by using the buoyant force to support their bodies in the water. In addition, being mammals allowed them to breathe oxygen-rich air while fish were limited to oxygen-poor water, which allowed whales to grow many times larger than the largest fish. So my question is: what is the limit to whale size, and if there is one, what is their true potential? A question presented by the scientific community and given to the public. Answer: As best we can tell, food availability is the relevant limit in whales. It's true that the bigger an animal gets, the more energy efficient it is: it needs less food proportionally, but still needs more food in total. There is a limit to how much food you can find and eat, especially in the lean parts of the year. This is actually why we see dwarfism on islands: lower food availability favors smaller size (lower overall food needs). Finding food and even feeding itself takes energy, and it reaches a point where the energy you expend equals or exceeds the energy you can extract from your food. There are other limits like blood pressure, internal surface area, digestion time, and heat buildup, but whales have not reached any of these. https://science.sciencemag.org/content/366/6471/1367
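The answer's scaling point (proportionally less food per kilogram, but more food in total) can be illustrated with Kleiber's empirical law, $B \propto M^{3/4}$. Note the answer does not cite Kleiber explicitly; this is just a standard allometric rule used here to illustrate the trend, with made-up masses:

```python
# Kleiber's law: metabolic rate B scales roughly as M^(3/4), so energy need
# per kilogram falls as M^(-1/4) while total need still grows with M.
def total_rate(mass_kg):
    return mass_kg ** 0.75           # up to a constant factor

def per_kg_rate(mass_kg):
    return total_rate(mass_kg) / mass_kg

fish, whale = 100.0, 100_000.0       # illustrative masses, not data
print(per_kg_rate(whale) < per_kg_rate(fish))   # more efficient per kilogram
print(total_rate(whale) > total_rate(fish))     # but needs far more total food
```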
{ "domain": "biology.stackexchange", "id": 11296, "tags": "dinosaurs" }
Deconvolution of an Image Acquired by a Square Uniform Detector
Question: So, I acquired some images by scanning a radiation source with a square detector like in the following gif, where the dashed grid represents reality, the 3x3 square my detector, and the 4x4 my acquired data. Obviously, because I'm moving the detector in steps smaller than its size, my image is convolved. Assuming my detector has uniform sensitivity, how do I go about deconvolving it? The point spread function should be simple, but I can't seem to find examples anywhere. (I can work with Python or Matlab, so feel free to give any examples you want) The data I'm working with consists of gaussian-ish peaks surrounded by noise like so: Answer: Your model is exactly a Convolution with Uniform Kernel where the output is what is called the Valid Part of the Convolution. In MATLAB lingo it will be using conv2(mA, mK, 'valid'). So the way to solve it will be using a matrix form of the convolution and solving the linear system of equations. Let's use the Lenna Image as input (Size was reduced for faster calculations): We have a uniform kernel for the sensor model. The output of the convolution with uniform kernel is given by: The output from the sensor is both blurred and smaller (Less 2 rows and 2 columns as it is 3x3 kernel) just as in your model. This is the model of Valid Convolution. In Matrix form what we have is: $$ \boldsymbol{b} = K \boldsymbol{a} $$ Where $ \boldsymbol{b} $ is the column stack vector of the output image, $ \boldsymbol{a} $ is the column stack vector of the input image and $ K $ is the convolution operator (Valid Convolution) in matrix form. In the code it is done in the function CreateConvMtx2D(). So, now all we need is to restore the image by solving the Matrix Equation. Yet the issue is that the equation is an Underdetermined System and the matrix has a high condition number, which suggests not solving this equation directly.
The solution is to use some kind of regularization of the least squares form of the problem: $$ \arg \min_{\boldsymbol{a}} \frac{1}{2} {\left\| K \boldsymbol{a} - \boldsymbol{b} \right\|}_{2}^{2} + \lambda r \left( \boldsymbol{a} \right) $$ Where $ r \left( \boldsymbol{a} \right) $ is the regularization term. In the optimal case the regularization should match the prior knowledge on the problem. For instance, in Image Processing we can assume a Piece Wise Smooth / Constant Model which matches the Total Variation regularization. Since we have no knowledge here, we will use the classic regularization to handle the Condition Number - Tikhonov Regularization: $$ \arg \min_{\boldsymbol{a}} \frac{1}{2} {\left\| K \boldsymbol{a} - \boldsymbol{b} \right\|}_{2}^{2} + \frac{\lambda}{2} {\left\| \boldsymbol{a} \right\|}_{2}^{2} = {\left( {K}^{T} K + \lambda I \right)}^{-1} {K}^{T} \boldsymbol{b} $$ The output is given by (For $ \lambda = 0.005 $): We can see that near the edge we have some artifacts which are due to the fact the system is Underdetermined and we have less equations to describe those pixels. One can use the $ \lambda $ parameter to balance between how sharp the output is (Yet with artifacts) to how smooth it is, basically governing the level inversion of the system. I advise playing with the parameter to find the best balance for your case but more than that, find a better regularization. Since the information you're after looks smooth you can use something in that direction. The full MATLAB code is available on my StackExchange Signal Processing Q63449 GitHub Repository (Look at the SignalProcessing\Q63449 folder). Enjoy...
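Since the question mentions Python is fine, here is a NumPy-only sketch of the same pipeline. The 6x6 grid, the random test image, and the regularization weight are illustrative stand-ins for the answer's MATLAB code, and the matrix builder mimics what the answer's CreateConvMtx2D() does on a small scale:

```python
import numpy as np

def valid_conv_matrix(img_shape, kernel):
    """Matrix operator for 'valid' 2-D convolution with a symmetric kernel.

    For the uniform kernel used here, flipping the kernel is a no-op, so
    correlation and convolution coincide.
    """
    m, n = img_shape
    kh, kw = kernel.shape
    out_h, out_w = m - kh + 1, n - kw + 1
    K = np.zeros((out_h * out_w, m * n))
    for r in range(out_h):
        for c in range(out_w):
            stamp = np.zeros((m, n))
            stamp[r:r + kh, c:c + kw] = kernel
            K[r * out_w + c] = stamp.ravel()
    return K

rng = np.random.default_rng(1)
a_true = rng.random((6, 6))            # "reality" on a small grid
kernel = np.full((3, 3), 1.0 / 9.0)    # uniform 3x3 detector sensitivity
K = valid_conv_matrix(a_true.shape, kernel)
b = K @ a_true.ravel()                 # acquired data: blurred and smaller (4x4)

# Tikhonov-regularized least squares: a = (K^T K + lam*I)^{-1} K^T b.
lam = 1e-4                             # demo value; tune as the answer advises
a_hat = np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ b)

# The system is underdetermined (16 equations, 36 unknowns), so a_hat is not
# a_true, but it should reproduce the measurements closely.
rel_residual = np.linalg.norm(K @ a_hat - b) / np.linalg.norm(b)
print(rel_residual)
```

Increasing `lam` trades measurement fidelity for smoothness, exactly the balance the answer describes near the image edges.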
{ "domain": "dsp.stackexchange", "id": 8299, "tags": "image-processing, matlab, python, deconvolution, inverse-problem" }
Subscribing an Object to its own Events
Question: Is it appropriate for the class to subscribe to its own events like this? Should the class not subscribe, and instead move that code to before the OnEventName(Object args) methods call the EventHandlers? I'm curious if there are any downfalls to this approach. On one hand, with this approach I could remove all the null checks on the event-firing methods. On the other hand, what if another event down the road needs information from the event handling I have already added a subscription to. I.e. in the future a method needs the NetworkServer_Error method to have been guaranteed to have fired before it can work. I'm mostly concerned with the event logic here, not so much the implementations. None of these implementations are concrete, but I want to make sure I have done this part properly first. public class NetworkServer { NetPeerConfiguration mainServerConfiguration; NetServer mainNetServer; List<NetConnection> connectedClients = new List<NetConnection>(); public NetworkServer() { mainServerConfiguration = new NetPeerConfiguration("SecretKey"); mainServerConfiguration.AcceptIncomingConnections = true; mainServerConfiguration.Port = 5501; mainNetServer = new NetServer(mainServerConfiguration); // Wire up our events Error += NetworkServer_Error; DataReceived += NetworkServer_DataReceived; StatusChanged += NetworkServer_StatusChanged; ClientConnected += NetworkServer_ClientConnected; ClientDisconnected += NetworkServer_ClientDisconnected; } void NetworkServer_DataReceived(object sender, NetMessageEventArgs e) { Program.LogLine("Data recieved from: " + e.Message.SenderConnection.RemoteEndPoint.ToString() + ", Payload size: " + e.Message.LengthBytes.ToString(), LoggingType.Information); } void NetworkServer_Error(object sender, MessageEventArgs e) { Program.LogLine(e.Message, LoggingType.Error); } void NetworkServer_ClientConnected(object sender, ConnectionEventArgs e) { if (!connectedClients.Contains(e.Connection)) { connectedClients.Add(e.Connection); 
Program.LogLine("New client discovered: " + e.Connection.RemoteEndPoint.ToString(), LoggingType.Information); } } void NetworkServer_ClientDisconnected(object sender, ConnectionEventArgs e) { if (connectedClients.Contains(e.Connection)) connectedClients.Remove(e.Connection); Program.LogLine("Client lost: " + e.Connection.RemoteEndPoint.ToString(), LoggingType.Information); } void NetworkServer_StatusChanged(object sender, StatusChangedEventArgs e) { switch (e.Connection.Status) { case NetConnectionStatus.Disconnected: OnClientDisconnected(new ConnectionEventArgs(e.Connection)); break; case NetConnectionStatus.Connected: OnClientConnected(new ConnectionEventArgs(e.Connection)); break; default: Program.LogLine("Unhandled StatusChanged: " + e.Connection.RemoteEndPoint.ToString() + " now " + e.Connection.Status + ".", LoggingType.Warning); break; } } public Task RunServer() { return Task.Run(() => { mainNetServer.Start(); DateTime started = DateTime.UtcNow; Program.LogLine(string.Format("The server was started on {0} at {1}.", started.ToString("dd-MM-yyyy"), started.ToString("HH:mm:ss.fffffff")), LoggingType.Important); while (true) { NetIncomingMessage msg; while ((msg = mainNetServer.ReadMessage()) != null) { switch (msg.MessageType) { case NetIncomingMessageType.VerboseDebugMessage: case NetIncomingMessageType.DebugMessage: case NetIncomingMessageType.WarningMessage: case NetIncomingMessageType.ErrorMessage: OnError(new MessageEventArgs(msg.ReadString())); break; case NetIncomingMessageType.StatusChanged: OnStatusChanged(new StatusChangedEventArgs(msg.SenderConnection)); break; case NetIncomingMessageType.Data: OnDataReceived(new NetMessageEventArgs(msg)); break; default: Program.LogLine("Unhandled type: " + msg.MessageType, LoggingType.Warning); break; } mainNetServer.Recycle(msg); } System.Threading.Thread.Sleep(1); } }); } void OnError(MessageEventArgs args) { if (Error != null) Error(this, args); } void OnStatusChanged(StatusChangedEventArgs args) { if 
(StatusChanged != null) StatusChanged(this, args); } void OnDataReceived(NetMessageEventArgs args) { if (DataReceived != null) DataReceived(this, args); } void OnClientDisconnected(ConnectionEventArgs args) { if (ClientDisconnected != null) ClientDisconnected(this, args); } void OnClientConnected(ConnectionEventArgs args) { if (ClientConnected != null) ClientConnected(this, args); } public event MessageEventHandler Error; public event NetMessageEventHandler DataReceived; public event StatusChangedEventHandler StatusChanged; public event ConnectionChangedEventHandler ClientConnected; public event ConnectionChangedEventHandler ClientDisconnected; } Here are the EventHandler delegates: public delegate void MessageEventHandler(Object sender, MessageEventArgs e); public delegate void NetMessageEventHandler(Object sender, NetMessageEventArgs e); public delegate void StatusChangedEventHandler(Object sender, StatusChangedEventArgs e); public delegate void ConnectionChangedEventHandler(Object sender, ConnectionEventArgs e); And, lastly, the EventArgs classes: public class ConnectionEventArgs : EventArgs { private NetConnection _Connection; public NetConnection Connection { get { return _Connection; } } public ConnectionEventArgs(NetConnection client) { this._Connection = client; } } public class MessageEventArgs : EventArgs { private string _Message; public string Message { get { return _Message; } } public MessageEventArgs(string message) { this._Message = message; } } public class NetMessageEventArgs { public NetIncomingMessage Message { get; private set; } public NetMessageEventArgs(NetIncomingMessage message) { Message = message; } } public class StatusChangedEventArgs : ConnectionEventArgs { public StatusChangedEventArgs(NetConnection client) : base(client) { } } Lastly, the Program class calling the NetworkServer: class Program { static LoggingType logMessageTypes = LoggingType.All; static void Main(string[] args) { NetworkServer ns = new NetworkServer(); Task t = 
ns.RunServer(); t.Wait(); LogLine("Done!", LoggingType.Information); } public static void LogLine(string line, LoggingType type, ConsoleColor foreColor = ConsoleColor.Gray, ConsoleColor backColor = ConsoleColor.Black) { if (type <= logMessageTypes) { Console.ForegroundColor = foreColor; Console.BackgroundColor = backColor; Console.WriteLine(DateTime.UtcNow.ToString("O") + ": " + type.ToString() + ": " + line); } } } Answer: Basically the communication between objects is done in two different ways. A parent object is talking to its child object by using the child object's methods and properties. A child object is talking to its parent by using events. So I support @Thomas W with his answer. That being said, let us start with the refactoring ideas. The EventArgs classes You should either make the variables you use public readonly, omitting the property, or use autoimplemented properties with a private setter, because they are only read but never written. The NetMessageEventArgs class is missing the inheritance from EventArgs, which should be added. See https://stackoverflow.com/a/6816889/2655508: it allows people using your classes to use and handle generic *Handler(object sender, EventArgs e) declarations. If you don't inherit from EventArgs, then they have to use explicitly typed handlers. I don't see a reason to name the constructor's parameter client in the ConnectionEventArgs and StatusChangedEventArgs classes. A more obvious name would be connection.
The result of the applied changes (which aren't needed anymore, but posting them nevertheless) public class ConnectionEventArgs : EventArgs { public NetConnection Connection { get; private set; } public ConnectionEventArgs(NetConnection connection) { Connection = connection; } } public class MessageEventArgs : EventArgs { public string Message { get; private set; } public MessageEventArgs(string message) { Message = message; } } public class NetMessageEventArgs : EventArgs { public NetIncomingMessage Message { get; private set; } public NetMessageEventArgs(NetIncomingMessage message) { Message = message; } } public class StatusChangedEventArgs : ConnectionEventArgs { public StatusChangedEventArgs(NetConnection connection) : base(connection) { } } NetworkServer class mainNetServer and connectedClients should be made readonly so you won't accidentally assign some new value to them. You should move the NetPeerConfiguration mainServerConfiguration; inside the constructor, because you only use it there. Or much better you should inject it into the constructor. Your class does not need to know how it is created nor should it need to create it. It only needs to use it. You should declare the NetPeerConfiguration mainServerConfiguration; and NetServer mainNetServer; explicitly private to make this more obvious. Instead of using a List<NetConnection> I would like to encourage you to use a HashSet<NetConnection>. In this way you don't need to check if an item is in the set; you can just call Add() and don't have to worry about it being already in the set. Based on your comments to @Thomas W's answer the only object which is using the provided events is the object itself. So you could use simple methods leaving aside the events. You should use an ILogger interface, injected into the constructor, instead of using a public static method of the calling object. By using string.Format() you can make the messages easier to read and could make them constant if you want to.
The result of the applied changes public class NetworkServer { private readonly NetServer mainNetServer; private readonly HashSet<NetConnection> connectedClients = new HashSet<NetConnection>(); public NetworkServer(NetPeerConfiguration mainServerConfiguration) { mainNetServer = new NetServer(mainServerConfiguration); } private void ProcessReceivedMessage(NetIncomingMessage message) { string msg = "Data received from: {0}, Payload size: {1}"; LogMessage(msg, LoggingType.Information, message.SenderConnection.RemoteEndPoint, message.LengthBytes); } private void LogMessage(string message, LoggingType loggingType, params Object[] par) { message = FormatMessage(message, par); Program.LogLine(message, loggingType); } private String FormatMessage(string message, params Object[] par) { if (par.Length == 0) { return message; } return string.Format(message, par); } private void ProcessErrorMessage(string message) { LogMessage(message, LoggingType.Error); } private void AddClient(NetConnection connection) { if (connectedClients.Add(connection)) { string msg = "New client discovered: {0}"; LogMessage(msg, LoggingType.Information, connection.RemoteEndPoint); } } private void RemoveClient(NetConnection connection) { connectedClients.Remove(connection); string msg = "Client lost: {0}"; LogMessage(msg, LoggingType.Information, connection.RemoteEndPoint); } private void ProcessStatusChangedMessage(NetConnection connection) { switch (connection.Status) { case NetConnectionStatus.Disconnected: RemoveClient(connection); break; case NetConnectionStatus.Connected: AddClient(connection); break; default: LogMessage("Unhandled StatusChanged: {0} now {1}.", LoggingType.Warning, connection.RemoteEndPoint, connection.Status); break; } } public Task RunServer() { return Task.Run(() => { mainNetServer.Start(); DateTime started = DateTime.UtcNow; LogMessage("The server was started on {0} at {1}.", LoggingType.Important, started.ToString("dd-MM-yyyy"), started.ToString("HH:mm:ss.fffffff"));
while (true) { NetIncomingMessage msg; while ((msg = mainNetServer.ReadMessage()) != null) { switch (msg.MessageType) { case NetIncomingMessageType.VerboseDebugMessage: case NetIncomingMessageType.DebugMessage: case NetIncomingMessageType.WarningMessage: case NetIncomingMessageType.ErrorMessage: ProcessErrorMessage(msg.ReadString()); break; case NetIncomingMessageType.StatusChanged: ProcessStatusChangedMessage(msg.SenderConnection); break; case NetIncomingMessageType.Data: ProcessReceivedMessage(msg); break; default: LogMessage("Unhandled type: {0}", LoggingType.Warning, msg.MessageType); break; } mainNetServer.Recycle(msg); } System.Threading.Thread.Sleep(1); } }); } }
{ "domain": "codereview.stackexchange", "id": 14585, "tags": "c#, object-oriented, .net, event-handling" }
Reverse Vowels of a String
Question: The task is taken from leetcode Write a function that takes a string as input and reverse only the vowels of a string. Example 1: Input: "hello" Output: "holle" Example 2: Input: "leetcode" Output: "leotcede" Note: The vowels does not include the letter "y". My first solution /** * @param {string} s * @return {string} */ var reverseVowels = function(s) { const LEN = s.length; const str = [...s]; const vowels = ['a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U']; const rev = []; for (let i = 0; i < LEN; i++) { if (vowels.includes(s[i])) { rev.push(s[i]); } } for (let i = 0; i < LEN; i++) { if (vowels.includes(str[i])) { str[i] = rev.pop(); } } return str.join(''); }; My second solution /** * @param {string} s * @return {string} */ var reverseVowels = function(s) { if (s.length <= 1) { return s; } const sForward = [...s]; const sBackward = [...s].reverse(); const LEN = s.length - 1; let left = -1, right = -1; const VOWELS = ['a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U']; const findVowel = start => (x, i) => start < i && VOWELS.includes(x); while(true) { left = sForward.findIndex(findVowel(left)); right = sBackward.findIndex(findVowel(right)); if (left >= LEN - right || left === -1 || right === -1) { return sForward.join(''); } [sForward[left], sForward[LEN - right]] = [sBackward[right], sForward[left]]; } return sForward.join(''); }; My third solution /** * @param {string} s * @return {string} */ var reverseVowels = function(s) { if (s.length <= 1) { return s; } const str = [...s]; let left = 0; let right = s.length - 1; const VOWELS = ['a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U']; while(left < right) { if (VOWELS.includes(str[left]) && VOWELS.includes(str[right])) { [str[right], str[left]] = [str[left], str[right]]; left++; right--; } else if (VOWELS.includes(str[left])) { right--; } else { left++; } } return str.join(''); }; My fourth solution /** * @param {string} s * @return {string} */ var reverseVowels = function(s) { if (s.length <= 1) { 
return s; } const str = [...s]; let left = 0; let right = s.length - 1; const map = new Map(); map.set('a', true); map.set('e', true); map.set('i', true); map.set('o', true); map.set('u', true); map.set('A', true); map.set('E', true); map.set('I', true); map.set('O', true); map.set('U', true); while(left < right) { if (map.get(str[left]) && map.get(str[right])) { [str[right], str[left]] = [str[left], str[right]]; left++; right--; } else if (map.get(str[left])) { right--; } else { left++; } } return str.join(''); }; My fifth solution /** * @param {string} s * @return {string} */ var reverseVowels = function(s) { if (s.length <= 1) { return s; } const str = [...s]; let left = 0; let right = s.length - 1; const set = new Set(['a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U']); while(left < right) { if (set.has(str[left]) && set.has(str[right])) { [str[right], str[left]] = [str[left], str[right]]; left++; right--; } else if (set.has(str[left])) { right--; } else { left++; } } return str.join(''); }; Answer: I recommend using the Array.prototype methods. These lend themselves to adopting a functional style of programming. Functional programming is a lot easier to test, as a pure function always returns the same result and never mutates objects. const vowels = ['a','e','i','o','u']; function vowelStringReverse(str) { //split the string into an array of chars, that includes their index position const strArray = str.split('').map((v,i) => ({ value: v, i: i, isVowel: vowels.includes(v) })); //For just the vowels, //Create index position lookup, where the index positions are reversed const vowelsReverseIndexLookup = strArray.filter(v => v.isVowel) .reduce((acc, cur, i, array) => { return { ...acc, [array[array.length-1-i].i]: cur.value }; }, {}); //Now iterate through the string, and look up the vowel from the lookup.
const strReversed = strArray.map((v,i) => { if (v.isVowel) { return vowelsReverseIndexLookup[i] } else { return v.value; } }); return strReversed.join(''); } console.log(vowelStringReverse('hello')); console.log(vowelStringReverse('leetcode')); In this example I've given, if you wanted, you could then pull out the anonymous functions I've declared, and write tests for them: for example: const vowelsReverseIndexLookup = strArray.filter(v => v.isVowel) .reduce(reverseIndexReducer, {}); //elsewhere: const reverseIndexReducer = (acc, cur, i, array) => { return { ...acc, [array[array.length-1-i].i]: cur.value }; };
{ "domain": "codereview.stackexchange", "id": 34632, "tags": "javascript, algorithm, programming-challenge, ecmascript-6" }
Forces of tension in a pulley rope system
Question: I have found similar questions to this topic on PSE but none of them answered the questions I have. Consider the pulley-rope system below, where there are two weights ($m_2>m_1$) and a pulley which is treated as a homogeneous cylinder $m_3$. The rope is considered massless. My question is first, are $T_1$ and $T_2$ the same magnitude? If not, would they have the same magnitude if the pulley was massless? And second, where did we get $T_1'$ and $T_2'$ (and what's their magnitude)? (Someone said it is because of Newton's third law but I don't get why the opposite force to $T_1$ isn't $T_2$ but $T_1'$.) (I am confused because my textbook says the magnitude of $T_1$ and $T_2$ will be the same even if $m_2>m_1$ (when the pulley is considered massless) and it also doesn't say anything about the existence of forces $T_1'$ and $T_2'$.) Thanks. Answer: The rope is considered massless and inextensible, and so the tension will be the same throughout the rope except when some other external force acts on the rope, e.g. friction. A rope with such properties has the following functions: 1 transmit a force from one position to another and/or 2 change the direction of the transmitted force. With the pulley not being massless and there being friction between the rope and the pulley, remembering that the function of the rope is to transmit forces, the labelled forces in your diagram can be defined as follows. Force $\vec T_2$ can be thought of as the force on mass $m_2$ due to the pulley, and force $\vec T'_2$ can be thought of as the force on the pulley due to mass $m_2$. So in a sense they are a Newton third law pair, being equal in magnitude and opposite in direction. The same is true of forces $\vec T_1$ and $\vec T'_1$. If the pulley has mass and hence a moment of inertia, and there is friction between the rope and the pulley, then to accelerate the pulley a torque must be applied, and this is provided by forces $\vec T'_2$ and $\vec T'_1$.
If the pulley is massless and therefore has zero moment of inertia then there can be no torque applied on it, otherwise it would suffer an infinite angular acceleration. So in this case the magnitudes of forces $\vec T'_2$ and $\vec T'_1$ would be equal and their directions are opposite. To include the rope in a more formal way you would need to consider the force on the top of the rope due to the pulley and its Newton third law pair, the force on the pulley due to the top of the rope. Also you would need to consider the force on the bottom of the rope due to the mass and its Newton third law pair, the force on the mass due to the bottom of the rope. In this case the magnitude of the force on the top of the rope due to the pulley would equal the magnitude of the force on the bottom of the rope due to the mass, and these forces would be in opposite directions, ensuring a net zero force on the massless rope. This would introduce extra (unnecessary?) forces to deal with.
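The torque balance described above can be made concrete with a short numeric sketch (the mass values and the value of g below are illustrative assumptions, not from the question; the pulley is taken as a homogeneous cylinder with $I = \tfrac12 m_3 r^2$):

```python
# Numeric sketch of the tensions on either side of a massive pulley.
# Assumed illustration values, not from the question:
g = 9.81                      # m/s^2
m1, m2, m3 = 1.0, 2.0, 1.0    # kg, with m2 > m1

# Newton's second law for each hanging mass plus the torque equation
# (T2 - T1) r = I * alpha = (1/2) m3 r^2 * (a / r)  gives (r cancels):
a = (m2 - m1) * g / (m1 + m2 + m3 / 2)
T1 = m1 * (g + a)             # m1 accelerates upward
T2 = m2 * (g - a)             # m2 accelerates downward
print(T1, T2)                 # T2 > T1: the difference is what spins the pulley up

# Consistency check: the tension difference supplies exactly the pulley torque.
assert abs((T2 - T1) - 0.5 * m3 * a) < 1e-9

# With a massless pulley (m3 = 0) the two tensions coincide:
a0 = (m2 - m1) * g / (m1 + m2)
assert abs(m1 * (g + a0) - m2 * (g - a0)) < 1e-9
```

So with a massive pulley the two tensions genuinely differ, and the difference vanishes as $m_3 \to 0$, matching the answer's argument.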
{ "domain": "physics.stackexchange", "id": 55244, "tags": "newtonian-mechanics, forces" }
My first accumulators
Question: Notes I'm working my way through SICP, and as I got very confused by the section on folds, I decided to try to implement foldr in Scheme and JavaScript to understand how it works differently with immutable and mutable variables. I'm mostly looking for advice on whether or not what I've done makes sense idiomatically in each language, but any other feedback would be very much appreciated. Scheme ;Foldr implemented so I can understand it (define (foldright function initial collection) (if (not (pair? collection)) initial (foldright function (function initial (car collection)) (cdr collection)))) ;Test Case (should return 10) (foldright (lambda (x y) (+ x y)) 0 (list 1 2 3 4)) Javascript function foldr(func, initial, collection) { if (collection.length === 0) { return initial }; initial = func(initial, collection[0]); collection.shift(); return foldr(func, initial, collection); }; //Test case (should return 10) var result = foldr(function (a, b) { return a + b; }, 0, [1, 2, 3, 4]); console.log(result); Answer: I can only speak to the JS code - which looks pretty good, by the way... except: You're modifying the array you're folding. In fact, you're truncating it completely due to your use of shift. So your function has destructive side-effects. After the fold, you have your answer, but you've lost the question, so to speak. I.e., var question = [2, 3, 7]; var answer = foldr(function (a, b) { return a * b; }, 1, question); console.log(answer); // => 42 console.log(question); // => [] ...
oops Hence it'd be better to do something like this, using slice(1) to get a copy of the array starting from index 1: function foldl(func, initial, collection) { // not foldr (see below) if (collection.length === 0) { return initial; } initial = func(initial, collection[0]); return foldl(func, initial, collection.slice(1)); } By the way: You'll see I've removed the semicolon after the if block - it's not necessary (JS just ignores it, as it does the semicolon after the function body). But I've inserted a semicolon after return initial inside the block. That semicolon isn't strictly necessary either, since JS will see the close-brace following it, and insert its own semicolon. But better to do it properly yourself. You could also write the condition as simply if (!collection.length), and it could be considered idiomatic. But I prefer your current - strict and explicit - condition myself. Lastly, I'd prefer the Node.js style of placing function arguments at the end, like foldr(initial, collection, func) only because it avoids the dangling arguments after an inline function. On the other hand, JavaScript itself favors putting function arguments first, as in the built-in Array.prototype.reduce function... it's a tough call (no pun intended). Update: I hadn't even noticed, until I read your comment above, but yes, your function is actually foldl, not foldr. You're reducing the array from first to last (i.e. from the left). So the first value passed to func is the first element in the array. A foldr function would pass the values in reverse order. 
A real foldr function could be: function foldr(func, initial, collection) { // for real this time if (collection.length > 1) { initial = foldr(func, initial, collection.slice(1)); } return func(initial, collection[0]); } You can check the behavior like this: var func = function (memo, value) { memo.push(value); return memo; }; foldl(func, [], [1, 2, 3, 4]); // => [1, 2, 3, 4] foldr(func, [], [1, 2, 3, 4]); // => [4, 3, 2, 1] (reversed!) Update 2: While I'm at it, here's a more memory-efficient solution, that doesn't use slice and thus doesn't create n-1 arrays along the way. A similar technique can be used for foldl: function foldr(func, memo, collection) { var l = collection.length - 1; function fold(offset) { if (offset < l) { memo = fold(offset + 1); } return func(memo, collection[offset]); } return fold(0); } Of course, neither this nor the functions above actually check whether the input array is empty (or if it's even an array)... not that that's terribly difficult to do, but I've complicated things enough already, I think :)
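For comparison, the same foldl/foldr distinction can be sketched in Python (a language switch purely for illustration; the function names mirror the JavaScript above, and neither version mutates its input, unlike the original shift-based code):

```python
# foldl consumes the sequence from the left, foldr from the right.
def foldl(func, initial, seq):
    acc = initial
    for item in seq:              # left to right
        acc = func(acc, item)
    return acc

def foldr(func, initial, seq):
    acc = initial
    for item in reversed(seq):    # right to left
        acc = func(acc, item)
    return acc

collect = lambda memo, value: memo + [value]   # non-mutating "push"
print(foldl(collect, [], [1, 2, 3, 4]))   # [1, 2, 3, 4]
print(foldr(collect, [], [1, 2, 3, 4]))   # [4, 3, 2, 1] (reversed!)
```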
{ "domain": "codereview.stackexchange", "id": 6985, "tags": "javascript, scheme, sicp, higher-order-functions" }
Check whether a lowercase string has any character repetition
Question: I got this HW question from my freshman year at college - "Create a python function to check whether a string containing only lowercase letters has any repetition in it". So, I did it a few ways: Brute forced it by checking if a character in the string was present in the substring to the right of it using a nested for loop. Created a list and stored every new character in it. Looped through each character in the string and checked if a character was in the list. If so, the string wasn't unique. Both of these work but I'm looking for another solution. I came up with the following. Can you please tell me how inefficient this one would be? def isUnique(string): ascii_product = 1 for chr in string: ascii_prime = 2 ** (ord(chr) - 95) - 1 if ascii_product % ascii_prime == 0: return False ascii_product *= ascii_prime else: return True If the code is too horrible to understand, here's what I'm trying to do. I loop through each character in the string and with its ASCII value, I create a unique Mersenne prime number associated with it. I then check if the product is divisible by the prime. If it is, the string is not unique. Otherwise, I multiply the product by the prime. This code works as well but I wanted to know how bad it is in terms of efficiency. Also, what's the best way of doing this? Answer: Naming PEP 8: The Style Guide for Python Code has recommendations all Python programs should follow. One of these is function names should be in snake_case. As such, isUnique should actually be named is_unique. Don't shadow built-in function names chr is a built-in function. For example, chr(97) returns 'a'. After executing for chr in string, you no longer have access to the chr function in that scope. ch is commonly used as a variable for extracting characters from a string. for-else The for ... else: ... construct is for use when you use break to terminate a loop early if something is found, executing the else: portion only if the search failed to find a result. 
In this particular case, you are not using break; rather you return from inside the loop. In such a situation, else: is unnecessary, and can cause confusion. Consider: def example(container): for item in container: if complicated_test(item): return True else: print("Point A") print("Point B") return False There is no way "Point B" can be reached without also reaching "Point A" first. The else: is an unnecessary control block. Magic numbers What is 95? Where did it come from? It is a magic number. My first guess was that it is the ordinal of 'a', but that turned out to be wrong. The constant deserves a name. LOWERCASE_TO_MERSENNE_OFFSET comes to mind as a possibility, though it might be a bit long. You might even want to define it with an expression, to help readers see where it comes from: LOWERCASE_TO_MERSENNE_OFFSET = ord('a') - 2 Binary numbers You are trying to use prime numbers to store flags in a single integer, to indicate whether or not a lowercase letter has been seen. Using bits to store these flags in a single integer is much simpler. $2^0$ would be the 'a' flag, $2^1$ would be the 'b' flag, ... $2^{25}$ would be the 'z' flag. def is_unique(string: str) -> bool: """Determine if a string contains unique lowercase letters Returns `True` if all lowercase letters are unique, `False` otherwise. Calling the function with `string` containing anything other than lowercase letters results in undefined behaviour. """ letter_flags = 0 first = ord('a') for ch in string: flag = 1 << (ord(ch) - first) if letter_flags & flag: return False letter_flags |= flag return True Since larger integers in Python are stored as objects on the heap, and are immutable, bit manipulation requires creating a new object when the bits of the integer are changed. As such, bit manipulation in Python is not as fast as in languages like C, C++, or Java. There is a bitarray package which can be installed (pip install bitarray) which may be used to create mutable bit arrays.
Using a bitarray instead of an integer will be much faster, yet still keeps the memory footprint of the application near its absolute minimum. Since a bitarray can be thought of as the bits of an integer, this can still be thought of as storing your "seen flags" in a single integer. from bitarray import bitarray def is_unique(string: str) -> bool: """Determine if a string contains unique lowercase letters Returns `True` if all lowercase letters are unique, `False` otherwise. Calling the function with `string` containing anything other than lowercase letters results in undefined behaviour. """ letter_flags = bitarray(26) letter_flags.setall(False) first = ord('a') for ch in string: flag = ord(ch) - first if letter_flags[flag]: return False letter_flags[flag] = True return True Finally, bit manipulation will always incur an overhead over direct indexing. Using a bytearray(26) object to hold the twenty-six flags is likely faster than using a bitarray. It is no longer meeting your implied goal of storing the flags inside a single integer. It requires perhaps 22 additional bytes of memory, but does not require installation of an external package. def is_unique(string: str) -> bool: """Determine if a string contains unique lowercase letters Returns `True` if all lowercase letters are unique, `False` otherwise. Calling the function with `string` containing anything other than lowercase letters results in undefined behaviour. """ letter_flags = bytearray(26) first = ord('a') for ch in string: flag = ord(ch) - first if letter_flags[flag]: return False letter_flags[flag] = 1 return True Set Juho's set solution is a simple 1-line solution, but it is $O(N)$ in time. With certain inputs, it can take a very long time, failing programming challenges. E.g., is_unique('a' * 1_000_000_000) calls the function with a string 1 billion characters long, and then iterates over the entire string to build the set.
If you want to use this type of solution, you should include a fast fail to catch these types of degenerate cases: from string import ascii_lowercase def is_unique(string: str) -> bool: # Pigeon hole principle: a string longer than 26 characters must have duplicates! if len(string) > len(ascii_lowercase): return False return len(string) == len(set(string))
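One caveat worth adding, which neither the question nor the answer raises: the question's divisibility trick is not just slow, it can be wrong. $2^a - 1$ divides $2^b - 1$ whenever $a$ divides $b$, so the per-letter values are not pairwise coprime. A small sketch (function names are mine) exhibits a false positive:

```python
# The question's "Mersenne number" trick, reproduced verbatim in spirit:
# 'a' maps to 2**2 - 1 = 3, 'c' maps to 2**4 - 1 = 15 = 3 * 5.
def is_unique_mersenne(string):
    acc = 1
    for ch in string:
        m = 2 ** (ord(ch) - 95) - 1   # same magic offset as the question
        if acc % m == 0:
            return False
        acc *= m
    return True

def is_unique_reference(string):
    return len(set(string)) == len(string)

# 'ca' has no repeated letter, yet after 'c' we have acc = 15, and
# 15 % 3 == 0 makes the trick report a (nonexistent) duplicate 'a':
print(is_unique_mersenne('ca'))    # False  (wrong)
print(is_unique_reference('ca'))   # True
```

So beyond the efficiency points above, this approach would need genuinely pairwise-coprime values per letter (e.g. the first 26 primes) to be correct at all.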
{ "domain": "codereview.stackexchange", "id": 42445, "tags": "python, python-3.x" }
How to name ether with ketone and carboxyl groups?
Question: Carboxylic acid is the main functional group due to higher priority, but since it's an ether, what's the proper way to name the following compound? My attempt: 2-(1-methyl oxo propyl ether) ethanoic acid. Answer: You have already found out that the carboxylic acid group is the principal characteristic group. Therefore, the ether group has to be expressed as a prefix. The corresponding rule in Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) reads as follows. P-63.2.2.1.1 Substituent prefix names for $\ce{R'-O-{}}$ groups are formed by concatenation, i.e., by adding the prefix 'oxy' to the substituent prefix name for the group $\ce{R'}$. These compound prefixes require the numerical multiplying prefixes 'bis', 'tris', etc. In this case, the substituent prefix name for the $\ce{R'-{}}$ group is 3-oxobutan-2-yl and thus the substituent prefix name for the $\ce{R'-O-{}}$ group is (3-oxobutan-2-yl)oxy. Therefore, the complete name for the compound that is given in the question is [(3-oxobutan-2-yl)oxy]acetic acid. Note that locants are omitted for parent compounds when all substitutable hydrogen atoms have the same locant. Therefore, the preferred IUPAC name is [(3-oxobutan-2-yl)oxy]acetic acid and not 2-[(3-oxobutan-2-yl)oxy]acetic acid.
{ "domain": "chemistry.stackexchange", "id": 17280, "tags": "organic-chemistry, nomenclature, carbonyl-compounds, ethers" }
Where do fermionic coherent states live?
Question: Although there have been a couple of questions on fermionic coherent states, I don't think any has answered the question "on what space do fermionic coherent states live?", or at least not to my understanding. Hopefully, someone with more knowledge can clarify the situation. The usual "explanation" is that coherent states $|\psi\rangle = \exp(-\psi a^*) |0\rangle$ live on a space "larger" than the usual fermionic Fock space $\mathscr{F}$ (or the exterior algebra of the single-particle Hilbert space $\mathscr{H}$), where Grassmann variables $\psi$ are the coefficients. I understand that you can define Grassmann variables as elements in the exterior algebra $\mathscr{G}$ of an infinite-dim vector space $V$ and whether the variable is fermionic or bosonic depends on whether $\psi \in \mathscr{G}_-=\oplus \Lambda^{2k+1}(V)$ or $\in \mathscr{G}_+=\oplus \Lambda^{2k}(V)$. You can then define the Grassmann integral via an abstract algebraic generalization of the Lebesgue/Riemann integral. A rigorous explanation can be found on Tao's blog. However, my problem is how would you define/construct this "larger space"? It can't just be the tensor product $\mathscr{G} \otimes \mathscr{F}$ since we require that a fermionic Grassmann variable $\psi\in \mathscr{G}_-$ anti-commute with $\mathscr{F}_- = \oplus \Lambda^{2k+1}(\mathscr{H})$ and commute with $\mathscr{F}_+ = \oplus \Lambda^{2k}(\mathscr{H})$ so that $\psi$ anti-commutes with the ladder operators $a,a^*$. It should also have a well-defined "inner product", in the sense that it is a sesquilinear map on this "larger space" and maps into the Grassmann algebra $\mathscr{G}$.
One attempt would be to think of fermionic coherent states as anti-linear maps on the fermionic Fock space which satisfy $$ \langle m|\psi \rangle=\psi_{i_M} \cdots \psi_{i_1} $$ where $|m\rangle = (a_{i_1}^*)\cdots(a_{i_M}^*)|0\rangle$, and similarly think of $\langle \psi|$ as linear maps which satisfy $\langle \psi|a^* = \langle \psi|\psi$ and $\langle \psi|0\rangle = 1$. However, I'm not sure if this is the right way to think of this problem. Best attempt so far. After further thought, it's possible that the "larger space" is the exterior algebra $\Lambda$ of the direct sum $V\oplus \mathscr{H}$, so that $\mathscr{F},\mathscr{G}\subseteq \Lambda$. Also notice that as vector spaces, $\Lambda$ is isomorphic to $\mathscr{G}\otimes \mathscr{F}$, which is easily seen if we were to "push" all the Grassmann variables to the left and Fock space states to the right based on the (anti)-commutation relation. Therefore, the ladder operators $a,a^*$ are well-defined on $\Lambda$ as $I\otimes a, I\otimes a^*$ on $\Lambda \cong \mathscr{G}\otimes \mathscr{F}$. We can then proceed to define an "inner product" based on the anti-commutation rules. I haven't yet worked out the details though. Answer: Consider a fermionic Fock space ${\cal F}$, which is a $\mathbb{Z}_2$-graded $\mathbb{C}$-Hilbert space. Example: Given a single Grassmann-odd creation operator $a^{\dagger}$, the fermionic Fock space is $${\cal F}~=~\mathbb{C}|\Omega\rangle\oplus\mathbb{C}a^{\dagger}|\Omega\rangle. $$ The most pedestrian approach is probably to consider a (generalized) $\mathbb{Z}_2$-graded $R$-Hilbert space $$ {\cal H}~=~R\otimes_{\mathbb{C}}{\cal F}, $$ where $$R~=~\mathbb{C}^{1|0}\oplus\mathbb{C}^{0|1}$$ is the $\mathbb{Z}_2$-graded ring of supernumbers. E.g. the inner product $$\langle\cdot,\cdot\rangle: {\cal H} \times {\cal H}~\to ~R$$ should be $R$-sesquilinear. Returning to OP's title question, the fermionic coherent states then belong to ${\cal H}$. See also this Phys.SE post and links therein.
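The graded tensor product in the answer can be illustrated with a finite-dimensional toy model (this sketch is my own, not part of the answer: one auxiliary two-level mode plays the single Grassmann generator, and a Jordan-Wigner-style grading operator makes it genuinely anticommute with the physical ladder operators, exactly the sign structure the question worries about):

```python
import numpy as np

# One auxiliary "Grassmann" mode tensored with one physical Fock mode.
I2 = np.eye(2)
ann = np.array([[0., 1.], [0., 0.]])   # single-mode annihilator
Z = np.diag([1., -1.])                  # grading (fermion parity) operator

psi  = np.kron(ann.T, I2)   # Grassmann generator: creator on the auxiliary mode
a    = np.kron(Z, ann)      # physical annihilator, dressed with the grading string
adag = np.kron(Z, ann.T)    # physical creator

assert np.allclose(psi @ psi, 0)                # psi^2 = 0: nilpotent
assert np.allclose(psi @ a + a @ psi, 0)        # {psi, a} = 0
assert np.allclose(psi @ adag + adag @ psi, 0)  # {psi, a*} = 0

vac = np.kron([1., 0.], [1., 0.])
# exp(-psi a*)|0> truncates after one term because psi^2 = 0:
coherent = vac - psi @ adag @ vac
# Defining property of the fermionic coherent state: a |psi> = psi |psi>
assert np.allclose(a @ coherent, psi @ coherent)
```

The grading operator `Z` in the tensor factors is what implements the "push Grassmann variables to the left with a sign" bookkeeping described in the question's best attempt.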
{ "domain": "physics.stackexchange", "id": 76018, "tags": "hilbert-space, mathematical-physics, fermions, grassmann-numbers, coherent-states" }
Implementing Gradient Descent Algorithm in Python, bit confused regarding equations
Question: I'm following the guide as outlined at this link: http://neuralnetworksanddeeplearning.com/chap2.html For the purposes of this question, I've written a basic network with 2 hidden layers, one with 2 neurons and one with one neuron. For a very basic task, the network will learn how to compute an OR logic gate so the training data will be: X = [[0, 0], [0, 1], [1, 0], [1, 1]] Y = [0, 1, 1, 1] And the diagram: For this example, the weights and biases are: w = [[0.3, 0.4], [0.1]] b = [[1, 1], [1]] The feedforward part was pretty easy to implement so I don't think I need to post that here. The tutorial I've been following summarises calculating the errors and the gradient descent algorithm with the following equations: For each training example $x$, compute the output error $\delta^{x, L}$ where $L =$ Final layer (Layer 1 in this case). $\delta^{x, L} = \nabla_aC_x \circ \sigma'(z^{x, L})$ where $\nabla_aC_x$ is the differential of the cost function (basic MSE) with respect to the Layer 1 activation output, and $\sigma'(z^{x, L})$ is the derivative of the sigmoid function of the Layer 1 output i.e. $\sigma(z^{x, L})(1-\sigma(z^{x, L}))$. That's all good so far and I can calculate that quite straightforwardly. Now for $l = L-1, L-2, ...$, the error for each previous layer can be calculated as $\delta^{x, l} = ((w^{l+1})^T \delta^{x, l+1}) \circ \sigma'(z^{x, l})$ Which again, is pretty straightforward to implement. Finally, to update the weights (and bias), the equations are for $l = L, L-1, ...$: $w^l \rightarrow w^l - \frac{\eta}{m}\sum_x\delta^{x,l}(a^{x, l-1})^T$ $b^l \rightarrow b^l - \frac{\eta}{m}\sum_x\delta^{x,l}$ What I don't understand is how this works with vectors of different numbers of elements (I think the lack of vector notation here confuses me). For example, Layer 1 has one neuron, so $\delta^{x, 1}$ will be a scalar value since it only outputs one value. However, $a^{x, 0}$ is a vector with two elements since layer 0 has two neurons.
Which means that $\delta^{x, l}(a^{x, l-1})^T$ will be a vector even if I sum over all training samples $x$. What am I supposed to do here? Am I just supposed to sum the components of the vector as well? Hopefully my question makes sense; I feel I'm very close to implementing this entirely and I'm just stuck here. Thank you [edit] Okay, so I realised that I've been misrepresenting the weights of the neurons and have corrected for that. weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] Which has the output [array([[0.27660583, 1.00106314], [0.34017727, 0.74990392]]) array([[ 1.095244 , -0.22719165]]) Which means that layer0 has a weight matrix with shape 2x2 representing the 2 weights on neuron01 and the 2 weights on neuron02. My understanding then is that $\delta^{x,l}$ has the same shape as the weights array because each weight gets updated independently. That's also fine. But the bias term (according to the link I sourced) has 1 term for each neuron, which means layer 0 will have two bias terms (b00 and b01) and layer 1 has one bias term (b10). However, to calculate the update for the bias terms, you sum the deltas over x i.e. $\sum_x \delta^{x, l}$; if delta has the size of the weight matrix, then there are too many terms to update the bias terms. What have I missed here? Many thanks Answer: There seems to be a mismatch between the weights you provide and your network diagram. Since w[0] (the yellow connections) is meant to transform $ x \in \mathbb{R}^2 $ into the layer 0 activations which are $ \mathbb{R}^2 $, w[0] should be a matrix $ \in \mathbb{R}^{2 \times 2} $, not a vector in $\mathbb{R}^2 $ as you have. Likewise, your w[1] (the red connections) should be a vector $ \in \mathbb{R}^2 $ and not a scalar. Finally, if you are indeed scaling the output of layer 1 (the blue connection), then you'll need an additional scalar value.
However, the blue connection confuses me a bit as usually the activated output is used directly in the loss function, not a scaled version of it. Unless the blue connection stands for the loss function. In short, I believe if you change the shapes of your weight matrices to actually represent your network diagram, your update equations will work. I'll go through the network below to make sure I illustrate my point. $ x \in \mathbb{R}^{2} $, an input example $ W^0 \in \mathbb{R}^{2 \times 2} $, the yellow connections $ W^1 \in \mathbb{R}^2 $, the red connections $ z^0 = xW^0 \in \mathbb{R}^{2} $, the weighted inputs to the layer 0 nodes. The dimensions of this should match the number of nodes at layer 0. $ a^0 = \sigma(z^0) \in \mathbb{R}^{2} $, the output of the layer 0 nodes. The dimensions of this should match the number of nodes at layer 0. $ z^1 = a^0 W^1 \in \mathbb{R} $, the weighted inputs to the layer 1 nodes. The dimensions of this should match the number of nodes at layer 1. $ a^1 = \sigma(z^1) \in \mathbb{R} $, the output of the layer 1 nodes and thus the output of the network. The dimensions of this should match the number of nodes at layer 1. Weight Updates As you say before your edit, $\delta^1$, as the product of two scalars $\nabla_a C$ and $\sigma'(z^1)$, is also a scalar. Since $a^0$ is a vector in $\mathbb{R}^2$, then $\delta^1(a^0)^T$ is also a vector in $\mathbb{R}^2$. This matches what we expect, as it should match the dimensions of $W^1$ to allow the element-wise subtraction in the weight update equation. NB. It is not the case, as you say in your edit, that the shape of $\delta^l$ should match the shape of $W^l$. It should instead match the number of nodes, and it is the shape of $\delta^l(a^{l-1})^T$ that should match the shape of $W^l$. You had this right in your original post. Bias Updates This brings us to the bias updates. There should be one bias term per node in a given layer, so the shapes of your biases are correct (i.e. 
$\mathbb{R}^2$ for layer 0 and $\mathbb{R}$ for layer 1). Now, we saw above that the shape of $\delta^l$ also matches the number of nodes in layer $l$, so again the element-wise subtraction in your original bias update equation works. I also tried using this book to learn backprop, but I had a hard time connecting the variables with the different parts of the network and the corresponding code. I finally understood the algorithm in depth only after deriving all the update equations by hand for a very small network (2 inputs, one output, no hidden layers) and working my way up to larger networks, making sure to keep track of the shapes of the inputs and outputs along the way. If you're having trouble with the update equations I highly recommend this. A final piece of advice that helped me: drop the $x$ and the summations over input examples from your formulations and just treat everything as matrices (e.g. a scalar becomes a matrix in $\mathbb{R}^{1 \times 1}$, $X$ is a matrix in $\mathbb{R}^{N \times D}$). First, this allows you to better interpret matrix orientations and debug issues such as a missing transpose operation. Second, this is (in my limited understanding) how backprop should actually be implemented in order to take advantage of optimized linalg libraries and GPUs, so it's perhaps a bit more relevant.
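The shape bookkeeping above can be sketched in NumPy for the 2-2-1 network (the weights are random placeholders and the single-example loss $\tfrac12(a^1-y)^2$ is my assumption for illustration; a finite-difference check confirms the analytic gradient):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W0 = rng.standard_normal((2, 2))   # "yellow" connections: R^2 -> R^2
b0 = np.ones(2)
W1 = rng.standard_normal(2)        # "red" connections: R^2 -> R
b1 = 1.0

def forward(x):
    z0 = x @ W0 + b0; a0 = sigmoid(z0)
    z1 = a0 @ W1 + b1; a1 = sigmoid(z1)
    return z0, a0, z1, a1

x, y = np.array([1.0, 0.0]), 1.0
z0, a0, z1, a1 = forward(x)

delta1 = (a1 - y) * a1 * (1 - a1)        # scalar: one output node
grad_W1 = delta1 * a0                    # shape (2,), matches W1
delta0 = (W1 * delta1) * a0 * (1 - a0)   # shape (2,): one entry per node
grad_W0 = np.outer(x, delta0)            # shape (2, 2), matches W0
assert grad_W1.shape == W1.shape and grad_W0.shape == W0.shape
assert delta0.shape == b0.shape          # one bias update per node, as stated above

# Finite-difference check on one weight confirms the analytic gradient.
eps = 1e-6
W0[0, 0] += eps;     lp = 0.5 * (forward(x)[3] - y) ** 2
W0[0, 0] -= 2 * eps; lm = 0.5 * (forward(x)[3] - y) ** 2
W0[0, 0] += eps
assert abs((lp - lm) / (2 * eps) - grad_W0[0, 0]) < 1e-6
```

Note how each delta matches the node count of its layer, not the weight-matrix shape, which is exactly the bias-update confusion in the question's edit.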
{ "domain": "ai.stackexchange", "id": 2235, "tags": "neural-networks, python, gradient-descent" }
Detecting conservation, loss, or gain in a crafting game with items and recipes
Question: Suppose we're designing a game like Minecraft where we have lots of items $i_1,i_2,...,i_n\in I$ and a bunch of recipes $r_1,r_2,...,r_m\in R$. Recipes are functions $r:(I\times\mathbb{N})^n\rightarrow I\times\mathbb{N}$, that is they take some items with non-negative integer weights and produce an integer quantity of another item. For example, the recipe for cake in Minecraft is: 3 milk + 3 wheat + 2 sugar + 1 egg $\rightarrow$ 1 cake ... and the recipe for torches is: 1 stick + 1 coal $\rightarrow$ 4 torches Some recipes could even be reversible, for example: 9 diamonds $\leftrightarrow$ 1 diamond block If there's some combination of recipes we can repeatedly apply to get more of the items that we started with then the game is poorly balanced and this can be exploited by players. It's more desirable that we design the game with recipes that conserve items or possibly lose some items (thermodynamic entropy in the real world - you can't easily un-burn the toast). Is there an efficient algorithm that can decide if a set of recipes will: conserve items? lose items to inefficiency? gain items? Is there an efficient algorithm that can find the problematic recipes if a game is imbalanced? My first thoughts are that there is a graph structure / maximum flow problem here but it's very complex, and that it resembles a knapsack problem. Or maybe it could be formulated as a SAT problem - this is what I'm considering to code it at the moment but something more efficient might exist. We could encode recipes in a matrix $\mathbf{R}^{m \times n}$ where rows correspond to recipes and columns correspond to items. Column entries are negative if an item is consumed by a recipe, positive if it's produced by the recipe, and zero if it's unused. Similar to a well known matrix method for graph cycle detection, we could raise $\mathbf{R}$ to some high power and get sums of each row to see if item totals keep going up, stay balanced, or go negative. 
However, I'm not confident this always works. Any discussion, code, or recommended reading is very appreciated. Answer: This should be solvable with linear programming. Background and setup Let the state vector be a vector of the count of number of each item you have. If the possible items are milk, wheat, sugar, egg, cake, diamonds, then the rule 3 milk + 3 wheat + 2 sugar + 1 egg $\rightarrow$ 1 cake affects the state vector by adding $(-3,-3,-2,-1,1,0)$ to it. So, let $a_i$ denote the change vector for the $i$th rule. Gaining items I claim that there exists a way to gain items without bound iff there exists a feasible solution to the linear program $$a_1 x_1 + \dots + a_n x_n \ge (0,0,\dots,0), x_1 \ge 0, \dots, x_n \ge 0$$ such that $a_1 x_1 + \dots + a_n x_n>(0,0,\dots,0)$. Here $\ge$ is defined on vectors pointwise (i.e., $u \ge v$ iff $u_i\ge v_i$ holds for all $i$) and similarly for $>$. This can be expressed as a linear program: you maximize the sum of the coordinates of $a_1 x_1 + \dots + a_n x_n$, subject to the inequalities above. Therefore, you can solve it in polynomial time using a linear programming solver. This tells you whether there is a way to gain some item without bound. Why is the claim true? Well, if there is a feasible solution to the linear program, then it provides a way to grow the number of some item without bound. In particular, if you start with a very large number of each item, then apply rule 1 $x_1$ times, rule 2 $x_2$ times, etc., you'll end up with a new state vector that differs from where you started by $a_1 x_1 + \dots + a_n x_n$, which is at least as large in each component and is strictly larger in at least one component. Moreover, if you start with a sufficiently large number of items, you'll never "go negative" at any intermediate step of application of the rules. 
Note that if there is a solution to this linear program, there is a solution in the rationals, which yields a solution in the integers (multiply by the appropriate constant to clear denominators). Conversely, if there is a method to grow the number of some item without bound, then there is a solution to the linear program: just let $x_i$ count the number of times rule $i$ is applied in that method, and you'll see that this yields a valid solution to the linear program. Losing items I believe that there is a similar equivalence: there exists a way to lose items without bound iff there exists a feasible solution to the linear program $$a_1 x_1 + \dots + a_n x_n \le (0,0,\dots,0), x_1 \ge 0, \dots, x_n \ge 0$$ such that $a_1 x_1 + \dots + a_n x_n<(0,0,\dots,0)$. You should check my reasoning as I have not checked this carefully. Conservation Finally, if there is no way to gain items without bound or lose items without bound, then I think it follows that value is conserved.
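For very small recipe sets, the feasibility test above can be sketched by brute force over small integer multipliers (a real implementation would call an LP solver; the recipe numbers below are invented examples, with items ordered as stick, coal, torch):

```python
from itertools import product

def find_gain(recipes, max_uses=5):
    """Search nonnegative integer multipliers x with M x >= 0 and
    at least one strictly positive coordinate (an item gained for free)."""
    n = len(recipes)
    n_items = len(recipes[0])
    for x in product(range(max_uses + 1), repeat=n):
        net = [sum(r[i] * k for r, k in zip(recipes, x)) for i in range(n_items)]
        if all(v >= 0 for v in net) and any(v > 0 for v in net):
            return x, net
    return None

craft = (-1, -1, 4)   # 1 stick + 1 coal -> 4 torches
melt  = ( 1,  1, -2)  # 2 torches -> 1 stick + 1 coal   (too generous!)
melt4 = ( 1,  1, -4)  # 4 torches -> 1 stick + 1 coal   (exact reverse)

print(find_gain([craft, melt]))    # craft once, melt once: 2 free torches
print(find_gain([craft, melt4]))   # None: this pair conserves value
```

The exploit found in the first pair is exactly the kind of imbalance the answer's LP detects; an unbounded LP corresponds to repeating those multipliers forever.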
{ "domain": "cs.stackexchange", "id": 16027, "tags": "graphs, optimization, knapsack-problems, integer-programming, computer-games" }
Ear ossicles a part of Skull?
Question: Is there anything special about not considering the ear ossicles as a part of the skull? I could not understand the reasoning behind such a classification. Answer: The justification is simple. The bones of the ear [ear ossicles] do not articulate with any other bone of the skull, hence they are not considered a part of the skull; the same is the case with the hyoid, which is attached to the tongue. These bones have been placed under the group of associated bones.
{ "domain": "biology.stackexchange", "id": 7149, "tags": "human-anatomy, terminology, hearing, human-ear" }
Should I use minimax or alpha-beta pruning?
Question: Should I use minimax or alpha-beta pruning (or both)? Apparently, alpha-beta pruning prunes some parts of the search tree. Answer: Both algorithms give the same answer. The main difference is that alpha-beta does not explore all paths, as minimax does, but prunes those that are guaranteed not to lead to an optimal state for the current player, that is, max or min. So, alpha-beta is a better implementation of minimax. Here are the time complexities of both algorithms: Minimax: $\mathcal{O}(b^d)$; Alpha-beta (best-case scenario): $\mathcal{O}(b^{d/2}) = \mathcal{O}(\sqrt{b^d})$, where $b$ is the average branching factor and $d$ is the search depth in plies.
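To make the pruning concrete, here is a minimal sketch (my own toy example, not from the answer) comparing plain minimax with alpha-beta on a small tree of nested lists. Both return the same value, but alpha-beta skips remaining siblings as soon as $\alpha \ge \beta$.

```python
import math

def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # leaf node
        return node
    values = (minimax(c, not maximizing) for c in node)
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):      # leaf node
        return node
    if maximizing:
        value = -math.inf
        for c in node:
            value = max(value, alphabeta(c, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # prune remaining siblings
                break
        return value
    value = math.inf
    for c in node:
        value = min(value, alphabeta(c, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                   # prune remaining siblings
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]  # max over three min nodes
assert minimax(tree, True) == alphabeta(tree, True) == 3
```

On this tree, alpha-beta never looks at the leaves 4 and 6: after seeing the leaf 2 in the second min node, that branch can no longer beat the 3 already guaranteed to the maximizer.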
{ "domain": "ai.stackexchange", "id": 2668, "tags": "comparison, minimax, alpha-beta-pruning" }
Electric motor for low speed, high torque application?
Question: Background For my mechanical engineering senior design project, we are designing a cargo transporter electric vehicle for off-road conditions. The vehicle needs to transport loads weighing up to about 22,000 N or about 5000 lbf. The transporter will have a very slow speed (about 6 in/s). Using a coefficient of rolling resistance of 0.35, I calculated that the total power requirements would be about 5 hp. To calculate this, I took the rolling resistance force times the target speed of 6 in/s (P=FV). Assuming that we use four electric motors, one for each wheel, I found a per-motor horsepower requirement of about 1.25 hp. For large wheels of about 3 ft diameter at the speed of 6 in/s, the wheel rotation rate is about 2.5 RPM. This yields a large torque requirement of around 3700 N-m or 2700 ft-lb. I'm having trouble finding the right combination of DC motor and gearbox to fit this application. Question How can I size a DC motor for a high torque, low speed application? The torque load per motor is about 3700 N-m and the nominal rotation speed is about 2.5 RPM, yielding a power requirement of about 1.25 hp per motor. I can find DC motors rated for this power, but definitely not for this torque requirement or rotational speed. Would I need a special gearbox? How do industry people normally solve this problem? Answer: First, make sure you know all load cases. You need to know what torque you need at what RPM. Then, find a motor + gearbox combination, and check their rated torque, RPM and power. Considerations here are: cost, efficiency, type of control/driving and, in this case especially, the power supply. I can imagine that on a lunar base you might not have your 60 Hz 120 V AC? Just a quick example of a process you'll need to iteratively go through.
If the start-up torque is not huge compared to the steady-state power/torque requirements, and (let's make the wild guess) you'll have a DC system on the lunar base, and the application requires frequent starting, stopping and positioning, you could have a look at servo systems with a bolt-on planetary gearbox. Say a 5000 RPM motor and a 2.5 RPM load require a 1:2000 gear reduction. Split that into a 1:40 worm gear and a 1:50 planetary gearbox on the motor. Check torques, check other assumptions. Rinse and repeat. By the way, are you sure about the 0.35 rolling friction coefficient? It seems high for rolling.
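The iteration above can be sketched numerically. The 60% overall gear-train efficiency is my assumption (worm stages in particular are lossy), not a number from the answer:

```python
# Reduction needed: 5000 RPM motor driving a 2.5 RPM wheel.
motor_rpm, load_rpm = 5000.0, 2.5
ratio = motor_rpm / load_rpm            # 2000:1 total reduction
worm, planetary = 40.0, 50.0            # candidate split of the reduction
assert worm * planetary == ratio

load_torque = 3700.0                    # N*m required at the wheel
gear_eff = 0.6                          # assumed overall gear-train efficiency
motor_torque = load_torque / (ratio * gear_eff)
print(round(motor_torque, 2))           # -> 3.08 N*m at the motor shaft
```

A ~3 N·m continuous torque at a few thousand RPM is well within ordinary servo-motor territory, which is the point of pushing the reduction into the gearbox.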
{ "domain": "engineering.stackexchange", "id": 4975, "tags": "motors, torque, electric-vehicles" }
Bash script to clear files not matching a given extension
Question: Using LaTeX creates a bunch of auxiliary files like foo.aux, foo.log, etc. Sometimes, I want to create a clean run of my document. I usually manually execute this in the terminal as $ ls foo* foo.tex foo.aux foo.log foo.bbl foo-blx.bib $ rm foo{.aux,.log,.bbl,-blx.bib} This works fine. However, it is error-prone and I don't want to accidentally erase my .tex file. So I added this function to my ~/.bashrc: # Delete all non-TeX files # invoke as: cleantex foo # where: foo.tex, foo.aux, foo.log, etc. are files in directory cleantex () { if [ -n "$1" ]; then name=$(basename -s ".tex" "$1") rm -f $name{.!(*tex),-blx.bib} fi } My question is about the key line rm -f $name{.!(*tex),-blx.bib} that actually executes the script. Is this line well-written? What might I improve here? Answer: My real concerns with the script are: cleantex path/to/file.tex will not delete files in that path, but in the current directory. you should check the actual .tex file exists before you delete all the things around it. the brace-expansion is unnecessarily complicated...... especially when combined with the extended glob !(*tex). I would manually resolve the brace-expansion so that there is only one complicated operation on that line. I would use actual glob-expansion and only delete existing files.... and not use the -f option on rm (which does more than just suppress the error message if files do not exist....)
{ "domain": "codereview.stackexchange", "id": 6808, "tags": "bash, cli" }
Coefficient Scaling of IIR filter to obtain unity gain response
Question: I am designing a 2nd-order IIR digital filter. My transfer function coefficients are b = [0 1.209e09] a = [9.2175 -2.6952 1.0000] sys=tf(b,a,0.1,'Variable','z^-1') bode(sys) I have a couple of questions: How do I get unity scaling, as a formula, so that I can always multiply it with the 'sys' transfer function? I have usually seen coefficients that are less than 1; is there a way I can make them so? My coefficients are calculated from an adaptive filter optimization algorithm, so they keep changing; is there a way I can keep the filter-to-filter coefficient changes to a minimum? Answer: It's important to specify at which frequency you want unity gain. But assuming you mean DC ($\omega=0$), because that filter has a low pass characteristic, the DC gain of an IIR filter is given by $$G_{DC}=\frac{\sum_kb[k]}{\sum_ka[k]}\tag{1}$$ It's also common to normalize the denominator coefficients such that $a[0]=1$. In your example that would give a = [1.00000 -0.29240 0.10849] and b = [0.00000 131163547.59967] Finally, normalizing by the DC gain $(1)$ will give you a filter with unity gain at DC. This leaves the denominator coefficients unchanged, and the new normalized numerator coefficients are given by b = [0.00000 0.81609]
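The two normalization steps can be sketched numerically with plain NumPy (array names are mine):

```python
import numpy as np

b = np.array([0.0, 1.209e9])
a = np.array([9.2175, -2.6952, 1.0])

a_n = a / a[0]                 # enforce a[0] == 1
b_n = b / a[0]
g_dc = b_n.sum() / a_n.sum()   # gain at z = 1, i.e. formula (1)
b_unity = b_n / g_dc           # numerator rescaled for unity DC gain

print(np.round(a_n, 5))        # [ 1.      -0.2924   0.10849]
print(np.round(b_unity, 5))    # [ 0.       0.81609]
```

The resulting values match the answer's a = [1.00000 -0.29240 0.10849] and b = [0.00000 0.81609], and by construction sum(b_unity)/sum(a_n) = 1.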
{ "domain": "dsp.stackexchange", "id": 9032, "tags": "filters, discrete-signals, infinite-impulse-response, digital-filters" }
Particle in a cylindrical well: role of $\varphi$?
Question: Look at a particle in a cylindrical well of radius $R$ and length $L$ with zero potential $V=0$ inside and $V=+\infty$ on the boundaries and outside the well. The time-independent Schrödinger Equation can be written as: $$\nabla^{2} \psi=-k^{2}\psi\tag{1}$$ where $k^{2}=\frac{2mE}{\hbar^2}$. The geometry demands cylindrical coordinates, which yields from (1): $$\frac{\psi_{\rho}}{\rho}+\psi_{\rho\rho}+\frac{\psi_{\varphi \varphi}}{\rho^{2}}+\psi_{zz}=-k^{2}\psi$$ for a wave function $\psi(\rho,\varphi,z)$, with boundary conditions: $$\psi(\rho,\varphi,0)=0\\ \psi(\rho,\varphi,L)=0\\ \psi(R,\varphi,z)=0$$ Using the separation of variables Ansatz: $$\psi(\rho,\varphi,z)=R(\rho)P(\varphi)Z(z),$$ this solves quite easily. For $P(\varphi)$ and noting that $\varphi$ is an angle, I get: $$P(\varphi)=c_{1}\cos m\varphi$$ with $m=0,1,2,3...$. $c_{1}$ is a normalisation constant that works out as: $$c_1=\frac{1}{\sqrt{\pi}},$$ except for $m=0$, then: $$c_1=\frac{1}{\sqrt{2\pi}}$$ For $Z$ and $R$ I get, respectively, a sine function and a Bessel of the first kind $J$. So this is quite uneventful, really. Then I tried a little 'check my work' and found this page on wave functions in cylindrical wells: https://arxiv.org/abs/1205.3444 Slightly surprisingly, this page states the TISE as: $$\frac{\psi_{\rho}}{\rho}+\psi_{\rho\rho}+\psi_{zz}=-k^{2}\psi,$$ with no $\varphi$ in sight. My first thought was, "Ah, symmetry!" but the angle $\theta$ does play a role in the wave functions of hydrogenic atoms, and their potentials are symmetrical too. Then I thought the presence of a radial potential may necessitate the angle, but the page I linked to also treats the case for a potential $V=\frac{\alpha}{r}$ (scroll down a little) and still no $\varphi$ is mentioned. I also solved the case of a circular potential well a while back and had to include the angle. So my question is: in the cylindrical case, does one have to include $P(\varphi)$ in the wave functions? And if not, why not?
Answer: That paper mentioned in the question (arXiv:1205.3444) is, frankly speaking, very low quality. It appears to be a conference contribution from a student who was working on programming Maple to solve the SchrΓΆdinger equation analytically. However, there are two issues with the work. The first, which you have identified, is that the paper misses many solutions. The second, of less importance but still worth mentioning, is that the problem of solving the kind of boundary value problems analytically through a language like Maple was already solved at least a decade before this paper was posted on the arXiv. So you are correct that the paper misses the dependence of the wave function on the azimuthal angle $\phi$. There should indeed be an angular function, although your analysis is not quite complete in this respect either. Using separation of variables, you should actually find for each integer $m\geq 0$, a general angular solution $$P(\phi)=c_{1}\cos m\phi+c_{2}\sin m\phi.$$ Actually, it is usually more convenient to consider all integers $m$, positive or negative, with $$P(\phi)=\frac{1}{\sqrt{2\pi}}e^{im\phi}.$$ This form is more convenient, because no special care needs to be taken for $m=0$; and moreover, in this formulation $m\hbar$ is the eigenvalue of the angular momentum component $L_{z}$. Without the end cap boundary conditions at $z=0$ and $z=L$, the $z$-dependence of the wave function could be expressed either with trigonometric functions $$Z(z)=d_{1}\cos(k_{3}z)+d_{2}\sin(k_{3}z)$$ (for $k_{3}\geq 0$), or complex exponentials $$Z(z)=e^{ik_{3}z}$$ where any real $k_{3}$ is allowed and $\hbar k_{3}$ is the eigenvalue of the momentum component $p_{z}$. [In this case, the normalization of $Z(z)$ depends on what convention is being used to normalize continuum states, so I have not attempted to normalize the state here.] 
With the end caps, only the $$\sqrt{\frac{2}{L}}\sin(k_{3}z)$$ wave function is acceptable, and this makes $k_{3}$ subject to a quantization condition (just like $m$). The solution must have $k_{3}=\frac{n\pi}{L}$, and the wave functions are no longer eigenstates of $p_{z}$. The separated equations for $R(\rho)$, $P(\phi)$ and $Z(z)$ are all second order; this is why there are two solutions for $P$ and $Z$ for each $m^{2}$ or $k_{3}^{2}$, respectively. There should also be two solutions to the radial equation you found for $R(\rho)$; the two solutions are proportional to $J_{m}(k_{1}\rho)$ and $N_{m}(k_{1}\rho)$, where $k_{1}^{2}+k_{3}^{2}=k^{2}$, in terms of Bessel Functions of the first and second kind. However, the Bessel Function of the second kind (or Neumann function) is not normalizable in the vicinity of the origin, so it does not give an acceptable solution. This leaves you with just the $J_{m}$ solution that you found. Since the wave function must vanish at $\rho=R$, $k_{1}$ is quantized as well. $k_{1}R$ has to be one of the $z_{m,j}$, where $z_{m,j}$ is the $j$th root $J_{m}(z_{m,j})=0$ of the Bessel Function $J_{m}$. This makes the quantized energy level of the system $$E_{j,m,n}=\frac{\hbar^{2}}{2m}\left(\frac{z_{m,j}^{2}}{R^{2}}+\frac{n^{2}\pi^{2}}{L^{2}}\right)$$ (where the $m$ in the prefactor is the particle's mass, not the angular quantum number), which depends on all three quantum numbers $j$, $m$, and $n$.
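A small numeric sketch of the final energy formula, using `scipy.special.jn_zeros` for the Bessel roots. The natural units and the specific values of R and L are arbitrary choices of mine; the mass is written out explicitly to avoid clashing with the quantum number $m$:

```python
import numpy as np
from scipy.special import jn_zeros

hbar = 1.0; mass = 1.0; R = 1.0; L = 2.0    # assumed natural units and geometry

def energy(j, m, n):
    """E_{j,m,n} for the infinite cylindrical well: radial Bessel-zero
    term plus the particle-in-a-box term along z."""
    z_mj = jn_zeros(abs(m), j)[-1]          # j-th positive zero of J_m
    return (hbar**2 / (2 * mass)) * ((z_mj / R)**2 + (n * np.pi / L)**2)

# Ground state: first zero of J_0 (~2.405) and n = 1.
print(energy(1, 0, 1))                      # ~4.125 in these units
```

Listing the first few levels this way also shows the degeneracy pattern: every $m \ne 0$ level appears twice (once for $\pm m$), matching the two independent angular solutions per $m^2$.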
{ "domain": "physics.stackexchange", "id": 73279, "tags": "quantum-mechanics, angular-momentum, symmetry, schroedinger-equation, coordinate-systems" }
Make huge view in C# readable and maintainable
Question: I have an application that is using MVC5 with C# and Razor Engine. This view displays a huge table: <table class="table table-bordered" id="pendingTable"> <thead> <tr> <th>Actions</th> <th>Order Status</th> <th>Order Details</th> <th>Order Date</th> <th>Expected Delivery Date</th> <th>Ordered By</th> <th>Employee ID</th> <th>Employee Status</th> <th>Employee Name</th> <th>Employee Type</th> <th>Scope</th> <th>Delivery Address</th> <th>Comment</th> </tr> </thead> <tbody> <tr> <td></td> <!-- At least 8 TDs each one with hundreds of lines of code--> </tr> </tbody> </table> Now, the problem is that everything is in one huge file. As you can guess, this file is a nightmare to update and maintain. I am familiar with the C# region directive, but I can't find anything similar for Views. I also know about partial views, but I have the strong impression from discussions in Stack Overflow that these should only be used when I have a piece of code in a View that is re-usable, which is not the case. What is the best way to deal with Views this large? Answer: I would use a Partial View for this code. Anytime I have a menu for a page I put it in a partial view even though it may not be used on another page. <tr>@Html.Partial("_Actions")</tr> <tr>@Html.Partial("_OrderStatus")</tr> <tr>@Html.Partial("_OrderDetails")</tr> <tr>@Html.Partial("_OrderDate")</tr> inside the Partial view placed in Views/Shared @model ActionsViewModel <td>Column 1</td> <td>Column 2</td> <td>Column 3</td> <td>Column 4</td> <td>Column 5</td> etc... Good Luck!
{ "domain": "codereview.stackexchange", "id": 11531, "tags": "c#, .net, mvc, razor" }
Calculating engine starter’s energy use
Question: During a discussion on start-and-stop vehicle technology some bloke began pushing the point that re-starting the car uses stored energy from the battery, which needs to be replenished by increased fuel usage once the engine is moving. Well, this is obvious; the question is that of the significance of such additional usage, especially in comparison with savings from reduced idle. Common sense tells me that this is splitting hairs: if ICE efficiency outlays put all accessory usage at 2 to 3% of overall energy consumption, which includes AC and all electricals, then it can’t be that much. Moreover, this dissertation explores idle-reduction technologies for long-haul commercial trucks, and one of the systems exampled is a battery-powered one (p. 14) that stores enough energy to power AC or heater overnight, and takes about six hours of charge while driving. The author acknowledges increased fuel consumption due to the necessity of a higher-amperage alternator, although no specific numbers are provided. However, the mere existence of such a commercial application on the market leads me to believe that such added load is still better than idling. But, in the interest of science, I need some hard numbers (besides, the fellow just wouldn’t go away). I have some ideas on what to consider, but I’m not well versed in electrical and mechanical engineering, so I do not think I can account for most major factors. Energy use of the starter motor can be calculated by using the amount of current used per start (which would be 2 to 3 seconds) by the motor itself and the solenoid that engages the starter gear onto the flywheel. Both current and power demand can be found in the starter’s specifications, but I am not sure as to how reliable those numbers would be in the real-world application.
Then there is fuel consumption of the start itself, which is estimated at 10 to 15 seconds worth of idling (and the Florida chapter of ASME even calculated six seconds for a 6-cylinder engine in a simple, non-rigorous field experiment, but the original link is broken). Now, how to calculate the increased consumption due to the charging of the depleted battery from the starter motor itself, and, additionally, to account for all the accessories that were running while the engine was stopped? Is it a simple matter of using the same number calculated for the energy usage of the starter motor, and arriving at it by calculating the amount of fuel needed to produce that much extra energy given the losses in the engine itself and in the charging circuit? And, ultimately, how significant are those considerations in the bigger picture? Answer: First, consider the case with negligible auxiliary loads (no air conditioning). For a Civic-sized engine (1.8 liters), this US DOE worksheet estimates about 0.3 US gallon/hour fuel consumption at idle. Here is a conservative starter calculation: The Civic starter is rated at 1.0 kW (83A$\times$12V). A 3 second start therefore uses 3 kJ. Assume an additional 25% in battery internal dissipation that must be replaced. As you note, this energy must be replenished by the ICE (internal combustion engine). Max ICE efficiency is only 30%. The incremental efficiency, which is what matters for this small additional load, is no doubt higher, but I’ll use 25% as a conservative estimate. Alternator efficiency is not great either; I’ll use a conservative 50%. With these values, it requires 3.0 kJ$\times$(1.25 / 0.25 / 0.5) = 30 kJ worth of fuel to recharge the battery (Note the overall charging efficiency is only 10%!). Now, the energy density of gasoline is 120 MJ per US gallon (42.4 MJ/kg), so the amount of fuel required to recharge the battery, including all the inefficiencies, is 30 kJ $\div$ 120 MJ/gal = 0.00025 US gallon.
So, the “crossover” idle time in this case, above which it is more efficient to stop and restart, is 0.00025 gal $\div$ 0.3 gal/hour $\approxeq$ 8.3 $\times10^{-4}$ hours, or about 3 seconds. Now suppose an air conditioner (PDF) is consuming 1 kW of electrical power. With the engine running, the A/C requires (via the alternator) an additional engine fuel consumption equivalent to 1 kW / 0.5 / 0.25 = 8 kW, or 29 MJ/hour, or 0.24 gal/hour of gasoline. For a duration $t$, the total fuel consumption with the engine running is (0.3 + 0.24)$t$ = 0.54$t$ (with $t$ in hours). With the engine stopped, the A/C still consumes 1 kW, or 3.6 MJ per hour. With that low 10% charging efficiency, it requires 36 MJ worth of fuel (or 0.3 gal) to recharge an hour’s worth of A/C operation. Adding in the starter contribution, the total fuel requirement is 0.00025 + 0.3$t$ (with $t$ again in hours). Equating these two new fuel requirements, the crossover time with the A/C on increases, but only to about 4 seconds. Although the battery charging efficiency is low, the waste of the idling fuel consumption dominates the calculation. Note that I don’t have a reference for the 25% battery re-charge inefficiency. Unfortunately, that’s an important number when running an A/C, since it reduces the advantage of shutting off the engine. At some high load level (in the neighborhood of 4 kW) that disadvantage outweighs the advantage of turning off the engine. Further (experimental) data to confirm the above estimates can be found here: http://www.iwilltry.org/b/projects/how-many-seconds-of-idling-is-equivalent-to-starting-your-engine/ In my case it consumes about the same amount of fuel as 7 seconds of idling. However, the additional fuel consumption observed seems almost entirely due to a faster idle speed setting for the first 20 seconds after starting. Any good driver would start moving within 1-2 seconds after starting, which would effectively eliminate the fast idle losses.
If you can begin extracting useful work from your engine within 1 second after starting the engine then it appears starting the engine consumes fuel equivalent to about 0.2 seconds of idling.
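The no-A/C crossover arithmetic above can be checked directly, using the same assumed efficiencies as in the answer:

```python
# Reproduce the answer's starter-vs-idle arithmetic.
idle_rate = 0.3            # gal/hour at idle (US DOE estimate)
start_energy = 3.0e3       # J: 1.0 kW starter running for 3 s
batt_loss = 1.25           # assumed 25% battery internal dissipation
ice_eff = 0.25             # assumed incremental engine efficiency
alt_eff = 0.50             # assumed alternator efficiency
fuel_density = 120e6       # J per US gallon of gasoline

fuel_per_start = start_energy * batt_loss / (ice_eff * alt_eff) / fuel_density
crossover_s = fuel_per_start / idle_rate * 3600
print(fuel_per_start)      # -> 0.00025 US gallon per start
print(round(crossover_s, 1))   # -> 3.0 seconds of idling
```

The overall charging efficiency works out to ice_eff × alt_eff / batt_loss = 10%, which is why 3 kJ at the starter costs 30 kJ of fuel.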
{ "domain": "physics.stackexchange", "id": 27470, "tags": "energy, electricity" }
Why is color change between quarks without consequence?
Question: As always I will preface this question with the fact that I only have a high school education, so I may be overlooking something, or unaware of something that is inherent to the question. That being said, as I understand it, in QCD color represents the strength of the strong force on a given quark. Additionally a baryon must be color neutral in composition. However, quarks of different color behave identically. Why do these quarks behave the same if they are affected by the strong force differently? I've been thinking about this a bit, and the only thing that I can think of is that as gluons are particles that carry the strong force the difference is what gluons the quarks are able to interact with, and since color is conserved the baryon is able to persist through these color changes. Am I on the right track with this or is there something else going on here? Answer: Quarks can be R, G or B in color, and color charges come paired (color with anticolor), but there is no gauge-invariant meaning of the color. Gluons do not have a definite color; they are in a superposition of colors, and quarks temporarily have a color for a given Feynman diagram, which is the only way we know how to display their interactions. https://en.wikipedia.org/wiki/Color_charge Now when one quark interacts with another (of different color), the mediator is a gluon, so that the quarks swap color. There are basically two reasons why the quark color swap is confusing: even three quarks (which is not the real picture of a proton or neutron) are constantly exchanging gluons and so swapping their color; the three-valence-quark model is not realistic, because in reality the protons and neutrons consist of a sea of quarks, antiquarks and gluons constantly interacting, appearing and disappearing, and only if you take the remaining valence quarks will you see, theoretically, three quarks.
The quarks and antiquarks are constantly interacting, swapping color, and it is the proton or neutron itself that needs to be color neutral. You say that the swapping of color does not have any consequences on the quark itself, which is correct, since they are in confinement. We have never experimentally seen quarks outside of confinement, so how would we be able to tell whether quarks of different color behave identically? You are saying that the difference is that the different-color quarks interact via different-color gluons; correct, but these are all in superpositions of states. So the answer to your question is that the different-color quarks might behave differently, but we would need to see that experimentally, outside confinement.
{ "domain": "physics.stackexchange", "id": 61876, "tags": "particle-physics, quantum-chromodynamics, quarks, strong-force, gluons" }
Aren't context lengths for transformers an artificial restriction?
Question: Let's focus on the case of decoder-only transformers, where I am using algorithm 10 from "Formal Algorithms for Transformers" by Mary Phuong and Marcus Hutter as a reference (https://i.stack.imgur.com/ZWC9o.png). Previously I thought that the maximum context length is very much built into the transformer, for example as the dimension of a layer of weights. After studying this algorithm I am surprised because it seems more like an artificial restriction! Because this is a topic of active research I would like to know if I am misunderstanding something. The way I see it, if I had access to the weights of GPT-2 right now, I could almost execute it on any number of tokens I like right away (if I had sufficient memory to compute this). The MHA algorithm is just carried out over a larger sequence. There are only two issues, which are points where the context window $l_{max}$ appears: The positional encoding has only $l_{max}$ positions. During training the weights were never optimized to attend over more than $l_{max}$ tokens. But these issues seem rather easy to resolve: Use some positional encoding which has infinitely many positions. The first encoding vectors are nicely spread around while the later ones are closer to each other, due to the nature of fitting an infinite sequence of vectors into more or less a unit ball/sphere. But this is not an issue: it is natural for the positional encoding to become more vague as the token is further and further in the past. Train 50% on context lengths around $l_{max}$, 25% on context lengths around $2 l_{max}$, 12.5% on context lengths around $4 l_{max}$ and so on... I can imagine the following issues appearing: A) Memory becomes larger than what is available on a single "unit" (GPU?) so you have to start moving data back and forth to execute your transformer, which is just terribly inefficient during training and also inference, so it is really pointless to train on such large context windows.
B) Perhaps the transformer just doesn't learn well with this procedure for some reason. These issues are still rather "soft" issues though. As far as I can tell, I could use the architecture of GPT-2 (with a modified positional encoding) to create 1000000-token context window LLMs, in theory. So, am I missing something? Thank you! Answer: Yes, you have the right idea. There's been a lot of work recently regarding extending the context length of existing models, mostly looking at the Llama family of models. You should check out this post/paper on LongChat by the team behind the Vicuna models (LMSYS). They extend the pretrained Llama models (with 2048-token contexts) to be able to handle 16k-token contexts. You may see some parallels between the techniques they discuss and what you mentioned in your post. (1) They use Rotary Embeddings, which have an infinite number of positions. However, instead of extending past the original length, they instead interpolate between the existing values. (2) They also finetune the model on long-context sequences (long conversational data). The resulting performance on different context lengths: a plot from the LMSYS org shows the LongChat model (described above) versus accuracy on a retrieval task, compared to some other models. This technique is elaborated on in this paper: "Extending Context Window of Large Language Models via Position Interpolation" Position Interpolation can easily enable very long context windows (e.g. 32768), requiring only fine-tuning for 1000 steps on the Pile (Gao et al., 2020) to achieve a good quality. The cost of fine-tuning is negligible compared to the pre-training costs. This confirms our hypothesis that it is relatively easy for the models to adapt to interpolated position encodings. This work suggests that your second suggestion may even be overkill -- you don't need to do too much additional tuning to allow models to use the larger context.
They also discuss why one should interpolate between embeddings rather than extrapolate beyond the original context length: namely, that it can result in catastrophic explosions in attention scores when the position differences are too large.
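A minimal sketch of the interpolation idea (my own illustration, not the actual LongChat code): positions beyond the trained range are rescaled to fit inside it before the rotary angles are computed, so no angle ever falls outside the range seen in pretraining.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Rotation angles theta_{p,i} = p * base^(-2i/dim), as used by
    rotary position embeddings (one angle per position/frequency pair)."""
    freqs = base ** (-np.arange(0, dim, 2) / dim)
    return np.outer(positions, freqs)

def interpolated_positions(seq_len, trained_len):
    """Position Interpolation: scale position indices down so that a
    longer sequence reuses the position range seen during pretraining."""
    scale = min(1.0, trained_len / seq_len)
    return np.arange(seq_len) * scale

pos = interpolated_positions(seq_len=8192, trained_len=2048)
angles = rope_angles(pos, dim=64)
print(pos.max())        # -> 2047.75: never exceeds the pretrained range
print(angles.shape)     # -> (8192, 32)
```

Extrapolation would instead feed positions up to 8191 into `rope_angles`, producing angle combinations the attention weights were never trained on, which is the failure mode the paper describes.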
{ "domain": "ai.stackexchange", "id": 3991, "tags": "natural-language-processing, transformer, attention, gpt, decoder" }
Fastest FIFO with macros for use embedded devices
Question: UPDATE (a year later) Since this post the code has turned into a small library called fifofast and is hosted on GitHub under the MIT License. This note is meant for anyone who stumbled across this post in search of a fast fifo. Thanks again for your help guys! Objective I'm totally aware that there are dozens of FIFO implementations, but on small, low-power microcontrollers every processor cycle and byte of RAM can matter significantly. Because it is used very commonly, I've written a small, macro-based library with the following requirements: generic implementation (any data type, any size, any amount of FIFOs) fast (function calls from ISRs take very long) low RAM overhead user-friendly: all macros should work just like normal C functions (or at least throw errors at compile time) Details on the implementation On embedded systems typically all memory is allocated statically at compile time. With macros this can be exploited to avoid storing any pointers or array lengths, as the compiler knows where each data element is. With _fff_create(_type, _depth, _id) an anonymous structure is created and accessed by its identifier _id. Although each of these structures may have an arbitrary amount of data storage included, this information can be extracted at compile time with the macro _sizeof_array(_id.data). The _depth may only be 2^n to avoid the slow % operator. Naming conventions are as follows: /* Example: Description: _name() Function-like macro. Unlike preprocessor macros these macros are intended to be used like any other C function. The _ differentiates it from a normal function and hints at possible subtle problems in use. _name(id) Any normally written parameter of a function-like macro can be any C expression of the correct type (such as uint8_t, uint16_t, ...) _name(_id) Any parameter starting with _ is taken literally and must follow exact guidelines. See the description of the macro in question for details.
type_t _tmp All local variables of a function-like macro are marked with _ to prevent conflicts with substituted C names for the parameter. DO NOT pass any C identifier starting with _ _return Is used within compound statements as a label for the return value. */ Why I'm here (Problems, Questions, ...) This kind of code is rather uncommon to see, and although I've worked with the pre-processor before, I've likely missed a few pitfalls. Some things in particular are IMHO not ideal and I'd like your thoughts on them: I'm not quite sure what's the best way to access such a FIFO from various .c files. How would you do this? The _function _fff_read_safe(_id) seems to be slightly bloated. When empty I want to pass the last element instead of a 0 to prevent unexpected behavior when storing typedef'd structs. Currently there is a second _function for longer FIFOs _fff_create_deep(_type, _depth, _id) but I'd rather select uint8_t or uint16_t automatically, depending on _depth. The GCC extension typeof would be great for this, but only returns int or long. What is the best way to limit the _depth parameter to 2^n values? both _fff_read macros require compound statements, which forces use of a GCC compiler. Is there any way around this? Are there any Best Practices with macros or their naming conventions I'm missing? System Info The library is meant primarily for use on 8-bit AVR microcontrollers (<=20MHz, typically <=2kB RAM, <=32kB Flash) and has been written in AVR Studio 7 and compiled with GCC 4.9.2. The code has been tested with the built-in simulator and seems to be working fine. Code #ifndef __GNUC__ #error fifofast.h requires "compound statements" and "typeof" offered by a GNU C/ GCC compiler! #endif #ifndef __OPTIMIZE__ #pragma message "fifofast.h is intended to be compiled with optimization and will run VERY SLOWLY without!"
#endif #define _sizeof_array(_array) (sizeof(_array)/sizeof(_array[0])) // all function-like macros are suitable for ANY fifo, independent of data type or size. // creates and initializes an anonymous _fifofast_t structure. // _id: C conform identifier // _type: any C type except pointers and structs. To store pointers or structs use typedef first // _depth: maximum amount of elements, which can be stored in the FIFO. The value must be 2^n, // n=2..8 for the normal version, n=2..16 for the "_deep" version. // The actual depth is always 1 count less than specified, as one element // is required to distinguish a "full" and "empty" state #define _fff_create(_type, _depth, _id) \ struct {uint8_t read; uint8_t write; _type data[_depth];} _id = {0,0,{}} #define _fff_create_deep(_type, _depth, _id) \ struct {uint16_t read; uint16_t write; _type data[_depth];} _id = {0,0,{}} // returns the maximum amount of data elements which can be stored in the fifo // The returned value is always 1 count less than specified in _fifofast_create(...) as it is // required to distinguish a "full" and "empty" state // _id: C conform identifier #define _fff_mask(_id) (_sizeof_array(_id.data)-1)
// _id: C conform identifier
// index: Offset from the first element in the buffer
#define _fff_data(_id, index) _id.data[(_id.read+(index))&_fff_mask(_id)]

// returns the current fill level of the fifo (the amount of elements that can be read)
// _id: C conform identifier
#define _fff_used(_id) ((_id.write-_id.read)&_fff_mask(_id))

// returns the current free space of the fifo (the amount of elements that can be written).
// Function is slightly slower than _fifofast_used()
// _id: C conform identifier
#define _fff_free(_id) ((_id.read-_id.write-1)&_fff_mask(_id))

// returns true (any value != 0) if the fifo is full and (might) be faster than !_fifofast_free()
//#define _fff_is_full(_id) (_id.write == ((_id.read-1)&_fff_mask(_id))

// returns true (any value != 0) if the fifo is empty and is slightly faster than !_fifofast_used()
#define _fff_is_empty(_id) (_id.write == _id.read)

// flushes/ clears buffer completely
// _id: C conform identifier
#define _fff_flush(_id) do{_id.read=0; _id.write=0;} while (0)

// removes a certain number of elements.
// MUST be ONLY used when enough data to remove is in the buffer! This function is especially
// useful after data has been used by _fff_data(...)
// _id: C conform identifier
// amount: Amount of elements which will be removed
#define _fff_remove(_id, amount) (_id.read = (_id.read+(amount))&_fff_mask(_id))

// removes a certain number of elements, or fewer if not enough elements are available
// _id: C conform identifier
// amount: Amount of elements which will be removed
#define _fff_remove_safe(_id, amount) \
do{ \
    if(_fff_used(_id) >= (amount)) \
        _fff_remove(_id, (amount)); \
    else \
        _fff_flush(_id); \
}while(0)

// returns the next element from the fifo and removes it from the memory.
// MUST be used only when fifo is NOT empty, useful for repeated reads
#define _fff_read(_id) \
({ \
    typeof(_id.data[0]) _return = _id.data[_id.read]; \
    _id.read = (_id.read+1)&_fff_mask(_id); \
    _return; \
})

// returns the next element from the fifo and removes it from the memory.
// If no elements are stored in the fifo, the last one is repeated.
// _id: C conform identifier
#define _fff_read_safe(_id) \
({ \
    typeof(_id.data[0]) _return; \
    if(_fff_is_empty(_id)) \
        _return = _id.data[(_id.read-1)&_fff_mask(_id)]; \
    else \
    { \
        _return = _id.data[_id.read]; \
        _id.read = (_id.read+1)&_fff_mask(_id); \
    } \
    _return; \
})

// adds an element to the fifo.
// MUST be used only when fifo is NOT full, useful for repeated writes
#define _fff_write(_id, newdata) \
do{ \
    _id.data[(_id.write)&_fff_mask(_id)] = (newdata); \
    _id.write = (_id.write+1)&_fff_mask(_id); \
}while(0)

// adds an element to the fifo.
// If fifo is full, the element will be dismissed instead
#define _fff_write_safe(_id, newdata) \
do{ \
    typeof(_id.write) _next = (_id.write+1)&_fff_mask(_id); \
    if(_next != _id.read) \
    { \
        _id.data[_next] = (newdata); \
        _id.write = _next; \
    } \
}while(0)

EDIT: This code is meant to be a library for future projects and thus I can't show any "real" application code. To test the macros I just tossed them into a simple (and pointless) main.c file. All macros compile without any warnings.
Variables are declared volatile to read them out in simulation (run step-by-step).

#include "Data/fifofast.h"

int main(void)
{
    // create a fifo with 1024 elements of type uint8_t
    _fff_create_deep(uint8_t, 1024, dbg_fifo);

    // Check used/ free amount
    volatile uint16_t dbg_used = 0;
    volatile uint16_t dbg_free = 0;
    dbg_used = _fff_used(dbg_fifo);
    dbg_free = _fff_free(dbg_fifo);

    // write some data to it (_safe version not required, we know there is enough space)
    _fff_write(dbg_fifo, 17);
    _fff_write(dbg_fifo, 19);
    _fff_write(dbg_fifo, 23);

    // Check used/ free amount again
    dbg_used = _fff_used(dbg_fifo);
    dbg_free = _fff_free(dbg_fifo);

    volatile uint8_t dbg1 = 0;
    volatile uint8_t dbg2 = 0;
    volatile uint8_t dbg3 = 0;

    // array-like access without removing elements
    dbg1 = _fff_data(dbg_fifo, 0);
    dbg2 = _fff_data(dbg_fifo, 1);
    dbg3 = _fff_data(dbg_fifo, 2);

    // remove first element
    _fff_remove(dbg_fifo, 1);

    // read 3 times, last read fails and returns previous element
    dbg1 = _fff_read(dbg_fifo);
    dbg2 = _fff_read_safe(dbg_fifo);
    dbg3 = _fff_read_safe(dbg_fifo);

    // Check used/ free amount a last time
    dbg_used = _fff_used(dbg_fifo);
    dbg_free = _fff_free(dbg_fifo);

    while(1);
}

Answer: Since you've mentioned that this code is meant to be used in ISRs, I have to note the absence of synchronization. _fff_write_safe should inform the caller on whether the write was successful. To enforce the power-of-two depth, pass a power instead of a depth, e.g.:

#define _fff_create(_type, _depth, _id) \
struct {uint8_t read; uint8_t write; _type data[1 << _depth];} _id = {0,0,{}}
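Not part of the original post or answer: the index arithmetic behind _fff_used/_fff_free (power-of-two depth, bitwise mask, one slot sacrificed to tell "full" from "empty") can be sanity-checked outside the preprocessor. A minimal Python model of the same scheme, mirroring the test sequence above:

```python
# Python model (illustration only, not from the post) of the fifofast index
# arithmetic: depth is a power of two, indices are wrapped with a bitwise
# mask, and one slot is sacrificed so "full" and "empty" are distinguishable.
class ModelFifo:
    def __init__(self, depth):
        assert depth & (depth - 1) == 0 and depth >= 4, "depth must be 2^n"
        self.mask = depth - 1                   # _fff_mask
        self.read = 0
        self.write = 0
        self.data = [None] * depth

    def used(self):                             # _fff_used
        return (self.write - self.read) & self.mask

    def free(self):                             # _fff_free
        return (self.read - self.write - 1) & self.mask

    def push(self, value):                      # _fff_write (unchecked)
        self.data[self.write & self.mask] = value
        self.write = (self.write + 1) & self.mask

    def pop(self):                              # _fff_read (unchecked)
        value = self.data[self.read]
        self.read = (self.read + 1) & self.mask
        return value

f = ModelFifo(8)                 # capacity is 8 - 1 = 7 elements
for v in (17, 19, 23):
    f.push(v)
used_after_writes = f.used()     # 3
free_after_writes = f.free()     # 7 - 3 = 4
first_out = f.pop()              # 17
```

In Python the bitwise AND of a negative number with the mask behaves like the unsigned wrap-around of uint8_t, which is why the (read - write - 1) & mask expression of _fff_free carries over unchanged.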
{ "domain": "codereview.stackexchange", "id": 30373, "tags": "c, reinventing-the-wheel, memory-management, generics, macros" }
Show image when in viewport of browser window
Question: I am working on a web project and am trying to create a way to detect when an image is in the viewport of the browser window and then load/show the image. Since there are some rather large images being resized for use as thumbnails, I am trying to take into consideration those that might view it on a mobile device and reduce page loading time. I notice that many sites with galleries are doing the same thing. I created a little script using JavaScript to detect when an <img /> tag is in view, then take its data attribute value (the URL of the image) and set its <img src=""/> attribute to it. Is this an efficient way of doing it, or should I just use PHP to create thumbnails? I have included snippets of the HTML and JavaScript I am using.

HTML of the image

<img data-image-src="images/c9c191f226c643eabcce6debfe76049d.png" src="" alt="A nify Image" />

JavaScript with jQuery

//Show images when in viewpoint
$(window).on('scroll load',function(){
    //Window width & height
    var viewpointWidth = $(window).width(),
        viewpointHeight = $(window).height(),
        //Document Top pos & Left pos
        documentScrollTop = $(document).scrollTop(),
        documentScrollLeft = $(document).scrollLeft(),
        //Document Positions
        minTop = documentScrollTop,
        maxTop = documentScrollTop + viewpointHeight,
        minLeft = documentScrollLeft,
        maxLeft = documentScrollLeft + viewpointWidth;

    //Loop for each image
    $('.page-content img').each(function(){
        $eleImage = $(this),
        elementOffset = $eleImage.offset();

        if((elementOffset.top > minTop && elementOffset.top < maxTop) &&
           (elementOffset.left > minLeft && elementOffset.left < maxLeft))
        {
            $(this).attr('src',$(this).data('image-src'));
        }
    });
});

Answer:

Correctness

You need to handle resize events also. Yes, you can add it like:

$(window).on('scroll load resize', ...
You are only checking for the visibility of the top/left corner: if you load a large image, scroll right, and then scroll down, you will see the right edges of the ALT text of the images, but because their top/left corner is not in the viewport, those images will not load. You might want to check if any point of the img box is visible.

Browser Behavior

Make sure you have <!DOCTYPE HTML> on your document; missing the doctype declaration will cause all images to load. (For Firefox and Chrome; don't know about others.) You might want to mark images as processed, by adding a class or another data attribute, and not fiddle with the src attributes of the images once they are loaded.

Javascript related

Because you miss a var before $eleImage and elementOffset, they pollute the global namespace. You copy the value of $(this) here: $eleImage = $(this), but you keep using $(this) below.
{ "domain": "codereview.stackexchange", "id": 9863, "tags": "javascript, optimization, jquery, image" }
Charge present at the centroid of equilateral triangle
Question: Three equal charges +Q each are placed on the vertices of an equilateral triangle. A charge +q is initially placed at the centre of the triangle. If this charge (+q) is slightly displaced towards a vertex and left free, what will be its motion? My working: Initially it's in equilibrium, and when it's displaced towards one vertex, the charges far from it will exert a lesser repulsive force and the vertex charge will exert more force since it went closer to it. Hence it goes back to its original position and does SHM. Am I correct? Answer: I suspect that Earnshaw's theorem might make such motion unstable. You may consider a similar problem: a positive charge between two positive charges. All your reasoning seems applicable there, but it is intuitively obvious that the motion will be unstable.
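A quick numeric check (not part of the original exchange) helps locate the instability. With unit charges at the vertices of an equilateral triangle of unit circumradius, the net in-plane Coulomb force on the test charge after a small displacement toward a vertex can be computed directly; the distances and charges below are arbitrary illustration values:

```python
import math

# Numeric check (not in the original posts): unit charges at the vertices of an
# equilateral triangle with unit circumradius; k, Q and q are set to 1, so the
# forces below are in units of k*Q*q.
vertices = [(math.cos(a), math.sin(a))
            for a in (math.pi / 2,
                      math.pi / 2 + 2 * math.pi / 3,
                      math.pi / 2 + 4 * math.pi / 3)]

def net_force(px, py):
    """Net Coulomb force on a positive test charge at (px, py)."""
    fx = fy = 0.0
    for vx, vy in vertices:
        dx, dy = px - vx, py - vy          # repulsion points away from the vertex
        r3 = (dx * dx + dy * dy) ** 1.5
        fx += dx / r3
        fy += dy / r3
    return fx, fy

fx0, fy0 = net_force(0.0, 0.0)     # centroid: equilibrium, force ~ 0
fx, fy = net_force(0.0, 0.05)      # displaced slightly toward the top vertex (0, 1)
# fy < 0: for this in-plane displacement the net force points back to the centroid.
```

So the questioner's restoring-force argument does hold for small in-plane displacements; the answer's Earnshaw objection is still decisive because the equilibrium cannot be stable in all three dimensions at once, and the instability lies along the axis perpendicular to the triangle's plane.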
{ "domain": "physics.stackexchange", "id": 21014, "tags": "homework-and-exercises, electrostatics" }
Why is the contact force of the rigid-body robot model ks*xr?
Question: In the paper "On Dynamic Models of Robot Force Control", which is written by Steven D. Eppinger and Warren P. Seering, there is a model, the 'rigid-body robot model'. The model is like below. In the paper, the model is described as: We model the robot as a mass with a damper to ground. The mass $m$ represents the effective moving mass of the arm. The viscous damper $b_r$ is chosen to give the appropriate rigid body mode to the unattached robot. The sensor has stiffness $k_s$ and damping $b_s$. The workpiece is shown as a "ground state". The robot actuator is represented by the input force $F$, and the state variable $x_r$ measures the position of the robot mass. In the paper, the contact force, which is the force across the sensor, is $F_c=k_s x_r$. But because the damping source from $b_s$ is also part of the sensor, I think the contact force is $F_c =k_s x_r + b_s \dot{x}_r$. However, because this paper and model are cited by many studies, I think I'm wrong. So, why is $F_c=k_s x_r$? PS: I wrote $f_s = k_s x_r$ previously. I modified that to $F_c = k_s x_r$. Answer: Edited: You should have put the subscript $c$ on variable $F$, because they define $F_c$ as the contact force the system is trying to maintain. Since achieving and servoing around that desired contact force is the goal, the velocity at the target state will be zero. Therefore $F_c = k_sx_r$. You'll find the damping terms in the dynamic model; they are in the denominator of the transfer function. Note: I see you added the subscript now. Thanks.
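A small numeric sketch (mine, not from the paper) of the answer's point: simulating a one-mass model of this form to steady state shows the damper's share of the sensor force vanishing as the velocity settles to zero, leaving the contact force equal to k_s*x_r. All constants below are arbitrary illustration values:

```python
# Numeric sketch (not from the paper): one-mass rigid-body model with damper
# b_r to ground and a sensor spring k_s plus damper b_s to the workpiece:
#   m * x'' = F - b_r * x' - (k_s * x + b_s * x')
# All constants are arbitrary illustration values.
m, b_r, k_s, b_s, F = 1.0, 5.0, 100.0, 2.0, 10.0

x, v, dt = 0.0, 0.0, 1e-4
for _ in range(500_000):                      # 50 s of simulated time: well settled
    a = (F - b_r * v - (k_s * x + b_s * v)) / m
    v += a * dt                               # semi-implicit Euler step
    x += v * dt

contact_force = k_s * x + b_s * v             # full sensor force, damper included
# At the force-regulation set point v ~ 0, so contact_force ~ k_s * x ~ F.
```

The questioner's expression k_s*x_r + b_s*x_r' is the general sensor force; the two expressions coincide exactly at the regulated state the answer describes.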
{ "domain": "robotics.stackexchange", "id": 1786, "tags": "force-sensor" }
How to derive moment of inertia for a thin disc?
Question: Wikipedia's article on moment of inertia provides examples of how to derive the moments of inertia for different bodies: $$I_P=\iiint_Q p(x,y,z)||\mathbf{r}||^2 dV$$ It gives an example of how to derive the moment of inertia for a thin rod: $$I_{C,rod}=\iiint_Q p(x,y,z)||\mathbf{r}||^2dV=\int_{-\frac{l}{2}}^{\frac{l}{2}} px^2s\, dx=\left.ps\frac{x^3}{3}\right\rvert_{-\frac{l}{2}}^\frac{l}{2}=\frac{ps}{3}\left(\frac{l^3}{8}+\frac{l^3}{8}\right)=\frac{ml^2}{12}$$ That was pretty straightforward, but I don't understand why you need a double integral for the example that follows, the derivation for a thin disk: $$I_{C,disk}=\iiint_Q p(x,y,z)||\mathbf{r}||^2dV=\int_0^{2\pi}\int_0^Rpr^2sr\, dr\, d\theta=2{\pi}ps\frac{R^4}{4}=\frac{1}{2}mR^2$$ I know it is because $V({r,\theta})$ is a function of two variables, but I still don't understand where those limits come from. Could someone please explain? Answer: When you calculate the moment of inertia for a thin rod, you reduce the problem to one dimension because you're only considering its length. A disk is a 2D object, hence you need those two spatial coordinates to characterize it, the radial and angular components. Their origin becomes straightforward once you learn about the "polar coordinate system", which I encourage you to investigate. As for the limits of integration, think of a straight line of length R with an edge at the origin. Then rotate that line 2π radians around the origin, that is, a full 360° turn. This will draw the disk/circle you are trying to parameterize.
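The polar double integral is easy to verify numerically (this check is not part of the original answer): sum p·r²·s over area elements r·dr·dθ and compare with ½mR². Since the integrand does not depend on θ, the θ integral simply contributes a factor of 2π:

```python
import math

# Numeric check (not in the original answer) of I = (1/2) m R^2 for a uniform
# thin disk; p = density, s = thickness, R = radius (arbitrary values).
p, s, R = 1.0, 1.0, 2.0
n = 2000
dr = R / n

I = 0.0
for i in range(n):
    r = (i + 0.5) * dr           # midpoint rule in r
    I += p * s * r**2 * r * dr   # integrand p*r^2*s times area element r*dr (per unit theta)
I *= 2 * math.pi                 # theta integral: the integrand is theta-independent

m = p * s * math.pi * R**2       # disk mass
I_exact = 0.5 * m * R**2         # equals 2*pi*p*s*R^4/4, as in the question
```

The grid sum reproduces 2πps(R⁴/4) to high accuracy, confirming both the limits (r from 0 to R, θ from 0 to 2π) and the extra factor of r in the area element.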
{ "domain": "physics.stackexchange", "id": 97954, "tags": "homework-and-exercises, rotational-dynamics, moment-of-inertia" }
Deriving an alternative expression for the Van der Waals equation using given parameters
Question: The question is to find out an alternative expression for the Van der Waals equation of corresponding states for 1 mole of gas using the parameters given as $$A=\frac{P}{P_c},\quad B=\frac{V}{V_c},\quad C=\frac{T}{T_c} ,$$ where $P_c,T_c,V_c$ represent the critical pressure, temperature and volume respectively. I am aware of the simple form of the Van der Waals equation, which is $$\left(P+\frac{a}{V^2}\right)\left(V-b\right)=RT .$$ The values of $P_c, T_c \text{ and } V_c$ can also be written down in terms of the Van der Waals constants, but I have no idea how to proceed after this. I tried to substitute the known values but it made things complicated. Is there any other approach to this problem? Answer: $\def\p{\partial} \require{enclose}$Start with the van der Waals equation $(1)$. $$\left(P + a\frac{n^2}{V^2}\right)\left(V - bn\right) = nRT\tag1$$ Note that you have a slightly different form of the equation, namely you have used molar volume $V_m$. Divide both sides of $(1)$ by $n$, use $V_m = V/n$ and $n^2/V^2 = 1/V_m^2$ to reach your variant.

Values for critical parameters in terms of $a, b, n, R$

The critical point $(P_c, T_c, V_c)$ is, by definition, a critical point (in the mathematical sense) and an inflection point of the isotherm.$^{[a]}$ Therefore we arrive at two equations; $$\left(\frac{\p P}{\p V}\right)_{T\ =\ T_c} = 0,\tag2$$ $$\left(\frac{\p^2 P}{\p V^2}\right)_{T\ =\ T_c} = 0.\tag{2'}$$ Equation $(2)$ is a sufficient condition for a critical point (mathematically, a derivative's non-existence works too). Equation $(2')$ gives a necessary condition for an inflection point. So our first step should be to find the equation $P = P(T, V)$. Just use equation $(1)$. $$P = \frac{nRT}{V-nb} - a\frac{n^2}{V^2}\tag{3}.$$ Apply conditions $(2)$ and $(2')$ on $(3)$.
$$\left(\frac{\p P}{\p V}\right)_T = -\frac{nRT}{(V - bn)^2} + 2a\frac{n^2}{V^3} \overset{(2)}{=} 0\tag4$$ $$\left(\frac{\p^2 P}{\p V^2}\right)_T = \frac{2nRT}{(V-bn)^3} - 6a\frac{n^2}{V^4} \overset{(2')}{=} 0\tag{4'}$$ Both $(4)$ and $(4')$ are equal to zero. Thus we may multiply either equation by non-matching real numbers and still maintain equality: $(4) = (4')$. We will use this to our advantage by multiplying $(4)$ by $2/(V-bn)$ and $(4')$ by $-1$. $$-\frac{2nRT}{(V - nb)^3} + 4a\frac{n^2}{V^3(V- nb)} \overbrace{=}^{(4)\ =\ (4')} -\frac{2nRT}{(V - nb)^3} + 6a\frac{n^2}{V^4}\tag5$$ $$\frac{2}{V - nb} = \frac{3}{V} \implies 3V - 3nb = 2V \implies \enclose{box}[mathcolor="green"]{V_c = 3nb}\tag{5'}$$ Plug result $(5')$ into equation $(4)$ and multiply through by $nb^2$. $$-\frac{nRT}{(3nb - nb)^2} + 2a\frac{n^2}{27n^3b^3} = 0 \implies -\frac{RT}{4} + \frac{2a}{27b} = 0 \implies \enclose{box}[mathcolor="red"]{T_c = \frac{8a}{27bR}}\tag6$$ The outcomes $(5')$ and $(6)$ are to be placed into equation $(3)$. $$P_c = \frac{nR}{2nb}\cdot\frac{8a}{27bR} - a\frac{n^2}{9n^2b^2} = \frac{4a}{27b^2} - \frac{3a}{27b^2} \implies \enclose{box}[mathcolor="blue"]{P_c = \frac{a}{27b^2}}\tag7$$

Values for $a, b, R$ in terms of critical parameters

$$b \overset{(5')}{=} \frac{1}{3n}V_c\tag{8a}$$ $$a \overset{(7)}{=} 27b^2P_c \overset{(8a)}{=} \frac{3P_cV_c^2}{n^2}\tag{8b}$$ $$R \overset{(6)}{=} \frac{8a}{27bT_c} \overset{(8a, 8b)}{=} \frac{8P_cV_c}{3nT_c}\tag{8c}$$

Reduced van der Waals equation

Replace the values of $a, b, R$ from equations $(8a)$-$(8c)$ into the original equation $(1)$. Immediately after this substitution, multiply both sides by $3/(P_cV_c)$. You should have the result $$\left[\frac{P}{P_c} + 3\left(\frac{V_c}{V}\right)^2\right]\left(3\frac{V}{V_c} - 1\right) = 8\frac{T}{T_c}.\tag9$$ Define $P/P_c = \pi, V/V_c = \varphi, T/T_c = \tau$.
Finally, enjoy the reduced van der Waals equation $$\enclose{box}[mathcolor="orange"]{\left(\pi + \frac{3}{\varphi^2}\right)(3\varphi - 1) = 8\tau}.\tag{10}$$ What does equation $(10)$ imply by there being only reduced quantities $\pi, \varphi, \tau$? Hint: the theorem of corresponding states. $[a]$ While I believe this to be technically true, I also think it deserves a question of its own. The fact that a phase critical point physically always has to (?) be an inflection point confuses me a bit, so it is best left to another answerer.
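The boxed results and the reduced equation can be checked numerically (this check is not part of the original answer; the constants below are roughly water's van der Waals values in SI units and serve only as illustration inputs):

```python
# Numeric check (not part of the original answer) of the boxed results
# V_c = 3nb, T_c = 8a/(27 b R), P_c = a/(27 b^2), and of the reduced
# equation (10). a and b are roughly water's vdW constants in SI units.
a, b, n, R = 0.5536, 3.049e-5, 1.0, 8.314

def P(V, T):
    # van der Waals equation in the form of equation (3)
    return n * R * T / (V - n * b) - a * n**2 / V**2

V_c = 3 * n * b
T_c = 8 * a / (27 * b * R)
P_c = a / (27 * b**2)

# Conditions (2) and (2') via central finite differences at the critical point:
h = V_c * 1e-5
dP = (P(V_c + h, T_c) - P(V_c - h, T_c)) / (2 * h)
d2P = (P(V_c + h, T_c) - 2 * P(V_c, T_c) + P(V_c - h, T_c)) / h**2
d2P_scale = 2 * n * R * T_c / (V_c - n * b) ** 3   # size of one term of (4')

# Reduced equation (10) should hold at any state, not just the critical one:
V, T = 2.5 * V_c, 1.3 * T_c
pi_, phi, tau = P(V, T) / P_c, V / V_c, T / T_c
residual = (pi_ + 3 / phi**2) * (3 * phi - 1) - 8 * tau
```

With these inputs T_c comes out near 647 K and P_c near 22 MPa, close to water's measured critical point, which is a nice consistency check on the algebra.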
{ "domain": "chemistry.stackexchange", "id": 7547, "tags": "physical-chemistry, gas-laws" }
Setting zoom for layer disappearance in Leaflet
Question: I have got 2 working codes, where one is a bit longer and another a bit shorter. Is there some way to bind all these layers as one group? The first one is:

map.on('zoomend', function() {
    if (map.getZoom() < 6){
        map.removeLayer(job);
    }else{
        map.addLayer(job);
    }
    if (map.getZoom() < 7){
        map.removeLayer(job2);
    }else{
        map.addLayer(job2);
    }
    if (map.getZoom() < 8){
        map.removeLayer(job3);
    }else{
        map.addLayer(job3);
    }
});

and another one

map.on('zoomend', function() {
    if (map.getZoom() < 8){
        map.removeLayer(job);
    }
    if (map.getZoom() < 8){
        map.removeLayer(job2);
    }
    if (map.getZoom() < 8){
        map.removeLayer(job3);
    }
    else {
        map.addLayer(job);
        map.addLayer(job2);
        map.addLayer(job3);
    }
});

I would like to avoid this kind of repetition, as I am going to have a multitude of layers in the future.

Answer: In your case a better way is using a predefined mapping (a Map) holding the relations between zoom limits and the respective jobs. Then just iterate through the mapping and perform the add/remove operation:

const zoomJobsMap = new Map([
    [6, job],
    [7, job2],
    [8, job3],
    ...
]);

map.on('zoomend', function() {
    let zoom = map.getZoom(); // get zoom value once
    for (var [zoomLimit, jobObj] of zoomJobsMap) {
        (zoom < zoomLimit) ? map.removeLayer(jobObj) : map.addLayer(jobObj);
    }
});
{ "domain": "codereview.stackexchange", "id": 36434, "tags": "javascript, leaflet" }
How are data from tiltmeters used to monitor volcanic activity?
Question: I've just learned in this answer that tiltmeters (which I assume measure changes in tilt) are used near active volcanos. (Saw a mention in item 4. here also.) What kind of geological (volcanological?) information can be learned from tilt data? Can it also offer predictive benefits, trigger evacuation warnings? Answer: Tiltmeters placed on the flanks of an active volcano can measure changes in the slope angle of the flank. These changes are often inferred to be related to changes in the shape and activity of the magma chamber. This article provides a quick and dirty example of how these instruments can be used, as well as their limitations. In this case, the tiltmeters were able to capture the deflation of an erupting magma chamber, but not the precursory inflation. This leads to some inferences about the nature of the magma's ascent through the subsurface. The article also references studies that were able to detect magma chamber inflation as well. These kinds of observations definitely add to the body of evidence volcanologists use to assess risk of an eruption, but it's far from a slam dunk. Here are some examples from that paper, both the long-term trend, and a higher time resolution display during a small eruption: Fig. 2. Tiltmeter data from V-net (KRMV, KRHV) and Hi-net (MKNH, SUKH) stations during the period from January 23 to February 2, 2011. From here. Fig. 10. Enlarged view of tiltmeter data during the period from 0:00 to 21:00 on January 26, 2011. Vertical line shows the occurrence time of the small eruption at 7:31 and the beginning of the sub-Plinian eruption, when the amplitude of the seismic tremor increased at 14:49. From here.
{ "domain": "earthscience.stackexchange", "id": 1069, "tags": "volcanology, measurements, volcanic-hazard, field-measurements" }
How to calculate the branching fraction for a decay with a given Cabibbo angle
Question: Let's suppose I know the branching fraction of a decay. How can I calculate the branching fraction in a different final state with the Cabibbo angle? For example: $$D^+ \rightarrow \bar K^0 + e^+ + \nu_e$$ The branching fraction would be 9%. What would be the branching fraction in a $\pi^0$ final state if the corresponding Cabibbo angle is 0.2? Answer: This is actually a nice problem, steering you to do these estimates in your mind, Fermi-style, aggressively ignoring the inessential, a crucial skill. You realize that, compared to the mass of the mother particle, the D, ~1.87 GeV, the masses of the two mesons, K ~ 0.5 GeV and π ~ 0.14 GeV, are similar, so no phase-space disparities should matter, and the ratio of the respective branching fractions should only be controlled by the squares of the respective amplitudes! The respective amplitudes are the same c-decay diagrams, one to d, and the other to s, whose ratio goes as $$ |V_{cd}/V_{cs}|\approx \tan\theta_c\approx 0.2, $$ consequently $$ \frac{\Gamma(D\to \pi e \nu)}{\Gamma(D\to \bar K e \nu)}\sim 0.04, $$ amounting to a branching fraction for the π mode of $\sim 4\cdot 10^{-3}$, quite close to what you see in the PDG.
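The arithmetic can be written out in two lines (using only the numbers quoted above; the PDG comparison is the answer's, not computed here):

```python
# Back-of-the-envelope check using only the numbers quoted in the Q&A above.
br_K = 0.09            # branching fraction for D+ -> K0bar e+ nu, given in the question
tan_theta_c = 0.2      # Cabibbo angle given in the question

rate_ratio = tan_theta_c ** 2     # |V_cd / V_cs|^2 ~ tan^2(theta_c) = 0.04
br_pi = br_K * rate_ratio         # ~ 3.6e-3, i.e. the ~4e-3 quoted in the answer
```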
{ "domain": "physics.stackexchange", "id": 80267, "tags": "homework-and-exercises, particle-physics, standard-model" }
Spring extensions
Question: A certain spring has attached to it a mass of 25 units; on increasing the load by 6 units it extends 2.5 cm. a) What is the time of oscillation under the original load? b) What will be the velocity and acceleration when it is midway between its lowest and mean positions if it is loaded as at first, pulled down 5 cm and let go? I am guessing the spring is hung vertically. Do I work out the modulus of elasticity first? I guess the equation $\omega=\sqrt{k/m}$ will be helpful. Answer: Let's use SI units, so mass is in kilograms and length is in meters. Let the original mass be $m_1 = 25 \,\mathrm{kg}$ and the final mass be $m_2 = m_1 +\Delta m$ where $\Delta m = 6 \,\mathrm{kg}$. Let the original extension of the spring be $x_1$ and the final extension be $x_2 = x_1 + \Delta x$ where $\Delta x = 2.5\, \mathrm{cm}$. We can determine the spring constant by noting that in each case, the gravitational force of the mass on the spring must balance the spring force pulling up; this gives two equations: $m_1g = k(x_1 - x_\mathrm{eq})$ $m_2g = k(x_2 - x_\mathrm{eq})$ so that subtracting the first equation from the second gives $(m_2 - m_1)g = k(x_2-x_1)$ which, given the notation above, gives $\Delta m\, g = k \Delta x$ so the spring constant is $ k = \frac{\Delta m}{\Delta x} g = \frac{6\,\mathrm{kg}}{2.5\,\mathrm{cm}} g$ With this in hand, you can indeed compute the angular frequency for the original load $m_1$ using the formula you wrote. This in turn will give you the period, which is what I assume part (a) is asking for. I'll let you think about part (b) since this sounds like a homework question to me. Let me know of any typos. Cheers!
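Plugging in numbers completes the calculation beyond where the answer deliberately stops; g = 9.8 m/s² and the standard SHM relations v = ω√(A² − x²) and a = ω²x are my assumptions here, not part of the original answer:

```python
import math

# Numbers follow the answer's SI reading of the problem; g is an assumption.
g = 9.8                  # m/s^2
m1 = 25.0                # original load, kg
dm = 6.0                 # added load, kg
dx = 0.025               # extra extension, m (2.5 cm)

k = dm * g / dx                  # spring constant, ~2352 N/m
omega = math.sqrt(k / m1)        # angular frequency for the original load
T = 2 * math.pi / omega          # part (a): period, ~0.65 s

# Part (b): released from amplitude A = 5 cm below equilibrium; "midway
# between its lowest and mean positions" means displacement x = A/2.
A = 0.05
x_mid = A / 2
v = omega * math.sqrt(A**2 - x_mid**2)   # speed, ~0.42 m/s
acc = omega**2 * x_mid                   # acceleration magnitude, ~2.35 m/s^2
```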
{ "domain": "physics.stackexchange", "id": 6191, "tags": "homework-and-exercises, newtonian-mechanics, spring" }
Impossible dilemma about Shapiro delay and momentum conservation
Question: I'm having an impossible dilemma, with no right answer. Starting with the Shapiro delay from general relativity, I can't understand how the conservation of momentum applies to it. The problem is like this: If I launch a photon from point $A$ to point $B$, the photon goes with light speed $c$ and gets there in time $t = (B-A)/c$. If I have a mass somewhere on the photon's trajectory, the photon arrives later at point $B$; or, it arrives at a closer point $C$ in the same time as before. So, for the case with the mass near the photon's trajectory, we have time $t = (C-A)/c$. (Here I've taken for simplicity a mass with a tunnel in it for the photon.) Now, if the photon arrives later at point $B$ because of the Shapiro delay, the momentum of the photon is delayed. This delay should be apparent in the movement of the mass. Meaning, the mass should move. The question is: which direction will the mass move when the photon passes through it, forward (+), backward (-), or will it stay put? If the mass moves forward, momentum is conserved, but the photon has antigravity. If the mass moves backward, the photon is gravitationally attractive but the conservation of momentum is violated. If the mass stays put, we violate the conservation of momentum and we have an antigravity photon. It seems to me like an impossible situation. Any thoughts? Note: the problem originated after the EM drive hype. Answer: In proper distance and proper time you will see the Shapiro delay, and the distance is also larger by the same amount. The speed of light remains c. That explains your concern. If we use coordinate time and distance, yes, the coordinate speed of light can be different, but the coordinate speed of light means nothing invariant; it's just an artifact of the coordinate system selected. You can see this on pg. 10 of the following, which explains and calculates the effect.
http://onlyspacetime.com/Chapter_2.pdf You can also see the graphic for the measured Shapiro delay (1 or 2 pages before), where you can see the delay is largest for the smaller impact parameter, i.e., when Venus was in (almost) superior conjunction with the Sun (that was the Shapiro measurement, Earth to Venus and back; of course they couldn't drill a hole through the Sun like you did). The delay is real, due to gravity (so, for instance, it is much bigger than the effect of the curved path if the light is skimming the surface of the Sun or not too far away), even apart from a curved path. So there is no problem with momentum either, or energy. Added as Edit from Comments below: If the photon/planet system emits gravitational waves in the postulated motion of the photon through the center of the planet (big if, probably no, see below), then a small amount of momentum and energy is carried away by the GW waves. Those will be lost by the two-body system, and the oscillation of the photon through the planet will decrease in distance. No mystery if so; the energy and momentum they lose will go to infinity in GW waves. And if the photon loses energy and momentum (if one, also the other), the frequency will decrease, and it won't oscillate as far. But it'll be too small to measure, maybe 40 or so orders of magnitude smaller than the EM wave from an electron doing the same. But, worse, the photon-planet system has no quadrupole moment, only dipole (the angular momentum, and of course monopole, the mass), and thus cannot radiate. General relativity doesn't allow monopole or dipole radiation; equivalently, a spin-two field can only radiate spin, or angular momentum, greater than 2-1 = 1, i.e. spin 2's, i.e. gravitons (for electromagnetic waves of spin 1, it has to be angular momentum greater than zero, i.e. spin 1's, i.e. photons). The only way would be if the photon were orbiting the planet; I think that is a quadrupole moment, and I think it's changing. Then they radiate.
A changing quadrupole moment will radiate. But it still would be too small to measure; we did barely detect real gravitational radiation from 2 black holes. The two black holes were orbiting each other, and that was a changing quadrupole moment. From a total of 65 (or so) solar masses we got about 3 solar masses of radiation when the gravitational field was 'super' strong near their horizons. In your case it would be undetectable (when further away, the effect was too small and we were not able to detect it).
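For scale, the round-trip delay in Shapiro's Earth-Venus radar experiment can be estimated from the standard weak-field formula Δt ≈ (4GM/c³)·ln(4·r_E·r_V/b²). This estimate is mine, not part of the original answer; the formula is quoted to logarithmic accuracy only, and the impact parameter b is taken as the solar radius, i.e. a ray grazing the Sun:

```python
import math

# Order-of-magnitude estimate (not in the original answer) of the round-trip
# Shapiro delay for radar bounced off Venus near superior conjunction.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
c = 2.998e8            # speed of light, m/s
r_E = 1.496e11         # Earth-Sun distance, m
r_V = 1.082e11         # Venus-Sun distance, m
b = 6.96e8             # impact parameter ~ solar radius (grazing ray), m

dt = (4 * G * M_sun / c**3) * math.log(4 * r_E * r_V / b**2)
# dt comes out to roughly 2e-4 s: a couple hundred microseconds, the scale
# of the delay Shapiro actually measured.
```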
{ "domain": "physics.stackexchange", "id": 36836, "tags": "general-relativity, photons, conservation-laws" }
What is the definition of physically meaningful?
Question: I asked a question recently where I wanted to know whether it was physically meaningful to talk about the arrow of time in other universes. Although many people apparently have an intuitive notion of what is meant by "physically meaningful", the only real context I have found is in terms of "physically meaningful observables", where in a quantum mechanical context we talk about the possible eigenvalues associated with eigenstates of an operator: $H |\psi\rangle = \lambda|\psi\rangle$ The observables being possible outcomes of experiments or other operations on the state vector. So in the context of the arrow of time, where there are claims of the possibility of a vector time, one would like to think of some possible operation that would produce a set of possible values for that time. So the first question one might ask is, "what operation can I perform to observe the vector values of the arrow of time?" It would seem that there is some difference between defining such an operation and the possibility of it actually being executable. So one might think of a thing as being physically meaningful if one can define a set of executable operations (executability being determined by some set of constraints). However, I suspect there is a better definition, or a better way to conceptualize what is meant by "physically meaningful", and I would like to read what those are. Answer: This talk begins to cover some of this question in detail at around 6:30. I haven't completed viewing it, but it is moving further into the rabbit hole. Found also here. Just the first of many notes. I find the analogy of N=4 SYM to the harmonic oscillator very fascinating. Also the implication that physical reality is about the structure of scattering amplitudes is fantastic. This entry was given a down check; however, I was intending to work iteratively. I will continue to update periodically as I move forward.
2/1/2011 Update one: To answer this question we must define three key components first: 1) Physical System 2) Data 3) Interconnection. A physical system is a type of system that deals with analyzing the physical universe. Physical is having to do with physics, which has to do with analyzing nature, which has to do with phenomena, which has to do with "any event that is observable, however commonplace it might be, even if it requires the use of instrumentation to observe, record, or compile data concerning it." Data is the "lowest level of abstraction from which information and then knowledge are derived", suggesting that data is the most fundamental element associated with an observation. Starting with a set of data, nothing about a physical system can be said using that data unless the data and physical system have some interconnection. With those three components (physical system, data, and interconnection), we might be able to explore a definition of physically meaningful. To be continued, despite the trolls...
{ "domain": "physics.stackexchange", "id": 380, "tags": "soft-question, definition" }
How to Implement a Physically Human-Controlled Robotic Arm?
Question: I'm a complete beginner in robotics other than FIRST competitions at my school. I wanted to make a robotic arm that works by a person physically moving the end-effector in 3D space, with the robot staying in place even when the person removes their hand. Similar to https://youtu.be/RLxIg69GWqA?t=465. The robot arm would also apply force feedback when it gets close to some predefined threshold. If I wanted to use servos (because I really need the angle data for force feedback), I would need a certain gear ratio for torque (there is no payload on the end-effector; the only loads would be from all the other axes' connections), and this would make it hard for someone to move the end-effector fast and efficiently. To offset this problem, I have found online that I can use force control on the end-effector. The problem with this is that most 6-axis force/torque sensors are seriously expensive and way too industrial for my needs. I also have no idea how to implement this kind of sensor on the arm. Are there any simpler ways to accomplish the task above? Any changes/recommendations to how I'm going about this? I plan for this to be a 5 DoF arm with a 16" reach and, as I said before, no end-effector payload. Answer: Fully integrated, highly calibrated force-torque sensors are complex and priced accordingly. However, you do not need that level of precision for your application. If you're willing to do a little bit of electronics yourself, you can make your own force-feedback circuits using strain gauges. If they're mounted on or near your end effector, you can measure the strain on the link/mount and then use that as your force feedback. It will give you direct feedback on the forces applied to the end effector, and you can align them with your primary axes to potentially simplify the math for providing the feedback to your controller. Alternatively, you can possibly have one at each joint to provide the necessary feedback.
{ "domain": "robotics.stackexchange", "id": 2568, "tags": "robotic-arm, servomotor, force, force-sensor" }
Where is the link to ros::init() code API?
Question: I'm reading this tutorial: http://wiki.ros.org/roscpp/Overview/Initialization%20and%20Shutdown In this page, there is a link to "ros::init() code API", but the link is broken. Using Google I've found another page where "ros::init()" is not a link: http://docs.ros.org/indigo/api/roscpp/html/ Could someone help me or write the correct link in the tutorial? Thanks. Originally posted by Fabio on ROS Answers with karma: 23 on 2015-06-02 Post score: 0 Answer: This is related to a doxygen regression: https://bugzilla.gnome.org/show_bug.cgi?id=744762 Doxygen fails to extract the ros::init() functions and therefore API documentation itself can't cross reference it and the wiki can't link to them. Until doxygen releases a fix for that we can't do anything about it. Please look directly at the source code (https://github.com/ros/ros_comm/blob/bd10f702dcf4c447cc421ed3fd4838f9173afe28/clients/roscpp/include/ros/init.h#L68-L103) to see the available signatures and their documentation. Originally posted by Dirk Thomas with karma: 16276 on 2015-06-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 21825, "tags": "ros, tutorial, roscpp, wiki" }
How can I optimize my code that uses nested for loops?
Question: I am working to solve a problem where I need to determine if a Point lies on a line connecting two other Points. For example, if I have Point a, b, c, I want to determine if c is on the line segment connecting a and b. In my code, I have a point, me (a in the example) and two lists of points, hits (b in the example) and reachable (c in the example). For each point in hits, I want to determine if there is any point in reachable that is on the line segment that connects me and the point in hits. If there is a point on that line segment, then numHits needs to be decremented. Here is my code: Point me; //Point a from the example above ArrayList<Point> hits = new ArrayList<>(); //a list of Point b's from the example above ArrayList<Point> reachable = new ArrayList<>(); //a list of point c's from the example above for(Point hit : hits) { for(Point p : reachable) { if(!hit.equals(p) && !me.equals(p)) { //find the equation of a line from me to hit if(hit.x - me.x == 0) { //if the line has an undefined slope... if the line is vertical if( (((p.y <= hit.y) && (p.y >= me.y)) || ((p.y >= hit.y) && (p.y <= me.y))) && p.x - me.x == 0) { //if there is any occupied point on that line in between me and the hit, that point blocks the hit numHits--; break; } } else { //create a line from me to the hit... 
if there is any occupied point on that line in between me and the hit, that point blocks the hit double deltaY = hit.y - me.y; double deltaX = hit.x - me.x; double m = deltaY / deltaX; //slope double b = me.y - (double)(m*me.x); //y intercept if((double) p.y == ((double)(m * p.x) + b)) { //if this point is on the same line if( ((p.x <= hit.x && p.x >= me.x) && (p.y <= hit.y && p.y >= me.y)) || ((p.x <= hit.x && p.x >= me.x) && (p.y >= hit.y && p.y <= me.y)) || ((p.x >= hit.x && p.x <= me.x) && (p.y >= hit.y && p.y <= me.y)) || ((p.x >= hit.x && p.x <= me.x) && (p.y <= hit.y && p.y >= me.y))) { //if the point is in between me and the hit numHits--; break; } } } } } } My code works to determine if there is any point in reachable between me and each point in hits; it just gets incredibly slow the larger hits and reachable get. For example, if hits has a size of 780,000 and reachable has a size of 1,500,000, the code takes a very long time to run. I was wondering how I may be able to optimize this to run more quickly. I'm not sure if the bottleneck lies in the loops themselves or in the code within the loops. Any help or optimization ideas are greatly appreciated. Thank you! Answer: It is normally good to break code into several methods. Everything that is inside your if(!hit.equals(p) && !me.equals(p)) should be moved to a separate method taking as input those 3 points a, b and c. When coding problems of a mathematical flavor it is often very useful to support your code with some kind of theory. In this case you might want to look up the mathematical terms "dot product" and "cross product". Once you understand those it becomes easy to implement your function. And remember to make several functions (dotProduct() and crossProduct()) so you can reuse code later. When it comes to performance you need to look at your algorithm. There is no need to test all pairs of points (hit, p). Maybe you should make a small method that computes the angle of the vector from me to p?
If the angles are the same they hit, otherwise they miss. So the question becomes: given a list of angles for points in hits and a list of angles for points in reachable, do they contain a common angle?
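The cross/dot-product test suggested above can be sketched as follows (Python for brevity; the integer arithmetic ports directly to Java, and the function names are mine, not from the original post). Working in integers also removes the exact floating-point comparison on the slope that the question's code depends on:

```python
def cross(ox, oy, ax, ay, bx, by):
    # z-component of the cross product of vectors OA and OB;
    # zero exactly when O, A and B are collinear
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

def dot(ox, oy, ax, ay, bx, by):
    # dot product of vectors OA and OB
    return (ax - ox) * (bx - ox) + (ay - oy) * (by - oy)

def on_segment(mx, my, hx, hy, px, py):
    """True if point p lies on the closed segment from me (m) to hit (h)."""
    if cross(mx, my, hx, hy, px, py) != 0:
        return False  # p is not even on the infinite line through m and h
    proj = dot(mx, my, hx, hy, px, py)  # projection of p along the segment
    return 0 <= proj <= dot(mx, my, hx, hy, hx, hy)  # i.e. between m and h
```

In the question's loops one would then just test on_segment(me.x, me.y, hit.x, hit.y, p.x, p.y) and break on the first blocker; the vertical-line special case disappears, since the cross product handles it uniformly.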
{ "domain": "codereview.stackexchange", "id": 38381, "tags": "java, performance, time-limit-exceeded, memory-optimization" }
Learning resources for computational plasma physics
Question: I'd like to try implementing a "simple" simulation of a nuclear fusion (http://en.wikipedia.org/wiki/Nuclear_fuel_cycle) process taking place inside an idealized reactor. Are there any good resources (books, courses, papers, blog posts etc.) covering this? Answer: This summer, as part of a summer program, I worked on the plasma simulation code PIConGPU, and my mentor recommended Plasma Physics via Computer Simulation; it is a comprehensive introduction to the field. It has all the details needed to write your first simulation. :)
{ "domain": "physics.stackexchange", "id": 37507, "tags": "nuclear-physics, resource-recommendations, plasma-physics, fusion" }
Language of a grammar
Question: What's the language of following grammar? $G: S \to S_1B$ $S_1 \to aS_1b$ $bB \to bbbB$ $aS_1b \to aa$ $B \to \lambda$ any hint or solution? Answer: Hint: Starting with $S\rightarrow S_1B$, do the $S_1$ derivation and then the $B$ part. Invoke $S_1\rightarrow aS_1b$ a certain number, $n$, times. What do you get? Then eliminate $S_1$ by using $aS_1b\rightarrow aa$. Now what do you have? Finally, use $bB\rightarrow bbbB$ some number, $m$ times (getting two more $b$s each time) and eventually erase the $B$. What's your final result? It should be fairly obvious, except for one special case.
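For what it's worth, here is how the hint plays out (a spoiler, so stop reading if you want to work it out yourself; the derivation is my own reading of the productions). Counting symbols: each use of $S_1 \to aS_1b$ adds one $a$ and one $b$; $aS_1b \to aa$ adds an $a$, removes a $b$ and eliminates $S_1$; each use of $bB \to bbbB$ adds two $b$s; $B \to \lambda$ erases $B$. So every terminal string has the form

$$a^{\,n+1}\,b^{\,n-1+2m}, \qquad n \ge 1,\ m \ge 0,$$

where $n$ counts uses of $S_1 \to aS_1b$ and $m$ counts uses of $bB \to bbbB$. The special case is $n = 1$: eliminating $S_1$ first gives $aaB$ with no $b$ next to $B$, which would suggest only $aa$; but applying $bB \to bbbB$ before eliminating $S_1$ (starting from $aS_1bB$) still yields $aab^{2m}$, so $m$ is in fact unrestricted for every $n \ge 1$.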
{ "domain": "cs.stackexchange", "id": 3192, "tags": "formal-languages, automata, formal-grammars, finite-automata" }
Lucas test in the presence of EWG like -CN
Question: Does the given compound give the Lucas test? My teacher says that due to the presence of a strong electron-withdrawing group the carbocation formed is unstable. But I say that if it undergoes an $\mathrm{S_N2}$ mechanism then an electron-withdrawing group should increase its reactivity. Why then does this compound not give the test? Answer: Lucas reagent follows the SN1 pathway. $\ce{ZnCl2}$ is a Lewis acid that tends to bond with $\ce{-OH}$ and form a partial positive charge, leading to the formation of a carbocation. According to the Hammond postulate, the stability of the transition state should allow successive formation of the product or intermediate. The stability of the positive charge induced on the carbon, if stabilised in the transition state, should therefore stabilise the carbocation. The rate-determining step is the formation of the carbocation. Since tertiary carbocations are the most stable, the carbocation is formed quite easily and the product formation happens quickly. Similarly, the primary and secondary alcohols need to form a primary and secondary carbocation, which is not stable. Hence the reaction is slow. Zinc forms a weaker bond with $\ce{-OH}$ and induces a partial charge on them, thus not forming the whole carbocation. So I think that in such cases where the carbocation is not stable, the reaction is slow, and may not have a very well-defined SN1 or SN2 classification, i.e. a truly borderline reaction may not occur, due to accompanying facets like solvation and ion pairs.
{ "domain": "chemistry.stackexchange", "id": 14464, "tags": "organic-chemistry, alcohols, halides, reactivity" }
second quantization - time dependent basis
Question: In second quantization the time-independent field operator can be expanded in an orthonormal basis: $$\hat{\Psi}(\mathbf{x}) = \sum\limits_{i}\hat{a}_{i}\ \phi_{i}(\mathbf{x})$$ The time evolution of the field operator is given by: $$\hat{\Psi}(\mathbf{x},t) = e^{it\hat{H}/\hbar}\hat{\Psi}(\mathbf{x})e^{-it\hat{H}/\hbar} = \sum\limits_{i}\hat{a}_{i}(t)\ \phi_{i}(\mathbf{x})$$ The time dependence is transferred to the operators. Can we instead have time-dependent basis states rather than operators, $$\hat{\Psi}(\mathbf{x},t) = \sum\limits_{i}\hat{a}_{i}\ \phi_{i}(\mathbf{x},t)$$ ? What would the Fock states look like? Answer: The answer is yes only if $\hat{H}$ is quadratic in the creation and annihilation operators (with suitable additional regularity assumptions). In other words, it is possible only if the map is a so-called Bogol'ubov transformation. There is in fact a theorem, called Shale's theorem (for bosons) and the Shale-Stinespring theorem (for fermions), giving the conditions for a quadratic operator to be the generator of a Bogol'ubov transformation. If $\hat{H}$ is more than quadratic, it is easy to see (heuristically at least) that it cannot preserve the linear form of the field operator, and so OP's last expression cannot be true.
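To make the linear form the answer refers to concrete (a sketch; the coefficients $u_{ij}, v_{ij}$ are my notation, not from the original posts): for quadratic $\hat{H}$ the Heisenberg evolution acts linearly on the ladder operators,

$$\hat{a}_i(t) = \sum_j \left[\, u_{ij}(t)\,\hat{a}_j + v_{ij}(t)\,\hat{a}_j^{\dagger} \,\right],$$

and in the particle-number-conserving case $v_{ij} = 0$ this can be absorbed into the basis functions,

$$\hat{\Psi}(\mathbf{x},t) = \sum_i \hat{a}_i(t)\,\phi_i(\mathbf{x}) = \sum_j \hat{a}_j\,\tilde{\phi}_j(\mathbf{x},t), \qquad \tilde{\phi}_j(\mathbf{x},t) = \sum_i u_{ij}(t)\,\phi_i(\mathbf{x}),$$

so Fock states keep their usual form, but built on the time-dependent orbitals $\tilde{\phi}_j$. When $v_{ij} \neq 0$ the $\hat{a}^{\dagger}$ term enters the field operator as well and this rewriting is no longer possible.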
{ "domain": "physics.stackexchange", "id": 28771, "tags": "quantum-mechanics, many-body" }
Is there any difference between using convolution and correlation for finding edges with Sobel?
Question: I know that Sobel is a filter for edge detection and we should use convolution to find edges, but is there any difference if we use correlation instead of convolution? I think Sobel tries to find a pixel which has different values from its surrounding pixels, and since the difference between correlation and convolution is just flipping the filter, I think there should be no difference. Answer: You're correct: convolution is the same as correlation except that the 2D Sobel filter kernel is rotated by 180 degrees. Generally speaking, for spatial filtering, using correlation or convolution is a matter of personal/domain preference/convention.
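To make "flipping the filter" concrete, here is a minimal, framework-free sketch (the function names are mine): a 'valid'-mode correlation, and convolution implemented as correlation with the kernel rotated by 180 degrees.

```python
def correlate2d(img, ker):
    # 'valid'-mode 2-D cross-correlation: slide the kernel, no flipping
    kh, kw = len(ker), len(ker[0])
    return [[sum(img[i + u][j + v] * ker[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def flip180(ker):
    # rotate the kernel by 180 degrees (reverse the rows, then each row)
    return [row[::-1] for row in ker[::-1]]

def convolve2d(img, ker):
    # convolution is correlation with the 180-degree-rotated kernel
    return correlate2d(img, flip180(ker))

sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
```

Because flip180(sobel_x) equals -sobel_x (the kernel is antisymmetric), convolving and correlating with Sobel give outputs that differ only in sign, i.e. in the reported gradient direction, not in where the edges are.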
{ "domain": "dsp.stackexchange", "id": 4618, "tags": "filters, convolution, correlation" }
Relative strengths of methyl, primary, secondary, and tertiary C–H bonds
Question: Which of the highlighted C–H bonds in the following compounds is weakest? (R = generic alkyl group) I personally thought that the answer was 1, but it is given as 4 in the answer key. I don't understand how that is so.
{ "domain": "chemistry.stackexchange", "id": 10720, "tags": "organic-chemistry, bond, hydrocarbons" }
Publishing scene does not make planner take it into account
Question: Hello everyone, I'm working with a model of the Schunk 7-DOF arm in MoveIt. I import objects into the environment and publish the scene, but the planner still provides a path that goes through the object. I get the sense that the planning pipeline and rviz are not communicating on the same topic. How do I find out and fix it? Thanks much Originally posted by Chipper10 on ROS Answers with karma: 41 on 2013-10-11 Post score: 0 Answer: Turned out the issue was using the joint state publisher GUI to set the goal configuration. Path planning worked fine if the final configuration was set from rviz. Originally posted by Chipper10 with karma: 41 on 2013-10-14 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by bit-pirate on 2013-11-12: Good you came back and dropped an answer for others!
{ "domain": "robotics.stackexchange", "id": 15839, "tags": "ros, rviz, ompl-planning, publish" }
Is the spin of a black hole affected by the matter orbiting it?
Question: It's well understood that the Moon has an effect on how fast the Earth spins. My understanding is that if the Moon were bigger, it could affect tidal forces to a larger extent, and possibly slow the rotation of the Earth. An article by space.com titled "This Huge Black Hole Is Spinning at Half the Speed of Light" describes black holes that could be rotating at 50% of the speed of light. Can we tell if the speed of a black hole's rotation is impacted by the amount of matter orbiting it?
This is the direct generalization of Earth's rotation slowing by the tidal influence of the Moon as mentioned in the OP. When placed in identical environments, a rotating black hole absorbs more energy and angular momentum from tidal effects than a nonrotating black hole. But even for rotating black holes under most circumstances this type of mass/angular momentum absorption is too small and could be usually ignored. Rotating black holes exhibit superradiance, a phenomenon when the flux of radiation (electromagnetic or gravitational) impinging on a black hole is amplified. This effect is conceptually similar to Penrose process only for waves rather than particles. The necessary energy and angular momentum carried away is supplied by black hole's rotation, and it slows down as a result. A related concept is the black hole bomb, a runaway superradiant process. Finally, special mention should go to Blandford–Znajek process which extracts rotational energy (and thus slows down the black hole's rotation) via magnetic fields from external sources.
{ "domain": "physics.stackexchange", "id": 81786, "tags": "general-relativity, black-holes, angular-momentum, kerr-metric" }
Pythonic solution to display multi-dimensional tuple as a tree
Question: I would like to show the tree-like structure contained in a multi-dimensional tuple by using the set of graphical glyphs that exist in Unicode fonts (e.g. \u2500, \u2502, \u2514 and others). Here is an example of the kind of output I'm talking about: >>> showtree('(01, (02, 03, 04), 05, ((06, 07), 08), (09, (10, 11), (12, 13), 14), 15)') ┬─01 ├─┬─02 │ ├─03 │ └─04 ├─05 ├─┬─┬─06 │ │ └─07 │ └─08 ├─┬─09 │ ├─┬─10 │ │ └─11 │ ├─┬─12 │ │ └─13 │ └─14 └─15 Ideally, empty items, multi-word items, extra spaces around commas or parentheses, as well as unbalanced parentheses should be managed correctly, as in the following example: >>> showtree('( A, B multi-word item , (C,D), ( E , , F ), G )') ┬─A ├─B multi-word item ├─┬─C │ └─D ├─┬─E │ ├─ │ └─F └─G I came up with a solution that works quite well (the examples above have been generated with it) but the implemented algorithm is not very elegant, and the code not very pythonic: def showtree(string): """display multidimensional tuple as a tree""" glyphs = ['\u252C\u2500','\u251C\u2500','\u2514\u2500','\u2502 '] tree, glyph, item = [], [], [] for char in string: if char in ',)' and item: # add glyph prefix and current item to tree tree.append(glyph + [''.join(item).strip()]); item = [] if char == ',': # update glyph prefix for new item in sublist glyph = [glyphs[3]] * (len(glyph)-1) + [glyphs[1]] elif char == ')': # update glyph prefix for last item in sublist tree[-1][-2] = glyphs[2]; glyph = glyph[:-1] elif char == '(': # update glyph prefix for first item in sublist glyph.append(glyphs[0]) else: # other chars are simply added to current item item.append(char) return '\n'.join(''.join(node) for node in tree) So I would like to get some ideas for an improved implementation, maybe using regular expressions or other advanced parsing techniques. Thank you very much for any hint...
Answer: Neither the code provided in the question nor the top-voted answer works correctly. This is because both make unreasonable assumptions about how the data is formed. >>> print(showtree('(A)')) └─A >>> print(showtree('(1, ((2)), (3, 4, (5)))')) ┬─1 ├─┬─└─2 ├─┬─3 │ ├─4 │ ├─└─5 >>> print(tree_string_to_display_string('(1, ((2)), (3, 4, (5)))')) ┬─1 ├─┬─└─2 ├─┬─3 │ ├─4 │ ├─└─5 Your code is also doing two things at once: parsing a string and building a tree. And so I will only take a tuple as input. If you need to take a string then you can build the parser separately. Make the code work with one item. This should be fairly simple: we take the tuple ('1',) and return ──1. def build_tree(root): return '\n'.join(_build_tree(root)) def _build_tree(node): yield '──' + node[0] >>> print(build_tree(('1',))) ──1 Make the code work with 1, 2 or more items. This is fairly simple: we check node's length and use the above code if it's 1. Otherwise we make: The first item start with ┬─. The last item start with └─. All other items start with ├─. This is as simple as using tuple unpacking, and then iterating over the middle. def _build_tree(node): if len(node) == 1: yield '──' + node[0] return start, *mid, end = node yield '┬─' + start for value in mid: yield '├─' + value yield '└─' + end >>> print(build_tree(('1',))) ──1 >>> print(build_tree(('1', '2'))) ┬─1 └─2 >>> print(build_tree(('1', '2', '3'))) ┬─1 ├─2 └─3 Get the code to work if you only enter a value, no tuples. This is simple: since we're only allowing tuples as the nesting datatype, we can just add an if not isinstance(node, tuple). We will convert this to a string now to help our future selves.
def _build_tree(node): if not isinstance(node, tuple): yield str(node) return if len(node) == 1: yield '──' + node[0] return start, *mid, end = node yield '┬─' + start for value in mid: yield '├─' + value yield '└─' + end >>> print(build_tree('1')) 1 Get the code to work recursively with the same input as (2). To check this we will change the input to integers. This is simple. We run _build_tree on all the values in node, if it's a tuple. From here we only work on these values. We know these values are going to be iterators with only one value. This means we can just use next for now. def _build_tree(node): if not isinstance(node, tuple): yield str(node) return values = [_build_tree(n) for n in node] if len(values) == 1: yield '──' + next(values[0]) return start, *mid, end = values yield '┬─' + next(start) for value in mid: yield '├─' + next(value) yield '└─' + next(end) >>> print(build_tree((1,))) ──1 >>> print(build_tree((1, 2))) ┬─1 └─2 >>> print(build_tree((1, 2, 3))) ┬─1 ├─2 └─3 Get the code working recursively. We know all the current yields work the same way, and so this calls for a new function. This should take three values: The value to add to the first item. (This is what we're doing right now.) The value to add on all other items. This is important: if a node is only one item large but has nested data larger than one value, then we will add ' ' to the output. The nested data. This is really simple to build: def build_lines(first, other, values): yield first + next(values) for value in values: yield other + value Finally we adjust the current yields so they are yield from functions.
def build_tree(root): return '\n'.join(_build_tree(root)) def _build_tree(node): if not isinstance(node, tuple): yield str(node) return values = [_build_tree(n) for n in node] if len(values) == 1: yield from build_lines('──', ' ', values[0]) return start, *mid, end = values yield from build_lines('┬─', '│ ', start) for value in mid: yield from build_lines('├─', '│ ', value) yield from build_lines('└─', ' ', end) def build_lines(first, other, values): yield first + next(values) for value in values: yield other + value >>> print(build_tree(('01', ('02', '03', '04'), '05', (('06', '07'), '08'), ('09', ('10', '11'), ('12', '13'), '14'), '15'))) ┬─01 ├─┬─02 │ ├─03 │ └─04 ├─05 ├─┬─┬─06 │ │ └─07 │ └─08 ├─┬─09 │ ├─┬─10 │ │ └─11 │ ├─┬─12 │ │ └─13 │ └─14 └─15 >>> print(build_tree(('A', 'B multi-word item', ('C', 'D'), ('E', '', 'F'), 'G'))) ┬─A ├─B multi-word item ├─┬─C │ └─D ├─┬─E │ ├─ │ └─F └─G >>> print(build_tree((1, ((2,),), (3, 4, (5,))))) ┬─1 ├─────2 └─┬─3 ├─4 └───5
{ "domain": "codereview.stackexchange", "id": 37552, "tags": "python" }
Very slow convergence with CNN
Question: I am new to deep learning. I am working on training an SSD model on a set of small objects. I am using Adam gradient descent for optimization and a large input (800x800), but I seem to get an improvement of only 0.010 after every 20 or so epochs (350 steps). What can I do or look for to speed up convergence of this model? Answer: Implement the techniques below and check the results: add batch normalization, increase the learning rate, and standardize/normalize the inputs if you have not done so already.
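The last suggestion, standardizing the inputs, is just a per-feature z-score; a minimal framework-free sketch (the function name is mine, and in practice you would compute the statistics on the training set only and reuse them at inference time):

```python
def standardize(columns):
    """Z-score each feature column: subtract its mean, divide by its std-dev."""
    out = []
    for col in columns:
        n = len(col)
        mean = sum(col) / n
        std = (sum((x - mean) ** 2 for x in col) / n) ** 0.5
        if std == 0.0:
            std = 1.0  # leave constant features at zero rather than divide by 0
        out.append([(x - mean) / std for x in col])
    return out
```

After this transform each feature has zero mean and unit variance, which generally keeps gradient magnitudes comparable across features and speeds up optimizers such as Adam.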
{ "domain": "datascience.stackexchange", "id": 4204, "tags": "cnn, object-detection, convergence" }
How does center of mass correlate to stability?
Question: "If an object is tilted it will topple over if a vertical line from its centre of gravity falls outside its base" http://www.schoolphysics.co.uk/age11-14/Mechanics/Statics/text/Stability_/index.html What does it mean by "outside its base", and why does that affect stability? Also, how does torque relate to this? Thanks! Answer: You need to consider the torque about the point of contact between the wheel and the ground. This is an image from your link. If the torque about the point of contact rotates the vehicle anti-clockwise, then the vehicle does not right itself (left car and right bus), whereas if the torque about the point of contact is clockwise, then the vehicle will right itself (right car and left bus). If the vehicle will right itself then the line of action of the weight of the vehicle passes through the area which the vehicle covered, aka "the base", when it was upright, i.e. the line of action of the weight is to the right of the point of contact between the tyre and the road, and so the torque is clockwise.
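To make the link between torque and the base quantitative, consider (as an illustrative model, not from the original answer) a uniform box of base width $b$ and centre-of-mass height $h$, tilted by an angle $\theta$ about one edge of its base. The weight $Mg$ exerts a torque about that edge:

$$\tau = Mg\,d, \qquad d > 0 \ (\text{restoring}) \iff \tan\theta < \frac{b/2}{h}, \qquad \theta_c = \arctan\frac{b}{2h},$$

where $d$ is the horizontal distance from the pivot edge to the vertical line through the centre of mass. For $\theta < \theta_c$ that vertical line still falls inside the base and the torque rights the object; beyond $\theta_c$ it falls outside the base, $d$ changes sign, and the same weight now produces a toppling torque. A low centre of mass (small $h$) or a wide base (large $b$) increases $\theta_c$, which is exactly the stability statement quoted.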
{ "domain": "physics.stackexchange", "id": 39619, "tags": "classical-mechanics, mass, torque, stability" }
Help on integrating differential dynamic pressure (kinetic energy per unit volume) for 1D radial flow towards a line sink
Question: A quick introduction to my question and then the question asked at the end. For this problem the cross-sectional area normal to flow is the surface of a cylinder, $A=2\pi r L$, where $r =$ radial distance from the axis of the cylinder (line sink). The dynamic pressure (kinetic energy per unit volume) for a parcel of fluid is $$\tag{1} 0.5 \rho v^2$$ where $\rho =$ volumetric-mass density and $v=$ velocity. The derivative of the dynamic pressure with respect to radial position is $$\tag{2} \frac{d}{dr}(0.5 \rho v^2)=\rho v \frac{dv}{dr}$$ I want to find the integral of the change in dynamic pressure with respect to $r$ (the change in dynamic pressure as the fluid moves towards or away from the line sink). In doing so I make the following steps, $$\tag{3} \int_{r_1}^{r_2} \rho v \frac{dv}{dr} dr$$ Since $v = w/(\rho A)$, where $w=$ mass flow rate, then, $$\tag{4} \int_{r_1}^{r_2} \rho \frac{w}{\rho A} \frac{dv}{dr} dr$$ $$\tag{5} \int_{r_1}^{r_2} \frac{w}{2\pi r L} \frac{dv}{dr} dr$$ $$\tag{6} \frac{w}{2\pi L} \frac{dv}{dr} \int_{r_1}^{r_2} \frac{1}{r} dr$$ $$\tag{7} \frac{w}{2\pi L} \frac{dv}{dr} \ln\left(\frac{r_2}{r_1}\right)$$ I have values for all variables in the last equation above except for $dv/dr$. How do I determine the value for $dv/dr$? Answer: Assuming $w$ is constant, you need to substitute $A$ into the formula for $v$ and then take the derivative of $v$. $dv/dr$ is not constant; it is a function of $r$, so you have to keep it inside the integral. Eventually you will get the same kinetic-energy term, and the integral is simply the difference of the kinetic energy at the two values of $r$.
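Carrying the answer's suggestion through explicitly (my own algebra, assuming constant $\rho$, $w$ and $L$): substituting $A = 2\pi r L$ into $v = w/(\rho A)$ gives

$$v(r) = \frac{w}{2\pi \rho L\, r}, \qquad \frac{dv}{dr} = -\frac{w}{2\pi \rho L\, r^2},$$

so $dv/dr$ depends on $r$ and cannot be pulled outside the integral as in steps $(6)$ and $(7)$. Keeping it inside, the integrand is an exact derivative:

$$\int_{r_1}^{r_2} \rho v \frac{dv}{dr}\, dr = \left[\tfrac{1}{2}\rho v^2\right]_{r_1}^{r_2} = \frac{w^2}{8\pi^2 \rho L^2}\left(\frac{1}{r_2^2} - \frac{1}{r_1^2}\right),$$

i.e. simply the difference in dynamic pressure between the two radii, as the answer states.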
{ "domain": "physics.stackexchange", "id": 84891, "tags": "fluid-dynamics, integration" }
ROS2 client connecting to the same server fails after restarting the executable
Question: ROS Version: ROS2 Ardent Platform: x86_64 Ubuntu 16.04 LTS, 4.15.0-48-generic Hello! I'm creating a system with a rclcpp::Node instance that monitors and controls rclcpp_lifecycle::LifecycleNode or other rclcpp::Node instances (referred to from here on out as managed nodes). It performs a bring-up and a bring-down handshake of sorts. The bring-up sequence has the managed node, if it's a lifecycle node, publish a message containing the name of the node when it's created, on a topic that this system management node is subscribed to. This management node uses the name of the node to create a client for the lifecycle state change service for that managed node, then requests it to be in a configured state, and finally, upon that success, makes an async request for an active state. This works beautifully; I implemented a multithreaded heartbeat service, and this management node reliably hears back from all of these other nodes. The only problem I'm having and can't seem to find a way around is that when one of these managed nodes (either normal or lifecycle) is brought down through SIGINT, even though the bring-down handshake (informing this management node that it's being brought down, transitioning the lifecycle node to its finalized state if applicable, and calling rclcpp::shutdown, among other things) succeeds, when the managed node's executable is restarted (in the same session of the management node's executable), it usually has trouble communicating over one of those services (lifecycle state change or heartbeat), and almost always fails to communicate the third time the executable is run. But as soon as I terminate and rerun the management node's executable, it starts working perfectly again. I briefly messed with the QoS associated with the clients for the services, and though that didn't help or made matters worse, I don't feel confident enough in my knowledge of those to rule that out as a possible solution. Sound familiar to anyone?
Is there something about destroying clients I should know about, or managing connections to a server? (I hope the latter isn't the case because I don't want to modify the lifecycle state change service XD) I realize that Crystal is out, but I would like to hold off on making that transition until my project is in a completely stable state. If it's determined that this issue is uniquely Ardent-related though, then that could be a good motivator to get that transition going XD Thanks in advance! Originally posted by vandy2000 on ROS Answers with karma: 38 on 2019-05-09 Post score: 0 Answer: I had recently updated to Crystal, and that immediately fixed everything! :) Originally posted by vandy2000 with karma: 38 on 2019-06-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 32994, "tags": "ros, rclcpp, service, ardent" }
Identifying a starting point for maximum coverage
Question: I have 10,000 sequentially numbered items I wish to characterize. For each item, I created a list of the other items I think might have a similar characteristic, i.e. Item 1: [2,4,7,8,9,...,9489] (list contains 250 items) Item 2: [1,4,13,23,...,9424] (list contains 12 items) Item 3: [1,2,4,7,...,9489] (list contains 140 items) Item 4: [1,3,7,9,...,9211] (list contains 250 items) Item 5: [1,3,7,9,...,9221] (list contains 250 items) Item 6: [1,2,7,25,...,9248] (list contains 241 items) Item 7: [4,5,6] (list contains 3 items) If I only have time to test 50 items, how might I choose which items to test so that the maximum number of items is represented? The identified similarity is not necessarily bi-directional - notice item 7's list contains 4,5,6 but not 1 and 3 (which both have 7 in their list). Some lists may be very similar or even identical, as seen in items 4 and 5. Testing both 4 and 5 would not achieve the maximum coverage. Answer: It sounds like you want to select a set of 50 lists whose union contains as many elements as possible. This is the maximum coverage problem, which is NP-complete, but there are standard techniques that can be used to get decent-quality solutions: there is a greedy approximation algorithm, and a formulation as an ILP.
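The greedy approximation mentioned above repeatedly picks the list that covers the most not-yet-covered items; it is guaranteed to come within a factor $1 - 1/e \approx 0.63$ of optimal. A sketch (names are mine, and I assume testing an item covers the item itself plus everything on its similarity list):

```python
def greedy_max_coverage(lists, k):
    """Choose up to k items to test from `lists` (dict: item -> similar items),
    greedily maximizing the number of distinct items covered."""
    covered, chosen = set(), []
    for _ in range(k):
        # marginal gain of testing item i: itself plus its list, minus what's covered
        best = max(lists, key=lambda i: len(({i} | set(lists[i])) - covered))
        gain = ({best} | set(lists[best])) - covered
        if not gain:
            break  # nothing new can be covered; stop early
        chosen.append(best)
        covered |= gain
    return chosen, covered
```

With 10,000 lists this loop costs roughly k passes over the total list data; an inverted index or a priority queue of gains speeds it up, and the ILP formulation gives exact answers if needed.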
{ "domain": "cs.stackexchange", "id": 8652, "tags": "algorithms" }
How does a diatomic molecule vibrate in response to heat?
Question: Do you know of any applet/animation that shows how a diatomic molecule like $\ce{O2}$ or $\ce{N2}$ vibrates in response to / to produce heat? Can it vibrate only on one axis, the one joining the two atoms? What happens to the electrons during the vibration? Answer: The diatomic can vibrate only by extending and contracting the bond. The more energy that is added, the bigger the bond excursion. (More vibrational levels are excited and have bigger excursions.) The bond only extends by a few % of its length in the lower vibrational levels, so it's pretty hard to see. If heaps & heaps of energy are added (temperature say $1000$ K) then in a real molecule the bond can break, as the motion is actually anharmonic. When this happens the atoms fly apart. Additionally, the molecule can also rotate while it is vibrating. Rotation is a slower motion than vibration, so many vibrational periods occur within a $360$ degree rotation. At any given constant temperature, in a sample of molecules the energy moves between vibrations, rotations and translations so that each type of energy is in equilibrium. Some of the energy can also be transferred as radiation, in the infra-red (for vibrations, hundreds to a few thousand $\pu{cm^{-1}}$) and microwave regions (in the range of a few $\pu{cm^{-1}}$) for rotational transitions, but only if the diatomic molecule has a permanent dipole, i.e. is not homonuclear. If these molecules are in equilibrium with their surroundings then these surroundings must also emit/absorb radiation to keep the total energy constant. If you were to suddenly isolate your block of (heteronuclear diatomic) molecules, say in space, then the radiation emitted would, over a few hours, remove energy so that the molecules end up in their zero-point levels. The figure shows the potential energy profile of HI.
The average bond length for the lowest level (quantum number $0$) is approx. $106.4$ pm and the change due to vibration is about $\pm 10$ pm, and clearly greater for higher vibrational levels. The vibrational period is $\approx 14$ fs.
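The quoted period ($\approx 14$ fs) and excursion ($\approx \pm 10$ pm) can be sanity-checked in the harmonic approximation. The HI fundamental wavenumber used below ($\approx 2309\ \pu{cm^{-1}}$) is an assumed literature value, not taken from the answer:

```python
import math

# Harmonic-oscillator sanity check of the numbers quoted for HI.
c = 2.99792458e10          # speed of light, cm/s
hbar = 1.054571817e-34     # J s
amu = 1.66053907e-27       # kg
nu_tilde = 2309.0          # cm^-1, assumed literature value for HI

mu = (1.008 * 126.904) / (1.008 + 126.904) * amu   # H-127I reduced mass, kg
omega = 2 * math.pi * c * nu_tilde                 # angular frequency, rad/s

period_fs = 2 * math.pi / omega * 1e15             # vibrational period, fs
# classical turning point of the v = 0 level: x = sqrt(hbar / (mu * omega))
excursion_pm = math.sqrt(hbar / (mu * omega)) * 1e12

print(f"period ~ {period_fs:.1f} fs, v=0 turning point ~ +/- {excursion_pm:.0f} pm")
```

Both numbers come out close to the answer's figures (about 14 fs and about 12 pm).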
{ "domain": "chemistry.stackexchange", "id": 6921, "tags": "spectroscopy, molecules, heat" }
Is HBrO2 a reductor?
Question: I'm working on a chemistry project that features a reaction where $\ce{Br-}$ participates in a redox reaction as a reductor. Eventually, $\ce{HBrO2}$ competes with $\ce{Br-}$ to react with the same chemical as $\ce{Br-}$ reacts with. Is it therefore true that $\ce{HBrO2}$ is also a reductor? If yes, what is the redox potential of $\ce{HBrO2}$? Answer: Bromous acid ($\ce{HBrO2}$) is an unstable inorganic compound. However, the salts of its conjugate base, bromites (e.g., $\ce{NaBrO2.3H2O}$), have been isolated. Bromous acid is considered to be an oxidizer. However, I can't find its reduction potential, probably because of its unstable nature (Ref.1). Based on the studies of Faria, et al. (Ref.1), $\ce{HBrO2}$ decomposes according to the following mechanism: $$\ce{HBrO2 <=>[$K_\mathrm{a}$] H+ + BrO2-} \tag{E1}$$ $$\ce{HBrO2 + BrO2- ->[$k_1$] HOBr + BrO3-} \tag{R1}$$ $$\ce{HBrO2 + HBrO2 ->[$k_2$] HOBr + BrO3- + H+} \tag{R2}$$ It was found that $k_1 = \pu{39.1 ± 2.6 M-1 s-1}$ and $k_2 = \pu{800 ± 100 M-1 s-1}$ for reactions $(\mathrm{R1})$ and $(\mathrm{R2})$, respectively. Hence, the authors have calculated and given the equilibrium quotient for the fast equilibrium $(\mathrm{E1})$ as $K_\mathrm{a} = \pu{(3.7 ± 0.9) \times 10^{-4} M}$ at ionic strength $\pu{0.06 M}$ and $\pu{25.0 ± 0.1 ^\circ C}$. I think OP is working on the Belousov-Zhabotinsky reaction (oscillator), which involves both $\ce{HBrO2}$ and $\ce{Br-}$. For example, usual reaction conditions for a reaction volume of $\pu{28 mL}$ at $\pu{39.6 ^\circ C}$ are: $\ce{[NaBrO3]} = \pu{1.8 \times 10^{-3} M}$; $\ce{[CH2(CO2H)2]} = \pu{5.6 \times 10^{-3} M}$; $\ce{[CeSO4]} = \pu{5.8 \times 10^{-4} M}$; and $\ce{[H2SO4]} = \pu{1.5 M}$ (Ref.2; yet this reference did not mention adding $\ce{[NaBr]}$ or $\ce{[KBr]}$ solution to the mixture). Note that either a $\ce{Ce(IV)}$, $\ce{Ru(II)}$, or $\ce{Fe(II)}$ complex can be employed as the catalyst.
Ref.3 proposed a mechanism with three processes for the reaction: Process A (consumption of bromide ion): $$\ce{Br- + BrO3- + 2H+ -> HBrO2 + HOBr} \tag{A1}$$ $$\ce{Br- + HBrO2 + H+ <=> 2HOBr} \tag{A2}$$ $$\ce{Br- + HOBr + H+ -> Br2 + H2O} \tag{A3}$$ Process B (Oxidation of catalyst(R), autocatalytic reaction): $$\ce{2HBrO2 -> HOBr + BrO3- + H+} \tag{B1}$$ $$\ce{HBrO2 + BrO3- + H+ <=> 2BrO2 + H2O} \tag{B2}$$ $$\ce{BrO2 + cat.(R) + H+ <=> HBrO2 + cat.(O)} \tag{B3}$$ Process C (Reduction of catalyst(O), production of bromomalonic acid): $$\ce{BrO2 + cat.(O) + H2O -> BrO3- + cat.(R) + 2H+} \tag{C1}$$ $$\ce{Br2 + CH2(CO2H)2 -> BrCH(CO2H)2 + Br- + H+} \tag{C2}$$ $$\ce{6cat.(O) + CH2(CO2H)2 + 2H2O -> 6cat.(R) + HCO2H + 2CO2 + 6H+} \tag{C3}$$ $$\ce{4cat.(O) + BrCH(CO2H)2 + 2H2O -> 4cat.(R) + HCO2H + Br- + 2CO2 + 5H+} \tag{C4}$$ As a consequence of this chain of reactions, it is evident from equation $(\mathrm{A2})$ that $\ce{Br-}$ is oxidized to $\ce{HOBr}$ by $\ce{HBrO2}$ (which has formed as an intermediate): $$\ce{Br- + H2O <=> HOBr + H+ + 2e-} \tag{A2'}$$ $$\ce{HBrO2 + 2H+ + 2e- <=> HOBr + H2O} \tag{A2"}$$ The sum of the equations $(\mathrm{A2'})$ and $(\mathrm{A2''})$ gives: $$\ce{Br- + HBrO2 + H+ -> 2HOBr} \tag{A2}$$ Since this reaction must be spontaneous for the oscillation to continue, it is safe to say that $\ce{HBrO2}$ is a strong oxidizer compared to $\ce{Br-}$, since $\ce{Br-}$ acts as a reducer in this particular reaction.
However, it is also evident from equation $(\mathrm{B2})$ that $\ce{HBrO2}$ reduces $\ce{BrO3-}$ to $\ce{BrO2}$ (all formed as intermediates except $\ce{BrO3-}$): $$\ce{BrO3- + 2H+ + e- <=> BrO2 + H2O} \tag{B2'}$$ $$\ce{HBrO2 <=> BrO2 + H+ + e- } \tag{B2"}$$ The sum of the equations $(\mathrm{B2'})$ and $(\mathrm{B2''})$ gives: $$\ce{HBrO2 + BrO3- + H+ -> 2BrO2 + H2O} \tag{B2}$$ Since this reaction must also be spontaneous for the oscillation to continue, it is safe to say that $\ce{HBrO2}$ can be a reducer compared to $\ce{BrO3-}$, since it acts as a reducer in this particular reaction. Also note that, according to equation $(\mathrm{R2})$, $\ce{HBrO2}$ disproportionates. Therefore, whenever $\ce{HBrO2}$ forms, it could act as an oxidizer or a reducer, depending on the conditions. References: Robert de Barros Faria, Irving R. Epstein, Kenneth Kustin, “Kinetics of Disproportionation and pKa of Bromous Acid,” J. Phys. Chem. 1994, 98(4), 1363–1367 (https://doi.org/10.1021/j100055a051). Y. Pomeau, J. C. Roux, A. Rossi, S. Bachelart, C. Vidal, “Intermittent behaviour in the Belousov-Zhabotinsky reaction,” J. Physique Lett. 1981, 42(13), 271 – 273 (https://doi.org/10.1051/jphyslet:019810042013027100). Yu Chang, Nannan Zhang, Yuxin Yang, Jun Du, Xing Fan, Changyuan Tao, “Time-periodic oscillation reaction in an organic-solvent dominated electrolyte,” Physical Chemistry Chemical Physics 2017, 19(40), 27643-27650 (https://doi.org/10.1039/C7CP05414A).
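The alternating oxidizer/reducer roles of $\ce{HBrO2}$ are what drive the BZ oscillations, and they are commonly illustrated with the two-variable Oregonator reduction of processes A-C above. The sketch below uses standard textbook scaled parameters (assumed values, not the cited experimental conditions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-variable scaled Oregonator (a standard reduction of the BZ chemistry
# above: x ~ [HBrO2], z ~ oxidized catalyst). Parameter values are the usual
# textbook scaled ones, chosen inside the known oscillatory window.
eps, q, f = 0.04, 8e-4, 1.0

def oregonator(t, y):
    x, z = y
    dxdt = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dzdt = x - z
    return [dxdt, dzdt]

sol = solve_ivp(oregonator, (0.0, 50.0), [0.1, 0.1],
                method="LSODA", max_step=0.05)   # stiff system
x = sol.y[0]
x_late = x[x.size // 2:]          # discard the initial transient

# Relaxation oscillations: scaled [HBrO2] repeatedly jumps between ~q and ~1
print(f"late-time x range: {x_late.min():.2e} .. {x_late.max():.2f}")
```

The large swing in `x` is the numerical counterpart of $\ce{HBrO2}$ being consumed (process A, acting as oxidizer) and autocatalytically regenerated (process B, acting as reducer) in turn.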
{ "domain": "chemistry.stackexchange", "id": 14008, "tags": "redox, reduction-potential" }
Feature engineering of timestamp for time series analysis
Question: Following a Tensorflow time series analysis tutorial, I came across a particular way of converting data timestamps into a time-of-day periodic signal that could help the model interpret the data better than just providing the timestamp. timestamp_s = date_time.map(pd.Timestamp.timestamp) day = 24*60*60 year = (365.2425)*day df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day)) df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day)) df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year)) df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year)) plt.plot(np.array(df['Day sin'])[:25]) plt.plot(np.array(df['Day cos'])[:25]) plt.xlabel('Time [h]') plt.title('Time of day signal') I am not sure I understand how the time-of-day and day-of-year periodic structure was extracted from the timestamp, so I would appreciate any pointers regarding this. Lastly, would simple normalized time_of_day and day_of_year extra columns derived from the date_time column suffice? Answer: timestamp_s = date_time.map(pd.Timestamp.timestamp) takes a column of timestamps and converts them into seconds-since-1970 format (also called a unix timestamp). day is set to 86400 seconds. The remainder from dividing timestamp_s by day is the time of day, where 0 is midnight (in the UTC timezone), 43200 is noon, and 86399 is 23:59:59. np.sin() takes input in radians, so that is what the multiplying by 2 * np.pi is doing. (There is no need to explicitly take the remainder, of course, because sine is a cyclic function.) The year calculation is using the same idea, but using the number of seconds in a year. So 0 is Jan 1st, 00:00:00 UTC, 86399 is Jan 1st, 23:59:59 UTC, and er... 31556951 is (roughly) 23:59:59 UTC on Dec 31st. Well, kind of. They are using 365.2425 (the mean Gregorian year) to avoid messing around with Feb 29th and leap years. But it does mean that e.g. 10am on Dec 25th does not map to exactly the same number every year. Another common one would be to use day * 7 to see if day of week is a useful predictor. E.g.
if the data is supermarket sales figures. I really like the Fourier-transform graph they show in that article. That clearly shows that weekday would not be at all useful. Ah, just seen it is temperature data being plotted. That makes sense! Lastly, would simple normalized time_of_day and day_of_year extra columns derived from the date_time column suffice? As in -1.0 for 00:00:00 through to +1.0 for 23:59:59, and -1.0 for Jan 1st through to +1.0 for Dec 31st. The nice feature the sine waves bring is that you don't get that discontinuity at midnight, and at new year. You could instead do -1.0 for 00:00:00 through to +1.0 for 12:00:00, then back to -1.0 for 23:59:59 (and something similar with Jun 30th). But, at that point, sine is looking both smoother and simpler to code.
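The midnight discontinuity argument can be seen numerically. A small sketch (the timestamps below are made up for illustration) comparing a naive normalized time-of-day feature with the sine/cosine pair across midnight:

```python
import numpy as np
import pandas as pd

# Compare a naive normalized time-of-day feature with the sin/cos pair
# around midnight, where the naive feature jumps from ~1 back to 0.
ts = pd.date_range("2020-01-01 23:58", periods=5, freq="min")
seconds = pd.Series(ts).map(pd.Timestamp.timestamp).to_numpy()

day = 24 * 60 * 60
frac = (seconds % day) / day                     # naive feature: discontinuous
day_sin = np.sin(seconds * (2 * np.pi / day))    # cyclic features: smooth
day_cos = np.cos(seconds * (2 * np.pi / day))

for t, f, s, c in zip(ts, frac, day_sin, day_cos):
    print(t, f"naive={f:.5f} sin={s:+.5f} cos={c:+.5f}")
```

The naive feature jumps by almost 1 between 23:59 and 00:00, while both cyclic features move by a tiny amount, which is exactly why the tutorial uses the sine/cosine encoding.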
{ "domain": "datascience.stackexchange", "id": 10512, "tags": "python, time-series" }
Is there a database on chemical reactions, similar to NIST, but far more complete?
Question: The only thing I'm interested in is being able to search for all reactions involving a given compound in the database. Equilibrium constant, reaction rate and energy of activation would all be very welcome, but if these data are absent it's no big deal. It doesn't need to be an online database either; I can download software if it contains such data. Answer: The best known databases are Beilstein and Chemical Abstracts, with around 22 million reactions each since the end of the 19th century, but they are commercial. Organic Syntheses has made its database available for free and it covers 6000 reactions. One of the greatest advantages is that each step is described in depth and each reaction has been thoroughly tested prior to publication. Have a look here for a list of databases.
{ "domain": "chemistry.stackexchange", "id": 17807, "tags": "reference-request, cheminformatics" }
Relation between impedance and phase angle in AC circuit
Question: I often see the statement that the phase angle $\theta$ between the current and the voltage of the circuit can be expressed as $\tan\theta=\frac{Im(Z)}{Re(Z)}$, where $Z$ is the total impedance of the circuit, and I do not understand why it is true. What is the logic behind it? I understand that $Z$ is a complex number and I understand the relation $\tan\theta=\frac{Im(Z)}{Re(Z)}$, but why does this angle represent the phase angle between the current and the voltage? Answer: In an RLC series circuit with an AC source of voltage, the current in the circuit, $i(t)$, can be written as the real part of $$ {\bf I} = I_0e^{j\omega t}\,, $$ where $j = \sqrt{-1}$ and the source voltage, $v(t)$, can be written as the real part of ${\bf V} = V_0e^{j(\omega t + \theta)}\,,$ where $V_0 = I_0 Z$. This means that $$ {\bf V} = {\bf I}{\rm {\bf Z}}\,, $$ where ${\rm {\bf Z}} = Ze^{j\theta}$ is the impedance. The angle $\theta$ is the phase difference between $i(t)$ and $v(t)$. The impedance can be expressed in terms of the resistance, inductance and capacitance of the circuit as $$ {\rm {\bf Z}} = R + j(\omega L - 1/(\omega C))\,. $$ This follows from applying Kirchhoff's voltage law to the circuit, which says that $$ {\bf V} = {\bf V}_{R} + {\bf V}_L + {\bf V}_C\,. $$ The expressions for ${\bf V}_{R}$, ${\bf V}_L$ and ${\bf V}_C$ are ${\bf V}_{R} = {\bf I}R, {\bf V}_{L} = j\omega L {\bf I}$ and ${\bf V}_{C} = -j(\omega C)^{-1}{\bf I}$ respectively. Using the properties of complex numbers and applying them to ${\rm {\bf Z}}$ it follows that $$ |{\rm {\bf Z}}| = Z = \sqrt{R^2 + (\omega L - 1/(\omega C))^2}\,,\quad \tan\theta = \frac{Im({\rm {\bf Z}})}{Re({\rm {\bf Z}})}\,. $$ A similar analysis can be applied to an RLC AC circuit with components connected in parallel to obtain the same result, but in that case the expression for ${\rm {\bf Z}}$ differs from the expression above.
${\bf Edit:}$ As mentioned in the comments, in the example involving the RLC series circuit given above, it looks as if the voltage of the source depends on the current. However, this is not actually the case. To make this fact clearer, let us write $$t = t^{\prime} - \frac{\theta}{\omega}$$ and substitute it into the expressions for ${\bf V}$ and ${\bf I}$ written above. Doing so yields the following expressions for ${\bf V}$ and ${\bf I}$ $$ {\bf V} = V_0e^{j\omega t^{\prime}}\,, \quad {\bf I} = I_0e^{j(\omega t^{\prime} - \theta)}\,. $$ In these expressions involving the time variable $t^{\prime}$ one can drop the prime on the $t^{\prime}$ if one so wishes. Taking the ratio of ${\bf V}$ to ${\bf I}$ yields $$ \frac{{\bf V}}{{\bf I}} = \frac{V_0}{I_{0}}e^{j\theta} = Ze^{j\theta} = {\rm {\bf Z}} $$ where $Z$ and $\tan\theta$ have the same expressions as stated above.
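As a quick numerical companion to the derivation, the series-RLC relations can be checked with complex arithmetic (the component and frequency values below are arbitrary examples, not from the question):

```python
import cmath
import math

# Series RLC impedance and the current-voltage phase angle, using the
# complex-number relations derived above.
R, L, C = 100.0, 25e-3, 1e-6      # ohm, henry, farad (example values)
f = 1000.0                        # source frequency, Hz
w = 2 * math.pi * f

Z = R + 1j * (w * L - 1 / (w * C))   # Z = R + j(wL - 1/(wC))
theta = cmath.phase(Z)               # phase of V relative to I

# |Z| and tan(theta) agree with the closed-form expressions:
assert math.isclose(abs(Z), math.sqrt(R**2 + (w * L - 1 / (w * C))**2))
assert math.isclose(math.tan(theta), Z.imag / Z.real)

print(f"|Z| = {abs(Z):.1f} ohm, theta = {math.degrees(theta):.2f} deg")
```

A negative `theta` here means the circuit is net capacitive at this frequency, so the current leads the voltage.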
{ "domain": "physics.stackexchange", "id": 68624, "tags": "electric-circuits, electric-current, electrical-resistance, voltage" }
Does ros::Timer::hasPending method actually work?
Question: I used it expecting that it would return true after starting the timer, until the callback gets called (in a one-shot timer). But it turned out I was wrong; the following program always prints false: #include "ros/ros.h" void callback(const ros::TimerEvent &event) { ROS_INFO("CB!"); } int main(int argc, char **argv) { ros::init(argc, argv, "timer_test"); ros::NodeHandle nh; ros::Timer timer = nh.createTimer(ros::Duration(1.0), callback, true); ROS_INFO("%d", timer.hasPending()); ros::spinOnce(); ROS_INFO("%d", timer.hasPending()); ros::Duration(0.5).sleep(); ros::spinOnce(); ROS_INFO("%d", timer.hasPending()); ros::Duration(0.5).sleep(); ros::spinOnce(); ROS_INFO("%d", timer.hasPending()); ros::Duration(0.5).sleep(); ros::spinOnce(); ROS_INFO("%d", timer.hasPending()); ros::spinOnce(); } Am I misunderstanding what the method actually does? Thanks Originally posted by jorge on ROS Answers with karma: 2284 on 2017-12-20 Post score: 1 Answer: I checked the underlying source code to figure out what hasPending() checks. Here is what bool TimerManager<T, D, E>::hasPending(int32_t handle) returns: return info->next_expected <= T::now() || info->waiting_callbacks != 0; It looks like hasPending() will only return true when a timer event is past due or just about to tick. I assume the code below will print true, CB!, then false. #include "ros/ros.h" void callback(const ros::TimerEvent &event) { ROS_INFO("CB!"); } int main(int argc, char **argv) { ros::init(argc, argv, "timer_test"); ros::NodeHandle nh; ros::Timer timer = nh.createTimer(ros::Duration(1.0), callback, true); ros::Duration(2).sleep(); // wait for longer than the timer period ROS_INFO(timer.hasPending() ? "true" : "false"); ros::spinOnce(); ROS_INFO(timer.hasPending() ? "true" : "false"); } I hope this helps. Originally posted by Link with karma: 228 on 2017-12-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jorge on 2017-12-21: right.
but it seems to me that this behavior is neither expected nor useful... so I think we can consider this an issue on Timer class code. @Link, do you agree?
{ "domain": "robotics.stackexchange", "id": 29590, "tags": "roscpp, timer" }
Checking whether a state is almost orthogonal to permutation invariant states
Question: Let us consider \begin{equation} |T\rangle = |\psi \rangle^{\otimes m} \end{equation} for an $n$-qubit quantum state $|\psi\rangle$. Let $\mathcal{V}$ be the space of all $(m + 1)$-partite states that are invariant under permutations. Consider $|\psi\rangle|T\rangle$, which belongs to the space $\mathcal{V}$. I am trying to prove: For $|\phi\rangle$ orthogonal to $|\psi\rangle$, $|\phi\rangle |T\rangle$ is almost orthogonal to $\mathcal{V}$, up to an error $\mathcal{O}\big(\frac{1}{m}\big)$. There exists a measurement to determine whether $|\phi\rangle |T\rangle$ is in $\mathcal{V}$, up to an error $\mathcal{O}\big(\frac{1}{m}\big)$, and when doing so, the state $|T\rangle$ is only disturbed by $\mathcal{O}\big(\frac{1}{m}\big)$. (For reference, these statements are in this paper, in the last paragraph of page 13). Answer: Let $P$ be the projector onto the symmetric subspace. We want to find $$ \max|\langle\gamma|\phi,T\rangle| $$ for $|\gamma\rangle\in \mathcal{V}$. This is equivalent to $$ \max|\langle\gamma|P|\phi,T\rangle|, $$ so if we calculate $P|\phi\rangle|T\rangle$, then $|\gamma\rangle$ will be the normalised state parallel to that. Now, if we project $|\phi\rangle|T\rangle$ onto the fully symmetric subspace, we get $$ |\gamma\rangle=\frac{1}{\sqrt{m+1}}\sum_{i=1}^{m+1}|\psi\rangle^{\otimes (i-1)}|\phi\rangle|\psi\rangle^{\otimes(m+1-i)}. $$ Hence, the maximum value of the overlap is $1/\sqrt{m+1}$. If you're talking about an error, you're probably talking about a measurement, and hence need the mod-square, so the probability of finding the state in the symmetric subspace is $1/(m+1)$. How to perform the measurement? I could just specify measurement projectors $\{P,I-P\}$. However, you may subsequently ask how you perform that measurement using standard gate elements. I've never looked at this in great detail, but assume it is related to topics such as this.
Equally, for this specific form of state, I imagine you could do something generalising the swap-test (which projects two systems onto their symmetric subspace). Further explanation of the claimed result: Let $$ |\omega_j\rangle=\frac{1}{\sqrt{m+1}}\sum_{k=1}^{m+1}\omega^{kj}|\psi\rangle^{\otimes (k-1)}|\phi\rangle|\psi\rangle^{\otimes(m+1-k)} $$ where $\omega=e^{2\pi i/(m+1)}$. Each of the $|\omega_j\rangle$ is an eigenvector, eigenvalue $\omega^j$, of the cyclic permutation operator. The cyclic permutation operator and the projector onto the symmetric subspace commute (because the symmetric subspace is invariant under permutations), and hence they share a common eigenbasis and, in particular, only $|\omega_0\rangle$ is in the symmetric subspace. We have $$ |\phi\rangle|T\rangle=\frac{1}{\sqrt{m+1}}\sum_{j=0}^m|\omega_j\rangle. $$ Thus, $$ P|\phi\rangle|T\rangle=\frac{1}{\sqrt{m+1}}|\omega_0\rangle. $$ The normalised version of this is hence $$ |\gamma\rangle=|\omega_0\rangle. $$
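The $1/\sqrt{m+1}$ overlap can be verified numerically for small $m$. The sketch below assumes single-qubit $|\psi\rangle$ and $|\phi\rangle$ (i.e. $n = 1$) so that the tensors stay small:

```python
import itertools
import math

import numpy as np

# Numerical check of the 1/(m+1) result derived above: project |phi>|T>,
# with T = |psi>^(tensor m) and <phi|psi> = 0, onto the symmetric subspace
# of m+1 qubits.
rng = np.random.default_rng(0)
m = 3                                   # number of copies in |T>

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
phi = np.array([-np.conj(psi[1]), np.conj(psi[0])])   # orthogonal to psi

# Build |phi>|T> as a rank-(m+1) tensor, one 2-dimensional axis per subsystem
state = phi
for _ in range(m):
    state = np.tensordot(state, psi, axes=0)

# Symmetrizer P = (1/(m+1)!) * sum over all permutations of the subsystem axes
sym = sum(np.transpose(state, perm)
          for perm in itertools.permutations(range(m + 1)))
sym = sym / math.factorial(m + 1)

overlap_sq = np.vdot(sym, sym).real     # ||P |phi,T>||^2 (state is normalised)
print(f"||P|phi,T>||^2 = {overlap_sq:.6f}, 1/(m+1) = {1 / (m + 1):.6f}")
```

Averaging a tensor over all permutations of its axes is exactly the projector onto the symmetric subspace, so `overlap_sq` matches the $1/(m+1)$ probability from the answer.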
{ "domain": "quantumcomputing.stackexchange", "id": 2030, "tags": "quantum-state, information-theory, linear-algebra" }
Partial measurement of a Two-Mode Squeezed Vacuum state
Question: It seems that if you measure a quadrature on the first mode, let's say $\hat{q}$, on a TMSVS, the second mode will collapse into a coherent state $|q+it\rangle$. Is this true? If so, what is the value of $t$? Answer: Start with a TMSV $$|TMSV\rangle=\frac{1}{\cosh r}\sum_{n=0}^\infty \tanh^n{r}|n\rangle_1\otimes|n\rangle_2.$$ Project the first mode onto a coherent state $|\alpha\rangle=e^{-|\alpha|^2/2}\sum_{n}\frac{\alpha^n}{\sqrt{n!}}|n\rangle$: $$e^{-|\alpha|^2/2}\sum_{n}\frac{\alpha^n}{\sqrt{n!}}\langle n|_1 \frac{1}{\cosh r}\sum_{n=0}^\infty \tanh^n{r}|n\rangle_1\otimes|n\rangle_2=e^{-|\alpha|^2/2}\sum_{n}\frac{\alpha^n}{\sqrt{n!}} \frac{1}{\cosh r}\tanh^n{r}|n\rangle_2.$$ Normalize, collect some terms, and you find what the new resultant coherent state is in the second mode. This can be done with a TMSV starting with a different phase, which will affect the final coherent state in the second mode.
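The projection in the answer can be checked numerically in a truncated Fock basis. The cutoff, $r$, and $\alpha$ values below are arbitrary choices (not from the question), and $\alpha$ is taken real so that conjugation in the bra is moot:

```python
import numpy as np
from math import cosh, exp, factorial, sqrt, tanh

# Numerical check: projecting mode 1 of the TMSV onto <alpha| leaves mode 2
# proportional to the coherent state |alpha * tanh(r)>.
N = 60            # Fock-space cutoff
r = 0.4           # squeezing parameter (assumed example value)
alpha = 0.5       # real coherent-state amplitude used for the projection

def coherent(a, n_max):
    """Coherent-state Fock coefficients for real a, truncated at n_max."""
    v = np.array([exp(-a * a / 2) * a**n / sqrt(factorial(n)) for n in range(n_max)])
    return v / np.linalg.norm(v)     # renormalize against truncation error

# TMSV coefficients c_n on |n>_1 |n>_2
tmsv = np.array([tanh(r)**n / cosh(r) for n in range(N)])

# <alpha|_1 TMSV: mode-2 coefficients are <alpha|n> c_n, then normalize
mode2 = coherent(alpha, N) * tmsv
mode2 = mode2 / np.linalg.norm(mode2)

fidelity = abs(np.vdot(coherent(alpha * tanh(r), N), mode2)) ** 2
print(f"fidelity with |alpha*tanh(r)> = {fidelity:.10f}")
```

The fidelity is 1 up to floating-point error, confirming that the conditional state in mode 2 is a coherent state whose amplitude is the measured amplitude scaled by $\tanh r$.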
{ "domain": "physics.stackexchange", "id": 90276, "tags": "quantum-optics" }