Classification followed by regression to handle response variable that is usually zero
Question: I have a data set consisting of a bunch of predictors (mostly unbounded or positive real numbers) and a single response variable that I wish to predict. The response is typically exactly zero, around 90% of the time. I have tried modelling this using standard Gaussian process methods as well as random forests. However, in both cases (although more so when using random forests) the model seems to handle the data poorly, usually predicting a non-zero response. Now, if the predicted responses were in fact very close to zero I could just set a cut-off below which the values would be rounded to zero, but they are significantly non-zero in many cases. My idea for a solution is to train two models: a classification model, trained on the entire training set, that predicts whether the response is zero or non-zero, and a regression model, trained only on the rows of the training set with a non-zero response. I would then first use the classification model to predict which observations have a response that is exactly zero, and subsequently use the regression model to predict the value of the non-zero responses. Is this a sound way to solve the described problem? Does this sort of model have a name? Are there better ways to do this? Answer: This sounds entirely reasonable. The usual name I have heard for this structure is simply a "pipeline", which also applies to other system-feeds-next-system structures; it might also be called a "machine learning pipeline" or "data processing pipeline". There are ways to assess the performance of an ML pipeline. You can of course compare the final accuracy or loss value with that of the simpler model: has turning the model into a more complex multi-stage one actually improved things? Sadly nothing is guaranteed, although I would be hopeful in your case, in part because you could apply the adjustments available to classifier models for dealing with class-imbalance issues. 
You can decide which part of the pipeline will gain you the most benefit by switching, for each unit, between the pipeline-so-far input and perfect input taken from the training data. That shows how much incremental improvement is possible by perfecting that unit of the pipeline. In your case you have a two-stage pipeline, so you can check whether it is worth focusing more effort on the classifier or the regression part by comparing the incremental improvements between:

(1) The unadjusted output of the whole pipeline run end-to-end.
(2) The output of the regression (or zero) assuming that the classifier was perfect.
(3) A perfect score.

Whichever of the two differences, (2) - (1) or (3) - (2), is largest points at work being most rewarded on the classifier or the regression stage respectively. You can see a worked example of this per-stage analysis in Advice for Applying Machine Learning (slides 21, 22), amongst other places.
{ "domain": "datascience.stackexchange", "id": 1646, "tags": "machine-learning, random-forest, gaussian" }
A recursive_transform Template Function with Calling reserve for Performance Improvement
Question: This is a follow-up question for A recursive_transform Template Function with Unwrap Level for std::array Implementation in C++. Following the suggestion mentioned in G. Sliepen's answer, the recursive_transform function implementation has been updated: the constraint is now expressed with a requires clause instead of static_assert(), the is_iterable concept is replaced by std::ranges::input_range from the STL, and std::ranges::transform() is used in the overload that handles std::array. For performance, another overload is added for those containers that have a reserve() member function. To separate the versions dealing with containers with / without a reserve member function, an additional concept, is_reservable, is added in this post.

The experimental implementation

The experimental implementations of the recursive_transform function and the is_reservable concept it uses are as follows.

recursive_transform function implementation:

// recursive_transform implementation (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class T, class F>
requires (unwrap_level <= recursive_depth<T>() &&   // handling incorrect unwrap levels more gracefully, https://codereview.stackexchange.com/a/283563/231235
          !is_reservable<T>)
constexpr auto recursive_transform(const T& input, const F& f)
{
    if constexpr (unwrap_level > 0)
    {
        recursive_invoke_result_t<F, T> output{};
        std::ranges::transform(
            input,                                              // passing a range to std::ranges::transform()
            std::inserter(output, std::ranges::end(output)),
            [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
        );
        return output;
    }
    else
    {
        return std::invoke(f, input);                           // use std::invoke()
    }
}

// recursive_transform implementation (the version with unwrap_level, reserve space)
template<std::size_t unwrap_level = 1, class T, class F>
requires (unwrap_level <= recursive_depth<T>() &&   // handling incorrect unwrap levels more gracefully, https://codereview.stackexchange.com/a/283563/231235
          is_reservable<T>)
constexpr auto recursive_transform(const T& input, const F& f)
{
    if constexpr (unwrap_level > 0)
    {
        recursive_invoke_result_t<F, T> output{};
        output.reserve(input.size());                           // Call reserve() if possible, https://codereview.stackexchange.com/a/283563/231235
        std::ranges::transform(
            input,                                              // passing a range to std::ranges::transform()
            std::inserter(output, std::ranges::end(output)),
            [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
        );
        return output;
    }
    else
    {
        return std::invoke(f, input);                           // use std::invoke()
    }
}

/* This overload of recursive_transform is to support std::array */
template<std::size_t unwrap_level = 1, template<class, std::size_t> class Container,
         typename T, std::size_t N, typename F>
requires std::ranges::input_range<Container<T, N>>
constexpr auto recursive_transform(const Container<T, N>& input, const F& f)
{
    Container<recursive_invoke_result_t<F, T>, N> output;
    std::ranges::transform(                                     // Use std::ranges::transform() for std::array as well
        input,
        std::begin(output),
        [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
    );
    return output;
}

is_reservable concept implementation:

template<class T>
concept is_reservable = requires(T input) { input.reserve(1); };

Full Testing Code

The full testing code:

// A recursive_transform Template Function with Calling reserve for Performance Improvement
#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <complex>
#include <concepts>
#include <deque>
#include <execution>
#include <exception>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <map>
#include <mutex>
#include <numeric>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>
#include <variant>
#include <vector>

template<class T>
concept is_reservable = requires(T input) { input.reserve(1); };

// recursive_depth function implementation
template<typename T>
constexpr std::size_t recursive_depth()
{
    return 0;
}

template<std::ranges::input_range Range>
constexpr std::size_t recursive_depth()
{
    return recursive_depth<std::ranges::range_value_t<Range>>() + 1;
}

// recursive_invoke_result_t implementation
template<typename, typename>
struct recursive_invoke_result { };

template<typename T, std::invocable<T> F>
struct recursive_invoke_result<F, T> { using type = std::invoke_result_t<F, T>; };

template<typename F, template<typename...> typename Container, typename... Ts>
requires (
    !std::invocable<F, Container<Ts...>> &&                    // F cannot be invoked on Container<Ts...> directly
    std::ranges::input_range<Container<Ts...>> &&
    requires { typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type; })
struct recursive_invoke_result<F, Container<Ts...>>
{
    using type = Container<
        typename recursive_invoke_result<F, std::ranges::range_value_t<Container<Ts...>>>::type>;
};

template<template<typename, std::size_t> typename Container, typename T, std::size_t N, std::invocable<Container<T, N>> F>
struct recursive_invoke_result<F, Container<T, N>> { using type = std::invoke_result_t<F, Container<T, N>>; };

template<template<typename, std::size_t> typename Container, typename T, std::size_t N, typename F>
requires (
    !std::invocable<F, Container<T, N>> &&                     // F cannot be invoked on Container<T, N> directly
    requires { typename recursive_invoke_result<F, std::ranges::range_value_t<Container<T, N>>>::type; })
struct recursive_invoke_result<F, Container<T, N>>
{
    using type = Container<
        typename recursive_invoke_result<F, std::ranges::range_value_t<Container<T, N>>>::type, N>;
};

template<typename F, typename T>
using recursive_invoke_result_t = typename recursive_invoke_result<F, T>::type;

// recursive_transform implementation (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class T, class F>
requires (unwrap_level <= recursive_depth<T>() &&   // handling incorrect unwrap levels more gracefully, https://codereview.stackexchange.com/a/283563/231235
          !is_reservable<T>)
constexpr auto recursive_transform(const T& input, const F& f)
{
    if constexpr (unwrap_level > 0)
    {
        recursive_invoke_result_t<F, T> output{};
        std::ranges::transform(
            input,                                              // passing a range to std::ranges::transform()
            std::inserter(output, std::ranges::end(output)),
            [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
        );
        return output;
    }
    else
    {
        return std::invoke(f, input);                           // use std::invoke()
    }
}

// recursive_transform implementation (the version with unwrap_level, reserve space)
template<std::size_t unwrap_level = 1, class T, class F>
requires (unwrap_level <= recursive_depth<T>() &&   // handling incorrect unwrap levels more gracefully, https://codereview.stackexchange.com/a/283563/231235
          is_reservable<T>)
constexpr auto recursive_transform(const T& input, const F& f)
{
    if constexpr (unwrap_level > 0)
    {
        recursive_invoke_result_t<F, T> output{};
        output.reserve(input.size());                           // Call reserve() if possible, https://codereview.stackexchange.com/a/283563/231235
        std::ranges::transform(
            input,                                              // passing a range to std::ranges::transform()
            std::inserter(output, std::ranges::end(output)),
            [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
        );
        return output;
    }
    else
    {
        return std::invoke(f, input);                           // use std::invoke()
    }
}

/* This overload of recursive_transform is to support std::array */
template<std::size_t unwrap_level = 1, template<class, std::size_t> class Container,
         typename T, std::size_t N, typename F>
requires std::ranges::input_range<Container<T, N>>
constexpr auto recursive_transform(const Container<T, N>& input, const F& f)
{
    Container<recursive_invoke_result_t<F, T>, N> output;
    std::ranges::transform(                                     // Use std::ranges::transform() for std::array as well
        input,
        std::begin(output),
        [&f](auto&& element) { return recursive_transform<unwrap_level - 1>(element, f); }
    );
    return output;
}

int main()
{
    // non-nested input test, lambda function applied on input directly
    int test_number = 3;
    std::cout << "non-nested input test, lambda function applied on input directly: \n"
              << recursive_transform<0>(test_number, [](auto&& element) { return element + 1; }) << '\n';

    // test with non-nested std::array container
    static constexpr std::size_t D = 3;
    auto test_array = std::array<double, D>{1, 2, 3};
    std::cout << "test with non-nested std::array container: \n"
              << recursive_transform<1>(test_array, [](auto&& element) { return element + 1; })[0] << '\n';

    // test with nested std::arrays
    auto test_nested_array = std::array<decltype(test_array), D>{test_array, test_array, test_array};
    //std::cout << "test with nested std::arrays: \n"
    //          << recursive_transform<2>(test_nested_array, [](auto&& element) { return element + 1; })[0][0] << '\n';

    // nested input test, lambda function applied on input directly
    std::vector<int> test_vector = { 1, 2, 3 };
    std::cout << recursive_transform<0>(test_vector, [](auto element) {
        element.push_back(4);
        element.push_back(5);
        return element;
    }).size() << '\n';

    // std::vector<int> -> std::vector<std::string>
    auto recursive_transform_result = recursive_transform<1>(
        test_vector,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "std::vector<int> -> std::vector<std::string>: " + recursive_transform_result.at(0) << '\n';    // recursive_transform_result.at(0) is a std::string

    // std::vector<string> -> std::vector<int>
    std::cout << "std::vector<string> -> std::vector<int>: "
              << recursive_transform<1>(
                     recursive_transform_result,
                     [](std::string x) { return std::atoi(x.c_str()); }).at(0) + 1 << '\n';    // std::string element to int

    // std::vector<std::vector<int>> -> std::vector<std::vector<std::string>>
    std::vector<decltype(test_vector)> test_vector2 = { test_vector, test_vector, test_vector };
    auto recursive_transform_result2 = recursive_transform<2>(
        test_vector2,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "string: " + recursive_transform_result2.at(0).at(0) << '\n';    // recursive_transform_result2.at(0).at(0) is also a std::string

    // std::deque<int> -> std::deque<std::string>
    std::deque<int> test_deque;
    test_deque.push_back(1);
    test_deque.push_back(1);
    test_deque.push_back(1);
    auto recursive_transform_result3 = recursive_transform<1>(
        test_deque,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "string: " + recursive_transform_result3.at(0) << '\n';

    // std::deque<std::deque<int>> -> std::deque<std::deque<std::string>>
    std::deque<decltype(test_deque)> test_deque2;
    test_deque2.push_back(test_deque);
    test_deque2.push_back(test_deque);
    test_deque2.push_back(test_deque);
    auto recursive_transform_result4 = recursive_transform<2>(
        test_deque2,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "string: " + recursive_transform_result4.at(0).at(0) << '\n';

    // std::list<int> -> std::list<std::string>
    std::list<int> test_list = { 1, 2, 3, 4 };
    auto recursive_transform_result5 = recursive_transform<1>(
        test_list,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "string: " + recursive_transform_result5.front() << '\n';

    // std::list<std::list<int>> -> std::list<std::list<std::string>>
    std::list<std::list<int>> test_list2 = { test_list, test_list, test_list, test_list };
    auto recursive_transform_result6 = recursive_transform<2>(
        test_list2,
        [](int x)->std::string { return std::to_string(x); });
    // For testing
    std::cout << "string: " + recursive_transform_result6.front().front() << '\n';

    return 0;
}

The output of the test code above:

non-nested input test, lambda function applied on input directly: 
4
test with non-nested std::array container: 
2
5
std::vector<int> -> std::vector<std::string>: 1
std::vector<string> -> std::vector<int>: 2
string: 1
string: 1
string: 1
string: 1
string: 1

Godbolt link

All suggestions are welcome. 
The summary information:

Which question is this a follow-up to? A recursive_transform Template Function with Unwrap Level for std::array Implementation in C++

What changes have been made in the code since the last question? The constraint is expressed with a requires clause instead of static_assert(). A new concept, is_reservable, is proposed, and another overload that calls the reserve() member function is added.

Why is a new review being asked for? Please review the updated version of the recursive_transform template function. I am not sure whether there is a better way to deal with containers with / without a reserve member function for performance improvement. Any further suggestions are welcome, certainly.

Answer: Avoid code duplication

You have two versions of recursive_transform() that look very similar, and only differ in whether reserve() is called. I would avoid the code duplication here by making use of if constexpr:

template<std::size_t unwrap_level = 1, class T, class F>
requires (unwrap_level <= recursive_depth<T>())
constexpr auto recursive_transform(const T& input, const F& f)
{
    …
    recursive_invoke_result_t<F, T> output{};
    if constexpr (is_reservable<decltype(output)>) {
        output.reserve(input.size());
    }
    …
}
{ "domain": "codereview.stackexchange", "id": 44522, "tags": "c++, performance, template, c++20, constrained-templates" }
Why are solar eclipse paths symmetrical?
Question: Reading up on the eclipse of March 20, 2015, I stumbled upon this page: http://www.timeanddate.com/eclipse/list-solar.html. What caught my eye is that for each year there are two eclipses whose paths are almost symmetrical relative to the equator. I'm just curious why that's always exactly so. Answer: The orbit of the Moon is inclined by 5.14° to the ecliptic. As you may know, the ecliptic is the apparent path of the Sun across the sky, so it is also the plane of Earth's orbit around the Sun. The plane of the Moon's orbit is inclined by 5.14° to the plane of the Earth's orbit. The intersection of the two planes is a line (the line of nodes) lying in both planes. There is also a nice picture of this at http://www.cnyo.org/tag/orbital-plane/. [I'll insert it here after checking the copyright] So, it is only when the Sun is very near to this intersection line that an eclipse can occur, because it is only then that the Sun, Moon and Earth can be in a straight line. This occurs for slightly more than a month (called an eclipse season), twice a year. Another angle comes into the picture too. The axis of the Earth is tilted, so the equator is at 23.4° to the plane of the ecliptic. So, the pattern you have observed comes about because of that geometry, where mirror-image eclipse paths occur about six months apart. The linked pages explain a lot more of the complexities. Image by Nela (nyabla.net) (with additional annotations by Onceinawhile), CC BY-SA 4.0, via Wikimedia Commons
{ "domain": "astronomy.stackexchange", "id": 822, "tags": "solar-eclipse" }
An N-dimensional mathematical vector template class deriving from std::array (C++-17)
Question: I've coded up a Vector class for use in my simulation support code, which I offer up for review. I decided to extend std::array. Now, I know that std::array is an aggregate type, but with C++-17 we can now extend an aggregate base type and still use 'curly brace' initialization. Extending std::array means that each Vector consumes the same, fixed amount of memory as a regular std::array. I've added methods for arithmetic and various Vector operations, such as the cross-product (for N=3 dimensions only), the dot product, and length. I'd welcome opinions over whether this is a good idea in principle, and also specific suggestions about the details of the code. The full source code follows. Note that it makes use of a randomization class which is outside the scope of this question, but if you want to see it it is here: https://github.com/ABRG-Models/morphologica/blob/master/morph/Random.h

/*!
 * \file
 * \brief An N dimensional vector class template which derives from std::array.
 *
 * \author Seb James
 * \date April 2020
 */
#pragma once

#include <cmath>
using std::abs;
using std::sqrt;
#include <array>
using std::array;
#include <iostream>
using std::cout;
using std::endl;
using std::ostream;
#include <string>
using std::string;
#include <sstream>
using std::stringstream;
#include <type_traits>
using std::enable_if;
using std::enable_if_t;
using std::is_integral;
using std::is_scalar;
using std::decay_t;
#include "Random.h"
using morph::RandUniformReal;
using morph::RandUniformInt;

namespace morph {

    /*!
     * \brief N-D vector class
     *
     * An N dimensional vector class template which derives from std::array. Vector
     * components are of scalar type S. It is anticipated that S will be set either to
     * floating point scalar types such as float or double, or to integer scalar types
     * such as int, long long int and so on. Thus, a typical (and in fact, the default)
     * signature would be:
     *
     * Vector<float, 3> v;
     *
     * The class inherits std::array's fixed-size array of memory for storing the
     * components of the vector. It adds numerous methods which allow objects of type
     * Vector to have arithmetic operations applied to them, either scalar (add a scalar
     * to all elements; divide all elements by a scalar, etc) or vector (including dot
     * and cross products, normalization and so on).
     *
     * Because morph::Vector extends std::array, it works best when compiled with a
     * c++-17 compiler (although it can be compiled with a c++-11 compiler). This is
     * because std::array is an 'aggregate class' with no user-provided constructors,
     * and morph::Vector does not add any of its own constructors. Prior to c++-17,
     * aggregate classes were not permitted to have base classes. So, if you want to do:
     *
     * Vector<float, 3> v = { 1.0f, 1.0f, 1.0f };
     *
     * you need c++-17. Otherwise, restrict your client code to doing:
     *
     * Vector<float, 3> v;
     * v[0] = 1.0f; v[1] = 1.0f; v[2] = 1.0f;
     */
    template <typename S, size_t N> struct Vector;

    /*!
     * Template friendly mechanism to overload the stream operator.
     *
     * Note forward declaration of the Vector template class and this template for
     * stream operator overloading. Example adapted from
     * https://stackoverflow.com/questions/4660123
     */
    template <typename S, size_t N> ostream& operator<< (ostream&, const Vector<S, N>&);

    template <typename S=float, size_t N=3>
    struct Vector : public array<S, N>
    {
        //! \return the first component of the vector
        template <size_t _N = N, enable_if_t<(_N>0), int> = 0>
        S x (void) const { return (*this)[0]; }
        //! \return the second component of the vector
        template <size_t _N = N, enable_if_t<(_N>1), int> = 0>
        S y (void) const { return (*this)[1]; }
        //! \return the third component of the vector
        template <size_t _N = N, enable_if_t<(_N>2), int> = 0>
        S z (void) const { return (*this)[2]; }
        //! \return the fourth component of the vector
        template <size_t _N = N, enable_if_t<(_N>3), int> = 0>
        S w (void) const { return (*this)[3]; }

        /*!
         * \brief Unit vector threshold
         *
         * The threshold outside of which the vector is no longer considered to be a
         * unit vector. Note this is hard coded as a constexpr, to avoid messing with
         * the initialization of the Vector with curly brace initialization.
         *
         * Clearly, this will be the wrong threshold for some cases. Possibly, a
         * template parameter could set this; so size_t U could indicate the threshold;
         * 0.001 could be U=-3 (10^-3).
         *
         * Another idea would be to change unitThresh based on the type S. Or use
         * numeric_limits<S>::epsilon and find out what multiple of epsilon would make
         * sense.
         */
        static constexpr S unitThresh = 0.001;

        //! Set data members from an array of the same size and type.
        void setFrom (const array<S, N> v)
        {
            for (size_t i = 0; i < N; ++i) { (*this)[i] = v[i]; }
        }

        /*!
         * Set the data members of this Vector from the passed in, larger vector, v,
         * ignoring the last element of v. Used when working with 4D vectors in graphics
         * applications involving 4x4 transform matrices.
         */
        void setFrom (const array<S, (N+1)> v)
        {
            for (size_t i = 0; i < N; ++i) { (*this)[i] = v[i]; }
        }

        //! Output the vector to stdout
        void output (void) const { cout << "Vector" << this->asString(); }

        /*!
         * Create a string representation of the vector
         *
         * \return A 'coordinate format' string such as "(1,1,2)", "(0.2,0.4)" or
         * "(5,4,5,5,40)".
         */
        string asString (void) const
        {
            stringstream ss;
            auto i = this->begin();
            ss << "(";
            bool first = true;
            while (i != this->end()) {
                if (first) { ss << *i++; first = false; }
                else { ss << "," << *i++; }
            }
            ss << ")";
            return ss.str();
        }

        //! Renormalize the vector to length 1.
        void renormalize (void)
        {
            S denom = static_cast<S>(0);
            auto i = this->begin();
            while (i != this->end()) { denom += ((*i) * (*i)); ++i; }
            denom = sqrt(denom);
            if (denom != static_cast<S>(0.0)) {
                S oneovermag = static_cast<S>(1.0) / denom;
                i = this->begin();
                while (i != this->end()) { *i++ *= oneovermag; }
            }
        }

        /*!
         * Randomize the vector
         *
         * Randomly set the elements of the vector consisting of floating point
         * coordinates. Coordinates are set to random numbers drawn from a uniform
         * distribution between 0 and 1 (See morph::RandUniformReal for details).
         *
         * Note that I need a real or int implementation here, depending on the type of
         * S. This allows me to use the correct type of randomizer.
         *
         * Note, if you omit the second template arg from enable_if_t (or enable_if)
         * then the type defaults to void.
         *
         * \tparam F A floating point scalar type
         */
        template <typename F=S, enable_if_t<!is_integral<decay_t<F>>::value, int> = 0 >
        void randomize (void)
        {
            RandUniformReal<F> ruf (static_cast<F>(0), static_cast<F>(1));
            auto i = this->begin();
            while (i != this->end()) { *i++ = ruf.get(); }
        }

        /*!
         * Randomize the vector
         *
         * Randomly set the elements of the vector consisting of integer
         * coordinates. Coordinates are set to random numbers drawn from a uniform
         * distribution between 0 and 255 (See morph::RandUniformInt for details).
         *
         * Note on the template syntax: Here, if I is integral, then enable_if_t's type
         * is '0' and the function is defined (I think).
         *
         * \tparam I An integer scalar type
         */
        template <typename I=S, enable_if_t<is_integral<decay_t<I>>::value, int> = 0 >
        void randomize (void)
        {
            RandUniformInt<I> rui (static_cast<I>(0), static_cast<I>(255));
            auto i = this->begin();
            while (i != this->end()) { *i++ = rui.get(); }
        }

        /*!
         * Test to see if this vector is a unit vector (it doesn't *have* to be).
         *
         * \return true if the length of the vector is 1.
         */
        bool checkunit (void) const
        {
            bool rtn = true;
            S metric = 1.0;
            auto i = this->begin();
            while (i != this->end()) { metric -= ((*i) * (*i)); ++i; }
            if (abs(metric) > morph::Vector<S, N>::unitThresh) { rtn = false; }
            return rtn;
        }

        /*!
         * Find the length of the vector.
         *
         * \return the length
         */
        S length (void) const
        {
            S sos = static_cast<S>(0);
            auto i = this->begin();
            while (i != this->end()) { sos += ((*i) * (*i)); ++i; }
            return sqrt(sos);
        }

        /*!
         * Unary negate operator
         *
         * \return a Vector whose elements have been negated.
         */
        Vector<S, N> operator- (void) const
        {
            Vector<S, N> rtn;
            auto i = this->begin();
            auto j = rtn.begin();
            while (i != this->end()) { *j++ = -(*i++); }
            return rtn;
        }

        /*!
         * Unary not operator.
         *
         * \return true if the vector length is 0, otherwise it returns false.
         */
        bool operator! (void) const
        {
            return (this->length() == static_cast<S>(0.0)) ? true : false;
        }

        /*!
         * Vector multiply * operator.
         *
         * Cross product of this with another vector v2 (if N==3). In higher
         * dimensions, it's more complicated to define what the cross product is,
         * and I'm unlikely to need anything other than the plain old 3D cross product.
         */
        template <size_t _N = N, enable_if_t<(_N==3), int> = 0>
        Vector<S, N> operator* (const Vector<S, _N>& v2) const
        {
            Vector<S, _N> v;
            v[0] = (*this)[1] * v2.z() - (*this)[2] * v2.y();
            v[1] = (*this)[2] * v2.x() - (*this)[0] * v2.z();
            v[2] = (*this)[0] * v2.y() - (*this)[1] * v2.x();
            return v;
        }

        /*!
         * Vector multiply *= operator.
         *
         * Cross product of this with another vector v2 (if N==3). Result written into
         * this.
         */
        template <size_t _N = N, enable_if_t<(_N==3), int> = 0>
        void operator*= (const Vector<S, _N>& v2)
        {
            Vector<S, _N> v;
            v[0] = (*this)[1] * v2.z() - (*this)[2] * v2.y();
            v[1] = (*this)[2] * v2.x() - (*this)[0] * v2.z();
            v[2] = (*this)[0] * v2.y() - (*this)[1] * v2.x();
            (*this)[0] = v[0]; (*this)[1] = v[1]; (*this)[2] = v[2];
        }

        /*!
         * \brief Scalar (dot) product
         *
         * Compute the scalar product of this Vector and the Vector, v2.
         *
         * \return scalar product
         */
        S dot (const Vector<S, N>& v2) const
        {
            S rtn = static_cast<S>(0);
            auto i = this->begin();
            auto j = v2.begin();
            while (i != this->end()) { rtn += ((*i++) * (*j++)); }
            return rtn;
        }

        /*!
         * Scalar multiply * operator
         *
         * This function will only be defined if typename _S is a scalar type.
         * Multiplies this Vector<S, N> by s, element-wise.
         */
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        Vector<S, N> operator* (const _S& s) const
        {
            Vector<S, N> rtn;
            auto val = this->begin();
            auto rval = rtn.begin();
            // Here's a way to iterate through which the compiler should be able to
            // autovectorise; it knows what i is on each loop:
            for (size_t i = 0; i < N; ++i) { *(rval+i) = *(val+i) * static_cast<S>(s); }
            return rtn;
        }

        /*!
         * Scalar multiply *= operator
         *
         * This function will only be defined if typename _S is a scalar type.
         * Multiplies this Vector<S, N> by s, element-wise.
         */
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        void operator*= (const _S& s)
        {
            auto val = this->begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) *= static_cast<S>(s); }
        }

        //! Scalar division / operator
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        Vector<S, N> operator/ (const _S& s) const
        {
            Vector<S, N> rtn;
            auto val = this->begin();
            auto rval = rtn.begin();
            for (size_t i = 0; i < N; ++i) { *(rval+i) = *(val+i) / static_cast<S>(s); }
            return rtn;
        }

        //! Scalar division /= operator
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        void operator/= (const _S& s)
        {
            auto val = this->begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) /= static_cast<S>(s); }
        }

        //! Vector addition operator
        Vector<S, N> operator+ (const Vector<S, N>& v2) const
        {
            Vector<S, N> v;
            auto val = this->begin();
            auto val2 = v2.begin();
            for (size_t i = 0; i < N; ++i) { v[i] = *(val+i) + *(val2+i); }
            return v;
        }

        //! Vector addition operator
        void operator+= (const Vector<S, N>& v2)
        {
            auto val = this->begin();
            auto val2 = v2.begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) += *(val2+i); }
        }

        //! Vector subtraction
        Vector<S, N> operator- (const Vector<S, N>& v2) const
        {
            Vector<S, N> v;
            auto val = this->begin();
            auto val2 = v2.begin();
            for (size_t i = 0; i < N; ++i) { v[i] = *(val+i) - *(val2+i); }
            return v;
        }

        //! Vector subtraction
        void operator-= (const Vector<S, N>& v2)
        {
            auto val = this->begin();
            auto val2 = v2.begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) -= *(val2+i); }
        }

        //! Scalar addition
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        Vector<S, N> operator+ (const _S& s) const
        {
            Vector<S, N> rtn;
            auto val = this->begin();
            auto rval = rtn.begin();
            for (size_t i = 0; i < N; ++i) { *(rval+i) = *(val+i) + static_cast<S>(s); }
            return rtn;
        }

        //! Scalar addition
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        void operator+= (const _S& s)
        {
            auto val = this->begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) += static_cast<S>(s); }
        }

        //! Scalar subtraction
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        Vector<S, N> operator- (const _S& s) const
        {
            Vector<S, N> rtn;
            auto val = this->begin();
            auto rval = rtn.begin();
            for (size_t i = 0; i < N; ++i) { *(rval+i) = *(val+i) - static_cast<S>(s); }
            return rtn;
        }

        //! Scalar subtraction
        template <typename _S=S, enable_if_t<is_scalar<decay_t<_S>>::value, int> = 0 >
        void operator-= (const _S& s)
        {
            auto val = this->begin();
            for (size_t i = 0; i < N; ++i) { *(val+i) -= static_cast<S>(s); }
        }

        //! Overload the stream output operator
        friend ostream& operator<< <S, N> (ostream& os, const Vector<S, N>& v);
    };

    template <typename S=float, size_t N=3>
    ostream& operator<< (ostream& os, const Vector<S, N>& v)
    {
        os << v.asString();
        return os;
    }

} // namespace morph

Answer: There's a lot going on here. 
At a cursory glance:

- You must not use using in headers at file scope.
- It's not clear why your template parameter is called S instead of T, and the more I read your source code the more this confuses me. What does S stand for? The use of T for such member types is near-universal.
- I'm not sure public inheritance is a good idea here: why are you modelling an is-a relationship? Why not private inheritance or a std::array member?
- Don't declare functions with a (void) parameter list. In C this is necessary to create the correct prototype. In C++ it has no purpose: () does the same, and is conventional.
- Your setFrom member functions should be constructors and assignment operators instead.
- Don't pass std::array by value; pass it by const&, otherwise a very expensive copy might be created. At the very least make this depend on N as well as sizeof(S), so you can optimise for arrays small enough to be passed inside a single register.
- Use algorithms (std::copy, constructors, assignment) instead of copying your arrays in a for loop.
- output is redundant if you define a suitable formatted output stream operator.
- If you want to follow C++ convention, asString should be called str. That's not necessary, of course.
- S denom = static_cast<S>(0); can usually be written as auto denom = S{0};, and those cases where this fails because no suitable constructor exists are probably cases where you want this to fail.
- Don't use while loops to iterate over ranges; it's unidiomatic and therefore confusing. Either use for loops or, better yet, ranged-for loops where possible. And once again, use appropriate algorithms. The loop that calculates denom can be replaced by a call to std::accumulate, for instance. That way you can also declare denom as const and initialise it directly.
- randomize guards against S being an integral type; renormalize does not, but also needs this constraint.
{ "domain": "codereview.stackexchange", "id": 38067, "tags": "c++, c++17, coordinate-system" }
What is the reason why the galvanic series is in its specific order?
Question: I'm trying to understand why galvanic series molecular entities are in the order they are in. Why does $\ce{Cu(s)}$ reduce $\ce{Ag+}$ ions but not $\ce{Pb^{2+}}$ ions? There seems to be some kind of small correlation with electronegativity, but there are a lot of exceptions for it to be true. ie: the electronegativities of Li and K are 0.98 and 0.82 respectively and K is more noble, but K and Ca electronegativities are 0.82 and 1 and Ca is more noble in this case? If you go down the galvanic series the nobility of metals sometimes increases steadily for a long way but then starts decreasing with no seeming connection to electronegativity or octets or half octets. ie from Mg to Sn the electronegativity increases step by step. (going through the nobility order)(Mg 1.31, Al 1.61, Zn 1.65, Fe 1.83, Ni 1.91, Sn 1.96, Pb 2.33 but then decrease, H 2.2, Cu 1.9 and then increases again Ag 1.93, Au 2.54) So is there any connection between electronegativity and the nobility of a metal? And if so, why are there so many exceptions? If not, then what determines the nobility of a metal? Does ionization of a metal notably change the molecular entity's electronegativity? Here's a picture to illustrate with the order of nobility and arrows to clarify the direction. Answer: There is a relation between metal electronegativity and the position in the electrochemical / galvanic metal series. But the metal electronegativity is just one of the factors determining the position. The position is determined by the value of the standard Gibbs free energy of the reaction. $$\ce{M^{n+}(aq) + n/2 H2(g) -> M(s) + n H+(aq)} \quad \quad ΔG^{\circ}_\text{r}=ΔH^{\circ}_\text{r} -TΔS^{\circ}_\text{r} \tag{1}$$ and the related standard reduction potential of the formal "half-reaction": $$\ce{M^{n+}(aq) + n e- -> M(s)}\quad \quad E^{\circ}_{\ce{M/M^{n+}}} = - \frac {ΔG^{\circ}_\text{r}}{nF} \tag{2}$$ where $F \approx \pu{96485 C mol-1}$ is the Faraday constant.
The metal position on the galvanic series is determined by the value of this potential, the most negative (the least inclined to oxidize anything) for alkali metal ions, the most positive (the most inclined to oxidize anything) for precious/inert metal ions. The sign of the reaction Gibbs energy change determines the reaction spontaneity. Spontaneous reactions have this sign negative. The relation of the potential and the Gibbs energy may seem suspicious at first look, until one reminds oneself of the equation of electrostatics $\mathrm{d}E = U\,\mathrm{d}q$, where $E,U,q$ are potential energy, potential and electric charge, which defines the favourite unit of energy $\pu{1 eV}$.
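For concreteness, equation (2) can be turned into numbers. A minimal sketch (the $ΔG^{\circ}_\text{r}$ value below is an assumed, textbook-style figure for zinc, chosen only to illustrate the unit bookkeeping, not derived in this answer):

```python
# E° = -ΔG°_r / (nF): converting a reaction Gibbs energy into a reduction potential.
F = 96485.332  # Faraday constant, C/mol

# Illustrative: Zn^2+(aq) + 2 e- -> Zn(s), with ΔG°_r ≈ +146.7 kJ/mol (assumed table value)
dG_r = 146.7e3   # J/mol
n = 2            # electrons transferred

E_standard = -dG_r / (n * F)
print(f"E° ≈ {E_standard:.3f} V")  # should land close to the familiar -0.76 V for zinc
```

The sign convention does the work here: a positive $ΔG^{\circ}_\text{r}$ for the reduction gives a negative $E^{\circ}$, i.e. a metal low in the nobility order.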
{ "domain": "chemistry.stackexchange", "id": 17681, "tags": "electrochemistry, electronegativity, noble-metals" }
What makes type inference for dependent types undecidable?
Question: I have seen it mentioned that dependent type systems are not inferable, but are checkable. I was wondering if there is a simple explanation of why that is so, and whether or not there is there a limit of "dependency" where types can be indexed by values, below which type inference is possible and above which it is not? Answer: For a rather simple version of dependent type theory, Gilles Dowek gave a proof of undecidability of typability in a non-empty context: Gilles Dowek, The undecidability of typability in the $\lambda\Pi$-calculus Which can be found here. First let me clarify what is proven in that paper: he shows that in a dependent calculus without annotations on the abstractions, it is undecidable to show typeability of a term in a non-empty context. Both of those hypotheses are necessary: in the empty context, typability reduces to that of the simply-typed $\lambda$-calculus (decidable by Hindley-Milner) and with the annotations on the abstractions, the usual type-directed algorithm applies. The idea is to encode a Post correspondence problem as a type conversion problem, and then carefully construct a term which is typeable iff the two specific types are convertible. This uses knowledge of the shape of normal forms, which always exist in this calculus. The article is short and well-written, so I won't go into more detail here. Now in polymorphic calculi like system-F, it would be nice to be able to infer the type abstractions and applications, and omit the annotations on $\lambda$s as above. This is also undecidable, but the proof is much harder and the question was open for quite some time. The matter was resolved by Wells: J. B. Wells, Typability and type checking in System F are equivalent and undecidable. This can be found here. All I know about it is that it reduces the problem of semi-unification (which is unification modulo instantiation of universal quantifiers, and is undecidable) to type checking in System F. 
Finally it is quite easy to show that inhabitation of dependent families is undecidable: simply encode a Post problem into the constructor indices. Here are some slides by Nicolas Oury that sketch the argument. As to whether there is a "limit", it very much depends on what you are trying to do with your dependent types, and there are many approximations which try to be either decidable, or at least close enough to be usable. These questions are still very much part of active research though. One possible avenue is the field of "refinement types", where the language of expression of type dependencies is restricted to allow for decidable checking; see, e.g., Liquid Types. It's rare that full type inference is decidable even in these systems though.
{ "domain": "cs.stackexchange", "id": 1549, "tags": "undecidability, type-theory, type-inference, dependent-types" }
Related to Additive white Gaussian noise (AWGN)
Question: In my research work regarding wireless communication, I came across many research papers wherein AWGN is assumed to be modelled as "complex Gaussian with zero mean and unit variance". I understood why we model noise as Gaussian, but I am not getting why we model noise with zero mean and unit variance only. Also, what effect the noise will make if mean is non-zero and variance is more than unity ? Any help in this regard will be highly appreciated. Answer: The zero-mean assumption rests on empirical validation. You can do it yourself: turn on an oscilloscope and set it to the minimum amplitude. You'll see noise with roughly zero mean. Intuitively, this is the case because Gaussian noise is made up of billions of separate, independent actions (from heated electrons), which on average cancel each other out. The variance of the noise indeed changes from case to case. Keep in mind, though, that what matters in communications is the signal-to-noise ratio, not the individual signal or noise powers. When studying or simulating a system, it is often convenient to set the noise variance to one, and then vary the signal power to obtain different SNRs. You may as well set the signal power to one and vary the noise -- the result is the same.
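To see the convention in action, here is a minimal simulation sketch (the QPSK constellation and the 10 dB target SNR are my own illustrative choices, not from the question): the complex noise is generated once with zero mean and unit variance, and only the signal power is scaled to set the SNR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Complex Gaussian noise, zero mean, unit variance:
# the real and imaginary parts each carry variance 1/2.
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

# Fix the noise variance at 1 and scale the signal to hit a target SNR.
snr_db = 10.0
signal_power = 10 ** (snr_db / 10)   # since the noise power is 1
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
tx = np.sqrt(signal_power) * symbols  # unit-power QPSK, scaled up
rx = tx + noise

measured_snr_db = 10 * np.log10(np.mean(np.abs(tx) ** 2) / np.mean(np.abs(noise) ** 2))
print(f"measured SNR: {measured_snr_db:.2f} dB")
```

Scaling the noise instead of the signal (the "you may as well" route above) gives the same `rx` statistics up to an overall gain, which is why the two conventions are interchangeable.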
{ "domain": "dsp.stackexchange", "id": 12408, "tags": "image-processing, digital-communications, noise, signal-detection" }
High/low density water solution values
Question: I would like to know what min and max values of densities are achievable with water solutions. There are some conditions: 1. Normal or close to normal temperature; 2. Non-radioactive solutions/components; 3. Acids are good as long as they can be safely contained. Thanks for your help! Answer: Highest known is about 4.25 (all densities in g/cm${}^3$) at room temperature: http://en.wikipedia.org/wiki/Clerici_solution If you're willing to allow heating it comes close to 5 because the solubility of the salts increases with temperature: http://books.google.com/books?id=kaa2qeFRXmUC&pg=PA99 My favorite runner-up is the Thoulet solution mentioned at http://en.wikipedia.org/wiki/Potassium_tetraiodomercurate(II) but it only reaches around 3.20. Lowest would depend on what you mean by "water solution". If you mix a drop of water with a gallon of a very light liquid such as diethyl ether (which is not very miscible with water but does dissolve some -- hence the need to dry it in the lab over sodium or molecular sieves), then the answer is the density of the liquid (ether = 0.73). If water has to be the majority component I'd expect the minimum to be very close to the density of water, since mixtures tend to decrease a bit in volume. (The density of water is 1.0 at 4${}^\circ$, close to 1.0 at room temperature, 0.94 near the boiling point).
{ "domain": "chemistry.stackexchange", "id": 1548, "tags": "water, solutions" }
Having trouble with making openni_depth_frame a visible child of Baxter's head in Rviz
Question: I'm using a kinect with Baxter. I am having a problem getting Baxter to move one of its hands to a human's hand to grab something from it. In order to track people I am using openni.launch from the package openni_launch and openni_tracker from the package openni_tracker. After I add a tf display to Rviz: I can successfully view the skeleton of a user in rviz when the fixed frame is 'openni_depth_frame'. I can see the model of Baxter if the fixed frame is one of the frames that make up Baxter's tf model. I wrote a script making 'openni_depth_frame' a child of 'head' (since the kinect is attached to the top of Baxter's head). I have verified that this works by seeing that 'openni_depth_frame' is a child of 'head' in rqt_tf_tree after running this script. However, I expected to see both the 'openni_depth_frame' frame and the frames that make up the user openni_tracker is tracking at the same time I can see Baxter's model. Am I incorrect in thinking that? Or does tf not work that way? Originally posted by justmein on ROS Answers with karma: 1 on 2017-02-16 Post score: 0 Answer: You'll need to add a static transform between any frame on Baxter and openni_depth_frame. That will tell RViz (and all other ROS tools that use TF) how your camera is attached to the robot.
You can invoke the static transform publisher through the command line: Usage: static_transform_publisher x y z qx qy qz qw frame_id child_frame_id OR Usage: static_transform_publisher x y z yaw pitch roll frame_id child_frame_id for instance: $ rosrun tf2_ros static_transform_publisher 0.15 0.075 0.5 0.0 0.7854 0.0 /head /openni_depth_frame Or through a launch file named something like static_depth_tf.launch: <launch> <!-- Users update this value to set transform between camera and robot --> <arg name="camera_link_pose" default="0.15 0.075 0.5 0.0 0.7854 0.0"/> <arg name="camera_link_name" default="/openni_depth_frame"/> <arg name="robot_link_name" default="/head"/> <node pkg="tf2_ros" type="static_transform_publisher" name="camera_link_broadcaster" args="$(arg camera_link_pose) $(arg robot_link_name) $(arg camera_link_name)" /> </launch> And then invoke it with $ roslaunch static_depth_tf.launch Originally posted by imcmahon with karma: 790 on 2017-03-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27037, "tags": "ros, rviz, openi-tracker, baxter, transform" }
Cylinder rotating without slipping on an accelerating slab
Question: I am very confused by the following problem asked in my first year physics class: Please let me know if you can assist in any way! I've spent hours and hours on this question and gained absolutely nothing. Everything I do seems to lead to a contradiction one way or another. There are other resources online I've found that mention this question, but I can't tease out a good solution from these: Force on a solid cylinder that is rolling on an accelerating block https://www.physicsforums.com/threads/a-rolling-disc-on-a-slab.594918/ Thank you! Answer: As this is a homework question I won't give you a full solution, only point you in the right direction. A second force acts on the lower block: a friction force $F_F$, which also provides the torque and hence the angular acceleration $\alpha$ of the cylinder: $$F_F R=-I\alpha$$ where $I$ is the moment of inertia of the cylinder and $R$ its radius. It carries a minus sign because it points in the opposite direction of $F$. So the net force acting on the block is: $$F_{net}=F-F_F$$ Note also that for rolling without slipping, with $a$ the acceleration of the block: $$a=\alpha R$$ To determine $a$ and $\alpha$, use the equations above to set up: $$F_{net}=ma$$
{ "domain": "physics.stackexchange", "id": 29435, "tags": "homework-and-exercises, classical-mechanics, forces" }
Given a set of sets, find the smallest set(s) containing at least one element from each set
Question: Given a set $\mathbf{S}$ of sets, I’d like to find a set $M$ such that every set $S$ in $\mathbf{S}$ contains at least one element of $M$. I’d also like $M$ to contain as few elements as possible while still meeting this criterion, although there may exist more than one smallest $M$ with this property (the solution is not necessarily unique). As a concrete example, suppose that the set $\mathbf{S}$ is the set of national flags, and for each flag $S$ in $\mathbf{S}$, the elements are the colors used in that nation’s flag. The United States would have $S = \{red, white, blue\}$ and Morocco would have $S = \{red, green\}$. Then $M$ would be a set of colors with the property that every national flag uses at least one of the colors in $M$. (The Olympic colors blue, black, red, green, yellow, and white are an example of such an $M$, or at least were in 1920.) Is there a general name for this problem? Is there an accepted “best” algorithm for finding the set $M$? (I’m more interested in the solution itself than in optimizing the process for computational complexity.) Answer: The problem is the well-known NP-complete problem Hitting Set. It is closely related to Set-Cover. The NP-completeness proof can be found in the classic book of Garey and Johnson. If you want to approximate it, you might want to translate your instance first to Set-Cover, and then apply an approximation algorithm for Set-Cover. However, Set-Cover cannot be approximated by a constant factor in polynomial time, unless P=NP, as shown by Lund and Yannakakis. If you are interested in exact solutions and your inputs behave nicely, I would recommend using a fixed-parameter tractable (FPT) algorithm. The running time is here expressed not only in terms of the input length $n$ but also in terms of an additional parameter $k$. If the running time is $O(f(k)\cdot n^{O(1)})$, we call the algorithm an FPT-algorithm. Here, $f(k)$ is an increasing function. So if $k$ is constant we have a polytime algorithm.
The first chapter of the book by Flum and Grohe explains an FPT-algorithm for hitting set (more precisely, for $p$-card-hitting set). The algorithm is easy to implement and uses the method of bounded search trees. Still, it needs too much space to explain here; basically you break down the necessary(?) brute-force search into small pieces (when $k$ is small).
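To make the problem concrete with the flag example from the question, here is a small sketch (my own illustrative instance and code, not the Flum–Grohe algorithm): an exact brute-force search over subset sizes, plus the standard greedy approximation borrowed from Set-Cover.

```python
from itertools import combinations

sets = [
    {"red", "white", "blue"},   # United States
    {"red", "green"},           # Morocco
    {"blue", "yellow"},         # hypothetical flags, to make the instance interesting
    {"green", "white"},
]

def exact_hitting_set(sets):
    """Smallest M intersecting every set, by brute force over subset sizes."""
    universe = set().union(*sets)
    for k in range(1, len(universe) + 1):
        for candidate in combinations(sorted(universe), k):
            if all(s & set(candidate) for s in sets):
                return set(candidate)
    return set()

def greedy_hitting_set(sets):
    """Log-factor approximation: repeatedly pick the element hitting most unhit sets."""
    unhit, m = list(sets), set()
    while unhit:
        best = max(set().union(*unhit), key=lambda e: sum(e in s for s in unhit))
        m.add(best)
        unhit = [s for s in unhit if best not in s]
    return m

print(exact_hitting_set(sets))
print(greedy_hitting_set(sets))
```

The exact search is exponential in the worst case, which is unavoidable unless P=NP, but for instances with a small optimal $M$ (small $k$, in the FPT sense above) it finishes quickly; the greedy version scales but can return a larger $M$ depending on tie-breaking.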
{ "domain": "cs.stackexchange", "id": 438, "tags": "algorithms, optimization, sets" }
Reconstruction of wildlife distribution based on poorly-sampled data
Question: cross-posted to Signal Processing, Cross Validated, and World Building Stack Exchange Hi, I thought I'd also put this here in case there are any field biologists with ideas on the matter. Problem: After reading a series of fantasy novels, I noticed that the biosphere in that world made no sense. To clarify, this is a world where despite magical occurrences, the world itself is almost entirely non-magical. 'Alternate history magical realism,' perhaps. i.e., unlike, for example, Harry Potter, in which almost all plant and animal species mentioned are fictitious and magical, this series uses real flora and fauna. This allows me to extract information about the fictional world's environment based on the distribution of these animals, by assuming that similar animals will live in similar climates on Earth and in the fictional world. Ignoring the likelihood that the original author did not put enough thought into worldbuilding to make this a necessarily reasonable endeavor, my idea for how to proceed was as follows: As maps exist of the fictional world, and the path of the characters can be plotted, I hoped to mark every mention of a specific plant or animal in the text, along with the location of the characters when it occurred, and from this reconstruct a plausible distribution for each species. I've created a theoretical example (in photoshop), for illustration: where the red dotted line represents the paths of various characters, the orange, green, and blue splotches represent the true distribution of the species; the stars, triangles, and circles represent the locations at which a species is mentioned; and the brown, green, and blue lines represent the reconstructed contours of the distribution. Is there a method to do such a reconstruction? It sounds a bit like a Monte Carlo analysis, but I figured I should check... 
(It also sounds rather like the magical programs detective shows use to plot serial killers' locations) Note: It should be clear from the problem statement that just because a species is not mentioned at a specific location does not mean that it does not exist there. i.e., a sample at a specific location returning only 'A' - 'Bill and Jeff saw a lemur.' - does not exclude the possibility of 'B' and 'C' also at that location, but not sampled. Just because the text may specifically say that Bill and Jeff saw a lemur, and doesn't mention any other flora or fauna doesn't mean we should assume that they are in a universe devoid of anything but the occasional lemur. Final Thoughts: Ideally, the analysis method would further: take into account the coverage of the paths, and not assume that (in the example above) nothing exists in Mexico or northern Canada, just because there are no samples taken there. Remember that samples can only be taken along paths. take into account edges, in this case coastlines. If A, B, and C are land animals, it does not make sense that a reconstruction of their distribution would include water, even if their range surrounds a lake or something. Sorry for the long-winded explanation. Any thoughts? Answer: The problem of how to infer species distributions from scattered species occurrences is common in ecology, and there exists a number of methods to construct distribution maps. As a start, you should have a look at Species Distribution Models (SDMs) using regressions models or Maxent, and the paper by Elith et al (2009) is a good starting point and a standard reference. SDMs using maxent is now a common approach, which integrates species occurrences as point data along with environmental layers (e.g. temperature, moisture and topography) to predict species distribution maps, and this can also include absence data or "pseudo-absence" data (randomly sampled data from a region of interest). 
The maxent software is described and can be downloaded here: http://www.cs.princeton.edu/~schapire/maxent/ A common criticism against distributions produced by Maxent is however that they ignore e.g. species interactions, and they only consider the species occurrences and the environmental variables that have been included in the model. In your "Note", you touch upon the issue of detectability, which is an important issue that has received much attention recently. The problem is largest when you only have presence data, and to have real presence/absence data is preferable. Even if you don't have real absences (the species has been searched for but not found), an estimate of sampling effort in different areas is still very useful, since this means that you can at least evaluate whether absences are due to "real" absence or lack of sampling. In your case, the movement paths of characters could be used as a measure of spatial "sampling effort". The main issue with detectability in studies of species distributions or trends is if there are trends or bias in detectability, which means that apparent changes over time or patterns in distribution might be due to differences in detectability and not real differences between areas or over time. This could for instance be the case if observers are more likely to spot a species in one type of habitat (open savannah) than in another type of habitat (closed forest). Useful starting points for issues of detectability are Dorazio (2014) (technical though) and Kery et al (2010).
{ "domain": "biology.stackexchange", "id": 3612, "tags": "ecology, measurement, visualization, species-distribution" }
removing words based on a predefined vector
Question: I have the dataset test_stopword and I want to remove some words from the dataset based on a vector. How can I do that in R? texts <- c("This is the first document.", "Is this a text?", "This is the second file.", "This is the third text.", "File is not this.") test_stopword <- as.data.frame(texts) ordinal_stopwords <- c("first","primary","second","secondary","third") Answer: texts <- c("This is the first document.", "Is this a text?", "This is the second file.", "This is the third text.", "File is not this.") test_stopword <- as.data.frame(texts) ordinal_stopwords <- c("first","primary","second","secondary","third") (newdata <- as.data.frame(gsub(paste0("\\b(", paste(ordinal_stopwords, collapse = "|"), ")\\b ?"), "", texts))) gsub is vectorised over texts, so no sapply is needed; the \\b word boundaries stop the pattern from matching inside longer words, and the optional trailing space avoids leaving double spaces behind. The output is getting skewed when added in a code block (maybe a bug in SE). But, you would get the desired output.
{ "domain": "datascience.stackexchange", "id": 536, "tags": "r, data-cleaning" }
Inertial Mass = Gravitational Mass. Why?
Question: Okay, so the inertial mass of an object is always equal to the gravitational mass of the object. Conceptually, however, they seem different. Then what makes them identical? Is it because they are rooted in the same, more fundamental quantity? What's the complete explanation of their equivalence? Or is it simply a postulate in general relativity without any further explanation? Answer: They are thought to be equivalent, in a theoretical sense, because (inertial) mass causes curvature in spacetime and it is that curvature that gives rise to gravitational forces. This is the essence of General Relativity, which was developed on the postulate that the two masses are equal. That they are equal was one of the most important null experiments in the history of Physics, which was to demonstrate that objects fall at the same rate regardless of their mass or composition. That is to say, if the acceleration of an object subjected to a particular force depends on its inertial mass, and the gravitational force exerted on an object depends on its gravitational mass, these two masses were shown experimentally to be equal to within experimental error. Experiments were performed by Galileo, Newton, Lorand von Eotvos (1922), Braginsky (1971) and others that demonstrated this result with increasing precision over the years. The "Gravitation" book by Misner, Thorne, and Wheeler has a good explanation of these experiments that you may want to consider. Your question is a bit analogous to asking, in the context of Special Relativity, "why is it that the maximum speed at which information is transmitted is the speed of light?" There was nothing special about the speed of light when it was used as a thought experiment by Einstein to come up with Special Relativity. He pondered, if the laws of Physics are the same in every reference frame, then what happens when a moving train, for example, casts a beam of light as it is approaching the station?
This led to the Lorentz transformations (which were independently and previously derived in another context), predictions of time and space dilation that were then confirmed through experiment. Those equations reinforce the notion that a particle is massless in the limit of v = c (limit of velocity approaching speed of light).
{ "domain": "physics.stackexchange", "id": 97162, "tags": "general-relativity, gravity, mass, equivalence-principle, inertia" }
Is there another universe which is made up of antimatter, in large amounts like ours is made up of matter?
Question: Can it be possible that in the big bang, not one, but two universes were formed, one formed of matter, and the other formed of antimatter? It seems logical to me that since our universe is formed of matter, there must be some universe made of antimatter, so that a sort of an equilibrium is maintained. Can this other antimatter universe be treated like a mirror image to our universe, so as to maintain the equilibrium? Actually, to me it seems a bit strange that the big bang should produce a universe with much matter and little antimatter. So it seems that there must be a universe with greater antimatter and less matter, just the mirrored image of our universe, doesn't it? Answer: There are various theories about how the matter anti-matter asymmetry arose. See this search for lots of related questions. It's generally believed that the Big Bang formed almost equal quantities of matter and anti-matter, but there was a very small imbalance i.e. there was slightly more matter. The anti-matter all annihilated with matter to leave normal matter and lots of photons (the particle to photon ratio in the universe is about $10^{-9}$, that is a billion photons for each particle of matter). Over the years various ideas have been proposed, for example that some areas of the universe contain matter and other areas contain anti-matter. However no-one has ever come up with a convincing theory to describe how this could happen. You suggest there could be an anti-matter universe, but you'd need to put this on a sound mathematical footing for anyone to take the suggestion seriously, and no-one has ever managed to do this. By contrast the suggestion I mentioned in my first paragraph is theoretically plausible, though the exact mechanism for generating the small excess of matter is unknown. It requires a process known as CP violation, and this is known to happen and has been measured at accelerators.
However the amount of CP violation we've measured is too small to account for all the matter we see. It's widely believed that when we understand physics beyond the Standard Model, be it String Theory or whatever, we'll discover how the excess of matter originated.
{ "domain": "physics.stackexchange", "id": 75359, "tags": "universe, big-bang, antimatter, thought-experiment" }
Propulsion Energy in Special Relativity
Question: I have a question regarding how to calculate the energy required to move objects within the frame of special relativity. To that extent, I understand the mass of an object as it approaches the speed of light increases, which I would assume necessitates an increase in energy. How would one calculate the energy required to move an object at a constant proper acceleration? For example, if an object had a rest mass of 10 kg and a constant proper acceleration of 10 ms^-2 and the goal was to accelerate it to 99% of the speed of light, how would one calculate the energy it would require? I believe it has something to do with the rate of change in energy over space being equivalent to the relativistic force, but I could definitely be wrong. Answer: Just calculate the KE at the speed desired. By the work-energy theorem, that's the work necessary to accelerate the object to that speed. The only tricky thing is to use the correct relativistic formula for KE, namely $(\gamma - 1)mc^2$. The beauty is that this avoids all complications regarding how you did the accelerating -- that's the power of energy methods.
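Plugging the question's numbers into that formula gives a quick sketch (note that the proper acceleration never enters, which is exactly the point of the energy method):

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 10.0            # rest mass, kg
beta = 0.99         # target speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - beta**2)
kinetic_energy = (gamma - 1.0) * m * c**2   # work needed, by the work-energy theorem

# The Newtonian estimate badly undershoots at this speed.
newtonian = 0.5 * m * (beta * c) ** 2
print(f"gamma = {gamma:.3f}, KE = {kinetic_energy:.3e} J, Newtonian = {newtonian:.3e} J")
```

At 99% of $c$, $\gamma \approx 7.09$, so the required work is roughly an order of magnitude above the Newtonian $\frac{1}{2}mv^2$ figure.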
{ "domain": "physics.stackexchange", "id": 9323, "tags": "homework-and-exercises, special-relativity, energy" }
Is there an IRAM satellite that measures thermal radiation at 250 GHz, or was this a ground-based instrument?
Question: The Nature Research Letter A Pluto-like radius and a high albedo for the dwarf planet Eris from an occultation (also here and here) says about half-way through: We now reassess Eris’ surface temperature in the light of our new results. Measurements by the Spitzer22 and IRAM11 satellites imply disk-averaged brightness temperatures of Tb=30±1.2 K and Tb=38±7.5 K at 70 and 1,200 μm, respectively. 11. Bertoldi, F., Altenhoff, W., Weiss, A., Menten, K. M. & Thum, C. The trans-neptunian object UB313 is larger than Pluto. Nature 439, 563–564 (2006). 22. Stansberry, J. et al. in The Solar System beyond Neptune (eds Barucci, M. A., Boehnhardt, H., Cruikshank, D. P. & Morbidelli, A.) 161–179 (Univ. Arizona Press, 2008). The caption of Figure 1 of reference 11 (Bertoldi et al. 2006) mentions: The 117-element MAMBO-2 camera has an effective frequency for thermal radiation of 250 GHz, a half-power bandwidth of 80 GHz (210–290 GHz), and a beam size of 10.7 arcsec (corresponding to 760,000 km at a distance of 96 AU). and I think it refers to Welcome to the home of MAMBO; the "Max-Planck-Millimeter-Bolometer" IRAM/MAMBO2.250GHz Question: Is the use of the word satellites in "Measurements by the Spitzer22 and IRAM11 satellites..." correct, or does the latter refer to terrestrial measurements? Answer: The Nature paper by Bertoldi et al. (2006) says: Our millimetre observations were performed with the Max-Planck Millimeter Bolometer (MAMBO-2) array detector at the IRAM 30 m telescope on Pico Veleta, Spain. This is a ground-based telescope in the Spanish Sierra Nevada mountains at 2850m to try and get above as much of the precipitable water vapor, which has a very large effect on sub-mm observations, as possible. Its website is here Since it's common to refer to the Spitzer and its predecessor, the IRAS mission, when discussing far-infrared observations, I suspect the authors typed "IRAM", thought "IRAS (satellite)" and made a simple typo that was not caught before publication.
{ "domain": "astronomy.stackexchange", "id": 4895, "tags": "radio-astronomy, space-telescope, radio-telescope, infrared" }
Repository searching code duplication
Question: A followup question to this: IQueryable Extensions working on expression for collection property I am working on a project for a family member which involves the use of a database and a repository, and I have set up a fairly complex (possibly too complex, but I don't have a ton to go off of so I'm trying to make it very flexible for rapid expansion) system to allow for searching for items through the repository via the use of various SearchParameters objects, based off of the following base class to allow for pagination and direct item finding via GUID: Notes: Using Entity Framework Code First EFCF model classes make use of navigation properties public abstract class SearchParametersBase <TModel> : SearchBase { public List<Guid> ItemGuids { get; set; } protected SearchParametersBase(int page, int size, params Guid[] itemGuids) : base(page, size) { if (itemGuids == null || !itemGuids.Any()) ItemGuids = null; else ItemGuids = itemGuids.ToList(); } } The biggest problem I see currently is that because I have search parameter classes such as these: public class CompanySearchParameters : SearchParametersBase<Company> { public string BillingAddressSearchParameter { get; set; } public ClientSearchParameters ClientSearchParameters { get; set; } public LocationSearchParameters LocationSearchParameters { get; set; } public string NameSearchParameter { get; set; } public CompanySearchParameters(int page, int size, params Guid[] itemGuids) : base(page, size, itemGuids) { } } public class LocationSearchParameters : SearchParametersBase<CompanyLocation> { public string AddressSearchParameter { get; set; } public CompanySearchParameters CompanySearchParameters { get; set; } public string DescriptionSearchParameter { get; set; } public string LabelSearchParameter { get; set; } public LocationSearchParameters(int page, int size, params Guid[] itemGuids) : base(page, size, itemGuids) { } } I ended up with a helper class that has a fair amount of duplicated code based on a 
different type of IQueryable<T> for a model that has navigation properties: internal static class RepositoryQueryFilterer { internal static IQueryable<Account> FilterAccountQuery(IQueryable<Account> query, AccountSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Notes.Contains(searchParameters.NotesSearchParameter)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Client> FilterClientQuery(IQueryable<Client> query, ClientSearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterClientQueryByAccountSearch(query, searchParameters.AccountSearchParameters); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Client> FilterClientQueryByAccountSearch(IQueryable<Client> query, AccountSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Account.Notes.Contains(searchParameters.NotesSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<Company> FilterCompanyQuery(IQueryable<Company> query, CompanySearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterCompanyQuery(query, searchParameters.ClientSearchParameters); query = FilterCompanyQuery(query, searchParameters.LocationSearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.NameSearchParameter)) query = query.Where(x => x.Name.Contains(searchParameters.NameSearchParameter)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Company> FilterCompanyQuery(IQueryable<Company> query, ClientSearchParameters searchParameters) { if 
(searchParameters == null) return query; query = FilterCompanyQuery(query, searchParameters.AccountSearchParameters); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => x.Clients.Any(y => searchParameters.ItemGuids.Contains(y.Id))); return query; } internal static IQueryable<Company> FilterCompanyQuery(IQueryable<Company> query, AccountSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Clients.Any(y => y.Account.Notes.Contains(searchParameters.NotesSearchParameter))); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => x.Clients.Any(y => searchParameters.ItemGuids.Contains(y.AccountId))); return query; } internal static IQueryable<Company> FilterCompanyQuery(IQueryable<Company> query, LocationSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.DescriptionSearchParameter)) query = query.Where(x => x.Locations.Any(y => y.Description.Contains(searchParameters.DescriptionSearchParameter))); if (!string.IsNullOrWhiteSpace(searchParameters.LabelSearchParameter)) query = query.Where(x => x.Locations.Any(y => y.Label.Contains(searchParameters.LabelSearchParameter))); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => x.Locations.Any(y => searchParameters.ItemGuids.Contains(y.Id))); return query; } internal static IQueryable<Contact> FilterContactQuery(IQueryable<Contact> query, ClientSearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterContactQuery(query, searchParameters.AccountSearchParameters); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.ClientId)); return query; } internal static 
IQueryable<Contact> FilterContactQuery(IQueryable<Contact> query, CompanySearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterContactQuery(query, searchParameters.LocationSearchParameters); query = FilterContactQuery(query, searchParameters.ClientSearchParameters); query = FilterContactQuery(query, searchParameters.LocationSearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.NameSearchParameter)) query = query.Where(x => x.Name.Contains(searchParameters.NameSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<Contact> FilterContactQuery(IQueryable<Contact> query, AccountSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Client.Account.Notes.Contains(searchParameters.NotesSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Client.AccountId)); return query; } internal static IQueryable<Contact> FilterContactQuery(IQueryable<Contact> query, LocationSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.DescriptionSearchParameter)) query = query.Where(x => x.Company.Locations.Any(y => y.Description.Contains(searchParameters.DescriptionSearchParameter))); if (!string.IsNullOrWhiteSpace(searchParameters.LabelSearchParameter)) query = query.Where(x => x.Company.Locations.Any(y => y.Label.Contains(searchParameters.LabelSearchParameter))); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => x.Company.Locations.Any(y => searchParameters.ItemGuids.Contains(y.Id))); return query; } internal static IQueryable<Contact> 
FilterContactQuery(IQueryable<Contact> query, ContactSearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterContactQuery(query, searchParameters.ClientSearchParameters); query = FilterContactQuery(query, searchParameters.CompanySearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.CellNumberSearchParameter)) query = query.Where(x => x.CellNumber.Contains(searchParameters.CellNumberSearchParameter)); if (!string.IsNullOrWhiteSpace(searchParameters.OfficeNumberSearchParameter)) query = query.Where(x => x.OfficeNumber.Contains(searchParameters.OfficeNumberSearchParameter)); if (!string.IsNullOrWhiteSpace(searchParameters.EmailSearchParameter)) query = query.Where(x => x.Email.Contains(searchParameters.EmailSearchParameter)); if (!string.IsNullOrWhiteSpace(searchParameters.NameSearchParameter)) query = query.Where(x => x.Name.Contains(searchParameters.NameSearchParameter)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Discount> FilterDiscountQuery(DiscountSearchParameters searchParameters, IQueryable<Discount> query) { if (searchParameters == null) return query; query = FilterDiscountQuery(searchParameters.ProductSearchParameters, query); if (searchParameters.IsPercentSearchParameter != null) query = query.Where(x => x.IsPercent == searchParameters.IsPercentSearchParameter.Value); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Discount> FilterDiscountQuery(ProductSearchParameters searchParameters, IQueryable<Discount> query) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.TitleSearchParameter)) query = query.Where(x => x.Product.Title.Contains(searchParameters.TitleSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<CompanyLocation> 
FilterLocationQuery(IQueryable<CompanyLocation> query, LocationSearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterLocationQuery(query, searchParameters.CompanySearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.DescriptionSearchParameter)) query = query.Where(x => x.Description.Contains(searchParameters.DescriptionSearchParameter)); if (!string.IsNullOrWhiteSpace(searchParameters.LabelSearchParameter)) query = query.Where(x => x.Label.Contains(searchParameters.LabelSearchParameter)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<CompanyLocation> FilterLocationQuery(IQueryable<CompanyLocation> query, CompanySearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NameSearchParameter)) query = query.Where(x => x.Company.Name.Contains(searchParameters.NameSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<Order> FilterOrderQuery(IQueryable<Order> query, OrderSearchParameters searchParameters) { if (searchParameters == null) return query; query = FilterOrderQuery(query, searchParameters.CompanySearchParameters); query = FilterOrderQuery(query, searchParameters.ProductSearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Notes.Contains(searchParameters.NotesSearchParameter)); if (searchParameters.PositionsOfInterestSearchParameter != null && searchParameters.PositionsOfInterestSearchParameter.Any()) query = query.Where(x => x.PositionsOfInterest.Any(y => searchParameters.PositionsOfInterestSearchParameter.Contains(y))); if (searchParameters.OrderStatusSearchParameter != null && searchParameters.OrderStatusSearchParameter.Any()) query = query.Where(x => 
searchParameters.OrderStatusSearchParameter.Contains(x.Status)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Order> FilterOrderQuery(IQueryable<Order> query, CompanySearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.NameSearchParameter)) query = query.Where(x => x.Company.Name.Contains(searchParameters.NameSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<Order> FilterOrderQuery(IQueryable<Order> query, ProductSearchParameters searchParameters) { if (searchParameters == null) return query; if (!string.IsNullOrWhiteSpace(searchParameters.TitleSearchParameter)) query = query.Where(x => x.Product.Title.Contains(searchParameters.TitleSearchParameter)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } internal static IQueryable<Product> FilterProductQuery(IQueryable<Product> query, ProductSearchParameters searchParameters) { if (searchParameters == null) return query; FilterProductQuery(query, searchParameters.DiscountSearchParameters); if (!string.IsNullOrWhiteSpace(searchParameters.TitleSearchParameter)) query = query.Where(x => x.Title.Contains(searchParameters.TitleSearchParameter)); query = FilterGuids(query, searchParameters); return query; } internal static IQueryable<Product> FilterProductQuery(IQueryable<Product> query, DiscountSearchParameters searchParameters) { if (searchParameters == null) return query; if (searchParameters.IsPercentSearchParameter != null) query = query.Where(x => x.Discounts.Any(y => y.IsPercent == searchParameters.IsPercentSearchParameter.Value)); if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => 
searchParameters.ItemGuids.Contains(x.Id)); return query; } private static IQueryable<TModel> FilterGuids<TModel>(IQueryable<TModel> query, SearchParametersBase<TModel> searchParameters) where TModel : PocoBase { if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } } The reason that, so far, I haven't been able to combine those methods that look, and in fact are, almost identical, is that they are operating on different objects with the related navigation properties. Is there a way of making these methods more single-minded and reusable? Answer: public List<Guid> ItemGuids { get; set; } Start by initializing this to an empty collection. You'll save a lot of null checks. Consider this old method: private static IQueryable<TModel> FilterGuids<TModel>(IQueryable<TModel> query, SearchParametersBase<TModel> searchParameters) where TModel : PocoBase { if (searchParameters.ItemGuids != null && searchParameters.ItemGuids.Any()) query = query.Where(x => searchParameters.ItemGuids.Contains(x.Id)); return query; } without all the null checks: private static IQueryable<TModel> FilterGuids<TModel>(IQueryable<TModel> query, SearchParametersBase<TModel> searchParameters) where TModel : PocoBase { return query.Where(x => !searchParameters.ItemGuids.Any() || searchParameters.ItemGuids.Contains(x.Id)); } Yes, a one-liner. if (!string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter)) query = query.Where(x => x.Notes.Contains(searchParameters.NotesSearchParameter)); You can simplify a few things by using the ?: ternary operator and get rid of the if's: return searchParameters != null && !string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter) ? query.Where(x => x.Notes.Contains(searchParameters.NotesSearchParameter)) : query; Turn this class RepositoryQueryFilterer into a collection of extensions. 
Name each overload just FilterBy: internal static class RepositoryQueryFilters { internal static IQueryable<Account> FilterBy(this IQueryable<Account> query, AccountSearchParameters searchParameters) { return searchParameters != null && !string.IsNullOrWhiteSpace(searchParameters.NotesSearchParameter) ? query.Where(x => x.Notes.Contains(searchParameters.NotesSearchParameter)) : query; } private static IQueryable<TModel> FilterBy<TModel>(this IQueryable<TModel> query, SearchParametersBase<TModel> searchParameters) where TModel : PocoBase { return searchParameters.ItemGuids.Any() ? query.Where(x => searchParameters.ItemGuids.Contains(x.Id)) : query; } // ... } Ok, it's shorter now but still, it's a lot for a filter that just uses a single property. You shouldn't use big objects as parameters if you only need a single value. Consider this: internal static class RepositoryQueryFilters { internal static IQueryable<Account> FilterBy(this IQueryable<Account> query, string notesSearchParameter) { return !string.IsNullOrWhiteSpace(notesSearchParameter) ? query.Where(x => x.Notes.Contains(notesSearchParameter)) : query; } private static IQueryable<TModel> FilterBy<TModel>(this IQueryable<TModel> query, IReadOnlyList<Guid> guids) where TModel : PocoBase { return guids.Any() ? query.Where(x => guids.Contains(x.Id)) : query; } } Now each extension uses only what it really requires. It's much easier to test because you don't have to create complex objects. Try to keep things as simple as possible. Anyways, I think creating a few queries that can perform the few different searches you require would be easier to maintain than all those filters and parameter objects.
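The "apply a filter only when its parameter actually carries a value" idea the answer keeps returning to is language-agnostic. A minimal sketch in Python over plain dictionaries (the data and function names are hypothetical illustrations, not the Entity Framework model):

```python
def filter_by_guids(items, guids):
    # Mirrors the one-liner FilterGuids: an empty guid list means "no filter".
    return [x for x in items if not guids or x["id"] in guids]

def filter_by_notes(items, text):
    # Mirrors the NotesSearchParameter check: blank text means "no filter".
    return [x for x in items if not text or not text.strip() or text in x["notes"]]

# Hypothetical stand-in data for the Account table.
accounts = [
    {"id": 1, "notes": "overdue invoice"},
    {"id": 2, "notes": "paid in full"},
]
```

Each filter takes only the single value it needs, so it can be exercised without constructing any search-parameter object, which is the testability point the answer makes.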
{ "domain": "codereview.stackexchange", "id": 22847, "tags": "c#, entity-framework, database, search, repository" }
How to train-test split and cross validate in Surprise?
Question: I wrote the following code, which works: from surprise.model_selection import cross_validate cross_validate(algo,dataset,measures=['RMSE', 'MAE'],cv=5, verbose=False, n_jobs=-1) However, when I do this (notice the trainset is passed here in cross_validate instead of the whole dataset): from surprise.model_selection import train_test_split trainset, testset = train_test_split(dataset, test_size=test_size) cross_validate(algo, trainset, measures=['RMSE', 'MAE'],cv=5, verbose=False, n_jobs=-1) It gives the following error: AttributeError: 'Trainset' object has no attribute 'raw_ratings' I looked it up and the Surprise documentation says that Trainset objects are not the same as Dataset objects, which makes sense. However, the documentation does not say how to convert a Trainset to a Dataset. My question is: 1. Is it possible to convert a Surprise Trainset to a Surprise Dataset? 2. If not, what is the correct way to train-test split the whole dataset and cross-validate? Answer: EDIT: It seems I misunderstood the task at first, so here's my correction. Hope it works this time. It seems like what you're trying to do is similar to what is in the documentation under examples/split_data_for_unbiased_estimation.py (or this github issue, which seems to be exactly what you want). The code manually splits the dataset into two without using any sort of function call, then sets the internals of the data variable to be only the train split. import random from surprise import SVD from surprise import Dataset from surprise import accuracy from surprise import GridSearch # Load your full dataset. 
data = Dataset.load_builtin('ml-100k') raw_ratings = data.raw_ratings # shuffle ratings if you want random.shuffle(raw_ratings) # 90% trainset, 10% testset threshold = int(.9 * len(raw_ratings)) trainset_raw_ratings = raw_ratings[:threshold] test_raw_ratings = raw_ratings[threshold:] data.raw_ratings = trainset_raw_ratings # data is now your trainset data.split(n_folds=3) # Select your best algo with grid search. Verbosity is buggy, I'll fix it. print('GRID SEARCH...') param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005]} grid_search = GridSearch(SVD, param_grid, measures=['RMSE'], verbose=0) grid_search.evaluate(data) algo = grid_search.best_estimator['RMSE'] # retrain on the whole train set trainset = data.build_full_trainset() algo.train(trainset) # now test on the trainset testset = data.construct_testset(trainset_raw_ratings) predictions = algo.test(testset) print('Accuracy on the trainset:') accuracy.rmse(predictions) # now test on the testset testset = data.construct_testset(test_raw_ratings) predictions = algo.test(testset) print('Accuracy on the testset:') accuracy.rmse(predictions) PS: If you feel like this seems a bit hacky and weird... then the core-developer of Scikit-learn that wrote this code also agrees with that sentiment.
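The 90/10 split in the snippet above does not depend on Surprise internals at all; it is plain list slicing on raw_ratings. A self-contained sketch of just that step (the tuples below are hypothetical stand-ins for data.raw_ratings, not the ml-100k data):

```python
import random

# Hypothetical stand-in for data.raw_ratings: (user, item, rating, timestamp) tuples.
raw_ratings = [(f"u{i}", f"i{i % 7}", float(i % 5) + 1.0, None) for i in range(100)]

random.seed(0)                 # fix the shuffle so the split is reproducible
random.shuffle(raw_ratings)

threshold = int(0.9 * len(raw_ratings))
trainset_raw_ratings = raw_ratings[:threshold]   # 90%: assign back to data.raw_ratings
test_raw_ratings = raw_ratings[threshold:]       # 10%: held out for construct_testset
```

In the real code the first slice is assigned to data.raw_ratings (so data becomes the trainset) and the second is later passed to data.construct_testset.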
{ "domain": "datascience.stackexchange", "id": 7430, "tags": "dataset, cross-validation, recommender-system, python-3.x" }
Portably generate uniformly random floats from mt19937 output
Question: My goal is to generate a sequence of random float from a seeded random generator. The sequence should be the same on any machine / any compiler -- this rules out the use of std::uniform_real_distribution since it doesn't make that guarantee. Instead we have to create our own version of this, which we will be able to guarantee is portable / uses the same implementation on all platforms. We can assume I'm starting from std::mt19937 since the C++ standard does mandate its implementation, so the goal becomes, how can I convert a uniformly random uint32_t to a uniformly random float in the range [0.0f, 1.0f]. The main things I'm concerned about are: Efficiency Loss of precision I threw something together which looks like this: template <typename RNG> float uniform_float(RNG & rng) { static_assert(std::is_same<std::uint32_t, typename RNG::result_type>::value, "Expected to be used with RNG whose result type is uint32_t, like mt19937"); float result = static_cast<float>(rng()); for (int i = 0; i < 32; ++i) { result /= 2; } return result; } In preliminary tests it seems to be outputting floats in the range [0.0f, 1.0f]. I did also try looking around in the libstdc++ headers to see where they are doing the equivalent thing, but it looked like it was going to take some digging to actually find it. Here are some natural questions in my mind: When static casting uint32_t to float, the value is never outside the representable range, so the behavior is not undefined. Typically it will not be representable exactly though, since both types have 32 bits, and float has to have some overhead. The standard says it is implementation-defined whether I get the next highest or next lowest representable number in this case. I assume that it doesn't matter since I'm going to divide by two anyways many times after this, and then many of these values will collide anyways? Is it better (faster) to divide by the uint32_t max value, rather than by 2^32? I assume not. 
Does dividing by two repeatedly cause a subtle bias as the least significant bits are repeatedly discarded? If they are only being discarded then I would expect not, but possibly there is some rounding that takes place and could cause problems? An alternative strategy would be to start with 2^{-32} as a float, and then multiply it up by a random integer into the range [0.0f, 1.0f]. However it's harder for me to understand in terms of the standard exactly what will happen if I do that -- what if 2^{-32} is not exactly representable? If I simply write it as a multiplication, then the int will be promoted to a float first anyways, right? Is it better to do some kind of hand-rolled operation for int * float, using a bit-by-bit doubling routine etc.? Answer: At least in my opinion, the efficiency question you've raised (repeated division by 2 vs. division by 2^32 vs. multiplication by 2^{-32}) is best answered by profiling. I, however, would take issue with making a blind assumption that the generator's maximum is 2^32 - 1 at all. Instead, I'd rather use the generator's own specification of the maximum value it can return, so the code would look something like this: return (float)rng() / (float)RNG::max(); Here again, pre-inverting and multiplying might (possibly) improve speed: static const float factor = 1.0f / RNG::max(); return rng() * factor; I also tend to wonder whether it might make sense to make the result type generic. With this, we're no longer restricted to a generator that produces a 32-bit result, but we probably do want to assure that the result type is floating point. 
template <typename Result, typename RNG> Result uniform_dist(RNG & rng) { static_assert(std::is_floating_point<Result>::value, "Result must be a floating point type"); static const Result factor = Result(1) / static_cast<Result>(RNG::max()); return rng() * factor; } Then we'd specify the result type as a template parameter: std::mt19937 rng { std::random_device()() }; std::cout << uniform_dist<float>(rng) << "\n"; std::cout << uniform_dist<double>(rng) << "\n";
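The range difference between the two scalings (dividing by 2^32 versus dividing by the generator's max, 2^32 - 1) can be checked numerically. Note this sketch uses Python floats, which are doubles, so it verifies the arithmetic rather than the float32 rounding behaviour the question worries about:

```python
M = 2**32 - 1                  # mt19937's max() for a 32-bit result

for n in (0, 1, M // 2, M):
    by_pow2 = n / 2.0**32      # half-open range: never reaches 1.0
    by_max = n / float(M)      # closed range: hits 1.0 exactly at n == M
    assert 0.0 <= by_pow2 < 1.0
    assert 0.0 <= by_max <= 1.0
```

So dividing by max() is what yields the closed interval [0, 1] that the question asks for, which is one more reason the answer prefers it over dividing by 2^32.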
{ "domain": "codereview.stackexchange", "id": 18944, "tags": "c++, performance, c++11, random, floating-point" }
Direction of Photo Electron Emission
Question: I was looking for information on how the photo electrons are emitted when under X-ray radiation. In this ancient review paper here http://authors.library.caltech.edu/1551/1/WATpr28.pdf they state that the most common angles for non polarized X-ray beams (of various energies) range roughly at around 70-80 degrees with the beam. It is unclear to me, whether the photo electrons are moving towards the source of the X-ray beam or away from it ? Undergrad texts do not seem to shed light on this matter, the best one gets is pictures with emitted electrons being at a 90 degree angle to the incoming photon. Also, I assume this angle is given for a cone, i.e. it's 70-80 w.r.t. the beam, but with 2pi angle around the beam ? Answer: Referring to the Watson 1928 paper you cited (see footnote 31 on page 737) I infer the following. The most probable direction of initial photo-electron ejection is a little forward of perpendicular to the incoming un-polarized X-ray beam (based on ejection being parallel to the electric vector of the x-ray photons which vector is perpendicular to the beam length). So for un-polarized x-rays the electron vectors will form a cone opening up "down-beam" away from the x-ray source. The half-angle of the cone is beta but beta is very close to 90 degrees. The authors hypothesize that the emitted photo-electrons are prone to scattering by atoms surrounding the photo-electron source atom. They say it is this scattering which accounts for the observed distributions being other than a spike at angle beta. They say that application of a geometrical scattering model like Rutherford nuclear scattering appears to account very well for the scattering patterns reported from the cited experiments. 
In the fu-berlin paper cited by Carl Witthoft, which describes X-ray Photoelectron Spectroscopy (XPS), it is written "[For] XPS in condensed matter one must note that the method is very surface sensitive, because only photo electrons from a thin surface layer are emitted lossless." In the abstract of a 1923 paper by F. Bubb (http://journals.aps.org/pr/abstract/10.1103/PhysRev.23.137) there is a rough description of the emission angles of photo-electrons from a block of paraffin wax produced by a polarized beam of x-rays. (Sadly I can't access the body of this paper.) Of course all this is many years old so I am sure that there must be some more up-to-date descriptions somewhere. The wikipedia article for XPS (http://en.wikipedia.org/wiki/X-ray_photoelectron_spectroscopy) doesn't appear to mention scattering angle although the overview diagram indicates "take-off angle".
{ "domain": "physics.stackexchange", "id": 15598, "tags": "photoelectric-effect" }
How to understand the SR Latch
Question: I can't wrap my head around how the SR Latch works. Seemingly, you plug an input line from R, and another from S, and you are supposed to get results in $Q$ and $Q'$. However, both R and S require input from the other's output, and the other's output requires input from the other other's output. What comes first, the chicken or the egg? When you first plug this circuit in, how does it get started? Answer: A flip-flop is implemented as a bi-stable multivibrator; therefore, Q and Q' are guaranteed to be the inverse of each other except for when S=1, R=1, which is not allowed. The excitation table for the SR flip-flop is helpful in understanding what occurs when signals are applied to the inputs.

S R Q(t) Q(t+1)
----------------
0 x  0    0
1 0  0    1
0 1  1    0
x 0  1    1

The outputs Q and Q' will rapidly change states and come to rest at a steady state after signals have been applied to S and R. Example 1: Q(t) = 0, Q'(t) = 1, S = 0, R = 0. State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(0 OR 1) = 0 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(0 OR 0) = 1 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(0 OR 1) = 0 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(0 OR 0) = 1 Since the outputs did not change, we have reached a steady state; therefore, Q(t+1) = 0, Q'(t+1) = 1. Example 2: Q(t) = 0, Q'(t) = 1, S = 0, R = 1 State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(1 OR 1) = 0 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(0 OR 0) = 1 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(1 OR 1) = 0 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(0 OR 0) = 1 We have reached a steady state; therefore, Q(t+1) = 0, Q'(t+1) = 1. 
Example 3: Q(t) = 0, Q'(t) = 1, S = 1, R = 0 State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(0 OR 1) = 0 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(1 OR 0) = 0 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(0 OR 0) = 1 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(1 OR 0) = 0 State 3: Q(t+1 state 3) = NOT(R OR Q'(t+1 state 2)) = NOT(0 OR 0) = 1 Q'(t+1 state 3) = NOT(S OR Q(t+1 state 2)) = NOT(1 OR 1) = 0 We have reached a steady state; therefore, Q(t+1) = 1, Q'(t+1) = 0. Example 4: Q(t) = 1, Q'(t) = 0, S = 1, R = 0 State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(0 OR 0) = 1 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(1 OR 1) = 0 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(0 OR 0) = 1 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(1 OR 1) = 0 We have reached a steady state; therefore, Q(t+1) = 1, Q'(t+1) = 0. Example 5: Q(t) = 1, Q'(t) = 0, S = 0, R = 0 State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(0 OR 0) = 1 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(0 OR 1) = 0 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(0 OR 0) = 1 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(0 OR 1) = 0 We have reached a steady state; therefore, Q(t+1) = 1, Q'(t+1) = 0. With Q=0, Q'=0, S=0, and R=0, an SR flip-flop will oscillate until one of the inputs is set to 1. Example 6: Q(t) = 0, Q'(t) = 0, S = 0, R = 0 State 1: Q(t+1 state 1) = NOT(R OR Q'(t)) = NOT(0 OR 0) = 1 Q'(t+1 state 1) = NOT(S OR Q(t)) = NOT(0 OR 0) = 1 State 2: Q(t+1 state 2) = NOT(R OR Q'(t+1 state 1)) = NOT(0 OR 1) = 0 Q'(t+1 state 2) = NOT(S OR Q(t+1 state 1)) = NOT(0 OR 1) = 0 State 3: Q(t+1 state 3) = NOT(R OR Q'(t+1 state 2)) = NOT(0 OR 0) = 1 Q'(t+1 state 3) = NOT(S OR Q(t+1 state 2)) = NOT(0 OR 0) = 1 State 4: Q(t+1 state 4) = NOT(R OR Q'(t+1 state 3)) = NOT(0 OR 1) = 0 Q'(t+1 state 4) = NOT(S OR Q(t+1 state 3)) = NOT(0 OR 1) = 0 As one can see, a steady state is not possible until one of the inputs is set to 1 (which is usually handled by power-on reset circuitry).
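The settle-to-steady-state iteration walked through in the examples can be sketched as a tiny simulation. Both NOR outputs are updated simultaneously from the previous state, exactly as in the worked states above; the function names here are illustrative:

```python
def nor(a, b):
    return int(not (a or b))

def sr_latch_settle(s, r, q, q_bar, max_steps=16):
    """Iterate the cross-coupled NOR pair, computing both new outputs
    from the previous state, until they stop changing.
    Returns the steady (Q, Q') pair, or None if the latch oscillates."""
    for _ in range(max_steps):
        nq, nq_bar = nor(r, q_bar), nor(s, q)
        if (nq, nq_bar) == (q, q_bar):
            return q, q_bar           # steady state reached
        q, q_bar = nq, nq_bar
    return None                       # e.g. Q=0, Q'=0 with S=R=0 never settles
```

Running it on the example inputs reproduces the worked results: setting (S=1, R=0) settles at Q=1, resetting (S=0, R=1) settles at Q=0, and the all-zero start of Example 6 oscillates forever.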
{ "domain": "cs.stackexchange", "id": 1182, "tags": "circuits, sequential-circuit" }
What is meant by “decay topology” in experimental high energy physics
Question: Often we come across 'decay topology' while doing data analysis in experimental particle physics. My guess is that it represents the decay of a particle: how it decays and what it decays into. Edit: I was reading about D*+ reconstruction and analysis strategy in a thesis by R S de Rooji (https://www.researchgate.net/publication/258809712_Prompt_D_production_in_proton-proton_and_lead-lead_collisions_measured_with_the_ALICE_experiment_at_the_CERN_Large_Hadron_Collider). The exact lines were "... this chapter introduces the strategy for the $D^{*+}$ reconstruction via the $D^{*+}\rightarrow D^0 \pi^+_{soft} \rightarrow K^-\pi^+\pi^+_{soft}$ hadronic decay channel. Furthermore, the decay topology defines a multitude of observables on which can be cut in order to increase the statistical significance of the $D^{*+}$ signal compared to the combinatorial background which arises from uncorrelated pairs of tracks." [decay=(of a radioactive substance, particle, etc.) undergo change to a different form by emitting radiation. topology=the way in which constituent parts are interrelated or arranged.] Answer: This expression is used to describe patterns of decays, where the exact types of particles are not specified, as only the stages of the decay matter. Let's look at an example. The minimal supersymmetric extension has the following decay: $$\begin{align} H&\to\tilde{\chi}^0_2\tilde{\chi}^0_2\\ \text{each}\ \tilde{\chi}^0_2&\to Z\tilde{\chi}^0_1. \end{align}$$ So a Higgs decays into two neutralinos (the next lightest ones), which in turn each decay into a $Z$ and the lightest neutralino $\tilde{\chi}^0_1$. Since $\tilde{\chi}^0_1$ is stable, this results in two instances of missing energy. Then, let's consider this other example: $$\begin{align} H&\to\tilde{l}^-\tilde{l}^+,\\ \tilde{l}^+&\to l^+\tilde{\chi}^0_1,\\ \tilde{l}^-&\to l^-\tilde{\chi}^0_1. 
\end{align}$$ If we ignore the specific types of particles, both can be represented by the diagram below: with the assignment $A\to H$, $B,B'\to\tilde{\chi}^0_2$, $a,a'\to Z$, and $X,X'\to\tilde{\chi}^0_1$ for the first decay, and $A\to H$, $B\to\tilde{l}^-$, $B'\to\tilde{l}^+$, $a\to l^-$, $a'\to l^+$, and $X,X'\to\tilde{\chi}^0_1$ for the second decay. Although this diagram looks like it, this is not a Feynman diagram: the arrows represent the flow of time and the dashed lines represent missing energy, i.e. the arrowed plain lines do not mean fermions and the dashed lines do not mean scalars. This is a decay topology, or more exactly a graphical representation of it. This allows particle physicists to study a whole set of processes in one go, which is especially useful for reusing the kinematics computations.
{ "domain": "physics.stackexchange", "id": 43567, "tags": "particle-physics, experimental-physics, terminology, data-analysis" }
Ripple Carry Adder
Question: I'm new to computer science and came across an issue while learning about ripple carry adders. I'm trying to add binary 1+1+1 below (excuse the poor photoshopping). I know the answer should be binary 11, but my sums were 0 and 0 with a carry of 1. Shouldn't my sums be 1 and 1 with a carry of 0? What am I doing wrong? Answer: You're misinterpreting the action of a ripple-carry adder. Each of the two boxed circuits takes in two 1-bit numbers (the top two inputs) and a carry bit (the left input) and returns a sum bit (the lower output) and a carry (the right output). In this example, you are adding $a_0+b_0$ (the 1, 1 left pair of inputs) and $a_1+b_1$ (the 1, 0 right pair of inputs). In other words, reading from least significant bits on the left, you're adding $a=11$ to $b=01$, i.e., $11+01$ which gives you the sum bits $s_0=0,s_1=0$ and a carry of 1, as you observed. In decimal terms, your example adds $3+1$ and correctly produces $4$.
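The behaviour described in the answer can be checked with a short simulation. Below is a minimal sketch (mine, not from the original post) of a full-adder cell chained into a 2-bit ripple-carry adder; feeding it the question's inputs reproduces the observed result of sum bits 0, 0 with a carry of 1.

```python
def full_adder(a, b, cin):
    """One full-adder cell: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(a_bits, b_bits, cin=0):
    """Add two equal-length bit lists (LSB first) by chaining full adders,
    feeding each cell's carry-out into the next cell's carry-in."""
    sum_bits = []
    carry = cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# The example from the question, LSB first: a = 11 (i.e. 3), b = 01 (i.e. 1)
sum_bits, carry = ripple_carry_add([1, 1], [1, 0])
print(sum_bits, carry)  # [0, 0] 1  -> decimal 3 + 1 = 4
```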
{ "domain": "cs.stackexchange", "id": 5794, "tags": "computer-architecture, arithmetic, digital-circuits, cpu" }
Why 0.2% of strain is considered while taking proof stress?
Question: When a material does not show a distinct yield point, a strain offset of 0.2% is used: a line is drawn from that offset parallel to the elastic line, and the corresponding stress is called the proof stress. My question is: why 0.2% of strain? Is that an assumption, and does it differ between materials? Answer: It is not an assumption but a somewhat arbitrary convention, and not a universal one either. In some codes and for some materials 0.1% is chosen, in others 0.05%, and sometimes x% total strain is used rather than the x% proof stress. Also, proof stress is not defined at x% of total strain; it refers to the 0.2% strain remaining after unloading. So the 0.2% proof stress is the stress at which, after unloading, you end up with a permanent elongation of 0.2% of your specimen. The choice of 0.2% is a compromise between being easily measurable with simple equipment and being exact enough for most engineering purposes.
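The offset construction can also be done numerically. The sketch below is my own illustration (the material model and all parameter values are made up, roughly steel-like): it intersects the line drawn from 0.2% strain with slope E against an idealized stress-strain curve by bisection.

```python
def stress(eps, E=200e3, sigma_y=250.0, n=0.1):
    """Idealized stress-strain curve in MPa: linear elastic up to
    sigma_y, then power-law hardening (made-up model material)."""
    eps_y = sigma_y / E
    if eps <= eps_y:
        return E * eps
    return sigma_y * (eps / eps_y) ** n

def proof_stress(offset=0.002, E=200e3, hi=0.05):
    """Find where the offset line sigma = E*(eps - offset) crosses the
    curve, i.e. the stress leaving `offset` permanent strain on unloading."""
    lo = offset  # at eps = offset the line is at zero, below the curve
    f = lambda eps: stress(eps, E=E) - E * (eps - offset)
    for _ in range(100):  # bisection on the sign change of f
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return stress(0.5 * (lo + hi), E=E)

print(round(proof_stress(), 1))  # 0.2% proof stress of the model material, ~276 MPa
```

Changing `offset` to 0.001 or 0.0005 reproduces the alternative conventions mentioned in the answer.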
{ "domain": "engineering.stackexchange", "id": 3533, "tags": "materials, civil-engineering" }
Magnetic field due to a coil of N turns and a solenoid
Question: I have learnt that the formula for calculating the magnetic field at the centre of a current-carrying coil of $N$ turns is:- $$ B = \frac {\mu N I}{2r}$$ (where $r=$ radius of the loop, $I=$ current in the coil) And, the magnetic field at the centre of a current-carrying solenoid of $N$ turns is:- $$ B = \frac{\mu N I}{L}$$ (where $L$ & $I$ are the length and the current in the solenoid respectively and $\mu=\mu_0\mu_r$ is the magnetic permeability). As we can see, both these formulas are different. But I can't figure out why that is. (Since from what I have read about solenoids, they are just a number of coils wound closely together). So my question is- why are there two different formulas for the magnetic field at the centre of a coil of $N$ turns and of a solenoid? (Is a solenoid somehow different from a coil having many turns?) Answer: Let's discuss the matter both qualitatively and quantitatively. Quantitative Discussion First of all, let's derive the expression for the magnetic field on the axis of a current-carrying coil, beginning with a coil of a single turn. The cosine components of the magnetic field cancel out due to symmetry and the sine components add up along the axis. So we have the field as $$dB=\frac{\mu_0Idl\sin\alpha}{4\pi r^2}\sin\theta$$ Here $\alpha=\frac{\pi}{2}$, so $\sin\alpha=1$ ($\alpha$ is the angle between the current element $dl$ and the line joining it to the field point, which is a right angle for every element of the loop). or $$dB=\frac{\mu_0Idl}{4\pi r^2}\sin\theta$$ With $\sin\theta=\frac{R}{r}$, $$B=\int\frac{\mu_0Idl}{4\pi r^2}\frac{R}{r}$$ or $$B=\frac{\mu_0IR}{4\pi r^3}\int dl$$ ($\int dl= 2\pi R$, i.e.
the circumference of the coil) or $$B=\frac{\mu_0IR}{4\pi r^3}\, 2\pi R$$ or $$B=\frac{\mu_0IR^2}{2(R^2+x^2)^\frac{3}{2}}$$ Now for a point at the centre of the coil, $x=0$, so $$B=\frac{\mu_0I}{2R}$$ Now the point is that we can extend this formula to a coil of $N$ turns only if the thickness of the coil is small (ideally negligible), i.e. all the loops are nearly on the same cross section. Otherwise, if the coil is considerably thick, we cannot apply this derivation. For a thick coil (solenoid) the derivation is different. Let us discuss this coil of thickness $t$. Here we cannot apply the above derivation, as the coil is thick. If we wished to derive the expression for the magnetic field on the axis of this coil with the method we used before, we would have to integrate not only along the circumference of each individual loop of the coil but also along the length of the coil. Thus it is better in this case to take as our differential element not a length $dl$ on the circumference of the coil, but a coil of small thickness $dx$ itself, and this brings us to the logic for deriving the magnetic field on the axis of a solenoid. Let $n$ be the number of turns per unit length of the solenoid (so $n=N/L$ for $N$ turns over a length $L$). Then in a length $dx$ there are $n\,dx$ turns. The field experienced by an object O at the centre of this solenoid due to such an element is given by $$dB=\frac{\mu_0 n\,dx\,IR^2}{2(R^2+x^2)^\frac{3}{2}}$$ Now we can substitute $x=R\tan\theta$, so $$dx=R\sec^2\theta\, d\theta$$ Putting in these values we get $$dB=\frac{\mu_0 nI\cos\theta\, d\theta}{2}$$ Integrating from $-\frac{\pi}{2}$ to $+\frac{\pi}{2}$ (the limiting angles subtended by the ends of a very long solenoid) we get $$B=\int_\frac{-\pi}{2}^\frac{\pi}{2} \frac{\mu_0 nI\cos\theta\, d\theta}{2}$$ or $$B=\frac{\mu_0 nI}{2} \int_\frac{-\pi}{2}^\frac{\pi}{2} \cos \theta\, d\theta$$ or $$B=\frac{\mu_0 nI}{2} \left(\sin\frac{\pi}{2} -\sin\left(-\frac{\pi}{2}\right)\right)$$ or $$B= \mu_0 nI$$ which, with $n=N/L$, is the $\frac{\mu NI}{L}$ expression from the question (with $\mu=\mu_0$ in vacuum) for the field at the centre of a solenoid.
Notice that in this derivation, unlike the first one, we have taken the differential element to be a coil of small thickness $dx$ rather than a small length $dl$ on any one of the loops; this is the approach required for a thick coil. Qualitative Discussion For a coil of $N$ turns we can apply the formula $$B=\frac{\mu_0NI}{2R}$$ only when all the $N$ turns of the coil are nearly on the same cross section. Otherwise we have to resort to the expression for the solenoid. I hope this helped clear your doubts.
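As a numerical sanity check (my own sketch, not part of the original answer; the current, radius, and turn count are arbitrary), one can sum the on-axis fields of $N$ individual loops spread over a length $L$ and watch the total move from the flat-coil value $\frac{\mu_0 NI}{2R}$ toward the solenoid value $\frac{\mu_0 NI}{L}$ as $L$ grows relative to $R$:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_axis_field(I, R, x):
    """On-axis field of a single circular loop, distance x from its centre."""
    return MU0 * I * R**2 / (2.0 * (R**2 + x**2) ** 1.5)

def coil_center_field(I, R, L, N):
    """Field at the centre of N loops spread evenly over length L
    (a 'thick coil'), summed loop by loop."""
    positions = [(-L / 2) + L * (k + 0.5) / N for k in range(N)]
    return sum(loop_axis_field(I, R, x) for x in positions)

I, R, N = 1.0, 0.05, 1000
for L in (1e-4, 0.05, 1.0):  # thin coil -> comparable -> long solenoid
    B = coil_center_field(I, R, L, N)
    flat = MU0 * N * I / (2 * R)  # flat-coil formula
    sol = MU0 * N * I / L         # long-solenoid formula
    print(f"L={L}: B={B:.4e}  flat-coil={flat:.4e}  solenoid={sol:.4e}")
```

For $L \ll R$ the sum matches the flat-coil formula; for $L \gg R$ it approaches the solenoid formula, with a small finite-length correction at the ends.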
{ "domain": "physics.stackexchange", "id": 42988, "tags": "electromagnetism, magnetic-fields, inductance" }
Revised Job Queue for Strategy Game
Question: After posting my previous question about this Job Queue, I decided I wasn't actually very happy with it. I am embarrassed to admit that upon further testing it did not function properly in all situations. I received some awesome feedback about it, and I have made extensive changes to the classes as well as fixing all of the bugs that I could find. I believe the code is much simpler and easier to understand, so I would like to get some feedback on it. One of the key differences between this and the previous version is that now the workers are assigned a JobUnit from the JobQueue, and derive their destination positions based on that. Then when they are finished moving, they are sent to the JobQueue to work on that JobUnit. I don't know if this is a good way to handle the problem, but it is working. DTJobQueue.h: #import <Foundation/Foundation.h> #import "DTDwarf.h" #import "DTJob.h" #import "DTJobQueueState.h" @interface DTJobQueue : NSObject <NSCoding> @property JobQueueState state; @property NSMutableArray *completedJobsForPickup; -(void) updateJobQueue; #pragma mark - Job Handling -(void) addJob: (DTJob *)job; -(BOOL) alreadyHaveJobOfThisType:(JobType)jobType; -(BOOL) areJobsAvailableForWork; -(JobType) activeJobType; #pragma mark - Dwarf Handling -(void) addDwarfToQueue:(DTDwarf *)dwarf; -(DTJobUnit *)jobUnitForDwarf; #pragma mark - Pause and Cancel Jobs -(void) cancelAllJobs; //nothing currently uses pause -(void) pauseJobQueue; -(void) unpauseJobQueue; #pragma mark - Info for Rendering -(int) numberOfJobsInArray; -(NSMutableArray *) listOfJobs; @end DTJobQueue.m: #import "DTJobQueue.h" /* The JobQueue handles the jobs of the floor it is on. One job is active at a time, and only one job of each type is allowed at once. Each update if a job is not active, the queue tries to make one active and starts it if successful. If a job is currently active, it updates it based on workloads received from the workers. 
Once complete it resolves the job and clears workers out of the queue. It also has methods to cancel jobs in the queue. */ @implementation DTJobQueue { DTJob *_activeJob; NSMutableArray *_activeJobUnits; NSMutableArray *_workerSlots; NSMutableArray *_jobQueue; } static const int kNumWorkerSlots = 4; -(id) init { self = [super init]; if (self) { _state = JobQueueStateIdle; _jobQueue = [[NSMutableArray alloc]init]; _workerSlots = [[NSMutableArray alloc]init]; _activeJobUnits = [[NSMutableArray alloc]init]; _completedJobsForPickup = [[NSMutableArray alloc]init]; } return self; } #pragma mark - Update Loop -(void) updateJobQueue { switch (self.state) { case JobQueueStateIdle: //jobs are started in the idle state and only if one is not already active if ([self chooseAnActiveJob]) { [self startActiveJob]; self.state = JobQueueStateWorking; } break; case JobQueueStateWorking: //this case happens when a job is active //first it loads as many job units as it can, then it checks if the workers have finished any //then it resolves the job if all of its units are completed [_activeJob updateJob]; //dont create more new job units than the max minus the number of already loaded units int numAvailableJobUnits = kNumWorkerSlots - (int)_activeJobUnits.count; //first check if more job units can be loaded if ((int)_activeJobUnits.count < kNumWorkerSlots) { //if so, load the previously determined number of them, not more than the total number that will be needed while ((int)_activeJobUnits.count < _activeJob.jobUnitsNeededToComplete && numAvailableJobUnits > 0) { [self fillJobUnitSlot]; numAvailableJobUnits--; } } [self checkForFinishedJobUnits]; if (_activeJob.status == JobStateCompleted) { [self resolveFinishedJob]; } break; default: break; } } #pragma mark - Idle Update -(BOOL) chooseAnActiveJob { if (_jobQueue.count > 0 && _activeJob == nil) { _activeJob = [_jobQueue firstObject]; [_jobQueue removeObjectAtIndex:0]; return YES; } else { return NO; } } -(void) startActiveJob { 
//creates the job units [_activeJob startJob]; } #pragma mark - Working Update -(void) fillJobUnitSlot { //returns nil if there are no available job units DTJobUnit *jobUnit = [_activeJob jobUnitWaitingForWorker]; if (jobUnit != nil) { jobUnit.status = UnitWaitingForWorker; [_activeJobUnits addObject:jobUnit]; } } -(void) checkForFinishedJobUnits { //dwarves are removed from their worker slot if their unit is finished //they are not passed back and forth, just a reference kept while they are working NSMutableArray *dwarvesStaying = [[NSMutableArray alloc]init]; for (DTDwarf *dwarf in _workerSlots) { if (dwarf.dwarfState == DwarfFinishedWorking) { dwarf.dwarfState = DwarfReadyForFloorPickup; [self completeJobUnit:dwarf.dwarfJobUnit]; } else { [dwarvesStaying addObject:dwarf]; } } _workerSlots = dwarvesStaying; } -(void) completeJobUnit:(DTJobUnit *)jobUnit { [_activeJob completeJobUnit:jobUnit]; [_activeJobUnits removeObject:jobUnit]; } -(void) resolveFinishedJob { [self.completedJobsForPickup addObject:_activeJob]; _activeJob = nil; self.state = JobQueueStateIdle; } #pragma mark - Dwarf Handling -(void) addDwarfToQueue:(DTDwarf *)dwarf { //the dwarf is only added to the queue once in position to work the job dwarf.dwarfState = DwarfWorking; [_workerSlots addObject:dwarf]; for (DTJobUnit *jobUnit in _activeJobUnits) { //this is a sanity check to make sure the dwarf has the right job unit if (jobUnit == dwarf.dwarfJobUnit) { jobUnit.status = UnitWorkerIsWorking; } } } -(DTJobUnit *)jobUnitForDwarf { //the dwarf is assigned a job unit when it reaches the floor and is assigned a job destination DTJobUnit *jobUnit = nil; for (int i = 0; i < (int)_activeJobUnits.count; i++) { DTJobUnit *tempUnit = [_activeJobUnits objectAtIndex:i]; if (tempUnit.status == UnitWaitingForWorker) { return tempUnit; } } return jobUnit; } #pragma mark - Job Handling -(void) addJob: (DTJob *)job { //at this point all other validation has already taken place if (self.state != 
JobQueueStateClosed) { [_jobQueue addObject:job]; } } -(BOOL) alreadyHaveJobOfThisType:(JobType)jobType { //saving a lot of execution time by returning early if (_activeJob.jobType == jobType) { return YES; } else { for (DTJob *job in _jobQueue) { if (job.jobType == jobType) { return YES; } } return NO; } } -(BOOL) areJobsAvailableForWork { //this method is called by the dwarf movement AI also if ([self jobSlotsOpen] && [self jobUnitsAvailableForWork] && self.state == JobQueueStateWorking) { return YES; } else { return NO; } } -(BOOL) jobSlotsOpen { //part of this is a sanity check to prevent too many workers from trying to enter the queue if ((int)_workerSlots.count < kNumWorkerSlots && (int)_workerSlots.count < (int)_activeJobUnits.count) { return YES; } else { return NO; } } -(BOOL) jobUnitsAvailableForWork { for (DTJobUnit *jobUnit in _activeJobUnits) { if (jobUnit.status == UnitWaitingForWorker || jobUnit.status == UnitWorkerAssigned) { //temporary workaround return YES; } } return NO; } -(JobType) activeJobType { return _activeJob.jobType; } #pragma mark - Pause and Cancel Jobs -(void) cancelAllJobs { [self removeDwarvesFromJobQueue]; [_activeJobUnits removeAllObjects]; _activeJob = nil; [_jobQueue removeAllObjects]; self.state = JobQueueStateIdle; } -(void) removeDwarvesFromJobQueue { for (DTDwarf *dwarf in _workerSlots) { dwarf.dwarfState = DwarfStateAbandoningJob; dwarf.dwarfJobUnit = nil; } [_workerSlots removeAllObjects]; } //currently pausing is not being used by anything -(void) pauseJobQueue { [self removeDwarvesFromJobQueue]; [self putJobUnitsBackInJob]; [_activeJobUnits removeAllObjects]; self.state = JobQueueStatePaused; } -(void) unpauseJobQueue { if ((int)_jobQueue.count > 0) { _activeJob = [_jobQueue firstObject]; [_jobQueue removeObject:[_jobQueue firstObject]]; self.state = JobQueueStateWorking; } else { self.state = JobQueueStateIdle; } } -(void) putJobUnitsBackInJob { for (DTJobUnit *jobUnit in _activeJobUnits) { jobUnit.status = 
UnitInQueue; } } -(void) putActiveJobBackInQueue { if (_activeJob != nil) { [_jobQueue insertObject:_activeJob atIndex:0]; _activeJob = nil; } } #pragma mark - Info for Rendering -(NSMutableArray *) listOfJobs { NSMutableArray *listOfJobs = [[NSMutableArray alloc]init]; if (_activeJob != nil) { [listOfJobs addObject:_activeJob]; } for (int i = 0; i < (int)_jobQueue.count; i++) { [listOfJobs addObject:[_jobQueue objectAtIndex:i]]; } return listOfJobs; } -(int) numberOfJobsInArray { return (int) _jobQueue.count; } #pragma mark - NSCoding methods -(id) initWithCoder:(NSCoder *)aDecoder { self = [super init]; if (self) { _state = [aDecoder decodeIntegerForKey:@"jobQueueState"]; _completedJobsForPickup = [aDecoder decodeObjectForKey:@"completedJobsForPickup"]; _activeJob = [aDecoder decodeObjectForKey:@"activeJob"]; _activeJobUnits = [aDecoder decodeObjectForKey:@"activeJobUnits"]; _workerSlots = [aDecoder decodeObjectForKey:@"workerSlots"]; _jobQueue = [aDecoder decodeObjectForKey:@"jobQueue"]; } return self; } -(void) encodeWithCoder:(NSCoder *)aCoder { [aCoder encodeInteger:self.state forKey:@"jobQueueState"]; [aCoder encodeObject:self.completedJobsForPickup forKey:@"completedJobsForPickup"]; [aCoder encodeObject:_activeJob forKey:@"activeJob"]; [aCoder encodeObject:_activeJobUnits forKey:@"activeJobUnits"]; [aCoder encodeObject:_workerSlots forKey:@"workerSlots"]; [aCoder encodeObject:_jobQueue forKey:@"jobQueue"]; } @end DTJob.h #import <Foundation/Foundation.h> #import "DTJobTypes.h" #import "DTJobStatus.h" #import "DTJobUnit.h" @interface DTJob : NSObject <NSCoding> -(id) initWithType:(JobType)jobType; @property JobType jobType; @property JobState status; @property CGPoint jobPosition; @property int floorNumber; @property int jobUnitsNeededToComplete; -(void) completeJobUnit:(DTJobUnit *)jobUnit; -(DTJobUnit *) jobUnitWaitingForWorker; -(BOOL) jobsAvailable; -(void) updateJob; -(void) startJob; -(void) pauseJob; @property NSMutableArray *blocksOnFloor; @property 
NSMutableArray *itemsOnFloor; @property NSMutableArray *enemiesOnFloor; @end DTJob.m: #import "DTJob.h" #import "DTJobUnit.h" #import "DTGroundBlock.h" #import "DTItem.h" #import "DTEnemy.h" /* Jobs are created by the Floors and passed to the JobQueue once validated. At this time the Floor gives the job an array of blocks, enemies, or items if the job is Mining, Fighting, or Hauling. The JobQueue starts the job, and the Job creates the proper number of JobUnits. The JobQueue asks for a valid JobUnit, and then hands it to a dwarf. Once completed the JobQueue tells the Job to complete the correct JobUnit. Once all JobUnits are completed, the Job changes its state to finished and the JobQueue handles it. */ @implementation DTJob { int _jobUnitsToCreate; NSMutableArray *_pendingJobUnits; NSMutableArray *_completedJobUnits; } -(id) initWithType:(JobType)jobType { self = [super init]; if (self) { _jobType = jobType; _jobUnitsToCreate = [self calculateJobsToCreate:jobType]; _pendingJobUnits = [[NSMutableArray alloc]init]; _completedJobUnits = [[NSMutableArray alloc]init]; _blocksOnFloor = [[NSMutableArray alloc]init]; _itemsOnFloor = [[NSMutableArray alloc]init]; _enemiesOnFloor = [[NSMutableArray alloc]init]; } return self; } -(int) calculateJobsToCreate:(JobType)jobType { switch (jobType) { case MiningJob: //this will be changed to the number of blocks case RoomUpgradeJob: //different rooms could require more workers but default is 1 case FightingJob: //this will be changed to the number of enemies case HaulJob: //this will be changed to the number of items case CleaningJob: return 1; case LadderJob: return 2; case BottomBuildJob: return 6; case WallBuildJob: return 4; case RoomBuildJob: return 10; case SuperiorWallBuildJob: return 8; default: break; } return 0; } #pragma mark - Update loop -(void) updateJob { [self checkIfJobIsFinished]; } -(void) checkIfJobIsFinished { if (_completedJobUnits.count >= self.jobUnitsNeededToComplete) { self.status = JobStateCompleted; 
[self clearAllJobUnits]; } } -(void) clearAllJobUnits { [_pendingJobUnits removeAllObjects]; [_completedJobUnits removeAllObjects]; } #pragma mark - Start and Pause Jobs -(void) startJob { if (_jobType == MiningJob) { //make a job unit for each block on the floor for (int i = 0; i < (int)self.blocksOnFloor.count; i++) { DTGroundBlock *tempBlock = [self.blocksOnFloor objectAtIndex:i]; DTJobUnit *jobUnit = [[DTJobUnit alloc]initWithJobType:_jobType]; jobUnit.position = tempBlock.blockPosition; jobUnit.unitGroundBlock = tempBlock; jobUnit.status = UnitInQueue; [_pendingJobUnits addObject:jobUnit]; } self.jobUnitsNeededToComplete = (int)self.blocksOnFloor.count; } else if (_jobType == HaulJob) { //make a job unit for each item on the floor for (int i = 0; i < (int)self.itemsOnFloor.count; i++) { DTItem *tempItem = [self.itemsOnFloor objectAtIndex:i]; DTJobUnit *jobUnit = [[DTJobUnit alloc]initWithJobType:_jobType]; jobUnit.position = tempItem.position; jobUnit.unitItem = tempItem; jobUnit.status = UnitInQueue; [_pendingJobUnits addObject:jobUnit]; } self.jobUnitsNeededToComplete = (int)self.itemsOnFloor.count; } else if (_jobType == FightingJob) { //make a job unit for each enemy on the floor for (int i = 0; i < (int)self.enemiesOnFloor.count; i++) { DTEnemy *tempEnemy = [self.enemiesOnFloor objectAtIndex:i]; DTJobUnit *jobUnit = [[DTJobUnit alloc]initWithJobType:_jobType]; jobUnit.position = tempEnemy.enemyMovement.currentPosition; jobUnit.unitEnemy = tempEnemy; jobUnit.status = UnitInQueue; [_pendingJobUnits addObject:jobUnit]; } self.jobUnitsNeededToComplete = (int)self.enemiesOnFloor.count; } else { //if not a special job case, make the preset number of jobs for (int i = 0; i < _jobUnitsToCreate; i++) { DTJobUnit *jobUnit = [[DTJobUnit alloc]initWithJobType:_jobType]; jobUnit.position = self.jobPosition; jobUnit.status = UnitInQueue; //need to make a method that randomly moves this along the x axis for each copy [_pendingJobUnits addObject:jobUnit]; } 
self.jobUnitsNeededToComplete = _jobUnitsToCreate; } } //pause not currently used by anything -(void) pauseJob { int remainingJobsCount = 0; for (int i = 0; i < _pendingJobUnits.count; i++) { remainingJobsCount++; } _jobUnitsToCreate = remainingJobsCount; [_pendingJobUnits removeAllObjects]; } #pragma mark - Job Unit Handling -(void) completeJobUnit:(DTJobUnit *)jobUnit { if (jobUnit != nil) { jobUnit.status = UnitCompleted; [_completedJobUnits addObject:jobUnit]; [_pendingJobUnits removeObject:jobUnit]; } } -(DTJobUnit *) jobUnitWaitingForWorker { DTJobUnit *jobUnitForWorker = nil; for (DTJobUnit *jobUnit in _pendingJobUnits) { if (jobUnit.status == UnitInQueue) { jobUnitForWorker = jobUnit; return jobUnitForWorker; } } return jobUnitForWorker; } -(BOOL) jobsAvailable { for (DTJobUnit *jobUnit in _pendingJobUnits) { if (jobUnit.status == UnitInQueue) { return YES; } } return NO; } #pragma mark - NSCoding methods -(id) initWithCoder:(NSCoder *)aDecoder { self = [super init]; if (self) { //basic job properties _status = [aDecoder decodeIntegerForKey:@"status"]; _jobType = [aDecoder decodeIntegerForKey:@"jobType"]; _jobPosition = [aDecoder decodeCGPointForKey:@"jobPosition"]; _floorNumber = [aDecoder decodeIntForKey:@"floorNumber"]; //job unit management _jobUnitsToCreate = [aDecoder decodeIntForKey:@"jobUnitsToCreate"]; _jobUnitsNeededToComplete = [aDecoder decodeIntForKey:@"jobUnitsNeededToComplete"]; _pendingJobUnits = [aDecoder decodeObjectForKey:@"pendingJobUnits"]; _completedJobUnits = [aDecoder decodeObjectForKey:@"completedJobUnits"]; //job objects _blocksOnFloor = [aDecoder decodeObjectForKey:@"blocksOnFloor"]; _itemsOnFloor = [aDecoder decodeObjectForKey:@"itemsOnFloor"]; _enemiesOnFloor = [aDecoder decodeObjectForKey:@"enemiesOnFloor"]; } return self; } -(void) encodeWithCoder:(NSCoder *)aCoder { //basic job properties [aCoder encodeInteger:self.status forKey:@"status"]; [aCoder encodeInteger:self.jobType forKey:@"jobType"]; [aCoder 
encodeInt:self.floorNumber forKey:@"floorNumber"]; [aCoder encodeCGPoint:self.jobPosition forKey:@"jobPosition"]; //job unit management [aCoder encodeInt:_jobUnitsToCreate forKey:@"jobUnitsToCreate"]; [aCoder encodeInt:self.jobUnitsNeededToComplete forKey:@"jobUnitsNeededToComplete"]; [aCoder encodeObject:_pendingJobUnits forKey:@"pendingJobUnits"]; [aCoder encodeObject:_completedJobUnits forKey:@"completedJobUnits"]; //job objects [aCoder encodeObject:self.blocksOnFloor forKey:@"blocksOnFloor"]; [aCoder encodeObject:self.itemsOnFloor forKey:@"itemsOnFloor"]; [aCoder encodeObject:self.enemiesOnFloor forKey:@"enemiesOnFloor"]; } @end DTJobUnit.h: #import <Foundation/Foundation.h> #import "DTJobUnitStatus.h" #import "DTJobTypes.h" #import "DTGroundBlock.h" #import "DTItem.h" #import "DTEnemy.h" @interface DTJobUnit : NSObject <NSCoding> -(id) initWithJobType:(JobType)jobType; @property JobUnitStatus status; @property JobType jobType; @property CGPoint position; //these are set by the Job when the JobUnit is created if it is a Mining, Hauling, or Fighting job @property DTGroundBlock *unitGroundBlock; @property DTItem *unitItem; @property DTEnemy *unitEnemy; @end DTJobUnit.m: #import "DTJobUnit.h" @implementation DTJobUnit /* JobUnits are created by the Job that handles them. The Job also sets and checks their status. They contain position information to give to dwarves. They will contain a reference to a block, item, or enemy if a Mining, Hauling, or Fighting job. 
*/ -(id) initWithJobType:(JobType)jobType { self = [super init]; if (self) { _status = UnitInQueue; _jobType = jobType; } return self; } #pragma mark - NSCoding methods -(id) initWithCoder:(NSCoder *)aDecoder { self = [super init]; if (self) { //basic unit properties _status = [aDecoder decodeIntegerForKey:@"status"]; _position = [aDecoder decodeCGPointForKey:@"position"]; _jobType = [aDecoder decodeIntegerForKey:@"jobType"]; //things a unit can reference _unitGroundBlock = [aDecoder decodeObjectForKey:@"unitGroundBlock"]; _unitItem = [aDecoder decodeObjectForKey:@"unitItem"]; _unitEnemy = [aDecoder decodeObjectForKey:@"unitEnemy"]; } return self; } -(void) encodeWithCoder:(NSCoder *)aCoder { //basic unit properties [aCoder encodeInteger:self.status forKey:@"status"]; [aCoder encodeCGPoint:self.position forKey:@"position"]; [aCoder encodeInteger:self.jobType forKey:@"jobType"]; //things a unit can reference [aCoder encodeObject:self.unitGroundBlock forKey:@"unitGroundBlock"]; [aCoder encodeObject:self.unitItem forKey:@"unitItem"]; [aCoder encodeObject:self.unitEnemy forKey:@"unitEnemy"]; } @end Here is the method that first assigns a JobUnit to a worker: -(void) assignDwarfJobUnit:(DTDwarf *)dwarf { DTJobUnit *jobUnit = [_floorJobQueue jobUnitForDwarf]; if (jobUnit != nil) { dwarf.dwarfJobUnit = jobUnit; dwarf.dwarfMovement.destinationPosition = jobUnit.position; dwarf.dwarfMovement.dwarfMovementState = DwarfMovingToJobPosition; jobUnit.status = UnitWorkerAssigned; } } And here is the method that sends the worker to the JobQueue when it is in position: -(void) putDwarvesToWork { for (DTDwarf *dwarf in self.dwarfArray) { if (dwarf.dwarfMovement.dwarfMovementState == DwarfAtDestinationFloor || dwarf.dwarfMovement.dwarfMovementState == DwarfAtFloorExit) { if (dwarf.dwarfState != DwarfCarryingItemToStockpile) { if ([_floorJobQueue areJobsAvailableForWork]) { if ([dwarf isJobAllowed:[_floorJobQueue activeJobType]]) { [self assignDwarfJobUnit:dwarf]; } else { 
dwarf.dwarfMovement.dwarfMovementState = DwarfIdleMovement; } } else { dwarf.dwarfMovement.dwarfMovementState = DwarfIdleMovement; } } } } } I am open to any kind of criticism, so please don't hold back. This is the first time I have tried to extensively comment my code, especially big picture comments that describe what the overall point of a class is, so I would love to know how I did. I did not include the enums this time (this is already a lot of code), and I realized as I posted this that the names still do not perfectly conform to Apple standards, but I will be fixing that next. Answer: A few things about this method bother me: -(BOOL) chooseAnActiveJob { if (_jobQueue.count > 0 && _activeJob == nil) { _activeJob = [_jobQueue firstObject]; [_jobQueue removeObjectAtIndex:0]; return YES; } else { return NO; } } First, An can be completely removed from the method title. It's better as simply: chooseActiveJob. But even this method name is bothersome. What does this method do? It makes sure there's always an active job, right? There's a WAY better approach. First, let's change _activeJob to a property: @property (nonatomic,strong) DTJob *activeJob; Now, all references to _activeJob will be replaced with self.activeJob, and this method, chooseAnActiveJob can be replaced. Now, we want to write a custom setter for activeJob. - (void)setActiveJob:(DTJob *)job { if (!job) { _activeJob = [_jobQueue firstObject]; } else { _activeJob = job; } [_jobQueue removeObject:_activeJob]; } Now, the idea here is that whenever the currently active job is complete, you set: self.activeJob = nil; And if you never want to allow the user to do anything other than add jobs to a queue and the queue be executed in order, this is all you have to do. The setter will automatically set the job at the front of the queue to the current active job. But now, if you want to somehow allow the user to say "Do this job now!", you could send a reference to that job. 
The other important thing here is an understanding of how firstObject and removeObject: work. These methods (firstObject in particular) exist to prevent having to make clunky index checks. If the array is empty, firstObject returns nil. And finally, we can call removeObject: to remove whatever object (and any copies of that object) is in the job queue, and it's not problematic to pass nil to removeObject: either. As a note, it may be desirable to also start the job that you made the currently active job here. -(void) addJob: (DTJob *)job { //at this point all other validation has already taken place if (self.state != JobQueueStateClosed) { [_jobQueue addObject:job]; } } It would seem to make sense for this method to return a bool based on whether or not the job was added to the queue. -(BOOL) areJobsAvailableForWork { //this method is called by the dwarf movement AI also if ([self jobSlotsOpen] && [self jobUnitsAvailableForWork] && self.state == JobQueueStateWorking) { return YES; } else { return NO; } } -(BOOL) jobSlotsOpen { //part of this is a sanity check to prevent too many workers from trying to enter the queue if ((int)_workerSlots.count < kNumWorkerSlots && (int)_workerSlots.count < (int)_activeJobUnits.count) { return YES; } else { return NO; } } Any method that follows this form: if (someCondition) { return YES; } else { return NO; } Can be rewritten as: return someCondition; -(NSMutableArray *) listOfJobs { NSMutableArray *listOfJobs = [[NSMutableArray alloc]init]; if (_activeJob != nil) { [listOfJobs addObject:_activeJob]; } for (int i = 0; i < (int)_jobQueue.count; i++) { [listOfJobs addObject:[_jobQueue objectAtIndex:i]]; } return listOfJobs; } A for-in loop would be faster than this indexed loop, but the bigger issue is that the loop is needlessly verbose.
The ordering here is actually fine: addObject: always appends to the end of the receiver, so the active job stays at index 0 and the queued jobs follow it in their original order. Still, rather than hand-copying the queue element by element, the NSMutableArray method addObjectsFromArray: does the whole copy in one call: [listOfJobs addObjectsFromArray:_jobQueue]; If you want the active job at index 0, then you'll want this: if (_activeJob) { [listOfJobs addObject:_activeJob]; } [listOfJobs addObjectsFromArray:_jobQueue]; If you want the active job at the back of the array, move the if after the addObjectsFromArray:. _jobUnitsToCreate = [self calculateJobsToCreate:jobType]; This line is in the init for DTJob. Truly, you shouldn't be calling methods within the class from init for the same reason you shouldn't be using self. syntax to access properties there (and instead use the underscore). And with this method, I can make a strong case for it not even being part of the class. I think it still belongs in the file, sure, but you can instead define it as a C-style function outside the class (before the @interface). int calculateJobsToCreate(JobType jobType) { // copy & paste method body here } And now just change init to: _jobUnitsToCreate = calculateJobsToCreate(jobType); -(void) updateJob { [self checkIfJobIsFinished]; } I don't understand the point of this method really. The logic from checkIfJobIsFinished could be moved into this one, eliminating checkIfJobIsFinished (its name is clunkier; also, the name doesn't make a ton of sense given that the method returns void). Most of the methods in DTJob could have better names.
It's the DTJob class, so we know we're dealing with a job, and as such, we don't have to (and shouldn't) include the word "job" in every instance method: updateJob could be simply update startJob could be simply start pauseJob could be simply pause clearAllJobUnits could be simply clearAllUnits completeJobUnit: could be completeUnit: jobUnitWaitingForWorker could be simply unitWaitingForWorker jobsAvailable could be hasAvailableUnit if (jobUnit != nil) You do this (or == nil) in a few places. For most Objective-C programmers, simply doing this is preferred: if (jobUnit) { // jobUnit is not nil or if (!jobUnit) { // jobUnit is nil And you should be doing this also. You're already doing it in all of your init methods: self = [super init]; if (self) { // self is not nil So let's be consistent. -(void) putDwarvesToWork { for (DTDwarf *dwarf in self.dwarfArray) { if (dwarf.dwarfMovement.dwarfMovementState == DwarfAtDestinationFloor || dwarf.dwarfMovement.dwarfMovementState == DwarfAtFloorExit) { if (dwarf.dwarfState != DwarfCarryingItemToStockpile) { if ([_floorJobQueue areJobsAvailableForWork]) { if ([dwarf isJobAllowed:[_floorJobQueue activeJobType]]) { [self assignDwarfJobUnit:dwarf]; } else { dwarf.dwarfMovement.dwarfMovementState = DwarfIdleMovement; } } else { dwarf.dwarfMovement.dwarfMovementState = DwarfIdleMovement; } } } } } This is nested too deeply. The first two levels can be combined in a method: - (BOOL)canPutDwarfToWork:(DTDwarf *)dwarf { if (!(dwarf.dwarfMovement.dwarfMovementState == DwarfAtDestinationFloor || dwarf.dwarfMovement.dwarfMovementState == DwarfAtFloorExit)) { return NO; } if (dwarf.dwarfState == DwarfCarryingItemToStockpile) { return NO; } return YES; } We can also combine the two innermost if checks into a single if, since their inner else and outer else are identical.
Now, we can refactor the loop body: if ([self canPutDwarfToWork:dwarf]) { if ([_floorJobQueue areJobsAvailableForWork] && [dwarf isJobAllowed:[_floorJobQueue activeJobType]]) { [self assignDwarfJobUnit:dwarf]; } else { dwarf.dwarfMovement.dwarfMovementState = DwarfIdleMovement; } }
{ "domain": "codereview.stackexchange", "id": 8565, "tags": "object-oriented, game, objective-c, queue" }
How often and for what reasons does Hubble use two different instruments at the same time?
Question: A comment under this answer to ** links to Hubblesite.org's Hubble Shoots the Moon The image, its description and credits are shown below. As explained, the purpose of the observation was to record a detailed spectrum of attenuated sunlight via its diffuse reflection from the Moon. I seem to remember a more recent effort to record the spectrum of the reddish brown light reflected from the Moon during a lunar eclipse, where sunlight passes through Earth's atmosphere, as a way of simulating exoplanet atmospheric analysis during transits of their stars, but I am not sure if this is a related precursor or not. Anyway. What I find interesting is The image was taken while the Space Telescope Imaging Spectrograph (STIS) was aimed at a different part of the moon to measure the colors of sunlight reflected off the Moon. I remember seeing a map of Hubble's primary focal plane showing that most of it is empty, wasted space, and each of the instruments -- including the cameras! -- "picks off" a small bit for itself. That means that light from different parts of the sky can simultaneously enter different instruments; we can project the map of the focal plane back to the celestial sphere. With a focal length of 57.6 m, every centimeter at the focal plane corresponds to about 36 arcseconds. For more about that see answer(s) to How many science instruments can be used in parallel with the Hubble Space Telescope? Okay so what's my question again? According to this description, while the Space Telescope Imaging Spectrograph (STIS) was collecting data from one part of the Moon, the Wide Field Planetary Camera 2 was also snapping pics of the lunar surface. Question: How often and for what main reasons does Hubble use two different instruments at the same time? About This Image In a change of venue from peering at the distant universe, NASA's Hubble Space Telescope has taken a look at Earth's closest neighbor in space, the Moon.
Hubble was aimed at one of the Moon's most dramatic and photogenic targets, the 58 mile-wide (93 km) impact crater Copernicus. The image was taken while the Space Telescope Imaging Spectrograph (STIS) was aimed at a different part of the moon to measure the colors of sunlight reflected off the Moon. Hubble cannot look at the Sun directly and so must use reflected light to make measurements of the Sun's spectrum. Once calibrated by measuring the Sun's spectrum, the STIS can be used to study how the planets both absorb and reflect sunlight. (upper left) The Moon is so close to Earth that Hubble would need to take a mosaic of 130 pictures to cover the entire disk. This ground-based picture from Lick Observatory shows the area covered in Hubble's photomosaic with the Wide Field Planetary Camera 2. (center) Hubble's crisp bird's-eye view clearly shows the ray pattern of bright dust ejected out of the crater over one billion years ago, when an asteroid larger than a mile across slammed into the Moon. Hubble can resolve features as small as 600 feet across in the terraced walls of the crater, and the hummock-like blanket of material blasted out by the meteor impact. (lower right) A close-up view of Copernicus' terraced walls. Hubble can resolve features as small as 280 feet across. CREDITS: John Caldwell (York University, Ontario), Alex Storrs (STScI), and NASA Answer: I can't answer the first one but I can give a plan for figuring it out: How often does Hubble use two different instruments at the same time? Unfortunately the Hubble search tool does not have a way to search for "parallel" exposures. So to check which exposures are parallel and which are regular single-instrument exposures, you would have to check the metadata of every frame. There is a keyword "INSTRUME" that tells you the instrument recording a given exposure, and a keyword "PRIMESI" that tells you the prime science instrument. 
Any time the value of INSTRUME is different from the value of PRIMESI, you're looking at a parallel exposure. When the keywords match it's the prime exposure. So the ratio of parallel exposures to prime exposures gives you roughly the fraction of time two instruments are working. I would guess the number is like 5% but I could be way off. Staff working at Hubble are probably already tracking this for their operational statistics. Trick answer: The Fine Guidance Sensor (FGS) is used to control telescope pointing, but it can also be used as a science instrument. So actually most of the time the telescope is using two instruments (FGS + the prime science instrument), and your question becomes how often does Hubble use three instruments at the same time. However, the science mode of FGS is different from the pointing control mode. For what main reasons does Hubble use two different instruments at the same time? One main reason is to save time in surveys that cover a large area compared to the area of an individual instrument footprint. This type of parallel observation is what they call coordinated parallel, and it's the same observing team controlling the prime and the parallel instruments. Here is a map of fields in the CANDELS survey: The offset between the blue area (WFC3 instrument data) and pink area (ACS instrument data) is directly because of the parallel exposures. You can match the size of the offset and the angle between the tiny tiles to the offsets between WFC3 and ACS in the focal plane map you mentioned: The other type of parallel observation is pure parallel. Pure parallel observations basically say "take data with a second instrument whenever some conditions are met." They are "free" in a way because they don't require extra pointing to a target, and they let two separate teams take data at the same time. The parallel data are taken while someone else is controlling the pointing and using a different prime science instrument. 
Pure parallel observations are good for when you want data in some part of the sky, but you don't care precisely where you point. One example is searches for new objects in the Kuiper belt: you could just request parallel exposures be taken whenever Hubble is pointing near the ecliptic plane.
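The INSTRUME/PRIMESI check described above can be sketched in a few lines. The header dictionaries below are toy stand-ins for real FITS headers (which you would read with a library such as astropy), and the instrument pairings are illustrative, not taken from the archive:

```python
def is_parallel(header):
    # a frame is a parallel exposure when the instrument that recorded it
    # differs from the prime science instrument for that pointing
    return header["INSTRUME"] != header["PRIMESI"]

# toy stand-ins for FITS headers; real headers carry many more keywords
frames = [
    {"INSTRUME": "STIS",  "PRIMESI": "STIS"},   # prime exposure
    {"INSTRUME": "WFPC2", "PRIMESI": "STIS"},   # parallel exposure
    {"INSTRUME": "ACS",   "PRIMESI": "WFC3"},   # parallel exposure
    {"INSTRUME": "WFC3",  "PRIMESI": "WFC3"},   # prime exposure
]

parallel_fraction = sum(is_parallel(h) for h in frames) / len(frames)
```

Running the same check over the metadata of every frame in the archive would give the ratio of parallel to prime exposures described above.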
{ "domain": "astronomy.stackexchange", "id": 6583, "tags": "observational-astronomy, space-telescope, hubble-telescope" }
Detecting drum bpm in a noisy .wav file
Question: I am looking for algorithm(s) to solve the following problem: Given a noisy .wav sound capture (some wind + friction noise on the microphone), how to detect the BPM of a soft drum beat? I have attempted googling the subject, but the results are quite poor, due to the high amount of MP3-related software for both analysis and fingerprint ID generation. None of them supply information about how to actually do it. I am aware of algorithms to remove the noise, but that still leaves me with the problem of detecting BPM. And depending on how the BPM problem is solved, it's possible that I don't even need to denoise (since the drum tends to be in the lower frequencies and the noise in higher ones, a simple low-pass might be sufficient pre-processing). Answer: One method that works if there's a relatively strong drum beat is to take the magnitude of the STFT of the waveform, and then auto-correlate it in only the time dimension. The peak of the auto-correlation function will be the beat, or a submultiple of it. This is equivalent to breaking up the signal into a lot of different frequency bands, finding the amplitude envelope of each, autocorrelating each envelope, and then summing them. The noise and other parts of the music are averaged out by the autocorrelation operation. This is because drum beats produce short-lived sound at many frequencies (vertical lines), while other parts of music are long-lived at only a few frequencies (horizontal lines), and noise is long-lived but random at all frequencies. You can see the beat repetition if you look at an STFT: I came up with this for a school project to find a single BPM value for entire music files, but it could be adapted to a stream of audio with changing BPM, too. You'd need to process chunks that are at least twice as long as the beat period you're looking for.
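As a rough sketch of the method described — magnitude STFT, autocorrelation along the time axis only, summing over frequency bands, then picking the peak lag in a plausible tempo range — here is a minimal NumPy implementation. The frame/hop sizes and the 60–240 BPM search window are arbitrary choices for illustration, not part of the original answer:

```python
import numpy as np

def estimate_bpm(x, sr, frame=1024, hop=512):
    # magnitude STFT: slice the signal into overlapping windowed frames
    n_frames = (len(x) - frame) // hop + 1
    win = np.hanning(frame)
    S = np.abs(np.array([np.fft.rfft(win * x[i * hop:i * hop + frame])
                         for i in range(n_frames)]))      # shape (time, freq)
    # remove each band's mean so autocorrelation peaks reflect periodicity
    env = S - S.mean(axis=0)
    # autocorrelate each band's envelope along time, then sum over bands
    ac = sum(np.correlate(e, e, mode="full")[len(e) - 1:] for e in env.T)
    # search only lags corresponding to 60-240 BPM
    fps = sr / hop                          # envelope frames per second
    lo, hi = int(fps * 60 / 240), int(fps * 60 / 60)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * fps / lag
```

On a synthetic click track buried in noise this recovers the tempo; as the answer notes, on real material the peak can also land on a submultiple (half-tempo), so some post-processing of candidate lags may be needed.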
{ "domain": "dsp.stackexchange", "id": 134, "tags": "audio, algorithms, noise, frequency" }
Highest lower bound on NP problems (TSP)
Question: I'll try another question that I haven't been able to find almost any information about; thanks a lot for any pointers or explanations. Is there a list of the proven lower bounds of NP algorithms (particularly for the TSP problems)? For example, one might imagine that someone has already proven that 3SAT requires at least $2n^2$ time. Thanks again Answer: The best known lower bounds for TSP are essentially the same as those for 3SAT (within polylogarithmic factors), and they rely on the fact that there are very efficient reductions from 3SAT to Hamiltonian Path. (So, a good algorithm for Hamiltonian Path would imply one for 3SAT, but certain lower bounds show that sufficiently good algorithms don't exist for 3SAT.) For the best 3SAT lower bounds known, see this answer. The bottom line is that the only non-trivial lower bounds known for these problems impose a space restriction on the candidate algorithm, as well as a time restriction. So for example, we can say TSP cannot be solved in $n^{1.5}$ time and $O(\log n)$ space, but we don't know if TSP can be solved in $n^{1.5}$ time (with no space restriction).
{ "domain": "cstheory.stackexchange", "id": 593, "tags": "cc.complexity-theory, lower-bounds, np" }
Odometry - 4ws quasiomnidirectional - Trigonometry - Strange behaviour
Question: Hi, I am trying to make odometry work on a four-wheel-steering quasi-omnidirectional base. It's loosely based on PR2 trigonometry. The problem I encounter is that everything works except Angular_Z without any Linear_X or _Y. In that case the odometry frame twists opposite to reality. I also tried another approach (different math) but got the same strange result. Has anybody else encountered something like this before? Any suggestions on how to solve it? Originally posted by Dragonslayer on ROS Answers with karma: 574 on 2020-02-01 Post score: 0 Answer: Problem solved. It was my "smart" base-controller that was the issue. EDIT: More exactly, I had it invert the wheel's linear velocity if the desired steering angle was bigger than PI/2, and adjust the steering angle accordingly. This worked with other poses because all the wheels made this move, but in the case of twist only 2 out of 4 wheels made this move. I think it might have been the covariance matrix that somehow saw this as an error and started mixing things up. Will ask that question in another post. Originally posted by Dragonslayer with karma: 574 on 2020-02-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Dragonslayer on 2020-02-05: EDIT: It wasn't the "smart" base-controller. ANSWER see: https://answers.ros.org/question/343104/odometry-4ws-quasiomnidirectional-covariance_matrix/
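The wheel-flip logic the poster describes (invert the wheel's drive direction when the commanded steering angle exceeds PI/2) can be sketched as follows; this is only an illustration of the described behaviour, not the poster's actual controller code:

```python
import math

def normalize_steer(angle, speed):
    # keep the steering command inside [-pi/2, pi/2] by flipping the
    # wheel's drive direction instead of steering past 90 degrees
    if angle > math.pi / 2:
        return angle - math.pi, -speed
    if angle < -math.pi / 2:
        return angle + math.pi, -speed
    return angle, speed
```

If only two of the four wheels take the flip branch for a pure rotation command, the wheel-level odometry and the actual base twist disagree, which matches the symptom described.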
{ "domain": "robotics.stackexchange", "id": 34365, "tags": "ros, ros-melodic, base-odometry, pr2" }
Initializing characters in a visual novel game
Question: I am making a visual novel game and I was wondering whether there is a better way of setting character attributes and initializing them than the way that I have done down below. As you can see, I did an abstract class so that I can call each character without rewriting a bunch of code. I also have an array that initializes everything. What I was wondering is, is there a better way of initializing them instead of using an array? It is just tiring to type in Init.C[1].getAge(); every time that you want to get or set a variable. What I want specifically is a way to initialize all these Character objects without cluttering up my code and, if possible, automatically initialize all of them, as you can see I tried to do in my Init piece of code. If anyone has any suggestions on how to initialize all these Character objects I would be grateful. Also, any other comments or suggestions are welcome. package visualnovel; import Characters.*; public class VisualNovel { public static void main(String[] args) { Init Init = new Init(); Init.Init(); Init.C[1].getAge(); } } Here is my Init piece of code package visualnovel; import Characters.*; public class Init { Characters C[] = new Characters[3]; public void Init(){ Characters(); } public void Characters() { C[0] = new Main(); C[1] = new Secondary(); for (int i = 0; i < 2; i++) { C[i].Init(); } } } Here is the Character Abstract Class package Characters; public abstract class Characters { private String FirstName; private String LastName; private String EyeColor; private String SkinColor; private String Sex; private String Age; private String HairColor; private static int NumofCharacters; public Characters(){ FirstName = null; LastName = null; EyeColor = null; SkinColor = null; Sex = null; Age = null; HairColor = null; } // mutators public void setFirstName(String F){ FirstName = F; } public void setLastName(String L){ LastName = L; } public void setEyeColor(String E){ EyeColor = E; } public void setSkinColor(String S){ SkinColor = S; }
public void setSex(String S){ Sex = S; } public void setAge(String A){ Age = A; } public void setHairColor(String H){ HairColor = H; } public void setDetails(String FirstName, String LastName, String Sex, String EyeColor, String HairColor, String SkinColor, String Age){ setFirstName(FirstName); setLastName(LastName); setSex(Sex); setEyeColor(EyeColor); setSkinColor(SkinColor); setAge(Age); setHairColor(HairColor); NumofCharacters++; } // Accessors public String getDetails(){ String D; D = LastName + ", " + FirstName; D += "\nSex: " + Sex; D += "\nAge: " + Age; D += "\nEyes: " + EyeColor; D += "\nSkin: " + SkinColor; D += "\nHair: " + HairColor; return D; } public String getFirstName(){ return FirstName; } public String getLastName(){ return LastName; } public String getEyeColor(){ return EyeColor; } public String getSkinColor(){ return SkinColor; } public String getSex(){ return Sex; } public String getAge(){ return Age; } public String getHairColor(){ return HairColor; } public int getNumberOfCharacters(){ return NumofCharacters; } public String toString(){ String S; S = LastName + ", " + FirstName; S += "\nSex: " + Sex; S += "\nAge: " + Age; S += "\nEyes: " + EyeColor; S += "\nSkin: " + SkinColor; S += "\nHair: " + HairColor; return S; } @Override public abstract void Init(); } Character Example Code package Characters; public class Main extends Characters{ public void Init() { setDetails("Allyson","Carter","Female","Green","Blonde","White","27"); } } Answer: Have you considered a data-driven approach? This would involve making a file format and having a file that contains the information for each character. You could then add or remove characters at will by simply updating the data file. You could make a routine which looks for a particular file or takes a path to a file and simply reads the data from it and creates characters as it reads them from the file. The file could be in any format that you find easy to work with - XML, JSON, INI, whatever.
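To make the data-driven suggestion concrete, here is a minimal sketch (in Python for brevity; the same idea maps directly to Java with any JSON library). The file layout and field names are made up for illustration:

```python
import json

def load_characters(path):
    # read every character from one data file instead of
    # hard-coding one subclass per character
    with open(path) as f:
        data = json.load(f)
    # index characters by first name for easy lookup
    return {c["firstName"]: c for c in data["characters"]}
```

Adding a new character then means adding one entry to the JSON file, with no new class and no recompilation.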
{ "domain": "codereview.stackexchange", "id": 19562, "tags": "java, object-oriented, polymorphism, role-playing-game, abstract-factory" }
Examples of animals with different number of chromosomes that can interbreed?
Question: When I first started to write this question, I wanted to know how species evolve to have a different chromosomal arrangement, such as having two pairs of chromosomes instead of one? However, I think I have resolved my misunderstanding by reading: http://scienceblogs.com/pharyngula/2008/04/21/basics-how-can-chromosome-numb/ My basic misunderstanding was that I thought animals with a different number of chromosomes couldn't reproduce, which appears to be wrong. My question is now, are there examples of animals that can interbreed that have a different number of chromosomes (particularly examples where the offspring are not sterile)? Is there a species that is known to have a variation in the number of chromosomes (which, somehow, hasn't yet resulted in complete speciation); or are there two species that can be interbred and result in fertile offspring? Answer: Such examples are not that rare: multiple species are polymorphic for supernumerary chromosomes (B-chromosomes), others are known to show intra-specific Robertsonian polymorphism (fusions of acrocentric/separation of metacentric chromosomes). One very well-studied example, which covers both intra- and inter-specific crossings, is the isopod crustaceans from the Jaera albifrons complex. They show Robertsonian variation (and additionally sometimes B-chromosomes): different species in the same location normally have different chromosomal numbers; on the macrogeographical level at least some species show certain gradients in chromosomal numbers; sometimes the same variation can be observed on the level of local populations. Hybrids between species are viable and not sterile, but rare in nature. Fecundity of hybrids is lower than that of the parental species, and this seems to serve as an additional post-zygotic mechanism of species isolation.
Crossings between specimens of the same species with different numbers of chromosomes do occur in mixed populations and are also possible for parents coming from different extremities of the above-mentioned chromosomal cline. In the case of mixed populations, the frequency of these heterozygotes is less than expected from Hardy-Weinberg. Such offspring are fertile, and their lowered frequency is associated with a higher incidence of abnormal meiosis. On the cytological level, (normal) meiosis in a heterozygote individual is remarkable in showing trivalents instead of bivalents for the polymorphic chromosomes: (a metaphase with a trivalent in the center and two bivalents) (from Staiger & Bocquet 1956) In addition, in four out of five species of the complex, chromosomal sex determination is of the following uncommon type: males are homogametic with sex chromosomes ZZ, while females are heterogametic with ZW1W2 chromosomes. So, females normally have in total an odd number of chromosomes and in their meiosis one observes a trivalent similar to the one shown above: the metacentric Z-chromosome conjugates with the two acrocentric W-chromosomes. Some references available on-line: Lécher & Prunus 1971 (in French) Solignac 1981 (English, on isolating mechanisms in general) Lécher et al 1995 (a review in English) Kappas et al 2013 (a review in English - see pages 23-26) These are highly relevant to the question, but not accessible on the web: Lécher 1967 (Cytogénétique de l'hybridation expérimentale et naturelle chez l'Isopode Jaera (albifrons) syei Bocquet) Lécher 1967 (Polysomie autosomique chez l'Isopode Jaera albifrons syei Bocquet) Staiger & Bocquet 1956 (Les chromosomes de la super-espèce Jaera marina (Fabr.) et de quelques autres Janiridae (Isopodes Asellotes))
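For reference, the Hardy-Weinberg expectation mentioned above is straightforward to compute: for two chromosomal variants at frequencies p and q = 1 - p, random mating predicts a heterozygote frequency of 2pq, and an observed deficit relative to this is the signal described. A toy calculation (the frequencies are made up, not from the Jaera data):

```python
def hw_expected_heterozygotes(p):
    # expected heterozygote frequency under Hardy-Weinberg equilibrium
    q = 1.0 - p
    return 2.0 * p * q

expected = hw_expected_heterozygotes(0.3)   # 2 * 0.3 * 0.7 = 0.42
observed = 0.30                             # hypothetical observed frequency
deficit = expected - observed               # positive deficit = fewer
                                            # heterozygotes than expected
```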
{ "domain": "biology.stackexchange", "id": 1939, "tags": "genetics, chromosome, speciation" }
Why should the integral of wavefunction squared be normalized for a Quantum Object?
Question: According to the statistical interpretation of Quantum Physics, a particle does not have a precise position regardless of any measurements. But then, the interpretation imposes another condition on the wavefunction of a quantum object - the integral of the wavefunction's squared magnitude should be equal to one, i.e. the total probability of finding that particle anywhere in space should be one, because the particle must exist somewhere. Isn't this contradictory? On one hand it states that the particle does not have a position to begin with, but on the other it states that the particle has to exist somewhere. I am a complete novice in this subject so pardon me if I don't make sense. Answer: How is it contradictory? The contradiction is with the assumption that a particle must have a position in order to exist. This is clearly not true. As Dirac put it “In the general case we cannot speak of an observable having a value for a particular state, but we can … speak of the probability of its having a specified value for the state, meaning the probability of this specified value being obtained when one makes a measurement of the observable.” Dirac P.A.M., 1958, Quantum Mechanics, Clarendon Press, p.47. Even in the macroscopic world position exists only as a relationship with other matter. You cannot say where you are unless you say where you are relative to something else. I am in this room. This room is in this building. The building is in this town. and so on. You cannot say where you are in space. In the macroscopic world, position always exists because objects are in continuous interaction with their environment. This is not true in the quantum world. A particle may have too few interactions with its environment to generate a precise property of position. Of course, this does not mean that the particle does not exist.
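Numerically, the normalization condition just fixes an overall constant on the wavefunction. A small sketch (a Gaussian wave packet is assumed purely for illustration):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                     # unnormalized wavefunction
norm = (np.abs(psi)**2 * dx).sum()          # ∫ |ψ|² dx before normalization
psi = psi / np.sqrt(norm)                   # rescale so probabilities sum to 1
total_probability = (np.abs(psi)**2 * dx).sum()
```

Dividing by the square root of the integral makes `total_probability` equal to 1 by construction; nothing about this step assigns the particle a definite position.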
{ "domain": "physics.stackexchange", "id": 70358, "tags": "quantum-mechanics, wavefunction, probability, wave-particle-duality" }
Identity of bosonic coherent states
Question: I have a short question about the meaning of the identity of the bosonic coherent states. Before I ask the question I will explain some background. The eigenstate of the bosonic annihilation operator $\widehat{a}$ is: $$|\phi\rangle = \sum_{n}\frac{\phi^{n}}{\sqrt{n!}}\frac{\left(\widehat{a}^{\dagger}\right)^{n}}{\sqrt{n!}}|vac\rangle = e^{\phi \widehat{a}^{\dagger}}|vac\rangle $$ It satisfies $\widehat{a}|\phi\rangle = \phi|\phi\rangle$. I assume that we can see the operator $\widehat{a}$ as destroying a boson particle at a specific position as an example. The eigenstate for the set of bosonic annihilation operators $ \left\{\widehat{a}_{x} \right\}$ is: $$|\vec{\phi}\rangle = e^{\sum_{x}\phi_{x}\widehat{a}_{x}^{\dagger}}|vac\rangle$$ It satisfies $\widehat{a}_{i}|\vec{\phi}\rangle = \phi_{i}|\vec{\phi}\rangle$. Note $x$ represents different positions. The identity operator is: $$\widehat{I} = \int \left(\prod_{x} \frac{d\Re{\phi_{x}}d\Im{\phi_{x}}}{\pi} \right)e^{-\sum_{x}\phi_{x}^{*}\phi_{x}}|\phi\rangle \langle \phi|$$ My question is, what is the meaning of $|\phi\rangle \langle \phi|$ in the integral? Is $|\phi\rangle$ in the integral the eigenstate of the annihilation operator at the specific position $x$? Answer: The state in the integral is the joint eigenstate of each of the annihilation operators. The annihilation operators are for particular modes labeled here by $x$. I will clarify what I think the notation should be, for consistency. The vector states should be $$|\vec{\phi}\rangle=e^{\sum_x \phi_x \hat{a}^\dagger_x}|\mathrm{vac}\rangle$$ such that they are eigenstates of the annihilation operators $\hat{a}_x$ with eigenvalues $\phi_x$. It is assumed that operators for different modes commute and that the modes are bosonic: $[\hat{a}_x,\hat{a}^\dagger_y]=\delta_{xy}$. Normally we don't use $x$ to represent a discrete index but alas (OP has sums over $x$, so it is likely to be a discrete parameter).
Then we note that the states are not normalized; the normalized state in this notation is $e^{-\sum_x |\phi_x|^2/2}|\vec{\phi}\rangle$. These normalized states are what go into the resolution of identity; hence the factor of $e^{-\sum_x |\phi_x|^2/2}\times (e^{-\sum_x |\phi_x|^2/2})^*=e^{-\sum_x |\phi_x|^2}$. Another way of rewriting the whole thing, making explicit that each mode is independent because one should learn about the single-mode case before the multimode version, is $$I=\bigotimes_x\int \frac{d\Re{\phi_x} d\Im{\phi_x}}{\pi}e^{-|\phi_x|^2}e^{\phi_x \hat{a}_x^\dagger}|\mathrm{vac}\rangle\langle \mathrm{vac}|e^{\phi_x^* \hat{a}_x}=\bigotimes_x\int \frac{d\Re{\phi_x} d\Im{\phi_x}}{\pi}e^{-|\phi_x|^2}|\phi_x\rangle\langle \phi_x|,$$ where the symbol $\bigotimes_x$ implies a tensor product over multiple modes (the single-mode case just removes that symbol and selects one $x$).
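The role of the $e^{-|\phi|^2}$ weight can be checked numerically for a single mode. Writing $\phi=\rho e^{i\theta}$, the angular integral forces $m=n$ in $\langle m|\widehat{I}|n\rangle$, leaving the radial integrals computed below, each of which must equal 1. This verification script is an illustration added here, not part of the original answer:

```python
import numpy as np
from math import factorial

# after the angular integration, <n| I |n> reduces to
#   ∫_0^∞ 2ρ e^{-ρ²} ρ^{2n} / n!  dρ  =  1   for every n
# (substitute u = ρ² to see this is ∫ e^{-u} u^n / n! du = 1)
rho = np.linspace(0.0, 8.0, 4001)
drho = rho[1] - rho[0]
diagonals = []
for n in range(10):
    integrand = 2 * rho * np.exp(-rho**2) * rho**(2 * n) / factorial(n)
    diagonals.append((integrand * drho).sum())   # ≈ 1 for each n
```

Without the $e^{-|\phi|^2}$ factor (i.e. using the unnormalized states) the diagonal entries would be $n!\,\cdot$ larger, so the integral would not resolve the identity.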
{ "domain": "physics.stackexchange", "id": 98724, "tags": "condensed-matter, hilbert-space, many-body, coherent-states" }
Exploration in Q learning: Epsilon greedy vs Exploration function
Question: I am trying to understand how to make sure that our agent explores the state space enough before exploiting what it knows. I am aware that we use the epsilon-greedy approach with a decaying epsilon to achieve this. However I came across another concept, that of using exploration functions to make sure that our agent explores the state space. Q Learning with Epsilon Greedy $\text{sample} = R(s,a,s') + \gamma \max_{a'}Q(s',a')$ $Q(s,a) \leftarrow (1 - \alpha)\,Q(s,a) + \alpha\cdot\text{sample}$ Exploration function $ f(u,n) = u + k/n $ $Q(s,a) \leftarrow R(s,a,s') + \gamma \max_{a'} f(Q(s',a'), N(s',a'))$ where $N(s',a')$ counts the number of times you have seen this $(s',a')$ pair before. Now my question is, are these 2 strategies above just 2 different ways of doing Q learning? Because I can't seem to find many details on exploration functions but can find plenty on epsilon-greedy Q learning. Or can the exploration function be used along with the epsilon-greedy Q learning algorithm as a form of some optimization? I am confused as to where exactly we would make use of this exploration function Q learning strategy. Any help/suggestions are much appreciated! Answer: Any exploration function that ensures the behaviour policy covers all possible actions will work in theory with Q learning. By covers I mean that there is a non-zero probability of selecting each action in each state. This is required so that all estimates will converge on true action values given enough time. As a result, there are many ways to construct behaviour policies. It is possible in Q learning to use equiprobable random action choice - ignoring current Q values - as a behaviour policy, or even with some caveats learn from observations of an agent that uses an unknown policy. There are practical concerns: If the behaviour policy is radically different from the optimal policy, then learning may be slow as most information collected is not relevant or has very high variance once adjusted to learn the value of the target policy.
When using function approximation - e.g. in DQN with a neural network - the distribution of state-action pairs seen has an impact on the approximation. It is desirable to have a similar population of input data to that which the target policy would generate*. In some situations, a consistent policy over multiple time steps gives better exploration characteristics. Examples of this occur when controlling agents navigating physical spaces that may have to deviate quite far from the current best guess at optimal in order to discover new high-value rewards. These concerns drive designs of different exploration techniques. The epsilon-greedy approach is very popular. It is simple, has a single parameter which can be tuned for better learning characteristics for any environment, and in practice often does well. The exploration function you give attempts to address the last bullet point. It adds complexity, but may be useful in a non-stationary environment since it encourages occasional exploration paths far away from the current optimal one. Now my question is, are these 2 above strategies just 2 different ways of Q learning? Yes. Or can the exploration function be used along with the epsilon greedy Q learning algorithm as a form of some optimization? Yes, it should be possible to combine the two approaches. It would add complexity, but it might offer some benefit in terms of learning speed, stability or ability to cope with non-stationary environments. Provided the behaviour policy covers all possible actions over the long term, then your choice of how exploration for off-policy reinforcement learning works is one of the hyperparameters for a learning agent. Which means that if you have an idea for a different approach (such as combining two that you have read about), then you need to try it to see if it helps for your specific problem.
* But not necessarily identical, because the approximate Q function would then lose learning inputs that differentiate between optimal and non-optimal behaviour. This can be a tricky balance to get right, and is an issue with catastrophic forgetting in reinforcement learning agents.
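A minimal sketch of combining the two approaches (an illustration under assumed conventions, not a standard named algorithm): act epsilon-greedily, but rank actions by the optimistic value f(Q, N) = Q + k/(N + 1), where the +1 avoids division by zero for never-visited pairs.

```python
import numpy as np

def select_action(Q, N, state, eps, k, rng):
    # with probability eps explore uniformly; otherwise be greedy
    # w.r.t. the count-based optimistic values f(Q, N) = Q + k / (N + 1)
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state] + k / (N[state] + 1)))

def q_update(Q, N, s, a, r, s2, alpha, gamma, k):
    # exploration function inside the bootstrap target, as in the question
    target = r + gamma * np.max(Q[s2] + k / (N[s2] + 1))
    Q[s, a] += alpha * (target - Q[s, a])
    N[s, a] += 1
```

With eps = 0 this reduces to pure exploration-function Q learning; with k = 0 it reduces to plain epsilon-greedy Q learning, so the two strategies become the endpoints of one parameterised family.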
{ "domain": "datascience.stackexchange", "id": 9508, "tags": "machine-learning-model, q-learning" }
Does an induction cooktop attract or repel the base of a pan?
Question: My first guess would be that the frying pan would be attracted to the induction coil just as any ferrous metal to a solenoid. But I recently found out that an induction coil can actually be designed to levitate (repel) the work piece in some designs (induction levitation). Are the forces between an induction coil and a metal object always attractive/repulsive/neither or is it a function of the induction coils oscillation frequency or something else? Answer: An induction cooktop repels the base of the pan. My first guess would be that the frying pan would be attracted to the induction coil just as any ferrous metal to a solenoid. If we were to supply a direct current (DC) to the induction coil in an induction cooker, as you say, the coil acts like an electromagnet with fixed polarities and would attract the pan. However using a direct current will not lead to induced currents in the pan. And hence, no heating = no dinner! Here the case is different, we supply an alternating current (AC) to the induction coil. An alternating current produces an alternating magnetic field. And a time varying magnetic field induces current in the base of the pan as per Faraday's law of electromagnetic induction. These induced currents, known as Eddy currents, cause heating of the pan which then cooks your food. The following video by Veritasium demonstrates electromagnetic levitation of an aluminium plate of mass $1~\rm kg$. An alternating current is applied to the induction coil with an rms current of $800~\rm A$ and frequency $900~\rm Hz$. Levitating Barbecue! Electromagnetic Induction - YouTube Are the forces between an induction coil and a metal object always attractive/repulsive/neither or is it a function of the induction coils oscillation frequency or something else? The force between an induction coil and the pan is always repulsive. As per the Lenz's law, the direction of induced current in the pan is in such a way that it opposes the change in magnetic field. 
If the magnetic field weakens, the induced current reinforces the field. On the other hand, if the magnetic field strengthens, the induced current tries to weaken the field. The magnetic interactions between the induction coil and the pan are hence always repulsive, as their magnetic moments always oppose each other. This is somewhat similar to keeping the like poles of two bar magnets close to each other. The nature of the force (attraction/repulsion) does not depend on the frequency, voltage, or other parameters, as these laws of electromagnetism must always hold. However, these parameters might have an impact on the magnitude of the repulsive force. This is the reason why we don't see the pan levitating on top of the induction cooker as it did in the video linked above! The following quote is from the website - Explain that stuff: Although your home power supply alternates at about $50–60~\rm Hz$ ($50–60$ times per second), an induction cooktop boosts this by about $500–1000$ times (typically to $20–40~\rm kHz$). Since that's well above the range most of us can hear, it stops any annoying, audible buzzing. No less importantly, it prevents magnetic forces from shifting the pan around on the cooktop. This explains why we hear a buzzing noise in the video linked above and not in our induction cookers. I was unable to find a source giving the rms value of current that passes through the induction coil. I can say for sure that the peak current must be much less than $800~\rm A$ (used to lift the aluminium plate). Using a comparatively lower current also helps in minimizing Joule heating in the induction coil, thus relieving some stress on the exhaust fan.
{ "domain": "physics.stackexchange", "id": 67587, "tags": "electromagnetism, everyday-life, electromagnetic-induction" }
Why are planetary systems so rare?
Question: According to this site there are 258 known planetary systems and 302 planets. Most of the listed systems have only one planet of Mercury's or Mars's size, while our system has up to 8 planets. From what I know our Sun is not a "big" star, so theoretically other, bigger stars probably threw out more planetary material and should have many more planets. Humans know about billions of stars, and since planets are made from what remains after a star's birth, why aren't there billions and billions of planets? Answer: There may well be billions of planets in our Galaxy. All stars in our Galaxy are in relative motion, and occasionally a star passes very close to the line between observers on Earth and a background star. The light from the background star is deflected by the gravitational field in a phenomenon called gravitational microlensing. The gravity of a planet orbiting the host star can also deflect the light from the background star and thus make the planet's presence known. There has been a recent discovery by gravitational microlensing of a large population of "free-floating" Jupiter-mass planets, or "nomads". These are planets which do not orbit a host star. The research suggests that there could be almost two Jupiter-mass nomad planets for every star in the Galaxy. See http://arxiv.org/abs/1105.3544v1. Other work suggests that there could be as many as 100,000 nomad planets (of masses ranging all the way down to almost Pluto's mass) per main sequence star; see http://adsabs.harvard.edu/abs/2012MNRAS.423.1856S.
{ "domain": "physics.stackexchange", "id": 4226, "tags": "astronomy, planets" }
Calculation of distance between samples in data mining
Question: I am confused about a little issue related to distance calculation. What I want to know is, while calculating the distance between samples in classification or regression, is the label or output class also used, or is the distance calculated using all other attributes, excluding the label attribute? Answer: The way that various distances are often calculated in data mining is using the Euclidean distance. You can read about that further here. If I understand your question correctly, the answer is no: the label is not used, and the distance is computed over the feature attributes only. The Euclidean distance can only be calculated between two numerical points. Therefore it would not be possible to calculate the distance between a label and a numeric point.
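To make that concrete, here is a minimal sketch (plain Python, with hypothetical samples in which the last column is the class label): the label column is dropped before the Euclidean distance is computed.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length numeric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two samples: the last entry is the class label and is excluded.
sample_1 = [1.0, 2.0, 3.0, "spam"]
sample_2 = [4.0, 6.0, 3.0, "ham"]

features_1 = sample_1[:-1]  # drop the label column
features_2 = sample_2[:-1]

print(euclidean_distance(features_1, features_2))  # → 5.0
```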
{ "domain": "datascience.stackexchange", "id": 3939, "tags": "machine-learning, data-mining, distance" }
2nd order perturbation theory for harmonic oscillator
Question: I'm having some trouble calculating the 2nd order energy shift in a problem. I am given the perturbation: $\hat{H}'=\alpha \hat{p}$, where $\alpha$ is a constant, and $\hat{p}$ is given by: $p=i\sqrt{\frac{\hbar m\omega }{2}}\left( {{a}_{+}}-{{a}_{-}} \right)$, where ${a}_{+}$ and ${a}_{-}$ are the usual ladder operators. Now, according to my book, the 2nd order energy shift is given by: $E_{n}^{2}=\sum\limits_{m\ne n}{\frac{{{\left| \left\langle \psi _{m}^{0} \right|H'\left| \psi _{n}^{0} \right\rangle \right|}^{2}}}{E_{n}^{0}-E_{m}^{0}}}$ Now, what I have tried to do is to calculate the term inside the power of 2. And so far I have done this: $\begin{align} & E_{n}^{1}=\alpha i\sqrt{\frac{\hbar m\omega }{2}}\int{\psi _{m}^{*}\left( {{{\hat{a}}}_{+}}-{{{\hat{a}}}_{-}} \right)}\,{{\psi }_{n}}\,dx=\alpha i\sqrt{\frac{\hbar m\omega }{2}}\left( \int{\psi _{m}^{*}\,{{{\hat{a}}}_{+}}{{\psi }_{n}}\,dx-\int{\psi _{m}^{*}\,{{{\hat{a}}}_{-}}{{\psi }_{n}}\,dx}} \right) \\ & =\alpha i\sqrt{\frac{\hbar m\omega }{2}}\left( \sqrt{n+1}\int{\psi _{m}^{*}\,{{\psi }_{n+1}}\,dx-\sqrt{n}\int{\psi _{m}^{*}\,{{\psi }_{n-1}}\,dx}} \right) \end{align} $ As you can see, I end up with the two integrals. But I don't know what to do next, because if $m > n$, and only by 1, then the first integral will be 1, and the other will be zero. And if $n > m$, only by 1, then the second integral will be 1, and the first will be zero. Otherwise both will be zero. And it seems wrong to have to make two expressions for the energy shift for $n > m$ and $m > n$. So am I on the right track, or doing it totally wrong? Thanks in advance. Regards Answer: Well, you end up with integrals, but those are very, very easy to solve for the harmonic oscillator, since your problem is already formulated in terms of the raising and lowering operators $a_+$ and $a_-$.
Recall that $$a_+ | n \rangle = \sqrt{n+1} | n+1 \rangle$$ $$a_- | n \rangle = \sqrt{n} | n - 1 \rangle$$ $$ a_+ a_- | n \rangle = n | n \rangle$$ where $|n\rangle$ is short-hand for $|\psi_n^0 \rangle$. These relations make it almost trivial to compute matrix elements involving eigenstates of the harmonic oscillator and those operators. Just as an example, let's prove that all eigenstates have zero expectation value for $x$: We know $x$ is proportional to $a_+ + a_-$. Inserting that into the matrix element gives us $$\langle n | x | n \rangle \propto \langle n | a_+ | n \rangle + \langle n | a_- | n \rangle = \sqrt{n+1} \langle n | n+1\rangle + \sqrt{n} \langle n | n- 1 \rangle = 0$$ because eigenstates are orthogonal. EDIT: Continuing with the derivation where you left off, you see that you get a non-zero contribution only if $m = n+1$ or $m = n-1$. So in the infinite sum over all states $m \not= n$, only two terms will contribute, making it possible to easily carry out that sum: Just add those two non-zero terms.
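Not part of the original answer, but a quick numerical cross-check (a sketch in natural units $\hbar = m = \omega = 1$, with an assumed strength $\alpha = 0.1$): completing the square in $H = p^2/2m + \frac{1}{2}m\omega^2 x^2 + \alpha p$ shows the exact shift is $-m\alpha^2/2$ for every level, and the two surviving terms of the second-order sum reproduce exactly that.

```python
hbar = m = omega = 1.0   # natural units
alpha = 0.1              # assumed perturbation strength in H' = alpha * p

def E0(n):
    """Unperturbed harmonic-oscillator energies."""
    return hbar * omega * (n + 0.5)

def second_order_shift(n):
    """E_n^(2) for H' = alpha*p: only m = n+1 and m = n-1 survive in the sum."""
    c2 = alpha**2 * hbar * m * omega / 2          # |alpha|^2 * (hbar m omega / 2)
    shift = c2 * (n + 1) / (E0(n) - E0(n + 1))    # m = n+1 term: |<n+1|a_+|n>|^2 = n+1
    if n >= 1:
        shift += c2 * n / (E0(n) - E0(n - 1))     # m = n-1 term: |<n-1|a_-|n>|^2 = n
    return shift

exact = -m * alpha**2 / 2                         # from completing the square
for n in range(4):
    assert abs(second_order_shift(n) - exact) < 1e-12
```

The $n$-dependence cancels between the two terms, which is why no case distinction between $m > n$ and $n > m$ survives in the final result.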
{ "domain": "physics.stackexchange", "id": 7665, "tags": "homework-and-exercises, harmonic-oscillator" }
Quantum harmonic oscillator in thermodynamics
Question: I'm trying to understand the microcanonical ensemble in thermodynamics using the quantum harmonic oscillator. The Hamiltonian of the whole system is given by $$ H = \hbar\omega\sum\limits_{i=1}^N \left(a_i^\dagger a_i + \frac{1}{2}\right),$$ where $N$ is the total number of oscillators. I want to consider the case where $N=3$ and where the ensemble is formed by all states with total energy $\frac{9}{2}\hbar\omega$. Then there are 10 states in the ensemble. I'm now wondering how to calculate the probability $p(\epsilon)$ to find one specific oscillator with energy $\epsilon$. Since this is very new stuff to me, I don't quite know how to approach such a problem. Is there anybody who can show me how to find that probability? Answer: In the micro-canonical ensemble, if the given energy has a degenerate eigenspace of the hamiltonian, then you just take an orthonormal basis for that eigenspace and take an incoherent combination of those states as your density matrix. (Exercise for the reader: show that $\rho$, thusly defined, is independent of that choice of basis.) For your specific case, rephrased without the boring zero-point energies as $ H = \hbar\omega\sum\limits_{i=1}^N a_i^\dagger a_i $, for $N=3$ oscillators and with total energy $3\hbar\omega$, giving $10$ states in the ensemble, we have the states $$\{|3,0,0⟩,|2,0,1⟩,|2,1,0⟩,|1,2,0⟩, |1,1,1⟩,|1,0,2⟩,|0,3,0⟩, |0,2,1⟩,|0,1,2⟩, |0,0,3⟩\}$$ as an orthonormal basis of that eigenspace, so you just take $\rho$ to be that combination, $$ \rho = \frac{1}{10}\sum_{n_1+n_2+n_3=3} |n_1,n_2,n_3⟩⟨n_1,n_2,n_3|.
$$ To calculate the measurement result distribution for a single-oscillator quantity like one of the individual sub-hamiltonians, you can just trace out the other hamiltonians as $$ \rho_1 = \mathrm{Tr}_{2,3}\rho, $$ where the partial trace is the unique linear map such that $$ \mathrm{Tr}_{2,3}|n_1,n_2,n_3⟩⟨n_1,n_2,n_3| =|n_1⟩⟨n_1|\ \mathrm{Tr}\bigg[|n_2,n_3⟩⟨n_2,n_3|\bigg] =|n_1⟩⟨n_1|, \tag{$*$} $$ giving you a result of the form $$ \rho_1= p_1|n_1⟩⟨n_1| + p_2|n_2⟩⟨n_2| + p_3|n_3⟩⟨n_3|, $$ which then gives you the probabilities of getting the measurement results associated with each of those component states. It might be beneficial to go through this in full but the $E=3\hbar \omega$ case is a lot of drudgework so I'll do the $E=\hbar \omega$ case instead. Here you have three relevant states, which means that your full state is $$ \rho = \frac{1}{3}\bigg( |1,0,0⟩⟨1,0,0| + |0,1,0⟩⟨0,1,0| + |0,0,1⟩⟨0,0,1| \bigg). $$ You then want the reduced state, which you get from the full state via tracing out oscillators $2$ and $3$: you apply the partial trace, you break it up to the individual factors by linearity, you apply it via $(*)$ to each term, and then you add everything up: \begin{align} \rho_1 & = \mathrm{Tr}_{2,3}(\rho) \\ & = \frac{1}{3}\mathrm{Tr}_{2,3}\bigg( |1,0,0⟩⟨1,0,0| + |0,1,0⟩⟨0,1,0| + |0,0,1⟩⟨0,0,1| \bigg) \\ & = \frac{1}{3}\bigg[ \mathrm{Tr}_{2,3}\big(|1,0,0⟩⟨1,0,0|\big) + \mathrm{Tr}_{2,3}\big(|0,1,0⟩⟨0,1,0|\big) + \mathrm{Tr}_{2,3}\big(|0,0,1⟩⟨0,0,1|\big) \bigg] \\ & = \frac{1}{3}\bigg[ |1⟩⟨1|\mathrm{Tr}\big(|0,0⟩⟨0,0|\big) + |0⟩⟨0|\mathrm{Tr}\big(|1,0⟩⟨1,0|\big) + |0⟩⟨0|\mathrm{Tr}\big(|0,1⟩⟨0,1|\big) \bigg] \\ & = \frac{1}{3}\bigg[ |1⟩⟨1| + |0⟩⟨0| + |0⟩⟨0| \bigg] \\ & = \frac{1}{3}|1⟩⟨1| + \frac{2}{3} |0⟩⟨0|. \end{align} Hopefully that makes things clearer.
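As a further cross-check (my addition, not from the answer): because the ensemble is an equal-weight mixture over the microstates, $p(\epsilon)$ for one oscillator can also be read off by simple counting. This short Python sketch is the combinatorial counterpart of the partial trace.

```python
from itertools import product
from fractions import Fraction
from collections import Counter

n_osc, total = 3, 3   # three oscillators, total excitation energy 3*hbar*omega

# The microcanonical ensemble: all microstates (n1, n2, n3) with n1+n2+n3 = total
states = [s for s in product(range(total + 1), repeat=n_osc) if sum(s) == total]
assert len(states) == 10

# p(epsilon): probability that oscillator 1 holds n quanta,
# i.e. has energy epsilon = hbar*omega*(n + 1/2)
counts = Counter(s[0] for s in states)
probs = {n: Fraction(c, len(states)) for n, c in counts.items()}
# n = 0, 1, 2, 3 occurs in 4, 3, 2, 1 of the 10 states respectively
```

For the $E = \hbar\omega$ case worked out above, the same counting gives the $\frac{1}{3}, \frac{2}{3}$ weights of $\rho_1$ directly.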
{ "domain": "physics.stackexchange", "id": 44323, "tags": "quantum-mechanics, homework-and-exercises, thermodynamics, harmonic-oscillator, probability" }
Flat metric induced from Schwarzschild
Question: The task is to find a function $f(r)$ such that the induced metric from the Schwarzschild metric $$ds^2 = -\left(1-\frac{2m}{r}\right) dt^2 + \frac{1}{1-\frac{2m}{r}} dr^2 + r^2 d\Omega^2 $$ on the level set $\{t=f(r)\}$ is flat. My first attempt was to guess $$ dt=0 = f'(r)dr$$ but it led me nowhere. My other idea was to introduce advanced and retarded coordinates $v=t-r$ and $u=t+r$ but there also I'm stuck. Maybe someone could give a guideline, a hint or a direction. Answer: You have $dt=f'dr$ so $ds^2=(\varphi^{-1}-f'^2\varphi)dr^2+r^2d\Omega^2$ with $\varphi:=1-\frac{2m}{r}$. We want the $dr^2$ coefficient to be $1$, so $f'^2=\frac{1-\varphi}{\varphi^2}$. You can take the rest from there.
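A sympy sketch (my addition, not part of the answer) confirms the hint: with $f'^2 = (1-\varphi)/\varphi^2$, the $dr^2$ coefficient of the induced metric collapses to $1$, so the slice carries the flat metric $dr^2 + r^2 d\Omega^2$. (This is, in fact, the slicing used in Painlevé–Gullstrand coordinates.)

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
phi = 1 - 2*m/r                      # the Schwarzschild factor 1 - 2m/r
fprime = sp.sqrt(1 - phi) / phi      # a solution of f'^2 = (1 - phi)/phi^2

# Coefficient of dr^2 in the induced metric on the slice t = f(r)
coeff = sp.simplify(1/phi - fprime**2 * phi)
assert sp.simplify(coeff - 1) == 0   # so ds^2 = dr^2 + r^2 dOmega^2: flat
```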
{ "domain": "physics.stackexchange", "id": 40959, "tags": "homework-and-exercises, general-relativity, metric-tensor" }
Solution to Codejam 2019 1A (Pylons) in C
Question: The following is my solution to the Pylons problem from Codejam 2019: https://codingcompetitions.withgoogle.com/codejam/round/0000000000051635/0000000000104e03. Our Battlestarcraft Algorithmica ship is being chased through space by persistent robots called Pylons! We have just teleported to a new galaxy to try to shake them off of our tail, and we want to stay here for as long as possible so we can buy time to plan our next move... but we do not want to get caught! This galaxy is a flat grid of R rows and C columns; the rows are numbered from 1 to R from top to bottom, and the columns are numbered from 1 to C from left to right. We can choose which cell to start in, and we must continue to jump between cells until we have visited each cell in the galaxy exactly once. That is, we can never revisit a cell, including our starting cell. We do not want to make it too easy for the Pylons to guess where we will go next. Each time we jump from our current cell, we must choose a destination cell that does not share a row, column, or diagonal with that current cell. Let (i, j) denote the cell in the i-th row and j-th column; then a jump from a current cell (r, c) to a destination cell (r', c') is invalid if and only if any of these is true: r = r' c = c' r - c = r' - c' r + c = r' + c' Can you help us find an order in which to visit each of the R × C cells, such that the move between any pair of consecutive cells in the sequence is valid? Or is it impossible for us to escape from the Pylons? In summary, the task is to visit every cell in an RxC grid exactly once, without jumping to a cell that shares a row, column, or diagonal with the previous cell. If it is possible to visit every cell, the solution should print the sequence of steps. This is a standard backtracking problem, but there is a caveat. We want to check cells in "random" order, or otherwise the solution will be too slow. The code is correct, i.e. it gets two green checkmarks. 
I'd appreciate any feedback!

#include <stdbool.h>
#include <stdio.h>

#define MAX_R 20
#define MAX_C 20

bool backtrack(bool visited_cells[][MAX_C], int sequence[][2], int R, int C, int visited_count, int r, int c) {
    // Total number of cells
    int N = R * C;

    // Return true if we have visited every cell.
    if (visited_count == N) {
        return true;
    }

    // Otherwise, try every legal jump in "random" order.
    visited_count++;
    for (int i = 0; i < N; i++) {
        // Next row and column to visit
        // Note that checking cells in consecutive order will be very slow.
        int nrc = i * 29 % N;
        // Next row
        int nr = nrc / C;
        // Next column
        int nc = nrc % C;

        // Skip the cell if we have already visited it.
        if (visited_cells[nr][nc]) {
            continue;
        }

        // Skip any invalid jumps.
        if (visited_count > 1 && (r == nr || c == nc || r - c == nr - nc || r + c == nr + nc)) {
            continue;
        }

        // Record our chosen cell.
        visited_cells[nr][nc] = true;
        sequence[visited_count-1][0] = nr;
        sequence[visited_count-1][1] = nc;

        // Recurse and try to visit the rest of the cells.
        if (backtrack(visited_cells, sequence, R, C, visited_count, nr, nc)) {
            // We have found a solution.
            return true;
        }

        // If we failed, undo our choice and try another cell.
        visited_cells[nr][nc] = false;
    }

    // We haven't found a solution.
    return false;
}

bool solve(int sequence[][2], int R, int C) {
    // Array to keep track of visited cells.
    bool visited_cells[MAX_R][MAX_C];

    // Initially, we haven't visited any cells.
    for (int r = 0; r < R; r++) {
        for (int c = 0; c < C; c++) {
            visited_cells[r][c] = false;
        }
    }

    // Try to visit every cell using exhaustive search.
    return backtrack(visited_cells, sequence, R, C, 0, -1, -1);
}

int main(void) {
    // Number of test cases
    int T = 0;
    scanf("%d", &T);

    // Solve each test case.
    for (int t = 1; t <= T; t++) {
        // Number of rows
        int R = 0;
        // Number of columns
        int C = 0;
        scanf("%d %d", &R, &C);

        // Array to keep track of a solution
        int sequence[R * C][2];

        // Solve the case.
        bool possible = solve(sequence, R, C);

        // Print our verdict.
        printf("Case #%d: %s\n", t, possible ? "POSSIBLE" : "IMPOSSIBLE");
        if (possible) {
            // Print our solution sequence.
            for (int i = 0; i < R * C; i++) {
                printf("%d %d\n", sequence[i][0] + 1, sequence[i][1] + 1);
            }
        }
    }
}

Example output:

$ cat tests.txt
2
2 2
2 5
$ ./solution < tests.txt
Case #1: IMPOSSIBLE
Case #2: POSSIBLE
1 1
2 4
1 2
2 5
1 3
2 1
1 4
2 2
1 5
2 3

Answer: These coding competition sites are great at teaching you tricks and algorithms, but they are terrible at teaching you to write readable and maintainable code. Let's see how we can improve the latter.

Naming things

Short variable names save some typing, but it is hard to read code that only contains abbreviations. Only use one-letter variable names if that letter is used in a very common way, like i as a loop iterator, or if its scope is very limited, so that you don't have to go searching through the code to find out where it is declared and what it means. Here are some suggestions for replacements:

R → n_rows. Now we no longer have to guess that r stands for row. The prefix n_ is commonly used to indicate "number of". You might also drop the prefix and just write rows, but see below about that.
C → n_cols or n_columns
r → row
c → col or column
N → n_cells. If you drop the prefix, it will be cells, but notice how you already have the array visited_cells: now cells might start to sound like an array as well. This is where n_ removes any doubt.
nr → next_row
nc → next_col or next_column
nrc → next_cell

Names like backtrack and solve are very generic. Backtrack what? Solve what and how? If you use them in a larger program where you need to solve different kinds of things, having these generic names in the global namespace will cause a problem when linking your code. So either give them more unique names, or make sure these names are not visible outside of the source file they are in, by making these functions static.
Don't hesitate to create structs

A great way to make your code easier to write, more maintainable and more self-documenting is by creating structs for things that always occur together. For example, you have a 2-dimensional array of integers named sequence, but actually it's just a 1-dimensional array of coordinates. You can make it look like the latter if you create a struct that describes a coordinate:

typedef struct {
    int row;
    int col;
} coordinate;

And then you can write:

static bool backtrack(…, coordinate sequence[], …) {
    …
    sequence[visited_count - 1].row = next_row;
    sequence[visited_count - 1].col = next_col;
    …
}

But it gets better if you make more use of coordinate. Use it everywhere you have a combination of row and column. For example:

static bool backtrack(…, coordinate sequence[], coordinate size, int visited_count, coordinate pos) {
    int n_cells = size.row * size.col;
    …
    coordinate next_pos = {next_cell / size.col, next_cell % size.col};
    …
    sequence[visited_count - 1] = next_pos;
    if (backtrack(…, sequence, size, visited_count, next_pos)) {
        …
    }
    …
}

Next, consider that your algorithm has some state that it has to pass to backtrack every time. This gets easier if you group all the state in a struct:

typedef struct {
    coordinate *sequence;
    coordinate size;
    bool visited_cells[MAX_ROWS][MAX_COLS];
} backtrack_state;

Note that this includes all the state that is shared by all invocations of backtrack(); it doesn't include the variables that are local to each invocation (visited_count, r and c).
Now you can write:

static bool backtrack(backtrack_state *state, int visited_count, coordinate pos) {
    int n_cells = state->size.row * state->size.col;
    …
    coordinate next_pos = {next_cell / state->size.col, next_cell % state->size.col};
    …
    state->sequence[visited_count - 1] = next_pos;
    if (backtrack(state, visited_count, next_pos)) {
        …
    }
    …
}

static bool solve(coordinate sequence[], coordinate size) {
    backtrack_state state = {sequence, size, {{0}}};
    return backtrack(&state, 0, (coordinate){-1, -1});
}

Note how we can initialize the state, including the two-dimensional array visited_cells, all in one go.

Simplify functions

Try to keep functions simple and concise. If you find out you are doing a lot of things in one function, see if you can split it up in a meaningful way. For example, in main(), you read the number of test cases, and then proceed to process each test case in sequence. You can create a function that just does one test case; that way the code simplifies like so:

static void process_test_case(int case_nr) {
    coordinate size;
    scanf("%d %d", &size.row, &size.col);
    int n_cells = size.row * size.col;
    coordinate sequence[n_cells];
    bool possible = solve(sequence, size);
    printf("Case #%d: %s\n", case_nr, possible ? "POSSIBLE" : "IMPOSSIBLE");
    if (possible) {
        for (int i = 0; i < n_cells; i++) {
            printf("%d %d\n", sequence[i].row + 1, sequence[i].col + 1);
        }
    }
}

int main(void) {
    int n_cases;
    scanf("%d", &n_cases);
    for (int case_nr = 1; case_nr <= n_cases; case_nr++) {
        process_test_case(case_nr);
    }
}

Note that this also automatically helps document the code: we now have a name for exactly that piece of code that processes a single test case, so you don't need a comment to explain that. That also brings me to:

Avoid unnecessary comments

You have added a lot of comments to your code, but some of them are not really necessary. If it is obvious from the code what is going on, you don't need a comment that says exactly the same thing.
Of course, if you use cryptic variable and function names, you might need comments to decode what is going on, but if you have clear variable and function names, the need for comments often goes away. Consider:

// Total number of cells
int N = R * C;

Yes, N is so generic, it often means "number of things", but what things? Now consider:

int n_cells = size.row * size.col;

The name n_cells implies it is the number of cells, so now you no longer have to explain that. It is now also clear that it's the product of the size in rows and columns, instead of resistance times capacitance, or whatever R and C might mean if you don't already know the context. If you have a complex expression and need to explain what it means, consider assigning it to a variable with a clear name first. Maybe split up very long expressions into smaller ones first. You can also create helper functions. For example:

static bool is_valid_jump(coordinate from, coordinate to) {
    if (from.row == to.row) {
        return false;
    }
    if (from.col == to.col) {
        return false;
    }
    …
}

…

if (visited_count > 1 && !is_valid_jump(pos, next_pos)) {
    continue;
}

Some comments are very necessary though. Consider the calculation of the next position. You are hinting that scanning sequentially is going to be too slow. But what about that % 29? Here a comment explaining in more detail what is going on would be very helpful. It's not really choosing a random position, but perhaps this very pseudo-random way is good enough? How was the constant 29 chosen?
{ "domain": "codereview.stackexchange", "id": 44386, "tags": "c, c99" }
What is the shortest mRNA the ribosome can read to produce a peptide?
Question: This question came as a comment on a previous question regarding non-ribosomal peptide synthesis, and why glutathione cannot be synthesized by the ribosome. In general, glutathione has a "gamma" peptide bond, and thus cannot be synthesized by the ribosome. However, is there an actual minimal length that the ribosome can process? I understand that the mRNA is more than just the codons; however, does it still require a minimum set of codons, e.g., say 18 nucleotides, to produce a peptide? Answer: Reading the paper linked by @canadianer and its references was pretty illuminating. It's a straightforward paper except for its conclusions, and the most convincing part is the evidence of actual translation. There's some evidence that transcription start sites are used as a regulatory mechanism by displacing other things that might otherwise bind. An unknown fraction, but most (90%+), of very short bioinformatically detected ORFs are not translated. Some, however, are. See this table for the experimentally verified translated micropeptides. The shortest is a nearly unbelievable six residues long.
{ "domain": "biology.stackexchange", "id": 4876, "tags": "biochemistry, molecular-biology, proteins" }
Deprecation issue regarding btQuaternion in Electric
Question: I have a line of code that triggers a deprecation warning in Electric, and I would like to know what its better substitute is.

=> btQuaternion q(pcl::deg2rad(data_set[t][8]), pcl::deg2rad(data_set[t][7]), pcl::deg2rad(data_set[t][6]));

[Warning: ‘btQuaternion::btQuaternion(const btScalar&, const btScalar&, const btScalar&)’ is deprecated (declared at /opt/ros/electric/stacks/bullet/include/LinearMath/btQuaternion.h:47)]

P.S.: Waiting for @joq's answer, for he has done it already. This post is a refactored one. Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2011-12-24 Post score: 1 Answer: The bullet manifest says: "To eliminate ambiguity about which API is active the btQuaternion constructor from Euler angles is deprecated. Please explicitly construct it and populate it seperately from euler angles, otherwise it is ambiguous as to what convention is being used." At this point, I would avoid using the bullet types directly, as they will change again in Fuerte. Here's a solution using only the tf typedef:

// translate roll, pitch and yaw into a Quaternion
tf::Quaternion q;
q.setRPY(pcl::deg2rad(data_set[t][6]), pcl::deg2rad(data_set[t][7]), pcl::deg2rad(data_set[t][8]));

Note the reordering from (yaw, pitch, roll) to (roll, pitch, yaw). Originally posted by joq with karma: 25443 on 2011-12-25 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 7731, "tags": "quaternion, ros-electric, transform" }
Neural network without matrices
Question: Have you ever seen a neural network without matrices? I'm asking because I'm currently building one for educational purposes. Answer: Matrix multiplication is just a simplified notation for a particular set of addition and multiplication operations. You can absolutely represent a neural network without invoking matrix notation; it'll just be really tedious (and it will run slower). Start with a single perceptron and build your way up from there.
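For example, here is a single perceptron written entirely with loops and lists — no vectors or matrices anywhere (a sketch with toy data for logical AND; the learning rate and epoch count are arbitrary choices):

```python
# A single perceptron with explicit loops instead of matrix operations.
def perceptron_output(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):   # the "dot product", spelled out
        total += x * w
    return 1 if total > 0 else 0        # step activation

def train_step(inputs, target, weights, bias, lr=0.1):
    """One perceptron-rule update; returns new weights and bias."""
    error = target - perceptron_output(inputs, weights, bias)
    new_weights = [w + lr * error * x for x, w in zip(inputs, weights)]
    return new_weights, bias + lr * error

# Learn logical AND (toy data)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(20):
    for x, t in data:
        w, b = train_step(x, t, w, b)
print([perceptron_output(x, w, b) for x, _ in data])   # → [0, 0, 0, 1]
```

Stacking layers of such units, and replacing the step with a differentiable activation for backpropagation, gets you a full network — at which point the matrix notation starts to look very attractive again.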
{ "domain": "datascience.stackexchange", "id": 2409, "tags": "neural-network" }
OpenGL error when running rviz
Question: Hi all, after I successfully installed ROS on my PC, I tried to run rviz, but came across the following error:

rosrun rviz rviz
[ INFO] [1409663641.718819061]: rviz version 1.11.3
[ INFO] [1409663641.718874433]: compiled against OGRE version 1.8.1 (Byatis)
Xlib: extension "NV-GLX" missing on display ":0".
Xlib: extension "NV-GLX" missing on display ":0".
[ INFO] [1409663641.808060686]: Stereo is NOT SUPPORTED
[ INFO] [1409663641.808351168]: OpenGl version: 1.4 (GLSL 0).
terminate called after throwing an instance of 'std::runtime_error'
what(): Your graphics driver does not support OpenGL 2.1. Please enable software rendering before running RViz (e.g. type 'export LIBGL_ALWAYS_SOFTWARE=1').

Is there any solution to this? Thank you. Best Regards! Frank Originally posted by Frank on ROS Answers with karma: 11 on 2014-09-02 Post score: 1 Original comments Comment by yincanben on 2014-12-25: Have you solved this problem? I met the same problem. Can you share how to solve it? Comment by Cyril Jourdan on 2015-10-16: Hi, did anyone solve this? I just have this error after I installed various graphics and CUDA drivers to get some 3D cameras working... Answer: According to the exception message, it looks like your graphics driver does not support OpenGL 2.1. If your graphics card does not support hardware acceleration, try the following (rviz might run excruciatingly slowly, but at least it should run):

$> export LIBGL_ALWAYS_SOFTWARE=1
$> rosrun rviz rviz

If you are certain that your graphics card does support hardware acceleration, make sure that you have installed the adequate driver (proprietary driver?) Originally posted by Martin Peris with karma: 5625 on 2014-09-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Eric Schneider on 2015-01-22: Hi, I'm running into the exact same problem as the original post (my OpenGL version is also 1.4) but exporting LIBGL_ALWAYS_SOFTWARE=1 does not fix the problem.
I'll run the two lines you mention and the exact same error will occur. Is there anything else to test for/try as a fix? Comment by William on 2015-01-22: That env variable appears to be a mesa specific thing, people using different drivers may not get the desired affect: http://www.mesa3d.org/envvars.html Comment by Eric Schneider on 2015-01-22: Hm. In that case I don't know how to fix the problem. I'll try to figure out which drivers I'm using. Thanks.
{ "domain": "robotics.stackexchange", "id": 19260, "tags": "ros, rviz, opengl" }
SE Data Explorer: Users by City
Question: I view users by city using this SQL query:

SELECT ROW_NUMBER() OVER(ORDER BY Reputation DESC) AS [#],
       Id AS [User Link],
       Reputation
FROM Users
WHERE LOWER(Location) LIKE 'sulaimani, iraq'
   or LOWER(Location) LIKE 'erbil, iraq'
ORDER BY Reputation DESC;

This works. I don't like the repetition of the LIKE clause - this could get unwieldy if I want to include many more locations (but not the whole country).
{ "domain": "codereview.stackexchange", "id": 42017, "tags": "sql, stackexchange" }
How do I know if the $120\rm\:V_{ac}$ is the rms value?
Question: This is one of the questions in one of the books I am reading. An incandescent lamp, rated at 100 watts, gives a certain amount of light when placed across the $120\rm\:V_{ac}$ power line. Would the amount of light increase, decrease or remain the same, when the lamp is placed across a $120\rm\:V_{dc}$ power line? I know that the brightness will remain the same if the $120\rm\:V_{ac}$ power line is referring to the $\rm V_{rms}$; otherwise, the amount of light will increase (since the $\rm V_{rms}$ will then be $84.85\rm\:V$, which is less than the $120\rm\:V_{dc}$). However, I am confused about whether the $120\rm\:V_{ac}$ power line is already referring to the $\rm V_{rms}$. Can anybody explain? Answer: TL;DR By convention, unless explicitly defined otherwise, AC voltages are always specified in terms of their RMS value. The 230 V, 50 Hz and 120 V, 60 Hz standards are given in terms of the voltage's RMS value. Therefore, if the light bulb were connected to 120 V mains (AC) and then switched to a 120 V battery (DC), neglecting minor effects due to impedances, the operating power would be the same. As to why AC voltages are specified in terms of their RMS value - it is a convention, and this is because the RMS value immediately provides information you can work with. I am not saying the peak voltage value is not used in analysis, but the RMS value is used much more often. Hence, it is more convenient to express AC voltages in terms of their RMS value.
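To see the convention numerically (my sketch, not from the answer): a 120 V RMS sine wave actually peaks near 170 V, and averaging the squared samples of one full cycle recovers the 120 V rating.

```python
import math

v_rms = 120.0                       # mains rating (RMS by convention)
v_peak = v_rms * math.sqrt(2)       # ≈ 169.7 V

# Check numerically: RMS of a sampled sinusoid with that peak
n = 100_000
samples = [v_peak * math.sin(2 * math.pi * k / n) for k in range(n)]
rms = math.sqrt(sum(v * v for v in samples) / n)
print(round(v_peak, 1), round(rms, 1))   # → 169.7 120.0
```

So a bulb rated for the 120 V AC line dissipates the same average power on a 120 V DC supply, which is exactly the point of specifying AC voltages as RMS values.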
{ "domain": "physics.stackexchange", "id": 85913, "tags": "electric-circuits, conventions, notation" }
Restriction enzymes
Question: Why are those restriction enzymes which cut the DNA strands a little away from the centre of the recognition sequence more useful in the construction of recombinant DNA? Answer: There is two main ways to cut DNA, creating Blunt ends or Sticky ends. Blunt end example of SmaI recognition site: ("|" is the cutting site) 5'---CCC | GGG---3' 3'---GGG | CCC---5' Sticky end example of EcoRI recognition site: 5'---G | AATTC---3' 3'---CTTAA | G---5' In blunt end example, the only way you can attach both pieces of DNA is one after the other, which is harder to do and not specific at all since any bases can go one after the other. 5'---CCC --> <-- GGG---3' 3'---GGG --> <-- CCC---5' In the sticky example, the sequences will recognize its homologous sequence and attach to it (The TGGG will attach to the ACCC part, making a strong bond). When attaching one strand of DNA on the other, the bases must be homologous (A-T, G-C), this gives specificity since the strand can only be attached with specific sequences. 5'---GAATTC---3' 3'---CTTAAG---5' Here is some good references on restriction enzymes: Wikipedia(Complete informations with strong references) Biotech Learning Hub (Basics easily understandable)
{ "domain": "biology.stackexchange", "id": 6196, "tags": "biotechnology" }
Does an electromagnetic wave have any spatial extent transverse to its direction of propagation?
Question: Electromagnetic waves (classical, non-quantum conception) seem to be typically depicted as mutually orthogonal plane waves with amplitudes varying orthogonal (transverse) to the direction of propagation of the waves (not sure if that's the most accurate verbal description of graphical depictions, but I think it's pretty close). Although such a depiction suggests a transverse spatial extent of the wave, isn't this only because the electric and magnetic field magnitudes are depicted by lines of lengths corresponding to vector magnitudes? Rather, is it the case that an electromagnetic wave has no transverse spatial extent? (One can imagine that if the varying electric and magnetic field vectors were visually depicted by some other means -- e.g. gray-scale shading -- there would be no suggestion of transverse spatial extent). I noticed other posted questions that were close in concept, but did not seem to address this specific aspect. Please advise if I missed one that is essentially the same question. Also, I understand that there may be no meaningful non-quantum answer to this, but thought I'd ask anyway. Answer: Yes, it has a transverse spatial extent. If it didn't, then it would occupy zero volume, and when you integrated its energy density you would get zero. Another way to see that it has to have some transverse size is that if it didn't, the fields would be discontinuous functions of the transverse coordinates, and then you wouldn't be able to define the partial derivatives appearing in Maxwell's equations.
{ "domain": "physics.stackexchange", "id": 61444, "tags": "electromagnetic-radiation" }
What is the principle behind Microagglutination test (MAT)?
Question: Could someone please help me understand the principle behind the MAT. Also, is it only used to detect leptospira or can it be used to detect other pathogens as well? Kindly share links to papers if possible. Thanks! Answer: Microagglutination tests for the presence of antibodies, which declare themselves by sticking their antigens together into clumps. Leptospirosis serodiagnosis by the microscopic agglutination test. The microscopic agglutination test (MAT) is the gold standard for sero‐diagnosis of leptospirosis because of its unsurpassed diagnostic specificity. It uses panels of live leptospires, ideally recent isolates, representing the circulating serovars from the area where the patient became infected. A dilution series of the patient's serum is mixed with a suspension of live leptospires in microtiter plates. After incubating for about 2 hr at 30°C, results are read under the dark‐field microscope. The titer is the last dilution in which ≥50% of the leptospires have remained agglutinated. You know antibodies work by sticking to things. If I have antibodies to a thing it is because I have been exposed to that thing, here leptospirosis. This version of the test uses live leptospires. If the antibodies stick to them and they clump up, the plate gets clearer. The good thing about this is that you don't need to know anything about the antigen except that it is on the germs, or anything about the antibody except that it sticks to the antigen. Also, the germs are big enough to make clumps. If you have antigens for something small that won't make noticeable clumps (for example a virus, or an individual antigen of interest) you can stick it to microscopic latex beads. Then just as the leptospires clump up when the antibody sticks to them, the beads will clump up if there is antibody specific to the antigen you have stuck onto them. Historically the clumping was interpreted by a human with a microscope. You can do it with an optical density meter too.
This approach is used for lots of different diagnostic tests. You can find them with google and "agglutination" but those links do not all explain how the test works.
{ "domain": "biology.stackexchange", "id": 10542, "tags": "microbiology, bacteriology, virology, pathology" }
constructing a DFA and a regular expression for a given regular language
Question: This is the question 1.12 from Introduction to the Theory of Computation 3rd Edition by Michael Sipser. The questions says, Let $D=\{w\mid w$ contains an even number of $\texttt{a}$'s and an odd number of $\texttt{b}$'s and does not contain the substring $\texttt{ab}$}. Give a DFA with five states that recognizes $D$ and a regular expression that generates $D$. I was able to construct a DFA by using the intersection operation on the three simpler parts of $D$. I think that the regular expression that would generate $D$ is b(bb)*(aa)* but I'm not sure about it. Is my regular expression correct? If yes, then if I apply the subset construction algorithm on the diagram (eps denotes ε) and minimize the resultant DFA, I should get the minimal DFA that I constructed through intersection, right? Answer: I was able to solve this question using two possible routes. The 1st route involves constructing the DFA by using the intersection operation on the simpler parts of $D$, i.e., $\{$even number of $a$'s$\} \cap\{$odd number of $b$'s$\}\cap\{$does not contain the substring $ab\}$ After that, I was able to reduce the DFA into a RegEx using the State Elimination Method and Arden's Lemma. The regular expression can be achieved by using Arden's Lemma as follows. $$1\to a2+b4$$ $$2\to a2+b2$$ $$3\to a5+b2+\varepsilon$$ $$4\to a5+b1+\varepsilon$$ $$5\to a3+b2$$ Note that $2\to a2+b2$ will resolve as $2\to \varnothing$ and not $2\to (a+b)^*$. The 2nd route involves "knowing" that the regular expression is $b(bb)^*(aa)^*$ by analyzing the language $D$. This RegEx can be converted to an ε-NFA. After that, we can convert this ε-NFA to get a DFA which when minimized would give us this.
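One quick way to gain confidence that $b(bb)^*(aa)^*$ really generates $D$ is a brute-force check over all short strings (a sketch of mine, not from the answer — any string avoiding "ab" must have the form $b^ia^j$, which the check confirms):

```python
import itertools
import re

# D = {w : even number of a's, odd number of b's, no substring "ab"}
def in_D(w):
    return w.count("a") % 2 == 0 and w.count("b") % 2 == 1 and "ab" not in w

pattern = re.compile(r"b(bb)*(aa)*")

# Compare the regex against the defining conditions on all strings up to length 8.
for n in range(9):
    for tup in itertools.product("ab", repeat=n):
        w = "".join(tup)
        assert (pattern.fullmatch(w) is not None) == in_D(w)
print("regex agrees with D on all strings up to length 8")
```

This does not prove equivalence for all lengths, but disagreements between a candidate regex and a language definition almost always show up on short strings.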
{ "domain": "cs.stackexchange", "id": 7203, "tags": "regular-languages, finite-automata, regular-expressions" }
Spectrogram of a single tone complex signal has two dark lines?
Question: I am trying to plot the spectrogram of a complex signal I generated. I have written code to generate this signal and plot the spectrogram. It works. However I see that there are two dark lines present within the spectrogram. I'm curious to know the reason for the existence of these. I have also noticed that by varying the NFFT, the dark lines tend to wobble around. I paste my code below: import matplotlib.pyplot as plt import numpy as np import scipy.signal as signal def plot_spectrogram(data, NFFT, Fs, ex): plt.specgram(data, NFFT=NFFT, Fs=Fs) plt.title("Spectrogram of data") plt.ylim(-Fs/2, Fs/2) plt.show() plt.close() if ex: exit() ### Parameters F_offset = 250000 Fs = 1140000 N = Fs # number of samples (the original used len(x1), but x1 was never defined) ### Generate a digital complex exponential with phase -F_offset/Fs fc1 = np.exp(-1.0j*2.0*np.pi*F_offset/Fs*np.arange(N)) plot_spectrogram(fc1, 512, Fs, ex=True) Answer: Since this is a constant spectrogram, you could just as well have just averaged the |FFT|² and plotted that! (The most colorful way of visualizing things isn't always the optimal one; your signal doesn't change over time, so you don't need the time axis of the spectrogram at all.) Quite possibly, in that "easier" representation, you would have spotted this: Your spectrogram does FFTs of length 512. That's not a multiple of the period of your discrete complex sinusoid. Therefore you'll see leakage. (You already know that effect!) However, to reduce that resolution-reducing leakage effect, specgram applies a Hanning window by default to your 512-sample chunks that you FFT. That Hanning window leads to zeros: plt.plot(20*np.log10(np.abs(np.fft.fft(fc1[:512]*np.hanning(512))))) And there you go!
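The answer's point can be reproduced without plotting at all (a minimal sketch using the question's parameters): one Hann-windowed 512-sample chunk of the tone shows a peak at a fractional bin, with leakage and deep nulls away from it.

```python
import numpy as np

# Assumed parameters, taken from the question's snippet.
Fs, F_offset, nfft = 1140000, 250000, 512
n = np.arange(nfft)
fc1 = np.exp(-1j * 2 * np.pi * F_offset / Fs * n)

# Magnitude spectrum of one Hann-windowed chunk (what specgram computes per column).
spectrum = np.abs(np.fft.fft(fc1 * np.hanning(nfft)))
peak_bin = int(np.argmax(spectrum))

# The tone sits at fractional bin nfft*(1 - F_offset/Fs) ~ 399.7, so the peak
# lands near bin 400, and the Hann window's spectral zeros produce a huge
# dynamic range between peak and nulls -- the "dark lines" in the spectrogram.
print(peak_bin, 20 * np.log10(spectrum.max() / spectrum.min()))
```

Changing `nfft` moves the fractional bin position of the tone, which is why the dark lines wobble as NFFT varies.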
{ "domain": "dsp.stackexchange", "id": 7534, "tags": "python, spectrogram" }
Time required to evolve advanced chromatophores in octopuses and squid?
Question: Do we have any idea how long it took primitive Coleoida to evolve from non-color-changing organisms to color-changing organisms? Granted, not all coleoidans can change color (vampire squids are a good example), but many of them do. (Coleoida is the subclass below Cephalopoda that includes all squid, cuttlefish and octopuses. Nautiluses are excluded.) Answer: Here is a screenshot from OneZoom.org. Coleoida include decapods and octopods, both of which have chromatophores, so their MRCA (which lived 330 million years ago) probably did as well. Coleoida is a sister group to the nautiloids. They branched over 400 million years ago. Together they form the group of cephalopods. As nautiloids do not have chromatophores, the evolution of chromatophores probably occurred in between these two nodes (between the MRCA of cephalopods and the MRCA of coleoida). In other words, it occurred at some point during a period of about 70 million years.
{ "domain": "biology.stackexchange", "id": 6104, "tags": "evolution, taxonomy" }
Apparent coincidence of spin-connection term and Higgs field?
Question: In curved spacetime, there is a spin-connection term $\overline{\psi}\gamma^\mu\sigma^{ab}\omega^{ab}_\mu\psi$. Here's my apparent problem. If there were no Higgs field and no gravity, all particles would be massless. And hence the left-handed and right-handed electrons would uncouple and be seen as separate massless fermions with no connection to each other. It is only the Higgs coupling that combines these in a pair with mass. As the Higgs interaction turns the left-handed electron into a right-handed one. But the spin-connection term, for example, (as far as I can tell) mixes the left- and right-handed electrons. Even though, apparently, these two particles have nothing to do with each other. (e.g. This predicts a left-handed neutrino passing by a rotating black hole will experience some kind of torsion effects and can turn into a right-handed neutrino). Worse still, if we consider the Cabibbo mass mixing matrix, there is no pairing of the left-handed electron with the right-handed one, but a pairing of the triplet of 3 generations of charged fermions with their counterparts. e.g. we would get a term like $\overline{\psi}^A(m^{AB}+\gamma^\mu\sigma^{ab}\omega^{ab}_\mu)\psi^B$ Where $m$ is a mass mixing matrix e.g. for up-quarks. Hence I don't see the justification for assuming that the spin-connection term pairs, say, left-handed up-quarks to right-handed up-quarks. Ignoring the Higgs term it could just as well pair left-handed up-quarks to left-handed electrons. (e.g. this would predict a left-handed quark passing by a rotating black hole would turn into a left-handed electron. Since we can't do the experiment how can we rule that out?) So it seems to me there is some unexplained coincidence occurring where the Higgs interactions are pairing up particles identically to the way the spin-connection pairs up particles. Or is there even an experiment to show how the chirality of a fermion is affected by gravitational torsion from a rotating gravitational mass?
(Or perhaps some equivalent accelerating frame?) Can you explain this? My view is that a left-handed neutrino in a gravitational field should stay a left-handed neutrino and there should be no mixing. Perhaps that could be achieved by replacing the term with a pseudo-vector term: $\overline{\psi}\gamma_5\gamma^\mu\sigma^{ab}\omega^{ab}_\mu\psi$ (Just a guess?) Answer: Frankly, I cannot follow every tangent in the discussion, nor should I attempt to comment on it. But the basic brutal fact that should consistently fit with all the rest is that odd numbers of visible gamma matrices insulate left from right chiralities inside spinor bilinears. This is a representation-independent statement and going to a specific representation might or might not be salutary. Specifically, $$ P_R = \frac{1 + \gamma^5}{2}, \qquad P_L = \frac{1 - \gamma^5}{2} ~, $$ so, e.g., the kinetic terms $$ \overline{\psi_L} ~\gamma^\mu \partial_\mu \psi_L= \overline{\psi} P_R~\gamma^\mu \partial_\mu \psi_L = \overline{\psi} ~\gamma^\mu P_L~ \partial_\mu \psi \neq 0 , \hbox {whilst} \qquad \overline{\psi_R} ~\gamma^\mu \partial_\mu \psi_L= 0. $$ Likewise, covariant completions to them, $$\overline{\psi_L}\gamma^\mu\sigma^{ab}\omega^{ab}_\mu\psi_L= \overline{\psi}\gamma^\mu\sigma^{ab}\omega^{ab}_\mu P_L\psi \neq 0, \hbox {whilst} \qquad \overline{\psi_R}\gamma^\mu\sigma^{ab}\omega^{ab}_\mu\psi_L= 0. $$ So spin-connection completions are chirally coordinated with the gradients they complete, and much unlike mass terms.
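The projector algebra the answer relies on — an odd number of gamma matrices swaps the chiral projectors, $P_R\gamma^\mu=\gamma^\mu P_L$ — can be checked numerically (a sketch of mine in an assumed Dirac representation; the identity itself is representation-independent since $\gamma^5$ anticommutes with every $\gamma^\mu$):

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Dirac-basis gamma matrices
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gi = [np.block([[Z2, si], [-si, Z2]]).astype(complex) for si in s]
gammas = [g0] + gi

# gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3 and the chiral projectors
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]
PR = (np.eye(4) + g5) / 2
PL = (np.eye(4) - g5) / 2

# Odd numbers of gammas insulate left from right: P_R gamma^mu = gamma^mu P_L
for g in gammas:
    assert np.allclose(PR @ g, g @ PL)
print("P_R gamma^mu = gamma^mu P_L holds for all four gamma matrices")
```

This is exactly why a kinetic or spin-connection term (one visible gamma) couples $\overline{\psi_L}\ldots\psi_L$ but annihilates $\overline{\psi_R}\ldots\psi_L$, while a mass term (no gamma) does the opposite.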
{ "domain": "physics.stackexchange", "id": 97968, "tags": "general-relativity, standard-model, fermions, higgs, chirality" }
Do differences in physical properties of different substances correspond directly with differences in the energy which composes those substances?
Question: A follow-up question to In $E=mc^2,$ does it not matter what constitutes the mass? Do the different physical properties of chemical elements and compounds correspond with the different sources of energy which compose the mass of those elements and compounds? In answers to my previous question it was clarified that different forms of energy make up the total energy in a gram of sugar vs. a gram of water vs. a gram of lead, but ultimately a 1 gram mass of any of those objects has the same total energy. That has me thinking: the differences in properties between these objects must be accounted for in the differences in forms of energy composing each object. For example, the forms of energy which compose a gram of water (hydrogen bonding, covalent bonding, and more) are different from the forms of energy which compose a gram of salt (ionic bonding and other forces). Aside from the extreme complexity of the various forms of energy interacting to compose water from elementary particles, is the relationship between energy and mass/matter really this simple? Or are differences in the properties of objects dependent on more than the forms of energy of which they are made, in which case, what other influences on the properties of physical objects are there at play? Answer: You are confusing things a bit. Chemical energy, nuclear energy, and other kinds of energy are indeed different forms of energy, due to different forces, or equivalently different kinds of interactions (in physics there are four kinds of interactions, the so-called four forces). They will form different-looking pieces of matter, and have different effects on other matter when, say, they are mixed, but if the total matter for each case is the same, they have the same total energy. But this is the important thing. If you somehow take two equal amounts of energy and are able to convert them each totally into matter, their masses will be identical. Exactly given by $E = mc^2$.
You have to be careful that you don't add other uncounted energy in. For instance, all the matter in each case has to be at rest, or else you have to add the kinetic energies in. Also, the matter 'things' in each case need to be one clump, or you'd have to account for gravitational and other forces between clumps. So you have to be careful and make sure you didn't miss something (such as neutrinos, which are hard to detect). There is more. The strong equivalence principle says that the mass, from whatever energy, in each clump, say 1 gram of sugar and 1 gram of sand, will exert exactly the same gravitational force. Of course their inertial masses are also identical. An interesting way to see that the specifics of the forces make no difference to the total mass: consider two black holes at rest at some very large distance, each with mass m (to make it easy). They will be attracted to each other (or equivalently, in general relativity, GR, they move along geodesics in the spacetime towards each other) and start moving towards each other, accelerating as they get closer. They'll get very close, and if their trajectories are not perfectly in line, orbit each other a number of times, and merge with each other into one larger black hole. The mass of the new black hole will be M, with $M = 2m - \mu$. Amazingly, gravitational waves will be radiated; their total energy at infinity (or far enough away that you can consider them as no longer interacting with the black hole left behind) will be e, and the kinetic energy of the resultant black hole KE, with $\mu c^2 = e + KE$. The changes in mass and energy were multiple, but the translations between them always follow Einstein's mass-energy equation. The mass of the remaining black hole is exactly what some body (small enough to call it a test body, i.e.,
we ignore its own gravitational field), some astronomical distance away, will conclude, from the black hole's gravitational attraction, is the mass of that black hole. The energy that escaped in the gravitational wave and the kinetic energy of the final black hole make it balance out. Notice that you don't need to know the black hole's internal binding energy (which does not exist, but it would in any other body) or its rotational energy (which exists because their trajectories were not exactly aligned and so there needed to be some angular momentum), as it all goes to define its total mass M. Yes, mass and energy are numerically equivalent, with the $c^2$ being the translation factor. In advanced physics one uses natural units where c is set to equal 1, so mass and energy are indistinguishable in terms of their total values. We just call it mass-energy, and sometimes use either term for the other. Note that the mass or energy may have different forms, and do different things, even when the total value is always the same. So a gram of sand or sugar may have different amounts of nuclear mass and chemical or electromagnetic energy, which will always depend on differences between the two substances, but the total masses for both are the same, and if you somehow converted each into total energy, of whatever kind, the total for each would be the same. And their gravitational effects on other matter are exactly equal (even though 1 gram produces only a tiny gravitational force). So for total mass or energy, the constituent parts' contributions to total mass, or the total gravitational field created (assuming they both are point masses, as an idealization) by each, are the same. Regardless of internal details. Still, a gram of sugar you may be able to eat OK; you wouldn't eat a gram (or say a kg) of sand. Same mass, but lots of other differences.
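The translation factor the answer keeps returning to is easy to make concrete (the 1-gram figure is just an illustration, not from the answer):

```python
# E = m c^2 for one gram of anything -- sugar, sand, or lead.
c = 299_792_458.0   # speed of light, m/s (exact by definition)
m = 1e-3            # 1 gram, in kg
E = m * c**2        # joules
print(E)            # ~9e13 J, independent of what the gram is made of
```

The composition only decides how that energy is distributed among nuclear, chemical, and electromagnetic forms, never the total.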
{ "domain": "physics.stackexchange", "id": 44239, "tags": "mass-energy" }
Why aren't magnetic fields affected when a conductor is placed in the field?
Question: For electric fields, when a conductor such as an aluminium sheet is placed in the field, the field lines get affected due to the conductor. But when a conductor is placed in a magnetic field there will be no change in the magnetic field lines. For example, if there are two parallel wires carrying an electric current in the same direction they will experience a force due to the magnetic field generated. If we insert a conductor (aluminium sheet) between the two wires, the two wires would still experience the same force. Why isn't the field affected by the conductor? Answer: Instead of speaking of the effect on magnetic (or electric) field lines when a conductor is placed near them, it is better to look at it the other way round. The electric field lines get distorted in the presence of a conductor, because the electric field could induce some charge on the conductor and hence the electric field due to that conductor opposes the external field lines. That's why the structure of the field lines changes. A magnetic field affects only charged particles in motion. The equation for the magnetic force is given by: $$\vec{F}_{mag}=q\vec{v}\times\vec{B}$$ From this equation, it is clear that if the velocity of the charged particle in a magnetic field is zero, then it will experience no magnetic force. In the case of a conductor, even though there are free charges on it, they are in equilibrium and hence not affected by the magnetic field, since the net velocity vector of a charged particle is zero. But, if you place the conductor in a time-varying magnetic field, then the conductor experiences some force, which is due to the electric field generated by the time-varying magnetic field: $$\frac{\partial\vec{B}}{\partial t}=-\nabla\times \vec{E}$$ What happens between two current-carrying wires is that the magnetic field of one is affecting the moving charges on the other. That's why the magnetic field lines are not affected by the conductor.
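The answer's central equation $\vec F = q\vec v\times\vec B$ is easy to evaluate numerically (a small sketch of mine with made-up unit values, not from the answer):

```python
import numpy as np

# F = q v x B for a moving charge, and for a stationary one
q = 1.0                        # charge, C (illustrative value)
v = np.array([1.0, 0.0, 0.0])  # velocity, m/s
B = np.array([0.0, 1.0, 0.0])  # magnetic field, T

F_moving = q * np.cross(v, B)
F_static = q * np.cross(np.zeros(3), B)  # v = 0: no force at all
print(F_moving, F_static)
```

The zero force on the stationary charge is the whole point: the equilibrium free charges in a conductor at rest feel nothing from a static magnetic field, so they cannot rearrange to distort it the way they distort an electric field.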
{ "domain": "physics.stackexchange", "id": 34841, "tags": "magnetic-fields, electric-fields" }
Where on Earth does paleo-groundwater exist?
Question: Where on Earth do we expect to find very old groundwater (infiltrated thousands of years ago)? Answer: In large intracontinental basins where the main rock formations are exposed in adjoining highlands and are deeply buried within the basin itself. The Madison Limestone is an example. The Madison and its equivalent strata extend from the Black Hills of western South Dakota to western Montana and eastern Idaho, and from the Canada–United States border to western Colorado and the Grand Canyon of Arizona. From Wikipedia. Ground-water ages vary from virtually modern to about 23,000 yr. The 14C ages indicate flow velocities of between 7 and 87 ft/yr. Hydraulic conductivities based on average carbon-14 flow velocities are similar to those based on digital simulation of the flow system (Downey, 1984). From: Geochemical Evolution of Water in the Madison Aquifer in Parts of Montana, South Dakota, and Wyoming By JOHN F. BUSBY, L. NIEL PLUMMER, ROGER W. LEE, and BRUCE B. HANSHAW
{ "domain": "earthscience.stackexchange", "id": 1235, "tags": "groundwater, hydrogeology" }
Phase transition and chemical potential
Question: In the case of the Ehrenfest classification for first-order phase transitions it is said: If the first derivative of the free energy is discontinuous then we have a first-order phase transition. Now I know that for the free energy we have: $dF= -pdV - SdT + \mu dN$. From here we get: $-(\partial F/\partial T)_P=S$. This is easy to understand. But it can also be found by taking the derivative of the chemical potential wrt temperature: $-(\partial \mu/\partial T)_P=S$. Where does this equation come from? And an additional question regarding phase transitions, if we are observing the liquid-gas phase transition. In our lectures the professor said that the system will always choose the state with the smaller chemical potential. So if for a fixed temperature value we have the system in the gas phase and we increase the pressure beyond the vapor pressure, then the system will jump into the liquid state because the chemical potential in the liquid phase has a smaller value. My question is the following: In the graph drawn by the professor (which I don't know how to illustrate here) it looks as if the chemical potential for the liquid phase decreases in value as pressure increases. But for the chemical potential the following equation was derived in the lecture: $\mu = \frac G N = \frac F N + \frac {PV}N = f + P \nu$ where: G is the Gibbs energy, F is the free energy, N the number of particles in the system, f the free energy per particle, and $\nu$ the inverse of the density. The point is that the chemical potential is proportional to the pressure. Which means that regardless of the phase in which the system is found, an increase in pressure means an increase in chemical potential for both the gas and the liquid state, and that the graph drawn by the professor is not correct, meaning for the liquid phase the chemical potential increases, but the one for the gas increases more, hence the system changes phase (into liquid) as pressure increases.
Am I correct about the part in bold? Answer: From your first formula (for the differential of the Helmholtz free energy $F$) one can derive $$ -\left(\frac{\partial{F}}{\partial{T}}\right)_V = S. $$ Notice that the derivative is at constant volume, since the differential form clearly shows that $T,V$, and $N$ are the independent variables $F$ depends on. The partial derivative of the chemical potential wrt temperature is $$ -\left(\frac{\partial{\mu}}{\partial{T}}\right)_P = \frac{S}{N}=s. $$ Notice the division by N. The formula in the question, without division by $N$, cannot be correct because chemical potential and temperature are intensive quantities, while $S$ is extensive. Such a formula can be easily obtained by recognizing that the Gibbs free energy $G=F+PV$ is a homogeneous function of degree one of its extensive variable $N$ and then, from the Euler theorem, $$ G = \mu N. $$ Therefore, every statement about the Gibbs free energy per particle is equivalent to a statement about the chemical potential as a function of $P$ and $T$. The derivative with respect to pressure of the chemical potential, at constant temperature, is $$ \left(\frac{\partial{\mu}}{\partial{P}}\right)_T =\frac{V}{N}=v $$ the volume per particle. By definition, this is a strictly positive quantity, therefore $\mu$ must be a strictly increasing function of the pressure. Moreover, in a one-phase system, the Gibbs free energy is a strictly concave function of the pressure (for the stability of the thermodynamic equilibrium). This implies that the chemical potentials of two homogeneous phases in the neighborhood of a first-order phase transition should behave as in the following figure. The most stable phase corresponds to the lower curve (minimum Gibbs potential). On the left of the crossing point, the stable phase (purple) has a higher slope, thus corresponding to the phase with the higher volume per particle (lower density), i.e., the gas phase.
On the right of the crossing (at high pressures) the stable phase has a lower slope and thus a lower volume per particle (higher density).
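The relation $(\partial\mu/\partial P)_T = v$ can be sanity-checked symbolically for an assumed ideal-gas form of the chemical potential (my illustration, not from the answer): with $\mu(T,P)=\varphi(T)+kT\ln P$, the pressure derivative is $kT/P$, which by $Pv=kT$ is exactly the volume per particle.

```python
import sympy as sp

# Assumed ideal-gas chemical potential: mu(T, P) = phi(T) + k*T*log(P)
T, P, k = sp.symbols("T P k", positive=True)
phi = sp.Function("phi")
mu = phi(T) + k * T * sp.log(P)

dmu_dP = sp.diff(mu, P)          # (d mu / d P) at constant T
v = k * T / P                    # ideal-gas volume per particle, from P*v = k*T
assert sp.simplify(dmu_dP - v) == 0
print(dmu_dP)
```

Since $v>0$, this also makes the answer's point explicit: $\mu$ is strictly increasing in $P$, for the liquid branch just as much as for the gas branch, only with a smaller slope.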
{ "domain": "physics.stackexchange", "id": 80631, "tags": "thermodynamics, statistical-mechanics, phase-transition, chemical-potential" }
Resistance of liquids
Question: What is the effect of temperature on the resistance of liquid conductors (like electrolytes)? I think that it should increase with a rise in temperature, as, just like in metals, the ions would bump into each other more often, but the reverse is mentioned in my textbook. Answer: Resistance decreases with a rise in temperature because, in electrolytes, conduction is due to ions that experience interionic forces. As we increase the temperature, the thermal energy weakens these forces, increasing the mobility of the ions and thus increasing conductance, i.e., decreasing resistance.
{ "domain": "physics.stackexchange", "id": 49419, "tags": "electricity, electrical-resistance" }
Complex numbers in a frequency domain of a 2D image
Question: I am trying to grasp the idea of the frequency domain for images. I think I get the basics, but now I'm stuck with a question that I can't find an appropriate answer to anywhere. How are the frequency domain and complex numbers related? I've read this article from DSPguide and I understand that after applying the DFT to an image in the spatial domain, we get two planes: a real and an imaginary plane. By doing some calculations, we can get amplitude and phase planes. What is the phase plane used for? Answer: Why don't you try it? Since points in the frequency domain correspond to sine waves in the image domain, it is reasonable to assume that the phase of the entries in the frequency domain controls the phase of the resulting sine wave. Try this: f1 = zeros(255, 255); % Create image1 in the frequency domain - maintain conjugate symmetry % so we'll get a real image when we call ifft f1(5,1) = 255*255*.5 * exp(-j*pi/4); f1(252,1) = 255*255*.5 * exp(j*pi/4); % The second image is essentially the same except the phase is 0. f2 = abs(f1); % no phase figure(1); imshow(ifft2(f2)); title('phase = 0'); figure(2); imshow(ifft2(f1)); title('phase = pi/4') The results are: And you can clearly see that the difference is the sine wave phase only. If you look at the values along the y-axis you'll see the shift introduced by the phase term.
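The same idea in one dimension, as a quick NumPy sketch (mine, not from the answer): shifting a signal leaves the FFT magnitudes untouched but changes the phases — which is exactly what the phase plane is for, encoding where structure sits.

```python
import numpy as np

# An impulse, and the same impulse shifted to a different position
x1 = np.zeros(64); x1[10] = 1.0
x2 = np.zeros(64); x2[20] = 1.0

X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
same_magnitude = bool(np.allclose(np.abs(X1), np.abs(X2)))
same_phase = bool(np.allclose(np.angle(X1), np.angle(X2)))
print(same_magnitude, same_phase)  # magnitudes match, phases don't
```

Throw away the phase plane (keep only the amplitude plane) and you lose all positional information; that is why magnitude-only reconstructions of images look nothing like the original.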
{ "domain": "dsp.stackexchange", "id": 1614, "tags": "image-processing, dft, transform, digital, fourier" }
In what situations or problems would I use KDL data type in tf?
Question: Is KDL used by tf to do transforms, or is it a supplement that lets the programmer do transforms explicitly instead of letting tf do it? Please forgive my ignorance; I've been trying to get a better understanding of tf but am confused by KDL. Originally posted by billtecteacher on ROS Answers with karma: 101 on 2014-02-11 Post score: 0 Answer: TF and KDL serve somewhat similar, but somewhat different, uses. TF does not use KDL internally. TF is frequently used when you just want to transform data from one coordinate frame to another. KDL can also be used to find where a particular frame is based on a kinematic chain and joint positions, but doesn't offer the convenient functions for transforming sensor data. KDL does however offer kinematics functions such as Inverse Kinematics, Forward Kinematics, Jacobian calculations, etc. Originally posted by fergs with karma: 13902 on 2014-02-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by billtecteacher on 2014-02-11: Thank you so much for your help :)
{ "domain": "robotics.stackexchange", "id": 16947, "tags": "ros, kinematics, library" }
XMLGregorianCalendar to LocalDateTime
Question: From some generated code I get a javax.xml.datatype.XMLGregorianCalendar and I want to convert it to a LocalDateTime without any zone-offset (UTC). My current code accomplishes it, but I think it must be possible to achieve the same result in a more elegant (and shorter) way. public static LocalDateTime xmlGregorianCalendar2LocalDateTime(XMLGregorianCalendar xgc) { // fix the time to UTC: final int offsetSeconds = xgc.toGregorianCalendar().toZonedDateTime().getOffset().getTotalSeconds(); final LocalDateTime localDateTime = xgc.toGregorianCalendar().toZonedDateTime().toLocalDateTime(); // this simply ignores the timeZone return localDateTime.minusSeconds(offsetSeconds); // adjust according to the time-zone offset } Answer: Something like: xgc.toGregorianCalendar().toZonedDateTime().toLocalDateTime() ? If you don't want to just rip off the zone information, but instead get the local time at UTC: ZonedDateTime utcZoned = xgc.toGregorianCalendar().toZonedDateTime().withZoneSameInstant(ZoneId.of("UTC")); LocalDateTime ldt = utcZoned.toLocalDateTime(); This answer is from the guy who has written the java.time specifications and implemented them, btw: https://stackoverflow.com/questions/29767084/convert-between-localdate-and-xmlgregoriancalendar
{ "domain": "codereview.stackexchange", "id": 33785, "tags": "java, datetime" }
Must an optimization problem with a greedy algorithm belong to P?
Question: If it is known that for some optimization problem there is a greedy algorithm that solves it, and the solution includes sorting of the input at a preliminary stage, is it necessarily true that the problem belongs to P? I was told it is true that problems of this type must belong to P, but I have questions about whether this is true. It seems to me that there should be many examples of optimization problems with non-deterministic greedy algorithms. Are there in fact counterexamples to this claim, or is it necessarily true? Answer: If the greediness here means iterating over the input based on a specified score, and the computation of each input item's score is polynomial, the answer is yes, it is correct: iterating over an input with $n$ items is linear, and if the score computation for each item is polynomial, sorting the items is polynomial as well, so we can solve the problem in polynomial time (by the greedy algorithm). Notice that the polynomial computability of the score is vital here. For example, if computing the score for each item is an NP-hard problem (for example, if it is the minimum vertex cover of a graph), the statement is no longer valid.
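A concrete instance of the sort-then-greedy pattern the answer describes (my example, not from the answer) is interval scheduling, where the "score" is each interval's finish time; both the sort and the single pass are polynomial, so the whole algorithm runs in $O(n\log n)$:

```python
# Greedy interval scheduling: sort by finish time (the "score"), then take
# every interval that doesn't overlap the last one selected.
def max_compatible_intervals(intervals):
    selected, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:       # greedy choice: earliest finish first
            selected.append((start, finish))
            last_finish = finish
    return selected

print(max_compatible_intervals([(1, 3), (2, 5), (4, 7), (6, 8)]))
```

Here the score is computed in constant time per item; if computing the score itself required solving an NP-hard subproblem, the overall greedy procedure would no longer be polynomial, which is exactly the caveat in the answer.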
{ "domain": "cs.stackexchange", "id": 17769, "tags": "complexity-theory, optimization, np-hard, greedy-algorithms, p-vs-np" }
Writing a panel for RViz in Python / accessing RosTopicHz from C++
Question: Hey guys, I would like to write an RViz panel where I can see the actual publishing rate on a given topic (same thing I can do with "rostopic hz" from the command line). I have seen in the tutorial that panels for RViz are written in C++ and the class ROSTopicHz is written in Python. Is there any way to access the class to get the actual publishing rate based on a given topic name from C++? I could implement the functionality from ROSTopicHz in C++ but the problem I am facing is that I won't know beforehand which topics and message types I will subscribe to (this should be selectable during runtime). So I would need a way to dynamically create subscriber objects and callback functions in C++. Does anyone have an idea how to do this, or a way to write a panel for RViz in Python directly? Thanks in advance! Originally posted by ce_guy on ROS Answers with karma: 77 on 2018-04-07 Post score: 0 Answer: Without (quite) some trickery, writing RViz panels in Python is probably going to be difficult. Dynamically subscribing to topics can be done in both Python and C++ though; I don't immediately see why that would be any more difficult. Almost all RViz display plugins and quite some Panel plugins do it. See rviz/default_plugin/marker_display.cpp for instance. It would be somewhat involved, but one way to get this info into RViz could be to write a Python publisher that wraps rostopic.ROSTopicHz, publishes the data you're after and then write a regular C++ panel plugin (something like InstitutMaupertuis/simple_rviz_plugin) to display it inside RViz. If you don't absolutely need it to be an RViz panel, you could perhaps take a look at rqt_topic, which shows the same statistics (and more), but is an rqt plugin. RQT can also host an RViz instance. Edit: Thank you for your great answer!
The problem I am seeing with C++ is that I need to create callback functions at runtime for new subscribers (so I can distinguish them for measuring publishing rate) because the messages have different types. I don't know at compile time the message types.

The topic_tools pkg also has a C++ side to it. That provides the ShapeShifter class, which allows you to subscribe to any topic at runtime. See also ros_type_introspection/Tutorials/GenericTopicSubscriber.

To clarify my idea a bit more: I have a system with quite many devices (all of them are publishing data in ROS) and several people are working on the system. Sometimes one or two devices are not running (because someone forgot to plug in a cable again, for example). Since we don't have a mechanism to track these problems down with the nodes directly (legacy code problems) we just check with rostopic hz if there is data on the desired topics. So the panel should be reconfigurable to measure the publishing rate on all types of topics.

I should've asked you why you wanted to do this, but reading this description makes me wonder whether we're not solving an xy-problem. What you're describing sounds more like a task for diagnostics.

Originally posted by gvdhoorn with karma: 86574 on 2018-04-08
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by ce_guy on 2018-04-08: Thank you for your great answer! The problem I am seeing with C++ is that I need to create callback functions at runtime for new subscribers (so I can distinguish them for measuring publishing rate) because the messages have different types. I don't know at compile time the message types.

Comment by ce_guy on 2018-04-08: I have thought about using a template function, but how can I detect which topic I am getting the callback function call from at the moment? Thank you very much for your help!
Comment by ce_guy on 2018-04-08: To clarify my idea a bit more: I have a system with quite many devices (all of them are publishing data in ROS) and several people are working on the system. Sometimes one or two devices are not running (because someone forgot to plug in a cable again for example). Since we don't have a mechanism to track these problems down with the nodes directly (legacy code problems) we just check with rostopic hz if there is data on the desired topics. So the panel should be reconfigurable to measure the publishing rate on all types of topics.

Comment by ce_guy on 2018-04-08: I don't think it was an xy-problem. I just didn't know about the GenericTopicSubscriber. The panel will meet all our requirements in the current project status.

Comment by gvdhoorn on 2018-04-08: Slightly pedantic, but: you seem to be after a way to monitor the status of various topics (or really: the status of various nodes). You picked something that you believe could be a viable approach: monitoring publications using ROSTopicHz. The problem is that it's a Python class, ..

Comment by gvdhoorn on 2018-04-08: .. and you want to write an RViz panel (another approach you've already selected). Now the issue is that Python and RViz panels are not compatible, so you ask a question about your already selected (and implicit until your last comment) solution, instead of the real problem. That would seem to ..

Comment by gvdhoorn on 2018-04-08: .. have the characteristics of an xy-problem. I don't mind at all really, but thought it was good to mention it.

Comment by ce_guy on 2018-04-08: I was not sure if there is a way to write a panel in Python, so I asked this question. I know that writing an rqt plugin in Python would be way easier, but this was not an option (my task is to write a panel for RViz directly). I should have mentioned that it has to be in RViz - sorry for that.
Comment by gvdhoorn on 2018-04-08: You don't need to apologise for anything; as I already wrote, my last comment was pedantic.

Comment by William on 2018-04-08: Also see: http://wiki.ros.org/Topics#Topic_statistics which lets you get statistics without directly subscribing.

Comment by gvdhoorn on 2018-04-09: That would certainly make things a lot easier. I should've mentioned that. Now @ce_guy would only need to create a panel that visualises those msgs. See rqt_graph for an example of how rqt graph annotates the node graph with that info.

Comment by ce_guy on 2018-04-09: Great info, I didn't know about that. Thank you very much guys!
{ "domain": "robotics.stackexchange", "id": 30573, "tags": "ros, python, c++, rviz" }
Still pulling information from XML to insert into Word Document inside 3rd party application
Question: Follow up to This Question

I took some very good advice and changed my code around a little bit and eliminated some If statements. I am not retrieving very much information, but it looks so skinny now. Is this a good thing? Is there something that I should add to the code?

Dim phoneNode
Dim phoneNodeList

ReturnData = ""
Set phoneNodeList = XmlDoc.SelectNodes("/Record/Case/CaseFormsLoad/PartyLoad/Party/PartyPhones/Phone")

If phoneNodeList.Length > 0 Then
    For Each phoneNode In phoneNodeList
        If phoneNode.GetAttribute("ConfidentialFlag") = "True" Then
            ReturnData = ReturnData & phoneNode.GetAttribute("PhoneNum") & VbCrLf
        End If
    Next
End If

This code is very readable and simple. Is there anything that I can do to make it shorter, and should I make it shorter? It is a script, not fully compiled code.

Answer: I see one point that seems almost obvious enough that I'm worried I might be missing something. A For Each is "smart" enough that its body isn't executed at all for an empty collection, so you can eliminate the test for the list having a length of 0:

Dim phoneNode
Dim phoneNodeList

ReturnData = ""
Set phoneNodeList = XmlDoc.SelectNodes("/Record/Case/CaseFormsLoad/PartyLoad/Party/PartyPhones/Phone")

For Each phoneNode In phoneNodeList
    If phoneNode.GetAttribute("ConfidentialFlag") = "True" Then
        ReturnData = ReturnData & phoneNode.GetAttribute("PhoneNum") & VbCrLf
    End If
Next
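The same point carries over to other languages: a loop over an empty collection simply never executes its body, so the length guard is redundant. A rough Python sketch of the same filter (the inline document and phone numbers are made up; the element names mirror the XPath above), using only the standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical document shaped like the XPath in the question.
XML_DOC = """<Record><Case><CaseFormsLoad><PartyLoad><Party><PartyPhones>
  <Phone ConfidentialFlag="True" PhoneNum="555-0100"/>
  <Phone ConfidentialFlag="False" PhoneNum="555-0199"/>
</PartyPhones></Party></PartyLoad></CaseFormsLoad></Case></Record>"""

root = ET.fromstring(XML_DOC)

# No explicit length check: iterating an empty findall() result is a no-op.
return_data = "\n".join(
    phone.get("PhoneNum")
    for phone in root.findall(".//PartyPhones/Phone")
    if phone.get("ConfidentialFlag") == "True"
)
print(return_data)  # 555-0100
```

If the node list is empty, the generator yields nothing and `return_data` is simply the empty string, with no special case needed.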
{ "domain": "codereview.stackexchange", "id": 7581, "tags": "vbscript" }
Significant Figures and Uncertainty
Question: Check the figure: As you can see from the figure, the pencil has 5.65 cm length with 0.05 cm uncertainty. We simply could say the length has 3 significant figures, considering 5 and 6 are known digits and 5 is the estimated digit. But we also have 0.05 cm uncertainty, meaning that the length could be 5.60 cm or 5.70 cm, right? So how can we say 5.65 ± 0.05 cm has 3 significant figures when it could be 5.70 cm? The digit "6" also seems estimated or suspicious in the first place (it is not well-known and could be "7")!

Answer: You are trying to apply significant figure rules to numbers with uncertainty, but you actually don't need to think about significant figures when the uncertainty is explicitly reported. The purpose of significant figures is to loosely indicate the uncertainty in the measurement. In this case, like you are suggesting in your question, if we just said the length is $5.65\,\mathrm{cm}$ then this means we are sure about the $5.6$ part, but we aren't exactly sure about the $0.05$ part. In relation to the measuring device, with the given tick marks we are essentially saying, "My ruler has ticks every $0.1\,\mathrm{cm}$, so I'm sure this pencil is longer than $5.6\,\mathrm{cm}$. However, I don't have tick marks more precise than this, so I'm going to use my best guess based on my vision and ability to line things up to say it is half way between the $5.6$ and $5.7$ tick marks at $5.65\,\mathrm{cm}$." Based on this, it is obvious that the final significant digit tells others that we are not sure about that last $0.05\,\mathrm{cm}$. Even the math we do with keeping track of significant figures and the rules for addition vs. multiplication with using the correct amount of significant figures is to make sure we keep consistent with the uncertain digits. The issue though is that significant figures don't tell us how uncertain we are. Do we think we have amazing sight and estimation so we are within $0.01\,\mathrm{cm}$ of the $5.65\,\mathrm{cm}$?
Or are we less confident in the $5.65\,\mathrm{cm}$ and want to report a larger uncertainty? Significant figures don't allow us to do this. Therefore, we can instead explicitly report the uncertainty as $5.65\pm0.05\,\mathrm{cm}$. At this point we don't need to worry about what that extra $0.05\,\mathrm{cm}$ in the measurement represents. The reported uncertainty takes care of this for us. And there are rules for doing math with numbers with uncertainty that keep our uncertainties consistent with the operations we choose to use, just like we have for significant figures. Therefore, you don't need to worry about significant figures if uncertainty is explicitly reported. The uncertainty already tells us what is actually certain and what is not, thus it replaces (and improves) what significant figures are supposed to do.
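Those rules for doing math with uncertainties can be sketched for the simplest case, addition, where independent uncertainties combine in quadrature. A minimal Python illustration (the second measurement is invented for the example):

```python
import math

# Two independent length measurements with explicit uncertainties.
a, da = 5.65, 0.05  # cm, the pencil from the question
b, db = 3.20, 0.05  # cm, a second (made-up) measurement

# For a sum, independent uncertainties add in quadrature.
total = a + b
d_total = math.sqrt(da**2 + db**2)
print(f"{total:.2f} +/- {d_total:.2f} cm")  # 8.85 +/- 0.07 cm
```

Note that the combined uncertainty (about 0.07 cm) is larger than either input but smaller than their plain sum, which is exactly the bookkeeping that significant-figure rules only approximate.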
{ "domain": "physics.stackexchange", "id": 63414, "tags": "error-analysis" }
How does atmospheric pressure affect the buoyancy of a small particle?
Question: If I have a single cell (phytoplankton) or a small piece of sediment in a lake that is neutrally buoyant, how can I calculate the change in buoyancy due to changes in barometric pressure? For example, my single cell is close to neutrally buoyant at a barometric pressure of $1020\ hPa$. Later in the day, the barometric pressure drops to $1000\ hPa$. I understand that the cell should now be positively buoyant, but how do I calculate how much more buoyant? What other factors do I need to know? From looking at the formula for buoyancy, I can see that I need: $$\text{Buoyant force} = \text{(density of liquid)} \times \text{(gravitational acceleration)} \times \text{(volume of liquid displaced)}$$ I can assume fresh water if it's a lake, gravitational acceleration is a constant, and the volume of the cell is small ($1\ mm$ radius). $$F_b = \mathrm{(1000\ kg/m^3)(9.80\ m/s^2)(0.0000000041\ m^3) = 0.00004018\ N}$$ How does one bring the barometric pressure to bear on this formula?

Answer: This is an answer to the specific refinement of this question that Vint asked in the comments to my other answer. In this refinement, we have an upside down test tube with some air in it, and we're interested in the effects on a short time scale where dissolution of air into the water is a negligible effect. In this case, the equation we need to make things work is the ideal gas law. It states that for an ideal gas, $PV=nRT$, where $P$ is the pressure of the gas, $V$ is its volume, $n$ is the amount of gas (in moles), $T$ is the temperature, and $R$ is the ideal gas constant, whose value (about $8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}$) has been determined empirically. It says that any ideal gas will obey this relationship, so if we have one unknown, we can solve for it. In our case, we can simplify. Since the air never leaves the bubble, $n$ is constant. We hold $T$ to be constant because we're not talking about temperature, and $R$ is always a constant. Thus, there is a relationship between $P$ and $V$ for our bubble of gas.
We can see that $P_1V_1 = P_2V_2$ (since both equal $nRT$, which is a constant). So if you know the pressure and volume beforehand, and know the pressure after, we can calculate the volume after. Once we have this, it's relatively easy to determine the buoyancy force: $\rho g(V_{tube} + V_{bubble})$, where $\rho$ is the density of water and $g$ is the gravitational acceleration.
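That procedure can be sketched numerically. The numbers below are illustrative only (not taken from the question): Boyle's law gives the new bubble volume, and the total displaced volume then goes into the buoyancy formula.

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.8             # m/s^2

p1 = 102000.0       # Pa (~1020 hPa), initial pressure at the bubble
p2 = 100000.0       # Pa (~1000 hPa), pressure after the barometer drops
v1 = 2.0e-6         # m^3 of trapped air at p1 (made-up value)
v_tube = 1.0e-6     # m^3, submerged volume of the tube walls (made-up value)

# Boyle's law at constant n and T: P1*V1 = P2*V2.
v2 = p1 * v1 / p2

# Buoyant force on tube plus expanded bubble.
buoyant_force = RHO_WATER * G * (v_tube + v2)  # newtons
print(v2, buoyant_force)
```

The bubble expands by the ratio of the pressures, so a drop in barometric pressure increases the displaced volume and hence the buoyant force.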
{ "domain": "physics.stackexchange", "id": 68873, "tags": "pressure, buoyancy" }
Angle of a hanging ball in system trying to approach speed of light
Question: A common example of acceleration is a ball hanging from the top of the car. The angle this hanging ball makes from zero is dependent on the acceleration of the car. What happens as we allow the car to attempt to approach the speed of light at constant acceleration? My expectation is that since the car cannot reach c, its acceleration must begin to decrease at some point. From this expectation I would further assume that the angle the ball makes to an observer would decrease back to zero. Is this expectation correct? Where did I go wrong?

Answer: You have come to the correct conclusion, but the reasoning is not correct. Indeed, from the point of view of an observer who is staying still, the acceleration of the car is decreasing to 0 as the speed of the car approaches $c$. But you can't just use a non-relativistic approach to calculate the angle of the thread the object hangs on. At least I do not know how to do it correctly. A better approach would be first to calculate the position of the thread in the frame of reference attached to the car. Looks like it's really simple(*): a person inside the car "feels" constant acceleration, so the position of the thread will also be constant. Now we only need to change the frame of reference and find out how this picture looks in the frame of reference which is at rest. There is no need to calculate any forces to do that. Imagine that the body and thread are just painted on the wall of the car. $y_1 - y_0$ remains the same, $x_1 - x_0$ gets smaller because of relativistic length contraction, so the angle becomes smaller.

(*) I am not quite sure about this statement. Yes, the person inside the car would feel constant acceleration. But as for the gravity field (which, from the point of view of the observer, is now produced by some "earth" flying past quite fast) - well, I do not know.
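The geometric argument in the last paragraph can be made quantitative. If the thread makes a fixed angle $\theta_{car}$ with the vertical in the car's frame, then in the lab frame the horizontal leg of the triangle is contracted by $\gamma$ while the vertical leg is unchanged, so $\tan\theta_{lab} = \tan\theta_{car}/\gamma$. A small Python sketch (the 30° car-frame angle is an arbitrary example):

```python
import math

def lab_frame_angle(theta_car, beta):
    """Thread angle from vertical as seen by a stationary observer.

    theta_car: angle (radians) in the car's own frame.
    beta: car speed as a fraction of c.
    The horizontal leg is length-contracted by gamma, so
    tan(theta_lab) = tan(theta_car) / gamma.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return math.atan(math.tan(theta_car) / gamma)

theta_car = math.radians(30.0)
for beta in (0.0, 0.5, 0.9, 0.99):
    print(f"beta={beta}: {math.degrees(lab_frame_angle(theta_car, beta)):.1f} deg")
```

The printed angles shrink monotonically toward zero as the speed approaches $c$, matching the length-contraction picture in the answer.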
{ "domain": "physics.stackexchange", "id": 52468, "tags": "general-relativity" }
TicTacToe game with AI in ruby - follow-up
Question: A week ago I posted my TicTacToe game follow-up question. The suggestions were referring mainly to the lack of polymorphism. Here's the new code; hopefully there's not a lot (or anything at all) to improve by now (except separating board functionality from the Game class, but it seems to be too much work anyway). As always, suggestions about structure, logic etc. are welcome:

# the game board
class Board
  attr_accessor :board

  def initialize
    @board = (1..9).to_a
  end

  def display_board
    puts "\e[H\e[2J" # ANSI clear
    @board.each_slice(3).with_index do |row, idx|
      print " #{row.join(' | ')}\n"
      puts ' ---+---+---' unless idx == 2
    end
    puts
  end

  def welcome_msg
    print "\nWelcome to Tic Tac Toe.\n\n"
    puts 'Enter 1 to play against another player, 2 to play against an evil AI'\
         ', 3 to watch evil AI play against kind AI.'
    puts 'Type EXIT anytime to quit.'
  end

  def cell_open?(position)
    @board[position - 1].is_a?(Fixnum)
  end

  def win_game?(symbol)
    sequences = [[0, 1, 2], [3, 4, 5], [6, 7, 8],
                 [0, 3, 6], [1, 4, 7], [2, 5, 8],
                 [0, 4, 8], [2, 4, 6]]
    sequences.any? do |seq|
      return true if seq.all? { |a| @board[a] == symbol }
    end
    false
  end

  def full?
    @board.any? do |cell|
      return false if cell.is_a? Fixnum
    end
    true
  end

  def place_mark(position, symbol)
    @board[position - 1] = symbol
  end
end

# game logic
class Game
  def initialize
    @board = Board.new
    start_screen
  end

  def start_screen(choice = nil)
    @board.welcome_msg
    @player1 = Human.new(@board, 'Player 1', 'X') # defaults
    @player2 = AI.new(@board, 'Evil AI', 'O') # defaults
    until (1..3).include?(choice)
      choice = gets.chomp
      exit if choice.downcase == 'exit'
      game_modes(choice.to_i)
    end
  end

  def game_modes(choice)
    @board.display_board
    case choice
    when 1 then @player2 = Human.new(@board, 'Player 2', 'O')
    when 3
      @player1 = AI.new(@board, 'Kind AI', 'X')
      @player2 = AI.new(@board, 'Evil AI', 'O')
    else puts 'You silly goose, try again.'
    end
    @current_player = @player2
    run_game
  end

  def run_game
    until game_over
      swap_players
      check_and_place
    end
  end

  def game_over
    @board.win_game?(@current_player.symbol) || @board.full?
  end

  def check_and_place
    position = @current_player.take_input
    @board.place_mark(position.to_i, @current_player.symbol) unless position.nil?
    @board.display_board
    result?
  end

  def result?
    if @board.win_game?(@current_player.symbol)
      puts "Game Over, #{@current_player.name} has won."
      exit
    elsif @board.full?
      puts 'Draw.'
      exit
    end
  end

  def swap_players
    case @current_player
    when @player1 then @current_player = @player2
    else @current_player = @player1
    end
  end
end

# human players in the game
class Human
  attr_reader :name, :symbol

  def initialize(board, name, symbol)
    @board = board
    @name = name
    @symbol = symbol
  end

  def take_input(input = nil)
    until (1..9).include?(input) && @board.cell_open?(input)
      puts "Choose a number (1-9) to place your mark #{name}."
      input = validate_input(gets.chomp)
    end
    input
  end

  private

  def validate_input(input)
    if input.to_i == 0
      exit if input.downcase == 'exit'
      puts 'You can\'t use a string, silly.'
    else
      position = validate_position(input.to_i)
    end
    position
  end

  def validate_position(position)
    if !(1..9).include? position
      puts 'This position does not exist, chief.'
      puts 'Try again or type EXIT to, well, exit.'
    elsif !@board.cell_open? position
      puts 'Nice try but this cell is already taken.'
      puts 'Try again or type EXIT to, well, exit.'
    end
    position
  end
end

# AI players in the game
class AI
  attr_reader :name, :symbol, :board

  def initialize(board, name, symbol)
    @board = board
    @name = name
    @symbol = symbol
  end

  def take_input
    loading_simulation
    check_win(board)
    return @finished if @finished
    check_block(board)
    return @finished if @finished
    check_defaults(board)
    return @finished if @finished
    # failsafe check
    (1..9).reverse_each { |i| return i if board.board[i - 1].is_a? Fixnum }
  end

  private

  # first check if possible to win before human player.
  def check_win(board)
    @finished = false
    1.upto(9) do |i|
      origin = board.board[i - 1]
      board.board[i - 1] = 'O' if origin.is_a? Fixnum
      # put it there if AI can win that way.
      return @finished = i if board.win_game?('O')
      board.board[i - 1] = origin
    end
  end

  # if impossible to win before player,
  # check if possible to block player from winning.
  def check_block(board)
    @finished = false
    1.upto(9) do |i|
      origin = board.board[i - 1]
      board.board[i - 1] = 'X' if origin.is_a? Fixnum
      # put it there if player can win that way.
      return @finished = i if board.win_game?('X')
      board.board[i - 1] = origin
    end
  end

  # if impossible to win or block, default placement to center.
  # if occupied, choose randomly between corners or sides.
  def check_defaults(board)
    @finished = false
    if board.board[4].is_a? Fixnum
      @finished = 5
    else
      rand < 0.51 ? possible_sides(board) : possible_corners(board)
    end
  end

  def possible_sides(board)
    [2, 4, 6, 8].each do |i|
      return @finished = i if board.board[i - 1].is_a? Fixnum
    end
  end

  def possible_corners(board)
    [1, 3, 7, 9].each do |i|
      return @finished = i if board.board[i - 1].is_a? Fixnum
    end
  end

  def loading_simulation
    str = "\r#{name} is scheming"
    10.times do
      print str += '.'
      sleep(0.1)
    end
  end
end

Game.new

Answer: Board

Whenever I look at code, I first look at the shape and the color. When I look at the code for Board, I find a lot of mixed colors in my color scheme. This suggests to me that maybe you are mixing data with logic. There are also a lot of literals in there. Perhaps you can extract these and replace them with named constants or methods?

Are you following the SRP? For instance, what does welcome_msg have to do with the board? Perhaps this is a little more ambiguous, but what about display_board?

For @board, you are using a 9-element array which seems okay. You might consider making it a 2d-array to make the public interface a little nicer, but I suppose it's fine. But why initialize them with the numbers 1 through 9?
It seems to me that the board should be agnostic regarding its contents. The indices already indicate the positions, and having the contents be nil more clearly indicates that it is empty, IMHO.

Are you happy with the argument name for #win_game? What about player or player_symbol? Later in the code you use the term mark, so what about mark? What about the method name? Is board supposed to know anything about the rules of the game?

The local variable sequences is really a constant. Consider extracting it. You might also want to break it up into rows, columns and diagonals:

ROWS = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
COLUMNS = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
DIAGONALS = [[0, 4, 8], [2, 4, 6]]

I always find that the literals true, false and nil are code smells in ruby. This is because ruby expressions are always implicitly truthy or falsey and either nil or not nil. This means that the expression return true if condition can almost always be written more succinctly and more efficiently as just condition (the exception being when you really need true and not just truthy).

Game

I found this rather complex to read. A first suggestion would be to use attributes. That will get rid of all those @ signs :).

In #initialize you are calling start_screen. But start screen has nothing to do with initializing. It is already running the game. Why not move it to the run_game method?

Should all methods be public? What methods do you want clients to call?

You are setting up the player defaults in start_screen, only to then potentially change them later. Why not set them once and only once?

The method game_over is a predicate, so should be named game_over?. The method result? is not a predicate, so should be named result or perhaps something else like display_result.

I like the name of the method check_and_place, but should it be responsible for drawing the board and checking the result as well?
You might be better off using a plain old if-else instead of a case statement in swap_players.

Example code

Here are some mostly complete examples. I feel that there is more room to move stuff around, but they should indicate the things I touched upon. I feel that the example Game class still has too many conditions and too much raw data.

class Board
  ROWS      = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
  COLUMNS   = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
  DIAGONALS = [[0, 4, 8], [2, 4, 6]]

  def initialize
    @cells = Array.new(9)
  end

  def [](position)
    @cells[position - 1]
  end

  def []=(position, player)
    fail RangeError unless (1..@cells.size).include? position
    @cells[position - 1] = player
  end

  def full?
    @cells.all?
  end

  def three_in_a_row?(player)
    (ROWS + COLUMNS + DIAGONALS).any? do |sequence|
      sequence.all? { |cell| @cells[cell] == player }
    end
  end
end

class Game
  def initialize
    @board = Board.new
  end

  def run
    welcome_message
    run_game
  end

  private

  def welcome_message
    puts "\nWelcome to Tic Tac Toe.\n"
    puts 'Enter 1 to play against another player, 2 to play against an evil AI'\
         ', 3 to watch evil AI play against kind AI.'
    puts 'Type EXIT anytime to quit.'
  end

  def run_game
    p1, p2 = select_players
    until game_over?
      swap_current_player(p1, p2)
      check_and_place
      draw_board
    end
    display_result
  end

  def select_players
    choice = nil
    until (1..3).include?(choice)
      input = gets.chomp
      exit if input.downcase == 'exit'
      choice = input.to_i
    end
    case choice
    when 1 then [Human.new(@board, 'Player 1', 'X'), Human.new(@board, 'Player 2', 'O')]
    when 2 then [Human.new(@board, 'Player 1', 'X'), AI.new(@board, 'Evil AI', 'O')]
    when 3 then [AI.new(@board, 'Kind AI', 'X'), AI.new(@board, 'Evil AI', 'O')]
    end
  end

  def game_over?
    @current_player && (@board.three_in_a_row?(@current_player.symbol) || @board.full?)
  end

  def swap_current_player(p1, p2)
    @current_player = (@current_player == p1 ? p2 : p1)
  end

  def display_result
    if @board.three_in_a_row?(@current_player.symbol)
      puts "Game Over, #{@current_player.name} has won."
    else
      puts 'Draw.'
    end
  end
end
{ "domain": "codereview.stackexchange", "id": 17453, "tags": "ruby, tic-tac-toe, ai" }
How much time will it take to move an object whose length is equal to one light year?
Question: Suppose there's a stick whose length is one light year and I push it from one side by one centimeter. How much time would it take for its other side to move by one centimeter and why?

Answer: It depends on the material. When you push one end of the stick, you move the atoms at the very end of the stick. Those atoms push the atoms next to them, those atoms push the next atoms, and so on down the stick. This is a sound wave that travels down the stick, so the time you have to wait for the other end to move is the length of the stick divided by the speed of sound in the material of the stick. If the stick is wooden, the speed of sound is about 4000 m/s (compared with 330 m/s in air). It would take $\frac{9.5\cdot10^{15}\,m}{4000\,m/s} \approx 2.4\cdot10^{12}$ seconds (about 75 000 years) for the other end of the stick that is a light-year away to move.
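A quick numeric check of that estimate (the speed of sound in wood is the value used in the answer; the steel figure is an added rough number for comparison):

```python
LIGHT_YEAR_M = 9.46e15       # metres in one light year (approximate)
SECONDS_PER_YEAR = 3.156e7   # approximate

# Rough longitudinal sound speeds, m/s.
speeds = {"wood": 4000.0, "steel": 5900.0}

for material, v in speeds.items():
    t = LIGHT_YEAR_M / v  # time for the push to propagate to the far end
    print(f"{material}: {t:.2e} s = {t / SECONDS_PER_YEAR:,.0f} years")
```

Either way the signal arrives tens of thousands of years later; nothing close to the speed of light, let alone faster.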
{ "domain": "physics.stackexchange", "id": 43106, "tags": "speed-of-light, faster-than-light" }
How is a bound state defined in quantum mechanics?
Question: How is a bound state defined in quantum mechanics for states which are not eigenstates of the Hamiltonian i.e. which do not have definite energies? Can a superposition state like $$\psi(x,t)=\frac{1}{\sqrt{2}}\phi_1(x,t)+\frac{1}{\sqrt{2}}\phi_2(x,t), $$ where $\phi_1$ and $\phi_2$ are energy eigenstates be a bound state? How to decide? Answer: Bound states are usually understood to be square-integrable energy eigenstates; that is, wavefunctions $\psi(x)$ which satisfy $$ \int_{-\infty}^\infty|\psi(x)|^2\text dx<\infty \quad\text{and}\quad \hat H \psi=E\psi. $$ This is typically used in comparison to continuum states, which will (formally) obey the eigenvalue equation $\hat H\psi=E\psi$, but whose norm is infinite. Because their norm is infinite, these states do not lie inside the usual Hilbert space $\mathcal H$, typically taken to be $L_2(\mathbb R^3)$, which is why the eigenvalue equation is only formally true if taken naively - the states lie outside the domain of the operator. (Of course, it is possible to deal rigorously with continuum states, via a construct known as rigged Hilbert spaces, for which a good reference is this one.)
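The square-integrability criterion is easy to see numerically: as the integration window grows, the norm of a normalizable state converges while that of a (formal) plane-wave state grows without bound. A toy Python sketch, with a Gaussian standing in for a bound-state wavefunction:

```python
import math

def norm_on_interval(psi_sq, half_width, n=20000):
    """Midpoint-rule approximation of the integral of |psi|^2 over [-L, L]."""
    dx = 2.0 * half_width / n
    return sum(psi_sq(-half_width + (k + 0.5) * dx) for k in range(n)) * dx

gaussian = lambda x: math.exp(-x * x)  # |psi|^2 of a normalizable state
plane_wave = lambda x: 1.0             # |psi|^2 of e^{ikx}: constant

for L in (5.0, 50.0, 500.0):
    print(L, norm_on_interval(gaussian, L), norm_on_interval(plane_wave, L))
# The Gaussian column settles at sqrt(pi); the plane-wave column grows as 2L.
```

This is the operational content of the definition above: the Gaussian-like state has a finite norm and can be rescaled to probability 1, while the plane wave cannot.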
{ "domain": "physics.stackexchange", "id": 84740, "tags": "quantum-mechanics, hilbert-space, definition, superposition, quantum-states" }
Can I use AI to interpret XML documents?
Question: I am thinking about a system which gets XML documents in various structures, but with essentially the same data in them. For the example, let's assume each document contains data about one or more persons. So the AI would recognize a name. Somewhere else in the document there is the postal address of our fictional person. The AI should now "see" the address and conclude it belongs to our person. Somewhere else again, there is a phone number in the document. Again, our AI should see the connection between our person and this phone number. This wouldn't be a job for an AI if there wasn't a catch. If the task were merely to find and map strings like addresses and phone numbers, we could simply use a regex to match our "target strings". The catch in this scenario would be this: the XML document might contain other data which does not belong to our person but is, for example, a valid phone number and thus will match a regex. Would it be possible for an AI to learn this? If yes, with which framework would someone create such an AI? Sample XML document:

<?xml version="1.0" encoding="utf-8" ?>
<document>
  <data>
    <foo>
      <bar>
        <person>
          <name>John Doe</name>
        </person>
      </bar>
      <address>
        <street>Main street 1</street>
        <city>1111 Twilight town</city>
        <country>sample country</country>
      </address>
      <phone>+123 123 123</phone>
    </foo>
    <foo>
      <bar>
        <person>
          <name>Jane Doe</name>
        </person>
      </bar>
      <address>
        <street>Broadway 42</street>
        <city>4521 Traverse town</city>
        <country>sample country</country>
      </address>
      <phone>+123 412123</phone>
    </foo>
  </data>
  <creator>
    <!-- Note: While this looks like a valid person, -->
    <!-- this data should not be matched by the AI -->
    <name>Sam Smith</name>
    <office>
      <street>Seaside road 5</street>
      <city>4521 Traverse town</city>
      <country>sample country</country>
    </office>
    <phone>+123 555 555</phone>
  </creator>
</document>

Answer: XML, HTML and less formal languages all respond quite nicely to being transformed or interrogated within a graph framework.
XML and HTML are particularly useful in that they conform strictly to a tree-structure. That means that any good data components can be measured in terms of tree-distance to any other "good" data components. If you extract your regex-friendly terms and keep track of where within their tree they are found, you may be able to cast those values into a general document-space vector (it might only need to be one-dimensional), allowing you to identify clusters of "good" vs anomalous sections of "bad" data based on a simple distance metric, or anomaly detection algorithm - say an isolation-forest that runs on information density for example. This depends on your data, and how much of it you can find, ideally already tagged up containing good vs bad. If you're looking to scrape reliable address-contents, then yes, you're likely to score hits on names, addresses, postcodes and phone-numbers all appearing as tightly connected clustered groups, all within one or two nodes-distance from one another. Meanwhile, an annotation containing a phone number lodged somewhere else is less likely to be a match. Different documents will have different threshold densities, and differing anomaly to conformant ratios, so you'd have a task on your hands to figure out some way to automatically tune your parameters on any given document set. In the past, I've tried doing this against html by flattening all the content into a single string of text and a similar approach yielded half-decent results, but if you're looking at XML, it's fair to expect the structure to yield more information.
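A minimal Python sketch of that tree-distance idea, run against a trimmed copy of the sample document from the question (the phone regex and the path-based distance metric are illustrative placeholders, not a recommendation):

```python
import re
import xml.etree.ElementTree as ET

DOC = """<document>
  <data>
    <foo>
      <person><name>John Doe</name></person>
      <phone>+123 123 123</phone>
    </foo>
  </data>
  <creator>
    <name>Sam Smith</name>
    <phone>+123 555 555</phone>
  </creator>
</document>"""

PHONE_RE = re.compile(r"\+[\d ]{6,}")  # crude placeholder pattern

def tagged_paths(root):
    """Collect (path, text) for names and phone-like strings."""
    found = []
    def walk(elem, path):
        here = f"{path}/{elem.tag}"
        text = (elem.text or "").strip()
        if elem.tag == "name" or PHONE_RE.fullmatch(text):
            found.append((here, text))
        for child in elem:
            walk(child, here)
    walk(root, "")
    return found

def tree_distance(p, q):
    """Edges from one node up to the common ancestor and back down to the other."""
    a, b = p.split("/"), q.split("/")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

paths = tagged_paths(ET.fromstring(DOC))
name_path = next(p for p, t in paths if t == "John Doe")
for p, t in paths:
    if t.startswith("+"):
        print(t, tree_distance(name_path, p))
# John Doe's own phone sits only a few edges away; the creator's is farther.
```

The clustering intuition falls out directly: the phone that belongs to the person scores a small tree distance to the name, while the creator's phone, matched by the same regex, scores a larger one and can be flagged as anomalous.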
{ "domain": "ai.stackexchange", "id": 311, "tags": "machine-learning, applications" }
If I'm using a compressor typically used for powering nail guns and other tools, why can't I make the air colder when I release it?
Question: I've always been fascinated in learning how to make liquid air, and I learned that there is a branch called physical chemistry, which deals in the manufacturing of different chemicals. My question is, if you need a compressor to compress nitrogen and oxygen into a tank until it's just about to explode, the temperature rises due to the molecules being pressed together. When I release that air, why doesn't that tank get cold? Isn't expansion supposed to cause the pressure to drop, and therefore, lower the temperature at the same time? Answer: I too use a compressed air-driven nailgun, and would offer the following observations. When I run my Speedaire 1 1/2 HP compressor into its 20 gallon tank to a pressure of 90 PSI, the compressor cylinder fins get hot- and throw the heat off into the surroundings. The tank itself stays at about room temperature. When I then release the compressed air through a blowdown nozzle (to blow sawdust off lumber before painting it), the escaping air is indeed cold, and the nozzle tip gets cold too as a result. All this is exactly in keeping with the known behavior of air when compressed, allowed to equilibrate with the environment, and then "throttled" back to ambient pressure. The incremental decrease per second in tank pressure upon running the blowdown nozzle is small compared to the pressure drop between the hose leading into the nozzle handle and the ambient immediately outside the nozzle. This means the cooling effect due to expansion is concentrated in the nozzle and not in the tank. So the nozzle handle gets noticeably cooler while the tank stays close to ambient temperature.
{ "domain": "physics.stackexchange", "id": 48021, "tags": "energy, pressure, air, molecules" }
Why does the modulating shape appear on both sides of the carrier signal in AM?
Question: It's about Amplitude Modulation. Can anyone tell me why this signal wave, or modulating waveform shape, appears on both sides of the carrier sine wave? Why not on one side only? $$x_c(t) = A_c[1 + \mu x_m(t)]\cos(\omega_ct)$$ I copied this image from Wikipedia.

Answer: In a word: AC coupling. Even if you tried to do modulation on one side only, AC coupling would make it appear symmetrically about zero. It is neither practical, nor necessary, to do single sided modulation. And the expression you have shows that there is a number that multiplies the cosine function. That means the cosine will be multiplied both while it is going positive and while it is going negative. So the math and the image agree with each other. Note that the '$\mu x_m(t)$' factor has to be greater than -1 always - otherwise you get weird effects that you will experience as distortion (if the multiplier goes negative, the amplitude starts to increase again...)
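A quick numeric sketch of the expression in the question (the frequencies and modulation index below are made up): the envelope $A_c[1 + \mu x_m(t)]$ multiplies a cosine that swings between $+1$ and $-1$, so the extremes of the modulated waveform come out symmetric about zero and the modulating shape shows up on both sides.

```python
import math

A_C = 1.0               # carrier amplitude
MU = 0.5                # modulation index, kept below 1 (no overmodulation)
F_C, F_M = 100.0, 5.0   # carrier and message frequencies in Hz

def am(t):
    xm = math.cos(2 * math.pi * F_M * t)           # message x_m(t)
    return A_C * (1 + MU * xm) * math.cos(2 * math.pi * F_C * t)

samples = [am(n / 10000.0) for n in range(10000)]  # one second at 10 kHz
top, bottom = max(samples), min(samples)
print(top, bottom)  # roughly +(1 + MU) and -(1 + MU): a symmetric envelope
```

The maximum and minimum come out at about $\pm(1+\mu)A_c$, which is exactly the two-sided envelope seen in the Wikipedia figure.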
{ "domain": "physics.stackexchange", "id": 27472, "tags": "radio-frequency" }
Problem with CNN
Question: I am using the BreakHis database. More specifically, I am trying to classify the 400X images. The sizes of the images are $700x460x3$. Here are the details of the dataset. Also, here is the code for the classification: from keras.preprocessing.image import ImageDataGenerator datagen = ImageDataGenerator() train_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/train' , class_mode= 'categorical' , batch_size=32,color_mode='rgb') val_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/validation' , class_mode= 'categorical' , batch_size=32,color_mode='rgb') test_it = datagen.flow_from_directory( 'C:/Users/ahmed.zaalouk/Downloads/test' , class_mode= 'categorical' , batch_size=32,color_mode='rgb') from keras.regularizers import l2 from keras.models import Sequential from keras.layers import Add, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization, Activation from tensorflow.keras import activations # Creating the model CNN_model = Sequential() # The First Block CNN_model.add(Conv2D(32, kernel_size=3,kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same', input_shape=(700, 460,3))) CNN_model.add(Activation(activations.relu)) CNN_model.add(BatchNormalization()) CNN_model.add(MaxPooling2D(2, 2)) # The Second Block CNN_model.add(Conv2D(32, kernel_size=3, kernel_initializer='he_uniform', kernel_regularizer=l2(0.0005), padding='same')) CNN_model.add(Activation(activations.relu)) CNN_model.add(BatchNormalization()) CNN_model.add(MaxPooling2D(2, 2)) from keras.optimizers import Adam, SGD from keras.engine.training import Model from keras import backend as K, regularizers from keras import losses CNN_model.add(Flatten()) # Layer 1 CNN_model.add(Dense(512)) # 512 units # Layer 2 CNN_model.add(Dense(512, activation='relu')) # 512 units CNN_model.add(Dropout(0.5)) # Layer 3 CNN_model.add(Dense(8, activation='softmax')) CNN_model.compile(optimizer="Adam", loss = 'categorical_crossentropy', metrics = 
['acc']) CNN_model.fit_generator(train_it, steps_per_epoch=19, validation_data=val_it, validation_steps=5) Here is the model summary : Model: "sequential_11" _________________________________________________________________ Layer (type) Output Shape Param ================================================================= conv2d_29 (Conv2D) (None, 700, 460, 32) 896 _________________________________________________________________ activation_27 (Activation) (None, 700, 460, 32) 0 _________________________________________________________________ batch_normalization_27 (Batc (None, 700, 460, 32) 128 _________________________________________________________________ max_pooling2d_27 (MaxPooling (None, 350, 230, 32) 0 _________________________________________________________________ conv2d_30 (Conv2D) (None, 350, 230, 32) 9248 _________________________________________________________________ activation_28 (Activation) (None, 350, 230, 32) 0 _________________________________________________________________ batch_normalization_28 (Batc (None, 350, 230, 32) 128 _________________________________________________________________ max_pooling2d_28 (MaxPooling (None, 175, 115, 32) 0 _________________________________________________________________ flatten_8 (Flatten) (None, 644000) 0 _________________________________________________________________ dense_24 (Dense) (None, 512) 329728512 _________________________________________________________________ dense_25 (Dense) (None, 512) 262656 _________________________________________________________________ dropout_8 (Dropout) (None, 512) 0 _________________________________________________________________ dense_26 (Dense) (None, 8) 4104 ================================================================= Total params: 330,005,672 Trainable params: 330,005,544 Non-trainable params: 128 _________________________________________________________________ None I am getting this error and I don't know how to fix it : 
--------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) <ipython-input-80-7fdd4a4a32e1> in <module> ----> 1 CNN_model.fit_generator(train_it, steps_per_epoch=19, validation_data=val_it, validation_steps=5) ~\Anaconda3\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch) 1916 'will be removed in a future version. ' 1917 'Please use `Model.fit`, which supports generators.') -> 1918 return self.fit( 1919 generator, 1920 steps_per_epoch=steps_per_epoch, ~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1156 _r=1): 1157 callbacks.on_train_batch_begin(step) -> 1158 tmp_logs = self.train_function(iterator) 1159 if data_handler.should_sync: 1160 context.async_wait() ~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds) 887 888 with OptionalXlaContext(self._jit_compile): --> 889 result = self._call(*args, **kwds) 890 891 new_tracing_count = self.experimental_get_tracing_count() ~\Anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds) 948 # Lifting succeeded, so variables are initialized and we can run the 949 # stateless function. 
--> 950 return self._stateless_fn(*args, **kwds) 951 else: 952 _, _, _, filtered_flat_args = \ ~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs) 3021 (graph_function, 3022 filtered_flat_args) = self._maybe_define_function(args, kwargs) -> 3023 return graph_function._call_flat( 3024 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access 3025 ~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1958 and executing_eagerly): 1959 # No tape is watching; skip to running the function. -> 1960 return self._build_call_outputs(self._inference_function.call( 1961 ctx, args, cancellation_manager=cancellation_manager)) 1962 forward_backward = self._select_forward_and_backward_functions( ~\Anaconda3\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager) 589 with _InterpolateFunctionError(self): 590 if cancellation_manager is None: --> 591 outputs = execute.execute( 592 str(self.signature.name), 593 num_outputs=self._num_outputs, ~\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 57 try: 58 ctx.ensure_initialized() ---> 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: InvalidArgumentError: Input to reshape is a tensor with 4194304 values, but the requested shape requires a multiple of 644000 [[node sequential_11/flatten_8/Reshape (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\core.py:672) ]] [Op:__inference_train_function_12909] Errors may have originated from an input operation. 
Input Source operations connected to node sequential_11/flatten_8/Reshape: sequential_11/max_pooling2d_28/MaxPool (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\pooling.py:355) sequential_11/flatten_8/Const (defined at C:\Users\ahmed.zaalouk\Anaconda3\lib\site-packages\keras\layers\core.py:667) Function call stack: train_function Edit: The number of images in the training set is 1275. The number of images in the validation set is 365. Answer: The default target size in flow_from_directory is 256 * 256 (height * width). So your data is resized to 256 * 256 while reading, yet you specified input_shape=(700, 460, 3) in the first layer, so the shapes no longer agree at the Flatten layer. The relevant signature is: ImageDataGenerator.flow_from_directory( directory, **target_size=(256, 256)**, color_mode="rgb", classes=None, class_mode="categorical", batch_size=32, shuffle=True, seed=None, save_to_dir=None, save_prefix="", save_format="png", follow_links=False, subset=None, interpolation="nearest", ) Passing target_size=(700, 460) to each flow_from_directory call makes the generator output match the model's input shape.
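Both numbers in the error message can be reproduced with plain shape arithmetic (a sketch; note that 32 appears twice below because the Conv2D channel count and the batch size both happen to be 32 in this model):

```python
def flat_size(height, width, channels=32, pool_steps=2):
    """Flattened feature size after `pool_steps` 2x2 max-poolings
    over a (height, width, channels) feature map."""
    for _ in range(pool_steps):
        height, width = height // 2, width // 2
    return height * width * channels

expected = flat_size(700, 460)        # what Flatten/Dense were built for
delivered = flat_size(256, 256) * 32  # a batch of 32 resized 256x256 images
print(expected, delivered)            # 644000 4194304
```

That is, the Flatten layer was traced for 175 * 115 * 32 = 644000 values per image, but the generator delivers images that flatten to a different size, hence the "requested shape requires a multiple of 644000" error.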
{ "domain": "datascience.stackexchange", "id": 9936, "tags": "classification, cnn, convolutional-neural-network" }
Motion planning for industrial robots
Question: EDIT Perhaps I should make the question even broader: I have a simple motion controller which needs to receive positions and velocities every 4 ms (its firmware is written in C++). I want to use ROS (currently using kinetic) to plan some trajectory (complicated or simple), and feed it to the motion controller. The problem I'm facing now is that the generated trajectory is not smooth and is not ready to be fed as-is to the motion controller. Should I use other motion planning tools, or should I use tools to smooth the trajectory? More details: I think I understand all of the tool-chain of ROS-I, but I'm missing the part about motion planning. I saw some questions about it, but most of them are pretty old. The ROS-I tutorials show two ways (in general) for motion planning. One is using MoveIt! and the other is using descartes (which is still experimental?). I've tried using MoveIt!; however, I need to interpolate the generated trajectories at constant time gaps and feed that to my controller (I do it in the robot driver - not with ROS). The trajectory that MoveIt! generates is pretty awful. The acceleration profile is really bad, and it affects the velocity and position. For example: image. Anyway, as I understand it, MoveIt! is good for complicated motion planning tasks, for example obstacle avoidance, but the paths that it generates are not smooth. This question talks about interpolating; is the given answer still valid? Someone wrote there that using the ROS Industrial Trajectory Filter is the solution, but after reading about it and looking at the code, I don't think it solves the problem. What I'm looking for is a motion planning tool more suitable for basic industrial robotics that generates smooth, reliable paths. Is descartes the tool I'm looking for? Last question: I saw there are Inverse-Kinematics solvers (such as KDL, trac_IK). Are they just used by the motion planners?
Or do these packages also do motion planning themselves? Originally posted by ManMan88 on ROS Answers with karma: 44 on 2017-12-24 Post score: 1 Original comments Comment by ManMan88 on 2017-12-25: I saw this issue, and this issue. I'm not sure if I fall into the same category and if this is fixed already Answer: I believe the 'problem' here is that in a ROS 'context', motion planning is considered at a much higher level than what you are referring to. In (industrial) robotics, motion planning is often 'just' trajectory generation, which at that level essentially is interpolation between two Cartesian (or joint) poses. Secondly, none of the packages you mention were meant for what you tried to do with them, which is low-level, hard real-time motion profile interpolation and trajectory generation. That is typically left to either hardware or a low(er)-level software (stack). One example could be machinekit, but there are more. If/when you have that level of abstraction and control sorted out, then start looking at the motion planning tools in ROS. Re: the ROS-I trajectory filter: that does actually include functionality for equidistant trajectory resampling, which seems like it would be what you want/need, but I've not used it, so I can't say anything about the quality of the output. Your 'last question' also points a bit to the conceptual differences between motion planning in various fields or communities: IK solvers are just that: solvers for IK queries. They can be used as part of a motion planner (or trajectory generator in your case), but on their own are definitely not meant (or designed) for those tasks. They deal with a completely different part of the problem (namely: mapping of joint space to Cartesian space poses and vice versa). Originally posted by gvdhoorn with karma: 86574 on 2018-01-12 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ManMan88 on 2018-01-15: Thank you
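The "equidistant trajectory resampling" idea mentioned in the answer can be sketched in a few lines. This is an illustration only, not the actual ROS-I filter; timestamps are integers (e.g. milliseconds) to keep the arithmetic exact, and a 4 ms controller tick would be dt=4:

```python
def resample(times, positions, dt):
    """Linearly resample a piecewise-linear (time, position) trajectory
    at a fixed period dt, producing equidistant samples a fixed-rate
    motion controller could consume."""
    out = []
    t = times[0]
    i = 0
    while t <= times[-1]:
        # Advance to the trajectory segment containing time t.
        while times[i + 1] < t:
            i += 1
        frac = (t - times[i]) / (times[i + 1] - times[i])
        out.append(positions[i] + frac * (positions[i + 1] - positions[i]))
        t += dt
    return out

print(resample([0, 10], [0.0, 1.0], 5))   # [0.0, 0.5, 1.0]
```

A real controller feed would resample velocities the same way (or differentiate the resampled positions) and, as the answer says, run at the driver or firmware level rather than inside ROS.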
{ "domain": "robotics.stackexchange", "id": 29605, "tags": "ros, motion-planning, ros-industrial" }
Program for binary to decimal and vice versa
Question: I started to learn C a week ago and this is my culminating program of what I've learned but I could use a few pointers. #include <math.h> #include <stdio.h> void binary(int n){ int x = 0, m = 0, z = n; while(n >= 1){ ++x; n /= 10; } n = z; int i[x]; x = 0; while (n != 0){ ++x; i[x] = n % 10; n /= 10; } while(x >= 0){ if (i[x] == 1){ if (x == 1) ++m; else m += pow(2,x - 1); } --x; } printf("%d", m); } void convert(int n){ int i[16]; int j, k, p; for(k = 15; k >= 0; --k){ i[k] = n % 2; n /= 2; } int *a, *b; a = &i[0]; b = &i[16]; for( ;*a == 0; ++a) ; while(a != b){ printf("%d", *a); ++a; } } int main() { int n; char m; printf("\nType b for binary to decimal conversion or d for decimal to binary conversion (max 65535)\n"); m = getchar(); printf("\nPrint number to be converted\n"); scanf("%d", &n); if (m == 'b'){ printf("%d converted to a decimal number is ", n); binary(n); } else{ printf("%d converted to a binary number is ", n); convert(n); } } Answer: Two sets of comments: is the functionality sensible or not, and is the code good / open to improvement? Firstly the functionality. It's very odd to put 101 into an int and say that it's 5. The number 5 in an int is always 5; it has various string representations depending on the base used to display it. I would have much preferred seeing 2 functions: one that accepts a string in a given base and returns its value as an int, and one that takes an int and displays it as a string using a given base. In effect you would be recoding atoi and itoa. Now about the implementation of the features you implemented. First, the code works, ++ for that. Nicely split into functions etc. ++ No comments -- gotta have at least some. Commonly accepted C practice is to define and initialize variables on first use.
(Old compilers required all variables to be declared at the start of scope; not any more.) For example char m; printf("\nType b for binary to decimal conversion or d for decimal to binary conversion (max 65535)\n"); m = getchar(); is better as printf("\nType b for binary to decimal conversion or d for decimal to binary conversion (max 65535)\n"); char m = getchar(); Your alignment is somewhat funky. Commonly accepted C styles are while(n >= 1) { ++x; n /= 10; } or while(n >= 1){ ++x; n /= 10; } You should print a '\n' on the last line of your output; it's considered polite. I would have made the conversion functions return something rather than having them do the printing. Imagine you needed to do something else with the converted value. There is no error checking. Entering 42 and asking for a 'b' conversion returns 0. This is in fact an error, and you should say so. n, z, x are not obvious names. I see that z is a copy of n so you can restore it later; call it saven or something like that. Now for the actual logic. Given that you are doing such a strange thing I find it hard to really comment on it. But it's not very clear. Taking convert, I would have converted the input to a string, base 2, then scanned the string back as decimal.
{ "domain": "codereview.stackexchange", "id": 21389, "tags": "beginner, c, number-systems" }
Is it possible to attain (near) light speed in space?
Question: As there is no drag in space/vacuum, is it possible to actually send a probe with enough fuel to have it achieve the speed of light or a value very near to it? Since there are not many significant forces that can slow down the speed (also why planetary motion will still continue for billions of years, I suppose) I am assuming fuel is our only restriction. Is that correct? Answer: As suggested by user John Doty in the comments, we can use the relativistic version of the Tsiolkovsky Rocket eq. $$\Delta v = c \tanh\big (\frac {v_\text{e}}{c} \ln \frac{m_0}{m_1} \big)$$ As there is no drag in space/vacuum, is it possible to actually send a probe with enough fuel to have it achieve the speed of light or a value very near to it? The goal here is to find the wet to dry mass ratio of the rocket $m_0/m_1$ needed for say $.99c$ so we can rearrange $$\exp{\big(\frac{c}{v_e}\text{arctanh}\frac{\Delta v}{c}}\big) = \frac{m_0}{m_1}$$ and now we can plug in an exhaust velocity $v_e = 5$ km/s and our desired ratio of the speed of light to find a whopping $$\frac{m_0}{m_1} = 10^{68917}$$ For a probe with the mass of $1000$ kg, this is a wet mass of $$m_0= 10^{68920} \text{ kg}$$ For comparison, the mass of the sun is only $10^{30} $kg. Another consideration brought in the comments from DKNguyen is There is drag in space from gas and tiny particles. It is a very small amount, but at relativistic speeds you run into a lot more of them a lot faster and they hit a lot harder And so in practice, there will be an effective drag which will cost you kinetic energy $ \propto v^3$ and is not considered in this calculation. As an aside, we can use a Hall-effect thruster that can get to a $v_e \approx c$ (from a comment by Gyro Gearloose) and find $$\frac{m_0}{m_1} = 14$$ which means that we still would need $14$ times the dry mass to get the rocket to $0.99c$.
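Both ratios in the answer can be checked numerically. A sketch in Python; working in log10 avoids overflowing exp for chemical exhaust velocities, where the exponent itself is nearly 69,000:

```python
import math

c = 299_792.458          # speed of light, km/s
beta = 0.99              # target fraction of c

def log10_mass_ratio(ve_km_s):
    """log10 of the wet/dry mass ratio m0/m1 from the relativistic
    rocket equation, rearranged as in the answer:
    m0/m1 = exp((c/ve) * arctanh(delta_v / c))."""
    return (c / ve_km_s) * math.atanh(beta) / math.log(10)

print(int(log10_mass_ratio(5.0)))        # 68917 -> m0/m1 ~ 10**68917
print(round(10 ** log10_mass_ratio(c)))  # 14    -> exhaust at ~c
```

So the two headline numbers, $10^{68917}$ for a 5 km/s chemical exhaust and a ratio of about 14 for an exhaust velocity near $c$, both follow directly from the rearranged equation.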
{ "domain": "physics.stackexchange", "id": 99184, "tags": "speed-of-light, vacuum, rocket-science" }
How to rotate a chiral carbon
Question: I know that to correctly assign R,S priorities I have to rotate the above figure until the H is projecting into the page (away from me). However, I don't know how to rotate the H and the rest of the molecule when the H is in the plane of the page. When I imagine tilting the molecule back I get a see-saw shaped molecule ... obviously incorrect. Answer: Imagine viewing it from the right side: H is upwards and CH3CH2 is downwards; OH is leftwards and CH3 rightwards. H and CH3CH2 point backwards (viewed from the right), so they must be on the vertical line of the Fischer projection, and similarly OH and CH3 on the horizontal. Alternatively: put the dotted-wedge CH3 in any vertical position, say the bottom. Then, imagining all the other groups to be planar, we see in clockwise order: CH3CH2, H, OH. Draw them in the same clockwise sense on the Fischer projection. Note that both are (S).
{ "domain": "chemistry.stackexchange", "id": 11273, "tags": "organic-chemistry, chirality" }
Why does math work for describing and solving physics problems?
Question: The clarified version As far as I understand, Wigner considers it a "miracle" that it is even possible to find a mathematical equation that describes a natural phenomenon. That is not exactly what I was wondering about, though. Let's say such an equation has been found. What exactly does it describe? Do we treat the phenomenon itself as just a black box that happens to "output" numbers that fit into the equation? This idea is supported by the fact that not every intermediate step in solving the system of equations has an obvious physical interpretation. Or does the system of equations mirror the internal "structure/working" of the phenomenon? This, on the other hand, is supported by the following example: Kirchhoff's rule "the algebraic sum of currents in a network of conductors meeting at a point is zero" clearly follows from the fact that no additional charges enter or leave the circuit. Is it a mix of both options above? Maybe throughout history it has been discovered empirically that coming up with equations and then solving them works for physics, but no one really knows why and how it works? An answer along these lines is perfectly fine with me too. I just have not seen the way/method math is used in physics discussed anywhere -- and so wonder if I'm missing something obvious to everyone else. The original question My question is a general one. But to explain what it is asking, let's first take a look at "solving" an electrical circuit using Kirchhoff's laws as an example. Solving an electrical circuit So to find out the directions and amounts of the currents we have written down the equations based on Kirchhoff's laws. Up to this point we were staying in the physics "land" -- because the intuitive/physical interpretation of Kirchhoff's laws is not hard to see. Once we had the system of equations we used the usual/general math techniques to solve the equations.
I guess, the math techniques used to solve equations were discovered much earlier than the concept of the electric circuit (and the task of solving it) was invented/discovered. Also it does not seem possible to "interpret/map" each step taken to solve the equations in terms of the physical phenomena actually happening in the circuit. But still solving the equations let us find the amounts and directions of currents. In other words we went outside of the physics' "land" and into the mathematics' "land" but in the end still came up with the physically correct answers. To sum it up, my question is: Mathematical techniques used to describe physical phenomena are not necessarily specifically invented for physics and do not necessarily have any meaningful physical interpretation. How come these techniques are able to produce correct (can be verified by experiment) results? And on the same note, who came up with the idea of using math for describing things in physics, how did this person come up with the idea? Hopefully, it is possible to understand what I'm asking about. I've tried as hard as possible to make the question clear and concise. But, honestly, I find it challenging to express this question clearly. Anyway, I will be glad to clarify it further as much as needed. Thank you in advance! Answer: One very popular view (as espoused by Max Tegmark) is that (quoting count_to_10) : math works because the universe is based on math http://www.scientificamerican.com/article/is-the-universe-made-of-math-excerpt/ https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis Such a view was common from the time of Pythagoras, through to Kepler and Newton, with attempts to find mystical mathematical patterns in nature, and the description of God as a Geometer. Galileo wrote in 1623 : "The book of nature is written in the language of mathematics." 
An alternative view which is more "down to Earth" is that mathematics developed from the attempt to describe the world using numbers - not simply counting but also measuring (distance, angle, area, volume, weight, etc). This is obvious in the case of Geometry (literally, 'land measurement'). Trigonometry also developed for use in surveying, navigation and astronomy (in the latter case for predicting floods or auspicious astrological events). Probability was developed to answer questions about gambling. Calculus developed from trying to account for the shape of celestial orbits. More recently, the mathematics of chaos arose from weather prediction, and fractal geometry from the practical question of measuring the length of a coastline. Throughout most of its history mathematics developed as a tool of science and technology, from the time of Archimedes to the era of Euler, Lagrange, Gauss and Legendre. So it should not be surprising that it "works" in physics. It was not until about 1850 that Pure Mathematics became recognised as a separate subject. As Paul T points out, the issue was addressed by Eugene Wigner in a famous essay, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" ( http://www.maths.ed.ac.uk/~aar/papers/wigner.pdf.) However, I think this description of "unreasonable effectiveness" clashes with the reality of mathematical physics. Take a look inside Landau & Lifschitz or any other graduate text in mathematical physics. Seeing the horrendous mathematics required to solve many differential equations (Fourier Transforms, Bessel Functions, etc), most of which have no analytical solution anyway, you might then question whether the description of "unreasonable effectiveness" is really appropriate. Even more so when you realise that these complex solutions are still only an approximation to reality since the differential equations have themselves arisen only after making several simplifying assumptions. 
In Quantum Mechanics only the most simple problems can be solved analytically. Some are resolvable only into transcendental equations (eg finite potential barrier). Others are tractable only as "perturbations" of known solutions, or in QED require the summing of infinite series of terms. In some fields special tricks like Renormalization and Regularization are needed to deal with infinities. That linear algebra applies quite well in numerous macroscopic situations of interest is due to the facts that (1) many phenomena are approximately linear over the narrow region of interest, and (2) they are only weakly coupled to each other. Then empirical laws like Hooke's Law and Ohm's Law give sufficiently accurate results without making the calculations too difficult. The Law of Large Numbers, which is the basis of statistical mechanics, is also a great help in getting round the difficulties of solving non-linear equations at the molecular level. Most notably in the case of turbulence, although we can write the Navier-Stokes Equation - which again rests on simplifying assumptions - nobody has yet worked out how to solve it. But even with a system as simple as the Double Pendulum, we can write its equation of motion but we cannot always predict its behaviour. As dmckee says : Think for a moment about what happens to proposed descriptions of reality whose math doesn't work for describing the system they pertain to. Kirchhoff's laws didn't end up in the texts because the man's name is fun to say. When mathematics doesn't provide a solution to a physics problem, it is left out of the textbooks. Or we simplify until the problem is solvable. We concentrate on the problems we can solve, and avoid those we can't. That leaves the impression that mathematics can solve every physics problem. 
So in summary my answer is that : mathematics works in physics because it was developed (in part) for the purpose of describing the world, and it doesn't actually work anywhere near as well as some people make out. Response to The clarified Version We only treat the phenomenon as a black box when we are totally clueless about what is going on. Then we develop empirical equations - we select parameters and vary them to match experimental results. This rarely happens in physics, more so in engineering. Usually we aim to make the equations model the inter-relations of relevant variables : ie mirror the internal structure of the phenomenon. However, in solving those equations we are not restricted to mimicking the phenomenon - unless we're running a simulation. We can use any mathematical short-cuts (eg integration, analogy, symmetry) to predict the end result. Yes, we sometimes use a mixture of these two approaches : eg the Semi-Empirical Mass Formula in nuclear physics, and the various Equations of State for Real Gases. Dimensional Analysis might also come under this category : we choose which variables are relevant, and look for consistent relationships between them. I don't agree with Wigner that there is such a big mystery about the process and its success, that it is a "miracle" and that "nobody knows how it works." I am, as Geremia says, a disciple of Aristotle as Wigner is of Plato. Is it a miracle that we just happen to live on the only habitable planet within sight? Or is that a tautology, since we cannot do otherwise? Likewise I think it is no more a miracle that we've had amazing success applying mathematics to physics than that we've had amazing success applying our minds to developing aerospace, computer and communications technologies. The success of applying mathematics has spurred us to using it almost exclusively, perhaps at the expense of other approaches. 
As I said above, we tend to focus on problems to which maths can be applied, and neglect those to which it can't. And we're not satisfied that we understand something until we can write down and solve the governing equation(s). When existing mathematics fails to apply to a problem, we try or invent new tools, concepts or branches of mathematics to deal with it - such as topology, non-Euclidean geometries, catastrophe theory, fractal geometry, chaos, self-organizing systems and emergence. We forget the many failures which PhD students have had in trying to apply inappropriate mathematics to a stubborn problem.
{ "domain": "physics.stackexchange", "id": 32242, "tags": "soft-question, history, mathematics, epistemology" }
Can we find the size of the total interval in O(N)?
Question: Let's say we are given $N$ intervals of the form $[x, y]$, where both $x, y$ are integers. We want to find the number of integers covered by at least one of the $N$ intervals (see the example for a better understanding). The intervals may or may not intersect. Example: Let N = 3, and the intervals: $a_1 = [1, 5], a_2 = [3, 7], a_3=[5,7]$. The count of the integers is 7, because they cover the numbers $1,2,3,4,5,6,7$. Is it possible to get this count in $O(N)$ time complexity? I tried to come up with a solution; I think that we should sort the intervals in some way and then search them in linear time, but I cannot figure out how we should make the search and how to sort them. Answer: There is a simple $O(n\log n)$ algorithm based on sorting all the endpoints of the intervals. I will describe a more sophisticated algorithm, which outputs the union of the input intervals as a union of internally disjoint intervals (they may have overlapping endpoints). Denote the input intervals by $[s_i,t_i]$. Initialize a variable $a$ to 0. This variable counts the number of active intervals. Let $x$ go over the sorted order of the $s_i,t_i$: If $x = s_i$ and $a = 0$, record $s_i$ as the starting point $s$ of the current interval. If $x = s_i$ (regardless of $a$), increase $a$ by one. If $x = t_i$, decrease $a$ by one. If now $a = 0$, output $[s,x]$. It is not hard to modify this algorithm to output a union of truly disjoint intervals. This algorithm runs in $O(n\log n)$, the running time being dominated by the cost of sorting the endpoints of the intervals. If the endpoints are integers and we can only access them using comparisons, then there is a simple $\Omega(n\log n)$ lower bound obtained by reduction from element uniqueness. Given a list of integers $x_1,\ldots,x_n$, form intervals $[3x_i,3x_i+1]$; I'll let you take it from here.
This might look like cheating (since we assume that the endpoints are integers, and use arithmetic operations in the reduction), but it does rule out certain kinds of algorithms from achieving $o(n\log n)$ performance.
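The sweep described in the answer can be written out directly, specialized to counting covered integers. Start events are ordered before end events at equal coordinates, so intervals that merely touch are merged into one run:

```python
def covered_integers(intervals):
    """Count integers covered by at least one [s, t] interval (endpoints
    inclusive), sweeping sorted endpoints with an active-interval counter."""
    events = []
    for s, t in intervals:
        events.append((s, 0))   # start event; 0 sorts before 1 at equal points
        events.append((t, 1))   # end event
    events.sort()
    active = 0      # the variable `a` from the answer
    total = 0
    start = None
    for x, kind in events:
        if kind == 0:
            if active == 0:
                start = x       # a new merged run begins
            active += 1
        else:
            active -= 1
            if active == 0:
                total += x - start + 1   # integers in the merged run [start, x]
    return total

print(covered_integers([(1, 5), (3, 7), (5, 7)]))   # 7, as in the example
```

The sort dominates, giving the $O(n\log n)$ bound from the answer; everything after the sort is a single linear pass.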
{ "domain": "cs.stackexchange", "id": 10730, "tags": "intervals" }
Does the Higgs boson (or the 125 GeV boson, if it's not exactly the Higgs as predicted) occur in nature?
Question: If the Higgs boson mediates mass interactions, do they exist in nature? Are there Higgs bosons flying around all the time? Or do they only exist for a tiny fraction of a second while they mediate mass? As I understand it, the bosons are all mediators of interactions. We know that photons, gluons, and the W-/Z- bosons exist. Photons persist for very long durations as they speed through the cosmos. Does the Higgs/mass-mediating boson do this? I'm sort of having trouble getting a grasp on at what point the mass boson would exist and how that would work in practical fact. Also, it's entirely possible that this is a nonsensical question based on me misunderstanding everything. :) Answer: You need to distinguish between the Higgs boson, which has just been discovered at the LHC, and the Higgs field. The Higgs field exists everywhere, and it's the interaction of the Higgs field with the electroweak field that gives the W and Z bosons their mass. A different interaction of the field with fermions gives them mass, but I must admit the details of this are somewhat mysterious to me. Anyhow, the Higgs boson is the bit left over after the Higgs field has interacted with the electroweak bosons, and it doesn't do anything except excite physicists. Its importance is that its discovery is evidence for the Higgs field, rather than any special property of its own. It's just an excitation of the Higgs field along the radial direction in the "Mexican hat" model. Higgs bosons exist everywhere in the sense that they are created continuously in cosmic ray collisions or as virtual particles, but any individual Higgs boson rapidly decays. But then this is true for the W and Z bosons as well. It's only the photon that has an extended lifetime.
{ "domain": "physics.stackexchange", "id": 4052, "tags": "higgs" }
Ethernet connection via ROS
Question: Hi guys, I am trying to connect my laptop via LAN to my robot. So my question is, are there already Ethernet interfaces for ROS, with which a node can access the Ethernet port? For example, can I ping my robot? Originally posted by Hankmasta on ROS Answers with karma: 1 on 2014-10-21 Post score: 0 Answer: First, I would suggest you synchronize your robot's clock by installing chrony: $ sudo apt-get install chrony Then, if you have one of the later versions of Ubuntu, you have access to Zeroconf, which simplifies the whole story a lot. With hostname you can find the name of your machine, for instance: $ hostname my_computer $ hostname my_robot Now I suggest the following trick: just add .local at the end of the hostname, and then you can do all the operations you want: $ ping my_computer.local and of course: $ ping my_robot.local After that, set ROS_MASTER_URI and ROS_HOSTNAME on your robot and desktop as in the tutorial. Hope it helps Originally posted by Andromeda with karma: 893 on 2014-10-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19808, "tags": "ros, ethernet" }
Should I remove or interpolate missing values?
Question: I have a dataset containing a very long time series of hourly traffic congestion in a certain city, during a period of ~22 years (number of data points: roughly 24 X 365 X 22 = 192720). I want to use this time series to forecast future hourly traffic congestion values. I have 2 types of missing values in the series: A "single" missing value - ~30 values that are missing, with no certain pattern, i.e. the missing values are sporadically spread across the time series. A missing day - 20 days that are missing altogether, not a single data point for those days. 10 of those 20 days are sporadically spread across the time series, while the other 10 are adjacent (10 days in a row). The overall missing-value rate is around 0.25%, so I'm not worried about removing them altogether for descriptive statistics etc., just wondering if it's correct to remove them for the forecasting part. Also, not sure if I should treat the 2 types of missing values differently. Thanks! Answer: Easy answer: try both and see what works best. Obviously, whatever works best is what you should choose, but let's look into why one method might work better than the other and in what scenario it's more likely that it does. What does "null" mean in your case? This is a very common data question. Does "null" mean that (a) there is just no value AND we know that there isn't a value? Or does "null" mean, (b) there is a value AND we don't know it? Whether your data is continuous or discrete typically can help answer that question. In your case, I'd say (b). Congestion could be measured at any time and would have a value. So your data are measurements/samples out of a continuous distribution. So, in this case, interpolating might be a good idea. You are in a "luxury" situation where the amount of missing data is relatively small. So whichever option you choose likely won't greatly impact the end solution.
My personal take: Do interpolate the data, as having an interpolation mechanism could help your end-model deal with missing data in production from a functional perspective. You don't want your code to break if it encounters a null value. Also, not sure if I should treat the 2 types of missing values differently. Interesting question. You might want to interpolate the values differently. Instead of taking the average of the window near the missing values, when a day is missing, you might want to use the average of similar days (e.g. the average of Mondays) as you might have some seasonality. Whereas with random missing values, looking at the direct "neighbours" might be sufficient.
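The two strategies suggested above (neighbour-based interpolation for sporadic gaps, seasonal averages for whole missing days) might look like this with pandas. This is only a sketch: the synthetic series and all names here are illustrative, not the asker's actual data.

```python
# Sketch of the answer's two-tier fill strategy on a synthetic hourly series.
import numpy as np
import pandas as pd

idx = pd.date_range("2000-01-01", periods=24 * 28, freq="h")
congestion = pd.Series(np.random.default_rng(0).random(len(idx)), index=idx)

# 1. Sporadic single gaps: interpolate from the direct neighbours.
congestion.iloc[100] = np.nan
filled = congestion.interpolate(method="time", limit=3)

# 2. A whole missing day: fill each hour from the mean of the same
#    weekday+hour across the series, respecting weekly seasonality
#    (the "average of Mondays" idea) instead of bridging a long gap.
congestion.loc["2000-01-10"] = np.nan
key = [congestion.index.dayofweek, congestion.index.hour]
seasonal_mean = congestion.groupby(key).transform("mean")
seasonal_filled = congestion.fillna(seasonal_mean)
```

The `limit` on `interpolate` is one way to make sure neighbour-based interpolation is only ever applied to short gaps, so a long missing stretch falls through to the seasonal fill.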
{ "domain": "datascience.stackexchange", "id": 12111, "tags": "time-series, forecasting, interpolation" }
How did "dust and gas" form into the Hadean Earth?
Question: We understand that the Hadean Earth was basically a giant ball of magma, constantly bombarded, no atmosphere, and solidified material never accumulated as Earth was too hot at its surface. We know from the magma that eventually crust formed, and oceans began to form from comet/asteroid bombardment, as well as from volcanic degassing. ...However, Hadean Earth formed after tens of millions of years as gas and dust accumulated to form the planet, but what even is meant by "gas and dust"? When I think of those terms I think of our atmosphere and smog, and dust that would be in the corner of the house. How can it account for all the elements that occur naturally on Earth? It seems as if Earth was created from something that would have no grounds creating Earth. Is a certain temperature implied when "gas and dust" is stated? The dust in the corner of the house cannot become magma. I know my hypothesizing is absurd, but I am doing that on purpose: what is even meant by "gas and dust"? What should be invoked in my mind, since mundane examples are not sufficient? Edit: I was led to this question through studying Geologic Time and its eras; if this would better fit in another stack exchange like astronomy, sorry for confusion. Answer: Very simply. No disrespect, but this is the type of explanation aimed at a young elementary/primary school child. It starts with hydrogen, helium and lithium. They've been around since the Big Bang. Quantum fluctuations created random zones of differing density material within the early universe. This resulted in some hydrogen collapsing under gravity to form stars. When enough hydrogen was collected, the interior of the stars became very hot and started to fuse hydrogen into different elements. First to be created within the stars, under stellar nucleosynthesis, also known as stellar nuclear fusion, was more helium. As hydrogen became depleted within the star, the heavier elements began to be created under nuclear fusion.
Within stars similar to the Sun, this process ends with the formation of iron, shortly before the star sheds its outer layers; larger stars explode as supernovae. The formation of elements heavier than iron (see the periodic table of elements) results from an alternate process, such as merging neutron stars, which I'll let you investigate - I'm keeping things as simple as possible. As the stars explode during a supernova event they disperse the elements they made (He, B, C, O, N, F, Ne, Na, Mg, Al, Si, P, S, Cl, Ar, K, Ca, ... Fe) into the nearby cosmos. From this come clouds of gas (oxygen, nitrogen, etc.). The metallic elements are the precursors of cosmic dust. These coalesce under gravity to form larger grains, which in turn coalesce under gravity to form larger particles. Eventually rocks are formed, similar to what is found in the asteroid belt. Either via the influence of gravity or motion-induced collisions, such materials start to coalesce further to form planets, or the cores of planets that later become gas giants. Each collision produces heat. This, along with the heat from decaying nuclear isotopes, makes the planets hot enough to become molten. Again, under gravity the heavier elements coalesce towards the center of the planet. The heat and the molten nature of the planet result in the formation of various compounds. As the outer parts of the planet cool, solid crystals precipitate out of the molten mass to form solid rock that creates the crust of the planet. Due to the molten nature of some of the rock, as the planet cools, volcanic activity continues, bringing fresher molten material to the surface of the planet. It also releases volcanic gases, such as carbon dioxide, to form the initial atmosphere of the planet (see Venus, Earth and Mars). While all this is going on, the planet will be bombarded by asteroids and comets which bring additional material to the surface of the planet, such as heavy metals and water.
This essentially is the Hadean Earth. Later, if life forms such as cyanobacteria appear and conditions are favorable, carbon dioxide from the atmosphere will be consumed and oxygen, created by the bacteria, will accumulate in the atmosphere. Having a protective planetary magnetic field helps with retention of the planetary atmosphere.
{ "domain": "earthscience.stackexchange", "id": 2735, "tags": "earth-history" }
What happens before a radioactive element decays?
Question: What happens to a radioactive element just before it decays? In school, I've been told that the decay process of an element is absolutely random, and it is impossible to determine which unstable element decays next. Clearly, there needs to be a triggering event. What is this event? Answer: Nothing happens! It's random! The nucleus is in an unstable state, and unstable states have a certain small probability to decay within a given amount of time (how small depends on the nucleus). There's not much else to it! Sometimes decay can be stimulated but the type of decay you're talking about is truly random.
{ "domain": "physics.stackexchange", "id": 19873, "tags": "radioactivity, elementary-particles" }
Speeding up 2D shadow algorithm
Question: I wrote a class to represent a set of lights in a 2D game that raycast in order to find shadows. The draw method does a lot of calculations and takes a lot of time. Mainly, the adding of areas and clipping takes from 1 to 12 milliseconds per loop. Is there any way to speed up this method? The whole project is on Github. SmoothLight.java public class SmoothLight { /** A Polygon object which we will re-use for each shadow geometry. */ protected final static Polygon POLYGON = new Polygon(); List<Light> lights = new ArrayList<>(); /** * * @param center * the base light * @param circles * the number of circles per layer * @param oneLayerProjection * the amount of projection of one layer from the previous * @param layers * the amount of layers * @param angle * the angle between each layer */ public SmoothLight(final Light center, final int circles, final int oneLayerProjection, final int layers, final int angle) { // creates layers of lights with the angle between each layer for (int j = 0; j < layers; j++) { // how much to rotate this layer counter-clockwise final int radialDifference = angle * j; // how much to project this layer final int projection = oneLayerProjection * j; final int dif = 360 / circles; for (int i = radialDifference; i < 360 + radialDifference; i += dif) { final double x = Math.cos(Math.toRadians(i)) * projection + center.getX(); final double y = Math.sin(Math.toRadians(i)) * projection + center.getY(); final int alpha = center.getColor().getAlpha() / circles / layers; final Color newColor = new Color(center.getColor().getRed(), center.getColor().getGreen(), center.getColor().getBlue(), alpha); lights.add(new Light(newColor, new Vec2D(x, y), center.getRadius())); } } } /** * @param g * the graphics to use for rendering * @param entities * the list of entities to take into account when drawing shadows * @throws Exception */ public void draw(final Graphics2D g, final List<Polygon> entities) { // old Paint object for resetting it later final 
Paint oldPaint = g.getPaint(); // amount to extrude our shadow polygon by for (final Light light : lights) { // minimum distance (squared) which will save us some checks final float minDistSq = light.getRadius() * light.getRadius(); // The area for drawing the light in Area shadowArea = null; for (int i = 0; i < entities.size(); i++) { final Polygon e = entities.get(i); final Rectangle2D bounds = e.getBounds2D(); // average to find the entity's radius final float radius = (float) (bounds.getWidth() + bounds.getHeight()) / 4f; // get center of entity final Vec2D center = new Vec2D(bounds.getX() + radius, bounds.getY() + radius); final Vec2D lightToEntity = center.minus(light.getPosition()); // get euclidean distance from light to center of the entity final float distSq = (float) lightToEntity.distanceSq(lightToEntity); // if the entity is outside of the shadow radius, then ignore if (distSq > minDistSq) { continue; } // if A never gets set, it defaults to the center Vec2D A = center; Vec2D B = center; // Find the farthest away vertices for which a line segment // between the source and it do not intersect // the polygon. Basically, a vertex with a line of sight to the // light source. Store these two in A and B. 
float maxA = 0; float maxB = 0; for (int j = 0; j < e.npoints; j++) { final int x = e.xpoints[j]; final int y = e.ypoints[j]; final float newDistSqred = (float) lineToPointDistanceSqrd(light.getPosition(), center, new Vec2D(x, y), false); if (maxA < newDistSqred) { maxB = maxA; B = A; maxA = newDistSqred; A = new Vec2D(x, y); } else if (maxB < newDistSqred) { maxB = newDistSqred; B = new Vec2D(x, y); } } // project the points by our SHADOW_EXTRUDE amount final Vec2D C = project(light.getX(), light.getY(), A, light.getRadius() * light.getRadius()); final Vec2D D = project(light.getX(), light.getY(), B, light.getRadius() * light.getRadius()); // construct a polygon from our points POLYGON.reset(); POLYGON.addPoint((int) A.x, (int) A.y); POLYGON.addPoint((int) B.x, (int) B.y); POLYGON.addPoint((int) D.x, (int) D.y); POLYGON.addPoint((int) C.x, (int) C.y); final Area a = new Area(POLYGON); // adds to the existing light area if (shadowArea == null) { shadowArea = a; } else { shadowArea.add(a); } if (Debug.OUTLINE_SHADOWS) { g.setColor(Color.PINK); g.draw(shadowArea); } } if (shadowArea == null) { // fill the polygon with the gradient g.drawImage(light.image, null, (int) (light.getX() - light.getRadius()), (int) (light.getY() - light.getRadius())); } else { // get the inverse of the lightArea and set that as the clip for // shadows final Shape s = g.getClip(); final Area lightArea = new Area(new Rectangle2D.Float(0, 0, LightingTest.getWidth(), LightingTest.getHeight())); lightArea.subtract(shadowArea); g.setClip(lightArea); g.drawImage(light.image, null, (int) (light.getX() - light.getRadius()), (int) (light.getY() - light.getRadius())); g.setClip(s); } if (Debug.OUTLINE_LIGHTS) { g.setColor(Color.PINK); g.drawOval((int) light.getX() - 2, (int) light.getY() - 2, 4, 4); } } // reset to old Paint object g.setPaint(oldPaint); } private static double lineToPointDistanceSqrd(final Vec2D pointA, final Vec2D pointB, final Vec2D pointC, final boolean isSegment) { if (isSegment) 
{ final double dot1 = pointB.minus(pointA).dotProduct(pointC.minus(pointB)); if (dot1 > 0) { return pointB.distanceSq(pointC); } final double dot2 = pointA.minus(pointB).dotProduct(pointC.minus(pointA)); if (dot2 > 0) { return pointA.distanceSq(pointC); } } final double dist = pointB.minus(pointA).crossProduct(pointC.minus(pointA)) / pointA.distanceSq(pointB); return Math.abs(dist); } private static boolean lineSegmentIntersects(final float x, final float y, final float x2, final float y2, final Polygon e) { final int ITERATIONS = 15; for (int i = 1; i < ITERATIONS; i++) { if (e.contains(new Vec2D(x + (x2 - x) / ITERATIONS * i, y + (y2 - y) / ITERATIONS * i))) { return true; } } return false; } /** * Projects a point from end along the vector (end - start) by the given scalar amount. */ private static Vec2D project(final float x, final float y, final Vec2D end, final float scalar) { return project(new Vec2D(x, y), end, scalar); } private static Vec2D project(final Vec2D start, final Vec2D end, final float scalar) { return end.minus(start).unitVector().scalarMult(scalar).plus(end); } public void setPosition(final float x, final float y) { final float differenceX = x - lights.get(0).getX(); final float differenceY = y - lights.get(0).getY(); for (final Light l : lights) { l.setPosition(l.getX() + differenceX, l.getY() + differenceY); } } } Light.java public class Light { static final Color NULL_COLOR = new Color(0, 0, 0, 0); private static final float[] SIZE_FRACTION = new float[] { 0, 1 }; public final BufferedImage image; private float x; private float y; private final float radius; Color color; public Light(final Color c, final Vec2D position, final float radius) { super(); image = new BufferedImage((int) radius * 2, (int) radius * 2, BufferedImage.TYPE_4BYTE_ABGR); final Graphics2D g = (Graphics2D) image.getGraphics(); g.setPaint(new RadialGradientPaint(new Rectangle2D.Double(0, 0, radius * 2, radius * 2), SIZE_FRACTION, new Color[] { c, NULL_COLOR }, 
CycleMethod.NO_CYCLE)); g.fillRect(0, 0, (int) radius * 2, (int) (radius * 2)); color = c; this.radius = radius; setPosition((float) position.x, (float) position.y); } public float getX() { return x; } public float getY() { return y; } public void setPosition(final float x, final float y) { this.x = x; this.y = y; } public float getRadius() { return radius; } public Color getColor() { return color; } } Answer: (Note: The first part of this answer was written without a detailed analysis. See the "EDIT" below for an update) An interesting question. Some (possibly minor?) remarks and hints. (Disclaimer: I currently can't do a dedicated performance analysis with VisualVM & Co. So You should consider the following only as hints, and possible points to look at, but not as a "todo list". Modifications of existing code that aim at improving the performance should be done step by step, and interveaved with detailed benchmark and profiler runs.) In your Light class, you are creating an image with a type TYPE_4BYTE_ABGR. Usually, images with the type TYPE_INT_ARGB are the fastest (or TYPE_INT_RGB when no transparency is required) A class like the Vec2D class in the GitHub repository is very convenient. However, one should keep in mind the possible drawbacks of repeated allocations in such chained calls like dot2 = pointA.minus(pointB).dotProduct(pointC.minus(pointA)). It's hard to measure the direct impact on the performance, but the repeated object allocations might at least impose a workload on the Garbage Collector that could be avoided. The Escape Analysis has improved significantly in the recent Java versions, but it's still something that you should keep an eye on. Most of the above mentioned usages of Vec2D are in helper methods of the SmoothLight class. For example, the method lineToPointDistanceSqrd, which computes the squared distance of a point to a line or a line segment. 
You should consider replacing this method with the corresponding methods from the Line2D class, namely Line2D#ptLineDistSq and Line2D#ptSegDistSq. The lineSegmentIntersects method, particularly the ITERATIONS counter, looks dubious. I'll have to analyze this further (to make sure that the result is equivalent), but you might consider replacing this with a test of whether the given line intersects one edge of the given polygon, using Line2D#linesIntersect. Operations on Areas can be expensive. Again, it's hard to analyze this much code in detail. But you could consider not computing the light and shadow areas with the Area class, but instead creating these Shapes manually, by connecting the intersection points (ordered clockwise around the light source) and building a Path2D that describes the shape of the lit area. Finally, two links to questions on stackoverflow where I wrote answers that contain "building blocks" that may be useful here. Note that these answers did not primarily aim at achieving a particularly high performance, but ... I did not try to post rubbish there either, so there might be some useful snippets involved: How to do 2D shadow casting in Java? : My answer here basically contains the code that was roughly created based on the description on the site linked in the original question. It shows one way of computing the shadow/light areas. This computation involves some tricks (as described on the site), and should do the computation of the light shape rather efficiently (for example, without using the Area class) Java2D Alpha Mapping images: Here, my answer shows how it is possible to apply a "light effect" to an image. This could be a way to avoid the use of "light images" altogether, by just painting the light shape directly into the target image, with an appropriate RadialGradientPaint and AlphaComposite.
(By the way: Extending the program from the first answer to create "soft" shadows was still on my todo-list - I'll probably try to combine my answers and try to achieve the same effect as in your program, to see which of these approaches performs better and where the potential bottlenecks are) EDIT Extended based on a further analysis I performed some tests, mainly with jVisualVM, which showed that the main bottlenecks are not the usual suspects like the geometry computations or other high-level methods, but really the low-level ones: Most of the time seems to be spent... in the g.drawImage(light.image...) call of the SmoothLight#draw method in the GraphicsUtils#glowFilter method and, the largest block: when the lightmap is drawn using the BLUR_FILTER Drawing this blurred image (with size 1024x768 - slightly larger than your original one) takes ~40ms on my machine - in contrast to 1-2ms for a simple call like g.drawImage(lightmap, 0, 0, null);. I've seen that you are already using a FastBlurFilter (by Romain Guy - he usually knows his stuff...), which internally exploits the fact that the blur can be implemented as a separable filter. However, this could possibly be implemented even faster through parallelization. A quick test indicates that this might bring a speedup, but your mileage may vary (depending on the CPU, the image size and other factors...).
However, you might try replacing the blur function of this filter with something like this: private static final ExecutorService executor = Executors.newCachedThreadPool(); static void blur(final int[] srcPixels, final int[] dstPixels, final int width, final int height, final int radius) { final int windowSize = radius * 2 + 1; final int radiusPlusOne = radius + 1; final int[] sumLookupTable = new int[256 * windowSize]; for (int i = 0; i < sumLookupTable.length; i++) { sumLookupTable[i] = i / windowSize; } final int[] indexLookupTable = new int[radiusPlusOne]; if (radius < width) { for (int i = 0; i < indexLookupTable.length; i++) { indexLookupTable[i] = i; } } else { for (int i = 0; i < width; i++) { indexLookupTable[i] = i; } for (int i = width; i < indexLookupTable.length; i++) { indexLookupTable[i] = width - 1; } } List<Callable<Object>> tasks = new ArrayList<Callable<Object>>(height); for (int y = 0; y < height; y++) { final int fy = y; final int srcIndex = y * width; Callable<Object> callable = Executors.callable(new Runnable() { @Override public void run() { process(srcPixels, dstPixels, width, height, radius, radiusPlusOne, sumLookupTable, indexLookupTable, fy, srcIndex); } }); tasks.add(callable); } try { executor.invokeAll(tasks); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } } private static void process(final int[] srcPixels, final int[] dstPixels, final int width, final int height, final int radius, final int radiusPlusOne, final int[] sumLookupTable, final int[] indexLookupTable, int y, int srcIndex) { int pixel; int sumAlpha; int sumRed; int sumGreen; int sumBlue; sumAlpha = sumRed = sumGreen = sumBlue = 0; int dstIndex; dstIndex = y; pixel = srcPixels[srcIndex]; sumAlpha += radiusPlusOne * (pixel >> 24 & 0xFF); sumRed += radiusPlusOne * (pixel >> 16 & 0xFF); sumGreen += radiusPlusOne * (pixel >> 8 & 0xFF); sumBlue += radiusPlusOne * (pixel & 0xFF); for (int i = 1; i <= radius; i++) { pixel = srcPixels[srcIndex + 
indexLookupTable[i]]; sumAlpha += pixel >> 24 & 0xFF; sumRed += pixel >> 16 & 0xFF; sumGreen += pixel >> 8 & 0xFF; sumBlue += pixel & 0xFF; } for (int x = 0; x < width; x++) { dstPixels[dstIndex] = sumLookupTable[sumAlpha] << 24 | sumLookupTable[sumRed] << 16 | sumLookupTable[sumGreen] << 8 | sumLookupTable[sumBlue]; dstIndex += height; int nextPixelIndex = x + radiusPlusOne; if (nextPixelIndex >= width) { nextPixelIndex = width - 1; } int previousPixelIndex = x - radius; if (previousPixelIndex < 0) { previousPixelIndex = 0; } final int nextPixel = srcPixels[srcIndex + nextPixelIndex]; final int previousPixel = srcPixels[srcIndex + previousPixelIndex]; sumAlpha += nextPixel >> 24 & 0xFF; sumAlpha -= previousPixel >> 24 & 0xFF; sumRed += nextPixel >> 16 & 0xFF; sumRed -= previousPixel >> 16 & 0xFF; sumGreen += nextPixel >> 8 & 0xFF; sumGreen -= previousPixel >> 8 & 0xFF; sumBlue += nextPixel & 0xFF; sumBlue -= previousPixel & 0xFF; } srcIndex += width; } (This was just quickly created by extracting the core of the method and parallelizing it pragmatically - it may be cleaned up and improved). The GraphicsUtils#glowFilter method might be parallelized similarly, but there, the parallelization might imply an overhead that eats up the performance gains (though I have not tested it). An aside: When you call Graphics g=image.getGraphics(), you should eventually dispose the graphics object, by calling g.dispose() when you're done with it. You did not dispose the graphics, for example, in the Light constructor (but I have not systematically looked for this). Apart from that, it's not so easy to make this program faster in its current form. As mentioned in the first part of the answer: I also tried to implement something like this, based on my other answers on stackoverflow, but in its current version it's far slower than your approach. 
Although it looks a bit nicer, I think (I tried to properly mix the lights: a magenta light and a green light should yield a white light - things like this don't come for free...). But, the same as with your approach: making the shadows soft definitely IS expensive. One could try to think of a completely different approach here; I'm sure the computer graphics / OpenGL community has already had some ideas about this. But this is beyond what can be discussed here...
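For context on why the separable FastBlurFilter approach is fast even before any parallelization: a box blur can maintain a running window sum, so each output pixel costs O(1) regardless of the blur radius, instead of O(radius) for a naive window sum. A minimal 1D sketch of that idea (Python for brevity here, not the Java from the repository; the answer's code applies the same trick per row and per column):

```python
# 1D box blur with a running sum and edge clamping: O(1) per output
# pixel regardless of radius, the core trick behind separable fast blurs.
def box_blur_1d(src, radius):
    n = len(src)
    window = 2 * radius + 1
    out = [0.0] * n
    # seed the running sum with the clamped window centered on index 0
    s = sum(src[min(max(j, 0), n - 1)] for j in range(-radius, radius + 1))
    for i in range(n):
        out[i] = s / window
        # slide: add the sample entering the window, drop the one leaving
        s += src[min(i + radius + 1, n - 1)] - src[max(i - radius, 0)]
    return out
```

A 2D blur then applies this once along rows and once along columns, which is why the cost stays linear in the number of pixels.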
{ "domain": "codereview.stackexchange", "id": 12526, "tags": "java, performance, game, computational-geometry, graphics" }
Yet another minimax tree for TicTacToe
Question: I have written a working minimax tree for Tic-Tac-Toe. What I have done so far is create a tree whose end nodes have the value 10, 0 or -10 based on a win, loss or draw. After I create the tree I determine the move with a depth-first search, which bubbles the end-node values up to the root node. Based on the value received at the root node I pick whichever move has the best chance of winning (10 for a win, -10 for a loss, 0 for a draw). Before going into coding detail I would like to explain that I have another class called TicTacToe.cpp/hpp which runs the game engine. When the user selects single player and it's the computer's turn, it calls the minimax(int &x, int &y, std::vector<std::vector<TicTacToe::state>> v); function, which will eventually determine the x and y co-ordinates. The minimax function takes the x and y co-ordinates and the 2D vector holding the present state of the tic-tac-toe board (an enum of HASX, HASO and EMPTY). This helps to determine which space is taken by what. void Tree::minimax(int& x, int& y, std::vector<std::vector<TicTacToe::state> > v){ bool turn = true; std::shared_ptr<Node> n = std::make_shared<Node>(); n->v = v; n->val = 9999; int max = -10000; // creates the tree create_tree(n, find_empty_vec(v), turn); // does the depth first search on the above tree depth_first_search(n, turn); for(auto it = n->collect_nodes.begin(); it != n->collect_nodes.end(); it++){ if((*it)->val == 10 || (*it)->val == 0) { if( (*it)->val > max){ max = (*it)->val; x = (*it)->x; y = (*it)->y; } } } } When minimax is called it eventually calls create_tree, which creates the tree. The tree is made of nodes: struct Node{ // the set of children this node has std::set<std::shared_ptr<Node>> collect_nodes; // stores current game state with each index // containing either EMPTY, HASX or HASO std::vector<std::vector<TicTacToe::state> > v; int val; int x; int y; }; The create_tree function takes a node which holds a snapshot of the tic-tac-toe state.
// This function creates the minimax tree, ev is just the vector of // EMPTY spaces that are in n->v (see struct Node above) void Tree::create_tree(std::shared_ptr<Node> n, std::vector<std::pair<int, int>> ev, bool turn){ static int count = 0; // loss // game_state is a function that determines win, loss or draw // by looking at the 2d vector that I have passed here. if(game_state(n->v) == 1){ n->val = -10; return; } // win else if(game_state(n->v) == 2){ n->val = 10; return; } // draw else if(game_state(n->v) == 3){ n->val = 0; return; } for(auto it = ev.begin(); it != ev.end(); it++){ if(turn) { // HASO turn std::shared_ptr<Node> node = std::make_shared<Node>(); node->v = n->v; node->val = -9999; node->v[it->first][it->second] = TicTacToe::HASO; node->x = it->first; node->y = it->second; n->collect_nodes.insert(node); create_tree(node, find_empty_vec(node->v), false); }else{ std::shared_ptr<Node> node = std::make_shared<Node>(); node->v = n->v; node->val = 9999; node->v[it->first][it->second] = TicTacToe::HASX; node->x = it->first; node->y = it->second; n->collect_nodes.insert(node); create_tree(node, find_empty_vec(node->v), true); } } } After the tree has been made, I do a DFS over the tree: int Tree::depth_first_search(std::shared_ptr<Node> n, bool turn){ if(n->val == -10 || n->val == 0 || n->val == 10) return n->val; else if(turn){ int max = -10000000; // I could use numeric_limits here for(auto it = n->collect_nodes.begin(); it != n->collect_nodes.end(); it++){ if(depth_first_search(*it, false) > max) max = depth_first_search(*it, false); } n->val = max; return max; }else{ int min = 10000000; for(auto it = n->collect_nodes.begin(); it != n->collect_nodes.end(); it++){ if(depth_first_search(*it, true) < min) min = depth_first_search(*it, true); } n->val = min; return min; } } I do have a few concerns about the way I have done things here: I create the tree every time it's the computer's turn.
I was thinking about reusing the previously dynamically allocated node tree by reassigning it with the new state. (I am still confused about what the root should be in that case, since I have to determine which node in the tree represents the current state of the game.) Is the DFS algorithm I have used a good design or a good way to do this? It would be great if someone critiqued it. I also wanted to create difficulty levels like easy, moderate and hard so the user can pick a level to play. For more detail you can visit my git repo. Answer: I create the tree every time it's the computer's turn. I was thinking about reusing the previously dynamically allocated node tree by reassigning it with the new state. No, I would not do that. You could, but you would have to store the entire tree structure in memory, which can get very huge. Instead you can 'build' the tree while you search it and only store what you need (see below). I am still confused about what the root should be in that case, since I have to determine which node in the tree represents the current state of the game. If you really want to store the tree in memory, selecting a root node is simple. You traverse the tree until you find any node that matches your current game state (a tuple made of std::vector<std::vector<TicTacToe::state> > v and turn). There will be multiple nodes satisfying this criterion; which one you pick does not matter, as the subtrees are identical. Such a state fulfills the Markov property and thus history doesn't matter. Is the DFS algorithm I have used a good design or a good way to do this? It would be great if someone critiqued it. Sort of. It's almost the standard recursive DFS for a given tree, but it has a substantial flaw: if(depth_first_search(*it, false) > max) max = depth_first_search(*it, false); Visiting every subtree twice means a lot of extra work for nothing. Instead do int temporary = depth_first_search(*it, false); if(temporary > max) max = temporary; and you should see an immediate performance boost.
I prefer the iterative way of searching, as stated below. I also wanted to create difficulty levels like easy, moderate and hard so the user can pick a level to play. The default way to create difficulty levels is to limit search time or depth. You would have to reimplement your search, because it does not come up with temporary solutions (see below). Instead you could, with some probability, make a random move instead of searching for the best one. This would allow players to outmaneuver the AI. Alternatively you could pick a 'bad' move on purpose (to make it look less random). As promised, I want to briefly sketch out how you can combine the creation and search of the tree in a way that (I think) is quite beautiful: struct Node { // game state std::vector<std::vector<TicTacToe::state> > game_state; bool turn; // misc and utility values double value; bool is_terminal; action move; // which action brought us to this node std::vector<std::shared_ptr<Node>> history; // what nodes did we visit to come here }; action find_move(std::set<std::shared_ptr<Node>> initial_state, bool turn, double search_time) { Queue search_nodes; // initialize the right queue here (stack, FIFO, priority queue, ...)
    search_nodes.push(initial_state);
    action best_move = choose_random_move();
    std::shared_ptr<Node> best_node;
    while (!search_nodes.isEmpty() && search_time > 0) {
        std::shared_ptr<Node> current_element = search_nodes.pop();
        if (current_element->is_terminal) {
            bool found_better_move = false;
            // this loop is special for minimax
            for (each history_node in reversed current_element->history) {
                if (history_node->turn) { // maximize
                    if (current_element->value > history_node->value)
                        history_node->value = current_element->value;
                    else
                        break;
                } else {                  // minimize
                    if (current_element->value < history_node->value)
                        history_node->value = current_element->value;
                    else
                        break;
                }
                found_better_move = true;
            }
            if (found_better_move) {
                best_move = current_element->history[2]->move;
                best_node = current_element;
            }
            continue;
        }
        // add all children to the queue
        childs = get_all_childs(current_element);
        for (each child_node in childs) {
            search_nodes.push(child_node);
        }
        update_time(search_time);
    }
    return best_move;
}

This implementation is beautiful because of its flexibility. First of all, I can easily add a maximum search time (or any other criterion) to change the difficulty. Also, depending on the type of Queue, I can achieve different search behavior:

Stack --> depth-first search
FIFO --> breadth-first search
Priority queue based on a heuristic --> greedy search
Priority queue based on node values --> Dijkstra's algorithm
Priority queue based on node values + heuristic --> A*

The last options, however, are tricky in the minimax case, but come in handy in other scenarios. A thought is to use a stack, but sort the order in which the children of current_element are inserted. This can yield a massive performance boost when combined with pruning. You also have several ways to do pruning. For alpha-beta pruning the easiest is to check against the last two nodes in the history of a node. Their values give you alpha and beta. Edit: Corrected alpha-beta pruning
{ "domain": "codereview.stackexchange", "id": 21722, "tags": "c++, tree, tic-tac-toe, ai, depth-first-search" }
Is there any work relating type systems and Cook-Reckhow proof systems?
Question: An important subfield of computational complexity is proof complexity, mostly due to Cook and Reckhow. E.g., one notable result is that a proof system with efficient proofs (i.e., of length polynomial in the size of the formula being proven) for all tautologies can be constructed iff $\mathsf{coNP} = \mathsf{NP}$. Other results deal with specific propositional-calculus proof systems, showing that they all have superpolynomial proof lower bounds. Is there any work relating type systems and type-inference rules to proof systems as defined in this theory? In general, a cursory examination reveals that most discussions of type systems cover computability (decidability/undecidability) with little discussion of finer complexity bounds (which may well exist in many finite models). Is there a reason why this is so (if indeed it is so)? Answer: Cook-Reckhow propositional proof systems are nonuniform. E.g., the computational-complexity counterpart to the class of polynomial-size $\mathsf{Extended\ Frege}$ proofs is the nonuniform complexity class $\mathsf{P/poly}$. We have to look at their uniform counterparts: e.g., the proof-complexity counterparts of $\mathsf{P}$ are bounded arithmetic theories like Cook's theory $\mathsf{PV}$ (standing for polynomial-time verifiable), Buss's theory $\mathsf{S}^1_2$, ... Cook and Urquhart used Cook's theory $\mathsf{PV}$ to define a theory of higher-type polynomial-time computable functions $\mathsf{PV}^\omega$ in the following paper: Stephen A. Cook and Alasdair Urquhart, "Functional interpretations of feasibly constructive arithmetic", 1993. doi: 10.1016/0168-0072(93)90044-E There has been some follow-up work, which you can find by looking at articles citing this paper. Implicit complexity theory, which Martin mentions, is also influenced by this work. Check out Simone Martini's survey slides and Ugo Dal Lago's survey article to get a picture of what implicit complexity theory is and its history.
{ "domain": "cstheory.stackexchange", "id": 3692, "tags": "proof-complexity, type-systems, proof-theory" }
Time to collide of hammer vs Moon, feather vs Moon, considering that the Moon is attracted by them (barycenter)
Question: I am an extreme novice in physics, I am also a beginner on Physics Stack Exchange, and I'm not fluent in English, so please bear with me and consider my question with indulgence. I request indulgence especially because $99\%$ of my question is a well-known question with an evident and well-known answer, but the important part of the question is the remaining $1\%$, which I will try to explain as best I can. This well-known question is: "On the Moon, drop a feather and a hammer; will they touch the ground at the same time?" (and the question is the same on Earth if we exclude the air density/shapes of the falling bodies). This question is always answered with the strict affirmation that "yes they do", and confirmed by astronaut David Scott's experiment during the Apollo $15$ mission. I agree with that. I agree that it's a strictly equal time, if we consider that the Moon is not attracted by the feather or the hammer. It may sound obvious to many people that the Moon is not attracted by such light objects, but theoretically it is, because these objects have a mass, even if the feather/hammer mass is around $10^{-23}$ times the Moon's mass. (It's like the Earth, which is 100 times more massive than the Moon, so the Earth is also attracted, and the effect is its "oscillation" around the barycenter of the Earth/Moon system, which is at approximately $4,670$ to $6,380\ km$ away from the geographical center of the Earth.) And here comes my question, which may look identical, but there is a change in the conditions and in the wording; in particular I say "body" instead of "Moon", and I say "mutually attracted" rather than "one object falls on the body", so that you better understand how my question is 1% different from the previous question: "In the vacuum of space, a feather and a body are both initially positioned at the same distance and with no relative movement, and are mutually attracted because of their masses and will collide in a given amount of time.
Let's repeat the same experiment with a hammer instead of the feather: will they collide in exactly the same amount of time?" (Even if there's a tiny difference of $10^{-20}$ seconds, I consider that it's not the same time.) My assumption is that the hammer and the feather will both be attracted identically by the body, BUT at the same time the body will be more attracted by the hammer than by the feather (because the hammer is heavier than the feather). So, the body and the hammer will collide sooner than the body and the feather. Am I right or not? Where am I wrong? Of course, my question is related to the first well-known question, and my assumption is that there is a difference, although extremely small, maybe $10^{-20}$ seconds, but theoretically it's different because the Moon is attracted. What do you think? Do you know of any document which talks about this? (I searched a lot but I couldn't find anything.) Thank you very much for your interest in this question. Answer: Say we drop an object of mass $m$ and it falls owing to its attraction to an object of mass $M$. The force on either object is $$ f = G M m / r^2 $$ where $r$ is the distance between the centre of one object and the centre of the other. Therefore the object whose mass is $m$ gets an acceleration $$ a_m = G M / r^2 . $$ Meanwhile the object of mass $M$ is experiencing a force of the same size, so it gets an acceleration $$ a_M = G m / r^2 . $$ So the two objects are accelerating towards one another. At any given time the distance between them is $r$, and this distance is changing (getting smaller). The magnitude of its second rate of change is $$ \frac{d^2 r}{d t^2} = a_m + a_M = \frac{G(M+m)}{r^2} $$ Now your question concerns one moon of mass $M$ and two different objects (hammer and feather). Let's call their masses $m_h$ and $m_f$. We will do three experiments. Experiment 1. Drop a hammer on the moon, without any feather. The relative acceleration of moon and hammer is $$ a_1 = G(M + m_h)/r^2 . $$ Experiment 2.
Drop a feather on the moon, without any hammer. The relative acceleration of moon and feather is $$ a_2 = G(M + m_f)/r^2 . $$ We find $a_1$ a tiny bit larger than $a_2$, so indeed the hammer hits the moon in less time than the feather if the two experiments are done separately. But usually the two experiments are done at the same time. Experiment 3. Drop hammer and feather together, on the moon. Now the total thing pulling on the moon is both hammer and feather together. In this case the acceleration of the moon is $$ a_{\rm moon} = G(m_h + m_f)/r^2 $$ and the accelerations of the other two objects are $$ a_{\rm hammer} = GM/r^2, \;\;\;\;\; a_{\rm feather} = GM/r^2 $$ (and they also have a slight acceleration towards one another). The main point now is that these two accelerations, $a_{\rm hammer}$ and $a_{\rm feather}$, are the same. The moon is meanwhile also accelerating a tiny bit. But hammer and feather have the same motion as each other. Therefore, when dropped side by side, the hammer and the feather hit the moon at exactly the same time. Your intuition was valid when applied to experiments 1 and 2, but not for experiment 3. But now I will adjust the experiment a little. Suppose the hammer and feather are dropped at the same time, but not very close to one another. For example, they might be a few kilometres apart, or even on opposite sides of the moon. In this case the moon will accelerate somewhat more towards the hammer than towards the feather, so in this case your intuition is right and the hammer hits the moon very slightly before the feather.
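To make the experiment 1 vs. experiment 2 comparison concrete, here is a toy numerical check (not from the answer; the units and masses are made up, with $G = 1$). For radial infall from rest, the time to meet follows from the relative-acceleration equation above: $t = \frac{\pi}{2}\sqrt{r_0^{3}/\bigl(2G(M+m)\bigr)}$, so a heavier dropped object gives a very slightly shorter time:

```python
import math

G = 1.0  # toy units; with the real G and a real hammer the difference is immeasurably small

def fall_time(M, m, r0):
    """Time for two point masses released at rest a distance r0 apart to meet,
    from integrating the radial two-body infall d^2 r/dt^2 = -G(M + m)/r^2:
    t = (pi/2) * sqrt(r0^3 / (2 G (M + m)))."""
    return (math.pi / 2) * math.sqrt(r0**3 / (2 * G * (M + m)))

M = 1000.0                       # the "moon"
m_hammer, m_feather = 1.0, 1e-3  # exaggerated masses so the effect is visible
r0 = 1.0

t_hammer = fall_time(M, m_hammer, r0)
t_feather = fall_time(M, m_feather, r0)
# t_hammer < t_feather: dropped separately, the hammer arrives slightly sooner,
# exactly as in experiments 1 and 2.
```

The scaling $t \propto (M+m)^{-1/2}$ also shows why the effect is invisible in practice: for $m/M \sim 10^{-23}$ the relative time difference is of order $10^{-24}$.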
{ "domain": "physics.stackexchange", "id": 69426, "tags": "gravity" }
Meaning of spin operator
Question: I am learning about spin in QM and I was wondering whether $\langle{\psi}|\hat{S}_z|\psi\rangle$, where $\psi$ is a spin wave function, is a meaningful quantity. In the case of the Hamiltonian $\hat{H}$, $\langle\hat{H}\rangle_{\Psi}=\langle{\Psi}|\hat{H}|\Psi\rangle$ is the mean energy for a system with wavefunction $\Psi$, but how should I interpret $\langle{\psi}|\hat{S}_z|\psi\rangle$? Is it something like the average value of $z$ given $\psi$? I am aware that (spin) $\psi$ lives in $\mathbb{C}^2$, and thus doesn't have "components" in $\mathbb{R}^3$. I am also aware that $\hat{S}_n=n_x\hat{S}_x+n_y\hat{S}_y+n_z\hat{S}_z$ is the spin operator in the direction of the unit vector $n$, but that this is an operator from $\mathbb{C}^2$ to $\mathbb{C}^2$ (just like $\hat{S}_z$); it does not give "components of the spin in $\mathbb{R}^3$". Finally, I know how to use $|n;+\rangle = \cos{\frac{\theta}{2}}|+\rangle+\sin{\frac{\theta}{2}}e^{i\phi}|-\rangle$ to figure out the spherical angles of any spin, and that will give me $x,y,z$ "components" of the spin (a projection from $\mathbb{C}^2$ into $\mathbb{R}^3$?) - but that seems different from $\langle\psi|\hat{S}_z|\psi\rangle$. (I am also aware that spin operators enter into Dirac's equation, but in my class we introduced spin and let it sit there in its own $\mathbb{C}^2$, and I must have missed something about $\hat{S}_z:\mathbb{C}^2\rightarrow\mathbb{C}^2$.) Answer: The spin operator $\vec S = \left(\begin{matrix} S_x \\ S_y \\S_z \end{matrix}\right)$ is just like the (orbital) angular momentum operator. $\langle \psi \rvert S_i \lvert \psi \rangle$ gives you the expectation value for the component of the spin angular momentum. $\langle \psi \rvert \vec S \lvert \psi \rangle$ is the expectation for the full spin vector.
The operators $S_i$ act on the spin space $\mathbb{C}^2$ just as you say, but their expectation values are real numbers that, when combined into $\langle \psi \rvert \vec S \lvert \psi \rangle$, form an ordinary vector in $\mathbb{R}^3$ - which is then the "spin" part of the total angular momentum expectation for $\psi$.
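As a concrete check (a sketch, not part of the original answer; it just uses the state $|n;+\rangle$ from the question), one can compute these expectation values numerically with the Pauli matrices, in units where $\hbar = 1$, and verify that $\langle\psi|\vec S|\psi\rangle = \frac{1}{2}(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta)$, i.e. half the unit vector $n$:

```python
import numpy as np

# Spin-1/2 operators S_i = sigma_i / 2 (hbar = 1), acting on C^2
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def expectation(psi, op):
    """<psi|op|psi> for a normalized spinor psi in C^2; a real number."""
    return float(np.real(np.conj(psi) @ op @ psi))

# |n;+> = cos(theta/2)|+> + sin(theta/2) e^{i phi} |->
theta, phi = np.pi / 3, np.pi / 4
n_plus = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

# The three real expectation values assemble into an ordinary vector in R^3
spin_vec = np.array([expectation(n_plus, S) for S in (Sx, Sy, Sz)])
# spin_vec equals (1/2) * (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)):
# the expectation of the spin vector points along n.
```

So each $S_i$ maps $\mathbb{C}^2\to\mathbb{C}^2$, yet the three numbers $\langle S_x\rangle,\langle S_y\rangle,\langle S_z\rangle$ together give the spin direction in real space.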
{ "domain": "physics.stackexchange", "id": 20001, "tags": "quantum-mechanics, quantum-spin" }
Writing a generic performance efficient isEmpty method which can check for null or emptiness
Question: I am writing a utility method which can check for an empty or null string, collection, object, or any general type:

public static boolean isEmpty(Object obj) {
    if (obj == null)
        return true;
    if (obj instanceof Collection)
        return ((Collection<?>) obj).size() == 0;
    // is below line expensive?
    final String s = String.valueOf(obj).trim();
    return s.length() == 0 || s.equalsIgnoreCase("null");
}

How can I make my above method efficient, since the isEmpty method will be called multiple times from an application that is very performance critical? I suspect the line below will be expensive because of heavy toString methods, and it will create temporary garbage as well, which might cause GC and slow down performance:

final String s = String.valueOf(obj).trim();

If I need to check whether a map is null or empty, should I keep both the collection isEmpty and map isEmpty methods, or will the collection isEmpty method be fine for that?

public static void main(String[] args) {
    Map<String, String> hello = new HashMap<String, String>();
    System.out.println(isEmpty(hello));
    Map<String, HashMap<Integer, String>> primary = new HashMap<String, HashMap<Integer, String>>();
    System.out.println(isEmpty(primary));
}

public static boolean isEmpty(Collection<?> value) {
    return value == null || value.isEmpty();
}

public static boolean isEmpty(Map<?, ?> value) {
    return value == null || value.isEmpty();
}

Answer: Method overloading can make your implementations more efficient and cleaner:

public static boolean isEmpty(Collection obj) {
    return obj == null || obj.isEmpty();
}

public static boolean isEmpty(String string) {
    return string == null || string.trim().isEmpty();
}

public static boolean isEmpty(Object obj) {
    return obj == null || obj.toString().trim().isEmpty();
}

The Collection version is as efficient as possible. The String version would be more efficient without the trimming. It would be best to trim your strings as soon as you see them, long before they reach this call.
If you can review the callers and make sure that the strings are always trimmed at their origins, then you can remove .trim() for the best performance. The Object version can be inefficient, depending on the toString implementation of the objects that will be passed to it, and because of the trimming. I removed the comparison with null from there, because it seems pointless to me. I mean, a class whose toString method says "null" would seem very, very odd. In any case, you don't really want the Object version to be called at all. Most importantly because it probably won't even work. Take for example an empty Map. Its toString method returns the string {}, which won't match your conditions of emptiness. (For this type you should definitely add isEmpty(Map<?, ?> map) to benefit from its isEmpty method.) If performance is so critical, then add more overloaded implementations for all the other types that you care about, for example:

public static boolean isEmpty(Something obj) {
    return obj == null || obj.isEmpty();
}

Finally, especially when something is so important, you definitely want to unit test it, for example:

@Test
public void testEmptyObject() {
    assertTrue(isEmpty((Object) null));
    assertFalse(isEmpty(new Object()));
}

@Test
public void testEmptyString() {
    assertFalse(isEmpty("hello"));
    assertTrue(isEmpty(""));
    assertTrue(isEmpty(" "));
    assertTrue(isEmpty((Object) null));
}

@Test
public void testEmptySet() {
    assertFalse(isEmpty(new HashSet<String>(Arrays.asList("hello"))));
    assertTrue(isEmpty(new HashSet<String>()));
}

@Test
public void testEmptyMap() {
    Map<String, String> map = new HashMap<String, String>();
    assertTrue(isEmpty(map));
    map.put("hello", "hi");
    assertFalse(isEmpty(map));
}
{ "domain": "codereview.stackexchange", "id": 25873, "tags": "java, optimization, null" }
Not working brakes: just another energy conservation problem
Question: A car is driving down a mountain ($v=90\ \mathrm{km/h}=25\ \mathrm{m/s}$) when the driver realizes that the brakes aren't working. He tries to lose speed by going up an inclined ($20°$) plane with a friction coefficient of $k=0.60$. How many meters will it take to halt? I've tried the following ($s$ is what is asked for): $$K=\frac{mv^2}{2}$$ At the end, the potential energy gained is: $$U=mgh=mg\cdot s\cdot \sin \alpha$$ Meanwhile, the energy lost due to friction is: $$L_f=F \cdot s=mg \cdot \cos\alpha \cdot s$$ But the work done by non-conservative forces (friction) is also: $$L_f=U-K$$ And I have: $$mg \cdot \cos\alpha \cdot s=mg\cdot s\cdot \sin \alpha-\frac{mv^2}{2}$$ $$g \cdot \cos\alpha \cdot s=g\cdot s\cdot \sin \alpha-\frac{v^2}{2}$$ $$9.22s=3.35s-312.5$$ But I get a negative result. What's wrong? I'm sure there is a stupid error, but I can't find it. The correct result (reported in the textbook) is 120 m. Answer: There are two sources of kinetic energy loss: a) friction, and b) conversion to potential energy. So $$ \frac{1}{2} m v^2 = m g \left( k \cos\alpha + \sin\alpha \right) s $$ In your post you have written it as if gravity were providing energy to the system, which it would be if the slope were downwards. If you correct your sign you will get the correct result.
{ "domain": "physics.stackexchange", "id": 5594, "tags": "homework-and-exercises, energy, energy-conservation" }
How is the velocity not constant?
Question: A bead is moving along the spoke of a wheel at constant speed $u$. The wheel rotates with uniform angular velocity $\omega$ radians per second about an axis fixed in space. At $t=0$ the bead is on the $x$ axis at the origin. Find the velocity at time $t$ in POLAR coordinates. In the textbook it says: first we have $r(t)= ut$. Doesn't that mean $v = u$ is constant? How did they obtain this result (it changes in time) if $v$ is constant? $$\vec v = u\,\hat{r} + ut\omega\,\hat{\theta}$$ ($\hat{r}$ and $\hat{\theta}$ are the basis vectors of the polar plane) Answer: You are missing the tangential component. $$u_t=\omega r = \omega u t $$
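To spell out the textbook's result (a standard polar-coordinates computation, filling in the step the book skips): differentiate the position $\vec r = r\,\hat r$, using $d\hat r/dt = \dot\theta\,\hat\theta$,

```latex
\begin{aligned}
\vec{v} &= \frac{d}{dt}\left(r\,\hat{r}\right)
         = \dot{r}\,\hat{r} + r\,\dot{\theta}\,\hat{\theta},\\
r &= ut,\qquad \dot{r}=u,\qquad \dot{\theta}=\omega
\quad\Longrightarrow\quad
\vec{v} = u\,\hat{r} + ut\,\omega\,\hat{\theta},\\
|\vec{v}| &= \sqrt{u^{2} + \left(ut\,\omega\right)^{2}}.
\end{aligned}
```

Only the radial component $\dot r = u$ is constant; the tangential component $ut\omega$ (and hence the speed) grows with time.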
{ "domain": "physics.stackexchange", "id": 14486, "tags": "homework-and-exercises, classical-mechanics" }
Behaviour of action with respect to time
Question: I was wondering if it is possible to say something general about the behaviour of the action: $$ S[x(\tau)]=\int_0^T L\!\left(x,\frac{dx}{d\tau},\tau\right) d\tau $$ (where $x(\tau)$ defines a trajectory, with certain boundary conditions at $\tau=0$ and $\tau = T$, and $L$ is the Lagrangian) at small and large values of $T$. For some systems (harmonic oscillator), we can say that the action becomes very large at small $T$ (see the last formula here: http://www.oberlin.edu/physics/dstyer/FeynmanHibbs/Prob2-2.pdf). Intuitively (and quite "simple-mindedly"), I see it as: when the endpoints are fixed and the total time becomes very small, the kinetic energy must increase a lot (I cannot say anything about the potential, however). The question arose while I was reading the article "Path-Integral Derivation of Black-Hole Radiance" by Hartle and Hawking, where they consider that kind of behaviour. Answer: Comment to the question (v2): If the Lagrangian reads $L=\frac{1}{2}m \dot{x}^2-V(x)$, then the Dirichlet on-shell action reads $$\tag{1} S(x_f,t_f;x_i,t_i)~\approx~\frac{m}{2}\frac{(\Delta x)^2}{\Delta t}-V(\bar{x})\Delta t $$ for small times $\Delta t\geq 0$, where $$\tag{2} \Delta t~ :=~t_f-t_i, \qquad \Delta x~ :=~x_f-x_i, \qquad \bar{x}~ :=~ \frac{x_f+x_i}{2} .$$
{ "domain": "physics.stackexchange", "id": 25918, "tags": "lagrangian-formalism, action" }
Mach-Zehnder Interferometer: two output interference pattern question
Question: I have drawn a diagram so as not to confuse things. So far, I've heard that in a Mach-Zehnder interferometer the two outputs should show one constructive interference and one destructive interference. But what I calculated above for the phase shift doesn't fit with that argument. What am I missing? Where is the constructive interference? Answer: When you use a real beamsplitter, it has a finite thickness. When such a splitter is placed at the second position in a particular way, the green light beam going to screen 1 will suffer zero phase shift, because it is reflected coming from the denser medium and going back into the denser medium. The red beam going to screen 2 will suffer a phase shift of $\pi$ radians, because it is reflected coming from the lighter medium and going back into the lighter medium. This will be reversed if you change the position of the splitter, but essentially only one screen will show constructive interference.
{ "domain": "physics.stackexchange", "id": 25054, "tags": "optics, interference, interferometry" }
How do you calculate the heuristic value in this specific case?
Question: The A* algorithm uses the "evaluation function" $f(n) = g(n) + h(n)$, where $g(n)$ = cost of the path from the start node to node $n$ $h(n)$ = estimated cost of the cheapest path from $n$ to the goal node But, in the following case (picture), how is the value of $h(n)$ calculated? In the picture, $h(n)$ is the straight-line distance from $n$ to the goal node. But how do we calculate it? Answer: The most obvious heuristic would indeed simply be the straight-line distance. In most cases, where you have, for example, x and y coordinates for all the nodes in your graph, that would be extremely easy to compute. The straight-line distance also fits the requirements of an admissible heuristic, in that it will never overestimate the distance. The travel-distance between two points can never be shorter than the straight-line distance (unless you start involving things like... teleportation). From an image like that, the straight-line distance might be difficult to figure out yourself, which is probably why they gave you the straight-line distances on the right-hand side of the image. If the image is perfectly consistent, I suppose you could theoretically figure out by inspecting some of the roads in detail how much distance is covered per pixel. Then, you can also figure out how many pixels the figure has along the straight-line paths you're interested in, and compute the straight-line distances yourself. I have no idea if the figure was actually drawn in a 100% consistent manner though.
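A minimal sketch of how the straight-line heuristic plugs into A* (not from the answer; the graph, node names, coordinates and costs are made up for illustration). The Euclidean distance is admissible, and also consistent as long as every edge cost is at least the straight-line distance between its endpoints:

```python
import heapq
import math

def euclidean(a, b):
    """Straight-line distance between two (x, y) points -- this is h(n)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def a_star(graph, coords, start, goal):
    """graph: node -> list of (neighbour, edge_cost); coords: node -> (x, y).
    Returns the cost of the cheapest start-to-goal path (inf if unreachable)."""
    best_g = {start: 0.0}
    open_set = [(euclidean(coords[start], coords[goal]), 0.0, start)]  # (f, g, node)
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best_g[node]:
            continue  # stale queue entry, a better path to node was found already
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, math.inf):
                best_g[nbr] = g2
                f2 = g2 + euclidean(coords[nbr], coords[goal])  # f(n) = g(n) + h(n)
                heapq.heappush(open_set, (f2, g2, nbr))
    return math.inf
```

With map data like the figure in the question, coords would hold each node's map position, and the straight-line distances printed on the right-hand side of the image are exactly what euclidean(coords[n], coords[goal]) computes.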
{ "domain": "ai.stackexchange", "id": 625, "tags": "search, heuristics, a-star" }
What is space made of?
Question: General Relativity posits that matter curves spacetime, such that geodesics point towards the object in question, hence, gravity. Now, how does matter do this? What is spacetime "made of", anyway, such that it should interact with matter, being bent by it and forcing it to accelerate (via gravity)? Answer: The image of space being bent is just an analogy, it is not meant that anything is actually being deformed. Gravity distorts the notion of distance on spacetime, i.e. the presence of matter somehow causes the metric to change. A way to visualize this is to think of spacetime being bent, as you say, but really, spacetime is not made of anything, the idea of an ether has been laid to rest for a hundred years now, with good experimental reasons. Spacetime interacts with matter since matter exists within (or on, in some terminology) it, and when the notion of distance changes, the behaviour of objects relying on that notion changes. As for why the presence of matter itself influences the metric...well, this is the defining property of having mass/energy, just as generating (or reacting to) an electric field is the defining property of having an electric charge - in a manner of speaking, mass could be seen as the charge of gravity, though, since we do not fully understand gravity (yet), this is necessarily vague.
{ "domain": "physics.stackexchange", "id": 47632, "tags": "general-relativity, gravity, spacetime, universe, vacuum" }