Effect of electrical potential energy on the binding energy of a nucleus?
Question: The binding energy of a nucleus with $Z$ protons and $N=A-Z$ neutrons is given by $$B=(Zm_p+Nm_n-m_{nucleus})c^2$$ In the above equation we subtract the rest mass of the nucleus from the rest mass of the nucleons in their free states, but we are not accounting for the electrical potential energy between protons. Is it because it is negligible, is it included in $m_{nucleus}$, or is there some other reason? Answer: It is actually already included. The rest energy of the nucleus, $m_\text{nuc}c^2$, includes all sorts of interaction energies. When we subtract this mass from the sum of the individual nucleon masses, we get the mass deficit due to these interactions. Recall the semi-empirical mass formula $$B(N,Z)=aA-bA^{2/3}-s\frac{(N-Z)^2}{A}-\frac{dZ^2}{A^{1/3}}-\frac{\delta }{\sqrt{A}}$$ The fourth term accounts for the Coulomb interaction.
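The role of the Coulomb term can be made concrete by evaluating the formula numerically. A minimal Python sketch, using one common textbook set of coefficient values (my choice; published fits differ slightly):

```python
# Numerical sketch of the semi-empirical mass formula quoted above (in MeV).
# The coefficient values are one common textbook fit and are my choice;
# published fits differ slightly.
a, b, s, d, delta0 = 15.5, 16.8, 23.0, 0.72, 34.0

def binding_energy(N, Z):
    """B(N,Z) = aA - bA^(2/3) - s(N-Z)^2/A - dZ^2/A^(1/3) - delta/sqrt(A)."""
    A = N + Z
    if N % 2 == 1 and Z % 2 == 1:
        delta = +delta0          # odd-odd: pairing reduces binding
    elif N % 2 == 0 and Z % 2 == 0:
        delta = -delta0          # even-even: pairing increases binding
    else:
        delta = 0.0              # odd A: no pairing correction
    return (a * A - b * A ** (2 / 3) - s * (N - Z) ** 2 / A
            - d * Z ** 2 / A ** (1 / 3) - delta / A ** 0.5)

def coulomb_term(N, Z):
    """The fourth term alone: proton-proton Coulomb repulsion."""
    return d * Z ** 2 / (N + Z) ** (1 / 3)

# For Fe-56 (Z = 26, N = 30) this gives roughly 8.8 MeV per nucleon,
# with the Coulomb term contributing about 127 MeV to the subtraction.
```

The Coulomb term is comparable in size to the surface term for mid-mass nuclei, which is why it cannot be neglected in the mass balance.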
{ "domain": "physics.stackexchange", "id": 78575, "tags": "energy, particle-physics, nuclear-physics, binding-energy" }
Combining the use of workdir and the --jobscript option in snakemake
Question: It seems that in snakemake the script specified after --jobscript cannot be used properly if a workdir: is specified in the snakefile. The path of the specified script becomes relative to the workdir defined in the snakefile instead of being relative to the current working directory. I do not know if this is a feature or a bug! Nevertheless, it becomes quite painful to work around this problem. The only way I found was to parse the command line in the snakefile, for example: snakemake -c "qsub" -j 30 --js ./sge.sh --latency-wait 30 -rp in order to copy ./sge.sh into the output directory defined by the keyword workdir in the snakefile. As a side effect, if you specify options for qsub on the command line, for example: snakemake -c "qsub -e ./logs/ -o ./logs/" -j 30 --js ./sge.sh --latency-wait 30 -rp the logs folder must be created under the workdir directory. Is there something I don't understand about the --jobscript option? Am I not using it the right way? Am I going against best practice? Answer: To quote from the snakemake documentation: All paths in the snakefile are interpreted relative to the directory snakemake is executed in. This behaviour can be overridden by specifying a workdir in the snakefile: workdir: "path/to/workdir" Usually, it is preferred to only set the working directory via the command line, because the above directive limits the portability of Snakemake workflows. It's not explicitly stated, but it's sort of implied that all relative paths then become relative to the working directory. I would expect that specifying an absolute path would get around this. As an aside, to my mind setting the working directory is usually only needed on clusters without a shared file system (presumably shared between the worker nodes, but not with the head node), since there you can't cd $WORKDIR before running snakemake. In such cases this is normally done with your scheduler.
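As the answer suggests, an absolute path should sidestep the workdir re-interpretation entirely. A small Python sketch of that idea (the wrapper function is hypothetical; the flags mirror the question's command line):

```python
import os

def snakemake_cmd(jobscript="./sge.sh"):
    """Build the snakemake invocation from the question, but resolve the
    jobscript to an absolute path first so a workdir: directive in the
    snakefile cannot re-interpret it. (The helper name is mine.)"""
    js = os.path.abspath(jobscript)
    return ["snakemake", "-c", "qsub", "-j", "30",
            "--js", js, "--latency-wait", "30", "-rp"]

cmd = snakemake_cmd("./sge.sh")   # pass cmd to subprocess.run() in real use
```

Since the path is absolutized before snakemake ever sees it, it no longer matters what workdir the snakefile declares.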
{ "domain": "bioinformatics.stackexchange", "id": 289, "tags": "snakemake" }
Can change in position due to acceleration be expressed using dual quaternions?
Question: Dual quaternions seem like an appealing way to model 6DOF motion since they linearize rotation. I've reviewed what literature I can find on them, and found expressions for translation and change in position for constant velocity, but not for accelerating bodies. Can change in position over time due to acceleration be expressed using dual quaternions, or does the lack of a second derivative (over the dual numbers) make this impossible? I've seen references to a more general Clifford 'motor algebra'. Does it solve this problem? Edit: I am primarily working from the paper "3D kinematics using dual quaternions: theory and applications in neuroscience", which contains a tutorial covering screw translation and velocity using dual quaternions. Answer: It's certainly possible. However, the displacement, velocity, and acceleration cannot all be represented in the same dual quaternion. Dual numbers are often used to represent a quantity along with its derivative. This usage does not allow a second derivative to be stored alongside them. The paper you cite chose to store velocity quaternions along with their displacement quaternions in dual quaternions. However, one can simply store the displacement, velocity, and acceleration in separate variables. This is how the paper treats line velocities, since the line itself takes up too many degrees of freedom to store the velocity with the line in the same dual quaternion. All of that said, acceleration isn't a particularly useful quantity when modeling 6-degree-of-freedom inertial movement, because forces and torques are proportional to the derivative of momentum, not of velocity. This distinction is only relevant for objects with non-isotropic inertia tensors, and thus is easily forgotten. It means one should track momentum and calculate velocity directly from it, rather than calculating acceleration.
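The "quantity plus its derivative" role of dual numbers mentioned in the answer can be illustrated with plain (non-quaternion) dual arithmetic; a minimal sketch, where the class name Dual is mine:

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0: the eps part carries
    the first derivative through arithmetic automatically."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

# Seed x with eps-part 1 to differentiate with respect to x:
x = Dual(3.0, 1.0)
y = x * x * x          # f(x) = x^3
# y.real == 27.0 (the value f(3)), y.eps == 27.0 (f'(3) = 3*3^2)
# There is no slot left for f'': all eps^2 terms vanish, which is exactly
# why displacement, velocity, and acceleration cannot all share one
# dual quaternion.
```

The quaternion case works the same way, with the real and dual parts being quaternions rather than scalars.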
{ "domain": "physics.stackexchange", "id": 69829, "tags": "newtonian-mechanics, kinematics, complex-numbers, grassmann-numbers, clifford-algebra" }
Successive ionization energies (work functions?) for metals
Question: Are there tables of successive ionization energies for metals (like those for gases)? All I can find are work functions for removing one electron from the surface of a metal into vacuum (and keeping it there). However, I guess it gets harder to remove electrons the more electrons have already been removed, right? Maybe there's a way to relate that energy to the average degree of ionization of the material. Is it possible at all? Finally, is it harder to ionize a metal gas or a metal solid? Most work functions are a little smaller than the (first) ionization energies of the same material in the gaseous state, but I still think ionization of solids is harder. Am I correct? Answer: In an infinite metal solid there are an infinite number of electrons, so each one removed costs exactly the same work function, with no change in energy. Intuitively this may seem a bit strange because, well, it should get harder to remove an electron once positive charges start to accumulate. But even a micron-sized piece of metal contains ~$10^{13}$ atoms, so you really have to do a lot of ionization before you have ionized even 0.1% of the metal.
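The answer's atom-count figure is easy to sanity-check with an order-of-magnitude estimate; a sketch where copper is my choice of example metal (its density and molar mass are standard values):

```python
# Order-of-magnitude check of the atom count in a micron-scale metal grain.
# Copper (density 8.96 g/cm^3, molar mass 63.55 g/mol) is my example metal;
# any dense metal gives a similar count.
N_A = 6.022e23                     # Avogadro's number, atoms/mol

def atoms_in_cube(side_um, density_g_cm3=8.96, molar_mass_g_mol=63.55):
    volume_cm3 = (side_um * 1e-4) ** 3      # 1 um = 1e-4 cm
    moles = density_g_cm3 * volume_cm3 / molar_mass_g_mol
    return moles * N_A

# A 1 um cube holds ~1e11 atoms; a piece a few microns across reaches
# the ~1e13 quoted in the answer, so ionizing even 0.1% of it means
# removing on the order of ten billion electrons.
```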
{ "domain": "chemistry.stackexchange", "id": 6127, "tags": "metal, ionization-energy" }
How can this C++ json library be improved
Question: I need a class to generate json content from data retrieved from a database. This gave me an excuse to play at creating a json library. This is my first attempt so it could probably be improved in many ways. Features are: feed in json from stdin to populate internal structures. output json to stdout. insert into json. find by json key name. It uses std::variant which requires c++17 or better. Please review the code and give me some feedback. Is it heading in the right direction? Firstly the main header, json20.hpp: #ifndef JSON20_HPP_ #define JSON20_HPP_ #include <variant> // type-safe union #include <string> #include <vector> #include <map> #include <iostream> #include "json_value.hpp" struct json_array { std::vector<json_value> array; }; struct json_object { std::map < std::string, json_value> object; json_value& operator [](std::string key) { return object[key]; } }; struct json_null { }; std::ostream& operator<< (std::ostream& os, const json_value& v); class json20 { public: /* start empty constructor */ json20() {} /* construct json object from input stream */ json20(std::istream& istrm, std::string& parse_status); /* parent node of json object */ json_value root; /* insert json object by manual construction. 
Returns true on successful insertion */ bool insert(const json_object& object); /* find json value by key name */ bool find_by_key(const std::string& key_name, json_value& value); private: bool parse(std::istream& istrm, std::string& parse_status); json_value parse_json(std::istream& istrm, json_value& value, std::string& parse_status); bool recursive_find_by_key(const std::string& key_name, json_object& next, json_value& value); }; #endif // JSON20_HPP_ json values header, json_value.hpp: #ifndef JSON_VALUE_HPP_ #define JSON_VALUE_HPP_ #include "json20.hpp" #include <variant> #include <string> // fwd declare struct json_array; struct json_object; struct json_null; typedef std::variant<double, bool, std::string, json_array, json_object, json_null> json_value; #endif // JSON_VALUE_HPP_ json implementation file, json20.cpp: #include "json20.hpp" #include <cstring> static void eat_whitespace(std::istream& istrm) { int ch; while (istrm) { ch = istrm.peek(); if (isspace(ch)) { istrm.get(); } else { break; } } } static char get_next_token(std::istream& istrm) { eat_whitespace(istrm); char ch; if (istrm.get(ch)) return ch; else return (char)-1; // indicate failure; } std::ostream& operator<< (std::ostream& os, const json_value& v) { switch (v.index()) { case 0: os << std::get<0>(v); break; case 1: if (std::get<1>(v)) { os << "true"; } else { os << "false"; } break; case 2: os << '"' << std::get<2>(v) << '"'; break; case 3: { os << '['; bool first = true; json_array arr = std::get<json_array>(v); for (auto& item : arr.array) { if (!first) { os << ','; } os << item; first = false; } os << ']'; break; } case 4: { os << '{'; bool first = true; json_object obj = std::get<json_object>(v); for (auto& item : obj.object) { if (!first) { os << ','; } os << '"' << item.first << "\":"; os << item.second; first = false; } os << '}'; break; } case 5: os << "null"; break; } return os; } static bool try_string(std::istream& istrm, json_value& value) { eat_whitespace(istrm); char ch = 
static_cast<char>(istrm.peek()); if (ch != '"') { return false; } // remove first " istrm.get(ch); std::string s; while (istrm.get(ch)) { if (ch == '"') { value.emplace<2>(s); return true; } else { // haven't worked out how to just append ch to end of string in value s += ch; } } return false; } static bool try_number(std::istream& istrm, json_value& value) { eat_whitespace(istrm); char ch = static_cast<char>(istrm.peek()); if (ch != '-' && !isdigit(ch)) { return false; } while (istrm.get(ch)) { // start of a number if (ch == '-' || (ch >= '0' && ch <= '9')) { char buf[10] = {}; buf[0] = ch; int i = 1; while (istrm.get(ch)) { if (ch >= '0' && ch <= '9' || ch == '.') { buf[i] = ch; i++; } else { istrm.putback(ch); break; } } double number = strtod(buf, nullptr); value.emplace<0>(number); return true; break; } else { istrm.putback(ch); return false; } } return false; } static bool check_bytes(std::istream& istrm, const char* expected) { const size_t length(strlen(expected)); for (size_t i = 0; i < length; i++) { char ch; if(!istrm.get(ch)) { return false; } if (ch != expected[i]) { return false; } } return true; } static bool try_boolean(std::istream& istrm, json_value& value, std::string& parse_status) { eat_whitespace(istrm); char ch = static_cast<char>(istrm.peek()); if (ch != 'f' && ch != 't') { return false; } if (ch == 'f') { const char* expected = "false"; if (check_bytes(istrm, expected)) { value.emplace<bool>(false); return true; } else { parse_status += "parse of boolean false value failed|"; return false; } } if (ch == 't') { const char* expected = "true"; if (check_bytes(istrm, expected)) { value.emplace<bool>(true); return true; } else { parse_status += "parse of boolean true value failed|"; return false; } } return false; } static bool try_null(std::istream& istrm, json_value& value, std::string& parse_status) { eat_whitespace(istrm); char ch = static_cast<char>(istrm.peek()); if (ch != 'n') { return false; } else { const char* expected = "null"; if 
(check_bytes(istrm, expected)) { value.emplace<json_null>(); return true; } else { parse_status += "parse of null value failed|"; return false; } } } json20::json20(std::istream& istrm, std::string& parse_status) { parse(istrm, parse_status); } json_value json20::parse_json(std::istream& istrm, json_value& value, std::string& parse_status) { if (try_number(istrm, value)) { return value; } if (try_string(istrm, value)) { return value; } if (try_boolean(istrm, value, parse_status)) { return value; } if (try_null(istrm, value, parse_status)) { return value; } char ch = get_next_token(istrm); if (ch == '{') { json_object object; ch = get_next_token(istrm); if (ch == '}') { return object; } while (istrm) { if (ch == '"') { istrm.putback(ch); std::string key; json_value kvalue; if (!try_string(istrm, kvalue)) { parse_status += "unexpected json parsing failure of name|"; return kvalue; } else { key = std::get<2>(kvalue); } ch = get_next_token(istrm); if (ch != ':') { parse_status += "encountered an unexpected symbol: "; parse_status += ch; parse_status += ", : was expected|"; } object[key] = parse_json(istrm, value, parse_status); ch = get_next_token(istrm); if (ch == '}') { return object; } if (ch != ',') { parse_status += "encountered an unexpected symbol: "; parse_status += ch; parse_status += " , (comma) was expected|"; return object; } ch = get_next_token(istrm); } else { parse_status += "encountered an unexpected symbol: "; parse_status += ch; parse_status += " , \" (quote) was expected|"; return object; } } } if (ch == '[') { json_array jarray; ch = get_next_token(istrm); if (ch == ']') { return jarray; } istrm.putback(ch); while (istrm) { jarray.array.push_back(parse_json(istrm, value, parse_status)); ch = get_next_token(istrm); if (ch == ']') { break; } if (ch != ',') { parse_status += "comma expected, instead saw "; parse_status += ch; parse_status += '|'; } } return jarray; } // if get to here something is wrong parse_status += "parse failure, last character 
parsed: "; parse_status += ch; parse_status += '|'; return value; // no type/value will be set if return here } bool json20::parse(std::istream& istrm, std::string& parse_status) { root = parse_json(istrm, root, parse_status); return true; } bool json20::insert(const json_object& object) { root = object; return true; } bool json20::recursive_find_by_key(const std::string& key_name, json_object& next, json_value& value) { const auto found = next.object.find(key_name); if (found != next.object.end()) { value = found->second; return true; } // if not found, iterate over current values for child objects for (const auto element : next.object) { switch (element.second.index()) { case 3: // json_array { json_array jarray = std::get<json_array>(element.second); for (const auto list_item : jarray.array) { if (list_item.index() == 4) { json_object child = std::get<json_object>(list_item); return recursive_find_by_key(key_name, child, value); } } } break; case 4: // json_object json_object child = std::get<json_object>(element.second); return recursive_find_by_key(key_name, child, value); break; } } return false; } bool json20::find_by_key(const std::string& key_name, json_value& value) { // root should be a json_object, but check just in case switch (root.index()) { case 4: // json_object { json_object obj = std::get<json_object>(root); if (recursive_find_by_key(key_name, obj, value)) { return true; } break; } default: break; } return false; } test code using gtest, test.cpp: #include <gtest/gtest.h> #include "json20.hpp" #include <string> #include <iostream> #include <sstream> TEST(json20, GivenEmptyJsonStringThenConstructsCorrectly) { std::string myjson = "{}"; std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; std::string expected("{}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenJsonStringUsingNullThenConstructsCorrectly) { 
std::string myjson = "{ \"myvalue\": null }"; std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"myvalue\":null}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenNumberJsonStringThenConstructsCorrectly) { std::string myjson = "{ \"mynumber\": 3.142 }"; std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"mynumber\":3.142}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenStringJsonStringThenConstructsCorrectly) { std::string myjson = "{ \"mystring\": \"Angus\" }"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"mystring\":\"Angus\"}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenStringJsonArrayThenConstructsCorrectly) { std::string myjson = "{ \"myarray\": [\"Angus\", \"Lisa\"]}"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"myarray\":[\"Angus\",\"Lisa\"]}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenNumberJsonArrayThenConstructsCorrectly) { std::string myjson = "{ \"myarray\": [1, 2, 3]}"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"myarray\":[1,2,3]}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenAMixedJsonArrayThenConstructsCorrectly) { std::string myjson = "{ \"myarray\": [1, \"Angus\", 3]}"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string 
expected("{\"myarray\":[1,\"Angus\",3]}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenMultipleJsonArraysThenConstructsCorrectly) { std::string myjson = "{ \"myarray\": [\"Angus\", \"Lisa\", {\"objage\": [true, 4,false]}, \"Wookie\"]}"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"myarray\":[\"Angus\",\"Lisa\",{\"objage\":[true,4,false]},\"Wookie\"]}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenComplexJsonThenConstructsCorrectly) { std::string myjson = R"###({ "firstName": "John", "lastName" : "Smith", "isAlive" : true, "age" : 27, "address" : { "streetAddress": "21 2nd Street", "city" : "New York", "state" : "NY", "postalCode" : "10021-3100" }, "phoneNumbers": [ { "type": "home", "number" : "212 555-1234" }, { "type": "office", "number" : "646 555-4567" } ], "children": [], "spouse": null })###"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"myarray\":[\"Angus\",\"Lisa\",{\"objage\":[true,4,false]},\"Wookie\"]}"); } // failed cases TEST(json20, GivenEmptyStringThenConstructsCorrectly) { std::string myjson; // blank string std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; std::string expected(""); EXPECT_NE(error.size(), 0u); } TEST(json20, GivenInvalidUnquotedStringValueJsonStringThenNoCrash) { std::string myjson = "{ \"mynumber\":unquoted string }"; std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; EXPECT_NE(error.size(), 0u); } TEST(json20, GivenLargeButInvalidJsonStringThenNoCrash) { std::string myjson = R"###({ "firstName": "John", "lastName" -- 
"Smith", "isAlive" : true, "age" : 27, "address" : { "streetAddress": "21 2nd Street", "city" : "New York", "state" : "NY", "postalCode" : "10021-3100" }, "phoneNumbers": [ { "type": "home", "number" : "212 555-1234" }, { "type": "office", "number" : 646 555-4567 } ], "children": [[[], "spouse": nill })###"; std::stringstream iostrm; iostrm << myjson; // take in through stdin std::string error; json20 json(iostrm, error); // output to stdout std::ostringstream ostrm; ostrm << json.root; EXPECT_NE(error.size(), 0u); } TEST(json20, GivenAJsonObjectThenOutputAsExpected) { const std::string s = "my list"; double arrayofdoubles[] = {1.0, 2.0, 3.0}; json_array jarray; for (auto i : arrayofdoubles) { jarray.array.push_back(i); } json_object obj; obj.object[s] = jarray; json20 json; json.insert(obj); //// output to stdout std::ostringstream ostrm; ostrm << json.root; std::string expected("{\"my list\":[1,2,3]}"); EXPECT_EQ(ostrm.str(), expected); } TEST(json20, GivenValidJsonThenFindValueByKey) { std::string myjson = R"###({ "firstName": "John", "lastName" : "Smith", "isAlive" : true, "age" : 27, "address" : { "streetAddress": "21 2nd Street", "city" : "New York", "state" : "NY", "postalCode" : "10021-3100" }, "phoneNumbers": [ { "type": "home", "number" : "212 555-1234" }, { "type": "office", "number" : "646 555-4567" } ], "children": [], "spouse": null })###"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); json_value value; EXPECT_TRUE(json.find_by_key("firstName", value)); const json_value expected = std::string("John"); EXPECT_EQ(std::get<std::string>(value), std::get<std::string>(expected)); } TEST(json20, GivenJsonWithNestedKeyThenFindValueByKey) { std::string myjson = R"###({ "firstName": "John", "lastName" : "Smith", "nestedLevel1Key" : { "name": "childString1", "NestedLevel2Key": { "age": 10, "address1": "1 Woodlough Way" } } })###"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, 
error); json_value value; EXPECT_TRUE(json.find_by_key("address1", value)); const json_value expected = std::string("1 Woodlough Way"); EXPECT_EQ(std::get<std::string>(value), std::get<std::string>(expected)); } TEST(json20, GivenJsonWithNestedKeyInArrayThenFindValueByKey) { std::string myjson = R"###({ "firstName": "John", "lastName" : "Smith", "nestedLevel1Key" : [ "age": 29, { "name": "childString1", "NestedLevel2Key": [{ "address1": "1 Woodlough Way" }] }] })###"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); json_value value; EXPECT_TRUE(json.find_by_key("address1", value)); const json_value expected = std::string("1 Woodlough Way"); EXPECT_EQ(std::get<std::string>(value), std::get<std::string>(expected)); } TEST(json20, GivenInvalidbooleanJsonValueThenParseErrorShouldIndicateBooleanError) { std::string myjson = R"###({ "firstName": "John", "lastName" : "Smith", "isAlive" : truw, "age" : 27, "address" : { "streetAddress": "21 2nd Street", "city" : "New York", "state" : "NY", "postalCode" : "10021-3100" }, "phoneNumbers": [ { "type": "home", "number" : "212 555-1234" }, { "type": "office", "number" : "646 555-4567" } ], "children": [], "spouse": null })###"; std::stringstream iostrm; iostrm << myjson; std::string error; json20 json(iostrm, error); size_t found_error = error.find("bool"); EXPECT_NE(error.size(), 0u); EXPECT_NE(found_error, std::string::npos); } Answer: Code Review: I don't like this as it gives you an extra level of indirection. struct json_array { std::vector<json_value> array; }; You can simply use another name: using json_array = std::vector<json_value>; This gives you a specific name for the array and removes the level of indirection. Sure this is reasonable: struct json_object { std::map < std::string, json_value> object; Note that if you try and access an element that does not exist it will add it to the object (even if you are just reading). 
json_value& operator [](std::string key) { return object[key]; Maybe this is desirable, depends on your use case. } }; But sometimes you pass by const reference. In this case you cannot access members of the object because there is no const access to members. I would add a way to access elements from a const reference. json_value const& operator [](std::string key) const { auto find = object.find(key); if (find != object.end()) { return find->second; } // Not sure what you want to do if the object does not exist. } Not really sure what json20 is for? class json20 { You don't need it to hold the JSON; that is what json_value is for. To me this is a JSON parser, which is fine, but you don't need to store the json_value inside. Personally I would rename this to JsonParser and then use it to read a stream that returns a json_value object. The stream operator >> drops all leading white space so you don't actually need to write your own. static void eat_whitespace(std::istream& istrm) { Personally I would write eat_whitespace like this: static void eat_whitespace(std::istream& istrm) { char ch; while (istrm.get(ch) && std::isspace(static_cast<unsigned char>(ch))) { // Do nothing } if (istrm) { istrm.unget(); // We read a non whitespace. Put it back. } } So because operator>> drops leading white space we can re-write get_next_token static char get_next_token(std::istream& istrm) { char ch = static_cast<char>(-1); istrm >> ch; // Don't care if it does not work. // If the read fails then `ch` is unmodified. // So it will have a value of -1. return ch; } OK. Streaming: std::ostream& operator<< (std::ostream& os, const json_value& v) { For boolean values we can simplify it: // If you want to do it manually. os << (std::get<1>(v) ? "true" : "false"); // Using the stream operator. os << std::boolalpha << std::get<1>(v); For objects like the json_object, json_array and json_null I would write their own stream operators. 
std::ostream& operator<< (std::ostream& os, json_array const& v); std::ostream& operator<< (std::ostream& os, json_object const& v); std::ostream& operator<< (std::ostream& os, json_null const& v); Now you can simplify case 3/4/5: case 3: os << std::get<3>(v); break; case 4: os << std::get<4>(v); break; case 5: os << std::get<5>(v); break; Let's simplify try_string(): static bool try_string(std::istream& istrm, json_value& value) { char ch; if (istrm >> ch) { if (ch != '"') { istrm.unget(); return false; } std::string s; std::getline(istrm, s, '"'); value.emplace<2>(s); return true; } return false; } Pretty sure your try_number does not conform to the JSON standard. static bool try_number(std::istream& istrm, json_value& value) { https://www.json.org/json-en.html Your code will read a number with multiple . in it. Numbers cannot start with a leading 0 (unless the number is just zero or zero with a fraction). You don't support an E exponent suffix on numbers. Personally I would convert the C++ code for parsing JSON into LEX code to parse JSON values. The code for lex is a lot simpler:
{ "domain": "codereview.stackexchange", "id": 37880, "tags": "c++, parsing, json" }
Regioselectivity of bromination of alkenes
Question: Recently, in my chemistry lesson, we were discussing the mechanism of the bromination of alkenes. The teacher brought up the involvement of the cyclic bromonium intermediate. She also mentioned that the nucleophile attacks the more substituted carbon atom in the bromonium ion. However, this raised many questions among my classmates. Firstly, based on steric reasons, the nucleophile is more likely to attack the less-substituted carbon since there is less hindrance. Hearing this, the teacher then said that the reason for the preference is based on electronic factors. However, this makes little sense as well, considering that the presence of more alkyl groups will decrease the partial positive charge on the more substituted carbon, making it less attractive to the nucleophile (be it $\ce {Br^-}$ or $\ce {H2O}$). I went to seek clarification by looking at more authoritative texts, such as Advanced Organic Chemistry: Part A: Structure and Mechanisms by Carey and Sundberg, as well as Guidebook to Mechanism in Organic Chemistry by Peter Sykes. However, they were citing very weird reasons that were not very intuitive. Here are some extracts from relevant chapters of the two texts: From the former, Unsymmetrical alkenes nevertheless follow the Markovnikov rule because the partial positive charge that develops is located predominantly at the carbon that is better able to accommodate an electron deficiency... From the latter, With an unsymmetrical alkene, e.g. 2-methylpropene (32), the more heavily alkyl-substituted carbon will therefore be preferentially attacked by the residual nucleophile, $\ce {Cl^-}$. Why do the alkyl groups result in greater cationic character when they are in fact, electron releasing? Answer: With bromination, the first step is the formation of an adduct complex, which can be explained that the π-bond polarises the dibromide. The next step is the formation of the bromonium ion, which carries an overall positive charge. 
Any symmetry-breaking influence, such as substituents, will result in asymmetric carbon-bromine bond lengths in the bromonium ion. Since it is a cation, similar stability rules as for carbocations apply. Therefore the more substituted carbon will have the longer $\ce{C-Br}$ bond length and the higher charge (more positive/less negative). This makes it the better target for the residual nucleophile $\ce{Br-}$ to attack. Here is the bromonium ion for the reaction $$ \ce{\underset{alkene}{H3C-CH=CH2} + Br2 -> [\underset{bromonium ion}{H3C-CH(Br)CH2}]+ + Br- -> \underset{vicinal dibromide}{H3C-CHBr-CH2Br}} $$ calculated at the DF-b97d3/def2-SVP level of theory (charges with NBO6) to illustrate the concept: Additionally, the HOMO has the higher coefficient at the less substituted carbon, while the LUMO has the higher coefficient at the more substituted carbon. From this, you can see that both quotes from the textbooks are correct.
{ "domain": "chemistry.stackexchange", "id": 10632, "tags": "organic-chemistry, reaction-mechanism, c-c-addition" }
Can circumstances exist such that as seen from some solar system all stars are obscured?
Question: I am mostly active on Physics Stack Exchange but a question occurred to me that I think is best submitted here. This question asks whether the circumstances of an intended thought experiment are physically possible. For this thought experiment it's not necessary for the circumstances to be physically possible, but it would be a nice bonus. In the history of astronomy the fact that the fixed stars were available as a reference for celestial motion was hugely helpful. For example: my understanding is that Kepler's approach to finding the shape of the orbit of Mars was as follows. From astronomical records extending over a long time the period of the orbit of Mars can be reconstructed. Kepler would take pairs of Mars position observations, spaced one Mars year apart. These pairs of Mars positions were from two different Earth positions, allowing Kepler to triangulate. The intended thought experiment: a solar system in a region of its galaxy with such a density of interstellar gas that all the stars are obscured. Without the fixed stars as a reference for celestial motion the Kepler problem would be much harder. (The purpose of this thought experiment is to argue that while it would have been harder to obtain the inverse square law of gravity, it would not be impossible. This can then serve as a demonstration of what a theory of motion is.) Some of my guesses: My understanding is that all parts of the galaxy can be observed; it's just that for some parts the observed luminosity of the stars is diminished because the light coming from those stars traveled through regions with interstellar gas. That would make it very unlikely that interstellar gas can be dense enough to obscure absolutely all stars. That brings me to guessing that this thought experiment would require rather contrived circumstances. For instance, maybe it would require a star to be out in intergalactic space. 
That would mean that the star has had the misfortune, in its distant past, of being ejected from its originating galaxy. [Later edit] Assuming the planet is rotating and that the axis has an inclination with respect to the orbital plane: Once the astronomers make the transition to a heliocentric model: The astronomers have a measurement of the duration of the solar day, and from the fact that the duration of the year is constant over time they can, it seems to me, infer the ratio of the solar day to the intrinsic day. The astronomers can opt to use the line of intersection between the plane of the equator and the planet's orbital plane as a reference of zero rotation rate. (Of course, a rotating planet that has an inclination will undergo gyroscopic precession. However, the precession of the equinoxes is slow enough that, on the assumption that the reference is stationary, highly fertile science will be possible.) It seems to me that once such a reference system is in place a Kepler has the opportunity to find Kepler's laws of planetary motion. The purpose of the thought experiment is to demonstrate this: the very motion of the planets of a solar system is sufficient to establish a reference of motion. I submit that the conservation of the angular momentum of a rotating planet is sufficient to provide a solar-system-wide reference of orientation. Over time astronomers can iterate to an ever more accurate reference of motion. Summarizing: motion itself is sufficient to provide a reference of motion. Answer: The Coalsack nebula is likely a fairly typical dark nebula, reducing starlight by about 10 magnitudes over about 35 light-years. A solar system in the middle of the nebula would hence get a reduction of outside stars by 5 magnitudes, which would hide much of the galaxy but would just reduce a bright star like Polaris to a faint one at the limit of unaided vision. 
While denser clouds likely exist, they are likely not opaque enough for the scenario unless they are very extensive (a galaxy like M64 may have big enough dust clouds to be more suitable, but there are going to be stars mixed in with them). Runaway stars may be closer to the scenario. They do not have to undergo very radical accelerations (e.g. they could be part of a tidal tail ripped out by a passing galaxy), although the most noticeable ones have undergone some drama. If separated far enough into a void the distance modulus can bring the magnitude of all galaxies down to the level where you need a good telescope to see them.
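To make the magnitude arithmetic in the answer concrete: apparent magnitude follows m = M + 5·log10(d/10 pc) + A, where A is the line-of-sight extinction in magnitudes. A minimal Python sketch (the absolute magnitude and distance used for Polaris are illustrative round numbers, not taken from the answer):

```python
import math

def apparent_magnitude(abs_mag, distance_pc, extinction_mag=0.0):
    """Apparent magnitude from absolute magnitude, distance (parsecs),
    and line-of-sight extinction in magnitudes."""
    return abs_mag + 5 * math.log10(distance_pc / 10) + extinction_mag

# Polaris: roughly M ~ -3.6 at ~130 pc (illustrative round numbers).
m_clear = apparent_magnitude(-3.6, 130)        # ~2.0, easily visible
m_nebula = apparent_magnitude(-3.6, 130, 5.0)  # with 5 mag of dust extinction
naked_eye_limit = 6.0                          # typical dark-sky limit
print(m_clear, m_nebula, m_nebula > naked_eye_limit)
```

With these numbers a +5 magnitude extinction pushes even a Polaris-like star just past the naked-eye limit, matching the answer's "faint one at the limit of unaided vision".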
{ "domain": "astronomy.stackexchange", "id": 5342, "tags": "interstellar-medium" }
Team Fortress 2 item price lookup using Trade.tf developer API and Steam WebAPI
Question: I've made a nice little Python program using the requests module using the Trade.tf Developer API and the Steam WebAPI allowing the user to look up the price of Team Fortress 2 items through Trade.tf's spreadsheet. For those of you that may not know, Team Fortress 2 is a free-to-play team-based shooter game which also features quite a large e-economy for in-game items. Sites such as Backpack.tf and Trade.tf exist to show pricing of items in terms of various currency-like items such as refined metal (a crafting ingredient in-game) and keys (used to open supply crates which can unlock various items within them). I know there's definitely something that can be improved in this program, but I'm not fully sure on how I can accomplish this. Feel free to make any suggestions for it. import json import requests import time with open('keys.json', 'r') as f: keys = json.load(f) schema = requests.get('http://api.steampowered.com/IEconItems_440/GetSchema/v0001/?key=' + keys['steam']) spreadsheet = requests.get('http://www.trade.tf/api/spreadsheet.json?key=' + keys['tradetf']) qualities = {"uncraft" : -1, "genuine" : 1, "vintage" : 3, "unique" : 6, "strange" : 11, "haunted" : 13} # all qualities currently tracked by Trade.TF def get_item_defindex_from_name(name): for items in schema.json()["result"]["items"]: if items["name"] == name: defindex = items["defindex"] if defindex: return defindex else: print("Invalid item.") def get_quality_id_from_name(name): return qualities[name] def get_price_unit(unit): return { "r": " refined", "k": " keys", "b": " buds"}.get(unit, " ") def get_tradetf_price(item, quality): price = spreadsheet.json()["items"][str(item)][str(quality)]["regular"]["hi"] unit = get_price_unit(spreadsheet.json()["items"][str(item)][str(quality)]["regular"]["unit"]) return str(price) + unit def get_input(dialog, validation, error_message): while True: answer = input(dialog) if validation(answer): return answer else: print(error_message) def main(): try: 
while True: try: item_name = str(input("Enter item name: ")) item_ = get_item_defindex_from_name(item_name) break except UnboundLocalError: print("Invalid item.") quality = get_input( "Enter item quality: ", lambda quality: quality.lower() in qualities, "Quality not tracked by Trade.tf." ) quality_ = get_quality_id_from_name(quality.lower()) print("\n", quality, item_name, "price: ", get_tradetf_price(item_, quality_)) except KeyError: print("Item not currently tracked by Trade.tf.") if __name__ == "__main__": main() print("Shutting down...") time.sleep(3) Answer: Hah, this is a fun one cause I play a lot of Tf2. So let's see what could be improved here. I'll start with you naming. You don't need to use the _from_ naming convention. ie. def get_item_defindex_from_name(name): get_item_defindex_from_name(item_name) You already have the name there, it's clear that you need a name to get the defindex from, so remove that part and save yourself from a 5 word function name get_item_defindex(item_name) You should also use constants. qualities is essentially a constant but you haven't signalled it as one. To make it clear it is, write the name in UPPER_SNAKE_CASE (QUALITIES) and define it right after your imports, before running any actual code. I would do the same with your price units, it's not really clear why you treated them differently. Your with block is a bit misplaced. If you definitely want to load the schema and spreadsheet as part of the global namespace to be accessed in every function, then you should still put them in the if __name__ == "__main__" block. if blocks don't limit scope the way functions do, so they'll still be globally available. You also never use the spreadsheet or the schema without calling .json() on them. Couldn't you just do this here? If you did you could probably save significant time not having to run that function every time. In fact, you only need items out of spreadsheet and ["result"]["items"] out of schema. 
If you don't need to maintain some sort of connection to the results (which it doesn't seem like you do) then you can just store the relevant values from these dicts and ignore the rest. In get_item_defindex, you're looping over all the items to try find one with a matching name. Is there any reason to keep looking after you've found a value? I'm assuming not, so in that case you should return as soon as you find it to save time: def get_item_defindex_from_name(name): for items in schema.json()["result"]["items"]: if items["name"] == name: return items["defindex"] print("Invalid item.") If you specifically need to get the last matching name for some reason, then you should loop over the schema backwards: def get_item_defindex_from_name(name): for items in reversed(schema.json()["result"]["items"]): if items["name"] == name: return items["defindex"] print("Invalid item.") get_quality_id_from_name seems like a redundant function to me. If it's just going to be a dictionary lookup, it's easier to use that instead of a function. If it included getting the user input then it could be worth being a function. Often single lines of code are only worth being a function if they're particularly complex and worth abstracting or if they're called a lot and it saves repetitive code. Neither is true here. I'd say the same for price unit. In get_tradetf_price you can make it readable and slightly more performant to grab the item up front, before you get the price and unit: def get_tradetf_price(item, quality): item_data = spreadsheet.json()["items"][str(item)][str(quality)]["regular"] price = item_data["hi"] unit = get_price_unit(item_data["unit"]) return str(price) + unit Your get_input function is very confusing. It's a good abstraction mostly, but the validation approach is overcomplicated, especially for what you need here. 
Instead of passing a lambda to test the input, just pass a list of valid values and test if the input is in there, like so: def get_input(dialog, valid_inputs, error_message): while True: answer = input(dialog).lower() if answer in valid_inputs: return answer else: print(error_message) You can then call it with a much simpler line: quality = get_input("Enter item quality: ", QUALITIES, "Quality not tracked by Trade.tf.") Also note I added the .lower() when you call input. When accepting input try to sanitise it immediately. You don't care about the case so lowercase it as soon as you accept the input. Note that if you can store the dictionary of items from the schema you can then reuse this for your item_name request at the start: item_name = get_input("Enter item name: ", schema_items, "Invalid item.") And lastly, your try block is a bit long. You clearly intend it to catch errors from get_tradetf_price but it's going to swallow any KeyError in your whole main block. Make try blocks as short as possible: def main(): item_name = get_input("Enter item name: ", schema_items, "Invalid item.") item_id = get_item_defindex(item_name) quality = get_input("Enter item quality: ", QUALITIES, "Quality not tracked by Trade.tf.") quality_id = get_quality_id_from_name(quality.lower()) try: print("\n {} {} price: {}".format(quality, item_name, get_tradetf_price(item_id, quality_id))) except KeyError: print("Item not currently tracked by Trade.tf.")
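A runnable sketch of the reviewer's suggested get_input. The read parameter is an addition here (not in the review) so the loop can be exercised with canned answers instead of a live terminal:

```python
# Qualities tracked by Trade.tf, as a module-level constant per the review.
QUALITIES = {"uncraft": -1, "genuine": 1, "vintage": 3,
             "unique": 6, "strange": 11, "haunted": 13}

def get_input(dialog, valid_inputs, error_message, read=input):
    """Prompt until the (lower-cased) answer is one of valid_inputs."""
    while True:
        answer = read(dialog).lower()
        if answer in valid_inputs:
            return answer
        print(error_message)

# Exercise it with canned answers instead of a live terminal:
answers = iter(["Plutonium", "Strange"])  # first answer is invalid, second valid
quality = get_input("Enter item quality: ", QUALITIES,
                    "Quality not tracked by Trade.tf.",
                    read=lambda _: next(answers))
print(quality)  # "strange"
```

Injecting the read callable also makes the function straightforward to unit-test, which the original lambda-validated version was not.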
{ "domain": "codereview.stackexchange", "id": 18672, "tags": "python, python-3.x, steam" }
turtle tf tutorial fails to work
Question: Hi all, I am fairly new to ros and was going through the tf tutorials and stumbled upon a strange error. After fiddling around for an hour or so I'm still not able to understand the behavior. I'm doing everything exactly as stated in these tutorials, using C++. The code is exactly as stated there. I'm on fuerte, and ubuntu 12.04. The problem is that the second turtle goes in circles instead of approaching the first one. The broadcaster works well, so I would guess that the issue is in the listener in the "lookupTransform" method. In the end the angular speed appears to be wrong. I also attach the image of how the turtle moves. This happens on 2 different machines of the same configuration with fresh ros-fuerte installed. Any ideas what could explain such a behavior? Originally posted by niosus on ROS Answers with karma: 386 on 2012-10-15 Post score: 4 Answer: Hi, I had the same problem and just changed the order of the arguments for the Quaternion in turtle_tf_broadcaster.cpp. Now it looks like this and works for me: transform.setRotation( tf::Quaternion(0, 0, msg->theta) ); Not sure what causes this problem. Originally posted by Tones with karma: 635 on 2012-11-08 This answer was ACCEPTED on the original site Post score: 8 Original comments Comment by thebyohazard on 2012-11-09: Good find! I was looking at the listener. I'm guessing the API changed the order of the constructor parameters from RPY to YPR. Anyway, the Quaternion constructor w/Euler angles was deprecated since the tutorial was written. You should make the quaternion another way or use the setEulerZYX function. Comment by niosus on 2012-11-16: Thanks, worked for me too. Anyway, the tutorial should be changed, I guess... Comment by 130s on 2012-12-28: Very good catch! I've updated the tutorial http://www.ros.org/wiki/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28C%2B%2B%29. Review is appreciated! 
Also, relevant announcement about Bullet migration http://goo.gl/8gJOn Comment by RafBerkvens on 2013-03-21: The code in the tutorial is still faulty. Thanks for the answer! Comment by wesleyc on 2013-08-06: Thanks for the answer! There seems to be a discrepancy between the tutorial and the class reference on how lookupTransform works too. The tutorial says it gives the transform from the 1st arg to the 2nd arg, but the class reference says the exact opposite. The class reference seems to be correct.
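The fix works because a planar turtle only needs a yaw rotation, and a pure yaw quaternion can be built directly instead of going through the deprecated Euler-angle constructor whose argument order changed. A language-neutral sketch of the underlying math in Python (in C++ tf this is roughly what tf::createQuaternionFromYaw or setRPY(0, 0, theta) gives you):

```python
import math

def quaternion_from_yaw(theta):
    """Quaternion (x, y, z, w) for a pure rotation of theta about the z axis,
    which is all a planar robot like turtlesim needs."""
    return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

qx, qy, qz, qw = quaternion_from_yaw(math.pi / 2)  # 90 degree yaw
print(qz, qw)  # both ~0.7071
```

Building the quaternion explicitly sidesteps any ambiguity about whether a constructor expects (roll, pitch, yaw) or (yaw, pitch, roll).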
{ "domain": "robotics.stackexchange", "id": 11375, "tags": "ros, ros-fuerte, tf-tutorial, transform" }
Kinect point cloud to pcd files
Question: Hi, I'm using the openni_launch stack to gather data from the kinect sensor. I'm currently able to visualize the points cloud with rviz. I need to save each incoming frame as a .pcd file for data processing, is there a easy way to do so with ROS? thanks Originally posted by Antonio on ROS Answers with karma: 53 on 2012-11-02 Post score: 2 Answer: Use pointcloud_to_pcd stated in pcl_ros. You're able to visualize the points cloud with rviz, so I assumed that you know which topic is transmitting the pointcloud you need. You can store the pointcloud to a pcd file by simply running rosrun pcl_ros pointcloud_to_pcd input:=<your target topic name> If the data in "your target topic name" is valid pointcloud data, this executable will automatically save the pointcloud to a pcd file. Originally posted by Po-Jen Lai with karma: 1371 on 2012-11-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Antonio on 2012-11-05: Thank you so much, this is exactly what I need! Comment by Po-Jen Lai on 2012-11-06: Welcome: ) Comment by Athoesen on 2014-05-20: Will this still work with ROS Hydro? Comment by Po-Jen Lai on 2014-07-02: I am not sure~You can give it a try: ) Comment by dmngu9 on 2015-04-07: I did that but the cloud is saved upside down because orientation in rviz how can i fix this
{ "domain": "robotics.stackexchange", "id": 11606, "tags": "ros, kinect, pcl, pcd, openni.launch" }
Determine hour in other countries with generated random time
Question: I hope the title is clear. The objective is as follows: There are 4 clocks in a hotel in London. One is showing the time for London. The others for New York, Tokyo and Hong Kong. Generate a random time for the clock in London; with that result, determine the hour for New York(+5), Tokyo(-9) and Hong Kong(-8). Then show the results. This is not homework, this is practice. Is this a good way? Or can I generate a random timespan? All input will be appreciated! Random generator = new Random(); DateTime london = DateTime.Now; DateTime timeInLondon = london.AddMinutes(generator.Next()); DateTime timeInNewYork = timeInLondon.AddHours(5); DateTime timeInTokyo = timeInLondon.AddHours(-9); DateTime timeInHongKong = timeInLondon.AddHours(-8); Console.WriteLine("London: " +london.ToShortTimeString() +"\n" +"New York: " +timeInNewYork.ToShortTimeString() + "\n" +"Tokyo: " +timeInTokyo.ToShortTimeString() + "\n" +"Hong Kong: " +timeInHongKong.ToShortTimeString()); Answer: DateTime london = DateTime.Now; DateTime timeInLondon = london.AddMinutes(generator.Next()); This can be rewritten as var timeInLondon = DateTime.Now.AddMinutes(generator.Next()); You can use var on the other lines as well. Removing the london variable makes it less likely you make mistakes like this: Console.WriteLine("London: " +london.ToShortTimeString()... Where you actually output DateTime.Now as it was at the start of your program, but not, as you probably wanted, the randomized timeInLondon! Now about your random time. Your seconds will obviously not be random. Apart from that, you only add a positive random number, with a maximum of around 4085 years. As such, I guess that is enough to make the resulting time more or less random as far as hours and minutes are concerned. I would opt for adding seconds though. 
You still would add anything up to 68 years to the current time, still probably ending up with a random-enough distribution over the 24-hour clock, including seconds: var timeInLondon = DateTime.Now.AddSeconds(generator.Next()); Finally, if you want to nitpick, it would be better to use time zone information instead of hard-coding magic numbers like 5 or -8. Time zones do not at all times have the same number of hours difference between them(*). You could use TimeZoneInfo.ConvertTime Method (DateTime, TimeZoneInfo, TimeZoneInfo) Keep in mind that if you opt for that, you might need to rethink randomisation of your time, since the date comes into play to determine the time in other places. (*)Not all countries start and end DST on the same dates, some countries do not even use DST, and on the southern hemisphere, DST goes "the other way around". This may all be irrelevant for the locations you picked, though.
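For comparison, the same exercise sketched in Python, using the question's fixed offsets (+5, −9, −8 for New York, Tokyo and Hong Kong — the exercise's stated convention, whatever the real-world time zones do) together with the random-seconds idea suggested above. The base date and the seed are arbitrary choices for the sketch; real code should use proper time-zone data, as noted in the answer:

```python
import random
from datetime import datetime, timedelta

# Offsets relative to London, as stated in the exercise (not real-world zones).
OFFSETS_H = {"New York": 5, "Tokyo": -9, "Hong Kong": -8}

def random_clocks(seed=None):
    """Pick a random time of day for London, then shift by the fixed offsets."""
    rng = random.Random(seed)
    london = datetime(2024, 1, 1) + timedelta(seconds=rng.randrange(24 * 3600))
    return {"London": london,
            **{city: london + timedelta(hours=h) for city, h in OFFSETS_H.items()}}

clocks = random_clocks(seed=42)
for city, t in clocks.items():
    print(f"{city}: {t:%H:%M:%S}")
```

Seeding the generator makes the output reproducible, which is handy when testing this kind of exercise.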
{ "domain": "codereview.stackexchange", "id": 27058, "tags": "c#, datetime, random" }
The charge surface distribution of a conductor with a non-centered charge
Question: A point-like charge Q is placed inside the cavity of a spherical conductor of an internal radius R1 and external radius R2, initially neutral (uncharged) (see the figure). The question is: what is the charge distribution of the induced charges Q1 and Q2? The solution in the book says: in order to determine the distribution of the charge sigma2 of Q2 on the external surface, we can observe that according to the electrostatic shielding this distribution is independent of the charge distribution sigma1 of the charge Q1 on the surface R1 and of the position of the point-like charge Q, but at the same time the distribution of the induced charge Q1 will not be uniform unless the point charge Q is placed in the center of the sphere. Can someone explain to me what this means? Answer: The charge distribution on the surfaces occurs in such a way that the potential at every point is the same (it's a conductor). Since the charge is at a different distance from the points on the inner surface, the distribution has to be non-uniform to make the potential the same at every point. Now draw a Gaussian surface outside the inner surface but inside the outer surface. The net charge inside this surface is zero. So no electric field line crosses this surface. As a result, the inner charge distribution doesn't affect the distribution on the outer surface. This is "electrostatic shielding". So the charge distribution on the outer surface is independent of the distribution on the inner surface and the presence of charge. Hence the charge distribution on the outer surface is uniform, since charge gets distributed uniformly on a conductor surface in the absence of any charge affecting it.
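The non-uniform inner distribution can be made quantitative for a spherical cavity: the induced density on the cavity wall is the standard image-charge result, sigma(theta) = -Q(R1^2 - d^2) / (4 pi R1 (R1^2 + d^2 - 2 R1 d cos theta)^{3/2}), for the charge a distance d off centre. A numerical sketch checking that this non-uniform density still integrates to -Q (so the outer surface carries +Q, distributed uniformly):

```python
import math

def inner_density(theta, Q, R1, d):
    """Induced surface charge density on the cavity wall for a point charge Q
    a distance d from the centre (standard image-charge result for a sphere)."""
    r2 = R1 * R1 + d * d - 2 * R1 * d * math.cos(theta)
    return -Q * (R1 * R1 - d * d) / (4 * math.pi * R1 * r2 ** 1.5)

def total_inner_charge(Q, R1, d, n=20000):
    """Midpoint-rule integral of sigma over the sphere: dA = 2*pi*R1^2*sin(theta) dtheta."""
    h = math.pi / n
    s = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s += inner_density(th, Q, R1, d) * 2 * math.pi * R1 ** 2 * math.sin(th) * h
    return s

Q, R1, d = 1.0, 1.0, 0.4
print(total_inner_charge(Q, R1, d))  # ~ -1.0 = -Q, as Gauss's law requires
print(inner_density(0, Q, R1, d), inner_density(math.pi, Q, R1, d))  # unequal: non-uniform
```

For d = 0 the density reduces to the uniform value -Q/(4 pi R1^2), recovering the centred case mentioned in the book.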
{ "domain": "physics.stackexchange", "id": 44876, "tags": "electrostatics, charge, gauss-law, conductors" }
Shell script to calculate the total space consumed by a user by file-size
Question: I'm working with a small bash script which is working fine, but I'm looking for a better way to formulate it. In this script I'm looking for files whose year (the 7th column) is between 2002 and 2018. Below is the working code. Script: #!/bin/bash # scriptName: Ftpcal.sh FILE="/home/pygo/Cyberark/ftplogs_3" AWK="/bin/awk" GREP="/bin/grep" USERS="`"$AWK" '$7 >= "2002" && $7 <= "2018"' $FILE | "$AWK" '{print $3}' | sort -u`" for user in $USERS; do echo "User $user " | tr -d "\n"; "$AWK" '$7 >= "2002" && $7 <= "2018"' "$FILE" | "$GREP" "$user" | "$AWK" '{ total += $4}; END { print "Total Space consumed: " total/1024/1024/1024 "GB"}'; done | column -t echo "" echo "==============================================================" "$AWK" '$7 >= "2002" && $7 <= "2018"' "$FILE" | "$AWK" '{ total += $4}; END { print "Total Space consumed by All Users: " total/1024/1024/1024 "GB"}'; echo "" Actual data Result: $ sh Ftpcal.sh User 16871 Total Space consumed: 0.0905161GB User 253758 Total Space consumed: 0.0750855GB User 34130 Total Space consumed: 3.52537GB User 36640 Total Space consumed: 0.55393GB User 8490 Total Space consumed: 3.70858GB User tx-am Total Space consumed: 0.18992GB User tx-ffv Total Space consumed: 0.183137GB User tx-ttv Total Space consumed: 17.2371GB User tx-st Total Space consumed: 0.201205GB User tx-ti Total Space consumed: 58.9704GB User tx-tts Total Space consumed: 0.0762068GB ------------ snipped output -------------- ============================================================== Total Space consumed by All Users: 255.368GB Sample data: -rw-r--r-- 1 34130 14063436 Aug 15 2002 /current/focus-del/files/from_fix.v.gz -rw-r--r-- 1 34130 14060876 Jul 12 2007 /current/focus-del/files/from1_fix.v.gz -rw-r--r-- 1 34130 58668461 Feb 23 2006 /current/focus-del/files/from_1.tar.gz -rw-r--r-- 1 34130 14069343 Aug 7 20017 /current/focus-del/files/from_tm_fix.v.gz -rw-r--r-- 1 34130 38179000 Dec 7 20016 /current/focus-del/files/from_tm.gds.gz 
-rw-r--r-- 1 34130 15157902 Nov 22 20015 /current/focus-del/files/from_for.tar.gz -rw-r--r-- 1 34130 97986560 Nov 4 20015 /current/focus-del/files/from_layout.tar Sample Result: $ sh Ftp_cal.sh User 34130 Total Space consumed: 0.0808321GB ============================================================== Total Space consumed by All Users: 0.0808321GB I'm okay with any better approach as a review process to make it more robust. Thanks. Answer: AWK="/bin/awk" It's easier and more readable if you just set your PATH to something appropriate. USERS="`"$AWK" '$7 >= "2002" && $7 <= "2018"' $FILE | "$AWK" '{print $3}' | sort -u`" Backticks should almost always be replaced by $( … ), which is faster because it does not invoke a subshell. Literal numbers should not be quoted. It happens to still do what you want in awk; in some languages it won't. A bad habit, easily avoided. There's no need to invoke awk a second time to extract the third field. Simply pair the action {print $3} with the condition ($7 >= …) that's already there. It's good form to indent the body of a for block (or any other block). echo "User $user " | tr -d "\n"; To suppress a newline on echo, use echo -n. column -t This has some awkward consequences, like tabs inside of labels ("TotalTABSpace") and unaligned numbers. printf will give much prettier results. Both bash and awk provide it. total/1024/1024/1024 Nothing wrong with this, as such, but 2**30 is useful shorthand for gigabyte. ============================================================== Bash can generate sequences like this with the idiom printf "=%.0s" {1..62}. The = is the character and 62 is the count. You're traversing the file three times and extracting the same information each time. This is going to get slow as the file grows. Awk has associative arrays: you can store a subtotal for each user, then iterate and print those subtotals at the end of the awk script, accomplishing the whole thing in one go. 
Putting it all together: /bin/awk -vusrfmt="User %-20s Total Space consumed: %11.6f GB\n" \ -vsumfmt=$( printf "=%.0s" {1..62} )"\nTotal Space consumed by All Users: %.6f GB\n" ' $7 >= 2002 && $7 <= 2018 { subtot[$3]+=$4 tot+=$4 } END { for (u in subtot) printf usrfmt, u, subtot[u] / 2**30 printf sumfmt, tot / 2**30 }'
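The one-pass associative-array accumulation in the final awk script, mirrored in Python for comparison (column numbers as in the post: user in field 3, size in field 4, year in field 7; the sample lines below use shortened, hypothetical paths):

```python
GiB = 2 ** 30  # same shorthand as the review's 2**30

def totals_by_user(lines, lo=2002, hi=2018):
    """Single pass over the listing: per-user byte totals for rows whose
    year field (7th column) falls in [lo, hi]."""
    subtot = {}
    for line in lines:
        f = line.split()
        if len(f) >= 7 and f[6].isdigit() and lo <= int(f[6]) <= hi:
            subtot[f[2]] = subtot.get(f[2], 0) + int(f[3])
    return subtot

sample = [
    "-rw-r--r-- 1 34130 14063436 Aug 15 2002 /current/a.gz",
    "-rw-r--r-- 1 34130 14060876 Jul 12 2007 /current/b.gz",
    "-rw-r--r-- 1 34130 38179000 Dec  7 20016 /current/c.gz",  # year out of range, skipped
]
per_user = totals_by_user(sample)
for user, total in per_user.items():
    print(f"User {user:<10} Total Space consumed: {total / GiB:.6f} GB")
print("Total:", sum(per_user.values()) / GiB, "GB")
```

Like the awk version, this traverses the data once and derives the grand total from the per-user subtotals, rather than re-reading the file per user.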
{ "domain": "codereview.stackexchange", "id": 34254, "tags": "bash, linux, shell" }
Two storey brick building without using wood or steel
Question: Is it possible to build a two storey building just using bricks (no wooden beams, steel beams or steel reinforcement) to suspend the floor of the second storey? I'm aware that using wooden beams is an option, however because of the environment of the build (tropical rains, humidity, termites, ants, beetles and millions of other insects), I would much prefer to use only brick - if this is possible. Ideally there will be no intermediate supports; the floor structure will span from one external wall to the other. I have an intuitive feeling that an efficient structure can be built using just bricks - but I can't quite articulate it. Has this been done before - is this even possible? Answer: Of course it can be done - masonry bridges suspend a bridge deck above a void, you could use the same technique to support the floor for your second storey. Of course, in order to avoid tension in your masonry (masonry is not good at tension) you need an arch that is close to a catenary in shape. This means that the crown (top) of your arch is going to be a fairly big height above your springing (bottom of the arch). My intuition is that your "two storey" house will be the height of a normal three storey house, with the arch taking up the height of a storey. If you want the floor of your second storey to be flat, you'll need fill over the arch. This adds weight and hence causes more thrust. With a masonry arch bridge there's a big thrust resisted at the abutments. This is resisted by the mass of soil. For your house to resist it you're going to need very thick walls. So: technically feasible but not very practical. Of course, there are also plenty of other materials out there which aren't steel or wood. Fibre reinforced polymers would work, but they're not going to be cheap!
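Rough numbers behind "the arch taking up the height of a storey": for a catenary y = a·cosh(x/a) the rise over a span L is h = a(cosh(L/2a) − 1), and the horizontal thrust at the springings is roughly H = w·a for a weight w per unit length of arch plus fill. A sketch with hypothetical dimensions and loads (none of these figures are from the answer):

```python
import math

def catenary_rise(span, a):
    """Rise of a catenary arch y = a*cosh(x/a) over the given span."""
    return a * (math.cosh(span / (2 * a)) - 1)

def solve_a_for_rise(span, rise, lo=1e-3, hi=1e3, iters=200):
    """Bisect for the catenary parameter a giving the requested rise
    (rise decreases monotonically as a grows)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if catenary_rise(span, mid) > rise:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

span, rise = 6.0, 3.0   # hypothetical: 6 m span, roughly storey-height rise
a = solve_a_for_rise(span, rise)
w = 5000.0              # hypothetical N per metre of arch plus fill
print(f"a = {a:.3f} m, horizontal thrust ~ {w * a / 1000:.1f} kN")
```

The flatter you try to make the arch (small rise for a given span), the larger a becomes and the larger the thrust the walls must resist, which is the practical objection in the answer.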
{ "domain": "engineering.stackexchange", "id": 1375, "tags": "structural-engineering, civil-engineering, structures" }
Assigning R/S configuration
Question: Question: Assign the R/S configurations to the carbons which are attached to different rings of this molecule (carbons 1 and 3 according to IUPAC numbering): My Solution I first started out by numbering the carbons. The one at the top would be 1 and the bottom chiral carbon would be 3. Next, according to CIP rules, I have included the phantom atoms by replacing the double bonds while deciding the priority order. Moving on, at carbon 1, my priorities would be: $\ce{-OH}$ Cyclopentene ring (on the left) Cyclopropene ring (on the right) Carbon 2 This gives me a counter-clockwise turn and since the lowest-priority group is on the vertical line (away from us) the configuration I assign would be S. Following a similar procedure for carbon 3, the priorities would be: Bottom 5-member ring (containing $\ce{O-O}$ bond) 3-member ring on the right 5-member ring on the left Carbon 2 Again, this gives me a counter-clockwise turn and since the lowest-priority group is away, I would assign the configuration: S. The answer key, however, says that the answer is 1R, 3S. Where am I going wrong? Answer: I have separated C1 and C3 into unique structures. For C1, it is clear that the oxygen of the hydroxyl group has the top priority while the methylene group has the lowest priority. A digraph is constructed such that we delineate two paths around each of the unsaturated rings. Thus, non-duplicate atoms C3 and C5 terminate with their respective duplicate atoms C(3) and C(5). The cyclopentene ring has locants in sphere 3 of {(7),7,H} which is a tie with {(4),4,H} in sphere 3 of the cyclopropene ring. But the locants {8,H,H} of the cyclopentene lose to {(3),3,H} of the cyclopropene ring. The priorities are O>cyclopropene>cyclopentene>methylene. C1 has the (R)-configuration. continued... A different digraph for C3 is constructed by dissecting the three rings into a total of six pathways, two for each ring. The rings are designated with red, blue and black dots. 
The red and blue rings take precedence over the black ring because each one has locants {O,O,H} at the branches while the black ring has {O,C,H}. To distinguish between blue and red, we move to the next atoms on the chain. Blue has two oxygens while red has two carbons. The priorities are blue>red>black>methylene. C3 has the (S)-configuration. Related issues: What are the CIP rules for cyclic substituents? How to check for geometrical isomerism in cyclic compounds? More on Rings and Digraphs
{ "domain": "chemistry.stackexchange", "id": 14457, "tags": "organic-chemistry, nomenclature" }
A simple priority queue in Java via linked list sorted by priority keys
Question: Now I have this very simple priority queue. add and changePriority both run in \$\mathcal{O}(n)\$ and extractMinimum in \$\Theta(1)\$: net.coderodde.util.pq.PriorityQueue: package net.coderodde.util.pq; /** * This interface defines the API for priority queue data structures. * * @author Rodion "rodde" Efremov * @version 1.6 (Oct 3, 2019) * @param <E> * @param <P> * @since 1.6 (Oct 3, 2019) */ public interface PriorityQueue<E, P extends Comparable<? super P>> { /** * Attempts to add an element to this queue only if it is not yet present. * * @return {@code true} if no duplicates are present and, thus, the element * is added to this priority queue. {@code false} is returned otherwise. */ public boolean add(E element, P priority); /** * Changes the priority of the element. * * @param element the target element. * @param priority the new priority for the element. * @return {@code true} if the priority of the target element changed. * {@code false} otherwise. */ public boolean changePriority(E element, P priority); /** * Removes and returns the element with the highest element. * * @return the highest priority element. * @throws {@link java.lang.IllegalStateException} if the queue is empty. */ public E extractMinimum(); /** * Checks wether the parameter element is in this queue. * * @return {@code true} if the input parameter is in this queue, * {@code false} otherwise. */ public boolean containsElement(E element); /** * The number of elements currently in the queue. */ public int size(); /** * Checks whether this queue is empty. * * @return {@code true} only if this queue contains no elements. */ public boolean isEmpty(); /** * Clears this queue removing all the elements from the queue. 
*/ public void clear(); } net.coderodde.util.pq.impl.SimplePriorityQueue: package net.coderodde.util.pq.impl; import java.util.ConcurrentModificationException; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.NoSuchElementException; import net.coderodde.util.pq.PriorityQueue; /** * This class implements a simple priority queue.The elements are ordered in a * linked list, the head node of which contains the highest priority element, * and the tail node contains the lowest priority element. * * @author Rodion "rodde" Efremov * @version 1.6 (Oct 3, 2019) * @param <E> the element type. * @param <P> the priority key type. * @since 1.6 (Oct 3, 2019) */ public final class SimplePriorityQueue<E, P extends Comparable<? super P>> implements PriorityQueue<E, P>, Iterable<E> { @Override public Iterator<E> iterator() { return new SimplePriorityQueueIterator(); } /** * This static inner class holds an element along with its priority. * * @param <E> the element type. * @param <P> the priority key type. */ private static final class Node<E, P> { E element; P priority; Node<E, P> next; Node<E, P> prev; Node(E element, P priority) { this.element = element; this.priority = priority; } E getElement() { return element; } P getPriority() { return priority; } void setPriority(P priority) { this.priority = priority; } Node<E, P> getNextNode() { return next; } Node<E, P> getPreviousNode() { return prev; } void setNextNode(Node<E, P> next) { this.next = next; } void setPreviousNode(Node<E, P> prev) { this.prev = prev; } } /** * Maps each element to the linked list node holding it. 
*/ private final Map<E, Node<E, P>> map = new HashMap<>(); private Node<E, P> headNode = null; private Node<E, P> tailNode = null; private int size = 0; private int modCount = 0; /** * {@inheritDoc} */ @Override public boolean containsElement(E element) { return map.containsKey(element); } /** * {@inheritDoc} */ @Override public boolean add(E element, P priority) { if (map.containsKey(element)) { // Do not add the duplicates: return false; } Node<E, P> newNode = new Node<>(element, priority); if (headNode == null) { headNode = newNode; tailNode = newNode; size = 1; modCount++; map.put(element, newNode); return true; } insertNode(newNode); map.put(element, newNode); size++; modCount++; return true; } /** * {@inheritDoc} */ @Override public boolean changePriority(E element, P priority) { if (!map.containsKey(element)) { return false; } Node<E, P> node = map.get(element); node.setPriority(priority); unlinkNode(node); insertNode(node); return true; } /** * {@inheritDoc} */ @Override public E extractMinimum() { if (size == 0) { throw new NoSuchElementException("Extracting from an empty queue."); } Node<E, P> topPriorityNode = headNode; headNode = headNode.getNextNode(); if (headNode == null) { tailNode = null; size = 0; } else { headNode.setPreviousNode(null); size--; } map.remove(topPriorityNode.getElement()); modCount++; return topPriorityNode.getElement(); } /** * {@inheritDoc} */ @Override public int size() { return size; } /** * {@inheritDoc} */ @Override public boolean isEmpty() { return size == 0; } /** * {@inheritDoc} */ @Override public void clear() { size = 0; modCount++; map.clear(); } /** * Inserts the given node to its correct location. 
*/ private void insertNode(Node<E, P> node) { Node<E, P> currentNode = headNode; // Comparator operator <= instead of < guarantees stability: while (currentNode != null && currentNode.priority.compareTo(node.getPriority()) <= 0) { currentNode = currentNode.getNextNode(); } if (currentNode == null) { tailNode.setNextNode(node); node.setPreviousNode(tailNode); tailNode = node; } else if (currentNode.getPreviousNode() == null) { // The priority of the new element is smaller than the minimum // priority throughout the queue: headNode.setPreviousNode(node); node.setNextNode(headNode); headNode = node; } else { node.setNextNode(currentNode); node.setPreviousNode(currentNode.getPreviousNode()); currentNode.setPreviousNode(node); node.getPreviousNode().setNextNode(node); } } /** * Unlinks the parameter node from the linked list. */ private void unlinkNode(Node<E, P> node) { if (node.getPreviousNode() != null) { node.getPreviousNode().setNextNode(node.getNextNode()); } else { headNode = node.getNextNode(); } if (node.getNextNode() != null) { node.getNextNode().setPreviousNode(node.getPreviousNode()); } else { tailNode = node.getPreviousNode(); } } /** * This inner class implements an iterator over the priority queue. 
*/ private final class SimplePriorityQueueIterator implements Iterator<E> { private Node<E, P> node = headNode; private final int expectedModCount = SimplePriorityQueue.this.modCount; @Override public boolean hasNext() { checkComodification(); return node != null; } @Override public E next() { checkComodification(); if (!hasNext()) { throw new NoSuchElementException(); } E returnValue = node.getElement(); node = node.getNextNode(); return returnValue; } private void checkComodification() { if (expectedModCount != SimplePriorityQueue.this.modCount) { throw new ConcurrentModificationException( "Expected modification count: " + expectedModCount + ", " + "actual modification count: " + SimplePriorityQueue.this.modCount); } } } } net.coderodde.util.pq.impl.SimplePriorityQueueTest: package net.coderodde.util.pq.impl; import java.util.Iterator; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; public class SimplePriorityQueueTest { private SimplePriorityQueue<Integer, Integer> queue; @Before public void setUp() { queue = new SimplePriorityQueue<>(); } @Test public void testIterator() { queue.add(1, 1); queue.add(4, 4); queue.add(3, 3); queue.add(2, 2); Iterator<Integer> iter = queue.iterator(); assertTrue(iter.hasNext()); assertEquals((Integer) 1, iter.next()); assertTrue(iter.hasNext()); assertEquals((Integer) 2, iter.next()); assertTrue(iter.hasNext()); assertEquals((Integer) 3, iter.next()); assertTrue(iter.hasNext()); assertEquals((Integer) 4, iter.next()); // Arrived to the end of the queue: assertFalse(iter.hasNext()); } @Test public void testContainsElement() { assertFalse(queue.containsElement(100)); assertFalse(queue.containsElement(90)); assertFalse(queue.containsElement(80)); queue.add(100, 100); queue.add(80, 80); queue.add(90, 90); assertTrue(queue.containsElement(100)); assertTrue(queue.containsElement(90)); assertTrue(queue.containsElement(80)); assertFalse(queue.containsElement(70)); assertFalse(queue.containsElement(60)); } 
@Test public void testAdd() { assertFalse(queue.containsElement(3)); queue.add(3, 3); assertTrue(queue.containsElement(3)); assertFalse(queue.containsElement(2)); queue.add(2, 2); assertTrue(queue.containsElement(2)); assertFalse(queue.containsElement(4)); queue.add(4, 4); assertTrue(queue.containsElement(4)); } @Test public void testChangePriority() { for (int i = 0; i < 10; i++) { queue.add(i, i); } queue.changePriority(5, -1); assertEquals((Integer) 5, queue.extractMinimum()); assertEquals((Integer) 0, queue.extractMinimum()); queue.changePriority(1, 100); assertEquals((Integer) 2, queue.extractMinimum()); assertEquals((Integer) 3, queue.extractMinimum()); assertEquals((Integer) 4, queue.extractMinimum()); assertEquals((Integer) 6, queue.extractMinimum()); assertEquals((Integer) 7, queue.extractMinimum()); assertEquals((Integer) 8, queue.extractMinimum()); assertEquals((Integer) 9, queue.extractMinimum()); assertEquals((Integer) 1, queue.extractMinimum()); } @Test public void testExtractMinimum() { queue.add(5, 5); queue.add(3, 3); queue.add(4, 4); queue.add(7, 7); queue.add(6, 6); for (int i = 3; i <= 7; i++) { assertEquals((Integer) i, queue.extractMinimum()); } // Is the queue stable? queue.add(2, 1); queue.add(3, 1); queue.add(1, 1); assertEquals((Integer) 2, queue.extractMinimum()); assertEquals((Integer) 3, queue.extractMinimum()); assertEquals((Integer) 1, queue.extractMinimum()); } @Test public void testSize() { for (int i = 0; i < 10; i++) { assertEquals(i, queue.size()); queue.add(i, i); assertEquals(i + 1, queue.size()); } } @Test public void testIsEmpty() { assertTrue(queue.isEmpty()); queue.add(2, 2); assertFalse(queue.isEmpty()); queue.add(1, 1); assertFalse(queue.isEmpty()); } @Test public void testClear() { queue.clear(); // No-op. 
assertTrue(queue.isEmpty()); for (int i = 0; i < 5; i++) { queue.add(i, i); assertFalse(queue.isEmpty()); } queue.clear(); assertTrue(queue.isEmpty()); } } Critique request I would like to hear comments on test coverage, coding style, maintainability and readability, to name a few. Thank you in advance. Answer: Small issue if (!map.containsKey(element)) { return false; } Node<E, P> node = map.get(element); You can simply write Node<E, P> node = map.get(element); and check if node is null. This increases performance and atomicity. Design issue I think you could improve readability if you split the implementation to a Doubly-Linked-List ("DLL") inside the priority queue. This way, you could have the following code encapsulated in your DLL // Comparator operator <= instead of < guarantees stability: while (currentNode != null && currentNode.priority.compareTo(node.getPriority()) <= 0) { currentNode = currentNode.getNextNode(); } You could gain a better separation of concerns. Tests could also improve this way. Small implementation issue As for your implementation of the DLL, perhaps you could consider using a sentinel rather than check for null for the head/tail operations.
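A sentinel makes the last suggestion concrete: close the list into a ring through one dummy node and the empty/head/tail special cases in insertNode and unlinkNode disappear. A minimal sketch of the idea (Python for brevity; the names SortedDLL and pop_min are illustrative, not from the post):

```python
class _Node:
    __slots__ = ("elem", "prio", "prev", "next")

    def __init__(self, elem=None, prio=None):
        self.elem, self.prio = elem, prio
        self.prev = self.next = None

class SortedDLL:
    """Priority-sorted doubly linked list built around one circular sentinel node."""

    def __init__(self):
        self._s = _Node()                    # sentinel: the list is a ring, never empty
        self._s.prev = self._s.next = self._s

    def insert(self, elem, prio):
        cur = self._s.next
        # '<=' keeps the insert stable: equal priorities stay in arrival order.
        while cur is not self._s and cur.prio <= prio:
            cur = cur.next
        node = _Node(elem, prio)             # splice before cur: no None checks needed
        node.prev, node.next = cur.prev, cur
        cur.prev.next = node
        cur.prev = node

    def pop_min(self):
        node = self._s.next
        if node is self._s:
            raise IndexError("pop from empty queue")
        node.prev.next, node.next.prev = node.next, node.prev
        return node.elem

    def __iter__(self):
        cur = self._s.next
        while cur is not self._s:
            yield cur.elem
            cur = cur.next
```

Ported back to the Java class, headNode, tailNode, and the four null branches collapse into unconditional pointer updates around the sentinel.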
{ "domain": "codereview.stackexchange", "id": 36199, "tags": "java, linked-list, unit-testing, iterator, priority-queue" }
Contribution to angular momentum $ L_z$ - due to rotation of probability fluid?
Question: I'm doing a course on QM and this concept is entirely new to me: "The eigenvalue $m\hbar$ of $L_z$ can be understood as the result from the rotational motion of probability fluid around the z-axis. " Then I thought, suppose we have a volume element $dV$ at some point $r$ away, and I'm trying to find the contribution to $L_z$. I think this is related to $J_{\phi} = \frac{m\hbar}{\mu r \sin\theta}|\psi|^2 $. Is the contribution $$J_{\phi} \, dA = J_{\phi} \, r^2 \sin\theta \, d\theta \, d\phi= \frac{m\hbar}{\mu} r \, |\psi|^2 \, d\theta \, d\phi $$. Am I right? Also, how do I obtain $m\hbar$? Answer: As stated in your reference, the probability current is given by $$\vec J[\varphi(\vec r)] = \frac{\hbar}{2\mu i}[\varphi^*\nabla\varphi - \varphi\nabla\varphi^*],$$ where $\mu$ is the mass of the particle. In the above, it is defined through its action on any arbitrary dummy wave function $\varphi$. Specifically for the case of a wave function which separates the radial from the angular part, $\varphi_{nlm}(\vec r)=R_l(r)Y_{lm}(\theta,\phi)$, which are the solutions to central potential problems, e.g. the hydrogen atom, one can evaluate $\vec J$ and find $$\vec J[\varphi_{nlm}(\vec r)] = \frac{\hbar m}{\mu r\sin\theta}|\varphi_{nlm}(\vec r)|^2\vec e_\phi,$$ a current which only has an azimuthal component. This can be seen as $Y_{lm}(\theta,\phi)=P_{lm}(\cos\theta)e^{im\phi}$: The $i$ in the exponent has a different sign for $\varphi$ and $\varphi^*$, so the $\phi$ derivative is the only one which doesn't cancel. Now, the exercise you were given aims at a semiclassical derivation of the result. The $L_z$ component of orbital angular momentum can be written as $L_z=r_\perp p_\phi$ where $r_\perp=r\sin\theta$ is the distance from the $z$-axis and $p_\phi$ is the azimuthal momentum. This holds if the motion is such that the electron cloud is rotating around the $z$-axis. Momentum is mass times velocity.
In general, a current can be written as $\vec J=\rho\vec v$ where $\rho$ is the density. Here, $\rho=|\varphi|^2$, the probability density. Thus, we may write the velocity as $\vec v=\vec J/|\varphi|^2$, so $p_\phi=\mu v_\phi=\mu J_\phi/|\varphi|^2$. Inserting this, we get $$L_z = r\sin\theta\mu J_\phi/|\varphi|^2\qquad\Rightarrow\qquad J_\phi = \frac{L_z}{\mu r\sin\theta}|\varphi|^2.$$ However, note that this only holds if we assume that the motion is around the $z$-axis. Above, we saw that this indeed was the case for the atomic (central potential) wave function $\varphi_{nlm}$. Comparing the results, we see that $L_z$ is replaced by $\hbar m$, so that must be the eigenvalue of the $L_z$ operator in the state $\varphi_{nlm}$. Please note that all of this is quite hand-wavy. It is no doubt what the exercise wants you to do but please think of it as a motivation for why it might be a good idea to define the probability current the way it is defined, or why the angular momentum $z$-component is quantized in integer multiples of $\hbar$. Also, thinking of the electron cloud as a rotating fluid is not very helpful, IMO. It's yet another semiclassical model which captures some aspect of quantum mechanics which the particle-orbiting-a-nucleus-(just-like-a-solar-system!)-model can't (absence of trajectories), but it is bound to fail in describing other aspects. Finally, let me address some of the confusion you expressed in the comments of the original post: I think claiming that $J_\phi$ and $L_z$ are the same operator is a bit overdoing it. In fact, I wouldn't call the probability current an operator, because in quantum mechanics, "operator" usually means "linear operator" and the probability current is not linear.
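The quoted $J_\phi$ formula can be spot-checked numerically for a separable $\psi = R(r)P(\theta)e^{im\phi}$ with real $R$ and $P$, using a finite difference for the $\phi$-derivative (the sample $R$, $P$, and evaluation point below are arbitrary choices, in units $\hbar=\mu=1$):

```python
import cmath
import math

HBAR, MU, M = 1.0, 1.0, 2            # units hbar = mu = 1; magnetic quantum number m = 2

R = lambda r: math.exp(-r)           # arbitrary real radial part
P = lambda th: math.cos(th)          # arbitrary real polar part
psi = lambda r, th, ph: R(r) * P(th) * cmath.exp(1j * M * ph)

def J_phi(r, th, ph, h=1e-6):
    """Azimuthal probability current (hbar / 2 mu i)(psi* grad_phi psi - c.c.)."""
    # azimuthal gradient component: (1 / (r sin theta)) d/dphi, by central difference
    dpsi = (psi(r, th, ph + h) - psi(r, th, ph - h)) / (2 * h) / (r * math.sin(th))
    p = psi(r, th, ph)
    return (HBAR / (2 * MU * 1j)) * (p.conjugate() * dpsi - p * dpsi.conjugate())

r0, th0, ph0 = 1.3, 0.8, 0.4
val = J_phi(r0, th0, ph0)
expected = HBAR * M / (MU * r0 * math.sin(th0)) * abs(psi(r0, th0, ph0)) ** 2
```

The current comes out real and equal to $\hbar m|\psi|^2/(\mu r\sin\theta)$ at the sample point, as the answer states.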
{ "domain": "physics.stackexchange", "id": 14480, "tags": "quantum-mechanics, angular-momentum" }
Double-slit experiment with alternating on-off switch
Question: Suppose we perform a double-slit experiment with a detector placed at a position of minimum intensity (maximum destructive interference), off-center where the path lengths differ by half a wavelength. The light source is alternately turned on and off (or blocked and unblocked near the source) and the intensity over time is recorded. I interpret the uncertainty principle to mean that there will be a peak in intensity at the times when the switch is flipped (whether on-to-off or off-to-on). i.e., it will look something like this (in ASCII art): __________'-----'__________'-----'__________ Is this correct? I have had trouble convincingly explaining my reasons for thinking so. What will be the measured intensity over time and why? Answer: Dear Dan, first of all, you shouldn't use the term "uncertainty principle" if you're talking about "light sources" and light may be explained by ordinary - classical (non-quantum) - electrodynamics where no uncertainty principle applies. This is just an exercise in the propagation of waves. Second, when you flip the switch, there may be temporary variations of the intensity, but they're not necessary, either. For example, you may find a minimum such that the number of wave peaks on the two trajectories (coming from the two slits) differs by 13.5 - one arm is 13.5 wavelengths longer than the other one. It will mean that the destructive interference only occurs when the beams from both slits are synchronized, and there will always be a period lasting about 13 periods after each flip of the switch when only one beam is coming to the detector. That will indeed eliminate the destructive interference, and give you the "apostrophes" in your ASCII art. The precise shape of the graph depends on the character of the switches, geometry of the experiment, and other things.
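The transient the answer describes is purely classical and easy to model: sum two copies of a switched sinusoid, one delayed by the 13.5-wavelength path difference, and square. A sketch (the on/off durations are arbitrary choices; time is measured in periods):

```python
import math

DT = 13.5        # path-length difference between the two slits, in wavelengths
ON = OFF = 40.0  # source on/off durations, in periods (arbitrary choice)

def source(t):
    """Unit sinusoid, periodically switched on and off; t is measured in periods."""
    if t < 0 or (t % (ON + OFF)) >= ON:
        return 0.0
    return math.sin(2 * math.pi * t)

def intensity(t):
    """Instantaneous intensity at the dark fringe: direct beam plus beam delayed by DT."""
    a = source(t) + source(t - DT)
    return a * a
```

During steady overlap the two arrivals cancel exactly; for about 13.5 periods after each flip only one beam reaches the detector, producing the spikes in the ASCII art.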
{ "domain": "physics.stackexchange", "id": 688, "tags": "electromagnetic-radiation, double-slit-experiment, interference" }
For these gauge transformations in electromagnetism, $\phi\to \phi-\partial_t \lambda$ and $\vec A\to \vec A+\nabla\lambda$, why do the signs differ?
Question: I was looking at this question on Mathematics S.E, as I would like to know the origin of the signs in the gauge transformations of the scalar and vector potentials components, $\phi$ and $\vec A$, of the four-vector potential $A^\mu=\left(\phi, \vec A\right)$: $$\phi\to \phi-\frac{\partial \lambda}{\partial t},\quad \vec A\to \vec A+\nabla\lambda\tag{A}$$ where $\lambda$ is a gauge parameter; an arbitrary function of spacetime. For the purposes of this question I will follow all the same notation and conventions as the linked question, including the Minkowski sign convention, $\eta= \mathrm{diag}(1,-1,-1,-1)$. I know that this question may seem better placed on Mathematics S.E, but I would like to see a physicists take on this. Moreover, I'm not equipped to understand the answers given by the mathematicians. The OP in the linked question wanted to know why the transformations in $(\mathrm{A})$ can be written as one compact equation, $$A_\mu\to A_\mu+\partial_\mu\lambda\tag{B}$$ The answer given is way beyond my comprehension, the user that gave the answer was talking about "musical isomorphisms", not something I can relate to. So instead of starting with the transformations in $(\mathrm{A})$ and trying to understand how they combine compactly into $(\mathrm{B})$, I will try the reverse approach. Given $A^\mu=\left(\phi, \vec A\right)$ then the covariant version is $A_\mu=\left(\phi, -\vec A\right)$, (this was pointed out in an answer to another question asked by the same OP here). 
Now, writing out the components $$A_\mu\to\left(A_0+\partial_0 \lambda, -A_1-\partial_1 \lambda, -A_2-\partial_2 \lambda, -A_3-\partial_3 \lambda\right)$$ $$=\left(\phi+\frac{\partial \lambda}{\partial t}, -\vec A - \nabla \lambda\right)\tag{C}$$ So according to $(\mathrm{C})$ the gauge transformation components of $A_\mu$ are $$\phi\to \phi+\frac{\partial \lambda}{\partial t},\quad \vec A\to -\vec A-\nabla\lambda\tag{D}$$ But these signs in $(\mathrm{D})$ are totally different from those of $(\mathrm{A})$. Does anyone know how the scalar and vector gauge transformations in $(\mathrm{A})$ can be deduced from $(\mathrm{B})$? Answer: We already know the 4-position is $x^\mu = (t, \vec{x})$ and the 4-gradient is $$\partial_\mu = \frac{\partial}{\partial x^\mu} = \left(\frac{\partial}{\partial t},\vec{\nabla} \right) \tag{1a}$$ or equivalently $$\partial^\mu = \left(\frac{\partial}{\partial t},-\vec{\nabla} \right) \tag{1b}$$ Starting with the relativistic gauge transformation $$A_\mu \to A_\mu + \partial_\mu \lambda \tag{B}$$ is conceptually problematic, because we do not know yet what is the correct 4-potential $A_\mu$. It is not obvious if the correct choice must be $A_\mu=(\phi,\vec{A})$ or $A_\mu=(\phi,-\vec{A})$. So we need to do it the other way round and start with the classical gauge transformation $$\phi\to \phi-\frac{\partial \lambda}{\partial t}, \quad \vec A\to \vec A+\vec{\nabla}\lambda\tag{A}$$ We can write (A) as $$(\phi,\vec{A}) \to (\phi,\vec{A})- \left(\frac{\partial \lambda}{\partial t},-\vec{\nabla}\lambda \right)$$ Here we see that because of the $-\vec{\nabla}$ we can use $\partial^\mu$ from (1b), but not $\partial_\mu$ from (1a). We get $$(\phi,\vec{A}) \to (\phi,\vec{A})-\partial^\mu\lambda$$ To make this a valid 4-vector equation (with all upper $\mu$), we need to define $A^\mu=(\phi,\vec{A})$ and get $$A^\mu \to A^\mu-\partial^\mu\lambda$$ or equivalently with $A_\mu=(\phi,-\vec{A})$ $$A_\mu \to A_\mu-\partial_\mu\lambda$$
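Whichever sign convention one adopts, the check that matters physically is that $\vec E = -\nabla\phi - \partial_t\vec A$ and $\vec B = \nabla\times\vec A$ are unchanged under (A). A quick numerical verification with finite differences (the sample potentials and the gauge function $\lambda = xt + \sin y$ are arbitrary choices):

```python
import math

H = 1e-5

def d(f, args, i):
    """Central finite difference of f(t, x, y, z) in argument i at the point args."""
    a1, a2 = list(args), list(args)
    a1[i] += H
    a2[i] -= H
    return (f(*a1) - f(*a2)) / (2 * H)

# Arbitrary sample potentials, arguments ordered (t, x, y, z):
phi = lambda t, x, y, z: math.exp(-x * x) * t + y * z
A = [lambda t, x, y, z: x * y,
     lambda t, x, y, z: t * z,
     lambda t, x, y, z: math.cos(y)]

# Gauge function lambda = x*t + sin(y), so d(lambda)/dt = x and grad(lambda) = (t, cos y, 0):
phi2 = lambda t, x, y, z: phi(t, x, y, z) - x
A2 = [lambda t, x, y, z: A[0](t, x, y, z) + t,
     lambda t, x, y, z: A[1](t, x, y, z) + math.cos(y),
     lambda t, x, y, z: A[2](t, x, y, z)]

def E(phi_, A_, p):
    """E_i = -d(phi)/dx_i - d(A_i)/dt."""
    return [-d(phi_, p, i + 1) - d(A_[i], p, 0) for i in range(3)]

def B(A_, p):
    """B = curl A."""
    return [d(A_[2], p, 2) - d(A_[1], p, 3),
            d(A_[0], p, 3) - d(A_[2], p, 1),
            d(A_[1], p, 1) - d(A_[0], p, 2)]

pt = (0.7, 0.3, -1.2, 0.5)
```

Both field vectors agree at the sample point before and after the transformation, so either index placement in the compact form describes the same physics.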
{ "domain": "physics.stackexchange", "id": 99945, "tags": "electromagnetism, metric-tensor, potential, gauge-theory, gauge-invariance" }
Should scattering states in 1D double delta function potential not lead to a steady-state problem?
Question: I have been trying for some time now to understand the double $\delta$-function potential, as solved here. One question in particular is why we don't consider scattering "back" from the second potential. Thus: assume we have two $\delta$-function potentials, and a particle coming from the left, arriving at the first potential. Then the particle is either reflected or transmitted, with some probability. But surely the same holds true at the other $\delta$-potential? I.e. the particle could meet the second potential, and get reflected back towards the first potential? And at the first potential it might again get either reflected (going back towards the second potential) or transmitted (going out from the first potential, adding to the first transmission). And surely this process could continue indefinitely (albeit with smaller and smaller probabilities). Should this not lead to a steady state problem, where one needs to take into account the probability that the particle "bounces around" a certain number of times before being either transmitted or reflected? If so, then why do we define the transmission/reflection coefficients simply as the ratio of the amplitudes of the initial reflection/transmission? Or is this possibility somehow contained in these coefficients? I have looked around and found e.g. this Stack Exchange question, which mentions the $S$-matrix for a single $\delta$-function potential. Could such an approach be used here? Answer: This possibility is already contained in the coefficients. When Griffiths is solving the coefficients, he is going for a general solution of the Schrödinger equation, which means that the solution will be an eigenstate of the equation and not evolve with time. He is already going for a steady-state solution.
Working with the arbitrary coefficients takes into account the amplitude of the wave incident on the delta from a given side whatever it is, no matter what potential is on that side (whether a single delta, or a Gaussian mound, or whatever). You could separately consider a spatially localized & time-dependent wave-packet incident on the two deltas, in which case if you mapped out the time-evolution carefully I'd indeed expect to see some sloshing back and forth inside the deltas as smaller and smaller parts of the wave-function bounce back and forward inside. Roughly, your interpretation is correct, although I might gripe about your language which implies that when the wavefunction meets a delta it either reflects or transmits, as really the wavefunction coherently does both at the same time. Also, yes S-matrices can be applied to one-dimensional scattering problems as well. Probably not what you're looking for at the moment though.
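"Already contained in the coefficients" can be made quantitative for the double delta: the exact steady-state transmission amplitude equals the geometric sum over internal bounces, $t_1^2 e^{ikL}\sum_n (r_1^2 e^{2ikL})^n$, built from single-delta amplitudes $t_1, r_1$. A numerical sketch in units $\hbar^2/2m = 1$ (so the derivative jump at a delta of strength $\alpha$ is $\alpha\psi$, and $u=\alpha/2k$), with arbitrary $k$, strength, and spacing:

```python
import cmath

k, u, L = 1.3, 0.7, 2.1      # wavenumber, strength u = alpha/(2k), delta spacing (arbitrary)
iu = 1j * u
ph = cmath.exp(2j * k * L)   # round-trip phase between the deltas

# Single-delta amplitudes for a wave incident from one side:
t1 = 1 / (1 + iu)
r1 = -iu / (1 + iu)

# Exact steady-state solution for two deltas: match psi and the psi' jump at x=0 and x=L.
r = (-iu * ph * (1 - iu) / (1 + iu) - iu) / ((1 + iu) + u * u * ph / (1 + iu))
F = (1 - iu) - iu * r        # right-mover between the deltas
G = iu + (1 + iu) * r        # left-mover between the deltas
T = F + G / ph               # overall transmission amplitude

# Geometric sum over all internal bounces (Fabry-Perot style):
T_bounces = t1 ** 2 / (1 - r1 ** 2 * ph)
```

The two amplitudes coincide to machine precision: the bounce picture and the stationary solution are the same thing, summed.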
{ "domain": "physics.stackexchange", "id": 38928, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, potential, scattering" }
Tic Tac Toe implementation in Objective-C
Question: First off, here's the code: main.m #import <Foundation/Foundation.h> #import "PSBoard.h" #import "PSPlayer.h" #import "PSInputHandler.h" int main(int argc, const char * argv[]) { @autoreleasepool { NSLog(@"Enter Player 1 Name:"); NSString *playerOneName = [PSInputHandler getString]; NSLog(@"Enter Player 2 Name:"); NSString *playerTwoName = [PSInputHandler getString]; NSLog(@"How many rows and columns will you play with?"); NSUInteger numberOfRowsAndColumns = [PSInputHandler getInteger]; PSPlayer *playerOne = [[PSPlayer alloc] initWithSymbol:PSBoardSymbolX name:playerOneName]; PSPlayer *playerTwo = [[PSPlayer alloc] initWithSymbol:PSBoardSymbolO name:playerTwoName]; PSBoard *board = [[PSBoard alloc] initWithRows:numberOfRowsAndColumns columns:numberOfRowsAndColumns players:@[playerOne, playerTwo]]; do { PSPlayer *currentPlayer = [board playerUp]; BOOL validInputEntered = NO; //Loop until valid input is entered while(!validInputEntered) { //Get input coordinates NSLog(@"%@, enter a row (1-%lu).", currentPlayer.name, (unsigned long)numberOfRowsAndColumns); NSUInteger row = [PSInputHandler getInteger]; NSLog(@"Now enter a column (1-%lu).", (unsigned long)numberOfRowsAndColumns); NSUInteger column = [PSInputHandler getInteger]; //Verify that nothing is already placed there PSBoardSymbol symbolOfPlayerAtCoordinates = [board playerAtRow:row-1 column:column-1].symbol; if((symbolOfPlayerAtCoordinates != PSBoardSymbolX && symbolOfPlayerAtCoordinates != PSBoardSymbolO) && row > 0 && row <= numberOfRowsAndColumns && column > 0 && column <= numberOfRowsAndColumns) { [board setPlayer:currentPlayer atRow:(row-1) column:(column-1)]; validInputEntered = YES; } } //Show the board [board display]; } while(!board.winner); NSLog(@"Congrats %@! 
You won.", [board winner].name); } return 0; } PSBoard.h #import <Foundation/Foundation.h> @class PSPlayer; @interface PSBoard : NSObject @property (nonatomic) NSUInteger rows; @property (nonatomic) NSUInteger columns; @property (nonatomic, strong) PSPlayer *winner; -(instancetype)initWithRows:(NSUInteger)rows columns:(NSUInteger)columns players:(NSArray *)players; -(PSPlayer *)playerAtRow:(NSUInteger)row column:(NSUInteger)column; -(void)setPlayer:(PSPlayer *)player atRow:(NSUInteger)row column:(NSUInteger)column; -(void)display; -(PSPlayer *)playerUp; @end PSBoard.m #import "PSBoard.h" #import "PSPlayer.h" @interface PSBoard () @property (nonatomic, strong) NSMutableArray *internalBoardRepresentation; @property (nonatomic, strong) NSArray *players; @property (nonatomic, strong) PSPlayer *oldPlayerUp; @end @implementation PSBoard -(instancetype)initWithRows:(NSUInteger)rows columns:(NSUInteger)columns players:(NSArray *)players { if(self = [super init]) { self.rows = rows; self.columns = columns; self.players = players; self.internalBoardRepresentation = [[NSMutableArray alloc] initWithCapacity:rows]; PSPlayer *null = [[PSPlayer alloc] initWithSymbol:PSBoardSymbolNone name:nil]; for(NSUInteger row = 0; row < rows; row++) { NSMutableArray *currentColumn = [NSMutableArray array]; for(NSUInteger column = 0; column < columns; column++) { [currentColumn addObject:null]; } [self.internalBoardRepresentation addObject:currentColumn]; } self.oldPlayerUp = players[0]; } return self; } -(PSPlayer *)playerAtRow:(NSUInteger)row column:(NSUInteger)column { return self.internalBoardRepresentation[row][column]; } -(void)setPlayer:(PSPlayer *)player atRow:(NSUInteger)row column:(NSUInteger)column { self.internalBoardRepresentation[row][column] = player; [self checkForWinner]; } -(void)checkForWinner { NSUInteger numberOfPiecesInARowToWin = MAX(self.rows, self.columns); //Check horizontal lines for(NSUInteger row = 0; row < self.rows; row++) { PSPlayer *playerInFirstColumn = [self 
playerAtRow:row column:0]; NSUInteger playerPiecesInRow = 0; for(NSUInteger column = 0; column < self.columns; column++) { if([[self playerAtRow:row column:column] isEqualTo:playerInFirstColumn]) { playerPiecesInRow++; } } if(playerPiecesInRow >= numberOfPiecesInARowToWin && playerInFirstColumn.symbol != PSBoardSymbolNone) { self.winner = playerInFirstColumn; return; } } //Check vertical lines for(NSUInteger column = 0; column < self.columns; column++) { PSPlayer *playerInFirstRow = [self playerAtRow:0 column:column]; NSUInteger playerPiecesInColumn = 0; for(NSUInteger row = 0; row < self.rows; row++) { if([[self playerAtRow:row column:column] isEqualTo:playerInFirstRow]) { playerPiecesInColumn++; } } if(playerPiecesInColumn >= numberOfPiecesInARowToWin && playerInFirstRow.symbol != PSBoardSymbolNone) { self.winner = playerInFirstRow; return; } } //Check top left to bottom right diagonal PSPlayer *playerInFirstSlotOfLeftDiagonal = [self playerAtRow:0 column:0]; NSUInteger playerPiecesInLeftDiagonal = 0; for(NSUInteger row = 0, column = 0; row < self.rows; column++, row++) { if([[self playerAtRow:row column:column] isEqualTo:playerInFirstSlotOfLeftDiagonal]) { playerPiecesInLeftDiagonal++; } } if(playerPiecesInLeftDiagonal >= numberOfPiecesInARowToWin && playerInFirstSlotOfLeftDiagonal.symbol != PSBoardSymbolNone) { self.winner = playerInFirstSlotOfLeftDiagonal; return; } //Check bottom left to top right diagonal PSPlayer *playerInFirstSlotOfRightDiagonal = [self playerAtRow:self.rows-1 column:0]; NSUInteger playerPiecesInRightDiagonal = 0; for(NSInteger row = self.rows-1, column = 0; row >= 0; row--, column++) { if([[self playerAtRow:row column:column] isEqualTo:playerInFirstSlotOfRightDiagonal]) { playerPiecesInRightDiagonal++; } } if(playerPiecesInRightDiagonal >= numberOfPiecesInARowToWin && playerInFirstSlotOfRightDiagonal.symbol != PSBoardSymbolNone) { self.winner = playerInFirstSlotOfRightDiagonal; return; } } -(void)display { NSMutableString *displayString = 
[NSMutableString stringWithFormat:@"\n"]; for(NSUInteger row = 0; row < self.rows; row++) { NSMutableString *rowDisplayString = [[NSMutableString alloc] init]; NSString *innerFillerString = (row == self.rows-1) ? @" " : @"_"; for(NSUInteger column = 0; column < self.columns; column++) { NSString *columnSeparator = (column == self.columns-1) ? @" " : @"|"; NSString *playerSymbol = ([self playerAtRow:row column:column].symbolStringRepresentation); if(playerSymbol.length == 0) { playerSymbol = innerFillerString; } [rowDisplayString appendString:[NSString stringWithFormat:@"%@%@%@%@", innerFillerString, playerSymbol, innerFillerString, columnSeparator]]; } [displayString appendString:[NSString stringWithFormat:@"%@\n", rowDisplayString]]; [rowDisplayString setString:@""]; } NSLog(@"%@", displayString); } -(PSPlayer *)playerUp { PSPlayer *nextPlayerUp = self.players[1-([self.players indexOfObjectIdenticalTo:self.oldPlayerUp])]; PSPlayer *previousPlayerUp = self.oldPlayerUp; self.oldPlayerUp = nextPlayerUp; return previousPlayerUp; } @end PSPlayer.h #import <Foundation/Foundation.h> typedef NS_ENUM(NSInteger, PSBoardSymbol) { PSBoardSymbolX = 0, PSBoardSymbolO, PSBoardSymbolNone }; @interface PSPlayer : NSObject -(instancetype)initWithSymbol:(PSBoardSymbol)symbol name:(NSString *)name; @property (nonatomic) PSBoardSymbol symbol; @property (nonatomic) NSString *symbolStringRepresentation; @property (nonatomic, strong) NSString *name; @end PSPlayer.m #import "PSPlayer.h" @implementation PSPlayer -(instancetype)initWithSymbol:(PSBoardSymbol)symbol name:(NSString *)name{ if(self = [super init]) { self.symbol = symbol; self.symbolStringRepresentation = (symbol == PSBoardSymbolO) ? @"O" : ((symbol == PSBoardSymbolX) ? 
@"X" : @""); self.name = name; } return self; } @end PSInputHandler.h #import <Foundation/Foundation.h> @interface PSInputHandler : NSObject +(NSString *)getString; +(NSInteger)getInteger; @end PSInputHandler.m #import "PSInputHandler.h" @implementation PSInputHandler +(NSInteger)getInteger { int temp; scanf("%i", &temp); return (NSInteger)temp; } +(NSString *)getString { char input[256]; scanf("%s", input); return [NSString stringWithUTF8String:input]; } @end So my questions are: In the PSInputHandler.m class, I wasn't so sure about how to get input from the command line. I read that fgets() is a potential alternative to scanf(), but is there any reason for me to use one over the other? The method in PSBoard.m that checks for a winner, checkForWinner, is very long. Is there a simplified design I can use to shorten it? I struggled to name the playerUp method, which returns the player whose turn it is. Is there a more suitable name? When the user inputs which row and column to place an X or O in, I made it so that the coordinates they enter are from 1 to the number of rows and not 0 to the number of rows minus one (like with zero-based array indexing). Is this more user-friendly, or should I change it to zero-based indexing? In PSPlayer.m, I use nested ternary operators. Is this too hard to understand? Should I change it to if statements? When getting the user's input for how many rows and columns to use, which I expect to be an integer, how can I sanitize the input so that the program doesn't crash when a string (for example) is inputted? Any other critique welcome! Answer: I will update this post over the weekend as I go through your question more and come up with some examples to iterate over my points, but I thought for now, I'd answer some of the easier questions. Question 1. I'm not sure and cannot remember (I will try to find out). At the end of the day, you might consider implementing this with a GUI. 
If you're using Xcode, it's quite easy to develop a GUI for either OSX or iOS, and most of your logic is already in place. You'd just have to write the logic to hook the GUI up to the business logic. Question 2. One immediate thought on speeding up this process would be to use a flag to mark whether or not a row/column is a potential winner. AND, if you do find a row that's a winner, immediately return the winner. For a row to be a winner, every piece in the row must belong to the same player, correct? So set the owner of the piece in the first box, and check every box. As soon as you get to a box that doesn't match the first box, break;. You don't need to check any more boxes in that row/column/diagonal. You can move to the next row/column/diagonal. And if you get to the end of the inner loop and haven't had to break; because you've found the winner, then you can set the winner and return; and stop checking. So basically, refactor into something more like this: for(NSUInteger row = 0; row < self.rows; row++) { PSPlayer *playerInFirstColumn = [self playerAtRow:row column:0]; BOOL winnerFound = NO; for(NSUInteger column = 0; column < self.columns; column++) { if(![[self playerAtRow:row column:column] isEqualTo:playerInFirstColumn]) { winnerFound = NO; break; } else { winnerFound = YES; } } if (winnerFound && playerInFirstColumn.symbol != PSBoardSymbolNone) { self.winner = playerInFirstColumn; return; } } This will improve performance some. You can probably still do better, but this is still a drastic improvement, especially for exceptionally large boards. Now... the BEST performance improvement I can think of would actually mean you're running this check after every turn (which you're already doing, right?). In this case, you only need to check ONE row, ONE column, and ZERO, ONE, or TWO diagonals. And this would be a massive performance boost. You only need to check the row the piece was played in, the column the piece was played in, and the diagonal the piece was played in.
Every other row, column, and diagonal has been checked on a previous turn and a winner was not found; otherwise the game would be over and we wouldn't have had this turn. AND, even if we modified the rules to continue playing after a winner has been found, you can just use an array to keep track of each row and column and diagonal and who won that row/column/diagonal, and still only need to check the relevant rows (and only check them for the player who played the piece). Question 3. playerUp is probably an alright method name. Maybe activePlayer? If you feel it's not descriptive enough, don't hesitate to leave a comment explaining it. // returns the player whose turn it is Question 4. As a programmer, I am used to 0-based indexing systems, but your assumption is correct. Most people who use programs aren't programmers and would be more comfortable with a 1-based coordinate system. Though... back to question 1... if this were given a GUI, it wouldn't matter. ;) Question 5. Personally, I hate the ternary operators and never use them. Whether or not they're acceptable would depend largely on who you're working with though. In this case, it's a simple one. Again, personally, I hate them and I wouldn't use it, because I never use it, but this one is simple enough that if you and everyone working on the project are comfortable with them, then go ahead and keep it. Question 5.1. The exact way you want to handle non-number input is an implementation detail that'd be up to you. Do you want to request another input? Do you want to just strip out the non-numbers and use the numbers that are there? But as for actually checking the string itself, once you've got it into an NSString, it's quite easy: NSCharacterSet *nonNumbers = [[NSCharacterSet decimalDigitCharacterSet] invertedSet]; if([yourString rangeOfCharacterFromSet:nonNumbers].location == NSNotFound) { // string is all numbers and is good to go } else { // string contains non-number characters }
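The last-move-only check from Question 2 is compact to state; a sketch in Python (a plain list-of-lists board stands in for PSBoard, and the function name is illustrative):

```python
def wins_after_move(board, row, col):
    """Check only the lines that pass through the square just played (board[row][col])."""
    n = len(board)
    player = board[row][col]
    lines = [[board[row][c] for c in range(n)],            # the move's row
             [board[r][col] for r in range(n)]]            # the move's column
    if row == col:                                          # main diagonal
        lines.append([board[i][i] for i in range(n)])
    if row + col == n - 1:                                  # anti-diagonal
        lines.append([board[i][n - 1 - i] for i in range(n)])
    return any(all(cell == player for cell in line) for line in lines)
```

This is constant in the number of lines examined per turn (at most four), versus scanning every row, column, and diagonal each time.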
{ "domain": "codereview.stackexchange", "id": 6023, "tags": "optimization, game, objective-c" }
Comparing $2^{F_n}$ and $2^{\varphi^n}$
Question: if we define $F_n$ to be the $n$th Fibonacci number and $\varphi$ the golden ratio, then can we say that $2^{F_n} \in \Theta(2^{\varphi^n})$, or in other words $2^{\frac{\varphi^n - (-\varphi)^{-n}}{\sqrt{5}}} \in \Theta(2^{\varphi^n})$? It's simple to show that $F_n \in \Theta(\varphi^n)$, but about the one above I don't have any idea. Answer: Hint: We have $F_n = [\varphi^n-(-\varphi)^{-n}]/\sqrt{5} = \varphi^n/\sqrt{5} \pm o(1)$, since $|1/\varphi| < 1$. Hence $$ 2^{F_n} = 2^{\varphi^n/\sqrt{5}} 2^{\pm o(1)} = 2^{\varphi^n/\sqrt{5}} (1\pm o(1)). $$ In particular, $2^{F_n} = \Theta(2^{\varphi^n/\sqrt{5}})$. I'll let you compare $2^{\varphi^n/\sqrt{5}}$ and $2^{\varphi^n}$ yourself.
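Numerically the hint's gap is easy to see: $F_n$ stays within $1/2$ of $\varphi^n/\sqrt{5}$, so $\log_2\bigl(2^{\varphi^n}/2^{F_n}\bigr)=\varphi^n-F_n\approx(1-1/\sqrt{5})\varphi^n$ is unbounded, hence $2^{F_n}\notin\Theta(2^{\varphi^n})$. A quick check:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# log2 of the ratio 2^(phi^n) / 2^(F_n) is phi^n - F_n, growing like (1 - 1/sqrt(5)) phi^n:
gap = [PHI ** n - fib(n) for n in range(41)]
```

Since the gap in the exponent itself grows geometrically, no constant factor can bound $2^{\varphi^n}$ by $2^{F_n}$.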
{ "domain": "cs.stackexchange", "id": 3492, "tags": "asymptotics" }
Topological Order and Entanglement
Question: I have a question about entanglement in condensed matter physics. It seems that topological order originates from long range entanglement, but what is long range entanglement? Is it the same as long range correlation? I am interested in this issue and I am happy to have any discussion. Answer: Long range entanglements are defined through local unitary transformations, which are discussed in arXiv:1004.3835 Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order by Chen, Gu, and Wen. Basically, long range entangled states are states which are very different from product states and cannot be changed to product states through local unitary transformations.
{ "domain": "physics.stackexchange", "id": 18734, "tags": "quantum-mechanics, condensed-matter, quantum-information, quantum-entanglement, topological-order" }
Advent of Code 2023 Day 2 in C#
Question: To paraphrase the puzzle, Santa takes a walk with an Elf, and they play a game involving a bag of colorful cubes. In each game, there is an unknown number of each colored cubes in the bag, and the Elf pulls out a number of cubes, for example: Game 1: 3 blue, 4 red; 1 red, 2 green, 6 blue; 2 green Game 2: 1 blue, 2 green; 3 green, 4 blue, 1 red; 1 green, 1 blue Game 3: 8 green, 6 blue, 20 red; 5 blue, 4 red, 13 green; 5 green, 1 red Game 4: 1 green, 3 red, 6 blue; 3 green, 6 red; 3 green, 15 blue, 14 red Game 5: 6 red, 1 blue, 3 green; 2 blue, 1 red, 2 green Part 1 In part 1, we want to know which games would have been possible if the bag contained only 12 red cubes, 13 green cubes, and 14 blue cubes. The answer is the sum of the eligible game ids, in the above example 1 + 2 + 5 = 8. Part 2 To paraphrase the change in part 2, for each game, we're looking for the maximum number of cubes per color that were drawn, multiply them together, and then sum this number for all games. With the example above this would be 48 + 12 + 1560 + 630 + 36 = 2286. 
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; string input = File.ReadAllText("day2.txt"); Console.WriteLine(GetSumOfInvalidIDs(input)); Console.WriteLine(GetCubeSetPower(input)); static int GetSumOfInvalidIDs(string input) { return input .Split("\r\n") .Sum(ProcessGamePart1); } static int GetCubeSetPower (string input) { return input .Split("\r\n") .Sum(ProcessGamePart2); } static int ProcessGamePart1(string game) { Dictionary<string, int> maxForColor = new() { {"red", 12}, {"green", 13}, {"blue", 14} }; foreach (string colorCount in GetColorCounts(game)) { var (numberOfBalls, color) = ParseColorCounts(colorCount.Trim()); if (numberOfBalls > maxForColor[color]) { return 0; } } return GetGameId(game); } static int ProcessGamePart2(string game) { Dictionary<string, int> maxSeenOfColor = new(); foreach (string colorCount in GetColorCounts(game)) { var (numberOfBalls, color) = ParseColorCounts(colorCount.Trim()); maxSeenOfColor.TryGetValue(color, out int currentMax); maxSeenOfColor[color] = Int32.Max(currentMax, numberOfBalls); } return maxSeenOfColor.Values.Aggregate(1, (currentPower, nextColor) => currentPower * nextColor); } static int GetGameId(string game) { return Int32.Parse(game.Split(" ")[1].TrimEnd(':')); } static string[] GetColorCounts(string game) { // Each color is given all at once for each handful, // so we only have to look at each count given. return game.Split(":")[1].Split(new char[] { ',', ';' }); } static (int, string) ParseColorCounts(string handful) { var parts = handful.Split(" "); int numberOfBalls = Int32.Parse(parts[0]); string color = parts[1]; return (numberOfBalls, color); } Review Request What could be improved upon? I am new to the language and am specifically interested in readability and idiomatic usage. Answer: Usings Keep the using directives tidy - System.Text and System.Threading.Tasks are not needed but System.IO is missing. 
Potential bug due to unclear input specifications The implementation will treat Game 1: 5 red, 10 red as a valid game. Which it isn't since there are only 12 red balls. It's not clear from the description if this is an expected input or not. If it's not a valid input then it should be rejected. Input processing Hard-coded input file name. If you follow the Unix philosophy you'd be just reading from stdin and process the input as you go. This has the advantage that users have the flexibility to decide where the data is coming from. Split("\r\n") will fail on a Unix file since the line endings are \n there. File.ReadAllLines already does the work for you and also removes the need to split the input several times. Otherwise if you do end up needing to split lines yourself then Environment.NewLine exists. Overall you process and parse all input several times which is wasteful and also intermixes the parsing logic with the actual game logic. This makes it hard to read and will also make maintenance hard (in the real world you can usually bet that the input specs and/or the game logic will be changing as time goes on). The better way to do this would be to create a data structure representing the games, parse the input into this structure and the apply the game logic on the structure. This has several advantages: input is processed just once, different input formats or slight changes in input format can be easily supported without having to touch the game logic and you can separate input validation from game logic.
{ "domain": "codereview.stackexchange", "id": 45300, "tags": "c#, programming-challenge" }
Duplicating byte array multiple times
Question: I have this code that duplicates a byte array 5 times. class Multiple { public static void main(String[] args) { byte[] result = new byte[] {0x41,0x42,0x43,0x44}; int len = result.length; byte[] multipled = new byte[len*5]; for (int i = 0; i < 5; i++) { for (int j = 0; j < len; j++) { multipled[i*len + j] = result[j]; } } System.out.println(new String(multipled)); System.out.println(new String(result)); } } Example: ABCDABCDABCDABCDABCD ABCD The code uses multiple loops and assignment, can it be better or shorter? Answer: It can be made shorter: public static void main(String[] args) { byte[] result = new byte[] {0x41, 0x42, 0x43, 0x44}; byte[] multipled = new byte[result.length * 5]; for (int i = 0; i < multipled.length; i++) multipled[i] = result[i % result.length]; ... }
{ "domain": "codereview.stackexchange", "id": 13035, "tags": "java, array" }
Advent of Code 2023 - Day 6: Wait For It
Question: Part One: The task involves organizing multiple toy boat races, each assigned a specific race time and a recorded distance. To surpass the existing record, participants must optimize the duration of holding the button, as the toy boat's speed increases by one millimeter per millisecond for every whole millisecond the button is pressed at the beginning of the race. The objective is to determine the various possible ways to achieve a performance surpassing the established record. For example: Time: 7 15 30 Distance: 9 40 200 This document describes three races: The first race lasts 7 milliseconds. The record distance in this race is 9 millimeters. The second race lasts 15 milliseconds. The record distance in this race is 40 millimeters. The third race lasts 30 milliseconds. The record distance in this race is 200 millimeters. Your toy boat has a starting speed of zero millimeters per millisecond. For each whole millisecond you spend at the beginning of the race holding down the button, the boat's speed increases by one millimeter per millisecond. So, because the first race lasts 7 milliseconds, you only have a few options: Don't hold the button at all (that is, hold it for 0 milliseconds) at the start of the race. The boat won't move; it will have traveled 0 millimeters by the end of the race. Hold the button for 1 millisecond at the start of the race. Then, the boat will travel at a speed of 1 millimeter per millisecond for 6 milliseconds, reaching a total distance traveled of 6 millimeters. Hold the button for 2 milliseconds, giving the boat a speed of 2 millimeters per millisecond. It will then get 5 milliseconds to move, reaching a total distance of 10 millimeters. Hold the button for 3 milliseconds. After its remaining 4 milliseconds of travel time, the boat will have gone 12 millimeters. Hold the button for 4 milliseconds. After its remaining 3 milliseconds of travel time, the boat will have gone 12 millimeters. 
Hold the button for 5 milliseconds, causing the boat to travel a total of 10 millimeters. Hold the button for 6 milliseconds, causing the boat to travel a total of 6 millimeters. Hold the button for 7 milliseconds. That's the entire duration of the race. You never let go of the button. The boat can't move until you let go of the button. Please make sure you let go of the button so the boat gets to move. 0 millimeters. In the example provided, with a current record of 9 millimeters: For the first race, holding the button for 2, 3, 4, or 5 milliseconds allows winning, totaling 4 different ways. In the second race, winning strategies involve holding the button for at least 4 milliseconds and at most 11 milliseconds, providing a total of 8 different ways to win. For the third race, winning requires holding the button for at least 11 milliseconds and no more than 19 milliseconds, offering a total of 9 ways to win. The answer is determined by multiplying the respective counts of winning strategies in each race. In this example, if you multiply all these values together, you get 288 (4 * 8 * 9). #!/usr/bin/env python3 from pathlib import Path from typing import Iterable import typer def race(speed: int, time_left: int, record: int) -> bool: # Check if we can cover a distance greater than the specified record. return speed * time_left > record def parse_nums(line: str) -> list[int]: # Name: A B C ... 
nums = line.split(": ")[1] return list(map(int, nums.split())) def count_wins(time: int, distance: int) -> int: return sum( race(hold_time, time - hold_time, distance) for hold_time in range(1, time + 1) ) def total_possibilities(lines: Iterable[str]) -> int: times, distances = tuple(map(parse_nums, lines)) total_margin = 1 for idx, time in enumerate(times): count = count_wins(time, distances[idx]) if count: total_margin *= count return total_margin def main(race_document: Path) -> None: with open(race_document) as f: print(total_possibilities(f)) if __name__ == "__main__": typer.run(main) Part 2: This part involves a single, extended toy boat race with combined time and record distance. So, the example from before: Time: 7 15 30 Distance: 9 40 200 now instead means this: Time: 71530 Distance: 940200 This time, the answer is 71503. #!/usr/bin/env python3 import sys from pathlib import Path from typing import Iterable import typer def race(hold_time: int, time_left: int, record: int) -> bool: # Check if we can cover a distance greater than the specified record. return hold_time * time_left > record def parse_nums(line: str) -> list[int]: # Name: A B C ... nums = line.split(": ")[1].strip().split() return int("".join(nums)) def count_wins(time: int, distance: int) -> int: return sum( race(hold_time, time - hold_time, distance) for hold_time in range(1, time + 1) ) def total_possibilities(lines: Iterable[str]) -> int: return count_wins(*tuple(map(parse_nums, lines))) def main(race_document: Path) -> None: sys.set_int_max_str_digits(0) with open(race_document) as f: print(total_possibilities(f)) if __name__ == "__main__": typer.run(main) Review Request: General coding comments, style, etc. What are some possible simplications? What would you do differently? Answer: Part 1 You ask what might be done differently. 
In function total_possibilities I would probably replace the use of the enumerate function used in your loop with the built-in zip function: def total_possibilities(lines: Iterable[str]) -> int: # No need to use the tuple function: times, distances = map(parse_nums, lines) total_margin = 1 for time, distance in zip(times, distances): count = count_wins(time, distance) if count: total_margin *= count return total_margin We can now modify function parse_nums to return a map instead of a list since there is no longer any need to do any explicit indexing: def parse_nums(line: str) -> list[int]: # Name: A B C ... nums = line.split(": ")[1] return map(int, nums.split()) # No need to create a list If you wished you could also incorporate the functools.reduce function into function total_possibilities and use a comprehension: def total_possibilities(lines: Iterable[str]) -> int: times, distances = map(parse_nums, lines) return reduce( lambda x, y: x * y, (count_wins(time, distance) for time, distance in zip(times, distances)) ) Finally, in count_wins you are letting hold_time take on the values in the range [1, time]. But you should use instead the range [1, time-1] since if you hold the button for the maximum allowed time there is no time left for the boat to move: def count_wins(time: int, distance: int) -> int: return sum( race(hold_time, time - hold_time, distance) for hold_time in range(1, time) ) Part 2 In function parse_nums you are doing an extra split followed by a join. These two calls can be replaced with a call to replace: def parse_nums(line: str) -> list[int]: # Name: A B C ... return int(line.split(": ")[1].replace(' ', '')) Note that there is no need to perform a call to strip. But clearly, as indicated in the comment made by user vnp, the biggest improvement can be achieved with a change in algorithm. As I understand it you really want to solve the quadratic equation for distance + 1.
This yields two roots and we wish to count all the integer values between these roots. So we have: ... from math import sqrt, ceil, floor def parse_nums(line: str) -> int: # Name: A B C ... return int(line.split(": ")[1].replace(' ', '')) def count_wins(time: int, distance: int) -> int: distance += 1 discriminant = time ** 2 - 4 * distance sqrt_discriminant = sqrt(discriminant) upper_root = (time + sqrt_discriminant) / 2 lower_root = (time - sqrt_discriminant) / 2 # Get number of integers in the range [lower_root, upper_root]: return floor(upper_root) - ceil(lower_root) + 1 def total_possibilities(lines: Iterable[str]) -> int: # No need for the tuple function: return count_wins(*map(parse_nums, lines)) ...
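As a sanity check on the quadratic-root approach, a small sketch (hypothetical helper names) comparing it against a brute-force count on the example races from the problem statement:

```python
import math

def count_wins_brute(time, distance):
    # try every hold time; distance covered is hold * (time - hold)
    return sum(hold * (time - hold) > distance for hold in range(1, time))

def count_wins_quadratic(time, distance):
    # integer hold times where hold^2 - time*hold + (distance + 1) <= 0
    target = distance + 1
    sqrt_disc = math.sqrt(time**2 - 4 * target)
    lower = (time - sqrt_disc) / 2
    upper = (time + sqrt_disc) / 2
    return math.floor(upper) - math.ceil(lower) + 1

# the three example races: (7, 9) -> 4, (15, 40) -> 8, (30, 200) -> 9
for time, distance in [(7, 9), (15, 40), (30, 200)]:
    assert count_wins_brute(time, distance) == count_wins_quadratic(time, distance)
```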
{ "domain": "codereview.stackexchange", "id": 45332, "tags": "python, programming-challenge, string-processing" }
Stroke Width Transform: Gradient Direction Computation
Question: I've been trying to implement this paper from Boris Epshtein, Eyal Ofek, and Yonatan Wexler, the pioneers of the Stroke Width Transform. I have a simple image with random text that has undergone grayscaling. I then produced two edge maps using two different algorithms for later output comparison, Sobel and Canny (I'm also planning on Prewitt or other edge detection algorithms). I also have gradient maps x and y, computed with Sobel. On the segment of computing the gradient direction, given the equation: theta = arctan( Gy / Gx ) where: theta is the resulting angle, Gx is the X gradient component, Gy is the Y gradient component (both Gx and Gy are from the gradient maps), I got a bit confused. If the value of Gx is 0 (zero), then since dividing by zero is undefined (some say it's infinity), are we going to make theta infinity or just 0 (zero)? Answer: If $Gx = 0$ and $Gy > 0$, then we use $Gy / 0 = + \infty$. Note that $\arctan(+\infty) = \pi/2$. If $Gx = 0$ and $Gy < 0$, then we use $Gy / 0 = -\infty$. Note that $\arctan(-\infty) = -\pi/2$.
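In practice the division can be sidestepped entirely: the two-argument arctangent takes Gy and Gx separately and returns a well-defined angle even when Gx is zero (it also distinguishes opposite gradient directions, which plain arctan cannot). A sketch in Python, assuming you already have the Sobel gradient values at a pixel:

```python
import math

def gradient_direction(gy, gx):
    # atan2 handles gx == 0 gracefully, returning +pi/2 or -pi/2
    # instead of attempting the division gy / gx
    return math.atan2(gy, gx)

assert gradient_direction(1.0, 0.0) == math.pi / 2    # Gx = 0, Gy > 0
assert gradient_direction(-1.0, 0.0) == -math.pi / 2  # Gx = 0, Gy < 0
assert gradient_direction(0.0, 1.0) == 0.0            # pure horizontal gradient
```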
{ "domain": "cs.stackexchange", "id": 8516, "tags": "algorithms, image-processing, mathematical-programming, research" }
Concentration of caffeine in physiologically relevant setting
Question: Caffeine is known to overcome G2/M (and possibly other types of) cell cycle arrest through its effects on DNA repair machinery and is used as a positive control in studies of cell cycle arrest. In this paper, for example, researchers use concentrations of caffeine in the 1-4 mM range to overcome nocodazole-mediated arrest. Is this a physiologically relevant concentration? I found a lot of interesting caffeine data including the LD50 (estimated to be around 150-200 mg/kg) but have not been able to turn up any good numbers on reasonable concentrations that might be found in a human. Answer: Let's figure it out. Suppose an adult with an average blood volume of 5 L (5000 mL) drinks a cup of strong coffee with 100 mg of caffeine in it. Given caffeine's formula weight of 194.19 g/mol, and assuming 100% of the caffeine is in the blood stream, its concentration (from this awesome calculator page from Graphpad, the makers of Prism) would be approximately 103 µM, or 10-40 times less than what's being used in the experiments you cite. Of course, this is overcoming drug-mediated cell cycle arrest, so it's possible that a lower concentration is needed to jump-start physiologically-arrested cells.
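The back-of-the-envelope arithmetic is easy to reproduce (a sketch; 100% absorption into 5 L of blood is of course an idealization):

```python
caffeine_mg = 100.0        # one strong cup of coffee
molar_mass_g_mol = 194.19  # caffeine
blood_volume_l = 5.0

moles = caffeine_mg / 1000.0 / molar_mass_g_mol  # ~5.15e-4 mol
concentration_um = moles / blood_volume_l * 1e6  # mol/L -> micromolar
print(round(concentration_um))  # -> 103, i.e. roughly 103 uM
```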
{ "domain": "biology.stackexchange", "id": 2968, "tags": "pharmacology, cell-cycle" }
about angle between magnetic axis and geographical axis of earth
Question: How can the angle between the magnetic axis of the Earth and the geographic axis of the Earth be calculated? Answer: To calculate the angle between them, you subtract the latitude of magnetic north from the latitude of axial north. Note, though, that magnetic north moves around on Earth's surface, so there's no way to figure out where it is to begin with except by walking around with a compass until you find it (or some technologically advanced equivalent).
{ "domain": "physics.stackexchange", "id": 25793, "tags": "earth, geomagnetism" }
If the sun would be two times as big and with the same power output would temperature on Earth change?
Question: Suppose the Sun's radius is increased by a factor of two, and suppose its emitted power doesn't change. Will we notice this on Earth? It will seem twice as big, but apart from that, what will we notice (if anything)? Will it have a different color? Will the temperature on Earth rise or drop? The energy arriving on Earth per second will still be the same. Maybe it's arriving in a different form though. I know it's a hypothetical question, but one day the radius of the Sun will increase. Maybe the power emitted will not stay the same, but imagine it did. If the emitted power is the same, does it matter if the emitting body is a large sphere of gas or a solid sphere? Answer: Second question first: If the emitted power is the same does it matter if the emitting body is a large sphere of gas or a solid sphere? Not really. For the thermal radiation discussed below, as long as the emissivity is high at the wavelength in question, it will radiate similarly to a blackbody. We see light from the Sun's photosphere, which is roughly where it becomes opaque, so we can approximate it as a dark gray, almost black but "white hot" object. I know it seems counterintuitive, but think of white hot glass; it can only emit light because it's actually less transparent when it's glowing hot! Suppose the Sun's radius is increased by a factor of two, and suppose its emitted power doesn't change. Will we notice this on Earth? We can approximate the Sun's light as that of a thermal emitter, some dark sphere heated to glowing almost white-hot. In that case the connection between power $P$, size $r$ and temperature $T$ comes from the Stefan–Boltzmann law $$P \approx \sigma A T^4 = \sigma 4 \pi r^2 T^4$$ where $\sigma$ is the Stefan–Boltzmann constant 5.67E-08 and $A$ is the area of the sphere. We don't need to calculate everything to see that $r^2 T^4$ must remain constant for constant power, which tells us that $T$ varies as $1/\sqrt{r}$.
Sunlight reaching the Earth (but before being filtered by the atmosphere) has a spectrum similar to that of some dark sphere heated to a temperature of roughly 5900 Kelvin. Photographers might call that a color temperature. Source Our new Sun with twice the radius but the same power will have a new temperature of about $5900/\sqrt{2} \approx 4170$ Kelvin. It will be distinctly orange-ish. Source (annotated) If the sun would be two times as big and with the same power output would temperature on Earth change? Yes! Less of its output would be in the visible, transparent part of our atmosphere and more would be in the infrared, where the atmosphere is more absorbing. When the Sun's warmth from above is absorbed in the atmosphere, it is easier for it to radiate that back into space, but climate models are very complicated and it's hard to say what new equilibrium point we would reach, since any change in water vapor (the biggest greenhouse gas) can affect the climate substantially. Also, our current plants on Earth need blue light, and the amount of blue will drop precipitously; that is what orangish is: white light with some blue removed. Outdoors, our vision system would shift its white point so that after a few minutes we'd get used to the new sunlight and feel that it is roughly white. But individual objects would now appear to have very different colors since we're missing blue light! Without blue, our white clothes and white paper in sunlight wouldn't look as clean, since the blueing and brighteners in them would not be stimulated by as much blue light, and our fluorescent signs and jackets would look dull, which is a lot less important than the Earth's plants no longer growing properly!
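The constant-power scaling is a one-liner to verify numerically (a small sketch using the $r^2 T^4 = \text{const}$ relation):

```python
import math

T_old = 5900.0   # K, effective color temperature of sunlight
r_factor = 2.0   # radius doubled

# P ~ r^2 T^4 held constant  =>  T_new = T_old / sqrt(r_factor)
T_new = T_old / math.sqrt(r_factor)
print(round(T_new))  # -> 4172, distinctly orange-ish

# cross-check: the emitted power ratio comes back to 1
assert abs(r_factor**2 * (T_new / T_old)**4 - 1.0) < 1e-9
```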
To see this graphically, let's use Planck's law (from here): $$B(\lambda, T) = \frac{2 h c^2}{\lambda^5} \frac{1}{\exp\left(\frac{h c}{\lambda k_B T}\right) - 1}$$ where $h$, $c$ and $k_B$ are the Planck constant, the speed of light and the Boltzmann constant. import numpy as np import matplotlib.pyplot as plt def B_lambda(lam, T): # https://en.wikipedia.org/wiki/Planck%27s_law hplank = 6.626E-34 # J s clight = 2.9979E+08 # m/s kB = 1.381E-23 # J/K term_1 = 2.*hplank*clight**2/lam**5 term_2 = hplank*clight/(lam*kB*T) return term_1 / (np.exp(term_2) - 1.) lam_vis_nm = np.arange(350, 751) lam_vis = 1E-09 * lam_vis_nm A_vis = B_lambda(lam_vis, 5900) B_vis = 4 * B_lambda(lam_vis, 4200) lam_nm = np.arange(100, 5001, 10) lam_um = lam_nm / 1E+03 lam = lam_nm / 1E+09 A = B_lambda(lam, 5900) B = 4 * B_lambda(lam, 4200) fig, (ax1, ax2) = plt.subplots(2, 1) ax1.plot(lam_vis_nm, A_vis) ax1.plot(lam_vis_nm, B_vis) ax1.set_xlim(lam_vis_nm.min(), lam_vis_nm.max()) ax1.set_xlabel('wavelength (nm)') ax1.tick_params(labelleft=False) ax1.set_ylabel('spectral intensity (arb)') ax2.plot(lam_um, A) ax2.plot(lam_um, B) ax2.set_xlim(lam_um.min(), lam_um.max()) maxi = max(A.max(), B.max()) ax2.set_ylim(maxi * 1E-04, 2*maxi) ax2.set_yscale('log') ax2.set_xlabel('wavelength (um)') ax2.set_ylabel('spectral intensity (arb)') plt.show()
{ "domain": "astronomy.stackexchange", "id": 5677, "tags": "the-sun, earth, temperature" }
Particle acquiring mass as it moves upward
Question: A particle whose initial mass is $m_0$ is projected vertically upwards at time $t=0$ with speed $gT$, where $T$ is a constant and $g$ is the uniform gravitational acceleration. At time $t$, the mass of the particle has increased to $m_0e^{t/T}$. If the added mass is at rest relative to the particle when it is acquired, find the time when it is at the highest point and its mass at that instant. My attempt: I applied the impulse momentum equation considering that at a general time $t$, the mass is $m_0e^{t/T}$ and it is moving upwards with speed $v$; after a time $dt$, mass $dm$ (moving with speed $v$) adds to it and then they both move with a speed $v+dv$. Therefore, $$(m_0e^{t/T}+dm)(v+dv)-(m_0e^{t/T}+dm)v=-m_0e^{t/T}gdt$$ After simplifying we get $$gdt=-dv$$ On integrating with proper limits we get $t=T$, however the answer given is $T\ln(2)$ although my approach seems correct. Where have I gone wrong? Answer: I believe either the question is incorrectly worded, or the expected answer is wrong. Your reasoning is correct, although it doesn't really take any math to arrive at your answer: if the added mass is indeed at rest relative to the particle as it moves, the added mass imparts no impulse on the particle: the particle and the added mass are already moving with the same velocity. The mass increases, however the gravitational acceleration does not depend on the mass, so the particle maintains its acceleration as it moves upward, reaching the peak at time $t=gT/g=T$. I think the intention was to ask what happens when the added mass is at rest with respect to the "world frame", i.e. the frame with respect to which the particle is initially moving with a velocity of $gT$. Then, the added mass actually decelerates the particle. The answer does turn out to be $T\ln 2$ in this case. I'll leave it to you to attempt the problem as stated this way, let me know if you have further issues.
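For completeness, a sketch of the momentum balance for that alternative reading (my own derivation of the variant described in the answer, where the added mass is at rest in the ground frame):

```latex
% Momentum before: mv + dm\cdot 0; momentum after: (m+dm)(v+dv), with m = m_0 e^{t/T}.
(m+dm)(v+dv) - mv - dm\cdot 0 = -mg\,dt
\;\Longrightarrow\;
m\,dv + v\,dm = -mg\,dt
\;\Longrightarrow\;
\frac{dv}{dt} + \frac{v}{T} = -g,
% since dm/dt = m/T. With v(0) = gT the solution is
v(t) = gT\left(2e^{-t/T} - 1\right),
% which vanishes at t = T\ln 2, where the mass is m_0 e^{\ln 2} = 2m_0.
```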
{ "domain": "physics.stackexchange", "id": 68621, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, momentum" }
Which comes first? Multiple Imputation, Splitting into train/test, or Standardization/Normalization
Question: I am working on a multi-class classification problem, with ~65 features and ~150K instances. 30% of features are categorical and the rest are numerical (continuous). I understand that standardization or normalization should be done after splitting the data into train and test subsets, but I am still not sure about the imputation process. For the classification task, I am planning to use Random Forest, Logistic Regression, and XGBOOST (which are not distance-based). Could someone please explain which should come first? Split > imputation or imputation > split? In case that split > imputation is correct, should I follow imputation > standardization or standardization > imputation? Answer: Always split before you do any data pre-processing. Performing pre-processing before splitting will mean that information from your test set will be present during training, causing a data leak. Think of it like this: the test set is supposed to be a way of estimating performance on totally unseen data. If it affects the training, then it will be partially seen data. I don't think the order of scaling/imputing is as strict. I would impute first if the method might throw off the scaling/centering. Your steps should be: Splitting Imputing Scaling Here are some related questions to support this: Imputation before or after splitting into train and test? Imputation of missing data before or after centering and scaling?
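The split-then-fit discipline can be sketched without any libraries (toy data; in practice a scikit-learn Pipeline with SimpleImputer and StandardScaler, fit only on the training split, enforces exactly this order):

```python
import math

train = [1.0, 2.0, None, 4.0]  # toy feature column with a missing value
test = [None, 10.0]

# 1. Impute: learn the fill value from the TRAIN split only
train_mean = sum(x for x in train if x is not None) / sum(x is not None for x in train)
train = [train_mean if x is None else x for x in train]
test = [train_mean if x is None else x for x in test]  # reuse the train statistic

# 2. Scale: again, mean and standard deviation come from the TRAIN split only
mu = sum(train) / len(train)
sd = math.sqrt(sum((x - mu) ** 2 for x in train) / len(train))
train = [(x - mu) / sd for x in train]
test = [(x - mu) / sd for x in test]

# the test split never influenced any fitted statistic -> no leakage
```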
{ "domain": "datascience.stackexchange", "id": 5314, "tags": "multiclass-classification, normalization, data-imputation" }
statsmodels ols does not include all categorical values
Question: I am doing an ordinary least squares regression (in python with statsmodels) using a categorical variable as a predictor. There are 5 values that the categorical variable can have. However, after running the regression, the output only includes 4 of them. Here is what I am running: >>> from statsmodels.formula.api import ols >>> model = ols("normalized_score ~ C(general_subreddit)", data=df_feature) >>> results = model.fit() >>> results.summary() The output of the last command includes the following rows in the table: I can check the count of each of the categorical variables as follows: >>> from collections import Counter >>> Counter(df_feature["general_subreddit"]) Counter({nan: 20, 'community': 4159, 'ending_addiction': 3819, 'mental_health': 4650, 'other': 6920, 'relationships': 4318}) Ignoring the NaNs, why does the categorical value of "community" not appear in the model summary? Answer: When a regression model is built using a categorical variable with N levels, it only considers N-1 levels, as the remaining level is used as a reference by the model. What this means for your model as a whole is that each level (when the remaining variables stay the same) is compared to the reference level. In your example, the "community" level is used as the reference. So, ending_addiction contributes 0.0749 units more than "community" to the dependent variable. Hit this up if it's not clear --> https://community.alteryx.com/t5/Alteryx-Designer-Discussions/In-the-Logistic-Regression-Report-Factor-Missing/m-p/10100/highlight/true#M5169
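The "N levels, N-1 columns" behavior is easy to see directly: with an intercept in the model, a full set of one-hot indicator columns would be perfectly collinear, so one level is dropped and absorbed into the intercept as the reference. A plain-Python sketch with a hypothetical three-level variable:

```python
levels = ["community", "ending_addiction", "mental_health"]
data = ["community", "mental_health", "ending_addiction", "community"]

# full one-hot encoding: one indicator column per level
one_hot = {lvl: [int(x == lvl) for x in data] for lvl in levels}

# the indicator columns sum to the all-ones intercept column,
# so keeping all of them alongside an intercept is perfectly collinear
row_sums = [sum(one_hot[lvl][i] for lvl in levels) for i in range(len(data))]
assert row_sums == [1, 1, 1, 1]

# dropping the reference level removes the redundancy; its effect is
# absorbed into the intercept, and each remaining coefficient is
# interpreted relative to it
dummies = {lvl: cols for lvl, cols in one_hot.items() if lvl != "community"}
assert len(dummies) == len(levels) - 1
```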
{ "domain": "datascience.stackexchange", "id": 5175, "tags": "python, regression, linear-regression" }
How to build a model which can predict the probability of an event based on a set of timeseries data?
Question: I am trying to build a deep learning neural model using Keras and TensorFlow which can predict if a certain event will occur based on a set of time series data and some fixed data together. For example: for a given set of entities, say A & B, their price behavior is correlated due to a rumor of some event that is going to happen in the future. 1) A & B's fixed data like type, group, etc. 2) During a certain period of time - 1 Jan 2015 - 30 Mar 2015 - their price. The data that I have is INPUT: Name of entity, Type of entity, Size, Country, Specific Attributes, and time series stock data from 1 Jan 2015 - 30 Mar 2015. OUTPUT: Y/N, a Boolean output indicating whether the event happened or not. Now my question is how do I build this, since I have some fixed data which doesn't change over time and some time series data which changes over time. Options that I thought of are 1) LSTM - but not sure if I should feed in fixed static data. 2) CNN - not sure if it is the right approach? Please let me know what should be my approach to handle such a problem. Answer: Since you have features that would be handled best with a recurrent neural net, AND some features that would be handled best with a feedforward net, what you can actually do is both and feed them into a main Dense layer which has a softmax output to give you the probability distribution. This would be rather hard to do by hand, but luckily you are using Keras, which allows for this kind of modeling rather easily! In the Keras functional API guide https://keras.io/getting-started/functional-api-guide/, there is a model actually very similar to what you are looking for, where the "Main" information is an LSTM layer (which you'd use for the stock prices), and the "Auxiliary" information would be (Name of entity, Type of entity, Size, Country, Specific Attributes) etc... The model looks like this: The example model actually uses 2 loss functions (2 outputs), but you can easily build it to only have the one output.
The code is all there so it will be easy to replicate. I basically use this kind of model for almost everything now and get great results, vs. just an LSTM alone.
{ "domain": "datascience.stackexchange", "id": 1637, "tags": "machine-learning, time-series, tensorflow, keras" }
Is this a bedbug?
Question: Does anyone know if this is a bedbug? Or maybe a cockroach? I am located in Quebec, Canada. If you need a clearer picture I'll try to take one without the glass in the way so my camera can focus better. Hopefully it doesn't escape. Answer: I think I can see 5 extremities sticking out under the shield on one side in your image. Hence I think it has a total of 10 extremities. A bedbug, belonging to the class of insects, should have just 6. I'm pretty sure it's a pill bug (Fig. 1), a member of a family of woodlice (Armadillidiidae) belonging to the Isopoda, an order of crustaceans. Crustaceans include the lobsters, crabs and crayfish, to name a few. Fig. 1. A pill bug species. source: Natural History of Orange County, CA A bed bug is an insect and has six extremities (Fig. 2). Fig. 2. A bed bug species. source: Terminix So long story short, your species here is not an insect, and hence not a bed bug. From the limited amount of information you've provided, a specific species determination is difficult; there are many pill bugs.
{ "domain": "biology.stackexchange", "id": 6728, "tags": "entomology, species-identification" }
how stream theora topic
Question: Hi all, I am performing a teleoperating experiment via webcam vision and using a wireless channel to transport the data, so I am interested in using the theora data compression. Using usb_cam_node, the rostopic list command shows me the topics /usb_cam/image_raw and /usb_cam/image_raw/theora, etc. Using the command rosrun image_view image_view image:=/usb_cam/image_raw, video is streaming, but I am not able to understand how to view the theora topic /usb_cam/image_raw/theora. Could anyone help me? Originally posted by doxsi on ROS Answers with karma: 3 on 2012-05-21 Post score: 0 Answer: Try running rosrun image_view image_view image:=/usb_cam/image_raw _image_transport:=theora Originally posted by derekjchow with karma: 341 on 2012-05-21 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 9483, "tags": "ros" }
How much CO2 is released when combusting 6 kg of n-heptane (C7H16)?
Question: I am very new to chemistry and have a question about an exercise. It's about the combustion (complete oxidation) of n-heptane ($\ce{C7H16}$). There were several questions I could answer, for example the chemical equation: $$\ce{C7H16 + 11O2 -> 7CO2 + 8H2O}$$ But what I can't answer is how to calculate how much $\ce{CO2}$ is released when burning 6 kg of the n-heptane. Can someone help me with this? Thanks a lot! Edit: I am not sure if this information is needed: molar mass (g/mol): $\ce{C}=12$ $\ce{O}=16$ $\ce{H}=1$ density of n-heptane: $0.684\ \mathrm{kg/L}$ Answer: For these types of problem, the concept of the mole is very handy. Here, you know the reaction stoichiometry, which means the equation hands you the key facts directly: for every mole of heptane you need 11 moles of oxygen for complete combustion, and every mole of heptane combusted gives 7 moles of carbon dioxide and 8 moles of water. So, if you know the number of moles corresponding to 6 kg of heptane, you can easily find the amount of carbon dioxide produced in moles. The relations are $$n_{CO_2}=7 \times n_{Heptane}$$ $$n_{Heptane}=\frac{m_{Heptane}}{M_{Heptane}}$$ where $M_{Heptane}$ is the molar mass of heptane. Now if you also want the amount of carbon dioxide produced in mass or volume units, you need the molar mass of $\ce{CO2}$ or the ideal gas law.
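The arithmetic above can be checked with a short script (using the rounded molar masses given in the question, C = 12, H = 1, O = 16):

```python
# Molar masses from the question's rounded values (g/mol)
C, H, O = 12.0, 1.0, 16.0

M_heptane = 7 * C + 16 * H         # 100 g/mol
M_co2 = C + 2 * O                  # 44 g/mol

m_heptane = 6000.0                 # 6 kg expressed in grams
n_heptane = m_heptane / M_heptane  # 60 mol of heptane
n_co2 = 7 * n_heptane              # stoichiometry: 7 mol CO2 per mol heptane
m_co2 = n_co2 * M_co2              # mass of CO2 in grams

print(n_co2, m_co2 / 1000)         # 420 mol of CO2, i.e. about 18.5 kg
```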
{ "domain": "chemistry.stackexchange", "id": 4522, "tags": "organic-chemistry, combustion" }
What are the advantages, if any, of derivative sampling?
Question: In Five short stories about the cardinal series $[1]$, the author makes the following comment: Interestingly enough, Shannon goes on to mention that other sets of data can also be used to determine the band-limited signal--for example, the values of ƒ and its first derivative at every other sample point, the values of ƒ and its first and second derivatives at every third sample point, and so on. The paper mentions some historical developments, but I'm curious what the "killer apps" are for derivative sampling. Does it go by any other names? Are there further generalizations of this approach? A simple overview, or pointers to some references would be great. -- J. R. Higgins, Five short stories about the cardinal series, Bull. Amer. Math. Soc. (N. S.) 12 (1985), no. 1, 45-89. http://bit.ly/plioNg Answer: Papoulis introduced a generalization of the sampling theorem [1], of which the derivative sampling approach is one case. The gist of the theorem, quoting from [2], is: In 1977, Papoulis introduced a powerful extension of Shannon’s sampling theory, showing that a band-limited signal could be reconstructed exactly from the samples of the response of $m$ linear shift-invariant systems sampled at $1/m$ the reconstruction rate. Perhaps one reason why it's hard to search for the term is that Papoulis' generalized sampling theorem is mentioned more often than "derivative sampling". [2] is also a very good article which presents a broad overview of the sampling approaches at the time of publication. [3], also by the same author, is an extension of [1] to the class of non-bandlimited functions. As for applications, in a recent paper [4], the derivative sampling approach is used to design wideband fractional delay filters and the authors show that sampling the derivative results in smaller errors. From the abstract: In this paper, the design of wideband fractional delay filter is investigated.
First, the reconstruction formula of derivative sampling method is applied to design wideband fractional delay filter by using index substitution and window method. ... Finally, numerical examples are demonstrated to show that the proposed method has smaller design error than the conventional fractional delay filter without sampling the derivative of signal. While there certainly are more, I'll refrain from posting more references and application to keep it short (and avoid it turning into a list). A good point to start looking would be to check which papers have cited [1]-[3] and narrow down the list based on the abstract. [1]: A. Papoulis, “Generalized sampling expansion,” IEEE Trans. Circuits and Systems, vol. 24, no. 11, pp. 652-654, 1977. [2]: M. Unser, "Sampling - 50 years after Shannon," Proceedings of the IEEE, vol. 88, num. 4, p. 569-587, 2000 [3]: M. Unser and J. Zerubia, "A generalized sampling theory without band-limiting constraints," IEEE Trans. Circuits and Systems II, vol. 45, num. 8, p. 959–969, 1998 [4]: C-C Tseng and S-L Lee, "Design of Wideband Fractional Delay Filters Using Derivative Sampling Method", IEEE Trans. Circuits and Systems I, vol. 57, num. 8, p. 2087-2098, 2010
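To make the derivative-sampling case concrete, here is the classical two-channel reconstruction formula (the normalization below, for a signal bandlimited to $[-\pi,\pi]$ and sampled together with its first derivative at every other Nyquist point, is an illustrative textbook special case, not quoted from the references above):

```latex
% Signal f bandlimited to [-pi, pi]; samples of f and f' taken at t = 2n
f(t) \;=\; \sum_{n=-\infty}^{\infty}
  \Bigl[\, f(2n) + (t - 2n)\, f'(2n) \,\Bigr]\,
  \operatorname{sinc}^{2}\!\Bigl(\tfrac{t-2n}{2}\Bigr),
\qquad
\operatorname{sinc}(x) \;=\; \frac{\sin \pi x}{\pi x}
```

The $\operatorname{sinc}^2$ kernel and the $(t-2n)\,f'(2n)$ term together make each summand match both the sample value and the sample slope at $t = 2n$, compensating for the samples being twice as sparse as in the ordinary cardinal series.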
{ "domain": "dsp.stackexchange", "id": 15, "tags": "sampling" }
Can a mesh shield keep food cold in a running microwave?
Question: Please forgive my amateur physics knowledge. Say it's Thanksgiving leftover day. You load a plate with turkey, potatoes, cranberry sauce, and salad. Now it's time to heat your food, but oh no! You put the cold foods on your plate before using the microwave! The mesh grate over the microwave window reflects microwaves without being damaged. Could you safely place a similar mesh dome over your cranberry sauce and salad to heat half your food, but leave the other half cool? Answer: If the mesh were to reflect the microwaves, then yes you could heat up one half of a plate and keep the other cold. However, what typically happens is that the microwaves excite electrons in the metal mesh itself, generating eddy currents and sparks. Depending on your microwave, it can even damage the microwave itself because some microwaves are built on assumptions of what sorts of standing waves can form in the microwave and which ones cannot!
{ "domain": "physics.stackexchange", "id": 41055, "tags": "microwaves" }
Is graph automorphism Karp-reducible to graph isomorphism under hidden subgroup representation?
Question: The classical representation of the graph automorphism problem is Karp-reducible to the classical representation of the graph isomorphism problem. A sketch of the proof can be written as follows, based on the detailed proof given in Section 1.5 of The Graph Isomorphism Problem - Its Structural Complexity. Prove that the graph automorphism problem is Turing-reducible to the graph isomorphism problem. Prove that the graph automorphism problem has a polynomial time computable or-function. Prove that the graph isomorphism problem has both polynomial time computable and- and or-functions. Combining the previous results, it may be argued that the graph automorphism problem is Karp-reducible to the graph isomorphism problem. My question is whether similar reducibility can be inferred for the problems under hidden subgroup representations (HSP). Let me define those representations. Graph isomorphism as a hidden subgroup problem Let the $2n$-vertex graph $\Gamma = \Gamma_1 \sqcup \Gamma_2$ be the disjoint union of the two graphs $\Gamma_1$ and $\Gamma_2$ such that $Aut \left(\Gamma_1\right) = Aut \left(\Gamma_2\right) = \left\{e\right\}$ (a limited version). A map $\varphi : S_{2n} \to \text{Mat}\left(\mathbb{C}, N \right)$ from the group $S_{2n}$ is said to have hidden subgroup structure if there exists a subgroup $H_\varphi$ of $S_{2n}$, called a hidden subgroup, and an injection $\ell_\varphi : S_{2n}/H_\varphi \to \text{Mat}\left(\mathbb{C}, N \right)$, called a hidden injection, such that the diagram commutes, where $S_{2n}/H_{\varphi}$ denotes the collection of right cosets of $H_\varphi$ in $S_{2n}$, and where $\nu : S_{2n} \to S_{2n}/H_\varphi$ is the natural map of $S_{2n}$ onto $S_{2n}/H_\varphi$. We refer to the group $S_{2n}$ as the ambient group and to the set $\text{Mat}\left(\mathbb{C}, N \right)$ as the target set.
The hidden subgroup version of the graph isomorphism problem is to determine a hidden subgroup $H$ of $S_{2n}$ with the promise that $H$ is either trivial or $|H| = 2$. Graph automorphism as a hidden subgroup problem For a graph $\Gamma$ with $n$ vertices, a map $\varphi : S_{n} \to \text{Mat}\left(\mathbb{C}, N \right)$ from the group $S_{n}$ is said to have hidden subgroup structure if there exists a subgroup $\text{Aut}\left(\Gamma\right)$ of $S_{n}$, called a hidden subgroup, and an injection $\ell_\varphi : S_{n}/\text{Aut}\left(\Gamma\right) \to \text{Mat}\left(\mathbb{C}, N \right)$, called a hidden injection, such that for each $g \in \text{Aut}\left(\Gamma\right)$, $g \left(\Gamma\right) = \Gamma$ and the diagram commutes, where $S_{n}/\text{Aut}\left(\Gamma\right)$ denotes the collection of right cosets of $\text{Aut}\left(\Gamma\right)$ in $S_{n}$, and where $\nu : S_{n} \to S_{n}/\text{Aut}\left(\Gamma\right)$ is the natural map of $S_{n}$ onto $S_{n}/\text{Aut}\left(\Gamma\right)$. We refer to the group $S_{n}$ as the ambient group and to the set $\text{Mat}\left(\mathbb{C}, N \right)$ as the target set. The hidden subgroup version of the graph automorphism problem is to determine a hidden subgroup $\text{Aut}\left(\Gamma\right)$ of $S_{n}$ with the promise that $\text{Aut}\left(\Gamma\right)$ is either trivial or non-trivial depending on the type of $\Gamma$. My questions: I think it can be trivially shown that the hidden subgroup representation of the graph automorphism problem is Turing-reducible to the graph isomorphism problem by giving two input graphs as the original graph and the candidate automorphism of the original graph. If I am correct, the remaining question is whether there is a Karp-reduction for the hidden subgroup representation. If there is, how can it be shown? Answer: I think your focus on the rigid case of GI limits you too much.
Instead phrase (non-rigid) GI as an HSP in the same way, but now the goal is to determine the size of the hidden subgroup, or a generating set. The difference between the isomorphic and non-isomorphic cases will be a factor of 2 in the order of the hidden subgroup. If you phrase the problem as finding generators of the hidden subgroup, then the question is just whether any generator switches $\Gamma_1$ and $\Gamma_2$. Now, an instance of "HSP-GA" corresponding to a graph $\Gamma$ is given by the function from $S_n \to M_n(\mathbb{C})$ defined by $f(\pi) = A(\pi(\Gamma))$ where $A(\cdot)$ denotes the adjacency matrix. In particular, $f(1) = A(\Gamma)$. Then apply the usual Karp reduction from GA to GI to get a pair of graphs $\Gamma_1, \Gamma_2$, and consider the HSP-GI instance, of the type described in the preceding paragraph, corresponding to the pair $\Gamma_1, \Gamma_2$ (that is, the disjoint union $\Gamma_1 \cup \Gamma_2$).
{ "domain": "cstheory.stackexchange", "id": 3945, "tags": "graph-theory, quantum-computing, reductions, graph-isomorphism, quantum-information" }
How to understand the storing mechanism used in external merge sort
Question: I was reading about external merge sort from the Wikipedia article; according to it: External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and instead they must reside in the slower external memory (usually a hard drive). External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted subfiles are combined into a single larger file. Suppose we have a RAM which can only hold 2 chunks of data and we have 6 chunks of data to sort. Please see the below diagram: Since our memory can hold 2 chunks of data, step 1 sounds plausible, since we are sorting only pairs of numbers (5,6), (3,4), (1,2). In step 2 we merge the data and now our chunk size is 4. My question is: how do you now load this chunk of 4 into memory? Since your memory cannot accept more than 2 chunks of data, how do you load and sort them? How do you sort while merging chunks of data here? I have visited several links, but was not able to understand this concept. You must be performing some kind of sorting while merging the data as well, right? Answer: How do you load that entire list into memory? You don't. You have a bit of a misunderstanding about how Merge works, in this setting. It's tempting to think that Merge should work as follows: load both input lists into memory, in their entirety; merge them; then write the result back out to storage. But that wouldn't work -- we won't have enough memory to load the entire lists into memory. Instead, Merge works without loading either input list into memory all at once. Take a look at the pseudocode for Merge, and you'll see that it only needs to have two list items in memory at a time: one list node from somewhere in the first list, and one from somewhere in the second list.
Merge finds which one of those two items is smaller, outputs it, and then loads the next item from the input list it came from. You can see that Merge only needs $O(1)$ memory to merge two sorted lists, even if the sorted lists are stored in external storage -- it never needs to load the input lists into memory in their entirety.
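The constant-memory behaviour described above is easy to see in code. A minimal sketch of the merge step (Python used for illustration; it assumes the inputs are sorted and contain no `None` values, since `None` serves as the end-of-stream sentinel):

```python
def merge_streams(a, b):
    """Merge two sorted iterables lazily: only one element from each
    input is held in memory at any time, so the inputs can live on disk."""
    a, b = iter(a), iter(b)
    x, y = next(a, None), next(b, None)
    while x is not None and y is not None:
        if x <= y:
            yield x
            x = next(a, None)
        else:
            yield y
            y = next(b, None)
    # One stream is exhausted; drain whatever remains of the other.
    while x is not None:
        yield x
        x = next(a, None)
    while y is not None:
        yield y
        y = next(b, None)

print(list(merge_streams([1, 3, 5], [2, 4, 6])))  # [1, 2, 3, 4, 5, 6]
```

In a real external sort, `a` and `b` would be buffered readers over the sorted run files, and the standard library's `heapq.merge` implements the same idea for k sorted runs at once.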
{ "domain": "cs.stackexchange", "id": 5566, "tags": "algorithms, algorithm-analysis, sorting, quicksort" }
teb local planner optimization jumps to goal ignoring obstacles
Question: I am doing a project with a differential drive robot where I've recently started using teb local planner, which has worked great except for a single issue. When giving the robot a nav goal the robot starts out following the global plan more or less, which makes sense since teb optimizes the time and the goal is not inside the local costmap. However, while the goal is still not inside the local costmap the local planner suddenly plans all the way to the goal, in some cases straight through an obstacle. As the obstacle approaches and enters the local costmap teb replans and everything is fine, although jerky. While this works ok for the obstacles I have so far, I wonder what will happen if the local planner does this with a horseshoe-type obstacle which doesn't fit in the local costmap. Also the jerkiness is a bit of a problem. To deal with this I tried to make the planner follow the global plan as described in the tutorial at http://wiki.ros.org/teb_local_planner/Tutorials/Following%20the%20Global%20Plan%20%28Via-Points%29 I am using Ubuntu 18.04, ROS melodic and I compiled Teb local planner from source. Originally posted by robotjohan on ROS Answers with karma: 15 on 2019-09-30 Post score: 1 Answer: I had a similar issue and the problem was that I was providing a path with few via-points (they were far away), and the TEB algorithm seemed to look ahead a few points, thus finding the goal one and going for it. My solution was to provide intermediate via-points along the way (i.e. a straight line would have many intermediate points) and that solved the issue. Separately, check the max_global_plan_lookahead_dist parameter and give it a small value (say 0.5 or 1) to make sure it does not look many meters ahead, thus reaching for the goal position instead of your via-points. Originally posted by dario with karma: 46 on 2020-01-09 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by robotjohan on 2020-01-28: Thank you, this seems to help.
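Both fixes (denser via-points and a shorter lookahead) map onto parameters of teb_local_planner. A sketch of the relevant part of the planner's YAML config — parameter names are from the teb_local_planner wiki, the values are only illustrative for a small robot:

```yaml
TebLocalPlannerROS:
  # Plan at most this far along the global plan, so the optimizer
  # cannot "see" a distant goal and jump straight to it
  max_global_plan_lookahead_dist: 1.0
  # Extract a via-point from the global plan every 0.5 m
  # (a negative value disables via-points entirely)
  global_plan_viapoint_sep: 0.5
  # How strongly the trajectory is attracted to those via-points
  weight_viapoint: 1.0
```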
{ "domain": "robotics.stackexchange", "id": 33833, "tags": "ros-melodic" }
Which solution has the highest total molarity of ions and why?
Question: (a) 0.008 M $\ce{Ba(OH)2}$ (b) 0.010 M $\ce{KI}$ The answer is "a" but I do not understand why. The book says that "a" is a strong electrolyte with a total ion concentration of 0.024. How did they calculate the concentration from the information that they gave? And why is it a strong electrolyte based on that calculation? I thought strong electrolytes were (solutes) ions that dissolved completely in a solvent/solution? Answer: Barium hydroxide is a slightly soluble salt. It dissociates in water and gives 3 ions (one barium ion and two hydroxide ions), each with a concentration of $0.008~\mathrm{M}$. This makes the total ion concentration $0.024~\mathrm{M}$, while the total ion concentration of the $\ce{KI}$ solution is only $0.020~\mathrm{M}$. So the barium hydroxide solution is the stronger electrolyte here, as it has a higher concentration of dissociated ions. In your book, they treat barium hydroxide as a strong electrolyte, i.e. one that dissolves completely in water and dissociates into its constituent ions. This is of course an approximation. $\ce{KI}$, on the other hand, is a true strong electrolyte; it dissolves completely in water and dissociates into its constituent ions. So I agree with your definition of a strong electrolyte, and I hope this clarifies the ambiguity.
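The bookkeeping in one place (treating both salts as fully dissociated, as the book does):

```python
# Ions released per formula unit, assuming complete dissociation
# (the book's approximation for Ba(OH)2)
ions = {"Ba(OH)2": 3,  # Ba^2+ + 2 OH^-
        "KI": 2}       # K^+  + I^-

total_a = 0.008 * ions["Ba(OH)2"]  # total ion concentration, M
total_b = 0.010 * ions["KI"]       # total ion concentration, M
print(total_a, total_b)            # (a) wins: 0.024 M vs 0.020 M
```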
{ "domain": "chemistry.stackexchange", "id": 2070, "tags": "concentration" }
can ROS be installed on Ubuntu 9.10?
Question: Hi All, The ROS wiki has tutorials about installing ROS on Ubuntu 10.10 & 10.04, but for certain reasons I need to keep my current Ubuntu 9.10. Does anyone know if there will be problems if I install ROS on Ubuntu 9.10? Or has anyone tried that? Thanks, Zhu Originally posted by Zhu on ROS Answers with karma: 21 on 2011-03-20 Post score: 2 Answer: I believe that, according to REP 3, Box Turtle and C Turtle both support Ubuntu 9.10, Karmic Koala. CTurtle Ubuntu installation instructions That's not to say that you couldn't at least attempt to install Diamondback or later on 9.10, but you may have a longer road ahead than just the ROS installation guide. You may have to add additional packages to Ubuntu in order to support the newer dependencies that come with Diamondback. Originally posted by mjcarroll with karma: 6414 on 2011-03-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5144, "tags": "ros, installation, ubuntu" }
Are stable orbits within the event horizon of a black hole possible?
Question: This paper https://arxiv.org/abs/1103.6140 theorizes that orbits inside the event horizon of a rotating or charged black hole are not only possible but actually stable enough to potentially support life. How can this possibly be true? Using the formula for the Schwarzschild radius of a black hole, it seems to me like any radius smaller than that is another event horizon. This would mean that any orbit inside a black hole can only be an inwards spiral, even if you negate electromagnetic or gravitational wave radiation. The paper seems to say this is not true if both the black hole and the planet were charged, but why is that? Even if they had an enormously strong charge, wouldn't the force of gravity still dominate? Answer: Let's consider the charged non-rotating black hole since this is the simplest case. The geometry is described by the Reissner-Nordström metric, though we won't need to go into the gory details to get a basic idea of what is happening. If you start with zero charge this is just the Schwarzschild metric. All the mass/energy is at the singularity so no matter how close you get to the centre of the black hole all the mass is still ahead of you pulling you inwards. The result is that once you pass through the horizon the $r$ coordinate becomes timelike and remains timelike as you approach and eventually crash into the singularity. No stable orbits are possible inside the horizon. But once you charge the black hole the electric field is present both inside and outside the event horizon, and this field has an energy so it produces a gravitational force. When you're at some distance $r$ from the centre there is a part of the field behind you pulling you outwards. And the closer you get to the singularity the more of the electric field is behind you pulling you outwards. The result is that a charged black hole has two horizons.
As you pass through the outer horizon the radial coordinate becomes timelike and in this region you are doomed to fall inwards. However there is a second horizon marking the point at which the energy of the field outside balances out the mass at the singularity. As you pass through this horizon the radial coordinate becomes spacelike again and inside the second horizon you are not doomed to fall inwards. Indeed it's possible to find worldlines that travel in through the two horizons then turn round and travel back out again. For more on this see Entering a black hole, jumping into another universe---with questions. The paper contends that inside the second horizon there are stable orbits where planets could orbit and life exist. I haven't been through the paper so I can't comment. It isn't immediately obvious that stable orbits exist inside the second horizon, after all a Schwarzschild black hole has no stable orbits for $r_s \lt r \lt 3r_s$ even though this is outside the horizon. So the fact the $r$ coordinate is spacelike doesn't guarantee stable orbits exist. However since the paper was peer reviewed I assume they have done the sums correctly. However we should note that it is extremely unlikely a black hole would ever accumulate enough charge to move the inner horizon any great distance from the singularity, so realistically this is never going to happen for a charged black hole. With a rotating black hole it is more feasible, though I'm unsure if the supermassive black holes at galaxy cores are rotating fast enough to make it possible.
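For reference, the two horizons mentioned above can be written down explicitly. In geometrized units ($G = c = 1$, charge folded into $Q$) the Reissner-Nordström metric has horizons where its $g_{tt}$ coefficient vanishes:

```latex
1 - \frac{2M}{r} + \frac{Q^2}{r^2} = 0
\quad\Longrightarrow\quad
r_{\pm} = M \pm \sqrt{M^2 - Q^2}
```

The outer root $r_+$ is the event horizon and the inner root $r_-$ is the second (Cauchy) horizon discussed above; as $Q \to 0$ the inner horizon collapses onto the singularity, which is why a realistically tiny accumulated charge never moves it any useful distance away.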
{ "domain": "physics.stackexchange", "id": 93166, "tags": "black-holes, orbital-motion, event-horizon" }
Motion and acceleration in a straight line
Question: A car with an initial velocity of 3 m/s has a velocity of 34 m/s after 3 s. How far does it move in its third second of motion? My attempt: from $v = u + at$, the acceleration is $a = 10.3\ \mathrm{m/s^2}$; then $x = ut + \frac{1}{2}at^2 = 3(3) + \frac{1}{2}(10.3)(3 \times 3)$, giving $x = 56.25\ \mathrm{m}$, but the answer in the book is 29 m?? Answer: Like New_new_newbie said, the question asks you to find the distance travelled in its third second (from t = 2 to t = 3 seconds). So just find x(2) and x(3), and the difference should give you 29 m.
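Carrying that out numerically confirms the book's answer:

```python
u = 3.0               # initial velocity, m/s
v = 34.0              # velocity after 3 s, m/s
a = (v - u) / 3.0     # = 31/3, about 10.33 m/s^2

def x(t):
    """Displacement from the start after t seconds."""
    return u * t + 0.5 * a * t**2

d = x(3) - x(2)       # distance covered during the third second
print(d)              # about 28.83 m, which rounds to the book's 29 m
```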
{ "domain": "physics.stackexchange", "id": 16551, "tags": "homework-and-exercises, kinematics" }
Could a mainspring be used to generate train acceleration during starting-up from the kinetic energy stored during its slowing down?
Question: When waiting for the underground, I've always been wondering why we could not absorb the kinetic energy of the train to slow it down at the station and then release this energy to help it accelerate during start-up. This would also reduce air pollution due to braking at underground stations. Indeed, in frequently-stopping trains such as underground trains, most of the energy is used to accelerate, so I am wondering whether the train could be slowed down by a large spring that would be compressed while arriving at a station, then blocked while the train is stopped for the travelers to get out/in, then released to help the train accelerate to continue its route. Of course, the train should not go in the opposite direction when releasing the spring, so a system should be designed so that the spring can be released in the correct direction. This idea looks simple and I'm sure engineers have already thought about it; however, I've not found any discussion on this on the web. I found that on some Spanish lines, the kinetic energy during braking is converted to electricity and used by other trains from the same line (research paper), but why not keep the mechanical energy to release it directly with a spring method? Answer: I found that on some Spanish lines, the kinetic energy during braking is converted to electricity and used by other trains from the same line (research paper), but why not keep the mechanical energy to release it directly with a spring method? Because there is already a motor on the axle and there is no room for your spring. This and all the other problems that would be associated with controlled collection and release of huge amounts of energy. The KE of a 28 tonne London Underground car doing 20 m/s (72 kph) would be $\frac{1}{2}mv^2 = \frac{1}{2} \times 28000 \times 20^2 = 5.6\ \mathrm{MJ}$.
Your wind-up train would require engage / disengage mechanisms, have to allow running in both directions, have to allow free-wheeling during shunting and require some means of discharging any stored energy before maintenance. If the spring storage couldn't be mounted on the bogie some flexible drive arrangement would be required to connect the axles to the springs. Instead, many underground systems combine regenerative braking with placing the stations at the high points on the line so that they get free deceleration into a station and free acceleration out of the station. Regenerative electric braking is more efficient, the control gear requires only slight modification, it is tried and tested and avoids major mechanical mechanisms.
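The 5.6 MJ figure above is quick to verify, and converting it to kilowatt-hours gives a feel for how much energy would have to be caught and released at every single stop:

```python
m = 28_000.0       # mass of one Underground car, kg
v = 20.0           # speed, m/s (72 kph)

ke = 0.5 * m * v**2
print(ke / 1e6)    # 5.6 MJ per car, per stop
print(ke / 3.6e6)  # equivalently about 1.6 kWh
```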
{ "domain": "engineering.stackexchange", "id": 4754, "tags": "springs, energy-efficiency, energy-storage, regenerative-braking, braking" }
Do burning red coals and red-hot iron have the same temperature?
Question: It would seem that Planck's law implies that objects of similar radiation spectra have the same temperature if the objects are "similarly close" to being black bodies. Am I right to infer that burning red coals and red-hot iron have approximately the same temperature (which?) because they are both emitting mostly red light? Answer: You're pretty much right and the principle - that many hot bodies are "nearly" black bodies and therefore the colour of their radiation is related to their temperature through the Planck law (or, more succinctly, the Wien displacement law) - is the basis for the optical pyrometer (see this page on howstuffworks.com). There are some approximations though. Hot surfaces are always at least slightly different from true black bodies and this difference is summarised in the emissivity (see the wiki page with this name) which is a scale factor defining the ratio of light emitted from a hot surface to that emitted by an ideal black body at that temperature. The complicated physics giving rise to a nonunity emissivity can be frequency dependent; if so the spectrum will be distorted and an optical pyrometer, as well as we ourselves, can be tricked into thinking the temperature is different from what it really is. However, your principle is a good one, especially as a rule of thumb, and your physical reasoning is very sound and admirable.
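As a rough illustration of the temperature-colour link (assuming ideal black-body behaviour, with the Wien displacement constant $b \approx 2.898\times10^{-3}\ \mathrm{m\,K}$):

```python
b = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(T):
    """Wavelength of peak black-body emission at temperature T (kelvin),
    returned in micrometres."""
    return b / T * 1e6

# A "red hot" body near 1000 K actually peaks in the infrared;
# the red we see is the short-wavelength tail of its spectrum.
print(peak_wavelength_um(1000))  # ~2.9 um
print(peak_wavelength_um(5800))  # ~0.5 um, roughly the Sun's photosphere
```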
{ "domain": "physics.stackexchange", "id": 9378, "tags": "thermal-radiation" }
Why do we get 2 different answers for escape velocity, when we apply different laws to calculate it?
Question: To measure the escape velocity, if I use the equation $$-\frac{GMm}{r^2}=mv\frac{dv}{dr}$$ and I put my final distance to be $\infty$, then I get the answer $$u = \sqrt{\frac{2GM}{R}} \;. \tag{1}$$ Which is quite obvious! But, if I use the equations $$U_{\infty} - U_i = \frac{GMm}{R}$$ and $$K_{\infty}+U_{\infty} = K_R+U_R$$ where $K_R$ and $U_R$ are the kinetic and potential energies at $R$ respectively, and $R$ is the radius of the earth. Now in these two equations, if I put $U_{\infty} = 0$, then I get $$K_{\infty} = K_R - \frac{GMm}{R}\tag{3}$$ And since $K_{\infty}$ is positive and not zero, we end up getting that $$u > \sqrt{\frac{2GM}{R}} \tag{2}$$ That is to say that if I want to take the object to infinity, the speed must be (1), but if I want to make its final potential energy 0, then its speed must be greater than (1). Also, (1) cannot be used in (3) because that would mean the final $KE$ is 0, which it obviously is not! What is generating this contradiction? Please tell me where I am wrong. Answer: The escape velocity is the minimum velocity possible such that it escapes a gravitational well. We have from conservation of energy: $$ E = K+U = \frac{1}{2}mv^2 - \frac{GMm}{r} $$ Let's consider the initial and final energies as $E_1$ and $E_2$. To escape, you need to make sure your speed fully overrules the potential energy. The maximum possible potential energy there is, is the case $U = 0$, which happens as $r\to\infty$. In all other cases, we have $U < 0$, and thus $U = 0$ is the maximum. If your kinetic energy is greater than this maximum, you escape. So, let's make sure of this. Conservation of energy gives: $$ E_1 = K_1 + U_1 = K_2 + U_2 = E_2 $$ At the maximum, we have $U_2 = 0$, then: $$ K_1 + U_1 = K_2 \quad\implies\quad K_1 = K_2 - U_1 $$ So, if you have kinetic energy $K_1$, you escape. As you pointed out, the things at the end happen as $r\to\infty$ and therefore $E_2 = E_\infty = K_\infty + U_\infty$. Thus: $K_2 = K_\infty$.
That is, that will be your kinetic energy at infinity. $$ K_1 = K_\infty - U_1\quad\implies\quad \frac{1}{2}mv^2 = \frac{1}{2} mv_\infty^2 + \frac{GMm}{r_1} $$ The velocity you need to escape such that, at infinity, you have velocity $v_\infty$ is: $$ v = \sqrt{v_\infty^2 + \frac{2GM}{r_1}} $$ At $v_\infty = 0$, you recover the minimum velocity you need to have in order to escape. Since it is minimum, your velocity at infinity will be zero. Once you arrive at infinity, regardless if you have velocity or not, you escape, and then your potential energy is zero.
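Plugging in Earth values makes the final formula tangible (standard constants; any leftover speed at infinity simply adds in quadrature):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

def launch_speed(v_inf):
    """Surface speed needed to arrive at infinity with speed v_inf."""
    return math.sqrt(v_inf**2 + 2 * G * M / R)

print(launch_speed(0.0))     # ~11.2 km/s: the familiar escape velocity
print(launch_speed(5000.0))  # arriving with 5 km/s to spare costs ~12.3 km/s
```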
{ "domain": "physics.stackexchange", "id": 35827, "tags": "newtonian-mechanics, newtonian-gravity, energy-conservation, projectile, escape-velocity" }
Why is contact resistance measured in $\Omega~\mu m$?
Question: In many papers the contact resistance of a metal in contact with a semiconductor is given in units of $\Omega~\mu m$, for example in the paper by Li et al. (Appl. Phys. Lett. 102 (2013), p. 183110): $R_C$ for contacts formed to epitaxial graphene on SiC have been reported to be less than 100 $\Omega~\mu m$ and with specific contact resistivity ($\rho_c$) of order $10^{-7} \Omega~cm^2$. I'm not sure where this unit comes from. From the transfer line method one can determine the contact resistance as a value in $\Omega$. Normalizing it to the contact area however should bring the unit to $\frac{\Omega}{\mu m^2}$, i.e. it should be divided by the area rather than be multiplied by a length. How does one arrive at the unit $\Omega~\mu m$ for the contact resistance? What assumptions regarding the geometry of the contact enter into this? Why is the correct unit not just $\frac{\Omega}{\mu m^2}$? Answer: Through a bit more accurate research on Wikipedia and in the paper mentioned above I found out where this unit comes from. The contact resistivity is defined as the slope of the $j$-$V$ curve at 0V, where $j$ is the current density. $$\rho_c = \frac{\partial V}{\partial j}|_{V=0}$$ This gives it units of $$\frac{V}{\frac{I}{m^2}} = \frac{V m^2}{I} = \Omega~m^2$$ The contact resistance (in $\Omega$) is simply the contact resistivity divided by the effective contact area $A = w \cdot L_T \cdot \coth\big(\frac{L_C}{L_T}\big)$ $$R_C = \frac{\rho_c}{A} = \frac{\rho_c}{w \cdot L_T \cdot \coth\big(\frac{L_C}{L_T}\big)}$$ In the specific case of a metal-graphene contact, the current flows into the metal contact only in a very thin strip at the edge. This shortens the transfer length $L_T$, which is a measure for the effective contact length. In this particular system this length is quite short (~100 nm) compared to the physical size of the contact (~ 100 $\mu m$), and can therefore be ignored. This leaves only $w$ in the denominator. 
For this reason the contact resistance is very often quoted per unit width in the literature. More precisely, "ignoring" $L_T$ here means taking $\coth\big(\frac{L_C}{L_T}\big) \approx 1$ for $L_C \gg L_T$, so that $R_C = \frac{\rho_c}{w \cdot L_T}$; multiplying by the width then leaves a quantity set only by the contact itself: $$\tilde{R}_C = R_C \cdot w = \frac{\rho_c}{L_T} \rightarrow \Big[\frac{\Omega~m^2}{m}\Big] = [\Omega~m]$$
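As a sanity check, the two numbers quoted from the paper are mutually consistent: under the long-contact approximation $R_C \cdot w \approx \rho_c / L_T$, a specific contact resistivity of $10^{-7}\ \Omega\,\mathrm{cm}^2$ together with a transfer length of order 100 nm gives:

```python
rho_c = 1e-7 * 1e-4  # 1e-7 ohm*cm^2 converted to ohm*m^2
L_T = 100e-9         # transfer length ~100 nm, as in the text

R_w = rho_c / L_T    # width-normalized contact resistance, ohm*m
print(R_w * 1e6)     # 100.0 ohm*um -- matching the "<100 ohm*um" figure
```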
{ "domain": "physics.stackexchange", "id": 56476, "tags": "electrical-resistance, units, dimensional-analysis" }
using hector_trajectory_server to show trajectory in Rviz
Question: I want to show the trajectory of the quadcopter simulated in gazebo using Rviz. I have learned that ‘hector_trajectory_server’ could be used to solve this problem; refer to this old Q&A. Since I am totally new in this field, could anyone give me some guidance on how to use this node (or a tutorial on using this node)? Any help will be appreciated. Thanks a lot. Originally posted by peigong on ROS Answers with karma: 11 on 2017-08-31 Post score: 1 Answer: I tried on my launch file: <node pkg="hector_trajectory_server" type="hector_trajectory_server" name="hector_trajectory_server" output="screen"> <param name="target_frame_name" type="string" value="/map" /> <param name="source_frame_name" type="string" value="/base_link" /> <param name="trajectory_update_rate" type="double" value="4" /> <param name="trajectory_publish_rate" type="double" value="0.25" /> </node> But after running it, it does not show the "trajectory" topic to use in RViz. Please, do you know if another package must be included in the launch file to enable the trajectory (nav_msgs/Path)? Originally posted by gubertoli with karma: 36 on 2018-03-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by lucasw on 2021-08-03: The frame names shouldn't have the slashes / in them, but that doesn't appear to mess up the trajectory. By default it should publish on the trajectory topic, you have to add the Path in rviz to see it.
{ "domain": "robotics.stackexchange", "id": 28744, "tags": "rviz" }
Markdown Display in WPF
Question: I wanted to display a markdown document in WPF sensibly without relying on an HTML rendering system. I wrote a control based on the RichTextBox, and used Microsoft's markdown parser. What do you think? XAML: <RichTextBox x:Class="MarkdownViewer.MarkdownBox" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:MarkdownViewer" mc:Ignorable="d" d:DesignHeight="450" d:DesignWidth="800" x:Name="Content"> </RichTextBox> C# backend for the XAML file: public partial class MarkdownBox : RichTextBox { public static readonly DependencyProperty TextProperty = DependencyProperty.Register(nameof(Text), typeof(string), typeof(MarkdownBox), new UIPropertyMetadata(default(string), PropertyChangedCallback)); private static void PropertyChangedCallback(DependencyObject source, DependencyPropertyChangedEventArgs args) { if (source is MarkdownBox control) { var newValue = (string)args.NewValue; switch (args.Property.Name) { case nameof(Text): control.Text = newValue; break; } } } public string Text { get => (string)GetValue(TextProperty); set { var old = GetValue(TextProperty); SetValue(TextProperty, value); OnPropertyChanged(new DependencyPropertyChangedEventArgs(TextProperty, old, value)); SetTextboxContent(); } } public MarkdownBox() { InitializeComponent(); } private void Hlink_RequestNavigate(object sender, RequestNavigateEventArgs e) { Process.Start(new ProcessStartInfo(e.Uri.AbsoluteUri)); e.Handled = true; } private void SetTextboxContent() { Content.Document.Blocks.Clear(); var doc = new MarkdownDocument(); doc.Parse(Text ?? 
string.Empty); Content.Document.Blocks.AddRange(GetBlocks(doc.Blocks)); } private IEnumerable<Block> GetBlocks(IList<MarkdownBlock> blocks) { foreach (var block in blocks) { switch (block) { case HeaderBlock header: yield return GetHeaderBlock(header); break; case ParagraphBlock paragraph: yield return GetParagraphBlock(paragraph); break; case ListBlock list: yield return GetListBlock(list); break; case CodeBlock code: yield return GetCodeBlock(code); break; case QuoteBlock quote: yield return GetQuoteBlock(quote); break; case HorizontalRuleBlock rule: yield return GetRuleBlock(rule); break; case TableBlock table: yield return GetTableBlock(table); break; default: throw new NotImplementedException(); } } } private Block GetHeaderBlock(HeaderBlock header) { var headerLevels = new Dictionary<int, double> { [1] = 28, [2] = 21, [3] = 16.3833, [4] = 14, [5] = 11.6167, [6] = 9.38333, }; var content = header.Inlines.Select(GetInline); var span = new Span(); span.Inlines.AddRange(content); var labelElement = new Label { Content = span, FontSize = headerLevels[header.HeaderLevel] }; var blockElement = new BlockUIContainer(labelElement); return blockElement; } private Block GetParagraphBlock(ParagraphBlock paragraph) { var paragraphElement = new Paragraph(); paragraphElement.Inlines.AddRange(paragraph.Inlines.Select(GetInline)); return paragraphElement; } private Block GetListBlock(ListBlock list) { var listElement = new List { MarkerStyle = list.Style == ListStyle.Bulleted ? 
TextMarkerStyle.Disc : TextMarkerStyle.Decimal }; foreach (var item in list.Items) { var listItemElement = new ListItem(); listItemElement.Blocks.AddRange(GetBlocks(item.Blocks)); listElement.ListItems.Add(listItemElement); } return listElement; } private Block GetCodeBlock(CodeBlock code) { var typeConverter = new HighlightingDefinitionTypeConverter(); var avalon = new TextEditor { Text = code.Text, SyntaxHighlighting = (IHighlightingDefinition)typeConverter.ConvertFrom("C#"), FontFamily = new FontFamily("Consolas"), FontSize = 12, Padding = new Thickness(10), BorderBrush = Brushes.LightGray, BorderThickness = new Thickness(1), HorizontalScrollBarVisibility = ScrollBarVisibility.Auto, VerticalScrollBarVisibility = ScrollBarVisibility.Auto, IsReadOnly = true, ShowLineNumbers = true, MaxHeight = 250 }; return new BlockUIContainer(avalon); } private Block GetQuoteBlock(QuoteBlock quote) { var sectionElement = new Section { Background = new SolidColorBrush(Color.FromRgb(0xFF, 0xF8, 0xDC)), BorderBrush = new SolidColorBrush(Color.FromRgb(0xff, 0xeb, 0x8e)), BorderThickness = new Thickness(2, 0, 0, 0), Padding = new Thickness(5) }; var quoteBlocks = GetBlocks(quote.Blocks).ToList(); for (var i = 0; i < quoteBlocks.Count; i++) { var item = quoteBlocks[i]; item.Padding = new Thickness(5, 0, 5, 0); item.Margin = new Thickness(0); sectionElement.Blocks.Add(item); } return sectionElement; } private Block GetRuleBlock(HorizontalRuleBlock rule) { var line = new Line { Stretch = Stretch.Fill, Stroke = Brushes.DarkGray, X2 = 1 }; return new Paragraph(new InlineUIContainer(line)); } private Block GetTableBlock(TableBlock table) { var alignments = new Dictionary<ColumnAlignment, TextAlignment> { [ColumnAlignment.Center] = TextAlignment.Center, [ColumnAlignment.Left] = TextAlignment.Left, [ColumnAlignment.Right] = TextAlignment.Right, [ColumnAlignment.Unspecified] = TextAlignment.Justify }; var tableElement = new Table { BorderThickness = new Thickness(0, 0, 1, 1), BorderBrush = 
new SolidColorBrush(Color.FromRgb(0xdf, 0xe2, 0xe5)), CellSpacing = 0 }; var tableRowGroup = new TableRowGroup(); for (int rowIndex = 0; rowIndex < table.Rows.Count; rowIndex++) { var row = table.Rows[rowIndex]; var tableRow = new TableRow(); if (rowIndex % 2 == 0 && rowIndex != 0) { tableRow.Background = new SolidColorBrush(Color.FromRgb(0xf6, 0xf8, 0xfa)); } for (int cellIndex = 0; cellIndex < row.Cells.Count; cellIndex++) { var cell = row.Cells[cellIndex]; var cellContent = new Paragraph(); cellContent.Inlines.AddRange(cell.Inlines.Select(GetInline)); var tableCell = new TableCell { TextAlignment = alignments[table.ColumnDefinitions[cellIndex].Alignment], BorderBrush = new SolidColorBrush(Color.FromRgb(0xdf, 0xe2, 0xe5)), BorderThickness = new Thickness(1, 1, 0, 0), Padding = new Thickness(13, 6, 13, 6) }; tableCell.Blocks.Add(cellContent); if (rowIndex == 0) { tableCell.FontWeight = FontWeights.Bold; } tableRow.Cells.Add(tableCell); } tableRowGroup.Rows.Add(tableRow); } tableElement.RowGroups.Add(tableRowGroup); return tableElement; } private Inline GetInline(MarkdownInline element) { switch (element) { case BoldTextInline bold: return GetBoldInline(bold); case TextRunInline text: return GetTextRunInline(text); case ItalicTextInline italic: return GetItalicInline(italic); case StrikethroughTextInline strikethrough: return GetStrikethroughInline(strikethrough); case CodeInline code: return GetCodeInline(code); case MarkdownLinkInline markdownLink: return GetMarkdownLinkInline(markdownLink); case HyperlinkInline hyperlink: return GetHyperlinkInline(hyperlink); case ImageInline image: return GetImageInline(image); case SubscriptTextInline subscript: return GetSubscriptInline(subscript); case SuperscriptTextInline superscript: return GetSuperscriptInline(superscript); default: throw new NotImplementedException(); } } private Inline GetBoldInline(BoldTextInline bold) { var boldElement = new Bold(); foreach (var inline in bold.Inlines) { 
boldElement.Inlines.Add(GetInline(inline)); } return boldElement; } private static Inline GetTextRunInline(TextRunInline text) { return new Run(text.ToString()); } private Inline GetItalicInline(ItalicTextInline italic) { var italicElement = new Italic(); foreach (var inline in italic.Inlines) { italicElement.Inlines.Add(GetInline(inline)); } return italicElement; } private Inline GetStrikethroughInline(StrikethroughTextInline strikethrough) { var strikethroughElement = new Span(); strikethroughElement.TextDecorations.Add(TextDecorations.Strikethrough); foreach (var inline in strikethrough.Inlines) { strikethroughElement.Inlines.Add(GetInline(inline)); } return strikethroughElement; } private static Inline GetCodeInline(CodeInline code) { return new Run(code.Text) { Background = new SolidColorBrush(Color.FromRgb(0xef, 0xf0, 0xf1)) }; } private Inline GetMarkdownLinkInline(MarkdownLinkInline markdownLink) { var markdownLinkElement = new Hyperlink(); markdownLinkElement.Inlines.AddRange(markdownLink.Inlines.Select(GetInline)); markdownLinkElement.NavigateUri = new Uri(markdownLink.Url); markdownLinkElement.ToolTip = markdownLink.Tooltip; markdownLinkElement.RequestNavigate += Hlink_RequestNavigate; return markdownLinkElement; } private Inline GetHyperlinkInline(HyperlinkInline hyperlink) { var hyperlinkElement = new Hyperlink(); hyperlinkElement.Inlines.Add(hyperlink.Text); hyperlinkElement.NavigateUri = new Uri(hyperlink.Url); hyperlinkElement.RequestNavigate += Hlink_RequestNavigate; return hyperlinkElement; } private static Inline GetImageInline(ImageInline image) { var uri = new Uri(image.RenderUrl); var bitmap = new BitmapImage(uri); var imageElement = new Image { Source = bitmap, Height = image.ImageHeight == 0 ? double.NaN : image.ImageHeight, Width = image.ImageWidth == 0 ? 
double.NaN : image.ImageWidth, ToolTip = image.Tooltip }; return new InlineUIContainer(imageElement); } private Inline GetSubscriptInline(SubscriptTextInline subscript) { var subscriptElement = new Span(); subscriptElement.Typography.Variants = FontVariants.Subscript; foreach (var inline in subscript.Inlines) { subscriptElement.Inlines.Add(GetInline(inline)); } return subscriptElement; } private Inline GetSuperscriptInline(SuperscriptTextInline superscript) { var superscriptElement = new Span(); superscriptElement.Typography.Variants = FontVariants.Superscript; foreach (var inline in superscript.Inlines) { superscriptElement.Inlines.Add(GetInline(inline)); } return superscriptElement; } } Answer: <RichTextBox x:Class="MarkdownViewer.MarkdownBox" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:MarkdownViewer" mc:Ignorable="d" d:DesignHeight="450" d:DesignWidth="800" x:Name="Content"> </RichTextBox> Why? In the interests of KISS, why not delete this entire file, and remove partial and the constructor from the declaration of the class? private static void PropertyChangedCallback(DependencyObject source, DependencyPropertyChangedEventArgs args) { if (source is MarkdownBox control) { var newValue = (string)args.NewValue; switch (args.Property.Name) { case nameof(Text): control.Text = newValue; break; } } } Either this should be deleted or it should be commented to explain why it does anything useful, because it seems to say control.Text = control.Text and be either a no-op or an infinite recursion. 
set { var old = GetValue(TextProperty); SetValue(TextProperty, value); OnPropertyChanged(new DependencyPropertyChangedEventArgs(TextProperty, old, value)); SetTextboxContent(); } I think I see it now, and I think what this whole section should say is public static readonly DependencyProperty TextProperty = DependencyProperty.Register(nameof(Text), typeof(string), typeof(MarkdownBox), new UIPropertyMetadata(default(string), TextPropertyChanged)); private static void TextPropertyChanged(DependencyObject source, DependencyPropertyChangedEventArgs args) { // Assert that (source is MarkdownBox) && (args.Property.Name == nameof(Text)) if you want, // but IMO that's overkill. You've only registered the callback on one property. (source as MarkdownBox).SetTextboxContent(); } public string Text { get => (string)GetValue(TextProperty); set => SetValue(TextProperty, value); } And it seems to me that SetTextboxContent should be renamed UpdateTextboxContent. private void Hlink_RequestNavigate(object sender, RequestNavigateEventArgs e) { Process.Start(new ProcessStartInfo(e.Uri.AbsoluteUri)); e.Handled = true; } Since you're packaging this as a library, it should expose the event rather than handling it. private IEnumerable<Block> GetBlocks(IList<MarkdownBlock> blocks) { foreach (var block in blocks) { switch (block) { case HeaderBlock header: yield return GetHeaderBlock(header); break; case ParagraphBlock paragraph: yield return GetParagraphBlock(paragraph); break; ... Why IList instead of IEnumerable? Also, I would be tempted to pull out GetBlock, which can be rather more compact: private Block GetBlock(MarkdownBlock block) { switch (block) { case HeaderBlock header: return GetHeaderBlock(header); case ParagraphBlock paragraph: return GetParagraphBlock(paragraph); ... 
and reduce GetBlocks to private IEnumerable<Block> GetBlocks(IEnumerable<MarkdownBlock> blocks) => blocks.Select(GetBlock); private Block GetHeaderBlock(HeaderBlock header) { var headerLevels = new Dictionary<int, double> { [1] = 28, [2] = 21, [3] = 16.3833, [4] = 14, [5] = 11.6167, [6] = 9.38333, }; var content = header.Inlines.Select(GetInline); var span = new Span(); span.Inlines.AddRange(content); var labelElement = new Label { Content = span, FontSize = headerLevels[header.HeaderLevel] }; var blockElement = new BlockUIContainer(labelElement); return blockElement; } WPF, like HTML/CSS, allows separation of structure from style, but this actively works against that. At the very least the labels could use Tag so that I can write a style which uses triggers to override the hard-coded sizes, but since (again) this is published as a library I would look at defining separate subclasses for each of the header levels (or perhaps a single Header : Paragraph with a property HeaderLevel that can be used in a trigger) so that they can be styled directly. And the default values should be pulled out into Themes/Generic.xaml. Also, why BlockUIContainer(Label) instead of Paragraph? private Block GetCodeBlock(CodeBlock code) { var typeConverter = new HighlightingDefinitionTypeConverter(); var avalon = new TextEditor { Text = code.Text, SyntaxHighlighting = (IHighlightingDefinition)typeConverter.ConvertFrom("C#"), FontFamily = new FontFamily("Consolas"), FontSize = 12, Padding = new Thickness(10), BorderBrush = Brushes.LightGray, BorderThickness = new Thickness(1), HorizontalScrollBarVisibility = ScrollBarVisibility.Auto, VerticalScrollBarVisibility = ScrollBarVisibility.Auto, IsReadOnly = true, ShowLineNumbers = true, MaxHeight = 250 }; return new BlockUIContainer(avalon); } Similar comments apply: the default style should be in XAML, and IMO a library shouldn't assume that code will be C#. 
Leave SyntaxHighlighting blank and document how to use highlighting for end users who want it. private Inline GetInline(MarkdownInline element) I would re-order the code a bit to put this next to GetBlock as "high level" methods, and probably use a #region for the lower level methods for blocks and another for the inlines. private static Inline GetCodeInline(CodeInline code) { return new Run(code.Text) { Background = new SolidColorBrush(Color.FromRgb(0xef, 0xf0, 0xf1)) }; } I'm surprised that this doesn't use the same monospaced font as the code block. To finish, let me say that I love the concept. As it stands I wouldn't use the library, but if the issues I've raised (particularly about stylability) are addressed then it's possible I'll use it at some point in the future.
{ "domain": "codereview.stackexchange", "id": 35411, "tags": "c#, wpf, markdown" }
Solving the Poisson equation using Green's function
Question: I have the following equations: $$ \quad\Delta G(\bar{r}|\bar{r}') = -4\pi\delta(\bar{r}-\bar{r}') \\ \tag1$$$$ \quad\Delta\Psi(\bar{r}) = -4\pi\delta(\bar{r}-\bar{r}')\tag 2$$$$ \quad \\ \bar{r} \in V \\ \Psi(\infty) = 0 \text{ (Free space)} $$ Now in my notes it says that we treat 1 and 2 canonically, i.e. we do: $$\int_V [\Psi.\Delta G - G.\Delta\Psi]d\bar{r} $$ So my first question: what happened here? What does "canonically" mean in this context? I know the word means something like standard, or not random, but I don't see why someone would do this step. I see that it allows us to use Green's 2nd integral theorem, but how was this known a priori? Then it says that after applying Green's theorem and the definition of the (dirac) delta function, we find: $$\Psi(\bar{r}') = \int_V G(\bar{r}|\bar{r}')\rho(\bar{r}) d\bar{r} \\ \Rightarrow \Psi(\bar{r}) = \int_V G(\bar{r}|\bar{r}')\rho(\bar{r}') d\bar{r'} \text{ (symmetry)} $$ Again, how? I don't see how Green's theorem and the delta function lead to this equation. I did however find this: $$ \Delta\Psi(\bar{r}) = -4\pi\rho(\bar{r}) \\ \Rightarrow \Delta\Psi(\bar{r}) = \int -4\pi\delta(\bar{r}-\bar{r}')\rho(\bar{r}')d\bar{r}'\quad^*\\ \Rightarrow \Delta\Psi(\bar{r}) = \int\Delta G(\bar{r}|\bar{r}')\rho(\bar{r}')d\bar{r}'\\ \Rightarrow \Psi(\bar{r}) = \int_V G(\bar{r}|\bar{r}')\rho(\bar{r}') d\bar{r'} $$ I don't know if my derivation is correct, but it does seem to give me the result I'm looking for. Is there any fundamental difference between the 2 methods? $$^* \rho(\bar{r}) = \int \delta(\bar{r}-\bar{r}')\rho(\bar{r}')d\bar{r}' $$ Writing a potential density as a superposition of point densities. Answer: Since posting this question I have come to understand the answer, which I'm posting here for the sake of completeness. So my first question: what happened here? What does "canonically" mean in this context? 
I know the word means something like standard, or not random, but I don't see why someone would do this step. I see that it allows us to use Green's 2nd integral theorem, but how was this known a priori? This question has no really satisfactory answer, as it needs to be studied on a case-by-case basis. In reality, the reason why we take certain steps in a derivation depends on multiple factors. One has to take into account the information that is available or have a certain insight into how to tackle the problem. Another possibility is simple trial and error, in which case there is no deeper meaning behind why the steps were made. In the context of this problem I was given the following information: $$ \begin{cases} \Delta G(\bar{r}|\bar{r}') = -4\pi \delta(\bar{r} - \bar{r}') \\ \Delta \psi (\bar{r}) = -4\pi \rho(\bar{r})^* \end{cases}, \quad \begin{cases} \psi(\infty) = 0 \\ G(\infty) = 0 \end{cases} $$ *Mind the typo in my original question Given enough insight, one might try to link the interior (volume integral) with the boundary surface around this interior (surface integral) because of the interesting boundary conditions. This leads me to the second question. Again, how? I don't see how Green's theorem and the delta function lead to this equation. Treating it canonically now means treating it in a way that uses the insight provided in the paragraph above. Green's theorem links a volume integral with a surface integral, so we should try to manipulate the equations to satisfy Green's theorem. $$ \int_V \psi \Delta G(\bar{r},\bar{r}') - G\Delta \psi (\bar{r}) d\bar{r} = -4\pi \int_V \psi \delta(\bar{r} - \bar{r}') - G\rho(\bar{r}) d\bar{r} = \int_S (\psi \bar{\nabla} G - G \bar{\nabla}\psi)\cdot \bar{n} dS $$ The surface integral is equal to 0 because of the boundary conditions, and the middle integral becomes: $$ -4\pi \psi(\bar{r}') + 4\pi \int_V G(\bar{r}|\bar{r}') \rho(\bar{r}) d\bar{r} $$ This yields the result I was looking for. 
After all these years the answer came to me by looking at it for a few minutes; may this question rest in peace once and for all. Kudos to Samim Ul Islam, whose comment on my question also provides an answer.
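For completeness, a standard free-space example (this step is not in the original notes): with $G(\infty) = 0$, the solution of $\Delta G = -4\pi\delta(\bar{r}-\bar{r}')$ is the Coulomb kernel, and substituting it into the result above gives the familiar electrostatic potential in Gaussian units:

```latex
G(\bar{r}|\bar{r}') = \frac{1}{|\bar{r}-\bar{r}'|}
\quad\Longrightarrow\quad
\psi(\bar{r}) = \int_V \frac{\rho(\bar{r}')}{|\bar{r}-\bar{r}'|}\, d\bar{r}'
```

which is a quick way to check that the Green's-function machinery reproduces the expected superposition of point-charge potentials.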
{ "domain": "physics.stackexchange", "id": 62674, "tags": "electrostatics, potential, mathematics, greens-functions" }
How to get a large collection of quantum circuits that are valuable to practitioners?
Question: I want to get a large collection of quantum circuits that matter to practitioners in quantum computing. I want to get many complete quantum circuits, for example 1 million QASM files. However, I could not readily find a large dataset. Are there any places I could find one? Answer: I would recommend this paper: https://arxiv.org/abs/1804.03719. There you can find information on algorithms and on how to deal with noise and errors in quantum circuits. Moreover, it can help you to "generate a bunch of QASM files" implementing the algorithms presented there.
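If no ready-made dataset turns up, one fallback is to generate circuits yourself. Below is a minimal, standard-library-only sketch that emits syntactically valid OPENQASM 2.0 text; the gate set and sampling scheme are arbitrary choices of mine, and randomly drawn circuits are of course not the same as circuits that "matter to practitioners", so treat this only as scaffolding for building a corpus:

```python
import random

def random_qasm(num_qubits=4, depth=20, seed=None):
    """Return one random OPENQASM 2.0 program as a string."""
    rng = random.Random(seed)
    lines = [
        'OPENQASM 2.0;',
        'include "qelib1.inc";',
        f'qreg q[{num_qubits}];',
        f'creg c[{num_qubits}];',
    ]
    one_qubit_gates = ['h', 'x', 'z', 's', 't']  # arbitrary choice of gate set
    for _ in range(depth):
        if num_qubits > 1 and rng.random() < 0.3:
            a, b = rng.sample(range(num_qubits), 2)  # two distinct qubits
            lines.append(f'cx q[{a}],q[{b}];')
        else:
            gate = rng.choice(one_qubit_gates)
            q = rng.randrange(num_qubits)
            lines.append(f'{gate} q[{q}];')
    lines.append('measure q -> c;')  # register-wide measurement
    return '\n'.join(lines) + '\n'

# A million files would then just be a loop over random_qasm(seed=i).
print(random_qasm(num_qubits=2, depth=5, seed=0).splitlines()[0])  # OPENQASM 2.0;
```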
{ "domain": "quantumcomputing.stackexchange", "id": 1062, "tags": "circuit-construction" }
The BFS approach to the SmartWordToy challenge
Question: I'm currently working on solving the SmartWordToy TopCoder problem: The toy company "I Can't Believe It Works!" has hired you to help develop educational toys. The current project is a word toy that displays four letters at all times. Below each letter are two buttons that cause the letter above to change to the previous or next letter in alphabetical order. So, with one click of a button the letter 'c' can be changed to a 'b' or a 'd'. The alphabet is circular, so for example an 'a' can become a 'z' or a 'b' with one click. In order to test the toy, you would like to know if a word can be reached from some starting word, given one or more constraints. A constraint defines a set of forbidden words that can never be displayed by the toy. Each constraint is formatted like "X X X X", where each X is a string of lowercase letters. A word is defined by a constraint if the ith letter of the word is contained in the ith X of the constraint. For example, the constraint "lf a tc e" defines the words "late", "fate", "lace" and "face". You will be given a String start, a String finish, and a String[] forbid. Calculate and return the minimum number of button presses required for the toy to show the word finish if the toy was originally showing the word start. Remember, the toy must never show a forbidden word. If it is impossible for the toy to ever show the desired word, return -1. Definition Class: SmartWordToy Method: minPresses Parameters: String, String, String[] Returns: int Method signature: int minPresses(String start, String finish, String[] forbid) (be sure your method is public) and trying to implement the BFS approach maintaining the queue of words to analyze while keeping track of "visited" and "forbidden" words. 
Here is my current implementation: from collections import deque from itertools import product def get_combinations(word): """Get a next and previous word variation for every character.""" for index, char in enumerate(word): next_char = chr(ord(char) + 1) prev_char = chr(ord(char) - 1) # make it a cycle if char == 'a': prev_char = 'z' if char == 'z': next_char = 'a' yield word[:index] + next_char + word[index + 1:] yield word[:index] + prev_char + word[index + 1:] class SmartWordToy: def minPresses(self, start, finish, forbid): # if we've got the solution with no presses if start == finish: return 0 forbidden = set(["".join(variation) for item in forbid for variation in product(*item.split())]) # if finish is forbidden, it is never reachable if finish in forbidden: return -1 visited = set() # double-ended queue for the BFS approach queue = deque([(start, 0)]) while queue: # take the next element from queue word, presses = queue.popleft() # mark word as visited visited.add(word) # make two changes for every character and check if we reached finish for item in get_combinations(word): if item == finish: return presses + 1 # put a word variation to queue if not already visited and not forbidden if item not in forbidden and item not in visited: queue.append((item, presses + 1)) # we've got no match return -1 It works on multiple sample inputs, like: print(SmartWordToy().minPresses("aaaa", "gzzb", {})) # prints 9 print(SmartWordToy().minPresses("aaaa", "zaaz", {"z z a a"})) # prints 2 print(SmartWordToy().minPresses("aaaa", "zzzz", {"z z z z"})) # prints -1 print(SmartWordToy().minPresses("aaaa", "aaaa", {})) # prints 0 But, if the "edit distance" (the presses counter) is becoming larger, like for example on the following input: SmartWordToy().minPresses("aaaa", "aagg", {}) The program quickly starts to eat a lot of memory (1GB in just a few seconds on my machine) and exceeds the allowed time limit as well. And, this is not even a worst case. 
How would you recommend improving my current approach and keeping track of visited words in a more efficient manner? I will appreciate any other feedback. Answer: You are marking a vertex as visited while taking it out of the queue, not while pushing it. Hence there can be a large number of copies of the same vertex in the queue, worsening the space and time complexity. The solution is to mark a vertex as visited while pushing it into the queue.
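To make the reviewer's fix concrete, here is a sketch of the corrected loop (not the full SmartWordToy solution; a toy two-letter neighbour function stands in for get_combinations so the example is self-contained). The only substantive change from the question's code is that words are added to visited when they are pushed:

```python
from collections import deque

def bfs_min_presses(start, finish, forbidden, neighbours):
    """BFS that marks words visited when they are pushed, not when popped,
    so each word enters the queue at most once."""
    if start == finish:
        return 0
    visited = {start}                     # mark the start word immediately
    queue = deque([(start, 0)])
    while queue:
        word, presses = queue.popleft()
        for item in neighbours(word):
            if item == finish:
                return presses + 1
            if item not in forbidden and item not in visited:
                visited.add(item)         # mark on push -> no duplicates in queue
                queue.append((item, presses + 1))
    return -1

# Tiny neighbour function over a two-letter alphabet for demonstration:
def flip_each(word):
    for i, ch in enumerate(word):
        yield word[:i] + ('a' if ch == 'b' else 'b') + word[i + 1:]

print(bfs_min_presses('aa', 'bb', set(), flip_each))  # 2
```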
{ "domain": "codereview.stackexchange", "id": 24280, "tags": "python, performance, breadth-first-search, memory-optimization" }
Reactive Properties for easy property binding
Question: This is my attempt to write a data-bindable, platform independent property that doesn't require any literal strings. The basic idea is to basically have the same functionality as IObservable, but with a value which has a getter (and a setter in a derived interface). The core interface only has a getter: public interface IPropertySource<out T> { T Value { get; } IDisposable RawSubscribe(Action rawObserver); } Since it has a getter, the Action in RawSubscribe doesn't need to receive a value. This can be turned into a monad and add Linq operators with some extra code (I've removed all the null checks to make it shorter) (note that this code requires a reference to reactive extensions): internal class ExplicitPropertySource<T> : IPropertySource<T> { private readonly Func<T> _GetValue; private readonly Func<Action, IDisposable> _RawSubscribe; public T Value { get { return _GetValue(); } } public ExplicitPropertySource(Func<Action, IDisposable> rawSubscribe, Func<T> getValue) { _RawSubscribe = rawSubscribe; _GetValue = getValue; } public IDisposable RawSubscribe(Action rawObserver) { return _RawSubscribe(rawObserver); } } public static partial class PropertySource { public static IPropertySource<T> Create<T>(Func<Action, IDisposable> rawSubscribe, Func<T> getValue) { return new ExplicitPropertySource<T>(rawSubscribe, getValue); } public static IPropertySource<T> Return<T>(T value) { return PropertySource.Create(observer => Disposable.Empty, () => value); } public static IPropertySource<T> Distinct<T>(this IPropertySource<T> source, IEqualityComparer<T> comparer) { return PropertySource.Create( action => { T cachedValue = default(T); Action<T> sendAndCache = value => { action(); cachedValue = value; }; Action sendIfChanged = () => { var value = source.Value; if (!comparer.Equals(value, cachedValue)) sendAndCache(value); }; Action observer = null; observer = () => { observer = sendIfChanged; sendAndCache(source.Value); }; return source.RawSubscribe(() => { 
observer(); }); }, () => source.Value ); } public static IPropertySource<T> Distinct<T>(this IPropertySource<T> source) { return source.Distinct(EqualityComparer<T>.Default); } public static IPropertySource<T> Eager<T>(this IPropertySource<T> source) { return PropertySource.Create( action => { action(); return source.RawSubscribe(action); }, () => source.Value ); } public static IPropertySource<T> Lazy<T>(this IPropertySource<T> source) { return PropertySource.Create( action => { Action send = () => { }; var subscription = source.RawSubscribe(() => send()); send = action; return subscription; }, () => source.Value ); } public static IPropertySource<TResult> SelectMany<TSource, TResult>(this IPropertySource<TSource> source, Func<TSource, IPropertySource<TResult>> selector) { return PropertySource.Create( observer => { IDisposable rightSubscription = Disposable.Empty; Func<TSource, IPropertySource<TResult>> reattachRight = leftValue => { rightSubscription.Dispose(); var rightSource = selector(leftValue); rightSubscription = rightSource.Lazy().RawSubscribe(observer); return rightSource; }; IDisposable leftSubscription = source.Lazy().RawSubscribe(() => { reattachRight(source.Value); observer(); }); reattachRight(source.Value); return Disposable.Create(() => { leftSubscription.Dispose(); rightSubscription.Dispose(); }); }, () => selector(source.Value).Value ); } public static IPropertySource<TResult> Select<TSource, TResult>(this IPropertySource<TSource> source, Func<TSource, TResult> selector) { return source.SelectMany(value => PropertySource.Return(selector(value))); } public static IPropertySource<TResult> Merge<TLeft, TRight, TResult>(this IPropertySource<TLeft> left, IPropertySource<TRight> right, Func<TLeft, TRight, TResult> selector) { return left.SelectMany(leftValue => right.Select(rightValue => selector(leftValue, rightValue)) ); } } Since I'm using WinForms, I can then create these using standard WinForms properties: public class MemberAccessInfo { public 
readonly object Instance; public readonly string MemberName; public MemberAccessInfo(object instance, string memberName) { Instance = instance; MemberName = memberName; } } public static partial class PropertySource { private static MemberAccessInfo GetMemberAccessInfo<T>(Expression<Func<T>> memberAccessExpression) { LambdaExpression lambda = (LambdaExpression)memberAccessExpression; if (lambda.Body.NodeType != ExpressionType.MemberAccess) throw new Exception("Expression must be a member access."); MemberExpression memberExpr = (MemberExpression)lambda.Body; var instance = Expression.Lambda(memberExpr.Expression).Compile().DynamicInvoke(); return new MemberAccessInfo(instance, memberExpr.Member.Name); } internal static IPropertySource<T> FromProperty<T>(object instance, Type instanceType, PropertyInfo memberInfo) { var eventInfo = instanceType.GetEvent(memberInfo.Name + "Changed", BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public); Func<Action, IDisposable> subscribe; if (eventInfo != null && eventInfo.EventHandlerType == typeof(EventHandler)) { subscribe = observer => { EventHandler handler = (s, e) => observer(); eventInfo.AddEventHandler(instance, handler); return Disposable.Create(() => eventInfo.RemoveEventHandler(instance, handler)); }; } else { var notifyPropertyChanged = instance as INotifyPropertyChanged; if (notifyPropertyChanged == null) throw new Exception("This member cannot be observed."); subscribe = observer => { PropertyChangedEventHandler handler = (s, e) => { if (e.PropertyName == memberInfo.Name) observer(); }; notifyPropertyChanged.PropertyChanged += handler; return Disposable.Create(() => notifyPropertyChanged.PropertyChanged -= handler); }; } return Create(subscribe, () => (T)memberInfo.GetValue(instance)); } public static IPropertySource<T> FromProperty<T>(object instance, string propertyName) { var type = instance.GetType(); return FromProperty<T>(instance, type, type.GetProperty(propertyName)); } public static 
IPropertySource<T> FromProperty<T>(Expression<Func<T>> memberAccessExpression) { var propertyInfo = GetMemberAccessInfo(memberAccessExpression); return FromProperty<T>(propertyInfo.Instance, propertyInfo.MemberName); } } And finally, subscribe with these: public static partial class PropertySource { public static IDisposable Subscribe<T>(this IPropertySource<T> source, Action<T> observer, IEqualityComparer<T> comparer) { return source.Eager().Distinct(comparer).RawSubscribe(() => observer(source.Value)); } public static IDisposable Subscribe<T>(this IPropertySource<T> source, Action<T> observer) { return source.Eager().Distinct().RawSubscribe(() => observer(source.Value)); } } So I can do stuff like: var firstNameProperty = PropertySource.FromProperty(() => firstNameTextBox.Text); var lastNameProperty = PropertySource.FromProperty(() => lastNameTextBox.Text); var fullNameProperty = firstNameProperty.Merge(lastNameProperty, (fn, ln) => fn + " " + ln); var disposable = fullNameProperty.Subscribe(val => fullNameLabel.Text = val); This seems to me like such an obvious missing feature that I was wondering if something like this already exists, and if not, whether I should make any improvements to my code. Also, I'm still not 100% sure my monad is actually a monad, but so far it's been working fine. This whole thing probably doesn't make much sense without the IProperty interface which also has a property setter, but is not a monad (it does have SelectMany and Select methods though), and can also be created given a WinForms property. I'm not including it here since it's already pretty big, but I uploaded the whole thing to GitHub, and included a small sample WinForms application. 
Answer: T cachedValue = default(T); Action<T> sendAndCache = value => { action(); cachedValue = value; }; Action sendIfChanged = () => { var value = source.Value; if (!comparer.Equals(value, cachedValue)) sendAndCache(value); }; Action observer = null; observer = () => { observer = sendIfChanged; sendAndCache(source.Value); }; return source.RawSubscribe(() => { observer(); }); Here, you create a special initial observer which always sends the value (since there is nothing to compare it to) and then replace it with a normal comparing observer. I think it would be simpler and clearer to have a single observer, along with a variable tracking whether the cachedValue has been set: T cachedValue = default(T); bool cachedValueSet = false; Action observer = () => { var value = source.Value; if (!cachedValueSet || !comparer.Equals(value, cachedValue)) { action(); cachedValue = value; cachedValueSet = true; } }; return source.RawSubscribe(observer); () => selector(source.Value).Value What happens if source.Value is null? I'm also wondering about the monad thing: does using LINQ queries on your objects make sense? If not, I probably wouldn't use names like Select or SelectMany.
{ "domain": "codereview.stackexchange", "id": 20911, "tags": "c#, winforms, reactive-programming" }
How to write this method with similar checks and multiple returns more elegantly?
Question: I have written the following method which works as expected. But is there a way to write this more elegantly in Java 8? Appreciate any feedback. Thanks. private String get(Details details, Value value){ Type type = Type.getEvent(details.getType()); if(type != null){ if("SAMPLE".equals(type.typeDesc())){ if(SPIN.equals(details.getSpin())){ // this if looks very similar to last line.. return BACKUP + SAMPLE + value.getId(); } return DEFAULT_VAL + SAMPLE + value.getId(); } return value.getRoute(); } return StringUtils.contains(details.getSpin(), SPIN) ? BACKUP : DEFAULT_VAL; } Answer: I think it's a personal thing, but I prefer == null checks with early exits, because that way you have all exception paths at the beginning and the normal execution path is less indented. For your string concatenation you could make a ternary if and have one return statement less. private String get(Details details, Value value){ Type type = Type.getEvent(details.getType()); if(type == null) { return StringUtils.contains(details.getSpin(), SPIN) ? BACKUP : DEFAULT_VAL; } if("SAMPLE".equals(type.typeDesc())) { String prefix = SPIN.equals(details.getSpin()) ? BACKUP : DEFAULT_VAL; return prefix + SAMPLE + value.getId(); } return value.getRoute(); } Now looking at this, you check the spin for equals and contains. Is this intentional? If not you could further reduce the code by pulling the prefix part up like this: private String get(Details details, Value value){ String prefix = SPIN.equals(details.getSpin()) ? BACKUP : DEFAULT_VAL; Type type = Type.getEvent(details.getType()); if(type == null) { return prefix; } if("SAMPLE".equals(type.typeDesc())) { return prefix + SAMPLE + value.getId(); } return value.getRoute(); } In general you should use StringBuilder for string concatenation.
{ "domain": "codereview.stackexchange", "id": 36232, "tags": "java" }
Database Connection Program
Question: Earlier today I asked a question as a guest (whoops) regarding my Database program. I could not comment nor post again asking for further advice from the people that answered my question. I have worked on the advice given to me in that answer which can be seen here; Database Connection Program With HR Functionality I am still looking for advice on my further classes, something I cannot ask for in the previous question. Being completely new to CodeReview, I shall post again in the hope I can still find advice on my code, however if this is the wrong thing to do then please do notify me. User Selection public partial class UserSelection : Page { public UserSelection() { InitializeComponent(); dataGrid.CanUserAddRows = false; string username = Environment.UserName; } /*** * * Take all the data from the DataAccessor method FillDataGrid() and Trim() white space * ***/ private void FillDataGrid(object sender, RoutedEventArgs e) { DataAccessor da = new DataAccessor(); DataTable dt = da.FillDataGrid(); foreach (DataRow dr in dt.Rows) { if (!dr.IsNull("em_netname")) { dr["em_netname"] = dr["em_netname"].ToString().Trim(); } if (!dr.IsNull("em_dept")) { dr["em_dept"] = dr["em_dept"].ToString().Trim(); } if (!dr.IsNull("em_name")) { dr["em_name"] = dr["em_name"].ToString().Trim(); } if (!dr.IsNull("em_init")) { dr["em_init"] = dr["em_init"].ToString().Trim(); } } dataGrid.ItemsSource = dt.AsDataView(); } /*** * * Load the user's image from the S drive, if no image exists load noimage.png * ***/ private void LoadUserImage(object sender, SelectionChangedEventArgs e) { try { DataRowView dataRow = (DataRowView)dataGrid.SelectedItem; string username = dataRow.Row.ItemArray[2].ToString(); userImage.Source = new BitmapImage(new Uri(@"S:\Picture\"+username+".jpg")); //" } catch { userImage.Source = new BitmapImage(new Uri(@"C:\Users\DanD\Desktop\noimage.png")); } } /*** * * On clicking the HR button, load the HRsystem with the desired person * ***/ private void 
hrButton_Click(object sender, RoutedEventArgs e) { DataAccessor da = new DataAccessor(); DataRowView dataRow = (DataRowView)dataGrid.SelectedItem; if (dataRow != null) { Properties.Settings.Default.UserID = dataRow.Row.ItemArray[0].ToString(); // Add the selected Users ID to the properties settings file Properties.Settings.Default.Save(); da.SetUserDetails(); NavigationService.Navigate(new Uri(@"View/HRSystem/HRSystem.xaml", UriKind.Relative)); } else { MessageBox.Show("Please Select a User!"); } } /*** * * Chane the names of the existing columns, called when the columns are generated * ***/ private void ChangeColumnNames(object sender, DataGridAutoGeneratingColumnEventArgs e) { if (e.PropertyName.StartsWith("em_pplid")) { e.Column.Header = "ID"; } if (e.PropertyName.StartsWith("em_name")) { e.Column.Header = "Name"; } if (e.PropertyName.StartsWith("em_netname")) { e.Column.Header = "Net Name"; } if (e.PropertyName.StartsWith("em_init")) { e.Column.Header = "Initials"; } if (e.PropertyName.StartsWith("em_dept")) { e.Column.Header = "Department"; } } /*** * * Search the grid based on which radio button is selected, called when text is entered into the text box * ***/ private void SearchGrid(object sender, TextChangedEventArgs e) { DataView dv = dataGrid.ItemsSource as DataView; if (nNameRad.IsChecked == true) { dv.RowFilter = "em_netname LIKE '%" +searchBox.Text+ "%'"; } if (deptRad.IsChecked == true) { dv.RowFilter = "em_dept LIKE '%" + searchBox.Text + "%'"; } if (sNameRad.IsChecked == true) { dv.RowFilter = "em_name LIKE '%" + searchBox.Text + "%'"; } if (initRad.IsChecked == true) { dv.RowFilter = "em_init LIKE '%" + searchBox.Text + "%'"; } } private void AddEmployee(object sender, RoutedEventArgs e) { NavigationService.Navigate(new Uri(@"View/HRSystem/AddEmployee.xaml", UriKind.Relative)); } } Data Accessor class DataAccessor { /*** * * Select all of the valid users for use by the DataGrid in UserSelection.xaml * ***/ public DataTable FillDataGrid() { string 
constr = ConfigurationManager.ConnectionStrings["dbfString"].ConnectionString; using (OleDbConnection dbfCon = new OleDbConnection(constr)) { try { dbfCon.Open(); DataTable dTable = new DataTable(); OleDbCommand MyQuery = new OleDbCommand("SELECT em_pplid, em_name, em_netname, em_init, em_dept FROM employs WHERE em_netname NOT LIKE '' AND em_type != 2", dbfCon); OleDbDataAdapter dataAdapter = new OleDbDataAdapter(MyQuery); dataAdapter.Fill(dTable); return dTable; } catch (OleDbException) { throw; } } } /*** * * Select the user's details and place into a list<String> for use by the HRsystem * ***/ public List<string> SetUserDetails() { var userID = Properties.Settings.Default.UserID; string constr = ConfigurationManager.ConnectionStrings["dbfString"].ConnectionString; using (OleDbConnection dbfCon = new OleDbConnection(constr)) { try { dbfCon.Open(); var cmdString = string.Format("SELECT em_surname, em_name, em_netname, em_init, em_dept, em_title FROM employs WHERE em_pplid = {0};", userID); OleDbCommand dbfCmd = new OleDbCommand(cmdString, dbfCon); OleDbDataReader myReader = dbfCmd.ExecuteReader(); List<string> listUser = new List<string>(); while (myReader.Read()) { listUser.Add(myReader[0].ToString()); //Surname listUser.Add(myReader[1].ToString()); //Name listUser.Add(myReader[2].ToString()); //Netname listUser.Add(myReader[3].ToString()); //Initials listUser.Add(myReader[4].ToString()); //Department listUser.Add(myReader[5].ToString()); //Job Title return listUser; } } catch (OleDbException) { throw; } } return null; } } Answer: Before I get started, I would suggest using an ORM for database access whenever possible. However, the remainder of this answer will assume that is not an option (for whatever reason). Use the MySQL ADO connector library You are currently using OleDB to access your MySql database (indicated by tag), but there is actually a MySql-specific library you can use. 
It uses the ADO.NET interfaces, so your code will have few changes aside from which objects you work with (e.g., MySqlConnection instead of OleDbConnection). It appears that the MySql team also maintains a MySql.Data nuget package as well, making this even easier: Install-Package MySql.Data Use parameterized queries instead of string formats var cmdString = string.Format("SELECT em_surname, em_name, em_netname, em_init, em_dept, em_title FROM employs WHERE em_pplid = {0};", userID); Building queries with string.Format is dangerous, as it leaves you open to SQL Injection attacks. This particular case is using a value from the app settings, so perhaps it's not likely in this case, but it's a good idea to get into the habit of using parameterized queries anyways. For MySql, you inject @-delimited names: var cmdString = string.Format("SELECT em_surname, em_name, em_netname, em_init, em_dept, em_title FROM employs WHERE em_pplid = @userID"); Then, you add it to your command with the following: dbfCmd.Parameters.Add(new MySqlParameter("@userID", userID)); Dispose all the things While you have a using statement for your connection object, there are a few other objects which implement IDisposable you missed. Both the command and reader objects need using statements. Naming The SetUserDetails method doesn't set anything. It would be more appropriate to call it GetUserDetails, since you are retrieving information. Your variable names could use some work as well - they are somewhat inconsistent. Some have prefixes, while others do not. Some are camelCase, while others are PascalCase (the convention is camelCase). I left these changes as an exercise for the reader :) Use well-defined data objects I also noticed that SetUserDetails returns a List<string> which contains various user detail properties.
It would be better to create a data structure to hold this and return it instead: public class UserDetails { public string Surname { get; set; } public string Name { get; set; } public string Netname { get; set; } public string Initials { get; set; } public string Department { get; set; } public string JobTitle { get; set; } } Then, your while loop can look as follows: while(myReader.Read()) { var details = new UserDetails { Surname = myReader[0].ToString(), //Surname Name = myReader[1].ToString(), //Name Netname = myReader[2].ToString(), //Netname Initials = myReader[3].ToString(), //Initials Department = myReader[4].ToString(), //Department JobTitle = myReader[5].ToString(), //Job Title }; listUser.Add(details); } How I learned to stop worrying and love the var When types are obvious, the common convention is to use var. var all the things! Return the simplest data types possible SetUserDetails returns a List<string>, when it should probably be returning IEnumerable<T>. If you need add/remove functionality, you can use ICollection<T>, and if you really need list functions, you can use IList<T>. Take advantage of ADO.NET interfaces Rather than directly using the command and reader object constructors, there are methods you can use to build them. When building a command object, you can use your connection object with the IDbConnection.CreateCommand method. 
Final Version Taking the above suggestions into account, I came up with the following for your two DAL methods: public DataTable FillDataGrid() { string constr = ConfigurationManager.ConnectionStrings["dbfString"].ConnectionString; using(var dbfCon = new MySqlConnection(constr)) { dbfCon.Open(); using(var MyQuery = dbfCon.CreateCommand()) { MyQuery.CommandText = "SELECT em_pplid, em_name, em_netname, em_init, em_dept FROM employs WHERE em_netname NOT LIKE '' AND em_type != 2"; var dTable = new DataTable(); var dataAdapter = new MySqlDataAdapter(MyQuery); dataAdapter.Fill(dTable); return dTable; } } } public IEnumerable<UserDetails> GetUserDetails() { var userID = Properties.Settings.Default.UserID; var constr = ConfigurationManager.ConnectionStrings["dbfString"].ConnectionString; using(var dbfCon = new MySqlConnection(constr)) { dbfCon.Open(); using(var dbfCmd = dbfCon.CreateCommand()) { dbfCmd.CommandText = "SELECT em_surname, em_name, em_netname, em_init, em_dept, em_title FROM employs WHERE em_pplid = @userID"; dbfCmd.Parameters.Add(new MySqlParameter("@userID", userID)); using(var myReader = dbfCmd.ExecuteReader()) { var listUser = new List<UserDetails>(); while(myReader.Read()) { var details = new UserDetails { Surname = myReader[0].ToString(), //Surname Name = myReader[1].ToString(), //Name Netname = myReader[2].ToString(), //Netname Initials = myReader[3].ToString(), //Initials Department = myReader[4].ToString(), //Department JobTitle = myReader[5].ToString(), //Job Title }; listUser.Add(details); } return listUser; } } } } Disclosure: I do an increasing amount of functional work lately, so I tend to use var more than others might. Feel free to put back some of the explicit type names if you feel uncomfortable leaving them as var.
{ "domain": "codereview.stackexchange", "id": 16162, "tags": "c#, beginner, mysql, database" }
These two operators commute...but their eigenvectors aren't all the same. Why?
Question: The Hamiltonian $$H = \left[ \begin{array}{cccc} a & 0 & 0 & -b \\ 0 & 0 & -b & 0\\ 0 & -b & 0 & 0\\ -b & 0 & 0 & -a \end{array} \right] $$ commutes with the qubit exchange operator $$ P = \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{array} \right] $$ So I would expect the two to have the same eigenvectors. The eigenvectors of $P$ are easily seen to be $(1,0,0,0)^T; (0,0,0,1)^T; (0,1,1,0)^T ; (0,1,-1,0)^T$. The latter two are also eigenvectors of $H$, but the first two are not. Why? I thought commuting operators shared the same eigenbasis? Answer: Call $u_1, u_2, u_3, u_4$ the eigenvectors described by you, respectively. Your claims are all right, but realize that both $u_1$ and $u_2$ share the same eigenvalue, that is $1$, i.e., $Pu_1=u_1$ and $Pu_2=u_2$. Hence, any linear combination of $u_1$ and $u_2$ will also be eigenvectors with the same eigenvalue $1$. Try to find eigenvectors of $H$ of the form $\alpha u_1+\beta u_2$, with $\alpha$ and $\beta$ being constants.
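The degeneracy argument is easy to verify numerically. Here is a small NumPy sketch (with hypothetical values $a=2$, $b=1$, chosen so that all four eigenvalues of $H$ are distinct):

```python
import numpy as np

# Hypothetical values for a and b (the question leaves them symbolic)
a, b = 2.0, 1.0
H = np.array([[ a,  0,  0, -b],
              [ 0,  0, -b,  0],
              [ 0, -b,  0,  0],
              [-b,  0,  0, -a]])
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

# The operators commute...
assert np.allclose(H @ P, P @ H)

# ...but (1,0,0,0) is an eigenvector of P (eigenvalue 1) and not of H:
e1 = np.array([1.0, 0.0, 0.0, 0.0])
print(H @ e1)            # (a, 0, 0, -b), not proportional to e1

# Every eigenvector of H, however, IS an eigenvector of P, because for
# these values H's spectrum (+-sqrt(a^2+b^2), +-b) is non-degenerate:
vals, vecs = np.linalg.eigh(H)
for v in vecs.T:
    lam = v @ (P @ v)    # P-eigenvalue on this vector, +1 or -1
    assert np.allclose(P @ v, lam * v)
```

Within the degenerate $+1$ eigenspace of $P$ spanned by $u_1$ and $u_2$, the loop above effectively picks out the particular combinations $\alpha u_1+\beta u_2$ that diagonalise $H$.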
{ "domain": "physics.stackexchange", "id": 17136, "tags": "quantum-mechanics, homework-and-exercises, quantum-information, hilbert-space, linear-algebra" }
How does time translational invariance and linearity imply irreducible solutions?
Question: The author of the book THE PHYSICS OF WAVES has mentioned on page 69, at the start of the second last paragraph, that The point is worth repeating: Time translation invariance and linearity imply that we can always find irreducible solutions, (3.67), in which all the degrees of freedom oscillate with the same frequency. I searched a lot of books and websites to understand the reason behind this but could not find a satisfactory answer. I understand the concepts of invariance and linearity, but the author has used a very specific kind of solution (on page 11, equation 1.38), which he called irreducible solutions. I don't understand why time translation invariance and linearity imply those particular solutions. Any help or resource will be appreciated. Thanks in advance. Answer: What the author (Howard Georgi) is saying is that the derivation of the form of an irreducible solution in 1 DoF did not use the fact that there was only one degree of freedom. Therefore, the derivation applies to a system with $N$ coordinates. Linearity means that if $z(t)$ is a solution to the equations of motion, then: $$ h \times z(t) $$ is also a solution, where $h$ is some number, which we eventually decide is $\in\mathbb{C}$. Time translation invariance means we can shift our clock by $a$: $$ t\rightarrow t+a $$ and our solution satisfies: $$ z(t) \rightarrow z(t+a) = h(a)z(t) $$ He then shows that: $$||h(a)|| = 1$$ Putting it all together: $$ h(a)= e^{-i\omega a}$$ This is significant.
We know the solution is: $$ z(t) = Ae^{-i(\omega t+\phi)}$$ which we are taught to think of as an oscillating solution in which the acceleration is proportional to, but in opposition to, the displacement, which we implement as: $$ x(t)\equiv \Re(z(t))=A\cos(\omega t+\phi)$$ However, since: $$ z(t+a) = Ae^{-i(\omega (t+a)+\phi)} = e^{-i\omega a} Ae^{-i(\omega t+\phi)} = h(a)z(t)$$ we can think of it as: $$ z(t)=z(0+a)=h(t)z(0)$$ That is, the system is described by a state vector (in a 1D space) with some constant amplitude and an initial phase factor: $$ z(0) = Ae^{-i\phi}$$ with time evolution being a simple phase factor: $$h(t) = e^{-i\omega t}$$ The time evolution and initial state vector were required to solve: $$[m^{-1}k-\omega^2]A=0$$ Since $A$ is just a number, that required that the frequency describing time evolution satisfy: $$\omega^2 = \frac k m $$ So...when moving to a system of $N$ oscillators, the system is still described by state vectors, $\vec A^{(i)}$, and special frequencies, $\omega_i$, that satisfy: $$ [{\bf M}^{-1}{\bf K}-\omega_i^2{\bf I}]\vec A^{(i)}=0 $$ which is an eigenvalue equation with $N$ solutions (degeneracies notwithstanding). Each combination of configurations $\vec A^{(i)}$ describe the normal modes of the system, oscillating at eigenfrequency $\omega_i$. The key insight from time-invariance and linearity condition (the original eq 1.68), is that the coordinate displacements that go into the normal modes must all oscillate at the same frequency (and in phase). That is simply not obvious if you look at the system as $N$ coupled oscillators with $N$ solutions all oscillating at combinations of different frequencies and phases: $$ x_i(t) =ce^{-i\omega t+\phi}+ c'e^{-i\omega' t+\phi'}+c''e^{-i\omega''t+\phi''}+\ldots$$
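To make the $N$-oscillator statement concrete, here is a small NumPy sketch for a hypothetical system of two equal masses joined to the walls and to each other by three identical springs, solving the eigenvalue equation $[\mathbf{M}^{-1}\mathbf{K}-\omega_i^2\mathbf{I}]\vec A^{(i)}=0$ discussed above:

```python
import numpy as np

# Hypothetical two-mass, three-spring chain (wall-m-m-wall), with m = k = 1
m, k = 1.0, 1.0
M = np.diag([m, m])
K = np.array([[2*k, -k],
              [-k, 2*k]])

# Eigenvalue problem [M^-1 K - w^2 I] A = 0
w2, A = np.linalg.eig(np.linalg.inv(M) @ K)
order = np.argsort(w2.real)
w = np.sqrt(w2[order].real)      # eigenfrequencies
modes = A[:, order]              # normal-mode shapes (columns)

print(w)              # [1.0, 1.732...]: each normal mode has ONE frequency
print(modes[:, 0])    # in-phase mode, proportional to (1, 1) up to sign
print(modes[:, 1])    # out-of-phase mode, proportional to (1, -1) up to sign
```

All coordinates within a given mode oscillate at that mode's single eigenfrequency; a general motion is then a superposition of these modes, exactly as in the text.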
{ "domain": "physics.stackexchange", "id": 80595, "tags": "classical-mechanics, waves, symmetry, coupled-oscillators" }
Explicit address for camera calibration file
Question: I have a few different platforms that are all using the same package. Each platform has a different stereo camera calibration file.yaml Each platform has its own launch file, which gets called (roughly) this way: <node pkg="nodelet" type="nodelet" name="camera1394_nodelet" args="load camera1394/driver left_manager"> <rosparam file="/home/turtlebot/workingDir/cam_info/00b09d0100c8851b.yaml" /> <param name="guid" value="00b09d0100c8851b" /> </node> Which seems to work, but within each yaml file the camera_info_url is defined this way: camera_info_url: package://hast/cam_info/left_00b09d0100c8851b.yaml but I would rather call it this way: camera_info_url: /home/turtlebot/BitSync/hast/workingDir/cam_info/left_00b09d0100c8851b.yaml This way I can keep and update the files in a "central server" (via bittorrent sync) instead of embedded within the package on each platform. However, the second method fails and does not load the yaml file properly. Is there a different way to define the camera parameters, or am I stuck with the file in each package? Originally posted by benabruzzo on ROS Answers with karma: 79 on 2014-10-10 Post score: 0 Answer: Like many other ROS camera drivers, camera1394 uses the camera_info_manager package for resolving camera configuration information. That package can handle many URL options. If I understand the question correctly, you want to store calibration files for all your cameras together, somewhere separate from your ROS packages. The easiest way to do that is via the ${NAME} substitution parameter, which gets resolved to the name of each specific device, 00b09d0100c8851b in your example. To accomplish that, set the URL this way: camera_info_url: file:///home/turtlebot/BitSync/hast/workingDir/cam_info/${NAME}.yaml Then, store all your calibration files in that directory using each camera's unique 16 hex digit name plus the .yaml suffix. 
You can even use the identical URL for calibrating the camera initially, camera_info_manager will store the file in the specified directory. After that, you would need to update the bit torrent manually. Originally posted by joq with karma: 25443 on 2014-10-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by benabruzzo on 2014-10-14: Would this also be called during the launch file as: <rosparam file="/home/turtlebot/workingDir/cam_info/${NAME}.yaml" /> ? Comment by benabruzzo on 2014-10-14: Thanks, that did work. Just FYI for others, no, the launch file needs to have the actual name of the yaml file: <rosparam file="/home/turtlebot/BitSync/hast/workingDir/cam_info/00b09d0100c884f8.yaml" /> Comment by joq on 2014-10-15: That's right, ${NAME} is resolved only for the URL parameter, not in the launch file itself.
{ "domain": "robotics.stackexchange", "id": 19702, "tags": "calibration, roslaunch, camera1394, camera, camera-info-manager" }
The image charge inside the conducting sphere without producing any electric field
Question: In the method of images for a grounded conducting sphere, we calculate the position of the image charge at $$\frac{R^2}{r}$$, which seems to imply that there should be an electric field inside the conducting sphere, violating Gauss' law. I know it's a misconception but I am confused. Answer: The image charge there does not mean that "there should be an electric field inside the conducting sphere"; there should be no net electric field in the conductor. It just mimics the electric field distribution outside the sphere. The electric field distribution outside the conducting sphere is the same as in the situation where there is no conductor but an image charge placed there. It says nothing about the electric field inside the sphere. The figure here is a cartoon diagram of an induced grounded conducting sphere (from Wikimedia Commons), which may help you understand the situation.
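This is easy to check numerically. Here is a short Python sketch (hypothetical values, in units where $1/4\pi\epsilon_0=1$): the real charge together with its image makes the potential vanish on the sphere's surface, which is all the image construction guarantees; it says nothing about a field in the interior.

```python
import math

# Grounded sphere of radius R; real charge q at distance d > R from the centre.
# Image charge q' = -q R/d sits at R^2/d on the same axis.
R, d, q = 1.0, 3.0, 1.0
q_img, d_img = -q * R / d, R**2 / d

# Total potential of {q, q'} vanishes at every point of the sphere surface
for theta in [0.0, 0.4, 1.1, 2.0, math.pi]:
    x, y = R * math.cos(theta), R * math.sin(theta)
    V = q / math.hypot(x - d, y) + q_img / math.hypot(x - d_img, y)
    assert abs(V) < 1e-12
print("potential is zero everywhere on the surface")
```

Since the surface (and hence the whole grounded conductor) sits at zero potential, the interior field is zero regardless of what the fictitious image charge would produce there.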
{ "domain": "physics.stackexchange", "id": 55732, "tags": "homework-and-exercises, electrostatics, electric-fields, conductors" }
HackerEarth challenge: Lunch boxes
Question: I was solving the lunch boxes problem on HackerEarth but when I submitted my solution, most of the test cases passed and the rest showed 'Time Limit Exceeded'. I would be very grateful if you could help me improve the algorithm of my code to reduce the runtime. Following is the Problem Alice works as a restaurant manager. The restaurant has prepared 'N' lunch boxes and Alice plans to distribute them to some schools. Consider that there are 'M' schools and an \$i^{th}\$ school orders \$A_i\$ lunch boxes. She wants to distribute lunch boxes to as many schools as possible. Also, she has the following rule: For an \$i^{th}\$ school, she gives either zero or \$A_i\$ lunch boxes. Your task is to help Alice to determine the maximum number of schools that can get lunch boxes. Input format The first line contains an integer \$t\$ that denotes the number of test cases in the input. Each test case consists of two lines: The first line contains two integers \$N\$ and \$M\$. The second line contains integers \$A_1, A_2, ..., A_m\$. Constraints \$ 1 \le t \le 10 \$ \$ 1 \le N, M \le 10^5 \$ \$ 1 \le A_i \le 10^6 \$ My code #include<stdio.h> int main() { int t; scanf("%d", &t); while(t--){ int n,m; scanf("%d", &n); scanf("%d", &m); int a[m]; for(int i=0; i<m; i++){ scanf("%d", &a[i]); } for(int i=0; i<m; i++){ for(int j=i + 1; j<m; j++){ if(a[i] > a[j]){ int temp = a[i]; a[i]=a[j]; a[j]=temp; } } } int sum=0, count=0; for(int i=0; i<m; i++){ sum=sum+a[i]; if(sum>n){ break; } else if(sum<=n) count++; } printf("%d\n", count); } } Answer: Use a better sorting algorithm I think the main reason why you get a time limit exceeded error is because you are using an \$O(M^2)\$ algorithm to sort the orders. This is going to get slow very quickly for large values of \$M\$. While you could try to implement a better sorting algorithm yourself, use qsort() instead. Use more meaningful variable names You are using a lot of one-character variable names: t, n, m, a.
It's very hard to understand what those variables mean without having to follow the whole code. If you are going to use the same names as used in the problem statement, or for example if you are implementing some algorithm or mathematical formula from a paper, make sure you use exactly the same names as the problem (so in this case, capital N, M and A), and add a comment to the code linking to the problem/paper/etc. where those things are described. Alternatively, you could give them more meaningful names yourself: t -> num_testcases n -> num_lunchboxes m -> num_schools a[] -> order_sizes[] The exception is for variables like i and j. These are very commonly used as iteration counters, so it's not going to be confusing for most programmers. Unnecessary else if Don't add an if after an else if the condition is just the inverse of the first if. This will do just fine: if (sum > n) { break; } else { count++; } Avoiding having to repeat the condition ensures there is less change of mistakes. Also, you don't even need the else part in this code, you can just write: if (sum > n) { break; } count++;
{ "domain": "codereview.stackexchange", "id": 43961, "tags": "c, programming-challenge, time-limit-exceeded" }
Why did machine learning only become viable after Nvidia's chips were available?
Question: I listened to a talk by a panel consisting of two influential Chinese scientists: Wang Gang and Yu Kai and others. When being asked about the biggest bottleneck of the development of artificial intelligence in the near future (3 to 5 years), Yu Kai, who has a background in the hardware industry, said that hardware would be the essential problem and we should pay most of our attention to that. He gave us two examples: In the early development of the computer, we compare our machines by their chips; ML/DL which is very popular these years would be almost impossible if not empowered by Nvidia's GPU. The fundamental algorithms existed already in the 1980s and 1990s, but AI went through 3 AI winters and was not empirical until we can train models with GPU boosted mega servers. Then Dr. Wang commented to his opinions that we should also develop software systems because we cannot build an automatic car even if we have combined all GPUs and computation in the world together. Then, as usual, my mind wandered off and I started thinking that what if those who can operate supercomputers in the 1980s and 1990s utilized the then-existing neural network algorithms and train them with tons of scientific data? Some people at that time can obviously attempt to build the AI systems we are building now. But why did AI/ML/DL become a hot topic and become empirical until decades later? Is it only a matter of hardware, software, and data? Answer: There is a lot of factors for the boom of AI industry. What many people miss though is the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons along with their comparisons during earlier times: Mathematics: The maths behind ML algorithms are pretty simple and known for a long time (whether it would work or not was not known though). 
During earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations division of numbers still takes a lot of cycles in modern processors. Older processors were magnitude times slower than modern processors (more than 100x), this bottleneck made it impossible to train sophisticated models on contemporary processors. Precision: Precision in calculations is an important factor in ML algorithms. 32 bit precision in processor was made in the 80's and was probably commercially available in the late 90's (x86), but it was still hella slow than current processors. This resulted in scientists improvising on the precision part and the most basic Perceptron Learning Algorithm invented in the 1960's to train a classifier uses only $1$'s and $0$'s, so basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come a full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success. Parallelization : The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations we realized that it can be easily parallelized on massively parallel GPU's, still if your ML algorithm is not inherently parallel then it hardly matters whether you use CPU or GPU (e.g. RNN's). Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model. Cost: The cost of training a ML model has gone down significantly. 
So using a Supercomputer to train a model might be fine, but was it worth it? Super computers unlike normal PC's are tremendously resource hungry in terms of cooling, space, etc. A recent article on MIT Technology Review points out the carbon footprint of training a Deep Learning model (sub-branch of ML). It is quite a good indicator why it would have been infeasible to train on Supercomputers in earlier times (considering modern processors consume much lesser power and gives higher speeds). Although, I am not sure but I think earlier supercomputers were specialised in "parallel+very high precision computing" (required for weather, astronomy, military applications, etc) and the "very high precison part" is overkill in Machine Learning scenario. Another important aspect is nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was quite not possible during earlier times, All this factors has led to a huge surge in interest in ML and has caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.
{ "domain": "ai.stackexchange", "id": 3225, "tags": "machine-learning, deep-learning, history" }
Calculate velocity when mass hit the Sun
Question: Suppose no planets exist in the solar system, only the sun. Then place a mass at a distance $r$ from the sun. At what velocity does the mass 'hit' the sun? Suppose both mass and sun are points in space and movements happen in a line. I know the relativistic consequences of this, just simplify things a bit... My question arises from calculating the integral $v = \int a(t)\,dt$. I know the acceleration by Newton's law, given by $G\frac{M}{r^2}$, but it depends on the distance and I'm not integrating over distance, but over time. In the end, $GM$ is constant, hence I am getting the integral of $\int\frac{1}{r^2}\,dt$, and I'm stuck. I'm not looking for answers involving mechanical conservation of energy, simply answers involving this integration method. Also, how much time does this mass take to hit the sun? I wrote out the integral $\int G\frac{M}{r^2}\,dt$, but cannot go further with my actual knowledge. Answer: So we can look at this in terms of a differential equation: $$\frac{d^2r}{dt^2}=-\frac{G(M+m)}{r^2}$$where $M$ is the mass of the sun and $m$ is the mass of the object that is going into the sun. $($I have written it like this in case the mass of the object $m$ is large enough to create a noticeable acceleration of the sun$)$. Multiplying both sides by $dr$ we get: $$\int_{r_0}^r\frac{dv}{dt}dr=\int_{r_0}^r -\frac{G(M+m)}{r^2}dr$$ $$\int_0^v v\ dv=\int_{r_0}^r -\frac{G(M+m)}{r^2}dr$$ $$\frac{1}{2}v^2=\frac{G(M+m)}{r}-\frac{G(M+m)}{r_0}$$ $$v=\sqrt{\frac{2G(M+m)}{r}-\frac{2G(M+m)}{r_0}}$$ Where $r_0$ is the initial radius and $r$ is the final radius $($both measured from the centre of the sun$)$.
Now getting onto your second question, we should continue from where we left off with: $$v=\frac{dr}{dt}=\sqrt{\frac{2G(M+m)}{r}-\frac{2G(M+m)}{r_0}}$$ $$\frac{dr}{dt}=\sqrt{\frac{2G(M+m)r_0-2G(M+m)r}{rr_0}}$$ And rearranging for $dt$ we get: $$dt=\frac{\sqrt{rr_0}dr}{\sqrt{2G(M+m)r_0-2G(M+m)r}}$$ $$dt=\sqrt{\frac{r_0}{2G(M+m)}}\sqrt{\frac{r}{r_0-r}}dr$$ Substituting in for $$u=\sqrt{\frac{r}{r_0-r}}$$ and rearranging we get $$dr=\frac{2r_0u}{(1+u^2)^2}du$$ and so $$dt=\sqrt{\frac{r_0}{2G(M+m)}}\frac{2r_0u^2}{(1+u^2)^2}du$$ Now, as $r\to r_0,\ u\to\infty$ and as $r\to r_f,\ u\to u_f$ where $r_f$ is the radius of the sun and $$u_f=\sqrt{\frac{r_f}{r_0-r_f}}$$ $$t=\frac{2r_0^{3/2}}{\sqrt{2G(M+m)}}\int_{u_f}^{\infty}\frac{u^2}{(1+u^2)^2}du$$ Evaluating the integral gets us: $$t=\frac{r_0^{3/2}}{\sqrt{2G(M+m)}}\bigg(\frac{\pi}{2}-\bigg(\tan^{-1}(u_f)-\frac{u_f}{1+u_f^2}\bigg)\bigg)$$ Hope this helps :). If you need me to expand on any points, just leave a comment.
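To sanity-check these results numerically, here is a small Python sketch (my own illustration, not part of the original answer) using rounded Sun-like constants. It evaluates the impact speed from energy conservation and the closed-form fall time written as $t=\frac{r_0^{3/2}}{\sqrt{2G(M+m)}}\left(\frac{\pi}{2}-\tan^{-1}u_f+\frac{u_f}{1+u_f^2}\right)$ (which reduces to the standard free-fall time $\frac{\pi}{2}\sqrt{r_0^3/2GM}$ as $r_f \to 0$), then cross-checks the time by direct quadrature. The substitution $x=\sqrt{r_0-r}$ is my own choice to remove the $v=0$ singularity at the start of the fall.

```python
import math

# Illustrative, rounded Sun-like values; both bodies treated as points
# and m << M, so G(M+m) ~ GM.
GM = 1.32712440018e20   # G * M_sun in m^3 s^-2
r0 = 1.495978707e11     # start from rest at 1 au, in metres
rf = 6.957e8            # stop at the solar radius, in metres

# Impact speed from the energy result: v = sqrt(2GM/r_f - 2GM/r_0)
v_impact = math.sqrt(2 * GM / rf - 2 * GM / r0)

# Closed-form fall time with u_f = sqrt(r_f / (r_0 - r_f))
uf = math.sqrt(rf / (r0 - rf))
t_closed = r0 ** 1.5 / math.sqrt(2 * GM) * (
    math.pi / 2 - math.atan(uf) + uf / (1 + uf ** 2))

# Cross-check by midpoint quadrature.  Substituting x = sqrt(r0 - r) turns
# dt = dr / v into dt = 2 * sqrt((r0 - x^2) * r0 / (2 GM)) dx, a smooth
# integrand on [0, sqrt(r0 - rf)].
n = 100_000
X = math.sqrt(r0 - rf)
h = X / n
t_num = sum(
    2 * math.sqrt((r0 - ((i + 0.5) * h) ** 2) * r0 / (2 * GM)) * h
    for i in range(n))

print(f"impact speed ~ {v_impact / 1e3:.0f} km/s, "
      f"fall time ~ {t_closed / 86400:.1f} days")
```

With these numbers the fall from 1 au takes on the order of two months and the impact speed comes out a bit over 600 km/s; the quadrature and the closed form agree closely.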
{ "domain": "physics.stackexchange", "id": 45239, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, energy-conservation, free-fall" }
Is Einstein's theory really challenged by the recent paper in news?
Question: A recent paper authored by Niayesh Afshordi and Joao Magueijo is in the news claiming to challenge Einstein's theory on the constancy of light speed. It says light might have travelled at a faster pace during the initial stages of the Big Bang, which apparently sets aside the inflation hypothesis. My question is whether it actually invalidates Einstein's relativity - which depends on the constancy of the speed of light. Or is it only applicable to the very early universe? Answer: Magueijo came up with the variable speed of light idea nearly twenty years ago. The motivation was as a way of explaining the homogeneity of the universe. At that time inflation was still somewhat controversial (perhaps it still is) and Magueijo's theory was mainly intended as an alternative to inflation. Since then inflation has become more accepted and VSL theories generally became less popular. However there are beginning to be mutterings that inflation is such a vague theory that it's impossible to test experimentally. That is, the theory has so many adjustable parameters it's hard to see what experimental data would conclusively rule it out. It is in this context that VSL theories are attracting more attention again. But it must be emphasised that there is no experimental evidence that conclusively rules out or rules in either inflation or VSL theories. They both remain ideas that explain some features in an appealing way, but neither is more than an appealing idea right now. As for the conflict with general relativity, both theories are concerned with the very earliest moments after the Big Bang, and I suspect most physicists already doubt that GR will hold right back to the Big Bang itself. We expect some quantum gravity to become important at very early times. So I'm not sure I'd say VSL theories challenge GR. The new element in Magueijo's paper is a firm prediction about the effect of VSL on the cosmic microwave background.
While the current measurements of the CMB aren't precise enough to test Magueijo's predictions, the next generation of measurements may be. However, even if the measurements match Magueijo's prediction, that will show only that his VSL theory is possible, not that it's true. His predicted value for the spectral index would still also be compatible with a range of inflationary theories.
{ "domain": "physics.stackexchange", "id": 35937, "tags": "general-relativity, cosmology, speed-of-light, big-bang" }
Acceleration in a rotating frame
Question: From "Centrifugal force", Wikipedia: Acceleration [...] By applying the transformation above from the stationary to the rotating frame three times the absolute acceleration of the particle can be written as: $$\begin{aligned}{\boldsymbol {a}}&={\frac {\operatorname {d} ^{2}{\boldsymbol {r}}}{\operatorname {d} t^{2}}}={\frac {\operatorname {d} }{\operatorname {d} t}}{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}={\frac {\operatorname {d} }{\operatorname {d} t}}\left(\left[{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\operatorname {d} ^{2}{\boldsymbol {r}}}{\operatorname {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\right]+{\frac {\operatorname {d} {\boldsymbol {\omega }}}{\operatorname {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times {\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\\&=\left[{\frac {\operatorname {d} ^{2}{\boldsymbol {r}}}{\operatorname {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\right]+{\frac {\operatorname {d} {\boldsymbol {\omega }}}{\operatorname {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times \left(\left[{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\operatorname {d} ^{2}{\boldsymbol {r}}}{\operatorname {d} t^{2}}}\right]+{\frac {\operatorname {d} {\boldsymbol {\omega }}}{\operatorname {d} t}}\times {\boldsymbol {r}}+2{\boldsymbol {\omega }}\times \left[{\frac {\operatorname {d} {\boldsymbol {r}}}{\operatorname {d} t}}\right]+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\ .\end{aligned}$$ Here I have seen that the last term is generally $\boldsymbol{\omega}\times\left(\boldsymbol{\omega}\times\mathbf{r}\right)$. The other terms possibly cancel out. Question: Why do the rotating frame problems require usage of non-inertial frames instead of ground frames for general calculations? Answer: Why do the rotating frame problems require usage of non-inertial frames instead of ground frames for general calculations? They don't. Using a rotating frame can sometimes make a problem easier to solve than other choices of a frame of reference, particularly if the non-inertial effects can be ignored or are built-in. With regard to the latter, the fictitious centrifugal force is "built-in" when using a ground frame. The gravitational acceleration g is the vector sum of the acceleration due to gravity and the centrifugal acceleration. With regard to the former, the Coriolis effect is typically ignored in introductory physics problems that calculate how far a cannonball flies. Assuming gravitational acceleration is a constant vector and ignoring aerodynamic drag and the Coriolis effect results in a nice simple model, parabolic flight, that students can use to solve problems. Ignoring the Coriolis effect is consonant with those other simplifying assumptions. Those assumptions are not consonant with a cannonball (or some other ballistic projectile) that rises far above the Earth's atmosphere only to come back to Earth on another continent. That however is a problem for students of Global Thermonuclear War rather than students of introductory physics.
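As a concrete numerical illustration (my own sketch, not from the answer): take a particle at rest in the inertial frame, viewed from a frame rotating at constant $\omega$ about the z-axis, so the $\dot{\boldsymbol\omega}\times\mathbf{r}$ term vanishes. Differentiating the particle's rotating-frame coordinates by finite differences and adding the Coriolis and centrifugal-type terms should recover the true inertial acceleration, which is zero here. The numbers (0.7 rad/s, R = 2 m) are arbitrary test values.

```python
import math

def cross(a, b):
    # 3-vector cross product on plain tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

omega_z = 0.7   # rad/s, constant rotation rate (arbitrary)
R = 2.0         # particle fixed at (R, 0, 0) in the inertial frame

def r_rot(t):
    # The inertially static particle traces a circle backwards
    # in the rotating frame.
    th = omega_z * t
    return (R * math.cos(th), -R * math.sin(th), 0.0)

# Central finite differences for the bracketed (rotating-frame) derivatives.
t, h = 0.4, 1e-4
rm, rc, rp = r_rot(t - h), r_rot(t), r_rot(t + h)
v_rel = tuple((p - m) / (2 * h) for p, m in zip(rp, rm))
a_rel = tuple((p - 2 * c + m) / h ** 2 for p, c, m in zip(rp, rc, rm))

w = (0.0, 0.0, omega_z)
coriolis_term = tuple(2 * c for c in cross(w, v_rel))
centripetal_term = cross(w, cross(w, rc))

# Inertial acceleration = relative + Coriolis + omega x (omega x r) terms;
# for a particle at rest in the inertial frame this sums to zero.
a_inertial = tuple(x + y + z
                   for x, y, z in zip(a_rel, coriolis_term, centripetal_term))
```

The three non-zero terms each have magnitude of order $\omega^2 R$ yet cancel to numerical precision, which is exactly the content of the derivation above.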
{ "domain": "physics.stackexchange", "id": 49312, "tags": "newtonian-mechanics, forces, reference-frames, acceleration, inertial-frames" }
Why are clouds wavy?
Question: Say you're in an ascending airplane as your 10-year-old son asks you: "Dad, why are these clouds wavy?" Now, say you know a little about gravity waves and the formation of wavy clouds (Maybe I should be linking to As instead? Are these wiki pages basically talking about the same thing?). How would you go about explaining it to your son? PS. I'm looking for a simple and fluid explanation that would be able to give some intuition for the phenomenon. Answer: The wavy clouds and altostratus undulatus links describe specific types of cloud formation. I would guess that your son is asking why all clouds don't have smooth surfaces, so his question is more general. Assuming this is the case, the clouds are wavy simply because the air flow in and around them is turbulent. If you pour Pernod into water you get a wavy white cloud for exactly the same reason. If I'm wrong and your son is asking about specific cloud formations, then the waves are not that different in principle from waves on water. Waves on water are an oscillation at an interface, specifically the water/air interface, while the waves in clouds happen at the interface between layers of air at different temperatures and/or moving at different speeds.
{ "domain": "physics.stackexchange", "id": 5538, "tags": "waves" }
Showing relation for raising and lowering operator
Question: I found this task in an older quantum mechanics exam about the harmonic oscillator and thought it would be good practice for myself. The task goes as: $\text{The normed eigenstates}\ |n\rangle \ \text{can be calculated from the zero state as:}$ $$|n\rangle= \frac{1}{\sqrt{n!}} (a^{\dagger})^n|0\rangle$$ $\text{Use this relation to show the following:}$ $$a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle \quad , \quad a|n\rangle=\sqrt{n}|n-1\rangle $$ I think I showed the first one: $\sqrt{n+1}|n+1\rangle=\sqrt{n+1}\frac{1}{\sqrt{(n+1)!}}(a^{\dagger})^{n+1}|0\rangle=\frac{1}{\sqrt{n!}}(a^{\dagger})^{n+1}|0\rangle=\frac{1}{\sqrt{n!}}a^{\dagger}(a^{\dagger})^{n}|0\rangle=a^{\dagger}\frac{1}{\sqrt{n!}}(a^{\dagger})^{n}|0\rangle=a^{\dagger}|n\rangle$ But I don't know how to show the second relation. Any advice/tip/solution is appreciated. Answer: Hint: Take the conjugate of the first equation: $$ \langle n \mid\ =\frac{1}{\sqrt{n!}}\langle 0 \mid\ a^n $$ and use the same logic as with the first part.
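A quick numerical way to convince yourself of both relations (my own sketch, not part of the exam solution) is to represent $a$ and $a^\dagger$ as matrices in a truncated Fock basis $|0\rangle,\dots,|N-1\rangle$, with $\langle m|a^\dagger|n\rangle=\sqrt{n+1}\,\delta_{m,n+1}$ and $\langle m|a|n\rangle=\sqrt{n}\,\delta_{m,n-1}$, and check that $(a^\dagger)^n|0\rangle/\sqrt{n!}$ reproduces the basis states:

```python
import math

N = 8  # truncation dimension of the Fock space

# Matrix elements of the raising and lowering operators
adag = [[math.sqrt(n + 1) if m == n + 1 else 0.0 for n in range(N)]
        for m in range(N)]
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(N)]
     for m in range(N)]

def matvec(M, v):
    # plain matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

def basis(n):
    # the Fock state |n> as a coordinate vector
    return [1.0 if i == n else 0.0 for i in range(N)]

def built(n):
    # |n> constructed as (a^dagger)^n |0> / sqrt(n!)
    v = basis(0)
    for _ in range(n):
        v = matvec(adag, v)
    norm = math.sqrt(math.factorial(n))
    return [x / norm for x in v]
```

Applying `adag` to `basis(n)` gives $\sqrt{n+1}$ times `basis(n+1)`, applying `a` to `basis(n+1)` gives $\sqrt{n+1}$ times `basis(n)`, and `built(n)` agrees with `basis(n)` for all $n$ below the truncation, which is exactly the pair of relations to be shown.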
{ "domain": "physics.stackexchange", "id": 69551, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, operators, harmonic-oscillator" }
Simple Subscribing plugin
Question: I am writing a simple subscriber program that I want to subscribe to a topic that I publish. I have the publisher written and it works. But I am not able to even compile the subscriber; here is the subscriber program: #include <gazebo.hh> #include <transport/transport.hh> #include <msgs/msgs.hh> #include <iostream> #include "custom.pb.h" void subs(CustomPtr &msg) { std::cout << "Recieved { " << msg->my_data << "}" ; } int main(int _argc, char **_argv) { typedef const boost::shared_ptr<const my_msgs::msgs::Custom> CustomPtr // loading gazebo gazebo::load(_argc,_argv); int i; gazebo::run(); gazebo::transport::NodePtr node(new gazebo::transport::Node()); node->Init (); // start transportation gazebo::transport::run(); // Subscribe to a topic gazebo::transport::SubscriberPtr sub = node->Subscribe("~/testing_example", subs); //publishing loop while (true) { } // shutting down gazebo::transport::fini(); } This is the error I get: /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:7:11: error: variable or field ‘subs’ declared void /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:7:11: error: ‘CustomPtr’ was not declared in this scope /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:7:22: error: ‘msg’ was not declared in this scope /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc: In function ‘int main(int, char**)’: /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:14:54: error: ISO C++ forbids declaration of ‘type name’ with no type [-fpermissive] /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:14:62: error: template argument 1 is invalid /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:17:3: error: expected initializer before ‘gazebo’ /home/bharadwajramesh/Gazebo_tests/subscriber/subscriber.cc:27:79: error: ‘subs’ was not declared in this scope make[2]: *** [CMakeFiles/subscriber.dir/subscriber.cc.o] Error 1 make[1]: *** [CMakeFiles/subscriber.dir/all] Error 2 make:
*** [all] Error 2 "testing_example" is the node that I publish on. I do not know how to access the data that is published over the node; how do I write a simple function that will do this for me? Please help. -Thanks Bharadwaj Originally posted by Bharadwaj Ramesh on Gazebo Answers with karma: 51 on 2013-05-03 Post score: 0 Original comments Comment by scpeters on 2013-05-05: You're missing a semicolon at the end of the typedef statement. Comment by Bharadwaj Ramesh on 2013-05-06: fixed that, but only one less error! Comment by Bharadwaj Ramesh on 2013-05-06: basically I need the code that will help me extract data from a topic and use this data, like store it in variables for use. Do I need to create a separate function for this? It would be greatly helpful if I could get a sample code that does this. Comment by scpeters on 2013-05-07: You probably also need to move the typedef before the definition of the subs function, otherwise the compiler won't know what CustomPtr means. Comment by scpeters on 2013-05-07: Here's another example plugin that uses gazebo topics for communication: collision_map_creator, from this tutorial. Comment by Bharadwaj Ramesh on 2013-05-07: I did try moving the typedef outside but that does not help. The above tutorial you shared seems too complicated for me since I am pretty much a beginner with C++. I tried following this tutorial here https://bitbucket.org/osrf/gazebo/src/256f98f2318bf2078e708f069367f1b71549ffb6/examples/stand_alone/listener/listener.cc?at=default Does a subscriber need to be defined inside a class? In the above example I shared they do not use a class. Can I write up such code that will subscribe? Comment by scpeters on 2013-05-07: If you want a simple solution and you're just trying to pass a single float, then use a Vector2d or Vector3d message, like I suggested in your other question. Comment by Bharadwaj Ramesh on 2013-05-07: Even if I publish as a Vector2d message, how do I subscribe to a topic?
Answer: I have it fixed now. I am using a Vector2D type of message to transmit. here is the subscriber plugin: #include <gazebo/transport/transport.hh> #include <gazebo/msgs/msgs.hh> #include <gazebo/gazebo.hh> #include <iostream> ///////////////////////////////////////////////// // Function is called everytime a message is received. typedef const boost::shared_ptr<const gazebo::msgs::Vector2d> VectorTwoDPtr; void cb(VectorTwoDPtr &_msg) { // Dump the message contents to stdout. std::cout << "x : " <<_msg->x() << "\n"; std::cout << "y : " <<_msg->y() << "\n"; } ///////////////////////////////////////////////// int main(int _argc, char **_argv) { // Load gazebo gazebo::load(_argc, _argv); gazebo::run(); // Create our node for communication gazebo::transport::NodePtr node(new gazebo::transport::Node()); node->Init(); // Listen to Gazebo world_stats topic gazebo::transport::SubscriberPtr sub = node->Subscribe("~/pose_example1", cb); // Busy wait loop...replace with your own code as needed. while (true) gazebo::common::Time::MSleep(10); // Make sure to shut everything down. gazebo::transport::fini(); } And my publisher looks like : #include <gazebo/gazebo.hh> #include <gazebo/transport/transport.hh> #include <gazebo/msgs/msgs.hh> #include <math/gzmath.hh> #include <iostream> ///////////////////////////////////////////////// int main(int _argc, char **_argv) { // Load gazebo gazebo::load(_argc, _argv); // Create our node for communication gazebo::transport::NodePtr node(new gazebo::transport::Node()); node->Init(); // Start transport gazebo::transport::run(); // Publish to a Gazebo topic gazebo::transport::PublisherPtr pub = node->Advertise<gazebo::msgs::Vector2d>("~/pose_example1"); // Wait for a subscriber to connect pub->WaitForConnection(); // Publisher loop...replace with your own code. 
while (true) { gazebo::common::Time::MSleep(100); gazebo::math::Vector2d vect(5, 7); gazebo::msgs::Vector2d msg; gazebo::msgs::Set(&msg, vect); pub->Publish(msg); } // Make sure to shut everything down. gazebo::transport::fini(); } This is not a program I completely wrote, but a program that I slightly modified based on the example given in this link: https://bitbucket.org/osrf/gazebo/src/c21a92eb17f9/examples/stand_alone?at=default This was just a simple program that I wanted to implement so that I understand how to communicate. Thank you scpeters for all your patient replies and your suggestions, which is what I finally planned to use. Originally posted by Bharadwaj Ramesh with karma: 51 on 2013-05-12 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by scpeters on 2013-05-12: I'm glad the Vector2d message worked for you. Using custom messages can be tricky to get set up properly. Comment by Bharadwaj Ramesh on 2013-05-13: Yeah, I didn't know that I did not need to create a custom message. It was my lack of C++ knowledge that made it difficult to understand. Comment by djou07 on 2015-01-13: thanks for sharing your code, this will help me... Comment by m4k on 2016-06-05: *note for gazebo5 -> use #include <gazebo/math/gzmath.hh> instead of #include <math/gzmath.hh>
{ "domain": "robotics.stackexchange", "id": 3261, "tags": "gazebo" }
Can a star orbit around multiple planets or a planet with massive moons?
Question: Note: I'm not talking about a star orbiting around a single or lonely planet :) I know a star orbiting a planet is almost impossible because if a planet is more massive than a star, that "planet" would probably be a star. But how about if the planet is not lonely, like: A multiple planetary system which consists of free-floating planets only, but the planets are so massive that their total mass is larger than a star's A planet which itself is not more massive than a star but has many massive moons Is it possible for a very small star to orbit around planets in such situations? Answer: There are theoretical ways to do it, but it's so unlikely as to probably not exist, assuming you want a system where the planets are in stable orbits around each other. The basic difficulty is the 3 body problem or n body problem. More on it here and here. For example, a massive planet could (in theory) have a single super-massive moon of similar mass to the planet, though that would probably be considered a 2 planet system. It's not possible for one planet to have two super-massive moons because that wouldn't be stable. In general, planets are many times the mass of all their moons combined; similarly, stars are many times the mass of all their planets combined, and when that stops being true the system is no longer stable. It's very difficult to generate sufficient mass by adding moons to a system, or by adding enough planets to get more massive than the sun, unless the planets crash into each other, but past a certain mass they'd stop being planets and become a kind of star when enough planets combined. If we look at gravitational systems, like our solar system, something like 99.7% of the mass is in the sun, so the sun dominates and everything orbits the sun.
A few of the larger objects have moons, and curiously, but only because they're in relatively calm orbits far away from other planets, a few asteroids orbit each other, but the system is very structured around the sun with much smaller stable orbital zones around the planets. Pluto also has a rather curious orbital system of its own, likely caused by an impact, where Pluto and Charon are comparatively large and 4 tiny moons orbit around them. Source: But these kinds of structured systems are only possible if you have a significant mass differential. When you have 3 or more bodies of similar mass and similar distance you get a high degree of mathematical chaos and instability. There are creative mathematical tricks to make it work, but none of them are stable or likely. This is what 3 similar mass bodies look like, and in such a system, with constant changes, the most likely scenario is that one of the bodies eventually gets ejected. (source, N-body problem above) There are star systems with several stars, but they are either unstable or contain significant differences in mass. The gravitational structure applies for large planets the same as it does for stars, and there's an article about that here. You can create stability by having 2 objects orbiting each other and a 3rd massive object quite distant. (Alpha Centauri is that kind of set-up, though Proxima Centauri is quite a bit smaller, but it's the same system). You can even create the hierarchy where you have 2 objects orbiting each other and then 2 more, orbiting each other, but distant, so the 2 co-orbitals orbit around each other, and if you do this enough times, you can kind of create enough planetary mass where a more massive star could distantly orbit the entire thing, but it gets very structured and very distant. It's not what I'd consider a normal orbit.
You could also cheat and have several planets in a wildly unstable general proximity orbit and have a star some distance away orbiting the chaotic mess in the middle, but it wouldn't be stable for long.
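One way to see how strongly the mass differential matters is to compute where the two-body barycentre sits. This is a back-of-the-envelope sketch with rounded reference values (my own illustration, not from the answer): for the Sun-Jupiter pair the barycentre lies only just outside the solar surface, about 0.1% of the way to Jupiter, whereas for two equal masses it would sit exactly halfway, which is when the "who orbits whom" picture breaks down.

```python
# For two bodies separated by a, each orbits the common barycentre,
# which sits at r1 = a * m2 / (m1 + m2) from body 1.
def barycentre_offset(a, m1, m2):
    return a * m2 / (m1 + m2)

M_sun = 1.989e30   # kg
M_jup = 1.898e27   # kg
a_jup = 7.785e11   # m, mean Sun-Jupiter separation
R_sun = 6.957e8    # m, solar radius

d = barycentre_offset(a_jup, M_sun, M_jup)   # offset of the Sun from barycentre
ratio = d / a_jup                            # tiny: the Sun barely moves

# For comparison, two equal masses put the barycentre exactly halfway:
d_equal = barycentre_offset(a_jup, M_sun, M_sun)
```

With these numbers `d` comes out around 7.4e8 m, slightly larger than the solar radius but roughly a thousandth of the Sun-Jupiter distance, so "Jupiter orbits the Sun" remains an excellent description.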
{ "domain": "astronomy.stackexchange", "id": 3191, "tags": "star, orbit, planet" }
How come Wikipedia says that Random Turing Machines can provide uncomputable output?
Question: The Wikipedia article on Hypercomputation mentions the following; its third paragraph starts off with: Technically, the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of deterministic, rather than random, uncomputable functions. Furthermore, according to another article, on the Random Turing Machine: In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. Am I then correct to say that, in this context, the word random actually means unknown secondary input? Because from my understanding: Probability is a well-defined area of mathematics Randomness is rather a layman's term for "I don't really know what's going on here" or rather "I can't know what's going on here". Sort of like seeds in PRNGs; an argument can be made that PRNGs are not random at all, after all, the very word pseudo-random is an oxymoron. Given an input and the "hidden input" (seed), I can get the same output back every time. So in this context, is the output of a RTM reaaaallly uncomputable? Answer: It's not uncommon for Wikipedia to say dubious things. Don't trust it as a primary reference. Beware that hypercomputation is potentially a "crank-adjacent" subject, so the Wikipedia article on it might be especially at risk of containing material of uncertain reliability. When you find something in Wikipedia you don't understand, my advice is to look at primary sources, such as good textbooks on the subject. The statement "the output of a random Turing machine is uncomputable" does not look accurate to me. It's not even clear to me what that would mean. I'm not sure how computability would be defined for probabilistic machines -- that would need to be specified before such a statement can be assigned any truth value. No, in computer science "random" does not mean the same thing as "unknown".
In lay language "random" can mean "arbitrary" or "I don't understand it" or "hard to predict" or "I don't know what's going on here" or "I don't see an obvious pattern", but in computer science "random" has a specific technical meaning taken from probability theory. See, e.g., https://cs.stackexchange.com/q/13893/755, https://cs.stackexchange.com/q/7729/755.
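The point made in the question about PRNG seeds is easy to demonstrate (illustrative sketch; the seeds and ranges are arbitrary): a PRNG is a deterministic function of (seed, draw index), so once the hidden input is known the "randomness" disappears, which is exactly why pseudo-randomness differs from the probability-theoretic notion used for probabilistic Turing machines.

```python
import random

def stream(seed, n=5):
    # Independent generator with an explicit seed; the output is a pure
    # function of (seed, n) and nothing else.
    rng = random.Random(seed)
    return [rng.randrange(100) for _ in range(n)]

same_1 = stream(42)
same_2 = stream(42)   # identical seed -> identical "random" sequence
other = stream(43)    # different seed -> (almost surely) a different sequence
```

Re-running the same seed always reproduces the exact same list, and the first draws of a longer stream coincide with a shorter one from the same seed.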
{ "domain": "cstheory.stackexchange", "id": 5044, "tags": "computability, turing-machines, randomness" }
Linking errors with message_filters in Hydro
Question: I am migrating a node from Fuerte to Hydro that uses the message_filters ApproximateTime sync policy. The code being used for this is functionally identical to the example on the message_filters wiki page, and worked great in Fuerte. But now when I try to build, I get the following linker errors: CMakeFiles/vo_rgbd_node.dir/src/ros_server.cpp.o: In function `message_filters::Synchronizer >, sensor_msgs::Image_ >, sensor_msgs::CameraInfo_ >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType> >::disconnectAll()': ros_server.cpp:(.text._ZN15message_filters12SynchronizerINS_13sync_policies15ApproximateTimeIN11sensor_msgs6Image_ISaIvEEES6_NS3_11CameraInfo_IS5_EENS_8NullTypeES9_S9_S9_S9_S9_EEE13disconnectAllEv[message_filters::Synchronizer >, sensor_msgs::Image_ >, sensor_msgs::CameraInfo_ >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType> >::disconnectAll()]+0x41): undefined reference to `message_filters::Connection::disconnect()' CMakeFiles/vo_rgbd_node.dir/src/ros_server.cpp.o: In function `message_filters::Connection message_filters::SimpleFilter > >::registerCallback > const> const&>(boost::function > const> const&)> const&)': ros_server.cpp:(.text._ZN15message_filters12SimpleFilterIN11sensor_msgs6Image_ISaIvEEEE16registerCallbackIRKN3ros12MessageEventIKS4_EEEENS_10ConnectionERKN5boost8functionIFvT_EEE[message_filters::Connection message_filters::SimpleFilter > >::registerCallback > const> const&>(boost::function > const> const&)> const&)]+0x9a): undefined reference to `message_filters::Connection::Connection(boost::function const&)' CMakeFiles/vo_rgbd_node.dir/src/ros_server.cpp.o: In function `message_filters::Connection message_filters::SimpleFilter > >::registerCallback > const> const&>(boost::function > const> const&)> const&)': 
ros_server.cpp:(.text._ZN15message_filters12SimpleFilterIN11sensor_msgs11CameraInfo_ISaIvEEEE16registerCallbackIRKN3ros12MessageEventIKS4_EEEENS_10ConnectionERKN5boost8functionIFvT_EEE[message_filters::Connection message_filters::SimpleFilter > >::registerCallback > const> const&>(boost::function > const> const&)> const&)]+0x9a): undefined reference to `message_filters::Connection::Connection(boost::function const&)' collect2: ld returned 1 exit status The contents of my CMakeLists.txt are as follows: cmake_minimum_required(VERSION 2.8.3) project(vo_rgbd) find_package(catkin REQUIRED COMPONENTS cv_bridge geometry_msgs relative_nav_msgs roscpp sensor_msgs ) find_package(Boost REQUIRED) find_package(Eigen REQUIRED) find_package(OpenCV REQUIRED) catkin_package() include_directories( include ${catkin_INCLUDE_DIRS} ${Boost_INCLUDE_DIRS} ${Eigne_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS} ) add_executable(vo_rgbd_node src/vo_rgbd_node.cpp src/ros_server.cpp src/pose_estimator.cpp) add_dependencies(vo_rgbd_node relative_nav_msgs_generate_messages_cpp) target_link_libraries(vo_rgbd_node ${catkin_LIBRARIES} ${Boost_LIBRARIES} ${Eigen_LIBRARIES} ${OpenCV_LIBRARIES} ) Am I possibly missing something in my CMakeLists.txt that could be causing these errors, or might there be a different cause? Thanks Originally posted by dpkoch on ROS Answers with karma: 132 on 2013-12-05 Post score: 1 Answer: In your CMakeLists.txt's find_package, you left off message_filters. Originally posted by lindzey with karma: 1780 on 2013-12-05 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by dpkoch on 2013-12-05: Ah, I had thought that message_filters was part of roscpp, but I now see that those are both separate packages within the ros_comm stack. Adding message_filters to the find_package(catkin REQUIRED COMPONENTS ...) and as a dependency in package.xml fixed the problem. Thanks! Comment by smishra on 2013-12-10: Also, you misspelled Eigen in the include directories
{ "domain": "robotics.stackexchange", "id": 16360, "tags": "catkin, ros-hydro, message-filters" }
Reimplementing the Alias method in C++
Question: In the following I reimplement the Walker-Vose Alias method for sampling from nonuniform, discrete probability distributions. I am well aware that there is https://en.cppreference.com/w/cpp/numeric/random/discrete_distribution which I would use for production code. alias_lib.h contains the main logic. main.cpp contains example usage. CMakeLists.txt contains a minimal cmake file. alias_lib.h #include <random> #include <vector> #include <algorithm> #include <numeric> // std::iota, std::reduce #include <cstddef> // std::size_t typedef double real; template <typename T> T pop(std::vector<T>& V) // Remove and return the last element of a vector. { auto val = V.back(); V.pop_back(); return val; } template <typename RNE> auto get_r(RNE& gen) // Return a real number from [0, 1) using a Random Number Engine { static std::uniform_real_distribution<real> dist(0.0, 1.0); return dist(gen); }; class AliasSampler // This class implements the Walker-Vose Alias Sampling method. // // The initializing weights do not have to be normalized. // The algorithm is described // [here](https://web.archive.org/web/20131029203736/http://web.eecs.utk.edu/~vose/Publications/random.pdf) // The naming of variables follows the Wikipedia [article](https://en.wikipedia.org/wiki/Alias_method) (As of 2022-10-31). { public: AliasSampler() = delete; AliasSampler(const std::vector<real>& weights) : K_(weights.size()) { // [...] If Ui = 1, the corresponding value Ki will never be consulted and is unimportant, // but a value of Ki = i is sensible. [...] std::iota(K_.begin(), K_.end(), 0); p_.reserve(weights.size()); std::transform( weights.begin(), weights.end(), std::back_inserter(p_), [result = std::reduce(weights.begin(), weights.end())] (real w) -> real { return w / result; } ); U_.reserve(weights.size()); std::transform( p_.begin(), p_.end(), std::back_inserter(U_), [&, this] (real x) -> real { return p_.size() * x; } ); // [...]
As the lookup procedure is slightly faster if y < Ui (because Ki does not need to be consulted), // one goal during table generation is to maximize the sum of the Ui. // Doing this optimally turns out to be NP hard,  but a greedy algorithm comes reasonably close: rob from the richest and give to the poorest. // That is, at each step choose the largest Ui and the smallest Uj. // Because this requires sorting the Ui, it requires O(n log n) time. [...] (See the Wikipedia article) // For this reason we partition into small and large indices and use them in a sorted fashion. std::vector<std::size_t> indices_(U_.size()); std::iota(indices_.begin(), indices_.end(), 0); std::sort(indices_.begin(), indices_.end(), [&, this] (int a, int b) -> bool { return U_[a] < U_[b]; }); // I could use std::partition for partitioning into smaller and larger, **but** // AFAIK this would not make use of the fact that the array is already sorted. std::vector<std::size_t> smaller, larger; for (std::size_t i = 0; U_[indices_[i]] < 1; ++i) { smaller.push_back(indices_[i]); }; for (std::size_t i = U_.size() - 1; U_[indices_[i]] >= 1; --i) { larger.push_back(indices_[i]); }; while (smaller.size() && larger.size()) { std::size_t s = pop(smaller); std::size_t l = pop(larger); K_[s] = l; U_[l] = U_[l] - (1. - U_[s]); if (U_[l] < 1) { smaller.push_back(l); } else { larger.push_back(l); }; }; // [...] If one category empties before the other, the remaining entries may // have U_i set to 1 with negligible error. [...] (See the Wikipedia article) while (smaller.size()) { std::size_t s = pop(smaller); U_[s] = 1.; }; while (larger.size()) { std::size_t l = pop(larger); U_[l] = 1.; }; }; const auto& probabilities() const { return p_; }; template <typename RNE> auto operator()(RNE& gen) const // Return a random number according to the given probabilities // at initialization using a Random Number Engine. 
{ auto x = get_r(gen); auto i = static_cast<size_t>(p_.size() * x); auto y = p_.size() * x - i; return y < U_[i] ? i : K_[i]; }; auto min() const { return 0; }; auto max() const { return p_.size() - 1; }; private: std::vector<real> p_{}; std::vector<std::size_t> K_{}; std::vector<real> U_{}; std::vector<std::size_t> indices_{}; }; main.cpp #include <iostream> #include <random> #include <map> #include <iomanip> #include "alias_lib.h" int main() { std::random_device rd; std::mt19937 gen(rd()); AliasSampler sampler({1., 2., 3., 1.5, 1., 1.5}); std::map<int, int> map; for(int n=0; n<100000; ++n) { ++map[sampler(gen)]; } for(const auto& [num, count] : map) { std::cout << num << " generated " << std::setw(4) << count << " times\n"; } return 0; } CMakeLists.txt cmake_minimum_required(VERSION 3.10) project(test_alias_sampling) set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_EXTENSIONS OFF) add_library(alias_lib INTERFACE alias_lib.h ) add_executable(main main.cpp) target_compile_options(main PRIVATE -Wall -Wextra -Werror) target_link_libraries(main alias_lib) Answer: Make it satisfy the RandomNumberDistribution requirement Your sampler class basically implements a specific random number distribution. The random number distributions in the standard library all satisfy the RandomNumberDistribution requirement. It seems your class already satisfies many of the requirements, but it's missing some features, like the result_type and param_type type aliases, param(), reset(), an operator() that also takes a set of weights, comparison operators and std::ostream and std::istream operator overloads. By conforming exactly to this requirement, your class will be a true drop-in replacement for existing standard random number distributions. Avoid out-of-class utility functions and type aliases You defined real, pop(), get_r() outside class AliasSampler.
This means these names will now live in the global namespace, and can potentially conflict with other code that wants to define those things. You can make pop() and get_r() private member functions, and you can even move the definition of real into the class. Use of STL algorithms STL algorithms are very helpful tools, but sometimes they are very unwieldy. You have examples of both cases in your code: using std::reduce() to sum a vector is very concise, but then std::transform() to fill a vector is looking like a monstrosity. Sometimes a good old for-loop is simpler and better than an algorithm. Consider: p_.reserve(weights.size()); auto sum_weights = std::reduce(weights.begin(), weights.end()); for (auto weight: weights) { p_.push_back(weight / sum_weights); } With C++20's ranges it might become a bit nicer to use algorithms: p_.reserve(weights.size()); auto sum_weights = std::reduce(weights.begin(), weights.end()); std::ranges::copy(std::views::transform(weights, [&](auto weight){ return weight / sum_weights; }), std::back_inserter(p_)); Or with C++23: auto sum_weights = std::reduce(weights.begin(), weights.end()); p_ = std::views::transform(weights, [&](auto weight){ return weight / sum_weights; }) | std::ranges::to<std::vector>(); Avoid unnecessary use of floating point The Wikipedia article mentions that the algorithm requires you to generate a random number between 0 and 1. However, the only reason to do so is to select a random index into p_. Instead of literally doing what the article mentions and using a std::uniform_real_distribution<real> to generate that random number, consider using a std::uniform_int_distribution<size_t>(0, p_.size() - 1). This avoids the multiplication and static_cast back to size_t. Don't store temporary data in member variables The vector indices_ is only used in the constructor to build the tables. It should not be a member variable, but just be declared inside the constructor.
From the Wikipedia article it looks like you only need $U_i$ and $K_i$ to sample the distribution; $p_i$ is only mentioned in the table generation section. I'm not sure why you are using it in operator()? Stray semicolon There is an unnecessary semicolon after the definition of operator(). Make sure you turn on compiler warnings, and fix all the warnings the compiler reports. Documentation It's really good to see that you have linked to a paper and the Wikipedia article describing the algorithm, and mentioning that you are matching the naming conventions from one of those. You could also consider using the Doxygen format to document your code; this also allows the Doxygen tools to produce cross-referenced HTML and PDF output of the documentation you wrote.
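The construction the review discusses is compact enough to prototype outside C++. Below is a minimal Python sketch of the same greedy alias-table build and lookup; it skips the sorting refinement mentioned in the question, and the names mirror the C++ members rather than any library API:

```python
import random

def build_alias_table(weights):
    """Vose-style alias tables (U, K) from unnormalized weights."""
    n = len(weights)
    total = sum(weights)
    U = [w * n / total for w in weights]   # scaled so the mean is 1
    K = list(range(n))
    small = [i for i, u in enumerate(U) if u < 1.0]
    large = [i for i, u in enumerate(U) if u >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        K[s] = l                           # overflow of column s aliases to l
        U[l] -= 1.0 - U[s]                 # "rob from the richest"
        (small if U[l] < 1.0 else large).append(l)
    for i in small + large:                # leftovers are 1 up to rounding error
        U[i] = 1.0
    return U, K

def sample(U, K, rng=random):
    """One draw: pick a column, then keep it or take its alias."""
    n = len(U)
    x = rng.random() * n
    i = int(x)
    return i if x - i < U[i] else K[i]
```

Each column i keeps probability mass U[i]/n for itself and hands the remaining (1 - U[i])/n to its alias K[i], which is why a single uniform draw suffices.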
{ "domain": "codereview.stackexchange", "id": 44062, "tags": "c++, beginner, reinventing-the-wheel, c++17" }
AppJar number pad and keyboard
Question: I am currently working on a python program with appJar. This program is made for Raspberry, as I wanted to create a program that is portable. I wanted to insert a virtual keyboard into it, by using buttons, and I currently succeeded by building one. This because it is much more useful for a little touch-screen. Here is the code of the full program: import csv import RPi.GPIO as GPIO import time from appJar import gui #all the names of the buttons are in italian l=[] class misure: #class where I manage all the measures and the functions of the entries global l def nome_file(self): nome_file=app.getEntry("e1") +'.csv' return nome_file def verticale(self): verticale=float(app.getEntry("e2")) return verticale def prof_tot(self): prof_tot=float(app.getEntry("e3")) return prof_tot def dist_riv(self): mis=float(app.getEntry("e2")) return mis def dist_fond(self): dist_fond=float(app.getEntry("e4")) return dist_fond def cambia_mis(self): app.clearEntry("e2") app.clearEntry("e3") app.clearEntry("e4") app.clearLabel("e5") app.showButton("Inizia misura") app.setFocus("e2") def cambia_prof(self): prof=float(app.getEntry("e4")) v=app.getLabel("e5") l.append(prof) l.append(v) print(l) app.clearEntry("e4") app.showButton("Inizia misura") def cambia_staz(self): app.clearEntry("e1") app.clearEntry("e2") app.clearEntry("e3") app.clearEntry("e4") app.clearLabel("e5") app.showButton("Inizia misura") app.setFocus("e1") def mulinello(self): mulinello=app.getOptionBox("Mulinello") return mulinello def tempo_mis(self): tempo_mis=app.getOptionBox("Secondi misurazione") return float(tempo_mis) class calcoli: #here are all of the calculation to transform rotation of an hydrometer reel to speed def velocita(self,mulinello,giri): v=0 giri_1s=0 if giri=='': v='' return v giri=float(giri) giri_1s=giri/30 if giri_1s==0: v=0 return v if mulinello=='125': if giri_1s<1.98: v=(1.93+(31.17*giri_1s))/100 return v elif giri_1s<10.27: v=(0.19+(32.05*giri_1s))/100 return v else: 
v=(-14.09+(33.44*giri_1s))/100 return v elif mulinello=='80': if giri_1s<1: v=(2.8+(31.34*giri_1s))/100 return v else: v=(0.82+(33.32*giri_1s))/100 return v elif mulinello=='50': if giri_1s<1.74: v=(1.23+(24.73*giri_1s))/100 return v else: v=(-0.42+(25.68*giri_1s))/100 return v elif mulinello=='30': if giri_1s<1.16: v=(1.90+(10.57*giri_1s))/100 return v else: v=(2.26+(10.26*giri_1s))/100 return v def conta_giri(self,temp_mis): """print(temp_mis) giri=input('Inserire numero di giri') # use this if from computer so you don't have to use raspberry function t_fine = time.time()+temp_mis print(t_fine) return giri""" GPIO.setmode(GPIO.BOARD) #input from raspberry of the rotations GPIO.setup(32,GPIO.IN) #set up a counter giri = 0 #set up a variable for reed activation reed_state = 0 print("Misurazione in corso...") t_fine = time.time()+temp_mis #while loop until t_fine while time.time()<t_fine: #check if reed newly activated(the hydrometric reel works like an on and off circuit) if GPIO.input(32) == 1 and reed_state == 0: #turn on LED. Set reed_state to 1. Add to counter.
reed_state = 1 giri = giri + 1 #pause to debounce time.sleep(.01) #check if reed released if GPIO.input(32) == 0 and reed_state == 1: # set reed_state to 0 reed_state = 0 #now that loop has finished, print the final count return giri def funz(self): #function to start the measure of speed c=calcoli() m=misure() v=c.velocita(m.mulinello(),c.conta_giri(m.temp_mis())) v=round(v,4) l=[m.dist_fond(),v] app.setLabel("e5",v) app.hideButton("Inizia misura") file1='' def inserisci_mis(self): #insert measure into csv file global file1,mis_0,l m=misure() myFile = open(m.nome_file(),'a') with myFile: writer = csv.writer(myFile,lineterminator='\n') if file1!=m.nome_file(): primaLinea = ["Distanza dalla verticale", "Distanza dalla riva", "Profondità"]+["Dist dal Fondo","Velocità"]*5 writer.writerow(primaLinea) file1=m.nome_file() mis_0=float(m.dist_riv()) prof=float(app.getEntry("e4")) v=app.getLabel("e5") l.append(prof) l.append(v) writer.writerow([m.verticale(),(m.verticale()-mis_0),m.prof_tot()]+l) l=[] app=gui() #appJar gui app.setTitle("Water app") app.setFont(size=12, family="Calibri") app.addLabel("l1", "Misure",0,0) app.addLabel("l2", "Velocità",5,0) app.addEntry("e1",1,0) app.addEntry("e2",2,0) app.addEntry("e3",3,0) app.addEntry("e4",4,0) app.addLabel("e5","",6,0) app.addButton("Inizia misura",funz,6,0) app.addButton("Inserisci misura",inserisci_mis,4,1) app.addButton("Altra profondità",misure.cambia_prof,3,1) app.addButton("Cambia misura",misure.cambia_mis,2,1) app.addButton("Cambia stazione",misure.cambia_staz,1,1) app.setEntryDefault("e1", "Nome stazione") app.setEntryDefault("e2", "Verticale n°") app.setEntryDefault("e3", "Profondità totale") app.setEntryDefault("e4", "Distanza dal fondo") app.setGuiPadding(15, 5) app.addLabelOptionBox("Mulinello", ['125', '80', '50','30'],5,1) app.addLabelOptionBox("Secondi misurazione", ['15', '30', '60','120','180'],6,1,2) app.setLabelFont(size=13,weight="bold") app.setEntryWidths(["e1","e2","e3","e4"], 20) 
app.setEntryRelief("e1", "raised") app.setEntryRelief("e2", "raised") app.setEntryRelief("e3", "raised") app.setEntryRelief("e4", "raised") #keyboard and number pad app.addButtons([["1","2","3"],["4","5","6"],["7","8","9"],["0",".","O"]],press,1, 2, 3,4) app.addButtons([["A","B","C","D"],["E","F","G","H"],["I","L","M","N"],["P","Q","R","S"],["T","U","V","Z"]],press,1,5,4,5) app.go() Function called when you press the letters and numbers buttons: def press(Button): #function to write into entries with buttons of the program if Button=="A": entry='' focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"A" app.setEntry(focus,entry) elif Button=="B": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"B" app.setEntry(focus,entry) elif Button=="C": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"C" app.setEntry(focus,entry) elif Button=="D": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"D" app.setEntry(focus,entry) elif Button=="E": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"E" app.setEntry(focus,entry) elif Button=="F": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"F" app.setEntry(focus,entry) elif Button=="G": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"G" app.setEntry(focus,entry) elif Button=="H": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"H" app.setEntry(focus,entry) elif Button=="I": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"I" app.setEntry(focus,entry) elif Button=="L": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"L" app.setEntry(focus,entry) elif Button=="M": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"M" app.setEntry(focus,entry) elif Button=="N": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"N" app.setEntry(focus,entry) elif Button=="O": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"O" app.setEntry(focus,entry) elif Button=="P": focus=app.getFocus() entry=app.getEntry(focus)
entry=entry+"P" app.setEntry(focus,entry) elif Button=="Q": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"Q" app.setEntry(focus,entry) elif Button=="R": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"R" app.setEntry(focus,entry) elif Button=="S": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"S" app.setEntry(focus,entry) elif Button=="T": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"T" app.setEntry(focus,entry) elif Button=="U": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"U" app.setEntry(focus,entry) elif Button=="V": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"V" app.setEntry(focus,entry) elif Button=="Z": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"Z" app.setEntry(focus,entry) elif Button=="0": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"0" app.setEntry(focus,entry) elif Button=="1": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"1" app.setEntry(focus,entry) elif Button=="2": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"2" app.setEntry(focus,entry) elif Button=="3": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"3" app.setEntry(focus,entry) elif Button=="4": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"4" app.setEntry(focus,entry) elif Button=="5": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"5" app.setEntry(focus,entry) elif Button=="6": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"6" app.setEntry(focus,entry) elif Button=="7": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"7" app.setEntry(focus,entry) elif Button=="8": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"8" app.setEntry(focus,entry) elif Button=="9": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"9" app.setEntry(focus,entry) elif Button==".": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"." app.setEntry(focus,entry) This is my program. 
I modified appJar.py into the appJar directory module to not have the focus on the buttons (lines 488-489-490 are all commented): #self.topLevel.bind('<Button-1>', lambda e: _setFocus(e)) #self.topLevel.bind('<Button-2>', lambda e: _setFocus(e)) #self.topLevel.bind('<Button-3>', lambda e: _setFocus(e)) I wanted to ask if there is a more efficient way to do a keyboard and numeric pad, because I think that there is a better way to do it. Or even something that I could change in my program to be more efficient. Answer: Python has an official style guide, PEP8. It won't hurt getting familiar with it, since it helps keeping your code readable, consistent and ready for others to mess with. Combining Italian and English gets confusing fast. It's also unhelpful for those of us that don't read Italian. There's also a distinct lack of whitespace between blocks of code and one-letter variable names are not helpful. Your class uses a global l where l=[]. I don't like globals, especially in a class. Consider the following: class misure: def __init__(self): self.l = [] def nome_file(self): my_misure = misure() print(my_misure.l) And all of a sudden you can have 20 instances of misure because they all have their own list. A class should hold it's own data as much as possible. The following can be improved a lot: def press(Button): #function to write into entries with buttons of the program if Button=="A": entry='' focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"A" app.setEntry(focus,entry) elif Button=="B": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"B" app.setEntry(focus,entry) elif Button=="C": focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"C" app.setEntry(focus,entry) If that entry should do what you want it to do, create it before the first if. But since you overwrite it anyway, why not toss it? 
Which leaves the following: focus=app.getFocus() entry=app.getEntry(focus) entry=entry+"character" app.setEntry(focus,entry) That's what all those blocks look like, right? So why not put it in a function? def we_put_it_in_a_function(character): focus=app.getFocus() entry=app.getEntry(focus) entry=entry+character app.setEntry(focus,entry) All of a sudden, your massive if block looks like this: def press(Button): #function to write into entries with buttons of the program if Button=="A": we_put_it_in_a_function("A") elif Button=="B": we_put_it_in_a_function("B") elif Button=="C": we_put_it_in_a_function("C") elif Button=="D": we_put_it_in_a_function("D") elif Button=="E": we_put_it_in_a_function("E") elif Button=="F": we_put_it_in_a_function("F") And here comes the real kicker, it can be done even shorter: def press(Button): #function to write into entries with buttons of the program we_put_it_in_a_function(Button)
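A runnable version of that final collapsed handler, shown with a made-up stand-in for the appJar app object (appJar itself needs a display, so the stand-in implements only the three methods the handler touches):

```python
class FakeApp:
    """Hypothetical stand-in for the appJar gui object (not part of appJar)."""
    def __init__(self):
        self.entries = {"e1": ""}
        self.focus = "e1"

    def getFocus(self):
        return self.focus

    def getEntry(self, name):
        return self.entries[name]

    def setEntry(self, name, value):
        self.entries[name] = value


app = FakeApp()

def press(button):
    """Append the pressed button's own label to whichever entry has focus."""
    focus = app.getFocus()
    app.setEntry(focus, app.getEntry(focus) + button)
```

Since every branch of the original press only differed in the literal it appended, passing the button label straight through removes the need for any if/elif chain at all.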
{ "domain": "codereview.stackexchange", "id": 30125, "tags": "python, python-3.x, gui, raspberry-pi" }
Electronic band structure
Question: I want to know why bands like the valence or conduction band split up into two parts, as shown in this diagram. Why do energy gaps exist within these bands? As you can see in this diagram, there is an energy gap within the conduction as well as the valence band. What is their physical significance? Answer: The shape of the bands depends on how the atomic orbitals interact with their neighbors in the crystal as a function of wavevector $k$ - where $k$ represents a phase shift from atom to atom. When the orbitals of nearest neighbors overlap a small change of phase will change the nature of the overlap and, thus, of the energy level. In many familiar semiconductors the lowest conduction and highest valence bands correspond to the anti-bonding and bonding levels of $sp^3$ hybridized orbitals in the diamond or zinc-blende structure. In such cases the gap between bonding and anti-bonding is the band gap which makes the crystal a semiconductor. Orbitals involved in bonding tend to have large overlap and a strong dependence of energy on $k$. Now, the answer to your question: However, there are other atomic orbitals as well. For example, in silicon there are unfilled $3d$ levels which lie higher up in energy. The $d$ orbitals do not overlap with neighbors as much as $s$ and $p$ orbitals tend to, and they result in very flat bands. Flat bands are less likely to overlap with other bands in energy and can create additional gaps in the band structure. Deeper levels that are lower in energy and filled also tend to have lower principal quantum number, $n$, and are thus more deeply bound in the atoms and thus less likely to overlap with one another and also result in flat bands lying deep in the valence structure - possibly also creating additional gaps there.
Most semiconductor physicists study just the lowest few conduction bands and the highest few valence bands because it is the filling and unfilling of these levels near the main band gap that determine the properties of fabricated devices like transistors. Studying the more distant bands is less common.
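The connection between orbital overlap and band flatness can be made concrete with the standard one-dimensional tight-binding dispersion (a textbook model, not specific to silicon): $$E(k) = \epsilon_0 - 2t\cos(ka), \qquad -\frac{\pi}{a} \le k \le \frac{\pi}{a},$$ where $t$ is the hopping (overlap) integral and $a$ is the lattice spacing. The bandwidth is $4t$, so weakly overlapping orbitals such as the $d$ levels mentioned above (small $t$) give nearly flat bands, while strongly overlapping $s$ and $p$ orbitals give wide ones that are more likely to cross other bands in energy.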
{ "domain": "physics.stackexchange", "id": 54741, "tags": "electronic-band-theory" }
VBA Excel Game - Testing
Question: My brother bet me that I couldn't make a game in Excel. The code is all over the place and buggy in parts. But I would appreciate it if anyone would give this game a go. The music and the code has all been made by me, but the artwork was found online and I can't claim any of it is mine (although a few bits were photoshopped to make it work for me). There's still a few bits of the game that don't work 100% but it shouldn't crash on you. Full code There's an installer or the zip file - it's probably best to just use the zip file. Sub dogame() Application.Visible = False loadedgame = False Do fminitScreen.Show playgame If loadedgame = False Then loadplayer fmNameHim.Show fmChooseWeapon.Show If Worksheets("options").range("B3").value = True Then fmrules.Show End If fminside1.Show ElseIf loadedgame = True Then Dim obj As Object Dim obj1 As Object Load fminside1 Load fmoutside Load fmupstairs For Each obj In UserForms If obj.Name = player.formname Then loadedgame = False obj.Character.left = player.left obj.Character.top = player.top obj.Show Exit For Else If obj.Name = "fmMusic" Then Else Debug.Print "unloaded form" & obj.Name Unload obj End If End If Next obj End If fmranking.Show Set player = Nothing Loop End Sub Answer: Procedure names should start with a verb that says what the procedure does. Sub dogame() That doesn't say much; "do" is as general as it gets. Also, by convention, procedure names should be PascalCase, so Sub DoGame() would follow that. You're not showing where loadedgame is declared, but if it's declared, I suspect it's a Boolean variable, at module level. 
If dogame is the only procedure that uses this variable, then loadedgame should be scoped to that procedure, and declared as close as possible to its usage: Sub RunGame() Dim loadedGame As Boolean Boolean variables get initialized with a False value, so if loadedgame is scoped at procedure level, this line isn't needed: loadedgame = False It's a very, very good idea to always declare all variables you're using. One way of forcing yourself to do that, is to specify Option Explicit at the top of every code module you're writing - whether it's a form's code-behind, a class module, or a standard module: with that option specified, VBA will refuse to compile your project if an undeclared identifier is used anywhere. Your indentation is somewhat broken in the inner For loop, and I strongly recommend indenting the entire procedure's body too: Sub RunGame() '<~ indented End Sub The If Worksheets("options").Range("B3").Value = True Then condition doesn't need to be indented - it's at the same level as the previous fmChooseWeapon.Show line. VBA module members are Public by default, but in other languages it's Private by default; it's always best to be explicit about access modifiers: Public Sub RunGame() I think there's way too much vertical whitespace. There shouldn't be 2 empty lines in a row. Boolean values can be used as a condition's expression - If [Boolean expression] Then can also be If [Boolean value] Then, so this: If loadedgame = False Then 'block A ElseIf loadedgame = True Then 'block B End If Is better written like this: If Not loadedgame Then 'block A Else 'block B End If ...and best written like that: If loadedgame Then 'block B Else 'block A End If Positive conditions are always easier to mentally process; don't be afraid to reverse the If and Else blocks to reverse a negative condition! 
Also, a Boolean can only ever be True or False, so there's no need to evaluate the False condition if you've already checked the True condition - the ElseIf block can become an Else block without altering the logic. This part is weird: If obj.Name = "fmMusic" Then Else Debug.Print "unloaded form" & obj.Name Unload obj End If If there's nothing to execute in the If block, only in the Else part, then you need to reverse your condition: If obj.Name <> "fmMusic" Then Unload obj Debug.Print "unloaded form" & obj.Name End If Notice I've moved the Debug.Print call after the Unload instruction - this way the immediate window will not say "unloaded form xyz" before the form is actually unloaded. Alternatively, you could edit the message to say "unloading form " & obj.Name & "..." - the idea is to avoid telling confusing lies if anything goes wrong with that Unload call. That said, obj is a very bad name to use - why not call it form or window? The procedure is quite long - I'd consider extracting smaller private procedures out of it, perhaps starting with the two branches in the If loadedgame condition. A procedure should do one thing (and do it well), so as to have as few reasons to change as possible. Having more specialized procedures helps make your code more manageable, too. Lastly, I suspect this is all written in a standard code module; I would encapsulate the game logic inside a class module, and have the dogame procedure instantiate an object of that type and call some Run method. The entire dogame procedure/macro could then possibly look like this: Public Sub RunGame() On Error GoTo ErrHandler Dim game As New GameLogic game.Run CleanExit: Exit Sub ErrHandler: Debug.Print "An error has occurred: " & Err.Description Resume CleanExit End Sub
{ "domain": "codereview.stackexchange", "id": 10250, "tags": "game, vba, excel" }
Work on Ferromagnetic Object Due to Solenoid
Question: I've been going through some equations and such trying to determine the work done by a solenoid on a ferromagnetic object. I have the following: Magnetic field due to solenoid: $\vec{B} = \langle 0,0,\mu_0 n I\rangle$ (Assuming coils are on xy-plane and current is counter-clockwise) Force of magnetic field: $ F = q\vec{v} \times \vec{B} $ Work: $ W = \int F \cdot dl $ Work of Magnetic Field: $ W = \int_c(q\vec{v} \times \langle 0,0,\mu_0 n I\rangle) \cdot d\vec{r} $ For one, this seems to indicate a work of 0 if the object is not charged, which I have seen in some places but just doesn't seem right. Also, this does not take into account the properties of the object, such as relative permeability, which I guess could have some effect with the charge value. I'm trying to calculate the acceleration of a ferromagnetic object from a magnetic field, is there a better way to do this? I've thought about the following: $ \vec{a} = \frac{q\vec{v} \times \vec{B}}{m} $ However, this is where I started running into the charge issue and thought to calculate it from the work done. Answer: You can apply the Lorentz force equation $F=qv\times B$ at the microscopic level, since the magnet is made out of charged particles. However, it's not practical to do this for a ferromagnetic object. The electrons in the ferromagnetic material also have intrinsic spin 1/2 and an intrinsic dipole moment, and they therefore experience an additional force in a field gradient. This force is not described by the Lorentz equation. A complete, realistic calculation is going to be extremely difficult. You could solve Maxwell's equations numerically, putting in the correct permeability. Note that you're going to have hysteresis effects.
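For reference, the dipole force alluded to in the answer can be written down directly, under the idealization of a fixed dipole moment $\vec{m}$ (real ferromagnets have field-dependent magnetization and hysteresis, as the answer notes): $$\vec{F} = \nabla(\vec{m}\cdot\vec{B}).$$ Inside an ideal infinite solenoid, where $\vec{B}=\mu_0 n I\,\hat{z}$ is uniform, this force vanishes; the pull on a ferromagnetic object comes from the fringe-field gradient near the ends. For constant $\vec{m}$, the corresponding work along a path is $W = \vec{m}\cdot\vec{B}_\text{final} - \vec{m}\cdot\vec{B}_\text{initial}$, which can be nonzero even for an uncharged object.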
{ "domain": "physics.stackexchange", "id": 8293, "tags": "electromagnetism, work, magnetic-fields" }
Does Haag's theorem say covariant transformation of interacting field is not possible?
Question: In https://www.physicsforums.com/threads/haags-theorem-perturbation-existence-and-qft.177865/#post-1384425 the #2 post by meopemuk (Eugene) says that Haag's theorem says: $$U(\Lambda)\Phi(x) U^{-1}(\Lambda) \neq \Phi(\Lambda x)$$ Or to quote directly: The statement of Haag's theorem is that this interacting field cannot have a covariant transformation law with respect to the interacting representation of the Poincare group $U$. Is this a correct understanding of Haag's theorem? I am asking this because this is usually not how Haag's theorem is presented. And if true, would this mean a Poincare-invariant vacuum of the interacting field does not exist? Answer: Haag tells you that if you assume $\Phi(x)$ and $\Phi(\Lambda x)$ satisfy the same commutation relations, then $U$ does not necessarily exist (in contrast with systems with a finite number of degrees of freedom). But Haag doesn't prove that $U$ doesn't exist; only that it need not. In fact, the classical paper of Glimm&Jaffe on two-dimensional $\phi^4$ proves that $\phi$ does satisfy the covariant transformation laws in an interacting example, so it is factually false that $U$ doesn't exist. Sometimes it does, sometimes it doesn't. That being said, nowadays we don't usually build up QFT's from canonical commutation relations. You don't need to assume they hold, and so you may sidestep Haag. A lot of people working on rigorous QFT have abandoned the CCR as a fundamental ingredient anyway.
{ "domain": "physics.stackexchange", "id": 48931, "tags": "quantum-field-theory, operators, covariance, poincare-symmetry" }
How can I download the SNP genotyping information from GTEX?
Question: I am new to the field of Bioinformatics. The GTEx database is great; I know how to check whether a SNP is an eQTL with the online tools of GTEx. But my question is how to download the right data to draw a boxplot on my own, like in this paper. Please help me; I have been stuck on this for many days. Answer: There are two types of data you can get from GTEx. The first is summary statistics, basically the slopes and p-values you see in the boxplot. You can also get expression levels for all genes measured in GTEx. However, in order to plot a boxplot by yourself, you also need genotype data. This is not publicly available because of privacy issues. In order to get it, you have to apply for it on dbGaP. I think you can find relevant information on the GTEx website.
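Once both the expression and genotype tables are obtained, the boxplot itself only requires grouping expression values by genotype. A minimal pure-Python sketch of that step (the sample IDs and genotype codes here are made up for illustration, not GTEx's actual file format):

```python
def group_expression_by_genotype(genotypes, expression):
    """Group per-sample expression levels by that sample's genotype.

    genotypes: dict mapping sample ID -> genotype string (e.g. 'AA', 'AG', 'GG')
    expression: dict mapping sample ID -> expression level for the gene of interest
    Returns: dict mapping genotype -> list of expression values.
    """
    groups = {}
    for sample, genotype in genotypes.items():
        if sample in expression:
            groups.setdefault(genotype, []).append(expression[sample])
    return groups

# Each list in the result would feed one box of the boxplot,
# e.g. matplotlib's plt.boxplot(list(groups.values())).
```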
{ "domain": "bioinformatics.stackexchange", "id": 1330, "tags": "snp" }
Why are creeks sometimes piped underground in urban areas?
Question: I live in Melbourne, Australia and a lot of the creeks here have been diverted through underground storm water pipes. I guessed that this is because land is highly sought after for development, and a lot of these creeks have been built over; however, there are also a lot of cases where there is nothing built over the top of them at all. They simply carry the normal water flow through a small concrete pipe, leaving an empty valley on the surface which would only carry flood waters during heavy rainfall. What's the reason for going to the cost and effort of doing this? Is it a safety concern having creeks in residential areas? Do creeks have some effect on the land which would affect nearby buildings? Is it just for the convenience of being able to cross them without a bridge? Example Dandenong Creek in Bayswater area @ -37.836524 lat, 145.254465 long (water flow goes from the right to the left of the image): I've also seen a creek which briefly goes both above and below ground. The rushing water of the underground section could be seen through an overflow while the topside creek was mostly just stagnant water. It seemed completely pointless and made no sense to me. Answer: In more developed areas (cities) burying streams is popular because it allows for more land to be developed without the fear of flooding. Also, streams in these areas tend to become very blighted and void of life. At that point you might as well hide them from sight. The area that you show doesn't seem to be one of these areas. There are a few reasons that I can think of that might lead to the situation that you are describing. Swampy Area The area seems to be fairly flat. In its natural state, it may have been a swamp. Providing a buried pipe creates a freely draining area that will keep the water flowing in times of low flow. This will keep the area from being swampy and the negative aspects of that: mosquitoes and smell.
You can see from the satellite images that there are some drainage structures that will help surface water to flow into the buried pipe. Park Areas The grass above the stream can now be used as a park without worrying about the dangers of children playing around a stream. This makes the area dual purpose without completely reducing the flood control capacity of the area. In times of heavy rain, the area above the pipes can still be used to retain excess water. Slowly Fixing Past Mistakes It could also be that at one time the stream was completely buried. The local authority might be slowly exposing the stream to restore the natural condition. Without knowing the timeline of the area, it is hard to tell if this is a possibility.
{ "domain": "engineering.stackexchange", "id": 502, "tags": "civil-engineering, water-resources" }
Insert and Remove Element in Deque using threads in C++
Question: The following code works fine for inserting and removing an element from a deque using two threads. I would appreciate any help on how to make it better, especially in terms of thread safety. #include "stdafx.h" #include <deque> #include <condition_variable> #include <iostream> #include <thread> #include <mutex> #define SLEEP_TIME 1 std::deque <int> q; std::mutex _mu; std::condition_variable cond; void function1() { int count = 10; while (count > 0) { std::unique_lock <std::mutex> locker(_mu); q.push_front(count); locker.unlock(); cond.notify_one(); std::this_thread::sleep_for(std::chrono::seconds(SLEEP_TIME)); count--; } return; } void function2() { int data = 0; while (data != 1) { std::unique_lock<std::mutex> locker(_mu); cond.wait(locker); data = q.back(); q.pop_back(); locker.unlock(); std::cout << "T2 gets " << data << " from T1" << std::endl; } return; } int main() { std::thread t1(function1); std::thread t2(function2); t1.join(); t2.join(); std::cout << "Thread Function Executed Successfully" << std::endl; } Answer: Here are some things that may help you improve your code. Isolate platform-specific code If you must have stdafx.h, consider wrapping it so that the code is portable: #ifdef WINDOWS #include "stdafx.h" #endif In this code, with a single file, the use of stdafx.h does not gain anything on any platform, including Windows. Don't use leading underscores in names Anything with a leading underscore is a reserved name in C++ (and in C). See this question for details. Be careful with condition variables Right now, t2 waits for a condition variable and t1 sets it. If t1 never pushes anything to the deque, t2 would wait forever and never terminate. Similarly, if t1 is launched before t2, it could execute the cond.notify_one() before any threads are waiting for that variable, so t1 would push 10 and notify, but no thread would "hear" the notification with the effect that t2 would only be notified after t1 pushes 9.
The effect is that the code would almost work but never terminate. To allow the launching of threads in either order, I'd recommend that t2 should keep popping until the deque is empty. This is a typical pattern in multithreaded code. Remember that main is a thread, too I realize that this code is probably written for practice using threads and locks and such, but it's important to remember that main is also a thread, so this program actually uses three threads and not two. Because the main function isn't really doing anything useful while the threads are running, an alternative approach would be to launch only t1 as a thread and then simply invoke function2 directly from main, thereby using only two threads instead of three and getting the same work accomplished. Remember that std::cout is a shared resource It's not an issue in this code because only one thread at a time prints to std::cout, but as you create more complex code, it's important to remember that std::cout is a shared resource, too, For that reason, you'll want to have a mutex for access to it as well if you want to avoid possible interleaving of output.
{ "domain": "codereview.stackexchange", "id": 23567, "tags": "c++, c++11, multithreading, thread-safety, queue" }
Detecting Dependencies in Common Lisp Project Files
Question: The following short program is a collection of routines for detecting the dependencies among a project’s component files. Dependencies serve as a rough measure of the complexity of a program. Minimizing inter-file dependencies is a worthy goal, since understanding and refactoring a complex program is often challenging. However, Common Lisp can support a wide latitude of inter-file dependencies within a compilation unit, so it is generally not advisable to sacrifice program modularity in order to remove all dependencies. This analysis of inter-file dependencies is applied to the files in a specified directory. The pathname for this directory is assumed to be the value of *default-pathname-defaults*, which must be assigned before detecting dependencies. Which files are included is determined by supplying a standard Common Lisp filespec, which defaults to “*.lisp”. A simple example of a dependency is the situation where a function is defined in one file, but used in a different file. In this case the using file depends on the defining file. But there are a number of other kinds of definitional dependencies checked besides defun, including defmacro, defparameter, defvar, defmethod, and defstruct. More complex examples of dependencies include codependencies, where several files depend on each other. If two files each contain definitions used by the other, then they are codependent. Multiple files can be circularly dependent on each other, also making them codependent. The amount of information printed to the terminal can be controlled by the key parameter verbose. The verbose option indicates why the files are dependent. For example, entering (display-all-dependencies) at the REPL simply prints out all detected file dependencies. But entering (display-all-dependencies :verbose t) will additionally show which symbols in the dependent file have definitions in another file. 
Interface function: display-all-dependencies (&key (pathspec "*.lisp") verbose) Prints out all the dependencies and codependencies among all the files matching the directory pathspec in *default-pathname-defaults* function: file-depends-on-what (file1 &key (pathspec "*.lisp") verbose) Prints out all the files which a given file depends on. function: file1-depends-on-file2 (file1 file2 &key verbose) Determines if file1 depends on file2. ;;; Filename: dependencies.lisp ;;; Finds the dependencies among files (ie, inter-file references) in a project directory. ;;; Assumes the project files have already been loaded, and ;;; that *default-pathname-defaults* points to the project directory. (in-package :cl-user) #-:alexandria (progn (ql:quickload :alexandria) (push :alexandria *features*)) #-:cl-ppcre (ql:quickload :cl-ppcre) (use-package :alexandria) (use-package :cl-ppcre) (defun purify-file (file) "Transform problematic symbols to benign nil in file, before reading. Returns a string of altered file content." (let ((file-string (alexandria:read-file-into-string file))) (ppcre:regex-replace-all "[ \t\r\n]'[A-Za-z0-9!@$%&*_+:=<.>/?-]+" file-string " nil") (ppcre:regex-replace-all "[(][qQ][uU][oO][tT][eE][ \t\r\n]+[A-Za-z0-9!@$%&*_+:=<.>/?-]+[)]" file-string "nil"))) (defun collect-symbols (tree) "Collects all of the symbols in a form." (let ((all-items (alexandria:flatten tree))) (delete-if (lambda (item) (or (not (symbolp item)) (find-symbol (symbol-name item) :cl))) (delete-duplicates all-items)))) (defun collect-defs (forms) "Collects all of the defined names in forms, excluding defstructs." (loop for form in forms when (member (first form) '(defun defmacro defparameter defvar defmethod)) collect (second form))) (defun collect-defstructs (forms) "Collects all of the defined defstruct names in forms." 
(loop for form in forms when (member (first form) '(defstruct)) if (symbolp (second form)) collect (second form) else if (listp (second form)) collect (first (second form)))) (defun pseudo-load (file) "Attempt to read a file doing what LOAD would do. May not always do the right thing. Returns list of all forms, including package prefixes. Based on function provided by tfb on Stack Overflow." (let ((file-string (purify-file file)) (*package* *package*)) (with-input-from-string (in-stream file-string) (loop for form = (read in-stream nil in-stream) while (not (eql form in-stream)) when (and (consp form) (eq (first form) 'in-package)) do (setf *package* (find-package (second form))) collect form)))) (defun file1-depends-on-file2 (file1 file2 &key verbose) "Determines if file1 depends on file2." (let* ((1forms (pseudo-load file1)) (all1-syms (collect-symbols 1forms)) (defstruct1-syms (collect-defstructs 1forms)) (1syms (set-difference all1-syms defstruct1-syms :test #'equal)) (forms2 (pseudo-load file2)) (def2-syms (collect-defs forms2)) (defstruct2-syms (collect-defstructs forms2)) (11syms (loop for defstruct2-sym in defstruct2-syms append (loop for 1sym in 1syms when (and (not (eql defstruct2-sym 1sym)) (search (format nil "~S" defstruct2-sym) (format nil "~S" 1sym))) collect 1sym)))) (when verbose (format t "~%~A symbols:~%~S~%" file1 1syms) (format t "~%~A symbols:~%~S~%" file2 def2-syms) (format t "~%~A structures:~%~S~%" file2 defstruct2-syms) (format t "~%~A symbols dependent on ~A:~%" file1 file2)) (append (intersection 1syms def2-syms) 11syms))) (defun file-depends-on-what (file1 &key (pathspec "*.lisp") verbose) "Prints out all dependencies of a file in directory = *default-pathname-defaults*." 
(let ((files (mapcar #'file-namestring (directory pathspec)))) (loop for file2 in files unless (equal file1 file2) do (let ((deps (file1-depends-on-file2 file1 file2))) (when deps (if verbose (format t "~%~A depends on ~A~%~S~%" (file-namestring file1) (file-namestring file2) deps) (format t "~%~A depends on ~A~%" (file-namestring file1) (file-namestring file2)))))))) (defun all-path-cycles (node current-path adjacency-table) "Recursively follow all paths in a dependency network." (if (member node current-path :test #'equal) (list (subseq current-path 0 (1+ (position node current-path :test #'equal)))) (loop for child in (gethash node adjacency-table) append (all-path-cycles child (cons node current-path) adjacency-table)))) (defun codependents (node-list dependencies) "Returns all dependent cycles for all nodes." (let ((adjacency-table (make-hash-table :test #'equal))) (loop for dep in dependencies do (push (second dep) (gethash (first dep) adjacency-table))) (loop for node in node-list append (all-path-cycles node nil adjacency-table)))) (defun display-codependencies (dependencies) "Prints out all codependencies among a group of files." (format t "~%Codependent files (with circular reference):~%") (let* ((node-list (remove-duplicates (alexandria:flatten dependencies) :test #'equal)) (codependents (remove-duplicates (codependents node-list dependencies) :test (lambda (set1 set2) (alexandria:set-equal set1 set2 :test #'equal))))) (loop for co-set in codependents do (format t "~S~%" co-set)))) (defun display-all-dependencies (&key (pathspec "*.lisp") verbose) "Prints out all dependencies of every pathspec file in directory = *default-pathname-defaults*." 
(let ((files (mapcar #'file-namestring (directory pathspec)))) (loop with dependencies for file1 in files do (loop for file2 in files unless (equal file1 file2) do (let ((deps (file1-depends-on-file2 file1 file2))) (when deps (push (list file1 file2) dependencies) (if verbose (format t "~%~A depends on ~A~%~S~%" (file-namestring file1) (file-namestring file2) deps) (format t "~%~A depends on ~A~%" (file-namestring file1) (file-namestring file2)))))) finally (display-codependencies dependencies)))) Answer: Some general comments first. And to prefix that, at a glance the code itself looks quite alright to me, I just have some comments on the details to make it even nicer. Arguably the first thing to note is to create an ASDF definition and don't require QuickLisp as the main loading mechanism. That usually entails one more .asd file and is otherwise painless. It will also allow you and others to easily load the whole system with one function from other systems. #-:alexandria - that's usually written as #-alexandria as the *features* list mostly contains keywords anyway and, more importantly, the reader function for #- reads the argument as a keyword unless otherwise specified. The use-package calls should go into the to-be-written defpackage declaration. The functions all have docstrings, which is good, though it might also help to specify a bit more what the types of the inputs and outputs are. You could of course also add check-type / declare statements to make those explicit, but for me, the reader, something like "Returns a LIST of all dependent cycles for all nodes." would already be enough to satisfy my curiosity. IMO, T and NIL should be written uppercase. That's because they're way too special to not have them stand out more. Pathspecs should probably be written #P"*.lisp" to be very clear about the type. I'm a bit wary of the flatten calls, but if you think that the additional allocations aren't a problem, then it's probably fine. 
It's just a point for optimisation to avoid too many intermediate lists. If the equal tests could be replaced with eql or eq, it might also be better, but I'm not sure that's possible. For file1-depends-on-file2, the name is actually mostly fine by me, though usually a boolean-ish result would warrant file1-depends-on-file2-p ("predicate"). Also the verbose can probably be a regular argument if this isn't going to be called by a user directly. In it though, the names keyed by their source file look very awkward. Consider a helper function via flet/labels or even defun to get rid of the duplication and weird naming. It also stands out to me that intersection isn't using :test, but the other call to set-difference does - is that a bug? It might even make sense to define some helpers with the right equality relation baked in eventually. The format calls look good; only in display-all-dependencies I'd consider merging the verbose and non-verbose case via ~@[~A~] and the argument (and verbose deps) (so combined that'd be (format T "...~@[~A~]~%" ... (and verbose deps))). loop is a bit over-used IMO when you have two loops inside each other. It also looks like this would be better abstracted by a helper function that creates a list of all those pairs of files; there might even be one in the alexandria or arnesi packages. Pairs like (list file1 file2) might very well just use a single cons cell too, so (cons file1 file2) would suffice (and makes it absolutely clear it's a pair). Lastly, for loading the files, the purify-file function attempts to throw away "problematic" symbols, which are what, quoted symbols? I don't exactly see why this function, like it is, helps a lot. Instead, what I'd probably suggest is to disable evaluation of code when read is called (e.g. you don't want to execute anything while reading the files; something like (read-from-string "#.(format T \"Hello, world!~%\")") quite handily demonstrates the dangers).
Setting *read-suppress* to true would be a start, or even setting *read-eval* to false. See also with-standard-io-syntax.
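As an aside for readers less fluent in loop: the all-path-cycles traversal in the reviewed code is a plain depth-first walk, and the same idea can be sketched in Python in a few lines (the file names below are made up for illustration):

```python
def path_cycles(node, path, adj):
    # Follow every outgoing edge; when we land on a node already on the
    # current path, the prefix up to (and including) it is a cycle.
    if node in path:
        return [path[: path.index(node) + 1]]
    cycles = []
    for child in adj.get(node, []):
        cycles += path_cycles(child, [node] + path, adj)
    return cycles

# a.lisp -> b.lisp -> c.lisp -> a.lisp form a codependent (circular) group
adj = {"a.lisp": ["b.lisp"], "b.lisp": ["c.lisp"], "c.lisp": ["a.lisp"]}
print(path_cycles("a.lisp", [], adj))  # [['c.lisp', 'b.lisp', 'a.lisp']]
```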
{ "domain": "codereview.stackexchange", "id": 40749, "tags": "file-system, common-lisp" }
Building an N-qubit Controlled S Gate
Question: I've been beating my head against this problem for three days now and I just can't seem to crack it. To construct an N-qubit controlled Unitary gate, I can do something like this (note I'm using Qiskit syntax here): q_upper = QuantumRegister(n - 1) q_lower = QuantumRegister(1) q_out = QuantumRegister(1) circ = QuantumCircuit(q_upper, q_lower, q_out) # Setup u on the output line: |((1 << (n - 1)) - 1) 01> * u/2, |n0 11> * u/2, |((1 << n) - 1) 1> * u circ.cnv(q_upper, q_out) circ.cv(q_lower, q_out) # Cancel out the terms that aren't all ones circ.cnx(q_upper, q_lower) circ.vdg(q_lower, q_out) circ.cnx(q_upper, q_lower) This circuit requires a little explanation. Here, I'm attempting to build an n-qubit u-gate using an (n-1)-qubit controlled v-gate (which will be half the rotation of u) and a controlled v-gate. Then I use an (n-1)-qubit controlled x-gate in conjunction with another controlled v-gate to cancel out the terms that aren't all ones. As an example of the implementation, consider the Controlled-S (CS) gate: q_lower = QuantumRegister(1) q_out = QuantumRegister(1) circ = QuantumCircuit(q_lower, q_out) # Add a pi/2 rotation for 11, pi/4 for 01 and 10 circ.t(q_lower) circ.t(q_out) # Cancel out the pi/4 for 01 and 10 circ.cx(q_lower, q_out) circ.tdg(q_out) circ.cx(q_lower, q_out) I can expand on this to produce a Toffoli-S (CCS) gate: q_upper = QuantumRegister(1) q_lower = QuantumRegister(1) q_out = QuantumRegister(1) circ = QuantumCircuit(q_upper, q_lower, q_out) # Construct a pi/2 rotation for 111 and pi/4 rotations for 011 and 101 circ.cu1(pi / 4, q_upper, q_out) circ.cu1(pi / 4, q_lower, q_out) # Cancel out the pi/4 rotations on 011 and 101 circ.cx(q_upper, q_lower) circ.cu1(pi / -4, q_lower, q_out) circ.cx(q_upper, q_lower) I can expand it again to produce a CCCS gate: q_upper = QuantumRegister(2) q_lower = QuantumRegister(1) q_out = QuantumRegister(1) circ = QuantumCircuit(q_upper, q_lower, q_out) # Construct a pi/2 rotation for 1111 and pi/4
rotations for 1101 and 0011 circ.ccu1(pi / 4, q_upper[0], q_upper[1], q_out[0]) circ.cu1(pi / 4, q_lower, q_out) # Cancel out the pi/4 rotations for 1101 and 0011 circ.ccx(q_upper[0], q_upper[1], q_lower[0]) circ.cu1(pi / -4, q_lower, q_out) circ.ccx(q_upper[0], q_upper[1], q_lower[0]) Now, this circuit presents a problem: there is no ccu1 gate. Now, I could construct one that looks like this: circ.cu1(pi / 8, q_upper[0], q_out) circ.cu1(pi / 8, q_upper[1], q_out) circ.cx(q_upper[0], q_upper[1]) circ.cu1(pi / -8, q_upper[1], q_out) circ.cx(q_upper[0], q_upper[1]) But this means that an N-qubit Controlled-S gate would require an (N-1)-qubit Controlled-T gate which would require an (N-2)-qubit Controlled pi / 8 gate and so on and so forth, resulting in increasingly small rotations. Given the current state of quantum hardware (and probably the future of quantum hardware), this implementation doesn't seem practical but I haven't been able to find a better one. Could anyone suggest a way to break this recursion at either the S-gate or T-gate level or will I have to just deal with the recursion? Answer: The angles getting smaller is not a problem. The actual problem is that you are generating a ton of gates. To produce one gate with N controls, you're recursing into producing three gates with N-1 controls. Follow the logic to the end, and you'll see that you end up with $3^N$ gates! It is possible to reduce the number of gates from exponential to linear. In the ideal case, you have $N$ ancillae available and can use Toffoli gates to compute the AND of your control qubits. You apply the S gate to the qubit storing the AND, then uncompute. If you don't have any workspace, it gets a lot more complicated. You have to do more elaborate constructions (the original answer illustrated these with two circuit diagrams, not reproduced here).
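A back-of-the-envelope count makes the contrast concrete. The exact constants depend on the chosen decomposition; this sketch only tracks the growth of the recursion versus the ancilla-assisted scheme:

```python
def gates_recursive(n):
    # the question's construction: one n-controlled gate recurses into
    # three (n-1)-controlled gates, so the count is 3^n
    return 1 if n == 0 else 3 * gates_recursive(n - 1)

def gates_with_ancillae(n):
    # rough linear scheme: ~(n-1) Toffolis to AND the controls into an
    # ancilla, one S on the ancilla, then the same Toffolis to uncompute
    return 2 * max(n - 1, 0) + 1

for n in (2, 5, 10):
    print(n, gates_recursive(n), gates_with_ancillae(n))
# e.g. for n = 10: 59049 gates versus 19
```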
{ "domain": "quantumcomputing.stackexchange", "id": 622, "tags": "programming, circuit-construction, qiskit, gate-synthesis" }
Can we consider light from the Sun during sunrise and nightfall as polarised?
Question: At sunrise and nightfall light looks different: more orange, or darker. Some frequencies are filtered, or the light travels longer. The energy corresponding to this light is obviously lower than light from morning or noon. Is this the same effect as polarization? Answer: It's mostly dominated by Rayleigh scattering, which scales as the inverse of the wavelength to the fourth power. So shorter wavelengths are scattered more strongly, which is why the sky is blue and sunsets are reddish. Indeed, when the light is scattered it will appear polarized to the observer on the ground. Try looking at the blue sky through a polarizer...
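The 1/λ⁴ dependence makes the blue-versus-red asymmetry easy to quantify; the 450 nm and 650 nm values below are just representative choices for blue and red light:

```python
blue, red = 450e-9, 650e-9      # wavelengths in metres (representative values)
ratio = (red / blue) ** 4       # Rayleigh scattering intensity goes as 1 / lambda^4
print(f"blue light is scattered ~{ratio:.1f}x more strongly than red")
```

That factor of roughly four is why sunlight that grazes the atmosphere at sunrise or sunset arrives depleted in blue.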
{ "domain": "physics.stackexchange", "id": 27370, "tags": "visible-light, atmospheric-science, polarization" }
what does "lifting" mean?
Question: I see in certain places "lifting computation" or "lifting" mentioned. I haven't been able to accurately define for myself what is meant by that. This usually comes up in computer science context. Any ideas what it means? Answer: Computer science (especially theory B) has many connections to category theory, and that is the usual context for lifting. The basic idea is that you might have two objects $X$ and $Y$ that interact in a very intuitive way for you, and so it is easy to define a good morphism $f: X \rightarrow Y$. You might have a more complicated object $Z$ that is easy to relate to $Y$, but it is not obvious how it relates to $X$. So you will look at a morphism $g: Z \rightarrow Y$ and then use category theory to lift $f$ to $Z$ using $g$. In other words, you will find a morphism $h: X \rightarrow Z$ such that $gh = f$. For more info, see these slides.
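In programming practice the word most often shows up in the functional-programming sense, a close cousin of the categorical one above: given f : X → Y, you "lift" it to operate on a richer structure containing X-values. A minimal Python sketch, lifting a function over the list functor:

```python
def lift_to_list(f):
    # the list functor sends f : X -> Y to a function [X] -> [Y];
    # "lifting" f over lists is exactly map
    def lifted(xs):
        return [f(x) for x in xs]
    return lifted

succ = lift_to_list(lambda n: n + 1)
print(succ([1, 2, 3]))  # [2, 3, 4]
```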
{ "domain": "cstheory.stackexchange", "id": 4646, "tags": "reference-request, terminology" }
Batch normalization vs batch size
Question: I have noticed that the performance of my VGG 16 network gets better if I increase the batch size from $64$ to $256$. I have also observed that, using batch size $64$, the results with and without batch normalization differ a lot, with the batch-norm results being poorer. As I increase the batch size, the performance with and without batch normalization gets closer. Something funky is going on here. So I would like to ask: does the increase in batch size have an effect on batch normalization? Answer: By increasing the batch size, your steps can be more accurate because your sampling will be closer to the real population. If you increase the batch size, your batch normalisation can have better results. The reason is exactly like the input layer. The samples will be closer to the population for inner activations.
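The "closer to the population" point is easy to check numerically. The sketch below uses synthetic data and only the standard library: it compares how far batch means stray from the population mean for batches of 8 versus 256; the per-batch mean and variance that batch norm estimates behave the same way:

```python
import random
import statistics

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]
pop_mean = statistics.fmean(population)

def mean_abs_error(batch_size, n_batches=200):
    # average |batch mean - population mean| over many random batches
    errs = [abs(statistics.fmean(random.sample(population, batch_size)) - pop_mean)
            for _ in range(n_batches)]
    return statistics.fmean(errs)

small, large = mean_abs_error(8), mean_abs_error(256)
print(f"batch   8: {small:.4f}   batch 256: {large:.4f}")
```

The larger batches track the population statistics far more closely, which is consistent with batch norm degrading less at batch size 256 than at 64.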
{ "domain": "datascience.stackexchange", "id": 5189, "tags": "machine-learning, neural-network, deep-learning, cnn, batch-normalization" }
Accessing roboearth database
Question: Hi, I was able to retrieve the bottle and book datasets from RoboEarth, but it was through some wild search. Can't I browse through the database and download the objects? I feel that the object datasets are pretty fine as they are segmented and in the form of pcd files. They are very useful for experimenting with object recognition. So is there any other way to access the datasets that are in pcd form and ready to use? Are there any other places to get the datasets? Thanks, Karthik Originally posted by karthik on ROS Answers with karma: 2831 on 2012-01-08 Post score: 0 Answer: You can browse the object models available in the database via the web interface: move the mouse pointer over "Search", click on "Objects". Then click on the upper "Search" button without entering anything to see all object models. HTH, Daniel Originally posted by ddimarco with karma: 916 on 2012-02-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by karthik on 2012-02-28: oh... :) thanks i missed it
{ "domain": "robotics.stackexchange", "id": 7819, "tags": "roboearth" }
Creation and Annihilation Operators and Kinetic Energy Matrix Elements
Question: I'd like to write equations for $c_{ij}(t)$, with a Hamiltonian of the form $$H=\sum_{kn}a^{\dagger}_k t_{kn}a_n + \frac{1}{2}\sum_{klmn}a^{\dagger}_k a^{\dagger}_l v_{klmn}a_m a_n$$ with $t_{kn}$ and $v_{klmn}$ as matrix elements of the kinetic energy and potential in the single particle basis. Now I'm operating on a two particle state $$|\psi (t)\rangle =\sum_{ij}a^{\dagger}_i a^{\dagger}_j |0\rangle c_{ij}(t)$$ Now I got the potential part to work, as well as the time derivative part. I'm actually having trouble with the kinetic energy part (hence the title). Here's what I have done, looking just at the first term acting: $$\sum_{kn}\sum_{ij}a^{\dagger}_k t_{kn} a_n a^{\dagger}_i a^{\dagger}_j |0\rangle c_{ij}(t)$$ I use the commutation relations to finagle the set of three operators after $t_{kn}$ into $\delta_{ni}a^{\dagger}_j \pm \delta_{nj}a^{\dagger}_i$ where the $\pm$ is for bosons and fermions respectively. Inserting this into the above I get $$\sum_{kn}\sum_{ij}t_{kn}a^{\dagger}_k (\delta_{ni}a^{\dagger}_j \pm \delta_{nj}a^{\dagger}_i)|0\rangle c_{ij}(t)$$ Now taking the sum over $n$ I get $$\sum_k \sum_{ij}(t_{ki}a^{\dagger}_k a^{\dagger}_j \pm t_{kj}a^{\dagger}_k a^{\dagger}_i)|0\rangle c_{ij}(t) $$ Now my trouble seems silly to me, but I can't figure out what is the right move here. If I switch the $i$ and $j$ in the second term after the $\pm$ and $c_{ij}(t)$ is symmetric, then it's possibly zero, but I don't think that makes sense.
My total equation as of now is $$\sum_k \sum_{ij}(t_{ki}a^{\dagger}_k a^{\dagger}_j \pm t_{kj}a^{\dagger}_k a^{\dagger}_i)|0\rangle c_{ij}(t)+\frac{1}{2}\sum_{kl}\sum_{ij}a^{\dagger}_k a^{\dagger}_l v_{klij}|0\rangle c_{ij}(t) \pm \frac{1}{2}\sum_{kl}\sum_{ij}a^{\dagger}_k a^{\dagger}_l v_{klji}|0\rangle c_{ij}(t)=i\hbar \sum_{ij}a^{\dagger}_i a^{\dagger}_j |0\rangle \dot{c}_{ij}(t)$$ I'd like to remove the arbitrary ket vectors in the whole expression to just obtain a simple expression for $c_{ij}(t)$ any help would be great, thanks. Answer: I'm not sure if the following is the main issue, but OP writes in the question formulation (v1): [...] $c_{ij}(t)$ is symmetric [...] The coefficient $c_{ij}$ is only $i\leftrightarrow j$ symmetric in case of bosons. In case of fermions, the coefficient $c_{ij}$ is $i\leftrightarrow j$ antisymmetric.
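One way to see explicitly that the two kinetic terms combine rather than cancel (a sketch; upper sign for bosons, lower for fermions, using $c_{ji}=\pm c_{ij}$): relabel the dummy indices $i\leftrightarrow j$ in the second term,

```latex
\pm\sum_{ij} t_{kj}\, a^{\dagger}_k a^{\dagger}_i\, c_{ij}(t)
\;\overset{i\leftrightarrow j}{=}\;
\pm\sum_{ij} t_{ki}\, a^{\dagger}_k a^{\dagger}_j\, c_{ji}(t)
= \sum_{ij} t_{ki}\, a^{\dagger}_k a^{\dagger}_j\, c_{ij}(t),
```

since $(\pm)(\pm)=+$ in both cases. The kinetic part therefore reduces to $2\sum_{kij} t_{ki}\, a^{\dagger}_k a^{\dagger}_j\, c_{ij}(t)\,|0\rangle$, and the kets can then be stripped by matching coefficients of the (anti)symmetrized basis states $a^{\dagger}_i a^{\dagger}_j|0\rangle$ on both sides.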
{ "domain": "physics.stackexchange", "id": 2914, "tags": "quantum-mechanics, matrix-elements" }
Imported map resizes and flickers in RViz
Question: Hello, I run the Ridgeback robot in RViz with the navigation stack. Everything worked fine: I have the robot model in RViz, and I was able to navigate around and create a map. The problem happened when I tried to use the existing map: the model keeps flickering and the map only shows up when I click on reset in RViz. Here are the commands and steps that I use to bring up the existing map in RViz. Because my navigation stack is stored inside the robot, I run these launch files in separate robot terminals: In Terminal 1: source devel/setup.bash roslaunch ridgeback_navigation odom_navigation_demo.launch In Terminal 2: source devel/setup.bash roslaunch jackal_navigation gmapping.launch In Terminal 3: source devel/setup.bash roslaunch ridgeback_viz view_robot.launch config:=gmapping In Terminal 4: roslaunch ridgeback_navigation amcl_demo.launch map_file:=/home/administrator/ridgeback_world.yaml After that, in my laptop terminal: export ROS_MASTER_URI=http://cpr-r100-0019:11311 export ROS_IP=192.168.0.103 export ROS_HOSTNAME=laura-Precision-M6400/ roslaunch ridgeback_viz view_robot.launch config:=local *Note that I saved my map files inside the robot, because when I saved them on the laptop, the file wouldn't launch. It does not seem to matter whether we create the map from live data or logged data; it does the same thing. If we use the same map on a simulated robot in RViz, it shows up and works just fine. Any suggestions? How do we keep the map from resizing? Our current map is 4000 x 4000 and it keeps resizing down to 992 x 992. We tried to use the 2D Nav pose to position the robot within the map. It stops jumping around when it gets close to the actual position but then it starts jumping again. Originally posted by mewescott on ROS Answers with karma: 1 on 2017-08-21 Post score: 0 Answer: If I read your question text correctly, you appear to be running gmapping even though you already have a map.
As far as I know, gmapping is only involved when making a map, not when using one you made earlier. Originally posted by gvdhoorn with karma: 86574 on 2017-08-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 28663, "tags": "navigation, rviz, gmapping" }
Why does having cold hands hurt?
Question: (1). Why does having cold hands create a painful sensation? Is it a physical effect, maybe a protection mechanism from evolution? (2). Why, if one's hands are cold, does it feel as if they are more susceptible to pain? Note: I have (2) as a separate question because the painful sensation in (1) refers to that from the coldness, whereas in (2), I mean that certain 'ordinary tasks' would feel painful, for example, holding a shopping bag. Answer: For question 1, it's just a biochemical response from thermoreceptor nerves in the skin: temperatures that are likely to cause lasting damage from prolonged exposure trigger a neurological response - in this case it's pain. To address question 2, it is most likely down to the physiological response of vasoconstriction. When the body is cold, it constricts the blood vessels in the extremities to preserve core temperature, which is more essential for the proper functioning of vital organs. By constricting blood flow to the extremities (in this case the hands) there is less volume of blood present; because of this, nerves in the hand are closer to the surface and thus more sensitive to pain.
{ "domain": "biology.stackexchange", "id": 794, "tags": "physiology, skin, pain" }
Efficient way to extract the first row in a Group By group
Question: I have a large SQL Server view with this schema: [archive_ID] (int, not null) [archive_date] (datetime, not null) [archdata_path_ID] (varchar(50), not null) [archdata_value] (int not null) I need to group the records by the date, and I need to extract just the first record for each group. This is the current query: WITH cte AS ( SELECT * ,CAST(archive_date AS DATE) AS C ,ROW_NUMBER() OVER ( PARTITION BY CAST(archive_date AS DATE) ORDER BY CAST(archive_date AS DATE) ASC ) AS ad FROM ArchiveData WHERE archdata_path_ID = @PathID ) SELECT DISTINCT C ,archdata_value AS val FROM cte WHERE ad = 1 ORDER BY C ASC The main goal is to improve readability. It would also be great to optimize performance, but that's not mandatory. Answer: I believe that DISTINCT is redundant, since the CTE should produce only one row for each date whose ROW_NUMBER() is 1. Avoid selecting * in the CTE, and list the columns you want explicitly. Your naming is poor: CTE, C, val, ad. Please find more descriptive names. If you are using any SQL Server ≥ 2012, then FIRST_VALUE() is the function you want. SELECT CAST(archive_date AS DATE) AS C , FIRST_VALUE(archdata_value) OVER ( PARTITION BY CAST(archive_date AS DATE) ORDER BY CAST(archive_date AS DATE) ) AS val FROM ArchiveData WHERE archdata_path_ID = @PathID ORDER BY C;
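T-SQL window functions aside, the first-row-per-group idea can be demonstrated portably with a correlated subquery. The sketch below uses SQLite via Python's standard library, with a made-up schema and values, and deliberately orders by the full timestamp so that "first" is deterministic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ArchiveData (
    archive_ID INTEGER, archive_date TEXT,
    archdata_path_ID TEXT, archdata_value INTEGER)""")
con.executemany("INSERT INTO ArchiveData VALUES (?, ?, ?, ?)", [
    (1, "2023-01-01 08:00:00", "p1", 10),
    (2, "2023-01-01 12:00:00", "p1", 20),
    (3, "2023-01-02 09:00:00", "p1", 30),
    (4, "2023-01-02 11:00:00", "p1", 40),
])

# first row of each day = the row whose timestamp is the day's minimum
rows = con.execute("""
    SELECT date(a.archive_date) AS day, a.archdata_value
    FROM ArchiveData a
    WHERE a.archdata_path_ID = 'p1'
      AND a.archive_date = (SELECT MIN(b.archive_date)
                            FROM ArchiveData b
                            WHERE date(b.archive_date) = date(a.archive_date)
                              AND b.archdata_path_ID = 'p1')
    ORDER BY day""").fetchall()
print(rows)  # [('2023-01-01', 10), ('2023-01-02', 30)]
```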
{ "domain": "codereview.stackexchange", "id": 8893, "tags": "sql, sql-server" }
Kardar: The derivation of the Maxwell Boltzmann distribution function
Question: In Mehran Kardar's volume 1: Statistical Physics of Particles, he introduces the Maxwell Boltzmann distribution function just after the discussion on the microcanonical ensemble as follows: The joint probability distribution function for a microstate is $$p(\mu)=\frac{1}{\Omega(E,V,N)} \times \begin{cases}1 & q_i\in [0,V^{1/3}]\ \text{and} \sum p_i^2/2m =E \\ 0 & \text{otherwise} \end{cases} .... (4.27)$$ The above text is for the ideal gas. After this and a few paragaphs on deriving the ideal gas laws, he said this: The unconditional probability of finding a particle with momentum $\vec p_1$ in the gas can be calculated from the joint PDF in 4.27 by integrating over all the other variables $$p(\vec p_1)=\int d^3q_1 \prod_{i=2}^{N}d^3q_id^3p_i\ p (\{ \vec p_i, \vec q_i \}) = V \frac{\Omega (E-p_1^2/2m,V,N-1)}{\Omega (E,V,N)}$$ I understand how unconditional probabilities are calculated. But I can't understand how he wrote that last expression out of thin air. Obviously, the first $V$ comes out because he integrated $d^3q_i$. But I didn't get how that $\Omega$ thing came out. I am a beginner with these things so it would be great to keep the explanation to beginning graduate level. Thank you. Any help is appreciated. Answer: Defining $$p(\mu) \equiv \frac{1}{\Omega(E,V,N)} \delta(E-H(P,Q)) \quad, $$ we find from the normalization condition $$\int \mathrm d^3p_1\, \mathrm d^3p_2\ldots \mathrm d^3 p_N \int \mathrm d^3 q_1\,\mathrm d^3 q_2 \ldots \mathrm d^3 q_N\, p(\mu) = 1 $$ that $$ \Omega(E,V,N) = \Gamma_N(E) \, V^N \quad , $$ where \begin{align} \Gamma_N(E) &\equiv \int \mathrm d^3p_1\, \mathrm d^3p_2\ldots \mathrm d^3 p_N\, \delta\left(E-\sum\limits_{i=1}^N \frac{p_i^2}{2m}\right) \\ V^N&= \int_V \mathrm d^3 q_1\,\mathrm d^3 q_2 \ldots \mathrm d^3 q_N \quad .\end{align} Note that we treat the particles as distinguishable and omitted some constant factors. Of course, we have assumed that $H$ is the Hamiltonian of an ideal gas. 
Here, $\Gamma_N(E)$ is related to the surface area of a $3N$-dimensional hypersphere with radius $\propto \sqrt E$. Eventually, this yields $$p(p_1) = \frac{V\, \Gamma_{N-1}\left(E-\frac{p_1^2}{2m}\right)\, V^{N-1}}{\Omega(E,V,N)} = V\, \frac{\Omega\left(E-\frac{p_1^2}{2m},V,N-1\right)}{\Omega(E,V,N)} $$
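This last expression is also where the familiar Maxwell-Boltzmann form comes from: with $\Gamma_N(E)\propto E^{(3N-2)/2}$, the ratio behaves like $\left(1-p_1^2/2mE\right)^{\sim 3N/2}\to e^{-\beta p_1^2/2m}$ when $E=3N/2\beta$. A quick numeric check of that limit (arbitrary units, values chosen purely for illustration):

```python
import math

beta = 1.0      # inverse temperature, arbitrary units
u = 2.0         # single-particle kinetic energy p1^2 / 2m

for N in (10, 100, 10_000):
    E = 1.5 * N / beta              # E = 3N / (2 beta) for the ideal gas
    finite_n = (1 - u / E) ** (1.5 * N)
    print(N, finite_n, math.exp(-beta * u))
```

Already at modest $N$ the finite-$N$ expression is close to $e^{-\beta u}$, and the agreement tightens as $N$ grows.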
{ "domain": "physics.stackexchange", "id": 86694, "tags": "homework-and-exercises, thermodynamics, statistical-mechanics, probability, ideal-gas" }
Octomap Navigation? How to?
Question: How to use octomap for navigation? After completing the mapping process, I have a map in ".bt" or ".ot" format. Now I want to do amcl on it (say gmapping). But gmapping requires the ".pgm" map format. In this case, how can I use the map from octomap to do the navigation? I am unable to understand how to achieve this by reading the documentation. Can someone please help.

Originally posted by RoboRos on ROS Answers with karma: 37 on 2018-03-23
Post score: 0

Answer: Hello RoboRos, this is actually a very interesting question. There exists a C++ library that could help you with your task. It is called grid_map. As stated on the link provided, the grid_map is:

... a C++ library with ROS interface to manage two-dimensional grid maps with multiple data layers. It is designed for mobile robotic mapping to store data such as elevation, variance, color, friction coefficient, foothold quality, surface normal, traversability etc.

Since using grid maps would be an entirely different topic, I will try to keep it as short as possible by pointing out that there is a demonstration of an octomap conversion provided here. After installing the grid_map package, just change the path in the given launch file to point to your map - it will give you a quick demonstration of just what you could do with it. However, I would advise you to play a bit with the package since the entire concept of grid maps is very interesting and above all, very useful. If you have any further questions, feel free to ask.

EDIT: To save the image of the conversion, in grid_map_demos/src/OctomapToGridmapDemo.cpp add the following lines of code:

#include <opencv2/imgcodecs.hpp>
#include "grid_map_cv/grid_map_cv.hpp"

You need to include OpenCV in order to use imwrite, and grid_map_cv is needed because it offers the function for grid map to image conversion. Then, find this part (it already exists):

// Publish as grid map.
grid_map_msgs::GridMap gridMapMessage;
grid_map::GridMapRosConverter::toMessage(map_, gridMapMessage);
gridMapPublisher_.publish(gridMapMessage);

At that point, the grid map is already constructed and you can save it by adding the following:

cv::Mat map_img;
grid_map::GridMapCvConverter::toImage<unsigned char, 1>(map_, "elevation", CV_8UC1, map_img);
cv::imwrite("/home/<username>/octomap_occupancy.pgm", map_img);

You create a standard OpenCV Mat object, call the template function for grid map conversion with all the arguments and then just save the resulting image. The toImage function declaration is:

template<typename Type_, int NChannels_>
static bool toImage(const grid_map::GridMap& gridMap, const std::string& layer, const int encoding, cv::Mat& image)

Originally posted by dljubic with karma: 516 on 2018-03-27
This answer was ACCEPTED on the original site
Post score: 2

Original comments

Comment by RoboRos on 2018-03-27: Hi, Thanks for your response. This is really cool. However this is just downprojecting to a 2d map, right? Not a full navigation solution. I am not able to save it in some other format e.g .pgm. I mean, with this i am not able to publish map node. I can acquire map from projected_map:=map anywayz

Comment by RoboRos on 2018-03-27: however i'll have to say that i am stunned by the quality of the map outputted by gridmap.

Comment by RoboRos on 2018-03-28: @ddljubic any help would be appreciated....!!!!

Comment by dljubic on 2018-03-28: Sorry for the late response, I am quite busy these days. I don't have time for the next few days to write and post a solution. In the meantime, you could take a look at the grid_map_cv folder, because grid maps provide us with the ability to transform between OpenCV formats and grid maps.

Comment by dljubic on 2018-04-01: @RoboRos I have edited my answer and added the code. Please let me know if it helps!

Comment by RoboRos on 2018-04-03: @dljubic Thank you for your response. The map has been saved in .pgm. Thanks a lot for your help!! :)

Comment by dljubic on 2018-04-07: @RoboRos Sorry for the slow answering and I am glad to help! :)

Comment by h3ct0r on 2020-01-23: I didn't know about the grid_map package, very useful! thanks!
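Once a .pgm has been saved this way, tools such as ROS's map_server turn it back into an occupancy grid by thresholding pixel darkness. As a quick sanity check, here is a minimal sketch (not from the original answer; the threshold defaults, the parser, and the tiny inline map below are illustrative assumptions) of parsing a binary PGM and classifying cells in plain Python:

```python
import io
import numpy as np

def load_pgm(stream):
    """Parse a binary (P5) PGM into a 2D numpy array of pixel values."""
    magic = stream.readline().split()[0]
    assert magic == b"P5", "only binary PGM supported in this sketch"
    # Skip comment lines, then read width/height and the max pixel value.
    line = stream.readline()
    while line.startswith(b"#"):
        line = stream.readline()
    width, height = map(int, line.split())
    maxval = int(stream.readline())
    data = np.frombuffer(stream.read(width * height), dtype=np.uint8)
    return data.reshape(height, width), maxval

def classify(img, maxval, free_thresh=0.196, occupied_thresh=0.65):
    """Map pixel darkness to occupancy, mimicking the map_server convention:
    dark pixels are occupied, bright pixels are free, the rest unknown."""
    occ = (maxval - img.astype(float)) / maxval   # 0 = free, 1 = occupied
    grid = np.full(img.shape, -1, dtype=int)      # -1 = unknown
    grid[occ > occupied_thresh] = 1
    grid[occ < free_thresh] = 0
    return grid

# Tiny 2x2 illustrative map: black, white, black, mid-gray.
pgm = b"P5\n2 2\n255\n" + bytes([0, 255, 0, 128])
img, maxval = load_pgm(io.BytesIO(pgm))
print(classify(img, maxval))   # occupied / free / occupied / unknown
```

This is only a sketch of the idea; in a real stack you would let map_server consume the .pgm plus a YAML file rather than parse it by hand.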
{ "domain": "robotics.stackexchange", "id": 30427, "tags": "slam, navigation, octomap, ros-kinetic, 3dslam" }
Increase of threshold current by temperature in a laser diode
Question: I read somewhere that if we increase the temperature of the material in a laser diode, the threshold current for lasing also increases. Can someone explain to me why this is happening? What is the mechanism inside the diode behind this increase? Answer: "The increase in operating temperature increases the threshold current of the device — more drive current is needed to turn on the LASER diode. This phenomenon is attributed to the fact that as temperature increases, the nonradiative processes in the device would increase and compete with the system for photons. Thus, the threshold current increases with temperature." (http://www.askkia.com/articles/what-is-a-diode.html)
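Datasheets usually quantify this effect (not mentioned in the answer above, but a standard empirical relation for laser diodes) as I_th(T) = I0 * exp(T / T0), where T0 is the diode's "characteristic temperature"; a larger T0 means a less temperature-sensitive device. A small sketch, with I0 and T0 as illustrative assumed values:

```python
import math

def threshold_current(T_celsius, I0=10.0, T0=60.0):
    """Empirical laser-diode threshold current I_th = I0 * exp(T / T0).

    I0 (mA) and T0 (K) are illustrative assumptions here; real devices
    quote T0 in the datasheet (roughly 50-150 K for common diodes).
    Only the temperature *difference* matters for the ratio below, so
    using Celsius is harmless in this sketch.
    """
    return I0 * math.exp(T_celsius / T0)

# Raising the temperature from 20 C to 80 C multiplies the threshold
# current by exp(60/60) = e, i.e. about 2.7x for T0 = 60 K.
ratio = threshold_current(80.0) / threshold_current(20.0)
print(f"{ratio:.2f}")  # ~2.72
```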
{ "domain": "physics.stackexchange", "id": 2420, "tags": "temperature, laser, semiconductor-physics, electronics" }
Unable to use phangorn::phyDat
Question: I am following an example from 'Analysis of Phylogenetics and Evolution with R' (Second Edition) by Emmanuel Paradis. He is doing: I am doing pretty much the same:

x = c('garras', 'garras', 'garras', 'tejido', 'tejido', 'tejido')
y <- phyDat(matrix(x), "USER", levels = unique(x))

but I get:

Error in phyDat.default(data, levels = levels, return.index = return.index, : data object must contain taxa names

Screenshot:

What am I doing wrong? How come the example from the book works?

Answer: I cannot access the book; it might be an older version or there are some steps in between. The screenshot doesn't help. If you look at the vignette and the error, what you need to provide is a matrix of characters with defined row names, for example:

set.seed(111)
x = sample(letters, 100, replace = TRUE)
phyDat(matrix(x), "USER", levels = unique(x))

Error in phyDat.default(data, levels = levels, return.index = return.index, : data object must contain taxa names

mat = matrix(x, 10, 10, dimnames = list(paste0("t", 1:10), NULL))
phyDat(mat, "USER", levels = unique(x))

10 sequences with 10 character and 10 different site patterns.
The states are n t s o y e c h z q m j d o h n z y x p q g u o a i h y c g t d f y t d f e u e f d i j r z x w u m a j p j g d t q y v o x p m s o z x z a y j z r u s a y k l f g l f r i l r f m t t e f x v f g q j
{ "domain": "bioinformatics.stackexchange", "id": 1525, "tags": "r, phylogenetics" }
Generic Singleton (Thread Safe)
Question: LifeTracker.h

#ifndef _LIFETRACKER_H_
#define _LIFETRACKER_H_

#include <vector>
#include <algorithm>
#include "MySIngleton.h"

class LifetimeTracker {
public:
    LifetimeTracker(unsigned int x) : longevity_(x) {}
    virtual ~LifetimeTracker() = 0;

    friend bool Compare(const LifetimeTracker* p, const LifetimeTracker* q);

    void setKey(const std::string& key) { key_ = key; }
    const std::string& getKey() { return key_; }
    void setLongevity(unsigned int nlongevity) { longevity_ = nlongevity; }

private:
    //This is the relative age of Singleton object(s)
    unsigned int longevity_;
    //Stored this Key to Identify Singleton Objects, if client want to Set Longivity
    std::string key_;
};

inline bool Compare(const LifetimeTracker* p, const LifetimeTracker* q)
{
    return p->longevity_ < q->longevity_;
}

// Definition required
LifetimeTracker::~LifetimeTracker() = default;

//Helper destroyer function
// Concrete lifetime tracker for objects of type T
template <typename T, typename Destroyer>
class ConcreteLifetimeTracker : public LifetimeTracker {
public:
    ConcreteLifetimeTracker(T* p, unsigned int longevity, Destroyer d)
        : LifetimeTracker(longevity)
        , pTracked_(p)
        , destroyer_(d)
    {
    }
    ~ConcreteLifetimeTracker()
    {
        destroyer_(pTracked_);
    }

private:
    //This is the tracked object
    T* pTracked_;
    //This is destroying method
    Destroyer destroyer_;
};

//This will be given to std::atexit
void AtExitFn();

// Declaration needed below
template <typename T> struct Deleter;
template <typename T, typename D > class Singleton;

//This is generic deleter, but client can also give its deleter
template <typename T>
struct Deleter {
    void operator()(T* pObj)
    {
        Singleton<T, Deleter<T> >::Destroy(pObj);
    }
};

template <typename T, typename D>
std::string getObjectKey()
{
    std::string key = typeid(T).name();
    key = key + typeid(D).name();
    return key;
}

typedef std::vector<LifetimeTracker*> TrackerArray;
TrackerArray pTrackerArray;
unsigned int Priority = 0;

template <typename T, typename Destroyer >
int SetLongevity(T* pobj, Destroyer d, unsigned int nlogevity = 0)
{
    if (nlogevity == 0) {
        nlogevity = ++Priority;
    }
    LifetimeTracker* ptracker = new ConcreteLifetimeTracker<T, Destroyer>(pobj, nlogevity, d);
    ptracker->setKey(getObjectKey<T, Destroyer>());
    pTrackerArray.push_back(ptracker);
    return 1;
}

template <typename T, typename Destroyer = Deleter<T> >
void SetLongevity(unsigned int nlogevity)
{
    for (auto i : pTrackerArray) {
        if (i->getKey() == getObjectKey<T, Destroyer>()) {
            i->setLongevity(nlogevity);
        }
    }
}

void AtExitFn()
{
    sort(pTrackerArray.begin(), pTrackerArray.end(), Compare);
    for (auto i : pTrackerArray) {
        delete *&i;
    }
    pTrackerArray.erase(pTrackerArray.begin(), pTrackerArray.end());
}

#endif // !_LIFETRACER_H

MySIngleton.h

#ifndef _MYSINGLETON_H_
#define _MYSINGLETON_H_

#include "LifeTracker.h"
#include <string>

//If client Inherit from this class, it needs not to private or delete it all construc operations
class NoConstructOperation {
protected:
    NoConstructOperation() = default;
    virtual ~NoConstructOperation() = default;

public:
    NoConstructOperation(const NoConstructOperation&) = delete;
    NoConstructOperation& operator =(NoConstructOperation&) = delete;
    NoConstructOperation(NoConstructOperation&&) = delete;
    NoConstructOperation& operator = (NoConstructOperation&&) = delete;
};

// If client call Singleton<Myclass>, Singleton<Myclass, Deleter> it will give two different instance of Myclass
template < typename T, typename D = Deleter<T> >
class Singleton : public NoConstructOperation {
public:
    //Check availabily of construcor, copy/move construction/assignment
    //{
    static_assert(!std::is_constructible<T>::value, "Constructor is public");
    static_assert(!std::is_trivially_constructible<T>::value, "Compiler provided default constructor is public: Please declare it private");
    static_assert(!std::is_trivially_copyable<T>::value, "Compiler provided copy constructor is avaliable");
    static_assert(!std::is_copy_constructible<T>::value, "copy constructor is avaliable");
    static_assert(!std::is_trivially_copy_assignable<T>::value, "Compiler provided assignment is avaliable");
    static_assert(!std::is_copy_assignable<T>::value, "copy assisgnment is avaliable");
    static_assert(!std::is_trivially_move_assignable<T>::value, "Compiler provided move assisgnment is avaliable");
    static_assert(!std::is_move_assignable<T>::value, "move assisgnment is avaliable");
    static_assert(!std::is_move_constructible<T>::value, "move constructor is avaliable");
    static_assert(!std::is_trivially_move_constructible<T>::value, "Compiler provided move constructor is avaliable");
    //}

    static T& Instance();
    static void Destroy(T* obj);

private:
    static T* CreateInstance();
    static int ScheduleForDestruction(void(*)());
};

template<typename T, typename D>
T& Singleton<T, D>::Instance()
{
    //This will be thread safe ?
    static T* ptr = CreateInstance();
    // I have overloked it, this is for thread safety
    static int i = SetLongevity<T, D>(ptr, D());
    static int i = ScheduleForDestruction(&AtExitFn);
    return *ptr;
}

template<typename T, typename D>
inline T* Singleton<T, D>::CreateInstance()
{
    return new T();
}

template<typename T, typename D>
inline int Singleton<T, D>::ScheduleForDestruction(void(*pFun)())
{
    std::atexit(pFun);
    return 1;
}

template<typename T, typename D>
inline void Singleton<T, D>::Destroy(T* obj)
{
    delete obj;
    obj = nullptr;
}

#endif

Client Code

// C++11/14 thread safe singleton
#include "stdafx.h"
#include <typeinfo>
#include <stdexcept>
#include <stdlib.h>
#include <exception>
#include <string>
#include <sstream>
#include <iostream>
#include <atomic>
#include <cstdio>
#include "MySIngleton.h"

class Deleter1;

class Myclass : public NoConstructOperation {
public:
    friend class Deleter1;
    friend class Singleton<Myclass>;
    friend class Singleton<Myclass, Deleter1>;

private:
    Myclass() = default;
    ~Myclass() = default;
};

class Deleter1 {
public:
    void operator()(Myclass *p)
    {
        delete p;
        p = nullptr;
    }
};

class Myclass1 :public NoConstructOperation {
public:
    friend
Singleton<Myclass1>; };

int main()
{
    Singleton<Myclass, Deleter1>::Instance();
    SetLongevity<Myclass, Deleter1>(6);
    Singleton<Myclass>::Instance();
    getchar();
    return 0;
}

This is a thread-safe singleton. LifeTracker will track the lifespan of singleton objects through the longevity variable, and through std::atexit we will clean up the singletons. Singleton takes the class of which an instance will be created, plus a destroyer; if the client does not provide one, Singleton will clean up by itself. Please ask for more details if you need them to understand it.

Answer: Singleton Anti-Pattern (@Jerry ;-)
OK. I assume that you know Singleton is also considered an anti-pattern. If you think about it there are usually better ways of doing things.

Design
You have implemented some code that guarantees an order of destruction on your singletons (hopefully there are not that many that this is a big worry!). It looks like it should work. A couple of issues I have are:

Specifying the order of destruction is manual and you have to call SetLongevity() for the type (and destructor). This is very fragile, especially in long-lived code where things are changed without knowing the context of the whole program.

It uses atexit(). The functions registered with atexit() are run before the destruction of static storage duration objects. This means that your singletons that are accessed from the destructors of these objects will always be accessing destroyed objects.

The other issue I have with this is that the current standard technique already handles the order of creation and destruction correctly. See: Finding C++ static initialization order problems. Once you understand there is an order of initialization/destruction problem, it goes away because the solution is so simple. The only problem with the order of initialization problem is that people are not aware of it.

Code Review

Underscore and Identifier
The rules for using an underscore in an identifier are non-trivial and most people get it wrong.
So it is best to avoid prefixing identifiers with an underscore. For full details read this: What are the rules about using an underscore in a C++ identifier?

#ifndef _LIFETRACKER_H_
#define _LIFETRACKER_H_

The above is illegal. These identifiers are reserved for the implementation. Using them results in an invalid program.

Put your local header file first.

#include <vector>
#include <algorithm>
#include "MySIngleton.h" <--- This should be first.

You are trying to prevent your header file being accidentally dependent on other people putting header files first. Your header file should contain all the other includes that it needs but no more. By putting it third in the list you may potentially hide the error of a missing dependency; e.g. that your file depends on <vector> and you forgot to put #include <vector> in your header file. When somebody else uses your header file they find they need to add #include <vector> to their code even if they are not using it, because you forgot it.

Namespace
Put your code in a namespace. Some of your functions have very common names that could cause accidental errors in other people's code; Compare and Deleter spring to mind.

Unsigned
The only time there is a real use case for unsigned int is for a bit field (and because the committee made size() return unsigned by mistake (they regret that decision)). Otherwise the automatic compiler conversion from signed to unsigned can cause some unexpected problems.

LifetimeTracker life(-15); // Compiles just fine.

The problem here is that the user would expect this object to be destroyed first (because it has a negative number and all the others have a positive one). But because the compiler will silently convert this value to an unsigned value with a very big positive value, you will get the exact opposite of the behavior your user expects. The principle of least surprise has been violated here.

Avoid Get/Set functions
This breaks encapsulation. You are exposing implementation details of your class to the world.
Methods should be actions (verbs) that manipulate the state of the object. Do you really want to reset the key after it has been created? Why not just set the key when it is created? Why are you getting the key? Is it just to do a comparison? Why not write an appropriate comparison operator? All of this will make the use of the class more logical!

OK. setLongevity() is a valid use case. The user has requested that the longevity be updated. Ahh, now we see a better name, updateLongevity(): it already had a value and the user has requested you update it!

class LifetimeTracker {
    void setKey(const std::string& key)
    const std::string& getKey()
    void setLongevity(unsigned int nlongevity);
};

Const methods
Methods that don't mutate the state of the object should be marked const.

const std::string& getKey()   -> const <-   You forgot a const.
{ return key_; }

Unused return value
If your function always returns the same value, then you have no need of a return value. Just make it void.

template <typename T, typename Destroyer >
int SetLongevity(T* pobj, Destroyer d, unsigned int nlogevity = 0)
{
    // STUFF
    return 1; // Who cares that it returns 1.
}

Smelly
This smells.

delete *&i;

Why are you doing all the *&, taking the address and then de-referencing it?

Clear and erase all

pTrackerArray.erase(pTrackerArray.begin(), pTrackerArray.end());

Sure, that works. But it is the same as calling clear(), and the method clear() much more accurately describes intent.

Thread Safety
Yes, this is thread safe. Since C++11 the standard has guaranteed that this object will only be initialized once (even in the presence of threads).

//This will be thread safe ?
static T* ptr = CreateInstance();

Even before C++11 several compilers made that guarantee (but not all of them). This is thread safe in that it will only be initialized once.
// I have overloked it, this is for thread safety
static int i = SetLongevity<T, D>(ptr, D());

It is not thread safe in the sense that it may be initialized by a different thread than ptr above. Your only guarantee is that it is initialized once. Same issue here, if it is a problem:

static int i = ScheduleForDestruction(&AtExitFn);

Seems like a lot of work

template<typename T, typename D>
inline T* Singleton<T, D>::CreateInstance()
{
    return new T();
}

This is only used in one place. Why not just put new T{} at the place where this function is called?
{ "domain": "codereview.stackexchange", "id": 22025, "tags": "c++, c++11, thread-safety, singleton" }
How do I estimate the difference in the speed of time flow between Earth and a distant star?
Question: I can use data from the Gaia Mission to estimate radial velocity (from RVS) and proper motion (from parallax), to take into account special relativity. The remaining question is what data I should use to estimate gravitational effects on time flow. For example, how much faster/slower does time pass for a star that is a hundred light years closer to the galactic center than Earth? It is reasonable to assume that there are at least a few big components of the gravitational field there:

The gravity of the star itself
The gravity of the surrounding galaxy.

The star's gravity is more or less possible to estimate, by estimating its mass. Good sources of data for this are very welcome. For the second component, I have no idea. This is not a research question, so the result should be believable, but not necessarily scientifically correct. The baseline is to use Newton's gravity law for a massive disk with an average density close to the average density of the Milky Way. I suspect that there are some ways to improve this estimation.

P.S. If there are better data sources for true velocities of the distant stars, feel free to point them out.

Answer: Nearby stars

For such a small distance of $\sim 100$ light years, you can approximate that the Galaxy's gravitational field $g$ is uniform. On average, stars near the Sun orbit at speeds $v\simeq 220\text{ km/s}$ at a radius of $r\simeq8.2\text{ kpc}$. Assuming circular orbits, that corresponds to a gravitational acceleration of $g=v^2/r=2.0\times 10^{-11}\ c^2/\text{lyr}$; the unit here is squared speed of light per light year, which is appropriate for this problem. For an observer closer to the Galactic center by distance $\Delta r$, time "passes slower" by the fraction $g\Delta r/c^2=2.0\times 10^{-11}(\Delta r/\text{lyr})$ in the weak-field limit. If $\Delta r=100~\text{lyr}$, time passes slower by $2.0\times 10^{-9}$, or a couple of parts per billion (a couple of nanoseconds per second).
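The "nearby stars" estimate above is easy to reproduce numerically. A short sketch using the same round numbers as the answer (v = 220 km/s, r = 8.2 kpc), with unit conversions as the only added ingredient:

```python
import math

C = 299_792.458          # speed of light, km/s
KPC_IN_LYR = 3261.56     # light years per kiloparsec

def fractional_time_dilation(delta_r_lyr, v_kms=220.0, r_kpc=8.2):
    """Weak-field time dilation g*dr/c^2 for a uniform Galactic field,
    with g = v^2/r inferred from a circular orbit (the answer's assumption)."""
    g = (v_kms / C) ** 2 / (r_kpc * KPC_IN_LYR)   # in units of c^2 per lyr
    return g * delta_r_lyr

print(f"g = {fractional_time_dilation(1.0):.1e} c^2/lyr")            # ~2.0e-11
print(f"dt/t over 100 lyr = {fractional_time_dilation(100.0):.1e}")  # ~2.0e-09
```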
Distant stars

More generally, the fractional time dilation between two observers is just the gravitational potential difference between them, $\Delta\Phi/c^2$. Here's a spherical approximation of the Milky Way's gravitational potential, calculated using the spherical mass model of Cautun et al. (2020). The unit here is $10^{-6}c^2$, so if the vertical axis differs by $1$ between two radius values, that means the gravitational time dilation between observers at those radii is one part in a million (one microsecond per second).
{ "domain": "astronomy.stackexchange", "id": 6861, "tags": "time-dilation, estimate" }
Featurecount output .txt file from bam file
Question: After running this command:

featureCounts -p -s 2 -a $genome -o $dir"/"$specie"/count_table/"$value".txt" $output_loc -T 8

on the BAM output of HISAT (output_loc), I got this output: Now, does the last column of this file show the gene count value? And if I merge the same column of different samples, how should I go through the normalization step? Thanks a lot for your help!

Answer: The first 6 columns in the standard featureCounts output represent what is in the column names. All columns after that (starting at 7) represent the counts for the sample(s). If you use a single BAM file as input then it's one column; if you use many BAMs as input then it is one column per BAM. Whether the counts are at the gene, exon, transcript or any other level depends on how you run the tool. Typically, with default settings, it is the raw gene-level counts. Generally, you can sum up technical replicates (= lane/sequencing replicates to increase depth) prior to running normalization.
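To illustrate what a simple normalization of the merged count columns can look like, here is a counts-per-million (CPM) sketch. This is only an illustration: the sample names and counts are made up, and in practice tools such as DESeq2 or edgeR handle normalization more robustly (accounting for composition bias, not just library size).

```python
# Counts-per-million (CPM): scale each sample's counts by its library size.
counts = {
    "sampleA": [10, 200, 90],    # hypothetical gene counts per sample
    "sampleB": [5, 400, 95],
}

def cpm(col):
    total = sum(col)                       # library size for this sample
    return [c / total * 1_000_000 for c in col]

normalized = {name: cpm(col) for name, col in counts.items()}
for name, vals in normalized.items():
    print(name, [round(v) for v in vals])
```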
{ "domain": "bioinformatics.stackexchange", "id": 2647, "tags": "rna-seq, gene-expression, read-mapping, featurecounts" }
Why Microchannel Plates can be operated only in vacuum?
Question: Why is it said that Microchannel Plates can only be operated in vacuum? What is the maximum pressure at which they can be operated? Also, while not operating, should they be kept in vacuum? Is this because the semiconducting property of the channels can be affected by the atmospheric pressure? Or can the gas molecules be ionized during operation and burn the detector? Answer: The electrons need to get from the top to the bottom without any interference from any gas molecules that might be in the channels. If nothing else, collisions with gas molecules will degrade performance. At atmospheric pressure, I don't think the device would work at all. You can blow a hole through an MCP with over-voltage, but I'm not sure how this relates to residual gas in the channels. Also, it is certainly possible to generate a discharge (spark) between the bottom of the MCP and whatever comes next, so you want to keep gases out of there. You can store them in air, preferably in a dry box.
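A rough way to see why low pressure is needed (a back-of-the-envelope estimate, not from the answer above): the kinetic-theory mean free path, lambda = kT / (sqrt(2) * pi * d^2 * p), should be much longer than the channel dimensions so that multiplied electrons rarely meet gas molecules. The molecular diameter below is an assumed effective value for air.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
D = 3.7e-10            # assumed effective molecular diameter of air, m

def mean_free_path(pressure_pa, temperature_k=300.0):
    """Kinetic-theory mean free path lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temperature_k / (math.sqrt(2) * math.pi * D**2 * pressure_pa)

# At 1 Pa the mean free path is already ~7 mm, longer than a typical MCP
# channel, but ion feedback pushes practical operating pressures far lower
# (commonly quoted as roughly 1e-3 to 1e-4 Pa or below).
for p in (1.0, 1e-2, 1e-4):
    print(f"p = {p:.0e} Pa  ->  lambda = {mean_free_path(p) * 1e3:.1f} mm")
```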
{ "domain": "physics.stackexchange", "id": 14182, "tags": "electronics, particle-detectors, instrument" }