Simplifying an associative array()
Question: Would it be better to convert the array in the following code to $langs = array("en", "de", "fr"); and then reuse the values for both $folder and $flag? If so, how would my foreach (or maybe a while?) loop then be written? <?php $langs = array( "en/" => "en.png", "de/" => "de.png", "fr/" => "fr.png" ); $self = $_SERVER['REQUEST_URI']; $pattern = "{^.*/}i"; $links = array(); foreach ($langs as $folder => $flag) { $url = preg_replace($pattern, "$folder", $self); $link = "<li><a href=\"../$url\"><img src=\"../img/$flag\"></a></li>"; array_push($links, $link); } echo implode($links) . "\n"; ?> I'm trying to "fool-proof" the code by effectively limiting the folder structures and file names that can be used, as I create a basic template of files for quick rollout of our websites. If you can see any other improvements, that would be much appreciated. Answer: You can definitely simplify this, as the only part of each element of $langs that changes is the two-letter language code. Something like this would work: <?php $langs = array( "en", "de", "fr" ); $self = $_SERVER['REQUEST_URI']; $pattern = "{^.*/}i"; $links = array(); foreach ($langs as $code) { $url = preg_replace($pattern, "$code/", $self); $link = "<li><a href=\"../$url\"><img src=\"../img/$code.png\"></a></li>"; array_push($links, $link); } echo implode($links) . "\n"; ?> You just use a non-associative array and append / or .png where needed; much cleaner.
{ "domain": "codereview.stackexchange", "id": 6005, "tags": "php, array" }
Is the problem of cryonics only the pressure?
Question: According to the pop-sci sources I drew from, the only problem with freezing whole bodies is that you can't preserve them without freeze damage. If you kept a cold-blooded animal in cold surroundings (having somehow removed its parasites beforehand) and optimized the pressure so that its cells don't collapse, could you stop all its functions and bring it back to life again later? Answer: The major problem in cryonics is that when an organism is frozen, all of the organism starts freezing. The cells start expanding, since ice expands. The sharp edges of the ice crystals can destroy a lot. Also, your blood would freeze; again, the sharp edges of the ice crystals would destroy major blood vessels. That is why, currently, antifreeze has to be inserted into the blood, but it still might not be effective. Now, to answer your question: many cold-blooded organisms can survive freezing cold temperatures. Some frogs (like the wood frog), for example, have antifreeze in their blood vessels! So some actually have special adaptations to survive freezing temperatures. I hope this answered your question. Please ask in the comments if you want more information on this; I will gladly research it for you.
{ "domain": "biology.stackexchange", "id": 4837, "tags": "cell-biology, cryonics" }
What is the reason behind studying seroprevalence of a disease?
Question: Scientific literature on viral disease, specifically ones like Zika and Dengue, contains seroprevalence data. What is the reason behind understanding seroprevalence? Answer: A seroprevalence study tests people for the presence of a disease using serologic methods. Often you do ELISA (or related methods) to test for pathogen-specific antibodies. This gives you information on how common a disease is, especially when it might infect people without them showing symptoms. It can also help detect cross-protection when the seroprevalence is lower than expected. This was shown in French Polynesia, where the seroprevalence was much lower than expected (from: Zika Virus Seroprevalence, French Polynesia, 2014–2015) Our findings show that <50% of the population of French Polynesia had detectable Zika virus IgG. This seroprevalence rate is much lower than the 86% attack rate estimated by Kucharski et al. (14) using a model that assumed the French Polynesia population was 100% susceptible to Zika virus infection. However, in a setting where DENVs are highly prevalent (8), the possibility of cross-protecting immunity preventing infection from Zika virus (12,13) cannot be excluded. The attack rate and the asymptomatic:symptomatic ratio in French Polynesia were also lower than those described for the 2007 outbreak on Yap Island (73% and 4:1, respectively) (4); this finding supports the perception that the drivers of Zika virus transmission vary depending on geographic context. For other flaviviruses, such as DENV, previous model-based studies showed that the herd immunity threshold required to block viral transmission is ≈50%–85% (15). Thus, if Zika virus has the same epidemiologic characteristics as DENV, the seroprevalence rate of 49% would not be sufficient to prevent another outbreak. It can also help to make predictions about the spreading of a disease. 
With a high seroprevalence, epidemic spread is unlikely, while with a low seroprevalence the situation is quite different. This was observed in Brazil when Zika first occurred there.
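The herd-immunity logic in the quoted passage can be made concrete with a small calculation. In the simplest SIR-type model the threshold is 1 − 1/R0; the R0 values below are illustrative assumptions chosen only to reproduce the quoted ≈50%–85% range, not estimates from the paper:

```python
# Herd-immunity threshold for a simple SIR-type model: 1 - 1/R0.
# The R0 values are illustrative assumptions, not figures from the study.
def herd_immunity_threshold(r0):
    return 1.0 - 1.0 / r0

thresholds = {r0: round(herd_immunity_threshold(r0), 2) for r0 in (2.0, 4.0, 6.0)}
print(thresholds)  # {2.0: 0.5, 4.0: 0.75, 6.0: 0.83}

# A 49% seroprevalence sits below all of these thresholds, which is why the
# quoted passage concludes another outbreak could not be ruled out.
assert all(t > 0.49 for t in thresholds.values())
```

Note how quickly the threshold saturates: doubling R0 from 2 to 4 moves it from 50% to 75%, matching the quoted range for DENV-like viruses.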
{ "domain": "biology.stackexchange", "id": 9111, "tags": "molecular-biology, epidemiology" }
Distinguishability in Quantum Ensembles
Question: Inspired by this question: Are these two quantum systems distinguishable? and discussion therein. Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (classical probability distribution of states in ensemble) and quantum reasons (an individual state can have a superposition of states). Because a classical system cannot be in a superposition of states, and in principle the state can be directly measured, the probability distribution is directly measurable. So any differing probability distributions are distinguishable. However in quantum mechanics, an infinite number of different ensembles can have the same density matrix. What assumptions are necessary to show that, if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and achieve different density matrices? (i.e. that the 'redundant' information regarding what part of Hilbert space is represented in the ensemble is never retrievable even in principle) To relate to the referenced question, for example if we could generate an interaction that evolved: 1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$ to 2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$ such a mapping of vectors in Hilbert space can be 1-to-1. But it doesn't appear it can be done with a linear operator. So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space, and the evolution is a linear operator. Can someone give a simple proof showing that two ensembles with initially the same density matrix can never evolve to two different density matrices? Please be explicit with what assumptions you make. 
Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution, like the projection from a measurement, can't eventually allow one to distinguish the underlying ensemble either. Such as perhaps using correlation between multiple measurements, or possibly, instead of asking something with only two answers, asking something with more than two, so that finally the distribution of answers needs more than just the expectation value to characterize the results. Answer: You only need to assume the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories) and the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured). Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable. First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times. The equation is $$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$ Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) only depends on the density matrix before the measurement. Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a Hermitian operator $A$. 
Let the diagonalization of $A$ be $$A = \sum_i a_i P_i$$ where $P_i$ is the projection operator onto the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is $$p(a_i) = \operatorname{Tr}(\rho P_i)$$ This gives the complete probability distribution of $A$. The density matrix of the full ensemble after the measurement is $$\rho' = \sum_i P_i \rho P_i$$ and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is $$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$ Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.
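The claim that measurement statistics and evolution depend only on $\rho$ can be checked numerically. Below is a small pure-Python sketch (no libraries assumed): two textbook ensembles — an equal mixture of $|0\rangle, |1\rangle$ and an equal mixture of $|+\rangle, |-\rangle$ — share the density matrix $I/2$, and a sample unitary (a rotation standing in for $e^{-iHt/\hbar}$) maps both to the same $\rho'$:

```python
import math

# 2x2 complex matrices as nested lists; minimal helper functions.
def outer(a, b):                      # |a><b|
    return [[a[i] * b[j].conjugate() for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2))

def density(states, probs):           # rho = sum_k p_k |psi_k><psi_k|
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for st, p in zip(states, probs):
        o = outer(st, st)
        rho = [[rho[i][j] + p * o[i][j] for j in range(2)] for i in range(2)]
    return rho

s = 1.0 / math.sqrt(2.0)
ket0, ket1 = [1.0, 0.0], [0.0, 1.0]
plus, minus = [s, s], [s, -s]

# Two "different" ensembles with the same density matrix I/2:
rho_a = density([ket0, ket1], [0.5, 0.5])
rho_b = density([plus, minus], [0.5, 0.5])
assert close(rho_a, rho_b)

# Identical measurement statistics: p(0) = Tr(rho P_0) with P_0 = |0><0|
P0 = outer(ket0, ket0)
assert abs(trace(matmul(rho_a, P0)) - trace(matmul(rho_b, P0))) < 1e-12

# Identical unitary evolution rho -> U rho U^dagger (sample rotation U):
c, t = math.cos(0.7), math.sin(0.7)
U = [[c, -t], [t, c]]
assert close(matmul(matmul(U, rho_a), dagger(U)),
             matmul(matmul(U, rho_b), dagger(U)))
```

Of course a numerical check of one example is not the proof, but it illustrates the point: every formula above takes only $\rho$ as input, never the underlying list of pure states and probabilities.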
{ "domain": "physics.stackexchange", "id": 794, "tags": "quantum-mechanics, quantum-information" }
The most stable way of standing in a bus
Question: Here's something that's been bugging me for quite a long time. Imagine the everyday situation that you are standing in a bus with your back to the wall, having only limited space on the floor and no handle to hold. You are quite OK against acceleration in 3 of the 4 directions. You will not fall backwards, the wall is there. Right and left are relatively easily handled by a wide stance, but you are troubled about falling forward. Now how would you stand to withstand the acceleration most effectively? As it is probably a bit too vague (and maybe not completely understandable), let me elaborate and put in some numbers just to demonstrate the situation. Consider a normal human (average height and normal vertical weight distribution, if a notion like this exists) and let's state that the limitation on the floor space is defined as 50 cm from the wall. You cannot put your feet further, because someone is standing there. If we skip all exotic and unnatural stances, we are basically left with two extremes and a continuum between them. One extreme is that you are standing with your legs straight; the second extreme is that you squat (let's assume you will not go below the point where your thighs are parallel with the floor, and let's ignore how inconvenient this stance is). My physical intuition is too weak to even guess which of these stances is more stable; I just don't expect the optimum to be somewhere in between (but I'm not sure about that either). I also have no idea how it should be calculated, or whether it's rather trivial or very hard. Here's a picture of what I was speaking about. The figure on the left is what I eventually do in the bus; the figure on the right is what intuitively seems better to me (the center of mass is lower and maybe the angles are more favorable). So: is it better to bend the knees (even a bit)? 
Answer: Let's draw in a few distances and forces: The black dot is your centre of mass (normally just below the navel I think). $a$ is the deceleration of the bus and $g$ is the acceleration due to gravity. If your feet don't skid, the question is whether you will rotate about the point where your feet touch the floor. To answer this we need to calculate the net moment about your feet. If this is negative it will be trying to rotate you anticlockwise, so you will stay pinned to the wall. If the moment is positive you will rotate clockwise and fall over. The moment is the force times the perpendicular distance to the pivot, so the net moment is: $$ T = -gd + ah $$ And therefore the condition for you not falling over is: $$ gd > ah $$ When you crouch you reduce the value of $h$, so you reduce $ah$ and this makes you less likely to fall over. There is a slight subtlety to this, as bending your legs also shifts your centre of gravity horizontally, changing $d$ a little as well as $h$, but overall bending your legs is a good thing (it will also do wonders for your thigh muscles :-).
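To make the $gd > ah$ condition concrete, here is a small sketch with illustrative numbers (an assumed 0.4 m horizontal offset and centre-of-mass heights of roughly 1.0 m standing vs 0.7 m crouching — guesses for a typical adult, not measured values):

```python
g = 9.81  # gravitational acceleration, m/s^2

def max_deceleration(d, h):
    """Largest bus deceleration a (m/s^2) still satisfying g*d > a*h (no tipping)."""
    return g * d / h

a_straight = max_deceleration(d=0.4, h=1.0)   # standing with legs straight
a_crouched = max_deceleration(d=0.4, h=0.7)   # knees bent, lower centre of mass
print(round(a_straight, 2), round(a_crouched, 2))  # 3.92 5.61
assert a_crouched > a_straight  # a lower centre of mass tolerates harder braking
```

With these assumed numbers, crouching raises the tolerable deceleration from about 0.4 g to about 0.57 g, which matches the qualitative conclusion above.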
{ "domain": "physics.stackexchange", "id": 11637, "tags": "classical-mechanics, acceleration, stability" }
C++ Structural Requests System with Payload Management
Question: First things first, I'd like to thank Mr. G. Sliepen and Mr. Davislor for their help in the previous questions (C++ System data transfer design) (C++ System data transfer design - Following 1); the resultant code can be found here*. * The code includes some added functionality beyond the point :) Now I'm taking the review request one step further: as I mentioned the "simplified SystemInterface" in the last question, here I'm focusing on it and any potential enhancements and optimizations. General notes: The system data transfer is over-simplified in this code. "Guarders" are introduced to guard against unaccepted payloads and allow for correction in some cases. A partial SystemInterface is provided here. It comes with a "simplified ServiceManager", which manages Services (or Entities), giving each one a unique number that is used to identify it to other entities (such as Logger/SystemInterface/..). SystemInterface is provided here as one instance in the global scope; in my code it's a composed object within each entity, which receives a pid at construction. The SystemInterface uses a kind of structural recursion to find the intended request. Generally _execReq is called when data is received by any communication means (such as MQTT/HTTP/Serial/...), and it's callable from other entities of course. One of my concerns is the runtime of the _execReq call, and whether it's already optimized or there's room for optimization. 
#include <algorithm> #include <cassert> #include <concepts> #include <cstdint> #include <functional> #include <iostream> #include <iterator> #include <ranges> #include <span> #include <string> #include <string_view> #include <unordered_map> #include <utility> #include <variant> #include <vector> // // Tools // #define CHOP_FRONT(vs) (std::vector<std::string>(++vs.begin(),vs.end())) std::string join(const std::vector<std::string>& vs,const char* delim) { std::string rv=""; if(vs.size()){ std::string sd(delim); for(auto const& v:vs) rv+=v+sd; for(int i=0;i<sd.size();i++) rv.pop_back(); } return rv; } std::vector<std::string> split(const std::string& s, const char* delimiter){ std::vector<std::string> vt; std::string delim(delimiter); auto len=delim.size(); auto start = 0U; auto end = s.find(delim); while (end != std::string::npos){ vt.push_back(s.substr(start, end - start)); start = end + len; end = s.find(delim, start); } std::string tec=s.substr(start, end); if(tec.size()) vt.push_back(tec); return vt; } enum class RequestErrors : uint8_t { OK, NOT_FOUND, NOT_STRING, TOO_MANY_INPUTS, TOO_FEW_INPUTS, NOT_NUMERIC, }; // // Data types // using BasicReferenceData = std::variant<std::string_view, std::span<std::uint8_t>>; using BasicConcreteData = std::variant<std::string, std::vector<uint8_t>>; std::span<std::uint8_t> t; using ConcreteDataContainer = std::vector<BasicConcreteData>; using ReferenceDataContainer = std::vector<BasicReferenceData>; /* Over simplified ReferenceData and ConcreteData, for the full implemented: https://godbolt.org/z/EofKMzGE9 */ using ReferenceData = ReferenceDataContainer; using ConcreteData = ConcreteDataContainer; using ReturnData = ConcreteData; using RequestReturn = std::pair<RequestErrors, ReturnData>; using CallbackSignature = std::function<RequestReturn(const std::string &context, const ReferenceData &)>; /// /// Guarders /// using PayloadGuardFN = std::function<RequestReturn(const ReferenceData&)>; /* Guarders Helper functions */ bool 
binaryData(const BasicReferenceData& data) { return std::holds_alternative<std::span<std::uint8_t>>(data); } bool printable(const BasicReferenceData& in) { auto span = std::get_if<std::span<std::uint8_t>>(&in); return !span || std::all_of(span->begin(), span->end()-1, isprint) && (isprint(span->back()) || span->back() == '\0'); // Allows the presence of zero terminator // !span means it's a string_view } void to_string(BasicReferenceData& data) { // Might remove the check. if (std::holds_alternative<std::span<std::uint8_t>>(data)) { auto span = std::get<std::span<std::uint8_t>>(data); data = BasicReferenceData{std::string_view(reinterpret_cast<char*>(span.data()), span.size())}; } } void to_string(ReferenceDataContainer::iterator it) { auto span = std::get<std::span<std::uint8_t>>(*it); *it = BasicReferenceData{std::string_view(reinterpret_cast<char*>(span.data()), span.size())}; } bool is_number(std::string_view s) { bool numeric = false; if (s.size()) { bool first_char_valid = numeric = (isdigit(s.front()) || (s.front() == '-' && s.size() > 1)); if (first_char_valid) { bool found_dot = false; for (auto it = s.begin(); it != s.end(); it++) { if (isdigit(*it)) continue; else if (*it=='.' && !found_dot) { found_dot = true; continue; } numeric = false; break; } } } return numeric; } class Guarder { public: virtual RequestReturn operator()(const ReferenceData& in) = 0; }; /* Guarder classes */ class CountGuarder : public Guarder { // Or a template ?? size_t count; public: RequestReturn operator()(const ReferenceData& in) override{ return std::make_pair((in.size() > count ? RequestErrors::TOO_MANY_INPUTS : in.size() < count ? 
RequestErrors::TOO_FEW_INPUTS : RequestErrors::OK), ReturnData{});} CountGuarder(size_t count) : count(count) {} }; class StringsGuarder : public Guarder { public: RequestReturn operator()(const ReferenceData& in) override { std::cout << "String Guarder\t"; auto mut_in = const_cast<ReferenceData*>(&in); // auto view = in | std::views::filter(binaryData); if (std::all_of(in.begin(), in.end(), printable)) { for (auto it = mut_in->begin(); it != mut_in->end(); it++) { if (binaryData(*it)) { to_string(it); } } // std::ranges::for_each(mut_in | std::views::filter(binaryData), to_string); std::cout << "OK\n"; return std::make_pair(RequestErrors::OK, ConcreteData{}); } else { std::cout << "NOT_STRING\n"; return std::make_pair(RequestErrors::NOT_STRING, ConcreteData{}); } } }; // Should follow a StringGuarder within the pipeline and remove the StringGuarder inside? class NumericGuarder : public Guarder { public: RequestReturn operator()(const ReferenceData& in) override { std::cout << "Numeric Guarder\t"; StringsGuarder stringify; auto result = stringify(in); if (result.first != RequestErrors::OK) { return result; } for (auto& i : in) { auto* sv = std::get_if<std::string_view>(&i); if (!sv || !is_number(*sv)){ std::cout << "Not Numeric!\n"; return std::make_pair(RequestErrors::NOT_NUMERIC, ConcreteDataContainer{std::string(*sv)}); } } std::cout << "OK\n"; return std::make_pair(RequestErrors::OK, ConcreteData{}); } }; // Can build custom Guarders. RequestReturn toStringCorrector(const ReferenceData& pload) { StringsGuarder stringify; return stringify(pload); } // // SystemInterface // struct request { std::uint32_t owner; std::uint32_t levID; CallbackSignature cbf; std::vector<Guarder*> guarders; }; using REQMAP = std::unordered_multimap<std::string,request>; using REQMAP_I = REQMAP::iterator; enum REQ_ID { REQ_ROOT, REQ_LOGGER, // A predefined ID for some instances, as the Logger. 
REQ_MAX }; class SystemInterface { REQMAP requestsMap; REQMAP_I __exactMatch(const std::string& cmd,uint32_t owner){ auto any=requestsMap.equal_range(cmd); for(auto i=any.first;i!=any.second;i++) if(i->second.owner==owner) return i; return requestsMap.end(); } void __flatten(std::function<void(std::string)> fn){ // For generating "help" list REQMAP_I ptr; for(ptr=requestsMap.begin();ptr!=requestsMap.end(); ptr++){ if(!(ptr->second.owner)){ if(ptr->second.levID) _flattenCmds(fn,ptr->first,ptr->first,ptr->second.levID); else fn(ptr->first); } } } RequestReturn _dispatch(std::vector<std::string> vs, ReferenceDataContainer ploads, uint32_t owner=0){ std::cout << "_dispatch: " << join(vs,"/") << "\n"; if(vs.size()){ REQMAP_I i; std::string cmd=vs[0]; i=__exactMatch(cmd,owner); if(i!=requestsMap.end()){ if(i->second.cbf) { return [=]()->RequestReturn{ for(auto& guard : i->second.guarders) { auto ret = (*guard)(ploads); if (ret.first!=RequestErrors::OK) { return ret; } } return i->second.cbf(join(CHOP_FRONT(vs),"/"), ploads); }(); } else return _dispatch(CHOP_FRONT(vs), ploads, i->second.levID); } else return std::make_pair(RequestErrors::NOT_FOUND, ConcreteDataContainer{}); } else return std::make_pair(RequestErrors::NOT_FOUND, ConcreteDataContainer{}); } public: void addRequest(uint32_t owner, uint32_t levId, const std::string &name, CallbackSignature cbf, std::initializer_list<Guarder*> guarders={}) { if (__exactMatch(name, owner) == requestsMap.end()){ // std::cout << "Inserting " << name << "\n"; requestsMap.insert(std::make_pair(name,request{owner,levId, cbf, guarders}));} else std::cout << "Already assigned path!\n"; } RequestReturn _execReq(std::string topic, ReferenceDataContainer ploads){ std::vector<std::string> cmd=split(topic,"/"); return _dispatch(cmd, ploads); // optimise? 
} void _flattenCmds(std::function<void(std::string)> fn,std::string cmd,std::string prefix,uint32_t lev){ REQMAP_I i=requestsMap.find(cmd); for(i=requestsMap.begin();i!=requestsMap.end();i++){ if(i->second.owner==lev){ std::string trim = prefix+"/"+i->first; if(i->second.levID) _flattenCmds(fn,i->first,trim,i->second.levID); else fn(trim); } } } void help() { std::vector<std::string> unsorted={}; __flatten([&unsorted](std::string s){ unsorted.push_back(s); }); sort(unsorted.begin(),unsorted.end()); std::cout << "HELP: \n"; for (auto& path : unsorted) { std::cout << "\t" << path << "\n"; } } }; SystemInterface interface; // // Guarder objects // CountGuarder c1Guard(1); NumericGuarder nGuard; StringsGuarder sGuard; // // ServiceManager // class ServiceManager { protected: std::uint32_t pid; std::string name; static std::uint32_t services; ServiceManager(std::string name) : name(name), pid(++services+REQ_MAX) { // std::cout << "name=" << name << " pid=" << pid << "\n"; } }; std::uint32_t ServiceManager::services=0; class Service1 : public ServiceManager { public: Service1() : ServiceManager("service1") { // Installs a level of "svc1" interface.addRequest(REQ_ROOT, pid, "svc1", nullptr, {}); // LevelID // Can add sub level ID on top of (pid) further ... 
// Adds a request path "svc1/test1" interface.addRequest(pid, 0, "test1",[this](const std::string& ctx, const ReferenceData& pload){ std::cout << "Received on test1\n"; return std::make_pair(RequestErrors::OK, ConcreteDataContainer{std::string{"Hello from test1"}}); }, {}); // Adds a request path "svc1/onestring" interface.addRequest(pid, 0, "onestring",[this](const std::string& ctx, const ReferenceData& pload){ std::cout << "Received on onestring\n"; return std::make_pair(RequestErrors::OK, ConcreteDataContainer{std::string{"Hello from onestring"}}); }, {&c1Guard, &sGuard}); } }; class Logger : public ServiceManager { // dummy logger int level; public: Logger() : ServiceManager("logger") { interface.addRequest(REQ_ROOT, REQ_LOGGER, "logger", nullptr, {}); // LevelID interface.addRequest(REQ_LOGGER, 0, "level", [this](const std::string& ctx, const ReferenceData& pload){ std::cout << "Received on level ctx=" << ctx << "\n"; if (ctx == "set") { if ((pload.size() == 1)) { if (auto plevel = std::get_if<std::string_view>(&pload[0])) { // MUST be true with the sGuard. level = std::atoi(std::string(*plevel).c_str()); } } else { return std::make_pair(RequestErrors::TOO_FEW_INPUTS, ConcreteDataContainer{}); } } else if (ctx != "get") { return std::make_pair(RequestErrors::NOT_FOUND, ConcreteDataContainer{}); } return std::make_pair(RequestErrors::OK, ConcreteDataContainer{std::to_string(level)}); }, {&sGuard, &nGuard}); } }; Service1 svc1; Logger logger; And here's a compiler explorer link with a simple example. Lastly, I'd like to attribute Philbowles (Passed) who is the original author of SystemInterface's internal methods which inspired me. With Thanks, Hamza Hajeir Answer: Avoid macros whenever possible Macros are notoriously difficult to write correctly, and will easily fail with different use cases. 
For example, the following will fail to compile: CHOP_FRONT(std::vector<std::string>{"Aap", "Noot", "Mies"}); If you had written a regular function, as already suggested by Davislor, you would have avoided this issue. Naming I strongly recommend you use verbs for functions, and nouns for types and variables. For example, isBinaryData() and isPrintable(). Be consistent: why use snake_case in to_string(), but camelCase in toStringCorrector()? Why does struct request not start with a capital when other types do? Why are some type aliases in ALL_CAPS? Never use double underscores: these are always reserved. I also recommend not using leading underscores at all. Either use trailing underscores (1 at most), or use some other prefix, like m_ for class members. What is the significance of a leading underscore anyway? I see you have some public member functions also starting with an underscore, so it definitely doesn't mean "private", otherwise you could have made those functions explicitly private. Incorrect error handling I see a lot of information messages, warnings and error messages all being written to std::cout. However, all this should go to std::clog and std::cerr as appropriate. Furthermore, if you encounter an error, you must do something about it. Just printing something like "Already assigned path!\n" is not enough; the program will happily continue running, the user looking at the output thinks "What path? Is this good or bad? I'll just ignore it." Consider throwing an exception; either this will cause the program to abort, or the caller can catch it if it has some way to recover from this error. If you really can't use exceptions, use some other way to panic or to return the error to the caller somehow. This is unnecessarily complex There are several issues with this design that make it unnecessarily complex. Let's start with the levels and owners. Why have these at all? 
Why split the command path into its components, and then have a std::unordered_multimap sorted by components, when you can just have a map ordered by the full request path? I think you can greatly simplify things by writing: class SystemInterface { std::unordered_map<std::string, CallbackSignature> requestsMap; … void addRequest(const std::string& path, CallbackSignature cbf) { … } }; Another issue are the guarders. It might seem like this is convenient, so the callback functions don't have to validate the payload. But you could easily rewrite the guarders so the callback functions can call them, such that the code is just as short as with your solution. Consider: interface.addRequest("logger/level", [this](auto& ctx, auto& pload) { … if (auto plevel = GetNumber(pload[0])) { level = *plevel; } … }); Where you created helper functions to deal with the data encapsulated in BasicReferenceData: std::optional<int> GetNumber(const BasicReferenceData& data) { if (/* checks to see if data actually contains a valid number */) { return std::atoi(std::string(std::get<std::string_view>(data)).c_str()); } else { return std::nullopt; } } With C++23 you will get std::expected<>, so you can also have it return a more specific error code. Your RequestReturn type is a bit similar in that it holds some desired data and a possible error code, but it's hardcoded to return ConcreteData, which still requires the caller to further unpack that, and it doesn't have any of the handy features of std::optional and std::expected, like conversion to bool returning whether it has an expected value or not, as well as monadic operations since C++23. If you cannot use that yet but still want to return something like RequestReturn, consider making that a proper class that emulates std::expected Anyway, with these changes, SystemInterface is greatly simplified: it mostly is a std::unordered_map with some access control. help() no longer needs to flatten anything, as the paths are already stored flattened.
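The flat-map suggestion is easy to prototype. The sketch below (in Python, with hypothetical names) shows the shape of the simplified dispatcher: one dictionary keyed by the full request path, with duplicate registration treated as a hard error rather than a printed warning:

```python
# Hypothetical sketch of the suggested flat dispatcher; not the author's API.
handlers = {}

def add_request(path, callback):
    if path in handlers:
        # Fail loudly instead of printing "Already assigned path!" and continuing.
        raise ValueError(f"already assigned path: {path}")
    handlers[path] = callback

def exec_req(topic, payload):
    callback = handlers.get(topic)
    if callback is None:
        return ("NOT_FOUND", [])       # stand-in for RequestErrors::NOT_FOUND
    return callback(payload)

add_request("logger/level", lambda payload: ("OK", list(payload)))
assert exec_req("logger/level", ["3"]) == ("OK", ["3"])
assert exec_req("no/such/path", []) == ("NOT_FOUND", [])
# help() becomes a plain sorted listing of the keys:
assert sorted(handlers) == ["logger/level"]
```

Lookup is a single hash of the full path instead of a per-component recursion, which also answers the original runtime concern about _execReq.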
{ "domain": "codereview.stackexchange", "id": 45094, "tags": "c++, callback" }
How does a neutron star convert its kinetic energy into electromagnetic radiation?
Question: Converting angular momentum into gamma rays seems amazing to me! How does it do it? I presume it's something to do with magnetic fields... why does it even have a magnetic field? It's made of neutrons, so there's no charge rotating? Answer: First of all, every star has a magnetic field, though it is in general very weak. The magnetic field gets created by convection inside a star. Depending on its mass, a star can consist of different layers where convection, radiation or conduction happens. Stars with masses higher than 0.5 solar mass all have a convective core which generates a magnetic field. If you want to know more about this, you should consult a textbook on stellar structure. Neutron stars are the remains of very massive stars, so they already have a significant magnetic field. With a supernova explosion the star collapses and the field lines get squeezed together, producing even higher magnetic fields. Now a neutron star consists mostly of neutrons (obviously), but there is a thin outer layer where protons, electrons and iron atoms are present. They are the leftovers of the explosion, bound again to the star. The high magnetic field and rotation give rise to an electric current on the surface. This in turn accelerates the protons and electrons along the field lines, giving rise to all kinds of radiation. Note, however, that the acceleration mechanisms of GRBs are poorly understood. This is still active research and there are several possible explanations for it.
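The "field lines get squeezed together" argument can be quantified with a back-of-the-envelope flux-conservation estimate. All numbers below are illustrative assumptions, not figures from the answer:

```python
# Magnetic flux through the star scales as B * R^2, so if flux is (roughly)
# conserved during collapse, the field grows as (R_before / R_after)^2.
# Illustrative, assumed numbers:
B_core = 100.0    # pre-collapse core field, gauss
R_core = 1.0e4    # pre-collapse core radius, km
R_ns = 10.0       # typical neutron star radius, km

B_ns = B_core * (R_core / R_ns) ** 2
print(f"{B_ns:.0e} gauss")  # 1e+08 gauss — a million-fold amplification
assert B_ns == 1.0e8
```

Even this crude estimate shows how a modest seed field becomes enormous after collapse; the same scaling applied to stronger progenitor fields gives the ~10^12-gauss fields quoted for pulsars.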
{ "domain": "physics.stackexchange", "id": 35156, "tags": "astrophysics, neutron-stars" }
Configuration manifolds and constraints
Question: In Classical Mechanics there's this notion of configuration manifold. Although I've heard about it a lot and although I often use the concept, I'm not sure I really understand it well, because I've found no book talking about it, except Spivak's Physics for Mathematicians. So my understanding is the following: the configuration manifold of a system in Classical Mechanics is basically one smooth manifold $M$ whose points are possible states of the system. In that case, for one particle in three dimensions, each state can be considered the point in space where the particle is, and so $M=\mathbb{R}^3$ is the configuration manifold. Now, reading about this in Spivak's book, it seems he only talks about configuration manifolds when talking about constraints. So what is the relationship between configuration manifolds and constraints? Must the configuration manifold already include the constraints in some way? I thought, before reading this, that the configuration manifold would simply be a manifold we choose whose points label states, and that a constraint would be a restriction of the allowed states to a submanifold of the first one. What really is the precise definition of configuration manifold, and how does it relate to constraints? Answer: I think that your description that the points of the configuration manifold are possible states of the system is as close to a precise definition as one will find. So for $n$ particles in three dimensions, the configuration manifold is just $(\mathbb{R}^3)^n$. As for how this relates to constraints, consider the simplest example: two particles attached by a rigid rod of length $L$, in three dimensions. Let the particles have positions $x_i$ and $y_i$, $i=1,2,3$. Then the constraint of a rigid rod is that $$L^2 = \sum_i(x_i-y_i)^2 \tag{1}.$$ The configuration manifold of this system is that subset of $(\mathbb{R}^3)^2$ that satisfies (1), that is, a level set of the squared-distance function. 
It is a general principle (the regular value theorem) that the level set of a regular value of a smooth function of the coordinates is itself a smooth manifold. Hence we can say that imposing a constraint is picking out a submanifold of the configuration manifold of the unconstrained system.
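To hedge the "general principle": it holds for level sets of regular values, and the rod constraint qualifies. A quick check (a sketch, with $c>0$ denoting the constant fixed by the rod length):

```latex
f\colon \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}, \qquad
f(x,y) = \sum_{i=1}^{3} (x_i - y_i)^2, \qquad
\frac{\partial f}{\partial x_i} = 2(x_i - y_i) = -\frac{\partial f}{\partial y_i}.
```

On the level set $f^{-1}(c)$ with $c>0$ we have $x \neq y$, so the gradient never vanishes there; hence $c$ is a regular value and $f^{-1}(c)$ is a smooth $5$-dimensional submanifold of $\mathbb{R}^6$ (in fact diffeomorphic to $\mathbb{R}^3 \times S^2$: the position of one particle together with the direction to the other).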
{ "domain": "physics.stackexchange", "id": 15554, "tags": "classical-mechanics, mathematical-physics, differential-geometry, terminology, constrained-dynamics" }
Creating Two-Dimensional String Array for Plane Seats
Question: I'm creating a ticket reservation system in Java; I want to use a two-dimensional String array for managing the seats. I have enums that hold the plane model and the amount of seats available, so the array pulls the rows and columns from there. I just want to see how efficient my code is and how I can improve it. public String[][] createSeatArray() { String[] alphabet = {"a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","aa","bb","cc","dd","ee","ff","gg","hh","ii","jj","kk","ll","mm","nn","oo","pp"}; String[][] seatArray = new String[this.getNumOfRows()][this.getNumOfColumns()]; int alphabetPos = 0; for(int i = 0; i < this.getNumOfRows(); i++) { for (int j = 1; j <= this.getNumOfColumns(); j++){ String columnToString = String.valueOf(j); seatArray[i][j-1] = alphabet[alphabetPos] + columnToString; if (this.getNumOfColumns() == 4) { if (j == 2) { System.out.print(seatArray[i][j-1] + "   "); } else { System.out.print(seatArray[i][j-1] + " "); } } else { if (j == 3) { System.out.print(seatArray[i][j-1] + "   "); } else { System.out.print(seatArray[i][j-1] + " "); } } } System.out.println("\n"); alphabetPos++; } return seatArray; } Thank you for any feedback! Also keep in mind that if the indentation is off, it may be due to bringing it over to Stack Overflow; in IntelliJ it's perfect. Answer: Here are a few optimization suggestions: You can get rid of alphabetPos because it is always equal to i. In fact, while you are at it, your code will be more readable if you rename i to row and j to col or column. You can also get rid of columnToString and just rely on Java's string concatenation overloads. seatArray[i][j-1] = alphabet[alphabetPos] + j; Note that you use j multiple times as an index, subtracting one, and only once as a string. It would make your code more readable if you used a zero-based index, and only added 1 for the string composition. You may want to use string formatting.
for (int col = 0; col < this.getNumOfColumns(); col++) { seatArray[row][col] = String.format("%s%d", alphabet[row], col + 1); if (this.getNumOfColumns() == 4) { if (col == 1) { System.out.print(seatArray[row][col] + "   "); } else { System.out.print(seatArray[row][col] + " "); } } else { if (col == 2) { System.out.print(seatArray[row][col] + "   "); } else { System.out.print(seatArray[row][col] + " "); } } } You can simplify your array printing by splitting it into two parts: printing the seat number and printing the spacer. The first part is always the same, so no need to repeat it 4 times. The second part changes only when col and row have certain values, so one condition is enough. System.out.print(seatArray[row][col] + " "); /* this part is always printed the same */ /* just add a few more spaces if this is an aisle seat */ if ((getNumOfColumns() == 4 && col == 1) || (getNumOfColumns() != 4 && col == 2)) { System.out.print("  "); } Finally, to save memory, you can get rid of the alphabet array altogether, since the seat letters are sequential and rows 26 and above just have the same letter twice: String rowStr = String.format("%c", 'a' + (row % 26)); if (row > 25) rowStr += rowStr; seatArray[row][col] = String.format("%s%d", rowStr, col + 1);
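The letter-doubling scheme in that last snippet is easy to get wrong by one, so here is the same labelling logic as a standalone sketch (Python chosen purely for brevity; the 26-row wraparound mirrors the Java version above):

```python
def seat_labels(num_rows, num_cols):
    """Row-major seat labels: rows 0-25 get 'a'..'z', rows 26+ repeat the letter."""
    seats = []
    for row in range(num_rows):
        letter = chr(ord("a") + row % 26)
        if row > 25:
            letter *= 2          # row 26 -> "aa", row 27 -> "bb", ...
        seats.append([f"{letter}{col + 1}" for col in range(num_cols)])
    return seats

grid = seat_labels(27, 4)
```

This reproduces the sequence of the hand-written alphabet array ("z" for row 25, "aa" for row 26) without storing it.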
{ "domain": "codereview.stackexchange", "id": 39771, "tags": "java, strings, array" }
Help designing Butterworth filter
Question: I am trying to bandpass filter a signal using a Butterworth filter, but I am getting the following result (click to enlarge): The original signal is the blue one and the filtered one is the green. Why is the first part of the filtered signal (from 0 to 50) changing so much from the original? Can you help me understand why this is happening? Here are some details: Signal length: 256 Sampling frequency: 22 Hz Cutoff frequencies: 0.7 Hz and 3.0 Hz Filter order: 6 Python code: from scipy.signal import butter, lfilter nyq = 0.5 * fs low = lowcut / nyq high = highcut / nyq b, a = butter(order, [low, high], btype='band') y = lfilter(b, a, data) Answer: The first part of the response may be a result of the transient response of the Butterworth filter. Also note that the signal is phase shifted and the basic shape is altered. This looks like the result of phase distortion. You may want to try again using a Bessel filter, which has better transient and phase characteristics. If you are using DSP, you can also try a true linear-phase FIR approach. This will yield the best transient and phase characteristics.
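To illustrate the linear-phase FIR suggestion: a symmetric impulse response delays every frequency by the same (N-1)/2 samples, so the waveform shape survives. A minimal windowed-sinc bandpass sketch in plain Python (in practice you would reach for scipy.signal.firwin, or filtfilt for zero-phase IIR filtering; the 0.7-3.0 Hz band at fs = 22 Hz is taken from the question):

```python
import math

def bandpass_fir(num_taps, lowcut, highcut, fs):
    """Windowed-sinc bandpass; symmetric taps => exactly linear phase."""
    fl, fh = lowcut / fs, highcut / fs            # normalised frequencies
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        t = n - mid
        # difference of two low-pass sinc kernels = band-pass kernel
        if t == 0:
            h = 2.0 * (fh - fl)
        else:
            h = (math.sin(2 * math.pi * fh * t)
                 - math.sin(2 * math.pi * fl * t)) / (math.pi * t)
        # Hamming window tames the truncation ripple
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h)
    return taps

def gain_at(taps, f, fs):
    """Magnitude response |H(f)| by direct evaluation of the DTFT."""
    w = 2 * math.pi * f / fs
    re = sum(h * math.cos(w * n) for n, h in enumerate(taps))
    im = sum(h * math.sin(w * n) for n, h in enumerate(taps))
    return math.hypot(re, im)

taps = bandpass_fir(101, 0.7, 3.0, 22.0)
```

Convolving with these taps still delays the output by 50 samples, but the delay is identical at every frequency, which is exactly what the Butterworth's nonlinear phase fails to provide.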
{ "domain": "dsp.stackexchange", "id": 858, "tags": "filters" }
Recording all instances of a class - Python
Question: In a programme that I am writing, I need to loop through all the instances of a class. The way I got around this was by appending every instance to a list, like below: allPeople = [] class Person: def __init__(self, name, age, height): self.name = name self.age = age self.height = height allPeople.append(self) Jeff = Person("Jeff", 20, "1.6") Bob = Person("Bob", 39, "1.4") Helen = Person("Helen", 19, "1.3") for person in allPeople: print(person.name + " is " + str(person.age)) Is this the best way of doing things, or is there a built-in Python function for this? Answer: Your approach is OK, but I have just one remark (this is based on personal preference): I'd iterate over the class itself rather than a list defined outside it. You'll just have to define a metaclass which supports iteration. class IterPerson(type): def __iter__(cls): return iter(cls._allPeople) class Person(metaclass=IterPerson): _allPeople = [] def __init__(self, name, age, height): self._allPeople.append(self) self.name = name self.age = age self.height = height if __name__ == '__main__': Jeff = Person("Jeff", 20, "1.6") Bob = Person("Bob", 39, "1.4") Helen = Person("Helen", 19, "1.3") for person in Person: print(person.name + " is " + str(person.age)) Magic methods are always looked up on the class, so adding __iter__ to the class won't make it iterable. However, the class is an instance of its metaclass, so that is the correct place to define the __iter__ method. Please note that I also put a _ in front of your list. This is how we tell Python the list we'll create is private. I have also used if __name__ == '__main__':, which is good practice (it is used to execute some code only if the file was run directly, and not imported). Finally, keep this kind of thing as simple as possible. Don't waste a lot of time and energy on something complex. This is a simple problem; keep the code as simple as possible to get the job done.
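One caveat worth noting as an aside (not from the original answer): a class-level list keeps a strong reference to every instance, so none of them can ever be garbage-collected. A weakref-based registry avoids that while still supporting iteration; a standard-library sketch:

```python
import gc
import weakref

class Person:
    _instances = weakref.WeakSet()    # weak references only; instances can still die

    def __init__(self, name, age, height):
        self.name = name
        self.age = age
        self.height = height
        Person._instances.add(self)

    @classmethod
    def all(cls):
        # snapshot list, so iteration is safe while instances come and go
        return sorted(cls._instances, key=lambda p: p.name)

jeff = Person("Jeff", 20, "1.6")
bob = Person("Bob", 39, "1.4")

def _make_temporary():
    Person("Temp", 0, "0")    # no strong reference survives this call

_make_temporary()
gc.collect()                  # the temporary is collectable, so it drops out of the registry

for person in Person.all():
    print(person.name, "is", person.age)
```

With the plain list (or the metaclass version above), "Temp" would linger in the registry forever.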
{ "domain": "codereview.stackexchange", "id": 35571, "tags": "python, object-oriented" }
Can a relativistic rocket ever dilate time enough to be practical?
Question: It seems the best shot for humans to visit other stars is a relativistic rocket, such as the hypothetical 1g rocket, which would dilate time in order for the mission to be completed in the crew's lifespan. Unfortunately, even travelling to the nearest star (Proxima Centauri, 4.26 LY), according to 1g rocket calculator tools online, would take 3.5 years ship time. I have also heard that 1g rockets can reach 0.99999999996 C, but only after 12 years ship time, and would take another 12 years ship time to decelerate. My question is, can a relativistic rocket ever dilate time enough to be practical? Like, say, travelling 500 lightyears in only 12 days ship time? I have heard that rockets can't accelerate much faster than 1g without killing the crew, and that an equal amount of time is needed to decelerate at 1g. But couldn't the rocket accelerate at 1g as close as possible to C, and then decelerate at freefall speed, keeping the crew closer to C as long as possible, and increasing the time dilation experienced? I'm sure someone who knows better can tell me why I'm wrong. I'm also assuming this is a laser sail, and have read of hypothetical laser sail methods to accelerate small payloads to 0.26 C in ten minutes. I highly suspect that would kill the crew. Intuitively, I feel using beams to accelerate the ship would be more subtle, and not cause the ship to experience high G forces, but once again, I'm sure somebody who knows more about it can tell me why I'm wrong. So, that's the question: is there any way to accelerate a human crew, without killing them, close enough to C and fast enough that the time dilation shortens interstellar trips to weeks or months ship time, thus making it a more practical mode of travel? What's the absolute most we could shorten, say, a 500 lightyear trip for them? Answer: can a relativistic rocket ever dilate time enough to be practical? Like say, travelling 500 lightyears in only 12 days ship time?
Difficult at reasonable accelerations. 1G can be exceeded, but not by much. I doubt you'd want to sit in a 3G chair for days at a time. couldn't the rocket accelerate at 1g as close as possible to C, and then decelerate at freefall speed, keeping the crew closer to C as long as possible There is no "as close as possible to C". In the magic world where you can accelerate forever, you can always go faster. The equation assumes that you'll accelerate as long as you possibly can, which is halfway. You are of course free to stop accelerating and coast for part of the journey, but that will take longer than a trip where acceleration is done constantly. If you want to limit the outbound acceleration to 1G, you have to limit the arriving deceleration to 1G as well. is there any way to accelerate a human crew, without killing them, close enough to C In terms of technology that we can imagine, no. Even the laser accelerators are completely theoretical and can't decelerate at their destination.
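The ship-time numbers quoted in the question can be checked directly. For a flip-at-midpoint trip at constant proper acceleration $a$ over distance $D$, the proper time is $\tau = 2(c/a)\,\mathrm{arccosh}(1 + aD/2c^2)$. A quick sketch in units of light-years and years ($c = 1$; 1 g $\approx$ 1.03 ly/yr²):

```python
import math

G_LY_PER_YR2 = 1.032   # 1 g expressed in light-years per year^2 (c = 1)

def ship_time_years(distance_ly, accel=G_LY_PER_YR2):
    """Proper time for accelerate-to-midpoint, decelerate-to-rest travel."""
    return 2.0 / accel * math.acosh(1.0 + accel * distance_ly / 2.0)

for d in (4.26, 500.0):
    print(f"{d:7.2f} ly  ->  {ship_time_years(d):5.2f} yr ship time")
```

This reproduces the 3.5-year figure for Proxima Centauri, and gives roughly 12 years (not 12 days) of ship time for 500 ly: proper time grows only logarithmically with distance once the ship is ultra-relativistic, which is why no acceleration profile a human survives gets a 500 ly trip down to weeks.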
{ "domain": "physics.stackexchange", "id": 82334, "tags": "special-relativity, reference-frames, acceleration, time-dilation" }
Divergence theorem in complex coordinates
Question: This question is related to Stokes' theorem in complex coordinates (CFT), but I still don't understand how to prove the divergence theorem in complex coordinates, Eq. (2.1.9) in Polchinski's string theory: $$\int_R d^2 z (\partial_z v^z + \partial_{\bar{z}} v^{\bar{z}})= i \oint_{\partial R} (v^z d \bar{z} - v^{\bar{z}} dz ) \tag{1}$$ I may try $$ \int_R dx dy ( \partial_x F_y - \partial_y F_x) = \oint_{\partial R} (F_x dx + F_y dy) \tag{2}$$ but what kind of substitution should I use to get Eq. (1)? Answer: Let $\sigma^1$ and $\sigma^2$ be real coordinates on $\mathbb R^2$. Using the results on page 33, we find that \begin{align} \partial_zv^z &= \frac{1}{2}(\partial_1 -i\partial_2)(v^1 + iv^2) = \frac{1}{2}(\partial_1v^1 + i\partial_1v^2 - i\partial_2v^1 + \partial_2v^2) \\ \partial_{\bar z}v^{\bar z} &= \frac{1}{2}(\partial_1 +i\partial_2)(v^1 - iv^2) = \frac{1}{2}(\partial_1v^1 - i\partial_1v^2 + i\partial_2v^1 + \partial_2v^2) \end{align} and therefore, using $d^2z = dz\,d\bar z = 2 d\sigma^1d\sigma^2$, \begin{align} \int_R d^2z\,(\partial_zv^z + \partial_{\bar z}v^{\bar z}) &= 2\int_R d\sigma^1\,d\sigma^2\,(\partial_1v^1 + \partial_2v^2) \end{align} Similarly, for the right-hand side we have \begin{align} v^zd\bar z &= (v^1 + iv^2)(d\sigma^1 - id\sigma^2) = v^1d\sigma^1 - iv^1d\sigma^2 + iv^2d\sigma^1 +v^2d\sigma^2 \\ v^{\bar z}dz &= (v^1 - iv^2)(d\sigma^1 + id\sigma^2) = v^1d\sigma^1 + iv^1d\sigma^2 - iv^2d\sigma^1 +v^2d\sigma^2 \end{align} so that \begin{align} i\oint_{\partial R} v^z d\bar z - v^{\bar z} d z &= i\oint_{\partial R} 2i(v^2d\sigma^1 - v^1 d\sigma^2) = 2\oint_{\partial R} (v^1 d\sigma^2 -v^2d\sigma^1) \end{align} The identity in Polchinski is obtained by setting the left- and right-hand sides equal to one another, which, in this case, gives \begin{align} \int_R d\sigma^1\,d\sigma^2\,(\partial_1v^1 + \partial_2v^2) &=\oint_{\partial R} (v^1 d\sigma^2 -v^2d\sigma^1) \end{align} which is precisely Stokes' theorem (in its divergence form) for a region in $\mathbb R^2$.
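As a sanity check, the final real-coordinate identity can be verified numerically on the unit square with a simple test field, here v^1 = x^2 and v^2 = xy, so the divergence is 3x (plain-Python midpoint quadrature; any smooth field would do):

```python
def v1(x, y):
    return x * x          # test field, component 1

def v2(x, y):
    return x * y          # test field, component 2

N = 400                   # midpoint-rule resolution
h = 1.0 / N

# area integral of the divergence d1(v1) + d2(v2) = 2x + x = 3x
area = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for _ in range(N):
        area += 3.0 * x * h * h

# boundary integral of (v1 ds2 - v2 ds1), counter-clockwise around the square
boundary = 0.0
for i in range(N):
    t = (i + 0.5) * h
    boundary += v1(1.0, t) * h       # right edge (bottom to top): ds2 = +h
    boundary += v2(t, 1.0) * h       # top edge (right to left): -v2 * ds1 with ds1 = -h
    boundary -= v1(0.0, t) * h       # left edge (top to bottom): ds2 = -h
    boundary -= v2(t, 0.0) * h       # bottom edge (left to right): -v2 * ds1 with ds1 = +h

print(area, boundary)                # both should be close to 1.5
```

Both integrals come out to 3/2, as the analytic evaluation predicts.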
{ "domain": "physics.stackexchange", "id": 8649, "tags": "homework-and-exercises, string-theory, conformal-field-theory" }
Which calcium salt is good enough to make calcium benzoate?
Question: This question is inspired by the following questions: Reaction of citric acid and calcium chloride Precipitation of citric acid by CaCl2 and Ca(OH)2? The answer to the 1st question says that calcium citrate is highly unlikely to form from calcium chloride and citric acid, because the acid cannot be present in deprotonated form in an acidic medium, so the salt will not precipitate. The 2nd question also addresses a similar kind of problem. But what about calcium benzoate? Which calcium salt can successfully precipitate calcium benzoate from benzoic acid? Would calcium hydroxide work for that? I am pretty sure calcium chloride won't work, because it is a good dehydrating agent and would rip off water molecules to form something else. Calcium benzoate is also somewhat more soluble in water than calcium citrate. So, what would be the necessary measure to successfully precipitate calcium benzoate? Answer: Thanks to @Shoubhik, who pointed out that calcium hydroxide is good enough to make calcium benzoate. To support this claim, I found a patent which indeed describes that calcium hydroxide was used to make calcium benzoate (see comments). Here is the summary of the patent: The invention relates to a processing method for synthesizing calcium benzoate at low temperature. The method comprises the following steps: heating and stirring 65-85kg of industrial-grade benzoic acid, 28-32kg of calcium hydroxide, 1-3kg of catalysts comprising triethanol distearate, ethylene distearamide and ethylene bislauramide and 300-500kg of clear water in a reaction kettle at 333-363 K to react for 2-4 hours; and then adding 5-15kg of separant, stirring for half an hour, filtering and drying.
The method provided by the invention can be used for directly synthesizing calcium benzoate, thereby lowering the equipment investment, reducing the working procedures, lowering the labor intensity of workers, increasing the output, enhancing the quality, lowering the cost and preventing the environmental pollution.
{ "domain": "chemistry.stackexchange", "id": 10141, "tags": "organic-chemistry, experimental-chemistry, synthesis, precipitation" }
Why does a flat clover-like shape fall slower when it is rotating?
Question: The plastic flat clover-like shape pictured below falls noticeably slower when it is rotating fast. I wonder why. Its three edges are flat, so I don't think that the rotation makes it act like a propeller. An ideal explanation would include an analogy that a young child can understand (e.g. rain bouncing on a car's windshield). Here is a photo of the object. Answer: Air has momentum. Put another way, it takes air some time to get out of the way. When the blades are rotating fast enough, they approximate a disk for the purpose of air resistance. You can prove this is not a propeller effect by pitching the blades as in a propeller. When dropped, the propeller will start rotating so as to "screw" itself through the air downwards. However, the object will drop noticeably slower once it is spinning. If it were only a propeller issue, it should drop faster once the propeller is rotating in the direction to make the object go down. The reason it doesn't is that the propeller blades together approximating a disk adds much more air resistance than the bare blades by themselves. There have been aircraft built on this principle. Look up something called an auto-copter or auto-gyro. These use freely-rotating blades to form a disk-shaped wing. They look a lot like helicopters, but in a helicopter the rotor is powered and the propeller effect is used to create lift (at least when hovering or in vertical flight). In an auto-copter, something else, usually a traditional pushing or pulling propeller, is powered, but the vertical-axis rotor is free-spinning. This effect is also exploited to land a helicopter when the engine dies. When the engine is powering the rotor, the blades are pitched so that the air is pushed down. You can think of the blades trying to "screw" upwards through the air. When the rotor becomes free-turning, they have to be pitched the other way to "screw" downwards through the air so that the downward motion of the craft causes the rotor to keep spinning.
There is an optimum pitch angle for the blades. Too little, and there won't be enough torque to keep the rotor spinning. Too much, and the "gearing" is too low, such that there isn't enough spin per unit of drop. Close to the ground, the blades are pitched upwards again. The momentum of the spinning blades is harvested to provide an upward propeller push, or at least to hold a nearly flat pitch. The momentum is spent quickly, so it takes a lot of skill to do this at just the right time.
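The "blades approximate a disk" point can be made semi-quantitative with the usual drag-equation estimate for terminal velocity, v_t = sqrt(2mg / (rho * C_d * A)): enlarging the effective area by a factor k cuts the steady fall speed by sqrt(k). The numbers below are purely illustrative assumptions, not measurements of the actual toy:

```python
import math

def terminal_velocity(mass_kg, area_m2, c_d=1.1, rho=1.225, g=9.81):
    """Steady fall speed where quadratic drag balances weight."""
    return math.sqrt(2.0 * mass_kg * g / (rho * c_d * area_m2))

mass = 0.010                 # 10 g toy (assumed)
a_blades = 0.004             # frontal area of the bare blades, m^2 (assumed)
a_disk = 3 * a_blades        # spinning blades acting like a filled disk (assumed factor)

v_blades = terminal_velocity(mass, a_blades)
v_disk = terminal_velocity(mass, a_disk)
```

Whatever the exact numbers, the ratio v_blades / v_disk = sqrt(a_disk / a_blades), so any appreciable growth in effective area produces a visibly slower fall.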
{ "domain": "physics.stackexchange", "id": 11609, "tags": "aerodynamics, free-fall" }
Perform k-means clustering over multiple columns and get the cluster center values?
Question: I read here how to show the number of clusters over $n$ columns. I would like to know how to get, in a table, the values of the cluster centers. Could someone help me with this? Answer: sklearn.cluster.KMeans has an attribute cluster_centers_, which stores the array of cluster centers. You can add them to the dataframe as new columns this way: from sklearn.cluster import KMeans import pandas as pd clusters = KMeans(n_clusters=n) predict = clusters.fit_predict(data) centers = pd.DataFrame(clusters.cluster_centers_[predict, :]) centers.index = data.index data = pd.concat([data, centers], axis=1)
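For intuition about what cluster_centers_[predict, :] produces without pulling in sklearn, here is a tiny 1-D Lloyd's-algorithm sketch (illustrative only): it computes the centers and then, like the indexing trick above, maps each point to the center of its assigned cluster.

```python
def kmeans_1d(points, k, iters=50):
    """Minimal Lloyd's algorithm for 1-D data; returns (centers, labels)."""
    # crude but deterministic init: spread starting centers across sorted data
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        labels = [min(range(k), key=lambda c: abs(p - centers[c])) for p in points]
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centers, labels = kmeans_1d(data, 2)
# the per-row "center" column from the pandas snippet, in miniature:
center_of_point = [centers[lab] for lab in labels]
```

The list center_of_point is exactly the extra column the pandas concat adds: one center value per row, repeated for every member of the same cluster.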
{ "domain": "datascience.stackexchange", "id": 11126, "tags": "python, clustering, k-means" }
Filtering a list by comparing enums against a user choice
Question: I have a list of objects where one of the properties on the object class is an enum. The program below simply loops through each item in the list, checks an int value, then checks the enum value for that item, then adds the item to another list when the condition is met. I am using if-else statements; I attempted to use a switch statement, but it doesn't seem possible to place 2 constants on a single case. I'd like to know if there's a better / more elegant way to do this, or whether what I've done is sufficient in this scenario. Program Code /* User inputs number from 1 - 5 */ int SelectedOppType = Int32.Parse(Console.ReadLine()); foreach (Opportunities Opp in OpportunitiesList) { if (SelectedOppType == 1 && Opp.OpportunityStatusID == OpportunityStatus.Active) /* Opp.OpportunityStatusID == OpportunityStatus.Draft */ { FilterdOppsList.Add(Opp); } else if (SelectedOppType == 2 && Opp.OpportunityStatusID == OpportunityStatus.Draft) { FilterdOppsList.Add(Opp); } else if (SelectedOppType == 3 && Opp.OpportunityStatusID == OpportunityStatus.Closed) { FilterdOppsList.Add(Opp); } else if (SelectedOppType == 4 && (Opp.OpportunityStatusID == OpportunityStatus.Active || Opp.OpportunityStatusID == OpportunityStatus.Draft)) { FilterdOppsList.Add(Opp); } else if (SelectedOppType == 5) { FilterdOppsList.Add(Opp); } } foreach (var OppItem in FilterdOppsList) { Console.WriteLine(OppItem.OppText); } Console.ReadLine(); Answer: Shortening ifs You add everything to the same list, so you can concatenate all conditions and use only one if with a helper variable: foreach (Opportunities Opp in OpportunitiesList) { var canAdd = (SelectedOppType == 1 && Opp.OpportunityStatusID == OpportunityStatus.Active) || (SelectedOppType == 2 && Opp.OpportunityStatusID == OpportunityStatus.Draft) || (SelectedOppType == 3 && Opp.OpportunityStatusID == OpportunityStatus.Closed) || (SelectedOppType == 4 && (Opp.OpportunityStatusID == OpportunityStatus.Active || Opp.OpportunityStatusID == OpportunityStatus.Draft)) || (SelectedOppType == 5); if (canAdd) { FilterdOppsList.Add(Opp); } } Other improvements Magic numbers You should create an enum for the SelectedOppType and cast the int given by the user to it. Example: enum OppType { OppType1 = 1, OppType2 = 2, OppType3 = 3, OppType4 = 4, OppType5 = 5, } var selectedOppType = (OppType)Int32.Parse(Console.ReadLine()); I know the names OppTypeX are ugly, but I don't know what the numbers mean; this is why we dislike them. They have no meaning. Next, the logic can be extracted as an easier-to-maintain dictionary. Now that all values are enums, we can easily build such a dictionary, which I find by far better than a switch nested in a loop. Besides, you can reuse the dictionary to display the options in other places (if you make it a field or a property of some class). var allowedOppTypeStatuses = new Dictionary<OppType, IEnumerable<OpportunityStatus>> { [OppType.OppType1] = new[] { OpportunityStatus.Active }, [OppType.OppType2] = new[] { OpportunityStatus.Draft }, [OppType.OppType3] = new[] { OpportunityStatus.Closed }, [OppType.OppType4] = new[] { OpportunityStatus.Active, OpportunityStatus.Draft }, [OppType.OppType5] = Enumerable.Empty<OpportunityStatus>(), }; foreach (Opportunities opp in OpportunitiesList) { var canAdd = allowedOppTypeStatuses[selectedOppType].Any(x => x == opp.OpportunityStatusID) || !allowedOppTypeStatuses[selectedOppType].Any(); if (canAdd) { FilterdOppsList.Add(opp); } } Minor issues We use camelCase for local variables, so SelectedOppType should be selectedOppType. Opportunities - this doesn't look like a good name for a type; there seems to be more wrong with your code (unless it's an enum with a Flags attribute).
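As an aside, the table-driven pattern is language-agnostic; here is the same idea rendered in Python, with the "empty set means accept everything" convention (names invented for the sketch):

```python
# Status names and choice numbers mirror the C# sample above; the dict is the
# single source of truth, and an empty set means "no filtering".
ALLOWED_STATUSES = {
    1: {"Active"},
    2: {"Draft"},
    3: {"Closed"},
    4: {"Active", "Draft"},
    5: set(),
}

def filter_opps(opps, selected_type):
    allowed = ALLOWED_STATUSES[selected_type]
    return [o for o in opps if not allowed or o["status"] in allowed]

opps = [{"status": "Active"}, {"status": "Draft"}, {"status": "Closed"}]
```

Adding a new choice is a one-line change to the table rather than a new branch.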
{ "domain": "codereview.stackexchange", "id": 22803, "tags": "c#, enum" }
Dispersion of Lithium Ions in Nature (Ratio Similar to Isotopes?)
Question: I'm working on a project that identifies average characteristics of different elements. I'm stuck on lithium: I can't find a way to express the ratio of lithium I to lithium II as it occurs in nature. I tried looking around on the internet, but I haven't found any mention of ratios of ions, only ratios of isotopes. Does anyone know of anything that might help? Answer: I believe I resolved my problem. The attribute I was looking at was spectra, combining the lines of both ions. The problem was that I was comparing data given by two authors who used different units, so what I did, which I think solves my problem, was divide the sums of the given intensities by the emission probabilities, from which I obtained the ratio of Li I to Li II as roughly 33 to 6, or 18% lithium II. That is more than I or apparently anyone else thought, so I may have still made a mistake. Either way, thanks to everyone who tried and helped.
{ "domain": "chemistry.stackexchange", "id": 3674, "tags": "ions, isotope" }
Costmap not inflating all obstacles
Question: Hi all, Sometimes my local costmap does not inflate all obstacles. It marks everything that it should as LETHAL, but then doesn't mark some of the cells around the lethal cells. Picture attached for reference. Might this be a bug in the inflation_layer? I've used dynamic_reconfigure to play with my costmap settings and nothing has helped it. ROS Indigo standard navigation stack Originally posted by Robocop87 on ROS Answers with karma: 255 on 2016-05-02 Post score: 0 Original comments Comment by robustify on 2016-05-02: Can you post the YAML files you used to configure the costmaps, as well as the relevant launch files? Answer: Hi all, The problem was the ordering of the plugins. If the inflation_layer plugin gets loaded before the obstacle_layer plugin, you get the above behavior. Loading the obstacle layer first solves this problem. Originally posted by Robocop87 with karma: 255 on 2016-05-03 This answer was ACCEPTED on the original site Post score: 2
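For reference, this ordering fix lives in the costmap YAML's plugins list: layers are processed top to bottom, so the obstacle layer must mark and clear cells before the inflation layer runs. A sketch of a working order (the layer names are the conventional ones; yours may differ):

```yaml
plugins:
  - {name: static_layer,    type: "costmap_2d::StaticLayer"}
  - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}   # marks LETHAL cells first
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}  # then inflates around them
```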
{ "domain": "robotics.stackexchange", "id": 24541, "tags": "navigation, move-base, costmap-2d" }
How can I refactor to avoid repetition finding the index of strings in a list?
Question: I have a list of strings where I want to find the index of a first- and second-choice video compressor. Logic: If the first choice is available, return its index. If not, and the second choice is available, return its index. If not, return zero. Here is what I have now: List<string> availableVideoCompressors = GetAvailableVideoCompressors(); int compressorIndexToUse = 0; if (availableVideoCompressors.Any(c => c.StartsWith("ffdshow", StringComparison.OrdinalIgnoreCase))) compressorIndexToUse = availableVideoCompressors.Select((c, i) => new { c, i }).First(c => c.c.Equals("ffdshow", StringComparison.OrdinalIgnoreCase)).i; else if (availableVideoCompressors.Any(c => c.StartsWith("Microsoft Video 1", StringComparison.OrdinalIgnoreCase))) compressorIndexToUse = availableVideoCompressors.Select((c, i) => new {c, i}).First(c => c.c.StartsWith("Microsoft Video 1", StringComparison.OrdinalIgnoreCase)).i; return compressorIndexToUse; The problem is that I am iterating the list several times. Once to determine if the desired compressor is available, then again to obtain its index, for each candidate compressor. I am not sure how to refactor to avoid this repetition. What can I do instead? Answer: If you already have the list of compressors, I would do the following: var index = availableVideoCompressors.FindIndex(c => c.StartsWith("ffdshow", StringComparison.OrdinalIgnoreCase)); if (index == -1) index = availableVideoCompressors.FindIndex(c => c.StartsWith("Microsoft Video 1", StringComparison.OrdinalIgnoreCase)); if (index == -1) return 0; return index;
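The "first preference that matches, else a default index" shape also generalizes to any number of candidates; as a side illustration (not part of the original answer), the same logic as a small Python helper, using the codec names from the question:

```python
def preferred_index(items, preferences, default=0):
    """Index of the first item matching the earliest preference that matches at all."""
    for pref in preferences:
        for i, item in enumerate(items):
            if item.lower().startswith(pref.lower()):
                return i
    return default

codecs = ["Cinepak", "Microsoft Video 1", "ffdshow Video Codec"]
```

Each preference is tried in order against the whole list, so the list is scanned at most once per preference, just like the chained FindIndex calls.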
{ "domain": "codereview.stackexchange", "id": 11091, "tags": "c#, .net, linq" }
Simplified Vector Implementation in C++
Question: I have implemented a class Vector to learn about data structures and algorithms and how to implement them using C++. I have yet to implement some std::vector functionalities e.g. custom allocator template parameter, iterator/const_iterator classes. I'm learning C++ through a C++11 book and trying to learn C++14 and C++17 best practices on the fly. I would appreciate any advice on how to improve or make it more compatible with modern best practices. I'm using g++ compiler with -std=c++17 flag. One specific issue that I was in doubt was about the move assignment operator. I saw some code around the internet using a unified assignment operator for both copy and move assignment operators using swap, but this link made me think that it wasn't the best way to do it. Any thoughts? Vector.h namespace algorithms { template<typename T> class Vector { public: using size_type = std::size_t; using iterator = T*; using const_iterator = const T*; using reference = T&; using const_reference = const T&; // Constructors and Destructor Vector(); Vector(size_type initial_size, const T& value); explicit Vector(size_type initial_size); Vector(std::initializer_list<T> initializer); Vector(const Vector<T>& vector); Vector(Vector<T>&& vector) noexcept; Vector& operator=(const Vector<T>& vector); Vector& operator=(Vector<T>&& vector) noexcept; ~Vector(); // Iterators /// TODO: Replace pointers by an iterator class iterator begin() noexcept; const_iterator begin() const noexcept; const_iterator cbegin() const noexcept; iterator end() noexcept; const_iterator end() const noexcept; const_iterator cend() const noexcept; // Capacity size_type size() const noexcept; bool empty() const noexcept; size_type capacity() const noexcept; void reserve(size_type new_capacity); void resize(size_type new_size, const T& value); void resize(size_type new_size); void shrink_to_fit(); // Modifiers template<typename... Args> reference emplace_back(Args&&... args); template<typename... 
Args> iterator emplace(iterator position, Args&&... args); void push_back(const T& value); void push_back(T&& value); void pop_back(); iterator erase(iterator position); iterator erase(iterator first, iterator last); void clear() noexcept; void swap(Vector<T>& vector) noexcept; // Accessors reference operator[](size_type index); const_reference operator[](size_type index) const; reference at(size_type index); const_reference at(size_type index) const; reference back(); const_reference back() const; private: using Alloc = std::allocator<T>; Alloc allocator; T* dynamic_array; T* end_position; // points to one past the last constructed element in the array T* capacity_limit; // points to one past the end of the array void reallocate(size_type new_capacity); void reallocate_if_full(); void deallocate(); void allocate_and_copy(const_iterator begin, const_iterator end); }; // Non-member swap template<typename T> void swap(Vector<T>& left, Vector<T>& right); } Vector.inl namespace algorithms { // Constructors template<typename T> Vector<T>::Vector(): dynamic_array(nullptr), end_position(nullptr), capacity_limit(nullptr) {} template<typename T> Vector<T>::Vector(size_type initial_size, const T& value) { dynamic_array = allocator.allocate(initial_size); end_position = std::uninitialized_fill_n(dynamic_array, initial_size, value); capacity_limit = dynamic_array + initial_size; } template<typename T> Vector<T>::Vector(size_type initial_size): Vector<T>(initial_size, T()) {} template<typename T> Vector<T>::Vector(std::initializer_list<T> initializer) { allocate_and_copy(initializer.begin(), initializer.end()); } template<typename T> Vector<T>::Vector(const Vector<T>& vector) { allocate_and_copy(vector.cbegin(), vector.cend()); } template<typename T> Vector<T>::Vector(Vector<T>&& vector) noexcept: dynamic_array(vector.dynamic_array), end_position(vector.end_position), capacity_limit(vector.capacity_limit) { vector.dynamic_array = nullptr; vector.end_position = nullptr; 
    vector.capacity_limit = nullptr;
}

template<typename T>
Vector<T>& Vector<T>::operator=(const Vector<T>& vector)
{
    Vector<T> temp(vector);
    swap(temp);
    return *this;
}

template<typename T>
Vector<T>& Vector<T>::operator=(Vector<T>&& vector) noexcept
{
    if (this != &vector) // protection against self-assignment
    {
        deallocate();
        dynamic_array = vector.dynamic_array;
        end_position = vector.end_position;
        capacity_limit = vector.capacity_limit;
        vector.dynamic_array = nullptr;
        vector.end_position = nullptr;
        vector.capacity_limit = nullptr;
    }
    return *this;
}

template<typename T>
Vector<T>::~Vector()
{
    deallocate();
}

// Iterators

template<typename T>
typename Vector<T>::iterator Vector<T>::begin() noexcept
{
    return dynamic_array;
}

template<typename T>
typename Vector<T>::const_iterator Vector<T>::begin() const noexcept
{
    return dynamic_array;
}

template<typename T>
typename Vector<T>::const_iterator Vector<T>::cbegin() const noexcept
{
    return dynamic_array;
}

template<typename T>
typename Vector<T>::iterator Vector<T>::end() noexcept
{
    return end_position;
}

template<typename T>
typename Vector<T>::const_iterator Vector<T>::end() const noexcept
{
    return end_position;
}

template<typename T>
typename Vector<T>::const_iterator Vector<T>::cend() const noexcept
{
    return end_position;
}

// Capacity

template<typename T>
typename Vector<T>::size_type Vector<T>::size() const noexcept
{
    return static_cast<size_type>(end_position - dynamic_array);
}

template<typename T>
bool Vector<T>::empty() const noexcept
{
    return size() == 0;
}

template<typename T>
typename Vector<T>::size_type Vector<T>::capacity() const noexcept
{
    return static_cast<size_type>(capacity_limit - dynamic_array);
}

template<typename T>
void Vector<T>::reserve(size_type new_capacity)
{
    if (new_capacity <= capacity())
    {
        return;
    }
    reallocate(new_capacity);
}

template<typename T>
void Vector<T>::resize(size_type new_size, const T& value)
{
    if (new_size > capacity())
    {
        reallocate(2 * new_size);
        end_position = std::uninitialized_fill_n(end_position, new_size - size(), value);
    }
    else if (new_size > size())
    {
        end_position = std::uninitialized_fill_n(end_position, new_size - size(), value);
    }
    else if (new_size < size())
    {
        for (size_type i = 0; i < size() - new_size; ++i)
        {
            --end_position;
            std::allocator_traits<Alloc>::destroy(allocator, end_position);
        }
    }
}

template<typename T>
void Vector<T>::resize(size_type new_size)
{
    resize(new_size, T());
}

template<typename T>
void Vector<T>::shrink_to_fit()
{
    reallocate(size());
}

// Modifiers

template<typename T>
template<typename... Args>
typename Vector<T>::reference Vector<T>::emplace_back(Args&&... args)
{
    reallocate_if_full();
    std::allocator_traits<Alloc>::construct(allocator, end_position, std::forward<Args>(args)...);
    ++end_position;
}

template<typename T>
template<typename... Args>
typename Vector<T>::iterator Vector<T>::emplace(iterator position, Args&&... args)
{
    const size_type distance = std::distance(begin(), position);
    if (position == end_position)
    {
        emplace_back(std::forward<Args>(args)...);
    }
    else
    {
        reallocate_if_full();
        std::move_backward(begin() + distance, end_position, end_position + 1);
        std::allocator_traits<Alloc>::construct(allocator, begin() + distance, std::forward<Args>(args)...);
        ++end_position;
    }
    return begin() + distance;
}

template<typename T>
void Vector<T>::push_back(const T& value)
{
    emplace_back(value);
}

template<typename T>
void Vector<T>::push_back(T&& value)
{
    emplace_back(std::move(value));
}

template<typename T>
void Vector<T>::pop_back()
{
    --end_position;
    std::allocator_traits<Alloc>::destroy(allocator, end_position);
}

template<typename T>
typename Vector<T>::iterator Vector<T>::erase(iterator position)
{
    std::move(position + 1, end(), position);
    --end_position;
    std::allocator_traits<Alloc>::destroy(allocator, end_position);
    return position;
}

template<typename T>
typename Vector<T>::iterator Vector<T>::erase(iterator first, iterator last)
{
    if (first == last)
    {
        return begin();
    }
    auto new_end_position = std::move(last, end(), first);
    for (auto iterator = new_end_position; iterator != end_position; ++iterator)
    {
        std::allocator_traits<Alloc>::destroy(allocator, iterator);
    }
    end_position = new_end_position;
    return first;
}

template<typename T>
void Vector<T>::clear() noexcept
{
    deallocate();
    dynamic_array = nullptr;
    end_position = nullptr;
    capacity_limit = nullptr;
}

template<typename T>
void Vector<T>::swap(Vector<T>& vector) noexcept
{
    using std::swap;
    swap(this->dynamic_array, vector.dynamic_array);
    swap(this->end_position, vector.end_position);
    swap(this->capacity_limit, vector.capacity_limit);
}

// Accessors

template<typename T>
typename Vector<T>::reference Vector<T>::operator[](size_type index)
{
    return dynamic_array[index];
}

template<typename T>
typename Vector<T>::const_reference Vector<T>::operator[](size_type index) const
{
    return dynamic_array[index];
}

template<typename T>
typename Vector<T>::reference Vector<T>::at(size_type index)
{
    if (index < 0 || index >= size())
    {
        throw std::out_of_range("Invalid index");
    }
    return dynamic_array[index];
}

template<typename T>
typename Vector<T>::const_reference Vector<T>::at(size_type index) const
{
    if (index < 0 || index >= size())
    {
        throw std::out_of_range("Invalid index");
    }
    return dynamic_array[index];
}

template<typename T>
typename Vector<T>::reference Vector<T>::back()
{
    return dynamic_array[size() - 1];
}

template<typename T>
typename Vector<T>::const_reference Vector<T>::back() const
{
    return dynamic_array[size() - 1];
}

// Private

template<typename T>
void Vector<T>::reallocate(size_type new_capacity)
{
    auto new_array = allocator.allocate(new_capacity);
    auto new_end_position = std::uninitialized_copy(std::make_move_iterator(begin()), std::make_move_iterator(end()), new_array);
    deallocate();
    dynamic_array = new_array;
    end_position = new_end_position;
    capacity_limit = dynamic_array + new_capacity;
}

template<typename T>
void Vector<T>::reallocate_if_full()
{
    if (size() == capacity())
    {
        size_type new_capacity = (size() != 0) ? 2 * size() : 1;
        reallocate(new_capacity);
    }
}

template<typename T>
void Vector<T>::deallocate()
{
    if (dynamic_array)
    {
        std::for_each(dynamic_array, end_position, [&allocator = allocator](T& value) {
            std::allocator_traits<Alloc>::destroy(allocator, &value);
        });
        allocator.deallocate(dynamic_array, capacity_limit - dynamic_array);
    }
}

template<typename T>
void Vector<T>::allocate_and_copy(const_iterator begin, const_iterator end)
{
    size_type new_capacity = end - begin;
    dynamic_array = allocator.allocate(new_capacity);
    end_position = std::uninitialized_copy(begin, end, dynamic_array);
    capacity_limit = end_position;
}

// Non-member swap function

template<typename T>
void swap(Vector<T>& left, Vector<T>& right)
{
    left.swap(right);
}

}

Answer: Good effort. Still, there are many points even without going into allocator-support:

- You are missing a very important ctor (and the corresponding deduction guide):

      template <class InputIt, SFINAE_here>
      vector(InputIt first, InputIt last);

  Not only is it very useful in its own right, but copy-ctor and initializer_list-ctor can be trivially implemented in terms of it.
- You are missing .assign(). The version using an iterator-range would be the preferred building-block for construction from an iterator-range which has to be counted for getting the size.
- You are missing assignment from initializer_list, .insert(), .data(), and reverse-iterator-support.
- Members should accept const_iterators as inputs and return iterators.
- You can use the injected class-name (Vector) instead of specifying the template-parameters (Vector<T>). As a bonus, that is future-proof in case you later decide to add the allocator-support.
- std::allocator<T> is a trivial empty class. As such, any space it uses is wasted. Either use empty base optimization or just create it on-demand.
- You are missing comparison operators.
- Using in-class-initializers allows you to simplify your ctors.
- The default ctor can then even be made trivial by = default;-ing it in-class.
- Vector<T>::Vector(size_type initial_size, const T& value) is unsafe. If an exception gets thrown when allocating, all pointer-members are still indeterminate on entrance to the dtor. If one gets thrown later, all but .dynamic_array will be indeterminate, with equally bad results.
- Vector<T>::Vector(size_type initial_size) creates an ephemeral T and then copy-constructs all elements using the previous ctor. While that works for many types, for some it is silently wrong, inefficient, or won't even compile.
- Don't pessimize the common case by checking for self-assignment. Simply swap everything.
- void Vector<T>::resize(size_type new_size, const T& value) really should go for just enough if it has to reallocate.
- The ephemeral-T point above also applies to void Vector<T>::resize(size_type new_size).
- .insert(), .push_back(), and .resize() from a const& must work right even if passed an element of the container!
- .erase(Iter, Iter) should return the passed iterator if the range is empty, not anything else.
- std::uninitialized_move() was introduced in C++17, so there is no need for std::uninitialized_copy() + move-iterators.
- There is a good reason to avoid doubling capacity on reallocation: if you stay below that, re-use of returned memory becomes possible.
- Non-member swap() should also be noexcept.
{ "domain": "codereview.stackexchange", "id": 36362, "tags": "c++, beginner, reinventing-the-wheel, c++17" }
Manacher Algorithm in Haskell
Question: Please review the Manacher algorithm in Haskell. It finds the longest palindrome in a string.

module Main where

import qualified Data.Vector as V
import Data.Maybe
import Data.Ord

-- manacher algorithm
-- $ ghci
-- $ :l manacher.hs
-- > manacher "aba"
manacher :: String -> String
manacher arg0 = filter (/= '|') . V.toList $ maximum' getList
  where
    getList :: [V.Vector Char]
    getList = foreveryvchars 1 $ format arg0

-- get the maximum length palindrome
maximum' :: [V.Vector Char] -> V.Vector Char
maximum' = foldr1 (\x y -> if V.length x > V.length y then x else y)

-- for every character try to match the left with right side
foreveryvchars :: Int -> V.Vector Char -> [V.Vector Char]
foreveryvchars center vchars =
  case vchars V.!? center of
    Nothing -> []
    _ -> let (k, v) = match center 1 vchars
         in v : foreveryvchars ((\x -> if x == 0 then center + 1 else center + x) k) vchars

-- Takes the center and expand till it matches
-- returns the length and the palindrome it found around the center
match :: Int -> Int -> V.Vector Char -> (Int, V.Vector Char)
match center radius vchars =
  let left = getleft center radius vchars
      right = getright center radius vchars
  in case left /= Nothing && right /= Nothing && left == right of
       True -> match center (radius + 1) vchars
       _ -> (radius - 1, V.slice (center - (radius - 1)) (1 + ((radius - 1) * 2)) vchars)

getleft center radius vchars = vchars V.!? (center - radius)
getright center radius vchars = vchars V.!? (center + radius)

-- format :: String -> V.Vector Char
format = stov . convert

-- Insert pipe after each character
convert :: String -> String
convert [] = '|' : []
convert (x:xs) = '|' : x : convert xs

-- Convert String to Data.Vector
stov :: String -> V.Vector Char
stov = V.fromList

main :: IO ()
main = print "hello world"

Answer: Write clearly. Adding comments to your functions is good, but your first choice should be for the functions (and their arguments) to have self-explanatory names.
Second, one hopes that the type signature will explain what the function does (sometimes adding type aliases can help with this). Only then, if there are still details that need explanation, write the comments in clear, complete sentences. Alternately, I think it's fine to just give a URL to a canonical source for the function. Additionally, any time there's a part of the code you struggled with or had to scrap an attempt at, try to leave a comment explaining why you ended up with the solution you did.

match says it returns the "length", but the thing it actually returns is more like the "radius". Also, be a little more careful with your whitespace. Also, camel-case and pascal-case are conventional in Haskell, e.g. sToV.

Use -Wall. It stands for "warnings: all". Use it to find places where GHC already knows you can improve your code. Unfortunately, it doesn't actually mean "all"; there are other flags you can set to make GHC even pickier.

Sometimes less is more. Sometimes more is more. If there's a particular parametric type you use a lot, give it an alias (or a newtype, depending on the situation) like type PaddedVec = V.Vector (Maybe Char). On the other hand, if you have an alias (like stov) that you only use once, probably just inline it. If you want it to be named, you can group it with its usage with a where clause.

Be sure you're using built-in functionality as well as you can. maximum' is "just" maximumBy (compare `on` V.length). You can use where together with guards to simplify the function you were calling match. Ironically, in Haskell you rarely actually need to use a lambda.

Adding a bunch of '|' chars isn't great. It's not clear that there's a better solution that doesn't involve any padding at all (I tried and failed), but choosing an arbitrary "dummy" character could fail (if the source text contains that character), and it locks your implementation to the one data-type (Char).
Consider, for example, how you would adapt this algorithm to work on lists of integers. I would suggest instead mapping all the values to Justs, and padding with Nothing. Here's my version:

module Main where

import Data.Foldable (maximumBy)
import Data.Function (on)
import Data.Maybe (catMaybes)
import qualified Data.Vector as V

type PaddedVec a = V.Vector (Maybe a)

-- https://en.wikipedia.org/wiki/Longest_palindromic_substring#Manacher.27s_algorithm
manacher :: (Eq a) => [a] -> [a]
manacher = catMaybes . V.toList . grabLongest . scan . V.fromList . padded
  where
    grabLongest = maximumBy (compare `on` V.length)
    padded [] = Nothing : []
    -- Can't just use Data.List.intersperse; we need them wrapping outside too.
    padded (x : xs) = Nothing : Just x : padded xs
    scan = forEveryIndex 0 -- Starting at 1 could cause a fold1-exception if the input is empty

-- For every index, find the largest palindrome centered there, skipping some per Manacher's algorithm.
forEveryIndex :: (Eq a) => Int -> PaddedVec a -> [PaddedVec a]
forEveryIndex center vec =
  if center < (V.length vec)
    then pal : forEveryIndex (center + (max 1 radius)) vec
    else []
  where
    (pal, radius) = findAround center 1 vec

-- Find the longest palindrome centered at an index (and its "radius").
findAround :: (Eq a) => Int -> Int -> PaddedVec a -> (PaddedVec a, Int)
findAround center radius vec
  | Nothing <- left = (priorSlice, priorRadius) -- This is an "outer" Nothing.
  | Nothing <- right = (priorSlice, priorRadius)
  | left /= right = (priorSlice, priorRadius)
  | otherwise = findAround center (radius + 1) vec
  where
    left = vec V.!? (center - radius) -- left :: Maybe (Maybe a)
    right = vec V.!? (center + radius) -- use `forall a.` above to make these explicit.
    priorRadius = radius - 1
    priorSlice = V.slice (center - priorRadius) (1 + (priorRadius * 2)) vec

main :: IO ()
main = do
  print $ manacher "hello world"
  print $ manacher "abacaba"
  print $ manacher "helloll world"
  print $ manacher "helloll wowrld"
  print $ manacher ""
  print $ manacher "x"
{ "domain": "codereview.stackexchange", "id": 43037, "tags": "algorithm, strings, haskell, palindrome" }
Is there a book with 100 reductions?
Question: In a lecture I'm taking on complexity theory, a professor said that there are infinitely many NP-complete problems. Question: I was wondering if there exists something like a database or a book with some known reductions (or maybe with more than only the NP-complete ones) and the proofs for them? I know there is a very nice database for rings, but I couldn't find something similar for reductions. Answer: The classical reference on NP-completeness is Garey and Johnson's Computers and Intractability, which contains a compendium of over 300 NP-complete problems, with links to papers proving their NP-hardness. The only downside is that the book is quite old, dating from 1979.
{ "domain": "cs.stackexchange", "id": 19078, "tags": "reductions, reference-request, books, polynomial-time-reductions" }
Why do socks stink?
Question: What chemicals are responsible for the stench emanating from socks? If it's just sweat, why don't underwear or vests stink as much? Which gas am I inhaling when smelling socks to decide whether they need a wash? Answer: According to the Wikipedia page, aptly named Smelly Socks, the smell is a mixture of ammonia, fatty acids (in particular, isovaleric acid), and lactic acid. What makes sock odor unique is that socks often pick up small amounts of organic material when walking around without shoes on. When wearing shoes, they are in a warm, semi-enclosed environment with a considerable amount of sweat infusing into them (it can be quite warm and humid in shoes), particularly when exercising. A lot of the smell is due to bacterial action on the bits and pieces trapped in the socks themselves. Interestingly, the Wikipedia article states that it is a topic of ongoing study, in part due to a possibility that the chemicals could be used in the battle against malaria - Using smelly socks to attract mosquitoes -- and combat malaria
{ "domain": "chemistry.stackexchange", "id": 3844, "tags": "everyday-chemistry, smell" }
rosserial_arduino can't send a sensor_msgs/Imu msg
Question: Hi, I have an IMU implemented with Arduino that I'd like to connect to ROS, but I got stuck on the most simple thing: sending the Imu message. Sometimes it just hangs, sometimes I get an exception in the python client. This is the program I'm trying:

#include <ros.h>
#include <sensor_msgs/Imu.h>

ros::NodeHandle nh;

sensor_msgs::Imu imu_msg;
ros::Publisher imu_pub("imu", &imu_msg);

void setup()
{
  nh.initNode();
  nh.advertise(imu_pub);
  imu_msg.header.frame_id = 0;
  imu_msg.orientation.x = 0.0;
  imu_msg.orientation.y = 0.0;
  imu_msg.orientation.z = 0.0;
  imu_msg.orientation.w = 0.0;
}

void loop()
{
  imu_msg.header.stamp = nh.now();
  imu_pub.publish( &imu_msg );
  nh.spinOnce();
  delay(1000);
}

And this is the output of the python serial_node:

[INFO] [WallTime: 1327668877.112084] ROS Serial Python Node
[INFO] [WallTime: 1327668877.115963] Connected on /dev/ttyACM0 at 57600 baud
[INFO] [WallTime: 1327668880.075914] Note: publish buffer size is 512 bytes
[INFO] [WallTime: 1327668880.076318] Setup publisher on imu [sensor_msgs/Imu]
Traceback (most recent call last):
  File "/home/dlobato/ros_workspace/rosserial/rosserial_python/nodes/serial_node.py", line 58, in <module>
    client.run()
  File "/home/dlobato/ros_workspace/rosserial/rosserial_python/src/rosserial_python/SerialClient.py", line 284, in run
    self.callbacks[topic_id](msg)
  File "/home/dlobato/ros_workspace/rosserial/rosserial_python/src/rosserial_python/SerialClient.py", line 95, in handlePacket
    m.deserialize(data)
  File "/opt/ros/electric/stacks/common_msgs/sensor_msgs/src/sensor_msgs/msg/_Imu.py", line 190, in deserialize
    raise roslib.message.DeserializationError(e) #most likely buffer underfill
roslib.message.DeserializationError: unpack requires a string argument of length 72

If I just change the type of message it works fine, so I think either there's some kind of bug regarding the Imu message or I can't see the problem. Any idea is welcome! Thanks!

David.
Originally posted by dlobato on ROS Answers with karma: 11 on 2012-01-27 Post score: 1 Answer: The reason is that the message is just too big for the arduino. There might be some other related questions on this if you look for it. I did not find a satisfying solution besides sending a float multi array and filling out an Imu message from that manually. Originally posted by dornhege with karma: 31395 on 2012-08-06 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 8015, "tags": "arduino, imu, rosserial" }
Is Parthenogenesis known as Sexual reproduction or Asexual reproduction?
Question: I have seen that some books refer to parthenogenesis as sexual and others as asexual. The ones that say it's a form of sexual reproduction focus on the fact that meiosis has happened in the process and also that we are using a sexual reproductive system in that animal to produce the offspring. The ones saying it is a form of asexual reproduction focus on the fact that although meiosis is involved, we only have one parent in the reproduction process and we cannot have any new alleles in the offspring other than the ones the parent has (unless we have mutation). So how can we conclude in the end? Which is it? And also, what is the evolutionary explanation for this phenomenon (parthenogenesis)?

Answer: Only a matter of definition

It is only a matter of definition. When it comes to sex, there is such a diversity of mating systems that it is very hard to make a good definition that won't be challenged by limit cases. Sexual reproduction can typically be defined based on the presence of meiosis (but again this will lead to limit cases), on the need for two distinct parents + meiosis, or on the need for two distinct parents which have different genders (anisogamy) + meiosis.

Diversity of reproductive systems

At the end of the day, what one wishes to call sexual reproduction does not matter too much as long as the definition is made clear, especially when dealing with potential limit cases. If I remember correctly, the intro book to evolutionary biology called Evolution by Futuyma gives a good overview of the diversity of reproductive systems. Having a vague understanding of this diversity will help undermine your hope of finding a clear-cut definition of sexual reproduction.
Applying the definition of Wikipedia to parthenogenesis

Here is the definition from Wikipedia:

Sexual reproduction is a form of reproduction where two morphologically distinct types of specialized reproductive cells called gametes fuse together, involving a female's large ovum (or egg) and a male's smaller sperm

Using this definition and applying it to the specific case of parthenogenesis, one would conclude that parthenogenesis is not a form of sexual reproduction (despite the presence of meiosis). Note, by the way, that even within the term parthenogenesis there is quite a bit of diversity (see the Wikipedia entry for parthenogenesis).

what is the evolutionary explanation for this phenomenon (parthenogenesis)?

I will ignore this question because:

- A post should always be limited to a single question
- This question is very broad
- There is not a single answer to this question, given the diversity of types of parthenogenesis

For a start on the subject, you might want to have a look at Jaenike and Selander, 1979, for example.
{ "domain": "biology.stackexchange", "id": 11856, "tags": "evolution, cell-biology, zoology, sexual-reproduction, asexual-reproduction" }
Histopathological image vs. natural image
Question: What is the difference between a histopathological image and a natural image when training a neural network? Answer: Histopathological images differ from natural images in that the image to be modeled may contain millions of pixels, whereas recognizing from a natural image that it shows a dog / house / certain person requires significantly less information to be extracted from the picture. Histopathological imaging uses WSIs (whole slide images), which contain the piece of tissue as a whole. A WSI needs to be split into 256 x 256 pixel patches, which are then used for ROI (region of interest) hunting and other analyses one by one. Histopathological images (WSIs) are still quite rare and are held privately. More open data would raise the accuracy of machine learning and increase the number of results that could be found in the data. Complete source: https://arxiv.org/pdf/1709.00786v1.pdf
{ "domain": "ai.stackexchange", "id": 403, "tags": "neural-networks, machine-learning, deep-learning" }
Principle of Duality
Question: I would like to know how to alter/add NOTs when applying the Duality Principle. Suppose I have P = XY(X+Y) + NOT(Y); how do I find its dual? My book says that when applying the Duality Principle to a relation:

- Change all 0s to 1s
- Change all ORs to ANDs
- Change all ANDs to ORs

But it doesn't say anything about NOTs. Answer: In the dual boolean algebra, NOT is the same NOT operation we have in the original boolean algebra. Your book does not say anything about handling NOT because there's nothing that needs to be done. Simply leave the NOTs unaffected in your formula.
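Since duality only swaps AND↔OR (and 0↔1) while leaving complements alone, the dual of P = XY(X+Y) + NOT(Y) is (X + Y + XY)·NOT(Y). A small brute-force check in Python (an illustrative sketch; the helper name `equivalent` is mine) also shows the general principle at work on De Morgan's law: when an identity holds, its dual holds too, with every NOT untouched.

```python
from itertools import product

def equivalent(f, g):
    """Truth-table equality over two Boolean variables."""
    return all(f(x, y) == g(x, y) for x, y in product([0, 1], repeat=2))

# De Morgan identity: NOT(X + Y) == NOT(X) AND NOT(Y)
lhs = lambda x, y: 1 - (x | y)
rhs = lambda x, y: (1 - x) & (1 - y)

# Its dual: swap OR <-> AND everywhere; the NOTs (the `1 - ...`) stay as they are.
# NOT(X AND Y) == NOT(X) OR NOT(Y)
dual_lhs = lambda x, y: 1 - (x & y)
dual_rhs = lambda x, y: (1 - x) | (1 - y)

print(equivalent(lhs, rhs), equivalent(dual_lhs, dual_rhs))  # True True
```

Both identities check out over all four input combinations, and the complement operation never changed form.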
{ "domain": "cs.stackexchange", "id": 12233, "tags": "boolean-algebra" }
What is teacher forcing?
Question: In the paper Neural Programmer-Interpreters, the authors use the teacher forcing technique, but what exactly is it? Answer: Consider the task of sequence prediction, so you want to predict the next element of a sequence $e_t$ given the previous elements of this sequence $e_{t-1}, e_{t-2}, \dots, e_{1} = e_{t-1:1}$. Teacher forcing is about forcing the predictions to be based on correct histories (i.e. the correct sequence of past elements) rather than predicted history (which may not be correct). To be more concrete, let $\hat{e}_{i}$ denote the $i$th predicted element of the sequence and let $e_{i}$ be the corresponding ground-truth. Then, if you use teacher forcing, to predict $e_{t}$, rather than using $\hat{e}_{t-1:1}$, you would use $e_{t-1:1}$. Recall that supervised learning can also be thought of as learning with a teacher. Hence the expression "teacher forcing", i.e. you force the predictions to be based on correct histories (the teacher's labels). Of course, intuitively, teacher forcing should help to stabilize training, given that the predictions are not based on noisy or wrong histories. See also the blog post What is Teacher Forcing for Recurrent Neural Networks? by Jason Brownlee.
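To make the $e_{t-1:1}$ vs. $\hat{e}_{t-1:1}$ distinction concrete, here is a minimal Python sketch (the sequence, the dummy "model", and all names are made up for illustration) contrasting the inputs each step sees under teacher forcing versus free running:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a sequence over 4 symbols and an untrained "model" that
# predicts the next symbol from the previous one.
seq = [0, 1, 2, 3, 0, 1, 2, 3]   # ground truth e_1 .. e_T
W = rng.normal(size=(4, 4))      # untrained parameters

def predict(prev_idx):
    """Predict the next symbol from the previous one."""
    one_hot = np.eye(4)[prev_idx]
    return int(np.argmax(W @ one_hot))

# Teacher forcing: step t is conditioned on the TRUE previous element e_{t-1}.
teacher_forced_inputs = [seq[t - 1] for t in range(1, len(seq))]

# Free running: step t is conditioned on the model's OWN prediction,
# so early mistakes propagate into later inputs.
free_running_inputs = []
prev = seq[0]
for t in range(1, len(seq)):
    free_running_inputs.append(prev)
    prev = predict(prev)

print(teacher_forced_inputs)  # identical to seq[:-1]
print(free_running_inputs)    # may drift away from the ground truth
```

With teacher forcing the inputs are the ground-truth history by construction; in free-running mode they depend on the (here untrained) model, which is exactly the noisy-history problem teacher forcing avoids during training.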
{ "domain": "ai.stackexchange", "id": 1717, "tags": "neural-networks, machine-learning, recurrent-neural-networks, definitions, teacher-forcing" }
Chemoautotrophy in large organisms?
Question: The major nutritional mechanisms include chemoautotrophy, heterotrophy (including parasitism and saprotrophy) and photoautotrophy. All of these modes of nutrition developed first in prokaryotic ancestors, and gradually, through the course of evolution, these prokaryotes formed complex eukaryotic, multicellular organisms. Some of these complex descendants (the majority of Kingdom Plantae) possessed the photoautotrophic mode and some (the majority of Kingdom Animalia, some of Kingdom Protista & Kingdom Fungi) possessed the heterotrophic mode. My question is, since all three modes developed quite early in evolutionary history, why didn't any chemoautotrophic multicellular, eukaryotic organism develop? (Or, if they did, why did they not form as wide and diverse a group as the organisms with the other two modes of nutrition did?) (One possible reason could be lesser availability of chemoautotrophic substrates, but I have no proof, nor any argument to support this.) Answer: I've been thinking about this question for a few days and, along the lines of @ThomasIngalls' comment, the simple answer is that cooperation between organisms has such a huge advantage in selection that any eukaryote that happened to become a chemoautotroph would simply not adapt fast enough to compete with those that are already holding these niches. The chemoautotrophs I'm aware of are usually archaebacteria. They are optimized to grow slowly and have a large number of genes that can break down the unusual chemical compounds to grow in an inorganic, often anoxic environment. These bacteria have genes evolved from the chemoautotrophic earliest days of life on earth. They live deep within the earth, underground, in volcanic vents. There are animals that live in some of these environments, but it's simply too easy for these to use the bacteria that are already growing there readily rather than re-evolve the genes to do it by themselves.
Eukaryotes are already symbionts - they used bacteria to create ATP and photosynthetically fix carbon, to the extent that they absorbed them as mitochondria and chloroplasts respectively. If eukaryotes were under a tremendous amount of selective pressure, some of them might absorb symbiotic chemoautotrophs, but without such pressure it doesn't happen when you are competing against plants with roots and animals that eat for their energy. Forming systems of organisms is much more stable than having individual species that can do it all. This question is actually similar to other questions that ask why we don't have metal armor, run as fast as cheetahs, etc. Evolution doesn't actually create super animals that are completely self-contained. The ones that reproduce the best are the ones that can actually spread out the risk of survival and contribute to an ecosystem. It kind of makes me think of human beings - the only superorganism on the planet in this regard. We don't really need to compete against any organism - we are in fact killing most of them out there without really thinking about it. That might not turn out to be good for us long term.
{ "domain": "biology.stackexchange", "id": 1594, "tags": "evolution, nutrition, speculative" }
How to reload all attributes in WEKA
Question: Is there a way to reload all attributes after having removed some of them, without reopening the data file? Any help, please? Answer: Judging from the screenshot, you are currently looking at the data in the Preprocess tab of the Explorer module. In the menu above the top menu in your screenshot there should be an undo option (5th option from the left).
{ "domain": "datascience.stackexchange", "id": 1450, "tags": "weka" }
Why can fish survive out of water?
Question: Obviously, humans can survive underwater for a short while. I've assumed this is because at some point it benefited us to stop breathing momentarily, and so we evolved a respiratory system that was able to take short breaks. Similarly, fish can survive outside of water for a short while. However, I can't think of a single instance for most fish where they may have needed to evolve that ability. Even their ancestors originated from aquatic environments, so I don't think it's a residual ability they no longer need. Why did fish evolve a respiratory system that could take breaks like ours can?

Answer: Do not take too "panselectionist" a view of evolution! You can survive in a bath of mercury for a little while. You can survive naked in outer space for a while (see here). Yet none of your ancestors were exposed to such conditions. We can be tolerant of certain conditions without having been selected to tolerate them. Similarly, you managed to survive in your very specific environment, yet none of your ancestors ever encountered this specific environment. In other words, evolution is more than just natural selection. Not every phenotype in every environment you can think of is the result of a direct selective pressure acting on it. A classical and easy-to-read paper on the subject is Gould and Lewontin (1979).

Why do fish not die suddenly out of water?

The main reason why fish die outside of water is that they cannot take in oxygen from air (see Breathing under water; not considering lungfish). So once fish are exposed to air only, they stop taking in oxygen, consume the oxygen that they have left in their circulatory system and tissues, and slowly die of asphyxia, but there is no reason for the death to be direct and sudden.
{ "domain": "biology.stackexchange", "id": 8006, "tags": "evolution" }
Adding hand-crafted features to a convolutional neural network (CNN) in TensorFlow
Question: Let's say I want to add a few hand-crafted features to a convolutional neural network (CNN) in TensorFlow. The CNN can be a simple one, as described here. Naturally, I'd like to add these features right after the second pooling and right before the first fully-connected layer (FC1 in the example). Is it easy to express my method in code? I'd have to append my features to the h_pool2_flat vector/tensor:

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

Answer: I figured it out. If we denote the additional features as x_feat, I changed the lines from

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

to

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_pool2_flat = tf.concat( [h_pool2_flat, x_feat ], 1 )
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
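As a sanity check on the shapes (NumPy stands in for tf.concat here, and the batch size and extra-feature count are made up): concatenating along axis 1 widens each example's feature vector, so the weight matrix feeding FC1 must then have shape [7*7*64 + num_extra_features, 1024] rather than [7*7*64, 1024].

```python
import numpy as np

batch = 5                                      # hypothetical batch size
h_pool2_flat = np.zeros((batch, 7 * 7 * 64))   # flattened conv features: 3136 per example
x_feat = np.zeros((batch, 10))                 # 10 hand-crafted features per example

# Equivalent to tf.concat([h_pool2_flat, x_feat], 1): join along the feature axis.
combined = np.concatenate([h_pool2_flat, x_feat], axis=1)
print(combined.shape)  # (5, 3146)
```

The batch dimension is untouched; only the per-example feature width grows from 3136 to 3146.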
{ "domain": "datascience.stackexchange", "id": 3077, "tags": "deep-learning, tensorflow" }
quark annihilation into two gluons
Question: We're working through Griffiths's introduction to elementary particle physics and are wondering about the quark-antiquark annihilation process. There are actually three first-order processes, but my question is about the two Feynman diagrams I sketched in Paint. If we look at a QED annihilation process, like $e^-$ and $e^+$, we get two photons. The diagrams eventually look the same if you replace the gluon lines with photon lines. I understand that we cannot differentiate the two photons; that's why we have to draw both diagrams. However, the gluons have different color charge, so I'd say I can differentiate them and therefore they are not interchangeable, thus I only get the left diagram. Can someone shed some light on this? Answer: Experimentally speaking, it is found that we detect only $\mathrm{SU}(3)$ colour singlets in nature, so the gluons we draw in our Feynman diagrams will ultimately hadronise to form such states at time scales longer than the hard scattering in which they were involved. So at the end of the day the resultant states will carry $\mathrm{SU}(3)$ colour quantum numbers with a net red/green/blue configuration. Colour combinations other than this, as carried by the gluons in your picture, are not discernible to detectors, so from this viewpoint we expect to consider the two topologically distinct diagrams as drawn. BTW those two diagrams are also equivalent to
{ "domain": "physics.stackexchange", "id": 37320, "tags": "feynman-diagrams, quantum-chromodynamics, quarks, gluons" }
Finding equation of motion for given Lagrangian with respect to metric
Question: Given the following action in $d$ dimensional $(0,1,...,d-1)$ curved spacetime: $$ S= \int d^dx\sqrt{-g}\mathscr{L}[\chi,\Phi,g^{\mu\nu}] $$ Where: $$\mathscr{L}=e^{-2\Phi} \left(-\frac{1}{2\kappa^2}[R-2\Lambda+4g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi]+g^{\mu \nu}\partial_{\mu}\chi\partial_{\nu}\chi+\frac{\beta^2g_{\tau\tau}-\beta_H^2}{(2\pi)^2}\chi^2 \right)$$ Where $g^{\mu\nu}$ is the metric $R$ is the ricci scalar relative to this metric, $g_{00}=g_{\tau\tau}=(g^{\tau \tau})^{-1}$, $\Phi,\chi$ are scalar fields and $\kappa,\beta,\beta_{H},\Lambda$ are constants. By doing variation by the field $\Phi$ one can obtain the following equation of motion: $$\frac{\partial \mathscr{L}}{\partial\Phi}-\Delta_{\mu}\left(\frac{\partial \mathscr{L}}{\partial\left(\partial_{\mu}\Phi\right)}\right)=0$$ Where $\Delta_\mu$ is the covariant derivative, more explicitly for our lagrangian this equation reads: $$R-2\Lambda-4g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu}\Phi +4g^{\mu\nu}\Delta_{\mu}\Delta_{\nu}\Phi=2\kappa^2\left(g^{\mu \nu}\partial_{\mu}\chi\partial_{\nu}\chi+\frac{\beta^2g_{\tau\tau}-\beta_H^2}{(2\pi)^2}\chi^2\right)$$ Now I want to find another equation of motion, the one with respect to the metric: $$\frac{\partial \mathscr{L}}{\partial g^{\mu \nu}}-\frac{1}{2}g_{\mu \nu}\mathscr{L} = 0$$ On the one hand: $$\frac{\partial \mathscr{L}}{\partial g^{\mu \nu}} = e^{-2\Phi} \left(-\frac{1}{2\kappa^2}[R_{\mu\nu}+4\partial_{\mu}\Phi\partial_{\nu}\Phi]+\partial_{\mu}\chi\partial_{\nu}\chi-\frac{\beta^2g^2_{\tau\tau}}{(2\pi)^2} \delta_{\mu}^{\tau}{\delta}_{\nu}^{\tau}\chi^2 \right) $$ Where I use the following identities: $$\frac{\partial g_{\tau\tau}}{\partial g^{\mu\nu}}=-g^2_{\tau\tau}\delta_{\mu}^{\tau}{\delta}_{\nu}^{\tau}$$ $$R=g^{\mu\nu}R_{\mu\nu}$$ Where $R_{\mu\nu}$ is Ricci tensor. 
On the other hand: $$-\frac{1}{2}g_{\mu \nu}\mathscr{L}=e^{-2\Phi} \left(-\frac{1}{2\kappa^2}\left[-\frac{1}{2}(R-2\Lambda)g_{\mu\nu}-2d\partial_{\mu}\Phi\partial_{\nu}\Phi \right]-\frac{d}{2}\partial_{\mu}\chi\partial_{\nu}\chi-\frac{1}{2}\frac{\beta^2g_{\tau\tau}-\beta_H^2}{(2\pi)^2}g_{\mu \nu}\chi^2 \right) $$ Where this time I used the identity: $$g^{\mu\nu}g_{\mu \nu}=\delta_{\mu}^{\mu}=d$$ Using the equation of motion of $\Phi$ above, we substitute it in $R-2\Lambda$ of our expression which yields eventually: $$-\frac{1}{2}g_{\mu \nu}\mathscr{L}= e^{-2\Phi}\left(-\frac{1}{2\kappa^2}[2d\Delta_{\mu}\Delta_{\nu}\Phi]\right) $$ Putting all together $$\frac{\partial \mathscr{L}}{\partial g^{\mu \nu}}-\frac{1}{2}g_{\mu \nu}\mathscr{L} = 0 \iff R_{\mu\nu}+4\partial_{\mu}\Phi\partial_{\nu}\Phi+2d\Delta_{\mu}\Delta_{\nu}\Phi=2\kappa^2 \left(\partial_\mu\chi\partial_\nu\chi-\frac{\beta^2g^2_{\tau\tau}}{(2\pi)^2} \delta_{\mu}^{\tau}{\delta}_{\nu}^{\tau}\chi^2\right)$$ But according to some papers, the answer should be: $$R_{\mu\nu}+2\Delta_{\mu}\Delta_{\nu}\Phi=2\kappa^2 \left(\partial_\mu\chi\partial_\nu\chi-\frac{\beta^2g^2_{\tau\tau}}{(2\pi)^2} \delta_{\mu}^{\tau}{\delta}_{\nu}^{\tau}\chi^2\right)$$ Where did I go wrong??? Answer: The variation of $R$ with respect to the metric is $$ \delta g^{ab} R_{ab} + g^{ab} \delta R_{ab} = \delta g^{ab} R_{ab} + \nabla^a \nabla^b (\delta g_{ab}) - g^{cd} \nabla^a \nabla_a (\delta g_{cd}) \\= \delta g^{ab} R_{ab} - \nabla_a \nabla_b (\delta g^{ab}) + g_{cd} \nabla^a \nabla_a (\delta g^{cd}). $$ (See Appendix E of Wald, for example, for why this is so.) For the typical Einstein-Hilbert action, the last two terms are a total derivative and so they vanish inside the varied action integral. 
But in your case, varying the term $e^\Phi R$ gives $$ e^{\Phi} \left[\delta g^{ab} R_{ab} - \nabla_a \nabla_b (\delta g^{ab}) + g_{cd} \nabla^a \nabla_a (\delta g^{cd}) \right], $$ and when you integrate the last two terms by parts (to get something of the form $(\cdots) \delta g^{ab}$), you'll get something non-vanishing.
{ "domain": "physics.stackexchange", "id": 93958, "tags": "lagrangian-formalism, metric-tensor, field-theory, action, variational-calculus" }
Probably unneeded returns
Question: This is my first "useful" Haskell program. It's invoked with filenames as arguments where the last filename is the destination to which the content of all other files will be concatenated. As far as I can tell it's working but I don't like the two returns in the main function and would like to know how I can get rid of the do in the concatFiles function. import System.Environment (getArgs) import System.IO (hClose, hPutStr, openFile, IOMode(WriteMode)) concatFiles :: FilePath -> [FilePath] -> IO () concatFiles destination files = do writeHandle <- openFile destination WriteMode mapM (writeToDestination writeHandle) files hClose writeHandle where writeToDestination writeHandle file = hPutStr writeHandle =<< readFile file main = do args <- getArgs argc <- return $ length args destination <- return $ last args concatFiles destination $ take (argc-1) args Answer: For your argc <- return $ length args , the reason the compiler is telling you to use the return is because in do notation, monadic binding lines must be of the form: binding <- value wrapped in whatever monad you're using So length args just returns an Int, but Haskell wants an IO Int. By doing return $ length args, you lift the value returned by length args into the IO monad. You are correct in smelling something funny here. Haskell provides a much cleaner way of assigning non-monadic values to be used inside of do notation with the let keyword. You can replace those two lines with: let argc = length args destination = last args Note that bindings must have the same indentation level within a let assignment block. But really, there is more to improve! 
In Haskell, the four most common list partitions are: First element (head) Last element (last) Everything but the first (tail) Everything but the last (init) So with init we can eliminate the need for argc, changing your main to main = do args <- getArgs let destination = last args concatFiles destination $ init args But really, the word destination is longer than the function call last args, and doesn't really add much clarity, so we can change that, too, removing the need for the let entirely! main = do args <- getArgs concatFiles (last args) (init args) And if you really wanted to become more intimately familiar with how do notation and monads in general work, we can actually remove the do notation and replace it with a >>= operator by doing main = getArgs >>= \args -> concatFiles (last args) (init args) So as a quick recap of what is happening here, we are calling getArgs, which returns a list of arguments wrapped in the IO monad. The >>= (bind) operator takes that IO value, extracts the value inside, and sends it into the function on the right side of the >>= operator, where it gets bound to the argument args.
{ "domain": "codereview.stackexchange", "id": 6666, "tags": "haskell, monads" }
Quantum computer simulators
Question: What are good, free, open source, GUI-based quantum computer simulators? I remember there used to be a Java-based one that was very good, but it doesn't seem to be online anymore, or it's relocated. Answer: Here's a list of quantum computer simulators, categorized by the programming language in which they were written: http://www.quantiki.org/wiki/List_of_QC_simulators Specifically, http://www.davyw.com/quantum/ allows the full simulation of up to 9 qubits.
{ "domain": "physics.stackexchange", "id": 20442, "tags": "resource-recommendations, quantum-information, quantum-computer, software" }
When metamorphic rocks undergo pressure, how are they changed?
Question: When metamorphic rocks undergo pressure they are changed, but how are they changed? Do they become fragments of rocks? Doesn't that mean that the difference between sedimentary rocks and metamorphic rocks (when talking about fragment rocks) is that sedimentary rocks undergo weathering, which makes them fragments, and metamorphic rocks undergo immense pressure, which makes them into fragments? Answer: Metamorphic rocks are changed by transformations deep underground. Being deep underground, they are subject to immense pressure and heat. The transformations can be as simple as a change in the crystal size of a particular mineral, or different minerals can in fact be formed. For a particular mineral there may also be different crystal structures, which depend on the pressure and temperature at which the crystal was formed. The different compositions and crystal structures would be shown on geochemical phase diagrams. Geochemical modeling could be used to predict the various reactions based on the temperature and pressure profile to which the material is subjected.
{ "domain": "earthscience.stackexchange", "id": 989, "tags": "geology, rocks, metamorphism" }
2SAT Problem using Implication Graph
Question: I was doing a practice question. As you can see below there is an Implication graph. To check whether the problem is satisfiable, I checked whether there were any 'bad loops'. To do so, for each literal/variable in the boolean formula, I see if there is a path from X ---> -X and from -X ---> X; if both exist then a 'bad loop' exists and thus the problem is not satisfiable. I did that and saw that there were no such loops and thus I came to the conclusion that the 2SAT problem was satisfiable. See below: I realized that option d (the 2SAT problem is satisfiable if a, b, c and d are set to false) is correct. I'm struggling to come to this conclusion by looking at the graph. My understanding is that you take all the paths in the graph, and see if you come to a contradiction? Can someone please help me understand why option d is correct. Thanks in advance! The method I have come up with is to write out the arcs/edges for each implication and work backwards to derive the 2SAT boolean formula and see if it works. Does anyone know a quicker way? :) Answer: For any variable $X$, if there is a path from $X$ to $\neg X$ then you cannot set $X$ to $True$, since otherwise you would end up with a $True \Rightarrow False$ implication somewhere along the path. For the same reason, if there is a path from $\neg X$ to $X$ you cannot set $X$ to $False$. Hence, in the first case you must set $X$ to $False$, in the second you must set it to $True$. If there are paths in both directions, as you already know, the formula is not satisfiable. In your example, there are only paths from $X$ to $\neg X$ for all variables, hence the only assignment that satisfies the instance is the one where all variables are $False$.
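To make the path-based check concrete, here is a small brute-force sketch (an editor's illustration, not part of the original posts; the function names and literal encoding are made up for this example). Literals are signed integers (+i for the variable $x_i$, -i for its negation); a clause $(a \lor b)$ contributes the implication edges $\neg a \to b$ and $\neg b \to a$. This decides satisfiability by exactly the argument in the answer and reports the forced variables. A production solver would instead use Tarjan's strongly-connected-components algorithm for linear time.

```python
def implication_edges(clauses):
    """Clause (a, b) means (a OR b): edges ~a -> b and ~b -> a.

    Literals are signed ints: +i is x_i, -i is NOT x_i.
    """
    edges = {}
    for a, b in clauses:
        edges.setdefault(-a, set()).add(b)
        edges.setdefault(-b, set()).add(a)
    return edges

def reachable(edges, src, dst):
    """Depth-first search: is there a path src -> dst in the graph?"""
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(edges.get(u, ()))
    return False

def solve_2sat(n, clauses):
    """Return None if unsatisfiable, else a dict of forced variables."""
    edges = implication_edges(clauses)
    forced = {}
    for x in range(1, n + 1):
        must_be_false = reachable(edges, x, -x)   # path x -> ~x
        must_be_true = reachable(edges, -x, x)    # path ~x -> x
        if must_be_false and must_be_true:
            return None                           # 'bad loop': unsatisfiable
        if must_be_false:
            forced[x] = False
        elif must_be_true:
            forced[x] = True
    return forced
```

On an instance where every variable has a path $x \to \neg x$ but not the reverse, this returns the all-False forced assignment, which is the reasoning behind option d.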
{ "domain": "cs.stackexchange", "id": 12599, "tags": "complexity-theory, np, satisfiability, 3-sat, 2-sat" }
Reduce the calls to database for validations in rails
Question: I was building a Rails web app, and used a form object to separate the validation logic per form. I had to find_referral_code twice in order to store the user_id of the referrer. I wonder how I can improve it. after_sign_up_form.rb def get_referred_by_id User.where(referral_code: referer_referral_code).first.id end def validates_presence_of_referer_referral_code u = User.find_by_referral_code(referer_referral_code) if u.nil? errors.add :referer_referral_code, "Not a valid Referral Code" else if @user.referral_code == referer_referral_code errors.add :referer_referral_code, "Can't use your own referral code" end end end def validates_unique_primary_email # errors.add :primary_email, "Email is taken" end def email_is_self (@user.identities.pluck(:email) << @user.email).flatten.include? primary_email end Answer: You can memoize the result using ||= def referred_by_id @referred ||= User.where(referral_code: referer_referral_code).first&.id end def validates_presence_of_referer_referral_code u = referred_by_id if u.nil? errors.add :referer_referral_code, "Not a valid Referral Code" else if @user.referral_code == referer_referral_code errors.add :referer_referral_code, "Can't use your own referral code" end end end You shouldn't name your method with a get_ prefix. EDITED: It is a Ruby naming convention to not use prefixes like get_ nor set_ (according to RuboCop and the Ruby style guide). I think it is because every method is a getter by default (it always returns its last line). When you call the referred_by_id method, it will query the db just once and then memoize the result. So it will improve performance if this method is called more than once in this class.
{ "domain": "codereview.stackexchange", "id": 14092, "tags": "ruby, ruby-on-rails, validation, active-record" }
Take over maintenance of packages?
Question: We're using 2 packages that have been neglected by the maintainers: (PocketSphinx and rviz_animated_view_controller). Is it going to ruffle any feathers if I fork rosdistro and update the source url and maintainer names? Our main goal here is a Kinetic version. Originally posted by AndyZe on ROS Answers with karma: 2331 on 2017-05-04 Post score: 1 Answer: @Dirk Thomas just started a related Discourse thread: Releasing repositories from “other” people. As ROS Answers isn't really suited for discussion, I'd suggest you bring this up there as well. Originally posted by gvdhoorn with karma: 86574 on 2017-05-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27814, "tags": "ros, pocketsphinx" }
Counting independent sets
Question: I know the Independent set problem is NP-complete. But could there be a more efficient way to count the exact number of different independent sets in an arbitrary, given graph? Answer: No, this problem is well-known to be #P-complete. For more, you can see this question on CSTheory.
{ "domain": "cs.stackexchange", "id": 20744, "tags": "algorithms, graphs, combinatorics, counting" }
Spring MVC controller
Question: My code has a nasty smell that looks like this: @RequestMapping(value = { "/updateGraph" }, method = RequestMethod.POST) public ResponseEntity<ArrowsGraph> updateGraph(@RequestBody ArrowsGraph g){ try{ g = arrowsBo.updateGraph(g); return new ResponseEntity<ArrowsGraph>(g, HttpStatus.OK); } catch (Exception e) { errorLogging.logFatalException(e, sessionState); return new ResponseEntity<ArrowsGraph>(HttpStatus.INTERNAL_SERVER_ERROR); } } @RequestMapping(value = { "/deleteGraph" }, method = RequestMethod.POST) public ResponseEntity<ArrowsGraph> deleteGraph(@RequestBody ArrowsGraph g){ try{ g = arrowsBo.deleteGraph(g); return new ResponseEntity<ArrowsGraph>(g, HttpStatus.OK); } catch (Exception e) { errorLogging.logFatalException(e, sessionState); return new ResponseEntity<ArrowsGraph>(HttpStatus.INTERNAL_SERVER_ERROR); } } Things to mention: The exceptions being caught are runtime exceptions, likely thrown if the database mapping is configured incorrectly. If the code is working correctly, exceptions should never be thrown, but in the case there is an error, I want to display an error on the client side, instead of having unintended behaviour. The reason I'm using ResponseEntity<..> instead of @ResponseBody is so I can return a 500 in the case that an exception is thrown. The purpose of errorLogging.logFatalException(...) is to log the error, and notify admins. Before I refactored to use ResponseEntity<...> and catch the runtime exceptions, the methods looked a lot cleaner: @RequestMapping(value = { "/deleteGraph" }, method = RequestMethod.POST) public ArrowsGraph deleteGraph(@RequestBody ArrowsGraph g) { return arrowsBo.deleteGraph(g); } Is there a nice way to avoid this repetition/boilerplate? Answer: Is there a nice way to avoid this repetition/boilerplate? Yes, there is a cleaner way. Since you are using Spring, please check the annotation @ControllerAdvice and the class ResponseEntityExceptionHandler. 
To handle exceptions, you'll need to extend ResponseEntityExceptionHandler and manage the returned ResponseEntity in a method annotated with @ExceptionHandler, for example: @ControllerAdvice(basePackageClasses = MyControllerClass.class) public class ErrorResponseHandler extends ResponseEntityExceptionHandler { @ExceptionHandler(MyExceptionType.class) @ResponseBody ResponseEntity<Object> handleControllerException(HttpServletRequest request, Throwable ex) { // process the exception and wrap the response to return in ResponseEntity } } Using this approach, your service methods will remain one-liners, returning non-wrapped ArrowsGraph instances.
{ "domain": "codereview.stackexchange", "id": 20252, "tags": "java, error-handling, spring-mvc" }
Probability in moving from state $\lvert1\rangle$ to $\lvert2\rangle $
Question: The system is $|1\rangle$ at time $t = 0$, while $|2\rangle$ at time $t$. The basis $\left\{|1\rangle, |2\rangle\right\}$ is orthonormal. Epsilon and delta are positive constants, which have some energy unit. The Hamiltonian is: $$ \hat{H} = \epsilon\left(-1|1\rangle\langle1|+|2\rangle\langle2|\right)+\delta\left(|1\rangle\langle2|+|2\rangle\langle1|\right) $$ What is the probability of moving from state $|1\rangle$ into the state $|2\rangle$? Answer: This is an outline of what you have to do: You need to find the eigenstates of the Hamiltonian. Write $\hat H$ as a $2\times 2$ matrix in the $\{|1\rangle,|2\rangle\}$ basis and diagonalize it, i.e. find its eigenvalues $E_1,E_2$ and the corresponding eigenvectors $|E_1\rangle, |E_2\rangle$. You can then decompose the initial state $|\psi\rangle$, $|1\rangle$ in your case, in terms of $|E_1\rangle$ and $|E_2\rangle$. The result will be something like this: $|\psi(t=0)\rangle = c_1|E_1\rangle+c_2|E_2\rangle$ The time evolved state will then be given by: $|\psi(t)\rangle = c_1e^{-iE_1t/\hbar}|E_1\rangle + c_2e^{-iE_2t/\hbar}|E_2\rangle$ Find the probability that the system is in state $|2\rangle$ at time $t$ by calculating the mod square of the scalar product: $P_{1\to 2}(t) = |\langle 2|\psi(t)\rangle|^2$ I can clarify any step if you don't know exactly what to do.
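Following the outline, one can shortcut the explicit diagonalization for this particular $\hat H$: in the $\{|1\rangle,|2\rangle\}$ basis, $\hat H^2 = (\epsilon^2+\delta^2)\,\mathbb{1}$, so the propagator has a closed form. A numerical sketch (an editor's illustration, not from the original answer; units with $\hbar=1$ by default):

```python
import math

def transition_probability(eps, delta, t, hbar=1.0):
    """P_{1->2}(t) for H = eps(-|1><1| + |2><2|) + delta(|1><2| + |2><1|).

    In the {|1>, |2>} basis H = [[-eps, delta], [delta, eps]], so
    H^2 = (eps^2 + delta^2) * I and the propagator collapses to
    exp(-iHt/hbar) = cos(w0*t/hbar) I - i sin(w0*t/hbar) H / w0,
    with w0 = sqrt(eps^2 + delta^2).
    """
    w0 = math.sqrt(eps**2 + delta**2)
    amp = -1j * math.sin(w0 * t / hbar) * (delta / w0)  # <2| exp(-iHt/hbar) |1>
    return abs(amp) ** 2
```

The result is the Rabi-type formula $P_{1\to2}(t) = \frac{\delta^2}{\epsilon^2+\delta^2}\sin^2\!\big(\sqrt{\epsilon^2+\delta^2}\,t/\hbar\big)$, which is what the outlined eigenstate decomposition yields.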
{ "domain": "physics.stackexchange", "id": 9505, "tags": "quantum-mechanics, homework-and-exercises" }
Why do we have momentum?
Question: Momentum is defined as the product of mass and velocity and can be thought of as measuring how much motion something has. However, it is not clear to me why we need momentum and why force is not sufficient to explain nearly all the phenomena. Consider a larger mass moving at the same speed as an object of smaller mass. We say the object of larger mass is harder to stop, as it has more momentum. However, could we not just say the object of larger mass exerts more force, so it requires a greater force to stop it? The use of momentum is not entirely clear to me. Answer: Momentum is the primary quantity because it is conserved. Force is only useful because for mechanical systems it is the rate of change of momentum. But momentum is more general: things like electromagnetic fields have momentum even if they have no mass and no acceleration and therefore no force in the usual sense. Momentum also has important meaning through Noether's theorem. The fact that the laws of physics are the same here, in Paris, on Mars, and in Andromeda implies that there must be a conserved momentum. The actual details of those laws don't matter. Any laws with that symmetry would necessarily result in some conserved momentum. So you could get rid of forces, $\vec F$, simply replacing them with rates of change of momentum $\dot {\vec p}$. But not the reverse since it is $\vec p$ that is conserved, that follows from the spatial symmetry of the laws of physics, and that applies to non-mechanical systems.
{ "domain": "physics.stackexchange", "id": 97061, "tags": "newtonian-mechanics, forces, momentum, mass, velocity" }
Why use the entanglement in Conway's free will theorem?
Question: In Conway's Strong Free Will Theorem paper, the proof consists of two parts. In the first part, they proved the Specker-Kochen theorem. In the second part, they constructed two experimenters that are space-like separated, each possessing one spin-1 particle from an entangled pair, then they each made a measurement. I don't see the point of bringing in the entanglement. From what I understood, the proof is like this: Suppose particles have no free will, but experimenters have, then there are two functions $\theta_a(x, y, z), \theta_b(w)$, such that $\theta_a(x, y, z)\in \{(110),(101),(011)\}$, and $\theta_b(w)\in\{0, 1\}$. By entanglement, we have $\theta_a(x, y, z) = (\theta_b(x), \theta_b(y), \theta_b(z))$. So $\theta_b$ satisfies the 101 property, which is impossible. However, this just looks like the Specker-Kochen theorem with extra steps. Why is it necessary to show that $\theta_b$ has the 101 property by detouring through $\theta_a$? Is it supposed to squeeze out any possible contextuality? Perhaps, phrased in another way, I'm asking this: suppose the universe has exactly one spin-1 particle, then is it possible that $\theta_b$ does not have the 101 property? Answer: I figured it out: it's to close the contextuality loophole. In the 2006 free will theorem paper (which the 2009 paper referred to), it's said here: One way of blocking no-go theorems that hidden variable theories have proposed is “contextuality”– that the outcome of an experiment depends upon hidden variables in the apparatus. For the triple experiment in SPIN, contextuality allows the particle’s spin in the z direction (say) to depend upon the frame (x, y, z). However, since the particle’s past history includes all its interactions with the apparatus, the Free Will theorem closes that loophole. 
What it means in my notation is that, if we had only one spin-1 particle a, then the function $\theta_a(x, y, z)$ might not be of the form $\theta_a(x, y, z) = (\theta_b(x), \theta_b(y), \theta_b(z))$, due to "contextuality", meaning that the squared-spin in direction x can be affected by the other two directions the experimenter will measure. And as for why $\theta_b$ alone is not enough, I suppose it is also due to contextuality: if we only have $\theta_b$, we can't show that it has the 101 property without the help of $\theta_a$. Suppose we choose direction $w$, and measure the squared-spin of particle b along $w$. That gives $\theta_b(w)$, but now if we want to measure some other direction $w'$, we would be using $\theta_b'(w')$, which is not $\theta_b$, and thus we cannot show that $\theta_b$ itself has the 101 property.
{ "domain": "physics.stackexchange", "id": 67701, "tags": "quantum-mechanics, quantum-entanglement" }
Relation between Signals and Events
Question: There are POSIX signals (linux man), and various event-based operating systems (such as TinyOS, most seem to be targeted at embedded systems). But the mechanisms seem very similar. Are the terms 'signal' and 'event' interchangeable? Or are signals a sub-set of events? Or are the two fundamentally different for some reason that I am completely missing? Answer: As general concepts, the words signal, notification and event are mostly interchangeable, though they carry different connotations and some words wouldn't be used to denote some specific mechanisms. A particular operating system or other concurrency framework may use these terms for different mechanisms. Some differences to watch for (with signals, events or whatever they're called) include: Synchronous vs asynchronous: does the sender block until the recipient receives the signal? Stacking: if multiple identical signals are generated before the recipient acknowledges any of them, will the recipient receive that many signals, or can they be merged? Associated information: is the signal just “hey, wake up”, or does it have an associated tag, or does it have a payload (which could be a pointer to a larger data structure)? Destinations: does the signal target a specific task (unicast), or a set of recipients (multicast), or all tasks that care (broadcast)? In the multicast case, will all the specified targets receive the signal, or is the signal cleared as soon as one task acknowledges it (this may be termed unicast rather than multicast)? For example, POSIX signals are asynchronous, unicast (multicast if sent to a process group), they do not stack (identical pending signals are merged, though POSIX real-time signals do queue), and have a tag which is a small integer. 
I'm not familiar with TinyOS; from a quick read of the tutorial I think they are asynchronous, unicast (perhaps with other possibilities), non-stacking, and have a payload (and, incidentally, sending an event is called signalling it; reception is called triggering, because the event in TinyOS terminology is the handler instance rather than the message). I think the word event wouldn't be used if there is no associated information at all. The word signal wouldn't be used if there's a rich payload: with a rich payload, it's a message, possibly an event or a notification. GUI frameworks often call notifications of user input events; GUI events have a payload (e.g. to identify which key was pressed) and don't stack. The use of the word event tends to promote event-driven architectures, where programs are structured as event callbacks rather than as a thread that checks for notifications periodically, but it isn't limited to this use.
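The "stacking" axis is easy to model in a few lines. This toy sketch (an editor's illustration, not any real OS API) either coalesces identical pending signals, roughly the way POSIX standard signals behave, or queues every instance, roughly the way POSIX real-time signals behave:

```python
class Notifier:
    """Toy pending-signal set illustrating merged vs queued delivery."""

    def __init__(self, coalesce):
        self.coalesce = coalesce   # True: identical pending signals merge
        self.pending = []

    def send(self, tag):
        if self.coalesce and tag in self.pending:
            return                 # already pending: drop the duplicate
        self.pending.append(tag)

    def drain(self):
        """Deliver and clear all pending signals, in generation order."""
        delivered, self.pending = self.pending, []
        return delivered
```

With coalesce=True, sending the same tag three times before the recipient drains delivers it only once; with coalesce=False, all three instances arrive.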
{ "domain": "cs.stackexchange", "id": 3297, "tags": "operating-systems, os-kernel" }
React props children with List and Spinner Loader
Question: I have a custom component with a List. I need to know if this is the best way to show a Spinner while the fetch is working. I'm using child props for that, but I don't know if this way is correct. import React, { Component, PropTypes } from 'react'; import demoData from './demodata'; // Other import stuff... export default class HomeScreen extends Component { // Constructor component constructor(props) { super(props); const ds = new ListView.DataSource({ rowHasChanged: (r1, r2) => r1 !== r2, sectionHeaderHasChanged: (s1, s2) => s1 !== s2 }); this.state = { dataSource: ds.cloneWithRows(demoData), loading: true }; // Service for fetching data this.service = new ServiceApp(); } componentDidMount() { this.fetchData(); } // Simulate fetchData fetchData() { const self = this; setTimeout(() => { self.setState({ loading: false }); }, 4000); } render() { const component = this.state.loading ? <Spinner /> : (<ListView style={styles.container} dataSource={this.state.dataSource} renderRow={data => <Row {...data} />} />); return ( <Content> {component} </Content> ); } } Answer: The way I do it is to have both the Spinner and the components in the render and pass the loading prop to it; inside the Spinner component it will render the spinner if loading is set to true and return null if it's set to false. That way you're not re-rendering the whole listview when you toggle it on and off. this.state = { loading: false, } //in the constructor componentDidMount() { this.setState({ loading: true }, () => this.fetchData()); } // Simulate fetchData fetchData = () => { //const self = this; you don't need this line setTimeout(() => { this.setState({ loading: false }); }, 4000); //after you receive the response, you set it back to false } render() { return ( <Content> <Spinner visible={this.state.loading} /> <ListView style={styles.container} dataSource={this.state.dataSource} renderRow={data => <Row {...data} />} /> </Content> ); }
{ "domain": "codereview.stackexchange", "id": 24648, "tags": "javascript, react.js" }
gas pressure outside nozzle in vacuum
Question: I'm currently trying to figure out the following in the simplest possible way: Say we have a nozzle in a vacuum environment. A gas of a certain pressure is emitted through the nozzle, which has a certain diameter, into the vacuum. The question is now how high is the pressure at a certain distance away from the nozzle exit? Any help would be greatly appreciated! Answer: The answer is that it depends on what you mean by pressure. To understand this imagine you have some gas in a canister at some pressure $P$, and you look at the inside walls of the canister with a microscope so powerful that you can see the gas molecules whizzing about and bouncing off the walls. One of the collisions would look like this. The gas molecules have a mass $m$ and some average velocity $v$ so they have a momentum $mv$. When one molecule rebounds from the wall the change of momentum is $2mv$, but the rate of change of momentum is just the force exerted on the wall. So if the number of collisions with the wall per second per unit area is $N$, then the pressure on the wall will be $P = 2Nmv$ (not quite, because not all collisions are at right angles, but let's skip over this). Anyhow, the point is that pressure is caused by collisions with gas molecules. In a canister of gas (like the room you're in) the molecules are moving in all directions at random so the pressure they produce is the same everywhere. So for example the air pressure on you is the same all over your skin. But in your example of air escaping from a nozzle the air is not moving in random directions. Instead it's escaping from the nozzle in something like a cone: Suppose we put a box in the path of the escaping gas, and we measure the pressures $P_1$ to $P_4$ on the four sides of the box. If the box were in a gas canister, as we talked about above, all four pressures would be the same because the gas molecule velocities are random so on average equal numbers of gas molecules per second would hit all four sides. 
But with the gas from the nozzle this clearly isn't true because far more gas molecules will hit side 1 than sides 2, 3 and 4. The pressure $P_1$ would be much greater than $P_2$, and $P_3$ and $P_4$ would be close to zero. So this is why I started the answer with "it depends". You can't simply define a single pressure in a gas stream from a nozzle because the pressure you measure will depend on how you orientate your pressure gauge. Having said this, the obvious response is to ask what is the pressure $P_1$. The answer is that the number of gas molecules per unit area will decrease as the inverse square of the distance from the nozzle, so the pressure will be: $$ P_1 \propto \frac{1}{d^2} $$ where the constant of proportionality will be determined by the geometry of the gas flow from the nozzle.
{ "domain": "physics.stackexchange", "id": 11505, "tags": "fluid-dynamics, pressure, vacuum" }
Velocity addition as a special case of change of reference frame
Question: In this question, I want to restrict the discussion to classical mechanics as understood before 1900; that is, to exclude any discussion of relativity (however, if there is a neat generalization I would be eager to hear about it). As I go back and reread a classical mechanics textbook, I am again struck by how opaque solutions involving relative velocities and velocities in different frames are. As is well-known, if $\textbf{V}$ is the velocity of a particle in frame $S$ and $\textbf{V}_0$ is the velocity of frame $S'$ which moves rigidly relative to $S$, then the relative velocity of the particle in $S'$ is $$\textbf{v}' = \textbf{V}-\textbf{V}_0$$ My question is, is this just a special case of how we "translate" between different frames? As per this question/answer, the most general relationship that we can have between the position of a particle as expressed in two different frames (neither of which need be inertial -- this is just a statement about how different vectors are related) is \begin{equation} \mathbf{R}(t)=\mathbf{R}_{0}(t)+\mathbf{r}(t)=\mathbf{R}_{0}(t)+\Bbb{S}(t)\:\mathbf{r}^{\prime}(t) \end{equation} where I here use the notation of the linked answer. Then do I obtain the rule for addition of velocities by simply differentiating the above and solving for $\textbf{v}' \equiv d\textbf{r}'/dt$? That is, using the standard dot notation for time derivatives, \begin{equation} \dot{\mathbf{R}}(t)=\dot{\mathbf{R}}_{0}(t)+\dot{\Bbb{S}}(t)\:\mathbf{r}^{\prime}(t)+\Bbb{S}(t)\:\dot{\mathbf{r}}^{\prime}(t) \end{equation} and solving for $\dot{\mathbf{r}}^{\prime}(t)$? The presence of $\mathbf{r}^{\prime}(t)$ seems to obscure this. In the special case I mentioned at the start of the question wherein $S'$ moves rigidly so that $\Bbb{S} \equiv id$ I seem to recover the right answer, but I ask this question because I'm not sure if I'm missing the bigger picture somehow. 
These details don't seem to be mentioned as explicitly/mathematically as I would like in my textbook (Taylor's Classical Mechanics). Answer: To link it with the common formula, since $S$ is orthogonal, it satisfies: $$ SS^T=1 $$ Taking the derivative: $$ \dot SS^T+S\dot S^T=0 $$ so $\dot SS^T$ is skew symmetric. In 3D, a skew symmetric operator can be uniquely represented as the cross product by a vector, so there exists $\omega$ such that: $$ \dot Sr’=\omega\times r $$ Your formula is therefore: $$ \dot R=V_0+\omega\times r+v’ $$ where I’ve set $v’=S\dot r’$ the velocity of the particle in the second frame (converted to the first frame), and $V_0=\dot R_0$. You then define $\omega$ as the angular velocity of the second frame with respect to the first frame. This is a standard definition of angular velocity (perhaps not for engineers but for mathematicians/physicists it is and is related to the more general study of Lie algebra and Lie groups). Intuitively, $\dot S$ captures the velocity part, while the $S^T$ allows you to stay in the first frame. Note that $S^T\dot S$ is also skew symmetric. It corresponds to the angular velocity in the second frame $\Omega$: $$ \dot S r’=S(\Omega \times r’) $$ Note that both antisymmetric operators are related by a conjugation via $S$, or equivalently $\omega=S\Omega$. Hope this helps.
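The skew-symmetry of $\dot{\Bbb S}\Bbb S^T$ and the extraction of $\omega$ can be checked numerically. Here is a finite-difference sketch (an editor's illustration, not from the original posts) for a rotation about the $z$-axis at unit angular rate, using the standard identification $W = \dot S S^T$, $\omega = (W_{32}, W_{13}, W_{21})$:

```python
import math

def rot_z(t):
    """Rotation matrix S(t) about the z-axis with angle theta(t) = t."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def omega_from_rotation(S_of_t, t, h=1e-6):
    """Angular velocity from W = S'(t) S(t)^T, which is skew-symmetric."""
    Sp, Sm, S = S_of_t(t + h), S_of_t(t - h), S_of_t(t)
    Sdot = [[(Sp[i][j] - Sm[i][j]) / (2.0 * h) for j in range(3)]
            for i in range(3)]
    St = [[S[j][i] for j in range(3)] for i in range(3)]  # transpose
    W = matmul(Sdot, St)
    # W encodes the cross product: W r = omega x r, with
    # W = [[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]]
    return (W[2][1], W[0][2], W[1][0])
```

For rot_z the recovered angular velocity is (0, 0, 1) at any time, as expected for a unit-rate rotation about $z$.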
{ "domain": "physics.stackexchange", "id": 94609, "tags": "newtonian-mechanics, reference-frames, inertial-frames, velocity, galilean-relativity" }
In a planetary gearset is it possible to move both carrier and ring gear at the input speed while holding sun gear stationary?
Question: A friend and I were in a discussion and can't seem to find the answer to the question above. I said it was possible and he said it isn't, and now we are both confused, so could anyone help here? Also, would it be possible to spin the carrier and sun gear while holding the ring gear stationary? Answer: No, if you hold the sun gear the carrier will turn at a slower speed than the ring gear. You could have a setting where the ring gear is turning at the same speed as the input speed, but then the carrier will turn slower, or else it will break, because it is being spun by the ring gear around the sun gear. And as to the second part of your question, again, if you hold the ring gear, the sun gear and carrier will spin at different speeds. Here is a YouTube clip: planetary gear
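The answer can be checked with the Willis (fundamental planetary train) equation, $(\omega_{sun}-\omega_c)/(\omega_{ring}-\omega_c) = -Z_{ring}/Z_{sun}$. A small sketch (an editor's illustration, not from the original posts; the tooth counts below are made-up example values):

```python
def carrier_speed(sun_speed, ring_speed, z_sun, z_ring):
    """Willis equation: (w_sun - w_c) / (w_ring - w_c) = -z_ring / z_sun.

    Rearranged for the carrier speed:
    w_c = (w_sun * z_sun + w_ring * z_ring) / (z_sun + z_ring).
    """
    return (sun_speed * z_sun + ring_speed * z_ring) / (z_sun + z_ring)
```

With the sun held (0 rpm) and the ring driven at 100 rpm on a 30/90-tooth train, the carrier turns at 75 rpm, always slower than the ring, as the answer states; holding the ring and driving the sun at 100 rpm likewise gives a carrier speed (25 rpm) different from the sun's.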
{ "domain": "engineering.stackexchange", "id": 2844, "tags": "mechanical-engineering" }
Does the Barycenter shift/combine when 2 planets line up with the sun?
Question: On August 1st-2nd, Earth & Saturn lined up together in relation to the sun. Here is a website that discusses it: https://www.healththoroughfare.com/science/saturn-will-line-up-with-earth-and-the-sun-in-beautiful-celestial-show/34561 While the two planets were in alignment, did the Barycenter between the sun & both planets combine into 1 Barycenter? Or did both planets retain their usual Barycenters? Answer: The barycenter of two point masses is a mathematical concept. It's just the mass "weighted" average of their positions: $$\mathbf{r_{B,12}} = \frac{m_1 \mathbf{r_1} + m_2 \mathbf{r_2}}{m_1+m_2}$$ If you have three bodies you can also write $$\mathbf{r_{B,123}} = \frac{m_1 \mathbf{r_1} + m_2 \mathbf{r_2} + m_3 \mathbf{r_3}}{m_1+m_2+m_3}$$ Barycenters are just math tools that are useful when simplifying a real-world problem to something, well... simpler. In a complicated situation like the solar system, the barycenter of any one planet with the Sun isn't a very useful concept at all, unless that planet happens to be Jupiter, which is a real solar system bully. Does the Barycenter shift/combine when 2 planets line up with the sun? The Sun-Earth barycenter and Sun-Saturn barycenter just keep moving along pretty much independently of each other, to the same extent that the two planets just keep orbiting the much, much more massive Sun pretty much independently of each other. And of course the Sun also moves around the barycenter for the whole solar system in a squiggly sort of way as shown in the question Is the barycenter of the solar system usually outside of the sun? If you like, you can think of that residual motion as a bit like the average of the motions of all the individual planet's Sun-planet barycenters.
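The two formulas above can be evaluated directly; the sketch below uses rough solar-system masses (kg) and one-dimensional heliocentric positions (AU) purely for illustration:

```python
import numpy as np

def barycenter(masses, positions):
    """Mass-weighted average position of point masses."""
    m = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    return (m @ r) / m.sum()

# Approximate masses (kg) and distances (AU) along one axis, with
# Earth and Saturn lined up on the same side of the Sun.
m_sun, m_earth, m_saturn = 1.989e30, 5.972e24, 5.683e26
r_sun, r_earth, r_saturn = 0.0, 1.0, 9.58

b_se = barycenter([m_sun, m_earth], [r_sun, r_earth])      # Sun-Earth
b_ss = barycenter([m_sun, m_saturn], [r_sun, r_saturn])    # Sun-Saturn
b_all = barycenter([m_sun, m_earth, m_saturn],
                   [r_sun, r_earth, r_saturn])             # three-body

# The two pairwise barycenters stay distinct even in alignment...
assert not np.isclose(b_se, b_ss)
# ...and the three-body barycenter is just the combined weighted average,
# not a "merger" of the two pairwise ones.
assert np.isclose(
    b_all,
    (m_sun * r_sun + m_earth * r_earth + m_saturn * r_saturn)
    / (m_sun + m_earth + m_saturn),
)
```

Each pairwise barycenter is simply a number computed from two bodies, so alignment of a third body does not change it.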
{ "domain": "astronomy.stackexchange", "id": 5771, "tags": "gravity" }
How do I find specific planner configuration options?
Question: Basically, I'd like to re-ask this question. I would like to run some benchmarking tests using several of the planners available in ompl. I would also like to be able to experiment with different configuration options for those planners. But in order to populate ompl_planning.yaml, I need to know what those options are named, and presumably this will vary by planner type. How do I find this out? Originally posted by tom temple on ROS Answers with karma: 31 on 2011-11-04 Post score: 1 Answer: Currently, not all the options are listed in the yaml file itself, but you can look into the src/ompl_ros_planning_group.cpp file to find which planners and options are supported. I will try and get out a better yaml file or more information on the Wiki to help with that. Originally posted by Sachin Chitta with karma: 1304 on 2011-11-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 7186, "tags": "ros, ompl-planning" }
Expression of Bianchi identity in associated bundle
Question: Theorem 5.14.2 of Mathematical Gauge Theory by Hamilton states that the curvature form $F^A_M\in\Omega^2(M,Ad(P))$ satisfies the third form of the Bianchi identity $d_AF^A_M=0$ where $F^A_M$ is the form in $\Omega^2(M,Ad(P))$ given by the correspondence between horizontal $k$-forms of type Ad on $P$ with values in $\mathfrak{g}$ and forms on $M$ with values in $Ad(P)$. The proof of this is basically applying this isomorphism to the Bianchi identity $dF^A+[A,F^A]=0$. I am interested in how to show that $d_AF^A_M\in\Omega^2(M,Ad(P))$ actually corresponds to $dF^A+[A,F^A]$. It has to do with the fact that with respect to a local gauge $s:U\rightarrow P$ we can write $(d_A\omega)_s=d\omega_s+A_s\wedge \omega_s$ and the fact that for $1$-forms the wedge product "is the commutator". Any help is appreciated. Answer: To answer this question, it is useful to look at how the exterior covariant derivative $\mathrm{d}_{A}$ acts on forms on $P$ and not on $M$. Let us fix the following data. A smooth manifold $M$. A principal $G$-bundle $P$ over $M$ with $\mathfrak{g}=\mathrm{Lie}(G)$. A finite-dimensional representation $(V,\rho)$ of $G$ and let $E:=P\times_{\rho}V$ be the associated vector bundle. Let $A\in\Omega^{1}(P,\mathfrak{g})$ be a connection $1$-form. Now, there are two common ways to define the object $\mathrm{d}_{A}$, i.e. the exterior covariant derivative induced by $A$. One approach is to first of all define a connection $\nabla^{A}$ on $E$ using a parallel transport. Then, one defines $\mathrm{d}_{A}$ to be the exterior covariant derivative induced by $\nabla^{A}$. This approach is for example followed in Hamilton's book Mathematical Gauge Theory. However, there is another approach, which can be found in many other books (e.g. Baum's Eichfeldtheorie), which is equivalent.
In this approach, one first constructs a natural derivative on forms on $P$, which preserves horizontal forms, and then one induces $\mathrm{d}_{A}$ on forms on $M$ using the usual isomorphism. The connection $\nabla^{A}$ is then recovered as the operator $\mathrm{d}_{A}$ acting on zero-forms. In more detail: First of all, we would like to define a natural derivative on $\Omega^{k}(P,V)$. A natural candidate is to take the usual exterior derivative $\mathrm{d}:\Omega^{k}(P,V)\to\Omega^{k+1}(P,V)$. However, in the language of principal bundles, it is more natural to have an object preserving horizontal vector fields. Hence, we consider \begin{align*} \mathrm{D}_{A}:\Omega^{k}(P,V)\to\Omega^{k+1}(P,V) \end{align*} which we define for all $k\in\mathbb{N}_{0}$ and for all $\omega\in \Omega^{k}(P,V)$ by \begin{align*} (\mathrm{D}_{A}\omega)_{p}(v_{1},\dots,v_{k+1}):=(\mathrm{d}\omega)_{p}(\mathrm{pr}_{H}(v_{1}),\dots,\mathrm{pr}_{H}(v_{k+1})) \end{align*} for all $p\in P$ and $v_{1},\dots,v_{k+1}\in T_{p}P$, where $\mathrm{pr}_{H}:TP\to H$ denotes the projection onto the Ehresmann connection $H$ defined by $A$. Now, it turns out that this operator has the following very nice property: $D_{A}$ maps $\Omega^{k}_{\mathrm{hor}}(P,V)^{\rho}$ to $\Omega^{k+1}_{\mathrm{hor}}(P,V)^{\rho}$. Furthermore, for $\omega\in\Omega^{k}_{\mathrm{hor}}(P,V)^{\rho}$, the following formula holds $$D_{A}\omega=\mathrm{d}\omega+A\wedge_{\rho}\omega,$$ where $\wedge_{\rho}$ is the natural wedge product $\wedge_{\rho}:\Omega^{k}(P,\mathfrak{g})\times\Omega^{l}(P,V)\to\Omega^{k+l}(P,V)$ defined by the pairing $\langle\cdot,\cdot\rangle_{\rho}:\mathfrak{g}\times V\to V,(X,v)\mapsto \rho_{\ast}(X)v$, i.e. $A\wedge_{\rho}\omega:=\sum_{a,b}(A^{a}\wedge\omega^{b})\langle T_{a},e_{b}\rangle_{\rho}$ for a basis $(T_{a})_{a}$ of $\mathfrak{g}$ and a basis $(e_{a})_{a}$ of $V$.
Secondly, thanks to the natural isomorphism $$\Omega^{k}_{\mathrm{hor}}(P,V)^{\rho}\cong\Omega^{k}(M,E)$$ the operator $D_{A}$ induces an operator $\mathrm{d}_{A}:\Omega^{k}(M,E)\to\Omega^{k+1}(M,E)$. This is the same operator as defined using the strategy of parallel transports used in Hamilton's book. Usually, people denote both $D_{A}$ and $\mathrm{d}_{A}$ by the same symbol, but I wanted to avoid confusions. In particular, you see that under the isomorphism above, you get the following correspondence: $$D_{A}\omega=\mathrm{d}\omega+A\wedge_{\rho}\omega\quad \mapsto \quad\mathrm{d}_{A}\omega_{M}$$ where $\omega_{M}\in\Omega^{k}(M,E)$ is the form obtained from $\omega\in\Omega^{k}_{\mathrm{hor}}(P,V)^{\rho}$ via the isomorphism above. Now, if you take $(\rho,V)$ to be the adjoint representation $(\mathrm{Ad},\mathfrak{g})$ of $G$ (and hence $\mathrm{Ad}_{\ast}=\mathrm{ad}$), you get the formula $$D_{A}\omega=\mathrm{d}\omega+[A,\omega]$$ which is a global version of the local formula you were mentioning. In particular, you see that $\mathrm{d}_{A}F_{M}^{A}$ really corresponds to $D_{A}F^{A}=\mathrm{d}F^{A}+[A,F^{A}]$ as you claimed.
{ "domain": "physics.stackexchange", "id": 97738, "tags": "differential-geometry, field-theory, gauge-theory" }
How to name LiN(CF2SO3)2?
Question: I found this compound in the list of inorganic lithium salts from this article on ScienceDirect. It is said to be of the sulfonate group. Let's focus on the anion alone. I'm not sure if it can be bis(difluoromethanesulfonyl)imide because there is already a compound named bis(trifluoromethanesulfonyl)imide / bistriflimide ($\ce{LiN(CF3SO2)2}$). The structure is given on Wikipedia. I've also seen other compounds like $\ce{LiCF3SO3}$ (lithium trifluoromethanesulfonate / lithium triflate) and $\ce{LiC(SO2CF3)3}$ (lithium tris(trifluoromethanesulfonyl)methide / lithium triflide), but nothing with $\ce{CF2}$. The sulfonate ions are of the form R−S(=O)2−O−, and the sulfonyl group is of the form R−S(=O)2−R. The only ways that make sense to me are -CF2-(SO2-O)-(O-SO2)-CF2- and -SO3-CF2-CF2-SO3-, both of which seem to form cyclic amides (the two ends connected to N). But I'm not sure if such a molecule will be valid or how to name them. Edit: and one more: CF2=(S=O)-O-N-O-(S=O)=CF2. But this seems even more weird. Edit 2: I would say, since $\ce{SO3^{2-}}$ is sulfonate, and $\ce{SO2^{2-}}$ is sulfoxyl, contrary to the Wikipedia page, $\ce{LiN(SO2CF3)3}$ would be lithium tris(trifluoromethanesulfoxyl)imide as opposed to lithium tris(trifluoromethanesulfonyl)imide. That would make $\ce{LiN(CF3SO3)3}$ lithium tris(trifluoromethanesulfonyl)imide. When I Google for it, different sources mention lithium tris(trifluoromethanesulfonyl)imide as $\ce{LiN(CF3SO3)3}$ or $\ce{LiN(CF3SO2)3}$, but I don't see any other names. This is probably due to the short name lithium bistriflimide / lithium triflimide (Li-TFSI). Edit 2.1: I forgot that the sulfonyl group is of the form R-SO2-R'. So maybe Lithium tris(trifluoromethanesulfonyl)imide is correct. What would be the name of this compound?
Answer: From the answer by @NicolauSakerNeto to my other related question (Is LiN(CF3SO3)2 is valid, being similar to Lithium Bistriflimide (LiN(CF3SO2)2)?), it seems that the compound is not as popular as it seems from the papers. He mentioned that the -O-N-O- bridge appears unstable (although I'm unsure why), and the compound would re-arrange itself into something else, which would make sense for $\ce{LiN(CF3SO2)2}$ to form instead. He suggested that I check the CAS database, and in it, there are only two compounds with the $\ce{CF3SO3}$ backbone and one reference from an obscure patent, while there were more than 14000 compounds with the $\ce{CF3SO2}$ backbone and over 40000 references. But checking the CAS database is impractically expensive for anyone who does not have a commercial or academic license, so as it stands, we'll just have to rely on someone else with access to the database to check it for us. Update: I found some compounds with a related structure ($\ce{N(CF3SO3)2}$) by using ChemSpider, with associated CAS numbers, but @NicolauSakerNeto still mentioned that those compounds are suspicious. But in case the compound was real, it would be named N,N-Bis(trifluoromethylsulfonyloxy)amide. Again, this compound may not be a valid one.
{ "domain": "chemistry.stackexchange", "id": 18014, "tags": "organic-chemistry, nomenclature, ionic-compounds, erratum" }
What do typical results from inductively coupled plasma mass spectrometry (ICP-MS) look like, and how do I interpret them?
Question: I have a degree in chemical engineering and remember having seen an inductively coupled plasma mass spectrometer (ICP-MS) during the laboratory part of a chemistry course I took while at university, but that was many years ago. Now I would like to know what a typical output from an ICP-MS looks like and how I interpret the results. There is no picture at the Wikipedia page but some frequency-domain-like plots can be found with a Google search, and I understand the Fourier transform and how it is used in number theory. Are the ICP-MS results anything like the spectrum from the Fourier transform of some signal? Answer: I am not an expert in ICP-MS, but an ICP-MS should be considered an ICP ion source coupled with a mass spectrometer. There are some key issues raised by the coupling between both instruments due to the high temperature of the ICP source as well as due to the fact that one wants to analyse atomic ions and remove any interfering molecular ions (mostly oxides, but other molecules and clusters are possible). Therefore, to attempt an answer to your question, the standard output of an ICP-MS will be presented as a mass spectrum (intensity versus m/z or intensity versus amu). There are a couple of sample spectra in the brochures published by instrument makers, see for instance the Bruker brochure or a Thermo application note. How this mass spectrum is obtained from the detector will lead to the type of data that is acquired: For a quadrupole mass filter, as well as for magnetic sector instruments, two modes of operation are possible. Either a single mass is selected, and therefore one will only get a trace of the abundance of a given mass as a function of time (chromatography or on-line sample analysis). Otherwise the mass filter can be scanned, leading to a mass spectrum. For time-of-flight detectors, the mass spectrum is built by the measurement of a flight time for an ion to reach the detector.
The time-dependent detector signal is therefore converted into a mass spectrum. Finally, there have been some attempts to couple an ICP ion source with a Fourier-transform mass spectrometer. As far as I know, these are not commercial instruments. In such instruments, the ions oscillate at m/z-dependent frequencies, and the recorded image current is converted to a spectrum through the use of a Fourier transform.
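For the time-of-flight case, the conversion from flight time to m/z can be sketched with the textbook relation $t = L\sqrt{m/(2zeU)}$, i.e. $m/z \propto t^2$; the drift length and accelerating voltage below are illustrative values, not taken from any specific instrument:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def flight_time(mz, length=1.0, voltage=5000.0):
    """Flight time (s) of an ion with mass-to-charge ratio mz (u per charge)
    over a field-free drift `length` after acceleration through `voltage`."""
    m_over_q = mz * AMU / E_CHARGE           # kg/C
    return length * math.sqrt(m_over_q / (2.0 * voltage))

def mz_from_time(t, length=1.0, voltage=5000.0):
    """Invert flight_time: recover m/z (u per charge) from a flight time."""
    return (t / length) ** 2 * 2.0 * voltage * E_CHARGE / AMU

# Round-trip check for a few isotope masses (in u, singly charged ions)
for mz in (7.0, 56.0, 208.0):        # e.g. Li-7, Fe-56, Pb-208
    t = flight_time(mz)
    assert abs(mz_from_time(t) - mz) < 1e-9

# Heavier ions arrive later, so the time axis maps monotonically to m/z
assert flight_time(7.0) < flight_time(56.0) < flight_time(208.0)
```

This is why a TOF trace can be relabeled directly as a mass spectrum once the instrument constants are calibrated.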
{ "domain": "chemistry.stackexchange", "id": 66, "tags": "mass-spectrometry" }
Electric flux through a cylinder's side
Question: We have a cylinder with radius $R$, as shown above, and a point charge $Q$. We want to calculate the electric flux passing through the cylinder's side surface, excluding its caps. Is there a way to do this with Gauss's formula? Or should I double integrate? Answer: The total flux out of the cylinder must be zero. You can use Gauss's law (reduced with solid angles) to find the flux in and out through spherical caps at each end of the cylinder. The difference must go out through the sides.
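A numeric sketch of the answer's recipe, assuming the point charge sits on the cylinder's axis outside the cylinder and units with $\epsilon_0 = 1$, $Q = 1$ (the geometry is illustrative, since the original figure is missing). The flux through each cap follows from its solid angle, $\Phi_{\text{cap}} = Q\,\Omega/(4\pi\epsilon_0)$, and the side flux is fixed by the total flux being zero:

```python
import numpy as np

Q, EPS0 = 1.0, 1.0        # unit charge and epsilon_0 = 1 (assumed units)
R = 1.0                   # cylinder radius
a, b = 0.5, 2.0           # caps at z = a and z = b; charge at the origin, outside

def cap_solid_angle(z):
    """Solid angle subtended at the origin by a disk of radius R at height z."""
    return 2.0 * np.pi * (1.0 - z / np.hypot(z, R))

# Outward flux through each cap (the bottom cap's outward normal points to -z)
flux_bottom = -Q * cap_solid_angle(a) / (4.0 * np.pi * EPS0)
flux_top = Q * cap_solid_angle(b) / (4.0 * np.pi * EPS0)
side_predicted = -(flux_bottom + flux_top)   # total flux must vanish

# Direct check: integrate E . n over the lateral surface (E . n depends only on z)
z = np.linspace(a, b, 20001)
integrand = Q * R / (4.0 * np.pi * EPS0 * (R**2 + z**2) ** 1.5) * 2.0 * np.pi * R
side_direct = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

assert abs(side_direct - side_predicted) < 1e-6
assert abs(flux_bottom + flux_top + side_direct) < 1e-6
```

The brute-force surface integral agrees with the solid-angle bookkeeping, so no double integral over angles is actually needed.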
{ "domain": "physics.stackexchange", "id": 77567, "tags": "homework-and-exercises, electromagnetism, electricity, electric-fields, gauss-law" }
What is the physical unit of the energy calculated using VQE on Qiskit?
Question: Suppose there is any arbitrary ansatz producing variational energy for any arbitrary Hamiltonian. What is the physical unit corresponding to the energy? Since numerical algorithms only output a numeric value, the unit is missing. I know for quantum chemistry, it is usually taken to be Hartree. However, not all problem Hamiltonians are from chemistry. Answer: Short answer: The units of your result will be the same as the units of your input problem to VQE (units of your Hamiltonian coefficients). Long answer: In a VQE calculation, some problem Hamiltonian is mapped to a representation that is suitable for implementation in a quantum circuit. Often, this means mapping all Hamiltonian operators to strings of Pauli operators with appropriate coefficients. Then there are two cases to distinguish. If your original problem Hamiltonian corresponds to a physical system, it will have coefficients with units of energy. Take the generic kinetic-energy term of a Hamiltonian: $-\hbar^2 \nabla^2 / 2m$ has units of energy ($\mathrm{J}$). Sometimes energies may be expressed in alternative units (e.g. Hartree, eV, etc.) and it is your choice which unit system you use (see here for an example of a ground state calculation of an electronic structure problem in Hartree, but also of a vibrational problem in $cm^{-1}$). Now, if you do a VQE calculation, the coefficients of your Pauli operators in your qubit-mapped Hamiltonian will have the same units as the coefficients of your original problem Hamiltonian. Your results will then be expressed in the same units. VQE is essentially an optimization of a cost function. You can also use VQE to optimize an arbitrary cost function that you construct such that it describes an abstract problem (see for instance the Max-Cut problem). In that case, your Hamiltonian coefficients will have arbitrary units and so will the VQE result.
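The point that units simply ride along on the coefficients can be illustrated with a tiny two-qubit Hamiltonian: converting every coefficient from, say, Hartree to eV rescales the computed ground-state energy by exactly the same factor. This is a generic sketch with made-up coefficients, not tied to any VQE framework:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def hamiltonian(coeffs):
    """Toy 2-qubit Hamiltonian: c0*ZZ + c1*XI + c2*IX as a 4x4 matrix."""
    c0, c1, c2 = coeffs
    return (c0 * np.kron(Z, Z) + c1 * np.kron(X, I2) + c2 * np.kron(I2, X))

coeffs_hartree = (0.5, -0.3, 0.2)     # pretend these are in Hartree
HARTREE_TO_EV = 27.211386245988       # unit-conversion factor

e_hartree = np.linalg.eigvalsh(hamiltonian(coeffs_hartree)).min()
e_ev = np.linalg.eigvalsh(
    hamiltonian(tuple(c * HARTREE_TO_EV for c in coeffs_hartree))
).min()

# The ground-state energy carries the units of the coefficients:
assert np.isclose(e_ev, e_hartree * HARTREE_TO_EV)
```

Exact diagonalization stands in here for the variational minimum; the unit argument is identical for a VQE estimate of the same quantity.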
{ "domain": "quantumcomputing.stackexchange", "id": 2619, "tags": "qiskit, vqe" }
How to dump genes from GenBank in GFF3 format?
Question: This question has also been asked on BioStars If I look at this record in GenBank I see about 6k genes: https://www.ncbi.nlm.nih.gov/nuccore/CM000760?report=gbwithparts I'd really like to be able to dump those genes in GFF3 format, but I'm guessing I can't do that in a single step, so... How to dump all genes from a GenBank record in 'whatever' format? How to convert 'whatever' format to GFF3? Answer: Turns out you can just grab the GFF3 from the NCBI's FTP site! ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/003/195/GCA_000003195.3_Sorghum_bicolor_NCBIv3/ See: https://www.ncbi.nlm.nih.gov/genome/doc/ftpfaq/#files Or "Access the data" on the right here: https://www.ncbi.nlm.nih.gov/assembly/GCF_000003195.3
{ "domain": "bioinformatics.stackexchange", "id": 123, "tags": "gff3, data-download" }
Is there symmetry in 2D stress tensor in linear elastic fracture mechanics?
Question: Assumptions: Cross terms in the strain tensor are defined as equal, $\varepsilon_{xy} = \varepsilon_{yx}$. Pure mode I crack. Far from the crack tip, the material is purely elastic and we are way below yield stress => $\varepsilon_{ij} \propto \sigma_{ij}$. The cross term $\sigma_{yx}$ in linear elastic fracture mechanics (LEFM) contains the factor $\sin{\frac{\theta}{2}}$. Now, the cross term is not symmetric with respect to $\theta$ (contains sin). The material however should be symmetric and thus $\varepsilon_{xy} = -\varepsilon_{yx}$. What am I missing? What is the difference between positive and negative shear stress? LEFM formulas can be found e.g. on page 6 of: http://www.public.iastate.edu/~gkstarns/ME417/LEFM.pdf Answer: You are confusing the global coordinate $\theta$ with the local direction. Both the strain and the stress tensors are symmetric locally, that is, index-wise: $\epsilon_{ij}=\epsilon_{ji}$ and likewise for the stress. The symmetry of the strain is by definition and that of the stress is due to torque balance. This is true for (almost) all systems. However, the stress and strain, as fields, exhibit a global symmetry, which results from the symmetry of the loading, and is system-specific: $$ \sigma_{xx}(r,\theta)= \sigma_{xx}(r,-\theta) \qquad \sigma_{yy}(r,\theta)= \sigma_{yy}(r,-\theta) \qquad \sigma_{xy}(r,\theta)=- \sigma_{xy}(r,-\theta)$$ For mode I cracks (for mode II it's the opposite).
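The claimed global symmetry can be checked numerically with the standard Williams mode-I near-tip fields (with $K_I$ and $r$ set to 1; this is a sketch of the textbook formulas, so double-check the signs against your own reference):

```python
import numpy as np

def mode_I_stress(theta, K=1.0, r=1.0):
    """Near-tip mode-I stress components as a function of polar angle theta."""
    pref = K / np.sqrt(2.0 * np.pi * r)
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    s3, c3 = np.sin(3.0 * theta / 2.0), np.cos(3.0 * theta / 2.0)
    sxx = pref * c * (1.0 - s * s3)
    syy = pref * c * (1.0 + s * s3)
    sxy = pref * c * s * c3
    return sxx, syy, sxy

theta = np.linspace(-0.95 * np.pi, 0.95 * np.pi, 401)
sxx, syy, sxy = mode_I_stress(theta)
sxx_m, syy_m, sxy_m = mode_I_stress(-theta)

assert np.allclose(sxx, sxx_m)    # sigma_xx(r, theta) =  sigma_xx(r, -theta)
assert np.allclose(syy, syy_m)    # sigma_yy(r, theta) =  sigma_yy(r, -theta)
assert np.allclose(sxy, -sxy_m)   # sigma_xy(r, theta) = -sigma_xy(r, -theta)
```

The antisymmetry of $\sigma_{xy}$ in $\theta$ is a property of the field, while at each fixed point the tensor itself still satisfies $\sigma_{xy} = \sigma_{yx}$, which is the answer's distinction.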
{ "domain": "physics.stackexchange", "id": 2330, "tags": "stress-energy-momentum-tensor, elasticity, continuum-mechanics, stress-strain" }
Singleton design pattern
Question: I am a beginner in PHP-OOP and design patterns. I have got this basic piece of code in Singleton Pattern and I changed it in order to understand its behavior. I can see that it works as expected. However, it is not clear for me exactly how it works. Can anyone explain me? So it is working! My doubt is "Whis $inst is not always null in the if since I am defining it null before entering the if"? <?php final class UserFactory // "final" prevents child classes from overriding a method by prefixing the definition { private $state=1; /** * Call this method to get singleton * * @return UserFactory */ public static function Instance() { static $inst = null; if ($inst === null) { $inst = new UserFactory(); } return $inst; } private function __construct() { } public function addToState(){ $this->state=$this->state+1; } public function getState(){ return $this->state; } } // $myFactory=new UserFactory(); //Throws an error $myFactory=UserFactory::Instance(); print_r($myFactory); echo "<br>"; echo 'myState: '.$myFactory->getState()."<br>"; $myFactory->addToState(); echo 'myState: '.$myFactory->getState()."<br>"; $myFactory2=UserFactory::Instance(); print_r($myFactory2); ?> The output is: UserFactory Object ( [state:UserFactory:private] => 1 ) myState: 1 myState: 2 UserFactory Object ( [state:UserFactory:private] => 2 ) UserFactory Object ( [state:UserFactory:private] => 2 ) Answer: You're misunderstanding how the static keyword works. The first time it is encountered for a specific variable in the method it defines the $inst variable as null, then checks to see if it is null (it is :p) and then sets it to a new object of that class. The second time, the static definition isn't used because the static variable has already been defined. It then checks to see if it is null, it isn't, it has already been set to a new object of the class, so it skips the if and returns the object.
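The behaviour of PHP's static local (initialised once, then persisting across calls) can be mimicked in other languages; here is a rough Python analogue of the same lazy-initialisation guard, with names made up for illustration:

```python
class UserFactory:
    _inst = None          # plays the role of PHP's `static $inst = null;`

    def __init__(self):
        self.state = 1

    @classmethod
    def instance(cls):
        # Only the first call finds the guard unset and builds the object;
        # later calls skip the branch and return the same instance.
        if cls._inst is None:
            cls._inst = cls()
        return cls._inst

a = UserFactory.instance()
a.state += 1
b = UserFactory.instance()
assert a is b            # both names point at the one shared instance
assert b.state == 2      # mutations survive across lookups
```

The key idea in both languages is the same: the guard variable lives outside any single call, so "define as null" happens only once, not on every invocation.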
{ "domain": "codereview.stackexchange", "id": 6304, "tags": "php, beginner, object-oriented, singleton, static" }
Physics of color: why do visible colors absorb all colors except the color itself?
Question: It's funny I cannot answer this question, but I really don't have a very good physical intuition for how color works. I know that sunlight and (say) light from a lightbulb is a superposition of all colors of the EM spectrum. When I pass this "light" into a prism, the longer wavelengths are separated from the shorter wavelengths, resulting in a rainbow. However, I learned that if something is red, it is absorbing all/most colors except red. Our eyes are seeing the wavelength of red reflected from (say) a red shirt, which is cotton with red dye. So, this red dye is "reflecting" red light, and "absorbing" everything else. The color black absorbs most colors, while white reflects most colors. What is going on here physically? When I say "absorb" and "reflect", what does that actually mean in terms of electromagnetism? Answer: The answer to your question is the obverse of it: we assign a color to an object based on the wavelengths which are reflected to our eyes (or in the case of filters, transmitted to our eyes). That means other wavelengths are absorbed. The absorption of wavelengths is based, primarily, on the chemistry of the object. Red dye applied to cotton cloth is a chemical whose molecules absorb less red light than other wavelengths, hence the red wavelengths are more intense than other wavelengths in comparison to the light from other objects. Similarly for blue, green, yellow, etc. objects. Most colored objects don't absorb all the energy of other wavelengths; they just absorb less of certain wavelengths, and we assign a color name based on the modified mixture reaching our eyes. In fact, the "colors" surrounding each other can modify our interpretation of what color we see. (Search for "color optical illusions". There are fascinating examples.) Regarding absorb and reflect: they mean exactly what you think.
The energy of an EM wave is taken into a molecular structure and not released as the same wavelength (absorption) or it is released as the same wavelength (reflection or transmission).
{ "domain": "physics.stackexchange", "id": 24355, "tags": "electromagnetism, visible-light" }
Expand granges object different amounts upstream vs. downstream
Question: I am attempting to get gene regions and their immediate neighborhoods using Bioconductor GenomicRanges packages. It is very easy for me to extend the regions around genes if I want to extend it the same distance upstream and downstream as in this example: library(GenomicRanges) library(Homo.sapiens) genes = genes(Homo.sapiens) genes_5kb_flank = genes + 5000 And if I want to extend both upstream and downstream different amounts centered on the TSS I can use promoters(genes, upstream = 5000, downstream= 1000) But I am having trouble finding a function that is strand-aware and also chromosome length aware that allows me to extend the gene region a different amount upstream vs. downstream... For instance if I wanted a 5kb upstream flanking region and a 1kb downstream flanking region as is used for genomic region annotations on GREAT. Anyone know of a drop in function for this? something like promoters() but for the full gene region instead of only around the TSS? Answer: There doesn't appear to be a built in function for that, so here's a modified version of promoters(): expandRange = function(x, upstream=2000, downstream=1000) { strand_is_minus = strand(x) == "-" on_plus = which(!strand_is_minus) on_minus = which(strand_is_minus) start(x)[on_plus] = start(x)[on_plus] - upstream start(x)[on_minus] = start(x)[on_minus] - downstream end(x)[on_plus] = end(x)[on_plus] + downstream end(x)[on_minus] = end(x)[on_minus] + upstream x } You can then call expandedGenes = expandRange(genes, 5000, 1000).
{ "domain": "bioinformatics.stackexchange", "id": 580, "tags": "bioconductor, sequence-annotation, genomicranges" }
Difference between using WHERE and AND with ON in SQL
Question: I have a Sql query SELECT AccountMaster.AccountID, AccountMaster.AccountDesc, Settings.Description FROM AccountMaster INNER JOIN Settings ON AccountMaster.AccountID = Settings.AcID AND Settings.CompanyID = @CompanyId AND AccountMaster.CompanyID = @CompanyId So I thought I could use like this SELECT AccountMaster.AccountID, AccountMaster.AccountDesc, Settings.Description FROM AccountMaster INNER JOIN Settings ON AccountMaster.AccountID = Settings.AcID WHERE Settings.CompanyID = @CompanyId AND AccountMaster.CompanyID = @CompanyId What is the difference and which one is better to use? or is the two queries run differently? Answer: In this case with INNER JOIN there is no difference in result (OUTER JOIN would be different). Logically, they give the same result. From a readability perspective, it's better to separate JOIN and filter conditions. FROM AccountMaster INNER JOIN Settings ON --This is a JOIN condition, with no outside restriction (date, id, etc) AccountMaster.AccountID = Settings.AcID WHERE --these are filters, independent of the join Settings.CompanyID = @CompanyId AND AccountMaster.CompanyID = @CompanyId As you get into reporting or data warehouse type queries, the number of JOINs and filters will increase massively. Keeping them separate helps readability. Another point, when you read this out, the separation makes sense Give me rows from AccountMaster and Settings table where they match on AccountID and AcID. Then filter to this CompanyId Now, you might think "if I filter first it will be quicker".Maybe it will, but this is the job of the query optimiser. SQL is declarative: that is, you say what you want but not how to do it. The database engine (via the optimiser) will work out the best way to do it but honour your intention. Logically, the order of processing is 1. FROM 2. ON 3. JOIN 4. WHERE 5. GROUP BY 6. WITH CUBE/ROLLUP 7. HAVING 8. SELECT 9. DISTINCT 10. ORDER BY 11. TOP 12. 
OFFSET/FETCH But the optimiser may filter first (WHERE), but will maintain the logical order.
{ "domain": "codereview.stackexchange", "id": 29501, "tags": "sql, comparative-review" }
How do we know that bending of light around stars is due to bending of space-time and not diffraction?
Question: One question that popped up during the studies of special and general relativity (which I am forced to take unfortunately) is the following: How do we know that this is due to the bending of space-time and not just plain old 3rd semester diffraction. If you find this a silly question, downvote, otherwise consider the following picture. The Sun is the straightedge, the screen is the earth and the star is the point source. This is simplified to a great extent but the idea still holds (I think) Answer: We know this because the position of the apparent star is perfectly matching the GR calculations about bent spacetime, depending on a few things including the mass of the star (the one in between that bends spacetime, in your case the Sun). What you are describing, interference, would not depend on the same way on the mass, the density, stress-energy and a few more things as GR describes bent spacetime. There were numerous calculations and experiments like the Shapiro test and they all perfectly gave the matching numbers according to GR. Interference would not depend on the same things, for example interference would react differently on the size/mass ratio or density of the star, whereas in GR it really matters what your star's energy density, for example, is compared to its size, for example, a black hole in your case would have an interference of what? I believe that interference would not even work with a black hole.
{ "domain": "physics.stackexchange", "id": 57420, "tags": "general-relativity, optics, spacetime, curvature, diffraction" }
Visitor Pattern/Leaky Bucket variant implementation to run an operation at a certain interval
Question: My code is a variant on the Visitor Pattern and a "leaky bucket" variant. The goal is pretty straightforward: the "bucket" will collect a specified number of items (say, for example, 500) and then empty the queue, running some CPU-bound operation on ("visiting") each item. (Yes, I'm aware that there's probably existing code "out there" that does something similar to what I'm doing here - this code really isn't all that novel of an idea). I've read documentation on how to implement the Visitor pattern before but this is truthfully the first time I've actually implemented one. Is what I have below a "valid" way to implement this? Also, have I properly implemented the concurrency/thread safety? I think it should be free from race conditions but it's also possible that I can implement this more simply. /// <summary> /// "Overflowing bucket" implementation /// </summary> /// <typeparam name="T">Type of the data structure we're operating on</typeparam> /// <typeparam name="U">Type of the items in the data structure</typeparam> /// <remarks> /// The basic idea of this data structure is that it'll collect a certain number of items and then empty the queue. /// /// The "overflowing bucket" metaphor isn't perfect because every time that the "bucket" is "filled to the brim" or starts /// to "overflow" we just empty the whole thing. 
/// </remarks>
public class OverflowingBucket<T, U>
{
    #region Fields

    // Not volatile - this will only be accessed by the background thread
    private readonly T itemToActOn;

    // Not volatile - this will only be accessed by the background thread
    private readonly Action<T, U> visitorOperation;

    // This action runs after we empty the queue
    private readonly Action<T> afterAction;

    // Must be concurrent because we could add to the queue while an operation is in place
    private readonly ConcurrentQueue<U> queue;

    // Will be accessed by multiple threads
    private volatile bool inProgress = false;

    // Obviously used for locking
    private readonly object lockObj = new object();

    #endregion Fields

    #region Constructor

    /// <summary>
    /// Create a new Overflowing Bucket
    /// </summary>
    /// <param name="itemToActOn">Item that is "visited." Required.</param>
    /// <param name="visitorOperation">Visitor operation. Required.</param>
    /// <param name="afterAction">Action that occurs after each time that the queue is emptied. Optional.</param>
    /// <param name="leakyNumber">Number of items that the queue may contain before the bucket starts "leaking."</param>
    public OverflowingBucket(T itemToActOn, Action<T, U> visitorOperation, Action<T> afterAction, int leakyNumber = 50)
    {
        this.itemToActOn = itemToActOn;
        this.visitorOperation = visitorOperation;
        this.afterAction = afterAction;
        queue = new ConcurrentQueue<U>();
        LeakyNumber = leakyNumber;
    }

    #endregion Constructor

    #region Properties

    /// <summary>
    /// Get or set the number at which we run an iteration
    /// </summary>
    public int LeakyNumber { get; set; }

    /// <summary>
    /// Get a value indicating whether a "visit" is already in progress (i.e. we are actively emptying the queue)
    /// </summary>
    public bool InProgress
    {
        get { return inProgress; }
    }

    #endregion Properties

    #region Public Methods

    /// <summary>
    /// Add an item, emptying the queue if necessary
    /// </summary>
    /// <param name="item">Item to add to the "bucket"</param>
    /// <returns>Handle to await the outcome</returns>
    public async Task Add(U item)
    {
        queue.Enqueue(item);

        if (queue.Count >= LeakyNumber && !inProgress)
        {
            await Visit();
        }
    }

    /// <summary>
    /// Visit the data structure. This will ignore successive calls (i.e. calls that occur while a previous "visit" operation is running).
    /// </summary>
    /// <returns>Handle to await the boolean result; result is <c>true</c> if we successfully ran the
    /// operation and <c>false</c> if the operation failed or never ran</returns>
    public async Task<bool> Visit()
    {
        // If there's already an operation in progress, ignore the call
        if (!inProgress)
        {
            bool result = true;

            await Task.Run(
                delegate ()
                {
                    lock (lockObj)
                    {
                        // "Double check" pattern to prevent race conditions
                        if (!inProgress)
                        {
                            inProgress = true;

                            // Completely empty the queue
                            while (queue.Any())
                            {
                                U item;
                                bool success = queue.TryDequeue(out item);

                                if (success)
                                    visitorOperation(itemToActOn, item);
                                else
                                    result = false;
                            }

                            // The "afterAction" operation is optional, so only
                            afterAction?.Invoke(itemToActOn);

                            inProgress = false;
                        } // End double-check condition
                    } // Release lock
                }); // End Task

            return result;
        } // End if inProgress

        // TODO: Should we throw an exception if we're already Visiting?
        return false;
    } // End Visit method

    #endregion Public Methods
} // End OverflowingBucket class

Here is an example of code that I wrote to call this (to give a better sense of what I'm trying to do). It's probably not that great of an example because the operations in question aren't particularly CPU-bound (which is obviously the primary purpose of Task.Run operations), but it should still give an idea of what I'm trying to do.

private static void TestOverflowingBucket()
{
    var doc = new XmlDocument();
    doc.LoadXml("<QATestLog></QATestLog>");

    Action<XmlDocument, XmlElement> operation = delegate (XmlDocument document, XmlElement element)
    {
        document.DocumentElement.AppendChild(element);
    };

    Action<XmlDocument> afterOperation = xmlDoc => xmlDoc.Save("TestDocument.xml");

    var bucket = new OverflowingBucket<XmlDocument, XmlElement>(doc, operation, afterOperation);

    // AsyncPump is to correct some "oddities" in the way that async/await works in a console application
    // Source: https://blogs.msdn.microsoft.com/pfxteam/2012/01/20/await-synchronizationcontext-and-console-apps/
    AsyncPump.Run(async () =>
    {
        List<Task> tasks = new List<Task>();
        Random random = new Random();

        for (int i = 0; i < 1003; i++)
        {
            XmlElement newElement = doc.CreateElement("step");
            XmlAttribute newAttribute = doc.CreateAttribute("number");
            newAttribute.Value = i.ToString();
            newElement.Attributes.Append(newAttribute);

            // Don't await, it's OK for us to continue to add to the queue while an execution is happening
            // This will generate a compiler warning since Add is async and we don't await
            Task task = bucket.Add(newElement);

            // For added realism, wait some random time between adds
            await Task.Delay(random.Next(1, 1000));

            if (!task.IsCompleted)
                tasks.Add(task);
        }

        if (bucket.InProgress)
        {
            Task.WaitAll(tasks.ToArray());
        }
        else
        {
            // If there are any "left over" that weren't "picked up" by a previous Visit execution
            await bucket.Visit();
        }
    });
}

Answer: First of all, IMHO the code is well written and has some meaningful comments.

Visitor Pattern

Actually, I am not sure if the OverflowingBucket has something to do with the visitor pattern. In my eyes, the visitor pattern can be used to add logic to a data structure by implementing visitors. A visitor visits each element of the data structure and decides what to do depending on the element's type.
The OverflowingBucket can be used to add logic to an object for processing other items in a special manner. Therefore I wouldn't use the term "visitor pattern" here.

OverflowingBucket

Because it is not a visitor in my eyes, I would rename the method Visit to Process or something like that. Consider making the method Process private and adding another method ProcessPendingItems or something like that. The only case where the method Visit returns false is if queue.Any() is true and queue.TryDequeue(out item) is false. That should never happen in the current version of the class... therefore I would make the method void.

TestOverflowingBucket

The delay does not delay the Add-Task but the main thread. Adding Thread.Sleep() within the operation delegate would be more realistic. There is no need to check if the task is completed. You can just add them all to the tasks list.
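The threshold-flush idea behind OverflowingBucket is independent of C#. A minimal single-threaded Python sketch of the same "leaky bucket" behaviour (the names here are illustrative, not the reviewed API, and the concurrency handling of the original class is deliberately omitted):

```python
from collections import deque

class LeakyBucket:
    """Collects items and flushes them through `operation` once
    `leaky_number` items have accumulated (a sketch of the idea only)."""

    def __init__(self, target, operation, after_action=None, leaky_number=50):
        self.target = target
        self.operation = operation        # called as operation(target, item)
        self.after_action = after_action  # called as after_action(target) after a flush
        self.leaky_number = leaky_number
        self.queue = deque()

    def add(self, item):
        self.queue.append(item)
        if len(self.queue) >= self.leaky_number:
            self.flush()

    def flush(self):
        # Completely empty the queue, applying the operation to each item
        while self.queue:
            self.operation(self.target, self.queue.popleft())
        if self.after_action is not None:
            self.after_action(self.target)

log = []
bucket = LeakyBucket(log, lambda t, i: t.append(i), leaky_number=3)
for n in range(5):
    bucket.add(n)
flushed_early = list(log)   # flushed automatically when the third item arrived
bucket.flush()              # drain the leftovers, as the final Visit() call does
```

The automatic flush fires on the third add, and the explicit final flush plays the role of the trailing Visit() call in the test method above.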
{ "domain": "codereview.stackexchange", "id": 22207, "tags": "c#, timer, async-await, visitor-pattern" }
Small jumping bugs inside Swiss flat
Question: Saw these bugs inside the flat I'm renting (Switzerland), but I really can't figure out what kind they are. What I can say about them:
- They can do small jumps (up to 5cm I would say). It doesn't seem to be for locomotion, but as a means of escape when I try to catch one.
- Size: about 1.5mm for the "big" ones. They are of various sizes down to 0.25mm for the really tiny ones (approximately; never managed to get one without crushing it). They behave much the same as the big ones (including the jumps) and are whiter in color.
- They don't seem to live as a colony or in a nest (or I simply didn't find it), but sparsely on the wood floor, often near the baseboards. Found some on the walls, but those were the exceptions. They don't seem to like the bathroom.
- Rather delicate; can easily be crushed.
- Seem more active in the evening and at night, but don't mind going out during the day.
- I live alone, without pets, but I don't remember having been bitten. I guess they don't feed on blood.
- Seem to have long antennas (about half their body size) and small wings.

Side note: the flat is on the 3rd floor of a recent building (finished 14 months ago). I'm the first tenant. An exterminator came and quickly classified them as some kind of flea. He put some pesticide on the baseboards, and their number is now quickly decreasing. But I'm still curious. My first thought was "fleas" (because of the jumps), but I deemed them too squishy and not "vampire" enough for that. Also, I don't think that (what I assume to be) the young ones should be able to jump. Then I thought "springtails", but I didn't see any furcula (granted, I don't have good eyesight) and I don't think they are supposed to have wings. One of them was nice enough to jump on some tape. Photos (images not preserved): top view on a ruler; front/side view; blowing on it to open the wings. YouTube video of one of them. Answer: I believe these are some sort of Psocoptera, "booklice", even though they don't really jump much.
Could it be that they ran or flew a little bit instead? I won't try an identification of the species, but in general booklice like damp places and most of them aren't considered serious pests. Example of a booklouse: Slender legs and antennae, wings, and general appearance close to your specimens. (Image from Wikipedia)
{ "domain": "biology.stackexchange", "id": 7325, "tags": "species-identification, entomology" }
Little log engine in C
Question: I programmed a little log engine in C I plan to use in my project and maybe some others in future. I am a very novice C programmer and would like to have feedback of some experienced ones on this. It spans a few files, but I've joined them so it's easier to compile. Thank you.

#include <stdarg.h>
#include <syslog.h>
#include <assert.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include "config.h"

/* this is log.h */
#define TPL_IDENT PACKAGE_NAME

enum tp_log_level {
    TPL_DEBUG,
    TPL_INFO,
    TPL_ERR,
    TPL_EMERG,
};

enum tp_log_mode {
    TPLM_SYSLOG,
    TPLM_FILE,
};

void tp_log_init(int mode, int level, int fd);
void tp_log_write(int level, const char *fmt, ...);
void tp_log_close(void);
/* end of log.h */

/* function from pio.c I use */
ssize_t tp_write(int fd, const void *buf, size_t len)
{
    ssize_t ret, wlen = 0;
    const char *ptr;

    ptr = buf;
    while (wlen != len) {
        ret = write(fd, ptr, len-wlen);
        if (ret == -1)
            if (errno == EINTR)
                continue;
            else
                return -1;
        wlen += ret;
        ptr += ret;
    }
    return wlen;
}

/* start of log.c */
static int log_fd = -1;
static int log_level;
static int log_mode;

static const char *level_txt[] = { "DEBUG", "INFO", "ERROR", "EMERG" };
static const int level_syslog[] = { LOG_DEBUG, LOG_INFO, LOG_ERR, LOG_EMERG };

void tp_log_init(int mode, int level, int fd)
{
    assert(mode == TPLM_SYSLOG || mode == TPLM_FILE);
    assert(mode == TPLM_FILE ? fd >= 0 : 1);
    assert(level >= TPL_DEBUG && level <= TPL_EMERG);

    switch (mode) {
    case TPLM_FILE:
        log_fd = fd;
        break;
    case TPLM_SYSLOG:
        openlog(TPL_IDENT, LOG_PID|LOG_CONS, LOG_DAEMON);
        break;
    }
    log_level = level;
    log_mode = mode;
}

#define MAX_TIME_LEN 512
#define MAX_PREFIX_LEN 10
#define MAX_MSG_LEN 1024
#define MAX_POSTFIX_LEN 2

static void tp_vlog_write(int level, const char *fmt, va_list alist)
{
    int ret, len;
    time_t t;
    struct tm *tm;
    char msg[MAX_TIME_LEN+MAX_PREFIX_LEN+MAX_MSG_LEN+MAX_POSTFIX_LEN];

    assert(log_fd);
    assert(fmt);
    assert(level >= TPL_DEBUG && level <= TPL_EMERG);

    if (level < log_level)
        return;

    switch (log_mode) {
    case TPLM_FILE:
        t = time(NULL);
        tm = localtime(&t);
        if (tm == NULL)
            abort();
        len = strftime(msg, MAX_TIME_LEN, "%a, %d %b %Y %T %z", tm);
        len += snprintf(msg+len, MAX_PREFIX_LEN, " [%s] ", level_txt[level]);
        ret = vsnprintf(msg+len, MAX_MSG_LEN, fmt, alist);
        if (ret >= MAX_MSG_LEN)
            len += MAX_MSG_LEN-1;
        else
            len += ret;
        snprintf(msg+len, MAX_POSTFIX_LEN, "\n");
        tp_write(log_fd, msg, len+1);
        break;
    case TPLM_SYSLOG:
#ifdef HAVE_VSYSLOG
        vsyslog(level_syslog[level], fmt, alist);
#else
        vsnprintf(msg, MAX_MSG_LEN, fmt, alist);
        syslog(level_syslog[level], "%s", msg);
#endif
        break;
    }
}

void tp_log_write(int level, const char *fmt, ...)
{
    va_list vl;

    va_start(vl, fmt);
    tp_vlog_write(level, fmt, vl);
    va_end(vl);
}

void tp_log_close(void)
{
    assert(log_fd);

    switch (log_mode) {
    case TPLM_SYSLOG:
        closelog();
        break;
    case TPLM_FILE:
        close(log_fd);
        break;
    }
}

Answer: As promised, here are some comments on the actual code you posted.

General:
- don't define multiple variables on the same line.
- is there a good reason to redefine log levels in tp_log_level rather than use the levels Syslog defines?
- you have two arrays, level_txt and level_syslog that are dependent upon the values of tp_log_level but nothing ties these together.
In such a small program, this is unlikely to be a problem, but this sort of loose dependency tends to break with time - someone appends (or even inserts!) a value to one part without adjusting the other. Do you really need to log the level text, "DEBUG" etc? Syslog doesn't. Why complicate the job? If you just use syslog constants, both these arrays could be discarded. If you really must use them, make a structure holding a level and a text and make an array of these; then at least you only have one dangling dependency...
- I personally don't have any dogmatic objection to multiple returns. Some coding standards object to them on the grounds that they make functions unclear. I would argue the contrary: they can, if used well, make functions clearer.

In tp_write:
- why using raw write (as opposed to stdio)? The fact that you test for interrupted writes makes me think that you intend to log to a socket or a serial port etc... Is that so?
- you should get compiler warnings about comparisons and conversions between size_t and ssize_t. Since the function only has two return values, -1 and len, and the caller knows the value of len anyway, it seems unnecessary to return anything but an int: 0 for success, and -1 for failure. That way you can avoid using a ssize_t and the warnings it generates.
- you need braces in the if/else statements: the compiler has to assume that the else belongs to the inner if. Using braces even when not strictly necessary is generally considered good practice. But they can be ugly... might be better coded as:

ssize_t tp_write(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    const char *end = p + len;

    while (p < end) {
        ssize_t ret = write(fd, p, (size_t) (end - p));
        if (ret > 0) {
            p += ret;
        } else if (errno != EINTR) {
            return -1;
        }
    }
    return 0;
}

In tp_log_init:
- mode would be more logically of type tp_log_mode
- I don't use asserts much. You are relying on the asserts to find problems that the subsequent code does not handle (eg. the switch has no default). But another approach is to allow the code to fail if the parameters are wrong by falling back on syslog. This works even when NDEBUG is defined (asserts disabled):

void tp_log_init(enum tp_log_mode mode, enum tp_log_level level, int fd)
{
    log_level = level;

    if (mode == TPLM_FILE) {
        if (fd >= 0) {
            log_fd = fd;
            log_mode = TPLM_FILE;
            return;
        }
        /* print an error or abort() if you like */
    }
    openlog(TPL_IDENT, LOG_PID|LOG_CONS, LOG_DAEMON);
    log_mode = TPLM_SYSLOG;
}

In tp_vlog_write:
- MAX_TIME_LEN and MAX_MSG_LEN seem generous for a log message. Remember that msg is on the stack and if you are in a restricted environment, 1500 bytes might be too much. ctime uses 26 bytes for the time/date, so I doubt your time format needs 512.
- level seems like it should be an enum tp_log_level
- the switch could be done with if/else
- I would extract the creation of a time string into a separate function. Also, why abort if localtime fails? This makes your code more fragile than it needs to be - just don't call strftime() if tm == NULL.
- your assert(log_fd) fails if log_fd == 0. This should apply to the TPLM_FILE mode only.
- as I said before, printing the level text seems unnecessary. If you need it, use strcpy instead of snprintf.
- printing "\n" could be done with strcpy(msg+len, "\n")
- Personally, I have no objection to the use of varargs in this code. @Lundin is right that it is not type-safe, but sometimes it has its uses. This seems a legitimate use to me.

In tp_log_close:
- your assert fails if log_fd == 0.
- if/else would be clearer.
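The "one table instead of two parallel arrays" suggestion is language-independent; a small Python sketch of the idea (names are illustrative, and the severity numbers are the standard syslog values that the LOG_* constants map to):

```python
from collections import namedtuple

# One record per level keeps the display text and the syslog severity
# together, so nothing has to stay "in sync by convention".
LogLevel = namedtuple("LogLevel", ["text", "severity"])

LEVELS = [                     # standard syslog severity numbers
    LogLevel("DEBUG", 7),
    LogLevel("INFO", 6),
    LogLevel("ERROR", 3),
    LogLevel("EMERG", 0),
]

def format_line(level, message):
    # Mirrors the "[LEVEL] message\n" layout built in tp_vlog_write
    return f"[{level.text}] {message}\n"
```

Appending or reordering entries in LEVELS now changes both the text and the severity at once, which is the point of the review comment.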
{ "domain": "codereview.stackexchange", "id": 3277, "tags": "c, logging" }
Calculating the Tension in a rope with two masses and two pulleys
Question: Each time I try to solve for the tension, I always get 80N instead of 40N. I set $T=mg$ and assumed the tension in the horizontal portion of the rope was equal to $2T$ because there are two masses and two pulleys and got $T = 80N$. Could someone please try to point out how I might be setting up the problem incorrectly or show how to solve the problem? Thanks Answer: Try looking at just half the diagram initially. If there were a block hanging off the table and not moving it is being pulled down by gravity with 40N, but since it is not moving there must be 40N pulling the other way to cancel it out since F=ma. If you follow that 40N back you see it is coming from the other block. If there were 80N in the string the blocks would end up being pulled upwards.
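The answer's force balance can be written out numerically; a small sketch taking the 40 N weight from the problem (only the equilibrium bookkeeping, nothing else):

```python
weight = 40.0   # N, weight of each hanging block (given in the problem)

# Each hanging block is in equilibrium, so the rope tension must exactly
# cancel that block's own weight -- not double it because there are two blocks.
tension = weight

# Net force on a hanging block: gravity down, tension up.
net_force_on_block = weight - tension   # zero, since the block does not accelerate
```

If the tension were 80 N instead, net_force_on_block would be -40 N (upward), which is the "blocks pulled upwards" contradiction the answer points out.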
{ "domain": "physics.stackexchange", "id": 36901, "tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, forces" }
Why can a tossed-up object free fall upwards?
Question: I've found many websites that say that a ball tossed upward would be free falling upwards, but none of them say why. If the definition of free fall is an object that is falling while only under the force of gravity alone, then the force you are exerting on the ball when you toss it up rules it out from free fall, right? Answer: In common parlance, saying that an object is "falling" implies that the object's velocity is in the direction of the ground. However, the phrase free fall is defined as applying to an object which has no forces acting on it except for gravity, a definition that has nothing to do with velocity. If you throw a ball 3 meters above your head, your hand stops applying a force to the ball at the moment the ball leaves your hand. At that moment, there are no forces acting on the ball any more except for gravity, so it meets the definition of being in "free fall", even though at that point the direction of the ball's velocity is away from the Earth, and hence the ball isn't "falling" as that word is used in common parlance. To answer your follow-up question, no, the ball doesn't "carry force with it" after it leaves your hand. After the ball leaves your hand, the ball has kinetic energy and momentum and an upward velocity due to the force that had been on the ball, but there is no force acting on the ball any more.
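The point that the acceleration is the same whether the ball is rising or falling can be made concrete with a short kinematics sketch (v0 and the sample times are chosen for illustration):

```python
g = 9.81    # m/s^2, magnitude of gravitational acceleration
v0 = 10.0   # m/s, upward launch speed (illustrative)

def velocity(t):
    # After the ball leaves the hand, only gravity acts on it:
    return v0 - g * t

def acceleration(t):
    # ...so the acceleration is -g at every instant, regardless of
    # whether the velocity is currently upward or downward.
    return -g

t_up = 0.2     # at this time the ball is still rising
t_down = 1.8   # by this time the ball is falling
```

The velocity changes sign between the two sample times, but the acceleration does not, which is exactly the definition of free fall used in the answer.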
{ "domain": "physics.stackexchange", "id": 83892, "tags": "newtonian-mechanics, projectile, free-fall" }
Pauli matrices and Lorentz transformations
Question: Consider the Weyl equations: \begin{align} i\sigma^{\mu} \partial_{\mu} \psi_{L} & = 0 \\ i\overline{\sigma}^{\mu} \partial_{\mu} \psi_{R} & = 0, \end{align} where $\sigma^{\mu} = \left ( \sigma^{0}, \sigma^{1}, \sigma^{2}, \sigma^{3} \right )$ and $\overline{\sigma}^{\mu} = \left ( \sigma^{0}, - \sigma^{1}, - \sigma^{2}, - \sigma^{3} \right )$, with $\psi_{L}$ and $\psi_{R}$ being left and right handed Weyl spinors such that $\psi$ is a two-component spinor: \begin{equation} \psi = \binom{\psi_{L}}{\psi_{R}}. \end{equation} These equations are Lorentz invariant. Does this mean that $\psi_{L}' = A \psi_{L}$ and $A \sigma^{\mu} A^{-1} = a_{\nu}^{\mu} \sigma^{\nu}$, similarly for $\psi_{R}$, where $A$ is a matrix and $a_{\nu}^{\mu}$ is a Lorentz transformation? Answer: It means that the matrices transform like $$\sigma^\mu \mapsto \Lambda^{\mu}{}_{\nu}\, A \sigma^\nu A^{-1}.$$ The left-handed spinor belongs to the $(\frac 1 2, 0)$ representation of the Lorentz group, and the right-handed spinor to the $(0, \frac 1 2)$ representation. (Or maybe the other way around, I'm not sure right now.) This view with representations can be found in Weinberg, Vol. 1, Ch. 2. Viewing spinors in terms of their transformation properties is more the view of texts like Pirani's chapter in Lectures on General Relativity, and Penrose and Rindler's Spinors and Spacetime. They are equivalent, of course, and I think you need both views to really get it, but you may find one easier to start with than the other.
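The transformation property $A \sigma^{\mu} A^{-1} = a_{\nu}^{\mu} \sigma^{\nu}$ can be checked numerically in the simplest case: $A = \exp(-i\theta\sigma_z/2)$, whose associated Lorentz transformation is a spatial rotation by $\theta$ about $z$, so $A\,\sigma_x\,A^{-1} = \cos\theta\,\sigma_x + \sin\theta\,\sigma_y$. This is a sketch for rotations only (boosts work the same way but with non-unitary $A$), using hand-rolled 2x2 complex matrices:

```python
import cmath
import math

def matmul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # Conjugate transpose; for the unitary rotation below, A^dagger = A^{-1}
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]

theta = 0.3
# A = exp(-i*theta*sigma_z/2) is diagonal since sigma_z is diagonal
A = [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]

# Left-hand side: A sigma_x A^{-1}
lhs = matmul(matmul(A, sx), dagger(A))

# Right-hand side: the rotated Pauli vector cos(theta)*sigma_x + sin(theta)*sigma_y
rhs = [[math.cos(theta) * sx[i][j] + math.sin(theta) * sy[i][j]
        for j in range(2)] for i in range(2)]
```

The two sides agree entry by entry, which is the finite-rotation version of the statement in the question.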
{ "domain": "physics.stackexchange", "id": 36908, "tags": "special-relativity, dirac-equation, spinors, lorentz-symmetry, dirac-matrices" }
Trajectory of a projectile in a three dimensional space
Question:
$g$: the gravitational acceleration—usually taken to be $9.81\:\mathrm{m/s^2}$ near the Earth's surface
$θ$: the angle at which the projectile is launched
$v$: the speed at which the projectile is launched
$y_0$: the initial height of the projectile
$d$: the total horizontal distance travelled by the projectile

The height $y$ of the projectile at distance $x$ is given by
$$y = y_0 + x \tan\theta - \frac{g x^2}{2 v^2 \cos^2\theta}.$$

However I'm writing a 3D game engine and would like to find out how to make that equation work in a three dimensional space $x,y,z$. Answer: Three dimensions are no different than two. If $y$ is the vertical direction and you have worked out the equations for $y$ and $x$, you can apply the same equations with $z$ substituted in for $x$. This is because physics is isotropic - the same in all directions. Gravity breaks the isotropy for the vertical dimension, but there is no physical difference between $x$ and $z$ so they are interchangeable. This is not perfectly true on Earth because the rotation axis of Earth breaks the symmetry between $x$ and $z$, so there are Coriolis forces that are different in the $x$ and $z$ directions, but these are minor for everyday circumstances.
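For a game engine, the "z is just another x" advice amounts to splitting the horizontal speed between the two horizontal axes by a launch azimuth. A small sketch (the parametrisation and sample values are illustrative):

```python
import math

g = 9.81  # m/s^2

def position(t, v, theta, azimuth, y0=0.0):
    """3D projectile position at time t: the vertical (y) equation is
    unchanged, and the horizontal speed v*cos(theta) is split between
    the x and z axes by the launch azimuth."""
    horizontal = v * math.cos(theta)
    x = horizontal * math.cos(azimuth) * t
    z = horizontal * math.sin(azimuth) * t
    y = y0 + v * math.sin(theta) * t - 0.5 * g * t * t
    return x, y, z

# With a 45-degree azimuth, x and z come out equal -- they are interchangeable.
x, y, z = position(1.2, 30.0, math.radians(60), math.radians(45))

# With azimuth 0 all horizontal motion is along x, and y is identical:
x2, y2, z2 = position(1.2, 30.0, math.radians(60), 0.0)
```

Note that the azimuth only redistributes horizontal motion; the vertical motion, and hence the original 2D trajectory formula, is untouched.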
{ "domain": "physics.stackexchange", "id": 25928, "tags": "homework-and-exercises, kinematics, simulations, projectile, coordinate-systems" }
Why can't it work? Perpetual motion machine
Question: Let's assume that the red ball is more dense than the white fluid and less dense than the blue fluid. The fluids are of very low viscosity and the machine can be of any height. The flaps open when pushed from the bottom, and after the ball goes through they close hermetically (some magnets perhaps). My understanding is as follows:
- The ball needs a finite velocity to open the flaps. Since H1, H2, can be of any height that is not a problem.
- The ball needs a finite velocity to jump off the water high enough. That is not a problem since H3 can be of any height.
- The ball needs a finite velocity to get into the water. That is not a problem since H1+H2+H3 can be of any height.

If it worked it would be a perpetuum mobile. Please help me find out why it won't work. Answer: First, remove the flaps and replace them with a single flap (or valve) where the balls enter the water on the bottom right, which prevents the water from flowing out. In order to get the red ball into the water, you'll then have to overcome the excess pressure corresponding to the water column, which will cost an energy exactly equal to the energy gained by the balls rising to the top in water. Thus, the overall process is energy neutral. By inserting extra flaps, you just split this extra work into several slices. (You can see this by inserting very thin layers of vacuum at each flap.) This is a random illustration I found on the internet: BTW, it seems buoyancy is a favorite tool for perpetuum mobiles, see e.g. https://www.lhup.edu/~dsimanek/museum/themes/buoyant.htm or http://www.hp-gramatke.net/perpetuum/english/page0220.htm (which essentially has your device in the section Hydraulic Perpetua Mobilia). And that's just the first two google hits for "perpetuum mobile buoyancy". The third hit is a similar question here on physics.se: Why this is not a perpetuum mobile.
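The energy bookkeeping in the answer can be written out explicitly. All numbers below are illustrative; the point is the exact cancellation between the buoyant energy gained and the injection cost:

```python
g = 9.81           # m/s^2
V = 1.0e-3         # m^3, ball volume (illustrative)
rho_fluid = 1000   # kg/m^3, the dense fluid in the column
rho_ball = 500     # kg/m^3, ball lighter than that fluid
H = 2.0            # m, height of the fluid column (illustrative)

# Energy gained as the ball rises through the column:
# net buoyant force (rho_fluid - rho_ball)*V*g acting over height H.
gain_rising = (rho_fluid - rho_ball) * V * g * H

# Cost to push the ball in through the bottom valve: work against the
# column's gauge pressure rho_fluid*g*H over volume V, minus the work
# the ball's own weight supplies while descending height H outside.
cost_injection = rho_fluid * g * H * V - rho_ball * V * g * H
```

Gain and cost are identical term by term, so the loop is energy neutral, exactly as the answer states; the flaps only split cost_injection into slices.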
{ "domain": "physics.stackexchange", "id": 24485, "tags": "fluid-dynamics, perpetual-motion" }
Was this reef protected?
Question: I came across this satellite animation the other day. It shows a reef being entirely destroyed to build an industrial complex (airport, harbor, various facilities...). The animated GIF is showcased on the website of a company called Digital Globe (here is the web page cached by the Internet Archive in case they take it down). I find this quite shocking given how long it takes for reefs to form (I read "100,000 and 30 million years for a fully mature reef to form"). My understanding is that lots of those reefs are actually protected. My question (I know it's a tough one): Is there a way to find out which reef this was and whether or not it was protected? Answer: The caption under the image on the Digital Globe website identifies the location as Fiery Cross Reef. Multiple sites identify what's going on as the Chinese government building an airstrip and related facilities on the site to solidify its claim to the South China Sea. According to this 2015 BBC story: China claims almost the whole of the South China Sea, resulting in overlapping claims with several other Asian nations including Vietnam and the Philippines. That story is based on a Jane's 360 analysis of satellite images: The 23 March images show a paved section of runway 503 m by 53 m on the northeastern side of Fiery Cross Reef, which China began to turn into an island in late 2014. Paving and ground preparation of other sections of the runway has also begun further along the island. In addition, workers have paved about 400 m by 20 m of apron. According to the Permanent Court of Arbitration on the wider South China Sea territorial dispute between China and the Philippines: (PDF) ...China had caused severe harm to the coral reef environment and violated its obligation to preserve and protect fragile ecosystems and the habitat of depleted, threatened, or endangered species. 
But it didn't seem that the reef in question was explicitly protected by any governmental action, especially given the dispute over which country governed the territory. China does not recognize the jurisdiction of the court.
{ "domain": "earthscience.stackexchange", "id": 1440, "tags": "ocean, environmental-protection" }
How does the introduction of the charm quark suppress FCNC?
Question: I did some reading on the GIM mechanism today, and simply fail to understand how it works. I understand how the CKM-matrix can be used to do the basic calculation of the probability of, say, observing an up quark after a strange quark's decay over the weak interaction. However, I don't understand how the CKM matrix is applied to these Feynman diagrams: I see that the vertices for the weak interactions of the quarks are labeled with the corresponding matrix elements, but how do I take these two Feynman diagrams and infer that they cancel each other out? Answer: Well, the two interfering amplitude diagrams do not quite cancel each other out: they almost cancel each other out. That is to say in the notional limit that the mass of the u and the mass of the c were identical, the two diagrams would be identical except for the minus sign of the Cabbibo (CKM for 3 generations) matrix influence on the vertices, which you reassure us you are comfortable with. To the extent the masses of the two quarks in the internal lines differ, the effect of them on the respective propagators differ, and so the respective results of the loops differ. In fact, good SM books compute the nonvanishing, but vastly suppressed amplitude. It is a function of $m_c/m_u$ which goes to 0 as that ratio goes to 1. Something like $\propto g^4 \frac{m_c^2}{M_W^2} ( 1-m_u^2/m_c^2)$. So you might sensibly object that the term in the parenthesis is much closer to 1 than it is to 0. But, look at the factor multiplying it, $\alpha^2 m_c^2/M_W^2$, and how small it is: do this. (Still, if that were a part of your puzzlement, perversely, the introduction of c actually increases the $\Delta S=1$ rate instead of suppressing it! Historically, the rate was used to bound the mass of the then hypothetical c from above!) When the third generation is introduced, the 3x3 unitary analog matrix (CKM) performs the same function. 
The quark mass ratios are bigger, but the couplings are suppressed at the vertices, so, if I recall, the effect of neglecting the 3rd generation is not dramatic. Extra credit hypothetical: In an imaginary world where the masses of u and c differ by one part per million, but are huge, say half the mass of the W, would you have this strangeness-changing neutral current amplitude be suppressed, or not?
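The size of the suppression factor $\alpha^2 m_c^2/M_W^2$ quoted in the answer is easy to put a number on. The masses below are approximate, PDG-style values used only to show the order of magnitude:

```python
# Rough size of the GIM suppression factor alpha^2 * (m_c/M_W)^2 * (1 - m_u^2/m_c^2)

alpha = 1 / 137.0   # fine-structure constant
m_c = 1.27          # GeV, charm quark mass (approximate)
m_u = 0.002         # GeV, up quark mass (approximate)
M_W = 80.4          # GeV, W boson mass (approximate)

suppression = alpha**2 * (m_c / M_W)**2 * (1 - (m_u / m_c)**2)
```

Even though the parenthesis is essentially 1, the prefactor is of order 1e-8, which is why the strangeness-changing neutral-current rate is so small.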
{ "domain": "physics.stackexchange", "id": 60702, "tags": "particle-physics, standard-model, feynman-diagrams, quarks, weak-interaction" }
Python Web-Scraping with BeautifulSoup
Question: I'm relatively new to python and for an assignment I had to write a program that fetches a webpage with BeautifulSoup, extract all Paragraphs from it, and extract all words ending with "ing", and in the end save it to a file with the format "Word" + tab + "wordcount" + "newline". This is my code so far. Is there a more pythonic way to handle this? Or generally ways to improve the code?

from bs4 import BeautifulSoup
import requests
import re

def main():
    site = "https://en.wikipedia.org/wiki/Data_science"
    r = requests.get(site).content
    soup = BeautifulSoup(r)
    ps = soup.findAll('p')
    fulltext = ''
    for p in ps:
        fulltext += p.get_text()
    words = match_words(fulltext)
    formated_words = sort_and_format(words)
    with open(r"Q1_Part1.txt","w") as file:
        file.write(formated_words)

def match_words(string):
    pattern = re.compile(r'\b(\w*ing)\b')
    words = re.findall(pattern, string.lower())
    matching_words = {}
    for word in words:
        if word in matching_words:
            matching_words[word] += 1
        else:
            matching_words[word] = 1
    return matching_words

def sort_and_format(dict):
    ordered_keys = sorted(dict, key=dict.get, reverse=True)
    output_string = ''
    for r in ordered_keys:
        output_string += f"{r}\t{dict[r]}\n"
    return output_string

main()

Answer:

if word in matching_words:
    matching_words[word] += 1
else:
    matching_words[word] = 1

If you're checking if a dictionary has a key before adding to it, a defaultdict may be a better option:

from collections import defaultdict

matching_words = defaultdict(int)
matching_words[word] += 1

int returns a 0 when called without arguments, and that 0 is used as a default value for the dictionary when the key doesn't exist.

fulltext = ''
for p in ps:
    fulltext += p.get_text()

This isn't very efficient. Performance of += on strings has gotten better in later versions of Python, but it's still generally slower.
The typical alternative is using join:

pieces = [p.get_text() for p in ps]
fulltext = "".join(pieces)
# Or just
fulltext = "".join([p.get_text() for p in ps])

Then similarly in sort_and_format:

output_string = "".join([f"{r}\t{dict[r]}\n" for r in ordered_keys])

In sort_and_format, you've named the parameter dict. This is suboptimal for a couple reasons:
- dict is a generic name that doesn't properly describe the data.
- dict is the name of a built-in class, and shadowing it makes your code more confusing, and prevents you from using the built-in.

Indicating the type can be helpful though, so I might introduce type hints here:

from typing import Dict

def sort_and_format(words: Dict[str, int]) -> str:
    . . .

This says that the function accepts a Dictionary mapping strings to ints, and returns a string.

Also for sort_and_format, I've found that when you start sticking and into names, that can suggest that the function is doing too much. You may find that the code will make more sense if the sorting and formatting happen separately. That function can handle purely formatting, and can be handed a sequence to work with instead. If that sequence is sorted, great, if not, also great. It doesn't matter for the purposes of formatting what the sort order is.
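Following the same line of thought, the whole counting step can also be collapsed with collections.Counter; this is an alternative to the defaultdict version above, not something the original post used (sample text chosen for illustration):

```python
import re
from collections import Counter

def match_words(text):
    # Counter does the "increment or initialise" bookkeeping for us, and
    # its most_common() returns (word, count) pairs sorted by descending count.
    return Counter(re.findall(r'\b(\w*ing)\b', text.lower()))

counts = match_words("Testing the counting of words ending in ing: testing!")
lines = "".join(f"{word}\t{n}\n" for word, n in counts.most_common())
```

most_common() already produces the descending-by-count order that sort_and_format builds by hand, so the formatting step reduces to a single join.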
{ "domain": "codereview.stackexchange", "id": 39423, "tags": "python, web-scraping, beautifulsoup" }
Post processing gmapping generated maps to add glass wall information
Question: The gmapping system misses glass surfaces since it makes use of laser data. Unfortunately, we have an environment that has quite a few glass walls/doors. Has anyone run into this issue? If yes, how do you folks suggest we handle it? I can always modify the map manually in an image editor. Is there a better way? Originally posted by FranciscoD on ROS Answers with karma: 128 on 2013-08-11 Post score: 1 Answer: There's a related question here, which links to some answers. The problem is that if these glass walls are actually invisible in the laser scan, the local trajectory planner can still try to go through them, even if you manually paint the walls into the global map. So you'll need something like layered costmaps which contain the "no go zones". Or you could do what we did with our (small) glass door, which is stick a couple sheets of paper onto it at a certain height... :-) Or try additionally using an ultrasound ranger for local obstacle avoidance, if your robot happens to have one (not tried, but should work). Originally posted by Martin Günther with karma: 11816 on 2013-08-12 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by FranciscoD on 2013-08-12: Thanks for your answer :). We thought of covering up glass surfaces already. The problem is that our environment is an entire floor of the building, and covering up all the glass surfaces will be tedious. I was hoping I can somehow detect glass and augment the maps. Comment by FranciscoD on 2013-08-12: (I've observed that the laser does seem to detect glass when incidence is near normal.)
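Painting known glass walls into a map can be sketched independently of ROS: at bottom it is just writing a lethal cost into a run of cells in a 2D occupancy grid. The values and names below are illustrative, not a ROS API:

```python
LETHAL = 100   # occupancy value treated as an obstacle (illustrative)
FREE = 0

# A small occupancy grid; in practice this would be the map gmapping produced.
grid = [[FREE] * 10 for _ in range(10)]

def paint_wall(grid, start, end, value=LETHAL):
    """Mark an axis-aligned run of cells as occupied, e.g. a glass wall
    the laser never saw. start/end are (row, col) on the same row or column."""
    (r0, c0), (r1, c1) = start, end
    if r0 == r1:
        for c in range(min(c0, c1), max(c0, c1) + 1):
            grid[r0][c] = value
    else:
        for r in range(min(r0, r1), max(r0, r1) + 1):
            grid[r][c0] = value

paint_wall(grid, (4, 2), (4, 7))   # a horizontal glass wall along row 4
```

As the answer notes, editing only the global map is not enough by itself: the local planner also needs to know about these cells, which is what a dedicated "no go zone" costmap layer provides.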
{ "domain": "robotics.stackexchange", "id": 15223, "tags": "navigation, gmapping" }
use some functions of move_base
Question: Hello! Is there any way to use only some functions of move_base? I want to use in my node only the global costmap created with move_base and nothing else (no planners) Originally posted by alex920a on ROS Answers with karma: 35 on 2014-09-08 Post score: 0 Answer: You can either create a costmap object yourself in your code or if you want the one from move_base, just start it as normal and don't call the planners. move_base won't do anything if you don't send it anything. Originally posted by dornhege with karma: 31395 on 2014-09-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by David Lu on 2014-09-08: This is correct. Check out the costmap2d_node in the costmap package.
{ "domain": "robotics.stackexchange", "id": 19334, "tags": "ros, navigation, move-base, global-costmap" }
Can someone give me a reality check on this video?
Question: https://www.facebook.com/DabungPakistan/videos/497292570459987/ This is a video in which a person keeps increasing his swing distance on a large swing (pendulum motion/simple harmonic motion) until he makes a complete circle. What I find incredible in all this is that when he gets the swing angle more than 45° to the horizontal, he still manages to keep the swing behaving in a perfect pendulum motion. Instead of that, the swing should have fallen downwards straight until the tension of the rope re-initiated the pendulum motion. This is why I think a full swing in pendulum motion is impossible to achieve by gradually increasing the distance of each swing. Can someone please verify my thoughts or show me otherwise? Thanks. Answer: When considering the video with a rope, examine the forces at the moment in time that you question. What are the forces on the rope and the weight? During the upward portion of the swing the rope would be in tension because the weight wants to travel in a straight line. If the pendulum were to stop at exactly horizontal at the top of the upswing and the rope had not stretched due to the tension, the only force acting on the weight would be gravity, and the direction of that force does not create tension on the rope nor does it attempt to compress it. Anything past horizontal and gravity will pull the weight in a direction that attempts to compress the rope. This of course would cause the rope to sag and the weight to fall in a straight line until it pulls the sag out of the rope. The rope would stay tight until the weight approaches the top of its trajectory. Then the weight of the rope would begin to pull it off its circular path due to the effect of gravity overcoming the forces of inertia on a pendulum (commonly called centripetal forces). I hope this helps. Drew K
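The rope-goes-slack argument can be quantified: on a rope (unlike a rigid rod), the weight at the top of the loop needs $v_{top}^2 \ge gL$ so that gravity alone can supply the centripetal force, and energy conservation over the height difference $2L$ then requires $v_{bottom}^2 \ge 5gL$. A small sketch (L is chosen arbitrarily):

```python
g = 9.81
L = 3.0   # m, rope length (illustrative)

# Minimum squared speed at the top of the circle for the rope to stay taut:
# centripetal requirement m*v_top^2/L >= m*g, i.e. v_top^2 >= g*L.
v_top_min_sq = g * L

# Energy conservation between bottom and top (height difference 2L):
# v_bottom^2 = v_top^2 + 4*g*L
v_bottom_min_sq = v_top_min_sq + 4 * g * L
```

Below that bottom speed the rope goes slack somewhere past the horizontal, which is exactly the regime the answer describes; with enough speed, a full loop on a rope is possible.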
{ "domain": "physics.stackexchange", "id": 39675, "tags": "oscillators" }
The poles of the Feynman propagator in position space
Question: This question may be related to Feynman Propagator in Position Space through Schwinger Parameter. The Feynman propagator is defined as: $$ G_F(x,y) = \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip(x-y)}}{p^2 - m^2 + i\epsilon} $$ $$= \begin{cases} -\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & s < 0\end{cases} $$ using $(+,-,-,-)$ Minkowski signature convention. If one wants to apply the trick of Wick rotation, then one should know the position of the poles. It's easy to see that the poles $p_0$ of $\Delta(p)=\frac{1}{p^2 - m^2 + i\epsilon}$ are $p_0 = \pm (\omega - i\epsilon)$. Then, my question is what are the poles $x_0$ or $t$ of $$ \Delta(x) = G_{F,\epsilon}(x) = \int d^4p \, \frac{e^{-ip x}}{p^2 - m^2 + i\epsilon}. $$ I have tried the following: Because $$ \Delta(p) = \frac{1}{p^2-m^2+i\epsilon} = -i \int_0^\infty d\alpha ~e^{i(p^2 - m^2 +i\epsilon)\alpha} $$ Thus $$ \Delta(x) = \int \frac{d^4 p}{(2\pi)^4} e^{-ipx} \Delta(p) \\ = -i \int_0^\infty d\alpha \int \frac{d^4 p}{(2\pi)^4} ~e^{-ipx+i(p^2 - m^2 +i\epsilon)\alpha} \\ = -i \int_0^\infty d\alpha \frac{1}{(2\pi)^4} [-i\pi^2\alpha^{-2} e^{\frac{-ix^2}{4\alpha}-i(m^2-i\epsilon)\alpha}] $$ Let $\beta = \frac{1}{\alpha}$, then we get $$ \frac{-1}{16\pi^2} \int_0^\infty d\beta~ e^{-\frac{i\beta x^2}{4}-\frac{i(m^2-i\epsilon)}{\beta}} $$ But how does one do the last integration, and what are the poles $x_0$? ps: This material by Yuri Makeenko (page 8) gives a figure to show poles and the directions of Wick rotation. Answer: There is an integration formula (see "Table of integrals, series and products" 7ed, p337, section 3.324, 1st integral) $$\int_0^\infty d\beta \exp\left[-\frac{A}{4\beta}-B\beta\right]=\sqrt{\frac{A}{B}}K_1\left(\sqrt{AB}\right)\qquad [\mathrm{Re}A\ge0, \mathrm{Re}B>0].$$ If $\mathrm{Re}A\ge0, \mathrm{Re}B>0$ is violated, the integral will be divergent. 
In your case, $A=4(im^2+\epsilon)$ and $B=ix^2/4$, so $\mathrm{Re}A=4\epsilon>0$ and $\mathrm{Re}B=0$, which does not satisfy the convergence condition. Therefore, to guarantee the convergence of the integral, we should treat $B=ix^2/4$ as the limit $B=\lim_{\epsilon'\rightarrow0+}i(x^2-i\epsilon')/4$. Thus we have $$\Delta(x)=\lim_{\epsilon,\epsilon'\rightarrow0+}\frac{-1}{16\pi^2}\int_0^\infty d\beta \exp\left[-\frac{i\beta (x^2-i\epsilon')}{4}-\frac{i(m^2-i\epsilon)}{\beta}\right]\\ =\lim_{\epsilon,\epsilon'\rightarrow0+}\frac{-1}{4\pi^2}\sqrt{\frac{m^2-i\epsilon}{x^2-i\epsilon'}}K_1\left(\sqrt{-(m^2-i\epsilon)(x^2-i\epsilon')}\right)\\ =\lim_{\epsilon'\rightarrow0+}\frac{-m}{4\pi^2\sqrt{x^2-i\epsilon'}}K_1\left(m\sqrt{-(x^2-i\epsilon')}\right)$$ As a result, the singularity of the propagator is at $x^2-i\epsilon=t^2-\mathbf{x}^2-i\epsilon=0$, i.e. $t=\pm(|\mathbf{x}|+i\epsilon)$. Actually, the convergence condition of the integral restricts the analytic regime of $\Delta(x)$: $$0<\mathrm{Re}(ix^2)=\mathrm{Re}(it^2)=-\mathrm{Im}(t^2)$$ i.e. $$(2n-1)\pi\le\arg(t^2)=2\arg(t)\le 2n\pi\\ (n-\frac{1}{2})\pi\le\arg(t)\le n\pi$$ Therefore, $\Delta(x)$ can only be analytically continued to the second and the fourth quadrants in the complex plane of $t$. In conclusion, the Wick rotation in the $t$ plane should be clockwise.
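The quoted table integral can be checked numerically with nothing but the standard library, using the integral representation $K_1(x)=\int_0^\infty e^{-x\cosh t}\cosh t\,dt$; the values $A=4$, $B=1$ below are arbitrary test choices satisfying $\mathrm{Re}A\ge 0$, $\mathrm{Re}B>0$:

```python
import math

def integrate(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def bessel_k1(x):
    # integral representation K_1(x) = int_0^inf exp(-x cosh t) cosh t dt
    return integrate(lambda t: math.exp(-x * math.cosh(t)) * math.cosh(t), 0.0, 12.0)

A, B = 4.0, 1.0          # arbitrary test values with Re A >= 0, Re B > 0
lhs = integrate(lambda u: math.exp(-A / (4 * u) - B * u), 1e-6, 60.0)
rhs = math.sqrt(A / B) * bessel_k1(math.sqrt(A * B))
print(abs(lhs - rhs) < 1e-4)   # True: both sides agree numerically
```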
{ "domain": "physics.stackexchange", "id": 21786, "tags": "quantum-field-theory, fourier-transform, wick-rotation, propagator" }
Build an entity, based on a defined model by comparing table column to entity names and building
Question: Can I optimize the BuildEntity method more? protected IEnumerable<GroupTitle> BuildProductGroup() { var productGroups = new List<GroupTitle>(); using (var connection = new SqlConnection(_dbConnection)) using (var command = new SqlCommand(_getProductGroups, connection)) { connection.Open(); command.CommandType = CommandType.StoredProcedure; using (var reader = command.ExecuteReader()) while(reader.Read()) { var productGroup = new GroupTitle(); BuildEntity<GroupTitle>(reader, ref productGroup); productGroups.Add(productGroup); } } return productGroups; } protected TEntity BuildEntity<TEntity>(IDataReader reader, TEntity model) { var type = model.GetType(); var table = Enumerable.Range(0, reader.FieldCount).Select(reader.GetName).ToArray(); foreach (var column in table) { var matchColumnToProperty = type.GetProperties().FirstOrDefault(property => String.Compare(property.Name, column, true) == 0); if (matchColumnToProperty != null && !reader.IsDBNull(reader.GetOrdinal(matchColumnToProperty.Name))) matchColumnToProperty.SetValue(model, reader.GetValue(reader.GetOrdinal(matchColumnToProperty.Name)), null); } return model; } Answer: I ended up improving performance and tidying the code up a bit by doing the following: public IList<TEntity> List<TEntity>(string query, CommandType commandType, params SqlParameter[] parameters) where TEntity : class, new() { using (var connection = new SqlConnection(dbConnection)) using (var command = new SqlCommand(query, connection)) { connection.Open(); command.CommandType = commandType; foreach (var parameter in parameters) command.Parameters.Add(parameter); return BuildEntity(command, new TEntity()); } } Then BuildEntity would be as follows: public List<TEntity> BuildEntity<TEntity>(SqlCommand command, TEntity entity) where TEntity : class, new() { var collection = new List<TEntity>(); var properties = GetColumnDataFromProperty<TEntity>(); using (var reader = command.ExecuteReader()) while(reader.Read()) 
collection.Add(MapEntity<TEntity>(reader, properties)); return collection; }
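The key optimisation in the accepted fix, resolving the column-to-property mapping once instead of reflecting per column per row, can be sketched outside C# as well; here is a hypothetical Python analogue (the class, rows and column names are made up for illustration):

```python
# map rows of (value, ...) tuples onto objects, computing the
# column -> attribute mapping once instead of per row/column
class GroupTitle:
    def __init__(self):
        self.title = None
        self.count = None

def build_entities(rows, columns, cls):
    # case-insensitive match of column names to instance attributes
    attrs = {name.lower(): name for name in vars(cls()).keys()}
    mapping = [(i, attrs[c.lower()]) for i, c in enumerate(columns)
               if c.lower() in attrs]          # computed once, reused per row
    out = []
    for row in rows:
        obj = cls()
        for i, attr in mapping:
            if row[i] is not None:             # skip NULLs, as in the original
                setattr(obj, attr, row[i])
        out.append(obj)
    return out

rows = [("alpha", 3), ("beta", None)]
ents = build_entities(rows, ["Title", "Count"], GroupTitle)
print(ents[0].title, ents[0].count)  # alpha 3
print(ents[1].title, ents[1].count)  # beta None
```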
{ "domain": "codereview.stackexchange", "id": 20361, "tags": "c#" }
Explanation of the identities $\rho=\rho(\delta)$ ($\delta$ function) and $\rho=q\,\delta(\bar{r})$
Question: Poisson's equation is $$\boldsymbol{\nabla}^{2}\varphi=-4\pi k_{e}\,\rho, \tag{*}$$ which, in the case of a point charge $q$ (already with spherical symmetry), has the solution \begin{equation} \varphi(r)=k_{e} \frac{q}{r}. \tag{**} \end{equation} Substituting (**) into (*) we get: \begin{equation} k_{e}\, q \boldsymbol{\nabla}_{r}^{2}\left(\frac{1}{r}\right)=-4\pi \, k_{e}\, \rho \end{equation} Why is the charge density $\rho$ also considered as a delta-function ($\delta$) over the whole classical space-time $\mathbb{R}^4$? Why does the identity \begin{equation} \rho=q\,\delta(\bar{r}) \end{equation} hold? Answer: As you noticed the Laplacian is ill-defined at the origin: such equations are usually solved by lifting them to distribution theory and using Green's functions. Let $$ -\nabla^2 \phi(\mathbf{x}) = -4\pi \rho(\mathbf{x})\tag{1} $$ one can show$^1$ that the solution $\phi(\mathbf{x})$ can always be written as$^2$ $$ \phi(\mathbf{x}) = \frac{1}{4\pi} \int_V d^3 x' \rho(\mathbf{x}')G(\mathbf{x}, \mathbf{x}') + \frac{1}{4\pi}\int_{\partial V}d\sigma\Big[G(\mathbf{x}, \mathbf{x}')\frac{\partial \phi}{\partial n} - \phi(\mathbf{x}') \frac{\partial}{\partial n}G(\mathbf{x}, \mathbf{x}')\Big] \tag{2} $$ where the Green's function $G(\mathbf{x}, \mathbf{x}')$ solves the associated Green's equation $$ -\nabla^2 G(\mathbf{x}, \mathbf{x}') = -4\pi \delta(\mathbf{x}-\mathbf{x}')\tag{3} $$ Using appropriate boundary conditions (in the simple cases one can demand the functions to vanish at the boundaries and the first order derivative to be proportional to the surface element$^3$) equation $(2)$ can be solved by plugging in the solution of $(3)$, which you already have recognised to be the potential of a single point charge. Details on all of this can be found in the standard textbook of J. D. Jackson on Classical Electrodynamics. 
$^1$ In order to show why this holds multiply $(1)$ by $G(\mathbf{x}, \mathbf{x}')$ and $(3)$ by $\phi(\mathbf{x})$ and integrate against $\delta(\mathbf{x}-\mathbf{x}')$. $^2$ There might be some $2\pi$ being forgotten left and right, somewhere. $^3$ I might be wrong about this.
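The identity $\nabla^2(1/r) = -4\pi\delta^3(\mathbf{r})$ behind $\rho = q\,\delta(\bar r)$ can also be illustrated numerically: the flux of $\boldsymbol{\nabla}(1/r) = -\hat r/r^2$ through a closed surface is $-4\pi$ if the surface encloses the origin and $0$ otherwise, which is exactly delta-function behaviour under the divergence theorem. A stdlib-only sketch (the sphere positions and the grid resolution are arbitrary choices):

```python
import math

def flux(center_x, R, n=400):
    """Numeric flux of F = grad(1/r) = -r/|r|^3 through a sphere of
    radius R centered at (center_x, 0, 0), by midpoint rule in (theta, phi)."""
    total = 0.0
    dth, dph = math.pi / n, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        for j in range(n):
            ph = (j + 0.5) * dph
            # outward unit normal of the sphere, and the surface point
            nx = math.sin(th) * math.cos(ph)
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            px, py, pz = center_x + R * nx, R * ny, R * nz
            r3 = (px * px + py * py + pz * pz) ** 1.5
            f_dot_n = -(px * nx + py * ny + pz * nz) / r3
            total += f_dot_n * R * R * math.sin(th) * dth * dph
    return total

print(flux(0.0, 1.0))  # origin inside  -> approx -4*pi = -12.566...
print(flux(3.0, 1.0))  # origin outside -> approx 0
```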
{ "domain": "physics.stackexchange", "id": 56837, "tags": "electromagnetism, differentiation, dirac-delta-distributions" }
Why doesn't a photon lose energy according to $E=mc^2$?
Question: As photons are massless, by $E=mc^2$ the energy should be zero, right? Or should it be? I would be pleased to be corrected. Answer: The full formula is $E= \sqrt{c^2p^2+m^2 c^4}$, so for massless particles $E= pc$. The momentum $p$ of a photon is given by $p= h/\lambda$, so, from $\lambda \nu=c$ where $\nu$ is the frequency, we have $E= h\nu$.
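Putting numbers in for a hypothetical 500 nm photon (the wavelength is just an example value):

```python
# photon energy for an assumed 500 nm wavelength: E = pc = h c / lambda
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
lam = 500e-9          # wavelength, m (hypothetical example)

p = h / lam           # photon momentum
E = p * c             # energy of a massless particle
nu = c / lam          # frequency
print(E)              # approx 3.97e-19 J
print(h * nu)         # the same value (up to floating-point rounding): E = h*nu
```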
{ "domain": "physics.stackexchange", "id": 89447, "tags": "special-relativity, electromagnetic-radiation, photons, mass-energy" }
Lock-free thread-safe Fibonacci number generator
Question: Here is my implementation of a lock-free thread-safe Fibonacci number generator. Is it correct? The idea is to use an immutable holder for previous and current numbers (since it's hard to change two values atomically at the same time) and AtomicReference which points to the current Fibonacci number. Lock-free public class FibonacciSequence { private static class FibonacciNumber { protected final BigInteger prev; protected final BigInteger curr; public FibonacciNumber(BigInteger prev, BigInteger curr) { this.prev = prev; this.curr = curr; } public FibonacciNumber next() { return new FibonacciNumber(curr, prev.add(curr)); } public BigInteger value() { return curr; } } private static final class FirstFibonacciNumber extends FibonacciNumber { public FirstFibonacciNumber() { super(null, BigInteger.valueOf(0L)); } public FibonacciNumber next() { return new FibonacciNumber(curr, BigInteger.valueOf(1L)); } } private final AtomicReference<FibonacciNumber> currentFibNumberRef; public FibonacciSequence() { currentFibNumberRef = new AtomicReference<>(new FirstFibonacciNumber()); } public BigInteger next() { while (true) { FibonacciNumber currFibNumber = currentFibNumberRef.get(); if (currentFibNumberRef.compareAndSet(currFibNumber, currFibNumber.next())) return currFibNumber.value(); } } Testing public static void main(String[] args) { testFibonacciNumberCorrectness(); // todo add thread-safe test } private static void testFibonacciNumberCorrectness() { int[] some = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811}; FibonacciNumber currFibNumber = new FirstFibonacciNumber(); for (int num : some) { assert num == currFibNumber.value().intValue(); currFibNumber = currFibNumber.next(); } } } Answer: BigInteger operations are slow. Each iteration through the process you are creating a new BigInteger, as well as a new FibonacciNumber instance. 
These instances are likely more expensive to process (and garbage collect) than the time saved through lock-free management. Still, using a three-state operation you can solve this problem without the FibonacciNumber class: protected final AtomicReference<BigInteger> nextRef = new AtomicReference<>(BigInteger.ONE); protected final AtomicReference<BigInteger> currRef = new AtomicReference<>(BigInteger.ONE); public BigInteger next() { BigInteger current = null; do { // spin loop while someone else is updating the reference. // updating is indicated by a null value in the reference. current = currRef.getAndSet(null); } while (current == null); BigInteger next = nextRef.get(); nextRef.set(next.add(current)); currRef.set(next); return current; } Note that the above code puts a waiting process in to a spin lock (100% CPU) instead of locking. This is the same as your code. My preference would be to use a more traditional Lock (or even synchronization) and block threads that are waiting.
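Python has no compare-and-set primitive, so here is a sketch of the answer's closing suggestion instead: a plain lock guarding the two-integer state, with the same contract that every caller receives a distinct, consecutive Fibonacci number. The thread and call counts are arbitrary test values:

```python
import threading

class FibonacciSequence:
    """Thread-safe: each call to next() returns one Fibonacci number,
    and concurrent callers never receive the same index twice."""
    def __init__(self):
        self._prev, self._curr = 1, 0   # arranged so the first next() is 0
        self._lock = threading.Lock()

    def next(self):
        with self._lock:
            value = self._curr
            self._prev, self._curr = self._curr, self._prev + self._curr
            return value

seq = FibonacciSequence()
results = []
res_lock = threading.Lock()

def worker(n):
    for _ in range(n):
        v = seq.next()
        with res_lock:
            results.append(v)

threads = [threading.Thread(target=worker, args=(50,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()

# regardless of interleaving, the multiset of results is the first 400 numbers
expected, a, b = [], 0, 1
for _ in range(400):
    expected.append(a)
    a, b = b, a + b
print(sorted(results) == sorted(expected))  # True
```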
{ "domain": "codereview.stackexchange", "id": 11399, "tags": "java, fibonacci-sequence, lock-free" }
Semiconductors -> Transistors: npn and pnp
Question: I have had a tough time with semiconductors: I cannot understand the working of a transistor in saturation or as a switch, and also the cases of when which part is forward biased. I also tried Google but found only material about the working, not specific cases. I need help understanding the following parts: 1) working in saturation mode (for both NPN and PNP) 2) as a switch - only the bias 3) also any logic with which I can deduce the type - reverse biased or forward biased - for the junctions. Any help would be appreciated. Answer: In saturation mode, the collector-base junction of a bipolar transistor is forward biased as opposed to reverse biased in normal operation. Thus the collector-base voltage is positive in a pnp-transistor and negative in an npn-transistor. This forward bias produces minority current injection from the collector into the base and thus causes a current component opposed to the normal emitter-collector current, leading to a decreasing collector current with increasing collector-base forward bias. The operation as a switch can be most easily understood by considering the collector current vs collector-emitter voltage (output) characteristic with a resistive load line. (See, e.g., S. M. Sze, Physics and Technology of Semiconductor Devices, Chapter 5.) For operation as a switch, in the "on" state (current high, voltage low) the resistive load line cuts the characteristics for high input base current in the saturation region of the transistor, while the "off" state with low current and high voltage is the intersection of the load line with the output characteristics at low input base current. There exist logic circuits where the switching uses an "on" state in normal mode instead of saturation mode.
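The load-line picture translates into a one-line design check for the switch case: compute the collector current the load line allows in saturation, and verify that the base drive could push more than that. A sketch with hypothetical component values (Vce_sat and beta are typical datasheet-style assumptions, not from any specific part):

```python
# hypothetical npn switch check: is the transistor driven into saturation?
Vcc = 5.0        # supply voltage, V (assumed)
Rc = 1000.0      # collector resistor, ohms (assumed)
Vce_sat = 0.2    # typical saturation voltage, V (assumption)
beta = 100.0     # current gain (assumption)
Ib = 0.5e-3      # base current delivered by the driving logic, A

Ic_sat = (Vcc - Vce_sat) / Rc      # current the load line allows when "on"
saturated = beta * Ib > Ic_sat     # active-region current would overshoot it
print(Ic_sat, saturated)           # approx 0.0048, True
```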
{ "domain": "physics.stackexchange", "id": 38513, "tags": "semiconductor-physics, electronics, electrical-engineering" }
Proof for $\langle i[A,B]\rangle$
Question: I have to prove the following equation: $$ \langle i[A,B]\rangle = 2\mathfrak{Im}\left[\int dV(\overline{B\psi})(A\psi)\right]\,,$$ where $A,B$ are Hermitian operators. Here is my calculation, but I don't get the right result. Can you help me to find my mistake? $$ \begin{aligned} \langle i[A,B]\rangle &=(\psi,i[A,B]\psi) \\&=(\psi,i(AB-BA)\psi) \\&=(\psi,iAB\psi-iBA\psi) \\&=(\psi,iAB\psi)-(\psi,iBA\psi) \\&=i(A^\dagger\psi,B\psi)-i(B^\dagger\psi,A\psi) \\&=i(A\psi,B\psi)-i(B\psi,A\psi) \\&=-i[(B\psi,A\psi)-(A\psi,B\psi)] \\&=-i[(B\psi,A\psi)-\overline{(B\psi,A\psi)}] \\&=-2i\,\mathfrak{Im}[(B\psi,A\psi)] \end{aligned} $$ Answer: I think the mistake is in the last line. Using the following identity helps to get the result you are seeking: $$ (a+ib)-(a-ib)=2ib=2i\,\mathrm{Im}(a+ib) $$
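Spelling the hint out: with $z=(B\psi,A\psi)$, the identity $z-\bar z = 2i\,\mathrm{Im}(z)$ turns the second-to-last line of the calculation into the desired result, so only the final step was wrong:

```latex
-i\left[(B\psi,A\psi)-\overline{(B\psi,A\psi)}\right]
  = -i\cdot 2i\,\mathfrak{Im}\!\left[(B\psi,A\psi)\right]
  = 2\,\mathfrak{Im}\!\left[(B\psi,A\psi)\right]
  = 2\,\mathfrak{Im}\!\left[\int dV\,\overline{(B\psi)}\,(A\psi)\right].
```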
{ "domain": "physics.stackexchange", "id": 57706, "tags": "quantum-mechanics, homework-and-exercises, operators, hilbert-space, commutator" }
Motivation for using Euler angles instead of other coordinates
Question: Why are Euler angles used when describing the rotation of a rigid body instead of angles around the principal axes of inertia? What exactly is the benefit of introducing Euler angles? They do not seem like a "natural" choice, but they are the standard way to handle rigid body rotation in classical and quantum mechanics. I have never seen a discussion of why we use them and not some other set of coordinates. What makes Euler angles so particularly useful? What is the motivation to introduce them, and what problems would we run into if we tried to use other coordinates? Are they truly the only set of coordinates that lead to "manageable" equations when it comes to rigid body rotation? Answer: How do you propose to use angles around the principal axes, as they need to be applied in a specific sequence, and once you have applied the first one, the principal axes aren't aligned anymore? Besides, using three rotations about the principal directions is still considered Euler angles. Just a different scheme of angles. If you use a sequence of three orthogonal rotations, regardless of whether they start from a principal axis or not, you have Euler angles. Some Euler angle schemes use a sequence of body-centered axes (like you propose) and some use inertially centered axes. When describing the orientation of a body, you can either use an arbitrary angle about an arbitrary direction (1+2 quantities defined), or 3 arbitrary angles about three fixed directions (3 quantities defined). An alternative, which I recommend, is to use quaternions, which encode the axis-angle system but without the gimbal lock that might happen with spherical coordinates for the direction of rotation. In robotics, you might use Euler angles as each rotation would correspond to a physical joint between parts. But for free rigid bodies, go with quaternions instead. 
See my post in SO about how to use quaternions.
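To make the quaternion recommendation concrete, here is a minimal stdlib-only sketch of composing and applying rotations (Hamilton convention, active rotation $v' = q\,v\,q^*$); the axis and angle are example values:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def from_axis_angle(axis, angle):
    # unit quaternion for a rotation by `angle` about `axis`
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    # active rotation v' = q v q*, with v embedded as a pure quaternion
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

qz = from_axis_angle((0, 0, 1), math.pi / 2)   # 90 degrees about z
print(rotate(qz, (1, 0, 0)))                   # approx (0, 1, 0)
```

Composition is just quaternion multiplication, and there is no singular configuration to avoid, which is the gimbal-lock point made above.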
{ "domain": "physics.stackexchange", "id": 94532, "tags": "classical-mechanics, rotational-dynamics, coordinate-systems" }
Extract throughput value from JMeter log file
Question: I use JMeter for load testing and want to report an error on the CI pipeline when application throughput is less than 1/sec. JMeter logs summary of results as - 2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary + 103 in 00:00:30 = 3.4/s Avg: 2937 Min: 48 Max: 7962 Err: 0 (0.00%) Active: 2 Started: 10 Finished: 8 2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary = 12218 in 01:00:03 = 3.4/s Avg: 2942 Min: 20 Max: 27317 Err: 1 (0.01%) 2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary + 1 in 00:00:00 = 66.7/s Avg: 3227 Min: 3227 Max: 3227 Err: 0 (0.00%) Active: 0 Started: 10 Finished: 10 2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary = 12219 in 01:00:03 = 3.4/s Avg: 2942 Min: 20 Max: 27317 Err: 1 (0.01%) Herein I would capture the last line, and if the value of throughput is less than 1 (which is 3.4/s in the current example) then it would be reported as an error. Here is the script I wrote to capture the value of throughput - grep "meter.reporters.Summariser: summary" test.log | tail -1 | awk '{print $12}' | cut -d "." -f1 and it gets me 3 which is ok. How can I improve this script? Answer: You are using here four different tools: grep, tail, awk and cut to do the following: select lines get the last one print a given field of that one extract the integer part of that field This is clear and it works. However, using four processes is a bit of overkill, since all of these can be performed in one go via awk: awk '/meter.reporters.Summariser: summary/ {rec=$12} END{printf "%d\n", rec}' file /meter.reporters.Summariser: summary/ {rec=$12} When reading from the file, /pattern/ will match those lines containing the "pattern". If this happens, the action within {} is performed. In this case, it consists of storing the 12th field. But wait, weren't we looking for the last line in which this occurs? 
Yes, and this is exactly what we are doing by overwriting rec every time the pattern happens: the last time it is written, it will store the definitive result. END{printf "%d\n", rec} Once all lines have been processed, rec will have the desired value in the form 3.4/s. We just want the 3 to appear, so we could do some string trimming similar to what you did with cut. However, we can also use awk's printf() to use Format-Control Letters to just print the integer part of it. This will also print 0 if rec is empty: $ awk 'BEGIN {printf "%s\n", "3.4/s"}' 3.4/s $ awk 'BEGIN {printf "%d\n", "3.4/s"}' 3 $ awk 'BEGIN {printf "%d\n", ""}' 0 Of course, you could also add the threshold logic in the script.
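If the CI step prefers Python over a shell pipeline, the same last-match-plus-threshold logic is a few lines; the sample lines below are abbreviated from the question's log:

```python
import re

SAMPLE = """\
2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary + 1 in 00:00:00 = 66.7/s Avg: 3227
2017/01/17 01:57:00 INFO - jmeter.reporters.Summariser: summary = 12219 in 01:00:03 = 3.4/s Avg: 2942
"""

def last_throughput(text):
    tp = None
    for line in text.splitlines():
        if 'Summariser: summary' not in line:
            continue
        m = re.search(r'= *([\d.]+)/s', line)
        if m:
            tp = float(m.group(1))   # keep overwriting: the last match wins
    return tp

tp = last_throughput(SAMPLE)
print(tp)            # 3.4
print(tp >= 1.0)     # True -> the CI gate passes
```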
{ "domain": "codereview.stackexchange", "id": 23925, "tags": "shell, awk" }
Physical interpretation of the Reissner-Nordström metric
Question: The Reissner-Nordström metric (in spherical coordinates and $c = 1$) differs from the Schwarzschild metric in an additive term $$\frac{GQ^2}{4\pi\epsilon_0 r^2},$$ so the metric is $$ds^2 = \left(1 - \frac{2GM}{r} + \frac{GQ^2}{4\pi\epsilon_0 r^2}\right)dt^2 - \left(1 - \frac{2GM}{r} + \frac{GQ^2}{4\pi\epsilon_0 r^2}\right)^{-1} dr^2 - r^2d\Omega^2.$$ This has an effect of "repulsion". Why does adding more energy (by the addition of charge to the black hole) have this repulsive effect? What is the physical meaning of adding charge to a black hole? Answer: In the Schwarzschild metric all the mass is at the centre of the black hole. So the stress-energy tensor is zero everywhere except at the singularity (where it's undefined). In the Reissner-Nordström metric we still have the mass at the centre, but we also have the electrostatic field that exists both inside and outside the event horizon. This makes an additional contribution to the stress-energy tensor that is not present in the Schwarzschild geometry. Suppose you consider a sphere of radius $r_-$ and ask how much of the electrostatic field is inside the sphere and how much is outside. Obviously at $r_- = 0$ all the field, and all the energy in it, is outside the sphere, and if you don't mind a rather loose metaphor you can think of the part of the field outside the sphere as pulling the surface of the sphere outwards. This creates an inner horizon that grows outwards from $r = 0$ as you increase the energy in the electrostatic field. The radius of the inner horizon is at the value of $r$ where the field energy inside and outside the horizon balance out. This happens at the radius: $$ r_- = \tfrac{1}{2}\left(r_s - \sqrt{r_s^2 - 4r_Q^2}\right) $$ where $r_s$ is the Schwarzschild radius and $r_Q^2 = Q^2G/4\pi\varepsilon_0 c^4$. For the outer horizon we get an analogous but opposite effect. The outer radius starts at the usual Schwarzschild event horizon radius of $r = 2M$. 
When we introduce an electrostatic field some of the field is inside the event horizon and some is outside. Because the field falls with $r$ it turns out that the net effect is that the field inside the horizon dominates and pulls the horizon inwards to a radius: $$ r_+ = \tfrac{1}{2}\left(r_s + \sqrt{r_s^2 - 4r_Q^2}\right) $$ When $r_s^2 = 4r_Q^2$ the two horizons merge and disappear, leaving a naked singularity. However this is thought to be unphysical and the charge can never get that high. One last comment while I'm here: the Schwarzschild geometry does not literally have a mass $M$ at the centre. The point $r = 0$ is singular and the geometry is not defined there. The mass is actually zero everywhere the Schwarzschild metric applies. However we can associate a mass called the ADM mass with the geometry, and the parameter $M$ is actually this ADM mass. The same applies to the charge $Q$. This is not literally the charge at the singularity, it is the charge associated with the electrostatic field.
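The two horizon formulas are easy to explore numerically; geometrized-style units with $G=c=1$ and $1/4\pi\varepsilon_0=1$ are an assumption of the sketch, and the sample masses and charges are arbitrary:

```python
import math

def horizons(M, Q, G=1.0, c=1.0, k=1.0):
    """Inner/outer Reissner-Nordstrom horizon radii (k = 1/(4*pi*eps0)).
    Returns None past the extremal bound, i.e. a naked singularity."""
    rs = 2 * G * M / c**2
    rQ2 = k * G * Q**2 / c**4
    disc = rs * rs - 4 * rQ2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return 0.5 * (rs - root), 0.5 * (rs + root)

print(horizons(1.0, 0.0))   # (0.0, 2.0): uncharged -> Schwarzschild radius
print(horizons(1.0, 1.0))   # (1.0, 1.0): extremal, the horizons merge
print(horizons(1.0, 1.1))   # None: super-extremal (believed unphysical)
```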
{ "domain": "physics.stackexchange", "id": 26814, "tags": "general-relativity, black-holes, event-horizon, singularities, reissner-nordstrom-metric" }
How does magnetic flux conservation work for a semi-infinite solenoid?
Question: From the equation for $\Phi_B$ for a region $$\Phi_B=\int_Vd^3r \,\nabla\cdot B=0,$$ one gets a sense of magnetic flux conservation, just like electric flux in a region of space without charge. However, consider an infinite solenoid cut in half. It's easy to see that the magnetic flux at the cut open surface is only half of that inside. How is that possible? Where did the magnetic flux go at the open end of the semi-infinite solenoid? Answer: "Where did the magnetic flux go at the open end of the semi-infinite solenoid?" It escaped through the curved surface of the solenoid near the geometrical "North pole" end of the solenoid. Roughly half the flux that enters the half-solenoid from the other half leaves the half-solenoid through its open end; the other half leaves through the curved surface.
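The "half the flux" statement matches the standard on-axis result for a semi-infinite solenoid: the axial field drops to exactly half its interior value at the open end. A sketch in units where $\mu_0 n I = 1$ (the radius is an arbitrary choice):

```python
import math

# on-axis field of a semi-infinite solenoid occupying z <= 0, radius R,
# in units where mu0 * n * I = 1 (standard textbook result)
def B_axis(z, R=1.0):
    return 0.5 * (1.0 - z / math.sqrt(z * z + R * R))

print(B_axis(-50.0))  # approx 1.0: deep inside
print(B_axis(0.0))    # 0.5: exactly half at the open end
print(B_axis(50.0))   # approx 0.0: far outside
```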
{ "domain": "physics.stackexchange", "id": 63242, "tags": "homework-and-exercises, electromagnetism, magnetic-fields" }
Minimal distance to an obstacle using point clouds
Question: Hello, I would like to know the minimal distance from the robot platform to some obstacle. I have a laser, a Kinect and an IMU as well, but would like to use the Kinect. I think the PCL library is a good start, as I don't need any transform of the depth images. But I'm an absolute beginner in this and need some help to start. So I need to install the PCL library, yes? Then I have to find the point in the point cloud with the maximal or minimal values that corresponds to the furthest or closest point of the object, yes? Then how do I calculate the distance between that point and the robot platform? Any help? Thanks Originally posted by Astronaut on ROS Answers with karma: 330 on 2013-06-27 Post score: 0 Answer: You can use the openni_launch package for obtaining the Kinect pointcloud. Then write a node that subscribes to the topic with a callback pcl::PointCloud<pcl::PointXYZ>. ROS will do the conversion for you. Then just access the Z values to get your minimal distance. One of the ways to implement this is shown here: http://wiki.ros.org/pcl_ros Originally posted by aswin with karma: 528 on 2014-03-06 This answer was ACCEPTED on the original site Post score: 1
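Once the cloud is available as plain (x, y, z) triples in the robot's frame (e.g. inside the subscriber callback, after any needed tf transform), the minimal distance is a one-liner; the points below are made up for illustration:

```python
import math

# assuming the Kinect cloud has already been converted to (x, y, z) triples
# in the robot's base frame (e.g. via the ROS callback plus a tf transform)
def min_obstacle_distance(points):
    return min(math.sqrt(x*x + y*y + z*z) for (x, y, z) in points)

cloud = [(0.5, 0.1, 1.2), (0.0, 0.0, 0.8), (1.0, -0.3, 2.0)]
print(min_obstacle_distance(cloud))  # approx 0.8: the closest point
```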
{ "domain": "robotics.stackexchange", "id": 14735, "tags": "ros" }
Linear correction for projectile
Question: I am working on a forward model for 3D projectile motion which includes effects from constant wind and Magnus force from spin which changes as a function of time. The input to the model is the launch vector and spin axis/angle. This part of the model is done and works as expected. However, the modeled final positions of the projectile (or rather their positions as they cross an arbitrary plane) differ from known locations substantially. This is certainly due to the limitations of the assumptions of my model. So I am looking to insert a simple linear correction into my model which would shift the projectile's path so that it crosses the arbitrary plane coincident with the known location. How do I apply a linear correction so the projectile's trajectory passes through the known location? Do I forward model the trajectory, collect the 3D points, and then apply the correction? Or can I include a translation that varies as a function of the distance perpendicular to the arbitrary plane? This forward model will be run several million times, so some consideration for a computationally efficient solution will be useful. Answer: I see two ways to do this, depending on your needs. Corrective displacement You could run the simulation without any correction to generate the trajectory $\vec{x}(t)$. Suppose the error between the final position and the target is $\vec{e}$, then you can add a linear displacement to the trajectory: $$\vec{x}(t) \rightarrow \vec{x}(t) + \vec{e}\frac{t}{T}$$ where $T$ is the time-of-flight of the trajectory. However, this approach is effectively changing the velocity of the trajectory without correctly modifying the Magnus force, so it would only be suitable if $\vec{e}$ is reasonably small (or if you don't care too much about the physics). Corrective force This would be more physically sound but is more complicated: apply a constant force $\vec{f}$ to the trajectory during the entire flight. 
However, since you have the Magnus force acting, I don't see an analytical way to compute $\vec{f}$ (but I could be wrong), in which case you will need to solve it numerically, i.e. by finding $\vec{f}$ that minimises $|\vec{e}|$ but without derivatives. There are lots of methods for tackling this, see Wikipedia: Derivative-free optimization. Personally I would use coordinate descent for this problem.
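The corrective-displacement option is straightforward to apply to a sampled trajectory; a sketch assuming uniformly spaced samples in time (the path and target values are made up):

```python
# shift a sampled trajectory by e * t/T so it ends on the known location
def correct_trajectory(points, target):
    """points: list of (x, y, z) samples, assumed uniformly spaced in time."""
    end = points[-1]
    err = tuple(t - e for t, e in zip(target, end))
    n = len(points) - 1
    return [tuple(p[k] + err[k] * (i / n) for k in range(3))
            for i, p in enumerate(points)]

path = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.5, 0.0), (3.0, 1.2, 0.0)]
fixed = correct_trajectory(path, (3.3, 1.0, 0.1))
print(fixed[0])    # (0.0, 0.0, 0.0): launch point unchanged
print(fixed[-1])   # (3.3, 1.0, 0.1): now passes through the known location
```

Since the correction is a closed-form shift of precomputed points, it adds essentially nothing to the per-run cost, which matters for the several-million-run use case.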
{ "domain": "physics.stackexchange", "id": 42500, "tags": "newtonian-mechanics, kinematics, projectile, aerodynamics" }
What is the computational complexity of "real-life" regular expressions?
Question: Regular expressions in the sense of being equivalent to regular (Chomsky type 3) languages have concatenation xy, alternation (x|y), and the Kleene star x*. "Real-life" regular expressions as used in programming usually have a lot more operations available; amongst others, quantification x{n}, negation [^x], positive and negative lookahead x(?=y), or back-reference \n. There is a famous post on SO stating that regular expressions cannot be used to parse HTML for the reason that HTML is not a regular language. My question is: Is this accurate? Do "real-life" regular expressions, say the selection defined in the Java docs, really have the same expressive power as regular expressions as understood in formal language theory; or do the additional constructs, although possibly not strong enough to capture HTML and the like, put common regular expressions further up on the Chomsky scale than just Type 3 languages? I would imagine the proof of the computational equality of the two would amount to showing that each operation available for the common regexp is just syntactic sugar and can be expressed by means of the 3 basic operations (concatenation, alternation, Kleene star) alone; but I am finding it hard to see how one would e.g. simulate back-reference with classic regexes alone. Answer: The following extended regular expression matches the language $\{ ww : w \in \Sigma^* \}$: $$ \texttt{^\\(.*\\)\\1\\\$} $$ This language is neither regular nor context-free. Matching using extended regular expressions is NP-complete; see for example this paper, which discusses efficient algorithms in some special cases.
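Python's re engine supports exactly this backreference, so the claim is easy to try out:

```python
import re

# backreferences take "real-life" regexes beyond regular languages:
# this pattern recognises { ww : w in Sigma* }, which is not even context-free
ww = re.compile(r'^(.*)\1$')
print(bool(ww.match('abcabc')))  # True  ('abc' doubled)
print(bool(ww.match('abcabd')))  # False
print(bool(ww.match('')))        # True  (w is the empty string)
```

Note that this extra power has a price: as the answer says, matching with backreferences is NP-complete in general, whereas classical regular expressions can be matched in linear time.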
{ "domain": "cs.stackexchange", "id": 16487, "tags": "formal-languages, regular-expressions, chomsky-hierarchy" }
How to connect ROS to a MQTT broker
Question: Hi all, I'm totally new to ROS, and I want ROS to communicate with an MQTT broker through the TCP protocol. I have seen the TCPROS package but it needs the md5 sum, which I don't have. Is it possible to use TCPROS without the md5 sum? How to do it? Thanks Originally posted by anhar on ROS Answers with karma: 21 on 2015-02-11 Post score: 2 Original comments Comment by Johan on 2015-03-21: Have you solved your issue? Comment by Meet H Thakkar on 2015-10-06: Is there a project already being worked out? I would like to write this package/process. Or contribute to it. Comment by jarvisschultz on 2015-10-06: Please don't add answers that are not answers; it is preferred to ask a new question. I found this ros-users thread that discusses this package. Perhaps it is worth looking into. Comment by godfreap on 2015-10-27: I'm new to ROS, but I have written a rospy script that changes messages from MQTT to ROS and vice versa. I have a few problems with it (mostly related to services being dumb and unable to send ROS images over them), but will post it once I find out how (just joined the site).
{ "domain": "robotics.stackexchange", "id": 20853, "tags": "ros" }
Can we measure comets in a similar way to exoplanets?
Question: A 2 km comet at 40 AU can dim a sun-sized star at 100 light years by 3%, and a 10 km comet can cause an eclipse (is that right?). So would 2, 3, or 10 km objects be easy to measure via star dimming if we had enough telescopic data from space? Can we measure lines of stars that are eclipsed/dimmed in sequence by transiting comets? Given 1000 frames of a small patch of space, where all the images are recorded into a 3D image stack (a 3D array, a voxel) with X, Y and Time as dimensions, then an Intel processor can compare 20 million voxels per second of that collection of images to search for vector information of aligned patterns of star dimming, i.e. searching for vectors of comets. If that process is repeated for many patches of space, perhaps we can detect many comets, and I don't know the real maths involved, whether a satellite of that design would realistically only find 1 comet, or 5000? It's for a theoretical comet apprehension system which uses 2/3 identical satellites, each comprising 100 TB of SSD and a 20 Mflops processor, to search for vector information in the images. Some kind of fairy idea going around in my head, because of Oumuamua and because I work on voxels and I know that image collections from space can be stacked into 3D arrays and be searched for vector patterns at 2 million voxels per second on a 200W PC, an idea that I am confusedly astrally-projecting as a basis to find comets. edit- A quantitative answer would be cool, say if the voids between stars on the Milky Way arm are 8 times the width of the comet, the comet would on average hit 2 stars every 8x8 frames. Answer: The thing about detecting exoplanets by eclipses is that the eclipses are repeatable. You have the same star, whose light curve dims in the same way with each eclipse, which gives you information about the planet's orbital period and its apparent size relative to the star. 
The Kepler mission observed many fluctuations in stellar brightness that were not associated with exoplanets, but instead with changes in the instrument, fluctuations in the actual brightness of the star, probably some other explanations that I can't think of off the top of my head, and actual mysteries. Basically, if you can only observe it once, then you can't claim to explain it. Occultations of distant stars by planets or comets don't give you a light curve: the star goes from "on" to "off". So a single occultation doesn't give you any information about the size, speed, or direction of the object doing the occulting. And the occultations are sufficiently rare that I think you'd be extremely unlikely to find a "line" of them by chance --- though that would make a fun data-mining project for an astronomy student. What you can do, if you have an object that you already know about, is to make many observations of the same occultation. That's the mission of the International Occultation Timing Association. If you click through to their web page you'll see an image of many stacked light curves. Each line there shows the observed brightness of a particular star observed from a particular location at the same time. You can see clearly that some observers saw the star disappear very briefly, some for longer, and some (who might have been predicted to be in the very center of the path) saw no occultation at all. Assembling all these individual observations, properly synchronized, clearly shows the shadows of the two parts of a binary asteroid, each of order 50 km in diameter. Note that 90 Antiope is a main-belt asteroid. The angular diameter decreases linearly as the distance increases, so an object the same physical size orbiting out by Neptune (perhaps fifteen times farther) would need fifteen times the density of observing locations to create an image with the same detail.
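The voxel-stack search the question sketches can at least be prototyped. Below is a toy illustration, not an astronomy pipeline, and every name in it is made up for the example: assuming each frame has already been reduced to dimming detections $(x, y, t)$, an object occulting stars in sequence at constant apparent velocity leaves detections that are collinear in $(x, y, t)$ space, which a brute-force scan can flag.

```python
from itertools import combinations

def collinear_triples(events, tol=1e-9):
    """Find triples of dimming events (x, y, t) lying on a straight line
    in (x, y, t) space -- the signature of a single occulter crossing
    several stars in sequence at constant apparent velocity."""
    hits = []
    for a, b, c in combinations(events, 3):
        # b lies on the line a->c iff (b - a) is parallel to (c - a),
        # i.e. their cross product vanishes.
        ab = tuple(b[i] - a[i] for i in range(3))
        ac = tuple(c[i] - a[i] for i in range(3))
        cross = (ab[1] * ac[2] - ab[2] * ac[1],
                 ab[2] * ac[0] - ab[0] * ac[2],
                 ab[0] * ac[1] - ab[1] * ac[0])
        if all(abs(v) < tol for v in cross):
            hits.append((a, b, c))
    return hits

# Three stars dimmed along a line, plus one unrelated event.
events = [(0, 0, 0), (1, 2, 1), (2, 4, 2), (5, 1, 3)]
print(collinear_triples(events))  # [((0, 0, 0), (1, 2, 1), (2, 4, 2))]
```

A real search would replace the O(n³) scan with something like a Hough-style vote over candidate velocities, but the geometric test is the same.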
{ "domain": "astronomy.stackexchange", "id": 2625, "tags": "telescope, exoplanet, comets, apparent-magnitude, oort-cloud" }
shortest distance vs shortest path
Question: This question might be redundant but I want to verify my understanding further. Suppose I have this linear directed graph: $$S \overset{2}{\to} A \overset{1}{\to} B \overset{3}{\to} E \overset{1}{\to} D.$$ Here, $S$ is the source, $D$ is the target, and the numbers indicate the edge weights: When talking about the shortest path, do we only count the number of edges? So in this example, the shortest path from $S$ to $D$ has 4 edges. Thus, when talking about the shortest distance, we find the sum of the edge weights. So in this example, the shortest distance from $S$ to $D$ is 7. The reason why I elaborate this question using an example is that I've seen plenty of answers when I google this question, but it seems they treat the words path and distance in a similar way, which is counting the number of edges (hops). Answer: If it is ambiguous (as in this case), a well-written text should indicate what it means. If it doesn't, you might have to guess. Sometimes people are sloppy or imprecise, or rely on you to infer what they mean from context. There's certainly no guarantee that people will use language the way you describe -- it's not a standard assumption you can make. In a graph without weights on the edges, the situation is clear, and we will use "shortest path" to mean the path with the fewest edges, and use "distance" to mean the number of edges. In such a graph, the notion of number of edges and sum of edge weights coincides (since we assume each edge has weight 1), so there's no distinction to draw. Thus, in that situation, we'd use the two words similarly. In a graph with weights (lengths) on the edges, the most common situation is probably to use "shortest path" to mean the path with the smallest sum of edge weights, and use "distance" to refer to the length (sum of edge weights) of the shortest path. In other words, when there are weights (lengths) on the edges, the most common situation is that we never refer to anything involving the number of edges on the path.
But sometimes we have a graph with weights (lengths) on the edges, and we want to talk about two different distance metrics: the number of edges on the path, and the sum of the weights on the edges on the path. Then it's anyone's guess what terminology people will use; I don't think there's any accepted standard. In this situation, I think you have to hope that the writer defines their terms, or try to infer their meaning from context.
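The two readings can be computed side by side on the question's own graph. A minimal sketch with illustrative function names: BFS counts edges, Dijkstra sums weights.

```python
import heapq
from collections import deque

graph = {  # the linear graph from the question: node -> [(neighbor, weight)]
    'S': [('A', 2)], 'A': [('B', 1)], 'B': [('E', 3)], 'E': [('D', 1)], 'D': []
}

def fewest_edges(graph, src, dst):
    """BFS: shortest path measured in number of edges (hops)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v, _ in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.get(dst)

def smallest_weight(graph, src, dst):
    """Dijkstra: shortest path measured as the sum of edge weights."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist.get(dst)

print(fewest_edges(graph, 'S', 'D'))    # 4 (edges)
print(smallest_weight(graph, 'S', 'D')) # 7 (total weight)
```

On this chain both measures pick the same (only) path, but the numbers differ: 4 edges versus total weight 7, matching the question.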
{ "domain": "cs.stackexchange", "id": 8105, "tags": "graphs" }
How to select 'cutoff' of classifier probability
Question: I have recently used xgboost to conduct binary classification in an NLP problem. The idea was to identify if a particular article belonged to an author or not, a pretty standard exercise. The results are output as a probability between 0 and 1, and there is the occasional article that is completely misclassified. I would like to know if there is a statistical approach that gives me a confidence interval for the probability outputs (for example, if I consider all articles with a prediction of 0.4 I will get 95% of the articles that belong to the author), or something that helps me make decisions regarding the cut-offs. Answer: What you're looking for is something along the lines of an ROC curve: Using the threshold as a decision parameter, you can observe the trade-off between FPR (False Positive Rate: how many of the articles not belonging to the author will be wrongly attributed to him) and TPR (True Positive Rate, aka recall: how many of the articles which are really by the author will be classified as such). When the parameter is at one end, you'll classify all documents as belonging to the author (100% recall, but pretty bad precision), and at the other end, you'll have 100% precision but pretty bad recall. The plot will allow you to decide on a value that satisfies your requirements (i.e. how much your precision will suffer when you want 95% recall). You can select it based on your desired value in one metric (e.g. 95% recall), but really I'd just plot it and have a look. You can do it in SKLearn with plot_roc_curve.
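plot_roc_curve is the convenient route in scikit-learn (recent releases expose the equivalent as RocCurveDisplay), but the bookkeeping behind the curve is simple enough to sketch without any library, which also makes the "pick a threshold for a recall target" step explicit. The scores and labels below are made up for illustration:

```python
def roc_points(scores, labels):
    """TPR/FPR at every distinct score threshold (label 1 = positive)."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((thr, fp / neg, tp / pos))  # (threshold, FPR, TPR)
    return points

def threshold_for_recall(scores, labels, target=0.95):
    """Highest threshold whose TPR (recall) reaches the target."""
    for thr, fpr, tpr in roc_points(scores, labels):
        if tpr >= target:
            return thr, fpr, tpr
    return None

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,    0,   1,   0,   0,   0]
print(threshold_for_recall(scores, labels, target=0.8))  # (0.55, 0.2, 0.8)
```

threshold_for_recall walks the ROC points from the strictest threshold down and reports the first one meeting the recall target, along with the FPR you pay for it.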
{ "domain": "datascience.stackexchange", "id": 7906, "tags": "classification, metric" }
Ros_groovy_Workspace_Creation Catkin_Make Error
Question: There seems to be an error when using catkin_make to create the ROS workspace using the groovy distribution. When I type the catkin_make command to do the environment setup in ~/catkin_ws/src, this is the error message I see: CMake Error at /home/shaurya/ros_ws/devel/share/baxter_sim_kinematics/cmake/baxter_sim_kinematicsConfig.cmake:165 (find_package): Could not find module Findbaxter_core_msgs.cmake or a configuration file for package baxter_core_msgs. Adjust CMAKE_MODULE_PATH to find Findbaxter_core_msgs.cmake or set baxter_core_msgs_DIR to the directory containing a CMake configuration file for baxter_core_msgs. The file will have one of the following names: baxter_core_msgsConfig.cmake baxter_core_msgs-config.cmake Call Stack (most recent call first): /opt/ros/groovy/share/catkin/cmake/catkinConfig.cmake:75 (find_package) baxter_simulator/baxter_sim_hardware/CMakeLists.txt:4 (find_package) -- Eigen found (include: /usr/include/eigen3) -- Configuring incomplete, errors occurred! Invoking "cmake" failed Originally posted by GoBaxter on ROS Answers with karma: 15 on 2014-09-22 Post score: 0 Answer: Your build is failing to find a dependency in either your system installation or your local workspace. The dependency in this case is baxter_core_msgs. You should either install it to your system or add it to your workspace to compile. Originally posted by tfoote with karma: 58457 on 2014-09-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19480, "tags": "ros" }
What is $\epsilon(\infty)$ in $\epsilon(\omega) = \epsilon(\infty)[1-\bar\omega_p^2/\omega^2]$, in Kittel's solid state book?
Question: I've been reading the chapter on plasmon polaritons and polarons, and I have come to a halt trying to understand this part of the chapter: The dielectric function of the free electron gas follows from (6) and (7): $$ \text{(CGS)}\quad \epsilon(\omega) = 1-\frac{4\pi ne^2}{m\omega^2}; \qquad \text{(SI)}\quad \epsilon(\omega) = 1-\frac{ne^2}{\epsilon_0m\omega^2}. \tag{8} $$ The plasma frequency $\omega_p$ is defined by the relation $$ \text{(CGS)}\quad \omega_p^2=4\pi ne^2/m; \qquad \text{(SI)}\quad \omega_p^2 = ne^2/\epsilon_0m. \tag{9} $$ A plasma is a medium with equal concentration of positive and negative charges, of which at least one charge type is mobile. In a solid the negative charges of the conduction electrons are balanced by an equal concentration of positive charge of the ion cores. We write the dielectric function (8) as $$ \epsilon(\omega) = 1-\frac{\omega_p^2}{\omega^2},\tag{10}$$ plotted in Fig. 1. If the positive ion core background has a dielectric constant labeled $\epsilon(\infty)$ essentially constant up to frequencies well above $\omega_p$, then (8) becomes $$ \epsilon(\omega) = \epsilon(\infty)-4\pi ne^2/m\omega^2 = \epsilon(\infty)\left[1-\bar\omega_p^2/\omega^2\right],\tag{11} $$ where $\bar\omega_p$ is defined as $$ \bar\omega_p^2 = 4\pi ne^2/\epsilon(\infty)m. \tag{12} $$ Notice that $\epsilon=0$ at $\omega=\bar\omega_p$. I just need to know how we arrive at equation no. 11 and equation no. 12. In particular, what does the $\infty$ in $\epsilon(\infty)$ signify? Answer: Here $\epsilon(\infty)$ denotes the dielectric constant of the ion background. The excerpt might be clearer if modified a bit: If the positive ion core background has a dielectric constant, labeled $\epsilon(\infty)$, which is essentially constant up to frequencies well above $\omega_p$, then ... In this framework you are splitting the system as a plasma that consists of fixed ions and mobile conduction electrons. 
However, it's important to realize that the background of fixed ions still contains electrons: in copper, for example, the $3d$ electrons are mobile, but the core electrons - in the $1s$, $2s$, $2p$, $3s$ and $3p$ shells - are still pretty much bound to the nucleus. These are the "ionic cores" of the excerpt - the nucleus plus the inner electrons. More importantly, these ionic cores can still be polarized by an external field, since the inner electrons can be displaced slightly with respect to the nucleus. This adds an additional term to the polarizability and therefore to the global dielectric constant. In addition, Kittel is making the assumption that the frequency dependence of said ionic cores is relatively flat in the region of interest. This is a reasonable approximation: this polarizability does depend on frequency, but only in regimes where you can 'talk' to the excitations of this core, which tend to be at higher energies than those of the valence electrons. This means, then, that you have some additional term in the permittivity, which you don't know but which is reasonably flat, so you can replace the background of $1$ with some suitable constant, which Kittel labels $\epsilon(\infty)$. Everything after that (including e.g. equation 12) is just pushing symbols around. As to why Kittel chose $\epsilon(\infty)$ as the symbol to use, though - you'd have to ask Kittel.
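For completeness, the step from (8) to (11) and (12) is a single line of algebra: replace the vacuum background $1$ by $\epsilon(\infty)$ and factor it out (CGS): $$\epsilon(\omega) = \epsilon(\infty) - \frac{4\pi n e^2}{m\omega^2} = \epsilon(\infty)\left[1 - \frac{4\pi n e^2}{\epsilon(\infty)\, m\, \omega^2}\right] = \epsilon(\infty)\left[1 - \frac{\bar\omega_p^2}{\omega^2}\right], \qquad \bar\omega_p^2 \equiv \frac{4\pi n e^2}{\epsilon(\infty)\, m} = \frac{\omega_p^2}{\epsilon(\infty)},$$ so the screened plasma frequency is just the bare one reduced by the background polarizability, and indeed $\epsilon(\bar\omega_p) = 0$ as stated.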
{ "domain": "physics.stackexchange", "id": 34469, "tags": "condensed-matter, solid-state-physics" }
Does this pair of diagrams need a relative sign due to Fermi statistics?
Question: Consider the following two diagrams involving a loop correction to some process $q \bar q \rightarrow q \bar q \gamma$. In a special case where I assume that the momenta $p_1$ and $p_2$ are identical (that is, $p_1=p_2$), would these two diagrams come with a relative minus sign in determining their contribution to the amplitude for such a process? The two fermion lines in the upper part of each diagram represent a quark and an antiquark. Because these are not identical species, I think the statistics is not in play here, but I am not certain. Answer: Whether $p_1$ and $p_2$ are equal or not is irrelevant: if the two particles were the same, the idea would be that we would not know whether to assign $p_1$ to one and $p_2$ to the other, or the other way around. But they are not identical particles in your case, so no. Beware, though, that if you measure e.g. $\gamma + \text{jet}$, and a jet coming from the hadronisation of a quark is not distinguished from a jet coming from the hadronisation of an antiquark, you would need to add both contributions, but at the cross-section level, not at the amplitude level.
{ "domain": "physics.stackexchange", "id": 44434, "tags": "quantum-field-theory, feynman-diagrams, fermions, quantum-chromodynamics, spin-statistics" }