Is feature importance in XGBoost or in any other tree based method reliable?
Question: This question is quite long; if you already know how feature importance works for tree-based methods, I suggest skipping to the text below the image. Feature importance (FI) in tree-based methods is computed by measuring how much each variable decreases the impurity of a tree (for single trees) or the mean impurity (for ensemble methods). I'm almost sure the FI for single trees is not reliable, due to the high variance of trees, mainly in how the terminal regions are built. XGBoost is empirically better than a single tree and arguably "the best" ensemble learning algorithm, so we will focus on it. One of the advantages of XGBoost is its regularization to avoid overfitting; XGBoost can also learn linear functions as well as linear regression or linear classifiers do (see Didrik Nielsen). My trouble with its interpretation came up because of the image below: the upper part shows the FI in XGBoost for each variable, and the lower part the FI (the coefficients) of a logistic regression model. I know that FI in XGBoost is normalized to the range 0-1 and logistic regression coefficients are not, but the functions usually used for normalization are bijective, so that shouldn't compromise the comparison between the FI of the two models. Logistic regression got the same accuracy (~90%) as XGBoost in cross-validation and on the test set. Note that the three most important variables for XGBoost are v5, v6, v8 (the importances are listed per variable), while for the logistic model they are v1, v2, v3, so the two models disagree completely. I'm sure the interpretation of the logistic model is reliable, so does this difference mean the XGBoost interpretation is unreliable? And if so, would that hold only in linear situations or in the general case? Answer: Your main problem (it turns out, thanks for following up in the comments) is that you used the raw coefficients from the logistic regression as a measure of importance, but the scale of the features makes such comparisons invalid. 
You should either scale the features before training, or process the coefficients after. I find it helpful to emphasize that feature importances are generally about interpreting your model, which hopefully, but not necessarily, in turn tells you about the data. So in this case, it could be that some set of features has predictive interaction, or that some feature's relationship with the target is nonlinear; these will be found important for the xgboost model, but not for the linear one. Aside from that, impurity-based feature importance for tree models has received some criticism.
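As a minimal sketch of the "process the coefficients after" option (with made-up coefficients and feature spreads, not the asker's data): multiply each raw coefficient by its feature's standard deviation, so the "importance" reflects the effect of a one-standard-deviation change, then normalize to sum to 1 the way xgboost's FI does.

```python
# Hypothetical raw logistic-regression coefficients and the standard
# deviations of the corresponding (unscaled) features.
coefs = [2.0, 0.05, 0.5]
stds = [0.1, 40.0, 1.0]  # the features live on very different scales

# Raw |coef| would rank feature 0 first, but a one-unit change in
# feature 1 is tiny relative to its spread.  Rescaling by the standard
# deviation makes the coefficients comparable across features.
importance = [abs(c) * s for c, s in zip(coefs, stds)]
total = sum(importance)
importance = [v / total for v in importance]

print(importance)  # feature 1 now dominates despite its small raw coefficient
```

Standardizing the features before fitting achieves the same thing at training time.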
{ "domain": "datascience.stackexchange", "id": 9837, "tags": "feature-selection, decision-trees, xgboost, feature-engineering, boosting" }
Top_p parameter in langchain
Question: I am trying to understand the top_p parameter in langchain (nucleus sampling) but I can't seem to grasp it. Based on this, we sort the probabilities and select the subset that exceeds p and concurrently has the fewest members possible. For example, for t1 = 0.05, t2 = 0.5, t3 = 0.3, t4 = 0.15 and top_p=0.75, we would select t2 and t3, right? If this is the case, what happens if top_p=0.001? We just need one token, and any one of these is enough. Do we select the biggest one (t2)? (Based on my experience this makes sense, since I tested top_p=0.001 on an LLM and the output was coherent; since we select only one token, if it were a random token with probability >0.001 the output would be garbage.) Answer: If top_p=0.75 we would select t2 and t3, right? → YES. If top_p=0.001? → We would select only t2. This is the original definition: The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution $P(x | x_{1:i-1})$, we define its top-$p$ vocabulary $V^{(p)} \subset V$ as the smallest set such that \begin{equation} \sum_{x \in V^{(p)}} P(x | x_{1:i-1}) \geq p. \end{equation} I would add something along the lines of "the result of the sum shall be maximal among all possible combinations". In practical terms, we can take a look at the original implementation: sorted_probs, sorted_indices = torch.sort(samp_probs, descending=True) cumulative_probs = torch.cumsum(sorted_probs, dim=-1) sorted_indices_to_remove = cumulative_probs > p sorted_indices_to_remove[:, 1:] = sorted_indices_to_remove[:, :-1].clone() sorted_indices_to_remove[:, 0] = 0 sorted_samp_probs = sorted_probs.clone() sorted_samp_probs[sorted_indices_to_remove] = 0 ... 
sorted_next_indices = sorted_samp_probs.multinomial(1).view(-1, 1) next_tokens = sorted_indices.gather(1, sorted_next_indices) next_logprobs = sorted_samp_probs.gather(1, sorted_next_indices).log() There, we can see that the first thing they do is sort the probabilities, and then they compute the cumulative probability distribution to find the cutoff point.
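The selection rule can be reproduced in a few lines of plain Python (a sketch of the same logic as the torch snippet above, using the question's example probabilities):

```python
def top_p_indices(probs, p):
    """Indices of the smallest set of tokens whose probabilities,
    taken in descending order, sum to at least p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:  # smallest set reaching p: stop as soon as we do
            break
    return sorted(kept)

probs = [0.05, 0.5, 0.3, 0.15]  # t1, t2, t3, t4 from the question

print(top_p_indices(probs, 0.75))   # [1, 2]  -> t2 and t3
print(top_p_indices(probs, 0.001))  # [1]     -> only t2, the most probable token
```

So with a tiny top_p the sampler always keeps (at least) the single most probable token, which is why the output stays coherent instead of turning to garbage.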
{ "domain": "datascience.stackexchange", "id": 12197, "tags": "machine-learning, nlp, sampling, artificial-intelligence" }
Is the Euclidean metric the only one invariant under Galilean Transformations?
Question: Is $$ds^2=dx^2+dy^2+dz^2$$ the only metric that is invariant under Galilean transformations? And if yes, how do you prove it? Answer: I am no expert in this area, but I think the following should be a good enough proof. Let us consider the general Galilean transformation: $$x'=Ax+a$$ where $A$ is a $3\times 3$ matrix and $a$ is a vector. Then, we can implement this transformation as $$y'=By$$ where $B$ is a $4\times 4$ matrix and $y$ is a vector in 4 dimensions such that $$B\equiv\begin{pmatrix}A&a\\0&1\end{pmatrix}\quad,\quad y\equiv\begin{pmatrix}x\\1\end{pmatrix}$$ Now we are looking for invariants under this transformation. Assume that $I$ is such an invariant of the form $$I=y^TKy$$ for a $4\times4$ matrix $K$. Then, the invariance requires $$I'=I\rightarrow y^TB^TKBy=y^TKy\rightarrow B^TKB=K$$ Let us take $K$ to be of the form $$K\equiv\begin{pmatrix}p&q\\r&s\end{pmatrix}$$ for a $3\times 3$ matrix $p$. Then, the invariance requires $$\begin{pmatrix}A^T&0\\a^T&1\end{pmatrix} \begin{pmatrix}p&q\\r&s\end{pmatrix} \begin{pmatrix}A&a\\0&1\end{pmatrix} =\begin{pmatrix}p&q\\r&s\end{pmatrix} $$ From this equation, we get the following: $$A^TpA=p\\ A^Tpa+A^Tq=q\\ a^TpA+rA=r\\a^Tpa+a^Tq+ra+s=s$$ Now, since $a$ and $A$ are independent, choosing $a=0$ in the second and third equations forces $q=r=0$. Hence we are left with $$A^TpA=p\\a^Tpa=0$$ Since $A$ can be an arbitrary $SO(3)$ transformation, the first equation implies that $p$ is proportional to the identity, so we can normalize $p=1$. But the second equation can then only be satisfied if $a=0$. That means that for the generic Galilean transformation there is no invariant quantity unless $p=1$ and $a=0$. Since $a$ should remain arbitrary, we somehow need to eliminate it. The straightforward method is to use $dx$ instead of $x$; hence the only invariant is $$I=dx^Tdx$$ which is $dx^2+dy^2+dz^2$ in component form.
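The conclusion can be sanity-checked numerically: under $x'=Ax+a$ with $A\in SO(3)$, the translation cancels in coordinate differences and the rotation preserves $dx^Tdx$, while $x^Tx$ itself is not invariant. A small sketch (the rotation angle, offset, and sample points are arbitrary choices for illustration):

```python
import math

# A rotation about the z-axis by an arbitrary angle, plus a translation:
# together a (spatial) Galilean transformation x' = A x + a.
theta = 0.7
A = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
a = [1.5, -2.0, 0.3]

def transform(x):
    """Apply x -> A x + a."""
    return [sum(A[i][j] * x[j] for j in range(3)) + a[i] for i in range(3)]

x1 = [0.2, 1.1, -0.5]
x2 = [2.0, -0.3, 0.8]

dx = [x2[i] - x1[i] for i in range(3)]
dx_new = [transform(x2)[i] - transform(x1)[i] for i in range(3)]

ds2 = sum(d * d for d in dx)          # dx^2 + dy^2 + dz^2 before ...
ds2_new = sum(d * d for d in dx_new)  # ... and after the transformation
# ds2 == ds2_new up to rounding: the translation cancels in the
# difference, and A^T A = I preserves the quadratic form.

xx = sum(v * v for v in x1)               # x^T x, by contrast,
xx_new = sum(v * v for v in transform(x1))  # is changed by the translation
```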
{ "domain": "physics.stackexchange", "id": 38081, "tags": "newtonian-mechanics, metric-tensor, inertial-frames, galilean-relativity" }
Trie data structure in Swift lang
Question: This is an implementation of a generic Trie data structure in Swift which, for the sake of an easy example, uses strings. It implements two operations, insert and search, so it can insert a text and look for substrings of that text, for example. I'd be interested in recommendations as to how correct the code is, but also how idiomatic it is relative to the Swift language. The Node<T> struct represents a node in the Trie<T> tree. Each node holds a dictionary that maps a key of type T to another Node<T>. This is the code: import Foundation struct Node<T: Hashable> { var children: [T : Node<T>] = [:] var isLeaf: Bool { children.isEmpty } mutating func addChild(withKey key: T) { guard children[key] == nil else { return } children[key] = Node() } func hasChild(withKey key: T) -> Bool { children[key] != nil } } struct Trie<T: Hashable> { public var root = Node<T>() mutating func addContent(of content: some Collection<T>) { var iter = content.makeIterator() addCotentHelper(&root, &iter) func addCotentHelper(_ node: inout Node<T>, _ iter: inout some IteratorProtocol<T>) { guard let key = iter.next() else { return } node.addChild(withKey: key) addCotentHelper(&node.children[key]!, &iter) } } private func containsPrefix(of content: some Collection<T>, at root: Node<T>) -> Bool { guard let first = content.first else { return true } if root.hasChild(withKey: first) { return containsPrefix(of: content.dropFirst(), at: root.children[first]!) 
} return false } public func hasContent(of word: some Collection<T>) -> Bool { return hasContentHelper(of: word, at: root) func hasContentHelper(of word: some Collection<T>, at root: Node<T>) -> Bool { guard !word.isEmpty else { return true } guard root.isLeaf == false else { return false } if containsPrefix(of: word, at: root) { return true } return root.children.values.contains(where: { child in hasContentHelper(of: word, at: child) }) } } func printTrie() { printTrieHelper(node: root, indent: 1) func printTrieHelper(node: Node<T>, indent: Int) { let leading_indent = "| " let last_indent = "|-" for (k, v) in node.children { print( String( repeating: leading_indent, count: indent - 1) + last_indent, terminator: "") print(k, terminator: "\n") printTrieHelper(node: v, indent: indent + 1) } } } } Answer: Access control Some types and methods are explicitly made public, but in order to make the code compilable and usable as an external library, all types and methods which are meant to be called from outside must be made public. Also, Trie needs a public init method. So the public interface should look like this: public struct Node<T> where T : Hashable { } public struct Trie<T> where T : Hashable { public init() public mutating func addContent(of content: some Collection<T>) public func hasContent(of word: some Collection<T>) -> Bool public func printTrie() } Another option is to make Node private to the Trie type: public struct Trie<T: Hashable> { private struct Node<T: Hashable> { ... } private var root = Node<T>() // ... } Minor remarks The generic type placeholder can be omitted if it is given from the context, i.e. in struct Node<T: Hashable>, var children: [T : Node<T>] = [:] can be simplified to var children: [T : Node] = [:] There is a typo in addCotentHelper(). To guard or not to guard The guard statement is useful to avoid the “if-let pyramid of doom” and to handle exceptional situations. 
But it should (in my opinion) not be used as a general “if not” statement. So it is perfectly fine here guard let key = iter.next() else { return } But these statements guard children[key] == nil else { return } children[key] = Node() guard !word.isEmpty else { return true } guard root.isLeaf == false else { return false } are easier to read and to understand as simple if statements: if children[key] == nil { children[key] = Node() } if word.isEmpty { return true } if root.isLeaf { return false } Simplifying the code The addContent() method uses recursion and passes Node instances as inout parameters around a lot. This gets much easier if we make Node a class (so that pointers to instances can be passed around) and let addChild() return the child node: class Node<T: Hashable> { var children: [T : Node] = [:] func addChild(withKey key: T) -> Node { if let child = children[key] { return child } else { let child = Node() children[key] = child return child } } // ... } public mutating func addContent(of content: some Collection<T>) { var node = root for key in content { node = node.addChild(withKey: key) } } Similarly, containsPrefix() can now use iteration instead of recursion, and creating slices of the content is no longer necessary: private func containsPrefix(of content: some Collection<T>, at root: Node<T>) -> Bool { var node = root for key in content { guard let next = node.children[key] else { return false } node = next } return true } In hasContentHelper(), the check for word.isEmpty is not needed because containsPrefix(of: word, at: root) will return true in that case. The check for root.isLeaf is also not needed, because root.children.values.contains will return false if there are no children. 
It is a matter of taste, but I find an explicit loop easier to read than calling root.children.values.contains() with a closure: for child in root.children.values { if hasContentHelper(of: word, at: child) { return true } } return false With the above changes the isLeaf property and the hasChild() method of Node are no longer needed. Naming Again a matter of personal taste, but I would use private func containsPrefix(_ content: some Collection<T>, at root: Node<T>) public func contains(_ word: some Collection<T>) -> Bool func containsHelper(_ word: some Collection<T>, at root: Node<T>) -> Bool public func print() The first three methods resemble the contains() method of collections. And “Trie” in printTrie() is a “needless word” because it just repeats the type, which is clear from the context. As an alternative to a print() method one can also implement the CustomStringConvertible or CustomDebugStringConvertible protocol, e.g. extension Trie: CustomDebugStringConvertible { public var debugDescription: String { // add code here ... } }
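The iterative shape the answer arrives at (reference-typed nodes, a loop that descends child by child for insert and prefix lookup, and a recursive substring search) translates directly; here is the same logic sketched in Python, with names chosen to mirror the Swift code:

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # key -> TrieNode, like [T : Node]

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def add_content(self, content):
        node = self.root
        for key in content:
            # setdefault plays the role of addChild(withKey:): reuse an
            # existing child or create a new one, and return it.
            node = node.children.setdefault(key, TrieNode())

    def contains_prefix(self, content, node=None):
        """Iterative descent: fail as soon as a key has no child."""
        node = node or self.root
        for key in content:
            if key not in node.children:
                return False
            node = node.children[key]
        return True

    def contains(self, word, node=None):
        """True if word occurs as a substring anywhere in the trie."""
        node = node or self.root
        if self.contains_prefix(word, node):
            return True
        return any(self.contains(word, child) for child in node.children.values())

t = Trie()
t.add_content("hello")
print(t.contains_prefix("hel"))  # True
print(t.contains("ell"))         # True: substring found below the root
print(t.contains("elo"))         # False
```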
{ "domain": "codereview.stackexchange", "id": 43623, "tags": "algorithm, swift, trie" }
Number of mincuts of a graph without using Karger's algorithm
Question: We know that Karger's mincut algorithm can be used to prove (in a non-constructive way) that the maximum number of possible mincuts a graph can have is $n \choose 2$. I was wondering if we could somehow prove this identity by giving a bijective (rather, injective) proof from the set of mincuts to another set of cardinality $n \choose 2$. No specific reason, it's just a curiosity. I tried doing it on my own but so far have not had any success. I would not want anyone to squander time over this, so if the question seems pointless I would request the moderators to take action accordingly. Best -Akash Answer: The $\binom{n}{2}$ bound was, I think, originally proven by Dinitz, Karzanov and Lomonosov in 1976, in "A structure for the system of all minimum cuts of a graph". Perhaps you can find what you're looking for in this paper, but I'm not sure if it's online.
{ "domain": "cstheory.stackexchange", "id": 3664, "tags": "ds.algorithms, graph-theory, co.combinatorics, graph-algorithms, max-flow-min-cut" }
C++ reactor bad implementation
Question: Hello folks. I have recently started writing software using modern C++ (11-14). I have been developing software for more than a decade and just wanted to broaden my skill set. I am practicing building some simple design components using modern C++. I don't have any friends or colleagues who know C++, and no one can review my practice problems. I would be very grateful if you could review a couple of my code snippets and provide your feedback. Thank you. Below is my recent implementation of the Reactor. Please criticize :) At the core of the reactor lies a thread called main_thread. The reactor will receive messages of type struct Message, which is defined in the Message.hpp file. Messages will be delivered using the virtual method waitForMessage. Users should be able to register their concrete event handlers, which are derived from the base class IEventHandler. The reactor will call OnMessage of the handler if the received message type matches the type that the IEventHandler was registered for. Inside AbstractReactor the handlers are wrapped in a class named MessageListener, and AbstractReactor keeps the MessageListeners inside a vector. Would a map be a better choice? I decided to use a vector so that MessageListeners can be sorted by the type of the message they are looking for, and we will be able to use binary search (this is what std::lower_bound is used for) rather than looping. One of the requirements was: a user should be able to call registerHandler and unregisterHandler from within the OnMessage routine of a concrete handler. I use push_back for every handler which is registered while running in the context of main_thread, and sort after the message has been processed. When registerHandler is called outside the main_thread context, it will search for the appropriate position in the vector where the handler should be inserted and will insert it at that position. 
If unregisterHandler is called while we are in the main_thread context, the listener will not be removed from the vector immediately. The flag m_handlersBeenUnregistered will be set, and only after the message is processed will we check which of the listeners have to be removed and call the erase method. Thank you. File AbstractReactor.cpp #include <mutex> #include <algorithm> #include "AbstractReactor.hpp" #include "IEventHandler.hpp" int MessageListener::m_IdCount = 0; AbstractReactor::AbstractReactor() {} AbstractReactor::~AbstractReactor() { if (!m_stopThread) stopThread(); } void AbstractReactor::mainThread() { while(!m_stopThread) { /* Block until a message becomes available * mainThread now owns the message */ std::unique_ptr<Message> m_ptr = waitForMessage(); if (m_ptr.get() == nullptr) { /* Reactor process may have received a signal to abort */ /* TODO: this may be reported by calling some error handler */ continue; } /* Lock the list of listeners. I am using a recursive mutex, because * we may call the registerHandler and unregisterHandler functions while invoking a handler function of the listener */ std::unique_lock<std::recursive_mutex> lock; /* All handler entries are sorted by the message type handlers are looking for * find the position of the first message listener whose type matches the type of the message. 
We may have multiple message listeners registered * for the same message type */ m_searchValue.m_type = m_ptr->type; m_searchValue.m_handleId = -1; auto pos = std::lower_bound(m_listeners.begin(), m_listeners.end(), m_searchValue, [](const MessageListener& one, const MessageListener& two) { if (one.m_type < two.m_type) return true; else return false; } ); if (pos == m_listeners.end()) { /* We couldn't find any message listener which was registered for this message type * we will keep listening for new events * We may add some statistics for future reference */ continue; } /* Set the flag that we are processing a message * When this flag is set registerHandler will not try to insert a handler at the proper position, rather it will push_back a handler to the end of the vector. * All newly registered handlers will be at the end of the list * When the reactor finishes calling handlers it will sort its handlers table again.*/ m_processing = true; auto size = m_listeners.size(); auto i = pos - m_listeners.begin(); while(i < static_cast<int>(size) && m_listeners[i].m_type == m_ptr->type){ /* Handlers are user-defined. * If a listener fails it shouldn't affect our Reactor */ try { m_listeners[i].m_hptr->OnMessage(m_ptr.get()); } catch(...) { /* We may need to report an exception. 
* Reactor should not have any error handling but it will somehow need to log this error */ } i++; } m_processing = false; if (m_listeners.size() > size) { /* If the list has grown while we were invoking handlers, we will need to sort it again and place new handlers * at appropriate positions in the vector according to the message type */ std::sort(m_listeners.begin(), m_listeners.end(), [](const MessageListener& first, const MessageListener& second){ if (first.m_type <= second.m_type) return true; else return false; }); } /* If there was at least one unregisterHandler call while we were processing a message * we will need to go through the whole table and remove the ones which have to be unregistered */ if (m_handlersBeenUnregistered == true) { for (auto it = m_listeners.begin(); it != m_listeners.end(); ++it) { if (it->m_mustRemove) it = m_listeners.erase(it); } m_handlersBeenUnregistered = false; } } } int AbstractReactor::unregisterHandler(int handleId, int32_t type) { if (handleId < 0) return -1; std::unique_lock<std::recursive_mutex> lock; m_searchValue.m_type = type; m_searchValue.m_handleId = handleId; auto pos = std::lower_bound(m_listeners.begin(), m_listeners.end(), m_searchValue, [](const MessageListener& theirs, const MessageListener& my) { if (theirs.m_type < my.m_type ) return true; else return false; } ); if (pos == m_listeners.end()) { /* If we were unable to find a match for this handler in the listeners table * we will return a negative status to the user */ return -1; } auto i = pos - m_listeners.begin(); while(i < static_cast<int>(m_listeners.size()) && m_listeners[i].m_type == type) { if (m_listeners[i].m_handleId == handleId) { if (m_processing == false) m_listeners.erase(m_listeners.begin() + i); else m_listeners[i].m_mustRemove = true; break; } i++; } /* Set a global flag that will indicate that a handler has been marked to be deleted */ if (m_processing == true) m_handlersBeenUnregistered = true; return 0; } void 
AbstractReactor::start() { m_thread = std::thread(&AbstractReactor::mainThread, this); } void AbstractReactor::stopThread() { m_stopThread = true; m_thread.join(); } void AbstractReactor::stop() { /* we will just stop processing messages, but we will not delete * all message listeners * Message listener entries will be deleted on destruction */ stopThread(); } File AbstractReactor.hpp #pragma once #include <vector> #include <mutex> #include <thread> #include <memory> #include <algorithm> #include "IEventHandler.hpp" #include "Message.hpp" struct MessageListener { int32_t m_type{-1}; int m_handleId{-1}; bool m_mustRemove{false}; static int m_IdCount; std::unique_ptr<IEventHandler> m_hptr; public: MessageListener() = default; MessageListener(int32_t type, std::unique_ptr<IEventHandler> h): m_type(type), m_handleId(m_IdCount++), m_hptr(std::move(h)) {} MessageListener(int32_t type, int handleId): m_type(type), m_handleId(handleId) {} }; class AbstractReactor { public: AbstractReactor(); virtual ~AbstractReactor(); /* This is a virtual function which must be implemented in the concrete reactor which you * derive from the AbstractReactor class. This function will be the source of the messages * to the reactor. * It will block until the OS informs us that an event occurred and a message is available * A concrete implementation of AbstractReactor must override it */ virtual std::unique_ptr<Message> waitForMessage() = 0; void start(); void stop(); /* registerHandler is a templated function which requires the * message type and the parameters used for constructing a concrete user handler derived from IEventHandler * */ template<typename HandlerType, typename ...HandlerParametersType> int registerHandler(int type, HandlerParametersType&&... 
handlerParams) { std::unique_lock<std::recursive_mutex> lock; auto pos = m_listeners.end(); if (m_processing == false) { /* Add message listeners in sorted order, sorting by their message type, * so we will be able to use binary search when trying to find a listener registered for a specific message type * Not sure how many message types there are. If the number is huge then simply iterating over a long list * will not be an ideal solution */ m_searchValue.m_type = type; m_searchValue.m_handleId = -1; pos = std::lower_bound(m_listeners.begin(), m_listeners.end(), m_searchValue, [](const MessageListener& theirs, const MessageListener& my) { if (theirs.m_type < my.m_type) return true; else return false; } ); } pos = m_listeners.emplace(pos, type, std::move(std::make_unique<HandlerType>(std::forward<HandlerParametersType>(handlerParams)...))); if (m_processing == false) return pos->m_handleId; else return m_listeners.back().m_handleId; } int unregisterHandler(int handleId, int32_t type); private: std::recursive_mutex m_mutex; std::vector<MessageListener> m_listeners; std::thread m_thread; MessageListener m_searchValue; bool m_stopThread{false}; bool m_processing{false}; bool m_handlersBeenUnregistered{false}; void stopThread(); void mainThread(); }; File IEventHandler.hpp #pragma once #include "Message.hpp" class IEventHandler { public: virtual ~IEventHandler() {}; virtual void OnMessage(const Message *msg) = 0; }; File Message.hpp #pragma once #include <cstdint> struct Message { int32_t type; char data[32]; }; Answer: No comment on the design, just style improvements. auto pos = std::lower_bound(m_listeners.begin(), m_listeners.end(), m_searchValue, [](const MessageListener& one, const MessageListener& two) { if (one.m_type < two.m_type) return true; else return false; } ); I find this snippet very hard to read, especially because the lambda's parameter-list runs off the right side of the screen. 
I would write it with "Python-style" indentation: auto pos = std::lower_bound( m_listeners.begin(), m_listeners.end(), m_searchValue, [](const auto& a, const auto& b) { return (a.m_type < b.m_type); } ); Notice that if (x) return true; else return false; is a too-verbose way of writing return x; Also notice that we can use a generic lambda (auto) to shorten the parameter list, assuming that the reader already knows that m_listeners is a list of MessageListener objects so we don't have to explicitly repeat that type's name. if (m_ptr.get() == nullptr) Treat smart pointers like normal pointers. Using any named member function on a smart pointer is a code smell. If you want to test a pointer (smart or raw) for null, write simply: if (m_ptr == nullptr) typename ...HandlerParametersType — I strongly recommend naming packs something plural. This isn't a type; it's a pack of types. So: class... HandlerParameterTypes, or simply class... Params, or simply class... Ts. std::move(std::make_unique<HandlerType>(...)) The result of a function call expression like std::make_unique<T>(args...) is already a prvalue. You don't have to cast it with std::move. (Remove the call to std::move.) if (!m_stopThread) stopThread(); I strongly recommend using curly braces around the body of every control-flow construct in your program. Consider what happens if you add a logging statement temporarily: if (!m_stopThread) std::cout << "stopping the thread\n"; // Oops! stopThread();
{ "domain": "codereview.stackexchange", "id": 38857, "tags": "c++11, c++14" }
Angular10 RxJS - Interceptor to add/refresh JWT Token
Question: I have a project for which I use token authentication with JWT tokens. I am relatively new to Angular development and RxJS in particular, so there are a lot of concepts I am likely not yet familiar with or can't apply properly. My backend is Django 3, using the Django REST Framework and rest_framework_simplejwt. What the interceptor does is check every outgoing HTTP request to see if it's a call to my API. If it is, attach the JWT token. If it is and the access token is expired, refresh the access token first, then send the call to the API. I haven't yet coded the scenario of what to do if the refresh token expires or is close to expiring, but I'm doing this step by step and that's next on the list. I don't like my code here. It's hard for me to grasp, to the point that I need comments to make it easier. An even bigger problem is that I don't fully understand the code and thus am struggling to split the intercept function into smaller chunks to move into their own functions. What could I be doing to make it more comprehensible? What could I "split off" into some nicely named function? 
Here the code: //jwt-interceptor.ts @Injectable() export class JWTInterceptor implements HttpInterceptor{ private tokenRefreshInProgress: boolean = false; private refreshAccessTokenSubject: Subject<any> = new BehaviorSubject<any>(null); constructor(private userService: UserService){} intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>>{ if (this.isApiUrl(request.url)){ const accessToken = this.userService.getAccessToken(); //Below: If Access Token Expired and no refresh of it currently running if(this.userService.isTokenExpired(accessToken) && !this.tokenRefreshInProgress){ this.tokenRefreshInProgress = true; this.refreshAccessTokenSubject.next(null); return this.userService.refreshToken().pipe( switchMap(authResponse => { this.userService.setAccessToken(authResponse.access); this.tokenRefreshInProgress = false; this.refreshAccessTokenSubject.next(authResponse.access); request = this.addTokenToRequest(authResponse.access, request); return next.handle(request); }) ) //Below: If Access Token is expired and a refresh of it already running } else if(this.userService.isTokenExpired(accessToken) && this.tokenRefreshInProgress){ return this.refreshAccessTokenSubject.pipe( filter(result => result !== null), first(), switchMap(response => { request = this.addTokenToRequest(this.userService.getAccessToken(), request); return next.handle(request); }) ) //Below: If Access Token Valid } else { request = this.addTokenToRequest(accessToken, request); } } return next.handle(request); } isApiUrl(url: string): boolean{ const isApiUrl: boolean = url.startsWith(Constants.wikiApiUrl); const isTokenLoginUrl: boolean = url.endsWith('/token'); const isTokenRefreshUrl: boolean = url.endsWith('/token/refresh'); return isApiUrl && !isTokenLoginUrl && !isTokenRefreshUrl; } addTokenToRequest(token: string, request: HttpRequest<any>): HttpRequest<any>{ const httpHeaders = new HttpHeaders().set("Authorization", `Bearer ${token}`); request = request.clone({headers: 
httpHeaders}); return request; } } //methods from UserService class in user.service.ts isTokenExpired(token: string): boolean{ const [encodedHeader, encodedPayload, encodedSignature] = token.split('.'); const payload = JSON.parse(atob(encodedPayload)); const expiryTimestamp = payload.exp; const currentTimestamp = Math.floor((new Date).getTime()/1000); return currentTimestamp >= expiryTimestamp; } getRefreshToken(): string{ return localStorage.getItem('refresh_token'); } getAccessToken(): string{ return localStorage.getItem('access_token'); } setAccessToken(accessToken: string): void{ localStorage.setItem('access_token', accessToken); } refreshToken(): Observable<{access: string, refresh: string}>{ const refreshToken = this.getRefreshToken(); return this.http.post<{access, refresh}>(Constants.wikiTokenRefreshUrl, {refresh: refreshToken}); } Answer: Short explanation of the code. If the token is expired and not yet requested, the process is quite straightforward: change the tokenRefreshInProgress status to true so that other interceptions know about it and do not also trigger the refresh; the refreshAccessTokenSubject BehaviorSubject gets set to null; refresh the token and, as soon as we get a result, set the token, change tokenRefreshInProgress to false, and store the token in our BehaviorSubject; add the token to the current request, finally execute the current request, and return the observable of that request. If the token is expired but already requested: listen to the refreshAccessTokenSubject and wait until it sends an event. The first event is the current value in the BehaviorSubject, most likely a "null" (because the token refresh is still in progress); that event gets filtered out by filter. 
The second event is the refreshed token, which will pass the filter. We are only interested in that token, so with first we take the first event that passed the filter, and after processing it we stop listening to the BehaviorSubject. Now we switch from the stream of the BehaviorSubject to a new stream: we add the token to the current request, finally handle the current request, and return the observable of that request. Things I would change Danger of multiple instances Be aware that there may be multiple instances of your interceptor if you import HttpClientModule multiple times (see the HttpInterceptor documentation). There are two ways of handling that: hope that nobody will ever add the HttpClientModule somewhere else, or extract the logic into a class and make it a singleton (having a service with @Injectable({providedIn: 'root'}) makes it a singleton the easy way). Access modifiers I would always use access modifiers (public / private). Yes, if none is mentioned it's public by default. But the reader does not know if it's public by intention or if the developer has missed the private modifier. Add types wherever possible I love typed variables; therefore I would take the extra step to create custom types if none are provided. RxJS side effects There are some side effects (like changing the status or changing the BehaviorSubject). I would move those into a tap to make it more obvious that those are intended side effects and that we are aware of them. Omit else parts I personally omit the else part when the if part clearly returns out of the method (return). By "clearly" I mean that when I read the if I already see the return, which means the if block may only be 1-3 lines long. In this case, the blocks are a bit longer because of the pipe and the switchMap, but the return is on the first line after the if statement, so the approach is still okay for me. 
Changed Code I would move each exported element into its own file, to separate them more clearly. export interface AccessToken{ ... } @Injectable() export class JWTInterceptor implements HttpInterceptor { constructor(private refreshToken: RefreshTokenService) { } public intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { if (this.isApiUrl(request.url)) { return this.refreshToken.handleRequest(request, next); } return next.handle(request); } private isApiUrl(url: string): boolean{ const isApiUrl: boolean = url.startsWith(Constants.wikiApiUrl); const isTokenLoginUrl: boolean = url.endsWith('/token'); const isTokenRefreshUrl: boolean = url.endsWith('/token/refresh'); return isApiUrl && !isTokenLoginUrl && !isTokenRefreshUrl; } } @Injectable({ providedIn: 'root' }) export class RefreshTokenService{ private tokenRefreshInProgress: boolean = false; private refreshAccessTokenSubject: Subject<AccessToken> = new BehaviorSubject<AccessToken>(null); constructor(private userService: UserService){} public handleRequest(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const accessToken: AccessToken = this.userService.getAccessToken(); if(this.tokenNeedsRefresh(accessToken)){ return this.refreshToken().pipe( switchMap((token:AccessToken) => { request = this.addTokenToRequest(token, request); return next.handle(request); }) ) } if(this.hasToWaitForRefresh(accessToken)){ return this.waitForRefreshToken().pipe( switchMap((token:AccessToken) => { request = this.addTokenToRequest(token, request); return next.handle(request); }) ) } request = this.addTokenToRequest(accessToken, request); return next.handle(request); } private tokenNeedsRefresh(accessToken: AccessToken):boolean{ return this.userService.isTokenExpired(accessToken) && !this.tokenRefreshInProgress } private hasToWaitForRefresh(accessToken: AccessToken):boolean{ return this.userService.isTokenExpired(accessToken) && this.tokenRefreshInProgress } // Completes after
first event private refreshToken():Observable<AccessToken>{ return this.userService.refreshToken().pipe( map((authResponse):AccessToken => authResponse.access), tap((token:AccessToken) => { this.userService.setAccessToken(token); this.tokenRefreshInProgress = false; this.refreshAccessTokenSubject.next(token); }) ); } // Completes after first event private waitForRefreshToken():Observable<AccessToken>{ return this.refreshAccessTokenSubject.pipe( filter(result => result !== null), first() ) } private addTokenToRequest(token: string, request: HttpRequest<any>): HttpRequest<any>{ const httpHeaders = new HttpHeaders().set("Authorization", `Bearer ${token}`); request = request.clone({headers: httpHeaders}); return request; } } At least that would be my approach. Three developers -> four approaches. And all are kind of valid :-) Pick the parts you like and ignore the rest :-)
{ "domain": "codereview.stackexchange", "id": 39978, "tags": "typescript, angular-2+, rxjs" }
"Sequence Duplication Levels" module still fails after pre-processing Illumina data
Question: I want to ask why the sequence duplication levels are still high after trimming with Trimmomatic. I am using the following Trimmomatic operations: HEADCROP = 19, TRAILING = 20, MINLEN = 66. How can I solve this problem? Thank you. Answer: To answer your direct question, there are a few reasons why there might be high levels of sequence duplication. From the FastQC help: The underlying assumption of this module is of a diverse unenriched library. Any deviation from this assumption will naturally generate duplicates and can lead to warnings or errors from this module. As @DevonRyan mentioned, with certain sequencing protocols such as RNA-Seq, two sequence reads at exactly the same location aren't that uncommon. This isn't a problem with RNA-Seq data, or with Trimmomatic, or with FastQC. It's just that this kind of data violates the assumption, and therefore should be ignored in those circumstances. PCR duplicates are another possible cause. PCR duplicates can give the false impression of high coverage at a particular locus when in fact it's just a single observed read that has been duplicated many times (see here for more details). PCR duplicates can usually be detected and removed if your analysis involves mapping to a reference genome. But whether this is actually a problem you need to fix depends on what type of data you have and what types of analysis you want to do. Large numbers of adapter dimers or rRNA may be present in your sample. But I think it's also important to address how quality control (QC) is run. It can be tempting to run and re-run QC tools like Trimmomatic until all errors go away, but, to be blunt, these tools cannot think for you. For example, it's possible to get rid of most adapters by aggressively cropping/trimming both ends of each read, but you'll likely throw away a lot of good data that way. You may want to look into Trimmomatic's ILLUMINACLIP operation.
It may also be tempting to crop/trim reads aggressively if there are compositional biases near the beginning or end of the read. In fact, random hexamer priming can cause the Per-base Sequence Content module to fail on almost any RNA-Seq sample. Again, from the FastQC help (emphasis mine): Libraries produced by priming using random hexamers (including nearly all RNA-Seq libraries) and those which were fragmented using transposases inherit an intrinsic bias in the positions at which reads start. This bias does not concern an absolute sequence, but instead provides enrichment of a number of different K-mers at the 5' end of the reads. Whilst this is a true technical bias, it isn't something which can be corrected by trimming and in most cases doesn't seem to adversely affect the downstream analysis. It will however produce a warning or error in this module. So in other words, it's best to determine what can cause each FastQC module to fail, investigate whether this is actually a problem for your data set (referring to documentation as needed), and make a deliberate QC plan that addresses the issues that need attention.
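As a quick sanity check, the idea behind the duplication module is easy to reproduce yourself: count how often each (prefix of a) sequence occurs and see what fraction of reads would survive deduplication. This is only a rough sketch of the concept, not a reimplementation of FastQC, and the read data below is made up:

```python
from collections import Counter

def remaining_after_dedup(reads, prefix_len=50):
    """Fraction of reads that would remain if exact duplicates were
    collapsed; compares sequences only on their first prefix_len bases."""
    counts = Counter(read[:prefix_len] for read in reads)
    total = sum(counts.values())
    return len(counts) / total if total else 1.0

# Toy data: 4 reads, one exact duplicate pair
reads = ["ACGTACGT", "ACGTACGT", "TTTTAAAA", "GGGGCCCC"]
print(remaining_after_dedup(reads))  # 0.75 -> 25% duplication
```

A value well below 1 on real data points at PCR duplicates, adapter dimers, rRNA, or simply an enriched library such as RNA-Seq, exactly as discussed above.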
{ "domain": "bioinformatics.stackexchange", "id": 840, "tags": "illumina, data-preprocessing, trimming, fastqc" }
How do we calculate the mass of a scalar field that represents finite mass?
Question: Since all scalar fields contain an infinite number of points with values assigned to each point, a scalar field with finite mass could mistakenly seem to add up to infinite mass, because any positive value of mass times infinity more or less equals infinity (and is actually undefined). I ask, in the simplest mathematical breakdown possible: how do we calculate the mass of a scalar field with finite mass? Answer: I think you may be mixing up two concepts. The first concept is the mass of a particle associated with a field. When we say "a scalar field has mass $m$", what we mean is that quanta (fundamental excitations) of the field are spin-0 particles of mass $m$. In this case $m$ is (of course) a finite number. The second concept is the total energy associated with a given field configuration. Because of $E=mc^2$, it is not unreasonable to associate a "mass" with a field configuration, and this can be meaningful when talking about special field configurations like solitons that have particle-like behavior. Typically there are boundary conditions that force the scalar field to reach its ground state (which we can assume has zero energy) asymptotically, so that the total energy of the field configuration is finite. However, you can imagine situations -- like a classical plane wave -- where the total energy is given by a finite energy density times an infinite volume, leading to an infinite total energy. Typically we think of this kind of situation as being a mathematical idealization of a more realistic situation with finite energy (for example, an infinite plane wave solution is often used as an approximation to the far field of a spherical wave), but mathematically there is nothing wrong with such solutions. There is no direct connection between the total energy (or mass) of a specific classical field configuration, and the mass of a single quantum (aka particle) associated with the field.
The total energy of a field configuration can be infinite (at least mathematically), but the mass of a single particle should be finite.
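To make the second concept concrete, here is a small numerical sketch (my own illustration, in units where all constants are 1): the classical $\phi^4$ kink $\phi(x)=\tanh x$ has energy density $\tfrac12(\partial_x\phi)^2 + \tfrac12(1-\phi^2)^2 = \operatorname{sech}^4 x$, which is nonzero at infinitely many points yet integrates to the finite value $4/3$:

```python
import math

def kink_energy(a=-20.0, b=20.0, n=20_000):
    """Trapezoidal integral of the kink's energy density sech^4(x).
    The exact value over the whole real line is 4/3; the tails beyond
    |x| = 20 are smaller than 1e-33 and can be ignored."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        x = a + i * h
        density = 1.0 / math.cosh(x) ** 4  # sech^4(x)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * density * h
    return total

print(kink_energy())  # ~ 1.3333... = 4/3: a finite "mass" for the configuration
```

Summing over "infinitely many points" is harmless here because it is an integral of an integrable density, not a literal sum of point values.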
{ "domain": "physics.stackexchange", "id": 92976, "tags": "mass, field-theory" }
Another question about Shankar's notation
Question: I have another question on the notation in Shankar. I think it's sloppy, but I also may just be misunderstanding it. Again, this is at the very beginning of the math intro. He has: $$a\left| V \right\rangle \to \left[ {\begin{array}{*{20}{c}} {a{v_1}}\\ {a{v_2}}\\ \vdots \\ {a{v_n}} \end{array}} \right] \to \left[ {\begin{array}{*{20}{c}} {{a^*}v_1^*,}&{{a^*}v_2^*,}& \cdots, &{{a^*}v_n^*} \end{array}} \right] \to \left\langle V \right|{a^*}$$ It is customary to write $a\left| V \right\rangle$ as $\left| aV \right\rangle$ and the corresponding bra as $\left\langle aV \right|$. What we have found is that $\left\langle aV \right|=\left\langle V \right|{a^*}$. The * means conjugate. This doesn't look correct to me unless I'm missing something. First, it would seem that the LHS of the final equation should be a ket, not a bra. Then it also seems that it's not really "equals". From what I've seen, if it is a bra on the LHS, commuting the scalar shouldn't cause it to result in taking its conjugate. Is the text correct or am I not understanding something? Answer: This is OK. We define the scalar multiplication in the Ket vector space to be complex linear, $a\left|V\right>=\left|aV\right>$. The inner product $\left<W|V\right>$ is complex linear on the Ket vector space, but complex anti-linear on the Bra space, $$a\left<W|V\right>=\left<W|aV\right>=\left<a^*W|V\right>.$$ That's almost a matter of definition for the inner product of a complex Hilbert space, where in an alternative notation we would write $(a^*W,V)=(W,aV)=a(W,V)$. In particular, this means that $\left<aW|aV\right>$ is a positive multiple of $\left<W|V\right>$, $\left<aW|aV\right>=|a|^2\left<W|V\right>$. Perhaps the notation would be slightly less worrying if we wrote $\left<aV\right|=a^*\left<V\right|$?
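The claimed rule is easy to check numerically by following the column-vector recipe from the quoted passage (a plain-Python sketch with made-up components, not from Shankar):

```python
# Ket |V> as a list of complex components; the bra <V| holds the
# conjugate of each component (the row vector in Shankar's recipe).
def bra(ket):
    return [c.conjugate() for c in ket]

a = 2 + 3j
V = [1 + 1j, 2 - 1j, 0.5j]

lhs = bra([a * c for c in V])              # bra of the ket a|V>
rhs = [a.conjugate() * c for c in bra(V)]  # a* times the bra of |V>

print(lhs == rhs)  # True: <aV| = <V| a*
```

Component by component this is just $(a v_i)^* = a^* v_i^*$, which is all the "equals" in Shankar's statement amounts to.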
{ "domain": "physics.stackexchange", "id": 2557, "tags": "quantum-mechanics, vectors, notation" }
How is it possible to use up the water in a region?
Question: The Dead Sea, rivers in California and the Aral Sea are said to be shrinking due to water usage, e.g. for agriculture. Yet the water must go somewhere. After the fields are irrigated, the water either soaks into the ground or evaporates. In the first case, the groundwater is replenished. In the second case, the water ends up as rain, presumably within a few hundred kilometers of the evaporation point. Either way, the water gets back in circulation. So what is happening here? How can water be "used up"? Answer: You're making a mistake, at least for the second case: In the second case, the water ends up as rain, presumably within a few hundred kilometers of the evaporation point. You cannot model a dry region (or indeed any region on earth) as a closed system for hydrological purposes. When water evaporates in a dry climate, it is transported much farther than a few hundred kilometres. General circulation can transport airmasses for thousands of kilometres. In all likelihood, when the water finally precipitates it will do so in a different catchment area and/or far upstream, often in an area that already has plenty of precipitation. From there it may flow thousands of kilometres to yet different climate areas. This is why in particular hydro lakes in hot climates have such a large impact on ecology: a hydro lake has far larger evaporation than a river, due to its much larger surface area and other factors. When water from a dry region is gone, it can, for all practical purposes, be counted as a loss.
{ "domain": "earthscience.stackexchange", "id": 1627, "tags": "meteorology, hydrology, water, groundwater" }
Is a periscope window mathematically possible?
Question: I was always wondering, why don't we have periscope windows? What I imagine is a "light intake" on a roof, from which the light is concentrated into a straight long narrow tube that takes it to an underground flat where the light is dispersed into a fake window. Is such a design mathematically possible? What would determine the viewing angles of the fake window? Maybe it could use Fresnel lenses to save on cost and weight. We would probably not get a clear picture of the outside world, but at least a lot of natural daylight. Answer: Depends on how narrow this tube is. Without moving optics that can track the sun, the tube must be as large as the collection area, and no trick with mirrors or lenses can make it better. If sun-tracking optics are allowed then you can do much better, provided it's not cloudy. Let's start with a practical example: sun tunnels which transmit light from a skylight into a fixture below: The light is diffuse, so you get natural light, but no "picture". The technical data gives a visual transmittance of 0.36 for the most efficient models, meaning 36% of the light hitting the collector on the roof ends up coming out the fixture at the bottom. That may not seem like much, but since our visual perception is logarithmic it's not nearly as bad as it seems. As an example, the 14" model has a collection area of approximately 0.1 square meters. The illuminance outside on a sunny day is around 150,000 lux. That means the luminous flux at the collector is: $$ 150000\:\mathrm{lx} \cdot 0.1 \:\mathrm{m^2} = 15000 \:\mathrm{lm} $$ 15000 lumens. The visual transmittance coefficient of 0.36 means the luminous flux of the fixture, after the light lost in the optical system between the collector and fixture, is: $$ 15000 \:\mathrm{lm} \cdot 0.36 = 5400 \:\mathrm{lm} $$ That's a lot of light. For comparison, a typical "60 watt equivalent" LED is only about 800 lumens.
Could we improve on the performance of this sun tunnel with some optical system? Perhaps we'd like to collect more light from a larger area, while keeping the tube small. Can some arrangement of mirrors or lenses help? There's an optical law called the conservation of étendue which is relevant. The best explanation I've found is in an XKCD what-if on starting a fire with moonlight: Maybe you can't overlay light rays, but can't you, you know, sort of smoosh them closer together, so you can fit more of them side-by-side? Then you could gather lots of smooshed beams and aim them at a target from slightly different angles. Nope, you can't do this. It turns out that any optical system follows a law called conservation of étendue. This law says that if you have light coming into a system from a bunch of different angles and over a large "input" area, then the input area times the input angle equals the output area times the output angle. If your light is concentrated to a smaller output area, then it must be "spread out" over a larger output angle. In other words, you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot. So your first attempt at improving the sun tunnel might be to put a big lens at the top which focuses light on the smaller entrance to the tunnel. If the sky is equally bright all over (it's cloudy), you have a problem: you're already collecting light from a 180 degree hemisphere. If you attempt to focus light down on a smaller point it must be spread out even more, beyond 180 degrees. But that means some of the light is turned around, going back at the sky, which doesn't help your objective of lighting the room below. So in this case, you simply can't cram any more light in the tube. If it's sunny, maybe the optics should focus the brightest part of the sky, the sun's disk? This is a good idea, because now the input angle is only about 0.53 degrees. 
This means you have some margin to focus the incoming light without spreading it out beyond 180 degrees. But while physically possible, I'm not sure it's economically feasible since it would require expensive sun-tracking optics, and on a cloudy day it wouldn't work any better than the cheap variety with no optics at all.
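The luminous-flux arithmetic from the sun-tunnel example above is simple enough to script (same numbers as in the answer):

```python
illuminance = 150_000        # lux on a sunny day
collector_area = 0.1         # m^2, roughly the 14" model's collector
visual_transmittance = 0.36  # fraction surviving the tube optics

flux_at_collector = illuminance * collector_area
flux_at_fixture = flux_at_collector * visual_transmittance

print(f"{flux_at_collector:.0f} lm at the collector")  # 15000 lm
print(f"{flux_at_fixture:.0f} lm at the fixture")      # 5400 lm, vs ~800 lm for a "60 W" LED
```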
{ "domain": "physics.stackexchange", "id": 50098, "tags": "optics" }
Why does a ring flip of L-fructose occur?
Question: When drawing the chair conformation for beta-L-fructose, I got the structure above (in the picture). The answer said that the conformation above had a ring flip. Why does a ring flip occur, and how can we tell when it occurs? Answer: These ring flips are called 1C4-4C1 transitions, and they happen to all of the pyranose sugars in solution when the conditions allow. However, the abundance of each form depends on its energy. There is a so-called 1,3-diaxial interaction between the axial substituents of a cyclohexane analogue. Two axial substituents on the same side are much closer than two equatorial substituents. If these substituents are relatively big, it will be energetically unfavorable to bring them close together due to steric effects. Therefore the molecule will "choose" the conformation that has less energy, i.e. fewer "large" substituents on the same side. In your case the OH and Me groups are considered the "large" substituents, while the H atoms (undrawn) are the "small" ones. Therefore the conformation you drew has an unfavorable Me-OH interaction, which will drive the conformation change from 4C1 to 1C4. (source: prenhall.com)
{ "domain": "chemistry.stackexchange", "id": 10277, "tags": "organic-chemistry, biochemistry, conformers" }
Relation between isentropic/isenthalpic and adiabatic?
Question: We have $dQ = T dS$. Does this imply that a process is adiabatic $dQ = 0$ if and only if it is isentropic $dS = 0$ for any process? This does not sound right, as it would mean that there is no point in distinguishing these two cases. Also, we have the enthalpy defined by $H = E + pV$. This gives us $dH = dE + d(pV)$. If we assume that $p$ is constant, then by the first law of thermodynamics $dH = dE + p dV = dQ$. So for any reversible process isenthalpic should be the same as adiabatic. Does this make sense? Answer: We have dQ=TdS. Does this imply that a process is adiabatic dQ=0 if and only if it is isentropic dS=0 for any process? Remember the interpretation of each quantity: S is a function of state, Q is not; it is heat flow. So even if they are related, they are completely different concepts. But it is correct to say that if a reversible process (so S is defined all the time) is adiabatic, then it is isentropic. So for any reversible process isenthalpic should be the same as adiabatic The increase in enthalpy of a system is equal to the added heat, provided that the system is under constant pressure and that the only work done by the system is expansion work. So for any reversible process, isenthalpic is the same as adiabatic only if p is constant (that is, if it is also isobaric).
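The constant-pressure caveat can be made explicit in one line. Using $H = E + pV$ and the first law for a reversible process with only expansion work, $dE = dQ - p\,dV$:

```latex
dH = dE + p\,dV + V\,dp
   = (dQ - p\,dV) + p\,dV + V\,dp
   = dQ + V\,dp ,
```

so $dH = dQ$ holds exactly when $dp = 0$. Only under that condition does "isenthalpic" coincide with "adiabatic" for a reversible process.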
{ "domain": "physics.stackexchange", "id": 18739, "tags": "thermodynamics, statistical-mechanics" }
Why do magnetic field lines describe a force?
Question: My professor stated the four Maxwell equations, as well as the "Lorentz force" equation $$ \mathbf{F} = q\left(\mathbf{E}+\frac{1}{c}\mathbf{v} \wedge\mathbf{B}\right) \tag{1} $$ He said that this equation together with the Maxwell equations describes all classical phenomena of electrodynamics. As far as I can see, the Maxwell equations describe how $\mathbf{E}$ and $\mathbf{B}$ behave, and the equation above describes how they affect electric charges. But two permanent magnets at rest are not electrically charged, and their magnetic fields do not change in time, so $\mathbf{E}=0$ and $\mathbf{v}=0$, therefore also $\mathbf{F}=0$. Why do they attract or repel? From which equation can the force between magnetic moments at rest be deduced? EDIT: Wikipedia has an explanation using the Ampère model, which treats all magnetic dipoles as the result of an electric current. The formula is $$ \mathbf{F}=\nabla(\mathbf{m}\cdot\mathbf{B}) \tag{2} $$ But the Ampère model is not something that can be derived from the Maxwell equations. Another frequent explanation is that the magnet "tries to get into a position with the lowest magnetic energy density". But this is an additional postulate; it does not follow from Maxwell's equations. So, I'm still looking for a derivation of this formula from the Maxwell equations. Answer: Feel free to skip all text and to read only the equations. =). (I dislike wordy books). Texts are explanations to the equations. Also, I've used SI instead of CGS (I also dislike CGS). You want a proof that two magnets exert a force upon each other. A magnet can be modeled as a magnetic dipole, which has its own magnetic dipole moment vector. A dipole generates a magnetic field. Thus, two dipoles will interact with each other by means of the field. Thus, it suffices to prove that a magnetic dipole will experience a force from an external magnetic field. And this is exactly what we are going to prove, using only Maxwell's equations and the Lorentz force.
Assume a closed circuit $\gamma$, with stationary current $I$. In this region, there is an external magnetic field $\mathbf B(\mathbf r)$. The force on each charge $dq$ in the circuit is, by the Lorentz force: $$ d\mathbf F = dq\mathbf v\times\mathbf B $$ Assuming $n$ charge carriers per unit volume of the conductor, where $A$ is the sectional area of the circuit, we can compute the force on an element of circuit $d\mathbf l$: $$ d\mathbf F = nAq|d\mathbf l|\mathbf v\times\mathbf B = nAq|\mathbf v| d\mathbf l\times\mathbf B = A|\mathbf J| d\mathbf l\times\mathbf B = I d\mathbf l\times\mathbf B $$ Now we can compute the torque: $$ d\tau = \mathbf r\times d\mathbf F = \mathbf r\times (I d\mathbf l\times\mathbf B) = I\mathbf r\times (d\mathbf l\times\mathbf B) $$ About the torque, notice that if we integrate component by component, we will arrive at $\tau = I\mathbf A\times\mathbf B$, where $\mathbf A$ is the vector whose component $A_i$ is the area of the projection of the curve $\gamma$ onto the corresponding coordinate plane. At this point, we can find its magnetic moment: $$ \mathbf A = \frac{1}{2}\oint_\gamma \mathbf r\times d\mathbf l \quad\Longrightarrow\quad \mathbf m = I\mathbf A = \frac{I}{2}\oint_\gamma \mathbf r\times d\mathbf l $$ It's possible to prove that this integral indeed gives the vector $\mathbf A$. Therefore, the torque can be calculated using the magnetic moment: $$ \tau = \mathbf m\times\mathbf B = \oint_\gamma I\mathbf r\times (d\mathbf l\times\mathbf B) $$ Thus we are proving (not defining) that this quantity is indeed equal to the magnetic moment vector $\mathbf m$. This means the circuit has an associated magnetic moment, and because of this there is a torque. It coincides with the value of the magnetic moment vector as defined from the multipole expansion of the magnetic vector potential of localized current distributions.
Furthermore, it indicates that systems with a magnetic moment (such as magnets) are equivalent to current loops (notice how the magnetic moment encodes the geometry of the circuit). A similar treatment of all of this is possible using a general localized current distribution $\mathbf J(\mathbf r)$ instead of a closed current circuit $\gamma$ with stationary current $I$. Taylor expansion of the magnetic field gives: $\mathbf B(\mathbf r) = \mathbf B_0 + \mathbf r\cdot\nabla\mathbf B$, up to first order. You can consider $\nabla\mathbf B$ as the Jacobian matrix of the magnetic field. Notice that we can use this approximation to compute the force on the circuit: $$ \mathbf F = \oint_\gamma Id\mathbf l\times\mathbf B = \oint_\gamma Id\mathbf l\times\mathbf B_0 + \oint_\gamma Id\mathbf l\times\mathbf r\cdot\nabla\mathbf B $$ In the first integral, $\mathbf B_0$ is constant and comes out of the integral. If $I$ is constant, it also comes out, and we are left with a closed integral of $d\mathbf l$, which is zero. So, the force contribution comes from the non-uniformity of the magnetic field, i.e., its first-order expansion term: $$ \mathbf F = I\oint_\gamma d\mathbf l\times(\mathbf r\cdot\nabla\mathbf B) = (\mathbf m\times\nabla)\times\mathbf B $$ Done. Here we made use of the magnetic moment. This means anything with an associated magnetic moment (our circuit, a magnet, a dipole, a planet, etc.) can be equivalently modeled by our circuit, and will experience a force and torque from an external non-uniform magnetic field (such as the magnetic field generated by another magnetic dipole, or equivalently, by anything else with an associated magnetic moment vector). We can simplify: $$ \mathbf F = (\mathbf m\times\nabla)\times\mathbf B = \nabla(\mathbf m\cdot\mathbf B) - \mathbf m(\nabla\cdot\mathbf B) $$ where we used a vector identity.
Now, using the Maxwell equation $\nabla\cdot\mathbf B = 0$, we obtain the force and torque experienced by a dipole immersed in an external non-uniform magnetic field: $$ \mathbf F = \nabla(\mathbf m\cdot\mathbf B), \quad\quad \tau = \mathbf m\times\mathbf B $$
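As a numerical sanity check of the final force formula (my own sketch, not part of the derivation): take a circular loop of radius $R$ carrying current $I$ in the $z=0$ plane, immersed in the divergence-free field $\mathbf B = (-kx/2,\,-ky/2,\,B_0 + kz)$. With $\mathbf m = I\pi R^2\,\hat z$, the prediction $\nabla(\mathbf m\cdot\mathbf B)$ gives $F_z = I\pi R^2 k$, and integrating $I\,d\mathbf l\times\mathbf B$ around the loop reproduces it:

```python
import math

def loop_force_z(I=2.0, R=0.5, B0=1.0, k=3.0, n=3600):
    """Integrate F = I * closed-loop sum of dl x B for a circular loop in
    the z = 0 plane, with B = (-k x/2, -k y/2, B0 + k z)."""
    dtheta = 2 * math.pi / n
    Fz = 0.0
    for i in range(n):
        t = (i + 0.5) * dtheta
        x, y = R * math.cos(t), R * math.sin(t)
        dlx, dly = -R * math.sin(t) * dtheta, R * math.cos(t) * dtheta
        Bx, By = -k * x / 2, -k * y / 2       # evaluated at z = 0 on the loop
        Fz += I * (dlx * By - dly * Bx)       # z-component of dl x B
    return Fz

predicted = 2.0 * math.pi * 0.5**2 * 3.0      # I * pi * R^2 * k
print(loop_force_z(), predicted)              # both ~ 4.712389
```

Note the uniform part $B_0$ drops out, exactly as the derivation says: only the gradient of the field pulls on the dipole.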
{ "domain": "physics.stackexchange", "id": 25771, "tags": "electromagnetism, maxwell-equations" }
Different forms for the energy operator
Question: I am a little bit confused about which is really the energy operator. During the lectures, the professor told us that the energy operator is simply the Hamiltonian $\hat{H}$ and that the eigenvalues represent the energy of the system. On Wikipedia [1], though, it is written that the energy operator is $$ i \hbar \frac{\partial}{\partial t}. $$ On this website I read that $\hat{H}$ and $\hat{E}$ are the same thing, so now I do not understand which one is the right energy operator. [1] https://en.wikipedia.org/wiki/Energy_operator Answer: To answer your question I will draw an analogy with a simple question about classical mechanics. Hopefully this will clarify the confusion. Imagine someone coming to you asking what is the correct expression for acceleration. Somewhere she read the correct expression was: $$a=\frac{\partial^2 x}{\partial t^2}$$ Somewhere else she read $$a=\frac{F}{m}$$ She is wondering which of these two is the correct expression. My answer would be that the first expression is the definition of the acceleration, while the second expression is the value the acceleration takes in a given experiment. The equation of motion is thus: $$\frac{\partial^2 x}{\partial t^2}=\frac{F}{m}$$ The question you are asking is similar. $$\hat E =i\hbar \frac{\partial}{\partial t}$$ is the operator that is defined as the energy operator, while $$\hat H = \frac{-\hbar^2}{2m}\nabla^2 +V$$ is the energy operator for a given experiment. The equation of motion is thus: $$i\hbar \frac{\partial}{\partial t}=\frac{-\hbar^2}{2m}\nabla^2 +V$$
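The analogy can be checked numerically on a free-particle plane wave (a sketch of my own, with $\hbar = m = 1$ and $V = 0$): applying the defining operator $i\hbar\,\partial_t$ and the Hamiltonian $-\tfrac{\hbar^2}{2m}\partial_x^2$ to $\psi = e^{i(kx-\omega t)}$ gives the same energy once $\omega$ satisfies the dispersion relation $\omega = \hbar k^2/2m$:

```python
import cmath

hbar = m = 1.0
k = 2.0
omega = hbar * k**2 / (2 * m)   # free-particle dispersion relation

def psi(x, t):
    return cmath.exp(1j * (k * x - omega * t))

x0, t0, h = 0.3, 0.7, 1e-4

# E as *defined*: i*hbar dpsi/dt, via a central finite difference
E_def = 1j * hbar * (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h) / psi(x0, t0)

# E for *this experiment*: H psi = -hbar^2/(2m) d^2psi/dx^2 (V = 0)
second_deriv = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2
E_ham = -hbar**2 / (2 * m) * second_deriv / psi(x0, t0)

print(round(E_def.real, 6), round(E_ham.real, 6))  # 2.0 2.0 -- both equal hbar*omega
```

Both operators return the same number on a solution of the Schrödinger equation, which is exactly the content of setting the "definition" equal to the "value for a given experiment".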
{ "domain": "physics.stackexchange", "id": 46450, "tags": "quantum-mechanics, energy, operators" }
ROS Answers SE migration: run the rviz
Question: Each time I run rviz ("rosrun rviz rviz") I have to "export LIBGL_ALWAYS_SOFTWARE=1". And if I open a new window, I have to export it again. I want to know how I can set it just once, so that the next time I run rviz I do not have to set it again. Thank you very much. Originally posted by littlestar on ROS Answers with karma: 9 on 2016-04-02 Post score: 0 Answer: If you want an environment variable (like LIBGL_ALWAYS_SOFTWARE) to be set automatically every time you open a new terminal, add it to the bash startup file (.bashrc) in your home directory. Originally posted by ahendrix with karma: 47576 on 2016-04-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by littlestar on 2016-04-02: Thank you!
{ "domain": "robotics.stackexchange", "id": 24295, "tags": "ros, rviz, ros-indigo" }
Handling CSRF protection
Question: I'm currently working on an implementation of a CSRF protection. What is the best practice when we detect a CSRF? Is it better to 404 the page, 403 (forbidden), 200 (OK) with an error message, or something else? Bonus: here's my code. I'm not very proud of the preg_replace and ob_*. If you also have suggestions on how to do it better, I'll take that too. The addCSRF method is called just before sending the output of an HTML page. The checkCSRF method is called when the server receives a request. <?php // This method checks if the content contains a form and adds a csrf_token hidden field public static function addCSRF() { $content = ob_get_contents(); if (strlen($content)) { // Random csrf token $randomtoken = base64_encode(openssl_random_pseudo_bytes(32)); // Add the hidden input to the content if needed $content = preg_replace('/(<([^>]*\s)?form(\s[^>]*)?>)/i', '\1<input type="hidden" name="csrf_token" value="'.$randomtoken.'" />', $content, -1, $count); // If at least one input has been added, add the csrf_token value in the $_SESSION and replace the content if ($count) { Session::set('csrf_token', $randomtoken); // Echo the new content ob_end_clean(); ob_start(); echo $content; } } return; } // This method checks if a form has been submitted and if the csrf token is given and valid public static function checkCSRF() { // No form submitted if (!isset($_POST)) return; // CSRF detected if (!isset($_POST['csrf_token']) || $_POST['csrf_token'] != Session::get('csrf_token')) { // 404 ? 403 ? 200 + error message ? } Session::forget('csrf_token'); } Answer: While trying to add an easter egg on my framework, I saw this answer: Stack Overflow returning HTTP error code 418 (I'm a teapot)?. Since it looks like there is no "correct" way to handle CSRF, I thought it could be a fun thing to do. Upon detecting a CSRF attack, my framework now sends an HTTP 418 header with a nice ASCII art of a trolly teapot.
So I mark this question as answered, because there's no real best practice (yet?) and anything would be OK.
{ "domain": "codereview.stackexchange", "id": 15923, "tags": "php, security" }
Problems with Catkin Metapackages
Question: I've been hashing through a build of Groovy on a BeagleBone. I've finally figured out there are issues with some of the metapackages that are causing my builds to fail. WARNING: Metapackage "driver_common" must build_tool depend on catkin WARNING: Metapackage "geometry" must build_tool depend on catkin What do I need to do to correct this issue and get these metapackages to make? (and why are these packages faulty and not corrected at the source??) Originally posted by TJump on ROS Answers with karma: 160 on 2013-05-05 Post score: 0 Answer: Those are only warnings in Groovy, they will not cause the meta packages not to build. In Hydro they will be errors and it will cause the build process to fail. They have not been fixed in upstream because that would require a release of those upstream repositories into groovy, which we try not to do unless there is a good reason. No action should be required to have this work in Groovy, if the build is failing it is for another reason. Originally posted by William with karma: 17335 on 2013-05-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by TJump on 2013-05-05: Okay, I will dig some more. It is the packages that give this warning that are not building. Comment by TJump on 2013-05-05: The failure happens when I run rosdep after adding these metapackages into the src directory (rosdep install --from-paths src --ignore-src --rosdistro groovy -y). I get "Unable to locate package..." errors like: Unable to locate package ros-groovy-diagnostic-updater. Why? Comment by William on 2013-05-06: See: http://answers.ros.org/question/61715/rosdep-gives-wrong-home-brew-packages-on-osx/?answer=61762#post-id-61762 Comment by TJump on 2013-05-06: Good to see the link. It makes more sense now and knowing how early in this new enviroment things seem to be.
{ "domain": "robotics.stackexchange", "id": 14067, "tags": "catkin" }
How does rospy.Timer behave if it triggers while the previous callback is still busy?
Question: I'm using a rospy.Timer (periodic, not one-shot) to handle a long-running process in a non-blocking way. I only want one instance of the process to be running at any one time. Therefore, I thought I would have to implement some sort of lock in order to prevent subsequent Timer callbacks from triggering new instances of the process. However, it seems like I didn't have to! Subsequent callbacks are not called if the previous one is still working. Or are they added to a queue like subscriber callbacks? Either is a good thing in my use case, but where can I find out more about it? It's not mentioned in http://wiki.ros.org/rospy/Overview/Time#Timer from what I can tell. Originally posted by spmaniato on ROS Answers with karma: 1788 on 2016-12-26 Post score: 3 Answer: You could have a look at the code: timer.py The callback is directly executed (no thread) so if your function takes longer than your rate, it will be called as fast as possible (independent of your specified rate). Originally posted by NEngelhard with karma: 3519 on 2016-12-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by spmaniato on 2016-12-26: Thanks for the link! According to the source code, it looks like the callback should be getting called regardless of whether the previous one has returned. That's not what I observed though. Hmm. I'll take another look. Comment by NEngelhard on 2016-12-26: Why do you think so? The callback is called in line 223 in the main loop. So the execution waits there until your function finished. Comment by spmaniato on 2016-12-26: Oh, now I get it! The entire Timer / thread is getting blocked by the call to the long-running callback. Thanks again @NEngelhard :-)
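The behaviour can be reproduced outside ROS with a simplified model of the loop described in the answer (this illustrates the mechanism; it is not the actual rospy source): the callback is invoked directly in the timer's own thread, so a slow callback simply delays the next invocation. There is no queue and no concurrency, which is why no lock is needed.

```python
import time

def timer_loop(period, callback, n_ticks):
    """Simplified model of rospy.Timer's loop: the callback is invoked
    directly in the timer thread, so nothing fires while it is running."""
    fire_times = []
    next_fire = time.monotonic()
    for _ in range(n_ticks):
        now = time.monotonic()
        if next_fire > now:            # only sleep if we are early
            time.sleep(next_fire - now)
        fire_times.append(time.monotonic())
        callback()                     # blocks the loop until it returns
        next_fire += period
    return fire_times

# Period is 0.05 s, but the callback takes 0.2 s: successive fires end up
# about 0.2 s apart ("as fast as possible"), and never overlap.
times = timer_loop(0.05, lambda: time.sleep(0.2), 3)
gaps = [b - a for a, b in zip(times, times[1:])]
print(gaps)
```

Running this shows every gap is governed by the callback's duration, not the nominal period, matching what the question observed.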
{ "domain": "robotics.stackexchange", "id": 26578, "tags": "ros, rospy, callback, timer" }
Split string in chunks preferable at spaces
Question: I'm working on a tool to import data from one database to another. One requirement is that I have to split a string from one source field into three (shorter) fields at the target. If possible the string should be split at a space character. If the string doesn't fit completely into the target fields, the rest can be omitted. Usually I would solve this using a UDF but unfortunately neither UDFs nor Stored Procedures are allowed in my scenario. My source database has the following table: CREATE TABLE dbo.Organisations ( OrganisationID int IDENTITY(1,1) NOT NULL PRIMARY KEY, OrganisationName nvarchar(180) NOT NULL /* More columns omitted for brevity */ ) This table contains company names, for example: OrganisationID | OrganisationName ---------------+-------------------------------------------------------------------- 1 | Microsoft Corporation 2 | S&T System Integration & Technology Distribution Aktiengesellschaft During the import the records of this table should be inserted into a staging table in the target database.
The staging table looks like this: CREATE TABLE dbo.OrgStaging ( OrganisationID int NOT NULL, Name1 nvarchar(50) NOT NULL, Name2 nvarchar(50) NOT NULL, Name3 nvarchar(50) NOT NULL /* More columns omitted for brevity */ ) If I would simply use SUBSTRING to split the name I would end up in the staging table like this: OrganisationID | Name1 | Name2 ---------------+---------------------------------------------------+----------------- 1 | Microsoft Corporation | 2 | S&T System Integration & Technology Distribution A|ktiengesellschaft But I don't want to split in the middle of a word so I would like to have the result like this: OrganisationID | Name1 | Name2 ---------------+---------------------------------------------------+------------------ 1 | Microsoft Corporation | 2 | S&T System Integration & Technology Distribution |Aktiengesellschaft To achieve this I came up with the following rather complex query: DECLARE @MaxLen int = 50; -- Maximum length of a target column WITH SpacePositions AS ( SELECT O.OrganisationID, CHARINDEX(' ', O.OrganisationName, 0) AS Position FROM SourceDB.dbo.Organisations O UNION ALL SELECT O.OrganisationID, CHARINDEX(' ', O.OrganisationName, S.Position + 1) AS Position FROM SourceDB.dbo.Organisations O INNER JOIN SpacePositions S ON CHARINDEX(' ', O.OrganisationName, S.Position + 1) > S.Position AND S.OrganisationID = O.OrganisationID ), SplitPositions AS ( SELECT S.OrganisationID, S.Position - 1 AS Position FROM SpacePositions S WHERE S.Position != 0 UNION SELECT O.OrganisationID, LEN(O.OrganisationName) AS Position FROM SourceDB.dbo.Organisations O ), FirstChunk AS ( SELECT D.OrganisationID, 1 AS ChunkStart, MAX(D.Position) AS ChunkEnd FROM ( SELECT S.OrganisationID, S.Position + 1 AS Position FROM SplitPositions S WHERE Position BETWEEN 1 AND @MaxLen UNION SELECT S.OrganisationID, @MaxLen FROM SplitPositions S WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN 1 AND @MaxLen AND SI.OrganisationID = 
S.OrganisationID ) ) D GROUP BY D.OrganisationID ), SecondChunk AS ( SELECT C.OrganisationID, C.ChunkEnd + 1 AS ChunkStart, MAX(D.Position) AS ChunkEnd FROM FirstChunk C INNER JOIN ( SELECT S.OrganisationID, S.Position + 1 AS Position FROM SplitPositions S INNER JOIN FirstChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen UNION SELECT S.OrganisationID, C.ChunkEnd + @MaxLen AS Position FROM SplitPositions S INNER JOIN FirstChunk C ON C.OrganisationID = S.OrganisationID WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen AND OrganisationID = C.OrganisationID ) ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ), ThirdChunk AS ( SELECT C.OrganisationID, C.ChunkEnd + 1 AS ChunkStart, MAX(D.Position) AS ChunkEnd FROM SecondChunk C INNER JOIN ( SELECT S.OrganisationID, S.Position + 1 AS Position FROM SplitPositions S INNER JOIN SecondChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen UNION SELECT S.OrganisationID, C.ChunkEnd + @MaxLen AS Position FROM SplitPositions S INNER JOIN SecondChunk C ON C.OrganisationID = S.OrganisationID WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen AND OrganisationID = C.OrganisationID ) ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ) INSERT INTO dbo.OrgStaging ( OrganisationID, Name1, Name2, Name3 ) SELECT O.OrganisationID, LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C1.ChunkStart, C1.ChunkEnd))), LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C2.ChunkStart, 1 + C2.ChunkEnd - C2.ChunkStart))), LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C3.ChunkStart, 1 + C3.ChunkEnd - C3.ChunkStart))) FROM SourceDB.dbo.Organisations O INNER JOIN FirstChunk C1 ON C1.OrganisationID = O.OrganisationID INNER JOIN SecondChunk C2 ON C2.OrganisationID = 
O.OrganisationID INNER JOIN ThirdChunk C3 ON C3.OrganisationID = O.OrganisationID ORDER BY O.OrganisationID; It works as desired but I'm wondering if this can be stated a little more compactly. I tried to combine the CTEs FirstChunk, SecondChunk and ThirdChunk into one recursive CTE but that is not working because of the GROUP BY clause which is not allowed in recursive CTEs. Can this be restated more compactly or is it already the best I can get? Answer: A nitpick This first point will not really change the query, and it is probably only added for testing purposes, but an ORDER BY with an INSERT INTO statement does not really do anything useful (unless you insert into a table with an IDENTITY column). Use a more compact style It looks like you either have a very strong policy on style and layout, or used an auto-formatter, because in some places, SELECT * is written in two lines. That is good, because the query is at least well formatted and readable. If you want it to be more compact, though, you might want to skimp a bit on the newlines. When formatting code more compactly, there is still "breathing space", but the scroll-factor is tuned down a bit, so you have a bit higher view of the code. I find that it can help. I like to use a combination of indentation to recognize the query parts (my indentation is by no means the default in SQL) and a thing I call "one concept per line", where each line tells me something that can stand on its own for logic. Not indenting UNIONs helps to see the "equal level" of both sides of the UNION, and prevents the lines from getting too long.
DECLARE @MaxLen int = 50; -- Maximum length of a target column WITH SpacePositions AS ( SELECT O.OrganisationID , CHARINDEX(' ', O.OrganisationName, 0) AS Position FROM dbo.Organisations O UNION ALL SELECT O.OrganisationID , CHARINDEX(' ', O.OrganisationName, S.Position + 1) AS Position FROM dbo.Organisations O INNER JOIN SpacePositions S ON CHARINDEX(' ', O.OrganisationName, S.Position + 1) > S.Position AND S.OrganisationID = O.OrganisationID ) , SplitPositions AS ( SELECT S.OrganisationID , S.Position - 1 AS Position FROM SpacePositions S WHERE S.Position != 0 UNION SELECT O.OrganisationID , LEN(O.OrganisationName) AS Position FROM dbo.Organisations O ) , FirstChunk AS ( SELECT D.OrganisationID , 1 AS ChunkStart , MAX(D.Position) AS ChunkEnd FROM ( SELECT S.OrganisationID , S.Position + 1 AS Position FROM SplitPositions S WHERE Position BETWEEN 1 AND @MaxLen UNION SELECT S.OrganisationID , @MaxLen FROM SplitPositions S WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN 1 AND @MaxLen AND SI.OrganisationID = S.OrganisationID ) ) D GROUP BY D.OrganisationID ) , SecondChunk AS ( SELECT C.OrganisationID , C.ChunkEnd + 1 AS ChunkStart , MAX(D.Position) AS ChunkEnd FROM FirstChunk C INNER JOIN ( SELECT S.OrganisationID , S.Position + 1 AS Position FROM SplitPositions S INNER JOIN FirstChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen UNION SELECT S.OrganisationID , C.ChunkEnd + @MaxLen AS Position FROM SplitPositions S INNER JOIN FirstChunk C ON C.OrganisationID = S.OrganisationID WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen AND OrganisationID = C.OrganisationID ) ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ) , ThirdChunk AS ( SELECT C.OrganisationID , C.ChunkEnd + 1 AS ChunkStart , MAX(D.Position) AS ChunkEnd FROM SecondChunk C INNER JOIN ( SELECT S.OrganisationID , 
S.Position + 1 AS Position FROM SplitPositions S INNER JOIN SecondChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen UNION SELECT S.OrganisationID , C.ChunkEnd + @MaxLen AS Position FROM SplitPositions S INNER JOIN SecondChunk C ON C.OrganisationID = S.OrganisationID WHERE NOT EXISTS ( SELECT * FROM SplitPositions SI WHERE SI.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen AND OrganisationID = C.OrganisationID ) ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ) INSERT INTO dbo.OrgStaging ( OrganisationID , Name1 , Name2 , Name3 ) SELECT O.OrganisationID , LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C1.ChunkStart, C1.ChunkEnd))) , LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C2.ChunkStart, 1 + C2.ChunkEnd - C2.ChunkStart))) , LTRIM(RTRIM(SUBSTRING(O.OrganisationName, C3.ChunkStart, 1 + C3.ChunkEnd - C3.ChunkStart))) FROM dbo.Organisations O INNER JOIN FirstChunk C1 ON C1.OrganisationID = O.OrganisationID INNER JOIN SecondChunk C2 ON C2.OrganisationID = O.OrganisationID INNER JOIN ThirdChunk C3 ON C3.OrganisationID = O.OrganisationID ORDER BY O.OrganisationID; There is still some compactness to be gained if you write JOIN s with just one ON clause on a single line, but only when that JOIN is trivial (for instance on matching primary keys). Especially in the final part, I find the symmetry of the JOIN s to be clear. Use LEFT JOIN and COALESCE for edge cases Four times you add a UNION on a subquery to account for an edge case. Three of those are when you want to split on @MaxLen, because there is no shorter match. But there is another way to do that. In SQL, missing data is represented as a NULL value. When we use an INNER JOIN, those NULL s disappear, because we can only join on data that we know. Adding missing data afterwards through a UNION and a back reference (querying the same data but asking where it is missing) is possible. 
But we can also just take those NULL s with an OUTER JOIN (mostly LEFT or RIGHT), and tell SQL to replace missing values by something else, using COALESCE. In the Chunk CTEs In the FirstChunk, we only know that something is missing if we know of all the organisations, so we need to select the data from the origin as well: , FirstChunk AS ( SELECT O.OrganisationID , 1 AS ChunkStart , COALESCE(MAX(D.Position), @MaxLen) AS ChunkEnd FROM dbo.Organisations O LEFT JOIN ( SELECT S.OrganisationID , S.Position + 1 AS Position FROM SplitPositions S WHERE Position BETWEEN 1 AND @MaxLen ) D ON D.OrganisationID = O.OrganisationID GROUP BY O.OrganisationID ) Notice that we don't select D.OrganisationID for the first column any more, because that can also be NULL if we can't split. That also means that we need to GROUP BY the newly selected value. In the other two CTEs, we can just take the existing values of the previous CTEs: , SecondChunk AS ( SELECT C.OrganisationID , C.ChunkEnd + 1 AS ChunkStart , COALESCE(MAX(D.Position), C.ChunkEnd + @MaxLen) AS ChunkEnd FROM FirstChunk C LEFT JOIN ( SELECT S.OrganisationID , S.Position + 1 AS Position FROM SplitPositions S INNER JOIN FirstChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ) , ThirdChunk AS ( SELECT C.OrganisationID , C.ChunkEnd + 1 AS ChunkStart , COALESCE(MAX(D.Position), C.ChunkEnd + @MaxLen) AS ChunkEnd FROM SecondChunk C LEFT JOIN ( SELECT S.OrganisationID , S.Position + 1 AS Position FROM SplitPositions S INNER JOIN SecondChunk C ON C.OrganisationID = S.OrganisationID WHERE S.Position BETWEEN C.ChunkEnd + 1 AND C.ChunkEnd + @MaxLen ) D ON D.OrganisationID = C.OrganisationID GROUP BY C.OrganisationID, C.ChunkEnd ) Now the query is more compact, but also faster, because we add the edge cases in the same "swoop". 
In the SpacePositions and SplitPositions CTEs Yes, in the -Positions CTEs we do this as well, although this is a bit less clear at first, and will probably not gain as much in readability or performance. For completeness, I will explain it here as well. The edge case is the final position. It does not contain a space, but needs to be taken into account as well. Or does it? Now that we take @MaxLen where there is no space to split on, we will take @MaxLen also when there is no space left to split on. Which may or may not be past the end of the input string. Lets see what happens if we just remove the SplitPositions CTE. I will add some additional test data to see what happens: INSERT INTO Organisations ([OrganisationName]) SELECT SUBSTRING(OrganisationName, 1, 180) FROM ( SELECT 'Microsoft Corporation' UNION ALL SELECT 'S&T System Integration & Technology Distribution Aktiengesellschaft' UNION ALL SELECT 'VeryLongOrganisationNameThatWillHaveToBeSplitWithoutASpace Because It Really Is A Long Name, But In The Second Column We Can Split It' UNION ALL SELECT 'Another VeryLongOrganisationNameThatWillHaveToBeSplitWithoutASpaceButOnlyInTheSecondColumn, Because It Really Is A Long Name' UNION ALL SELECT 'AnotherVeryLongOrganisationNameThatWillHaveToBeSplitWithoutASpaceBecauseItReallyIsALongNameButNowItEvenExceedsTheLimitOfAllThreeColumnsWithAMaximumLenghtOf50Characters(WhichIsACombinedTotalOf150Characters)AndNowWeDon''tHaveAnythingToPutInTheLastBox' UNION ALL SELECT 'OneWordOnly' UNION ALL SELECT 'A' -- Single letter edge case UNION ALL SELECT '' -- Empty string edge case ) Data(OrganisationName); Now when, after the query, we run SELECT *, LEN(Name1), LEN(Name2), LEN(Name3) FROM dbo.OrgStaging; The results are: | OrganisationID | Name1 | Name2 | Name3 | | | | |----------------|----------------------------------------------------|----------------------------------------------------|----------------------------------------------------|----|----|----| | 1 | Microsoft 
Corporation | | | 21 | 0 | 0 | | 2 | S&T System Integration & Technology Distribution | Aktiengesellschaft | | 48 | 18 | 0 | | 3 | VeryLongOrganisationNameThatWillHaveToBeSplitWitho | utASpace Because It Really Is A Long Name, But In | The Second Column We Can Split It | 50 | 49 | 33 | | 4 | Another | VeryLongOrganisationNameThatWillHaveToBeSplitWitho | utASpaceButOnlyInTheSecondColumn, Because It | 7 | 50 | 44 | | 5 | AnotherVeryLongOrganisationNameThatWillHaveToBeSpl | itWithoutASpaceBecauseItReallyIsALongNameButNowItE | venExceedsTheLimitOfAllThreeColumnsWithAMaximumLen | 50 | 50 | 50 | | 6 | OneWordOnly | | | 11 | 0 | 0 | | 7 | A | | | 1 | 0 | 0 | | 8 | | | | 0 | 0 | 0 | Now lets remove the SplitPositions CTE, and add the - 1 to the SpacePositions CTE. Furthermore, we replace all references to SplitPositions to refer to SpacePositions (of course). WITH SpacePositions AS ( SELECT O.OrganisationID , CHARINDEX(' ', O.OrganisationName, 0) - 1 AS Position FROM dbo.Organisations O UNION ALL SELECT O.OrganisationID , CHARINDEX(' ', O.OrganisationName, S.Position + 2) - 1 AS Position FROM dbo.Organisations O INNER JOIN SpacePositions S ON CHARINDEX(' ', O.OrganisationName, S.Position + 2) - 1 > S.Position AND S.OrganisationID = O.OrganisationID ) , FirstChunk AS Which yields: | OrganisationID | Name1 | Name2 | Name3 | | | | |----------------|----------------------------------------------------|----------------------------------------------------|----------------------------------------------------|----|----|----| | 1 | Microsoft | Corporation | | 9 | 11 | 0 | | 2 | S&T System Integration & Technology Distribution | Aktiengesellschaft | | 48 | 18 | 0 | | 3 | VeryLongOrganisationNameThatWillHaveToBeSplitWitho | utASpace Because It Really Is A Long Name, But In | The Second Column We Can Split | 50 | 49 | 30 | | 4 | Another | VeryLongOrganisationNameThatWillHaveToBeSplitWitho | utASpaceButOnlyInTheSecondColumn, Because It | 7 | 50 | 44 | | 5 | 
AnotherVeryLongOrganisationNameThatWillHaveToBeSpl | itWithoutASpaceBecauseItReallyIsALongNameButNowItE | venExceedsTheLimitOfAllThreeColumnsWithAMaximumLen | 50 | 50 | 50 | | 6 | OneWordOnly | | | 11 | 0 | 0 | | 7 | A | | | 1 | 0 | 0 | | 8 | | | | 0 | 0 | 0 | Looks good to me :)
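To make the intent of the chunking CTEs easy to verify against the test data above, here is the same greedy logic re-stated in Python (purely as an executable specification, not a replacement for the SQL, since the scenario forbids anything outside a single query):

```python
def split_name(name, n_chunks=3, max_len=50):
    """Greedy equivalent of the query's chunking: cut at the last space
    within max_len characters, falling back to a hard cut at max_len."""
    chunks, rest = [], name
    for _ in range(n_chunks):
        if len(rest) <= max_len:
            chunks.append(rest.strip())
            rest = ""
        else:
            cut = rest.rfind(" ", 0, max_len + 1)
            if cut <= 0:               # no space to split on: hard cut
                cut = max_len
            chunks.append(rest[:cut].strip())
            rest = rest[cut:]
    return chunks

print(split_name("S&T System Integration & Technology Distribution Aktiengesellschaft"))
# → ['S&T System Integration & Technology Distribution', 'Aktiengesellschaft', '']
```

The outputs match the expected staging-table rows for both the space-split case and the one-word and hard-cut edge cases.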
{ "domain": "codereview.stackexchange", "id": 17685, "tags": "sql, sql-server" }
catkin_make builds executable in the wrong directory?
Question: Hello! I have a package I have been able to successfully build using catkin_make after setting up my package.xml and CMakeLists.txt. However, I have one problem. When I run catkin_make, it successfully makes the executable, but places the executable in catkin_ws/build/package_name. For example, I have made a small example below (tbrandom and tbsub are my executables): may@MooMoo:~/Desktop/Tree/Programming/ros/TurtleBotRandom$ ls build devel src may@MooMoo:~/Desktop/Tree/Programming/ros/TurtleBotRandom$ cd build may@MooMoo:~/Desktop/Tree/Programming/ros/TurtleBotRandom/build$ ls catkin catkin_make.cache cmake_install.cmake Makefile catkin_generated CMakeCache.txt CTestTestfile.cmake test_results CATKIN_IGNORE CMakeFiles gtest turtle_bot_random may@MooMoo:~/Desktop/Tree/Programming/ros/TurtleBotRandom/build$ cd turtle_bot_random may@MooMoo:~/Desktop/Tree/Programming/ros/TurtleBotRandom/build/turtle_bot_random$ ls catkin_generated cmake_install.cmake Makefile tbsub CMakeFiles CTestTestfile.cmake tbrandom When I source the devel/setup.bash, and try to run the executable through the package, ROS cannot find the executable unless I copy the executable(s) made (in this example, tbrandom and tbsub) and copy them into Desktop/Tree/Programming/ros/TurtleBotRandom/src/turtle_bot_random. This is really annoying for bigger projects which require more compile time, and then require me to copy paste files in between to actually start testing to see if my code even works. What could be the reason or a potential fix? Am I doing something wrong? I've only started toying with catkin last night. MY CMAKELISTS.TXT FILE: http://www.pastebin.ca/3086592 Originally posted by utagai on ROS Answers with karma: 3 on 2015-08-01 Post score: 0 Original comments Comment by Dirk Thomas on 2015-08-01: If you post your CMakeLists.txt file I might be able to tell you what you need to do differently. 
Comment by gvdhoorn on 2015-08-02: +1 to what Dirk said: without your actual CMakeLists.txt we can only guess what is going on. I'll take a guess though: was this package migrated from rosbuild? I've seen pkgs that forced rosbuild to place binaries in non-standard places. Catkin is different, and those pkgs then break. Comment by utagai on 2015-08-02: Hello guys, I have posted a link to a pastebin of my CMakeLists.txt file (it was too long to copy paste into the post). I hope this helps! Comment by duck-development on 2015-08-02: Your CMake is wrong: Declare a cpp executable add_executable(turtle_bot_random_node src/turtle_bot_random_node.cpp); you cannot build a comment Comment by utagai on 2015-08-02: duck-development, if you look at lines 8-12, you will see that I have lines for my builds uncommented. I think you are looking at a line I commented out previously. I think I tried that thinking if I put in the path it would build the executable there, but commented it out when it didn't work. Answer: The problem is that you call add_executable() before catkin_package() in the CMake file. Please read the documentation for more information: http://wiki.ros.org/catkin/CMakeLists.txt#catkin_package.28.29 Originally posted by Dirk Thomas with karma: 16276 on 2015-08-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by utagai on 2015-08-02: Thank you! That fixed the problem! However, the documentation doesn't seem to tell me why this caused an issue. Can you explain? Comment by Dirk Thomas on 2015-08-03: Because the catkin_package() call changes the default location of compiled targets. Comment by utagai on 2015-08-03: Is this because catkin_package() overwrites the location to which the executables should be saved? Comment by William on 2015-08-03: Yes, it changes the EXECUTABLE_OUTPUT_PATH and the LIBRARY_OUTPUT_PATH. Comment by utagai on 2015-08-03: Alright, that makes sense. Thanks so much!
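The accepted answer in skeleton form (the source file path and dependency list below are assumptions, since the full CMakeLists.txt is only available via the now-dead pastebin link): catkin_package() must appear before any add_executable()/add_library() call, because it is the call that redirects target output paths into the devel space.

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(turtle_bot_random)

find_package(catkin REQUIRED COMPONENTS roscpp)

## Must be called BEFORE declaring any targets: it changes
## EXECUTABLE_OUTPUT_PATH / LIBRARY_OUTPUT_PATH to the devel space.
catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

## Targets declared after catkin_package() end up where rosrun finds them.
add_executable(tbrandom src/tbrandom.cpp)
target_link_libraries(tbrandom ${catkin_LIBRARIES})
```

With the calls in this order, tbrandom lands in the devel space instead of build/turtle_bot_random, so rosrun can locate it after sourcing devel/setup.bash.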
{ "domain": "robotics.stackexchange", "id": 22352, "tags": "catkin-make, catkin" }
Cause path planning to take wider turns
Question: I am having trouble getting my robot to find a path through doorways. Sometimes the global costmap just doesn't give enough space. A solution is to decrease the inflation_radius, however the result is that it plans paths with very tight corners, meaning sometimes the robot runs over items near the wall that stick out below the sensor level. Is there a way to coerce the planner into accepting tight spaces when there is no alternative, like doorways, but take a wide berth around corners when it is possible? It feels like I need another "inflation" box for "desired personal space". Originally posted by ChrisL8 on ROS Answers with karma: 241 on 2015-07-13 Post score: 3 Answer: This is an ancient question that never got one answer, so I'll throw up what I've found since 2015. The now famous ROS Navigation Tuning Guide PDF by Kaiyu Zheng that came out in 2016, a year after this question was written: http://kaiyuzheng.me/documents/navguide.pdf Especially pay attention to the part about creating a "gradient" for the local planner so every path is in the middle. http://wiki.ros.org/teb_local_planner I don't know why this isn't the default, it has worked so much better in every way than the seemingly more common dwa_local_planner, and teb works "out of the box" with no tuning required to just navigate a map. Originally posted by ChrisL8 with karma: 241 on 2020-07-25 This answer was ACCEPTED on the original site Post score: 1
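One concrete knob behind the "gradient" advice from the tuning guide: keep inflation_radius large enough to cover your open areas, and tune cost_scaling_factor so the cost decays gradually with distance from obstacles. The planner then prefers the middle of corridors (lowest cost) but still accepts a narrow doorway, because doorway cells are merely expensive, not lethal. A sketch of the relevant costmap fragment (the values are illustrative only, not tuned for any robot):

```yaml
inflation_layer:
  enabled: true
  # Large radius: inflated cost extends well away from walls, giving the
  # planner a gradient to follow toward the middle of open space.
  inflation_radius: 1.5
  # The scaling factor controls how fast cost decays with distance; a
  # moderate value keeps doorway cells costly but still traversable.
  cost_scaling_factor: 3.0
```

This achieves roughly the "desired personal space" the question asks for: a soft preference for wide berths that yields when no alternative exists.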
{ "domain": "robotics.stackexchange", "id": 22172, "tags": "ros, navigation, dwa-local-planner, amcl" }
How to understand the concept of vanishing point?
Question: Consider the diagram below: We can see from the plane $\Phi$ two parallel lines on its surface. Then these two lines are projected onto the plane $\Pi$ where they appear to be intersecting. My question is: how did they become intersecting? And how is the perspective defined in this figure? That is to say, from what field of view is the person looking at this diagram to see this phenomenon? Because I don't understand what this diagram is specifically describing when it connects all these rays to the optical center $O$. I have attached an example of train tracks and hope someone can help me understand it as an example of this phenomenon. Answer: In the train track picture, the rails never meet, but appear to. The point where they appear to meet is infinitely far away, but there is a definite angle to look to see the point of intersection (we must look at the middle of the picture). In the top diagram the observer at $O$ must look in the direction $O\Pi$ to see where the two lines appear to meet (even though they never actually meet).
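The apparent intersection can be made precise with a one-line limit. Take the optical centre $O$ as the origin and (as a convenient choice, not stated in the figure) let the image plane $\Pi$ be $z=1$, so a scene point $(x,y,z)$ projects to $(x/z,\,y/z)$. A line on $\Phi$ through a base point $\mathbf a$ with direction $\mathbf d$ is $\mathbf p(t)=\mathbf a+t\,\mathbf d$, and its image satisfies

```latex
\lim_{t\to\infty}\frac{a_x+t\,d_x}{a_z+t\,d_z}=\frac{d_x}{d_z},
\qquad
\lim_{t\to\infty}\frac{a_y+t\,d_y}{a_z+t\,d_z}=\frac{d_y}{d_z}
\qquad (d_z\neq 0).
```

The limit depends only on the direction $\mathbf d$, not on the base point $\mathbf a$: every line parallel to $\mathbf d$ has an image converging to the same point $(d_x/d_z,\,d_y/d_z)$, the vanishing point. The rails never meet; their images merely share a limit.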
{ "domain": "physics.stackexchange", "id": 82352, "tags": "optics, geometry" }
How can I use yaml config files without having to build after changes in ROS2?
Question: How can I use yaml config files without having to rebuild after changing the config file? I am looking for a way to have the same behaviour as in ROS1 where config files could be changed without having to rebuild the package. According to this answer, it should be possible with the symlink install. However, I haven't been able to achieve this. Additionally, it suggests to use global paths rather than install paths, but I would like to keep my config files within the package they belong to. My current setup and method is the following on ROS2 foxy Linux: Directory structure: my_package/ my_package/ config/ my_package.yaml launch/ resource/ __init__.py package.xml setup.cfg setup.py setup.cfg: [develop] script-dir=$base/lib/caterra_motor_controller [install] install-scripts=$base/lib/caterra_motor_controller setup.py: ## ! DO NOT MANUALLY INVOKE THIS setup.py, USE CATKIN INSTEAD from setuptools import setup import os from glob import glob package_name = 'my_package' setup( name=package_name, version='0.0.0', packages=[package_name], data_files=[ ('share/ament_index/resource_index/packages', ['resource/' + package_name]), ('share/' + package_name, ['package.xml']), (os.path.join('share', package_name, 'launch'), glob('launch/*')), (os.path.join('share', package_name, 'config'), glob('config/*.yaml')), ], install_requires=['setuptools'], zip_safe=True, maintainer='xxx', maintainer_email='xxx', description='xxx', license='TODO: License declaration', tests_require=['pytest'], entry_points={ 'console_scripts': [ 'node = my_package.my_node:main' ], }, ) I then compile the code with: colcon build --symlink-install This works fine for python files, but the yaml config files are still only updated after a rebuild. What am I doing wrong? Originally posted by bartonp on ROS Answers with karma: 3 on 2022-06-30 Post score: 0 Answer: This is a bug with the build tools.
To verify, I created a ament_python and ament_cmake package as per the tutorials and did a colcon build --symlink-install. Looking at the install/ directory, the ament_cmake package has its launch files symlinked to the ones in my source tree, but for the ament_python package, there is a copy of the launch. In other words when you call colcon build --symlink-install, your .yaml files are just copied to the right place in the install/ directory. Here are some other issue reports from github with the same complaint: https://github.com/colcon/colcon-core/issues/407. Apparently it has to do with the way ament_python handles the "symlinking" (it does not actually use symlinks): https://github.com/colcon/colcon-core/issues/482. You can join the conversation there and try to resolve the issue if you are interested. You could use ament_cmake instead of ament_python since this seems to be handling the linking in the expected way. Otherwise, depending on your needs, you could try to manually symlink the files in the install directory as a workaround. Take note that this will break every time you call colcon build. Originally posted by vecf with karma: 26 on 2022-07-18 This answer was ACCEPTED on the original site Post score: 1
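The manual-symlink workaround from the last paragraph can be sketched as follows. The block demonstrates the idea in a scratch directory; in practice $SRC and $INSTALL would be your workspace's source and install trees (the paths here are stand-ins), and remember the link is clobbered on every colcon build.

```shell
# Scratch-directory demonstration of replacing an installed copy of a
# config file with a symlink back to the source tree.
WS=$(mktemp -d)
SRC="$WS/src/my_package/config"
INSTALL="$WS/install/my_package/share/my_package/config"
mkdir -p "$SRC" "$INSTALL"
echo "param: 1" > "$SRC/my_package.yaml"

# What colcon does today: copy. What we want instead: a symlink.
ln -sf "$SRC/my_package.yaml" "$INSTALL/my_package.yaml"

# Edits in the source tree are now visible in the install tree
# without rebuilding:
echo "param: 2" > "$SRC/my_package.yaml"
cat "$INSTALL/my_package.yaml"   # → param: 2
```

The same commands, pointed at a real ROS2 workspace's install directory, give ROS1-style edit-and-relaunch behaviour until the next build.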
{ "domain": "robotics.stackexchange", "id": 37816, "tags": "ros, ros2, build" }
Why are Saturn's rings so thin?
Question: Take a look at this picture (from APOD https://apod.nasa.gov/apod/ap110308.html): I presume that rocks within rings smash each other. Below the picture there is a note which says that Saturn's rings are about 1 km thick. Is it an explained phenomenon? Answer: There seems to be a known explanation. I quote from Composition, Structure, Dynamics, and Evolution of Saturn’s Rings, Larry W. Esposito (Annu. Rev. Earth Planet. Sci. 2010.38:383-410): [The] rapid collision rate explains why each ring is a nearly flat disk. Starting with a set of particle orbits on eccentric and mutually inclined orbits (e.g., the fragments of a small, shattered moon), collisions between particles dissipate energy but also must conserve the overall angular momentum of the ensemble. Thus, the relative velocity is damped out, and the disk flattens after only a few collisions to a set of nearly coplanar, circular orbits. I think the key is that particles in a thick ring would not move in parallel planes but would have slanted trajectories, colliding all the time and losing their energy very fast.
{ "domain": "physics.stackexchange", "id": 7196, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, solar-system, satellites" }
Finding phylogenetic distance between sequences?
Question: I'm working on a piece of software that does comparative genomic analysis; and I found out in homology methods for functional annotation, it's preferable to pick the high scoring homolog from a distant sequence (i.e. not closely related to the query sequence in the phylogenetic tree). My question is: why? And could the NCBI taxid for each taxon be used for approximating this distance? If not, is there a simple way of making this binary decision (close, not close) when given two sequences? Answer: No, because it is really not a binary decision. There are attempts to associate percent pairwise difference with taxonomic ranks (particularly species), but this is problematic because every independent lineage can vary according to its own rate. You could use sequences from different families, or orders, or even phyla, but these, too, can depend on whether specialists in the field are lumpers, or splitters. For example, all ants are in the same family, and Formicidae is over 100 million years old, while birds that may have diverged from a common ancestor within the last 10 million years are in a different family. You can use pairwise distance OR higher-level taxonomy as a proxy. But it would be crude. Why not program your software to make hard decisions about homology, and then find the closest most distant homologue in terms of pairwise differences?
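If a crude pairwise proxy is acceptable despite the answer's caveats, the simplest one is the p-distance (proportion of differing aligned sites). A minimal sketch; the 0.25 cutoff is purely illustrative, since the whole point of the answer is that no universal threshold exists:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences of
    equal length, ignoring positions where either has a gap ('-')."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        raise ValueError("no comparable sites")
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

def is_close(seq_a, seq_b, threshold=0.25):
    """Hypothetical binary call: 'close' below an arbitrary cutoff."""
    return p_distance(seq_a, seq_b) <= threshold

print(p_distance("ACGTACGT", "ACGAACGT"))  # → 0.125
```

For real use one would correct for multiple substitutions (e.g. a Jukes-Cantor or Kimura model) rather than use the raw proportion, but even then the cutoff itself remains a judgment call, as the answer stresses.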
{ "domain": "biology.stackexchange", "id": 7989, "tags": "genomics, taxonomy, phylogenetics, homology, ncbi" }
Expectation value of total energy for the quantum harmonic oscillator
Question: A particle's unnormalized wavefunction is given as $$\psi(x)=2\psi_1+\psi_2+2\psi_3.$$ How can I find $\langle E\rangle $ without calculating $\langle T\rangle$ or $\langle V\rangle $ first? I'm pretty sure I have $\langle T\rangle $ and $\langle V \rangle$ though, so is the only way to find $\langle E\rangle $ to add these two values together? Any help would be appreciated. Answer: If you're specifically asked for the expectation values of $T$ and $V$ then the simplest way of getting $\langle E\rangle$ is simply adding $\langle T\rangle$ and $\langle V\rangle$. If you want a direct calculation, your quickest route is probably using the eigenvalue equation $$H\psi_n=\hbar\omega(n+\tfrac12)\psi_n$$ and the orthonormality of the $\psi_n$.
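For completeness, here is the direct calculation the answer sketches, taking the labels at face value (i.e. assuming the $\psi_n$ are the oscillator eigenstates with $E_n=\hbar\omega(n+\tfrac12)$ and the subscripts really start at $n=1$). Normalising first gives $\|\psi\|^2 = 2^2+1^2+2^2 = 9$, and orthonormality kills all cross terms, so

```latex
\langle E\rangle
= \frac{4E_1 + E_2 + 4E_3}{9}
= \frac{\hbar\omega}{9}\left(4\cdot\tfrac{3}{2} + \tfrac{5}{2} + 4\cdot\tfrac{7}{2}\right)
= \frac{\hbar\omega}{9}\cdot\frac{45}{2}
= \frac{5}{2}\,\hbar\omega .
```

If the labels instead start at $n=0$, the same weighted average with $E_0,E_1,E_2$ gives $\tfrac{3}{2}\hbar\omega$; the method is identical either way.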
{ "domain": "physics.stackexchange", "id": 9647, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, harmonic-oscillator, superposition" }
Why is ester more reactive to nucleophile than carboxylic acids?
Question: I was given just this statement at face value, but I think it refers to nucleophilic reactions where the carbonyl is the electrophile. From what I understand, the $\ce{-OR}$ in ester is more electron donating inductively than the $\ce{-OH}$ in $\ce{-COOH}$ (due to the alkyl groups in $\ce{-OR}$). So this should make the carbonyl carbon less positive, making it a weaker electrophile, making it less reactive, no? The resources I read say that it's because the $\ce{-OR}$ is a better leaving group than $\ce{-OH}$. But alkoxide ($\ce{RO-}$) is a stronger base than hydroxide ($\ce{OH-}$), so why would it leave first? Answer: Remember that a carboxylic acid is an ACID. It donates a proton, forming an anion, and it will protonate anionic and basic nucleophiles, rendering them non-nucleophilic. If there is an excess of the nucleophile, this means that it is attempting to attack a species that is negatively charged; this is not a favourable interaction. Should it succeed, the leaving group is not $\ce{-OH}$; it is $\ce{O^-}$.
{ "domain": "chemistry.stackexchange", "id": 15962, "tags": "organic-chemistry, esters, carboxylic-acids" }
How to show that an $N$-dimensional SHO's dynamics symmetry is $SU(N)$?
Question: From Wikipedia: The dynamical symmetry group of the $n$-dimensional quantum harmonic oscillator is the special unitary group $SU(n)$. As an example, the number of infinitesimal generators of the corresponding Lie algebras of $SU(2)$ and $SU(3)$ are three and eight respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems. The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum. How can I show that $\mathbf{H} = \hbar \omega \left(\vec{a}^\dagger \vec{a} + \frac{N}{2}\right)$ has dynamical symmetry of $SU(N)$? Which operator/generator do I need to show commutes with $H$? Answer: Let $$ \vec b=U\,\vec a\, ,\qquad \vec b^\dagger =\vec a^\dagger U^\dagger $$ then $$ \vec b^\dagger \cdot \vec b= \vec a^\dagger U^\dagger U\,\vec a=\vec a^\dagger \vec a \quad \Leftrightarrow \quad U^\dagger U=\hat 1\, , $$ which defines $U$ as a unitary matrix. Actually, $SU(N)$ is NOT the dynamical symmetry group of the harmonic oscillator. This dynamical symmetry group is $Sp(N,\mathbb{R})$ (also called $Sp(2N,\mathbb{R})$ depending on notations). $U(N)$ (or $SU(N)$) is simply the symmetry group of the degenerate states of the H.O. $Sp(N,\mathbb{R})$ is the real symplectic group in $N$ dimensions, with algebra $sp(N,\mathbb{R})$ spanned by $\{a_k^\dagger a_j^\dagger, a_k^\dagger a_j, a_k a_j\}$ with $k,j=1,\ldots, N$. The subset $\{ a_k^\dagger a_j\}$ spans $u(N)$, making $u(N)$ a subalgebra of $sp(N,\mathbb{R})$. Note that $sp(N,\mathbb{R})$ is the dynamical algebra because, in terms of $x$ and $p$, it is spanned by $x_kx_i$, $p_kp_i$ and $x_kp_i+p_kx_i$. Any observable (such as the kinetic energy) expressed as a polynomial in these basic observables will act within a single $sp(N,\mathbb{R})$ irrep.
Thus the wiki is not quite correct. In general the symmetry algebra of $H$ includes all operators that commute with $H$ and close on an algebra, whereas the dynamical algebra contains $H$ and its symmetry algebra, but also includes operators that need not commute with $H$ but still close on a (larger) algebra. For the 1d h.o. this would include $x^2$, $p^2$ and $xp+px$, which do not all commute with $H$. Clearly $H$ is in there as a linear combination of $x^2$ and $p^2$.
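As a quick consistency check (a sketch using only the commutator $[a_i, a_j^\dagger] = \delta_{ij}$), each $u(N)$ generator $a_j^\dagger a_k$ does commute with $\vec a^\dagger\vec a$, and hence with $H$:

```latex
\left[\vec a^\dagger \vec a,\; a_j^\dagger a_k\right]
  = \sum_i \left[a_i^\dagger a_i,\; a_j^\dagger a_k\right]
  = \sum_i \left(a_i^\dagger \delta_{ij}\, a_k - a_j^\dagger \delta_{ik}\, a_i\right)
  = a_j^\dagger a_k - a_j^\dagger a_k = 0.
```

In contrast, the extra $sp(N,\mathbb{R})$ generators $a_j^\dagger a_k^\dagger$ and $a_j a_k$ shift the total number operator by $\pm 2$, so they do not commute with $H$, which is exactly the symmetry-algebra vs. dynamical-algebra distinction above.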
{ "domain": "physics.stackexchange", "id": 100615, "tags": "quantum-mechanics, homework-and-exercises, symmetry, group-theory, lie-algebra" }
Why does averaging a sentence's worth of word vectors work?
Question: I am working on a text classification problem using r8-train-all-terms.txt, r8-test-all-terms.txt from https://www.cs.umb.edu/~smimarog/textmining/datasets/. The goal is to predict the label using a Random Forest classifier. Each text sentence has been vectorized using the GoogleNews word vectors. The embedding source can be found here: https://github.com/mmihaltz/word2vec-GoogleNews-vectors In the example I am following along with there is one step that irks me - there is a step that converts my array of vectorized tokens to a single vector by taking the mean over the tokens, e.g.

    def transform(self, data):
        v = self.word_vectors.get_vector('king')
        self.D = v.shape[0]
        X = np.zeros((len(data), self.D))
        n = 0
        emptycount = 0
        for sentence in data:
            tokens = sentence.split()
            vecs = []
            m = 0
            for word in tokens:
                try:
                    vec = self.word_vectors.get_vector(word)
                    vecs.append(vec)
                    m += 1
                except KeyError:
                    pass
            if len(vecs) > 0:
                vecs = np.array(vecs)
                X[n] = vecs.mean(axis=0)  # take the mean of the vectors? what does it mean?
            else:
                emptycount += 1
            n += 1
        print("Number of samples with no words found: %s / %s" % (emptycount, len(data)))
        return X

I am leaving out some boilerplate but later on I run the model and the results are surprisingly good:

    model = RandomForestClassifier(n_estimators = 200)
    model.fit(XTrain, YTrain)
    print("train score:", model.score(XTrain, YTrain))
    print("test score:", model.score(XTest, YTest))

    > train score: 0.9992707383773929
    > test score: 0.9378711740520785

I understand that the random forest model expects to have one row per example so it is unable to consume a sequence of embeddings like a RNN might. So you are required to convert to a single row (1-D array). My question is: WHY does it work? It seems at odds to me that the averaged word vectors would be able to capture anything about the context or meaning of a sentence by merely averaging over the encodings. Best case scenario I would expect this technique to break down for larger blocks of text, because you would tend to squash all your examples into the same neighborhood of your input space. It would be great to get some clarification on this. Answer: It works for the same reason the good old bag-of-words + TF-IDF works. Despite losing some word-ordering information, a text can still be classified by its typical keywords. Since texts on different topics differ a lot with respect to the vocabulary used, simply putting together the words' embeddings might work surprisingly well. Here is a paper that shows that a simple sentence embedding method beats sophisticated supervised methods including RNNs and LSTMs. Their method is just a weighted average of the word vectors, modified a bit using PCA/SVD. Section 4.3 tells that word ordering plays a role, but not too much.
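The "squashing" worry can be probed directly with toy data. This is a pure-Python sketch with made-up 2-d embeddings (the words, vectors, and topic labels are all invented for illustration): documents with disjoint topic vocabularies keep distinct mean vectors, which is why a downstream classifier can still separate them.

```python
# Toy 2-d "embeddings": finance-ish words cluster together, sports-ish words together.
embeddings = {
    "bank": [1.0, 0.1], "loan": [0.9, 0.2], "rate": [1.1, 0.0],
    "goal": [0.0, 1.0], "team": [0.1, 0.9], "match": [0.2, 1.1],
}

def mean_vector(tokens):
    """Average the embeddings of the known tokens, like the transform() above."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

doc_finance = mean_vector("bank loan rate".split())
doc_sports = mean_vector("goal team match".split())
# The means stay near their respective clusters rather than collapsing
# to a common point, so the topics remain separable.
print(doc_finance, doc_sports)
```

Squashing only becomes a problem when one document mixes many topics, so its mean drifts toward the global centroid.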
{ "domain": "datascience.stackexchange", "id": 4424, "tags": "word-embeddings, multilabel-classification" }
Wave functions as being square-integrable vs. normalizable
Question: I am a physics undergraduate. I am working in the world of textbook (non-relativistic) Quantum Mechanics. Say we have a wave function $\Psi(x,t)$. Must $\Psi(x,t)$ be square-integrable or normalizable? To my understanding, normalizable implies square-integrable. However, square-integrable does not imply normalizable. For example, $f(x,t) = 0$ is square-integrable, but it is not normalizable. This leads me to think that a wave function $\Psi(x,t)$ must be normalizable, a stricter requirement than just square-integrable. In particular, $f(x,t) = 0$ is not a wave function and thus cannot represent a physical state. And, as an application of this tentative result, could one explain why particle annihilation is not built into (non-relativistic) textbook Quantum Mechanics by appealing to the fact that the evolution of a wave function $\Psi(x,t)$ into the wavefunction $f(x,t) = 0$ (after some time has elapsed) is not possible since $0$ itself is not a wave function? Answer: That is a really nice remark. States are actually not quite vectors on the Hilbert space, but rather rays of vectors. Hence, indeed, they should be normalizable rather than square-integrable. In practice, this means that the allowed wavefunctions are the non-vanishing square-integrable functions (notice all of these are indeed normalizable, where I take "function" to mean actually an equivalence class of functions that are equal almost everywhere). As for the evolution, notice that if QM were to describe particle annihilation, it should also be able to describe particle creation because it is time-reversible. However, you can't give a wavefunction an interpretation of two particles at once. It only tells you how to find a single particle. From a more mathematical point of view, specifying that $\psi = 0$ at any one time (e.g., after annihilation) determines the whole evolution of the state. Notice that evolving the Schrödinger equation to any time with $\psi(0,x) = 0$ can only lead to $\psi(t,x) = 0$.
Hence, wanting to describe the state of the system with a single wavefunction will also prevent one from getting $\psi(t_0,x) = 0$ at any particular time $t_0$, unless the wavefunction vanishes at all times. To actually describe particle creation and annihilation, one must use different formalisms, such as second-quantization and quantum field theory. While this keeps the main ideas of QM (states are vectors on a Hilbert space, observables are operators, etc), you no longer deal with wavefunctions, and the states now live in a more complicated Hilbert space known as Fock space. This space does have states corresponding to different particle numbers and it should be mentioned that even now that particle annihilation is possible, the vacuum state (no particles) is not $\psi = 0$, but rather a non-vanishing vector corresponding to zero particles.
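The "$\psi = 0$ forever" point can be sharpened with a one-line norm computation (a sketch assuming a self-adjoint Hamiltonian and the Schrödinger equation $i\hbar\,\partial_t\psi = H\psi$):

```latex
\frac{d}{dt}\langle\psi|\psi\rangle
  = \frac{1}{i\hbar}\Big(\langle\psi|H\psi\rangle - \langle H\psi|\psi\rangle\Big)
  = \frac{1}{i\hbar}\langle\psi|\left(H - H^\dagger\right)|\psi\rangle
  = 0 \quad \text{since } H = H^\dagger.
```

So the norm is a constant of the motion: a normalized state can never evolve into $\psi = 0$, and the zero function stays zero for all time.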
{ "domain": "physics.stackexchange", "id": 92101, "tags": "quantum-mechanics, hilbert-space, wavefunction" }
Simple Parser, C++
Question: I wrote a simple parser for input-output operators. What is wrong with this code and how could it be better?

    #ifndef PARSER_H_
    #define PARSER_H_

    #include <string>

    class parser
    {
    public:
        typedef std::string::const_iterator const_iterator;
    private:
        static const_iterator start_position(bool b, const std::string& str);
        static const_iterator begin_it(const parser& p);
        static const_iterator end_it(const parser& p);
        static const_iterator current_it(const parser& p);

        const_iterator begin;
        const_iterator end;
        const_iterator it;
    public:
        parser(): begin(NULL), end(NULL), it(NULL) {};

        parser
        (
            const std::string& str_to_parse,
            bool start_from_the_end
        ):
            begin(str_to_parse.begin()),
            end  (str_to_parse.end()),
            it   (start_position(start_from_the_end, str_to_parse))
        {};

        //Give another string to parser:
        void str(const std::string&);

        void set_to_begin();
        void set_to_end();

        parser& operator = (const parser&);

        bool eof_str() const;

        char get();        //move forward
        char rget();       //move backward
        char peek() const; //watch current symbol

        //pass an all symbols beginning from current:
        void pass(char);  //moving forward
        void rpass(char); //moving backward

        //pass an all symbols beginning from
        //current which satisfy to condition:
        void pass(bool (*)(char));  //moving forward
        void rpass(bool (*)(char)); //moving backward

        //return iterator:
        const_iterator current_it() const;
    };

    //This function is used in constructor:
    //it helps to set iterator of parser
    //to beginning or to the end of string.
    inline parser::const_iterator
    parser::start_position(bool b, const std::string& str)
    {
        if (b) return str.end();
        return str.begin();
    }

    //This functions are used in operator=.
    //I decided to do not writing analogous
    //const-functions for better encapsulation.
    inline parser::const_iterator
    parser::begin_it(const parser& p) {return p.begin;}

    inline parser::const_iterator
    parser::end_it(const parser& p) {return p.end;}

    inline parser::const_iterator
    parser::current_it(const parser& p) {return p.it;}

    inline void parser::str(const std::string& str_to_parse)
    {
        begin = str_to_parse.begin();
        end = str_to_parse.end();
        it = str_to_parse.begin();
    }

    inline void parser::set_to_begin() {it = begin;}
    inline void parser::set_to_end() {it = end;}

    inline parser& parser::operator = (const parser& p)
    {
        begin = begin_it(p);
        end = end_it(p);
        it = current_it(p);
        return *this;
    }

    inline bool parser::eof_str() const {return it >= end;}

    inline char parser::get() {return *(it++);}
    inline char parser::rget() {return *(it--);}
    inline char parser::peek() const {return *it;}

    inline void parser::pass(char chr)
    {
        while (*it == chr) ++it;
    }

    inline void parser::rpass(char chr)
    {
        while (*it == chr) --it;
    }

    inline void parser::pass(bool (*cond)(char))
    {
        while (cond(*it)) ++it;
    }

    inline void parser::rpass(bool (*cond)(char))
    {
        while (cond(*it)) --it;
    }

    inline parser::const_iterator parser::current_it() const {return it;}

    #endif /* PARSER_H_ */

Example:

    #include "parser.h"
    #include <string>

    bool digit(char chr)
    {
        return (chr >= '0' && chr <= '9');
    }

    std::string cut_number(parser& p, bool& error)
    {
        error = false;

        //Check on that the first
        //symbol is a digit:
        if (!digit(p.peek()))
        {
            error = true;
            return "";
        }

        parser::const_iterator begin = p.current_it();

        //Check on that it is
        //correct number:
        if (p.get() == '0')
        {
            if (digit(p.peek()))
            {
                error = true;
                return "";
            }
        }

        p.pass(digit);

        //In the code below could not be
        //if (p.get() == '.')
        //because next char after *p.it
        //must be checked only if *p.it == '.'
        if (p.peek() == '.')
        {
            p.get();

            //Check on that it is
            //correct float pointing number:
            if (!digit(p.peek()))
            {
                error = true;
                return "";
            }
            else p.pass(digit);
        }

        parser::const_iterator end = p.current_it();
        return std::string(begin, end);
    }

Answer: I see a few things I'd change (not really errors, but still open to improvement, IMO). First, I'd change the default ctor to use nullptr instead of NULL:

    parser() : begin(nullptr), end(nullptr), it(nullptr) {}

As shown, I'd also remove the extraneous ; from the end of each definition, as above. I'd also change this constructor:

    parser
    (
        const std::string& str_to_parse,
        bool start_from_the_end
    ):

...to take an enumeration instead of a bool:

    enum direction {FORWARD, REVERSE};

    parser(std::string const &input, enum direction d) {
        // ...

Alternatively, I'd consider taking a pair of iterators, so the client code could pass forward iterators or reverse iterators as needed. For:

    void pass(bool (*)(char));  //moving forward
    void rpass(bool (*)(char)); //moving backward

I think I'd rather see function templates, with the predicate type passed as a template parameter:

    template <class F>
    void pass(F f);

With this, the user could pass a pointer to a function as is currently allowed, but could also pass a function object instead. The possible shortcoming is that passing an incorrect parameter type might easily produce a less readable error message.

    inline parser::const_iterator
    parser::start_position(bool b, const std::string& str)
    {
        if (b) return str.end();
        return str.begin();
    }

Again, I'd use the enumerated type from above instead of a Boolean (and, again, consider using iterators instead). This assignment operator:

    inline parser& parser::operator = (const parser& p)
    {
        begin = begin_it(p);
        end = end_it(p);
        it = current_it(p);
        return *this;
    }

...looks like it's only doing what the compiler-generated operator would do if you didn't write one (in which case, I'd generally prefer to let the compiler do the job).
{ "domain": "codereview.stackexchange", "id": 4100, "tags": "c++, parsing" }
If energy is relative, then how can it remain conserved?
Question: If energy depends on the frame of reference of the observer, then how can it remain conserved? The same question applies to linear and angular momentum. I think energy is conserved when seen from a specific frame of reference, but I have doubts about it. If that's the case, then I think that the energy difference between two systems, when observed from the same frame of reference, remains the same for all reference frames, and this energy difference is a more fundamental quantity and it should remain conserved instead. Kindly explain this to me. Answer: You need to distinguish between the two concepts of 'invariance' and 'conservation'. They are not the same. Invariance is when a quantity is invariant under some kind of transformation. This transformation does not depend on when you make measurements: whether it is before or after a reaction/experiment. Conservation is when a quantity is conserved before and after a reaction (given that you have already chosen a reference frame once and for all, before performing the experiment). You can have $4$ types of quantities in physics:

- Those that are conserved and invariant
- Those that are conserved but not invariant
- Those that are not conserved but are invariant
- Those that are neither conserved nor invariant

Energy is a quantity that is conserved before and after a reaction, but it is not invariant (because you can always go to a new, boosted reference frame).
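The "conserved but not invariant" category can be illustrated numerically with a 1-d elastic collision of equal masses (a nonrelativistic sketch with unit masses and invented velocities): total kinetic energy is conserved within each frame, but its value differs between the lab frame and a Galilean-boosted frame.

```python
def kinetic_energy(velocities, mass=1.0):
    return sum(0.5 * mass * v * v for v in velocities)

# Equal-mass 1-d elastic collision: the particles simply exchange velocities.
before = [2.0, 0.0]
after = [0.0, 2.0]

def boost(velocities, u):
    """Galilean boost into a frame moving at velocity u."""
    return [v - u for v in velocities]

lab_before, lab_after = kinetic_energy(before), kinetic_energy(after)
boosted_before = kinetic_energy(boost(before, 1.0))
boosted_after = kinetic_energy(boost(after, 1.0))
# Conserved in each frame (before == after), but not invariant (lab != boosted).
print(lab_before, lab_after, boosted_before, boosted_after)
```

Within the lab frame the total is 2 before and after; within the boosted frame it is 1 before and after. Each observer sees conservation, but the two observers disagree on the value.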
{ "domain": "physics.stackexchange", "id": 95099, "tags": "newtonian-mechanics, reference-frames, energy-conservation, conservation-laws, inertial-frames" }
Thread safe algorithm for arrays information manipulation
Question: I have a program which requires a usage of threads and it must be thread safe. I do not have much experience with threads and critical sections, but as much as I know you must lock code sections where resources are being read and written to. The program is working, but I do not know if I did it correctly. The DataB code was added to answer user Incomputable's question. It is just a class which stores information required for the program to work.

    class DataB {
    public:
        DataB() { }
        DataB(double sF, int sC, string n);
        double getField(void);
        void increaseFieldCount(void);
        void decreaseFieldCount(void);
        int getFieldCount(void);
        void setField(double f);
        void setCountField(int fC);
        string getName(void);
    private:
        string name;
        double field;
        int fieldCount;
    };

    DataB::DataB(double sF, int sC, string n) {
        field = sF;
        fieldCount = sC;
        name = n;
    }

    double DataB::getField(void) {
        return field;
    }

    void DataB::increaseFieldCount() {
        fieldCount++;
    }

    int DataB::getFieldCount(void) {
        return fieldCount;
    }

    void DataB::decreaseFieldCount(void) {
        fieldCount--;
    }

    void DataB::setCountField(int cF) {
        fieldCount = cF;
    }

    void DataB::setField(double f) {
        field = f;
    }

    string DataB::getName(void) {
        return name;
    }

    mutex threadLock;

    void threadRemove(int dataStart, int dataEnd, int dataCount, DataB B[], DataB V[], int &sortedCount) {
        bool changed = true;
        while (changed || sortedCount != dataCount) {
            changed = false;
            for (int i = dataStart; i < dataEnd; i++) {
                threadLock.lock();
                int count = V[i].getFieldCount();
                threadLock.unlock();
                if (count > 0) {
                    threadLock.lock();
                    double delItem = V[i].getField();
                    threadLock.unlock();
                    for (int x = 0; x < dataCount; x++) {
                        threadLock.lock();
                        double compare = B[x].getField();
                        threadLock.unlock();
                        if (compare == delItem) {
                            threadLock.lock();
                            B[x].decreaseFieldCount();
                            V[i].decreaseFieldCount();
                            if (B[x].getFieldCount() == 0) {
                                B[x].setCountField(-1);
                                B[x].setField(-1);
                            }
                            changed = true;
                            threadLock.unlock();
                            break;
                        }
                    }
                }
            }
        }
    }

    int main(int argc, char * argv[]) {
        /* */
        thread G6(threadRemove, 0, 5, dataCount, B, V, ref(sortedCount));
        thread G7(threadRemove, 5, 10, dataCount, B, V, ref(sortedCount));
        thread G8(threadRemove, 10, 15, dataCount, B, V, ref(sortedCount));
        thread G9(threadRemove, 15, 20, dataCount, B, V, ref(sortedCount));
        thread G10(threadRemove, 20, 25, dataCount, B, V, ref(sortedCount));

        G6.join();
        G7.join();
        G8.join();
        G9.join();
        G10.join();
    }

Answer:

Contention

There is only one std::mutex, and every thread is permanently in contention for that same lock for every small partial operation. Thus, at most one thread at a time is actually doing any work. A single-threaded implementation should be faster - it has to do the same work, but doesn't have to fight over lock control. So, how can this be fixed?

1) Separate independent data per thread

In the usage example, elements of V are split between threads so that no two threads are accessing the same elements. So, accesses to elements of V don't need a lock - if this convention is followed strictly. If this is possible, locks aren't needed for those parts!

2) Finer granularity locks

Right now, taking the lock stops every thread from doing any work at all. Even with the improvement of option 1, only one thread at a time can access elements from B. This can be improved by introducing locks at a finer granularity:

- simple: one lock per array element (so one for each DataB object)
- more advanced: one lock for a subset of each array (e.g. one lock for every 10 array elements)

Doing so allows other threads to perform work on all unrelated elements.

3) Read/Write exclusivity

What's the difference between reading and writing B[0]? Reading can be done concurrently, writing can't. There is a lock that helps for this special case: std::shared_mutex allows multiple threads to read the related object(s), but only allows one thread to write to it (while no one else can access it). The catch? It's only available since C++17. Before that, there might be other libraries providing that functionality, though (e.g. boost), or you make do with a normal std::mutex.

Small problem

In the current version, every change to an element of B is done in one transaction - no thread can see any partial state. If this property is required, this can be implemented with some special considerations. How do you change a DataB object in one transaction?

- Create a copy, change the copy, take the lock, overwrite the original with the copy, release the lock (aka RCU = Read Copy Update, requires an external lock)
- Lock the object, perform changes, unlock the object (this requires giving access to an internal lock)

Implementation

- Headers are missing. At least <string>, <mutex> and <thread> need to be included.
- using namespace std; is considered bad practice and should be avoided.
- Some member functions of DataB and some variables could be marked const (e.g. DataB::getFieldCount(), DataB::getField(), compare, count, every function parameter). Doing so might help reasoning about code and enables the compiler to verify said reasoning and might enable it to generate better machine code.
- Inconsistent naming: DataB::getFieldCount and DataB::setCountField.
- DataB doesn't encapsulate any behavior, it just provides some getters and setters. Maybe make it a POD struct instead?
- Prefer list initialization in the constructor(s).
- The void in function_name(void) isn't needed in C++.

Fixed contention code

For this, I made one internal mutex per DataB object and chose the second option for transactions. Also, I changed DataB to a POD struct to easily provide access to all internals (having getters and setters acquire locks on their own doesn't fit well with having a lock for a transaction).

    #include <string>
    #include <thread>
    #include <shared_mutex>

    struct DataB {
        mutable std::shared_mutex mut{}; // mutable so a const DataB object can still be locked
        std::string name;
        double field;
        int fieldCount;

        DataB() { }
        DataB(const double sF, const int sC, const std::string& n) : name{n}, field{sF}, fieldCount{sC} {}
    };

    void threadRemove(const int dataStart, const int dataEnd, const int dataCount, DataB B[], DataB V[], const int &sortedCount) {
        bool changed = true;
        while (changed || sortedCount != dataCount) {
            changed = false;
            for (auto i = dataStart; i < dataEnd; i++) {
                if (V[i].fieldCount > 0) {
                    for (auto x = 0; x < dataCount; x++) {
                        double compare;
                        {
                            std::shared_lock<std::shared_mutex> read_lock{ B[x].mut }; // shared_lock = read only access
                            compare = B[x].field;
                        } // read_lock goes out of scope and gets released
                        if (compare == V[i].field) {
                            std::unique_lock<std::shared_mutex> write_lock{ B[x].mut }; // unique_lock = write access
                            --B[x].fieldCount;
                            --V[i].fieldCount;
                            if (B[x].fieldCount == 0) {
                                B[x].fieldCount = -1;
                                B[x].field = -1;
                            }
                            changed = true;
                        }
                    }
                }
            }
        }
    }

Note

This could maybe be enhanced by strategic use of atomics. However, that would require more knowledge about the members and usages of DataB than I could gather/guess from the code given.
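The finer-granularity idea carries over to any language. Here is a minimal Python sketch with one lock per element, so threads touching different elements never contend for the same lock (this is illustrative only and does not reproduce the C++ program itself):

```python
import threading

# One lock per element instead of one global lock.
values = [0] * 4
locks = [threading.Lock() for _ in values]

def increment(index, times):
    for _ in range(times):
        with locks[index]:  # only blocks threads touching the same element
            values[index] += 1

threads = [threading.Thread(target=increment, args=(i, 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(values)  # [1000, 1000, 1000, 1000]
```

With a single global lock, all four threads would serialize on every increment; with per-element locks, the four threads here never block each other at all.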
{ "domain": "codereview.stackexchange", "id": 27987, "tags": "c++, thread-safety" }
Is time dilation based on the formula for period of a pendulum?
Question: The theory Albert Einstein put forward about special relativity mentions a possibility for time dilation, in which he states gravity has a considerable effect on time. And in high school physics we learnt that the time period of a simple pendulum is given by, $$T = 2π\sqrt{\frac{l}{g}}$$ Where $l$, $g$ have their usual meanings. Well, this describes how the period of oscillation experienced by a simple pendulum depends on the gravitational acceleration present. My question is whether Einstein proposed his view on time dilation based on a similar phenomenon. It is worth noticing here that as g tends to zero, time period tends to infinity. This doesn't mean that the actual time is lengthening, but the tangential force on the pendulum decreases, which will ultimately cause the pendulum to stop. But the time goes on, as a dead battery on my wrist watch doesn't imply that the actual time has stopped (not even relative to me)! Answer: No, the relationship between the period of a pendulum and $g$ is simple Newtonian mechanics and unrelated to special or general relativity. This is discussed in the answers to Time period related to acceleration due to gravity (though I hesitate to link this as that question was not well received). Time dilation was actually known before Einstein formulated his theory of special relativity. Lorentz published his transformations some time earlier, but their physical significance was not understood. Einstein showed that the transformations arose naturally from his theory of special relativity. By the time Einstein published his theory of general relativity he understood that time dilation is a result of the geometry of spacetime. This applies to special relativity as well as general relativity. I discuss this in my answer to Is gravitational time dilation different from other forms of time dilation?, though you may find this answer goes into a bit too much detail.
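The formula itself is easy to probe numerically. As $g \to 0$ the period grows without bound, which is a statement about the vanishing restoring force, not about time itself (the specific lengths and gravities below are just illustrative values):

```python
import math

def pendulum_period(length, g):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length / g)

# A 1 m pendulum on Earth, on the Moon, and in near-zero gravity.
for g in (9.81, 1.62, 0.01):
    print(g, pendulum_period(1.0, g))
```

The clock's mechanism slows down, but, as the answer explains, that has nothing to do with relativistic time dilation.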
{ "domain": "physics.stackexchange", "id": 37186, "tags": "special-relativity, time, time-dilation" }
What kind of algorithm is used by StackGAN to generate realistic images from text?
Question: What kind of algorithm is used by StackGAN to generate realistic images from text? How does StackGAN work? Answer: The paper StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks should provide the answers to your questions. Here's an excerpt from the abstract of the paper. Synthesizing photo-realistic images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose stacked Generative Adversarial Networks (StackGAN) to generate photo-realistic images conditioned on text descriptions. The Stage-I GAN sketches the primitive shape and basic colors of the object based on the given text description, yielding Stage-I low resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high resolution images with photorealistic details. The Stage-II GAN is able to rectify defects and add compelling details with the refinement process.
{ "domain": "ai.stackexchange", "id": 189, "tags": "computer-vision, reference-request, generative-adversarial-networks, image-generation" }
Enforcing application layer consensus on top of a distributed consensus protocol
Question: If I understand correctly, the term "consensus" in distributed consensus algorithms means that all the participants ("peers") achieve a common understanding of some data value. Different algorithms can handle different types of failures or misbehavior by some of the peers, network disconnections or partitions, lost messages, and so on. For example, PBFT has the property that any small group of peers (less than a third) cannot prevent the remaining peers from achieving a common understanding of the data value. BUT there is nothing in these algorithms that prevents any peer from setting the data to a value that they shouldn't set it to. So "consensus" just means we all agree that the value has been set to X. It does not mean we all agree that X is the correct value. Is that right? And, if so, is there some general research into how to enforce a set of application-specific rules about who is allowed to set the data to a given value? For example, Bitcoin has a set of rules that ensure that only "valid" transactions are accepted by the network. So, for example, Joe can't give Fred the same bitcoin that he's already given to Mary. Joe can't give Fred a bitcoin unless someone else already gave Joe that bitcoin, or Joe earned that bitcoin by mining. And so on... So what we have in Bitcoin are a set of application-layer rules that make sense for money, and these rules are embedded in the distributed consensus protocol used by Bitcoin (i.e. blockchain and proof-of-work and the bitcoin reference implementation). However, what if I have a different set of rules that make sense in MY application - how can I ensure that a distributed consensus protocol (e.g. PBFT) not only ensures consensus across all peers about what the data value is, handling various failures and misbehavior by peers, and ALSO ensures that only "legal" values are set by the peers. For example, I might have rules like: The value can only increase - nobody is allowed to decrease it. 
Except on Sundays, when it can be decreased, but must only be set to an even value. Except peer P can set it to whatever value they want to. And if the value=5, then peer Q is the only one who can change it. And so on... (Believe me, I have a real-world problem that requires this kind of complexity.) Ideally, the rules would be expressed in a formal language, and the distributed consensus protocol would reject any attempts by any peers to change the data value in a way that does not conform to the rules. Could I do this by implementing PBFT and adding in a check where each peer checks that a proposed new value is consistent with the rules, and then refusing to accept (possibly broadcasting a NACK) if the proposed change is not valid? I'm wondering if this has already been studied. Perhaps this is the same as the concept of "consistency" in a distributed database, but I don't want to use a database replication solution, because I want eventual consistency, not strong consistency, and I want the kind of fault-tolerance that PBFT provides. Thanks! Duncan Answer: You've asked multiple questions here. I'll answer the first one. Typically consensus means that all the honest participants have the same view of the state of the system. There's not necessarily a guarantee that this state is "correct" in any sense (some protocols might also provide that guarantee, but it is in some sense a separate or additional request). Often there are some rules about what kinds of state changes are "valid" or "allowed", and the honest participants are also tasked with ensuring that every state transition is valid; they won't generate or request transitions that aren't valid, and they won't accept transitions from others that are invalid. If you do this right then you get the additional guarantee that all the honest participants have the same view of the state of the system, and this state is valid. There are multiple ways to define which states or transitions are valid. 
One way is by defining a state machine (like a DFA) that represents which state changes are valid; then the protocol provides a distributed implementation of the state machine. I'm not an expert, but if you want to get a more authoritative answer, you could read the literature about state machine replication. It's probably worth reading the PBFT paper, which is presented as an instance of state machine replication.
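The "honest participants validate every transition" idea can be sketched as a predicate each peer applies before accepting a proposed value. The rule shown (values may only increase) is one of the question's own examples; wiring such a check into an actual PBFT implementation, including broadcasting a NACK on rejection, is beyond this sketch.

```python
def is_valid_transition(current, proposed):
    """Application-layer rule: the value may only increase."""
    return proposed > current

class Peer:
    def __init__(self):
        self.value = 0

    def on_proposal(self, proposed):
        # Each honest peer checks validity before voting to accept;
        # invalid proposals are rejected and the local state is unchanged.
        if not is_valid_transition(self.value, proposed):
            return False
        self.value = proposed
        return True

peer = Peer()
print(peer.on_proposal(5))  # True  - value increases
print(peer.on_proposal(3))  # False - decrease rejected, state unchanged
print(peer.value)           # 5
```

Since more than two-thirds of peers are honest in the PBFT setting, an invalid transition rejected by every honest peer can never gather enough votes to be committed.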
{ "domain": "cs.stackexchange", "id": 10713, "tags": "algorithms, distributed-systems, fault-tolerance, byzantine" }
Symmetry, conservation laws and coordinates
Question: Consider the following Lagrangian $$L_1 = \iint \left(u_x^2+u_y^2+v_x^2+v_y^2\right)\mathrm d x\, \mathrm dy \tag{01}$$ where subscripts denote (partial) derivatives. The following transformation preserves $L$: $$u_x\to u_x\cos\phi-u_y\sin\phi; \quad u_y\to u_y \cos\phi+u_x\sin\phi; \tag{02a}$$ $$v_x\to v_x\cos\phi-v_y\sin\phi;\quad v_y\to v_y \cos\phi+v_x\sin\phi\tag{02b}$$ for some parameter $\phi$. You can find this by looking for the generators of the symmetries associated with Laplace's eqtn. A natural question is, what is the conservation law associated with this symmetry? To this end, for an infinitesimal angle $\delta \phi$, we find $$\delta u_x =-u_y\delta \phi; \quad \delta u_y =u_x\delta \phi\tag{03a}$$ $$\delta v_x =-v_y\delta \phi; \quad \delta v_y =v_x\delta \phi\tag{03b}$$ Applying this to variations in $L_1$, we have $$\delta L = 0 \implies \iint \left(-u_xu_y+u_yu_x+ -v_xv_y+v_yv_x\right)\mathrm d x\, \mathrm dy = 0.\tag{04}$$ We confirm the Lagrangian stays fixed, but I am having a hard time interpreting this conservation law (does this just say mixed partials commute?). Next, consider a mapping $$u=\sqrt{\rho}\cos\theta\,; \quad v=\sqrt{\rho}\sin \theta \tag{05}$$ The Lagrangian becomes $$L_2=\int \int \left[\frac{\rho_x^2+\rho_y^2}{\rho}+4\rho(\theta_x^2+\theta_y^2)\right] \mathrm d x\, \mathrm dy\tag{06}$$ Is there any intuitive way, starting with this Lagrangian, to find the symmetry that was apparent in Cartesian coordinates? The fact that $L_1$ just depends on gradients, while $L_2$ does not, seems to make the symmetry analysis yield different results (which is consistent with what I've read: eg p 181 of Olver's book) and in particular it's not obvious how to find a relationship for how $\rho$ maps, as it depends on nonlocal (ie integral) quantities. Answer: I) Hints for the first part: Concentrate on $u$ and forget about $v$ as they enter in similar fashion. Let's also put in a conventional $\frac{1}{2}$ factor, i.e. 
the Lagrangian density becomes $$\begin{align}{\cal L}~=~&\frac{1}{2}u_{\mu}u^{\mu}, \qquad u_{\mu} ~:=~ d_{\mu} u,\cr d_{\mu}~:=~&\frac{d}{dx^{\mu}}, \qquad \mu~\in~\{1,2\}.\end{align}\tag{A}$$ Instead of letting OP's symmetry act on the derivatives $u_{\mu}$, it can be viewed as originating from an infinitesimal (so-called horizontal) rotation $$\delta x^{\mu}~=~\epsilon \varepsilon^{\mu\nu}x_{\nu}\tag{B}$$ in the worldsheet. It in turn induces a so-called vertical infinitesimal transformation $$ \delta_0 u~=~-u_{\mu}\delta x^{\mu} ~\stackrel{(B)}{=}~-\epsilon u_{\mu}\varepsilon^{\mu\nu}x_{\nu},\tag{C}$$ so that the total infinitesimal variation $$ \delta u~=~\delta_0 u+u_{\mu}\delta x^{\mu}~\stackrel{(C)}{=}~0\tag{D}$$ is zero. The corresponding Noether current $$\begin{align} j^{\mu} ~=~~~~~&\frac{\partial {\cal L}}{\partial u_{\mu}}\frac{\delta_0 u}{\epsilon}+{\cal L}\frac{\delta x^{\mu}}{\epsilon}\cr ~\stackrel{(A)+(B)+(C)}{=}& -u^{\mu}u_{\lambda} \varepsilon^{\lambda\nu}x_{\nu}+\frac{1}{2}u_{\lambda}u^{\lambda} \varepsilon^{\mu\nu}x_{\nu}\end{align}\tag{E}$$ satisfies a continuity equation $$ d_{\mu}j^{\mu}~\approx~0\tag{F}$$ on-shell. II) Hints for the second part: Since the transformation from rectangular/Cartesian to polar coordinates takes place in the target space while the symmetry acts in the worldsheet, they seem to be separate issues.
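As a quick check of the continuity equation (F) — a sketch not in the original hints, using $d_{\mu}x_{\nu}=g_{\mu\nu}$ and $u_{\mu\lambda}:=d_{\mu}d_{\lambda}u$:

```latex
d_{\mu}j^{\mu}
~=~ -(d_{\mu}u^{\mu})\,u_{\lambda}\varepsilon^{\lambda\nu}x_{\nu}
 \;-\; u^{\mu}u_{\mu\lambda}\varepsilon^{\lambda\nu}x_{\nu}
 \;-\; u^{\mu}u_{\lambda}\varepsilon^{\lambda\nu}g_{\nu\mu}
 \;+\; u^{\lambda}u_{\lambda\mu}\varepsilon^{\mu\nu}x_{\nu}
 \;+\; \frac{1}{2}u_{\lambda}u^{\lambda}\varepsilon^{\mu\nu}g_{\nu\mu}.
```

The third and fifth terms vanish by antisymmetry of $\varepsilon$, the second and fourth cancel after relabeling $\mu\leftrightarrow\lambda$ (using $u_{\mu\lambda}=u_{\lambda\mu}$), and the first term vanishes on-shell, $d_{\mu}u^{\mu}\approx 0$, reproducing (F).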
{ "domain": "physics.stackexchange", "id": 96333, "tags": "lagrangian-formalism, symmetry, conservation-laws, variational-principle, noethers-theorem" }
Why isn't output of Deutsch–Jozsa Algorithm simply $|0\rangle$?
Question: If I look at the circuit diagram of the Deutsch–Jozsa Algorithm: Now given the fact that Hadamard matrix or gate is its own inverse (see here), shouldn't the output (top wire) simply give back $|0\rangle$? Answer: It may naively seem that gates like $U_f$ whose action is defined as $$ U_f|x\rangle|y\rangle = |x\rangle|y\oplus f(x)\rangle $$ have no effect on the first register holding the state $|x\rangle$. This naive perception originates in our classical intuition. We see that the operation does not change the contents of the first register and in the classical world this is equivalent to having been unaffected by the operation. However, in quantum mechanics interactions can have other more subtle effects beyond changing register values. Specifically, they can introduce entanglement. As a consequence, one cannot move the second Hadamard on the first register through $U_f$ to cancel it with the first Hadamard (or vice versa) as that would change the entanglement produced by $U_f$ and therefore would perform an inequivalent operation. Moreover, entanglement affects the interference patterns in a way that can have measurable effects on the output distribution of each register. It is instructive to see how things play out in a concrete case. We will use the CNOT gate $$ CNOT|x\rangle|y\rangle = |x\rangle|y \oplus x\rangle \tag1 $$ which is a simple variant of $U_f$. For simplicity, consider the action of a Hadamard on the first qubit followed by a CNOT followed by another Hadamard on the first qubit. This is simpler than Deutsch-Jozsa algorithm, but serves to demonstrate how entanglement may prevent the first register from reading $|0\rangle$ with certainty. 
We have $$ \begin{align} (H \otimes I) \circ CNOT \circ (H \otimes I) |0\rangle |0\rangle &= \frac{1}{\sqrt{2}}(H \otimes I) \circ CNOT (|0\rangle + |1\rangle)|0\rangle \\ &= \frac{1}{\sqrt{2}}(H \otimes I) (|0\rangle|0\rangle + |1\rangle|1\rangle) \\ &= \frac{1}{2} \left[(|0\rangle + |1\rangle)|0\rangle + (|0\rangle - |1\rangle)|1\rangle\right] \\ &= \frac{1}{2} \left[|0\rangle|0\rangle + |1\rangle|0\rangle + |0\rangle|1\rangle - |1\rangle|1\rangle\right] \\ &= \frac{1}{2} \left[|0\rangle(|0\rangle + |1\rangle) + |1\rangle\color{red}{\underline{(|0\rangle - |1\rangle)}}\right] \\ \end{align} $$ where we collected the terms that correspond to the first qubit in the $|0\rangle$ and $|1\rangle$ states. Compare the calculation to what happens in the absence of the CNOT gate $$ \begin{align} (H \otimes I) \circ (H \otimes I) |0\rangle |0\rangle &= \frac{1}{\sqrt{2}}(H \otimes I) (|0\rangle + |1\rangle)|0\rangle \\ &= \frac{1}{2} \left[(|0\rangle + |1\rangle)|0\rangle + (|0\rangle - |1\rangle)|0\rangle\right] \\ &= \frac{1}{2} \left[|0\rangle|0\rangle + |1\rangle|0\rangle + |0\rangle|0\rangle - |1\rangle|0\rangle\right] \\ &= \frac{1}{2} \left[|0\rangle(|0\rangle + |0\rangle) + |1\rangle\color{red}{\underline{(|0\rangle - |0\rangle)}}\right] \\ \end{align} $$ where as before we collected the terms corresponding to the first qubit in the $|0\rangle$ and $|1\rangle$ states. In the absence of CNOT, destructive interference zeroes out the amplitude of the $|1\rangle$ state (see the last underlined expression). However, in the presence of CNOT interference is prevented because the amplitudes reside on different kets due to entanglement introduced by the CNOT gate (see the earlier underlined expression). Therefore, in the presence of CNOT, it is possible for the first register to read $|1\rangle$ even though naive reading of $(1)$ might suggest this should not happen. 
Note that gates such as $$ U_c|x\rangle|y\rangle = |x\rangle|y\oplus f(c)\rangle $$ where the change to the second register's value is independent of the contents of the first register do not generate entanglement. In this case it is possible to write the action of the gate as $U_c = I \otimes V$ for a unitary $V$ and so the two Hadamards on the first register would cancel.
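The interference argument above can be checked numerically. A small sketch (not from the original answer), with the first qubit as the CNOT control and basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$:

```python
import numpy as np

# Single-qubit Hadamard and identity, two-qubit CNOT (control = first qubit)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket00 = np.array([1.0, 0.0, 0.0, 0.0])
HI = np.kron(H, I2)                     # Hadamard on the first qubit only

with_cnot = HI @ CNOT @ HI @ ket00      # H, CNOT, H
without_cnot = HI @ HI @ ket00          # H, H (CNOT removed)

# Probability that the first qubit reads |1>: sum |amp|^2 over |10>, |11>
p1_with = float(np.sum(np.abs(with_cnot[2:]) ** 2))
p1_without = float(np.sum(np.abs(without_cnot[2:]) ** 2))
print(round(p1_with, 6), round(p1_without, 6))  # 0.5 0.0
```

With the CNOT in place the first register reads $|1\rangle$ half the time; without it, destructive interference forces $|0\rangle$ with certainty, exactly as the derivation shows.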
{ "domain": "quantumcomputing.stackexchange", "id": 2266, "tags": "deutsch-jozsa-algorithm" }
Quick question about Two-qubit SWAP gate from the Exchange interaction
Question: I am reading the following paper: Optimal two-qubit quantum circuits using exchange interactions. I have a problem with the calculation of the unitary evolution operator $U$ (Maybe it is stupid): I have figure out the matrix of $H$: \begin{equation} H = J \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & -1 & 2 & 0\\ 0 & 2 & -1 & 0\\ 0 & 0 & 0 & 1\\ \end{bmatrix} \end{equation} But I cannot write the matrix of Operator $U$ and get the result of $(SWAP)^α$. Could you please help me to calculate it? I really want to know how to get the matrix of U. Thank you so much. The figure is shown as below: Answer: You need to calculate $U=e^{-iHt}$. The trick to doing this is working out the eigenvectors of $H$: there's $|00\rangle$ and $|11\rangle$ with eigenvalues J, and $$ |\Psi^{\pm}\rangle=(|01\rangle\pm|10\rangle)/\sqrt{2} $$ with eigenvalues $(-1\pm 2)J$. In particular, notice that this means 3 of the eigenvalues are $J$. Hence, there are two eigenspaces of $H$, $|\Psi^-\rangle\langle\Psi^-|$ and $I-|\Psi^-\rangle\langle\Psi^-|$. Hence, we can find $$ U=e^{-iJt}(I-|\Psi^-\rangle\langle\Psi^-|)+e^{3iJt}|\Psi^-\rangle\langle\Psi^-|. $$ If you remove an irrelevant global phase, this is just the same as $$ U=(I-|\Psi^-\rangle\langle\Psi^-|)+e^{4iJt}|\Psi^-\rangle\langle\Psi^-|. $$ This is exactly what you were after, with $4Jt=\pi\alpha$.
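A quick numerical check of this result (a sketch; $J$ and $t$ are arbitrary test values), computing $e^{-iHt}$ from the spectral decomposition of the Hermitian $H$:

```python
import numpy as np

J, t = 1.0, 0.3  # arbitrary values for the check

H = J * np.array([[1,  0, 0, 0],
                  [0, -1, 2, 0],
                  [0,  2, -1, 0],
                  [0,  0, 0, 1]], dtype=complex)

# U = exp(-iHt) via eigendecomposition of the Hermitian H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Singlet projector |Psi-><Psi-|
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
P = np.outer(psi_m, psi_m.conj())

# Claimed form: U = e^{-iJt} [ (I - P) + e^{4iJt} P ]
U_claim = np.exp(-1j * J * t) * ((np.eye(4) - P) + np.exp(4j * J * t) * P)
print(np.allclose(U, U_claim))  # True
```

Setting $4Jt = \pi\alpha$ and dropping the global phase then gives $(\mathrm{SWAP})^{\alpha}$.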
{ "domain": "quantumcomputing.stackexchange", "id": 2128, "tags": "quantum-gate, quantum-state, matrix-representation, pauli-gates, entanglement-swapping" }
Equilibrium of more than two inter reacting species
Question: We know that equilibrium in a chemical system is attained when forward and backward reaction rates are equal. What if the reaction mixtures involve more than one reaction? For example, consider three inter-reacting species A, B and C. The rate constants for the forward and backward reactions of $\ce{A -> B}$ are $k_1$ and $k'_1$, that of $\ce{B -> C}$ are $k_2$ and $k'_2$ and that of $\ce{C -> A}$ are $k_3$ and $k'_3$. What is the relation between the six rate constants so that the system remains in equilibrium? Assume first order kinetics for all the reactions. Thanks in advance. Answer: At equilibrium $~r_1~=r_2~=r_3~=~0$ (writing $k_{-i}$ for the backward rate constants $k'_i$). So, $$k_1[A]-k_{-1}[B]=0$$ $$k_2[B]-k_{-2}[C]=0$$ $$k_3[C]-k_{-3}[A]=0$$ So, the criteria for equilibrium are 1) $$k_1[A]=k_{-1}[B]$$ 2) $$k_2[B]=k_{-2}[C]$$ 3) $$k_3[C]=k_{-3}[A]$$ Now defining the equilibrium constant as $K_i=\frac{k_i}{k_{-i}}$, we can derive the formula $$K_1 \times K_2 \times K_3~=~1$$
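A numerical sanity check of this condition (a sketch with illustrative rate constants chosen so that $K_1 K_2 K_3 = 1$): integrating the first-order kinetics by forward Euler, the system relaxes to a state in which each individual net rate vanishes, i.e. detailed balance holds.

```python
# Rate constants chosen so that K1*K2*K3 = (2/1)*(3/2)*(1/3) = 1
k1, km1 = 2.0, 1.0   # A <-> B
k2, km2 = 3.0, 2.0   # B <-> C
k3, km3 = 1.0, 3.0   # C <-> A

A, B, C = 1.0, 0.0, 0.0   # start with pure A, total amount 1
dt = 1e-3
for _ in range(20000):    # integrate to t = 20, long past relaxation
    r1 = k1 * A - km1 * B
    r2 = k2 * B - km2 * C
    r3 = k3 * C - km3 * A
    A += dt * (-r1 + r3)
    B += dt * (r1 - r2)
    C += dt * (r2 - r3)

# At equilibrium each individual net rate vanishes (detailed balance)
print(abs(k1 * A - km1 * B) < 1e-6)  # True
print(abs(k2 * B - km2 * C) < 1e-6)  # True
print(abs(k3 * C - km3 * A) < 1e-6)  # True
```

If the product of the $K_i$ differed from 1, the system would instead settle into a steady state with a nonzero circulating flux around the cycle, not a true equilibrium.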
{ "domain": "chemistry.stackexchange", "id": 3800, "tags": "physical-chemistry, equilibrium, kinetics" }
How does the $SU(2)$ group enter quantum mechanics?
Question: What is the reason that $SU(2)$ group enters quantum mechanics in the context of rotation but not $SO(3)$? What really rotates and which space it rotates? It cannot be the physical electron that rotating in real space. I think it is the state vector that rotates in spin space. Am I right? Is it the property of the "weird" spin space (Hilbert space) that rotation by $4\pi$, brings it back to where it started? Answer: Most of this question is already addressed from more than one point of view in answers (and comments) to the OP's related question: Idea of Covering Group In particular, two approaches to seeing why $\mathrm{SU}(2)$ arises are discussed in detail. The first uses the idea of projective representations, and the second involves algebra of quantum observables. However, there is one part of the question that is not directly addressed therein. What really rotates and which space it rotates?...I think it is the state vector that rotates in spin space. Am I right? Yes, you are right. When you rotate a spin-1/2 system in the real world, this corresponds to acting on the quantum state of the system with a unitary operator that represents that rotation on the Hilbert space.
{ "domain": "physics.stackexchange", "id": 11700, "tags": "quantum-mechanics, angular-momentum, group-theory, representation-theory" }
Is where measurement is done the requirement for what gets to be called the computational basis?
Question: In Nielsen and Chuang, chapter 1.3.3 is named as "Measurements in bases other than the computational basis". This name confuses me - after the measurement is done on a new base, doesn't this new base become the computational basis? Answer: The computational basis is just a convention for the $Z$ basis, as its orthogonal basis is $\{|0\rangle, |1\rangle\}$; which is analog to the bit in classical computation, hence the name. So in theory, yes, you can call computational basis any basis you want as long as you clarify what convention are you following, but the most common convention (and, in reality, the only one I've seen) is to call the $Z$ basis this.
{ "domain": "quantumcomputing.stackexchange", "id": 3012, "tags": "nielsen-and-chuang" }
Pure Arduino Quadcopter
Question: I recently bought a set of escs, brushless outrunner motors and propellers. I'm trying to perform a calibration on the esc, but I can't find how I can do that without using components other than the arduino uno itself. The setup I've managed to make is the one shown in the picture. The escs are a mystery, as there is no manual to be found. If it helps, the buy link is this: http://www.ebay.co.uk/itm/4x-A2212-1000KV-Outrunner-Motor-4x-HP-30A-ESC-4x-1045-prop-B-Quad-Rotor-/111282436897 There might also be a problem with the battery (LiPo 3.7V, 2500mAh). Can anybody figure out what I'm doing wrong? The sample arduino code I found was this: #include <Servo.h> #define MAX_SIGNAL 2000 #define MIN_SIGNAL 700 #define MOTOR_PIN 9 Servo motor; void setup() { Serial.begin(9600); Serial.println("Program begin..."); Serial.println("This program will calibrate the ESC."); motor.attach(MOTOR_PIN); Serial.println("Now writing maximum output."); Serial.println("Turn on power source, then wait 2 seconds and press any key."); motor.writeMicroseconds(MAX_SIGNAL); // Wait for input while (!Serial.available()); Serial.read(); // Send min output Serial.println("Sending minimum output"); motor.writeMicroseconds(MIN_SIGNAL); } void loop() { }
Quoting: Normal startup [one style of ESC]: Turn On ESC, minimum throttle, wait 2 seconds, maximum throttle, wait 2 seconds, minimum throttle, wait 1 second, OK to Go. Normal startup [another style of ESC]: Turn On ESC, minimum, wait 3 seconds, OK to Go. Calibration: Turn on ESC, maximum, wait 2 sec, minimum, wait 1 sec, OK to go. From that post, typically there's a beep from the ESC between each of these steps (where you're instructed to wait, wait for the beep).
{ "domain": "robotics.stackexchange", "id": 989, "tags": "arduino, quadcopter, esc" }
Does turning a spoon in water raise the temperature?
Question: I read about Joule's experiment proving the transformation of mechanical work into heat. But say I have a bowl with some water, and I start turning a spoon in it very fast, thus doing work — the water won't get hotter! What am I missing? I think maybe the work I put is simply kinetic, and won't turn into heat. But then how do you explain Joule's experiment? Answer: Well first you have the energy in the form of kinetic energy of the spinning water. Once you let that water settle, it DOES get hotter. The only problem is that water has a high specific heat (it takes a LOT of energy to heat up water), so you don't notice the water getting hotter since the amount it's heating up is not very noticeable. Coincidentally, it is this property of water that makes the earth a habitable planet--we have moderate temperatures compared to other planets because our oceans, bays, and lakes can absorb or release large amounts of heat to moderate the atmospheric temperatures. If you want a more observable experiment, try taking a piece of metal (maybe a paper clip?) and bending it back and forth a lot of times. Although it'll eventually break, you should be able to notice it getting hotter
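A rough order-of-magnitude estimate makes the point (the stirring power and water mass are assumed numbers, not measurements):

```python
# Assumed: vigorous stirring delivers ~5 W of mechanical power into 0.5 kg
# of water for one minute, and all of it ends up as heat.
power_w = 5.0          # assumed stirring power, watts
time_s = 60.0
mass_kg = 0.5
c_water = 4186.0       # specific heat of water, J/(kg*K)

energy_j = power_w * time_s
delta_t = energy_j / (mass_kg * c_water)
print(round(delta_t, 3))  # 0.143 -- about 0.14 K, far too small to feel
```

Even a full minute of hard stirring warms the water by only a tenth of a degree, which is exactly why Joule needed carefully insulated apparatus and sensitive thermometry.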
{ "domain": "physics.stackexchange", "id": 26305, "tags": "thermodynamics" }
finding onset of an impulse signal, basics?
Question: New to signal processing, but making good progress, I think. I have a series of (many) impulses I generated, which will be used as impulse responses to model our church's acoustics, in this time of covid-19. A couple of questions, before the details: What is the best practice (and code solution) for detecting the onset of an impulse? When considering the spectrogram for an impulse, is it unusual for the spectrogram's data to occur earlier than the waveform, and before the waveform(db)? Details: Slap-boards were used to generate the impulses, which were recorded as 2 channel WAV files at 96000, 16bits, using my zoom H1n Handy Recorder. The impulses occur at quite regular times in the data, although not precisely regular, as the board slaps were done by hand, at the beat of my internal drummer, so to speak. I have successfully used scipy.io.wavfile to split the data into two channels, and then used scipy.signal.find_peaks to get (very close to) the onset of each of the pulses by finding the peak of each impulse. However, I can see that the actual onset of each impulse is missed by this approach, and I would like to capture these individual onsets better using python. I've been reading up, and am sure this is a deep and broad topic. However, perhaps some kind soul can assist me with the specifics of how to find the precise times of these onsets? I imagine this is a fairly typical type of problem in signal processing, and I realize it's (quite) a bit of an education that I'm requesting. I'm really hoping for a code solution suggestion to find the onset of these kind of impulse data. To be clear, a) The maximum peak for each pulse is not at the onset, obviously, nor is it necessarily the first noticeable peak for each impulse, as I review the entire datastream. (I think when this delayed peak occurs, a reflected signal has a higher peak than the direct response at the recording device. I'm not certain of this though . . .) 
b) The waveforms for these pulses do not necessarily go to zero between impulses, in fact, they rarely do. The signal goes close to zero, but not precisely. (I expect this has something to do with ambient noise around the signal, but am not certain . . .) c) The waveform could go negative first, or positive (as is the case with the (initial) data from this sample impulse). In the attached image, the top five graphs show a group of impulses (3 out of several hundred), followed by increasing resolutions zooming into the onset of the first impulse in this group. The bottom two images are the left channel of the first impulse, taken as screenshots from Audacity. They show the waveform, the waveform(db), and the spectrogram for the first impulse -- on the left, the entire impulse, on the right, the onset of the impulse. (I am puzzled why the spectrogram appears to precede the waveform and waveform(db) by a measurable number of samples.) Although I plotted the spectrograms in Audacity, I am not sure how to access spectral data in a WAV file, nor how to use it for detecting the onset of an impulse. I'll try to attach the data leading up to the first impulse, and a little ways into this impulse, but these are quite large files. I don't know the rules for sending large datasets. Thanks for your help, kind ones. 
I am not sure of what is going on in a WAV file, but here are 250 samples taken from the left channel, that I believe start from before the onset of the first impulse, and end somewhat into the impulse itself: wav_left_subset = array([ -23, -16, -20, -19, -18, -19, -15, -20, -18, -21, -20, -22, -22, -18, -22, -17, -22, -20, -17, -24, -14, -21, -16, -16, -16, -13, -17, -11, -18, -14, -18, -14, -16, -13, -12, -13, -9, -16, -11, -16, -16, -13, -16, -14, -14, -15, -13, -13, -11, -14, -9, -12, -12, -13, -15, -13, -15, -15, -13, -16, -8, -14, -12, -12, -13, -11, -11, -12, -10, -8, -8, -8, -6, -9, -6, -7, -5, -6, -2, -3, -2, -1, -4, -2, -4, -1, 0, -1, 2, 0, -1, 3, -3, 6, -2, 9, 4, 5, 7, 4, 7, 9, 1, 10, 6, 11, 13, 9, 13, 15, 12, 18, 15, 17, 20, 20, 22, 20, 21, 23, 20, 23, 25, 24, 32, 27, 33, 30, 32, 29, 33, 34, 36, 41, 39, 43, 42, 49, 47, 55, 51, 59, 60, 63, 67, 67, 72, 70, 78, 75, 83, 85, 88, 93, 96, 102, 106, 111, 115, 124, 127, 135, 143, 146, 161, 163, 181, 185, 197, 209, 222, 239, 249, 269, 281, 303, 322, 344, 369, 399, 431, 466, 501, 544, 588, 642, 701, 779, 858, 1003, 1152, 1466, 1706, 1921, 1352, -13, -4626, -11419, -14567, -17320, -19721, -21829, -23673, -14863, -2840, 2088, 6363, 10091, 13343, 16173, 18656, 20820, 22727, 24392, 25864, 27162, 28305, 29329, 29056, 30424, 31358, 31919, 28408, 22294, 15638, 8584, 1428, -3153, -7130, -10605, -13629, -4656, 5684, 9787, 13358, 16474, 19186, 14213, 8269, 6929, 12547, 18601, 21081, 23248, 25145, 26811, 28274, 28920, 13555, 5571], dtype=int16) Answer: What is the best practice (and code solution) for detecting the onset of an impulse? ... The waveforms for these pulses do not necessarily go to zero between impulses, in fact, they rarely do. The signal goes close to zero, but not precisely. (I expect this has something to do with ambient noise around the signal, but am not certain . . .) Both of these are expected when you are recording in an open field. 
For the impulse response data, you can measure the average strength of the background level and then consider the start of the impulse as the level that the waveform "breaks through" that noise level. Similarly for when the waveform comes back down to levels comparable to the background noise. This is implemented in Audacity as the Noise Gate if you want to do a quick test. The maximum peak for each pulse is not at the onset, obviously, nor is it necessarily the first noticeable peak for each impulse, as I review the entire datastream. (I think when this delayed peak occurs, a reflected signal has a higher peak than the direct response at the recording device. I'm not certain of this though . . . If there is a direct line of sight between the source and the mic, then the first arrival is the direct one, purely judging by the distance the wave has to travel. Now, sound does not travel in straight lines. The speed of sound also depends on the medium (for air, mainly on its temperature and pressure). But to start assessing how much these effects impact the room you are dealing with you would have to simulate sound propagation to figure out the reasons behind a specific recording. (I am puzzled why the spectrogram appears to precede the waveform and waveform(db) by a measurable number of samples.) The spectrogram view is interpolated between time instances Audacity runs the DFT on. You can find out more about it here. From a theoretical point of view, an impulse is a sharp discontinuity in the time domain, which would result in a broad spectrum (more sinusoids required so that when they are summed, they can reproduce that discontinuity accurately). So, a discontinuity shows up as a bright vertical bar in the spectrogram but because of the reasons explained here, there is no added benefit from the spectrogram in locating exactly where an impulse is supposed to start. ... 
here are 250 samples taken from the left channel, that I believe start from before the onset of the first impulse, and end somewhat into the impulse itself: You are recording at 96kHz. What you are looking at prior to the main impulse is the build up of the pressure wave front as it hits the microphone. If there is a direct line of sight between the source and the microphone, you can take as $t=0$ the main impulse (the highest peak) and follow it up until it goes below the noise floor without losing any detail. Hope this helps.
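Here is a sketch of the noise-gate style detector described above, applied to a synthetic stand-in for the recording (the signal parameters below are invented for illustration, not taken from the actual church data):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 96000
onset_true = 40000                       # known onset of the synthetic slap

# Synthetic stand-in: int16-scale background noise plus a decaying impulse
noise = rng.normal(0.0, 20.0, fs)
x = noise.copy()
t = np.arange(fs - onset_true) / fs
x[onset_true:] += 20000 * np.exp(-60 * t) * np.sin(2 * np.pi * 800 * t)

# 1) smooth |x| into an envelope so single noise spikes don't trigger
env = np.convolve(np.abs(x), np.ones(64) / 64, mode="same")

# 2) estimate the noise floor from a stretch known to be background only
floor = env[:10000].mean() + 5.0 * env[:10000].std()

# 3) onset = first sample where the envelope breaks through the floor
onset_est = int(np.argmax(env > floor))
print(onset_est)  # lands within the smoothing window of onset_true
```

The detected index lands a few samples early because the smoothing window is centered; for the real recordings, the background segment used for the floor estimate should be taken from the gaps between impulses.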
{ "domain": "dsp.stackexchange", "id": 9053, "tags": "audio, signal-detection, impulse-response, audio-processing" }
Newton Second Law's Acceleration
Question: If an object is 2 kg and a 10 N force is applied to an object. So, the acceleration is 5 m/s^2. Does it mean that if the same amount of force is applied to an object continuously, the object will increase its speed 5 m/s every one second? Answer: Yes, you are correct. Acceleration is defined as the rate of change of velocity, i.e. by how much the velocity changes in a certain amount of time. Mathematically the acceleration is given by the time derivative of velocity: $$a=\frac{dv}{dt}$$ which for constant acceleration (constant force) as in this case can also be written as: $$a=\frac{\Delta v}{\Delta t}$$ So an acceleration of $5~m/s^2$ means that the velocity changes by $\Delta v=5~m/s$ every $\Delta t=1~s$, or (which is equivalent) by $\Delta v=10~m/s$ every $\Delta t=2~s$ or ...
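A one-line numerical restatement of the example (assuming the object starts from rest):

```python
# Constant force F = 10 N on m = 2 kg: the speed grows by 5 m/s every second.
m, F = 2.0, 10.0
a = F / m                              # 5.0 m/s^2
v0 = 0.0                               # assumed initial velocity
velocities = [v0 + a * t for t in range(4)]   # v at t = 0, 1, 2, 3 s
print(velocities)  # [0.0, 5.0, 10.0, 15.0]
```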
{ "domain": "physics.stackexchange", "id": 48949, "tags": "newtonian-mechanics, acceleration" }
Why do we need three equations to find the pH of NaCN, given Ka(HCN)?
Question: To find the pH of a 1.0 M solution of $\ce{NaCN}$, given $\ce{Ka(HCN) =4.9E(-10)}$ The solution (refer end of question) uses three equations: Equation 1: $\ce{HCN + H2O <-> CN- + H+}$ Equation 2: $\ce{NaCN <-> CN- + Na+}$ Equation 3: $\ce{CN- + H2O <-> HCN + OH-}$ I am confused as to: 1) the logic behind why we need these three equations (up to this point in the course, all the similar questions have been of the form e.g. find pH of NH3 given Kb(NH3), and so we only used the NH3 acid base reaction equation. I assume equation 1 is necessary because the Ka is given for HCN and Equation 2 is necessary because we are asked to find the pH of NaCN. Where does Equation 3 come from? We did talk about spectator ions in class so does it have something to do with Na being a spectator ion? 2) why the ICE (initial, change, at equilibrium) table to find x is written in terms of Equation 3 and not, for example, Equation 2. The solution from the textbook is given below: Thank you. Answer: So strictly speaking you only really need equation 3 to solve this problem, but the first two equations help you figure out how to solve it if you're not too familiar. What do I mean: Equation 1: $\ce{HCN + H2O <=> H3O+ + CN-}$ As you correctly note, this is to remind you of the definition for $K_a$. Equation 2: $\ce{NaCN -> Na+ + CN-}$ Note that I've changed $\ce{<=>}$ to $\ce{->}$. This is because most sodium salts fully dissociate in aqueous solution and so this "equilibrium" doesn't exist and you can then consider all $\ce{NaCN}$ to be converted to $\ce{CN-}$ for your "initial" stage in the ICE table. Hence there's no need to write an ICE Table with respect to Equation 2. Value of writing this out: In the case that you were given something sparingly soluble or did not fully dissociate, you would have to consider another equilibrium and this equation 2 would become significant. But in this case, $\ce{Na+}$ is a spectator ion which you can ignore. Where does Equation 3 come from? 
Equation 3 comes from the question itself (or rather what is happening in the system described). Remember, you've got a 1.0M solution of $\ce{NaCN}$. Since we've already established that it fully dissociates, at the "initial" stage of the system you have only $\ce{Na+}$ and $\ce{CN-}$ ions floating around. But from Equation 1 you know that $\ce{HCN}$ is a weak acid and so free $\ce{CN-}$ in water may act as a base and take a proton from water, and that is given by the equation: $$\ce{CN- + H2O <=> HCN + OH-}$$
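For completeness, the ICE-table arithmetic with Equation 3 can be carried out numerically (assuming $K_w = 10^{-14}$ at 25 °C and the usual $x \ll c$ approximation):

```python
import math

Ka = 4.9e-10                # for HCN, as given
Kw = 1.0e-14                # assumed, 25 degrees C
Kb = Kw / Ka                # for CN- + H2O <=> HCN + OH-
c = 1.0                     # mol/L of CN- after full dissociation of NaCN

# ICE table on Equation 3 with x = [OH-]:  Kb = x^2 / (c - x) ~ x^2 / c
x = math.sqrt(Kb * c)       # [OH-], ~4.5e-3 so x << c is justified
pOH = -math.log10(x)
pH = 14 - pOH
print(round(pH, 2))  # 11.65
```

The strongly basic result is exactly what one expects for the salt of a very weak acid.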
{ "domain": "chemistry.stackexchange", "id": 5981, "tags": "acid-base, ph" }
Hash function that returns the same result when the input is reversed
Question: Do any hash functions exist that provide the same result if the input is reversed? If this is impossible, why is it impossible? I am interested in sending packets of constant size around a circuit in a network in both directions, having each node add its value to the hash and then, when the packets meet, compare the hash values. Answer: You could take any hash function $H$ and define a new hash function $H'$ as $H'(x) = H(\min(x, x^R) \Vert \max(x, x^R))$, where $\Vert$ denotes concatenation, $x^R$ is the reversed version of $x$, and $\min/\max$ select the lexicographically smaller and larger of the two strings. Since $x$ and $x^R$ canonicalize to the same string, they hash to the same value. (Note that the simpler construction $H'(x) = H(x \Vert x^R)$ does not work: on input $x^R$ it hashes $x^R \Vert x$, which is in general a different string from $x \Vert x^R$.)
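A sketch of such a construction (using SHA-256 as an example choice of $H$, and canonicalizing the order of $x$ and its reverse before hashing, since plain concatenation is order-sensitive):

```python
import hashlib

def reverse_invariant_hash(data: bytes) -> str:
    # Canonicalize: concatenate the lexicographically smaller of (x, x reversed)
    # with the larger, so x and x[::-1] always hash the same canonical string.
    r = data[::-1]
    lo, hi = (data, r) if data <= r else (r, data)
    return hashlib.sha256(lo + hi).hexdigest()

print(reverse_invariant_hash(b"network packet") ==
      reverse_invariant_hash(b"tekcap krowten"))  # True
```

For the packets-meeting-in-a-ring use case, each node would contribute its value in a direction-independent way (e.g. by sorting or summing contributions) before applying the hash.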
{ "domain": "cs.stackexchange", "id": 21641, "tags": "algorithms, hash, hashing" }
Checking a collection of Ints
Question: I'm learning a little Scala by writing a little card game. What I want to do here is check that the Traversable[Team] supplied has the same number of team members for each team. How can I clean this up? val teamSizes = teams.map(_.members.size) require(teamSizes.foldLeft((true, teamSizes.head)) { (tuple, lastSize) => val (b, size) = tuple (b && size == lastSize, lastSize) }._1) Answer: I've come up with an alternative, which is nice and compact: val teamSizes = teams.map(_.members.size) require(teamSizes.forall(_ == teamSizes.head))
{ "domain": "codereview.stackexchange", "id": 518, "tags": "scala" }
Quantum Mechanical Wave Functions
Question: Are wave functions, such as those used in the Schroedinger equation just 'guessed' and verified, or are there other theories which tell us the mathematical description of the wave function for particular systems (i.e. if some new quantum phenomena is discovered, does the wave function need to be 'made up' from scratch and then experimentally verified or are there laws that give tight constraints on what form the wave function can take)? Answer: Sometimes, our formalism is not the wave-function but all sort of symmetries. I refer here, as you asked, about cases in which what we test in the lab is a wave-function. You ask : if some new quantum phenomena is discovered, does the wave function need to be 'made up' from scratch and then experimentally verified or are there laws that give tight constraints on what form the wave function can take? The things go in all the ways: Nothing is wrong in beginning from scratch. But, a more systematic way when we find in our experiments some new effect, is to make a theoretical guess about forces or about potentials (if the forces can be considered conservative), that may lead to that effect. Then we write a Hamiltonian and obtain a wave-function by solving the Schrodinger equation, (or Dirac equation, etc., depending on the case). After that, we test the wave-function experimentally. So, it goes in nuclear interactions. Though, I know a case in which the way above didn't lead to a satisfactory wave-function. The wave-function that was obtained explains some details of a phenomenon, but doesn't explain other details. So, people try first to modify the wave-function, check if it works, and after that they bother with the question which Hamiltonian, if any, produces the desired wave-functions. About constraints, yes we have. A wave-function has to have a finite norm, s.t. we can normalize it to 1, because the wave-function intensity represents probability. 
A frequent case in which the norm is infinite, is wave-functions that normalize to $\delta$ Dirac, as the plane waves. But such wave-functions are idealizations, used for mathematical simplicity. There are other constraints too, e.g. the wave-functions of a collection of identical bosons or fermions, should obey symmetry laws with respect to the interchange of two particles. That indeed restricts the set of possible wave-functions. Particle physics gives us other symmetry constraints too.
{ "domain": "physics.stackexchange", "id": 18698, "tags": "quantum-mechanics, wavefunction, schroedinger-equation" }
Variance in cross validation score / model selection
Question: Between cross-validation runs of an xgboost classification model, I gather different validation scores. This is normal, the train/validation split and model state are different each time. flds = self.gsk.Splits(X, cv_folds=cv_folds) cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=xgb_param['n_estimators'], nfold=cv_folds, folds=flds, metrics='auc', early_stopping_rounds=50, verbose_eval=True) self.model.set_params(n_estimators=cvresult.shape[0]) To select the parameters, I run this CV multiple times and average the results in order to attenuate those differences. Once my model parameters have been "found", what is the correct way to train the model, which seems to have some inner random state? Do I: train on the full train set and hope for the best? keep the model with the best validation score in my CV loop (I am concerned this will overfit)? bag all of them? bag only the good ones? Answer: Since you want your model to be a general solution, you want to include all your data when building the final model. You are correct in saying that keeping the model with the best validation score in the CV is overfitting. Including these inner random states helps generalize your model, and since you have already tuned your model parameters using CV, you can apply these parameters to the final model. As for feature selection, you want to separate the data used to perform feature selection and the data used in cross-validation, so feature selection is performed on independent data in the cross-validation fold. This prevents biasing the model. If you were to select your features on the same data that you then use to cross-validate, you will likely overestimate your accuracy. 
Here are some other great posts that help: https://stats.stackexchange.com/questions/11602/training-with-the-full-dataset-after-cross-validation https://stats.stackexchange.com/questions/27750/feature-selection-and-cross-validation Check out Dikran Marsupial's answers to both, they are really good.
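To make the "feature selection inside the fold" point concrete, here is a minimal sketch with scikit-learn; the dataset, selector, and logistic classifier are stand-ins chosen for illustration, and the same pattern works with an xgboost estimator in place of the classifier:

```python
# Keep feature selection inside each CV fold by wrapping it in a Pipeline,
# so the selector is re-fit on every training split only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# toy data, purely for illustration
X, y = make_classification(n_samples=200, n_features=30, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),    # fit on the training fold only
    ("clf", LogisticRegression(max_iter=1000)),
])

# cross_val_score re-fits the whole pipeline per fold, so the selection
# never sees the corresponding validation data.
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

Because the selector is part of the pipeline, each validation fold stays unseen by the selection step, which is exactly what avoids the optimistic bias described above.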
{ "domain": "datascience.stackexchange", "id": 1290, "tags": "xgboost, cross-validation, model-selection, parameter-estimation" }
Derivation of Michaelis-Menten kinetics for discrete stochastic simulations
Question: According to this article, [...] the propensity function for the conversion reaction S → P in the well-mixed discrete stochastic case can be written $a(S) = \frac{V_{max}\cdot S}{K_m + S/\Omega}$ where $\Omega$ is the system volume. I don't quite understand how this formula is derived from the non-discrete Michaelis-Menten kinetics $v = \frac{V_{max} \cdot [S]}{K_M + [S]}$ (see Wikipedia). According to my understanding, $[S] = S/\Omega$. If we apply this to the formula from Wikipedia we get $$ \frac{V_{max} \cdot[S]}{K_M + [S]} = \frac{V_{max} \cdot S/\Omega}{K_M + S/\Omega} = \frac{V_{max} \cdot S}{\Omega \cdot K_M + S} $$ which is not the same as $\frac{V_{max}\cdot S}{K_m + S/\Omega}$. So, how can one derive the formula from the quoted article (if it is correct)? If not, how can we correctly get to a discrete propensity function from the Michaelis-Menten kinetics? Answer: I think I figured it out myself. Since $a(S)$ is in $\frac{mol}{s}$ and $v$ is in $\frac{\frac{mol}{l}}{s}$, we have that $a(S)= v * \Omega$. Therefore $$ a(S) = \frac{V_{max}\cdot [S]}{K_M + [S]} \cdot \Omega = \frac{V_{max}\cdot S / \Omega \cdot \Omega}{K_M + S / \Omega} = \frac{V_{max}\cdot S}{K_M + S/\Omega}. $$
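The unit argument is easy to sanity-check numerically; a throwaway sketch with made-up parameter values, confirming that the discrete propensity equals $\Omega$ times the deterministic Michaelis-Menten rate:

```python
# Check a(S) = v * Omega: the propensity Vmax*S / (Km + S/Omega) matches
# the deterministic rate Vmax*[S] / (Km + [S]) with [S] = S/Omega.
def mm_rate(S, Vmax, Km, Omega):
    conc = S / Omega                      # [S] = S / Omega
    return Vmax * conc / (Km + conc)

def propensity(S, Vmax, Km, Omega):
    return Vmax * S / (Km + S / Omega)

Vmax, Km, Omega = 2.0, 0.5, 3.0           # arbitrary illustrative values
for S in (1.0, 10.0, 250.0):
    assert abs(propensity(S, Vmax, Km, Omega)
               - Omega * mm_rate(S, Vmax, Km, Omega)) < 1e-12
```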
{ "domain": "biology.stackexchange", "id": 6190, "tags": "theoretical-biology, kinetics" }
How to get the high 32 bits of the product of two 32-bit integers?
Question: Recently I have been studying the riscv32 instruction set and came across the instruction "mulh", which multiplies two 32-bit signed integers and stores the high 32 bits of the product in a register. Here is my problem: given two integers src1 and src2, I tried to implement this instruction in C using shift operations, but failed. So I turned to another method, which splits src1 and src2 into A (the high 16 bits) and B (the low 16 bits). But this raises another question: how can I determine the correct carry from the low 32 bits into the high 32 bits? That is what I came here for help with. Answer: You are computing $$(2^{16}A_1+B_1)(2^{16}A_2+B_2)$$ which is $$2^{32}A_1A_2+2^{16}(A_1B_2+A_2B_1)+B_1B_2.$$ If we decompose $$A_1B_2+A_2B_1=2^{16}A_3+B_3$$ where $A_3$ is at most $17$ bits and $B_3$ is $16$ bits, this turns to $$2^{32}A_1A_2+2^{32}A_3+2^{16}B_3+B_1B_2.$$ So the carry from the least significant $32$ bits is the carry out of $2^{16}B_3+B_1B_2$.
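A sketch of this decomposition in Python, emulating 32-bit registers with masks (operands are given as unsigned 32-bit patterns; names follow the answer's $A_i$, $B_i$):

```python
# High word of a 32x32 multiply using only 16-bit halves, per the answer:
# the carry into the high 32 bits is the carry out of 2^16*B3 + B1*B2.
MASK32 = 0xFFFFFFFF

def mulhu32(x, y):
    """High 32 bits of the unsigned 32x32 product, via 16-bit halves."""
    a1, b1 = x >> 16, x & 0xFFFF
    a2, b2 = y >> 16, y & 0xFFFF
    cross = a1 * b2 + a2 * b1            # = 2^16*A3 + B3, up to 33 bits
    a3, b3 = cross >> 16, cross & 0xFFFF
    low = (b3 << 16) + b1 * b2           # least significant 32 bits, plus carry
    return (a1 * a2 + a3 + (low >> 32)) & MASK32

def mulh32(x, y):
    """Signed variant (riscv 'mulh'): the standard fix-up of the unsigned
    result, subtracting y when x's sign bit is set and x when y's is set."""
    h = mulhu32(x, y)
    if x >> 31:
        h = (h - y) & MASK32
    if y >> 31:
        h = (h - x) & MASK32
    return h
```

For example, `mulh32(0xFFFFFFFF, 2)` treats the first operand as $-1$ and returns `0xFFFFFFFF`, the high word of $-2$ in two's complement.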
{ "domain": "cs.stackexchange", "id": 20540, "tags": "algorithms, mathematical-programming, c, instruction-set" }
Clarification regarding Bounded Quadratic Congruence Problem
Question: Given: 3 positive integers $a, b, L$. Problem: Is there a positive integer $x < L$ such that $x^2 \equiv a \pmod{b}$? The above problem is NP-complete (as mentioned in G&J), even if we are given the factorization of $b$. My query is the following: Suppose we impose a promise/condition that the total number of occurrences of the residue $a$ is polynomially bounded w.r.t. the number of prime factors of $b$, i.e. the number of occurrences of $a$ is always less than ${pct}^C$, where $pct$ is the number of prime factors of $b$ and $C$ is some positive integer constant. Does this problem remain NP-complete, or does it become polynomial-time solvable? Essentially, is the number of times a residue occurs what makes this problem difficult, or does it not matter, the difficulty depending only on the number of prime factors of $b$? Answer: The NP-completeness of the original problem was proved by Manders and Adleman [1] using a reduction from 3-SAT. Their reduction is parsimonious. Thus, (taking into account that the number of prime factors is upper bounded by the length of the input $n$, while in the M–A reduction, it is at least $n^\epsilon$) your problem is complete for promise-FewP. Note that by Valiant–Vazirani, already promise-UP is NP-hard under randomized polynomial-time reductions, hence the same holds for promise-FewP. Thus, the problem is essentially as difficult as NP. EDIT: The answer above assumes that in the question, the unclear phrase “the number of occurrences of $a$” means the number of residues $x<L$ such that $x^2\equiv a$. The OP indicates in a comment below that they rather intended it to mean the total number of residues mod $b$ that square to $a$. In the latter case, the problem is solvable in promise-ZPP: using the factorization of $b$, just compute all possible square roots of $a$ modulo $b$ by the usual algorithm (Tonelli–Shanks + Hensel’s lifting + Chinese remainder theorem). Reference: [1] Kenneth L. Manders and Leonard M. 
Adleman, NP-complete decision problems for binary quadratics, Journal of Computer and System Sciences 16 (1978), no. 2, pp. 168–184.
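The "usual algorithm" in the ZPP argument can be sketched as follows, under the simplifying assumption (mine, for brevity) that $b$ is squarefree with known distinct odd prime factors, so Hensel lifting is not needed: Tonelli–Shanks finds a root modulo each prime, and CRT recombines all sign choices.

```python
# All square roots of a modulo b = p1*...*pk (distinct odd primes),
# via Tonelli-Shanks per prime and Chinese-remainder recombination.
from itertools import product

def sqrt_mod_p(a, p):
    """One square root of a mod odd prime p, or None if a is a non-residue."""
    a %= p
    if a == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:     # Euler criterion
        return None
    if p % 4 == 3:                       # easy case: a^((p+1)/4)
        return pow(a, (p + 1) // 4, p)
    # Tonelli-Shanks for p % 4 == 1: write p - 1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1                           # find any quadratic non-residue
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                   # least i with t^(2^i) == 1
            t2 = t2 * t2 % p
            i += 1
        b2 = pow(c, 1 << (m - i - 1), p)
        m, c = i, b2 * b2 % p
        t, r = t * c % p, r * b2 % p
    return r

def all_sqrts(a, primes):
    """Sorted list of every x mod b = prod(primes) with x*x == a (mod b)."""
    b = 1
    for p in primes:
        b *= p
    roots_per_p = []
    for p in primes:
        r = sqrt_mod_p(a, p)
        if r is None:
            return []                    # non-residue mod some prime
        roots_per_p.append({r, p - r} if r else {0})
    out = []
    for choice in product(*roots_per_p):
        x = 0                            # CRT recombination
        for p, r in zip(primes, choice):
            n = b // p
            x = (x + r * n * pow(n, -1, p)) % b
        out.append(x)
    return sorted(out)

assert all_sqrts(4, [3, 5]) == [2, 7, 8, 13]   # 2, 7, 8, 13 all square to 4 mod 15
```

With at most two roots per prime, the number of roots mod $b$ is at most $2^{pct}$, and under the promise it stays polynomial, so enumerating them is feasible.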
{ "domain": "cstheory.stackexchange", "id": 4349, "tags": "cc.complexity-theory" }
Does a specific blood group enhance the Plasmodium growth?
Question: I have been maintaining Plasmodium falciparum cultures for the past 6 months. For the blood culture, we lab members usually take turns donating blood. I observed that the parasite's normal growth cycle was faster and healthier in B+ blood, and a little slower in AB+ blood. On the whole the culture was maintained well, but I really did observe fast stage transitions in B+. Is this really possible? Does a specific blood group have an impact on Plasmodium growth? Did anyone notice the same thing? Thanks. Answer: It is a well documented observation that Plasmodium (vivax and knowlesi) infection is dependent on the Duffy blood groups [1]. Individuals lacking the Duffy antigens (Fya and Fyb) have lower susceptibility to malaria. Plasmodium-expressed Duffy Binding Proteins help establish the initial contact between the merozoite and the RBCs. However, Plasmodium has evolved to break this dependence on Duffy antigens [2, 3]. For the ABO blood group system, it has been observed that blood group O is associated with lower severity of P. falciparum malaria in adults [4, 5], which possibly happens because of reduced rosetting (binding of infected RBCs with uninfected RBCs) [5]. References: Dean L. Blood Groups and Red Cell Antigens [Internet]. Bethesda (MD): National Center for Biotechnology Information (US); 2005. Chapter 9, The Duffy blood group. Available from: http://www.ncbi.nlm.nih.gov/books/NBK2271/ Ménard, Didier, et al. "Plasmodium vivax clinical malaria is commonly observed in Duffy-negative Malagasy people." Proc Nat Acad Sci, USA 107.13 (2010): 5967-5971. Mendes, Cristina, et al. "Duffy negative antigen is no longer a barrier to Plasmodium vivax–molecular evidences from the African West Coast (Angola and Equatorial Guinea)." PLoS Negl Trop Dis 5.6 (2011): e1192. Cserti, Christine M., and Walter H. Dzik. "The ABO blood group system and Plasmodium falciparum malaria." Blood 110.7 (2007): 2250-2258. Rowe, J. Alexandra, et al. 
"Blood group O protects against severe Plasmodium falciparum malaria through the mechanism of reduced rosetting." Proc Nat Acad Sci, USA 104.44 (2007): 17471-17476.
{ "domain": "biology.stackexchange", "id": 5101, "tags": "cell-culture, infection, parasitology, pathogenesis" }
About 2D graph state and branching MERA
Question: In my former post I asked whether a 2D graph state on a 2D lattice can be represented by branching MERA. I got an answer suggesting that this is true. Then I have the following deductions: (1) A 2D graph state on a 2D lattice is universal for measurement-based quantum computation, so all quantum computations can be achieved by a 2D graph state with local measurements. (2) A branching MERA state can be classically simulated, in the sense that local observables on such a state can be computed efficiently. (3) If (1) and (2) are true, then local measurements on the 2D graph state can be classically simulated. (4) From (1)(2)(3), it seems that all quantum computations can be classically simulated. This is of course not true. What's wrong with my deduction? Answer: While local measurements in MERA can be efficiently simulated, this is not true for sequences of measurements, such as required for measurement based quantum computation (where the outcome of the computation is in the correlation between the measurement results). Thus, there is no contradiction.
{ "domain": "physics.stackexchange", "id": 53067, "tags": "quantum-information, quantum-computer" }
Confusion about ket states and bra with position
Question: I am very confused about the bra-ket notation of states and the fact that $$\psi(x) = ⟨x|\psi⟩$$ and $$⟨x|x'⟩ = \delta(x-x')$$ are true. What does this mean? What is the ket $|x⟩$, is it just some kind of identity vector? And what even is a state $|\Psi⟩$? What does it look like, is it an infinite vector where $\Psi_n$ is just that wavefunction with the principal quantum number being $n$? E.g. the solution to the infinite square well is given by the wave function $$\psi_n(x) = \sqrt{\frac{2}{L}}\sin\left(\frac{n \pi}{L} x\right),$$ if I am not mistaken. Then, can one say that $$ \psi_1(x) =\sqrt{\frac{2}{L}}\sin\left(\frac{1\pi}{L} x\right) $$ $$\psi_2(x) = \sqrt{\frac{2}{L}}\sin\left(\frac{2 \pi}{L} x\right)$$ and etc.? And if so, how is that related to $⟨x|\psi⟩ = \psi(x)$? Answer: Let me do a comparison with the finite dimensional case. A vector $v$ is an abstract entity belonging to a finite dimensional Hilbert space $\mathcal{H}$. Now in order to make actual computations with it, we usually work with its components $v \to (v_1,v_2,\ldots)$. Once we choose an orthonormal basis the components are just the scalar products with the basis vectors $v_i = \langle e_i| v\rangle$. They are thus numbers because the scalar product provides a map $\mathcal{H}\times \mathcal{H} \to \mathbb{C}$. In the infinite dimensional case the label $i$ of $e_i$ can assume infinitely many values. In this particular case it is actually continuous: $$ i \to x\,,\quad e_i \to |x\rangle\,. $$ And so the wave function $\psi(x)$ is simply the "$x$th" component of $|\psi\rangle$, $$ v_i \to \psi(x)\,,\quad\langle e_i | v\rangle \to \langle x | \psi\rangle\,. $$ As I said before this requires the basis $|x\rangle$ to be orthonormal and when $x$ is a continuous parameter the condition to be imposed is with the Dirac $\delta$. Moreover we must also impose that it is complete, namely $$ \int dx\,|x\rangle \langle x| = \mathbb{1}\,, $$ which is an operator equation. 
We can use this property to compute scalar products explicitly $$ \langle \psi|\chi\rangle = \langle \psi|\mathbb{1} |\chi\rangle = \int dx\, \langle\psi|x\rangle\langle x|\chi\rangle = \int dx \,\psi^*(x)\,\chi(x)\,. $$ In the same way as we would compute $\langle v | w\rangle = \sum_i v_i^* w_i$.
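The finite-dimensional analogy can be checked numerically; a small numpy sketch (the basis and vectors are arbitrary random choices, used only to illustrate the relations):

```python
# Finite-dimensional analogue: for an orthonormal basis {e_i} (columns of
# a unitary), sum_i |e_i><e_i| is the identity and v_i = <e_i|v>.
import numpy as np

rng = np.random.default_rng(0)
# random orthonormal basis of C^4 from a QR decomposition
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# resolution of the identity: sum over the projectors |e_i><e_i|
identity = sum(np.outer(q[:, i], q[:, i].conj()) for i in range(4))
assert np.allclose(identity, np.eye(4))

v = rng.normal(size=4) + 1j * rng.normal(size=4)
comps = q.conj().T @ v                   # v_i = <e_i|v>
# v is recovered from its components, and <v|v> = sum_i |v_i|^2
assert np.allclose(q @ comps, v)
assert np.isclose(np.vdot(v, v), np.vdot(comps, comps))
```

The continuous case replaces the sum over $i$ by the integral over $x$, and the vector of components by the wave function $\psi(x)$.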
{ "domain": "physics.stackexchange", "id": 55134, "tags": "quantum-mechanics, hilbert-space, wavefunction, notation, normalization" }
problems when running catkin_create_pkg
Question: Hello, I am currently trying to install ROS, and getting to know the environment by following the well written online tutorials. First I did a complete new ROS install on my Ubuntu 12.04, by following the install instructions here: wiki.ros.org/hydro/Installation/Ubuntu After this I dug right into the tutorials wiki.ros.org/ROS/Tutorials But on the 3rd tutorial, wiki.ros.org/ROS/Tutorials/CreatingPackage, I am currently stuck, since I get an error when running catkin_create_pkg. It output this to my shell: File "/usr/local/bin/catkin_create_pkg", line 4, in <module> import pkg_resources File "/usr/local/lib/python2.7/dist-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 2819, in <module> parse_requirements(__requires__), Environment() File "/usr/local/lib/python2.7/dist-packages/distribute-0.6.28-py2.7.egg/pkg_resources.py", line 588, in resolve raise DistributionNotFound(req) I have already tried different things, like reinstalling the whole ROS, trying to reinstall the python-catkin-pkg, and a bunch of other things. Tried to follow the tutorials from scratch several times, to ensure I wasn't missing something. Read some forums, and can already give some details about the installation: python -c 'import catkin_pkg; print(catkin_pkg.__file__)' gives /usr/share/pyshared/catkin_pkg/__init__.py Anyone got any clue of how I can fix this error, or maybe where to look? Any help would be greatly appreciated. Originally posted by jesperhn on ROS Answers with karma: 16 on 2014-04-04 Post score: 0 Original comments Comment by dornhege on 2014-04-04: The /usr/local/... path is somewhat suspicious. For me catkin resides in /usr/bin. Do you have an older manual catkin install from somewhere? Comment by jesperhn on 2014-04-07: Had ROS groovy installed last year on this machine, maybe that could cause it? Answer: Fixed it! Somehow an "EASY-INSTALL-SCRIPT" had put a catkin_create_pkg under /usr/local/bin which was messing with the actual catkin_create_pkg script. 
Removed the wrong script (which btw had no actual content), and it worked perfect afterwards! Originally posted by jesperhn with karma: 16 on 2014-04-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 17535, "tags": "ros, catkin-create-pkg, begginer-tutorials" }
Charged particle under a uniform electric field
Question: Suppose a charge particle $q$ starts to move without initial velocity under the influence of a uniform electric field $E$ pointing in the positive $x$ direction. Express its position vector in terms of proper time $\tau$. According to wiki:http://en.wikipedia.org/wiki/Lorentz_force#Relativistic_form_of_the_Lorentz_force, The Lorentz Force is given by $\frac {dp^{\alpha}}{d\tau}=qU_{\beta}F^{\alpha\beta}$. In this case $F^{\alpha\beta}$ reduces to $$\begin{bmatrix} 0 & \frac{E}{c} \\ -\frac{E}{c} & 0\end{bmatrix}.$$ Let $U_{\beta}=(u_0,u_1)$, then $p^{\alpha}=m_0(u_0,u_1)$, so we have $$\left(\begin{array}{cccc}m_0\dot u_0&\\m_0\dot u_1\end{array}\right)=q\left(\begin{array}{cccc}0&-\frac Ec&\\\frac Ec&0\end{array}\right)\times\left(\begin{array}{cccc} u_0&\\u_1\end{array}\right).$$ Since $$m_0\dot u_1=q\frac Ecu_0,$$ $$u_1=\frac{-m_0\dot u_0c}{qE},$$ we get $$m_0(\frac{-m_0\ddot u_0c}{qE})=\frac{qE}cu_0,$$ $$\ddot u_0+\frac{q^2E^2}{m_0^2c^2}u_0=0.$$ The characteristic polynomial is $$r^2+\frac{q^2E^2}{m_0^2c^2}=0.$$ Obviously the determinant is negative and $u_0$ is a trigonometric function of the proper time $\tau$. It can also be deduced that $u_1$ is the same kind of function. But this is clearly not the case. I hope someone can tell me where I did it wrong. Answer: As you know the answer should be a hyper trigonometric function instead of a trigonometric one. Your mistake is with lowering/raising of vector components $$ p^\alpha = m_0 \left( u^0, u^1 \right) = m_0 \left( \eta^{00}u_0, \eta^{11}u_1 \right) = \pm \left( - u_0, u_1\right) $$ Where the $\pm$ comes from your metric convention. This will lead to $$ r^2 + \frac{q^2 E^2}{m_0^2 c^2} \longrightarrow r^2 - \frac{q^2 E^2}{m_0^2 c^2} $$ and you get the expected solution
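For completeness, a sketch of where the corrected sign leads (assumptions mine: writing $a \equiv qE/m_0$ and taking the particle at rest at $\tau = 0$). With the index raised correctly the equations become $\dot u^0 = (a/c)\,u^1$, $\dot u^1 = (a/c)\,u^0$, whose solutions are hyperbolic rather than trigonometric:

```latex
u^0(\tau) = c\cosh\frac{a\tau}{c}\,,\qquad
u^1(\tau) = c\sinh\frac{a\tau}{c}\,,\qquad
x(\tau) = \frac{c^2}{a}\left(\cosh\frac{a\tau}{c}-1\right)\,,\qquad
ct(\tau) = \frac{c^2}{a}\sinh\frac{a\tau}{c}\,.
```

This is the standard hyperbolic motion of a uniformly accelerated relativistic particle.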
{ "domain": "physics.stackexchange", "id": 21032, "tags": "homework-and-exercises, electromagnetism, special-relativity, classical-electrodynamics" }
The quantum-mechanical description of an electron motion in a magnetic monopole field
Question: The quantum-mechanical problem of an electron's motion in the electric field of the nucleus is well known. The quantum-mechanical description of electron motion in a magnetic field is also not difficult, since it only requires solving the Schrödinger equation of the form: $$\frac{(\hat p + eA)^2} {2m} \psi = E \psi $$ But if we want to consider the motion of an electron in the field of a magnetic monopole, a difficulty arises because of the definition of the vector potential over the whole space. See, for example. Has this problem been solved? What interesting consequences derive from this task? (for energy levels, angular momentum, etc.) Answer: The classical version of this problem was solved by Henri Poincaré way back in 1896. This is also problem 5.43 in Electrodynamics by Griffiths. The classical trajectories are geodesics on the surface of a cone. A recent treatment of the classical version of this problem is here. The quantum mechanical version was also solved long ago, by Igor Tamm in 1931. This is discussed in section 2.3 of the book Magnetic monopoles by Y M Shnir, who follows the treatment in Charge quantization and nonintegrable Lie algebras by Hurst. The quantum mechanical version of the problem turns out to be separable in spherical polar coordinates. The angular part has the generalized spherical harmonics as its eigenfunctions, while the radial solution is the same as the radial wave function of the standard Schrödinger equation. The centrifugal potential in the Schrödinger equation turns out to be always repulsive, which implies that there are no bound states for this system of an electron in a magnetic monopole field. However a dyon field does have bound state solutions.
{ "domain": "physics.stackexchange", "id": 3445, "tags": "quantum-mechanics, magnetic-monopoles" }
Dynamic programming: Maximize total value
Question: I was trying to solve this problem using dynamic programming. We have $n$ objects in a row where each object has a value represented with a positive number. This is encoded with an array $V[1], . . . , V[n]$ where $V [1]$ is the value of the first object in the row, $V [2]$ is the value of the second object in the row and so on. We want to select a set $I \subseteq \{1, . . . , n\}$ of objects in such a way that the sum of its values, $\sum\limits_{i\in I} V [i]$, is as large as possible. However, we cannot select objects that occupy consecutive positions in the row. That is, if we select object $i$, then we cannot select object $i − 1$ nor object $i + 1$. For example, if $V = (6, 4, 3, 7, 3)$ then the best option would be to select the first object (with value $6$) and the fourth object (with value $7$). It is easy to see that any other option would have smaller total value. We want a dynamic programming algorithm that, given as input array $V$ , computes the value of the set with highest value. Note that, in order to simplify matters, we do not ask that the algorithm returns the set, only its total value. The problem comes when it says "we cannot select object $i − 1$ nor object $ i + 1$". For $i-1$, the recursive case should be Optimal$(j-2)$+Value$(j)$, but I have no idea about $i+1$. Answer: For $k \in [\![1, n]\!]$, you can define $f(k)$ as the maximum value reachable using objects $1$, $2$, …, $k$, without two consecutive objects. Now the key to the dynamic programming implementation is to see that: $\forall k \in [\![3, n]\!], f(k) = \max(f(k-1), f(k-2) + V[k])$ The reason is that to compute $f(k)$, you can either keep the $k$-th object or not. If you keep it, then you cannot keep the $k-1$-th object. Finally, you want to compute $f(n)$.
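The recurrence translates directly into a few lines of Python (0-based indexing, keeping only $f(k-1)$ and $f(k-2)$ for O(1) space):

```python
# f(k) = max(f(k-1), f(k-2) + V[k]); prev1 and prev2 hold the last two values.
def max_nonadjacent_sum(V):
    prev2, prev1 = 0, 0          # f(k-2), f(k-1), with f(0) = 0
    for v in V:
        prev2, prev1 = prev1, max(prev1, prev2 + v)
    return prev1

print(max_nonadjacent_sum([6, 4, 3, 7, 3]))  # 13: the objects with values 6 and 7
```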
{ "domain": "cs.stackexchange", "id": 18120, "tags": "dynamic-programming" }
How can you start gazebo with a non-0 simulated clock time?
Question: Similar to this question: https://answers.ros.org/question/310797/start-gazebo-simulation-with-non-zero-clock-time/ When gazebo starts up it starts its clock at 0, which can cause problems with code that does math on times, i.e. subtracting 10 seconds from a timestamp that is less than 10 epoch seconds will cause an error. Does gazebo support any mechanism to change the initial clock stamp? Answer: Generally speaking, best practice would be to set the use_sim_time parameter in any ROS nodes that rely on timing in the simulation. Making sure everything is sharing the same clock solves a lot of problems people encounter, so just mentioning that in case it is relevant to you. If you are dependent on explicit time deltas, I'd encourage value checking on the timestamp or some other error handling before trying to subtract some value from it. All that said, you can start a Gazebo simulation with a given time if you're willing to edit and use a custom .world file. The sdf spec for a <world>'s <state> supports a <sim_time> element (see here). The easiest way, in my opinion, to generate a valid world file with <state> filled out would be to start your world (without any spawned robots), let gazebo run a few seconds, then save your world. You can then go and tweak the resultant world file to have the desired start time. Then, you can simply load that modified world file when starting Gazebo in the future. I am not aware of a simpler way to do this (e.g. through the Gazebo ROS launch file), but that's not to say that a better method doesn't exist.
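A minimal sketch of what the edited world file might look like (element names per the SDF spec; the surrounding world contents stand in for whatever Gazebo saved, and `1000 0` is a placeholder "seconds nanoseconds" pair):

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="default">
    <!-- ...models, physics, lights as saved by Gazebo... -->
    <state world_name="default">
      <!-- start the simulated clock at 1000 s instead of 0 -->
      <sim_time>1000 0</sim_time>
    </state>
  </world>
</sdf>
```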
{ "domain": "robotics.stackexchange", "id": 38638, "tags": "ros, gazebo" }
The atomic mass of an isotope from atomic weight
Question: I'm reading for an entrance exam and have a practice question about the atomic mass of an isotope that I have to figure out. I am given the atomic weight of the element. How can I calculate the atomic mass of $^{138} \text{Ce}$? I know the atomic weight, which is $140.12$. I do not know the abundance of the isotope. I tried searching but most information is about calculating the atomic weight. Thank you. Answer: igael wrote the answer as a comment, so I will quote him to mark my question as solved: "precise masses of neutrons and protons and the Mass defect explain the diff. Try this page"
{ "domain": "physics.stackexchange", "id": 27546, "tags": "atoms" }
Re-implementing an Array Class in Python
Question: I gave an attempt at reinventing the wheel and recreating list methods using my own Array class to broaden my understanding of lists. I would like advice regarding efficiency, and on implementing negative indexes, since my code only works for positive ones.

class Array:
    def __init__(self, List=[]):
        self.array = List

    def display(self):
        print(self.array)

    def len(self):
        array = self.array
        count = 0
        for _ in array:
            count += 1
        return count

    def append(self, value):
        array = self.array
        length = self.len()
        results = [None] * (length + 1)
        for i in range(length):
            results[i] = array[i]
        results[length] = value
        self.array = results

    def search(self, value):
        array = self.array
        pos = -1
        for index in range(self.len()):
            if array[index] == value:
                pos = index
        return pos

    def insert(self, index, value):
        array = self.array
        length = self.len()
        results = [None] * (length + 1)
        if index > length:
            raise IndexError
        elif index == length:
            self.append(value)
            results = self.array
        else:
            for i in range(length):
                if i == index:
                    for j in range(index + 1, length + 1):
                        results[j] = array[j - 1]
                    results[index] = value
                    break
                else:
                    results[i] = array[i]
        self.array = results

    def delete(self, value):
        array = self.array
        length = self.len()
        results = [None] * (length - 1)
        pos = self.search(value)
        if pos == -1:
            raise ValueError("Element Not Found")
        else:
            for i in range(length):
                if i != pos:
                    results[i] = array[i]
                else:
                    for j in range(pos + 1, length):
                        results[j - 1] = array[j]
                    break
        self.array = results

    def pop(self):
        array = self.array
        length = self.len()
        results = [None] * (length - 1)
        if length == 0:
            raise IndexError
        value = array[-1]
        for i in range(length - 1):
            results[i] = array[i]
        self.array = results
        return value

Answer: Seems like cheating to re-implement a list using a list... I feel like the real challenge would be doing so without a list of any sort, say creating a linked list or a tree or something like that. 
That might be pedantic, but it'd clarify what your limitations are and thus what efficient solutions are available versus what you're making needlessly hard for yourself because it's fun. Most of your functions are going to be very expensive because you're constantly copying memory around. A more common approach is to over-allocate space and then keep two values, "length" and "_allocated". The first is how many valid elements your array contains; the second is how much space you've reserved to store values. When you want to append X to your array, you can just assign X to self.array[self.length] and then increment self.length. If that would take you beyond what you've allocated, only then you do the expensive act of allocating a new chunk of memory (in your case, the [None] * (length + 1) line) and copying the data. To minimize the number of copies, it's common practice to double the length of the array each time you resize it (maybe up to some maximum, at which point you add on new 4K element blocks or whatever rather than multiplying, to prevent running out of memory prematurely). When inserting, then, you'd just need to shift the later values rather than copy everything; similarly, when popping values off, just decrement self.length to mark that last element as unimportant and available to be overwritten. To complement that approach, if you want to optimize for mid-array insertions and deletions, you can maintain a parallel bit-array of which values are valid and which aren't. Iterating through the array becomes trickier (you need to zip the values with the validity flags and only return the valid ones), but deleting an item becomes cheaper since you only need to find the element and mark it as deleted (no shifting required), and inserting an item only requires shifting things until you find an unused element that can be overwritten instead of shifted. Caching the length of the array is a good idea in general (though not necessary if you don't over-allocate). 
It requires extra code and care to keep synchronized with the rest of the array, but checking a list's length is a very common thing for users to want to do, and the O(N) computation each time can get painful. As a general Python'ism, I'd say use def __len__ instead of def len and implement __getitem__, as well as any other magic methods that generally correspond to an array. For reference see this documentation page, and consider that most people think of an array as some combination of a "Collection" and "[Mutable]Sequence"
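The over-allocation scheme described above might look like this in Python (a sketch, not a full replacement for the reviewed class; names like `_allocated` follow the answer's suggestion):

```python
# Dynamic array with a doubling backing store: append is amortized O(1)
# because copying only happens when length reaches the allocation.
class DynArray:
    def __init__(self):
        self.length = 0              # number of valid elements
        self._allocated = 4          # reserved slots
        self._store = [None] * self._allocated

    def __len__(self):
        return self.length           # cached length, O(1)

    def append(self, value):
        if self.length == self._allocated:
            # the only expensive step: double the reservation and copy
            self._allocated *= 2
            new_store = [None] * self._allocated
            for i in range(self.length):
                new_store[i] = self._store[i]
            self._store = new_store
        self._store[self.length] = value
        self.length += 1

    def pop(self):
        if self.length == 0:
            raise IndexError("pop from empty array")
        self.length -= 1             # just mark the last slot as unused
        return self._store[self.length]

    def __getitem__(self, i):
        if not 0 <= i < self.length:
            raise IndexError(i)
        return self._store[i]
```

Note how `pop` never copies anything: decrementing `length` is enough, exactly as the answer suggests.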
{ "domain": "codereview.stackexchange", "id": 38115, "tags": "python, python-3.x, object-oriented, array, reinventing-the-wheel" }
How would the sky look if Earth orbited a red giant at a safe distance?
Question: Let's say that instead of the sun, we have a red giant, but are orbiting it at a safe distance, within the goldilocks zone. Would the sky actually look more red? Or would it be closer to white/transparent due to a shortage of blue light for Rayleigh scattering? Answer: Rayleigh scattering happens at all wavelengths, but the scattering cross section goes as $\lambda^{-4}$. On Earth, the atmospheric optical depth to Rayleigh scattering is very small at red wavelengths, so hardly any red light is scattered, even at sunset when the Sun is viewed through a thick atmospheric layer. On the contrary, there is sufficient optical depth to scatter some blue light, even if it arrives from the Sun at zenith. Some numbers are that the optical depth at zenith, from sea level, is about 0.36 at 400 nm (blue) and ten times smaller at 700 nm (Bucholtz 1995). However, the spectrum of light that is being scattered is very different in the case of a red giant. The solar spectrum peaks at about 500 nm and is about a factor of two less intense at both 400 nm and 700 nm. A red giant has a spectrum that peaks at around 900 nm (in the infrared), and the flux is about 100 times lower at 400 nm and two times lower at 700 nm (which is why they are called red giants). If Rayleigh scattering was all that was going on, and the total flux incident at the top of the atmosphere was the same, then the scattered spectrum from the red giant illumination would be quite different. The overall amount of scattered red light would be about the same as in the solar case, but the amount of scattered blue light would be reduced by about a factor of 50. The net effect would be that the sky was much darker, and rather than being dominated by blue light, would actually have a redder spectrum (what colour this would be perceived as, I'm not sure). But Rayleigh scattering isn't the only thing going on. 
The optical depth to scattering can be dominated by particulates in the atmosphere at wavelengths above 600 nm. This scattering is much less wavelength dependent, depends on the size distribution of the particles and is much stronger for small scattering angles. I think that this would enhance the relative redness of the scattered light a bit more, but given that the incoming flux at 700 nm is similar to that of the Sun, it wouldn't increase the sky brightness. In summary, I think the sky would be much darker (factor of 50) and would have a much redder spectrum.
{ "domain": "astronomy.stackexchange", "id": 5077, "tags": "red-giant" }
Can we find momentum in finite square well potential?
Question: Can we find momentum in a finite square well potential? If so, how can we find it? Is the eigenfunction of the momentum operator the same as the eigenfunction of the Hamiltonian operator? Answer: If the potential well is finite, then the Hilbert space is the same as for the free particle: $L^2(\mathbb{R})$. The momentum operator is the usual one. Its improper eigenvectors are not eigenvectors of the Hamiltonian.
{ "domain": "physics.stackexchange", "id": 96917, "tags": "quantum-mechanics, operators, momentum, schroedinger-equation, observables" }
Does the centrifugal force on a rotating object act on that same object?
Question: When rotating an object on a string, a centripetal force from the string acts on the object towards the center, and by Newton's 3rd law an opposite force acts on the string from the object. Then why is it said that the centrifugal force acts on the rotating object rather than saying it acts on the string? Answer: You need to show where you saw this so we know the context. Without it, I could guess that the mention of "centrifugal force" you saw may be referring to the analysis of the motion in a noninertial frame. In this case a "fictitious force" is introduced, in the direction opposite to the acceleration (centripetal in this case). This type of "centrifugal force" is not the third-law pair of anything, as it is not due to the interaction between two objects. It acts on all objects when analysed in a rotating frame. The force on the string is a real force, due to the interaction between the object and the string. And indeed it has a centrifugal direction. However, it is not very common to see it described as the centrifugal force.
{ "domain": "physics.stackexchange", "id": 83684, "tags": "forces, rotational-dynamics, rotational-kinematics, centripetal-force, centrifugal-force" }
Big O of dynamic array
Question: Skiena's Algorithm Design Manual, 3rd Ed p.71 gives the time complexity of a dynamic array according to the number of movements, $M$, as: $$ M = n + \sum_{i=1}^{lg(n)} 2^{i-1} = 1 +2+ 4+\ldots+\frac{n}{2} + n \stackrel{?}{=} \sum_{i=0}^{lg(n)} \frac{n}{2^i} $$ For the life of me, I can't figure out how he's getting from the left hand summation to the right hand one. It's crucial that it does, because his next step is to say the right hand side is less than the geometric series as $i\rightarrow\infty$ which converges to $2n$. There is a known errata, (*) Page 71, formula on line 10: the lower index of the second summation $i=i$ should be $i=0$. I have written the right hand side with the correction. The only possible operations I see are rewriting the exponent... $$ \sum_{i=1}^{lg(n)} 2^{i-1} = \sum_{i=1}^{lg(n)} 2^i\cdot2^{-1}= \sum_{i=1}^{lg(n)} \frac{2^i}{2} = \frac{1}{2} \sum_{i=1}^{lg(n)} 2^i $$ or adjusting the index... $$ \sum_{i=1}^{lg(n)} 2^{i-1} = \sum_{i=0}^{lg(n)-1} 2^{i} $$ Neither of which help. When I expand the right hand side, I get $$ \frac{n}{1} + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + \ldots $$ I don't see any way to rearrange or finagle that so it equals $1 +2+ 4+\ldots+\frac{n}{2} + n$. Answer: If we assume that $n$ is a power of two (I'm assuming the author assumes this), first we can rewrite $n$ as a power of two, and shift the index over by one. Then we can combine the two into one sum: $$n + \sum_{i=1}^{\lg(n)} 2^{i-1} = 2^{\lg(n)} + \sum_{i=0}^{\lg(n)-1} 2^{i} = \sum_{i=0}^{\lg(n)} 2^{i}$$ For step two, take a look at this sum reversal identity: $$\sum_{i=0}^k f(i) = \sum_{i=0}^{k} f(k - i)$$ So we can reverse our sum by substituting $i$ with $\lg(n) - i$: $$\sum_{i=0}^{\lg(n)} 2^{i} = \sum_{i=0}^{\lg(n)} 2^{\lg (n) - i} = \sum_{i=0}^{\lg(n)} \frac{2^{\lg(n)}}{2^i} = \sum_{i=0}^{\lg(n)} \frac{n}{2^i}$$
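The whole manipulation is easy to check numerically for powers of two (a throwaway sketch; `lhs`/`rhs` are ad-hoc names):

```python
# Verify n + sum_{i=1}^{lg n} 2^(i-1) == sum_{i=0}^{lg n} n/2^i < 2n
# for n a power of two.
def lhs(n):
    lg = n.bit_length() - 1          # lg(n) when n is a power of two
    return n + sum(2 ** (i - 1) for i in range(1, lg + 1))

def rhs(n):
    lg = n.bit_length() - 1
    return sum(n // 2 ** i for i in range(lg + 1))

for n in (1, 2, 8, 1024):
    assert lhs(n) == rhs(n) < 2 * n
```

For $n = 8$, both sides give $8 + (1 + 2 + 4) = 8 + 4 + 2 + 1 = 15 < 16$, matching the sum-reversal argument.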
{ "domain": "cs.stackexchange", "id": 20228, "tags": "summation, dynamic-array" }
How much mass is typically ejected from a supernova?
Question: How much mass is released from a supernova of a 15 solar-mass star? 20? 25? What is the relation between star mass and mass ejected? Answer: I like to explain this using a figure from a talk by Marco Limongi some years ago. Based on a given set of models, the $x$-axis shows the initial mass of the models and the $y$-axis the final mass. The different coloured layers show the composition of the star at the moment of collapse. The mass ejected in the supernova is the difference between the curve marked remnant mass, which specifies (for these models) how much matter became part of the remnant, and the final mass, which was the mass of the star at collapse, after it had already lost a lot during its life. The interesting point in this prediction is the change between the supernovae that leave neutron stars versus those that leave black holes. At the boundary, there's a large drop in the supernova-ejecta mass, because the black hole doesn't have a surface off of which inward falling material can bounce. But, though the broad trends are probably right, note that this is the result for a particular set of model assumptions (e.g. mass loss on the main sequence, supernova energy and dynamics). The amount of ejecta for the supernova of a given progenitor is an open question, and still subject to intense research.
{ "domain": "physics.stackexchange", "id": 20626, "tags": "mass, astrophysics, stars, supernova, stellar-evolution" }
Problem with vibrations in air bearing
Question: We have an air bearing for a planar xy motion. Today it consists of four pockets according to the picture. In the current design there are no seals around the peripheries of the pockets, and we suspect that is the reason we get vibrations. In the current design we control the pressure, the same for all four recesses. The flow is adjustable individually for each recess. In practice it is very hard to tune. For the non-recess surfaces we have used Slydway, as we need to be able to operate it without pressure occasionally. To try to solve the problem we plan to develop a prototype where we can try out the effect of adding seals around the periphery of the pockets. The idea is something like this: Questions Is the idea of adding seals sound? (sanity check) Suggestions for seal materials? (I'm thinking a porous material like felt or cigarette filter) Of course all suggestions are welcome. Edit I'm going to try adding grooves around the recesses to evacuate the air that leaks. My thinking is that this will give us a more defined area under pressure. Answer: None of the commercial air bearings I've seen have attempted to seal like this, so I think that your problems with vibration may lie elsewhere. The problems I have seen with air-bearing systems have been related to mechanical over-constraint. A high-performance linear-bearing stage I once worked on used six bearings like these: Arranged in 3 pairs like this: /\em_ where /, \ and _ each denote a pair of bearings (side by side along the track), e was the encoder strip and m was a linear motor. The problem that we had was that no matter how we tuned the servo loop, if it was tightly tuned enough to get up to the speed and acceleration we needed (3 m/s & 2 g), the system would often get into a limit cycle (sometimes erroneously called resonance) when stopping (i.e. it would sit humming).
The way we solved this was to remove one of the air bearings on the / row, relocating it to the middle: [ASCII diagram lost in extraction: front and side views of the unstable six-bearing layout versus the stable five-bearing layout] By removing the excess constraint, we appeared to remove the tendency for a perturbation in one air bearing to affect another bearing and then ripple through to the other bearings. Depending on your typical direction of motion and the flatness of your surface, you may find that a three-point bearing system works better for your system: You may also want to consider purchasing commercial air-bearing modules and attaching them to a frame (at least for a prototype) rather than attempting to manufacture your own air bearings. That way you can leverage the knowledge and support provided by the air-bearing manufacturer. One other point is that we used ordinary polyimide tape on our air bearings. We considered various more permanent methods, but in the end decided that being able to easily remove old, scarred or scored tape and replace it with fresh tape quickly and easily made the most sense for our application.
{ "domain": "robotics.stackexchange", "id": 151, "tags": "linear-bearing" }
Relaxation of the Boltzmann transport equation
Question: My professor in kinetic gas theory said that when considering the Boltzmann Transport Equation (BTE) $$ \partial_tf + \frac{\vec{p}}{m}\cdot\nabla_{\vec{q}}f + \vec{F}\cdot\nabla_{\vec{p}}f = (\partial_tf)_{Coll} $$ over long periods of time, the system tends to relax, which makes the distribution $f$ homogeneous ($\nabla_{\vec{q}}f = 0$) and time independent ($\partial_tf = 0$). This means that the system tends to return to equilibrium, which makes sense to me. However, my prof. said that the momentum term does not relax, i.e. $\nabla_{\vec{p}}f \neq 0$ even if the system is in equilibrium. Why is that so? I would have thought that for a system in equilibrium the particles should have similar velocities, and thus similar momenta, to have a homogeneous distribution of energy. Moreover, if we're only considering particle collisions as the interaction term, the momentum should remain constant. Answer: Having $f$ independent of $t$ means that the distribution is similar at different times; having $f$ independent of $\vec{q}$ means that particles at different positions have similar distributions. Similarly, having $f$ independent of $\vec{p}$ would mean that particles with different momenta are distributed similarly. But we know that this is not the case at equilibrium. Higher energy states are less probable, since the occupation probability scales as $\exp(-E/k_B T)$. Also, the density of states is momentum dependent, meaning that different $\vec{p}$ have more or fewer states available to populate. Having $\nabla_\vec{p} f = 0$ would mean that all these features are irrelevant, and that a particle is equally likely to take any value of $\vec{p}$. This is not the case in equilibrium.
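Concretely, the equilibrium solution of the BTE is the Maxwell–Boltzmann distribution (standard result, stated here for illustration):

```latex
f_{\mathrm{eq}}(\vec{p}) \;=\; \frac{n}{(2\pi m k_B T)^{3/2}}\,
\exp\!\left(-\frac{p^2}{2 m k_B T}\right),
\qquad
\nabla_{\vec{p}} f_{\mathrm{eq}} \;=\; -\frac{\vec{p}}{m k_B T}\, f_{\mathrm{eq}} ,
```

which is manifestly nonzero for every $\vec{p} \neq 0$: the momentum gradient of $f$ survives at equilibrium even though the space and time derivatives vanish.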
{ "domain": "physics.stackexchange", "id": 93202, "tags": "thermodynamics, fluid-dynamics, statistical-mechanics, kinetic-theory, boltzmann-equation" }
Creating and playing animations for a game using LibGDX
Question: It feels like there's quite a lot of code involved in order to manually build up an animation using the libGDX framework. In my specific case, I am creating a number of animations for a portrait view of a character. The character will do things like talk, blink, and laugh. There are a handful of different characters to worry about. I would like to get some feedback on my approach. I'm hoping to simplify things as much as I can, but this is the best that I have come up with so far. First, a texture atlas is created from a file. Then, the types from an enum are used to create a map of the types to the frames. I've removed all but one of the types just for brevity, but there is one for every single frame of animation. PortraitType.java public enum PortraitType { GOBLIN_TALK01("goblinTalkRight01", 106), GOBLIN_TALK02("goblinTalkRight02", 107), GOBLIN_TALK03("goblinTalkRight03", 108), GOBLIN_TALK04("goblinTalkRight04", 109), GOBLIN_TALK05("goblinTalkRight05", 110), GOBLIN_TALK06("goblinTalkRight06", 111), GOBLIN_TALK07("goblinTalkRight07", 112), GOBLIN_TALK08("goblinTalkRight08", 113), GOBLIN_TALK09("goblinTalkRight09", 114), GOBLIN_TALK10("goblinTalkRight10", 115), GOBLIN_TALK11("goblinTalkRight11", 116); public final String fileName; public final int id; private PortraitType(String fileName, int id) { this.fileName = fileName; this.id = id; } } LibGDXGame.java private Map<PortraitType, TextureRegion> loadPortraitTextures() { Map<PortraitType, TextureRegion> textures = new HashMap<PortraitType, TextureRegion>(); TextureAtlas atlas = new TextureAtlas("rampartedPortraits01.atlas"); for (PortraitType type : PortraitType.values()) { AtlasRegion region = atlas.findRegion(type.fileName); TextureRegion textureRegion = region; textures.put(type, textureRegion); } return textures; } After the map of all the frames is created, another map is created for each character that maps the type of animation to the animation itself. 
PortraitAnimationType.java public enum PortraitAnimationType { NONE, BLINK, TALK, TALK_BLINK, LAUGH, DEFEAT; } LibGDXGame.java private Map<PortraitAnimationType, Animation> loadGoblinAnimations() { Map<PortraitAnimationType, Animation> animations = new HashMap<PortraitAnimationType, Animation>(); animations.put(PortraitAnimationType.NONE, AnimationLoader.goblinNoneAnimation(this.portraitTextures)); animations.put(PortraitAnimationType.TALK, AnimationLoader.goblinTalkAnimation(this.portraitTextures)); animations.put(PortraitAnimationType.BLINK, AnimationLoader.goblinBlinkAnimation(this.portraitTextures)); animations.put(PortraitAnimationType.LAUGH, AnimationLoader.goblinLaughAnimation(this.portraitTextures)); animations.put(PortraitAnimationType.TALK_BLINK, AnimationLoader.goblinTalkBlinkAnimation(this.portraitTextures)); animations.put(PortraitAnimationType.DEFEAT, AnimationLoader.goblinDefeatAnimation(this.portraitTextures)); return animations; } There is a separate class that is only used for creating the actual animations. I realize that it is possible to create animations automatically using an atlas, however there are some limitations to that approach. By doing it manually like this, I can influence exactly what frames will be played, and can do things like add a pause to the animation by repeating a frame. 
AnimationLoader.java public static Animation goblinTalkAnimation(Map<PortraitType, TextureRegion> textures) { TextureRegion[] regions = new TextureRegion[11]; regions[0] = textures.get(PortraitType.GOBLIN_TALK01); regions[1] = textures.get(PortraitType.GOBLIN_TALK02); regions[2] = textures.get(PortraitType.GOBLIN_TALK03); regions[3] = textures.get(PortraitType.GOBLIN_TALK04); regions[4] = textures.get(PortraitType.GOBLIN_TALK05); regions[5] = textures.get(PortraitType.GOBLIN_TALK06); regions[6] = textures.get(PortraitType.GOBLIN_TALK07); regions[7] = textures.get(PortraitType.GOBLIN_TALK08); regions[8] = textures.get(PortraitType.GOBLIN_TALK09); regions[9] = textures.get(PortraitType.GOBLIN_TALK10); regions[10] = textures.get(PortraitType.GOBLIN_TALK11); return new Animation(1/8f, regions); } public static Animation goblinNoneAnimation(Map<PortraitType, TextureRegion> textures) { TextureRegion[] noneRegions = new TextureRegion[6]; noneRegions[0] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[1] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[2] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[3] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[4] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[5] = textures.get(PortraitType.GOBLIN_TALK01); return new Animation(1/6f, noneRegions); } Finally, I've written a somewhat simple class that extends Image that handles playing the animations. As long as this image is part of a stage, it will automatically play the animations without needing to be manually updated each frame. I also programmatically handle whether the face of the character is facing left or right. 
AnimatedPortrait.java public class AnimatedPortrait extends Image { private float stateTime = 0; private PortraitAnimationType currentAnimation; private final Map<PortraitAnimationType, Animation> animations; private boolean paused = false; private boolean isRightSide; public AnimatedPortrait(Map<PortraitAnimationType, Animation> animations, boolean isRightSide) { super(animations.get(PortraitAnimationType.NONE).getKeyFrame(0)); this.animations = animations; this.isRightSide = isRightSide; this.currentAnimation = PortraitAnimationType.NONE; } public PortraitAnimationType getAnimation() { return this.currentAnimation; } public void setAnimation(PortraitAnimationType type) { this.stateTime = 0; this.currentAnimation = type; } private void cycleAnimations(float delta) { if (this.animations.get(currentAnimation).isAnimationFinished(this.stateTime)) { int current = currentAnimation.ordinal(); current += 1; if (current >= PortraitAnimationType.values().length) { current = 0; } this.stateTime = 0; this.currentAnimation = PortraitAnimationType.values()[current]; } } @Override public void act(float delta) { super.act(delta); if (this.paused) { return; } TextureRegion region = this.animations.get(this.currentAnimation).getKeyFrame(this.stateTime += delta, true); if (this.isRightSide && !region.isFlipX()) { region.flip(true, false); } ((TextureRegionDrawable)getDrawable()).setRegion(region); this.cycleAnimations(delta); } } Here is a sample gif of the animation cycle for two characters: And you can play an early demo of the game here: Play Castleparts Demo Answer: First of all, it's a pleasure watching your games evolve, keep it up! Data management Most of the code is about data. Creating all your objects with their behaviors programmatically is tedious, not practical. It's not easy to see all the data, as you have to jump between multiple classes to piece everything together. Probably it didn't seem that way in the beginning, but now it definitely is. 
I suggest reworking how the characters are built up, using a data-driven approach. As the first step, create factory and repository interfaces that will be in charge of materializing all the characters with their behaviors. The initial implementation can be the current code, transformed appropriately, still creating the characters fully programmatically. As the second step, create an alternative implementation, creating the characters from flat files, for example CSV or XML; it doesn't matter much. At some point later you might want to ditch that too, for example using a database backend or REST service backend. It doesn't matter, because thanks to the factory and repository interfaces, you will be able to replace the implementation without affecting the rest of the program. Creating arrays This is a very fragile way to populate an array: TextureRegion[] noneRegions = new TextureRegion[6]; noneRegions[0] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[1] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[2] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[3] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[4] = textures.get(PortraitType.GOBLIN_TALK01); noneRegions[5] = textures.get(PortraitType.GOBLIN_TALK01); return new Animation(1/6f, noneRegions); It's fragile, because the array size must match the assigned elements, and the indexes must be correct, unique and complete. There are many possible points of human error. Better to write like this: return new TextureRegion[] { textures.get(PortraitType.GOBLIN_TALK01), textures.get(PortraitType.GOBLIN_TALK01), // .. }; Unused variables The cycleAnimations method takes a delta parameter that is never used. The paused field is never set, so it can never become true. Naming The currentAnimation field is of type PortraitAnimationType, which is confusing considering there is also an Animation type, which is a close collaborator.
Other methods that work with PortraitAnimationType also have just Animation in their names, further aggravating the confusion. It would be better if method names were more consistent with the types of objects they work with.
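The data-driven first step could start as small as one flat-file record per animation. The following is a hypothetical sketch of such a record format; the field layout and class name are illustrative choices, not something prescribed by the review:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: one CSV record per animation, e.g.
//   "GOBLIN;TALK;0.125;goblinTalkRight01,goblinTalkRight02,goblinTalkRight03"
// A repository implementation would read these records and build Animation
// objects from them, replacing the hand-written AnimationLoader methods.
public class AnimationRecord {
    public final String character;
    public final String type;
    public final float frameDuration;
    public final List<String> frameNames;

    private AnimationRecord(String character, String type,
                            float frameDuration, List<String> frameNames) {
        this.character = character;
        this.type = type;
        this.frameDuration = frameDuration;
        this.frameNames = frameNames;
    }

    // Parse a single "character;type;frameDuration;frame1,frame2,..." record.
    public static AnimationRecord parse(String csvLine) {
        String[] parts = csvLine.split(";");
        List<String> frames = new ArrayList<>(Arrays.asList(parts[3].split(",")));
        return new AnimationRecord(parts[0], parts[1],
                                   Float.parseFloat(parts[2]), frames);
    }

    public static void main(String[] args) {
        AnimationRecord r = AnimationRecord.parse(
            "GOBLIN;TALK;0.125;goblinTalkRight01,goblinTalkRight02,goblinTalkRight03");
        System.out.println(r.character + " " + r.type + " frames=" + r.frameNames.size());
    }
}
```

Adding a pause to an animation then becomes a matter of repeating a frame name in the data file, with no code change at all.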
{ "domain": "codereview.stackexchange", "id": 23552, "tags": "java, game, animation, libgdx" }
What is the exact relationship between scale invariance and renormalizability of a theory?
Question: I have often read that renormalizability and scale invariance are somehow related. For example, in this tutorial on page 12, in the first sentence of point (7), self-similarity (= scale invariance?) is referred to as the non-perturbative equivalent of renormalizability. I don't understand what this exactly means. Can one say that all renormalizable theories are scale invariant but the converse, that every scale invariant theory is renormalizable too, is not true? I'm quite confused and I'd be happy if somebody could (in some detail) explain to me what the exact relationship between scale invariance and renormalizability is. Answer: Your question, Can one say that all renormalizable theories are scale invariant but the converse, that every scale invariant theory is renormalizable too is not true? has a sharp answer: no, one cannot say so. Renormalizable theories typically have running coupling constants with non-vanishing beta functions. The second part (what you called the 'converse') is false too. The first example that comes to my mind is a theory with a spontaneously broken CFT that delivers a dilaton: the low-energy Lagrangian for the dilaton is scale invariant and yet non-renormalizable, having an infinite series of terms organized by the number of derivatives involved. The only relations I can see between scale invariance and renormalization are well known: a) renormalization typically spoils classical scale invariance; b) a theory with strictly renormalizable terms (i.e. dimension 4 only) is classically scale invariant, and it has a chance to be scale invariant at the quantum level as well; c) a non-scale-invariant theory may run and approach a scale invariant theory at the end of the RG flow, either IR or UV, depending on where you are heading. This last point may be violated in very special non-unitary QFTs, though.
{ "domain": "physics.stackexchange", "id": 5311, "tags": "quantum-field-theory, renormalization, scaling" }
How to move PR2 arm with recorded position trajectory (in joint space)
Question: I have a 2-second-long arm movement trajectory recorded at 10 Hz (in joint space). I want to play this trajectory back on the PR2 arm. What would be the best way to do this? Thanks Originally posted by ecalisgan on ROS Answers with karma: 11 on 2012-07-09 Post score: 0 Answer: Use the action /left_arm_controller/joint_trajectory_action Originally posted by David Lu with karma: 10932 on 2012-07-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 10129, "tags": "ros, pr2-arm-navigation, arm-navigation" }
BAM to BigWig without intermediary BedGraph
Question: I have a pipeline for generating a BigWig file from a BAM file: BAM -> BedGraph -> BigWig, which uses bedtools genomecov for the BAM -> BedGraph part and bedGraphToBigWig for the BedGraph -> BigWig part. The use of bedGraphToBigWig to create the BigWig file requires a BedGraph file to reside on disk in uncompressed form, as it performs seeks. This is problematic for large genomes and variable-coverage BAM files, when there are more step changes/lines in the BedGraph file. My BedGraph files are in the order of 50 Gbytes in size, and all that IO for 10-20 BAM files seems unnecessary. Are there any tools capable of generating BigWig without having to use an uncompressed BedGraph file on disk? I'd like this conversion to happen as quickly as possible. I have tried the following tools, but they still create/use a BedGraph intermediary file: deepTools Some Benchmarks, Ignoring IO Here are some timings I get for creating a BigWig file from a BAM file using 3 different pipelines. All files reside on a tmpfs, i.e. in memory. BEDTools and Kent Utils This is the approach taken by most. time $(bedtools genomecov -bg -ibam test.bam -split -scale 1.0 > test.bedgraph \ && bedGraphToBigWig test.bedgraph test.fasta.chrom.sizes kent.bw \ && rm test.bedgraph) real 1m20.015s user 0m56.608s sys 0m27.271s SAMtools and Kent Utils Replacing bedtools genomecov with samtools depth and a custom awk script (depth2bedgraph.awk) to output bedgraph format gives a significant performance improvement: time $(samtools depth -Q 1 --reference test.fasta test.bam \ | mawk -f depth2bedgraph.awk \ > test.bedgraph \ && bedGraphToBigWig test.bedgraph test.fasta.chrom.sizes kent.bw \ && rm test.bedgraph) real 0m28.765s user 0m44.999s sys 0m1.166s Although it has fewer features, we used mawk here as it's faster than gawk (we don't need those extra features here). Parallelising with xargs If you want to have a BigWig file per chromosome, you can easily parallelise this across chromosome/reference sequences.
We can use xargs to run 5 parallel BAM->BedGraph->BigWig pipelines, each using the tmpfs-mounted /dev/shm for the intermediary BedGraph files. cut -f1 test.fasta.chrom.sizes \ | xargs -I{} -P 5 bash -c 'mkdir /dev/shm/${1} \ && samtools depth -Q 1 --reference test.fasta -r "${1}" test.bam \ | mawk -f scripts/depth2bedgraph.awk \ > "/dev/shm/${1}/test.bam.bedgraph" \ && mkdir "./${1}" \ && bedGraphToBigWig \ "/dev/shm/${1}/test.bam.bedgraph" \ test.fasta.chrom.sizes \ "./${1}/test.bam.bw" \ && rm "/dev/shm/${1}/test.bam.bedgraph"' -- {} deepTools Let's see how deepTools performs. time bamCoverage --numberOfProcessors max \ --minMappingQuality 1 \ --bam test.bam --binSize 1 --skipNonCoveredRegions \ --outFileName deeptools.bw real 0m40.077s user 3m56.032s sys 0m9.276s Answer: This can be done in R very easily from an indexed .bam file. Given a single-end BAM file for sample1: library(GenomicAlignments) library(rtracklayer) ## read in BAM file (use readGAlignmentPairs for paired-end files) gr <- readGAlignments('sample1.bam') ## convert to coverages gr.cov <- coverage(gr) ## export as bigWig export.bw(gr.cov,'sample1.bigwig') Be aware that this method doesn't include normalization steps (such as normalizing to total coverage). Most of these additional steps can be added if necessary.
{ "domain": "bioinformatics.stackexchange", "id": 190, "tags": "sam, file-formats, format-conversion" }
What organic solvents are suitable to use with potassium permanganate?
Question: I'm attempting to oxidize a substance with potassium permanganate and I was wondering what organic solvents I have at my disposal. The compound is soluble in chloroform and ethanol, but obviously those would not do well with a powerful oxidizing agent. The phase-transfer catalyst I'm going to employ is tetra-n-butylammonium bromide. Answer: It sounds like you need a solvent that 1) does not react with KMnO4 and 2) is immiscible with water. As @Janice DelMar said in her comment, some of the lower-chlorinated hydrocarbons (methylene chloride, 1,2-dichloroethane, 1,1,2-trichloroethane) should work. If you can live with a high boiling point solvent, you might consider chlorobenzene. Other solvents that fit these criteria include most alkanes (pentane, hexane, cyclohexane, petroleum ether, etc.), benzene (but not toluene), and diethyl ether. A quick test to determine compatibility of your solvent with KMnO4 involves a TLC stain. Make (or borrow) a batch of KMnO4 stain as described here. Dip a TLC plate into your solvent and then dip it into the stain with minimal drying between. If the plate turns brown (MnO2), then your solvent was oxidized and is no good. If the plate stays purple, then your solvent is resistant.
{ "domain": "chemistry.stackexchange", "id": 83, "tags": "solvents" }
Are the eigenvectors of the Choi-Jamiolkowski state maximally entangled?
Question: Let $\phi: M_n\rightarrow M_n$ be a quantum channel (completely positive and trace preserving). Via the Choi-Jamiolkowski isomorphism we can transform this into a state $$J(\phi) = (I_n\otimes\phi)(M) = \sum_{ij}E_{ij}\otimes\phi(E_{ij})$$ where $M$ denotes the (unnormalized) maximally entangled state and $E_{ij}$ the matrix with a 1 at the $ij$ position and zeros everywhere else. This state is positive semidefinite if and only if $\phi$ is completely positive. This means that it has an eigenvalue decomposition: $$J(\phi) = \sum_i \lambda_i P_i$$ for some 1-dimensional projections $P_i\in M_n\otimes M_n$. These projections can be called maximally entangled when $\mathrm{Tr}_1(P_i) = I_n/n$ and $\mathrm{Tr}_2(P_i) = I_n/n$. Can the $P_i$ be chosen such that they all are maximally entangled? I know this is true when $\phi$ is a unitary conjugation and when $n=2$. Is it true in general? Answer: No. A simple counterexample is the qubit channel $$ \phi:\rho\mapsto \mathrm{tr}(\rho)|0\rangle\langle0|\ . $$ Its Choi state is $J(\phi)=\tfrac12\mathbb{I}\otimes |0\rangle\langle0|$, whose eigenvalue decomposition satisfies $\mathrm{tr}_1(P_i)=|0\rangle\langle0|$. EDIT: I have now compiled a list of canonical examples to check.
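The counterexample is easy to check numerically. Below is a sketch with NumPy; the $1/n$ normalization of the Choi state is our assumption, chosen to match the answer's $J(\phi)=\tfrac12\mathbb{I}\otimes|0\rangle\langle0|$:

```python
import numpy as np

n = 2
ket0 = np.array([[1.0], [0.0]])
phi = lambda rho: np.trace(rho) * (ket0 @ ket0.T)   # rho -> tr(rho)|0><0|

# Normalized Choi state: (1/n) * sum_ij E_ij (x) phi(E_ij)
J = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1.0
        J += np.kron(E, phi(E)) / n

assert np.allclose(J, np.kron(np.eye(2) / 2, ket0 @ ket0.T))

# Every eigenvector with nonzero eigenvalue is a product state |psi> (x) |0>,
# so tracing out the first factor gives |0><0|, not I/2: not maximally entangled.
vals, vecs = np.linalg.eigh(J)
for k in range(n * n):
    if vals[k] > 1e-12:
        v = vecs[:, k].reshape(n, n)        # v[a, b] = <a, b | eigenvector>
        rho2 = v.T @ v.conj()               # tr_1 of the rank-1 projector
        assert np.allclose(rho2, ket0 @ ket0.T)
print("all nonzero-eigenvalue eigenvectors are product states")
```

Note that the eigenvalue-$\tfrac12$ eigenspace is degenerate, but every choice of basis within it still has the product form $|\psi\rangle\otimes|0\rangle$, so no basis choice rescues maximal entanglement here.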
{ "domain": "physics.stackexchange", "id": 35279, "tags": "quantum-mechanics, quantum-information, quantum-entanglement, trace" }
C++ Vector with templates
Question: I am learning about templates in C++ so I decided to implement an N-dimensional vector. The code seems to work fine, but there are a few things that I am unsure about. To stop GetW() being called on a 3-dimensional vector, I used std::enable_if. This works, but the error message given when it's misused isn't as helpful as I'd like: error: no type named ‘type’ in ‘struct std::enable_if<false, void>’. It would be better if I could make it into something friendlier like: error: attempted to get 4th component of a 3d vector instead. I am also unsure about the best practices with templates, so any feedback would be appreciated. Thank you :) #include <array> #include <cassert> #include <cmath> #include <type_traits> template< typename T, int num_components, typename = typename std::enable_if<std::is_arithmetic<T>::value, T>::type, typename = typename std::enable_if<(num_components >= 2)>::type > class Vector { using this_t = Vector<T, num_components>; public: Vector() : components{0} {} Vector(std::array<T, num_components> components) : components(components) {} template<int idx> inline T Get() const { typename std::enable_if<(num_components >= idx)>::type(); return this->components[idx]; } inline T Get(int idx) const { assert(idx <= num_components); return this->components[idx]; } inline T GetX() const { return this->components[0]; } inline T GetY() const { return this->components[1]; } inline T GetZ() const { typename std::enable_if<(num_components >= 3)>::type(); return this->components[2]; } inline T GetW() const { typename std::enable_if<(num_components >= 4)>::type(); return this->components[3]; } template<int idx> inline void Set(T value) { typename std::enable_if<(num_components >= idx)>::type(); this->components[idx] = value; } inline void Set(int idx, T value) { assert(idx >= num_components); this->components[idx] = value; } inline void SetX(T value) { this->components[0] = value; } inline void SetY(T value) { this->components[1] = value; } inline 
void SetZ(T value) { typename std::enable_if<(num_components >= 3)>::type(); this->components[2] = value; } inline void SetW(T value) { typename std::enable_if<(num_components >= 4)>::type(); this->components[3] = value; } double LengthSquared() const { double ret = 0; for (int i = 0; i < num_components; i++) { const T value = this->components[i]; ret += value * value; } return ret; } double Length() const { return std::sqrt(this->LengthSquared()); } void Normalise() { const double length = this->Length(); for (int i = 0; i < num_components; i++) { this->Set(i, this->Get(i) / length); } } this_t Normalised() const { const double length = this->Length(); std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = this->Get(i) / length; } return this_t(std::move(new_components)); } void Negate() { for (int i = 0; i < num_components; i++) { this->components[i] = -this->Get(i); } } this_t Negated() const { std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = -this->Get(i); } return this_t(std::move(new_components)); } this_t operator+(const this_t &r) const { std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = this->Get(i) + r.Get(i); } return this_t(std::move(new_components)); } this_t operator-(const this_t &r) const { std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = this->Get(i) - r.Get(i); } return this_t(std::move(new_components)); } this_t operator*(const this_t &r) const { std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = this->Get(i) * r.Get(i); } return this_t(std::move(new_components)); } this_t operator*(T s) const { std::array<T, num_components> new_components; for (int i = 0; i < num_components; i++) { new_components[i] = this->Get(i) * s; } return 
this_t(std::move(new_components)); } bool operator=(const this_t &r) const { for (int i = 0; i < num_components; i++) { if (this->Get(i) != r.Get(i)) return false; } return true; } this_t Cross(const this_t &r) const { // pretend that cross product is only defined for a 3-dimensional vector typename std::enable_if<(num_components == 3)>::type(); std::array<T, num_components> new_components; new_components[0] = this->GetY() * r.GetZ() - this->GetZ() * r.GetY(); new_components[1] = this->GetZ() * r.GetX() - this->GetX() * r.GetZ(); new_components[2] = this->GetX() * r.GetY() - this->GetY() * r.GetX(); return this_t(std::move(new_components)); } T Dot(const this_t &r) const { T ret = 0; for (int i = 0; i < num_components; i++) { ret += this->Get(i) * r.Get(i); } return ret; } private: std::array<T, num_components> components; }; using Vector2d = Vector<double, 2>; using Vector2f = Vector<float, 2>; using Vector2i = Vector<int, 2>; using Vector3d = Vector<double, 3>; using Vector3f = Vector<float, 3>; using Vector3i = Vector<int, 3>; using Vector4d = Vector<double, 4>; using Vector4f = Vector<float, 4>; using Vector4i = Vector<int, 4>; Answer: Comments: I am learning about templates in C++ so I decided to implement an N-dimensional vector. It's not an N-dimensional vector. It's a 1-D vector with N elements. OK. Now I have read this for a while. I am starting to think "Mathematical Vector". Is that what you mean? But even that is not quite correct, as a "Mathematical vector" has a direction (which you can specify with 3 lengths); but you also need a point for it to go through (so you need another 3 points to define that). Still confused. To stop GetW() being called on a 3-dimensional vector, I used std::enable_if. This works, but it produces a very nasty error message: error: no type named ‘type’ in ‘struct std::enable_if<false, void>’. This is because it is a compile time check. It's supposed to be readable by developers. 
That's because it is supposed to stop developers from making mistakes (not users). It would be better if I could make it into something friendlier like: error: attempted to get 4th component of a 3d vector instead. This should be available with a C++ feature called Concepts. Unfortunately this is not scheduled until 2020 (if it is not cancelled again). I am also unsure about the best practices with templates, so any feedback would be appreciated. Thank you :) That's what we are here for. Review: My compiler generates a warning on this line: Vector() : components{0} {} components is expecting a list here, not a single element. It can be a list with a single element. Pass by reference: This creates a copy of the parameters. Then the parameters are copied into the member variable. It's possible the compiler may optimize this, but don't count on it. Vector(std::array<T, num_components> components) : components(components) {} So you can pass by reference (to prevent an extra copy). It will still need to be copied into the destination. Vector(std::array<T, num_components> const& components) : components(components) {} So you can pass by r-value reference. Vector(std::array<T, num_components>&& components) : components(std::move(components)) {} Now some people may point out that std::array's move is element-wise, so for trivially-copyable element types it is no cheaper than a copy. I know. But I like to future-proof my code. At some point in the future somebody may change the type of components, and I would like my code to still stay as efficient as possible. So it may be worth doing this just for the future. BUT I can also see the counter-argument, so take or leave this one as you see fit. Return by reference Here you are returning by value. template<int idx> inline T Get() const { typename std::enable_if<(num_components >= idx)>::type(); return this->components[idx]; } If all you want to do is read an element of type T, then making a copy (return by value causes a copy) seems overkill.
Also if you return by reference there is the added benefit (if that is applicable) that you can potentially change the value in place. If you don't want to change the value then return a const reference to allow reads but not writes (via the reference). inline is the most useless keyword. The keyword inline is a hint to the compiler that is universally ignored by all modern compilers. Humans are terrible at deciding when inlining is appropriate, so the compilers started ignoring their human masters a long time ago and decide internally when to actually inline the code. inline T Get(int idx) const { Only use the inline keyword when you have to. This is used when functions/methods are defined outside the class but in a header file (so there are potentially multiple copies across compilation units). Here it is used to tell the linker to ignore all the extra copies, as they are all the same. Don't use this-> This is a code smell in C++ and hides errors. inline T GetX() const { return this->components[0]; } The only reason to use this-> is to disambiguate a shadowed member. The problem here is that if you forget to use this-> the compiler will not tell you there is an error; it will use the most locally scoped version of the variable (so it hides errors when you forget to use it). Also for a bug fixer it is hard to tell whether you deliberately did not use it and meant to use the local shadowing variable, or it was a mistake and you wanted the member variable. On the other hand you only need it when you have shadowed variables. If you never have shadowed variables you never have to use it. You will also not have any ambiguity about which variable you meant to use because you used nice unique names for everything. Get/Set bad interface template<int idx> inline void Set(T value) { I think the whole Java world got it wrong. Get/Set is a terrible paradigm for accessing an object as it breaks the encapsulation.
What they did get very well is the whole automation of serialization and other tools that can be built when you do use this. But it's also considered a bad pattern for C++ (we don't have any of that tooling), so our code is not so brittle to change. Use references to avoid copying. const T value = this->components[i]; ret += value * value; Nothing wrong with breaking this into two lines and making it readable. But you have to watch the assignment to value. This is a copy operation. So you make a copy then multiply the values together. OK, a copy is not that bad for ints/doubles or any numeric types. But you are defining this for an arbitrary type T. The cost of copying T could potentially be huge, so don't do it if you don't need to. T const& value = this->components[i]; // Assign to reference // ^^^ ret += value * value; You don't need move on return std::array<T, num_components> new_components; ... return this_t(std::move(new_components)); When you return a value it is a prvalue. So moving the object to a temporary is not required before a return (it will already be a prvalue) and the compiler will optimize and move a returned value. return new_components; // Achieves the same result. Assignment vs Increment These operators are all fine: this_t operator+(const this_t &r) const; this_t operator-(const this_t &r) const; this_t operator*(const this_t &r) const; this_t operator*(T s) const; Some people may argue that these should be free-standing functions. There is an argument for this if you want auto conversions to happen (which I usually don't). So I usually do as you have done and make them members. BUT you should look at your use case and make sure that is what you want. Also when people define these methods they usually also define the compound-assignment versions: this_t& operator+=(const this_t &r); this_t& operator-=(const this_t &r); this_t& operator*=(const this_t &r); this_t& operator*=(T s); This is because they are easy to write as a pair.
// The += looks a lot like your original code, // except you are updating the current object (so it's not a const method). this_t& operator+=(this_t const& r) { for (int i = 0; i < num_components; i++) { components[i] += r.components[i]; } return *this; } // The + operator simply copies *this and then uses // the += operator to do the hard work on the copy (which is returned). this_t operator+(this_t const& r) const { this_t copy(*this); copy += r; return copy; } It's standard for the assignment to return a reference to itself. This: bool operator=(const this_t &r) const; Is normally written as: this_t& operator=(const this_t &r); // Can't be const as you are modifying this.
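A compilable toy version of that pairing (my own sketch using a fixed 3-element double vector, not the reviewed template):

```cpp
#include <array>
#include <cstddef>

// Sketch of the +=/+ pairing: += mutates in place and returns *this,
// while + copies *this (not the argument!) and delegates to +=.
struct Vec3 {
    std::array<double, 3> c{};

    Vec3& operator+=(const Vec3& r) {
        for (std::size_t i = 0; i < c.size(); i++) {
            c[i] += r.c[i];
        }
        return *this;
    }

    Vec3 operator+(const Vec3& r) const {
        Vec3 copy(*this);  // copy the left operand, leave it unchanged
        copy += r;
        return copy;
    }
};
```

The operator+ body is the only part that needs care: copying the right-hand operand instead of *this would silently compute r + r.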
{ "domain": "codereview.stackexchange", "id": 32192, "tags": "c++, template, coordinate-system" }
What happens to the distribution of the induced angle of attack, if we suppose the circulation distribution is constant?
Question: Assuming the lifting-line theory, we know the circulation must go to zero at the tips of the wing. Therefore, I wonder if we can actually conceive such an idea as a constant circulation across the whole wing span, and if yes, would the induced angle be zero for all of the points ? Answer: What you postulate is only possible with a wing of infinite span. Here, circulation is indeed constant over span because the wing has no end, and the induced angle of attack is zero. Once the wing is not infinite, the wingtip forces circulation to zero, as you say. Since induction in subsonic flow reaches infinitely in all directions, this jump in circulation will not be confined to the wingtip but will influence the amount of circulation over the whole wing, so the decrease in circulation strength will happen gradually over span. If that distribution happens to be elliptical, with the wingspan being one half axis of the ellipse, the induced angle of attack is constant over span. If you now twist the wing locally to bend the circulation distribution into a more rectangular shape, the induced angle of attack will become smaller at mid-wing but larger towards the tips. But no amount of twisting will make the circulation distribution completely rectangular.
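For concreteness, the elliptical case mentioned above can be written out (standard lifting-line results, added here for illustration). With span $b$, free-stream speed $V_\infty$, and circulation

```latex
\Gamma(y) = \Gamma_0 \sqrt{1 - \left(\frac{2y}{b}\right)^2},
\qquad
w = \frac{\Gamma_0}{2b} = \text{const},
\qquad
\alpha_i = \frac{w}{V_\infty} = \frac{\Gamma_0}{2\, b\, V_\infty},
```

the downwash $w$, and hence the induced angle of attack $\alpha_i$, is the same at every spanwise station. A strictly constant $\Gamma$, by contrast, would shed all its vorticity in two concentrated tip vortices of strength $\Gamma_0$, whose induced downwash diverges at the tips — consistent with the point that a fully rectangular distribution is unattainable on a finite wing.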
{ "domain": "physics.stackexchange", "id": 85999, "tags": "fluid-dynamics, aerodynamics, lift" }
“Massive” v.s. “Supermassive” black holes
Question: I think I understand what a black hole is. When there is enough mass in a given area the "bend" in time space approaches infinite (please correct me if this is wrong). I understand there are different infinities (countable v.s. uncountable). So how do I make sense of a.. "standard" black hole v.s. a "supermassive" black hole. I thought the point of a black hole was "supermassive." Is the difference "just" the diameter of the event horizon? Answer: A supermassive black hole is simply a very massive black hole. The mass of a black hole is proportional to its horizon radius, so supermassive ones are also very big. There is a class of "ordinary" black holes that are formed by collapsing heavy stars at the end of their evolution. The heaviest among them have masses of less than a hundred solar masses. They are properly called stellar-mass black holes. In contrast, the masses of supermassive black holes are larger than 100,000 solar masses, and many are much larger still. There are a few that may reach billions of solar masses! Those giants are usually situated near the centers of galaxies and originate in the collapse processes during the formation of galaxies.
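Since the horizon radius scales linearly with mass ($r_s = 2GM/c^2$), the size difference follows directly. A quick numerical check (my own sketch, not part of the original answer):

```cpp
// Schwarzschild radius r_s = 2GM/c^2 is linear in mass: a one-solar-mass
// hole has r_s of roughly 3 km, while a 4-million-solar-mass hole
// (comparable to Sagittarius A*) is about 4 million times bigger.
constexpr double G = 6.674e-11;      // gravitational constant, m^3 kg^-1 s^-2
constexpr double c_light = 2.998e8;  // speed of light, m/s
constexpr double M_sun = 1.989e30;   // solar mass, kg

constexpr double schwarzschild_radius(double mass_kg) {
    return 2.0 * G * mass_kg / (c_light * c_light);
}
```

So the difference really is "just" scale: same geometry, horizon radius a million-plus times larger.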
{ "domain": "physics.stackexchange", "id": 94563, "tags": "black-holes, mass, event-horizon, order-of-magnitude" }
Finding the charge on a capacitor (RC circuit)
Question: I am having trouble finding the charge on the capacitor for $t\rightarrow \infty$ after the switch has been closed at $t=0$. I already know the current $I$. I also have to find out on which sides the positive/negative charges are. I have a feeling that the positive charges are located on the left side of the capacitor, but finding the charge $Q$ troubles me. Answer: In steady state, $t \to \infty$, the capacitor and resistor in the red ellipse can be ignored as no current flows in that part of the circuit. You now have a network of resistors and need to find in terms of $\mathcal E$: $V_{\rm BF}\,\Rightarrow \, V_{\rm AF}$, $V_{\rm DF} \Rightarrow V_{\rm AD}$
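The underlying rule (a generic reminder, not tied to the particular node labels of this circuit): once the transients die out the capacitor voltage is constant, so

```latex
I_C = C\,\frac{dV_C}{dt} = 0
\quad\Rightarrow\quad
Q = C\,V_C ,
```

where $V_C$ is whatever voltage the remaining resistor network puts across the capacitor; its sign tells you which plate carries the positive charge.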
{ "domain": "physics.stackexchange", "id": 57822, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, batteries" }
Normalisation of wavefunction given by the form $Ae^{i(kx-wt)}$
Question: Question 1. Let's say that the wavefunction is given in the form $$\Psi(x, t) = Ae^{i(kx-wt)}$$ Then because of the normalisation condition, the following should hold. $$\int \Psi^*\Psi dx = A^2 \int_{-\infty}^{\infty} e^{-i(kx-wt)}\times e^{i(kx-wt)} \ dx = 1$$ Because $e^{-i(kx-wt)}\times e^{i(kx-wt)} = e^{-i(kx-wt) + i(kx-wt)} = 1$, the condition demands that $$A^2 \int_{-\infty}^{\infty} dx = 1$$ As the integral value diverges to $+\infty$, we reach the conclusion that $A$ should converge to zero. What's wrong here? Question 2. This is another question that should be classified and asked separately but as it is a short one I will just put this one into here. When expressing the wavefunction as a linear combination of basis functions, especially in discrete cases, is it that the index varies from $-\infty$ to $\infty$? That means, is it that $$\Psi(x) = \sum_{-\infty}^{\infty} c_i \psi_i \ ?$$ Apologies in advance if the questions are trivial. I am a newcomer to quantum mechanics. Answer: (1) Nothing wrong there. Plane waves are states of infinitely precise momentum and cannot be properly normalized in position space due to having infinite spread from Heisenberg uncertainty. In practice they still help e.g. in the scattering matrix formalism to get an amplitude for reflection and an amplitude for transmission, and to settle e.g. the basic physics of an Aharonov-Bohm ring where the actual lengths one cares about are finite. (2) You always can and you never have to. There is a bijection between $\mathbb Z$ and $\mathbb N$ so however you number things is up to you. There is a slight reason to prefer $\mathbb N$ which is that a large class of these basis states are eigenfunctions of a Hamiltonian which is bounded from below, and thus these eigenvalues go on infinitely in one direction but not the other.
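The standard way out (textbook material, added here for completeness) is to normalize plane waves to a Dirac delta rather than to 1:

```latex
\int_{-\infty}^{\infty} \Psi_{k'}^{*}(x)\,\Psi_{k}(x)\,dx
= |A|^2 \int_{-\infty}^{\infty} e^{i(k-k')x}\,dx
= |A|^2\, 2\pi\,\delta(k-k') ,
```

so choosing $A = 1/\sqrt{2\pi}$ gives $\langle \Psi_{k'} | \Psi_k \rangle = \delta(k-k')$. Physically normalizable states are then wave packets, i.e. integrals over $k$ of such plane waves with square-integrable weight functions.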
{ "domain": "physics.stackexchange", "id": 76062, "tags": "quantum-mechanics, hilbert-space, wavefunction, probability, normalization" }
Do adult mammalian cochlear inner hair cells regenerate?
Question: The consensus seems to be no, but I see conflicting evidence. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5361427/ Supernumerary human hair cells—signs of regeneration or impaired development? A field emission scanning electron microscopy study The combination of scarring and proximity to the supernumerary cells suggests it is regeneration and not just misplacement. Do adult mammalian cochlear inner hair cells regenerate? Answer: Short answer The current consensus is that hair cells in the cochlea of humans do not regenerate spontaneously. Background I took the liberty to show the linked paper to a colleague of mine. This guy has been doing histology on the inner ear for his entire professional career. He pointed out that the consensus is that in mammals, cochlear hair cells do not regenerate. This as opposed to birds and fish where they can regenerate. In mammals, vestibular hair cells can also regenerate (Santaolalla et al., 2013). Where do these 'supernumerary inner hair cells' (sIHCs) the authors of the paper show in their (quite stunning) EM photos come from? I wish to point you to the last lines of the Discussion of your linked paper (Rask-Anderson et al., 2017): Taken together, it cannot be settled if the sIHC represent renewed or redundant accessory IHCs. Further molecular studies are needed to verify if the regenerative capacity of the human auditory periphery might have been underestimated. In other words, the extra hair cells may have appeared during development in utero, or perhaps due to regenerative processes later on. Note that the inner ear in mammals is fully developed and doesn't grow anymore in size. The EM photos in the article don't allow for the identification of functionality either. 
Also note that sIHCs have been identified in the 1800s in other animals and that the beauty of this paper lies more in the fact that it shows that sIHCs exist in humans, rather than showing that IHCs regenerate in man (that would likely not appear in the Upsala Journal of Medical Sciences, but in Nature or the likes). On a side note - the authors of the paper are giants in the field. Literature - Rask-Andersen, Ups J Med Sci (2017); 122(1): 1–19 - Santaolalla et al., Neural Regen Res (2013); 8(24): 2284–9
{ "domain": "biology.stackexchange", "id": 10923, "tags": "human-biology, neuroscience, neurophysiology, histology, human-ear" }
Difference between DFT and Z-Transform
Question: I have searched this question but couldn't find the answer in this network. I know this is a very confusing question for DSP beginners. Both the DFT and the Z-transform work for discrete signals. I have read that "the Z-transform is the general case of the DFT; when we consider the unit circle, the Z-transform becomes the Discrete Fourier Transform (DFT)". What does this mean? Ok, I can understand the mathematical verification, but what is the physical meaning of this and how does this affect the analysis in DSP? Answer: Actually, the Z transform is not really a proper transform, just a re-interpretation of the sequence of samples as coefficients of a formal Laurent series. In some cases the formal Laurent series converges; if it does, it does so on an annular region in the complex plane. For useful signals (stable, summable, exponentially decaying) this annulus contains the unit circle, and the evaluation of the Laurent series on the unit circle corresponds to the Fourier series. The interesting point of connecting a signal sequence to a periodic function on the unit circle is the inverse transformation: many useful sequences are sequences of Fourier coefficients. And of course convolution of signals corresponds to point-wise multiplication of the functions.
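In symbols (standard definitions, spelled out here for reference), the Z-transform, the discrete-time Fourier transform, and the $N$-point DFT of a sequence $x[n]$ are

```latex
X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n},
\qquad
X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n},
\qquad
X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}.
```

Evaluating $X(z)$ on the unit circle $z = e^{j\omega}$ (when that circle lies inside the region of convergence) gives the middle expression, and the DFT then samples that circle at the $N$ equally spaced angles $\omega_k = 2\pi k/N$ for a finite-length sequence.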
{ "domain": "dsp.stackexchange", "id": 1533, "tags": "fourier-transform, z-transform" }
Why must an integrating sphere be a sphere?
Question: Why must an integrating sphere be a sphere? Why can't it be an integrating cube? What is the difference? Could I use a cube to measure total illuminance like an integrating sphere does? Answer: The surface coating of an integrating sphere is optimized for low losses. This white coating (barium sulfate or PTFE/Teflon) acts like an ideal Lambertian scatterer: All light is scattered (OK, not 100%, but a very high percentage like 99.5%. See resources.) It is emitted into the hemisphere following the cosine law: perpendicular to the surface it's highest. The intensity decrease follows a cosine law. First-generation stray light (blue in OP's picture) shows this light cone. Imagine this cone at the corner of a cube: some light will hit a wall again and suffer tiny losses. A detector port in a cubic geometry has a lower probability of being hit by the ray of highest energy. With a sphere however all surface normal vectors point to its center. Remember that these rays "carry more energy" according to Lambert's cosine law. It will have lower losses than a measurement head with a cube geometry. A spherical geometry reduces the necessary number of stray events. Resources Labsphere Spectralon data sheet: 99.5% hemispherical reflectance value, so 0.5% loss. Spherical geometry is more expensive than cube geometry: Stellarnet There also is a cylindrical geometry: ILX Lightwave manuals
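Lambert's cosine law mentioned above, written out for reference (standard form): the radiant intensity scattered from an ideal diffuse surface falls off with the angle $\theta$ from the surface normal as

```latex
I(\theta) = I_0 \cos\theta .
```

In a sphere every surface normal points toward the center, so the strongest part of each scattering lobe is redirected symmetrically back into the cavity; a cube's corners and flat walls aim those lobes unevenly, which is why more scattering events (and thus more loss) are needed before the light is uniformly mixed.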
{ "domain": "physics.stackexchange", "id": 12985, "tags": "optics, experimental-physics, visible-light, scattering, instrument" }
Can neural networks be adapted without recreating them completely?
Question: If I have, for example, a classification network which can tell if there is a dog or a cat in a picture, is it possible to adapt the network so it can also learn to detect a mouse? Without making a new one from scratch. In this case it doesn't make sense, but I'm wondering if for example Netflix has to retrain its network completely with every new show they add. And if so, when do they do that? The first few days may be the most crucial ones, but also the ones with the least data to train a network from. The actual problem I do have is that I'm trying to train a network to predict the location of public transport vehicles. That by itself is hard enough, but what do I do if there is a new transport line, which I don't have an input neuron for. - Here I would need to add an input neuron. Or another idea I'm thinking about is a neural network which can help me order my documents by detecting the company and date. I would like to classify an unknown document layout, a few times, so the neural network is able to classify it by itself, but without losing everything it learned so far. - Here I would need to add an output neuron. It seems like there is a way to do this, since there are some machine learning algorithms out there which seem to pull off stuff like that. Or do they somehow work around this? If so, how? Answer: The answer to your question is "transfer learning", since the datasets "cat and dog" and "mouse" are quite similar: both are images. In a DeepNet for recognising "cats and dogs", the early layers learn to identify low-level features like edges, etc. It learns high-level features like eyes, ears, etc. in a few further layers, and in the very last layers of the DeepNet it starts to recognise the intended object. A DeepNet for recognising "mouse" will follow a similar pattern. On comparing these two, one may find that the first few layers of the DeepNet in both cases produce similar low-level features like edges, etc.
Hence, the first few layers of the DeepNet learned from "cats and dogs" can be used as base layers for the DeepNet for "mouse" detection. This technique is called transfer learning. To apply transfer learning, the dataset on which the DeepNet has been trained and the dataset to which this technique is applied must be similar. This transfer learning video by Andrew Ng would also be helpful in understanding the concept.
{ "domain": "datascience.stackexchange", "id": 3190, "tags": "neural-network, deep-learning, classification, transfer-learning" }
Python script breaks after resetting model poses
Question: I am using a Python script to move a robot to a certain position, reset model poses and loop this process n times. However, after the first reset, I obtain a message: [WallTime: 1500641282.768572] [0.001000] ROS time moved backwards: 2089.084s. I was wondering if there is a way to deal with the time problem, as I do not care what time it is in my program. Thanks! Originally posted by kchledowski on ROS Answers with karma: 52 on 2017-07-21 Post score: 0 Answer: I have realized that while /gazebo/reset_simulation resets the clock, we can use /gazebo/reset_world, which only resets the poses. Originally posted by kchledowski with karma: 52 on 2017-07-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2017-07-21: Could you please just accept your own answer instead of closing the question? Thanks. And a tip: for questions that are really Gazebo specific, I would recommend posting on answers.gazebosim.org, not here.
{ "domain": "robotics.stackexchange", "id": 28406, "tags": "ros, gazebo" }
Are male and female brains physically different from birth?
Question: Male and female brains are wired differently according to this article: Maps of neural circuitry showed that on average women's brains were highly connected across the left and right hemispheres, in contrast to men's brains, where the connections were typically stronger between the front and back regions. But since learning in the brain is associated with changes of connection strengths between neurons, this could or could not be the result of learning. What about physical differences from birth? Are there differences in size, regions, chemical composition, etc. from birth? Answer: Short answer Yes, men and women's brains are different before birth. Background First off, learning effects versus genetic differences is the familiar nature versus nurture issue. Several genes on the Y-chromosome, unique to males, are expressed in the pre-natal brain. In fact, about a third of the genes on the Y-chromosome are expressed in the male prenatal brain (Reinius & Jazin, 2009). Hence, there are substantial genetic differences between male and female brains. Importantly, the male testes start producing testosterone in the developing fetus. The female hormones have effects on the brain opposing those of testosterone. In neural regions with appropriate receptors, testosterone influences patterns of cell death and survival, neural connectivity and neurochemical composition. In turn, while recognizing post-natal behavior is subject to parenting influences and others, prenatal testosterone may affect play behaviors between males and females, whereas influences on sexual orientation appear to be less dramatic (Hines, 2006). The question is quite broad and I would start with the cited review articles below, or if need be, the wikipedia page on the Neuroscience of sex differences. References - Hines, Eur J Endocrinol (2006); 155: S115-21 - Reinius & Jazin, Molecular Psychiatry (2009); 14: 988–9
{ "domain": "biology.stackexchange", "id": 8060, "tags": "neuroscience, brain, neurophysiology, development, sex" }
An elevator moving with constant velocity
Question: While an elevator moves up, it moves up with a constant velocity. I read this post and understood that it's because of inertia. However, I'm not really convinced. So, as I have understood it, the upward tension ($T$) on the rope on the load side, being greater than the weight $L$ of the elevator itself, causes the net force to act upwards. The effort $E$ (effort in the sense that the elevator is a pulley) imparts a downward force on the other side of the rope, which is greater than $T$, hence causing a net force and acceleration downwards. In an elevator exhibiting dynamic equilibrium, as soon as the acceleration is imparted on the elevator, the effort is made to cease to act such that the net force acting on the effort side is 0, but since it already is in motion, it continues to be in motion because of Newton's First Law, and so is the case for the elevator or load itself. But how is the effort controlled in such a way so as to make $L=T$ in an elevator? Does it mean that $E$ is not caused by gravity? Even if it's not, when the net force acting on it is 0, won't $mg$ cause the effort to move down with an acceleration again? Or is there another device resisting $mg$? I just want to understand the mechanism behind how the forces are made equal in an elevator, as I had learnt that in an Atwood's Machine, $E>T>L$, so I can't really grasp situations where they are equal. Answer: There is no physical observable called Effort. From your comment reply I think you're just thinking of force. The counterweight is supplying most of the force.
The rest of the force is supplied by the motor, such that, for counterweight mass $m_c$, elevator mass including cargo $m_e$, motor force $F_m$, elevator acceleration $a_e$, counterweight acceleration $a_c$, and neglecting friction $$m_c(g-a_c) + F_m = -T = m_e(g-a_e)$$ Note that $g$ is a negative number (gravity points down) and although I don't know how elevators are engineered I suspect $a_c = -a_e$ (when the elevator goes down, the counterweight goes an equal and opposite amount up). For constant velocity, $a_e = a_c = 0$. Note that $T$ is the tension on the part of the cable connected to the elevator. Somewhere between the elevator and the counterweight (probably at the pulley itself, but I don't know how elevators are designed), the motor is bearing some of the load, so the cable connected to the counterweight experiences a different tension, $T_c = T+F_m$. ($F_m$ is negative if it's helping to support the elevator, or positive if it's helping to support the counterweight.)
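For the constant-velocity case asked about, setting $a_e = a_c = 0$ in the relation above gives

```latex
m_c\, g + F_m = m_e\, g
\quad\Rightarrow\quad
F_m = (m_e - m_c)\, g ,
```

so the motor only has to supply the weight imbalance between car and counterweight. If the two happened to balance exactly ($m_e = m_c$), then $F_m = 0$ and, friction aside, inertia alone would maintain the constant velocity — which is the point about Newton's First Law in the question.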
{ "domain": "physics.stackexchange", "id": 84013, "tags": "newtonian-mechanics, forces, newtonian-gravity, equilibrium, inertia" }
Question on relation between resistance and photon emitting
Question: Hello everyone, I have a question regarding the relation between photon emission and electricity. I asked the question in the Electrical Engineering forum but I was told that here is the appropriate place. Here's what I know: A photon is emitted whenever an electron goes from a higher energy level to a lower energy level (goes from an upper to a lower valence shell), thus lowering its own internal energy, and because the difference between its former state of energy and its current one has to go somewhere (law of energy conservation), a photon is produced. Also, according to Newton's law, an object with constant velocity and direction remains moving without losing energy unless some external force is exerted upon it. So, my question is: when a particular electron passes from an environment with lower resistance to an environment with higher resistance (flowing through a conductive wire with given R1 and reaching a resistor R2 where R2 > R1 - a DC circuit), some force is exerted upon its direction of movement, but an electromagnetic wave is not produced. Why is that, since AC currents produce such waves when electrons change their direction totally? Is this the necessary condition (180 degrees of change) for an electron to lose energy, or is there something else? Thanks in advance. Best regards, Nina Answer: Also, according to Newton's law - an object with constant velocity and direction remains moving without losing energy unless some external force is exerted upon it. When one is talking of electrons and photons, one is in the quantum mechanical regime, and also in the special relativity regime. Newton's laws apply to the macroscopic classical regime. Both the electron and the photon are elementary particles following quantum mechanical equations.
So, my question is: when a particular electron passes from an environment with lower resistance to an environment with higher resistance (flowing through a conductive wire with given R1 and reaching a resistor R2 where R2 > R1 - a DC circuit) Resistance is an emergent quality; it emerges from the underlying quantum mechanical behavior and describes the macroscopic behavior of circuits. Electrons in solids are described by the quantum mechanical model of the band theory of solids. In a conductor the electrons belong in an energy band that ties them to the whole lattice, and the attraction of the electric field gives each individual electron a drift velocity which will build up the current. In an insulator most electrons are tied to their locations in the lattice and very few are in the conduction band. More energy is needed to give a drift velocity to electrons. some force is exerted upon its direction of movement, but an electromagnetic wave is not produced The concept of force at the quantum level is a change in momentum, a dp/dt. All such changes for an electron in an electric field will give electromagnetic radiation, i.e. a photon will carry off some momentum and energy. This energy will be in the infrared frequencies and will appear as heat when the current is high. For a conductor where the electrons are in the conduction band very little radiation is released because the dp/dt of the individual electrons is small. Note that resistors heat up; that is photons in the infrared frequencies. Why is that, since AC currents produce such waves when electrons change their direction totally? Is this the necessary condition (180 degrees of change) for an electron to lose energy, or is there something else? No, it is not necessary to have a 180-degree reversal. Any acceleration/deceleration will give off radiation. Applying a voltage to a circuit induces accelerations on the drifting electrons, including the statistical scatterings due to their motion.
This radiation is in the infrared frequencies, appearing macroscopically as heat. Now to address this: A photon is emitted whenever an electron goes from a higher energy level to a lower energy level (goes from an upper to a lower valence shell), thus lowering its own internal energy, and because the energy difference has to go somewhere (law of energy conservation), a photon is produced For electrons in the conduction band, the quantum mechanical energy differences in a conductor (where a large number of the electrons of the lattice are) are so small as to be considered a continuum. For electrons in insulators there will be scatterings of electrons moving in the conduction band with electrons bound strongly in the atoms of the lattice, and those will take up energy and momentum in the form you are describing.
{ "domain": "physics.stackexchange", "id": 36557, "tags": "electromagnetic-radiation, electrons, photon-emission" }
Non-toxic organic bases?
Question: I am trying to find out if there are any non-toxic organic bases that could be given orally on a non-empty stomach to treat acidosis. From my research, histidine and diluted choline hydroxide were the best candidates. Some phosphazenes, including polyphosphazenes, could make the cut, but I don't know which ones exactly. Wikipedia [1] lists the following compounds as organic bases: pyridine, alkanamines (such as methylamine), imidazole, benzimidazole, histidine, guanidine, phosphazene bases, hydroxides of quaternary ammonium cations or some other organic cations. The LDLO of guanidine given orally to rabbits is 500 mg/kg [2]. The LD50 of imidazole (rabbits, oral) is 950 mg/kg. Pyridine seems to be in the same range [3]. Benzimidazoles are also toxic. Histidine, which is considered the most toxic amino acid, can be therapeutic in doses of up to 4.5 g/d, but is toxic in the range 24–64 g/d [4]. The last ones from this list are choline hydroxide and phosphazenes. The UTL of choline is 3.5 g/d; in the 10-16 g/d range it causes a fishy smell, so it probably overloads us with TMA, which is a carcinogen. Polyphosphazenes are being studied for drug delivery; apparently their metabolism generates low-toxicity products such as urea and phosphates [5]. Answer: A naturally occurring amino acid with a basic side chain - L-Arginine: Oral supplementation with L-arginine at doses up to 15 grams daily are generally well tolerated. source here
{ "domain": "chemistry.stackexchange", "id": 13529, "tags": "organic-chemistry, acid-base, toxicity" }
"Property Container" design-pattern
Question: I've tried to write my Property Container design-pattern implementation. Could anybody, please, tell me, if this code is really what I intended to write (follows the Property Container design-pattern rules)? Is there anything that can be improved? <?php class PropertyContainer { private $PropertyContainer = array(); public function __construct() { } public function addProperty($k, $v) { for($i = 0; $i < count($this->PropertyContainer); $i++) { } $this->PropertyContainer[$k] = $v; } public function setProperty($k, $v) { while($this->PropertyContainer) { if(key($this->PropertyContainer) == $k) { $this->PropertyContainer[$k] = $v; return; } next($this->PropertyContainer); } echo "Key was not found"; } public function getProperty($k) { //var_dump($this->PropertyContainer); foreach($this->PropertyContainer as $key => $val) { if($key == $k) { return $val; } } echo "Key was not found"; return; } } $pc1 = new PropertyContainer(); $pc1->addProperty("myProperty1", 31); $pc1->addProperty("myProperty2", 32); $pc1->addProperty("myProperty3", 33); $pc1->setProperty("myProperty2", 7); echo $pc1->getProperty("myProperty1") . "<br />"; echo $pc1->getProperty("myProperty2") . "<br />"; echo $pc1->getProperty("myProperty3") . "<br />"; echo "<br />"; $pc2 = new PropertyContainer(); $pc2->addProperty("myProp1", 11); $pc2->addProperty("myProp2", 11); $pc2->addProperty("myProperty3", "Some String"); $pc2->setProperty("myProp2", 12); echo $pc2->getProperty("myProp1") . "<br />"; echo $pc2->getProperty("myProp2") . "<br />"; echo $pc2->getProperty("myProperty3") . "<br />"; echo "<br />" . $pc2->getProperty("myProperty5") . "<br />"; ?> Answer: Yes, you are using this pattern correctly here. Its use in PHP (especially as a generic implementation) is greatly reduced though. As the answers to your programmers.stackexchange.com question point out: basically it is a hashmap. Now, PHP implements arrays as some form of hash map already. I see this pattern as an anti-pattern too.
There are some use-cases of this pattern, e.g. when you register validators along with each property that the property has to fulfill when setting. The PHP class stdClass basically does what you want to achieve in a more direct way: $object = new stdClass(); $object->key = $value; Though I'd recommend not using this code in production, it is a good way to learn of course. So my review for your code: $PropertyContainer: It is uncommon in PHP to have variables start in upper case. Most code I read either starts in lower case and follows camelCase, or uses snake_case. public function __construct() { } just fills up space. Not required, therefore remove it. You should have a closer look at how to work with arrays in PHP. Your current approach is highly inefficient. Many of the array functions do not rely on the array iterator's state. Currently you force PHP to iterate over the array. If you use dedicated methods instead, PHP doesn't have to iterate (which is much faster of course :)) To add an entry to your array there is no need to iterate over it at all. You can either array_push it at the end or just set it by $this->PropertyContainer[$k] = $v. The same goes for checking existence: use isset instead. And for accessing you can use $this->PropertyContainer[$k] instead of iterating. $k and $v are bad parameter names. While they usually are recognized as key and value, there is no harm in calling them this way, but this can be improved: you are talking about properties and not keys. So better name them $propertyName and $value. Shorthands should be used in local scope only (if at all). While I suppose they are just there for debugging: echo'ing on error is a bad idea. Either return an error code or, even better, throw an exception. Of course this requires a method to check if a property exists (e.g. hasProperty). return; without any value is the default anyway at the end of a method. You can remove this, shortening the code.
Otherwise, after some time another developer might wonder why it was written: was it located at some other line? Was there some return value previously?
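Pulling the review's suggestions together, here is a sketch of how the class could look (my own rewrite following the reviewer's points, not the reviewer's code; the hasProperty method is the reviewer's suggestion, the exception type is my choice):

```php
<?php
// Sketch applying the review: direct array access, isset() for existence
// checks, exceptions instead of echo, descriptive parameter names.
class PropertyContainer
{
    private $properties = array();

    public function addProperty($propertyName, $value)
    {
        // No iteration needed: a plain assignment adds or overwrites the entry.
        $this->properties[$propertyName] = $value;
    }

    public function hasProperty($propertyName)
    {
        return isset($this->properties[$propertyName]);
    }

    public function setProperty($propertyName, $value)
    {
        if (!$this->hasProperty($propertyName)) {
            throw new InvalidArgumentException("Unknown property: $propertyName");
        }
        $this->properties[$propertyName] = $value;
    }

    public function getProperty($propertyName)
    {
        if (!$this->hasProperty($propertyName)) {
            throw new InvalidArgumentException("Unknown property: $propertyName");
        }
        return $this->properties[$propertyName];
    }
}

$pc = new PropertyContainer();
$pc->addProperty("myProperty1", 31);
$pc->setProperty("myProperty1", 7);
echo $pc->getProperty("myProperty1"); // prints 7
```

Note that every operation is now a constant-time array lookup instead of a linear scan, and a missing key is a hard error rather than a silent echo.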
{ "domain": "codereview.stackexchange", "id": 6431, "tags": "php, design-patterns, classes, php5, properties" }
Help identify parasitic plant on oak seedling
Question: See pictures below. I found five of them on a patch of oak seedlings in northern Turkey yesterday; they are about 2 centimeters in diameter and smell like grass. Five months later: I tried PlantNet and Google Lens before asking but nothing came up. Answer: I think these are galls produced by a wasp in the genus Andricus. Here are some produced by A. polyceras: However, there are many species in this genus so more detail would be needed to narrow it down.
{ "domain": "biology.stackexchange", "id": 12366, "tags": "species-identification, botany, parasitism" }
Launch file problem
Question: When I write a launch file, I try to run a node with command line arguments. Just like: <node pkg="my_pkg" type="my_type" name="my_node" output="screen" args="--my_arg my_num"> But when I check the argv parameters, I found there are more than three; the additional two are __name:=my_node __log:=xxxxx.log What's wrong with it? Is there any solution to skip the additional two command-line arguments? Originally posted by Epsilon_cm on ROS Answers with karma: 5 on 2018-10-01 Post score: 0 Original comments Comment by PeteBlackerThe3rd on 2018-10-01: In a nutshell, nothing is wrong with it. ROS uses command line arguments to pass standard parameters to the node. The args property in the launch file just adds extra arguments onto these. Can you simply ignore them when processing arguments in your node? Comment by Epsilon_cm on 2018-10-01: Thank you. Answer: I would not call this a "problem", but I can imagine you'd like to somehow deal with this. See #q272267. Edit: and if the program you're launching is actually a ROS node, then you can make use of the Python shown in #q272267 directly. But know that ROS nodes typically don't take command line arguments, but use ROS parameters. Originally posted by gvdhoorn with karma: 86574 on 2018-10-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Epsilon_cm on 2018-10-01: Thanks a lot for your response.
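For a rospy node, the standard way to strip these ROS-added arguments is rospy.myargv(); the filtering it performs amounts to dropping every argument containing the := remapping separator. A minimal stand-alone sketch of that idea (plain Python, no ROS installation required; the example argv values mirror the question):

```python
import sys

def strip_ros_args(argv):
    """Drop ROS remapping arguments such as __name:=my_node or __log:=...,
    keeping only the arguments the user actually passed via args="..."."""
    return [a for a in argv if ":=" not in a]

# What a node launched with args="--my_arg my_num" might actually receive:
argv = ["my_node", "--my_arg", "my_num",
        "__name:=my_node", "__log:=/tmp/my_node.log"]
print(strip_ros_args(argv))  # ['my_node', '--my_arg', 'my_num']
```

In a real rospy node you would simply call rospy.myargv(argv=sys.argv) instead of this helper.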
{ "domain": "robotics.stackexchange", "id": 31844, "tags": "roslaunch, ros-kinetic" }
Is quantum gravity, ignoring geometry, the theory of a fictitious force?
Question: This question is motivated by this question and this one, but I will try to write it in such a way that it is not a duplicate. In short, I don't understand the motivation for a "quantum theory of gravity" that predicts a particle such as the graviton, because it stands in opposition to the entire framework of general relativity. Let me explain. First, a personal comment: I am a mathematician with a general understanding of what GR is (I took an undergraduate course from Sean Carroll, which used Hartle's book because his own was not yet published). I know slightly less than the corresponding amount of QM, which I do understand at the level of "an observable is a Hermitian operator acting on a Hilbert space of which the wave function is an element"; I know nothing technical about QFT, though I believe that "a particle is a representation of the gauge group" is true. However, I do have a popular-science understanding of elementary particles (at the precise level of Asimov's Atom, which I read a lot in high school). Now for the framing of the question. GR in a nutshell is that spacetime is fundamentally a 4-manifold with a metric locally isomorphic to Minkowski space and that particles are timelike geodesics in this manifold. The force of gravity only appears as such when these geodesics (i.e. curves) are parametrized in a coordinate patch, where "geodesic" and "coordinatewise unaccelerated" are different concepts; the apparent acceleration seems to reflect a force (via Newton's second law) that we experience as gravity. This phenomenon is identical to the appearance of a centrifugal force in a rotating coordinate system where none exists in a stationary one (it is identical in that this is a particular instance of it). I understand that the standard model is formulated in a flat spacetime, and that the role of the hypothetical graviton in it is to mediate the force of gravity.
I also understand that quantum gravity is a (shall we say) paradigm in which the standard model is made consistent with GR, and that the hallmark of a theory of quantum gravity is that it predicts a graviton. But this seems to be barking up the wrong tree: you cannot remove the geometry from GR without changing it entirely, but you cannot consider gravity a force (and thus, have a corresponding quantum particle) without removing the geometry. It does seem from this question that I have oversimplified somewhat, however, in that some theories, which I had thought were not very mainstream, do treat gravity in an "emergent" manner rather than as an interaction. My question: What I want to know is: rather than pursuing a "quantum theory of gravity", why not pursue a "gravitational quantum theory"? That is: since the formulation of standard quantum physics is not background independent in that it requires a flat spacetime, it cannot be compatible with GR in any complete way. At best it can be expanded to a local theory of quantum gravity. Why is it that (apparently) mainstream opinion treats GR as the outlier that must be correctly localized to fit the standard model, rather than as the necessary framework that supports a globalization of the standard model? One in which the graviton is not a particle in the global picture, but a fictitious object that appears from formulating this global theory in local coordinates? PS. I have seen this question and this one, which are obviously in the same direction I describe. They discuss the related question of whether GR can be derived from other principles, which is definitely a more consistent take on the unification problem. I suppose this means that part of my question is: why must Einstein's spacetime be replaced? Why can't the pseudo-Riemannian geometry picture be the fundamental one? 
Edit: I have now also seen this question, which is extremely similar, but the main answer doesn't satisfy me in that it confirms the existence of the problem I'm talking about but doesn't explain why it exists. PPS. I'd appreciate answers at the same level as that of the question. I don't know jargon and I'm not familiar with any theoretical physics past third-year undergrad. If you want to use these things in your answer you'll have to unravel them a bit so that, at least, I know what explicit theoretical dependencies I need to follow you. Answer: First of all, there is a quantum field theory on a curved background, even though it is not perfect. There are problems with global definitions of spinors, vacua, particle numbers etc. and this all seems to be a consequence of the core properties of GR such as no privileged definition of time or "global God observer". But the main issue of the theory is that the quantum fields are acted upon by the geometry but don't act back, which is heavily against Machian principles. There is a quantum stress-energy operator $$\hat{T}^{\mu}_{\;\;\nu} = \frac{\partial \hat{\mathcal{L}}}{\partial (\partial_\mu \phi)} \partial_\nu \hat{\phi} - \delta^\mu_{\;\; \nu} \hat{\mathcal{L}}$$ But you cannot just put this equal to the left hand side of Einstein's equations $$R_{\mu \nu} - \frac{1}{2} g_{\mu \nu} R$$ because that is a set of completely different objects. You can obviously distill an ordinary classical number (a c-number) from the stress-energy operator by taking its expectation value, and this gives you semi-classical gravity. Nonetheless, the expectation value flattens out a whole spectrum of information, so the most natural thing is to try to convert the left hand side to match the quantum operator richness of the right hand side. Then you get quantum gravity. We could argue whether gravity is treated as a force by quantization or not, but for a down-to-earth physicist that is just wordplay.
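The "semi-classical gravity" step described above is usually written as the semiclassical Einstein equation (a standard formula supplied here for concreteness; it is not spelled out in the original answer):

```latex
% Semiclassical gravity: classical geometry sourced by the expectation
% value of the quantum stress-energy operator.
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R
  \;=\; 8\pi G \, \langle \hat{T}_{\mu\nu} \rangle
```

Both sides are now ordinary tensor fields, which is exactly why this equation is consistent but information-losing: the full operator structure of the right-hand side has been averaged away.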
When a violent cosmic event sends off a gravitational wave, your detectors will vibrate upon the arrival of the wave whether you understand it as a geometric or a field effect. When you start with a given background in classical GR and perturb it, you will get several types of deformations and certain outgoing and ingoing oscillations of the metric representing weak gravitational waves. Due to the traditional formulation of QFT concerning itself mainly with scattering events with asymptotic outgoing and ingoing states, you will also concentrate on these in quantum gravity. You can call these perturbative ingoing and outgoing quanta "gravitons" but also "quantized gravitational wave-packets". You can call an eventual interaction of this excitation of the metric with other fields "the graviton decaying into a positron-electron pair" or "the conversion of energy carried by the gravitational wave into the energy of a specific spinor wave by a quantum process". The scattering picture of QFT is so stressed because it is quite well understood both intuitively and analytically, and it also has the best experimental underpinning. But this whole popular picture of flying particle-balls crumbles when you start to study non-perturbative and non-scattering effects (such as bound states). A partial understanding can then be obtained via the language of states, superposition and so on. But to be honest, I don't believe anyone really understands how the full-fledged QFT interaction adds up into something like the quark-antiquark condensate in a proton. I believe that in a certain sense it is more useful to forget the particle picture for understanding a proton, i.e. "there are no quarks but a certain quantum configuration of the respective fields." In the same way the quantum-gravitational field around a black hole can be understood as "no gravitons but a certain quantum configuration of the pseudo-Riemannian geometry".
So I think your intuition is basically in agreement with the current theoretical-physics picture; it's just not clear through the popular-physics portrayal.
{ "domain": "physics.stackexchange", "id": 19268, "tags": "general-relativity, quantum-gravity" }
Impulse response and amplitude-frequency characteristic of a communication channel
Question: The task is to implement communication system for transmission of 32 multiplexed signals through communication channel represented by copper cable whose frequency characteristic is given by: $$H(f) = e^{-d\gamma(f)}$$ $$\gamma(f) = j 2\pi f \sqrt{LC \left (1+ \frac{(1-j)k}{L\sqrt{2\pi f}}\right)}$$ where: $d$=1200m, $L$= 0.5 mH / km, $C$=0.04 µF/km, $k$=0.18. Amplitude characteristic of a communication channel should be analyzed in a frequency range from 0 to 500 kHz. For the first part of my task, I should plot amplitude characteristic of transfer function in dB and impulse response of a communication channel. The task should be done in Python. Professor gave us what he got as plots and I think I should get the same plots as he did. The plots he got: The plots I get: This is the code I used import numpy as np import matplotlib.pyplot as plt # Given parameters L = 0.5 * 1e-6 # H/m C = 0.04 * 1e-9 # F/m d = 1200 # meters k = 0.18 # Frequency range from 0 to 500 kHz frequency_range = np.linspace(0.1, 500.1, 500_000) # Transfer function y(f) y = 1j * 2 * np.pi * frequency_range * np.sqrt((L * C) * (1 + ((1 - 1j) * k) / (L * np.sqrt(2 * np.pi * frequency_range)))) # Frequency response H(f) H = np.exp(-d * y) # Amplitude response amplitude = np.abs(H) # Convert to decibels amplitude_dB = 20 * np.log10(amplitude) # Impulse response using the inverse Fourier transform h = np.fft.ifft(H) # Time array for plotting impulse response t1 = np.linspace(0.1, 50.1, 500_000) # Create subplots side by side with switched positions fig, axs = plt.subplots(1, 2, figsize=(15, 6)) # Plot amplitude response in dB axs[0].plot(frequency_range, amplitude_dB) axs[0].set_title('Channel Amplitude Response in dB') axs[0].set_xlabel('Frequency (kHz)') axs[0].set_ylabel('Amplitude (dB)') axs[0].grid(True) # Plot impulse response axs[1].plot(t1, np.real(h)) axs[1].set_title('Impulse Response') axs[1].set_xlabel('t (μs)') axs[1].set_ylabel('h (t) ') axs[1].grid(True) plt.tight_layout() 
plt.show() As you can see, the first plot is similar but the y-axis values are different, and the second plot is not even similar. Do I need to convert the given units to base units as I did in my code, and if I do, will the coefficient 'k' change? Does anyone know what I could do to get the same plots as he does, or can someone spot a mistake in my code if there is one? Answer: There's something off with the units. $LC$ has unit $\left[s^2\right]$, so the square root has unit $\left[s^2 + F\sqrt{s}\right]$ which doesn't make sense. In any case, once that's fixed, use standard units: # Given parameters L = 0.5 * 1e-3 # H/km C = 0.04 * 1e-6 # F/km d = 1.2 # km k = 0.18 # Frequency range from 0 to 500 kHz fmax = 500*1e3 step = 1000 frequency_range = np.linspace(1, fmax, step) Then fix your time vector: # Time array for plotting impulse response fs = 2*fmax t1 = np.linspace(0, step/fs*1e6, step)
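To see the effect of the suggested unit fix in isolation, here is a stand-alone spot check using only the Python standard library (the 100 kHz test frequency is my own choice, not from the thread): with everything kept in consistent per-kilometre units, the channel shows a moderate, physically plausible attenuation at that frequency.

```python
import cmath
import math

# Parameters in consistent per-kilometre units, as the answer suggests
L = 0.5e-3   # H/km
C = 0.04e-6  # F/km
d = 1.2      # km
k = 0.18

def H(f):
    """Channel frequency response H(f) = exp(-d * gamma(f)) at frequency f [Hz]."""
    w = 2 * math.pi * f
    gamma = 1j * w * cmath.sqrt(L * C * (1 + (1 - 1j) * k / (L * math.sqrt(w))))
    return cmath.exp(-d * gamma)

f = 100e3  # spot-check at 100 kHz
amplitude_dB = 20 * math.log10(abs(H(f)))
print(round(amplitude_dB, 1))  # about -5.4 dB of attenuation
```

With the original mixed units (per-metre L and C but an unchanged k), the same evaluation blows up by orders of magnitude, which is consistent with the wrong y-axis scale in the question's first plot.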
{ "domain": "dsp.stackexchange", "id": 12389, "tags": "digital-communications, python, frequency-response, impulse-response, channel" }
2nd Law of Thermodynamics
Question: I understand that the 2nd law of thermodynamics roughly states that, if you have a body (or a gas in a chamber) that is hot at one end and cold on the other, the heat will always flow from the hot to the cold part, and to get the opposite effect one has to put energy in from the outside (through a machine or something). Now, I don't understand why this fact cannot be explained just through probabilities (of the velocities of the gas molecules, say). It would seem to me that it is simply very, very, very unlikely that faster moving molecules all end up in (approximately) one spot at any time. But from all the fuss about the 2nd law, I'm led to believe that there has to be more behind it than probability. So where am I wrong? Why is the second law beyond probability? How is the 2nd law tested? (so that one can rule out simple probability?) ps.: I haven't yet had a course on probability theory, so my understanding of it is limited. Answer: I assure you, it is all probability AND statistics. Well, you see, when you say "a gas is at 300 Kelvin", it does not mean all the molecules in the gas are "at 300 Kelvin"; it rather states an average. It represents the total behavior of the gas compared to anything at any other temperature. So there are actually molecules with more kinetic energy, and those with less kinetic energy, interacting with each other and the environment they are in; however, due to the massive number of collisions and interactions, the result with the highest probability (i.e. a box and gas at the same temperature stay at the same temperature) is the macroscopic outcome. Probability, combined with statistics, is a very powerful tool to represent macroscopic nature, and I suggest you take probability and statistical-thermodynamics courses to further investigate such issues.
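The "very, very unlikely" intuition from the question can be made quantitative with a toy model (my own illustration, not part of the original answer): if each of N molecules is independently equally likely to be in either half of the box, the chance of finding all of them in one half is 2^(1-N), which is already astronomically small for quite modest N — and a macroscopic gas has N on the order of 10^23.

```python
# Toy model: each molecule independently occupies the left or right half
# of the box with probability 1/2. The probability that ALL N molecules
# sit in a single half is (1/2)**N; either half can be "the" half,
# hence the extra factor of 2.
def prob_all_one_side(n):
    return 2 * 0.5 ** n

for n in (10, 100, 1000):
    print(n, prob_all_one_side(n))
```

Already at N = 1000 the probability is around 10^-301; for real molecule counts the number is so small that "never happens" is, for all practical purposes, exact — which is precisely why the statistical second law looks like an absolute law.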
{ "domain": "physics.stackexchange", "id": 48061, "tags": "thermodynamics" }
Resize of the map necessary?
Question: Hi, I'm trying to load a blank map with a defined dimension of [width, height] = [20, 20], but it still displays in Rviz just a small white square of constant dimension 6x6. I read this tutorial here and I put the right parameters in the costmap file: global_costmap: global_frame: /map robot_base_frame: /base_link update_frequency: 3.0 publish_frequency: 0.0 # For a static global map, there is generally no need to continually publish it static_map: true # This parameter and the next are always set to opposite values. The global map is usually static so we set static_map to true rolling_window: false # The global map is generally not updated as the robot moves so we set this parameter to false resolution: 0.01 transform_tolerance: 1.0 map_type: costmap width: 20 height: 20 and local_costmap: global_frame: /odom robot_base_frame: /base_link update_frequency: 3.0 publish_frequency: 2.0 static_map: false rolling_window: true width: 16 height: 16 resolution: 0.01 transform_tolerance: 1.0 and here is my map yaml file: image: blank_map.pgm resolution: 0.01 origin: [0, 0, 0] occupied_thresh: 0.65 free_thresh: 0.196 # Taken from the Willow Garage map in the turtlebot_navigation package negate: 0 width: 20 height: 20 Anyway, after many tries I found that the map is resized every time I start the corresponding node: [ INFO] [1412758776.217404554]: Using plugin "static_layer" [ INFO] [1412758776.420654688]: Requesting the map... [ INFO] [1412758776.643031834]: Resizing costmap to 600 X 600 at 0.010000 m/pix [ INFO] [1412758776.741877357]: Received a 600 X 600 map at 0.010000 m/pix [ INFO] [1412758776.769172167]: Using plugin "obstacle_layer" and I'm quite sure that causes the problem. Here and here I found similar problems but not the right solution. How to solve this annoying problem? Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-10-08 Post score: 0 Answer: I must say that the answer given by David was very helpful and works.
But I should admit that as a simple workaround I've created a bigger map using Gimp. Just create a brand-new image with the dimensions you want; the map dimension given above is the dimension of the image in pixels. So if you want to create a map 20x20 [meter], just create with Gimp a 2000x2000 page, save it as a .pgm file and you are done. The big advantage is that you don't need to play too much with your global and local_costmap files. The map is going to be visualized in RViz very nicely. The second thing that I discovered is that the origin of your map must be an integer, since doubles don't work. So if you have, let's say: image: my_map.pgm resolution: 0.01 origin: [-10, -10, 0] occupied_thresh: 0.65 free_thresh: 0.196 # Taken from the Willow Garage map in the turtlebot_navigation package negate: 0 width: 20 # m Don't forget to create a file with 2000 pixel width height: 20 # m Don't forget to create a file with 2000 pixel height then put origin [-10,-10,0] as int and not as double or float. I hope that helps. Originally posted by Andromeda with karma: 893 on 2014-10-09 This answer was ACCEPTED on the original site Post score: 0
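The pixel arithmetic behind this workaround is simple enough to sanity-check with a small helper (my own illustration, not part of any ROS API): image size in pixels = metric size / resolution.

```python
def map_pixels(width_m, height_m, resolution_m_per_px):
    """Image size in pixels needed for a map of the given metric size.

    round() guards against floating-point artifacts such as
    20 / 0.01 evaluating to 1999.9999999999998.
    """
    return (round(width_m / resolution_m_per_px),
            round(height_m / resolution_m_per_px))

# A 20 m x 20 m map at 0.01 m/pixel needs a 2000 x 2000 image,
# matching the Gimp page size recommended above.
print(map_pixels(20, 20, 0.01))  # (2000, 2000)
```

The same arithmetic also explains the "Resizing costmap to 600 X 600" log line in the question: a 6 m x 6 m map at 0.01 m/pix is exactly 600 pixels on a side.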
{ "domain": "robotics.stackexchange", "id": 19672, "tags": "ros, local-costmap, global-costmap" }