Add multiples of 3 or 5 below 1000: can this code be optimised? Project Euler #1
Question: I have seen some solutions here for the same problem (this and this) that show better ways to solve it. I just want to know how good or bad my code and way of approaching this problem are. using System; public class Program { public static void Main() { int sum = 0; for (int i = 0; 3 * i < 1000; i++) { sum += 3 * i; if (5 * i < 1000 && (5 * i) % 3 != 0) { sum += 5 * i; } } Console.WriteLine(sum); } } Answer: The main problem The main issue here is that you define i in terms of it being a "multiple of 3", and then also try to use this number to generate the "multiples of 5". This doesn't make sense, as there is no inherent relationship between being a threefold and a fivefold number. It would've made sense if you were doing it for 3 and 6, for example, because 3 and 6 are related: // i*3 is obviously always a multiple of 3 sum += 3 * i; if ((3 * i) % 2 == 0) { // Now we know that i*3 is also a multiple of 6 } But that is not the case here. The for loop's readability I understand your idea: you wanted to only iterate over multiples of three which stay under 1000. While I will suggest changing this approach (later in the answer), it could've been written in a much more readable manner: for (int i = 0; i < 1000; i += 3) This will iterate over i values of 0, 3, 6, 9, 12, 15, ... and will skip all values in between. The benefit here is that you don't need to work with i*3 all the time; you can just use i itself. You will need to iterate over the multiples of 5 separately. However, should you keep using this approach, I would suggest splitting these loops anyway. The algorithm Your approach works, but it's not the easiest approach. If I put your approach into words: Add every multiple of 3 to the sum. Also add the corresponding multiple of 5, but only if it's still below 1000 and it's not already divisible by 3. The issue here is in how you handle the multiples of five. You're working with an i that is defined as the allowed values for threefolds. 
For any i >= 200, you're effectively having to manually exclude the fivefold value. You're using a different approach for the 5 than you are using for the 3, even though the logic is exactly the same. That's not a reusable approach. Secondly, there is a readability problem. Your code should be trivially readable, and I simply wasn't able to understand your intention. I had to google what the question was before I could understand what your code was trying to achieve. So let me offer a better approach, first putting it into words: Check every number from 0 to 1000 (not including 1000). If it is divisible by 3 or it is divisible by 5, then add it to the sum. This can be put into code, step by step: // Check every number from 0 to 1000 (not including 1000) for(int i = 0; i < 1000; i++) { var isDivisibleBy3 = i % 3 == 0; var isDivisibleBy5 = i % 5 == 0; // If it is divisible by 3 or it is divisible by 5 if(isDivisibleBy3 || isDivisibleBy5) { // then add it to the sum sum += i; } } Note how the code exactly mirrors my algorithm description. You don't need to use the booleans; I simply added them to simplify the example. if(i % 3 == 0 || i % 5 == 0) would be equally okay to use because it's still reasonably readable. If the calculations become more complex, I suggest always using the booleans so you neatly break your algorithm down into small and manageable steps. It will do wonders for your code's readability, and it does not impact performance (the compiler will optimize this in a release build). A LINQ variation This can be further shortened using LINQ: var sum = Enumerable.Range(0, 1000) .Where(i => i % 3 == 0 || i % 5 == 0) .Sum(); LINQ is just nicer syntax, but it uses a for/foreach iteration in the background, so I suspect it won't be more performant than the previous example. But I do consider this highly readable. 
Maximizing performance The previous suggestion maximizes readability, but it does so at the cost of performance, as it now has to loop over 1000 values and evaluate them all. You already linked several other answers that clearly dive deeper into the code in order to maximize the performance; I hope you can see that this dramatically impacts the readability. For example: public void Solve(){ result = SumDivisbleBy(3,999)+SumDivisbleBy(5,999)-SumDivisbleBy(15,999); } private int SumDivisbleBy(int n, int p){ return n*(p/n)*((p/n)+1)/2; } By itself, I would have no idea what this code does. I can sort of understand the intention of Solve(), but it's not quite apparent how SumDivisbleBy() works. SumDivisbleBy() has effectively become impossible to maintain. If you needed to introduce a change, you would effectively have to reverse engineer it before you could alter it. This means that starting from scratch is the better option, which is clearly not a good thing. However, when performance is the main focus, this is acceptable. I would, however, strongly urge you to document the algorithm in comments, specifically to help future readers understand how/why this works. Note that AlanT's answer contains a more readable variant of the SumDivisbleBy() method, which already helps a lot with understanding the algorithm. The clear naming used clarifies what the algorithm does, which is the main goal of writing readable code (explaining why something works is only a secondary goal and not always required).
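The closed-form trick discussed at the end can be sanity-checked against the readable loop; here is a small Python sketch (a translation of the idea, not the original C#; the function names are mine):

```python
def sum_divisible_by(n, p):
    # Sum of the multiples of n up to p: n + 2n + ... + kn = n * k * (k + 1) / 2,
    # where k = p // n is how many multiples of n fit in [1, p].
    k = p // n
    return n * k * (k + 1) // 2

def solve_fast(limit=1000):
    # Inclusion-exclusion: multiples of 15 are counted by both the 3s and the 5s.
    p = limit - 1
    return sum_divisible_by(3, p) + sum_divisible_by(5, p) - sum_divisible_by(15, p)

def solve_readable(limit=1000):
    # The straightforward loop from earlier in the answer.
    return sum(i for i in range(limit) if i % 3 == 0 or i % 5 == 0)

print(solve_fast(), solve_readable())  # 233168 233168
```

The closed form runs in constant time regardless of the limit, which is where the performance win over the 1000-iteration loop comes from.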
{ "domain": "codereview.stackexchange", "id": 35083, "tags": "c#, beginner, programming-challenge" }
What are some unanswered questions in electromagnetism?
Question: What don't we know about how electricity and/or magnetism works? Answer: We don't really understand why charge is quantized. Nor do we know whether there ought to be magnetic monopoles. These two things seem linked. Dirac gave an argument for charge quantization in the early days, but this presupposed the existence of a magnetic monopole. In Maxwell's equations, it would be completely natural to imagine the existence of magnetic monopoles, and some high energy theories predict their existence, but we've seen no direct evidence of them yet. And to my knowledge, though it's not my area of expertise, I don't think there have been any other really compelling arguments for charge quantization to compete with Dirac's original proposal. We also don't understand high temperature superconductivity yet. We also have a hard time computing electronic energy bands for complicated structures. Current Density Functional Theory techniques have errors on the percent level.
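For reference, Dirac's argument mentioned above yields a concrete condition; in Gaussian units (a sketch, with the standard normalization), an electric charge $e$ and a magnetic monopole charge $g$ must satisfy

```latex
e\,g = \frac{n\,\hbar c}{2}, \qquad n \in \mathbb{Z},
```

so the existence of even a single monopole anywhere would force every electric charge to be an integer multiple of $\hbar c / 2g$.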
{ "domain": "physics.stackexchange", "id": 14974, "tags": "electromagnetism, electricity, soft-question, quantum-electrodynamics, classical-electrodynamics" }
RG of the Gaussian Model: Finding the scaling factor
Question: I'm studying the Renormalization Group treatment of the simple Gaussian model, $$\beta H = \int d^d r \left[ \frac{t}{2} m^2(r) + \frac{K}{2}|\nabla m|^2 - hm(r)\right]$$ In momentum space, the Hamiltonian reads $$\beta H = \frac{1}{(2\pi)^d} \int d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2\right] - hm(0)$$ and the $q$-integral runs from $0$ to some short-wavelength (ultraviolet) cut-off $\Lambda$. The coarsening is done by splitting this integral into one from $0$ to $\Lambda/b$ and one from $\Lambda/b$ to $\Lambda$, and because the Gaussian model is so simple, the two integrals don't mix and decouple nicely. The high-momentum integral contributes just a constant additional term to the free energy, so we ignore it, and then we are left with $$\beta H = \frac{1}{(2\pi)^d} \int_0^{\Lambda/b} d^d q \left[\frac{t + q^2 K}{2} |m(q)|^2 \right] - hm(0).$$ The rescaling is done by introducing a new momentum $q' = bq$. For the order parameter $m(q)$, one makes the scaling assumption $m'(q') = m(q)/z$. Then I can rewrite $\beta H$ in terms of the new momentum variable $q'$, and then I demand that the rescaled Hamiltonian has the same functional form as the old Hamiltonian, which allows me to read off $$t' = b^{-d} z^2 t$$ $$K' = b^{-d-2} z^2 K$$ $$h' = zh$$ Now we don't know $z$, and in the literature I found that one somehow demands that $K' = K$, so that $z = b^{d/2 + 1}$, and I don't really understand why we can make that demand, and if there are other possibilities. Could we also demand that $t' = t$ and read off a different $z$? Answer: Yes. It is merely a convenient choice to fix the term in front of the gradient. You may make some other choice and get some other equivalent RG flow. An interesting toy to play with is a field with kinetic term $K_1 q^2 + K_2 |q|^\alpha$ where $\alpha >0$ is some parameter. You can write an RG flow choosing to fix either $K_2$ or $K_1$. Look at the resulting fixed points and their stability in either flow.
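To spell out the consequence of the conventional choice: demanding $K' = K$ in $K' = b^{-d-2} z^2 K$ fixes $z = b^{d/2+1}$, and substituting this $z$ into the other recursion relations gives

```latex
t' = b^{-d} z^2 t = b^{2} t, \qquad h' = z h = b^{\,1 + d/2} h,
```

i.e. the Gaussian eigenvalues $y_t = 2$ and $y_h = 1 + d/2$. Demanding $t' = t$ instead would give $z = b^{d/2}$ and a flow in which $K$ (not $t$) runs, which is the "other equivalent RG flow" the answer refers to.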
{ "domain": "physics.stackexchange", "id": 54757, "tags": "statistical-mechanics, renormalization" }
Difference in equations of LASER and plane monochromatic wave
Question: I was asked in an interview to write the equation of a plane monochromatic wave. I wrote it as: $$ E = E_0 \exp(i(\mathbf{k}\cdot\mathbf{x} - \omega t)) $$ Now, they asked me to differentiate between this and laser light. Although I know the basics of both, this seemed difficult, as the main properties of laser light are directionality, monochromaticity, and coherence. The plane wave equation seemed to suggest all of that. The interviewer hinted to me that the answer is related to temporal coherence, but I couldn't figure out why it would not be coherent. Can someone explain how? Answer: Only your interviewer knows exactly what they meant, but I suspect they might mean that lasers remain coherent only for times of the order of $100\mathrm{ns}$ to $1\mu\textrm{s}$. This means the equation is really something like: $$ E(x,t) = E_0 \exp(i(kx - \omega t + \phi(t))) $$ where $\phi(t)$ is a phase offset that varies randomly on a timescale of $100\mathrm{ns}$ to $1\mu\textrm{s}$. So at any time $t$ the light from the laser remains coherent everywhere in space, i.e. the phase varies sinusoidally with distance. That's why lasers produce such good interference patterns. But if we picked a point in space and looked at the variation with time we'd find the phase varied sinusoidally only for short periods and experienced random jumps every now and then. Usually for lasers we refer to a coherence length, which is just $c$ times the coherence time. Typically this is around $100\textrm{m}$ though it depends on the type of laser and can vary widely.
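The coherence-length figure at the end is easy to reproduce; a quick sketch (taking $c \approx 3\times10^8\,\mathrm{m/s}$ and the coherence times quoted in the answer):

```python
c = 3.0e8  # speed of light in m/s (rounded)

# Coherence length = c * coherence time, for the two timescales in the answer.
for tau in (100e-9, 1e-6):
    print(f"coherence time {tau:.0e} s -> coherence length {c * tau:.0f} m")
```

This gives 30 m and 300 m for the two ends of the quoted range, consistent with the "typically around 100 m" figure.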
{ "domain": "physics.stackexchange", "id": 94925, "tags": "optics, laser" }
What is the dimensionality of each part of a covariant derivative?
Question: In the standard model, we have the following covariant derivative: $$D_\mu = \partial_\mu - ig_sG_\mu^a\lambda_a-igW_\mu^a\frac{\sigma^a}{2}-ig'B_\mu\frac{Y}{2}$$ If we let this act on e.g. the left-handed quark ($SU(2)$) doublet $Q_L$, then how exactly does this work dimensionally? $G$, $W$ and $B$ are $4\times1$ vectors $\sigma^a$ are $2\times2$ matrices For the first component $\mu = 0$, this means that the third term will give us scalar x matrix x doublet = $2\times1$. But $\lambda^a$ in the second term is a $3\times3$ matrix, so this doesn't work when applying the same logic. Do I need to find a 2D representation of $SU(3)$? Is there even one? I guess you could go to a 3D representation of $SU(2)$ instead, but then I don't see how this works on the $SU(2)$ doublet. Answer: It's neither $2 \times 2$ nor $3 \times 3$. It's in the tensor product space, which is much larger. First consider a simpler example. A single quark is a Dirac spinor in the $3$ of $SU(3)$. A Dirac spinor has $4$ components, but a color triplet has $3$ components, so does the quark really have $3$ or $4$ components? Neither. It actually has $12$ components, which might be written as $$q^i_\alpha, \quad i \in \{1, 2, 3\}, \quad \alpha \in \{0, 1, 2, 3\}.$$ All matrices that act on the quark field are $12 \times 12$, usually the tensor product of a $3 \times 3$ matrix and a $4 \times 4$ matrix, so properly we should write something like $$M^{ij}_{\alpha \beta} q^j_\beta.$$ However, writing this many indices gets tiresome, so we suppress them whenever we can, such as if some of the tensor product factors are just the identity matrix. The same is going on in your example above. You have $4 \times 4$ matrices in spinor space, $3 \times 3$ matrices in $SU(3)$ color space, and $2 \times 2$ matrices in isospin space. So really, everything is properly a $24 \times 24$ matrix. Every term should have a total of $7$ indices ($1$ Lorentz, $2$ spinor, $2$ color, $2$ isospin). 
Almost all of these indices are suppressed to keep the notation simple. (See here for more detail.)
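The bookkeeping in the answer is just multiplication of tensor-factor dimensions; a small Python sketch (the helper name is mine):

```python
def kron_shape(a, b):
    # Shape of the Kronecker (tensor) product of two matrices given as
    # (rows, cols) pairs: the dimensions simply multiply.
    return (a[0] * b[0], a[1] * b[1])

# A single quark: 3x3 color matrices tensored with 4x4 spinor matrices.
print(kron_shape((3, 3), (4, 4)))  # (12, 12)

# The covariant derivative acting on Q_L: color x spinor x isospin.
print(kron_shape(kron_shape((3, 3), (4, 4)), (2, 2)))  # (24, 24)
```

A suppressed factor just means the corresponding slot holds the identity matrix of that dimension.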
{ "domain": "physics.stackexchange", "id": 54858, "tags": "lagrangian-formalism, gauge-theory, group-theory, representation-theory, dimensional-analysis" }
What is GST RhoA fusion protein?
Question: Would you please explain to me what GST RhoA fusion protein means and how it differs from RhoA protein? I am doing an experiment using this protein. Answer: GST is glutathione-S-transferase, an enzyme that binds reasonably tightly to glutathione. It is often used as a tag to purify proteins of interest. In your case, GST has been fused to RhoA most likely so that you have some means to purify it by applying it to a glutathione-conjugated column. See here for more information.
{ "domain": "biology.stackexchange", "id": 6930, "tags": "biochemistry, proteins" }
Clean way to listen to asynchronous messages
Question: I am currently writing an Engine that runs on a background thread which produces outputs asynchronously. Those outputs are gathered by a Pipe, with a readabilityHandler that sends the messages back to the main thread using DispatchQueue.main.async like so: private init() { myPipe.fileHandleForReading.readabilityHandler = { [weak self] handle in let data = handle.availableData if let chunk = String(data: data, encoding: .utf8), chunk.trimmingCharacters(in: .whitespacesAndNewlines) != "" { DispatchQueue.main.async { self?.processOutput(chunk: chunk) } } } } In the processOutput method, I gather those messages (that can sometimes arrive as a block separated by newline characters), I parse them, and I perform an action when a specific message is obtained. Ex: Manager.shared.send("my command") output: > info calculating... > info calculating... > info calculating... > info calculating... > result 42 // -> triggers an action > info calculating... > info calculating... To achieve this I rely on """listeners""" like so: private var listeners: [(token: String, handler: (String) -> ())] = [] private func processOutput(chunk: String) { let split = chunk.components(separatedBy: .newlines) let messages = split.map({ String($0) }).filter({ $0 != "" }) messages.forEach{ message in NSLog(" \(message)") // 1. for each message, I get the listeners, if any self.listeners.filter({ message.starts(with: $0.token) }).forEach { listener in // 2. they perform their action (main thread) listener.handler(message) } // 3. I discard the listener when done self.listeners.removeAll(where: { message.starts(with: $0.token) }) } } Therefore, when I need to get a message from a command I write: Manager.shared.listeners.append( (token: "result", handler: { print($0) }) ) Manager.shared.send("my command") // or Manager.shared.send("my command", awaits:"result") { print($0) } It actually works pretty well for now, but I have the strange feeling that this listener method is way too convoluted. 
I don't like the fact that I'm keeping track of all the listeners in an array and then removing them manually when done; I reckon they should release themselves. Plus, in the future, I want to be able to have listeners that perform their task for a given amount of time (e.g. get options) or until the end of a timer (e.g. error handling when the response never comes). Isn't there a cleaner way to achieve this? Is there a pattern I can read about that covers this very specific subject? Sidenote: The Engine is actually a 3rd party library game engine that I cannot modify. I cannot pass those messages directly into my methods; I have to rely on sending commands and reading the output via a pipe, but this part works actually pretty well. It's only the listener part that I'm afraid of doing wrong. Thank you a lot Answer: A few observations: You should be careful with readabilityHandler. Your code presumes that a chunk coming in will represent a full line of output. But you risk having it capture fractional portions of one or more lines. You shouldn't make any assumptions in this regard. It may be working right now, but it is brittle, subject to significant behavior change resulting from innocuous and undocumented changes in the process you are piping. I might advise reading the Data into a buffer until you reach an end of line. A "read line" sort of pattern. You could buffer this yourself, but you can also use lines, an AsyncSequence provided by Swift concurrency, fileHandleForReading.bytes.lines. Then you can use the for try await line in lines { … } pattern. See ProcessWithLines in this answer. Regarding your "listeners" structure: I agree that this "closure lookup" sort of pattern feels over-engineered. This sort of approach is generally used if you are writing some general-purpose third-party API and app developers are going to be passing tokens/strings and closures to some SDK. But if you are integrating with some well-established 3rd party engine, this feels like overkill. 
Also, assuming for a second that you really wanted this "closure lookup" sort of structure, a dictionary seems a more promising/logical approach. It could conceivably have O(1) performance, rather than the O(n) scan through an array of tuples. Now, clearly, if it was going to be something more dynamic like regex lookups or some awk-like pattern matching, perhaps you're stuck with this O(n) sort of pattern, but it seems like a strange overhead to introduce without some compelling need. (Again, if I understand your goals, you are trying to integrate with an established 3rd party engine, presumably not to design a general purpose API that works with any random pipe.) In short, we need to see more information about the nature of the commands and responses you will be getting via these pipes before we can advise you further on alternatives to this "listeners" pattern.
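To make the dictionary suggestion concrete, here is a minimal Python sketch of the shape (not Swift, and the token matching is exact-first-word rather than the original starts(with:) scan, so it illustrates the O(1) lookup idea rather than being a drop-in replacement):

```python
from collections import defaultdict

class Dispatcher:
    # token -> list of one-shot handlers, instead of scanning an array of tuples.
    def __init__(self):
        self.listeners = defaultdict(list)

    def listen(self, token, handler):
        self.listeners[token].append(handler)

    def process(self, chunk):
        for message in filter(None, chunk.split("\n")):
            token = message.split(" ", 1)[0]  # first word plays the role of the token
            # pop() both looks up and removes the handlers, so each listener is
            # discarded automatically once fired (the "releases itself" wish).
            for handler in self.listeners.pop(token, []):
                handler(message)

d = Dispatcher()
results = []
d.listen("result", results.append)
d.process("info calculating...\nresult 42\ninfo calculating...")
print(results)  # ['result 42']
```

Timed or repeating listeners could then be modeled by storing small listener objects (with an expiry or repeat count) in the same dictionary instead of bare closures.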
{ "domain": "codereview.stackexchange", "id": 43425, "tags": "swift, grand-central-dispatch" }
How to properly define a zero-knowledge proof system with oracle access
Question: An $IP$ system $(P,V)$ is zero-knowledge (ZK) for some language $L$ if for every probabilistic polynomial-time verifier $V^*$ there exists a probabilistic polynomial-time algorithm $S$ such that for every $x\in L$ the output distributions of $(P,V^*)(x)$ and $S(x)$ are "close". Close means that the probability distributions of their outputs are either "equal" (perfect ZK), "statistically close" (statistical ZK), or "computationally indistinguishable" (computational ZK). My question is, if we allow the honest verifier $V$ to have access to an oracle $O$, do we have to allow any other cheating verifier $V^*$ to also have access to $O$? Or, do we just simply consider all cheating verifiers without access to any oracle? My main problem is that, intuitively, the complexity classes for ZK might be very different depending on whether we allow the cheating verifiers such oracle access. I cannot find any reference regarding relativized classes of ZK. If anyone could direct me to any reference, I would really appreciate it. I'm aware of the Random Oracle model (RO), where all parties, even malicious ones, have access to a random oracle. But the goals, motivation, and applications of RO are not the ones I'm interested in. I'm interested in relativization results in the Baker-Gill-Solovay sense. Answer: Regarding your reference request, I remember the following paper off the top of my head: William Aiello and Johan Håstad. 1991. Relativized perfect zero knowledge is not BPP. Inf. Comput. 93, 2 (August 1991), 223-240. http://dx.doi.org/10.1016/0890-5401(91)90024-V. Now, to answer your question: Consider any NP-complete language $L$, let $G$ be a generator for a public-key encryption scheme, and define $O$ as a decryption oracle for this encryption system. Finally, let $(P,V)$ be a protocol for proving membership in $L$. 
Based on $(P,V)$, define the following modified protocol $(P',V')$: In the beginning of the protocol, $P'$ uses $G$ to generate a key pair $(pk,sk)$, and sends $pk$ to $V'$. Moreover, any message exchanged in $(P,V)$ is encrypted using $pk$ in $(P',V')$. The prover can find the decrypted messages using $sk$, and the honest verifier $V'$ can use the oracle $O$ to find the decrypted messages. However, no other verifier $V^*$ can follow the protocol if we disallow their access to $O$. Therefore, the protocol is ZK for any cheating verifier! In general, I think the definition should allow any verifier to access $O$, to prevent "twisted" protocols such as the one described above. The same holds for the distinguisher and simulator. However, I believe the prover need not necessarily have access to $O$.
{ "domain": "cstheory.stackexchange", "id": 2255, "tags": "cc.complexity-theory, cr.crypto-security, zero-knowledge" }
What angle does light leave a 400micron fibre if it entered from a single mode fibre (0.14NA)?
Question: We would like to connect a single mode coupled laser to a 400micron fibre. Ideally, we'd like the light to come out of the 400micron fibre at all angles or close to the maximum divergence angle, which in our case is 0.5NA. I've seen from other threads that the larger core fibres preserve the entry aperture better than smaller core fibres, so I'm not expecting a positive answer here. If this is right, are there any lens-ended fibres or micro lenses that could sit between the single mode and multimode that would result in greater exit angles from the 400micron fibre? To be clear on the setup: Laser - single mode fibre - 400micron fibre - free space The single mode fibre will be very short (<10cm) and the 400 micron fibre will be 1-5m in length. Thanks a lot in advance Answer: What you are looking for is for the power in the 400-um fiber to be close to its equilibrium distribution among the available modes by the time it exits the fiber. Without special care, the launch from the 9-um fiber to the 400-um fiber will excite only a (probably fairly small) subset of the available modes. And 1-5 m of fiber is likely not enough on its own to allow the optical power to redistribute itself among the modes. I can think of a few ways to get closer to an equilibrium distribution at the fiber exit: At the single-mode to multi-mode transition, try to increase the number of modes excited by defocussing the beam so as to get closer to an equilibrium mode distribution on launch. This will likely degrade the coupling efficiency, resulting in less power being launched into the multimode fiber. Put some fairly tight bends in the multimode fiber so as to encourage redistribution of the modal energy. This will cause power loss as some power is coupled into cladding modes or even radiating modes. Increase the length of the multimode fiber to allow the power to naturally redistribute between the modes due to imperfections in the fiber along its length. 
Unfortunately I don't work with 400-um fiber so can't give more specific guidance on how to re-focus the launch beam, how tight a bend is needed to redistribute the modes, how long a fiber is needed to redistribute the modes, etc., but you should be able to work these things out with some fairly simple experiments. Also, as I've mentioned, all these methods are going to increase optical loss, which could be a problem depending on your overall requirements.
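For a sense of scale, the NA figures in the question convert to cone half-angles via $\mathrm{NA} = n\sin\theta$; a quick sketch (assuming launch into air, $n = 1$):

```python
import math

def half_angle_deg(na, n_medium=1.0):
    # NA = n * sin(theta); solve for the half-angle of the acceptance cone.
    return math.degrees(math.asin(na / n_medium))

for na in (0.14, 0.5):  # the single-mode fibre's NA vs the 400-micron fibre's max NA
    print(f"NA = {na}: half-angle ~ {half_angle_deg(na):.1f} deg")
```

So the single-mode launch fills only an ~8 degree half-angle cone of a fibre that could accept ~30 degrees, which is why the launch starts so far from the equilibrium mode distribution.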
{ "domain": "physics.stackexchange", "id": 55276, "tags": "optics, laser, fiber-optics, laser-interaction" }
How do we know neutrons have no charge?
Question: We observe that protons are positively charged, and that neutrons are strongly attracted to them, much as we would expect of oppositely charged particles. We then describe that attraction as non-electromagnetic "strong force" attraction. Why posit an ersatz force as responsible, rather than describing neutrons as negatively charged based on their behavior? I keep running up against circular and tautological reasoning from the laity in explanation of this (i.e. "We know they aren't charged because we attribute their attraction to a different force, and we ascribe this behavior to a different force because we know they aren't charged"). I'm looking for an empirically-based (vs. purely theoretical/mathematical) explanation. Can someone help? Answer: Free neutrons in flight are not deflected by electric fields. Objects which are not deflected by electric fields are electrically neutral. The energy of the strong proton-neutron interaction varies with distance in a different way than the energy in an electrical interaction. In an interaction between two electrical charges, the potential energy varies with distance like $1/r$. In the strong interaction, the energy varies like $e^{-r/r_0}/r$, where the range parameter $r_0$ is related to the mass of the pion. This structure means that the strong interaction effectively shuts off at distances much larger than $r_0$, and explains why strongly-bound nuclei are more compact than electrically-bound atoms.
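The range parameter $r_0$ in the answer can be estimated from the pion mass as its reduced Compton wavelength, $r_0 = \hbar/(m_\pi c)$; a quick numeric sketch (constants rounded):

```python
# hbar * c in MeV * fm, and the charged-pion rest energy in MeV (approximate values).
hbar_c = 197.327
m_pi_c2 = 139.570

r0 = hbar_c / m_pi_c2  # in fm
print(f"r0 ~ {r0:.2f} fm")  # ~1.41 fm
```

Since atoms are roughly $10^5$ times larger (~0.1 nm), this single number already explains why strongly bound nuclei are so much more compact than electrically bound atoms.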
{ "domain": "physics.stackexchange", "id": 61489, "tags": "electromagnetism, neutrons, protons, baryons" }
How to design interface of deep-copy behaving pointer containers?
Question: I want to make a container which manages big objects and performs deep copies on copy construction and copy assignment. I also like the interface of the std containers, so I wanted to implement it using public inheritance: template <class TBigObject> class Container : public std::vector< std::shared_ptr<TBigObject> > { public: Container(int nToAllocate){ /* fill with default constructed TBigObjects */} Container(const Container& other){ /* deep copy */ } Container(Container&&) = default; Container& operator = (const Container& other){ /* deep copy */ } Container& operator = (Container&&) = default; }; I heard of the "Thou shalt not inherit from std containers" maxim and the reasons behind it. So I decided to make an alternative: template <class T> using Container = std::vector< std::shared_ptr<T> >; template <class T> Container<T> defaultAllocate(int nItems); template <class T> Container<T> deepCopy(const Container<T>& other); This looks more expressive, but using the code feels strange and verbose: class Big; Container<Big> someContainer = defaultAllocate<Big>(12); Container<Big> copy = deepCopy(someContainer); instead of: class Big; Container<Big> someContainer(12); Container<Big> copy = someContainer; I am a beginner programmer and I don't want to make errors early on. I would like to ask your advice on which choice to make. Or even better, if there is a third option. Answer: I think you are going about it the wrong way. Containers already perform a copy on copy construction/assignment because they are designed for value objects, not pointers. What you really want is a wrapper object for pointers that performs a deep copy when that object is copied. This has the added advantage of working with all containers. If you define the move operators then it also works well with re-size operations, as the move semantics will be applied to the wrapper when the container is re-sized. 
The other advantage here is you can make the wrapper behave like the underlying type. So you get easy access to all the algorithms. template<typename T> class Deep { T* value; public: Deep(): value(nullptr) {} // Take ownership in this case. Deep(T* value): value(value) {} Deep(T const& value): value(new T(value)) {} ~Deep() {delete value;} Deep(Deep&& move): value(nullptr) {std::swap(value, move.value);} // Note this does do a deep copy of the pointee. Deep(Deep const& copy): value(new T(*copy.value)) {} // Unified assignment via copy-and-swap: the by-value parameter handles both copy and move, so a separate Deep&& overload would be ambiguous. Deep& operator=(Deep copy) {std::swap(value, copy.value);return *this;} // Note: undefined behavior if used when value is nullptr. // You may want to add logic here depending on use case. operator T&() {return *value;} operator T const&() const {return *value;} T* release() {T* tmp = value;value = nullptr;return tmp;} }; int main() { std::vector<Deep<int>> deep(5,7); }
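The copy-the-pointee idea is not C++-specific; as an illustrative analog (not a translation of the Deep<T> code above), Python's copy module makes the shallow-vs-deep distinction directly visible:

```python
import copy

class Big:
    def __init__(self, data):
        self.data = data

original = [Big(1), Big(2)]
shallow = list(original)        # copies the references, like vector<shared_ptr<T>>
deep = copy.deepcopy(original)  # clones the pointees, like a container of Deep<T>

original[0].data = 99
print(shallow[0].data, deep[0].data)  # 99 1
```

The shallow copy sees the mutation because it shares the underlying objects; the deep copy does not, which is exactly the behavior the wrapper arranges for any standard container.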
{ "domain": "codereview.stackexchange", "id": 3871, "tags": "c++, c++11" }
Simple URL shortener
Question: This URL shortener is just for my own use and it runs on my local machine. I created this for fun and I doubt if I would ever use it in production. Is my script any good? shortener.php <?php require_once 'lib/db.php'; function random_string($length, $charset = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789') { $chars_length = strlen($charset) - 1; $str = ''; for ($i = 0; $i < $length; ++$i) { $str .= $charset[random_int(0, $chars_length)]; } return $str; } $url = $error = ''; if (isset($_GET['id'])) { $id = trim($_GET['id']); $stmt = $pdo->prepare('SELECT url FROM url WHERE id = ?'); $stmt->execute([$id]); $url = $stmt->fetchColumn(); header('Location: ' . ($url ?: "http://{$_SERVER['HTTP_HOST']}{$_SERVER['SCRIPT_NAME']}")); exit; } else { if ($_SERVER['REQUEST_METHOD'] === 'POST') { $url = $_POST['url']; if (filter_var($url, FILTER_VALIDATE_URL) !== FALSE) { do { $id = random_string(5); $stmt = $pdo->prepare('SELECT COUNT(*) FROM url WHERE id = ?'); $stmt->execute([$id]); $count = $stmt->fetchColumn(); } while ($count > 0); $stmt = $pdo->prepare('INSERT INTO url (id, url) VALUES (?, ?)'); $stmt->execute([$id, $url]); header("Location: http://{$_SERVER['HTTP_HOST']}{$_SERVER['SCRIPT_NAME']}"); exit; } else { $error = 'Not a valid URL'; } } $last_id = $pdo->query('SELECT id FROM url ORDER BY created DESC LIMIT 1')->fetchColumn(); } ?><!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Extremely Simple Short URL Generator</title> </head> <body> <form method="post"> <input type="text" name="url" placeholder="URL here" value="<?= htmlspecialchars($url) ?>"> <button type="submit">Submit</button> </form> <?php if ($last_id): ?><p><a href="<?= "{$_SERVER['SCRIPT_NAME']}?", http_build_query(['id' => $last_id]) ?>">Latest Link</a></p><?php endif ?> <?php if ($error): ?><p style="color: red"><?= $error ?></p><?php endif ?> </body> </html> I'm aware that separating the view from the script is better, but for a simple app like this, I don't 
think it's necessary. db.php <?php $pdo = new PDO( 'mysql:host=host;dbname=dbname;charset=utf8mb4', 'username', 'password', [ PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC, PDO::ATTR_EMULATE_PREPARES => FALSE, ] ); Any improvement is welcome. Answer: Feedback The script looks pretty good. For a small script that runs on your local machine it appears to suffice for your needs. Is the goal to only show the latest link? Or would there be a use for showing previous links? Suggestions Variable naming This variable naming might be misleading: $chars_length = strlen($charset) - 1; Because the value is one less than the length of the string. If I was working with that code, I would ask you to rename it to something more appropriate, like: $max_index = strlen($charset) - 1; since that value is ultimately used to determine the maximum index used to get a character out of the string. Use uuid() or uuid_short() Bearing in mind that you might likely need to alter the length of the id column, instead of generating your own random string, you could consider using MySQL's uuid() or uuid_short() function. That might eliminate the need to repeat the loop of calling random_string() and running a query just to see if the return value isn't used by any records. One other approach to a really short URL would be to just have an auto-increment integer field starting at 1... up until there are 9,999 records, the id would have 1-4 digits... Unspecified parameter $charset $charset doesn't appear to be passed by the one place that the function is called. 
Unless you plan to utilize that, the default value could be made a constant: define('CHARSET', 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'); function random_string($length) { $chars_length = strlen(CHARSET) - 1; for vs foreach I thought about suggesting you use foreach(range(1, $length) but after reading posts like this it might not be wise to get into a habit of that, in case you work on code that deals with large amounts of data. If you were more used to using foreach, range() can allow you to use foreach like a for statement: foreach(range(1, $length) as $i) { $str .= $charset[random_int(0, $chars_length)]; }
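Following the auto-increment aside in the answer above, a short slug can also be derived deterministically from the integer id via base-62 encoding over the same 62-character set. This is a sketch of my own (not from the answer), in Python for brevity:

```python
# Hypothetical helper: turn an auto-increment id into a short slug.
CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def encode_id(n):
    """Map a non-negative integer id to a base-62 slug."""
    if n == 0:
        return CHARSET[0]
    digits = []
    while n:
        n, r = divmod(n, 62)
        digits.append(CHARSET[r])
    return "".join(reversed(digits))
```

This avoids the generate-and-check loop entirely, at the cost of making the ids guessable (which may or may not matter for a local-only tool).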
{ "domain": "codereview.stackexchange", "id": 29898, "tags": "php, mysql, random, pdo, url" }
Why can't we have P=NP but P not contained in NP-hard?
Question: Why isn't it possible to have P=NP, but not all problems in P are in NP-hard? The diagram of the various classes would look something like: We have P=NP, but not all problems in P lie inside NP-hard. The intersection of P and NP-hard is still NP-complete. Ladner's theorem seemed relevant, but it holds when P != NP while my proposed scenario is for P=NP. Answer: NP-hardness is technically defined in terms of Karp reduction, in which the answer to the target problem must be the answer to the source problem, and not in terms of Cook reduction, in which the target problem is used as an oracle. This means that the empty language and the language of all words aren't in NP-hard even if P=NP. Excluding those trivial cases, every problem in P is polynomial-time reducible to every other problem in P, so if P=NP then every problem in NP is polynomial-time reducible to every problem in P (except those two), i.e., every problem in P (except those two) is NP-hard.
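To see why the reduction is trivial when both languages are in P (and the target is non-trivial), here is a toy sketch of my own illustration, not from the answer:

```python
def is_even(n):
    """Source language EVEN - decidable in polynomial time."""
    return n % 2 == 0

def is_div3(n):
    """Target language DIV3 - also in P, and non-trivial."""
    return n % 3 == 0

def reduce_even_to_div3(n):
    # Karp (many-one) reduction: decide the source instance outright,
    # then emit a canonical yes-instance (3) or no-instance (1) of the
    # target. This needs the target to have BOTH kinds of instances -
    # exactly why the empty language and the all-words language are
    # excluded in the answer above.
    return 3 if is_even(n) else 1
```

The same trick works for any pair of non-trivial languages in P, which is the heart of the "everything in P except two languages is NP-hard if P=NP" claim.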
{ "domain": "cs.stackexchange", "id": 20511, "tags": "complexity-theory, np-hard" }
Why do we integrate up to the invariances
Question: Following Witten's essay What every physicist should know about string theory I understood that the Hilbert-Einstein action is invariant under diffeomorphism in 1D and under conformal mapping in 2D. As I understood it, this is the reason for doing the integral over the metrics up to diffeomorphism or up to conformal mapping. Did I understand this correctly? If the answer to 1 is yes, why does this: "the Hilbert-Einstein action is invariant under diffeomorphism in 1D and under conformal mapping in 2D" imply that we must "integrate over the metrics up to diffeomorphism or up to conformal mapping"? Answer: This answer ended up being much longer than I expected so tl/dr: The "invariances" Witten is talking about are redundancies in our description of the theory. So if we want to integrate over the physical space, we must integrate "up to diff and Weyl". If we did integrate over everything, we would basically end up integrating over the same theory multiple times, so we would be over-counting the integral. Let's understand this by looking first at a much simpler example. A circle $S^1$ can be described by an angle parameter $\theta \in [0,2\pi)$. Integrating a function $f(\theta)$ over the circle is simply $$ I = \int_0^{2\pi} d\theta f(\theta). $$ Alternatively, we can describe the circle as the entire real line $\theta \in {\mathbb R}$ with the identification $\theta \sim \theta + 2\pi$. In this description, a function $f(\theta)$, $\theta \in {\mathbb R}$ is a function on the circle only if $f(\theta+2\pi) = f(\theta)$. The periodicity in $f$ is required since $\theta$ is identified as shown previously. This description is expressed by $S^1 = {\mathbb R}/{\mathbb Z}$. In this description integrating a function over the circle does not translate to integrating over the entire real line (again because of the identification in $\theta$). Rather, we must choose a "fundamental region" which represents the circle and we will integrate over this fundamental region.
The fundamental region is chosen by requiring that every value of $\theta$ on the real line can be mapped to a value in the fundamental region using the identification for $\theta$. A particularly nice choice for the fundamental region is $[0,2\pi)$. It should be clear that every value of $\theta$ can be mapped to this region using the identification. For instance, $$ \theta = 50.67 \sim 50.67-8(2\pi) = 0.4045 \in [0,2\pi). $$ Integration over the circle is now the same thing as integrating over the fundamental region so we have $$ I = \int_0^{2\pi} d\theta f(\theta). $$ Note that we could have chosen ANY other fundamental region also and the result would have been the same. For instance, if we choose the region $[-2\pi,0)$, then $$ I = \int_{-2\pi}^0 d\theta f(\theta) = \int_0^{2\pi} d\theta f(\theta-2\pi) = \int_0^{2\pi} d\theta f(\theta). $$ In the last step, we have used the periodicity of $f$. If we had integrated over the entire real line, we would have gotten a nonsensical answer, but one that has an interesting interpretation $$ I' = \int_{\mathbb R} d\theta f(\theta) = \sum_{n=-\infty}^\infty \int_{2n\pi}^{2(n+1)\pi} d\theta f(\theta) = \sum_{n=-\infty}^\infty \int_0^{2\pi} d\theta f(\theta) = \sum_{n=-\infty}^\infty I $$ Integrating over ${\mathbb R}$ is therefore the same as integrating over $S^1$ (infinitely) many times. $I'$ is clearly divergent due to the infinite sum, but we can rewrite this in an interesting way. The infinite sum is the volume of the group ${\mathbb Z}$ (i.e. the number of elements) so we can write $$ I' = \text{vol}({\mathbb Z}) I \quad \implies \quad I = \frac{I'}{\text{vol}({\mathbb Z})}. $$ Both the numerator and denominator are infinite, but the "infinities cancel" to give a finite answer at the end. Let us go back to the original question posed by OP. Everything should become clear if we simply relate everything Witten said (or was meaning to say, IMHO) to the circle example presented above.
Gravity is the dynamical theory of metrics (metrics $\leftrightarrow \theta$). However, not all metrics are "different". Rather, any two metrics related by diffeomorphisms and Weyl transformations are identified, $$ \text{metric} \sim \text{metric} + \text{diff and Weyl} \quad \leftrightarrow \quad \theta \sim \theta + 2\pi $$ All physical functions must be "invariant" under these transformations $$ f(\text{metric}) = f( \text{metric} + \text{diff and Weyl}) \quad \leftrightarrow \quad f(\theta) = f(\theta + 2\pi). $$ In the path integral, our goal is to integrate over "different" metrics. But remember, not all metrics are different - some are identified. Thus, as in the circle example, we must choose a fundamental region and only integrate over that region. This is what Witten means when he says integrate "up to diff and Weyl". Note that following the last circle example, there is a better way to describe such an integral. We could instead integrate over ALL metrics and then divide by the volume of diff and Weyl, i.e. $$ I = \frac{I(\text{all metrics})}{\text{vol(diff and Weyl)}} \quad \leftrightarrow \quad I = \frac{I'}{\text{vol}({\mathbb Z})} . $$ In practice, this is often the way integrals over metrics are performed. They are much easier to do since we don't have to do this "up to diff and Weyl" business. As long as we divide by the volumes the answer is the same.
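As a sanity check (my own addition, not part of the original answer), the circle toy model can be verified numerically: integrating a periodic function over any fundamental region gives the same result.

```python
import math

def left_riemann(f, a, b, n=100000):
    # crude left Riemann sum over [a, b)
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda t: 2.0 + math.cos(t)             # periodic: f(t + 2*pi) == f(t)
I1 = left_riemann(f, 0.0, 2.0 * math.pi)    # fundamental region [0, 2*pi)
I2 = left_riemann(f, -2.0 * math.pi, 0.0)   # another fundamental region
# both should equal the exact value 4*pi
```

Summing copies of this integral over every interval $[2n\pi, 2(n+1)\pi)$ is exactly the divergent $I' = \text{vol}({\mathbb Z})\, I$ over-counting described above.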
{ "domain": "physics.stackexchange", "id": 77690, "tags": "string-theory, conformal-field-theory, quantum-gravity, invariants, diffeomorphism-invariance" }
$sss$ decay and violation of strangeness
Question: Why can the hyperon $\Omega^{-}$ not decay by strong interaction? It seems that strangeness must be violated, but why is it the only way? Answer: The reason is that it is the lightest baryon with strangeness $-3$: a strangeness-conserving decay would have to produce a baryon together with strange mesons, and the lightest such final state (e.g. $\Xi K$, with combined mass of roughly $1315 + 494 \approx 1809$ MeV against the $\Omega^{-}$ mass of about $1672$ MeV) is heavier than the $\Omega^{-}$, so it can't decay into these. It is a general principle of energy conservation and strangeness conservation: the lightest state carrying a given value of a conserved quantum number can't decay without changing that number. In this case you need the strangeness to go down, and this requires weak decay, because, as dmckee says in his comment, the strong interaction respects flavor.
{ "domain": "physics.stackexchange", "id": 5146, "tags": "particle-physics, quarks" }
Simple Phase Locked Loop
Question: Here is a simple Phase Locked Loop, which is a circuit used in radio communications for synchronisation between transmitter and receiver. The loop works by calculating the (phase) difference between the input signal and a reference oscillator, and then adjusting the reference until the phase difference is zero. In this code, the adjustment is made by approximating a digital biquad filter's output - simply by multiplying by the Q factor of the filter. My main concern with this code is that it would require some re-writing if I wanted to extend it. import numpy as np import pdb class SimPLL(object): def __init__(self, lf_bandwidth): self.phase_out = 0.0 self.freq_out = 0.0 self.vco = np.exp(1j*self.phase_out) self.phase_difference = 0.0 self.bw = lf_bandwidth self.beta = np.sqrt(lf_bandwidth) def update_phase_estimate(self): self.vco = np.exp(1j*self.phase_out) def update_phase_difference(self, in_sig): self.phase_difference = np.angle(in_sig*np.conj(self.vco)) def step(self, in_sig): # Takes an instantaneous sample of a signal and updates the PLL's inner state self.update_phase_difference(in_sig) self.freq_out += self.bw * self.phase_difference self.phase_out += self.beta * self.phase_difference + self.freq_out self.update_phase_estimate() def main(): import matplotlib.pyplot as plt pll = SimPLL(0.002) num_samples = 500 phi = 3.0 frequency_offset = -0.2 ref = [] out = [] diff = [] for i in range(0, num_samples - 1): in_sig = np.exp(1j*phi) phi += frequency_offset pll.step(in_sig) ref.append(in_sig) out.append(pll.vco) diff.append(pll.phase_difference) #plt.plot(ref) plt.plot(ref) plt.plot(out) plt.plot(diff) plt.show() Here is the output. Answer: I didn't find much to say about your coding style. Maybe in_sig is not a perfect name: signal_in would be easier to read in my opinion (putting the _in at the end is more consistent with your other variable names) it would require some re-writing if I wanted to extend it. What kind of extension?
Using iterators to get the samples For your input functions, you could use a generator instead: def sinusoid(initial_phi, frequency_offset): """Generates a sinusoidal signal""" phi = initial_phi while True: yield np.exp(1j*phi) phi += frequency_offset When initially called, this function will return an iterator: iter_sin = sinusoid(3.0, -0.2) And add set_signal_in and signal_out methods in your SimPLL class: def set_signal_in(self, signal_in): """Set an iterator as input signal""" self.signal_in = signal_in def signal_out(self): """Generate the output steps""" for sample_in in self.signal_in: self.step(sample_in) yield self.vco (maybe signal_out could generate some tuples with the sample_in and phase_difference, or even yield self) Usage You can then do a pll.set_signal_in(iter_sin). If you have a list of data, you can then do pll.set_signal_in(list_of_values). Plotting For plotting a limited number of points, you may use itertools.islice
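Putting the generator idea together with the question's loop, a runnable end-to-end sketch might look like this (a condensed, numpy-free rewrite of the question's class - the method behaviour follows the original, but this is an illustration, not the exact posted code):

```python
import cmath
import math
from itertools import islice

class SimPLL:
    """Condensed, dependency-free rewrite of the question's PLL."""
    def __init__(self, lf_bandwidth):
        self.phase_out = 0.0
        self.freq_out = 0.0
        self.vco = cmath.exp(1j * self.phase_out)
        self.phase_difference = 0.0
        self.bw = lf_bandwidth
        self.beta = math.sqrt(lf_bandwidth)

    def step(self, sample_in):
        # phase detector, loop filter and VCO update in one call
        self.phase_difference = cmath.phase(sample_in * self.vco.conjugate())
        self.freq_out += self.bw * self.phase_difference
        self.phase_out += self.beta * self.phase_difference + self.freq_out
        self.vco = cmath.exp(1j * self.phase_out)

def sinusoid(initial_phi, frequency_offset):
    """Infinite complex sinusoid generator, as suggested in the answer."""
    phi = initial_phi
    while True:
        yield cmath.exp(1j * phi)
        phi += frequency_offset

pll = SimPLL(0.002)
diffs = []
for sample in islice(sinusoid(3.0, -0.2), 500):  # islice bounds the infinite generator
    pll.step(sample)
    diffs.append(pll.phase_difference)
```

The `islice` call is what keeps the infinite generator finite, which is the same trick the answer suggests for plotting.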
{ "domain": "codereview.stackexchange", "id": 18638, "tags": "python, numpy, signal-processing" }
Fast Reduction from RSA to SAT
Question: Scott Aaronson's blog post today gave a list of interesting open problems/tasks in complexity. One in particular caught my attention: Build a public library of 3SAT instances, with as few variables and clauses as possible, that would have noteworthy consequences if solved. (For example, instances encoding the RSA factoring challenges.) Investigate the performance of the best current SAT-solvers on this library. This triggered my question: What's the standard technique for reducing RSA/factoring problems to SAT, and how fast is it? Is there such a standard reduction? Just to be clear, by "fast" I don't mean polynomial time. I'm wondering whether we have tighter upper bounds on the reduction's complexity. For example, is there a known cubic reduction? Answer: One approach to encode Factoring (RSA) to SAT is to use multiplier circuits (every circuit can be encoded as CNF). Let's assume we are given an integer $C$ with $2n$ bits, $C=(c_1,c_2,\cdots,c_{2n})_2$. We are interested in finding two $n$-bit integers $A=(a_1,\cdots,a_n)$ and $B=(b_1,\cdots,b_n)$ whose product is $C=A*B$. The most naive encoding can be something like this; we know that: $$c_{2n}= a_n \land b_n$$ $$c_{2n-1}= (a_n\land b_{n-1}) \oplus (a_{n-1}\land b_n)$$ $$\text{Carry: } d_{2n-1}= (a_n\land b_{n-1}) \land (a_{n-1}\land b_n)$$ $$c_{2n-2}= (a_n\land b_{n-2}) \oplus (a_{n-1}\land b_{n-1}) \oplus (a_{n-2}\land b_{n}) \oplus d_{2n-1}$$ ... Then using the Tseitin transformation, the above encoding can be translated into CNF. This approach produces a relatively small CNF. But this encoding does not support "Unit Propagation" and so the performance of SAT solvers is really bad. There are other circuits for multiplication which can be used for this purpose, but they produce a larger CNF.
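To make the Tseitin step concrete, here is a toy sketch (my own, not from the answer) of the clause set for a single AND gate $c \leftrightarrow (a \land b)$ - the basic building block of the multiplier encoding - verified by brute force:

```python
from itertools import product

# Variables: 1 = a, 2 = b, 3 = c. Positive literal = var, negative = NOT var.
# Tseitin clauses for c <-> (a AND b):
CLAUSES = [(-1, -2, 3), (1, -3), (2, -3)]

def satisfied(clauses, assignment):
    """assignment maps variable number -> bool."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# brute-force check: the clauses hold exactly when c == (a AND b)
for a, b, c in product([False, True], repeat=3):
    assert satisfied(CLAUSES, {1: a, 2: b, 3: c}) == (c == (a and b))
```

A full multiplier encoding chains many such gate gadgets (AND, XOR, carry logic) together, one auxiliary variable per gate, which is why the CNF stays small but propagates poorly.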
{ "domain": "cstheory.stackexchange", "id": 4513, "tags": "cc.complexity-theory, cr.crypto-security, sat, reductions, factoring" }
A curious thought about inclined planes
Question: So imagine you have a block on an inclined plane. The angle of inclination is such that the block doesn't move - it is at rest. There's friction and normal force too but the key point is that the block is not moving. Would the angle of inclination (theta max) be larger if the system was on the moon instead of the earth? or put generally - does the angle of inclination depend on the gravity? Answer: Would the angle of inclination (theta max) be larger if the system was on the moon instead of the earth? If by "theta max" you mean the maximum incline angle where the block will remain at rest (not start to slide), then the answer is no. The maximum incline angle before sliding is impending is independent of the mass of the object or the force of gravity. This can be shown as follows: The component of the force of gravity acting down the plane is given by $$F_{g}=mg\sin\theta$$ The normal force on the plane is $mg\cos\theta$. Thus the maximum possible static friction force acting up the plane is given by $$F_{s-max}=\mu_{s}mg\cos\theta$$ Where $\mu_s$ is the coefficient of static friction between the block and the incline surface. Impending sliding of the block is when the two forces are equal $$mg\sin\theta=\mu_{s}mg\cos\theta$$ $$\mu_{s}=\tan\theta_{max}$$ $$\theta_{max}=\tan^{-1}\mu_{s}$$ So $\theta_{max}$ is independent of the mass and the magnitude of the force of gravity. It depends only on the coefficient of static friction. So the force of gravity, be it $g$ or $\frac{1}{6}g$ (approximate on the moon) is not a factor. Of course if there were no gravity, the block will not move regardless of the inclination since there is no downward force along the incline nor normal force perpendicular to the incline (i.e., the net force acting on the block is zero). Hope this helps.
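The cancellation of $g$ can also be checked numerically - a small sketch of my own, solving the force balance by bisection rather than using the closed form:

```python
import math

def theta_max(mu_s, g):
    """Solve m*g*sin(t) = mu_s*m*g*cos(t) for t by bisection on (0, pi/2).
    The mass m cancels immediately; g should cancel too."""
    f = lambda t: g * math.sin(t) - mu_s * g * math.cos(t)
    lo, hi = 0.0, math.pi / 2
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

earth = theta_max(0.5, 9.81)  # g on Earth
moon = theta_max(0.5, 1.62)   # g on the Moon (about g/6)
```

Both calls land on the same angle, $\tan^{-1}\mu_s$, regardless of the value of $g$.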
{ "domain": "physics.stackexchange", "id": 92059, "tags": "newtonian-mechanics, friction, free-body-diagram" }
What do the electric field lines look like for a negative particle and a plane?
Question: What do the electric field lines look like for a negative particle and a plane? (the plane doesn't have any charge and it has thickness) Do they look like the following picture or not? (the picture is a little bit exaggerated and the particle is drawn bigger than it is) I know the direction of one of the lines is not right. DO NOT consider it. Answer: This is correct. It should look like the classic image charge solution, the lower half of a dipole.
{ "domain": "physics.stackexchange", "id": 36562, "tags": "electricity, electric-fields, charge" }
What sort of fish is this?
Question: Just saw a meme online with this fish on it. Is this a real fish? Can some aquatic expert identify this for us? Answer: That is a blue parrotfish (Scarus coeruleus): (photo: Marc Tarlock via Wikimedia Commons)
{ "domain": "biology.stackexchange", "id": 9591, "tags": "species-identification, ichthyology" }
To-do app API made with Slim 3
Question: I have put together the back-end (API) with the Slim framework (v3) and MySQL. In index.php I have: use \Psr\Http\Message\ServerRequestInterface as Request; use \Psr\Http\Message\ResponseInterface as Response; require '../vendor/autoload.php'; require '../src/config/db.php'; $app = new \Slim\App; // Todos Routes require '../src/routes/todos.php'; $app->run(); In db.php I have: class db{ // Properties private $dbhost = 'localhost'; private $dbuser = 'root'; private $dbpass = ''; private $dbname = 'todoapp'; // Connect public function connect(){ $mysql_connect_str = "mysql:host=$this->dbhost;dbname=$this->dbname"; $dbConnection = new PDO($mysql_connect_str, $this->dbuser, $this->dbpass); $dbConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); return $dbConnection; } } In todos.php I have: use \Psr\Http\Message\ServerRequestInterface as Request; use \Psr\Http\Message\ResponseInterface as Response; $app = new \Slim\App; $app->options('/{routes:.+}', function ($request, $response, $args) { return $response; }); $app->add(function ($req, $res, $next) { $response = $next($req, $res); return $response ->withHeader('Access-Control-Allow-Origin', '*') ->withHeader('Access-Control-Allow-Headers', 'X-Requested-With, Content-Type, Accept, Origin, Authorization') ->withHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS'); }); // Get Todos $app->get('/api/todos', function(Request $request, Response $response){ $sql = "SELECT * FROM todos"; try{ // Get DB Object $db = new db(); // Connect $db = $db->connect(); $stmt = $db->query($sql); $todos = $stmt->fetchAll(PDO::FETCH_OBJ); $db = null; echo json_encode($todos); } catch(PDOException $e){ echo '{"error": {"text": '.$e->getMessage().'}'; } }); // Add Todo $app->post('/api/todo/add', function(Request $request, Response $response){ $title = $request->getParam('title'); $completed = $request->getParam('completed'); $sql = "INSERT INTO todos (title, completed) VALUES (:title,:completed)"; 
try { // Get DB Object $db = new db(); // Connect $db = $db->connect(); $stmt = $db->prepare($sql); $stmt->bindParam(':title', $title); $stmt->bindParam(':completed', $completed); $stmt->execute(); echo '{"notice": {"text": "Todo Added"}'; } catch(PDOException $e){ echo '{"error": {"text": '.$e->getMessage().'}'; } }); // Update Todo $app->put('/api/todo/update/{id}', function(Request $request, Response $response){ $id = $request->getAttribute('id'); $title = $request->getParam('title'); $completed = $request->getParam('completed'); $sql = "UPDATE todos SET title = :title, completed = :completed WHERE id = $id"; try{ // Get DB Object $db = new db(); // Connect $db = $db->connect(); $stmt = $db->prepare($sql); $stmt->bindParam(':title', $title); $stmt->bindParam(':completed', $completed); $stmt->execute(); echo '{"notice": {"text": "Todo Updated"}'; } catch(PDOException $e){ echo '{"error": {"text": '.$e->getMessage().'}'; } }); // Delete Todo $app->delete('/api/todo/delete/{id}', function(Request $request, Response $response){ $id = $request->getAttribute('id'); $sql = "DELETE FROM todos WHERE id = $id"; try{ // Get DB Object $db = new db(); // Connect $db = $db->connect(); $stmt = $db->prepare($sql); $stmt->execute(); $db = null; echo '{"notice": {"text": "Todo Deleted"}'; } catch(PDOException $e){ echo '{"error": {"text": '.$e->getMessage().'}'; } }); Questions/concerns: Is the application well-structured or should I move the logic into controllers? If I should move the logic into controllers, what would be the best approach to doing that? Post scriptum I have added the front-end of the application here for those that might be interested. Answer: Is the application well-structured or should I move the logic into controllers? I agree with NemoXP that the current format is not well-structured. A file with 112 lines of code isn't horrible in terms of file length, but it handles multiple things - e.g. routing, updating models, etc. 
Separating the code out to controller methods would be wise - especially if multiple developers end up modifying the files. If I should move the logic into controllers, what would be the best approach to doing that? As NemoXP stated: a controller would be a good solution for moving the logic out of the router - e.g. TodoController, with a method for each route. That could have methods to abstract common tasks like returning the JSON, exception handling, etc. Suggestions Avoid re-assignment // Get DB Object $db = new db(); // Connect $db = $db->connect(); With the first assignment $db is assigned an instance of class db. Then in the next assignment that same variable is assigned the return value from the method connect, which is an instance of PDO. I likely mentioned in a review of your JavaScript code that it is wise to use const instead of let to avoid accidental re-assignment. The concept applies here as well - over-writing a variable can make it difficult to "reason" about code if a variable changes type. A better name for the variable in the second assignment would be something that illustrates that it is a connection, not a database - something like $connection, $conn, etc. Limit data returned $sql = "SELECT * FROM todos"; For a small application this likely isn't an issue, especially if there are typically a small number of records and only a few columns. Problems can occur when: the table grows to have many columns - of course it isn't ideal but in real life it does happen. It is best to specify only the columns needed instead of * the number of rows grows to more than is necessary. This can lead to memory issues. It is best to limit the number of results instead of selecting all records the data isn't filtered - perhaps for this application there isn't a need to filter the data but in a larger application, it would be wise to add some conditions to limit the data returned.
Storing credentials and other details private $dbhost = 'localhost'; private $dbuser = 'root'; private $dbpass = ''; private $dbname = 'todoapp'; These are things that should not be stored in a repository. It is wise to store them in a file that is ignored by the VCS - e.g. .env files, which can be included with packages like phpdotenv. Response Type It appears that all routes return strings that are to be interpreted as JSON. In such cases it is appropriate to add a header to describe the Content-Type. While it likely isn't a security hole anymore, at one point ~15 years ago an XSS attack may have been possible if content-type headers weren't set. $response->withHeader('Content-type', 'application/json'); Presuming all routes have that same type, the header could be added with the other headers (e.g. Access-Control-Allow-*). JSON String construction In the route to get all TODO items json_encode() is used to convert the list to JSON format. Yet in other cases, including the catch blocks, a JSON object is created manually - e.g. echo '{"error": {"text": '.$e->getMessage().'}'; This could be simplified using json_encode() echo json_encode(["error" => ["text" => $e->getMessage()]]); The benefit here is no risk of the exception message breaking the string literal - e.g. if it contained a delimiter character like " or }. Actually, now that I think of it, the original line could lead to a JavaScript error because the message is not surrounded by double quotes!
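The string-literal hazard described above is easy to demonstrate; here is a quick check (in Python rather than PHP, purely for illustration):

```python
import json

message = 'Unexpected "quote" in error text'

# manual string building, like the echo '{"error": ...' pattern above:
manual = '{"error": {"text": ' + message + '}}'
# proper serialization:
proper = json.dumps({"error": {"text": message}})

def is_valid_json(s):
    try:
        json.loads(s)
        return True
    except ValueError:
        return False
```

The hand-built string is not valid JSON the moment the message contains a quote or brace, while the serializer escapes it automatically - the same argument applies to PHP's json_encode().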
{ "domain": "codereview.stackexchange", "id": 41453, "tags": "php, sql, api, slim, php7" }
Python: forecast unevenly spaced time-series?
Question: My data has timestamps corresponding to the failure occurrences of a specific component in machinery. The timestamps are not uniformly distributed. My question is: 1) What methods can I use to (almost) accurately forecast future occurrences (timestamps) of failure? 2) What other features can I derive? What I've tried so far: Since the timestamp sequence is unevenly spaced I've derived a feature datediff = the difference between sequential fault occurrences. Since it is now a univariate time-series I have tried classical time-series forecasting methods like ARIMA and SARIMA (hasn't worked out well) I am posting the seasonal decompositions of the time-series freq=7 (weekly) freq=30 acf/pacf Answer: I recommend you use Prophet: Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well. In the documentation you can see that it is really easy to implement in Python. Also you could convert the problem to a supervised learning one. You can read this blog where they give an introduction to how to frame the problem.
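The supervised-learning reframing can be sketched like this (a minimal pure-Python illustration of my own; the lag count and toy timestamps are arbitrary choices):

```python
def to_supervised(timestamps, n_lags):
    """Turn failure timestamps into (lag-features, next-gap) training pairs."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    X, y = [], []
    for i in range(n_lags, len(gaps)):
        X.append(gaps[i - n_lags:i])  # the previous n_lags inter-failure gaps
        y.append(gaps[i])             # the gap to predict
    return X, y

ts = [0, 4, 9, 11, 18, 20, 27]      # toy failure times
X, y = to_supervised(ts, n_lags=2)  # each X[k] holds the 2 previous gaps
```

Any regressor can then be fit on (X, y), and a predicted gap added to the last observed timestamp gives the forecast of the next failure time.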
{ "domain": "datascience.stackexchange", "id": 7081, "tags": "python, time-series, forecasting" }
How to calculate the distance to an object with an Orbbec Astra camera
Question: Hello, I am using an Orbbec Astra camera in order to get the distance to an already detected object. I have detected the object by using OpenCV and the /brg/image_raw topic from the Astra camera. Now I want to estimate the distance to the object by using depth in order to send a command (move forward or stay) to the robot. I am tracking that object. Could someone help me with that, or tell me where I can find material about distance estimation with the Orbbec camera? Originally posted by Yehor on ROS Answers with karma: 166 on 2019-03-24 Post score: 0 Answer: There are probably 3 major steps you should consider. You must get that bounding box / segmented pixels / whatever way you detected the object with respect to the depth frame. You can look at TF to get into the depth frame, then project them into the pixel frame of the depth camera. There are tons of great resources to cover this pixel projection work in calibration literature and CV basics. Now that you have your detection in the depth frame, you can then take your bounding box / segmented pixels to find the pixels that belong to your object. If a segmented mask, you can look at the depth values directly. If you have a bounding box, cluster the foreground of the object in the depth image using one of a number of clustering techniques, or just use all the values in the box for a first-order estimate. Then take all the pixels you think belong to the object, from a segmented mask or from the bounding box clusters, and average them. That should be a good estimate of distance. Originally posted by stevemacenski with karma: 8272 on 2019-03-25 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Yehor on 2019-03-27: How to check the depth values? Do you mean to check the value of the pixel? Comment by Yehor on 2019-03-27: and what is TF?
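The last two steps (selecting the object's depth pixels and reducing them to one distance) can be sketched as follows - a simplified stand-in of my own, using a median over valid bounding-box pixels instead of explicit foreground clustering:

```python
def object_distance(depth_image, bbox, invalid=0):
    """Median of valid depth pixels inside bbox = (x, y, w, h).
    Depth sensors commonly report 0 for pixels with no reading."""
    x, y, w, h = bbox
    values = sorted(depth_image[r][c]
                    for r in range(y, y + h)
                    for c in range(x, x + w)
                    if depth_image[r][c] != invalid)
    return values[len(values) // 2] if values else None

# toy 4x4 depth map in millimetres: object near 800 mm, background at
# 3000 mm, one invalid (0) reading next to the object
depth = [[3000, 3000, 3000, 3000],
         [3000,  800,  810,    0],
         [3000,  805,  795, 3000],
         [3000, 3000, 3000, 3000]]
d = object_distance(depth, (1, 1, 2, 2))  # bounding box around the object
```

The median is more robust than a plain mean when a few background or invalid pixels leak into the box; for boxes that contain a lot of background, the clustering step the answer describes becomes necessary.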
{ "domain": "robotics.stackexchange", "id": 32732, "tags": "ros, opencv, ros-kinetic, astra, depth" }
Gauge the symmetry $φ \to φ + a(x)$ for a free massless real scalar field
Question: How does one alter the Lagrangian density for a real scalar field $$\frac{∂_μφ∂^μφ}{2}$$ such that it will be invariant under the gauge transformation $φ → φ + a(x)$? For a complex scalar field with the internal transformation $φ → e^{i \lambda (x)}φ$ the Lagrangian can be altered by using the covariant derivative instead of the usual $∂_\mu$ to remain locally invariant (correct me if I'm wrong). I just can't find the $φ \to φ + a(x)$ transformation in any textbook, apart from the global transformation when $a$ is a constant. Any pointers would be much appreciated! Answer: If $$\tag{1} \delta\varphi~=~\varepsilon$$ is a global shift symmetry, we can gauge the symmetry, i.e. enhance it to a local symmetry by (i) introducing a gauge field $A_{\mu}$ with gauge symmetry $$\tag{2} \delta A_{\mu} ~=~\partial_{\mu}\varepsilon, $$ and (ii) replace partial derivatives $\partial_{\mu}\varphi$ with covariant derivatives $$\tag{3} D_{\mu}\varphi~=~\partial_{\mu}\varphi- A_{\mu}$$ in the action. The latter is known as minimal coupling.
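To check this explicitly (a short verification of my own, in the notation of the answer): combining (1)-(3), $$ \delta(D_{\mu}\varphi) ~=~ \delta(\partial_{\mu}\varphi) - \delta A_{\mu} ~=~ \partial_{\mu}\varepsilon - \partial_{\mu}\varepsilon ~=~ 0, $$ so the gauged kinetic term $\frac{1}{2}D_{\mu}\varphi D^{\mu}\varphi$ is invariant under the local shift symmetry.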
{ "domain": "physics.stackexchange", "id": 26422, "tags": "lagrangian-formalism, symmetry, gauge-theory, field-theory, gauge-invariance" }
How do I publish a folder which contains 100 images over a ROS topic using a custom message type
Question: I don't know how to use a folder which contains 100 images with a custom message type. Originally posted by Abinaya on ROS Answers with karma: 1 on 2014-03-17 Post score: 0 Answer: Write yourself a little publisher like: #include <iostream> #include <list> #include <vector> #include <ros/ros.h> #include <sensor_msgs/Image.h> #include <cv_bridge/cv_bridge.h> #include <opencv2/highgui/highgui.hpp> int main ( int argc, char** argv ) { if ( argc < 2 ) { std::cout << "no image names given" << std::endl; return -1; } ros::init( argc, argv, "image_publisher" ); ros::NodeHandle lo_nh; try { std::vector<cv::Mat> lv_images; ROS_INFO( "Loading images" ); for ( int lo_i = 1; lo_i < argc; lo_i++ ) { lv_images.push_back( cv::imread( std::string( argv[ lo_i ] ), CV_LOAD_IMAGE_COLOR ) ); } ros::Publisher lo_pub = lo_nh.advertise<sensor_msgs::Image>( "/image", 5 ); int lo_rate = 10; ROS_INFO_STREAM( "Publishing images at " << lo_rate << "hz" ); ros::Rate lo_rate1( lo_rate ); std::vector<cv::Mat>::iterator lo_iter_imgs( lv_images.begin() ), lo_iter_imgs_end( lv_images.end() ); for ( ; lo_iter_imgs != lo_iter_imgs_end ; ++lo_iter_imgs ) { cv_bridge::CvImage lo_img; lo_img.encoding = "bgr8"; lo_img.image = *lo_iter_imgs; lo_pub.publish( lo_img.toImageMsg() ); ros::spinOnce(); lo_rate1.sleep(); } } catch ( const std::exception& lo_e ) { std::cout << "Exception in main: " << lo_e.what() << std::endl; return -1; } catch( ... ) { std::cout << "Unknown exception in main" << std::endl; return -1; } return 0; } And call it with all images in the current folder using a shell script like: #!/bin/bash echo "Working ..." FILES= for i in $(ls *.png); do FILES="${FILES} ${i}" done rosrun my_pkg my_image_publisher $FILES echo "Done" Originally posted by Wolf with karma: 7555 on 2014-03-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 17317, "tags": "ros, image, custom-message" }
Identify audio using Gracenote
Question: This script records audio from the computer's output and passes it to a C program that fingerprints it and queries the Gracenote database. If it identifies the audio then it offers options to look it up on Discogs. I made this because I wanted a way to identify tracks in DJ mixes that didn't involve using Shazam on my phone. Running the script requires setting up audio devices using Soundflower and installing SwitchAudioSource to programmatically switch device (details here). I am particularly interested in comments on the design. I think the most significant problem is that the main() function is overly dense. I decided not to use a more object-oriented style because I wasn't sure of the benefit over using a few functions. If I were to do more with either Discogs or Gracenote I would probably use the Python API clients they both provide, so rather than recreate a partial implementation of them I just used several functions. Would an OO-style have been preferable here? The functions could easily be split into modules as they have quite distinct tasks. Does the length of this script justify breaking it into separate modules? This is also the first time I have written something using .rc files and command-line flags (most of my experience is JS in the browser) so I am interested in views on whether I have used them sensibly. 
#!/usr/bin/python
import wave
import time
import subprocess
import os.path
import json
import argparse
import webbrowser
import sys

import requests
import keyring
import pyaudio
from rauth import OAuth1Service

# ---------------- Config -----------------
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 6
SAVE_PATH = os.path.expanduser("~") + "/Music/recordings/"
WAVE_OUTPUT_FILENAME = "temp_{}.wav".format(int(time.time()))
COMPLETE_NAME = os.path.join(SAVE_PATH, WAVE_OUTPUT_FILENAME)
CONFIG_PATH = os.path.expanduser("~") + "/.identifyaudiorc"

def load_user_config(path):
    user_config = {}
    try:
        with open(path, "r") as f:
            for line in f:
                split = line.split(" ")
                user_config[str(split[0])] = str(split[1]).strip()
    except:
        raise IOError("Unable to read config file")
    return user_config

config = load_user_config(CONFIG_PATH)

# ---------------- Setup ------------------
class GracenoteError(Exception):
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)

p = pyaudio.PyAudio()

# ---------------- Discogs ----------------
discogs = OAuth1Service(
    name="discogs",
    consumer_key=config["DISCOGS_CONSUMER_KEY"],
    consumer_secret=config["DISCOGS_CONSUMER_SECRET"],
    request_token_url=config["DISCOGS_REQUEST_TOKEN_URL"],
    access_token_url=config["DISCOGS_ACCESS_TOKEN_URL"],
    authorize_url=config["DISCOGS_AUTHORIZE_URL"],
    base_url=config["DISCOGS_BASE_URL"])

def discogs_get_master(artist_name, album_name):
    # TODO: sometimes comes up with nothing when it should find something
    payload = {
        "key": config["DISCOGS_KEY"],
        "secret": config["DISCOGS_SECRET"],
        "artist": artist_name,
        "release_title": album_name,
        "type": "master"
    }
    r = requests.get("https://api.discogs.com/database/search", params=payload)
    result = r.json()["results"]
    if len(result):
        return r.json()["results"][0]
    else:
        raise RuntimeError("No Discogs master found")

def discogs_get_release(master_id):
    release = requests.get("https://api.discogs.com/masters/"+str(master_id)+"/versions",
                           params={"per_page": 1})
    return release.json()["versions"][0]

def discogs_get_oauth_session():
    access_token = keyring.get_password("system", "access_token")
    access_token_secret = keyring.get_password("system", "access_token_secret")
    if access_token and access_token_secret:
        session = discogs.get_session((access_token, access_token_secret))
    else:
        request_token, request_token_secret = discogs.get_request_token()
        authorize_url = discogs.get_authorize_url(request_token)
        webbrowser.open(authorize_url, new=2, autoraise=True)
        log("To enable Discogs access, visit this URL in your browser: "+authorize_url)
        oauth_verifier = raw_input("Enter key: ")
        session = discogs.get_auth_session(request_token, request_token_secret,
                                           method="POST",
                                           data={"oauth_verifier": oauth_verifier})
        keyring.set_password("system", "access_token", session.access_token)
        keyring.set_password("system", "access_token_secret", session.access_token_secret)
    return session

def discogs_add_wantlist(session, username, release_id):
    r = session.put("https://api.discogs.com/users/"+username+"/wants/"+str(release_id),
                    header_auth=True,
                    headers={
                        "content-type": "application/json",
                        "user-agent": "identify-audio-v0.2"})
    return r.status_code

# ----------- Audio devices ---------------
def find_device(device_sought, device_list):
    for device in device_list:
        if device["name"] == device_sought:
            return device["index"]
    raise KeyError("Device {} not found.".format(device_sought))

def get_device_list():
    num_devices = p.get_device_count()
    device_list = [p.get_device_info_by_index(i) for i in range(0, num_devices)]
    return device_list

def get_soundflower_index():
    device_list = get_device_list()
    soundflower_index = find_device("Soundflower (2ch)", device_list)
    return soundflower_index

def get_current_output():
    device_list = get_device_list()
    try:
        find_device("USB Audio Device", device_list)
        return "USB Audio Device"
    except:
        return "Built-in Output"

def get_multi_device(output):
    if output == "USB Audio Device":
        return "Multi-Output Device (USB)"
    else:
        return "Multi-Output Device (Built-in)"

# ---------- Recording to file ------------
def record_audio(device_index, format, channels, rate, chunk, record_seconds):
    stream = p.open(format=format,
                    channels=channels,
                    rate=rate,
                    input=True,
                    input_device_index=device_index,
                    frames_per_buffer=chunk)
    log("Recording for {} seconds...".format(record_seconds))
    frames = []
    for i in range(0, int(rate / chunk * record_seconds)):
        data = stream.read(chunk)
        frames.append(data)
    stream.stop_stream()
    stream.close()
    return frames

def write_file(frames, path, format, channels, rate):
    wf = wave.open(path, "wb")
    wf.setnchannels(channels)
    wf.setsampwidth(p.get_sample_size(format))
    wf.setframerate(rate)
    wf.writeframes(b"".join(frames))
    wf.close()
    return path

# ----------- Gracenote -------------------
def query_gracenote(sound_path):
    # TODO - handle double quotes in the output
    out = subprocess.check_output([config["APP_PATH"], sound_path])
    result = json.loads(out)
    try:
        error = result["error"]
    except:
        return result
    raise GracenoteError(error)

def log(statement):
    if not args["quiet"]:
        print statement

# ----------- Main ------------------------
def main():
    output = get_current_output()
    multi_out = get_multi_device(output)
    FNULL = open(os.devnull, "w")
    if subprocess.call(["SwitchAudioSource", "-s", multi_out], stdout=FNULL, stderr=FNULL) == 0:
        length = RECORD_SECONDS
        match = False
        attempts = 0
        while not match and attempts <= 2:
            input_audio = record_audio(get_soundflower_index(), FORMAT, CHANNELS, RATE, CHUNK, length)
            try:
                write_file(input_audio, COMPLETE_NAME, FORMAT, CHANNELS, RATE)
            except IOError:
                log("Error writing the sound file.")
            resp = query_gracenote(COMPLETE_NAME)
            if resp["result"] is None:
                log("The track was not identified.")
                length += 3
                attempts += 1
                if attempts <= 2:
                    log("Retrying...")
            else:
                match = True
        if match:
            print json.dumps(resp["result"], indent=4, separators=("", " - "),
                             ensure_ascii=False).encode("utf8")
            if args["discogs"] or args["want"]:
                try:
                    master = discogs_get_master(resp["result"]["artist"], resp["result"]["album"])
                except RuntimeError as e:
                    log(e)
                else:
                    url = "https://discogs.com" + master["uri"]
                    log("Find online: " + url)
                    if args["open"]:
                        webbrowser.open(url, new=2, autoraise=True)
                    want_add = None
                    if not args["want"] and not args["open"]:
                        want_add = raw_input("Add this to your Discogs wantlist? y/n/o (to open in browser): ")
                    if want_add == "o" or args["open"]:
                        webbrowser.open(url, new=2, autoraise=True)
                        want_add = raw_input("Add this to your Discogs wantlist? y/n: ")
                    if want_add == "y" or args["want"]:
                        release = discogs_get_release(master["id"])
                        session = discogs_get_oauth_session()
                        status = discogs_add_wantlist(session, config["DISCOGS_USERNAME"], release["id"])
                        if status == 201:
                            log("Added '{}' to your Discogs wantlist".format(release["title"]))
                        else:
                            log("Error code {} adding the release to your Discogs wantlist".format(status))
    else:
        raise RuntimeError("Couldn't switch to multi-output device.")
    p.terminate()
    os.remove(COMPLETE_NAME)
    if subprocess.call(["SwitchAudioSource", "-s", output], stdout=FNULL, stderr=FNULL) == 0:
        return
    else:
        raise RuntimeError("Couldn't switch back to output.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Identify currently playing audio")
    parser.add_argument("--discogs", "-d", action="store_true")
    parser.add_argument("--want", "-w", action="store_true")
    parser.add_argument("--open", "-o", action="store_true")
    parser.add_argument("--quiet", "-q", action="store_true")
    parser.add_argument("--verbose", "-v", action="store_true")
    args = vars(parser.parse_args())
    if args["verbose"]:
        main()
    else:
        try:
            main()
        except GracenoteError as e:
            print "Gracenote error: "+e
            sys.exit(1)
        except Exception as e:
            print e
            sys.exit(1)

Answer: To OO or to not to OO

Your program is fairly small and well organized and easy to read. I don't see any reason at this point to make it "more OO".
If in the future you write more programs like this you'll want to organize the code into libraries to re-use common code. But at this point what you have is perfectly fine. Now on to some stylistic comments...

Global variables

You have three kinds of global variables:

- constants like RECORD_SECONDS, CHUNK
- values which invoke code and could vary from run to run, e.g. WAVE_OUTPUT_FILENAME, CONFIG_PATH
- values which are initialized to objects, e.g. p and discogs

First of all, as globals the p and discogs variables should have ALL CAPS names (and p should be given a more descriptive name -- is it even used?). This will make it obvious to the reader that they are globals. Secondly, I would move the initialization of the path name globals to main:

def main():
    user_home = os.path.expanduser("~")
    config_path = os.path.join(user_home, ".identifyaudiorc")
    save_path = os.path.join(user_home, "Music", "recordings")
    complete_name = os.path.join(save_path, "temp_{}.wav".format(int(time.time())))
    ...

(Note that os.path.join discards everything before an absolute component, so the recordings directory must be given without a leading slash.)

In fact, none of these values need to be global since main passes them to whatever function needs them - and that's a good aspect of your code! As an enhancement, consider adding command line options to set these paths - I think being able to set the location of the recordings directory would be convenient.

load_user_config()

What you have is fine, but be aware that there are tons of python libraries to load (and save) configuration files. Just google, e.g., "python load config file".

long if-blocks

main has a very long spanning if-block, and to make the logic more apparent I would organize it like this:

def main():
    ...
    if subprocess.call(["SwitchAudioSource", ...]) != 0:
        raise RuntimeError("Couldn't switch to multi-output device.")
    ...

Now the reader can easily see what happens when the call to SwitchAudioSource fails without having to search for the matching else clause. Another long if-block is the if match: statement.
It would be more readable if it were written:

    if not match:
        return
    print json.dumps(resp["re...

In fact, now it is obvious that you aren't doing anything if there isn't a match. Perhaps you want something like:

    if not match:
        raise RuntimeError("Unable to read audio.")
    ...
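The guard-clause style recommended above can be sketched in isolation. The function and data below are stand-ins invented for illustration, not the script's real names:

```python
def process(match, resp=None):
    # Early returns/raises flatten the nesting: each failure case exits
    # immediately, so the happy path reads top to bottom with no else ladder.
    if not match:
        raise RuntimeError("Unable to read audio.")
    return resp["result"]

# Failure path exits at the guard clause.
try:
    process(False)
except RuntimeError as e:
    outcome = str(e)

# Success path is the unindented remainder of the function.
result = process(True, {"result": "Artist - Track"})
```

The same shape applies to the SwitchAudioSource check: raise as soon as the call fails, and the rest of main no longer needs to live inside an if-block.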
{ "domain": "codereview.stackexchange", "id": 20027, "tags": "python, audio" }
Is the universe considered to be flat?
Question: I've read various articles and books (like this one) stating that we are not certain about the geometry of the universe, but there were experiments on-going or planned that would help us find out. Recently though, I've watched a lecture by cosmologist Lawrence Krauss where he seems to categorically assert that the universe has been proven to be flat by the BOOMERanG experiment. Here's the relevant portion of the talk. I've looked around and there are still articles stating that we still don't know the answer to this question, like this one. So, my question is two-fold: Am I mixing concepts and talking about different things? If not, then is this evidence not widely accepted for some reason? What reason would that be? Answer: I think the reason you're suffering from conflicting sources is that you're mixing both new and old, out-of-date pieces of information. First off, the book you cited was published in 2001 - 15 years ago - and the other article you cite was published in 1999 - 17 years ago. There's been a lot of work done in the past 15 years, often under the term "precision cosmology", in an attempt to really nail down the precise content, shape, size, etc. of our Universe. By the early 2000's we pretty much knew the science behind everything (we knew about dark matter, dark energy, had well-developed theories on the Big Bang, etc.) but what we didn't have were good, solid, believable numbers to put into these theories, which explains why the flatness of the universe was still contested in your sources. I'll direct you to two incredibly important observatories which have been paramount in achieving our goal of having "good numbers". The first is the Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001, and the second is the Planck satellite, launched in 2009. Both missions were designed to stare intently at the Cosmic Microwave Background (CMB) radiation and try to sort out the treasure trove of information which can be gleaned from it.
In this vein, you might also come upon the Cosmic Background Explorer (COBE), launched in 1989. This satellite had a similar purpose to the other two, but was not nearly precise enough to provide us with good numbers and definitive statements by the early 2000's. For that reason I'll mostly focus on what WMAP and Planck have told us. WMAP was a hugely successful mission which stared at the CMB for 9 years and created the most detailed and comprehensive map of its day. With 9 years of data, scientists were really able to reduce the observational errors on various cosmological quantities, including the flatness of the universe. You can see a table of their final cosmological parameters here. For the flatness, what you want to do is add up $\Omega_b$ (the baryonic matter density), $\Omega_d$ (the dark matter density), and $\Omega_\Lambda$ (the dark energy density). This gives you the overall density parameter, $\Omega_0$, which tells you the flatness of our universe. As I'm sure you know from your sources, if $\Omega_0 < 1$ we have a hyperbolic universe, if $\Omega_0 = 1$ our universe is flat, and $\Omega_0 > 1$ implies a spherical universe. From the results of WMAP, we have $\Omega_0 = 1.000 \pm 0.049$ (someone can check my math), which is very close to one, indicating a flat universe. As far as I know, WMAP was the first instrument to give a truly precise measurement of $\Omega_0$, allowing us to say definitively that our universe appears flat. As you say, the BOOMERanG experiment also provided good evidence for this, but I don't think its results were nearly as powerful as WMAP's. The other important satellite here is Planck. Launched in 2009, this satellite has provided us with the best high-precision measurements of the CMB to date.
I'll let you dig through their results in their paper, but the punchline is that they measure the flatness of our universe to be $\Omega_0 = 0.9986 \pm 0.0314$ (calculated from this result table), again extremely close to one. In conclusion, recent results (within the past 15 years) allow us to definitively state that our Universe appears flat. I don't think, at this time, anyone contests that or believes it is still uncertain. As it usually goes with science, answering one question has only resulted in more questions. Now that we know $\Omega_0 \simeq 1$, we have to ask why is it one? Current theory suggests it shouldn't be - that it should be either enormously small or enormously large. This is known as the Flatness Problem. That in turn delves into the Anthropic Principle as an attempted answer, but then, I'm getting out of the scope of this question.
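The sum described in the answer is easy to check numerically. The density values below are approximate WMAP nine-year numbers used purely for illustration; consult the mission's parameter tables for the exact figures and uncertainties:

```python
# Approximate WMAP 9-year density parameters (illustrative values only).
omega_b = 0.046       # baryonic matter density
omega_d = 0.233       # dark matter density
omega_lambda = 0.721  # dark energy density

# The overall density parameter; a value of 1 means a flat universe.
omega_0 = omega_b + omega_d + omega_lambda
```

With these inputs omega_0 comes out essentially equal to one, matching the answer's point that the measured universe appears flat to within the quoted error bars.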
{ "domain": "astronomy.stackexchange", "id": 5670, "tags": "cosmology, space-geometry" }
How step-up transformers help in transmission of electrical energy over long distances?
Question: I have just read this blog: http://www.blueraja.com/blog/194/do-transformers-obey-ohms-law — the blockquotes below are taken from it:

We connect a 1 volt AC generator to a 1 ohm resistor and measure the current. By Ohm’s Law, we should get 1 ampere of current. Now imagine we stuck a 1:10 transformer in the circuit, splitting our one circuit into two electrically-separate circuits. The confusion arises from the following question: does the current through the resistor go up because the voltage went up, or down because the transformer needs to conserve power? Treating the transformer as a 10-volt AC voltage source in the right circuit, we use Ohm’s law to see that the current through the resistor has gone up – it is now 10 amps. In order to preserve power, this means that in the left circuit our original AC power source is now drawing 100 amps of current, 100x what it was drawing before.

And I have something in my textbook stating:

The large scale transmission and distribution of electrical energy over long distances is done with the use of transformers. The voltage output of generator is stepped-up (so that the current is reduced and consequently, the $I^2R$ loss is cut down). It is then transmitted over long distances to an area sub-station near the consumers. There the voltage is stepped-down. It is further stepped-down at the distributing sub-stations and utility pole before a power supply of $240 V$ reaches our homes.

Well, as told in the blog, step-up transformers increase both voltage and current proportionally on the secondary side (right side), so the $I^2R$ loss increases. This is confusing me. Have I misunderstood something, or do I have a big misconception? Also, why does the textbook refer to $I^2R$ loss rather than $VI$ loss or $\frac{V^2}{R}$ loss, which include voltage, which definitely increases?

Answer:

step-up transformers increases both voltage and current proportionally on the secondary side

This is incorrect.
Say you have a 1:100 turns-ratio transformer, with (for example) 100 V input and 10 kV output, and the load on the secondary draws 1 A (for example). Then the source on the primary side will have to provide 100 A, not 0.01 A. The current is stepped down in the same proportion (for an ideal, lossless transformer) as the voltage is stepped up.

without transformer current was 1 amp. but with transformer, it became 10 amp.

The example assumes an ideal AC voltage source, with the same source voltage before and after inserting the transformer. In this case, it's correct that the current delivered to the load increases by inserting the transformer, so 10 A is delivered to the resistive load. But also notice that the current drawn from the source increased even more --- to 100 A. If your load was located far from the source, you'd rather put the transformer near the source and only send the 10 A current over the long transmission line, rather than put it near the load so that the transmission line has to carry 100 A. Even better (as far as reducing $I^2 R$ losses in transmission), you could put a 1:100 transformer near the source, and a 10:1 transformer at the load. Then the source would still need to source 100 A. And the load would still get 10 A. But the transmission line would only need to carry 1 A.
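The transmission-line argument can be made concrete with rough numbers. The line resistance and delivered power below are invented for illustration:

```python
# Deliver 1 MW over a line of 10 ohm resistance, at two sending voltages.
power = 1e6    # W, power to deliver (made-up figure)
r_line = 10.0  # ohm, transmission line resistance (made-up figure)

def line_loss(volts):
    current = power / volts      # sending-end current, I = P / V
    return current**2 * r_line   # power dissipated in the line, I^2 R

loss_low = line_loss(10e3)    # at 10 kV:  I = 100 A -> 100 kW lost
loss_high = line_loss(100e3)  # at 100 kV: I = 10 A  -> 1 kW lost
```

Stepping the voltage up by a factor of 10 cuts the line current by 10 and the $I^2R$ loss by 100, which is exactly why the textbook quotes the loss in terms of current rather than voltage: it is the current through the line's own resistance that dissipates power.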
{ "domain": "physics.stackexchange", "id": 55858, "tags": "electric-circuits, power" }
Two particles colliding head on with each moving at almost the speed of light -- how long does it take vs one stationary?
Question: I believe that an observer on one of the particles would see the other particle moving not at nearly twice the speed of light but only at roughly the speed of light. But surely the time it takes for the two particles to collide for a human observer who knows when each particle was shot out of an accelerator and can tell when the collision occurs is half the time it would take if one particle was stationary relative to the observer (maybe it is held at the half-way point) and only the other is moving relative to the observer at near light speed. That is to say, the observer sees the collision of two moving particles occurring at almost twice the speed of light. Or is this wrong because of Special Relativity? What about kinetic energy? Is it the same or different in the two situations? EDIT: If the two particles are one light second apart and one is stationary, the outside observer sees the collision occurring after one second. If the particles are both moving at C, I can't see why the collision does not occur in half a second. Except: That would mean that information could be sent faster than light to an observer on either particle. But the outside observer gets no information faster than light. Answer: In the frame of the stationary observer, the two particles approach each other at a speed which is greater than c. However, in the frame of either particle, the closing speed is less than c.
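Both statements in the answer can be checked with the relativistic velocity-addition formula, $w = (u + v)/(1 + uv/c^2)$:

```python
c = 1.0       # work in units where c = 1
u = v = 0.99  # each particle's speed in the lab frame

# Lab frame: the gap between the particles shrinks at u + v. This is a
# closing speed, not the speed of any object or signal, so it may exceed c
# without contradicting relativity -- the collision indeed happens in about
# half the time of the one-particle-stationary case.
closing_lab = u + v  # 1.98 c

# Frame of either particle: the other approaches at the relativistic sum,
# which is always below c.
closing_particle = (u + v) / (1 + u * v / c**2)
```

No information travels faster than light in either frame, which resolves the question's worry: the outside observer sees a fast-shrinking gap, but nothing moving faster than c.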
{ "domain": "physics.stackexchange", "id": 100343, "tags": "special-relativity, velocity" }
ROS Groovy simulator_gazebo on OSX
Question: Hello everyone, I am trying to install gazebo in the ROS (groovy) stack (simulator_gazebo), which I believe is still 1.3. I am compiling using clang 3.2 (actually it is based on 3.2). At first it failed to find TBB and libxml2, which I fixed by adding their .pc files so that pkg-config could find them. Then it complained about -DMAC_OS_X_VERSION not being set, so I added add_definitions(-DMAC_OS_X_VERSIONS=1080) In the CMakeLists.txt. After that I believe I had some problems with gazebo using CLOCK_REALTIME, which isn't available on OSX so I replaced that with the OSX equivalent. I have gotten a long way but now I am stuck with linking dependencies for gazebo_sensors and gazebo_physics. gazebo_sensors depends on gazebo_physics, however gazebo_physics seems to also depend on gazebo_sensors, causing a circular dependency. The dependency on gazebo_sensors for gazebo_physics did not exist, without this I was getting linking errors. Adding this link causes a circular dependency. Removing the dependency on gazebo_physics in gazebo_sensors allows it to compile gazebo_physics, but then it gets stuck with linking errors for gazebo_sensors because it is missing things from gazebo_physics there then. I also tried compiling with gcc (4.2 and 4.7), but this gave me a whole set of different errors. I had a similar problem with gazebo_physics and gazebo_physics_ode, gazebo_physics_ode needed to link against gazebo_physics, however gazebo_physics did not need to link against gazebo_physics_ode so I could just add the link to gazebo_physics in gazebo_physics_ode. What can I try next? I can't seem to resolve the circular dependency that I apparently have. How come this compiles fine on Ubuntu (I assume)? Should I actually be asking this on ROS answers? I tried version 1.4 from gazebosim.org but it seems to give me the same problems. 
Originally posted by Hansg91 on Gazebo Answers with karma: 11 on 2013-03-02
Post score: 1

Answer: With pull requests from @Hansg91 and help from William at OSRF, I've made a homebrew formula for gazebo. It's still experimental. To try it:

Install homebrew: ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
Install Xcode command-line development tools
Install XQuartz
Run the following commands:

brew tap ros/groovy
brew tap scpeters/gazebo
brew install gazebo
source /usr/local/share/gazebo/setup.sh
gazebo

Originally posted by scpeters with karma: 2861 on 2013-04-02
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by Hansg91 on 2013-04-03: Sounds great :) I don't think you really need XQuartz, you might only need it because somewhere the CMakeLists.txt checks if you have it, its not building against it I believe.

Comment by tkoolen on 2013-04-05: Works perfectly so far! Thank you so much! I wish I could upvote, but I don't have enough karma for that...

Comment by tkoolen on 2013-04-05: Has anybody tried getting drcsim to work on OSX by the way?

Comment by scpeters on 2013-04-05: XQuartz is needed to build OGRE using the home-brew formula given by the ros/groovy tap. I haven't tried building drcsim yet. There's still a problem with plugins being hard-coded as '.so' files when they're actually '.dylib' files on the mac. Also, it crashes when I insert cameras. So there's still some work left to do. I just started a wiki page to track status on OS X, feel free to add any answers threads to the list at the bottom of this page.

Comment by Hansg91 on 2013-04-08: Well it is very nice to see that you picked up this task, I don't need gazebo anymore at the moment (intended to use it for navigation, but I am using Stage at the moment) but I have no doubt that for some future feature I will need it and be loving it on OSX :)

Comment by Andrew Hundt on 2014-03-28: Note that these instructions are now out of date. I think the current homebrew steps are: brew tap osrf/homebrew-simulation ; brew install gazebo
{ "domain": "robotics.stackexchange", "id": 3084, "tags": "gazebo" }
String formatting in C++
Question: To develop my understanding of C++, I've written an overload function that allows users to use & to insert strings within other strings (denoted by a $ character). So far, it only works with one replacement in a string. For example: "Hello $" & "World" -> "Hello World" I want to receive feedback, because I'm sure there's a much better way than using three for loops. And I'd like a better foundation before moving forward with multiple replacements in a single string.

#include <string>
#include <iostream>

std::string operator&(const std::string& str, const char* cstr)
{
    if (str.empty()) {
        return std::string(cstr);
    }
    if (str.find("$") == std::string::npos) {
        return str + std::string(cstr);
    }
    int string_length = str.length();
    int cstr_length = 0;
    while (cstr[cstr_length] != '\0') {
        cstr_length++;
    }
    int length = string_length + cstr_length;
    std::string result = "";
    int i;
    for (i = 0; i < str.find("$"); i++) {
        result += str[i];
    }
    i++; // Skips past the '$'
    for (int j = 0; j < cstr_length; j++) {
        result += cstr[j];
    }
    for (; i < string_length; i++) {
        result += str[i];
    }
    return result;
}

int main(int argc, char** argv)
{
    std::string string = "Hello $, My name is Ben!";
    char* cstr = (char*)"World";
    std::string result = string & cstr;
    std::cout << result << std::endl;
    return 0;
}

Answer: Keep your includes ordered. Thus, you won't lose track of them, even if there are more of them. Common modern order goes from local to universal, to ensure headers are self-contained. Exceptions are fixed or at least annotated. All groups are sorted and separated by an empty line for emphasis:

- precompiled header if applicable, for obvious technical reasons
- the corresponding header, to ensure it is self-contained
- own headers
- external libraries headers
- system headers
- standard headers

Any of the last three might be amalgamated.

#include <iostream>
#include <string>
#include <string_view>

Don't bolt your own operators on someone else's type. It just invites confusion.
Use a properly named free function instead. Using C++17 std::string_view leads to a more efficient and convenient interface, and simplifies your implementation. Avoid special cases. They mean more code (which can be wrong), and slow down the common case.

std::string replace_placeholder(std::string_view format, std::string_view a)
{
    auto pos = format.find('$');
    auto rest = pos + 1;
    std::string r;
    r.reserve(format.size() + a.size() - (pos != std::string_view::npos));
    if (pos == std::string_view::npos)
        rest = pos = format.size();
    r.append(format.begin(), format.begin() + pos);
    r += a;
    r.append(format.begin() + rest, format.end());
    return r;
}

Don't name arguments you won't use. Thus, you neither confuse future readers (like yourself) nor cause spurious warnings. In the case of main(), you might just leave the arguments out completely.

Eliminate dead code and dead variables. They will just confuse, and if you want to restore them, there is source control. Let the compiler warn you about them as it can.

Don't muzzle the compiler just because it has the audacity to warn you. Figure out what you did wrong instead, and fix that. Yes, sometimes you have to override it, but you have to be judicious and careful there, using the least dangerous tool to get the job done. In the specific case, instead of casting away const, fix the pointer type. const_cast would have been less dangerous than an unrestricted C-style cast, if it had been appropriate.

Embrace auto. Almost Always Auto is a good idea to avoid mismatches and unintended conversions, especially if the exact type doesn't matter to you.

Only use std::endl when you need to flush the stream. Actually, belay that, be explicit and use std::flush. Also, when the program ends the standard streams are flushed. See "What is the C++ iostream endl fiasco?".

return 0; is implicit for main().
int main()
{
    std::string string = "Hello $, My name is Ben!";
    auto cstr = "World";
    auto result = replace_placeholder(string, cstr);
    std::cout << result << "\n";
}
{ "domain": "codereview.stackexchange", "id": 43449, "tags": "c++, strings" }
Networking with raspberry pi, silent topic [closed]
Question: Hi, I've recently bought a Raspberry Pi 4, and I really want to use it to control my own robots. But for most applications I would like to have my Ubuntu virtual machine serve as a GUI for my robot, so I need a working ROS network between my VM and my RasPi. All the instructions in the ROS Networking tutorial have been followed and I can ping and netcat between both machines. Since I don't have access to the tutorial's listener and talker on my RasPi, I figured I could just use some basic rostopic commands to try out my setup. But when I attempt to echo a topic, this topic remains silent. On my master device I have run roscore and

rostopic pub -r 1 /plop std_msgs/String -- 'Hello slave'

both in separate terminals, of course. On the slave side the master URI has been set:

export ROS_MASTER_URI=http://192.168.0.100:11311

And I can see the /plop topic:

bgi@ubuntu-VB:~$ rostopic list
/plop
/rosout
/rosout_agg

But when trying to listen to it, there's no message:

rostopic echo /plop

Both my virtual machine and my Raspberry Pi are running ROS melodic and they can ping and netcat each other. Does anybody have an idea of what is going on here - why can I see a topic but not listen to it?

Originally posted by BGI on ROS Answers with karma: 11 on 2020-12-06
Post score: 1

Answer: In ROS1, on client hosts (hosts other than the one where the ROS Master runs), setting ROS_MASTER_URI is required but not sufficient. Review the "Name resolution" section in http://wiki.ros.org/ROS/NetworkSetup. I'd say ROS_HOSTNAME and/or ROS_IP are the minimum requirement.

Originally posted by 130s with karma: 10937 on 2020-12-06
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by BGI on 2020-12-06: Thank you for your quick reply. That actually worked. I added both on both sides and now it runs properly.

Comment by BGI on 2020-12-06: If anybody else runs into this issue it was just a matter of adding ROS_IP on the master side.
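For reference, a minimal pair of environment setups consistent with the accepted answer might look like the following. The master address is the one from the question; the client address is hypothetical, so substitute your own machine's IP on each host:

```shell
# On the master (the machine running roscore), here 192.168.0.100:
export ROS_MASTER_URI=http://192.168.0.100:11311
export ROS_IP=192.168.0.100

# On each client (e.g. the Ubuntu VM), using that machine's OWN address:
export ROS_MASTER_URI=http://192.168.0.100:11311
export ROS_IP=192.168.0.101   # hypothetical client address
```

ROS_IP (or ROS_HOSTNAME, if DNS resolution works between the machines) tells each node which address to advertise for its topic connections; without it, peers may advertise unresolvable hostnames, which produces exactly the "topic is listed but echo stays silent" symptom.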
{ "domain": "robotics.stackexchange", "id": 35841, "tags": "ros, ros-melodic, rasbperrypi, network" }
How to understand wavefunction in quantum mechanics in math
Question: I am reading some introduction on quantum mechanics. I don't understand all of it, but I get the point that the wavefunction tells some probability aspects. In one book, they show one example of the wavefunction $f(x)$ in position space as a complex function, so they said the probability of finding the particle is $f^*(x) f(x) = |f(x)|^2$. In another book, the same example is shown but in so-called bra and ket vector form. I know if I calculate the absolute square, I should get the same answer. But I am still learning the bra, ket notation, so I wonder: does $\langle f(x)|f(x)\rangle$ or $|\langle f(x)|f(x)\rangle|^2$ give the probability? If the latter gives the probability, what is $\langle f(x)|f(x)\rangle$? Is $\langle f(x)|f(x)\rangle = f^*(x)\cdot f(x)$? Answer: "$| f(x) \rangle$" does not mean anything and is not proper bra-ket notation. For translating back and forth between wavefunction and bra-ket notation, here is the #1 thing to keep in mind: $$ f(x) = \langle x \mid f \rangle $$ So, the probability density to find the particle at $x$ is $$ \left|f(x)\right|^2 = \left| \langle x \mid f \rangle \right|^2 $$ Since $\langle a \mid b \rangle = \langle b \mid a \rangle^*$, this can also be written $$ |f(x)|^2 = \langle f \mid x \rangle \langle x \mid f \rangle $$ Remember, this represents a probability density in $x$. What this means is that $$ \int dx\, A(x) \left| f(x) \right|^2 = \left< f \right| \left\{ \int dx \, A(x) \left| x \rangle \langle x \right| \right\} \left| f \right> $$ should be the expected value of the function A(x). The quantity in the brackets is an operator: $$ \hat A = \int dx \, A(x) \left| x \rangle \langle x \right| $$ (Edit: As pointed out by Trimok, the above is not true for most operators. It is only true for any operator that is diagonal in the x basis, or equivalently that can be written as a function of the operator $\hat x$.
These are the only kind of operator for which expectation value and higher moments can be computed using $|f(x)|^2$ as a probability density function.) The expectation value of this operator is $$ \langle A \rangle = \left< f \right| \hat A \left| f \right> $$
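The claim that $|f(x)|^2$ acts as a probability density for x-diagonal operators can be checked numerically. Here is a small sketch using a Gaussian wave packet; the grid size and the choice $A(x) = x^2$ are arbitrary:

```python
import numpy as np

# Discretize position space on a grid wide enough that f(x) vanishes
# at the edges.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# A normalized Gaussian wavefunction f(x) = <x|f>.
f = np.exp(-x**2 / 2.0) / np.pi**0.25

density = np.abs(f) ** 2          # |<x|f>|^2, a probability density in x
norm = np.sum(density) * dx       # should integrate to 1

# Expectation of the x-diagonal operator A(x) = x^2, computed with the
# density exactly as in the integral above; for this Gaussian it is 1/2.
expect_x2 = np.sum(x**2 * density) * dx
```

The same recipe fails for operators that are not diagonal in $x$ (momentum, for example), which is exactly the caveat the answer's edit makes.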
{ "domain": "physics.stackexchange", "id": 9783, "tags": "quantum-mechanics, wavefunction, probability, born-rule" }
Weaknesses arising from using same key in both channel directions
Question: I came across the following question: Which of the following risks may arise when the same key is used to encrypt both directions of a communication channel, that are not present when using different keys in each direction? The answer to the question was reflection attack. The other options provided were denial of service, eavesdropping and none of the above. I presumed it could be eavesdropping. E.g.: When a Diffie-Hellman key exchange is performed, a man-in-the-middle attack (a type of eavesdropping attack) could occur. So is the answer reflection attack, or eavesdropping, or something else entirely, and why? I tried looking up resources to justify and figure out the right answer, but this is all I could conclude. Answer: Suppose that the keys used in both directions are the same. You listen to something sent from A to B: kjhsdfiugdsa8ltr634o87qy5843jlgrwjklfjgsfds You then listen to something sent from B to A: jh35i798fdgkjh53jkh87gfhkjsfhtjk35h1o8798ug Does it matter that the key is the same? Can you understand any of the messages? If the answer to these questions is negative, then eavesdropping can be ruled out. In contrast, suppose that you sent the following message from B to A: kjhsdfiugdsa8ltr634o87qy5843jlgrwjklfjgsfds You don't necessarily have any idea what this means, but it is identical to a message sent from A to B. Because the key is the same, whatever meaning the original message had, so does this one. This kind of attack, in which an intercepted message is played back in the opposite direction, is known as reflection, a form of replay (and it is easy to thwart even if the keys are the same).
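To make the reflection idea concrete, here is a toy Python sketch of an entirely hypothetical challenge-response protocol (not from the original question): because both directions share one key, an attacker can bounce A's own challenge back at A and harvest a response that verifies on the original connection.

```python
import hashlib
import hmac
import os

KEY = os.urandom(16)  # shared key, used in BOTH directions (the weakness)

def respond(key: bytes, challenge: bytes) -> bytes:
    # A party proves knowledge of the key by MACing the peer's challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# A sends a challenge intended for B -- but the attacker intercepts it...
challenge_from_A = os.urandom(16)

# ...opens a second connection to A, and reflects the same challenge back.
# A dutifully answers it with the shared key (simulated here by calling
# respond, since A legitimately knows KEY):
answer_from_A = respond(KEY, challenge_from_A)

# The attacker replays A's own answer on the first connection. It verifies,
# because both directions use one key -- A has authenticated the attacker.
assert hmac.compare_digest(answer_from_A, respond(KEY, challenge_from_A))
```

With a distinct key per direction, the reflected answer would be computed under the wrong key and fail verification.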
{ "domain": "cs.stackexchange", "id": 14591, "tags": "cryptography, security, encryption" }
If MFCC vectors are speaker independent, then how can we reconstruct unique speech signals from them?
Question: Reconstructing a speech signal from a collection of MFCC vectors seems to work pretty well, but I've heard that one advantage of MFCCs is speaker-independence, i.e. they are more-or-less the same across different speakers for a given phoneme. How then can a speech signal, with all its speaker-dependent idiosyncrasies (accent, etc.) be reconstructed from a supposedly speaker-independent MFCC vector? Are MFCCs not in fact speaker-independent then? If not, what does determine speaker-independence vs. speaker-dependence? Thanks. Answer: First of all, there is some serious "cheating" in the MFCC reconstruction experiment you linked to: not only are the MFCCs used, but also the voiced/unvoiced bit and the pitch. MFCCs are not speaker-independent. In fact, they are used for speaker identification/verification tasks! The speaker "idiosyncrasies" are both in their prosody (preserved by this reconstruction experiment because the pitch is provided as side-information to the reconstruction process) and in the articulation/timbre (preserved by the MFCCs). Two ingredients are needed to get MFCCs to work for speaker-independent recognition: Vocal tract length normalization. A linear transformation (matrix multiplication of the MFCC vector) can map the MFCC sequences of two speakers speaking the same sentence onto each other relatively well. So even though MFCCs are not speaker-independent, one can optimize for a transformation matrix that "flattens out" the speaker-specific details. Acoustic modelling. Using a large number of Gaussians (or any classifier with large capacity) for a specific acoustic unit allows it to capture all the variations.
{ "domain": "dsp.stackexchange", "id": 1745, "tags": "speech-recognition, mfcc, independence" }
The Abstract Factory design pattern as a Database Operations program
Question: File - DatabaseOperations.h This file contains classes representing database operations for three different types of databases (relational, document-based and graph-based), such as establishing connections, executing statements, and committing and closing the connections. There are four abstract classes with one pure virtual function each. The class DatabaseConnection contains the pure virtual function connect() - this class is inherited by three concrete classes representing the three different types of databases, and each of those implements the connect() function. The class ExecuteStatements contains one pure virtual function Execute() - this class is again inherited by three concrete classes representing the three different types of databases, and each of those implements the Execute() function. The class CommitDisconnect contains one pure virtual function commit() - this class is yet again inherited by three concrete classes representing the three different types of databases, and each of those implements the commit() function. The class DatabaseOperation aggregates all the operations and contains three pure virtual functions: connect, execute and commit. All three virtual functions are implemented in three respective classes. 
#include <iostream> #include <string> /**************************************************************** * Connection *****************************************************************/ class DatabaseConnection { public: virtual void connect() = 0; }; class RelationalDatabaseConnection : public DatabaseConnection { public: void connect() override { std::cout << "Connection to RDBMS\n"; } }; class DocumentDatabaseConnection : public DatabaseConnection { public: void connect() override { std::cout << "Connection to Document DBMS\n"; } }; class GraphDatabaseConnection : public DatabaseConnection { public: void connect() override { std::cout << "Connection to Graph DBMS\n"; } }; /**************************************************************** * Execute Statements *****************************************************************/ class ExecuteStatements { public: virtual void Execute() = 0; }; class RelationalDatabaseExecute : public ExecuteStatements { public: void Execute() override { std::cout << "Executing ANSI SQL\n"; } }; class DocumentDatabaseExecute : public ExecuteStatements { public: void Execute() override { std::cout << "Executing JSON/Parquet etc.\n"; } }; class GraphDatabaseExecute : public ExecuteStatements { public: void Execute() override { std::cout << "Creating Nodes and Edges\n"; } }; /**************************************************************** * Commit & Disconnection *****************************************************************/ class CommitDisconnect { public: virtual void commit() = 0; }; class RelationalCommitDisconnect : public CommitDisconnect { public: void commit() override { std::cout << "Committing and closing RDBMS connection\n"; } }; class DocumentCommitDisconnect : public CommitDisconnect { public: void commit() override { std::cout << "Committing and closing Document DBMS connection\n"; } }; class GraphCommitDisconnect : public CommitDisconnect { public: void commit() override { std::cout << "Committing and closing Graph DBMS 
connection\n"; } }; class DatabaseOperation { public: virtual DatabaseConnection* connect() = 0; virtual ExecuteStatements* execute() = 0; virtual CommitDisconnect* commit() = 0; }; class Relational : public DatabaseOperation { public: DatabaseConnection* connect() override { return new RelationalDatabaseConnection(); } ExecuteStatements* execute() override { return new RelationalDatabaseExecute(); } CommitDisconnect* commit() override { return new RelationalCommitDisconnect(); } }; class Document : public DatabaseOperation { public: DatabaseConnection* connect() override { return new DocumentDatabaseConnection(); } ExecuteStatements* execute() override { return new DocumentDatabaseExecute(); } CommitDisconnect* commit() override { return new DocumentCommitDisconnect(); } }; class Graph : public DatabaseOperation { public: DatabaseConnection* connect() override { return new GraphDatabaseConnection(); } ExecuteStatements* execute() override { return new GraphDatabaseExecute(); } CommitDisconnect* commit() override { return new GraphCommitDisconnect(); } }; file - main.cxx #include <iostream> #include "DatabaseOperations.h" int main() { int value = -1; DatabaseOperation* operation = nullptr; std::cout << "Relational: 1, Document: 2 and Graph: 3\n" << "Enter: "; std::cin >> value; switch (value) { case 1: operation = new Relational(); break; case 2: operation = new Document(); break; case 3: operation = new Graph(); default: break; } if (operation != nullptr) { operation->connect()->connect(); operation->execute()->Execute(); operation->commit()->commit(); } return 0; } Does the above code implements the Abstract Factory design pattern correctly? Answer: Does the above code implements the Abstract Factory design pattern correctly? No. First, it is wildly overcomplicated. It seems ridiculous to need four different class hierarchies just to do a database transaction. Second, it is just plain bad C++, because it leaks memory all over the place. 
You use new like it’s going out of style (which it is), but you never once use delete. And third, and perhaps most importantly, you don’t actually make a factory. A factory is kinda important if you want to implement the abstract factory pattern. The abstract factory pattern only requires a single abstract base, and a single factory (which is a function). For your case, it might look something like this: // database.hpp ================================================ // abstract base class database { public: virtual ~database() = default; virtual auto connect() -> void = 0; virtual auto execute() -> void = 0; virtual auto commit() -> void = 0; }; // factory [[nodiscard]] auto create_database(int id) -> std::unique_ptr<database>; // database.cpp ================================================ #include "database.hpp" // concrete class // // actual implementation could be in another file, but declaration must be // visible to the factory function class relational_database : public database { public: auto connect() -> void override { std::cout << "Connection to RDBMS\n"; } auto execute() -> void override { std::cout << "Executing ANSI SQL\n"; } auto commit() -> void override { std::cout << "Committing and closing RDBMS connection\n"; } }; // and the same thing for the other database types // factory implementation [[nodiscard]] auto create_database(int id) -> std::unique_ptr<database> { switch (id) { case 1: return std::make_unique<relational_database>(); case 2: return std::make_unique<document_database>(); case 3: return std::make_unique<graph_database>(); default: // should really throw an exception here, but whatevs return nullptr; } } // main.cpp ==================================================== #include "database.hpp" // usage in main() auto main() -> int { std::cout << "Relational: 1, Document: 2 and Graph: 3\n" << "Enter: "; auto input = -1; std::cin >> input; if (auto operation = create_database(input); operation != nullptr) { 
operation->connect(); operation->execute(); operation->commit(); } } To use the factory, you only need to be able to see the abstract base (database), and the factory (create_database()). All of the concrete classes could be hidden (though, of course, they need to be visible to the factory). Now as for some specific problems in the code… I don’t see any sense in breaking the connect, execute, and commit/close functions into three different class hierarchies. All of these functions are tightly connected: the mechanism for connecting to a database is intimately connected to the mechanism for closing that connection. It makes no sense at all for those two things to be in two entirely different classes. And to actually do anything with the database, you need the connection handle… so how are you supposed to get the connection handle opened in one class… over to the execute class… then finally over to the commit/close class. That whole mess is just bonkers. class Relational : public DatabaseOperation { public: DatabaseConnection* connect() override { return new RelationalDatabaseConnection(); } No. Just no. We don’t use raw pointers for ownership in C++ anymore. That was bad practice even back in C++98 (hence, std::auto_ptr). This is just plain unacceptable in modern C++. But the real problem here is that your entire interface is just terrible. 
Your base class looks like this: class DatabaseOperation { public: virtual DatabaseConnection* connect() = 0; virtual ExecuteStatements* execute() = 0; virtual CommitDisconnect* commit() = 0; }; In order to implement this interface correctly you would have to do something like this: class CorrectDatabase : public DatabaseOperation { std::unique_ptr<DatabaseConnection> _connection; std::unique_ptr<ExecuteStatements> _statements; std::unique_ptr<CommitDisconnect> _commit; public: // note we have to do all the allocations in the constructor, and hold // the pointers as data members CorrectDatabase() : _connection{new RelationalDatabaseConnection} , _statements{new RelationalDatabaseExecute} , _commit{new RelationalCommitDisconnect} {} DatabaseConnection* connect() override { return _connection.get(); } ExecuteStatements* execute() override { return _statements.get(); } CommitDisconnect* commit() override { return _commit.get(); } }; The way you tried to do it—allocating each class in the actual function—is not only clunky, it is wrong, because there is no possible way to do that without leaking memory. But while the code above is now correct, it’s still terrible. Because to use it, I have to do this: auto db = CorrectDatabase{}; db.connect()->connect(); db.execute()->Execute(); db.commit()->commit(); That’s just silly. Why not: auto db = CorrectDatabase{}; db.connect(); db.execute(); db.commit(); That looks a lot more reasonable. 
And all you’d have to do is this: class DatabaseOperation { public: virtual void connect() = 0; virtual void execute() = 0; virtual void commit() = 0; }; class CorrectDatabase : public DatabaseOperation { std::unique_ptr<DatabaseConnection> _connection; std::unique_ptr<ExecuteStatements> _statements; std::unique_ptr<CommitDisconnect> _commit; public: // note we have to do all the allocations in the constructor, and hold // the pointers as data members CorrectDatabase() : _connection{new RelationalDatabaseConnection} , _statements{new RelationalDatabaseExecute} , _commit{new RelationalCommitDisconnect} {} void connect() override { return _connection->connect(); } void execute() override { return _statements->Execute(); } void commit() override { return _commit->commit(); } }; But of course, this is still terrible design. What happens if someone forgets to call connect() before they call execute()? What happens if they forget to call commit() after? A good C++ interface is easy to use, and hard to use incorrectly. For example: auto db = create_database(database_type); // the factory function // the database is *already* connected to when it was constructed, so you can // just go ahead and use it db->execute(statement_1); db->execute(statement_2); db->execute(statement_3); // you can manually commit, if you want db->commit(); // commits the transaction consisting of the previous 3 statements // more statements db->execute(statement_4); db->execute(statement_5); // you don't need to manually commit, the destructor will commit automatically // the destructor will also close the connection There is no way to use that kind of interface wrong. That’s what good C++ looks like. (You could go even further and have RAII transaction objects, that will auto-commit in smaller chunks.) The bottom line: No, you have not implemented the factory pattern… mostly because you didn’t actually implement a factory. 
Your code is just plain wrong because it leaks memory all over the place. You have wildly over-engineered the whole thing, to the point of absurdity. I don’t even see how it could possibly work in practice. Try it with a real database, and see if you can actually get it to work. I would suggest throwing the whole thing out, and starting from scratch with a much simpler design. I would suggest starting with the absolute basic abstract factory pattern, if that’s what you’re interested in (an example of the absolute basic abstract factory is below). I would also suggest trying it out with a real database (just use SQLite as a test database, for example), to see what’s practical, because what seems to make sense in the abstract just won’t work in reality. (For example, how the hell are you going to handle the connection handle across three distinct class hierarchies?) Example of absolute basic abstract factory: // thing.hpp =================================================== // the only header you need to include to use the factory #include <memory> #include <string_view> // abstract base class thing_base { public: virtual ~thing_base() = default; virtual void func() = 0; }; // factory declaration [[nodiscard]] auto create_thing(std::string_view id) -> std::unique_ptr<thing_base>; // thing.cpp =================================================== #include "thing.hpp" #include "thing_concrete.hpp" // factory implementation [[nodiscard]] auto create_thing(std::string_view id) -> std::unique_ptr<thing_base> { if (id == "thing_concrete") return std::make_unique<thing_concrete>(); return nullptr; // handle error for unknown ID } // thing_concrete.hpp ========================================== // example concrete type, can be a private header #include "thing.hpp" class thing_concrete : public thing_base { public: void func() override; }; // thing_concrete.cpp ========================================== #include "thing_concrete.hpp" void thing_concrete::func() { // ... 
} // main.cpp ==================================================== // usage #include "thing.hpp" auto main() -> int { auto t = create_thing("thing_concrete"); }
{ "domain": "codereview.stackexchange", "id": 41873, "tags": "c++, c++11, design-patterns, abstract-factory" }
(Groovy | 12.04 | armv7l) bash: rosrun: command not found
Question: I ran the following commands: $ roscore $ rosrun turtlesim turtlesim_node and obtained the following error: bash: rosrun: command not found My environment reads: root@localhost:~# export | grep ROS declare -x ROS_DISTRO="groovy" declare -x ROS_ETC_DIR="/opt/ros/groovy/etc/ros" declare -x ROS_MASTER_URI="http://localhost:11311" declare -x ROS_PACKAGE_PATH="/home/ubuntu/ros_ws:/opt/ros/groovy/share:/opt/ros/groovy/stacks" declare -x ROS_ROOT="/opt/ros/groovy/share/ros" And was not able to find the rosrun file at /opt/ros/groovy/bin Any suggestions? Originally posted by amrivera on ROS Answers with karma: 13 on 2013-03-02 Post score: 1 Original comments Comment by ahendrix on 2013-03-02: How did you install ROS? If you installed from debs, which debs did you install? Comment by amrivera on 2013-03-11: I used this link --> http://packages.ros.org/web/rosinstall/generate/raw/groovy/desktop Answer: The env variable you need for this is PATH: $ env | grep ^PATH= PATH=/opt/ros/groovy/bin:... This entry is added by sourcing /opt/ros/groovy/setup.*sh Originally posted by KruseT with karma: 7848 on 2013-03-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by amrivera on 2013-03-14: It does have the PATH info that you mention.
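For reference, here is a minimal shell fragment illustrating the fix; the paths assume a standard Groovy install under /opt/ros/groovy (adjust the distro name for your setup):

```shell
# One-off fix for the current shell:
source /opt/ros/groovy/setup.bash
env | grep ^PATH=          # PATH should now include /opt/ros/groovy/bin
which rosrun               # should resolve to /opt/ros/groovy/bin/rosrun

# Make it permanent for every new shell:
echo "source /opt/ros/groovy/setup.bash" >> ~/.bashrc
```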
{ "domain": "robotics.stackexchange", "id": 13149, "tags": "ros-groovy, ubuntu-precise, rosrun, ubuntu" }
Commutativity and symmetric property in tensor manipulation
Question: I have been trying to express $\eta^{\mu\nu}$ in terms of $\eta_{\mu\nu}$ and I have stumbled upon the following relation: $\eta^{\mu\nu} = \eta^{\mu\alpha}\eta^{\nu\beta}\eta_{\alpha\beta}$ I can see how the indices $\alpha$ and $\beta$ contract to form the contravariant tensor $\eta^{\mu\nu}$, but I do not understand why the ordering of the tensors $\eta^{\mu\alpha}$, $\eta^{\nu\beta}$ and $\eta_{\alpha\beta}$ does not matter. If I am correct, only the inner upper and lower indices can contract, so that the ordering $\eta^{\mu\alpha}\eta_{\alpha\beta}\eta^{\nu\beta}$ is more appropriate. So, $\eta^{\mu\alpha}\eta_{\alpha\beta}\eta^{\nu\beta} = \eta^{\mu}_{\beta}\eta^{\nu\beta}$, so that the inner upper and lower index $\alpha$ contracts. Then, $\eta^{\mu}_{\beta}\eta^{\nu\beta} = \eta^{\mu}_{\beta}\eta^{\beta\nu}$, because the metric tensor is symmetric. Next, $\eta^{\mu}_{\beta}\eta^{\beta\nu} = \eta^{\mu\nu}$ and we end up with the desired form. Do you think the ordering of the tensors in a tensor product matters? I say this because matrix multiplication is non-commutative. Therefore, shouldn't tensor multiplication also be non-commutative? Also, could I simplify the RHS into the LHS only because the metric tensor is symmetric? What if the tensor were not symmetric? How could we then have managed to simplify? Answer: Well, they are real numbers... if you like, you can express your relation to another physicist just as well like so: Suppose $\eta_{\mu \nu}$ is a metric in $N$ dimensions whose components vary contravariantly, and $\bar{\eta}_{\mu \nu}$ the corresponding covariant components. Then $\eta_{\mu\nu} = \sum_{\alpha=1}^N\sum_{\beta=1}^N\eta_{\mu\alpha}\eta_{\nu\beta}\bar{\eta}_{\alpha\beta}$ The summand is just a product of real numbers, and products of real numbers can be arranged however you please: $abc=cab=bca=cba=\cdots$. You'll see this kind of phrasing in books just before they introduce the Einstein summation convention! 
"whose components vary contravariantly" is replaced with a superscript and "whose components vary covariantly" is replaced with a subscript and the sums are dropped, so you can write the whole thing more quickly. But you're always talking about sums and your components are actual numbers. In matrix multiplication $AB\neq BA$ can occur. This equation has no indices. It is still the case that $\sum_j A_{ij}B_{jk}=\sum_j B_{jk}A_{ij}$, because these are just real numbers. You really should take a second look at the definition of the Einstein tensor notation. There's absolutely no magic to it (excluding covariance/contravariance!)
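The point can be checked with a few lines of code. This illustrative sketch (not from the original answer) builds the 4D Minkowski metric and performs the double sum with the three real-number factors multiplied in different orders:

```python
# eta_{mu nu} for 4D Minkowski space, signature (-, +, +, +). For the flat
# metric, the matrices of covariant and contravariant components coincide.
eta = [[-1 if i == j == 0 else (1 if i == j else 0) for j in range(4)]
       for i in range(4)]

def contract(order):
    """Sum eta^{mu a} eta^{nu b} eta_{a b} over a, b, multiplying the three
    real-number factors in the given order, to show ordering is irrelevant."""
    out = [[0.0] * 4 for _ in range(4)]
    for mu in range(4):
        for nu in range(4):
            for a in range(4):
                for b in range(4):
                    factors = [eta[mu][a], eta[nu][b], eta[a][b]]
                    x, y, z = (factors[i] for i in order)
                    out[mu][nu] += x * y * z
    return out

# Any ordering of the three factors gives back eta^{mu nu} itself:
assert contract((0, 1, 2)) == contract((2, 0, 1)) == eta
```

Each summand is an ordinary product of reals, so all six orderings of the factors produce the same matrix, exactly as the answer states.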
{ "domain": "physics.stackexchange", "id": 20373, "tags": "metric-tensor" }
Why are particles still a thing?
Question: Couldn't we just assume that waves have mass and momentum and can become localized? Dirac deltas can be given a rigorous mathematical foundation, but physicists do not use the Gelfand triple. Why not just assume that electrons, photons, etc., have the same dimension as space? I know string theory does this, but are there reformulations of QFT that use this idea and that, together with the insight that renormalization has a connection with scale invariance, get rid of all the problems that plague QFT? It just seems to me that the concept of a particle is a very unphysical idea. Answer: QFT is not a theory of classical particles. However, it is also not a theory of classical fields, and it has both as limiting cases. Because of that it is often useful to use the language of both, while understanding that you are talking about something that is neither one of them. You may approach it in the field-centric fashion: construct free QFT as a "quantization" of the classical free field theory, then construct interacting QFT as a perturbation by a local interaction. Or you can approach it in the particle-centric fashion: construct Fock space as a formal state space of relativistic identical particles, assuming that they have momentum and are discrete entities. Then construct their scattering amplitudes as a formal series, assuming that they interact in a local way (i.e. assuming that they are in a certain sense pointlike). Because most of the problems in QFT are scattering problems, you borrow from this way of thinking even when you start in the field-centric fashion. I can see you screaming "But what about non-perturbative QFT???" But the thing is that most of the time we don't have anything you may call non-perturbative QFT! Instead you have recipes based on some ideological considerations that should complete your perturbative construction. Because we have some axiomatic field-correlator constructions (e.g. 
CFT) or semi-classical considerations (that assume the specific classical limit to work in this parameter range) that don't rely on the perturbative expansions, we try to use them. But let's be honest: we must be ready for these recipes to fail to give the appropriate non-perturbative completion. The fun thing is that, contrary to what you write about string theory, it is built in the particle-centric fashion! The things like string dualities and AdS/CFT that we use to understand string theory non-perturbatively do not rely on attempts to generalize the field notion, like string field theory does. This goes with your comment about physicists and Gelfand triples. Mathematicians go with purified abstract constructs that are defined in a closed and self-consistent fashion. Physicists use the constructs to describe reality. So physicists tend toward the approach "let's regularize and define it as a limit, hoping that it does not depend on the regularization" because: (1) they are only interested in obtaining results; (2) they are usually too lazy to decipher mathematicians' language, which sometimes differs considerably from their own (and, unlike physicists, mathematicians more often than not do not give "the complete idiot's explanation of my idea" in their papers); (3) there is often an actual physical reason to consider the model as a limit of regularized stuff; (4) they often work with bad models (like non-renormalizable effective QFTs) that actually have regularization dependence and other issues. And while mathematicians will give you their abstract constructs, we return to reasons 1-3 for why the physicist skips them. In fact most physicists don't even care about all the complicated stuff in QM: all that business about definitions on dense subspaces, about self-adjoint extensions of symmetric operators, about rigged Hilbert spaces, etc. When they find issues with the naive ways, they usually regularize the problem, take the limit, and reinvent e.g. 
a symmetric operator with multiple self-adjoint extensions, in the fashion they need for their physics problem. Or they find that this limiting definition actually pinpoints the specific self-adjoint extension, whereas the abstract approach does not give you the answer. I have personally had such an experience. So if you are a physicist, do not cling to any one nice model; actually absorb all viewpoints, because you don't know which one will be more useful in the future.
{ "domain": "physics.stackexchange", "id": 97684, "tags": "quantum-field-theory, particle-physics, field-theory, fourier-transform, wave-particle-duality" }
turtlebot_follower doesn't work on 410c with Kinect
Question: Jessie on a 410c with the Kinetic version of ROS, installed by following the instructions on the wiki. The Kinect works with rqt_image_view; either the depth or the RGB raw image can be captured properly. But the bot won't move after running the turtlebot_follower app, and there is no error on the std output, only lines like this: [ INFO] [1470897001.944838498]: Centroid at -0.110756 0.132475 0.521000 with 26297 points [ INFO] [1470897002.949478232]: Centroid at -0.110221 0.132640 0.521000 with 26389 points [ INFO] [1470897004.031415689]: Centroid at -0.110192 0.132608 0.521000 with 26425 points [ INFO] [1470897005.028972904]: Centroid at -0.110192 0.131853 0.519000 with 26564 points [ INFO] [1470897006.098538296]: Centroid at -0.109379 0.131921 0.519000 with 26630 points [ INFO] [1470897007.128141633]: Centroid at -0.110063 0.131935 0.519000 with 26554 points [ INFO] [1470897008.236953214]: Centroid at -0.110100 0.131929 0.520000 with 26519 points [ INFO] [1470897009.263505487]: Centroid at -0.110249 0.131807 0.519000 with 26521 points [ INFO] [1470897010.283830842]: Centroid at -0.111009 0.131917 0.520000 with 26443 points [ INFO] [1470897011.313652663]: Centroid at -0.109750 0.132123 0.520000 with 26477 points [ INFO] [1470897012.335891665]: Centroid at -0.109882 0.132068 0.520000 with 26448 points [ INFO] [1470897013.347062630]: Centroid at -0.109674 0.132116 0.520000 with 26536 points In addition, the bot works with the teleop app with the keyboard. Originally posted by zphou on ROS Answers with karma: 16 on 2016-08-11 Post score: 0 Original comments Comment by tfoote on 2016-08-14: Can you confirm that the topics for the follower are all connected using rqt_graph? Answer: Thanks for your comments, I figured it out eventually; another app was missing when running the follower. Before launching the follower, turtlebot_bringup is also needed. Originally posted by zphou with karma: 16 on 2016-08-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25499, "tags": "turtlebot" }
How do kinematic equations work regardless of mass of the object?
Question: I came across this question (very simple): "A dog is running and starts to get faster at $2 ms^{-2}$ for $3s$. If the dog covers $20 m$ over this time, what velocity did it start with?" Using the kinematic equations, the answer is $3.7 m/s$. My teacher said that this is true regardless of the nature of the object. For example, if it were a ball that was accelerating at $2 ms^{-2}$ for $3 s$ and covered $20 m$, the answer would still be $3.7 m/s$. Why? How come the mass does not affect it? I understand it intuitively, but I can't seem to find an answer after thinking hard about it. Shouldn't forces such as gravity alter the acceleration or motion of an object? And gravity depends on mass. I understand that kinematics is the branch of physics not concerned with forces, but how is it so accurate (provided constant acceleration)? How is it that when figuring out the motion through kinetics (which does depend on the object, e.g. its mass), you still get the same answers as if it were done through kinematics? P.s. I am a high school student, so I would appreciate it if the answers could be simple enough for me to understand (if it turns out to be complicated). P.p.s. I know calculus, basic derivatives, integrals, and up to first-order differential equations if it comes to that. Much thanks! Answer: how is it so accurate (provided constant acceleration)? This is 100% it. We are assuming that the ball and the dog have the same acceleration (implicitly). This means we are a priori setting $F$ in $F=ma$ to whatever it has to be such that $a$ is the same for the different-mass objects. Concretely, if the ball is $1$ kg and the dog is $10$ kg, then the force used to accelerate each of them to $a$ is different, and the one for the dog is $10$ times larger. In kinematics, we don’t even care about this force; we are just given the value of the constant acceleration. 
You mention differential equations: the kinematic equations are all useful forms of what you get when you solve the differential equation $$\frac{d^2x}{dt^2} =a$$ for constant $a$, which is simply the statement that the acceleration is constant. The reason there are 5 SUVAT equations is that sometimes we want to solve for things in terms of different variables given in the problem, but they all come from this one ODE. How is it that when figuring out the motion through kinetics (which does depend on the object, e.g. its mass), you still get the same answers as if it were done through kinematics? I am assuming you mean the case where you are given the force (or collection of forces on an object) and the mass, and are expected to use Newton’s Second Law to find the acceleration. The reason this works is that it results in the same equation of motion; we obtain the same differential equation. The only extra step was that you were expected to find the acceleration rather than just have it as a given.
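As a quick check of the number quoted above, here is a short Python sketch solving $s = ut + \tfrac{1}{2}at^2$ for the initial velocity; note that the mass appears nowhere in the calculation:

```python
# Given constant acceleration a over time t covering distance s,
# solve s = u*t + 0.5*a*t**2 for the initial velocity u.
a = 2.0   # m/s^2  (the dog's acceleration)
t = 3.0   # s
s = 20.0  # m

u = (s - 0.5 * a * t**2) / t
print(round(u, 1))  # -> 3.7, the same for a dog, a ball, or anything else
```

Whatever the object, as long as its acceleration really is $a$, the same $u$ comes out; mass only enters when you ask what force was needed to produce that acceleration.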
{ "domain": "physics.stackexchange", "id": 99171, "tags": "newtonian-mechanics, kinematics, acceleration, kinetic-theory" }
ros as root - common bug, when workspace is in /home/user/ros_workspace
Question: Hey guys, I tried to run some ROS packages as root and got this: Warning: error while crawling /home/username: boost::filesystem::status: Permission denied: "/home/username/.gvfs" According to some Google research, this is a common bug - somehow .gvfs denies any access to anyone who is not the user. My setup of shell & environment was absolutely fine; once I copied the workspace to another path (/home/ros_workspace) there was no problem :-) This is not a real question, but I thought it might be helpful to mention this problem. Originally posted by Flowers on ROS Answers with karma: 342 on 2012-06-11 Post score: 0 Original comments Comment by Asomerville on 2012-06-12: Probably best to file a bug instead: http://www.ros.org/wiki/Tickets Answer: Why are you running ROS packages as root from the workspace of another user? That is likely the actual problem. Originally posted by dornhege with karma: 31395 on 2012-06-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Flowers on 2012-06-12: Because I thought it would be useful in that situation ;-) and the system has one user (me) and root... usually root should have access to everything, which it did not, so I reported this. Comment by joq on 2012-06-12: Good to be aware of. The message is "don't run as root", and especially "don't build anything as root".
{ "domain": "robotics.stackexchange", "id": 9755, "tags": "ros, root, ros-workspace" }
Does QFT renormalization preserve Hilbert space?
Question: In the Wilsonian picture, a renormalized theory is about change of scale. As we change scale for a quantum field theory, does the Hilbert space of a theory remain unchanged? Answer: We can think of a quantum field theory as an association between observables and regions of spacetime: for each spacetime region $\cal{O}$, we have a collection $\cal{A}(\cal{O})$ of observables. Together with a representation of all of these observables as operators on a single Hilbert space, this is the only data that needs to be specified. (The data must satisfy some conditions, but there is no additional data.) Conceptually, in the Wilson picture, renormalization amounts to a simple two-step process: Step 1: For some $\lambda>0$, replace each $\cal{A}(\cal{O})$ with $\cal{A}(\lambda\cal{O})$, where $\lambda\cal{O}$ is the image of $\cal{O}$ under a scale transformation of spacetime. Step 2: Re-scale the units so that $\lambda\cal{O}$ has the same size in the new units that $\cal{O}$ had in the original units. Does this change the Hilbert space? For a typical quantum field theory in continuous spacetime, the original Hilbert space is infinite-dimensional (think about the effect of continuous translations), and so is the new one. Up to isomorphism, there is only one separable infinite-dimensional complex Hilbert space, so the Hilbert space must remain the same (for finite values of $\lambda$, at least). There are two possible loopholes: In the case of a lattice QFT, the Hilbert space may be finite-dimensional. Then the preceding argument fails. This one is a semantic loophole. In the quantum (field) theory literature, the expression "Hilbert space" sometimes implicitly refers to the net of observables as well as to the Hilbert space. 
For example, some authors may refer to the Hilbert space of square-integrable functions of a single real variable and the Hilbert space of square-integrable functions of two real variables as two different Hilbert spaces, when in fact they are the same Hilbert space in two different representations, each tailored to a different net of observables. The observables — at least their association to regions of spacetime — certainly do change under renormalization in the Wilson picture, and I suppose some authors might describe this as a change of the Hilbert space.
{ "domain": "physics.stackexchange", "id": 66174, "tags": "quantum-field-theory, hilbert-space, renormalization" }
Why is a transform not available after waitForTransform?
Question: Hello ros-users, I have strange issue with tf-transforms. I use laser_geometry::LaserProjection to project laser scans to point clouds, but I assume the problem is related to tf. The code-snippet is from the scan callback function. if(!gTfListener->waitForTransform(scan->header.frame_id, "camera", scan->header.stamp, ros::Duration(2.0), ros::Duration(0.01), &error_msg)) { ROS_WARN("Warning 1: %s", error_msg.c_str()); return; } try { gProjector->transformLaserScanToPointCloud("camera", *scan, cloud, *gTfListener); }catch(tf::TransformException e) { ROS_WARN("Warning 2: %s", e.what()); return; } After a few seconds of running, Warning 2 is raised on the console for like 1 out of 4 scans with this error message: Warning 2: Lookup would require extrapolation into the future. Requested time 1412757571.359567610 but the latest data is at time 1412757571.357117891, when looking up transform from frame [laser_link] to frame [camera] How can this happen, after waitForTransform obviously succeeded (returned true)? Thanks in advance, Sebastian Originally posted by Sebastian Kasperski on ROS Answers with karma: 1658 on 2014-10-08 Post score: 1 Original comments Comment by GuillaumeB on 2014-10-08: you could try to set a very long time for the duration (the time it should wait) . My computer was slow and it solved the problem Comment by Sebastian Kasperski on 2014-10-08: Increasing the wait time did not change the behaviour, as waitForTransform already returned true (e.g. didn't timeout) Answer: My assumption about tf was wrong, the solution was indeed related to laser_geometry. As described here, the Projector uses start and end time of the scan and therefore asks for a timestamp in the future. 
So to wait long enough, the call to waitForTransform should be: if(!gTfListener->waitForTransform( scan->header.frame_id, "camera", scan->header.stamp + ros::Duration().fromSec(scan->ranges.size()*scan->time_increment), ros::Duration(2.0), ros::Duration(0.01), &error_msg)) Originally posted by Sebastian Kasperski with karma: 1658 on 2014-10-08 This answer was ACCEPTED on the original site Post score: 2
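For illustration, the extra term in the corrected call is just the duration of one full scan: the projector needs a transform at the time of the scan's last ray, not its first. A sketch of that arithmetic (`scan_end_time` is a hypothetical helper, not part of tf):

```python
def scan_end_time(stamp_sec, n_ranges, time_increment):
    # The projector asks for a transform at the time of the *last* ray,
    # which is n_ranges * time_increment after the header stamp
    return stamp_sec + n_ranges * time_increment

# e.g. a 360-ray scan swept over 0.1 s ends 0.1 s after its stamp
end = scan_end_time(100.0, 360, 0.1 / 360)
assert abs(end - 100.1) < 1e-9
```

This is exactly the `scan->ranges.size() * scan->time_increment` offset in the corrected `waitForTransform` call.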
{ "domain": "robotics.stackexchange", "id": 19669, "tags": "ros, laser-geometry, transform" }
Maximum proper time in Minkowski Spacetime for free particles
Question: Consider two events $\mathcal{A}$ and $\mathcal{B}$ corresponding to the beginning and the ending of the trajectories of two massive particles, respectively. The particle named $\mathcal{P1}$ is in free motion, and the other particle $\mathcal{P2}$ is in accelerated motion. Both particles measure events $\mathcal{A}$ and $\mathcal{B}$ as events that occur at the same place in their respective rest frames, so both particles also measure their respective proper times elapsed between these events. How can I prove that the proper time of the free particle $\mathcal{P1}$ is greater than the proper time of the accelerated particle $\mathcal{P2}$? Answer: If we approximate the accelerated path from $A$ to $B$ by $n$ "inertial" steps: $A$ to $A_1$, $A_1$ to $A_2$,..., $A_{n-1}$ to $B$: $t_{A-A_1}' = \gamma_1 (t_1 - v_1 \Delta x_1)$ $t_{A_1-A_2}' = \gamma_2 (t_2 - v_2 \Delta x_2)$ . . . $t_{A_{n-1}-B}' = \gamma_n (t_n - v_n \Delta x_n)$ where $t_k$ is the time from $A_{k-1}$ to $A_{k}$ measured in the inertial frame, $\Delta x_k$ is the coordinate difference $x_{k} - x_{k-1}$, also measured in the inertial frame, and $v_k$ is the speed of each step. Adding the times: $t_{A-B}' = \Sigma \gamma_k t_k - \Sigma \gamma_k v_k \Delta x_k$ But: $\Delta x_k = v_kt_k$ $t_{A-B}' = \Sigma \gamma_k t_k - \Sigma \gamma_k v_k^2 t_k = \Sigma \gamma_k t_k (1 - v_k^2) = \Sigma \frac {t_k}{\gamma_k}$ Since $t_{A-B} = \Sigma t_k$, it follows that $t_{A-B} > \Sigma \frac {t_k}{\gamma_k}$, i.e. $t_{A-B} > t_{A-B}'$. The accelerated path is the limit as the time of each step goes to zero and the number of steps goes to infinity.
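The stepwise argument above is easy to check numerically (a sketch of my own, with $c = 1$): sum $\Delta\tau = \Delta t\,\sqrt{1 - v^2}$ over small inertial steps for a path at rest and for a path that moves out and back between the same two events.

```python
import math

def proper_time(velocities, dt):
    # tau = sum dt / gamma = sum dt * sqrt(1 - v^2), with c = 1
    return sum(dt * math.sqrt(1.0 - v * v) for v in velocities)

N = 1000
dt = 1.0 / N
# Free particle: stays at rest, its worldline joins A and B with v = 0
tau_free = proper_time([0.0] * N, dt)
# Accelerated particle: moves out at v = 0.5, then back, returning to the start
tau_acc = proper_time([0.5] * (N // 2) + [-0.5] * (N // 2), dt)
assert tau_acc < tau_free  # the inertial path maximizes proper time
```

Here `tau_free` is exactly 1 and `tau_acc` is $\sqrt{0.75} \approx 0.866$, as the $\Sigma\, t_k/\gamma_k$ formula predicts.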
{ "domain": "physics.stackexchange", "id": 64811, "tags": "homework-and-exercises, special-relativity, reference-frames, acceleration, time-dilation" }
Problem using Turtlebot to process images
Question: Hi, everyone ! I'm doing image processing using turtlebot right now, and I have completed programming on VS Windows, it is going perfect, but when I transformed it to ROS, it just went wrong. I am new in this area so I don't know what to do it right, I followed the tutorials and some simple example, and I successfully generated the exe file, but when I rosrun my node, the terminal says 'segmentation fault '. Here is my program, it's pretty simple even though it looks long: #include <opencv/cv.h> #include <opencv/highgui.h> #include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> #include <opencv2/imgproc/imgproc.hpp> #include <ros/ros.h> #include <image_transport/image_transport.h> #include <cv_bridge/cv_bridge.h> #include <sensor_msgs/image_encodings.h> using namespace cv; using namespace std; int main(int argc, char** argv) { ros::init(argc, argv, "environment_changes"); ros::NodeHandle ench; //ParametersSetting unsigned char t = 50; //difference threashold int w = 50; //window for checking int output = 0; //output for whether there are changes double threashold = (2 * w - 1)*(2 * w - 1)*0.92; //threashold for whether there are changes double r = 0.05; //rate of elimated size double X = 0; //x of the centre double Y = 0; //y of the centre int p = 0; //number of the changed space int i; //rows int j; //cols //LoadImage IplImage * image_ref; IplImage * image_trans; image_ref = cvLoadImage("test1.jpg"); image_trans = cvLoadImage("test2.jpg"); //ChangeToGray IplImage * image_ref_gray; IplImage * image_trans_gray; image_ref_gray = cvCreateImage(cvGetSize(image_ref), image_ref ->depth ,1); image_trans_gray = cvCreateImage(cvGetSize(image_trans), image_trans -> depth ,1); cvCvtColor(image_ref, image_ref_gray, CV_RGB2GRAY); cvCvtColor(image_trans, image_trans_gray, CV_RGB2GRAY); //ShowImage cvNamedWindow("test1", 1); cvNamedWindow("test2", 1); cvShowImage("test1", image_ref_gray); cvShowImage("test2", image_trans_gray); waitKey(0); 
//ComputeTheImage IplImage * image_sub; image_sub = cvCreateImage(cvGetSize(image_ref_gray), image_ref_gray->depth, 1); Mat ref; Mat trans; Mat sub; ref = Mat(image_ref_gray); trans = Mat(image_trans_gray); sub = Mat(image_sub); absdiff(trans, ref, sub); i = sub.rows; j = sub.cols; int m; int n; for (m = round(i*r); m <= round(i*(1 - r)); m++){ for (n = round(j*r); n <= round(j*(1 - r)); n++){ if (sub.at<uchar>(m, n) >= t){ sub.at<uchar>(m, n) = 255; X = X + m; Y = Y + n; p = p + 1; } } } X = X / p; Y = Y / p; cout << X << " " << Y << endl; * image_sub = IplImage(sub); cvNamedWindow("test3", 1); cvShowImage("test3", image_sub); waitKey(0); //ReleaseTheSpace cvDestroyWindow("test3"); cvDestroyWindow("test2"); cvDestroyWindow("test1"); cvReleaseImage(&image_sub); cvReleaseImage(&image_trans_gray); cvReleaseImage(&image_ref_gray); cvReleaseImage(&image_trans); cvReleaseImage(&image_ref); return 0; } right now I simply just do the image load and processing and then show it without dealing with Turtlebot. Can somebody tell me what to do plz!! Thank you guys sooooo much! Originally posted by Henschel.X on ROS Answers with karma: 62 on 2016-03-15 Post score: 1 Original comments Comment by Reza1984 on 2016-03-15: I haven't worked with opencv but I used ROS-PCL. There I get the point cloud using ROS messages then convert it to PCL data and process the data then covert it back to ROS message. I think for opencv also should work the same. btw you define a ros node and you never used it, is it correct way? Comment by Henschel.X on 2016-03-15: I don't know. I just guess it may need that. I plan to do as you said too, get image from ROS, convert it to Opencv Image, do the processing and change it back, I just don't know what else should I do. Answer: I used several ROS_INFO, and it came with this error, I don't know what it means. 
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /tmp/buildd/ros-hydro-opencv2-2.4.9-2precise-20141231-1923/modules/imgproc/src/color.cpp, line 3737 terminate called after throwing an instance of 'cv::Exception' what(): /tmp/buildd/ros-hydro-opencv2-2.4.9-2precise-20141231-1923/modules/imgproc/src/color.cpp:3737: error: (-215) scn == 3 || scn == 4 in function cvtColor Originally posted by Henschel.X with karma: 62 on 2016-03-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Mehdi. on 2016-03-15: You should be using the opencv 2 functions, all functions starting with cv are the old functions that probably need you to preallocate space for images. Use cv::Mat instead of IplImage as well. Comment by Henschel.X on 2016-03-15: I have to use the old function cvCvtColor to convert RGB image to gray scale image, but Mat won't work for this, but I figure it out the problem, I have to replace my file name with the path, otherwise it cannot find the image. Comment by Mehdi. on 2016-03-16: cv::Mat is totally fine to convert rgb images to gray scale, just use the function cvtColor and not cvCvtColor Comment by Henschel.X on 2016-03-16: oh OK, I didn't know that.... Thanks a lot!
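The assertion `scn == 3 || scn == 4` fires when `cvtColor` receives an image that does not have 3 or 4 channels -- typically because the load failed (as the comments found: the file path was wrong, so `cvLoadImage` returned null). A numpy-only sketch of the same guard and conversion (my own illustration; the weights mirror the standard BGR-to-gray luma formula, but this is not the cv API):

```python
import numpy as np

def to_gray(img):
    # Guard against a failed image load (imread returns None on a bad path)
    if img is None:
        raise ValueError("image failed to load -- check the file path")
    if img.ndim != 3 or img.shape[2] not in (3, 4):
        # the condition behind OpenCV's "(scn == 3 || scn == 4)" assertion
        raise ValueError("expected a 3- or 4-channel image")
    b = img[..., 0].astype(np.float64)
    g = img[..., 1].astype(np.float64)
    r = img[..., 2].astype(np.float64)
    # standard luma weights for a BGR -> gray conversion
    return np.clip(0.114 * b + 0.587 * g + 0.299 * r, 0, 255).astype(np.uint8)

gray = to_gray(np.full((2, 2, 3), 128, dtype=np.uint8))
```

Checking the return value of the load before converting would have turned the opaque assertion into a clear error message.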
{ "domain": "robotics.stackexchange", "id": 24114, "tags": "ros, opencv, image" }
Variable area manometer duct
Question: How does one measure the pressure at the end of the manometer tube when the manometer tube itself is having a variable area? Answer: The area of the manometer tube makes no difference. All that matters is the difference in the heights of the two ends (labelled $x$ in your diagram). That's why pressure units like the torr exist that are (or rather were) defined as the pressure difference when the difference in height of a mercury manometer is 1mm. All that matters is the height difference.
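To make the point concrete, the hydrostatic relation behind any manometer reading is $\Delta p = \rho g \,\Delta h$; the tube's cross-sectional area never appears. A sketch using the conventional mercury density:

```python
def manometer_pressure(delta_h_m, rho=13_595.1, g=9.80665):
    # Pressure difference depends only on the height difference x
    # between the two ends, not on the tube's area: dp = rho * g * dh
    return rho * g * delta_h_m

# 1 mm of mercury gives (very nearly) one torr, ~133.32 Pa
assert abs(manometer_pressure(0.001) - 133.322) < 0.01
```

A variable-area tube changes how much fluid shifts for a given pressure, but once the column settles, only the height difference matters.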
{ "domain": "physics.stackexchange", "id": 15803, "tags": "thermodynamics, fluid-dynamics, pressure" }
Was Max Born the first to notice a connection between quantum mechanics and randomness?
Question: Max Born introduced the Born Rule in a paper from 1926. But was this really the first time that a connection between quantum mechanics and randomness was noticed? Today, quantum mechanics and randomness seem to be so closely connected that it's hard to imagine that 21 years should have past between Einstein's 1905 paper on the photoelectric effect, and the realization that randomness might be involved if "energy is exchanged only in discrete amounts". Answer: Certainly not. I was already known for a long time that certain microscopic phenomena were best described by probabilistic theories, the prime example being radioactivity. Even on the classical level, statistical mechanics (canonical example: Brownian motion) had prepared some physicists to relax their classical conceptions of reality. However, it was of course not clear how exactly quantum phenomena and 'randomness' are connected until the advent of quantum mechanics and Born's interpretation of $|\Psi(x,t)|^2$. The small book 'Uncertainty' is largely dedicated to the development of these concepts in the late 19th/early 20th century, and is a nice read even though it does not really get down to the nitty gritty of the technical details.
{ "domain": "physics.stackexchange", "id": 16189, "tags": "quantum-mechanics, history, probability, born-rule" }
How can a non-rotating black hole or singularity be created?
Question: Every star or other massive body in the universe rotates, if only a little. If such a body collapses, its spin, any spin at all, and thus, angular momentum approach infinity as r approaches 0. Angular momentum must be conserved. In order to produce a true singularity or a non-rotating black hole, there must be some process by which ALL angular momentum (energy) is dissipated or converted to something else. Is there such a process? Otherwise all black holes must rotate, perhaps too fast or too slowly for us to detect, but they must rotate. Am I missing something? I've never heard this discussed before. Answer: There are, to our knowledge, no non-rotating black holes (or other massive bodies). Perhaps you have heard something like the Schwarzschild metric being discussed. The Schwarzschild metric was discovered in 1915 and is a solution to non-rotating bodies. It is a useful approximation for describing slowly rotating astronomical objects, including Earth and the Sun. It is also much simpler to solve for non-rotating bodies than for rotating bodies. However, the Kerr metric is a generalization of the Schwarzschild metric and describes a solution for a rotating black-hole. This solution was not discovered until 1963. An extension to this solution, the Kerr–Newman metric, was discovered shortly thereafter in 1965.
{ "domain": "physics.stackexchange", "id": 59243, "tags": "black-holes, angular-momentum, astrophysics, astronomy" }
What do decomposers eat if they break down complex substances and make it available for producers?
Question: I have a doubt for which I haven't found the exact answer. Decomposers break down complex substances into simpler substances and make it available for producers. But what does decomposers get (or eat) in this process. Answer with an example would be appreciated. Thanks in advance! Answer: There is stored energy in complex substances: that's what the decomposers use. For instance, if you've ever made bread, you'll have seen yeast decompose sugars & starches (complex substances) into less-complex CO2 and ethanol, using the energy released to reproduce.
{ "domain": "biology.stackexchange", "id": 11458, "tags": "food, decomposition, food-web" }
How to decode a MIMO stream using V-BLAST?
Question: I am attempting to implement the V-BLAST algorithm for MIMO streams as described by Wolniansky et al. in the following paper: https://ieeexplore.ieee.org/document/738086 (I am not commenting on its useful-ness or how efficient it is, I am simply trying to implement it using the paper as written.) The algorithm assumes the following: The channel, H, is known perfectly (or already assumed). It is a rich scattering environment with some number of Transmitters and some number of Receivers, but for simplicity, I myself will assume that the number of Transmitters and the number of Receivers are equal, N. The vector of received symbols follows the form r = Ha + n, where H is the channel, a is the encoded transmitted symbols, and n is Additive White Gaussian Noise (unit-mean). The algorithm is described as follows in the paper: $$ \begin{align*} &\textrm{initialization:}\\ &(1)i\leftarrow 1\\ &(2)G_1 = H^+\\ &(3)k_1 = \textrm{argmin}_j||(G_1)_j||^2\\\\ &\textrm{recursion:}\\ &(4)w_{k_i} = (G_i)_{k_i}\\ &(5)y_{k_i} = w_{k_i}^Tr_i\\ &(6)\hat{a}_{k_i} = Q(y_{k_i})\\ &(7)r_{i+1} = r_i - \hat{a}_{k_i}(H)_{k_i}\\ &(8)G_{i+1} = H^+_{\overline{k_i}}\\ &(9)k_{i+1} = \textrm{argmin}_{j\not\in\{k_1,...,k_i\}}||(G_{i+1})_j||^2\\ &(10)i\leftarrow i+1 \end{align*} $$ (An image from the paper can also be found here.) Some definitions: $H^+$ is the pseudoinverse of $H$. (As in a zero-forcing decoder) The algorithm confusingly mixes notations. On (3), $(G_1)_j$ is the j-th row of $G_1$, but on (7), $H_{k_i}$ is the $k_i$th column of H. $Q(y)$ is the quantization (slicing) operation, according to the constellation used. With that out of the way, I'm getting lost in the implementation. I think I understand it on a surface level, but I get tripped up in the recursion. So then, I have a few questions: In my implementation with QPSK symbols and N = 4, $r$ is a vector of size 4x1, and $G$ is 4x4. So in line (5), what would the expected size of $y_{k_i}$ be? 
My MATLAB implementation gives me a 1x4 array (row vector), but I'm not 100% certain that's right. What exactly is the Quantization (slicing) operation on line (6)? Is it simply demodulating whatever it is (using qpskdemod in MATLAB), or am I approximating the received points to its nearest points on the ideal constellation? What would the expected size of $\hat{a}$ be, a 4x1 row vector? I think with this I'll be able to fully implement the algorithm in MATLAB. I originally wrote this question going line-by-line with my MATLAB implementation at each line of the algorithm, but the question got really long. I can provide my MATLAB implementation if need be. I'd also greatly appreciate anyone's attempts at a MATLAB implementation so I can fully understand the workings of this algorithm, but that isn't necessarily necessary. Thank you so much in advance! Answer: In V-BLAST, $y_{ki}$ is a scalar (that is, a complex number). It is the (equalized) symbol transmitted by antenna $k_i$, with noise. Keep in mind that all vectors are column vectors ($n \times 1$). Then, $w_{ki}$ is $4 \times 1$, and $w^T_{ki}$ is $1 \times 4$. Then, $w^T_{ki} r_i$ is $1 \times 1$, i.e. a scalar. The "quantization" operation $Q(y_{ki})$ consists in finding the constellation element that is closest (in Euclidean distance) to $y_{ki}$. So, $Q(y_{ki})$ is the estimate of the symbol transmitted by antenna $k_i$.
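Answering the size questions in code: below is a hedged numpy sketch of the zero-forcing V-BLAST recursion (function and variable names are mine, not from the paper). Note that $y_{k_i}$ comes out as a single complex scalar, and $Q(\cdot)$ is implemented as nearest-point slicing on the constellation, as the answer describes; with complex signals the pseudoinverse rows already carry the conjugation, so the plain product plays the role of $w^T r$.

```python
import numpy as np

def vblast_zf(H, r, constellation):
    # Zero-forcing V-BLAST with optimal ordering, following the recursion:
    # pseudoinverse -> pick the min-norm row -> equalize -> slice -> cancel
    n = H.shape[1]
    Hw = H.astype(complex).copy()          # working copy; columns get removed
    r = r.astype(complex).copy()
    active = list(range(n))                # transmit antennas not yet detected
    detected = {}
    for _ in range(n):
        G = np.linalg.pinv(Hw)             # steps (2)/(8): G_i = H^+
        j = int(np.argmin(np.sum(np.abs(G) ** 2, axis=1)))  # steps (3)/(9)
        w = G[j]                           # step (4): nulling row vector
        y = w @ r                          # step (5): y_ki is a SCALAR
        a_hat = constellation[np.argmin(np.abs(constellation - y))]  # (6): Q(y)
        detected[active[j]] = a_hat
        r = r - a_hat * Hw[:, j]           # step (7): cancel the detected symbol
        Hw = np.delete(Hw, j, axis=1)      # step (8): drop column k_i
        active.pop(j)
    return np.array([detected[k] for k in range(n)])

# Noiseless sanity check with QPSK and N = 4
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
a = qpsk[rng.integers(0, 4, size=4)]
assert np.allclose(vblast_zf(H, H @ a, qpsk), a)
```

Without noise, zero forcing recovers the transmitted vector exactly, which makes a handy regression test before adding AWGN.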
{ "domain": "dsp.stackexchange", "id": 11233, "tags": "matlab, qpsk, quantization, mimo" }
Can we watch the light travelling in slow motion?
Question: If light travels 3 lac km in 1 second we cannot see it moving and it seems to be continuous. Everything in our lives are far slower than this. Like we cannot differentiate between the different images that make a movie clip if it changes 60 frames a second and the movie is continuous. Of course, as each frame or image making a movie clip has its own identity we can differentiate between each one of them while we cannot differentiate between light it is continuous. i have read somewhere recently a scientist in UK has succeeded to slow down the light by moving it between certain molecules at specific temperatures and pressure. If we make a movie of light and try to watch it in slow motion. How much slow moving it may seem? can i slow it to more than 60 times per second. Do we have a instrument which can help us watch the light in slow motion. about 1/300 000 000 times slower so that in each fraction of time in seconds it only moves 1 meter. And since I want to make it seem to move in slow motion I will be able to watch it over a longer period of time. obviously I cannot watch what happens in that small fraction of second. Answer: Its really hard to do something like this but people at MIT have done something that might interest you. (This is "not" a direct observation of light in slow motion) They call it "Femto-Photography". The equipment they made captures images at a rate of roughly a trillion frames per second. But since direct recording of light is impossible at that speed, so the camera takes millions of repeated scans to recreate each image. Below are some example time lapse images they have produced: Please check out their website and read the Frequently Asked Questions on the page to get more understanding of this concept : Femto-Photography: Visualizing Photons in Motion at a Trillion Frames Per Second
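Some quick arithmetic on the numbers involved (my own sketch): even at the roughly one-trillion-frames-per-second effective rate of the femto-photography setup, light advances only a fraction of a millimetre between frames, and making light appear to move 1 metre per second of playback requires stretching time by a factor of about 300 million, as the question anticipates.

```python
c = 299_792_458.0        # speed of light, m/s
fps = 1e12               # effective frame rate of the femto-photography setup
per_frame = c / fps      # distance light covers between consecutive frames
slowdown = c / 1.0       # factor needed to make light appear to move at 1 m/s

assert 0.0002 < per_frame < 0.0004   # roughly 0.3 mm per frame
assert round(slowdown) == 299_792_458
```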
{ "domain": "physics.stackexchange", "id": 29207, "tags": "speed-of-light" }
Can $\mathbb{R}^4$ be globally equipped with a non-trivial non-singular Ricci-flat metric?
Question: I'm self-studying general relativity. I just learned the Schwarzschild metric, which is defined on $\mathbb{R}\times (E^3-O)$. So I got a natural question: does there exist a nontrivial solution (other than Minkowski spacetime) to the vacuum Einstein field equation on a manifold diffeomorphic to the entire $\mathbb{R}^4$ (i.e., without singularity)? In other words, is there a Lorentzian metric is Ricci-flat but not Riemann-flat on $\mathbb{R}^4$? Answer: Yes you can have space-time which is Ricci flat but still has Weyl curvature. Imagine two Minkowski space-times $M^+$ and $M^-$ separated by the null plane $\nu=0$. So $M^+$ (resp. $M^-$) corresponds to the region where $\nu>0$ (resp. $\nu<0$). Say, the metric in $M^-$ is given by $$ds^2_-= 2dud\nu-2d\zeta d\bar{\zeta}$$ But as soon as you cross the $\nu=0$ plane, you shift your $u$ coordinate to $u-f(\zeta,\bar{\zeta})$ so that your metric now looks like $$ds^2_+=2d\nu(du-f(\zeta,\bar{\zeta})\delta(\nu)d\nu)-2d\zeta d\bar{\zeta}$$. You can see that space-time is Minkowskian except at $\nu=0$ where you have delta function type singularity. This is an example of Impulsive wave. Ricci flatness implies $$\frac{\partial^2f}{\partial\zeta\partial\bar{\zeta}}=0$$ which means $f=g(\zeta)+\bar{g}(\bar{\zeta})$ for some function $g$, while the Weyl curvature is proportional to $\frac{\partial^2f}{\partial^2\zeta}$ and $\frac{\partial^2f}{\partial^2\bar{\zeta}}$. This is now a gravitational impulsive wave. Although there is an apparent delta function in metric components, it is only an artifact of choice of coordinates. The metric is still $\mathscr{C}^0$ at the junction. You can read more about this in J. L. Synge and L. O'Raifeartaigh, General Relativity; Papers in Honour of J. L. Synge
{ "domain": "physics.stackexchange", "id": 98639, "tags": "general-relativity, spacetime, differential-geometry, metric-tensor, curvature" }
How much does proving that a special case of a problem is NP-complete tell me about if the general problem is NP-complete?
Question: Define a graph problem as follows. Given a graph $G$ and two integers $c$ and $k$, delete $k$ nodes and all edges incident to them, such that, in the remaining graph, every connected component has at most $c$ nodes. What can I say about the time complexity class of the above problem? I can easily prove that under the condition that $c=1$, I can reduce Vertex Cover to the problem in polynomial time, and thereby prove that the problem is NP-hard under these parameters. My work book says that this, accompanied with a polynomial verification, is enough to prove that the problem is NP-complete, but I have a hard time accepting this since we have only observed the problem under a certain set of parameters? Answer: If a problem $P'$ is a special case of problem $P$, this means that $P'$ can be reduced to $P$. Therefore, if $P'$ is NP-hard, it follows that $P$ is NP-hard (because if any problem $H$ in NP can be reduced to $P'$, then it can also be reduced to $P$). If you also have a proof that $P$ is in NP (your proof of polynomial verification; for $P$, not just for $P'$), then you have a proof that $P$ is NP-complete.
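The relationship between the special case and Vertex Cover can even be checked by brute force on small graphs (a hedged sketch; the names are mine): when $c = 1$, the remaining graph may contain no edges at all, so the deleted set must be a vertex cover of size $k$.

```python
from itertools import combinations

def can_delete(adj, k, c):
    # Brute force: is there a set of k vertices whose removal leaves
    # every connected component with at most c nodes?
    nodes = list(adj)
    for removed in combinations(nodes, k):
        kept = set(nodes) - set(removed)
        seen = set()
        ok = True
        for v in kept:
            if v in seen:
                continue
            stack, size = [v], 0
            while stack:                      # DFS on the induced subgraph
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                size += 1
                stack.extend(w for w in adj[u] if w in kept)
            if size > c:
                ok = False
                break
        if ok:
            return True
    return False

# For c = 1, feasibility is exactly "the graph has a vertex cover of size k"
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert not can_delete(triangle, 1, 1)  # the triangle has no vertex cover of size 1
assert can_delete(triangle, 2, 1)      # but {0, 1} covers every edge
```

This restriction is what the reduction in the question exploits; the answer explains why NP-hardness of the restriction transfers to the general problem.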
{ "domain": "cs.stackexchange", "id": 21011, "tags": "graphs, time-complexity, np-complete" }
Converting a dict to a list + ID
Question: I wrote this code: PROJECTS_LIST = [ project if not project.update({"project_id": project_id}) else None for project_id, project in PROJECTS.items() ] where PROJECTS is a dict. The goal is to convert a dict like {"project123": {"a": "b"}} to [{"project_id": "project123", "a": "b"}] I worry that this isn't the best approach. Answer: project if not project.update({"project_id": project_id}) else None The above section is rather odd. I would prefer one of the following solutions, depending on Python version: Python 3.9+: use the dictionary merging operator |: project | {"project_id": project_id} Before Python 3.9: the nicest alternative, in my opinion, is to use dictionary unpacking: {**project, "project_id": project_id} I'd recommend having a newline before the for, but otherwise the code looks fine. PROJECTS_LIST = [ project | {"project_id": project_id} for project_id, project in PROJECTS.items() ]
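A quick check of the pre-3.9 unpacking form against the stated goal (a sketch):

```python
PROJECTS = {"project123": {"a": "b"}}

PROJECTS_LIST = [
    {**project, "project_id": project_id}
    for project_id, project in PROJECTS.items()
]

assert PROJECTS_LIST == [{"a": "b", "project_id": "project123"}]
```

Unlike the `update`-based version, this builds new dicts and leaves the inner dicts in `PROJECTS` unmodified, which is usually the safer behaviour.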
{ "domain": "codereview.stackexchange", "id": 45617, "tags": "python, python-3.x" }
Mathematical meaning for Algebraic Bethe Ansatz
Question: I'm a mathematician who's trying to understand the meaning of Algebraic Bethe Ansatz. What I understood is that when dealing with quantum integrable models (like XXZ Heisenberg spin chain), one is interested in showing that there are conserved quantities (ie operators that commute with the Hamiltonian). My questions are about the tools that we have to use in order to exhibit them. Can someone explain to me in which way I should think about these objects? The Lax operator The $R$ matrix The monodromy matrix The transfer matrix I understand that one defines a Lax operator then show that it verifies a certain commutation relation with the $R$ matrix (which verifies Yang-Baxter equation). One multiplies the Lax operators along all sites and obtain the monodromy matrix, taking its trace yields the transfer matrix. Finally one can show that this last operator can be seen as the generating function of conserved quantites. Do not hesitate to correct me where I'm wrong. Answer: This is a big topic and the most suitable answer depends on your background. Since you're a mathematician I assume that you are familiar with representation theory. I'll try to indicate the key rep-th terms corresponding to the terms in the OP, and hope that helps. For the basic example one starts from the Yangian of $\mathfrak{gl}_2$. It has a presentation (called Drinfeld 3rd realisation = Faddeev--Reshetikhin--Takhtadzhyan presentation = '$RLL$ presentation') by generators and relations. The generators are often combined into an operator $L(u)$ on an ('auxiliary') vector space (which is 2d for the case of $\mathfrak{gl}_2$) with entries that are formal power series in the 'spectral parameter' $u$ whose coefficients lie in the Yangian. This is the $L$-operator, sometimes called monodromy matrix -- though that name often also/instead denotes its image in a representation, see below -- and also often denoted by $T$ instead of $L$. 
The generators are subject to quadratic relations called the '$RLL$ relations' because of their symbolic form (and called the 'fundamental commutation relations' by Faddeev), where the (rational) $R$-matrix contains the 'structure constants' of the Yangian. For this to define an associative algebra, the $R$-matrix needs to obey the Yang--Baxter equation. Any finite-dimensional representation of $\mathfrak{sl}_2$ gives rise to a representation of this Yangian called an evaluation representation, with an 'inhomogeneity' parameter that can be viewed either as a complex parameter or as an indeterminate. In the basic case we do this for the 2d (spin 1/2) irrep and take the trivial value of the inhomogeneity parameter. Physically, this is a single site. The image of the $L$-operator here is sometimes called the (local) Lax operator. Bigger representations can be constructed by taking tensor products to obtain the Hilbert space of the spin chain (multiple spins). The image of the $L$-operator in the resulting space is the (global Lax operator or) monodromy matrix. It still acts on the auxiliary space as well. Taking the trace over this auxiliary space gives the transfer matrix, which is the image of an abelian subalgebra of the Yangian (called the Bethe subalgebra) which, when expanded in the spectral parameter, provides a family of commuting operators (conserved charges) on the spin-chain Hilbert space, including the translation operator and the Heisenberg XXX Hamiltonian. Finally, in this language, the algebraic Bethe ansatz is a sort of highest-weight construction of the eigenvectors of the transfer matrix (and thus the spin chain) that produces actual eigenvectors provided the spectral parameters involved solve the Bethe-ansatz equations.
For the XYZ chain, start from an elliptic quantum group, with elliptic $R$-matrix; then you can still construct a transfer matrix and obtain commuting Hamiltonians, but the construction of eigenvectors is much harder since (depending on the type of elliptic quantum group) there are no highest-weight representations (spin-$z$ is not conserved).
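For a mathematician, a small numerical illustration may help fix ideas (entirely my own sketch, not from the answer): in the rational case one can take the Lax operator $L_{a,n}(u) = u\,\mathbb{1} + P_{a,n}$ with $P$ the permutation (swap) operator on $\mathbb{C}^2 \otimes \mathbb{C}^2$, multiply over the sites, trace out the auxiliary space, and verify that the resulting transfer matrices commute at different spectral parameters.

```python
import numpy as np

def swap(n_qubits, i, j):
    # Permutation operator P exchanging tensor factors i and j
    dim = 2 ** n_qubits
    M = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> q) & 1 for q in range(n_qubits)]
        bits[i], bits[j] = bits[j], bits[i]
        M[sum(bit << q for q, bit in enumerate(bits)), b] = 1.0
    return M

def transfer(u, N):
    # Tensor factor 0 is the 2d auxiliary space; factors 1..N are the chain sites
    n = N + 1
    dim = 2 ** n
    T = np.eye(dim)
    for site in range(1, n):
        L = u * np.eye(dim) + swap(n, 0, site)  # rational Lax: L(u) = u*1 + P
        T = L @ T                               # monodromy = ordered product
    # Partial trace over the auxiliary factor (least significant bit here)
    T = T.reshape(2 ** N, 2, 2 ** N, 2)
    return T[:, 0, :, 0] + T[:, 1, :, 1]

t1, t2 = transfer(0.3, 3), transfer(1.7, 3)
assert np.allclose(t1 @ t2, t2 @ t1)  # the Bethe subalgebra is abelian
```

Expanding `transfer(u, N)` in powers of $u$ then exhibits the commuting conserved charges, including (a combination giving) the XXX Hamiltonian.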
{ "domain": "physics.stackexchange", "id": 99328, "tags": "quantum-spin, spin-models, integrable-systems, spin-chains" }
How many full orbits around the galactic center our Earth has done so far since its creation?
Question: I have read that the estimated age of our Milky Way galaxy is 13.61 billion years, which, given the current size and state of our galaxy, is about 59.17 galactic years, each galactic year being the time taken for our Sun to make a full rotation around the current galactic center (i.e. 230 million years). However, since our galaxy started out much smaller and has evolved, and given that the age of the Earth is 4.543 billion years, is there a way to estimate how many full rotations around the galactic center Earth has made since its creation? Answer: There is no guarantee or likelihood that the Sun was in its present orbit in the past. In fact it is more likely that it has migrated to its present position ($r \simeq 8$ kpc) from a smaller galactocentric radius ($4<r<7$ kpc). This is inferred from the fact that the Sun has a larger metallicity than most of the stars in its current neighbourhood, together with the fact that there is a negative metallicity gradient with galactocentric radius (e.g. Minchev et al. 2013; although others disagree - see Martinez-Barbosa et al. 2015). That being so (and it is by no means certain), and with the Galactic rotation curve being approximately flat, the orbital velocity at smaller galactocentric radius was the same, so the orbital period would have been shorter. In other words, it is likely that the Sun has executed more circuits of the Galaxy than 4.57 billion/230 million $\simeq 20$ and perhaps as many as $\simeq 30$. You are right that the Milky Way has evolved over time, but I think its gravitational potential has probably been reasonably settled for the last 5 billion years.
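The figures in the answer are easy to reproduce (a sketch assuming a flat rotation curve, so the orbital period scales linearly with radius):

```python
AGE_EARTH = 4.57e9        # years (the value used in the answer)
P_NOW = 230e6             # orbital period at r ~ 8 kpc, in years

def period_at(r_kpc, r_now=8.0, p_now=P_NOW):
    # Flat rotation curve: v is constant, so T = 2*pi*r/v scales as r
    return p_now * r_kpc / r_now

orbits_if_always_here = AGE_EARTH / P_NOW            # ~20
orbits_if_born_at_5kpc = AGE_EARTH / period_at(5.0)  # ~32

assert 19 < orbits_if_always_here < 21
assert 30 < orbits_if_born_at_5kpc < 33
```

A Sun that spent much of its life closer in than 8 kpc would therefore have completed noticeably more circuits, which is where the "perhaps as many as 30" comes from.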
{ "domain": "physics.stackexchange", "id": 86026, "tags": "astrophysics, orbital-motion, earth, galaxies, milky-way" }
Is VPN necessary for networking turtlebot?
Question: Hi, I was testing my system for installing the turtlebot setup. I was wondering if VPN is a necessary step in the networking setup? Can the two PCs communicate only via wifi? I did not understand the concept of VPN completely. (Lack of detailed VPN setup instructions on the turtlebot tutorial page, I guess.) Next, I want to know how one maps a large area, say corridors and multiple large rooms, at the same time. The wifi router cannot have a large enough range to maintain networking between the two PCs. How does one map large areas in such a situation with the turtlebot PC and workstation PC? Originally posted by Vegeta on ROS Answers with karma: 340 on 2012-10-13 Post score: 0 Answer: VPN is only a secure way to communicate. You can run ROS over normal Wifi without problems. The distance problem is solved by only processing data on the robot. If that isn't powerful enough, record a logfile and process that on the workstation later. Originally posted by dornhege with karma: 31395 on 2012-10-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Vegeta on 2012-10-13: thanks a lot, can you please tell me how to record the log file? Comment by joq on 2012-10-14: Use the rosbag command. See: http://www.ros.org/wiki/rosbag/Commandline#record
{ "domain": "robotics.stackexchange", "id": 11353, "tags": "ros, wifi, turtlebot, ros-electric" }
Missing package: how to find the correct package
Question: While running cmake, the following missing-package problem occurred: Could not find a package configuration file provided by "sys" with any of the following names: sysConfig.cmake sys-config.cmake How do I solve this? Originally posted by anadgopi1994 on ROS Answers with karma: 81 on 2017-02-22 Post score: 0 Answer: Are you by any chance trying to find_package(..) the sys module from Python in your CMakeLists.txt? If so, that is not needed and won't work: apart from the fact that sys is part of the Python standard runtime libs, Python modules are never find_package(..)ed. Just remove it from your CMakeLists.txt. Originally posted by gvdhoorn with karma: 86574 on 2017-02-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27095, "tags": "ros, package" }
GOC allylic,vinylic, benzylic positions, carbocation stability
Question: Carbo-cations may be stabilized by: (a) π-bonds only at allylic position (b) π-bonds only at vinylic position (c) π-bonds at allylic and benzylic position also (d) -I effect While the answer is obviously not (d), I am really confused about what 'allylic', 'vinylic' and 'benzylic' positions actually mean. I have heard about allylic/vinylic/benzylic carbons, but positions..? Please, help. Thank-you. The question has been taken from ADVANCED PROBLEMS IN ORGANIC CHEMISTRY by HIMANSHU PANDEY Answer: [Image: allylic position explanation] [Image: benzylic position explanation] [Image: vinylic position explanation] For carbocation stability, think about which case has opportunity for electron resonance, i.e., which case(s) allows the positive charge to be delocalized to adjacent pi bonds. This resource might help.
{ "domain": "chemistry.stackexchange", "id": 7688, "tags": "organic-chemistry" }
How to fit all genes (labels) in chromosome ideogram plot made by RCircos package?
Question: I am using Rcircos to make a chromosome ideogram plot for my gene list (n = 45). However, I am getting this error: Not all labels will be plotted. Type RCircos.Get.Gene.Name.Plot.Parameters() to see the number of labels for each chromosome. I am using the below-given script and want to know if there is any way to fit all my genes in the chromosome ideogram plot. library(RCircos) data(UCSC.HG19.Human.CytoBandIdeogram) RCircos.Set.Core.Components( cyto.info=UCSC.HG19.Human.CytoBandIdeogram, chr.exclude = NULL, tracks.inside = 10, tracks.outside = 0) #Making a plot with RCircos out.file <- "RCircos_IF.pdf"; pdf(file=out.file, height=8, width=8, compress=TRUE); RCircos.Set.Plot.Area(); par(mai=c(0.25, 0.25, 0.25, 0.25)); plot.new(); plot.window(c(-2.5,2.5), c(-2.5, 2.5)); # plot chromosome ideogram RCircos.Set.Plot.Area(); RCircos.Chromosome.Ideogram.Plot() gene.lable.data = read.table ("IF.txt", sep="\t", header=T, row.names = 1) # IF.txt is my input file RCircos.Gene.Connector.Plot(gene.lable.data, track.num = 1, side = "in"); track.num <- 2; RCircos.Gene.Name.Plot(gene.lable.data,name.col = 4, track.num) dev.off() My input file looks like Chromosome chromStart chromEnd Gene.name 1 chr1 150363091 150476566 RPRD2 2 chr1 150549369 150560937 ADAMTSL4 3 chr1 91949371 92014426 BRDT 4 chr1 31365625 31376850 FABP3 5 chr1 150960583 150975004 CERS2 Answer: To change the maximum number of genes in RCircos, we can modify char.width, as described in this link. The actual value in an RCircos session can be modified with the get and reset methods for plot parameters. params <- RCircos.Get.Plot.Parameters() #$char.width params$char.width <- 100 #default 500 RCircos.Reset.Plot.Parameters(params) #the maxLabels are updated accordingly RCircos.Get.Gene.Name.Plot.Parameters()
{ "domain": "bioinformatics.stackexchange", "id": 1503, "tags": "r, circos" }
Is there a simple way to calculate Clebsch-Gordan coefficients?
Question: I was reading angular momenta coupling when I came across these CG coefficients; there is a table in Griffith's but it doesn't help much. Answer: It might depend on your definition of "simple". For easy cases (low numbers, direct steps), yes, it is simple. However, it gets complicated too fast. What we do is: calculate only the easy ones, and leave the rest to computers. I really encourage you to do this. The trick is: at the top of the ladder, there is only one possibility, except for a global phase factor. For example: $| j_1 \ j_2 ; m_1 m_2 \rangle = |1\ ½; \ 1 \ ½\rangle $ can only be one possibility: $|J \ M\rangle= |3/2\quad 3/2\rangle $, because it is the maximum value of both $j_1$ and $j_2$. They can only yield the maximum $J,M$. So we know one equivalence: $|1\ ½; \ 1 \ ½\rangle \equiv|3/2\quad 3/2\rangle $ or, if you want, $|1\ ½; \ 1 \ ½\rangle \equiv 1\cdot|3/2\quad 3/2\rangle $ I know, I know, except for a phase factor, but let's ignore that. So, what do we do now? We apply $J_-$ to both sides of the equation. On the one hand, $J_-=J_{1-}+J_{2-}$, so we know what it does: $J_- |1\ ½; \ 1 \ ½\rangle = J_{1-}|1\ 1\rangle +J_{2-}|½\ ½\rangle $ (carry on yourself). On the other hand, the result is also an angular momentum, so: $J_- | 3/2 \quad 3/2\rangle = \hbar\sqrt{3/2\cdot(3/2+1)-3/2\cdot(3/2-1)} \ \ |3/2 \quad ½\rangle $ So, equating both sides, you can get $|3/2 \quad ½\rangle $ as a function of $|1\ ½; \ 1 \ -½\rangle$ and $|1\ ½; \ 0 \ ½\rangle$. You can get an orthogonal vector to that one, so you complete the sub-space. Now, you can keep applying $J_{-}$ to both of them. Like that, you get all C-G coefficients. Of course, you can also start at the bottom of the ladder and go up with $J_+$. This is the standard procedure. You do it once with the simplest case (for example, two ½ spins). Then, you make the computer program and breathe. By the way, there are already many of them, even online.
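For anything past the first rung or two of the ladder, it is easier to let the computer do it, as the answer suggests. Here is a self-contained sketch (my own illustration, not from the original answer) that evaluates $\langle j_1\,m_1;\,j_2\,m_2\,|\,J\,M\rangle$ from Racah's closed-form sum, using the Condon-Shortley phase convention; it assumes the inputs satisfy the usual triangle and projection conditions.

```python
from fractions import Fraction
from math import factorial, sqrt

def _f(x):
    """Factorial of a Fraction that must be a non-negative integer."""
    assert x.denominator == 1 and x >= 0, "invalid quantum numbers"
    return factorial(int(x))

def clebsch_gordan(j1, m1, j2, m2, J, M):
    """<j1 m1; j2 m2 | J M> via Racah's formula (Condon-Shortley phases).
    Half-integer arguments may be given as 0.5, 1.5, ... or Fractions."""
    j1, m1, j2, m2, J, M = (Fraction(x) for x in (j1, m1, j2, m2, J, M))
    if m1 + m2 != M:
        return 0.0
    pre = sqrt((2 * J + 1)
               * _f(j1 + j2 - J) * _f(j1 - j2 + J) * _f(-j1 + j2 + J)
               / _f(j1 + j2 + J + 1))
    pre *= sqrt(_f(J + M) * _f(J - M) * _f(j1 - m1) * _f(j1 + m1)
                * _f(j2 - m2) * _f(j2 + m2))
    total, k = 0.0, 0
    while True:
        args = (j1 + j2 - J - k, j1 - m1 - k, j2 + m2 - k,
                J - j2 + m1 + k, J - j1 - m2 + k)
        if min(args[:3]) < 0:        # no further k can contribute
            break
        if min(args) >= 0:           # all factorial arguments are valid
            denom = factorial(k)
            for a in args:
                denom *= _f(a)
            total += (-1) ** k / denom
        k += 1
    return pre * total
```

For instance, `clebsch_gordan(1, 1, 0.5, 0.5, 1.5, 1.5)` returns 1.0 (to machine precision), matching the top-of-the-ladder identification in the answer.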
{ "domain": "physics.stackexchange", "id": 91426, "tags": "quantum-mechanics, angular-momentum, hilbert-space, group-representations" }
Understanding camera lens and sensor compatibility
Question: When a lens is specified not just by its focal length, but also the sensor format for which it is intended, what compatibility limits does that place on the sensors it can be used with? For example, if I have a CCD camera with a 1/3-inch sensor, must I use a 1/3-inch lens, or will a 1/2 inch lens (or anything greater or equal to 1/3 inch) do just as well? If I took a pair of test images using a fixed camera and target scene, one image using a 1/3-inch 16mm lens and the other using a 1/2-inch 16mm lens, would the images be virtually identical to the casual observer? What differences should an expert observer see, if any? Answer: A lens intended for a 1/2 inch sensor should work fine. Camera lenses normally produce a circular image called “image circle”, and you want the diameter of this circle to be larger than the diagonal of the sensor. In large format photography, it is quite common to have image circles significantly larger than the imaging medium, as this allows for camera movements. It is also quite common, in digital photography, to use lenses intended for a “full frame” sensor (meaning 24×36 mm), on DSLRs with smaller sensors. If I took a pair of test images using a fixed camera and target scene, one image using a 1/3-inch 16mm lens and the other using a 1/2-inch 16mm lens, would the images be virtually identical to the casual observer? What differences should an expert observer see, if any? They should be practically the same. The 1/2-inch lens should probably deliver less vignetting. It may also have slightly less resolution, as it is intended to be used with a larger sensor, with presumably larger pixels. If the lenses are of good enough quality, however, it will be hard to notice any difference.
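The key check described in the answer (image circle diameter vs. sensor diagonal) is a one-line computation. A small sketch follows, using nominal active-area dimensions for the 1/3-inch and 1/2-inch formats; these conventional dimensions are my assumption, not taken from the original answer.

```python
from math import hypot

# Nominal active-area sizes (mm) for common CCD format names;
# the inch designations are historical and do not measure the sensor.
SENSOR_MM = {
    '1/3"': (4.8, 3.6),   # diagonal 6.0 mm
    '1/2"': (6.4, 4.8),   # diagonal 8.0 mm
}

def covers(image_circle_mm, fmt):
    """True if a lens's image circle is at least the sensor diagonal."""
    w, h = SENSOR_MM[fmt]
    return image_circle_mm >= hypot(w, h)
```

A lens made for a 1/2" sensor (image circle of at least ~8 mm) comfortably covers the ~6 mm diagonal of a 1/3" sensor, which is why the answer expects it to work fine on the smaller chip.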
{ "domain": "physics.stackexchange", "id": 18089, "tags": "optics, lenses, camera" }
Can't get dynamic reconfigure to spin when callback is a member of a class
Question: I am using ROS Indigo, Ubuntu 14.04. I get no compile errors. When I debug with GDB I can see I go into my callback once, but no more thereafter. This also happened when the callback was made global outside the class. I thought that perhaps this was why it was not being called. But it seems this is not the problem. I have three other subscription callbacks in this code that work fine, as well as one publication. Below I show the callback and the main function. Currently I use spinOnce() but I have tried other spin mechanisms like spin(), AsyncSpinner, and MultiThreadedSpinner to see if that made a difference. The full file / ROS package can be found here: https://github.com/birlrobotics/birl_baxter_controllers/blob/master/force_controller/src/force_controller_topic.cpp Any help would be greatly appreciated. namespace force_controller { //*********************************************************************************************************************************************** // callback(...) for dynamic Reconfigure set as a global function. // When the rqt_reconfigure gui is used and those parameters are changed, the config.param_name in this function will be updated. Then, these parameters need to be set to the private members of your code to record those changes. //*********************************************************************************************************************************************** //void callback(force_error_constants::force_error_constantsConfig &config, uint32_t level) void controller::callback(force_error_constants::force_error_constantsConfig &config, uint32_t level) { // Print the updated values ROS_INFO("Dynamic Reconfigure Prop gains: %f %f %f %f %f %f\nDerivative gains: %f", // Proportional Gains config.k_fp0, config.k_fp1, config.k_fp2, config.k_mp0, config.k_mp1, config.k_mp2, // Derivative Gains config.k_fv0); // Save proportional gains to the corresponding data members. 
k_fp0=config.k_fp0; k_fp1=config.k_fp1; k_fp2=config.k_fp2; k_mp0=config.k_mp0; k_mp1=config.k_mp1; k_mp2=config.k_mp2; // Save derivative gains to the corresponding data members. k_fv0=config.k_fv0; // change the flag force_error_constantsFlag = true; } ... ---------------- int main(int argc, char** argv) { ros::init(argc, argv, "control_basis_controller"); // Create a node namespace. Ie for service/publication: <node_name>/topic_name or for parameters: <name_name>/param_name ros::NodeHandle node("~"); // Instantiate the controller force_controller::controller myControl(node); // Set up the Dynamic Reconfigure Server to update controller gains: force, moment both proportional and derivative. if(myControl.dynamic_reconfigure_flag) { // (i) Set up the dynamic reconfigure server dynamic_reconfigure::Server<force_error_constants::force_error_constantsConfig> srv; // (ii) Create a callback object of type force_error_constantsConfig dynamic_reconfigure::Server<force_error_constants::force_error_constantsConfig>::CallbackType f; // (iii) Bind that object to the actual callback function //f=boost::bind(&force_controller::callback, _1, _2); // Used to pass two params to a callback. // Bind a new function f2 with the force_controller::controller::callback. // The left side of the command, tells boost that the callback returns void and has two input parameters. // It also needs the address of the this pointer of our class, and the indication that it has the two parameters follow the order _1, _2. boost::function<void (force_error_constants::force_error_constantsConfig &,int) > f2( boost::bind( &force_controller::controller::callback,&myControl, _1, _2 ) ); // (iv) Set the callback to the service server. 
f=f2; // Copy the functor data f2 to our dynamic_reconfigure::Server callback type srv.setCallback(f); // Update the rosCommunicationCtr myControl.rosCommunicationCtrUp(); } if(!myControl.start()) { ROS_ERROR("Could not start controller, exiting"); ros::shutdown(); return 1; } ros::Rate rate( myControl.get_fcLoopRate() ); /*** Different Communication Modes ***/ while(ros::ok()) { // 1. Non Blocking spin ros::spinOnce(); myControl.force_controller(); rate.sleep(); } return 0; } Originally posted by Juan on ROS Answers with karma: 208 on 2016-12-27 Post score: 1 Original comments Comment by chwimmer on 2017-05-22: Did you really change the values with the dynamic parameters? The Callback method is just called when one of the values change. Otherwise its just called one time at the beginning Answer: Convert the rate while loop into a ros::Timer http://wiki.ros.org/roscpp/Overview/Timers#Creating_a_Timer , and ros::spin() at the bottom. Originally posted by lucasw with karma: 8729 on 2017-05-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Zacryon on 2020-02-26: I've had a similar problem. But since I need a while loop at that point, I've just added a "ros::spinOnce();" right before the sleep call for the loop happens. Thanks for your answer!
{ "domain": "robotics.stackexchange", "id": 26581, "tags": "ros, spinonce, dynamic-reconfigure" }
Does aluminum oxide react with rubidium?
Question: I have rubidium vapor inside a vacuum chamber. Inside the vacuum chamber, there are two flat stainless steel plates coated with aluminum oxide at a temperature of ~100 °C. Since rubidium is chemically reactive, I am worried that it will interact with the aluminum oxide $\ce{Al2O3}$ coating. Any ideas? Answer: As suggested in the comments, rubidium aluminates do exist. Aluminates such as $\ce{RbAlO2}$ and $\ce{Rb6Al2O6}$ are prepared at higher temperatures (above 550 °C [1]) or even from the melt. However, alkali metal vapors at elevated temperatures act as a reducing agent, so I wouldn't expect aluminates anyway. On the other hand, aluminides such as $\ce{RbAl}$ (Zintl phase) are formed from the melt at much higher temperatures (above the melting point of aluminium) and pressures exceeding atmospheric. Since the corundum-coated plate is supposed to perform in vacuum under mild temperatures, I think it shouldn't be affected by rubidium vapors. There is a possibility of coating embrittlement over time as rubidium diffusion occurs, but I wouldn't expect it to be an issue. References Schläger, M.; Hoppe, R. Darstellung und Kristallstruktur von $\ce{K6[Al2O6]}$ und $\ce{Rb6[Al2O6]}$. Zeitschrift für anorganische und allgemeine Chemie 1994, 620 (5), 882–887. https://doi.org/10.1002/zaac.19946200522.
{ "domain": "chemistry.stackexchange", "id": 11501, "tags": "inorganic-chemistry, reactivity, alkali-metals" }
How small would a neutron star be to see the entirety of it?
Question: How small in Schwarzschild radii would a neutron star need to be for its gravity to be strong enough to deflect light emitted from one side toward an observer on the opposite side? I know the figure is above $1.5R_s$. Answer: For a Schwarzschild spacetime outside the neutron star (i.e. spherically symmetric and non-rotating), the neutron star surface would need to be at a radial coordinate $\leq 1.76 r_s$ (e.g. Pechenik et al. 1983). This corresponds to an apparent radius at infinity of $\leq 2.68r_s$. The derivation (and it is a numerical problem) is to just figure out the closest approach to a Schwarzschild object at which light from infinity is bent through 90 degrees. This means that light emitted tangential to the surface from this minimum radial coordinate at the back of the neutron star will bend through 90 degrees and reach an observer at infinity on the opposite side. EDIT: I attach a plot from a script I've written to calculate the total deflection angle either as a function of the impact parameter of light ($b$) or the closest approach ($r_{tp}$). The plot shows that a total deflection angle of 180 degrees (corresponding to a 90 degree bend for light emitted tangentially from the centre of the opposite side of the neutron star) occurs for $r_{tp}=1.76r_s$ and for an impact parameter of $2.68r_s$. In other words, the apparent radius of the neutron star at infinity would be $2.68r_s$, with the centre of the rear-side of the neutron star forming the outermost ring of the apparent image.
{ "domain": "physics.stackexchange", "id": 87715, "tags": "general-relativity, neutron-stars" }
How to reduce CPU usage
Question: Hi, Is it normal to have a CPU usage of 85% when I am simulating the "empty.world" file? And, if it is normal, how can I reduce it? I tried with the "update_rate" param but nothing changed I have a: Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz ATi Radeon HD 4350 Thanks Originally posted by AzCafre on Gazebo Answers with karma: 46 on 2013-02-14 Post score: 1 Original comments Comment by Ben B on 2013-02-15: It's not exactly an answer to your problem, but I can tell you that it's processor dependent. Using the same amount of RAM and same video card, my CPU usage dropped hugely when switching from a Xeon quad core to a CORE i5. Everything became a lot smoother too. Answer: I uninstalled the proprietary fglrx driver for ATI Radeon and now I have a CPU usage of 5-10% Originally posted by AzCafre with karma: 46 on 2013-02-18 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 3038, "tags": "gazebo" }
Is there any difference between an-elastic and visco-elastic materials?
Question: Most of the sources that I have read either talk about anelasticity or viscoelasticity; they don't compare both. From what I have read so far, both anelastic and viscoelastic materials are the same, and they both show a significant time-dependent strain component. Is there any difference between these two? Answer: An anelastic material is one that exhibits a delay in the deformation with respect to the loading. figure 1: Anelastic material behaviour (left: with respect to time, right: stress vs. strain) (source Princeton) Visco-elastic materials are those for which the load needed to obtain a given deformation also depends on the strain rate, i.e. how fast the deformation is applied. It might depend on other things as well. There are different models for viscoelastic material: figure 2: common viscoelastic models (source Dickerson) So a material can be anelastic and viscoelastic at the same time. They describe different properties which happen to be both related to time. Usually a viscoelastic material exhibits hysteresis, which is a trademark of anelasticity.
{ "domain": "engineering.stackexchange", "id": 4224, "tags": "stresses, plastic, elastic-modulus" }
What if I left Earth then turned it into light?
Question: So I asked a question about what would happen in regards to gravitational potential if I left earth and then vaporized it. The answer I got was that the Mass would still remain the same and even if something is split the total amount of gravity it generates is linearly proportional to mass. But what if I used $E=mc^2$ and turned the entire earth into massless radiant energy? Where would the gravitational potential energy go? Answer: Tricky question! That equation, $E=mc^2$, is supposed to represent the equivalence of mass and energy in the theory of general relativity. One of the ways in which they are equivalent is that they both curve spacetime. We should, in principle, be able to say that the gravitational influence of the Earth on the rest of the universe will not change just because you annihilated the Earth. It could be tricky to measure though, because if there was any change in the gravitational field, that change would propagate outward as a gravitational wave moving at exactly the same speed as the expanding sphere of the light itself. And, as soon as that light sphere encompasses you, then you're going to have to take the shell theorem into account.
{ "domain": "physics.stackexchange", "id": 86585, "tags": "general-relativity, energy, gravity, photons, mass-energy" }
What is the point of the four different thermodynamic potentials?
Question: Why are there four different thermodynamic potentials? What is the point? Nothing new is defined, just a reshuffling? Could someone help me in showing this? Does it relate to doing experiments? If I understand correctly you swap intensive and extensive variables (e.g. for Helmholtz $S \rightarrow T$) Answer: As an example, imagine that you have an insulated cylinder with a locked piston which is partially filled with liquid H$_2$O. Some amount of the liquid will evaporate and form an H$_2$O vapor atmosphere which fills the remainder of the container. In equilibrium, how much of the H$_2$O will be in the vapor phase? The principle of maximum entropy states that if we obtain a functional relation $S=S(U,x)$ (where $x$ is e.g. the number of moles of H$_2$O in the vapor phase), then the equilibrium value for $x$ is obtained by holding $U$ fixed and maximizing $S$ with respect to $x$. This principle can be inverted to yield the principle of minimum energy, which states that if we obtain an equation of state $U=U(S,x)$, then the equilibrium value $x$ is obtained by holding $S$ fixed and minimizing $U$ with respect to $x$. These approaches turn out to be equivalent, so which one you use depends on which equation of state is more convenient to model. On the other hand, what if the cylinder is not insulated, but instead is allowed to exchange energy with a heat reservoir at temperature $T$? In this case, the equilibrium state is found not by maximizing the entropy of our system by itself, but rather by maximizing the entropy of the system and the reservoir. This is terribly inconvenient, because it seems to require us to come up with an equation of state involving the reservoir as well. As it turns out, however, that is not necessary. The only information we need to know about the reservoir turns out to be its temperature; otherwise we are completely free to restrict our attention to our system all by itself. 
However, the quantity which is minimized in equilibrium is no longer our system's internal energy $U$, but rather its Helmholtz energy $F := U - TS$. To obtain our equilibrium state, we hold $T$ fixed and minimize $F$ with respect to $x$. What if the cylinder remains insulated, but we unlock it and expose it to some ambient pressure $p$ so that the volume can change? In an abstract sense, this is like putting our system in contact with a volume reservoir rather than an energy reservoir. Once again, it turns out that if we want to focus on our system then we are free to do so, but once again the internal energy of the system is not the quantity which is minimized in equilibrium. Instead, we should consider its enthalpy $H:= U+ pV$, which we minimize while holding $p$ fixed. In exactly the same way, if the cylinder can exchange both energy and volume with its environment then the correct quantity to minimize is the Gibbs energy $G := U - TS + pV$, which we minimize while holding both $T$ and $p$ fixed. So in summary, the motivation for formulating different thermodynamic potentials is that we would like to have some property of our system alone which we can extremize in order to find the equilibrium state of our system. If the extensive parameters ($V,N,$ etc) are all held fixed, then the equilibrium state is obtained by minimizing $U$ at fixed $S$. However, if the system is allowed to exchange energy, volume, particles, etc. with its environment, then this will not yield the correct result; instead, we will need to minimize the associated thermodynamic potential which is relevant to our imagined scenario.
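The pattern in this answer can be collected into the standard table of differentials (textbook relations added here as a summary; they are implied by, but not written out in, the original answer):

```latex
\begin{aligned}
U &= U(S,V)      & dU &= T\,dS - p\,dV  && \text{minimize } U \text{ at fixed } S,\,V\\
F &= U - TS      & dF &= -S\,dT - p\,dV && \text{minimize } F \text{ at fixed } T,\,V\\
H &= U + pV      & dH &= T\,dS + V\,dp  && \text{minimize } H \text{ at fixed } S,\,p\\
G &= U - TS + pV & dG &= -S\,dT + V\,dp && \text{minimize } G \text{ at fixed } T,\,p
\end{aligned}
```

Each Legendre transform swaps one extensive variable for its conjugate intensive one, which is exactly the "reshuffling" the question asks about: the point is that each potential is the quantity minimized under the corresponding experimental constraints.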
{ "domain": "physics.stackexchange", "id": 88687, "tags": "thermodynamics, energy, potential" }
Percolation theory: Is there a clear relationship between "probability of connection" and "effective porosity"?
Question: Disclaimer: I am a geophysicist, not a physicist. Sometimes we speak a slightly different language and use different terms. Suppose you have an $N$ x $N$, 2-D rectangular lattice with bonds which have a probability $p$ of being "open" and a probability $(1-p)$ of being "closed". At some threshold probability, $p_c$, there is a 100% probability that there will be a path through the lattice from one side to the other. For the square lattice, $p_c$ = 0.5. In terms of rocks, the porosity, $\phi$, of a rock is defined as the void (or pore) space (i.e. the "open" space) divided by the total volume (i.e. the "closed" space + "open" space). In the case of the above 2-D lattice, the porosity of a given random mesh will be approximately equal to $p$ itself: $$\phi \approx p$$ However, the majority of rock physics problems are primarily concerned with effective porosity, $\phi_{eff}$. This is the connected void space divided by the total volume. My question: Is there a relationship between $p$ and $\phi_{eff}$? Example: Below is a figure of a 5x5 network in which the probability of connection is $p$=0.5 and the grid was generated randomly in MATLAB. The black edges denote "closed" and the red and green edges denote "open". You can count them and find that there are 28 "closed" and 32 "open" for a total of 60 edges on the mesh. So approximately 50% of edges are "open" as expected from probability. The total porosity of this random mesh is $\phi$ = 0.53 because there are 32 open edges (a.k.a pore spaces) out of a total of 60 edges (a.k.a. total volume). However, you'll notice that, of the open pores, there are 5 green ones which are disconnected from the main network. In geology, these would be known as "isolated" pores. As a result, the effective porosity of the rock would be the number of red edges divided by total space. In other words: $\phi_{eff}$ = (32 - 5)/60 = 0.45. If I generate 1000 of these types of meshes, the average $\phi$ will approach $p$. 
Will the average of $\phi_{eff}$ also converge to some value as a function of $p$? Any help is appreciated. Cheers Answer: Understanding the percolation literature can be challenging because various fields seem to use different terminology for closely related concepts. For example, geophysicists might be interested in the "effective porosity", chemical engineers in pore "accessibility", and mathematicians in the fraction of "connected clusters". There is no simple analytical relationship between $p$ and $\phi_{eff}$, but for an infinite lattice we expect \begin{align} \phi_{eff}=& \,0 &\textrm{for} \; \phi < \phi_c\\ =&\, \alpha\left(\phi-\phi_c\right)^\beta &\textrm{for} \; \phi\gtrsim \phi_c\\ =&\, \phi &\textrm{for} \; \phi \rightarrow 1 \end{align} where $\alpha$ and $\beta$ are positive constants and $\phi_c$ is the critical porosity (or probability) corresponding to the percolation threshold. (See, for example, "On the relationship between effective and total pore space in sea ice" by Petrich and Langhorne.) These relationships make sense: for small enough $p$ we expect $\phi_{eff} \rightarrow 0$ because any open edges are likely to be isolated so there is zero percolation, while for high enough probability, $\phi_{eff} \rightarrow \phi$, since the chance of an edge being disconnected is tiny, e.g. $\left(1-p\right)^6$ for an infinite square grid, since each edge has 3 connections at each end. For a finite lattice, as in your example, $\phi_{eff} > 0$ for any $p>0$, although it may be tiny. For example, Liu, Zhang, and Seaton calculated the fractions of accessible and occupied bonds in three-dimensional cubic lattices of dimension $L$. The figure below (based on their Figure 4) shows the effective porosity $\phi_{eff}$ ("accessible bond fraction" $X_A$ in Liu et al.'s notation) vs porosity $\phi$ ("occupied bond fraction" $X$). 
The observed bond percolation threshold is consistent with the value of 0.24881 expected for a 3-dimensional simple cubic lattice. The difference between effective and total porosity is negligible for porosity > 0.5. The nominal $L=\infty$ data is actually from $L=60$ simulations with pores on the surface excluded, so it differs slightly from the expected threshold behaviour (the dotted line showing $\phi_{eff}\sim\left(\phi-\phi_c\right)^\beta$ for critical percolation exponent $\beta = 0.4188$).
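To see how $\phi_{eff}$ behaves as a function of $p$, one can simply simulate. Below is a minimal Monte Carlo sketch (my own illustration, not from the original posts) for a square bond lattice; it labels clusters with union-find and, as one common convention, counts an open bond toward $\phi_{eff}$ only if its cluster spans the lattice from left to right. (The question's 5x5 example instead keeps everything attached to the main network, so small grids will give slightly different numbers.)

```python
import random

def percolation_porosity(n, p, seed=0):
    """Total and effective porosity of an n x n square bond lattice.

    phi     = open bonds / total bonds
    phi_eff = open bonds in a left-to-right spanning cluster / total bonds
    """
    rng = random.Random(seed)
    parent = list(range(n * n))

    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    def node(r, c):
        return r * n + c

    bonds = []                        # every possible bond (edge)
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                bonds.append((node(r, c), node(r, c + 1)))
            if r + 1 < n:
                bonds.append((node(r, c), node(r + 1, c)))

    open_bonds = [b for b in bonds if rng.random() < p]
    for a, b in open_bonds:
        union(a, b)

    # A cluster spans if it touches both the left and right boundary columns.
    left = {find(node(r, 0)) for r in range(n)}
    right = {find(node(r, n - 1)) for r in range(n)}
    spanning = left & right

    total = len(bonds)
    connected = [b for b in open_bonds if find(b[0]) in spanning]
    return len(open_bonds) / total, len(connected) / total
```

Averaging `phi_eff` over many seeds at fixed `p` traces out the threshold curve: essentially zero below the square-lattice bond threshold $p_c = 0.5$ and approaching $\phi$ well above it.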
{ "domain": "physics.stackexchange", "id": 45582, "tags": "material-science, probability, statistics, percolation, porous-media" }
Have any more "white dwarf pulsars" been discovered or searched for?
Question: Back in 2016, Marsh et al. reported that the binary system AR Scorpii exhibits complex radio signals similar to those observed from traditional pulsars. In particular, pulsed synchrotron emission appears to be produced by interactions between the magnetospheres of a white dwarf and a red dwarf, powered in part by the spin-down of the white dwarf. The system has been dubbed the first "white dwarf pulsar" based on the signals. The system is thought to be similar to, if not an extreme case of, intermediate polars; we know of a number of these systems. Therefore, I'm wondering: Have there been any concerted searches for other intermediate polars exhibiting this pulsed emission, and if so, have any candidates been discovered (possibly serendipitously)? This is partly belatedly in honor of February's focus tag. Answer: Edson et al. (2017) list two other candidates that may be white dwarf pulsars. AE Aquarii has been described as pulsar-like (e.g. Ikhsanov 1998) and was the best case for a white dwarf pulsar before AR Scorpii came along. From what I understand, the case for AE Aqr containing a white dwarf pulsar is not watertight. Blinova et al. (2018) explain the high spin-down luminosity as being the result of interactions with an accretion disc and the white dwarf having a magnetic field typical of that in intermediate polars, rather than a pulsar-like mechanism. Zhang & Gil (2005) proposed that the radio source GCRT J1745-3009 is caused by a white dwarf pulsar. As far as I can tell, the case for this one isn't particularly secure either because there are other possible explanations. Follow-up observations have not yet confirmed the nature of the source. I'm not aware of any specific surveys for such objects but so far it seems that AR Scorpii is the best example of a white dwarf pulsar.
{ "domain": "astronomy.stackexchange", "id": 3567, "tags": "radio-astronomy, white-dwarf, pulsar" }
What provides the extra kick to accelerate a spacecraft during a gravity assist?
Question: Suppose a spacecraft comes from behind a planet during a flyby: it gets a boost in speed and also a change in direction of travel, but where does this extra kick come from? Now that I understand gravity isn't exactly a true force, what is actually slingshotting the spacecraft, and why does it have to change its direction of travel? Does it matter if the spacecraft is moving at a constant speed, or does it have to accelerate to benefit from the gravity assist? Answer: For almost all real-world purposes, treating gravity as a real force that just happens to work just like a pseudoforce is good enough. Suppose all you have is a rocket and planet X. You want to move as fast as possible relative to planet X. Can you gravity assist to do that? No. While you are moving towards planet X, planet X's gravitational field does work on your space ship. While you are moving away from planet X, your space ship does the same amount of work back on the gravitational field. The total work done is zero, so the total change in relative speed is zero; all you've done is changed direction. However, suppose you have a rocket, planet X, and planet Y. You want to move as fast as possible relative to planet X. Planet Y has a certain relative velocity to planet X. If you can use your rocket booster to start with a relative speed to planet Y and end with the same relative speed to planet Y, but in a different direction, your speed relative to planet X can have increased. Gravity assists are often treated as being like elastic collisions, and an elastic collision makes an easy way to move the question into the world of practical every day effects. Suppose you have a tennis ball and a train zipping past you at 130 km/h. If you fling the tennis ball at the front of the train at 50 km/h (relative to you), its initial speed relative to the train is 180km/h and its final speed relative to the train is 180km/h in the opposite direction (less a bit since it's not a perfectly elastic collision). 
However, the tennis ball's initial speed relative to you was 50 km/h, and its final speed relative to you is 310 km/h (180 km/h relative to the train plus the train's 130 km/h relative to you). If you're trying to throw the ball as far away from the train as possible, there's no point in throwing it at the train; you could just throw it past the train and the train will go past on its own. But if you're trying to throw the ball as far away from you as possible, bouncing it off the high-speed train is a pretty good idea.
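The arithmetic of that bounce can be written out explicitly. This is a minimal sketch of the 1-D elastic-bounce approximation (the train is treated as infinitely massive; the function name is mine):

```python
def bounce_speed(v_ball, v_train):
    """1-D elastic bounce of a light ball off an effectively
    infinite-mass train.

    Speeds are signed: positive = the direction the train moves.
    The ball's velocity relative to the train is simply reversed.
    """
    v_rel = v_ball - v_train   # velocity relative to the train
    return v_train - v_rel     # reversed in the train's frame

# Ball thrown at 50 km/h toward a train coming at 130 km/h
# (opposite directions, hence the minus sign):
v_after = bounce_speed(-50.0, 130.0)
print(v_after)   # 310.0 km/h, now moving with the train
```

The 310 km/h matches the figure in the answer; a real bounce is a bit less, since the collision is not perfectly elastic.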
{ "domain": "physics.stackexchange", "id": 86575, "tags": "forces, newtonian-gravity, orbital-motion, rocket-science, centrifugal-force" }
Using SMOTENC in a pipeline
Question: I am trying to figure out the appropriate way to build a pipeline to train a model which includes using the SMOTENC algorithm: Given that the nearest-neighbors algorithm and Euclidean distance are used, should the data be normalized (scaling input vectors individually to unit norm) prior to applying SMOTENC in the pipeline? Can the algorithm handle missing values? If data imputation and outlier removal based on median and percentile values are performed prior to SMOTENC rather than after it, wouldn't this bias the imputation/percentiles? Can SMOTENC be applied after one-hot encoding and defining the numerical binary columns as categorical features? When the pipeline is included in a cross-validation scheme, will the data balancing only be applied to the imbalanced training fold or also to the test fold? Here is how my pipeline currently looks:

from imblearn.pipeline import Pipeline as Pipeline_imb
from imblearn.over_sampling import SMOTENC

categorical_features_bool = [True, True, ……. False, False]
smt = SMOTENC(categorical_features=categorical_features_bool,
              random_state=RANDOM_STATE_GRID, k_neighbors=10, n_jobs=-1)

preprocess_pipeline = ColumnTransformer(
    transformers=[
        ('Winsorize', FunctionTransformer(winsorize, validate=False,
                                          kw_args={'limits': [0, 0.02], 'inplace': False, 'axis': 0}),
         ['feat_1', 'Feat_2']),
        ('num_impute', SimpleImputer(strategy='median', add_indicator=True),
         ['feat_10', 'Feat_15']),
    ],
    remainder='passthrough',  # pass through features not listed
    n_jobs=-1,
    verbose=False
)

Model = LogisticRegression()
model_pipeline = Pipeline_imb([
    ('preprocessing', preprocess_pipeline),
    ('smt', smt),
    ('Std', StandardScaler()),
    ('classifier', Model)
])

Answer: The usual normalisation for Euclidean distance is NOT to scale each input vector to unit length, but to scale each column to mean 0 and variance 1. Scaling each sample is possible, but it is not common. I don't know. The whole point of SMOTENC is not to do the one-hot encoding.
One-hot encoding is a way to transform categorical data into numeric data (on multiple dimensions) for algorithms that cannot deal with categorical data. So my suggestion is not to convert the categorical columns and to let SMOTENC deal with them. The Pipeline in imblearn does the right thing - it only applies the oversampling (or other imbalance strategies) on the training set, not on the test set. See this question on Stack Overflow: https://stackoverflow.com/questions/63520908/does-imblearn-pipeline-turn-off-sampling-for-testing
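To make the first point concrete, here is a small NumPy sketch (made-up data) contrasting column-wise standardisation with per-sample unit-norm scaling:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two features on wildly different scales (values are invented):
X = rng.normal(loc=[0.0, 100.0], scale=[1.0, 50.0], size=(200, 2))

# Column-wise standardisation (what StandardScaler does): each feature
# gets mean 0 and variance 1, so no single feature dominates the
# Euclidean distance in the nearest-neighbour step.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Row-wise unit-norm scaling (sklearn's Normalizer): each *sample* is
# scaled to length 1 -- a different operation, rarely what you want
# before a distance-based method.
X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)

print(X_std.std(axis=0))                    # ~[1. 1.]
print(np.linalg.norm(X_unit, axis=1)[:3])   # ~[1. 1. 1.]
```

Note that in the questioner's pipeline the StandardScaler comes after SMOTENC, so the oversampler sees unscaled distances.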
{ "domain": "datascience.stackexchange", "id": 8323, "tags": "class-imbalance, smote, imbalanced-learn, smotenc" }
Molarity of hydrogen gas in standard hydrogen electrode
Question: I am unable to understand the following statement from Wikipedia about the standard hydrogen electrode: "The concentration of both the reduced form and oxidised form is maintained at unity. That implies that the pressure of hydrogen gas is 1 bar." But why? If I apply the ideal gas equation I get a molarity of nearly 0.04 M, which contradicts the statement in Wikipedia: $n/V = P/RT$, therefore $M = n/V \approx 0.04$. Answer: The standard hydrogen electrode is a "reference electrode." It is supposed to be used with ultra-low currents. In operation, hydrogen gas is bubbled through a 1 N acid solution, but there is little hydrogen gas in the solution. In order to get the hydrogen gas to bubble through the solution, a pressure of a bit more than 1 bar must be used.
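The ideal-gas estimate referenced in the question can be reproduced directly (a quick sketch using standard constants, at 1 bar and 298 K):

```python
# Ideal-gas estimate of the concentration of H2 gas at 1 bar, 298 K:
#   n/V = P / (R*T)
P = 1.0e5      # Pa (1 bar)
R = 8.314      # J/(mol K)
T = 298.15     # K

molarity = P / (R * T) / 1000.0   # mol/m^3 -> mol/L
print(round(molarity, 4))          # ~0.0403 mol/L
```

This is the concentration of the gas phase itself; as the answer notes, the "unit activity" in the electrode convention refers to the 1 bar partial pressure, not to a 1 M concentration.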
{ "domain": "chemistry.stackexchange", "id": 4738, "tags": "physical-chemistry" }
Period of Interference Pattern on a Substrate
Question: Can anybody explain to me where this equation came from? It's for two point sources at the two listed points, and it's calculating the period of the wave on the substrate. It seems to be $\lambda/\sin(\theta)$, which seems contrary to what I would normally expect the period to be, i.e. $\sin(\theta)\lambda$, where $\theta$ is the angle from the normal. Answer: Defining $$f(x,y,z)=\mbox{exp}\left(\frac{2\pi i\sqrt{x^2+y^2+z^2}}{\lambda}\right)$$ we find that $$|f(x-a,y,0-c)+f(x+a,y,0-c)|^2=4\mbox{cos}\left[\lambda^{-1}\pi\left(\sqrt{(a+x)^2+y^2+(c-z)^2}-\sqrt{(a-x)^2+y^2+(c-z)^2}\right)\right]^2.$$ To compute the spatial frequency of $\mbox{cos}(\phi(x,y))$ it suffices to compute $\frac{1}{2\pi}|\nabla \phi|$. Since $\mbox{cos}(x)^2$ oscillates twice as fast as $\mbox{cos}(x)$, the spatial frequency of $\mbox{cos}(\phi)^2$ is $\frac{1}{\pi}|\nabla \phi|$, where $\phi$ is the expression inside the $\mbox{cos}^2$ expression I gave above. Mathematica spat out the following result: $$\frac{1}{\pi}|\nabla \phi|=\frac{\sqrt{y^2 \left(\frac{1}{\sqrt{(a-x)^2+c^2+y^2}}-\frac{1}{\sqrt{(a+x)^2+c^2+y^2}}\right)^2+\left(\frac{a-x}{\sqrt{(a-x)^2+c^2+y^2}}+\frac{a+x}{\sqrt{(a+x)^2+c^2+y^2}}\right)^2}}{\lambda }$$ and if you simply ignore the left term in the radical and invert the expression (to get period), this is the same result as in the book. I am not sure why the book decides to omit the left expression in the radical, although maybe they're assuming $a\gg x,a\gg y$ (i.e., that the light sources are far away from each other), in which case the approximation is valid.
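The result can be checked numerically: the $\cos^2$ intensity above peaks whenever the path difference between the two sources is a whole number of wavelengths. This sketch (arbitrary parameter values, observation along the $x$-axis at $y=z=0$) locates the first off-axis maximum by bisection and compares it with the near-axis period $\lambda\sqrt{a^2+c^2}/(2a)$:

```python
import math

lam, a, c = 0.5, 5.0, 100.0   # illustrative values only

def path_difference(x):
    """Difference of distances from (x, 0, 0) to the two sources."""
    return math.hypot(a + x, c) - math.hypot(a - x, c)

# Find the first x > 0 where the path difference equals one wavelength
# (the next intensity maximum after the central one).
lo, hi = 1e-9, 50.0
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    if path_difference(mid) < lam:
        lo = mid
    else:
        hi = mid

period_numeric = 0.5 * (lo + hi)
period_analytic = lam * math.hypot(a, c) / (2 * a)
print(period_numeric, period_analytic)   # agree to well under 1% here
```

With these values the sources are far from the observation point, so the small-angle expression tracks the exact fringe spacing closely.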
{ "domain": "physics.stackexchange", "id": 10458, "tags": "interference" }
A conceptual question about Green's function's treatment of interaction
Question: Here we have an electron gas and some other stuff. We expand the Hamiltonian to first order in a single harmonic oscillator's displacement $\vec{u}$. Its equilibrium position is at the origin. Then we get an effective coupling Hamiltonian $\vec{j}(\vec{r})\times\vec{f}(\vec{r})\cdot \vec{u}$, wherein $\vec{j}(\vec{r})$ is the electron density and $\vec{f}(\vec{r})$ is some effective potential. I assumed a particular mode (frequency $\Omega$ for x, y, z) of $\vec{u}$, and I tried to work out the above diagram. It's doable. In this electron Green's function calculation, do we take into account the interaction's influence on the oscillator? Is it damped or not? I'm confused about where the momentum transfer $q$ comes from: the potential $\vec{f}(\vec{r})$ or the oscillator's motion $\vec{u}$? I only Fourier transform $\vec{j}(\vec{r})$ and $\vec{f}(\vec{r})$ in the Hamiltonian, therefore this $q$ merely appears in terms that come from $\vec{j}(\vec{r})\,,\vec{f}(\vec{r})$. So I suppose this momentum transfer $q$ comes from the potential. Is this correct? Answer: The electron motion does feed back to the oscillator, but that is another diagram, known as the bubble diagram, in which you calculate the self-energy correction of the oscillator. That self-energy presumably contains an imaginary part, which is then interpreted as the damping of the oscillator. You can either calculate the self-energy corrections self-consistently, or you can simply neglect them in the weak-coupling limit away from the non-Fermi-liquid criticality. The momentum $q$ should be the momentum of the phonons (quanta of the oscillator motion). The potential $f(r)$ also carries a momentum, but this momentum is on the vertex (where the electron emits/absorbs the phonon). More precisely, due to the presence of the potential $f(r)$, momentum is not conserved on the vertex; the non-conserved amount of momentum is provided by the potential scattering.
{ "domain": "physics.stackexchange", "id": 11919, "tags": "quantum-mechanics, condensed-matter, solid-state-physics, greens-functions" }
Fermion propagator as derivative of scalar propagator
Question: I've seen this expression in two spacetime dimensions, $$ \langle \bar{\psi}(x) \psi(0) \rangle = \gamma^\mu{\partial_\mu} \langle \phi(x) \phi(0) \rangle $$ The LHS is the fermion propagator, and the expectation on the RHS is the scalar propagator. For the two-dimensional case, the scalar propagator is (assuming all massless) $$ \langle \phi(x) \phi(0) \rangle = \int \frac{d^2p}{4\pi^2} \frac{1}{p^2} e^{-ipx} $$ Two questions: Why is the fermion propagator the derivative of the scalar propagator? How are the gamma matrices defined in two dimensions? Answer: The free fermion propagator is $$ G_\psi(x,y) = \int \frac{d^dp}{(2\pi)^d} \frac{-i(\gamma^\mu p_\mu + m)}{ p^2 + m^2 - i \epsilon} e^{- i p \cdot ( x - y ) } $$ The scalar propagator is $$ G_\phi(x,y) = \int \frac{d^dp}{(2\pi)^d} \frac{-i}{ p^2 + m^2 - i \epsilon} e^{- i p \cdot ( x - y ) } $$ Clearly, $$ G_\psi(x,y) = ( i \gamma^\mu \partial_\mu + m)G_\phi(x,y)~. $$ PS - In any dimension, the gamma matrices are defined to satisfy $\{ \gamma^\mu , \gamma^\nu \} = - 2 \eta^{\mu\nu}$. PPS - I am using metric signature $(-,+,+,+,\cdots)$ in this answer.
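A quick numerical check of the PS: in two dimensions a representation satisfying $\{\gamma^\mu,\gamma^\nu\} = -2\eta^{\mu\nu}$ with signature $(-,+)$ can be built from Pauli matrices. This particular choice is just one valid pick (any similarity transform of it works too):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

# One 2D representation with eta = diag(-1, +1):
g0, g1 = s1, 1j * s2
gammas = [g0, g1]
eta = np.diag([-1.0, 1.0])

# Verify {gamma^mu, gamma^nu} = -2 eta^{mu nu} * identity
for mu in range(2):
    for nu in range(2):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, -2 * eta[mu, nu] * np.eye(2))
print("Clifford algebra satisfied")
```

Here $(\gamma^0)^2 = +1$ and $(\gamma^1)^2 = -1$, as the mostly-plus convention requires.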
{ "domain": "physics.stackexchange", "id": 41906, "tags": "quantum-field-theory, fermions, dirac-equation, propagator" }
Are there secondary causes of sea level change?
Question: Aside from the fraction of water stored as ice on land and the temperature of the water, are there other factors that change sea level, and if so, what are the magnitudes of these changes? For example, by how much does sediment and soluble matter entering the ocean change sea level? What about volcanoes and tectonic activity? Is there a tendency toward hydrostatic equilibrium where the Earth is entirely covered by an ocean of uniform depth? Answer: Yes, there are lots of other factors. Factors affecting sea levels are no different from other natural processes: there is a large number of coupled, non-linear effects, operating on every time scale, at every length scale, and across many orders of magnitude. The Wikipedia page Current sea level rise lists many of the known processes. And I wrote a blog post, Scales of sea-level change, a couple of years ago with a long list, mostly drawn from Emery & Aubrey (1991). Here's the table from it: Reference: Emery, K. & D. Aubrey (1991). Sea-Levels, Land Levels and Tide Gauges. Springer-Verlag, New York, 237 p.
{ "domain": "earthscience.stackexchange", "id": 233, "tags": "oceanography, sea-level, stratigraphy, erosion" }
Deriving a lumped-element LC circuit as the limit of a resonant cavity mode
Question: In principle, an LC circuit is just a degenerate limit of an electromagnetic cavity, where the frequency $\omega_0$ of the resonant mode is much lower than the inverse size of the cavity. As a result, LC circuits are usually treated in the magnetoquasistatic limit and the lumped element approximation, but there should be no problem treating them starting from the full Maxwell's equations. I've been trying to do this, but I can't quite get the results to match up. The standard treatment of resonant cavities is as follows: write the electric and magnetic fields as $$\mathbf{E} = e(t) \tilde{\mathbf{E}}, \quad \mathbf{B} = b(t) \tilde{\mathbf{B}} \tag{1}\label{1}$$ where the mode spatial profiles satisfy, in natural units, $$\nabla \cdot \tilde{\mathbf{E}} = 0, \quad \nabla \times \tilde{\mathbf{E}} = \omega_0 \tilde{\mathbf{B}}, \quad \nabla \cdot \tilde{\mathbf{B}} = 0, \quad \nabla \times \tilde{\mathbf{B}} = \omega_0 \tilde{\mathbf{E}} + \tilde{\mathbf{J}} \tag{2}\label{2}$$ where $\tilde{\mathbf{J}}$ is the current profile on the conductor surfaces. For concreteness, let's normalize by $$\int_V \tilde{\mathbf{E}}^2 = \int_V \tilde{\mathbf{B}}^2 = 1.$$ If we excite the cavity with a current $\mathbf{J}_a$, Maxwell's equations give the equations of motion $$\dot{b} = - \omega_0 e, \quad \dot{e} = \omega_0 b - \int_V \mathbf{J}_a \cdot \tilde{\mathbf{E}} \tag{3}\label{3}.$$ This is all standard. Now let's consider an LC circuit with capacitance $C$ and inductance $L$. 
Its state is described by a charge $Q$ on the capacitor and a current $I$ through the inductor, and by definition, $$\frac12 \int_V \mathbf{B}^2 = \frac12 LI^2, \quad \frac12 \int_V \mathbf{E}^2 = \frac12 \frac{Q^2}{C} \tag{4}\label{4}.$$ Comparing this to the expressions above, we have the correspondences $$b = \sqrt{L} \, I, \quad e = Q/\sqrt{C}.$$ Substituting this into the equations of motion above and eliminating $C$ using $\omega_0 = 1/\sqrt{LC}$ gives $$\dot{I} = - \omega_0^2 Q, \quad \dot{Q} = I - \frac{1}{\omega_0 \sqrt{L}} \int_V \mathbf{J}_a \cdot \tilde{\mathbf{E}} \tag{5}\label{5}$$ which is wrong. We actually want to get Kirchhoff's loop rule plus the definition of current, $$\dot{I} = - \omega_0^2 Q + \frac{\mathcal{E}}{L}, \quad \dot{Q} = I.$$ My results are close to right, but they seem to effectively have the roles of $I$ and $Q$ reversed, or equivalently the location of the driving term flipped! I've searched through a bunch of physics and engineering textbooks, but none of them has anything like my derivation above; either they work only with Maxwell's equations or only with lumped elements. Linking the two treatments should be very straightforward, so I must have done something basic wrong. What's the problem? Answer: Let me use a slightly different notation and certainly not normalize so we can see the dimensional parameters.
Start with Maxwell's equations: $$\nabla \times \mathbf H = \mathbf J +\epsilon_0 \dot {\mathbf E}$$ $$\nabla \times \mathbf E = -\mu_0 \mathbf {\dot H}$$ and let $\mathbf E = e(t) \mathbf {\tilde E }$ and $\mathbf H = h(t) \tilde {\mathbf H }$, then $$ h(t)\nabla \times \tilde {\mathbf H} = \mathbf J +\epsilon_0 \dot {e}(t) \tilde {\mathbf E} \tag{1}$$ and $$e(t)\nabla \times \mathbf {\tilde E} = -\mu_0 \dot h(t) \mathbf {\tilde H} \tag{2}$$ Multiply $(1)$ by $\tilde{\mathbf E}$ and $(2)$ by $\tilde {\mathbf H}$ and integrate over space, then $$h(t)\int d\tau (\nabla \times \mathbf {\tilde H})\cdot \mathbf {\tilde E} \\= \int d\tau\mathbf J\cdot \mathbf {\tilde E} +\epsilon_0 \dot e(t) \int d\tau|\mathbf {\tilde E}|^2 \tag{3}$$ and $$e(t)\int d\tau (\nabla \times \mathbf {\tilde E})\cdot \mathbf {\tilde H} \\= -\mu_0 \dot h(t) \int d\tau|\mathbf {\tilde H}|^2 \tag{4}$$ Normalize so that $\int d\tau|\mathbf {\tilde E}|^2=\int d\tau|\mathbf {\tilde H}|^2=1$ and denote the geometric integrals by $k_2=\int d\tau (\nabla \times \mathbf {\tilde E})\cdot \mathbf {\tilde H}$ and $k_1=\int d\tau (\nabla \times \mathbf {\tilde H})\cdot \mathbf {\tilde E}$ with $g= \int d\tau\mathbf J\cdot \mathbf {\tilde E} $ to get $$k_1h(t)=g+\epsilon_0\dot e(t) \tag{5}$$ $$k_2e(t)=-\mu_0\dot h(t) \tag{6}$$ Equations $(5)$ and $(6)$ represent a parallel "LC" circuit driven by a current source $g$ in which $$L=\frac{\mu_0}{k_1}\\ C=\frac{\epsilon_0}{k_2}\tag{7}$$ and the identification of the field amplitudes is $$I(t)=k_1h(t)\\V(t)=k_2e(t)\tag{8}$$ I think the problem lies in identifying the electric field amplitude with charge while having the magnetic field amplitude identified with charge rate, that is, current. This would be a mistake analogous to using velocity and not momentum as the conjugate variable to position in analytical mechanics. The proper identification of the electric field amplitude is with voltage.
Comment: I also think that we can consider this to be a closed system and then we have $\int d\tau \,\mathrm{div}(\mathbf E \times \mathbf H) =0$, from which it will also follow that $k_1=k_2$, but I have to think about that more. With respect to your question it does not matter.
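As a numerical sanity check on the lumped-element reading of equations (5)-(8): with the drive switched off ($g=0$), the pair $C\dot V = I$, $L\dot I = -V$ should oscillate with period $2\pi\sqrt{LC}$. A quick integration (illustrative component values only) confirms it:

```python
import math

L, C = 2.0e-3, 5.0e-6      # arbitrary illustrative values (H, F)
dt = 1.0e-8
V, I = 1.0, 0.0            # start charged, no current

# Undriven parallel LC, semi-implicit Euler:
#   L dI/dt = -V,   C dV/dt = I
t, crossings, prev_V = 0.0, [], V
for _ in range(3_000_000):
    I += dt * (-V / L)
    V += dt * (I / C)
    t += dt
    if prev_V > 0.0 >= V:  # downward zero crossing of V
        crossings.append(t)
        if len(crossings) == 2:
            break
    prev_V = V

period_numeric = crossings[1] - crossings[0]
period_analytic = 2 * math.pi * math.sqrt(L * C)
print(period_numeric, period_analytic)   # agree to well under 1%
```

Successive downward zero crossings of $V$ are one full period apart, which is why two crossings suffice.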
{ "domain": "physics.stackexchange", "id": 95817, "tags": "electromagnetism, electric-circuits, electric-current, resonance, electrical-engineering" }
Three-table queryset (full outer join?) in Django
Question: I've got three tables I need to query and display to a user for a web application I've written. Each table (Url, Note, Quote) has a foreign key relation to the User table. For every User, I need to sort all Bookmarks, Notes and Quotes based on a date_created field, then deliver that to the template as one iterable. This is for a toy/self-learning project, so I don't really need to worry about scale. The approach I've taken is this:

from operator import attrgetter
from app.models import Url, Note, Quote

date_dict = {}
for url in Url.objects.filter(user=request.user):
    date_dict[url.date_created] = Url.objects.get(date_created=url.date_created)
for note in Note.objects.filter(user=request.user):
    date_dict[note.date_created] = Note.objects.get(date_created=note.date_created)
for quote in Quote.objects.filter(user=request.user):
    date_dict[quote.date_created] = Quote.objects.get(date_created=quote.date_created)

my_query = sorted((date_dict[i] for i in date_dict.iterkeys()),
                  key=attrgetter("date_created"), reverse=True)

I have also tried this:

from operator import attrgetter
from itertools import chain
from app.models import Url, Note, Quote

items = sorted(chain(Url.objects.filter(user=request.user),
                     Note.objects.filter(user=request.user),
                     Quote.objects.filter(user=request.user)),
               key=attrgetter("date_created"), reverse=True)

I'm concerned how expensive this would get if I had huge data sets (I don't), but from what I've read the second is faster. This code works and does what I expect it to, but is there a better/faster approach? Answer: EDIT: Removed previous version, which was pretty much telling you to use the second method shown above. Your first method loads everything from the database twice. (Django may do some caching, but still.) I'm not sure why you are putting everything in the dictionary. The problem with the current implementation is that sorting is done by Python rather than in the database. It would be better to have the sorting done in the database.
There are two ways I see of doing that. You could have the three different sources sorted by the database. Then you could perform a merge pass to combine them. However, you are probably going to need a lot of data before that becomes a worthwhile strategy. Sorting is implemented in C for Python, which gives it a speed advantage over a merge which you would have to implement in Python. You could redesign your database so Urls, Quotes, and Notes are stored in the same table. That way you could request the database to sort them. Both of these strategies are really only helpful if you end up with a lot of data. If you have that much data, you aren't going to want to present it all to the user anyway. The result is that it's probably not useful.
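The merge pass mentioned above can be done lazily with heapq.merge, which never materialises the full combined list. A sketch using plain objects as stand-ins for model instances (each queryset would come pre-sorted via .order_by('-date_created')):

```python
import heapq
from collections import namedtuple
from datetime import date
from operator import attrgetter

Item = namedtuple('Item', ['kind', 'date_created'])

# Stand-ins for the Url/Note/Quote querysets, each already sorted
# newest-first, as the database would return them.
urls = [Item('url', date(2023, 5, 1)), Item('url', date(2023, 1, 10))]
notes = [Item('note', date(2023, 4, 2))]
quotes = [Item('quote', date(2023, 6, 7)), Item('quote', date(2023, 2, 3))]

# Lazily interleave the three sorted streams, newest first:
merged = heapq.merge(urls, notes, quotes,
                     key=attrgetter('date_created'), reverse=True)
kinds = [i.kind for i in merged]
print(kinds)   # ['quote', 'url', 'note', 'quote', 'url']
```

Since merge only ever holds one item per input stream, memory use stays flat no matter how long the querysets get, though, as noted, the pure-Python comparisons cost more per element than a single C-level sort.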
{ "domain": "codereview.stackexchange", "id": 137, "tags": "python, django" }
converting PointCloud2 to pcl::PointCloud is slow with RGB field
Question: I measured the time to convert between sensor_msgs::PointCloud2, pcl::PointCloud and pcl::PCLPointCloud2. https://github.com/garaemon/pcl_ros_conversion_benchmark I found that converting from pcl::PCLPointCloud2 to pcl::PointCloud takes a long time when the input data has an RGB field. Is there any way to speed up this conversion? Originally posted by Ryohei Ueda on ROS Answers with karma: 317 on 2014-05-20 Post score: 1 Answer: My best guess is that when transforming from PCL2 to PCL you need to get rid of the RGB field. If I am not mistaken, the in-memory representation of the point cloud for PCL2 is something like: [X0, Y0, Z0, RGB0, X1, Y1, Z1, RGB1, ... Xn, Yn, Zn, RGBn] and for PCL it is: [X0, Y0, Z0, X1, Y1, Z1, ... Xn, Yn, Zn] So the only way to transform from PCL2 to PCL is by iterating over the data in PCL2 and ignoring every 4th component (the RGB component), and this takes time, because you cannot do a simple memcpy. Inside the implementation of fromPCLPointCloud2, between lines 202 and 217, you will see what I mean. Originally posted by Martin Peris with karma: 5625 on 2014-05-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ryohei Ueda on 2014-05-21: Thanks! You are correct. When I use pcl::PointXYZRGBA, the conversion time decreases.
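The layout argument can be illustrated outside PCL with NumPy structured arrays (a sketch of the same idea, not the PCL code itself): once a field has to be skipped, the copy becomes strided instead of one flat memcpy.

```python
import numpy as np

# An XYZRGB-style layout: four 32-bit fields per point, interleaved in
# one buffer, like the PCL2 representation described above.
cloud = np.zeros(5, dtype=[('x', 'f4'), ('y', 'f4'),
                           ('z', 'f4'), ('rgb', 'u4')])
cloud['x'] = np.arange(5)

# Keeping every field: the buffer is one contiguous block, so a flat
# memcpy-style copy is possible.
full_copy = cloud.copy()

# Dropping 'rgb': every 4th 32-bit word must be skipped.  Each field
# view is strided, so building an XYZ-only array means gathering the
# remaining fields element by element into a fresh buffer.
xyz = np.stack([cloud['x'], cloud['y'], cloud['z']], axis=1)

print(xyz.shape)                          # (5, 3)
print(cloud['x'].flags['C_CONTIGUOUS'])   # False: a strided view
```

This mirrors the comment thread: matching the point type to the wire layout (pcl::PointXYZRGBA) avoids the per-point gather entirely.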
{ "domain": "robotics.stackexchange", "id": 18012, "tags": "ros, pcl, pcl-conversions" }
How do we decide the direction of angular velocity vector?
Question: I know that the angular velocity vector $\vec{\omega}$ points along the axis of the circular motion, and there is a formula $$\vec{v}=\vec{\omega} \times \vec{r}$$ I want to know the origin of this formula, which governs the direction of $\vec{\omega}$. And why only this direction, why not the opposite direction? Is it purely convention or is there some logic behind this? Answer: We use a right-handed coordinate system. This means that we draw our coordinate systems so that $\hat x\times\hat y=\hat z$ (described by whatever right-hand rule you've been taught). The choice of a right-handed or left-handed coordinate system is arbitrary, but once it's established as convention, you need to stick with it and not switch conventions unless you want a real headache of conversion factors trying to interpret others' work. The axis of rotation needs to point parallel to $\overrightarrow\omega$; that's part of the definition of $\overrightarrow\omega$. $\overrightarrow\omega$ describes a right-handed rotation, so it must result from the cross product of the position and velocity, $\hat\omega=\hat r\times\hat v$ (note the hats, these are unit vectors). You can rotate the unit vectors to get them in the same order you had them written and multiply by their magnitudes to get your equation back.
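The convention is easy to verify for a concrete case. In this sketch a particle moves counterclockwise around the unit circle in the xy-plane, so the right-hand rule puts $\vec\omega$ along $+\hat z$, and $\vec\omega\times\vec r$ reproduces the velocity obtained by differentiating the position:

```python
import numpy as np

w = 2.0                                # angular speed, rad/s
omega = np.array([0.0, 0.0, w])        # along +z for CCW motion in xy

t = 0.7
r = np.array([np.cos(w * t), np.sin(w * t), 0.0])             # position on unit circle
v_exact = w * np.array([-np.sin(w * t), np.cos(w * t), 0.0])  # dr/dt, by hand

v_cross = np.cross(omega, r)           # v = omega x r
print(np.allclose(v_cross, v_exact))   # True
```

Flipping the sign of omega (the left-handed choice) would give the velocity of the opposite sense of rotation, which is why the convention must be fixed once and kept.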
{ "domain": "physics.stackexchange", "id": 42972, "tags": "vectors, coordinate-systems, definition, rotational-kinematics, angular-velocity" }
Functions versus Vectors in Quantum Mechanics
Question: In the beginning, quantum mechanics is introduced by representing the states as cute little complex vectors, for example: $$|a\rangle=a_+|a_+\rangle+a_-|a_-\rangle$$ This is a complex vector representing a state that can collapse into two possible states, with corresponding probabilities $|a_+|^2,|a_-|^2$. On the other hand, observables are represented by Hermitian operators; the eigenvalues of those operators are the possible outcomes of a measurement, and the corresponding eigenvectors are the corresponding states of the system after the measurement. OK, the problem is that we often deal with observables with an infinite number of possible measurement outcomes (one classical example of this is a measurement of position), so we need to work with a complex vector space that has infinite dimension. (Incidentally, functions with real argument and complex value can be thought of as a vector space with infinite dimension; this will become important later, I think.) So now, after a bit of work to define the specifics of this infinite-dimensional vector space, we can define the position and momentum operators ($\hat{x},\hat{p}$). Here comes the problem for me: I have found two different definitions of these two operators. The first one comes from Leonard Susskind's lectures: $$\hat{x}\psi(x)=x\psi(x)$$ $$\hat{p}\psi(x)=-i\hbar\frac{\partial}{\partial x}\psi(x)$$ where $\psi(x)$ is any function such that $\psi : \mathbb{R} \to \mathbb{C}$. The second definition comes from Stefano Forte - Fisica Quantistica and is the following: $$\langle x|\hat{x}|\psi\rangle=x\psi(x)$$ $$\langle x |\hat{p}|\psi\rangle=-i\hbar \frac{\partial}{\partial x}\psi(x)$$ where $|x\rangle$ is an eigenvector of the position operator and $\psi(x)$ is the wave function, defined as (where $|\psi\rangle$ is an arbitrary state): $$\psi(x)=\langle x|\psi\rangle$$ The first definition defines the operators as acting on functions, while the second defines them as acting on vectors.
This confounds me quite a bit. In the continuous case, are the states represented by functions or by vectors? Does this distinction even make sense, since functions form a vector space? We also like to talk about eigenfunctions and eigenvectors somewhat interchangeably. But I don't see why we can talk about them interchangeably; for example, what does it mean to differentiate a vector with respect to $x$, as the momentum operator does? Answer: It's good that you're confused, because Susskind's notation is ridiculous. $\psi(x)$ is a number, and so you cannot conceivably apply the $\hat x$ operator to it. This is an example of the typical misuse of notation by physicists who like to denote a function $f$ by its value at a particular point, $f(x)$. This abuse of notation is responsible for so much confusion that it breaks the heart. In the continuous case, are the states represented by functions or by vectors? I would say that in the continuous case the vectors are represented by functions. Remember that a vector $\left \lvert v \right \rangle$ can be expressed in many different bases. In one basis, this vector may have components $(0, 1)$ while in another basis it may have components $(1 / \sqrt{2})(1, 1)$. Similarly, the vector $\left \lvert \psi \right \rangle$ may have different components in infinite dimensions... and those components are expressed as a function $\psi: \mathbb{R} \rightarrow \mathbb{C}$. For example, the notation $\psi(x)$ usually means "the components of the vector $\left \lvert \psi \right \rangle$ in the $x$ basis", where by "$x$ basis" we mean the set of vectors $\left \lvert x \right \rangle$ with the property $$ \hat X \left \lvert x \right \rangle = x \left \lvert x \right \rangle $$ i.e. the set of vectors that are eigenvectors of the $\hat X$ operator.
See, when you wrote $$ \langle x | \hat X | \psi \rangle = x \psi(x) $$ you can think of it like this $$ \langle x | \hat X | \psi \rangle = \left( \langle x | \hat X \right) \lvert \psi \rangle $$ and as $\hat X$ is Hermitian it can act to the left, producing $$ x \langle x \lvert \psi \rangle = x \, \psi(x) $$ where we used the definition $\psi(x) \equiv \langle x | \psi \rangle$. This is all in agreement with what you already wrote. So now let's get to the questions. In the continuous case the states are represented by functions or by vectors? Either way, but note that the functions are representations of the vectors in a particular basis. Does this distinction even make sense since functions form a vector space? This is quite deep. The representations of vectors in a particular basis are themselves vector spaces. This is true even in finite dimensions. Consider the set of arrows in two dimensions. Those arrows can be summed and multiplied by scalars, so they form a vector space. However, if we choose a basis, we can express those arrows as pairs of real numbers $(x, y)$, and those pairs are themselves a vector space, as they too can be summed and multiplied by scalars. One can say that the vector space of arrows in two dimensions is isomorphic to the vector space of pairs of real numbers, and so the space of pairs of real numbers can be used to represent the space of arrows. We also like to talk about eigenfunctions and eigenvectors somewhat interchangeably. Yes, this is typical loosey-goosey physicist talk. But I don't see why we can talk about them interchangeably. Good, that's a good instinct. For example, what does it mean to differentiate a vector with respect to $x$, as the momentum operator does? So first of all, as we said above, Susskind's notation $\hat x \psi(x)$ is unclear and bad for two reasons: It makes no sense to apply the $\hat x$ operator to the number $\psi(x)$.
$\hat x$ exists independent of any choice of basis, but $\psi(x)$ is implied to mean "the components of $\lvert \psi \rangle$ in the $x$ basis". The $\hat x$ is basis-independent, but the $\psi(x)$ is not, so he's mixing notations, which is confusing. As for the momentum operator, note that it is only a derivative when expressed in the $x$ basis! If we work in the $p$ basis, then we'd have e.g. $$ \langle p | \hat P | \psi \rangle = p \psi(p) $$ where here $\psi(p)$ is implied to mean "the components of $\lvert \psi \rangle$ in the $p$ basis". The function $\psi(p)$ is also a wave function -- it's just the wave function for momentum instead of for position. Now note that I'm using awful notation myself here, because $\psi(x)$ and $\psi(p)$ look like the same function evaluated at two different points, whereas really they are completely different functions [1]. Really we should distinguish the position and momentum wave functions by using different symbols: \begin{align} \langle p | \psi \rangle &= \psi_\text{momentum}(p) \\ \langle x | \psi \rangle &= \psi_\text{position}(x) \, . \end{align} Please let me know if this answers all your questions. [1]: They are actually related by Fourier transform.
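The finite-dimensional analogy from earlier in the answer can be made concrete: one vector, two different sets of components depending on the basis (a sketch with made-up numbers):

```python
import numpy as np

# An abstract 'arrow' written in the standard basis {e1, e2}:
v_std = np.array([0.0, 1.0])

# A second orthonormal basis, rotated by 45 degrees:
b1 = np.array([1.0, 1.0]) / np.sqrt(2)
b2 = np.array([-1.0, 1.0]) / np.sqrt(2)

# Components of the *same* vector in the new basis: <b_i | v>
v_new = np.array([b1 @ v_std, b2 @ v_std])
print(v_new)   # [0.70710678 0.70710678], i.e. (1/sqrt(2))(1, 1)

# Reconstructing from either set of components gives the same arrow:
# the representations differ, the vector does not.
assert np.allclose(v_new[0] * b1 + v_new[1] * b2, v_std)
```

This is exactly the $(0,1)$ versus $(1/\sqrt{2})(1,1)$ example from the answer; the continuous case replaces the two components with a whole function of $x$ (or of $p$).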
{ "domain": "physics.stackexchange", "id": 71044, "tags": "quantum-mechanics, operators, wavefunction, vectors, quantum-states" }
Intern with no mentor struggling with refactoring and OOP
Question: A little background: I'm an intern at a large engineering company, and I'm also the only CS major in the entire building. The people on my team don't have technical backgrounds, but they hired me to start developing an internal application. About my program: I'm trying to read two part numbers as input, search through a CSV file to find them, and then extract all of the associated parts into two separate lists that can then be compared. Currently I can enter part numbers and then read the associated parts into an ArrayList. However, things feel kind of hacked together at this point, I don't exactly have a solid grasp on OOP, and the two classes that "smell" to me are my FileInput.java and PartList.java classes (although for all I know the whole thing may stink; it's difficult without a developer to mentor me). Ideally, I want to have two PartList objects that can then be compared in another class. My main issue is how I'm instantiating the PartList within FileInput. I was essentially fiddling around with Eclipse and its built-in refactoring features to get it to work, and that's what I got. I just don't think things are organized logically, and I'm unsure how to continue. I omitted the PartNumber class because it's twice as long as the others and I don't think it's necessary to understand my problem. Any advice is appreciated!
FileInput.java public class FileInput { // Just a test file public final static String strFile = "C:\\Users\\hamlib1\\Projects\\HVC_Project\\TestData\\TT524S-FinalV1.csv"; public void readCSV(String input) throws IOException { CSVReader reader = new CSVReader(new FileReader(strFile)); String[] row; while ((row = reader.readNext()) != null) { int limit = row.length; for (int i = 0; i < limit; i++) { if (row[i].contains(input)) { // The way I'm instantiating and calling from PartList here seems off to me PartList partList = new PartList(); partList.setList(reader, row); } } } reader.close(); PartList.printList(); System.out.println("Reader Closed"); } } PartList.java public class PartList { private static ArrayList<String[]> partList = new ArrayList<String[]>(); // Not sure what to even put here public PartList() { } // Algorithm for filling the list works, but it smells to me // Mainly passing the CSVreader and row, but when I start reading I want // to be at the row that matches the input public void setList(CSVReader reader, String[] row) throws IOException { String levelString = row[0]; int levelInt = Integer.parseInt(levelString); if (levelInt == 2) { partList.add(row); // Add the first row to the list row = reader.readNext(); String nextLevelString = row[0]; int nextLevelInt = Integer.parseInt(nextLevelString); while (nextLevelInt > 2) { // Add subsequent rows partList.add(row); row = reader.readNext(); nextLevelString = row[0]; nextLevelInt = Integer.parseInt(nextLevelString); } } } // Just so I can check if I'm filling the list correctly public static void printList() { for (int i = 0; i < partList.size(); i++) { String[] strings = partList.get(i); for (int j = 0; j < strings.length; j++) { System.out.print(strings[j] + " | "); } System.out.println(); } } } RunProgram.java public class RunProgram { public static void main(String[] args) { PartNumber part1 = new PartNumber(); PartNumber part2 = new PartNumber(); FileInput file = new FileInput(); 
part1.readInput(); part2.readInput(); String partString1 = part1.getPartNumber(); String partString2 = part2.getPartNumber(); try { System.out.println(part1.toString()); file.readCSV(partString1); System.out.println(part2.toString()); file.readCSV(partString2); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } Here's an example of the CSV. The one I pulled this from is about 35,000 lines, but they all follow the same format. I deleted some irrelevant columns for simplicity, but the important information is the levels and part numbers. They won't change position, and searching through an entire CSV only takes a few seconds. To pull a group of parts I first check if its level is 2 (denoting a "top" level), then I pull all subsequent parts until the next 2. Edit I decided to create a PartList object and pass it to my readCSV method. This appeared to work when I printed the list. Then I created a second PartList, and tried to print both lists. What should have happened is each list printed separately. Instead, it printed both lists merged together, twice. Somehow I'm adding the parts for two separate lists into a single list. I think this comes down to my PartList class (and lack of appropriate constructors, getters & setters?). I think this problem is now more suited for Stack Overflow, but if anyone has general refactoring help I'll still greatly appreciate it. Answer: Some beginner mistakes... The functionality of the classes should make sense logically. Instead of bringing out the terminologies, I prefer to think from a layman's interpretation of a class. Why should your PartList know what is a CSVReader? And why should your PartNumber know how to read an input? Other pointers You also need to learn how to accurately translate the rules and requirements of your work into programming logic. For example, the identification of non-"top level" parts. 
Are they identified by a model level number greater than two, as written in your code? Or just when they are not equal to two, as you have written in your question? This is also where validation of values and "business requirements" come into play, to ensure your code functions as per expectations. Gotcha Looking at your code again, there is a glaring bug between the for loop below and the setList method of PartList, which is called in the same loop too: for (int i = 0; i < limit; i++) { if (row[i].contains(input)) { // The way I'm instantiating and calling from PartList here seems off to me PartList partList = new PartList(); partList.setList(reader, row); } } setList is actually modifying the state of your reader object, i.e. when this method returns and there is the slim chance that the next column matches the input (I know it's not intentional, so probably bug #2?), you wouldn't be looping through the same lines of the reader object. Instead, you will be starting from the line where the first invocation of setList left off. In fact, your code will also be reading in one extra line when setList exits upon encountering the next top-level part, and the flow goes into the next iteration of the outer while loop which calls reader.readNext() again. Last but not least, PartList is instantiated within the for loop, meaning it can't be used outside at all. 
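To make that state-sharing pitfall concrete, here is a minimal Python sketch (the data and names are made up for illustration, not taken from the posted code) of the same bug: an inner routine that consumes rows from the shared reader also advances it for the outer loop, so the top-level row that terminates a group is consumed and lost.

```python
# Hypothetical rows: a leading "2" marks a top-level part, higher levels belong to it.
rows = iter(["2,ABC", "3,DEF", "2,JKL", "3,MNO"])

def collect_group(reader, first_row):
    group = [first_row]
    for row in reader:            # advances the SHARED iterator
        if row.startswith("2"):   # the next top-level row is consumed here...
            return group          # ...and then silently lost
        group.append(row)
    return group

groups = []
for row in rows:                  # outer loop shares the same iterator
    if row.startswith("2"):
        groups.append(collect_group(rows, row))

print(groups)  # [['2,ABC', '3,DEF']] -- the 'JKL' group never appears
```

Because `collect_group` swallowed the `"2,JKL"` row, the outer loop never sees it, and the second group is dropped entirely, which mirrors the extra `reader.readNext()` problem described above.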
Suggested solution package com.myproject.se; public class Part { enum Serviceability { SERVICEABLE("SVCBLE"), NOT_SERVICEABLE("NOT SVC"), UNDEFINED(""); private String value; Serviceability(String value) { this.value = value; } public String toString() { return value; } static Serviceability fromValue(String value) { if (value != null) { String valueToCompare = value.toUpperCase(); for (Serviceability current : Serviceability.values()) { if (current.value.equals(valueToCompare)) { return current; } } } return UNDEFINED; } } enum PartType { PART("PART"), ASSEMBLY("ASSEMBLY"), UNDEFINED(""); private String value; PartType(String value) { this.value = value; } public String toString() { return value; } static PartType fromValue(String value) { if (value != null) { String valueToCompare = value.toUpperCase(); for (PartType current : PartType.values()) { if (current.value.equals(valueToCompare)) { return current; } } } return UNDEFINED; } } private int modelLevel; private String partNumber; private String name; private Serviceability serviceability; private int quantity; private PartType partType; public Part(String[] values) { // TODO: sanity checking for desired number of columns setModelLevel(Integer.parseInt(values[0])); setPartNumber(values[1]); setName(values[2]); setServiceability(values[3]); setQuantity(Integer.parseInt(values[4])); setPartType(values[5]); } public int getModelLevel() { return modelLevel; } public void setModelLevel(int modelLevel) { this.modelLevel = modelLevel; } public String getPartNumber() { return partNumber; } public void setPartNumber(String partNumber) { // normalize to upper case this.partNumber = partNumber.toUpperCase(); } public String getName() { return name; } public void setName(String name) { // normalize to upper case this.name = name.toUpperCase(); } public Serviceability getServiceability() { return serviceability; } public void setServiceability(Serviceability serviceability) { this.serviceability = serviceability; } public 
void setServiceability(String serviceability) { this.serviceability = Serviceability.fromValue(serviceability); } public int getQuantity() { return quantity; } public void setQuantity(int quantity) { this.quantity = quantity; } public PartType getPartType() { return partType; } public void setPartType(PartType partType) { this.partType = partType; } public void setPartType(String partType) { this.partType = PartType.fromValue(partType); } public boolean isTopLevelPart() { return modelLevel == 2; } public boolean hasPartNumberContains(String value) { return partNumber.contains(value); } public boolean isServiceable() { return serviceability == Serviceability.SERVICEABLE; } public String toString() { return modelLevel + " | " + partNumber + " | " + name + " | " + serviceability + " | " + quantity + " | " + partType; } } I used two Enums to represent valid values for serviceability and part type, in case I need to compare these values often and I want to avoid any typos in doing String comparisons. I also normalize String inputs to upper case for this exercise. By right, I can also have a PartFactory class to take in the six String inputs and construct a Part for me, but I have decided to keep it simple and just implemented a constructor for this purpose. Feel free to add useful methods that is related to a Part and will aid in your work here, for example I have the isTopLevelPart method that will return true if the model number is 2. 
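The lookup-by-value idea behind fromValue is not Java-specific. As a rough sketch, the same normalize-then-fall-back-to-UNDEFINED behaviour in Python (using the enum values from the code above) could look like:

```python
from enum import Enum

class Serviceability(Enum):
    SERVICEABLE = "SVCBLE"
    NOT_SERVICEABLE = "NOT SVC"
    UNDEFINED = ""

    @classmethod
    def from_value(cls, value):
        # Normalize to upper case; fall back to UNDEFINED on no match,
        # mirroring the Java fromValue above.
        value = (value or "").upper()
        for member in cls:
            if member.value == value:
                return member
        return cls.UNDEFINED

print(Serviceability.from_value("svcble"))  # Serviceability.SERVICEABLE
print(Serviceability.from_value("bogus"))   # Serviceability.UNDEFINED
```

The point in both languages is the same: malformed input degrades to a well-defined UNDEFINED member instead of raising or returning null.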
package com.myproject.se; import java.util.ArrayList; import java.util.Iterator; import java.util.List; public class PartList implements Iterable<Part> { private List<Part> partList = new ArrayList<Part>(); public PartList(Part part) { addPart(part); } public void addPart(Part part) { // simple constraint if (partList.isEmpty() && !part.isTopLevelPart()) { throw new RuntimeException("This is not a top-level part."); } partList.add(part); } @Override public Iterator<Part> iterator() { return partList.iterator(); } public Part getTopLevelPart() { return partList.get(0); } public int size() { return partList.size(); } public List<Part> getServiceableParts() { List<Part> result = new ArrayList<Part>(); for (Part part : partList) { if (part.isServiceable()) { result.add(part); } } return result; } } PartList is a simple wrapper class that stores Part objects in an ArrayList. The only enhancement I have added is a constraint on the first element of the List, which must be a top-level part (see above). Other than that, feel free to once again introduce methods related to what a PartList should know, in my case here I added a simple getServiceableParts method for illustration. 
package com.myproject.se; import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; import java.util.Iterator; import java.util.List; public class RecordSupplier implements Iterable<Part>, Closeable { private static final List<String[]> testValues = new ArrayList<String[]>(); static { testValues.add(new String[] { "2", "ABC", "NAME 1", "", "1", "" }); testValues.add(new String[] { "3", "DEF", "NAME 2", "SVCBLE", "2", "PART" }); testValues.add(new String[] { "4", "GHI", "NAME 3", "NOT SVC", "3", "ASSEMBLY" }); testValues.add(new String[] { "2", "JKL", "NAME 4", "", "1", "" }); testValues.add(new String[] { "3", "MNO", "NAME 5", "SVCBLE", "5", "PART" }); } @Override public void close() throws IOException { // do nothing } @Override public Iterator<Part> iterator() { return new Iterator<Part>() { private Iterator<String[]> innerIterator = testValues.iterator(); @Override public boolean hasNext() { return innerIterator.hasNext(); } @Override public Part next() { return new Part(innerIterator.next()); } @Override public void remove() { throw new UnsupportedOperationException(); } }; } } I needed a mock class here to simulate a CSVReader, so here's a RecordSupplier using the iteration pattern of providing input. What is crucial to note here is that RecordSupplier, or more specifically its iterator, supplies Part objects, instead of a String array. Generally, you will want a well-behaved input provider that passes only types that can appropriately encapsulate the raw underlying values to caller methods that need the data. The other minor benefit of wrapping your CSVReader in such a class is that you can easily replace the actual input in the future by a database, or XML file reader etc, and all the caller methods see are still Part objects. 
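The "well-behaved input provider" idea also translates directly to other languages. A hedged Python sketch (the Part fields here are invented for illustration): wrap whatever produces raw rows in a generator that yields typed objects, so callers never touch raw string arrays and the backing source can be swapped freely.

```python
from dataclasses import dataclass

@dataclass
class Part:
    model_level: int
    part_number: str

def part_supplier(raw_rows):
    """Adapt any raw-row source (CSV reader, DB cursor, ...) to Part objects."""
    for row in raw_rows:
        yield Part(model_level=int(row[0]), part_number=row[1].upper())

# The caller only ever sees Part objects, never raw string arrays:
parts = list(part_supplier([["2", "abc"], ["3", "def"]]))
print(parts[0].part_number)  # ABC
```

Swapping the CSV source for a database query only means feeding different `raw_rows` into the adapter; every caller downstream is unaffected.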
package com.myproject.se; import java.io.IOException; import java.util.HashMap; import java.util.Iterator; import java.util.Map; import java.util.Map.Entry; import java.util.Scanner; public class Main { public static void main(String[] args) { @SuppressWarnings("unused") Scanner scanner = new Scanner(System.in); String partNumber1; String partNumber2; System.out.println("Enter first part number:"); // partNumber1 = scanner.next(); partNumber1 = "JKL"; System.out.println("Enter second part number:"); // partNumber2 = scanner.next(); partNumber2 = "ABC"; Map<String, PartList> result = getPartLists(partNumber1, partNumber2); for (Entry<String, PartList> entry : result.entrySet()) { System.out.println("Part number search term: " + entry.getKey()); Iterator<Part> iterator = entry.getValue().iterator(); while (iterator.hasNext()) { System.out.println(iterator.next()); } System.out.println("===================="); } } public static Map<String, PartList> getPartLists(String... partNumbers) { // only searching for two parts now, easily extensible if (partNumbers.length != 2) { throw new RuntimeException("Only two part numbers needed."); } Map<String, PartList> result = new HashMap<String, PartList>(); RecordSupplier supplier = new RecordSupplier(); Iterator<Part> iterator = supplier.iterator(); PartList partList = null; while (iterator.hasNext()) { Part part = iterator.next(); if (part.isTopLevelPart()) { String match = partNumberMatches(part, partNumbers); if (match.isEmpty()) { partList = null; } else { // match belongs to a new PartList partList = new PartList(part); // add the new PartList to result result.put(match, partList); } } else if (partList != null) { // keep adding current Part to existing PartList partList.addPart(part); } } try { supplier.close(); } catch (IOException e) { e.printStackTrace(); } // returning just a List of PartList should be good enough actually return result; } public static String partNumberMatches(Part part, String... 
partNumbers) { for (String partNumber : partNumbers) { if (part.hasPartNumberContains(partNumber)) { return partNumber; } } return ""; } } Finally, the main class. I have commented out the parts that read in user input (see the use of the Scanner class) and hard-coded two test values. The bulk of the work is done in getPartLists. Map<String, PartList> result = new HashMap<String, PartList>(); RecordSupplier supplier = new RecordSupplier(); Iterator<Part> iterator = supplier.iterator(); PartList partList = null; Create our result object and the RecordSupplier. while (iterator.hasNext()) { Part part = iterator.next(); Get a Part object representing the current row. if (part.isTopLevelPart()) { String match = partNumberMatches(part, partNumbers); If this is a top-level part, check if it matches our inputs or not (I think partNumberMatches is pretty self-explanatory and will not go into it). if (match.isEmpty()) { partList = null; If there is no match, we set our partList to null, using this as some sort of a flag afterwards to know we are ignoring subsequent lines until the next top-level part. } else { // match belongs to a new PartList partList = new PartList(part); // add the new PartList to result result.put(match, partList); } We have a match, let's set partList as a new PartList with this top-level part and add it to our result. } else if (partList != null) { // keep adding current Part to existing PartList partList.addPart(part); } } For other non-top-level parts, we will just check if partList is null or not. If it isn't, it means we have just matched a top-level part to our inputs and we need to add the current part to partList. Implicit assumptions: We are only matching part number names with the second column, Part Number Breakdown (see partNumberMatches and Part.hasPartNumberContains) One top-level part will only match one input. I hope this is informative enough...
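The control flow of getPartLists is compact enough to restate as a short Python sketch (field names and data are mine, for illustration); the null partList doubles as the "skip until the next top-level part" flag exactly as described:

```python
def get_part_lists(parts, part_numbers):
    # parts: iterable of (model_level, part_number) tuples
    result = {}
    current = None                          # None => ignoring rows
    for level, number in parts:
        if level == 2:                      # top-level part: start or skip a group
            match = next((n for n in part_numbers if n in number), None)
            current = [number] if match else None
            if match:
                result[match] = current
        elif current is not None:
            current.append(number)          # belongs to the matched group
    return result

rows = [(2, "ABC"), (3, "DEF"), (2, "JKL"), (3, "MNO")]
print(get_part_lists(rows, ["JKL"]))  # {'JKL': ['JKL', 'MNO']}
```

Rows under an unmatched top-level part simply fall through the `elif` and are ignored, which is the role the `partList = null` assignment plays in the Java version.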
{ "domain": "codereview.stackexchange", "id": 4254, "tags": "java, object-oriented, csv" }
How to transfer mechanical power from the inside of a vacuum chamber to the outside while maintaining a seal?
Question: In a vacuum chamber how would one transfer mechanical power (either rotation or linear) from inside to the external environment? I'm working on an idea for a new/different type of motor that would require an evacuated internal atmosphere and am wondering how to transfer the generated motion outside the case. I have thought about using rotating magnetic coupling on either side of a thinned wall section but don't think it would scale well for larger versions of the motor (perhaps powering a bike or car). I expect with a high enough budget (which I don't have) that an extremely high-tolerance mechanical seal could be machined but am wondering if there are any solutions that could sidestep the high tolerances by thinking laterally. Answer: One standard way is a rotary fitting using a ferrofluidic seal. These are fairly standard parts in your favorite vacuum components catalog. Often used in semiconductor processing equipment.
{ "domain": "physics.stackexchange", "id": 15026, "tags": "classical-mechanics, home-experiment, vacuum" }
Event tracking system
Question: I am learning C++ programming and just learned about basic OOP and decided to create a simple project to test my understanding and practice what I've learned. The idea I came up with is an event tracking system where you add events into a calendar and then you get all of your events displayed. I have 2 classes: Event, where your events are created, and Calendar, which holds a vector of all Events. Could you please review my code saying what are the most efficient ways of doing things and the best practices to be followed? Main.cpp #include "Calendar.h" int main() { Calendar calendar {}; calendar.add_event("Exam", "urgent", "10/12/2020", "10:30"); calendar.add_event("Meeting", "non-urgent", "08/11/2020", ("12:20")); calendar.display_events(); } Event.h #include <string> class Event { private: std::string event_type; std::string event_priority; std::string event_date; std::string event_time; public: Event(std::string eventType, std::string eventPriority, std::string eventDate, std::string eventTime); bool display_event() const; ~Event(); }; Event.cpp #include "Event.h" #include <iostream> #include <utility> Event::Event(std::string eventType, std::string eventPriority, std::string eventDate, std::string eventTime) : event_type(std::move(eventType)), event_priority(std::move(eventPriority)), event_date(std::move(eventDate)), event_time(std::move(eventTime)) { } bool Event::display_event() const { std::cout << "You have " << event_type << " on " << event_date << " at " << event_time << " it's " << event_priority << "\n"; return true; } Event::~Event() = default; Calendar.h #include "Event.h" #include <vector> class Calendar { private: std::vector<Event> calendar; public: bool display_events() const; bool add_event(std::string event_type, std::string event_priority, std::string event_date, std::string event_time); const std::vector<Event> &getCalendar() const; bool is_event_valid(const std::string& event_date, const std::string& event_time); ~Calendar(); }; 
Calendar.cpp #include "Calendar.h" #include <iostream> #include <utility> const std::vector<Event> &Calendar::getCalendar() const { return calendar; } bool Calendar::display_events() const { if (!getCalendar().empty()) { for (const auto &event : calendar) { event.display_event(); } return true; } else { std::cout << "Your calendar is empty \n"; return false; } } bool Calendar::add_event(std::string event_type, std::string event_priority, std::string event_date, std::string event_time) { if (is_event_valid(event_date, event_time)) { Event event {std::move(event_type), std::move(event_priority), std::move(event_date), std::move(event_time)}; calendar.push_back(event); return true; } else { std::cout << "Event is not valid\n"; return false; } } bool Calendar::is_event_valid(const std::string& event_date, const std::string& event_time) { int day{}, month{}, year{}, hours{}, minutes{}; day = std::stoi(event_date.substr(0,2)); month = std::stoi(event_date.substr(3, 2)); year = std::stoi(event_date.substr(6, 4)); hours = std::stoi(event_time.substr(0, 2)); minutes = std::stoi(event_time.substr(3, 2)); bool is_date_valid = (day > 0 && day <= 24) && (month > 0 && month <= 12) && (year >= 2020 && year <= 3030); bool is_time_valid = (hours >= 0 && hours <= 24) && (minutes >= 0 && minutes <= 60); if (is_date_valid && is_time_valid) { return true; } else { std::cout << "The event's time or date is not valid\n"; return false; } } Calendar::~Calendar() = default; I am also thinking about adding a feature where you can sort the events by date. Answer: General Observations Welcome to the Code Review Site. Nice starting question, very good for a beginning C++ programmer and a new member of the Code Review Community. The functions follow the Single Responsibility Principle (SRP) which is excellent. The classes also follow the SRP which is very good as well. You aren't making a fairly common beginner mistake by using the using namespace std; statement. 
Good use of const in many of the functions. The Single Responsibility Principle states that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function. The SRP is the S in the SOLID programming principles below. The object oriented design needs some work; for instance, the Event class should have an is_valid() method to let each event validate itself, which would be useful when creating a new event. The Calendar class could use this method and doesn't need to know about the private members of the Event class. Including access to the private members of the Event class in the Calendar class prevents this from being a SOLID object oriented design. In object-oriented computer programming, SOLID is a mnemonic acronym for five design principles intended to make software designs more understandable, flexible and maintainable. Include Guards In C++ as well as the C programming language the code import mechanism #include FILE actually copies the code into a temporary file generated by the compiler. Unlike some other modern languages C++ (and C) will include a file multiple times. To prevent this programmers use include guards which can have 2 forms: the more portable form is to embed the code in a pair of pre-processor statements #ifndef SYMBOL #define SYMBOL // All other necessary code #endif // SYMBOL A popular form that is supported by most but not all C++ compilers is to put #pragma once at the top of the header file. Using one of the 2 methods above to prevent the contents of a file from being included multiple times is a best practice for C++ programming. This can improve compile times if the file is included multiple times, and it can also prevent compiler errors and linker errors. 
Class Declarations in Header Files For both the Event and Calendar classes you define a destructor for the object in the class declaration and then set that destructor in the .cpp file; it would be better to do this in the class declarations themselves. For simple or single line functions such as display_event() you should also include the body of the function to allow the optimizing compiler to decide if the function should be inlined or not. In C++ the section of the class immediately following class CLASSNAME { is private by default so the keyword private isn't necessary where you have it in your code. Current conventions in object oriented programming are to put public methods and variables first, followed by protected methods and variables, with private methods and variables last. This convention came about because you may not be the only one working on a project and someone else may need to be able to quickly find the public interfaces for a class. Example of Event class refactored #include <iostream> #include <string> class Event { public: Event(std::string eventType, std::string eventPriority, std::string eventDate, std::string eventTime); bool display_event() const { std::cout << "You have " << event_type << " on " << event_date << " at " << event_time << " it's " << event_priority << "\n"; return true; } ~Event() = default; private: std::string event_type; std::string event_priority; std::string event_date; std::string event_time; }; Object Oriented Design The Calendar class has dependencies on the private fields of the Event class, the problem with this is it limits the expansion of the code of both classes and makes it difficult to reuse the code which is a primary function of object oriented code. It also makes the code more difficult to maintain. Each class should be responsible for a particular function / job. 
You mention sorting the events by date as a possible expansion of the program; in this case you need to add an <= operator to decide what order the events should be in. That operator should be in the Event class, but it looks like you would implement it in the Calendar class. The following code does not belong in a Calendar class method, it belongs in an Event class method: day = std::stoi(event_date.substr(0, 2)); month = std::stoi(event_date.substr(3, 2)); year = std::stoi(event_date.substr(6, 4)); hours = std::stoi(event_time.substr(0, 2)); minutes = std::stoi(event_time.substr(3, 2)); Currently the only way to create a new event is to try to add it to the calendar; it would be better to create each event on its own, check the validity of the event, and then call the add_event() method in the calendar.
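As a rough illustration of moving that parsing into the event itself, here is the same check as a standalone Python function (the function name is mine; note the bounds below are the conventional calendar/clock ones, day ≤ 31, hours ≤ 23, minutes ≤ 59, which differ slightly from the posted C++ code):

```python
def is_event_valid(event_date, event_time):
    # Expects "DD/MM/YYYY" and "HH:MM", matching the posted example input.
    day, month, year = int(event_date[0:2]), int(event_date[3:5]), int(event_date[6:10])
    hours, minutes = int(event_time[0:2]), int(event_time[3:5])
    date_ok = 1 <= day <= 31 and 1 <= month <= 12 and 2020 <= year <= 3030
    time_ok = 0 <= hours <= 23 and 0 <= minutes <= 59
    return date_ok and time_ok

print(is_event_valid("10/12/2020", "10:30"))  # True
print(is_event_valid("40/12/2020", "10:30"))  # False
```

Putting this knowledge on the event means the calendar can simply ask each event whether it is valid, without ever knowing how dates and times are encoded.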
{ "domain": "codereview.stackexchange", "id": 39751, "tags": "c++, beginner, object-oriented" }
PyCat: a netcat implementation in Python3.x v2
Question: This is a follow up question from this question Intro netcat is an all-round tool used with many applicable features My last try felt a bit rushed, and should've improved before posting here. But this time, I am happy with the result. CHANGELOG kwargs @use_ssl decorator Multi-Platform (Posix, *nix, Windows) Improved code structure Download Upload I tried adding a context manager, but couldn't really make it work in an elegant way. Any and all reviews are welcome. Example server $ python pycat.py -lsp 8080 [*] Incoming connection from 127.0.0.1:53391 username@hostame PyCat C:\dev\Pycat > echo hooooooi hooooooi username@hostame PyCat C:\dev\PyCat > cd ../ username@hostame PyCat C:\dev > exit client python pycat.py -si localhost -p 8080 Code import argparse import datetime from functools import wraps import socket from ssl import wrap_socket, create_default_context, CERT_NONE import sys import subprocess import tempfile import os import re from cryptography import x509 from cryptography.x509.oid import NameOID from cryptography.hazmat.primitives import serialization, hashes from cryptography.hazmat.primitives.asymmetric import rsa from cryptography.hazmat.backends import default_backend SUCCES_RESPONSE = b"Command succesfully completed" def ssl_server(func): @wraps(func) def wrapper(inst, *args): inst.socket.bind((inst.host, inst.port)) inst.socket.listen(0) if inst.ssl: inst.context = create_default_context() inst.key, inst.cert = inst.generate_temp_cert() inst.socket = wrap_socket( inst.socket, server_side=True, certfile=inst.cert, keyfile=inst.key ) func(inst, *args) return wrapper def ssl_client(func): @wraps(func) def wrapper(inst, *args): inst.socket.connect((inst.host, inst.port)) if inst.ssl: inst.context = create_default_context() inst.context.check_hostname = False inst.context.verify_mode = CERT_NONE inst.socket = wrap_socket(inst.socket) func(inst, *args) return wrapper class PyCatBase(): def __init__(self, **kwargs): self.socket = 
socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = kwargs['port'] self.host = kwargs['host'] or "0.0.0.0" self.operating_system = os.name == "nt" self.upload = kwargs['upload'] self.download = kwargs['download'] self.timeout = kwargs['timeout'] self.ssl = kwargs['ssl'] def exit(self): self.socket.close() sys.exit(0) def read(self, connection, length=1024): response = b"" while True: data = connection.recv(length) response += data if len(data) < length: break return response.decode("utf-8").rstrip() def upload_file(self, connection, file): with open(file, "rb") as f: connection.send(f.read()) def download_file(self, connection, file): recieved = self.read(connection) with open(file, "wb") as f: f.write(recieved) @staticmethod def generate_temp_cert(): _, key_path = tempfile.mkstemp() _, cert_path = tempfile.mkstemp() name_attributes = [ x509.NameAttribute(NameOID.COUNTRY_NAME, "OK"), x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, "OK"), x509.NameAttribute(NameOID.LOCALITY_NAME, "OK"), x509.NameAttribute(NameOID.ORGANIZATION_NAME, "OK"), x509.NameAttribute(NameOID.COMMON_NAME, "PyCat") ] key = rsa.generate_private_key( public_exponent=65537, key_size=2048, backend=default_backend() ) with open(key_path, "wb") as f: f.write( key.private_bytes( encoding=serialization.Encoding.PEM, format=serialization.PrivateFormat.TraditionalOpenSSL, encryption_algorithm=serialization.NoEncryption() ) ) subject = issuer = x509.Name(name_attributes) cert = x509.CertificateBuilder()\ .subject_name(subject)\ .issuer_name(issuer)\ .public_key(key.public_key())\ .serial_number(x509.random_serial_number())\ .not_valid_before(datetime.datetime.utcnow())\ .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365)) cert = cert.sign(key, hashes.SHA256(), default_backend()) with open(cert_path, "wb") as f: f.write( cert.public_bytes(serialization.Encoding.PEM) ) return key_path, cert_path class PyCatServer(PyCatBase): def __init__(self, **kwargs): 
super(PyCatServer, self).__init__(**kwargs) def create_prompt_string(self): self.client.send(b"cd") if self.operating_system else self.client.send(b"pwd") pwd = self.read(self.client) self.client.send(b"whoami") whoami = self.read(self.client) self.client.send(b"hostname") hostname = self.read(self.client) return f"{whoami}@{hostname} PyCat {pwd}\n> " @ssl_server def main(self): if self.timeout > 0: self.socket.settimeout(self.timeout) self.client, addr = self.socket.accept() print(f"[*] Incomming connection from {':'.join(map(str, addr))}") self.handle_client() def handle_client(self): if self.upload is not None: self.upload_file(self.client, self.upload) elif self.download is not None: self.download_file(self.client, self.download) else: while True: prompt_string = self.create_prompt_string() buf = input(prompt_string) self.client.send(buf.encode("utf-8")) if buf == "exit": break print(self.read(self.client)) self.exit() class PyCatClient(PyCatBase): def __init__(self, **kwargs): super(PyCatClient, self).__init__(**kwargs) def change_dir(self, path): try: os.chdir(path) return SUCCES_RESPONSE except FileNotFoundError as e: return str(e).encode("utf-8") def exec_command(self, command): try: return subprocess.check_output(command, stderr=subprocess.STDOUT, shell=True) except Exception as e: return str(e).encode("utf-8") def handle_command(self, command): if command == "exit": self.exit() change_dir = re.match(r'cd(?:\s+|$)(.*)', command) if change_dir and change_dir.group(1): return self.change_dir(change_dir.group(1)) return self.exec_command(command) @ssl_client def main(self): if self.timeout > 0: self.socket.settimeout(self.timeout) if self.upload is not None: self.upload_file(self.socket, self.upload) elif self.download is not None: self.download_file(self.socket, self.download) else: while True: cmd = self.read(self.socket) response = self.handle_command(cmd) if len(response) > 0: self.socket.send(response) def parse_arguments(): parser = 
argparse.ArgumentParser(usage='%(prog)s [options]', description='PyCat @Ludisposed', formatter_class=argparse.RawDescriptionHelpFormatter, epilog='Examples:\npython3 pycat.py -lvp 443\npython3 pycat.py -i localhost -p 443') parser.add_argument('-l', '--listen', action="store_true", help='Listen') parser.add_argument('-s', '--ssl', action="store_true", help='Encrypt connection') parser.add_argument('-p', '--port', type=int, help='Port to listen on') parser.add_argument('-i', '--host', type=str, help='Ip/host to connect to') parser.add_argument('-d', '--download', type=str, help='download file') parser.add_argument('-u', '--upload', type=str, help='upload file') parser.add_argument('-t', '--timeout', type=int, default=0, help='timeout') args = parser.parse_args() if (args.listen or args.host) and not args.port: parser.error('Specify which port to connect to') elif not args.listen and not args.host: parser.error('Specify --listen or --host') return args if __name__ == '__main__': args = parse_arguments() pycat_class = PyCatServer if args.listen else PyCatClient pycat = pycat_class(**vars(args)) pycat.main() Answer: I tried adding a context manager, but couldn't really make it work in an elegant way. class PyCatBase(): def __enter__(self): return self def __exit__(self, exc_type, exc_val, exc_tb): self.socket.close() return False # ... if __name__ == '__main__': args = parse_arguments() pycat_class = PyCatServer if args.listen else PyCatClient pycat = pycat_class(**vars(args)) with pycat: pycat.main() Other things. SUCCES_RESPONSE should be spelled SUCCESS_RESPONSE. Similarly, there's a typo in that string. This: self.operating_system = os.name == "nt" suggests one of two things. Either operating_system should be named is_windows, or you need to change it to simply self.operating_system = os.name. This: def __init__(self, **kwargs): is only a good idea in other, limited, contexts (for example, if you're extending a class with a highly complex initializer). 
Don't do that here. Spell out your args. Having implicit kwargs hurts you and your users in a number of ways, including kneecapping your IDE's static analysis efforts. Here: cert = x509.CertificateBuilder()\ .subject_name(subject)\ .issuer_name(issuer)\ .public_key(key.public_key())\ .serial_number(x509.random_serial_number())\ .not_valid_before(datetime.datetime.utcnow())\ .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365)) the generally accepted thing to do rather than a handful of newline continuations is to surround the thing in parens. This: change_dir = re.match(r'cd(?:\s+|$)(.*)', command) should have its regex pre-compiled in __init__, since you call it for every command.
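For instance, the cd matcher could be compiled once and reused (a minimal sketch; the pattern is the one from handle_command above, the constant name is mine):

```python
import re

# Compile once (e.g. in __init__ or at module level), reuse for every command.
CD_PATTERN = re.compile(r'cd(?:\s+|$)(.*)')

match = CD_PATTERN.match("cd /tmp/project")
print(match.group(1))                        # /tmp/project
print(CD_PATTERN.match("echo hi") is None)   # True
```

Beyond the (small) performance win, the compiled pattern gets a name, which also makes handle_command easier to read.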
{ "domain": "codereview.stackexchange", "id": 32907, "tags": "python, python-3.x, networking" }
Scale factor for flat universe filled with radiation and cosmological constant
Question: I am trying to solve problem 1.20 in the book 'Physical Foundations of Cosmology' from Mukhanov. In the problem, we should show that the scale factor $a$ for a flat universe filled with cosmological constant and radiation is given by, $$ a(t)=a_0(\sinh2H_\Lambda t)^{1/2},\tag{1} $$ wherein $H_\Lambda=(8\pi G\epsilon_\Lambda/3)^{1/2}$. As a hint, we are given, $$ a^{\prime\prime}+ka=\frac{4\pi G}{3}\left(\epsilon-3p\right)a^3,\tag{2} $$ where prime denotes the derivative with respect to conformal time $\eta$. The equation of state for the cosmological constant $\Lambda$ is $p=-\epsilon$ whereas the equation of state for radiation is $p=\frac{1}{3}\epsilon$. Using these equations the right-hand side of the former equation yields, $$ \frac{4\pi G}{3}4\epsilon_\Lambda a^3=2H_\Lambda a^3.\tag{3} $$ As for a flat universe $k=0$ we arrive at, $$ a^{\prime\prime}=2H_\Lambda a^3.\tag{4} $$ If we multiply both sides with $a^\prime$ we can write, $$ \frac{1}{2}\frac{d}{d\eta}(a^\prime)^2=a^{\prime\prime}a^\prime=2H_\Lambda a^3a^\prime=\frac{1}{2}H_\Lambda \frac{d}{d\eta}a^4,\tag{5} $$ and we can integrate both sides with respect to $\eta$. Dropping the integration constant and taking the square root of both sides, $$ a^\prime=\pm H_\Lambda^{1/2}a^2,\tag{6} $$ we can solve this by separation of variables and find, $$ \eta=\int d\eta=\pm H^{-1/2}_\Lambda\int \frac{da}{a^2}=\mp H^{-1/2}_\Lambda \frac{1}{a}.\tag{7} $$ We need the scale factor to be positive, therefore, $$ a(\eta)=H^{-1/2}_\Lambda \eta.\tag{8} $$ From the definition of the conformal time, $$ \eta=\int\frac{dt}{a(t)},\tag{9} $$ we find, $$ t=\int dt=\int d\eta a(\eta)=\frac{1}{2}H^{-1/2}_\Lambda \eta^2,\tag{10} $$ which solved for $\eta$ can be used to express the scale factor in proper time $t$, $$ a(t)=\sqrt{2 t}H_\Lambda^{3/4},\tag{11} $$ which obviously is very different from the actual result. What did I do wrong? 
Answer: I see some mistakes: First of all, according to your definition of $H_{\Lambda}$, in the right-hand side of your equation you should have $H_{\Lambda}^{2}$ and not $H_{\Lambda}$. When you solve by separation of variables and leave $a$ in terms of $\eta$, you obtain $a \propto \eta^{-1}$ and not $a \propto \eta$ as you wrote. At some point in your derivation you should take into account the integration constant; that is how you will make the $a_0$ appear in the final result. I'm not sure that you have switched correctly from conformal to proper time. I think that the best thing to do is to reduce the second-order equation to a first-order equation (just as you did) and then switch to proper time by using the chain rule and $d\eta = dt/a(t)$. Finally, I would recommend enumerating your equations; that way it is much easier to correct your work.
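As a sanity check on the book's quoted result, one can verify symbolically (a sketch, assuming SymPy) that $a(t)=a_0(\sinh 2H_\Lambda t)^{1/2}$ satisfies the proper-time Friedmann equation $H^2 = H_\Lambda^2(1 + a_0^4/a^4)$, where the $a^{-4}$ term is the standard radiation scaling normalized so that $\epsilon_r = \epsilon_\Lambda$ at $a=a_0$:

```python
import sympy as sp

t, H = sp.symbols('t H_Lambda', positive=True)
a0 = sp.Symbol('a_0', positive=True)

# The quoted solution for the scale factor
a = a0 * sp.sqrt(sp.sinh(2 * H * t))

# Hubble rate H(t) = (da/dt) / a
hubble = sp.diff(a, t) / a

# Friedmann equation for Lambda + radiation, with
# epsilon_r / epsilon_Lambda = a_0^4 / a^4 (radiation redshifts as a^-4)
friedmann_rhs = H**2 * (1 + a0**4 / a**4)

residual = sp.simplify(hubble**2 - friedmann_rhs)
```

The residual simplifies to zero via $\cosh^2 x - \sinh^2 x = 1$, confirming the target solution.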
{ "domain": "physics.stackexchange", "id": 60051, "tags": "homework-and-exercises, cosmology" }
Is it possible to push together two charged particles of the same charge hard enough so it becomes a black hole?
Question: My chain of thought is the following: To push together two charges of the same sign you need to do work. The energy spent will be turned into electrostatic potential energy. Can we pump so much energy into the two particle system this way so it becomes a black hole? Answer: In principle yes, though in practice it's far beyond our current capabilities. The problem is that a charged black hole has to have a Schwarzschild radius greater than $2r_Q$, where $r_Q$ is given by: $$ r_Q = \sqrt{\frac{Q^2 G}{4\pi\varepsilon_0 c^4}} $$ If this isn't true the black hole will be superextremal and this (probably) means it's unstable. For two electrons colliding the charge will be $2e$, so the condition becomes: $$ r_s = \frac{2GM}{c^2} \ge 2\sqrt{\frac{4e^2 G}{4\pi\varepsilon_0 c^4}} $$ or: $$ M \ge \sqrt{\frac{4e^2}{4\pi\varepsilon_0 G}} $$ and the lowest value of $M$ works out to be about $3.7 \times 10^{-9}$ kg or about $2 \times 10^{27}$ eV. So you'd need to collide your two electrons with a centre of mass energy of at least 2000 yotta-electronvolts to get a stable black hole. It'll be a while before we get round to building an accelerator that powerful. Even if we did have such a powerful accelerator there's another restriction to worry about. Unless we can line up the two electrons to within a small fraction of the Schwarzschild radius the resulting black hole would have a non-zero angular momentum, which would make it a Kerr-Newman black hole rather than Reissner-Nordstrom. We'd then have to worry about getting the angular momentum small enough for the KN black hole not to be extremal. I'll leave it as an exercise for the reader to work out how precise the alignment has to be.
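Plugging numbers into the mass bound above (a quick numerical sanity check of the quoted figures, using SI constants):

```python
import math

# Constants (SI)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# Minimum mass for a non-superextremal hole of charge Q = 2e
M_min = math.sqrt(4 * e**2 / (4 * math.pi * eps0 * G))

# Corresponding rest energy, expressed in eV
E_min_eV = M_min * c**2 / e
```

This reproduces the numbers in the answer: about $3.7\times10^{-9}$ kg, i.e. roughly $2\times10^{27}$ eV of centre-of-mass energy.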
{ "domain": "physics.stackexchange", "id": 25135, "tags": "electromagnetism, general-relativity, black-holes, charge" }
sp2 hybridisation of alkyl radicals causing formation of racemic mixture
Question: What I know about $\ce{^{.}CR3}$ radical is that it has both $\ce{sp^2}$ and $\ce{sp^3}$ character but the $\ce{sp^2}$ character dominates and so the radical is $\ce{sp^2}$ in nature. So, suppose we have the above situation. In this, a radical is formed at the carbon of the 6-membered ring at which one $\ce{-CH3}$ is attached since the bromination is selective, but will it form enantiomers? That is, can the $\ce{^{.}Br}$ radical attack from both sides? In this question, the answer given is (c), but I think that enantiomers cannot form, as the electron of the radical would be present only on one side of the radical carbon, and so cannot migrate to the other side. Answer: In the first step, due to the selectivity of bromine in a free radical reaction, the tertiary carbon which has a $\ce{-CH3}$ group attached to it forms the radical. Now, since the radical is a $\ce{^{.}CR3}$ radical, the carbon radical in the intermediate becomes $\ce{sp^2}$. This means that $\ce{^{.}Br}$ can attack from both the top and the bottom since this lone electron in the $\ce{^{.}CR3}$ radical is in a p-orbital. This is because the electron is equally likely to be found in both lobes of the orbital. Now after the attack of the $\ce{^{.}Br}$ radical, you get two products. The first one shows the product formed when $\ce{^{.}Br}$ attacks from the bottom and the second one shows the product formed after the $\ce{^{.}Br}$ attacks the intermediate from the top. You may have noticed that both products are enantiomers of each other and hence the answer would be (c). Reference Chem Libretexts
{ "domain": "chemistry.stackexchange", "id": 14194, "tags": "organic-chemistry, stereochemistry, hybridization, radicals, halogenation" }
Turtlebot gazebo
Question: Can I install turtlebot_simulator on my Fuerte Ubuntu 12.04? Thanks in advance :) Originally posted by salma on ROS Answers with karma: 464 on 2012-10-05 Post score: 1 Answer: Yes. Assuming that you've properly followed the instructions, you can type: sudo apt-get install ros-fuerte-turtlebot-simulator or you can choose to install it from source from the location included on the wiki. Originally posted by SL Remy with karma: 2022 on 2012-10-05 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 11247, "tags": "simulation, turtlebot, ubuntu, ros-fuerte, ubuntu-precise" }
I have an error "AttributeError: 'module' object has no attribute 'ApproximateTimeSynchronizer'" even if i already used TimeSynchronizer
Question: System information OS: Ubuntu 12.07 ROS version: Hydro This is the code I just created to be able to use two topics at the same time: import numpy as np import message_filters import rospy from gps_common.msg import GPSFix from personalpackage.msg import imu from personalpackage.msg import GNSS message = "" def callback(GNSS,GPS): f=open("the_times.csv","a+") print("inside callback") time_ros= GNSS.header.stamp.secs+GNSS.header.stamp.nsecs*0.000000001 time_gps= GNSS.time.secs+GNSS.time.nsecs*0.000000001 time_gps_pps= GNSS.time_clock.secs+GNSS.time_clock.nsecs*0.000000001 tsmp = time_gps_pps - time_ros message="time_ros time_gps time_gps_pps difference between timeGPS & timeGPSPPS\n" message=message+str(time_ros)+"\t"+str(time_gps)+"\t"+str(time_gps_pps)+"\t"+str(tsmp)+"\n" print(message) rospy.loginfo(rospy.get_caller_id() + "I heard %s", data.data) def the_times(): rospy.init_node('the_times', anonymous=True) GnssData=message_filters.Subscriber("GPS",GPSFix) ImuData=message_filters.Subscriber("GNSS",GNSS) print("after filtering") ApproSynchron=message_filters.ApproximateTimeSynchronizer([GnssData,ImuData], 10, 0.1,allow_headerless=True) print("after Synchro") ApproSynchron.registerCallback(callback) print("after Callback") rospy.spin() if __name__ == '__main__': the_times() When I execute it I get this error message: Traceback (most recent call last): File "the_times.py", line 51, in <module> the_times() File "the_times.py", line 41, in the_times ApproSynchron=message_filters.ApproximateTimeSynchronizer([GnssData,ImuData], 10, 0.1,allow_headerless=True) AttributeError: 'module' object has no attribute 'ApproximateTimeSynchronizer' I already tried the same code using "TimeSynchronizer" and I didn't get any error message. 
Originally posted by MohamedAkaarir on ROS Answers with karma: 16 on 2020-02-25 Post score: 0 Answer: The problem was simply that I didn't have the right version of the message_filters library, i.e. one that contains "ApproximateTimeSynchronizer"; the default version of message_filters on the Hydro distro is 1.10.12, and ApproximateTimeSynchronizer was first introduced in version 1.11.4 of message_filters. Originally posted by MohamedAkaarir with karma: 16 on 2020-02-25 This answer was ACCEPTED on the original site Post score: 0
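The version comparison itself is easy to get wrong if done on raw strings; a small illustrative check (the version numbers are the ones mentioned above):

```python
def version_tuple(v):
    """Turn '1.10.12' into (1, 10, 12) so versions compare numerically."""
    return tuple(int(part) for part in v.split('.'))

installed = '1.10.12'  # default message_filters on ROS Hydro
required = '1.11.4'    # first release with ApproximateTimeSynchronizer

has_approx_sync = version_tuple(installed) >= version_tuple(required)
```

Comparing tuples of integers avoids the classic pitfall of lexicographic string comparison (where, e.g., '1.9' would sort after '1.10').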
{ "domain": "robotics.stackexchange", "id": 34494, "tags": "ros-hydro, message-filters" }
Separate two voices from a speech signal
Question: I have to make a project for college. I have to come up with some application of signals that I can implement in MATLAB. Is it possible to separate two voices from an audio signal, say a .wav file? I mean, using techniques from a first course in digital signal processing (DFT, spectrogram, cepstrum). Answer: (I don't have enough points to post comments like everyone else) The problem you are referring to is commonly called "the cocktail party problem"; here is a little link I found https://stackoverflow.com/questions/20414667/cocktail-party-algorithm-svd-implementation-in-one-line-of-code I actually discovered this algorithm in that same online course on Coursera a few weeks ago. This article also has some links to research. If you want to see it (I recommend you do) just sign up at Coursera for the Stanford machine learning class. The video is the 4th entry from week 1 introduction at approximately 5 mins 30 seconds in; it's titled "unsupervised learning". I don't think I can legally post a link to the video, but since it's free, why not take a look. At this point it's important to say other than those few minutes at the end of the video clip, I don't think he discusses the cocktail party algorithm again. So there is no need to scour through hours of videos and lectures, you won't find anything (at least in the currently posted content, I haven't finished the course yet)
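For completeness, the standard tool for the cocktail-party setup is Independent Component Analysis (ICA). Below is a toy sketch (in Python/NumPy rather than MATLAB, with synthetic waveforms standing in for the two voices, and a hand-rolled two-component FastICA rather than a library call) showing the unmixing idea:

```python
import numpy as np

def fastica_2(X, n_iter=200, seed=0):
    """Tiny symmetric FastICA sketch for two mixed signals:
    center, whiten, then fixed-point iteration with the tanh
    nonlinearity and symmetric decorrelation."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: rotate/scale so components are uncorrelated with unit variance
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W_new W_new^T)^(-1/2) W_new
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z

# Two synthetic "voices" mixed into two "microphone" channels
t = np.linspace(0, 8, 4000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
X = A @ S
S_hat = fastica_2(X)                      # recovered up to sign/permutation/scale
```

Note the caveat: this assumes at least as many recorded channels as speakers. Separating two voices from a single mono .wav is a much harder (underdetermined) problem and this simple ICA does not apply directly.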
{ "domain": "dsp.stackexchange", "id": 2442, "tags": "speech, project-ideas" }
a problem on circular motion
Question: Question:A cyclist is riding with a speed of $7.5$ m/s. As he approaches a circular turn on the road of radius $80m$, he applies brakes and reduces his speed at the constant rate of $0.50$ m/s every second. What is the magnitude and direction of the net acceleration of the cyclist on the circular turn ? My approach:I know that if I can find the centripetal acceleration($a_c$) and tangential acceleration($a_t$) then I can easily calculate their resultant and as here $a_t=0.5$m/s/s so I just need to find $a_c$ ...in which I am having problem because if the speed of the cyclist was constant I could have easily said $a_c=\frac{v^2}{r}$ but here(in this question)velocity of the cyclist is not constant it is changing with time at the rate of 0.5 m/s/s so I don't think I can directly apply this formula as $v$ is changing here and not constant ...what should I do after this step...any suggestion/help is welcome. Answer: The magnitude of centripetal acceleration is $\frac{v^2}{r}$ instantaneously. It applies no matter the speed on your circular path. (Technically it's true for any curve, but $r$ would be changing on non-circular curves, making calculations more difficult.) The tangential acceleration is constant, so you can write a function for $v$. Then you have two mutually perpendicular components of the acceleration vector, one which is constant in magnitude and changing direction (tangential) and the other changing in a calculable fashion as a function of time. Now set up a coordinate system and use polar coordinates and trigonometry to find the functions of time that tell you the magnitudes and directions of the acceleration components. Have fun learning!
{ "domain": "physics.stackexchange", "id": 28198, "tags": "homework-and-exercises, newtonian-mechanics, kinematics" }
Why use sampling instead of the mean value for policy in Reinforcement Learning?
Question: I'm quite new to RL and I'm currently following David Silver's course on RL. But at the same time, I also want to get hands-on, so I followed this tutorial from Gymnasium documentation: https://gymnasium.farama.org/tutorials/training_agents/reinforce_invpend_gym_v26/ I understand the general concept and idea, but I'm curious about why we should model the policy as a distribution (a Normal distribution in this case) and then take a sample from that distribution as an action to be applied to the RL environment. Why don't we just use the mean value as an action instead of taking a sample from the distribution as an action? Here's the piece of code that I'm talking about: def sample_action(self, state: np.ndarray) -> float: """Returns an action, conditioned on the policy and observation. Args: state: Observation from the environment Returns: action: Action to be performed """ state = torch.tensor(np.array([state])) action_means, action_stddevs = self.net(state) # create a normal distribution from the predicted # mean and standard deviation and sample an action distrib = Normal(action_means[0] + self.eps, action_stddevs[0] + self.eps) action = distrib.sample() prob = distrib.log_prob(action) action = action.numpy() self.probs.append(prob) return action As an experiment, I have tried to change the action from action = distrib.sample() to action = action_means[0]. But it turns out that the model isn't learning. Does anyone have an idea? Answer: If the mean is used, the value is approximately the same over time. Thus the actions will be very similar over time, providing less opportunity for the model to learn what other actions could be useful (aka, the explore in exploit-explore concept). If a sample is used, the value is random. The randomness is weighted by the value of previous actions. The current action exploits previously useful actions and explores also.
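One way to make this concrete: the REINFORCE update scales the return by the score $\nabla_\mu \log \pi(a\,|\,\mu,\sigma)$, which for a Normal policy is $(a-\mu)/\sigma^2$. If you feed the mean back in as the "action", that factor is identically zero, so the mean head receives no gradient at all, matching the observed failure to learn. A minimal NumPy sketch (mirroring the tutorial's Normal policy; the numbers are illustrative):

```python
import numpy as np

def score_wrt_mean(action, mu, sigma):
    """d/d_mu of log N(action | mu, sigma) -- the factor that multiplies
    the return in the REINFORCE gradient estimate."""
    return (action - mu) / sigma**2

mu, sigma = 0.3, 0.5
rng = np.random.default_rng(0)

# Stochastic policy: a sampled action gives a non-zero score,
# so the return can push the mean toward or away from it
sampled = rng.normal(mu, sigma)
g_sampled = score_wrt_mean(sampled, mu, sigma)

# "Action = mean": the score is exactly zero -> no learning signal
g_mean = score_wrt_mean(mu, mu, sigma)
```

So beyond exploration, sampling is what makes the log-probability gradient informative in the first place.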
{ "domain": "datascience.stackexchange", "id": 11585, "tags": "machine-learning, reinforcement-learning, openai-gym" }
Simple cat in Rust
Question: I'm learning Rust, from a Python background, and while I've used languages like C and C++ in the (distant) past, system languages aren't really my specialty. I would just like to know if my code is sane, if I seem to have the principles roughly right, but I'd be happy to hear any improvements at all, particularly with performance. I'm also a bit unsure about error handling. Anyway, for my first program I've made a simple implementation of the UNIX cat command, without support for any arguments: use std::env; use std::io; use std::io::Read; use std::io::Write; use std::fs::File; macro_rules! println_stderr( ($($arg:tt)*) => ( match writeln!(&mut io::stderr(), $($arg)* ) { Ok(_) => {}, Err(err) => panic!("Unable to write to stderr: {}", err), } ) ); fn main() { let args: Vec<_> = env::args().collect(); if args.len() < 2 { loop { let mut s = String::new(); match io::stdin().read_line(&mut s) { Ok(_) => {}, Err(err) => { println_stderr!("{}", err.to_string()); continue; } }; print!("{}", s); } } else { for arg in &args[1..] { let mut s = String::new(); let mut file = match File::open(arg) { Ok(file) => file, Err(err) => { println_stderr!("{}", err.to_string()); continue; } }; match file.read_to_string(&mut s) { Ok(_) => {}, Err(err) => { println_stderr!("{}", err.to_string()); continue; } }; print!("{}", s); } } } Answer: imports Your imports should be compressed: use std::io::{self, Read, Write}; Although it might be better to use the io prelude: use std::io; use std::io::prelude::*; I prefer the former for its explicitness. match vs if let match io::stdin().read_line(&mut s) { Ok(_) => {}, Err(err) => { println_stderr!("{}", err.to_string()); continue; } }; is better as just if let Err(err) = io::stdin().read_line(&mut s) { println_stderr!("{}", err); continue; }; The same occurs later, and in the macro. prefer late initialization You should only initialize s just before use in the second branch. copy-paste error? 
Your continue when reading from stdin is a bit optimistic. I'd suggest just exiting at that point - if reading one line fails then reading the rest probably will too. plz no Unicode You read and write strings holding the whole file (or line) - cat should work instead on raw bytes, and preferably in large chunks too. The most obvious way is something like fn redirect_stream<R, W>(reader: &mut R, writer: &mut W) -> io::Result<()> where R: Read, W: Write { let mut buffer = vec![0; 64 * 1024]; loop { let len_read = try!(reader.read(&mut buffer)); if len_read == 0 { return Ok(()) } try!(writer.write_all(&buffer[..len_read])); } } plz no allocate Using the above means we're stuck allocating 64k for each stream, even if there are many streams or the stream is line-buffered. The first can be solved by passing the buffer into the function. The latter can be solved by resizing the buffer up to some hard limit from a small size. DRY Note that there's still duplication between the args.len() < 2 and else branches. One could solve this by writing a wrapping function to extract this functionality. fn handle_arg<R, W>(reader: &mut R, writer: &mut W, buffer: &mut Vec<u8>) where R: Read, W: Write { if let Err(err) = redirect_stream(reader, writer, buffer) { println_stderr!("{}", err.to_string()); } } Making handle_arg a closure would be prettier but require dynamic dispatch. That's probably fine, but maybe less idiomatic: let stdout = &mut io::stdout(); let buffer = &mut vec![0; SMALL_BUFFER_SIZE]; let mut handle_arg = move |mut reader: &mut Read| { if let Err(err) = redirect_stream(&mut reader, stdout, buffer) { println_stderr!("{}", err.to_string()); } }; - means stdin? 
This is more reasonable if we want to allow the code to accept - to mean standard input, as we can do let mut args: Vec<_> = env::args().skip(1).collect(); if args.is_empty() { args.push("-".into()); } let stdout = &mut io::stdout(); let buffer = &mut vec![0; SMALL_BUFFER_SIZE]; for arg in args { if arg == "-" { handle_arg(&mut io::stdin(), stdout, buffer); continue; } match File::open(arg) { Ok(ref mut file) => { handle_arg(file, stdout, buffer) }, Err(err) => { println_stderr!("{}", err); continue; } } } Result use std::env; use std::io::{self, Read, Write}; use std::iter; use std::fs::File; const SMALL_BUFFER_SIZE: usize = 256; const LARGE_BUFFER_SIZE: usize = 64 * 1024; macro_rules! println_stderr( ($($arg:tt)*) => ( if let Err(err) = writeln!(&mut io::stderr(), $($arg)* ) { panic!("Unable to write to stderr: {}", err); } ) ); fn redirect_stream<R, W>(reader: &mut R, writer: &mut W, buffer: &mut Vec<u8>) -> io::Result<()> where R: Read, W: Write { loop { let len_read = try!(reader.read(buffer)); if len_read == 0 { return Ok(()) } try!(writer.write_all(&buffer[..len_read])); if len_read == buffer.len() && len_read < LARGE_BUFFER_SIZE { buffer.extend(iter::repeat(0).take(len_read)); } } } fn main() { let mut args: Vec<_> = env::args().skip(1).collect(); if args.is_empty() { args.push("-".into()); } fn handle_arg<R, W>(reader: &mut R, writer: &mut W, buffer: &mut Vec<u8>) where R: Read, W: Write { if let Err(err) = redirect_stream(reader, writer, buffer) { println_stderr!("{}", err.to_string()); } } let stdout = &mut io::stdout(); let buffer = &mut vec![0; SMALL_BUFFER_SIZE]; for arg in args { if arg == "-" { handle_arg(&mut io::stdin(), stdout, buffer); continue; } match File::open(arg) { Ok(ref mut file) => { handle_arg(file, stdout, buffer) }, Err(err) => { println_stderr!("{}", err); continue; } } } }
{ "domain": "codereview.stackexchange", "id": 14398, "tags": "rust" }
Find the total number of emails in a folder (with all of the subfolders)
Question: The Folder struct definition below is given: type Folder struct { ID int emailCount int childFolderIDs []int } Description of the fields are: Id : folder id emailCount : number of emails in the folder childFolderIDS : if the folder has subfolders, this list contains folder IDS of the subfolders. I am trying to write a function: Function is taking two parameters: First parameter is a list of all folders in the email application var folders = []Folder{ Folder{1, 30, []int{2, 4}}, Folder{2, 10, []int{3}}, Folder{4, 60, []int{}}, Folder{3, 20, []int{}}, } Second parameter is folder id of a folder Function will return a total number of emails with ID=Folder Id and all of its children recursively. For example: If the second parameter is: four, then the function should return 60 (there is no subfolder) two, then the function should return 30 My code is below: package main import "fmt" type Folder struct { ID int emailCount int childFolderIDs []int } func InsertIntoMap(data_arr []Folder) map[int]Folder { var retval = make(map[int]Folder) var child_folders []int for _, elem := range data_arr { child_folders = elem.childFolderIDs var folder Folder folder.ID = elem.ID folder.emailCount = elem.emailCount folder.childFolderIDs = child_folders retval[elem.ID] = folder } return retval } func GetTotalEmailCount(p_map map[int]Folder, folder_id int) int { total := 0 var child_folders []int child_folders = p_map[folder_id].childFolderIDs total = total + p_map[folder_id].emailCount if len(child_folders) == 0{ return total }else{ for e := range child_folders { return total + GetTotalEmailCount(p_map, child_folders[e]) } } return total } func main() { var folders = []Folder{ Folder{1, 30, []int{2, 4}}, Folder{2, 10, []int{3}}, Folder{4, 60, []int{}}, Folder{3, 20, []int{}}, } var m = InsertIntoMap(folders) fmt.Println(GetTotalEmailCount(m,1)) // result is 60 fmt.Println(GetTotalEmailCount(m,2)) // result is 30 fmt.Println(GetTotalEmailCount(m,3)) // result is 20 
fmt.Println(GetTotalEmailCount(m,4)) // result is 60 } Answer: First, your code doesn't give correct answers; for (m, 1) the answer should be 120 (30 in ./1, +10 in ./1/2, +20 in ./1/2/3, +60 in ./1/4; you never count more than a single entry in the sub-folder slice). Also, you should run go fmt (or goimports) on your code (perhaps you did and that was lost when posting). Also, adding a Go Playground link to your original code in posts is helpful. You mention that you wanted to create a function taking two parameters, the first being []Folder but your code instead converts the slice into a map and uses that as the first argument. Which did you intend? When you have a function or functions that are answering questions based on an argument you could consider making a method. For example, perhaps something like: type Folders []Folder // or type Folders struct { list []Folder byID map[int]Folder } func (f Folders) EMailCount(folder int) int { // code that uses f and folder as arguments } You use snake_case for some of your identifiers and camelCase for others; idiomatic Go code uses camelCase for all identifiers. In your InsertIntoMap function: When you make the map you know how many entries you expect so you should include that number to make as a size hint (e.g. make(map[int]Folder, len(data))). If the size is large this avoids having to dynamically re-size the map as you add entries. You name the map retval, I'd instead name it based on what it contains (a map of folders by ID) rather than that it happens to be the return value. I'd call it just m or byID. Within the loop you effectively copy the element elem to the variable folder. You can do this with just folder := elem but at that point the entire loop body can just be rewritten as retval[elem.ID] = elem. 
In your GetTotalEmailCount function: Normally in Go instead of: var foo []int foo = something[bar].foo you'll just see: foo := something[bar].foo not only is it shorter but it can make future code changes easier. In the latter if the field foo changes type you don't need to edit the type of the variable foo to match. (By the way, shorter isn't always better. Clarity is more important than conciseness). Idiomatic Go code tends to avoid indenting code by using early returns (see https://github.com/golang/go/wiki/CodeReviewComments#indent-error-flow). You return if len(…) == 0 so you should just drop the else clause and remove the indent (golint will suggest this). E.g., instead of: if someCondition { return something } else { // other code // possibly with more conditional/loop indenting } it would be: if someCondition { return something } // other code // possibly with more conditional/loop indenting In this specific case, you don't even need the conditional since for range loops don't do anything on empty/nil slices, the following return total line is sufficient. It's in this for loop that your bug exists. On the first iteration you stop looping and return the total of this folder and its first child without iterating to the next child. 
return total + GetTotalEmailCount(p_map, child_folders[e]) should be: total += GetTotalEmailCount(p_map, child_folders[e]) Without changing the basic structure of your code, all the above gives something like this (Go Playground): package main import "fmt" type Folder struct { ID int emailCount int childFolderIDs []int } func InsertIntoMap(data []Folder) map[int]Folder { m := make(map[int]Folder, len(data)) for _, e := range data { m[e.ID] = e } return m } func GetTotalEmailCount(m map[int]Folder, folder int) int { children := m[folder].childFolderIDs total := m[folder].emailCount for e := range children { // BUG: doesn't detect infinite loops total += GetTotalEmailCount(m, children[e]) } return total } func main() { var folders = []Folder{ Folder{1, 30, []int{2, 4}}, Folder{2, 10, []int{3}}, Folder{4, 60, []int{}}, Folder{3, 20, []int{}}, } var m = InsertIntoMap(folders) fmt.Println(GetTotalEmailCount(m, 1)) // result is 120, 30+10+20+60 fmt.Println(GetTotalEmailCount(m, 2)) // result is 30 fmt.Println(GetTotalEmailCount(m, 3)) // result is 20 fmt.Println(GetTotalEmailCount(m, 4)) // result is 60 }
{ "domain": "codereview.stackexchange", "id": 35509, "tags": "beginner, recursion, go" }
Electron in an infinite potential well
Question: Does this problem make any sense? Suppose an electron in an infinite well of length $0.5\ \rm nm$. The state of the system is the superposition of the ground state and the first excited state. Find the time it takes the electron to go from one wall to the other. Strictly speaking the electron isn't even moving, and $\vert \Psi \vert ^2$ is zero at the walls so it doesn't even "touch" them. I think the only solution would be a semiclassical interpretation. Answer: Yes, I believe you have to think of it as if it were a semiclassical problem; you evaluate with QM the mean square velocity $\left< v^2 \right>$ of the particle, then calculate its square root; this should give you an estimate of the typical velocity of the particle. Once you have it, you divide the length of the well by it and find the time it takes the electron to go from one wall to the other as if it were a classical particle with velocity $\sqrt{\left< v^2 \right>}$.
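Following the answer's semiclassical recipe for this specific superposition (assuming equal weights of the two lowest states, so $\langle E\rangle = (E_1+E_2)/2 = 5E_1/2$ with $E_2 = 4E_1$, and treating all of $\langle E\rangle$ as kinetic inside the well), the numbers come out as:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m = 9.1093837015e-31     # electron mass, kg
L = 0.5e-9               # well width, m

E1 = math.pi**2 * hbar**2 / (2 * m * L**2)  # ground-state energy
E_mean = (E1 + 4 * E1) / 2                  # equal superposition of n=1 and n=2

v_rms = math.sqrt(2 * E_mean / m)           # sqrt(<v^2>) from <E> = m<v^2>/2
t_cross = L / v_rms                         # semiclassical wall-to-wall time
```

Roughly $E_1 \approx 1.5$ eV, $v_{\rm rms} \approx 1.2\times10^{6}$ m/s, and a crossing time of order $4\times10^{-16}$ s.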
{ "domain": "physics.stackexchange", "id": 79092, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, potential" }
Moving contained liquids vertically
Question: I am wondering how a liquid would react if one put it in a container and moved the container along a perfect vertical line (perfect, as in completely vertical with no angle / horizontal shift whatsoever), e.g. taking a glass of water and moving it up and then back down. On any 'normal' movement of the container, one would observe waves formed on account of the liquid being tossed to one side, but if the movement is perfectly vertical, how can waves form? Of course the scenario is only hypothetical, since this is probably impossible to reproduce in real life. Thanks Answer: The vertical movement would cause no extra forces in the container, but the acceleration at the top and bottom would cause some very small compression waves in the water. Think of an elevator. You only feel lighter and heavier when the elevator is speeding up or slowing down. Your scenario would be exactly equivalent to a situation where the bottle is stationary but the gravitational acceleration of the Earth is increasing and decreasing. This is Einstein's Equivalence Principle. Assuming that the container is not deforming, only the depth of the water would change slightly as the gravity changes. In our everyday experience, water is considered incompressible so you wouldn't be able to see this change in the water depth.
{ "domain": "physics.stackexchange", "id": 49496, "tags": "kinematics" }