Simple spec testing DSL
Question: I wrote a very small DSL for spec testing: (ns funky-spec.core) (def described-entity (ref nil)) (defn it [fun & v] (assert (apply fun (cons (deref described-entity) v)))) (def it-is it) (def it-is-the it) (defmacro describe [value nest] `(dosync (ref-set described-entity ~value) ~nest)) (defmacro when-applied [nest] `(dosync (ref-set described-entity ((deref described-entity))) ~nest)) (defmacro when-applied-to [& args-and-nest] `(dosync (ref-set described-entity (apply (deref described-entity) (take (dec (count (quote ~args-and-nest))) (quote ~args-and-nest)))) (eval (last (quote ~args-and-nest))))) This turned out to be very nice for writing some simple, concise specs: (ns funky-spec.core-test (:require [clojure.test :refer :all] [funky-spec.core :refer :all])) (describe 42 (it = 42)) (describe 99 (it not= 42)) (describe 0 (it-is zero?)) (describe 99 (it-is-the (complement zero?))) (defn answer [] 42) (describe answer (when-applied (it = 42))) (describe identity (when-applied-to 99 (it = 99))) (describe + (when-applied-to 42 35 (it = 77))) (describe + (when-applied-to 42 35 9 10 (it = 96))) I'm new to Clojure and macros, so any thoughts on this code would be appreciated. I have a few concerns that you could start with: when-applied could (I believe) be an alias for when-applied-to, but I can't get this to work with def; is there something I am missing? Is there a better pattern I could use other than a "global" ref for the described-entity? I tried using let and shadowing, but couldn't get it to work with macros. In when-applied-to, there's a lot of quoting and an eval; I did this to keep it from prematurely executing.
Answer: This is a case where a dynamic variable is the ideal solution: (def ^:dynamic *described-entity*) (defn it [fun & v] (assert (apply fun *described-entity* v))) (def it-is it) (def it-is-the it) (defmacro describe [value nest] `(binding [*described-entity* ~value] ~nest)) (defmacro when-applied-to [& args-and-nest] (let [args (butlast args-and-nest) nest (last args-and-nest)] `(binding [*described-entity* (*described-entity* ~@args)] ~nest))) (defmacro when-applied [nest] `(when-applied-to ~nest)) Online resources can probably explain dynamic vars better than I can here. For present purposes, think of it like your ref except that it's thread-local and will restore the original value when the binding context exits. To use a macro from another macro, you'll want to do it as shown here rather than trying to literally def it in terms of another macro. The reasons why should probably be saved for when you get a little more macro experience. And note butlast: in when-applied-to you can do your argument wrangling at macro-expansion time, outside the syntax-quoted form. Note though, I'd probably consider passing args in a vector, like (when-applied-to [42 35] (it = 77)), to make the arguments more explicit and require less pre-processing.
{ "domain": "codereview.stackexchange", "id": 14641, "tags": "beginner, unit-testing, clojure, dsl" }
Are translational KE and rotational KE exactly analogous?
Question: My textbook states that translational KE and rotational KE are completely analogous. The author states "They both are the energy of motion involved with the coordinated (non-random) movement of mass relative to some reference frame." I can understand why this applies when comparing rotational velocity and translational velocity, but how can we justify that mass is equal to moment of inertia? It appears to me that moment of inertia isn't exactly analogous to mass in that it involves both direction and a physical quantity. Answer: Keep in mind what an analogy is. It's a comparison of forms, not of identical quantities. So if we look at the two forms, $$\frac{1}{2}mv^2 \text{ and } \frac{1}{2}\mathcal{I}\omega^2, $$ we see that there are two types of quantities in the first form: a mass, which is independent of the motion of the object, and a square of a motion quantity. If we look at the second form we see a square of a motion quantity, $\omega^2$, so what is that $\mathcal{I}$ thing? Well, its exact form depends on the shape of the rigid object (if it's not a rigid object, then the analogy falls apart) and the mass of the object, but it's independent of the motion of the object. That's as far as the analogy can go, because of reference frame and rotational center point considerations. To help begin thinking about how $\mathcal{I}$ gets related to a motion-independent part of the form, consider a point mass, $M$, moving in a circular path of radius $R$, having some instantaneous speed $v$. The kinetic energy in the reference frame where the center of rotation is at rest (path of motion is a circle) is $$K = \frac{1}{2}Mv^2.$$ But we can also describe the motion in terms of an angular speed, $\omega = v/R$. If we do this, we get $$K=\frac{1}{2}M\left(\omega R\right)^2 = \frac{1}{2}MR^2\omega^2.$$ Here we see that $M$ and $R$ are speed independent, so we can lump them together and give them a single algebraic symbol and maybe even a name: $$MR^2 \to \mathcal{I}.$$
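The same lumping step works for an arbitrary rigid body, since every mass element shares the same $\omega$:

```latex
K = \sum_i \tfrac{1}{2} m_i v_i^2
  = \sum_i \tfrac{1}{2} m_i (\omega r_i)^2
  = \tfrac{1}{2}\Big(\sum_i m_i r_i^2\Big)\omega^2
  \equiv \tfrac{1}{2}\,\mathcal{I}\,\omega^2,
```

where $r_i$ is the distance of the $i$-th mass element from the rotation axis. The motion-independent lump $\sum_i m_i r_i^2$ is exactly the moment of inertia, which is why it plays the role mass plays in the translational form.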
{ "domain": "physics.stackexchange", "id": 75443, "tags": "energy, angular-momentum, mass, torque, moment-of-inertia" }
How to reduce unnecessary waiting time when using IBM's backend?
Question: I'm working with a program which needs iterations of quantum computation like this: def quantum(n): Grover(oracle).run(QuantumInstance(...)) #n is input size associated with oracle, #and some other components are omitted. for n in range(0,10): start = time.time() quantum(n) end = time.time() Now I have to wait for hours to run this on the 16-qubit quantum computer. So is there any way to pack all the computation into one round? Answer: Assuming your quantum() method creates a circuit, you can run lots of circuits in one go by using the execute command. For example execute([grover_1, grover_2, grover_3], backend=my_backend).
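The payoff of batching is easiest to see with a toy model of the queue. Everything below is a mock (the backend, the overhead figure, the circuit names); on the real device you would build all your circuit objects first and pass them to execute in one list:

```python
import time

# Hypothetical stand-in for a cloud backend: every submission pays a
# fixed queueing overhead, which is what dominates when circuits are
# submitted one at a time. Real IBM queue times are minutes to hours.
QUEUE_OVERHEAD = 0.05  # seconds, made-up figure

def run_batch(circuits):
    time.sleep(QUEUE_OVERHEAD)               # one trip through the queue
    return [f"result-for-{c}" for c in circuits]

grovers = [f"grover_{n}" for n in range(10)]  # stand-ins for circuits

# One at a time: pays the queue overhead n times.
start = time.time()
serial = [run_batch([c])[0] for c in grovers]
serial_time = time.time() - start

# Batched: build everything first, pay the overhead once -- the same
# idea as execute([grover_1, grover_2, ...], backend=my_backend).
start = time.time()
batched = run_batch(grovers)
batch_time = time.time() - start

print(f"serial {serial_time:.2f}s vs batched {batch_time:.2f}s")
```

The results are identical either way; only the number of trips through the queue changes.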
{ "domain": "quantumcomputing.stackexchange", "id": 1566, "tags": "qiskit, programming, ibm-q-experience" }
Could we make things out of newly discovered particles?
Question: Right now, all of the "stuff" that has been created in the world is made of protons, electrons, and neutrons. I'm aware that particles other than these have much shorter lifetimes. But I've also heard that mixing particles in the same state can increase their lifetimes - the neutron, for example, has a lifetime of about 15 minutes when it's alone, but is stable in an atom. Could we make combinations of other particles that last for (at least) days at a time? The combination need not consist entirely of new particles, it must just have at least one. I am very interested in this subject, so ideally I'd like a concrete answer, not just a guess. Answer: This is really just an extended comment on CuriousOne's answer. You probably know that there are just a few elementary particles: six quarks, three electron-a-likes (electron, mu and tau), three neutrinos and various assorted bosons. All matter is made up from various combinations of these particles. The problem is that the heavy particles decay into the light ones on short timescales. So top, bottom, charm and strange quarks end up as up and/or down quarks, while tau and mu end up as electrons. So very quickly everything ends up as electrons and up and down quarks, which of course make protons and neutrons. The energy difference between up and down quarks is relatively small, and indeed it's comparable to nuclear binding energies. That's why a neutron can be stable in a nucleus and unstable out of it: the nuclear binding energy is large enough to stabilise the neutron. However, in all other cases the energy difference between the different types of quarks is far larger than nuclear energies and (apart from some special cases - see below) there is no stabilising them. Likewise the energy differences between the electron, mu and tau are too large for the heavier particles to be stabilised by atomic binding energies. I've skipped over the bosons because you can't make matter from bosons.
Bosons don't obey the Pauli exclusion principle, and it's the exclusion principle that allows atoms to exist. If you attempted to make matter from bosons, at best you would just get a condensate. I did say there were some special cases. Let me start with a known one: you can make matter from muons. Muonic hydrogen has been made, and so has a hydrogen analogue made from an anti-mu and an electron. I thought the mu/anti-mu equivalent of hydrogen had been observed, but Wikipedia says not. Anyhow, these atoms last only until the mu decays. As mentioned above, the binding energies available are too small to stabilise the mu and prevent it decaying to an electron. The other special case is still entirely theoretical. A neutron is stable because the nuclear binding energy is high enough to prevent the down quark to up quark decay, but nuclear energies aren't high enough to prevent, for example, a strange quark from decaying. However, it has been suggested that at exceedingly high pressures strange quarks could be stabilised and form strange matter. Most of us regard these ideas as huge fun but rather unlikely.
{ "domain": "physics.stackexchange", "id": 18860, "tags": "particle-physics, nuclear-physics, standard-model, antimatter" }
How to execute local_setup.bash generated in cross-compilation environment?
Question: Hi, I'd like to know how to set up the ROS2 environment on arm platforms. I thought simply executing local_setup.bash could get everything ready, but it doesn't work. I built ROS2 in a cross-compilation environment on an x86 PC. After the build completes, I just copy the install directory to an arm64 platform. I create the same directory on the arm platform as on the x86 PC. I execute local_setup.bash on arm, but the ROS2 binary cannot be found. Besides, it seems that some modules are not found either. For example, I get the error shown below when I run a ROS2 command. Traceback (most recent call last): File "./ros2", line 5, in <module> from pkg_resources import load_entry_point File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module> @_call_aside File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside f(*args, **kwargs) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master ws.require(__requires__) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'ros2cli==0.4.0' distribution was not found and is required by the application Could you please tell me how to adjust local_setup.bash on the arm platform so I can execute it as on the x86 PC? Updates: After setting AMENT_TRACE_SETUP_FILES, the output of local_setup.bash is shown below. This code block was moved to the following github gist: https://gist.github.com/answers-se-migration-openrobotics/f053dea359c6a74f5bcaf9d4cf01b265 The output of PYTHONPATH is empty.
The python library in use is: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-aarch64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] I follow ros2-for-arm cross-compilation instructions to build ROS2. The host X86 PC is Ubuntu 16.04 x86_64. The arm64 platform is a Hikey960 board with Ubuntu 16.04 aarch64 version. Originally posted by davidhuziji on ROS Answers with karma: 16 on 2018-04-09 Post score: 0 Answer: Hi, I download the latest code and force ROS2 to use Python3 on arm platform. Although there are still errors dumped, the demo can run. Thanks. Originally posted by davidhuziji with karma: 16 on 2018-04-20 This answer was ACCEPTED on the original site Post score: 0
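For what it's worth, "forcing Python 3" in a situation like this usually comes down to two mechanics: putting the install tree's Python 3 packages on PYTHONPATH (the traceback above shows only Python 2 system paths) and invoking the entry point with python3 explicitly. A sketch of those mechanics, with all paths hypothetical (adjust to wherever the install tree was copied and to your Python minor version):

```shell
# Hypothetical install prefix -- substitute the directory copied over
# from the cross-compilation host.
ROS2_INSTALL="${ROS2_INSTALL:-$HOME/ros2_ws/install}"
SITE="$ROS2_INSTALL/lib/python3.5/site-packages"
mkdir -p "$SITE"   # stands in here for the copied install tree

# Put the install tree's Python 3 packages ahead of the system
# Python 2 dist-packages that appear in the traceback above.
export PYTHONPATH="$SITE:$PYTHONPATH"

# Invoke entry points with python3 explicitly instead of trusting a
# shebang that may resolve to Python 2 on the board.
python3 -c 'import sys; print(sys.path[:3])'
```

On a real board the last line would be something like `python3 "$ROS2_INSTALL/bin/ros2" --help`; the point is only that both the interpreter and the package path must agree on Python 3.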
{ "domain": "robotics.stackexchange", "id": 30584, "tags": "ros2" }
Storing icons statically as strings in custom class
Question: I have a class that I call "Icons" Icons.h #import <Foundation/Foundation.h> @interface Icons : NSObject + (UIImage *) someIcon; + (UIImage *) someOtherIcon; + (NSString *)imageToNSString:(UIImage *)image; + (UIImage *)stringToUIImage:(NSString *)string; @end Icons.m #import "Icons.h" @implementation Icons + (UIImage *) someIcon { return [self stringToUIImage:[self someIconStr]]; } + (UIImage *) menuLines { return [self stringToUIImage:[self someOtherIconStr]]; } + (NSString *)imageToNSString:(UIImage *)image { NSData *data = UIImagePNGRepresentation(image); return [data base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed]; } + (UIImage *)stringToUIImage:(NSString *)string { NSData *data = [[NSData alloc]initWithBase64EncodedString:string options:NSDataBase64DecodingIgnoreUnknownCharacters]; return [UIImage imageWithData:data]; } + (NSString *) someIconStr { return @"iVBORw0KGgoAAAANSUhEUgAAABYAAAAoCAYAAAD6xArmAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAHGlET1QAAAACAAAAAAAAABQAAAAoAAAAFAAAABQAAAB5EsHiAAAAAEVJREFUSA1iYKAimDhxYjwIU9FIBgaQgZMmTfoPwlOmTJGniuHIhlLNxaOGwiNqNEypkwlGk9RokoIUfaM5ijo5Clh9AAAAAP//ksWFvgAAAEFJREFUY5g4cWL8pEmT/oMwiM1ATTBqONbQHA2W0WDBGgJYBUdTy2iwYA0BrILDI7VMmTJFHqv3yBUEBQsIg/QDAJNpcv6v+k1ZAAAAAElFTkSuQmCC"; } + (NSString *) someOtherIconStr { return @"iVBORw0KGgoAAAANSUhEUgAAADIAAAAKCAYAAAD2Fg1xAAAACXBIWXMAABYlAAAWJQFJUiTwAAAAHGlET1QAAAACAAAAAAAAAAUAAAAoAAAABQAAAAUAAACZxAe6RgAAAGVJREFUOBFiYACCmTNn8k+ePLl+0qRJ74H4PxS/B4mB5EkFA2Ye0MHnkTwA8wiYBsmR6pEBMQ8U6rg8ARMHqSHWMwNmHtCxyMkJJTZgHgGpIdYjA2YekmNxeQIsToxHQHljoMwDAAAA//+psxowAAAAWUlEQVRjmDRp0n9iMAMRYObMmfzEmAVSQ4RxDCSZBzT0PRGWvyfGYpCaATNv8uTJ9YQ8AlJDrEcG1Dyg5edxeQYkR6wnYOoG1DxoSCIns/cgMZjjSKXpbR4A1NvIaZrhxd8AAAAASUVORK5CYII="; } This way, whenever I need to add an icon to a button or whatever, it's UIButton * btn; [btn setImage:[Icons someIcon] forControlState:UIControlStateNormal]; It doesn't seem like a big fix, but here's the main reasons I do it. 
Easy To Use Same Icon Library Across Projects No misspelled image errors No need to create image file keys Changing an icon quickly updates all iterations across project Questions Anyone else using something similar to this? Should I store the image strings statically as opposed to returning them via method as is now? Any other thoughts? Answer: I do something similar, except rather than creating my own class, I create a UIImage class category. File names are called UIImage+Icon.h and UIImage+Icon.m. In the .h, we have this: @import UIKit.UIImage; @interface UIImage (Icon) + (UIImage*)someIcon; + (UIImage*)someOtherIcon; @end And then put your code in the .m as normal. Be sure to note @import UIKit.UIImage; versus the #import. You don't need to import the entire UIKit. @import is new in Xcode 5 and vastly speeds up compile time as it only imports the required modules. Anyway, the advantage to this versus what you're doing is that we call our methods as such: [UIImage someIcon]; It's clear as day that we're definitely returning a UIImage object, because we're calling a UIImage class method. EDIT: Having just noticed that your class also includes a method which returns NSString, I still recommend the class category. In this case, the imageToNSString method would look like this: - (NSString*)toEncodedString; The method doesn't take an argument because it is an instance method. Because this is a UIImage class category, you can refer to self, which gives you the UIImage instance on which this method was called. I personally like the UIImage class category, but another option that is still potentially better than what you're doing would be to create C-style functions. So you'll have a .h file that would include this function declaration: UIImage * imageFromString(NSString *str); As well as a list of NSString * const objects that you send as arguments. Then your .m, you've just got the logic for returning an image from a string. 
The main point here is that there are two options that don't involve a class called Icon. The problem is, if there's a class called Icon, I kind of expect to be working with Icon objects in some way. And I might even try to instantiate an Icon object, which makes no sense... Ultimately though, if you're NOT going to create a UIImage class category OR use C-style functions and still work with the Icon class, since you only have class methods and your class certainly shouldn't be instantiated, I highly recommend adding this to your .h file: + (id)alloc __attribute__((unavailable("Icon class can not be instantiated"))); - (id)init __attribute__((unavailable("Icon class can not be instantiated"))); This will throw red errors and prevent compilation if anyone tries to instantiate the Icon class. This is the worst option of the three I proposed in this situation, but perhaps it can be acceptable in some cases.
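As an aside, the encode/decode pair here is plain base64, nothing UIKit-specific. The same round trip sketched in Python, purely to illustrate the mechanism the two Objective-C helpers implement:

```python
import base64

def image_to_string(data: bytes) -> str:
    """Image bytes -> base64 text, like imageToNSString:."""
    return base64.b64encode(data).decode("ascii")

def string_to_image(s: str) -> bytes:
    """Base64 text -> image bytes, like stringToUIImage:."""
    return base64.b64decode(s)

# Any image bytes would do; the 8-byte PNG signature is enough to show
# why every icon string in the question starts with "iVBORw0KGgo".
png_signature = b"\x89PNG\r\n\x1a\n"
encoded = image_to_string(png_signature)
assert string_to_image(encoded) == png_signature
print(encoded)  # "iVBORw0KGgo=" -- the familiar PNG-in-base64 prefix
```

The round trip is lossless for PNG data, which is why storing the string and decoding on demand works at all.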
{ "domain": "codereview.stackexchange", "id": 6668, "tags": "objective-c, ios" }
What's the difference between the Actor Model of Concurrency and Communicating Sequential Processes
Question: I'm trying to wrap my head around what the real differences are between the Actor Model of concurrency and the Communicating Sequential Processes (CSP) model of concurrency. So far the best that I have been able to come up with is that the Actor Model allows the number and layout of nodes to change while CSP has a fixed structure of nodes. Answer: I believe one core difference is that in CSP, processes synchronize when messages are received (i.e. a message cannot be sent from one process unless another process is in a receiving mode), while the Actor model is inherently asynchronous (i.e. messages are sent immediately to the other process's address, irrespective of whether it is actively waiting on a message or not). A more fully developed answer is surely possible, however.
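That distinction can be sketched in a few lines. Below, an actor-style mailbox is just an unbounded queue (send never blocks), while a CSP-style channel forces a rendezvous (send blocks until a matching receive); the Channel class is a minimal single-sender/single-receiver sketch, and all names are invented for illustration:

```python
import queue
import threading

# Actor-style mailbox: send is asynchronous. put() returns immediately,
# even though no receiver is currently waiting.
mailbox = queue.Queue()
mailbox.put("hello")              # returns at once
assert mailbox.get() == "hello"   # picked up whenever the actor gets to it

# CSP-style rendezvous channel: send blocks until a receiver arrives.
class Channel:
    """Minimal one-sender/one-receiver rendezvous channel."""
    def __init__(self):
        self._item = None
        self._sender = threading.Semaphore(0)
        self._receiver = threading.Semaphore(0)

    def send(self, item):
        self._item = item
        self._sender.release()    # announce: a sender is waiting
        self._receiver.acquire()  # block until the receiver has taken it

    def recv(self):
        self._sender.acquire()    # block until a sender shows up
        item = self._item
        self._receiver.release()  # unblock the waiting sender
        return item

ch = Channel()
received = []
t = threading.Thread(target=lambda: received.append(ch.recv()))
t.start()
ch.send("ping")                   # returns only once recv has happened
t.join()
print(received)                   # ['ping']
```

Calling `ch.send` with no receiver thread would block forever, which is exactly the synchronous behavior the CSP side of the comparison describes.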
{ "domain": "cstheory.stackexchange", "id": 5840, "tags": "concurrency" }
Turning a wheel of a car, stationary vs turning
Question: At the end of class, my physics professor left us with a question: "Why is turning the wheel of a stationary car much harder than turning one in motion?" I took the wheel to be a little wide and to have a little compressibility, so the area of contact would make a rectangle. Now in the first case turning is opposed by friction at all points of the rectangle (except the one in the center), and in the second it is friction that facilitates turning (being greater on one side to make the car turn). Have I used the right assumptions? Answer: Why is turning the wheel of a stationary car much harder than turning one in motion? The straight answer is that in a stationary car you are overcoming static friction, and in a moving car, rolling resistance. The static friction coefficient for tires is about $50\times$ higher than the rolling resistance coefficient for the same tires.
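Plugging in some assumed, typical coefficients shows the scale of that claim (real values vary with tire, load and surface):

```python
# Assumed round-number coefficients for rubber tires on dry asphalt.
mu_static = 0.9     # static friction coefficient (assumed)
c_rolling = 0.02    # rolling resistance coefficient (assumed)

mass = 1500.0       # kg, an ordinary car (assumed)
g = 9.81            # m/s^2
normal = mass * g   # weight carried by the tires, N

force_static = mu_static * normal    # resists pivoting the tread in place
force_rolling = c_rolling * normal   # resists a rolling tire

print(f"{force_static:.0f} N vs {force_rolling:.0f} N, "
      f"ratio {force_static / force_rolling:.0f}x")
```

The ratio of the two forces is just the ratio of the coefficients, independent of the car's weight, which is where the roughly 50x figure comes from.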
{ "domain": "physics.stackexchange", "id": 71663, "tags": "newtonian-mechanics, kinematics" }
Error when loading joint controller
Question: Hi I am trying to load a real time joint controller as in the tutorial http://www.ros.org/wiki/pr2_mechanism/Tutorials/Running%20a%20realtime%20joint%20controller When I write rosrun pr2_controller_manager pr2_controller_manager list-types I find my controller. rosrun pr2_controller_manager pr2_controller_manager list-types JointGravityController JointPendulumController MyControllerCart ethercat_trigger_controllers/MultiTriggerController ethercat_trigger_controllers/ProjectorController ethercat_trigger_controllers/TriggerController my_controller_pkg/MyControllerPlugin But when I try to run it with rosrun pr2_controller_manager pr2_controller_manager load my_controller_ns I get an error: [ WARN] [1348483742.077905347, 5304.895000000]: The deprecated controller type MyControllerPlugin was not found. Using the namespaced version my_controller_pkg/MyControllerPlugin instead. Please update your configuration to use the namespaced version. [ERROR] [1348483742.078175612, 5304.896000000]: Could not load class my_controller_pkg/MyControllerPlugin: Failed to load library /home/nachum/my_controller_pkg/lib/libmy_controller_lib.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: No manifest in /home/nachum/my_controller_pkg/lib/libmy_controller_lib.so: my_controller_pkg__MyControllerPlugin [ERROR] [1348483742.078288739, 5304.896000000]: Could not load controller 'my_controller_ns' because controller type 'my_controller_pkg/MyControllerPlugin' does not exist Any ideas what I am doing wrong? Thanks Nachum Originally posted by Nachum on ROS Answers with karma: 208 on 2012-09-24 Post score: 0 Original comments Comment by ahendrix on 2012-09-24: Which version of ROS and which OS are you using? The error messages complain that you aren't declaring and exporting your plugin properly; have you checked that?
If you can post your controller code, manifest and CMakeLists.txt, that will help debug. Answer: Somehow it worked after I closed Gazebo and started it again. Originally posted by Nachum with karma: 208 on 2012-10-15 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Lorenz on 2012-10-15: Probably because the controller manager re-initialized the list of plugins.
{ "domain": "robotics.stackexchange", "id": 11126, "tags": "microcontroller" }
Can neutrinos orbit galaxy centre like stars do?
Question: I am a layman in particle physics. I saw this question about gravitation affecting neutrinos. There is a link to an article, but it is too difficult for me. Is it known if and where neutrinos can orbit heavy objects, say the centre of a galaxy, and is their trajectory stable? Answer: Neutrinos have mass and so they are affected by gravity (indeed, even if they were massless, they would still be affected by gravity, just as photons are). However, the mass of a neutrino is very very small, so neutrinos emitted by the nuclear fusion processes inside stars (and in even more extreme events such as supernovae) have speeds that are very close to the speed of light - they are relativistic particles. Because they travel at speeds so close to the speed of light, it is very difficult to directly measure the speed of a neutrino and this is an area of ongoing research - see this Wikipedia article. To go into a closed orbit around a galaxy, a star or even a black hole, a neutrino would have to lose energy and slow down. But because they are not affected by the electromagnetic force or the strong nuclear force, neutrinos hardly ever interact with other particles, so a relativistic neutrino is very unlikely to lose enough energy to ever go into a closed orbit.
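Some assumed round numbers make the point quantitative: even with a generous rest mass, a fusion neutrino moves enormously faster than anything gravitationally bound to a galaxy:

```python
# Back-of-envelope figures, all assumed: a ~1 MeV fusion neutrino,
# a generous 0.1 eV rest mass, and a ~550 km/s galactic escape speed.
E_total_eV = 1.0e6        # total energy of the neutrino
m_rest_eV = 0.1           # rest-mass energy
c = 299_792_458.0         # speed of light, m/s
v_escape = 550e3          # Milky Way escape speed, m/s

gamma = E_total_eV / m_rest_eV   # Lorentz factor, here 1e7
# For large gamma, v = c*sqrt(1 - 1/gamma^2) ~ c*(1 - 1/(2*gamma^2)),
# so the speed falls short of c by roughly c/(2*gamma^2).
shortfall = c / (2 * gamma**2)   # m/s below light speed

print(f"gamma = {gamma:.0e}")
print(f"v is within {shortfall:.1e} m/s of c")
print(f"galactic escape speed = {v_escape / c:.1e} c")
```

The shortfall from c is of order a micrometre per second, while the escape speed is only about 0.2% of c, so a freshly emitted neutrino sails straight out unless it somehow sheds almost all of its energy.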
{ "domain": "physics.stackexchange", "id": 74519, "tags": "gravity, neutrinos, space" }
How does one deduce small step operational semantics?
Question: This question arises from my reading of "Types and Programming Languages" (WorldCat) by Benjamin C. Pierce. For the small-step operational semantic evaluation rules for the arithmetic expressions (NB) in Figure 3-2 on page 41, there is the rule $pred\;(succ\;nv_1)\rightarrow\;nv_1$ My understanding is that it is to keep out invalid input like $pred\;(false)$, but how did he come to that exact syntax for the rule? Is there some algorithm that is used to massage the rules into the necessary form? Is there some book or paper that explains how to formulate rules for small-step operational semantics? Note: I am aware that there is a forum dedicated to questions for the book here. Answer: Devising an operational semantics of a programming language requires ingenuity, just like engineering a new car does. Nevertheless, there are some principles and correctness criteria by which we can judge whether a given semantics is "good". For example, a good operational semantics should make sure that "valid programs do not do bad things". In practice this translates to "if a program successfully compiled it is not going to dump core when we run it". Some languages are unsafe in this respect (C/C++), some are designed with safety in mind (Java, Python), and some are proved to be safe (Standard ML). One way of translating "valid programs do not do bad things" is to interpret "valid program" as "well typed program" and "bad thing" as "getting stuck during evaluation, even though we have not reached a final value". For this to make sense you have to define what "well typed" and "value" mean. The central theorem then is Safety: If a program $p$ has type $\tau$ then its evaluation diverges or terminates in a value of type $\tau$. The theorem is typically proved by combining two lemmas: Type preservation: If $p$ has type $\tau$ and $p \mapsto p'$ then $p'$ has type $\tau$. Remember this as "types do not change during evaluation".
Progress: If $p$ has type $\tau$ then it is either a value or there is a program $p'$ such that $p \mapsto p'$. On the other hand, it does not matter what the operational semantics does with senseless programs, such as $\mathtt{pred}(\mathtt{false})$ because those do not have a type. It is the combination of typing and operational semantics that ensures safety. You cannot ensure safety just with the operational semantics, unless you are willing to make bizarre evaluation rules that will make your language awfully hard to debug.
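The NB fragment in question is small enough to execute. Here is a Python sketch of its small-step rules (the tuple representation and helper names are my own), showing both the $pred\;(succ\;nv_1)\rightarrow nv_1$ rule and how an ill-typed term like $\mathtt{pred}(\mathtt{false})$ simply gets stuck rather than stepping:

```python
# Terms of Pierce's NB arithmetic language as nested tuples.
ZERO = ("zero",)
TRUE, FALSE = ("true",), ("false",)
def succ(t): return ("succ", t)
def pred(t): return ("pred", t)

def is_numeric_value(t):
    """nv ::= 0 | succ nv"""
    return t == ZERO or (t[0] == "succ" and is_numeric_value(t[1]))

def step(t):
    """One small step; returns None when no rule applies
    (i.e. t is a value, or t is stuck)."""
    if t[0] == "succ":
        t1 = step(t[1])
        return None if t1 is None else ("succ", t1)   # E-Succ
    if t[0] == "pred":
        if t[1] == ZERO:
            return ZERO                               # E-PredZero
        if t[1][0] == "succ" and is_numeric_value(t[1][1]):
            return t[1][1]              # E-PredSucc: pred (succ nv) -> nv
        t1 = step(t[1])
        return None if t1 is None else ("pred", t1)   # E-Pred
    return None   # true, false, zero: values, no step

def evaluate(t):
    """Take small steps until none applies."""
    while (t1 := step(t)) is not None:
        t = t1
    return t

print(evaluate(pred(succ(ZERO))))   # ('zero',)
print(evaluate(pred(FALSE)))        # stuck term: ('pred', ('false',))
```

Note how `evaluate(pred(FALSE))` ends in a non-value normal form: that is exactly "getting stuck", the bad thing the Safety theorem rules out for well-typed programs. The side condition "$nv_1$ is a numeric value" in E-PredSucc is what keeps, say, `pred(succ(false))` from stepping to `false`.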
{ "domain": "cs.stackexchange", "id": 1136, "tags": "semantics, operational-semantics, types-and-programming-languages, small-step-semantics" }
Is "Find a Steiner Tree of cost < K" NP-hard?
Question: Finding the minimum Steiner tree is well known to be NP-hard. It seems obvious that asking "find a Steiner tree of cost < K" has to be NP-hard, because I could solve the original problem by repeatedly calling an algorithm that solves the second. But is this "repeatedly call"/loop a valid reduction? There seems to be something fishy about it: isn't it exponential in the number of bits of K (like the famous knapsack DP algorithm)? Thanks Answer: In fact, the optimization version of minimum Steiner tree isn't NP-hard, since only decision problems can be NP-hard. When we say that a minimization problem like minimum Steiner tree is NP-hard, what we mean is that the corresponding decision version is NP-hard. The decision version is just what you wrote: given an instance of minimum Steiner tree and a number $K$, to decide whether the minimum Steiner tree has cost at most $K$.
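On the asker's worry about the loop: the standard fix is binary search, which finds the optimum with a number of oracle calls polynomial in the bit length of the cost bound, not linear in its value. A sketch with a toy monotone oracle standing in for the decision procedure (all names hypothetical):

```python
def minimize_with_oracle(decide, upper_bound):
    """Smallest k in [0, upper_bound] with decide(k) True, assuming
    decide is monotone (as "cost <= k" is). Makes O(log upper_bound)
    oracle calls -- polynomial in the bit length of upper_bound,
    unlike a linear scan over every candidate cost."""
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid       # a tree of cost <= mid exists; tighten from above
        else:
            lo = mid + 1   # no tree that cheap; tighten from below
    return lo

OPT = 1234                 # pretend minimum Steiner-tree cost
calls = []
def toy_oracle(k):         # stands in for "is there a tree of cost <= k?"
    calls.append(k)
    return OPT <= k

print(minimize_with_oracle(toy_oracle, 10**6))  # 1234
print(len(calls))          # about 20 calls, not about 10**6
```

This is why the optimization and decision versions are polynomially equivalent whenever the cost bound has polynomially many bits.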
{ "domain": "cs.stackexchange", "id": 7610, "tags": "np-complete, reductions, np-hard" }
Textbooks about non-RE languages for undergraduate students
Question: I'd like to read up on non-recursively enumerable languages. Which textbooks should I look into to get a decent understanding about the subject? Thank you. Answer: Despite its name, Soare's book Recursively enumerable sets and degrees has lots of information about this. In particular, it covers in pretty good detail the arithmetical hierarchy. (Incidentally, note that the non-r.e. elements of the arithmetical hierarchy aren't all "r.e.-hard." For example, consider any minimal $\Delta^0_2$ degree, that is any Turing degree $\le_T{\bf 0'}$ which is nonzero but not strictly above any nonzero Turing degree. There are infinitely many minimal $\Delta^0_2$ degrees; by the Sacks Density Theorem, they are all non-r.e. and hence by minimality don't compute any nonrecursive r.e. sets. The way in which the r.e. degrees sit inside the $\Delta^0_2$ degrees is actually fairly complicated.)
{ "domain": "cs.stackexchange", "id": 16621, "tags": "books" }
Electrical Potential Energy and Forces Experienced by Particles
Question: I am really stuck on a question and would appreciate a helping hand. A particle has a charge of $+3.5\ \mu\mathrm{C}$ and moves from point $A$ to point $B$, a distance of $0.5\ \mathrm{m}$. The particle experiences a constant electric force, and its motion is along the line of action of the force. The difference between the particle's electrical potential energy at point $A$ and point $B$ is $EPE_{A} - EPE_{B} = +7.0\times10^{-3}\ \mathrm{J}$. (a) Find the magnitude and direction of the electric force that acts on the particle. (b) Find the magnitude and direction of the electric field the particle experiences. The fact that the work done is a positive quantity leads me to think that the electric force acts in the direction from $A$ to $B$, but I don't know how to quantify it. Thank you for any help you can give! Answer: The relation between potential difference and a uniform electric field over a displacement is V = E.dr -----> (1) where, V - potential difference between the two points E - electric field dr - distance between the two points Also, we know that the electric force is given by F = qE ------> (2) where, F - electric force q - the net charge on which the force is acting For part (a): the work done by the force equals the drop in potential energy, so $W = EPE_{A} - EPE_{B} = F\,dr$, which gives $F = \frac{EPE_{A} - EPE_{B}}{dr} = \frac{7.0\times10^{-3}\ \mathrm{J}}{0.5\ \mathrm{m}} = 1.4\times10^{-2}\ \mathrm{N}$. Notice that the charge cancels here: you never need $q$ to find the force. For part (b): using equation (2), $E = F/q = \frac{1.4\times10^{-2}\ \mathrm{N}}{3.5\times10^{-6}\ \mathrm{C}} = 4.0\times10^{3}\ \mathrm{N/C}$. (As a check, the potential difference itself is $V = (EPE_{A} - EPE_{B})/q = 2.0\times10^{3}\ \mathrm{V}$.) As for direction, your reasoning is correct: since the field here is uniform and the work done is positive, both the force and the field point from $A$ to $B$. To quantify the work done: Work Done = Force * Displacement, i.e. W = qE.dr = qV ------> (3) So from (3) the work done is positive, which confirms that the force points along the displacement from $A$ to $B$.
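A quick numeric check of those relations (note that the charge cancels when the force is computed from the energy drop):

```python
q = 3.5e-6           # C, charge of the particle
d = 0.5              # m, distance from A to B
delta_epe = 7.0e-3   # J, EPE_A - EPE_B

# Work-energy: work done by the electric force equals the drop in
# potential energy, W = EPE_A - EPE_B = F * d, so q cancels out of
# the force calculation entirely.
F = delta_epe / d    # electric force, N, directed from A to B
E = F / q            # electric field, N/C, also from A to B
V = delta_epe / q    # potential difference V_A - V_B, volts, as a check

print(f"F = {F:.3g} N, E = {E:.3g} N/C, V = {V:.3g} V")
```

The positive sign of the work is what fixes the direction: the force, and hence the field acting on a positive charge, points from A toward B.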
{ "domain": "physics.stackexchange", "id": 12148, "tags": "homework-and-exercises, electrostatics" }
Building blocks of Life
Question: I'm learning Clojure, and decided to write a Conway's Game of Life clone as my starting project. I ended up dicking around for a bit before diving in, and came up with a few functions that I'd like looked over. Mainly, I'm concerned about writing them more concisely and idiomatically. I'm planning on using a 1D vector to represent a 2D field. The typical equation to get the index of a vector corresponding to an (x,y) coordinate is y * width + x. Here are my first attempts: ; A 2D point representing a coordinate, or any pair of numbers (defrecord Point [x y]) ; Represents the Game of Life world. ; cells is a vector representing a 2D matrix of cells ; dims is the dimensions of the world as a Point (defrecord Enviro [cells dims]) (defn index-of [width x y] (+ (* y width) x)) (defn enviro-index-of [enviro x y] (let [width (:x (:dims enviro))] (index-of width x y))) (defn enviro-index-of-2 [enviro x y] (let [width (-> enviro :dims :x)] (index-of width x y))) index-of is straightforward. My issue is the two convenience functions to get an index by supplying the Enviro instead of the width directly. I tried destructuring the record directly in the argument list, but it complained that it didn't recognize the keyword record keys. The work-around was to just "navigate" the records manually, and bind "width" in a let. My first attempt was kind of naïve: using the accessor functions to get the Enviro, then the dimensions Point. Then I remembered the thread macro! For my second attempt, I used -> to find the width. It seems to be much cleaner, but is this as idiomatic as it gets? Next, I decided to try writing a function that returns a vector containing all the points surrounding a given point.
I can then check the cell at each neighboring point to see whether or not the center point should be dead or alive: (defn generate-neighbor-points ([cx cy r] (let [start-x (- cx r) end-x (+ cx r 1) ; Adding 1 so it's inclusive start-y (- cy r) end-y (+ cy r 1)] (for [y (range start-y end-y) x (range start-x end-x) :when (and (not= x cx) (not= y cy))] ; Should actually be `or` instead of `and` (Point. x y)))) ([center-point r] (let [cx (:x center-point) cy (:y center-point)] (generate-neighbor-points cx cy r)))) There's a couple things that I'm not happy about here: The fact that this generates a list of cells, just so it can be checked. I might see if I can just make this a higher-order function that takes a callback that's executed on each Point surrounding the cell. That might be thinking too javascript-y though. The fact that this requires the creation of 2 "range"s, which are really full lists. This seems to be the best way of "iterating" a range of numbers and returning a list. I'm open for suggestions on how to make this more efficient and idiomatic though. Answer: There's a lot that you can streamline here. For your Point record: ; A 2D point representing a coordinate, or any pair of numbers (defrecord Point [x y]) If you were only going to use a Point to represent a 2D Cartesian coordinate pair, this record may have some value, but not very much. The problem is the "or any pair of numbers" part. If you want to represent a pair of numbers in Clojure, just use a vector: [4 2] Vectors are very easy to destructure: (defn prettify [point] (let [[x y] point] (str "(" x ", " y ")"))) (prettify [4 2]) ;=> "(4, 2)" Or if you want to just get the first or second element in a pair, you can do that as well: (first [4 2]) ;=> 4 (second [4 2]) ;=> 2 You're creating a fair bit of extra work (e.g. the index-of functions) for yourself by representing your 2D world as a 1D vector. 
Using a 2D vector instead would eliminate the need for all three of those functions, as well as the Enviro record itself: (def enviro (vec (repeatedly 10 (fn [] (vec (repeatedly 10 (fn [] (rand-int 10)))))))) (pprint enviro) ;; [[3 7 6 2 9 8 5 7 0 2] ;; [0 6 4 7 1 7 0 7 1 9] ;; [1 8 0 5 6 9 7 7 8 3] ;; [7 9 0 8 4 3 3 7 8 6] ;; [8 9 3 5 1 4 6 5 0 2] ;; [6 2 3 2 9 4 8 0 3 1] ;; [0 5 7 7 7 9 0 0 9 5] ;; [3 8 9 8 2 0 6 3 1 9] ;; [9 7 4 1 5 8 0 5 1 2] ;; [9 9 1 9 4 6 3 1 8 2]] ;=> nil (get-in enviro [4 2]) ;=> 3 Or, if your matrix is sparse, you could use a map from points to values instead: (def enviro (into {} (repeatedly 10 (fn [] [[(rand-int 10) (rand-int 10)] (rand-int 10)])))) (pprint enviro) ;; {[2 8] 7, ;; [5 4] 2, ;; [4 2] 6, ;; [7 8] 1, ;; [9 6] 5, ;; [1 7] 2, ;; [2 6] 3, ;; [6 0] 5, ;; [3 5] 4, ;; [0 1] 5} ;=> nil (get enviro [4 2]) ;=> 6 If you just use a plain hash map for this, you'll lose some of the spatial query capabilities of a vector, but you could get some of those back by using a quadtree or some other spatially indexed data structure. I'll leave it up to you to decide how to go about doing that in Clojure if you want to take that route. Your generate-neighbor-points function looks pretty good. Since range and for return lazy sequences, your implementation doesn't have any real performance issues. If I were to write it myself, though, I would probably be a bit more terse: (defn neighbors [point radius] (let [[xs ys] (map #(range (- % radius) (+ % radius 1)) point)] (for [y ys x xs :when (not= point [x y])] [x y]))) There's nothing wrong with just returning the sequence of neighbor points and then leaving it up to the caller to decide what to do with them. 
I wouldn't recommend complecting the "find all the neighbor points" logic and the "do something with each of those neighbor points" logic into the same function, because I think it would make more sense to put that second bit in a separate function: (defn update-all [enviro points f] (reduce #(update-in %1 %2 f) enviro points)) Example (using the 2D vector enviro example above): (pprint (update-all enviro (neighbors [4 3] 2) #(mod % 2))) ;; [[3 7 6 2 9 8 5 7 0 2] ;; [0 6 4 7 1 7 0 7 1 9] ;; [1 0 0 1 0 1 7 7 8 3] ;; [7 1 0 0 0 1 3 7 8 6] ;; [8 1 1 5 1 0 6 5 0 2] ;; [6 0 1 0 1 0 8 0 3 1] ;; [0 1 1 1 1 1 0 0 9 5] ;; [3 8 9 8 2 0 6 3 1 9] ;; [9 7 4 1 5 8 0 5 1 2] ;; [9 9 1 9 4 6 3 1 8 2]] ;=> nil
{ "domain": "codereview.stackexchange", "id": 20086, "tags": "beginner, clojure, game-of-life" }
What happens if I use C++17 features in my ROS nodes?
Question: In the ROS manual/wiki you can see that ROS is only made for C++11. But what exactly does that mean? What happens if I just put add_compile_options(-std=c++17) in the CMakeLists.txt of my ROS-package and use C++17-features anyway? Specifically: Is it possible to use parallel std::for_each from C++17 in my ROS-nodes? Originally posted by max11gen on ROS Answers with karma: 164 on 2020-01-09 Post score: 2 Original comments Comment by gvdhoorn on 2020-01-09: In the ROS manual/wiki you can see please always link to pages you are referring to. Right now we don't know what you "see" exactly. Comment by max11gen on 2020-01-09: @gvdhoorn I know, sorry. The thing was just, that I couldn't actually find where exactly I had read that, but I rather just had it in the back of my head. Answer: ROS is only made for C++11 this is actually not really true. For ROS 1: Melodic has lifted the max version to C++14 (see here). For ROS 2: all ROS 2 versions target C++14 by default (see REP-2000, search for "Minimum language requirements"). What happens if I just put add_compile_options(-std=c++17) in the CMakeLists.txt of my ROS-package and use C++17-features anyway? Nothing. It probably will just work, as long as your C++17 object code is ABI compatible with whatever libraries you are linking against. In other words: you'll potentially run into the exact same problems you could have with ABI incompatibilities between libraries when not using ROS. Originally posted by gvdhoorn with karma: 86574 on 2020-01-09 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by max11gen on 2020-01-09: Thanks for your answer. But how can I find out, if the object code is ABI compatible after all? Will I get errors if the compatibility is not given, or can it happen that there will be just arbitrary, undefined behaviour occurring? Comment by gvdhoorn on 2020-01-09: Yes, could be linking errors, could also be SEGFAULTs.
I can't give you a more definitive answer unfortunately. As I wrote: there is nothing really ROS specific here. It's essentially plain C++. From my own personal experience though: C++03 + C++11 was troublesome (std::string etc). C++11 and newer has not been a problem for me so far (ie: combining binary artefacts compiled with these different versions). But again: personal experience, so this is not a guarantee everything will work. Comment by pavel92 on 2020-01-09: In addition to the above-mentioned, you can also use the set_property command in CMakeLists to set C++ standards for specific targets only (where you know you need C++17, for example) within a package, instead of for the whole package as is done with add_compile_options. Here is an example which applies the C++17 standard for a defined executable: set_property(TARGET my_executable PROPERTY CXX_STANDARD 17) set_property(TARGET my_executable PROPERTY CXX_STANDARD_REQUIRED ON) Comment by max11gen on 2020-01-09: @gvdhoorn Alright, thanks for your help! Comment by max11gen on 2020-01-09: @pavel92 Great hint, thanks. Comment by gvdhoorn on 2020-01-09: I would use target_compile_features(..) with a meta-feature instead, but it depends on your CMake version whether that is available. Comment by audrius on 2021-05-23: We do not have any issues with linking our C++17 code to noetic. Everything builds and runs just fine, with all tests passing. Comment by allsey87 on 2022-05-25: It depends a bit on the compilers in use. For example, if you use one version of GCC to build everything then you are fine to link C++11, 14, and 17 code together (see https://stackoverflow.com/a/49119902/5164339)
{ "domain": "robotics.stackexchange", "id": 34246, "tags": "c++, ros-melodic" }
Rake task to send users a reminder to post with conditions
Question: I am implementing a feature that reminds users to make a post via email if the user has set daily reminders to true, he has not posted yet today, and the current hour matches when he would like to receive the daily reminder in his time zone. I am going to write a Rake task for it and then schedule it to run every hour. Currently, the rake task looks like this: namespace :mail do desc "Send daily reminder to users" task :daily_reminder => :environment do User.all.each do |user| Time.zone = user.time_zone if user.reminder == true && Time.current.strftime("%H").to_i == user.reminder_time && user.reminded == false UserMailer.test_email(user).deliver_now user.update_attribute(:reminded, true) elsif Time.current.strftime("%H") == user.reminder_time + 1 && user.reminded == true user.update_attribute(:reminded, false) puts "Update to false" end end end end I am resetting the field reminded back to false the hour after they were reminded. Since I am rather new to Ruby and Ruby on Rails, I would appreciate feedback on the setup of this feature a lot. Please critique away on anything you think is broken or needs improvement. Also, right now it is not working and it's giving me back an undefined method + for nil:NilClass which tells me that the user hasn't an ActiveRecord assigned to it? Because the reminder_time column exists and is set (to 16:00). Follow-up question: The if clause looks kind of large and unclean. Is that the proper way to go about this? Shall I make the whole logic more clean by putting some of the conditions in methods? Answer: Never User.all.each in the real world. That loads your entire table into memory and instantiates ActiveRecord models for every record. You want User.find_each do |user|, which uses find_in_batches to load 1000 records at a time. Don't do simple filtering in Ruby, do it in the database. This... if user.reminder == true should be a simple scope that prevents these records from ever being loaded.
Something like this: class User < ActiveRecord::Base scope :with_reminder, -> { where(reminder: true) } end Now, your top-level loop can use the scope: User.with_reminder.find_each do |user| This filters out a huge number of users who would otherwise be loaded from the database needlessly. Your if/else aren't actually correct, since your intent is to load all users with reminder = true. You load all users, and then your if looks for user.reminder == true, but your else doesn't. It affects all users with reminder of true or false, and then sets their reminded to false. ... which tells me that the user hasn't an ActiveRecord assigned to it? Because the reminder_time column exists and is set (to 16:00). Err, no, that's not at all what that tells you. Nothing has "an ActiveRecord" assigned to it; that isn't at all how ActiveRecord works. Every record in the database is wrapped in an ActiveRecord object as you read it from the database, it's impossible for some users to "have" ActiveRecords and others not to. See point 3. Your else currently checks all records, including those with reminder == false, and presumably those records don't have a reminder_time set. It's really not clear what you're trying to do with Time.current.strftime("%H") == user.reminder_time + 1. What is "1"? You have no unit, so it means "1 second". You're probably never going to hit that.
{ "domain": "codereview.stackexchange", "id": 18389, "tags": "ruby, datetime, ruby-on-rails, email, scheduled-tasks" }
Are normal modes, standing waves, natural modes, acoustic resonances, forced vibrations and room modes the same thing?
Question: I am studying the normal modes and I am confused because there are several names and I am not sure whether they are the same physical phenomenon or different but related things. I would like to know the definitions of each one and how these concepts are related to the resonances of a room. I read the chapter on standing waves in the 9th edition of Serway and Vuille's book and I know that standing waves are waves created by the interference of two waves in opposite directions and produce nodes and antinodes. This book explains this phenomenon with a string model, but this model is a bit different from a room model and doesn't mention normal modes, so how does this relate to normal modes, room acoustic resonances, room modes, and forced vibrations? Answer: All mechanical systems (and you could consider an acoustical system as a mechanical one; this is justified by the use of electro-acoustico-mechanical analogies) have at least one preferred way of "behaving" when oscillating. This has to do with their normal response when they are excited too, which is of course the topic of forced excitation. These terms are clarified below. Quoting Harris' Shock and Vibration Handbook: Mode of vibration: In a system undergoing vibration, a mode of vibration is a characteristic pattern assumed by the system in which the motion of every particle is simple harmonic with the same frequency. Two or more modes may exist concurrently in a multiple-degree-of-freedom system. This means that, according to the authors, a mode is the pattern of vibration. This is consistent with the fact that for a specific geometry, you will always (in the linear regime) get the same pattern for a specific mode, regardless of the frequency that excites the mode. For example, the modes of a rectangular plate or a parallelepiped room have the same patterns (different for each case of course) no matter what their scale is. Of course scale does affect the frequency that excites the mode.
Based on that and quoting the same book again Natural frequency: Natural frequency is the frequency of free vibration of a system. For a multiple-degree-of-freedom system, the natural frequencies are the frequencies of the normal modes of vibration. and the normal mode of vibration is (again quoting the same reference) Natural mode of vibration: The natural mode of vibration is a mode of vibration assumed by a system when vibrating freely. Key word here is freely. This means (as mentioned in the beginning) that the natural modes (excited at the natural frequencies) are the ways (at the frequencies) that the system "likes" (or has the "tendency") to vibrate. Since acoustics, like vibrations, can be described by the "unified framework" of oscillations, the same principles apply there too. Now, according to the same authors, there is a specific definition for the normal modes and this is Normal mode of vibration: A normal mode of vibration is a mode of vibration that is uncoupled from (i.e., can exist independently of) other modes of vibration of a system. When vibration of the system is defined as an eigenvalue problem, the normal modes are the eigenvectors and the normal mode frequencies are the eigenvalues. The term classical normal mode is sometimes applied to the normal modes of a vibrating system characterized by vibration of each element of the system at the same frequency and phase. In general, classical normal modes exist only in systems having no damping or having particular types of damping. Now, coming to resonances. As I have already done multiple times in this answer I will, once more, quote the same old reference for a definition of resonance. This is Resonance: Resonance of a system in forced vibration exists when any change, however small, in the frequency of excitation causes a decrease in the response of the system. For the sake of completeness I will also quote a term that is not used so often.
This is antiresonance and according to the same source it is Antiresonance: For a system in forced oscillation, antiresonance exists at a point when any change, however small, in the frequency of excitation causes an increase in the response at this point. As you can see, when we refer to resonances we are already talking about forced excitation of a system. As you may already know, when we are forcing oscillations on a system we can "arbitrarily" pick the frequency of excitation. The frequency at which a resonance exists/happens is called resonance frequency (or frequency of resonance). The same definition from the source is Resonance frequency: Resonance frequency is a frequency at which resonance exists. Now, coming to the part we didn't touch so far, the standing waves. According to the reference, a standing wave is Standing wave: A standing wave is a periodic wave having a fixed distribution in space which is the result of interference of progressive waves of the same frequency and kind. Such waves are characterized by the existence of nodes or partial nodes and antinodes that are fixed in space. Of course you may already know that, but on the other hand you should consider how it would be possible for a system of infinite extent to have natural modes (free vibrations) or exhibit resonances (forced vibrations). I don't think you could have such a thing (please correct me if I am wrong). This means that in a sense you can think of the standing waves as the natural/normal modes of an acoustical system. These two "things" (modes and standing waves) seem to have similar characteristics (according to the definitions presented above). Both exhibit fixed distributions in the spatial variables, which makes them, at least for practical purposes, similar if not identical. Normal Modes To explain the normal modes (answering your comment), you first of all have to consider that they are modes.
As stated above, a mode is a specific spatial distribution pattern (in the room acoustics case, most often of pressure or displacement, but it is not restricted to only those quantities). Now, to make it clearer, modes are excited at specific frequencies, no matter if we are talking about free or forced oscillations. The only difference being that when a system is freely vibrating it does so only at the normal frequencies (in the steady-state), whereas in forced oscillations it vibrates at the frequency of the external force. Nevertheless, when the frequency of the external forcing function coincides with one of the natural frequencies, you achieve resonance and you can observe the mode. If those modes (regardless of whether they are excited in a freely or forced vibrating system) are (linearly) independent, then we are talking about normal modes. As already mentioned in the answer, the normal modes are the eigenvectors, and the frequencies at which they occur are the eigenvalues, when an oscillating system is stated/modeled as an eigenvalue problem.
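To make the "eigenvalue problem" language in the quoted definition concrete, here is a minimal sketch (not from the original answer; the spring and mass values are arbitrary illustration choices): two equal masses coupled by three identical springs between fixed walls. The squared normal-mode frequencies are the eigenvalues of the stiffness matrix divided by the mass, and the normal modes are its eigenvectors.

```python
import math

# Two equal masses m joined by three identical springs k:
#   wall -k- m -k- m -k- wall
# Newton's law gives x'' = -(K/m) x with K/m = [[2k/m, -k/m], [-k/m, 2k/m]].
# The eigenvectors of K/m are the normal modes; the eigenvalues are the
# squared normal-mode (angular) frequencies.
k, m = 4.0, 1.0                  # illustrative values
a, b = 2 * k / m, -k / m         # K/m = [[a, b], [b, a]]

# A symmetric 2x2 matrix [[a, b], [b, a]] has eigenvalues a + b and a - b,
# with eigenvectors (1, 1) and (1, -1).
w1 = math.sqrt(a + b)            # in-phase mode: both masses move together
w2 = math.sqrt(a - b)            # out-of-phase mode: masses move oppositely
print(w1, w2)                    # 2.0 and about 3.464
```

The in-phase mode never stretches the middle spring, which is why its frequency is lower; once excited, each mode oscillates at its own frequency independently of the other, which is exactly the "uncoupled" property in the quoted definition of a normal mode.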
{ "domain": "physics.stackexchange", "id": 85733, "tags": "waves, acoustics" }
Yukawa interaction in QM (0+1D field theory)
Question: This is a question about considering a simple ordinary quantum mechanics system from a quantum field theory perspective. Out of necessity the setup describing the problem is fairly long, but the punchline is that, from a QFT perspective, there are certain diagrams that don't seem to be reflected in the simple ordinary QM picture. Let's say we have the Hamiltonian $$H=\frac{1}{2}\left(p^2+\omega^2 q^2\right)+m c^\dagger c + \lambda q c^\dagger c$$ where $c^\dagger, c$ are fermionic creation and annihilation operators for a single state (i.e. they describe a simple two level system). In the 'bosonic' sector of the Hilbert space, where $c^\dagger c=0$, we have a harmonic oscillator Hamiltonian $$H_B=\frac{1}{2}\left(p^2+\omega^2 q^2\right)$$ In the fermionic sector, $c^\dagger c=1$, we have a shifted harmonic oscillator Hamiltonian $$H_F=\frac{1}{2}\left(p^2+\omega^2 \left(q+\frac{\lambda}{\omega^2}\right)^2\right)+m-\frac{\lambda^2}{2\omega^2}$$ If we now consider this from a path integral perspective we can consider the partition function $$Z(\beta)=\int\mathcal{D}{q}\mathcal{D}\bar{\psi}\mathcal{D}{\psi}\exp\left[-\int_0^\beta d\tau \frac{1}{2}\left(\dot{q}^2+\omega^2q^2\right)+\bar{\psi}\left(\partial_\tau+m\right)\psi+\lambda q\bar{\psi}\psi\right]$$ and find the Euclidean time ordered propagator $\text{Tr}\left(q(\tau)q(0)e^{-\beta H}\right)/Z$ by doing ordinary Feynman diagram perturbation theory like in the following figure. Calculating the correction to the fermion propagator in diagram A we can find that $m$ is corrected to $m-\frac{\lambda^2}{2\omega^2}$ like we expect. The overall amplitude of the propagator also agrees with the amplitude between two harmonic oscillator ground states which are displaced with respect to each other by $\lambda/\omega^2$. So I'm doing something right here at least.
If we calculate the correction to bosonic fields at zero temperature as in B, we find that the corrections vanish because all loops containing only fermions vanish since in the fermion propagator $(ik+m)^{-1}$ all the poles are in the same half of the complex plane. This is consistent with the idea that $H_B$ is the same thing as the original Hamiltonian with $\lambda=0$. The tadpole in C which represents the expectation value of $q$ is a little tricky since it is logarithmically divergent. But this is presumably just due to the operator ordering ambiguity for $\bar\psi \psi$ taken at the same time. If we go to real space by inserting $e^{ik\tau}$ and taking the limit as $\tau\rightarrow 0$ we get either $0$ or $-\lambda/\omega^2$ depending on the sign of $\tau$, which again is what we expect from the ordinary QM picture and time ordering in path integrals. Now here comes the problem. Diagram B vanishes at zero temperature. So to see the correction to bosonic fields we need turn on a temperature so the bosonic fields can 'see' the excited state with $c^\dagger c = 1$. From a QFT perspective this is accomplished by making the fields periodic or antiperiodic over a Euclidean time interval $\beta$ and evaluating the sum over Matsubara frequencies. Diagram C works again and the sum gives the correct temperature factor (just considering the partition function) $$\langle q\rangle_\beta = -\frac{\lambda}{\omega^2}\frac{e^{-\beta m}}{1+e^{-\beta m}}$$ and if I included higher order corrections to the tadpole it would shift $m$ to the corrected value $m_\lambda\equiv m-\frac{\lambda^2}{2\omega^2}$ But corrections to the bosonic propagator don't seem to work. From the QM perspective when we work with harmonic oscillator states in the fermionic sector the only difference is the overall shift in energy by $m_\lambda$ and the shift of the operator $q$ to $q'= q+\frac{\lambda}{\omega^2}$. 
So we would expect the correction to the propagator to be $$\langle q(\tau)q(0) \rangle_{\beta,\lambda}=\langle q(\tau)q(0) \rangle_{\beta,\lambda=0}+\left(\frac{\lambda}{\omega^2}\right)^2\frac{e^{-\beta m_\lambda}}{1+e^{-\beta m_\lambda}}$$ But diagram B will produce something that depends on the external momentum and thus a $\tau$ dependent correction to the propagator in real space. The disconnected diagram D seems like it might give the right answer since the external momentum vanishes, but it squares the temperature factor. What is the correct physical interpretation of diagram B and D and all their multiloop corrections? Answer: To restate the problem again, we expect all higher order corrections to the $q$ propagator to just result in a constant $$\delta\langle q(\tau)q(0)\rangle=+\left(\frac{\lambda}{\omega^2}\right)^2\frac{e^{-\beta m_\lambda}}{1+e^{-\beta m_\lambda}}.$$ Diagrams B and D should represent the lowest $\mathcal{O}(\lambda^2)$ contribution to this constant, which effectively means just using $m$ instead of the corrected $m_\lambda$. The problem is that Diagram B looks like it should depend on external momentum, not just be a constant, and Diagram D looks at first glance like it could give the right answer but the factor involving exponentials is wrong. So let's calculate diagram B. Looking at the form of the partition function in the question, the bare propagators for $q$ and $\psi$ respectively are $$\frac{1}{p^2+\omega^2},\quad \frac{-i}{p+im},$$ and so diagram B evaluates to $$\left(\frac{\lambda}{p^2+\omega^2}\right)^2\frac{1}{\beta}\sum_{n}\frac{1}{\left(q_n+im\right)\left(q_n + p+im\right)}.$$ Here I've already written it as a sum over the Matsubara frequencies of the fermion loop $q_n=(2n+1)\pi/\beta$.
The sum is easy enough to calculate with the Matsubara trick, $$\left(\frac{\lambda}{p^2+\omega^2}\right)^2\frac{i}{p}\left(\frac{1}{1+e^{\beta m}}-\frac{1}{1+e^{-i\beta p}e^{\beta m}}\right).$$ As user196574 pointed out in the comments, we need to also understand $p$ as a bosonic Matsubara frequency $p_n = 2\pi n/\beta$, so in fact this sum vanishes for $p\neq 0$. For $p=0$ it is indeterminate, but we can find the limit as $p\rightarrow 0$, $$\left(\frac{\lambda}{\omega^2}\right)^2\frac{\beta e^{\beta m}}{\left(1+e^{\beta m}\right)^2}.$$ So Diagram B is like a delta function in momentum space only having a non-zero value at $p=0$. When we transform back to position space we pick up a factor of $1/\beta$ cancelling the $\beta$ in the numerator. Finally, Diagram D is just the square of Diagram C which was calculated in the question. So summing B and D $$\left(\frac{\lambda}{\omega^2}\right)^2\frac{e^{\beta m}}{\left(1+e^{\beta m}\right)^2}+\left(-\frac{\lambda}{\omega^2}\frac{e^{-\beta m}}{1+e^{-\beta m}}\right)^2=\left(\frac{\lambda}{\omega^2}\right)^2\frac{e^{-\beta m}}{1+e^{-\beta m}},$$ and that's exactly what we were trying to show!
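As a numerical sanity check (not part of the original answer), the $p=0$ value of the Matsubara sum can be verified directly: pairing each fermionic frequency $q_n=(2n+1)\pi/\beta$ with $q_{-n-1}=-q_n$ makes the summand real and the series rapidly convergent.

```python
import math

def matsubara_sum_p0(beta, m, nmax=100000):
    """(1/beta) * sum over fermionic q_n = (2n+1)*pi/beta of 1/(q_n + i*m)^2,
    evaluated by pairing n with -n-1 so the imaginary parts cancel."""
    s = 0.0
    for n in range(nmax):
        q = (2 * n + 1) * math.pi / beta
        # 1/(q + i m)^2 + 1/(-q + i m)^2 = 2 (q^2 - m^2) / (q^2 + m^2)^2
        s += 2 * (q * q - m * m) / (q * q + m * m) ** 2
    return s / beta

beta, m = 1.0, 1.0
numeric = matsubara_sum_p0(beta, m)
analytic = beta * math.exp(beta * m) / (1 + math.exp(beta * m)) ** 2
print(numeric, analytic)    # both approximately 0.1966
```

The brute-force sum converges to the claimed $p\to 0$ limit $\beta e^{\beta m}/(1+e^{\beta m})^2$, which is the Fermi-function derivative one expects from $(1/\beta)\sum_n 1/(iq_n - E)^2 = \partial n_F/\partial E$.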
{ "domain": "physics.stackexchange", "id": 81888, "tags": "quantum-mechanics, quantum-field-theory, path-integral, thermal-field-theory" }
Can exhaust gases be diverted to other cylinders during engine operation?
Question: Here's a description of a new combustion engine improvement by Mazda, called Skyactiv-G. They claim that in a "generic" engine... when the exhaust manifold is short, the high pressure wave from the gas emerging immediately after cylinder No. 3's exhaust valves open, for example, arrives at cylinder No.1 as it finishes its exhaust stroke and enters its intake stroke. As a result, exhaust gas which has just moved out of the cylinder is forced back inside the combustion chamber, increasing the amount of hot residual gas I always thought that the exhaust manifold and intake manifold are separate, so exhaust gases just can't possibly enter other cylinders' combustion chambers as described in the above quoted text. Can exhaust gases be diverted into other cylinders as claimed in that text? Answer: The exhaust and the intake are separate. What they are claiming is that exhaust gases that have just left a piston may be forced back into it before the valve closes, due to a high pressure wave originating at a different piston's exhaust. I am not sure how innovative that is though, since some 20 years ago they were already teaching mechanical engineers like myself that an intelligently designed exhaust manifold can take advantage of reflections of compression and rarefaction waves to improve cylinder emptying.
{ "domain": "physics.stackexchange", "id": 5343, "tags": "heat-engine" }
Rosbag Python API: why do messages from a rosbag have mangled __class__?
Question: I find that messages loaded from a rosbag via the Python API have a weird class attribute: import rosbag from sensor_msgs.msg import PointCloud2 # Round-trip a PointCloud2 message to a rosbag and back bag = rosbag.Bag('test.bag', 'w') try: scan = PointCloud2() print("__class__ should be: \n{}".format(scan.__class__)) bag.write('scan', scan) finally: bag.close() bag = rosbag.Bag('test.bag') for topic, msg, t in bag.read_messages(topics=['scan']): print("But it comes out as: \n{}".format(msg.__class__)) bag.close() The output: __class__ should be: <class 'sensor_msgs.msg._PointCloud2.PointCloud2'> But it comes out as: <class 'tmpf1u1_e._sensor_msgs__PointCloud2'> Is this a bug, or a feature? ROS Indigo, Ubuntu 14, Python 2.7 Originally posted by Rick Armstrong on ROS Answers with karma: 567 on 2018-02-25 Post score: 1 Answer: This is a feature, not a bug. Instead of using or relying on the local message definitions, which may not match the message definitions stored in the bag file (or may not exist at all!), the python API for rosbag generates class definitions on the fly for each message type stored in the bag file. Unfortunately, these auto-generated classes don't have the same names as the original classes. Originally posted by ahendrix with karma: 47576 on 2018-02-26 This answer was ACCEPTED on the original site Post score: 2
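A practical consequence worth noting: isinstance() checks against the locally imported message class will fail for messages read from a bag, but attribute access (duck typing) still works, so most code runs unchanged; in ROS 1, generated message classes also carry a `_type` string (e.g. 'sensor_msgs/PointCloud2') that can be compared instead of the class itself. A minimal stand-in illustration with plain Python classes (no ROS required; the class names merely mimic the ones in the question):

```python
# Stand-ins that mimic the locally imported class and the class that
# rosbag generates on the fly; they share fields but are distinct types.
class PointCloud2:                       # the "local" message class
    _type = 'sensor_msgs/PointCloud2'
    def __init__(self):
        self.height, self.width = 0, 0

class _sensor_msgs__PointCloud2:         # the "auto-generated" class
    _type = 'sensor_msgs/PointCloud2'
    def __init__(self):
        self.height, self.width = 0, 0

msg = _sensor_msgs__PointCloud2()
print(isinstance(msg, PointCloud2))      # False: different class objects
print(msg.width)                         # 0: the fields are still there
print(msg._type == PointCloud2._type)    # True: comparing _type works
```

So when filtering bag messages, compare topic names or the `_type` string rather than using isinstance() against your imported message classes.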
{ "domain": "robotics.stackexchange", "id": 30150, "tags": "python, rosbag, ros-indigo" }
Electric fields (A-level) question
Question: A proton is accelerated from rest by a uniform electric field of strength 2x10^5 V/m. Calculate the time it takes to travel 0.05 m. I calculated the force to be 3.2x10^-14 N. I then found the work done, wk = force x distance = 1.6x10^-15. Next, I set this equal to 1/2 mv^2. Hence v = 1.91...x10^12. Then I used t = d/s, which gave me 3.62x10^-8. But the answer is 7.3x10^-8. Why can't I do the question this way instead of using f = ma? Why is this wrong? Answer: You are using equations for a particle moving with constant velocity, but this is an accelerated particle, so you need to use an equation like $s = ut + \frac{1}{2}at^{2}$ and solve a quadratic to get the time. You can't simply use the basic ones because the velocity is changing with time. It will gradually increase to its maximum value, and you need to keep that in mind. You are just using the maximum velocity, which is why your time comes out too small. You just need to revisit your kinematics notes.
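A worked check of the expected answer (not part of the original; the standard proton values are assumed): starting from rest, $s = \frac{1}{2}at^2$ gives $t = \sqrt{2s/a}$ with $a = qE/m$.

```python
import math

q = 1.60e-19   # proton charge, C
m = 1.67e-27   # proton mass, kg
E = 2.0e5      # field strength, V/m
s = 0.05       # distance travelled, m

a = q * E / m              # F = qE = ma, so a = qE/m
t = math.sqrt(2 * s / a)   # s = (1/2) a t^2, starting from rest (u = 0)
print(t)                   # about 7.2e-8 s, i.e. the quoted 7.3e-8 s to rounding
```

This also exposes the factor-of-two discrepancy in the question: using the final speed as if it were constant is wrong because the average speed over the trip is only half the final speed, so the correct time is exactly twice the 3.6x10^-8 s obtained that way.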
{ "domain": "physics.stackexchange", "id": 73080, "tags": "homework-and-exercises, electrostatics, electric-fields" }
Create Period column based on a date column where the first month is 1, second 2, etc
Question: I have a dataset with many projects' monthly expenditures (cost curve), like this one: Project Date Expenditure(USD) Project A 12-2020 500 Project A 01-2021 1257 Project A 02-2021 125889 Project A 03-2021 102447 Project A 04-2021 1248 Project A 05-2021 1222 Project A 06-2021 856 Project B 01-2021 5589 Project B 02-2021 52874 Project B 03-2021 5698745 Project B 04-2021 2031487 Project B 05-2021 2359874 Project B 06-2021 25413 Project B 07-2021 2014 Project B 08-2021 2569 Using Python, I want to create a "Period" column that replaces the month value with an integer that represents the count of months of the project, like this: Where the line is the first month of Project A (12-2020), the code should put 1 in the "Period" column, the second month (01-2021) is 2, the third (02-2021) is 3, etc., because I need to focus on the number of months that the projects of my dataframe had an expenditure (month 1, month 2, month 3...) Project Date Period Expenditure(USD) Project A 12-2020 1 500 Project A 01-2021 2 1257 Project A 02-2021 3 125889 Project A 03-2021 4 102447 Project A 04-2021 5 1248 Project A 05-2021 6 1222 Project A 06-2021 7 856 Project B 01-2021 1 5589 Project B 02-2021 2 52874 Project B 03-2021 3 5698745 Project B 04-2021 4 2031487 Project B 05-2021 5 2359874 Project B 06-2021 6 25413 Project B 07-2021 7 2014 Project B 08-2021 8 2569 Answer: The easiest thing is to calculate, for each row: The start date of the corresponding project. The number of months between the current date and that start date.
Below is a sample code that does that for you: import pandas as pd import numpy as np df = pd.DataFrame( [ ["Project A", "12-2020", 500], ["Project A", "01-2021", 1257], ["Project A", "02-2021", 125889], ["Project A", "03-2021", 102447], ["Project A", "04-2021", 1248], ["Project A", "05-2021", 1222], ["Project A", "06-2021", 856], ["Project B", "01-2021", 5589], ["Project B", "02-2021", 52874], ["Project B", "03-2021", 5698745], ["Project B", "04-2021", 2031487], ["Project B", "05-2021", 2359874], ["Project B", "06-2021", 25413], ["Project B", "07-2021", 2014], ["Project B", "08-2021", 2569], ], columns=["Project", "Date", "Expenditure(USD)"], ) df["Date"] = pd.to_datetime(df["Date"], format="%m-%Y") # Convert date column type # get the start date of the project # i.e find the lowest date of rows that have the same project as the current row df["Project Start Date"] = df.apply(lambda row: min(df[df["Project"] == row["Project"]]["Date"]), axis=1) # calculate the period # i.e. the number of months of the current date since the start of the project + 1 df["Period"] = ((df["Date"] - df["Project Start Date"]) / np.timedelta64(1, "M") + 1).round().astype(int) print(df) It gives you the following: Project Date Expenditure(USD) Project Start Date Period 0 Project A 2020-12-01 500 2020-12-01 1 1 Project A 2021-01-01 1257 2020-12-01 2 2 Project A 2021-02-01 125889 2020-12-01 3 3 Project A 2021-03-01 102447 2020-12-01 4 4 Project A 2021-04-01 1248 2020-12-01 5 5 Project A 2021-05-01 1222 2020-12-01 6 6 Project A 2021-06-01 856 2020-12-01 7 7 Project B 2021-01-01 5589 2021-01-01 1 8 Project B 2021-02-01 52874 2021-01-01 2 9 Project B 2021-03-01 5698745 2021-01-01 3 10 Project B 2021-04-01 2031487 2021-01-01 4 11 Project B 2021-05-01 2359874 2021-01-01 5 12 Project B 2021-06-01 25413 2021-01-01 6 13 Project B 2021-07-01 2014 2021-01-01 7 14 Project B 2021-08-01 2569 2021-01-01 8
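If each project's rows are one per month with no gaps (as in the sample data), a lighter alternative sketch (not part of the original answer) avoids the row-wise apply entirely: sort within each project and number the rows with groupby().cumcount().

```python
import pandas as pd

df = pd.DataFrame(
    [
        ["Project A", "12-2020", 500],
        ["Project A", "01-2021", 1257],
        ["Project B", "01-2021", 5589],
        ["Project B", "02-2021", 52874],
    ],
    columns=["Project", "Date", "Expenditure(USD)"],
)
df["Date"] = pd.to_datetime(df["Date"], format="%m-%Y")

# Sort so the months are in order within each project, then number
# the rows 1, 2, 3, ... inside each group.
df = df.sort_values(["Project", "Date"]).reset_index(drop=True)
df["Period"] = df.groupby("Project").cumcount() + 1
print(df["Period"].tolist())    # [1, 2, 1, 2]
```

Note that this counts rows rather than calendar months, so a project with a missing month would come out differently from the month-difference approach above; pick whichever semantics you actually want.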
{ "domain": "datascience.stackexchange", "id": 10248, "tags": "dataset, data-science-model, python-3.x, scipy, project-planning" }
In which direction does mud fly off a moving bike's tire & why?
Question: If a bike moves through a muddy area, mud gets on its tires. Then the mud flies off from the tires. Which forces are acting on it? In which direction does it fly off? On my physics test, I wrote that it flies off along the tangential velocity at that point, but it was marked incorrect. Answer: It is because there aren't any forces acting on the mud keeping it turning with the tire that it flies off. At whatever point the mud comes off, it will travel tangent to the tire at first and then follow a parabola due to Earth's gravity. It is most likely the looser mud will come off first, and at that point the tangential direction of the tire points straight at your eyes! If the mud were equally likely to come loose anywhere around the tire, it would be flung forwards, and either upwards or downwards with equal probability. The tangential velocity of the tire as a function of the azimuth angle (position angle) $\theta$ is $$[v_x,v_y] = [ v (1-\cos \theta), -v \sin\theta]$$ where $v$ is the bike speed and $\theta=0$ is at the contact point moving CCW for positive angle. Interestingly the acceleration is always pointing towards the center of the tire with magnitude $v^2/r$. So the force of adhesion needed to keep the mud on the tire is constant all along the tire unless the bike is accelerating also. If the bike is accelerating with $\dot{v}$, then define the dimensionless acceleration as $\alpha = \frac{\dot{v}}{ v^2/r}$ and you can find the peak acceleration of the tire surface occurs at $$\theta = \frac{3\pi}{2}-\arctan(\alpha)$$ with peak acceleration magnitude $$a_{\text{peak}} = \frac{v^2}{r} \sqrt{\left(1+2 \alpha^2 + 2 \alpha \sqrt{1+\alpha^2}\right)}$$ This corresponds to an area near where the tire goes downwards before it contacts. But if the bike is decelerating the peak acceleration is at $\theta = \frac{\pi}{2}-\arctan(\alpha)$ which is when the tire just leaves the ground. So you are most likely to get sprayed with mud when braking hard.
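The closed-form peak can be cross-checked numerically (a sketch, not part of the original answer; the bike speed, wheel radius, and acceleration values are arbitrary). Differentiating the tangential velocity above once more gives the surface-point acceleration magnitude $|a| = \frac{v^2}{r}\sqrt{1+2\alpha^2(1-\cos\theta)-2\alpha\sin\theta}$ in the same angle convention, and a brute-force scan over $\theta$ reproduces the quoted maximum and its location:

```python
import math

def accel_mag(theta, v, r, vdot):
    """Acceleration magnitude of a tire-surface point at azimuth theta
    (theta = 0 at the contact point, same convention as above)."""
    c = v * v / r                # centripetal scale v^2/r
    alpha = vdot / c             # dimensionless acceleration
    f = 1 + 2 * alpha**2 * (1 - math.cos(theta)) - 2 * alpha * math.sin(theta)
    return c * math.sqrt(f)

v, r, vdot = 5.0, 0.35, 2.0      # illustrative bike speed, wheel radius, accel
c = v * v / r
alpha = vdot / c

theta_peak = 3 * math.pi / 2 - math.atan(alpha)           # claimed location
a_peak = c * math.sqrt(1 + 2 * alpha**2 + 2 * alpha * math.sqrt(1 + alpha**2))

# Brute-force maximum over a fine grid of angles.
a_num = max(accel_mag(2 * math.pi * k / 100000, v, r, vdot)
            for k in range(100000))
print(a_peak, a_num)             # the two maxima agree
```

The grid maximum matches the closed form to numerical precision, and the maximizing angle sits just before the contact point, consistent with the "tire going downwards before it contacts" picture.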
{ "domain": "physics.stackexchange", "id": 94357, "tags": "classical-mechanics, forces, rotational-dynamics, angular-velocity" }
How to filter a signal using a bandpass filter consisting of two moving average filters?
Question: I want to filter a PPG signal on a microcontroller. I have limited memory and was searching for low-computation methods. I found the work of Kazuhiro Taniguchi, Earable POCER: Development of a Point-of-Care Ear Sensor for Respiratory Rate Measurement, where they use moving average filters (m3, m30 and m80) in order to filter their data and obtain frequencies between certain values; specifically, they create a passband between 189 mHz and 504 mHz. Their idea is that after the initial m3 moving average (over every 3 values) they create the m30 and m80 and take the difference between them (r = m30 - m80). This is their way to obtain the values corresponding to the passband of interest, even though, as they specify, "The moving average does not have an ideal lowpass filter function, and so some frequency elements other than those in the passband may pass, even if they are attenuated." To obtain the window size for their moving average filters they applied the following equation: I couldn't quite understand where this equation (5) was coming from, but as soon as you replace the value $n$ with 30 or 80 you get the values of the passband cut-off frequencies 189 mHz and 504 mHz. When I asked for specifications on how they did it, they sent me to a Japanese forum that unfortunately I couldn't translate, but there were two links to stackexchange, on cutoff frequency and filter design. I tried to adapt all this new stuff to my model but couldn't get the desired passband (0.1 Hz - 0.8 Hz) using equation (5) from the image above with my parameters (sampling frequency of 50 Hz, cut-off frequencies, etc.). I don't understand what the problem is, and my question is the following: What are the moving average filter windows that I have to use, and in which order, to be able to filter a signal with a sampling frequency of 50 Hz in order to isolate the frequencies in the passband 0.1 Hz - 0.8 Hz? 
Answer: The Japanese link actually implies how to derive Eq.5, and ironically they also refer to an existing dsp.se answer at the bottom. Derivation of Eq.5 is as follows: Consider a moving average filter of length $N$, with the impulse response $h[n]$: $$h[n] = \begin{cases} ~~~1/N~~~,~~~n=0,1,...,N-1 \\ ~~~0~~~,~~~ \text{otherwise} \end{cases} \tag{1}$$ The magnitude of its frequency response $~H(\omega)~$ (DTFT of $h[n]$) can be shown to be: $$ |H(\omega)| = \frac{1}{N} \left|\frac{ \sin(\frac{\omega}{2}N)}{\sin(\frac{\omega}{2})}\right| \tag{2}$$ Now, after replacing the $\sin()$ functions with their Taylor expansions in powers of $\omega$, it can be shown by polynomial long division that Eq.2 is also given by: $$|H(\omega)| = 1 + \frac{1}{24}(1-N^2) \omega^2 + H.O.T. \tag{3}$$ where H.O.T. refers to higher order terms in powers of $\omega$. An approximation for small values of $\omega$ is obtained by neglecting H.O.T.: $$|H(\omega)| \approx 1 + \frac{1}{24}(1-N^2) \omega^2 \tag{4}$$ Using this approximate frequency response magnitude, we can obtain an approximate cutoff frequency $\omega_c$ at which the magnitude $|H(\omega_c)|$ falls to $1/\sqrt{2}$ of its value at $\omega = 0$, which is $H(0) = 1$: $$ |H(\omega_c)| = \frac{1}{\sqrt{2}} = 1 + \frac{1}{24}(1-N^2) \omega_c^2 \tag{5}$$ Replace the discrete-time frequency $\omega_c$ by $\omega_c = 2\pi f_c /f_s$, where $f_c$ is the analog cutoff frequency in Hz, and $f_s$ is the sampling frequency in Hz. Finally, solving the resulting algebraic expression for $f_c$ yields the formula that you refer to as Eq.5 in the document: $$ |H(\omega_c)| = \frac{1}{\sqrt{2}} = 1 + \frac{1}{24}(1-N^2) \left( 2\pi \frac{f_c}{f_s} \right)^2 \tag{6}\\\\$$ $$ f_c = \frac{1}{\pi} \frac{\sqrt{6 - 3\sqrt{2}}}{\sqrt{N^2-1}} ~f_s ~~ =~~ \frac{0.422}{\sqrt{N^2-1}} ~f_s \tag{7} $$ Eq.7 above is the formula that provides an approximate cutoff frequency calculation for the moving average filter of length $N$ (order $N-1$). 
In your posted link's Eq.5 there's a slight variation: the scale factor is $0.422$ instead of $0.422$'s derived value; they use $0.442$, probably as a correction toward the actual cutoff rather than the approximated one. Note that in the derivation we've used an approximation of the DTFT magnitude which was valid as long as $\omega$ was small compared to $\pi$. This means that the approximation will be satisfactory if $\omega_c$ is close to $0$, or in other words, if $f_c$ is small compared to $f_s$. And indeed this will be the case for high order moving average filters, and the approximation gets better as $N$ increases. Using two such moving average filters with approximate cutoff frequencies $f_{c1}$ and $f_{c2}$ to create a bandpass filter will not be very satisfactory unless your out-of-band signal energy is insignificant after 20 to 30 dB of attenuation.
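The quality of the Eq.7 approximation is easy to check numerically: evaluate $|H(\omega)|$ from Eq.2 directly, bisect for the -3 dB point on the main lobe, and compare against $0.422\,f_s/\sqrt{N^2-1}$. This check is mine, not from the original answer:

```python
import math

def ma_gain(f, n, fs):
    # |H| of an n-point moving average at frequency f (Hz), per Eq. 2
    w = 2 * math.pi * f / fs
    return abs(math.sin(w * n / 2) / (n * math.sin(w / 2)))

def cutoff_numeric(n, fs):
    # |H| falls monotonically on the main lobe, so bisect for |H| = 1/sqrt(2);
    # the gain at fs/(2n) is already ~0.64 < 0.707, so the bracket is valid
    lo, hi = 1e-9 * fs, fs / (2 * n)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ma_gain(mid, n, fs) > 1 / math.sqrt(2):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fs = 50.0
# (numerically bisected cutoff, Eq. 7 approximation) for a few lengths
results = {n: (cutoff_numeric(n, fs), 0.422 * fs / math.sqrt(n * n - 1))
           for n in (10, 30, 80)}
```

The two values agree to within a few percent, and the residual discrepancy (roughly the ratio 0.443/0.422) is consistent with the corrected scale factor seen in the posted link.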
{ "domain": "dsp.stackexchange", "id": 9430, "tags": "filters, bandpass, moving-average" }
Calculate general relativity-related tensors/arrays using metric tensor as input
Question: Please be gentle with me -- I just began learning to code a few weeks ago as a hobby in order to support my other hobby of learning general relativity, so this is the first bit of code I've ever written. I would love to keep getting better at coding, though, so any feedback on ways to improve my code would be much appreciated. This code takes as input the number of dimensions of a manifold, the coordinate labels being used, and the components of a metric, and outputs the non-zero components of the metric (exactly what was input, it just looks prettier), and also of the inverse metric, derivatives of the metric, the Christoffel symbols, the derivatives of the Christoffel symbols, the Riemann curvature tensor, the Ricci curvature tensor, the Ricci scalar, and the Einstein tensor (with 2 covariant indices, but also with 1 contravariant and 1 covariant). For those of you who run the code, here are some useful tips on the user inputs (I will also include an example at the bottom): When inputting metric components, you can use '^' instead of '**' for exponents, and when multiplying a number by a symbol or something in parentheses, you don't need to include a '*'. Feel free to include undefined functions - just make sure you include their arguments if you want them to be differentiated correctly (e.g. if you want a function that will have a non-zero derivative with respect to x, then type 'f(x)' in your expression instead of just 'f'). Also feel free to use Greek letters (spelled out in English) when inputting coordinate labels and/or functions in your metric components. 
Here is the code:

from sympy import *
from dataclasses import dataclass
from IPython.display import display as Idisplay
from IPython.display import Math

greek = ['alpha', 'beta', 'gamma', 'Gamma', 'delta', 'Delta', 'epsilon', 'varepsilon',
         'zeta', 'eta', 'theta', 'vartheta', 'Theta', 'iota', 'kappa', 'lambda', 'Lambda',
         'mu', 'nu', 'xi', 'Xi', 'pi', 'Pi', 'rho', 'varrho', 'sigma', 'Sigma', 'tau',
         'upsilon', 'Upsilon', 'phi', 'varphi', 'Phi', 'chi', 'psi', 'Psi', 'omega', 'Omega']

n = int(input('Enter the number of dimensions:\n'))
coords = []
for i in range(n):
    coords.append(Symbol(str(input('Enter coordinate label %d:\n' % i))))

@dataclass(frozen=False, order=True)
class Tensor:
    name: str
    symbol: str
    key: str
    components: list

    def rank(self):
        return self.key.count('*')

    def tensor_zeros(self, t=0):
        for i in range(self.rank()):
            t = [t, ] * n
        return MutableDenseNDimArray(t)

    def coord_id(self, o):
        a = []
        for i in range(self.rank()):
            c = int(o / (n**(self.rank() - i - 1)))
            a.append(str(coords[c]))
            if any(letter in a[i] for letter in greek) is True:
                a[i] = '\\' + a[i] + ' '
            o -= c * (n**(self.rank() - i - 1))
        x = self.key
        w = 0
        for i in x:
            if i == '*':
                x = x.replace('*', a[w], 1)
                w += 1
        return self.symbol + x

    def print_tensor(self):
        for o in range(len(self.components)):
            if self.components[o] != 0:
                Idisplay(Math(latex(Eq(Symbol(self.coord_id(o)), self.components[o]))))
        print('\n\n')

def assign(instance, thing):
    instance.components = thing.reshape(len(thing)).tolist()

def fix_input(expr):
    expr = expr.replace('^', '**')
    for i in range(len(expr) - 1):
        if expr[i].isnumeric() and (expr[i + 1].isalpha() or expr[i + 1] == '('):
            expr = expr[:i + 1] + '*' + expr[i + 1:]
    return expr

metric = Tensor('metric tensor', 'g', '_**', [])
metric_inv = Tensor('inverse of metric tensor', 'g', '__**', [])
metric_d = Tensor('partial derivative of metric tensor', 'g', '_**,*', [])
Christoffel = Tensor('Christoffel symbol - 2nd kind', 'Gamma', '__*_**', [])
Christoffel_d = Tensor('partial derivative of Christoffel symbol', 'Gamma', '__*_**,*', [])
Riemann = Tensor('Riemann curvature tensor', 'R', '__*_***', [])
Ricci = Tensor('Ricci curvature tensor', 'R', '_**', [])
Einstein = Tensor('Einstein tensor', 'G', '_**', [])
Einstein_alt = Tensor('Einstein tensor', 'G', '__*_*', [])

# user inputs metric:
g = eye(n)
while True:
    diag = str(input('Is metric diagonal? y for yes, n for no\n')).lower()
    if diag == 'y':
        for i in range(n):
            g[i, i] = sympify(fix_input(str(input(
                'What is g_[%s%s]?\n' % (str(coords[i]), str(coords[i]))))))
    else:
        for i in range(n):
            for j in range(i, n):
                g[i, j] = sympify(fix_input(str(input(
                    'What is g_[%s%s]?\n' % (str(coords[i]), str(coords[j]))))))
                g[j, i] = g[i, j]
    if g.det() == 0:
        print('\nMetric is singular, try again!\n')
        continue
    else:
        break

# calculate everything:
# inverse metric:
g_inv = MutableDenseNDimArray(g.inv())
assign(metric_inv, g_inv)
g = MutableDenseNDimArray(g)
assign(metric, g)

# first derivatives of metric components:
g_d = metric_d.tensor_zeros()
for i in range(n):
    for j in range(i):
        for d in range(n):
            g_d[i, j, d] = g_d[j, i, d]
    for j in range(i, n):
        for d in range(n):
            g_d[i, j, d] = diff(g[i, j], coords[d])
assign(metric_d, g_d)

# Christoffel symbols for Levi-Civita connection (Gam^i_jk):
Gamma = Christoffel.tensor_zeros()
for i in range(n):
    for j in range(n):
        for k in range(j):
            Gamma[i, j, k] = Gamma[i, k, j]
        for k in range(j, n):
            for l in range(n):
                Gamma[i, j, k] += S(1)/2 * g_inv[i, l] * (
                    -g_d[j, k, l] + g_d[k, l, j] + g_d[l, j, k])
assign(Christoffel, Gamma)

# first derivatives of Christoffel symbols (Gam^i_jk,d):
Gamma_d = Christoffel_d.tensor_zeros()
for i in range(n):
    for j in range(n):
        for k in range(j):
            for d in range(n):
                Gamma_d[i, j, k, d] = Gamma_d[i, k, j, d]
        for k in range(j, n):
            for d in range(n):
                Gamma_d[i, j, k, d] = simplify(diff(Gamma[i, j, k], coords[d]))
assign(Christoffel_d, Gamma_d)

# Riemann curvature tensor (R^i_jkl):
Rie = Riemann.tensor_zeros()
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(k):
                Rie[i, j, k, l] = -Rie[i, j, l, k]
            for l in range(k, n):
                Rie[i, j, k, l] = Gamma_d[i, j, l, k] - Gamma_d[i, j, k, l]
                for h in range(n):
                    Rie[i, j, k, l] += (Gamma[h, j, l] * Gamma[i, h, k]
                                        - Gamma[h, j, k] * Gamma[i, h, l])
                Rie[i, j, k, l] = simplify(Rie[i, j, k, l])
assign(Riemann, Rie)

# Ricci curvature tensor (R_jl):
Ric = simplify(tensorcontraction(Rie, (0, 2)))
assign(Ricci, Ric)

# Ricci curvature scalar:
R = 0
for i in range(n):
    for j in range(n):
        R += g_inv[i, j] * Ric[i, j]
R = simplify(R)

# Einstein tensor (G_ij):
G = Einstein.tensor_zeros()
for i in range(n):
    for j in range(i):
        G[i, j] = G[j, i]
    for j in range(i, n):
        G[i, j] = simplify(Ric[i, j] - S(1)/2 * R * g[i, j])
assign(Einstein, G)

# G^i_j:
G_alt = Einstein_alt.tensor_zeros()
for i in range(n):
    for j in range(n):
        for k in range(n):
            G_alt[i, j] += g_inv[i, k] * G[k, j]
        G_alt[i, j] = simplify(G_alt[i, j])
assign(Einstein_alt, G_alt)

# print it all
print()
metric.print_tensor()
metric_inv.print_tensor()
metric_d.print_tensor()
Christoffel.print_tensor()
Christoffel_d.print_tensor()
Riemann.print_tensor()
Ricci.print_tensor()
if R != 0:
    Idisplay(Math(latex(Eq(Symbol('R'), R))))
    print('\n\n')
Einstein.print_tensor()
Einstein_alt.print_tensor()

EDIT: this code should be executed in Jupyter.

Example input:
number of dimensions: 4
coordinate 0: t
coordinate 1: l
coordinate 2: theta
coordinate 3: phi
metric diagonal?: y
g_tt: -1
g_ll: 1
g_thetatheta: r(l)^2
g_phiphi: r(l)^2sin(theta)^2

Answer: In general, have your classes at the top of the file, and then all the code. Don't have them in between. In Python, if we know a variable won't change, and we just have it as a reference or only read from it, we call it a constant. By convention, we use all-uppercase names for those: greek -> GREEK. Names should be representative of what they contain. Does greek contain Greeks?!?? Maybe GREEK_CHARACTERS or GREEK_SYMBOLS or GREEK_LETTERS would better represent what's inside it. 
What happens if the user enters an invalid coordinate? You should check that the user input is valid. You should use the if __name__ == '__main__' pattern (see more here). IMPORTANT: Divide your code into functions you can reuse. This will avoid code repetition and make your code more modular and easier to understand. This could become one function, def clean_coordinates(coordinates). And the same goes for the rest of the code.

g = eye(n)
while True:
    diag = str(input('Is metric diagonal? y for yes, n for no\n')).lower()
    if diag == 'y':
        for i in range(n):
            g[i, i] = sympify(fix_input(str(input(
                'What is g_[%s%s]?\n' % (str(coords[i]), str(coords[i]))))))
    else:
        for i in range(n):
            for j in range(i, n):
                g[i, j] = sympify(fix_input(str(input(
                    'What is g_[%s%s]?\n' % (str(coords[i]), str(coords[j]))))))
                g[j, i] = g[i, j]
    if g.det() == 0:
        print('\nMetric is singular, try again!\n')
        continue
    else:
        break

Try to avoid mixing logic with input. E.g., in the example above, you have logic (fixing the input, etc.), but also user input (asking for the diagonal). Instead, have logic functions which handle the logic and take as a parameter whatever they need (e.g. def clean_coordinates(coordinates, diag):) and call them with the user input. This will make your code more modular, testable, clean, organized, reusable, etc. 200 lines of code is a moderately large file... Maybe you can split your code into multiple files, which would make it easier to read/understand? There surely is a lot more, we are just scratching the surface, but I think this is enough for one CR. If you fix all of this and get back, ping me and we can look into more issues!
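One concrete way to separate the logic from the I/O, as the review suggests, is to inject the input routine into a pure function (the function and parameter names below are illustrative, not from the original code):

```python
def prompt_metric_components(coords, diagonal, ask=input):
    """Collect raw metric-component strings as an n x n nested list.

    `ask` defaults to the builtin input(), but any callable taking a prompt
    and returning a string can be injected, which makes the function
    unit-testable without a terminal.
    """
    n = len(coords)
    g = [['0'] * n for _ in range(n)]
    if diagonal:
        for i in range(n):
            g[i][i] = ask('What is g_[%s%s]?\n' % (coords[i], coords[i]))
    else:
        for i in range(n):
            for j in range(i, n):
                g[i][j] = ask('What is g_[%s%s]?\n' % (coords[i], coords[j]))
                g[j][i] = g[i][j]  # symmetric metric
    return g
```

The caller would then sympify the returned strings; a test can simply pass `ask=lambda prompt: next(answers)` over a fixed sequence of answers.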
{ "domain": "codereview.stackexchange", "id": 42062, "tags": "python, sympy, latex" }
Trivial and Non-trivial topology of band structure
Question: I don't understand the meaning of the expression "trivial topology" or "non-trivial topology" for an electronic band structure. Does anybody have a good explanation? Answer: One of the early triumphs of QM (through e.g. the Kronig-Penney model) was the explanation of the insulating state of matter. Energy bands (and gaps) appear as the result of hybridization of many atomic orbitals, and for a specific filling you can end up with the topmost pair of bands being either entirely filled (valence band) or entirely empty (conduction band). No (small) electric field can perturb them enough to cause motion, and thus you have an insulator. In this trivial insulator, although the bulk is insulating, there is the possibility of, for example, dangling bonds introducing states that lie in an energy gap. These states are localized at the edge; however, they are not robust, and as such not particularly useful. Now if you have a material with a sufficiently strong spin-orbit interaction, for example (not essential for the effect, but a historically important approach), this can cause the energy bands above and below the gap to swap places. This twisting is protected by time-reversal invariance, and although still an insulator, the resulting phase is topologically different from an ordinary insulator. The twisting of the band structure is what the phrase non-trivial topology is referring to; an analogy would be the way a Möbius strip is a twisted version of an ordinary strip. This manifests itself in the fact that when you put the two in contact, the curled-up band structure of the TI must unwind so that the band structure fits the one in the ordinary insulator. This unwinding will have to close the gap near the edge, hence the topologically protected edge states. This is the interesting part of topological insulators from the practical standpoint. 
So whether the band structure is wound up or not is a topological property, and one can measure it with the topological index, also called a Chern number, defined as $$ C=\frac{1}{2\pi}\sum_n\oint Fd\mathbf{k} $$ where the sum is over occupied bands, the integral is over the entire Brillouin zone, and the integrated quantity is the Berry curvature (analogue of the magnetic field in $\mathbf{k}$ space) $F=-i\nabla_{\mathbf{k}}\times\langle u_{n\mathbf{k}}|\nabla_{\mathbf{k}}|u_{n\mathbf{k}}\rangle$, where $u_{n\mathbf{k}}$ are the Bloch eigenvectors. If $C=0$ you have a trivial insulator, and if $C\neq0$ you have a non-trivial or topological insulator.
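As a concrete, runnable illustration of evaluating this index (not part of the original answer), the sketch below computes the Chern number of the lower band of the Qi-Wu-Zhang two-band lattice model using the Fukui-Hatsugai plaquette discretization of the Berry-curvature integral; the model choice and grid size are illustrative assumptions:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_state(kx, ky, m):
    # Qi-Wu-Zhang model: H(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz
    h = np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ
    _, vecs = np.linalg.eigh(h)       # eigenvalues ascending
    return vecs[:, 0]                 # eigenvector of the occupied (lower) band

def chern_number(m, nk=40):
    # Fukui-Hatsugai: sum gauge-invariant Berry fluxes through each plaquette
    ks = 2 * np.pi * np.arange(nk) / nk
    u = np.array([[lower_band_state(kx, ky, m) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            # product of link variables <u|u'> around one plaquette;
            # arbitrary eigenvector phases cancel in this product
            plaq = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total += np.angle(plaq)
    return total / (2 * np.pi)
```

For $|m|>2$ the model is a trivial insulator ($C=0$), while for $0<|m|<2$ the bands are twisted and $|C|=1$; the lattice method returns these integers essentially exactly once the mesh resolves the Berry curvature.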
{ "domain": "physics.stackexchange", "id": 8527, "tags": "condensed-matter, topology, topological-insulators" }
Converting from binary to unary
Question: I have a function that converts a binary number to a unary number based on a Markov algorithm. In what ways can I improve the code? The algorithm's rules, in priority order, are:

"|0" -> "0||"
"1" -> "0|"
"0" -> ""

public List BinaryToUnary(List value) {
    var temp = new List();
    var temp2 = new List();
    var temp3 = new List();
    string hold = "";
    int count = 1;
    TxtToDisPlay = "Major Step 1: replace All 1 with 0l userInput " + string.Join("", value) + "\n\n";
    foreach (var x in value) {
        if (x.Equals("1")) {
            temp.Add("0l");
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + x + " = " + "0l" + "\n";
        } else {
            temp.Add(x);
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + x + " = " + x + "\n";
        }
        count++;
    }
    TxtToDisPlay = TxtToDisPlay + "\n\nMajor Step 2: Replace all l0 with 0ll userInput = " + string.Join("", temp) + " \n\n";
    foreach (var x in string.Join("", temp)) {
        hold = hold + x;
        if (hold.Equals("l0")) {
            temp2.Add("0ll");
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + hold + " = " + "0ll" + "\n";
            hold = "";
        }
        if (hold.Equals("0")) {
            temp2.Add(hold);
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + hold + " = " + hold + "\n";
            hold = "";
        }
        count++;
    }
    if (hold.Equals("l")) {
        temp2.Add("l");
    }
    hold = "";
    foreach (var y in string.Join("", temp2)) {
        if (temp2.Count == 1) {
            temp3.Add("l");
            break;
        }
        hold = hold + y;
        if (hold.Equals("l0")) {
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + hold + " = " + "0ll" + "\n";
            temp3.Add("0ll");
            hold = "";
        }
        if (hold.Equals("0") || hold.Equals("ll")) {
            if (hold.Equals("ll")) {
                TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + hold + " = " + hold + "\n";
                temp3.Add("l");
                hold = "l";
            } else {
                TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + hold + " = " + hold + "\n";
                temp3.Add(hold);
                hold = "";
            }
        }
    }
    if (value.Count == 3 && hold.Equals("l")) {
        temp3.Add("l");
        temp3.Add("l");
    } else if (value.Count == 3 || hold.Equals("l")) {
        temp3.Add("l");
    }
    TxtToDisPlay = TxtToDisPlay + "\n\nMajor Step 3 : get rid of the 0 from userInput = " + string.Join("", temp3) + " \n\n";
    var answer = new List();
    foreach (var x in temp3) {
        if (x.Contains("0ll")) {
            foreach (var y in x.Where(y => y.Equals('l'))) {
                answer.Add("l");
                TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + "x = l so " + "Add l To new Array result = " + string.Join("", answer) + "\n";
            }
        } else if (x.Contains("l")) {
            TxtToDisPlay = TxtToDisPlay + "Step " + count + ": " + "Add l To new Array result = " + string.Join("", answer) + "\n";
            answer.Add(x);
        }
        count++;
    }
    return answer;
}

Answer: You are not supposed to replace all matches but only the first one, and then repeat the process. Here is some JavaScript code that implements the Markov algorithm exactly the way it is described on the Wikipedia page. Now all you have to do is translate that to C#.

function binaryToUnary(value) {
    var matchFound = false,
        rules = [["|0", "0||"],
                 ["1",  "0|" ],
                 ["0",  ""   ]];
    do {
        matchFound = false;
        for (var i = 0; i < rules.length; i++) {
            var rule = rules[i],
                pattern = rule[0],
                replacement = rule[1],
                index = value.indexOf(pattern);
            if (index >= 0) {
                value = value.substring(0, index) + replacement + value.substring(index + pattern.length);
                matchFound = true;
                break; // break because only the first pattern that matches should be replaced
            }
        }
    } while (matchFound);
    return value;
}
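For reference, the same first-match-then-restart loop in Python (my translation, purely illustrative; the answer suggests translating to C#):

```python
def binary_to_unary(s):
    # Markov algorithm: repeatedly apply the first rule whose pattern occurs,
    # replacing only its leftmost occurrence, until no rule applies
    rules = [("|0", "0||"), ("1", "0|"), ("0", "")]
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules:
            if pattern in s:
                s = s.replace(pattern, replacement, 1)  # leftmost occurrence only
                changed = True
                break  # restart from the highest-priority rule
    return s
```

For example, "101" rewrites step by step down to "|||||", i.e. five tally marks for the value 5.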
{ "domain": "codereview.stackexchange", "id": 13153, "tags": "c#, algorithm, converting" }
R package development: How does one automatically install Bioconductor packages upon package installation?
Question: I have an R package on github which uses multiple Bioconductor dependencies, 'myPackage'. If I include CRAN packages in the DESCRIPTION via Depends:, the packages will automatically install upon installation via devtools, i.e. devtools::install_github('repoName/myPackage'). This is discussed in Section 1.1.3 Package Dependencies, in Writing R Extensions. Is there a way to streamline this such that packages from Bioconductor are automatically installed as well? Normally, users install Bioconductor packages via BiocLite, e.g. source("http://www.bioconductor.org/biocLite.R") biocLite("edgeR") Answer: There's a trick to this where one needs to add biocViews: to the package DESCRIPTION file. That's the only solution I've ever seen for allowing automatic installation of Bioconductor dependencies. If you need a couple of examples, then click through the link I posted and scroll down to pull requests referencing that issue; they will generally include the actual example.
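A minimal, hypothetical DESCRIPTION fragment illustrating the trick (the package name is the question's placeholder; edgeR is the Bioconductor dependency from the question, the other fields are boilerplate):

```
Package: myPackage
Title: Example Package With a Bioconductor Dependency
Version: 0.1.0
biocViews:
Imports:
    edgeR
```

The biocViews: field can even be left empty; its mere presence signals install tooling such as devtools/remotes to also query the Bioconductor repositories when resolving Imports/Depends.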
{ "domain": "bioinformatics.stackexchange", "id": 2043, "tags": "r, bioconductor" }
Interspecies competition and pathogen
Question: Following my answer to this question, a debate ensued on whether the loss in population of one species (namely red squirrels) due to its lesser resistance to a pathogen brought by a competing species (namely squirrel parapoxvirus and grey squirrel) should be labelled as a result of interspecies competition or not. Here I will try to state it as a question in a more general setting. I'll give my own answer to it, and would welcome other views on the issue. Assume the following: Species $A$ lives in a closed ecosystem and has reached carrying capacity $A=M_A$. At time $0$, a new species $B$ is introduced which is in competition with $A$, and additionally bears a pathogen to which it is resistant but which causes a high lethality rate in $A$. It is observed that $A$ diminishes while $B$ thrives. Can we necessarily ascribe it to interspecific competition? Answer: No, you cannot for certain ascribe it to competition, without further information. The mediation by a pathogen is similar to the effects on prey species that are indirectly caused by shared predators (e.g. Prey B increase -> Predator X increase -> more predation on Prey A). Such effects, which can be tricky to separate from direct competition, are usually called apparent competition (see e.g. the classic paper Holt. 1977. Predation, apparent competition, and the structure of prey communities. Theor Popul Biol 12). However, to some extent, the terminology used may depend on what you view as "the environmental background". All species have lots of adaptations to certain aspects of their living environment, and these naturally differ between species. Some might be more resistant to a certain pathogen (or to drought or other abiotic factors), but this might trade off against other traits. In that respect, you might view species B as outcompeting species A in a certain environment (e.g. in the presence of the pathogen). 
However, since you specifically know about the pathogen (which is the focal point of the question), and since species B will probably function as a host/source population of the pathogen, I think it is more useful and informative to label the effect as apparent competition. To partition the direct and indirect effects, however, you will need more information.
{ "domain": "biology.stackexchange", "id": 5873, "tags": "evolution, ecology, population-genetics, population-dynamics" }
Basis functions in group theory and wave functions
Question: I just started to study group theory by reading the book Group theory: Applications to the physics of condensed matter by M. S. Dresselhaus. In chapter 4 it was mentioned: Suppose that we have a group $G$ with symmetry elements $R$ and symmetry operators $\hat{P}_R$ . We denote the irreducible representations by $\Gamma_n$, where $n$ labels the representation. We can then define a set of basis vectors denoted by $\left| \Gamma_n j\right>$. ... These basis vectors relate the symmetry operator $\hat{P}_R$ with its matrix representation $D^{(Γ_n)} (R)$ through the relation \begin{equation} \hat{P}_R \left|\Gamma_n \alpha\right> = \sum_j D^{(\Gamma_n)}(R)_{j\alpha} \left| \Gamma_n j \right>\end{equation} The basis vectors can be abstract vectors; a very important type of basis vector is a basis function which we define here as a basis vector expressed explicitly in coordinate space. Wave functions in quantum mechanics, which are basis functions for symmetry operators, are a special but important example of such basis functions. I don't understand the definition here. Are basis functions defined through the equation above? How do I know whether a function is an appropriately chosen basis function that generates an irrep or not? Also, under what circumstances do wave functions become basis functions as defined here (I'm guessing when the Hamiltonian possesses the symmetry associated with the group), and why? I have tried to search for an answer in the book and also on the internet, but found nothing useful. It would be great if someone could provide some help. Thank you. My attempt at the question: I noticed that we can take any set of basis vectors and prove that the coefficients in the equation above are indeed a representation of the group. Therefore, I believe that the basis vectors can really be any basis of a vector space. However, I would like to know whether this statement is true. Also, there are still several problems, as listed below. 
Let's assume we have a set of basis vectors $\left|\Gamma_n i\right>$ in a vector space, and we have group elements $\alpha$, $\beta$, $\gamma$ with $\gamma = \beta\alpha$, and the corresponding symmetry operators $\hat{P}_\alpha$, $\hat{P}_\beta$, $\hat{P}_\gamma$. We let $\hat{P}_\alpha$ act on a basis vector, and the result should in general be expandable in the same basis: $$ \hat{P}_\alpha \left|\Gamma_n i\right> = \sum_j C^\alpha_{ji} \left| \Gamma_n j \right> $$ Next we also let $\hat{P}_\beta$ act on it: $$ \hat{P}_\beta\hat{P}_\alpha \left|\Gamma_n i\right> = \hat{P}_\beta \sum_j C^\alpha_{ji} \left| \Gamma_n j \right> = \sum_{j, k} C^\beta_{kj} C^{\alpha}_{ji} \left|\Gamma_n k\right> $$ But at the same time, $\hat{P}_\gamma = \hat{P}_\beta \hat{P}_\alpha$, so $$ \hat{P}_\gamma \left|\Gamma_n i\right> = \sum_k C^\gamma_{ki} \left| \Gamma_n k \right> $$ We see that $C^\gamma_{ki} = \sum_j C^\beta_{kj} C^\alpha_{ji}$, which shows that $C$ is a set of matrices that follows the same multiplication rules as the group, indicating that it must be a representation of the group. Now, several problems arise here: What are the conditions required to ensure that the representation here is irreducible? If my proof is correct, it seems that the basis vectors do not even need to be orthogonal, as long as they are linearly independent. Is that true? The vector space here can be any vector space, as long as it has a well-defined inner product. But the basis functions listed in the character tables can be quadratic, so what is the definition of the inner product here? Answer: The basis functions are whatever you want them to be. There are no constraints on them other than that they actually form a basis for the representation (a linearly independent set of vectors that span all the vectors that transform under the representation). 
If you have two bases $|\Gamma_nj\rangle,|\Gamma_n'j\rangle$ for the same representation, then you can write each vector in one as a linear combination of the vectors in the other and arrange the coefficients into a matrix $B$ that helps transform between the two: $$|\Gamma_n'\alpha\rangle=\sum_jB_{j\alpha}|\Gamma_nj\rangle.$$ The equation given in your book defines the matrix $D^{(\Gamma_n)}(R)$ that represents the operator $\hat P_R,$ not the basis: the $\alpha$'th column of the matrix representation forms the coefficients on the $\Gamma_n$ basis vectors when you apply the symmetry operator to the $\alpha$'th element in that basis. If you have this matrix $D^{(\Gamma_n)}(R)$ for the $|\Gamma_nj\rangle$ basis, you can convert it into any other basis once you have the matrix $B$ by the similarity transformation $$D^{(\Gamma_n')}(R)=BD^{(\Gamma_n)}(R)B^{-1}.$$ Read $B^{-1}$ as converting from the new basis back to the old one, $D^{(\Gamma_n)}(R)$ as performing the operation in the old basis, and $B$ as converting back to the new basis. $D^{(\Gamma_n')}(R)$ satisfies the equation you gave, but for the new basis: $$\hat P_R|\Gamma_n'\alpha\rangle=\sum_jD^{(\Gamma_n')}(R)_{j\alpha}|\Gamma_n'j\rangle.$$ This also tells you how to find reducible representations: Consider what happens if you can find a single similarity transformation like $B$ that makes $D^{(\Gamma_n)}(R)$ (proper) block triangular (with the same block sizes) for all $R$: $$D^{(\Gamma_n')}(R)=BD^{(\Gamma_n)}(R)B^{-1}=\begin{bmatrix}D^{(\Gamma_n',11)}(R)&D^{(\Gamma_n',12)}(R)\\\mathbf{0}&D^{(\Gamma_n',22)}(R)\end{bmatrix}$$ where $D^{(\Gamma_n',11)}(R)$ is a square matrix of reduced (but nonzero) dimension compared to $D^{(\Gamma_n)}(R).$ Say it is an $m\times m$ matrix. This means that the first $m$ columns of $B$ transform only into linear combinations of each other under $D^{(\Gamma_n)}(R)$ for all symmetry elements $R$. 
Equivalently, the vectors formed by taking each of the first $m$ columns of $B$ and using them as coefficients for the $|\Gamma_nj\rangle$ transform only into linear combinations of each other under $\hat P_R$. The span of these vectors together with this action of the group upon them forms a subrepresentation of $\Gamma_n$ (and you can use these vectors as a basis for this representation). $\Gamma_n$ is irreducible iff there is no similarity transform (i.e. change of basis) that simultaneously block-triangularizes the matrix representations of all the symmetry operators. Here is an example where we find a representation of all of $C_\mathrm{3v}$ in the valence s orbitals of $N\!H_3$. Due to the chosen basis, the matrix representations of all the symmetry operators are immediately seen to be block triangular, indicating reducibility. If a different basis were chosen, you might not get that. The mathematically important part is whether there is a basis where the matrix representations of all the symmetry operators are block-triangular. To your problem (3): inner products? Where? Nothing at all in either your question or my answer requires inner products. You need to decompose vectors as linear combinations of basis elements. That's all. As to your "wavefunctions" question: Wavefunctions appear as the basis vectors of a representation when wavefunctions are the thing you're studying. Again, that's really it. You start with some Hilbert space of wavefunctions $\mathcal{H}$ and a Hamiltonian $H$. You look for a group $G$ and a unitary representation $\hat U\!_{R\in G}:\mathcal{H}\to\mathcal{H}.$ "Physically relevant" representations are the ones that commute with the Hamiltonian. You may also find subrepresentations of this one. For any representation, you may want to pick out some basis vectors for it. TL;DR: Basis vectors are not important! 
The physically relevant things are the representations, consisting of a subspace of some vector space and a group action on that subspace. The basis vectors just make computation on those objects possible (the subspace is the span of any basis for it, and the group action defined on a basis extends to the whole (sub)space). You choose the basis however you want to make computation/interpretation easy.
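The bookkeeping in the question's "attempt" section can be checked numerically. The sketch below (mine, with hypothetical orbital labels) builds the 3x3 permutation representation of $C_\mathrm{3v}$ acting on three equivalent s orbitals, verifies the homomorphism property $C^\gamma = C^\beta C^\alpha$, and shows that one change of basis block-diagonalizes every matrix at once, so the representation is reducible:

```python
import numpy as np

# the six elements of C3v as permutations of three equivalent s orbitals
# (labels E, C3, C3^2 and the three mirror planes sv1..sv3 are illustrative)
ELEMENTS = {
    'E': (0, 1, 2), 'C3': (1, 2, 0), 'C3^2': (2, 0, 1),
    'sv1': (0, 2, 1), 'sv2': (2, 1, 0), 'sv3': (1, 0, 2),
}

def perm_matrix(p):
    # matrix sending basis vector e_i to e_{p[i]}
    m = np.zeros((3, 3))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

D = {name: perm_matrix(p) for name, p in ELEMENTS.items()}

def compose(p, q):
    # apply permutation q first, then p
    return tuple(p[q[i]] for i in range(3))

# homomorphism check: D(beta) D(alpha) = D(beta after alpha) for all pairs
rep_ok = all(
    np.allclose(D[b] @ D[a], perm_matrix(compose(pb, pa)))
    for a, pa in ELEMENTS.items() for b, pb in ELEMENTS.items()
)

# symmetry-adapted basis: totally symmetric combination plus its complement
B = np.column_stack([
    np.ones(3) / np.sqrt(3),
    np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
    np.array([1.0, 1.0, -2.0]) / np.sqrt(6),
])

# in this basis every D(R) splits into a 1x1 block (A1) and a 2x2 block (E)
blocks_ok = all(
    np.allclose((B.T @ M @ B)[0, 1:], 0) and np.allclose((B.T @ M @ B)[1:, 0], 0)
    for M in D.values()
)
```

Note that no inner product is needed for the homomorphism check itself, echoing the answer's point; the orthonormal basis is just a convenient choice for exhibiting the block structure.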
{ "domain": "physics.stackexchange", "id": 81001, "tags": "quantum-mechanics, hilbert-space, symmetry, group-theory, representation-theory" }
Is a metal Prince Rupert's drop possible?
Question: Can a Prince Rupert's drop be made of metal instead of glass? Answer: Sort of. This is rendered a bit of a moot point as most metals already have the sort of hardness and impact resistance that heat treating imparts on glass anyway, so you are essentially approaching the problem from the opposite direction. However it is certainly true that heat treating can create residual stresses in susceptible metals in a similar fashion; stainless steel, with its high coefficient of thermal expansion and low thermal conductivity, is particularly susceptible. Also steels with sufficient carbon content can be differentially hardened by quenching, so if you quench a thick enough section of high carbon steel you will get a hard surface layer with a softer core. Also the hard section will increase in volume, creating residual stress throughout the whole sample. There are also various processes for surface hardening of steels, such as case hardening which works by diffusing carbon into the surface of low carbon steels. The closest real world application is in ball bearings which tend to have hard surfaces and soft cores to combine surface wear resistance with toughness. This isn't exactly the same as a Prince Rupert's drop as you have a change of micro-structure going on. There is also the complication that molten high carbon steel tends to start to burn in air, so making an actual droplet might require an inert atmosphere; equally, molten steel is generally a lot less viscous than glass, so forming the 'tail' wouldn't work in quite the same way. Having said that, you could achieve a similar effect by quenching from solid. Fully hardened (ie quenched but not tempered) high carbon steels aren't that far away from the brittleness of glass. Indeed it is not unknown for quenched parts to spontaneously crack between quenching and tempering.
{ "domain": "engineering.stackexchange", "id": 3494, "tags": "materials, pressure, stresses, metals, cooling" }
Relationship of Fourier Transform and Spatial Directivity Pattern
Question: I am reading, over and over, Section 1.2 of the most understandable mic array tutorial I have found. A sound wave depends on both time and space, as can be seen in that link. But I can't understand why the Fourier transform of the spatially sampled data gives the directivity pattern of the microphone array (which can be seen in section 1.3.2 of the first document). Can anyone describe this in a more computer-scientist way? Answer: Have a look at the scilab code where I answered your other question. That question is about a discrete aperture, but it is analogous to the question you are asking here. For that question, the beam pattern is just given by: $$ D(\theta) = \sum_{n=0}^{N-1} w_n \exp\left(j\frac{2\pi n d}{\lambda} \sin(\theta) \right). $$ With judicious choice of parameters, you can see how this looks somewhat like the discrete Fourier transform of the weights $w_n$, which are analogous to $A_R$ in the paper. To obtain the formulae from the paper, you need to think of the continuous aperture rather than discrete sensors.
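To see the DFT analogy concretely, the beam pattern above can be evaluated numerically. A minimal sketch in Python rather than scilab; the sensor count, half-wavelength spacing, and uniform weights are illustrative choices of mine, not values from the paper:

```python
import numpy as np

N = 8                 # number of sensors (illustrative choice)
d_over_lam = 0.5      # spacing d / wavelength lambda (half-wavelength)
w = np.ones(N) / N    # uniform weights w_n

theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
n = np.arange(N)

# D(theta) = sum_n w_n * exp(j * (2*pi*n*d/lambda) * sin(theta))
D = np.exp(1j * 2 * np.pi * d_over_lam * np.outer(np.sin(theta), n)) @ w

# At broadside (theta = 0, index 90) all phase terms vanish, so
# |D| = sum(w_n) = 1: the main lobe, i.e. the "DC bin" of a DFT
# of the weights.
print(round(abs(D[90]), 6))   # 1.0
```

Plotting `abs(D)` against `theta` reproduces the familiar main-lobe/side-lobe shape; changing `w` reshapes the pattern exactly the way windowing reshapes a DFT.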
{ "domain": "dsp.stackexchange", "id": 2497, "tags": "fft, beamforming" }
Save Games on iOS with NSCoding
Question: This is the second time I have implemented saving and loading for a game using Objective-C. I am using the built in NSCoding methods. I would love to hear opinions about NSCoding and whether or not it is a viable option for saving and loading games. My code does function, however there is a lot of boilerplate code required to make it work. Maybe there is a way to reduce the lines of code required that I don't know about. I did find a library that supposedly automated part of this process, however that library only encodes the @properties of the class, and I am also encoding private instance variables as well. I also was not planning to use an external library, although I will take any recommendations given. First, here are the saving and loading methods in the SKScene: #pragma mark - Save and Load -(void) saveGameToSlot:(NSString *)saveSlot { NSLog(@"Saving game %@", saveSlot); [self saveCustomObject:_game key:saveSlot]; [self closeSaveMenu]; } -(void) loadGameInSlot:(NSString *)loadSlot { NSString *savePath = [[self applicationDocumentsPath] stringByAppendingPathComponent:loadSlot]; BOOL saveExists = [[NSFileManager defaultManager]fileExistsAtPath:savePath]; if (saveExists) { [self prepareForLoading]; _game = [self loadCustomObjectWithKey:loadSlot]; [self createUIAndRenderer]; [self createSceneElements]; NSLog(@"Loading game %@", loadSlot); [self closeSaveMenu]; } } -(void) saveCustomObject:(DTGame *)object key:(NSString *)key { NSString *path = [[self applicationDocumentsPath] stringByAppendingPathComponent:key]; [NSKeyedArchiver archiveRootObject:object toFile:path]; } -(DTGame *) loadCustomObjectWithKey:(NSString *)key { NSString *path = [[self applicationDocumentsPath] stringByAppendingPathComponent:key]; DTGame *object = [NSKeyedUnarchiver unarchiveObjectWithFile:path]; return object; } -(void) prepareForLoading { [_world removeAllChildren]; [_world removeFromParent]; [_mainHud removeAllChildren]; [_mainHud removeFromParent]; [_resourceCounters 
removeAllChildren]; [_resourceCounters removeFromParent]; [[self view] removeGestureRecognizer:_panRecognizer]; [[self view] removeGestureRecognizer:_pinchRecognizer]; [[self view] removeGestureRecognizer:_swipeLeft]; [[self view] removeGestureRecognizer:_swipeRight]; } -(NSString *) applicationDocumentsPath { return [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]; } Here is the code required in the Game class to make this work: //in the header file @interface DTGame : NSObject <NSCoding> //in the implementation #pragma mark - NSCoding methods -(void) encodeWithCoder:(NSCoder *)aCoder { [aCoder encodeInt:self.currentCommonResources forKey:@"currentCommonResources"]; [aCoder encodeInt:self.currentRareResources forKey:@"currentRareResources"]; [aCoder encodeInt:self.currentFoodResources forKey:@"currentFoodResources"]; [aCoder encodeInt:_foodStockpileSize forKey:@"foodStockpileSize"]; [aCoder encodeObject:_towerArray forKey:@"towerArray"]; [aCoder encodeObject:_gameDwarfArray forKey:@"gameDwarfArray"]; [aCoder encodeInt:_currentTower forKey:@"currentTower"]; [aCoder encodeCGSize:_worldSize forKey:@"worldSize"]; } -(id) initWithCoder:(NSCoder *)aDecoder { self = [super init]; if (self) { _currentCommonResources = [aDecoder decodeIntForKey:@"currentCommonResources"]; _currentRareResources = [aDecoder decodeIntForKey:@"currentRareResources"]; _currentFoodResources = [aDecoder decodeIntForKey:@"currentFoodResources"]; _foodStockpileSize = [aDecoder decodeIntForKey:@"foodStockpileSize"]; _towerArray = [aDecoder decodeObjectForKey:@"towerArray"]; _gameDwarfArray = [aDecoder decodeObjectForKey:@"gameDwarfArray"]; _currentTower = [aDecoder decodeIntForKey:@"currentTower"]; _worldSize = [aDecoder decodeCGSizeForKey:@"worldSize"]; } [self continueGame]; return self; } -(void) continueGame { self.isPaused = YES; self.hasTowerChanged = YES; } And then every class that will be saved and loaded has to be an NSCoding delegate and 
implement those two methods. Here is an example of how to set this up for a subclass: #pragma mark - NSCoding methods -(id) initWithCoder:(NSCoder *)aDecoder { self = [super initWithCoder:aDecoder]; if (self) { _dwarfMovementState = [aDecoder decodeIntegerForKey:@"dwarfMovementState"]; _floorList = [aDecoder decodeObjectForKey:@"floorList"]; _floorListForWork = [aDecoder decodeObjectForKey:@"floorListForWork"]; } return self; } -(void) encodeWithCoder:(NSCoder *)aCoder { [super encodeWithCoder:aCoder]; [aCoder encodeInteger:self.dwarfMovementState forKey:@"dwarfMovementState"]; [aCoder encodeObject:_floorList forKey:@"floorList"]; [aCoder encodeObject:_floorListForWork forKey:@"floorListForWork"]; } Is this method of implementing saving and loading reliable and efficient? I hate that I have to use strings to access the properties, because there is no auto correction for the text, so mistakes are likely. If the class names ever change, the strings will not have to change, but they will be unclearly named until they are fixed. And any and every single variable that is added to the classes will need to have this boilerplate code to back it up. Answer: Is this method of implementing saving and loading reliable and efficient? It should be reliable. And if saving all of this information is necessary, then this is almost certainly the most efficient way to do it all. The way to improve efficiency further is to ask yourself what datapoints are absolutely necessary to write and read later, and what datapoints can be derived from the necessary ones without explicitly being saved? If it can be implicitly determined, you don't necessarily need to save it. I hate that I have to use strings to access the properties, because there is no auto correction for the text While you do have to use strings for the keys, you can (and should) get auto-complete to help you by defining the keys as named constants. 
In all of my iOS projects, I always have multiple files where my global constants such as user defaults keys are defined. In this case, these strings could be declared simply within the current file as they're only used in two methods, but nonetheless, it would still be good to do it even if they're only used in two spots. One other thing to think about is actually using NSUserDefaults. You've already got init/encodeWithCoder methods written, so now you can store these to NSUserDefaults. This won't make loading a game any faster or more efficient, as at the end of the day, the loading process would be the same as what you're already doing; however, using NSUserDefaults for at least the current game being played should allow you to take regular snapshots of the current game state and auto-save without the player even noticing. When you save to NSUserDefaults using setValue:forKey:, at first, you're just setting the value in temporary memory. This is no different from setting a value in an NSMutableDictionary. It's quite quick. You're just pointing to a memory address. You can call synchronize on NSUserDefaults to force it to write everything in the temporary memory to permanent storage, and sometimes this can be appropriate, but you don't want to do this frequently. Instead, NSUserDefaults works in the background and waits until the processor has spare time and then writes the values to permanent storage. I'm pretty sure the only way the data in NSUserDefaults can be lost is if the phone suddenly dies unexpectedly. Even if the app is killed or the phone is powered off normally, the data in NSUserDefaults will be saved before it's lost. What I'd recommend, however, is only saving the currently active game to NSUserDefaults. Just before a different game becomes active, the data from NSUserDefaults should be saved in the manner you're already using here.
{ "domain": "codereview.stackexchange", "id": 8247, "tags": "game, objective-c, serialization, file-structure" }
A pythonic way of de-interleaving a list (i.e. data from a generator), into multiple lists
Question: I've recently discovered the wonders of the Python world, and am quickly learning. Coming from Windows/C#/.NET, I find it refreshing working in Python on Linux. A day you've learned something new is not a day wasted. I need to unpack data received from a device. Data is received as a string of "bytes", of arbitrary length. Each packet (string) consists of samples, for eight channels. The number of samples varies, but will always be a multiple of the number of channels. The channels are interleaved. To make things a bit more complex, samples can be either 8 or 16 bits in length. Check the code, and you'll see. I've already got a working implementation. However, as I've just stumbled upon generators, iterators, maps and ... numpy, I suspect there might be a more efficient way of doing it. If not efficient, maybe more "pythonic". I'm curious, and if someone would spend some time giving me a pointer in the right (or any) direction, I would be very grateful. As of now, I am aware of the fact that my Python has a strong smell of C#. But I'm learning ... This is my working implementation. It is efficient enough, but I suspect it can be improved. Especially the de-interleaving part. On my machine it prints: time to create generator: 0:00:00.000040 time to de-interleave data: 0:00:00.004111 length of channel A is 750: True As you can see, creating the generator takes no amount of time. De-interleaving the data is the real issue. Maybe the data generation and de-interleaving can be done simultaneously? This is not my first implementation, but I never seem to be able to drop below approx 4 ms. 
from datetime import datetime def unpack_data(data): l = len(data) p = 0 I'd avoid such short variable names; they make your code harder to follow while p < l: # convert 'char' or byte to (signed) int8 i1 = (((ord(data[p]) + 128) % 256) - 128) p += 1 if i1 & 0x01: # read next 'char' as an (unsigned) uint8 # # due to the nature of the protocol, # we will always have sufficient data # available to avoid reading past the end i2 = ord(data[p]) p += 1 yield (i1 >> 1 << 8) + i2 else: yield i1 >> 1 # generate some test data ... test_data = '' for n in range(500 * 12 * 2 - 1): test_data += chr(n % 256) It's usually better to put all the pieces of a string in a list and then join them. Python doesn't have good performance for repeated string concatenation.
t0 = datetime.utcnow() # in this example we have 6000 samples, 8 channels, 750 samples/channel # data received is interleaved: A1, B1, C1, ..., A2, B2, C2, ... F750, G750, H750 channels = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H') samples = { channel : [] for channel in channels} # call unpack_data(), receive a generator gen = unpack_data(test_data) t1 = datetime.utcnow() print 'time to create generator: %s' % (t1-t0) All you've done is create the generator; that won't do any actual work. So you aren't measuring much of anything here. You are still spending much of the time inside the function you've defined after this point. try: while True: for channel in channels: samples[channel].append(gen.next()) except StopIteration: pass It's best to avoid dealing with StopIteration directly if you can. In this case you can do: for sample, channel in zip(gen, itertools.cycle(channels)): samples[channel].append(sample) itertools.cycle() will give you a generator that goes repeatedly through all the channels in order. print 'time to de-interleave data: %s' % (datetime.utcnow()-t1) print 'length of channel A is 750: %s' % (len(samples['A']) == 750) You can use numpy, I've done that for you. Basically, numpy lets you do operations over a whole array and that's faster than doing them in your loops.
See below: from datetime import datetime import numpy def unpack_data(data): # reads the string in as a sequence of uint8 data = numpy.fromstring(data, numpy.uint8) # figure out, for every byte, whether the # least significant bit is clear odds = numpy.logical_not(data & 0x01) # calculate the interpretation of each number # both possible ways singles = data.astype(numpy.int8) >> 1 # parentheses matter: + binds tighter than << in Python doubles = (singles << 8) + numpy.roll(data, -1) # I couldn't vectorize this, it fills up the # result array with True for every actual starting value result = numpy.empty(data.shape, bool) current = True for index, byte in enumerate(odds): # the next byte starts a value if this one # doesn't, or if this one's lsb wasn't set current = not current or byte result[index] = current # where chooses from the singles and doubles # based on the lsb, and result filters those we actually want return numpy.where(odds, singles, doubles)[result] # generate some test data ... test_data = '' for n in range(500 * 12 * 2 - 1): test_data += chr(n % 256) t0 = datetime.utcnow() # in this example we have 6000 samples, 8 channels, 750 samples/channel # data received is interleaved: A1, B1, C1, ..., A2, B2, C2, ... F750, G750, H750 channels = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H') samples = { channel : [] for channel in channels} # call unpack_data(), receive a generator data = unpack_data(test_data) t1 = datetime.utcnow() print 'time to create generator: %s' % (t1-t0) # reshape converts 1 dimensional array # into two dimensional array data = data.reshape(-1, len(channels)) for index, channel in enumerate(channels): samples[channel] = data[:,index] print 'time to de-interleave data: %s' % (datetime.utcnow()-t1) print 'length of channel A is 750: %s' % (len(samples['A']) == 750)
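The reshape-based split at the end is worth seeing in isolation; a toy check of just the de-interleaving step on synthetic data (not the byte protocol above), written in Python 3 syntax:

```python
import numpy as np

channels = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H')

# Toy interleaved stream: A1, B1, ..., H1, A2, B2, ..., H750
data = np.arange(750 * len(channels))

# One row per sample instant; -1 lets numpy infer the row count.
frames = data.reshape(-1, len(channels))
samples = {ch: frames[:, i] for i, ch in enumerate(channels)}

print(len(samples['A']))   # 750
print(samples['B'][0])     # 1  (second value in the stream)
```

Because each `frames[:, i]` is a view into the original array, this de-interleave does no per-element Python work at all, which is where the speedup over the generator loop comes from.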
{ "domain": "codereview.stackexchange", "id": 3052, "tags": "python, numpy" }
What is mitochondrial run length?
Question: I am reading the following journal paper and I have come across the following statement: Overexpression of GSK-3β significantly increases motile mitochondria in a Tau protein-dependent manner. However, GSK-3β does not alter mitochondrial velocity or mitochondrial run length. I understand what mitochondrial velocity is but I am not sure what exactly is meant by mitochondrial run length. Any insights are appreciated. Answer: Run length means how far the mitochondria move, each time they move. The article this references is: "GSK3β Is Involved in the Relief of Mitochondria Pausing in a Tau-Dependent Manner". This article has a figure showing the distances in micrometers, although I have trouble interpreting the error bars on the graph to see if the distances ("run length") are really different.
{ "domain": "biology.stackexchange", "id": 10989, "tags": "molecular-biology, cell-biology, mitochondria" }
Why are fruits so large compared to their seeds?
Question: Why do many plants produce such large fruits (apples and strawberries, for example) if they contain only relatively small seeds? Answer: The short answer: Fruits are large compared to seeds because humans have made them large. In the natural environment, there is a different set of evolutionary pressures. A fruit has to be able to successfully propagate itself using its seeds, while commercially farmed fruit is usually cloned via vegetative propagation. Therefore, commercially farmed fruits do not need large seeds to propagate, since they are cloned by the farmers. In many cases, their seeds are actually nonviable (i.e. they will not grow when planted). For example, this is a wild banana, before being selectively bred by humans. As you can see here, the seeds are enormous compared to those in commercial Cavendish bananas. Similarly, this is a wild strawberry compared to its commercially farmed variant. Fruits with large seeds don't appeal to consumers, and therefore farmers who sell fruits with large seeds will get a poorer return on investment. Therefore, farmers who plant the fruits which appeal the best to consumers by being the easiest to eat (small seeds) will reap the greatest profits.
{ "domain": "biology.stackexchange", "id": 3639, "tags": "botany, fruit, agriculture" }
Bellman Ford algorithm relaxation
Question: In the Bellman-Ford algorithm's relaxation part, I can't seem to understand why we need to use $d[v] = \min\{d[v],\ d[u] + c(u, v)\}$, specifically the $c(u, v)$ part, which uses the original edge weight. Why can't it use the least cost (i.e. the shortest-path distance) between $u$ and $v$ instead of $c(u, v)$? Answer: You can't, because the length of the shortest path between $u$ and $v$ is not known. The purpose of the Bellman-Ford algorithm is to compute such lengths, so you can't assume they are already known.
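The relaxation step can be written out directly, which makes the answer's point visible: $c(u, v)$ is input data, while the shortest-path values $d[\cdot]$ are only built up as the rounds proceed. A minimal sketch (the edge-list format and names are my own):

```python
def bellman_ford(n, edges, source):
    """Shortest-path distances from `source` via Bellman-Ford.

    n      -- number of vertices, labelled 0..n-1
    edges  -- iterable of (u, v, c) triples, c = original weight c(u, v)
    """
    INF = float('inf')
    d = [INF] * n
    d[source] = 0
    for _ in range(n - 1):          # n-1 rounds of relaxation suffice
        for u, v, c in edges:
            if d[u] + c < d[v]:
                d[v] = d[u] + c     # d[v] = min(d[v], d[u] + c(u, v))
    return d

# Tiny example with a negative edge, where d[2] improves across rounds:
print(bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -2)], 0))   # [0, 4, 2]
```

Note that `d[2]` is first set to 5 (the direct edge) and only later improved to 2 via vertex 1; the "shortest distance" between 0 and 2 simply did not exist yet when the first relaxation ran, which is why the update must use the raw weight `c`.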
{ "domain": "cs.stackexchange", "id": 21795, "tags": "algorithms" }
Why don't we consider both the forces while calculating the magnitude of stress in an elastic body?
Question: Consider a wire being stretched from two ends with equal forces. We know that both of these forces collectively participate in elongating the wire; had there been only one force, the wire would have accelerated in the direction of that force. Why, then, can't the stress be calculated using the two forces (knowing that the vector resultant of the two forces would come out to be zero)? Answer: Tricky question. Basically you would think the total force is 0 on any plane intersecting the cylinder at right angles, hence there would be no pressure, right? Well, the first point (0 net force) is correct, but the second is not. Imagine being physically pulled by two equally strong friends in opposite directions. The total force is 0 so you remain standing where you are, but you will feel a stress (pressure) in your body from your pulling friends. It helps to think of the situation from a different perspective: One force is trying to pull out (extend) the cylinder, and is therefore providing a pressure/stress on the cylinder, of magnitude $P = F/A$. But the cylinder is not moving, so there must be an opposite force of equal magnitude, which maintains the cohesion of the cylinder. At some level of external force this cohesion is no longer strong enough, so the external force will rip the cylinder apart. You can think of the cohesive force as a sort of reaction (or supportive, passive) force, and the pulling as an "active" force.
{ "domain": "physics.stackexchange", "id": 52546, "tags": "elasticity, stress-strain" }
Electrophilic aromatic substitution reaction of dihydrofuran-2,5-dione with anisole
Question: I am having trouble understanding how the reaction of anisole with dihydrofuran-2,5-dione (succinic anhydride) in the presence of $\ce{AlCl3}$ and acid works, or how an electrophile is formed. I feel like the product should be something like $\ce{C11H12O4}$. Answer: Nice! You've brought everything you need. If you can't identify your electrophile immediately, do it the other way around: Assign a function to each component. The one that is left must be your electrophile - or its precursor. Do you remember the role of $\ce{AlCl3}$ in electrophilic aromatic substitutions? Where could it possibly attack here? Here is another hint: What is the trivial name for dihydrofuran-2,5-dione?
{ "domain": "chemistry.stackexchange", "id": 887, "tags": "organic-chemistry, aromatic-compounds, synthesis" }
How are IBM's 127 qubits more potent than the 5760 qubits D-Wave - Advantage_system6.1?
Question: Hi, I'm a newbie in quantum computing. Recently I had a look at AWS Braket and saw a machine called D-Wave - Advantage_system6.1 that has 5760 qubits, but when I googled the quantum computer with the most qubits so far, it showed me that IBM's 127-qubit Eagle processor is the most potent quantum computer so far! How are IBM's 127 qubits more potent than the 5760 qubits of the D-Wave - Advantage_system6.1? Also, should I learn Amazon Braket or IBM quantum computing using Qiskit? Thank you, and sorry for my English. Answer: They are different kinds of quantum computers: The D-Wave is a quantum annealer, a rather specific (non-general-purpose) device that can solve QUBO-type problems. IBM offers universal gate-based quantum computers. They can in principle solve everything the D-Wave can, but with this generality comes the downside of much harder control. As for your other question: what you should learn depends on what problems you want to solve.
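To make "QUBO-type problems" concrete: a QUBO asks for a binary vector $x$ that minimizes $x^\mathsf{T} Q x$, and that is the one problem shape an annealer is built for. A tiny illustrative instance (the matrix is made up, not from any D-Wave workload), brute-forced classically:

```python
import itertools
import numpy as np

# Minimize x^T Q x over x in {0, 1}^3.  An annealer samples low-energy
# solutions of exactly this form; a gate-based machine would need a
# different encoding of the same problem (e.g. via QAOA).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def energy(x):
    x = np.array(x)
    return x @ Q @ x

# For 3 variables we can simply enumerate all 8 assignments.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best)   # (1, 0, 1)
```

The diagonal rewards setting each bit, while the positive off-diagonal terms penalize setting neighbouring bits together, so the minimum picks the non-adjacent pair. Problems that do not reduce naturally to this quadratic binary form are where a universal gate-based machine is needed.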
{ "domain": "quantumcomputing.stackexchange", "id": 3794, "tags": "ibm-q-experience, experimental-realization, d-wave" }
File validation and unit testing
Question: I just started a new project which reads in a video file and splits it up into frames. I started by just throwing everything in the GUI, but decided I wanted to make this a more well designed program, backed with unit tests. So I started to split out functionality into its own class and have a few questions on best practice. from os import path VALID_EXTENTIONS = ["avi", "mp4"] class InvalidVideoSourceException(Exception): pass def isVideoSrcValid(src): if (not path.isfile(src)): return False filenameTokens = path.splitext(src) if (len(filenameTokens) < 1): return False if (not filenameTokens[1][1:].lower() in VALID_EXTENTIONS): return False return True class VideoCapture(object): def __init__(self, videoSrc): if (not isVideoSrcValid(videoSrc)): raise InvalidVideoSourceException("Invalid video file.") self.videoSrc = videoSrc I'm setting up my capture class which will end up handling all the interaction with the video-- skipping to a frame, getting meta data, etc. Pretty simple. My concern is the best way to handle validation of the path. The class is essentially useless until it has a valid video file, so you must provide that to __init__. If the path is invalid for whatever reason, then the class should throw an exception. Originally, I had the isVideoSrcValid function within the class and it returned nothing, it would go through its validation and throw an exception as it got to it. This was nice because VALID_EXTENTIONS belonged to the class which I like since it doesn't feel good having a global variable hanging around like that. I could pass it into the function, but I like that even less, since it is constant data. The other advantage of this approach was having precise error messages. If the file was invalid because the extension isn't supported, it would report that. 
The downside, and why I ultimately chose to return a Boolean from the function rather than throwing exceptions inline, was so that if down the line I want to handle invalid files in a different way, such as throwing up a dialog, this would make it very easy to do so without messy exception handling. This is why I now check to see if the path is valid, and then raise a general exception. I chose to pull the isVideoSrcValid function out of the class because it isn't closely related semantically to the VideoCapture class. There may be other times when I'd like to check to see if a file is a valid video file without constructing a VideoCapture class. This also makes it much easier to unit test. Have I made the correct choices here? Is there a better way of doing things? I suppose the third option which I didn't consider is to create a Video class, which when constructed represents a valid video. Then the VideoCapture class would take in a Video rather than a path. Finally, I'm new to unit testing in Python. Here are my unit tests for isVideoSrcValid. Please let me know if you'd do anything differently.
import os import unittest from capped import VideoCapture as vc class TestValidStream(unittest.TestCase): def setUp(self): open("testFile.avi", 'a').close() open("testFile.txt", 'a').close() open("testFile.avi.txt", 'a').close() open("testFile.abc.avi", 'a').close() open("testFile", 'a').close() def test_empty(self): self.assertFalse(vc.isVideoSrcValid("")) def test_validFile(self): self.assertTrue(vc.isVideoSrcValid("testFile.avi")) def test_noFileExists(self): self.assertFalse(vc.isVideoSrcValid("abcdeg.avi")) def test_noFileExtension(self): self.assertTrue(os.path.isfile("testFile")) self.assertFalse(vc.isVideoSrcValid("testFile")) def test_invalidFileExtension(self): self.assertTrue(os.path.isfile("testFile.txt")) self.assertFalse(vc.isVideoSrcValid("testFile.txt")) def test_invalidDoubleFileExtension(self): self.assertTrue(os.path.isfile("testFile.avi.txt")) self.assertFalse(vc.isVideoSrcValid("testFile.avi.txt")) def test_validDoubleFileExtension(self): self.assertTrue(vc.isVideoSrcValid("testFile.abc.avi")) def tearDown(self): os.remove("testFile.avi") os.remove("testFile.txt") os.remove("testFile.avi.txt") os.remove("testFile.abc.avi") os.remove("testFile") Answer: As a general statement, Python convention uses underscores_in_names instead of camelCase. Class Structure This implementation is really up to you. Without a little more detail on your program structure it's hard to suggest how the classes should be arranged. However, it feels like the isVideoSrcValid function as well as VALID_EXTENSIONS should be in some video class. isVideoSrcValid This code can be simplified. What does it matter if the file had no extension? If that's the case, then the next check not filenameTokens[1][1:].lower() in VALID_EXTENTIONS would make the function return false. Because of this, we can remove the 2nd if-statement and just return the value from the last if-statement. Instead of slicing our filenameTokens extension (to remove the .) simply add a period to the VALID_EXTENSIONS.
Also, unpack the tuple returned from splitext() for readability. Here is the revamped code: VALID_EXTENSIONS = ['.avi', '.mp4'] def isVideoSrcValid(src): if (not path.isfile(src)): return False root, ext = path.splitext(src) return ext.lower() in VALID_EXTENSIONS Testing Your tests look pretty comprehensive. I have two main points: Off-base tests Why are you testing os.path.isfile()? I understand it's to make sure that your code in setUp functioned properly and open().close() created valid files. However, the only way your open().close() calls would not create valid files is if they threw an error. If that's the case, your tests would not run. More to the point: testing os.path.isfile() is not the point of your tests and this test class. This test class should only test whether or not isVideoSrcValid works correctly. These tests are independent and thus assume that the functions they use behave properly. Name your test functions to indicate what they test. Test functions should be named to indicate what they test, not what they test with. For example, take your first test function: test_empty(). This is saying "I'm testing my function with an empty string". However, how much information does this give us if we needed to debug should it fail? Did the first if-statement fail (yes)? Or was it the second check? Names like that can be ambiguous in that sense. I would recommend changing that function to: def test_is_file_failure(self): self.assertFalse(vc.isVideoSrcValid("")) # Test with an os resource that is not a file. self.assertFalse(vc.isVideoSrcValid(os.path.expanduser('~')))
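One more way to exercise the revamped extension check without leaving test files behind is a temporary directory. A hypothetical snake_case port in Python 3 syntax (the function and variable names here are mine, not from the review):

```python
import tempfile
from os import path

VALID_EXTENSIONS = ['.avi', '.mp4']

def is_video_src_valid(src):
    """Snake_case port of the reviewer's simplified check."""
    if not path.isfile(src):
        return False
    root, ext = path.splitext(src)      # splitext keeps the leading dot
    return ext.lower() in VALID_EXTENSIONS

# Quick check against real files that are cleaned up automatically.
with tempfile.TemporaryDirectory() as tmp:
    good = path.join(tmp, 'clip.AVI')   # upper case exercises .lower()
    bad = path.join(tmp, 'notes.txt')
    for p in (good, bad):
        open(p, 'a').close()
    results = (is_video_src_valid(good), is_video_src_valid(bad))
    print(results)   # (True, False)
```

`tempfile.TemporaryDirectory` removes the files on exit, which also sidesteps the setUp/tearDown bookkeeping in the original test class.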
{ "domain": "codereview.stackexchange", "id": 7708, "tags": "python, unit-testing" }
Rationalizing enamine stabilities
Question: The solutions manual said that enamine B with its roughly sp2 methyl group can force the nitrogen out of the plane and this messes around with the conjugation. What about the fact that there is a significant degree of resonance donation by the nitrogen, which places a significant partial negative charge on carbon? In enamine A, this carbon is secondary. In enamine B, this carbon is tertiary. Extra inductive donation from the methyl group destabilizes the product. Answer: Conjugation effects are not very important in this case. The main reason is steric hindrance. If the more substituted alkene is formed, then both the methyl group and the nitrogen ring will lie in the same plane, and repulsions will destabilize the molecule. In the case of A, the methyl group is more free to move in a pseudo-chair conformation of the cyclohexene ring. But there are exceptions, for example: In the second case the more substituted alkene is formed, because it is more stable and there is essentially no difference in regard to steric hindrance.
{ "domain": "chemistry.stackexchange", "id": 2356, "tags": "organic-chemistry, resonance" }
Has anyone ever tried to formulate physics based on computer science or information processing?
Question: Some physicists and university researchers say it's possible to test the theory that our entire universe exists inside a computer simulation, like in the 1999 film "The Matrix." In 2003, University of Oxford philosophy professor Nick Bostrom published a paper, "The Simulation Argument," which argued that, "we are almost certainly living in a computer simulation." ref: Physicists testing to see if universe is a computer simulation. But this is not my question. I want to know: has anyone ever tried to formulate physics based on computer science? Or has anyone ever tried to formulate physics based on the evolution of information instead of time evolution? If we accept the simulated world, then physics is the view of the world from inside the simulation; what does the world look like from outside the simulation? For example, the world is complex from inside the fractal, but from the outside it is the simple Mandelbrot set $z_{n+1}=z_{n}^2+c$. In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is, at heart, describable by information, and is therefore computable. Therefore, the universe can be conceived of as either the output of a computer program, a vast, digital computation device, or mathematically isomorphic to such a device. Digital physics is grounded in one or more of the following hypotheses; listed in order of decreasing strength. The universe, or reality: a. is essentially informational (although not every informational ontology needs to be digital) b. is essentially computable c. can be described digitally d. is in essence digital e. is itself a computer f. is the output of a simulated reality exercise Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory yields a lot, but it hardly brings us any closer to the secret of the Old One. In any case I am convinced that He doesn't play dice. 
- Albert Einstein Answer: Has anyone ever tried to formulate physics based on computer science? No. What do you mean by computer science? Data structures, algorithms, cryptography, artificial intelligence? No. Programming, computer architecture, networking, viruses, brain-computer interfaces? No. Computer graphics, visualization, databases, the Linux kernel, Windows 7, ... Noooo. I can't think of anything that is related. Has anyone ever tried to formulate physics based on the evolution of information instead of time evolution? There are people proposing the possibility of using entropic force (example) to explain the gravitational force between objects. The emphasis is that entropy is more fundamental than energy. I am not quite sure whether it would work, as I think entropy is always carried by energy, $S=\Delta Q/T$. It is the closest line of study, as entropy is closely related to information. If we accept the simulated world, then physics is the view of the world from inside the simulation; what does the world look like from outside the simulation? Most physicists have already accepted a kind of simulation worldview: there exist fundamental laws in our universe, and everything follows strictly from these rules. If there is something "outside", those are rules and things that are even more fundamental than our currently perceived view of the universe. Our observable universe is just the simulation result of these rules. As long as causality holds, people can always think in this way. There are people studying the existence of more "fundamental rules" that result in our currently known rules, for example string theory. Some of them are also playing with lattice-like cellular automata to see whether some game rules might generate natural phenomena (see, say, A New Kind of Science). For example, the world is complex from inside the fractal, but from the outside it is the simple Mandelbrot set $z_{n+1}=z_{n}^2+c$. Complexity in our world is understood as an emergent phenomenon arising from a lower level. 
Physics can in principle derive everything in chemistry, and likewise biology and our society. People are working hard to fill in these gaps in understanding emergent phenomena. Thanks to powerful computers and advanced theoretical tools, there is already good progress: Interatomic forces result in the three phases of matter. Proteins simply minimizing their energy fold into a particular shape. Amplified disturbances result in chaotic dynamics. Preferential attachment in social networks results in power-law distributions. All of these are the results of the simple fundamental laws. You might ask how this complexity can arise from these simple rules, but it really does. From the broken-symmetry point of view, our current system can only take one of the many possible states, and there are many out there that you don't yet know. Another, newer viewpoint of nature is self-organized criticality, where a particular phenomenon arises without the need to precisely control system parameters (see, say, How Nature Works). All these play an important role in the observed pattern formation. The purer, the more fundamental; the dirtier, the more complex.
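The Mandelbrot analogy used above (simple rule outside, complex view inside) is easy to make concrete; here is a minimal escape-time membership test for $z_{n+1}=z_n^2+c$ (the iteration cap of 100 and the sample points are arbitrary choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: iterate z -> z**2 + c starting from z = 0.

    Points whose orbit stays bounded (|z| <= 2) for max_iter steps are
    treated as members of the Mandelbrot set.
    """
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # orbit escaped: c is outside the set
    return True

# A one-line rule, yet the boundary it defines is infinitely intricate:
print(in_mandelbrot(0))    # True  (the origin never escapes)
print(in_mandelbrot(-1))   # True  (period-2 orbit: 0, -1, 0, -1, ...)
print(in_mandelbrot(1))    # False (0, 1, 2, 5, 26, ... diverges)
```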
{ "domain": "physics.stackexchange", "id": 9204, "tags": "entropy, big-bang, information, algorithms, cellular-automaton" }
Why doesn't an electromagnetic wave violate conservation of energy?
Question: I'm starting to study electromagnetic waves and, as I understand it, an electromagnetic wave carries a varying electric field. This electric field can in turn exert forces of repulsion/attraction on the electrons and protons it passes very close to. Why doesn't it violate the law of conservation of energy? Answer: The electromagnetic field itself contains energy distinct from the energy of charged bodies; the energy in a given volume of empty space can be found by integrating the energy densities $\frac{1}{2}\epsilon E^2$ and $\frac{1}{2} \frac{B^2}{\mu}$ over the region. When the EM fields increase the kinetic energy of charged particles, there is a corresponding decrease in the energy of the EM field in that region, so total energy is unchanged. The general proof that any combination of fields and charges obeying Maxwell's equations will conserve energy is known as Poynting's theorem, proved for example on pages 346-348 of Introduction to Electrodynamics, Third Edition by David J. Griffiths, or on this page from physicspages.com
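As a quick numerical illustration of this bookkeeping: for a vacuum plane wave with $B=E/c$, the two energy densities named above are exactly equal, so both fields carry the same share of the energy that can be handed to charges (the 100 V/m amplitude below is an arbitrary example value):

```python
import math

epsilon_0 = 8.8541878128e-12        # vacuum permittivity, F/m
mu_0 = 4 * math.pi * 1e-7           # vacuum permeability, H/m
c = 1 / math.sqrt(epsilon_0 * mu_0)  # speed of light follows from the two

E = 100.0   # example electric field amplitude, V/m
B = E / c   # plane-wave relation between E and B in vacuum

u_electric = 0.5 * epsilon_0 * E**2  # (1/2) eps E^2
u_magnetic = 0.5 * B**2 / mu_0       # (1/2) B^2 / mu

print(u_electric, u_magnetic)
# The two densities agree to floating-point precision:
assert math.isclose(u_electric, u_magnetic, rel_tol=1e-9)
```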
{ "domain": "physics.stackexchange", "id": 19750, "tags": "electromagnetism, electromagnetic-radiation" }
Display image from live webcam as taken, with four different color filters and in B/W
Question: I have a live webcam window which is subdivided into six panes, shown live, with the picture in different color filters and in black and white. Is there any way I can minimize the code? I believe it is considerably long. import cv2 import cv import numpy as np import matplotlib.image as mpimg from matplotlib import pyplot as plt def threshold_slow(T, image): # grab the image dimensions h = image.shape[0] w = image.shape[1] d = image.shape[2] # loop over the image, pixel by pixel for y in range(0, h): for x in range(0, w): for z in range(0, d): # threshold the pixel if image[y, x,z] >= T: image[y, x,z] = 255 else: image[y, x,z] = 0 # return the thresholded image return image def grab_frame(cam): #cv2.namedWindow("test") #img_counter = 0 while True: ret, color1 = cam.read() #r = 100.0 / color1.shape[1] r = 640.0 / color1.shape[1] #r = 0.25 dim = (100, int(color1.shape[0] * r)) dim = (640,480) # perform the actual resizing of the image and show it color = cv2.resize(color1, dim, interpolation = cv2.INTER_AREA) #color = color1.copy() b = color.copy() # set green and red channels to 0 b[:, :, 1] = 0 b[:, :, 2] = 0 g = color.copy() # set blue and red channels to 0 g[:, :, 0] = 0 g[:, :, 2] = 0 r = color.copy() # set blue and green channels to 0 r[:, :, 0] = 0 r[:, :, 1] = 0 #y= color.copy() #gray = cv2.cvtColor(l,cv2.COLOR_RGB2GRAY) #_,y = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY) #y = cv2.cvtColor(y, cv2.COLOR_GRAY2RGB) y = cv2.add(r,g) d = color.copy() gray1 = cv2.cvtColor(d,cv2.COLOR_RGB2GRAY) _,p = cv2.threshold(gray1, 60, 255, cv2.THRESH_BINARY) p = cv2.cvtColor(p, cv2.COLOR_GRAY2RGB) #threshold_slow(220,p) return [color,b,g,r,y,p] cam = cv2.VideoCapture(0) #cv2.waitKey(0) while(1): ret, color = cam.read() [color,b,g,r,y,p] = grab_frame(cam) horiz = np.hstack((color,b,g)) #verti = np.vstack((color,r)) horiz1 = np.hstack((r,y,p)) verti = np.vstack((horiz,horiz1)) cv2.imshow('HORIZONTAL', verti) if not ret: break k = 
cv2.waitKey(1) if k%256 == 27: # ESC pressed print("Escape hit, closing...") break cam.release() cv2.destroyAllWindows() Answer: First, get rid of all the unneeded whitespace. Use a consistent number of blank lines between functions (Python's official style-guide, PEP8, recommends two). PEP8 also recommends using spaces in lists, after the commas, and lower_case for all variables and functions (your T in threshold_slow violates this). Don't use magic numbers in your code. Give them readable names and if necessary make them global constants: WIDTH, HEIGHT = 640, 480 Next, since your images are already numpy arrays, use that fact. Your (unused) threshold_slow function can be replaced by a single line using numpy.where: def threshold_fast(T, image): return np.where(image >= T, 255, 0) Note that this does not modify the image inplace. It is bad practice to both modify in place and return the modified/new object. You should decide: either return a new object, or modify in place and return None. The import cv is not used (and I could not even find a way to install it anymore). Tuple assignment also works without a list on the left side, just do color, b, g, r, y, p = grab_frame(cam). The same is true when returning a tuple (return color, b, g, r, y, p). Arguably, I would split up your grab_frame code into subfunctions like red(image), green(image), blue(image), yellow(image), black_and_white(image). def red(image): """Copy only the red channel from image""" out = np.zeros_like(image) # for some reason red is in the last channel out[:, :, 2] = image[:, :, 2] return out ... While this move will not make your code shorter, it will make it more readable. Note that the canonical order is red, green, blue (RGB). If at all possible I would stick to that. I'm not sure why OpenCV would deviate from that. You should at least add a docstring to each of your functions as rudimentary documentation. See above for a short example. You can use while True instead of while(1). 
No parentheses are needed, and True is unambiguous (even for people who know both C-like languages, where 0 is false, and shell scripting languages like bash, where a non-zero exit status is false). I would also add a tile(images, cols) function that puts your images into rows and columns. You could just use the itertools recipe grouper for this: from itertools import zip_longest def grouper(iterable, n, fillvalue=None): "Collect data into fixed-length chunks or blocks" # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx args = [iter(iterable)] * n return zip_longest(*args, fillvalue=fillvalue) Since you seem to want to use different amounts of tiles, and different effects, it might make sense to keep a list of functions to apply to the base image, so that in the end you only need one call: def identity(x): return x def tile(images, cols, fillvalue=None): return np.vstack(np.hstack(group) for group in grouper(images, cols, fillvalue)) funcs = identity, red, black_and_white, canny images = (func(image) for func in funcs) # arrange them in a 2x2 grid cv2.imshow('HORIZONTAL', tile(images, cols=2, fillvalue=np.zeros_like(image))) If the number of images is not evenly divisible by the number of columns, the row is filled up with blank images. You should put your main calling code under an if __name__ == "__main__" guard to allow importing from this script.
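The np.where replacement suggested above can be checked on a tiny array (the pixel values are made-up test data):

```python
import numpy as np

def threshold_fast(T, image):
    """Vectorized threshold: 255 where pixel >= T, else 0. Returns a new array."""
    return np.where(image >= T, 255, 0)

image = np.array([[10, 220], [60, 255]], dtype=np.uint8)
out = threshold_fast(60, image)
print(out)
# [[  0 255]
#  [255 255]]
assert (out == np.array([[0, 255], [255, 255]])).all()
assert image[0, 0] == 10  # the input image is left untouched
```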
{ "domain": "codereview.stackexchange", "id": 36746, "tags": "python, opencv" }
Adding data from multiple Excel files to two lists
Question: I have 5 nested for loops below, to add rows of data from multiple files to one of two lists. Is there a more pythonic way of doing this? I've come across an iterator-generator method named iteritems() -- could this be used to make this code more pythonic? # 5 nested loops for root,dirs,files in os.walk(src): files = [ _ for _ in files if _.endswith('.xlsx') ] for file in files: wb = xlrd.open_workbook(os.path.join(root,file)) worksheets = wb.sheet_names() for worksheet_name in worksheets: if worksheet_name.rfind('7600') != -1 : sheet = wb.sheet_by_name(worksheet_name) keys = [sheet.cell(3, col_index).value for col_index in xrange(sheet.ncols)] for row_index in xrange(4, sheet.nrows): d = {keys[col_index]: sheet.cell(row_index, col_index).value for col_index in xrange(sheet.ncols)} if file.rfind('oam') != -1 : list_7600EoX.append(d) else: list_7600EoX_OAM.append(d) Answer: Your problem is not with 5 (or more) loops. Your problem is that you are mixing code of different natures (walking over the filesystem and matching names, and processing the files) in a single chunk of code. Separate it into different functions calling each other: def process_worksheet(wb, worksheet_name): sheet = wb.sheet_by_name(worksheet_name) # ... def process_xslx(path): wb = xlrd.open_workbook(path) worksheets = wb.sheet_names() for worksheet_name in worksheets: if worksheet_name.rfind('7600') != -1 : process_worksheet(wb, worksheet_name) for root,dirs,files in os.walk(src): files = [ _ for _ in files if _.endswith('.xlsx') ] for file in files: process_xslx(os.path.join(root, file)) Another option is to use generators to hide some details on how iteration is performed. 
For example, instead of walking over the filesystem, let a generator yield workbooks: def walk_xlsx(src): for root,dirs,files in os.walk(src): files = [ _ for _ in files if _.endswith('.xlsx') ] for file in files: wb = xlrd.open_workbook(os.path.join(root, file)) yield wb for wb in walk_xlsx(src): # filter() is also a generator which yields only # worksheet names that have '7600' in their names worksheets = filter(lambda wn: '7600' in wn, wb.sheet_names()) for worksheet_name in worksheets: # ...
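The same pattern works independently of xlrd: the generator hides how the walk is performed, and the caller just consumes items. Here is a self-contained sketch on a throwaway directory tree (the file names are invented for the demo):

```python
import os
import tempfile

def walk_by_extension(src, ext):
    """Yield full paths of files under src whose names end with ext."""
    for root, dirs, files in os.walk(src):
        for name in files:
            if name.endswith(ext):
                yield os.path.join(root, name)

# Demonstrate on a temporary directory tree:
with tempfile.TemporaryDirectory() as src:
    os.makedirs(os.path.join(src, "sub"))
    for name in ("a.xlsx", os.path.join("sub", "b.xlsx"), "notes.txt"):
        open(os.path.join(src, name), "w").close()

    found = sorted(os.path.basename(p) for p in walk_by_extension(src, ".xlsx"))
    print(found)  # ['a.xlsx', 'b.xlsx']
```

The caller loops over `walk_by_extension(...)` exactly as the answer's `walk_xlsx` loop does, without knowing whether the items come from `os.walk`, a database, or a test fixture.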
{ "domain": "codereview.stackexchange", "id": 13776, "tags": "python, file-system, excel" }
16-QAM demodulation
Question: I wonder how higher order QAM modulations (like 16-QAM) are demodulated in practice. Let us assume hard detection for simplicity. For 4-QAM, checking sign of real and imaginary parts is enough, but in my opinion this approach does not scale and using it for higher order modulations would require checking against multiple different thresholds. This simply seems wasteful. On the other hand, I could not come up with any alternatives except checking a distance between a received symbol and all possible symbols (essentially ML detector). This also does not seem like a good idea, especially in case of large constellations, like 1024-QAM. Kind regards Answer: For 4-QAM, checking sign of real and imaginary parts is enough, but in my opinion this approach does not scale and using it for higher order modulations would require checking against multiple different thresholds. This simply seems wasteful. Well, there's hardly a different way to do it! You first decide the sign of the real and imaginary part (and that typically gives you the first two bits: Gray coding!). Then, you know in which quadrant you are. You add / subtract a complex constant so that the quadrant lies centered. Then you again decide the real and imaginary part. Repeat. Or, you just go through a series of if / else if / else statements. In a software decider, the iterative approach is "natural", in a hardware decider (i.e. digital logic circuit), the second might be faster, because you can basically do as many comparisons as you want in parallel. On the other hand, I could not come up with any alternatives except checking a distance between a received symbol and all possible symbols (essentially ML detector). This also does not seem like a good idea, especially in case of large constellations, like 1024-QAM. Since the ML decision for a rectangular QAM is exactly what you get with threshold decisions, well, that wouldn't be any better. 
Let me say a few words on: Let us assume hard detection for simplicity. vs how higher order QAM modulations are demodulated in practice In practice, you use larger QAM (like, large, as in 1024 and up) because you need to get close to channel capacity at a high SNR. So, using a large QAM and then hard decision is basically a waste. Should have used a smaller QAM and less forward error correction redundancy instead, if the rate doesn't matter! In some cases you don't do the soft decoding for complexity reasons, but in general, in modern systems using large QAMs, the channel codes used allow for soft decoding. That's by design. So, you do use soft decision. It's basically the same, you do it first for the first bit (project onto the real axis, calculate the Log-Likelihood Ratio (LLR) directly from that value), then for the second value (project onto the imaginary axis, calculate the LLR value), and then you shift the whole constellation, and repeat. (There are simplifications/approximations done here.) You feed the LLRs obtained this way (aka softbits!) into your soft decoder.
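The sign-decide-and-recenter loop described above can be sketched in a few lines. This version uses a folding variant that yields Gray-labelled bits directly, and assumes a square QAM with constellation points at odd integers ±1, ±3, ... on each axis (a common but not universal convention):

```python
import math

def gray_pam_bits(x, levels):
    """Hard-decide one axis of a square QAM by repeated sign tests.

    Assumes constellation points at odd integers -(levels-1), ..., -1, 1,
    ..., levels-1 with Gray labelling along the axis.
    """
    bits = []
    offset = levels / 2
    while offset >= 1:
        bits.append(1 if x >= 0 else 0)
        # Fold rather than plain-shift: re-centre the chosen half so the
        # next sign test gives the next Gray-coded bit directly.
        x = offset - abs(x)
        offset /= 2
    return bits

def qam_hard_decision(z, M=16):
    """Hard decision for square M-QAM; I bits first, then Q bits (a convention)."""
    levels = math.isqrt(M)  # points per axis, e.g. 4 for 16-QAM
    return gray_pam_bits(z.real, levels) + gray_pam_bits(z.imag, levels)

print(qam_hard_decision(3 + 1j))      # [1, 0, 1, 1]
print(qam_hard_decision(2.7 + 0.8j))  # [1, 0, 1, 1] -- same decision despite noise
```

Each axis needs only log2(levels) sign tests, so even 1024-QAM costs 5 comparisons per axis instead of a distance check against all 1024 points.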
{ "domain": "dsp.stackexchange", "id": 10959, "tags": "demodulation" }
Potential inside metal sphere in field of external charge
Question: If I have an arrangement like this (where r is the radius of a hollow conducting sphere with centre C, and Q is a point charge), then what is the potential at any point between P and C? I think first we need to find the potential at P: $$V_P=\frac{Kq_{in}}{r}+\frac{KQ}{3r}$$ where $q_{in}$ is the induced charge on the sphere's surface. Now, as we go inside the sphere, this potential should remain constant, because the net electric field inside a conducting hollow sphere is always 0, and it should be the same at the centre too, i.e. at C. But somewhere in a book, while solving a problem related to the same situation, I found: "Potential at C due to Q and induced charges is $V_c =\frac{kQ}{2r}$, which is just due to Q [potential at C due to induced charges is zero]." I have been studying the concept over and over but I can't find my mistake; where did I interpret this wrongly? Answer: At any point $P$ inside the sphere, let the superposed electric field be $$\vec{E_P} = \vec{E}_{PQ} + \vec{E}_{PI}$$ where $\vec{E}_{PQ}$ means electric field at $P$ due to $Q$ and $\vec{E}_{PI}$ means electric field at $P$ due to all of the induced charges on the sphere. Then, \begin{align*} V_P = - \int_\infty^P \vec{E_P} \cdot d\vec{l} &= - \int_\infty^P \vec{E}_{PQ} \cdot d\vec{l} - \int_\infty^P \vec{E}_{PI} \cdot d\vec{l} \\ &= V_{PQ} + V_{PI} \end{align*} where $V_{PQ}$ means potential at $P$ due to $Q$ and $V_{PI}$ means sum of potential at $P$ due to all of the induced charges on the sphere. For point $C$ (the center of the sphere) \begin{align*} V_{CQ} &= \frac{kq}{2r} \\ V_{CI} &= \sum_i^{\text{all induced charges on the sphere}} \frac{kq_i}{r} = 0 \\ V_C &= V_{CQ} + V_{CI}\\ &= \frac{kq}{2r} \end{align*} Notice above that $V_{CI}$ is zero due to the assumption that the sphere is neutral to start with, so that the sum of all induced $q_i$ should be zero (conservation of charge). Now consider point P as located in OP's diagram. 
Since potentials of any two points inside the sphere are the same, and since $P$ and $C$ are both in the sphere, \begin{align*} V_P &= V_C \\ V_{PQ} + V_{PI} &= \frac{kq}{2r} \\ \frac{kq}{3r} + V_{PI} &= \frac{kq}{2r} \\ V_{PI} &= \frac{kq}{6r} \end{align*}
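The bookkeeping in this derivation can be double-checked with exact fractions, working in units of $kq/r$ so that every potential is a rational number:

```python
from fractions import Fraction

# All potentials expressed in units of kq/r.
V_PQ = Fraction(1, 3)  # potential at P due to Q alone (Q is a distance 3r from P)
V_C  = Fraction(1, 2)  # potential everywhere inside the conductor (Q is 2r from C)

# Equipotential condition inside the conductor: V_P = V_PQ + V_PI = V_C
V_PI = V_C - V_PQ
print(V_PI)  # 1/6  -> V_PI = kq / 6r, as derived above
```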
{ "domain": "physics.stackexchange", "id": 100242, "tags": "electrostatics, electric-fields, potential, conductors, method-of-images" }
Magnet spinning between two other magnets
Question: Suppose we have two magnets, MA, MB, and we have a third magnet MC in between the two magnets. Each magnet's north pole faces another magnet's south pole, and the magnets are placed horizontally side by side. We spin the magnet MC at a speed somewhere between super fast and slow. What will happen then? I mean, once MC's north pole faces MA's south pole and MC's south pole faces MB's north pole, then the next moment MC's south pole faces MA's south pole and MC's north pole faces MB's north pole. So what will happen then? Answer: The middle magnet is spinning, so it attracts and repels the other two magnets once per rotation. It is spinning "super fast" - that is, so fast that the attraction and repulsion phases are super short. The other magnets are just too heavy to even start moving visibly in one direction or the other before the direction of the force changes again. We could say "nothing happens" - except that the outer magnets oscillate slightly with each rotation. If the middle magnet would spin "super slow", the others would just jump to the middle one, stick to it, and rotate with it as if it were all one magnet. What happens if the middle magnet spins with a frequency in between? That's difficult, because much depends on how the rotation starts, and we only know that it has started... If the rotation starts slowly and then speeds up to the middle speed, different things can happen during the first rotation. The magnets could stick to the rotating one, or move away a little bit; that would have a big influence on what happens later.
{ "domain": "physics.stackexchange", "id": 15983, "tags": "electromagnetism, experimental-physics, magnetic-fields" }
Molar mass on phase transition
Question: I want to know whether, for a single substance in transition between liquid and gas, the molar mass will change. Answer: I want to know whether, for a single substance in transition between liquid and gas, the molar mass will change. No, not at all. Molar mass (MM) is simply the mass of $1$ mole of the substance. During a phase transition, matter is neither created nor destroyed, so the MM doesn't change. As an aside, the MM of a mixture of substances doesn't change either during phase transitions, for the same reason.
{ "domain": "physics.stackexchange", "id": 73819, "tags": "mass, phase-transition, physical-chemistry, molecules" }
Algorithm to find, for each vertex, a vertex that it can reach with the lowest cost in a graph
Question: We have a directed graph $G=(V,E)$ and each vertex $v\in V$ has a cost: $price(v)$. Our mission is to find an algorithm that runs in time $\mathcal{O}(|E|+|V|)$ and finds, $\forall v\in V$, the minimal price of all vertices $w$ which are reachable from $v$ (all vertices $w$ s.t. there is a path from $v$ to $w$). Note that $v$ can reach itself. As an example, if the graph were simply $3\to 5 \to 1$, then the minimal price for all vertices would be $1$, since all vertices can reach $1$ and it is minimal. I tried using a BFS-like algorithm but failed. Answer: Consider a strongly connected component $C$ of $G$ and notice that the sought value associated with all vertices $v$ in $C$ is the same. Let's call this value $P(C)$. Compute all strongly connected components of $G$ (there are several ways to do this in time $O(|V|+|E|)$, see for example Tarjan's algorithm) and let these be $C_1, \dots, C_k$. Now consider a directed graph $G'=(V', E')$ whose vertex set is $V' = \{ C_1, \dots, C_k\}$ and such that there is a directed edge $(C_i, C_j)$ in $E'$ if and only if $i \neq j$ and $(u,v) \in E$ for some vertex $u$ in $C_i$ and some vertex $v \in C_j$. Notice that $G'$ is a directed acyclic graph, and therefore admits a topological ordering. Compute such an ordering and assume w.l.o.g. that it is $\langle C_1, \dots, C_k \rangle$ (otherwise we can just rename the components). This can be done in time $O(|V'|+|E'|) = O(|V|+|E|)$ (actually, Tarjan's algorithm already returns the strongly connected components in reverse topological order). Notice that if some $C_i$ is a sink in $G'$ (i.e., it has no outgoing edges) then $P(C_i)$ can be immediately computed as $\min_{w \in {C_i}} \text{price}(w)$. If $C_i$ is not a sink, then we have: $$ P(C_i) = \min \big\{ \min_{w \in C_i} \text{price}(w), \min_{(C_i, C_j) \in E'} P(C_j) \big\}. $$ By examining the vertices of $G'$ in reverse topological order, we are able to compute all values $P(\cdot)$. 
Moreover, notice that the time needed to compute $P(C_i)$ is proportional to the number $|C_i|$ of vertices in $C_i$, plus the out-degree $\delta_i$ of $C_i$ in $G'$. Therefore, the overall time complexity of computing $P(\cdot)$ once the topological order is known is upper bounded (up to multiplicative constants) by: $$ \sum_{i=1}^k ( |C_i| + \delta_i ) = \sum_{i=1}^k |C_i| + \sum_{i=1}^k \delta_i = |V| + |E'| \le |V| + |E|. $$
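Here is one way to sketch the whole procedure in Python, using Kosaraju's algorithm (rather than Tarjan's) for the SCCs; Kosaraju conveniently discovers the components already in topological order of the condensation, so the final DP is a single backward sweep:

```python
from collections import defaultdict

def min_reachable_price(n, edges, price):
    """For each vertex v, the minimum price over all vertices reachable
    from v (v included). O(|V|+|E|): Kosaraju SCCs + DP on the condensation."""
    graph, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)

    # Pass 1: iterative DFS on G, record vertices by increasing finish time.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(v)
                stack.pop()
            elif not seen[w]:
                seen[w] = True
                stack.append((w, iter(graph[w])))

    # Pass 2: DFS on the reversed graph in decreasing finish time.
    # Components come out in topological order of the condensation, so
    # every inter-component edge goes from a lower id to a higher id.
    comp, k = [-1] * n, 0
    for root in reversed(order):
        if comp[root] != -1:
            continue
        comp[root], stack = k, [root]
        while stack:
            v = stack.pop()
            for w in rev[v]:
                if comp[w] == -1:
                    comp[w] = k
                    stack.append(w)
        k += 1

    # P(C) starts as the cheapest vertex inside each component ...
    best = [float('inf')] * k
    for v in range(n):
        best[comp[v]] = min(best[comp[v]], price[v])

    # ... then relax over condensation edges in reverse topological order.
    cgraph = defaultdict(list)
    for u, v in edges:
        if comp[u] != comp[v]:
            cgraph[comp[u]].append(comp[v])
    for c in range(k - 1, -1, -1):
        for d in cgraph[c]:
            best[c] = min(best[c], best[d])

    return [best[comp[v]] for v in range(n)]

# The question's chain 3 -> 5 -> 1 (vertex labels used as prices here):
print(min_reachable_price(3, [(0, 1), (1, 2)], [3, 5, 1]))  # [1, 1, 1]
```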
{ "domain": "cs.stackexchange", "id": 19626, "tags": "algorithms, graphs, graph-traversal" }
Are there any animals that maintain white fur year round?
Question: Besides examples of rare albino animals, it seems animals only have white fur during the winter. Additionally, and not coincidentally, the examples I've found live in the northern latitudes with predictable snow cover and have different colored coats in the summer months. Are there any examples of animals that maintain a white coat of fur year round, or are there simply no environments in which such a trait would be beneficial? Answer: Polar bears (Ursus maritimus) have white fur all year long. There are probably several other examples. @L.Diago gave sheep as an example. There are also all-white troglodyte species.
{ "domain": "biology.stackexchange", "id": 8798, "tags": "zoology, environment" }
Parity transformation for spinors (pinors) in odd spacetime dimensions
Question: What is the transformation law for spinors (pinors) under parity in an odd number of spacetime dimensions? I know how to derive the transformation properties of spinors (pinors) under parity in an even number of spacetime dimensions. Let $$\eta^{ab} = \mathrm{diag} (1, 1\ldots 1, -1, -1 \ldots -1) $$ where there are $p$ entries of $1$ and $q$ entries of $-1$, and $p+q=n$. Let the gamma matrices $\gamma^a$ generate the real Clifford algebra $\mathrm{Cl}(p,q)$, $$ \{ \gamma^a, \gamma^b \} = 2 \eta^{ab} $$ The Pin group $\mathrm{Pin}(p,q)$ is defined as the set of invertible elements $S_{\Lambda}$ of $\mathrm{Cl}(p,q)$ that satisfy $$ S_{\Lambda} \gamma^a S_{\Lambda}^{-1} = {\Lambda^a}_b \gamma^b $$ for some element ${\Lambda^a}_b$ of the orthogonal group $\mathrm{O}(p,q)$, and also $S_{\Lambda}S_{\Lambda}^{\tau} = \pm 1$, where the superscript $\tau$ denotes a linear operator that reverses the order of products, e.g. $(\gamma_0 \gamma_1 \gamma_2)^{\tau} = \gamma_2 \gamma_1 \gamma_0$. For each $\Lambda$, there are two solutions for $S_{\Lambda}$ that differ by a minus sign, and the map that sends these two solutions to $\Lambda$ is a $2-1$ homomorphism from $\mathrm{Pin}(p,q)$ to $\mathrm{O}(p,q)$. A parity transformation in the orthogonal group $\mathrm{O}(p,q)$ that inverts the $i$-th spatial axis is given by $P_i = \mathrm{diag}(1, 1 \ldots 1, -1, 1, 1, \ldots 1)$, with the entry of $-1$ acting on the $i$-th spatial axis. To find a parity transformation on a spinor (pinor), one solves the above equations for $\Lambda = P_i$. In an even number of spacetime dimensions, the solution is $$ S_{P_i} = \pm \gamma_i \gamma_n $$ where $\gamma_n = \prod_{a=0}^{n-1} \gamma_a$. Crucial to make this work is the fact that $\gamma_n$ anti-commutes with all of the gamma matrices $\gamma_a$ in an even number $n$ of spacetime dimensions. 
However, in an odd number of spacetime dimensions, the operator $\gamma_n$ is proportional to the identity matrix, and thus commutes with everything. And indeed I believe there is no operator in an odd number of spacetime dimensions that anticommutes with all the gamma matrices $\gamma_a$. As a result, I cannot see that there is a solution to the above equations for a parity transformation on spinors (pinors) in odd dimensions. The main reference I have been using is http://arxiv.org/abs/math-ph/0012006. In section 5, page 65, a similar conclusion is reached. Then it is said that the 2-1 homomorphism/covering map from $\mathrm{Pin}(p,q)$ to $\mathrm{O}(p,q)$ given above is not surjective in an odd number of spacetime dimensions, and in particular it does not 'hit' axis reflections in $\mathrm{O}(p,q)$. Answer: I believe I'm ready to answer my own question. The pin group can alternately be defined as the set of all invertible elements $S_{\Lambda} \in \mathrm{Cl}(p,q)$ satisfying $S_{\Lambda} S_{\Lambda}^{\tau} = \pm 1$ and $$ \alpha(S_{\Lambda}) \gamma^a S_{\Lambda}^{-1} = {\Lambda^a}_b \gamma^b $$ for some element ${\Lambda^a}_b \in \mathrm{O}(p,q)$. The map $\alpha: \mathrm{Cl}(p,q) \rightarrow \mathrm{Cl}(p,q)$ sends odd elements of $\mathrm{Cl}(p,q)$ to minus themselves, and even elements to themselves. It is an algebra automorphism. This defines a second $2-1$ homomorphism called the twisted map from $\mathrm{Pin}(p,q)$ to $\mathrm{O}(p,q)$ that is surjective in any number of spacetime dimensions. In particular, the elements that get mapped to a reflection of the $i$-th spatial axis are $\pm \gamma_i$. The major difference when using the twisted map to define parity transformations is that $\gamma^a$ is now a pseudovector since it transforms with a minus sign under reflections. 
The twisted map is the only surjective homomorphism from $\mathrm{Pin}(p,q)$ to $\mathrm{O}(p,q)$ in odd dimensions, and therefore it must be used to define the parity transform for spinors. In an even number of spacetime dimensions, there is a choice to be made. There is further ambiguity in the sign of the parity operator and the spacetime metric signature, which can lead to different parity operators. Which parity operator is 'correct' is a matter that is determined by experiment.
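The twisted-map statement can be verified numerically in the smallest odd case, $n=3$ with signature $(1,2)$, using the representation $\gamma^0=\sigma_3$, $\gamma^1=i\sigma_1$, $\gamma^2=i\sigma_2$ (one standard choice): taking $S=\gamma^1$, which is odd so $\alpha(S)=-S$, reproduces the reflection $\Lambda=\mathrm{diag}(1,-1,1)$ of the first spatial axis:

```python
import numpy as np

# 2+1D gamma matrices for eta = diag(+1, -1, -1), built from Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [s3, 1j * s1, 1j * s2]
eta = np.diag([1.0, -1.0, -1.0])

# Sanity check: the Clifford algebra {gamma^a, gamma^b} = 2 eta^{ab}
for a in range(3):
    for b in range(3):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(2))

# Twisted parity: S = gamma^1 is an odd element, so alpha(S) = -S.
S = gamma[1]
S_inv = np.linalg.inv(S)
Lam = np.diag([1.0, -1.0, 1.0])  # reflection of the first spatial axis

for a in range(3):
    lhs = (-S) @ gamma[a] @ S_inv                     # alpha(S) gamma^a S^{-1}
    rhs = sum(Lam[a, b] * gamma[b] for b in range(3))  # Lambda^a_b gamma^b
    assert np.allclose(lhs, rhs)
print("twisted map sends +/- gamma^1 to the axis reflection diag(1,-1,1)")
```

With the untwisted map of the question, no such $S$ exists in odd dimensions, which is exactly the obstruction described above.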
{ "domain": "physics.stackexchange", "id": 11961, "tags": "quantum-field-theory, fermions, spinors, parity" }
Bash script for user backups with BorgBackup
Question: I use BorgBackup to handle backups for my personal computer. The following is a bash script that creates backups of my system to various different targets. I have little experience with bash and hope to get some pointers regarding best practices and possible security issues/unanticipated edge cases. One specific question is about the need to unset environment variables (in particular BORG_PASSPHRASE) like I have seen people doing (for example here). From my understanding this should not be necessary because the environment is only local. Ideally I would also like to automatically ensure the integrity of the Borg repositories. I know there is borg check which I could run from time to time, but I am not sure if this is even necessary when using create, which supposedly already makes sure the repository is in a healthy state? Some notes regarding the code below: noti is a very simple script for notifications with i3status but could be replaced with anything else Some paths and names are replaced with dummies I cannot exit properly on errors of borg create because I back up /etc where some files have wrong permissions and BorgBackup will throw errors The TODO comments in the code are things I may want to look into at some time, but the code works for now Bash script: #!/bin/bash # User data backup script using BorgBackup with options for different target repositories # usage: backup.sh <targetname> # Each target must have a configuration file backup-<targetname>.conf that provides: # - pre- and posthook functions # - $repository - path to a valid borg repository or where one should be created # - $backup - paths or exclusion paths # - $pruning - borg pruning scheme # Additional borg environment variables may be provided and will not be overwritten. 
# Output is logged to LOGFILE="$HOME/.local/log/backup/<date>" # INSTALLATION # Place script and all target configs in $HOME/.local/scripts # $HOME/.config/systemd/user/borg-backup.service # ``` # [Unit] # Description=Borg User Backup # [Service] # Environment=SSH_AUTH_SOCK=/run/user/1000/keyring/ssh # ExecStart=%h/.local/scripts/backup.sh target1 # Nice=19 # IOSchedulingClass=2 # IOSchedulingPriority=7 # ``` # # $HOME/.config/systemd/user/borg-backup.timer # ``` # [Unit] # Description=Borg User Backup Timer # [Timer] # OnCalendar=*-*-* 8:00:00 # Persistent=true # RandomizedDelaySec=10min # WakeSystem=false # [Install] # WantedBy=timers.target # ``` # $ systemctl --user import-environment PATH # reload the daemon # $ systemctl --user daemon-reload # start the timer with # $ systemctl --user start borg-backup.timer # and confirm that it is running # $ systemctl --user list-timer # you can also run the service manually with # $ systemctl --user start borg-backup function error () { RED='\033[0;91m' NC='\033[0m' printf "${RED}%s${NC}\n" "${1}" notify-send -u critical "Borg" "Backup failed: ${1}" noti rm "BACKUP" noti add "BACKUP FAILED" exit 1 } ## Targets if [ $# -lt 1 ]; then echo "$0: Missing arguments" echo "usage: $0 targetname" exit 1 fi case "$1" in "target1"|"target2"|"target3") target="$1" ;; *) error "Unknown target" ;; esac # TODO abort if specified target is already running # exit if borg is already running, maybe previous run didn't finish #if pidof -x borg >/dev/null; then # error "Backup already running." 
#fi ## Logging and notification # notify about running backup noti add "BACKUP" # write output to logfile log="$HOME/.local/log/backup/backup-$(date +%Y-%m-%d-%H%M%S).log" exec > >(tee -i "$log") exec 2>&1 echo "$target" ## Global Prehook # create list of installed software pacman -Qeq > "$HOME/.local/log/package_list.txt" # create list of non backed up resources ls -R "$HOME/misc/" > "$HOME/.local/log/resources_list.txt" # create list of music titles ls -R "$HOME/music/" > "$HOME/music/music_list.txt" ## Global Config # set repository passphrase export BORG_PASSCOMMAND="cat $HOME/passwd.txt" compression="lz4" ## Target specific Prehook and Config CONFIGDIR="$HOME/.local/scripts" source "$CONFIGDIR"/backup-"$target".conf # TODO make non mandatory and only run if it is defined prehook || error "prehook failed" ## Borg # TODO use env variables in configs instead? # export BORG_REPO=$1 # export BORG_REMOTE_PATH=borg1 # borg create ::'{hostname}-{utcnow:%Y-%m-%dT%H:%M:%S}' $HOME SECONDS=0 echo "Begin of backup $(date)." borg create \ --verbose \ --stats \ --progress \ --compression $compression \ "$repository"::"{hostname}-{utcnow:%Y-%m-%d-%H%M%S}" \ $backup # || error "borg failed" # use prune subcommand to maintain archives of this machine borg prune \ --verbose \ --list \ --progress \ "$repository" \ --prefix "{hostname}-" \ $pruning \ || error "prune failed" echo "End of backup $(date). Duration: $SECONDS Seconds" ## Cleanup posthook noti rm "BACKUP" echo "Finished" exit 0 Example configuration file: backup="$HOME --exclude $HOME/movie --exclude $HOME/.cache --exclude $HOME/.local/lib --exclude $HOME/.thumbnails --exclude $HOME/.Xauthority " pruning="--keep-daily=6 --keep-weekly=6 --keep-monthly=6" repository="/run/media/username/DRIVE" prehook() { :; } # e.g. mount drives/network storage posthook() { :; } # unmount ... 
Answer: Your specific question One specific question is about the need to unset environment variables (in particular BORG_PASSPHRASE) like I have seen people doing (for example here) From my understanding this should not be necessary because the environment is only local. When you execute a Bash script with path/to/script.sh or bash path/to/script.sh, the script runs in a sub-shell, and cannot modify its caller environment. No matter if the script has export FOO=bar or unset FOO, these will not be visible in the caller shell. So the example you linked to, having BORG_PASSPHRASE="" (and followed by exit) is completely pointless. Use an array for $backup This setup will break if ever $HOME contains whitespace: backup="$HOME --exclude $HOME/movie --exclude $HOME/.cache --exclude $HOME/.local/lib --exclude $HOME/.thumbnails --exclude $HOME/.Xauthority " Even if that's unlikely to happen, it's easy enough and a good practice to make it safe, by turning it into an array: backup=( "$HOME" --exclude "$HOME/movie" --exclude "$HOME/.cache" --exclude "$HOME/.local/lib" --exclude "$HOME/.thumbnails" --exclude "$HOME/.Xauthority" ) And then in the calling command: borg create \ --verbose \ --stats \ --progress \ --compression $compression \ "$repository"::"{hostname}-{utcnow:%Y-%m-%d-%H%M%S}" \ "${backup[@]}" I suggest to do the same thing for $pruning too. Although there is no risk there (with the current value) of something breaking, it's a good practice to double-quote variables used on the command line. So it should be: pruning=(--keep-daily=6 --keep-weekly=6 --keep-monthly=6) And then when using it: "${pruning[@]}" Use stricter input validation Looking at: if [ $# -lt 1 ]; then echo "$0: Missing arguments" echo "usage: $0 targetname" exit 1 fi According to the usage message, only one argument is expected, but the validation logic allows any number above that, and will simply ignore them. 
I suggest to strengthen the condition: if [ $# != 1 ]; then Output error messages to stderr The error function, and a few other places print error messages on stdout. It's recommended to use stderr in these cases, for example: if [ $# -lt 1 ]; then echo "$0: Missing arguments" >&2 echo "usage: $0 targetname" >&2 exit 1 fi Don't use SHOUT_CASE for variables It's not recommended to use all caps variable names, as these may clash with environment variables defined by system programs. Use modern function declarations The modern syntax doesn't use the function keyword, like this: error() { # ... } Simplify case patterns In this code: case "$1" in "target1"|"target2"|"target3") target="$1" ;; You can use glob patterns to simplify that expression: case "$1" in "target"[1-3]) target="$1" ;; Why exit 0 at the end? The exit 0 as the last statement is strange: Bash will do the same thing automatically when reaching the end of the script. You can safely remove it. SECONDS Wow, I completely forgot about this feature of Bash, very cool, thanks for the reminder!
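The whitespace hazard behind the string-vs-array point can be demonstrated directly (a standalone sketch using a hypothetical path with a space, not part of the backup script itself):

```shell
#!/bin/bash
# Hypothetical directory containing a space, standing in for $HOME.
dir="/tmp/backup demo"

# String form: the unquoted expansion word-splits the path into two arguments.
backup_str="--exclude $dir/movie"
set -- $backup_str
str_argc=$#   # 3 arguments: --exclude, /tmp/backup, demo/movie

# Array form: each element expands as exactly one argument.
backup_arr=(--exclude "$dir/movie")
set -- "${backup_arr[@]}"
arr_argc=$#   # 2 arguments: --exclude, "/tmp/backup demo/movie"

echo "$str_argc $arr_argc"   # -> 3 2
```

The string form silently hands borg a mangled exclude path, which is exactly why the quoted array expansion `"${backup[@]}"` is the safe pattern.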
{ "domain": "codereview.stackexchange", "id": 43756, "tags": "bash" }
How effective is a level A hazmat suit for immersion in deadly substances?
Question: I know that level A type hazmat suits can protect against minor splashes, but could somebody in one survive without harm after complete immersion for a few minutes in deadly chemicals such as sodium cyanide, high concentration (100%) hydrogen peroxide, and 80% hydrochloric/nitric acid? Answer: Note that I'm used to standard lab safety measures including gas masks and masks with independent air supply, but I never used a Hazmat level A suit in research. You might want to take the following with a grain of salt! Sodium cyanide ($\ce{NaCN}$) is a solid with a melting point around 560 °C. While it is a solid, jumping into a pool of it shouldn't do any harm. Level A suits are like "your personal bubble" with independent air supply. They are however not that heat resistant: Do not jump into a pool of molten sodium cyanide! High concentration (100%) hydrogen peroxide 3% solutions of hydrogen peroxide ($\ce{H2O2}$) in water are used for wound disinfection, and solutions up to 18% are used to bleach hair. (In German, the term wasserstoffblond was used for bleach blonde.) Much higher concentrations or water-free (100%) $\ce{H2O2}$ are in another league entirely. Hydrogen peroxide can decompose into water and oxygen: $$\ce{2 H2O2 ->[\textrm{cat}] 2 H2O + O2}$$ The reaction releases some heat, may happen spontaneously, and is catalyzed by some metals. Add some burnable material that can be vaporized and you have a rocket propellant. Think Messerschmitt Me 163. If the hazmat suit is tight and the material withstands "bleaching" by the hydrogen peroxide, there's still the question whether hooks, valves, visor frames or any other metal parts that might be part of the whole gear will catalyze the decomposition. 80% hydrochloric acid That does not exist in water: water won't take up that much hydrogen chloride gas under normal conditions. Concentrated hydrochloric acid (~ 38%) should be fine. 80% nitric acid That's close to fuming nitric acid and again a pretty strong oxidant. 
I'd say that you're safe, but don't take my word for it ;)
{ "domain": "chemistry.stackexchange", "id": 12187, "tags": "everyday-chemistry, safety" }
Console colorizer
Question: To be able to apply various colors to the console I created a ConsoleColorizer. It's really simple. It just takes an XML and renders it to the console with the colors specified. The element names are actually optional and can be any names. They are only required to parse the XML. What matters are the attribute and color names. I didn't know how to solve it with less effort without inventing a new markup. internal class ConsoleColorizer { public static void Render(string xml) { Render(XElement.Parse(xml).Nodes()); } public static void Render(IEnumerable<XNode> xNodes) { Render(xNodes, null, null); } private static void Render(IEnumerable<XNode> xNodes, ConsoleColor? lastForegroundColor, ConsoleColor? lastBackgroundColor) { foreach (var xChildNode in xNodes) { var xElement = xChildNode as XElement; if (xElement != null) { Render( xElement.Nodes(), SetForegroundColor(xElement), SetBackgroundColor(xElement) ); } else { RestoreForegroundColor(lastForegroundColor); RestoreBackgroundColor(lastBackgroundColor); Console.Write(((XText)xChildNode).Value); } } Console.ResetColor(); } private static ConsoleColor? SetForegroundColor(XElement xElement) { var foregroundColor = (ConsoleColor)0; if (Enum.TryParse<ConsoleColor>(xElement.Attribute("fg")?.Value, true, out foregroundColor)) { return Console.ForegroundColor = foregroundColor; } return null; } private static ConsoleColor? SetBackgroundColor(XElement xElement) { var backgroundColor = (ConsoleColor)0; if (Enum.TryParse<ConsoleColor>(xElement.Attribute("bg")?.Value, true, out backgroundColor)) { return Console.BackgroundColor = backgroundColor; } return null; } private static void RestoreForegroundColor(ConsoleColor? consoleColor) { if (consoleColor.HasValue) { Console.ForegroundColor = consoleColor.Value; } } private static void RestoreBackgroundColor(ConsoleColor? 
consoleColor) { if (consoleColor.HasValue) { Console.BackgroundColor = consoleColor.Value; } } } Example: var xml = @"<line>Hallo <color fg=""yellow"">colored</color> console! <color fg=""darkred"" bg=""darkgray"">These are <color fg=""white"" bg=""blue"">nested</color> colors</color>.</line>"; ConsoleColorizer.Render(xml); Answer: The code in question looks mostly good to me, but it could be enhanced a little bit. private static void Render() The call to Console.ResetColor(); doesn't belong here because it isn't necessary to reset the colors each time. Remarks from link above: The foreground and background colors are restored to the colors that existed when the current process began. So it is sufficient to call that method at the end of the Render(IEnumerable<XNode> xNodes) method. The passed in lastForegroundColor and lastBackgroundColor are only needed if one of the childnodes is a XElement so if we extract the rendering of a XElement to a separate method, the former Render() method would look after renaming to RenderInternal() like so private static void RenderInternal(IEnumerable<XNode> xNodes) { foreach (var xChildNode in xNodes) { var xElement = xChildNode as XElement; if (xElement != null) { RenderInternal(xElement); } else { Console.Write(((XText)xChildNode).Value); } } } The extracted RenderInternal(XElement) method could look like so, if we just use the current implemented methods private static void RenderInternal(XElement xElement) { ConsoleColor lastForegroundColor = Console.ForegroundColor; ConsoleColor lastBackgroundColor = Console.BackgroundColor; SetForegroundColor(xElement); SetBackgroundColor(xElement); RenderInternal(xElement.Nodes()); RestoreForegroundColor(lastForegroundColor); RestoreBackgroundColor(lastBackgroundColor); } which removes the strangeness of a SetXxx method to return something. But hey, I just don't like how this is looking. 
So let us introduce a struct ConsoleColors to hold both the Foreground- and the Backgroundcolor like so public struct ConsoleColors { public ConsoleColor BackgroundColor { get;} public ConsoleColor ForegroundColor { get;} public static ConsoleColors Current { get { return new ConsoleColors(Console.ForegroundColor, Console.BackgroundColor); } } public ConsoleColors(ConsoleColor foregroundColor, ConsoleColor backgroundColor) :this() { ForegroundColor = foregroundColor; BackgroundColor = backgroundColor; } } and add a method which sets a ConsoleColors struct like so private static void SetColors(ConsoleColors colors) { Console.ForegroundColor = colors.ForegroundColor; Console.BackgroundColor = colors.BackgroundColor; } and refactor the RenderInternal(XElement) method like so private static void RenderInternal(XElement xElement) { ConsoleColors savedColors = ConsoleColors.Current; SetForegroundColor(xElement); SetBackgroundColor(xElement); RenderInternal(xElement.Nodes()); SetColors(savedColors); } which looks better, but still has the calls SetForegroundColor and SetBackgroundColor. So we can add an extension method which turns a XElement to a ConsoleColors so we can use the SetColors method. 
public static ConsoleColors ToConsoleColors(this XElement xElement, string foregroundAttributeName = "fg", string backgroundAttributeName = "bg") { if (xElement == null) { return ConsoleColors.Current; } var foregroundColor = Console.ForegroundColor; Enum.TryParse<ConsoleColor>(xElement.Attribute(foregroundAttributeName)?.Value, true, out foregroundColor); var backgroundColor = Console.BackgroundColor; Enum.TryParse<ConsoleColor>(xElement.Attribute(backgroundAttributeName)?.Value, true, out backgroundColor); return new ConsoleColors(foregroundColor, backgroundColor); } which leads to internal class ConsoleColorizer { public static void Render(string xml) { Render(XElement.Parse(xml).Nodes()); } public static void Render(IEnumerable<XNode> xNodes) { RenderInternal(xNodes); Console.ResetColor(); } private static void RenderInternal(IEnumerable<XNode> xNodes) { foreach (var xChildNode in xNodes) { var xElement = xChildNode as XElement; if (xElement != null) { RenderInternal(xElement); } else { Console.Write(((XText)xChildNode).Value); } } } private static void RenderInternal(XElement xElement) { ConsoleColors savedColors = ConsoleColors.Current; SetColors(xElement.ToConsoleColors()); RenderInternal(xElement.Nodes()); SetColors(savedColors); } private static void SetColors(ConsoleColors colors) { Console.ForegroundColor = colors.ForegroundColor; Console.BackgroundColor = colors.BackgroundColor; } } }
{ "domain": "codereview.stackexchange", "id": 22619, "tags": "c#, recursion, console, xml" }
How to interpret the conservation of mass equation?
Question: I want to know what the difference is between the following equations: $$ \frac{\partial h_d}{\partial t} = -\nabla\cdot(\vec{v_s} h_d) = -u_s\frac{\partial h_d}{\partial x} - v_s\frac{\partial h_d}{\partial y} - h_d\frac{\partial u_s}{\partial x} - h_d\frac{\partial v_s}{\partial y}$$ $$ \frac{\partial h_d}{\partial t} = -\vec{v_s}\cdot\nabla h_d = -u_s\frac{\partial h_d}{\partial x} - v_s\frac{\partial h_d}{\partial y}$$ Here, $\vec{v_s}$ is the 2D velocity vector with components $u_s$ and $v_s$, and $h_d$ the thickness of a rock layer that is being advected. How does the addition of the last 2 terms in the first equation change the physical meaning of the equation? Does it make a difference if the velocity field is constant or non-constant in time? Answer: The second equation only holds if the material is incompressible; i.e. if $\nabla\cdot {\bf v}=0$. I imagine that rock is rather incompressible under most circumstances. It will change density if it gets hot though --- for example in subduction.
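The link between the two forms is just the product rule for the divergence; writing it out makes the incompressibility condition in the answer explicit:

$$\nabla\cdot(\vec{v_s}\,h_d) = \vec{v_s}\cdot\nabla h_d + h_d\,(\nabla\cdot\vec{v_s})$$

so the flux (conservative) form reduces to the pure advection form exactly when $\nabla\cdot\vec{v_s} = \frac{\partial u_s}{\partial x} + \frac{\partial v_s}{\partial y} = 0$. Note that this condition involves only the spatial divergence of the velocity field at each instant; whether the field also varies in time is irrelevant to the reduction.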
{ "domain": "physics.stackexchange", "id": 89409, "tags": "conservation-laws, differential-equations" }
Find number of ways to traverse matrix of 0's and 1's
Question: I solved a Daily Coding Challenge, but i suspect my code could be optimised. The challenge is the following: You are given an N by M matrix of 0s and 1s. Starting from the top left corner, how many ways are there to reach the bottom right corner? You can only move right and down. 0 represents an empty space while 1 represents a wall you cannot walk through. For example, given the following matrix: [[0, 0, 1], [0, 0, 1], [1, 0, 0]] Return two, as there are only two ways to get to the bottom right: Right, down, down, right Down, right, down, right The top left corner and bottom right corner will always be 0. You can see my solution below. Any tips on how to improve on it are very welcome. def solution(matrix, x=0, y=0): count = 0 right, left = False, False if y == len(matrix) - 1 and x == len(matrix[0]) - 1: # found a way return 1 if x < len(matrix[0]) - 1: if matrix[y][x+1] == 0: count += solution(matrix, x+1, y) # look right right = True if y < len(matrix) - 1: if matrix[y+1][x] == 0: count += solution(matrix, x, y+1) # look down left = True if not right and not left: # dead end return 0 return count if __name__ == "__main__": print(solution([[0, 0, 0], [0, 0, 0], [0, 1, 0]])) Answer: You don't need right and left: they add nothing that isn't already covered by count. Compute lengths once rather than repeatedly. Check for empty matrix rows and/or columns, or invalid x and y, if your code needs to handle such edge cases. def solution(matrix, x=0, y=0): count = 0 y_limit = len(matrix) - 1 x_limit = len(matrix[0]) - 1 # Success. if y == y_limit and x == x_limit: return 1 # Look right. if x < x_limit and matrix[y][x + 1] == 0: count += solution(matrix, x + 1, y) # Look down. if y < y_limit and matrix[y + 1][x] == 0: count += solution(matrix, x, y + 1) return count
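Beyond those points, the plain recursion revisits the same cell many times on larger grids; a memoized variant (an addition beyond the review above, with a hypothetical name `count_paths` to avoid clashing with the reviewed function) computes each cell's count only once:

```python
from functools import lru_cache

def count_paths(matrix):
    y_limit, x_limit = len(matrix) - 1, len(matrix[0]) - 1

    @lru_cache(maxsize=None)
    def walk(x, y):
        if y == y_limit and x == x_limit:  # reached the bottom-right corner
            return 1
        count = 0
        if x < x_limit and matrix[y][x + 1] == 0:  # step right
            count += walk(x + 1, y)
        if y < y_limit and matrix[y + 1][x] == 0:  # step down
            count += walk(x, y + 1)
        return count

    return walk(0, 0)

print(count_paths([[0, 0, 1], [0, 0, 1], [1, 0, 0]]))  # -> 2
```

This turns the worst case from exponential in the grid size into O(N·M), since each (x, y) pair is evaluated at most once.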
{ "domain": "codereview.stackexchange", "id": 41754, "tags": "python, python-3.x, programming-challenge, matrix" }
ros control assertion failed
Question: I wrote a custom controller for my robot. I used ros control boilerplate (by dave coleman) for hardware interface. The code compiles but I get the following error. myrobot_hw_main: /opt/ros/lunar/include/hardware_interface/posvel_command_interface.h:72: void hardware_interface::PosVelJointHandle::setCommandVelocity(double): Assertion `cmd_vel_' failed. [myrobot/myrobot_hardware_interface-1] process has died [pid 96416, exit code -6, cmd /home/user/catkin_ws/devel/lib/control_pkg/myrobot_hw_main __name:=myrobot_hardware_interface __log:=/home/user/.ros/log/aee6bd46-702a-11e8-82d2-000c29ad2621/myrobot-myrobot_hardware_interface-1.log]. log file: /home/user/.ros/log/aee6bd46-702a-11e8-82d2-000c29ad2621/myrobot-myrobot_hardware_interface-1*.log I wrote a controller for omni drive and if I understand it properly, this is where the problem is. I used this interface called PosVelJointInterface from posvel_command_interface.h. The problem occurs when I executed the following lines. Basically in my controller, I take in the twist command, do some math and generate four velocities which I'm trying to populate using the setCommandVelocity method. hardware_interface::PosVelJointHandle wheel_1; wheel_1.setCommandVelocity(wheelVelocities.velocity[0]); Originally posted by venkisagunner on ROS Answers with karma: 89 on 2018-06-15 Post score: 0 Answer: According to the sources (here), cmd_vel_ is a pointer to a double that gets initialised in the ctor (here). The code you show ("The problem occurs when I executed the following lines") doesn't initialise the PosVelJointHandle properly (or at least: it doesn't provide any values for the arguments), it's using this ctor, which sets all pointers to 0. If you then try to use setCommandVelocity(..), the assert is triggered. Two comments: if you're writing a controller, why are you intantiating hardware_interface classes directly? Shouldn't that be the responsibility of your / a hardware_interface implementation for your robot? 
read up / find examples of how to initialise a PosVelJointHandle correctly and update your code (if you have determined you really should be instantiating those classes yourself). Originally posted by gvdhoorn with karma: 86574 on 2018-06-18 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by venkisagunner on 2018-06-18: I realised the mistake and solved the problem. Forgot to close the issue. My apologies. Thank you for taking your time in helping me out here. Comment by gvdhoorn on 2018-06-18: Could you help future readers by describing how you eventually solved your problem? Comment by venkisagunner on 2018-06-18: I found a way here. This is available in the ROS controllers git repo. Instead of using a PosVelJointHandle, I used the JointHandle and JointStateInterface .. Comment by venkisagunner on 2018-06-18: And integrating that with the boilerplate was trivial. In my previous approach, I declared the joint handles but didn't call the getHandle (line 319) method, which resulted in no resource allocation. When I ran rosrun controller_manager controller_manager myrobot/list_controllers, I found no resources
{ "domain": "robotics.stackexchange", "id": 31022, "tags": "ros, ros-lunar, ros-control, hardware-interface" }
Why do Earth and Venus have different atmospheres?
Question: Venus appears to be the closest to Earth in mass, density, size, etc. - though they clearly have different atmospheres. Why do Earth and Venus have different atmospheres? Answer: Note: This response is bereft of references. I'll try to add them later. There is no definitive answer. There are a lot of conjectures, however. Whatever the cause, the Earth is markedly void of carbon compared to Venus. The amount of CO2 in Venus's atmosphere corresponds to a 0.88 km thick layer of carbonate. The Earth's geosphere (lithosphere plus oceans plus atmosphere) collectively contains about half that amount of carbon, almost all of it locked up in the lithosphere. The huge amount of carbon in Venus's atmosphere versus the paucity of carbon in the Earth's atmosphere is the primary factor that distinguishes the two atmospheres. Planet formation. The leading hypothesis regarding the late formation of the Earth is the giant impact hypothesis. Multiple simulations suggest that this collision had to be rather oblique. An oblique collision between a Mars-sized body and a not-quite Earth-sized body would have drastically changed the Earth's rotation rate prior to and after that collision. If the Earth was rotating very fast prior to the collision, that fast rotation combined with the collision could have resulted in the Earth losing a large chunk of its primordial atmosphere. Venus does not have a large Moon. That suggests that its formation was a bit less tumultuous than the formation of the Earth. With no big collision to eject that primordial atmosphere, Venus may well have been operating in runaway greenhouse mode from the very onset. Water. Whether Venus first had free water on its surface or in its atmosphere has long been a subject of debate. Venus is now nearly devoid of water. 
If water was present in Venus early atmosphere, it would have served to even more strongly magnify the already huge greenhouse effect of a thick atmosphere that is opaque in the thermal infrared frequencies. If the very young Venus did have liquid water on its surface, it wouldn't have lasted very long. Plate tectonics. One of the consequences of a very thick atmosphere and no liquid water is no plate tectonics. Venus has a very thick greenhouse atmosphere, which makes for a very hot surface. The high temperature of Venus's surface means the surface healed itself too rapidly. Plates couldn't form on Venus (Bercovici 2014). Water is an important lubricant for plate tectonics, particularly for subduction (Mian 1990). (But also see (Fei 2013) for an opposing view.) With no plate tectonics, there was no mechanism to bury carbon inside of Venus. Plate tectonics developed fairly early on in Earth's history. By the time the Sun became hot enough (the early Sun was faint), the Earth had already started the process of sequestering away carbon into the lithosphere. Life. Life loves carbon. It is one of the key agents by which atmospheric carbon is transferred to the lithosphere. Life apparently never had a chance on Venus. References Bercovici D. and Ricard Y., "Plate tectonics, damage and inheritance," Nature 508, 513-516 (2014) Fei et al., "Small effect of water on upper-mantle rheology based on silicon self-diffusion coefficients," Nature 498, 213–215 (2013) Mian Z. and Tozer D., "No water, no plate tectonics: convective heat transfer and the planetary surfaces of Venus and Earth," Terra Nova 2:5, 455-459 (1990)
{ "domain": "earthscience.stackexchange", "id": 123, "tags": "atmosphere, planetary-science" }
How to utilize libserial with ROS -Ubuntu 12.04 - Hydro
Question: I have an old ROS Fuerte project which code i wanted to reuse in an another ROS Hydro node, this code uses libserial library. I have a problem to find out the correct way to include this library into my new node via CMakeList.txt Here what i include in my header: #include <SerialStream.h> using namespace LibSerial; SerialStream fd; I'm not totally sure where the error comes from but doesn't the "undefined reference" mean that problem could be within the compiling and linking? Here is the error code: #### #### Running command: "make cmake_check_build_system" in "/home/x/Dropbox/catkin_ws/build" #### #### #### Running command: "make -j2 -l2" in "/home/x/Dropbox/catkin_ws/build" #### [ 0%] [ 0%] Built target std_msgs_generate_messages_cpp Built target std_msgs_generate_messages_py [ 20%] [ 20%] Built target std_msgs_generate_messages_lisp Built target serial_interface_generate_messages_cpp [ 60%] [ 80%] Built target serial_interface_generate_messages_py Built target serial_interface_generate_messages_lisp [ 80%] Built target serial_interface_generate_messages Linking CXX executable /home/x/Dropbox/catkin_ws/devel/lib/serial_interface/serial_interface /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crt1.o: In function `_start': (.text+0x20): undefined reference to `main' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o: In function `CA::RoverInterface::RoverInterface(ros::NodeHandle)': serial_interface.cpp:(.text+0x514): undefined reference to `LibSerial::SerialStream::Open(std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Ios_Openmode)' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x48): undefined reference to `LibSerial::SerialStreamBuf::showmanyc()' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x50): undefined 
reference to `LibSerial::SerialStreamBuf::xsgetn(char*, long)' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x58): undefined reference to `LibSerial::SerialStreamBuf::underflow()' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x68): undefined reference to `LibSerial::SerialStreamBuf::pbackfail(int)' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x70): undefined reference to `LibSerial::SerialStreamBuf::xsputn(char const*, long)' CMakeFiles/serial_interface.dir/src/serial_interface.cpp.o:(.rodata._ZTVN9LibSerial15SerialStreamBufE[vtable for LibSerial::SerialStreamBuf]+0x78): undefined reference to `LibSerial::SerialStreamBuf::overflow(int)' collect2: ld returned 1 exit status make[2]: *** [/home/x/Dropbox/catkin_ws/devel/lib/serial_interface/serial_interface] Error 1 make[1]: *** [serial_interface/CMakeFiles/serial_interface.dir/all] Error 2 make: *** [all] Error 2 EDIT 1: I have updated my CMakeList.txt file to be as follows with the new information I got from an answer: cmake_minimum_required(VERSION 2.8.3) project(serial_interface) find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs message_generation ) include_directories(include ${catkin_INCLUDE_DIRS}) add_message_files( FILES Control.msg ) generate_messages( DEPENDENCIES std_msgs ) catkin_package( INCLUDE_DIRS include CATKIN_DEPENDS roscpp rospy std_msgs message_runtime ) add_executable(serial_interface src/serial_interface.cpp) add_dependencies(serial_interface serial_interface_generate_messages_cpp) target_link_libraries(serial_interface ${catkin_LIBRARIES} libserial0 ) install(DIRECTORY include/${PROJECT_NAME}/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} FILES_MATCHING PATTERN "*.h" PATTERN ".svn" EXCLUDE ) And 
if i run dpkg -L libserial0 I do get the following: /. /usr /usr/share /usr/share/doc /usr/share/doc/libserial0 /usr/share/doc/libserial0/copyright /usr/share/doc/libserial0/changelog.Debian.gz /usr/lib /usr/lib/libserial.so.0.0.0 /usr/lib/libserial.so.0 Therefore I should have the correct library installed on my system? Anyway i still receive this error while trying to run catkin_make. #### #### Running command: "make cmake_check_build_system" in "/home/x/Dropbox/catkin_ws/build" #### #### #### Running command: "make -j2 -l2" in "/home/x/Dropbox/catkin_ws/build" #### [ 0%] [ 0%] Built target std_msgs_generate_messages_cpp Built target std_msgs_generate_messages_py [ 0%] [ 20%] Built target std_msgs_generate_messages_lisp Built target serial_interface_generate_messages_cpp [ 40%] [ 80%] Built target serial_interface_generate_messages_lisp Built target serial_interface_generate_messages_py Linking CXX executable /home/x/Dropbox/catkin_ws/devel/lib/serial_interface/serial_interface [ 80%] Built target serial_interface_generate_messages /usr/bin/ld: cannot find -llibserial0 collect2: ld returned 1 exit status make[2]: *** [/home/x/Dropbox/catkin_ws/devel/lib/serial_interface/serial_interface] Error 1 make[1]: *** [serial_interface/CMakeFiles/serial_interface.dir/all] Error 2 make: *** [all] Error 2 Invoking "make" failed Originally posted by ajr_ on ROS Answers with karma: 97 on 2014-02-10 Post score: 0 Answer: "Undefined reference" usually means that you're compiling against the headers for a library, but aren't linking to that library during your final link phase. In CMake, you usually link an executable to a library using the target_link_libraries() command. Depending on if or how you're finding the libserial package from CMake, you'll want to do different things. 
If you're using find_package to find libserial, you'll want something like: target_link_libraries(my_node ${libserial_LIBRARIES}) But if you're just assuming that the system has libserial installed, and you normally pass -lserial to link to it, you probably want to do: target_link_libraries(my_node serial) Originally posted by ahendrix with karma: 47576 on 2014-02-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by ajr_ on 2014-02-11: I updated my question and it seems I do not have the correct library name which to use for linking? Comment by joq on 2014-02-11: Leave off the lib prefix. You probably want -lserial, so put serial in your target_link_libraries(), as @ahendrix suggested. Comment by ajr_ on 2014-02-11: ok this works, although there is another error now but its not related to this question anymore. Comment by ajr_ on 2014-02-11: the new problem is this error, not sure if its still related to linking: /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crt1.o: In function _start': (.text+0x20): undefined reference to main' Comment by ahendrix on 2014-02-11: That sounds like a new question. Comment by ajr_ on 2014-02-11: Ok, actually missing the main function & node. Beginners mistake... This problem is solved, thanks to all for the help!
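Pulling the accepted fix into the asker's CMakeLists.txt, the only change needed is the library name: the linker flag -lserial resolves to libserial.so, so both the lib prefix and the trailing 0 from the Debian package name libserial0 must be dropped (a sketch of just the affected lines):

```cmake
add_executable(serial_interface src/serial_interface.cpp)
add_dependencies(serial_interface serial_interface_generate_messages_cpp)
target_link_libraries(serial_interface
  ${catkin_LIBRARIES}
  serial   # -lserial; "libserial0" is the package name, not the linker name
)
```

Note the runtime package libserial0 only ships libserial.so.0 — the unversioned libserial.so symlink the linker searches for normally comes with the corresponding -dev package, which the asker presumably had installed since SerialStream.h was found at compile time.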
{ "domain": "robotics.stackexchange", "id": 16934, "tags": "catkin, ros-hydro, ubuntu, ubuntu-precise, linking" }
How to model compositional data?
Question: What is the best way to model compositional data problems? Compositional data is when each example or sample is a vector that sums to 1 (or 100%). In my case, I am interested in the composition of minerals in a rock and I have sensors that tell me the sum of the minerals but not the components that make up the sum. For example, let's say I have two minerals, $m_1$ and $m_2$, that are made up of 3 elements (like copper and other elements from the periodic table) which form a vector of length 3: m1 = [0.1, 0.3, 0.6] m2 = [0.6, 0.2, 0.2] If a rock has 25% of $m_1$ and 75% of $m_2$, the sensor reading produces the sum of the two minerals (shown below): $$ \begin{align} &0.25*m_1 + 0.75*m_2 \\ =&0.25*[0.1, 0.3, 0.6] + 0.75*[0.6, 0.2, 0.2] \\ =&[0.475, 0.225, 0.3] \end{align} $$ I would like to know how to model and solve the problem of unmixing a composition into its underlying components, where the sum of the elements is normalized to 100% (e.g. $0.25m_1 + 0.75m_2$ has the same composition as $0.50m_1 + 1.50m_2$). Furthermore, my example is simplistic; in reality a composition can have more than just 2 minerals (up to 3000) and each mineral is made up of 118 elements, not just 3 (all the elements of the periodic table - though many elements will be zero). The elemental composition of a mineral is assumed to be known (definition of $m_1$ and $m_2$ in the example). Also, the sensor reading is noisy - each element of the observed composition is assumed to have Gaussian noise. Answer: First normalize the result vector. E.g. [.95, .45, .6] by dividing by 2 (sum of the members); giving [.475, .225, .3]. Let $x$ be the share of the first mineral, then $(1-x)$ is the share of the second mineral. Solve the three linear equations, which must give the same result. $.1 * x + .6 * (1-x) = .475$ $.3 * x + .2 * (1-x) = .225$ $.6 * x + .2 * (1-x) = .3$ Result is $x = 1/4$ as expected. 
UPDATE The above proposed solution works, of course, only if the number of minerals is less than or equal to the number of elements. The update of the question clearly states that this is not the case (3000 minerals and 118 elements). Let's simulate the opposite case on a small example with 3 minerals and 2 elements. m1 <- c(0.2, 0.8) m2 <- c(0.4, 0.6) m3 <- c(0.9, 0.1) and with the mix of minerals x <- c(.25, .65, .1) which produces a measurement of t(matrix(c(m1,m2,m3),3,2, byrow= T)) %*% x [,1] [1,] 0.4 [2,] 0.6 This gives the following linear equations $m_1 + m_2 + m_3 = 1$ $.2 m_1 + .4 m_2 + .9 m_3 = .4$ $.8 m_1 + .6 m_2 + .1 m_3 = .6$ The solution of the equations is not unique: $m_3 \in [0, 1 / 3.5]$ $m_1 = 2 - 2 * m_2 - 4.5 * m_3$ $m_2 = 1 - 3.5 * m_3$ Some alternative solutions are provided below 0 1 0 0.125 0.825 0.050 0.375 0.475 0.150 0.5 0.3 0.2 0.625 0.125 0.250 0.71428571 0.00000000 0.28571429 This is of course an oversimplified example, but it shows that you should carefully select the optimization goal, as there could be several "equally good" solutions. For example, promoting sparsity will find the solution [0 1 0], which is far from the mix we used.
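For the two-mineral case, the unmixing reduces to an overdetermined linear system that can be solved by least squares. A minimal sketch (values taken from the example above):

```python
import numpy as np

# Known mineral compositions as columns, and the normalized sensor reading.
m1 = np.array([0.1, 0.3, 0.6])
m2 = np.array([0.6, 0.2, 0.2])
A = np.column_stack([m1, m2])
b = np.array([0.475, 0.225, 0.3])

# Least-squares solve for the mineral shares; with noiseless data this
# recovers the exact mix, and with Gaussian sensor noise it is the
# maximum-likelihood estimate.
shares, *_ = np.linalg.lstsq(A, b, rcond=None)
print(shares)  # approximately [0.25, 0.75]
```

With more minerals than elements (the underdetermined case discussed in the update), a plain least-squares fit is no longer unique, and a regularizer such as non-negativity or sparsity is needed to pick one solution.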
{ "domain": "datascience.stackexchange", "id": 515, "tags": "regression" }
Library to be used against a specific REST web service
Question: I have been trying to generate a basic library that I can use at work, to call different REST calls towards a software vendor that we are using. I would love someone's opinion on it, and what I could do better, hopefully with some examples or links, and some descriptions. I am by no means any good at Python, but I tried my best. The library is meant to just be imported into an existing script. Questions: 1. From a library point of view, can this be designed any differently? 2. How could I make it any shorter / more understandable? 3. I feel that I need to learn more about *args / **kwargs, is that needed here? 4. General coding style, naming conventions or breaking any standards (PEP 8, I'm looking at you)? #!/usr/bin/env python """Test SDK for Arcsight Logger""" import time import json import datetime import requests import untangle from requests.packages.urllib3.exceptions import InsecureRequestWarning class ArcsightLogger(object): """ Main Class to interact with Arcsight Logger REST API """ def __init__(self): self.target = 'https://SOMETHING:9000' self.login = 'username' self.password = 'password' def post(self, url, data, isjson): """ Post Call towards Arcsight Logger :param url: URL to retrieve :param data: Request Body :param isjson: Checks if post needs to be JSON :return: HTTP Response """ requests.packages.urllib3.disable_warnings(InsecureRequestWarning) if data: if isjson: try: r = requests.post(url, json=data, verify=False) return r except requests.exceptions.RequestException as e: print e else: try: r = requests.post(url, data, verify=False) return r except requests.exceptions.RequestException as e: print e def arcsight_login(self): """ Log in the user defined in self.user :return: User token to be used with all requests against Arcsight """ data = { 'login': self.login, 'password': self.password, } url = self.target + '/core-service/rest/LoginService/login' r = self.post(url, data, False) r.raise_for_status() loginrequest = 
untangle.parse(r.content) return loginrequest.ns3_loginResponse.ns3_return.cdata def arcsight_search(self, token, query): """ Executes a searchquery, that is then stored and needs to be called again to get results, using the returned searchid. :param token: Token received from login method to authenticate :param query: Query to be run with the search :return: Array of the current searchid, which is needed for other functions, and the content of HTTP response. """ data = { 'search_session_id': int(round(time.time() * 1000)), 'user_session_id': token, 'query': query, } url = self.target + '/server/search' searchid = data['search_session_id'] r = (searchid, self.post(url, data, True)) return r def arcsight_status(self, token, searchid): """ Checks the current status of a search using the searchid :param token: Token received from login method to authenticate :param searchid: The searchid that was generated when a new search was called :return: The status of the search, currently this will wait for the search to complete and then return that the search is finished. """ data = { 'search_session_id': searchid, 'user_session_id': token, } url = self.target + '/server/search/status' r = self.post(url, data, True) r = r.json() while r['status'] != 'complete': time.sleep(5) print 'waiting' r = self.post(url, data, True) r = r.json() print 'search is finished' return r def arcsight_events(self, token, searchid): """ Gathers events from a finished search :param token: Token received from login method to authenticate :param searchid: The searchid that was generated when a new search was called :return: The events generated by a search. This returns the default arcsight JSON format. 
""" data = { 'search_session_id': searchid, 'user_session_id': token, } url = self.target + '/server/search/events' r = self.post(url, data, True) self.arcsight_stop(token, searchid) return r def arcsight_events_custom(self, token, searchid): """ Gathers events from a finished search :param token: Token received from login method to authenticate :param searchid: The searchid that was generated when a new search was called :return: The events generated by a search. This returns a custom JSON format """ data = { 'search_session_id': searchid, 'user_session_id': token, } url = self.target + '/server/search/events' r = self.post(url, data, True) d = json.dumps(r.json()) r = json.loads(d) name = r['fields'] results = r['results'] a = [] for result in results: a.append({f['name']: r for f, r in zip(name, result)}) r = (json.dumps(a, sort_keys=True, indent=4)) self.arcsight_stop(token, searchid) return r def arcsight_stop(self, token, searchid): """ Stops the search operation but keeps the search session so that the search results can be narrowed down later. :param token: Token received from login method to authenticate :param searchid: The searchid that was generated when a new search was called :return: A message that the search has been stopped. """ data = { 'search_session_id': searchid, 'user_session_id': token, } url = self.target + '/server/search/stop' r = self.post(url, data, True) print 'search stopped' return r def arcsight_close(self, token, searchid): """ Stops the execution of the search and clears the search session data from the server. :param token: Token received from login method to authenticate :param searchid: The searchid that was generated when a new search was called :return: A message that the search has been stopped. 
""" data = { 'search_session_id': searchid, 'user_session_id': token, } url = self.target + '/server/search/close' r = self.post(url, data, True) print 'search is closed' return r def main(self): """ Testruns of different functions """ # token = self.arcsight_login() # print token # query = 'deviceAddress CONTAINS 192.168.2.26' # r = self.arcsight_search(token, query) # searchid = r[0] # print searchid # print r[1].content # self.arcsight_status(token, searchid) # r = self.arcsight_events(token, searchid) # print r.content if __name__ == "__main__": o = ArcsightLogger() o.main() Answer: Creating a session As a library, I find it barely usable. If I want to use it with my own credentials, I have to: either modify the source of your library to put them in __init__; or tamper with the attributes after building an ArcsightLogger object: o = ArcsightLogger() o.login = 'spam' o.password = 'eggs' o.main() Moreover, having to manually store and feed back to each method the generated token is unnecessary boilerplate. Instead, I would try to log the user in as soon as possible and store the generated token in an attribute for easy access by each of your methods: class ArcsightLogger(object): """ Main Class to interact with Arcsight Logger REST API """ TARGET = 'https://SOMETHING:9000' def __init__(self, username, password): """ Log in the user whose credentials are provided and store the access token to be used with all requests against Arcsight """ data = { 'login': username, 'password': password, } url = self.TARGET + '/core-service/rest/LoginService/login' r = self.post(url, data, False) r.raise_for_status() loginrequest = untangle.parse(r.content) self.token = loginrequest.ns3_loginResponse.ns3_return.cdata Note the use of the class constant TARGET instead of an instance attribute as this is not something meant to be changed when using the API. 
Posting data First off, a comment would be more than helpful to explain why you call requests.packages.urllib3.disable_warnings(InsecureRequestWarning). Second, do you really need to call this at each request? I would rather put this line right after your last import. Or not at all, since it lets the user know about a potential MITM attack. Another solution would be to have a configurable behaviour that lets the warnings appear by default but can be disabled by the user if they so wish. I would however use the warnings module to control that. Next, I would change the signature of _post a bit. I would ask the caller to provide only the route part of the URL, as the domain is already stored as a class constant. It will avoid some boilerplate, as the concatenation can be done in this method. I would also take advantage of Python's syntax to turn data into a dictionary instead of letting the caller do it. I would change the isjson parameter to accept a default value of True, since this is what most methods use. And, lastly, I would name it _post, as it is mainly a helper function for your methods rather than part of your public API. 
Finally, you should let the exceptions bubble up rather than print them as they will most likely indicate an issue that will prevent further processing: import warnings from requests.packages.urllib3.exceptions import InsecureRequestWarning class ArcsightLogger(object): """ Main Class to interact with Arcsight Logger REST API """ TARGET = 'https://SOMETHING:9000' def __init__(self, username, password, disable_insecure_warning=False): """ Log in the user whose credentials are provided and store the access token to be used with all requests against Arcsight """ action = 'ignore' if disable_insecure_warning else 'once' warnings.simplefilter(action, InsecureRequestWarning) r = self.post( '/core-service/rest/LoginService/login', login=username, password=password, is_json=False) r.raise_for_status() loginrequest = untangle.parse(r.content) self.token = loginrequest.ns3_loginResponse.ns3_return.cdata def post(self, route, is_json=True, **data): """ Post Call towards Arcsight Logger :param route: API endpoint to fetch :param is_json: Checks if post needs to be JSON :param data: Request Body :return: HTTP Response """ if not data: return url = self.TARGET + route if is_json: return requests.post(url, json=data, verify=False) else: return requests.post(url, data, verify=False) Verifying search status Your arcsight_status method is not optimal. As a user, knowing that an operation can take a huge amount of time, I prefer to have the ability to not perform a blocking call so I can perform other operations in the meantime. You could split the functionality by providing an arcsight_search_complete method returning a boolean and build arcsight_status on top of it: def arcsight_search_complete(self, search_id): """ Checks the current status of a search using the search_id :param search_id: The search_id that was generated when a new search was called :return: Whether or not the search finished already. 
""" response = self.post( '/server/search/status', search_session_id=search_id, user_session_id=self.token) return response.json().get('status') == 'complete' def arcsight_wait_for_search(self, search_id): """ Blocks until the search represented by search_id completes :param search_id: The search_id that was generated when a new search was called :return: The status of the search. """ while not self.arcsight_search_complete(search_id): time.sleep(5) return self.post( '/server/search/status', search_session_id=search_id, user_session_id=self.token).json() Generating events reports I don't understand why you return an HTTP response in arcsight_events and try extra hard to return a string in arcsight_events_custom. For one, these two methods seek to offer the same kind of data (only filtered differently) so they should return the same type of data. For two, these data types are barely usable on their own. Why not directly return a dictionary that the user could manipulate or format the way they wish? arcsight_events_custom would return a list, but at least they are both collections that can be directly manipulated. Lastly, since they perform essentially the same task, I would have one rely on the other to avoid code duplication: def arcsight_events(self, search_id): """ Gathers events from a finished search :param search_id: The search_id that was generated when a new search was called :return: The events generated by a search. This returns the default arcsight JSON format. """ response = self.post( '/server/search/events', search_session_id=search_id, user_session_id=self.token) self.arcsight_stop(search_id) return response.json() def arcsight_events_custom(self, search_id): """ Gathers events from a finished search :param search_id: The search_id that was generated when a new search was called :return: The events generated by a search. 
This returns a custom JSON format """ events = self.arcsight_events(search_id) return [{ field['name']: result for field, result in zip(events['fields'], results) } for results in events['results']] Generic remarks Since the class is called ArcsightLogger, I don't understand the decision to prefix each method with arcsight_; it doesn't really add any value. The main method has nothing to do in this class. Such code should either be in the if __name__ == '__main__': clause if it is test or demo code, or in a user script. You may want to reduce the amount of text per line in your docstrings. PEP 8 recommends limiting such lines to 72 characters. I don't really have any insights into how it could be built, but you could provide a context manager around a search query so that the user doesn't have to bother storing the search ID. Something that could work along the lines of: arcsight = ArcsightLogger('me', 'mypass', True) with arcsight.search('query') as search: search.wait() data = search.events(custom=True) # auto close at the end of the with block Putting all that together your code can become: #!/usr/bin/env python2 """Test SDK for Arcsight Logger""" import time import warnings import untangle import requests from requests.packages.urllib3.exceptions import InsecureRequestWarning class ArcsightLogger(object): """ Main Class to interact with Arcsight Logger REST API """ TARGET = 'https://SOMETHING:9000' def __init__(self, username, password, disable_insecure_warning=False): """ Log in the user whose credentials are provided and store the access token to be used with all requests against Arcsight """ action = 'ignore' if disable_insecure_warning else 'once' warnings.simplefilter(action, InsecureRequestWarning) r = self._post( '/core-service/rest/LoginService/login', login=username, password=password, is_json=False) r.raise_for_status() loginrequest = untangle.parse(r.content) self.token = loginrequest.ns3_loginResponse.ns3_return.cdata def _post(self, route, 
is_json=True, **data): """ Post Call towards Arcsight Logger :param route: API endpoint to fetch :param is_json: Checks if post needs to be JSON :param data: Request Body :return: HTTP Response """ if not data: return url = self.TARGET + route if is_json: return requests.post(url, json=data, verify=False) else: return requests.post(url, data, verify=False) def search(self, query): """ Executes a searchquery, that is then stored and needs to be called again to get results, using the returned search_id. :param query: Query to be run with the search :return: Array of the current searchid, which is needed for other functions, and the content of HTTP response. """ search_id = int(round(time.time() * 1000)) response = self._post( '/server/search', query=query, search_session_id=search_id, user_session_id=self.token) return search_id, response.json() def search_complete(self, search_id): """ Checks the current status of a search using the search_id :param search_id: The search_id that was generated when a new search was called :return: Whether or not the search finished already. """ response = self._post( '/server/search/status', search_session_id=search_id, user_session_id=self.token) return response.json().get('status') == 'complete' def wait(self, search_id): """ Blocks until the search represented by search_id completes :param search_id: The search_id that was generated when a new search was called :return: The status of the search. """ while not self.search_complete(search_id): time.sleep(5) return self._post( '/server/search/status', search_session_id=search_id, user_session_id=self.token).json() def events(self, search_id, custom_format=False): """ Gathers events from a finished search :param search_id: The search_id that was generated when a new search was called :param custom_format: Whether to return the response from ArcSight unmodified or to pre-process it. :return: The events generated by a search. 
""" response = self._post( '/server/search/events', search_session_id=search_id, user_session_id=self.token) self.stop(search_id) events = response.json() if not custom_format: return events return [{ field['name']: result for field, result in zip(events['fields'], results) } for results in events['results']] def stop(self, search_id): """ Stops the search operation but keeps the search session so that the search results can be narrowed down later. :param search_id: The search_id that was generated when a new search was called :return: A message that the search has been stopped. """ response = self._post( '/server/search/stop', search_session_id=search_id, user_session_id=self.token) return response.json() def close(self, search_id): """ Stops the execution of the search and clears the search session data from the server. :param search_id: The search_id that was generated when a new search was called :return: A message that the search has been stopped. """ response = self._post( '/server/search/close', search_session_id=search_id, user_session_id=self.token) return response.json() if __name__ == '__main__': arcsight = ArcsightLogger('username', 'password', True) print arcsight.token query = 'deviceAddress CONTAINS 192.168.2.26' search_id, r = arcsight.search(query) print search_id print r arcsight.wait(search_id) events = arcsight.events(search_id) print events I also removed most of the prints, as it is a bad idea to mess with the user's output: they may want to have their own that they need to parse afterwards or whatever. Consider using the logging module instead, as it is easy to turn it off or redirect it to another stream.
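The suggested context manager could be sketched along the lines below. This is a hypothetical illustration, not part of the reviewed library: it uses a stub client so it runs standalone, and the method names (`search`, `wait`, `events`, `close`) merely mirror those of the refactored class.

```python
from contextlib import contextmanager

# Stub standing in for the real ArcsightLogger, just to show the pattern.
class StubArcsight(object):
    def search(self, query):
        return 1234, {'query': query}
    def wait(self, search_id):
        return {'status': 'complete'}
    def events(self, search_id, custom_format=False):
        return {'results': []}
    def close(self, search_id):
        self.closed = search_id

@contextmanager
def managed_search(client, query):
    search_id, _ = client.search(query)
    try:
        yield search_id
    finally:
        client.close(search_id)  # guaranteed cleanup, even on error

client = StubArcsight()
with managed_search(client, 'deviceAddress CONTAINS 192.168.2.26') as sid:
    client.wait(sid)
    data = client.events(sid)
print(client.closed)  # 1234
```

The `try`/`finally` around the `yield` is what guarantees the search session is closed even if the body of the `with` block raises.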
{ "domain": "codereview.stackexchange", "id": 22919, "tags": "python, python-2.x, api, rest, client" }
Display resources for each factory in the Rocket Valley Tycoon
Question: I'm new here. I'm trying to teach myself programming by learning Python. I have assigned myself the following task. There's a game I'm playing (Rocket Valley Tycoon) where the goal is to basically place an Extractor to gather resources, then use Factories to transform the resource into a good, which then usually gets processed again by another Factory into a processed good. The point of the program I'm trying to write is to be able to see the resources needed for any Factory. So, for example, Copper plates need 10 carbon and 20 copper ore, and at the same time, the carbon needs 2 coal and 4 water, so I would like the final output to be something like: Copper plates need 20 copper ore, 20 coal, and 40 water. I have a working prototype that works with just this, but I know it can be simplified somehow (especially in the print_total function). Also, this only works with a Factory that has just one "child" Factory and not with n "child" Factories. This is the first tech tree: Can someone please point me in the right direction? 
class Factory: def __init__(self, name, ing1, amount1, ing2=None, amount2=None): self.name = name self.ing1 = ing1 self.amount1 = amount1 self.ing2 = ing2 self.amount2 = amount2 self.total_amount1_1 = None self.total_amount1_2 = None self.total_amount2_1 = None self.total_amount2_2 = None self.total_ing1_ing1 = None self.total_ing1_ing2 = None self.total_ing2_ing1 = None self.total_ing2_ing2 = None def print_ing(self): print(self.name + ' needs: ') print(self.amount1, self.ing1.name, self.amount2, self.ing2.name) def print_total(self): if isinstance(self.ing1, Factory): print('Ing1 is Factory') self.total_ing1_ing1 = self.ing1.ing1 self.total_ing1_ing2 = self.ing1.ing2 self.total_amount1_1 = self.amount1 * self.ing1.amount1 self.total_amount1_2 = self.amount1 * self.ing1.amount2 elif isinstance(self.ing2, Factory): print('Ing2 is Factory') self.total_ing2_ing1 = self.ing2.ing1 self.total_ing2_ing2 = self.ing2.ing2 self.total_amount2_1 = self.amount2 * self.ing2.amount1 self.total_amount2_2 = self.amount2 * self.ing2.amount2 print(self.name + ' needs: ') print(self.amount1, self.ing1.name, self.total_amount2_1, self.total_ing2_ing1.name, self.total_amount2_2,\ self.total_ing2_ing2.name) class Extractor: def __init__(self, name): self.name = name coal = Extractor('Coal') water = Extractor('Water') copper_ore = Extractor('Copper Ore') carbon = Factory('Carbon', coal, 2, water, 4) copper_plates = Factory('Copper Plates', copper_ore, 20, carbon, 10) Answer: I would turn this into one class that takes a list of ingredients and cost tuples. The resulting object can be added with another of its instance or multiplied with a number. 
from collections import defaultdict class Ingredient: def __init__(self, *ingredients): self.d = defaultdict(int) if len(ingredients) == 1 and isinstance(ingredients[0], str): # special case for base ingredients self.d[ingredients[0]] = 1 else: for ingredient, amount in ingredients: self.d[ingredient] += amount @property def ingredients(self): return list(self.d.keys()) @property def amounts(self): return list(self.d.values()) def __rmul__(self, other): # allows doing other * self, where other is a number return Ingredient(*zip(self.ingredients, [other * x for x in self.amounts])) def __add__(self, other): # allows self + other, where other is another Ingredient return Ingredient(*zip(self.ingredients + other.ingredients, self.amounts + other.amounts)) def __str__(self): return str(list(self.d.items())) Then you can define the base items and any derived item is just algebraic manipulations: coal = Ingredient("Coal") water = Ingredient("Water") copper_ore = Ingredient("Copper Ore") carbon = 2 * coal + 4 * water copper_plates = 20 * copper_ore + 10 * carbon print(copper_plates) # [('Copper Ore', 20), ('Coal', 20), ('Water', 40)]
{ "domain": "codereview.stackexchange", "id": 30346, "tags": "python, beginner" }
Does dark energy make galaxies expand over long periods of time?
Question: Does dark energy expand galaxies slightly over time? I would think this could be verified easily (observe whether galaxies far away / further in the past are smaller and denser), and it might make a good research topic! I am specifically asking at the galaxy level here. It is pretty clear dark energy acts at levels beyond a galaxy. Edit: There have been similar questions pointed out, but I have not seen any asking specifically at the level of a galaxy. Note: It would seem that if the galaxies used to be smaller, that might explain the increased star formation described here: https://webbtelescope.org/webb-science/galaxies-over-time "About 10 billion years ago, galaxies were more chaotic, with more supernovae, 10 times more star formation" Answer: TLDR: Dark energy drives the accelerated expansion of empty space between galaxy clusters. Other effects dominate the dynamics within a given cluster. If I can get philosophical for a moment, we must remember that every equation we write down is an approximate description of nature built from simplifying assumptions and with a well defined domain of validity. So what is the domain of validity for dark energy? cosmology as dust in the wind When we derive the Friedmann equations from the FLRW metric, we assume that the contents of the universe have uniform density. The matter of the universe is modeled as a uniform, non-interacting dust. In this case the dust grains are galaxy clusters. By "non-interacting" we mean the galaxy clusters just sit in place unless they are carried around by the cosmological dynamics. Just dust in the wind, man. Dark energy fits into the Einstein field equations as a cosmological constant. It has a constant energy density. In the past the dust grains were closer together, and the universe was matter dominated. The cosmological dynamics were driven primarily by the matter in the universe. As the universe expanded, more and more empty space opened up between the dust grains. 
The matter density of the universe decreased. Eventually the matter density got down to a similar scale as the dark energy density. At this point dark energy, starts to noticeably affect the cosmological dynamics. As the universe expands more, the matter density continues to decrease, but the dark energy density stays the same, leading to the dark energy dominated cosmology we see today. The Friedmann equations describe the dynamics of galaxy clusters. That is their domain of validity. inside a grain of dust If we zoom in on a single grain of dust and look inside, we'll find many galaxies. The key thing to understand is that at the scale of a single galaxy cluster, the spacetime isn't dark energy dominated. The average density of matter in a cluster is way bigger than the average density of the universe. There's just way more empty space between clusters than between galaxies within a cluster. If we apply the same cosmological assumptions at this scale the dynamics would be different than for galaxy clusters. The increased matter density means the expansion won't happen at the same rate. The rate of expansion between clusters is larger than the rate of expansion between neighbor galaxies which is larger than the rate of expansion between stars within a galaxy. The non-interacting assumption certainly doesn't hold within a cluster. The galaxies are not just floating on the wind of cosmology, they are interacting gravitationally and affecting each other. In this case we might have to worry about solving the gravitational $N$-body problem with a non-zero cosmological constant. The Friedmann equations which describe cosmology are not a useful approximation of the dynamics inside a galaxy. The cosmological constant (dark energy) modifies the gravitational dynamics, but it does not drive accelerated expansion in the same way it does for the empty space between galaxy clusters.
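The dilution argument above can be made concrete with a toy calculation. The present-day density fractions below are assumed illustrative values, not taken from the answer:

```python
import numpy as np

# Toy illustration: matter density dilutes as a**-3 while the dark
# energy density stays constant, so dark energy eventually dominates
# as the universe expands.
omega_m0, omega_de = 0.3, 0.7             # assumed density fractions today
a = np.array([0.1, 0.5, 0.75, 1.0, 2.0])  # scale factor (today = 1)
rho_m = omega_m0 * a**-3                  # matter dilutes with expansion
for ai, rm in zip(a, rho_m):
    dominant = "matter" if rm > omega_de else "dark energy"
    print(f"a = {ai:4.2f}: matter {rm:8.3f} vs dark energy {omega_de:.3f} -> {dominant}")
```

The crossover happens near $a = (\Omega_{m,0}/\Omega_\Lambda)^{1/3} \approx 0.75$ for these assumed values; the same comparison done at the much higher average density inside a cluster shows why dark energy never dominates there.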
{ "domain": "astronomy.stackexchange", "id": 6077, "tags": "galaxy, space-time, expansion, dark-energy" }
Why does $L \in \textbf{P} \cap \textbf{NP}$ being $\textbf{NP}$-complete imply $\textbf{NP} = \textbf{P}$?
Question: If I show that a language $L$ is contained in $\textbf{P}$ and $\textbf{NP}$ and I know that the language is $\textbf{NP}$-complete, why have I proved that $\textbf{P} = \textbf{NP}$? Answer: Because you have then shown that $L$ is an $\textbf{NP}$-complete language which, since $L \in \textbf{P}$, is decidable in poly-time. Since any other language $L' \in \textbf{NP}$ is efficiently reducible to $L$ (because of $\textbf{NP}$-completeness), $L' \in \textbf{P}$ as well. It follows that $\textbf{NP} \subseteq \textbf{P}$ (and the other inclusion is trivial).
{ "domain": "cs.stackexchange", "id": 13130, "tags": "complexity-theory" }
How can I explain what a kilogram is using Planck's constant?
Question: I want to understand what $1\ \mathrm{kg}$ represents. For example: I know that $1$ second is equal to $9\,192\,631\,770$ periods of the microwave radiation that a cesium-133 atom (at $0\ \mathrm K$) emits, if it's excited just right. I can imagine that. I can see how you would "count" these 9 billion periods until you know that exactly $1$ second has passed. Now I would like to know if there is a similar explanation for the kilogram. I understand how Planck's constant has been measured using methods such as the Kibble balance. I would like to know how I can explain what $1\ \mathrm{kg}$ is using $h$. Here is what I've got so far: Knowing that $E=hf$ and $E=mc^2$, if both of those energies are equal, this gives $m=\frac{hf}{c^2}$. So if we want to know what $1\ \mathrm{kg}$ is, we find the frequency $f$ that gives $\frac{hf}{c^2}=1\ \mathrm{kg}$, which would be $1.3564 \times 10^{50}\ \mathrm{Hz}$. What does this frequency represent? Is it the frequency of light that you would need to "push" an object with a force equivalent to the weight of $1\ \mathrm{kg}$? Sorry if my thinking is completely off. Edit: the answer to the question What are the proposed realizations in the New SI for the kilogram, ampere, kelvin and mole? explains in detail how the new units get defined and what their relations are, but does not give a satisfying explanation as to what e.g. a kilogram represents. Answer: Quoting this excellent thought experiment from the article An atomic physics perspective on the new kilogram defined by Planck’s constant by Wolfgang Ketterle (thank you wcc!): The new kilogram may be understood as the mass difference of $1.4755214 \times 10^{40}$ Cs atoms in the ground state versus the same number in the excited hyperfine state or as the mass of $1.4755214 \times 10^{40}$ photons at the Cs hyperfine frequency trapped in a microwave cavity. 
The numerical value $$ 1.4755214 \times 10^{40}\,\text{kg}^{-1} = \frac{c^2}{h \cdot \nu_{Cs}} = \frac{299\ 792\ 458\,\text{m}^2\text{/s}^2}{6.626\ 070\ 15\times 10^{-34}\,\text{kg m}^2\text{/s}\ \cdot 9\ 192\ 631\ 770\,\text{s}^{-1}} $$ is fixed through the definitions of $h$, $c$ and $\nu_{Cs}$ and has no uncertainty. In a thought experiment, one could measure out 1 kg of any substance by having a mechanical balance where the substance and the ground state Cs atoms on one side are compared with the Cs atoms in the excited hyperfine state on the other side of the balance. So this is it! I think this is probably the best way to think about what this new definition of the kilogram actually represents: It's the difference in mass of a bunch of cesium atoms in one energy state versus another.
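The fixed value can be checked directly from the three exact defining constants (this is just a verification of the arithmetic in the quoted formula):

```python
# Exact SI defining constants
c = 299_792_458          # speed of light, m/s
h = 6.626_070_15e-34     # Planck constant, J s
nu_cs = 9_192_631_770    # Cs-133 hyperfine frequency, Hz

# Number of Cs hyperfine-transition energy quanta per kilogram
n = c**2 / (h * nu_cs)
print(f"{n:.7e}")  # ~1.4755214e+40 per kg
```

Because all three constants are exact by definition, this number carries no measurement uncertainty, exactly as the answer states.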
{ "domain": "physics.stackexchange", "id": 59504, "tags": "mass, definition, si-units, metrology" }
Should the agent play the game until the end or until the winner is found?
Question: I'm using the DQN algorithm to train my agent to play a turn-based game. The winner of the game can be known before the game is over. Once the winning condition is satisfied, it cannot be reverted. For example, the game might last 100 turns, but it's possible to know that one of the players won at move 80, because some winning condition was satisfied. The last 20 moves don't change the outcome of the game. If people were playing this game, they would play it to the very end, but the agent doesn't have to. The agent will be using memory replay to learn from the experience. I wonder, is it helpful for the agent to have the experiences after the winning condition was satisfied for a more complete picture? Or is it better to terminate the game immediately, and why? How would this affect the agent's learning? Answer: You should probably grant reward at the point that the game is logically won. This will help the agent learn more efficiently, by reducing the number of timesteps over which return values need to be backed up. Stopping the episode at that point should also be fine, and may add some efficiency too, in that there will be more focused relevant data in the experience replay. On the surface, it seems that there is no benefit to exploring or discovering any policy after the game is won, and from the comments no expectation from you as agent developer that the agent has any kind of behaviour - random actions would be fine. It is still possible that the agent could learn more from play after a winning state. It would require certain things to be true about the environment and additional work from you as developer. For example, if the game has an end phase where a certain kind of action is more common and it gains something within the game ("victory points", "gold" or some other numbered token that is part of the game mechanics and could be measured), then additional play where this happened could be of interest. 
Especially if the moves that gained this measure could also be part of winning moves in the earlier game. To allow the agent to learn this though, it would have to be something that it predicted in addition to winning or losing. One way to achieve this is to have a secondary learning system as part of the agent, one that learns to predict gains (or totals) of this resource. Such a prediction could either be learned separately (but very similarly to the action value) and fed into the q function as an input, or it could be a neural network that shares early layers with the q function (or policy function) but with a different head. Adding this kind of secondary function to the neural network can also have a regularising effect on the network, because the interim features have to be good for two types of prediction. You definitely do not need to consider such an addition. It could be a lot more work. However, for some games it is possible that it helps. Knowing the game, and understanding whether there is any learning experience to be had as a human player in playing on beyond winning or losing, might help you decide whether it is worth trying to replicate this additional experience for a bot. Even if it works, the effect may be minimal and not worth the difference it makes. For instance, running a more basic learning agent for more episodes may still result in a very good agent for the end game. That only costs you more run time for training, not coding effort.
{ "domain": "ai.stackexchange", "id": 2136, "tags": "reinforcement-learning, dqn" }
Can someone please explain to me how to sample in the time domain?
Question: I am trying to understand how the Nyquist-Shannon theorem applies to sampling in the time domain. Suppose I want to sample a function whose time constants I know. From what I understand, the bandwidth is determined by the shortest time constant. After that, I'm shaky. Wikipedia appears to suggest that the Nyquist rate is twice the bandwidth as I defined it previously, and that the Nyquist frequency should be between that and the bandwidth (https://en.wikipedia.org/wiki/Nyquist_frequency). On the other hand, a signal processing book that I consulted (Oppenheim and Schafer) suggests that twice the bandwidth is the Nyquist rate, and that the Nyquist frequency should be between that and the bandwidth. And it also shows the sampling frequency as being $2\pi$ times the bandwidth (I think) for reasons that I don't understand. Based on the latter source, I would guess that my sampling rate in the time domain must fall within that interval in order not to produce artifacts in reproducing the function that I mentioned. From what I understand, aliasing is one such artifact. Can somebody help clear this up for me? This is very muddy in my head. (My background is in an area of science that doesn't involve signal processing as part of the education, and I am trying to understand this because it seems quite fundamental.) Answer: This feels a bit like a duplicate of your previous question, "How to sample a signal in time based on a set of relaxation times?" The answer is still the same: you cannot sample a first-order decay process (with a single time constant) without aliasing. The sampling criterion requires the signal to be bandlimited. That means the spectrum must be 0 above the Nyquist frequency. Bandwidth for a process with a time constant is typically the -3 dB point, which is something different. You should sample at a rate substantially higher than the -3 dB point. How much higher depends on how much aliasing your application can live with.
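To make the answer's advice concrete, here is a small sketch (my own numbers; the time constant is an assumed example, not from the question) of turning a time constant into a -3 dB corner frequency and a sampling rate:

```python
import math

tau = 1e-3  # assumed example: shortest time constant of the process, in seconds

# -3 dB corner frequency of a first-order (single time constant) process:
f_3db = 1 / (2 * math.pi * tau)   # about 159 Hz for tau = 1 ms

# The spectrum of such a process is NOT bandlimited - it only rolls off at
# 20 dB/decade above f_3db - so some aliasing is unavoidable. A common rule
# of thumb is to sample an order of magnitude or more above the corner:
fs = 20 * f_3db                   # about 3.2 kHz here
```

The factor of 20 is arbitrary; as the answer says, how much margin you need depends on how much residual aliasing your application can tolerate.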
{ "domain": "dsp.stackexchange", "id": 11065, "tags": "sampling" }
Does an atom gain mass when it absorbs a photon?
Question: I understand that "at rest" a photon has no mass, but it has energy. So when a photon is absorbed by an atom, the atom gains the energy of the photon. This captured energy raises the mass of the atom by some quantity... I'm guessing that the frequency of the photon determines the amount of mass added to the atom... An example for a given frequency would be helpful for me to understand. Any help would be appreciated; my cat really wants to know and I'm running out of games to distract him... he's very demanding... Answer: Sure, the atom will gain mass. But that extra energy puts the atom into an unstable state, so the atom will radiate that energy away again in a short time, and so the mass gain is only temporary. We can calculate the mass gain using $E = mc^2$ and $E = h \nu$, where $\nu$ is the frequency of the photon, and $h$ is Planck's constant.
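Since the question asks for a worked example, here is a small sketch (the frequency value is an assumed example, not from the answer) combining the two formulas into $\Delta m = h\nu/c^2$:

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def mass_gain(frequency_hz):
    """Mass equivalent of an absorbed photon: dm = E/c^2 = h*nu/c^2."""
    return h * frequency_hz / c**2

# Green visible light at nu = 5.4e14 Hz (an assumed example frequency):
dm = mass_gain(5.4e14)   # roughly 4e-36 kg
```

The gain is absurdly tiny - around $10^{-36}$ kg for visible light, a few millionths of an electron mass - which is why it never shows up in ordinary mass bookkeeping.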
{ "domain": "physics.stackexchange", "id": 51020, "tags": "photons, mass, atomic-physics, mass-energy, subatomic" }
What is a constant vector?
Question: If the acceleration vector and velocity vector are antiparallel and constant vectors(?), what is the type of motion of the particle/body? Also, what are constant vectors? Answer: Since this is physics, such a question is best answered by a computation. Acceleration is the derivative of velocity: $$\vec{a}=\frac{d\vec{v}}{dt}$$ If it's constant, then this can be integrated as: $$\vec{v}=\vec{a}t+\vec{v}_0$$ with $\vec{v}_0$ the initial velocity. Since you're asking for the velocity to also be constant, then: $$\vec{v}=\vec{v}_0$$ You can differentiate that to return to acceleration: $$\vec{a}=\frac{d\vec{v}_0}{dt}=\vec{0}$$ So the only way to have constant acceleration and velocity is to have zero acceleration, and constant (zero or non-zero) velocity.
{ "domain": "physics.stackexchange", "id": 90084, "tags": "kinematics, vectors, terminology" }
Find first repeating Char in String
Question: Given a string, find the first repeating character in it. Examples: firstUnique("Vikrant") → None firstUnique("VikrantVikrant") → Some(V) Scala implementation: object FirstUniqueChar extends App { def firstUnique(s: String): Option[Char] = { val countMap = (s groupBy (c=>c)) mapValues(_.length) def checkOccurence(s1: String ): Option[Char] = { if (countMap(s1.head) > 1) Some(s1.head) else if (s1.length == 1) None else checkOccurence(s1.tail) } checkOccurence(s) } println(firstUnique("abcdebC")) println(firstUnique("abcdef")) } I also have a followup question. What is the recommended way if I do not want to solve this problem with recursion? Instead of using the checkOccurence method I can traverse through the string and break when I find the first element with a count more than 1. But that will require a break, which is discouraged in Scala. Answer: Your checkOccurrence(s) is just a clumsy way to write s.find(countMap(_) > 1). You can significantly simplify the solution by taking advantage of .distinct. def firstUnique(s: String): Option[Char] = s.zipAll(s.distinct, '\u0000', '\u0000') .collectFirst({ case ab if ab._1 != ab._2 => ab._1 })
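For comparison, the countMap-plus-find idea translates almost line for line into other languages; here is a Python sketch (my own, mirroring `s.find(countMap(_) > 1)`, and also answering the follow-up since it needs no explicit break):

```python
from collections import Counter

def first_repeating(s):
    """First character (scanning from the left) whose total count exceeds one.

    Counter plays the role of countMap, and next() with a default plays the
    role of find/Option, so no break statement is needed.
    """
    counts = Counter(s)
    return next((c for c in s if counts[c] > 1), None)
```

The generator inside `next` stops at the first match, so the scan short-circuits just like `find` does in Scala.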
{ "domain": "codereview.stackexchange", "id": 33181, "tags": "strings, recursion, interview-questions, functional-programming, scala" }
Why do we need continuum approximation in fluid mechanics?
Question: We know that a fluid in reality is not continuous. It has spaces and voids between atoms and molecules. Continuum approximation is a famous approximation that is taken in any fluid mechanics textbook. It says that even though the fluid has spaces and voids it can be assumed to behave as a continuous medium. Why do we need to assume that a fluid is a continuous medium? That is, what was the problem that we were facing when it was not treated as continuous? Answer: Materials were intuitively uniform for 60,000 years. A few people started guessing they might be "atomic" about 3000 years ago. They only became rigorously atomic about two hundred years ago. And they only got a rigorous continuum model about one hundred years ago. But they were being treated as such on an ad hoc basis long before then. There isn't any conflict between the continuum model and the atomic viewpoint. There never was. The two developed in concert. Boyle published his 1662 law that involved gas pressure, and they needed a way to measure and mathematically handle this rather poorly understood phenomenon. The "elasticity" of a gas was a real dilemma. Boyle and Hooke imagined little springs between their imagined atoms. So in the 17th C, you had a hypothesized atomic model whose behavior needed to agree with the measurements of the day, quantities we now associate with the continuum model. Enter calculus, stage right, which was developed from little "infinitesimals" (generalized atoms.) The result was integral and differential calculus applied to continuous functions (in retrospect, this was an unfortunate choice of terms.) In order to harness the power of calculus, it helps to have a formal underpinning that allows you to treat pressure, density, velocity, and a host of other things you can measure as continuous functions. They didn't have that in the 18th C, but that didn't stop Bernoulli and Euler from applying calculus to fluids.
Work, as defined by Coriolis (1826), didn't need calculus, just buckets of water and a rope. But there's only so much you can do with those, and not everyone has a mine shaft. A calculus-based definition of work was a lot more convenient. So basically, calculus was a solution in search of a problem. Fluid dynamics was a reasonable candidate. After a century of ad hoc application and some decent successes, mathematicians and physicists went back and developed the formal underpinnings to justify what had been done. It let us consolidate thousands of ad hoc experiments into a few laws, and it allowed us to do performance-based design of dynamic systems like steam engines. Burying the lead - You said "We know that a fluid in reality is not continuous. It has spaces and voids between atoms and molecules." You are assuming the continuum model assumes a continuous structure. It doesn't. What the continuum model does is assume continuous function expressions that relate pressure, density, etc. to each other. Continuous functions are actually defined based on the epsilon-delta argument of Cauchy. In his 1821 book Cours d'analyse, Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of $y=f(x)$ by saying that an infinitesimal change in x necessarily produces an infinitesimal change in y, while (Grabiner 1983) claims that he used a rigorous epsilon-delta definition in proofs. Continuum models are, and always have been, fully consistent with an atomic structure. They were produced with that structure in mind. It is the behavior of the atoms that has been captured in the continuum model.
{ "domain": "engineering.stackexchange", "id": 4942, "tags": "mechanical-engineering, fluid-mechanics, thermodynamics, fluid" }
Semiclassical Approximation
Question: In many books I have read about the semiclassical approximation applied to Bose-Einstein condensation, but I don't understand what it really means. For example, I read an expression like $f_\textbf p (\textbf r) \frac{d^3\textbf r\, d^3\textbf p}{(2\pi \hbar)^3}$ for the state density. Does anybody have a good explanation applied to this case? Or a general definition of the semiclassical approximation? Answer: I recently updated the Wikipedia article on statistical ensembles, which might be relevant. Basically, in classical physics the probability distribution for the state of a system is written as an integral over position and momentum, as in your equation. It turns out to be necessary to choose an arbitrary unit of action (energy times time) in order to define "one state" and make the units work out. It also turns out that if you make this action unit equal to Planck's constant, then the number of classical states contained in the ensemble is roughly the same as in quantum mechanics, at least in the limit when quantum mechanics is behaving classically. That is the semiclassical limit. For a full explanation, note that your distribution function only has 6 coordinates, whereas a statistical ensemble would have 6N coordinates (3 momenta and 3 coordinates for each particle). In other words, even though they have a potentially complex multi-particle system, they are only tracking the distribution of single-particle parameters. This means they are not interested in all the correlations between different particles. Such an approach is useful if the particles are non-interacting, and also if they are weakly interacting, in which case a sort of "molecular chaos" sets in, like the chaotic motions of particles in a gas.
{ "domain": "physics.stackexchange", "id": 10385, "tags": "quantum-field-theory, statistical-mechanics, terminology, bose-einstein-condensate, semiclassical" }
Question related to L-arginine biosynthesis
Question: With respect to the L-arginine biosynthesis pathway, the very first reaction converts L-glutamate to N-acetyl L-glutamate. In the linked reaction scheme, why are only L-glutamate and N-acetyl glutamate considered, and not acetyl-CoA or the coenzyme? Is this because L-glutamate and N-acetyl L-glutamate are the main compounds to be considered? Why are they the main compounds being considered? Are these two compounds (L-glutamate and N-acetyl L-glutamate) the ligands? Answer: The synthesis of N-acetylglutamate is mediated by the enzyme N-acetylglutamate synthase. This enzyme has L-glutamate as its substrate and uses acetyl-coenzyme A as a co-enzyme acetyl donor. Acetyl-coenzyme A (acetyl-CoA) is generally abbreviated in structural formulas, because it is a relatively complex molecule. The only thing of relevance is the acetyl group, which is transferred from a thiol group to L-glutamate. Note that ligand is a term reserved for molecules binding to receptors; substrate is the proper term for intermediates in enzymatic reactions. N-acetylglutamate synthesis. Source: The Medical Biochemistry page
{ "domain": "biology.stackexchange", "id": 3954, "tags": "organic-chemistry, biochemistry" }
What is the frequency response function for an input through a differentiator and mixer?
Question: Through the differentiator, the frequency response will be $j \omega X(j\omega)$, but what about through a mixer with $\sin(\omega_ct)?$ Will it be $$\frac{1}{2j} j\omega\left[X(j(\omega+\omega_c))-X(j(\omega-\omega_c))\right]$$ $$\text{or}$$ $$\frac{1}{2j}\left[j(\omega+\omega_c)X(j(\omega+\omega_c))-j(\omega-\omega_c)X(j(\omega-\omega_c))\right]$$ I'm not sure if for a mixer you replace all $\omega$ with $\omega \mp \omega_c$ or not. Answer: The output of the differentiator is $Y(\omega)= j\omega X(\omega).$ Multiplying by the sine in the time domain is convolving with the Fourier transform of the sine in the frequency domain: $$\mathcal F\left\{\sin(\omega_ct)\right\} = \frac{1}{2j}(\delta(\omega+\omega_c)-\delta(\omega-\omega_c))$$ This results in the following expression for the output of the mixer, as a function of $Y(\omega)$: $$\frac{1}{2j}(Y(\omega+\omega _c)- Y(\omega-\omega _c))$$ Substituting back for $Y(\omega)$ in terms of $X(\omega)$: $$\frac{1}{2j}(j(\omega+\omega _c) X(\omega+\omega _c)- j(\omega-\omega _c) X(\omega-\omega _c))$$ which is $$\frac{1}{2}\left[(\omega+\omega _c) X(\omega+\omega _c)- (\omega-\omega _c) X(\omega-\omega _c)\right]$$
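As a sanity check (my own sketch, not part of the original answer), feeding $x(t)=\cos(\omega_0 t)$ through the differentiator-then-mixer chain shows the output lines sit at $\omega_0 \pm \omega_c$ but keep the weight $\omega_0$ - the original frequency - exactly as the final shifted-$Y$ expression predicts:

```python
import sympy as sp

t, w0, wc = sp.symbols('t omega0 omega_c', positive=True)

x = sp.cos(w0 * t)                    # test input: spectral lines at +/- omega0
y = sp.diff(x, t) * sp.sin(wc * t)    # differentiator followed by the mixer

# Evaluating the shifted-Y expression at the lines omega = omega0 +/- omega_c
# keeps the weight (omega -/+ omega_c) = omega0, i.e. the time-domain output is
# (omega0/2) * [cos((omega0+omega_c) t) - cos((omega0-omega_c) t)]:
expected = (w0 / 2) * (sp.cos((w0 + wc) * t) - sp.cos((w0 - wc) * t))

residual = sp.expand(y - expected, trig=True)  # simplifies to 0
```

In other words, you shift the whole product $Y(\omega)=j\omega X(\omega)$, rather than shifting $X$ alone or re-multiplying by a shifted frequency.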
{ "domain": "dsp.stackexchange", "id": 4860, "tags": "fourier-transform, frequency-response, frequency-modulation" }
Auto.arima with xreg in R, restriction on forecast periods
Question: I am using the forecast package and implementing auto.arima with xreg. Here I want to forecast only 1 year ahead, but I am unable to use the h parameter in the forecast function. Below is the reason for that. The definition given in the manual (F1 check) is: h = "Number of periods of forecast, but if xreg is used, 'h' is ignored and the forecast period will be the number of rows" Please suggest an alternate way to use h for a specific forecast period. Answer: Using xreg suggests that you have external (exogenous) variables. In this, a regression model is fitted to the external variables with ARIMA errors. When forecasting you need to provide future values of these external variables. In practice, these are often forecasts or could be known. For example, if you're trying to predict Sales and you use Advertising spend as an external variable, you may know the advertising spend for the upcoming year. auto.arima then produces forecasts for the length of xreg, therefore disregarding h. Based on your comments below, I've provided an example script demonstrating this based on the Sales example above. library(forecast) # Generate sample data sales <- sample(100:170, 4*10, replace = TRUE) advertising <- sample(50:70, 4*10, replace = TRUE) # Create time series objects. sales_ts <- ts(sales, frequency = 4, end = c(2017, 4)) fit <- auto.arima(sales_ts, xreg = advertising) # If we pass external_regressor into the forecast, h will be disregarded and we will # get a forecast for length(external_regressor) wrong_forecast <- forecast(fit, h = 4, xreg = advertising) length(wrong_forecast$mean) # Will be 40 # To forecast four quarters in advance, we must provide forecasted external regressor data # for the upcoming four quarters, so that length(new_regressor) == 4. # In reality, this data is either forecasted from another forecast, or is known. We'll randomly generate it.
upcoming_advertising <- sample(50:70, 4, replace = TRUE) correct_forecast <- forecast(fit, xreg = upcoming_advertising) length(correct_forecast$mean) # Will be 4 The key things to note are: If we forecast with the same regressors as we did when generating the forecast, h will be disregarded and a forecast will be generated for the length of xreg in your case, 10 years. As such, we must provide new data for xreg for the length of time we wish to forecast - in your case, 4 quarters.
{ "domain": "datascience.stackexchange", "id": 5425, "tags": "r, time-series, forecast" }
Typo in physics book (capacitors)
Question: I'm currently working through an AP revision guide. The section on charging a capacitor outlines the following steps: When a capacitor is connected to a battery, a current flows in the circuit until the capacitor is fully charged, then stops. The electrons flow onto the plate connected to the negative terminal of the battery, so a negative charge builds up. The build up of negative charge repels electrons from the positive terminal of the battery, making that plate positive. These electrons are attracted to the positive terminal of the battery. [The description continues] My question here is: Shouldn't the last part of step 3 say "These electrons are attracted to the negative terminal of the battery"? as that is where the negative charge is building up. Slightly confused about this but I was hoping it's just a typo. Answer: No. Step number two already mentions electrons from the battery flowing into the negative terminal of the capacitor, giving the negative terminal a negative charge. Step number three is talking about electrons flowing out of the positive terminal of the capacitor, giving the positive terminal a net positive charge.
{ "domain": "physics.stackexchange", "id": 21736, "tags": "electric-circuits, charge, capacitance, conventions, batteries" }
Map strings to (const) ints
Question: I'm imagining go generate would be a good tool for this. I want to convert strings to ints to save space. type Source int func NewSource(s string) Source { switch s { case "Twitter": return Twitter case "Facebook": return Facebook case "Gplus": return Gplus case "Spotify": return Spotify case "Linkedin": return Linkedin case "Github": return Github case "Lastfm": return Lastfm default: panic(ErrUnknownSourceType) } } const ( Twitter Source = iota Facebook Gplus Spotify Linkedin Github Lastfm ) Answer: Shortest (as you guessed) would indeed be to use go generate. If you don't want to do that: You have to enumerate your source names and source values to associate them, you can't avoid that. But this enumeration and pairing can be shorter by using a map[string]Source: var srcMap = map[string]Source{ "Twitter": Twitter, "Facebook": Facebook, "Gplus": Gplus, "Spotify": Spotify, "Linkedin": Linkedin, "Github": Github, "Lastfm": Lastfm, } func NewSource(s string) Source { if src, ok := srcMap[s]; ok { return src } panic(ErrUnknownSourceType) } Also note that panicking is a little "strong" reaction for an invalid source name. I would rather return an error along with the source, or return a special UnknownSrc source instead of panicking. And while we're at it: you should exploit the zero value of Source for representing UnknownSrc, and that way you don't even have to use the comma-ok idiom when checking in the map: indexing a map returns the zero value of the value type if the key is not found. So: const ( UnknownSrc Source = iota // It will be 0, zero value for the underlying type (int) Twitter // ... and your other sources ) And this way converting a source name to the Source type is a one-liner: func NewSource(s string) Source { return srcMap[s] } It's just indexing a map, you don't even need a function for that.
If you want to return an error instead, it could look like this: func NewSource(s string) (Source, error) { if src, ok := srcMap[s]; ok { return src, nil } return UnknownSrc, errors.New("Invalid Source name!") }
{ "domain": "codereview.stackexchange", "id": 15224, "tags": "converting, go" }
Compton effect and Photoelectric effect
Question: Why, in the Compton effect, can an electron (which is only lightly bound) not absorb the whole energy of the incident photon, while in the photoelectric effect the electron does absorb the whole energy of the incident photon? In the Compton effect the electron is not totally free; if it absorbed the whole energy, it would not violate the basic postulates of relativity. Yet in this case we treat the collision of the incident photon with the (not completely free) electron as elastic. Why is the (small) energy loss needed to free the electron from the metal not considered here? Please answer my question analytically. Answer: The two processes are different. In the Compton effect the energy of the incident photon (~ many keV or MeV) is very much larger than the binding energy of the electron in the atom (~ eV), and so a target electron bound in an atom can be considered as essentially free. In the Compton effect the target electron is ejected from the atom and the process can be treated using "billiard ball" dynamics. The equation for the change in wavelength, derived by use of the conservation of momentum and energy, shows that there is a finite maximum amount of energy that can be transferred from the photon to an electron in this process. $\lambda' - \lambda = \dfrac {h}{m_{\rm e} c}(1 - \cos \theta)$ where $\lambda' - \lambda$ is the change in wavelength of the photon and $\theta$ is the scattering angle, whose maximum value is $180^\circ$. The scattered photon, having lost some of its energy, can then undergo further collisions with electrons in the material. The photons responsible for the photoelectric effect have energies of the order of a few electron-volts ($450\; \rm nm\approx 2.8 \;eV$), and for metals it is the conduction (free) electrons which are ejected. A free electron is given enough energy by the photon it has absorbed to overcome a potential barrier (the work function energy) and escape with the rest of the photon's energy from the surface of the metal.
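A quick numeric sketch of the wavelength-shift formula (my own constants and code, not part of the original answer) shows how limited the transfer is:

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
m_e = 9.1093837e-31  # electron mass, kg
c = 2.99792458e8     # speed of light, m/s

compton_wavelength = h / (m_e * c)   # about 2.43 pm

def wavelength_shift(theta_rad):
    """Compton shift: lambda' - lambda = (h / (m_e c)) * (1 - cos(theta))."""
    return compton_wavelength * (1 - math.cos(theta_rad))

# Maximum possible shift, at backscatter theta = 180 degrees:
max_shift = wavelength_shift(math.pi)   # about 4.85 pm
```

The shift is capped at two Compton wavelengths (~4.85 pm) regardless of photon energy, so only a bounded fraction of the photon's energy can ever go to a free electron - complete absorption by a free electron is kinematically impossible.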
{ "domain": "physics.stackexchange", "id": 34712, "tags": "photons, atomic-physics, scattering, photoelectric-effect" }
Can I use the spacetime interval even if origins are not equal at $t=t'=0$?
Question: Can I use the relativity formula $(\Delta s)^2 = (c \Delta t)^2 - (\Delta r)^2$ even when the observers did not start at the same origin at $t=t'=0$? I know this is invariant, but is it invariant even if the origins of the observers were not the same at $t=t'=0$? Answer: Space-time interval: Suppose an observer measures two events as being separated in time by $\Delta t$ and by a spatial distance $\Delta x$. Then the spacetime interval $(\Delta s)^2$ between the two events, which are separated by a distance $\Delta x$ in space and by $\Delta ct =c\Delta t$ in the $ct$-coordinate, is: $$(\Delta s)^2=(\Delta ct)^2-(\Delta x)^2$$ You can see that it is not the absolute time but the time interval between the two events that is involved in the formula, and so it doesn't matter whether or not $t=t'=0$. But both observers must measure the time interval between the same events.
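A numeric illustration of this point (my own sketch; the event coordinates and boost speed are arbitrary examples): apply a full Poincaré transformation - a boost plus an origin translation - and check that the interval between the same two events is unchanged, because only differences enter the formula:

```python
import numpy as np

# Work in units where c = 1, so an event is a pair (ct, x) in 1+1 dimensions.

def boost(v):
    """Lorentz boost along x with speed v (|v| < 1, in units of c)."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    return np.array([[g, -g * v], [-g * v, g]])

def interval_sq(a, b):
    d_ct, d_x = b - a
    return d_ct**2 - d_x**2

event_a = np.array([2.0, 1.0])        # (ct, x) of two events in frame S
event_b = np.array([5.0, -3.0])

origin_shift = np.array([3.0, -7.0])  # S' origin does NOT coincide with S at t = t' = 0
L = boost(0.6)
event_a_p = L @ event_a + origin_shift  # full Poincare transform: boost + translation
event_b_p = L @ event_b + origin_shift

# The translation cancels in the differences, and the boost preserves the
# interval, so interval_sq is the same in both frames.
```

Changing `origin_shift` to anything else leaves the result untouched, which is the answer's point: only the separation between the two events matters, not where either origin sits.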
{ "domain": "physics.stackexchange", "id": 74430, "tags": "special-relativity, coordinate-systems, inertial-frames, observers" }
Ambiguity in oxidation state of phosphorus in H3PO3
Question: Phosphorus (2.19) is less electronegative than hydrogen (2.20), so in calculating the oxidation state of phosphorus, the oxidation state of hydrogen should be taken as -1; but in books it's taken as +1. Why? Answer: There are several electronegativity scales. In the Pauling scale, H and P are both at 2.1. In the Mulliken scale, H is at 2.2 and P at 2.3. In the Allred and Rochow scale, it is the contrary: H is at 2.2 and P at 2.1. These scales are contradictory for H and P. So it is necessary to choose for teaching purposes. And book authors have usually chosen +1 for H, because H is rarely at -1, and only in very reactive compounds like NaH. Ref.: Michael O. Hurst, J. Chem. Ed. 79, June 2002. How We Teach Molecular Structure to Freshmen.
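For completeness, the bookkeeping behind the question, under the textbook conventions the answer defends (H = +1, O = -2), is just:

```python
# Oxidation state of P in H3PO3, with the textbook conventions H = +1, O = -2.
# The molecule is neutral, so all oxidation states must sum to zero.
n_H, n_O = 3, 3
ox_P = 0 - (n_H * (+1) + n_O * (-2))   # gives +3
```

With H taken as -1 instead, the same sum would give +9, which is impossible for phosphorus - another practical reason books stick with H = +1 here.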
{ "domain": "chemistry.stackexchange", "id": 13565, "tags": "inorganic-chemistry, oxidation-state" }
How to pick up a card from a pile
Question: I don't know if this is a purely "robot" question or a more DIY/hackish one, but let's give it a try. I currently have a set of cards that I want to sort based on several criteria. My setup includes: a MeArm (a 4-DOF robot arm); a Raspberry Pi 3 plus shields for controlling the arm; and a mini vacuum pump, held in place by the gripper. These are the steps: move the arm on top of the pile of cards; turn on the vacuum pump; pick up the first card; move the arm to the right spot; turn off the pump and let the card fall; repeat. Everything is working fine; my only issue occurs when I'm lifting the arm. It seems that there is some kind of force between the cards, and under the first one several others come up, attached. I tried shaking the arm to make them fall, but it's not working. Any suggestions? Maybe I'm missing some simple/obvious solution. Answer: If your cards are like playing cards, they are likely very smooth. Cards like this can adhere or bond together in a similar way to "wringing" gauge blocks: http://en.wikipedia.org/wiki/Gauge_block#Wringing The phenomenon isn't that well understood, but it is thought to be a combination of air pressure, surface tension and/or molecular attraction when two very flat surfaces are brought into contact. To pick up a single card, you could try a different movement to help break the adhesion between cards, or, if possible, modify the cards to make them less likely to adhere. For example, roughing up the card surfaces may help.
{ "domain": "robotics.stackexchange", "id": 1359, "tags": "robotic-arm, raspberry-pi" }
Find force from potential energy
Question: Suppose we have a force whose component along $\vec{r}$ is $F_r$, where $\vec{r} = x \hat{i} + y \hat{j} + z \hat{k}$, and the force is the derivative of $U$ (the potential energy) with respect to $r$. So my question is: if $F_r = \frac{dU}{dr}$, then is $F_r = \frac{\partial{U}}{\partial{x}} + \frac{\partial{U}}{\partial{y}}+ \frac{\partial{U}}{\partial{z}}$ correct? (Here $\partial$ denotes a partial derivative.) Answer: No - the force is the negative gradient, \begin{equation} \vec{F} = - \frac{\partial U} {\partial{\vec{r}}} =-\left(\frac{\partial U} {\partial{x}}; \frac{\partial U} {\partial{y}}; \frac{\partial U} {\partial{z}}\right) \end{equation} whose components are the separate partial derivatives, not their sum.
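A small symbolic sketch (the potential here is an assumed example, not from the question) makes the point explicit - each force component is its own partial derivative, with a minus sign, rather than a sum of all three partials:

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k')

# Example potential (assumed for illustration): U = (1/2) k (x^2 + y^2 + z^2)
U = k * (x**2 + y**2 + z**2) / 2

# Force = NEGATIVE gradient: one partial derivative per component.
F = [-sp.diff(U, q) for q in (x, y, z)]   # [-k*x, -k*y, -k*z]
```

For this $U$ the result is the restoring force $\vec F = -k\vec r$ of a 3D harmonic oscillator; summing the partials instead would collapse a vector into a single scalar and lose the direction of the force.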
{ "domain": "physics.stackexchange", "id": 83404, "tags": "work, potential-energy, differentiation" }
What is the relationship between Choi and Chi matrix in Qiskit?
Question: I'm struggling with the framework for quantum process tomography on Qiskit. The final step of such a framework is running fit method of ProcessTomographyFitter class. Documentation states that such function gives a Choi matrix as output. Nevertheless, I'd want the Chi matrix to define the superoperator of a circuit. Specifically, I'm interested in understanding how a 2-qubit circuit affects a single qubit. Thus, my questions are: What is the relationship between Choi and Chi matrix? When do they coincide? How to obtain Chi from Choi matrix? Answer: ( I copied some text from a previous answer of mine) Defining the Choi and $\chi$ matrix The Choi matrix is a direct result of the Choi-Jamiolkowski isomorphism. Some intuition on what this is can be found in this previous answer by Norbert Schuch. Consider the maximally entangled state $|\Omega \rangle = \sum_{\mathrm{i}}|\mathrm{i}\rangle \otimes |\mathrm{i}\rangle$, where $\{|\mathrm{i}\rangle\}$ forms a basis for the space on which $\rho$ acts. (Note that we thus have a maximally entangled state of twice as many qubits). The Choi matrix is the state that we get when on one of these subsystems $\Lambda$ is applied (leaving the other subsystem intact): \begin{equation} \rho_{\mathrm{Choi}} = \big(\Lambda \otimes I\big) |\Omega\rangle\langle\Omega|. \end{equation} As the Choi matrix is a state, it must be positive semidefinite (corresonding the the CP constraint) and must have unit trace (necessary but not sufficient for the TP constraint). The process- or $\chi$-matrix comes from the fact that we can write our map as a double sum: \begin{equation} \Lambda(\rho) = \sum_{m,n} \chi_{mn}P_{m}\rho P_{n}^{\dagger}, \end{equation} where $\{P_{m}\}$ & $\{P_{n}\}$ form a basis for the space of density matrices; we use the Pauli basis $\{I,X,Y,Z\}^{\otimes n}$ (thereby omitting the need for the $\dagger$ at $P_{n}$). 
The matrix $\chi$ now encapsulates all information of $\Lambda$; the CP constraint reads that $\chi$ must be positive semidefinite, and the trace constraint reads that $\sum_{m,n}\chi_{mn}P_{n}P_{m} \leq I$ (with equality for TP). Computing one from another From this, we get the following two identities: \begin{equation} \begin{split} \rho_{\mathrm{Choi}} &= \sum_{m,n} \chi_{m,n} |P_{m}\rangle\rangle\langle\langle P_{n}|, \\ \chi_{m,n} &= \langle\langle P_{m} | \rho_{\mathrm{Choi}} |P_{n}\rangle\rangle, \end{split} \end{equation} where $|P_{m}\rangle\rangle$ is the 'vectorized' version of $P_{m}$, which is essentially just the columns of $P_{m}$ stacked on top of each other, giving a vector. That answers question 3. Again I shamelessly 'self-promote': in the first appendix of my thesis I work through proofs of all these relations. The most intuitive way is by using the Kraus decomposition as an intermediary, but it is not needed. Relationship between the two From this, you can see that the Choi matrix and the chi matrix do indeed have some relationship. In fact, by choosing either the (qubit)-basis in which we express the Choi matrix, or choosing the (operator)-basis that we associate with the $\chi$-matrix, they can be one and the same. As @AdamZalcman has pointed out in his comment (Thank you!), from the identity $\chi_{m,n} = \langle \langle P_{m}|\rho_{\mathrm{Choi}}| P_{n}\rangle\rangle$ we can choose the $P_{m/n}$ so that we just select the $m$-th row and $n$-th column of $\rho_{\mathrm{Choi}}$. This works if $P_{k} = |i\rangle \langle j|$, with $k = id + j$. Since both $i$ and $j$ run from $0$ to $d-1$ (indicating the column and row, respectively), this gives exactly $d^{2}$ elements. The same effect can be reached if one expresses the Choi matrix in a different basis, while keeping the $P_{k}$ associated with $\chi_{m,n}$ the usual Paulis. For the two to coincide then (i.e. 
$\chi_{m,n} = \rho_{\mathrm{Choi}}^{m,n}$), we see that $\rho_{\mathrm{Choi}}^{m,n}$ should be expressed in the `vectorized-Pauli-basis' (which is a set of states, i.e. a basis for the Hilbert space!) - this is exactly the Bell basis.
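A minimal NumPy sketch of the identity $\chi_{m,n} = \langle\langle P_m|\rho_{\mathrm{Choi}}|P_n\rangle\rangle$ (my own code, not Qiskit's API; I use row-stacking vectorization, which pairs with the $(\Lambda\otimes I)$ Choi convention - the column-stacking convention pairs with $(I\otimes\Lambda)$ instead - and divide by $d^2$ so that $\chi$ is the coefficient matrix in the unnormalized Pauli basis):

```python
import numpy as np

# Single-qubit Pauli basis {I, X, Y, Z}
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def choi_from_kraus(kraus_ops):
    """Unnormalized Choi matrix of L(rho) = sum_k K rho K^dag, as sum_k |K>><<K|.
    Row-stacking vec is used so that (K (x) I)|Omega> = vec(K)."""
    d = kraus_ops[0].shape[0]
    C = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        v = K.reshape(-1, 1)  # row-stacking vectorization
        C += v @ v.conj().T
    return C

def choi_to_chi(C, basis):
    """chi_mn = <<P_m| rho_Choi |P_n>> / d^2, normalized so that chi is the
    coefficient matrix in L(rho) = sum_mn chi_mn P_m rho P_n."""
    d2 = C.shape[0]
    chi = np.empty((len(basis), len(basis)), dtype=complex)
    for m, Pm in enumerate(basis):
        for n, Pn in enumerate(basis):
            vm, vn = Pm.reshape(-1, 1), Pn.reshape(-1, 1)
            chi[m, n] = (vm.conj().T @ C @ vn)[0, 0] / d2
    return chi

# Bit-flip channel L(rho) = X rho X: chi should be zero except chi_XX = 1.
chi = choi_to_chi(choi_from_kraus([X]), paulis)
```

Only the pairing of conventions matters: pick one vectorization and the matching Choi definition, and the sandwich formula above converts between the two representations either way.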
{ "domain": "quantumcomputing.stackexchange", "id": 1499, "tags": "qiskit, quantum-operation, state-tomography, quantum-process-tomography" }
Reason behind canonical quantization in QFT?
Question: Reason behind canonical quantization in QFT? In scalar field theory we simply promote the scalar field $\phi(x)$ to a set of operators: $\hat{\phi}(x)$. What is the reason behind this? Answer: There are many ways to answer this question with varying levels of sophistication but here's an attempt at a short and relatively non-sophisticated answer. Assume the classical field obeys a wave equation such that each mode of the field obeys the equation of motion of an independent harmonic oscillator. It's straightforward to show that promoting the classical equation of motion of the field to an operator equation of motion is equivalent to quantizing each mode of the classical field as an independent quantum harmonic oscillator. This allows the quanta of each mode, which are created and destroyed by associated ladder operators for each mode, to be interpreted as "particles" with definite energy and momentum.
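As a sketch of the mode-by-mode statement (putting the field in a finite box of volume $V$ so the momenta are discrete; the notation here is mine, not the answer's):

```latex
(\partial_t^2 - \nabla^2 + m^2)\,\phi = 0 ,
\qquad
\phi(t,\mathbf{x}) = \frac{1}{\sqrt{V}} \sum_{\mathbf{k}} \phi_{\mathbf{k}}(t)\, e^{\mathrm{i}\mathbf{k}\cdot\mathbf{x}}
\;\Longrightarrow\;
\ddot{\phi}_{\mathbf{k}} + \omega_{\mathbf{k}}^{2}\, \phi_{\mathbf{k}} = 0 ,
\qquad
\omega_{\mathbf{k}}^{2} = \mathbf{k}^{2} + m^{2} .
```

Promoting $\phi_{\mathbf{k}}$ and its conjugate momentum to operators with canonical commutators then quantizes each mode as an independent harmonic oscillator, with ladder operators $a_{\mathbf{k}}, a_{\mathbf{k}}^{\dagger}$ creating and destroying quanta of energy $\omega_{\mathbf{k}}$.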
{ "domain": "physics.stackexchange", "id": 8559, "tags": "quantum-field-theory, operators, quantization, second-quantization" }
Does scale invariance imply massless or continuous mass distribution?
Question: $\newcommand{\ket}[1]{\lvert #1 \rangle}\newcommand{\bra}[1]{\langle #1 \rvert}\newcommand{\scp}[2]{\langle #1 \vert #2 \rangle}$ In his 2008 slides (PDF), Tzu-Chiang Yuan mentions the following on p. 21: Suppose $P^2\ket{p}=m^2\ket{p}$ with $\scp{p}{p}=1$ and fixed $m^2$. Then, $$[P^2,D]=P^{\mu}[P_{\mu},D]+[P^{\mu},D]P_{\mu}=2\mathrm{i}P^2$$ $$\bra{p}[P^2,D]\ket{p}=2\mathrm{i}\bra{p}P^2\ket{p} =2\mathrm{i}m^2$$ $$\bra{p}[P^2,D]\ket{p}=\bra{p}(m^2D-Dm^2)\ket{p}=0$$ implies $m=0$ Or continuous mass spectrum since: $$e^{\mathrm{i}sD}P^2e^{-\mathrm{i}sD}=e^{2s}P^2$$ Where $D=x_{\mu}P^{\mu}$ and $P^{\mu}$ up there is $\mathrm{i}P^{\mu}$ My questions are, first, why would we assume that $P^2\ket{p}=m^2\ket{p}$ and how did his analysis make him conclude that $m=0$? Lastly, he did not define $s$ in the continuous spectrum scenario, and I also did not see why $e^{\mathrm{i}sD}P^2e^{-\mathrm{i}sD}=e^{2s}P^2$ would imply a continuous spectrum. I would appreciate it if someone could explain those points. Answer: $p^2 = m^2$ is the definition (up to a minus sign) of the mass of a momentum eigenstate. He derived that the same quantity (the expectation value of $[P^2,D]$ w.r.t. $\lvert p\rangle$) equals $0$ and $2\mathrm{i}m^2$, so $m^2 = 0$. The $s$ is the scale parameter of the scale transformation induced by $D$, and it is any real number, so, starting from a given state with mass value $p^2 = m^2$, we can produce any positive mass value by the scale transformation, so $P^2 \mapsto \mathrm{e}^{2s}P^2$ under a scale transformation implies that the spectrum is continuous.
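Regarding the last identity in the question: it follows from $[P^2,D]=2\mathrm{i}P^2$ via the Hadamard lemma for the adjoint action (a sketch):

```latex
e^{\mathrm{i}sD} P^{2} e^{-\mathrm{i}sD}
  = \sum_{n=0}^{\infty} \frac{1}{n!} \,(\mathrm{ad}_{\mathrm{i}sD})^{n} P^{2} ,
\qquad \mathrm{ad}_{X} Y \equiv [X,Y] .
% each step uses [D,P^2] = -2i P^2, so ad_{isD} P^2 = is(-2i)P^2 = 2s P^2:
(\mathrm{ad}_{\mathrm{i}sD})^{n} P^{2} = (2s)^{n} P^{2}
\;\Longrightarrow\;
e^{\mathrm{i}sD} P^{2} e^{-\mathrm{i}sD} = e^{2s} P^{2} .
```

Since $s$ ranges over all reals, conjugating a mass-$m^2$ eigenstate by the scale transformation produces eigenvalues $e^{2s}m^2$, sweeping out a continuum.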
{ "domain": "physics.stackexchange", "id": 22135, "tags": "quantum-field-theory, special-relativity, mass, representation-theory, scale-invariance" }
Fascinating, ma'am
Question: Finding a famous question with high score I haven't voted on yet was proving to be a bit difficult for me. So I came up with this simple SEDE query to find good candidates: SELECT TOP 10 Id AS [Post Link], ViewCount FROM Posts WHERE ViewCount >= 10000 AND Score >= 25 ORDER BY ViewCount DESC Can this be improved in any way? (Btw, I'm a total noob with SEDE.) Answer: General Feedback With a query this short it's hard to go wrong, and you haven't. The formatting is consistently SHOUTCASE for keywords, as per convention. And there is nothing glaringly wrong at all. I really can't find anything to complain about, you've got a nice query. A Small Suggestion Have you considered adding parameters to the query? That way if you, or someone else, want to run it using different numbers for the minimum ViewCount or Score then they wouldn't need to alter the query. SEDE provides a special syntax for parameters, which looks like this: ##MinViewCount:int?10000## where MinViewCount is the parameter name, int is the data type and 10000 is the default value. SELECT TOP 10 Id AS [Post Link], ViewCount FROM Posts WHERE ViewCount >= ##MinViewCount:int?10000## AND Score >= ##MinScore:int?25## ORDER BY ViewCount DESC
{ "domain": "codereview.stackexchange", "id": 11172, "tags": "beginner, sql, sql-server, stackexchange" }
Dirac free particle with $x$-momentum
Question: For a free particle with momentum $\mathbf{p}=p\mathbf{x}$, the Dirac Hamiltonian is \begin{equation} H=\alpha_xp+\beta m = \begin{pmatrix} m & 0 & 0 &p\\ 0 & m & p & 0\\ 0 & p & -m & 0\\ p & 0 & 0 & -m \end{pmatrix} \ , \end{equation} of which one eigenstate is \begin{equation} u_1 = \begin{pmatrix} 1 \\ 0 \\0\\ \frac{p}{E_p+m}\end{pmatrix} \ . \end{equation} This should also be an eigenstate of the helicity $\hat{h} = \mathbf{\Sigma}\cdot\mathbf{\hat p}= \begin{pmatrix} \sigma_x & 0 \\ 0 & \sigma_x \end{pmatrix}$. However it is pretty clear to see that \begin{equation} u_1^\dagger\hat hu_1=\begin{pmatrix} 1 & 0 & 0 & \frac{p}{E_p+m} \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 & 0\\ 1 & 0&0&0\\ 0 &0&0&1\\ 0&0&1&0 \end{pmatrix} \begin{pmatrix} 1 \\0 \\ 0 \\ \frac{p}{E_p+m} \end{pmatrix}=0 \ , \end{equation} which should not be the case. I think I'm getting confused with something notational/simple here; please let me know. Answer: In the single-particle Dirac theory, the operators $\vec{p}$ and $h=\vec{\Sigma}\cdot\vec{p}$ both commute with the Hamiltonian. That means that it is possible to find eigenstates of $H=\vec{\alpha}\cdot\vec{p}+\beta m$ that are also simultaneous eigenstates of $\vec{p}$ and $h=\vec{\Sigma}\cdot\vec{p}$. However, it does not mean that an arbitrary eigenstate of the three-momentum $\vec{p}$ will also be an eigenstate of $h=\vec{\Sigma}\cdot\vec{p}$, which is what you seem to be aiming for in your example. The simultaneous eigenstates of $h=\Sigma_{x}$ when $\vec{p}=p\hat{x}$ are (in the Dirac representation used in the question) $$u=\left[ \begin{array}{c} u_{>} \\ u_{<} \end{array}\right]=\left[ \begin{array}{c} 1 \\ \pm 1 \\ \pm\frac{p}{E_{p}+m} \\ \frac{p}{E_{p}+m}\end{array}\right].$$ Note that both the large ($u_{>}$) and small ($u_{<}$) components of the Dirac four-spinor are separately two-spinor eigenvectors of $\sigma_{x}$.
What you have tried to do with the $u$ in the question is take a spinor in which $u_{>}$ is an up eigenstate of $\sigma_{z}$, and what you found was exactly what was to be expected: that the energy and momentum eigenstate with this $u_{>}$ was not an eigenstate of $\Sigma_{x}$ (or $\Sigma_{z}$), unless $p=0$.
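A numerical check of both statements (the values of $m$ and $p$ are arbitrary; this sketch is mine, not part of the original answer):

```python
import numpy as np

m, p = 1.0, 0.7
E = np.sqrt(p**2 + m**2)

# H = alpha_x p + beta m in the Dirac representation, as in the question
H = np.array([[m, 0, 0, p],
              [0, m, p, 0],
              [0, p, -m, 0],
              [p, 0, 0, -m]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
Sigma_x = np.kron(np.eye(2), sx)  # block-diagonal diag(sigma_x, sigma_x)

# u1 from the question: an energy eigenstate, but NOT a helicity eigenstate
u1 = np.array([1, 0, 0, p / (E + m)])
assert np.allclose(H @ u1, E * u1)
assert not np.allclose(Sigma_x @ u1, u1)
assert not np.allclose(Sigma_x @ u1, -u1)

# A simultaneous eigenstate of H (energy +E) and Sigma_x (helicity +1)
u = np.array([1, 1, p / (E + m), p / (E + m)])
assert np.allclose(H @ u, E * u)
assert np.allclose(Sigma_x @ u, u)
```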
{ "domain": "physics.stackexchange", "id": 91448, "tags": "quantum-mechanics, dirac-equation, spinors" }
Maximizing Multiplicity of Einstein Solid == (Temperature = $\infty$)?
Question: If I have a system consisting of 2 Einstein solids (A and B) is it equivalent to say that maximizing the multiplicity of the system is the same as setting the temperature to $\infty$? I have two reasons to support this idea. 1) The total multiplicity of the system can be expressed as: $$\Omega_{tot} = \Omega_A(q_A,N_A) \Omega_B(q_B,N_B)$$ which is the product of the individual multiplicities, where $q = q_A + q_B = $ total quanta of energy shared between the two solids, and $N = N_A + N_B =$ total number of quantum harmonic oscillators. I wish to maximize the multiplicity with respect to solid A's total internal energy ($U_A$): $$\left( \frac{ \partial \Omega_{tot} }{ \partial U_A } \right) = 0 \implies \left( \frac{ \partial \ln( \Omega_{tot} )}{ \partial U_A } \right) = \frac{ \Omega_{tot}'}{ \Omega_{tot}} = 0$$ This is true because $\ln(x)$ is a monotonic function. But it should be noted that the definition of temperature is such that: $$\frac{1}{kT} = \left( \frac{ \partial \ln( \Omega_{tot} )}{ \partial U_A } \right) = 0 \implies T= \infty$$ Here $k$ is the Boltzmann constant. 2) This makes some intuitive sense. On the Gaussian distribution as a function of internal energy, the maximum multiplicity lies on the peak. The change in temperature gives a quantification of the direction of heat flow. If the maximum multiplicity is attained, then heat cannot flow any higher, it can only flow downward. This corresponds to infinite temperature. I hope my intuition here is correct. Answer: My intuition was correct. For more information visit the APS website.
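A tiny numerical illustration of point 1), with solid sizes of my choosing: the product $\Omega_A \Omega_B$ peaks exactly at the symmetric energy split, which is where the derivative of the log-multiplicity with respect to $U_A$ vanishes.

```python
from math import comb

def multiplicity(N, q):
    # Einstein solid: ways to distribute q energy quanta among N oscillators
    return comb(q + N - 1, q)

N_A = N_B = 50
q_total = 100

omega_tot = [multiplicity(N_A, q_A) * multiplicity(N_B, q_total - q_A)
             for q_A in range(q_total + 1)]
q_star = max(range(q_total + 1), key=lambda q_A: omega_tot[q_A])

# With identical solids, the total multiplicity peaks at the even split
assert q_star == q_total // 2
```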
{ "domain": "physics.stackexchange", "id": 7795, "tags": "quantum-mechanics, homework-and-exercises, thermodynamics, statistical-mechanics" }
Misconception about closed string worldsheet definition
Question: I'm a little confused about the precise way to define the worldsheet $\Sigma$ of a closed string. Its parametrization must be of the form $X: \Sigma \longrightarrow \mathbb{R}^{1,D-1}$ and one of its characteristic properties is the periodicity in $\sigma$ coordinate: $$X(\tau, \sigma) = X(\tau, \sigma +2π) \tag1.$$ The problem is that some books define $\Sigma$ as the set $\mathbb{R} \times [0,π]$ (Becker & Becker), or $\mathbb{R} \times [0,2π]$ (Polchinski) and most of the books claim that $\Sigma$ has the topology of a cylinder. So, applying to $X$ a $\sigma$-value outside the domain $[0,2π]$, for example, makes no sense. I know that we usually define the circle $S^1$ as the quotient $\mathbb{R}/\sim$, where $\forall \sigma_1,\sigma_2 \in \mathbb{R}, \sigma_1\sim \sigma_2 \iff \sigma_2 = \sigma_1 +2πn, \ n \in \mathbb{Z}$. What also would make no sense is to define $$X: \mathbb{R} \times S^1 \longrightarrow \mathbb{R}^{1,D-1} \tag2, $$ since the elements of $S^1$ are sets, not numbers. So, how do I correctly define the worldsheet and parametrization of a closed string? Answer: All the things you claim make no sense do, in fact, make sense. The following are equivalent: A function $f_r : \mathbb{R}\to X$ with $f(x) = f(x+2\pi)$ for all $x$. A function $f_i : [0,2\pi]\to X$ with $f(0) = f(2\pi)$ A function $f_s : S^1 \to X$ We start with a function $f_r$ as in $1$. Then the restriction $f_i = f_r\vert_{[0,2\pi]}$ is a function as in 2. If we think of an element of the circle $S^1$ as an angle $\phi\in[0,2\pi)$ (in your representation as a quotient of $\mathbb{R}$, just choose the smallest nonnegative number in each equivalence class as its representative), then $f_s(\phi) = f_i(\phi)$ is a function as in 3. Finally, given an $f_s$ and again choosing the angle parametrization of $S^1$, define a function $f_r$ on $\mathbb{R}$ by $f_r(x) = f_s(x \mod 2\pi)$. This is a function as in 1.
Therefore, it does not matter whether people say that the worldsheet parameter $\sigma$ has the property $f(\sigma) = f(\sigma + 2\pi)$ for all $f$, that it is valued in $[0,2\pi]$ with $f(0) = f(2\pi)$ for all $f$ or that it is valued in $S^1$ - all these things describe the same situation, namely that of the circle $S^1$ and hence the closed worldsheet of the free string as a cylinder. Note that people aren't always careful with their language, so if stuff like "$\mathbb{R}\times[0,2\pi]$ is the cylinder" bothers you because it is technically wrong you have to learn to live with that and extend some amount of good faith towards the authors (in this case that they really meant to glue the ends of the interval together to form an $S^1$).
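The round trip between the three descriptions in the answer can be spelled out concretely (the sample function is arbitrary):

```python
import math

# f_s: a function on the circle S^1, parametrized by an angle in [0, 2*pi)
def f_s(phi):
    return math.cos(3 * phi)

# f_r: its periodic extension to all of R, built as in the construction above
def f_r(x):
    return f_s(x % (2 * math.pi))

# periodicity f_r(x) = f_r(x + 2*pi) holds for arbitrary x ...
for x in (-7.3, 0.0, 1.0, 12.5):
    assert math.isclose(f_r(x), f_r(x + 2 * math.pi))

# ... and the restriction f_i to [0, 2*pi] has matching endpoints
assert math.isclose(f_r(0.0), f_r(2 * math.pi))
```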
{ "domain": "physics.stackexchange", "id": 86114, "tags": "string-theory, topology" }
How do I get started in theoretical CS?
Question: I'm a freshman studying computer science and I already know that I want to go into academia with a focus on theoretical comp sci. I already read some of the papers referenced in this question and this question convinced me further. What should I be doing now, as an undergrad, to get involved in the field? What can I do to prepare for research in the field? Answer: Let me provide an answer from the other side. I've had a few undergraduate student researchers work with me. The experience has been mixed: with some, I have published papers and have work in progress, and with others, we never really got off to any kind of start. It's great that you know what you want to do. As an undergraduate, here's what you should be focusing on: building the mathematical "muscles" that will help you when you start working on problems in earnest; exploring different aspects of theoretical CS to get a sense of the area and figure out what kinds of problems/areas you find interesting; and (depending on the area) working out some puzzles, maybe solving some exercises, and working your way up to a research question. Find a professor to guide you, and PUT IN THE TIME! The hardest thing you'll face is creating the open time to think about problems in the midst of classwork, assignments and exams. But you need to reserve blocks of time for your independent study and research, otherwise it will be very hard to make any kind of progress. How you do this is up to you: maybe you can find a professor to meet with you once a week and set intermediate goals for you, or maybe you can set a long-term goal (working through X exercises from a text) and work steadily on that.
{ "domain": "cstheory.stackexchange", "id": 517, "tags": "soft-question, advice-request, career" }
C++ 3D Vector Implementation
Question: I have been learning C++ now for 2 months and this week I started reading a book on 3D graphics. I like coding whatever mathematical stuff I learn so I can understand it better, so when I learnt about Vectors, I decided to write a class on it. I'd be grateful for any suggestions on my code, be it style-wise, performance-wise, or anything at all. Some considerations I took when writing this code were: I use inline functions because I heard that the C++ compiler, while smart, will not be able to inline everything that I want to be inlined automatically, even if I give hints. I use commenting style that takes a lot of extra space. For me personally, it aids me in reading and documenting my code step by step. I heard use of 'friend' operator is discouraged, but I seem to like using it. It allows me to code functions that, while could work as methods (e.g. vector.CrossProduct(otherVector)) sound better as functions CrossProduct(vector1, vector2) in my opinion. I don't comment the implementation code. It seems too trivial to comment, I wonder if you think this is the case too? //********************************************************************** //* Vector.h //* //* Just Another Vector Implementation (JAVI) //********************************************************************** #ifndef __VECTOR_H__ #define __VECTOR_H__ #include "Math.h" #include <ostream> class Vector { public: //****************************************************************** //* Constructors //****************************************************************** // Default Constructor //------------------------------------------------------------------ // Sets the x, y and z components of this Vector to zero. 
//------------------------------------------------------------------ Vector (); //------------------------------------------------------------------ // Component Constructor //------------------------------------------------------------------ // Sets the x, y and z components of this Vector to corresponding // x, y and z parameters. //------------------------------------------------------------------ Vector (float x, float y, float z); //------------------------------------------------------------------ // Copy Constructor //------------------------------------------------------------------ // Sets the x, y and z components of this Vector to equal the x, y // and z components of Vector v. //------------------------------------------------------------------ Vector (const Vector &v); //****************************************************************** //****************************************************************** //* Friend Operators //****************************************************************** // Stream Insertion Operator //------------------------------------------------------------------ // Writes the Vector v into the output stream in the format (x,y,z) // so it can be used by various iostream functions. //------------------------------------------------------------------ friend std::ostream &operator << (std::ostream &os, const Vector &v); //------------------------------------------------------------------ // Equal To Operator //------------------------------------------------------------------ // Compares the x, y and z components of Vector v1 and to the x, y // and z components of Vector v2 and returns true if they are // identical. Otherwise, it returns false. 
//------------------------------------------------------------------ friend bool operator == (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Not Equal To Operator //------------------------------------------------------------------ // Compares the x, y and z components of Vector v1 and to the x, y // and z components of Vector v2 and returns true if they are not // identical. Otherwise, it returns false. //------------------------------------------------------------------ friend bool operator != (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Addition Operator //------------------------------------------------------------------ // Adds the x, y and z components of Vector v1 to the x, y and z // compenents of Vector v2 and returns the result. //------------------------------------------------------------------ friend Vector operator + (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Subtraction Operator //------------------------------------------------------------------ // Subtracts the x, y and z components of Vector v2 to the x, y and // z compenents of Vector v1 and returns the result. //------------------------------------------------------------------ friend Vector operator - (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Multiplication Operator //------------------------------------------------------------------ // Multiplies the x, y and z components of Vector v with a scalar // value and returns the result. 
//------------------------------------------------------------------ friend Vector operator * (const Vector &v, float scalar); friend Vector operator * (float scalar, const Vector &v); //------------------------------------------------------------------ // Division Operator //------------------------------------------------------------------ // Divides the x, y and z components of Vector v with a scalar // value and returns the result. //------------------------------------------------------------------ friend Vector operator / (const Vector &v, float scalar); friend Vector operator / (float scalar, const Vector &v); //****************************************************************** //****************************************************************** //* Friend Functions //****************************************************************** // DotProduct //------------------------------------------------------------------ // Computes the dot product between Vector v1 and Vector v2 and // returns the result. //------------------------------------------------------------------ friend float DotProduct (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // CrossProduct //------------------------------------------------------------------ // Computes the cross product between Vector v1 and Vector v2 and // returns the result. //------------------------------------------------------------------ friend Vector CrossProduct (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Lerp //------------------------------------------------------------------ // Returns a linear interpolation between Vector v1 and Vector v2 // for paramater t, in the closed interval [0, 1]. 
//------------------------------------------------------------------ friend Vector Lerp (const Vector &v1, const Vector &v2, float t); //------------------------------------------------------------------ // Clamp - TODO: make this a method instead? //------------------------------------------------------------------ // Clamps this Vector's x, y and z components to lie within min and // max. //------------------------------------------------------------------ friend Vector Clamp (const Vector &v1, float min, float max); //------------------------------------------------------------------ // Min //------------------------------------------------------------------ // Returns a Vector whos x, y and z components are the minimum // components found in Vector v1 and Vector v2. //------------------------------------------------------------------ friend Vector Min (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // Max //------------------------------------------------------------------ // Returns a Vector whos x, y and z components are the maximum // components found in Vector v1 and Vector v2. //------------------------------------------------------------------ friend Vector Max (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // DistanceBetween //------------------------------------------------------------------ // Returns the scalar distance between the Vector v1 and the Vector // v2. //------------------------------------------------------------------ friend float DistanceBetween (const Vector &v1, const Vector &v2); //------------------------------------------------------------------ // DistanceBetweenSquared //------------------------------------------------------------------ // Returns the scalar squared distance between the Vector v1 and // the Vector v2. 
//------------------------------------------------------------------ friend float DistanceBetweenSquared (const Vector &v1, const Vector &v2); //****************************************************************** //****************************************************************** //* Operators //****************************************************************** // Copy Assignment Operator //------------------------------------------------------------------ // Assigns this Vector's components to be equal to Vector v's // components. //------------------------------------------------------------------ Vector &operator = (const Vector &v); //------------------------------------------------------------------ // Addition Assignment Operator //------------------------------------------------------------------ // Adds to this Vector's components the components of Vector v. //------------------------------------------------------------------ Vector &operator += (const Vector &v); //------------------------------------------------------------------ // Subtraction Assignment Operator //------------------------------------------------------------------ // Subtract from this Vector's components the components of Vector // v. //------------------------------------------------------------------ Vector &operator -= (const Vector &v); //------------------------------------------------------------------ // Multiplication Assignment Operator //------------------------------------------------------------------ // Multiply this Vector's components by a scalar value. //------------------------------------------------------------------ Vector &operator *= (float scalar); //------------------------------------------------------------------ // Division Assignment Operator //------------------------------------------------------------------ // Divide this Vector's components by a scalar value. 
//------------------------------------------------------------------ Vector &operator /= (float scalar); //------------------------------------------------------------------ // Unary Minus Operator //------------------------------------------------------------------ // Negate the components of this Vector. //------------------------------------------------------------------ Vector &operator - (); //------------------------------------------------------------------ // Array Subscript Operator //------------------------------------------------------------------ // Allows access to the x, y and z components through an array // subscript notation. //------------------------------------------------------------------ float &operator [] (int i); //****************************************************************** //****************************************************************** //* Methods //****************************************************************** // X //------------------------------------------------------------------ // Returns the x component of this Vector. //------------------------------------------------------------------ float X (); //------------------------------------------------------------------ // Y //------------------------------------------------------------------ // Returns the y component of this Vector. //------------------------------------------------------------------ float Y (); //------------------------------------------------------------------ // Z //------------------------------------------------------------------ // Returns the z component of this Vector. //------------------------------------------------------------------ float Z (); //------------------------------------------------------------------ // Set //------------------------------------------------------------------ // Sets the x, y and z components of this Vector to the paramaters // of this function. 
//------------------------------------------------------------------ void Set (float x, float y, float z); //------------------------------------------------------------------ // MakeZero //------------------------------------------------------------------ // Sets the x, y and z components of this Vector to zero. //------------------------------------------------------------------ void MakeZero (); //------------------------------------------------------------------ // IsZero //------------------------------------------------------------------ // Returns true if the x, y and z components of this Vector are // equal to zero. //------------------------------------------------------------------ bool IsZero (); //------------------------------------------------------------------ // LengthSquared //------------------------------------------------------------------ // Returns the magnitude of the x, y and z components squared. //------------------------------------------------------------------ float LengthSquared (); //------------------------------------------------------------------ // Length //------------------------------------------------------------------ // Returns the magnitude of the x, y and z components. //------------------------------------------------------------------ float Length (); //------------------------------------------------------------------ // Normalize //------------------------------------------------------------------ // Sets the components of this Vector in such a way that their // magnitude is equal to one. //------------------------------------------------------------------ void Normalize (); //------------------------------------------------------------------ // IsNormalized //------------------------------------------------------------------ // Compares the magnitude of this Vector to one. 
//------------------------------------------------------------------ bool IsNormalized (); //****************************************************************** private: //****************************************************************** //* Private Member Variables //****************************************************************** // x //------------------------------------------------------------------ // The x component of this Vector. //------------------------------------------------------------------ float x; //------------------------------------------------------------------ // y //------------------------------------------------------------------ // The y component of this Vector. //------------------------------------------------------------------ float y; //------------------------------------------------------------------ // z //------------------------------------------------------------------ // The z component of this Vector. //------------------------------------------------------------------ float z; //****************************************************************** }; inline Vector::Vector() : x(0.0f), y(0.0f), z(0.0f) {} inline Vector::Vector(float x, float y, float z) : x(x), y(y), z(z) {} inline Vector::Vector(const Vector &v) : x(v.x), y(v.y), z(v.z) {} inline std::ostream &operator<<(std::ostream &os, const Vector &v) { os << '(' << v.x << ',' << v.y << ',' << v.z << ')'; return os; } inline bool operator==(const Vector &v1, const Vector &v2) { return (AreEqual(v1.x, v2.x) && AreEqual(v1.y, v2.y) && AreEqual(v1.z, v2.z)); } inline bool operator!=(const Vector &v1, const Vector &v2) { return (!AreEqual(v1.x, v2.x) || !AreEqual(v1.y, v2.y) || !AreEqual(v1.z, v2.z)); } inline Vector operator+(const Vector &v1, const Vector &v2) { return Vector(v1.x+v2.x, v1.y+v2.y, v1.z+v2.z); } inline Vector operator-(const Vector &v1, const Vector &v2) { return Vector(v1.x-v2.x, v1.y-v2.y, v1.z-v2.z); } inline Vector operator*(const Vector &v, float 
scalar) { return Vector(v.x*scalar, v.y*scalar, v.z*scalar); } inline Vector operator*(float scalar, const Vector &v) { return Vector(v.x*scalar, v.y*scalar, v.z*scalar); } inline Vector operator/(const Vector &v, float scalar) { assert(!EqualsZero(scalar)); scalar = 1.0f / scalar; return Vector(v.x*scalar, v.y*scalar, v.z*scalar); } inline Vector operator/(float scalar, const Vector &v) { assert(!EqualsZero(scalar)); scalar = 1.0f / scalar; return Vector(v.x*scalar, v.y*scalar, v.z*scalar); } inline float DotProduct (const Vector &v1, const Vector &v2) { return v1.x * v2.x + v1.y * v2.y + v1.z * v2.z; } inline Vector CrossProduct(const Vector &v1, const Vector &v2) { return Vector(v1.y*v2.z - v1.z*v2.y, v1.z*v2.x - v1.x*v2.z, v1.x*v2.y - v1.y*v2.x); } inline Vector Lerp(const Vector &v1, const Vector &v2, float t) { return Vector(Lerp(v1.x, v2.x, t), Lerp(v1.y, v2.y, t), Lerp(v1.z, v2.z, t)); } inline Vector Clamp(const Vector &v, float min, float max) { return Vector(Clamp(v.x, min, max), Clamp(v.y, min, max), Clamp(v.z, min, max)); } inline Vector Min(const Vector &v1, const Vector &v2) { return Vector(Min(v1.x, v2.x), Min(v1.y, v2.y), Min(v1.z, v2.z)); } inline Vector Max(const Vector &v1, const Vector &v2) { return Vector(Max(v1.x, v2.x), Max(v1.y, v2.y), Max(v1.z, v2.z)); } inline float DistanceBetween(const Vector &v1, const Vector &v2) { Vector distance = v1 - v2; return distance.Length(); } inline float DistanceBetweenSquared (const Vector &v1, const Vector &v2) { Vector distance = v1 - v2; return distance.LengthSquared(); } inline Vector &Vector::operator=(const Vector &v) { x = v.x; y = v.y; z = v.z; return *this; } inline Vector &Vector::operator+=(const Vector &v) { x += v.x; y += v.y; z += v.z; return *this; } inline Vector &Vector::operator-=(const Vector &v) { x -= v.x; y -= v.y; z -= v.z; return *this; } inline Vector &Vector::operator*=(float scalar) { x *= scalar; y *= scalar; z *= scalar; return *this; } inline Vector &Vector::operator/=(float 
scalar) { assert(!EqualsZero(scalar)); scalar = 1.0f / scalar; x *= scalar; y *= scalar; z *= scalar; return *this; } inline Vector &Vector::operator-() { x = -x; y = -y; z = -z; return *this; } inline float &Vector::operator[](int i) { if (i == 0) { return x; } else if (i == 1) { return y; } else if (i == 2) { return z; } else { assert("[] Access error!"); } } inline float Vector::X() { return x; } inline float Vector::Y() { return y; } inline float Vector::Z() { return z; } inline void Vector::Set(float x, float y, float z) { this->x = x; this->y = y; this->z = z; } inline void Vector::MakeZero() { x = y = z = 0.0f; } inline bool Vector::IsZero() { return EqualsZero(x) && EqualsZero(y) && EqualsZero(z); } inline float Vector::LengthSquared() { return x*x + y*y + z*z; } inline float Vector::Length() { return Sqrt(LengthSquared()); } inline void Vector::Normalize() { float magnitude = Length(); assert(!EqualsZero(magnitude)); magnitude = 1.0f / magnitude; x *= magnitude; y *= magnitude; z *= magnitude; } inline bool Vector::IsNormalized() { return AreEqual(Length(), 1.0f); } #endif Answer: Comments Honestly, too many of them. Object-orientation I don't see why vector components are made private. A client can freely and independently modify them via Set method. There's no internal state to maintain, no invariant to protect. I recommend to make them public and eliminate Set(), X(), Y(), Z() methods altogether. Math I am quite surprised by the presence of operator/(float, Vector). There is no immediately obvious value in it. Mathematically such operation makes no sense, and dividing a scalar by a vector shall be flagged as error ASAP. On the other hand, dot product would very naturally be float operator*(Vector&, Vector&) There is no implementation of AreEquals and EqualsZero methods. In any case, I'd expect IsZero method to compare a norm rather than individual components. 
It would actually be nice to abstract the norm calculation (right now the client is forced to use the Euclidean distance).

Implementation details

It is strongly recommended to express operator!= in terms of operator==. Otherwise you face a double-maintenance problem, and the reader must do extra work to make sure the semantics of the two comparisons are consistent. Similarly, other operators with tightly bound semantics should not be independent; it is usually recommended to express operator+ in terms of operator+=, etc. For more details, see the Canonical implementations section in the C++ reference.
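To illustrate the idiom the answer recommends, here is a minimal sketch (using a hypothetical Vec3 type for illustration, not the reviewed Vector class):

```cpp
#include <cassert>

// Hypothetical Vec3 type for illustration -- not the reviewed Vector class.
struct Vec3 {
    float x, y, z;

    // The compound assignment carries the actual arithmetic...
    Vec3 &operator+=(const Vec3 &v) {
        x += v.x; y += v.y; z += v.z;
        return *this;
    }
};

// ...and the binary operator is a thin wrapper: the by-value left
// operand is the working copy that becomes the result.
inline Vec3 operator+(Vec3 a, const Vec3 &b) { return a += b; }

// Equality is defined exactly once...
inline bool operator==(const Vec3 &a, const Vec3 &b) {
    return a.x == b.x && a.y == b.y && a.z == b.z;
}

// ...and inequality merely negates it, so the two can never drift apart.
inline bool operator!=(const Vec3 &a, const Vec3 &b) { return !(a == b); }
```

With this arrangement each pair of related operators has a single source of truth, so fixing a bug in one place fixes it everywhere.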
{ "domain": "codereview.stackexchange", "id": 34449, "tags": "c++, beginner, reinventing-the-wheel, computational-geometry" }
Question about Radian as a unit
Question: I'm having a hard time understanding the relation between the units of angular velocity and the ordinary velocity of a point on a circle. For angular velocity the units are radians per second or degrees per second. The speed of a point on the circle's circumference is the angular velocity times the radius, but the units for this are meters per second. So where did the radian go? It counts as a unit for angular velocity, so why doesn't it count for the speed? Answer: Actually the radian has no units, as it is defined as the ratio arclength/radius. Since the arclength is a distance and thus has units of meters, and the radius is also in meters, the ratio is dimensionless. In particular, for an angular opening $\theta$ the arclength is $r\theta$ for a circle of radius $r$, and going around the circle in full once gives a ratio $2\pi r/r = 2\pi$ rad, where the arclength is the full circumference in this case.
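As a worked example with illustrative numbers (my own, not from the question): an angular velocity $\omega = 3\ \mathrm{rad/s}$ on a circle of radius $r = 2\ \mathrm{m}$ gives

$$v = \omega r = \left(3\ \tfrac{\mathrm{rad}}{\mathrm{s}}\right)(2\ \mathrm{m}) = 6\ \tfrac{\mathrm{m}}{\mathrm{s}},$$

where the radian drops out precisely because $\mathrm{rad} = \mathrm{m}/\mathrm{m} = 1$ is dimensionless.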
{ "domain": "physics.stackexchange", "id": 51109, "tags": "units, dimensional-analysis" }
Turning stacked overlapping intervals (with associated data) into non-overlapping intervals
Question: I'm looking for an efficient algorithm to merge a list of overlapping intervals (each of which has associated data) into non-overlapping intervals. In case two or more intervals overlap, the latter one wins (i.e. the later intervals shadow the earlier ones). In my case, the intervals actually come pre-sorted (by starting point), but of course I can't do any further sorts because the intervals that come later in the input list dominate the previous ones. Other facts:

- the intervals may overlap but don't have to
- there may be gaps not covered by any intervals

This feels like a common problem but I haven't found a common algorithm quite yet.

Algorithm ideas

The naive algorithm ($O(n^2)$) is just to intersect each interval with every interval that follows, keeping only the 'non-occluded' pieces. But that's inefficient. I've also thought about using a variation of the regular overlapping-intervals algorithm: basically, building a stack and always intersecting the next element with the top of the stack, followed by pushing the non-overlapping part of the old element and the overlapping part of the new element. But of course that potentially leaves a left-over piece from the old element. (Essentially ((1, 10), "1") and ((2, 5), "2") becomes ((1, 2), "1"), ((2, 5), "2") with a left-over of ((5, 10), "1").) But that left-over is now unsorted with respect to the elements that follow, and I can't sort it into the input list either because other intervals need to dominate this one. Or maybe I should use an interval-intersection algorithm to find each of the segments and then use a scan-line algorithm to find the original interval that dominates each of the segments?
Examples:

Example 1 input: [((1, 10), "green"), ((5, 8), "red")]
output: [((1, 5), "green"), ((5, 8), "red"), ((8, 10), "green")]

Example 2 input: [((1,10), "1"), ((2, 4), "2"), ((3, 8), "3"), ((4, 7), "4"), ((5, 6), "5"), ((6, 9), "6")]
output: [((1, 2), "1"), ((2, 3), "2"), ((3, 4), "3"), ((4, 5), "4"), ((5, 6), "5"), ((6, 9), "6")]

visualised:

```
((1,10), "1")  1111111111
((2, 4), "2")  |222     |
((3, 8), "3")  ||333333 |
((4, 7), "4")  |||4444  |
((5, 6), "5")  ||||55   |
((6, 9), "6")  |||||6666|
               ||||||||||
               vvvvvvvvvv
========================
merged         1234566661
```

Answer: I found an answer that works well. The basic concept is to iterate over a sorted list of all 'points of interest' (start and end positions of the intervals). Running a scan line over them lets us maintain a stack of 'active intervals' (push when an interval starts; pop when it ends), where the top one is the dominating interval. That essentially gives us a list of non-overlapping segments, each associated with its dominating interval, which is exactly the goal.

In slightly more detail (hope I got the text description about right):

- Create an array of all intervals, which are already sorted by start point, in $O(n)$.
- Initialize:
  - An empty hash set of 'dead intervals' in $O(1)$.
  - An empty stack of 'active intervals' in $O(1)$.
  - An empty array of 'output intervals' in $O(1)$.
- Create a 'points of interest' array of all start and end points of all the intervals, alongside the kind (start/end) and index (into the array of all intervals), in $O(n)$.
- Sort the 'points of interest' array by position/index/kind in $O(n \log n)$.
- Run a scan line from front to back:
  - Iterate through the 'points of interest' from front to back in $O(n)$.
  - For each 'point of interest' do:
    - If it's a start point:
      - Add the interval (looked up by index) to the stack of 'active intervals' in $O(1)$.
      - If we have previously remembered an 'open interval', close it and append it to the array of 'output intervals'. Also clear the remembered 'open interval' in $O(1)$.
      - Remember the current position as the 'open interval' in $O(1)$.
    - If it's an end point:
      - Add the interval (looked up by index) to the set of 'dead intervals' in $O(1)$.
      - While the interval at the top of the 'active intervals' stack is contained in the set of 'dead intervals' (amortized $O(1)$; a single step can be $O(n)$, but each interval can only be in the dead set once):
        - Remove it from the 'dead intervals' in $O(1)$.
        - Pop the stack in $O(1)$.
      - If we have previously remembered an 'open interval', close it and append it to the array of 'output intervals'. Also clear the remembered 'open interval' in $O(1)$.
      - If the stack of 'active intervals' is non-empty, remember the current position as an 'open interval' in $O(1)$.

Overall, the algorithm runs in $O(n \log n)$.
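Here is a compact sketch of the scan line in code (my own implementation of this idea, with my own names; intervals are half-open [start, end), and adjacent output segments with the same label are coalesced so a buried interval's end point does not split a segment). Note it also emits the trailing ((9, 10), "1") piece that is visible in the 'merged' row of the diagram:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// Half-open interval [start, end) with an associated label.
struct Interval { int start, end; std::string label; };

// Later intervals in the input shadow earlier ones. Input is assumed
// to be sorted by start point, as in the question.
std::vector<Interval> flatten(const std::vector<Interval> &input) {
    struct Event { int pos; bool is_end; int idx; };
    std::vector<Event> events;
    for (int i = 0; i < (int)input.size(); ++i) {
        events.push_back({input[i].start, false, i});
        events.push_back({input[i].end, true, i});
    }
    // Sort by position; at ties process ends before starts (a choice of
    // mine -- either order works here, this one avoids zero-width churn).
    std::sort(events.begin(), events.end(), [](const Event &a, const Event &b) {
        if (a.pos != b.pos) return a.pos < b.pos;
        if (a.is_end != b.is_end) return a.is_end;   // ends first
        return a.idx < b.idx;                        // stable: input order
    });

    std::vector<int> active;          // stack of indices; top dominates
    std::unordered_set<int> dead;     // lazily removed buried intervals
    std::vector<Interval> out;
    int open_pos = 0;                 // left edge of the current segment

    auto close_segment = [&](int pos) {
        if (!active.empty() && pos > open_pos) {
            const std::string &lab = input[active.back()].label;
            if (!out.empty() && out.back().end == open_pos && out.back().label == lab)
                out.back().end = pos;                // coalesce equal neighbours
            else
                out.push_back({open_pos, pos, lab});
        }
        open_pos = pos;
    };

    for (const Event &e : events) {
        close_segment(e.pos);         // segment up to here belongs to old top
        if (e.is_end) {
            dead.insert(e.idx);
            while (!active.empty() && dead.count(active.back())) {
                dead.erase(active.back());
                active.pop_back();
            }
        } else {
            active.push_back(e.idx);
        }
    }
    return out;
}
```

The dead set implements the "lazy pop": an interval that ends while shadowed stays on the stack until everything above it is gone, which keeps each event $O(1)$ amortized.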
{ "domain": "cs.stackexchange", "id": 21836, "tags": "algorithms, intervals" }
Discontinuity at the edge of the Chebyshev window
Question: I am using the Chebyshev window for its narrow main lobe. The problem with the Chebyshev window is that it has discontinuities at the edges, and it seems that the Taylor window solves this issue. More detail: http://de.mathworks.com/help/signal/ref/taylorwin.html http://en.wikipedia.org/wiki/Window_function#Dolph.E2.80.93Chebyshev_window I've searched around but I can't find any information on how to implement a Taylor window. Any information on the Taylor window, or suggestions on fixing this issue of edge discontinuities, would be very much appreciated. Answer: You've probably seen that at the bottom of the MathWorks page on the Taylor window there are two references, but I guess it might be hard to access these publications. It's indeed difficult to find the formula online, but at the bottom of this page you can find a link to a (not yet released) Octave function implementing the Taylor window. I guess it's not too hard to distill the formula from the code. And here is a C implementation of the Matlab function taylorwin.m.
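Since the formula is hard to find online, here is a hedged sketch of the commonly cited Taylor-window formulation (the one taylorwin and similar implementations follow, to the best of my knowledge); treat it as a starting point and verify it against a reference implementation before relying on it. nbar is the number of nearly constant-level sidelobes and sll the sidelobe suppression in dB:

```cpp
#include <cmath>
#include <vector>

// Sketch of a Taylor window of length N, normalized to unit peak.
// nbar: number of nearly constant-level sidelobes (>= 2),
// sll:  sidelobe level in dB (positive, e.g. 30).
std::vector<double> taylor_window(int N, int nbar = 4, double sll = 30.0) {
    const double pi = 3.14159265358979323846;
    double B = std::pow(10.0, sll / 20.0);
    double A = std::acosh(B) / pi;
    double s2 = nbar * nbar / (A * A + (nbar - 0.5) * (nbar - 0.5));

    // Fourier coefficients F_m of the window.
    std::vector<double> Fm(nbar - 1);
    for (int m = 1; m < nbar; ++m) {
        double numer = (m % 2 == 1) ? 1.0 : -1.0;          // alternating sign
        for (int k = 1; k < nbar; ++k)
            numer *= 1.0 - (double)(m * m) / s2 / (A * A + (k - 0.5) * (k - 0.5));
        double denom = 2.0;
        for (int k = 1; k < nbar; ++k)
            if (k != m) denom *= 1.0 - (double)(m * m) / (k * k);
        Fm[m - 1] = numer / denom;
    }

    // w(n) = 1 + 2 * sum_m F_m cos(2 pi m (n - N/2 + 1/2) / N)
    auto W = [&](double n) {
        double acc = 1.0;
        for (int m = 1; m < nbar; ++m)
            acc += 2.0 * Fm[m - 1] * std::cos(2.0 * pi * m * (n - N / 2.0 + 0.5) / N);
        return acc;
    };

    std::vector<double> w(N);
    double scale = W((N - 1) / 2.0);                        // peak value
    for (int n = 0; n < N; ++n) w[n] = W(n) / scale;
    return w;
}
```

Unlike the Chebyshev window, the resulting taper falls off smoothly toward (but not to) zero at the edges, which removes the edge discontinuity the question is about.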
{ "domain": "dsp.stackexchange", "id": 2798, "tags": "fft, window-functions" }
Why can't carbon form an ionic bond?
Question: My textbook says that a $\ce{C^{4+}}$ cation cannot be formed because it requires a lot of energy to remove 4 electrons. Formation of ionic bonds involve "removing" electrons and there seems to be enough energy there. So what's different for carbon? The textbook also mentions that a $\ce{C^{4-}}$ anion cannot be formed because 6 protons cannot hold on to 10 electrons. Elements like chlorine form ionic bonds and end up with 18 electrons and 17 protons. I know there's just one more electron. So is $\ce{NaCl}$ possible because it's not very hard to hold on to 1 extra electron? In that case at what exact point does it become hard to hold onto the extra electrons? Answer: $\ce{C^{4+}}$ ions: Single ions are really only observed in the gas phase. There is absolutely nothing that prevents a C4+ ion from being generated in the gas phase, given sufficient energy. Each successive electron removal requires additional energy though, so by the time you get to 4 electrons removed, you're looking at quite a lot of energy required. Note that this does not mean that $\ce{C^{4+}}$ ions are seen in compounds, just that they can be made in the gas phase. Any cation you can imagine can be produced in gas phase. The difficulty arises when we want to do something with that carbon cation. If I want to make a compound with it, it can bond with other substances and make new bonds. Once that is done we can evaluate those compounds to tell whether the bond is truly ionic or not. In condensed phases or in solution, we look at how the parts of a compound interact with the particles surrounding them to decide whether to treat them as ions or not. The best chance at making a C4+ ion would be to bond it to fluorine, which is an extremely electron-hungry element. When I actually try this, I find that the C-F bond is very polar, but the $\ce{CF_4}$ molecule that forms behaves as a covalent compound normally would. 
For example, it does not dissociate into ions when dissolved or melted, and it doesn't form an ionic crystal in its solid form. As much as fluorine wants electrons, it doesn't want them enough to completely steal 4 of them from carbon, because the energy required to remove each electron becomes greater than the energy required to remove the one before it. It never gets to the point of $\ce{C^{4+}}$ and $\ce{F^{-}}$ ions being bonded. Lead can form a +4 ion, $\ce{Pb^{4+}}$, in compounds such as lead fluoride. The reason that this can happen but carbon cations cannot is that the outermost electrons in a lead atom are much farther from the nucleus than those of a carbon atom and are easier to remove as a result. $\ce{C^{4-}}$ ions: In the gas phase this does not happen. Carbon's first electron affinity is positive, so we would expect to see it form $\ce{C-}$ ions in the gas phase if given free electrons to attract. Adding a second electron to this ion would require it to attract an electron against its already negative overall charge, and that is not energetically favorable. In order to make the $\ce{C^{4-}}$ ion in a compound, a species would have to donate 4 electrons to carbon. The attraction of carbon's nucleus to other atoms' electrons is just too low for that to happen. The best candidates for a cation in an ionic compound of this type would be either lithium or cesium; however, when we try to actually make this compound we get another class of ions, the carbides, which feature $\ce{C2^{2-}}$ ions. In short, it doesn't happen. Phillip's comment regarding the carbides is a good one. There are a few metal carbides that feature carbon atoms bonding to a metal in the ratios we would expect if it were purely ionic, like $\ce{Mg2C}$. In these compounds the carbon does have a formal charge of -4, but the properties of the substance are such that it behaves like a covalent network compound rather than an ionic compound. What is the limit of this process?
Anions with -3 charge, like nitride or phosphide, do exist, though they require a very weakly electronegative species, like an alkali metal or alkaline earth metal, to form an ionic compound. Carbon atoms can carry a charge as part of an ion, but only when they are part of a larger species (again, like $\ce{C2^{2-}}$) and they do not carry 4 additional electrons per atom. Check out cyanide as another good example of this type of polyatomic ion.
{ "domain": "chemistry.stackexchange", "id": 2046, "tags": "organic-chemistry, bond, covalent-compounds" }
Does Seq2Seq decoder take a special vector or the weights of the last encoder cell as an output?
Question: I'm reading Sequence to Sequence Learning with Neural Networks and there's a thing that I couldn't quite grasp. The paper says the encoder outputs a vector to be fed to the decoder. More precisely: "Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector." However, when I look at the diagram, there's no such vector there. What I understand from this diagram is that the decoder RNN takes the weights of the last encoder cell as an input. Which one is correct? Can you explain? The Stanford notes put it as "The final hidden state of the cell will then become $C$". So, is there no vector? Answer: That drawing is a bit oversimplified. Check this blog for a better explanation and implementation details. I'll refer to the image they have to answer:

- The yellow boxes represent embedding layers, required to convert words into numbers.
- The green boxes represent the unfolded encoder.
- The red box represents the context vector, i.e. the vector you're looking for. Note that it is just the final vector you obtain by applying the encoder to a sequence of words. For this reason some people prefer to draw a line directly to the decoder part, without drawing the final vector explicitly.
- The blue boxes represent the unfolded decoder.
- The purple boxes represent the linear layer used to predict the final word from the decoder hidden state.
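A toy numeric sketch (my own, not the paper's multilayer LSTM) may make it concrete that the context vector is nothing more than the encoder's final hidden state:

```cpp
#include <cmath>
#include <vector>

// Toy one-layer plain RNN encoder with scalar weights w and u.
// The "context vector" handed to the decoder is nothing special:
// it is simply the hidden state left over after the last input step.
// Real seq2seq models use multilayer LSTMs, but the hand-off is the same.
std::vector<double> encode(const std::vector<std::vector<double>> &inputs,
                           double w, double u) {
    std::vector<double> h(inputs.empty() ? 0u : inputs[0].size(), 0.0);
    for (const auto &x : inputs)                      // unfold over time
        for (std::size_t i = 0; i < h.size(); ++i)
            h[i] = std::tanh(w * x[i] + u * h[i]);    // h_t = tanh(W x_t + U h_{t-1})
    return h;  // the final hidden state *is* the context vector C
}
```

A decoder would then be initialized with (or conditioned on) this returned vector; that is the "vector of fixed dimensionality" in the quote, which many diagrams leave implicit by drawing an arrow straight from encoder to decoder.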
{ "domain": "ai.stackexchange", "id": 3095, "tags": "recurrent-neural-networks, papers, seq2seq" }
Does genetic morbid obesity exist?
Question: With the growing acceptance of being a little overweight, I hear of people saying they were born morbidly obese and that it was genetically passed on. I'm aware that being so obese leads to countless health issues, yet we have people defending fat shaming and the like. Is the excuse of having 'obese' genetics scientifically justified? Do people have genetics that make them ridiculously fat? Or would this simply come down to the person's diet from a young age? Thanks. Answer: It's a myth that obesity is mostly caused by genes. Yes, there are many genes that predetermine your appetite, but when a baby is born obese, it's not a matter of genes but a matter of the mother's health and diet. And because, until birth, the fetus shares a single blood flow with the mother, it gets everything the mother has in her blood. Thus, some babies are born obese, and some are born addicted. Indeed, there are papers showing that there's a higher chance of the newborn dying during birth, even though you can't always say what the main cause of death was (it also depends on what you attribute to obesity).
{ "domain": "biology.stackexchange", "id": 7154, "tags": "genetics, health, diet" }