How to calculate impulse required to move an object vertically upward by given distance
Question: Suppose I have a stationary object of mass $m$ and I want to apply a momentary force in the vertical direction so that it just reaches the height $h$. So how do I calculate the impulse required in this case? Also, how exactly do I define the $\delta t$ for the momentary force? Shall I just take 1 second? Edit 1: Suppose I take $\delta t$ = 0.1 sec, is there any way I can calculate the impulse now? Edit 2: This edit is in response to the "on hold" tag. The object is in water, which I didn't mention earlier but later in the comments. Thanks to @Floris for taking the time to answer even though I didn't mention this initially. Answer: Assuming no air friction, you compute the initial velocity needed from conservation of energy: $$ \frac12 mv^2=mgh $$ The impulse needed is $mv=F\Delta t$. The product of these ($F, \Delta t$) is constant - a shorter time implies a higher force. The above assumes the time of impact is short enough not to affect the overall time (otherwise you need to solve the trajectory more carefully). Note that if drag cannot be ignored (as is probably the case in your problem, since you later specified in the comments that this is "in water"), the treatment is more complex. Assuming that the initial impulse is still very short, we now have to consider two additional non-trivial forces: buoyancy (which means that the increase in potential energy is smaller than expected), and drag - which we will assume to be quadratic in velocity, and with a constant "shape factor". The latter assumption ($C_D=0.5$ for a sphere) is usually OK in a turbulent regime of Reynolds numbers; a more precise treatment would have to adjust for changing $C_D$ with changing velocity / Reynolds number. There is another factor that can play a role: when a sphere is accelerated in a liquid, its inertia appears to be increased by an amount equal to half the mass of the displaced fluid. 
Intuitively this makes sense - when you move an object through a liquid, the liquid also moves, but not as fast as the object. The additional kinetic energy of the liquid has to come from somewhere - this gives rise to an apparently greater inertia of the sphere. The buoyancy is of course given by $\rho V g$, meaning that the net gain in potential energy is $(m-\rho V) gh$; we can think of this as though the object is moving in a reduced gravity $g' = g\left(1-\frac{\rho V}{m}\right)$. The equation of motion for a projectile being launched vertically, and subject to both (constant) gravity and (quadratic) drag, can be written as $$m\ddot{y} = -mg +\rho V g - C\dot{y}^2$$ where the constant $C=\frac12\rho A C_D$. The solution for this is given on this page of the hyperphysics site. Given a terminal velocity $$v_t = \sqrt{\frac{2mg}{C_D\rho A}}$$ and characteristic time $\tau$ $$\tau = \frac{v_t}{g}$$ the height reached is $$y_{peak}=-v_t \tau \log\left[\cos\left(\tan^{-1}\frac{v_0}{v_t}\right)\right]$$ In your case, you know $y_{peak}$ and you would like to find $v_0$, so we need to invert the equation. Thus we get $$v_0 = v_t \tan\left[\cos^{-1}\left(e^{-\frac{y_{peak}}{v_t\tau}}\right)\right]$$ Finally, we have to substitute $g'$ for $g$ in the expressions for $\tau$ and $v_t$, and you have your answer for the initial velocity. From which it is a short step to the impulse needed. Note - if you look at the expression for $v_0$, you can see that once $y_{peak}$ exceeds $v_t\tau$, the required velocity grows very quickly.
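The recipe above (reduced gravity, terminal velocity, inverted height formula, added-mass correction) can be sketched numerically. All parameter values below (fluid density, object volume, cross-section, drag coefficient) are illustrative assumptions, not numbers from the original problem:

```python
import math

def launch_impulse(m, h, rho=1000.0, V=1e-4, A=1e-2, C_D=0.5, g=9.81):
    """Initial velocity and impulse needed to reach height h in a fluid,
    following the reduced-gravity + quadratic-drag treatment above."""
    g_eff = g * (1.0 - rho * V / m)  # buoyancy-reduced gravity g'
    if g_eff <= 0:
        raise ValueError("object floats: buoyancy exceeds weight")
    v_t = math.sqrt(2.0 * m * g_eff / (C_D * rho * A))  # terminal velocity
    tau = v_t / g_eff                                   # characteristic time
    # Invert y_peak = -v_t*tau*log(cos(atan(v0/v_t))) for v0:
    v0 = v_t * math.tan(math.acos(math.exp(-h / (v_t * tau))))
    m_eff = m + 0.5 * rho * V  # added mass: half the displaced fluid
    return v0, m_eff * v0      # (initial velocity, required impulse)
```

A useful sanity check: in the weak-drag limit the expression collapses to the drag-free result $v_0=\sqrt{2g'h}$.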
{ "domain": "physics.stackexchange", "id": 30472, "tags": "homework-and-exercises, newtonian-mechanics, forces, momentum, work" }
Can the maximum stereo disparity of 128 be increased?
Question: Hello, I have built a high-resolution stereo camera (two Prosilica GC2450-C cameras running at full-res), and I'm obtaining beautiful point-clouds that require both min_disparity and disparity_range to be set to 128. Unfortunately, I need to lengthen the target object such that I need disparity_range higher than 128. I know the slider for disparity_range maxes out at 128 in stereo_image_proc, but I'm wondering if there's a way to hack around that maximum (or if 128 is a fundamental maximum in the stereo code). If it can be hacked, do you have suggestions for how to do so? I'm a mechanical engineer trying to code, so sorry if this is a noob question. Originally posted by rdbrewer on ROS Answers with karma: 66 on 2013-05-26 Post score: 3 Answer: Solution: There's no inherent limit of 128 in the OpenCV stereo matching code on the maximum value for min_disparity or disparity_range. It's just a hard-coded maximum in stereo_image_proc. To increase these limits, make the following code change in stereo_image_proc/cfg/Disparity.cfg and recompile stereo_image_proc. disparity_range must always be a multiple of 16 (probably the same for min_disparity, but not sure). In the following code, all we did was change the range of min_disparity to [-1024, 1024]. Index: stereo_image_proc/cfg/Disparity.cfg +++ stereo_image_proc/cfg/Disparity.cfg (working copy) +gen.add("min_disparity", int_t, 0, "Disparity to begin search at, pixels (may be negative)", 0, -1024, 1024) gen.add("disparity_range", int_t, 0, "Number of disparities to search, pixels", 64, 32, 128) Originally posted by rdbrewer with karma: 66 on 2013-05-30 This answer was ACCEPTED on the original site Post score: 2
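The multiple-of-16 constraint comes from OpenCV's block-matching implementation, which requires the number of disparities to be divisible by 16. A tiny helper (hypothetical, not part of stereo_image_proc) that snaps a requested range to a legal value might look like:

```python
def valid_disparity_range(requested):
    """Round a requested disparity_range up to the next multiple of 16.
    Hypothetical helper; stereo_image_proc itself only constrains the
    slider bounds in Disparity.cfg."""
    return ((requested + 15) // 16) * 16
```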
{ "domain": "robotics.stackexchange", "id": 14301, "tags": "stereo, stere-image-proc" }
roslaunch ssh error
Question: Hi, I am getting the following error when attempting to launch a series of nodes across two machines.. beaglebone black (ubuntu@bbone): ubuntu 14.04 LTS (3.14 kernel) and ROS indigo Linux workstation (thayer@T1538): ubuntu 14.04 LTS (3.14 kernel) and ROS indigo I have verified that both machines have the correct ROS_MASTER_URI environment variables set, and I have full bi-directional internet connectivity between the two. I can get success if I start up roscore and bring up each node on the respective machines one at a time, but can't get the launch file to work. Also, FWIW, the sys.excepthook is missing lost sys.stderr error message occurs often on the beaglebone after killing roscore if it is running, however has never affected functionality in any way. Does this point to a deeper issue? Error message: thayer@T1538:~/catkin_ws/src/mvp_ros/launch$ roslaunch mvp_ros rangetest.launch ... logging to /home/thayer/.ros/log/b097f6ac-dbd8-11e5-a8ba-78acc0a5686d/roslaunch-T1538-9473.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. 
started roslaunch server http://T1538:44498/ remote[bbone-0] starting roslaunch remote[bbone-0]: creating ssh connection to bbone:22, user[ubuntu] launching remote roslaunch child with command: [env ROS_MASTER_URI=http://localhost:11311 /opt/ros/indigo/env.sh roslaunch -c bbone-0 -u http://T1538:44498/ --run_id b097f6ac-dbd8-11e5-a8ba-78acc0a5686d] remote[bbone-0]: ssh connection created remote[bbone-0]: Exception while registering with roslaunch parent [http://T1538:44498/]: Traceback (most recent call last): File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/server.py", line 506, in _register_with_server code, msg, _ = server.register(name, self.uri) File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/lib/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib/python2.7/xmlrpclib.py", line 1301, in single_request self.send_content(h, request_body) File "/usr/lib/python2.7/xmlrpclib.py", line 1448, in send_content connection.endheaders(request_body) File "/usr/lib/python2.7/httplib.py", line 975, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 835, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 797, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 778, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 553, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): gaierror: [Errno -2] Name or service not known The traceback for the exception was written to the log file Unhandled exception in thread started by sys.excepthook is missing lost sys.stderr [bbone-0] killing on exit remote roslaunch failed to launch: bone The traceback for the exception was written to the log file Launch File: <launch> <machine 
name="bone" address="bbone" default="true" user="ubuntu" password="ubuntu"/> <machine name="thayer" address="localhost" default="true"/> <node name="relay" pkg="mvp_ros" type="xbox_ser_relay" machine="bone"/> <node name="teleop" pkg="mvp_ros" type="teleop_xbox" machine="bone"/> <node name="joystick" pkg="joy" type="joy_node" machine="thayer"> <param name="dev" value="/dev/input/js0"/> </node> </launch> EDIT1: Has nobody ever seen this error before? It seems oddly specific. I also get it when I try to launch on my raspberry pi. I have included the log file here. The key parts are below. [roslaunch][INFO] 2016-02-28 19:15:35,291: started roslaunch server http://192.168.1.117:38349/ [roslaunch.parent][INFO] 2016-02-28 19:15:35,291: ... parent XML-RPC server started [roslaunch.remote][INFO] 2016-02-28 19:15:35,291: remote[pi-3-0] starting roslaunch [roslaunch][INFO] 2016-02-28 19:15:35,292: remote[pi-3-0] starting roslaunch [roslaunch][INFO] 2016-02-28 19:15:35,292: remote[pi-3-0]: creating ssh connection to pi-3:22, user[pi] [roslaunch.remoteprocess][INFO] 2016-02-28 19:15:35,292: remote[pi-3-0]: invoking with ssh exec args [/opt/ros/indigo/env.sh roslaunch -c pi-3-0 -u http://192.168.1.117:38349/ --run_id 8d8946 cc-de79-11e5-9723-b8e85632a8ba] [paramiko.transport][INFO] 2016-02-28 19:15:35,363: Connected (version 2.0, client OpenSSH_6.7p1) [paramiko.transport][ERROR] 2016-02-28 19:15:35,366: Exception: Incompatible ssh peer (no acceptable kex algorithm) [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: Traceback (most recent call last): [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1585, in run [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: self._handler_table[ptype](self, m) [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1664, in _negotiate_keys [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: 
self._parse_kex_init(m) [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1779, in _parse_kex_init [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: raise SSHException('Incompatible ssh peer (no acceptable kex algorithm)') [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: SSHException: Incompatible ssh peer (no acceptable kex algorithm) [paramiko.transport][ERROR] 2016-02-28 19:15:35,367: [roslaunch.remoteprocess][ERROR] 2016-02-28 19:15:35,370: Traceback (most recent call last): File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/remoteprocess.py", line 192, in _ssh_exec ssh.connect(address, port, username, password, timeout=TIMEOUT_SSH_CONNECT) File "/usr/lib/python2.7/dist-packages/paramiko/client.py", line 306, in connect t.start_client() File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 465, in start_client raise e SSHException: Incompatible ssh peer (no acceptable kex algorithm) [roslaunch][ERROR] 2016-02-28 19:15:35,370: remote[pi-3-0]: failed to launch on mvp: Unable to establish ssh connection to [pi@pi-3:22]: Incompatible ssh peer (no acceptable kex algorithm) [roslaunch.pmon][INFO] 2016-02-28 19:15:35,371: ProcessMonitor.register[pi-3-0] [roslaunch.pmon][INFO] 2016-02-28 19:15:35,371: ProcessMonitor.register[pi-3-0] complete [roslaunch.pmon][INFO] 2016-02-28 19:15:35,371: ProcessMonitor.shutdown <ProcessMonitor(ProcessMonitor-1, started daemon 140521618212608)> [roslaunch.pmon][INFO] 2016-02-28 19:15:35,379: ProcessMonitor._post_run <ProcessMonitor(ProcessMonitor-1, started daemon 140521618212608)> [roslaunch.pmon][INFO] 2016-02-28 19:15:35,379: ProcessMonitor._post_run <ProcessMonitor(ProcessMonitor-1, started daemon 140521618212608)>: remaining procs are [<roslaunch.remoteprocess.SSHChildROSLaunchProcess object at 0x7fcdbd4ef3d0>] [roslaunch.pmon][INFO] 2016-02-28 19:15:35,380: ProcessMonitor exit: killing pi-3-0 [roslaunch][INFO] 2016-02-28 
19:15:35,380: [pi-3-0] killing on exit [roslaunch.pmon][INFO] 2016-02-28 19:15:35,381: ProcessMonitor exit: cleaning up data structures and signals [roslaunch.pmon][INFO] 2016-02-28 19:15:35,381: ProcessMonitor exit: pmon has shutdown [roslaunch][ERROR] 2016-02-28 19:15:35,381: unable to start remote roslaunch child: pi-3-0 [roslaunch][ERROR] 2016-02-28 19:15:35,381: The traceback for the exception was written to the log file [roslaunch][ERROR] 2016-02-28 19:15:35,381: Traceback (most recent call last): File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/__init__.py", line 307, in main p.start() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 268, in start self._start_infrastructure() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 230, in _start_infrastructure self._start_remote() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/parent.py", line 210, in _start_remote self.remote_runner.start_children() File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/remote.py", line 124, in start_children p = self._start_child(server_node_uri, machines[m], counter) File "/opt/ros/indigo/lib/python2.7/dist-packages/roslaunch/remote.py", line 97, in _start_child raise RLException("unable to start remote roslaunch child: %s"%name) RLException: unable to start remote roslaunch child: pi-3-0 [rospy.core][INFO] 2016-02-28 19:15:35,381: signal_shutdown [atexit] Thanks, Brett Originally posted by bigbrett on ROS Answers with karma: 58 on 2016-02-25 Post score: 1 Answer: Found the solution after much digging.... The error lay in the version of paramiko on my computer, as seen here. I updated paramiko and then that error went away. I also needed to provide an env-loader option in my .launch file when I declared the remote machine. Once I did this, everything worked. I don't have enough points to accept my own answer, but if someone else could flag it as accepted, that would be great. 
Originally posted by bigbrett with karma: 58 on 2016-02-28 This answer was ACCEPTED on the original site Post score: 2
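For reference, the env-loader fix mentioned in the answer amounts to pointing the machine tag at an environment script on the remote host. The names and paths below mirror the question's launch file and are assumptions about the poster's setup, not the exact file they used:

```xml
<launch>
  <!-- env-loader sources the remote ROS environment before roslaunch runs -->
  <machine name="bone" address="bbone" user="ubuntu" password="ubuntu"
           env-loader="/opt/ros/indigo/env.sh" default="true"/>
</launch>
```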
{ "domain": "robotics.stackexchange", "id": 23906, "tags": "ros, roslaunch, xml, multiple, ssh" }
Checking if a Stack is sorted in ascending order
Question: Write a method isSorted that accepts a stack of integers as a parameter and returns true if the elements in the stack occur in ascending (non-decreasing) order from top to bottom, and false otherwise. That is, the smallest element should be on top, growing larger toward the bottom. For example, passing the following stack to your method should cause it to return true: bottom [20, 20, 17, 11, 8, 8, 3, 2] top The following stack is not sorted (the 15 is out of place), so passing it to your method should return a result of false: bottom [18, 12, 15, 6, 1] top An empty or one-element stack is considered to be sorted. When your method returns, the stack should be in the same state as when it was passed in. In other words, if your method modifies the stack, you must restore it before returning. Obey the following restrictions in your solution: You may use one queue or stack (but not both) as auxiliary storage. You may not use other structures (arrays, lists, etc.), but you can have as many simple variables as you like. Use the Queue interface and Stack/LinkedList classes discussed in the textbook. Use stacks/queues in stack/queue-like ways only. Do not call index-based methods such as get, search, or set (or use a for-each loop or iterator) on a stack/queue. You may call only add, remove, push, pop, peek, isEmpty, and size. Your solution should run in O(N) time, where N is the number of elements of the stack. You have access to the following two methods and may call them as needed to help you solve the problem: public static void s2q(Stack s, Queue q) { ... } public static void q2s(Queue q, Stack s) { ... } I'm looking for general feedback that will make my future code better. Are there any glaring issues that you can find that make it harder for you to read my code? Is there a better way you can think of to accomplish this task? 
public boolean isSorted(Stack<Integer> s) { boolean isSorted = true; if (s.size() == 1 || s.isEmpty()) { return isSorted; } Queue<Integer> q = new LinkedList<Integer>(); while(!s.isEmpty()){ int s1 = s.pop(); if (!s.isEmpty()) { int s2 = s.peek(); isSorted &= s2 >= s1; } q.add(s1); } while (!q.isEmpty()) { s.push(q.poll()); } while (!s.isEmpty()) { q.offer(s.pop()); } while (!q.isEmpty()) { s.push(q.poll()); } return isSorted; } Answer: Advice 1 if (s.size() == 1 || s.isEmpty()) { return isSorted; } You can write more succinctly if (s.size() < 2) { return true; } Advice 2 boolean isSorted = true; if (s.size() == 1 || s.isEmpty()) { return isSorted; } I suggest you write instead: if (s.size() == 1 || s.isEmpty()) { return true; } boolean isSorted = true; This is a micro-optimization. Advice 3 Making it generic is not hard. You could demand a type parameter E which extends a Comparable<? super E> Alternative implementation I had this in mind: public static <E extends Comparable<? super E>> boolean isSortedV2(Stack<E> stack) { if (stack.size() < 2) { return true; } Stack<E> aux = new Stack<>(); aux.push(stack.pop()); while (!stack.isEmpty() && stack.peek().compareTo(aux.peek()) >= 0) { aux.push(stack.pop()); } boolean sorted = stack.isEmpty(); // Restore the input stack: while (!aux.isEmpty()) { stack.push(aux.pop()); } return sorted; }
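The alternative implementation's early-exit idea (stop as soon as an inversion is found, then restore) carries over to other languages; here is a minimal Python sketch, using a list as the stack with the top at the end:

```python
def is_sorted(stack):
    """True iff elements ascend from top to bottom; the input list
    (top = last element) is restored before returning."""
    aux = []
    # Pop while each newly exposed element is >= the one just popped.
    while stack and (not aux or stack[-1] >= aux[-1]):
        aux.append(stack.pop())
    result = not stack  # we emptied the stack only if it was sorted
    while aux:          # restore the input stack
        stack.append(aux.pop())
    return result
```

Like the Java version, this runs in O(N) and uses a single auxiliary stack.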
{ "domain": "codereview.stackexchange", "id": 28538, "tags": "java, stack" }
Regelation, and melting ice with pressure
Question: It is an experimental fact (regelation) that if two weights are hung on the ends of a rigid bar, which passes over a block of ice, then the bar gradually passes through the block of ice. Moreover, the block remains intact even after the bar has passed completely through it. It is also a fact that less mass (for the two weights) is required if a semi-flexible bar is used (assume that the bar is otherwise of the same dimensions), rather than a rigid bar. It is this latter fact which is escaping me, why is this? One can show using the Clapeyron equation that one needs to apply some large pressure (call it $P$) locally to the ice to achieve melting locally. In the case of the rigid bar, the laws of mechanics say that the force is distributed evenly throughout the bar and we can thereby determine the pressure applied by the bar to the ice. I do not know enough about continuum mechanics to understand what happens when the bar is flexible. Is the distribution of pressure then such that certain parts of the bar exert larger pressures for fixed masses, and this then causes melting locally (at pressure $P$) at a mass of the weights which is otherwise lower? Answer: A search for regelation ice flexible beam in Google Scholar yields as the very first result "Paths swept out by initially slack flexible wires when cutting soft solids; when passing through a very viscous medium; and during regelation," which gives an analysis of the pressure, pressure-induced melting, and curve shape, with references to previous literature: A simplifying assumption is made that the wire (unlike a beam) has negligible bending stiffness, but this doesn't really affect the key aspect: When the wire/beam is curved, the load is applied to a smaller cross-section than the width of the block. (A perfectly rigid bar always loads the entire width of the block.) 
To prove this to yourself, try drawing lines connecting points where the wire/beam straightens; the lines are shortest at sharper corners, and so the stress (load per cross-sectional area) and thus the regelation speed are the highest there. (A no-friction assumption is handy for the wire case because it allows the tension to be considered constant; see the literature on four-point bending for guidelines on analyzing the beam case.)
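For a rough sense of the pressure scale $P$ the question alludes to, the Clausius-Clapeyron relation with standard handbook values for ice (latent heat $L\approx 3.34\times10^{5}\,\mathrm{J/kg}$, specific-volume change on melting $\Delta v\approx -9.1\times10^{-5}\,\mathrm{m^{3}/kg}$; these numbers are not from the original post) gives:

```latex
\frac{\mathrm{d}T}{\mathrm{d}P} = \frac{T\,\Delta v}{L}
\approx \frac{(273\,\mathrm{K})\,(-9.1\times10^{-5}\,\mathrm{m^{3}/kg})}{3.34\times10^{5}\,\mathrm{J/kg}}
\approx -7.4\times10^{-8}\,\mathrm{K/Pa}
```

Depressing the melting point by just $1\,\mathrm{K}$ therefore takes roughly $1.3\times10^{7}\,\mathrm{Pa}$ (about 130 atm), which is why concentrating the load on a small contact area (a curved wire rather than a rigid bar) is so effective.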
{ "domain": "physics.stackexchange", "id": 96440, "tags": "thermodynamics, fluid-statics, physical-chemistry, continuum-mechanics" }
Why is there an increasing solubility in water for chlorides, chlorates, perchlorates in that order?
Question: According to Wikipedia, $\pu{100 mL}$ of water dissolves at $\pu{25 ^\circ{}C}$ about $\pu{35 g}$ of $\ce{NaCl}$ (1), but about $\pu{79 g}$ of $\ce{NaClO3}$ (2), and around $\pu{210 g}$ of $\ce{NaClO4}$ (3). This looks to me roughly like the solubility doubles for every single higher level of oxidation. Earlier I was under the impression that the % of $\ce{Cl}$ or % of $\ce{Na}$ in the water would still be the same, and the amount of additional oxygen is what could be causing the increase in solubility, i.e. now each molecule has 3 oxygen atoms as opposed to none in the case of $\ce{NaCl}$. But this doesn't hold true. I would also like to know what effect multiple solutes sharing the same atoms have on each other. For example, does dissolving some chlorate cause the solvent to hold more chloride? As you may have guessed, chemistry isn't my strong suit; I know basic chemistry, but now when I am looking at it again, the whole thing has changed, at least w.r.t. what I was taught during my school days (right from how we determine the number of electrons). Answer: Science is all about discovering hidden regularities, patterns, and laws. In that sense your bold generalization is good, but will it hold if we check just one more data point? Let's see: $\ce{KCl}$ has a solubility of 25 g/100 mL, $\ce{KClO3}$ has 8, and $\ce{KClO4}$ has 1.5. Er, well... What we have here is an interplay between solvation energy and lattice energy, the latter being determined by the crystal structure, which is a tricky thing, and hence so is solubility. Don't expect it to be predictable from qualitative considerations. You don't know it until you measure it. Chemistry is an experimental science, after all.
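The answer's counterexample is easy to tabulate; the numbers below are simply the solubilities quoted in the question and answer (g per 100 mL water at 25 °C):

```python
# Solubilities (g / 100 mL water, 25 C) as quoted above.
solubility = {
    "NaCl": 35.0, "NaClO3": 79.0, "NaClO4": 210.0,
    "KCl": 25.0, "KClO3": 8.0, "KClO4": 1.5,
}

# The sodium salts rise with oxidation state; the potassium salts fall.
na_increasing = solubility["NaCl"] < solubility["NaClO3"] < solubility["NaClO4"]
k_decreasing = solubility["KCl"] > solubility["KClO3"] > solubility["KClO4"]
```

The two opposite trends with the same anions are exactly why no simple rule based on the anion alone can work.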
{ "domain": "chemistry.stackexchange", "id": 8366, "tags": "solubility, stoichiometry, ionic-compounds" }
ar_pose no longer downloadable...new url?
Question: Hi all, I'm trying to clone the following git repository: http://robotics.ccny.cuny.edu/git/ccny-ros-pkg/ccny_vision.git But the cloning is failing. Furthermore, when I try to access anything related to the robotics.ccny.cuny.edu base url, I'm redirected through the Wayback Machine (with no valid snapshots in June). It seems like this repository is dead. Does anyone know where the new one might be located, if there's one at all? Is there any other way to get this package? Originally posted by aespielberg on ROS Answers with karma: 113 on 2013-07-09 Post score: 1 Answer: It has moved to https://github.com/LucidOne/ar_tools Originally posted by Procópio with karma: 4402 on 2013-07-09 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14850, "tags": "ros, ar-pose, git" }
Commutation relations in quantum mechanics
Question: As we know, the simple harmonic oscillator can be solved using only the commutation relations between the creation and annihilation operators, together with the expression for the Hamiltonian. The spin energy spectrum is likewise found using only the commutation relations between the spin operators along the axes ($J_i$) and $J^{2}$. As another illustration, for quantizing fields (such as the real Klein-Gordon scalar field) in QFT, one approach is to postulate canonical commutation relations between the field and momentum operators. I already know that quantum operators form a Lie algebra, and commutation relations are important in a Lie algebra. However, I have a question: is it always true that commutation relations are sufficient to obtain the eigenvalues and eigenvectors of a Hamiltonian in the Hilbert space? If it is true, why are commutation relations sufficient to solve a quantum mechanics problem? Answer: If your Hamiltonian belongs to a Lie algebra for which you can solve the initial value problem in the corresponding group, then you can use geometric quantization to solve the corresponding Schroedinger equation. This is because the solution of the Schroedinger equation is just $\psi(t)=e^{-itH/\hbar}\psi_0$, and $e^{-itH/\hbar}$ is an element of the group generated by the Lie algebra (in the appropriate representation on the given Hilbert space). Thus the problem is reduced to group representation theory.
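As a concrete instance of commutators being sufficient, the harmonic oscillator spectrum follows purely algebraically from $[a,a^{\dagger}]=1$ and $H=\hbar\omega\,(a^{\dagger}a+\tfrac12)$:

```latex
[H, a^{\dagger}] = \hbar\omega\, a^{\dagger}, \qquad [H, a] = -\hbar\omega\, a
```

Thus $a^{\dagger}$ and $a$ shift any eigenvalue $E$ by $\pm\hbar\omega$, while $\|a|\psi\rangle\|^{2}=\langle\psi|a^{\dagger}a|\psi\rangle\ge 0$ forces a lowest state with $a|0\rangle=0$, yielding $E_{n}=\hbar\omega(n+\tfrac12)$ and $|n\rangle\propto(a^{\dagger})^{n}|0\rangle$ without solving any differential equation.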
{ "domain": "physics.stackexchange", "id": 40133, "tags": "quantum-mechanics, operators, hilbert-space, commutator" }
Finding a "typical" path
Question: Consider an undirected graph with two distinguished nodes $u\neq v$. How hard is it to find a $u-v$ path such that its length is as close to the average $u-v$ path length as possible? Formally, for a path $P$ let $\ell(P)$ denote its length (number of edges on $P$, but a weighted version may also be considered). Let $A$ denote the average length of simple $u-v$ paths: $$A= \frac{1}{N}\sum \ell(P)$$ where $N$ is the number of simple $u-v$ paths and the summation is taken over all such paths. Let us call a simple $u-v$ path $P_0$ typical if $|\ell(P_0)-A|$ is minimum. Question: What is the complexity of finding such a typical path? Is anything known about this problem? Answer: Lemma 1. The weighted problem is NP-hard by reduction from Partition. Lemma 2. The unweighted problem is NP-hard by reduction from Hamiltonian Path. Proof of Lemma 1. Given a Partition instance $(x_1, \ldots, x_n)$, construct the multigraph $G=(V,E)$ with $V=[n+1]$ and, for each $i\in [n]$, two copies of edge $(i, i+1)$, one with weight $x_i$ and the other with weight zero. Then ask for the path from $1$ to $n+1$ whose weight is as close as possible to the average over all paths from $1$ to $n+1$. By linearity of expectation, the average weight is $\sum_{i=1}^n x_i/2$, so there is a path of exactly average weight if and only if the Partition instance is feasible. (If desired, the multigraph can easily be converted to an equivalent graph by splitting each edge $(i, i+1)$ of weight zero into two zero-weight edges.) $~~~\Box$ Proof sketch for Lemma 2. Given a Hamiltonian Path instance $G=(V,E)$ with source $s$ and sink $t$, construct the following multigraph $G'$. Let $n=|V|$. First, add a new, long "super-path" from $s$ to $t$ as follows. Fix some $p, q, k, \ell$ to be determined later. Add $k$ new vertices $a_1, a_2, \ldots, a_k$, with $p$ new multi-edges $(a_i, a_{i+1})$ between each consecutive pair. Add edges $(s, a_1)$ and $(a_k, t)$. This addition adds $p^k$ paths of length $k$ from $s$ to $t$. 
Now add another (separate) super-path to add $q^\ell$ paths of length $\ell$. Choose $p, q, k, \ell$ so that the number of added paths is much larger than $n!$, so that the added paths determine (up to lower-order terms) the average path length. Choose $k$ and $\ell$ with $k < n-1 \ll \ell$, so that the average path length is larger than $n$, and closer to $n$ than to $\ell$. (Details below.) Then the typical path will be a Hamiltonian path from $s$ to $t$ in the original graph, if there is one. Here are the details for choosing $p, q, k, \ell$. Choose $p=n^{30}$, $q=n^2$, $k=n/3$, and $\ell=5n$. Then the addition adds $p^k = n^{10n}$ paths of length $k=n/3$, and $q^\ell = n^{10n}$ paths of length $\ell= 5n$. The average length of the added paths is then $(1/3 + 5)n/2 = 8n/3$. If there is a Hamiltonian $s$-$t$ path in the original graph, its length will be about $5n/3$ shorter than the average. The longer added paths, of length $5n$, are about $7n/3$ longer than the average, so they are not better. (Note that there are at most $n! \ll n^{10n}$ paths in the original graph, of length at most $n$, so they affect the average path length in the final graph by only lower-order terms.) If a multigraph is not allowed, the construction can be adjusted appropriately by splitting each multi-edge as usual, then taking account how this affects $k$ and $\ell$. $~~~\Box$
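For small graphs the typical path can of course be found by brute force, which is also a handy way to check the definitions. A minimal sketch (adjacency given as a dict of neighbor lists):

```python
def typical_path_length(adj, u, v):
    """Return (typical length, average A) over all simple u-v paths,
    enumerated by depth-first search."""
    def dfs(node, visited, length):
        if node == v:
            yield length
            return
        for nxt in adj[node]:
            if nxt not in visited:
                yield from dfs(nxt, visited | {nxt}, length + 1)

    lengths = list(dfs(u, {u}, 0))
    avg = sum(lengths) / len(lengths)  # A = (1/N) * sum of l(P)
    return min(lengths, key=lambda ln: abs(ln - avg)), avg
```

For example, on the 4-vertex graph with edges {0-1, 1-2, 0-2, 0-3, 3-2}, the three simple 0-2 paths have lengths 1, 2, 2, so $A=5/3$ and the typical length is 2.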
{ "domain": "cstheory.stackexchange", "id": 5573, "tags": "graph-theory, graph-algorithms, np-hardness, polynomial-time" }
Are larger stars rounder?
Question: The Earth is a very smooth sphere, and the Sun even more so, with only minor fluctuations. I am wondering: are larger stars even rounder? Intuitively, that seems self-evident, but I am not so sure. For instance, hydrostatic equilibrium causes larger stars to be much less dense than red dwarfs. So what is the most important factor for how round a star is, a higher mass, or less activity? The most prominent cause of irregularities is of course the rotation rate of the star, which is pretty much independent of size. Ignoring that, do larger stars have smaller deviations from the ellipsoid relative to their size? Edit: As it seems like the "other than rotation rate" criterion is not really meaningful, I have now dropped it. Answer: In terms of mean angular velocity, the distribution of rotation rates among main sequence stars is well known. Allen (1963) compiled data on mass, radius, and equatorial velocity, which was then expanded upon by McNally (1965), who focused on angular velocity and angular momentum. It became clear that angular velocity increases from low rates for spectral types of G and below, rising to a peak around type A stars and then slowly decreasing. Equatorial velocity continues increasing to mid-B type stars before slowly decreasing, but because of the increased radii of O and B type main sequence stars, the peak in angular velocity occurs before this. As Jean-Louis Tassoul notes in Stellar Rotation, many O type stars have rotational periods similar to those of G-type stars like the Sun! The distribution is not smooth and uniform (McNally noticed a strange discontinuity in angular momentum per unit mass between A0 and A5 stars; see his Figure 2); Barnes (2003) observed two distinct populations in open clusters, consisting of slower rotators (the I sequence) and faster rotators (the C sequence). Stars may migrate from one sequence to another as they evolve. 
Interestingly enough, stars on the I sequence lose angular momentum $J$ faster than stars on the C sequence: $$\frac{\mathrm{d}J}{\mathrm{d}t}\propto-\omega^n,\quad\text{where}\begin{cases} n=3\text{ on the I sequence}\\ n=1\text{ on the C sequence}\\ \end{cases}$$ Here, of course, $\omega$ is angular velocity. These results obey Skumanich's law. Oblateness can be determined from mass, radius, and angular velocity as $$f=\frac{5\omega^2R^3}{4GM}$$ Using this and McNally's data, some quick calculations get me the following table:

|Spectral type|$f/f(O5)$|
|--|-------|
|O5 | 1 |
|B0 | 1.28 |
|B5 | 1.84 |
|A0 | 1.67 |
|A5 | 1.35 |
|F0 | 0.482|
|F5 | 0.0387|
|G0 | 0.000314|
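The oblateness formula is straightforward to evaluate. The solar numbers below ($M\approx1.989\times10^{30}$ kg, $R\approx6.96\times10^{8}$ m, rotation period $\approx25$ d) are standard values assumed for illustration; note the formula is a uniform-density estimate, so it overestimates the flattening of a centrally condensed star like the Sun:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def oblateness(mass, radius, period):
    """f = 5 * omega^2 * R^3 / (4 * G * M), the uniform-density
    rotational-flattening estimate used in the answer."""
    omega = 2.0 * math.pi / period
    return 5.0 * omega**2 * radius**3 / (4.0 * G * mass)

f_sun = oblateness(1.989e30, 6.96e8, 25 * 86400)  # of order 1e-5
```

This gives $f_\odot\sim3\times10^{-5}$, a few times the measured solar oblateness of roughly $10^{-5}$, as expected for a centrally condensed star.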
{ "domain": "astronomy.stackexchange", "id": 1538, "tags": "star, hydrostatic-equilibrium, stellar-structure" }
Show that sum and product are both examples of accumulation
Question: Given this task: Exercise 1.32 a. Show that sum and product (exercise 1.31) are both special cases of a still more general notion called accumulate that combines a collection of terms, using some general accumulation function: (accumulate combiner null-value term a next b) Accumulate takes as arguments the same term and range specifications as sum and product, together with a combiner procedure (of two arguments) that specifies how the current term is to be combined with the accumulation of the preceding terms and a null-value that specifies what base value to use when the terms run out. Write accumulate and show how sum and product can both be defined as simple calls to accumulate. b. If your accumulate procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process. I wrote the following solution: Recursive: (define (accumulate combiner null-value term a next b) (if (> a b) null-value (combiner (term a) (accumulate combiner null-value term (next a) next b)))) Iterative: (define (i-accumulate combiner null-value term a next b) (define (iter a result) (if (> a b) result (iter (next a) (combiner (term a) result)))) (iter a null-value)) Sum/Product using iterative accumulate: (define (sum term a next b) (i-accumulate + 0 term a next b)) (define (product term a next b) (i-accumulate * 1 term a next b)) What do you think? Answer: Since the only parameter that changes in your recursive definition is a, you can write an inner definition like so: (define (accumulate combiner null-value term a next b) (define (rec a) (if (> a b) null-value (combiner (term a) (rec (next a))))) (rec a))
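For readers more comfortable outside Scheme: accumulate is just a fold, and a Python transcription (hypothetical names, mirroring the Scheme signature) makes the sum/product specializations equally terse:

```python
from functools import reduce
import operator

def accumulate(combiner, null_value, term, a, next_, b):
    """Fold combiner over term(a), term(next(a)), ... while the index <= b."""
    values = []
    while a <= b:
        values.append(term(a))
        a = next_(a)
    return reduce(combiner, values, null_value)

def sum_(term, a, next_, b):
    return accumulate(operator.add, 0, term, a, next_, b)

def product(term, a, next_, b):
    return accumulate(operator.mul, 1, term, a, next_, b)
```

As in the Scheme version, an empty range simply returns the null value.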
{ "domain": "codereview.stackexchange", "id": 205, "tags": "lisp, scheme, sicp" }
Sign of mass of an anti-particle
Question: When deriving the Lagrangian for spin-$\frac{1}{2}$ particles we are naturally led to using $\Psi$ and $\bar{\Psi}$. The Euler-Lagrange equations lead us to two wave equations: \begin{equation} (i\gamma_\mu \partial^\mu - m ) \Psi =0 \end{equation} \begin{equation} (i \gamma_\mu \partial^\mu + m )\bar{\Psi} =0 \end{equation} which differ by a sign in front of the mass term. The same thing happens if we look at the electromagnetic coupling of these spin-$\frac{1}{2}$ fields. Again their coupling differs by a sign. This is interpreted as particle and anti-particle having opposite charge. Nevertheless it is unconventional to speak of the anti-particle having negative mass. Why is this the case? Answer: The second equation is actually incorrect. It should be written as follows: $$ i\partial^{\mu}\overline{\Psi}\gamma_{\mu}+m\overline{\Psi}=0. $$ Here, $\overline{\Psi}$ is understood as a 4-component row vector (not in the sense of the vector rep. of the Lorentz group). At any rate, $\overline{\Psi}$ (or $\Psi^{\dagger}$) is not what you obtain from $\Psi$ by exchanging the roles of particles and antiparticles. The result of such an operation is $\Psi^{C}\equiv-i\gamma^{2}\Psi^{\ast}$, and it satisfies the same Dirac equation as $\Psi$. Among other things, it means that antiparticles have the same mass as particles.
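For reference, the charge-conjugation statement at the end of the answer can be written out explicitly. A sketch in the Dirac representation (sign and phase conventions vary between textbooks):

```latex
% Complex-conjugate the Dirac equation, then multiply from the left by -i\gamma^2:
%   (i\gamma_\mu \partial^\mu - m)\Psi = 0
%     \implies (-i\gamma_\mu^{*} \partial^\mu - m)\Psi^{*} = 0
% Using \gamma^2 \gamma_\mu^{*} = -\gamma_\mu \gamma^2 (Dirac representation):
\Psi^{C} \equiv -i\gamma^{2}\Psi^{\ast},
\qquad
(i\gamma_\mu \partial^\mu - m)\,\Psi^{C} = 0
```

so the charge-conjugate field satisfies the Dirac equation with the same (positive) mass, which is exactly the answer's point.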
{ "domain": "physics.stackexchange", "id": 11570, "tags": "particle-physics, antimatter, dirac-equation" }
Energy for a Lattice gas model
Question: The energy for a Lattice gas model can be written as \begin{equation} E_{LG} = \frac{\epsilon}{4} \sum_{<i,j>} (s_i s_j + s_i +s_j +1) - \frac{\mu}{2}\sum_i (s_i +1) \\ = \frac{\epsilon}{4} \sum_{<i,j>} s_i s_j- \frac{q \epsilon}{4}\sum_is_{i}- \frac{\mu}{2}\sum_i s_i + \text{constants} \end{equation} where $q$ is the number of nearest neighbours. Where does $\frac{q \epsilon}{4}\sum_is_{i}$ come from, and what happens to the $s_j$? Answer: To each edge $\langle i,j\rangle$, your first sum associates $\sigma_i+\sigma_j$. In particular, the term $\sigma_k$ appears once for each edge having $k$ as an endpoint. The number of such edges is equal to the number of neighbors of $k$, which is $q$. Therefore, $$ \sum_{\langle i,j\rangle}(\sigma_i+\sigma_j) = q \sum_i \sigma_i. $$ Alternatively, in a more formal way, \begin{align} \sum_{\langle i,j\rangle}(\sigma_i+\sigma_j) &= \sum_{\langle i,j\rangle}\sigma_i + \sum_{\langle i,j\rangle}\sigma_j \\ &= \tfrac12 \sum_i\sum_{j\sim i} \sigma_i + \tfrac12 \sum_j\sum_{i\sim j} \sigma_j \\ &= \sum_i\sum_{j\sim i} \sigma_i \\ &= \sum_i \sigma_i \underbrace{\sum_{j\sim i} 1}_{=q} \\ &= q \sum_i \sigma_i, \end{align} where the notation $\sum_{j\sim i}$ means that the sum is restricted to the neighbors $j$ of the vertex $i$. (The factors $\frac12$ in the second line come from the fact that summing first over $i$ and then over its neighbor $j$ yields each edge twice (once for each ordering of the two endpoints); the third line follows from the fact that both sums are equal.)
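The identity derived in the answer can be checked numerically on a small periodic lattice. A sketch for a 1D ring, where every site has q = 2 neighbours (spin values chosen at random):

```python
import random

# Random spins s_i = +/-1 on a periodic ring of N sites; every site has q = 2 neighbours.
N = 12
spins = [random.choice([-1, 1]) for _ in range(N)]
edges = [(i, (i + 1) % N) for i in range(N)]  # each bond counted once

lhs = sum(spins[i] + spins[j] for i, j in edges)  # sum over edges of (s_i + s_j)
rhs = 2 * sum(spins)                              # q * (sum over sites of s_i), q = 2
assert lhs == rhs  # holds for any spin configuration
```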
{ "domain": "physics.stackexchange", "id": 79022, "tags": "field-theory, phase-transition, ising-model" }
Why would this viral strain-specific antiserum fail to immunoprecipitate the same (98% identical protein) from another strain?
Question: I'm reading this paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC392475/ and I can't work out why a certain immune serum didn't work on the same viral protein but from different strains. The serum was extracted from tumour-carrying animals infected with the 'SR' strain of Rous sarcoma virus and was later used to immunoprecipitate out the viral protein p60-src from virus-infected cells. The cells in question had been infected either with SR or other closely-related strains; all strains express the same protein. But the antiserum only extracted the SR strain's p60-src and not the same protein from other strains. I thought it could be because the serum had by chance targeted an epitope specific to the SR strain, so you get a lot of proliferation of B cells complementary to this epitope, therefore overshadowing the detection of other epitopes. But (1) I think the serum was taken from multiple rabbits, and surely they wouldn't all have happened to do this; and (2) a later paper I read said that p60-src is 98% the same across the SR, Pr, Br strains etc., so it seems unlikely to have latched onto an SR-specific bit. And yet, the authors state that due to similarities between SRC genes, "... similar gene products are probably present in cells transformed by other strains of ASV, but they are not detected because of a lack of crossreaction with the currently available antisera." So my question is, what is causing this lack of crossreaction? Apologies if the answer is very obvious, I am new to the subject and have very little knowledge of experimental methods currently. Thank you in advance for any insights, or corrections on the conclusions I've drawn so far as they might be inaccurate too. Note: thanks mods for letting me edit the question to improve privacy.
Answer: As you note, the viral strains are not identical; the actual number and positions of amino-acid differences among them are unclear from your question, but it's not surprising that even a small differing portion of the protein could carry an immunodominant epitope and thus drive reproducible strain-specificity of any immune serum produced. Remember that the rabbits or other animals used to produce antibodies are typically at least somewhat inbred strains raised for lab use, and so are likely to have less genetic diversity among individuals than you might otherwise expect. Another paper from this time period reports that different immune sera generated from related virus strains and produced in different animals ranged from immunoprecipitating the src viral protein in zero of the viral strains tested up to immunoprecipitating the protein in all viral strains tested: "Detection of the Viral Sarcoma Gene Product in Cells Infected with Various Strains of Avian Sarcoma Virus ..." Brugge et al (1979) Journal of Virology Vol 29 No 3 p 1196-1203
{ "domain": "biology.stackexchange", "id": 11585, "tags": "proteins, immunology, cancer, lab-techniques" }
Fermions in a well
Question: I have two identical fermions in an infinite potential well. They are non-interacting. How should I show that the first excited state is four-fold degenerate? Is the wavefunction just the superposition of the wavefunction of each fermion? Answer: In the ground state, both electrons are in the state with the lowest value of "n". E.g., in the case of an infinite potential well, the lowest quantum number is n=1. In this case, both electrons have n=1, but one electron is spin up and one electron is spin down, because of exclusion. The first excited state is one in which one of the electrons has n=1 (lowest single-particle level) and one of the electrons has n=2 (first excited single-particle level). In this case, either spin is okay for either electron. So there are four states: (n=1,up; n=2,up), (n=1,up; n=2,down), (n=1,down; n=2,up), (n=1,down; n=2,down). And, no, the wave function is not just the superposition for each one. The wave function is a Slater determinant of the single-particle wavefunctions. For example, in the case of the ground state, the spatial part of the wavefunction is symmetric $$ \sin(x_1\pi/L)\sin(x_2\pi/L) $$ and the spin part of the wavefunction is anti-symmetric $$ |\uparrow\downarrow>-|\downarrow\uparrow>\;. $$ You can work out the four excited states similarly.
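The counting in the answer can be confirmed by brute force: enumerate all Pauli-allowed assignments of the two fermions to single-particle states (n, spin) with $E_n \propto n^2$, and count the states at each total energy. A small sketch:

```python
from itertools import combinations

# Single-particle states (n, spin) of the infinite well; E_n is proportional to n^2.
orbitals = [(n, s) for n in (1, 2, 3) for s in ("up", "down")]

# Pauli exclusion: two identical fermions occupy two *distinct* (n, spin) states.
two_body = list(combinations(orbitals, 2))
energy = lambda pair: pair[0][0] ** 2 + pair[1][0] ** 2

levels = sorted({energy(p) for p in two_body})
ground, first_excited = levels[0], levels[1]

n_ground = sum(1 for p in two_body if energy(p) == ground)
n_first = sum(1 for p in two_body if energy(p) == first_excited)
print(n_ground, n_first)  # 1 4  -> the first excited level is four-fold degenerate
```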
{ "domain": "physics.stackexchange", "id": 20028, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, potential, fermions" }
Do galaxies change shape and size over time?
Question: Galaxies come in different shapes and sizes. What factors determine the shape and size of a galaxy, and how can these change over time? Answer: The size of a galaxy is dependent on how the matter was distributed in the early universe, and how it collapsed under its own gravity to form clumps of (dark) matter that the galaxies would form around. The very earliest galaxies are irregular, but as matter fell towards them, and they developed a consistent orbital direction they formed into a disc. Gravity waves in the disc then show up as spiral arms. Large galaxies that are actively forming stars tend to have this structure. Small galaxies may never reach the point of forming a disc and remain irregular. As galaxies collide and interact, the orbits of the stars are disrupted. The result of the merger of two large spiral galaxies is often a galaxy in which the stars orbit in all directions, which looks like an elliptical galaxy. Star formation in elliptical galaxies is often very low. Elliptical galaxies are the most evolved form of galaxies. The time scale of this evolution is slow. It takes billions of years for galaxies to evolve from one form to another. The details of the process of galaxy formation and evolution are still uncertain. The exact nature of dark matter will be important in understanding how galaxies form. The role (if any) that black holes play in galaxy formation is uncertain.
{ "domain": "astronomy.stackexchange", "id": 1719, "tags": "galaxy" }
Why do we echo "source /opt/ros/kinetic/setup.bash"?
Question: This guide here specifies the command $ echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc to source the setup.bash in the ~/.bashrc. For this reason, the end of my .bashrc has been looking like the code snippet below. echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash source /opt/ros/kinetic/setup.bash Why can't we just add a source /opt/ros/kinetic/setup.bash to the .bashrc instead of the echo? Originally posted by sharan100 on ROS Answers with karma: 83 on 2016-12-27 Post score: 1 Answer: This command is done one time only: echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc Once it is appended to .bashrc everytime you open a new terminal the bash script /opt/ros/indigo/setup.bash is run. You could edit .bashrc in a number of ways to add the line "source /opt/ros/indigo/setup.bash" to the end of the .bashrc file, echo is just a handy way to edit .bashrc. Originally posted by billtecteacher with karma: 101 on 2016-12-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by gvdhoorn on 2016-12-27: "could" -> "should". Comment by sharan100 on 2016-12-27: So a simple source /opt/ros/kinetic/setup.bash would suffice? I was wondering if there was a particular reason as to why the echo was being used. Comment by gvdhoorn on 2016-12-27: @sharan100: yes. Comment by gvdhoorn on 2016-12-27: I think you misread the installation / setup instructions, as they don't actually tell you to add the echo "source .. line to your .bashrc. Comment by sharan100 on 2016-12-28: yeah, I would think so!
{ "domain": "robotics.stackexchange", "id": 26584, "tags": "ros, installation, bashrc, ros-kinetic, setup.bash" }
Modeling a decision tree-like form
Question: A picture says more than a thousand words, so here's a mock-up. The actual choices are different, this is just a sample. A user first has to choose between Red, Blue and Green. Upon selecting an option, additional form elements are displayed, depending on the choice. For Blue, a Date and Font are required. Currently, there's no need for a third level, but who knows what the future brings. In order to represent this data, I've come up with the following model (after many attempts).

    public class SampleWizard
    {
        public enum ColourChoice { Red, Blue, Green }

        public class RedModel
        {
            public IEnumerable<SelectListItem> AvailableShapes { get; set; }
            public int Shape { get; set; }
            public DateTime Date { get; set; }
        }

        public class BlueModel
        {
            public enum FontChoice { Arial, Helvetica, Other, }

            public DateTime Date { get; set; }
            public FontChoice Font { get; set; }
            public IEnumerable<SelectListItem> AvailableOtherFonts { get; set; }
            public int OtherFont { get; set; }
        }

        public ColourChoice Colour { get; set; }
        public RedModel Red { get; set; }
        public BlueModel Blue { get; set; }
    }

I'm using Enums for hard-coded choices, so I can make decisions based on the value. ints come from the database and do not matter in the programmed decision flows. This works, but I can't help but feel like this is an ugly solution and I'm worried about future modifications. I've never had to make something like this before so I can't make decisions based on past experiences. The UI design is more or less set in stone, unless strong arguments can be made against it. (I'd need to bring them up to the designer and project manager.)

Answer: Well, I think you should first ask yourself: do you really want to complicate things? Is this something that you will re-use in other places? Because the feeling "I need to create something reusable and extendable" is familiar to every one of us, but the amount of code you'll need to write and debug is not always worth it in the end.
If your answer is "no", then you can simply do a little refactoring: move the classes to separate files, replace the individual view-model properties with a Dictionary<ColourChoice, ViewModelBase>, stuff like that. It will end up looking fine. If your answer is "yes", then I would probably create a tree-like structure from my view models by implementing something like:

    interface ITreeViewModel
    {
        IEnumerable<ITreeItem> Children { get; }
        bool IsSelected { get; }
        ITreeItem Parent { get; }
    }

Then I would create a custom user control which will display such a tree using this interface and specified DataTemplates (after I fail to make the WPF TreeView work as I want it to yet again -_-). This is a tricky task with plenty of pitfalls, but I think it's possible. At least that would be the approach I'd try first.
{ "domain": "codereview.stackexchange", "id": 4362, "tags": "c#, asp.net-mvc-4" }
2D lattice random walk plots in functional style
Question: To practice writing code in the functional programming style, I wrote a program to plot two-dimensional lattice random walks. I'd appreciate any feedback about how to improve its "functionaliness".

    possibleSteps <- list(c(-1,0), c(0,-1), c(1,0), c(0,1))

    step <- function(x) {
      return(unlist(sample(possibleSteps, 1)))
    }

    takeRandomWalk <- function(nSteps) {
      coordPairs <- Reduce(`+`, lapply(1:nSteps, step), accumulate = T)
      x <- sapply(coordPairs, `[`, 1)
      y <- sapply(coordPairs, `[`, 2)
      return(list(x, y))
    }

    plotRandomWalk <- function(nSteps, margins) {
      walkObj <- takeRandomWalk(nSteps)
      plot(seq(-margins,margins), seq(-margins,margins), type = 'n', xlab = "", ylab = "")
      lines(walkObj[[1]], walkObj[[2]])
    }

Call plotRandomWalk(10000, 80) for an example.

EDIT: I have now compiled a far schnazier version as a shiny app: Check it out! Thanks for the help!

Answer: Looks good. The code is functional with the exception of using return, which is not necessary as the last value will be returned anyway, so I'd simplify to the following and, in the interest of efficiency, only compute the coords once and without seq, as `:` is shorter:

    step <- function(x) {
      unlist(sample(possibleSteps, 1))
    }

    takeRandomWalk <- function(nSteps) {
      coordPairs <- Reduce(`+`, lapply(1:nSteps, step), accumulate = T)
      x <- sapply(coordPairs, `[`, 1)
      y <- sapply(coordPairs, `[`, 2)
      list(x, y)
    }

    plotRandomWalk <- function(nSteps, margins) {
      walkObj <- takeRandomWalk(nSteps)
      coords <- -margins:margins
      plot(coords, coords, type = 'n', xlab = "", ylab = "")
      lines(walkObj[[1]], walkObj[[2]])
    }

Reduce with accumulate is pretty cool; I think this is the first time I've seen an accumulate parameter on a reduce-like function anywhere. Now I think you know that, but usually R code like this would be written with fewer lists and more data frames or matrices.
That doesn't mean the code isn't functional: as long as it's using functions, not mutating things, and using functional abstractions instead of imperative constructs, it's fine. So the following is probably more idiomatic and at the same time uses more efficient representations for the data:

    possibleSteps <- matrix(c(-1, 0, 0, -1, 1, 0, 0, 1), ncol = 2, byrow = TRUE,
                            dimnames = list(NULL, c("X", "Y")))

Gives a matrix with named columns like so, which will be the representation throughout the rest of the code (which is good for consistency):

    > possibleSteps
          X  Y
    [1,] -1  0
    [2,]  0 -1
    [3,]  1  0
    [4,]  0  1

The random walk will still be created by sample, except it's sampling indexes. cbind will create a new matrix from the two accumulated sums (cumsum):

    takeRandomWalk <- function(nSteps) {
      indexes <- sample(1:dim(possibleSteps)[1], nSteps, TRUE)
      walk <- possibleSteps[indexes,]
      cbind(X = cumsum(walk[,1]), Y = cumsum(walk[,2]))
    }

And finally plotRandomWalk is a bit simpler as we can just give lines the constructed coordinates matrix:

    plotRandomWalk <- function(nSteps, margins) {
      coords <- -margins:margins
      plot(coords, coords, type = 'n', xlab = "", ylab = "")
      lines(takeRandomWalk(nSteps))
    }
{ "domain": "codereview.stackexchange", "id": 12504, "tags": "functional-programming, random, r, higher-order-functions, data-visualization" }
How the Final Edge Image Can Be Obtained from Laws Masks
Question: As we know, the technique of Laws edge detection has 25 2D masks that are obtained from 5 different 1D filters. We can use each of those masks on the image with convolution, but how can we obtain the final result? Is it achieved by some kind of fusion, or does one mask have to be chosen? Thank you. Answer: If I understand correctly, the question is: given many images which are the result of different edge filters applied to the same image, how to actually mark edges. Well, you basically created 25 tests for each pixel to decide whether or not it is an edge. You could apply many approaches to decide: Majority Votes - If more than half of the voters decided it is an edge, mark it as an edge. Threshold - If more than $ x $ voters vote for edge, the pixel will be declared as an edge. Spatial Model - Instead of a per-pixel decision, look around it and at the other voters. Weight of Votes - Don't mark votes as "Yes / No" but give each a scalar. If the sum of all scalars is above a threshold value, declare an edge. As you can see the options are endless.
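The majority-vote and threshold options can be sketched in a few lines of plain Python (the boolean edge maps and the function name below are made up for illustration):

```python
# Hypothetical fusion of per-mask edge decisions by voting.  `responses` is a
# list of boolean edge maps (one per mask); `k` is the vote threshold.
def fuse_edges(responses, k):
    h, w = len(responses[0]), len(responses[0][0])
    votes = [[sum(r[y][x] for r in responses) for x in range(w)] for y in range(h)]
    return [[votes[y][x] >= k for x in range(w)] for y in range(h)]

# Toy example: 3 of 5 "masks" vote for an edge at pixel (0, 1).
maps = [[[False, True]], [[False, True]], [[False, True]],
        [[False, False]], [[False, False]]]
majority = fuse_edges(maps, k=len(maps) // 2 + 1)  # majority vote: k = 3
print(majority)  # [[False, True]]
```

The weighted variant is the same structure with scalar responses summed instead of boolean votes counted.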
{ "domain": "dsp.stackexchange", "id": 5416, "tags": "image-processing, edge-detection, texture" }
move_base accuracy thresholds
Question: Hi there, I'm using a turtlebot to do some navigation stuff and have some high level logic that determines where the turtlebot should go (I have regions of interest on my map, and this higher level logic chooses a specific region and decides to move the robot to the centre of the region.). One of the issues I ran into was when it picks the locations and passes the co-ordinates to the move_base server (which seems to be using global_planner) there is no guarantee that a valid plan can be found for those specific co-ordinates. What I wanted to know is if there is any way to set a threshold for the accuracy of the planner - in a use case like this it is not actually important that the robot moves to exact co-ordinates specified - being roughly there is good enough. I was thinking that I could probably just make it keep trying to generate plans by randomly picking new points within a certain radius until it succeeds, but this seemed a little inelegant. Would appreciate any ideas/pointers on how to get around this issue :) Originally posted by genericsoup on ROS Answers with karma: 45 on 2015-02-18 Post score: 2 Answer: move_base also provides the make_plan service which allows you to specify a tolerance. Originally posted by David Lu with karma: 10932 on 2015-02-19 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by genericsoup on 2015-02-21: Thanks! - this is exactly what I was trying to search for - was just looking in the wrong places!
{ "domain": "robotics.stackexchange", "id": 20924, "tags": "ros, navigation, path-planning, global-planner, move-base" }
Difference between spin-orbit coupling and the Russell-Saunders Effect?
Question: The Russell-Saunders effect is the same thing as 'spin-orbit interaction', correct? The reason I am asking is that I was reviewing the Wikipedia page on 'spin-orbit interaction' and it does not mention Russell-Saunders at all... Answer: I'm not aware of the Russell–Saunders effect, but the Russell–Saunders coupling scheme is definitely a thing. As you noted, the Wikipedia page on "spin-orbit interaction" doesn't talk about it, but a different Wikipedia page does, and basically tells you the same thing as I will. The answer is... yes and no. The word "coupling" refers to the coupling of several sources of angular momentum, namely the spin component and orbital component. Now, the issue is that in an atom you typically have many electrons, and every electron has its own orbital angular momentum $\vec{l}$ and spin angular momentum $\vec{s}$,* so you have many, many sources of angular momentum. The challenge is to bring all of these together in a way that allows us to describe the electronic state of an atom using its angular momentum properties (for example, this is what a term symbol does). There are two approaches towards coupling all of these angular momenta together: (Russell–Saunders or LS-coupling) Couple all the individual $\vec{l}$'s together to form one gigantic orbital angular momentum $\vec{L}$, and couple all the individual $\vec{s}$'s together to form one gigantic spin angular momentum $\vec{S}$. Then couple these two together to form the total angular momentum $\vec{J}$. If you have studied term symbols before, it was probably using the Russell–Saunders scheme, where you calculate $L$, $S$, and $J$, then write the term symbol $^{2S+1}L_J$. (jj-coupling) For each individual electron, couple $\vec{l}$ and $\vec{s}$ together to form the total angular momentum $\vec{j}$ for that one particular electron. Then bring all the electrons' total angular momenta together to form $\vec{J}$.
Note that we haven't ever mentioned $L$ and $S$ here, so the term symbols under this coupling scheme are different. Instead, you'd label the term symbols with the individual values of $j$ for each electron. For an example see e.g. Atkins Molecular Quantum Mechanics. Now, which you use depends on whether electron-electron repulsions or the spin-orbit coupling is a "bigger" term. If spin-orbit coupling is very significant, then it means that the spin and orbital angular momenta on their own (i.e. $\vec{L}$ and $\vec{S}$) are not very useful quantities,† since the interaction between them is large. In this scenario, jj-coupling is a more appropriate way of describing the effects of spin-orbit coupling on the electronic state. On the other hand, if the spin-orbit coupling is relatively small, then $\vec{L}$ and $\vec{S}$ are useful quantities which are still applicable to the atom's electronic state, so the Russell–Saunders scheme is appropriate. And of course, sometimes we get stuck in the middle ground where neither scheme is fully appropriate. TL;DR The Russell–Saunders and jj schemes are both methods which can be used to describe the effects of spin–orbit coupling, but they are not the same thing as spin–orbit coupling.‡ * Well, sort of, anyway: the electrons are indistinguishable, so it's more accurate to say that the $n$ electrons in the atom have $n$ orbital angular momenta $\{\vec{l}_1, \vec{l}_2, \cdots, \vec{l}_n\}$ and $n$ spin angular momenta $\{\vec{s}_1, \vec{s}_2, \cdots, \vec{s}_n\}$. † To be precise, $L$ and $S$ are not "good quantum numbers" because the operators $\hat{L}$ and $\hat{S}$ don't (approximately) commute with the total Hamiltonian $\hat{H}_0 + \hat{H}_\text{so}$, where $\hat{H}_\text{so}$ is the spin-orbit coupling Hamiltonian and $\hat{H}_0$ is the rest of the Hamiltonian (which does commute with $\hat{L}$ and $\hat{S}$). 
‡ If you read the previous footnote, then the spin–orbit coupling itself is represented by the Hamiltonian $\hat{H}_\text{so}$. The two coupling schemes can be thought of as ways of dealing with this term as a perturbation to $\hat{H}_0$. In Russell–Saunders the perturbation is small, and consequently, the "good" quantum numbers are similar to those of $\hat{H}_0$. In jj the perturbation is large.
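The LS-coupling step of combining $L$ and $S$ into $J$ follows the usual angular-momentum addition rule $J = |L-S|, |L-S|+1, \ldots, L+S$, which is easy to sketch in code (the function name is my own):

```python
def allowed_j(L, S):
    """Total angular momenta from coupling L and S: |L-S|, |L-S|+1, ..., L+S."""
    jmin, jmax = abs(L - S), L + S
    # jmax - jmin is always an integer, so this also handles half-integer S.
    return [jmin + k for k in range(int(round(jmax - jmin)) + 1)]

# e.g. a 3P term (L = 1, S = 1) splits into levels J = 0, 1, 2:
print(allowed_j(1, 1))    # [0, 1, 2]
print(allowed_j(1, 0.5))  # [0.5, 1.5]
```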
{ "domain": "chemistry.stackexchange", "id": 14274, "tags": "quantum-chemistry, electrons, spin" }
Deflection of thin plate with 1 free edge and deflection > thickness
Question: I have a thin uniform rectangular steel plate supported on 3 sides and free on one, and supporting a uniform load. I'm trying to determine what thickness of plate I need to keep deflection below a given value, but the deflection may not be "small" compared to the plate thickness, so standard formulae for maximum deflection may not be accurate, and I'm stuck on how to check my answer, or how to calculate it more accurately. Data: The plate is a flat rectangular plate (stainless 316 or mild, undecided) anything from 1.5-15mm thick with an unsupported area at its edge of 1250x500mm, total 0.625m^2. It is simply supported along the 500-1250-500 edges and free on the other 1250 edge. The unsupported area carries a static load of 4800 N/m^2 (approx 360kg spread uniformly over the unsupported area) plus its self weight. The 3 supported edges are unrestrained simple supports that can slide or rotate; they don't resist any movement except in the Z-direction (a bit like it's resting across the 3 edges of a "u" shaped pit). My question is the thickness of steel plate I need, to ensure maximum deflection (at the middle of the free edge?) stays under possible values of 3mm / 5mm / 10mm (the most likely permitted deflections, or at least a good selection to choose between). The problem is that I am guessing solutions could be thicknesses around 1.5 - 5mm, which means that the deflection might not be "small" compared to its thickness and the usual simple calculation may not be very accurate or trustworthy. But I'm not sure.... Thanks - and any hints how I can work this out myself appreciated but not essential :) Answer: You are correct that this problem falls within the category of 'large-deflection' problems since the deflection is larger than about $\frac{t}{2}$. To correctly analyse a plate under large deflections would require solving the Föppl–von Kármán equations, which is generally not possible except for some very specific cases.
Numerical solutions to some cases have been published (for example in this book); unfortunately, your case does not appear to be one of them. Using non-linear finite element analysis may be the only option. However, if the edges were restrained from sliding, membrane action would occur (some of the load would be carried by direct tension in the plate and not just bending) and the stiffness of the plate would be higher than that predicted by small-deflection theory [Roark (2002) pg. 448]. Therefore, if a small-displacement analysis predicts the displacement will be less than your requirements, a large-displacement analysis would result in a smaller deflection and therefore also meet your requirements. Note that in this case stresses for a given load will also be less than predicted by small-displacement theory. Reference: Roark, R. J., Young, W. C., & Budynas, R. G. (2002). Roark's formulas for stress and strain. New York: McGraw-Hill.
{ "domain": "engineering.stackexchange", "id": 1800, "tags": "deflection, numerical-methods" }
Is intersection of $k \ge 3$ graphic matroids in P?
Question: It is known that intersection of three general matroids is NP-hard (source), which is shown via reduction from Hamiltonian cycle. The reduction uses one graphic matroid and two connectivity matroids. A special case of a problem I am working on can be solved by intersecting multiple graphic matroids, but I haven't been able to find whether this problem is in P. Question: Is it known? Can someone please refer me to a paper or something? (Note: I have asked this question on Computer Science and was referred here.) Answer: I think it is still NP-complete, by a reduction from Hamiltonian paths in bipartite graphs with two degree-one vertices and all other vertices having degree three. (This is just the same as finding Hamiltonian cycles through a specified edge in a cubic bipartite graph: replace the specified edge by two leaves.) To reduce from Hamiltonian paths to graphic matroid intersection, use one graphic matroid to force the subgraph you choose to be a spanning tree (true of every path) and two more graphic matroids, one on each side of the bipartition, to force the subgraph to have degree two at each degree-three vertex and to have an edge at each degree-one vertex. These are the graphic matroids of a graph with disjoint copies of $K_3$ for each degree-three vertex and $K_2$ for each degree-one vertex.
{ "domain": "cstheory.stackexchange", "id": 4027, "tags": "reference-request, np-hardness, reductions" }
Why nitrocellulose does not explode
Question: My question is simple: why does nitrocellulose not explode, given that it contains nitrogen and oxygen in the structure of the cotton? Why does it only burn? Answer: It does detonate in certain scenarios. Detonation is a case of rapid decomposition where the reaction front travels at supersonic speed. Obviously, for such decomposition to occur, the compound needs a way to decompose with a big enough release of energy. Also, in this case the decomposition front travels not by means of heat transfer, but by means of a pressure wave. This means that the detonating solid must be, well, solid and not fluffy. Another requirement is generation of the initial pressure wave. Some compounds can create it on their own, some require a primer, and some require a pretty damn big primer.
{ "domain": "chemistry.stackexchange", "id": 9160, "tags": "explosives" }
Does scattering cross-section of a large number of scatterers tend to sum of their cross-sections?
Question: Suppose we know the differential cross-section for some type of scatterer particle. Now, consider a large number of such scatterers distributed randomly in some volume. If there's much space between these scatterers, I suppose the cross-sections of individual scatterers can be simply summed, because multiple scattering would be negligible(?). But in general, it seems, multiple scatterings should distort the differential cross-section of individual scatterers, so that a simple sum won't work to get the cross-section for the collection. Is this correct? If yes, how can one calculate the differential cross-section of a large collection of randomly-distributed scatterers, given the cross-section for one scatterer? Answer: As long as the target is "thin" (meaning the total odds of being scattered are small), then you are safe simply adding cross-sections together, because the odds of being scattered more than once are negligible. Once the total chance of an incident particle being scattered gets to be significant, then simply summing the cross-sections of scattering centers in the beam will overcount, because a significant fraction of incident particles will be scattered by more than one center, but it is still only one particle. As with other concerns, the definition of "negligible" is driven by your precision goals for the measurement and by comparison to other uncertainties. To keep the frequency of individual incident particles experiencing multiple hard scattering events small enough to ignore when you want a 1% final precision (the goal in most of my dissertation work), you'll need a target that scatters less than $\approx 10\%$ of incident beam particles (we used targets up to 6% radiation length).
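The "thin target" criterion can be made quantitative. Writing $x$ for the expected number of scatters per incident particle (areal density of centres times $\sigma$), the true scattering probability is $1 - e^{-x}$, while naively summing cross-sections predicts $x$; a quick sketch of the overcount:

```python
import math

def true_prob(x):
    """Probability of at least one scatter, where x = expected scatters per particle."""
    return 1.0 - math.exp(-x)

for x in (0.01, 0.06, 1.0):
    naive = x               # naive sum of individual cross-sections
    exact = true_prob(x)
    print(f"x={x}: overcount = {(naive - exact) / exact:.1%}")
# At x of a few percent the overcount stays below ~3%, so summing cross-sections
# is safe; at x = 1 the naive sum overcounts by ~58%: multiple scattering matters.
```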
{ "domain": "physics.stackexchange", "id": 49593, "tags": "scattering-cross-section" }
What are the different ways to measure the spatial curvature of the universe?
Question: Just what the question asks. Assuming the Friedmann-Robertson-Walker (FRW) metric, what measurements can be performed to determine the spatial curvature of the universe? Answer: The curvature of the universe can be derived from the temperature fluctuations in the Cosmic Microwave Background. For a given amount of radiation, baryons, dark matter and dark energy in the universe, these temperature fluctuations can be calculated theoretically, and compared with observations, and so one searches for the values that yield the best-fitting model. The amount of matter and dark energy also determines the curvature of the universe, which has an effect on the appearance of the temperature fluctuations. In particular, the curvature of the universe has an effect on the angular size of the temperature fluctuations: In a universe with positive curvature (a 3-sphere), the fluctuations will look bigger; in the case of negative curvature, they will appear smaller: http://science.nasa.gov/media/medialibrary/2000/04/27/ast27apr_1_resources/model_maps.jpg The top figure shows the actual observational data, the 3 panels below are theoretical simulations for a positive, zero, and negative curvature. As it turns out, the best-fitting model is a universe with zero spatial curvature. We can see this in more detail if we plot a distribution of the sizes of the temperature fluctuations: The peaks tell us what angular sizes are most abundant. If the curvature wasn't zero, then these sizes would be different, which means that the peaks would be at different locations; in particular, in a negatively curved universe, they would be shifted to the right (smaller scales). The animation below shows what that would look like: The total density mainly consists of matter and dark energy, so $\rho_\text{tot} = \rho_M + \rho_\Lambda$. 
In a flat universe, the total density is equal to the so-called critical density $\rho_c$, so one can define the parameters $\Omega_M=\rho_M/\rho_c$, $\ \Omega_\Lambda=\rho_\Lambda/\rho_c$, and $$ \Omega_K = 1 - \Omega_M - \Omega_\Lambda. $$ A flat universe corresponds with $\Omega_K=0$, while a negatively curved universe has (somewhat confusingly) $\Omega_K>0$. The animation shows two scenarios: for the yellow curve, $\Omega_\Lambda$ is fixed to zero and $\Omega_M$ gradually decreases, so that $\Omega_K$ increases and the curvature is increasingly negative. And indeed, the peaks move to the right. For the blue curve, $\Omega_K$ is fixed to zero (a flat universe) and $\Omega_M$ gradually decreases (so that $\Omega_\Lambda$ increases accordingly). This time, it follows that the peaks move slightly to the left as the amount of dark energy increases. The best fit with observations has $\Omega_M\approx 0.3$ and $\Omega_\Lambda\approx 0.7$, thus a universe with zero curvature. An even more careful analysis allows cosmologists to distinguish between the amount of baryons and dark matter. Sources: Cosmic microwave background (wikipedia) Planck 2013 results. I. Overview of products and scientific results, Fig 19 General Relativity and the Geometry of the Universe CMB Introduction
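The definition above can be restated as a trivial numeric check (using the best-fit values quoted in the answer):

```python
# Omega_K = 1 - Omega_M - Omega_Lambda; zero means a spatially flat universe,
# and (somewhat confusingly) Omega_K > 0 means negative curvature.
def omega_k(omega_m, omega_lambda):
    return 1.0 - omega_m - omega_lambda

print(omega_k(0.3, 0.7))   # best-fit values: flat
print(omega_k(0.2, 0.0))   # underdense, matter-only: Omega_K > 0
```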
{ "domain": "physics.stackexchange", "id": 10921, "tags": "general-relativity, cosmology, experimental-physics, observational-astronomy, cosmic-microwave-background" }
Why is the reduction from Vertex-Cover to Subset-Sum of polynomial time?
Question: In the standard proof why Subset-Sum is (weakly) NP-complete, one reduces Vertex Cover to Subset-Sum by using suitable numbers with O(m+n) bits (where m is the number of edges and n the number of vertices). But how can we talk about a polynomial time reduction if we generate exponential-size numbers? I guess that this is the key why Vertex Cover is strongly NP-complete and Subset-Sum is only weakly NP-complete. But I didn't get why it is in fact a polynomial time reduction. Answer: The key is indeed that the numbers are expressed in binary. As you observe, the numbers could have a value that is exponential in the size of the vertex cover instance. If the reduction "wrote them out" in unary when computing the Subset-Sum instance, this would take exponential time. However in binary, writing down the bits of the number (i.e. computing their representation in the Subset-Sum instance) only takes polynomial time. Just to emphasise the distinction, it only takes 7 digits to write out 1,000,000 in base ten, so writing it down takes about 7 steps. If we did it in unary, it would take about a million steps. The same thing is happening in the reduction, if it were using unary, it would not be polynomial in the size of the graph, but in binary - because what we're measuring is the number of steps taken to write it down - it is polynomial.
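The digit-count argument can be made concrete with a short sketch (Python, illustrative): writing down a number N costs about log2(N) steps in binary but N steps in unary, so numbers with exponentially large values still have polynomially many bits.

```python
# Number of "write-down" steps for a number N in binary vs. unary.
def binary_digits(n):
    return n.bit_length()   # about log2(n) bits

def unary_digits(n):
    return n                # one mark per unit

for n in (1_000_000, 2**100):
    print(f"N = {n}: binary {binary_digits(n)} digits, unary {unary_digits(n)} digits")
```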
{ "domain": "cs.stackexchange", "id": 2749, "tags": "np-complete, reductions" }
Negative mass dirac equation -> Propagator?
Question: There are two types of Dirac equations: $(p_\mu\gamma^\mu - m)\Psi(x) = 0$ and $(p_\mu\gamma^\mu + m)\Psi(x) = 0$. Here $p$ are the momentum operators. The fermion propagator is defined in the equation $(p_\mu\gamma^\mu - m)S_F(x-x') = \delta(x-x')$. This equation can be easily solved in momentum space: $$S_F(p) = \frac{p_\mu\gamma^\mu + m}{p^2-m^2+i\epsilon}.$$ As far as I can see this contains only the first Dirac equation and not the second equation. Nevertheless we use this to calculate all processes in QED, so is the second Dirac equation not included in QED? Answer: The two equations would lead to equivalent physics but only one of them may be right: the correct equations of motion for Dirac spinor fields are first-order in derivatives. The convention is that only the first equation is right for the Dirac field $\Psi$. The second equation is simply incorrect and doesn't follow from the Lagrangian etc. If we wanted solutions to both equations to be right, $\Psi$ would effectively obey the Klein-Gordon equation of the type $(p^2-m^2) \Psi = 0$ for each component. But a virtue of the Dirac equation is that it is a first-order equation. That's also why it can be so nicely reduced to the non-relativistic Schrödinger's equation in the non-relativistic limit, which is also first-order in time derivatives. This reduction is needed for the Dirac equation to make the right predictions for the hydrogen atom, among related things. If $\Psi$ obeys the first equation, then e.g. $(-i) (\bar \Psi \gamma^0 \gamma^2)^T$, the charge-conjugate Dirac spinor, obeys the second equation. So it's easy to produce solutions of the second equation from the solution of the first equation (the two charge-conjugate fields are related in the same way as particles and antiparticles) but only one sign of the mass of a Dirac field is right once we fix the conventions!
{ "domain": "physics.stackexchange", "id": 26775, "tags": "quantum-electrodynamics, dirac-equation" }
DNA topology Linking number vs twist?
Question: Background Linking number: the linking number represents the number of times that each curve winds around the other. Twist (Tw) refers to the number of Watson-Crick twists in the chromosome when it is not constrained to lie in a plane. I read these explanations on Wikipedia and I understand the concept of this (supercoils have to form to compensate for the "tension", etc.), however I'm not able to discriminate between these terms while looking at a picture. (source) Based on the formula I can easily calculate the twists in the right image. But I'm searching for a way to see/count the twists. Question What is the difference between the linking number and the twists (not from a formula perspective)? Answer: This article (DNA Topology: Fundamentals by Mirkin SM) probably defines and describes linking number better than I ever could: The fundamental topological parameter of a covalently closed circular DNA is called the linking number (Lk). Assume that one DNA strand is the edge of an imaginary surface and count the number of times that the other DNA strand crosses this surface (Figure 3). The algebraic sum of all intersections (which accounts for a sign of every intersection) is the Lk. Two important features of the Lk are evident from Figure 3. First, Lk is always an integer. Second, Lk cannot be changed by any deformation of the DNA strands, i.e. it is topologically invariant. The only way to change Lk is to introduce a break in one or both DNA strands, rotate the two DNA strands relative to each other and seal the break... Another characteristic of a circular DNA is called twist, or Tw. Tw is the total number of helical turns in circular DNA under given conditions. Since DNA is a right handed helix with 10.5 base pairs (bp) per turn, Tw is a large positive number for any natural DNA. Take a planar, circular DNA and try to locally separate the two DNA strands, i.e. to decrease the Tw. 
Since Lk cannot change, a decrease in Tw will be compensated by several positive writhes of the double helix (Figure 4). Writhing (Wr) is the third important characteristic of circular DNA, describing the spatial pass of the double helix axis, i.e. the shape of the DNA molecule as a whole. Wr can be of any sign, and usually its absolute value is much smaller than that of Tw. The above consideration can be formalized by the following equation: Lk = Tw + Wr. Note, that while Lk is an integer, neither Tw nor Wr should be such. Also, neither Tw nor Wr are topological invariants and their values easily change... Linking number is simply the number of times one strand of DNA passes over the other. Compare this to: Twist - the number of helical turns Writhe - the number of superhelical turns You can, perhaps, think of twist as the passing over of strands within the helix and writhe as the passing over of strands when the entire helix crosses itself. Both twist and writhe involve strands of DNA passing over the other and so both contribute to the linking number. In the absence of writhe, linking number and twist are equal and both describe the same thing: the number of helical turns. However, in the presence of writhe, twist alone is insufficient to describe the topology of DNA because it doesn't account for the passing of the entire helix over itself. Since twist is the number of helical turns, actually counting it in your image is not necessarily difficult, just time consuming. The image already has the twists numbered, and I've put a red dot at every turn of the double helix. I've also circled, in blue, every point where the double helix passes over itself (writhe): In that image, it may be difficult to visualize (and even understand!) that the strands only cross over each other 36 times (Lk), even though there are 42 helical turns (Tw). 
While twist and writhe are different modes of strand crossing, it is important to realize that they are interchangeable without breaking the strands. Let's consider a simpler example, where Lk = -1 (Wikipedia: DNA Supercoil): In the lower image, the strands do not form a helical structure (Tw = 0) but they do both pass over each other together (Wr = -1). Can you imagine that if you were to "unfold" the superhelix (Wr = 0), the two individual strands would still be linked together in a helical structure (Tw = -1)? Neither could I, so I made a video with my shoelaces: https://www.youtube.com/watch?v=rI3LWIvptf0 I hope that answers your question. Let me know if I misunderstood what you were asking or if anything needs clarification. You may also find my shoelace video on the topology of helicase-mediated DNA unwinding informative.
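The bookkeeping above can be checked against the identity Lk = Tw + Wr, using the counts from the figure (the writhe value here is inferred from the other two, not separately counted):

```python
# Lk = Tw + Wr for the figure discussed above: 42 helical turns (Tw)
# but only 36 strand crossings (Lk) implies 6 negative supercoils of writhe.
Lk = 36   # linking number (strand crossings)
Tw = 42   # twist (helical turns)
Wr = Lk - Tw
print("Wr =", Wr)
assert Lk == Tw + Wr
```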
{ "domain": "biology.stackexchange", "id": 6700, "tags": "dna, 3d-structure, supercoiling, topology" }
JSON REST client proxy generator
Question: Trying to create something really lightweight. Sources are on GitHub. To create a proxy we need to define an interface first, e.g.: // Fake Online REST API for Testing and Prototyping [Site("https://jsonplaceholder.typicode.com")] public interface ITypicode { [Get("posts")] Task<BlogPost[]> GetAsync(); [Get("posts/{0}")] Task<BlogPost> GetAsync(int id); [Post("posts")] Task<BlogPost> PostAsync([Body] BlogPost data); [Put("posts/{0}")] Task<BlogPost> PutAsync(int id, [Body] BlogPost data); [Delete("posts/{0}")] Task<BlogPost> DeleteAsync(int id); } public class BlogPost { public int UserId { get; set; } public int Id { get; set; } public string Title { get; set; } public string Body { get; set; } } At this moment we actually know enough to generate a proxy: ITypicode typicode = RestClient.Create<ITypicode>(); BlogPost blogPost = await typicode.PutAsync(1, new BlogPost { Body = "Wow!" }); Console.WriteLine(blogPost.Body); We can inject HttpMessageHandler: ITypicode typicode = RestClient.Create<ITypicode>(handler); We can also emit the proxy class for dependency injection: Type typicodeType = RestClient.Emit<ITypicode>(); About error handling – this exception is thrown for unsuccessful HTTP status codes: public class RestException : Exception { public RestException(HttpResponseMessage response) { Response = response; } public HttpResponseMessage Response { get; } public override string ToString() => Response.Content.ReadAsStringAsync().Result; } We could specify an extra type parameter for the SiteAttribute: [Site("https://jsonplaceholder.typicode.com", Error = typeof(TypicodeError))] public interface ITypicode { // … } to have a generic exception thrown instead: public class RestException<TError> : RestException { public RestException(HttpResponseMessage response) : base(response) { } public TError Error => JsonConvert.DeserializeObject<TError>(ToString()); } So the error response body will be kindly deserialized for us. 
It is also possible to implement the API interface on a server-side ASP.NET Web API Controller to ensure compatibility. What do you think about this design? To be continued with implementation details. UPDATE: Adding support for HTTP headers – something like this: [Site("https://jsonplaceholder.typicode.com")] public interface ITypicode { [Get("posts/{0}")] [Header("X-API-KEY: {1}")] // req - in [Header("Content-Type: {2}; charset={3}")] // res - out Task<BlogPost> GetAsync( int id, string apiKey, out string contentType, out string charset); } Does it look good? Anything else that might be useful? Answer: Usually when I see your questions there isn't much for me to say, because you usually flesh everything out really well. :) (Probably why this has been unanswered so long.) That said, I think I do have one comment here: If you have support to edit the HeaderAttribute or the GetAttribute, I would consider replacing the {0}, {1} (etc.) format symbols with a named format symbol. [Get("posts/{id}")] Task<BlogPost> GetAsync(int id); [Site("https://jsonplaceholder.typicode.com")] public interface ITypicode { [Get("posts/{id}")] [Header("X-API-KEY: {apiKey}")] // req - in [Header("Content-Type: {contentType}; charset={charset}")] // res - out Task<BlogPost> GetAsync( int id, string apiKey, out string contentType, out string charset); } It makes everything more meaningful. If it's WebAPI you probably don't have access to the source for these attributes, but you can probably wrap them with a new one to add support for this feature. (Adds a little complexity, but should be really awesome to see happen.)
{ "domain": "codereview.stackexchange", "id": 23845, "tags": "c#, api, rest, proxy" }
Will the kinetic energy and potential energy of a wave on a string be maximum or minimum in its mean position?
Question: According to the answer key, the kinetic as well as potential energy is maximum in the mean position. I am unable to understand the reason behind this. Please explain. Answer: In the mean position, the potential energy will be $0$ and all the energy will be in the form of kinetic energy. The answer key is wrong.
{ "domain": "physics.stackexchange", "id": 62507, "tags": "waves" }
Show by example that a linear combination of entangled states is not necessarily entangled
Question: $\newcommand{\bra}[1]{\langle#1\rvert} % Bra \newcommand{\ket}[1]{\lvert#1\rangle} % Ket \newcommand{\qprod}[2]{ \langle #1 | #2 \rangle} %Inner Product \newcommand{\braopket}[3]{\langle #1 | #2 | #3\rangle} % Matrix Element \newcommand{\expect}[1]{ \langle #1 \rangle} % Expectation value$ I am working through the book Quantum Computing: A Gentle Introduction and I was working on problem 3.2. There are no solutions in the back of the book, so I wanted to double-check this one because I was unsure if I was correct or not. (I probably could do this for all of these questions, but I don't want to spam the board) The problem is: Show by example that a linear combination of entangled states is not necessarily entangled. I read this and thought that the only way a linear combination of entangled states would be not entangled is if they could be measured from a different basis. My thought was to find a linear combination of Bell states where the outcome is of the form $\ket{v} = a\ket{00} + 0\ket{11}$ which would be the standard basis state $\ket{00}$. That equation looks like this: $\ket{\phi^+} + \ket{\phi^-} = \frac{2}{\sqrt{2}}\ket{00} + 0\ket{11}$ Is my thought process correct? If it's not, can you explain why it's wrong and what a correct answer to this question would look like? Answer: Yes, that's correct, and your example is exactly what I had in mind after reading the question title. More generally, the four Bell states form a basis in the space of all 2-qubit states, so you can express any state using a linear combination of them, including all unentangled states.
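The example can also be checked numerically. A pure two-qubit state $a\ket{00} + b\ket{01} + c\ket{10} + d\ket{11}$ is a product state exactly when $ad - bc = 0$, so (a sketch in plain Python, with amplitudes ordered $\ket{00}, \ket{01}, \ket{10}, \ket{11}$):

```python
import math

s = 1 / math.sqrt(2)
phi_plus = [s, 0, 0, s]      # |phi+>
phi_minus = [s, 0, 0, -s]    # |phi->

def separability_test(state):
    a, b, c, d = state
    return a * d - b * c     # zero iff the pure state is a product state

total = [x + y for x, y in zip(phi_plus, phi_minus)]  # = sqrt(2)|00>

print(separability_test(phi_plus))    # nonzero: entangled
print(separability_test(phi_minus))   # nonzero: entangled
print(separability_test(total))       # zero: unentangled
```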
{ "domain": "quantumcomputing.stackexchange", "id": 3365, "tags": "entanglement, textbook-and-exercises" }
Help with xacro default values
Question: Hello everybody, I have seen an interesting tool that I wanted to use : default values for parameters in xacro http://wiki.ros.org/xacro#Default_parameters But when I am trying to use it, the model can't be loaded. There is some test that I made : <xacro:macro name="lidar" params="lidarName x y z roll pitch yaw min_angle:=-2.36 max_angle:=2.36"> invalid parameter min_angle <xacro:macro name="lidar" params="lidarName x y z roll pitch yaw min_angle:='-2.36' max_angle:='2.36'"> invalid parameter min_angle <xacro:macro name="lidar" params="lidarName:=lms x:=0.0 y:=0.0 z:=0.0 roll:=0.0 pitch:=0.0 yaw:=0.0 min_angle:=-2.36 max_angle:=2.36"> Invalid parameter "roll" This model works perfectly when I don't try to use default values. I am using ROS indigo 1.11.20. There is my questions : Do i need to set a default value for each parameter ? Is there an obvious mistake that I am missing in the declaration of my macro ? Am I running a ROS version that supports this xacro default values ? Thanks in advance Originally posted by F.Brosseau on ROS Answers with karma: 379 on 2018-02-08 Post score: 0 Answer: I am using ROS indigo 1.11.20. [..] Am I running a ROS version that supports this xacro default values ? you are, but you're probably not 'activating' them. In order to use Jade+ xacro features under Indigo, you need to invoke the xacro script like so: xacro --inorder /path/to/your/top-level/file.xacro Note: I haven't checked your syntax, so it could be that even with this it doesn't work, but enabling Jade+ xacro would be the first step. Edit: hm, according to wiki/xacro - Default parameters, this should already be supported in Indigo. Not sure what is going on then. Originally posted by gvdhoorn with karma: 86574 on 2018-02-08 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 29986, "tags": "ros-indigo, xacro" }
Charge conjugation in arbitrary basis
Question: Consider the matrix $C = \gamma^{0}\gamma^{2}$. It is easy to prove the relations $$C^{2}=1$$ $$C\gamma^{\mu}C = -(\gamma^{\mu})^{T}$$ in the chiral basis of the gamma matrices. Do the two identities hold in any arbitrary basis of the gamma matrices? How is $C$ related to the charge conjugation operator? Answer: Let's instead use the Majorana basis for Gamma matrices, which I'll denote by a tilde. The main thing about this basis is all the gamma matrices are imaginary so the Dirac equation $(i\tilde{\gamma}^{\mu}\partial_\mu-m)\tilde{\psi}=0$ is real, and solutions can be broken up into purely real and imaginary parts. So if $\tilde{\psi}$ satisfies the equation so does $\tilde{\psi}_c\equiv\tilde{\psi}^{*}$. This is what charge conjugation looks like in this basis. In a different basis of gamma matrices, say the chiral basis, we need to do a unitary transformation $\psi=U\tilde{\psi}$. Then $$\psi_c=U\tilde{\psi}^*=U(U^\dagger \psi)^*=UU^T \psi^*\equiv\gamma^0C\psi^*,$$ where in the last line we defined the matrix $C$ $$\gamma^0C\equiv UU^T.$$ So above is the formula for charge conjugation in an arbitrary basis, where $C$ is defined in terms of the unitary transformation from the Majorana basis. The reason we include the factor of $\gamma^0$ is so $C$ satisfies the second identity you mentioned. This follows from $\gamma^0\gamma^\mu\gamma^0=\gamma^{\mu\dagger}$ which is preserved by unitary transformations. From this we can prove the generalization of the identities you listed $$C\gamma_\mu C^{-1}=-\gamma_\mu^T$$ $$-CC^*=1$$ What is not necessarily general is $C=C^{-1}$. I presented the same argument here as in the paper arXiv:1006.1718 so you might want to look at that.
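As a numerical sanity check of the identities quoted in the question, here is a plain-Python sketch; the particular chiral-basis convention used (identity off-diagonal blocks for $\gamma^0$, $\pm\sigma^i$ off-diagonal blocks for $\gamma^i$) is an assumption, one of several in use.

```python
# Plain-Python check of C^2 = 1 and C gamma^mu C = -(gamma^mu)^T
# in one common chiral-basis convention (an assumption, not from the answer).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def neg(A):
    return [[-x for x in row] for row in A]

def close(A, B, tol=1e-12):
    n = len(A)
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(n) for j in range(n))

# 2x2 building blocks
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def block(tl, tr, bl, br):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return [tl[0] + tr[0], tl[1] + tr[1], bl[0] + br[0], bl[1] + br[1]]

# chiral basis: gamma^0 has identity off-diagonal blocks,
# gamma^i has +/- Pauli off-diagonal blocks
gammas = [block(Z2, I2, I2, Z2)] + \
         [block(Z2, s, neg(s), Z2) for s in (sx, sy, sz)]

C = matmul(gammas[0], gammas[2])          # C = gamma^0 gamma^2
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

assert close(matmul(C, C), I4)            # C^2 = 1
for g in gammas:
    assert close(matmul(matmul(C, g), C), neg(transpose(g)))  # C g C = -g^T
print("both identities hold in this chiral basis")
```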
{ "domain": "physics.stackexchange", "id": 38001, "tags": "special-relativity, representation-theory, dirac-matrices, clifford-algebra, charge-conjugation" }
What are the Basic Properties of a Photon?
Question: I want to grasp the idea of a photon. While researching, I have come upon many different ways of describing a photon, but have found "quantum of the electromagnetic field" to be most satisfying. However, I still have a few questions about this description. I. What does 'quantum' mean in this context? Quantum of what physical quantity? II. What features do photons exhibit as a wave? (wavelength, speed, et cetera) III. What features do photons exhibit as a particle? (mass, spin, et cetera) I would especially thank if anyone could explain the momentum of a photon as a wave and as a particle. For anybody wondering, I am a high school student interested, but not fluent in physics. Answer: I. What does 'quantum' mean in this context? Quantum of what physical quantity? Photons of frequency $\nu$ have energy $E = h\nu$. This means: if a photon of this frequency is absorbed or emitted, exactly this amount (quantum) of energy is transferred. Photons of frequency $\nu$ have momentum $p = \frac{h\nu}{c}$. This means: if a photon of this frequency is absorbed or emitted, exactly this amount (quantum) of momentum is transferred. Photons move in the direction of their momentum vector. Circularly polarized photons have angular momentum $\ell_z=\pm \hbar$. This means, if they are absorbed or emitted, exactly this amount (quantum) of angular momentum is transferred. II. What features do photons exhibit as a wave? (wavelength, speed, et cetera) As a wave they have frequency $\nu$ and wavelength $\lambda$, related by $$ \nu\lambda = c,$$ where $c$ is the speed of light. The speed of light is the same in all inertial frames of reference. As a wave they also have an amplitude. As waves they can interfere (e.g. double slit experiment), be reflected from mirrors, be transmitted through matter. III. What features do photons exhibit as a particle? (mass, spin, et cetera) The particle properties of photons are energy, momentum, and angular momentum. They have zero rest mass. 
Their spin (angular momentum) depends on their polarization, but it is integer, so they are bosons. As particles, they can, for example, hit objects and exert pressure. They can be emitted and absorbed transferring discrete portions of energy, momentum, and angular momentum.
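The wave and particle quantities above are tied together by $\nu\lambda = c$, $E = h\nu$ and $p = h\nu/c = h/\lambda$. A quick sketch, with 532 nm green light as an illustrative choice:

```python
# Sketch tying the wave and particle pictures together for one photon.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 532e-9        # m
nu = c / wavelength        # frequency from  nu * lambda = c
E = h * nu                 # photon energy  E = h * nu
p = h / wavelength         # photon momentum  p = h * nu / c = h / lambda

print(f"nu = {nu:.3e} Hz, E = {E:.3e} J, p = {p:.3e} kg m/s")
```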
{ "domain": "physics.stackexchange", "id": 56525, "tags": "optics, particle-physics, waves, photons" }
Using functions to separate code for Fahrenheit to celsius conversion
Question: I'm relatively new to programming, and I've been wondering about a problem such as this. How can I think about decomposing the code into its constituent functions? I went the simplest route I could, just putting the formula in its own function. I believe this makes the code more readable, but there's also a part of me that wants to put the while loop into a function as well. Is there a good rule of thumb for separating my code into functions? #include <stdio.h> int fahr_to_celsius(int f); /* print Fahrenheit-Celsius table for fahr = 0, 20, ..., 300; use a function for conversion */ int main(void) { int fahr, celsius; int lower, upper, step; lower = 0; /* lower limit of the temperature table */ upper = 300; /* upper limit */ step = 20; /* step size */ fahr = lower; while (fahr <= upper) { celsius = fahr_to_celsius(fahr); printf("%d\t%d\n", fahr, celsius); fahr = fahr + step; } } int fahr_to_celsius(int fahr) { int celsius = 5 * (fahr-32) / 9; return celsius; } Answer: In a more complex program I would probably extract the loop, but I think this is actually pretty good as is. Your main routine reads very well and describes precisely what this program does. You separated the concerns of performing the conversion and displaying the results and your conversion method performs a single task. I agree that extracting it was a wise choice. You'll often hear the phrase A method should do one thing, and do it well. You nailed the first part, but I'm not so sure about the second. I guess it depends on how accurate you need it to be. int celsius = 5 * (fahr-32) / 9; Mathematically this is very likely a non-integer value, and integer division in C truncates toward zero, so you're always "rounding" the result toward zero. I doubt that's what you intended. Use the round() function instead for a more accurate result. Or, even better, return a double. One other thing, don't abbreviate variable names. 
I understand why you used fahr instead of fahrenheit, but it's a bad habit to get into.
{ "domain": "codereview.stackexchange", "id": 15649, "tags": "c, converting" }
Scaling with the Ising Model
Question: I am stuck with one formula in the CFT book by Di Francesco et al., Chapter 3, Equation 3.46, third step. For those who don't have the book, he integrates out degrees of freedom from the Ising Model by summing over some blocks and defining new variables $$\Sigma_I := \frac{1}{R}\sum\limits_{i \in I} \sigma_i$$ and then rescales the free energy $$ f(t',h') = r^d f(t,h)$$ implying $$ f(t,h) = r^{-d}f(r^{\frac{1}{\nu}}t,Rh) $$ because $h' = Rh$ (equality of the external field part of the two Hamiltonians). My trouble comes when he tries to find the dependence of R on r from the two-point function, by writing down $$ \Gamma'(n) = \langle\Sigma_I\Sigma_J\rangle-\langle\Sigma_I\rangle\langle\Sigma_J\rangle$$ $$ =R^{-2}\sum\limits_{i\in I}\sum\limits_{j\in J}\left(\langle\sigma_i \sigma_j\rangle -\langle\sigma_i\rangle\langle\sigma_j\rangle\right)$$ $$=R^{-2}r^{2d}\Gamma(rn).$$ I don't know where that last step is coming from. Is it a scaling of the two-point function? Answer: Let me repeat/reproduce some of the most important definitions. $d~=~$ dimension of lattice. $n~=~$ number of blocks between block $I$ and block $J$. $r~=~$ length of a block measured in units of lattice spacings. $r^d~=~$ number of lattice points in a block. $nr~=~$ distance between block $I$ and block $J$ measured in units of lattice spacings. $R~=~$ normalization constant to make block spin $\Sigma_I$ have values $\pm 1$. The spin correlation function $$\Gamma(n) := \langle \sigma_i \sigma_j\rangle -\langle\sigma_i\rangle\langle\sigma_j\rangle$$ depends on the distance $n=||i-j||$ (measured in units of lattice spacings) between the $i$'th and the $j$'th lattice site. So to answer the question (v1) about the last step: We argue that the spin correlation function $\Gamma(||i-j||)$ does not depend (much) on which representative site $i$ we use inside the block $I$. The sum $\sum\limits_{i\in I}$ over lattice sites $i$ in a block $I$ therefore yields an overall volume factor $r^d$. 
Similarly for the other block $J$. The argument of the spin correlation function $\Gamma$ can then be taken to be $nr$.
{ "domain": "physics.stackexchange", "id": 2372, "tags": "statistical-mechanics, quantum-spin, renormalization, ising-model" }
Accurately Measuring Indoor/Outdoor Air Temperature & Relative Humidity
Question: I’m conducting an independent study as a concerned citizen on the indoor temperature at an animal shelter, to support the need for improved air conditioning by documenting how hot it can get during summer, exceeding veterinary standards. I’ve bought an electronic hygrometer and have also obtained the building floor plans. My current data collection process would be the following: conduct measurements during a high temperature, high humidity, and low cloud cover day. This best mimics the extreme conditions that the animals would be exposed to. record the date/time and measure or record current outdoor conditions: temp (shade and direct sun), relative humidity, cloud cover, wind speed. record the current facility’s AC temperature setting. take relative humidity (RH) and temperature measurements throughout the facility at 5ft spacing. (Please advise if my distance is too large or too small) gently wave the hygrometer from side to side for approximately 15 seconds to adjust sensors to the current position’s environment. take all indoor measurements from a 1ft height (approximate mid-height of a dog) using a pole. take measurements from one end of the facility and walk forwards/sideways in a grid pattern to prevent my body heat and humidity from influencing the reading, and also to limit air circulation from my movements. wear latex gloves so hand sweat does not increase the RH value. take measurements within each animal enclosure as there will be additional humidity from water bowls, urine, etc. This would be done after the main aisles are measured to reduce air movement from opening and closing the enclosures from influencing the aisle measurements. mark each measurement point on the floor plan and log the data in a spreadsheet. 
My data analysis will consist of the following: floor plan heat map overlays for the following: air temperature relative humidity heat index (HI) calculate the following values: animal enclosures average/min/max temperature average/min/max RH average/min/max HI shelter facility average/min/max temperature average/min/max RH average/min/max HI Am I missing anything that could potentially invalidate my results and result in inaccurate measurements, or am I lacking any specific data which I should be collecting or calculating? I want this to be as accurate as possible and provide credible evidence that current conditions are not within veterinary standards. I had asked if this type of question was on-topic here, but overall if it is not please do not close it. Just leave a comment and I'll delete it so as not to ding me negatively in the system. Thank you. Answer: Looking at the source you provided in the comment, there is some other information you may be able to accumulate to help your study. Control or Baseline. You need the same actual set of data from a facility in the area that is meeting the above-mentioned standards. This needs to be as close to your target facility in size, structure and regional conditions as you can find so you can demonstrate that under the same Temperature and Humidity proper conditions can be maintained. More Data. You haven't mentioned Time in your question. Most building temperature studies show conditions throughout the day. The duration that the animals are exposed to extreme conditions may be relevant. There are data-logging thermometer-type devices available, or you may need to perform your tests once an hour during the day to collect enough relevant data to make your case. More Data. You talk of taking measurements on an extreme day. Gather data on a variety of days, and you may find the care standards are exceeded more (or less) often than you expect. This way you may find the lower limit of conditions which cause problems. 
Which brings us to... More Data. Once you have a measured example of the problem temperature and humidity reading, for a given day, you can collect data from local weather sources and you can present a graphic showing the number of days, or perhaps hours per year, the animals are exposed to these conditions. You mention Heat Index. According to NOAA, Heat Index is what the temperature feels like to the human body when relative humidity is combined with the air temperature... You might have to prove the relevance of this data to animals, which do not react the same way to humidity. If you can find an external source supporting your use of that index, then by all means continue to use it. You may even be able to find a study showing that the animals feel the effects of humidity more than humans do. Look for more data. The link above mentions a possible place to begin. Personally I believe the main argument you will face would be the amount of time the proper conditions are exceeded. Like the people who leave the dog in the car, 'but it was just for a minute...'. Good luck, no matter what results you find.
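The NOAA heat index mentioned in the answer can be computed directly for the spreadsheet; below is a minimal sketch using NOAA's Rothfusz regression (Fahrenheit, the simple form without NOAA's low-humidity and high-humidity adjustment terms, so it is only a rough guide below about 80 F):

```python
def heat_index_f(temp_f, rh):
    """NOAA Rothfusz regression: approximate heat index in degrees F
    from air temperature (F) and relative humidity (percent).
    Reasonable roughly for temp_f >= 80 and rh >= 40."""
    T, R = temp_f, rh
    return (-42.379 + 2.04901523 * T + 10.14333127 * R
            - 0.22475541 * T * R - 6.83783e-3 * T * T
            - 5.481717e-2 * R * R + 1.22874e-3 * T * T * R
            + 8.5282e-4 * T * R * R - 1.99e-6 * T * T * R * R)

# Example: a hot, humid reading feels considerably hotter than the air temperature
hi = heat_index_f(96, 65)
assert 120.5 < hi < 121.5
```

A column computed this way per measurement point gives the HI overlay described in the analysis plan.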
{ "domain": "earthscience.stackexchange", "id": 830, "tags": "temperature, measurements, humidity, data-analysis" }
Calling a service in C++ without writing it on the terminal
Question: I want to clear my turtlesim path when I press a specific button on my joystick, like when I type it in the terminal: rosservice call clear . Afterwards, I want to do the same but changing the background, like rosparam set background-r 150

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <sensor_msgs/Joy.h>
#include <ros/service.h>

class TeleopTurtle
{
public:
  TeleopTurtle();

private:
  void joyCallback(const sensor_msgs::Joy::ConstPtr& joy);

  ros::NodeHandle nh_;
  int linear_, angular_;
  double l_scale_, a_scale_;
  ros::Publisher vel_pub_;
  ros::Subscriber joy_sub_;
};

TeleopTurtle::TeleopTurtle():
  linear_(1),
  angular_(2)
{
  nh_.param("axis_linear", linear_, linear_);
  nh_.param("axis_angular", angular_, angular_);
  nh_.param("scale_angular", a_scale_, a_scale_);
  nh_.param("scale_linear", l_scale_, l_scale_);

  vel_pub_ = nh_.advertise<geometry_msgs::Twist>("turtle1/cmd_vel", 1);
  joy_sub_ = nh_.subscribe<sensor_msgs::Joy>("joy", 10, &TeleopTurtle::joyCallback, this);
}

void TeleopTurtle::joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
  geometry_msgs::Twist twist;
  twist.linear.x = l_scale_*joy->axes[linear_];
  twist.angular.z = a_scale_*joy->axes[angular_];
  vel_pub_.publish(twist);
  if (joy->buttons[2] == 1)
  {
    //ros::service::call(clear); // Call service: rosservice call clear
  }
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "teleop_turtle");
  TeleopTurtle teleop_turtle;
  ros::spin();
}

Originally posted by astur on ROS Answers with karma: 31 on 2014-10-07 Post score: 0 Original comments Comment by dornhege on 2014-10-07: All this is possible, but what is your question? Comment by astur on 2014-10-07: Calling a ROS service (rosservice call clear) in C++ without typing it in the terminal when I press a specific button Answer: Tutorial, please research in the future.
Originally posted by paulbovbel with karma: 4518 on 2014-10-07 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by astur on 2014-10-07: Sorry, I left out: without typing it in the terminal. I checked all tutorials and some answers like http://answers.ros.org/question/12661/using-rosservicecall/ Comment by paulbovbel on 2014-10-07: You should work through all the beginner tutorials listed here; it will save you a lot of trouble down the line.
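For reference, the call the question asks about can be made directly from roscpp. A sketch (untested here; it assumes std_srvs is declared as a package dependency, and turtlesim's clear service is of type std_srvs/Empty; the exact parameter name/namespace for the background color is an assumption and may need to be checked with rosparam list):

// Sketch only: a drop-in body for the button branch of joyCallback.
#include <std_srvs/Empty.h>

void TeleopTurtle::joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
  geometry_msgs::Twist twist;
  twist.linear.x = l_scale_ * joy->axes[linear_];
  twist.angular.z = a_scale_ * joy->axes[angular_];
  vel_pub_.publish(twist);

  if (joy->buttons[2] == 1)
  {
    // Equivalent of `rosservice call clear` on the command line.
    std_srvs::Empty srv;
    if (!ros::service::call("clear", srv))
      ROS_WARN("Failed to call service clear");

    // Equivalent of `rosparam set background_r 150` (note the underscore;
    // the parameter typically lives under the turtlesim node's namespace).
    nh_.setParam("/turtlesim/background_r", 150);
  }
}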
{ "domain": "robotics.stackexchange", "id": 19658, "tags": "ros, call-service" }
Why can't our cognitive sense or five senses sense or fathom hyper-dimensional spaces?
Question: Why can't our cognitive sense or five senses sense hyper-dimensional spaces? Is it because of the way our human body is built? Answer: Because we have not evolved such abilities, because they would not be of (evolutionary fitness increasing) use to creatures like us. This is similar to how we lack senses for radioactivity or gravitational waves. Our space-time is to a good approximation 3+1 dimensional (three space dimensions and one time) even if some superstring or brane theory were to hold true and there were extra dimensions. But since these are "rolled up" to a tiny size they do not provide extra information or degrees of freedom we can use. Organisms in a universe with more large dimensions would presumably gain survival advantages from perceiving and acting on them... assuming there was something there to perceive and act on. Earth-life isn't strongly affected by gravitational waves (no nearby strong sources) and cannot affect them (we are too low-density and slow), so there is no pressure to evolve these abilities. Similarly there has to be some advantage for the high-dimensional creatures to exploit the extra dimensions for their evolution to give them such abilities (if they are physically possible). It is a bit less clear why we are bad at thinking about high-dimensional spaces. That our brains are wired to be good enough at interpreting sensory information from our world is obvious, but we humans do have cognitive abilities that allow us to visualise and think about non-physical concepts such as causal links, mathematics or abstract spaces. A likely answer is that the general intelligence we have allows us to do things that are not evolutionarily pre-specified, and this includes abilities that do not directly boost survival: much of our thinking and acting is about concepts distant in time, space or possibility (and done well will increase individual evolutionary fitness). 
These general abilities allow us some abstraction ability but are not tuned to make it perfect. Hence we are better at some abstract mental tasks than others, and high-dimensional thinking is one of the tasks we are fairly limited at. Senses and brains reflect the environment they evolve in, but do not have to make use of all available information or describe the environment perfectly - they only need to be adequate.
{ "domain": "physics.stackexchange", "id": 51183, "tags": "spacetime-dimensions, perception, visualization" }
Report a status based on a list of results
Question: In one of my services, I have the function as presented below:

private def convertCreateResultsToCreateAllReasult(results: List[CreateResult.Result]): CreateAllResult.Result = {
  val firstConflictResult = results.collectFirst { case r: CreateResult.Conflict => r } match {
    case Some(cr: CreateResult.Conflict) => Some(CreateAllResult.Conflict(cr.message))
    case None => None
  }
  val firstNotFoundResult = results.collectFirst { case r: CreateResult.NotFound => r } match {
    case Some(nfr: CreateResult.NotFound) => Some(CreateAllResult.NotFound(nfr.message))
    case None => None
  }
  val firstFailedResult = results.collectFirst { case r: CreateResult.Failed => r } match {
    case Some(fr: CreateResult.Failed) => Some(CreateAllResult.Failed(fr.message))
    case None => None
  }
  val failedResults = List(firstConflictResult, firstNotFoundResult, firstFailedResult).flatten
  if (failedResults.isEmpty) {
    val createdDefinitions = results.collect { case r: CreateResult.Ok => r.definition }
    CreateAllResult.Ok(createdDefinitions)
  } else {
    failedResults.head
  }
}

The argument of this function is a list of result objects, where the result object looks like this:

object CreateResult {
  sealed trait Result
  case class Ok(definition: ChecklistRuleDefinition) extends Result
  case class Conflict(message: String) extends Result
  case class NotFound(message: String) extends Result
  case class Failed(message: String) extends Result
}

As you can see, the result can be Ok (which is considered successful) or anything else (which is considered failed for various reasons). The function should: return the first failed result converted into a different type, in case there is one such result in the list; or return all successful results converted into CreateAllResult.Ok (if all of them are successful). 
Here is the CreateAllResult object:

object CreateAllResult {
  sealed trait Result
  case class Ok(definitions: List[ChecklistRuleDefinition]) extends Result
  case class Conflict(message: String) extends Result
  case class NotFound(message: String) extends Result
  case class Failed(message: String) extends Result
}

The thing is, whatever I posted here seems to be working as expected. But I hate the code. I know that Scala has all the bells and whistles to make the code nicer and more readable. I just didn't find the way to do so. How can this code be written in a nicer, Scala-like way? Answer: Something like this maybe?

@tailrec
def trf(
    results: List[CreateResult.Result],
    conf: Option[CreateAllResult.Conflict] = None,
    nf: Option[CreateAllResult.NotFound] = None,
    f: Option[CreateAllResult.Failed] = None,
    defs: List[ChecklistRuleDefinition] = Nil
): List[CreateAllResult.Result] = results match {
  case (r: CreateResult.Conflict) :: tail if conf.isEmpty =>
    trf(tail, Some(CreateAllResult.Conflict(r.message)), nf, f, Nil)
  case (r: CreateResult.NotFound) :: tail if nf.isEmpty =>
    trf(tail, conf, Some(CreateAllResult.NotFound(r.message)), f, Nil)
  case (r: CreateResult.Failed) :: tail if f.isEmpty =>
    trf(tail, conf, nf, Some(CreateAllResult.Failed(r.message)), Nil)
  case CreateResult.Ok(defn) :: tail if (conf ++ nf ++ f).isEmpty =>
    trf(tail, None, None, None, defn :: defs)
  case _ :: tail =>
    trf(tail, conf, nf, f, defs)
  case Nil if defs.isEmpty =>
    (conf ++ nf ++ f).toList
  case Nil =>
    List(CreateAllResult.Ok(defs.reverse))
}

Oh, wait, you are only returning the first failure, are you? I didn't realize that at first; I thought you wanted the first of each type. Well, that makes it kinda simpler ... 
How about this:

object Failure {
  def unapply(res: CreateResult.Result): Option[CreateAllResult.Result] = res match {
    case r: CreateResult.Conflict => Some(CreateAllResult.Conflict(r.message))
    case r: CreateResult.NotFound => Some(CreateAllResult.NotFound(r.message))
    case r: CreateResult.Failed   => Some(CreateAllResult.Failed(r.message))
    case _                        => None
  }
}

def trf(results: List[CreateResult.Result]): CreateAllResult.Result =
  results
    .collectFirst { case Failure(r) => r }
    .getOrElse(CreateAllResult.Ok(
      results.collect { case r: CreateResult.Ok => r.definition }
    ))
{ "domain": "codereview.stackexchange", "id": 27732, "tags": "functional-programming, error-handling, scala" }
Double Newman Projections
Question: How do I draw a double Newman projection for cyclohexane? Which bond(s) do I sight down? From what I understand, if we have 1-methylcyclohexane, we sight C1-C2 and C5-C4, as shown in my attempt below: Would this be correct? I wish to achieve level 4! Answer: Note that -Cl and -CH(CH3)2 are on equatorial positions (the slant lines suggest that) and the -H is on an axial position (the perfectly vertical line suggests that). Note that it is not just a coincidence; rather, it is intentional. I think that should suffice.
{ "domain": "chemistry.stackexchange", "id": 1330, "tags": "organic-chemistry, cyclohexane" }
Unexpected result using different approach while solving circular motion problem
Question: A dot started to move around a circle with constant angular acceleration $\alpha = 0.25\frac{\text{rad}}{s^2}$. At what time will the tangential and perpendicular accelerations of the dot become equal? Very easy problem, but one thing is not clear to me. This is the first way I solved it. We know that $\omega_0=0$ because the dot has no velocity at point $t=0$. Let $t = t_1$ be the point in time at which $a_\tau=a_n$ (equal tangential and perpendicular acceleration). From the well-known equation we know that $$a_n=\frac{v^2}R$$ where $R$ is the radius of the circle. Also $$a_\tau=\alpha R$$ and $$v=\omega R$$ Combining these equations, we get $$\begin{align}\alpha&=\omega^2\\&=\left(\omega_0+\alpha t_1\right)^2\\&=\alpha^2t_1^2\end{align}$$ which gives us $$\begin{align}t_1&=\sqrt{\frac{\alpha}{\alpha^2}}\\&=\sqrt{\frac1\alpha}\\&=\sqrt{\frac1{0.25}}s\\&=2s\end{align}$$ This is also the correct solution provided in my book. But, after solving this I tried to solve the same problem using a different approach. This is my second way. By definition, we know that $$a_\tau=\frac{dv}{dt}$$ From the equation $a_\tau=a_n$ we get $$\frac{dv}{dt}=\frac{v^2}R\\\frac{dv}{v^2}=\frac{dt}R\\\int_{v_0}^v\frac{dv}{v^2}=\int_{t_0}^t\frac{dt}R\\\frac1{v_0}-\frac1v=\frac tR$$ Surprisingly, for $v_0\to0$ this equation gives $t\to\infty$. Possibly, I missed something obvious, but I still cannot figure out where my mistake in the second solution is. What did I actually do wrong? Answer: You can't just randomly set things equal to each other. The accelerations $a_\tau(t)$ and $a_n(t)$ are independent quantities, and the question is asking for the time $t_1$ where they happen to be equal. Instead, you set $a_\tau(t) = a_n(t)$ for all times, which doesn't make any sense.
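The first solution is easy to check numerically; a small sketch comparing the two accelerations as functions of time ($R$ cancels out of the comparison, so any value works):

```python
alpha = 0.25  # rad/s^2
R = 1.0       # radius; drops out of the comparison

def a_tau(t):
    # tangential acceleration is constant in time
    return alpha * R

def a_n(t):
    # centripetal acceleration grows as omega^2 * R, with omega_0 = 0
    omega = alpha * t
    return omega**2 * R

t1 = (1 / alpha) ** 0.5          # analytic answer: sqrt(1/alpha) = 2 s
assert abs(t1 - 2.0) < 1e-12
assert abs(a_tau(t1) - a_n(t1)) < 1e-12  # the two accelerations match at t1
```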
{ "domain": "physics.stackexchange", "id": 34748, "tags": "acceleration, velocity, integration, angular-velocity, differential-equations" }
Formatting a string in Python with three possible replacement fields
Question: I'm looking for a much simpler way of formatting a string in Python which will have a different number of replacement fields in different cases. Here's what I'm doing now, which is working fine:

if '{yes}' in reply and '{value}' in reply:
    reply = reply.format(yes=get_yes(), value=value)
elif '{no}' in reply and '{value}' in reply:
    reply = reply.format(no=get_no(), value=value)
elif '{yes}' in reply:
    reply = reply.format(yes=get_yes())
elif '{no}' in reply:
    reply = reply.format(no=get_no())
elif '{value}' in reply:
    reply = reply.format(value=value)

The only problem is that this code has a Cognitive Complexity of 11 on Code Climate, which is higher than the allowed value of 5, and so I'm trying to find a way of reducing it. Additional information about variables and methods: reply is a string which will have one of the following combinations of replacement fields: {yes} and {value}; {no} and {value}; {yes} only; {no} only; {value} only; no replacement field. get_yes() randomly returns a string that has the same meaning as "yes" ("yeah", "yep" etc.). get_no() randomly returns a string that has the same meaning as "no" ("nah", "nope" etc.). value is a numeric value (integer or float). Answer: Build a kwargs dictionary, rather than using an if-else. You don't have to treat yes and no as mutually exclusive. You could use a 'key, function' list to create a dictionary comprehension that builds the dictionary for you. This can lead to:

kwargs = {}
if '{value}' in reply:
    kwargs['value'] = value
if '{yes}' in reply:
    kwargs['yes'] = get_yes()
if '{no}' in reply:
    kwargs['no'] = get_no()
reply = reply.format(**kwargs)

params = [
    ('value', lambda: value),
    ('yes', get_yes),
    ('no', get_no),
]

def format_format(fmt, params):
    return fmt.format(**{key: fn() for key, fn in params if f'{{{key}}}' in fmt})

reply = format_format(reply, params)
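As a quick check of the kwargs approach, here is a self-contained sketch; the get_yes/get_no stubs below stand in for the question's random phrase pickers:

```python
import random

def get_yes():
    return random.choice(["yeah", "yep", "sure"])

def get_no():
    return random.choice(["nah", "nope"])

def fill(reply, value):
    # Build only the keyword arguments the template actually mentions,
    # then expand them with ** so unused fields are simply absent.
    kwargs = {}
    if '{value}' in reply:
        kwargs['value'] = value
    if '{yes}' in reply:
        kwargs['yes'] = get_yes()
    if '{no}' in reply:
        kwargs['no'] = get_no()
    return reply.format(**kwargs)

assert fill("The answer is {value}.", 42) == "The answer is 42."
assert fill("{yes}, it is {value}", 7).endswith("it is 7")
assert fill("no placeholders here", 0) == "no placeholders here"
```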
{ "domain": "codereview.stackexchange", "id": 28446, "tags": "python, strings, python-3.x, formatting" }
Expected value and autocorrelation
Question: I have a wide sense stationary stochastic process ${{X_t;t\in \Re}}$ with mean 0 and autocorrelation function $R_x(\tau)=1-\frac{1}{4}|\tau|$ for $|\tau|=0,1,2,3;$ and zero anywhere else. I'm supposed to calculate the linear minimal mean square estimator (LMMSE) of $Z=\frac{1}{2}(X_t+X_{t-1})$ based on $(X_{t-2},X_{t-3})$, for which I need $E(ZX_{t-2})$ and $E(ZX_{t-3})$. The first one ends up as $E(ZX_{t-2})=\frac{1}{2}(R_X(2)+R_X(1))$, never mind the second one. I fail to understand how this calculation works. How can $E(ZX_{t-2})$ end up as $\frac{1}{2}(R_X(2)+R_X(1))$? On one hand, $t=2$ and $t=1$ could simply be inserted into $Z=\frac{1}{2}(X_t+X_{t-1})$, but how does this relate to the expected value of $ZX_{t-2}$? What's the calculation in between? What's the reasoning? Answer: By definition, the autocorrelation function at lag $\tau$ is $$R_X(\tau) = \mathbb{E}(X_t X_{t-\tau})$$ Now the expected value of $ZX_{t-2}$ is $$\mathbb{E}(ZX_{t-2}) = \mathbb{E} \left(\frac{1}{2} (X_t + X_{t-1}) X_{t-2} \right)$$ $$ = \frac{1}{2} \bigg(\mathbb{E}(X_t X_{t-2}) + \mathbb{E}(X_{t-1} X_{t-2})\bigg)$$ $$ = \frac{1}{2} \bigg(R_X(\tau = 2) + R_X(\tau = 1)\bigg)$$
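With the given $R_x(\tau)$ the two cross-correlations needed for the LMMSE evaluate directly; a quick numeric check:

```python
def R(tau):
    """Autocorrelation R_x(tau) = 1 - |tau|/4 for |tau| <= 3, else 0."""
    tau = abs(tau)
    return 1 - tau / 4 if tau <= 3 else 0.0

# E(Z X_{t-2}) = (E(X_t X_{t-2}) + E(X_{t-1} X_{t-2})) / 2 = (R(2) + R(1)) / 2
E_Z_Xtm2 = 0.5 * (R(2) + R(1))
# Same expansion one lag further: E(Z X_{t-3}) = (R(3) + R(2)) / 2
E_Z_Xtm3 = 0.5 * (R(3) + R(2))

assert E_Z_Xtm2 == 0.625
assert E_Z_Xtm3 == 0.375
```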
{ "domain": "dsp.stackexchange", "id": 11619, "tags": "autocorrelation, self-study" }
Analysis of clustering results
Question: Suppose that I have a multidimensional dataset and performed some partitioning clustering on it. Is there any way to find out what objects in a particular cluster have in common (except the fact that the clustering algorithm decided to put them together)? I've read many times that clustering is not a well-posed problem in general and that one should not overinterpret its results, but still people are trying to cluster multidimensional data and make some practical sense of the results. I just can't find any good source on how the interpretation is done in practice. Any tips and resources will be highly appreciated. Answer: In my experience, with very high dimensional climate data, if I do something as easy as k-means clustering, I would probably first look at the silhouette values of the clusters. As you would probably know, a high silhouette value for the clusters would mean that they are well classified and distinctly different from each other, while a low silhouette value or even a negative one would mean the opposite. In case these values are reasonable, you can then try to reduce the dimensionality of your data set by doing something like PCA (for starters, or even something like t-SNE) and see if your clustering results are still that good. If they are, then you have a sense that those components (PCs in the case of PCA or the retained components in the case of t-SNE) are probably dominating the feature space that is used for clustering. In my very personal opinion, I would try to reduce (dimensionality) as much as I can and go about doing the clustering until I can visualize them in 3D or 2D. With t-SNE this can be done quite successfully for the right kind of data sets. Sometimes, if you are lucky, even comparing L2 distances between cluster centers and samples may yield something fruitful, but that's hardly ever the case for high dimensional data. I guess your question is a little open-ended and a discussion on such would be great. 
But, once again, if I understand it correctly, extracting the part of the feature space that dominates the "distance" metric for clustering is difficult to do algorithmically, to my limited knowledge, but some sense can be extracted if you try these exercises.
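As a concrete starting point, the silhouette value mentioned in the answer can be computed without any library; a sketch on made-up 1-D data (in practice scikit-learn's silhouette_score handles arbitrary dimensions and metrics):

```python
def silhouette(points, labels):
    """Mean silhouette for 1-D points: a = mean distance to own cluster,
    b = smallest mean distance to any other cluster, s = (b-a)/max(a,b).
    Assumes every cluster has at least two members."""
    clusters = set(labels)
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        same = [abs(p - q) for j, (q, m) in enumerate(zip(points, labels))
                if m == l and j != i]
        a = sum(same) / len(same)
        b = min(
            sum(abs(p - q) for q, m in zip(points, labels) if m == c)
            / sum(1 for m in labels if m == c)
            for c in clusters if c != l
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters score close to 1 ...
tight = silhouette([0.0, 1.0, 10.0, 11.0], [0, 0, 1, 1])
# ... while overlapping ones score much lower.
loose = silhouette([0.0, 5.0, 4.0, 9.0], [0, 0, 1, 1])
assert tight > 0.85
assert loose < tight
```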
{ "domain": "datascience.stackexchange", "id": 3432, "tags": "clustering" }
A few questions regarding the difference between policy iteration and value iteration
Question: This question already has some answers, but I am still finding it quite unclear (also: does $\pi(s)$ here mean $q(s,a)$?). The few things I do not understand are: Why the difference between the 2 iterations if we are acting greedily in each of them? As per many sources 'Value Iteration' does not have an explicit policy, but here we can see the policy is to act greedily with respect to the current $v(s)$ What exactly does Policy Improvement mean? Are we acting greedily only at a particular state at a particular iteration OR once we act greedily on a particular state we keep on acting greedily on that state and other states are added iteratively until in all states we act greedily? We can intuitively understand that acting greedily w.r.t $v(s)$ will lead to $v^*(s)$ eventually, but does using Policy Iteration eventually lead to $v^*(s)$? NOTE: I have been thinking of all the algorithms in the context of Gridworld, but if you think there is a better example to illustrate the difference you are welcome. Answer: $\pi(s)$ does not mean $q(s,a)$ here. $\pi(s)$ is a policy that represents a probability distribution over the action space for a specific state. $q(s,a)$ is a state-action pair value function that tells us how much reward we expect to get by taking action $a$ in state $s$ onwards. For the value iteration on the right side with this update formula: $v(s) \leftarrow \max\limits_{a} \sum\limits_{s'}p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ we have an implicit greedy deterministic policy that updates the value of state $s$ based on the greedy action that gives us the biggest expected return. 
When the value iteration converges to its values based on greedy behaviour after $n$ iterations, we can get the explicit optimal policy with: $\pi(s) = \arg \max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ Here we are basically saying that the action that has the highest expected return for state $s$ will have probability 1, and all other actions in the action space will have probability 0. For the policy evaluation on the left side with this update formula: $v(s) \leftarrow \sum\limits_{s'}p(s'\mid s, \pi(s))[r(s, \pi(s), s') + \gamma v(s')]$ we have an explicit policy $\pi$ that is not greedy in the general case in the beginning. That policy is usually randomly initialized, so the actions that it takes will not be greedy, meaning we can start with a policy that takes some pretty bad actions. It also does not need to be deterministic, but I guess in this case it is. Here we are updating the value of state $s$ according to the current policy $\pi$. After the policy evaluation step has run for $n$ iterations, we start with the policy improvement step: $\pi(s) = \arg \max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ Here we are greedily updating our policy based on the values of states that we got through the policy evaluation step. It is guaranteed that our policy will improve, but it is not guaranteed that our policy will be optimal after only one policy improvement step. After the improvement step we do the evaluation step for the new improved policy, and after that we again do the improvement step, and so on, until we converge to the optimal policy.
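The value-iteration update and the final greedy policy extraction described above can be made concrete on a toy problem; a sketch (the two-state deterministic MDP here is invented purely for illustration):

```python
gamma = 0.9
# Toy deterministic MDP: P[s][a] = (next_state, reward).
# Action 1 from state 0 reaches state 1 and pays 1; everything else pays 0.
P = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (1, 0.0), 1: (0, 0.0)},
}

# Value iteration: v(s) <- max_a [ r + gamma * v(s') ], with the max built in.
V = {s: 0.0 for s in P}
for _ in range(500):
    V = {s: max(r + gamma * V[s2] for s2, r in P[s].values()) for s in P}

# Greedy (optimal) policy extracted once, after the values have converged.
pi = {s: max(P[s], key=lambda a: P[s][a][1] + gamma * V[P[s][a][0]]) for s in P}

# Fixed point solves V0 = 1 + 0.9*V1 and V1 = 0.9*V0, i.e. V0 = 1/(1 - 0.81).
assert abs(V[0] - 1 / (1 - 0.81)) < 1e-9
assert pi[0] == 1
```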
{ "domain": "ai.stackexchange", "id": 1004, "tags": "reinforcement-learning, policies, value-iteration" }
Do only outer electron shells take part in forming chemical bonds?
Question: Do only outer electron shells take part in forming chemical bonds? Or could an inner shell create a bond under some conditions? Answer: Great question. From this article, it seems that extreme conditions of pressure can lead to new types of molecules and bonding, and change how electrons participate in bonding. Under very high pressures, it appears, electrons in the atom's inner shells can also take part in chemical bonds. “It breaks our doctrine that the inner-shell electrons never react, never enter the chemistry domain,” says Mao-sheng Miao, a chemist at the University of California, Santa Barbara, and the Beijing Computational Science Research Center in China. Miao's calculations show that under extreme pressures cesium and fluorine atoms can form exotic molecules with inner-shell bonds. Miao identified two molecules that, at high pressure, would involve cesium's inner electrons as well. To form cesium trifluoride ($\ce{CsF3}$), a cesium atom would share its single valence electron and two inner-shell electrons with three fluorine atoms. Four inner electrons would go into making cesium pentafluoride ($\ce{CsF5}$). “That forms a very beautiful molecule, like a starfish,” Miao says. He reported his findings in Nature Chemistry. Both the shape of the resulting molecules and the possibility of their formation are “very surprising,” says Nobel Prize–winning chemist Ronald Hoffmann, a professor emeritus at Cornell University. Found this in the comments about the stability of the exotic molecules at normal pressures: "Since the compound is thermodynamically unstable it would decompose to the more stable form at lower pressures. Since this means that they are going to higher states of enthalpy I would expect that this decomposition would be an endothermic reaction, which means that they would get cold during the process."
{ "domain": "chemistry.stackexchange", "id": 3307, "tags": "physical-chemistry, bond, electrons" }
Why studying machine learning is an opportunity in today's world?
Question: I just wanted to gather some perspective on why this is a great opportunity to be able to study machine learning today. With all the online resources (online courses like Andrew Ng's, availability of datasets such as Kaggle, etc), learning machine learning has become possible. I understand that you can get highly paid jobs; but you also need a lot of dedication to be good at it, which makes your salary not so attractive! (in comparison to the number of hours you spend to keep up with this fast-moving field) Why is it so desirable to take this opportunity and start learning machine learning today? (community, ability to start a business, etc.) Answer: What sort of opportunity it is depends on how much you want to focus on it. If you want to be a regular programmer, you might take the time to learn a high level interface for some machine learning tools, such as Tensorflow or Keras. There will be plenty of things you don't know how to do (even within those tools), but you may be able to apply predesigned model architectures to problems. The models won't be as good as one designed specifically for the problem, but it's one more tool in your toolbox, and it's possible you'd be able to get some useful results occasionally without devoting a huge amount of time to mastering the techniques. But if you want to really focus on machine learning, at the research level, you can potentially tackle problems that existing techniques haven't been able to solve. This is where most of the big projects that you've probably heard of will be happening: self-driving cars, AlphaGo, etc. What you can expect here is a lot of hard work. You will need to develop a fairly deep understanding of the mathematics involved so you can visualize (to some extent) what is happening in the potentially high dimensional spaces involved, identify potential failure modes, and identify models that won't fall into them. 
It involves a lot of trial and error, failed attempts and gradual improvement before you're able to develop a model that beats the stuff already out there. It's very rewarding work if you enjoy it. There are well-paying positions in the field, but that's just a bonus if you already enjoy the work, and it isn't enough of a bonus if you don't. In my opinion, going into this for just the money would be a mistake. There are almost definitely other jobs that pay just as well but don't take anywhere near the time investment to become (and stay) competent at them. But if you really want to work in this field for its own sake, and also want to make sure you don't starve while you're doing it, it's absolutely worth it.
{ "domain": "ai.stackexchange", "id": 848, "tags": "machine-learning, social, profession" }
Sorting strings by length - functional Python
Question: I'm trying to port this little F# snippet while staying pythonic:

["something"; "something else"; "blah"; "a string"]
|> List.map (fun p -> p, p.Length)
|> List.sortBy snd

In case you don't speak F#, it gets the length of each string, then sorts by length. Output: [("blah", 4); ("a string", 8); ("something", 9); ("something else", 14)] In Python, this is the best I could do so far:

sorted([(p, len(p)) for p in ["something", "something else", "blah", "a string"]], key=lambda a: a[1])

While correct, this doesn't look very elegant to me, or maybe it's just my non-pythonic eye. Is this pythonic code? How would you write it? Maybe an imperative style is more appropriate? Answer:

data = ["something", "something else", "blah", "a string"]
result = [(x, len(x)) for x in sorted(data, key=len)]

Basically, it's more straightforward to sort first, then decorate. Although, I'm not sure why you would need the length of each string in your tuple. If you don't really need it, sorting by length can be much shorter. EDIT: If all I wanted was to output the data, I'd do it like this:

for string in sorted(data, key=len):
    print(string, len(string))

If you really wanted to eliminate the two references to len you could do:

mykey = len
for string in sorted(data, key=mykey):
    print(string, mykey(string))

But unless you are reusing the code with different mykey's, that doesn't seem worthwhile.
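As a quick check, the sort-then-decorate version reproduces the F# output exactly:

```python
data = ["something", "something else", "blah", "a string"]

# Sort by length first, then build the (string, length) pairs.
result = [(x, len(x)) for x in sorted(data, key=len)]

print(result)
# [('blah', 4), ('a string', 8), ('something', 9), ('something else', 14)]
```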
{ "domain": "codereview.stackexchange", "id": 5277, "tags": "python, beginner, strings, sorting, functional-programming" }
Is our Solar System moving independently through our galaxy?
Question: I would like to know if our Solar System is moving independently through our galaxy or is stuck on the Orion Arm, which is revolving around our galaxy. Is our solar system an onboard passenger of the Orion Arm? Answer: The arms of a spiral galaxy are density waves. Stars orbiting the galaxy are pulled towards them and speed up as they approach, and then are slowed down as they leave. So an arm of the galaxy is not composed of the same stars. The arm is, in some ways, like a fountain, because it can maintain its shape even though the stars in it are constantly changing. So the sun will move in its orbit out of the arm and into another, just the same as other stars.
{ "domain": "astronomy.stackexchange", "id": 5536, "tags": "galaxy, solar-system, movement, apparent-motion" }
FFTs of a complex signal - separating the real and imaginary parts
Question: I have a complex time-varying signal at a single frequency, x = a + jb, where a represents the contribution from the cosine basis function and b represents the contribution from the sine basis function. I am trying to understand how the output differs if I take a complex FFT of x and inspect the real and imaginary components, compared to taking two FFTs of the real and imaginary parts separately. I naively expected these to be equal when a=b. What am I missing?

% signal parameters
fs = 1e3;
f0 = 50;
t = (0:999)/fs;
wt = 0.5; % weighting between cosine and sine basis functions
NFFT = 1024;
freq = linspace(-fs/2, fs/2, NFFT);
freq2 = linspace(0, fs/2, NFFT/2);
wnd = hanning(length(t)).';

% Complex signal
x = wt*cos(2*pi*f0.*t) + (1-wt)*1j*sin(2*pi*f0.*t);

% 1 - Take FFT of complex signal and split into re/im components
X = fft(x.*wnd, NFFT);
Xre = abs(X(1:NFFT/2));
Xim = abs(fliplr(X(NFFT/2+1:end)));

% 2 - Split the signal into Re/Im and compute FFTs
Yre = abs(fft(real(x).*wnd, NFFT));
Yim = abs(fft(imag(x).*wnd, NFFT));

close all;
figure;
ax(1) = subplot(221); plot(freq2, Xre, 'b'); legend('Cmplx Re');
ax(2) = subplot(222); plot(freq2, Yre(1:NFFT/2), 'r'); legend('Split Re');
ax(3) = subplot(223); plot(freq2, Xim, 'b'); legend('Cmplx Im');
ax(4) = subplot(224); plot(freq2, Yim(1:NFFT/2), 'r'); legend('Split Im');
linkaxes(ax, 'xy');

figure;
plot(freq, fftshift(abs(X)));

Answer: Below is a chart I had of "Universal Fourier Transform Properties" that apply in either direction (going from time to frequency or going from frequency to time). For example, a signal that is periodic in one domain will be discrete in the other: a digitized time domain signal becomes periodic in frequency (such that we can concern ourselves with the spectrum from $-F_s/2$ to $+F_s/2$ only). Similarly a signal that is periodic in time will only have discrete frequencies (at the fundamental repetition rate and its harmonics in frequency). 
Specific to your case I draw attention to the properties when a signal is only real, and when a signal is only imaginary: This may help you to see what is going on, specifically the FT of a signal that is only real in the time domain, and the FT of a signal that is only imaginary in the time domain. A signal that is only real will have a spectrum that is conjugate symmetric: the positive half of the spectrum is equal in magnitude to the negative half but opposite in phase (meaning the real portion of the FFT is "even" and the imaginary portion of the FFT is "odd", in the same fashion that a cosine in time is an even function and a sine in time is an odd function). Similarly, a signal that is only imaginary will have the real components of its FFT odd and the imaginary components even. So hopefully you see the difference between taking the fft of b, and taking the fft of jb, where to be clear b is real, so jb is imaginary (we could complicate this by allowing a and b to be complex numbers, but my guess is in your case they are indeed real). This is also interesting in showing how causal time domain signals MUST be complex in frequency, and the imaginary and real components are related by the Hilbert transform: if you consider the real and imaginary components of the spectrum separately, we see via the odd/even relationship that everything for t<0 cancels while everything for t>0 adds. This last point is further elaborated by Sir Robert Bristow-Johnson in this post: What is the easiest, most straight-forward way to prove this about minimum-phase filters?
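The conjugate-symmetry properties described above are easy to verify numerically; a small pure-Python sketch (a direct DFT rather than MATLAB's fft, purely to exercise the symmetry relations):

```python
import cmath

def dft(x):
    # Direct O(N^2) discrete Fourier transform, enough for a tiny demo.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A purely real signal: spectrum is conjugate symmetric, X[k] == conj(X[N-k]).
x = [0.3, 1.0, -0.7, 0.2, 0.5, -0.1, 0.0, 0.4]
N = len(x)
X = dft(x)
assert all(abs(X[k] - X[(N - k) % N].conjugate()) < 1e-9 for k in range(N))

# The same signal made purely imaginary (jb): now X[k] == -conj(X[N-k]),
# i.e. the real part of the spectrum is odd and the imaginary part even.
Y = dft([1j * v for v in x])
assert all(abs(Y[k] + Y[(N - k) % N].conjugate()) < 1e-9 for k in range(N))
```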
{ "domain": "dsp.stackexchange", "id": 4953, "tags": "fourier-transform, complex" }
In Blackett's cloud chamber why couldn't an alpha particle bounce off a nitrogen atom knocking a proton off of it?
Question: I was reading this which said

In 1919 Rutherford assumed that the alpha particle merely knocked a proton out of nitrogen, turning it into carbon. After observing Blackett's cloud chamber images in 1925, Rutherford realized that the alpha particle was absorbed

I looked around for Blackett's cloud chamber experiment and then found this which said

He asked Blackett to use a cloud chamber to find visible tracks of this disintegration, and by 1925, he had taken 23,000 photographs showing 415,000 tracks of ionized particles. Eight of these were forked, and this showed that the nitrogen atom-alpha particle combination had formed an atom of fluorine, which then disintegrated into an isotope of oxygen 17 and a proton.

Then I found this image of the tracks in the cloud chamber on some other website. Here I'm assuming the two forks are the ones circled in red.

How did they know the alpha particle had to combine with nitrogen from this? Couldn't this picture still be explained with the alpha particle knocking off a proton from nitrogen (the light path, I'm guessing) and the alpha particle continuing to move on the other path? Shouldn't it form three tracks instead of two in that case: one for the proton, one for the alpha particle, and one for the carbon? And shouldn't the carbon be negatively charged, so propanol can condense around it?

Answer: How these alpha absorption events were identified is well explained in Blackett's subsequent 1925 paper on "The ejection of protons from nitrogen nuclei, photographed by the Wilson method":

But amongst these normal forks due to elastic collisions, eight have been found of a strikingly different type. … These eight tracks undoubtedly represent the ejection of a proton from a nitrogen nucleus. It was to be expected that a photograph of such an event would show an alpha-ray track branching into three.
The ejected proton, the residual nucleus from which it has been ejected, and the alpha-particle itself, might each have been expected to produce a track. These eight forks however branch only into two. The path of the first of the three bodies, the ejected proton, is obvious in each photograph. It consists of a fine straight track, along which the ionisation is clearly less than along an alpha-ray track, and must therefore be due to a particle of small charge and great velocity. The second of the two arms of the fork is a short track similar in appearance to the track of the nitrogen nucleus in a normal fork. Of a third arm to correspond to the track of the alpha-particle itself after the collision there is no sign. On the generally accepted view, due to the work of Rutherford, the nucleus of an atom is so small, and thus the potential at its surface so large, that a positively charged particle that has once penetrated its structure (and almost certainly an alpha-particle that ejects a proton must do so) cannot escape without acquiring kinetic energy amply sufficient to produce a visible track. As no such track exists the alpha-particle cannot escape. In ejecting a proton from a nitrogen nucleus the alpha-particle is therefore itself bound to the nitrogen nucleus

In summary:

(1) The kinematics of $\alpha + N \rightarrow \alpha + C + p$ scattering are such that three observable tracks ($\alpha$, $C$, $p$) should be produced, and these events have only two, as expected for $\alpha + N \rightarrow F + p$.

(2) The ionization of the light tracks was measured with sufficient accuracy to show that they could not be alpha particles, and the range of the heavy tracks was consistent with nitrogen nuclei but not alpha particles.
{ "domain": "physics.stackexchange", "id": 98871, "tags": "particle-physics, experimental-physics" }
Modulus of four acceleration
Question: The four-acceleration is defined as $$\alpha^\mu = \gamma_V ^4 \left(\frac{\vec{v} \cdot \vec{a}}{c},\frac{\vec{v} \cdot \vec{a}}{c^2} \vec{v} + \frac{1}{\gamma_V ^2} \vec{a} \right)$$ where $\vec{v}$ is the ordinary velocity and $\vec{a}$ the ordinary acceleration in a certain reference frame. Our professor told us that it follows straightforwardly from the definition that $$\alpha_\mu \alpha^\mu = \eta_{\mu \nu} \alpha^\mu \alpha^\nu = -\alpha^0 \alpha^0 + \vec{\alpha} \cdot \vec{\alpha} = |\vec{a}|^2$$ but I can't understand the last step.

Answer: The equation your professor gave you ($\alpha^\mu \alpha_\mu = |\vec{a}|^2$) is only true in a frame where the object is instantaneously at rest ($\vec{v} = 0$). In this instance, we have $\alpha^\mu = (0, \vec{a})$ and the result follows fairly obviously.

A more general result can be obtained by plugging the components of $\alpha^\mu$ into the definition of the four-vector norm: \begin{align*} \alpha^\mu \alpha_\mu &= -(\alpha^0)^2 + \vec{\alpha} \cdot \vec{\alpha} \\ &= \gamma_V^8 \left[ - \frac{(\vec{v} \cdot \vec{a})^2}{c^2} + \left( \frac{\vec{v} \cdot \vec{a}}{c^2} \vec{v} + \frac{1}{\gamma_V ^2} \vec{a} \right) \cdot \left( \frac{\vec{v} \cdot \vec{a}}{c^2} \vec{v} + \frac{1}{\gamma_V ^2} \vec{a} \right) \right] \end{align*} You can multiply this out, and it does simplify a little upon the application of the identity $(\gamma_V)^{-2} = 1 - |\vec{v}|^2/c^2$. But it does not reduce to $\alpha^\mu \alpha_\mu = |\vec{a}|^2$ unless $\vec{v} = 0$.
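The answer's point can be checked numerically. The sketch below (my own helper names, with $c = 1$) evaluates the four-acceleration components from the formula in the question and their Minkowski norm: the norm reduces to $|\vec{a}|^2$ only in the instantaneous rest frame $\vec{v} = 0$.

```python
import numpy as np

c = 1.0

def four_accel(v, a):
    """Four-acceleration (a0, avec) from the formula in the question,
    for ordinary 3-velocity v and 3-acceleration a (units with c = 1)."""
    g = 1.0 / np.sqrt(1.0 - np.dot(v, v) / c**2)   # gamma_V
    a0 = g**4 * np.dot(v, a) / c
    avec = g**4 * (np.dot(v, a) / c**2 * v + a / g**2)
    return a0, avec

def norm_sq(a0, avec):
    # Minkowski norm with signature (-, +, +, +)
    return -a0**2 + np.dot(avec, avec)

a = np.array([0.3, -0.2, 0.5])

# Instantaneous rest frame: the invariant equals |a|^2 ...
assert np.isclose(norm_sq(*four_accel(np.zeros(3), a)), np.dot(a, a))

# ... but for v != 0 it does not:
v = np.array([0.4, 0.1, 0.0])
assert not np.isclose(norm_sq(*four_accel(v, a)), np.dot(a, a))
```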
{ "domain": "physics.stackexchange", "id": 78515, "tags": "special-relativity, metric-tensor, acceleration" }
Number of independent components for tensors in general
Question: The question in my assignment:

Suppose we have a tensor $A^{\mu\nu\alpha\beta}$ in four spacetime dimensions. This tensor is antisymmetric in the first two indices, i.e., $A^{\mu\nu\alpha\beta}=-A^{\nu\mu\alpha\beta}$, and symmetric in the last two indices, i.e., $A^{\mu\nu\alpha\beta}=A^{\mu\nu\beta\alpha}$. Determine the number of independent components this tensor has. On the other hand, if the tensor is antisymmetric in all four indices, how many independent components will it have? In general, in $n$ dimensions, how many independent components will it have?

My answer: As the tensor $A^{\mu\nu\alpha\beta}$ is anti-symmetric under exchange of its first two indices, there are $\frac{4(4-1)}{2}=6$ independent combinations for $\mu$ and $\nu$. Now, for each of these $6$ combinations there are $\frac{4(4+1)}{2}=10$ independent combinations of $\alpha$ and $\beta$, as the tensor is symmetric under the exchange of these two indices. Thus, there are in total $6\times 10=60$ independent components of the tensor.

If the tensor is anti-symmetric in all its four indices, then: As the indices cannot be repeated, the first index has $4$ numbers to choose from; once that is done, for the second index we have only $3$ choices; for the third index $2$ choices; and the last index is determined. The number of possible combinations is $4\times3\times2=4!$. But all these combinations can be obtained by permuting a single combination, and as there are $4!$ possible permutations, the number of independent components is $\frac{4!}{4!}=1$.

Number of independent components for a fully antisymmetric $(4,0)$ rank tensor in $n$ dimensions: As the indices cannot be repeated, the first index has $n$ numbers to choose from; once that is done, for the second index we have only $n-1$ choices; for the third index $n-2$ choices; and the last index has $n-3$ choices.
Therefore, the number of possible combinations is $n\times(n-1)\times(n-2)\times(n-3)=\frac{n!}{(n-4)!}$. Again, due to the total antisymmetry, once one combination of indices is determined, the rest can be obtained by permutations. As there are $4!$ possible permutations, the number of independent components is $\frac{n!}{4!(n-4)!}={}^nC_4$.

Question: (1) Are my arguments correct? (2) Is there a list of the most general formulas for calculating independent components of tensors in various situations? Or maybe someone can list a few with explanations.

Answer: Note that we expect there to be $n^4$ components to start out with for an arbitrary $(4,0)$ tensor $T^{abcd}$ in $n$ dimensions (and in general a generic $(m,0)$ tensor in $n$ dimensions should have $n^{m}$ components).

(a) Start with the antisymmetric case where $A^{abcd} = - A^{bacd}$. Notice that for any $a=b$ we end up having $A^{aacd} =0$, which is sort of like having a $(3,0)$ tensor with all components zero. This means that you would expect $n^3$ components to be zero, so at this point there are $n^4 - n^3$ components left. We also note that for $a \neq b$ we always have $A^{bacd} = - A^{abcd}$, which implies that half the remaining components are independent: this means there are a total of $\frac{1}{2} \cdot (n^4 - n^3) = \frac{n(n-1)}{2} \cdot n^2$ free components for an antisymmetric tensor of this form.

(b) For the symmetric case $S^{abcd} = S^{abdc}$, the argument is similar, except your 'diagonals' are now free components. As in the above (but now $S^{abdc} = S^{abcd}$ for $c \neq d$), there are $\frac{1}{2} \times (n^4 - n^3)$ free components which are 'off-diagonal', and so now just add to this the extra $n^3$ free diagonal components $S^{abcc}$.
The total is $\frac{1}{2} \cdot (n^4 - n^3) + n^3 = n^2 \cdot \frac{n(n+1)}{2}$.

(c) If you have a tensor with both properties (a) and (b), the arguments above follow through similarly (because the symmetries act on separate sets of indices), and can be phrased as you did: the first two indices being anti-symmetric means there are $\frac{n(n-1)}{2}$ free combinations of $a$ and $b$, and the latter two indices have $\frac{n(n+1)}{2}$ free combinations. Overall the tensor has $\frac{n(n-1)}{2} \cdot \frac{n(n+1)}{2} = \frac{n^2 (n-1)(n+1)}{4}$ free components. That is equal to $60$ for $n=4$.

(d) Finally for the tensor $F^{abcd}$ which is antisymmetric in all its indices (also known as a completely/totally antisymmetric tensor): you've got the right answer and the argument is correct. Interestingly, in $n = 4$ dimensions, having 1 free component means that the only type of totally antisymmetric $(4,0)$ tensor you can have is proportional to the Levi-Civita tensor (and this is generically true for a totally antisymmetric $(m,0)$ tensor in $n$ dimensions for $n=m$).
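All of these counts can be confirmed by brute force: enumerate every index tuple, quotient by the stated symmetries, and count the surviving canonical combinations. A sketch (my own function names):

```python
from itertools import product
from math import comb

def count_mixed(n):
    """Antisymmetric in (mu, nu), symmetric in (alpha, beta):
    count distinct canonical index combinations."""
    reps = set()
    for mu, nu, a, b in product(range(n), repeat=4):
        if mu == nu:
            continue                 # forced to zero by antisymmetry
        reps.add((min(mu, nu), max(mu, nu), min(a, b), max(a, b)))
    return len(reps)

def count_totally_antisym(n):
    """Totally antisymmetric in all four indices."""
    reps = set()
    for idx in product(range(n), repeat=4):
        if len(set(idx)) < 4:
            continue                 # any repeated index gives zero
        reps.add(tuple(sorted(idx)))
    return len(reps)

assert count_mixed(4) == 60                       # the mixed-symmetry case
assert count_totally_antisym(4) == 1              # Levi-Civita, up to scale
assert all(count_totally_antisym(n) == comb(n, 4) for n in range(4, 8))
```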
{ "domain": "physics.stackexchange", "id": 72608, "tags": "homework-and-exercises, symmetry, tensor-calculus" }
1d Ising model: energy inside domains
Question: I am trying to understand some calculations to get the excitation energy $\Delta E_\text{M} = E_\text{M} - E_0$ ($M$ is the number of domain walls) in the 1d Ising model in the absence of a magnetic field: $$ H = - J\sum_{i=1}^N s_i s_{i+1} $$ As far as I know, you can take two approaches -- open boundary conditions and periodic boundary conditions. Open boundary conditions mean that the Hamiltonian becomes $$ H_\text{OBC} = - J ( s_1 s_2 + s_2 s_3 + \dots + s_{N-1} s_N )$$ while periodic boundary conditions lead to $$ H_\text{PBC} = - J ( s_1 s_2 + s_2 s_3 + \dots + s_{N-1} s_N + s_N s_1 ) .$$ This leads to different ground state energies $$ E_{0, OBC} = - J ( N - 1 )$$ and $$ E_{0, PBC} = - J N .$$ If I understood everything correctly, every domain will contribute an energy $- (L_i - 1) J$ (where $L_i$ is just the number of spins in domain $i$) plus an additional $+J$ per domain wall in the OBC, leading to $$ E_{M, OBC} = - J \sum_{i=1}^M (L_i - 1) + J M = - JN + 2JM = E_{0, OBC} + \Delta E_{M, OBC} $$ but in the PBC, I would get an energy $- J L_i$ per domain plus $0$ per domain wall since the energy contribution is $\propto - JN$. This leads to $$ E_{M, PBC} = - J \sum_{i=1}^M L_i = - JN = E_{0, PBC} $$ I am really confused by this and I feel like I've mixed things up quite a lot. I was expecting both results to match (or at least to match in the thermodynamic limit).

Answer: Simplify your life by rewriting your Hamiltonian as $$H_{\rm OBC} = -J (N-1) -J \sum_{i=2}^N (s_{i-1}s_i -1) $$ and $$ H_{\rm PBC} = -J N -J \sum_{i=1}^N (s_{i-1}s_i -1), $$ where in the latter sum I have set $s_0\equiv s_N$. In the ground states, $s_{i-1}s_i = 1$ for all $i$ and you recover your formulas for $E_{0,{\rm OBC}}$ and $E_{0,{\rm PBC}}$. Now, if you have $M$ domain walls (that is, $M$ pairs of neighbors with disagreeing spins), then the energy clearly increases by $2MJ$, so $E_{\rm M, OBC} = E_{0,{\rm OBC}} + 2MJ$ and $E_{{\rm M, PBC}} = E_{0,{\rm PBC}} + 2MJ$.
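The answer's bookkeeping is easy to verify numerically: compute the energy of an explicit spin configuration and check that $M$ domain walls cost exactly $2MJ$ above the ground state under both boundary conditions. A small sketch (my own names):

```python
import numpy as np

J = 1.0

def energy(spins, periodic):
    # H = -J * sum_i s_i s_{i+1}, optionally including the wrap-around bond
    e = -J * np.sum(spins[:-1] * spins[1:])
    if periodic:
        e += -J * spins[-1] * spins[0]
    return e

N = 12
ground = np.ones(N, dtype=int)

# Flip one bulk domain of 5 spins: M = 2 domain walls in both OBC and PBC
config = ground.copy()
config[4:9] = -1
M = 2

assert energy(ground, periodic=False) == -(N - 1) * J   # E0 for OBC
assert energy(ground, periodic=True) == -N * J          # E0 for PBC

for periodic in (False, True):
    E0 = energy(ground, periodic)
    assert energy(config, periodic) == E0 + 2 * M * J   # Delta E = 2MJ
```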
{ "domain": "physics.stackexchange", "id": 64399, "tags": "statistical-mechanics, ising-model" }
Confusion about the frequency of a signal recorded with a USRP
Question: In my assignment I am provided with a recording of a radio signal that was supposedly done in the AM spectrum with a USRP, and I am told to process it in various ways with GNURadio. However, the assignment makes a statement about the recorded signal that I have slight difficulty understanding. The assignment says that the signal was recorded with the USRP set to 710 kHz, yet it also says that the recording was sampled at 256 kHz, and that the 0 kHz component of the recording actually corresponds to 710 kHz in the original signal. My understanding is that the only way for this to be the case is that the signal was recorded with a frequency well above 710 kHz, then the frequencies were translated so that the 710 kHz component was shifted to 0 kHz, and finally sampled at 256 kHz. Is my understanding correct? In relation to that, what does it mean for a USRP to be set to a particular frequency, if the recorded signal seems to contain frequencies several tens of kHz above and below that frequency? Answer: The USRP is taking a frequency band of width 256 kHz centered on 710 kHz, and downconverting it to baseband. Whatever is in that band will appear in the USRP output. To be specific, the band that the USRP is capturing goes from 710-128 = 582 to 710+128 = 838 kHz. Keep in mind that the band is 256 kHz wide (that is, equal to the sampling rate) because the USRP actually performs complex sampling.
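The downconversion described in the answer can be sketched numerically (the sample rates and the crude moving-average anti-alias filter here are illustrative choices of mine, not what the USRP hardware actually does): a tone at 730 kHz, mixed against a 710 kHz local oscillator and decimated to 256 kHz complex sampling, shows up at +20 kHz in the baseband spectrum.

```python
import numpy as np

fs = 2_048_000           # hypothetical high RF-side sample rate (Hz)
f_lo = 710_000           # centre frequency the USRP is "set to"
f_sig = 730_000          # a signal 20 kHz above the centre

t = np.arange(8192) / fs
rf = np.cos(2 * np.pi * f_sig * t)              # real RF input
mixed = rf * np.exp(-2j * np.pi * f_lo * t)     # complex mix to baseband

# Crude anti-alias filter (8-tap moving average) before decimating by 8,
# to knock down the sum-frequency mixing image.
filtered = np.convolve(mixed, np.ones(8) / 8, mode='same')
x = filtered[::8]                               # 256 kHz complex samples
fs_bb = fs // 8

X = np.fft.fftshift(np.fft.fft(x))
freqs = np.fft.fftshift(np.fft.fftfreq(len(x), d=1 / fs_bb))
peak = freqs[np.argmax(np.abs(X))]
assert abs(peak - 20_000) < 500   # the tone lands at +20 kHz, not 730 kHz
```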
{ "domain": "dsp.stackexchange", "id": 10498, "tags": "software-defined-radio, usrp" }
Recurrence relation T(n) = 3T(n-1) + n
Question: I'm trying to solve the recurrence relation T(n) = 3T(n-1) + n and I think the answer is O(n^3) because each new node spawns three child nodes in the recurrence tree. Is this correct? And, in terms of the recurrence tree, is there a more mathematical way to approach it? Answer: Using the substitution method, we find out that $$ \begin{align*} T(n) &= n + 3T(n-1) \\ &= n + 3(n-1) + 3^2T(n-2) \\ &= n + 3(n-1) + 3^2(n-2) + 3^3T(n-3) \\ &= \cdots \\ &= n + 3(n-1) + 3^2(n-2) + \cdots + 3^{n-1}(n-(n-1)) + 3^n T(0) \\ &= \frac{3^{n+1}-2n-3}{4} + 3^nT(0) \\ &= \Theta(3^n). \end{align*} $$ Even without doing the full calculation it is not hard to check that $T(n) \geq 3^{n-1} + 3^n T(0)$, and so $T(n) = \Omega(3^n)$. A cheap way to obtain the corresponding upper bound is by considering $S(n) = T(n)/3^n$, which satisfies the recurrence relation $S(n) = S(n-1) + n/3^n$. Repeated substitution then gives $$ \frac{T(n)}{3^n} =\sum_{m=1}^n \frac{m}{3^m} + T(0). $$ Since the infinite series $\sum_{m=1}^\infty \frac{m}{3^m}$ converges, this implies that $\frac{T(n)}{3^n} = \Theta(1)$ and so $T(n) = \Theta(3^n)$.
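The unrolled sum above can be checked programmatically against the closed form, with the $3^n T(0)$ term kept ($T(0) = 5$ below is an arbitrary base-case choice of mine):

```python
from functools import lru_cache

T0 = 5  # arbitrary base case T(0)

@lru_cache(maxsize=None)
def T(n):
    # the recurrence T(n) = 3 T(n-1) + n
    return T0 if n == 0 else 3 * T(n - 1) + n

# Closed form from the substitution: (3^(n+1) - 2n - 3)/4 + 3^n T(0)
for n in range(20):
    assert T(n) == (3 ** (n + 1) - 2 * n - 3) // 4 + 3 ** n * T0
```

The $3^n$ term dominates the linear driving term, consistent with $T(n) = \Theta(3^n)$ rather than any polynomial bound.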
{ "domain": "cs.stackexchange", "id": 11323, "tags": "recurrence-relation" }
On solutions of Schrödinger equation
Question: Consider a quantum system described by the wave function $\psi({\bf x}, t)$ and subjected to a time-independent ordinary potential $V({\bf x})$. The corresponding Schrödinger equation takes the form: $$\underbrace{\left (-\frac {\hbar ^2 \nabla ^2}{2m} + V({\bf x}) \right)}_{\hat H ({\bf x}, \ {\bf p})} \psi({\bf x}, t) = i \hbar \frac \partial {\partial t}\psi({\bf x}, t)$$ Using separation of variables, we write the solution as $\psi({\bf x}, t) = \varphi ({\bf x}) \ \phi(t)$, where: $\phi(t) = \exp(-\frac i \hbar Et)$ solves the equation $ \displaystyle i \hbar \frac d {dt} \phi(t) = E \phi(t)$, with $E$ constant; $\varphi({\bf x})$ solves the equation $\hat H({\bf x},{\bf p}) \varphi ({\bf x})= E \varphi({\bf x})$.

Question. Is this the most general solution? Or is it just a particular one for a specific guess (a factorized solution)?

Answer: Consider the following wavefunction: $$\psi(x,t) = \frac1{\sqrt2}\left(\phi_1(x) e^{-iE_1t/\hbar} + \phi_2(x) e^{-iE_2t/\hbar}\right)\!,$$ where $\phi_1(x)\,e^{-iE_1t/\hbar}$ and $\phi_2(x)\,e^{-iE_2t/\hbar}$ are stationary solutions of the Schrödinger equation with $E_1 \neq E_2$. By linearity, $\psi$ is also a solution of the Schrödinger equation, and it cannot be factorized.

Now, since $H$ is Hermitian, it can be diagonalized, and with some additional work you can show that the factorized solutions that you wrote form a complete basis of the solutions to the Schrödinger equation, in the sense that any solution can be written as a linear combination of factorized solutions.
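A numerical illustration of the answer's counterexample, using the infinite square well on $[0,1]$ with $\hbar = m = 1$ (my normalisation): the probability density of a single stationary state is time-independent, while that of the two-state superposition genuinely changes in time, so the superposition cannot be of the factorized form $\varphi(x)\,\phi(t)$.

```python
import numpy as np

# Infinite square well on [0, 1]: phi_n(x) = sqrt(2) sin(n pi x),
# E_n = (n pi)^2 / 2, in units with hbar = m = 1.
x = np.linspace(0.0, 1.0, 501)
phi = lambda n: np.sqrt(2) * np.sin(n * np.pi * x)
E = lambda n: (n * np.pi) ** 2 / 2

def psi(t):
    # equal superposition of the two lowest stationary states
    return (phi(1) * np.exp(-1j * E(1) * t)
            + phi(2) * np.exp(-1j * E(2) * t)) / np.sqrt(2)

# A single stationary state has a time-independent probability density...
rho_stat = lambda t: np.abs(phi(1) * np.exp(-1j * E(1) * t)) ** 2
assert np.allclose(rho_stat(0.0), rho_stat(0.7))

# ...but the superposition's density oscillates with frequency (E2 - E1),
# so it cannot be written as varphi(x) * phi(t):
assert not np.allclose(np.abs(psi(0.0)) ** 2, np.abs(psi(0.7)) ** 2)
```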
{ "domain": "physics.stackexchange", "id": 80986, "tags": "wavefunction, schroedinger-equation, superposition, differential-equations, linear-systems" }
Compute sum of even powers of x
Question: I am to compute $y$ recurrently (not necessarily recursively), where $$y = {x}^{2n} + {x}^{2(n-1)} + ... + {x}^{4} + {x}^{2} + x$$ Here is what I am doing:

#include <iostream>
using namespace std;

double polynom(unsigned, double);

int main(void)
{
    cout << polynom(2, 3) << endl;
    return 0;
}

double polynom(unsigned n, double x)
{
    double currentAdd(x);
    double res(0);

    while (n) {
        res += currentAdd;
        currentAdd *= currentAdd;
        --n;
    }
    return res;
}

Is it possible to do it better?

Answer: Don't do this:

using namespace std;

See: Why is "using namespace std" considered bad practice? It's a bad habit you should not get into (even for small programs). Bad habits bite you in the butt when you get to larger projects and forget to stop doing them.

Comments

There are a couple of things I would do differently. But I can't say you are doing them wrong.

I prefer not to use unsigned for numeric values.

double polynom(unsigned, double);

There is no way to check that what was passed was not negative. I can easily call this with:

polynom(-1, 2);  // Compiler is happy to convert that -1 to a very large
                 // positive number without any errors. On the receiving
                 // side you cannot tell if it was an error.

There are very few times when you need the extra bit for larger positive numbers. Any time I have come across this type of case I have just used a larger integer type (not swapped to the unsigned version). Unsigned values should be reserved for bit masks in user code.

Don't use return in main.

int main(void)
{
    return 0;
}

If you don't put a return at the end of main() then the compiler will automatically plant code to return 0. So it is not technically needed. Additionally, not using the return 0 is a way to indicate that your application can never fail. Note: return 0 indicates success to the OS, and the compiler is planting it by default to indicate successful termination.
So when I see a return 0 at the end of main() I also start to scan the rest of main() to see if there are any other returns (with non-zero values) that would indicate the error conditions of the application.

Initialization

This is fine:

double currentAdd(x);
double res(0);

But personally I think the = sign provides more context:

double currentAdd = x;
double res = 0;

Prefer for() over while()

This is so 50/50. It's just that for() loops can be slightly more compact:

while (n) {
    --n;
}

// over

for (; n; --n) {
}

Would not worry too much about it though.
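One more thing worth flagging, which the review above does not touch: `currentAdd *= currentAdd` squares the running term, so the loop adds $x, x^2, x^4, x^8, \ldots$ (the exponent doubles each pass) rather than stepping through the consecutive even powers $x^2, x^4, x^6, \ldots$ in the target sum. If I'm reading the sum correctly, each even-power term should instead be the previous one times $x^2$. A sketch of that recurrence, written in Python for brevity (names are mine):

```python
def polynom(n, x):
    """y = x + x**2 + x**4 + ... + x**(2*n), computed recurrently:
    each even-power term is the previous one times x*x."""
    term = x * x          # x**2
    res = x               # the lone odd term
    for _ in range(n):
        res += term       # add x**(2k)
        term *= x * x     # step x**(2k) -> x**(2(k+1))
    return res

assert polynom(2, 3) == 3 + 9 + 81      # x + x^2 + x^4 for x = 3
assert polynom(0, 5) == 5               # only the lone x term
```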
{ "domain": "codereview.stackexchange", "id": 28316, "tags": "c++" }
Canonical momentum density vs. energy-momentum tensor
Question: Suppose we have a scalar field $\varphi$ with Lagrangian $$ \mathcal{L} = \frac{1}{2} \kappa \left( \frac{\partial \varphi}{\partial x} \right)^2 + \frac{1}{2} \rho \left( \frac{\partial \varphi}{\partial t} \right)^2 \,. $$ Then the canonical momentum density is $$ \pi = \frac{\partial \mathcal{L}}{\partial \dot{\varphi}} = \rho \dot{\varphi} \,.$$ Whereas the energy-momentum tensor: $$ T^\mu{}_\nu =\frac{ \partial \mathcal{L}}{\partial (\partial_\mu \varphi)} \partial_\nu \varphi - \delta^\mu_\nu \mathcal{L} $$ has a 01 component whose interpretation (I believe) is momentum density, $$T^0{}_1 = -\rho \dot{\varphi}\varphi' \,.$$ These two quantities don't correspond; what is going on here? Thank you.

Answer: The two quantities don't correspond because they are conserved quantities corresponding to different symmetries. One is a symmetry from shifting your field, the other from shifting space-time itself. Here is what is going on precisely:

Let us do a simpler case first. In a particle mechanics system, let's say a free particle with $L = \frac{1}{2}m\dot{x}^2$, the "field" is simply $x(t)$. However $L$ does not explicitly depend on $x$, so we may shift our system $x$ to $x + \epsilon$ and have our overall action be unchanged. Following the Noether procedure, we make $\epsilon$ time dependent, do the variation again, and recover the conserved "canonical" momentum, the usual $m\dot{x}$. Also note that the system does not explicitly depend on time either. Varying $t$ according to Noether gives us $H$.

In the field theoretic case, recovering the canonical momentum is exactly analogous. In this case, our field is $\varphi$, so if we take $\varphi$ to $\varphi + \epsilon \psi$, i.e. $\delta \varphi = \epsilon \psi$, as our variation, we can recover the canonical momentum. The symmetry that gives you the stress-energy tensor, on the other hand, is obtained if you shift your space-time variables. Take $x^{\mu}$ to $x^{\mu} + \epsilon^{\mu}$.
This is equivalent to letting $\delta \varphi$ = $\epsilon^{\mu}\partial_{\mu}\varphi$. Proceeding with Noether gives you the stress energy tensor (this computation can be found in standard field theory textbooks).
{ "domain": "physics.stackexchange", "id": 11169, "tags": "momentum, stress-energy-momentum-tensor, classical-field-theory" }
About spontaneous combustion of methane
Question: I heard that a spontaneous reaction happens if $\Delta G=\Delta H-T\Delta S$ is negative. For combustion of methane, according to Chemguide: $\Delta H=\pu{-891.1 kJ mol^{-1}}$, $\Delta S=\pu{-0.2422 kJ K^{-1} mol^{-1}}$. Thus, I would deduce that at normal temperature $\Delta G<0$, so the combustion of methane is spontaneous, while at very high temperatures it will stop. But from another reference I deduce the opposite conclusion: if we consider the diagram below (taken from https://fr.wikipedia.org/wiki/R%C3%A9action_chimique), I would think that in order to be able to go to the intermediate state, one needs to provide energy, and thus that the reaction could not be spontaneous for this reason. One needs to provide energy, for example by heating, that is, by increasing temperature. How is that possible? Any comment?

Answer: "Spontaneous" means different things in different contexts.

Your penultimate paragraph captures a key idea. The explanation for why this is right requires a recognition of the context of the term "spontaneous". The context of the statement at the start of the question ($\Delta G=\Delta H-T\Delta S$ is negative) is thermodynamic stability. But this is somewhat at variance with the more natural use of the term, which implies "things happen without being pushed". This idea is closer to the idea of kinetic stability in chemistry.

You correctly identify the need to add energy to get the reaction past the transition state. Even if the reaction overall releases energy (thermodynamically spontaneous), the reaction won't just happen if there is a huge barrier to getting over the transition state. There are big kinetic barriers that stop the reaction "just happening". Oxygen and gasoline will react to release energy, but this doesn't happen without the push given by the spark plug in the engine of a car. In some cases that barrier is so low that a compound will react with nearly everything with little excuse (chlorine trifluoride will set fire to asbestos).
That is a spontaneous reaction in any context. Luckily, few thermodynamically spontaneous reactions are also kinetically spontaneous or humans would catch fire in air. So when you see the term "spontaneous" ask what is the context: thermodynamic or kinetic? And don't confuse them.
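The thermodynamic half of the question is easy to make quantitative with the numbers given: $\Delta G = \Delta H - T\Delta S$ stays negative up to $T = \Delta H/\Delta S \approx 3700\ \mathrm{K}$, so the combustion is thermodynamically spontaneous at any ordinary temperature. A quick sketch:

```python
# Values quoted in the question, per mole of CH4
dH = -891.1      # kJ/mol
dS = -0.2422     # kJ/(K mol)

dG = lambda T: dH - T * dS   # Gibbs energy change at temperature T (K)

assert dG(298.15) < 0        # spontaneous at room temperature

# Delta G changes sign where dH - T dS = 0, i.e. T = dH/dS ~ 3680 K
T_flip = dH / dS
assert 3600 < T_flip < 3700
assert dG(T_flip + 100) > 0  # no longer thermodynamically spontaneous
```

Note that this says nothing about the kinetic barrier; it only locates where the thermodynamic criterion flips.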
{ "domain": "chemistry.stackexchange", "id": 14623, "tags": "physical-chemistry, reaction-mechanism, enthalpy, free-energy" }
Implicit Postulate of Quantum Mechanics
Question: Consider the following quantum system: a particle in a one dimensional box (= infinite potential well). The energy eigenstates' wave functions all vanish outside the box. But the position eigenstates' wave functions don't all vanish outside the box. Each one of them is a delta function at a specific location, and some of these locations are outside the box. So it seems that there is no overlap between certain position eigenstates and all energy eigenstates. So the energy eigenstates don't span the whole Hilbert space! And these position states have zero probability for any energy outcome in a measurement! Now, I know that when speaking about an infinite potential well, it is assumed the particle cannot be outside the well. But I don't see any reason to assume this from the postulates of quantum mechanics. Is there an implicit additional postulate that says: "The Hilbert space of the system is spanned by the Hamiltonian operator's eigenstates (and not other operators, such as position)"? Or is the infinite potential well just an ill-defined system because it contains infinities (just like the free particle...)?

Answer: My understanding is that in various problems in quantum mechanics the final step is to restrict the Hilbert space to physically permissible states. In this problem, such a restriction requires that the state is supported exclusively on the spatial interval in which the potential is finite. This would imply that the resolution of your paradox is that the position eigenstates outside of this interval are not in the Hilbert space. This is not the only example of such a restriction. In the harmonic oscillator there is a similar restriction, that we limit our Hilbert space to states which can be eventually annihilated to the vacuum, and we reject those which can be lowered arbitrarily.
Similarly, when quantising the vector field we find that the non-physical degrees of freedom allow for states of zero norm; in order to recover physicality, and a theory which obeys the appropriate gauge conditions, we reject these.
{ "domain": "physics.stackexchange", "id": 9337, "tags": "quantum-mechanics, mathematical-physics, operators, wavefunction, hilbert-space" }
Difficult recurrence with two variables
Question: My question is a follow-up to the following thread: Solving unusual recurrence with two variables. I basically have the same recurrence relation but with a small change: $$T(n,k) = T(n-1,k)+T(n-m,k+1)$$ The change is the addition of $m$ in the second term of the recursion (instead of 1 in the original question). The boundary cases remain the same (for some given constant $C$): for all $x \leq C$ and for any $k$: $T(x,k)=1$; for all $y \geq C$ and for any $n$: $T(n,y)=1$. I'm trying to approximate the value of $T(n,0)$ (with as tight an upper bound as possible). In the original question we were able to give a closed formula for the recurrence after $i$ steps, which helped bound its value. But due to the addition of $m$, this formula doesn't hold anymore. A direction for how to address such recursions, or any idea for the solution, would be very helpful.

Answer: It will be less confusing to reparametrize $T$ by switching the order in which the second parameter increases: $$ T(n,k) = \begin{cases} 1 & \text{if } n \leq C \text{ or } k = 0, \\ T(n-1,k) + T(n-m,k-1) & \text{otherwise}. \end{cases} $$ Having done this switch, let us get rid of $C$ completely. Including $C$ as the third parameter, notice that $T(n+C,k,C) = T(n+D,k,D)$, and so it suffices to solve the recurrence for some value of $C$. We choose $C = 1$. Let us now unroll the recurrence. Suppose that $n > m$ and $k > 0$. Then \begin{align} T(n,k) &= T(n-m,k-1) + T(n-1,k) \\ &= T(n-m,k-1) + T(n-m-1,k-1) + T(n-2,k) \\ &= T(n-m,k-1) + \cdots + T(2-m,k-1) + T(1,k) \\ &= T(n-m,k-1) + \cdots + T(2-m,k-1) + 1 \\ &= T(n-m,k-1) + \cdots + T(2,k-1) + m + 1. \end{align} This gives us $T(n,1) = n-m-1+m+1 = n$ if $n > m$; you can check that the same formula works for all $n \geq 1$. Continuing, $$ T(n,2) = T(n-m,1) + \cdots + T(2,1) + m + 1. $$ If $n > m$ then we can compute exactly $$ T(n,2) = (n-m) + \cdots + (2) + m + 1 = \frac{(n-m)(n-m+1)}{2} + m.
$$ When $1 \leq n \leq m$, you can check that $T(n,2) = n$. At this point we can in principle continue and obtain exact formulas. However, they will be quite messy. Fortunately, when $m$ is constant, it is easy to prove by induction that $T(n,k) = \Theta(n^k)$; indeed, $T(n,k) = n^k/k! + O(n^{k-1})$. Obtaining an explicit dependence on $m$ is certainly possible, but unless it's expressly needed, I wouldn't bother.
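The unrolled formulas (and the $\Theta(n^k)$ behaviour for the first two values of $k$) can be sanity-checked with a memoized implementation of the reparametrized recurrence, taking $C = 1$ as in the answer:

```python
from functools import lru_cache

def make_T(m, C=1):
    """Reparametrized recurrence: T(n,k) = T(n-1,k) + T(n-m,k-1),
    with T(n,k) = 1 whenever n <= C or k = 0."""
    @lru_cache(maxsize=None)
    def T(n, k):
        if n <= C or k == 0:
            return 1
        return T(n - 1, k) + T(n - m, k - 1)
    return T

for m in (2, 3, 5):
    T = make_T(m)
    for n in range(1, 40):
        assert T(n, 1) == n                                   # T(n,1) = n
        if n > m:
            assert T(n, 2) == (n - m) * (n - m + 1) // 2 + m  # n > m
        else:
            assert T(n, 2) == n                               # 1 <= n <= m
```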
{ "domain": "cs.stackexchange", "id": 17009, "tags": "asymptotics, recurrence-relation, big-o-notation" }
Two questions about the standing waves in a black body
Question: I am currently reading a derivation of the Rayleigh-Jeans law for cavity radiation from Eisberg and Resnick [1]. The authors derive the law by considering a cavity with metallic walls. In the book, the authors state:

Now, since electromagnetic radiation is a transverse vibration with the electric field vector $\mathbf{E}$ perpendicular to the propagation direction, and since the propagation direction for this component is perpendicular to the wall in question, its electric field vector $\mathbf{E}$ is parallel to the wall. A metallic wall cannot, however, support an electric field parallel to the surface, since charges can always flow in such a way as to neutralize the electric field. Therefore, $\mathbf{E}$ for this component must always be zero at the wall. That is, the standing wave associated with the $x$-component of the radiation must have a node (zero amplitude) at $x = 0$.

But blackbodies need not be made of metallic walls. The walls can be insulating as well, which can support a non-zero electric field. Then how does the Rayleigh-Jeans law hold for non-metallic blackbodies (e.g. stars)?

Also, the authors consider a sinusoidal wave-function for the electric field standing wave in the cavity:

The electric field for one-dimensional electromagnetic standing waves can be described mathematically by the function $$E\,(x, t) = E_0 \sin {(2 \pi x / \lambda)} \sin {(2 \pi \nu t)} \tag{1-6}$$ where $\lambda$ is the wavelength of the wave, $\nu$ is its frequency, and $E_0$ is its maximum amplitude.

Why do we need the time and space dependence of the electric field to be sinusoidal? Shouldn't any standing wave which has nodes at the same points suffice? Since any wave can be described as a sum of sinusoidal components using Fourier analysis, why don't we write the electric field as such and then proceed further?

Reference: [1] Eisberg, R.; Resnick, R. Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, 2nd ed.; Wiley: Hoboken, NJ, 1985; pp. 6-12.
Answer: The logic is as follows. First, one can show by a very general thermodynamic argument that, in the limit where the linear dimensions of the cavity are large compared to the wavelengths in the radiation, then in thermal equilibrium conditions the spectrum of energy density per unit frequency interval ($\rho(\omega)$) in the radiation is independent of the shape and nature of the cavity. We only require that the walls are opaque; they can be of any material and surface quality. (The reasoning is associated with Kirchhoff, though I guess others contributed too.)

With this prediction in our back pocket, as it were, we can now go ahead and try to calculate $\rho(\omega)$ for one specific type of cavity, such as a metallic conducting one with a simple treatment of the walls. We can be sure that the answer we get will in fact be more general! The use of sine waves in the calculation is a form of Fourier analysis. It is saying that any distribution at all can be mathematically expressed as a sum of sine waves.

By the way, this is a nice example of the way thermodynamic reasoning plays a significant role in physics. It became fashionable during recent decades to regard thermodynamics as somehow a 'second-class citizen', as if all we need is quantum theory and Boltzmann's definition of entropy. This is a mistake, because in fact thermodynamics offers constraints and symmetry principles which can play a powerful role in understanding many situations in thermal physics. In the present example the roles are: quantum theory of radiation in a solvable case tells us what happens in that case; thermodynamic reasoning says that the prediction thus obtained can also be applied more generally (in thermal equilibrium).
{ "domain": "physics.stackexchange", "id": 58423, "tags": "quantum-mechanics, thermodynamics, electromagnetic-radiation, thermal-radiation" }
2d pose estimation in navigation with amcl
Question: Hi! I launch these files: $ roscore $ roslaunch frontier_exploration robot.launch $ roslaunch modeling display.launch model:="modeling/urdf/myrobot.urdf" $ roslaunch frontier_exploration move_base_staticmap.launch $ roslaunch frontier_exploration global_map.launch in robot.launch I have the commands to start the lidar and tf transformations (scanmatcher -> base_link -> neato_laser); in move_base_staticmap.launch I have this: <launch> <!-- Run the map server --> <node name="map_server" pkg="map_server" type="map_server" args="$(find robot_setup_tf)/mymaps/map.yaml" output="screen"/> <include file="$(find frontier_exploration)/launch/amcl_diff.launch"> </include> <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen"> <rosparam file="$(find frontier_exploration)/launch/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find frontier_exploration)/launch/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find frontier_exploration)/launch/local_costmap_params.yaml" command="load" /> <rosparam file="$(find frontier_exploration)/launch/global_costmap_params.yaml" command="load" /> <rosparam file="$(find frontier_exploration)/launch/base_local_planner_params.yaml" command="load" /> </node> </launch> Is it normal that I have to give the 2d pose estimate in rviz and it is not automatically done by amcl? Because if I move the laser, the points move but the robot model remains stationary in the map? Can I use hector_slam instead of amcl_diff.launch? If I can, what do I have to replace in this file? Please, if you have an answer to even one of these questions, it would be a big help! Thank you! :) Originally posted by papaclaudia on ROS Answers with karma: 94 on 2016-03-24 Post score: 0 Answer: Is it normal that I have to give the 2d pose estimate in rviz and it is not automatically done by amcl? When? At the very beginning, AMCL needs to initialize its filter.
This is done with an initialPose message given by the user, e.g. with RVIZ. If you don't provide an initialization, if I remember correctly, the particle set is initialized at pose-zero, I mean, the origin of your map or odom frame (which should coincide/have-the-same-pose at the beginning). Because if I move the laser, the points move but the robot model remains stationary in the map? This is not a problem with the lasers: AMCL asks TF "if the robot has moved" to "update the filter". If you don't provide a transform (usually from base_frame to odom) AMCL will not update anything. Like every filter, AMCL has two main steps, Prediction (called UpdateAction inside the AMCL code, i.e. the motion model) and Measure incorporation (UpdateSensor inside AMCL, one of the two sensor models, i.e. the Likelihood Field Model or the Beam Model). The first one is done using TF, so your system needs to provide that motion estimate. Then, the predicted estimate (according to the motion model) will be corrected/improved using the sensor readings, i.e. the laser scan and the known map. So, to answer your question: if the points move but the robot model remains stationary in the map, you have some problems with TF, or you are not providing a motion estimate :-) bye! Originally posted by Augusto Luis Ballardini with karma: 430 on 2016-03-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by papaclaudia on 2016-03-29: Thank you!
{ "domain": "robotics.stackexchange", "id": 24237, "tags": "ros, slam, navigation, rviz, hector-slam" }
Is agriculture really a net contributor of greenhouse gases?
Question: A lot of scientific studies and credible sources indicate that agriculture is one of the major contributors of greenhouse gases. The exact numbers seem to vary a lot; I've seen everything from 8% to 45% of global greenhouse gas emissions. What makes me unsure how to really interpret these numbers is the fact that the process of growing feed to feed livestock, raising livestock, and then eating that livestock, is a cycle. The very same plants that the livestock is eating have, during the course of their growth, acted as a sink, i.e. absorbing carbon dioxide from the air. This is totally different from pumping oil out of the ground and burning it, which is (in the relevant short term) only production of greenhouse gases, without the sink. I would assume that any serious study on greenhouse gases would factor in the entire cycle of absorption through animal feed and release back into the atmosphere, yet none of the studies I have seen say exactly how they arrived at the emissions. Aside from the fact that oil is used in agriculture, when only considering the actual gases released from animal farming vs plant farming, it doesn't really make basic sense to me that there could be a net production of greenhouse gases - where did the surplus come from? Answer: The issue is that it is not always a cycle: when you drain wetlands or burn forests to make more farmland, that's not a cycle but a permanent change, one that can continue having effects for centuries. Then of course you have petroleum fuel used to run tractors and the production of fertilizer, which are often not cycles either but pure extraction. One of the reasons the numbers vary so much is disagreement about what falls under agriculture: does the fuel used to run a cargo ship moving bananas count? How about mining equipment used to extract minerals used to make fertilizer? Many simply forget to include things like peat bogs or changes in soil bacteria.
For instance, draining a peat bog releases tremendous amounts of carbon, as a functioning net carbon sink is changed into a carbon producer. Sometimes this happens as water is diverted to farmland, but often it is done on purpose just to get more farmable land. Exact impacts vary from place to place, leading to wide ranges of estimates and further complicating matters.
{ "domain": "biology.stackexchange", "id": 7252, "tags": "metabolism, food, photosynthesis, cell-cycle" }
Processing selected and deselected elements
Question: I think that this code is very bad because it does not agree with the DRY principle. Can you advise me how to improve it? public void WhatSelected(object paramaters) { string ElementToProcess = BASM.FindElement(paramaters, true); SeleсtedCurrent.Add(ElementToProcess); int i = (from x in AssetSelectorCollection where x.IsChecked == true select x).Count(); if (i >= state) { foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (item.IsChecked == false) { item.IsEnable = false; item.IsChecked = false; } } } if (state != 1) { foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (item.IsChecked == true && item.IsEnable == false) { item.IsEnable = true; } } } Messenger.Default.Send<BinarySelectorCommunicator>(new BinarySelectorCommunicator { SelectedIN = SeleсtedCurrent }); } public void WhatDeSelected(object param) { string ElementToProcess = BASM.FindElement(param, false); SeleсtedCurrent.Remove(ElementToProcess); int i = (from x in AssetSelectorCollection where x.IsChecked == true select x).Count(); if (i <= state) { foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (item.IsEnable == false) { item.IsEnable = true; } } } if (state != 1 && i == 1) { foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (item.IsChecked == true) { item.IsEnable = false; } } } Messenger.Default.Send<BinarySelectorCommunicator>(new BinarySelectorCommunicator { SelectedIN = SeleсtedCurrent }); } Answer: Variable names: Names like i, param, parameters, state don't have a useful meaning, neither for you nor for others who are reading/reviewing your code. Use meaningful names for your variables; this is better for readability and maintainability. You have four different actions, based on the four different conditions regarding state and i: (1) i >= state, (2) state != 1, (3) i <= state, (4) state != 1 && i == 1. Is there a reason why they are in separate methods? The only overlapping part is that in two conditions i can be the same as state (1 and 3).
If this is a mistake, you can place all the code in one method; if the checks are correct and you want to keep both methods, that is also fine. I'll base my answer on the correctness of your code and keep two methods. In both methods you perform two foreach loops based on a check. This means that if AssetSelectorCollection has 200 items, you loop 400 times, which is not smart. Reverse the code like this: foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (i >= state) { if (item.IsChecked == false) { item.IsEnable = false; item.IsChecked = false; } } if (state != 1) { if (item.IsChecked == true && item.IsEnable == false) { item.IsEnable = true; } } } Now you only have one loop over the collection, which will increase performance, certainly over a bigger collection. This must be applied in both methods. Redundant checking: The following code: if (item.IsChecked == false) { item.IsEnable = false; item.IsChecked = false; } contains redundant checks. The IsChecked property is a boolean and thus needs no evaluation against true or false in the condition; plus, you un-check the item when it is already unchecked, and there is no need to do that twice. Rewrite it as follows: if (!item.IsChecked) { item.IsEnable = false; } and even shorter: item.IsEnable = item.IsChecked Note that this last line will enable the item when it is checked, not only disable the item when it is unchecked. This logic can also be applied to all the if statements in the code.
Example of the first method, reviewed: public void WhatSelected(object paramaters) { string ElementToProcess = BASM.FindElement(paramaters, true); SeleсtedCurrent.Add(ElementToProcess); var i = (from x in AssetSelectorCollection where x.IsChecked == true select x).Count(); foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (i >= state) { if (!item.IsChecked) { item.IsEnable = false; } } if (state != 1) { if (item.IsChecked && !item.IsEnable) { item.IsEnable = true; } } } Messenger.Default.Send<BinarySelectorCommunicator>(new BinarySelectorCommunicator { SelectedIN = SeleсtedCurrent }); } Edit: Since you're not doing anything between the two if statements you can place the condition of the second statement inside the first one with an && operator: public void WhatSelected(object paramaters) { string ElementToProcess = BASM.FindElement(paramaters, true); SeleсtedCurrent.Add(ElementToProcess); var i = (from x in AssetSelectorCollection where x.IsChecked == true select x).Count(); foreach (BinaryAssetSelector item in AssetSelectorCollection) { if (i >= state && !item.IsChecked) { item.IsEnable = false; } if (state != 1 && (item.IsChecked && !item.IsEnable)) { item.IsEnable = true; } } Messenger.Default.Send<BinarySelectorCommunicator>(new BinarySelectorCommunicator { SelectedIN = SeleсtedCurrent }); }
{ "domain": "codereview.stackexchange", "id": 14067, "tags": "c#, form" }
What allows WFIRST to have similar resolution to Hubble over a 100x larger solid angle?
Question: In the video linked in the Space.com article What Would It Mean for Astronomers If the WFIRST Space Telescope Is Killed? (available on YouTube as WFIRST: The Best of Both Worlds), after 02:00 NASA astrophysicist Neil Gehrels says of WFIRST: It has the same image precision and power as the Hubble Space Telescope, but with one-hundred times the area of sky that is used. Looking at the crude drawing on this NASA page shows only a Cassegrain-like geometry, but there must be additional goodies to obtain a similar resolution with a 10x wider field. Does it have three, four, or five mirrors like the E-ELT or the LSST (cf. What is a quaternary mirror and why does the E-ELT need one? and answers therein) or corrector lenses? What is the magic sauce that makes this possible, and is there at least a schematic diagram of the optics somewhere? above: From NASA. Answer: The ultimate resolution of a telescope depends upon its diameter. The larger the telescope aperture, the finer the resolution. Hubble and WFIRST are both the same diameter (2.4 m), so they both have the same theoretical resolution. However, the image quality on any telescope degrades the further you are away from the optical axis. In WFIRST there is a 3rd mirror that corrects for these inherent distortions over a much wider field of view. The last factor is the improvement in detector technology. The individual pixels are smaller, with smaller gaps, and the chips are much larger. Also, WFIRST has a mosaic of these CCD chips to cover the larger corrected field of view. The paper Optical Design of the WFIRST-AFTA Wide-Field Instrument (Pasquale et al. 2014, Proc. SPIE-OSA Vol. 9293, 929305) discusses the design of the optical path (cycle #4). In addition to a small figure adjustment to the primary, the figures of the secondary and a tertiary mirror are optimized to provide high resolution over a much wider field than the HST.
This design also provides a pupil at which filters can be located as well as an intermediate focus (1:1 with the final focus) as shown below. The basic optical design is decidedly simple for fabrication, integration and test purposes. This approach begins with the base optical design; all three powered mirrors are optically co-axial and simple conics. We considered more complicated designs, including the use of tilted and de-centered mirrors and Zernike and/or anamorphic surface figures.[2,3] TMA designs are widely preferred for high A*Omega telescopes such as JWST and ATLAST. The primary mirror, referred to here as Telescope mirror 1 (T1), is a fast f/1.2 primary with a linear central obscuration of 31%. It is a light-weighted mirror using a hollow honeycomb core. T1 and Telescope mirror 2 (T2) together form an ~f/8, intermediate focus. This focus is uncorrected, with a large caustic image and strongly curved image surface. You will note the off-axis field bias (seen in Figure 1); this is typical in all TMA systems. Within the instrument, Mirror 3 (M3) is almost a 1:1 magnification relay of the intermediate image to the focal plane. Working in concert with T1 & T2, these 3 powered mirrors form the large corrected field of the TMA. A pupil is formed between M3 and the focal plane. With a diameter of approximately 100mm, the pupil allows for the insertion of bandpass filters and spectral dispersion elements via an element wheel. Finally, in order to package the system into the volume constraints, two fold mirrors were used.
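The answer's first point, that theoretical resolution is set by the aperture diameter, is easy to check numerically. The sketch below is my own illustration (not from the cited paper): it applies the Rayleigh criterion, theta ~ 1.22 lambda/D, to the shared 2.4 m aperture at an assumed near-infrared wavelength of 1 micron.

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion theta ~ 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Hubble and WFIRST both have 2.4 m primaries, so at a given wavelength
# their theoretical (on-axis) resolution is the same.
theta = diffraction_limit_arcsec(1.0e-6, 2.4)  # assumed 1 micron, near-infrared
print(f"{theta:.3f} arcsec")
```

This gives roughly 0.1 arcsec, i.e. Hubble-class resolution; the hundred-fold field of view comes from the corrective tertiary mirror and the detector mosaic, not from a larger aperture.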
{ "domain": "astronomy.stackexchange", "id": 2760, "tags": "telescope" }
Is there anywhere I can find some kind of database on all known stars and their properties (mass, surface temperature, radius, and luminosity)?
Question: I have been looking for some kind of collection of all known stars and their individual respective mass/radius/temperature/and luminosity, can you link me to one if it exists? My goal is to write an algorithm that creates polynomial functions that calculate a stars temp/lum/radius based on mass. I know there are existing equations, however the ones I have found are more troublesome and imprecise compared to if I could use polynomial functions I create myself. The Hertzsprung-Russel diagram ( http://upload.wikimedia.org/wikipedia/commons/1/17/Hertzsprung-Russel_StarData.png ) is a great example of representation of what data I need, however it's axes are not constant, so I can not visually gather accurate sample data to deduce any functions. Edit: Essentially even a large quantity of estimates will suffice. I intend to use this data to generate a semi-realistic galaxy for an MMO I am developing, so it doesn't by any means need to be perfect. Answer: I did something similar with the HRD about a week ago. I found that VizieR has a very large database with observations. You can download the tables in different formats (like csv or plain text), but you should first check if the Temperature (mostly log.Tegg) and luminosity (logL) are available. A very detailed one is table V/19. It has those values for 68 clusters, enough to keep you busy for a while! There are also masses in that list, but you can also create an isochrone yourself, and determine the age of each cluster. Well that was my assignment, and we got pretty close to the expected values for some clusters!
{ "domain": "astronomy.stackexchange", "id": 659, "tags": "star" }
Is the 'i' prefix for "iso-" italicised?
Question: I want to abbreviate pseudoisocytosine. Should it be "ΨiC" (with the i italicised) or "ΨiC" (with it upright)? I would usually write "sec-butyl", "tert-butyl", etc., and use "t-BuOH" for "t-butyl alcohol". I usually see, in respectable places, "i-PrOH" for isopropanol, and that is what I have done up till now. The Wikipedia page for chemical naming conventions notes that "cyclo, iso, neo, and spiro are considered part of a chemical name (such as isopropanol) and not considered prefixes. No hyphens or italics are used in these cases." IUPAC writes the "isopropyl" prefix. What is correct? Answer: I don't have the time for a very in-depth answer at the moment. However, if you look at P-29.6 of the 2013 Blue Book, you can see what they italicise and what they don't. (Just do a Ctrl-F for the relevant words.) https://iupac.qmul.ac.uk/BlueBook/P2.html#2906 sec-butyl, tert-butyl, tert-pentyl, and o-tolyl are italicised; isopropyl, isobutyl, and neopentyl are not italicised. It should be pointed out that of all these, only tert-butyl is a preferred prefix. The section linked above makes this clear. But in most cases, that probably needn't concern you. As for abbreviations, these are covered in the 2008 IUPAC recommendations on Graphical Representation Standards for Chemical Structure Diagrams. See in particular Table II on page 406, which gives the following forms: iPr, iBu, s-Bu, and t-Bu (which is in line with the Blue Book). It says that if you choose these abbreviations, then explicitly defining them is unnecessary. (Of course, you can choose whatever abbreviation you like, as long as you define it. The text accompanying Table II makes that clear. It also says that the non-italicised forms are fine too.) Authors are welcome to create their own abbreviations as well, but any abbreviations not included in the list below should be defined clearly when they are used. [...] Several of the abbreviations listed below include portions of the text in italics.
The italicization shown is the preferred formatting of those abbreviations. However, the corresponding forms without italicization are also acceptable.
{ "domain": "chemistry.stackexchange", "id": 16810, "tags": "nomenclature" }
AttributeError: 'numpy.ndarray' object has no attribute 'nan_to_num'
Question: I'm trying to run a Random Forest model from sklearn but I keep getting an error: ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). I tried following the steps in ValueError: Input contains NaN, infinity or a value too large for dtype('float32') fillna(0) on my pandas dataframe still gave the ValueError. So I tried working with my numpy array: val = setTo.ravel().nan_to_num(0) But I keep getting an error: 'numpy.ndarray' object has no attribute 'nan_to_num' I'm wondering how I can deal with the nan values if I have an ndarray? Update Thanks so much to @Beniamin H for all the help; as suggested, I rescaled the data, which I based on https://stackoverflow.com/questions/34771118/sklearn-random-forest-error-on-input and it worked! Answer: You are using the right method but in the wrong way :) nan_to_num is a function of the numpy module, not a method of numpy.ndarray. So instead of calling nan_to_num on your data, call it on the numpy module, passing your data as a parameter: import numpy as np data = np.array([1,2,3,np.nan,np.nan,5]) data_without_nan = np.nan_to_num(data) prints: array([1., 2., 3., 0., 0., 5.]) In your example: import numpy as np val = np.nan_to_num(setTo.ravel())
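Since the original sklearn error complains about infinity as well as NaN, it is worth noting that np.nan_to_num also replaces infinities. A small sketch (the explicit nan=/posinf=/neginf= keyword arguments assume NumPy 1.17 or newer):

```python
import numpy as np

data = np.array([1.0, np.nan, np.inf, -np.inf, 5.0])

# By default NaN becomes 0 and +/-inf become very large finite numbers;
# with the keyword arguments you can choose the replacement values yourself.
cleaned = np.nan_to_num(data, nan=0.0, posinf=0.0, neginf=0.0)
print(cleaned)  # [1. 0. 0. 0. 5.]
```

After this, the array contains only finite values and should pass sklearn's input validation.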
{ "domain": "datascience.stackexchange", "id": 8898, "tags": "python, scikit-learn, pandas, random-forest, numpy" }
Developing an Empirical Model for Non-Linear Data
Question: I have collected Temp vs. Time data for a controlled environment. I performed three tests, with the thermocouple in the same location for all three tests. I now have three Temp. vs. Time curves for this location. The graphs are shown here: I am looking to generate an empirical model based on the Temp. vs. Time at this location that can predict the temperature at this location over time. How should I approach this? My initial approach was to identify the local minima and maxima, and average the three points together (for each maximum and minimum). And then correlate that average to a sequence of time, for example, 30C between 750 and 1250s, or something like that. I really don't think this is an efficient approach... What do you all think would be an efficient approach to this? Answer: A couple of thoughts... First, as a (former) physicist: As a scientist/physicist I would likely never be satisfied to analyze the data that you have presented, because there are some obvious issues with it. I was initially thinking that it was way too schizophrenic, but then I noticed that you referred to the time units as seconds, in which case this is a pretty slowly varying time series. Are the units really seconds? If they happen to be milliseconds or microseconds then you might want to think about a probe that is less noisy (i.e. a larger probe with more of a temperature sink) and check whether you are seeing the effects of ground loop feedback in your system or just EMF interference from your electrical system. Secondly, there are clear patterns in the data and these do not line up very well, so I would rerun the experiment with things timed much more accurately. For instance, the event that occurs midway through the experiment that drops the temperature (looks like an icicle or dagger) occurs 5-8 minutes apart in comparing the dark blue experiment to the purple experiment. It would be great if these could be synced better.
The point being that if you can't sync them better, then the noise of the time series renders most phenomena, apart from a flat line, almost useless. Now as a data scientist: This data is really noisy, or at least fluctuates significantly! You should stay away from averaging minima and maxima as you suggest in your post. Extrema are very sensitive to noise, whereas you are trying to reduce the noise in your system. You should perhaps think about applying some sort of smoothing method to reduce the jitter in your data. Perhaps a second-order exponentially weighted moving average like Holt-Winters. Finally, you could probably just average the three signals to produce a mean signal. Means are more susceptible to outliers than medians, so the median signal is also an option. Some other options include separating the trend from the general behavior using some type of de-seasoning or double ARIMA procedure. But it seems like these are too extreme, given that the dagger/icicle event looks significant and you don't want to transform it away. So I would only use these if there is some type of periodicity to your data. Hope this helps!
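The median-signal and smoothing suggestions above can be sketched in a few lines. The three arrays below are placeholder stand-ins for the logged Temp-vs-Time runs (assumed already resampled onto a common time grid); the smoother is plain single exponential smoothing, the building block that Holt-Winters extends with trend and seasonal terms.

```python
import numpy as np

def exp_smooth(signal, alpha=0.3):
    """Single exponential smoothing; a full Holt-Winters model would
    add trend and seasonal components on top of this."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return np.array(out)

# Placeholder stand-ins for the three Temp-vs-Time runs (same length,
# assumed resampled onto a common time base).
run1 = np.array([30.0, 31.0, 29.0, 35.0, 30.0])
run2 = np.array([30.5, 30.0, 29.5, 34.0, 30.5])
run3 = np.array([29.5, 32.0, 28.5, 36.0, 29.0])

# Element-wise median across runs is more robust to outliers than the mean.
median_signal = np.median(np.vstack([run1, run2, run3]), axis=0)
smoothed = exp_smooth(median_signal)
print(median_signal)  # [30. 31. 29. 35. 30.]
```

A small alpha smooths harder but also flattens real features like the dagger/icicle event, so it would need tuning against the actual data.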
{ "domain": "datascience.stackexchange", "id": 402, "tags": "dataset" }
Async file copy/move
Question: My effort to write async methods for copy/move a file in C# public static class FileHelper { private const int _FileStreamDefaultBufferSize = 4096; private static bool HasNetworkDrive(string path) { try { return new DriveInfo(path).DriveType == DriveType.Network; } catch (Exception) { return false; } } private static bool IsUncPath(string path) { try { return new Uri(path).IsUnc; } catch (Exception) { return false; } } private static async Task InternalCopyToAsync(string sourceFilePath, string destFilePath, FileOptions? sourceFileOptions = null, bool overwrite = false) { sourceFilePath.AssertHasText(nameof(sourceFilePath)); destFilePath.AssertHasText(nameof(destFilePath)); var sourceStreamFileOpt = (sourceFileOptions ?? FileOptions.SequentialScan) | FileOptions.Asynchronous; using (FileStream sourceStream = new FileStream(sourceFilePath, FileMode.Open, FileAccess.Read, FileShare.Read, _FileStreamDefaultBufferSize, sourceStreamFileOpt)) using (FileStream destinationStream = new FileStream(destFilePath, overwrite ? 
FileMode.Create : FileMode.CreateNew, FileAccess.Write, FileShare.None, _FileStreamDefaultBufferSize, true)) { await sourceStream.CopyToAsync(destinationStream, _FileStreamDefaultBufferSize).ConfigureAwait(false); } } public static async Task MoveAsync(string sourceFilePath, string destFilePath) { sourceFilePath.AssertHasText(nameof(sourceFilePath)); destFilePath.AssertHasText(nameof(destFilePath)); if (IsUncPath(sourceFilePath) || HasNetworkDrive(sourceFilePath) || IsUncPath(destFilePath) || HasNetworkDrive(destFilePath)) { await InternalCopyToAsync(sourceFilePath, destFilePath, FileOptions.DeleteOnClose).ConfigureAwait(false); return; } FileInfo sourceFileInfo = new FileInfo(sourceFilePath); string sourceDrive = Path.GetPathRoot(sourceFileInfo.FullName); FileInfo destFileInfo = new FileInfo(destFilePath); string destDrive = Path.GetPathRoot(destFileInfo.FullName); if (sourceDrive == destDrive) { File.Move(sourceFilePath, destFilePath); return; } await Task.Run(() => File.Move(sourceFilePath, destFilePath)).ConfigureAwait(false); } public static async Task CopyAsync(string sourceFileName, string destFileName) { await InternalCopyToAsync(sourceFileName, destFileName).ConfigureAwait(false); } public static async Task CopyAsync(string sourceFileName, string destFileName, bool overwrite) { await InternalCopyToAsync(sourceFileName, destFileName, overwrite: overwrite).ConfigureAwait(false); } } The extension method AssertHasText just throws an ArgumentNullException if string.IsNullOrWhiteSpace(argument) is true. Regarding the implementation of MoveAsync, I followed these guidelines: If the source or dest path is from a network drive or is a UNC path, I call the internal async copy method with the flag FileOptions.DeleteOnClose. If the source drive is the same as the dest drive, I call the standard File.Move method, because it is an almost-instantaneous operation, as the headers are changed but the file contents are not moved. In any other case, I use a Task with the standard
File.Move. I differentiate the above case to save an unnecessary thread. Question: Regarding the implementation of CopyAsync, it will always copy the stream. Can the previous claims be applied to the copy as well? EDIT: Adding the implementation of AssertArgumentHasText public static void AssertArgumentHasText(this string argument, string name) { if (argument.IsNullOrEmpty()) { throw new ArgumentNullException( name, string.Format( CultureInfo.InvariantCulture, "Argument '{0}' cannot be null or resolve to an empty string : '{1}'.", name, argument)); } } Answer: Regarding the implementation of CopyAsync, it will always copy the stream. Can the previous claims be applied to the copy as well? Your current implementation exposes two sorts of operations: Move and Copy. They share the same signature (more or less). The overwrite functionality is not controllable for Move from the consumer's perspective. The Copy operation is optimized to reduce latency (to take advantage of data locality) by branching based on the location of the drive. The same branching could be applied to Move as well to provide symmetric behaviour. If it branches in the same way, then extension in any direction (a new drive location (for example a Kubernetes virtual drive), a new operation (for example Delete), etc.) would be way more convenient. From a readability and maintainability perspective it is easier to have symmetric functions, because after a while (without any context or reasoning for why Move does not apply the same optimization as Copy does) no one will know who did this or why.
{ "domain": "codereview.stackexchange", "id": 38812, "tags": "c#, asynchronous, io" }
What is the definition of Computer Science, and what is the Science within Computer Science?
Question: I am pursuing a BS in Computer Science, but I am at an early point of it, and I am pretty sure I will be happy with my choice given that it seems like an academically and career flexible education to pursue. Having said that, there seems to be a variety of definitions about what Computer Science really is in respects to academia, the private-sector, and the actual "Science" in "Computer Science" I would love to have answers(Or shared pondering) as to the breadth of things an education in Computer Science can be applied to, and ultimately the variety of paths those within Computer Science have pursued. Answer: Computer science is a misnomer - there is actually no "science" in computer science, since computer science is not about observing nature. Rather, parts of computer science are engineering, and parts are mathematics. The more theoretical parts of computer science are purely mathematical. For example, what is a good algorithm for sorting? How do we define the semantics of programming languages? How can we be sure that a cryptographic system is secure? When computer science gets applied, it becomes more like engineering. For example, what is the best way to implement a matrix multiplication algorithm? How should we design a computer language to facilitate writing large programs? How can we design a cryptographic system to protect online banking? In contrast, science is about laws of nature, and more generally about natural phenomena. The phenomena involved in computer science are man-made. Some aspects of computer science can be viewed as experimental in this sense, for example the empirical study of social networks, the empirical study of computer networks, the empirical study of viruses and their spread, and computer education (both teaching computer science and using computers to teach other subjects). Most of these examples are border-line computer science, and are more properly multidisciplinary. 
The closest one gets to the scientific method in computer science is perhaps the study of networks and other hardware devices, which is mainstream in the subarea known unofficially as "systems". These examples notwithstanding, most of the core of computer science is not science at all. Computer science is just a name - it doesn't need to make sense. As for the scope of computer science, the best definition is perhaps: that which computer scientists do. Computer science, like every other academic discipline, is a wide area, and it is difficult to chart completely. If you want a sampling of what people consider computer science, you can look at the research areas of your faculty.
{ "domain": "cs.stackexchange", "id": 1943, "tags": "terminology, history" }
Why don't chilli peppers taste as hot in space?
Question: The following commentator writes: Chili peppers don’t taste as hot in space as they do on Earth. Nobody knows why. We know that the 'hot' feeling of chilli peppers is caused by capsaicin. We read: Capsaicin inside the pepper activates a protein in people’s cells called TRPV1. This protein’s job is to sense heat. There appears to be some question about the cause of the 'spicy' taste of chilli peppers. My question is: What is the reason that chilli peppers don't taste as hot in space? Answer: TL;DR All food tastes bland in space, not only chilli. The claim in that tweet ("Chili peppers don’t taste as hot in space as they do on Earth") is not exactly correct because, the way it's worded, it gives the impression that only chilli peppers have a different taste. In fact, astronauts/cosmonauts use more chili and other spices than they regularly do on Earth, for a reason: they say that food in space (microgravity) tastes bland. All food, not only chili. Since the beginning of space flight, astronauts have reported that food tastes different in microgravity. According to the Scientific American article When It Comes to Living in Space, It's a Matter of Taste (Romanoff and Romanoff, 2017): Many said that flavors are dulled and they crave fare that is spicier and considerably more tart than they would prefer on Earth. So, due to the food tasting so bland, astronauts in fact use more chili and other spices than they normally do on Earth: It's possible that hot sauce and salsa could be key ingredients to the success of a manned mission to Mars. The kicked-up condiments already came close to causing a mutiny on the International Space Station (ISS) in 2002 when astronaut Peggy Whitson threatened to bar entry to the crew of the visiting shuttle Atlantis unless they came bearing a promised resupply of the spicy stuff. Only when shuttle commander Jeff Ashby announced that he had the goods did Whitson say, "Okay, we'll let you in then."
Whitson was joking, but the need for astronauts to be able to spice up their food while in orbit is no laughing matter. However, it is still unknown why food tastes bland in microgravity. Also, there is no consensus about that fact, to start with: There's little scientific data to back up astronauts' claims that taste changes in space, despite a number of studies since the 1970s on the effect of microgravity on the sense of taste and smell. In essence [...] study participants were split on the matter. For those who report a change in taste, some hypotheses have been proposed: One of the most prominent physiological changes associated with spaceflight has to do with fluid shifting from the lower to the upper parts of the body because of weightlessness. This facial and upper-body swelling also creates significant nasal congestion, and because odor is essential to the sense of taste, a decrease in the perception of flavors would occur. And also: The shuttle has a "sterile" smell, which when combined with other odors, such as the scent of their rinse-free shampoo, can be somewhat distracting. However, there is little scientific evidence supporting this fluid-shifting hypothesis. According to Vickers et al. (2001), reproducing that fluid shift in simulated microgravity, where people had a head-down bed rest, had no effect on the threshold sensitivity to tastants. The same lack of change in taste and smell sensitivity under simulated microgravity was found by Olabi et al. (2002). Despite that, this fluid-shifting hypothesis is supported by NASA (Nasa.gov, 2017) in an educational resource for kids: From the early 1960s, astronauts found that their taste buds did not seem to be as effective when they were in space. Why does this happen in space? This is because fluids in the body get affected by the reduced gravity conditions (also called fluid shift). On Earth, gravity acts on the fluid in our bodies and pulls it into our legs. 
In space, this fluid is distributed equally in the body. This change can be seen in the first few days of arriving in space when astronauts have a puffy face as fluid blocks the nasal passages. The puffy face feels like a heavy cold and this can cause taste to be affected in the short term by reducing their ability to smell. The same source says: When food seems to lose its flavor, astronauts usually ask for condiments, such as hot sauces, to give food some intensity of taste. A variety of condiments are available for the crewmembers to add to their food such as honey, and sauces like soy sauce, BBQ, and taco. (emphasis mine) Conclusion For reasons yet unknown, it seems that all food (not only chilli peppers) tastes bland in microgravity. To cope with that, astronauts rely on chili and other hot spices. Sources: Romanoff, J. and Romanoff, J. (2017). When It Comes to Living in Space, It's a Matter of Taste. [online] Scientific American. Available at: https://www.scientificamerican.com/article/taste-changes-in-space/ [Accessed 26 Dec. 2017]. Nasa.gov. (2017) [online] Available at: https://www.nasa.gov/sites/default/files/files/Taste-in-space-TLA-FINAL.pdf [Accessed 26 Dec. 2017]. Vickers, Z., Rice, B., Rose, M. and Lane, H. (2001). Simulated microgravity [bed rest] has little influence on taste, odor or trigeminal sensitivity. Journal of Sensory Studies, 16(1), pp.23-32. Olabi, A., Lawless, H., Hunter, J., Levitsky, D. and Halpern, B. (2002). The Effect of Microgravity and Space Flight on the Chemical Senses. Journal of Food Science, 67(2), pp.468-478.
{ "domain": "biology.stackexchange", "id": 8157, "tags": "food, human-physiology, senses" }
Is the chemical structure of an amide bond (-CONH) or (-CONH2)?
Question: The dictionary definition of an amide is an organic compound obtained by replacing the −OH group in acids by the −NH2 group. So from this, I deduced that the bond would be CONH2. But when I searched around, I found some images that showed the bond only had one hydrogen atom, like so: Which is the correct bond structure? I'm looking specifically at the condensation polymerization of a diamine and a dicarboxylic acid to form nylon. And also, I see online that lots of bond structures (like NH2 and COOH) are prefixed with a dash, so they'd appear like -NH2 and -COOH. Is this just a personal preference, or is there a more elaborate reason behind this? I'm currently doing the GCSEs. Thanks Answer: Amides are derivatives of carboxylic acids. An amide consists of two parts, and the bond between the carbonyl carbon and the nitrogen is called the amide bond. The bond structure of a 1° (primary) amide, with the −CONH2 group, is as shown in the figure. What you have shown in the question is the bond structure of a 2° (secondary) amide, which can be obtained by replacing one of the N−H hydrogens of a 1° amide with a carbon substituent. Now if you observe the structure of nylon, you will see it contains n repeating 2° amide linkages. And also, I see online that lots of bond structures (like NH2 and COOH) are prefixed with a dash, so they'd appear like -NH2 and -COOH. Is this just a personal preference, or is there a more elaborate reason behind this? The dash indicates a functional group attached to the rest of the molecule: -R means a hydrocarbyl group, -COOH means the carboxylic acid functional group, etc.
{ "domain": "chemistry.stackexchange", "id": 5402, "tags": "organic-chemistry, polymers, amines" }
Long-term efficacy and absence of side effects of anti-covid vaccines
Question: The question is about the methodology/biostatistics of clinical trials (I state this beforehand to avoid accusations of being an anti-vaxxer). As multiple anti-COVID vaccines are offered on the market, the question naturally arises about their long-term efficacy and possible side effects. As the SARS-CoV-2 virus has been known for a bit more than a year, neither of these can be tested directly, so one probably has to rely on indirect knowledge from the experience of developing other vaccines. Given that multiple vaccines have been approved for wide use, there should exist well-established procedures to assure their efficacy and safety. The question is: how does one prove the long-term efficacy and absence of side effects during a short period of time (short in comparison to the expected duration of immunity and absence of side effects)? Update As @MattDMo has correctly pointed out in their answer, the absence of long-term effects is established in the last phase of clinical trials, by observing cohorts of vaccinated individuals over a long period of time. @MaximilianPress, in their comment, has also correctly brought up the fact that this is done within the framework of survival analysis. In particular, this means that it is sufficient to follow the vaccinated cohort for several years to evaluate the side effects that occur during their lifetime, as survival analysis keeps track of the censored data. Here we are dealing with a situation where such a study could not be done - the best we can guarantee is the absence of side effects in the next few months. We thus need to rely on a different method, which would account for the similarity between the established and the new vaccines. I am less concerned about vaccine efficacy, since the inefficacy of vaccines is usually against different strains of the virus, as in the case of influenza and dengue. 
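As a concrete illustration of how censored follow-up enters such an estimate, here is a minimal Kaplan-Meier sketch in pure Python; the function and the six-subject dataset are made up for illustration, not taken from any trial:

```python
# Toy Kaplan-Meier estimator illustrating how right-censored follow-up
# (a subject still event-free when observation ends) is handled:
# censored subjects leave the risk set without being counted as events.

def kaplan_meier(times, events):
    """times: follow-up time per subject; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []  # (time, survival probability just after that time)
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t and e == 0)  # censored at t
        if d > 0:
            survival *= 1.0 - d / n_at_risk
            curve.append((t, survival))
        n_at_risk -= d + c
        i += d + c  # skip past all subjects with follow-up time t
    return curve

# 6 subjects: events at t=2 and t=5; censoring at t=3, 4, 6, 6
curve = kaplan_meier([2, 3, 4, 5, 6, 6], [1, 0, 0, 1, 0, 0])
print(curve)  # steps only at event times: t=2 (~0.833) and t=5 (~0.556)
```

Note that the censored subjects at t=3 and t=4 never produce a drop in the curve, yet they shrink the risk set, which is exactly the bookkeeping that lets a short follow-up window still be used honestly.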
Update 2 The accepted answer points out two important things: The long-term side effects of a vaccine/medication can be truly understood only in the Phase IV of the clinical trials, i.e., after the vaccine is in widespread use. The threshold for approving this specific vaccine is lower due to the emergency of the current health crisis. This however still leaves open the question of the safeguards undertaken in the Phase III of the clinical trials, before the vaccine approval. I.e., the question is about ranking the candidate vaccines against the above-mentioned threshold. Remarks: I understand why this question irritates, but I believe that it is not the asking, but rather refusing to seek the answer that plays in the hands of anti-vaxxers. I did follow the suggestions in the comments to pose this question in the medical and the statistics communities, but it seems to get there even less traction. Update 3 Recent suspensions of Astrazeneca vaccine in 9 European countries demonstrate the importance of this issue. Update 4 Pfizer vaccine has been fully approved - that is from now on it is considered a fully tested vaccine (till now it had been used under the emergency authorization). Answer: You are correct that we have been creating and approving vaccines for a very long time, so the procedures for looking at efficacy and side effects of vaccines in general long-term are quite well developed. They occur via so-called "Phase IV" trials, also known as post-marketing surveillance. However, one complicating factor with at least the Pfizer/BioNTech and Moderna vaccines is that they're the first mRNA vaccines ever approved. While I hope the companies have been following some of the volunteers from previous clinical studies of other mRNA vaccines, we're in somewhat uncharted territory right now. The vaccines appear to be safe and effective, at least on the scale of tens of thousands of Phase III test subjects, but we just don't know exactly how they'll behave long-term. 
They were approved in the US as "Emergency Use" products, meaning the upside of having them available outweighs any known or potential downsides at this point. So, the technical answer to your question "how does one prove the long-term efficacy and absence of side effects in such a short time?" is that they can't prove it. Based on the available data to date, there's no reason to suspect that some previously-unobserved deleterious effect will become common, but we can't be 100% certain until the products have been in widespread use for years and possibly decades. However, given the seriousness of COVID-19 and its currently unchecked spread across much of the world, the benefits of having "highly, strongly, very most likely safe" vaccines (that are also very effective) are huge.
{ "domain": "biology.stackexchange", "id": 11106, "tags": "vaccination, biostatistics, clinical-trial" }
Listing teams by total distance and bonus
Question: I had to write a SQL query recently to do a few joins and sums, and at the end of it I realized that I have nearly written a storybook. I know the tools to optimize it, but my trouble is the length of it. I am very sure, it can be shortened up, but my little brain refuses to strengthen this belief. Basically, I want a list of teams ordered by the total Distance + Bonus. The Bonus consists of the team bonus and the bonus of team members (Member Bonus). In order to calculate distance covered by the team, I have to sum up the distance covered by the team members. The relationship between the tables: SELECT TeamID, Name, Country, Region, OfficeLocation AS [Office Location], Minutes As Steps, Distance, SUM(MemberBonus+TeamBonus) AS Bonus,TeamSize, ROW_NUMBER() OVER (ORDER BY SUM( ISNULL(Distance,0.0000000) +ISNULL(MemberBonus,0.0000000)+ISNULL(TeamBonus,0.0000000)) DESC) AS Place FROM ( -- Sum the Distance, Member Bonus and Team Bonus and assign them a rank based on the sum value. SELECT TeamID, Name, Country, Region, Minutes, Distance, ISNULL(MemberBonus,0.0000000) AS MemberBonus, SUM(ISNULL(teamBonusData.BonusPoints,0.0000000)) AS TeamBonus, OfficeLocation, TeamSize FROM ( SELECT Team.TeamID, Team.Name, Result.Country AS Country, Result.Region, Result.Minutes AS Minutes, SUM(ISNULL(Result.MemberBonus,0)) AS MemberBonus, Result.Distance AS Distance, TeamSize FROM ( SELECT Group1.TeamID, Group1.Name, Country, Region, Minutes, Distance, MemberBonus, TeamSize FROM ( --Get a sum of distance covered by the team's members. 
Only get the data for the active teams ( Status = 1) SELECT Team.TeamID, Team.Name, Country.Name AS Country, Region.Name AS Region, ISNULL(SUM(Activity.Minutes),0) AS Minutes, ISNULL(SUM(Activity.Distance),0) AS Distance FROM Team LEFT JOIN Country ON Team.fk_CountryID = Country.CountryID LEFT JOIN Region ON Team.fk_RegionID = Region.RegionID JOIN TeamMember LEFT JOIN Member LEFT JOIN Activity ON Activity.fk_MemberID = Member.MemberID ON Member.MemberID = TeamMember.MemberID ON Team.TeamID = TeamMember.TeamID WHERE Team.Status = 1 AND Member.Disabled = 0 GROUP BY TeamMember.TeamID, Team.TeamID, Team.Name, Country.Name, Region.Name )Group1 JOIN ( -- Get a sum of Bonus points given to the team's members. SELECT TeamMember.TeamID, Member.MemberID, SUM(ISNULL(MemberBonus.BonusPoints,0)) AS MemberBonus FROM Team JOIN TeamMember JOIN Member LEFT JOIN dbo.MemberBonus ON Member.MemberID = MemberBonus.fk_MemberID ON TeamMember.MemberID = Member.MemberID ON Team.TeamID = TeamMember.TeamID GROUP BY Member.MemberID, TeamMember.TeamID ) Group2 ON Group1.TeamID = Group2.TeamID JOIN ( -- Get the team size ( number of members in the team) SELECT COUNT(TeamMember.TeamID) AS TeamSize,TeamID FROM TeamMember GROUP BY TeamMember.TeamID )Group3 ON Group1.TeamID = Group3.TeamID )Result JOIN Team ON Result.TeamID = Team.TeamID GROUP BY Team.TeamID, Team.Name, Result.Country, Result.Minutes, Result.Distance, Result.Region, Result.TeamSize )teamRank LEFT JOIN ( --Get the Bonus points given to the team SELECT ISNULL(TeamBonus.BonusPoints,0)AS BonusPoints, fk_TeamID FROM TeamBonus )teamBonusData ON teamRank.TeamID = teamBonusData.fk_TeamID LEFT JOIN ( -- Get the office location value for the team's Captain SELECT TeamID AS CapTeamID, OfficeLocation FROM TeamMember JOIN Member on TeamMember.MemberID = Member.MemberID WHERE MemberType='Captain' )captainData ON teamRank.TeamID = captainData.CapTeamID GROUP BY teamRank.TeamID, teamRank.Name, teamRank.Country, teamRank.Minutes, teamRank.Distance, 
teamRank.Region, teamRank.MemberBonus, captainData.OfficeLocation, teamRank.TeamSize ) myTeamRank GROUP BY myTeamRank.TeamID, myTeamRank.Name, myTeamRank.Country, myTeamRank.Minutes, myTeamRank.Distance, myTeamRank.Region, myTeamRank.OfficeLocation,TeamSize ORDER BY Place Answer: You really should be more consistent with your choice of line breaks and indentation, especially in a query as large as this. Improving the readability should also make it easier to spot areas for improvement. You didn't indicate which DBMS you're using, but if it's supported, the WITH clause ("common table expressions" or "subquery factoring clause") can help reduce the number of indentation levels you have to deal with. I feel like you're using ISNULL too much. Remember that aggregate functions like SUM ignore NULL values, so an expression like SUM(ISNULL(Result.MemberBonus,0)) is equivalent to SUM(Result.MemberBonus). A big problem I see is that you join some tables and subqueries too early. For example, Country and Region appear to be necessary only for information and don't actually affect any calculations. Furthermore, Country and Region only depend on the TeamID. Since that's the case, instead of joining them in the inner-most subquery, join them as part of the outer query. This will shorten up the GROUP BY clauses. The steps I would take to write this query: Write a query which calculates the distance by team. The only output columns should be TeamID and TotalDistance. Write a query which calculates the total Member bonus by team. The only output columns should be TeamID and TotalMemberBonus. Write a query which calculates the total team bonus by team. The only output columns should be TeamID and TotalTeamBonus. Join the first three queries on TeamID, along with any other information tables like Country and Region. Note that all the aggregation was done in the subqueries, so this final query should not need a GROUP BY clause. 
If you follow this outline, you shouldn't need to calculate a ROW_NUMBER to sort the results. Just ORDER BY TotalDistance + TotalMemberBonus + TotalTeamBonus. The advantage of this approach is that you can test each individual query separately to ensure that you are getting correct results at each step. It also should result in a shorter, easier-to-follow query.
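A minimal sketch of the suggested decomposition, run against SQLite from Python. The table and column names are simplified stand-ins for the original schema, not the asker's real tables; the point is the shape: three per-team aggregate CTEs joined once at the end, so the outer query needs no GROUP BY:

```python
# Each CTE does its own aggregation and exposes only (TeamID, total);
# the outer query just joins and orders -- mirroring the review's outline.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Team (TeamID INTEGER, Name TEXT);
    CREATE TABLE Activity (TeamID INTEGER, Distance REAL);
    CREATE TABLE MemberBonus (TeamID INTEGER, BonusPoints REAL);
    CREATE TABLE TeamBonus (TeamID INTEGER, BonusPoints REAL);
    INSERT INTO Team VALUES (1, 'Alpha'), (2, 'Beta');
    INSERT INTO Activity VALUES (1, 10), (1, 5), (2, 20);
    INSERT INTO MemberBonus VALUES (1, 2), (2, 1);
    INSERT INTO TeamBonus VALUES (1, 3);
""")

rows = conn.execute("""
    WITH dist AS (
        SELECT TeamID, SUM(Distance) AS TotalDistance
        FROM Activity GROUP BY TeamID
    ),
    mbonus AS (
        SELECT TeamID, SUM(BonusPoints) AS TotalMemberBonus
        FROM MemberBonus GROUP BY TeamID
    ),
    tbonus AS (
        SELECT TeamID, SUM(BonusPoints) AS TotalTeamBonus
        FROM TeamBonus GROUP BY TeamID
    )
    SELECT t.Name,
           COALESCE(d.TotalDistance, 0)
         + COALESCE(m.TotalMemberBonus, 0)
         + COALESCE(b.TotalTeamBonus, 0) AS Score
    FROM Team t
    LEFT JOIN dist d   ON t.TeamID = d.TeamID
    LEFT JOIN mbonus m ON t.TeamID = m.TeamID
    LEFT JOIN tbonus b ON t.TeamID = b.TeamID
    ORDER BY Score DESC
""").fetchall()
print(rows)  # [('Beta', 21.0), ('Alpha', 20.0)]
```

COALESCE is needed only at the final join (a team may lack a bonus row entirely); inside the CTEs, SUM already ignores NULLs, which is the ISNULL point made above.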
{ "domain": "codereview.stackexchange", "id": 1839, "tags": "sql, sql-server" }
Replacing Qh with Ql in Entropy equation in Heat Pump
Question: In the heat pump below, the $T_H$ is not constant. It is assumed that the process is steady and reversible. So when constructing an entropy equation over the CV (the purple-colored area): $$\dot ms_1+\dot Q_H/T_H+\dot S_{gen}=\dot ms_2$$ where $\dot S_{gen}=0$, why is it allowed to replace the term $\dot Q_H/T_H$ with $\dot Q_L/T_L$ (the $T_L$ is constant, by the way)? Can someone explain, please? P.S. This is from "Fundamentals of Thermodynamics", 8th ed., Claus Borgnakke and Richard E. Sonntag, with the "previous problem" diagram Fig. P7.28. Answer: Your equation describes the entropy balance on the fluid passing through the heat exchanger between points 1 and 2. In this equation, $Q_H$ represents the heat received by the heat exchange fluid from the heat pump. The interface between the heat pump and the heat exchanger fluid is assumed to be at a constant temperature $T_H$ (say because the working fluid in the heat pump is experiencing a phase change at a constant pressure). So, the rate of entropy transfer between the heat pump and the heat exchanger fluid is $\dot{Q}_H/T_H$. And $\dot{S}_{gen}$ in the equation is the rate of entropy generation within the heat exchange fluid (as a result of temperature gradients within the heat exchanger fluid). If the heat pump is operating close to ideally, I can see how you could replace $\dot{Q}_H/T_H$ by $\dot{Q}_L/T_L$ in the equation. However, I can't see how $\dot{S}_{gen}$ in the heat exchanger can be assumed to be zero.
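For the ideal case, the substitution follows from an entropy balance on the cyclic device itself; a sketch of that step, assuming a totally reversible cycle that absorbs $\dot Q_L$ at constant $T_L$ and rejects $\dot Q_H$ at a constant interface temperature $T_H$:

```latex
% Rate entropy balance on the reversible cyclic device (the heat pump):
% over a cycle the working fluid returns to its initial state, so the
% net entropy transfer plus generation must vanish.
\frac{\dot Q_L}{T_L} - \frac{\dot Q_H}{T_H} + \dot S_{gen,\,cycle} = 0
% With \dot S_{gen,cycle} = 0 for a totally reversible device:
\frac{\dot Q_H}{T_H} = \frac{\dot Q_L}{T_L}
% which is what licenses replacing \dot Q_H/T_H by \dot Q_L/T_L
% in the heat-exchanger entropy balance.
```

As the answer notes, this only addresses the cyclic device; it does not by itself justify setting $\dot S_{gen}$ to zero inside the heat exchanger.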
{ "domain": "physics.stackexchange", "id": 52508, "tags": "thermodynamics, entropy, heat-engine" }
Determine if stuff is Potassium Alum
Question: How can one determine (at home) if the product shown below is Potassium Alum? I held a piece of it in a gas flame (assuming that the flame would become violet if potassium were present), but there was no change in the color of the flame. [Got product for use as astringent and antiseptic after-shave -- and people say (do not know why) that for such use, it has to be potassium-(not something else)-alum.] Update: As mentioned in the comments to the chosen answer (by @James Gaidis), the product is ammonia alum. Just to be double sure, I repeated the tests with McCormick's Alum powder whose ingredient is potassium alum. (1) The picture of the flame test is below. (2) With the McCormick Alum, the test involving sodium bicarbonate resulted in sound and definitely no ammonia gas. (3) The solution of Fatkari had a salty taste that became sour in a second or so; the K-alum was tasteless and became sour in a second or so. Answer: From Wikipedia: An alum is a type of chemical compound, usually a hydrated double sulfate salt of aluminum with the general formula $\ce{XAl(SO4)2 · 12 H2O},$ where $\ce{X}$ is a monovalent cation such as potassium or ammonium. There are also chromium alums and ferric alums, but the color in your sample does not seem to suggest either of them. The alums melt at relatively low temperatures, because they already have a lot of water in the crystal structure, so ammonium alum and potassium alum can not be differentiated on the basis of melting points. Solubilities are fairly close, too: $$ \begin{array}{llc} \hline \text{Compound} & \text{mp}/\pu{°C} & \text{Solubility}/\% \\ \hline \ce{(NH4)Al(SO4)2 · 12 H2O} & 93 & 15 \\ \ce{KAl(SO4)2 · 12 H2O} & 92.5 & 11 \\ \hline \end{array} $$ If there is no violet color to a flame, there may be no potassium, but the test can be sensitive and may be indeterminate. Eliminate the possibility that the alum is an ammonium salt by dissolving some in water and adding a strong alkali, then sniffing it gently. 
If you smell ammonia, you have an ammonia alum; it will still work as a flocculant for muddy water, but is different chemically. A strong alkali readily available at home, well, at the grocery store anyway, is sodium carbonate (washing soda).
{ "domain": "chemistry.stackexchange", "id": 14473, "tags": "physical-chemistry, home-experiment" }
Kinect puppeting
Question: I am working on using Kinect skeleton tracking to control a PR2 and manipulate objects in Gazebo. I want an end result similar to Garratt Gallagher's Teleoperation, but he (and I believe Willow Garage) are both using Cartesian coordinate mapping to "puppet"; I want joint-angle mapping. This will initially control all the joints in both arms, except the forearm roll and wrist joints, although I'd eventually like to get finger tracking for grasping detection (like the MIT examples). Of course it will need to avoid self and environment collisions and unattainable joint angles. Since I'm somewhat new to ROS, I have a few questions to get started: Is anyone working on something similar? Is the code available? Since this will involve lots of vector math and trigonometry, what library should I use for such math? What is an outline for the best way to implement safe joint angle control? Note: I'm currently using Fuerte on Ubuntu 12.04 with an Xbox Kinect. Thank you! Originally posted by DanF on ROS Answers with karma: 1 on 2012-12-21 Post score: 0 Answer: Taylor Veltrop did some work along those lines. See his ROS 3D contest entry Originally posted by tfoote with karma: 58457 on 2012-12-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 12184, "tags": "gazebo, kinect, pr2" }
Why are continuous features more important than categorical features in decision tree models?
Question: I have both categorical and continuous features in my prediction model and want to select (and rank) the most important features. I have converted all categorical variables into dummy variables using one-hot encoding (for better interpretation in my logistic regression model). On one hand, I use LogisticRegression (sklearn) and rank the most significant features by their coefficients. In this way, I see both categorical and continuous variables among the most important features. On the other hand, when I want to rank the features using Decision Tree models (SelectFromModel), they always give higher scores (feature_importances_) first to continuous features and then to categorical (dummy) variables. A completely different behavior in comparison with Logistic Regression. Whilst the performance of the Decision Tree models is much higher (by about 15%) than that of Logistic Regression, I want to know which ranking of features (Decision Tree or Logistic Regression) is more correct. And why do Decision Tree models give more priority to continuous features? Answer: It could be the way that you encode categorical variables. If you do one-hot encoding (dummy variables), each encoded feature will only have two possible values, [0, 1]. Binary variables normally have less importance in decision trees, given how the feature weight is computed. Say, for example, that you are trying to predict the condition of a patient at the hospital: Alive == 0, Dead == 1. Imagine that you have a feature called Head_Shot [0, 1] that is really rare; it only appears a few times in the dataset. The linear model will assign a lot of weight to this coefficient, since it is crucial for the target variable. If this happens, the rest of the features carry little weight. A decision tree, however, might use such a feature in just one split, and since feature importance is weighted by the number of times a feature is used for splitting, it would not get such a large weight. 
I am assuming you are doing one-hot encoding; with other techniques, it will be different. I am also assuming a particular way of calculating the feature importance, so this is far from a scientific answer. Continuous variables can have more importance in decision trees because each tree can split on them several times, at different thresholds, along its way. Sorry for the example, it is a bit drastic, but I believe it makes the point.
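The intuition above can be checked with a toy, dependency-free calculation. This is not sklearn's exact algorithm, just the weighted Gini impurity decrease of the single best split for each feature; the labels and feature values are made up. A one-hot column offers only one possible split point, while a continuous column offers many, so over a full tree the continuous column can accumulate importance across several thresholds:

```python
# Compare the best single-split Gini gain of a continuous feature
# vs. a binary one-hot feature on a tiny two-class toy dataset.

def gini(labels):
    # Binary-class Gini impurity: 2 p (1 - p)
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split_gain(values, labels):
    """Best weighted Gini decrease over all thresholds of one feature."""
    n = len(labels)
    parent = gini(labels)
    best = 0.0
    for t in sorted(set(values))[:-1]:  # every candidate threshold
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        gain = parent - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)
        best = max(best, gain)
    return best

labels     = [0, 0, 0, 1, 1, 1, 0, 1]
continuous = [1, 2, 3, 4, 5, 6, 2.5, 5.5]  # e.g. an age-like feature
dummy      = [0, 0, 0, 1, 1, 1, 1, 0]      # a one-hot column, values {0, 1}

print(best_split_gain(continuous, labels))  # 0.5: threshold 3 separates perfectly
print(best_split_gain(dummy, labels))       # 0.125: only one split available
```

Here the continuous feature happens to admit a perfect threshold, which is of course contrived, but the structural point stands even without it: the dummy column has exactly one candidate split, the continuous one has seven.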
{ "domain": "datascience.stackexchange", "id": 8850, "tags": "machine-learning, feature-selection, decision-trees, logistic-regression, explainable-ai" }
Improvements in repository pattern
Question: I am trying to learn the clean architecture by Uncle Bob. I wanted some basics, and following this good post I tried to implement it in Python with some minor differences.

class ValidationError(Exception):
    pass


class Repository(object):
    repositories = {}

    @classmethod
    def register(cls, type_, repo):
        cls.repositories[type_] = repo

    @classmethod
    def get(cls, type_):
        return cls.repositories[type_]

    @classmethod
    def all(cls):
        return cls.repositories


class UserValidator(object):
    def validate(self, user):
        if not (user.name and user.email):
            raise ValidationError


class UserRepo(object):
    def __init__(self, validator=None):
        self.users = {}
        self.next_id = 1
        self.validator = validator

    def save(self, user):
        self.validator.validate(user)
        user.id = self.next_id
        self.users[self.next_id] = user
        self.next_id += 1
        return user

    def all(self):
        return self.users

    def get(self, id):
        return self.users[id]

    def delete(self, id):
        return self.users.pop(id)


class CompanyRepo(object):
    def __init__(self):
        self.companies = {}
        self.next_id = 1

    def save(self, company):
        company.id = self.next_id
        self.companies[self.next_id] = company
        self.next_id += 1
        return company

    def all(self):
        return self.companies

    def get(self, id):
        return self.companies[id]

    def delete(self, id):
        return self.companies.pop(id)


class BaseEntity(object):
    def __init__(self):
        self.id = None
        self.created_at = None
        self.updated_at = None


class Company(BaseEntity):
    def __init__(self, name, address):
        self.name = name
        self.address = address
        super(Company, self).__init__()


class User(BaseEntity):
    def __init__(self, name, email):
        self.name = name
        self.email = email
        self.company_id = None
        super(User, self).__init__()

    def serialize(self):
        return {'id': self.id, 'name': self.name, 'email': self.email,
                'company_id': self.company_id}

    def value(self):
        return self.serialize()


class WorkerAdder(object):
    def __init__(self, name, email):
        self.user_repo = Repository.get('user')
        self.company_repo = Repository.get('company')
        self.company = self.company_repo.save(Company('FooBar', 'Dummy street'))
        self.user = User(name, email)

    def add(self):
        try:
            self.user.company_id = self.company.id
            print self.user_repo.save(self.user).value()
            print self.user_repo.all()
            print self.user_repo.get(1)
            print self.company_repo.all()
        except ValidationError as ve:
            print "Validation error!"


if __name__ == '__main__':
    Repository.register('user', UserRepo(validator=UserValidator()))
    Repository.register('company', CompanyRepo())
    adder = WorkerAdder('Foo', 'foo@example.com')
    adder.add()

O/P from above code:

{'company_id': 1, 'email': 'foo@example.com', 'id': 1, 'name': 'Foo'}
{1: <__main__.User object at 0x7f4b34139d50>}
<__main__.User object at 0x7f4b34139d50>
{1: <__main__.Company object at 0x7f4b34139d10>}

I have basically tried to implement the in-memory repository pattern. I would like to hear from others what they think, and why is there such a scarcity of design patterns and different architectures in the Python community, as compared to Ruby? PS: The code above is just for learning purposes, hence error handling and error cases are not handled properly. Answer: I'm skeptical of the value of this design pattern, for two reasons:

The Repository class is basically just a Python dictionary. You never use Repository.all() anyway.
Repository is actually a glorified namespace. It's a disguised mechanism for you to make a bunch of global variables.

Additionally, I see some puzzling aspects to the code:

Why would UserRepo() make its .validator configurable? And why would it default to None?
Since UserRepo and CompanyRepo contain nearly identical code, why not consolidate them? At the least they should have a common base class. Maybe they could even be handled by one class.
The WorkerAdder object seems contrived. Why split the work between a WorkerAdder() constructor and an .add() method? Why is the company name and address hard-coded?
Some minor remarks: It's customary to chain to the superclass constructor first, not as the last statement in the subclass's constructor. Serialization means turning an object into a string representation; your User.serialize() returns a dictionary instead. The string representation needs to contain enough detail to reconstruct the object exactly; your User.serialize() omits the .created_at and .update_at timestamps. In summary, Repository is just a dictionary that serves as a namespace for global variables. There's not much point to it, since symbol tables in Python are also dictionaries. UserRepo and CompanyRepo do serve a purpose, but you would have to reinvent the wheel (implementing the persistence mechanism, for example) to make them useful. You might as well use a real database (which could be as simple as sqlite) with an ORM.
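To make the consolidation suggestion concrete, here is one way it could look: a single generic in-memory repository with an optional validator hook serving both entity types. The class and names are illustrative, written for Python 3 rather than the Python 2 of the original post:

```python
# One generic repository replaces UserRepo and CompanyRepo: the
# dict-plus-counter bookkeeping lives in one place, and validation
# becomes an optional hook instead of a hard-coded collaborator.

class InMemoryRepo(object):
    def __init__(self, validator=None):
        self.items = {}
        self.next_id = 1
        self.validator = validator

    def save(self, item):
        if self.validator is not None:
            self.validator.validate(item)
        item.id = self.next_id
        self.items[self.next_id] = item
        self.next_id += 1
        return item

    def all(self):
        return self.items

    def get(self, id):
        return self.items[id]

    def delete(self, id):
        return self.items.pop(id)


class Thing(object):
    """Minimal stand-in for User/Company."""
    def __init__(self, name):
        self.name = name
        self.id = None


repo = InMemoryRepo()
first = repo.save(Thing('a'))
second = repo.save(Thing('b'))
print(first.id, second.id, len(repo.all()))  # 1 2 2
```

UserRepo would then be InMemoryRepo(validator=UserValidator()) and CompanyRepo simply InMemoryRepo(), with no duplicated code between them.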
{ "domain": "codereview.stackexchange", "id": 7740, "tags": "python, design-patterns" }
ros_control requirements
Question: Hi Required info: Distro: Kinetic, Ubuntu 16.04 The robot is running on a gaming laptop and brings up kinect2_bridge, roboclaw_node and the rplidar_node and sends data over home wifi. I view the data on a remote PC via RViz. Most of my experience up to this point has been with simulated robots and Gazebo. I have gone through the tutorials on making a real robot and have a fully built bot IRL. It teleoperates fine and I can make maps, though my odometry appears to suffer from inaccuracy. I am trying to eliminate issues that might be causing it, such as simply not having configured my robot properly. I have read through the wiki pages on the navigation stack and I would like to better understand at what point ros_control becomes necessary in a robot build, please. I am referring to this diagram. My robot has two motors with quadrature encoders being driven by a roboclaw motion control board. It uses the custom roboclaw_node, which subscribes to cmd_vel and publishes odom plus some motor status messages. It has its own PID control built in. At what stage would the roboclaw_node appear in the attached diagram? I am not sure if I am missing something from my build. As mentioned, at present I am teleoperating the robot in order to make maps. I am not navigating yet. This is the URDF representation of my real robot. Now if that were in Gazebo, I would put libdiffdrive in the .gazebo file and set the front caster to have zero friction, and Gazebo would take care of creating robot movement, which is reflected in RViz. However now, of course, RViz only has the drive wheel encoder odometry to inform its position. Is RViz really giving me an accurate representation of movement if there is no model to inform it how the three wheels, 2 x drive plus 1 x caster, interact? Are ros_control and RobotHW always needed in every build? Or does the roboclaw node effectively take care of everything? 
Originally posted by Wireline on ROS Answers with karma: 48 on 2019-04-17 Post score: 0 Answer: Some comments (slightly pedantic, but I feel important): I would like to better understand at what point ros_control becomes necessary in a robot build never. Using ros_control is a choice, and robots can (and have) been built without it and function perfectly fine. [..] RVIZ only has the drive wheel encoder odometry to inform its position [..] Is RVIZ really giving me an accurate representation of movement [..] Please understand: RViz does not do anything else but visualise data streams. It does not calculate odometry, or the pose of your robot. It's not even giving you "an accurate representation of movement". It just renders a 3D model at a certain 6D pose. But all of that information has to come from outside, as RViz is just a consumer of data. Originally posted by gvdhoorn with karma: 86574 on 2019-04-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Wireline on 2019-04-18: Thank you, that answers my question. Comment by gvdhoorn on 2019-04-18: Note that reusing ros_control can definitely be beneficial (as certainly the controllers it provides can save you a lot of work), but it's never a requirement.
{ "domain": "robotics.stackexchange", "id": 32890, "tags": "ros-control, ros-kinetic" }
Physical interpretation of power law cluster size distribution in percolation problem
Question: In the site percolation problem, when the occupation probability $p \rightarrow p_c$, where $p_c$ is the critical probability, the characteristic length diverges, and assuming the usual scaling ansatz $$n(s,p) = f\left(s/s_\xi \right)s^{-\tau},$$ the cluster size distribution becomes a pure power law with no characteristic length, i.e. $f\left(s/s_\xi \right) \rightarrow C$ with $C\in\mathbb{R}$. The physical situation is completely scale-free. Intuitively I would expect that since the problem becomes a statistical fractal, where all sizes are equivalent, then clusters of all sizes should be equally likely to be found. Why is it that in reality the cluster size distribution $n(s) \sim 1/s^\tau$ with $\tau > 1$, i.e. the probability of finding larger clusters is smaller? And if the problem is scale-free, what is the physical relevance of parameter $s$? Answer: In order for the system to be scale free, you don't need all the sizes to be equiprobable (that would be a degenerate case); it's only necessary that their relative probabilities stay constant. That means that, for a given size $s_0$ and a scaling factor $k$, each successive rescaling of the size suppresses the count by the same factor. In equations: If $$ \frac{n(s_1)}{n(s_0)} =\frac{n(s_2)}{n(s_1)} = C, $$ where $\;s_1=ks_0\;$ and $\;s_2=ks_1$, then, if we write $C=k^{-\tau}$, it must be that $$ n(s_2) = k^{-\tau}n(s_1) = (k^2)^{-\tau}n(s_0) $$ and, in general, $$ n(s_N) = n(k^Ns_0) = (k^N)^{-\tau}n(s_0). $$ This statement is equivalent to the equation (given in the question) $n(s) \propto s^{-\tau}$. As for the parameter $s$, even if the system doesn't have a characteristic length, it doesn't mean everything is length independent. Particularly, when dealing with finite approximations of the thermodynamical limit, $s$ is important to quantify the limitations of the approximation.
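The scale-free property the answer describes can be checked numerically. This is a minimal sketch; the exponent value below is purely illustrative (any $\tau > 1$ behaves the same way):

```python
# Hedged sketch: for a pure power law n(s) = s^(-tau), the ratio n(k*s)/n(s)
# equals k^(-tau) for every s -- relative probabilities stay constant under rescaling.
tau = 2.05   # illustrative exponent with tau > 1, not a fitted value
k = 3.0      # scaling factor

def n(s):
    return s ** (-tau)   # cluster number density, up to a constant prefactor

ratios = [n(k * s0) / n(s0) for s0 in (10.0, 100.0, 1000.0)]
# every ratio equals k^(-tau), regardless of the starting size s0
assert all(abs(r - k ** (-tau)) < 1e-12 for r in ratios)
```

The ratio being independent of $s_0$ is exactly the statement $n(k^N s_0) = (k^N)^{-\tau} n(s_0)$, while $n$ itself still decreases with $s$.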
{ "domain": "physics.stackexchange", "id": 44509, "tags": "statistical-mechanics, scale-invariance, fractals, percolation" }
How to calculate the structure tensor?
Question: The structure tensor is a matrix of the form: $S=\begin{pmatrix} W \ast I_x^2 & W \ast (I_xI_y)\\ W \ast (I_xI_y) & W \ast I_y^2 \end{pmatrix}$ where $W$ is a smoothing kernel (e.g. a Gaussian kernel), $I_x$ is the gradient in the direction of $x$, and so on. Therefore, the size of the structure tensor is $2N \times 2M$ (where $N$ is the image height and $M$ is its width). However, it is supposed to be a $2\times2$ matrix so that its eigenvalues $\lambda_1$ and $\lambda_2$ can be computed, as mentioned in many papers. So, how do I calculate the $S$ matrix? Answer: According to this post you have the definition slightly wrong. It's more like: $$ S(\mathbf{u})=\begin{pmatrix} [W \ast I_x^2](\mathbf{u}) & [W \ast (I_xI_y)](\mathbf{u})\\ [W \ast (I_xI_y)](\mathbf{u}) & [W \ast I_y^2](\mathbf{u}) \end{pmatrix} $$ where $\mathbf{u} = (x,y)$ is the location at which $S$ is evaluated. In other words, the algorithm is to form $W \ast I_x^2$, $W \ast (I_xI_y)$, $W \ast (I_xI_y)$, and $W \ast I_y^2$, which are all $N\times M$ images, and then look at the appropriate pixel in each of them to form the $2\times 2$ matrix $S$.
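That algorithm can be sketched in a few lines of NumPy. The stand-ins here are illustrative choices, not from the answer: a 3x3 box filter plays the role of the smoothing kernel $W$, and `np.gradient` supplies $I_x$ and $I_y$:

```python
import numpy as np

# Hedged sketch of the answer's algorithm: form the three smoothed products
# (each an N x M image), then read one pixel from each to build S(u).

def smooth(img):
    # 3x3 box filter with edge padding -- our stand-in for the smoothing kernel W
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

def structure_tensor_at(I, y, x):
    Iy, Ix = np.gradient(I.astype(float))          # gradients along rows (y) and columns (x)
    Jxx, Jxy, Jyy = smooth(Ix * Ix), smooth(Ix * Iy), smooth(Iy * Iy)  # N x M images
    S = np.array([[Jxx[y, x], Jxy[y, x]],          # read one pixel from each image
                  [Jxy[y, x], Jyy[y, x]]])         # -> the 2x2 matrix S(u)
    return np.linalg.eigvalsh(S)                   # eigenvalues, ascending

# An image varying only along x should give one zero and one positive eigenvalue
I = np.tile(np.arange(8.0), (8, 1))
lam = structure_tensor_at(I, 4, 4)
```

For the horizontal ramp image the tensor has one dominant eigenvalue, the signature of a single strong gradient direction.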
{ "domain": "dsp.stackexchange", "id": 3883, "tags": "image-processing, computer-vision" }
What is it about gene names starting with "LOC"?
Question: I was struggling to use AnnotationDbi to change my Ensembl IDs to gene names, for datasets of three different species (human, canine, mouse). Among the gene names for all three species there are genes with names starting with "LOC", like ENSG00000228037: LOC100996583, ENSCAFG00845012021: LOC613013 and ENSMUSG00000095523: LOC100038995. What type of genes are they? And is the naming unique across species? I mean, if I have LOC100996583 in humans, is it possible to have it in canines or mice? If yes, are they doing the same thing? Thanks in advance! Answer: These are "genes" that don't have an official name. As explained in https://www.ncbi.nlm.nih.gov/books/NBK3840/#genefaq.Nomenclature: Symbols beginning with LOC. When a published symbol is not available, and orthologs have not yet been determined, Gene will provide a symbol that is constructed as 'LOC' + the GeneID. This is not retained when a replacement symbol has been identified, although queries by the LOC term are still supported. In other words, a record with the symbol LOC12345 is equivalent to GeneID = 12345. So if the symbol changes, the record can still be retrieved on the web using LOC12345 as a query, or from any file using GeneID = 12345. In other words, when a gene has no known function or homologs in other species, when all we know about it is that it is a locus that seems to be actively transcribed, then it gets the name of LOC plus a numerical gene ID. So these genes, the LOCNNNN, are essentially lesser known and studied genes. Some of them may graduate to a full gene name, others might not. As for uniqueness, no, absolutely not. GenBank gene IDs are unique, but they don't imply homology. Only one gene can have a given ID. For example, gene ID 1 corresponds to human A1BG, alpha-1-B glycoprotein. The mouse homolog of human A1BG, mouse A1bg, alpha-1-B glycoprotein, has the gene ID 117586. The numerical value of the ID tells you nothing at all. 
It is just a unique identifier for a specific gene in a specific species. Finally, a more general note: homology is rarely as straightforward as your question suggests. Even actually homologous genes often have different functions, with specific functions having been gained or lost in a given lineage. So it is very hard to know, and never safe to assume, that two homologous genes in different species "do the same thing".
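Since the quoted FAQ says a LOC symbol is literally 'LOC' + the GeneID, the numeric ID can be read straight off the symbol. A small sketch of that convention:

```python
# Hedged sketch of the convention quoted above: a LOC symbol is just
# "LOC" + GeneID, so the numeric GeneID can be recovered from the symbol.
def loc_to_geneid(symbol):
    if symbol.startswith("LOC") and symbol[3:].isdigit():
        return int(symbol[3:])
    return None  # a curated symbol (e.g. "A1BG"), not a LOC placeholder

assert loc_to_geneid("LOC100996583") == 100996583  # the human example above
assert loc_to_geneid("A1BG") is None
```

Remember that the recovered ID identifies one gene in one species only; it carries no homology information.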
{ "domain": "bioinformatics.stackexchange", "id": 2651, "tags": "bioconductor, gene, genomics, human-genome" }
frontier_exploration: frontiers inside robot footprint
Question: Hello, we started working with the frontier_exploration ROS package using gmapping and the navigation stack on our robot (Volksbot RT6). Both (gmapping and the navigation stack) work fine as long as we use rviz to publish navigation goals. So we began to include the frontier_exploration package. We use the provided launch file global_map.launch to run the package and only changed the topics and robot-specific variables like the footprint and the baselink topic. The thing we don't understand, though, is what lines 15-29 are related to; are those map-specific parameters? When we use rviz to visualize the frontiers topic, we often have frontiers set inside the robot's footprint. Since our robot is non-holonomic this often leads to oscillation of the robot, and move_base starts its recovery behaviours. In addition to that we often get the error message 'Could not find nearby clear cell to start search'. We do have some screenshots of this but since we don't have the karma points yet we can't post them. I hope even without those it's clear what we mean. We'd be glad for any suggestions how to resolve this problem. 
Thank you in advance, Miria and Inga Here is the launch file we use: <launch> <!-- Set to your sensor's range --> <arg name="sensor_range" default="1.0"/> <node pkg="frontier_exploration" type="explore_client" name="explore_client" output="screen"/> <node pkg="frontier_exploration" type="explore_server" name="explore_server" output="screen" > <param name="frequency" type="double" value="2.0"/> <param name="goal_aliasing" type="double" value="$(arg sensor_range)"/> #All standard costmap_2d parameters as in move_base, other than BoundedExploreLayer <rosparam ns="explore_costmap" subst_value="true"> footprint: [[0.36, -0.25], [-0.36, -0.25], [-0.36, 0.25],[0.36, 0.25],[0.51,0.09],[0.51,-0.09]] #!transform_tolerance: 0.5 #update_frequency: 0.5 #publish_frequency: 0.5 #must match incoming static map global_frame: map robot_base_frame: base_footprint #resolution: 0.05--> rolling_window: true track_unknown_space: true plugins: - {name: static, type: "costmap_2d::StaticLayer"} - {name: explore_boundary, type: "frontier_exploration::BoundedExploreLayer"} #Can disable sensor layer if gmapping is fast enough to update scans - {name: sensor, type: "costmap_2d::ObstacleLayer"} - {name: inflation, type: "costmap_2d::InflationLayer"} static: #Can pull data from gmapping, map_server or a non-rolling costmap map_topic: /map # map_topic: move_base/global_costmap/costmap subscribe_to_updates: true explore_boundary: resize_to_boundary: false frontier_travel_point: middle #set to false for gmapping, true if re-exploring a known area explore_clear_space: false sensor: observation_sources: laser laser: {data_type: LaserScan, clearing: true, marking: true, topic: scan_filtered, inf_is_valid: true, raytrace_range: $(arg sensor_range), obstacle_range: $(arg sensor_range)} inflation: inflation_radius: 0.15 </rosparam> </node> </launch> edit: here's a link to some screenshots we took today with tf visualization overlayed. 
http://imgur.com/gallery/OrgbB and here is the gist of our gmapping and move_base configuration https://gist.github.com/anonymous/b0e7aed627ef05d9a233 Originally posted by e.mint27 on ROS Answers with karma: 21 on 2014-11-27 Post score: 2 Answer: Bumped your karma, you should be able to post links now, try putting the images up on imgur or similar. Can you please retake the screenshots with tf visualization overlaid, something really funky is going on with your transforms/navigation setup. Also please post gists of your move_base and gmapping configuration. ---EDIT Something really weird is going on with your configuration. I'm guessing that's move_base's costmap you're displaying, and not frontier_exploration's. Your frontier points (and thus the goals that are sent to move_base) should be directly on top of any detected frontiers, which are by definition edges between known and unknown space. From the images, it looks like they're being placed in very strange locations. This might be because your odometry is poor, causing discrepancies between the local (laser data) and global (gmapping) frames. You could try using only gmapping data for exploration, taking out the laser stuff. You'll have to make sure gmapping pumps out map updates quickly enough, and potentially you'd have to lower the rate on frontier exploration. A development goal for me may be to tie exploration updates to happen only when new data comes in from gmapping, but that's a specific use case. Originally posted by paulbovbel with karma: 4518 on 2014-11-28 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by e.mint27 on 2014-12-01: thanks. we just added the screenshots and the gist. we hope that helps.
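To make the answer's definition concrete — frontiers as "edges between known and unknown space" — here is a hypothetical sketch (not the frontier_exploration source) that marks free cells bordering unknown cells, using the usual occupancy-grid convention of -1 for unknown, 0 for free, and 100 for occupied:

```python
import numpy as np

# A hypothetical sketch of the definition used in the answer: a frontier cell
# is a known free cell adjacent to unknown space.

def frontier_cells(grid):
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue  # only known free cells can be frontiers
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0,   0, -1],
                 [0, 100, -1],
                 [0,   0,  0]])
# frontiers sit on the free/unknown boundary, never deep inside free space
assert frontier_cells(grid) == [(0, 1), (2, 2)]
```

By this definition a frontier should never land inside the robot's (known free) footprint, which is why frontiers appearing there point to a transform or odometry inconsistency between frames, as the answer suggests.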
{ "domain": "robotics.stackexchange", "id": 20188, "tags": "ros, frontier-exploration, footprint" }
Connecting plasms in ECTO
Question: I am creating a repository for a new visual processing pipeline which should be able to accommodate several different functions (data capture, filtering, segmentation, etc.), and each function should also be able to support several different implementations, so that when the system is up and running it can select which implementation may best suit the current process. I am wondering if I should implement each function as a plasm, but this brings up the question: can I connect multiple plasms in ECTO to make the final pipeline, or must everything be connected together in one plasm? I have gotten my cell structure set up the way that we want it to be; I am now just trying to figure out how best to set up the plasms. Any help would be appreciated. Originally posted by BadRobot on ROS Answers with karma: 91 on 2013-03-25 Post score: 0 Answer: The standard approach to this is to implement these pieces as an ecto blackbox, and then string the blackbox items together to get a plasm. As far as I know, you can only have 1 plasm. Within the object recognition kitchen (ORK), each of the pipelines is implemented as a blackbox, so those might serve as some additional examples. Originally posted by fergs with karma: 13902 on 2013-03-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by BadRobot on 2013-03-25: So the idea would be to encapsulate each function and its varying implementations into a black box and then connect all of the black boxes together into one core plasm which would run the whole system? Comment by fergs on 2013-03-26: Yep. That's the idea. The blackbox is basically just a cell, containing lots of other cells within. Comment by BadRobot on 2013-03-26: Thank you so much for all your assistance with getting all of this set up and running!
{ "domain": "robotics.stackexchange", "id": 13533, "tags": "ros, ecto" }
$C$-parity in $\pi^0\pi^+\pi^-$ system
Question: I'm studying the conservation of the quantum number in the decay $\omega^0\rightarrow\pi^0\pi^+\pi^-$. Since $P(\omega^0)=-1$ and $P(\pi^0\pi^+\pi^-)=P(\pi^0)P(\pi^+)P(\pi^-)(-1)^{L_{+-}}(-1)^{L_{(+-)0}}=(-1)(-1)^{L_{+-}}(-1)^{L_{(+-)0}}$ To conserve parity $L_{+-}=L_{(+-)0}=1$ But to conserve C-parity $C(\omega^0)=-1$ and $C(\pi^0\pi^+\pi^-)=C(\pi^0)C(\pi^+\pi^-)(-1)^{L_{(+-)0}}=(-1)^{L_{(+-)0}}(-1)^{L_{+-}}$ since $C(\pi^0)=1$. Therefore if $L_{+-}=L_{(+-)0}=1$, C-parity is not conserved. What am I missing? Answer: I think the problem with your would-be inconsistency (it's a strong decay!) is the angular momentum factor you mysteriously inserted in the formula composing Cs. Composing parity entails the spherical harmonics whose p-waves have negative parity, and there are two of them, as you soundly determined. (In fact, as a mental prop, you may consider the decay as a sequence, $\omega\to \rho^0\pi^0\to 3\pi$, the last step of which is also p-wave. Recall $J^{PC}: ~~~\omega ~~1^{--}; ~~ \rho^0 ~~1^{--}; ~~ \pi^0 ~~0^{-+}$. ) But the composition of the two meaningful (neutral particle) Cs is $$ C(\omega)= C(\rho^0)C(\pi^0)= (-)(+)= (-), $$ consistent. Recall, $C(\rho^0)\equiv C(\pi^+\pi^-)$, as the $\rho^0$ is a notional placeholder for the $\pi^+\pi^-$ p-wave eigenstate of C. There is no notional charge conjugation inserting the relative angular momentum $L_{(+-)0}$ between two charged particle constituents here. Edit on comment: It is an XY problem... The WP point is correct for the C of a particle-antiparticle pair, and the $L_{+-}$ reminds you of the antisymmetry you already incorporated into the $C=-$ of the placeholder $\rho^0$, which I put in for $\pi^+\pi^-$, to focus your thinking; nobody claimed you have to go through this channel; only that it easily summarizes the math. But there is no such $(-)^L$ factor, as you wrongly inferred, for combining two neutral particles with well-defined C! 
So, $L_{(+-)0}$ cannot and should not appear in the total C formula that you invented out of whole cloth: it is pointless and wrong. Most good texts explain this.
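The bookkeeping above can be sketched as a quick consistency check. This is a minimal sketch of the sign composition only, following the answer's rule that no $(-1)^L$ factor is inserted when combining two neutral C-eigenstates:

```python
# Hedged sketch: quantum-number bookkeeping for omega -> pi0 pi+ pi-.
P_PI = -1   # intrinsic parity of each pion
C_PI0 = +1  # C-parity of the neutral pion

def parity_3pi(L_pm, L_pm0):
    # P = P(pi)^3 * (-1)^{L+-} * (-1)^{L(+-)0}  (two orbital factors)
    return P_PI**3 * (-1)**L_pm * (-1)**L_pm0

def c_parity_3pi(L_pm):
    # C(pi+ pi-) = (-1)^{L+-}; combining with the neutral pi0 adds NO extra factor
    return C_PI0 * (-1)**L_pm

L_pm, L_pm0 = 1, 1  # both p-wave, as required by parity conservation
assert parity_3pi(L_pm, L_pm0) == -1   # matches P(omega) = -1
assert c_parity_3pi(L_pm) == -1        # matches C(omega) = -1
```

With the spurious $(-1)^{L_{(+-)0}}$ factor removed from the C composition, both quantum numbers come out conserved for $L_{+-}=L_{(+-)0}=1$.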
{ "domain": "physics.stackexchange", "id": 85398, "tags": "conservation-laws, standard-model, parity, pions, charge-conjugation" }
REC and RE under intersection
Question: Would the intersection of a recursive language and a recursively enumerable language be recursive, recursively enumerable, or neither? Assume $L_{3}$ is the intersection of some language $L_{1} \in$ RE and some other language $L_{2} \in$ REC. Since $L_{2} \in$ REC, there must be a Turing machine $M_{2}$ that halts on every input and decides whether or not the input $x \in L_{2}$. Now suppose we have a $k$-tape NTM $M_{3}$ that on one tape simulates $M_{2}$ and, upon acceptance of the input, switches to another tape where it then simulates $M_{1}$ (for $L_{1}$). Since $M_{1}$ is only ever simulated if $M_{2}$ accepts the input, $M_{1}$ only gets the chance to run for words $x \in L_{2}$. That means we can already be sure that we won't get stuck in a loop for words $x \notin L_{2}$. However, that is not the case for words that are in $L_{2}$, whether or not they are also in $L_{1}$: the simulation of $M_{1}$ is only recursively enumerable and may run forever, without us ever knowing whether the input is accepted or not. If my thoughts so far were correct, that means that $L_{3} \in$ RE. What I am uncertain about is whether there is any sort of algorithm or technique with which we could make the simulation of $M_{1}$ decidable, or if maybe it is possible to build an entirely new TM, independent from $M_{1}$ and $M_{2}$, and make sure it is decidable? Answer: You have been going well. Yes, $L_3=L_1\cap L_2\in\text{RE}$. Could it be that $L_3$ is recursive? The answer is: not necessarily. There are several cases here. If $L_1$ is a recursive language as well, then $L_3$ is also recursive. If $L_1$ is not a recursive language, then $L_3$ may or may not be recursive. For example, let $L_2$ be the language of all words. Then $L_3=L_1$ is not recursive. For example, let $L_2$ be the language that does not contain any word. Then $L_3$ is also the language that does not contain any word, which is recursive. 
Exercise 1. Show that if both $L_1$ and $L_2$ are recursive, then $L_1\cap L_2$ is recursive. Exercise 2. Construct two languages $L_1$ and $L_2$ that satisfy all following conditions. $L_1$ is recursively enumerable. $L_2$ is recursive and $L_2$ is not the language of all words. $L_1\cap L_2\subsetneq L_1$ is recursively enumerable but not recursive.
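The gating construction from the question can be illustrated in code. This is a toy sketch: the "recognizer" below happens to halt on every input so the example terminates, but the structure shows why the decider must run first — a real semi-decider could loop forever where the comment indicates:

```python
# Hedged sketch of the construction: a decider for L2 gates a recognizer
# for L1, so we never risk looping on words outside L2.

def decide_even(x):          # M2: total, always halts (toy L2 = even numbers)
    return x % 2 == 0

def recognize_square(x):     # M1 stand-in: unbounded search, halts on acceptance
    n = 0
    while True:
        if n * n == x:
            return True
        if n * n > x:        # a true semi-decider might loop forever here
            return False
        n += 1

def in_intersection(x):
    if not decide_even(x):       # safe: M2 always halts
        return False
    return recognize_square(x)   # only reached for x in L2; may not halt in general

assert in_intersection(16) is True    # even and a perfect square
assert in_intersection(9) is False    # odd: rejected without ever running M1
```

This is exactly why the intersection is recursively enumerable but not necessarily recursive: acceptance is eventually reported, but non-membership in $L_1$ may never be.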
{ "domain": "cs.stackexchange", "id": 13125, "tags": "formal-languages, turing-machines, computability, undecidability, semi-decidability" }
New exploration package
Question: I have created a simple but functional/complete node for frontier exploration using the hydro costmap_2d layers, but I'm unsure how 'mature' something should be before I should be releasing it. I thought it was a huge pain that 'explore' was unmaintained, and all the other exploration packages were coupled to larger stacks. I've ad-hoc tested it pretty thoroughly, and would like to squeeze out some time in a month or so to implement some proper tests and documentation, but for now I'd just like to get the code out there and find out if anyone else is interested in using, contributing, pointing out glaring mistakes, or suggesting functionality. Is this ready to be released into the wild via bloom? https://github.com/paulbovbel/frontier_exploration Originally posted by paulbovbel on ROS Answers with karma: 4518 on 2014-03-07 Post score: 0 Original comments Comment by ahendrix on 2014-03-07: This looks awesome! Answer: Before you do a release, you'll need to add install rules to your CMakeLists.txt. These specify all of the things that will end up in your deb. Messages and services are installed by default, you'll have to write installation rules for everything else such as launch files, executables, libraries, headers, plugin XML files, URDFs, and other resources. The catkin common tasks howto does a good job of describing most of the common cases. Originally posted by ahendrix with karma: 47576 on 2014-03-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Dirk Thomas on 2014-03-07: You should actually try to install your package using catkin_make install and then try to use your package from the install space to ensure that everything works as expected.
{ "domain": "robotics.stackexchange", "id": 17211, "tags": "ros, navigation, exploration, release, costmap-2d" }
How much energy would be required to actively reduce the temperature of the oceans of Earth by 1℃?
Question: Is there a way to calculate the energy required to reduce the heat of oceans? Am I wrong in thinking it is not as simple as reversing the calculation for specific heat of sea water? Answer: The question is slightly confused, because reducing the temperature of the oceans, in a direct sense, doesn't require energy - it releases it. The amount that is released is simply related to the mass and specific heat capacity of seawater, as you suggest. The missing question, though, is why the ocean is cooling. For it to happen naturally and simply release energy, it would need to be because its surroundings (e.g. the air) were cooler. If you want to actively cool the sea, then yes, that is going to consume energy. If the cooling has approximately the same level of performance as building-sized air conditioning, the power used by the cooling apparatus will be about 40% of the rate at which the energy is removed (and remember that this is not just the total energy divided by the time, because more heat will be leaking back in while you do it). Leaving aside the, uh, engineering challenges of this scale of cooling, this leads to a question as to why one might want to do this. Remember that unless you devise some complex system to radiate this heat into space from above the atmosphere, you're going to be releasing the same heat into the same global climate system, plus the additional 40% that you've used to move it around.
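A back-of-envelope version of the calculation the answer describes, using rough reference values for the ocean's mass and seawater's specific heat (assumed here, not stated in the answer):

```python
# Hedged sketch: heat released by cooling the oceans 1 K, plus the answer's
# ~40% figure for the work an active cooling system would consume.
m_ocean = 1.4e21        # kg, approximate total mass of Earth's oceans (assumed)
c_seawater = 3990.0     # J/(kg K), approximate specific heat of seawater (assumed)
dT = 1.0                # K

Q = m_ocean * c_seawater * dT    # heat released by 1 K of cooling, Q = m c dT
work = 0.4 * Q                   # work input at ~40% of the heat moved

print(f"heat removed ~ {Q:.2e} J, work input ~ {work:.2e} J")
```

This lands around $10^{24}$–$10^{25}$ J, and as the answer notes it understates the real cost, since heat leaks back in while the cooling runs.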
{ "domain": "earthscience.stackexchange", "id": 2412, "tags": "climate-change, oceanography, water, geoengineering" }
COPI/COPII proteins and kinesins/dyneins
Question: I am considering the transport of protein from the ER to the Golgi, and have read that this involves the COPII protein coat. I have also read that this is a form of anterograde transport, and elsewhere that kinesins are responsible for anterograde transport as they move from the negative to positive (outside) poles of the microtubules. Thus it seemed to me that the kinesins carry the COPII vesicles carrying proteins from the ER to the Golgi. However, a Google search of the terms 'kinesins' and 'COPII' together did not seem to yield anything about kinesins carrying COPII. On the other hand, in this article it links COPII with dynactin, which I have not heard of before. Does someone know the link between COPI and COPII transport, and/or retrograde and anterograde transport, with the microtubule motor proteins kinesin and dynein? Where do dynactins come into this? Do they bind the vesicle to the dynein or are they a different type of motor protein? If it is the first case, then why is this transport termed anterograde if it does not use kinesins? Are the terms anterograde and retrograde not directly related to kinesin/dynein use? Answer: Interesting question. The term anterograde refers to movement in the forward direction. In the context of vesicular trafficking, anterograde refers to (1) movement from the site of protein synthesis in the rough endoplasmic reticulum (RER) towards the Golgi and then (2) movement from the Golgi towards the final destination in the cell. Both processes rely on microtubules for transport. Microtubules are polar and radiate from the microtubule organizing center (MTOC) with their (+) ends directed towards the periphery of the cell: There are two broad classes of molecular motors that facilitate directed movement along microtubules. Kinesins are generally (+)-end directed motors whereas dyneins are (-)-end directed: These images should already give you a hint. 
Your question may have arisen due to the assumption that the ER is centrally located near the MTOC with the Golgi somewhat peripheral. This is incorrect. In reality, the Golgi complex is actively clustered near the MTOC by movement on microtubules. Furthermore, the ER extends along the microtubule network towards the (+)-end periphery. Taken together, it is evident that microtubule (-)-ends are located near the Golgi, and transport to it from any part of the cell, including the RER, requires the (-)-end directed motor dynein. Subsequent anterograde transport from the Golgi to a specific destination in the cell, or through the secretory pathway, requires the (+)-end directed motor kinesin. Retrograde transport from the Golgi back to the ER would also use kinesin: The article cited in the question (which I also reference in this answer) mentions that COPII vesicles are coupled to microtubules via dynactin. Dynactin is a protein complex that is used to both activate and adapt cargo to dynein (i.e. COPII vesicles are coupled to microtubules via a dynein/dynactin complex):
{ "domain": "biology.stackexchange", "id": 6842, "tags": "membrane-transport, intracellular-transport, cytoskeleton" }
costmap_2d TrajectoryPlannerROS cost_cloud not being published
Question: Hello, I have a laser scanner and a Kinect on my TurtleBot. I would like to use the point cloud from the Kinect to assist with obstacle avoidance while I use the laser for amcl navigation. I have included both the laser and Kinect as observation sources in my costmap_common_params.yaml file (see below). I can view the raw point cloud from the Kinect (/camera/depth/points) in RViz; however, the cost cloud under /move_base/TrajectoryPlannerROS/cost_cloud returns a warning "No messages received". I've read through the costmap_2d Wiki page but I can't see what I am missing. I am running the latest Debian version of ROS Electric under Ubuntu 10.04. Here is my parameter file: obstacle_range: 2.5 raytrace_range: 3.0 robot_radius: 0.17 inflation_radius: 0.6 max_obstacle_height: 0.6 min_obstacle_height: 0.08 observation_sources: scan point_cloud scan: {data_type: LaserScan, topic: /scan, marking: true, clearing: true} point_cloud: {data_type: PointCloud2, topic: /camera/depth/points, marking: true, clearing: true} Thanks! patrick Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-12-05 Post score: 2 Original comments Comment by ctguell on 2013-07-29: Hi could you make the navigation work with gmapping after all? I would really apprciate some help thanks Answer: So... Apparently neither Eitan nor myself actually put any documentation for this on the wiki... whoops. I've added some documentation to the base_local_planner docs that should describe how to configure that topic. See ros-pkg 4620 for some more usage details. Let me know if the wiki documentation isn't sufficient to answer this question and how we can improve it. Originally posted by Eric Perko with karma: 8406 on 2011-12-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pi Robot on 2011-12-06: Thanks Eitan. I have been running the Kinect at QQVGA for both image and depth (mode 8). 
Do you think this is sufficient or do you recommend downsampling in addition to the low resolution? Comment by eitan on 2011-12-06: Whoops, thanks Eric, I dropped the ball on that one. As far as the cloud_cost topic goes, has it been advertised by move_base at least? Also, I'd highly recommend downsampling the cloud from the kinect with something like a voxel grid filter before passing it to the nav stack to improve performance. Comment by Pi Robot on 2011-12-06: Thanks Eric. I added the line 'publish_cost_grid_pc: true' in my base_local_planner_params.yaml file and verified it is getting set via rosparam. However, I still get "No messages received" in RViz for the cloud_cost topic. Also, 'rostopic hz /move_base/TrajectoryPlannerROS/cost_cloud' subscribes to the topic but then never returns any values. I'm less worried about seeing the cloud than I am verifying that the Kinect's point cloud is actually being factored into the cost. I believe it is since I can see the projection of the cloud onto the ground plane in RViz as occupied cells. Comment by ctguell on 2013-07-29: Hi could you make the navigation work with gmapping after all? I would really apprciate some help thanks
{ "domain": "robotics.stackexchange", "id": 7536, "tags": "navigation, move-base, costmap-2d" }
Custom IComparer
Question: I've got a base class GraphUIControl which is inherited by 4 child classes: BubbleGraphUIControl BatchGraphUIControl LineGraphUIControl StackedGraphUIControl I sometimes put all my GraphUIControl in the same List<GraphUIControl> for some obscure reasons. I wanted to order my list (first by graph type, then by name) so I implemented my custom IComparer: public class GraphUIControlComparer : IComparer<GraphUIControl> { public int Compare(GraphUIControl x, GraphUIControl y) { if (x.GetType() == y.GetType()) { return String.Compare(x.Parent.Name, y.Parent.Name, StringComparison.Ordinal); } if (x is BubbleGraphUIControl) { return -1; } if (y is BubbleGraphUIControl) { return 1; } if (x is BatchGraphUIControl) { return -1; } if (y is BatchGraphUIControl) { return 1; } if (x is LineGraphUIControl) { return -1; } if (y is LineGraphUIControl) { return 1; } if (x is StackedGraphUIControl) { return -1; } if (y is StackedGraphUIControl) { return 1; } return 0; } } But it seems very verbose and I'd love to make it shorter (also, as it's my first IComparer, if I made a mistake/nonsense/whatever, don't hesitate to tell me!) Example: Let's take A, A1, A2, B & C as: A is a BubbleGraphUIControl A1 is a BubbleGraphUIControl A2 is a BubbleGraphUIControl C is a LineGraphUIControl B is a StackedGraphUIControl I put them in the list named myList in random order. When I do myList.Sort(new GraphUIControlComparer()) I want myList to be in the order A, A1, A2, C, B because of the type ordering following this rule: BubbleGraphUIControl BatchGraphUIControl LineGraphUIControl StackedGraphUIControl Then A, A1, A2 are in alphabetic order. Answer: Under most circumstances, it's a good idea to avoid GetType(). Using it indicates something's wrong with your design. 
But if you're willing to use it, there's a very interesting way to write this (I won't say it's good, it's just interesting and short): public class GraphUIControlComparer : IComparer<GraphUIControl> { private int NameCompare(GraphUIControl x, GraphUIControl y) { return String.Compare(x.Parent.Name, y.Parent.Name, StringComparison.Ordinal); } private static readonly Dictionary<Type, int> typeLookupDict = new Dictionary<Type,int> { {typeof(BubbleGraphUIControl), 0}, {typeof(BatchGraphUIControl), 1}, {typeof(LineGraphUIControl), 2}, {typeof(StackedGraphUIControl), 3} }; private int TypeLookup(GraphUIControl x) { return typeLookupDict[x.GetType()]; } public int Compare(GraphUIControl x, GraphUIControl y) { int tx = TypeLookup(x); int ty = TypeLookup(y); if (tx == ty) { return NameCompare(x,y); } return (tx < ty ? -1 : 1); } } Also see https://stackoverflow.com/questions/4287537/checking-if-the-object-is-of-same-type
{ "domain": "codereview.stackexchange", "id": 12878, "tags": "c#, sorting" }
Net force on a mass when a string (massless) is cut
Question: In the figure given below, $m_1 = 5$ kg, $m_2 = 2$ kg and $F=1$ N. We have to find the acceleration of either block, and also find with what acceleration $m_1$ will fall after the string breaks while the force $F$ still acts on the mass $m_1$. Given: the rope and the pulley are massless and the friction between the rope and pulley is negligible. I solved for the acceleration with which the blocks move by applying Newton's second law. But I'm confused about the part where we have to solve for after the string is cut. I believe that, after the string is cut (breaks), we have to take the force of tension the rope is applying on the block $m_1$ into consideration too, i.e., $\sum{F_{m_1}} = m_1g+1-T_{\text{by the above rope}}$. But the solution does not consider this ($T_{\text{by the above rope}}$) and just accounts for $m_1g$ and $F$. The rope was indeed pulling the block before the string got cut, and hence I think we have to consider it. Answer: This is a one-dimensional problem, so I will not write vectors but only magnitudes with the appropriate direction (sign). We write equations for Newton's second law for the system as follows: $$m_1 a_1 = w_1 + F_1 - T_1 \quad \text{and} \quad m_2 a_2 = -w_2 - F_2 + T_2$$ where $w = mg$ is the object weight, $F$ is an external force that acts on the object in the same direction as the weight, $T$ is the force with which the rope pulls the objects in the direction opposite to the weight, and the accelerations $a_1$ and $a_2$ act in the direction of $F_1$ and $T_2$, respectively. Note that the acceleration for both objects is the same, $a_1 = a_2$. 
From $F_1 = F_2$ which is given in the OP and from $T_1 = T_2$ which follows from the fact that the rope and the pulley are massless, the above equations are combined into $$a \cdot (m_1 + m_2) = w_1 - w_2 = g \cdot (m_1 - m_2)$$ Finally, the acceleration is $$a = g \cdot \frac{m_1 - m_2}{m_1 + m_2}$$ When: $m_1 = m_2$ then $a = 0$ which means there is no net force and the system is in equilibrium $m_1 > m_2$ then $a > 0$ which means the resultant force acts in the same direction as $F_1$ $m_1 < m_2$ then $a < 0$ which means the resultant force acts in the opposite direction of $F_1$ At the moment the string is cut, the rope becomes loose and there is no longer a tension force that pulls the objects, hence: $$T_1 = 0 \quad \text{and} \quad T_2 = 0$$ and the system is reduced to: $$a_1 = g+ \frac{ F_1}{m_1} \quad \text{and} \quad a_2 = g+\frac{ F_2}{m_2}$$ where $a_1$ and $a_2$ act in the direction of $g$.
{ "domain": "physics.stackexchange", "id": 84885, "tags": "homework-and-exercises, newtonian-mechanics, forces, acceleration, free-body-diagram" }
Bitarr class optimization C++
Question: I have implemented a bitarr class that does bit manipulations. I would love to know if there is any way to make the code more optimized. Any input would be much appreciated. Thank you! Note: int main() should not be modified.

#include <iostream>
#include <vector>
#include <climits>
#include <cstring>

template<size_t NumBits>
class bitarr {
private:
    static const unsigned NumBytes = (NumBits + CHAR_BIT - 1) / CHAR_BIT; // find number of bytes to track least memory footprint
    unsigned char arr[NumBytes];
public:
    bitarr() { std::memset(arr, 0, sizeof(arr)); } // initialize array to 0

    void set(size_t bit, bool val = true)
    {
        if (val == true) {
            arr[bit / CHAR_BIT] |= (val << bit % CHAR_BIT); // left shift and OR with masked-bit
        }
    }

    bool test(size_t bit) const
    {
        return arr[bit / CHAR_BIT] & (1U << bit % CHAR_BIT); // left shift and AND with masked-bit
    }

    const std::string to_string(char c1, char c2)
    {
        std::string str;
        for (unsigned int i = NumBits; i-- > 0;)
            str.push_back(static_cast<char>('0' + test(i)));
        while (str.find("0") != std::string::npos) {
            str.replace(str.find("0"), 1, std::string{ c1 });
        }
        while (str.find("1") != std::string::npos) {
            str.replace(str.find("1"), 1, std::string{ c2 });
        }
        return str;
    }

    friend std::ostream& operator<<(std::ostream& os, const bitarr& b)
    {
        for (unsigned i = NumBits; i-- > 0; )
            os << b.test(i);
        return os << '\n';
    }
};

int main()
{
    try {
        bitarr<5> bitarr;
        bitarr.set(1);
        bitarr.set(2);
        const std::string strr = bitarr.to_string('F', 'T');
        std::cout << strr << std::endl;
        if (strr != "FFTTF") {
            throw std::runtime_error{ "Conversion failed" };
        }
    }
    catch (const std::exception& exception) {
        std::cout << "Conversion failed\n";
    }
}

Answer: Use size_t consistently

You use both size_t and unsigned for counting. Stick with size_t.
Use default member initialization

You can use default member initialization to ensure arr[] is initialized, without having to call memset():

class bitarr {
    static const unsigned NumBytes = (NumBits + CHAR_BIT - 1) / CHAR_BIT;
    unsigned char arr[NumBytes] = {};
    ...
};

You can then also remove the constructor completely.

Unnecessary if-statement in set()

You don't need to check whether val is true in set(). If it is false, the body of the if-statement will still do the right thing: ORing with zero is a no-op. While it might look like the check avoids useless work, the processor might easily mispredict this condition, making it less efficient than not having the if at all.

Make all functions that do not modify arr[] const

You made the function test() const, but to_string() also does not modify the bit array, so you can make that function const as well.

Optimize to_string()

Your function to_string() is very inefficient. The caller provides you with the characters to use for the representation of one and zero bits, but you first ignore that and build a string of '0' and '1', and then replace those characters one by one. Why not build the string directly using c1 and c2? Also, since you know how long the string will be, you should reserve space for all the characters up front.

std::string to_string(char c1, char c2) const
{
    std::string str;
    str.reserve(NumBits);
    for (size_t i = NumBits; i-- > 0;)
        str.push_back(test(i) ? c2 : c1);
    return str;
}

Your implementation also has bugs: to_string('0', '1') results in an infinite loop (replacing '0' with '0' means find("0") keeps succeeding), and to_string('1', '0') always results in a string of all zeroes (the zeros are first rewritten to ones, and then every one, old and new, is rewritten to zero).
{ "domain": "codereview.stackexchange", "id": 38932, "tags": "c++, bitwise" }