Electromagnetic energy-momentum tensor with allowance for gravitational redshift
Question: Is there an electromagnetic energy-momentum tensor that takes into account the gravitational redshift? If so, what does it look like and where can I read more about it? Answer: I assume you are familiar with the components of the electromagnetic energy-momentum tensor in special relativity: $$T_{\mu\nu} = F_{\mu\lambda}F_{\nu}^\lambda - \frac{1}{4}\eta_{\mu\nu}F_{\lambda\sigma}F^{\lambda\sigma}$$ where $\eta_{\mu\nu}$ are the components of the metric tensor in Minkowski space and $F_{\mu\nu}$ are the components of the electromagnetic field strength tensor. (units $\epsilon_0 = \mu_0 = 1$) At first glance, we might expect the components to look starkly different in a spacetime where light is gravitationally redshifted. After all, gravity is affecting the electromagnetic fields which make up the light beam. However, the energy-momentum tensor is constructed locally, while a redshift occurs over extended trajectories. So the components of the electromagnetic energy-momentum tensor for an arbitrary curved spacetime (one that might have redshift effects) are $$T_{\mu\nu} = F_{\mu\lambda}F_{\nu}^\lambda - \frac{1}{4}g_{\mu\nu}F_{\lambda\sigma}F^{\lambda\sigma}$$ It's identical to that for flat space, but $\eta_{\mu\nu}$ is replaced by the components of an arbitrary metric $g_{\mu\nu}$. It is discussed briefly here.
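As a concrete check of the flat-space formula, it can be evaluated numerically for a sample field: the trace $T^\mu{}_\mu$ vanishes and $T_{00}$ reproduces the familiar energy density $(E^2+B^2)/2$. This is only a sketch in the answer's units; the field values and the sign/index conventions (signature $(-,+,+,+)$, $F_{0i}=E_i$) are arbitrary choices, not part of the original answer:

```python
import numpy as np

# Minkowski metric, signature (-, +, +, +); units eps0 = mu0 = 1 as above
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# Sample field-strength tensor F_{mu nu}: E along x, B along z (arbitrary values)
E, B = 1.0, 2.0
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = E, -E   # F_{0i} = E_i (one common convention)
F[1, 2], F[2, 1] = B, -B   # F_{12} = eps_{12k} B_k = B_z

# The invariant F_{lambda sigma} F^{lambda sigma}, indices raised with eta^{-1}
invariant = np.einsum('la,sb,ls,ab->', eta_inv, eta_inv, F, F)

# T_{mu nu} = F_{mu lambda} F_nu^lambda - (1/4) eta_{mu nu} F_{ls} F^{ls}
T = F @ eta_inv @ F.T - 0.25 * eta * invariant
```

For a curved spacetime one would substitute $g_{\mu\nu}$ and its inverse for `eta` and `eta_inv`; the local form of the expression is unchanged, exactly as the answer states.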
{ "domain": "physics.stackexchange", "id": 94398, "tags": "general-relativity, stress-energy-momentum-tensor, redshift" }
Python generator function that yields combinations of elements in a sequence sorted by subset order
Question: In Python, itertools.combinations yields combinations of elements in a sequence sorted by lexicographical order. In the course of solving certain math problems, I found it useful to write a function, combinations_by_subset, that yields these combinations sorted by subset order (for lack of a better name). For example, to list all 3-length combinations of the string 'abcde':

>>> [''.join(c) for c in itertools.combinations('abcde', 3)]
['abc', 'abd', 'abe', 'acd', 'ace', 'ade', 'bcd', 'bce', 'bde', 'cde']
>>> [''.join(c) for c in combinations_by_subset('abcde', 3)]
['abc', 'abd', 'acd', 'bcd', 'abe', 'ace', 'bce', 'ade', 'bde', 'cde']

Formally, for a sequence of length \$n\$, we have \$\binom{n}{r}\$ combinations of length \$r\$, where \$\binom{n}{r} = \frac{n!}{r! (n - r)!}\$ The function combinations_by_subset yields combinations in such an order that the first \$\binom{k}{r}\$ of them are the r-length combinations of the first k elements of the sequence. In our example above, the first \$\binom{3}{3} = 1\$ combination is the 3-length combination of 'abc' (which is just 'abc'); the first \$\binom{4}{3} = 4\$ combinations are the 3-length combinations of 'abcd' (which are 'abc', 'abd', 'acd', 'bcd'); etc. My first implementation is a simple generator function:

def combinations_by_subset(seq, r):
    if r:
        for i in xrange(r - 1, len(seq)):
            for cl in (list(c) for c in combinations_by_subset(seq[:i], r - 1)):
                cl.append(seq[i])
                yield tuple(cl)
    else:
        yield tuple()

For fun, I decided to write a second implementation as a generator expression and came up with the following:

def combinations_by_subset(seq, r):
    return (tuple(itertools.chain(c, (seq[i], )))
            for i in xrange(r - 1, len(seq))
            for c in combinations_by_subset(seq[:i], r - 1)) if r else (() for i in xrange(1))

My questions are: Which function definition is preferable? (I prefer the generator function over the generator expression because of legibility.) Are there any improvements one could make to the above algorithm/implementation? Can you suggest a better name for this function? Answer: Rather than converting from tuple to list and back again, construct a new tuple by adding to it.

def combinations_by_subset(seq, r):
    if r:
        for i in xrange(r - 1, len(seq)):
            for cl in combinations_by_subset(seq[:i], r - 1):
                yield cl + (seq[i],)
    else:
        yield tuple()
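For anyone reading this on Python 3, where xrange is gone, the accepted suggestion reads as follows (a direct port, nothing else changed):

```python
def combinations_by_subset(seq, r):
    """Yield r-length combinations of seq ordered so that the first C(k, r)
    results use only the first k elements (Python 3 port of the answer's code)."""
    if r:
        for i in range(r - 1, len(seq)):
            for c in combinations_by_subset(seq[:i], r - 1):
                yield c + (seq[i],)   # extend the tuple; no list round-trip
    else:
        yield ()

result = [''.join(c) for c in combinations_by_subset('abcde', 3)]
```

This reproduces the subset ordering from the question: `['abc', 'abd', 'acd', 'bcd', 'abe', 'ace', 'bce', 'ade', 'bde', 'cde']`.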
{ "domain": "codereview.stackexchange", "id": 177, "tags": "python, algorithm, generator, combinatorics" }
Asymptotic freedom significance
Question: So I have read a bit on this, and get the idea and mathematical machinery leading up to this. I get that it sheds light on the relationship between coupling strengths and length scales. Can someone tell me the "deep insight", or some other consequence of this, that commanded a Nobel prize? Are there other physical implications I am missing? Answer: It wasn't just "a" Nobel prize; it was one of the most well-deserved Nobel prizes in history. All interactions (forces) between particles known before this discovery of QCD had the property that their strength was increasing at shorter distances at a slightly faster rate than the $1/r^2$ classical law. However, asymptotic freedom means that the charge carried by the quarks and gluons – their "color" – is actually getting weaker, not stronger, as we go to ever shorter distances. This is due to the "negative beta-function" of QCD; it was the first theory known to have a beta-function of negative sign. This sign has many implications. One of them is that with a resolution great enough to see inside the proton and similar particles, quarks behave just like free particles. Protons have three "hard seeds" inside, very analogous to the nucleus inside the atom itself. This consequence of QCD had been known previously from experiments with "deep inelastic scattering". QCD totally explained these experiments. Another consequence is that "confinement" is pretty much the other side of the coin called "asymptotic freedom" (not quite, but close). If the interaction gets weaker at short distances, it becomes stronger at longer distances, and that's why the individual quarks are confined: they don't exist in isolation. That explains why they were never separated from each other. Consequently, QCD allowed the theory of quarks (or partons) to become acceptable. The theory of quarks had previously looked like a bookkeeping device to organize hadrons into groups etc. Suddenly, it became clear that those particles were "really" composed of quarks.
Hundreds of particles similar to the proton – hadrons – could suddenly be explained as composites of quarks (and gluons). The asymptotically free "strong force" acting between quarks and gluons is now described by QCD, an $SU(3)$ Yang-Mills theory, and it is one of the four basic interactions that explain all processes we know in the Universe. The other ones are electromagnetism, gravity, and the weak nuclear interaction. The discovery of asymptotic freedom was needed to make the strong force compatible with the experimentally known conditions on the would-be force between quarks. It's pretty much the most important property of the force, so the discoverers of this property may be considered the discoverers of the theory behind the strong force itself. In some counting, it is 1/4 of all of fundamental physics. Asymptotic freedom also means that QCD becomes free and totally consistent at very short distances. Unlike QED, the theory is defined without any problems even if we consider arbitrarily high energies.
{ "domain": "physics.stackexchange", "id": 11482, "tags": "renormalization" }
Can I use the robot_localization package with only IMU data? If so, how do I configure the package?
Question: I want to fuse IMU and GPS data with the robot_localization package on an autonomous car, but there will be times when I cannot obtain GPS data, and the IMU is the only source I can use to calculate position. I have a high-precision IMU, but the error is huge when I use robot_localization without GPS. So my question is: can I use the robot_localization package with only IMU data? If so, how do I configure the package correctly? Here is my launch file:

<launch>
  <node pkg="tf2_ros" type="static_transform_publisher" name="bl_imu" args="0 0 0 0 0 0 1 base_link imu_link" />
  <node pkg="tf2_ros" type="static_transform_publisher" name="bl_gps" args="0 0 0 0 0 0 1 imu_link gps_link"/>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization_global" clear_params="true">
    <param name="frequency" value="30"/>
    <param name="sensor_timeout" value="2"/>
    <!--param name="map_frame" value="map"/-->
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value=" base_link"/>
    <param name="world_frame" value="odom"/>
    <param name="imu0" value="/imuData"/>
    <rosparam param="imu0_config">[false, false, false,
                                   true,  true,  true,
                                   false, false, false,
                                   true,  true,  true,
                                   true,  true,  true]</rosparam>
    <param name="imu0_differential" value="true"/>
    <param name="imu0_remove_gravitational_acceleration" value="true"/>
    <param name="two_d_mode" value="true"/>
    <param name="odom0" value="/odometry/gps"/>
    <rosparam param="odom0_config">[true,  true,  true,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false]</rosparam>
  </node>
  <node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" respawn="true" output="screen">
    <param name="magnetic_declination_radians" value="0.157"/>
    <param name="yaw_offset" value="1.570796327"/>
    <remap from="/imu/data" to="/imuData"/>
    <remap from="/gps/fix" to="/gpsData"/>
  </node>
  <node pkg="rviz" type="rviz" name="rviz_imu_gps_fuse" output="screen" args="-d $(find robot_localization)/robot_localization.rviz"/>
</launch>

Originally posted by Nico.Zhou on ROS Answers with karma: 13 on 2016-04-25 Post score: 1 Original comments Comment by Subodh Malgonde on 2018-07-25: Thanks to your post I have better clarity on creating the launch file. I could not find the robot_localization.rviz file in the robot_localization repository. Can you guide me as to how to create this file? Answer: You can use it with just an IMU, yes, but until I add bias correction, I wouldn't expect great results. Even after bias correction has been added, I wouldn't want to have only an IMU for pose estimation. Double integration of linear acceleration is going to lead to drift. Originally posted by Tom Moore with karma: 13689 on 2016-05-09 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Nico.Zhou on 2016-05-16: Thanks for your answer Tom. I'm thinking about fusing visual odometry; hope I can get a good result. Comment by SprGrf on 2016-09-01: Hello. I just started using the robot_localization package only with an IMU. I am using the launch file Nico provided, but I am getting zeros both in position and linear twist (on /odometry/filtered). Any ideas about what I am doing wrong? Is the navsat node necessary? Thank you for your time. Comment by vik748 on 2018-01-29: @SprGrf I am trying to do the same thing, IMU-only navigation. Were you able to figure this out? Comment by JamesS on 2021-04-09: I would really like to implement robot_localization with only an IMU, but I have yet to get it to produce any position estimates from my IMU input. Is the launch file above usable to achieve a position estimate from just IMU data, or is there a source somewhere where a working launch file with only an IMU as the sensor source can be found? Comment by riteshgoru on 2021-04-16: Hello @JamesS, were you able to achieve robot_localization with only the IMU data? If yes, can you post a link to your launch file?
Comment by JamesS on 2021-04-16: @riteshgoru I did manage to get it to work but, as many have mentioned, the drift is terrible. The way I got it to work wasn't by altering the launch file; in fact I used the default template one. The trick was writing a script that read in the IMU linear acceleration and integrated it to get a rough linear velocity. I then put those values into a nav_msgs/Odometry message under linear twist and published that to the required odom sensor in robot_localization, and the package immediately started estimating position. Hope that helps :) Comment by Astronaut on 2021-09-26: I have the same problem, except I don't have GPS. I have an IMU as the only sensor input. Can I use the raw IMU data (orientation, angular velocity and linear acceleration) as sensor input directly, or do I have to read the raw IMU data and integrate it to get a rough linear velocity?
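The workaround JamesS describes (integrating linear acceleration into a rough velocity before handing it to robot_localization) can be sketched without the ROS plumbing. This is a hypothetical helper, not code from the thread; a real node would subscribe to sensor_msgs/Imu and publish the result in the twist field of a nav_msgs/Odometry message:

```python
class ImuVelocityEstimator:
    """Integrate IMU linear acceleration into a rough linear velocity.

    Simple forward-Euler integration: v += a * dt per axis. As the answer
    warns, the error of this estimate grows without bound (drift), which is
    why an IMU-only setup performs poorly.
    """

    def __init__(self):
        self.velocity = [0.0, 0.0, 0.0]
        self.last_stamp = None

    def update(self, stamp, accel):
        """Feed one (timestamp [s], (ax, ay, az)) sample; return velocity."""
        if self.last_stamp is not None:
            dt = stamp - self.last_stamp
            self.velocity = [v + a * dt for v, a in zip(self.velocity, accel)]
        self.last_stamp = stamp
        return tuple(self.velocity)
```

In a ROS node, each `update()` result would be copied into `odom_msg.twist.twist.linear` before publishing on the topic that the EKF's `odom0` parameter points at.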
{ "domain": "robotics.stackexchange", "id": 24456, "tags": "imu, navigation, gps, robot-localization" }
Finding largest subgraph that contains a given edge and admits a cycle cover
Question: I am wondering whether there exists a fast algorithm for the following problem. Given a digraph $G$, possibly with loops (that is, edges that begin and end at the same vertex), and the choice of an edge $e \in G$, what is the largest (in terms of number of vertices) subgraph of $G$ that contains $e$ and admits a vertex-disjoint cycle cover? ps: sorry for the crosspost, but I originally posted at stackoverflow and was directed here for a possibly more appropriate crowd... Answer: Deterministically, you can do it as follows: Give your edge $e$ the weight $n+1$ and add a self-loop of weight zero to each node that does not have one so far. Existing edges get weight one. By removing the weight-zero loops, a maximum-weight cycle cover in the new graph induces a partial cycle cover in the old graph that (1) contains your edge $e$ and (2) has a maximum number of edges. The vertices of the partial cycle cover induce the graph you are looking for. If the cycle cover in the new graph does not contain $e$ at all, then there is no such subgraph. You can compute a maximum-weight cycle cover by reducing it to maximum-weight perfect matching.
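The reduction can be sketched with SciPy's assignment solver: a maximum-weight cycle cover of a digraph is exactly a maximum-weight assignment of rows (tails) to columns (heads), since an assignment's functional graph decomposes into vertex-disjoint cycles. The function name and the NO_EDGE sentinel below are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def largest_subgraph_with_cycle_cover(n, edges, e):
    """Answer's reduction: weight e with n+1, real edges with 1, pad missing
    self-loops with weight 0, then take a max-weight cycle cover (assignment).

    n     -- number of vertices, labelled 0..n-1
    edges -- set of directed edges (u, v); loops allowed
    e     -- the edge (u, v) that must appear in the cover
    Returns the vertex set of the largest subgraph containing e that admits a
    vertex-disjoint cycle cover, or None if no such subgraph exists.
    """
    NO_EDGE = -1e9                      # effectively minus infinity
    w = np.full((n, n), NO_EDGE)
    for (u, v) in edges:
        w[u, v] = 1.0
    w[e[0], e[1]] = n + 1.0             # heavier than all other edges combined
    added_loops = set()
    for v in range(n):
        if (v, v) not in edges:         # pad with a weight-zero self-loop
            w[v, v] = 0.0
            added_loops.add(v)
    rows, cols = linear_sum_assignment(w, maximize=True)
    if cols[e[0]] != e[1]:
        return None                     # no maximum cover contains e
    # A padded loop is always its own 1-cycle, so a vertex is covered by real
    # edges exactly when its matched out-edge is not a padded loop.
    return {u for u, v in zip(rows, cols) if not (u == v and u in added_loops)}
```

Because $e$ carries weight $n+1$ and all other real edges together weigh at most $n$, any assignment containing $e$ beats any that omits it, which is what forces $e$ into the cover.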
{ "domain": "cstheory.stackexchange", "id": 1895, "tags": "graph-algorithms" }
External Lines in Feynman Rules
Question: From my professor's lecture: The Photon field is defined as: $$A_{\mu}(x)= \sum_{s=1,2}\int \frac{d^3 \vec{p}}{{(2 \pi)^{3}\sqrt{2 E_{p}}}}\left(\epsilon^{s}_{\mu}(p) a^s_{\hspace{0.05cm}\vec{p}}e^{-ip^{\nu}x_{\nu}}+\epsilon^{*s}_{\mu}(p)a^{\dagger s}_{\hspace{0.05cm}\vec{p}}e^{ip^{\nu}x_{\nu}}\right)$$ with the commutation relations: $$[a^s_{\hspace{0.05cm}\vec{p}},a^{\dagger r}_{\hspace{0.05cm}\vec{k}}]=(2 \pi)^{3}\delta^{sr}\delta{\left(\vec{p}-\vec{k}\right)}.$$ The Feynman rules for external lines can be obtained by acting with the field $A_{\mu}$ on the initial and final state particles: Incoming photon ~~~~• :$\hspace{2.5 cm}$ $A_{\mu}|\gamma(k,r)\rangle=\epsilon^{r}_{\mu}(k)$ Outgoing photon •~~~~ :$\hspace{2.5 cm}$ $\langle\gamma(k,r)|A_{\mu}=\epsilon^{*r}_{\mu}(k)$ Where we used the commutation relations, the expression of the field and the relation: $$|\gamma(k,r)\rangle = \sqrt{2E_{k}} a^{\dagger r}_{\hspace{0.05cm}\vec{k}}|0\rangle$$ Then the lecture continues with the same thing for a fermionic and a scalar field; so far so good, but there are three things that I don't understand: From what I see we use just "half" of the expression of the field: $$A_{\mu}(x)|\gamma(k,r)\rangle= \sum_{s=1,2}\int \frac{d^3 \vec{p}\sqrt{2E_{k}}}{{(2 \pi)^{3}\sqrt{2 E_{p}}}}\epsilon^{s}_{\mu}(p)e^{-ip^{\nu}x_{\nu}}a^s_{\hspace{0.05cm}\vec{p}} a^{\dagger r}_{\hspace{0.05cm}\vec{k}}|0\rangle$$ But not the additional term with $a^{\dagger s}_{\hspace{0.05cm}\vec{p}} a^{\dagger r}_{\hspace{0.05cm}\vec{k}}$. And the same for the outgoing photon for the term with both annihilation operators. I think this is because one term ($a^{\dagger s}_{\hspace{0.05cm}\vec{p}}$) of the field creates a photon while the other ($a^{s}_{\hspace{0.05cm}\vec{p}}$) annihilates one, but then shouldn't the notation $A_{\mu}|\gamma(k,r)\rangle$ be wrong?
And the same for the premise "The Feynman rules for external lines can be obtained by acting with the field $A_{\mu}$ on the initial and final state particles"? Redoing the same calculation I actually obtain an additional prefactor: $$A_{\mu}|\gamma(k,r)\rangle=\epsilon^{r}_{\mu}(k)e^{-ik^{\nu}x_{\nu}}$$ $\hspace{0.75cm}$ And $e^{ik^{\mu}x_{\mu}}$ for the outgoing particle. Why are these prefactors neglected? When we consider the S-matrix element we write: $\langle i|S|f \rangle$ (with i the initial state and f the final state). Now, since the incoming photon ~~~~• is an initial state, why do we use the ket $|\gamma\rangle$ and not the bra $\langle\gamma|$, which would lead to: ~~~~•$=\langle\gamma(k,r)|A_{\mu}=\epsilon^{*r}_{\mu}(k)$ (and vice versa for the outgoing one)? Answer: What you are really supposed to calculate is $\langle 0 |A_\mu(x) | \gamma(k,\epsilon) \rangle$. This gets rid of your second term of the form $a^\dagger a^\dagger$. The pre-factor $e^{-ik\cdot x}$ has been absorbed into the remaining parts of the Feynman diagram. To see how this works you should really properly work out the Feynman rules starting from position space, then do all the Wick contractions, then Fourier transform to momentum space. Through this process keep track of the $e^{-ik\cdot x}$ and see what happens to it. The $S$-matrix is $\langle f | S | i \rangle$, NOT $\langle i | S | f \rangle$.
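For completeness, the step behind the answer's first point can be written out in the conventions above (a short sketch, not part of the original answer). Using the stated commutator together with $a^s_{\hspace{0.05cm}\vec{p}}|0\rangle = 0$,

$$a^s_{\hspace{0.05cm}\vec{p}}\, a^{\dagger r}_{\hspace{0.05cm}\vec{k}}|0\rangle = \left([a^s_{\hspace{0.05cm}\vec{p}}, a^{\dagger r}_{\hspace{0.05cm}\vec{k}}] + a^{\dagger r}_{\hspace{0.05cm}\vec{k}}\, a^s_{\hspace{0.05cm}\vec{p}}\right)|0\rangle = (2\pi)^{3}\delta^{sr}\delta{\left(\vec{p}-\vec{k}\right)}|0\rangle,$$

so that, once the $\langle 0|$ projection kills the $a^{\dagger}a^{\dagger}$ term,

$$\langle 0|A_{\mu}(x)|\gamma(k,r)\rangle = \sum_{s=1,2}\int \frac{d^3 \vec{p}\,\sqrt{2E_{k}}}{(2 \pi)^{3}\sqrt{2 E_{p}}}\,\epsilon^{s}_{\mu}(p)\, e^{-ip^{\nu}x_{\nu}}\,(2\pi)^{3}\delta^{sr}\delta{\left(\vec{p}-\vec{k}\right)} = \epsilon^{r}_{\mu}(k)\, e^{-ik^{\nu}x_{\nu}},$$

which is exactly the result with the prefactor the question asks about.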
{ "domain": "physics.stackexchange", "id": 80744, "tags": "quantum-field-theory, quantum-electrodynamics, feynman-diagrams" }
Making sense of the Sycamore's computing prowess - power consumption
Question: I came here after reading about the announcement regarding the Sycamore processor and Google's Quantum Supremacy claim. I am hung up on several key things and I am hoping that I can find those answers here, or at least direction to the answers. First, I am trying to make sense of the comparison, i.e. Sycamore is a 53-qubit processor (54, but 1 is not functional). In the paper itself (doi: 10.1038/s41586-019-1666-5), the caption of Figure 4 reads "...whereas an equal-fidelity classical sampling would take 10,000 years on a million cores, and verifying the fidelity would take millions of years". I assume this "a million cores" refers to IBM Summit at Oak Ridge, right? My actual questions revolve around the power consumption. I assume it takes a lot of power to use that million cores to simulate a quantum state; how much power did Sycamore consume to perform the same task? In other words, how much power did Sycamore use, along with its other auxiliary/peripheral devices, for that span of 200 seconds vs. how much power Summit would theoretically use in the span of 2.5 days (IBM's counter-claim), if not 10,000 years (Google's claim)? Second question with regard to power consumption: as a percent fraction, how much power does Sycamore itself consume, and how much goes to the dilution refrigerator? Thank you! Answer: They say in Section X.H of the supplement that the Summit supercomputer has a power capacity of 14 megawatts. They compare that to their own setup. Their power consumption is mainly their dilution fridge, which they say is about 10 kilowatts, plus about another 10 for chilled water for its supporting equipment. Their own supporting PCs and other electronics are about 3 kilowatts, they say. They give themselves a total power budget of 26 kilowatts, tops. I would say that the comparison is so one-sided that a precise estimate seems a bit silly. Maybe you could reduce the estimate with free chilled water if the experiment were done in Alaska.
Or maybe you could up the estimate by including the electricity to heat lunch for the coauthors. Addendum: I guess there is a more serious answer based on the second question in the post, related to my joke second paragraph. Aizan asks how much power Sycamore itself consumes, vs. all of the other equipment around it. Indeed, most of the power consumption of the comparison Summit supercomputer in Oak Ridge is from the core activity of the computer itself, from the trillions of transistors and the wires between them as they carry electrical current. Moreover, carrying current is unavoidable when a classical electronic computer changes state. Some of the power budget might go to air-conditioning the computer and the data center to release all of the computer's heat to the outside. That is a serious extra power requirement that can be called peripheral, but it is not the main power cost. However, all of the listed power budget for the Sycamore processor is for peripheral equipment, especially but not only the dilution refrigerator. For several reasons, the power consumption of the Sycamore chip itself is negligible. One reason is that Sycamore is only 53 qubits, which for this question is a lot like 53 bits. (How much power do you need for an old Z80 chip?) Another reason is more interesting: in a natural sense, a perfect quantum circuit always operates at zero temperature and never draws any power! Unitary quantum circuits are reversible, both in the sense of computation and in the sense of physics. In the sense of physics, that means that no heat is generated and no energy is wasted. This was indeed part of the original motivation for quantum computers in the old papers of Feynman, at a time when people could only vaguely guess that the same model might also lead to superior algorithms.
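Taking the quoted figures at face value, a back-of-envelope energy comparison is easy to write down (a sketch assuming Summit draws its full 14 MW capacity for IBM's claimed 2.5 days; actual draw would be lower):

```python
# Numbers quoted in the answer and the question
summit_power_w = 14e6      # 14 MW capacity (supplement, Sec. X.H)
sycamore_power_w = 26e3    # ~26 kW total budget (fridge + chiller + electronics)

summit_runtime_s = 2.5 * 24 * 3600   # IBM's 2.5-day counter-claim
sycamore_runtime_s = 200.0           # Google's sampling time

summit_energy_j = summit_power_w * summit_runtime_s
sycamore_energy_j = sycamore_power_w * sycamore_runtime_s

ratio = summit_energy_j / sycamore_energy_j   # roughly 6e5
```

Even under IBM's favorable 2.5-day scenario the energy gap is around half a million to one, which is the sense in which the answer calls a precise estimate "a bit silly".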
{ "domain": "quantumcomputing.stackexchange", "id": 1030, "tags": "quantum-advantage, google-sycamore" }
IMU Message Error
Question: Hi All, I thought I would try and generate an IMU message for the gyro that's on my robot to feed into the laser_scan_matcher (to be honest, as it's quite slow to update, I'm not sure if it'll be of much use). Anyway, the LSM is complaining with the error "MSG to TF: Quaternion Not Properly Normalized", so I guess I have messed the message up. The gyro only measures rotation, so to that end I have set the covariance matrix's zeroth element to -1 for the angular_velocity_covariance and linear_acceleration_covariance parts (as per http://docs.ros.org/api/sensor_msgs/html/msg/Imu.html). The message I'm outputting is (338 degrees):

header:
  seq: 3734
  stamp:
    secs: 1459073468
    nsecs: 76810552
  frame_id: imu_msg
orientation:
  x: 5.89921283722
  y: 0.0
  z: 0.0
  w: 0.0
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: 0.0
  y: 0.0
  z: 0.0
angular_velocity_covariance: [-1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: 0.0
  y: 0.0
  z: 0.0
linear_acceleration_covariance: [-1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---

What do I need to correct? Many thanks, Mark Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2016-03-27 Post score: 0 Answer: Your problem is that the "Quaternion [is] Not Properly Normalized". A quaternion (your rotation) must have a norm of 1: x**2 + y**2 + z**2 + w**2 == 1, which is obviously not true for your quaternion. You can use these functions to create a valid quaternion from RPY: http://wiki.ros.org/tf/Overview/Data%20Types Originally posted by NEngelhard with karma: 3519 on 2016-03-27 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by MarkyMark2012 on 2016-03-27: Thanks - what do I call to get the normalized component parts back, i.e. so I can set:

tf::Quaternion q = tf::createQuaternionFromRPY(degree_to_radian(pRobotData->GyroOrientation), 0, 0);
imu_msg.orientation.x = q.x;
imu_msg.orientation.y = q.y;
imu_msg.orientation.z = q.z;

etc...
Comment by NEngelhard on 2016-03-27: That looks ok. What kind of robot do you have? I would have expected that you'd measure the yaw angle. Comment by MarkyMark2012 on 2016-03-27: This was the compilation error:

error: cannot convert ‘tf::QuadWord::x’ from type ‘const tfScalar& (tf::QuadWord::)() const {aka const double& (tf::QuadWord::)() const}’ to type ‘geometry_msgs::Quaternion_std::allocator<void >::_x_type {aka double}’
imu_msg.orientation.x = q.x;

Comment by MarkyMark2012 on 2016-03-27: The imu message is defined as: sensor_msgs::Imu imu_msg; Comment by NEngelhard on 2016-03-27: If you read the message carefully, you could find the "aka const double& (tf::QuadWord::)() const". (Note the '()'.) q.x is a function that returns the x component, so you have to write imu_msg.orientation.x = q.x(); (Check L74 in http://docs.ros.org/jade/api/tf/html/c++/QuadWord_8h_source.html) Comment by MarkyMark2012 on 2016-03-27: Good point :) Thanks for your help!
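For a single-axis gyro, the normalized quaternion can also be built by hand, which makes the unit-norm constraint visible. A sketch (equivalent to createQuaternionFromRPY(0, 0, yaw) for a pure z-rotation; it assumes the gyro angle is a yaw, as NEngelhard suggests, whereas the snippet above put it in the roll slot):

```python
import math

def quaternion_from_yaw(yaw_rad):
    """Unit quaternion (x, y, z, w) for a rotation about the z-axis.

    For a rotation by angle theta about axis (0, 0, 1):
      (x, y, z, w) = (0, 0, sin(theta/2), cos(theta/2)),
    so x^2 + y^2 + z^2 + w^2 = 1 automatically.
    """
    half = 0.5 * yaw_rad
    return (0.0, 0.0, math.sin(half), math.cos(half))
```

Note that the invalid message above instead stuffed the raw angle in radians (5.899... for 338 degrees) straight into the x field, which is why its norm is nowhere near 1.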
{ "domain": "robotics.stackexchange", "id": 24255, "tags": "imu" }
Time complexity of a tree-based algorithm
Question: I solved a practice interview problem that was sent to me by the Daily Coding Problem mailing list. I am now curious about the exact time complexity of my solution. Problem Statement Given the mapping a = 1, b = 2, ... z = 26, and an encoded message, count the number of ways it can be decoded. For example, the message '111' would give 3, since it could be decoded as 'aaa', 'ka', and 'ak'. You can assume that the messages are decodable. For example, '001' is not allowed. I made an assumption that my solution should accept any type of mapping, not just the one mentioned in the question. So the running time of the algorithm is parametrized by the encoded message length and the type of mapping used. Attempted Solution

class Node:
    def __init__(self, val):
        self.val = val
        self.children = []

    def add_child(self, child):
        self.children.append(child)

def count_encodings(cipher, mapping):
    # what is the longest possible map value
    longest_val = max([len(str(x)) for x in mapping.values()])
    root = Node('')
    create_cipher_subtree(root, cipher, longest_val, mapping)
    return count_leaves(root)

def create_cipher_subtree(node, cipher, longest_val, mapping):
    for part_len in range(1, min(longest_val, len(cipher)) + 1):
        curr_part = cipher[:part_len]
        if curr_part in mapping.values():
            child = Node(curr_part)
            node.add_child(child)
            remaining_part = cipher[part_len:]
            if remaining_part:
                create_cipher_subtree(child, remaining_part, longest_val, mapping)

def count_leaves(node):
    if not node.children:
        return 1
    count = 0
    for child in node.children:
        count += count_leaves(child)
    return count

We can then reproduce the example solution as follows (note the enumeration starts at 1, so that a = 1 as the problem states):

from string import ascii_lowercase

mapping = {k: str(v) for v, k in enumerate(ascii_lowercase, 1)}
cipher = '111'
print(count_encodings(cipher, mapping))

In short, this solution constructs a tree like this:

''
  '1'
    '1'
      '1'
    '11'
  '11'
    '1'

Then the number of leaf nodes is counted.
Explanation First, the algorithm checks all possible values in the mapping and records the length of the longest value (longest_val). We then create a tree, where each node's val field is a part of the encoded message (cipher) that corresponds to a single mapping value; the root is the only node whose val is the empty string. Concatenation of the nodes' val fields along the path from the root to a leaf is one possible way of encoding. The tree is created as follows:

1. Check if the first character of cipher can be interpreted as a mapping value.
2. If yes, create a node with that value recorded and make it a child of root. Then, pass the rest of the remaining_part of the encoded message (everything past the first character) to the child and repeat the process from there.
3. Check if the first two characters of cipher can be interpreted as a mapping value. Repeat step 2, but now val is two characters. The remaining_part would be everything in the encoded message past the first two characters. This would create another child node of root.
4. If longest_val were 3, we would then check if the first 3 characters of cipher can be interpreted as a mapping value... And so on.

After the tree is created, we count the number of leaf nodes, which corresponds to the number of possible messages that can produce the provided encoding. Complexity Analysis I know that creating the tree of all possible ways the message could have been encoded might have been overkill for this problem (in terms of space use), but doing it this way helped me better reason about the solution. I am now unsure about the exact relation between a mapping choice and message length, and the answer to the question. What is the time complexity of this solution? I am more interested in how various mappings affect the complexity. E.g. if some of the values in the mapping were 3-digit numbers, then many nodes in the tree would have 3 children. Does this increase the complexity of the algorithm?
How would one capture this fact when writing a Big-Oh expression for the algorithm? Answer: You can show a simple exponential lower bound for your algorithm as follows. Assume we have a $d$-digit encoding ($d = 2$ in your initial example). For any cipher text of length $n$ we can create a worst-case encoding of $n$ 1's. Now we divide the input into $n/d$ segments of length $d$. For any of these segments, we have $2^{d-1}$ ways of decoding it. Take $d = 4$ for example; with a segment "1111" we have the following decodings:

"1111"
"1", "111"
"1", "1", "11"
"1", "1", "1", "1"
"1", "11", "1"
"11", "11"
"11", "1", "1"
"111", "1"

Now we can decode all of these $n/d$ segments separately, so we can multiply this together to get: $$(2^{d-1})^{n/d} = 2^{(d-1)n/d}$$ You can get a tighter bound for this, however, by noting that the number of decodings for a string of ones is: $$f(n) = \begin{cases} 1 & n = 1\\ 2 & n = 2\\ \vdots & \vdots\\ 2^{d-1} & n = d\\ \sum_{i = 1}^d f(n - i) & n > d \end{cases}$$ If we plug in $d=2$ we actually see something familiar: $$f(n) = \begin{cases} 1 & n = 1\\ 2 & n = 2\\ f(n-1) + f(n-2) & n > 2 \end{cases}$$ This is the Fibonacci sequence! This, we know, is exponential in $n$. If you try $d=3$ you get the Tribonacci sequence. As you increase $d$ you simply get higher-order Fibonacci sequences. All of these will be exponential in the worst case.
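The recurrence is easy to check numerically. A sketch (the base case is phrased as f(0) = 1, "the empty suffix decodes in one way", which reproduces the answer's values f(1) = 1, f(2) = 2, ..., f(d) = 2^(d-1)):

```python
from functools import lru_cache

def decodings_of_ones(n, d):
    """f(n) from the answer: the number of ways to split a run of n ones
    into pieces of length 1..d, each piece being one code word."""
    @lru_cache(maxsize=None)
    def f(m):
        if m == 0:
            return 1   # empty suffix: exactly one way
        return sum(f(m - i) for i in range(1, min(d, m) + 1))
    return f(n)
```

With d = 2 this produces 1, 2, 3, 5, 8, ... (Fibonacci), with d = 3 it produces 1, 2, 4, 7, 13, ... (Tribonacci), and for a single segment of length d it gives the 2^(d-1) count used in the lower bound.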
{ "domain": "cs.stackexchange", "id": 13646, "tags": "complexity-theory, time-complexity, trees" }
Bar plot with varying length
Question: Hello folks, I am trying to plot a grouped bar plot of two variables with varying lengths, meaning the x and y arrays of the two variables have different lengths. The data (shown as images in the original post) consist of one table for NRI and one for RI. I want these two datasets to be grouped together. When I try to plot them, the two datasets overlap each other. If anyone can help me in this regard it will be much appreciated. Here is the code I used:

import numpy as np
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import matplotlib.path as mpath
from PIL import *
import os
import sys
import csv
from matplotlib import rc, rcParams
import pandas as pd
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter, AutoMinorLocator)

#df = pd.read_csv('E:/v.csv')
df = pd.read_excel('IC12_freq.xlsx', sheet_name='NRI')
df_1 = pd.read_excel('IC12_freq.xlsx', sheet_name='RI')
x = df.IC12.values
x1 = df_1.IC12.values
y = df.FRQ.values
y1 = df_1.FRQ.values

fig, ax = plt.subplots(figsize=(10, 10))
ax.bar(x + 1.3, y, color='w', width=1.3, hatch='***', edgecolor='k', label='NRI', align='center')
#ax.twinx()
ax.bar(x1, y1, color='w', width=1.3, hatch='/////', edgecolor='k', label='RI', align='center')
#ax.plot(x, y, 'ro', color='k')
#ax.plot(x1, y1, 'ro', color='r')
ax.xaxis.set_major_locator(MultipleLocator(10))
ax.xaxis.set_major_formatter(FormatStrFormatter('%d'))
ax.xaxis.set_minor_locator(MultipleLocator(5))
ax.xaxis.set_minor_formatter(FormatStrFormatter('%d'))
# For the minor ticks, use no labels; default NullFormatter.
ax.yaxis.set_major_locator(MultipleLocator(10))
ax.yaxis.set_major_formatter(FormatStrFormatter('%d'))
ax.yaxis.set_minor_locator(MultipleLocator(5))
ax.yaxis.set_minor_formatter(FormatStrFormatter('%d'))

plt.rcParams["font.weight"] = "bold"
axis_font = {'fontname': 'Arial', 'size': '14'}
tick_font = {'fontname': 'Arial', 'size': '5'}
ax.tick_params(which='both', width=2)
ax.tick_params(which='major', length=7)
ax.tick_params(which='minor', length=7)
plt.xlabel('IC12(kt)', fontweight='bold', **axis_font)
#plt.xticklabel(**tick_font)
plt.ylabel('Frequency(%)', fontweight='bold', **axis_font)
ax.set_facecolor("#f1f1f1")
ax.spines['top'].set_linewidth(1.5)
ax.spines['right'].set_linewidth(1.5)
ax.spines['bottom'].set_linewidth(1.5)
ax.spines['left'].set_linewidth(1.5)
leg = ax.legend()
#plt.grid(True)
#plt.style.use('ggplot')
##plt.xlabel("x axis", **axis_font)
#plt.ylabel("y axis", **axis_font)
#plt.bar(y,x)
#plt.savefig('IC12_frq.tif', bbox_inches='tight', dpi=300)
plt.show()

Answer: I would first decide to bin the x-axis so that it can be plotted in groups. If, for example, we want to group the values into bins of width 5 and then plot them next to each other, we would do something like this:

width_box = 5
low = -40
high = 40
width_graph = 1.5  # should be at most width_box/2 to not overlap

def sum_in_range(df, min_, max_):
    return df[(df['IC12'] < max_) & (df['IC12'] >= min_)].FRQ.sum()

indices = np.array(range(low, high, width_box))
sum1 = [sum_in_range(df, i, i + width_box) for i in indices]
sum2 = [sum_in_range(df_1, i, i + width_box) for i in indices]

fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(indices, sum1, width_graph, color='b', label='-Ymin')
ax.bar(indices + width_graph, sum2, width_graph, color='r', label='Ymax')
ax.set_xlabel('Test histogram')
plt.show()

This gives the grouped bars shown in the original answer. You can play around with the width_graph parameter to change how wide the bars look, and you can change width_box to group up the x-axis into different ranges.
{ "domain": "datascience.stackexchange", "id": 7469, "tags": "python, pandas, matplotlib" }
How to translate a virtual Bitvector B to an integer array W | 32-bit integer array
Question: Can somebody please explain to me how one can calculate W = 2762827861, 1991065600 from the previous representation B? An example calculation would be great. I am unable to figure out how the translation from the second-to-last to the last row works. The figure is dealing with elements of fixed size. In the first row you simply see an array of ten (n=10) elements A[i] with i=1,...,10 such that A[2] = 18, ... In the second row, the respective integers are replaced by their binary representation using 5-bit chunks (l=5). The third row, B, is just a concatenation of the second row. The fourth row, W, partitions B into two chunks with w=32 bits each. Empty slots ($2*32 - 10*5= 14$) on the right are filled with zeros until two times 32 bits are represented. The figure is retrieved from https://doi.org/10.1017/CBO9781316588284, Chapter 3: Arrays, p. 41. Answer: The binary representation of 2762827861 is 10100 10010 10110 10110 10000 10101 01. That is exactly the first 32-bit word in W. The second value is 0 1110 11010 10110 10100 00000 00000 00. I've left-padded a single zero bit. So these are just the unsigned, big-endian values as displayed, although the rightmost zeros are missing as you've indicated.
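The arithmetic is easy to check programmatically. This sketch packs ten 5-bit values into a single bitstring, right-pads it to 64 bits, and splits it into two 32-bit big-endian words; the array values here are read back from the two given words (the question's figure only states A[2] = 18, which matches the second chunk):

```python
# 5-bit chunk values recovered from the two words; the question's figure
# shows A[2] = 18, which matches the second entry here.
A = [20, 18, 22, 22, 16, 21, 11, 22, 21, 21]
l, w = 5, 32                        # l bits per element, w bits per word

# B is the concatenation of the l-bit binary representations of A
B = ''.join(format(a, '05b') for a in A)   # '05b' hardcodes l = 5
assert len(B) == len(A) * l                # 50 bits

# Pad with zeros on the right up to 2*w bits, then split into two words
padded = B.ljust(2 * w, '0')
W = [int(padded[i:i + w], 2) for i in range(0, 2 * w, w)]
print(W)   # [2762827861, 1991065600]
```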
{ "domain": "cs.stackexchange", "id": 19646, "tags": "arrays" }
The Vectors in $v=f\lambda$
Question: We are learning about waves in physics and I was just wondering what are the vectors and what are the scalars in this relationship: $$v=f\lambda$$ I know the velocity $v$ is a vector, so that means that either the frequency is a vector and the wavelength is a scalar, or vice versa. Personally, I think the wavelength must be the vector. So what are the scalar and vector quantities in this relationship? Answer: This is an interesting question, and the answers which have been given show that the $v$ in your equation should be called the magnitude of the velocity, or just the speed of the wave. The mixing of the terms speed and velocity happens all the time. Now there is an equation for wave velocity, but it comes about in a somewhat convoluted way. Suppose that you produced some ripples on a pond. To illustrate the ripples you might draw something like this: Or would you? Usually when you see such diagrams the ripples are drawn as concentric equally spaced circles. Now those lines are called wave fronts and indicate positions where all the particles are oscillating in phase with one another. For convenience a conventional diagram usually only shows wave fronts which are spaced by one wavelength $\lambda$, shown in red in the diagram. Wave fronts move in a direction which is perpendicular to the wave front. This is the same direction as the direction of wave motion, and this is the direction of the wave velocity $\vec v$. In fact this is the phase velocity, as it is to do with the motion of particles which are all in phase with one another. For more advanced work a parameter called the wave number $\vec k$ is introduced, which is defined as follows: It has a magnitude of $\frac {2 \pi}{\lambda}$ and a direction which is perpendicular to the wave fronts. The connection between the velocity and the wavenumber is that their dot product is equal to $\omega = 2 \pi f$ where $f$ is the frequency of the wave.
So you have $\vec v \cdot \vec k = \omega \Rightarrow v = f \lambda$, and this relationship gives you the magnitude of the velocity of a wave, i.e. its speed. So in more advanced work it is the parameter $\vec k$ which incorporates information about the direction of motion of a wave.
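A quick numeric check of $\vec v \cdot \vec k = \omega$; the numbers (a 340 m/s sound wave at 680 Hz, so $\lambda = 0.5$ m) and the propagation direction are illustrative choices, not from the original answer:

```python
import math

f = 680.0            # frequency in Hz
lam = 0.5            # wavelength in m
v = f * lam          # speed from v = f*lambda -> 340 m/s

# Both vectors point along the propagation direction (unit vector d)
d = (0.6, 0.8)                                   # arbitrary unit direction
v_vec = tuple(v * di for di in d)                # wave velocity vector
k_vec = tuple((2 * math.pi / lam) * di for di in d)  # wave vector, |k| = 2*pi/lam

dot = sum(a * b for a, b in zip(v_vec, k_vec))   # v . k
omega = 2 * math.pi * f                          # angular frequency
```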
{ "domain": "physics.stackexchange", "id": 28335, "tags": "waves, vectors" }
What is a "timelike half-curve"?
Question: I know what a timelike curve is. But what is a timelike half-curve, as in the definition of a Malament-Hogarth spacetime (below), which appears in this paper? Definition: A spacetime $(M,g)$ is called a Malament-Hogarth spacetime iff there is a future-directed, timelike half-curve $\gamma_P : \mathbb{R}^+ \rightarrow M$ such that $\|\gamma_P\| = \infty$ and there is a point $p \in M$ satisfying $\mathrm{im}(\gamma_P) \subset J^-(p)$. Answer: As the notation implies here, a half-curve is a curve defined from the half-line $\mathbb{R}^+$ (the interval $[0, +\infty)$) to the manifold.
{ "domain": "physics.stackexchange", "id": 96895, "tags": "general-relativity, spacetime, differential-geometry, terminology, causality" }
Is it possible for the measurement to yield a superposition of states in the original space?
Question: Suppose we have a wave function $|\Psi\rangle=\sum_i c_i|\psi_i\rangle$, where the probability amplitudes follow a distribution, i.e. $c_1^*c_1=\frac{1}{2},...,c_i^*c_i=\frac{1}{2^n}$. Suppose we perform a measurement $M$ on $|\Psi\rangle$ that measures all the states except for $|\psi_1\rangle$ and $|\psi_2\rangle$ (possibly simultaneously, e.g. by plates). Furthermore, suppose that the measurement $M$ does not change the probability distribution of the states, i.e., $M$ only collapses the measured states, but does not affect the rest of the wave function. Then will the resulting quantum state be a superposition of states, i.e. $|\Psi_M\rangle= \sqrt{2/3}|\psi_1\rangle+\sqrt{1/3}|\psi_2\rangle$? Answer: In QM, a measurement always amounts to a choice of a basis (or more generally, a set of projectors summing to the identity) with respect to which the wavefunction collapses. In other words, any measurement of a state $|\Psi\rangle$ can be described via a set of orthogonal projectors $P_k$ such that $\sum_k P_k=I$, by writing the state as $|\Psi\rangle=\sum_k P_k|\Psi\rangle$ and destroying all the coherence between the subspaces corresponding to each projector. Mathematically, this amounts to the following mapping $$|\Psi\rangle\simeq\mathbb P(|\Psi\rangle)\mapsto\sum_k \mathbb P(P_k|\Psi\rangle),$$ where I used the shortcut notation $\mathbb P(|\phi\rangle)\equiv|\phi\rangle\!\langle\phi|$. When the projectors $P_k$ have unit trace, and thus can be written as $P_k=\mathbb P(|\phi_k\rangle)$, you recover the standard notion of measuring $|\Psi\rangle$ in an orthonormal basis $\{\lvert\phi_k\rangle\}_k$. This is the most general way in QM in which you can "ask a question" of a state, which is what measurements fundamentally amount to. For this reason, you cannot "measure all $|\psi_k\rangle$ except for some of them". Quite simply, such a statement means nothing.
You don't "measure some of the $|\psi_k\rangle$", you measure $|\Psi\rangle$ in a given basis, and observe one of the elements of the basis.
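A small sketch of the projector formalism from the answer, phrased as the two-outcome measurement $\{P_1, I-P_1\}$ with $P_1$ projecting onto $\mathrm{span}\{|\psi_1\rangle,|\psi_2\rangle\}$. The 4-dimensional state with $|c_1|^2=\tfrac12$, $|c_2|^2=\tfrac14$, $|c_3|^2=|c_4|^2=\tfrac18$ is an illustrative choice matching the question's numbers:

```python
import math

# Amplitudes with |c1|^2 = 1/2, |c2|^2 = 1/4, |c3|^2 = |c4|^2 = 1/8
psi = [math.sqrt(p) for p in (1/2, 1/4, 1/8, 1/8)]

# Diagonal projector onto span{|psi_1>, |psi_2>}; its complement covers the rest
P1 = [1.0, 1.0, 0.0, 0.0]
P2 = [1.0 - p for p in P1]

def prob(P, state):
    # Born rule <psi| P |psi> for a diagonal projector
    return sum(p * a * a for p, a in zip(P, state))

p1, p2 = prob(P1, psi), prob(P2, psi)

# Post-measurement state for the P1 outcome: P1|psi> / sqrt(<psi|P1|psi>)
post = [p * a / math.sqrt(p1) for p, a in zip(P1, psi)]
```

The renormalized state indeed carries the $\sqrt{2/3}$ and $\sqrt{1/3}$ amplitudes from the question — but, in line with the answer, only once the measurement is stated as a proper projective measurement rather than as "measuring some of the $|\psi_k\rangle$".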
{ "domain": "physics.stackexchange", "id": 61246, "tags": "quantum-mechanics, hilbert-space, quantum-information, wavefunction-collapse, quantum-measurements" }
Using the same data structure in multiple nodes
Question: I am designing a system which consists of multiple nodes. Among others there is a "gui" node for visualization and a "controls" node for controlling the whole application. During runtime I need to keep and update the information about a so called camera_unit for which I created a class CameraUnit (.h + .cpp files). The thing is I need to use this class both in "gui" and "controls" node, as the gui uses the info for visualization and the controls for other computations, but I do not want to duplicate the code. What are my options? I have thought of these: 1. Duplicate the code (Probably the worst option as I would have to always copy .h and .cpp file between nodes when editing them) 2. Make the static library out of the CameraUnit class which would be linked to both "controls" and "gui" node (That seems possible but I still feel there is a more elegant option) 3. Merge "gui" and "controls" nodes to one node and make the gui run in a separate thread (I would like to prevent this option as it would change my architecture) Thank you for any advice! Originally posted by anoman on ROS Answers with karma: 1 on 2015-11-16 Post score: 0 Original comments Comment by Mehdi. on 2015-11-16: Why don't you just include your CameraUnit.h in both nodes? #include <your_package/CameraUnit.h> Comment by anoman on 2015-11-16: That would work but it would mean referencing the code from the different package and I thought this would not be really clean either. Or do you consider such an approach alright? Answer: Use a variant of number 2. Make the CameraUnit a class in a shared library. Basically with catkin, don't think about it and just use add_library. This change should just be 2-3 lines in your cmake file and it should work. Originally posted by dornhege with karma: 31395 on 2015-11-16 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by anoman on 2015-11-16: It works, I just had to make a few more adjustments. 
You have to add lines INCLUDE_DIRS ${PROJECT_SOURCE_DIR}/src LIBRARIES name_of_your_lib in catkin_package() in the node which builds the library to make it work. Is there any specific reason to use a SHARED and not a STATIC library? Comment by dornhege on 2015-11-16: static libraries would be linked into every dependent package. It probably doesn't matter for smaller projects. But you don't need to worry about it. catkin should do everything correctly here.
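A minimal sketch of what the accepted answer and comments describe, for the package that builds CameraUnit (the package, target, and path names here are assumptions, not taken from the original post):

```cmake
# CMakeLists.txt fragment of the (hypothetical) package that owns CameraUnit
add_library(camera_unit src/CameraUnit.cpp)

catkin_package(
  INCLUDE_DIRS include
  LIBRARIES camera_unit
)
```

A depending package (e.g. the one holding the gui node) would then list this package under find_package(catkin REQUIRED COMPONENTS ...) and link with target_link_libraries(gui_node ${catkin_LIBRARIES}), so both nodes share one copy of the class.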
{ "domain": "robotics.stackexchange", "id": 22993, "tags": "ros, data, node" }
Contradiction between the first law of thermodynamics and combined gas law
Question: In the first law of thermodynamics, it is stated that: $$\Delta U = Q + W$$ Which can be written as: $$\Delta U = Q + P\Delta V$$ Since $\Delta U$ affects $U$ (internal energy), which itself affects temperature, a measure of the average kinetic energy of particles within a system, the equation, therefore, tells us a few things about a few properties: Pressure Change in volume and by proxy, volume By proxy due to $\Delta U$, temperature These are all quantities present within the combined gas equation as well, given by $PV = kT$, where $k$ is a constant. Hence, my question comes in 2 parts, Is the combined gas equation linked in any way, shape, or form to the first law of thermodynamics? If so, how? For isobaric processes, wherein the pressure is held constant, would a greater increase in volume lead not to a lower decrease in internal energy as per the first law of thermodynamics? If that is the case, then how are temperature and volume still linearly related as per Charles' Law, which states that temperature and volume are directly proportional to each other? I understand that overall there might still be an increase in temperature in spite of expansion as $Q$ may be larger than $P\Delta V$, but how is the relationship still proportional? How does all of this look at the molecular level? (Might be kind of out of topic but just curious as I am not sure if any of these link up) Answer: Part 1 The ideal gas law and first law are independent laws. However, the combination of the two laws shows that the change in internal energy of an ideal gas depends only on charge in temperature, as shown in part 2 below. Part 2 For the version of the first law that you are using when the gas does work (expands) work is negative (reduces internal energy). 
So for positive $\Delta V$, the first law for a constant-pressure process is $$\Delta U =Q-P\Delta V$$ From the ideal gas law (taking one mole of gas) $$\Delta V=\frac{R\Delta T}{P}$$ Substituting into the first law $$\Delta U =Q-R\Delta T$$ Also for constant pressure $Q=C_{p}\Delta T$, so $$\Delta U =C_{p}\Delta T-R\Delta T$$ Finally for an ideal gas $$R=C_{p}-C_v$$ Substituting in the first law $$\Delta U =C_{v}\Delta T$$ which shows that the internal energy of an ideal gas depends only on the temperature. Part 3 The molecular-level explanation for the last equation is that for an ideal gas there are no intermolecular forces (no internal potential energy), so the internal energy is strictly kinetic, which is why the internal energy of an ideal gas depends only on the temperature. So to clarify, the first law of thermodynamics simply tells us that the internal energy of a system is equal to the thermal energy transferred to the system minus the work done by the system; Essentially, yes. But I would rather word it that the change in internal energy is equal to the net heat added to the system minus the net work done by the system. it does not show us how internal energy (and hence temperature) relates to pressure and volume (or more specifically a definite relationship that does not involve accounting for the transfer of T.E.). You are correct it does not show us how internal energy (and hence temperature) directly relates to pressure and volume without accounting for heat transfer (or, as you call it, transfer of thermal energy, T.E.), unless you combine it with the applicable equation of state (e.g., ideal gas equation). But you don't need the ideal gas equation to determine the change in internal energy. In the example being considered (a reversible constant pressure expansion process) we know that the heat transfer into the system is $C_{p}\Delta T$ and the work contribution to the first law is $-P\Delta V$ for $\Delta V>0$. You don't need the ideal gas law to calculate the change in internal energy.
You do need the ideal gas law to calculate the change in internal energy based only on temperature change, as noted in Part 2, namely, $\Delta U=C_{v}\Delta T$. Either calculation gives the same result. But, and I need to emphasize this, $\Delta U=C_{v}\Delta T$ only applies to an ideal gas because it is derived from the ideal gas equation. Whereas the combined gas law takes it further by showing how temperature, and hence internal energy, affects pressure and volume (and vice versa). Not sure what you mean by the "combined" gas law. But if you mean the combination of the ideal gas law and the first law, it provides the direct relationship between internal energy and temperature, and hence pressure and volume by the gas law. The first law of thermodynamics relates the change in internal energy of a system to total heat and work done (hence pressure and change in volume), whereas the ideal gas law relates temperature to pressure and volume (and by extension, to internal energy too). Yes, but only the ideal gas law relates temperature to pressure and volume and by extension to internal energy too. It would not apply to a real gas. It is important to understand that an equation of state only provides the relationship between pressure, volume and temperature for different equilibrium states. For example, in a closed system (constant mass) the relationship between states 1 and 2 is $$\frac{P_{1}V_{1}}{T_{1}}=\frac{P_{2}V_{2}}{T_{2}}$$ This relationship says nothing about the process or path between the two states. For example, if $P_{1}=P_{2}$ that does not necessarily mean the two states are connected by a constant pressure process. There are many possible paths for which the final and initial pressure is the same. So you can't necessarily relate this to heat, $Q$ by $Q=C_{p}\Delta T$ since heat is path dependent and $C_{p}$ only applies to a constant pressure path. The final and initial pressures being the same only means $\frac{V_{1}}{T_{1}}=\frac{V_{2}}{T_{2}}$.
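As a numeric check of the Part 2 derivation, here is a sketch for one mole of a monatomic ideal gas heated at constant pressure (the specific numbers are illustrative choices, not from the original answer):

```python
R = 8.314            # J/(mol K), gas constant
Cv = 1.5 * R         # monatomic ideal gas
Cp = Cv + R          # Mayer's relation: Cp - Cv = R
n, dT = 1.0, 10.0    # one mole heated by 10 K at constant pressure

Q = n * Cp * dT                 # heat added along the isobar
W_by_gas = n * R * dT           # P*dV = n*R*dT at constant pressure
dU_first_law = Q - W_by_gas     # Delta U = Q - P*Delta V
dU_temp_only = n * Cv * dT      # Delta U = Cv * Delta T
```

Both routes give the same change in internal energy, and the heat added exceeds it by exactly the expansion work, as the derivation requires.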
Hope this helps.
{ "domain": "physics.stackexchange", "id": 70388, "tags": "thermodynamics, ideal-gas, gas" }
Arithmetic complexity of matrix powering
Question: Let $M\in\Bbb Z_{\geq0}[x_1,\dots,x_n]^{m\times m}$ be an $m\times m$ matrix in $n$ variables. We know that the size of the smallest formula that computes $\mathsf{Tr}(M^d)$ where $d\in\Bbb N$ can be exponentially large in $n$ and $m$, at least when $n=m^2$. Could it be possible that the smallest circuit size is polynomial while the formula size is exponential, at least for the case $n=m^2$? Do the answers differ much if we seek monotone circuits and formulas? Answer: It depends on the relationship between $m$ and $d$. If $m \geq 3$ is fixed, but $d$ is allowed to grow without bound, then the corresponding class of functions is exactly the same as functions with polynomial formula size [Ben-Or and Cleve]. (For $m=2$, it is not as powerful [Allender and Wang]). [Update: As far as I know, this is only true for iterated matrix multiplication $tr(M_1 M_2 \dotsb M_d)$, rather than matrix powering $tr(M^d)$. When $m$ is allowed to grow these two are essentially equivalent, but for fixed $m$, e.g. $m=3$, I don't know if $3 \times 3$ matrix powering is poly-formula-size-complete.] If $m$ can grow but $d$ is fixed, then this is the same as matrix multiplication, up to $O(\log d) = O(1)$ factors. Since circuits for matrix multiplication can be converted to bilinear circuits with only a factor 2 blow-up in size, circuit and formula size here are essentially the same, and the question boils down to the classic open question of the exponent of matrix multiplication. If both $m$ and $d$ can grow, then this is equivalent in power to the determinant (corresponding to the Boolean class $\mathsf{DET}$ and the algebraic class $\mathsf{VP}_{ws}$). So here the question becomes about the circuit/formula complexity of the determinant. Both of these are well-known open questions (obviously there are cubic circuits; the best known upper bound on formula size of the determinant is quasi-polynomial).
Most (perhaps even all) nontrivial algorithms for matrix multiplication use cancellations in a crucial way, so I would expect there is a difference between the monotone case and the unrestricted case. Also, note that the equivalence between matrix powering and determinant in the last case is necessarily non-monotone (since matrix powering is a polynomial with all nonnegative coefficients, but the determinant is not).
{ "domain": "cstheory.stackexchange", "id": 3431, "tags": "circuit-complexity, matrices, matrix-product, formulas, monotone" }
C Doubly Linked List
Question: It works, but it seems a bit long winded. I'd like some critiquing. #include <stdio.h> #include <stdlib.h> struct node { int data; struct node * previous; struct node * next; }; struct linked_list { struct node * first; struct node * last; }; struct linked_list * init_list(struct node * node) { struct linked_list * ret = malloc(sizeof(struct linked_list)); ret->first = node; ret->last = node; return ret; } struct node * create_node(int data) { struct node * ret = malloc(sizeof(struct node)); ret->data = data; ret->previous = NULL; ret->next = NULL; return ret; } void destroy_list(struct linked_list * list) { struct node * node = list->first, *next = NULL; while(node != NULL) { next = node->next; free(node); node = next; } free(list); } void push_front(struct linked_list * list, struct node * node) { if(list->first != NULL) { node->next = list->first; list->first->previous = node; } else list->last = node; list->first = node; } void push_back(struct linked_list * list, struct node * node) { if(list->last != NULL) { node->previous = list->last; list->last->next = node; list->last = node; } else push_front(list, node); } void insert_after(struct node * node, struct node * to_add) { to_add->previous = node; to_add->next = node->next; node->next = to_add; } void remove_node(struct node * node) { node->previous->next = node->next; node->next->previous = node->previous; free(node); } Answer: As mentioned in the comments this code is not long winded at all. It is very clean in general. Too clean in fact as you will see below. You cannot create an empty linked list with your init_list. However, in other parts of your code you assume list may be empty. It is useful to be able to create an empty linked list since you may not know what you want your first node to be in advance. I'd suggest changing init_list. I personally like symmetry in linked list code. You can always create the dual of one function by swapping all prevs with nexts and all heads with tails. 
I don't think your push_back code should call push_front. You can handle the else condition just like you did in push_front except by swapping variable names. insert_after is incomplete. Remember this is a doubly linked list. Also the function doesn't handle the case when you insert after the current list->last. You need to add the linked list to the function arguments. remove_node does not handle all cases. What if node was the head of the list? What if it was the tail of the list? What if it was both the head and the tail? You don't actually have to write three different cases to handle this but you can. As simply a style choice I would rename previous to prev I am ignoring that you never check for your pointers being NULL Edit: This code actually can produce an empty list by passing NULL in as an argument to init_list
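Pulling the reviewer's points together, here is one possible sketch (not the only way) of an init that permits an empty list, a symmetric push_back, and a remove_node that takes the list so head and tail removals fix up first/last. make_list is a new name introduced here; the other names mirror the original:

```c
#include <stdlib.h>
#include <assert.h>   /* for the checks exercising the list */

struct node {
    int data;
    struct node *prev;
    struct node *next;
};

struct linked_list {
    struct node *first;
    struct node *last;
};

/* Allows creating an empty list, unlike the original init_list */
struct linked_list *make_list(void) {
    struct linked_list *ret = malloc(sizeof *ret);
    ret->first = NULL;
    ret->last = NULL;
    return ret;
}

struct node *create_node(int data) {
    struct node *ret = malloc(sizeof *ret);
    ret->data = data;
    ret->prev = NULL;
    ret->next = NULL;
    return ret;
}

/* Symmetric to push_front: no call into the dual function */
void push_back(struct linked_list *list, struct node *node) {
    node->prev = list->last;
    node->next = NULL;
    if (list->last != NULL)
        list->last->next = node;
    else
        list->first = node;         /* list was empty */
    list->last = node;
}

/* Takes the list so head/tail removals can update first/last */
void remove_node(struct linked_list *list, struct node *node) {
    if (node->prev != NULL)
        node->prev->next = node->next;
    else
        list->first = node->next;   /* node was the head */
    if (node->next != NULL)
        node->next->prev = node->prev;
    else
        list->last = node->prev;    /* node was the tail */
    free(node);
}

void destroy_list(struct linked_list *list) {
    struct node *node = list->first, *next;
    while (node != NULL) {
        next = node->next;
        free(node);
        node = next;
    }
    free(list);
}
```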
{ "domain": "codereview.stackexchange", "id": 12343, "tags": "c, linked-list" }
Launching different robots in Gazebo at the same time
Question: Hi guys, I'm trying to write a launch file to simulate two different robots in gazebo at the same time. I've tried many things and changes in my launch file but I can't get it working. Sometimes I get two (visual) copies of the same robot. This is part of my launch file. Is the parameter "robot_description" unique or something like that, so that just one robot description parameter can be set? Am I missing something? I've tried to include extra parameters. Also tried to create different groups for each robot, and to include "-namespace quadrotor" in the spawn node parameters, but nothing. Any help? Originally posted by RaulC on ROS Answers with karma: 1 on 2012-07-20 Post score: 0 Answer: Here's a portion of a launch file I have where I spawn two robots. I use this format to spawn multiple robots using the same *.urdf or different robots like shown below. Also, roscd gazebo_worlds/launch and look how the table.urdf.xacro is launched. I don't think what you have is right. Hope it helps. <param name="passive_cubelet_1" textfile="/home/andy/ros-fuerte/workspace/Cubelet/urdf/passive.urdf"/> <node args="-urdf -param passive_cubelet_1 -x 0.58 -y 0.0 -z 0.585 -R 0.0 -P 0.0 -Y 0.0 -model passive_cubelet_1" name="spawn_passive_cubelet_1" output="screen" pkg="gazebo" respawn="false" type="spawn_model"/> <param name="inverse_cubelet_2" textfile="/home/andy/ros-fuerte/workspace/Cubelet/urdf/inverse.urdf"/> <node args="-urdf -param inverse_cubelet_2 -x 0.748 -y 0.0 -z 0.585 -R 0.0 -P 0.0 -Y 0.0 -model inverse_cubelet_2" name="spawn_inverse_cubelet_2" output="screen" pkg="gazebo" respawn="false" type="spawn_model"/> Originally posted by mcevoyandy with karma: 235 on 2012-07-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 10292, "tags": "gazebo, roslaunch, multi-robot" }
Quantum Computation - Postulates of QM
Question: I have just started (independently) learning about quantum computation in general from the Nielsen-Chuang book. I wanted to ask if anyone could try finding time to help me with what's going on with the measurement postulate of quantum mechanics. I mean, I am not trying to question the postulate; it's just that I do not get how the state of the system after measurement comes out to $M_m|\psi\rangle/\sqrt{ \langle\psi|M_m^\dagger M_m|\psi\rangle }$. Even though it's just what the postulate seems to say, I find it really awkward and do not see why it is this expression. I do not know if what I ask here makes sense, but this is proving to be something which for some reason seems to block me from reading any further. Answer: I don't know if this is an "explanation", but hopefully it is a useful "description". More generally than projective measurements, one always measures an operator. (A projector is a special case of this.) So what does it mean to "measure an operator"? Well, operators often correspond to 'observable' physical quantities. The most important in quantum mechanics, for instance, is energy; but one can also (sometimes indirectly) measure other quantities, such as angular momentum, z-components of magnetic fields, etc. What is being measured always gives real-valued results --- in principle, some definite result (e.g. an electron is in the 'spin +1/2' state as opposed to 'spin −1/2', or in the first excited energy level as opposed to the ground-state in a hydrogen atom, etc.), albeit each a priori possible result is realized with some probability. We assign each of the real-valued outcomes of a measurement to a subspace. The way we do this is to describe a Hermitian operator --- i.e. an operator which associates a real eigenvalue to different subspaces, with the subspaces summing up to the whole Hilbert space. A projector is such an operator, where the real values are 0 and 1; i.e.
describing that a vector belongs to a designated subspace (yielding a value of 1), or its orthocomplement (yielding a value of 0). These Hermitian operators are observables, and the eigenspaces are those for which the observable has a "definite" value. But what about those vectors which are not eigenvectors, and do not have "definite" values for these observables? Here is the non-explaining part of the description: we project them into one of the eigenspaces, to obtain an eigenvector with a well-defined value. Which projection we apply is determined at random. The probability distribution is given by the familiar Born rule: $$ \Pr\limits_{|\psi\rangle}\bigl( E = c \bigr) \;=\; \langle \psi | \Pi_c | \psi \rangle \;, $$ where $\Pi_c$ is the projector onto the c-eigenspace of an 'observable quantity' E (represented by a Hermitian operator $A = \sum_c \; c \cdot \Pi_c$). The post-measured state is some projection of the state $|\psi\rangle$ onto some eigenspace of the observable A. And so if $| \psi_0 \rangle$ is the pre-measurement state, $| \psi_1 \rangle$ is the post-measurement state, and $\Pi_c$ is the 'actual result' measured (i.e. the eigenspace onto which the pre-measurement state was actually projected), we have the proportionality result $$ | \psi_1 \rangle \;\propto\; \Pi_c | \psi_0 \rangle $$ by the projection rule just described. This is why there is the projector in your formula. In general, the vector $| \psi'_1 \rangle = \Pi_c | \psi_0 \rangle$ is not a unit vector; because we wish to describe the post-measurement state by another unit vector, we must rescale it by $$ \|\;|\psi'_1\rangle\;\| = \sqrt{\langle \psi'_1 | \psi'_1 \rangle} = \sqrt{\langle \psi_0 | \Pi_c | \psi_0 \rangle} \;,$$ which is the square-root of the probability with which the result would occur a priori. 
And so, we recover the formula in your question, $$ | \psi_1 \rangle \;=\; \frac{\Pi_c | \psi_0 \rangle}{ \sqrt{ \langle \psi_0 | \Pi_c | \psi_0 \rangle }} \;.$$ (If this formula seems slightly clumsy, take heart that it looks and feels a little bit better if you represent quantum states by density operators.) Edited to add: the above should not be construed as a description of POVMs. A "positive operator valued measurement" is better seen as describing the expectation values of various measurable observables $E_c$ in a collection $\{ E_c \}_{c \in C}$.
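To make the description concrete, here is a tiny sketch of measuring an observable $A = \sum_c c\,\Pi_c$: for each eigenvalue, compute the Born probability and the renormalized post-measurement state. The one-qubit observable (eigenvalues 0 and 2 with eigenvectors $(|0\rangle \mp |1\rangle)/\sqrt2$) is an illustrative choice, not from the original answer:

```python
import math

# Observable A = 0*P_0 + 2*P_2 with rank-1 eigenprojectors
s = 1 / math.sqrt(2)
eigvecs = {0.0: [s, -s], 2.0: [s, s]}   # eigenvalue -> unit eigenvector

psi = [1.0, 0.0]                        # pre-measurement state |0>

def project(vec, state):
    # Pi_c |psi> for the rank-1 projector |v><v| (real vectors)
    amp = sum(v * p for v, p in zip(vec, state))
    return [amp * v for v in vec]

outcomes = {}
for val, v in eigvecs.items():
    proj = project(v, psi)
    p = sum(x * x for x in proj)             # Born rule <psi|Pi_c|psi>
    post = [x / math.sqrt(p) for x in proj]  # renormalized post-measurement state
    outcomes[val] = (p, post)
```

Each outcome occurs with probability 1/2 here, and the post-measurement state is the corresponding unit eigenvector, exactly as in the formula above.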
{ "domain": "cstheory.stackexchange", "id": 197, "tags": "quantum-computing, quantum-information" }
What is the intuition behind TD($\lambda$)?
Question: I'd like to better understand temporal-difference learning. In particular, I'm wondering if it is prudent to think about TD($\lambda$) as a type of "truncated" Monte Carlo learning? Answer: TD($\lambda$) can be thought of as a combination of TD and MC learning, so as to avoid to choose one method or the other and to take advantage of both approaches. More precisely, TD($\lambda$) is temporal-difference learning with a $\lambda$-return, which is defined as an average of all $n$-step returns, for all $n$, where an $n$-step return is the target used to update the estimate of the value function that contains $n$ future rewards (plus an estimate of the value function of the state $n$ steps in the future). For example, TD(0) (e.g. Q-learning is usually presented as a TD(0) method) uses a $1$-step return, that is, it uses one future reward (plus an estimate of the value of the next state) to compute the target. The letter $\lambda$ actually refers to a parameter used in this context to weigh the combination of TD and MC methods. There are actually two different perspectives of TD($\lambda$), the forward view and the backward view (eligibility traces). The blog post Reinforcement Learning: Eligibility Traces and TD(lambda) gives a quite intuitive overview of TD($\lambda$), and, for more details, read the related chapter of the book Reinforcement Learning: An Introduction.
{ "domain": "ai.stackexchange", "id": 2686, "tags": "reinforcement-learning, comparison, monte-carlo-methods, temporal-difference-methods, td-lambda" }
Incompressible Fluid and Sinks
Question: This question is pretty vague but we just learned about sources and sinks in vector calculus. I have read that most liquids are incompressible. I am wondering how it is possible for fluid to be forced inward toward a sink if it is incompressible? Doesn't it have to discharge, or go up or down the axis of rotation of the origin? I am assuming the sink means that a force field is compressing the gas, solid, or liquid inwards, and this means there would have to be compression. I cannot seem to find any sources other than the one below that indicates "For fluid flows, a sink is a negative source and is a point of inward radial flow at which the fluid is considered to be absorbed or annihilated." http://mathfaculty.fullerton.edu/mathews/c2003/SourceSinkMod.html Brian Answer: In theory, the fluid just appears (vanishes) at a source (sink). In practice, however, there is always a conduit that will add or remove the fluid into the flow domain under consideration (of which the conduit is not a part). The inflow (outflow) of the fluid into the flow domain is then modelled as a source (sink) without violating the incompressibility constraint.
{ "domain": "physics.stackexchange", "id": 54005, "tags": "fluid-dynamics" }
Test equivalence of circuits exactly on qiskit
Question: I have two circuits that I believe are equivalent. When I say 'equivalent' I mean that they have equivalent unitary representations up to a global phase. How can one check this using Qiskit? How I would show equivalence isn't clear to me. Answer: As was pointed out, it depends on your notion of equivalence. State vectors Two circuits are equivalent up to global phase if they represent the same state vector. Consider the following two circuits: from qiskit import QuantumCircuit import numpy as np qc1 = QuantumCircuit(2) qc1.h(0) qc1.cx(0,1) qc2 = QuantumCircuit(2) qc2.u2(0, np.pi, 0) qc2.cx(0,1) It is possible to check if their state vector is the same with the Qiskit qiskit.quantum_info module: from qiskit.quantum_info import Statevector Statevector.from_instruction(qc1).equiv(Statevector.from_instruction(qc2)) # True Unitary matrices If you need to consider the global phase, you need to compare their unitary matrices via simulation. In the following case: qc1 = QuantumCircuit(1) qc1.x(0) qc2 = QuantumCircuit(1) qc2.rx(np.pi, 0) These circuits have the same state vector, but not the same unitary: Statevector.from_instruction(qc1).equiv(Statevector.from_instruction(qc2)) # True from qiskit import Aer, execute backend_sim = Aer.get_backend('unitary_simulator') job_sim = execute([qc1, qc2], backend_sim) result_sim = job_sim.result() unitary1 = result_sim.get_unitary(qc1) unitary2 = result_sim.get_unitary(qc2) np.allclose(unitary1, unitary2) # False Counts If your circuits have measurements, you probably want to consider these two circuits equivalent, since their measured results are equivalent.
qc1 = QuantumCircuit(2,2) qc1.h(0) qc1.measure(0,0) qc1.measure(1,1) qc2 = QuantumCircuit(2,2) qc2.h(0) qc2.swap(0,1) qc2.measure(0,1) qc2.measure(1,0) In this case, you want to compare their result counts, considering some statistical error: backend_sim = Aer.get_backend('qasm_simulator') job_sim = execute([qc1, qc2], backend_sim, shots=1000) result_sim = job_sim.result() counts1 = result_sim.get_counts(qc1) counts2 = result_sim.get_counts(qc2) print(counts1, counts2) Up to Ancillas You might want to consider these two circuits equivalent: qc1 = QuantumCircuit(3) qc1.x(0) qc2 = QuantumCircuit(1) qc2.rx(np.pi, 0) It was suggested to invert one of them, compose them (wiring the ancillas) and check if it is the identity. For example: from qiskit.quantum_info import Operator composed = qc1.compose(qc2.inverse(), qubits=range(len(qc2.qubits))) Operator(composed).equiv(Operator.from_label('I'*len(qc1.qubits))) # True
{ "domain": "quantumcomputing.stackexchange", "id": 3380, "tags": "qiskit, programming, circuit-construction" }
What are the eigenstates of the Displacement operator?
Question: I know that the displacement operator: $$ \hat{D}(\alpha)=e^{\alpha \hat{a}^{\dagger}-\alpha^*\hat{a}} $$ acts on the vacuum as: $$ \hat{D}(\alpha) \vert 0\rangle =\vert \alpha\rangle $$ But what are the eigenstates $\vert \Psi \rangle$ of $\hat{D}(\alpha)$, such that: $$\hat{D}(\alpha)\vert \Psi \rangle = \lambda (\alpha)\vert \Psi \rangle \ ?$$ Answer: Displacements shift a state's quasiprobability distribution by $\alpha$ in phase space. This question is thus equivalent to asking what phase-space distributions are unchanged by shifts $\alpha$. One answer is a state whose quasiprobability distribution is a straight line pointing in the same direction as $\alpha$. Such a state is an infinitely squeezed state, with the squeezing in the direction perpendicular to $\alpha$. For example, a position eigenstate is infinitely squeezed in terms of its position, so its momentum distribution is a flat line, thus being unchanged by displacements in momentum. It is easy to geometrically construct other answers but harder to show that they correspond to pure states. For example, a quasiprobability distribution that is a series of parallel lines would suffice. But one cannot attain this by simply taking a superposition of infinitely squeezed states, because that would imply that any superposition of position eigenstates will be unchanged by displacements in momentum, which must be false. The quasiprobability distribution that is a series of parallel lines is really a mixture of infinitely squeezed states, not a superposition thereof. Another answer is a state whose phase-space structure is repeated periodically in $\alpha$. 
These are things like GKP states, which are explicitly constructed as superpositions of coherent states spaced on an infinite grid $$\sum_{s,t=-\infty}^\infty \exp(-i s \hat{p}\alpha)\exp(2\pi i t \hat{x}/\alpha)|\mathrm{vac}\rangle.$$ The common theme is that infinities are necessary: the quasiprobability distribution cannot be localized; only a state whose quasiprobability distribution has infinite support can be an eigenstate of a displacement operator.
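The position-eigenstate example can be made concrete with a one-line computation. Assuming the common convention $\hat{a} = (\hat{x} + i\hat{p})/\sqrt{2}$ (an assumption here, not stated in the question), a purely imaginary displacement reduces to a momentum kick, for which position eigenstates are exact eigenstates:

```latex
\alpha = \frac{i\beta}{\sqrt{2}},\ \beta \in \mathbb{R}
\;\Longrightarrow\;
\alpha \hat{a}^{\dagger} - \alpha^{*}\hat{a}
  = \frac{i\beta}{\sqrt{2}}\left(\hat{a}^{\dagger} + \hat{a}\right)
  = i\beta\,\hat{x},
\qquad
\hat{D}\!\left(\tfrac{i\beta}{\sqrt{2}}\right)\lvert x\rangle
  = e^{i\beta\hat{x}}\lvert x\rangle
  = e^{i\beta x}\lvert x\rangle .
```

So the eigenvalue is the pure phase $\lambda = e^{i\beta x}$, consistent with the claim that an infinitely squeezed state is unchanged by displacements along the conjugate direction.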
{ "domain": "physics.stackexchange", "id": 94239, "tags": "quantum-mechanics, quantum-optics, linear-algebra, displacement" }
Comparing Brachistochrone curve with a Hypocycloid curve
Question: I want to compare the time that it takes to slide a particle in a frictionless hypocycloid curve, so time would be given by the arclength divided by the velocity. So I first need to compute the arclength of the hypocycloid curve, but in general the arclength is given by And by conservation of energy, velocity is given by Substituting in the integral results Solving the indefinite integral results in So now I would just substitute the function y corresponding to the hypocycloid curve Is my reasoning right? Then finally, to compare times, I would just make a graph of the time functions corresponding to the brachistochrone and the hypocycloid Answer: Your reasoning is almost perfect. Just one small, trivial problem that can be easily fixed. You assert that from conservation of energy $v = \sqrt{2gy}$. However, what conservation of energy really states is that the initial potential energy equals the kinetic energy plus the potential energy at a state. Hence, $\frac{1}{2}mv^2 + mgy = mgy_0$, where $y_0$ is the initial height. From this, we obtain that $v = \sqrt{2g(y_0 - y)}$, where $y_0$ is a constant which depends on where you set your object to start rolling. Once $y_0$ is determined, you have your formula for velocity. From there, you solve the integral to obtain a formula similar to the one you have. The best thing to do from here is to either do as you stated and substitute the function for $y$ in terms of $x$, or to write $y'$ in terms of $y$ and substitute. If you do it the first way, you get a function of $t$ in terms of $x$. If you do it the second way, you get a function of $t$ in terms of $y$. Both ways are okay, but the first way is by far the easiest way to go. Whichever way you do it, when you are done you could either graph or do a functional analysis.
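The equations lost from the question can be sketched from context: with arclength element $ds = \sqrt{1 + y'^2}\,dx$ and the answer's corrected velocity $v = \sqrt{2g(y_0 - y)}$ (where $y_0$ is the release height), the descent-time functional both curves should be plugged into is:

```latex
t \;=\; \int \frac{ds}{v}
  \;=\; \int_{x_1}^{x_2}
        \sqrt{\frac{1 + y'(x)^{2}}{2g\,\bigl(y_0 - y(x)\bigr)}}\;dx .
```

Substituting the brachistochrone (cycloid) and the hypocycloid for $y(x)$ in turn gives the two time functions to graph and compare.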
{ "domain": "physics.stackexchange", "id": 20616, "tags": "newtonian-mechanics, classical-mechanics, energy-conservation, brachistochrone-problem" }
Unruh's Law and how part of a photon pair disappears beyond a Rindler horizon?
Question: Unruh's Law says that "an accelerating observer in empty space will see themselves embedded in a gas of hot photons at a temperature proportional to their acceleration." Tell me if I have this wrong, but as I understand it, this is because empty space is full of virtual particles popping into existence and immediately annihilating, so you cannot detect them until you begin to accelerate. This is because an accelerating observer has an effective horizon (a Rindler horizon) which they cannot see beyond, so if one of these particles drifts out of their view and into their hidden region, they only see one of the particles (let's say they're photons) and will be able to detect it and interpret it as heat. How is this possible? Shouldn't a photon still annihilate with its partner regardless of which side of some arbitrary horizon it's on? And is this photon pair entangled? Answer: Yes, it is possible, and although never observed (you'd have to be accelerating fast) it is theoretically on firm ground. The idea is very similar to Hawking radiation from black holes (BHs), and the equations are equivalent. There's the view and explanation you mentioned, and there are other, more rigorous explanations. Hawking derived his result differently. I am not sure how Unruh did it, but all are equivalent. Your explanation is seen as more of a conceptual picture than a physical result: if there's a horizon and one particle heads out, the other will head in, by conservation of momentum. Another way to do it is by assuming a quantum field outside the horizon and calculating the field in that gravitational field, or equivalently as seen by an accelerated observer; it turns out that it emits particle pairs (positive and negative frequencies, one going out, one in), and it does that even for the ground state. Another way is to calculate it as quantum tunneling from the other side. 
All equivalent explanations, but some people don't like the virtual particle explanation because virtual particles are not real, and can be thought of as mathematical fictions. But the math works out right. As for your explanation, the thought is that if there were no gravitational field, or equivalently no accelerated frame, they would indeed annihilate, so you need the acceleration or gravitational field to try to separate them, and you need the horizon to keep them separate. They could be thought of as entangled: conservation of momentum, charge, etc. makes their states fully entangled. However, the one going in is virtual, not really a particle, and has negative mass and energy, so that's all arguable.
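To put numbers on "you'd have to be accelerating fast": the standard Unruh temperature formula is $T = \hbar a / (2\pi c k_B)$. A quick sketch:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant [J*s]
C = 2.99792458e8        # speed of light [m/s]
K_B = 1.380649e-23      # Boltzmann constant [J/K]

def unruh_temperature(a):
    """Unruh temperature in kelvin for proper acceleration a in m/s^2."""
    return HBAR * a / (2 * math.pi * C * K_B)

# Earth-surface acceleration gives an unmeasurably tiny temperature
print(unruh_temperature(9.81))    # ~4e-20 K
# Reaching a mere 1 K takes an acceleration of order 1e20 m/s^2
print(unruh_temperature(2.5e20))  # ~1 K
```

This is why the effect, while on firm theoretical ground, has never been directly observed.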
{ "domain": "physics.stackexchange", "id": 37277, "tags": "quantum-mechanics, general-relativity, black-holes, relativity, virtual-particles" }
Nodelet crashes when using pcl::VoxelGrid, pcl::SACSegmentation
Question: Goal: Trying to detect and publish all the detected planes (coefficients) from a depth image. Using depth_image_proc to compute the point-cloud. Created a similar segmentation nodelet, attached below. Test script works fine, where I just publish the cloud height. But, when I include pcl::VoxelGrid<pcl::PointXYZ> vg, pcl::SACSegmentation<pcl::PointXYZ> seg Nodelet crashes at run time (compiles without issues) with the following error. [ERROR] [1597300025.991310390]: Failed to load nodelet [/pcl_segmentation] of type [using_image_pipeline/pcl_segmentation] even after refreshing the cache: MultiLibraryClassLoader: Could not create object of class type using_image_pipeline::PclSegmentationNodelet as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary() [ERROR] [1597300025.991394042]: The error before refreshing the cache was: Failed to load library /home/kotaru/catkin_ws/devel/lib//libusing_image_pipeline.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/kotaru/catkin_ws/devel/lib//libusing_image_pipeline.so: undefined symbol: _ZN3pcl7PCLBaseINS_8PointXYZEE13setInputCloudERKN5boost10shared_ptrIKNS_10PointCloudIS1_EEEE) [FATAL] [1597300025.991807722]: Failed to load nodelet '/pcl_segmentation` of type `using_image_pipeline/pcl_segmentation` to manager `using_image_pipeline_nodelet' [pcl_segmentation-4] process has died [pid 12564, exit code 255, cmd /opt/ros/melodic/lib/nodelet/nodelet load using_image_pipeline/pcl_segmentation using_image_pipeline_nodelet __name:=pcl_segmentation __log:=/home/kotaru/.ros/log/00c8aa44-dd2e-11ea-aa9d-54bf64375ec6/pcl_segmentation-4.log]. 
log file: /home/kotaru/.ros/log/00c8aa44-dd2e-11ea-aa9d-54bf64375ec6/pcl_segmentation-4*.log Can't seem to understand why that is happening, or if the implementation is wrong. Plane segmentation nodelet source code, #include <image_transport/image_transport.h> #include <nodelet/nodelet.h> #include <pcl/filters/extract_indices.h> #include <pcl/filters/filter.h> #include <pcl/filters/passthrough.h> #include <pcl/filters/voxel_grid.h> #include <pcl/io/pcd_io.h> #include <pcl/kdtree/kdtree.h> #include <pcl/point_types.h> #include <pcl/sample_consensus/method_types.h> #include <pcl/sample_consensus/model_types.h> #include <pcl/segmentation/extract_clusters.h> #include <pcl/segmentation/sac_segmentation.h> #include <pcl/visualization/cloud_viewer.h> #include <pcl/visualization/pcl_visualizer.h> #include <pcl_conversions/pcl_conversions.h> #include <ros/ros.h> #include <sensor_msgs/point_cloud2_iterator.h> #include <std_msgs/Float32.h> #include <std_msgs/Float32MultiArray.h> #include <boost/thread.hpp> namespace using_image_pipeline { class PclSegmentationNodelet : public nodelet::Nodelet { public: ros::NodeHandlePtr nh_ptr_; boost::shared_ptr<sensor_msgs::PointCloud2> cloud_; boost::mutex connect_mutex_; boost::mutex mutex_; bool receivedAtLeastOnce = false; ros::Subscriber sub_points_; ros::Publisher pub_planes_; ros::Publisher pub_point_height_; std_msgs::Float32MultiArray msg_planes_; pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_raw, cloud, cloud_f; float leaf_size = 0.01; // 10cm pcl::VoxelGrid<pcl::PointXYZ> vg; // pcl::SACSegmentation<pcl::PointXYZ> seg; PclSegmentationNodelet(/* args */) { NODELET_INFO_STREAM("PclSegmentationNodelet constructor"); } ~PclSegmentationNodelet() {} virtual void onInit(); void pointCb(const sensor_msgs::PointCloud2ConstPtr &points_msg); void connectCb(); void planarSegmentation(); }; void PclSegmentationNodelet::onInit() { ros::NodeHandle &nh = getNodeHandle(); nh_ptr_.reset(new ros::NodeHandle(nh)); // cloud_.reset(new 
sensor_msgs::PointCloud2(nh)); NODELET_INFO_STREAM("Initialising nodelet... [PclSegmentationNodelet]"); ros::SubscriberStatusCallback connect_cb = boost::bind(&PclSegmentationNodelet::connectCb, this); boost::lock_guard<boost::mutex> lock(connect_mutex_); pub_planes_ = nh_ptr_->advertise<std_msgs::Float32MultiArray>("planes", 3, connect_cb, connect_cb); pub_point_height_ = nh_ptr_->advertise<std_msgs::Float32>("test_msg", 3, connect_cb, connect_cb); cloud_raw.reset(new pcl::PointCloud<pcl::PointXYZ>); cloud.reset(new pcl::PointCloud<pcl::PointXYZ>); } void PclSegmentationNodelet::connectCb() { boost::lock_guard<boost::mutex> lock(connect_mutex_); sub_points_ = nh_ptr_->subscribe("points", 3, &PclSegmentationNodelet::pointCb, this); } void PclSegmentationNodelet::pointCb(const sensor_msgs::PointCloud2ConstPtr &points_msg) { // boost::mutex::scoped_lock lock (mutex_); NODELET_DEBUG("PointCloud2 height %f", (double)points_msg->height); std::cout << points_msg->height << std::endl; // convert points_msg to pcl point cloud pcl::fromROSMsg(*points_msg, *cloud_raw); std_msgs::Float32 tmp_msg; tmp_msg.data = (float)cloud_raw->height; pub_point_height_.publish(tmp_msg); pub_planes_.publish(msg_planes_); } } // namespace using_image_pipeline // Register as nodelet #include <pluginlib/class_list_macros.h> PLUGINLIB_EXPORT_CLASS(using_image_pipeline::PclSegmentationNodelet, nodelet::Nodelet); Originally posted by praskot on ROS Answers with karma: 257 on 2020-08-13 Post score: 1 Original comments Comment by praskot on 2020-08-13: After some googling, I see that the issue is with loading pcl. Answer: Issue was having dependency on PCL find_package(PCL 1.2 REQUIRED) in addition to pcl_ros. Originally posted by praskot with karma: 257 on 2020-08-13 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by wienans on 2021-02-19: Hey @praskot, i face the same problem. What did you actually change to fix it. 
I deleted the 'find_package(PCL REQUIRED)' in the CMakeLists.txt but it didn't helped with the issue. And i can't read more out of your answer. Comment by praskot on 2021-02-19: Yeah, I just removed the line find_package(PCL 1.2 REQUIRED) and it worked for me. Are you facing the exact same error? Can you post your CMakelist here? Comment by wienans on 2021-02-22: Yes, i had the same undefined symbol: _ZN3pcl7PCLBaseINS_8PointXYZEE13setInputCloudERKN5boost10shared_ptrIKNS_10PointCloudIS1_EEEE and this was only happening in the nodelet. A normal node compiled without any problem. But actually now i fixed it. For me i used the velodyne_pointcloud::PointXYZIR instead of pcl::PointXYZ for my code. After i changed it to pcl::PointXYZ the issue was fixed with and without deleting find_package(PCL 1.2 REQUIRED) But thanks for your help
{ "domain": "robotics.stackexchange", "id": 35404, "tags": "ros, ros-melodic, pcl, nodelet" }
First Unique Character in a String
Question: The task is taken from leetcode Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1. Examples: s = "leetcode" return 0. s = "loveleetcode", return 2. Note: You may assume the string contains only lowercase letters. My solution /** * @param {string} s * @return {number} */ function firstUniqChar(s) { const arr = Array.from([...s] .reduce((map, x, i) => map.set(x, !isNaN(map.get(x)) ? null : i) , new Map()) .values()); for(let i = 0; i < arr.length; i++) { if (arr[i] !== null) { return arr[i]; } } return -1; }; Answer: Never indent as you have done. This makes the code very unreadable due to the long lines. Map.values() creates an iterable object. It does not require an array to be created and is \$O(1)\$ storage as it iterates over the already stored values You have forced the map to be stored as an array. Array.from(map.values()) That means that it must be iterated and stored \$O(n)\$ complexity and storage best case. If you just kept the map and iterated to find the result you could have a best case of \$O(1)\$ to find the result. Always iterate iterable objects to reduce memory use and, if there is an early exit, to reduce CPU use and complexity. Rewrite It is around 2× as fast depending on input function firstUniqChar(str) { const counts = new Map(); var idx = 0; for (const c of str) { // iterate to avoid storing array of characters if (counts.has(c)) { counts.get(c).count ++ } else { counts.set(c, {idx, count: 1}) } idx++; } for (const c of counts.values()) { if (c.count === 1) { return c.idx } } return - 1; }
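For comparison, the same two-pass counting idea transcribed to Python (a sketch for illustration, not part of the original review):

```python
def first_uniq_char(s: str) -> int:
    # First pass: count occurrences of each character
    counts = {}
    for c in s:
        counts[c] = counts.get(c, 0) + 1
    # Second pass: return the first index whose character occurs exactly once
    for i, c in enumerate(s):
        if counts[c] == 1:
            return i
    return -1

assert first_uniq_char("leetcode") == 0
assert first_uniq_char("loveleetcode") == 2
assert first_uniq_char("aabb") == -1
```

Like the review's rewrite, this avoids materialising an intermediate array and exits early on the first unique character.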
{ "domain": "codereview.stackexchange", "id": 34585, "tags": "javascript, algorithm, programming-challenge, ecmascript-6" }
Clarification on the Seebeck Effect
Question: Alright, I've been interested in the Seebeck effect lately, so I've been trying to learn it. From what I understand, this is measured with the Seebeck coefficient, which gives you the $\mu\textrm{V}$ (millionths of a volt) per $\textrm{K}$ (kelvin). For example (according to this), if I take molybdenum and nickel, with 1 kelvin of difference, I will produce 25 $\mu\textrm{V}$. This is where I need clarification: is this per contact (of any size)? I'd assume that size DOES matter, at which point I'd ask: what unit of surface area is this in? (ex: $\mu\textrm{V}/\textrm{K}/\text{cm}^2$) The only reason why I'd think that it is per contact is that I can't find any unit of surface area. Thanks in advance for your time. Answer: The thermoelectric effect is the direct conversion of temperature differences to electric potential differences (Seebeck effect) and vice-versa (Peltier effect). When considering the electrical currents and heat fluxes involved, there is a size dependency, but such is not the case for the temperature differences and the electric potential differences involved. The proportionality factor between the temperature difference and the electric potential difference is a material-dependent constant.
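The answer's point — that the open-circuit voltage depends only on the temperature difference and the material pair, with no area term — can be illustrated numerically, using the relative Seebeck coefficient quoted in the question (~25 µV/K for the Mo–Ni pair, taken as given):

```python
SEEBECK_MO_NI = 25e-6  # relative Seebeck coefficient [V/K], value from the question

def seebeck_voltage(delta_t_kelvin):
    """Open-circuit voltage of the junction pair for a given temperature difference."""
    # Note: no contact-area term appears anywhere; the voltage is set by
    # the material pair and delta T alone.
    return SEEBECK_MO_NI * delta_t_kelvin

print(seebeck_voltage(1.0))    # 2.5e-05 V, i.e. 25 microvolts
print(seebeck_voltage(100.0))  # 0.0025 V, i.e. 2.5 millivolts
```

Contact size would only enter when computing currents and heat fluxes (via the junction's electrical and thermal resistances), not the voltage itself.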
{ "domain": "physics.stackexchange", "id": 52603, "tags": "electricity, heat, thermoelectricity" }
Definition question: continuum-subtracted spectrum
Question: What is the exact definition of a "continuum-subtracted spectrum", such as one finds in SDSS spectra? http://classic.sdss.org/dr7/dm/flatFiles/spSpec.html#specmask Is there a definition somewhere? Thanks Answer: A spectrum of, say, a star or a galaxy, consists of wavelengths emitted from various physical processes. In the case of the star, there is the blackbody spectrum that reflects the temperature of all of its gas, whether it is hydrogen, helium, or metals. On top of this, there are lines, i.e. emission from certain atomic transitions that are excited and then de-excite emitting photons in a very narrow wavelength interval. The blackbody spectrum is said to comprise the continuum of the total spectrum. In the case of galaxies, the continuum consists of the combined blackbody spectra of many different stars, infrared light from dust, etc. On top of this, there are lines, e.g. from nebular processes such as my favorite line Lyman $\alpha$, which is emitted when hard UV radiation from hot O and B stars ionizes their surrounding neutral hydrogen, which subsequently recombines emitting mostly Lyman $\alpha$. If we want to measure the total flux in a given line, we are usually only interested in the flux that comes from a particular physical process, and not the part of the flux that comes from the "underlying" sources which contribute to the continuum. Hence, we need to subtract the continuum. This is usually done by fitting some functional form to the continuum (e.g. a straight line or a second-order polynomial), subtracting this fit from the spectrum, and finally integrating from one side of the line to the other in the resulting "continuum-subtracted spectrum". In the top figure below, the total spectrum's flux density$^\dagger$ is in black, and the fit is the red lines. Underneath is drawn the continuum-subtracted spectrum, and a few dashed green lines show some lines that rise above the continuum. 
The integrated line flux is indicated by light red. $^\dagger$Note that many authors use the term "flux" where it should actually be "flux density". Integrating flux density in $\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{Å}^{-1}$ over wavelength in Å gives flux in $\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}$.
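The fit–subtract–integrate procedure described above can be sketched with NumPy on a synthetic spectrum (all numbers here are invented for illustration):

```python
import numpy as np

# Synthetic spectrum: linear continuum plus one Gaussian emission line
wl = np.linspace(4000.0, 4100.0, 501)          # wavelength grid [angstrom]
true_continuum = 2.0 + 0.001 * (wl - 4000.0)   # flux density [arbitrary units]
line = 5.0 * np.exp(-0.5 * ((wl - 4050.0) / 2.0) ** 2)
flux = true_continuum + line

# 1) Fit a functional form (here a straight line) to line-free windows
line_free = (wl < 4030.0) | (wl > 4070.0)
coeffs = np.polyfit(wl[line_free], flux[line_free], deg=1)

# 2) Subtract the fit to obtain the continuum-subtracted spectrum
flux_sub = flux - np.polyval(coeffs, wl)

# 3) Integrate across the line to get the total line flux
in_line = (wl >= 4030.0) & (wl <= 4070.0)
step = wl[1] - wl[0]
line_flux = float(np.sum(flux_sub[in_line]) * step)
# Analytic value: amplitude * sigma * sqrt(2*pi) = 5 * 2 * sqrt(2*pi) ≈ 25.07
```

The recovered `line_flux` matches the analytic Gaussian integral, while the fitted slope and intercept recover the injected continuum.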
{ "domain": "astronomy.stackexchange", "id": 1122, "tags": "spectra, sky-survey" }
What is the physical meaning of the exterior derivative of a non-holonomic constraint form?
Question: Suppose we have a non-holonomic mechanical system, say Lagrangian, for example the Chaplygin sleigh is a model of a knife in the plane. Its configuration space is $Q = S^1\times \mathbb{R}^2$ with local coordinates $q = (\theta,x,y)$. The non-holonomic constraint is "no admissible velocities are perpendicular to the blade" which is specified by a one-form \begin{equation} \omega_q = -\sin\theta\, dx + \cos\theta\, dy. \end{equation} When evaluating on a velocity $\dot{q}$ we specify the velocity constraint $$ v = \omega_q\cdot \dot{q} = -\dot{x}\sin\theta + \dot{y}\cos\theta = 0. $$ What is the mathematical/physical meaning of the exterior derivative of the constraint one-form? Since $\omega_q$ is a one-form, in physics, we typically interpret it as a force. Mathematically, we integrate this form over a 'line' to get the work due to the force over the 'line'. The exterior derivative is a 2-form (does this have any physical significance??) $$ \boldsymbol{d}\omega = -\cos\theta d\theta \wedge dx - \sin\theta d\theta \wedge dy $$ If we evaluate this 2-form along a tangent (velocity) vector $\dot{q}$, we find the 'force' $$ \alpha_q = \boldsymbol{d}\omega(\dot{q},.) = \left(\dot{x}\cos\theta + \dot{y}\sin\theta \right)d\theta - \cos\theta\dot{\theta}dx - \sin\theta \dot{\theta}dy. $$ Does this 'force', $\alpha_q \in T^*Q$, have any direct/obvious physical/mathematical physics/differential geometric interpretation? Answer: Let us work more generally: suppose that on the $m$-dimensional configuration space $Q$, we have a set of $r$ pointwise-independent $1$-forms $\theta^\alpha$ ($\alpha=1,\cdots,r$) giving a non-holonomic constraint through the Pfaffian equation $\theta^\alpha=0$. For simplicity I am assuming the constraint is scleronomic, so it does not involve the time variable. 
The Frobenius integrability theorem tells us that the constraint is equivalent to a holonomic constraint if and only if $d\theta^\alpha=0$ $\mod \theta^1,\cdots,\theta^r$, or in other words $$ d\theta^\alpha=\xi^\alpha_{\ \beta}\wedge\theta^\beta, $$ where the $\xi^\alpha_{\ \beta}$ is any matrix of $1$-forms. Let $\Delta$ be the associated distribution on $Q$ given by $$ \Delta_p=\ker\theta^1_p\cap\cdots\cap\ker\theta^r_p. $$ An equivalent statement of Frobenius' theorem is then that the Pfaffian system is integrable if and only if $$ d\theta^\alpha(u,v)=0 $$ for any pair of vectors $u,v\in\Delta$ that belong to the distribution. Now the $1$-forms $u\lrcorner\ d\theta^\alpha$ then have the interpretation that if they are orthogonal to $\Delta$ for all $u$, then the system is Frobenius-integrable and is thus equivalent to a holonomic constraint. In OP's example this meaning is more pronounced for the following reason. OP's configuration manifold is $3$-dimensional, and OP's Pfaffian system is generated by a single $1$-form. Thus the distribution corresponding to the Pfaffian is two dimensional, and in the neighborhood of each point there is a local frame for the distribution that consists of a pair of independent vector fields. Accordingly suppose that $\dot q$ is a velocity field compatible with the constraint that does not vanish anywhere. Then $d\omega(\dot q,\cdot)=\dot q\lrcorner\ d\omega$ is orthogonal to the distribution if and only if the distribution is integrable. Thus this expression essentially measures how much the constraint fails to be holonomic.
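Applying this criterion to the sleigh: taking $\omega = -\sin\theta\,dx + \cos\theta\,dy$ (the sign consistent with the stated velocity constraint $-\dot{x}\sin\theta + \dot{y}\cos\theta = 0$), a direct computation of the single-form integrability obstruction $\omega \wedge d\omega$ gives:

```latex
\omega \wedge d\omega
  = \left(-\sin\theta\,dx + \cos\theta\,dy\right)
    \wedge
    \left(-\cos\theta\,d\theta\wedge dx - \sin\theta\,d\theta\wedge dy\right)
  = -\left(\sin^{2}\theta + \cos^{2}\theta\right)\, d\theta\wedge dx\wedge dy
  = -\,d\theta\wedge dx\wedge dy
  \;\neq\; 0 .
```

Since $\omega\wedge d\omega \neq 0$ everywhere, Frobenius' theorem rules out any integrating factor, so no holonomic constraint reproduces the sleigh's distribution — which is exactly why the terms $u\lrcorner\,d\omega$ carry nontrivial information here.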
{ "domain": "physics.stackexchange", "id": 81245, "tags": "classical-mechanics, differential-geometry, constrained-dynamics" }
Binary Heap implementation in Common Lisp and tests
Question: I implemented a basic binary heap in Common Lisp to learn programming using the CLOS. It is quite neat and self-contained, the core of the program is about 100 SLOC. The whole repository can be found here. The crux of the program is binheap.lisp: ;;;; Binary heap implementation. ;;;; We implemented this structure using the excellent article at: ;;;; https://en.wikipedia.org/wiki/Binary_heap ;;;; The binary heap is implemented as a CLOS class. ;;;; Please note that the vector you provide to the heap object is used in-place ;;;; throughout the life of the heap. It is up to you to make copies and ensure ;;;; the vector is not modified externally. (in-package :binhp) (defclass heap () ((vec :documentation "Resizeable array to store the implicit Binary Heap." :initarg :vec :initform (error "class heap: Please provide a vector") :accessor vec) (test :documentation "Total order function. The heap enforces: {funcall totorder parent child} throughout the binary tree." :initarg :test :initform (error "class heap: Please provide a total order relation.") :accessor test))) (defun make-heap (vec test) "Heap constructor. I: vec: Vector to back the implicit binary tree structure. Works in place. Must have a fill-pointer. test: The total order to enforce throughout the heap. [funcall test parent child] is true throughout the tree." (assert (array-has-fill-pointer-p vec)) (assert (typep test 'function)) (let ((hp (make-instance 'heap :vec vec :test test))) (build hp) hp)) ;;; Main (defmethod build ((tree heap)) "Initial building of the binary heap from the input vector of data. Sub-method." ;; We work our way up the tree calling the down-heap method on each parent ;; node. ;; Parent nodes are the ones from 0 to floor{n-2 / 2} included. (loop for ind from (floor (/ (- (length (vec tree)) 2) 2)) downto 0 do (down-heap tree ind))) (defmethod insert ((tree heap) newkey) "Push a new element to the heap. I: * Heap instance. * New element." 
(with-slots ((arr vec) (test test)) tree ;; Inserts a new element at the end of arr and performs a up-heap. ;; Last element of the array is guaranteed to be a leaf of the tree. (vector-push-extend newkey arr) ;; Compare the new element with its parent node. ;; * If order is respected or if we've reached the root of the tree ;; then return. ;; * Else swap. And repeat. (let* ((ind (1- (length arr))) (parind (floor (/ (1- ind) 2)))) (loop while (and (not (= ind 0)) (not (funcall test (aref arr parind) (aref arr ind)))) do (rotatef (aref arr parind) (aref arr ind)) (setf ind parind) (setf parind (floor (/ (1- ind) 2))))))) (defmethod down-heap ((tree heap) ind) "Perform the down-heap operation. Move the parent node at 'ind' downwards until it settles in a suitable position. Sub-method, not exposed to user." ;; Compare the current key with its two children. Return if order is respected ;; else swap the current key with the child that respects the total order with ;; the other child. Also return if we have reached a leaf of the tree. ;; Nodes at an index starting at ceil{n-2 / 2} are leafs. (with-slots ((arr vec) (test test)) tree (let* ((maxind (1- (length arr))) (leaf-limit (floor (/ (1- maxind) 2))) (left-child (+ (* 2 ind) 1)) (right-child (min (1+ left-child) maxind))) (loop while (and ;; Order of tests matters here! (not (> ind leaf-limit)) (not (and (funcall test (aref arr ind) (aref arr left-child)) (funcall test (aref arr ind) (aref arr right-child))))) do ;; Find out the right child to swap with and swap. (if (funcall test (aref arr left-child) (aref arr right-child)) (progn (rotatef (aref arr ind) (aref arr left-child)) (setf ind left-child)) (progn (rotatef (aref arr ind) (aref arr right-child)) (setf ind right-child))) (setf left-child (+ (* 2 ind) 1)) (setf right-child (min (1+ left-child) maxind)))))) (defmethod extract ((tree heap)) "Pop the root element from the heap. Rearranges the tree afterwards. I: Heap instance. O: Root element." 
(with-slots ((arr vec)) tree (let ((root (aref arr 0))) ;; replace the root with the last leaf ;; resize vector ;; down-heap the new root. (setf (aref arr 0) (aref arr (1- (length arr)))) (vector-pop arr) (down-heap tree 0) root))) (defmethod print-tree ((tree heap)) "Print the whole tree in a very basic formatting, level by level and left to right." (with-slots ((arr vec)) tree (let* ((n (length arr)) (h (floor (log n 2)))) ;; The heap is already ordered by level. And each level is in the right ;; order. (loop for level from 0 upto h do (loop for ind from (1- (expt 2 level)) below (1- (expt 2 (1+ level))) do (if (< ind n) (format t "~a " (aref arr ind)))) (terpri t))))) (defmethod size ((tree heap)) (length (vec tree))) And you can see some examples in example.lisp: ;;;; Examples illustrating the use of the binary heap implementation. ;;; Loading the library, adjust the paths to your own directories. ;;; Can also be done by loading the source files directly. (if (not (member #p"~/portacle/projects/" asdf:*central-registry*)) (push #p"~/portacle/projects/" asdf:*central-registry*)) (ql:quickload :binheap) ;;; Max-heaps ;; Let's build a heap of integers ordered from biggest to smallest. (defparameter *arr* (make-array 6 :fill-pointer 6 :initial-contents (list 3 4 1 2 5 6))) (defparameter *heap* (binhp:make-heap *arr* #'>=)) ;; #'>= is the relation enforced throughout the heap between every parent node ;; and its children. (binhp:print-tree *heap*) ;; => ;; 6 ;; 5 3 ;; 2 4 1 ;; Alright, this is a nice heap. ;; You can insert elements in it: (binhp:insert *heap* 3.5) (binhp:print-tree *heap*) ;; => ;; 6 ;; 5 3.5 ;; 2 4 1 3 ;; The new element fits in the heap. ;; You can pop elements to get the successive biggest of the heap: (loop for it from 0 below (length *arr*) do (format t "~a " (binhp:extract *heap*))) (terpri t) ;; => 6 5 4 3.5 3 2 1 ;;; The same goes for Min-heaps, just replace #'>= with #'<=. ;;; You can define any relation that is a total order in 'test. 
;;; Alphabetical heap. ;; The heap implementation works for any element types and any total order. ;; Let's put some strings in an alphabetical order heap. (defparameter *arr* (make-array 5 :fill-pointer 5 :initial-contents (list "Pierre" "Jacques" "Paul" "Jean" "Luc"))) (defparameter *heap* (binhp:make-heap *arr* #'string-lessp)) (binhp:print-tree *heap*) ;; => ;; Jacques ;; Jean Paul ;; Pierre Luc (loop for it from 0 below (length *arr*) do (format t "~a " (binhp:extract *heap*))) (terpri t) ;; => Jacques Jean Luc Paul Pierre As well as some tests in test.lisp: ;;;; Simple tests for validating binheap. ;;; Loading the library, adjust the paths to your own directories. (if (not (member #p"~/portacle/projects/" asdf:*central-registry*)) (push #p"~/portacle/projects/" asdf:*central-registry*)) (ql:quickload :binheap) ;;; Validation (format t "Creating empty or small binary heaps:...") (dotimes (it 10 t) (let ((arr (make-array it :fill-pointer it))) (binhp:make-heap arr #'>=))) (format t " OK") (terpri t) (format t "Simple heaps and operations:...") (loop for test in (list #'>= #'<=) do (loop for nelem from 10 upto 50 do (let ((arr (make-array nelem :fill-pointer nelem)) (arrval (make-array nelem)) (hp nil)) (loop for ind from 0 below nelem do (setf (aref arr ind) (random 100)) (setf (aref arrval ind) (aref arr ind))) (setf hp (binhp:make-heap arr test)) ;; Now pop all the elements and verify that we get the right order (sort arrval test) (loop for ind from 0 below nelem do (assert (= (binhp:extract hp) (aref arrval ind)))) ;; Reinsert shuffled elements. (loop for ind from 0 below nelem do (rotatef (aref arrval ind) (aref arrval (random nelem)))) (loop for elem across arrval do (binhp:insert hp elem)) ;; Now repop everything and check order. 
(sort arrval test) (loop for ind from 0 below nelem do (assert (= (binhp:extract hp) (aref arrval ind))))))) (format t "OK") (terpri t) ;;; Performance (terpri t) (format t "Performance:") (terpri t) (loop for nelem in (list 100 10000 1000000) do (let ((arr (make-array nelem :element-type 'double-float :fill-pointer nelem :initial-element 0d0)) (hp nil)) (loop for ind from 0 below nelem do (setf (aref arr ind) (random 100d0))) (format t "Building a max-heap of ~a double-floats: " nelem) (terpri t) (time (setf hp (binhp:make-heap arr #'>=))) (format t "Popping a max-heap of ~a double-floats: " nelem) (terpri t) (time (dotimes (it nelem t) (binhp:extract hp))) (format t "Reinserting ~a double-floats:" nelem) (terpri t) (time (dotimes (it nelem t) (binhp:insert hp (random 100d0)))))) Review I am mostly happy with binheap.lisp (but should I be?). Are there any obvious shortcuts that could be used to make the code more elegant/efficient? The tests are quite awkward. I validate the library for random cases and do a few performance benchmarks for double-float types. Is there any package that you could recommend for the same kind of tests but in a less awkward way to program and read? All this testing might be overkill for such a simple and small program, but my goal is to have a self-contained example of a typical, very neat Common-Lisp project. So by all means be nitpicky please. Answer: Packaging The project and repository is called cl-binheap, while the ASDF system is called binheap - it's a good idea for those to match, mostly due to UX: I'll try loading the repository name first all the time. Worse is the package name binhp. Now we have three names instead of one. Simply pick one of them and go with it (not binhp though, why's leaving out two vowels make things better?). Depending on who you'd like to use the code, the Unlicense might not be so advisable. The README.md looks okayish, I'd rather also see an API reference with some more details though. 
Code

The tests use assert; instead I'd recommend one of the existing frameworks, possibly also linking it into ASDF so that asdf:test-system works.

Some of the source files correctly have in-package, some don't - I'd suggest always specifying which package to use, even (or especially) if it's example code.

example.lisp doesn't work for me on CCL (I'd also suggest testing code with at least two or more implementations if you want it to be widely used): #(6 5 3 2 4 1) is not an adjustable array. Adding :adjustable t to the definition of *arr* fixes that.

binheap.lisp has some trailing whitespace and some tabs. Consider M-x delete-trailing-whitespace and M-x untabify.

(floor (/ x y)) could be simplified to (floor x y) here (note the second return value is gonna be different though).

(not (= ind 0)) will hopefully be optimised. If not, however, (not (eql ind 0)) might be better (of course (/= ind 0) also exists, but still does numeric comparison).

I'd suggest adding a key to make the container more generic / match the standard sequence operators. This helps immensely when adding some objects and then using one of the attributes for comparison; it composes better than a single test argument.

Lastly, CLOS is great and all, but strictly speaking all of the methods could be functions and therefore be a bit quicker. Unless of course you have plans to add more containers with the same interface.

Consider also annotating everything with types and looking at what the compiler (well, SBCL only really) will tell you about problems when compiling with higher optimisation settings ((declaim (optimize ...))). However, unless you're very certain, don't put (declare (optimize ...)) into the library code - it's easy to get that wrong.

Don't use assert if the error isn't correctable by changing the value interactively. Like in make-heap, both of those should be regular errors: retrying won't fix the problem (that's a common restart established with assert), and changing vec or test isn't something you'd do interactively ... I think. So check-type and error would be the way to go here.

For randomised testing there's AFAIK nothing like a standard package; look for Quickcheck clones or "random testing common lisp" probably.

Edit: Oh, I just saw you said to be nitpicky. Alright then:

- It's "Common Lisp", no dash :)
- ind, *arr*, arr, arrval, etc. aren't great names. Especially in function signatures, consider matching what the standard uses for similar purposes. I'm betting that it's index and array respectively. Common Lisp has a tendency to have long names (for good!) and I'm (nitpicky) of the opinion that it's good style to match that (as much as possible).
- loop should be replaced by iterate because it has more parentheses. Not kidding, it simply looks better.
- The docstrings are suboptimal and again should match an existing style. A style like ": * Heap instance..." I haven't seen before, and it doesn't even mention which parameter (name) it describes. Take some exemplary documentation as a guideline perhaps.
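As a side note on the review's "add a key" suggestion: the idea is easy to demonstrate with a generic heap outside Lisp. Here is a minimal Python sketch (using the standard heapq module; the names make_heap/extract merely mimic the reviewed API and are not part of it) that reproduces the alphabetical-heap example with a case-insensitive key:

```python
import heapq

def make_heap(items, key=lambda x: x):
    # Store (key, tie-breaker, item) triples so the items themselves
    # never need to be comparable; the key alone decides the order.
    heap = [(key(item), i, item) for i, item in enumerate(items)]
    heapq.heapify(heap)
    return heap

def extract(heap):
    # Pop the smallest element according to the key.
    return heapq.heappop(heap)[2]

names = ["Pierre", "Jacques", "Paul", "Jean", "Luc"]
h = make_heap(names, key=str.lower)  # case-insensitive, like #'string-lessp
print([extract(h) for _ in range(len(names))])
# -> ['Jacques', 'Jean', 'Luc', 'Paul', 'Pierre']
```

The key composes better than a bare comparison function for exactly the reason the review gives: comparing objects by one attribute only needs `key=...`, not a new two-argument predicate.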
{ "domain": "codereview.stackexchange", "id": 36097, "tags": "unit-testing, heap, common-lisp" }
Member function state machine
Question: For various reasons, I wrote this. For one, I don't like to wrap every single function in a separate class, which is too verbose. It's a state machine based on GoTW 57 function pointer wrapping and CRTP to make things readable, at least in client code parts.

template <class Derived>
struct StateMachine
{
    typedef Derived Self;
    typedef StateMachine<Derived> Base;

    //http://www.gotw.ca/gotw/057.htm
    struct Func;
    typedef Func (Derived::*UpdateFuncPtr)();
    struct Func
    {
        UpdateFuncPtr updateFunc;
        Func(UpdateFuncPtr ptr) : updateFunc(ptr) {}
    };

    //actual statemachine .. can be as complex as needed
    Func currentState;

    StateMachine( Func init) : currentState(init) {}

    void update() {
        UpdateFuncPtr p = currentState.updateFunc;
        Self * pThis = (Self *)this;
        Func newPtr = (pThis->*p)();
        if(newPtr.updateFunc)
            currentState = newPtr;
        }
};

The client looks like this:

struct DemoSM : StateMachine<DemoSM>
{
    int check; //just checking that *this is sane

    DemoSM() : Base( &Self::update1 )
    {
        check = 42;
    }

private:
    Func update1()
    {
        log("update1 called, check %d\n\r", check);
        return &Self::update2;
    }

    Func update2()
    {
        log("update called\n\r");
        return NULL;
    }
};

And just checking:

DemoSM machine;
machine.update();
machine.update();

Any critique or suggestions for improvements or gotchas welcome. The core principle is that the state machine functions themselves code, procedurally (not declaratively in a table), how the transitions get triggered - by returning the function that needs to be called next. This can be made marginally more interesting with separate "update" and "state transition" functions, maybe with different signatures.

I could not get the GoTW-suggested operator() to work for taking pointers to members. C++ is insistent on &Class::member; syntax there.
Answer:

CRTP helpers

When using CRTP, it is common to write derived methods in the base class to avoid having to deal with the this pointer and to use reference semantics instead:

Derived& derived() { return static_cast<Derived&>(*this); }
Derived const& derived() const { return static_cast<const Derived&>(*this); }

Thanks to these functions, your update method becomes:

void update()
{
    UpdateFuncPtr p = currentState.updateFunc;
    Func newPtr = (derived().*p)();
    if (newPtr.updateFunc)
        currentState = newPtr;
}

And it is now easier to have the base class call methods from the derived type without having to deal with the this pointer.

Improving Func

If you want to impose more responsibility on Func and less on StateMachine itself, you can still add operator UpdateFuncPtr() const to Func to allow easy conversions to and from the class:

operator UpdateFuncPtr() const { return updateFunc; }

That will allow you to simplify the update method again:

void update()
{
    UpdateFuncPtr p = currentState;
    Func newPtr = (derived().*p)();
    if (newPtr)
        currentState = newPtr;
}

The implicit conversion to `UpdateFuncPtr` is also used for the boolean conversion needed to check whether `newPtr` is `NULL` or not. Be careful though, implicit conversions are often the source of many hidden errors.

Implementing `operator()` in `Func` would be quite difficult since it would mean that `Func` must know the instance of `StateMachine` on which you wish to call the methods. All in all, it is easier to keep the `this` pointer out of `Func`.

Access modifiers

There are several parts of the base class that you do not need in the derived class. You could make them private:

- UpdateFuncPtr currentState
- The derived methods, if you choose to use them

Moreover, there are even more things that the final users of the state machine don't need to know. Actually, they only need to know about update, so almost everything else could be made protected.

Stylistic tidbits

There are a few more details that can be improved with your code:

- Try to always use constructor initialization lists when you can (you may have forgotten it in this particular case because it's only an example):

  DemoSM() : Base( &Self::update1 ), check( 42 ) {}

- Your brace indentation in update is really misleading. When modifying the code, I found it hard to read since the last brace seems to match the if while it actually closes the function block. To reflect that, this last brace should be aligned with the function signature, like every other function in your code.

- If possible, try to avoid if blocks without braces. While it may be easier to write when there is only one line, your code proves that it can hide subtle bugs: at first, I thought that the last closing brace closed the if, and I tried to add lines to it (to perform some checks) while I wasn't actually adding to the if. Had you used braces for the if, I wouldn't have had this problem.

- You don't seem to be very consistent with regard to parentheses and spaces. You might want to choose some guideline and apply it everywhere.
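The core "a state function returns the next state function" idea in the reviewed code is language-agnostic. As a side illustration (my own sketch, not part of the review), in a language with first-class functions the same pattern needs none of the member-pointer machinery:

```python
class DemoSM:
    """Each state is a method that returns the next state, or None to keep the current one."""

    def __init__(self):
        self.log = []
        self.state = self.update1  # initial state, like Base(&Self::update1)

    def update(self):
        next_state = self.state()
        if next_state is not None:   # mirrors `if (newPtr.updateFunc)`
            self.state = next_state

    def update1(self):
        self.log.append("update1")
        return self.update2          # transition to the next state

    def update2(self):
        self.log.append("update2")
        return None                  # stay in this state

m = DemoSM()
m.update()
m.update()
print(m.log)  # -> ['update1', 'update2']
```

This shows the transitions are coded procedurally inside the state functions themselves, exactly as in the C++ version; the C++ Func wrapper exists only because a raw member-function-pointer type cannot name its own return type.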
{ "domain": "codereview.stackexchange", "id": 8903, "tags": "c++, state-machine, c++03" }
How to save topics to a file?
Question: I want to save these topics to a file:

/amcl/parameter_descriptions
/amcl/parameter_updates

What can I do?

Originally posted by yasamin on ROS Answers with karma: 11 on 2014-05-14
Post score: 0

Answer: rosbag is the thing you are looking for: http://wiki.ros.org/rosbag

It's described well in this tutorial.

Originally posted by BennyRe with karma: 2949 on 2014-05-14
This answer was ACCEPTED on the original site
Post score: 0

Original comments

Comment by yasamin on 2014-05-14: Thank you. I have a problem with rosbag record -a: when I type that command it gives me this error: [ WARN] [1400075193.299487382]: /use_sim_time set to true and no clock published. Still waiting for valid time... What can I do?

Comment by ahendrix on 2014-05-14: It sounds like you have (or had) a simulator running, but for some reason it isn't publishing the clock topic, or your rosbag node can't subscribe to it. You should make sure the simulator is running, and see if there are any messages published to /clock
{ "domain": "robotics.stackexchange", "id": 17940, "tags": "rostopic" }
Newton's cradle faster than light?
Question: Suppose we have a Newton's cradle toy where the balls actually touch each other. Can energy be transferred from the first ball to the last one faster than the speed of light? And what factors control the energy flow in such a case?

Answer: No - energy from collisions between the balls won't be transferred faster than light. It will be transferred at the speed of sound within the metal, which is much, much slower than the speed of light. For example, for solid steel balls, the speed of sound is roughly 5900 m/s, so a collision at one end of a 5-cm-long chain of steel balls will take around 8 microseconds to propagate to the other end - detectable with advanced high-speed cameras.
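A quick back-of-the-envelope check of the numbers quoted above (straightforward arithmetic, nothing more):

```python
# Time for a compression wave to cross a 5 cm chain of steel balls,
# compared with the light travel time over the same distance.
speed_of_sound_steel = 5900.0  # m/s, approximate speed of sound in solid steel
speed_of_light = 3.0e8         # m/s
chain_length = 0.05            # m, a 5-cm-long chain

t_sound = chain_length / speed_of_sound_steel
t_light = chain_length / speed_of_light
print(f"{t_sound * 1e6:.1f} us via sound, {t_light * 1e9:.2f} ns via light")
# -> 8.5 us via sound, 0.17 ns via light
```

So the collision signal crosses the cradle some fifty thousand times slower than light would, which is why high-speed cameras can resolve it.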
{ "domain": "physics.stackexchange", "id": 61337, "tags": "newtonian-mechanics, speed-of-light, collision, faster-than-light, shock-waves" }
Is the activation function the only difference between logistic regression and perceptron?
Question: As far as I know, logistic regression can be denoted as: $$ f(x) = \sigma(w \cdot x + b) $$ A perceptron can be denoted as: $$ f(x) = \operatorname{sign} (w \cdot x + b) $$ It seems that the only difference between logistic regression and a perceptron model is the activation function. Is this correct?

Answer: TL;DR: Yes and no; they're both similar decision-function models, but there's more to each model than its main formulation.

One could use the logit function as the activation function of a perceptron and consider the output a probability. Yet, that value would likely need a probability calibration. As with most ML models, several things are very similar from model to model; on the other hand, varying tiny parameters can result in a different model. Let's look at both sides:

Logit Regression and Perceptron similarities

The logit function is used in logit regression for its properties of being an S-curve, by default valued between 0 and 1. The sign activation function in the perceptron is also shaped like an "S-curve" (with very rough edges - so mathematically not an S-curve by its definition, but with similar properties), valued "between" -1 and 1. Another activation function often used with the perceptron is the hyperbolic tangent (tanh), which is another S-curve - very similar to the sign function but with a rounded shape (and also valued between -1 and 1). We can say that tanh is similar to sign because: $$ \texttt{sign}(x) \approx \texttt{tanh}(kx) \qquad \text{for } k \gg 0 \\ \text{or} \\ \texttt{sign}(x) = \texttt{tanh}(kx) \qquad k \to \infty $$ So it makes sense to compare tanh with logit as an analogy to comparing sign with logit. Now the logit function is (the one used in logistic regression, that is - not the statistical $\ln(x/(1-x))$ one): $$ \texttt{logit}(x) = \frac{L}{1 + e^{-k(x - b)}} $$ where $L$, $k$ and $b$ are parameters that we can steer.
(Since $b$ is a bias term, an unknown constant, it does not matter if we write $-b$ or $b$.)

And the hyperbolic tangent is: $$ \texttt{tanh}(x) = \frac{e^{2x} - 1}{e^{2x} + 1} $$ But wait, if we set the parameters of the logit as $L = 1$, $k = 1$ and $b = 0$, then: $$ 2 \cdot \texttt{logit}(2x) - 1 = 2 \cdot \frac{1}{1 + e^{-2x}} -1 = \frac{2e^{2x}}{e^{2x} + 1} - 1 = \frac{2e^{2x}}{e^{2x} + 1} - \frac{e^{2x} + 1}{e^{2x} + 1} = \frac{2e^{2x} - e^{2x} - 1}{e^{2x} + 1} = \frac{e^{2x} - 1}{e^{2x} + 1} = \texttt{tanh}(x) $$ So tanh, which is more-or-less a rounded sign, is a scaled and shifted special case of logit. So not only is the activation function the main apparent difference between the models - the two activation functions are themselves very similar to each other.

Logit Regression and Perceptron differences

Log probabilities (this is probably the major difference and the most important one)

We looked at the logit function above, but in reality logit regression takes the logarithm of the logit instead of plain values for the probabilities. Or, more exactly, in statistics the logit is defined as the natural logarithm of what in most ML implementations is defined as a logit decision function. This is very different from the perceptron, which produces the output directly from the activation function.

Regularization

Both models (logit regression and perceptron) are often trained with some form of gradient descent optimizer (be it SGD, with or without momentum, or even something else). This training will optimize all parameters ($w$ in your representation, or $k$ in mine for the logit, and the weights in a perceptron), but the model itself will also be given a bunch of hyper-parameters. And in terms of hyper-parameters things do differ:

The perceptron will be trained by optimizing the weights (including a bias) and can be regularized with $L_2$ or $L_1$ (or a combination of both). One may or may not add a bias term.
The actual model of the logistic regression will optimize the logarithm, but there's more: most implementations will include a scaling hyper-parameter ($C$) which will multiply the log probabilities. One can use $C$ to regularize the model apart from $L_1$ and/or $L_2$.

Multi-class

The perceptron is always a binary classifier. One sets an output threshold and that works as the decision function (for the sign or tanh functions the threshold is often $0$). For multi-class classification one must build One-Vs-Rest or One-Vs-One groups of models.

Logistic regression can be used as a binary classifier, and in that case it too can be used for multi-class classification with One-Vs-Rest and One-Vs-One methods. But there exists a formulation of logistic regression for direct multi-class classification: $$ \texttt{multinomial logit}(x \mid y) = \frac{e^{-k_y(x - b_y)}}{\sum^{K}_{c=0} e^{-k_c(x - b_c)}} \qquad \text{for } y \in \{0, 1, \dots, K\} $$ i.e. whether $x$ belongs to class $y$. This is evaluated for each class, and the normalization in the denominator is exactly the softmax function. This multi-class formulation can be performed because logit regression deals with log probabilities instead of direct inputs (contrary to a perceptron).
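The identity derived above, $2 \cdot \texttt{logit}(2x) - 1 = \texttt{tanh}(x)$, is easy to confirm numerically; a small self-contained Python check (no ML library needed):

```python
import math

def logistic(x, L=1.0, k=1.0, b=0.0):
    # The "logit" S-curve as defined in the answer.
    return L / (1.0 + math.exp(-k * (x - b)))

# 2 * logistic(2x) - 1 == tanh(x) for the default parameters L=1, k=1, b=0.
for x in [-4.0, -1.0, 0.0, 0.5, 3.0]:
    assert abs(2 * logistic(2 * x) - 1 - math.tanh(x)) < 1e-12
print("identity holds")
```

So the perceptron's rounded tanh activation really is just a scaled and shifted logistic curve, which makes the "only the activation differs" intuition even sharper.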
{ "domain": "datascience.stackexchange", "id": 5354, "tags": "machine-learning, deep-learning, classification" }
how to get a 2D map using octomap
Question: I am using Ubuntu 11.10 and Electric ROS. I've already got a 3D map using the rgbdslam pkg, and now I want to down-project the 3D map to 2D for localization. However, I cannot find a tutorial on how to do this using octomap, though I know it has such a function. Can someone give me any advice? Thanks!

Originally posted by Chong on ROS Answers with karma: 76 on 2012-06-18
Post score: 1

Original comments

Comment by Chithra on 2013-02-07: Hello, I am currently facing the same problem! How did you solve it? Sorry about opening a closed thread. I am unable to get much info on this online. Thank you!

Answer: octomap_server publishes a 2D map on the topic /map.

Originally posted by Dan Lazewatsky with karma: 9115 on 2012-06-18
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by Chong on 2012-06-18: I am new to ROS, so I am confused about how to get this 2D map from the topic and use it later. Can you give me some more specific instructions? Thanks a lot!

Comment by cagatay on 2012-06-18: Try this command: rosrun map_server map_saver -f map. It will save the map in your current directory.

Comment by Chong on 2012-06-18: Do I have to run some command before that?
{ "domain": "robotics.stackexchange", "id": 9830, "tags": "navigation, mapping, octomap" }
How to make the openni_camera package output 320x240 rgb and depth images directly
Question: Hi, in openni_camera I learned that its ROS API has parameters like:

~image_mode (int, default: 2)
Image output mode for the color/grayscale image. Possible values are:
SXGA_15Hz (1): 1280x1024@15Hz
VGA_30Hz (2): 640x480@30Hz
VGA_25Hz (3): 640x480@25Hz
QVGA_25Hz (4): 320x240@25Hz
QVGA_30Hz (5): 320x240@30Hz
QVGA_60Hz (6): 320x240@60Hz
QQVGA_25Hz (7): 160x120@25Hz
QQVGA_30Hz (8): 160x120@30Hz
QQVGA_60Hz (9): 160x120@60Hz

~depth_mode (int, default: 2)
Depth output mode. Possible values are:
SXGA_15Hz (1): 1280x1024@15Hz
VGA_30Hz (2): 640x480@30Hz
VGA_25Hz (3): 640x480@25Hz
QVGA_25Hz (4): 320x240@25Hz
QVGA_30Hz (5): 320x240@30Hz
QVGA_60Hz (6): 320x240@60Hz
QQVGA_25Hz (7): 160x120@25Hz
QQVGA_30Hz (8): 160x120@30Hz
QQVGA_60Hz (9): 160x120@60Hz

Now I want the Kinect to output 320x240 rgb and depth images directly. I tried to set this parameter in a launch file but failed. How do I set these parameters? Thank you very much~~

Originally posted by lligen on ROS Answers with karma: 1 on 2015-09-07
Post score: 0

Answer: These are dynparam, not rosparam, so in a launch file try something like

<node pkg="dynamic_reconfigure" type="dynparam" name="reconfig" output="screen" args="set /camera/driver depth_mode 11" />

for depth_mode. Check dynamic_reconfigure for more, e.g. the list of available resolutions.

Originally posted by Humpelstilzchen with karma: 1504 on 2015-09-08
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by lligen on 2015-09-09: thanks for your help~
{ "domain": "robotics.stackexchange", "id": 22582, "tags": "openni-camera" }
Finding an algorithm that decides given 2 regular expressions $E_1$ and $E_2$ and a non-negative integer $k$, whether $|L(E_1) \backslash L(E_2)| = k$
Question: Find an algorithm that decides, given 2 regular expressions $E_1$ and $E_2$ and a non-negative integer $k$, whether $|L(E_1) \backslash L(E_2)| = k$.

I know that regular languages are closed under set difference, so I can find a regular expression for the difference, then convert it to a DFA and break it up into a graph of strongly connected components, following which it's possible to solve it using dynamic programming. But is there a way to solve this without converting it to a DFA? (I'm trying to find a solution that's not difficult to implement.)

Answer: Converting to a DFA is pretty easy to implement, especially if you use a library with support for manipulating regular expressions and automata. I doubt you're going to find something simpler to implement.
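If you just want something quick to experiment with, a brute-force count using an off-the-shelf regex engine is about as easy as implementation gets. This is only a sketch (the helper name is made up, and the count is conclusive only together with a length bound, e.g. every member of a finite regular language is shorter than the number of states of a DFA for it):

```python
import re
from itertools import product

def diff_count_up_to(e1, e2, alphabet, max_len):
    """Count strings of length <= max_len in L(e1) \\ L(e2), by enumeration.

    Note: uses Python's `re` syntax, which is a superset of textbook
    regular expressions; keep the inputs to the classical operators.
    """
    count = 0
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = ''.join(chars)
            if re.fullmatch(e1, s) and not re.fullmatch(e2, s):
                count += 1
    return count

# Example: L(a|b) \ L(b) = {"a"}, so the count is 1.
print(diff_count_up_to('a|b', 'b', 'ab', 3))  # -> 1
```

Enumeration is exponential in max_len, so this only replaces the DFA route for tiny instances; the product-DFA construction the answer recommends stays the sensible approach for a real decision procedure.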
{ "domain": "cs.stackexchange", "id": 19828, "tags": "automata, regular-expressions" }
Install packages for kinect
Question: Does anyone know how to use the description package for the Innok Heros in Kinetic, not Indigo? http://wiki.ros.org/innok_heros_gazebo/Tutorials/Simulating%20Innok%20Heros

Originally posted by ich4913 on ROS Answers with karma: 11 on 2018-04-13
Post score: 0

Answer: For packages that have no binary release for a particular ROS version (or OS), you can always try to build them from sources. No guarantees though (dependencies might not be available, there could be breaking changes in dependencies, etc.). See #q252478 for the general procedure. Pay special attention to where the dependencies are installed.

Originally posted by gvdhoorn with karma: 86574 on 2018-04-13
This answer was ACCEPTED on the original site
Post score: 1
{ "domain": "robotics.stackexchange", "id": 30632, "tags": "ros-kinetic, ros-indigo" }
Why is the power of a periodic signal squared?
Question: I have DSP in my academics, and while going through the video lectures I got stuck on the power of a periodic signal. It is given as $$P_{\text{avg}}= \frac{1}{N}\sum_{n=0}^{N-1}|x(n)|^2$$ My doubt is: why is $|x(n)|$ squared?

Answer: Power and energy are always squared quantities. If you consider simple circuits as an example, then power is $V^2/R$ or $I^2R$ (with $V$ voltage, $I$ current, and $R$ resistance). For time-varying signals, the power or energy is computed by a time average of the squared signal. For stochastic signals, the power is defined by the expectation $E[|x(t)|^2]$, which again is usually estimated by computing time averages. The fact that we're always dealing with squared quantities basically goes back to the definition of energy and power.
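A quick numerical illustration of why the square is needed: for a zero-mean sinusoid the plain time average tells you nothing, while the squared average recovers the familiar $A^2/2$ (a small Python sketch):

```python
import math

N = 1000  # one full period of a discrete sinusoid
A = 2.0   # amplitude
x = [A * math.cos(2 * math.pi * n / N) for n in range(N)]

mean = sum(x) / N                        # averages to ~0: no information
power = sum(abs(v) ** 2 for v in x) / N  # averages to A**2 / 2
print(round(power, 9))  # -> 2.0
```

Positive and negative half-cycles cancel in the plain average, but both carry energy; squaring makes every sample contribute non-negatively, matching the $V^2/R$ and $I^2R$ circuit formulas in the answer.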
{ "domain": "dsp.stackexchange", "id": 944, "tags": "discrete-signals" }
Can I add dependencies after using catkin_create_pkg?
Question: I first used catkin_init_workspace and thereafter catkin_create_pkg example1 roscpp rospy std_msgs, and building a simple example file works flawlessly. Now if I run catkin_create_pkg example2 without specifying dependencies, it doesn't work, even though I edited the package.xml and imported exactly the same source code. The reason I want to do this is that I need to move some source code from Fuerte to Hydro, and I'd rather just copy the <build_depend> tags from one xml file to the other. But it doesn't work.

Originally posted by paturdc on ROS Answers with karma: 157 on 2014-04-30
Post score: 0

Original comments

Comment by dornhege on 2014-04-30: What doesn't work?

Answer: Turns out I need to add dependencies in CMakeLists.txt also, and I had not seen the one at the top:

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
)

Originally posted by paturdc with karma: 157 on 2014-04-30
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by dornhege on 2014-04-30: This looks like CMakeLists.txt, not package.xml.
{ "domain": "robotics.stackexchange", "id": 17820, "tags": "ros, ros-hydro, catkin-create-pkg" }
ROS to LCM or ROS to APRIL Integration?
Question: Hello answers, Is anybody working on a general way of interfacing ROS with LCM or APRIL? There are some tools there I'd like to use. Doing it myself won't be too painful, but I figured I'd ask before I got too far in. Originally posted by Mac on ROS Answers with karma: 4119 on 2012-03-22 Post score: 1 Original comments Comment by Mac on 2012-04-04: I was right: it's not painful at all. If I get a chance, I'll write up a tutorial somewhere... Answer: Hi Mac, I am interested in a port of the APRIL tag toolkit to ROS, and was planning on doing it quick and easy using a rosjava wrapper without necessarily using LCM. If you've made some progress on interfacing LCM with ROS, do you think there might be a more generic way for me to set up a port? Originally posted by piyushk with karma: 2871 on 2012-07-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Mac on 2012-07-09: Short answer: no. ROSJava is probably a better way. If this does get done, please please please do a release! I know of several interested people. Comment by piyushk on 2012-08-16: @Mac: http://www.ros.org/wiki/april Comment by Mac on 2012-08-16: I saw that on ros-users today. Awesome!
{ "domain": "robotics.stackexchange", "id": 8688, "tags": "ros, integration" }
PowerShell script for zipping up old files
Question: I have a script for zipping up old files. I know using Windows Compression isn't ideal, so I will make the script run using 7-Zip later on. For now though, I just want to see how I can make my current script better. By better I mean: how could I make this code cleaner or neater? How could I make the script more effective? Here is the code:

Write-Output("Beginning script....")

#File path of file to be cleaned
$File_Path = "C:\Users\Administrator\Downloads\Testing\*"

#Location of ZIP file
$Send_To = "C:\Users\Administrator\Documents\ARCHIVE2"

#Location of old files before being zipped
$Old_Files = "C:\Users\Administrator\Documents\OLD_FILES"

#Time frame for files
$Days = "-65"
$now = Get-Date
$last_Write = $now.AddDays($Days)

#Filtering files according to time parameters
$Filter = Get-ChildItem -Path $File_Path | Where-Object { $_.LastWriteTime -lt $last_Write }

#Moving old files to destination folder
if (!$Filter){
    Write-Host "Variable is null"
}
else{
    Move-Item $Filter -Destination $Old_Files
    Write-Output("Moving files....")
}

#Compressing destination folder if there are folders in $Old_Files folder
if (!(Test-Path $Old_Files)) {
    Write-Output("No old files found")
}
else{
    Compress-Archive -Path $Old_Files -DestinationPath $Send_To -Update
    Write-Output("Old files zipped!")
    Remove-Item -Path $Old_Files -Force
}

Write-Output("Script is finished")

Answer: I spent several weeks writing and tuning a script that moves old files to a timestamped Zip file. Here are a few guidelines I learned that may help:

1) Move variables into script parameters so they can be changed at runtime without editing the file:

Param($File_Path = "C:\Users\Administrator\Downloads\Testing\*", #File path of file to be cleaned
      $Send_To = "C:\Users\Administrator\Documents\ARCHIVE2",    #Location of ZIP file
      $Old_Files = "C:\Users\Administrator\Documents\OLD_FILES", #Location of old files before being zipped
      $Days = "-65"
)

2) Filter usually means 'criteria for filtering a list'.
Calling the list of files $Filter is misleading; $FilesToZip might be a better variable name.

3) There are lots of reasons a file can't be moved or zipped (in use, doesn't exist, no read/write permission, etc.). You should have a try-catch block around Compress-Archive to account for this, and fail or proceed gracefully. Using ZipFileExtensions, an error created an unusable file until it was finalized. Compress-Archive may be more robust, but there are still chances for failure (try pulling your network cable while creating a large archive of files on your LAN).

4) There's no need for an intermediate folder, and it creates more opportunities for failure. You can just compress the file into the zip and delete on success.

5) The message 'No old files found' is misleading. Reading it, I would think that there were no files found matching the age criteria, when it really means the intermediate folder doesn't exist.

6) Don't you need a .Zip file name for the -DestinationPath?

7) Write-Output is usually used for return values rather than messages. Write-Verbose or Write-Host are more appropriate for status messages.
{ "domain": "codereview.stackexchange", "id": 34428, "tags": "file-system, powershell" }
Double variation of Schwinger action principle
Question: The Schwinger action principle is given by $$\delta_{1}\big\langle b\big|a\big\rangle= i\int_{t_{a}}^{t_{b}}\text{d}t\,\sum_{c,d}\big\langle b\big|c\big\rangle\big\langle c\big|\delta_{1}L(t)\big|d\big\rangle\big\langle d\big|a\big\rangle$$ where the state $\big|c\big\rangle$ is at time $t_c$, and so on. Now we perform another variation $\delta_{2}$ which is independent of the first variation $\delta_{1}$: $$\delta_{2}\delta_{1}\big\langle b\big|a\big\rangle= i\int_{t_{a}}^{t_{b}}\text{d}t\,\sum_{c,d}\bigg[\bigg(\delta_{2}\big\langle b\big|c\big\rangle\bigg)\big\langle c\big|\delta_{1}L(t)\big|d\big\rangle\big\langle d\big|a\big\rangle +\big\langle b\big|c\big\rangle\big\langle c\big|\delta_{1}L(t)\big|d\big\rangle\bigg(\delta_{2}\big\langle d\big|a\big\rangle\bigg)\bigg]$$

D.J. Toms writes (in the book "The Schwinger Action Principle and Effective Action", page 345): "Note that since the second variation in the structure of the Lagrangian is independent of the first, there is no term like $\delta_2\big\langle c\big|\delta_{1}L(t)\big|d\big\rangle$ in the above equation."

Could someone elaborate on this and maybe show with an example why this is true? The closest I could come up with was something along the lines of: "$\delta_{2}\big\langle c\big|\delta_{1}L(t)\big|d\big\rangle=0$, since if $\delta_{2}$ and $\delta_{1}$ are with respect to different functions this term will be zero. If they are variations of the same variables this will be of second order and will be ignored."

Answer: Well, it becomes a bit clearer when we see the final formulas of Ref. 1: $$\delta \langle a_f , t_f |a_i , t_i \rangle ~=~ \frac{i}{\hbar} \int_{t_i}^{t_f} \! dt \langle a_f , t_f | \delta L(t) |a_i , t_i \rangle \tag{7.126} $$ $$ \delta^{\prime} \delta \langle a_f , t_f |a_i , t_i \rangle ~=~\frac{1}{2}\left(\frac{i}{\hbar}\right)^2 \int_{t_i}^{t_f} \! dt \int_{t_i}^{t_f} \!dt^{\prime} $$ $$\times \langle a_f , t_f |T[ \delta L(t)~\delta^{\prime}L(t^{\prime}) ] |a_i , t_i \rangle.
\tag{7.131}$$

Recall that the Schwinger action principle can be described via a time integral $\int_{t_i}^{t_f}\!dt~ \delta L(t)$ of an operator $\delta L(t)$. Imagine that the time interval $$[t_i,t_f]~=~\cup_{n=1}^N I_n, \qquad I_n~:=~[t_{n-1},t_n],$$ is divided into a sufficiently fine discretization $t_i=t_0<t_1< \ldots< t_{N-1} < t_N=t_f$, where the integer $N$ is sufficiently large. By inserting many completeness identities $\sum_b |b , t_n \rangle\langle b , t_n |={\bf 1}$, we can split a total variation (7.126) into many small contributions labelled by the time intervals $I_n$, $n\in\{1,2, \ldots, N\}$.

Similarly, when performing a double variation (7.131), we will get $N$ diagonal and $N(N-1)$ off-diagonal contributions labelled by two time intervals $I_n$ and $I_m$, where $n,m\in\{1,2, \ldots, N\}$. (A diagonal contribution $n=m$ refers to the same time interval $I_n=I_m$.) If, in the limit $N\to\infty$, the $N(N-1)$ off-diagonal contributions dominate over the $N$ diagonal contributions, the two variations become effectively independent.

However, in hindsight, it seems that Ref. 1 is assuming that the two variations $\delta$ and $\delta^{\prime}$ are manifestly independent and not just effectively independent. Manifest independence here means that the $\delta^{\prime}$ variation simply doesn't act on the $\delta L(t)$ operator, and vice-versa.

References:
1. D.J. Toms, The Schwinger Action Principle and Effective Action, 1997, Section 7.6.
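A schematic way to see the counting (my own order-of-magnitude sketch, not taken from Ref. 1): each $\delta L$ insertion restricted to a single sub-interval contributes a factor of order $\Delta t$, so the two classes of terms scale as

```latex
\underbrace{N \cdot O(\Delta t^2)}_{\text{diagonal},\; n=m}
  \,=\, O\!\left(\frac{(t_f-t_i)^2}{N}\right) \xrightarrow[N\to\infty]{} 0,
\qquad
\underbrace{N(N-1) \cdot O(\Delta t^2)}_{\text{off-diagonal},\; n\neq m}
  \,=\, O\!\left((t_f-t_i)^2\right),
\qquad
\Delta t \,=\, \frac{t_f-t_i}{N}.
```

So in the continuum limit only terms with the two variations in different sub-intervals survive, which is the precise sense in which the off-diagonal contributions dominate and the double variation reduces to the time-ordered double integral.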
{ "domain": "physics.stackexchange", "id": 10326, "tags": "lagrangian-formalism, variational-calculus" }
What happens to our force when we walk on ice?
Question: I'm scratching my head a lot in trying to understand friction. So far I understand that "without friction we would not be able to walk". But that sounds really vague and unclear, so much in fact that it doesn't make any sense to me unless I give it one more thought.

As far as I understood: the force of friction appears when there are two contact surfaces that interact against each other (I guess the same thing as "they are in relative motion to one another"). As long as the threshold of this friction is not surpassed, the force of friction will adjust itself to any existing opposite force, balancing it.

If I understand, when walking the road will have some friction, and if I exert a backwards force against the friction of the ground, that same friction will apply the equal force to my foot, allowing me to move forwards. For this to happen, the force of friction must adjust itself to my contact force? What force does my foot exert? I'm confused...

If my "force" is bigger than the frictional force, then the friction won't be able to provide that force in the opposite direction, so my foot will "continue its path", i.e. slip. But where does my force go? It makes me accelerate? I don't think "slipping" means accelerate. There is something I can't understand about friction...

Answer:

> the force of friction appears when there are two contact surfaces that interact against each other

If by "interact against" you mean "try to slide over", then correct.

> As long as the threshold of this friction is not surpassed the force of friction will adjust itself to any existing opposite force, balancing it.

Indeed. And to be accurate, every mention of friction here refers to static friction (as opposed to kinetic friction), which is what happens when sliding is prevented (when something tries to slide but doesn't).
If I understand when walking, the road will have some friction and if I exert a backwards force against the friction of the ground that same friction will apply the equal force to my foot, allowing me to move forwards. Yes, although the sentence is a bit cumbersome. A surface does not "have friction". Rather, it has a roughness so that friction can appear when sliding against another surface wants to begin. For this to happen, the force of friction must adjust itself to my contact force? What force does my foot exert? I'm confused... When you apply a backwards force with your foot - we can call it a stepping force, if we will - then as per Newton's 3rd law the ground responds with an equal but opposite forwards static friction. I call it a stepping force but there is no typical official name for it as a whole, as far as I'm aware. Depending on the scenario you might also call it thrust or traction or the like as mentioned in a comment. If my "force" is bigger than the frictional force then the friction won't be able to provide that force in the opposite direction, so my foot will "continue its path" i.e. slip. What you mean here is that there is an upper maximum limit to static friction. But note, your stepping force cannot be bigger than this limit. Your stepping force can only exist if an equal but opposite static friction force also exists (again, this is Newton's 3rd law). If the ground cannot respond with an equal static friction force then it lets go, meaning the static friction force disappears. Then you don't have to apply any larger stepping force since your foot is just slipping and sliding and not feeling any resistance to apply force against. But where does my force go? It makes me accelerate? I don't think "slipping" means accelerate. As described, it doesn't "go" anywhere because you can't apply a force against nothing.
When you punch into empty air then you aren't applying any force; likewise when you step backwards you can only apply a force equal to whichever resistance your feet meet. Instead your foot just slips and slides backwards. Kinetic friction takes over now, and then your stepping force equals that one instead. But if no other force takes over - if there is no other resistance against your foot now - then you will just be "running on the spot" and will never move. As if running on ice, or if you are running while hanging in free space in a space station. This can in no way accelerate you forwards. Only a forwards-pointing force could do that. This is the reason that it is not your step which causes you to walk forward, it is indeed the friction from the ground.
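To put rough numbers on the static-friction limit described above (the friction coefficients and the person's weight below are assumed for illustration, not taken from the answer):

```python
# Toy comparison: the largest backward "stepping force" static friction can
# match on pavement vs. on ice, for a ~70 kg person.
mu_s = 0.7        # assumed static friction coefficient, rubber sole on pavement
mu_s_ice = 0.05   # assumed static friction coefficient, sole on ice
N = 700.0         # normal force from body weight [N]

def max_push(mu, normal):
    """Largest stepping force the ground can answer with static friction."""
    return mu * normal

print(max_push(mu_s, N))      # ~490 N available on pavement
print(max_push(mu_s_ice, N))  # ~35 N on ice -> a normal stride makes the foot slip
```

Any attempted push beyond these limits simply cannot be exerted: the foot slips, and the much smaller kinetic friction takes over.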
{ "domain": "physics.stackexchange", "id": 74177, "tags": "newtonian-mechanics, forces, friction" }
Extinction Level Event (Asteroid Impact Hypothesis) Likelihood Equation
Question: We have the Drake Equation that calculates the probability of intelligent ET (extraterrestrials/aliens). Since the asteroid impact hypothesis has been more or less accepted as the cause of the K-Pg Extinction Event that wiped out the nonavian dinosaurs, I was wondering if there's an Extinction by Asteroid/Comet Likelihood Equation along similar lines as the Drake Equation. A crude formula might look like this: $N$ = Number of asteroids/comets $f_I $= Fraction of N that regularly visit the inner solar system $f_P$ = Fraction of $f_I$ that are planetkillers $f_K$ = Fraction of $f_P$ that have a killzone that includes the earth Probability of extinction by asteroid/comet = $P(E_{a/c}) = f_I \cdot f_P \cdot f_K$ Number of threats = $f_I \cdot f_P \cdot f_K \cdot N$ Answer: Knowing that there are X asteroids that could threaten us is not in itself useful. What you want to get is an event rate, the probability of something happening per year. One oversimplified way is to reason like this: each dangerous object sweeps out a "risk volume" $V=\pi b^2 v$ per unit of time, where $b\sim R_\oplus$ is the distance between the object and the Earth that would lead to an impact, and $v$ is the velocity. The probability per unit time that Earth is in any such volume if there are $N$ objects is $P = 1-\exp(-\frac{\sum_{i=1}^N V_i}{V_{solar}})\approx 1-\exp(-N\bar{V}/V_{solar})$ where $V_{solar}$ is the relevant volume of the solar system (about $10^{26}$ m$^3$ if we use 30 AU). So you could now use $\bar{V}\approx 10^{18}$ or so (depending on your views on $b$ and $v$) and start estimating $N$ using your formula. This is somewhat doable, but quickly gets complex (different populations, orbits are not actually evenly distributed, etc.) A better approach may simply be to look at the past impacts causing extinctions! Depending on how you count, there has been one known mass extinction due to impacts since the Cambrian 538.8 mya, so the rate might be on the order of $2\cdot 10^{-9}$ per year.
But that likely leaves out a fair number of minor extinctions. If we assume all Big 5 were due to impacts the rate becomes $9\cdot 10^{-9}$ per year. Incidentally, to reproduce these rates from the above formula with the assumed values, $N$ should be in the range 0.2 to 1. Obviously this can be improved: we can use statistical modelling to get error bars, we can use the known size distribution of asteroids (a power law) to estimate the fraction of Earth-crossers and long-period comets that could be bad and their inflow rate, and so on. But that misses the Drake equation approach of trying to find a quick-and-dirty model that shows the key variables we care about and might want to estimate.
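The last arithmetic step can be sketched numerically; the values of $\bar V$ and the solar-system volume below are the rough ones quoted above (with their units folded into the ratio):

```python
import math

# Rough values quoted in the answer above
V_bar = 1e18     # per-object swept "risk volume" per year (assumed)
V_sys = 1e26     # relevant volume of the solar system (the ~30 AU figure)

def impact_rate(N):
    """Probability per year of an Earth impact, given N dangerous objects."""
    return 1.0 - math.exp(-N * V_bar / V_sys)

def implied_N(rate):
    """Invert the formula: how many objects a given event rate implies."""
    return -math.log(1.0 - rate) * V_sys / V_bar

# One impact-driven mass extinction since the Cambrian vs. all Big Five:
for rate in (2e-9, 9e-9):
    print(f"rate {rate:.0e}/yr -> N = {implied_N(rate):.2f}")  # 0.20 and 0.90
```

This reproduces the claim that $N$ in the range 0.2 to 1 matches the extinction record for these assumed volumes.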
{ "domain": "astronomy.stackexchange", "id": 7180, "tags": "asteroids, comets, extinction" }
Is it possible for a star to orbit a planet?
Question: Is it possible for a red dwarf to orbit a gas giant? OR Has this happened and it is just assumed that the gas giant is orbiting the star? Answer: Pretty much by definition, no. In order for "object A" to orbit "object B", "object B" needs to be substantially more massive than "object A". In order for your red dwarf to orbit something, that "something" needs to be more massive than a red dwarf. With the current composition of the universe, that "something" will be mostly hydrogen, and once you reach a certain level of mass (about 90 times that of Jupiter), you can't prevent nuclear fusion from starting. The definition of "planet" is a bit fuzzy, but everyone agrees something lit by nuclear fusion is a star, not a planet. This could theoretically change in the distant future, when there's enough non-hydrogen mass around for stellar-scale rocky bodies.
{ "domain": "astronomy.stackexchange", "id": 683, "tags": "star, orbit, gas-giants" }
How to solve the transmission probability in an evolution of a quantum system
Question: I've been learning the evolution of quantum systems for about a week, and our homework sometimes contains something like this. I don't quite have any idea how to solve this kind of problem. Can you help by giving me some fundamental enlightenment on these problems, say one like this: " Assume the spin number of a particle is 1; if the measurement of the x component of spin is +1 at first, then the y component is measured -1. What is the probability that the x component is measured to be +1 again? " Thank you very much! Answer: You need to know how to rotate wavefunctions from one z axis to another z axis, that's it. I'll do it for spin 1/2, and you can work out the same for spin 1. For spin 1/2, the operator which generates rotations around the x-axis is: $$ S_x = {1\over 2} \sigma_x$$ Notice that $\sigma_x^2=1$, so that using the Taylor series of the exponential: $$ e^{i\theta S_x} = \cos({\theta\over 2}) I + i \sin({\theta\over 2}) \sigma_x $$ If you rotate by a $\theta$ of 90 degrees, the resulting matrix is $$ {1\over\sqrt{2}} ( I + i \sigma_x ) $$ Applying this to the state (1,0), you get the state $(1, i)$, up to a normalizing factor of $\sqrt{2}$. This is the state of +1 spin in the y direction, and you can check that it is an eigenvector of $\sigma_y$. So if you prepare the state $(1,i)$, the amplitudes to be in the + and - z spin states are $1$ and $i$ (up to normalization). The squared moduli of these are the probabilities you want. To repeat this for spin 1, or any spin, you can use tensors of spin-1/2, or D matrices in elementary quantum mechanics books. This is not a transmission of anything, it's just asking what the probability is for state A to be found as state B in quantum mechanics without any time evolution, just using idealized measurements. This problem is also discussed very well from a physical point of view for spin 1/2 in Feynman's lectures III.
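As a quick numerical check of the spin-1/2 steps above (a NumPy sketch, not part of the original answer):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

theta = np.pi / 2  # 90-degree rotation about x, taking +z spin to +y spin
U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * sx

up_z = np.array([1, 0], dtype=complex)
psi = U @ up_z                       # = (1, i)/sqrt(2)

# it is indeed the +1 eigenvector of sigma_y:
print(np.allclose(sy @ psi, psi))    # True

# probabilities of then measuring +z or -z:
probs = np.abs(psi) ** 2
print(probs)                         # both entries equal 0.5
```

The squared amplitudes come out 1/2 each, as the normalized $(1,i)/\sqrt{2}$ state requires.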
{ "domain": "physics.stackexchange", "id": 5209, "tags": "quantum-mechanics, homework-and-exercises, probability" }
State space of QFT, CCR and quantization, and the spectrum of a field operator?
Question: In the canonical quantization of fields, the CCR is postulated as (for a scalar boson field): $$[\phi(x),\pi(y)]=i\delta(x-y)\qquad\qquad(1)$$ in analogy with the ordinary QM commutation relation: $$[x_i,p_j]=i\delta_{ij}\qquad\qquad(2)$$ However, using (2) we can demonstrate the continuous spectrum of $x_i$, while (1) raises the issue of $\delta(0)$ when searching for the spectrum of $\phi(x)$, so what would be its spectrum? I guess that the configuration space in QFT is the set of all functions of $x$ in $R^3$, so the QFT version of $\langle x'|x\rangle=\delta(x'-x)$ would be $\langle f(x)|g(x)\rangle=\delta[f(x),g(x)]$, but what does $\delta[f(x),g(x)]$ mean? If you will say it means $\int Dg(x) F[g(x)]\delta[f(x),g(x)]=F[f(x)]$, then how is the measure $Dg(x)$ defined? And what is the cardinality of the set $\{g(x)\}$? Is the state space of QFT a separable Hilbert space also? Then are field operators well defined on this space? Actually if you choose to quantize in an $L^3$ box, many issues will not emerge, but many symmetries cannot be studied in this approximation, such as translation and rotation, so that would not be the standard route, so I wonder how the rigor is preserved in the formalism in the whole space rather than in a box or cylinder model? I'm just beginning to learn QFT, and know little about the mathematical formulation of QFT, so please help me with these conceptual issues. Answer: The objects such as $\hat \phi(x,y,z,t)$ in a QFT are strictly speaking "operator distributions". They differ from "ordinary operators" in the same way that distributions differ from functions. Only if you integrate such operator distributions over some region with some weight $\rho$, $$\int d^3 x\,\hat\phi(x,y,z,t) \rho(x,y,z,t)=\hat O,$$ do you obtain something that is a genuine "operator". In a free QFT, the state vectors may be built as combinations of states in the Fock space – an infinite-dimensional harmonic oscillator.
But you may also represent them via a "wave functional". Much as the wave function in non-relativistic quantum mechanics $\psi(x,y,z)$ depends on 3 spatial coordinates, a wave functional depends on a whole function, $\Psi[\phi(x,y,z)]$. For each allowed configuration of $\phi(x,y,z)$, there is a complex number. Yes, one may also integrate over all classical functions $\phi(x,y,z)$. There also exists a Dirac delta-like object, the Dirac "delta-functional", and it is usually denoted $\Delta$, $$\int {\mathcal D}\phi(x,y,z) F[\phi(x,y,z)] \Delta[\phi(x,y,z)] = F[0(x,y,z)]$$ I wrote the zero as a function of $x,y,z$ to stress that the argument of $F$ is still a function. The functional integration is a sort of infinite-dimensional integration and the delta-functional is an infinite-dimensional delta-function. One must be careful about these objects, especially if we integrate amplitudes that may have phases and especially if we integrate over curved infinite-dimensional objects such as infinite-dimensional gauge groups etc. – there may be subtleties such as anomalies. Yes, the Hilbert space of a free QFT is still isomorphic to the usual Hilbert space: there is a countable basis. But we're talking about the finite-energy excitations only. There are lots of "highly excited states" that aren't elements of the Fock space – one would need infinite occupation numbers for all one-particle states. Physically, such states are inaccessible because the energy can't be infinite. However, when one is changing from one Hamiltonian to another (e.g. by simple operations such as adding the interaction Hamiltonian), finite-energy states of the former $H_1$ may be infinite-energy states of the latter $H_2$ and vice versa. So one must be careful: the physically relevant finite-energy Hilbert space may be obtained from some infinite-occupation-number states in a different, e.g. approximate, Hamiltonian.
It's still true that the relevant Hilbert space is as large as a Fock space and it has a countable basis. The "totally inaccessible" states that are too-strong deformations have an important name – they're "different superselection sectors". Rigor is a strong word. People tried to define a QFT rigorously – by AQFT, the Algebraic/Axiomatic Quantum Field Theory. These attempts have largely failed. It doesn't mean that there isn't any "total set of rules" that QFT obeys. Instead, it means that it's not helpful to be a nitpicker when it comes to the new issues that arise in QFT relative to more ordinary models of quantum mechanics; it's not fully appropriate to think that a QFT is "exactly like a simpler QM model", but it's equally inappropriate to forget that it's formally an object of the same kind. Formally, many things proceed exactly in the same way and there are also new issues (unexpected surprises that contradict a "formal treatment") that have some physical explanation and one should understand this explanation. Some of these new subtleties are "IR", connected with long distances, some of them are "UV", connected with ever shorter distances. The fact that a QFT has infinitely many degrees of freedom is both an IR and UV issue. So even if you put a QFT into a box, you won't change the fact that you need wave functionals, delta-functionals, and that there are superselection sectors and states inaccessible from the Fock space. By the box, you only regulate the IR subtleties but there are still the UV subtleties (momenta, even in a box, may be arbitrarily large). Those may be regulated by putting the QFT on a lattice. This has some advantages but some limitations, too.
{ "domain": "physics.stackexchange", "id": 6793, "tags": "quantum-field-theory, hilbert-space, quantization, commutator" }
Modulus of action-reaction forces
Question: I read somewhere that action-reaction forces satisfy the form $$ \mathbf{F}_{12}=-\mathbf{F}_{21}=f(|P_1-P_2|)(P_1-P_2) $$ meaning that the modulus can only depend on the distance between the two points $P_1$ and $P_2$. Is that right? Cannot the modulus depend on, say, $|\mathbf{v}_1-\mathbf{v}_2|$? Answer: Yes, in principle it may depend! In pure Newtonian mechanics (not Lagrangian), invariance under the Galileo group requires that for a pair of isolated points $P_1,P_2$ described within an inertial reference frame, the force on $P_1$ due to $P_2$ has the general form: $$\vec{F}_{12}= \vec{F}_{12}(P_1-P_2, \vec{v}_1- \vec{v}_2)$$ with $$\vec{F}_{12}(R\vec{x}, R\vec{u})= R\vec{F}_{12}(\vec{x},\vec{u})\quad \mbox{ for all $R\in O(3)$.} $$ Conservation of momentum implies $\vec{F}_{12}= -\vec{F}_{21}$. Alternatively, this constraint can be imposed as the action-reaction principle, obtaining the conservation of total momentum as a consequence. If you also require the conservation of total angular momentum, you have to impose that $\vec{F}_{12}$ is directed along the segment joining $P_1$ and $P_2$. Summing up, you get $$\vec{F}_{12} = f\left(|P_1-P_2|, |\vec{v}_1- \vec{v}_2|, \alpha(\vec{P_1P_2},\vec{v}_1- \vec{v}_2) \right)\: \vec{P_1P_2}$$ where $f$ is a scalar function and $\alpha(\vec{u},\vec{v})$ is the angle between $\vec{u}$ and $\vec{v}$. The fact that $\vec{F}_{12}$ is directed along the segment joining $P_1$ and $P_2$ is sometimes called the stronger form of the action-reaction principle. Finally, dealing with a set of $N>2$ points, $P_1,\ldots,P_N$, the superposition principle requires that the total force acting on a point, say $P_i$, is the sum of all forces acting on $P_i$ due to $P_j$ with $j\neq i$ as if that pair were isolated. ADDENDUM. In principle $\vec{F}$ could depend on higher derivatives of the positions than velocities (e.g. accelerations and derivatives of accelerations, as happens in the semiclassical model of the electron).
However, as soon as one allows this, there is no longer a guarantee of so-called determinism: the fact that, under suitable initial conditions (positions and velocities in the standard case), the evolution of the system is determined.
{ "domain": "physics.stackexchange", "id": 13644, "tags": "newtonian-mechanics, classical-mechanics, forces" }
Inflation Theory. Horizon problem. Couldn't understand
Question: I'm reading a text from MIT, Inflationary Cosmology and the Horizon and Flatness Problems: The Mutual Constitution of Explanation and Questions, that is available online. However, I failed to understand this passage: For the numbers discussed above, this yields $$ d_H(t_{\rm end}) = \frac{1}{H}ce^{100} \approx 10^{19}~{\rm cm} $$ At present, the radius of the observable universe is of order $3ct\approx 10^{28}$ cm. At the end of the inflationary era, $R(t)$ was smaller by about a factor of $10^{-27}$. Thus, at that time, the presently observed observable universe had a physical size of about $10$ cm. Please enlighten me as to how the value of $R(t)$ was calculated to be smaller by a factor of $10^{-27}$. Thank you. P.s. - I understand how the horizon distance $d_H$ was calculated at the end of inflation. The only trouble I'm having is with the evaluation of $R(t)$ at the end of inflation. Answer: After inflation the universe goes through a radiation-dominated epoch, followed by a matter-dominated stage; the transition occurs at $$ 1 + z_{\rm eq} \approx 2.4\times10^{4}\Omega_{m,0}h^2 \tag{1} $$ More recently ($z\lesssim 0.6$) the density content of the universe is dominated by the cosmological constant, but I will neglect it.
During the first stage the scale factor $a$ (or $R$) follows the expression $$ a\sim t^{1/2} ~~~\mbox{for}~~~ z > z_{\rm eq} $$ therefore, if $a_{\rm end}$ is the scale factor after inflation we have $$ \frac{a_{\rm eq}}{a_{\rm end}} = \left(\frac{t_{\rm eq}}{t_{\rm end}} \right)^{1/2} \tag{2} $$ Similarly, during the matter-dominated epoch of the universe $$ a\sim t^{2/3}~~~~\mbox{for}~~~ z < z_{\rm eq} $$ So, if $a_0$ represents the scale factor today we have $$ \frac{a_0}{a_{\rm eq}} = \left(\frac{t_{0}}{t_{\rm eq}} \right)^{2/3} \tag{3} $$ Putting together (2) and (3) we arrive at $$ \frac{a_0}{a_{\rm end}} = \left(\frac{a_0}{a_{\rm eq}}\right)\left(\frac{a_{\rm eq}}{a_{\rm end}}\right) = \left(\frac{t_{0}}{t_{\rm eq}} \right)^{2/3} \left(\frac{t_{\rm eq}}{t_{\rm end}} \right)^{1/2} \approx 10^{26} $$ So the scale factor of the universe has increased by a factor of around $10^{26}$ since the end of inflation.
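As a sanity check, the product $(t_0/t_{\rm eq})^{2/3}(t_{\rm eq}/t_{\rm end})^{1/2}$ can be evaluated with rough standard values; the choice of $t_{\rm end}$ is model-dependent and is an assumption here:

```python
# Back-of-envelope check of the a0/a_end estimate.
# Assumed epochs (rough standard-cosmology values, not from the answer):
yr = 3.156e7                # seconds per year
t_end = 1e-32               # end of inflation [s] (model-dependent assumption)
t_eq  = 5.0e4 * yr          # matter-radiation equality [s]
t_0   = 13.8e9 * yr         # today [s]

# radiation era: a ~ t^(1/2); matter era: a ~ t^(2/3)
ratio = (t_0 / t_eq) ** (2 / 3) * (t_eq / t_end) ** 0.5
print(f"a0 / a_end = {ratio:.1e}")   # of order 10^26
```

With these inputs the ratio comes out at a few times $10^{25}$, i.e. of order $10^{26}$, consistent with the estimate above.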
{ "domain": "physics.stackexchange", "id": 41071, "tags": "cosmology, event-horizon, cosmological-inflation" }
Multiple Bipartite Entangled State in Cirq
Question: I am trying to create this state: rho = q . rho_{1,2} + r . rho_{2,3} + s . rho_{1,3} + (1-q-r-s) . rho_separable And I wrote this code: import random import numpy as np import cirq circuit, circuit2, circuit3 = cirq.Circuit() p = 0.2 q = 0.1 r = 0.3 alice, bob, charlie = cirq.LineQubit.range(1, 4) rho_12 = circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)]) #circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)]) rho_23 = circuit.append([cirq.H(bob), cirq.CNOT(bob, charlie)]) rho_13 = circuit.append([cirq.H(alice), cirq.CNOT(alice, charlie)]) circuit = rho_12 + rho_23 + rho_13 print(circuit) Here I have 2 problems: 1) This line is not working: circuit = rho_12 + rho_23 + rho_13 2) I cannot multiply the state with p or q or r. What I mean is that I can't write this line: rho_12 = circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)]) * q Could you please show me how I can write this state? Answer: You seem to think append is returning a circuit, instead of modifying the circuit you called it on. circuit.append(op) doesn't return anything, it adds an operation to circuit. alice, bob, charlie = cirq.LineQubit.range(1, 4) circuit = cirq.Circuit() circuit.append([cirq.H(alice), cirq.CNOT(alice, bob)]) circuit.append([cirq.H(bob), cirq.CNOT(bob, charlie)]) ... Alternatively, you can make a new circuit for each of the pieces and then add them together: rho_12 = cirq.Circuit( cirq.H(alice), cirq.CNOT(alice, bob), ) ... circuit = rho_12 + rho_23 + rho_13
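Note also that the weighted sum $q\,\rho_{1,2} + r\,\rho_{2,3} + s\,\rho_{1,3} + (1-q-r-s)\,\rho_{\rm sep}$ is a mixed state, which no single circuit prepares directly; you form it as a convex combination of density matrices. A plain NumPy sketch (the weights and the choice of separable state here are arbitrary placeholders, not from the question):

```python
import numpy as np

def dm(psi):
    """Density matrix |psi><psi| of a pure state vector."""
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# 3-qubit pure states: a Bell pair on two qubits, |0> on the third
psi_12 = np.kron(bell, zero)                     # qubits 1,2 entangled
psi_23 = np.kron(zero, bell)                     # qubits 2,3 entangled
psi_13 = (np.kron(np.kron(zero, zero), zero) +
          np.kron(np.kron(one, zero), one)) / np.sqrt(2)   # qubits 1,3 entangled
psi_sep = np.kron(np.kron(zero, zero), zero)     # |000>, a separable placeholder

q, r, s = 0.2, 0.1, 0.3                          # example weights
rho = (q * dm(psi_12) + r * dm(psi_23) +
       s * dm(psi_13) + (1 - q - r - s) * dm(psi_sep))

print(np.isclose(np.trace(rho).real, 1.0))   # True: valid density matrix
print(np.allclose(rho, rho.conj().T))        # True: Hermitian
```

A density-matrix simulator can then work with `rho` directly; the scalar weights multiply matrices, not circuits, which is why `circuit.append(...) * q` cannot work.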
{ "domain": "quantumcomputing.stackexchange", "id": 2042, "tags": "programming, quantum-state, entanglement, cirq" }
Parsing financials from a YAML file. Is this normal/idiomatic in Rust?
Question: I wrote this Rust code to parse my financials from a YAML file and my main concern is the large match branches (although general code review is welcome; still a Rust beginner): extern crate yaml_rust; use yaml_rust::{Yaml, YamlLoader}; fn main() -> Result<(), std::io::Error> { let doc = std::fs::read_to_string("finances.yaml")?; let data = YamlLoader::load_from_str(&doc).unwrap(); let doc = &data[0]; let map = doc.as_hash().unwrap(); for (k, v) in map.iter() { println!("{}: ", k.as_str().unwrap()); let sum = unwrap_value(v); println!("== {}", sum); println!(); } Ok(()) } fn unwrap_value(v: &Yaml) -> f32 { match v { Yaml::Hash(v) => { let mut sum = 0.; for (k, vv) in v.iter() { match vv { Yaml::String(_vv) => { print!("\t\t {}: ", k.as_str().unwrap()); } Yaml::Integer(_vv) => { print!("\t\t {}: ", k.as_str().unwrap()); } _ => { println!("\t* {}:", k.as_str().unwrap()); } } sum += unwrap_value(vv); } sum } Yaml::Array(v) => { let mut sub_sum = 0.; for h in v.iter() { sub_sum += unwrap_value(h); } println!("\t= {}", sub_sum); sub_sum } Yaml::String(v) => { let tot: f32 = v .split("+") .fold(0., |sum, s| sum + s.trim().parse::<f32>().unwrap()); println!("{}", tot); tot } Yaml::Integer(v) => { println!("{}", v); *v as f32 } _ => 0., } } Here's a sample YAML of the file I'm parsing: --- '2021-01-01': apartment: - rent: 2750 transportation: - uber: 87.69 + 55.36 + 26 + 42 + 42 + 34.92 + 25.76 + 42 + 42 - bus: 12 '2021-02-01': apartment: - rent: 2750 bills: - elctricity: 27 transportation: - uber: 87.69 + 55.36 + 26 + 42 + 42 + 34.92 + 25.76 + 42 + 42 - bus: 12 and this is the output for the example above: 2021-01-01: * apartment: rent: 2750 = 2750 * transportation: uber: 397.73 bus: 12 = 409.73 == 3159.73 2021-02-01: * apartment: rent: 2750 = 2750 * bills: elctricity: 27 = 27 * transportation: uber: 397.73 bus: 12 = 409.73 == 3186.73 Answer: Structure your data The reason you need nested matches is that even after parsing the YAML document into a Yaml object, 
it's still essentially unstructured data. Not every YAML object will have the right shape, so you have to validate it piece by piece as you traverse it. Worse, you'll have to validate it all over again next time you try to traverse it. This is inefficient and ugly. Define some data structures that will hold the information currently held in a YAML document. How you define these depends on how rigid the data format is and what you plan to do with it. Here's one way to do it. You'll need to add chrono to your dependencies. use chrono::NaiveDate as Date; struct Document { by_date: HashMap<Date, Expenses>, } struct Expenses { by_category: HashMap<String, Vec<Entry>>, } struct Entry { payee: String, payments: Vec<f32>, } Define what it means to total up the whole Document by deferring to its contents: impl Document { fn total(&self) -> f32 { // the total of a Document is the sum of the total expenses on each date self.by_date.values().map(|expenses| expenses.total()).sum() } } impl Expenses { fn total(&self) -> f32 { // the total is the sum of the totals of each Entry in each category self.by_category.values().flat_map(|entries| entries.iter().map(Entry::total)).sum() } } impl Entry { fn total(&self) -> f32 { // the total is the sum of all the individual payments self.payments.iter().sum() } } Note 1: I am completely ignoring the printing part of unwrap_value. You could add that to the above code relatively easily, but it would be noisy and not super instructive, so I'm just going to focus on the totaling. Note 2: It might sometimes be useful to write a Total trait and implement it for Entry, Document and Expenses, instead of having three unrelated functions. But it doesn't seem useful here, so I didn't. Deserialize with Serde Now, the only problem that remains is how to turn a YAML document into a Document. You could write code to convert from a Yaml object, but the bulk of the work has been done for you already by the Serde library. 
Serde is an extremely versatile library for serialization and deserialization of just about any Rust data structure from just about any data format. Any time you're considering serialization, unless you know you need something specific, it's a good idea to start with Serde. You'll need to add serde, serde-yaml and serde-derive to your dependencies. When the data format is flexible, you might just add #[derive(Deserialize)] on all your structs to get the default behavior. In this case, since there's a particular YAML format we're trying to match, we need to add some annotations to explain to Serde just how to put things together. I used chrono::NaiveDate in part because it already implements Deserialize, which makes this task easier by half. #[derive(Deserialize)] #[serde(transparent)] struct Document { by_date: HashMap<Date, Expenses>, } #[derive(Deserialize)] #[serde(transparent)] struct Expenses { by_category: HashMap<String, Vec<Entry>>, } #[derive(Deserialize)] #[serde(try_from = "HashMap<String, String>")] struct Entry { payee: String, payments: Vec<f32>, } impl TryFrom<HashMap<String, String>> for Entry { type Error = String; fn try_from(value: HashMap<String, String>) -> Result<Self, Self::Error> { for (payee, etc) in value { let payments = etc.split('+') .map(|s| s.trim().parse::<f32>()) .collect::<Result<_, _>>() .map_err(|e| e.to_string())?; return Ok(Entry { payee, payments }); } Err("Empty map".to_owned()) } } I didn't have to actually write any code for Document and Expenses; I just used the serde(transparent) attribute to tell Serde to treat those structs exactly the same as their contained types. For Entry, since Serde's YAML deserializer can already parse a string like rent: 2750 into a HashMap<String, String>, I just implemented TryFrom<HashMap<String, String>> and told Serde to use that. This handy trick requires less code and is easier to wrap your head around than a full Deserialize implementation, but may be slightly slower. 
Now all we have to do is use it: let doc: Document = serde_yaml::from_str(&doc).unwrap(); println!("total: {}", doc.total()); For a small initial investment in code, your data is now made of meaningful, Rusty types like Date and HashMap instead of all different Yaml values. You can add a little bit more code to also support serialization, and because Serde is a general purpose (de)serialization library and not a particular data format, switching to another format like JSON or bincode (should you want to) is just a couple lines of code. And another thing Floating-point numbers are bad for money because they have variable precision. Use a decimal library, or fixed-point arithmetic (i.e. store an integer number of the smallest possible amount of money – cents instead of dollars.) If you absolutely must use floating-point numbers, at least use f64, and don't say I didn't warn you not to.
{ "domain": "codereview.stackexchange", "id": 40948, "tags": "beginner, rust, yaml" }
Entropy change of reservoirs in a thermodynamic cycle
Question: One mole of diatomic ideal gas ($c_V= {5\over 2}R$) is at an initial state $A$ at a volume $V_A$ and a temperature $T_1$. From $A$, the gas undergoes an isothermal process and expands to state $B$, with volume $V_B=2V_A$. Then, during an isochoric process to state $C$, the temperature of the gas is brought to $T_2=T_1/2$. Next, the gas undergoes another isothermal process to state $D$ where it has a volume $V_D=V_A$, and lastly it is brought back to state $A$ through an isochoric process. All processes are reversible. Calculate the entropy change $\Delta S_1$ and $\Delta S_2$ of the reservoirs of temperatures $T_1$ and $T_2$ respectively. Sorry for my English. Here is what I tried to do. So apparently the cycle is isothermal at $T_1 \to$ isochoric at $V_B \to$ isothermal at $T_2 \to$ isochoric at $V_A$. All processes are reversible, so the total entropy $\Delta S=\Delta S_1+\Delta S_2$ must be $0$. I tried to calculate the heats of the various processes ($n=1$ because there is one mole): $$Q_1=nRT_1 \ln{V_B\over V_A} = RT_1 \ln{2V_A\over V_A} = RT_1 \ln{2}$$ $$Q_2=\Delta U=nc_V(T_2-T_1)=c_V(T_2-2T_2)=-{5\over 2} RT_2$$ $$Q_3 =nRT_2 \ln{V_A\over V_B} = -nRT_2 \ln{V_B\over V_A} = -RT_2 \ln{2}$$ $$Q_4=\Delta U=nc_V(T_1-T_2)=c_V(2T_2-T_2)={5\over 2} RT_2=-Q_2$$ The reservoir at temperature $T_1$ comes into play in processes $1$ (AB) and $4$ (DA): it releases heat $Q_1+Q_4$ to the system, so its own heat change is $-Q_1-Q_4<0$, while the reservoir at temperature $T_2$ absorbs heat from the system, so its own heat change is $-Q_2-Q_3>0$. But the corresponding entropy changes don't sum to zero: $$\Delta S_1 = {-Q_1-Q_4\over T_1} = {-RT_1 \ln{2}-{5\over 2} RT_2 \over T_1}={-2RT_2 \ln{2}-{5\over 2} RT_2 \over 2T_2}=-R\left(\ln{2}+{\frac54}\right)\simeq-16.16 \text{ J/K}$$ $$\Delta S_2 = {-Q_2-Q_3\over T_2} = {{5\over 2} RT_2 + RT_2 \ln{2} \over T_2} = R\left(\ln{2}+{\frac52}\right)\simeq +26.55 \text{ J/K}$$ So $\Delta S=\Delta S_1 + \Delta S_2\simeq 10.40 \neq 0$. What did I do wrong?
Answer: Hint: Only the isothermal processes are involved with the entropy changes of thermal reservoirs $T_1$ and $T_2$. The isochoric processes involve entropy changes for an infinite series of thermal reservoirs between $T_1$ and $T_2$. Hope this helps.
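Numerically, the hint resolves the paradox: only the isothermal legs exchange heat with the two named reservoirs, while the isochoric legs are handled by a continuum of intermediate reservoirs whose contributions cancel. A quick sketch (the absolute value of $T_2$ is arbitrary; only $T_1 = 2T_2$ matters):

```python
import math

R = 8.314            # J/(mol K)
cV = 2.5 * R         # diatomic ideal gas
T2 = 300.0           # arbitrary choice; only the ratio T1 = 2*T2 matters
T1 = 2 * T2

# Isothermal legs: the named reservoirs exchange heat Q1 (at T1) and Q3 (at T2)
Q1 = R * T1 * math.log(2)       # absorbed by the gas at T1
Q3 = -R * T2 * math.log(2)      # absorbed by the gas at T2 (negative: released)
dS1 = -Q1 / T1                  # reservoir at T1: -R ln 2
dS2 = -Q3 / T2                  # reservoir at T2: +R ln 2
print(dS1 + dS2)                # 0: the two isothermal legs balance exactly

# Isochoric legs: a continuum of reservoirs between T1 and T2; reversibility
# makes their entropy change the negative of the gas's, -integral of cV dT/T:
dS_BC = -cV * math.log(T2 / T1)     # reservoirs during B -> C (cooling)
dS_DA = -cV * math.log(T1 / T2)     # reservoirs during D -> A (heating)
print(dS_BC + dS_DA)                # 0: the two isochoric legs cancel too
```

The error in the original attempt was charging the isochoric heats $Q_2$ and $Q_4$ to the two fixed reservoirs, which would make those legs irreversible.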
{ "domain": "physics.stackexchange", "id": 79370, "tags": "thermodynamics, entropy, ideal-gas" }
how do i make source devel/setup.bash permanent
Question: Every time I open a new terminal I have to type the following to set up the workspace: $ cd ~/catkin_ws/ $ source devel/setup.bash and when I forget to do it, it takes me a while to detect the problem. I want to make this change permanent so I can skip this step. Originally posted by Nouman Tahir on ROS Answers with karma: 35 on 2014-11-01 Post score: 0 Answer: You need to source it in your .bashrc file. Open the .bashrc file with gedit .bashrc and add the line source /FULL...PATH../devel/setup.bash at the end. Originally posted by bvbdort with karma: 3034 on 2014-11-01 This answer was ACCEPTED on the original site Post score: 3
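Concretely, assuming the workspace lives at ~/catkin_ws (adjust the path if yours differs), the same edit can be done from the command line instead of an editor:

```shell
# Append the source line to ~/.bashrc so every new terminal picks it up
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
```

After that, either open a new terminal or run `source ~/.bashrc` once in the current one.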
{ "domain": "robotics.stackexchange", "id": 19919, "tags": "ros, setup.bash" }
Project Euler #3 - checking only up to sqrt(input)
Question: I know it's possible for the largest factor of a number to be greater than the sqrt (e.g., for 14, 7 is the largest). Why then does almost every solution for the problem check only up to the square root? Also, I have a nagging feeling that boolean[] is not the best data structure to use here, since in this case the number is much smaller than the size of the array. What alternative data structure would be recommended in this case? public static int largestPrime(long n) { int limit=(int)Math.sqrt(n)+1; boolean [] numbers = new boolean[limit]; Arrays.fill(numbers,true); int largestPrimeFactor=1; for(int count=2;count<limit;count++) { if(numbers[count]==true) { for(int i=2;i*count<limit;i++) numbers[count*i]=false; if(n%count==0) largestPrimeFactor=count; } } return largestPrimeFactor; } Edit: Hmm, looks like it's incorrect to only check for primes up to the square root of the input (at least without checking if the complementary factor is prime). This is what I originally wanted to implement, but the number was too large to create a boolean array of that size. public static int largestPrime(int n) { int largestPrime = 2; // Worse case scenario: largest prime n is divisible by is itself int limit = n; // Create an boolean array of index up to n boolean[] numbers = new boolean[limit + 1]; Arrays.fill(numbers, true); for (int count = 2; count <= limit; count++) { // if count is a prime number if (numbers[count]) { // set all multiples of count to false for (int i = 2; i * count <= limit; i++) numbers[count * i] = false; // Check if the input number is divisible by count if (n % count == 0) { // The largest possible prime is the complementary factor limit = n / count; largestPrime = count; } } } return largestPrime; } I guess my second question is, is there a better alternative to a boolean array, and if not, what to change in the program so that it works with large numbers? 
Answer: This implementation is not correct: for your example 14 it incorrectly returns 2 instead of 7. You don't need to compare boolean expressions with == true; you can use them directly, for example: if (numbers[count]) { ... } The formatting doesn't follow Java standards. In Eclipse, if you select your function and press Control-Shift-f it will change it to this, which is more pleasant to read: public static int largestPrime(long n) { int limit = (int) Math.sqrt(n) + 1; boolean[] numbers = new boolean[limit]; Arrays.fill(numbers, true); int largestPrimeFactor = 1; for (int count = 2; count < limit; count++) { if (numbers[count]) { for (int i = 2; i * count < limit; i++) { numbers[count * i] = false; } if (n % count == 0) { largestPrimeFactor = count; } } } return largestPrimeFactor; } A simple way to fix the implementation is to re-implement it following the same logic you would use in math class: Start from d = 2. Divide the target number by d as many times as possible without remainder. Increment d, return to step 2, until target is reduced to 1. This is pretty simple and correct, without using a boolean array: public static long largestPrimeFactor(long n) { long d = 2; long target = n; while (target > 1) { while (target % d == 0) { target /= d; } ++d; } return d - 1; } To make this faster, after the ++d you can insert this check (the target > 1 guard keeps a fully factored number, such as 4, from returning 1): if (target > 1 && d * d > target) { return target; }
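The same trial-division idea translates directly to other languages; here is a quick Python sketch of the algorithm described in the answer, including the square-root shortcut:

```python
def largest_prime_factor(n):
    """Trial division with a shrinking target, mirroring the Java version above."""
    d = 2
    target = n
    largest = 1
    while target > 1:
        while target % d == 0:
            target //= d
            largest = d
        d += 1
        if target > 1 and d * d > target:
            largest = target       # the remaining target must itself be prime
            break
    return largest

print(largest_prime_factor(14), largest_prime_factor(600851475143))   # 7 6857
```

Because the target shrinks with every division, the loop only runs up to the square root of whatever remains, which is why the shortcut is safe.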
{ "domain": "codereview.stackexchange", "id": 10411, "tags": "java, programming-challenge, primes" }
Why is the charge density of a rod in $\rm C/m$ and not in $\rm C/m^2$?
Question: Is it because we consider the rod to be a 1-dimensional straight line? Or is there any other reason? I know it sounds like a stupid question but I am confused in this regard. Answer: For any number of dimensions you can define a density. In 1-D you can have a charge length density, charge per length. In 2-D you can have a charge area density, charge per area. In 3-D you can have a charge volume density, charge per volume. Frequently, these are all shortened and referred to in a single confusing way: charge density. Charge on a rod can be described using any of these 3 densities. In your case, it was provided as a charge-per-length, likely because that would be the most useful for the given problem. It is indeed possible that you would like to treat the rod as a 1-D thin line.
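As a small illustration of how a linear charge density is used, here is a sketch that recovers the total charge on a rod by integrating λ(x) along its length; the densities and lengths are made-up numbers:

```python
def total_charge(lam, length, steps=10_000):
    """Q = integral of lam(x) dx along the rod; lam gives C/m at position x (midpoint rule)."""
    dx = length / steps
    return sum(lam((k + 0.5) * dx) * dx for k in range(steps))

uniform = total_charge(lambda x: 2e-6, 0.5)       # 2 uC/m on a 0.5 m rod -> 1 uC
tapered = total_charge(lambda x: 4e-6 * x, 0.5)   # density growing along the rod
print(uniform, tapered)
```

The same pattern applies in 2-D and 3-D, with the integral taken over an area or a volume instead.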
{ "domain": "physics.stackexchange", "id": 84045, "tags": "charge, dimensional-analysis, density, si-units" }
Robot Model does not follow TF frames in RViz
Question: I want to visualize the UR5 robot model in RViz. For that, I am using the following launch file: <launch> <arg name="limited" default="false" doc="If true, limits joint range [-PI, PI] on all joints." /> <arg name="gui" default="true" /> <param unless="$(arg limited)" name="robot_description" command="$(find xacro)/xacro --inorder '$(find ur5_description)/urdf/ur5_robot.urdf.xacro' " /> <node if="$(arg gui)" name="joint_state_publisher" pkg="joint_state_publisher_gui" type="joint_state_publisher_gui" /> <node unless="$(arg gui)" name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" /> <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" /> </launch> Then I ran the RViz node in another terminal and found that the TF frames are plotted correctly as per the joint angles provided in the joint_state node's GUI window, but the robot model is fixed at the neutral (wrong) pose. The screenshot of the situation is given below. Any help to resolve the problem will be appreciated. Originally posted by anirban on ROS Answers with karma: 64 on 2020-11-15 Post score: 0 Answer: I figured out the reason why the robot model was not following the TF frames: I had been using the Robot_State display from moveit_ros_visualization instead of the RobotModel display offered by rviz. Changing the robot model display method as stated resolved the problem. Originally posted by anirban with karma: 64 on 2020-11-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 35764, "tags": "ros, rviz, joint-state, ros-kinetic" }
How to design a transfer function for a Model Rocket?
Question: I am building a roll-stabilized model rocket. I am a beginner at control systems. How do I find the transfer function of the rocket? I am aware of the definitions and basics of control systems; I just need a nudge in the right direction. Also, what are your thoughts on using state-space analysis? Thanks so much! :D Answer: Define your inputs (e.g., control surfaces on the fins, steerable motors, etc.) Define your output (roll position of the rocket) Find the differential equation that describes the roll in terms of the input. Linearize (note that if you're using fins then the linearized equation will be dependent on airspeed, and that if you're using steerable motors then the linearized equation will be dependent on thrust -- this will complicate your life, but get over the first hurdle first!) Extract the transfer function. State space is not a bad way to go, but if you're only concerned with roll control, and if you're confident that you're not going to be affecting the behavior in yaw or pitch, then this is a classic single-input, single-output system that can be handled with transfer functions.
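As a toy illustration of the derive-linearize-extract procedure, assume a first-order roll-rate model J·ṗ = −b·p + K·u (all parameters made up), whose transfer function from fin deflection u to roll rate p is P(s) = K/(Js + b). A quick Euler simulation confirms the DC gain K/b:

```python
def simulate_roll_rate(J, b, K, u, t_end=10.0, dt=1e-4):
    """Euler-integrate J*p' = -b*p + K*u; steady state should match the DC gain (K/b)*u."""
    p = 0.0
    for _ in range(int(t_end / dt)):
        p += dt * (-b * p + K * u) / J
    return p

J, b, K = 0.02, 0.5, 1.5      # hypothetical inertia, aero damping, fin effectiveness
u = 0.1                       # constant fin deflection
print(simulate_roll_rate(J, b, K, u), (K / b) * u)
```

Integrating the roll rate once more gives the roll angle, which adds the 1/s factor that makes the full angle transfer function K/(s(Js + b)).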
{ "domain": "engineering.stackexchange", "id": 2649, "tags": "control-engineering, pid-control, rocketry" }
Kleene algebra - powerset class vs class of all regular subsets
Question: I am currently studying materials for my uni subject. There are two examples of Kleene algebras, but I don't see the difference between them. Class $2^{\Sigma^{*}}$ of all subsets of $\Sigma^{*}$ with constants $\emptyset$ and $\{\varepsilon\}$ and operations $\cup$, $\cdot$ and $*$. Class of all regular subsets of $\Sigma^{*}$ with constants $\emptyset$ and $\{\varepsilon\}$ and operations $\cup$, $\cdot$ and $*$. What is the difference between $2^{\Sigma^{*}}$ and all regular subsets of $\Sigma^{*}$? What is that difference I don't see? Thanks in advance. Answer: The difference between the two is that $2^{\Sigma^*}$ contains all languages over $\Sigma$, whereas the class of all regular subsets of $\Sigma^*$ contains only the regular languages over $\Sigma$. So if $L$ is a non-regular language over $\Sigma$ then $L \in 2^{\Sigma^*}$ while $L$ doesn't belong to the set of all regular languages over $\Sigma$. For any $\sigma \in \Sigma$, you can take $L = \{ \sigma^{n^2} : n \geq 0 \}$, for example.
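To make the answer's example concrete, here is a small membership test for $L = \{ \sigma^{n^2} : n \geq 0 \}$ over a one-letter alphabet; no finite automaton or classical regular expression recognizes this language, yet deciding membership programmatically is easy:

```python
import math

def in_L(word, sigma="a"):
    """True iff word = sigma^(n*n) for some n >= 0 (the non-regular language from the answer)."""
    if any(ch != sigma for ch in word):
        return False
    n = math.isqrt(len(word))
    return n * n == len(word)

print([k for k in range(20) if in_L("a" * k)])   # square lengths: [0, 1, 4, 9, 16]
```

So $L$ witnesses the strict inclusion of the regular languages in $2^{\Sigma^*}$.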
{ "domain": "cs.stackexchange", "id": 5400, "tags": "formal-languages, regular-languages" }
Reversing a linked list and adding/removing nodes
Question: I am new to C and have written this code just to teach myself how to work with linked lists and was hoping someone could review it and give me any comments or tips for efficiency, etc. The code just creates a list, adds and removes nodes from the list, and can reverse the list. /* Creates a linked list, adds and removes nodes and reverses list */ #include <stdlib.h> #include <stddef.h> #include <stdio.h> typedef bool; #define false 0 #define true 1 typedef struct node { int value; struct node *next; } node; node *head = NULL; //Contains entire list node *current = NULL; //Last item currently in the list // Initialize list with input value struct node* create_list(int input) { node* ptr; printf("Creating a LinkedList with head node = %d\n", input); ptr = (node*)malloc(sizeof(node)); // Allocating 8 bytes of memory for type node pointer if (ptr == NULL) // Would occur is malloc can't allocate enough memory { printf("Node creation failed \n"); return NULL; } ptr->value = input; ptr->next = NULL; head = current = ptr; return ptr; } // Add input value to the end of the list struct node* add_to_end(int input) { node* ptr; if (head == NULL) { return create_list(input); } ptr = (struct node*)malloc(sizeof(struct node)); if (ptr == NULL) { printf("Node creation failed \n"); return NULL; } else { ptr->value = input; ptr->next = NULL; // End value in list should have NULL next pointer current -> next = ptr; // Current node contains last value information current = ptr; } printf("%d Added to END of the LinkedList.\n", input); return head; } // Add input value to the head of the list struct node* add_to_front(int input) { node* ptr; if (head == NULL) { return create_list(input); } ptr = (struct node*)malloc(sizeof(struct node)); if (ptr == NULL) { printf("Node creation failed \n"); return NULL; } else { ptr->value = input; ptr->next = head; // Point next value to the previous head head = ptr; } printf("%d Added to HEAD of the LinkedList.\n", input); return head; } // 
Return the number of items contained in a list int size_list(node* ptr) { int index_count = 0; while (ptr != NULL) { ++index_count; ptr = ptr->next; } return index_count; } // Add an input value at a user-specified index location in the list (starting from 0 index) struct node* add_to_list(int input, int index) { node* ptr_prev = head; node* ptr_new; int index_count; // Used to count size of list int index_track = 1; // Used to track current index ptr_new = (struct node*)malloc(sizeof(struct node)); // Check that list exists before adding it in if (head == NULL) { if (index == 0) // Create new list if 0 index is specified { add_to_front(input); } else { printf("Could not insert '%d' at index '%d' in the LinkedList because the list has not been initialized yet.\n", input, index); return NULL; } } // Count items in list to check whether item can added at specified location if ((index_count = size_list(head)) < index) { printf("Could not insert '%d' at index '%d' in the LinkedList because there are only '%d' nodes in the LinkedList.\n", input, index, index_count); return NULL; } //Go through list -- stop at item before insertion point while (ptr_prev != NULL) { if (index == 0) // Use add_to_front function if user-specified index is 0 (the head of the list) { add_to_front(input); return head; } if ((index_track) == index) { break; } ptr_prev = ptr_prev ->next; ++index_track; } ptr_new ->next = ptr_prev ->next; // Change the new node to point to the original's next pointer ptr_new->value = input; ptr_prev ->next = ptr_new; // Change the original node to point to the new node return head; } // Verify if the list contains an input value and return the pointer to the value if it exists struct node* search_list(int input, struct node **prev) { node* ptr = head; node* temp = (node*)malloc(sizeof(node)); bool found = false; // Search if value to be deleted exists in the list while (ptr != NULL) { if(ptr->value == input) { found = true; break; } else { temp = ptr; ptr = ptr 
->next; } } // If the value is found in the list return the ptr to it if(found == true) { if(prev) *prev = temp; return ptr; } else { return NULL; } } // Remove an input value from the list struct node* remove_from_list(int input) { node* prev = NULL; // list starting from one item before value to be deleted node* del = NULL; // pointer to deleted value // Obtain pointer to the list value to be deleted del = search_list(input, &prev); if(del == NULL) { printf("Error: '%d' could not be deleted from the LinkedList because it could not be found\n"); return NULL; } else { if (prev != NULL) { prev->next = del->next; } if (del == current) // If item to be deleted is last in list, set the current last item as the item before deleted one { current = prev; } else if (del == head) // If item to be deleted is the head of the list, set the new head as the item following the deleted one { head = del ->next; } return head; } } // Reverse the order of the list struct node* reverse_list() { node* reverse = NULL; node* next = NULL; node* ptr = head; if (head == NULL) { printf("Error: There is no LinkedList to reverse.\n"); return NULL; } printf("Reversing order of the LinkedList.\n"); while (ptr != NULL) { next = ptr ->next; // Holds the remaining items in the original list ptr ->next = reverse; // List now points to items in reversed list reverse = ptr; // Reversed list set equal to the List ptr = next; // List re-pointed back to hold list } head = reverse; } // Print the list to the console void print_list() { node* ptr = head; printf("------------------------------\n"); printf("PRINTING LINKED LIST\n"); while (ptr != NULL) { printf("%d\n", ptr->value); ptr = ptr->next; } printf("------------------------------\n"); } int main() { int i; reverse_list(); //test function error message for (i = 3; i > 0; --i) { add_to_front(i); } for (i= 4; i < 7; ++i) { add_to_end(i); } add_to_list(4,9); //test function error message add_to_list(4,1); print_list(); remove_from_list(3); print_list(); 
reverse_list(); print_list(); add_to_list(10,0); print_list(); getchar(); } Answer: Your code is quite good for a beginner and shows a grasp of how pointers and lists work, so that's great! However, there are a few points I'd like to address. I'll start by briefly listing what for me are less important issues or some things that others are likely to disagree with, then move on to more problematic errors: General style: Personally, I find that an if statement with a single-line block introduces too much noise. That is, I prefer if (condition) statement; to if (condition) { statement; } Since you typedefed struct node as node, you don't have to write struct node all the time. malloc() issues: Don't cast the return value of malloc() unless you're writing C++: see this SO question. It's better to write ptr = malloc(sizeof(*ptr)); instead of ptr = malloc(sizeof(node)); because that way the line won't need changing if the type of ptr changes. Bigger problems: Globals: head and current seem to be global objects in your program, making all of the functions you wrote impossible to reuse. This impedes the usefulness of your list data structure. It is generally a good idea not to have any read-write globals at all, but read-only globals like conversion tables and constants are usually fine. For the purposes of an example, let's pretend to delete head and current. Let's look at the functions create_list() and add_to_end(): // Initialize list with input value node* create_list(int input) { node *ptr = malloc(sizeof(*ptr)); // Allocate memory for one node if (ptr == NULL) // Would occur if malloc can't allocate enough memory return NULL; ptr->value = input; ptr->next = NULL; return ptr; } The function merely creates a new list node and returns it, unless there's an error.
Then, we can query the return value to determine if the creation of the list was successful: node *head = create_list(7); if (head == NULL) some_kind_of_error_handling_here(); The same goes for add_to_end(): // Add input value to the end of the list node* add_to_end(node *head, int input) { if (head == NULL) return NULL; // Find the last element of the list while(head->next != NULL) head = head->next; node *ptr = malloc(sizeof(*ptr)); if (ptr == NULL) return NULL; ptr->value = input; ptr->next = NULL; // End value in list should have NULL next pointer // Attach new node to the end of the list. head->next = ptr; return ptr; } Now we find the last element, attach a new node to it (unless malloc() fails) and as a bonus return a pointer to it. Actually, a lot of this code looks familiar, so we can do better: node* add_to_end(node *head, int input) { if (head == NULL) return NULL; node *ptr = create_list(input); if (ptr == NULL) return NULL; // Find the last element of the list while(head->next != NULL) head = head->next; // Attach new node to the end of the list. head->next = ptr; return ptr; } Two things should be noted here: 1) we enabled code reuse (the call to create_list()) by eliminating the globals, and 2) it's possible to think of a list as consisting of a head and the rest, which is also a list. Memory leaks: Every object allocated using malloc() and friends must be deallocated using free(). The best way to deallocate the list memory would be to provide a function like delete_list(). Separation of concerns: It is generally a good idea not to mix code that manipulates data with input or output code. It is best to return an error code from functions that may fail and test for it in the caller. Doing this will not only make things easier to change later (for example, should we need to log errors to a file instead of print them out) but also make the functions shorter and easier to understand.
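The pointer manipulation in the question's reverse_list() is worth internalizing on its own; here is the same iterative reversal sketched in Python, with nodes as simple objects:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def reverse(head):
    """Same pointer dance as reverse_list(): iterative, O(1) extra space."""
    reversed_head = None
    while head is not None:
        nxt = head.next            # hold the rest of the original list
        head.next = reversed_head  # current node now points into the reversed part
        reversed_head = head
        head = nxt
    return reversed_head

def to_list(head):
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in [3, 2, 1]:
    head = Node(v, head)           # builds 1 -> 2 -> 3
print(to_list(reverse(head)))      # [3, 2, 1]
```

Unlike the C version, no explicit deallocation is needed here, but the four-step loop body is identical.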
{ "domain": "codereview.stackexchange", "id": 4337, "tags": "beginner, c, linked-list" }
How is energy really lost due to heating in charging a capacitor?
Question: Wanted to be sure, where is this understanding wrong? If a capacitor is charged to a potential difference $V$, then the charge $Q$ in the capacitor would be $CV$. Now if we assume the voltage of the power supply to be $V_{emf}$, then wouldn't the potential difference $V_{emf} - V$ between the supply and the capacitor slowly decrease, affecting the work done by the battery to store charge $Q$ on the capacitor? I'll expand that a bit here. The capacitor is charged until $V_{emf} = V$. $\quad$ At this point: $\quad CV_{emf} = Q$ $$ E = \int_0^{CV_{emf}}{V_{emf} - V}\ dQ $$ $$ \quad = \int_0^{CV_{emf}}{V_{emf} - \frac QC}\ dQ $$ $$ \quad = {V_{emf}}^2 C - \frac{{V_{emf}}^2C^2}{2C} - 0 $$ $$ \quad = QV_{emf} - \frac12C{V_{emf}}^2 = QV_{emf} - \frac12QV_{emf}$$ Therefore the work done by the power supply is $\frac12QV_{emf}$. But this is apparently not the case, as the work done by the power supply is $QV_{emf}$. Can someone explain? Answer: If there is resistance in the circuit and no inductance, and assuming that there is a resistor $R$, capacitor $C$ and voltage source $V_{\text{emf}}$ all in series and the capacitor at time $t = 0$ is uncharged, the following equation gives the variation of current with time: $I(t) = \dfrac {V_{\text{emf}}}{R}e^{-t/CR}$ The energy dissipated in the resistor while the capacitor is charging is $\int^\infty _0 I^2R\; dt$ Doing the integration produces the result $\frac 12 C V_{\text{emf}}^2$, which is independent of the value of the resistance.
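The integral quoted in the answer is easy to check numerically. The sketch below (plugging in arbitrary component values) integrates I(t)²R for the series RC charging circuit and compares the result with ½CV²_emf, the "missing" half of QV_emf from the question:

```python
import math

def dissipated_energy(V_emf, R, C, t_max_factor=40, steps=200_000):
    """Numerically integrate I(t)^2 * R for the series RC charging circuit.

    I(t) = (V_emf / R) * exp(-t / (R*C)); assumption: the capacitor starts uncharged.
    """
    t_max = t_max_factor * R * C          # integrate over many time constants
    dt = t_max / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt                # midpoint rule
        I = (V_emf / R) * math.exp(-t / (R * C))
        total += I * I * R * dt
    return total

V, R, C = 10.0, 1e3, 1e-6                 # arbitrary values
print(dissipated_energy(V, R, C), 0.5 * C * V**2)
```

Changing R rescales the current and the charging time in opposite ways, which is why the dissipated energy is independent of the resistance.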
{ "domain": "physics.stackexchange", "id": 29001, "tags": "capacitance" }
OSGi-like infrastructure
Question: I am a moderately new Scala developer working mostly by myself, so I don't have anyone to tell me what I'm doing wrong (except that the system mostly does work, so it can't be too awful). I would appreciate some feedback on how to make better use of Scala as well as any comments about obvious performance issues. The class below is part of our utility library. It's part of our OSGi-like infrastructure. I didn't use OSGi because the web application is hosted on a Tomcat server but I liked many of the ideas behind it, so I've incorporated them into my own library. package edu.stsci.efl.ml import scala.collection.mutable.{HashSet, Stack, HashMap} import scala.xml.Node import java.util.Date /** * Provides a container for services that are being provided by various modules. * * The default context can be had from the ModuleLoader object. * * Services are accessed by the class of the trait that they implement. For example, * logging services implement LoggerFactoryService so to get access the service, call * getService(classOf[LoggerFactoryService]) to find the service that was defined in * the ModuleLoader configuration file. */ class EFLContext(val name: String) { private var modules: HashMap[String, EFLModule] = new HashMap[String, EFLModule] private val shutdownList: Stack[EFLModule] = new Stack[EFLModule] private val importList: Stack[EFLModule] = new Stack[EFLModule] private val services = new HashSet[Object] private var myDependencies: List[EFLContext] = null private var myDependents: List[EFLContext] = null /** * Used in testing to verify that the correct number of modules have been loaded. */ def getModuleCount = modules.size /** * Use by the ModuleLoader to pass a module definition from the XML configuration file. 
*/ def load(moduleDefinition: Node) { val moduleName = (moduleDefinition \ "@name").toString() val className = (moduleDefinition \ "@class").toString() try { val rawClass = Class.forName(className) val moduleClass = rawClass.asSubclass(classOf[EFLModule]) val instance = moduleClass.newInstance instance.start(this, moduleDefinition) shutdownList.push(instance) modules.put(moduleName, instance) } catch { case e: Exception => { println("Failed to initialize module. Error:") e.printStackTrace() throw new EFLInitializationFailure(moduleDefinition.toString(), e) } } } /** * */ def resolveImport(data: Node) { val contextName = getAttribute(data, "context") val name = getAttribute(data, "name") val module = { val context = ModuleLoader.getContext(contextName) if (context == null) null else { val result = context.modules.getOrElse(name, null) if (result != null) addDependency(context) result } } if (module == null) throw new IllegalArgumentException("Unable to resolve import of module '" + name + "' in context '" + contextName + "'.") module.addContext(this) importList.push(module) modules.put(name, module) } def getAttribute(data: Node, name: String): String = { val attribute = data.attribute(name) if (attribute.isEmpty) throw new IllegalArgumentException("Failed to specify attribute " + name + " in import.") attribute.head.toString() } /** * Used by modules at application startup to announce services that they are providing. 
*/ def addService(service: Object) { services += service } /** * */ def addDependency(fLContext: EFLContext) { myDependencies match { case null => myDependencies = List[EFLContext](fLContext) case _ => myDependencies = myDependencies :+ fLContext } fLContext.addDependent(this) } /** * */ def addDependent(fLContext: EFLContext) { myDependents match { case null => myDependents = List[EFLContext](fLContext) case _ => myDependents = myDependents :+ fLContext } } def hasDependents: Boolean = (myDependents != null) def removeDependency(context: EFLContext) { myDependencies = myDependencies.filterNot((c: EFLContext) => c == context) if (myDependencies.isEmpty) myDependencies = null context.removeDependent(this) } def removeDependent(context: EFLContext) { myDependents = myDependents.filterNot((c: EFLContext) => c == context) if (myDependents.isEmpty) myDependents = null } /** * Used by modules at application shutdown to terminate services. */ def removeService(service: Object) = { services -= service } /** * Called by the ModuleLoader at application shutdown so that loaded modules can be stopped. */ def shutdown() { if (modules == null) throw new RuntimeException("Attempt to shutdown context that was already shutdown.") if (hasDependents) throw new RuntimeException("Attempt to shutdown context with loaded dependent contexts.") while (!importList.isEmpty) { val next = importList.pop() next.removeContext(this) } while (!shutdownList.isEmpty) { val next = shutdownList.pop() next.stop(this) } modules.clear() if (services.size > 0) { println("Improper cleanup: " + services.size + " services still registered.") services.foreach((s) => println(s)) } modules = null while (myDependencies != null) { val next = myDependencies.head removeDependency(next) } } /** Search for a given service by the class/trait the service implements/extends. * * @param serviceClass The class of a service that is needed. 
* @return Option[A] 'Some' of the given type, or 'None' if no service object extending that type is registered. */ def findService[A](serviceClass: Class[A]): Option[A] = { services.find(serviceClass.isInstance).asInstanceOf[Option[A]] } /** Search for a given service by the class/trait the service implements/extends. * * @return Service of the given type, or null if no service object extending that type is registered. */ @Deprecated def getService[A](serviceClass: Class[A]): A = { val temp = services.find(serviceClass.isInstance) if (temp == None) { null.asInstanceOf[A] } else { temp.get.asInstanceOf[A] } } } Answer: Some words at the beginning: I have never used OSGi, so I can't say if the tips below will all work in your context. If they do not, then keep them in the back of your head and use them when you can. One of the most important rules is to avoid mutable state if possible. This will result in more type-safe and easier-to-read code. This is not always possible, but most times it is. Never use null if you don't have to use it. Use the Option type instead or use empty collections. private var myDependencies: List[EFLContext] = null // => private var myDependencies: List[EFLContext] = Nil Now you don't have to pattern match against null any more: def addDependency(fLContext: EFLContext) { myDependencies match { case null => myDependencies = List[EFLContext](fLContext) case _ => myDependencies = myDependencies :+ fLContext } fLContext.addDependent(this) } // => def addDependency(fLContext: EFLContext) { myDependencies +:= fLContext fLContext addDependent this } Here, never append to a List; this is inefficient (O(n)). Instead use prepending (O(1)). If you have to append elements, then use an IndexedSeq. If you want you can use operator notation. In Scala everything returns a value, thus there is no need to mark a method as a getter: def getModuleCount = modules.size // => def moduleCount = modules.size method -= is deprecated. Use filterNot instead.
Scala has some nice methods defined in Predef. One of them can check input params: if (modules == null) throw new RuntimeException("Attempt to shutdown context that was already shutdown.") // => require(modules.nonEmpty, "Attempt to shutdown context that was already shutdown.") Don't traverse through the elements of a collection with a while-loop. Instead use higher-order functions (foreach instead of while-loop). Furthermore in Scala there is no need to explicitly use a Stack. A List is already a Stack (when you prepend elements what you should do)! while (!importList.isEmpty) { val next = importList.pop() next.removeContext(this) } // => importList foreach { _ removeContext this } importList = Nil Pattern match Options: if (opt.isDefined) ... else opt.get ... opt match { case None => ... case Some(o) => o ... } Use homogeneous types in collections and not Object if possible. private val services = new HashSet[Object] // => val services1 = new HashSet[Type1] val servicesN = new HashSet[TypeN] If you do so there is no need for casts any more like in your method findService. Of course you should choose better names for the variables than I did. To easily come from null to Option you can use its apply method: Option(null) // => None Option(non_null_value) // => Some(non_null_value) No need to use curly brackets in a case-block: case _ => { a b } // => case _ => a b You can use Option in method resolveImport: def resolveImport(data: Node) { val contextName = getAttribute(data, "context") val name = getAttribute(data, "name") val context = Option(ModuleLoader.getContext(contextName)) val result = context flatMap { _.modules get name } result match { case None => throw new IllegalArgumentException("Unable to resolve import of module '"+name+"' in context '"+contextName+"'.") case Some(module) => this addDependency module module addContext this importList push module modules += name -> module } }
{ "domain": "codereview.stackexchange", "id": 1480, "tags": "scala, library" }
Photoreceptors and light with mixed frequencies
Question: I am interested in how the activation of a, say, blue cone depends on the incident light. Wikipedia gives a spectral sensitivity curve for each cone type, which describes how strong the activation of the blue cone is for light with a single wavelength and a given intensity. But what happens if the incident light has several frequencies? My question is: given the intensity i(f) of the incident light at frequency f, what is the activation of the blue cone? (A natural guess would be that if b(f) is the activation curve in the figure, then the activation of the blue cone should be the integral of b(f)*i(f) over f. But I have not found any confirmation of this guess. For example, in the reference for the picture above, only pure light was used in the experiments.) Answer: Yes, current color models assume a linear response in the spectrum. Given a light with mixed spectrum $\Phi(\lambda)$, the response of a cone with spectral sensitivity $\Psi_i(\lambda)$ is a (nonlinear) function of the L2 inner product $\int \Phi(\lambda)\,\Psi_i(\lambda)\,d\lambda$. This is how CIE XYZ color space is defined. Reference: M. D. Fairchild. Color Appearance Models, 3rd edition. 2013 (p.63) https://en.wikipedia.org/wiki/CIE_1931_color_space
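The guessed integral in the question is exactly the linear model the answer describes. A small numerical sketch (with a made-up Gaussian sensitivity curve, not measured cone data) shows the model and its key property: the response to a mixture is the sum of the responses to the components.

```python
import math

def cone_response(spectrum, sensitivity, wavelengths):
    """Trapezoidal approximation of the inner product: integral of spectrum * sensitivity."""
    total = 0.0
    for i in range(len(wavelengths) - 1):
        dl = wavelengths[i + 1] - wavelengths[i]
        f0 = spectrum[i] * sensitivity[i]
        f1 = spectrum[i + 1] * sensitivity[i + 1]
        total += 0.5 * (f0 + f1) * dl
    return total

# Hypothetical blue-cone sensitivity peaked near 445 nm -- illustrative, not measured data
wl = [380 + 2 * k for k in range(121)]                           # 380..620 nm
psi = [math.exp(-((l - 445) / 30.0) ** 2) for l in wl]

phi1 = [math.exp(-((l - 430) / 10.0) ** 2) for l in wl]          # narrow-band light A
phi2 = [math.exp(-((l - 520) / 15.0) ** 2) for l in wl]          # narrow-band light B
mixed = [a + b for a, b in zip(phi1, phi2)]

print(cone_response(mixed, psi, wl),
      cone_response(phi1, psi, wl) + cone_response(phi2, psi, wl))  # equal: linear model
```

Any possible nonlinearity (adaptation, saturation) is modeled downstream as a function of this single scalar, as the answer notes.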
{ "domain": "biology.stackexchange", "id": 6122, "tags": "neurophysiology, vision, central-nervous-system" }
What causes light to travel through a curved fountain of water?
Question: Today I observed a tilted fountain spurting water upward (the water fell smoothly; no detectable turbulence). A colored light was shone upward into the water as it left its source. This light traveled through the water and was reflected from the ground wherever the water landed. Why? I suspect the water trapped some portion of the light and acted as a mirror. Why should this be the case? Especially given the non-linear motion of the water. Answer: Total internal reflection is what causes optical fibers to propagate beams with minimal distortion: http://en.wikipedia.org/wiki/Total_internal_reflection When the beam hits the boundary between the two different refractive indices at an angle of incidence greater than the critical angle (i.e., at a grazing angle to the surface), the light is reflected completely, which, from the perspective of geometric rays, keeps the beam enclosed as long as the fiber doesn't get bent too sharply. The curved stream of water behaves like such a fiber.
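The "trapping" condition can be quantified: a ray inside the stream stays inside whenever it meets the water-air boundary beyond the critical angle. A minimal sketch, assuming n ≈ 1.33 for water:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Angle of incidence (from the surface normal) beyond which light is totally reflected."""
    if n_inside <= n_outside:
        raise ValueError("total internal reflection requires n_inside > n_outside")
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle_deg(1.33, 1.00))   # water -> air: about 48.8 degrees
```

Rays travelling nearly along the stream hit its surface at grazing incidence, well beyond this angle, which is why the gently curving jet guides the light all the way to the ground.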
{ "domain": "physics.stackexchange", "id": 20566, "tags": "gravity, visible-light, laser" }
EMF generated in a rotating disc
Question: A static disc produces no emf between its center and circumference, whereas a rotating disc produces an emf. But as per Maxwell's equations there should be no emf, since ${\cal E}=-d\phi/dt$, and as there is no change in area or field there should be no changing flux, and hence no emf. How do we intuitively understand the reason for this emf according to Maxwell's equations? Answer: The emf induced in a conductor moving in a constant magnetic field is due to the magnetic Lorentz force acting on the charge carriers in the conductor. Thus for a charge carrier at a radius vector $\vec r$ from the centre of a disc rotating with angular velocity $\omega$ with a field $\vec B$ at right angles to its plane, $$\vec F=q\vec v \times \vec B\ = q(\vec \omega \times \vec r)\times \vec B=q\left[(\vec \omega.\vec B)\vec r - (\vec r.\vec B)\vec\omega \right]=±q\omega B\vec r.$$ (since $\vec r.\vec B=0$, and $\vec B$ is parallel or antiparallel to $\vec\omega$.) The emf between the centre of the disc (radius $R$) and its outer edge is given by the 'work integral': $$\mathscr E=\frac1q \int_{r\ =\ 0}^R ±q\omega B \vec r.d\vec r=±\tfrac12\omega B R^2\ =±fBA,$$ in which $f$ is the number of revolutions per unit time and $A$ is the area of a face of the disc. $fBA$ is the rate of cutting of flux by any radius, so we may be tempted to write $|\mathscr E| = \frac{d\Phi}{dt}$. This may be an abuse of notation, because $\frac{d\Phi}{dt}$ doesn't in this case represent a rate of change of flux. Nor does it give us insight into the origin of the emf – which is simply the magnetic Lorentz force! Note that electromagnetic induction of this type (flux cutting by a moving conductor) arises in a different way from the type due to varying flux density changing the flux linked with a stationary circuit. It is the latter type that is governed by the Maxwell equation $\text{curl} \vec E =-\frac {\partial \vec B}{\partial t}$.
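The 'work integral' is straightforward to evaluate numerically as well. This sketch (arbitrary ω, B, R) sums the magnetic Lorentz force per unit charge, ωBr, along a radius and reproduces ½ωBR² = fBA:

```python
import math

def motional_emf(omega, B, R, steps=100_000):
    """Integrate the magnetic Lorentz force per unit charge, omega*B*r, from r=0 to r=R."""
    dr = R / steps
    total = 0.0
    for k in range(steps):
        r = (k + 0.5) * dr            # midpoint rule
        total += omega * B * r * dr
    return total

omega, B, R = 2 * math.pi * 50, 0.2, 0.1   # 50 rev/s, 0.2 T, 10 cm disc (arbitrary)
print(motional_emf(omega, B, R), 0.5 * omega * B * R**2)
```

The integrand grows linearly with r, so most of the emf is generated near the rim of the disc.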
{ "domain": "physics.stackexchange", "id": 81999, "tags": "electromagnetism, electromagnetic-induction" }
retrieve raw IR image from Kinect
Question: Is there any way to retrieve the raw IR image using the openni_kinect stack? Any other stack? I'm pretty sure the old kinect calibration that was part of a different stack and that used a different driver published the raw IR image, but it seems like the old calibration tutorial was taken down. Originally posted by ben on ROS Answers with karma: 674 on 2011-03-22 Post score: 0 Answer: Right now, we don't publish the IR images, but this feature is on our to-do list, especially with the calibration routines. Suat Originally posted by Suat Gedikli with karma: 91 on 2011-03-22 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5172, "tags": "kinect" }
How to find the period of small oscillations about this circular motion?
Question: This is the question from Goldstein's Classical Mechanics, 2nd edition, Chapter 3, Problem 1. A particle of mass $m$ is constrained to move under gravity without friction on the inside of a paraboloid of revolution whose axis is vertical. Find the one-dimensional problem equivalent to its motion. What is the condition on the particle's initial velocity to produce circular motion? Find the period of small oscillations about this circular motion. The equation of motion is found to be, $$m\ddot{r}+4mc^2r^2\ddot{r}+4mc^2r{\dot{r}}^2-mr{\dot{\theta}}^2+2mgcr=0$$ The condition for circular orbit is found to be, $$\dot{\theta}=\sqrt{2gc}$$ on substituting $\dot{\theta}$ in the EOM we get, $$m(1+4c^2r^2)\ddot{r}+4mc^2r\dot{r}^2=0$$ I don't think the particle executes harmonic motion in the radial coordinate $r$: the first term contains $\ddot{r}$ and the second term contains $r\dot{r}^2$ (not simply $r$), so I don't think I can reduce this expression to a simple harmonic oscillator form, and so the particle cannot execute harmonic oscillations in the radial direction. Am I right? Answer: This is an overview of what I would do. Write down the Lagrangian of a particle under the gravitational potential in cylindrical coordinates. Use the paraboloid equation $z = r^2 c$ to constrain it to the surface of the paraboloid. Find the EOM for $r$ and $\theta$. If you set in them $r = R_0$, where $R_0$ is a constant, you find the condition for circular movement. It will be $\dot\theta=\sqrt{2gc}$ and the velocity is thus $\vec{v}=R_0 \hat{\theta} \sqrt{2gc}$. Notice the system has rotational symmetry and thus the angular momentum will be conserved. Use this to eliminate $\dot{\theta}$ in the EOM for $r$ and you will get the EOM for the 1D equivalent problem. Perform a Legendre transformation to find the Hamiltonian. All terms not involving the radial momentum $p_r = m(1+4c^2r^2)\dot{r}$ will be your effective potential.
Find the minimum of the effective potential (differentiate it and set the derivative equal to zero). Then Taylor expand it around this minimum. The first term is an irrelevant constant, the second is zero (since you are expanding around a minimum), and the third one has the form $\tfrac12 V''_{\rm eff}(R_0)(r-R_0)^2$ with $V''_{\rm eff}(R_0)=8mgc$. From this coefficient, together with the effective mass $m(1+4c^2R_0^2)$ multiplying $\dot r^2$ in the kinetic term, you get $\omega^2 = 8gc/(1+4c^2R_0^2)$ and hence the period.
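The expansion step is easy to verify numerically: with $l^2$ fixed by the circular-orbit condition, the second derivative of $V_{\rm eff}(r)=l^2/(2mr^2)+mgcr^2$ at $R_0$ should come out to $8mgc$. A sketch with arbitrary illustrative values:

```python
# Check V_eff''(R0) = 8 m g c for V_eff(r) = l²/(2 m r²) + m g c r²,
# with l² = 2 m² g c R0⁴ fixed by the circular-orbit condition.
m, g, c, R0 = 1.3, 9.81, 0.7, 0.5          # arbitrary illustrative values
l2 = 2 * m**2 * g * c * R0**4              # angular momentum squared on the circle

def V_eff(r):
    return l2 / (2 * m * r**2) + m * g * c * r**2

h = 1e-5
V2 = (V_eff(R0 + h) - 2 * V_eff(R0) + V_eff(R0 - h)) / h**2  # central 2nd difference
print(V2, 8 * m * g * c)   # the two numbers agree
```

The analytic check is the same calculation: $V''_{\rm eff}(R_0)=3l^2/(mR_0^4)+2mgc=6mgc+2mgc=8mgc$.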
{ "domain": "physics.stackexchange", "id": 40500, "tags": "homework-and-exercises, classical-mechanics, rotational-dynamics, oscillators" }
Postfix evaluation using a stack in c
Question: I have written a program to evaluate a postfix expression using a stack. I had my stack implementation using a linked list reviewed here, so I am only including the header file here. I have taken the postfix expression in form of a string where the operators and operands are delimited by spaces and have used a sentinel at the end to mark the end of the expression. stack.h #ifndef STACK_H #define STACK_H #include <stdbool.h> typedef int StackElement; typedef struct stack_CDT *Stack; Stack stack_init(); void stack_destroy(Stack s); bool stack_is_empty(Stack s); void stack_push(Stack s, StackElement elem); StackElement stack_pop(Stack s); #endif eval_postfix.h #ifndef EVAL_POSTFIX_H #define EVAL_POSTFIX_H // space is the delimiter between the different tokens extern const char *sentinel; int eval_postfix(char *exp); #endif eval_postfix.c #include "eval_postfix.h" #include "stack.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdbool.h> #define MAX_TOKEN_LEN 25 const char *sentinel = "$"; static char *get_token(char *token, char *exp, int idx) { sscanf(exp + idx, "%s", token); return token; } static bool is_operator(char *token) { return (strcmp(token, "/") == 0 || strcmp(token, "*") == 0 || strcmp(token, "%") == 0 || strcmp(token, "+") == 0 || strcmp(token, "-") == 0); } static int eval(int a, int b, char *op) { if (strcmp(op, "/") == 0) return a / b; if (strcmp(op, "*") == 0) return a * b; if (strcmp(op, "%") == 0) return a % b; if (strcmp(op, "+") == 0) return a + b; if (strcmp(op, "-") == 0) return a - b; return 0; } int eval_postfix(char *exp) { Stack s = stack_init(); char token[MAX_TOKEN_LEN + 1]; int i = 0; while (strcmp(get_token(token, exp, i), sentinel) != 0) { if (is_operator(token)) { int operand1 = stack_pop(s); int operand2 = stack_pop(s); stack_push(s, eval(operand2, operand1, token)); } else { stack_push(s, (int)strtol(token, NULL, 0)); } i += strlen(token) + 1; // one extra for the space } int res = stack_pop(s); 
stack_destroy(s); return res; } test.c #include "eval_postfix.h" #include <stdio.h> #include <string.h> #define MAX_EXPRESSION_LEN 100 int main() { printf("Enter postfix expression(no more than %d characters): ", MAX_EXPRESSION_LEN); char exp[MAX_EXPRESSION_LEN + 3]; fgets(exp, sizeof exp, stdin); char *pos; if ((pos = strchr(exp, '\n')) != NULL) *pos = '\0'; strcat(strcat(exp, " "), sentinel); printf("Result = %d\n", eval_postfix(exp)); return 0; } Answer: Optimize strcmp chains Some of your functions use strcmp() repeatedly. You could make those functions faster by eliminating the multiple calls to strcmp() and using switch statements instead. This function: static bool is_operator(char *token) { return (strcmp(token, "/") == 0 || strcmp(token, "*") == 0 || strcmp(token, "%") == 0 || strcmp(token, "+") == 0 || strcmp(token, "-") == 0); } would become: static bool is_operator(const char *token) { if (token[1] != '\0') return false; switch (token[0]) { case '/': case '*': case '%': case '+': case '-': return true; default: return false; } } This function: static int eval(int a, int b, char *op) { if (strcmp(op, "/") == 0) return a / b; if (strcmp(op, "*") == 0) return a * b; if (strcmp(op, "%") == 0) return a % b; if (strcmp(op, "+") == 0) return a + b; if (strcmp(op, "-") == 0) return a - b; return 0; } could become: static int eval(int a, int b, const char *op) { switch (*op) { case '/': return a / b; case '*': return a * b; case '%': return a % b; case '+': return a + b; case '-': return a - b; default: return 0; } } Notice I also added a const to your string arguments because they aren't modified by your functions. Confusing variable names This code confused me a little: int operand1 = stack_pop(s); int operand2 = stack_pop(s); stack_push(s, eval(operand2, operand1, token)); Here, operand1 is actually the second operand, and operand2 is actually the first operand, as you can see from the call to eval(). 
I would have rewritten it like this, because otherwise it would get confusing if someone were stepping through it with a debugger and examining variable values: int operand2 = stack_pop(s); int operand1 = stack_pop(s); stack_push(s, eval(operand1, operand2, token));
{ "domain": "codereview.stackexchange", "id": 17039, "tags": "c, stack, math-expression-eval, c99" }
Does it make sense to use a tfidf matrix for a model which expects to see new text?
Question: I'm training a model to classify tweets right now. Most of the text classification examples I have seen convert the tweets into tf-idf document-term matrices as input for the model. However, this model should be able to identify newly collected tweets without retraining. Does it make sense to use tf-idf in this context? What is the correct way to turn tweets into feature vectors in this task? Answer: The problem is not really "new text", since by definition any classification model for text is meant to be applied to some new text. The problem is out-of-vocabulary words (OOV): the model will not be able to represent words that it didn't see in the training data. The simplest way (and probably the most standard way) to deal with OOV in the test data is to completely remove them before representing the text as features. Naturally OOV words can be a serious problem, especially in data such as Twitter where vocabulary evolves fast. But this issue is not related to using TF-IDF or not: any model trained at a certain point in time can only take into account the vocabulary in the training data, it cannot guess how future words are going to behave with respect to the class. The only solution for that is to use some form of re-training, for instance semi-supervised learning.
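To make the OOV point concrete, here is a minimal pure-Python sketch (the toy documents are invented for illustration; real pipelines would use something like scikit-learn's TfidfVectorizer, which behaves the same way): a vectorizer fitted on the training texts simply has no column for unseen words, so they are silently dropped at transform time.

```python
import math
from collections import Counter

def fit_vocab(docs):
    """Vocabulary and idf weights learned from the training docs only."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))
    idf = {w: math.log(n / df[w]) + 1 for w in vocab}
    return vocab, idf

def transform(doc, vocab, idf):
    """tf-idf vector; words outside the training vocabulary get no feature."""
    tf = Counter(doc.split())
    return [tf[w] * idf[w] for w in vocab]

train = ["great phone", "bad phone", "great battery"]
vocab, idf = fit_vocab(train)

# 'yeet' was never seen in training, so it contributes nothing:
vec = transform("great yeet", vocab, idf)
print(vocab)   # ['bad', 'battery', 'great', 'phone']
print(vec)     # only the 'great' column is non-zero
```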
{ "domain": "datascience.stackexchange", "id": 7069, "tags": "nlp, text-classification" }
multiple distributions on a single system
Question: I installed ROS Electric, however I read that ROS Diamondback has enhanced support as far as 2D/3D visualization and Kinect are concerned (which I require). Can I install Diamondback on the same system (I'm using an Ubuntu 11.10 based PC) or should I just install the additional packages that are in Diamondback? (My concern being any duplication resulting in errors from having 2 distributions.) Thanks Originally posted by kshitij on ROS Answers with karma: 115 on 2012-01-10 Post score: 1 Original comments Comment by kshitij on 2012-01-18: ^That was probably the case. Thanks. Comment by Murph on 2012-01-11: Where did you read about Diamondback being better for your uses? I use some kinect stuff and have only found improvement with Electric over Diamondback. Be sure you weren't just reading something discussing the time before Electric was released. Answer: If you are installing from debian packages, installing multiple distributions is supported; simply install the packages from the distribution you're interested in. That said, you cannot mix debian packages from different distributions, due to binary incompatibility, and nodes from different distributions are often unable to communicate with one another, due to changes in message formats. As a side note, I would be surprised if Diamondback has better 2D/3D visualization tools. What in particular were you looking at? Originally posted by ahendrix with karma: 47576 on 2012-01-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by joq on 2012-01-11: As @ahendrix says, you can install as many distributions as you like. To avoid mixing packages, be careful to set $ROS_HOME and $ROS_PACKAGE_PATH appropriately. See also: http://answers.ros.org/question/2491/upgrading-from-diamondback-to-electric-problems
{ "domain": "robotics.stackexchange", "id": 7844, "tags": "ros, kinect, pcl, ros-electric, ros-diamondback" }
Simple password generator with a GUI
Question: I made a simple password-generating app with a GUI written in Python and the tkinter library. Here's how the GUI looks like: The passwords are strings composed of random characters. The user can choose the length of the password, and the characters that are allowed to be in it. Here's the source code: import string import tkinter as tk from tkinter import ttk import random from tkinter.constants import DISABLED, E, END, NORMAL, NW, VERTICAL class GUI(tk.Frame): def __init__(self, master): super().__init__(master) self.pack() self.widget_vars() self.create_widgets() self.style() def generate_password(self): passw = Password(self.length.get(), self.lower.get(), self.upper.get(), self.digits.get(), self.punct.get()) # You can only insert to Text if the state is NORMAL self.password_text.config(state=NORMAL) self.password_text.delete("1.0", END) # Clears out password_text self.password_text.insert(END, passw.password) self.password_text.config(state=DISABLED) def widget_vars(self): self.length = tk.IntVar(self, value=16) self.lower = tk.BooleanVar(self, value=True) self.upper = tk.BooleanVar(self, value=True) self.digits = tk.BooleanVar(self, value=True) self.punct = tk.BooleanVar(self, value=True) def create_widgets(self): # Define widgets self.lower_checkbtn = ttk.Checkbutton(self, variable=self.lower) self.lower_label = ttk.Label(self, text="string.ascii_lower") self.upper_checkbtn = ttk.Checkbutton(self, variable=self.upper) self.upper_label = ttk.Label(self, text="string.ascii_upper") self.digits_checkbtn = ttk.Checkbutton(self, variable=self.digits) self.digits_label = ttk.Label(self, text="string.digits") self.punct_checkbtn = ttk.Checkbutton(self, variable=self.punct) self.punct_label = ttk.Label(self, text="string.punctuation") self.length_spinbox = ttk.Spinbox(self, from_=1, to=128, width=3, textvariable=self.length) self.length_label = ttk.Label(self, text="Password length") self.separator = ttk.Separator(self, orient=VERTICAL) self.generate_btn = 
ttk.Button(self, text="Generate password", command=self.generate_password) self.password_text = tk.Text(self, height=4, width=32, state=DISABLED) # Place widgets on the screen self.length_label.grid(column=0, row=0, rowspan=4, sticky=E) self.length_spinbox.grid(column=1, row=0, rowspan=4, padx=4, pady=2) self.lower_label.grid(column=3, row=0, sticky=E, padx=4) self.lower_checkbtn.grid(column=4, row=0, padx=4, pady=2) self.upper_label.grid(column=3, row=1, sticky=E, padx=4) self.upper_checkbtn.grid(column=4, row=1, padx=4, pady=2) self.digits_label.grid(column=3, row=2, sticky=E, padx=4) self.digits_checkbtn.grid(column=4, row=2, padx=4, pady=2) self.punct_label.grid(column=3, row=3, sticky=E, padx=4) self.punct_checkbtn.grid(column=4, row=3, padx=4, pady=2) self.separator.grid(column=2, row=0, rowspan=4, ipady=45) self.generate_btn.grid(columnspan=5, row=4, padx=4, pady=2) self.password_text.grid(columnspan=5, row=6, padx=4, pady=2) self.grid(padx=10, pady=10) def style(self): self.style = ttk.Style(self) self.style.theme_use("clam") class Password: def __init__(self, length: int, allow_lowercase: bool, allow_uppercase: bool, allow_digits: bool, allow_punctuation: bool) -> None: self.length = length self.allow_lowercase = allow_lowercase self.allow_uppercase = allow_uppercase self.allow_digits = allow_digits self.allow_punctuation = allow_punctuation self.allowed_chars = self.gen_allowed_chars() self.password = self.gen_password() def gen_allowed_chars(self) -> str: # I use a string, because random.choice doesn't work with sets: chars = '' if self.allow_lowercase: chars += string.ascii_lowercase if self.allow_uppercase: chars += string.ascii_uppercase if self.allow_digits: chars += string.digits if self.allow_punctuation: chars += string.punctuation return chars def gen_password(self) -> str: password = '' for _ in range(self.length): password += random.choice(self.allowed_chars) return password if __name__ == '__main__': root = tk.Tk() root.title("Password 
Generator") app = GUI(root) app.mainloop() Can this code be improved in any way? Does this code follow common best practices? I'd appreciate some advice, especially on the GUI class (I'm new to tkinter). Thank You in advance. Answer: Don't use random for passwords, use secrets "has-a-root" is a cleaner pattern here, I think, than "is-a-root"; in other words, don't inherit - instantiate Cut down the repetition in your options by generalizing to a collection of strings, each expected to be an attribute name on the string module. Represent this name consistently between the UI and the module lookup logic. Type-hint your method signatures. Prefer ''.join() over successive concatenation Try to avoid assigning new class members outside of __init__. Where possible, reduce the number of references you keep on your GUI class. Almost none of your controls actually need to have references kept. Do not call mainloop on your frame; call it on your root Name your variables Sort your grid declarations according to column and row Your Password is not a very useful representation of a class. Whether or not it is kept as-is, it should be made immutable. Also, distinguish between a password and a password generator. A password generator knowing all of its generator parameters but having no actual password state would be more useful. After such a representation is implemented, you could change your TK logic to trace on all of your options variables, and only upon such a change trace, re-initialize your generator. Repeat clicks on 'Generate' will reuse the same generator instance. Don't call things master. In this case "parent" is more appropriate. 
Suggested import secrets import string import tkinter as tk from dataclasses import dataclass, field from tkinter import ttk from tkinter.constants import DISABLED, E, END, NORMAL, VERTICAL from typing import Iterable, Collection, Protocol, Literal TraceMode = Literal[ 'r', # read 'w', # write 'u', # undefine 'a', # array ] class TkTrace(Protocol): def __call__(self, name: str, index: str, mode: TraceMode): ... class OptControl: NAMES = ('ascii_lowercase', 'ascii_uppercase', 'digits', 'punctuation') def __init__(self, parent: tk.Widget, name: str, trace: TkTrace) -> None: self.name = name self.var = tk.BooleanVar(parent, name=name, value=True) self.var.trace(mode='w', callback=trace) self.label = ttk.Label(parent, text=name) self.check = ttk.Checkbutton(parent, variable=self.var) @classmethod def make_all(cls, parent: tk.Widget, trace: TkTrace) -> Iterable['OptControl']: for name in cls.NAMES: yield cls(parent, name, trace) class GUI: def __init__(self, parent: tk.Tk): self.root = tk.Frame(parent) self.root.pack() self.length = tk.IntVar(self.root, value=16) self.length.trace('w', self.opt_changed) self.opts = tuple(OptControl.make_all(self.root, self.opt_changed)) self.password_text = self.create_widgets() self.style() self.opt_changed() @property def selected_opts(self) -> Iterable[str]: for opt in self.opts: if opt.var.get(): yield opt.name def generate_password(self) -> None: # You can only insert to Text if the state is NORMAL self.password_text.config(state=NORMAL) self.password_text.delete('1.0', END) # Clears out password_text self.password_text.insert(END, self.generator.gen_password()) self.password_text.config(state=DISABLED) def opt_changed(self, *args) -> None: self.generator = PasswordGenerator( length=self.length.get(), opts=tuple(self.selected_opts), ) def create_widgets(self) -> tk.Text: length_label = ttk.Label(self.root, text='Password length') length_label.grid(column=0, row=0, rowspan=4, sticky=E) generate_btn = ttk.Button( self.root, 
text='Generate password', command=self.generate_password) generate_btn.grid(column=0, row=4, columnspan=5, padx=4, pady=2) password_text = tk.Text(self.root, height=4, width=32, state=DISABLED) password_text.grid(column=0, row=6, columnspan=5, padx=4, pady=2) length_spinbox = ttk.Spinbox( self.root, from_=1, to=128, width=3, textvariable=self.length) length_spinbox.grid(column=1, row=0, rowspan=4, padx=4, pady=2) separator = ttk.Separator(self.root, orient=VERTICAL) separator.grid(column=2, row=0, rowspan=4, ipady=45) for row, opt in enumerate(self.opts): opt.label.grid(column=3, row=row, sticky=E, padx=4) opt.check.grid(column=4, row=row, padx=4, pady=2) self.root.grid(padx=10, pady=10) return password_text def style(self) -> None: style = ttk.Style(self.root) style.theme_use('clam') @dataclass(frozen=True) class PasswordGenerator: length: int opts: Collection[str] allowed_chars: str = field(init=False) def __post_init__(self): super().__setattr__('allowed_chars', ''.join(self._gen_allowed_chars())) def gen_password(self) -> str: return ''.join(self._gen_password_chars()) def _gen_allowed_chars(self) -> Iterable[str]: for opt in self.opts: yield getattr(string, opt) def _gen_password_chars(self) -> Iterable[str]: for _ in range(self.length): yield secrets.choice(self.allowed_chars) if __name__ == '__main__': root = tk.Tk() root.title('Password Generator') GUI(root) root.mainloop()
{ "domain": "codereview.stackexchange", "id": 42010, "tags": "python, object-oriented, tkinter" }
Best parameters to try while hyperparameter tuning in Decision Trees
Question: I want to post-prune my decision tree as it is overfitting. I can do this using cost-complexity pruning by adjusting the ccp_alpha parameter, however this does not seem very intuitive to me. From my understanding there are some hyperparameters such as min_samples_split, max_depth, min_impurity_split, min_impurity_decrease that will prune my tree to reduce overfitting. Since I am working with a larger dataset, training takes a long time, so I don't want to rely on pure trial and error. What are some possible combinations of the above-mentioned hyperparameters that will prune my tree? The reasoning behind choosing a particular combination would also be helpful. Thanks in advance! Answer: There are no combinations that work for all cases; hyperparameter tuning is still something that is mostly done by trial and error. Tools like grid search and random search exist, though. A good start is always the default settings. If performance is an issue, one idea is to tune on a small percentage of the training set and later switch to the full set.
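If you do end up searching, the combinatorics of a small grid over the pruning-related parameters is cheap to enumerate (a sketch; parameter names follow scikit-learn's DecisionTreeClassifier, and the fitting/scoring step is left as a comment since it depends on your data):

```python
from itertools import product

# A small grid over pruning-related hyperparameters
# (names as in scikit-learn's DecisionTreeClassifier):
grid = {
    "max_depth": [3, 5, 10, None],
    "min_samples_split": [2, 20, 100],
    "min_impurity_decrease": [0.0, 1e-4, 1e-3],
}

combos = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(combos))  # 4 * 3 * 3 = 36 candidate settings

for params in combos:
    # In practice: fit DecisionTreeClassifier(**params) with cross-validation
    # on a small subsample first, then refit the winner on the full data.
    pass
```

This is exactly what GridSearchCV automates; running it on a subsample first, as the answer suggests, keeps the 36 fits affordable.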
{ "domain": "datascience.stackexchange", "id": 9541, "tags": "python, scikit-learn, decision-trees" }
How does Mask R-CNN automatically output a different number of objects on the image?
Question: Recently, I was reading PyTorch's official tutorial about Mask R-CNN. When I ran the code on Colab, it turned out that it automatically outputs a different number of channels during prediction. If the image has 2 people in it, it outputs a mask with the shape 2xHxW. If the image has 3 people in it, it outputs a mask with the shape 3xHxW. How does Mask R-CNN change the channels? Does it have a for loop inside it? My guess is that it has region proposals and it outputs masks based on those regions, and then it thresholds them (it removes masks that have low prediction probability). Is this right? Answer: Object detection models usually generate multiple detections per object. Duplicates are removed in a post-processing step called Non-Maximum Suppression (NMS). The PyTorch code that performs this post-processing is called here in the RegionProposalNetwork class. The filtering loop you've mentioned performs the NMS and applies the score_thresh threshold (although it seems to be zero by default).
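The suppression step itself is easy to sketch in plain Python (this mirrors the greedy algorithm that torchvision applies, but it is an illustration, not the library code; the boxes are (x1, y1, x2, y2) tuples with hypothetical values):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep boxes in descending score order, dropping any box
    that overlaps an already-kept box by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of the same person plus one separate detection:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.75]
print(nms(boxes, scores))  # -> [0, 2]: the near-duplicate (index 1) is suppressed
```

The number of surviving detections depends on the image content, which is exactly why the first dimension of the output mask tensor varies from image to image.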
{ "domain": "ai.stackexchange", "id": 2817, "tags": "object-detection, semantic-segmentation, mask-rcnn, non-max-suppression" }
Measuring non-commuting properties on entangled particles
Question: Suppose we start with an entangled quantum state such that two particle spins are always perfectly correlated so that $S_{ax} = S_{bx}$ and $S_{az} = S_{bz}$. Suppose I measure $S_{ax}$ = +1 and $S_{bz}$ = -1 simultaneously. Then can't I infer that at the time of those measurements, we have the following: $S_{ax}$ = +1, $S_{az}$ = -1, and $S_{bx}$ = +1, $S_{bz}$ = -1? Both of these would be violations of the uncertainty relations for spin. Answer: If we know in advance that $S_{ax}=S_{bx}$ and $S_{az}=S_{bz}$, then measuring $S_{bz}$ amounts to measuring $S_{az}$, and $[S_{ax},S_{az}]\neq 0$. You cannot measure those two simultaneously without uncertainty.
{ "domain": "physics.stackexchange", "id": 58775, "tags": "quantum-mechanics, quantum-entanglement" }
Easy way to make carbonic acid from other chemicals?
Question: I would like to test the reaction between carbonic acid and copper (to simulate the effect of acid rain on copper). However, I find that many suppliers fail to provide carbonic acid (either in powder or in liquid form). Some say that carbonic acid is unstable and thus they cannot supply it. Therefore, I want to make it myself. Are there any simple ways to make carbonic acid from other chemicals (e.g. sodium carbonate)? Also, I have found some carbonic acid powder in commercial products such as this one. But it is for bathing instead of chemical use. Can they produce carbonic acid? Any answers and comments are welcome. Thank you very much for helping. Answer: Carbonic acid exists neither in powder nor in liquid form near normal conditions. (It can be created and detected under special cryogenic or gas-phase conditions, not applicable for its usage as an "on-the-shelf acid".) In isolable form, it exists only as its salts: bicarbonates (of alkali metals) and carbonates. As an acid, it exists at ambient conditions only in minor concentration in water solutions of carbon dioxide (soda water, mineral water) and/or bicarbonates (drinking water, mineral water, baking soda solution), with the equilibrium strongly shifted in favour of the oxide. $$\ce{CO2(aq) + H2O <<=> H2CO3(aq)}$$ Note that the acidity of acid rain is not based on carbonic acid, which gives natural rain just a mild acidity of about $\mathrm{pH=5.7}$. Acid rain contains traces of strong mineral acids such as sulphuric and nitric acid, which are formed from the oxides of sulphur and nitrogen present in the air. For the testing I suggest using a very dilute (0.1–1 mM) solution of one of the above acids, or their mixture.
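The quoted pH ≈ 5.7 for unpolluted rain can be reproduced from textbook constants (a sketch; the Henry's law constant, partial pressure, and Ka1 are typical 25 °C literature values, and activity corrections are ignored):

```python
import math

# Rainwater pH from atmospheric CO2 alone (ideal-dilute sketch).
K_H = 3.4e-2     # mol/(L·atm), Henry's law constant for CO2 near 25 °C
p_CO2 = 4.0e-4   # atm, approximate atmospheric partial pressure of CO2
Ka1 = 4.45e-7    # first acid dissociation constant of "H2CO3*" (CO2(aq) + H2CO3)

co2_aq = K_H * p_CO2                 # dissolved CO2, held fixed by the atmosphere
h_plus = math.sqrt(Ka1 * co2_aq)     # [H+] = [HCO3-], from Ka1 = [H+][HCO3-]/[CO2]
ph = -math.log10(h_plus)
print(round(ph, 2))                  # about 5.6, close to the pH ≈ 5.7 quoted above
```

This also shows why dissolving more CO2 (e.g. soda water) only mildly lowers the pH: [H+] grows with the square root of the dissolved CO2 concentration.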
{ "domain": "chemistry.stackexchange", "id": 12924, "tags": "acid-base" }
How are the 'physical' isospin zero states determined?
Question: Consider the light mesons. Since $3 \times \bar{3} = 8 + 1$, the states should be grouped into $\mathfrak{su}(3)$ octets and singlets. In the case of the spin zero states (the pseudoscalars), the singlet state is $$\eta' = u\bar{u} + d \bar{d} + s \bar{s}$$ while the member of the octet with the same isospin and charge is $$\eta = u\bar{u} + d \bar{d} - 2 s \bar{s}.$$ This makes complete sense to me, but Griffiths' particle physics book says that in the case of the vector mesons, the 'physical particles' are instead linear combinations $$\omega = u\bar{u} + d \bar{d}, \quad \phi = s \bar{s}.$$ I'm confused about what that means. How is the term 'physical particle' defined? Why is the situation different for the pseudoscalars and the vectors, i.e. why don't the $\eta$ and $\eta'$ mix the way the $\omega$ and $\phi$ do? Answer: 1) Note that the real states are not just $\bar{q}q$ states anyway, they have components that look like glueballs or multi-quark states. 2) However, I can try to measure (on the lattice, or in certain cases, experimentally) the coupling of the physical states to quark-anti-quark currents $$ j_\Gamma^a = \bar{q}\Gamma T^a q $$ where $\Gamma=\gamma_5,\gamma_\mu$ for pseudo-scalar and vector mesons, and $T^a$ are flavor matrices. In the neutral sector we look at $T^0,T^3,T^8$. This can be used to define $3\times 3$ mixing matrices. 3) Empirically, the result is that in the pseudoscalar sector the eigenstates are approximately (but not exactly) $T^0,T^3,T^8$ (the $\eta'$, $\pi^0$ and $\eta$), but in the vector channel the eigenstates are $T^3$ and $T=diag(1,1,0)$ as well as $T=diag(0,0,1)$ (the $\rho,\omega,\phi$). 4) This is the result of non-perturbative QCD dynamics, but at least roughly the reason can be explained in terms of the anomaly and flavor symmetry breaking. The dominant effect in the pseudoscalar sector is the $U(1)_A$ anomaly, which acts in the $T^0$ channel.
As a result the eigenstates are simply $T^0$ and $T^8$, despite some flavor symmetry breaking. In the vector channel there is no anomaly, and the dominant effect is flavor symmetry breaking. The mass matrix has approximate eigenstates $diag(1,1,0)$ (light quarks) and $diag(0,0,1)$ (strange quark), and this is why you get the $\omega$ and $\phi$. Exactly why isospin breaking is not very important, even though $(m_u-m_d)/(m_u+m_d)\sim O(1)$, can be understood by looking at chiral Lagrangians.
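The mechanism can be made concrete with a toy two-state model (all numbers invented for illustration; a sketch of the mechanism, not a fit): diagonalizing a 2×2 mass-squared matrix in the (nonstrange, strange) basis shows that a dominant anomaly term, which acts along the SU(3) singlet direction, aligns the eigenstates with the singlet/octet, while with the anomaly switched off the eigenstates are the flavor states themselves, as for the $\omega$ and $\phi$.

```python
import math

def eig_sym2(a, b, d):
    """Eigenpairs of the symmetric 2x2 matrix [[a, b], [b, d]];
    for b != 0 the first pair has the larger eigenvalue."""
    if b == 0:
        return [(a, (1.0, 0.0)), (d, (0.0, 1.0))]
    lam = (a + d) / 2 + math.hypot((a - d) / 2, b)
    v = (b, lam - a)
    n = math.hypot(*v)
    v = (v[0] / n, v[1] / n)
    return [(lam, v), (a + d - lam, (-v[1], v[0]))]

# Basis: |n> = (u ubar + d dbar)/sqrt(2) and |s> = s sbar.
# Toy diagonal flavor splittings (invented numbers, arbitrary units):
m_n2, m_s2 = 0.1, 0.3
# Anomaly: adds 3A times the projector onto the SU(3) singlet direction
# (sqrt(2/3)|n> + sqrt(1/3)|s>), i.e. the matrix [[2A, sqrt(2)A], [sqrt(2)A, A]].
A = 1.0
(lam_hi, v_hi), _ = eig_sym2(m_n2 + 2 * A, math.sqrt(2) * A, m_s2 + A)
singlet = (math.sqrt(2 / 3), math.sqrt(1 / 3))
overlap = abs(v_hi[0] * singlet[0] + v_hi[1] * singlet[1])
print(overlap)  # close to 1: the heavy pseudoscalar is (nearly) the singlet eta'

# Vector channel: no anomaly (A = 0), only the flavor splitting remains,
# so the eigenstates are the flavor states themselves (omega and phi):
(_, v1), (_, v2) = eig_sym2(m_n2, 0.0, m_s2)
print(v1, v2)
```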
{ "domain": "physics.stackexchange", "id": 45852, "tags": "particle-physics, quantum-chromodynamics, mesons" }
ROS2: Running subscriber_lambda and publisher_lambda on different machines (in the same network)
Question: Hello, I have two machines connected via ethernet cable, configured to be in the same network and they can ping each other successfully. What I would like to know is what would be the necessary steps to run build/examples_rclcpp_minimal_subscriber/subscriber_lambda on one machine and build/examples_rclcpp_minimal_publisher/publisher_lambda on another? I have tried it but the subscriber never gets anything. Note that I cannot use ros2 run demo_nodes_cpp on one of the machines because it has not been ported yet so that example is not a choice. Thank you in advance. Originally posted by nickcage on ROS Answers with karma: 38 on 2019-04-05 Post score: 0 Original comments Comment by gvdhoorn on 2019-04-05: @nickcage: I've changed the title of your question slightly so as to better reflect what you are asking. Comment by gvdhoorn on 2019-04-05:\ Note that I cannot use ros2 run demo_nodes_cpp on one of the machines because it has not been ported yet so that example is not a choice. what do you mean exactly by this? What hasn't been "ported yet"? Comment by nickcage on 2019-04-05: The thing is one of the machines is an ARM of Texas Instruments SoC, the other one is Intel PC. We have got a ros2 workspace from our clients that we are to use on ARM. Actually the talker and listener demos work on ARM when run locally (from two terminals within the same system) but also output a number of 'Failed to load entry point' messages before they start. They work properly afterwards, though. So maybe they can be used but I have not been able to make them work over network. Comment by gvdhoorn on 2019-04-05: I'm confused: are you trying to run ARM binaries on an amd64 PC or the other way around? Edit: O wait, you have an ARM install made available by your client, and have installed ROS2 on an amd64 machine yourself. You're now trying to make those two "see" each other. Correct? Comment by nickcage on 2019-04-05: Yes, that is mostly correct. 
What I mean by that is that we have also been provided a workspace for the PC but I think it was installed by standard procedure. Both distros are bouncy (on ARM and on PC). Comment by gvdhoorn on 2019-04-05: I would at least replace the PC side of this with a proper install, instead of a "reused workspace". Debugging issues with this sw is hard enough without a lot of things being unknown variables. Comment by nickcage on 2019-04-08: Hi, thank you for sticking around. Will do that and come back to you with an update. Comment by nickcage on 2019-04-08: I can't seem to get ROS2 bouncy from the repos, I always end up having crystal after executing 'printenv ROS_DISTRO'. I followed this tutorial https://index.ros.org/doc/ros2/Installation/Linux-Development-Setup/ and only changed the --rosdistro parameter from crystal to bouncy in "Install dependencies using rosdep" section. However, I still got crystal. Comment by gvdhoorn on 2019-04-08: Can you clarify why you want to build ROS2 from source? Edit: because this is a Xenial system (#q320563). Comment by nickcage on 2019-04-08: So, I have now installed ROS2 Bouncy on my Intel Ubuntu 16.04 system and tried running a talker from the board and a listener from the PC, it still doesn't work. Any idea how to proceed? Comment by nickcage on 2019-04-09: When I start the talker node on one machine and the listener on the other I can observe Membership Report / Join Group message in wireshark from both machines. Is this ok? However, nothing shows in the listener's terminal. Comment by nickcage on 2019-04-09: I have just tried testing over Internet and it works. Not in a local network, though. Comment by gvdhoorn on 2019-04-09: It might be good at this point if you could describe your network setup a little. What does "testing over Internet" mean? Which network interfaces do your hosts have? How are those configured? Which IP addresses do they have? Netmasks? Etc. 
Comment by nickcage on 2019-04-10: Well, I have connected the PC to a wider network (Internet) as well as the ARM. I managed to run talker/listener via ros2 run demo_nodes_cpp and they are communicating. However, when on a point-to-point network it doesn't work. The interfaces are configured to have IPs 192.168.1.100 and 192.168.1.101 and the subnet mask is 255.255.255.0. Comment by nickcage on 2019-04-11: I have verified that multicast works using iperf. One machine generates data and sends it to a multicast address and the other machine listens on that address and gets the data. As I said, when I run the talker and listener examples, both the machines become members of the same multicast group (shown by wireshark) but there simply is no feedback from the machine running listener example. Comment by gvdhoorn on 2019-04-11: If it does work when you change your network setup, I would start to suspect the setup when you use the direct cable connection. Can you please update your question (use the edit button/link) with the pertinent information for both cases (ie: direct and "wider network" IP addresses, netmasks, can they ping each other, etc)? Any firewalls active? Are both installations using the same RMW implementation (ie: both FastRTPS or something else)? Comment by gvdhoorn on 2019-07-08: From your new question (#q327515) I'm guessing you got things to work? Comment by nickcage on 2019-07-10: Oh yes, but there is one thing that absolutely needed to be done on both machines and that is the following command: route add -net 224.0.0.0 netmask 240.0.0.0 dev ethX This could be written somewhere so people don't have similar problems. Comment by gvdhoorn on 2019-07-10: That looks like routing for multicast wasn't / isn't setup correctly on your machine without that. Comment by nickcage on 2019-07-10: Yes, it looks like it. You can close this thread, too. 
Comment by gvdhoorn on 2019-07-10: If the added route was what fixed everything in the end you might want to post that as an answer here. That way it may help future visitors. Comment by nickcage on 2019-07-10: Oh, sure, will do. Answer: OP here, actually I have solved this issue. The problem was that the routing for multicast was not set up correctly on the participating machines. The solution was to run the following command for the appropriate eth port on both machines: route add -net 224.0.0.0 netmask 240.0.0.0 dev ethX That should enable the node discovery mechanism in a local network. Originally posted by nickcage with karma: 38 on 2019-07-10 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ljaniec on 2022-07-20: This worked for me after adding the gw argument with my gateway's IP and properly renaming ethX to what I got from ifconfig.
{ "domain": "robotics.stackexchange", "id": 32827, "tags": "ros, ethernet, ros-bouncy" }
What's the difference between perfect and ideal gas?
Question: There are all kinds of different distinctions on the internet, and I'd like to see what you guys thought. Actually, I've been asked if the heat capacity of an ideal gas is independent of temperature. I've said no, even though it's practically invariant over small ranges. Namely, the result $C_V/n = \frac{3}{2}R$ is derived from a perfect gas and not an ideal gas and is only an approximation to the latter. Is this true? What really is an ideal gas? What is its difference from a perfect gas? Is my answer to the question I was posed correct? Thanks. Answer: An ideal gas is the same as a perfect gas. Just different naming. The usual name for such gases (for which it is assumed that the particles that make up the gas have no interaction with each other) is ideal gas; perfect gas is what such a gas is named in Atkins' physical chemistry book. Personally I like the perfect gas naming better as it illustrates the perfect nature of the assumptions made about it. For simple systems (like monoatomic gases) where we can assume a perfect/ideal gas, $C_{V,m}$ is independent of temperature. For real gases this is certainly not the case. Note that $C_{V,m}=\frac{3}{2}R$ only holds for monoatomic gases.
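As a quick numerical sketch (added here for illustration, not part of the original answer), the temperature independence is explicit in the formula itself: the monoatomic value follows from equipartition alone, and no temperature appears anywhere.

```python
# Equipartition sketch for a perfect/ideal gas: each quadratic degree of
# freedom contributes (1/2) R to the molar heat capacity, so a monoatomic
# gas (3 translational degrees of freedom) has C_V,m = (3/2) R at any T.
R = 8.314462618  # molar gas constant, J/(mol K)

def cv_molar(dof=3):
    """Molar C_V from equipartition; dof = number of quadratic degrees
    of freedom (3 for a monoatomic gas)."""
    return 0.5 * dof * R

print(cv_molar())  # ~12.47 J/(mol K); no temperature enters the formula
```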
{ "domain": "chemistry.stackexchange", "id": 13418, "tags": "physical-chemistry, gas-laws" }
What allows the modified Urca process to work at lower density than direct Urca in neutron star cooling?
Question: The dominant method of neutron star cooling is neutrino emission. There are two regimes usually presented, the "direct Urca" and "modified Urca" processes, each of which are sequences of neutron decay and inverse reactions. The direct Urca looks like this: $$n\rightarrow p+l+\overline{\nu_l},\quad p + l \rightarrow n + \nu_l$$ where $l$ is a lepton - either an electron or a muon. These processes cause continuous emission of neutrinos which cools a neutron star relatively quickly. But below a density of $\rho\approx 10^{15}\mathrm{\,g\,cm^{-3}}$ (about three times the nuclear density) this process is suppressed, which means that the direct Urca process only occurs in the core. This is the reason according to a review of neutron star cooling from Pethick and Yakovlev (2004): The process can occur only if the proton concentration is sufficiently high. The reason for this is that, in degenerate matter, only particles with energies within ~$k_BT$ of the Fermi surface can participate in reactions, since other processes are blocked by the Pauli exclusion principle. If the proton and electron Fermi momenta are too small compared with the neutron Fermi momenta, the process is forbidden because it is impossible to satisfy conservation of momentum. Under typical conditions one finds that the ratio of the number density of protons to that of nucleons must exceed about 0.1 for the process to be allowed. This makes some sense. But what surprises me is that this process can still work with a slight modification at lower densities. The modified Urca process can cool the star $$n+N\rightarrow p+N+l+\overline{\nu_l},\quad p + N + l \rightarrow n + N + \nu_l$$ where $N$ is a nucleon - a proton or a neutron. This process, I'm told, can work at much lower densities, but produces 7 orders of magnitude less emissivity. As a result, it's the dominant process in the superfluid outer core. My question is why does the additional nucleon permit lower densities? 
How does an additional neutron or proton get us out of the conservation of momentum problem with the direct Urca process? Answer: In an $n,p,e$ gas the ratio of neutrons to protons decreases with density. For ideal degenerate gases, the Fermi energies are related by $E_{F,n} = E_{F,p} + E_{F,e}$. In this situation, the largest the proton to neutron ratio can become is 1/8 when all the particles are ultra-relativistic at infinite density. To conserve momentum in the direct URCA process $p_{F,n} \leq p_{F,p} + p_{F,e} = 2p_{F,p}$ (by charge neutrality). Hence because $p_F$ is proportional to $n^{1/3}$, then $n_p \geq n_n/8$. But I previously showed that $n_p < n_n /8$, hence in completely degenerate, ideal $n,p,e$ gases, the direct URCA process is blocked. At densities above about $8\times 10^{17}$ kg/m$^3$, the Fermi energy of the electrons equals the rest-mass energy of muons (105 MeV) and muons can be produced. In more realistic models that include nucleon interactions, this occurs at about $3\times 10^{17}$ kg/m$^3$. The charge neutrality equation changes and the net result is that the number of protons in the gas is able to increase slightly - it could get above 10% of the neutron numbers, or even higher if negatively charged Hyperons are produced at even higher densities. This opens up a channel for the direct URCA process to occur. The image below (taken from a thesis by Stephen Portillo) illustrates the proton (gold), electron (purple) and muon (blue) fractions as a function of baryonic density (in units of the nuclear saturation density $2.8\times10^{17}$ kg/m$^3$). This has been calculated incorporating nucleon interactions. The muons appear at about the saturation density and the proton fraction exceeds 1/8 at about $10^{18}$ kg/m$^3$. These reactions must involve neutrons, protons and electrons that are close to their Fermi energies (within about $kT$). 
Therefore this requirement on each species introduces an extra dependence of $\sim kT/E_F \ll 1$ per species into the reaction rate. In the modified URCA process "bystander particles" are available to balance the momentum and energy equations - i.e. their role is just to provide sources/sinks of energy/momentum. However, the requirement for two more bystander nucleons (also within $kT$ of their Fermi energies) suppresses the reaction rate considerably (two more factors of $kT/E_F$ or about $10^6$ at typical densities). So, even though the modified process dominates over suppressed direct URCA at low densities, if densities are high enough for direct URCA to be possible, it dominates.
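The momentum-conservation threshold in this answer can be checked with a short sketch (an editorial illustration, not from the original answer): with $p_F \propto n^{1/3}$ and charge neutrality ($n_e = n_p$), the triangle condition $p_{F,n} \leq p_{F,p} + p_{F,e}$ reduces to $n_n \leq 8\,n_p$.

```python
# Sketch: direct Urca is kinematically allowed only when the neutron Fermi
# momentum can be balanced, p_F,n <= p_F,p + p_F,e. With charge neutrality
# (n_e = n_p) and p_F proportional to n^(1/3), this requires n_p >= n_n / 8.

def fermi_momentum(n):
    # Up to the common constant hbar * (3 pi^2)^(1/3), which cancels here.
    return n ** (1.0 / 3.0)

def direct_urca_allowed(n_n, n_p):
    return fermi_momentum(n_n) <= fermi_momentum(n_p) + fermi_momentum(n_p)

print(direct_urca_allowed(1.0, 1.0 / 7.0))   # True: proton fraction above threshold
print(direct_urca_allowed(1.0, 1.0 / 10.0))  # False: blocked, modified Urca needed
```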
{ "domain": "physics.stackexchange", "id": 12726, "tags": "particle-physics, neutron-stars" }
Can Eve perform this operation?
Question: I am a beginner in quantum computing. Please consider the following scenario: Suppose Alice wants to send $\frac{1}{\sqrt{N}}\sum_{j=0,1,2,..N-1} |j\rangle$ to Bob. Eve has intercepted the state and wants to do the following operation - append his own $|0_{\text{Eve}}\rangle$ and get $$\frac{1}{\sqrt{N}}\sum_{j=0,1,2,..N-1} |j\rangle |0_{\text{Eve}}\rangle$$ Next he wants to apply the unitary transformation $$U|j\rangle|0_{\text{Eve}}\rangle = |j\rangle|j\rangle$$ and thereby convert the state to $$\frac{1}{\sqrt{N}}\sum_{j=0,1,2,..N-1} |j\rangle |j\rangle$$ Does such a unitary operation/transformation exist? If not, is it because of the no-cloning theorem, which says that you cannot copy a state as it is? Answer: It does exist. It's basically the controlled-not gate generalised to higher dimensional systems. The important thing to realise is that this means the bit that Bob ends up with will be highly entangled with what Eve has, and that will have significant observable consequences. As part of a cryptographic protocol, for example, Bob could detect that Eve is doing this before sending any sensitive information.
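A small sketch of the generalized controlled-NOT the answer mentions (a hypothetical illustration, not from the original answer): on basis states it acts as $U|j\rangle|k\rangle = |j\rangle|(j+k)\bmod N\rangle$, a permutation of the basis (hence unitary), and on $|j\rangle|0\rangle$ it gives $|j\rangle|j\rangle$.

```python
# Sketch: the generalized CNOT U|j>|k> = |j>|(j+k) mod N> permutes basis
# states (so it is unitary) and maps |j>|0> to |j>|j>, producing the
# correlated state the question asks about.
N = 4

def apply_generalized_cnot(state):
    """state: dict mapping (j, k) -> amplitude. Returns U applied to state."""
    out = {}
    for (j, k), amp in state.items():
        target = (j, (j + k) % N)
        out[target] = out.get(target, 0.0) + amp
    return out

# Uniform superposition (1/sqrt(N)) sum_j |j>|0_Eve>
amp = 1.0 / N ** 0.5
psi = {(j, 0): amp for j in range(N)}
phi = apply_generalized_cnot(psi)
# Result: (1/sqrt(N)) sum_j |j>|j> -- Bob's register is now perfectly
# correlated (entangled) with Eve's.
print(sorted(phi))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```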
{ "domain": "quantumcomputing.stackexchange", "id": 1101, "tags": "quantum-gate, textbook-and-exercises, no-cloning-theorem" }
Commutator of scalar field and its spatial derivative
Question: Consider the usual commutation relations of two scalar fields $$\left[\phi_{m}\left(t,\boldsymbol{x}\right),\pi_{n}\left(t,\boldsymbol{y}\right)\right]=\boldsymbol{i}\delta_{mn}\delta\left(\boldsymbol{x}-\boldsymbol{y}\right),$$ $$\left[\phi_{m}\left(t,\boldsymbol{x}\right),\phi_{n}\left(t,\boldsymbol{y}\right)\right]=\left[\pi_{m}\left(t,\boldsymbol{x}\right),\pi_{n}\left(t,\boldsymbol{y}\right)\right]=0.$$ What's the commutator of $\left[\partial_{i}\phi_{m}\left(t,\boldsymbol{x}\right),\phi_{n}\left(t,\boldsymbol{y}\right)\right]$, where $\partial_{i}\equiv\partial/\partial x^{i}$ is one of the three spatial derivatives? What about $\left[\partial_{i}\phi_{m}\left(t,\boldsymbol{x}\right),\pi_{n}\left(t,\boldsymbol{y}\right)\right]$ ? Attempt 1: $$\begin{array}{cl} \left[\partial_{i}\phi\left(t,\boldsymbol{x}\right),\phi\left(t,\boldsymbol{y}\right)\right] & =\partial_{i}\left[\phi\left(t,\boldsymbol{x}\right),\phi\left(t,\boldsymbol{y}\right)\right]+\left[\partial_{i},\phi\left(t,\boldsymbol{y}\right)\right]\phi\left(t,\boldsymbol{x}\right)\\ & =\left[\partial_{i},\phi\left(t,\boldsymbol{y}\right)\right]\phi\left(t,\boldsymbol{x}\right)\\ & =\left(\partial_{i}\phi\left(t,\boldsymbol{y}\right)\right)\phi\left(t,\boldsymbol{x}\right)-\phi\left(t,\boldsymbol{y}\right)\partial_{i}\phi\left(t,\boldsymbol{x}\right)\\ & =? \end{array}$$ Answer: Since we're not taking time derivatives, this is actually a pretty simple thing, but something that, for some reason, doesn't really pop out on a first viewing of a problem like this. The confusion perhaps arises from the fact that you have two types of operators acting on different spaces. You have the derivative operator $\partial_i$ acting on the space of functions from $\mathbb{R}^n$ to some general algebra of fields. You also have the field operators themselves, acting on your Hilbert space $\mathcal{H}$. 
Since these two operators act on different spaces, then we have $$\left[\frac{\partial}{\partial x^i}\phi(\textbf{x},t),\phi(\textbf{y},t)\right]=\frac{\partial}{\partial x^i}\left[\phi(\textbf{x},t),\phi(\textbf{y},t)\right]=0.$$ That is to say, you can pull out the derivative since only the first term in the commutator depends on $\textbf{x}$. Similarly, we have $$\left[\frac{\partial}{\partial x^i}\phi(\textbf{x},t),\pi(\textbf{y},t)\right]=\frac{\partial}{\partial x^i}\left[\phi(\textbf{x},t),\pi(\textbf{y},t)\right]=i\frac{\partial}{\partial x^i}\delta(\textbf{x}-\textbf{y}).$$ I don't know what it is about this question that trips people up (including myself the first time I was faced with something like this), but it's a lot simpler than it's made out to be.
{ "domain": "physics.stackexchange", "id": 41361, "tags": "homework-and-exercises, field-theory, differentiation, commutator, dirac-delta-distributions" }
Do all equilibrium points of a discrete mapping show up on the bifurcation diagram?
Question: The question in the title is perhaps vaguely posed, so I'll include the concrete example which is bugging me. Suppose we have a mapping given by $$N_{t+1}=N_t\cdot \exp(r(1-N_t-PN_t/(\alpha^2+N_t^2))),$$ where $\alpha$ and $r$ are some fixed constants. If we plot this nonlinear mapping, we get something like this (plotted for $r=0.35, \alpha=0.1$ and, from top to bottom, $P_1=0.165, P_2=0.235, P_3=0.35$). The blue line corresponds to $N_{t+1}=N_t$ From here it can be seen that there are 3 regimes the system can be in, depending on the value of the parameter $P$: For low values of $P$, the system has 2 fixed points (shown at 0 and at cca. 0.8) For medium values of $P$, the system has 4 fixed points (shown at 0, $A$,$B$ and $C$) For high values of $P$, the system has 2 fixed points again (shown at 0 and at cca. 0.03). I took the liberty of drawing cobweb diagrams for them all (except $N=0$, which seems to be an unstable fixed point for the given values), but I'll show just the ones important for my question: Case $P=0.235$, starting point $N_0<N_B$, 2048 iterations show that it's orbiting point $C$. Same case, this time the starting point is $N_0>N_B$ so the system tends towards point $A$, but much faster. This seemed odd at first so I decided to plot the bifurcation diagram and the Lyapunov exponent of values of $P$: Bifurcation diagram (dots are pretty dense after $P=0.4$, so it's not as reliable for comparison with the Lyapunov exponent after that point): Lyapunov exponent: Most of the conclusions I made before drawing those last 2 diagrams got confirmed, but one issue stands: in the medium $P$ regime ($P=0.235$), the bifurcation diagram shows that the system has only one stable state (the higher one, point $A$), but refuses to acknowledge the other ($C$), despite the cobweb diagram showing that it's probably stable. 
Not only that, the Lyapunov exponent is around -2 for that point, and I'm pretty sure that the graph is accurate (it's plotted for the first 1000 terms in the Lyapunov sum, with a discrete step of 1/10000). So, my main issue is: Does the bifurcation diagram show all equilibrium points that are accessible from a multitude of starting points for a given value of the bifurcation parameter? In this case, whether the system landed in $A$ or $C$ depended only on the initial condition, namely $N_0<N_B$ or $N_0>N_B$. For the doubled-period and chaotic regimes I got the correct cobweb diagrams (the system jumping between 2 distinct points and not settling in any point, respectively), among other correct predictions, so I'm convinced the error is not in my coding. Any insight into this matter would be greatly appreciated. EDIT: feel free to migrate this to math.se if you don't find this appropriate. The question arises from a physics problem, but I understand if it's considered off-topic. EDIT#2: As suggested, here is the enlarged version of the bifurcation diagram in the interval from $P=0.11$ to $P=0.5$ Answer: Ok, I do not know the Sage algorithm but I am going to offer a conjecture of what is happening. You have to verify the conjecture by further numerical investigations. I assume that the Sage algorithm works optimally for bifurcations of a single equilibrium and can run into problems such as we see here when dealing with equilibria (AKA fixed points) associated with more than one point. What you see on your bifurcation diagram for $P > 0.26$ is actually a 2-point oscillation around the unstable equilibrium in $C$ (a basic bifurcation algorithm will not show unstable fixed points). The reason why this oscillation seems to converge on $C$ is probably because the oscillation is always very close to $C$.
At $P \approx 0.26$ the map-curve $N_{t+1}(N_t)$ touches the $N_{t+1}=N_t$ line, the equilibria $A$ and $B$ emerge as a "tangent bifurcation" for $P<0.26$ and this confuses the Sage algorithm to shift from the oscillation around $C$ to $A$. This confusion shows as the vertical line at $P \approx 0.26$ on your diagram. (Alternatively, there might be a short window of chaos.) Once again, the bifurcation diagram cannot show the unstable equilibrium $B$ which would show as a "lower branch" on the tangent bifurcation as seen for $P<0.26$. To verify this conjecture, you can check that at $P\approx 0.26$ both A and B emerge. So the problem is probably that the Sage algorithm is trying to save computation time by only searching in the vicinity of the previous points on the bifurcation diagram but in so doing might omit parts of the diagram which are entirely due to a different equilibrium. To fix this problem you must simply set up Sage in a different way or use different software. (It is not difficult to write an inefficient but almost fail-safe program which draws the diagram just by shooting a bunch of densely placed random initial conditions and simply showing where they settle after a large number of iterations for every $P$.)
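As a concrete version of that last suggestion, here is a minimal brute-force sketch (names and details are mine; parameter values follow the question, $r=0.35$, $\alpha=0.1$, and a small fixed set of seeds is used instead of random ones for reproducibility): iterate several initial conditions past a long transient and record where they settle, so coexisting attractors cannot be missed.

```python
# Brute-force attractor scan for the map
# N_{t+1} = N_t * exp(r * (1 - N_t - P*N_t / (alpha^2 + N_t^2))).
import math

R_PARAM, ALPHA = 0.35, 0.1

def step(n, p):
    return n * math.exp(R_PARAM * (1.0 - n - p * n / (ALPHA**2 + n**2)))

def attractor_points(p, seeds=(0.05, 0.1, 0.2, 0.4, 0.6, 0.8),
                     transient=3000, keep=8):
    """Iterate every seed past the transient, then collect a few
    post-transient iterates (rounded, to merge converged orbits)."""
    points = set()
    for n in seeds:
        for _ in range(transient):
            n = step(n, p)
        for _ in range(keep):
            points.add(round(n, 4))
            n = step(n, p)
    return sorted(points)

# For P = 0.235 this reveals BOTH coexisting stable equilibria
# (one near C ~ 0.05, one near A ~ 0.64) -- exactly the pair that a
# diagram built by following only the previous branch can miss.
print(attractor_points(0.235))
```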
{ "domain": "physics.stackexchange", "id": 28041, "tags": "homework-and-exercises, mathematical-physics, equilibrium, chaos-theory, non-linear-systems" }
Boolean enums: improved clarity or just overkill?
Question: Suppose we are writing a GUI toolkit in C++ (though this question may also apply to other languages). We have a button class with a member function hide, which hides the button. This member function takes a Boolean parameter animated to control if the button should be hidden with an animation. class Button { public: // Rule of three, etc… void hide(bool animated); }; When invoking this member function, it may not be clear what it does. Button button; button.hide(false); // well, does it hide the button or not? // what does "false" even mean here?! We could rewrite this using a Boolean enum. class Button { public: // Rule of three, etc… enum Animated : bool { Animate = true, DoNotAnimate = false, }; void hide(Animated animated); }; Now if we call it, everything becomes more clear. Button button; button.hide(Button::DoNotAnimate); Is this a good thing to do? Does it improve clarity of the code, or is it just overkill and should we use separate documentation (Doxygen-like) for this instead? Answer: I think the enum is a very nice solution here. And in a way I disagree with Johannes, even for single-use the enum improves readability and discoverability of the API, and writing it is a negligible effort; and I’d be wary of using comments as in his example, they scream “hack”.
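The same readability trade-off exists outside C++; here is a hypothetical Python sketch of the idea (names are mine, added for illustration): a two-valued Enum reads better at call sites than a bare bool.

```python
# Sketch: replacing a bool parameter with a small Enum makes the intent
# of each call site self-documenting.
from enum import Enum

class Animated(Enum):
    YES = True
    NO = False

class Button:
    def hide(self, animated: Animated) -> str:
        if animated is Animated.YES:
            return "hiding with animation"
        return "hiding instantly"

button = Button()
print(button.hide(Animated.NO))  # intent is obvious at the call site
```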
{ "domain": "codereview.stackexchange", "id": 3086, "tags": "c++, enum" }
Counting Colors in Conway's Game of Life
Question: I have a basic version of CGoL running with pdCurses. My goal was to have each newly spawned cell take on the dominant color of their neighbors (if a spawned cell is mostly surrounded by red, make it red). I managed to get a half-baked solution working, but it has a few problems, mainly: It requires another member vector to hold the frequency of colors It requires the aforementioned vector to be marked as mutable so the constness of other functions isn't affected It required creating a struct to return dual results (the neighbor count, and the dominant color) The color frequency storage scheme is slightly confusing If someone can think of a cleaner method of achieving this, I would appreciate it. I'll also take any other kind of critique you may have. My main function to count the neighbors is: NeighborData Population::getNeighborData(int x, int y, int depth) const { int count = 0; for (int cY = y - depth; cY <= y + depth; cY++) { if (cY < 0 || cY >= height) continue; for (int cX = x - depth; cX <= x + depth; cX++) { if (cX < 0 || cX >= width || (cX == x && cY == y)) continue; unsigned char color = getPointColor(cX, cY); if (color != '\0') { count += 1; colorFreqs[color] += 1; } } } unsigned char c = consumeColorFrequencies(); return NeighborData(count,c); } vector colorFreqs has a pre-allocated slot for each color (only 16 on my machine). Every time we check a color, we look up the color using the color as an index, and increment its count. consumeColorFrequencies() is the main function that I'm asking about. It "consumes" the frequency vector, returning the dominant color (or the first color found if more than one has an equal frequency). NeighborData is a small struct with 2 members: the count, and the dominant color. I needed a way to return both bits of data at once to my decideLifeOf() method.
consumeColorFrequencies(): unsigned char Population::consumeColorFrequencies() const { int hIndex = 0, highest = 0; for (unsigned int i = 0; i < colorFreqs.size(); i++) { unsigned char freq = colorFreqs[i]; if (freq > highest) { hIndex = i, highest = freq; } } //Set all color frequencies to 0 std::fill(colorFreqs.begin(), colorFreqs.end(), 0); return hIndex; } And, the target use: void Population::decideLifeOf(int x, int y) { NeighborData nD = getNeighborData(x, y, 1); unsigned int ns = nD.count; unsigned char color = nD.color; if (ns < 2 || ns > 3) killPoint(x, y); else if (ns == 3) addPoint(x, y, color); } Population.h: #ifndef POPULATION_H #define POPULATION_H #include <set> #include <vector> #include "curses.h" struct NeighborData { unsigned int count = 0; unsigned char color = COLOR_WHITE; NeighborData(unsigned int ct, unsigned char cr); }; class Population { //To hold the "finished" generation, and the generation // currently being constructed std::vector<unsigned char> cells; std::vector<unsigned char> newCells; //To temporarily hold frequencies of colors //Index is the color, value is the number of occurances mutable std::vector<unsigned int> colorFreqs; int width = 0, height = 0; public: Population(int newWidth, int newHeight); bool pointIsOccupied(int x, int y) const; void addPoint(int x, int y, unsigned char color); void killPoint(int x, int y); unsigned char getPointColor(int x, int y) const; NeighborData getNeighborData(int x, int y, int depth = 1) const; void decideLifeOf(int, int); int getIndexOf(int, int) const; void replacePopulation(); unsigned char consumeColorFrequencies() const; }; unsigned char randomColor(unsigned char starting = 1); #endif Population.cpp: #include "Population.h" #include <cstdlib> #include <algorithm> #include "curses.h" NeighborData::NeighborData(unsigned int ct, unsigned char cr) { count = ct, color = cr; } Population::Population(int newWidth, int newHeight) { width = newWidth; height = newHeight; cells.resize(width * 
height); newCells.resize(width * height); colorFreqs.resize(COLORS); } bool Population::pointIsOccupied(int x, int y) const { return cells[getIndexOf(x, y)] != '\0'; } unsigned char Population::getPointColor(int x, int y) const { return cells[getIndexOf(x, y)]; } void Population::addPoint(int x, int y, unsigned char color) { newCells[getIndexOf(x, y)] = color; } void Population::killPoint(int x, int y) { newCells[getIndexOf(x, y)] = '\0'; } NeighborData Population::getNeighborData(int x, int y, int depth) const { int count = 0; for (int cY = y - depth; cY <= y + depth; cY++) { if (cY < 0 || cY >= height) continue; for (int cX = x - depth; cX <= x + depth; cX++) { if (cX < 0 || cX >= width || (cX == x && cY == y)) continue; unsigned char color = getPointColor(cX, cY); if (color != '\0') { count += 1; colorFreqs[color] += 1; } } } unsigned char c = consumeColorFrequencies(); return NeighborData(count,c); } void Population::decideLifeOf(int x, int y) { NeighborData nD = getNeighborData(x, y, 1); unsigned int ns = nD.count; unsigned char color = nD.color; if (ns < 2 || ns > 3) killPoint(x, y); else if (ns == 3) addPoint(x, y, color); } int Population::getIndexOf(int x, int y) const { return y * width + x; } void Population::replacePopulation() { cells = newCells; } unsigned char randomColor(unsigned char starting) { return (rand() % (COLORS - starting)) + starting; } unsigned char Population::consumeColorFrequencies() const { int hIndex = 0, highest = 0; for (unsigned int i = 0; i < colorFreqs.size(); i++) { unsigned char freq = colorFreqs[i]; if (freq > highest) { hIndex = i, highest = freq; } } //Set all color frequencies to 0 std::fill(colorFreqs.begin(), colorFreqs.end(), 0); return hIndex; } World.h: #ifndef WORLD_H #define WORLD_H #include <set> #include <sstream> #include <limits> #include <vector> #include "Population.h" class World { Population pop; int worldWidth = 0, worldHeight = 0; public: World(int, int); void compileOutput(std::string disp = "#") const; 
void simGeneration(); void randomizeCells(double chanceOfLife = 0.3, int newSeed = -1); }; #endif World.cpp: #include "World.h" #include <iomanip> #include <set> #include <cstdlib> #include <string> #include "curses.h" World::World(int xMax, int yMax) : pop(xMax,yMax) { worldWidth = xMax; worldHeight = yMax; } void World::compileOutput(std::string disp) const { for (int cY = 0; cY < worldHeight; cY++) { for (int cX = 0; cX < worldWidth; cX++) { char c = pop.getPointColor(cX, cY); init_pair(c, c, COLOR_BLACK); //(Pair number, fore color, back color) attron(COLOR_PAIR(c)); mvprintw( cY, cX, (pop.pointIsOccupied(cX, cY) ? disp.c_str() : " ") ); attroff(COLOR_PAIR(c)); } } } void World::simGeneration() { for (int y = 0; y < worldHeight; y++) { for (int x = 0; x < worldWidth; x++) { pop.decideLifeOf(x,y); } } pop.replacePopulation(); } void World::randomizeCells(double chanceOfLife, int newSeed) { if (newSeed > 0) srand(newSeed); for (int y = 0; y < worldHeight; y++) { for (int x = 0; x < worldWidth; x++) { if ((rand() % int(1.0 / chanceOfLife)) == 0) { unsigned char color = randomColor(); pop.addPoint(x, y, color); } } } pop.replacePopulation(); } Timer.h: #ifndef TIMER_H #define TIMER_H #include <chrono> class Timer { std::chrono::system_clock::time_point start; public: Timer(); void restart(); std::chrono::system_clock::time_point now(); double getMS(); double getSecs(); }; #endif Timer.cpp: #include "Timer.h" #include <ctime> Timer::Timer() { start = now(); } void Timer::restart() { start = now(); } std::chrono::system_clock::time_point Timer::now() { return std::chrono::system_clock::now(); } double Timer::getMS() { return (now() - start).count() / 10000.0; } double Timer::getSecs() { return getMS() / 1000.0; } Main.cpp: #include "Timer.h" #include "World.h" #include <iostream> #include <cstdlib> #include <vector> #include <chrono> #include <thread> #include "curses.h" int main(int argc, char* argv[]) { using namespace std; initscr(); /* Start curses mode */ 
start_color(); noecho(); // Don't echo any keypresses curs_set(FALSE); // Don't display a cursor const long maxX = 60, maxY = 40; World w(maxX, maxY); w.randomizeCells(0.4, 10); double lastDur = 1; Timer t; for (int rounds = 0; rounds < 5000; rounds++) { clear(); w.compileOutput("#"); mvprintw(maxY + 1, 0, "%d", rounds); w.simGeneration(); lastDur = t.getMS(); t.restart(); mvprintw(maxY + 2, 0, "%0.1f fps", 1000.0 / lastDur); refresh(); this_thread::sleep_for(chrono::milliseconds( 50 ) ); } endwin(); } Answer: I don't think there is anything wrong with your general approach (or at least I don't have a better suggestion). On an implementation level I've a few suggestions. As mentioned before, I'd replace the class member mutable std::vector<unsigned int> colorFreqs; with a local std::array<unsigned char, COLORS> colorFreqs{}; in getNeighborData and pass the array as a const ref parameter to consumeColorFrequencies. This gets rid of the mutable problem and might even increase performance. I'd write the getNeighborData function a little differently: NeighborData Population::getNeighborData(int x, int y, int depth) const { std::array<unsigned char, COLORS> colorFreqs{}; int count = 0; for (int cY = std::max(0, y - depth); cY <= std::min(height-1, y + depth); cY++) { for (int cX = std::max(0, x - depth); cX <= std::min(width-1, x + depth); cX++) { if (cX == x && cY == y) continue; unsigned char color = getPointColor(cX, cY); if (color != '\0') { count++; colorFreqs[color]++; } } } unsigned char c = consumeColorFrequencies(colorFreqs); return NeighborData(count, c); } Whether that is easier to understand than your version is up for discussion, but it should be a little more efficient.
consumeColorFrequencies can be simplified by using an STL algorithm: unsigned char Population::consumeColorFrequencies(const std::array<unsigned char, COLORS>& colorFreqs) const { auto it = std::max_element(std::begin(colorFreqs), std::end(colorFreqs)); return std::distance(std::begin(colorFreqs),it); } In response to the comment about multithreading: You can (more or less) trivially parallelize simGeneration by letting each thread generate the new cells for a slice of the world (e.g. a quarter of the rows on a 4-core machine). There are many parallel loop implementations out there that can make that task even easier. Obviously this is only sensible for very large grids.
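For comparison, here is how the same count-then-consume idea looks in Python (an editorial sketch with hypothetical names, not part of the original review): collections.Counter replaces the scratch vector, and most_common performs the "consume" step in one call, with no mutable member needed.

```python
# Sketch: dominant-neighbor-color counting with a local Counter.
from collections import Counter

def dominant_neighbor_color(grid, x, y, depth=1):
    """Return (live neighbor count, most frequent neighbor color or None).

    grid[y][x] holds a color code, or None for a dead cell.
    """
    freqs = Counter()
    h, w = len(grid), len(grid[0])
    for cy in range(max(0, y - depth), min(h, y + depth + 1)):
        for cx in range(max(0, x - depth), min(w, x + depth + 1)):
            if (cx, cy) == (x, y):
                continue  # skip the cell itself
            color = grid[cy][cx]
            if color is not None:
                freqs[color] += 1
    count = sum(freqs.values())
    dominant = freqs.most_common(1)[0][0] if freqs else None
    return count, dominant

grid = [[None, 1, 2],
        [1,    None, None],
        [None, 1, 2]]
print(dominant_neighbor_color(grid, 1, 1))  # (5, 1)
```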
{ "domain": "codereview.stackexchange", "id": 13002, "tags": "c++, c++11, console, game-of-life" }
Adding and removing classes at different heights on page using jQuery
Question: I want to remove/add classes when the user is at different distances from the top by using jQuery. I have successfully done it, and it works fine, but I think I'm doing it wrong, and I would like your help to optimize the code. The HTML is simple: basically the sections (including the header) have 100% width and different colors. I want to make the header change color when it's over the first section (for aesthetic purposes). And I also want it to have a shadow when the page has been scrolled more than 1 pixel. I'm doing it by adding/removing classes. When I use one big else if statement it doesn't work well, because whenever any condition is matched JS stops checking for other matches, so it doesn't apply all the classes needed. The next code works; however, as I said, I think that it's not optimal/badly written. Here is the HTML markup: <header class="dark no-shadow"> Header </header> <section class="blue"> Please Scroll Down to see the header changes... </section> <section> The header color should change when you pass through me. </section> And here is the jQuery code: var header = $('header'), blueSection = $('section.blue'), // Calculate when to change the color. offset = blueSection.offset().top + blueSection.height() - header.height(); $(window).scroll(function(){ var scroll = $(window).scrollTop(); // Remove Class "dark" after scrolling over the dark section if (scroll >= offset) { header.removeClass('dark'); } else { header.addClass('dark'); } // Remove Class "no-shadows" whenever not on the top of the page. if (scroll >= 1) { header.removeClass('no-shadow'); } else { header.addClass('no-shadow'); } }); And for those of you who like to use JSFiddle (like me!): https://jsfiddle.net/shock/wztdt077/6/ Answer: I know you tagged this jQuery, but I believe that loading in jQuery for this is a lot of processing for little benefit, so I decided to write it with a more universal approach; the lack of one is, I think, the main problem with your code.
What you want to accomplish is to have a little script that can be included and will work predictably in as many circumstances as possible without being too specific, so first I went about actually creating a structure: The element the class will be applied to will have an attribute named data-scroll-group - it will define the group name for these scrollable items. The elements that your header will respond to will contain an attribute constructed of the above group name and prefixed by data. Its contents will be the classname(s) you want to apply. This structure looks like this: <header data-scroll-group="header-group"></header> <section data-header-group="aClassName"></section> This means you could have many scroll groups on the page, allowing you to reuse the code with different aspects and different headers, different responders, etc... It makes your code less connected to the DOM and more based in a structure. The rest is pretty simple. Use getBoundingClientRect to get the element's position compared to the viewport. I am currently applying the classes using the position of the viewport, but you could simply get the same getBoundingClientRect on the header and add the values together to get a result for your header specifically. Check out the snippet and tell me if you have any questions about it. You could replace my header.className with jQuery if you wanted - you could replace most of it with jQuery, but I think coupling your code to jQuery has no benefit here, as you want this to work anywhere, and pure vanilla JavaScript is about as long.
// This is a handy wrapper function that will return an array of matching element instead of a nodeList function querySelectorArray(query, root){ return Array.prototype.slice.call((root || document).querySelectorAll(query)); } // Get all headers that are designated 'scroll-group' var headers = querySelectorArray('[data-scroll-group]'); // Loop through the headers headers.forEach(function(header){ // Get the name of the group from the headers [data-scroll-group] attribute var group = header.getAttribute('data-scroll-group'); // Get all the sections with a matching data-[data-scroll-group] attribute var sections = querySelectorArray('[data-' + group + ']'); // Create an Event Listener for scrolling window.addEventListener('scroll', function(){ // Declare a lastSection variable that can store the last class that scrolled by var lastSection = false; sections.forEach(function(section){ // Get the elements position compared to the viewport var offset = section.getBoundingClientRect(); // If the position is smaller than 0 it has scrolled further than that section // The same is true for the scroll being smaller than the negative height - if so, it is out of view. 
if(offset.top < 0 && offset.top > -offset.height) lastSection = section.getAttribute('data-' + group + ''); }); // Apply the class to your header header.className = lastSection || ''; }) }); body { padding: 0; margin: 0; padding-top: 20px; height: 405vh; font-family: Arial, serif; } header { position: fixed; width: 100%; top: 0; left: 0; background: #dd0300; -webkit-transition: all 1s; transition: all 1s; color: #fff; padding: 5px 20px; box-sizing: border-box; } header.blue { background: #4e88ff; } header.shadow { box-shadow: 0 0 20px #000; } section { box-sizing: border-box; width: 100%; height: 100vh; background: #ccc; padding: 20px; } <header data-scroll-group="header-group"> Header </header> <section data-header-group="blue"> Scroll Down (header will become blue) </section> <section data-header-group="shadow"> Scroll Down (header will have a shadow, no longer blue) </section> <section data-header-group="blue shadow"> Scroll Down (header will have a shadow and be blue) </section> <section> Scroll Down (header will reset) </section>
{ "domain": "codereview.stackexchange", "id": 18605, "tags": "javascript, jquery" }
Difference between ‘tagging’ and ‘conjugating’ a fluorochrome to an antibody?
Question: The Wikipedia entry on fluorescence repeatedly states that “a fluorochrome must be tagged or conjugated to the antibody”. How are tagged and conjugated different? Is this a mistake, or are these indeed different concepts? Answer: Conjugated and tagged mean the same thing here, although I would advise against using tagged in this context. When speaking of antibodies, tagging usually means the addition of a (short) peptide sequence to a protein, either to do something useful (a degradation tag, a HHHHHH-tag) or just as an epitope for an antibody.
{ "domain": "biology.stackexchange", "id": 6912, "tags": "terminology, antibody" }
Flex panels in CSS and JS
Question: I was following Wes Bos JS 30-day challenge, so HTML and CSS are mostly copy-paste; I'd like feedback on the JS (mostly). Thanks. const panels = document.querySelectorAll('.panel'); panels.forEach(panel => panel.addEventListener('click', () => { const isOpen = panel.classList.contains('open'); panels.forEach(panel => panel.classList.remove('open')); if(!isOpen) { panel.classList.add('open'); } })); panels.forEach(panel => panel.addEventListener('transitionend', e => { if(e.propertyName.includes('flex')) { panels.forEach(panel => { if(panel.classList.contains('open')) { panel.classList.add('open-active'); } else { panel.classList.remove('open-active'); } }); } })); html { box-sizing: border-box; background: #ffc600; font-family: 'helvetica neue'; font-size: 20px; font-weight: 200; } body { margin: 0; } *, *:before, *:after { box-sizing: inherit; } .panels { min-height: 100vh; overflow: hidden; display: flex; } .panel { background: #6B0F9C; box-shadow: inset 0 0 0 5px rgba(255, 255, 255, 0.1); color: white; text-align: center; align-items: center; /* Safari transitionend event.propertyName === flex */ /* Chrome + FF transitionend event.propertyName === flex-grow */ transition: font-size 0.7s cubic-bezier(0.61, -0.19, 0.7, -0.11), flex 0.7s cubic-bezier(0.61, -0.19, 0.7, -0.11), background 0.2s; font-size: 20px; background-size: cover; background-position: center; flex: 1; display: flex; justify-content: center; flex-direction: column; } .panel1 { background-image: url(https://source.unsplash.com/gYl-UtwNg_I/1500x1500); } .panel2 { background-image: url(https://source.unsplash.com/rFKUFzjPYiQ/1500x1500); } .panel3 { background-image: url(https://images.unsplash.com/photo-1465188162913-8fb5709d6d57?ixlib=rb-0.3.5&q=80&fm=jpg&crop=faces&cs=tinysrgb&w=1500&h=1500&fit=crop&s=967e8a713a4e395260793fc8c802901d); } .panel4 { background-image: url(https://source.unsplash.com/ITjiVXcwVng/1500x1500); } .panel5 { background-image: 
url(https://source.unsplash.com/3MNzGlQM7qs/1500x1500); } /* Flex Children */ .panel>* { margin: 0; width: 100%; transition: transform 0.5s; flex: 1 0 auto; display: flex; align-items: center; justify-content: center; } .panel *:first-child { transform: translateY(-100%); } .panel *:last-child { transform: translateY(100%); } .panel.open-active *:first-child, .panel.open-active *:last-child { transform: translateY(0); } .panel p { text-transform: uppercase; font-family: 'Amatic SC', cursive; text-shadow: 0 0 4px rgba(0, 0, 0, 0.72), 0 0 14px rgba(0, 0, 0, 0.45); font-size: 2em; } .panel p:nth-child(2) { font-size: 4em; } .panel.open { font-size: 40px; flex: 5; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Flex Panels </title> <link href="https://fonts.googleapis.com/css?family=Amatic+SC" rel="stylesheet" type="text/css"> </head> <body> <div class="panels"> <div class="panel panel1"> <p>Hey</p> <p>Let's</p> <p>Dance</p> </div> <div class="panel panel2"> <p>Give</p> <p>Take</p> <p>Receive</p> </div> <div class="panel panel3"> <p>Experience</p> <p>It</p> <p>Today</p> </div> <div class="panel panel4"> <p>Give</p> <p>All</p> <p>You can</p> </div> <div class="panel panel5"> <p>Life</p> <p>In</p> <p>Motion</p> </div> </div> </body> </html> Answer: It looks pretty good to me. I can only see a few things to consider: More precise propertyName check You have if(e.propertyName.includes('flex')) { because Safari uses flex and others use flex-grow. Are you sure that the flex substring won't be present in any other possible CSS transitions? Even if you're sure, will readers of the code be sure? I'd change to an === test against both possibilities, or at least use startsWith (which is a bit more appropriate than .includes here, since both possibilities start with flex). You can also move the comment about the transition event name to the JS as well as the CSS. 
Concise classList setting When you want to either add a class name, or remove a class name, based on a condition, you can condense an if(...) classList.add(...) else(...) classList.remove into a single classList.toggle with a second argument that indicates whether to add or remove the class. Your if(panel.classList.contains('open')) { panel.classList.add('open-active'); } else { panel.classList.remove('open-active'); } simplifies to const { classList } = panel; classList.toggle('open-active', classList.contains('open')); Browser compatibility Though, some ancient browsers don't support the 2nd argument, so consider what sort of browsers you need to support. If you only want to support reasonably up-to-date browsers, it's just fine. Another thing to keep in mind is that NodeList.prototype.forEach was only introduced a few years ago, around 2016 or 2017 IIRC; like startsWith, it's newer than ES6, so either use a polyfill or use iterators and Babel instead, eg: for (const panel of panels) { // do stuff with panel (if you want to support IE, you should be using Babel anyway, to transpile your code to ES5 syntax) Void return? panels.forEach(panel => panel.addEventListener returns the value of calling addEventListener to the caller of forEach. Since forEach doesn't look at what its callbacks return, this doesn't do anything. It's not a real problem, but some might consider the code to make a bit more sense if the forEach callback returned void (no return statement or implicit return at all). (Described in TypeScript's TSLint here) Clickable panels Since the panels are clickable, maybe change from the default cursor to cursor: pointer to make it more obvious to the user that they're meant to be clicked? Space between elements in selectors I'd change .panel>* to .panel > * - it makes it a bit easier to read when separate elements are separated by spaces. 
Repetitive panels Rather than <div class="panel panel1"> </div> <div class="panel panel2"> </div> .panel1 { background-image: url(https://source.unsplash.com/gYl-UtwNg_I/1500x1500); } .panel2 { background-image: url(https://source.unsplash.com/rFKUFzjPYiQ/1500x1500); } Consider using :nth-child instead, allowing you to remove the extra panel# classes entirely. .panel:nth-child(1) { background-image: url(https://source.unsplash.com/gYl-UtwNg_I/1500x1500); } .panel:nth-child(2) { background-image: url(https://source.unsplash.com/rFKUFzjPYiQ/1500x1500); } const panels = document.querySelectorAll('.panel'); panels.forEach((panel) => { panel.addEventListener('click', () => { const isOpen = panel.classList.contains('open'); panels.forEach(panel => panel.classList.remove('open')); if (!isOpen) { panel.classList.add('open'); } }); }); panels.forEach((panel) => { panel.addEventListener('transitionend', e => { /* Safari transitionend event.propertyName === flex */ /* Chrome + FF transitionend event.propertyName === flex-grow */ if (e.propertyName === 'flex' || e.propertyName === 'flex-grow') { panels.forEach(panel => { const { classList } = panel; classList.toggle('open-active', classList.contains('open')); }); } }) }); html { box-sizing: border-box; background: #ffc600; font-family: 'helvetica neue'; font-size: 20px; font-weight: 200; } body { margin: 0; } *, *:before, *:after { box-sizing: inherit; } .panels { min-height: 100vh; overflow: hidden; display: flex; } .panel { background: #6B0F9C; box-shadow: inset 0 0 0 5px rgba(255, 255, 255, 0.1); color: white; text-align: center; align-items: center; /* Safari transitionend event.propertyName === flex */ /* Chrome + FF transitionend event.propertyName === flex-grow */ transition: font-size 0.7s cubic-bezier(0.61, -0.19, 0.7, -0.11), flex 0.7s cubic-bezier(0.61, -0.19, 0.7, -0.11), background 0.2s; font-size: 20px; background-size: cover; background-position: center; flex: 1; display: flex; justify-content: center; 
flex-direction: column; cursor: pointer; } .panel:nth-child(1) { background-image: url(https://source.unsplash.com/gYl-UtwNg_I/1500x1500); } .panel:nth-child(2) { background-image: url(https://source.unsplash.com/rFKUFzjPYiQ/1500x1500); } .panel:nth-child(3) { background-image: url(https://images.unsplash.com/photo-1465188162913-8fb5709d6d57?ixlib=rb-0.3.5&q=80&fm=jpg&crop=faces&cs=tinysrgb&w=1500&h=1500&fit=crop&s=967e8a713a4e395260793fc8c802901d); } .panel:nth-child(4) { background-image: url(https://source.unsplash.com/ITjiVXcwVng/1500x1500); } .panel:nth-child(5) { background-image: url(https://source.unsplash.com/3MNzGlQM7qs/1500x1500); } /* Flex Children */ .panel > * { margin: 0; width: 100%; transition: transform 0.5s; flex: 1 0 auto; display: flex; align-items: center; justify-content: center; } .panel *:first-child { transform: translateY(-100%); } .panel *:last-child { transform: translateY(100%); } .panel.open-active *:first-child, .panel.open-active *:last-child { transform: translateY(0); } .panel p { text-transform: uppercase; font-family: 'Amatic SC', cursive; text-shadow: 0 0 4px rgba(0, 0, 0, 0.72), 0 0 14px rgba(0, 0, 0, 0.45); font-size: 2em; } .panel p:nth-child(2) { font-size: 4em; } .panel.open { font-size: 40px; flex: 5; } <link href="https://fonts.googleapis.com/css?family=Amatic+SC" rel="stylesheet" type="text/css"> <div class="panels"> <div class="panel"> <p>Hey</p> <p>Let's</p> <p>Dance</p> </div> <div class="panel"> <p>Give</p> <p>Take</p> <p>Receive</p> </div> <div class="panel"> <p>Experience</p> <p>It</p> <p>Today</p> </div> <div class="panel"> <p>Give</p> <p>All</p> <p>You can</p> </div> <div class="panel"> <p>Life</p> <p>In</p> <p>Motion</p> </div> </div>
{ "domain": "codereview.stackexchange", "id": 39805, "tags": "javascript, html, css, ecmascript-6, event-handling" }
Overusing JavaScript closures?
Question: I've finally gotten around to learning Lisp/functional programming. However, what I've noticed is that I'm trying to bring ideas back into JavaScript. Example Before var myPlacemark, myLineString; myLineString = ge.createLineString(''); myLineString.setLatitude(100); myLineString.setLongitude(-100); myPlacemark = ge.createPlacemark(''); myPlacemark.setGeometry(myLineString); After var myPlacemark; myPlacemark = (function(point, placemark){ point.setLatitude(100); point.setLongitude(-100); placemark.setGeometry(point); return placemark; })(ge.createPoint(''), ge.createPlacemark('')); Is there any reason I shouldn't be doing it the 2nd way? Answer: What you have there is actually just a fancy assignment operation. The closure there plays no role. And if you ever needed to set another placemark, you would have to repeat the code or wrap it in one more function. IMHO, it would be much more pragmatic to use a much simpler approach: var createPlacemark = function (point, placemark) { point.setLatitude(100); point.setLongitude(-100); placemark.setGeometry(point); return placemark; }, myPlacemark = createPlacemark(ge.createPoint(''), ge.createPlacemark('')); This way you get a reusable routine with a clear name. And if the goal of all this was to prevent external sources from adding placemarks, just wrap it all in the standard: (function () { }()); The bottom line is: you were over-thinking it.
{ "domain": "codereview.stackexchange", "id": 2375, "tags": "javascript, functional-programming" }
Does thermionic emission require vacuum?
Question: Every source I looked at talks about thermionic emission within the context of vacuum tubes. However, I was wondering whether a vacuum is a requirement for this effect to work. Can any cathode, if sufficiently heated, emit electrons? Answer: A vacuum is not a requirement, except for specific applications such as the Edison effect. The original discovery was not in a vacuum. You can read about it in the Wikipedia article on thermionic emission.
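To get a feel for how strongly emission depends on temperature, thermionic emission from a hot cathode is commonly modelled with the Richardson–Dushman law, J = A·T²·exp(−W/kT). A minimal sketch (the work function and Richardson constant below are textbook-style values for tungsten, used purely for illustration):

```python
import math

K_B_EV = 8.617333e-5   # Boltzmann constant in eV/K
A_RICH = 1.20e6        # theoretical Richardson constant, A m^-2 K^-2

def richardson_current_density(T_kelvin, work_function_ev):
    """Emitted current density J = A * T^2 * exp(-W / kT), in A/m^2."""
    return A_RICH * T_kelvin ** 2 * math.exp(-work_function_ev / (K_B_EV * T_kelvin))

# Illustrative: a tungsten-like cathode (W ~ 4.5 eV) at two temperatures
cold = richardson_current_density(1500.0, 4.5)  # negligible emission
hot = richardson_current_density(2500.0, 4.5)   # on the order of kA/m^2
print(f"{cold:.2e} A/m^2 vs {hot:.2e} A/m^2")
```

Note that this law says nothing about the surrounding medium: emission happens regardless, but in air the emitted electrons are quickly scattered or captured, which is why practical devices like vacuum tubes still use a vacuum.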
{ "domain": "physics.stackexchange", "id": 67083, "tags": "electricity, electrons, vacuum" }
Creating a list of pairs after splitting the string containing all of the sets
Question: I am working on asp.net/MVC and I need to read a value from my web.config file. Splitting the list into multiple keys is not an option. <add key="DepartAssign" value="Depart1:PersonAssignTo1; Depart2:PersonAssignTo2"/> I handled the problem by doing the following (though I am sure there is a better approach) public List<string> departmentList = new List<string>(); public List<string> assignedtoList = new List<string>(); public string[] pair = ConfigurationManager.AppSettings["DepartAssign"].Split(';'); foreach (string s in pair) { departmentList.Add(s.Split(':').First()); //add the department to the list assignedtoList.Add(s.Split(':').Last()); //add the assignedto to the list } and of course, I would be able to match the department with the assigned person by using the same index. Not really the cleanest approach but it works for now. Any way to make this code better? Answer: You could create your own custom section in your config file. First define what each record looks like: public class Department : ConfigurationSection { [ConfigurationProperty("name", IsRequired = true)] public string Name { get { return (string)this["name"]; } set { this["name"] = value; } } [ConfigurationProperty("assignee", IsRequired = true)] public string Assignee { get { return (string)this["assignee"]; } set { this["assignee"] = value; } } } Define a collection so the framework can work with your section public class DepartmentCollection : ConfigurationElementCollection { public override ConfigurationElementCollectionType CollectionType { get { return ConfigurationElementCollectionType.BasicMap; } } protected override string ElementName { get { return "Department"; } } protected override ConfigurationPropertyCollection Properties { get { return new ConfigurationPropertyCollection(); } } public Department this[int index] { get { return (Department) BaseGet(index); } set { if (BaseGet(index) != null) { BaseRemoveAt(index); } base.BaseAdd(index, value); } } public 
Department this[string name] { get { return (Department) BaseGet(name); } } public void Add(Department item) { base.BaseAdd(item); } public void Remove(Department item) { BaseRemove(item); } public void RemoveAt(int index) { BaseRemoveAt(index); } protected override ConfigurationElement CreateNewElement() { return new Department(); } protected override object GetElementKey(ConfigurationElement element) { return (element as Department).Name; } } Define what the list looks like public class DepartmentList : ConfigurationSection { private static readonly ConfigurationPropertyCollection DepartmentListProperties; private static readonly ConfigurationProperty DepartmentProperty; static DepartmentList() { DepartmentProperty = new ConfigurationProperty( "", typeof (DepartmentCollection), null, ConfigurationPropertyOptions.IsRequired | ConfigurationPropertyOptions.IsDefaultCollection ); DepartmentListProperties = new ConfigurationPropertyCollection { DepartmentProperty }; } public DepartmentCollection Departments { get { return (DepartmentCollection) base[DepartmentProperty]; } set { base[DepartmentProperty] = value; } } protected override ConfigurationPropertyCollection Properties { get { return DepartmentListProperties; } } } Now, in your .config file specify the new type: <configSections> <section name="DepartmentAssigneeMapping" type="DatabaseCleanup.DepartmentList, DatabaseCleanup, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/> </configSections> and add your section <DepartmentAssigneeMapping> <Department name="Depart1" assignee="PersonAssignTo1" /> <Department name="Depart2" assignee="PersonAssignTo2" /> </DepartmentAssigneeMapping> Now, in your class: public List<string> departmentList; public List<string> assignedtoList; var mappings = (DepartmentList) ConfigurationManager.GetSection("DepartmentAssigneeMapping"); departmentList = ( from Department department in mappings.Departments select department.Name).ToList(); assignedtoList = ( from Department 
department in mappings.Departments select department.Assignee).ToList(); Your class is much nicer, and the config is way cleaner. This is fully tested, and the results you are looking for are replicated.
{ "domain": "codereview.stackexchange", "id": 2345, "tags": "c#, asp.net, asp.net-mvc-3" }
Bacterial Conjugation/Horizontal Gene Transfer -- how does the plasmid exchange work?
Question: So according to a PPT I'm reading, bacterial conjugation works by the two bacteria joining pili and exchanging plasmids. So how exactly do the plasmids get across the gap? If I understand this correctly, the pili are little hair-like things on the outside of the bacteria -- so unless the plasmids were actually able to somehow be pushed through the pili, there would be a gap. Right? The only other possibility I can think of would be that the membranes fused or something like that. But that comes with its own whole set of problems. How does this work? Thanks! evamvid Answer: The donor cell retracts its pilus upon contact with another cell, and both cells form a pore between them, which allows the transfer of DNA. Take a look at the diagram in the Wikipedia article on Pili; I think it makes this clearer. This has even been seen under an electron microscope.
{ "domain": "biology.stackexchange", "id": 1918, "tags": "bacteriology, reproduction" }
EM wave function & photon wavefunction
Question: According to this review Photon wave function. Iwo Bialynicki-Birula. Progress in Optics 36 V (1996), pp. 245-294. arXiv:quant-ph/0508202, a classical EM plane wavefunction is a wavefunction (in Hilbert space) of a single photon with definite momentum (cf. Section 1.4), although a naive probabilistic interpretation is not applicable. However, what I've learned in some other sources (e.g. Sakurai's Advanced QM, chap. 2) is that the classical EM field is obtained by taking the expectation value of the field operator. Then according to Sakurai, the classical $E$ (or $B$) field of a single-photon state with definite momentum $p$ is given by $\langle p|\hat{E}|p\rangle$ (or $\langle p|\hat{B}|p\rangle$), which is $0$ in the whole space. This seems to contradict the first view, but both views make equally good sense to me by their own reasonings, so how do I reconcile them? Answer: As explained by Iwo Bialynicki-Birula in the paper quoted, the Maxwell equations are relativistic equations for a single photon, fully analogous to the Dirac equation for a single electron. By restricting to the positive-energy solutions, one gets in both cases an irreducible unitary representation of the full Poincare group, and hence the space of modes of a photon or electron in quantum electrodynamics. Classical fields are expectation values of quantum fields, but the classically relevant states are the coherent states. Indeed, for a photon, one can associate to each mode a coherent state, and in this state, the expectation value of the e/m field results in the value of the field given by the mode. For more details, see my lectures http://arnold-neumaier.at/ms/lightslides.pdf http://arnold-neumaier.at/ms/optslides.pdf and Chapter B2: Photons and Electrons of my theoretical physics FAQ.
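The reconciliation can be seen in a schematic single-mode calculation (a sketch with normalization factors omitted, not the full relativistic treatment of the review). Writing the field operator of one mode in terms of its annihilation operator $\hat a$:

```latex
\hat{E}(x) \;\propto\; \hat{a}\,e^{i(kx-\omega t)} + \hat{a}^\dagger\,e^{-i(kx-\omega t)}
```

A state of definite photon number $|n\rangle$ (such as a one-photon momentum eigenstate) has $\langle n|\hat{a}|n\rangle = 0$, hence $\langle\hat{E}(x)\rangle = 0$ everywhere, exactly as in Sakurai. A coherent state $|\alpha\rangle$ of the same mode, with $\hat{a}|\alpha\rangle = \alpha|\alpha\rangle$, instead gives

```latex
\langle\alpha|\hat{E}(x)|\alpha\rangle \;\propto\; 2\,\mathrm{Re}\!\left[\alpha\,e^{i(kx-\omega t)}\right]
```

which is the classical mode function itself. The mode plays the role of the photon "wavefunction", while the nonzero classical field belongs to the coherent state built on that mode, so the two statements do not contradict each other.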
{ "domain": "physics.stackexchange", "id": 3547, "tags": "quantum-field-theory, mathematical-physics, visible-light, quantum-electrodynamics" }
What is the most agreed upon quantum mechanical equation of motion?
Question: Multiple Wikipedia articles mention several quantum mechanical equations of motion, namely those by Schrödinger and Heisenberg. Which one is the most accurate and agreed-upon quantum mechanical equation of motion? Answer: The Schrödinger picture and the Heisenberg picture are unitarily equivalent. Neither is more accurate, more fundamental, or in any other objective sense "better" than the other.
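For concreteness, the unitary equivalence can be written out for a time-independent Hamiltonian $H$ with $U(t) = e^{-iHt/\hbar}$:

```latex
|\psi_S(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad A_H(t) = U^\dagger(t)\,A_S\,U(t)
```

so every measurable prediction agrees between the two pictures:

```latex
\langle\psi_S(t)|\,A_S\,|\psi_S(t)\rangle
  = \langle\psi(0)|\,U^\dagger(t)\,A_S\,U(t)\,|\psi(0)\rangle
  = \langle\psi(0)|\,A_H(t)\,|\psi(0)\rangle
```

In the Schrödinger picture the state carries the time dependence; in the Heisenberg picture the operators do; the expectation values, and hence the physics, are identical.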
{ "domain": "physics.stackexchange", "id": 17863, "tags": "quantum-mechanics, schroedinger-equation, time-evolution" }
Questions about Vapor Pressure and Cavitation Bubble
Question: I am currently learning about cavitation in my fluid mechanics study. I am confused about the definition of vapor pressure: does it exist only on the saturation line? I am also confused about how a cavitation bubble can form just because the fluid pressure dropped below the vapor pressure. Answer: Vapor pressure is a physical property of a given substance that depends only on the substance's temperature. One way to calculate it involves the Antoine equation (see https://en.wikipedia.org/wiki/Antoine_equation). For a given substance under equilibrium conditions in a closed container, and under ambient pressures higher than its vapor pressure (i.e., there is a mixture in the container), the substance still exerts its vapor pressure. For cases where the associated vapor-liquid equilibrium is ideal, that vapor pressure is the partial pressure of the material inside the container, per Raoult's Law (see https://en.wikipedia.org/wiki/Raoult%27s_law). This means that vapor pressure equals saturation pressure only in the case of a pure component.
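As a concrete illustration of the Antoine equation mentioned above, here is a minimal sketch in Python. The coefficients are commonly tabulated values for water over roughly 1–100 °C (with P in mmHg and T in °C); check a data source such as NIST for the substance and range you need:

```python
def antoine_pressure_mmhg(T_celsius, A, B, C):
    """Vapor pressure via the Antoine equation: log10(P) = A - B / (C + T)."""
    return 10 ** (A - B / (C + T_celsius))

# Antoine coefficients for water, valid roughly 1-100 C (P in mmHg, T in C)
A, B, C = 8.07131, 1730.63, 233.426

# At 100 C the vapor pressure of water reaches ~760 mmHg (1 atm), which is
# exactly the condition for boiling at sea-level ambient pressure.
p100 = antoine_pressure_mmhg(100.0, A, B, C)
print(f"{p100:.0f} mmHg")  # ~760
```

The same logic underlies cavitation: wherever the local fluid pressure drops below this temperature-dependent value, the liquid can flash to vapor and a bubble can nucleate.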
{ "domain": "physics.stackexchange", "id": 63293, "tags": "fluid-dynamics, evaporation, bubbles" }
The 'directionality' of reductions?
Question: I've been finding myself a bit confused with the direction of reductions used to show that certain languages are not recursive. For example, let us say we want to determine if the Halting Problem ($HALT_{TM}$) is undecidable. I know we can assume that it is decidable and then try to build a decider for the acceptance problem, which is impossible. But though we are using the Acceptance Problem ($A_{TM}$) to help solve the decidability of the Halting Problem, we have reduced the Acceptance Problem to the Halting Problem, not the other way around, right? I sometimes get a little bit confused when I encounter questions that ask me to deploy a reduction; I will be asked to reduce language $x$ to $y$, but what that means is that $y$ is a simpler instance of a problem of $x$, right (or at least should be)? I'm assuming it's impossible to reduce a simpler version of a problem to a more complex version of a problem; am I right in believing that? Answer: Don't worry – everybody gets confused by the direction of reductions. Even people who've been working in algorithms and complexity for decades occasionally have a "Wait, were we supposed to be reducing $A$ to $B$ or $B$ to $A$?" moment. Reducing $A$ to $B$ produces a statement of the form "If I could solve $B$, then I'd also know how to solve $A$". "Solve" in this sense could mean "compute using any Turing machine", or "compute in polynomial time", or whatever other notion of solution your context requires. This may seem counterintuitive, since "$A$ reduces to $B$" implies that solving $B$ is at least as hard as solving $A$, so you haven't reduced the difficulty. However, you can think of it as reducing the number of problems you need to solve. Imagine that, at the start of the day, your goals were to find an algorithm for $A$ and an algorithm for $B$. Well, now that you've found a reduction from $A$ to $B$, you've reduced your goals to just finding an algorithm for $B$.
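The direction of the arrow can be made concrete with a toy, fully decidable example (hypothetical names, purely illustrative; the halting problem itself of course admits no such solver). Reducing squaring (problem $A$) to multiplication (problem $B$) means: given any solver for $B$, we can build a solver for $A$:

```python
def reduce_squaring_to_multiplication(multiply):
    """Given ANY solver for problem B (multiplication), return a solver
    for problem A (squaring). This is what 'A reduces to B' means:
    a solution for B yields a solution for A."""
    def square(x):
        # Transform an instance of A into an instance of B,
        # then invoke the assumed solver for B.
        return multiply(x, x)
    return square

# If B really is solvable, the reduction hands us a solver for A:
square = reduce_squaring_to_multiplication(lambda a, b: a * b)
print(square(7))  # 49

# Hardness proofs use the contrapositive: if A is known to be unsolvable,
# then B cannot be solvable either. Reducing the acceptance problem to the
# halting problem is exactly this pattern, and shows HALT is undecidable.
```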
{ "domain": "cs.stackexchange", "id": 10152, "tags": "computability, turing-machines, reductions, halting-problem" }
Why is nuclear pasta the strongest part of a neutron star?
Question: From my very rudimentary understanding of neutron stars, taken from the abstract of "Elasticity of Nuclear Pasta", nuclear pasta is the strongest substance, hence the strongest part of a neutron star. Our results show that nuclear pasta may be the strongest known material, perhaps with a shear modulus of $10^{30}$ ergs/cm³ and a breaking strain greater than 0.1. It seems like the unbelievable pressure can only increase closer to the core of the star. With all of the particles condensed, wouldn't that (the core) be the strongest part? Not the pasta? Answer: The paper is discussing "strength" in terms of the shear modulus, which is the resistance of a material to shearing forces, not its resistance to compression. A shear modulus of $10^{30}$ erg/cm$^3$ (about $10^{29}$ Pa) is roughly 18 orders of magnitude larger than that of steel. The deep interior of a neutron star, interior to any nuclear pasta phase, is a fluid. A fluid does not have a shear modulus. Instead it has a bulk modulus, which is a small multiple of the gas pressure, and measures the resistance of the material to compression. The gas pressure does indeed increase towards the centre of the neutron star. In conclusion, the fluid at the centre is less compressible than nuclear pasta, but nuclear pasta is "stronger" in terms of resistance to shearing forces.
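For scale, the quoted CGS figure converts to SI with a one-line calculation (a quick sketch; the steel value is a typical textbook figure, included purely for comparison):

```python
# 1 erg = 1e-7 J and 1 cm^3 = 1e-6 m^3, so 1 erg/cm^3 = 0.1 J/m^3 = 0.1 Pa
ERG_PER_CM3_IN_PA = 1e-7 / 1e-6   # = 0.1

pasta_shear = 1e30 * ERG_PER_CM3_IN_PA   # quoted value in SI: ~1e29 Pa
steel_shear = 7.9e10                     # typical shear modulus of steel, ~79 GPa

print(f"pasta: {pasta_shear:.0e} Pa, steel: {steel_shear:.0e} Pa")
```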
{ "domain": "astronomy.stackexchange", "id": 7105, "tags": "astrophysics, neutron-star" }
Class which updates an object
Question: I have a class which updates an object. The class takes a String id in its constructor and returns the appropriate class based on the id. I think these two methods should be separated into their own classes as returning an object based on a String id will probably have uses elsewhere in the code base. I've considered adding the functionality which returns the object to update into its own static method: ObjectToUpdate = Utils.getObjectToUpdate(id) Is there a better way or a design pattern? UpdateContestantObj c = new UpdateContestantObj(childWithParentRatingsObject.getCoupleId()); c.getContestantObj().setSortVal(singleScoringFM.getScore()); c.persistContestant(); public class UpdateContestantObj { private String id; private ContestantObj contestantToSave = null; public ContestantObj getContestantObj(){ return this.contestantToSave; } public UpdateContestantObj(String id){ this.id = id; SimpleSortingVector simpleSortingVector = (SimpleSortingVector)FutureContent.future.getContent(Constants.CONTESTANTS_DATA); Enumeration contestantsEnumeration = simpleSortingVector.elements(); while(contestantsEnumeration.hasMoreElements()){ final ContestantButtonField contestantButtonField = (ContestantButtonField)contestantsEnumeration.nextElement(); if(contestantButtonField.getContestant().getId().equalsIgnoreCase(this.id)){ contestantToSave = contestantButtonField.getContestant(); } } } public void persistContestant(){ contestantsStore.setContents(contestantToSave); contestantsStore.commit(); } } Answer: There's no super-design-pattern that would be beneficial here, but you could definitely benefit from some basic OO concepts like encapsulation and single responsibility. 
For starters, rather than making a bunch of utility methods, it's better design to give each class a static factory method like: // inside the ContestantObj class public static ContestantObj fromID(String id) { // code to retrieve the ContestantObj } This way, each class is responsible for creating itself, which it should be (outside of cases where a builder or abstract factory is required). Your UpdateContestantObj constructor should take in a ContestantObj. It shouldn't care about how the ContestantObj is created. A "Contestant Updater" should be a machine that takes in Contestants and outputs Contestants that have been updated. Also, the naming convention of ContestantObj should simply be Contestant. We know it's an object.
{ "domain": "codereview.stackexchange", "id": 892, "tags": "java, design-patterns" }
Do some particles get ionised to 2+ or higher in a mass spectrometer or do they only get ionised to 1+?
Question: In a mass spectrometer the sample is vapourised, then ionised, then accelerated. My textbook says that the high-energy electrons from the electron gun knock electrons off the sample molecules and hence ionise the particles to 1+. But do some particles get 2 or more electrons knocked out, or is it always just one? I suppose once it has been ionised to 1+ it gets accelerated through the tube, so it won't hang around for long, but I would think that there is still a small window of time for further ionisations. Answer: In electron impact, you would see mostly +1 charged ions. Electron impact is a "harsh" way of ionization. However, a +1 charge is not a universal rule in MS. The state of ionization depends on the ionization source. Later in your courses you might encounter electrospray ionization. There you might have +14 charges, etc.
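The practical consequence of multiple charging shows up in the spectrum: a mass analyser separates ions by mass-to-charge ratio m/z, so a doubly charged ion of the same molecule appears at half the m/z of the singly charged one. A quick sketch (the molecular mass is illustrative, and the tiny mass of the removed electrons is neglected):

```python
def mz(mass_da, charge):
    """Mass-to-charge ratio in m/z units; electron mass neglected."""
    return mass_da / charge

molecule = 500.0  # Da, an illustrative molecular mass

print(mz(molecule, 1))  # 500.0 -> where the 1+ ion appears
print(mz(molecule, 2))  # 250.0 -> where a 2+ ion of the SAME molecule appears
```

This is also why highly charged ions from electrospray (e.g. +14) let large proteins be measured on analysers with a limited m/z range.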
{ "domain": "chemistry.stackexchange", "id": 15404, "tags": "ions, mass-spectrometry" }