Proving $a^{2+n}a^{n}$ is regular using the Pumping Lemma
Question: So I'm trying to practice (and learn, clearly) use of the Pumping Lemma, and this is my current language. It's clearly regular: it can be simplified, $a^{2+n}a^{n} = a^{2+2n}$, and from there it's fairly easy to derive the regular expression $aa(aa)^*$. However, I'm trying to prove this explicitly through the Pumping Lemma, but I'm clearly doing something wrong. This is my process (assuming language $A$, string $s$ in $A$, pumping length $p$): First take $s = aaaa$ for $n = 1$ (simple case). Take $p = 2$, and so $xy$ must equal $aa$ to fit $|xy| \leq p$ and $|y| \gt 0$, giving $x = a, y = a, z = aa$. Finally insert into the equation $xy^{i}z \in A$ with a trivial value of $i \geq 1$, say $i = 2$: $xy^{2}z$. This gives a result of $aaaaa$. And that's the problem: $aaaaa$ is not a member of the language $a^{2+n}a^{n}$. Is there a step I'm missing here or a wrong assumption? Thanks. And apologies for yet another pumping-lemma question. Answer: Noooooo! Usage of the pumping lemma to prove regularity is remarkably rare, and certainly won't happen with a direct application. Let's review the formal statement, with particular emphasis on the quantifiers: $$\forall \ L \subseteq \Sigma^* ( \ L \ \text{is regular} \Rightarrow \exists \ p \ge 1 \ \ \forall \ w \dots \ \ \exists \ x, y, z \dots \ \ \forall \ i \dots \ )$$ Two notable observations are that being regular is part of the hypotheses and that nothing is said about the converse, which is in fact false (a number of classical counterexamples can be found in the literature). In other words, even if you managed to somehow prove the $\exists \forall \exists \forall$ block, it still wouldn't say anything about the regularity of $L$. The way the pumping lemma is normally used is in its contrapositive form. You may recall from basic logic that $(p \rightarrow q) \rightarrow (\neg q \rightarrow \neg p)$.
Using that principle, we can derive the following logical consequence: $$\forall \ L \subseteq \Sigma^* ( \ \forall \ p \ge 1 \ \ \exists \ w \dots \ \ \forall \ x, y, z \dots \ \ \exists \ i \dots \Rightarrow L \ \text{is }\textbf{not }\text{regular} \ )$$ Notice how this is a tool to prove non-regularity, and that, even there, nothing can be said about the converse. I'm afraid your attempted proof is entirely misguided.
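As an aside, the regularity claim itself is easiest to confirm constructively rather than via pumping: membership in the simplified form $aa(aa)^*$ can be tested directly. A minimal Python sketch (my own illustration, not part of the original exchange):

```python
import re

# The language a^(2+n) a^n collapses to a^(2+2n): strings of a's whose
# length is even and at least 2, i.e. exactly the regular expression aa(aa)*.
def in_language(s: str) -> bool:
    return re.fullmatch(r"aa(aa)*", s) is not None

print(in_language("aa"))     # n = 0 -> True
print(in_language("aaaa"))   # n = 1 -> True
print(in_language("aaaaa"))  # odd length -> False
```

Exhibiting such a regular expression (or the equivalent three-state DFA) is already a complete proof of regularity; no pumping argument is needed.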
{ "domain": "cs.stackexchange", "id": 8124, "tags": "regular-languages, pumping-lemma" }
What is the state of aggregation of iodine formed during electrolysis of molten nickel(II) iodide?
Question: Assume the electrolysis of molten $\ce{NiI2(l)}$ (inert electrodes) in a cell. The cell with a heating element is in the room; the room is at $\pu{25 °C}$ and $\pu{101.3 kPa}.$ The half-reaction for oxidation at the anode would be $$\ce{2 I^-(l) -> I2(\text{state}) + 2 e^-},$$ whereas the global reaction will be $$\ce{NiI2(l) -> Ni(s) + I2(\text{state})}.$$ Given that iodine is solid at room temperature, should it be marked as a solid in the equations? However, given that the $\ce{NiI2}$ is liquid, I'd think that the newly synthesized iodine would be a gas for a while? Also, what observations could one make during the electrolysis? I.e. will the iodine accumulate on the positive electrode as a solid, or will it exit the cell in the form of a gas? Also, is the reduction potential for $$\ce{I2(s) + 2 e^- -> 2 I^-(\color{red}{aq})}$$ the same as for $$\ce{I2(s) + 2 e^- -> 2 I^-(\color{red}{l}) }?$$ Answer: Basically, during the electrolysis of molten nickel(II) iodide ($\ce{NiI2}$), $\ce{I^- (l)}$ ions would be oxidized to $\ce{I2 (g)}$ at the anode, and $\ce{Ni^2+ (l)}$ ions would be reduced to $\ce{Ni (s)}$ at the cathode. My state assignments were based on the following facts, assuming the temperature of the electrolytic cell is kept below $\pu{900 ^\circ C}$: The melting point of $\ce{NiI2}$ is about $\pu{797 ^\circ C}$, so the electrolytic cell must be kept above $\pu{800 ^\circ C}$. Thus the following equilibrium will be maintained during electrolysis: $$\ce{NiI2 (l) <=> Ni^2+ (l) + 2I- (l)}$$ The boiling point of $\ce{I2}$ is about $\pu{184 ^\circ C}$, hence $\ce{I2}$ would be released as a gas at the anode. The melting point of $\ce{Ni}$ metal is about $\pu{1455 ^\circ C}$, hence $\ce{Ni}$ would be deposited as a solid at the cathode. If you want to keep the nickel metal in liquid form, your electrolytic cell must be kept above $\pu{1500 ^\circ C}$. I think I have given you enough information to figure out the rest.
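The state assignments above amount to comparing the cell temperature against each species' melting and boiling points. A small illustrative sketch (my own helper, with approximate literature values in °C; it ignores sublimation and decomposition):

```python
def state_at(temp_c: float, melting_c: float, boiling_c: float) -> str:
    """Rough state of matter at ~1 atm, given melting and boiling points."""
    if temp_c < melting_c:
        return "solid"
    if temp_c < boiling_c:
        return "liquid"
    return "gas"

cell_temp = 850  # °C, inside the assumed 800-900 °C operating window

# I2: m.p. ~114 °C, b.p. ~184 °C -> gas at cell temperature
print(state_at(cell_temp, 114, 184))
# Ni: m.p. ~1455 °C, b.p. ~2913 °C -> solid at cell temperature
print(state_at(cell_temp, 1455, 2913))
```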
{ "domain": "chemistry.stackexchange", "id": 13914, "tags": "inorganic-chemistry, electrochemistry, redox, electrolysis, oxidation-state" }
Is there a wave model that describes absorption of electromagnetic radiation in matter?
Question: I know that there is a particle model that describes the absorption of electromagnetic radiation in matter: a photon with energy E can excite (be absorbed by) an atom if the atom has an energy gap of the same size E. What about a wave model for absorption, or is it only a particle phenomenon? Answer: In the particle model, as you rightly pointed out, absorption is described in terms of photons, quantized packets of electromagnetic energy. When a photon with energy E encounters an atom, it can be absorbed if the atom has an energy gap of the same size E. This process is quantitatively described by Einstein's theory of the photoelectric effect and is a cornerstone of quantum mechanics. Now, let's turn to the wave model. In classical electromagnetism, light is treated as a wave, characterized by its electric and magnetic fields. When an electromagnetic wave encounters a material, its oscillating electric field can interact with the charged particles (such as electrons) within the material. This interaction depends on the frequency of the electromagnetic wave and the natural frequencies of the electrons in the material. If the frequency of the incoming electromagnetic wave matches a natural frequency of oscillation of electrons in the material, resonance occurs. At resonance, electrons absorb energy from the wave efficiently. This absorption leads to a transfer of energy from the electromagnetic wave to the material, resulting in the wave's amplitude decreasing as it passes through the material, a phenomenon we interpret as absorption. So, in the wave model, absorption is not about discrete energy packets being transferred, but rather about the resonant transfer of energy from the wave to the material at specific frequencies. This model is particularly useful for explaining phenomena like why certain materials are transparent at some wavelengths but opaque at others.
In conclusion, the absorption of electromagnetic radiation in matter can indeed be described using both the particle and wave models. While the particle model (photon absorption) provides a more intuitive explanation for discrete energy exchanges, the wave model (resonant energy transfer) offers insight into the frequency-dependent nature of absorption. Both perspectives are complementary, reflecting the dual nature of light as both a wave and a particle, a fundamental concept in modern physics.
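The classical resonance picture above is made quantitative by the Lorentz oscillator model, in which the power a bound electron draws from the driving field peaks at its natural frequency. A rough sketch (all parameter values are arbitrary illustrations, not tied to any real material):

```python
def absorbed_power(omega: float, omega0: float, gamma: float) -> float:
    """Cycle-averaged power a damped, driven oscillator draws from the field.

    omega  - driving (light) frequency
    omega0 - natural frequency of the bound electron
    gamma  - damping rate
    Arbitrary units; only the shape of the profile matters here.
    """
    return (gamma * omega ** 2) / ((omega0 ** 2 - omega ** 2) ** 2 + (gamma * omega) ** 2)

omega0, gamma = 5.0, 0.5  # hypothetical values for illustration
off_low = absorbed_power(1.0, omega0, gamma)
on_res = absorbed_power(5.0, omega0, gamma)
off_high = absorbed_power(9.0, omega0, gamma)
# The material absorbs strongly only near resonance:
print(on_res > off_low and on_res > off_high)  # True
```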
{ "domain": "physics.stackexchange", "id": 99674, "tags": "wave-particle-duality" }
Do the ladder operators $a$ and $a^\dagger$ form a complete algebra basis?
Question: It is easy to construct any operator (in continuous variables) using the set of operators $$\{|\ell\rangle\langle m |\},$$ where $\ell$ and $m$ are integers and the operators are represented in the Fock basis, i.e. any operator $\hat M$ can be written as $$\hat M=\sum_{\ell,m}\alpha_{\ell,m}|\ell\rangle\langle m |$$ where $\alpha_{\ell,m}$ are complex coefficients. My question is, can we do the same thing with the set $$\{a^k (a^\dagger)^\ell\}?$$ Actually, this boils down to a single example which would be sufficient. Can we find coefficients $\alpha_{k,\ell}$ such that $$|0\rangle\langle 0|=\sum_{k,\ell}\alpha_{k,\ell}a^k (a^\dagger)^\ell?$$ (Here $|0\rangle$ is the vacuum and I take $a^0=I$.) Answer: @Accidental reminds you this is a theorem. To actually see it in your terms, use the infinite matrix representation of $a, \quad a^\dagger$ of Messiah's classic QM, vol. 1, Ch. XII, § 5. Specifically, your vacuum projection operator has a 1 in the (1,1) entry and zeros everywhere else. The operator you chose is freaky to represent, but, purely formally, the diagonal operator for $N\equiv a^\dagger a$, $$ |0\rangle\langle 0|=(1+N) (1-N) \frac{2-N}{2} \frac{3-N}{3} \frac{4-N}{4} ... $$ would do the trick, once anti-normal ordered.
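Because $N$ is diagonal in the Fock basis, the product above acts diagonally, and a truncated version is easy to check numerically. A small pure-Python sketch (my own illustration; it drops the $(1+N)$ factor, which appears to act as the identity on every relevant state):

```python
def truncated_vacuum_projector(dim: int):
    """Diagonal of prod_{k=1}^{dim-1} (1 - N/k) in a Fock space truncated at dim.

    N is diagonal with eigenvalues n = 0, 1, ..., dim-1, so the product is
    diagonal too: entry n is prod_k (1 - n/k), which vanishes once k hits n.
    Only the n = 0 (vacuum) entry survives, reproducing |0><0|.
    """
    diag = []
    for n in range(dim):
        p = 1.0
        for k in range(1, dim):
            p *= 1.0 - n / k
        diag.append(p)
    return diag

# First entry 1, all others 0 (up to the sign of floating-point zero):
print(truncated_vacuum_projector(6))
```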
{ "domain": "physics.stackexchange", "id": 50118, "tags": "quantum-mechanics, quantum-field-theory, operators, hilbert-space, harmonic-oscillator" }
Quaternion to Euler angle convention in TF
Question: In reference to this question, the desired conversion is in the opposite direction. That is, using the tf.transformations.euler_from_quaternion function, taking the result from robot_localization on the /odometry/filtered topic, my attempt is to unravel the quaternion from the ENU convention to the NED convention. The end result should be pitch, yaw, and roll using the aviation (NED) convention. My interpretation is that one must first change back (from ROS ENU) the signs of y and z, followed by "unwinding" the quaternion to euler from 'rzyx' to 'rxyz' per the documentation for the euler_from_quaternion definition and the question response: "q = tf.transformations.quaternion_from_euler(yaw, pitch, roll, 'rzyx')" My question: Is my interpretation correct? Any insight is greatly appreciated. B2256 Originally posted by b2256 on ROS Answers with karma: 162 on 2016-07-17 Post score: 2 Answer: See also this question: http://answers.ros.org/question/50113/transform-quaternion/ The easiest way to convert to yaw-pitch-roll is to take the quaternion and create a Matrix3x3. Let q be the quaternion of the current odom transform:

    tfScalar yaw, pitch, roll;
    tf::Matrix3x3 mat(q);
    mat.getEulerYPR(yaw, pitch, roll);

(Note that getEulerYPR takes its arguments as tfScalar references, so the variables are passed directly rather than by address, and that the tfScalar type is usually typedef'd to double in scalar.h. Also see http://docs.ros.org/api/tf/html/c++/classtf_1_1Matrix3x3.html.) Originally posted by Mark Rose with karma: 1563 on 2016-08-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by antoineniotna on 2018-10-02: Perfect! Thank you so much!
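For reference, the yaw-pitch-roll extraction can also be written out by hand. Below is a plain-Python sketch of the standard intrinsic Z-Y-X (yaw, pitch, roll) formulas; this is my own illustration of the convention, not the tf source, and for NED output you would still apply the ENU-to-NED sign changes discussed in the question:

```python
import math

def quaternion_to_yaw_pitch_roll(x, y, z, w):
    """Intrinsic Z-Y-X (yaw, pitch, roll) angles from a unit quaternion (x, y, z, w)."""
    # yaw (rotation about Z)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    # pitch (rotation about Y); clamp guards against rounding just outside [-1, 1]
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    # roll (rotation about X)
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    return yaw, pitch, roll

# A pure 90-degree rotation about Z should give yaw = pi/2, pitch = roll = 0.
yaw, pitch, roll = quaternion_to_yaw_pitch_roll(
    0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```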
{ "domain": "robotics.stackexchange", "id": 25267, "tags": "navigation, robot-localization, transform" }
Using industrial_robot_client without simple_message
Question: Hi All, I am implementing the Robot Driver specification (link); however, the "reference" JointTrajectoryDownloader implementation assumes you are communicating with a robot that uses the simple_message protocol. My robot does not communicate using simple_message; it communicates using ordinary ROS topics and messages. The usage of the simple_message protocol is very deeply embedded into the JointTrajectoryDownloader and JointTrajectoryInterface classes. What is the best way to use the JointTrajectoryDownloader library on a robot that does not use simple_message for communication? Thanks, Bart Originally posted by bjem85 on ROS Answers with karma: 163 on 2014-09-03 Post score: 1 Original comments Comment by sedwards on 2014-09-04: The link you provided is to the industrial_robot_client wiki...not a specification (is this what you meant)? Comment by Adolfo Rodriguez T on 2014-09-04: If you don't want to use the simple_message protocol, do you really need to use the industrial_robot_client, then? What's the actual problem you'd like to solve: sending joint trajectories to your robot and reading state feedback? Comment by bjem85 on 2014-09-04: Hi Adolfo, I don't strictly need to use it, but apart from the use of simple_message as a communication method it actually quite closely fits my requirements. From my point of view it makes little sense to do effectively a 'copy-and-paste' programming job. Comment by bjem85 on 2014-09-04: sedwards: link has been updated Comment by bjem85 on 2014-09-04: Conversely, I did investigate writing a receiver for simple_message, but I could not find any documentation or tutorials on how to do so. Is this a gap in the documentation? Answer: The specification you reference simply outlines the ROS API for ROS-Industrial. The industrial_robot_client library is a set of nodes that adheres to that spec and provides a bridge to robot controllers via simple_message.
It was never meant to bridge from one ROS API to another, since a robot that can already speak ROS messages should be simple to modify/adapt. I do not think you can easily utilize the industrial_robot_client nodes to bridge between ROS APIs, nor would I recommend you do that. If you would still like to create a receiver for simple_message, then you can find examples in the vendor packages. These files are written in a controller-specific language, RAPID, but they are readable enough to get the idea. Originally posted by sedwards with karma: 1601 on 2014-09-04 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by bjem85 on 2014-09-08: Thanks for that info, it looks like I will stick to writing my own interface with MoveIt. Comment by Adolfo Rodriguez T on 2014-09-09: Consider taking a look at the ros_control project. It offers controllers compatible with MoveIt!, and a Gazebo backend. You would have to expose your hardware to ros_control yourself, but the overhead is much lower than doing it all from scratch. Comment by bjem85 on 2014-09-10: Thanks Adolfo. I am presently pursuing this line of development.
{ "domain": "robotics.stackexchange", "id": 19279, "tags": "ros" }
ImportError when running catkin_make_isolated
Question: I'm following the source installation on Debian Stretch on an armel platform. When I come to the final step of running catkin_make_isolated, I get the following error:

    $ ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
    Traceback (most recent call last):
      File "./src/catkin/bin/catkin_make_isolated", line 12, in <module>
        from catkin.builder import build_workspace_isolated
      File "./src/catkin/bin/../python/catkin/builder.py", line 66, in <module>
        from catkin_pkg.terminal_color import ansi, disable_ANSI_colors, fmt, sanitize
    ImportError: No module named terminal_color

It seems this is due to a version mismatch, but I'm not sure where exactly. I'm running this in a Dockerfile, so it should be very reproducible:

    FROM ev3dev/ev3dev-stretch-ev3-generic
    RUN echo "robot ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
    RUN apt-get install --yes --no-install-recommends python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential
    RUN rosdep init
    USER robot
    RUN rosdep update
    RUN mkdir /home/robot/ros_catkin_ws
    WORKDIR /home/robot/ros_catkin_ws
    RUN rosinstall_generator robot --rosdistro melodic --deps --exclude roslisp --tar > melodic-robot.rosinstall
    RUN wstool init -j8 src melodic-robot.rosinstall
    RUN rosdep install --os=debian:stretch --from-paths src --ignore-src --rosdistro melodic -y
    RUN ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release

Originally posted by pepijndevos on ROS Answers with karma: 57 on 2018-06-14 Post score: 1 Original comments Comment by gvdhoorn on 2018-06-14: I can't check (as I don't have an arm board to run this on), but check that python does not point to python3 in ev3dev/ev3dev-stretch-ev3-generic. Afaik all ROS Python pkgs are released for Python 2. If terminal_color has been installed for Python 3, Python 2 will not find it. Comment by pepijndevos on 2018-06-14: Debian gives me a really old version of catkin_pkg.
I did a pip install -U catkin_pkg and that solved that particular issue. But going full pip instead of deb got me issues with roslisp, which tries to install packages that don't exist. How can I tell it to exclude that? Answer: "Debian gives me a really old version of catkin_pkg." Ah, I think I know what is going on. You're using Debian upstream packages. Please read the page I linked. I'm not sure, but mixing these might not be a good idea. Originally posted by gvdhoorn with karma: 86574 on 2018-06-15 This answer was ACCEPTED on the original site Post score: 0
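A quick way to diagnose this kind of mismatch is to ask the interpreter which copy of a module it would actually import; a generic sketch (the module names are just examples, and `catkin_pkg` will report None on a machine without ROS installed):

```python
import importlib.util
import sys

def module_origin(name: str):
    """Return the file a top-level module would be loaded from, or None if not importable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

print(sys.executable)               # which interpreter 'python' resolves to
print(module_origin("json"))        # a stdlib module, always present
print(module_origin("catkin_pkg"))  # None unless catkin_pkg is on this interpreter's path
```

Running this under both `python` and `python3` (or inside and outside the Docker image) shows immediately whether terminal_color's parent package is being resolved from the old Debian copy, a pip-installed copy, or not at all.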
{ "domain": "robotics.stackexchange", "id": 31018, "tags": "ros, ros-melodic, debian" }
Are galaxies moving away from us faster than before?
Question: Are all galaxies moving away from us at constant speed, even those that may be moving in our direction, as space is being formed? How does nothingness appear to push matter? A black hole sucks as space is compressing (being stretched without stopping) or compressed (like how our Sun pulls all the planets); is the opposite phenomenon what shows the universe is expanding? Answer: The recessional velocity $v$ of an object depends on two things: firstly on how far away the object is in terms of proper distance $D$, and secondly on the rate of the Universe's expansion as a function of cosmological time $t$, which is best expressed as the Hubble parameter $H(t)$. Specifically: $$v = D \times H(t)$$ It's worth noting that this equation is slightly vacuous, as it is merely the definition of recessional velocity, which is not something that can be directly measured. As recessional velocity depends not just on a function of $t$, but also on $D$, the question as to whether objects are receding from us faster than ever before could be answered in several ways. Before I look at the different ways we could answer your question, I will note a few things. Firstly, the definition of the Hubble parameter is: $$H(t) = \frac{\dot{a}(t)}{a(t)}$$ where $a(t)$ is the scale factor, which describes how the scale of the Universe changes with $t$, and $\dot{a}(t)$ is the first derivative of the scale factor with respect to $t$. Due to cosmological observations, the Universe is said to contain dark energy, which causes the Universe's expansion to accelerate. What is meant by this is that at the current time $\ddot{a}(t) > 0$, where $\ddot{a}(t)$ is the second derivative of the scale factor with respect to time. The first way we could look at your question is we could ask whether galaxies currently at a distance $D_0$ are receding faster than other galaxies that were previously at distance $D_0$.
From the definition of accelerating expansion and the Hubble parameter, we can see that accelerating expansion does not imply that the answer to this question is "yes"; in fact, if we assume dark energy takes the form of a cosmological constant (ignoring cosmic inflation) and delve into the dynamics of the Universe, we find that galaxies currently at $D_0$ must be receding from us slower than the galaxies that were previously at $D_0$ were receding when they were at $D_0$. So in this particular sense the Universe's expansion is slowing down, even though we usually describe it as accelerated. The second way we could answer your question is to ask whether the recessional velocity of any given galaxy is larger now than it has ever been in the past. The answer here is more subtle: accelerated expansion does imply that the recessional velocity of a given galaxy increases with time now, but the Universe's rate of expansion in previous epochs was decelerating. However, again taking dark energy to be a cosmological constant, we see that galaxies achieve their highest recessional velocities twice: firstly at the Big Bang, and secondly in the infinite future. So the answer to this question is that galaxies are not currently receding from us faster than they have been at all previous times. Recession velocity is different from peculiar velocity (i.e. the local velocity with respect to the CMB). We could add the two to find the 'real velocity', but as I've noted, recession velocity doesn't have a direct physical meaning, so what this 'real velocity' actually means is not straightforward. Expansion is homogeneous, whereas the vacuum around a black hole is not homogeneous, so in this sense the 'sucking' of a black hole is not the opposite of expansion.
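Plugging numbers into the definitional relation $v = D \times H(t)$ is straightforward; a small sketch using the approximate present-day value $H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$:

```python
H0 = 70.0  # km/s per Mpc, approximate present-day Hubble constant

def recession_velocity(distance_mpc: float, hubble: float = H0) -> float:
    """Definitional recession velocity v = D * H, in km/s."""
    return distance_mpc * hubble

print(recession_velocity(100.0))  # 7000.0 km/s for a galaxy 100 Mpc away
# Beyond roughly c / H0 (~4300 Mpc) the nominal velocity exceeds the speed
# of light, which is allowed precisely because recession velocity is a
# coordinate effect, not a directly measurable local velocity.
print(recession_velocity(5000.0) > 299792.458)  # True
```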
{ "domain": "astronomy.stackexchange", "id": 803, "tags": "galaxy, universe, cosmology, expansion" }
Center of Buoyancy and Its Application
Question: For complete dummies when it comes to buoyancy: what is the center of buoyancy, and how is it used in problem solving? Answer: The center of buoyancy is the point in a submerged solid object through which the resultant of all pressure forces acts, directed opposite to gravity; equivalently, it is the centroid of the volume of displaced liquid. For a homogeneous solid object, the center of buoyancy coincides with the center of gravity. But for solid objects where mass is distributed unevenly, the center of gravity and center of buoyancy may differ. In this latter case the separation of centers can create a moment (torque) that tends to rotate the object until the center of buoyancy is directly above the center of gravity (a restoring moment). The concept of the center of buoyancy, as well as the center of gravity, is important for naval architects and engineers to understand in order to provide stable operation of both surface vessels and submersibles. An interesting fact in surface vessel design is that, for an upright ship in the water, the center of buoyancy is more often designed to occur below, not above, the vessel's center of gravity. This is to allow more maneuverability in the ship. As a ship rolls in one direction or another, the change in the immersed water plane shifts the center of buoyancy, creating an upward restoring moment. Designing the center of gravity below the center of buoyancy leads to what's called a bottom-heavy vessel, and these are difficult to maneuver with forward velocity. Barges are bottom-heavy vessels.
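The restoring moment mentioned above is just the buoyant force times the horizontal offset between the two centers; a small sketch with made-up numbers:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.81            # m/s^2

def buoyant_force(displaced_volume_m3: float) -> float:
    """Archimedes' principle: the upward force equals the weight of displaced fluid."""
    return RHO_WATER * G * displaced_volume_m3  # newtons

def restoring_moment(displaced_volume_m3: float, horizontal_offset_m: float) -> float:
    """Moment created when the center of buoyancy is horizontally offset from the center of gravity."""
    return buoyant_force(displaced_volume_m3) * horizontal_offset_m  # newton-metres

# A hull displacing 2 m^3 with a 0.1 m horizontal CB-CG offset:
print(buoyant_force(2.0))          # ~19620 N
print(restoring_moment(2.0, 0.1))  # ~1962 N*m
```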
{ "domain": "physics.stackexchange", "id": 36268, "tags": "buoyancy" }
What is meant by optimal estimator and how to determine optimality?
Question: Consider an estimation problem of estimating a scalar deterministic parameter $a$ from observations $y$ which are corrupted by a random variable $w$. The observations are $y[n] = a + w[n]$. The least squares estimator can be used to estimate $a$ when $w$ is white Gaussian noise. This estimation method is known to be optimal. Why? What if $w$ is from a Poisson distribution or some other non-Gaussian distribution; would the estimator for $a$ be better or worse than the one found assuming $w$ is a Gaussian r.v.? Answer: The least squares estimator is just what it literally says: the estimator which brings the mean square error to a minimum. In the case of Gaussian white noise it has a simple, analytic solution, and it coincides with the maximum likelihood estimator, which is why it is called optimal in that setting. I recommend you derive it yourself; if you're comfortable with matrix calculus it is not that hard. You can generate tons of estimators by defining different cost functions. Other popular cost functions are the l1-norm ($\sum |x_i - y_i|$) and the likelihood ($P(y|x)$). It's hard to define "better" if the setting isn't the same. For example, is a very-high-variance Poisson process the same as a Gaussian process of the same variance? I believe it is not; in addition, one is nonnegative-integer-valued while the other is real-valued (and you can use that to your advantage).
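For the model $y[n] = a + w[n]$, minimizing $\sum_n (y[n] - a)^2$ over $a$ gives the sample mean as the least squares estimate, whatever the noise distribution. A quick sketch (the zero-mean shifted exponential noise is my own stand-in for a generic non-Gaussian case):

```python
import random
import statistics

def least_squares_estimate(samples):
    """argmin_a sum_n (y[n] - a)^2 is the sample mean."""
    return statistics.fmean(samples)

random.seed(0)
a = 3.0
gaussian = [a + random.gauss(0.0, 1.0) for _ in range(10_000)]
# Zero-mean, non-Gaussian noise: exponential(1) shifted by its mean of 1.
non_gaussian = [a + (random.expovariate(1.0) - 1.0) for _ in range(10_000)]

# The sample mean recovers a in both cases; only its statistical efficiency
# relative to other estimators changes with the noise distribution.
print(abs(least_squares_estimate(gaussian) - a) < 0.1)      # True
print(abs(least_squares_estimate(non_gaussian) - a) < 0.1)  # True
```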
{ "domain": "dsp.stackexchange", "id": 5406, "tags": "estimation, parameter-estimation" }
Ticks hitching a ride on a fly?
Question: Recently, my brother has been bothered by some particularly aggressive houseflies. According to him, when these flies are squished, there is often a clacking noise, and then small arachnids come crawling off or out of the body (it's unclear which). He recently managed to take pictures (see hereunder) where two of the things are in view, and I tentatively recognized them as ticks. Apparently the clacking noise is produced when the arachnids are squished. Are these actually ticks? And is this normal behaviour? I haven't managed to find anything about ticks hitching a ride on flies, but I don't have any other ideas. If it helps, he lives in Brittany, in western France. Edit: For scale, the fly in the pictures was a little under 1 cm long. Answer: Without a close-up examination of the subjects, which would require microscope images or a decent macro image, it is impossible to identify these creatures with any certainty to the level of species. With that in mind, I will give an attempt at an answer. These creatures are not ticks; they are almost certainly mites. Mites are of the subclass Acari within the Arachnida (spiders and scorpions also belong to this class). Ticks also belong in the Acari, so they are closely related to mites and quite similar morphologically. Mites are quite diverse, but generally less than 1 mm (0.039 inches) in length, and are very varied in what they feed on, ranging from being decomposers in the soil, to grazing algae in water, to feeding on plants and parasitising animals. As humans we have a particularly odd one (to me at least) called Demodex that lives in hair follicles, eating the sebaceous (oily) secretions and dead skin cells from around the hair. For insects I believe (with no certainty, experts please weigh in) that the majority of mites are parasitic, feeding on the insect's hemolymph (the blood equivalent for insects) by puncturing the seals at the joints in the exoskeleton.
Incidentally, you can see some mites feeding on the legs of a crane-fly in this SE Biology post. In the case of the OP we are likely looking at a parasitic mite of the common house fly (Musca domestica), which has left the now-dead insect as its food source is no longer supplying food. Flies do have a commonly associated mite species, Macrocheles muscaedomesticae, which feeds on the eggs, and to some extent the larvae, of the house fly and several other species of fly, and may also be associated with a range of other insects (see the Biology and Ecology sections of its species profile). The mite is found world-wide and is believed to be distributed on the adult flies. It is uncertain whether the mite actually attaches to the adult fly while it is being carried. There are several species in the Macrocheles genus, which all parasitise in a similar manner to M. muscaedomesticae. There are also a few other species of mites that seem to parasitise house flies, including Poecilochirus species, but these are less species-specific as far as I can tell.
{ "domain": "biology.stackexchange", "id": 11424, "tags": "entomology, parasitism" }
How to speed up the code for LeetCode "Container with most water" task?
Question: I'm trying to solve the LeetCode question where you need to find out the area of the container with the most water. I have created a solution that seems to work in my testing, but it fails for being too slow when I try to submit it on LeetCode. My idea is that I create a dictionary of (x, y) tuples from the input list, and then for every item I need to find the maximum distance to any of the other lines that are equal or taller than it; from there I can calculate the maximum area possible for this line. How else can I approach this to get it to run faster? (I can't submit the solution successfully, so I can't see examples of answers by other users.)

    def max_area(height) -> int:
        areas = []
        coords = {x: (x, y) for x, y in enumerate(height)}
        for x in coords:
            higher = [k for k in coords if coords[k][1] >= coords[x][1]]
            area = max(abs(coords[j][0] - coords[x][0]) for j in higher) * coords[x][1]
            areas.append(area)
        return max(areas)

Answer: Your question made me want to give it a shot, too. The solution ended up pretty much like the pseudo-code suggested by @Marc, and Python is of course pretty close in readability anyway. The below code passes on the site and runs (there is some deviation between runs) faster than c. 95% and with less memory usage than c. 75% of solutions. The code contains comments at the relevant positions. There are two extra optimizations, also explained there.

    def max_area(height: list[int]) -> int:
        n = len(height) - 1
        l = 0  # Index for left bar
        r = n  # Index for right bar
        max_area = 0
        while True:
            # Give readable names:
            left = height[l]
            right = height[r]
            # Current area, constrained by lower bar:
            area = min(left, right) * (r - l)
            if area > max_area:
                # Keep tabs on maximum, the task doesn't ask for any
                # more details than that.
                max_area = area
            # Move the smaller bar further inwards towards the center, expressed
            # as moving left, where *not* moving left implies moving right.
            # The smaller bar constrains the area, and we hope to get to a longer
            # one by moving inwards, at which point the other bar forms the
            # constraint, so the entire thing reverses.
            move_left = left < right
            # Instead of only moving the smaller bar inward by one step, there's two
            # extra steps here:
            # 1. While moving the smaller bar inward, skip all bars that are
            #    *even smaller*; those are definitely not the target, since both
            #    their height and horizontal delta will be smaller.
            # 2. While skipping all smaller bars, we might hit the other bar:
            #    there is a 'valley' or at least nothing higher in between.
            #    Any more moving inwards would be a wasted effort, no matter the
            #    direction (from left or right). We can return the current
            #    max. area.
            #
            # In the best case scenario, this may skip us right to the solution,
            # e.g. for `[10, 1, 1, 1, 1, 1, 10]`: only one outer loop is necessary.
            #
            # Both loops look very similar, maybe there's room for some indirection
            # here, although a function call would probably mess with the raw
            # performance.
            if move_left:
                while height[l] <= left:
                    if l == r:
                        return max_area
                    l += 1
            else:
                while height[r] <= right:
                    if r == l:
                        return max_area
                    r -= 1

    # Examples from the site
    print(max_area([1, 8, 6, 2, 5, 4, 8, 3, 7]) == 49)
    print(max_area([2, 3, 10, 5, 7, 8, 9]) == 36)
    print(max_area([1, 3, 2, 5, 25, 24, 5]) == 24)

As far as your code goes: The mapping coords = {x: (x, y) for x, y in enumerate(height)} seems pretty odd. You're kind of mapping x to itself. I would say for the solution it's much simpler to not treat x as x in the "2D math plot" sense, but just as i in the array index sense. This saves us having to even declare x; we can just iterate using i. You use max twice, which is a linear search operation each time. This is needlessly expensive, but probably not the bottleneck. Any algorithm based on finding e.g. all distances to every other item for every item has explosive complexity. This is likely the bottleneck.
{ "domain": "codereview.stackexchange", "id": 40751, "tags": "python, algorithm, time-limit-exceeded" }
program options from command line initialize [v2 - after CR]
Question: After getting a CR from @pacmaninbw and @ALX23z here, I want to share my new code, and to ask for better ways (which always exist) to improve the code, even with new libraries. The only thing that important to me, is the way of receiving the parameters have to be the command line [I am using Linux OS, so it's highly common to use command line params]. So, to separate the main to smaller functions, alongside avoiding messy functions' parameters handling, I created a class to handle the whole initialize part of the cmd params: Edit: I changed the flags implementation so the user won't need to set the flag value (true/false). If the flag exists the value is true, otherwise it'll be set to false. The project in GitHub. Relative revision at post creation time in GitHub. Please note the updated code after @pacmaninbw CR: program options from command line initialize [v3 - after CR] cmd_options.h #ifndef COMPUTERMONITORINGSTATISTICSPARSER_CMD_OPTIONS_H #define COMPUTERMONITORINGSTATISTICSPARSER_CMD_OPTIONS_H #include <iostream> #include <boost/program_options.hpp> struct cmd_options_data { explicit cmd_options_data(const std::string &options_description) : visible_options(options_description) {} bool help = false; // Show help message bool verbose = false; // Display login/logout details bool anomaly_detection = false; // Show anomalies details if found bool analyze_activity = true; // Analyze login/logout total/summarize times std::string week_start_day; std::string log_file_path; std::string normal_login_word; boost::program_options::options_description visible_options; boost::program_options::variables_map variables_map; }; class cmd_options { public: explicit cmd_options(int ac, char* av[]); cmd_options_data get_data(); private: boost::program_options::options_description init_cmd_po_generic_options(); boost::program_options::options_description init_cmd_po_calender_options(); boost::program_options::options_description init_cmd_po_logger_options(); 
    boost::program_options::options_description init_cmd_po_hidden_options();
    boost::program_options::options_description init_cmd_po_mode_options();
    boost::program_options::positional_options_description init_cmd_positional_options();

    boost::program_options::options_description group_cmd_options() {
        return boost::program_options::options_description();
    }

    template<class... Args>
    boost::program_options::options_description group_cmd_options(const boost::program_options::options_description &option, Args&... options);

    void apply_program_options(int ac, char* av[]);
    void update_flags();

    cmd_options_data _options_data;
    boost::program_options::options_description full_options;
    boost::program_options::positional_options_description positional_options;
};

template<class... Args>
boost::program_options::options_description cmd_options::group_cmd_options(const boost::program_options::options_description &option, Args&... options) {
    boost::program_options::options_description group;
    group.add(option);
    group.add(group_cmd_options(options...));
    return group;
}

#endif //COMPUTERMONITORINGSTATISTICSPARSER_CMD_OPTIONS_H

cmd_options.cpp

#include "cmd_options.h"

namespace boost_cmd_po = boost::program_options;

cmd_options::cmd_options(int ac, char* av[])
        : _options_data("Usage: program [options] [path/]logger_filename") {
    auto generic_options = init_cmd_po_generic_options();
    auto calender_options = init_cmd_po_calender_options();
    auto logger_options = init_cmd_po_logger_options();
    auto mode_options = init_cmd_po_mode_options();
    auto hidden_options = init_cmd_po_hidden_options();

    _options_data.visible_options.add(
            group_cmd_options(
                    generic_options,
                    calender_options,
                    logger_options,
                    mode_options
            )
    );

    full_options.add(
            group_cmd_options(
                    generic_options,
                    calender_options,
                    logger_options,
                    mode_options,
                    hidden_options
            )
    );

    positional_options = init_cmd_positional_options();

    apply_program_options(ac, av);
    update_flags();
}

boost_cmd_po::options_description
cmd_options::init_cmd_po_generic_options() {
    auto group = boost_cmd_po::options_description("Generic options");
    group.add_options()
            ("help,h", "produce help message")
            //("verbose", boost_cmd_po::value<bool>(&_options_data.verbose)->default_value(false), "Show detailed times of login.");
            ("verbose", "Show detailed times of login.");
    return group;
}

boost_cmd_po::options_description cmd_options::init_cmd_po_calender_options() {
    auto group = boost_cmd_po::options_description("Calender options");
    group.add_options()
            ("week-start-day,d", boost_cmd_po::value<std::string>(&_options_data.week_start_day)->default_value("Monday"), "Week starting day ('--week-start-day help' for a list).");
    return group;
}

boost_cmd_po::options_description cmd_options::init_cmd_po_logger_options() {
    auto group = boost_cmd_po::options_description("Logger options");
    group.add_options();
    return group;
}

boost_cmd_po::options_description cmd_options::init_cmd_po_hidden_options() {
    auto group = boost_cmd_po::options_description("Logger options");
    group.add_options()
            ("log-path,l", boost_cmd_po::value<std::string>(&_options_data.log_file_path)->default_value("/home/sherlock/message_from_computer"), "Path to login/logout logger.");
    return group;
}

boost_cmd_po::options_description cmd_options::init_cmd_po_mode_options() {
    auto group = boost_cmd_po::options_description("Mode options");
    group.add_options()
            //("analyze-log", boost_cmd_po::value<bool>(&_options_data.analyze_activity)->default_value(true), "Analyze activity - show activity times and summarise activity.")
            ("no-analyze", "Disable activity analyzing - don't show activity times/summarise.")
            //("anomaly-detection", boost_cmd_po::value<bool>(&_options_data.anomaly_detection)->default_value(false), "Check for anomalies in logger.")
            ("anomaly-detection", "Check for anomalies in logger.")
            ("normal-login-word", boost_cmd_po::value<std::string>(&_options_data.normal_login_word)->default_value("login"), "For anomaly detector- word that should symbol a login line in login/logout logger (after '+' sign).");
    return group;
}

boost_cmd_po::positional_options_description cmd_options::init_cmd_positional_options() {
    boost_cmd_po::positional_options_description pd;
    pd.add("log-path", -1);
    return pd;
}

void cmd_options::apply_program_options(int ac, char **av) {
    boost_cmd_po::store(
            boost_cmd_po::command_line_parser(ac, av)
                    .options(full_options)
                    .positional(positional_options)
                    .run(),
            _options_data.variables_map);
    boost_cmd_po::notify(_options_data.variables_map);
}

void cmd_options::update_flags() {
    _options_data.help = (bool) _options_data.variables_map.count("help");
    _options_data.verbose = (bool) _options_data.variables_map.count("verbose");
    _options_data.analyze_activity = !(bool) _options_data.variables_map.count("no-analyze");
    _options_data.anomaly_detection = (bool) _options_data.variables_map.count("anomaly-detection");
}

cmd_options_data cmd_options::get_data() {
    return _options_data;
}

main.cpp

#include <iostream>
#include <boost/filesystem.hpp>
#include <boost/date_time.hpp>
#include "core/day.h"
#include "core/log_handler.h"
#include "utilities/design_text.h"
#include "cmd_options.h"

int main(int ac, char* av[]) {
    cmd_options command_line_options(ac, av);
    cmd_options_data cmd_data = command_line_options.get_data();

    /// --help / -h option handler
    if (cmd_data.help) {
        std::cout << cmd_data.visible_options << "\n";
        return EXIT_SUCCESS;
    }

    /// --log-path / -l option handler
    if (!boost::filesystem::exists(cmd_data.log_file_path))
        throw std::runtime_error("Log file path doesn't exist.");

    /// --week-start-day / -d option handler
    /// Initialize available days list
    auto available_days = std::vector<day>{{"sunday",    boost::date_time::weekdays::Sunday},
                                           {"monday",    boost::date_time::weekdays::Monday},
                                           {"tuesday",   boost::date_time::weekdays::Tuesday},
                                           {"wednesday", boost::date_time::weekdays::Wednesday},
                                           {"thursday",  boost::date_time::weekdays::Thursday},
                                           {"friday",    boost::date_time::weekdays::Friday},
                                           {"saturday",  boost::date_time::weekdays::Saturday}};

    if (auto selected_day = std::find(available_days.begin(), available_days.end(), boost::to_lower_copy(cmd_data.week_start_day));
            selected_day != available_days.end()) { // Selected day exists
        log_handler::week_start_day = selected_day->day_symbol;
    } else { // Selected day doesn't exist
        if (cmd_data.week_start_day == "help") { // Produce help days message
            std::cout << "Available days:" << std::endl;
            std::cout << "\tSun [Sunday]" << std::endl;
            std::cout << "\tMon [Monday]" << std::endl;
            std::cout << "\tTue [Tuesday]" << std::endl;
            std::cout << "\tWed [Wednesday]" << std::endl;
            std::cout << "\tThu [Thursday]" << std::endl;
            std::cout << "\tFri [Friday]" << std::endl;
            std::cout << "\tSat [Saturday]" << std::endl;
            return EXIT_SUCCESS;
        }
        throw std::runtime_error("Unfamiliar day, for options list use '-d [ --week-start-day ] help'.");
    }

    // Anomalies detector
    auto anomaly_detected = log_handler::anomalies_detector(cmd_data.log_file_path, cmd_data.normal_login_word, cmd_data.anomaly_detection);

    if (cmd_data.analyze_activity) // Analyze logger times
        log_handler::analyze(cmd_data.log_file_path, cmd_data.verbose);

    if (anomaly_detected) // Produce anomalies warning if needed
        std::cout << "\n\n" << design_text::make_colored(std::stringstream() << "*** Anomaly detected! ***", design_text::Color::NONE, design_text::Color::RED, true) << std::endl;

    return EXIT_SUCCESS;
}

Update: After @pacmaninbw review, new updated post: program options from command line initialize [v3 - after CR]

Answer: First, thank you for providing the link to your GitHub repository, it allowed a more complete review. I've noticed a real tendency in the code to avoid creating classes and to use procedural programming rather than object oriented programming. Namespaces are used instead of creating classes. The use of classes and objects can be very powerful, for one thing it allows inheritance and polymorphism.
The use of classes can also decouple modules and reduce dependencies; right now the modules are strongly coupled, and this has a tendency to prevent necessary changes to the architecture as the program matures and grows. I've also noticed a rather strong tendency to use auto rather than declaring the proper types. While the auto type is very useful in some cases, such as ranged for loops, maintaining this code can be more difficult. Personally, types help me to understand the code better. I would almost say this code is abusing the use of auto.

Avoid Using Namespace std

One or more of the source files in the core directory and the utilities directory still contain the using namespace std; statement.

Complexity

Once again the function main() is too complex (does too much). As programs grow in size the use of main() should be limited to calling functions that parse the command line, calling functions that set up for processing, calling functions that execute the desired function of the program, and calling functions to clean up after the main portion of the program. There is also a programming principle called the Single Responsibility Principle that applies here. The Single Responsibility Principle states that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by that module, class or function.
This code should probably be a function in day.cpp, with the function prototype in day.h:

auto available_days = std::vector<day>{{"sunday",    boost::date_time::weekdays::Sunday},
                                       {"monday",    boost::date_time::weekdays::Monday},
                                       {"tuesday",   boost::date_time::weekdays::Tuesday},
                                       {"wednesday", boost::date_time::weekdays::Wednesday},
                                       {"thursday",  boost::date_time::weekdays::Thursday},
                                       {"friday",    boost::date_time::weekdays::Friday},
                                       {"saturday",  boost::date_time::weekdays::Saturday}};

The function should return a std::vector<day>. Or perhaps the function should perform the ensuing search for the day and return the day itself:

auto selected_day = get_selected_day_of_the_week()

Try Catch Throw Blocks

The code in main() currently contains a throw exception but there is no try{} catch{} code to catch the exception; this will result in the program terminating without reporting the problem. At best, in the debugger, it will report an unhandled exception. The main() code should contain a try block and a catch block to handle any exceptions; the throw statement should probably be called in one of the sub-functions that main() calls. If this code stays in main() it might be better to change the throw to std::cerr << "MESSAGE" << std::endl.

Prefer \n Over std::endl

For performance reasons \n is preferred over std::endl, especially in loops where more than one std::cout is expected. std::endl calls a system routine to flush the output buffer. Calling a system function means that the program will be swapped out while the system function is executing.
if (cmd_data.week_start_day == "help") { // Produce help days message
    std::cout << "Available days:" << std::endl;
    std::cout << "\tSun [Sunday]" << std::endl;
    std::cout << "\tMon [Monday]" << std::endl;
    std::cout << "\tTue [Tuesday]" << std::endl;
    std::cout << "\tWed [Wednesday]" << std::endl;
    std::cout << "\tThu [Thursday]" << std::endl;
    std::cout << "\tFri [Friday]" << std::endl;
    std::cout << "\tSat [Saturday]" << std::endl;
    return EXIT_SUCCESS;
}

was refactored to

if (cmd_data.week_start_day == "help") { // Produce help days message
    std::cout << "Available days:\n";
    std::cout << "\tSun [Sunday]\n";
    std::cout << "\tMon [Monday]\n";
    std::cout << "\tTue [Tuesday]\n";
    std::cout << "\tWed [Wednesday]\n";
    std::cout << "\tThu [Thursday]\n";
    std::cout << "\tFri [Friday]\n";
    std::cout << "\tSat [Saturday]" << std::endl;
    return EXIT_SUCCESS;
}

to flush all the output at the end.
{ "domain": "codereview.stackexchange", "id": 36296, "tags": "c++, boost" }
Eastin-Knill Theorem and groups of transversal gates
Question: The Eastin-Knill Theorem shows that the transversal gates always form a group and that moreover this group is a finite subgroup of the group of all unitaries. For many codes, for example all self dual CSS codes, the group of transversal gates is exactly the Clifford group. https://quantumcomputing.stackexchange.com/a/22226/19675

Does anyone know of codes whose group of transversal gates is significantly different from the Clifford group? For example, does anyone know a code on $ n $ qubits whose group of transversal gates is not isomorphic to a subgroup of the Clifford group on $ n $ qubits? Or failing that, does anyone know any restrictions on which finite groups can occur as the group of transversal gates of a code?

Update: The answer to Exotic transversal gate group seems to show that the transversal gate group of an $ [[n,1,d]] $ stabilizer code, for $ d \geq 2 $, must be generated by Clifford gates and/or $ T_k $ gates. So by the classification of finite subgroups of $ PU_2 $ we can conclude that the transversal gate group must be either a dihedral 2-group $ D_{2^k} $ or $ A_4 $ or $ S_4 $ (a cyclic transversal gate group is not possible because we have chosen to specialize to stabilizer codes, and thus $ X $ and $ Z $ are both transversal; they generate a noncyclic Klein 4 subgroup, so the whole transversal gate group must be noncyclic). Moreover, all these groups can indeed be realized as the transversal gate group of some $ [[n,1,d]] $, $ d \geq 2 $, stabilizer code. Each dihedral 2-group $ D_{2^k} $ arises as the transversal gate group of the corresponding $ [[2^{k+1}-1,1,3]] $ quantum Reed-Muller code. $ A_4 $ is the transversal gate group of the perfect $ [[5,1,3]] $ code, see What are the transversal gates of the [[5,1,3]] code?
And $ S_4 $ (the single qubit Clifford group) is the transversal gate group of the $ [[7,1,3]] $ Steane code see Transversal logical gate for Stabilizer (or at least Steane code) Answer: If you take the Reed-Muller code of 15 qubits, this is a distance 3 CSS code (so has transversal c-NOT, Z and X) but it also has transversal T (and transversal controlled-S and controlled-controlled-Z). What it doesn't have is transversal Hadamard. You'll find this code properly defined in a bunch of places, but, for example, here is the first one that Google threw at me!
{ "domain": "quantumcomputing.stackexchange", "id": 3562, "tags": "quantum-gate, clifford-group, fault-tolerance" }
Can ionic and/or metallic bonding produce stable long chains? Like polymers, though not necessarily as useful
Question: Question Polymers are long chains ⛓️ of covalent bonds. Can similar structures exist for ionic and/or metallic bonding? They don't have to be as useful. I know there can be polymers with ionic bonds in them, but I am talking about the entire structure (at least the base) being ionic and/or metallic. Just in case, could they exist for other types of bonding too? Answer: In the case of ionic bonding, the answer would be that a stable structure is unlikely, although examples may exist among multimeric proteins. You may also check out the work of George Whitesides and others in the field of self-assembling soft matter. However, to be clear, it is assumed that the arrangement in question is of alternating charges in a regular linear sequence (...+-+-+-+-+-+-...), that the attraction between nearest neighbors is isotropic, and that the arrangement is free of other forces (such as external fields). There could be a solvent but if it competes with the ionic interactions then stability is certainly impossible. Such an arrangement would be unstable in a vacuum as well except under exceptional circumstances. The reason is that electrostatic attraction operates over long distances, and charges will therefore experience attraction and repulsion from more than the nearest neighbors. In addition, since the bonding is assumed isotropic, there is little to interfere with bending under the influence of these forces and in the presence of thermal agitation. If one monomer can bond to more than two (coordination number>2) then a sheet or a nonlinear 3D structure will ensue. However, if one monomer can bond stably to only two others, then an entangled, reptating polymer-like structure would result. With regard to metallic bonding, there is such a thing as a "molecular wire". These can form electrically conductive single macromolecules. 
However I can't at present comment on whether bonding in these would be regarded as metallic, at least in a way comparable to bonding in traditional metal solids. You might ask at matter modelling SE whether such models have been studied in silico.
{ "domain": "chemistry.stackexchange", "id": 17361, "tags": "bond, metal, ionic-compounds" }
Determining Brillouin Zone for a crystal with multiple atoms
Question: The Brillouin Zone (BZ) refers to a region of reciprocal space corresponding to the primitive cell. That is, a Brillouin Zone is a subset of the reciprocal space which contains all the information necessary to describe the crystal. In the case of a crystal which only has a single type of atom, the procedure for determining the corresponding BZ is simple:

Take the lattice in real space and convert it to reciprocal space
Choose any one lattice site as the origin and draw lines connecting it to all of its nearest neighbours
Draw the perpendicular bisectors of these connecting lines. The area bounded by the bisectors is the first Brillouin Zone

Wikipedia has a very clear illustration of the process. It is further possible to find the Irreducible part of the Brillouin Zone (IBZ), which is the smallest possible subset of the BZ after reducing it along its symmetries, in the same way as the symmetries present in the point group associated with the lattice.

The Problem: The above formulation is clear and well-known. However, I am not very sure about what to do if the crystal has more than one atom. There can be several possibilities:

We could treat the question as purely concerning the lattice and treat all the atoms as if they were identical. Then follow the earlier process.
We could treat the crystal as having a motif consisting of groups of the different atoms. Then, try to obtain a resulting lattice and carry out the standard process on it.
Connect nearest neighbours using sets of either only identical or only dissimilar atoms and continue the earlier process.

Unfortunately, none of these options seems obviously correct. It feels like there should be a clear answer based on the underlying theory, but I have had no luck finding it so far. What is the correct way to find the Brillouin Zone/Irreducible Brillouin Zone for a crystal consisting of more than one type of atom?
(such as the one shown below) Answer: Option 2 is essentially the correct approach. Just as in your Bravais lattice example, you begin by writing down the lattice vectors $\mathbf{a}_i$ of the real-space lattice. The lattice vectors specify the distance you have to go for the motif (in the case of your honeycomb lattice it consists of two sites, one blue and one red) to repeat itself. Then determine the reciprocal space lattice vectors $\mathbf{b}_i$, which specify the periodicity of the lattice in $\mathbf{k}$-space and thus the size of the Brillouin zone. In the case of the honeycomb lattice, you'll find a hexagonal reciprocal lattice, for which you can then determine the first Brillouin zone following the approach you describe.
{ "domain": "physics.stackexchange", "id": 91418, "tags": "solid-state-physics, group-theory, crystals, x-ray-crystallography" }
Why does carbon alloy with iron specifically?
Question: Everyone knows what an alloy is: it's a metal made by melting two (or more) other metals together. Unless of course you're talking about steel. That's a metal made by mixing carbon (very much not a metal) into molten iron. But you never hear about carbon alloys with any other metal, and that's kind of strange. If a few percentage points of carbon can turn iron into the miracle metal that is the foundation of the Industrial Age, just imagine what it could do to aluminum or titanium, for example. (Or even bronze, for that matter, which is superior to iron in many ways, from a materials science perspective.) But you only ever hear about carbon alloying with iron to form steel. So what's so special about iron? Answer: It's true they are not common, but there are other alloys that use carbon. Nickel is probably one of the more common metals that form alloys with carbon that have desirable properties. For example, Nickel 200, Nickel 201, and Nickel 205 all contain carbon. (See: http://www.asminternational.org/documents/10192/1852239/ACFA9D7.pdf/d490dee6-620e-4e38-b64d-53dd02c5fc81). Chromium and Tungsten also form alloys with carbon called Stellite Alloys: See http://en.wikipedia.org/wiki/Stellite (although some, but not all, stellite alloys contain iron too).
{ "domain": "physics.stackexchange", "id": 17561, "tags": "metals" }
Is there a paper glue that is NOT dissolved by acetone?
Question: Short version of the question: I need a spray-on glue that will stick paper to wood that isn't dissolved by acetone. Long version: I am trying to glue paper on wood. It's a big sheet of paper and I need it stuck on throughout because then I cut the wood-with-paper-on-it into pieces with a laser cutter. Therefore, I'm using a spray glue that can cover the whole sheet of paper quickly and evenly. I have tried a couple different glues. After the pieces are cut, I want to spray an acrylic coating over the resulting pieces to seal them. The process works except for one problem: if the acetone in the acrylic spray seeps through, it dissolves every glue I have tried. I try to go light with the acrylic, but more than 50% of my pieces end up with the paper coming loose from the wood. I can glue it back on later, but that means that the positioning of the paper is way less precise and the edge seal isn't there, making it susceptible to water damage. Are there any glues anyone knows of that meet this challenge? (I suppose, alternatively, is there an inexpensive sealant that doesn't use acetone? But I know the acrylic will hold up to the abuse these playing pieces are going to take, so I am hesitant to swap out that part of my process.) Answer: Something you can try to use is tape to put under the glue. This may stop the acetone from seeping through. Not quite sure but maybe if you really need it done. Also, Elmer's glue does offer a spray on version regardless of what some say on your comments.
{ "domain": "chemistry.stackexchange", "id": 9164, "tags": "everyday-chemistry, solutions" }
Stable marriage in Haskell
Question: I implemented the stable marriage problem in Haskell a few months ago. It's not optimized at all and I'd like to know how to make it better from a performance and readability perspective.

data Sex = Male | Female deriving (Eq, Show)

data Virtue = Intelligence | Appearence | Kindness deriving (Eq, Show, Enum)

data Parameter = Parameter{ virtue :: Virtue, value :: Int } deriving (Eq, Show)

data Person = Person{ name :: String, sex :: Sex, preferences :: [Virtue], parameters :: [Parameter], partner :: Maybe Person } deriving (Eq, Show)

Returns the list of women sorted by one man's preferences, via defaultRateFunction :: Person -> Person -> Int. In my implementation it depends on the judge's preferences and the rated person's parameters. I won't put it here for brevity. You can find the full program in a link to a Gist at the bottom of the post. Imagine that function to be anything you want.

personalRating :: Person -> [Person] -> [Person]
personalRating x ys = sortBy (comparing (\y -> defaultRateFunction x y)) ys

A man makes an engagement proposal to a woman: if she doesn't have a partner, she replies positively (True); if she does, she "returns" True if the new suitor's rating is greater than the old partner's, and False if it is not.

proposal :: Person -> Person -> Bool
proposal male female
    | isNothing (partner female) = True
    | defaultRateFunction female male > defaultRateFunction female (fromJust $ partner female) = True
    | otherwise = False

A man makes a proposal to each woman in females until he finds one who replies positively. It is assumed that there is at least one such woman in the list.

findTheBride :: Person -> [Person] -> Person
findTheBride male females
    | proposal male (head females) == True = head females
    | otherwise = findTheBride male (tail females)

The ugliest part is the marrige algorithm itself. As I call it recursively I have to clean each person from the array of the corresponding sex every time they find a partner, and also check whether the woman has an ex-partner and deal with their "breakup" as well.
marrige :: [Person] -> [Person] -> [Person]
marrige males females
    | sm == [] = females
    | isNothing ex = marrige (fsmWithNewPartner:(delete fsm males)) (fsmPartnerWithFsm:(delete fsmPartner females))
    | otherwise = marrige (fsmWithNewPartner:((fromJust ex) {partner = Nothing}):(delete fsm $ delete (fromJust ex) males)) (fsmPartnerWithFsm:(delete fsmPartner females))
    where
        sm = filter (\x -> partner x == Nothing) males -- Single males
        fsm = head sm -- First single male
        fsmPartner = findTheBride fsm (personalRating fsm females) -- First single male's partner
        ex = partner fsmPartner -- Partner's ex (Maybe)
        fsmWithNewPartner = fsm {partner = Just fsmPartner}
        fsmPartnerWithFsm = fsmPartner {partner = Just fsm}

Full version of the program (where random Person data are generated) available on Gist.

Answer:

import Safe (findJust)
import Data.Foldable (null, all)

personalRating = sortBy . comparing . defaultRateFunction

proposal m f = all (on (>) (defaultRateFunction f) m) $ partner f

remarry p x = (x {partner = p} :) . delete x

marrige :: [Person] -> [Person] -> [Person]
marrige ms fs = case find (null . partner) ms of
    Nothing -> fs
    Just m -> let f = findJust (proposal m) $ personalRating m fs
              in marrige (remarry (Just f) m $ maybe id (remarry Nothing) (partner f) ms)
                         (remarry (Just m) f fs)

You never actually use ms that have a partner. Why keep track of them? I'll assume that all start out without partners, otherwise filter once at the start. In particular, I'll assume the invariant that partnership is symmetric.

marrige :: [Person] -> [Person] -> [Person]
marrige [] fs = fs
marrige (m:ms) fs = marrige (maybe id (\ex -> (ex {partner = Nothing} :)) (partner f) ms)
                            (f {partner = Just m} : delete f fs)
    where f = findJust (proposal m) $ personalRating m fs

For performance, you could use Data.Map instead of []'s delete.

Edit: Here's one where the explicit recursion is less global. Unsetting ex's partner may be superfluous.

import Control.Monad.State

marrige = execState . traverse_ go
    where go m = do
              f <- gets $ findJust (proposal m) . personalRating m
              modify $ (f {partner = Just m} :) . delete f
              for_ (partner f) $ \ex -> go $ ex {partner = Nothing}
{ "domain": "codereview.stackexchange", "id": 34054, "tags": "haskell" }
Why is there a difference between atoms and chemicals
Question: I recently asked this about why we call things "isotopes" and "elements" (atoms) instead of just having it be a bunch of particles bundled together in different numbers. This makes me wonder about the definition of Chemical, for which Wikipedia has a sort of circular definition. A chemical cannot be broken into elements (atoms) without breaking chemical bonds (basically the definition)... So atoms have protons and neutrons (and electrons) that are bound by "atomic bonds". I'm not sure that's correct. So the strong force holds a single nucleus together. But I'm not sure what holds protons together; maybe that's the weak force. Then "chemical bonds" are between electrons between atoms. My question is, why not just get rid of the distinction of calling things "protons vs. atoms vs. chemicals", and just say there are "particles" which bundle together. They bundle into group 1 (atoms), and group 1's bundle into group 2's (chemicals), but really it's all just particles that are all interacting. So this makes me wonder what exactly a chemical is. It is essentially a stable particle system, where atoms combine into larger chunks.

Answer: The distinctions matter because of what we can observe in nature and the lab

If you want to make sense of the world around you it helps to have some coherent scheme to describe the world and classify the things in it. That scheme should map onto things we can observe.
We could just describe things by their nuclear properties (the count of protons and neutrons in the atoms) but this is a poor classification for two reasons:

things with the same number of protons are very, very similar and it mostly doesn't help to distinguish them unless you are doing nuclear physics
most of the substances we observe in the world have properties that their nuclear composition does not explain

We have a periodic table, not a (much bigger) nucleonic table, because the easily observable properties of elements (which are mostly chemical or physical unless you are a nuclear scientist) are very similar for isotopes with the same number of protons. Most of the ways those "particles" interact are highly dependent on which element they are. We have a separate class (chemicals, molecules or chemical substances) because of the way atoms interact. Different atoms join together in different ways that mostly depend on how their electrons are configured (which, in turn, depends on the number of protons in their nucleus). There are bazillions of ways to combine atoms into molecules, and each resulting chemical substance has distinct features (colour, melting point, taste, toxicity...). The observable features are dependent on how the atoms join together: different ways of joining them together result in substances with different properties. That's why we have a subject called chemistry. If we ignored that combinatorial complexity and just talked about combinations of particles we would be ignoring the obvious ways of classifying the distinct chemical substances we observe. It is worth looking at a few examples to clarify this and to explain the Wiki definition of chemical. First note that chemical substance includes elements (everything is a chemical substance even if it is only made from one type of atom). Take carbon, for example. Diamond is one type of carbon, graphite is another. Both are forms of the element (technically allotropes).
Both could be thought of as bundles of carbon atoms but that is a useless definition as it doesn't explain why their properties are so different. Chemistry does: they are different because the way the atoms are joined together and organised is different (one has flat hexagons in near-infinite planes, the other has near-infinite tetrahedral arrays of carbons joined together). Or consider ethanol and dimethyl ether. They both contain exactly the same bundle of atoms. But that doesn't explain why they are so different (one boils at -24 Celsius, the other is a pleasant intoxicating drink). Again, the way the atoms are connected together explains the observable difference in properties; the bundle of atoms they're made from does not. Moreover, those observable properties of chemicals will be the same even if the atoms they are made from are different isotopes (12C and 13C are very similar, and alcohol made from either will still make you drunk and taste like booze). So chemists mostly ignore nuclear properties and explain things using the periodic table, which explains almost everything about how atoms connect together into molecules. So we group different nuclei together into elements because that helps us classify the things that make up chemicals in ways that help explain what matters about how they connect together. We have a separate classification for chemical substances as they depend on the details of how atoms connect together. These are meaningful classifications that help us explain the properties we can observe in the world and the lab.
{ "domain": "chemistry.stackexchange", "id": 12118, "tags": "atoms" }
Design a LTI system which returns DC value of the input signal
Question: Suppose that $h(t)$ is the impulse response of an LTI system. The input signal $x(t)$ is periodic with period $T$. Determine $h(t)$ so that the output signal $y(t)$ is only the DC component of $x(t)$. Is $h(t)$ necessarily unique? My attempt: It's known that an LTI system's response to a periodic input is periodic. So I think the only possible $h(t)$ is a constant function. If we let $h(t) = 1$ then: $$y(t) = \int_{-\infty}^{\infty}x(\lambda)d\lambda \tag{1}$$ According to Matt L.'s answer, the definition of the DC value is: $$\bar{x}=\lim_{T_0\rightarrow\infty}\frac{1}{T_0}\int_{-T_0/2}^{T_0/2}x(t)dt\tag{2}$$ Clearly $(1)$ and $(2)$ are different. So what's the appropriate $h(t)$? Maybe the question uses a different definition for the DC value? Answer: A constant impulse response won't work because if the input signal has a non-zero DC component, the output will blow up. Note that the input signal has frequency components at DC and at integer multiples of $1/T$, the latter being its fundamental frequency. So you simply need a filter that retains the DC component and filters out all integer multiples of $1/T$. Any low pass filter with a cut-off frequency less than $1/T$ will do the job. You just need to make sure that the low pass filter's frequency response at DC is unity, so it doesn't change the value of the input signal's DC component. EDIT: Just to clarify, there are infinitely many filters that satisfy your requirements. You just need unity gain at DC, and zero gain at frequencies $f_k=k/T$, $k=1,2,\ldots$. Any low pass filter with unity gain at DC and a cut-off frequency $f_c$ satisfying $0<f_c<1/T$ is a solution (as suggested above). But there are also other solutions, such as filters with notches at $f_k=k/T$, $k=1,2,\ldots$ (and unity gain at DC). One such filter was proposed in Hilmar's answer.
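One concrete filter of the "notch" family is worth sketching numerically: a moving average over exactly one period $T$ has unity gain at DC and nulls at every $k/T$, so it passes the DC component untouched and removes all harmonics. A minimal sketch (the signal, period and sample rate below are made up for the demonstration):

```python
import numpy as np

# Periodic input with period T = 1 s: DC component 2.0 plus two harmonics.
fs = 1000            # sampling rate (Hz), assumed for the demo
T = 1.0              # period of x(t)
t = np.arange(0, 10, 1 / fs)
x = 2.0 + np.cos(2 * np.pi * t / T) + 0.5 * np.sin(2 * np.pi * 3 * t / T)

# Moving average over exactly one period: h(t) = (1/T) on [0, T], 0 elsewhere.
# Its frequency response is a sinc with unity gain at DC and nulls at k/T.
N = int(T * fs)                      # samples per period
h = np.ones(N) / N
y = np.convolve(x, h, mode="valid")  # output once the window is fully inside x

print(y[-1])  # ≈ 2.0, the DC component of x(t)
```

Every harmonic averages to zero over a full period, so the output is the DC value to within floating-point error.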
{ "domain": "dsp.stackexchange", "id": 8706, "tags": "continuous-signals, linear-systems, impulse-response" }
Selection rules in atomic physics. Why is the $j' = 0$ to $j=0$ transition not allowed?
Question: Consider the quantum numbers j (total angular momentum) and l (orbital angular momentum), where j = l + s. My notes state that: Delta l can only be 1 or -1. Delta j can only equal 1 or zero, because the photon carries away one unit of angular momentum. My question: A transition from j' = 0 to j = 0 is forbidden. Why is this the case? Answer: By angular momentum coupling, the possible $j_f$ values for the final states must be contained in the decomposition of $$ j_i\otimes (L=1)= \vert j_i-1\vert\oplus \ldots \oplus (j_i+1)\, . $$ In your case, the initial angular momentum $j_i=0$, so that $$ 0 \otimes 1 =1 $$ and no other value of $j_f$ except $j_f=1$ can occur. Thus $j_f=0$ is excluded by the triangularity of the angular momentum coupling.
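The triangle rule in the answer is simple enough to tabulate; a small sketch (integer $j$ only; half-integer values would need a different representation):

```python
def allowed_jf(j_i):
    """Final total angular momenta reachable by one photon carrying L = 1,
    from the decomposition j_i (x) 1 = |j_i - 1| (+) ... (+) (j_i + 1)."""
    return list(range(abs(j_i - 1), j_i + 2))

print(allowed_jf(0))  # [1]        -> j = 0 to j = 0 is forbidden
print(allowed_jf(1))  # [0, 1, 2]  -> here Delta j = 0, +-1 are all allowed
```

For $j_i = 0$ the only final value is $j_f = 1$, which is exactly the statement that $0 \to 0$ is excluded.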
{ "domain": "physics.stackexchange", "id": 46640, "tags": "photons, angular-momentum, conservation-laws, atomic-physics" }
Is there a parallel algorithm for 3SAT?
Question: Are there any parallel algorithms or approximation algorithms for 3SAT? Answer: Take a look at this question for some pointers to SAT solvers. Specifically, there's a link to the SAT Competition, in which you can find parallel SAT solvers (like ManySAT or gNovelty2+).
{ "domain": "cstheory.stackexchange", "id": 534, "tags": "cc.complexity-theory, sat, approximation-algorithms, dc.parallel-comp" }
Why use a reference volume for leak testing?
Question: An accepted method of leak testing a test piece is by using a differential measurement between it and a reference such as in the schematic here - why might one bother doing this as opposed to just comparing your test piece alone to a predefined or recorded leak rate, or curve? Answer: Pressures change in a vessel based on ambient conditions. The reference volume removes that source of error.
{ "domain": "engineering.stackexchange", "id": 3434, "tags": "product-testing" }
Are there any APIs for crawling abstracts of papers?
Question: If I have a very long list of paper names, how could I get the abstracts of these papers from the internet or any database? The paper names are like "Assessment of Utility in Web Mining for the Domain of Public Health". Does anyone know any API that can give me a solution? I tried to crawl Google Scholar, however, Google blocked my crawler. Answer: Look it up on: Google Scholar link Citeseer link If you get a single exact title match then you have probably found the right article, and can fill in the rest of the info from there. Both give you download links and bibtex-style output. What you would likely want to do, though, to get perfect metadata is download and parse the PDF (if any) and look for a DOI-style identifier. Please be nice and rate-limit your requests if you do this.
{ "domain": "datascience.stackexchange", "id": 11, "tags": "data-mining, machine-learning" }
Difference in Reference Frames between VLP 16 and HDL 32 and How to convert them?
Question: Hi, how do the reference frames used in the VLP 16 and HDL 32 differ from each other, and how can they be converted into a common frame of reference? I have point cloud data gathered from a VLP 16 and an HDL 32. May I know how their axes X, Y, Z are oriented and how these axes change between the two devices? I would like to convert my HDL 32 data into the same frame of reference as that of the VLP 16, so that points returned by both geometrically and metrically mean the same, considering that the HDL 32 sits on top of the VLP 16 at a height of 30 cm. Thank you for your time! Originally posted by sai on ROS Answers with karma: 1935 on 2017-04-28 Post score: 0 Answer: https://github.com/ros-drivers/velodyne/issues/71 This has actually helped me. The VLP 16 axes of the data published by the ROS driver are different from the axes shown in the VLP-16 manual. In relation to a body the standard is: x forward y left z up Originally posted by sai with karma: 1935 on 2017-05-12 This answer was ACCEPTED on the original site Post score: 0
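If, as the answer suggests, both drivers publish in the same x-forward / y-left / z-up convention, converting HDL-32 points into the VLP-16 frame reduces to applying the fixed mounting transform. A sketch under that assumption, using only the stated 30 cm vertical offset (any mounting rotation would have to be measured or calibrated separately):

```python
import numpy as np

# Homogeneous transform from the HDL-32 frame to the VLP-16 frame.
# Assumption: axes aligned (x forward, y left, z up for both sensors), and the
# HDL-32 mounted 0.30 m above the VLP-16, so the transform is a pure translation.
T_vlp_from_hdl = np.eye(4)
T_vlp_from_hdl[2, 3] = 0.30  # z offset in metres

def hdl_to_vlp(points):
    """points: (N, 3) array of HDL-32 points -> same points in the VLP-16 frame."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    return (T_vlp_from_hdl @ pts_h.T).T[:, :3]

p = hdl_to_vlp(np.array([[1.0, 2.0, 0.5]]))
print(p)  # ≈ [[1.0, 2.0, 0.8]]
```

In practice one would let tf handle this by publishing the mounting transform as a static transform between the two sensor frames, but the arithmetic is exactly the above.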
{ "domain": "robotics.stackexchange", "id": 27740, "tags": "velodyne" }
Effect on T1 relaxation due to the application of the gradient
Question: In MRI, the T1 relaxation constant for a tissue type is defined as the time taken for the longitudinal magnetization to recover to approximately 63% of its equilibrium value. Is there any modification of the T1 time due to the application of gradients? Suppose, after application of the RF pulse, the magnetization is tipped completely into the transverse plane. If one waits, then 63% of the longitudinal magnetization (z-direction) will recover after T1 seconds. Now, say at the same time I apply a gradient along the x or y direction. Would this have any effect on the longitudinal relaxation time? Answer: It does not have any practical effect on the $T_1$ relaxation whether there is a gradient present or not. Sure, from an academic point of view, there is an effect: typically, in biomedical applications $T_1$ gets longer at higher field strengths. This is the reason why imaging sequences have to be adapted to the field strength of the machine if $T_1$-dependent signal behavior is exploited (see fluid attenuation, or magnetization preparation in MPRAGE, as examples). So, since you increase the magnetic field in the $z$ direction by a few mT, this would also influence $T_1$. However, I have never seen a practical application where this was a concern.
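The 63% figure in the question is just the exponential recovery law evaluated at $t = T_1$; a tiny sketch (the $T_1$ value below is made up, the 63% result is independent of it):

```python
import numpy as np

M0 = 1.0   # equilibrium longitudinal magnetization (arbitrary units)
T1 = 0.9   # assumed T1 in seconds, for illustration only

def Mz(t):
    """Longitudinal recovery after a 90-degree pulse, i.e. Mz(0) = 0."""
    return M0 * (1 - np.exp(-t / T1))

# At t = T1 the magnetization has recovered 1 - 1/e of M0,
# regardless of the particular T1 value.
print(Mz(T1) / M0)  # ≈ 0.632
```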
{ "domain": "dsp.stackexchange", "id": 3966, "tags": "mri" }
Resolution Algorithm - No new clauses
Question: The standard resolution algorithm returns false if no new clauses are added. I know that if $KB \land \neg a \implies []$, it returns true by proof by contradiction, but how can I understand the false case intuitively? Answer: When $new\subseteq clauses$ then, if we continued, the next line would be $clauses \leftarrow clauses\cup new$ but $clauses\cup new = clauses$ so we'd be doing the exact same calculation the next time through the loop deducing nothing new and looping forever. The point is, if we "resolve" all pairs of clauses and deduce nothing new, then we've derived all the consequences from the original set of clauses. If we still haven't derived a contradiction at that point, then $\neg\alpha$ is consistent with $KB$ and thus $\alpha$ is not provable. This last isn't necessarily completely accurate either, as it only shows that $\alpha$ is not a propositional consequence of $KB$. For example, if $KB$ was $\forall x.P(x)$ and $\alpha$ was $P(t)$, then $\alpha$ is provable from $KB$ but not via propositional reasoning.
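The fixpoint the answer describes can be made concrete with a toy implementation (literals as signed integers, clauses as frozensets; an illustration of the loop, not an efficient solver):

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses (literals are nonzero ints, -x means not x)."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def pl_resolution(clauses):
    """True if the clause set is unsatisfiable (empty clause derivable);
    False once a pass adds nothing new -- the fixpoint discussed above."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # derived the empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:         # nothing new: all consequences derived
            return False
        clauses |= new

# KB = {A, A -> B}; query B: add the negation {-B}, look for a contradiction.
print(pl_resolution([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))  # True

# KB = {A}; query B: {-B} is consistent with KB, so no new clauses ever appear
# and the loop returns False: B is not (propositionally) provable from KB.
print(pl_resolution([frozenset({1}), frozenset({-2})]))  # False
```

The second call is exactly the "false" case asked about: resolving all pairs yields nothing new, so every propositional consequence has already been derived, and since the empty clause is not among them, $KB \land \neg\alpha$ is satisfiable.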
{ "domain": "cs.stackexchange", "id": 11314, "tags": "logic, artificial-intelligence, propositional-logic" }
Is deciding whether or not a graph is bipartite in $L$?
Question: Let's consider the problem: Prove that deciding whether a graph is bipartite is in $LOGSPACE$. I am convinced that I should use Reingold's theorem. I know that I should test whether the graph contains an odd-length cycle. For a given graph $G = (V,E)$ I can define the graph $$G' = (V', E') \\ V' = \{ v^1, v^2 | v \in V \} \\ E' = \{(u^1, w^2), (u^2, w^1) | (u,w) \in E\}$$ And now, the vertex $v$ resides in an odd-length cycle iff there is a path between $v^1$ and $v^2$. But how can I construct $G'$ using $LOG$ memory? Answer: Your construction shows that checking whether an undirected graph is bipartite is logspace reducible to PATH (deciding whether there exists a path between two vertices in an undirected graph). Recall that if $A$ is logspace reducible to $B$ and $B\in L$, then $A\in L$. To decide $A$ in logspace, you don't actually compute the reduction (as its output can be of polynomial length), but compute a single bit of the reduction whenever it is requested. More concretely, suppose $f$ is a logspace reduction from $A$ to $B$. To decide $A$, given input $x$, execute the logspace machine for $B$ and keep a counter $i$ for the location on its input tape. Whenever $B$ wants to read a symbol from its input, compute the $i$'th bit of $f(x)$, which requires only logarithmic space (as $f$ is a logspace reduction from $A$ to $B$). This shows that you don't actually need to construct $G'$ entirely. You only need to compute a given bit of its adjacency matrix in logarithmic space.
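The on-demand construction can be illustrated in (decidedly non-logspace) Python: $G'$ is never materialized, only an adjacency oracle that recomputes each queried bit from $G$, which mirrors how the logspace machine recomputes input bits of $f(x)$ instead of storing them. Plain BFS stands in for Reingold's logspace connectivity procedure, purely to check the reduction's correctness:

```python
from collections import deque

def double_cover_adj(adj):
    """Adjacency oracle for G': vertices are (v, i) with i in {1, 2}; each edge
    of G connects the two layers. Every query recomputes from G's oracle."""
    def adj2(u, w):
        (uv, ui), (wv, wi) = u, w
        return ui != wi and adj(uv, wv)
    return adj2

def connected(adj2, n, s, t):
    """BFS over the oracle (a stand-in for Reingold's logspace algorithm)."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in range(n):
            for i in (1, 2):
                w = (v, i)
                if w not in seen and adj2(u, w):
                    seen.add(w)
                    q.append(w)
    return False

# Triangle (odd cycle, not bipartite): v^1 reaches v^2 in the double cover.
tri = lambda u, w: u != w            # all three vertices pairwise adjacent
adj2 = double_cover_adj(tri)
print(connected(adj2, 3, (0, 1), (0, 2)))  # True -> odd cycle -> not bipartite

# Path 0-1-2 (bipartite): the two copies of vertex 0 stay disconnected.
path = lambda u, w: abs(u - w) == 1
print(connected(double_cover_adj(path), 3, (0, 1), (0, 2)))  # False
```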
{ "domain": "cs.stackexchange", "id": 9532, "tags": "complexity-theory" }
Tangential acceleration and acceleration of blocks
Question: I encountered this problem while working on this exercise. The question is to find an expression for the acceleration of the two blocks. I started with writing Newton's second law for the two blocks and Newton's second law involving torque, which is related to the angular acceleration of the pulley. My question is: is the tangential acceleration equal to the acceleration of the blocks? If so, why is this the case? Answer: Is the tangential acceleration equal to the acceleration of the blocks? As long as the rope does not stretch, and there is sufficient friction for the rope not to slip on the pulley, then we can take the angular displacement of the pulley $\theta$, multiply it by the radius of the pulley $R$ and get the amount by which the rope has moved. But this must also be the displacement of the blocks, $x$. So we have $x = R \theta$ If we differentiate this once with respect to time we have $v = R \omega$ where $\omega$ is the angular velocity of the pulley. If we differentiate again we have $a = R \dot \omega$ where $a$ is the acceleration of the blocks and $\dot \omega$ is the angular acceleration of the pulley.
{ "domain": "physics.stackexchange", "id": 99502, "tags": "homework-and-exercises, newtonian-mechanics, free-body-diagram, string" }
Ensemble average of polarization with applied field
Question: This question is in regard to a dipole moment fluctuation formula seen in: MOLECULAR PHYSICS, 1983, VOL. 50, NO. 4, 841-858, on page 843. For a system of polar liquid (water) at equilibrium, the ensemble average of the total dipole moment $M$ is: $$\left<M\right> = \frac{\int dq M\exp(-\beta U(q))}{\int dq \exp(-\beta U(q))}$$ where $q$ is a collective variable (e.g. position and momentum) that dictates the Hamiltonian ($U$) of the system. The denominator is the partition function. Now consider a small external applied electric field $E_o$. The ensemble average of the total dipole moment is now: $$\left<M\right>_E = \frac{\int dq M\exp(-\beta (U(q)-ME_o))}{\int dq \exp(-\beta (U(q)-ME_o))}$$ where the Hamiltonian has an extra term that accounts for the field-dipole interaction. To simplify the above expression, literature often states "apply linearization on $E_o$" and that's where I'm confused. My interpretation of that is: $$\exp{(\beta ME_o) \approx 1 + \beta ME_o}$$ which makes $$\left<M\right>_E = \frac{\int dq \left[M\exp(-\beta U(q)) + \beta M^2 E_o \exp(-\beta U(q))\right]}{\int dq \exp(-\beta (U(q)-ME_o))}$$ and I'm stuck here. In the final expression shown in the paper, they arrived at: $$\left<M\right>_E = \beta E_o\left<M^2\right>$$ where the bracket around $M^2$ denotes an average in equilibrium with no field, i.e. $\left<M^2\right>=\frac{\int dq M^2\exp(-\beta U(q))}{\int dq \exp(-\beta U(q))}$. Could someone help me explain how they arrived at their final expression? Answer: I've found an answer. My misunderstanding was from the term "linearization". That term was meant for $\left<M\right>_E$ and not $E_o$.
Starting from the ensemble average of $M$ with a field $E_o$: $$\left<M\right>_E = \frac{\int dq M\exp(-\beta (U(q)-ME_o))}{\int dq \exp(-\beta (U(q)-ME_o))}$$ The "linearization" here means expanding $\left<M\right>_E$ to first order in $E_o$: $$\left<M\right>_E \approx \left<M\right> + \frac{\delta\left<M\right>_E}{\delta E_o}\bigg|_{E_o=0} E_o$$ Applying the product/quotient rule to the expression for $\left<M\right>_E$ (each differentiation of the exponentials with respect to $E_o$ brings down a factor $\beta M$), we find: $$ \frac{\delta\left<M\right>_E}{\delta E_o}= \frac {\int dq\, \beta M^2\exp(-\beta (U(q)-ME_o))} {\int dq \exp(-\beta (U(q)-ME_o))}- \frac {\int dq\, \beta M\exp(-\beta (U(q)-ME_o))\int dq\, M\exp(-\beta (U(q)-ME_o))} {\left[\int dq \exp(-\beta (U(q)-ME_o))\right]^2}$$ Then, evaluating at $E_o=0$ and following the definition of the ensemble average: $$ \frac{\delta\left<M\right>_E}{\delta E_o}\bigg|_{E_o=0}= \beta \left<M^2\right> - \beta \left<M\right>\left<M\right>$$ Finally, by also assuming $\left<M\right>=0$ for an isotropic system with no field: $$\left<M\right>_E = \left<M\right> + \beta E_o (\left<M^2\right> - \left<M\right>\left<M\right>) = \beta E_o\left<M^2\right>$$. woooooohooo, it only took me 3 days to figure this out :(
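A quick numerical sanity check of the final linear-response result: take a single rigid dipole with $U(q)=0$ and $M = \mu\cos\theta$, so $\left<M^2\right> = \mu^2\left<\cos^2\theta\right> = \mu^2/3$, and compare the exactly computed $\left<M\right>_E$ against $\beta E_o \left<M^2\right>$ for a weak field (all parameter values below are arbitrary choices for the check):

```python
import numpy as np

beta, mu, E0 = 1.0, 1.0, 1e-3   # weak field, so the linearization applies

# Single rigid dipole with U(q) = 0 and M = mu*cos(theta).  The orientational
# measure sin(theta) dtheta is uniform in u = cos(theta) on [-1, 1].
u = np.linspace(-1.0, 1.0, 200001)
M = mu * u
w = np.exp(beta * E0 * M)           # Boltzmann weight exp(-beta*(U - M*E0))

M_E = np.mean(M * w) / np.mean(w)   # <M>_E computed without linearization
M2 = np.mean(M ** 2)                # zero-field <M^2> = mu^2 / 3

print(M_E, beta * E0 * M2)          # both ≈ beta*E0*mu^2/3
```

The two numbers agree to the order of $(\beta\mu E_o)^2$, as expected for a first-order expansion (this is just the small-argument limit of the Langevin function).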
{ "domain": "physics.stackexchange", "id": 82586, "tags": "thermodynamics, electric-fields, dipole-moment" }
Possible objection to hidden variables schema
Question: I know that local hidden variable theories have been ruled out by Bell inequality experiments, but I have a more fundamental objection that I have not found discussed in the literature. Let us consider one spin one-half particle and let's test it in a line of several Stern-Gerlach devices, testing various orientations of spin (say x,z,x,y,x,z,z,...). In the hidden variable interpretation the particle should carry a very large (potentially infinite) number of labels, yielding the results of each following measurement. I find it completely unnatural to assign to one particle an infinite string of possible future measurement results: it seems to me enough to rule out the hidden variable explanation. Am I wrong? Answer: You assume the spin projections have to be among those hidden variables. Bohmian mechanics is an explicit example showing that this is not necessary: with just the position of the particle as hidden variable, and then incorporating the coupling of the spin with the magnetic field in the Schrödinger equation as usual, Bohmian mechanics perfectly predicts successive Stern-Gerlach measurements, as explained in detail in [1]. The gist of it, in very qualitative terms, is that starting from a wave function with a single fairly localised swell, it is split into two non-overlapping swells by the Stern-Gerlach, each corresponding to a different eigenvalue of the spin projection. The particle that surfed the initial state swell will then end up on one or the other final state swell, and the probabilities will be given by the modulus squared of the wave function, as per the usual postulate of Bohmian mechanics [*]. A subsequent Stern-Gerlach would then proceed with one of the final swells of the first experiment as initial state, exactly as in traditional QM we start from the wave function collapsed by the first experiment, and the particle surfing it would again end on one of the two well-separated final state swells.
Nowhere has spin been used as a hidden variable. [1] Travis Norsen. The pilot-wave perspective on spin. American Journal of Physics, 82(4):337–348, 2014. [*] Although, it is claimed that this postulate can actually be demonstrated from a quantum equivalent of Boltzmann's H-theorem.
{ "domain": "physics.stackexchange", "id": 44041, "tags": "quantum-mechanics, quantum-spin, quantum-interpretations" }
How many nodes does the final B-tree have?
Question: I'm currently studying the B-Trees chapter of Introduction to Algorithms. One of the questions from the chapter is: Suppose that we insert the keys $\{1,2,...,n\}$ into an empty B-tree with minimum degree 2. How many nodes does the final B-tree have? I think that, given that a node will have between $t-1$ and $2t-1$ keys, where $t$ is the minimum degree, the answer for this question will be somewhere between $n$ and $n/3$? This is where I'm stuck; any help would be appreciated. Answer: Every node contains between $\lceil m/2\rceil-1$ and $m-1$ keys (where $m$ is the degree), so we can say that every node has between $\lceil m/2\rceil$ and $m$ children. If we imagine constructing a B-tree with the minimum number of nodes, we'll have: $n = 1 + 2 + 2\lceil m/2\rceil + 2\lceil m/2\rceil^2 + ... + 2\lceil m/2\rceil^{h-2}$ where every addend is the number of nodes at each level, from the root (which can contain even a single key) down to the leaves, and where $h$ is the height of the B-tree. If you sum the series, you have $n=1+2 \frac{ \lceil m/2\rceil^{h-1}-1}{\lceil m/2\rceil-1}$. Instead, if you consider the maximum number of nodes, you will have: $n= 1+m+m^2+...+m^{h-1} = \frac{m^h-1}{m-1}$ where, in the same way, every addend is the number of nodes at each level. You can easily find the bounds by substituting the appropriate value of $m$ in the formulas (a minimum degree $t=2$ corresponds to $m=2t=4$ children per node).
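The two closed forms in the answer are easy to tabulate; a small sketch (using the answer's convention that $m$ is the maximum number of children and $h$ the height counted in levels):

```python
from math import ceil

def min_nodes(m, h):
    """Fewest nodes of height h: a root with 2 children, every other node
    minimally filled: 1 + 2*(t^(h-1) - 1)/(t - 1), where t = ceil(m/2)."""
    t = ceil(m / 2)
    return 1 + 2 * (t ** (h - 1) - 1) // (t - 1)

def max_nodes(m, h):
    """Most nodes of height h: every node has m children: (m^h - 1)/(m - 1)."""
    return (m ** h - 1) // (m - 1)

# Minimum degree t = 2 corresponds to m = 4 (a 2-3-4 tree).
for h in (1, 2, 3):
    print(h, min_nodes(4, h), max_nodes(4, h))
```

With each node holding between 1 and 3 keys for $t=2$, these node-count bounds are what give the "between $n/3$ and $n$ nodes" estimate from the question.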
{ "domain": "cs.stackexchange", "id": 10715, "tags": "data-structures, search-trees, balanced-search-trees, b-tree" }
What is the status of rdmanifests?
Question: The pylon_camera package installation procedure instructs users to add a custom source to their local rosdep (15-pylon_camera.list) containing two source::uri elements that point to a single rdmanifest file: pylon_sdk.rdmanifest. That rdmanifest then is used to make rosdep satisfy the libpylon(-dev) dependency (when used with rosdep install ..). I can still remember the time when the main rosdep database included snippets of bash (or even things coming close to full scripts), package-local yaml files with similar lines and rdmanifest files being distributed with packages. I also remember a major cleanup of rosdep to remove all that (because of running arbitrary snippets of shell script with super-user permissions, rosdep silently changing a system's configuration (by adding ppas or other apt repositories fi), and because it was just generally not maintainable nor scalable). Seeing a recently created package like pylon_camera use something like rdmanifests makes me wonder what the official status is of those particular pieces of the rosdep infrastructure, and whether current users / packagers / maintainers should be discouraged from relying on them? Afaik the buildfarm will also not build packages that rely on this (deprecated?) functionality, so perhaps that is deterrent enough already, but an authoritative answer would be nice to have. Originally posted by gvdhoorn on ROS Answers with karma: 86574 on 2017-02-21 Post score: 0 Original comments Comment by gvdhoorn on 2017-02-21: Note: I did not choose pylon_camera for any reason other than that it is a good example of a current use of rdmanifest files. Comment by gvdhoorn on 2017-02-21: #q251248 is an example of a question about rdmanifests that was posted as recent as 2017-01-04. Answer: rdmanifests are defined here: http://www.ros.org/reps/rep-0112.html They are a tool that works for developers who are installing from source on more exotic platforms. 
But it's an important tool for users not on our main supported platforms. They are not recommended for significant distribution and sharing. And cannot be used in a released package. We don't expect to put a lot of energy into adding features. But because there are use cases where it is valuable and there's no viable alternative, we have not chosen to deprecate it. My recommendation is that rdmanifests should only be used as a last resort if there are no other options. Originally posted by tfoote with karma: 58457 on 2017-02-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2017-02-22: Thanks. Personally I don't like shell scripts run with super user access installing all sorts of things circumventing the platform's pkg manager, but I can understand why in some cases there might be no other way to do deployment.
{ "domain": "robotics.stackexchange", "id": 27088, "tags": "ros, rosdep, dependencies" }
The effect of initial conditions on the Green function
Question: In the literature, for proving the existence of the Green function for linear systems, it is argued that, for a linear differential equation like $\mathcal{D}[y] = \sum_{n=0}^N {a_n y^{(n)}}$ $y(0)=y_0, \hspace{1cm} y'(0)=y_1, \hspace{1cm} ... \hspace{1cm}y^{(N-1)}(0)=y_{N-1}$ if we know the response to a delta input $\delta(t-t_0)$, we can construct the answer for an arbitrary time-dependent input. Here is the sketch of the proof: assuming that we know a function $g(t,t')$ that is the solution of the equation $\mathcal{D}[g] = \sum_{n=0}^N {a_n g^{(n)}} = \delta(t-t')$ and satisfies all the boundary conditions, then, for any arbitrary input, since $f(t) = \int{f(t')\delta(t-t')dt'}$ we can use the linearity of $\mathcal{D}$ and say $\mathcal{D}\big[\int{f(t')g(t,t')dt'}\big] = \mathcal{D}\big[y(t)\big] =\int{f(t')\delta(t-t')dt'} =f(t)$ Up to this point, it's all clear, but I don't see how the initial conditions will affect the answer $y(t)$. In other words, I think for a complete answer we must state the following expressions: $y(0) = \int_{0}^{\infty}{g(0,t')f(t')dt'}=y_0 $ $y'(0) = \int_{0}^{\infty}{g'(0,t')f(t')dt'}=y_1 $ $y''(0) = \int_{0}^{\infty}{g''(0,t')f(t')dt'}=y_2$ $...$ These additional constraints, which are forced by the initial conditions, are never discussed when the initial conditions are non-zero (if all of them are zero, these equations will be satisfied automatically). Now, I'm totally confused, because these constraints impose some sort of normalization on the functions $f$ and $g$, which is nonsense. I appreciate any help that explains the last expressions, their intuition, and the way people deal with them. Answer: Thanks to the hints and with a little bit of research, I found what I was looking for in Sadri Hassani's Mathematical Physics book. I'll try to explain step by step.
First of all, since the ODE is non-homogeneous and also has non-homogeneous initial values, the convention is (thanks to linearity) to decompose the solution into two functions $y_h$ and $y_i$, which are the homogeneous and non-homogeneous answers of the problem respectively. Regarding this decomposition, we also have to take care of the initial values. Here comes the second convention, which embraces the initial values: we request the initial values for $y_i$ to be zero (homogeneous) but non-zero (non-homogeneous) for $y_h$. So the original problem of $\mathcal{D}[y(t)] = f(t)\ ;\hspace{2cm} \ y^{(n)}(0) = y_{n} \hspace{1cm} \forall n=0, ...,N-1$ is transformed into finding a $y_h$ and $y_i$ such that $y=y_h+y_i$ and $\mathcal{D}[y_h(t)] = 0\ ;\hspace{3.3cm} \ y_h^{(n)}(0) = y_{n} \hspace{0.8cm} \forall n=0, ...,N-1$ $\mathcal{D}[y_i(t)] = f(t)\ ;\hspace{2.6cm} \ y_i^{(n)}(0) =0 \hspace{1cm} \forall n=0, ...,N-1$ The first problem is straightforward. One can solve the characteristic equation to obtain the powers of the exponential terms and solve a linear system to find the coefficients that make the general solution $y_h(t)$ fit the initial values. Note that $y_h(t)$ is a function of $t$ only. The second one can be solved thanks to the magic of the Green function, i.e. we can look for a $g(t,t')$ that satisfies $\mathcal{D}[g(t,t')] = \delta(t-t')\ ;\hspace{1cm} g^{(n)}(0,t') = 0 \hspace{0.5cm} \forall n=0, ...,N-1$ which in practice is much simpler than what I'd mentioned in the question, which would have to satisfy all initial conditions. Assuming we are able to find such a $g$, we can write $y_i$ as below: $y_i (t) = \int_{0}^{\infty}{f(t') g(t,t') dt'}$ and since now $g(0,t'), g'(0,t'), g''(0,t'), ...$ are all zero, $y_i$ and all its derivatives vanish at $t=0$, consistent with the equations above. PS: I was more into a general routine that, for any initial/boundary conditions, using the Green function technique, produces the general answer for an arbitrary input.
This recipe, although it works for this specific problem, is somewhat specially engineered, and I think it may not cover the most general case. The reason I don't find this recipe fully general is that the decomposition I did in the first place may not be useful for other cases. To show what I mean by a general case, I'd rather bring up the following problem. Assume the same problem but this time with boundary conditions instead of initial values: $\mathcal{D}[y(x)] = \sum_{n=0}^N {a_n y^{(n)}(x)} = f(x)$ $y(x_0)=y_0, \hspace{1cm} y(x_1)=y_1, \hspace{1cm} ... \hspace{1cm}y(x_{N-1})=y_{N-1}$ Are we able to construct the answer using similar machinery? What if the BCs contain derivatives (whether first order or higher)? PPS: I haven't thought about them yet, but I'd appreciate it if someone introduced me to a reference about them in the comments.
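As a numerical check of the split described above, take $\mathcal{D}[y] = y'' + y$ with zero initial conditions: the causal Green's function is $g(t,t') = \sin(t-t')$ for $t \ge t'$, so for a unit step drive $f(t)=1$ the particular part is $y_i(t) = \int_0^t \sin(t-t')\,dt' = 1-\cos t$. A sketch verifying this by direct discretized convolution:

```python
import numpy as np

# D[y] = y'' + y = f(t), zero initial conditions.  The causal Green's
# function of this operator is g(t, t') = sin(t - t') * step(t - t').
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
f = np.ones_like(t)                      # drive: f(t) = 1

g = np.sin(t)                            # g(t, 0); time-invariant -> convolution
y_i = np.convolve(f, g)[: len(t)] * dt   # y_i(t) = int_0^t g(t - t') f(t') dt'

y_exact = 1.0 - np.cos(t)                # closed form of the same integral
print(np.max(np.abs(y_i - y_exact)))     # small discretization error
```

Any non-zero initial values would then be carried entirely by the homogeneous part $y_h(t) = A\cos t + B\sin t$, exactly as in the decomposition above.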
{ "domain": "physics.stackexchange", "id": 49889, "tags": "differential-equations, greens-functions, linear-systems" }
Is usage of friend class appropriate in this case?
Question: I want to have a class that stores data from an input file, for use in a simulation program. It is like a struct but may require some small functions for accessors. A separate class reads all the data from a text input file. I want to store most of this data in the above mentioned object. Therefore this class needs to be able to set the data variables. My current approach is to give the loader class friend access, so that these variables can be set. No other classes should be able to set the data, therefore I am not using setters. Is this appropriate friend class usage? I have included a demonstration example of the situation, as simple as possible. In reality the data holder I have called "Pod" here holds 10 to 20 different strongly related variables.

#include <iostream>

class Pod { // holds the data that will be used later in simulation
    friend class Loader;
public:
    double get_angle() { return angle_ * 3 / 180; };
protected:
    double angle_;
};

class Loader { // loads data from text file.
public:
    bool CheckFile() {
        // checks file:
        return true;
    }
    Pod ReadData() {
        double a = 90; // this line simulates reading from file.
        Pod pod;
        pod.angle_ = a;
        return pod;
    }
};

void DoesThingsWithData(Pod data) {
    std::cout << data.get_angle() << std::endl;
}

int main() {
    Pod my_data;
    {
        Loader loader;
        if (loader.CheckFile()) {
            my_data = loader.ReadData();
        }
    }
    // do things with data
    DoesThingsWithData(my_data);
    return 0;
}

Answer: Personally, I don't have major objections to that design. It gets the job done in a fairly straightforward way. But it does create coupling, which might become a nuisance to you as the project grows. So another option with less coupling, as mentioned in the comments, would be defining a default parameterized constructor:

class Object {
public:
    Object(<several params>);
private:
    <the data>
};

class Loader {
public:
    Object LoadObject() {
        // many fields of data...
        int a = ...
        double b = ...
        std::vector<int> c = ...
        return Object(a, b, std::move(c), ...);
    }
};

In C++11, you also don't have to worry about unnecessary copies of complex objects such as a std::vector. You can make the constructor take a move ref, e.g. std::vector<int> && vec, and apply std::move at the call site.
{ "domain": "codereview.stackexchange", "id": 10384, "tags": "c++, object-oriented" }
Why is Earth's rotation considered fast?
Question: I'm currently reading Atom (Krauss's book), and in chapter 11 there is this text: This collision can explain many things, including the high rate of spin of the Earth (the grazing collision would have twisted it around like two football players who collide while running in opposite directions). This is in reference to a proto-planet hitting the early Earth, which led to the formation of the Moon. Why is Earth's spin considered high rate, given that Mars' spin is pretty similar without it having a big moon? Or have I misunderstood what is being referred to? Answer: The interactions of the Earth and its moon have slowed the Earth's rotation significantly over the billions of years since their formation. Wikipedia cites a paper that gives a figure of approximately five hours for the rotational period in the past. We don't know of a similar mechanism to slow down Mars. This implies that the Earth must have had a much faster rotation than Mars in the past.
{ "domain": "astronomy.stackexchange", "id": 3702, "tags": "earth, rotation, proto-planetary-disk" }
How do stars from far away affect Earth?
Question: I know that we obviously get light from stars (or we wouldn't be able to see them), but are there any other ways that they affect Earth, and maybe just our solar system in general? Answer: A lot (to put it mildly) of elements are created in stars and supernovae. These elements then travel through space until they fall to Earth (or, to be exact, some microscopic portion of them reaches us). Earth itself wouldn't exist if stars hadn't generated elements which then clumped into dust, into minerals, and so on until a big ball of matter started to orbit the Sun. Here's a short quote from the Wikipedia article on Cosmic ray: Data from the Fermi space telescope (2013) has been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernovae of massive stars. However, this is not thought to be their only source. Active galactic nuclei probably also produce cosmic rays. So I'll stand by my claim that stars are giving us mass (i.e. non-photons) as well as photons in real time, not just as 5-billion-year-old space dust.
{ "domain": "physics.stackexchange", "id": 28242, "tags": "gravity, earth, atmospheric-science, stars, solar-system" }
Rolling down a slope
Question: I am having some trouble understanding rotational dynamics. If we have a cylinder that we give an initial velocity and rotation such that it satisfies the no-slip condition ($ v_{cm} = \omega R$), and let it roll down a slope where its weight component down the slope equals the static friction up the slope, then the acceleration is equal to 0 and it will have a constant velocity $v_{cm}$. But because the friction provides a torque, there is an angular acceleration, so surely the cylinder is actually accelerating, since $a = \alpha r$, even though the forces of friction and the weight component down the slope are equal? I know that when there is a constant velocity there is no friction, but considering $m g \sin \theta - F_{static} = m a$, wouldn't that make $m g \sin \theta = 0$? Answer: The issue is that for a no-slip condition, the static friction cannot be equal to the component of the weight parallel to the incline. Setting up Newton's second law (for linear and rotational motion), we have$^*$ $$mg\sin\theta-f=ma$$ $$fR=I\alpha$$ Imposing the no-slip condition $a=R\alpha$, we can determine that $$f=\frac{mg\sin\theta}{1+mR^2/I}$$ So, for the magnitude of the static friction force $f$ to be equal to the component of weight down the incline $mg\sin\theta$, it must be that $mR^2/I=0$. This could be obtained when $I\to\infty$, so that we have an object that essentially cannot be rotated, and in this case the object would in fact remain at rest, since we are imposing a no-slip condition on an object that cannot rotate, and hence cannot translate either. $^*$ Sign conventions have been chosen so that the linear acceleration and angular acceleration always have the same sign.
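Plugging the answer's result back into the two equations of motion confirms they stay consistent with the no-slip condition; for a uniform solid cylinder ($I = mR^2/2$) the friction works out to one third of the gravity component, never equal to it. A quick numerical check with made-up values:

```python
import math

m, R, g, theta = 2.0, 0.1, 9.81, math.radians(30)
I = 0.5 * m * R**2                      # uniform solid cylinder

f = m * g * math.sin(theta) / (1 + m * R**2 / I)   # friction from the answer
a = (m * g * math.sin(theta) - f) / m              # linear equation of motion
alpha = f * R / I                                  # rotational equation of motion

print(a, R * alpha)                      # equal -> rolling without slipping
print(f / (m * g * math.sin(theta)))     # 1/3 for a cylinder: f < m g sin(theta)
```

For the cylinder this reproduces the textbook results $f = \frac{1}{3} m g \sin\theta$ and $a = \frac{2}{3} g \sin\theta$.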
{ "domain": "physics.stackexchange", "id": 74966, "tags": "newtonian-mechanics, friction" }
Why is energy released (most of the time) when electrons are added to atoms?
Question: [Though the question may initially seem to belong on Chemistry Stack Exchange, I believe I would get a better physical explanation over here.] I was recently reading about the electron affinity of elements and found that most of them release energy when an electron is added to them, which made me question why this is so. So I searched and found an answer on Chemistry SE. But the answer states: When a system goes from a higher energy state to a lower energy state, it loses energy. But considering the electron and nucleus as a system, shouldn't the energy remain constant if the net external force acting on the system is zero? If I'm going wrong somewhere, can you explain why and where? Also, please explain, if you know, what is really happening and why the energy is released. Answer: When an electron is bound by the electric charge of a nucleus, it doesn't have enough energy to "run away" to infinity. This means you actually need to give energy to the electron to tear it away from the atom, creating an ion (that is the meaning of ionization energy). Looking at it the other way around, if you have a free electron and you want it to bind to an ion to form an atom, you need to take energy from it. Of course, the total energy of the system should be conserved, so some of that energy is released as kinetic energy of the whole atom (meaning some energy goes to the nucleus), but some energy also goes to light. This is quite a general thing to consider: when a charged particle loses energy, because it is always coupled to the electromagnetic field, it can always transfer energy to the field itself (of course, only if the conservation laws hold). This effect is actually what makes fluorescent light bulbs, mercury lamps, sodium lamps, etc. work.
You create an electric voltage so high that it ionizes the atoms of gas inside the lamp (the electric energy gives the electrons enough energy to "run away" from the atom - sometimes not enough to really ionize it, but enough for the electron to go to a higher energy state of the atom). Then, electrons recombine with the ions, giving away some of that energy as light with specific wavelengths, depending on the energy states they began and ended in. I hope this explanation helped; you are welcome to ask more questions about it.
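As a rough numerical illustration of that last step, a transition energy can be converted into the wavelength of the emitted photon via $\lambda = hc/E$ (a sketch; the 2.1 eV transition energy is an arbitrary example value, not taken from any particular lamp):

```python
# Convert a transition energy to the wavelength of the emitted photon
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

E_transition = 2.1 * eV            # example energy released on recombination
wavelength = h * c / E_transition  # lambda = h*c/E

wavelength_nm = wavelength * 1e9
print(wavelength_nm)  # roughly 590 nm, i.e. visible light
```

For reference, visible light spans roughly 380-750 nm, so eV-scale recombination energies naturally land in or near the visible band.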
{ "domain": "physics.stackexchange", "id": 90599, "tags": "thermodynamics, energy, energy-conservation, potential-energy, atoms" }
Can not open links
Question: http://ias.in.tum.de/kb/wiki/index.php/Coordinate_systems all links like http://ias.in.tum.de/kb/wiki/ never open for me! What is the problem? Thanks Originally posted by RiskTeam on ROS Answers with karma: 238 on 2012-10-07 Post score: 0 Original comments Comment by moritz on 2012-10-07: Please tag questions with the name of the package or stack so that the maintainers can get email notifications Comment by Lorenz on 2012-10-07: It should be fixed now. Answer: Indeed the wiki is down. You should contact people at IAS, TUM to see if they can fix the problem. I am not sure whether they check ROS Answers regularly, so contacting them directly is probably the better option. Originally posted by Thomas with karma: 4478 on 2012-10-07 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 11267, "tags": "knowrob" }
Singly-Linked List In-Place Reversal
Question: I've created a function in C to reverse a singly-linked list in place: typedef struct n { int key; struct n* next; } node; node* inplaceReverse(node* head, int index) { if(!head || !head->next) { /* Indication of last node (or empty list). This will be recursively returned to become the new head pointer of the reversed list. */ return head; } else { /* Reverse the list after head and store its return in newhead. */ node* newhead = inplaceReverse(head->next,index+1); /* Make the next node's next point to head. */ head->next->next = head; /* If head was the first node, make its next NULL. */ if(!index) head->next = NULL; return newhead; } } It works, but one thing that bothered me is that I needed to create an additional argument for the function (int index). This is to indicate the original head of the list so I know which one to set to NULL after the reverse. It feels like there's an obvious and better way to do this but I'm not sure. Other suggestions for optimization and improvements are also welcome. Answer: My primary concern with this implementation is the fact that it is recursive and thus the amount of stack that you are consuming is directly proportional to the length of the linked list. If the linked list is something like 10 nodes long, you are likely going to be fine. However, if it is 1,000,000 nodes long then you can expect it to blow the stack. I would recommend rewriting the algorithm to not use recursion. This can be done in the following manner. node = list->head; if (node is NULL) then there is nothing to do for (next = node->next; next != NULL; next = node->next) { remove 'next' from the list /* * the act of removing 'next' from the list will automatically * update node->next. Note that 'node' never changes. */ add 'next' to the head of the list } Hope this helps.
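The pseudocode above can be made concrete with an iterative routine (sketched here in Python for brevity; the `Node` class stands in for the C `node` struct, and the same three-pointer logic ports directly back to C):

```python
class Node:
    def __init__(self, key, next=None):
        self.key = key
        self.next = next

def reverse(head):
    """Iteratively reverse a singly-linked list: O(n) time, O(1) extra space."""
    prev = None
    while head is not None:
        nxt = head.next      # save the rest of the list
        head.next = prev     # point the current node backwards
        prev, head = head, nxt
    return prev              # prev is the new head

# Build 1 -> 2 -> 3, reverse it, and read the keys back
head = Node(1, Node(2, Node(3)))
rev = reverse(head)
keys = []
while rev is not None:
    keys.append(rev.key)
    rev = rev.next
print(keys)  # [3, 2, 1]
```

Note that no index argument is needed: the old head's `next` naturally becomes `None` because `prev` starts out as the null pointer.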
{ "domain": "codereview.stackexchange", "id": 12700, "tags": "c, linked-list" }
Structural determination of carbohydrates (sucrose)
Question: I recently came across a question in which we have to find out the number of moles of $\ce{HIO4}$ (periodic acid) required for cleavage of the $\ce{C-C}$ bonds in sucrose. I went to https://en.m.wikibooks.org/wiki/Structural_Biochemistry/Carbohydrates/Carbohydrate_Structure_Elucidation_through_Periodic_Acid_Cleavage . But this article is rather incomplete and does not answer the question. In general, what are the things to keep in mind in order to find out the number of moles of periodic acid required for the structural determination of carbohydrates? Any ideas? Thanks. Answer: First of all, let's be familiar with what per-iodic acid does to organic compounds. It selects the carbon-carbon bonds where each carbon has either an -OH group, a =O (carbonyl) group, or is the functional group carbon of a carboxylic acid. Both carbons of the C-C bond must satisfy this condition. Here are some examples of the selection points of per-iodic acid: Now that per-iodic acid has chosen the center, it cleaves the C-C bond and adds an -OH at each carbon. For every C-C bond, one per-iodic acid molecule is consumed. I am avoiding going through the actual mechanism to keep things simple. If any of the carbons has two -OH groups, they will spontaneously dehydrate to give a =O bond instead. For the examples shown above, the final products of the cleavage-oxidation would be: Now, coming to your main question, the structure of sucrose would be: From what we have seen, let's mark the C-C bonds that per-iodic acid can cleave: Since we have 3 centers, 3 molecules of per-iodic acid are consumed per molecule of sucrose. Now you can even proceed and find out the oxidation products. I hope this answer helps. Happy carbohydrate breaking!
{ "domain": "chemistry.stackexchange", "id": 7990, "tags": "organic-chemistry, carbohydrates" }
Noise reduction approaches in optical spectral measurement
Question: I am using an optical spectrometer to measure some surfaces in the visible, and since the signal is quite noisy I am wondering what would be the best way to reduce the noise. In particular, are there some computational solutions to characterize the noise and then filter it, as is done in signal processing (e.g. something like optimum filtering)? Answer: The most common noise filter used in spectroscopy is the Savitzky-Golay algorithm. It requires equally spaced data on the x-axis. SG is basically a least squares polynomial fit to the data, with the center point being replaced by the calculated result. The filter can be generated by partially solving the least squares matrix equation, with the final computation being a convolution of the smoothing function with the data. See the original paper by SG. Be aware there are a couple of mistakes in it. To understand the fine details of SG smoothing see Willson and Polo, J. Opt. Soc. Am., V71 (1981), p599. SG filters can also generate derivative spectra.
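In practice, SG smoothing is readily available as `scipy.signal.savgol_filter` (a sketch on synthetic data; the window length and polynomial order are illustrative choices that should be tuned to the width of your spectral features):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Synthetic "spectrum": a Gaussian peak plus measurement noise,
# on an equally spaced x-axis (a requirement of Savitzky-Golay)
x = np.linspace(400, 700, 301)          # wavelengths in nm
clean = np.exp(-((x - 550) / 20) ** 2)  # idealized peak
noisy = clean + rng.normal(0, 0.05, x.size)

# window_length must be odd and greater than polyorder
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

# The smoothed trace should sit closer to the clean signal than the raw one
rms_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rms_smoothed = np.sqrt(np.mean((smoothed - clean) ** 2))
print(rms_noisy, rms_smoothed)
```

A wider window smooths more aggressively but starts to flatten narrow peaks, so keep the window shorter than the narrowest feature you care about.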
{ "domain": "physics.stackexchange", "id": 55594, "tags": "spectroscopy, experimental-technique, signal-processing, noise" }
Applying a force on a rigid body on a certain point
Question: I have a rigid body with an origin point (at its center of mass). I want to apply a force at a certain point. So what is the force applied to the origin of this rigid body? Description image: Note: I don't want the moment (torque). More explanation: Assume $p$ is a point in the space of a rigid body. So if we had a force $\vec F_{p}$ at the point $p$, how can we get $\vec F_{center}$? Answer: Any force applied to any part of a rigid object acts on the object as a whole. So if you were to place a directional force meter at the center of the object (or at any other point on the object), it would register the force $\vec{F}_p$.
{ "domain": "physics.stackexchange", "id": 1837, "tags": "forces, rigid-body-dynamics" }
Functions taking iterables: peaks, compress, and skipper
Question: I am just testing out a few small functions for learning purposes: def peaks(iterable): # returns a list of int for those values in the iterable that are bigger # than the value preceding and following them. itr = iter(iterable) peak = [] curr = next(itr) last = next(itr) try: while True: first = curr curr = last last = next(itr) if curr > first and curr > last: peak.append(curr) except: pass return peak def compress(v_iterable,b_iterable): #takes two iterables as parameters: it produces every value from the first iterable that # has its equivalent position in the second iterable representing what Python would consider # a True value. Terminate the iteration when either iterable terminates mega_it = dict(zip(v_iterable, b_iterable)) for nxt in sorted(mega_it): if mega_it[nxt]: yield nxt def skipper(iterable,n=0): #takes one iterable and one int (whose default value is 0) as parameters: it produces values from # the iterable, starting at the first one, skipping the number of values specified by the int #(skipping 0 means producing every value from the iterable) itr = iter(iterable) yield next(itr) while True: for i in range(n): skipped = next(itr) yield next(itr) I feel like my code is lengthy for the work it does. Is there a way to make my functions cleaner or smaller? Answer: Your comments are good, but they should all be written as docstrings instead. When working with iterators, itertools and generator expressions are your friend. peaks() works with three iterators, so zip() is useful. Since the function accepts an iterable, it seems fitting that it should be a generator rather than returning a list. Python supports double-ended inequalities.
from itertools import tee def peaks(iterable): """Generates values in the iterable that are greater than the value preceding and following them.""" before, this, after = tee(iter(iterable), 3) next(this) next(after) next(after) for prev, curr, succ in zip(before, this, after): if prev < curr > succ: yield curr If you wanted to keep the old interface, which returns a list, then a list comprehension would be a good way to write it. def peaks(iterable): """Returns a list of values in the iterable that are greater than the preceding and following value.""" xs, ys, zs = tee(iter(iterable), 3) next(ys) next(zs) next(zs) return [y for (x, y, z) in zip(xs, ys, zs) if x < y > z] I find your compress() weird, in that it sorts the results. You can implement the function using just a generator expression. def compress(values, incl_excl): """Keeping only values whose corresponding entry in incl_excl is true, generates the sorted filtered list.""" yield from sorted(val for val, include in zip(values, incl_excl) if include) In skipper(), the two occurrences of yield next(itr) should be combined. (Note that on Python 3.7+, PEP 479 turns a StopIteration escaping a generator into a RuntimeError, so the exhaustion has to be caught explicitly.) def skipper(iterable, n=0): """Generator that yields the first element, then skips n values between results, until the input is exhausted.""" iterable = iter(iterable) try: while True: yield next(iterable) for _ in range(n): next(iterable) except StopIteration: return
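A quick sanity check of the refactored generator against the original's intent (redefining `peaks` here so the snippet is self-contained):

```python
from itertools import tee

def peaks(iterable):
    """Generates values greater than both their neighbours."""
    before, this, after = tee(iter(iterable), 3)
    next(this)           # 'this' leads 'before' by one element
    next(after)
    next(after)          # 'after' leads 'before' by two elements
    for prev, curr, succ in zip(before, this, after):
        if prev < curr > succ:
            yield curr

# 3 and 5 are the only values strictly greater than both neighbours
result = list(peaks([1, 3, 2, 5, 4, 4, 6]))
print(result)  # [3, 5]
```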
{ "domain": "codereview.stackexchange", "id": 17091, "tags": "python, python-3.x, iterator" }
Thomson Exercise 2.12: Why is $p_i^2 = m_i^2$?
Question: I'm just working on some problems from Thomson's Modern Particle Physics. I'm having a hard time understanding the last step in this calculation: The relativistic energy-momentum relation is $E^2 = m^2 + p^2$, if I'm not mistaken. But why is it $m^2 = p^2$ here? Answer: You're confusing the Minkowski square of a four-vector with the Euclidean square of a 3-vector, viz. $p^2_\text{them}:=E^2-p^2_\text{you}$, i.e. $p^2_\text{four}:=E^2-p^2_\text{three}$.
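The distinction is easy to check with numbers (a sketch in units where $c=1$; the mass and three-momentum are arbitrary example values):

```python
import math

# A particle of mass m with some three-momentum p_vec (units with c = 1)
m = 0.938                # e.g. roughly the proton mass in GeV
p_vec = (0.3, 0.4, 1.2)  # arbitrary three-momentum in GeV

p3_squared = sum(p**2 for p in p_vec)  # Euclidean square of the 3-vector
E = math.sqrt(m**2 + p3_squared)       # energy from E^2 = m^2 + p^2

# Minkowski square of the four-vector (E, p_vec): p^2 = E^2 - |p_vec|^2
p4_squared = E**2 - p3_squared

print(p4_squared, m**2)  # the four-momentum squared equals the invariant mass squared
```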
{ "domain": "physics.stackexchange", "id": 56617, "tags": "homework-and-exercises, special-relativity, particle-physics, kinematics" }
What happens to the current during reverse bias?
Question: The main function of a half-wave rectifier is to convert AC to DC. My problem is understanding the negative half of the cycle. The above picture is time versus current. We can easily see that a discontinuous current passes through the diode, as we get no current during reverse bias. Therefore, if we connect any electronic device, will it work properly with this discontinuous current? Answer: For devices like battery chargers, motors and lamps the fluctuating voltage (ripple voltage) does not really matter, but for a device like an audio amplifier the ripple voltage may well manifest itself as a mains hum.
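If a smoothing (reservoir) capacitor is added across the load, the size of the remaining ripple can be estimated from how far the capacitor discharges between peaks (a sketch; the load current, mains frequency, and capacitance are arbitrary example values):

```python
# Rough ripple estimate for a half-wave rectifier with a smoothing capacitor:
# the capacitor feeds the load for roughly one full mains period T = 1/f,
# so delta_V ~ I * T / C = I / (f * C)
I_load = 0.1   # load current in amperes (example value)
f = 50         # mains frequency in Hz
C = 1000e-6    # smoothing capacitance in farads

ripple = I_load / (f * C)
print(ripple)  # about 2 V of ripple for these numbers
```

A full-wave rectifier halves this estimate (the capacitor is topped up every half-period), which is one reason it is usually preferred.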
{ "domain": "physics.stackexchange", "id": 31717, "tags": "electric-circuits" }
RPGC normalisation creates artefacts at centromere
Question: I am studying ChIP-Seq data in HeLa cells and I've started using the RPGC normalisation of deepTool's bamCoverage. MACS2 also uses this normalisation for its peak calling. I am seeing a large number of peaks (like, over 50% of my peaks) at the centromere of each chromosome, and the RPGC normalised BW files also show this increased signal at the centromere. Does this mean that my protein really is at the centromere of each chromosome, or is it some normalisation bias I'm not aware of? Here's the general aspect of my peak calling (in green). Notice the large increase at the centromere of each chromosome. And here's a zoom on the centromere of chromosome 6. In blue at the top is raw BW, then there's a lane for input normalised BW but not RPGC, then green is the peak calling by MACS2, and black is RPGC normalised BW. Answer: UPDATE Here is some other protein ChIPs that I ran through the same normalisation: As you can see, they don't seem to be enriched at the centromere. So unless anyone brings other contributions, I will accept this as answer. RPGC normalisation doesn't appear to create artefacts at centromere. The peaks observed here are as real as can be.
{ "domain": "bioinformatics.stackexchange", "id": 1129, "tags": "normalization, chip-seq, deeptools, centromere" }
Buoyancy forces on a tube exposes to Atmosphere at both ends that passes through a container filled with liquid
Question: I'm trying to get an understanding of a free body diagram for the following situation. A container, such as Styrofoam cup has a straw pushed through the bottom of it and the cup is filled with water. The hole in the cup is very slightly larger than the straw so has no frictional forces and does not restrain the straw. The straw has an o-ring to seal the straw hole interface. What are the forces on the straw? My thoughts are that the straw is similar to the container wall, so sees side loading forces and no upward force. But then I think if a straw had a sealed bottom and was pushed to the bottom of the container in the same orientation, it would have buoyancy and would try to float up. Can someone explain? Thanks Answer: Both of your assumptions are correct, if I'm understanding your setup. For buoyancy to apply, the straw needs to have some sort of "bottom" in contact with the fluid. Even a regular straw at the bottom of a cup typically has buoyancy. Buoyancy occurs due to hydrostatic pressure. The pressure is higher the lower you are, therefore as long as the horizontal parts of the object are in contact with the fluid, the force on the lower horizontal surface will be higher, so net force acts upwards. In the situation where the straw goes through the cup, that analysis no longer applies. All that pushes the straw up is the air on the bottom, which will have essentially the same force as the air at the top of the straw (since air is not heavy).
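The pressure argument can be checked numerically for a sealed vertical cylinder fully submerged in water (a sketch; the dimensions are arbitrary example values):

```python
import math

rho, g = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)
R, h = 0.005, 0.10     # cylinder radius and length (m)
depth_top = 0.02       # depth of the top face below the surface (m)

A = math.pi * R**2
p_top = rho * g * depth_top           # hydrostatic pressure on the top face
p_bottom = rho * g * (depth_top + h)  # higher pressure on the bottom face

# Net upward force from the pressure difference...
F_net = (p_bottom - p_top) * A
# ...equals Archimedes' rho * g * V for the displaced volume
F_archimedes = rho * g * (A * h)

print(F_net, F_archimedes)
```

The pressure-difference force and the Archimedes value $\rho g V$ agree, which is exactly why the through-the-bottom straw, whose lower end sees air rather than water, gets no such upward force.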
{ "domain": "physics.stackexchange", "id": 91924, "tags": "free-body-diagram, fluid-statics, buoyancy" }
Text embeddings and data splitting
Question: I have created some document embeddings which were then used in text classification tasks. After revisiting my code I was unsure about the workflow I used to train the document embeddings. At the moment I am creating the document embeddings based on the complete corpus available at the time of training. After the training is done, I evaluate the model by looking at whether it creates useful similarities between the document embeddings. Those embeddings are then used in machine learning models, and that's where the embeddings get split into train, test and validation sets. Now my question is: When is the right time to split the data? Should I do it before creating the document embeddings to prevent data leakage? I have used the mentioned approach because I viewed the creation of the document embeddings as a preprocessing step, so the computer can work with textual data. However, after putting some thought into it, I think it's the wrong approach. I wanted to hear from more experienced NLP practitioners how they approach this task. Sorry for this very basic question. Thanks. Answer: TL;DR If you are training the document-embedding model, then split the data before you convert the text into embeddings. If you are using a pre-trained document-embedding model, then it doesn't matter: it is a preprocessing step that can be executed at any point. Pipeline when training your own document-embedding model: Split your text data into train/validate/test sets. Use your train set to train the document-embedding model. Use your trained document-embedding model to convert the train and validation sets, and train your other model (e.g. a classification model) on them. Test your final model by using your trained document-embedding model to convert the test set and evaluating the trained final (classification) model on it.
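The "split first" pipeline can be sketched with scikit-learn, using a TF-IDF vectorizer as a stand-in for a trainable document-embedding model (the tiny corpus and labels are made up purely for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy corpus and labels (made up for illustration)
texts = ["good movie", "great film", "bad movie", "terrible film",
         "nice plot", "awful plot", "great acting", "bad acting"]
labels = [1, 1, 0, 0, 1, 0, 1, 0]

# 1. Split the raw text BEFORE fitting any embedding model
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0)

# 2. Fit the "embedding" model on the training text only
vectorizer = TfidfVectorizer()
E_train = vectorizer.fit_transform(X_train)

# 3. The held-out text is only ever transformed, never fitted on
E_test = vectorizer.transform(X_test)

clf = LogisticRegression().fit(E_train, y_train)
print(clf.score(E_test, y_test))
```

The key point is that `fit_transform` only ever sees training text; the test set is transformed with statistics learned from the training split, so no information leaks.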
{ "domain": "datascience.stackexchange", "id": 6527, "tags": "nlp, dataset" }
Friction on a Spinning Platform
Question: I do not understand at all why, if an object is sitting on a spinning platform, the friction force is towards the center. I understand the need for a centripetal force during circular motion, but friction is only in opposition to a force being applied to an object / system, why does it act as centripetal? I understand for a car, for example, the wheels providing the forces needed, but, for example, a penny sitting on a rotating disc, why would the friction be towards the center, would it not be in opposition to the impending motion of the tangential velocity? I have seen a lot of explanations on why there would obviously be a centripetal force, the need for one because there is a change in velocity, etc, but none of these explain why friction acts as this force, or how. Answer: Static friction provides whatever force is necessary to stop the object from moving relative to the surface it sits on (up to a limit given by $\mu_s N$ where $\mu_s$ is the coefficient of static friction and $N$ is the normal force). In this case the coin (or whatever) sits on a surface which is accelerating inward - as described by the centripetal acceleration formula. To stop the coin from moving relative to that surface, static friction provides an inward force.
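The $\mu_s N$ limit also tells you how far from the centre an object can sit before it slips (a sketch with arbitrary example values):

```python
import math

mu_s = 0.4                 # coefficient of static friction (example value)
g = 9.81                   # m/s^2
omega = 2 * math.pi * 0.5  # platform spinning at 0.5 revolutions per second

# Staying put requires m * omega^2 * r <= mu_s * m * g (the mass cancels),
# so the largest radius at which the coin can stay on the platform is:
r_max = mu_s * g / omega**2

print(r_max)  # ~0.40 m for these numbers
```

Inside that radius, static friction can supply the required centripetal force; outside it, the coin slides tangentially off.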
{ "domain": "physics.stackexchange", "id": 95513, "tags": "newtonian-mechanics, rotational-dynamics, friction, centripetal-force, angular-velocity" }
How to refactor this importer to handle validation errors?
Question: This is my importer from an Excel file into the database. I would like to handle situations where an error is raised, so that one bad row does not break the whole import. For example, when there is a duplicated record and I have uniqueness validations, I would like to store this row id in an errors table and report it at the end. class DDImporter attr_accessor :file def initialize(path) @path = path end def open @file = Roo::Excelx.new(@path) end def sheets @file.sheets end def extract_sheets sheets.each do |sheet| unless ["risks", "allergies_symptoms"].include?(sheet) extract sheet end end end def extract sheet_name @file.default_sheet = sheet_name header = file.row 1 2.upto(file.last_row) do |i| row = Hash[[header, file.row(i)].transpose] # row.delete "id" object = sheet_name.classify.constantize object.create!(row) end end end Answer: It can be as simple as: def initialize(path) @path = path @imported = [] @rejected = [] end def extract sheet_name @file.default_sheet = sheet_name header = file.row 1 2.upto(file.last_row) do |i| row = Hash[[header, file.row(i)].transpose] # row.delete "id" model = sheet_name.classify.constantize # this is the part that gets the job done. object = model.new(row) if object.save @imported << object else @rejected << object end end end ... this way you can iterate through @imported or @rejected at the end of the import process (to display errors, or perform additional tasks). This code can also easily be adapted to only catch exceptions; rescuing inside the loop lets the import continue past a bad row: def extract sheet_name @file.default_sheet = sheet_name header = file.row 1 2.upto(file.last_row) do |i| row = Hash[[header, file.row(i)].transpose] # row.delete "id" model = sheet_name.classify.constantize begin model.create!(row) rescue ActiveRecord::RecordNotUnique, ActiveRecord::RecordInvalid => error # store the failing row (and its error) so processing can continue @rejected << { row: row, error: error } end end end
{ "domain": "codereview.stackexchange", "id": 4867, "tags": "ruby, ruby-on-rails, exception-handling, data-importer" }
What is the trace in the Chern-Simons action
Question: I have been looking at the Chern-Simons Lagrangian in $(2+1)$-dimensional spacetime $M$ in terms of a gauge field $A$: $$ S[A] = \frac{k}{4 \pi}\int_M \text{Tr}(A \wedge \text{d}A+ \frac{2}{3}A \wedge A \wedge A). $$ The Chern-Simons theory allows $A$ to be a "Lie algebra valued $n$-form". According to Wikipedia, a Lie algebra $n$-form is defined as an object $A \in (\mathfrak{g} \times M) \otimes \wedge^k T^*(M) $, where $\mathfrak{g}$ is a Lie algebra and $T^*(M)$ is the cotangent bundle. I believe this means that I can take my gauge field $A$ and decompose it as $$ A = \tilde{A} \otimes \omega = A_{\mu}^a T_a \otimes \text{d}x^\mu$$ where $\tilde{A}$ is a Lie algebra valued field on spacetime, $\omega$ is an n-form and $\{ T_a \}$ and $\{ \text{d}x^\mu\}$ are the bases for $\mathfrak{g}$ and $T^*(M)$ respectively. From studying Yang-Mills theory, I am aware that I can produce objects which are scalar with respect to gauge transformations by using the Killing form $$ K(X,Y) = \text{Tr}(\text{ad}X \text{ad}Y)$$ where ad is the adjoint representation of the Lie algebra, however the trace notation "Tr" in the Chern-Simons action has always bothered me - is it really a trace in the usual matrix sense? Some literature seems to suggest that the "trace" is in fact an invariant bilinear form on our Lie algebra, i.e. a Killing form. It is at this point that I get confused. My question If Tr is to be interpreted as an inner product, what does $ \text{Tr}(A \wedge \text{d}A)$ and $ \text{Tr}(A \wedge A \wedge A)$ mean? An inner product should take two arguments so what are the arguments in each case and how do I explicitly evaluate this? 
Answer: Generally speaking, for a compact connected Lie group $G$ of the form, $$G = U(1) \times \dots \times U(1) \times G_1\times\dots\times G_s$$ where $G_i$ are compact, simple Lie groups, $\langle \cdot,\cdot\rangle_{\mathfrak g}$ is an $\mathrm{Ad}_G$-invariant, positive definite scalar product on $\mathfrak g$ which can be constructed as the direct sum of: a positive definite scalar product on $\mathfrak{u}(1)\times\dots\times \mathfrak{u}(1)$; $\mathrm{Ad}_{G_i}$-invariant positive definite scalar products on $\mathfrak g_i$, which may always be constructed by, for example, the negative of the Killing form. Now suppose we have a Lie algebra-valued one form, $A$, by the notation $\langle A, A\rangle$ we mean taking the wedge product and the inner product, above defined. Now the notation $\mathrm{Tr}$ is misleading because to construct the Lagrangian, we take wedge products, and then evaluate an inner product, which happens to involve a trace, but I find it more proper to note we are taking an inner product. Thus by, $$\mathrm{Tr}(A\wedge A \wedge A)$$ we mean $\langle [A,A],A\rangle$ (with wedge products implied), up to some constant depending on the normalization chosen for the structure constants. Two useful references are: Mathematical Gauge Theory by Mark J.D. Hamilton Differential Geometry, Gauge Theory and Gravity by M. Göckeler and T. Schücker
{ "domain": "physics.stackexchange", "id": 57281, "tags": "lie-algebra, action, chern-simons-theory, trace" }
What can we learn from 'quantum bogosort'?
Question: Recently, I've read about 'quantum bogosort' on some wiki. The basic idea is, that like bogosort, we just shuffle our array and hope it gets sorted 'by accident' and retry on failure. The difference is that now, we have 'magic quantum', so we can simply try all permutations at once in 'parallel universes' and 'destroy all bad universes' where the sort is bad. Now, obviously, this doesn't work. Quantum is physics, not magic. The main problems are 'Parallel universes' is merely an interpretation of quantum effects, not something that quantum computing exploits. I mean, we could use hard numbers here, interpretation will only confuse matters here, I think. 'Destroying all bad universes' is a bit like qubit error correction, a very hard problem in quantum computing. Bogo sort remains stupid. If we can speed-up sorting via quantum, why not base it on a good sorting algorithm? (But we need randomness, my neighbour protests! Yes, but can't you think of a better classical algorithm that relies on randomness?) While this algorithm is mostly a joke, it could be an 'educational joke', like the 'classical' bogosort as the difference between best case, worst case and average case complexity for randomized algorithms is easy and very clear here. (for the record, best case is $\Theta(n)$, we are very lucky but still must check that our answer is correct by scanning the array, expected time is simply awful (IIRC, proportional to the number of permutations, so $O(n!)$) and worst case is we never finish) So, what can we learn from 'quantum bogosort'? In particular, are there real quantum algorithms that are similar or is this a theoretical or practical impossibility? Furthermore, has there been research into 'quantum sorting algorithms'? If not, why? 
Answer: DISCLAIMER: The quantum-bogosort is a joke-algorithm. Let me just state the algorithm in brief: Step 1: Using a quantum randomization algorithm, randomize the list/array, such that there is no way of knowing what order the list is in until it is observed. This will divide the universe into $O(N!)$ universes; however, the division has no cost, as it happens constantly anyway. Step 2: Check if the list is sorted. If not, destroy the universe (neglecting the actual physical possibility). Now, all remaining universes contain lists/arrays which are sorted. Worst Case Complexity: $O(N)$ (we only consider those universes which can observe that the list is sorted) Average/Best Case Complexity: $O(1)$ One of the major problems with this algorithm is the huge magnification of errors, as Nick Johnson mentions here: This algorithm has a much bigger problem, however. Assume that one in 10 billion times you will mistakenly conclude a list is sorted when it's not. There are 20! ways to sort a 20 element list. After the sort, the remaining universes will be the one in which the list was sorted correctly, and the 2.4 million universes in which the algorithm mistakenly concluded the list was sorted correctly. So what you have here is an algorithm for massively magnifying the error rate of a piece of machinery. 'Parallel universes' is a highly simplified interpretation of quantum effects, not something that Quantum Computing exploits. Not really sure what you mean by "highly simplified interpretation of quantum effects". The sources (this and this) I found on the internet regarding the quantum bogosort do not explicitly mention that they're using the alternative interpretation of QM, i.e. Everett's interpretation, which you might be thinking about. In fact I'm not even sure how to glue together Everett's interpretation and quantum-bogosort (using post-selection, as some people commented).
Anyhow, just as a note: in mainstream cosmology, it is widely believed that more than one universe exists and there are even classifications for them, called Max Tegmark's four levels, Brian Greene's nine types, and cyclic theories. Read the Wiki article on Multiverse for more details. 'Destroying all bad universes' is a bit like qubit error correction, a very hard problem in Quantum Computing. Sure, it is in fact much harder, and we don't expect to destroy universes literally. The quantum bogosort is just a theoretical concept, with no practical applications (that I know of). Bogo sort remains stupid. If we can speed-up sorting via quantum, why not base it on a good sorting algorithm? (But we need randomness, my neighbour protests! Yes, but can't you think of a better classical algorithm that relies on randomness?) Yes, it does remain stupid. It does seem to have started out as an "educational joke" as you said. I did try to find the origin of this sort, or relevant academic papers, but couldn't find any. However, even the classical bogosort is stupid in the sense that it is widely held as one of the most inefficient sorting algorithms. Still, it has been researched, purely out of educational interest. In particular, are there real quantum algorithms that are similar or is this a theoretical or practical impossibility? None that I know of. Such algorithms are indeed theoretical possibilities, but definitely not practical (at least, not yet). Furthermore, has there been research into 'quantum sorting algorithms'? If not, why? There indeed has been research into "quantum sorting". But the problem with such sorting algorithms is that any comparison-based quantum sorting algorithm would take at least $\Omega(N\log N)$ steps, which is already achievable by classical algorithms. Thus, for this task, quantum computers are no better than classical ones. However, in space-bounded sorts, quantum algorithms outperform their classical counterparts.
This and this are two relevant papers.
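For the classical "educational joke" version, the expected number of shuffles really does blow up factorially, which a tiny experiment illustrates (a sketch; the list is kept to 5 elements on purpose, since each extra element multiplies the expected runtime):

```python
import random

def bogosort(xs):
    """Shuffle until sorted; returns (sorted list, number of shuffles)."""
    shuffles = 0
    # any(...) is the O(n) sortedness check mentioned in the question
    while any(xs[i] > xs[i + 1] for i in range(len(xs) - 1)):
        random.shuffle(xs)
        shuffles += 1
    return xs, shuffles

random.seed(42)  # fixed seed so the run is reproducible
result, count = bogosort([5, 3, 1, 4, 2])
print(result, count)  # sorted after, on average, around 5! = 120 shuffles
```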
{ "domain": "quantumcomputing.stackexchange", "id": 2294, "tags": "quantum-algorithms, complexity-theory, speedup" }
What's the difference between a DSP and microcontroller?
Question: What are the core attributes that make a DSP and microcontroller different from each other? I have in mind a machine vision device that needs to run image processing algorithms but also control some networking and hardware I/O. How good would the DSP be at controlling hardware output pins, and how well can microcontrollers run image processing algorithms? Answer: a Digital Signal Processor is one that has, in its instruction set, some instructions and addressing modes that are optimized for processing digital signals. usually these optimizations can be shown around what is needed to perform the dot-product needed for an FIR filter. $$ y[n] = \sum\limits_{i=0}^{L-1} h[i]\,x[n-i] $$ to do this in, say, $L$ instructions, a DSP must be able to do in one instruction: multiply $h[i]$ and $x[n-i]$ together. accumulate that product into an existing sum. fetch the next $h[i+1]$ and $x[n-i-1]$ in anticipation of the next multiply-accumulate. since these are two numbers to fetch, a DSP will use something called a Harvard architecture that has at least two separate memory spaces for $h[i]$ and $x[n]$ so the DSP can fetch these two numbers simultaneously. addressing $x[n]$ must be in a circular queue. a DSP will perform the modulo (or "wrap around") arithmetic on the index or address of $x[n]$ necessary without additional instructions. the result $y[n]$ will eventually go to an output DAC or fixed-point stream and there is some way to saturate the value of $y[n]$ against some $\pm$ maximum without additional instructions. if the DSP is a fixed-point DSP, then this accumulator register will have width in bits that is the sum of the bitwidth for $h[i]$ and the bitwidth for $x[n]$ plus a few more bits on the left as "guard bits". a general purpose CPU can do all of these, but not likely in a single instruction cycle and things like modulo arithmetic and saturation will need their own specific instructions in a general-purpose CPU. 
a DSP may also have some instructions and an addressing mode that facilitates the operations of the Fast Fourier Transform (FFT). this may include instructions necessary to perform an FFT "butterfly" in as few as four instruction cycles (or two instruction cycles if SIMD is operational).
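The FIR multiply-accumulate loop described above can be sketched in plain Python (my illustration, not DSP assembly; the names `fir_step`, `state`, and `pos` are mine). On a real DSP, each pass through the inner loop (multiply, accumulate, modulo-indexed fetch) collapses into a single instruction cycle with hardware circular addressing.

```python
def fir_step(h, state, pos, x_new):
    """Push one new sample into the circular buffer and return (y, new_pos)."""
    L = len(h)
    state[pos] = x_new                    # overwrite the oldest sample
    acc = 0.0                             # the (wide) accumulator
    for i in range(L):
        # (pos - i) % L is the modulo address arithmetic a DSP does for free
        acc += h[i] * state[(pos - i) % L]
    return acc, (pos + 1) % L

# usage: a 3-tap moving average fed with a constant input
h = [1 / 3, 1 / 3, 1 / 3]
state, pos = [0.0] * len(h), 0
y = []
for x in [3.0, 3.0, 3.0, 3.0]:
    out, pos = fir_step(h, state, pos, x)
    y.append(out)
# once the delay line fills, the output settles at the input mean
```

A general-purpose CPU runs this same loop, but spends extra instructions on the index wrap-around and the separate fetches that the DSP gets for free.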
{ "domain": "dsp.stackexchange", "id": 4792, "tags": "dsp-core" }
Can a controlled lighting environment be a substitute for bandpass light filters?
Question: Could illuminating a subject with different LEDs be used instead of bandpass filters? I am experimenting with capturing surface reflectance using a monochrome camera. The traditional approach to capturing specific reflected wavelength bands is to take multiple photos, each with a bandpass filter in front of the camera which only allows a small band of light wavelengths to pass. I will be capturing photos under very controlled lighting: all external light sources will be blocked and the subject will be surrounded by an array of LEDs. I understand that you can't just use white or RGB LEDs as they will only emit in three different bands. A full spectrum of LEDs emitting at different bands would be needed. Here is a good chart showing individually available LEDs and their wavelengths: https://www.lumex.com/article/led-color-guide If this could work, the main advantage would be eliminating the need to mechanically change the filters, which in turn would eliminate vibrations and speed up the capture process. It would probably also reduce costs. Answer: Yes, this would work. But whether you want to do so would also depend on the wavelength resolution you need. An LED emission band can be rather large (perhaps ~10's of nm), and you could probably do better with specialized bandpass filters. More to the point, you have a lot more flexibility if you use bandpass filters, including the flexibility to purchase quite narrow or sharply-defined (i.e. steep cut-off/on) lines. LED emission probably won't be quite so clean. At the very least, you should request the emission spectra of the LEDs before you commit to them so you can see what you're working with. But given the right requirements, it could totally work! Have fun!
{ "domain": "physics.stackexchange", "id": 49557, "tags": "visible-light, reflection, light-emitting-diodes, camera" }
How can I compare two algorithms using their Big-Oh complexities?
Question: I have two recursive algorithms to solve a particular problem. I have calculated their time complexities as $O(n^2\times\log n)$ and $O(n^{2.32})$. I need to find which algorithm is better in terms of time complexity. I tried plotting graphs but the two functions seem to track each other. Answer: You will need to compare the rate of change for both functions. This can be accomplished by taking the derivative of both functions (here $\log$ denotes the natural logarithm): $$ O_1'(n^2 \log n) = \frac{d}{dn} n^2 \log n = n + 2n \times \log n $$ $$ O_2'(n^{2.32}) = \frac{d}{dn} n^{2.32} = 2.32n^{1.32} $$ Plot these derivatives and compare the graphs. If they still seem too close, continue to take higher order derivatives such as $\frac{d^2O}{dn^2}$, $\frac{d^3O}{dn^3}$, etc. Eventually, you will notice a clear difference in the rate of change. For those who don't have access to plotting software, you can evaluate the slope of the tangent line for an arbitrarily large n-value: $$ O_1'(10^6)=\frac{d}{dn} n^2 \log n\bigg|_{n=10^6} = 10^6 + 2(10^6) \times \log(10^6) \approx 2.8631 \times 10^7$$ $$ O_2'(10^6)=\frac{d}{dn} n^{2.32}\bigg|_{n=10^6} = 2.32(10^6)^{1.32} \approx 1.9296 \times 10^8$$ This means, for $n=10^6$, $O_1$'s tangent line at $(10^6, y_1)$ would be: $$y - y_1 = (2.8631 \times 10^7)(x - 10^6) $$ and $O_2$'s tangent line would be: $$y - y_1 = (1.9296 \times 10^8)(x - 10^6) $$ Now, the answer is clear: $O(n^{2.32})$ has a much steeper rate of growth than $O(n^2\log n)$ for large values of $n > 10^6$. Conclusion: $O(n^2\log n)$ is more efficient than its counterpart.
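Independent of the tangent-line argument above, a quick numerical check (my addition, not part of the original answer) confirms the conclusion directly: the ratio $(n^2\ln n)/n^{2.32} = \ln(n)/n^{0.32}$ shrinks toward zero as $n$ grows, so $n^2\log n$ is the slower-growing function.

```python
import math

# ratio of n^2 * ln(n) to n^2.32 simplifies to ln(n) / n^0.32
ratios = [math.log(n) / n ** 0.32 for n in (10**3, 10**6, 10**9, 10**12)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly shrinking
```

This is the standard way to compare two asymptotic classes: look at the limit of their ratio rather than at plots over a finite range.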
{ "domain": "cs.stackexchange", "id": 20806, "tags": "time-complexity, runtime-analysis, big-o-notation" }
What will happen if the universe expansion continues to accelerate?
Question: From my understanding, dark energy is most likely causing the expansion of the universe and it is accelerating. For this thought experiment assume the rate that the universe is expanding is exactly c. Two photons move towards each other, each at c; however, space itself also pulls them away from each other at c, so their total speed should forever be zero. Since the universe doesn't exactly have a center, it pulls all particles in this manner, making interaction impossible. So my question is: in this universe where all interaction is impossible, what would happen to virtual particles? Since they can't destroy each other, wouldn't the universe create an infinite amount of energy? Even if the total sum of energy is zero and the virtual particles would destroy each other, it is now impossible. Does this break the conservation of energy? Answer: For this thought experiment assume the rate that the universe is expanding is exactly $c$. This isn't how the expansion works. The expansion rate is described by the Hubble parameter, $H$, and the Hubble parameter tells us the (average) recession velocities at a distance $d$. The equation is simply: $$ v = H d \tag{1} $$ The Hubble parameter is normally given in units of (km/sec)/Mpc, where the unit Mpc is a megaparsec. So objects at a distance of 1 Mpc have an average recession velocity of $H$ km/sec. Objects at a distance of ten Mpc have an average recession velocity of $10H$ km/sec, and so on. The current value of $H$ is around $70$ (km/sec)/Mpc, though there is quite a large uncertainty in the value and it could be anywhere in the range $62$ to $82$ (km/sec)/Mpc. In a universe with no dark energy the value of $H$ falls with time as the mutual gravitational attraction of the matter slows the expansion. In a universe with dark energy the value of $H$ tends to a constant value, and in pathological expansions like the Big Rip the value of $H$ increases with time.
For more on this see How does the Hubble parameter change with the age of the universe? For any value of $H$ there is a distance where the recession velocity is equal to the speed of light. From equation (1) this distance is just: $$ d_h = \frac{c}{H} $$ And as you suggest in your question, light emitted at this distance cannot reach us. This distance is called the Hubble distance. If dark energy behaves like a cosmological constant then in our universe the Hubble parameter will tend to a constant value of about $20$% less than its current value, which puts the Hubble distance at about 16 billion light years. So in the far future all observers in the universe will have a cosmological event horizon at around 16 billion light years and they will never be able to see farther into the universe than that. However within this distance everything behaves normally. There is a (wildly speculative) idea that the dark energy density can increase with time, eventually driving the Hubble parameter to infinity. This is called the Big Rip. This would in effect tear everything apart and destroy everything; however, there is currently no evidence that this will happen. A couple of final points. Firstly, energy is not conserved in the expansion of the universe. This is the case no matter how the universe is expanding and doesn't require dark energy to do anything weird like a Big Rip. See for example: Conservation law of energy and Big Bang? How is dark energy consistent with conservation of mass and energy? Secondly, the vacuum isn't full of virtual particles. The idea of virtual particles being pulled apart is just a toy model originally introduced to give a rough guide to Hawking radiation. Virtual particles are a mathematical device and don't really exist.
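Equation (1) makes the horizon-distance figures easy to check numerically. This is a back-of-the-envelope sketch with approximate constants; the 56 (km/s)/Mpc asymptotic value is my stand-in for "about 20% less than the current value", not a figure from the answer.

```python
C_KM_S = 299_792.458        # speed of light in km/s
LY_PER_MPC = 3.262e6        # light years per megaparsec (approximate)

def hubble_distance_ly(H):
    """Distance (in light years) at which v = H*d reaches c, for H in (km/s)/Mpc."""
    return C_KM_S / H * LY_PER_MPC

today = hubble_distance_ly(70)    # with H = 70: roughly 14 billion light years
future = hubble_distance_ly(56)   # H ~20% lower: roughly 17 billion light years
```

The second figure lands in the same ballpark as the ~16 billion light year horizon quoted above; the exact number depends on the asymptotic value of $H$ one assumes.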
{ "domain": "physics.stackexchange", "id": 46612, "tags": "cosmology, energy-conservation, space-expansion, dark-energy, virtual-particles" }
In the SLAM for Dummies, why are there extra variables in the Jacobian Matrices?
Question: I am reading SLAM for Dummies, which you can find on Google, or at this link: SLAM for Dummies - A Tutorial Approach to Simultaneous Localization and Mapping. They do some differentiation of matrices on page 33 and I am getting different answers for the resulting Jacobian matrices. The paper derived $$ \left[ {\begin{array}{c} \sqrt{(\lambda_x - x)^2 + (\lambda_y - y)^2} + v_r \\ \tan^{-1}\left(\frac{\lambda_y - y}{\lambda_y - x}\right) - \theta + v_\theta \end{array}} \right] $$ and got $$ \left[ {\begin{array}{ccc} \frac{x - \lambda_y}{r},& \frac{y - \lambda_y}{r},& 0\\ \frac{\lambda_y - y}{r^2},& \frac{\lambda_y - x}{r^2},& -1 \end{array}} \right] $$ I don't get where the $r$ came from. I got completely different answers. Does anybody know what the $r$ stands for? If not, is there a different way to represent the Jacobian of this matrix? Answer: From the paper: $\begin{bmatrix} range\\bearing \end{bmatrix} = \begin{bmatrix} \sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 } + v_r \\ tan^{-1}(\frac{\lambda_y-y}{\lambda_x-x}) - \theta + v_{\theta} \end{bmatrix}$ The Jacobian is: $H = \begin{bmatrix}\frac{\partial range}{\partial x} & \frac{\partial range}{\partial y} & \frac{\partial range}{\partial \theta}\\\frac{\partial bearing}{\partial x} & \frac{\partial bearing}{\partial y} & \frac{\partial bearing}{\partial \theta}\end{bmatrix} = \begin{bmatrix}\frac{-2(\lambda_x-x)}{2\sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 }} & \frac{-2(\lambda_y-y)}{2\sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 }}&0\\ \frac{1}{1+(\frac{\lambda_y-y}{\lambda_x-x})^2}\cdot\frac{\lambda_y-y}{(\lambda_x-x)^2}&\frac{1}{1+(\frac{\lambda_y-y}{\lambda_x-x})^2}\cdot\frac{-1}{(\lambda_x-x)}&-1\end{bmatrix}$ The expression $\sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 }$ is just the ideal range (distance from the robot to the landmark), and so is called $r$. 
Then the previous expression simplifies to: $H = \begin{bmatrix}\frac{x-\lambda_x}{\sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 }} & \frac{y-\lambda_y}{\sqrt{(\lambda_x-x)^2 + (\lambda_y - y)^2 }}&0\\ \frac{\lambda_y-y}{(\lambda_x-x)^2+(\lambda_y-y)^2}&\frac{x-\lambda_x}{(\lambda_x-x)^2+(\lambda_y-y)^2}&-1\end{bmatrix} = \begin{bmatrix}\frac{x-\lambda_x}{r} & \frac{y-\lambda_y}{r}&0\\ \frac{\lambda_y-y}{r^2}&\frac{x-\lambda_x}{r^2}&-1\end{bmatrix}$ As you can see there is a sign error in the paper (second row, second column). You can check it with any differentiation tool (you can find a bunch of them online).
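The simplification above can be cross-checked numerically. The following sketch (my addition, not from the paper or the answer) compares finite differences of the range and bearing measurements against the closed-form Jacobian entries with $r = \sqrt{(\lambda_x-x)^2 + (\lambda_y-y)^2}$.

```python
import math

def measurement(x, y, theta, lx, ly):
    """Range and bearing to landmark (lx, ly) from pose (x, y, theta)."""
    r = math.hypot(lx - x, ly - y)
    return r, math.atan2(ly - y, lx - x) - theta

def jacobian(x, y, theta, lx, ly):
    """The corrected 2x3 Jacobian H from the answer, with r^2 = (lx-x)^2 + (ly-y)^2."""
    r2 = (lx - x) ** 2 + (ly - y) ** 2
    r = math.sqrt(r2)
    return [[(x - lx) / r, (y - ly) / r, 0.0],
            [(ly - y) / r2, (x - lx) / r2, -1.0]]

# arbitrary test pose and landmark (chosen so r = 5 exactly)
x, y, theta, lx, ly = 1.0, 2.0, 0.3, 4.0, 6.0
H = jacobian(x, y, theta, lx, ly)

# finite-difference check of the d/dx column
eps = 1e-6
r0, b0 = measurement(x, y, theta, lx, ly)
r1, b1 = measurement(x + eps, y, theta, lx, ly)
assert abs((r1 - r0) / eps - H[0][0]) < 1e-4
assert abs((b1 - b0) / eps - H[1][0]) < 1e-4
```

A check like this is a standard debugging step for EKF-SLAM implementations: if the analytic Jacobian disagrees with finite differences, there is a sign or index error somewhere.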
{ "domain": "robotics.stackexchange", "id": 953, "tags": "slam" }
Frequency Analysis & Chi-Squared Test
Question: Following up on my implementation of Cryptopals Challenge 1, this is my solution to Challenge 3. Single-byte XOR cipher The hex encoded string: 1b37373331363f78151b7f2b783431333d78397828372d363c78373e783a393b3736 ... has been XOR'd against a single character. Find the key, decrypt the message. You can do this by hand. But don't: write code to do it for you. How? Devise some method for "scoring" a piece of English plaintext. Character frequency is a good metric. Evaluate each output and choose the one with the best score. The idea here is that I try to decrypt with every possible single byte repeating key (each extended ascii/utf8 character), then compare the resulting character frequency to the expected frequency with a Chi-Squared test. $$ \chi^2 = \sum_{i=1}^n \frac{(obs - exp)^2}{exp} $$ If $\chi^2$ < the critical value, then the decrypted cipher is determined to be English text, therefore we've cracked the cipher and found the key. I have some data and a build script to do some code generation before compiling the rest of the project. data/english.csv 32,17.1660 101,8.5771 116,6.3700 111,5.7701 97,5.1880 ...
build.rs use std::env; use std::fs::File; use std::io::BufRead; use std::io::BufReader; use std::io::Write; use std::path::Path; fn main() { let declaration = String::from("fn english_frequencies() -> HashMap<u8, f32> {["); // csv must be 2 columns, no header // ascii number, frequency as percentage // 32,17.16660 let file = File::open("data/english.csv").unwrap(); let reader = BufReader::new(&file); let formatted_lines = reader .lines() .map(|line| format!("({}),\n", line.unwrap())) .collect(); let close = String::from("].iter().cloned().collect()}"); let out_dir = env::var("OUT_DIR").unwrap(); let dest_path = Path::new(&out_dir).join("english_frequencies.rs"); let mut f = File::create(&dest_path).unwrap(); f.write_all( &[declaration, formatted_lines, close] .join("\n") .into_bytes(), ).unwrap(); } This generated table of expected frequencies is then included in a module that implements the frequency analysis. src/frequency.rs use std::collections::HashMap; include!(concat!(env!("OUT_DIR"), "/english_frequencies.rs")); pub fn english(message: &str) -> bool { let expected_counts: HashMap<char, f32> = english_frequencies() .iter() .map(|(k, freq)| (k.clone() as char, (freq / 100.0) * (message.len() as f32))) .collect(); let actual_counts = message .chars() .fold(HashMap::new(), |mut acc: HashMap<char, isize>, c| { let count = match acc.get(&c) { Some(x) => x.clone() + 1, None => 1, }; acc.insert(c, count); acc }); let chi_statistic = chi_statistic(actual_counts, expected_counts); if cfg!(debug_assertions) { println!("X-statistic: {}", chi_statistic); } // Degrees of freedom = 256 - 1 = 255 (character space) // Using this table: // https://en.wikibooks.org/wiki/Engineering_Tables/Chi-Squared_Distibution // We can use the approximate value for 250 degrees of freedom. // Given a significance factor (alpha) of 0.05, our critical value is 287.882. // If our chi_statistic is < the critical_value, then we have a match.
// See this page for an explanation: // https://en.wikipedia.org/wiki/Chi-squared_distribution#Table_of_%CF%872_values_vs_p-values chi_statistic < 287.882 } /// Calculates Pearson's Cumulative Chi Statistic /// https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test#Calculating_the_test-statistic /// /// This is a slight variation. /// Technically, if the expected value is zero and the actual is non-zero, then the statistic is infinite. /// For the sake of ergonomics, this implementation assumes missing expected values to be small, but non-zero. /// This allows us to only specify values in the expected frequencies that are statistically /// significant while allowing for all valid utf-8 characters in the message. fn chi_statistic(observed: HashMap<char, isize>, expected: HashMap<char, f32>) -> f32 { observed .into_iter() .map(|(key, obs)| { let exp = match expected.get(&key) { Some(x) => x.clone() as f32, None => 0.0000001, //non-zero, but tiny possibility }; (obs as f32 - exp).powi(2) / exp }).sum() } #[cfg(test)] mod tests { use super::*; #[test] fn bacon_message_is_english() { let message = "Cooking MC's like a pound of bacon"; assert!(english(message)); } #[test] fn message_with_unprintable_chars_is_not_english() { assert!(!english( "\u{7f}SSWUR[\u{1c}q\u{7f}\u{1b}O\u{1c}PUWY\u{1c}]\u{1c}LSIRX\u{1c}SZ\u{1c}^]_SR" )); } #[test] fn printable_nonsense_is_not_english() { assert!(!english("Yuuqst}:WY=i:vsq\u{7f}:{:juot~:u|:x{yut")); } #[test] fn readable_but_incorrect_is_not_english() { assert!(!english( "cOOKING\u{0}mc\u{7}S\u{0}LIKE\u{0}A\u{0}POUND\u{0}OF\u{0}BACON" )); } } Finally, we call it from the main program and crack the cipher.
src/main.rs extern crate cryptopals; use cryptopals::byte_array::xor; use cryptopals::frequency; use cryptopals::hex; use std::iter; fn main() { let secret = hex::to_bytes("1b37373331363f78151b7f2b783431333d78397828372d363c78373e783a393b3736"); println!("{:?}", crack_xor(&secret)); } fn crack_xor(cipher: &[u8]) -> Vec<String> { let alphabet = 0..255u8; /* ascii/utf-8 range */ alphabet .into_iter() .filter_map(|c| { let key = iter::repeat(c).take(cipher.len()).collect::<Vec<u8>>(); let decrypted = xor(&cipher, &key); match String::from_utf8(decrypted) { Ok(s) => Some(s), Err(_) => None, } }).filter(|s| frequency::english(s)) .collect::<Vec<String>>() } #[cfg(test)] mod tests { use super::*; #[test] fn analysis_matches_bacon_message() { let secret = hex::to_bytes("1b37373331363f78151b7f2b783431333d78397828372d363c78373e783a393b3736"); let actual = crack_xor(&secret); assert_eq!(vec!["Cooking MC's like a pound of bacon"], actual); } } Answer: Why use a build.rs here? Why not read english.csv in at runtime? For the purposes of an exercise like that I'm surprised you went to the trouble of converting the csv file into rust code like that. If you wanted to move as much work as possible to compile time you only went part way. You still have to postprocess the returned HashMap by converting the u8 to chars and converting the 0-100 range to 0-1. Why not do that additional conversion work up front? include!(concat!(env!("OUT_DIR"), "/english_frequencies.rs")); Given that it is rust code, I wonder if you could instead use: mod english_frequencies; Thus including the rust code as a module instead of inserting the code literally here. (I'm not sure; I've not tried it.) .map(|(k, freq)| (k.clone() as char, (freq / 100.0) * (message.len() as f32))) You clone char/u8 a lot, but you don't need to. They are copy types and will be "cloned" automatically in most contexts.
let count = match acc.get(&c) { Some(x) => x.clone() + 1, None => 1, }; acc.insert(c, count); This whole bit can be replaced by: *acc.entry(c).or_insert(0) += 1; Moving on... fn chi_statistic(observed: HashMap<char, isize>, expected: HashMap<char, f32>) -> f32 { It is a bit weird that this function takes ownership of the HashMaps. You don't need to consume the hashmaps in the function, so I'd expect borrows. let exp = match expected.get(&key) { Some(x) => x.clone() as f32, None => 0.0000001, //non-zero, but tiny possibility }; You can write this as: let exp = expected.get(&key).cloned().unwrap_or(0.0000001); (note that get returns an Option<&f32>, so the reference needs to be cloned rather than cast). I'm not sure whether that's better or not, but it is an option.
{ "domain": "codereview.stackexchange", "id": 32190, "tags": "programming-challenge, cryptography, statistics, rust" }
Heat Transfer coefficient calculation for hot air flow inside rectangular duct
Question: I am working on modeling of an air-heater component. A simplified representation of the system is as follows: Cold air flow enters a steel duct at one end. Inside the duct is a Calrod heating element shaped as an ellipse. The following heat transfers occur simultaneously: Air gains heat from the surface of the heated calrod. Air also exchanges heat with the duct inner surface. The Calrod element radiates heat to the duct inner surface. The duct loses heat by radiation to the surroundings as well as convection. The system is solved by taking energy balance equations on the air mass, duct mass and heater element mass. My question is in regards to convection between air and duct. In the model, I have considered the whole assembly to be made of 5 parts along the duct length. In each part the respective thermal balance on air, duct and heated element mass is taken. In each part, I consider air to be a lumped mass. Entering air gains heat from the calrod and gains some temperature (which in the real world would be an average temperature). The problem occurs when that temperature is used for convection transfer with the duct. In the actual process, the air flow is fast enough (0.013 $\frac{m^3}{sec}$) that the air actually in contact with the heater element doesn't reach the duct surface. So the air that exchanges heat with the duct inner surface is at a lower temperature than the average air temperature. This gives incorrect duct temperature predictions. How can I properly set $\Delta T$ for convection heat exchange with the duct? Answer: There are several issues going on that I believe are making the model not very accurate. They all in some way relate to your question about the convection: Breaking up the flow into 5 discrete chunks is a good start, but probably not a sufficient number. Keep increasing the number of nodes in your model until the answer stops changing. The next thing to check is the Reynolds number of your flow and the length it will take to become fully developed.
If your flow is fully developed for most of the length, the heat transfer coefficient will do a good job modeling the problem you are describing with the air not contacting the wall. If the flow is not fully developed (which is my gut guess) then the heat transfer coefficient won't do a good job. In general, you should consider the high level thermal resistances between the coil and the duct wall. Since air is such a poor conductor and has such a low density, it is a poor heat transfer medium. Since the heat transfer coefficient on the surface of the coil to the air will be similar to the heat transfer coefficient from the air to duct, the extreme difference in dT between the coil and air and the air and wall will result in a very large Q to the air and a very small Q to the duct wall from the air. It is not impossible that the air actually cools the duct wall instead of heating it. If you work out the air-to-duct heat transfer coefficient you may find that it is so trivial that it is likely not the source of error in your model. Without numbers I can't really say for sure, but my guess is that the air in your model is not actually the primary means of heat transfer from the coil to the wall. Instead, the radiation is dominant. Thus, I would look to improve the accuracy of that calculation first if you want to predict wall temperature. For example, what emissivity are you using for the duct wall? How are you calculating the view factor between the coil's interior edge and the wall? Etc etc. Hope this helps and that this is still even remotely relevant 1 year late.
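The Reynolds-number check suggested above can be sketched quickly. Only the 0.013 m³/s flow rate comes from the question; the duct cross-section and air viscosity below are assumed placeholder values, so treat the numbers as illustrative.

```python
Q = 0.013                  # volumetric flow from the question, m^3/s
W, H_DUCT = 0.10, 0.05     # assumed duct cross-section, m (placeholders)
NU = 1.8e-5                # kinematic viscosity of warm air, m^2/s (approx.)

area = W * H_DUCT                        # flow cross-section, m^2
velocity = Q / area                      # bulk velocity, m/s
d_h = 4 * area / (2 * (W + H_DUCT))      # hydraulic diameter of a rectangle
reynolds = velocity * d_h / NU

# a common rule of thumb for turbulent flow: entry length ~ 10 hydraulic diameters
entry_length = 10 * d_h
```

With these assumed dimensions the Reynolds number comes out in the turbulent range (above ~4000), and the entry length is a substantial fraction of a meter, so whether the duct is long enough for fully developed flow is exactly the question to settle before trusting a single heat transfer coefficient.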
{ "domain": "engineering.stackexchange", "id": 1555, "tags": "heat-transfer, modeling, convection, radiation" }
How could the precession of Mercury be known so accurately in the 19th century?
Question: The discrepancy between the observed precession of the perihelion of Mercury and the value predicted by Newtonian theory was known in the 19th century to be approximately 43 arcseconds per century. Maybe I totally misunderstand what this value means, but if it is what I think, then this value seems to be absolutely tiny, 43" approximately corresponds to the width of a needle held at an arm's length in front of you. How can this value be obtained from data that cannot have had an extremely high precision and that cannot have spanned more than a couple of centuries of observations? Answer: Some of the most accurate measurements of the solar system come from event timing. (Relatively) small changes in position can lead to significant differences in eclipse or occultation timing. In fact the article on Tests of general relativity mentions that this was the method used to first notice the discrepancy. This anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier. His reanalysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848 showed that the actual rate of the precession disagreed from that predicted from Newton's theory
{ "domain": "physics.stackexchange", "id": 39405, "tags": "newtonian-mechanics, newtonian-gravity, astronomy, history, celestial-mechanics" }
How is the edge piece made?
Question: Apparently the "CUP HOLDER" has a long history. The 1944 Spitfire cupholder above has me a tad confused about the manufacturing process. I am looking at the encased wire that is running around the perimeter of the sloped top side edges. I think this is simply a way of terminating the sheet metal so as not to leave an overly sharp edge or snag points. In order to create it, I originally thought it was just an edge tab rolled over to form a tube, then the wire was inserted. Then I thought they might have used the wire as part of the forming process and rolled the edge around it. My final thought was that it's actually a circular tube with a 26 Gauge wall with a 14 Gauge wire insert as described at the top of the page in item "C". So my question is how would one make this "tube" of a different gauge than the adjacent sheet metal to which it would be attached? I noted no welding symbols so I am curious as to the process.
{ "domain": "engineering.stackexchange", "id": 5221, "tags": "mechanical-engineering, cad, machining, technical-drawing" }
Find the nth term of a sequence that consists of Fibonacci and prime numbers interleaved
Question: I have been given a series problem 1,2,1,3,2,5,3,7,5.... in which the odd terms form the Fibonacci series and the even terms form the prime number series. I have to write code in which I have to find the nth term. Following is the code for finding the nth term. I want to know whether it can be written more efficiently or not, and how? def fib(n): t1=0 t2=1 for i in range(1,n+1): add = t1+t2 t1 = t2 t2 = add print(t1) def prime(n): count =1 for i in range(2,100): flag =0 for j in range(2,i): if (i%j==0): flag=1 break if (flag==1): continue else: if (count==n): print(i) break else: count = count+1 n= int(input('enter the positive number: ' )) if (n%2==0): prime(int(n/2)) else: fib(int(n/2)+1) Answer: You could use a few tricks to implement the two sequences more efficiently, but the short version of my answer is that most significant performance improvements you could make involve some relatively advanced math, and the smaller improvements do more to improve your code's readability than its performance. Useful improvements to prime If you keep a list of the primes you find, you only need to check if those divide the new numbers you are checking, rather than checking every number up to the number you are looking at. You could also skip over even numbers in the outer loop (use range(3, max, 2)), thus avoiding checking even numbers that you can be sure aren't prime (you would need to add a special case for 2). The inner loop (j) can stop at i/2, because no number can be evenly divided by a number more than half its size. Similarly, you can stop the loop when you pass the square root of n, but you would have to implement that by squaring the factors because sqrt is limited by the inaccuracy of floating-point numbers.
Using all of these suggestions, the code might look a little like this: def nth_prime(n): if n == 1: return 2 primes = [2] for candidate_prime in range(3, MAX, 2): is_prime = True for p in primes: if p ** 2 > candidate_prime: break # no prime up to sqrt divides it; candidate is prime if candidate_prime % p == 0: is_prime = False break # p divides candidate; candidate is not prime if is_prime: primes.append(candidate_prime) if len(primes) == n: return candidate_prime Additional optimizations for determining whether a number is prime are discussed on the Wikipedia page on the subject. These improvements will only start to have noticeable effects when you start looking at very large primes, so you might also want to use itertools.count to look at all numbers instead of stopping at 100. (If you really want to stop at 100, you could also just make a list of the prime numbers up to 100 and index that for maximum efficiency.)
And if you're not using the index i in that loop, Python convention is to call it _, so you end up with for _ in range(n). Using Python's less-than-well-known for/else feature might let you eliminate the flag variable. If you put an else block after a loop in Python, it will run only if you do not break out of the loop, which allows you to avoid flag variables. for i in some_list: if condition(i): break else: do_this_if_condition_was_never_true()
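Putting the answer's suggestions together, a compact, runnable version of the original program (my sketch; function names are mine) might look like this. It returns rather than prints the nth term, 1-indexed, with odd positions drawn from the Fibonacci series and even positions from the primes.

```python
def nth_fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b      # the one-line swap suggested above
    return a

def nth_prime(n):
    primes = [2]
    candidate = 3
    while len(primes) < n:
        # trial division only by known primes up to sqrt(candidate)
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 2
    return primes[n - 1]

def nth_term(n):
    # odd positions: Fibonacci; even positions: primes
    return nth_prime(n // 2) if n % 2 == 0 else nth_fib(n // 2 + 1)

terms = [nth_term(i) for i in range(1, 10)]
# matches the series in the question: 1, 2, 1, 3, 2, 5, 3, 7, 5
```

Note the generator expression inside all() implements both the "only divide by primes" and "stop at the square root" ideas without any flag variable.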
{ "domain": "codereview.stackexchange", "id": 34936, "tags": "python, performance, python-3.x, primes, fibonacci-sequence" }
Custom nullptr_t class
Question: I tried to write a nullptr_t class based on the official proposal to be used in C++03 only. The only differences with the proposal are that we can compare two nullptr_t instances and that it is convertible to bool via an overload to void* to avoid unwanted behaviour such as int a = nullptr; for example. Here is the class: const class nullptr_t { public: // Return 0 for any class pointer template<typename T> operator T*() const { return 0; } // Return 0 for any member pointer template<typename T, typename U> operator T U::*() const { return 0; } // Used for bool conversion operator void*() const { return 0; } // Comparisons with nullptr bool operator==(const nullptr_t&) const { return true; } bool operator!=(const nullptr_t&) const { return false; } private: // Not allowed to get the address void operator&() const; } nullptr = {}; I would like to know if there is any actual flaw in this completed implementation or if there is a difference in the behaviour compared to the C++11 type std::nullptr_t besides the namespace that I can't see. Answer: At the moment, your implementation allows auto& p = nullptr; This is forbidden in C++11 as nullptr is an rvalue. You also do not allow the following: auto p = nullptr; auto pp = &p; While C++11 does allow it. You are also missing overloads for comparison operators. A simple workaround would be to remove the operator& overload and add a macro: #define nullptr (nullptr_t()) Also, I'd generally use struct X { ... } const x; instead of const struct X { ... } x;.
{ "domain": "codereview.stackexchange", "id": 3821, "tags": "c++, c++11, reinventing-the-wheel, c++03" }
Wave nature of a typical ball
Question: How can we observe the wave nature of a typical tennis ball? Construct a thought experiment. Answer: Calculations with the Uncertainty Principle imply that the wave nature of a macroscopic object is too small for us ever to hope to detect.
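For context (my addition, not part of the original answer): the usual back-of-the-envelope estimate uses the de Broglie relation $\lambda = h/(mv)$. The ball mass and speed below are assumed typical values.

```python
H_PLANCK = 6.626e-34   # Planck's constant, J*s
m = 0.057              # kg, roughly a regulation tennis ball (assumed)
v = 50.0               # m/s, a fast serve (assumed)

wavelength = H_PLANCK / (m * v)   # de Broglie wavelength in meters
# on the order of 1e-34 m: vastly smaller than the ball (~0.067 m)
# or even an atomic nucleus (~1e-15 m)
```

A wavelength some nineteen orders of magnitude below nuclear scales is why no diffraction experiment, however clever, could ever reveal the ball's wave nature.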
{ "domain": "physics.stackexchange", "id": 73399, "tags": "homework-and-exercises, waves, wave-particle-duality" }
How could inertial and gravitational mass be even conceptually different?
Question: From https://en.wikipedia.org/wiki/Mass Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. [...] Suppose an object has inertial and gravitational masses $m$ and $M$, respectively. If the only force acting on the object comes from a gravitational field $g$, the force on the object is: $$F=Mg.$$ Given this force, the acceleration of the object can be determined by Newton's second law: $$F=ma.$$ In theory, mass could be determined by the number of indivisible particles the object is made of. A better approach would be choosing unit mass and using law of conservation of momentum: $$\frac{m_1}{m_2}=-\frac{\Delta v_2}{\Delta v_1}.$$ If mass can be determined in the absence of any force, how could there (even conceptually) exist more types of mass? Shouldn't we talk about "inertial and gravitational force equivalence" instead of about "inertial and gravitational mass equivalence"? Any kind of mass which is "not invariant" under different kinds of forces makes no sense to me. The answers here (Why did we expect gravitational mass and inertial mass to be different?) and here (Question about inertial mass and gravitational mass) do not answer my question as mass is "determined" by force there. Answer: In theory, mass could be determined by the number of indivisible particles the object is made of. This would only be the case if all indivisible particles were the same mass and if the mass of a composite object were equal to the sum of the masses of the indivisible particles. Neither of those are true. Mass cannot be determined this way. A better approach would be choosing unit mass and using law of conservation of momentum: $$\frac{m_1}{m_2}=-\frac{\Delta v_2}{\Delta v_1}$$ Note that for this to work requires $\Delta v_1 \ne 0$. That in turn requires a force. So this method does not avoid the need for a force.
However, what it does do is make it clear that the resulting measure is independent of the type of force, without eliminating the need for a force altogether. Of course, the force based definitions also do that, but not as clearly. Shouldn't we talk about "inertial and gravitational force equivalence" instead of about "inertial and gravitational mass equivalence"? Probably if we ever found gravitational mass to be different from inertial mass we would call it gravitational charge instead. So mass would continue to refer to the inertial mass. Then, just like the acceleration of an object in an electric field depends on the ratio of its electric charge and mass, so also the acceleration of an object in a gravitational field would depend on the ratio of its gravitational charge and mass.
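The momentum-based procedure quoted above can be sketched numerically. A minimal one-dimensional elastic-collision example (all masses and velocities below are made-up numbers): the "measured" ratio uses only the velocity changes, yet it reproduces $m_1/m_2$, and it visibly requires $\Delta v_1 \ne 0$, i.e. a force.

```python
# Infer a mass ratio from conservation of momentum alone: m1/m2 = -dv2/dv1.
m1, m2 = 3.0, 1.5        # true inertial masses (kg); never used by the "measurement"
u1, u2 = 2.0, -1.0       # velocities before a 1D elastic collision (m/s)

# Post-collision velocities (standard 1D elastic-collision formulas).
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

dv1, dv2 = v1 - u1, v2 - u2
ratio = -dv2 / dv1       # this is all an experimenter needs to record
print(ratio, m1 / m2)    # both 2.0: inferred and true mass ratio agree
```

Note that the inferred ratio is independent of the type of force acting during the collision, which is exactly the point made above.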
{ "domain": "physics.stackexchange", "id": 80196, "tags": "newtonian-mechanics, classical-mechanics, gravity, mass" }
Identify little green long legged bug
Question: Discovered attempting to hide on bathroom sink handle at night, though bathroom light had been left on. Barely reacted to attempts to capture it at all. No flying. No jumping. Just really slow crawling. Found Eastern US. 2017/11/18 Answer: The photo is not high quality, but the shape and size of the insect definitely point towards it being the nymph of some species of assassin bug (order Hemiptera, family Reduviidae). See this as an example: Specifically, your specimen looks similar to one found in a home in Virginia. The genus is likely Zelus, and based on its greenish hue, I'd say it's an individual of the eastern US's most common species of Zelus, Zelus luridus (Pale Green Assassin Bug). Source: Christine Hanrahan Nymphs are solid green, wingless, and with narrow bodies. The most reliable feature to distinguish this species from others is the pair of spines on the rear corners of the pronotum. These spines are long on the lighter colored individuals and shorter on ones that are darker. It can also be distinguished by dark bands on the distal ends of the femurs, but these can often be too light to be easily seen. [Source].
{ "domain": "biology.stackexchange", "id": 8015, "tags": "species-identification, zoology, entomology" }
What do "tachionic" neutrinos mean for QG?
Question: Reading about the spectacular Opera claim, I'm (again ;-P) wondering if a confirmation of superluminal neutrinos could help settle some still open quantum gravity issues ...? In this post, Lumo explains why tachyons should better be bosonic if they exist, making use, among other things, of some string theoretical considerations. So what would a confirmation of the claim mean for string theory? On the other hand, would a confirmation of superluminal neutrinos and a corresponding incompleteness of GR (and allowance to violate Lorentz Invariance?) lend some "updraft" to other QG theories like LQG, spin foams, spin networks etc or even provide some positive hint of them? Answer: For this post I will assume that the observation of neutrinos faster than light is accurate and not further mention this assumption below. Of the particles whose speeds have been accurately measured, the neutrino is unique in that it does not participate in electromagnetic interactions. The graviton also does not participate in electromagnetic interactions and so we might expect its velocity to also exceed that of light. This has immediate impact on quantum gravity and would open up a wide range of new theories. Since neutrinos can be used to send a signal, their moving faster than light destroys the relationship between causality and arbitrary reference frames which forms an important part of the special theory of relativity. To get causality to function again, we have to choose one reference frame as the preferred one and then let that reference frame define causality for all others. This variation of the special theory of relativity is sometimes called "Lorentzian Relativity" or "neo-Lorentzian Relativity" or "Lorentz Ether Theory", and has been explored by fringe physicists for most of the last 100 years. The presence of a preferred reference frame in the special theory of relativity implies that one must also be present in general relativity.
This makes theories that assume a "background metric" more interesting. Of such theories, my favorite is the one found by the Cambridge Geometric Algebra Research Group. For example, see Gravity, Gauge Theories and Geometric Algebra, Anthony Lasenby, Chris Doran, Stephen Gull, Phil. Trans. R. Soc. Lond. A 356, 487-582 (1998). This theory gives the observable predictions of GR exactly, but avoids wormholes and other topological things as it is built on a flat background metric. Rather than tensors, it uses the gamma matrices used in the rest of particle physics. In terms of the damage to relativity as compared to the damage to quantum mechanics, an accepted observation of superluminal neutrinos would be more damaging to relativity. So I would think that the general effect on quantum gravity would be to make it more quantum and less (traditional) gravity.
{ "domain": "physics.stackexchange", "id": 1670, "tags": "general-relativity, string-theory, quantum-gravity, neutrinos, loop-quantum-gravity" }
Reduction of carbamate with LAH
Question: The products of the reduction of esters with $\ce {LiAlH4}$ and the products of the reduction of amides with $\ce {LiAlH4}$ are vastly different. The former reduction cleaves the ester and produces two alcohols while the latter reduction produces an amine with the carbonyl group of the original amide replaced with $\ce {CH2}$. A carbamate seems to display both chemical behaviour of esters and amides. I am curious to know what would be the mechanism by which reduction of carbamate with $\ce {LiAlH4}$ takes place and what would be the products of such a reduction. Answer: Carbamates are usually reduced to N-methyl groups. There are numerous examples: J. Am. Chem. Soc. 2012, 134 (16), 6936–6939 Org. Lett. 2012, 14 (18), 4834–4837 But it is not always a given. In this next example, the nitrogen is part of a three-membered ring (aziridine). These nitrogens are better leaving groups than usual, cf. Ketone/aldehyde synthesis from N-acylazetidines or aziridines where the same kind of reactivity is observed: Angew. Chem. Int. Ed. 2002, 41 (24), 4683–4685
{ "domain": "chemistry.stackexchange", "id": 12643, "tags": "organic-chemistry, organic-reduction" }
Checkstyle found code duplication in factory
Question: I have some methods in my mapper: public EventDto mapToDto(Event googleEvent, EventDto baseEventDto) { EventDto eventDto = mapToDto(googleEvent, baseEventDto.getOriginalCalendarId()); eventDto.setCalendarId(baseEventDto.getCalendarId()); eventDto.setCollectiveId(baseEventDto.getCollectiveId()); eventDto.setId(baseEventDto.getId()); return eventDto; } and public EventDto mapToDto(Event googleEvent, String originalCalendarId) { ///... } The difference is only a few lines, because of different clients. Right now, my mapper needs to be improved with an abstraction and another subclass. To use the improved mappers I need a factory that contains: @Autowired private GoogleEventDtoMapperConfirmed googleEventDtoMapperConfirmed; @Autowired private GoogleEventDtoMapperCanceled googleEventDtoMapperCanceled; public EventDto mapToDto(Event googleEvent, EventDto baseEventDto) { switch (googleEvent.getStatus()) { case GoogleEventStatus.CANCELLED: return googleEventDtoMapperCanceled.mapToDto(googleEvent, baseEventDto); case GoogleEventStatus.CONFIRMED: case GoogleEventStatus.TENTATIVE: default: return googleEventDtoMapperConfirmed.mapToDto(googleEvent, baseEventDto); } } public EventDto mapToDto(Event googleEvent, String originalCalendarId) { switch (googleEvent.getStatus()) { case GoogleEventStatus.CANCELLED: return googleEventDtoMapperCanceled.mapToDto(googleEvent, originalCalendarId); case GoogleEventStatus.CONFIRMED: case GoogleEventStatus.TENTATIVE: default: return googleEventDtoMapperConfirmed.mapToDto(googleEvent, originalCalendarId); } } I believe a factory is a good choice, but as you see here, there is some duplication that should be reduced. I can't move part of the mapper logic into the factory for semantic reasons (moving it would fix the duplicated part), and on the other hand, I have Sonar duplicate-code issues. Any ideas on how to reduce the duplicated part?
Or, if you think some mapper part can be moved into the factory, please explain why the mapper rules should be moved into the factory. Answer: I've just got an idea of how the solution should be implemented. It became clear to me shortly after the question was published (this implementation is not the best, but the idea works well for me). My abstract factory should be replaced with a strategy pattern that returns the mapper I need. public EventDto mapToDto(Event googleEvent, EventDto baseEventDto) { return getMappingStrategy(googleEvent.getStatus()).mapToDto(googleEvent, baseEventDto); } public EventDto mapToDto(Event googleEvent, String originalCalendarId) { return getMappingStrategy(googleEvent.getStatus()).mapToDto(googleEvent, originalCalendarId); } public GoogleEventDtoMapperAbstract getMappingStrategy(String status) { switch (status) { case GoogleEventStatus.CANCELLED: return googleEventDtoMapperCanceled; case GoogleEventStatus.CONFIRMED: // will be implemented as needed case GoogleEventStatus.TENTATIVE: // will be implemented as needed default: return googleEventDtoMapperConfirmed; } }
{ "domain": "codereview.stackexchange", "id": 26762, "tags": "java, object-oriented" }
Opening tight bottles
Question: Why do so many people recommend patting/hitting on the lid of bottles, or alternatively on the bottom, in order to open them when they are too tight? Is it even effective? And why would it work? Answer: If you hit the bottom of a jar while it is upside down, it loosens the contents and piles them on the lid. If the inside pressure is less than the outside pressure, which should always be the case if the jar was vacuum packed, this may provide a little extra pressure to counteract the outside pressure that is holding the lid on. Tapping on the bottom may also release water vapor and/or other gas from the contents (like shaking a soda bottle), which will help counteract outside pressure. The same effect may result from hitting the lid, with the added bonus of deforming the lid slightly and breaking the vacuum seal. But the safest way to loosen a tight cap may be to run hot water on it so that it expands slightly. Glass has a lower thermal expansion coefficient than metal, so this method usually works.
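The hot-water trick can be sanity-checked with a rough back-of-the-envelope estimate. The expansion coefficients below are typical handbook values; the jar dimensions and temperature rise are hypothetical:

```python
# How much does a steel lid grow relative to a glass rim under a hot-water rinse?
alpha_steel = 12e-6    # linear thermal expansion coefficient of steel, 1/K
alpha_glass = 8.5e-6   # same for soda-lime glass, 1/K

d = 70e-3              # lid diameter, m
dT = 40.0              # temperature rise from the hot tap, K

# Differential expansion of the lid diameter relative to the glass rim.
gap = d * (alpha_steel - alpha_glass) * dT
print(f"extra clearance: {gap * 1e6:.1f} um")  # on the order of 10 micrometres
```

A clearance of the order of ten micrometres sounds tiny, but it is often enough to break the static friction holding a stuck thread.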
{ "domain": "physics.stackexchange", "id": 25008, "tags": "pressure, everyday-life" }
Braille Mini-Library in Python
Question: For a small project of mine, I will be needing to work a lot with braille translation. To do that, I have already finished a small program that I can use to translate braille into English letters. To make the whole thing a little more modular, I exported all my braille logic into a small module: from typing import NoReturn, List from bitmap import BitMap from enum import Enum from bidict import bidict class Dot(Enum): TOP_LEFT = 0 MIDDLE_LEFT = 1 BOTTOM_LEFT = 2 TOP_RIGHT = 3 MIDDLE_RIGHT = 4 BOTTOM_RIGHT = 5 class BrailleGlyph(BitMap): UNICODE_BLOCK = 10240 MAPPING = bidict( {"00000000": " ", "00000001": "A", "00000011": "B", "00001001": "C", "00011001": "D", "00010001": "E", "00001011": "F", "00011011": "G", "00010011": "H", "00001010": "I", "00011010": "J", "00000101": "K", "00000111": "L", "00001101": "M", "00011101": "N", "00010101": "O", "00001111": "P", "00011111": "Q", "00010111": "R", "00001110": "S", "00011110": "T", "00100101": "U", "00100111": "V", "00111010": "W", "00101101": "X", "00111101": "Y", "00110101": "Z", } ) def __init__(self, maxnum: int = 8) -> NoReturn: super().__init__(maxnum) self.history: List[Dot] = [] def __str__(self) -> str: ones = int(self.tostring()[4:], 2) tens = int(self.tostring()[:4], 2) * 16 return chr(BrailleGlyph.UNICODE_BLOCK + tens + ones) @classmethod def from_char(cls, char: str) -> "BrailleGlyph": return BrailleGlyph.fromstring(cls.MAPPING.inverse[char.upper()]) def is_empty(self) -> bool: return self.history == [] def click(self, index: Dot) -> NoReturn: if not self.test(index.value): self.set(index.value) self.history.append(index) def delete(self) -> NoReturn: self.flip(self.history.pop().value) def get_dots(self) -> List[Dot]: return self.history def to_ascii(self) -> str: try: return BrailleGlyph.MAPPING[self.tostring()] except KeyError: return "?" 
class BrailleTranslator: def __init__(self) -> NoReturn: self.glyphs = [BrailleGlyph()] def __str__(self) -> str: return "".join(map(str, self.glyphs)) def new_glyph(self) -> NoReturn: self.glyphs.append(BrailleGlyph()) def delete(self) -> NoReturn: if self.glyphs[-1].is_empty(): if len(self.glyphs) != 1: self.glyphs.pop() else: self.glyphs[-1].delete() def click(self, index: Dot) -> NoReturn: self.glyphs[-1].click(index) def get_current_glyph(self) -> BrailleGlyph: return self.glyphs[-1] def translate(self) -> str: return "".join(map(lambda x: x.to_ascii(), self.glyphs)) The primary use-case is going to be - me clicking dots (or some buttons on my keyboard that will set dots on the glyph) and the translation module will just remember what has been clicked and will display results in real-time. To avoid reinventing too much I used the external bitmap and bidict libraries which both seem very fitting for a task like this. Is there some way to make this more pythonic? Or maybe some obvious flaws? Here is also a sample usage: def braille_translator(self) -> NoReturn: graph = self.window["-GRAPH-"] output = self.window["-BRAILLE_OUTPUT-"] graphed_dots = [] translator = BrailleTranslator() circle_mapping = {Dot.BOTTOM_LEFT: (50, 250), Dot.BOTTOM_RIGHT: (150, 250), Dot.MIDDLE_LEFT: (50, 150), Dot.MIDDLE_RIGHT: (150, 150), Dot.TOP_LEFT: (50, 50), Dot.TOP_RIGHT: (150, 50), } dot_mapping = {"1": Dot.BOTTOM_LEFT, "2": Dot.BOTTOM_RIGHT, "4": Dot.MIDDLE_LEFT, "5": Dot.MIDDLE_RIGHT, "7": Dot.TOP_LEFT, "8": Dot.TOP_RIGHT, } while True: event, values = self.window.read() if event in (None, 'Exit') or "EXIT" in event: exit() elif event in "124578": # Left side of numpad! 
translator.click(dot_mapping[event]) elif event == " ": translator.new_glyph() elif "BackSpace" in event: translator.delete() current_dots = translator.get_current_glyph().get_dots() for circle in [d for d in graphed_dots if d not in current_dots]: graph.delete_figure(circle) graphed_dots.remove(circle) for dot in [d for d in current_dots if d not in graphed_dots]: circle = graph.draw_circle(circle_mapping[dot], 20, fill_color=GUI.THEME_COLOR) graphed_dots.append(circle) output.update(translator.translate()) And here's a GIF of the sample usage in action: It's a thin line between over-fitting the translation module too much for my current task and keeping it modular for future uses! Answer: First a few small things: If you expect this to be used in other programs, it should be documented: comments to explain implementation choices and details and docstrings to explain how to use the classes and functions in the module. A small suggestion: When modeling something, it is often helpful if the interface uses the same numbering or nomenclature used in the real world. For example, the upper left dot is often referred to as dot 1, not dot 0. So the class Dot(enum) might use values 1-6 rather than 0-5. But perhaps the names are the interface and the values are internal. Multiplying by 16 is the same as shifting by 4 bits. That is, this: def __str__(self) -> str: offset = int(self.tostring(), 2) return chr(BrailleGlyph.UNICODE_BLOCK + offset) Is the same as def __str__(self) -> str: ones = int(self.tostring()[4:], 2) tens = int(self.tostring()[:4], 2) * 16 return chr(BrailleGlyph.UNICODE_BLOCK + tens + ones) Use dict.get(key, default) with a default value instead of the try... except block: def to_ascii(self) -> str: return BrailleGlyph.MAPPING.get(self.tostring(), "?") The BitMap library is overkill and seems to get in the way. For example, the code converts the bits to a string, then converts the string to an int in base 2. 
But the bits were already stored as a byte in the bit map. I suspect it would be cleaner to implement the bit operations directly. It seems odd that a BrailleGlyph object should keep a list of the dots in the order they were added (history). That's like the letter 'A' knowing what order the lines were added. .history is there so the editor can implement an 'undo' feature, but that seems to be a job for the editor, not the BrailleGlyph class.
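The last two points, dropping BitMap in favour of direct bit operations and leaving undo to the editor, can be sketched with a plain integer mask. This is an illustrative rewrite, not a drop-in replacement, and the mapping table is truncated; the Unicode conversion also becomes trivial because the braille block at U+2800 uses exactly the dot-to-bit layout of the Dot enum:

```python
from enum import Enum

class Dot(Enum):
    TOP_LEFT = 0
    MIDDLE_LEFT = 1
    BOTTOM_LEFT = 2
    TOP_RIGHT = 3
    MIDDLE_RIGHT = 4
    BOTTOM_RIGHT = 5

# bitmask -> letter; same bit layout as the original MAPPING keys (truncated here)
MAPPING = {0b000001: "A", 0b000011: "B", 0b001001: "C"}

class BrailleGlyph:
    def __init__(self) -> None:
        self.bits = 0                 # six dots packed into one int

    def __str__(self) -> str:
        # Unicode braille block: bit n set <=> dot n raised, matching Dot.value
        return chr(0x2800 + self.bits)

    def click(self, dot: Dot) -> None:
        self.bits |= 1 << dot.value

    def unclick(self, dot: Dot) -> None:
        self.bits &= ~(1 << dot.value)

    def to_ascii(self) -> str:
        return MAPPING.get(self.bits, "?")

g = BrailleGlyph()
g.click(Dot.TOP_LEFT)
g.click(Dot.MIDDLE_LEFT)
print(g.to_ascii(), str(g))  # B ⠃
```

Note there is no history list here: undo would live in the editor, which simply remembers which Dot it set last and calls unclick.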
{ "domain": "codereview.stackexchange", "id": 41891, "tags": "python, python-3.x, reinventing-the-wheel" }
Prove Lecture Planning is NP-complete
Question: This is a practice problem from my algorithms class. (And no, it was not assigned as homework. I can't prove this, but you don't have to answer if you don't believe me.) To me this seems like a very difficult problem to show NP-completeness for since due to its abundance of features, it's tricky to mold a known NP-complete problem into it. You’ve been asked to organize a freshman-level seminar. The plan is to have the first portion of the semester consist of a sequence of $l$ guest lectures by outside speakers, and have the second portion of the semester devoted to a sequence of $p$ hands-on projects that the students will do. There are $n$ options for speakers overall, and in week number $i$ (for $i = 1, 2, \ldots, l$) a subset $L_i$ of these speakers is available to give a lecture. On the other hand, each project requires that the students have seen certain background material in order for them to be able to complete the project successfully. In particular, for each project $j$ (for $j = 1, 2, \ldots , p$), there is a subset $P_j$ of relevant speakers so that the students need to have seen a lecture by at least one of the speakers in the set $P_j$ in order to be able to complete the project. So this is the problem: Given these sets, can you select exactly one speaker for each of the first $l$ weeks of the seminar, so that you only choose speakers who are available in their designated week, and so that for each project $j$, the students will have seen at least one of the speakers in the relevant set $P_j$. We’ll call this the Lecture Planning Problem. Prove that Lecture Planning is NP-complete. I should note that it isn't explicitly stated in the problem, but I'm assuming that a speaker can only give at most one talk. (But if this seems to prevent the problem from being NP-complete, let me know.) 
At first I tried solving some graph-based problems (e.g., Independent Set) using Lecture Planning, but wasn't able to proceed because I'm not sure how you would partition vertices into subsets $L_i$ or $P_j$. Thus I decided to go to 3-SAT. Even then, it is not obvious how you would solve 3-SAT using this setup. E.g., what would the clauses be? My first thought was they could be the $l$ subsets $L_i$, and you would need a variable to be true in each; this would correspond to picking a speaker for that week. But then how do you incorporate the $P_j$ so as to ensure the formula is satisfied (if satisfiable)? Also thought about making the clauses the $P_j$, but again, this didn't really seem to work. I would greatly appreciate a hint. Thanks! Answer: Hint 1: For a SAT problem instance with $n$ variables construct a planning problem with $n$ lectures Hint 2: Each $L_i$ will have $2$ elements Hint 3: Clauses will be $P_j$, as you have already guessed.
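Spelling out the hints (with a representation chosen purely for illustration: literal $x_i$ as $+i$, $\lnot x_i$ as $-i$), week $i$ offers two "speakers", one per truth value of $x_i$, and each clause becomes a project. The at-most-one-talk assumption is automatic here, since speaker $\pm i$ is only ever available in week $i$:

```python
# Sketch of the reduction: a 3-SAT instance on n variables
# becomes a Lecture Planning instance with n weeks.
def sat_to_lecture_planning(n, clauses):
    L = [{+i, -i} for i in range(1, n + 1)]  # L_i: pick x_i true or false in week i
    P = [set(c) for c in clauses]            # P_j: any literal of clause j will do
    return L, P

# Example: (x1 v x2 v ~x3) ^ (~x1 v x3)
L, P = sat_to_lecture_planning(3, [(1, 2, -3), (-1, 3)])

# A schedule picks one available speaker per week; it is valid iff it meets
# every P_j -- i.e. iff the corresponding assignment satisfies every clause.
schedule = [1, -2, 3]                        # x1=True, x2=False, x3=True
assert all(schedule[i] in L[i] for i in range(3))
assert all(set(schedule) & Pj for Pj in P)
```

Satisfying schedules and satisfying assignments are in one-to-one correspondence, which is exactly the reduction the hints describe.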
{ "domain": "cs.stackexchange", "id": 15617, "tags": "complexity-theory, np-complete" }
Filtering a name Array in JavaScript
Question: I was wondering if there is a better, more elegant way to write the following JavaScript code block: var searchString = 'ma'; var names = ['Mark Kennel', 'Hellen Smith', 'Jane Mary Annet', 'Peter']; var filteredNames = names.filter(name => { var words = name.toLowerCase().split(' '); for(var i = 0; i < words.length; i++) { if (words[i].indexOf(searchString.toLowerCase()) === 0 ) return true; } }); This runs in a function that filters autocomplete input results. The filtering rule is: Show the names that have a word which begins with the searchString The above code (with searchString = 'ma') should return an array with two names: 'Mark Kennel' and 'Jane Mary Annet'. Answer: Well, better and more elegant are quite opinion based and depend on different things: a code can be better regarding performance but worse regarding clarity. That being said, I'd like to drop my two cents: since you just want to know if a given word begins with the input string (which is different from containing the input string) you can use lastIndexOf, setting fromIndex as 0. That way, you don't search the whole string, but just its beginning: var searchString = 'ma'; var names = ['Mark Kennel', 'Hellen Smith', 'Jane Mary Annet', 'Peter']; function filterNames(arr, str) { return arr.filter(function(thisName) { return thisName.toLowerCase().split(" ").some(function(d) { return d.lastIndexOf(str.toLowerCase(), 0) === 0 }) }) } console.log(filterNames(names, searchString)) Another obvious option here is using startsWith. However, unlike lastIndexOf, startsWith doesn't work on IE (if you care for that).
{ "domain": "codereview.stackexchange", "id": 31367, "tags": "javascript, array, ecmascript-6, autocomplete" }
Implement Factory pattern with multiple parameters and each parameters are interface
Question: I am a little bit confused on Factory Method with multiple parameters in which all parameters can change from GUI by user as seen below picture. For each combobox item I have an interface and concrete implementations. I have a SignalProcessor class which gets parameters as this 3 interfaces as below: public interface ISignalProcessor { double[] Process(double[] data); } public class SignalProcessor : ISignalProcessor { private IFft _fft; private IWindowing _windowing; private IInverseSpectrum _inverseSpectrum; private IDecimation _decimation; public SignalProcessor(IWindowing windowing, IFft fft, IInverseSpectrum inverseSpectrum, IDecimation decimation) { _windowing = windowing; _fft = fft; _inverseSpectrum = inverseSpectrum; _decimation = decimation; } public double[] Process(double[] data) { var windowingResult = _windowing.Calculate(data); var fftResult = _fft.Calculate(windowingResult); var inverseSpectrumResult = _inverseSpectrum.Calculate(fftResult); return _decimation.Calculate(inverseSpectrumResult); } } I decided to produce and use concrete classes according to the selected combobox values so the following factory class created. 
public static class FactorySP { public static ISignalProcessor Create(string windowingType, int fftSize, bool isInverse, string decimationType) { return new SignalProcessor(CreateWindowing(windowingType), CreateFft(fftSize), CreateInverseSpectrum(isInverse), CreateDecimation(decimationType)); } private static IWindowing CreateWindowing(string windowingType) { switch (windowingType) { case "Triangular": return new Triangular(); case "Rectangular": return new Rectangular(); case "Hanning": return new Hanning(); default: throw new ArgumentException($"Unknown windowing type: {windowingType}"); } } private static IFft CreateFft(int fftSize) { switch (fftSize) { case 128: return new Fft128(); case 256: return new Fft256(); case 512: return new Fft512(); default: return new FftNull(); } } private static IInverseSpectrum CreateInverseSpectrum(bool isInverse) { if (isInverse) return new InverseSpectrumTrue(); return new InverseSpectrumFalse(); } private static IDecimation CreateDecimation(string decimationType) { if (decimationType == "RealTimeDecimation") return new RealTimeDecimation(); return new SweepDecimation(); } } Then used as follows: _signalProcessor = FactorySP.Create(WindowingType, FftSize, InverseSpectrum, DecimationType); result = _signalProcessor.Process(Enumerable.Range(0, 100).Select(a => (double)a).ToArray()); Is there a better way to get what I want than that? I feel there is something missing in the method I use :) I know the Factory Method pattern is not like that, but otherwise I would have to create overloads for all combinations and permutations of factory classes. Should I use the builder pattern to create the SignalProcessor object? Answer: If I understand your intent and code properly then you have an abstract factory implementation rather than a factory method pattern implementation. In the Abstract Factory pattern, a class delegates the responsibility of object instantiation to another object via composition. The Factory Method pattern uses inheritance and relies on a subclass to handle the desired object instantiation.
Reference 24th slide | Detailed discussion about the differences Your FactorySP exposes several methods to create multiple different objects (a so-called object family), whereas a Factory Method's scope is a single object. And because of this an Abstract Factory can rely on multiple Factory Methods. "Is there a better way to get what I want than that?" As always, it depends on what you want. Do you want to hide the possibility of direct object creation? (Factory Method refactoring) Do you want to ease complex object creation? (Builder design pattern) Do you want to have an easy-to-read complex object creation interface? (Fluent interface pattern) etc.
{ "domain": "codereview.stackexchange", "id": 39577, "tags": "c#, design-patterns, factory-method" }
An aqueous solution of NaCl has a mole fraction of 0.21. What is the mass of NaCl dissolved in 100.0 mL of solution?
Question: Work mole fraction of NaCl = 0.21/(0.21+0.79) If there is 1 mol of total solution, there will be 0.21 mol NaCl and 0.79 mol H2O 0.21 mol NaCl = 12.27 g 0.79 mol H2O = 14.23 g this would yield a ratio of 12.27 : 14.23 inside the solution, giving 46.26 as the number of grams of NaCl inside 100 ml of solution. Problem I've looked around Google for answers to this problem and many of them come up with 86.4 grams of NaCl in the solution. I feel like my answer is logical, but I'm not sure. Could you explain the mistakes I made in my work, if any? Answer: The concentration you have calculated is % ($w/w$). In other words, what that means is mass of solute in $\pu{100 g}$ of solution. In your case, it is mass of $\ce{NaCl}$ in $\pu{100 g}$ of solution: $\frac{12.27}{(12.27+14.23)}\times 100=46.3$%. However, in your statement, you have declared it is % ($w/v$), which it is not. The other answers are also not quite correct, because they have assumed that when $\ce{NaCl}$ is dissolved in a certain volume of water, the volume stays the same (won't change). If you are allowed to assume that, then you can conclude (like the others) that when $\pu{12.27 g}$ of $\ce{NaCl}$ is dissolved in $\pu{14.23 g}$ of $\ce{H2O}$ (or $\pu{14.23 mL}$ of $\ce{H2O}$, assuming the density of water is $\pu{1.00 g/mL}$), the volume of the solution is still $\pu{14.23 mL}$. Based on that assumption, the % $(w/v)$ concentration of your solution is: $\frac{12.27}{14.23}\times 100=86.2$% (a value close enough to your internet finding). However, the most important piece of data missing from your question is the density ($\rho$) of the sought $\ce{NaCl}$ solution. That would give exactly the volume ($v$) of the solution, since $v=m/\rho$. Thus, the correct % $(w/v)$ concentration of your solution is: $\left(\frac{12.27}{\rho(12.27+14.23)}\times 100\right)$%.
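Both figures discussed above can be recomputed quickly in a few lines (the molar masses are the usual tabulated values):

```python
# Recompute the concentrations from the mole fraction x(NaCl) = 0.21.
M_NaCl, M_H2O = 58.44, 18.02   # molar masses, g/mol

n_NaCl, n_H2O = 0.21, 0.79     # moles per mole of solution
m_NaCl = n_NaCl * M_NaCl       # ~12.27 g
m_H2O = n_H2O * M_H2O          # ~14.24 g

w_w = 100 * m_NaCl / (m_NaCl + m_H2O)  # mass percent, ~46.3 %
w_v_naive = 100 * m_NaCl / m_H2O       # assumes solution volume = water volume, ~86.2 %
print(f"{w_w:.1f} % (w/w) vs {w_v_naive:.1f} % (w/v, naive)")  # 46.3 % (w/w) vs 86.2 % (w/v, naive)
```

This makes the disagreement explicit: the two numbers answer two different questions, and only the solution density decides the true w/v value.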
{ "domain": "chemistry.stackexchange", "id": 11491, "tags": "aqueous-solution, solutions, stoichiometry, concentration, mole" }
Counting the number of modes
Question: The exercise is the following Show that the number $N(\lambda) \, \mathrm d \lambda$ of standing electromagnetic waves (modes) in a large cube of volume $V$ with wavelengths within the interval $\lambda$ to $\lambda + \mathrm d \lambda$ is given by $$N(\lambda)\, \mathrm d \lambda = \frac{8\pi V}{\lambda^4}\, \mathrm d \lambda $$ My solution is wrong by a factor of $2$, but I can't figure out why. I'm hoping that someone here might be able to help me out. Thanks. I went about it the following way: In $k$-space the allowed values for standing waves in a cube of side length $L$ are given by $$k = \left(\frac\pi Ln_1, \frac\pi L n_2, \frac\pi L n_3\right)$$ where the $n_i$ are nonnegative integers. So each point in $k$-space takes up a volume $\left(\frac{\pi}{L}\right)^3$. If the wave vector of a standing wave is in the interval from $k$ to $k + \mathrm d k$, then it lies in the intersection of the positive octant with the spherical shell of thickness $\mathrm d k$ and radius $k$. This region has volume $\frac18 4\pi k^2 \, \mathrm dk = \frac{\pi k^2}{2} \, \mathrm dk$. So the number of standing waves in terms of the wave vector is given by $$N(k) \,\mathrm dk = \frac{\pi k^2 /2}{(\pi/L)^3}\, \mathrm d k = \frac{L^3 k^2}{2\pi^2}\, \mathrm dk= \frac{V k^2}{2\pi^2}\, \mathrm dk $$ Thus, in terms of $\lambda = \frac{2\pi}{k}$ I would get $$N(\lambda) \, \mathrm d\lambda = N(k) \, |\mathrm d(2\pi/\lambda)| = \frac{V \, (2\pi/\lambda)^2}{2\pi^2}\frac{2\pi}{\lambda^2}\, \mathrm d\lambda = \frac{4\pi V}{\lambda^4}\, \mathrm d \lambda$$ i.e. $N(\lambda) = 4\pi V/\lambda^4$. Can anyone spot where I lost the factor of $2$? Answer: For each wave vector, you have two modes of waves - with perpendicular polarisation. Think of the wave vector $(k_x,0,0)$. What are the possible directions of the electric field for this mode? The electric field is given by $$\vec E(\vec r) = \vec E^0 e^{i k_x x}$$ where $\vec E^0$ is a constant vector and I omitted the time dependence as it is not crucial.
Since there is no charge you must have $$0=\partial_x E_x+\partial_y E_y+\partial_z E_z= i k_x E_x$$ So the $x$-component of $\vec E^0$ must vanish. But there are no limitations on the $y,z$ components. These components are responsible for the discretization of the $\vec k$'s because the tangential electric field must vanish at the walls.
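The factor of 2 can also be checked by brute force: count the allowed lattice points in the positive octant, two polarizations each, and compare with the integral of the corrected density $N(k)\,\mathrm dk = \frac{V k^2}{\pi^2}\,\mathrm dk$. A quick sketch (cube side and cutoff are arbitrary choices):

```python
from itertools import product
from math import pi

L = 1.0                       # cube side (arbitrary units)
n_max = 60                    # count all modes with |k| <= k_max = n_max*pi/L
k_max = n_max * pi / L

N = 0
for n in product(range(n_max + 1), repeat=3):
    if sum(ni * ni for ni in n) <= n_max * n_max:
        N += 2                # two polarizations per allowed wave vector

predicted = L**3 * k_max**3 / (3 * pi**2)   # integral of V k^2/pi^2 dk up to k_max
print(N / predicted)          # close to 1 (boundary effects of a few percent)
```

The ratio approaches 1 as the cutoff grows, confirming that the density with the polarization factor is the right one.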
{ "domain": "physics.stackexchange", "id": 2070, "tags": "homework-and-exercises, statistical-mechanics, electromagnetic-radiation, polarization" }
A query regarding the definition of Conservation of Angular Momentum
Question: Assuming two hollow cylinders $C_{\text{in}}$ and $C_{\text{out}}$ with radius $r_{\text{in}}$ and $r_{\text{out}}$ such that $r_{\text{in}} < r_{\text{out}}$. Each cylinder is uniform and of length $L$. Now assuming $C_{\text{in}}$ is symmetrically placed inside $C_{\text{out}}$ without them touching. Both of them are rotating at $k$ rotations per second along the x-axis in our reference frame in space. The x-axis passes through their common center of mass. Does Conservation of Angular Momentum imply that no matter what physical/mechanical/chemical process these 2 cylinders try, they would not be able to change their rotational speed from $k$ rotations per second to anything different? We are assuming there is no 3rd object to change their angular momentum or apply torque or external force etc. Answer: If we take the two cylinders as one system, and if: $$\sum \vec\Gamma_{\text{external}}=0$$ where $\Gamma$ is torque ($\vec\Gamma=\vec r\times \vec F$) acting on the system about any axis, then we have: $$\sum L=\text{constant}$$ where $L$ is the angular momentum of the individual bodies. Now here, the first equation doesn't simply imply that $\sum F_{\text{external}}=0$, but rather $$\sum_i(\vec r_i \times \vec F_i)=0$$ where all $\vec F_i$s are external. Hope you got the main idea/concept!
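What the conservation law does and does not forbid can be sketched numerically: an internal equal-and-opposite torque changes the individual spin rates, but leaves the total angular momentum untouched. Moments of inertia and the torque profile below are made-up numbers:

```python
from math import pi

# Two coaxial cylinders exchanging an equal-and-opposite internal torque.
I1, I2 = 2.0, 5.0            # moments of inertia about the common x-axis, kg m^2
w1 = w2 = 2 * pi             # both start at k = 1 rotation per second (rad/s)

L_total0 = I1 * w1 + I2 * w2

tau, dt, steps = 0.3, 1e-3, 10_000   # +tau on cylinder 1, -tau on cylinder 2, for 10 s
for _ in range(steps):               # simple Euler integration
    w1 += (tau / I1) * dt
    w2 += (-tau / I2) * dt

print(w1 - w2)                       # no longer zero: individual rates changed
print(I1 * w1 + I2 * w2 - L_total0)  # ~0: total angular momentum unchanged
```

So the cylinders can trade spin between themselves through internal interactions, but with no third object the sum $I_1\omega_1 + I_2\omega_2$ stays fixed.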
{ "domain": "physics.stackexchange", "id": 83111, "tags": "angular-momentum, angular-velocity" }
Logic Programming: Transforming B:-A C:-A to B,C:-A
Question: I hope I've come to the right place... it's (probably) a fairly straightforward Logic Programming question. If I have two clauses of the form: B:-A C:-A I can transform these into: B,C:-A (Edit: where B,C is a conjunction. I'm doing bottom-up evaluation and it's useful to me to represent multiple clauses with the same body using one clause with a conjunction of the respective heads. This seems trivial, but I'm wondering if there's a name for such a transformation—however, I know that the resulting clause is no longer a Horn clause.) Does anyone know if this transformation has a name, and if so, can anyone provide a pointer (preferably online) to somewhere that describes it. Many thanks (from a n00b). Answer: So, this is an instance of the fact that $(A \supset B) \wedge (A \supset C)$ is equivalent to $A \supset (B \wedge C)$. A different type isomorphism, namely that $(B \supset A) \wedge (C \supset A)$ is equivalent to $(B \vee C) \supset A$, looks more like what you wrote down, but that's because in logic programming notation we write B :- A when we mean $A \supset B$. (Note: $\supset$ is "implies," $\wedge$ is "and," and $\vee$ is "or.") What you're trying to do is related to the binarization transformation, which is discussed in McAllester's "On the complexity analysis of static analyses." There may be a better name for the transformation, but if so, it wasn't known to the authors of the paper "A succinct solver for ALFP" (Alternating Least Fixedpoint formulas are a generalization of Horn clauses for bottom-up logic programs such as yours) - in Example 1 on page 4 they discuss a similar transformation and just call it "exploiting the possibility of sharing of pre-conditions."
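Both propositional equivalences mentioned above are classical tautologies, which can be confirmed by brute force over all truth assignments:

```python
# Brute-force check of the equivalence behind the head-merging transformation:
# (A -> B) and (A -> C)  is logically the same as  A -> (B and C).
from itertools import product

def implies(p, q):
    return (not p) or q

for A, B, C in product([False, True], repeat=3):
    lhs = implies(A, B) and implies(A, C)
    rhs = implies(A, B and C)
    assert lhs == rhs

# The dual equivalence from the answer also holds:
# (B -> A) and (C -> A)  iff  (B or C) -> A.
for A, B, C in product([False, True], repeat=3):
    assert (implies(B, A) and implies(C, A)) == implies(B or C, A)
```

Eight assignments suffice for each check, so exhaustive enumeration is a perfectly honest proof here.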
{ "domain": "cstheory.stackexchange", "id": 536, "tags": "reference-request, pl.programming-languages, terminology, logic-programming" }
Template for creating 2D games
Question: I am using the following template to program my 2D games in. Is there any way I can improve it? Splash screen: package com.dingle.template2d; import android.app.Activity; import android.content.Intent; import android.os.Bundle; public class template2d extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); //set screen setContentView(R.layout.splash); // thread for displaying the SplashScreen Thread splashTread = new Thread() { @Override public void run() { try { int waited = 0; boolean _active; int _splashTime; _active = true; _splashTime = 500; while(_active && (waited < _splashTime)) { sleep(100); if(_active) { waited += 100; } } startActivity(new Intent("com.dingle.template2d.MENU")); } catch(InterruptedException e) { // do nothing } finally { finish(); } } }; splashTread.start(); } } Menu screen: package com.dingle.template2d; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.view.Window; import android.widget.Button; public class menu extends Activity { /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); setContentView(R.layout.main); this.button(); } private void button(){ Button play_button = (Button)this.findViewById(R.id.play_button); play_button.setOnClickListener( new Button.OnClickListener() { public void onClick(View v) { // parentButtonClicked(v); startActivity(new Intent("com.dingle.template2d.GAME")); } }); } } GameView class: (this is where I put all the game code) package com.dingle.template2d; import android.app.Activity; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Canvas; import android.graphics.Color; import android.os.Bundle; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.Window; public class GameView extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); requestWindowFeature(Window.FEATURE_NO_TITLE); setContentView(new Panel(this)); } public class Panel extends SurfaceView implements SurfaceHolder.Callback{ private MainThread _thread; public Panel(Context context) { super(context); getHolder().addCallback(this); _thread = new MainThread(getHolder(), this); setFocusable(true); } final int windowHeight = getResources().getDisplayMetrics().heightPixels; final int windowWidth = getResources().getDisplayMetrics().widthPixels; final float tscale = getResources().getDisplayMetrics().density; final int scale = (int) tscale; public int _x = 0; public int _y = 0; Bitmap _scratch = BitmapFactory.decodeResource(getResources(), R.drawable.icon); /** * *I Declare everything here * **/ @Override public void onDraw(Canvas canvas) { /** * *I run all my code here * **/ } /* @Override public boolean onTouchEvent(MotionEvent event) { _x = (int) event.getX(); _y = (int) event.getY(); return true; }*/ @Override public void 
surfaceChanged(SurfaceHolder holder, int format, int width, int height) { // TODO Auto-generated method stub } @Override public void surfaceCreated(SurfaceHolder holder) { _thread.setRunning(true); _thread.start(); } @Override public void surfaceDestroyed(SurfaceHolder holder) { // simply copied from sample application LunarLander: // we have to tell thread to shut down & wait for it to finish, or else // it might touch the Surface after we return and explode boolean retry = true; _thread.setRunning(false); while (retry) { try { _thread.join(); retry = false; } catch (InterruptedException e) { // we will try it again and again... } } } } } Main Thread: package com.dingle.template2d; import com.dingle.template2d.GameView.Panel; import android.graphics.Canvas; import android.view.SurfaceHolder; public class MainThread extends Thread { private SurfaceHolder _surfaceHolder; private Panel _panel; private boolean _run = false; public MainThread(SurfaceHolder surfaceHolder, Panel panel) { _surfaceHolder = surfaceHolder; _panel = panel; } public void setRunning(boolean run) { _run = run; } @Override public void run() { Canvas c; while (_run) { c = null; try { c = _surfaceHolder.lockCanvas(null); synchronized (_surfaceHolder) { _panel.onDraw(c); } } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { _surfaceHolder.unlockCanvasAndPost(c); } } } } } Answer: Notions in no particular order; probably not what you're expecting though: On the unpredictability of sleep In the loop int waited = 0; while(_active && (waited < _splashTime)) { sleep(100); if(_active) { waited += 100; } } the sleep method does not guarantee that the time slept is actually 100 ms. It might be more or it might be less depending on the context and clock accuracy etc.
It might be better if you relied on the system clock instead of a counter: long started = System.currentTimeMillis(); while(_active && System.currentTimeMillis() - started < _splashTime) { sleep(100); } If you end up using the waited counter, the if(_active) conditions might not be necessary. On initial values If you have no particular reason for defining the variables' initial values on separate lines, you could define them on the same line int waited = 0; boolean _active; int _splashTime; _active = true; _splashTime = 500; becomes int waited = 0; boolean _active = true; int _splashTime = 500; On naming I've seen underscores used to denote instance variables, and after a quick skimming I thought _active and _splashTime were such. Instead they were local variables. Perhaps not using an underscore might be more conventional. You use underscores on instance variables as well, which is OK though I'm not a big fan of it. There's also some inconsistency with variable names with multiple words: compare surfaceHolder and play_button. In Java it's conventional that variable names are in camel-case without spaces; playButton would be better. Names of classes usually begin with a capital letter e.g. Template2d, Menu. On constants It seems to me you're trying to declare a constant _splashTime. Why not just do so with public class template2d extends Activity { private static final int SPLASH_TIME_IN_MILLISECONDS = 500; } On visibility I would prefer if all instance variables were either private or final. It's at least not a bad idea to keep the scope of your variables as small as possible.
In Canvas c; while (_run) { c = null; try { c = _surfaceHolder.lockCanvas(null); synchronized (_surfaceHolder) { _panel.onDraw(c); } } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { _surfaceHolder.unlockCanvasAndPost(c); } } } the canvas variable c exists outside of the while-loop while it isn't used anywhere but in it. You could declare the variable inside the loop to limit its scope while (_run) { Canvas c = null; try { c = _surfaceHolder.lockCanvas(null); synchronized (_surfaceHolder) { _panel.onDraw(c); } } finally { // do this in a finally so that if an exception is thrown // during the above, we don't leave the Surface in an // inconsistent state if (c != null) { _surfaceHolder.unlockCanvasAndPost(c); } } } On order of things The contents of a class are usually in an order not unlike the following The class signature public static constants private static constants private static variables private constants private variables constructors public methods private methods It makes the code harder to follow if there are instance variable declarations in more than one place. Lately I've been experimenting with how it feels when everything private is tucked down to the bottom of the class and I like it. Coding conventions trump personal preferences though. On doing just one thing Each class and object should preferably do just one thing i.e. they should have a single responsibility. At least the class Panel contains the responsibilities of drawing the surface and responding to events. These two things could be split into two different classes if it makes sense. On different ways of writing the same code In this code retry = true; while (retry) { try { _thread.join(); retry = false; } catch (InterruptedException e) { // we will try it again and again...
} } You could achieve the same thing with a break statement to get rid of an extra variable while (true) { // or for(;;) try { _thread.join(); break; } catch (InterruptedException e) { // we will try it again and again... } } or defining the logic in a method to describe your intent more clearly while(!hasThreadStopped()) { // Retry until thread stops } ... private boolean hasThreadStopped() { try { _thread.join(); return true; } catch (InterruptedException e) { return false; } }
{ "domain": "codereview.stackexchange", "id": 12645, "tags": "java, game, android, template" }
Electric field inside a conductor using gauss theorem
Question: The electric field inside a conductor is supposed to be zero, but if we take a hollow conductor and place a charge inside it, isn't an electric field present at point $P$ (through Gauss theorem)? So, when we say the electric field is zero, does it mean that $E$ is zero literally inside the conductor, i.e. the black part? Answer: "Inside a conductor" means actually within a conductor, not just enclosed by a conductor. i.e. the point in space you are looking at is occupied by a conductor. In your image, point P is not "inside a conductor", even though it is enclosed within the conducting shell. If it helps, think of it as "the electric field is always zero within a (perfectly) conducting material." So applying the principle correctly, $E$ is $0$ within the black ring itself, but not in the interior of the shell. A charge imbalance will be induced on the inner and outer conductor surfaces so that the field is zero within the conducting material.
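To make the induced-charge statement quantitative (a standard textbook step, added here for illustration, assuming a point charge $q$ in the cavity): apply Gauss's law to a surface $S$ drawn entirely within the conducting material, where $\vec E = 0$ everywhere on $S$,

```latex
\oint_S \vec E \cdot d\vec A
  \;=\; \frac{Q_{\text{enc}}}{\varepsilon_0}
  \;=\; \frac{q + q_{\text{inner}}}{\varepsilon_0}
  \;=\; 0
\quad\Longrightarrow\quad
q_{\text{inner}} = -q.
```

So the inner surface carries $-q$ and, for an overall neutral shell, the outer surface carries $+q$; the field at $P$ inside the cavity remains nonzero, while the field within the conducting material itself cancels.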
{ "domain": "physics.stackexchange", "id": 79748, "tags": "electric-fields, gauss-law" }
Mass dimension of an $n$-particle scattering amplitude in 4D
Question: For the 4-dimensional case, and using the cross-section formula, how can we show that the mass dimensions of an $n$-particle amplitude must be $$[A_n] = 4-n~?\tag{2.99}$$ My understanding is that the cross-section must have dimensions of an area, but I don't quite understand how I can then find the dimensions of a scattering amplitude. I am assuming that the differential cross-section is the amplitude squared, and trying to work backwards. $$\frac{d\sigma}{d\Omega}=|A|^2,$$ where $A$ is the amplitude. Is this the correct path? Reference: https://arxiv.org/abs/1308.1697, equation 2.99. EDIT: Allow me to add a screenshot which may clarify some of my confusion. After reading the answer below and being comfortable with the methods (and finding some agreeing literature in a PhD thesis and other textbooks), I am still not sure why this doesn't match up to the paper I'm using. There are a lot of funky conventions, which I suppose is a hazard of dealing with amplitudes. Please see this screenshot: The screenshot shows their working, which I believe restricts to tree-level amplitudes only. Is there any contradiction? Answer: The claimed result $[A_n]=4-n$ is correct, and so is the reasoning of Elvang and Huang in the quoted text in the OP. Notice that $n$ is the total number of particles involved in the process, the in+out particles. In particular, the mass dimension certainly does not depend on the loop order that is needed to generate $A_n$, and so one is free to determine $[A_n]$ by reasoning with tree-level amplitudes only, and Elvang and Huang do so. But let me present an alternate derivation, just for the sake of discussion. The mass dimension $[A_n]$ is likewise independent of the spin of the particles involved in the process, and so I can calculate $[A_n]$ for any spin just by calculating $[A_n]$ for processes involving spin-0 only. Moreover, it's independent of whether they are particles or antiparticles, so I will use just a real scalar field.
I do so using the LSZ prescription: Fourier transform the correlator $\langle \Phi(x_1)\ldots \Phi(x_n)\rangle$, look at the $n$ one-particle simple poles by multiplying it by $\prod_{i=1}^n (p_i^2-m_i^2)$, and remove a $\delta^4(\sum p)$ to pass from $S$ to $A_n$, namely $S=1+(2\pi)^4i\delta^4(\sum p)A_n$. Since each $\Phi$ adds $[\Phi(x)]=1$, each Fourier transform adds $[\int d^4 x]=-4$, each residue adds $[p_i^2-m_i^2]=2$, and removing the delta function adds $[1/\delta^4(\sum p)]=4$, we get $$ [A_n]=n-4n+2n+4=4-n $$ as claimed.
{ "domain": "physics.stackexchange", "id": 54810, "tags": "quantum-field-theory, dimensional-analysis, scattering-cross-section, s-matrix-theory" }
Why can the bra and ket be varied independently?
Question: Given a functional which depends on a function (ket), and its complex conjugate (bra), e.g. $$F[\varphi] = \langle \varphi|\hat{F}|\varphi\rangle = \int \varphi^{*}(\mathbf{r}) \hat{F} \varphi(\mathbf{r}) \, \mathrm{d}\mathbf{r} $$ I have been told that we can vary the bra and ket independently, i.e. the first variation of $F$ in the bra is given by $$\delta F = \int \frac{\delta F}{\delta \varphi^{*}} \eta(\mathbf{r}) \, \mathrm{d}\mathbf{r} = \frac{\mathrm{d}}{\mathrm{d}\epsilon}\left[ \int (\varphi^{*}(\mathbf{r})+\epsilon\eta(\mathbf{r})) \hat{F} \varphi(\mathbf{r}) \mathrm{d}\mathbf{r}\right]_{\epsilon = 0},$$ and not $$\delta F = \int \frac{\delta F}{\delta \varphi^{*}} \eta(\mathbf{r}) \, \mathrm{d}\mathbf{r} = \frac{\mathrm{d}}{\mathrm{d}\epsilon}\left[ \int (\varphi^{*}(\mathbf{r})+\epsilon\eta(\mathbf{r})) \hat{F} (\varphi(\mathbf{r})+\epsilon\eta(\mathbf{r})) \mathrm{d}\mathbf{r}\right]_{\epsilon = 0},$$ as one might expect. If the above is correct, how can it be shown that the bra and the ket can be independently varied? Answer: This has nothing to do with "bras" or "kets" and more with the elementary observation that a complex number has two real degrees of freedom, and that derivatives are with respect to one real degree of freedom. The $\frac{\partial}{\partial\phi}$ and $\frac{\partial}{\partial\phi^\ast}$ are the Wirtinger derivatives, which in particular fulfill $\frac{\partial\phi^\ast}{\partial\phi} = 0$, i.e. the derivative of something with respect to its conjugate is zero. This naturally generalizes to the functional derivatives with respect to a complex function.
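Concretely (an illustrative expansion of the answer's point): writing $\varphi = u + iv$ with real $u, v$, the Wirtinger derivatives and the key identity are

```latex
% Wirtinger derivatives for \varphi = u + iv:
\frac{\partial}{\partial\varphi}
  = \frac{1}{2}\Big(\frac{\partial}{\partial u} - i\,\frac{\partial}{\partial v}\Big),
\qquad
\frac{\partial}{\partial\varphi^{\ast}}
  = \frac{1}{2}\Big(\frac{\partial}{\partial u} + i\,\frac{\partial}{\partial v}\Big).

% Applying the first to \varphi^\ast = u - iv and to \varphi = u + iv:
\frac{\partial\varphi^{\ast}}{\partial\varphi}
  = \frac{1}{2}\big(1 - i\,(-i)\big) = \frac{1}{2}(1-1) = 0,
\qquad
\frac{\partial\varphi}{\partial\varphi}
  = \frac{1}{2}\big(1 - i\,(i)\big) = \frac{1}{2}(1+1) = 1.
```

So varying with respect to $\varphi^{\ast}$ while holding $\varphi$ fixed is just a repackaging of independent variations in the two real degrees of freedom $u$ and $v$.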
{ "domain": "physics.stackexchange", "id": 28280, "tags": "lagrangian-formalism, field-theory, complex-numbers, variational-calculus, functional-derivatives" }
Empirical evidence for species selection
Question: Do we have any empirical evidence in favor of species (or lineage) selection? Do we know some cases that can only be explained (or seem to be only explained) by lineage selection? What are today the best study cases in favor of lineage selection? If not, how could we find any evidence for lineage selection? Note: I call lineage selection a differential in speciation and/or extinction rate that drives the proportion of lineages (species for example) displaying a given trait (emergent traits or not) to increase or decrease over time. Answer: A commonly used empirical example of species selection (a.k.a. clade selection, lineage selection) is pelagic larvae in sessile ocean species. See Maliska et al (2013) for a recent paper discussing this in Tunicata and Jablonski & Hunt (2006) for larval modes in gastropods. The idea is to some extent really intuitive - pelagic larvae mean higher dispersal rates, so the species will be exposed to and can colonize new environments. This can lead to speciation through adaptive radiation. Larger ranges also mean lower extinction rates (e.g. through less population synchrony). If the trait is fixed within a lineage, this lineage as a whole will, as compared to a sister lineage lacking the trait, contain more species (i.e. species selection). However, you can also argue that pelagic larvae will lead to higher gene flow between populations, which could inhibit speciation. Therefore, the effect of pelagic larvae, and how these mechanisms are weighted in practice, is really an empirical question (see Duda & Palumbi, 1999 and Jablonski & Hunt, 2006 for discussions on this issue). Jablonski (2008) provides a nice general review of species selection (concept, mechanisms, theoretical models and data), and Okasha (2006) (chap. 7) is good for a theoretical/philosophical treatment of levels of selection. Also remember that traits are selected for at the individual level, but can (especially if fixed) still act on a lineage level.
{ "domain": "biology.stackexchange", "id": 1454, "tags": "evolution, natural-selection, extinction, speciation, selection" }
How to hit / deflect a photon?
Question: If you were trying to scatter a photon, what would be the best thing to try to fire at? Another photon? An electron? A proton? Does the energy of the thing I'm firing increase the probability of scattering the photon? Answer: Photons are elementary point particles in the standard model of particle physics. Interactions of elementary particles depend on quantum mechanics, and are studied using quantum field theory. There are four fundamental forces through which particles interact with each other; photons interact mainly through the electromagnetic force. "Mainly" because the calculations are done with an expansion in a mathematical series in a perturbation theory. To scatter a photon with high probability one should choose a charged particle from the table of particles linked above. Photon-photon scattering is very improbable at low energies, though at high energies the probability goes up. These are the lowest order scattering Feynman diagrams for photon-electron scattering. In general scattering cross sections grow with the energy available for the interaction. Photons will also scatter from the electric and magnetic fields of atoms and molecules.
{ "domain": "physics.stackexchange", "id": 67634, "tags": "particle-physics, photons" }
Batch file to copy specific folders
Question: This is the batch file I have created, which copies the specific folders I want. I pass in the specific server folder name that I want to copy. Please suggest any improvements. @echo off :: variables echo This script takes the backup of file SwiftALM Important folders set /P source=Enter source folder Example D:\jboss6.1\server\swift: set /P destination=Enter Destination folder: set /P Folder=Enter Folder name: @echo folder=%folder% mkdir %destination%\%folder% set xcopy=xcopy /E/V/Q/F/H/I echo echo conf folder will be copied %xcopy% %source%\conf %destination%\%folder%\conf echo conf folder is copied echo lib folder will be copied %xcopy% %source%\lib %destination%\%folder%\lib echo lib folder is copied echo deploy folder will be copied %xcopy% %source%\deploy %destination%\%folder%\deploy echo deploy folder is copied echo deployers folder will be copied %xcopy% %source%\deployers %destination%\%folder%\deployers echo deplyers folder is copied echo files will be copy press enter to proceed pause Answer: SETLOCAL ensures that any environment variables you set don't affect the calling process's environment. It's like a sandbox for variables, directory changes, and other shell settings. I usually use @ECHO instead of plain ECHOs so that if the @ECHO OFF is ever temporarily disabled (say, for debugging purposes), the echo statements don't produce needless noise in the output. I also suggest using a loop to repeat the nearly-duplicate statements (note: a plain FOR iterates over a list of tokens; FOR /F would instead try to parse files of those names): @ECHO OFF SETLOCAL :: variables @ECHO This script takes the backup of file SwiftALM Important folders SET /P source=Enter source folder Example D:\jboss6.1\server\swift: SET /P destination=Enter Destination folder: SET /P folder=Enter Folder name: @ECHO folder=%folder% MKDIR %destination%\%folder% SET xcopy=xcopy /E/V/Q/F/H/I FOR %%s IN (conf lib deploy deployers) DO ( @ECHO %%s folder will be copied %xcopy% %source%\%%s %destination%\%folder%\%%s IF NOT ERRORLEVEL 1 @ECHO %%s folder has been copied.
) @ECHO Files have been copied. Press enter to proceed. PAUSE
{ "domain": "codereview.stackexchange", "id": 10790, "tags": "batch" }
PID controller making use of the Message type sensor_msgs/JointState
Question: Hi guys, I am looking for a PID controller which is capable of understanding the message type sensor_msgs/JointState. As far as I can see, those which are available use a custom message type, but none uses sensor_msgs/JointState. Do any of you know one? I've looked into some, and http://wiki.ros.org/pid seems to be what I am looking for, with the exception of the message type not being the one I want. Is somebody able to convert it? Originally posted by 215 on ROS Answers with karma: 156 on 2015-05-23 Post score: 0 Answer: Wikipedia already provides pseudocode.. Originally posted by 215 with karma: 156 on 2015-05-24 This answer was ACCEPTED on the original site Post score: 0
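For reference, the textbook discrete PID update the answer alludes to is only a few lines. This sketch is mine (not from the `pid` package; names are illustrative), and its `measurement` argument could be fed from the `position` or `velocity` fields extracted in a `sensor_msgs/JointState` callback:

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # No derivative term on the very first sample (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1)
effort = pid.update(setpoint=1.0, measurement=0.25, dt=0.01)
print(effort)
```

In a ROS node one would call `update` once per incoming JointState message, using the message timestamps to compute `dt`, and publish the returned effort as the command.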
{ "domain": "robotics.stackexchange", "id": 21763, "tags": "control, ros, pid" }
viso2_ros covariance matrix and robot_pose_ekf
Question: Hi, We want to use the viso2_ros package with a Bumblebee camera to perform visual odometry and feed it to robot_pose_ekf. robot_pose_ekf requires covariance matrices to be published in the odometry it receives, giving an error if the input covariance is zero. In the documentation of viso2_ros it says that covariance matrices are not published, so we are unsure if it will be possible to combine viso2_ros with robot_pose_ekf. In another answer (answers.ros.org/question/34323/viso2_ros-and-robot_pose_ekf/), it is suggested to use it directly (publishing viso2_ros/odometry to robot_pose_ekf/vo), but according to the documentation this shouldn't work, as viso2 doesn't publish covariance matrices, and the following comments in the answer are not conclusive on the results. We have been unable to test it ourselves yet since we are still setting up everything. Before continuing, we would like to know if anyone has used viso2_ros with robot_pose_ekf with success and how they have done it. Also, has anyone estimated the covariances for the viso2_ros process, to insert them manually in the published topic? Thank you and best regards, Ivan. Originally posted by IvanV on ROS Answers with karma: 329 on 2013-12-11 Post score: 0 Answer: libviso2 does not provide a covariance matrix in the code. In our wrapper we're publishing fixed value covariance matrices, but they've not been tested with robot_pose_ekf. Look here: https://github.com/srv/viso2/blob/fuerte/viso2_ros/src/stereo_odometer.cpp#L23 Originally posted by Miquel Massot with karma: 1471 on 2013-12-28 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16430, "tags": "ros, navigation, viso2-ros, robot-pose-ekf" }
Why DMSO is used as a control?
Question: Coming from a non-biology background, I've realised many academic papers on experiments use DMSO as a control. This is an example: KN-93, a specific inhibitor of CaMKII inhibits human hepatic stellate cell proliferation in vitro What makes DMSO so special that so many scientists use it for their experiments and report their results? Why do we need it in the first place? Answer: DMSO can be used as a solvent for a range of experiments. In these cases it makes sense to use the solvent as the control.
{ "domain": "biology.stackexchange", "id": 8641, "tags": "pharmacology, experimental" }
How to get pose from april tag topic tag_detection?
Question: Hi, Currently, I have a bag file which publishes the /tag_detections coming from the apriltag node. I'm having trouble writing a callback function which could extract information like pose from the topic. The details of the topic are: rosmsg show apriltag_ros/AprilTagDetectionArray std_msgs/Header header uint32 seq time stamp string frame_id apriltag_ros/AprilTagDetection[] detections int32[] id float64[] size geometry_msgs/PoseWithCovarianceStamped pose std_msgs/Header header uint32 seq time stamp string frame_id geometry_msgs/PoseWithCovariance pose geometry_msgs/Pose pose geometry_msgs/Point position float64 x float64 y float64 z geometry_msgs/Quaternion orientation float64 x float64 y float64 z float64 w float64[36] covariance The code I have managed to write so far is: #include "ros/ros.h" #include "apriltag_ros/AprilTagDetectionArray.h" void callback(const apriltag_ros::AprilTagDetectionArray::ConstPtr &msg) { //What to do? } int main(int argc, char **argv) { ros::init(argc, argv, "apriltag_location"); ros::NodeHandle n; ros::Subscriber sub = n.subscribe("tag_detections",1000,callback); ros::spin(); } Could someone point out how to get the pose from here? Originally posted by dj95 on ROS Answers with karma: 52 on 2020-02-14 Post score: 0 Answer: The pose is the msg->detections[i].pose of type geometry_msgs/PoseWithCovarianceStamped for all N detections. Originally posted by stevemacenski with karma: 8272 on 2020-02-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dj95 on 2020-02-14: Could you explain what 'i' here would be? geometry_msgs::PoseWithCovariance x; x = msg->detections[0].pose.pose; ROS_INFO("[%f]",x.pose.position.x); Writing it like this gives me the message 'Segmentation fault (core dumped)' Comment by stevemacenski on 2020-02-14: I think your issue here is really not understanding C++ rather than a ROS-specific issue.
I'd recommend going through some C++ STL tutorials and then coming back to revisit to this. Comment by dj95 on 2020-02-14: That is true, thanks for the input. Comment by stevemacenski on 2020-02-14: If I've answered your question, can you hit the checkmark so its removed from our unanswered questions queue?
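A ROS-free sketch of the same traversal (mine, for illustration; `SimpleNamespace` just mimics the nested message layout given by the `rosmsg show` output above): `msg.detections` is an array, and each detection's pose sits three levels deep (`PoseWithCovarianceStamped` → `PoseWithCovariance` → `Pose`). Indexing `detections[0]` when the array is empty, i.e. when no tag is in view, is a typical cause of the segfault mentioned in the comments; iterating avoids it.

```python
from types import SimpleNamespace as NS

def poses(msg):
    # One (id, position) pair per detection; an empty detections array
    # simply yields an empty list instead of crashing.
    return [(det.id, det.pose.pose.pose.position) for det in msg.detections]

# Fake message with one detection, shaped like AprilTagDetectionArray.
msg = NS(detections=[
    NS(id=[7], pose=NS(pose=NS(pose=NS(position=NS(x=0.1, y=0.2, z=1.5))))),
])
for tag_id, p in poses(msg):
    print(tag_id, p.x, p.y, p.z)

empty = NS(detections=[])
print(poses(empty))  # -> [] : no crash when nothing is detected
```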
{ "domain": "robotics.stackexchange", "id": 34442, "tags": "c++, ros-melodic" }
How to find the number of Occupied grids in RTAB-Map global map?
Question: Is there a way to get the total number of grids and the number of occupied grids from the final global map in RTAB-Map? In the RTAB-Map code there are variables that can list these numbers for each local map, however I need these numbers for the final global map. PS: I am using ROS Kinetic on Ubuntu 16.04 VM. RTAB-Map version: 0.19.6 Best regards, Malar. Originally posted by MalarJN on ROS Answers with karma: 25 on 2020-07-03 Post score: 0 Answer: If you subscribe to /rtabmap/grid message, you could count each cell of each type directly. Originally posted by matlabbe with karma: 6409 on 2020-07-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by MalarJN on 2020-07-12: Thank you @matlabbe
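Counting cell types from a subscribed grid message reduces to tallying the values in its flat `data` array. This is my sketch (not from RTAB-Map; it assumes the `nav_msgs/OccupancyGrid`-style convention of -1 for unknown, 0 for free, and 100 for occupied):

```python
from collections import Counter

def count_cells(data):
    # data: flat list of occupancy values from an OccupancyGrid-style message.
    c = Counter(data)
    return {"unknown": c[-1], "free": c[0], "occupied": c[100], "total": len(data)}

print(count_cells([-1, -1, 0, 0, 0, 100]))
# -> {'unknown': 2, 'free': 3, 'occupied': 1, 'total': 6}
```

In a node this would be called from the `/rtabmap/grid` callback with `msg.data`; intermediate occupancy values, if the map publishes them, would need their own bucket.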
{ "domain": "robotics.stackexchange", "id": 35222, "tags": "ros-kinetic, rtabmap" }
Nitrogen lone pair electrons on 2-acetyl-1-pyrroline
Question: Please consider 2-acetyl-1-pyrroline: http://en.wikipedia.org/wiki/2-Acetyl-1-pyrroline What is the molecular geometry of the nitrogen lone pair? Is the lone pair oriented in-plane with the ring? How is this distinct from the orientation of the carbon-acetyl bond, which should presumably point out-of-plane with the ring? Answer: The nitrogen lone pair is in plane with the ring because it is in one of the three $sp^2$ hybrid orbitals of N. The other two hybrid orbitals form $\sigma$ bonds to the adjacent carbons, and all three are in plane, with an angle of 120° between them. The $p_z$ orbital of N which is perpendicular to the ring plane forms the $\pi$ bond of the C=N double bond. The acetyl substituent should also lie in plane with the ring because this would allow energetically favorable conjugation between the two double bonds (C=N and C=O).
{ "domain": "chemistry.stackexchange", "id": 1190, "tags": "organic-chemistry, carbonyl-compounds, molecular-structure" }
Assumptions/Limitations of Random Forest Models
Question: What are the general assumptions of a Random Forest model? I could not find them by searching online. For example, in a linear regression model, limitations/assumptions are: It may not work well when there are non-linear relationships between dependent and independent variables. It may not work if the independent variables considered in the model are linearly related (collinear). Therefore one has to remove correlated variables by some other technique. It assumes that model errors are uncorrelated and uniform (no heteroscedasticity). Are there any assumptions/limitations along similar lines? Answer: Reproducing the accepted answer from CrossValidated here, as it is the most complete and the best answer to this question. In order to understand this, remember the "ingredients" of a random forest classifier (there are some modifications, but this is the general pipeline): At each step of building an individual tree we find the best split of the data While building a tree we use not the whole dataset, but a bootstrap sample We aggregate the individual tree outputs by averaging (actually 2 and 3 together constitute the more general bagging procedure). Consider the first point. It is not always possible to find the best split. For example in the following dataset each split will give exactly one misclassified object. And I think that exactly this point can be confusing: indeed, the behaviour of the individual split is somewhat similar to the behaviour of a Naive Bayes classifier: if the variables are dependent - there is no better split for Decision Trees and the Naive Bayes classifier also fails (just to remind: independent variables are the main assumption that we make in the Naive Bayes classifier; all other assumptions come from the probabilistic model that we choose). But here comes the great advantage of decision trees: we take any split and continue splitting further. And for the following splits we will find a perfect separation (in red).
And as we have no probabilistic model, but just a binary split, we don't need to make any assumption at all. That was about Decision Trees, but it also applies to Random Forest. The difference is that for Random Forest we use Bootstrap Aggregation. It has no model underneath, and the only assumption that it relies on is that sampling is representative. But this is usually a common assumption. For example, if one class consists of two components and in our dataset one component is represented by 100 samples, and another component is represented by 1 sample - probably most individual decision trees will see only the first component and Random Forest will misclassify the second one.
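The "take any split and continue splitting further" point can be made concrete with a tiny pure-Python decision tree (my sketch; the function names are made up and the split search is deliberately minimal). On XOR-labelled data no single axis-aligned split is clean, yet two levels of greedy splits classify everything correctly, with no independence assumption anywhere:

```python
def best_split(points):
    # Try each axis with a 0.5 threshold; return (errors, axis, threshold)
    # for the split that misclassifies the fewest points.
    best = None
    for axis in (0, 1):
        thr = 0.5
        left = [l for p, l in points if p[axis] <= thr]
        right = [l for p, l in points if p[axis] > thr]
        errs = min(left.count(0), left.count(1)) + min(right.count(0), right.count(1))
        if best is None or errs < best[0]:
            best = (errs, axis, thr)
    return best

def majority(labels):
    return max(set(labels), key=labels.count)

def grow(points, depth=0):
    labels = [l for _, l in points]
    if len(set(labels)) == 1 or depth == 2:
        return majority(labels)            # leaf
    _, axis, thr = best_split(points)
    left = [(p, l) for p, l in points if p[axis] <= thr]
    right = [(p, l) for p, l in points if p[axis] > thr]
    if not left or not right:
        return majority(labels)            # degenerate split: stop
    return (axis, thr, grow(left, depth + 1), grow(right, depth + 1))

def predict(tree, point):
    while isinstance(tree, tuple):
        axis, thr, lo, hi = tree
        tree = lo if point[axis] <= thr else hi
    return tree

# XOR: no single split separates the classes, but the children of the
# first split are then separable, so the grown tree is exact.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
tree = grow(xor)
print(all(predict(tree, p) == l for p, l in xor))  # -> True
```

A random forest would grow many such trees on bootstrap samples and average them, which is where the representative-sampling assumption discussed above comes in.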
{ "domain": "datascience.stackexchange", "id": 523, "tags": "random-forest, ensemble-modeling" }
How do I set depthimage resolution
Question: Hi, This must be simple to do: how do I set the resolution of a depthimage? For some reason my xtion is publishing 120x160 pixels. I'm using: rosrun openni2_camera openni2_camera_node To start the process (roslaunch isn't working for me, unfortunately) Many Thanks Mark Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2014-09-01 Post score: 0 Answer: Answer Here Originally posted by MarkyMark2012 with karma: 1834 on 2014-09-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19251, "tags": "ros, resolution" }
Massless bosons but not massless fermions?
Question: I noticed an article on massless Weyl fermions and it got me thinking. I'm wondering if there is any explanation for why bosons (specifically gauge bosons) can be massless (photon and gluon) but we don't see any fundamental massless fermions (working from the most likely confirmed hypothesis that neutrinos are massive). I know that the $W^\pm$ and $Z$ get mass from spontaneous symmetry breaking, so obviously not all gauge bosons are massless, but why do we see no fundamental massless fermions? Answer: The mechanism for "giving mass" to elementary bosons and fermions is different. With bosons, it is related to the gauge symmetry ($SU(3)_c \times SU(2)_L \times U(1)_Y$) which is partially broken (becoming $SU(3)_c \times U(1)_{em}$). The unbroken part requires its associated bosons (gluons and the photon) to be massless to respect this symmetry. With fermions, there is no such constraint since their mass does not come from a gauge symmetry (with our current knowledge, fermion masses are put in by hand via ad hoc Yukawa couplings). Therefore, the masses of the fermions are not predicted (contrary to the masses of the bosons). So, asking "why do we see no fundamental massless fermions?" is equivalent to asking "why do we see fundamental fermions with their actual masses?". Answer: we don't know!
{ "domain": "physics.stackexchange", "id": 23524, "tags": "mass, fermions, higgs, beyond-the-standard-model, bosons" }
Divergence in the total momentum operator in QFT
Question: The classical expression for the total momentum is $$P^{i} = -\int d^3x \, \pi(x) \, \partial_{i} \phi(x),$$ which, after second quantisation, using $$\hat{\phi}(x) = \int \frac{d^3k}{(2 \pi)^3} \, \frac{1}{ \sqrt{2 E_{k}}} \left( \hat{a}_{k} + \hat{a}^{\dagger}_{-k} \right) e^{i k \cdot x}$$ and $$\hat{\pi}(x) = \int \frac{d^3k}{(2 \pi)^3} \, (-i) \sqrt{\frac{E_{k}}{2}} \left( \hat{a}_{k} - \hat{a}^{\dagger}_{-k} \right) e^{i k \cdot x},$$ becomes: $$\int \frac{d^3p}{(2\pi)^3} \, p^i \left( \hat{a}_{p}^{\dagger} \hat{a}_{p} + \frac12 (2\pi)^3 \delta^{(3)}(0) \right).$$ What is the significance of the diverging second term? I know we get a similar expression for the ground state energy, which we just ignore by arguing that absolute energy is not an observable since we can only measure energy differences. But momentum is surely an observable? We can measure absolute momentum, yes? Answer: Well, quantization of a classical system may not be a unique procedure. Classically, all variables commute, and, say, $\pi ~\partial_i \phi = \partial_i\phi~\pi.$ When we quantize, why should we choose $\hat{\pi}\hat{\phi}$ ordering, as OP does in his example? Why not instead the opposite $\hat{\phi}\hat{\pi}$ ordering? Or perhaps symmetrize? Or use normal ordering? Or something else? As FenderLesPaul points out in a comment, the operator ordering ambiguity happens to disappear in OP's case. But this is not always the case. More generally, during quantization, we often replace classical expressions with normal ordered expression plus intercept parameters that parametrize our ignorance on how to order operators. Consistency and other physical principles might later fix these ambiguities for us. See also my Phys.SE answer here.
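A sketch of why the ordering ambiguity disappears here, as the answer notes in passing (the symmetric-regularization assumption is mine): the divergent term multiplies an integrand odd in $p^i$, so under any cutoff symmetric in $\mathbf{p} \to -\mathbf{p}$ it vanishes, and normal ordering gives the finite operator directly.

```latex
% The divergent piece has an odd integrand and vanishes under a
% p -> -p symmetric regulator:
\frac{(2\pi)^3\,\delta^{(3)}(0)}{2}
  \int \frac{d^3p}{(2\pi)^3}\, p^i \;=\; 0,
\qquad\text{so}\qquad
\hat{P}^i \;=\; {:}\hat{P}^i{:}
  \;=\; \int \frac{d^3p}{(2\pi)^3}\, p^i\,
        \hat{a}^{\dagger}_{p}\,\hat{a}_{p}.
```

This is unlike the Hamiltonian, where the analogous term multiplies $E_p > 0$ and genuinely diverges, which is why the ordering prescription matters there but not for $\hat{P}^i$.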
{ "domain": "physics.stackexchange", "id": 18704, "tags": "quantum-field-theory, operators, quantization" }
Is BAO a standing or moving wave?
Question: The sound horizon is the distance that a wave in the plasma can travel from the end of Inflation to Recombination (roughly 300,000 years). In several papers and talks, this is described as a moving wave (see https://www.youtube.com/watch?v=JSqIBRbQmb0 at the 23 minute mark). The velocity of the wave is given as $v_{sound} = \frac {c}{\sqrt {3}}$. When recombination occurs, the driving pressure disappears and the energy density is frozen at that location, which we observe as a slightly higher-than-average temperature (the sound horizon). However, other papers I've read (see http://www.quantumfieldtheory.info/CMB.pdf) talk about standing waves, where the sound horizon is a function of the fundamental frequency and the second and third peaks are harmonics of that fundamental. How do I resolve the image of a wave moving down the length of a rope vs. a standing wave on the rope? Is the first peak of the Temperature Power Spectrum associated with a shockwave moving outward from the over-density (as described by Eisenstein), or is it a collapse of baryons inward towards the over-density (as described by Klauber)? Answer: Note that there isn't really a physical difference between a travelling and a standing wave, except that in the latter case the positions of the nodes remain fixed in space --- caused by the interference of different components. The BAO is not a standing wave. BAO is the product of waves produced by perturbations in the initial matter-energy power spectrum, which travelled/propagated outwards until recombination, at which time the waves were 'frozen in' --- i.e. not waves anymore, but their lasting effects. Eisenstein's explanation is the correct one (as would be expected from arguably the single largest contributor to the field). Still, the statement "collapse of baryons inward towards the over-density" is also correct, describing the non-linear growth of perturbations --- but that's separate from the BAO 'waves' themselves.
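A back-of-envelope sketch of the quoted sound speed and the naive (static-universe) distance it covers in the question's ~300,000 years. The numbers are only illustrative: the actual comoving sound horizon is of order 150 Mpc today, much larger, because the universe expands while the wave propagates.

```python
import math

c = 299_792_458.0           # speed of light, m/s
v_sound = c / math.sqrt(3)  # photon-baryon plasma sound speed, as in the question

year = 365.25 * 24 * 3600.0     # seconds per Julian year
t = 300_000 * year              # ~time available before recombination
d = v_sound * t                 # naive distance travelled, metres

light_year = c * year
print(f"v_sound ~ {v_sound:.3e} m/s")                        # -> 1.731e+08 m/s
print(f"naive distance ~ {d / light_year:,.0f} light-years") # -> 173,205 light-years
```

The naive figure is just $300{,}000/\sqrt{3}$ light-years; converting it to today's comoving scale requires integrating over the expansion history.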
{ "domain": "physics.stackexchange", "id": 35103, "tags": "cosmology, astrophysics, big-bang, cosmic-microwave-background, bao" }
Will washing a conical flask with deionised water to remove splashes affect the titration result?
Question: When performing a titration, once the acid/base in the burette is slowly released into the conical flask, it may splash onto the sides of the conical flask. It is then advised that we wash the sides of the conical flask with deionised water. Will this not affect the concentration of the acid/base solution by decreasing its concentration, so that it takes less volume from the burette to change the pH indicator? Answer: In short: a small quantity of additional water will not alter the quantity (moles) of $\ce{OH-}$ in your lye, or of $\ce{H3O+}$ in your acid to be characterized. But it will dilute the intensity of the colour of the indicator used, which then hampers the visual inspection of your analysis. True, rinsing the drops off the wall with extra water increases the total volume of your analyte, and this lowers the concentration of the acid/base to characterize. Thus, aim to titrate without splashing the reagent solution onto the flask's wall. It takes some practice to swirl the Erlenmeyer flask so that analyte and reagent solution are mixed with minimal splashing. The deionized water you already used to dilute your analyte, however, is neutral; i.e., it is neither acidic nor basic. Overall its use adds as many hydroxide $\ce{OH-}$ as hydronium $\ce{H3O+}$ ions. Thus, the consumption of reagent solution is not affected.
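A minimal numeric sketch of the answer's point (all concentrations and volumes below are hypothetical): rinse water changes the analyte's volume and concentration, but not its moles, so the burette volume needed to reach equivalence is unchanged.

```python
c_acid = 0.100   # mol/L, analyte in the flask (hypothetical)
v_acid = 0.0250  # L pipetted into the flask
n_acid = c_acid * v_acid   # moles of acid -- fixed by what was pipetted

c_base = 0.100   # mol/L NaOH in the burette (hypothetical)

# Equivalence volume for a 1:1 reaction depends only on moles of acid:
v_eq = n_acid / c_base     # 0.0250 L, i.e. 25.0 mL

# Add 10 mL of rinse water: the concentration drops, the moles do not.
v_flask = v_acid + 0.010
c_diluted = n_acid / v_flask   # lower concentration in the flask...
v_eq_after = n_acid / c_base   # ...but the same titrant volume is needed

print(f"before rinse: {v_eq*1000:.1f} mL; after rinse: {v_eq_after*1000:.1f} mL")
```

Only the moles of analyte and the titrant concentration enter the equivalence condition, which is why careful rinsing is standard practice despite the dilution.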
{ "domain": "chemistry.stackexchange", "id": 14793, "tags": "titration" }