Which order multiplet of a given $SU(N)$ is real or complex?
Question: I am studying the $SU(2)$ symmetric Lagrangian in particle physics. $${\mathcal{L}} = (\partial_\mu \Phi)^\dagger (\partial^\mu \Phi) - (\mu^2 \Phi^\dagger \Phi + \lambda (\Phi^\dagger \Phi)^2).$$ Here the $SU(2)$ triplet consists of 3 real fields: $$\Phi = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix}.$$ I am a bit confused about why this triplet needs to be real. Can't it also be complex? Instead of a triplet, can we take a doublet? If so, will it be real or complex? In general, given an $SU(N)$ symmetric Lagrangian, is there any convention for choosing a real multiplet, and which order of multiplet should we choose so that it is real? Answer: Here is an incomplete list to start with: The fundamental/defining/spinor representation ${\bf 2}$ of $SU(2)$ is a quaternionic/pseudoreal representation. More generally, the ${\bf 2j\!+\!1}$ irreducible representation of $SU(2)$ is real (pseudoreal) if the spin $j$ is an integer (half-integer), respectively. The fundamental/defining representation ${\bf N}$ of $SU(N)$ for $N>2$ is a complex representation.
{ "domain": "physics.stackexchange", "id": 95333, "tags": "field-theory, group-theory, representation-theory, symmetry-breaking, complex-numbers" }
Deriving the smallest possible launching angle such that the projectile hits a point, from the geometrical envelope
Question: I'm trying to understand fact 8 on page 9 of Jaan Kalda's PDF. Fact 8: If the target is on the same level as the cannon, then the optimal launching angle (corresponding to the smallest launching speed) is 45 deg. Below it, it is said that we can find this from the equation of the envelope of the parabolas (this is what was involved in pr. 19), so I derived it as follows. I begin with the time-independent equation of projectile motion: $$ y = x \tan \theta - \frac{g}{2} \left( \frac{x \sec \theta}{u}\right)^2 \tag{1}$$ Now, to find the condition for the envelope, I need $ \frac{\partial y}{\partial \theta}=0$, i.e. the condition on $\theta$ such that $y$ is maximized for a given $x,u$: $$ 0 = x \sec^2 \theta - \frac{gx^2}{u^2} \sec^2 \theta \tan \theta$$ Or, $$ \frac{u^2}{g \tan \theta} = x \tag{2}$$ Now, suppose the projectile equations are written with the launch point at $(0,0)$; then the target must be at a vertical height of 0. Plugging this into $(1)$: $$ 0 = x \tan \theta - \frac{g}{2} \left( \frac{x}{u}\right)^2 \sec^2 \theta$$ Plugging in $(2)$ and simplifying, I get $$ \sin \theta = \pm \frac{1}{\sqrt{2} }$$ I'm having a hard time interpreting this result. I understand that when I apply the partial derivative condition and plug it back into the time-independent projectile motion equation, I get an equation which gives me the peak of a parabola from the family of parabolas, but I can't get past these points: Why is this also the condition that I throw the projectile such that it reaches the intended point with the least speed? How do I get the intuition behind this? When I plug $y=0$ into (1), I am finding the x-intercept of a parabola, but if I recall correctly, the envelope goes over the peaks of all the possible parabolas, so am I finding a parabola whose peak is the x-intercept? Am I misunderstanding something? Answer: Imagine a ball launched at (A) $45^\circ$ to the horizontal; it has the maximum range (i.e. maximum x when y=0).
A ball launched at $60^\circ$ (B) at the same speed has a smaller range. You can see this with a quick sketch. It also means that if you wanted the ball to reach the same place it landed at in (A), but with an angle of $60^\circ$, you'd need a higher speed. So the $45^\circ$ angle also gives you the minimum speed to reach a given point on the ground. Yes, you're finding the maximum x-intercept. Your formula $0 = x \tan \theta - \frac{g}{2} ( \frac{x}{u})^2 \sec^2 \theta$ gives the two x-intercepts by factorising like this: $x (\tan \theta - \frac{g}{2} \frac{x}{u^2} \sec^2 \theta) = 0$. One solution is $x=0$, the starting point; using $\sec^2 \theta = 1+\tan^2\theta$, the other x value is $$X=A\frac{t}{1+t^2}\tag{*}$$ where $A$ is the constant $\frac{2u^2}{g}$ and $t=\tan\theta$. You can then ignore the $A$ and differentiate to find the best $t$ for the maximum range $X$. That gives $t^2=1$ and the angle is $45^\circ$. Alternatively, put $t=\frac{\sin\theta}{\cos\theta}$ in $(*)$ and multiply top and bottom by $\cos^2\theta$ to get $A\sin\theta \cos\theta$, or $\frac12 A\sin(2\theta)$, which also has a maximum at $45^\circ$. (The sentence under your equation 1 is a bit hard to follow; you have the right answer, but people would usually think of it as "maximizing x, when y=0, over different angles of projection".)
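The claim that $45^\circ$ minimizes the launch speed for a target on the same level can also be checked numerically. A minimal sketch (function and variable names are my own, not from the post), using the level-ground range formula $R = u^2\sin(2\theta)/g$ solved for $u$:

```python
import math

def speed_needed(range_x, theta_deg, g=9.81):
    """Launch speed needed to land at range_x on level ground.
    From R = u**2 * sin(2*theta) / g, so u = sqrt(R * g / sin(2*theta))."""
    theta = math.radians(theta_deg)
    return math.sqrt(range_x * g / math.sin(2.0 * theta))

# Required speed to reach x = 50 m, for launch angles 5..85 degrees
speeds = {a: speed_needed(50.0, a) for a in range(5, 90, 5)}
best_angle = min(speeds, key=speeds.get)
```

Scanning the dictionary confirms the minimum-required-speed angle is $45^\circ$, in agreement with the envelope argument.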
{ "domain": "physics.stackexchange", "id": 77767, "tags": "kinematics, calculus" }
Could the solar shield on the James Webb telescope have been pitch black or does it need to be highly reflective?
Question: When I look at pictures of the solar shield on the James Webb Space Telescope (JWST), I see something that looks highly reflective (and hence must have a very low emissivity). My intuition tells me that this is no accident. On the NASA website (https://webb.nasa.gov/content/about/innovations/coating.html) it is stated that the solar shield has a coating consisting of highly reflective (low-emissivity) aluminum, but also that the coating includes doped silicon, which has a high emissivity and "emits the most heat and light and acts to block the sun's heat from reaching the infrared instruments that will be located underneath it". My first issue is that if the emissivity of the solar shield is something that needs to be optimized (that is, it is necessary for it to have either a very high value or a very low value), then why use a mixture of high- and low-emissivity materials? Surely this would lead to a middle-of-the-range emissivity? My second issue is that when I apply the Stefan-Boltzmann law to determine the equilibrium temperature of the shield, I get a result that is independent of emissivity: $$P_{in} = P_{out}$$ $$\Rightarrow S \cdot A \cdot \epsilon = A \cdot \epsilon \cdot \sigma \cdot T^4$$ $$\Rightarrow 1370\ \mathrm{W/m^2} \times 294\ \mathrm{m^2} = 294\ \mathrm{m^2} \cdot \sigma \cdot T_{equilibrium}^4$$ $$T \approx 121.26\ ^\circ\mathrm{C}$$ where $S$ is the solar constant. Clearly the emissivities, denoted $\epsilon$, in the above derivation cancel, leading to an equilibrium temperature independent of emissive values. This leads me to conclude that the emissivity of the coating is completely inconsequential and that the solar shield could have been coated with a hypothetical black-body paint with perfect absorptivity and it would still function as well as it does now with aluminum. But if this is the case, then why does the NASA website even mention emissivity?
Is it just an accident that the shield appears highly reflective, when it doesn't actually need to be (since the final equilibrium temperature is the same regardless of the reflective/emissive properties of the shield)? Answer: Emissivity is equal to absorptivity at a given wavelength, but the emissivity/absorptivity of any surface varies with the wavelength of the light in question. In this case, the Webb telescope is absorbing light in the near-infrared and visible range of the spectrum and emitting it (hopefully) in the far infrared. So the calculation should be $$ S A \epsilon_\text{vis} = A \epsilon_\text{far IR} \sigma T^4 $$ and since $\epsilon_\text{vis} \neq \epsilon_\text{far IR}$, the two factors don't cancel out. To minimize the temperature $T$, we can see that we want $\epsilon_\text{vis}$ to be relatively low and $\epsilon_\text{far IR}$ to be relatively high. So when the above paragraph refers to "low-emissivity aluminum", it is presumably referring to the emissivity in the near-IR and visible spectrum; when it refers to "high-emissivity doped silicon", it is presumably referring to the emissivity in the far infrared.
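The answer's two-emissivity balance can be made concrete with a short sketch. Note the coating emissivities below are illustrative guesses of mine, not JWST's actual values:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1370.0          # solar constant used in the question, W m^-2

def equilibrium_temp(eps_vis, eps_far_ir):
    """Solve S * A * eps_vis = A * eps_far_ir * sigma * T**4 for T in kelvin.
    The area A cancels either way."""
    return (S * eps_vis / (eps_far_ir * SIGMA)) ** 0.25

t_grey = equilibrium_temp(1.0, 1.0)      # equal emissivities: the OP's ~394 K (~121 C)
t_coated = equilibrium_temp(0.1, 0.8)    # illustrative low-visible / high-IR coating
```

With equal emissivities the result reproduces the question's temperature; with a low visible-band and high far-IR emissivity the equilibrium temperature drops substantially, which is the whole point of the coating.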
{ "domain": "physics.stackexchange", "id": 90303, "tags": "thermodynamics, temperature, thermal-radiation, absorption, telescopes" }
How do I generally solve the radiative transfer equation?
Question: Background information to the question: I'm a first-year physics student taking a course, "Introduction to astronomy" (free translation), which until this year was taught in the third bachelor year, so a lot of the basic physics used has to be explained along the way because we haven't learned it yet (not a big fan). This question is about the radiative transfer equation, and specifically about how to solve it. The derivation is given in our book and I think I understand it, but the solution isn't, and wherever I look for it online it is always solved in a way that isn't very clear to me. You should know that we haven't really done differential equations yet (only very basic ones), since that's a course for next year. Question: So the radiative transfer equation in the general case that we derived is $$ \dfrac{dI_\nu}{d\tau_\nu}= S_\nu - I_\nu,$$ where $S_\nu=\dfrac{j_\nu}{4\pi k_\nu}$ is the so-called source function, with $j_\nu$ an emission coefficient, and $k_\nu=\dfrac{d\tau_\nu}{ds}$. I've found the pure absorption solution where $j_\nu=0$ to be $$I_\nu(s)=I_0e^{-\tau_\nu},$$ with $\tau_\nu=\int k_\nu\, ds$, the optical depth. The solution I'm looking for looks like $$I(\tau_\nu)=I_0e^{-\tau_\nu}+\int_{0}^{\tau_\nu}S_\nu(\tau_\nu')e^{-(\tau_\nu-\tau_\nu')}d\tau_\nu'.$$ I've no idea what the physical meaning of $\tau_\nu'$ is, and I don't know how to get this solution. I've found a solution online where they define new $I$ and $S$ with tildes ("twiddles") above them, but I've never seen that kind of way to compute solutions before, and I don't understand the physical origin of $\tau_\nu'$ in their solution. I'd really like it if someone could provide me with a general solution to this equation (I mean the steps, of course) that is understandable for a first-year physics student with only a basic understanding of solving differential equations, and a physical interpretation of it (which can be brief, since I can usually find that online).
Answer: In the solution you have written down, $\tau_\nu^\prime$ is a dummy variable. It is just a way of keeping track of the value of the optical depth at intermediate points along the ray you are tracing. Note that $d\tau_\nu^\prime$ appears in your integral for exactly this reason. The radiative transfer equation tells us that, along a ray in a particular direction, the radiative intensity will change in response to new contributions to the radiation in that direction (emission) and depletions of the radiation as it passes through a medium (absorption). So you can think of the solution that you've written down as follows: Along a ray with total optical depth $\tau_\nu$, the radiative intensity that was originally emitted will be depleted by a factor of $e^{-\tau_\nu}$ due to absorption (compare to your pure absorption solution). There will also be emission contributions from each point along the ray. The strength of these emission contributions as a function of optical depth is represented by the source function $S_\nu$, which can vary along the ray. We must add in these contributions, but weighted by the amount that they will be absorbed by the remaining material along the ray. Their absorption depends on the remaining optical depth from the intermediate point to the endpoint you have specified, which is $\tau_\nu - \tau_\nu^\prime$. The solution you have written down is often referred to as the "formal" solution to the radiative transfer equation. To derive it, you can use an integrating factor: multiply both sides by $e^{\tau_\nu}$, so that the left-hand side becomes $\frac{d}{d\tau_\nu}\left(I_\nu e^{\tau_\nu}\right)$, and then integrate from $0$ to $\tau_\nu$. The solution is called "formal" because, while generally true, it is only useful in a narrow range of situations. The difficulty lies in what knowledge you have of the source function $S_\nu$. If you are given it in advance, then all is well. But generally, for a variety of reasons that I could describe later, $S_\nu$ will depend on $I_\nu$. So $I_\nu$ is actually present on both sides of the equation, and you haven't totally solved for it.
To complicate matters, it is buried in an integral on the right-hand side. Such equations are difficult to solve in general, and often require numerical solutions, which are computer algorithms that chop the problem into small segments, yielding an approximate solution up to a desired level of accuracy. You can compare with other common situations where numerical methods are required, such as the equations of hydrodynamics. On top of all that, to fully characterize the radiation field, you need to solve the radiative transfer equation for every direction the radiation can travel, at every frequency, and at every position. So this is a multi-dimensional problem: three spatial dimensions, two direction angles, and frequency, for six in total. Plus all quantities of interest could also vary in time. This is why some of the numerical techniques used to solve for the radiation field resort to Monte Carlo techniques for integration.
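The formal solution can be sanity-checked numerically for the simplest case of a constant source function, where the integral in the question evaluates to $S_\nu(1 - e^{-\tau_\nu})$. A minimal sketch (names are my own, not from the answer):

```python
import math

def integrate_rt(I0, S, tau_max, steps=200000):
    """Euler-integrate dI/dtau = S - I along the ray from tau = 0 to tau_max."""
    I, dtau = I0, tau_max / steps
    for _ in range(steps):
        I += (S - I) * dtau
    return I

def formal_solution(I0, S, tau):
    """For constant S the formal solution's integral evaluates to
    I(tau) = I0 * exp(-tau) + S * (1 - exp(-tau))."""
    return I0 * math.exp(-tau) + S * (1.0 - math.exp(-tau))

numeric = integrate_rt(2.0, 5.0, 3.0)
exact = formal_solution(2.0, 5.0, 3.0)
```

The two agree, and the limiting behaviour is visible in the formula: at large optical depth the intensity forgets $I_0$ and approaches $S_\nu$.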
{ "domain": "physics.stackexchange", "id": 41021, "tags": "astrophysics, astronomy, thermal-radiation, stellar-physics" }
Gmapping stops updating map
Question: Hey, after finally getting deeper into ROS and understanding most of the things which are necessary for using Gmapping-SLAM (basics like tf, odom, etc.), I'm now at the point of testing the whole thing. The problem is that Gmapping stops updating the map after several scans. But first of all I want to give you a short overview.

What I want to do: I'm trying to use a simulated 2D scanner in V-REP as the source for Gmapping.

System config: I use Groovy on Ubuntu 12.04 LTS on an old IBM X41 laptop (1.5 GHz Centrino single-core CPU).

Progress:

1. I transferred the /scan and /odom topics from V-REP to ROS.

2. I wrote some code (odom_baselink_tf.py) for the tf from /base_link to /odom:

def metadata_odom(data):
    global p, q, info
    info = data
    p = data.pose.pose.position
    q = data.pose.pose.orientation
    rospy.loginfo("Received odom")
...
broadcaster.sendTransform(
    (p.x, p.y, 0),
    (q.x, q.y, q.z, q.w),
    rosnow,
    "base_link",
    "odom")

3. I use a static tf transform between base_link and LaserScanner_Frame:

rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link LaserScanner_Frame 100

This results in the following tf tree:

4. I use Gmapping with the following settings:

<param name="map_update_interval" value="2"/> <!--2-->
<param name="maxUrange" value="5.5"/>
<param name="sigma" value="0.05"/>
<param name="kernelSize" value="1.0"/>
<param name="lstep" value="0.05"/>
<param name="astep" value="0.05"/>
<param name="iterations" value="5"/>
<param name="lsigma" value="0.075"/>
<param name="ogain" value="3.0"/>
<param name="lskip" value="0"/>
<param name="srr" value="0.01"/>
<param name="srt" value="0.02"/><!--1.0-->
<param name="str" value="0.01"/>
<param name="stt" value="0.02"/>
<param name="linearUpdate" value="0.1"/>
<param name="angularUpdate" value="0.1"/>
<param name="temporalUpdate" value="-1.0"/>
<param name="resampleThreshold" value="0.5"/>
<param name="particles" value="50"/> <!--80-->
<param name="xmin" value="-50.0"/>
<param name="ymin" value="-50.0"/>
<param name="xmax" value="50.0"/>
<param name="ymax" value="50.0"/>
<param name="delta" value="0.03"/>
<param name="llsamplerange" value="0.01"/>
<param name="llsamplestep" value="0.01"/>
<param name="lasamplerange" value="0.005"/>
<param name="lasamplestep" value="0.005"/>

But if I use these settings, Gmapping stops updating the map after several scans. If I look at the SLAM debug messages there are several odd ones:

update frame 7 update ld=0.0495583 ad=0.179583
Laser Pose= 13.7771 3.96148 -0.695156
m_count 7
[DEBUG] [1403512083.321312113]: TF operating on not fully resolved frame id base_link, resolving using local prefix
[DEBUG] [1403512083.321624443]: TF operating on not fully resolved frame id odom, resolving using local prefix
Average Scan Matching Score=164.656
neff= 49.8545
Registering Scans:Done
[DEBUG] [1403512083.931338640]: scan processed
[DEBUG] [1403512083.933940439]: new best pose: 13.752 3.943 -0.644
[DEBUG] [1403512083.935263651]: odom pose: 13.777 3.961 -0.695
[DEBUG] [1403512083.936265942]: correction: -0.025 -0.018 0.051
[DEBUG] [1403512083.937573997]: Trajectory tree:
[DEBUG] [1403512083.938640263]: 13.752 3.943 -0.644
[DEBUG] [1403512084.006696687]: 13.513 4.059 -0.069
[DEBUG] [1403512084.010663464]: Reading is NULL
[DEBUG] [1403512085.983524674]: TF operating on not fully resolved frame id base_link, resolving using local prefix
[DEBUG] [1403512085.985853674]: TF operating on not fully resolved frame id odom, resolving using local prefix
[DEBUG] [1403512087.105476077]: Updated the map
[DEBUG] [1403512087.117026498]: MessageFilter [target=/odom ]: Added message in frame /LaserScanner_Frame at time 1403512075.593, count now 1
[DEBUG] [1403512087.118606658]: MessageFilter [target=/odom ]: Discarding Message, in frame /LaserScanner_Frame, Out of the back of Cache Time(stamp: 1403512072.685 + cache_length: 10.000 < latest_transform_time 1403512085.969. Message Count now: 1
[DEBUG] [1403512087.120816298]: MessageFilter [target=/odom ]: Added message in frame /LaserScanner_Frame at time 1403512078.106, count now 1
[DEBUG] [1403512087.121834794]: MessageFilter [target=/odom ]: Message ready in frame /LaserScanner_Frame at time 1403512080.746, count now 1
[DEBUG] [1403512087.122950160]: TF operating on not fully resolved frame id odom, resolving using local prefix
[DEBUG] [1403512087.126866647]: MessageFilter [target=/odom ]: Added message in frame /LaserScanner_Frame at time 1403512080.746, count now 1
[DEBUG] [1403512087.128059466]: Time jumped forward by [5.379792] for timer of period [0.010000], resetting timer (current=1403512087.128015, next_expected=1403512081.748223)

The messages show the last map update (#7). There are also some "TF operating on not fully resolved frame id" messages which I don't understand, because I have all transformations in my tf tree. These messages occur as soon as I start Gmapping. What should I do about them? I think they are causing my problem. Or maybe the processing power of my old laptop is the problem? top says it's always at 100%. Does someone have an idea what's wrong with my setup? Any additional graphics needed to understand the whole thing?

EDIT: Okay, is it right that the tf rate of my odom_baselink_tf.py script needs to be higher, as the buffer is about 10 s and the rate of the script is below 1 Hz? Could I reduce the SLAM tf rates, as I can't increase the odom publish rate of the V-REP simulator? Or am I completely wrong? Additionally, I switched over to an X200 laptop with a Core 2 Duo. This way I get a publish rate of about 3 Hz for odom via V-REP.

EDIT2: I found the transform_publish_period Gmapping parameter and set it to 10 Hz. Additionally, I changed my Python script to publish tf at 10 Hz. Every tf in my tree now uses 10 Hz for publishing. But the map still doesn't update.
Cheers, Julian

Originally posted by julled on ROS Answers with karma: 23 on 2014-06-23. Post score: 0

Original comments: Comment by dornhege on 2014-06-25: Are you maybe mixing times from different sources? Is there a /clock topic and/or maybe a simulation time?

Answer: The "fully resolved frame id" warnings are not problematic. It's basically base_link vs. /base_link: if you just send base_link, it will be resolved to /base_link. This is usually what you want. What you should look at is "Out of the back of Cache Time(stamp: 1403512072.685 + cache_length: 10.000" and "Time jumped forward by [5.379792] for timer of period". This looks like something is wrong with the messages you are sending. All messages should arrive continuously with a nicely increasing timestamp, and laser and tf should fit together. It seems that something in your data doesn't.

Originally posted by dornhege with karma: 31395 on 2014-06-23. This answer was ACCEPTED on the original site. Post score: 0

Original comments: Comment by julled on 2014-06-24: Hey dornhege, thank you for your answer! You can see a new EDIT in the question. Greetings from Berlin. Comment by andreas.gerken on 2014-06-25: Hi julled, sorry for the off-topic, but I think we are trying to build pretty much the same system. Could you send me a mail to andreas.gerken (at) thi.de so we can join forces? Thanks, Andreas. Greetings from Ingolstadt :)
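The accepted answer's point is that the stamps on incoming messages should increase smoothly; a jump like the [5.379792] one in the log is the red flag. A minimal, hypothetical sketch (my own helper, not part of any ROS API) for inspecting a recorded list of message stamps for such jumps:

```python
def find_time_jumps(stamps, max_gap=1.0):
    """Flag indices where consecutive message stamps go backwards or jump
    forward by more than max_gap seconds. Hypothetical debugging helper for
    a recorded stamp list -- not part of any ROS API."""
    bad = []
    for i in range(1, len(stamps)):
        dt = stamps[i] - stamps[i - 1]
        if dt < 0 or dt > max_gap:
            bad.append((i, dt))
    return bad

# A stream with a forward jump (like the ~5.38 s one in the log) and a
# backwards step, e.g. from mixing simulation time with wall-clock time:
stamps = [0.0, 0.1, 0.2, 5.6, 5.7, 5.65]
```

Running this over stamps extracted from a bag or log quickly shows whether tf and laser messages share a consistent, monotonically increasing clock.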
{ "domain": "robotics.stackexchange", "id": 18352, "tags": "ros, navigation, gmapping, vrep, laserscan" }
What is a known sequence for which being constant is not provable?
Question: My question concerns the property of being constant for computable functions ${\mathbb N}\to \{0,1\}$, within any common framework $T$ strong enough to include Heyting arithmetic (and, of course, not known to be inconsistent). Is there some particular, i.e. known, computable function $f:{\mathbb N}\to \{0,1\}$ such that $T$ proves $f$ to be total, no number $m$ with $f(m)=1$ is known (i.e. it's not actually ruled out that the function is constant), but for which the statement $\ \forall n.\, f(n) = 0\ $ is actually known to be unprovable in $T$ (it's known that there's no proof in $T$)? Notes, thoughts, re-emphasis: The motivation here is to see some hard motivating cases for a conservative framework not to adopt omniscience principles. Hard in the sense that being zero is not just an open question but is provably not provable in $T$. It could thus be that $T$ might necessarily not have non-constructive tools in its proof repertoire. Maybe there are classical statements about $\Pi$ statements where constness is then always implied - if so, we need to go weaker. If the function returns $0$ in case some effective property $P(n)$ holds - e.g. some arithmetic relation between each input $n$ and the finite number of primes below $n$, à la Goldbach - then we might quickly have a total function at hand, and there are some open questions of the $\forall n$ form. However, from looking around, it seems in practice such problems are usually conjectured to be provable but just hard. The property of taking the value $0$ everywhere can be undecidable for some class of partial computable functions, but I'm looking for a particular total function - in fact, I'd hope for a short explicit algorithm that corresponds to it. There are some total computable functions with properties unprovable in PA. E.g. the function constructed from the Goodstein sequence is total, but PA doesn't prove that.
Just to avoid this-system-knows-that-system-does-not-know situations, I'm explicitly asking for a situation with just a single (possibly quite weak but standard-math-compatible) framework $T$ that captures the computable function, proves it total, but can't prove or disprove constness. Answer: Let $T$ be a reasonable theory of arithmetic, say $\mathrm{PA}$. Consider the sequence $$f(m) = \begin{cases} 1 & \text{if $m$ encodes a proof of $\vdash_T 0 = 1$} \\ 0 & \text{otherwise} \end{cases} $$ The sequence is clearly computable, even primitive recursive, and therefore representable in $T$. If there is $m$ such that $f(m) = 1$, then $T$ is inconsistent, which is not known to be the case. If $T$ proves $\forall m . f(m) = 0$, then $T$ proves $\neg \mathrm{Bew}(\ulcorner 0 = 1\urcorner)$, i.e., its own consistency, which is not possible if $T$ is consistent (by Gödel's second incompleteness theorem).
{ "domain": "cstheory.stackexchange", "id": 5338, "tags": "computability, nt.number-theory, proof-theory, constructive-mathematics, function" }
HackerRank: ACM ICPC Team
Question: I made a solution to this problem from HackerRank: You are given a list of N people who are attending ACM-ICPC World Finals. Each of them are either well versed in a topic or they are not. Find out the maximum number of topics a 2-person team can know. And also find out how many teams can know that maximum number of topics. Note: Suppose a, b, and c are three different people, then (a,b) and (b,c) are counted as two different teams. Input Format: The first line contains two integers, N and M, separated by a single space, where N represents the number of people, and M represents the number of topics. N lines follow. Each line contains a binary string of length M. If the ith line's jth character is 1, then the ith person knows the jth topic; otherwise, he doesn't know the topic.

My code is considered too slow (I think I am allowed 10 seconds; the code below takes more than 20 seconds when N and M are both 500):

import random
import itertools
import time

# n number of people
# m number of topics
n = 500
m = 500

"""
binary_string(n) and random_list(n_people, n_topic) just help to generate
test cases, irrelevant otherwise.
"""

def binary_string(n):
    my_string = "".join(str(random.randint(0, 1)) for _ in range(n))
    return my_string

def random_list(n_people, n_topic):
    my_list = [binary_string(n_topic) for _ in range(n_people)]
    return my_list

"""
the essential part is item_value(e1, e2) and find_couples(comb_list)
"""

def item_value(e1, e2):
    c = bin(int(e1, 2) | int(e2, 2))
    return sum(int(i) for i in c[2:])

def find_couples(comb_list):
    max_value = 0
    counter = 0
    for pair in itertools.combinations(comb_list, 2):
        value = item_value(pair[0], pair[1])
        if value == max_value:
            counter += 1
        elif value > max_value:
            max_value = value
            counter = 1
    print(max_value)
    print(counter)
    return

topic = random_list(n, m)
print(topic)
start = time.time()
find_couples(topic)
end = time.time()
print(end - start)

In the context of the above algorithm, in what ways can I make it more efficient? Is there a better algorithm?

Answer: You're spending a lot of time in item_value converting numbers to a binary string, then back to ints, to get the sum. It would be a lot more expedient to just use str.count to get the number of 1s in the string. That saves you a lot of type conversions as well as a call to sum with a generator.

def item_value(e1, e2):
    c = bin(int(e1, 2) | int(e2, 2))
    return c[2:].count('1')

My results from this showed a reduction from 58.9 s to 0.91 s.
{ "domain": "codereview.stackexchange", "id": 17173, "tags": "python, performance, algorithm, programming-challenge, python-3.x" }
Predator Prey Simulation
Question: Below is a simple random walk predator prey simulation that is optimized to the best of my abilities. I would love to hear about any improvements that can be made.

import numpy as np
import time
from matplotlib import pylab as plt
import random

def run():
    # Initialize grid
    size = 100
    dims = 2
    # Each point in the 2D grid can hold the counts of: [prey, predators]
    grid = np.zeros((size,) * dims, dtype=(int, 2))
    num_rows, num_cols, identifiers = grid.shape
    num_predators = 10
    num_prey = 500
    prey_countdown = 500
    grid[50, 50, 1] = num_predators  # Manually inserting a few predators/prey
    grid[0, 0, 0] = num_prey
    # Coordinates for all non-empty grid locations
    coords = np.transpose(np.nonzero(grid != 0))
    x_pts, y_pts, idents = zip(*coords)
    # Please do not consider matplotlib the choke point of this program,
    # It will be commented out, and is only used for testing and amusement.
    # (But if you do have a way to speed it up, I'm very curious!)
    # Initialize figure and axes
    fig, ax1 = plt.subplots(1)
    # Cosmetics
    ax1.set_aspect('equal')
    ax1.set_xlim(0, size)
    ax1.set_ylim(0, size)
    # Display
    ax1.hold(True)
    plt.show(False)
    plt.draw()
    # Background is not to be redrawn each loop
    background = fig.canvas.copy_from_bbox(ax1.bbox)
    # Plot all initial positions as blue circles
    points = ax1.plot(x_pts, y_pts, 'bo')[0]
    # I would like to have blue for prey and red for predators,
    # I'm not sure how to do so quickly. I think multiple calls to axes.plot are needed.
    #colors = ['ro' if (ident==2) else 'bo' for ident in idents]
    time_steps = 1000
    for idx in range(time_steps):
        for coord in coords:
            direction = random.sample(range(1, 5), 1)[0]
            x, y, ident = coord
            count = grid[x, y, ident]
            # Random walk
            # Prey first
            if ident == 0:
                if count:  # A predator may have eaten the prey by now
                    grid[x, y, ident] -= 1  # Remove old value
                    if direction == 1:    # Move right
                        grid[(x+1) % num_rows, y, ident] += 1
                    elif direction == 2:  # Move left
                        grid[(x-1) % num_rows, y, ident] += 1
                    elif direction == 3:  # Move up
                        grid[x, (y+1) % num_cols, ident] += 1
                    elif direction == 4:  # Move down
                        grid[x, (y-1) % num_cols, ident] += 1
            # Predators do not die
            else:
                # Predators will consume prey if prey exists at new location
                grid[x, y, ident] -= 1  # Remove old value
                if direction == 1:    # Move right
                    xnew = (x+1) % num_rows
                    grid[xnew, y, ident] += 1  # Move predator to new grid location
                    if grid[xnew, y, 0]:       # If there is prey at the new location...
                        grid[xnew, y, 0] -= 1  # Remove prey
                        prey_countdown -= 1
                        print 'Crunch! Prey left:', prey_countdown
                elif direction == 2:  # Move left
                    xnew = (x-1) % num_rows
                    grid[xnew, y, ident] += 1
                    if grid[xnew, y, 0]:
                        grid[xnew, y, 0] -= 1
                        prey_countdown -= 1
                        print 'Crunch! Prey left:', prey_countdown
                elif direction == 3:  # Move up
                    ynew = (y+1) % num_cols
                    grid[x, ynew, ident] += 1
                    if grid[x, ynew, 0]:
                        grid[x, ynew, 0] -= 1
                        prey_countdown -= 1
                        print 'Crunch! Prey left:', prey_countdown
                elif direction == 4:  # Move down
                    ynew = (y-1) % num_cols
                    grid[x, ynew % num_cols, ident] += 1
                    if grid[x, ynew, 0]:
                        grid[x, ynew, 0] -= 1
                        prey_countdown -= 1
                        print 'Crunch! Prey left:', prey_countdown
        # Redraw...
        coords = np.transpose(np.nonzero(grid != 0))
        x_pts, y_pts, idents = zip(*coords)
        points.set_data(x_pts, y_pts)
        fig.canvas.restore_region(background)  # Restore background
        ax1.draw_artist(points)                # Redraw just the points
        fig.canvas.blit(ax1.bbox)              # Fill in the axes rectangle
        time.sleep(0.01)                       # Prevents figure from freezing
    plt.close(fig)
    # Do we have as many left as we should?
    coords = np.transpose(np.nonzero(grid[:,:,0] != 0))
    print (len(coords) != prey_countdown)

if __name__ == '__main__':
    run()  # Start button for cProfile

I hope to continue expanding this simulation to include a "chance to escape" for the prey as well. Any comments about the program's structure that will help ease growth will be greatly appreciated.

Answer: The biggest issue I see is code organization. There are two large tasks: numerical simulation and data visualization. They are mixed together in ways that create arbitrary dependencies and make extending and maintaining the code difficult.

Names: Assigning meaningful names to numbers will improve readability and reduce comments:

right = 1
left = 2
up = 3
down = 4

if direction == up
    ...
if direction == down
    ...
etc.

Decoupling: Predators and prey are a different level of abstraction than anything in matplotlib. Predators/prey respond to other predators/prey and move of their own volition. 2D grids are something else entirely. Predator/prey logical abstractions should work for whales and giant squid in the sea, leopards and chimps in the forest, or lions and wildebeests on the savannah. The visualization methods should not leak into the simulation's abstractions. The simulation should be able to be run independently of any visualization. This allows running a million iterations, followed by statistical analysis using Monte Carlo or similar methods. This means that the simulation portion of the program has its own methods and data structures. Likewise, the display portion of the program should have its own.
In between, the main loop reads the simulation data structures and translates them into a visualization data structure [i.e. the main loop draws a pretty picture].

# pseudo code for visualization
on_clock_tick()
    my_display.draw(translate(simulate(my_simulation)))

An alternative use of a decoupled simulation:

# pseudo code for later statistical analysis
on_clock_tick()
    my_file.write(translate(simulate(my_simulation)))

Note that this also makes it possible to reuse the visualization code for a different simulation:

# pseudo code for visualization
on_clock_tick()
    my_display(translate(simulate(my_other_simulation)))

Final thoughts: The real scientific work is in simulating the behaviors of predators and prey. Getting the display code out of the way will allow you to focus on the core problem clearly. It will make the code more readable and prioritize bugs - it doesn't matter how right the visualization is if the underlying simulation is bad.
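The answer's decoupling advice can be illustrated with a pure simulation step that knows nothing about matplotlib or global state. This is only a sketch under my own naming assumptions, not the reviewer's code:

```python
import random

# Direction encoding mirrors the reviewed code: 1=right, 2=left, 3=up, 4=down
DIRS = {1: (1, 0), 2: (-1, 0), 3: (0, 1), 4: (0, -1)}

def step(positions, size, rng):
    """One pure simulation step: every animal takes a random unit move
    on a size x size torus. No plotting, no globals, no side effects."""
    moved = []
    for x, y in positions:
        dx, dy = DIRS[rng.randint(1, 4)]
        moved.append(((x + dx) % size, (y + dy) % size))
    return moved

# The main loop can now feed step() output to any translate/draw layer,
# or to a file writer for later statistical analysis.
rng = random.Random(0)  # seeded for reproducible runs
animals = [(0, 0), (99, 99), (50, 50)]
animals = step(animals, 100, rng)
```

Passing the random generator in explicitly also makes runs reproducible, which matters once you start doing Monte Carlo analysis over many iterations.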
{ "domain": "codereview.stackexchange", "id": 12411, "tags": "python, numpy, simulation, matplotlib" }
What does Ae mean when used in a chemical formula?
Question: In this video, on slide 38 for example (the 35 minute mark), the formula HAeBe appears, with the explanation "some chemistry has taken place..." and an infrared spectrum. What is Ae? I tried looking it up thinking it might be a common shorthand, but the only hits are answers that say "no, did you mean Al, Ag, etc.". Answer: HAeBe stands for Herbig Ae/Be star, which is a young pre-main-sequence star. The spectra of HAeBe stars show hydrogen and calcium emission lines, which are shown in the video. The cloud could be referring to the gas-dust envelope that surrounds the star, but I'm not an expert in this field and so I'm unsure.
{ "domain": "chemistry.stackexchange", "id": 4410, "tags": "nomenclature" }
Can there be medium height (neither tall nor short) pea plants in Mendel's experiment?
Question: Can there be medium height (neither tall nor short) pea plants in Mendel's experiment? All textbooks I have read seem to imply that pea plants have to be either tall or short, nothing in between. Answer: Medium height (like in people) and other traits that seem like a mixture of two extremes are often a result of incomplete dominance. For example, a red flower and a white flower are bred to produce an offspring with pink petals. Mendelian genetics does not include incomplete dominance (which is classified as, surprisingly, non-Mendelian genetics). Basically, Mendel got very lucky with his choice of plant. Pea plant height is strictly dominant, meaning one dominant allele results in tall plants, regardless of the identity of the second inherited allele. This is a consequence of the genetic makeup of pea plants. Had he tried a similar experiment with snapdragon flower color, he would be very confused. (See https://www.ndsu.edu/pubweb/~mcclean/plsc431/mendel/mendel2.htm for snapdragon incomplete dominance example.)
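The strict dominance described in the answer can be spelled out with a toy Punnett-square enumeration in Python (a sketch; 'T'/'t' are the conventional allele symbols for the tall/short alleles):

```python
from itertools import product

# Complete dominance: a single 'T' allele makes the pea plant tall,
# so a Tt x Tt cross gives the classic 3 tall : 1 short ratio --
# and never a "medium" plant.
def phenotype(genotype):
    return "tall" if "T" in genotype else "short"

offspring = ["".join(pair) for pair in product("Tt", "Tt")]  # TT, Tt, tT, tt
print([phenotype(g) for g in offspring])  # ['tall', 'tall', 'tall', 'short']
```

Under incomplete dominance (the snapdragon case), the heterozygote would instead map to its own intermediate phenotype, which is exactly what pea height does not do.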
{ "domain": "biology.stackexchange", "id": 11873, "tags": "genetics" }
Where exactly does $\geq 1$ come from in SVMs optimization problem constraint?
Question: I've understood that SVMs are binary, linear classifiers (without the kernel trick). They have training data $(x_i, y_i)$ where $x_i$ is a vector and $y_i \in \{-1, 1\}$ is the class. As they are binary, linear classifiers the task is to find a hyperplane which separates the data points with the label $-1$ from the data points with the label $+1$. Assume for now, that the data points are linearly separable and we don't need slack variables. Now I've read that the training problem is now the following optimization problem: ${\min_{w, b} \frac{1}{2} \|w\|^2}$ s.t. $y_i ( \langle w, x_i \rangle + b) \geq 1$ I think I got that minimizing $\|w\|^2$ means maximizing the margin (however, I don't understand why it is the square here. Would anything change if one would try to minimize $\|w\|$?). I also understood that $y_i ( \langle w, x_i \rangle + b) \geq 0$ means that the model has to be correct on the training data. However, there is a $1$ and not a $0$. Why? Answer: First problem: Minimizing $\|w\|$ or $\|w\|^2$: It is correct that one wants to maximize the margin. This is actually done by maximizing $\frac{2}{\|w\|}$. This would be the "correct" way of doing it, but it is rather inconvenient. Let's first drop the $2$, as it is just a constant. Now if $\frac{1}{\|w\|}$ is maximal, $\|w\|$ will have to be as small as possible. We can thus find the identical solution by minimizing $\|w\|$. $\|w\|$ can be calculated by $\sqrt{w^T w}$. As the square root is a monotonic function, any point $x$ which maximizes $\sqrt{f(x)}$ will also maximize $f(x)$. To find this point $x$ we thus don't have to calculate the square root and can minimize $w^T w = \|w\|^2$. Finally, as we often have to calculate derivatives, we multiply the whole expression by a factor $\frac{1}{2}$. This is done very often, because $\frac{d}{dx} x^2 = 2 x$ and thus $\frac{d}{dx} \frac{1}{2} x^2 = x$. This is how we end up with the problem: minimize $\frac{1}{2} \|w\|^2$.
tl;dr: yes, minimizing $\|w\|$ instead of $\frac{1}{2} \|w\|^2$ would work. Second problem: $\geq 0$ or $\geq 1$: As already stated in the question, $y_i \left( \langle w,x_i \rangle + b \right) \geq 0$ means that the point has to be on the correct side of the hyperplane. However this isn't enough: we want the point to be at least as far away as the margin (then the point is a support vector), or even further away. Remember the definition of the hyperplane, $\mathcal{H} = \{ x \mid \langle w,x \rangle + b = 0\}$. This description however is not unique: if we scale $w$ and $b$ by a constant $c$, then we get an equivalent description of this hyperplane. To make sure our optimization algorithm doesn't just scale $w$ and $b$ by constant factors to get a higher margin, we define that the distance of a support vector from the hyperplane is always $1$, i.e. the margin is $\frac{1}{\|w\|}$. A support vector is thus characterized by $y_i \left( \langle w,x_i \rangle + b \right) = 1 $. As already mentioned earlier, we want all points to be either a support vector, or even further away from the hyperplane. In training, we thus add the constraint $y_i \left( \langle w,x_i \rangle + b \right) \geq 1$, which ensures exactly that. tl;dr: Training points don't only need to be correct, they have to be on the margin or further away.
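The first tl;dr can be checked numerically in Python (a toy sketch; the candidate vectors are arbitrary): since squaring and halving are monotone on non-negative norms, all three objectives select the same minimizer.

```python
# Tiny numeric check that ||w||, ||w||^2 and (1/2)||w||^2 all pick the
# same minimizer, because squaring and scaling are monotone transforms
# of a non-negative quantity.
candidates = [(3.0, 4.0), (1.0, 2.0), (0.5, 0.5), (2.0, 0.0)]

def norm(w):
    return sum(c * c for c in w) ** 0.5

best_by_norm = min(candidates, key=norm)
best_by_sq   = min(candidates, key=lambda w: sum(c * c for c in w))
best_by_half = min(candidates, key=lambda w: 0.5 * sum(c * c for c in w))

print(best_by_norm == best_by_sq == best_by_half)  # True
```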
{ "domain": "datascience.stackexchange", "id": 588, "tags": "machine-learning, svm" }
Trying to code for calculating cosine series
Question: I was trying to calculate the cosine series: cos x = 1 - x^2/2! + x^4/4! - x^6/6! + x^8/8! ..., where x is in radians: #include <iostream> #include <math.h> using namespace std; long factorial (long num) { if (num >1) return num*factorial(num-1); else return 1; } int main() { int X,sum=0; cout<<"Enter value of x in radians : "; cin>>X; for (int i = 0; i<=4; ++i) { int z = pow(-1,i); int p = (pow(X,2*i)*z)/factorial(2*i); sum += p; } cout<<sum; return 0; } On putting x as 1.57 (90 degrees), I am getting 1 instead of 0. Can anyone explain why? Answer: You are using integer arithmetic rather than floating point arithmetic. Make sure you understand the difference.
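The answer's point can be made visible in Python (a sketch, not the asker's C++; `int()` here stands in for C++'s truncating integer division):

```python
import math

def cos_series(x, terms=5, integer_math=False):
    """Maclaurin series for cos(x); optionally truncate every term to an
    integer, mimicking the C++ code's int arithmetic."""
    total = 0
    for i in range(terms):
        term = (-1) ** i * x ** (2 * i) / math.factorial(2 * i)
        if integer_math:
            term = int(term)   # truncates toward zero, like C++ int division
        total += term
    return total

# The bug reproduced: reading 1.57 into an int turns it into 1, and every
# term after the first truncates to 0 -- so the "cosine" comes out as 1.
print(cos_series(int(1.57), integer_math=True))   # 1
# Floating point gives the expected answer near cos(1.57) ~ 0.0008.
print(abs(cos_series(1.57) - math.cos(1.57)) < 1e-3)  # True
```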
{ "domain": "cs.stackexchange", "id": 5389, "tags": "algorithms, programming-languages" }
Minkowski signature convention and propagator
Question: Plane waves with positive energy are usually defined as $$ e^{-ipx} $$ where (I suppose) the west coast metric $(+,-,-,-)$ has been used. Since the east coast metric $(-,+,+,+)$ switches the sign of the scalar product $px$, the positive energy plane waves read $$ e^{ipx} $$ Is that correct so far? If yes, then shouldn't the quantum propagator $e^{-iHt}$ be written as $e^{iHt}$ in the east coast metric? I'm asking because of a confusion: I'm pretty sure I have seen both $e^{-ipx}$ and $e^{ipx}$ as positive energy plane waves, but I have never stumbled across a $e^{iHt}$ propagator. Answer: The convention on the Minkowski metric is only used to write scalar products. In your case, in the West Coast metric, a plane wave with positive energy is given, as you said, by $e^{-i p^\mu x_\mu}$: this is written in components as $e^{-i Et}e^{i \vec p\cdot \vec x}$, and that's a positive energy wave because of the Schrödinger equation $$ i\partial_t f(\vec x,t)=E f(\vec x,t) $$ If you use your plane wave as $f$, you'll see that the equation is solved. That's why it is a positive energy solution. $e^{i p^\mu x_\mu}$, for similar reasons, is the plane wave with negative energy: you can see that it solves $$ i\partial_t f(\vec x,t)=-Ef(\vec x,t) $$ so it has energy $-E$. You are correct in saying that, in the case of the other metric, the roles are apparently switched: $e^{i p_\mu x^\mu}$ is the positive energy wave, while $e^{-i p_\mu x^\mu}$ is the negative energy wave. The fact is that, when you do the scalar product you will obtain the same expressions as before: $e^{i p_\mu x^\mu}$ written in East Coast convention evaluates to $e^{-i E t}e^{i\vec p\cdot\vec x}$, so it's the same exponential you would write as $e^{-i p^\mu x_\mu}$ in the West Coast convention. As you can see, once we write the scalar products in terms of the components of the vectors you get the same thing as before. So, the reason for $U(t)=\exp(-i H t)$ is not related to any metric convention, but it comes from the Schrödinger equation.
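The sign bookkeeping can be compressed into one line (restating the answer's components side by side):

```latex
p_\mu x^\mu =
\begin{cases}
Et - \vec p\cdot\vec x, & \text{west coast } (+,-,-,-)\\[2pt]
-Et + \vec p\cdot\vec x, & \text{east coast } (-,+,+,+)
\end{cases}
\qquad\Longrightarrow\qquad
\left.e^{-ip_\mu x^\mu}\right|_{\text{west}}
= \left.e^{+ip_\mu x^\mu}\right|_{\text{east}}
= e^{-iEt}\,e^{i\vec p\cdot\vec x}.
```

Either way the time dependence is $e^{-iEt}$, which is why the propagator $e^{-iHt}$ is independent of the metric convention.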
{ "domain": "physics.stackexchange", "id": 42128, "tags": "quantum-mechanics, special-relativity, conventions, propagator" }
Does special relativity's kinematic time dilation have a physical effect on matter, and how can you test this?
Question: Kinematic time dilation, also called constant-velocity time dilation, is an apparent time dilation effect but with real consequences for how we measure time and for our everyday life, like the GPS satellites, which lose on average 7 μs each day compared to clocks on the surface of the Earth due to this Special Relativity (SR) kinematic effect. At the same time they gain about 45 μs due to the gravitational time dilation effect (GPS and Relativity). These two opposite effects combine to a net 45 − 7 = +38 μs per day by which the clocks on the satellites run ahead (faster) relative to the clocks on the Earth's surface, and compensation must be applied so all clocks stay in synchronization. We know from experiments for certain that acceleration affects matter, slowing down vibrations at the molecular scale, and therefore has a physical effect on matter and also on the tick rates of time-measuring devices, from mechanical and electronic to atomic clocks. Therefore gravitational time dilation has an actual physical effect on matter. The above also includes biological clocks and the aging process of living organisms. A man on Jupiter would age slower than a man on Earth, assuming he/she is immune to the hostile environment of Jupiter :) I am curious whether this is also the case for the SR constant-velocity time dilation phenomenon, and whether it actually affects matter independent of frame of reference. Since the velocity is constant in this case, I expect no forces to be present on the moving matter. Therefore the tick rate of the clock must not be affected by the constant velocity and the measured time must flow at a constant pace. Which means the kinematic SR time dilation effect is not a matter phenomenon but an apparent time-measurement information-sharing effect between relatively moving clocks, due to the finite speed of light and their different positions in space.
The time measured on the clocks is not shifted; only the information about the measured time shared between the two observers is shifted in time. Are there any experiments that can show that SR time dilation has an actual effect on matter? A fairly simple experiment, according to the above information, would be to compensate a GPS satellite for a year only for the gravitational time dilation (i.e. the clock on the satellite runs +45 μs faster per day) and NOT compensate for the whole year for the kinematic SR time dilation effect (i.e. the clock on the satellite appears to run −7 μs slower per day). According to the above information, at the end of the year the clock on the orbiting GPS satellite would lag by about 365 × (−7 μs) = −2.55 ms. Bring the satellite down to Earth from orbit to a stationary position and read out its clock time. If the clock on the stationary satellite were then found to be lagging by around −2.55 ms, within the error bars of this experiment, then this would conclusively prove that SR constant-velocity time dilation has an actual physical effect on matter and is not just an apparent relativity effect! Answer: "We know from experiments for certain, that acceleration affects matter slowing down at the molecular scale vibrations and therefore has a physical effect on matter and also on tick rates of clock time measuring devices from the mechanical, electronic to atomic clocks. Therefore gravitational time dilation has an actual physical effect on matter." We do not say that the slowing down of molecular vibrations etc. is an actual physical effect on matter. Rather, it is an effect on the flow of time. BUT, if this is how you define "physical effect on matter", then the SR time dilation effect does the same thing as well. It also affects matter, slowing down vibrations at the molecular scale, and also affects the tick rates of clocks and all time-measuring devices, from mechanical and electronic to atomic clocks.
So, SR time dilation also has an actual effect. "Are there any experiments that can show that SR time dilation has an actual effect on matter?" Like I said earlier, neither SR nor GR time dilation has an actual effect on matter, but rather they have an effect on the rate of flow of time in a certain frame. Of course, under GR, the motion of matter is affected by the spacetime curvature, but that is a different issue. If you are asking whether there have been any experiments done to show SR time dilation, there have been multiple experiments done to confirm the SR time dilation effect. https://math.ucr.edu/home/baez/physics/Relativity/SR/experiments.html#Tests_of_time_dilation "A quite simple experiment would be according to the above information to compensate a GPS satellite for a year only for the gravitational time dilation (i.e. clock on satellite runs +45μs faster per day) and do NOT compensate for the whole year for the kinematic SR time dilation effect (i.e. clock on satellite appears to run -7μs slower per day). According to the above information at the end of the year the clock on the orbiting GPS satellite would lack in time about 365x(-7μs)= -2.55 ms. Bring the satellite down to Earth from orbit to stationary position and read out its clock time. If the clock on the stationary satellite would then found to be lacking around -2.55 ms within the error bars of this experiment then this would conclusively prove that SR constant velocity time dilation has an actual physical effect on matter and is not just an apparent relativity effect!" This could be done, but it is definitely not a SIMPLE experiment. You would have to mess with the GPS satellites, thus rendering all GPS systems worldwide inoperable for a year, just to provide experimental verification for something that has already been verified by many experiments.
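The numbers quoted in the question combine as follows (a trivial Python restatement of the question's own arithmetic, just to keep the signs straight):

```python
# Per-day clock offsets of a GPS satellite relative to ground clocks,
# in microseconds, as quoted in the question.
gravitational_gain = +45   # GR effect: satellite clock runs fast
kinematic_shift    = -7    # SR effect: satellite clock runs slow

net_per_day = gravitational_gain + kinematic_shift
print(net_per_day)              # +38 microseconds/day

# SR effect alone, accumulated over a year (the proposed uncompensated run):
yearly_sr_us = 365 * kinematic_shift
print(yearly_sr_us)             # -2555 microseconds, i.e. about -2.55 ms
```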
{ "domain": "physics.stackexchange", "id": 78440, "tags": "general-relativity, special-relativity, condensed-matter, gps, atomic-clocks" }
One dimensional and two dimensional motion
Question: I read that 1D motion is straight line motion and 2D motion is the motion when the two coordinates change with respect to time, so what would be the motion shown by the graph here? Answer: 1-D motion would be a straight line motion since there is nowhere else to move. A straight line, for example y=0, is the x-axis (abscissa). Think of a ball placed on a real number line. It can only move forward or backward. The equation of motion would be described by the set of coordinates $\{\vec{x}, \vec{v}_x\}$. In 2-D, i.e. on a plane (assuming a flat surface), the ball can move forward and backward in the plane defined by the y=0 or x-axis (abscissa) and x=0 or the y-axis (ordinate). The graph here shows a relation for a straight line $y=m\, x +c$, with m=1 and c=0 (for a line passing through the origin). Essentially, it is the graph of $y=x$. To describe a motion you need to have a time axis, i.e. a t-axis instead of the spatial x-axis. The motion in this case is of a body moving with a uniform velocity for the 1D motion with the points on the graph $\{t,x(t)\}$. Otherwise, with the graph you have posted, there is no motion because both axes are spatial dimensions. Unless you are plotting a parametric plot with $\{x(t),y(t)\}$. Then in that case you could think that this graph represents the projectile motion for an object thrown at an angle of 45 degrees. The motion being analyzed is for the initial values of x and y where the parabola can be approximated as a straight line.
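The last claim — that a 45-degree trajectory starts off along y = x — can be checked numerically from the parametric coordinates (a Python sketch; the values of u and g are arbitrary):

```python
import math

u, g = 10.0, 9.8                       # launch speed and gravity (arbitrary units)
theta = math.radians(45)

def position(t):
    """Parametric projectile coordinates {x(t), y(t)} for launch angle theta."""
    x = u * math.cos(theta) * t
    y = u * math.sin(theta) * t - 0.5 * g * t * t
    return x, y

x_early, y_early = position(0.01)      # just after launch
x_late,  y_late  = position(1.0)       # well into the flight

print(abs(y_early / x_early - 1) < 0.05)   # True: initially y ~ x
print(abs(y_late  / x_late  - 1) < 0.05)   # False: the parabola has bent away
```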
{ "domain": "physics.stackexchange", "id": 68323, "tags": "kinematics" }
Count islands in a binary grid
Question: This is the No of Island code challenge. Please review my implementation in Haskell. I know there must be some better way of doing this. Given an m x n 2D binary grid grid which represents a map of '1's (land) and '0's (water), return the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water. Input: 1 1 1 1 0 1 1 0 1 0 1 1 0 0 0 0 0 0 0 0 Output: 1 Solution: -- leetcode number of island import qualified Data.Vector as V import Data.Maybe {-- 1 1 1 1 0 1 1 0 1 0 1 1 0 0 0 0 0 0 0 0 --} data Visited = N | O deriving (Eq, Show) data Cell = Cell Int Int Visited deriving (Show) -- Function to compare the cells instance Eq Cell where (==) (Cell r0 c0 _) (Cell r1 c1 _) = r0 == r1 && c0 == c1 -- main function takes the Vector Vector Int islands :: V.Vector (V.Vector Int) -> [Cell] islands v = islandhelper1 (0,0) [] where -- fetch the row islandhelper1 :: (Int, Int) -> [Cell] -> [Cell] islandhelper1 b@(r, c) cell = case v V.!? r of Just x -> islandhelper2 x b cell _ -> cell -- fetch the cell and then call neighbors on it islandhelper2 :: V.Vector Int -> (Int, Int) -> [Cell] -> [Cell] islandhelper2 rw (r, c) cells = case rw V.!? 
c of Just 1 -> islandhelper2 rw (r, c + 1) (getNeighbors v (r,c) cells) Just 0 -> islandhelper2 rw (r, c+ 1) cells _ -> islandhelper1 (r + 1, 0) cells -- fetch the neighbors -- add initial cell and neighboring cells getNeighbors:: V.Vector (V.Vector Int) -> (Int, Int) -> [Cell] -> [Cell] getNeighbors v (r, c) clls = initcl ++ getnbs4 where getnbs4:: [Cell] getnbs4 = catMaybes $ map clfilter [(0,1),(1, 0),(0, -1),(-1, 0)] clfilter:: (Int, Int) -> Maybe Cell clfilter (r', c') = let m = r' + r n = c' + c in case getCellValue v (m, n) of Just (1, cl@(Cell m' n' _)) -> if cl `elem` clls then Nothing else Just (Cell m' n' O) _ -> Nothing initcl :: [Cell] initcl = case getCellValue v (r,c) of Just (1, cl@(Cell m' n' _)) -> if cl `elem` clls then clls else ((Cell m' n' N) : clls) _ -> clls -- get the cell value getCellValue::V.Vector (V.Vector Int) -> (Int, Int) -> Maybe (Int, Cell) getCellValue v (r, c) = case v V.!? r of Just x -> case x V.!? c of Just m -> Just (m, Cell r c O) _ -> Nothing _ -> Nothing -- to Filter the cell based on N isNew :: Cell -> Bool isNew (Cell _ _ k) = k == N -- Converts list to vector ltoV :: [a] -> V.Vector a ltoV = V.fromList -- converts the String to Int vector conV :: String -> V.Vector Int conV = ltoV . map (read::String -> Int) . words main::IO() main = do content <- ltoV . ( map conV) . lines <$> readFile "noofisland.txt" print $ length . filter isNew $ islands content Answer: Edit: all of the below was premature. The basic algorithm you're using doesn't work. Try this test case: 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 0 0 1 0 0 1 0 0 1 1 1 1 0 0 0 0 0 0 0 The usual comments of Code Review apply: Use a linter. Name things clearly. (You've got at least one test case, so that's good!) Some Haskell-specific code-smell stuff also applies: Visited is used like a Bool, so let it be a Bool. (You can wrap it up various different ways for clarity though!) At the same time, your use of Int to represent the land/water distinction isn't great. 
What if a 3 snuck in by mistake? Again, wrap up a bool! Use record types to name data fields. I suggest using type applications for things like read instead of an inline type hint. There's probably a better way to write getCellValue using lenses, but in this case I'd leverage the Maybe monad. In general, learning more abstractions will be good. For example, if I introduce a Coordinates type to wrap up (Int, Int), I can make it an instance of Semigroup and Monoid. getCellValue returns a lot of redundant data; simplify it down to just get the new stuff. This will push a little extra complexity up into the places where it's used, but the complexity belongs there (and can be resolved there). With respect to the algorithm: Instead of checking if each element is already in the list, consider always adding everything and then using nub at the end. This relies on the specific behavior of nubBy (it keeps the first match). So the code does get a little more fragile, but it enables substantial simplification! Also, in your list of ±1 offsets, there's no reason to be searching backward; you already checked those cells! I've been a bit lazy documenting my process, but the above got me to import Data.Function (on) import Data.List (nubBy) import Data.Maybe import qualified Data.Vector as V newtype Terrain = Terrain{isLand :: Bool} data Coordinates = Coord{row :: Int, column :: Int} deriving (Eq, Ord, Show) instance Semigroup Coordinates where (Coord r1 c1) <> (Coord r2 c2) = Coord (r1 + r2) (c1 + c2) instance Monoid Coordinates where mempty = Coord 0 0 incementColumn :: Coordinates -> Coordinates incementColumn (Coord r c) = Coord r (c + 1) data Cell = Cell { coordinates :: Coordinates, isNew :: Bool } deriving (Show) islands :: V.Vector (V.Vector Terrain) -> Int islands oceanscape = length . filter isNew . nubBy ((==) `on` coordinates) . reverse $ islandhelper1 mempty [] where islandhelper1 :: Coordinates -> [Cell] -> [Cell] islandhelper1 coord cells = case oceanscape V.!? 
row coord of Just rowVector -> islandhelper2 rowVector coord cells _ -> cells islandhelper2 :: V.Vector Terrain -> Coordinates -> [Cell] -> [Cell] islandhelper2 rowVector coord cells = case rowVector V.!? column coord of Just (Terrain True) -> islandhelper2 rowVector (incementColumn coord) (getNeighbors oceanscape coord ++ Cell coord True : cells) Just (Terrain False) -> islandhelper2 rowVector (incementColumn coord) cells _ -> islandhelper1 (Coord{row = row coord + 1, column = 0}) cells getNeighbors:: V.Vector (V.Vector Terrain) -> Coordinates -> [Cell] getNeighbors oceanscape coord = mapMaybe (clfilter . (coord <>) . uncurry Coord) [(0,1), (1,0)] where clfilter :: Coordinates -> Maybe Cell clfilter coord' = do -- The Maybe Monad! Terrain True <- oceanscape !? coord' -- pattern match failure will yield Nothing. return $ Cell coord' False -- get the cell value (!?) :: V.Vector (V.Vector a) -> Coordinates -> Maybe a oceanscape !? coord = do -- The Maybe Monad! x <- oceanscape V.!? row coord x V.!? column coord At this point we can see that the entire recursive structure of islands is just generating 0-5 items for each step of an iteration, so let's use fmap. (Even if we did need to be examining the accumulator, we could still use a fold.) Here's my final version. I'm pretty sure there's a better way to build locations, but I gotta go do actual work :) There are probably even better ways of thinking about the whole problem, but at that point we'd be changing your fundamental algorithm, so that's out of scope. 
module Main where import Data.Function (on) import Data.List (nubBy) import Data.Maybe import qualified Data.Vector as V newtype Terrain = Terrain{isLand :: Bool} data Coordinates = Coord{row :: Int, column :: Int} deriving (Eq, Ord, Show) instance Semigroup Coordinates where (Coord r1 c1) <> (Coord r2 c2) = Coord (r1 + r2) (c1 + c2) data Cell = Cell { coordinates :: Coordinates, isNew :: Bool } deriving (Show) islands :: V.Vector (V.Vector Terrain) -> Int islands oceanscape = length . filter isNew . nubBy ((==) `on` coordinates) $ islandHelper `concatMap` locations where islandHelper :: (Coordinates, Terrain) -> [Cell] islandHelper (coord, Terrain True) = Cell coord True : getNeighbors oceanscape coord islandHelper (_, Terrain False) = [] locations :: V.Vector (Coordinates, Terrain) locations = do -- The Vector Monad! (rowIndex, rowVector) <- V.generate (length oceanscape) id `V.zip` oceanscape (columnIndex, value) <- V.generate (length rowVector) id `V.zip` rowVector return (Coord{row=rowIndex, column=columnIndex}, value) getNeighbors:: V.Vector (V.Vector Terrain) -> Coordinates -> [Cell] getNeighbors oceanscape coord = mapMaybe (clfilter . (coord <>) . uncurry Coord) [(0,1), (1,0)] where clfilter :: Coordinates -> Maybe Cell clfilter coord' = do -- The Maybe Monad! Terrain True <- oceanscape !? coord' -- pattern match failure will yield Nothing. return $ Cell coord' False -- get the cell value (!?) :: V.Vector (V.Vector a) -> Coordinates -> Maybe a oceanscape !? coord = do -- The Maybe Monad! x <- oceanscape V.!? row coord x V.!? 
column coord main::IO() main = do content1 <- parse <$> readFile "code_review_283543_1.test" print $ islands content1 content2 <- parse <$> readFile "code_review_283543_2.test" print $ islands content2 content3 <- parse <$> readFile "code_review_283543_3.test" print $ islands content3 content4 <- parse <$> readFile "code_review_283543_4.test" print $ islands content4 content8 <- parse <$> readFile "code_review_283543_8.test" print $ islands content8 where parse :: String -> V.Vector (V.Vector Terrain) parse = V.fromList . map (V.fromList . map parseCell . words) . lines parseCell = Terrain . (/= 0) . (read @Int)
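For comparison with the review's counterexample, here is the standard flood-fill formulation in Python rather than Haskell (a sketch; the grid restates the review's ring-shaped test case): by expanding in all four directions from each unvisited land cell, it counts such islands correctly.

```python
def count_islands(grid):
    """Count 4-connected components of 1s via iterative depth-first flood fill.
    Assumes a non-empty rectangular grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    islands = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                islands += 1                 # a new, unvisited component
                stack = [(r, c)]
                while stack:                 # flood-fill the whole component
                    i, j = stack.pop()
                    if (i, j) in seen:
                        continue
                    seen.add((i, j))
                    for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni][nj] == 1 and (ni, nj) not in seen:
                            stack.append((ni, nj))
    return islands

# The review's ring-shaped counterexample: one island, not several.
ring = [[0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 1, 0, 0, 1, 0],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0]]
print(count_islands(ring))  # 1
```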
{ "domain": "codereview.stackexchange", "id": 44510, "tags": "programming-challenge, haskell" }
What does the error rate on the IBMQ website mean?
Question: How is the error rate pointed by the arrow defined? How did you get it? thanks! Answer: Readout error: do a bunch (1000's) of experiments preparing the qubit in either the 0 state or the 1 state and then immediately measuring the qubit state after each preparation. Two types of readout errors can occur: prepared 1 --> measured 0, and prepared 0 --> measured 1. The reported readout error is the average of the rate of those two errors (e.g. if the first occurred on 2% of trials and the second occurred on 1% of trials, then the total error rate would be 1.5% or 0.015. Single-qubit U2 error rate: this is measured using randomized benchmarking (see https://arxiv.org/abs/1109.6887), in which sequences of gates (specifically Clifford gates) are applied with the goal of taking the qubit on a random walk among certain points on the Bloch sphere and returning it to the 0 state it started in. As the number of gates in the sequence is increased, the chance of returning to zero drops exponentially and eventually saturates near 50%. The gate error rate is extracted from the fit to this exponential decay. CNOT error rate: two-qubit randomized benchmarking; same idea as for single-qubit errors, but now the gates are two-qubit Clifford gates.
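The "fit to this exponential decay" step can be illustrated with synthetic data in Python (a sketch with made-up numbers, not data from any real device; the plateau B is taken as known from the long-sequence limit):

```python
import math

# Synthetic randomized-benchmarking data: survival probability after m
# Clifford gates decays as p(m) = A * r**m + B (illustrative numbers).
A, B, r_true = 0.5, 0.5, 0.98
lengths = [1, 20, 50, 100, 200]
survival = [A * r_true ** m + B for m in lengths]

# With the plateau B known, log(p(m) - B) is linear in m with slope log(r),
# so an ordinary least-squares line fit recovers the decay parameter r.
xs = lengths
ys = [math.log(p - B) for p in survival]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
r_fit = math.exp(slope)

# Average error per Clifford for one qubit: (1 - r) * (d - 1) / d with d = 2.
error_per_clifford = (1 - r_fit) / 2
print(round(r_fit, 4), round(error_per_clifford, 5))  # 0.98 0.01
```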
{ "domain": "quantumcomputing.stackexchange", "id": 1125, "tags": "qiskit, ibm-q-experience" }
Problem during compilation of ROS INDIGO on ARM/Ubuntu trusty from source
Question: Trying to compile ROS from source on Ubuntu 14 (trusty) on NVIDIA Jetson tk1. I'm stuck on this problem: ==> Processing catkin package: 'qt_gui_cpp' ==> Building with env: '/home/ubuntu/ros_catkin_ws/install_isolated/env.sh' Makefile exists, skipping explicit cmake invocation... ==> make cmake_check_build_system in '/home/ubuntu/ros_catkin_ws/build_isolated/qt_gui_cpp' ==> make -j1 -l1 in '/home/ubuntu/ros_catkin_ws/build_isolated/qt_gui_cpp' [ 76%] Built target qt_gui_cpp [ 84%] Running SIP generator for qt_gui_cpp_sip Python bindings... Traceback (most recent call last): File "/home/ubuntu/ros_catkin_ws/install_isolated/share/python_qt_binding/cmake/sip_configure.py", line 41, in <module> subprocess.check_call(cmd) File "/usr/lib/python2.7/subprocess.py", line 535, in check_call retcode = call(*popenargs, **kwargs) File "/usr/lib/python2.7/subprocess.py", line 522, in call return Popen(*popenargs, **kwargs).wait() File "/usr/lib/python2.7/subprocess.py", line 710, in __init__ errread, errwrite) File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory make[2]: *** [sip/qt_gui_cpp_sip/Makefile] Error 1 make[1]: *** [src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/all] Error 2 make: *** [all] Error 2 <== Failed to process package 'qt_gui_cpp': Command '/home/ubuntu/ros_catkin_ws/install_isolated/env.sh make -j1 -l1' returned non-zero exit status 2 Reproduce this error by running: ==> cd /home/ubuntu/ros_catkin_ws/build_isolated/qt_gui_cpp && /home/ubuntu/ros_catkin_ws/install_isolated/env.sh make -j1 -l1 Command failed, exiting. ANY IDEA? Originally posted by formica on ROS Answers with karma: 61 on 2014-07-25 Post score: 1 Original comments Comment by jacksonkr_ on 2016-04-08: Make sure you have multiverse and universe sources available. Check out my answer below.
Answer: UPDATE I now have experimental binary builds of Indigo available for Trusty armhf: http://wiki.ros.org/indigo/Installation/UbuntuARM ORIGINAL ANSWER On my Jetson tk1, this executes /usr/bin/sip, which is part of the sip-dev package. Do you have sip-dev installed? I found out which executable it's trying to run by reading the sip_configure.py script and then running the following python snippet: from PyQt4 import pyqtconfig cfg = pyqtconfig.Configuration() print cfg.sip_bin Originally posted by ahendrix with karma: 47576 on 2014-07-25 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18776, "tags": "robotic-arm, ros, jetson" }
Getting the count of how many times a route has been hit
Question: I have this log file (example below) and I'm trying to find the number of times a particular route is called. Example log file: 2019-05-29 11:00:00 192.168.1.1 POST /route1 200 100000 2019-05-29 11:00:01 10.1.1.2 POST /route1 200 100000 2019-05-29 11:00:01 192.168.1.2 GET /route2 404 200000 2019-05-29 11:00:02 192.168.1.3 GET /route3 200 100000 2019-05-29 11:00:03 10.1.1.3 GET /route4 200 200000 2019-05-29 11:00:04 192.168.1.1 POST /route1 200 100000 Here is the code I've written: route_count = Hash.new(0) File.open('test.log').each do |line| temp_array = line.split(" ") route = temp_array[4] if route_count.has_key?(route) route_count[route] = route_count[route] + 1 else route_count[route] = 1 end end puts( route_count.map{ |k,v| "#{k},#{v}" }) and it gives the output: /route1,3 /route2,1 /route3,1 /route4,1 I wanted to know if there is a better way to do this. Answer: The Enumerable.group_by method is perfect for situations like this. The method takes the items in the enumerable, evaluates a block on them to yield a key, and inserts them into a hash under that key. Using this the code is very short: puts(File.open('test.log').entries .group_by { |line| line.split(" ")[4] } .map { |k, v| "#{k},#{v.count}" })
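For comparison, the same one-pass tally reads naturally in Python too (a sketch, using the sample log from the question; field index 4 is the route, exactly as in the Ruby code):

```python
from collections import Counter

log_lines = [
    "2019-05-29 11:00:00 192.168.1.1 POST /route1 200 100000",
    "2019-05-29 11:00:01 10.1.1.2 POST /route1 200 100000",
    "2019-05-29 11:00:01 192.168.1.2 GET /route2 404 200000",
    "2019-05-29 11:00:02 192.168.1.3 GET /route3 200 100000",
    "2019-05-29 11:00:03 10.1.1.3 GET /route4 200 200000",
    "2019-05-29 11:00:04 192.168.1.1 POST /route1 200 100000",
]

# Counter groups and counts in one pass, like Ruby's group_by + count.
route_counts = Counter(line.split()[4] for line in log_lines)
print(route_counts["/route1"])  # 3
```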
{ "domain": "codereview.stackexchange", "id": 34781, "tags": "ruby, parsing" }
Maximum Penetration Depth for Ultrasound
Question: How do you define the maximum penetration depth of an ultrasound? I'm assuming it means the depth at which the wave has attenuated to a certain small percentage of the original intensity of the incident beam? Answer: Yes, it is customary when talking about penetration length to mean the length at which the intensity has dropped to 36.8% of its surface value. This is motivated by the fact that the intensity decays as $I(z)=I_o e^{-z/d}$. You see, you need a constant of dimension [Length] in the exponent denominator, so you call that $d$ the penetration length; then it's immediate that the intensity at our defined length is $I(d)=I_o e^{-1} = 0.368\, I_o$. Notice this is just a natural convention; all scientists could have agreed to write $I(z)=I_o e^{-2z/\pi d}$ which is still dimensionally correct but corresponds to a different numerical value of our chosen length. All that matters in any case is the "length scale", which is any of those $d$s up to a constant of order unity.
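Numerically, the convention amounts to this (a short Python sketch):

```python
import math

def intensity_fraction(z, d):
    """I(z)/I_o = exp(-z/d): fraction of surface intensity left at depth z."""
    return math.exp(-z / d)

# At one penetration depth the beam is down to ~36.8% of its surface value;
# each further multiple of d knocks off another factor of e.
print(round(intensity_fraction(1.0, 1.0), 3))   # 0.368
print(round(intensity_fraction(3.0, 1.0), 3))   # 0.05
```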
{ "domain": "physics.stackexchange", "id": 20888, "tags": "waves, acoustics" }
Genomic Range Query in Python
Question: Recently, I worked on one of the Codility training tasks - Genomic Range Query (please refer to one of the evaluation reports for the details of the task). The solution in Python is based on prefix sums, and was adjusted in accordance with this article. import copy def solution(S, P, Q): length_s = len(S) length_q = len(Q) char2digit_dict = {'A':1, 'C':2, 'G':3, 'T':4} str2num = [0]*length_s count_appearances = [[0, 0, 0, 0]]*(length_s+1) cur_count = [0, 0, 0, 0] for i in range(0, length_s): str2num[i] = char2digit_dict[S[i]] cur_count[str2num[i]-1] += 1 count_appearances[i+1] = copy.deepcopy(cur_count) results = [] for i in range(0, length_q): if Q[i] == P[i]: results.append(str2num[Q[i]]) elif count_appearances[Q[i]+1][0] > count_appearances[P[i]][0]: results.append(1) elif count_appearances[Q[i]+1][1] > count_appearances[P[i]][1]: results.append(2) elif count_appearances[Q[i]+1][2] > count_appearances[P[i]][2]: results.append(3) elif count_appearances[Q[i]+1][3] > count_appearances[P[i]][3]: results.append(4) return results However, the evaluation report said that it exceeds the time limit for the large-scale test cases (the report link is above). The detected time complexity is \$O(M * N)\$ instead of \$O(M + N)\$. But in my view, it should be \$O(M + N)\$ since I run a loop for \$N\$ and \$M\$ independently (\$N\$ for calculating the prefix sums and \$M\$ for getting the answer). Could anyone help me figure out what the problem is with my solution? I've been stuck for a long time. Any performance improvement trick or advice will be appreciated. Answer: I'm not sure that I trust Codility's detected time complexity. As far as I know, it's not possible to programmatically calculate time complexity, but it is possible to plot out a performance curve over various sized datasets and make an estimation. That being said, there is a good deal of unnecessary overhead in your code, and so it is possible that Codility is interpreting that overhead as a larger time complexity.
As for your code, Python is all about brevity and readability and generally speaking, premature optimization is the devil. Some of your variable names are not in proper snake case (str2num, char2digit_dict). List memory allocation increases by a power of 2 each time it surpasses capacity, so you really don't have to pre-allocate the memory for it, it's marginal. You convert your string into a list of digits, but then only ever use it in a way that could be fulfilled with your original dict. The list of digits is not really needed since you are already computing a list of prefix sums. In the C solution it was important to calculate the len of the P/Q list first so that it's not recalculated every time, but in Python, range (xrange in Python2) is evaluated for i exactly once. It's not necessary to deepcopy cur_count as it only contains integers, which are immutable. You can copy the list by just slicing itself, list[:] Instead of constantly making references to count_appearances (up to 4x each iteration), you could just make the reference once and store it. This will also make it easier to update your references if the structure of P and Q were to change in any way. Cleaning up your code in the ways I mentioned gives me: def solution(S, P, Q): cost_dict = {'A':1, 'C':2, 'G':3, 'T':4} curr_counts = [0,0,0,0] counts = [curr_counts[:]] for s in S: curr_counts[cost_dict[s]-1] += 1 counts.append(curr_counts[:]) results = [] for i in range(len(Q)): counts_q = counts[Q[i] + 1] counts_p = counts[P[i]] if Q[i] == P[i]: results.append(cost_dict[S[Q[i]]]) elif counts_q[0] > counts_p[0]: results.append(1) elif counts_q[1] > counts_p[1]: results.append(2) elif counts_q[2] > counts_p[2]: results.append(3) elif counts_q[3] > counts_p[3]: results.append(4) return results which gets a perfect score on Codility.
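As a quick sanity check (my own harness, not part of the original answer), the cleaned-up solution from the answer can be run against the well-known sample from the task statement, where S = "CAGCCTA" with queries (2,4), (5,5), (0,6) should yield [2, 4, 1]:

```python
def solution(S, P, Q):
    # Cost of each nucleotide, as in the answer's cost_dict.
    cost_dict = {'A': 1, 'C': 2, 'G': 3, 'T': 4}
    curr_counts = [0, 0, 0, 0]
    counts = [curr_counts[:]]              # counts[i] = per-nucleotide totals in S[:i]
    for s in S:
        curr_counts[cost_dict[s] - 1] += 1
        counts.append(curr_counts[:])      # shallow slice copy is enough for ints
    results = []
    for i in range(len(Q)):
        counts_q = counts[Q[i] + 1]
        counts_p = counts[P[i]]
        if Q[i] == P[i]:
            results.append(cost_dict[S[Q[i]]])
        elif counts_q[0] > counts_p[0]:
            results.append(1)
        elif counts_q[1] > counts_p[1]:
            results.append(2)
        elif counts_q[2] > counts_p[2]:
            results.append(3)
        else:
            results.append(4)
    return results

# Sample from the task statement.
answers = solution("CAGCCTA", [2, 5, 0], [4, 5, 6])
```

Both loops run once, over N characters and M queries respectively, which is the O(M + N) behaviour the asker intended.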
{ "domain": "codereview.stackexchange", "id": 28164, "tags": "python, programming-challenge, python-2.x, complexity, bioinformatics" }
Will I age slower in space or on other planets?
Question: I came across a few videos which say astronauts on the ISS will have 0.07 seconds of extra life. So if life is possible on Mars, then what is the lifespan for us humans there? Is it the same as that on Earth or different? For example, if I spend 500 days on Mars, will that be equal to spending around 515 days on Earth? So is it that I'm gaining 15 days of life? I mean, can a human lifespan increase? If a human average lifespan is 60-80 years on Earth, on Mars will the average lifespan of a human be 62-82 Earth years? Or will it be 60-80 Mars years? I don't know how much sense this makes, but as a new learner I just got a thought on space-time. Answer: If a human average lifespan is 60-80 years on Earth, on Mars will the average lifespan of a human be 62-82 Earth years? Or will it be 60-80 Mars years? Relativity is about the relative state of the thing observing and the thing they observe. Locally (e.g. on Mars) things do not go slower for you, but someone observing (e.g. from Earth) sees time passing more slowly for you. Locally your life on Mars is not lengthened. Measured from Earth, more time will pass on Earth (as measured by someone local to Earth) than passes on Mars (as measured by someone on Earth observing Mars). This is more complicated than it might seem at first glance because it works both ways. You (on Mars) see me (on Earth) moving, so from your point of view you also see my time frame slow down relative to yours. From an everyday human point of view that seems like a paradox, as we would not expect both of us to see the other person's time pass more slowly than ours, but relativity does work that way. There's a question (or two) on Physics SE dealing with the "why" that happens and why it's not a mistake or a fault with the theory. Here is one.
It's even more complex when you have planets in orbits around a star, because that uses General Relativity and there are even more subtleties and additional effects on the rate of time we each measure for the other. Gravitational time dilation does not work both ways - that depends on the strength of the gravitational field (from everything) so it's different for each observer. Wikipedia's Time Dilation page is a reasonable place to start reading from, I think. Note that general relativity is considerably more complex mathematically than special relativity. We have measured these effects - they are real, but very small in human experience of them.
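To put rough numbers on the ISS figure mentioned in the question, here is a minimal sketch (my own illustration, using only the special-relativistic velocity term and deliberately ignoring the gravitational term, which partially offsets it; the orbital speed of about 7,660 m/s is an assumption):

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 7_660.0         # approximate ISS orbital speed, m/s (assumed)

# Fractional rate difference: a moving clock ticks slower by a factor of 1/gamma,
# so it loses roughly this fraction of a second per second of Earth time.
frac = 1.0 - math.sqrt(1.0 - (v / c) ** 2)   # ~3.3e-10

seconds_per_year = 365.25 * 24 * 3600
dilation_per_year = frac * seconds_per_year  # ~0.01 s per year in orbit
```

At roughly a hundredth of a second per year, an astronaut needs many cumulative years in orbit before the offset becomes anything like the quoted figure, and for a Mars colonist the effect is similarly tiny: as the answer stresses, the local lifespan is unchanged.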
{ "domain": "astronomy.stackexchange", "id": 4513, "tags": "space-time" }
Steel plate thickness for load
Question: There is a steel plate with a length of 26.5cm (10.4 inch) and a width of 4cm (1.57 inch). The steel plate is attached at one end; at the other end there is a load of 30kg. What thickness should the plate be so it will not bend? Answer: It will always bend, as steel has finite stiffness. You need to decide how much deflection is acceptable. This page lists the bending equations for a variety of loading situations. This case is a cantilever with a load at one end. These equations do rely on some assumptions but should be fine for this situation. Be aware, though, that they apply to static loads only; if the load moves, the stresses involved can be significantly higher. These equations also assume that stresses are within the elastic limit of the material; if the yield stress is exceeded then the beam will be permanently deformed and may fail completely. As a rough guide I would say that you are looking at about 8mm thick or more. It is also worth noting that a flat plate is a very inefficient way to support a cantilevered load, and unless there is a strong reason why the support needs to be very slender you are much better off either adding an arch-type support web below the beam or using a hollow section. For example, a 25x25mm square tube with a wall thickness of 2mm would support 30 kg with no trouble and very small deflection at a 27cm cantilever length, and would weigh much less than an equivalent 4cm wide strip. You also need to consider the fact that there will be a large torsion load where the plate meets the thing it is attached to, which is a common point of failure.
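The standard cantilever formulas referenced in the answer can be evaluated directly. This sketch (my own numbers: mild steel with an assumed 250 MPa yield and E = 200 GPa) checks the suggested 8 mm plate:

```python
# Cantilever with a point load at the free end:
#   bending stress  sigma = M / Z,  with M = F*L and Z = b*t**2 / 6
#   tip deflection  delta = F*L**3 / (3*E*I),  with I = b*t**3 / 12
F = 30 * 9.81         # load, N
L = 0.265             # cantilever length, m
b = 0.04              # plate width, m
t = 0.008             # trial thickness, m
E = 200e9             # Young's modulus of steel, Pa (assumed)
yield_stress = 250e6  # mild steel yield, Pa (assumed)

I = b * t**3 / 12     # second moment of area
Z = b * t**2 / 6      # section modulus
sigma = F * L / Z                  # ~180 MPa, below yield
delta = F * L**3 / (3 * E * I)     # tip deflection, a few millimetres
```

So an 8 mm plate stays inside the elastic range but still sags visibly at the tip, which is exactly why the answer recommends a hollow section instead.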
{ "domain": "engineering.stackexchange", "id": 1408, "tags": "steel, stresses" }
What is the difference between Non-Conservative and Dissipative?
Question: We often hear these terms. They are often confused as synonyms, but they are not. What are the rigorous definitions of them? Answer: A dissipative force is a force that transfers energy from the macroscopic degrees of freedom into microscopic ones. For instance, the position and velocity of the center of mass of a block of wood sliding on a surface would correspond to macroscopic degrees of freedom. However, the motions of the individual molecules in the surface or the wooden block are microscopic degrees of freedom. Then, when the wooden block slides on the surface, its molecules collide with the molecules of the surface because of their slight imperfections. As the molecules collide, the energy in the macroscopic motion of the block gets transferred into the random motion of the molecules both in the block and the surface (this random motion is also called heat). However, you are only able to see the motion of the whole block, and you thus describe its motion as under the influence of a dissipative force. Every dissipative force is non-conservative, that is, it does not conserve the energy in the degrees of freedom you keep track of. Nevertheless, there are non-conservative forces that are not dissipative. This occurs when there is a macroscopic entity you do not keep track of, which adds or takes away energy from your system. For instance, you could be describing a ball which is attached to a spring which itself is being periodically pulled up and down by some kind of engine. The force of the spring on the ball will be non-conservative, at least if you are focusing on the ball as the only degree of freedom you are interested in. This is because it can either take energy from, or add energy to, the ball on the spring; its action does not conserve the energy of the ball. Of course, fundamentally speaking, the distinction between non-conservative, dissipative, microscopic, and macroscopic can be a little bit conventional.
In fact, physicists generally believe that if we account for all the degrees of freedom involved, every force is conservative. But it is really useful to have effective descriptions in which non-conservative and/or dissipative forces appear.
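The distinction can be made concrete numerically. In this sketch (my own toy model, not from the answer) a damped oscillator loses mechanical energy to "heat" (dissipative), while an undamped oscillator driven at resonance by an "engine" gains energy (non-conservative but not dissipative):

```python
import math

def simulate(steps, dt, damping=0.0, drive=0.0):
    # Unit mass on a unit-stiffness spring, semi-implicit Euler integration.
    x, v = 1.0, 0.0
    for n in range(steps):
        t = n * dt
        a = -x - damping * v + drive * math.cos(t)  # drive at the natural frequency
        v += a * dt
        x += v * dt
    return 0.5 * v * v + 0.5 * x * x   # mechanical energy of the ball

E0 = 0.5                                          # initial energy (x=1, v=0)
E_damped = simulate(10_000, 0.001, damping=0.2)   # friction: energy drains away
E_driven = simulate(20_000, 0.001, drive=0.5)     # engine at resonance: energy grows
```

In both runs the ball's mechanical energy is not conserved, but only in the damped case does it end up as microscopic random motion; the driven case merely shows an untracked macroscopic entity doing work on the system.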
{ "domain": "physics.stackexchange", "id": 57232, "tags": "classical-mechanics, statistical-mechanics, definition, dissipation, conservative-field" }
A two qubit state in a special form
Question: How can a pure two-qubit state $|\psi\rangle = a |00\rangle + b|01\rangle + c|10\rangle + d|11\rangle$ be written in the following form \begin{equation} |\psi_{\alpha}\rangle = \sqrt{\alpha}|01\rangle + \sqrt{1-\alpha} |10\rangle. \end{equation} How can one prove this, and what would be $\alpha$ in terms of $a,~b,~c,~d$? Answer: The trick is in the Schmidt decomposition - Following the theorem proof, we have two qubits, so two Hilbert spaces $H_1$ and $H_2$, both with dimension 2, with the bases defined as $\left\lbrace\left|e_0\right> = \left|0\right>_1,\, \left|e_1\right>=\left|1\right>_1 \right\rbrace$ and $\left\lbrace\left|f_0\right> = \left|0\right>_2,\, \left|f_1\right>=\left|1\right>_2 \right\rbrace$. This defines a tensor $w = \sum_{i, j=0}^1\beta_{i, j}\left|i\right>_1\otimes\left|j\right>_2$. That is, $\beta_{00} = a, \,\beta_{01} = b, \,\beta_{10} = c$ and $\beta_{11} = d$. We then write this as an $n\times n$ matrix $$M_w = \begin{pmatrix}\beta_{00} && \beta_{01} \\ \beta_{10} && \beta_{11}\end{pmatrix} = \begin{pmatrix}a && b \\ c && d\end{pmatrix}.$$ Now that we've got a matrix, we want to diagonalise it, so we perform a Singular Value Decomposition (SVD). That is, we want to write $M_w = UDV^\dagger$, where $D$ is a diagonal matrix (with the elements known as the 'singular values') and $U$ and $V$ are unitary $n\times n$ matrices. Or rather, to save having to do a chunk of maths, we know that the columns (also, rows) of both $U$ and $V$ each form an orthonormal basis - I'll call these $\left\lbrace\left|u_0\right>,\, \left|u_1\right>\right\rbrace$ and $\left\lbrace\left|v_0\right>,\, \left|v_1\right>\right\rbrace$.
Helpfully, the expression for the singular values of a $2\times2$ matrix is analytic: $$\sigma_{\pm} = \sqrt{\left|z_0\right|^2 + \left|z_1\right|^2 + \left|z_2\right|^2 + \left|z_3\right|^2 \pm \sqrt{\left(\left|z_0\right|^2 + \left|z_1\right|^2 + \left|z_2\right|^2 + \left|z_3\right|^2\right)^2 - \left|z_0^2 - z_1^2 - z_2^2 - z_3^2\right|^2}},$$ where \begin{align*}z_0 &= \frac{1}{2}\left(a+d\right) \\ z_1 &= \frac{1}{2}\left(b+c\right) \\ z_2 &= \frac{i}{2}\left(b-c\right) \\ z_3 &= \frac{1}{2}\left(a-d\right). \end{align*} This in turn gives $$\sigma_\pm= \sqrt{\frac{1}{2}\pm\sqrt{\frac{1}{4}-\left|ad - bc\right|^2}},$$ as a result of the normalisation condition $\left|a\right|^2 + \left|b\right|^2 + \left|c\right|^2 + \left|d\right|^2 = 1$. As $\sigma_+^2 + \sigma_-^2 = 1$, I'll redefine $\sigma_+ = \sqrt{\alpha}$ and $\sigma_- = \sqrt{\left(1-\alpha\right)}$, for reasons that should become clear below. We can now write $M_w = \sqrt\alpha\left|u_0\rangle\langle v_0\right| + \sqrt{1-\alpha}\left|u_1\rangle\langle v_1\right|$. 'Rewriting' this as a tensor (as at the beginning1) gives a state $$\left|\psi_\alpha\right> = \sqrt\alpha\left|u_0\right\rangle\otimes\left|v_0\right\rangle + \sqrt{1-\alpha}\left|u_1\right\rangle\otimes\left|v_1\right\rangle,$$ which is equivalent to what you have by defining $\left\lbrace\left|u_0\right> = \left|0\right>_u,\, \left|u_1\right>=\left|1\right>_u \right\rbrace$ and $\left\lbrace\left|v_0\right> = \left|1\right>_v,\, \left|v_1\right>=\left|0\right>_v \right\rbrace$, where $$\alpha= \frac{1}{2}+\sqrt{\frac{1}{4}-\left|ad - bc\right|^2}$$ Calculating $M_w$: $\left|u_0\right>$ is the left column of $U$ and $\left|u_1\right>$, the right column. Similarly, for $V$, $\left|v_0\right>$ is the left column and $\left|v_1\right>$, the right. 
This means that I can write $$U = \begin{pmatrix}\left|u_0\right> && \left|u_1\right>\end{pmatrix}$$ and $$V^{\dagger} = \begin{pmatrix}\left<v_0\right| \\ \left<v_1\right|\end{pmatrix}$$ so that $$M_w = UDV^\dagger = \begin{pmatrix}\left|u_0\right> && \left|u_1\right>\end{pmatrix}\begin{pmatrix}\sqrt\alpha && 0 \\ 0 && \sqrt{1-\alpha}\end{pmatrix}\begin{pmatrix}\left<v_0\right| \\ \left<v_1\right|\end{pmatrix},$$ which can be simplified as $M_w = \sqrt\alpha\left|u_0\rangle\langle v_0\right| + \sqrt{1-\alpha}\left|u_1\rangle\langle v_1\right|$. 1 Even as a non-mathematician, I feel guilty just doing this
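The algebraic simplification in the answer (the long $\sigma_\pm$ expression collapsing to $\sqrt{1/2 \pm \sqrt{1/4 - |ad-bc|^2}}$ under normalisation) can be spot-checked numerically on random states; this is my own consistency check, not part of the original answer:

```python
import math, random

random.seed(0)

errs = []
for _ in range(100):
    # Random normalised complex amplitudes (a, b, c, d).
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in amps))
    a, b, c, d = [z / norm for z in amps]

    # z0..z3 exactly as defined in the answer.
    z0, z1 = (a + d) / 2, (b + c) / 2
    z2, z3 = 1j * (b - c) / 2, (a - d) / 2

    s = sum(abs(z) ** 2 for z in (z0, z1, z2, z3))       # should equal 1/2
    disc = abs(z0**2 - z1**2 - z2**2 - z3**2) ** 2       # should equal |ad - bc|^2
    sigma_plus = math.sqrt(s + math.sqrt(s * s - disc))  # the long formula

    # The simplified form claimed in the answer.
    simplified = math.sqrt(0.5 + math.sqrt(0.25 - abs(a * d - b * c) ** 2))
    errs.append(abs(sigma_plus - simplified))

max_err = max(errs)
```

The two expressions agree to floating-point precision, confirming that $\sum_i |z_i|^2 = 1/2$ and $z_0^2 - z_1^2 - z_2^2 - z_3^2 = ad - bc$ for normalised amplitudes.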
{ "domain": "quantumcomputing.stackexchange", "id": 605, "tags": "quantum-state" }
Narrowband DOA Beamforming techniques on Wideband signal
Question: I'm trying to implement a tracking system (undergrad thesis) using DOA of acoustic signals. I'm testing this by playing a 2kHz sound and receiving the signal with four microphones in a ULA. So, the band I'm interested in is narrowband, but naturally the signal received is wideband. I have been able to successfully implement a time delay beamformer; however, when I tried to implement a phase shift beamformer I failed. My suspicion is that you can only perform phase shift beamforming techniques on narrowband signals, and that it will not work even if you are looking for a narrowband within a wideband signal. Am I correct? If so, will a simple filter be good enough to correct this issue and transform the signal into a narrowband one? Answer: My suspicion is that you can only perform phase shift beamforming techniques on narrowband signals, and that it will not work even if you are looking for a narrowband within a wideband signal. Am I correct? No; the correlation coefficient of your reference signal to your received signal should, given additive, phase-uniform, white noise, have an expectation value of the phase shift the individual receiver sees, no matter how wide your band is. Note that this is only true if your transmitter/receiver system can be modeled as linear phase for the bandwidth we're talking about, but without loss of generality, radar/sonar/sound systems are designed to pretty much have exactly that characteristic.
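To illustrate the mechanics, here is a small sketch (my own toy simulation, with an assumed geometry: 4-mic ULA, 4 cm spacing, c = 343 m/s, noiseless single snapshot) showing that a phase-shift beamformer steered over candidate angles recovers the direction of a 2 kHz tone, provided the signal is treated as a complex (analytic) signal so the phase shifts are well defined:

```python
import cmath, math

f, c, d, M = 2000.0, 343.0, 0.04, 4   # tone (Hz), sound speed, mic spacing, mic count
true_doa = math.radians(20.0)

def steering(theta):
    # Per-element phase factors for a plane wave arriving from angle theta.
    return [cmath.exp(-1j * 2 * math.pi * f * m * d * math.sin(theta) / c)
            for m in range(M)]

snapshot = steering(true_doa)          # analytic-signal snapshot across the array

def beam_power(theta):
    # Phase-shift (delay-and-sum in phase) beamformer output power.
    w = steering(theta)
    return abs(sum(wm.conjugate() * xm for wm, xm in zip(w, snapshot))) ** 2

est = max(range(-90, 91), key=lambda a: beam_power(math.radians(a)))
```

With a real microphone recording one would first band-pass around 2 kHz and form the analytic signal (e.g. via a Hilbert transform) before applying the phase weights; the spacing here is under half a wavelength, so there are no grating lobes.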
{ "domain": "dsp.stackexchange", "id": 4164, "tags": "phase, beamforming" }
global_planner finds a path but is not able to find a valid trajectory
Question: Hi, I am trying to use the global planner global_planner. It finds a path with either Dijkstra or A*, changing the different parameters, as this image shows. But then the robot stays still and (most of the time) I get this warning: [ WARN] [1390404456.350870633, 3038.563000000]: Invalid Trajectory 0.000000, 0.000000, -0.300000, cost: -1.000000 until a fatal error appears, after 6 seconds: [ERROR] [1390404456.403586383, 3038.611000000]: Aborting because a valid control could not be found. Even after executing all recovery behaviors Is there some parameter or remap missing in the global_planner wiki? Is there any example of the global_planner with launch and yaml files? Thanks in advance! Update I found out that global_planner publishes the poses from goal to origin whereas navfn publishes them from origin to goal. I would say this is the problem that makes the robot stay still. I copy a couple of results: origin (x≅0) and goal (x≅1) of a path in navfn: pose: position: x: -0.1 y: -0.05 ... pose: position: x: 0.998614728451 y: 0.0215734690428 origin (x≅0) and goal (x≅1) of a path in global_planner: pose: position: x: 1.0 y: 0.0 ... pose: position: x: -0.25 y: -0.05 Originally posted by martimorta on ROS Answers with karma: 843 on 2014-01-22 Post score: 1 Answer: I'm pushing a patch, but I've just made it work by reversing the vector of the plan. https://github.com/ros-planning/navigation/blob/hydro-devel/global_planner/src/planner_core.cpp#L329 Originally posted by martimorta with karma: 843 on 2014-01-23 This answer was ACCEPTED on the original site Post score: 1
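The accepted workaround is essentially "reverse the pose vector"; the actual patch is C++ in planner_core.cpp, but a defensive version of the same check can be sketched in Python (my own illustration, using the poses reported in the question):

```python
import math

def ensure_start_to_goal(plan, robot_xy):
    """Reverse a plan that was published goal->start.

    plan is a list of (x, y) poses; robot_xy is the robot's current position.
    If the first pose is farther from the robot than the last, the plan is
    assumed to be in goal->start order and is reversed.
    """
    dist = lambda p: math.hypot(p[0] - robot_xy[0], p[1] - robot_xy[1])
    return plan[::-1] if dist(plan[0]) > dist(plan[-1]) else plan

# Endpoint poses from the question: global_planner published the goal first.
plan = [(1.0, 0.0), (-0.25, -0.05)]
fixed = ensure_start_to_goal(plan, robot_xy=(-0.1, -0.05))
```

After the fix, the local planner receives the path in the origin-to-goal order that navfn produces, so it can find a valid trajectory along it.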
{ "domain": "robotics.stackexchange", "id": 16729, "tags": "ros, navigation, global-planner" }
Given a thermodynamic ensemble and given a macrostate of this ensemble, is there an associated probability distribution on the microstates?
Question: When I learned statistical physics, and ensembles and such things, the term "macrostate" was introduced as some vague thing that "described the state of the system, and is defined completely by a few variables". I couldn't wrap my head around what this was supposed to mean. Upon learning that one can define a probability distribution $\rho$ over set of all microstates $\Gamma$ of a system, and that an equilibrium state is a state where the entropy \begin{align} S = -\int_{\Gamma} d\Omega~\rho(\Gamma) \ln \rho (\Gamma) \end{align} is at a maximum (under appropriate boundary conditions, like a mean energy value, or a mean particle number), I simply identified a macrostate with a certain probability distribution described by the given variables. This identification worked very well for me. In equilibrium state (and when the ensemble is fixed), the distribution is defined by few variables, and it fully describes the state of the system, in the sense that you can calculate any statistical observable you'd like to see. I didn't have any problem with handling it like that so far - but however I don't know if it's actually true: So the question is at first: Can I always map a macrostate to a probability distribution over a set of the microstates of the system? If so - Is this map injective? If so, and I constrain the space of probability distributions to the ones where the equilibrium condition holds, is this map bijective? My own answers to those questions would be "yes", but I don't know if I'm missing something. For example, the proposed map only holds if you specify an ensemble (microcanonic, canonic, grand-canonical and so on ...). Answer: Can I always map a macrostate to a probability distribution over a set of the microstates of the system? Basically, yes. The logic is that the actual state of the system is some microstate. 
We don't know (and don't really care about) what specific microstate the system is in, but we can (at least in principle) compute the probability distribution over microstates. Given the probability distribution over microstates, we can evaluate expectation values of observables like temperature, pressure, and volume. The term macrostate refers to a "complete" set of these expectation values. The word "complete" here is a little wishy washy, but basically means the full set of stable (not varying on microscopic timescales) quantities we can measure from a macroscopic system in equilibrium. The probability distribution is given by the one which maximizes the entropy, subject to various constraints (eg: fixed total energy and number of particles, or fixed number of particles but varying energy, or varying energy and particles). The combination of a given set of constraints and the corresponding probability distribution is a thermodynamic ensemble. For example, the canonical ensemble refers to a probability distribution over microstates that maximizes the entropy given that the energy can flow in and out of the system, but with a fixed number of particles. If so - Is this map injective? I think I can rephrase this question as: "if I have two systems which have the same set of possible microstates but are observed to have different macroscopic observables, do they have different probability distributions over microstates?" In which case the answer is yes. A simple example would be to consider the ideal gas at two different temperatures. Since the temperature is proportional to the average kinetic energy, the distribution of kinetic energies must be different. In general, the macroscopic observables are computed as averages over the microstates with respect to some probability distribution, and so if the average quantities come out different and the space of microstates is the same, then the distribution must be different.
If so, and I constrain the space of probability distributions to the ones where the equilibrium condition holds, is this map bijective? If I understand the question, I think your phrasing is a bit too vague for a sharp answer, unfortunately. Mathematically you can define probability distributions over all kinds of bizarre spaces without a clear physical interpretation, you can compute the entropy for these distributions, and you can even maximize the entropy over a class of weird and unphysical distributions. Given that, it's certainly not the case that any probability distribution that can be described as a maximum entropy distribution from a mathematical point of view, represents a physical system. But, I don't know exactly what you mean by "constrain[ing] the space of probability distributions to the ones where the equilibrium condition holds." You could make this statement vacuously true if you define this phrase to mean that you only want to consider probability distributions which do have a physical interpretation, in which case your question is logically something like, "Assuming A, is A true?" So, to summarize, I'd say the answers to your questions are: yes, yes, and no.
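The "maximum entropy subject to constraints" map from macrostate to distribution can be seen concretely in a toy model (my own illustration, not from the answer): for three equally spaced energy levels with the mean energy fixed, scanning the one remaining free parameter shows the entropy is maximised by the Boltzmann-like distribution, which for equal level spacing satisfies p1^2 = p0*p2:

```python
import math

E = [0.0, 1.0, 2.0]   # three equally spaced energy levels
U = 0.8               # fixed mean energy: the "macrostate" constraint

def dist(p2):
    # Normalisation and the mean-energy constraint leave one free parameter p2.
    p1 = U - 2 * p2
    p0 = 1 - p1 - p2
    return (p0, p1, p2)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Scan the feasible range of p2 and keep the entropy maximiser.
candidates = [dist(t / 10000) for t in range(1, 4000)]
feasible = [p for p in candidates if all(pi > 0 for pi in p)]
p0, p1, p2 = max(feasible, key=entropy)
```

Up to the grid resolution, the maximiser is geometric in the level index, i.e. p_i proportional to exp(-beta*E_i) for some beta fixed by U; that distribution, together with the constraint values, is exactly what the macrostate picks out.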
{ "domain": "physics.stackexchange", "id": 77769, "tags": "thermodynamics, statistical-mechanics, probability, quantum-statistics" }
Theoretical explanations for practical success of SAT solvers?
Question: What theoretical explanations are there for the practical success of SAT solvers, and can someone give a "wikipedia-style" overview and explanation tying them all together? By analogy, the smoothed analysis (arXiv version) for the simplex algorithm does a great job explaining why it works so well in practice, despite the fact that it takes exponential time in the worst case and is NP-mighty (arXiv version). I have heard a little bit about things like backdoors, structure of the clause graph, and phase transitions, but (1) I don't see how these all fit together to give the larger picture (if they do), and (2) I don't know whether these really explain why SAT solvers work so well, for example, on industrial instances. Also, when it comes to things like structure of the clause graph: why is it that current solvers are able to take advantage of certain clause graph structures? I only find the results about phase transitions partially satisfying in this regard, at least in my currently limited understanding. The phase transition literature is about instances of random k-SAT, but does that really explain anything about real-world instances? I don't expect real-world instances of SAT to look like random instances; should I? Is there a reason to think phase transitions tell us something, even intuitively, about real-world instances even if they don't look like random instances? Related questions that help, but don't completely answer my question, particularly the request for tying things together into a coherent picture: Why is there an enormous difference between SAT solvers? Which SAT problems are easy? What's the correlation between treewidth and instance hardness from random 3SAT? Answer: I am assuming that you are referring to CDCL SAT solvers on benchmark data sets like those used in the SAT Competition. These programs are based on many heuristics and lots of optimization.
There were some very good introductions to how they work at the Theoretical Foundations of Applied SAT Solving workshop at Banff in 2014 (videos). These algorithms are based on the DPLL backtracking algorithm, which tries to find a satisfying assignment by setting values to variables and backtracks when it finds a conflict. People have looked at how much impact these heuristics have. E.g. see Hadi Katebi, Karem A. Sakallah, and Joao P. Marques-Silva, "Empirical Study of the Anatomy of Modern Sat Solvers", 2011 It seems that the efficiency of these SAT solvers on the benchmarks comes mainly from two heuristics (and their variants): VSIDS: a heuristic for selecting which variable to branch on next. CDCL: conflict-driven clause learning, a heuristic which learns a new clause from a conflict. It is well-known that DPLL proofs correspond to proofs in resolution. Without CDCL the only resolution proofs we can get are tree resolution proofs, which are much weaker than general resolution proofs. There are results that show that with CDCL we can get any general resolution proof. However there are caveats: they need many artificial restarts, artificial branching, and/or particular preprocessing, so it is not clear how close these are to what these programs do in practice. See e.g. the following paper for more details: Paul Beame and Ashish Sabharwal, "Non-Restarting SAT Solvers With Simple Preprocessing Can Efficiently Simulate Resolution", 2014 CDCL is essentially cutting branches from the search space. There are various ways of deriving a new learnt clause from a conflict. Ideally we would add a set of minimal clauses which implied the conflict, but in practice that can be large and can be expensive to compute. Top SAT solvers often delete learned clauses regularly, and that helps in practice. The other heuristic, VSIDS, essentially maintains a score for each variable.
Every time there is a conflict, all scores are adjusted by multiplying them with a value $\alpha < 1$ and adding a constant to those which were "involved" in the conflict. To see what this means, think about the sequence $F(v,i)$ which is 1 if variable $v$ was "involved" in the $i$th conflict. Let $0<\alpha<1$ be a fixed constant. The score of variable $v$ at time $n$ is then: $$\sum_{i<n} F(v,i)\alpha^{(n-i)}$$ Intuitively one can say that this tries to emphasize variables which were consistently involved in recent conflicts. You can also think of it as a simplistic but extremely cheap way of predicting which variables will be involved in the next conflict. So VSIDS branches first on those variables. One can claim that the algorithm is essentially a fail-fast algorithm: find conflicts fast. Fast is related to a smaller number of variables set, which means blocking large subtrees of the search tree. But this is mostly intuition; afaik no one has formalized it very carefully to test it on SAT data sets. Running a SAT solver on one of these data sets is not cheap, let alone comparing it with the optimal decisions (smallest extension of the current assignment to variables which would violate one of the clauses). VSIDS also depends on which variables we bump at each conflict; there are various ways to define when a variable is involved in a conflict. There are results that show particular implementations of these ideas correspond to a time-weighted centrality of vertices in dynamic graphs. There are also suggestions that, excluding adversarial instances like those based on NP-hard problems and crypto primitives and random instances (which CDCL SAT solvers are not good at), the rest of the instances come from very well structured things like software and hardware verification, and somehow these structures are exploited by CDCL SAT solvers (lots of ideas have been mentioned like backdoors, frozen variables, etc.)
but afaik they are mostly ideas and do not have strong theoretical or experimental evidence to back them up. I think one would have to first rigorously define the property, show that the instances on which these algorithms work well have the property, and then show that these algorithms exploit those properties. What happens in practice is that people are mostly interested in faster algorithms, and if someone comes up with an algorithm that was somehow inspired by one of these ideas and that algorithm beats other algorithms, they go with the claim. Some people keep insisting that the clause ratio and thresholds are the only game in town. That is definitely false, as anyone who is slightly familiar with how industrial SAT solvers work or has any knowledge of proof complexity would know. There are lots of things that make a SAT solver work well or not on an instance in practice, and the clause ratio is just one of the things that might be involved. I think the following survey is a good starting point to learn about the connections between proof complexity and SAT solvers and for perspective: Jakob Nordstrom, "On the Interplay Between Proof Complexity and SAT Solving", 2015 Interestingly, even the threshold phenomenon is more complicated than most people think: Moshe Vardi stated in his talk "Phase transitions and computational complexity" that the median running time of GRASP remains exponential for random 3SAT formulas after the threshold, but that the exponent decreases (afaik, it is not clear how fast it decreases). Why are we studying SAT solvers (as complexity theorists)? I think the answer is the same as for other algorithms: 1. compare them, 2. find their limitations, 3. design better ones, 4. answer fundamental questions of complexity theory. When modelling a heuristic we often replace the heuristic with nondeterminism. The question then becomes: is it a "fair" replacement? And here by fair I mean how close the model is to helping us answer the question above.
When we model a SAT solver as a proof system we are partly showing its limitations, because the algorithm will be inefficient for statements which have lower bounds in the proof system. But there still is a gap between what the algorithm actually finds and the optimal proof in the proof system. So we need to show the reverse as well, i.e. that the algorithm can find proofs that are as good as those in the proof system. We are not close to answering that question, but the amount of heuristic that is replaced by nondeterminism defines how close the model is to the proof system. I don't expect that we completely drop the replacement of heuristics with nondeterminism, otherwise we would get automatizability results which have consequences on open problems in crypto, etc. But the more nondeterminism the model has, the less convincing it is in explaining the behaviour of the SAT solver. So the question when looking at a model becomes: how much does the model help explain why SAT solver A is better than SAT solver B? How helpful are they in developing better SAT solvers? Does the SAT solver find proofs in practice that are close to optimal proofs in the model? ... We also need to model the practical instances. The intuition that CDCL SAT solvers "exploit the structure of practical instances" (whatever that structure is) is generally accepted, I think. The real question is to give a convincing explanation of what that means and demonstrate that it is indeed true. See also Jakob Nordstrom's own answer for more recent developments.
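The VSIDS scoring rule described above can be sketched directly: decay every score by $\alpha$ at each conflict and bump the involved variables, which reproduces the closed-form exponentially weighted sum (my own illustration; real solvers instead grow the bump increment by $1/\alpha$ so they never touch every score):

```python
import math, random

random.seed(1)
ALPHA = 0.95
N_CONFLICTS = 200

# F[i] is 1 if the variable was "involved" in conflict i, else 0.
F = [random.randint(0, 1) for _ in range(N_CONFLICTS)]

# Incremental update: at each conflict, s <- (s + F(i)) * alpha,
# i.e. bump the involved variable, then decay everything.
score = 0.0
for fi in F:
    score = (score + fi) * ALPHA

# Closed form from the answer: sum over i < n of F(i) * alpha**(n - i).
closed = sum(F[i] * ALPHA ** (N_CONFLICTS - i) for i in range(N_CONFLICTS))
```

The equivalence follows by induction on the number of conflicts, and it is this exponential forgetting that makes VSIDS a cheap predictor of which variables will appear in the next conflict.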
{ "domain": "cstheory.stackexchange", "id": 5302, "tags": "cc.complexity-theory, ds.algorithms, sat, big-picture, heuristics" }
Security issues in a SQL query and possible function?
Question: By adding more and more features to my websites, I am now up to 7 different hard coded queries. I have read a lot about SQL injections and possible flaws and security issues, and wanted to make sure that it was secure enough. This is one of my regular UPDATE queries : Using thisConnection As New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=""Striped Source""") Using thisCommand As OleDbCommand = thisConnection.CreateCommand ' Open connection object thisConnection.Open() ' Initialize SQL UPDATE command to update the desired data With thisCommand .CommandText = "UPDATE Login SET ClientName = @ClientName, Fonction = @Fonction, CompanyName = @CompanyName, Address = @Address, Country = @Country, " & _ "[Phone Number] = @Phone, [Fax Number] = @Fax, [I prefer to be contacted] = @Contacted " & _ "WHERE Username = @Username" With .Parameters .AddWithValue("@ClientName", VB_Name) .AddWithValue("@Fonction", VB_Fonction) .AddWithValue("@CompanyName", VB_Company) .AddWithValue("@Address", VB_Address) .AddWithValue("@Country", VB_Country) .AddWithValue("@Phone", VB_Phone) .AddWithValue("@Fax", VB_Fax) .AddWithValue("@Contacted", VB_Preference) .AddWithValue("@Username", LBL_Welcome.Text) End With End With thisCommand.ExecuteNonQuery() thisConnection.Close() End Using End Using And this is one of my non-regular double SELECT queries (Lengthy) : Using thisConnection As New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=""Striped source""") Using thisCommand As OleDbCommand = thisConnection.CreateCommand thisConnection.Open() 'This query is to get the information thisCommand.CommandText = "SELECT [Minimum Length] " & _ "FROM Stats " & _ "WHERE ([N° Cylinder] = @cylinder) AND ([N° Section] = 0)" thisCommand.Parameters.AddWithValue("@cylinder", cylinder) Dim thisReader As OleDbDataReader = thisCommand.ExecuteReader() If (thisReader.Read()) Then cylinderLengthMin = thisReader.GetValue(0) End If thisReader.Close() 'This query is to get the
number of stages of the cylinder thisCommand.CommandText = "SELECT Stroke " & _ "FROM (Stats) " & _ "WHERE ([N° Cylinder] = @cylinder) " & _ "ORDER BY [N° Section]" thisCommand.Parameters.AddWithValue("@cylinder", cylinder) thisReader = thisCommand.ExecuteReader() count = -1 Dim condition As Boolean = True Dim temp As String 'This loop gets the strokes of the stages While (thisReader.Read()) And condition temp = thisReader.GetValue(0).ToString() If thisReader.GetValue(0).ToString() = "" Then condition = False Else count += 1 End If End While countMax = count 'This is used later for array formatting thisReader.Close() If countMax = -1 Then Throw New SQLException("There has been an error with the database query.") End If 'This query is to gather the data. It is stored in Imperial in the DataBase. ReDim cylinderOutsideDiameter(countMax) ReDim cylinderInsideDiameter(countMax) ReDim cylinderSectionStroke(countMax) ReDim cylinderSectionRetracted(countMax) ReDim cylinderIDStopRing(countMax) thisCommand.CommandText = "SELECT Stroke, [Retracted Length], [Inside Diameter], [Outside Diameter], [Non-contact Diameter]" & _ "FROM (Stats) " & _ "WHERE ([N° Cylinder] = @cylinder) " & _ "ORDER BY [N° Section]" thisCommand.Parameters.AddWithValue("@cylinder", cylinder) thisReader = thisCommand.ExecuteReader() count = 0 While (thisReader.Read()) AndAlso count <= countMax 'Scans each row and adds it to respective string/double array For i = 0 To 4 If thisReader.GetValue(i).ToString() = "" Then Throw New SQLException("There has been an error with the database query.") End If Next cylinderSectionStroke(count) = thisReader.GetValue(0) cylinderSectionRetracted(count) = thisReader.GetValue(1) cylinderInsideDiameter(count) = thisReader.GetValue(2) cylinderOutsideDiameter(count) = thisReader.GetValue(3) cylinderIDStopRing(count) = thisReader.GetValue(4) count += 1 End While thisReader.Close() thisConnection.Close() End Using End Using Questions Are my SQL queries secure? 
Should I make a function for the queries (as I am up to 7 queries now and I should never repeat the same code multiple times)? If yes, how? (I'm not asking for a code dump, just a descriptive explanation of how I could do it would be awesome.) If there is anything I could improve in the code, don't hesitate. I would love to have feedback on how to improve it. Answer: Make your code DRY (don't repeat yourself): extract the data-retrieval code into a helper. You are already using parameterized queries, so your SQL is safe against SQL injection. One caveat: the helper returns an open reader, so it must not dispose the connection itself; passing CommandBehavior.CloseConnection ties the connection's lifetime to the reader, so wrapping the reader in Using at the call site closes the connection automatically. Public Shared Function ExecuteReader(ByVal commandtext As String, ByVal parameters As Dictionary(Of String, Object)) As IDataReader Dim thisConnection As New OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=""Striped source""") Dim thisCommand As OleDbCommand = thisConnection.CreateCommand thisConnection.Open() thisCommand.CommandText = commandtext For Each parameter In parameters thisCommand.Parameters.AddWithValue(parameter.Key, parameter.Value) Next 'CommandBehavior.CloseConnection closes the connection when the reader is closed Return thisCommand.ExecuteReader(CommandBehavior.CloseConnection) End Function Public Function HowToUseExecuteReader() Dim parameters As New Dictionary(Of String, Object) parameters.Add("@cylinder", cylinder) Dim query As String = "SELECT [Minimum Length] FROM Stats WHERE ([N° Cylinder] = @cylinder) AND ([N° Section] = 0)" Dim dataReader As IDataReader = ExecuteReader(query, parameters) Using dataReader While dataReader.Read Dim cylinderLengthMin = dataReader.GetValue(0) End While End Using End Function
{ "domain": "codereview.stackexchange", "id": 9071, "tags": "security, vb.net, ms-access" }
Will the airflow inside a vacuum hose keep a sphere centered inside of it?
Question: Suppose you were to fasten a string to a ping pong ball, then fasten the string to a stationary object, and then use a vacuum cleaner with a strong vacuum to suck the ball partially into the vacuum hose. Due to the strong vacuum's pull on the ball, the string should be very taut and the ball should also be centered inside the vacuum hose. Based on this setup and referring to the two drawings below, if you were to then push down on the string with your finger at the point indicated by the arrow in Figure A, would the tautness of the string move the ball from a centered position in the hose down until it comes to rest on the bottom of the vacuum hose? Or, would pushing down on the string at that point cause a bend in the string, as shown in Figure B, with the result being that the ball would be pulled back towards the opening of the vacuum hose with the ball remaining centered inside the hose due to the strong airflow flowing around it? The reason I'm asking this question is because I have been unsuccessful with keeping the string fastened to the ping pong ball, even with super glue, so I am unable to see what the ball will do. Answer: No - the sphere would likely stick to one wall, or bounce between them. A sphere is not a very aerodynamic shape, having both a relatively blunt front and a sudden cut-off at the back. This causes a lot of turbulence, such as the familiar vortex shedding seen when air flows around a fixed sphere in free space - in your hose, the vortex forming on one side of the sphere will push it towards the wall, whereupon the flow will change as a result of the wall being there. The key thing to note is that the highly turbulent flow is not a stable equilibrium state - think of trying to balance a pencil vertically on your finger. 
If you replaced the sphere with a different shape (imagine a Nerf-football-type situation) that is designed to create a stable flow, then you would be able to get the behaviour that you're after; more like keeping the pencil vertical by holding the top and dangling it downwards.
{ "domain": "engineering.stackexchange", "id": 2505, "tags": "fluid-mechanics, airflow, aerodynamics" }
Chemical potential, Boltzmann
Question: The electrochemical potential of an ion i in an electrolytic solvent is given by: \begin{align}\mu_i(\vec r) &= \mu_i^0 + RT \ln(a_i(\vec r))\\& = \mu_i^0 + RT \ln(\gamma_i(\vec r) c_i(\vec r)/c_0) \\&= \mu_i^0 + RT \ln(c_i(\vec r)/c_0) + RT \ln(\gamma_i(\vec r)) \\&= \mu_i^0 + RT \ln(c_i(\vec r)/c_0)+z_i e \phi(\vec r) \end{align} where $\mu_i^0$ is the chemical potential of the ion in an ideal solution with ionic concentration $c_0$, $\gamma$ is the activity coefficient, $\phi$ the electrostatic potential of the ions in solution (given by the Poisson equation). Is this expression correct? The authors in this paper (I. Rubinstein, Electro-Diffusion of Ions (1990) (SIAM link)) claim the following: ... the entire system, including the boundary layers. This implies in turn the establishment of a certain degree of smoothness on the microscopic scale $r_d$ ($\sqrt{\varepsilon}$ dimensionless terms) of some function of concentration and the electric potential, termed the electrochemical potential and defined as $$\tilde{\mu}_i = RT \ln{\tilde{C}_i} + z_iF\tilde{\varphi} \tag{1.19a}$$ or in dimensionless terms $$\mu_i = \ln{C_i} + z_i\varphi . \label{eq:1.19b} \tag{1.19b}$$ Indeed, by $\eqref{eq:1.19b}$ $$j_i = -\alpha_iC_i\nabla\mu_i, \label{eq:1.19c} \tag{1.19c}$$ so that $j_i = O(1)$ implies that the variation $(\delta \mu)|_{\varepsilon^{1/2}}$ on the length $\sqrt{\varepsilon}$ is of the order $$(\delta \mu)|_{\varepsilon^{1/2}} = O\left(\varepsilon^{1/2}\right).$$ [...] To illustrate some of the notions introduced so far, let us consider the $C_i$ and $\varphi$ fields in an electrodialysis cell at equilibrium. For simplicity, let us limit our consideration to a $1,1$ valent electrolyte at bulk (feed) concentration $C_0$. Assume a constant fixed charge density $\tilde{N}(-\tilde{N})$ for the an- (cat-) ion membrane. Recall that equilibrium is a steady state without macroscopic fluxes. 
By $\eqref{eq:1.19c}$ this implies constancy of ionic electrochemical potentials in the system. Thus, $$\mu_i = \ln{C_i} + z_i\varphi = 0 \tag{1.23a}$$ or $$C_i = e^{-z_i\varphi} \label{eq:1.23b} \tag{1.23b}$$ (For the univalent electrolyte under consideration $M = 2$, $z_1 = 1$, $z_2 = -1$ and the dimensionless bulk concentration at $\varphi = 0$ is normalized to unity.) Equation $\eqref{eq:1.23b}$ is the equilibrium Boltzmann distribution in a potential field. Substitution of $\eqref{eq:1.23b}$ into (1.9c) yields the Poisson-Boltzmann ... When you assume equilibrium, the ionic flux $j_i \propto \nabla \mu_i$ should be zero. But what people do in this paper is set $$ RT \ln(c_i(\vec r)/c_0)+z_i e \phi(\vec r)=0 $$ and therefore we get: $$c_i(\vec r) = c_0 \exp(-z_i e/(RT) \phi(\vec r)),$$ which is the Boltzmann distribution of the ions according to the electrostatic potential. Does anyone understand this logic? Why can we set the additional term to zero in equilibrium? And why do we neglect $\mu_i^0$? Answer: You state Then you can assume equilibrium, thus the ionic flux $j_i\propto\nabla\mu$ should be zero So it seems to me that the authors you read are not defining $$ RT \ln(c_i(\mathbf r)/c_0)+z_i e \phi(\mathbf r)\equiv0 $$ but instead defining $$ \mu_i(\mathbf r)=\mu_i^0. $$ That is, the chemical potential doesn't change.
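A quick numerical sanity check of that reading (a sketch using illustrative molar constants $R$, $T$, $F$; the paper's dimensionless form absorbs these): plugging the Boltzmann profile back into the electrochemical potential makes $\mu_i$ identically zero, hence spatially constant, hence flux-free, for any potential profile $\phi(\vec r)$.

```python
import math

R, T, F = 8.314, 298.0, 96485.0   # J/(mol K), K, C/mol -- illustrative molar constants
c0, z = 1.0, 1                    # bulk concentration and valence

def mu(phi):
    """Electrochemical potential (relative to mu_i^0) with c_i given by Boltzmann."""
    c = c0 * math.exp(-z * F * phi / (R * T))   # Boltzmann distribution of the ions
    return R * T * math.log(c / c0) + z * F * phi

# mu vanishes identically, whatever the potential profile: constant, so no flux
print(all(abs(mu(phi)) < 1e-6 for phi in [-0.1, -0.02, 0.0, 0.03, 0.1]))   # True
```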
{ "domain": "physics.stackexchange", "id": 19580, "tags": "electrochemistry, chemical-potential" }
Make a doubly controlled gates from existing operations
Question: I'm trying to implement an modular adder mentioned here with qiskit. I have already built the $\Phi ADD$ gate. But in order to build a modular adder like in figure 5 in the paper, I need to build a doubly-controlled $\Phi ADD$ gate. Qiskit offers a method to transfer a circuit to a controlled gate, but now I need a doubly controlled gate. I know I can use something like this, but that need an additional qubit, which is undesired. I also know that I can use this method, but I don't know how to implement the square root of $\Phi ADD$ gate. Is there any other method to do this (creating a doubly-controlled gate from the $\Phi ADD$ gate I built without adding any additional qubits)? Answer: Here is an example of creating a doubly controlled version of the simplest circuit with one gate (Qiskit's $u1$ gate). from qiskit import * from qiskit.aqua.utils.controlled_circuit import get_controlled_circuit import numpy as np q_reg = QuantumRegister(3, 'q') qc_u1 = QuantumCircuit(q_reg) qc_cu1 = QuantumCircuit(q_reg) qc_ccu1 = QuantumCircuit(q_reg) qc_u1.u1(np.pi/2, q_reg[0]) qc_cu1 = get_controlled_circuit(qc_u1, q_reg[1]) qc_ccu1 = get_controlled_circuit(qc_cu1, q_reg[2]) print(qc_cu1.qasm()) print(qc_ccu1.qasm()) And yes, even for this simplest gate, the printed result looks horrible XD (a lot of gates), because it uses generic procedures to create the controlled version of the given circuit. Maybe optimizing the circuit at the end will reduce the gate number and make it a more readable circuit. Just an answer about the doubly controlled circuit, don't know much about the links and the problem that you have introduced.
{ "domain": "quantumcomputing.stackexchange", "id": 1335, "tags": "quantum-algorithms, qiskit" }
How are the two mating types, a and α, for S. cerevisiae pronounced?
Question: In reading about S. cerevisiae there are two mating types, one being the Latin letter a and the other being the Greek letter α. How are these two types pronounced such that they can be differentiated? In other words, when I pronounce the text I don't want to say 'a' for both of them; there should be a different word or phrase for each one so that they are known to be different. Is it simply that a is pronounced as a and α is pronounced as alpha? Answer: Correct. Haploid mating types in Saccharomyces cerevisiae are MATa (pronounce a as in day) and MATα (i.e. MAT-alpha). For convenience you could also refer to the corresponding alleles as MATa or MATalpha (see also here)
{ "domain": "biology.stackexchange", "id": 8732, "tags": "cell-biology" }
Potential $A_\mu$ in the Lie algebra of a group $G$ : question about the "proof"
Question: I have read this topic : Why is the gauge potential $A_{\mu}$ in the Lie algebra of the gauge group $G$? It explained why the potential is in the Lie algebra of the group. But there are things I still don't totally get in the global logic and I need your help with my "proofs" of it. What I understood is that we want to build a derivative that will ensure us to have the following property : $$ D_\mu (g \phi)=g D_\mu \phi $$ Where $g$ is in the group of symmetry of the theory. Because as soon as we have this, we will be able to easily construct gauge invariant Lagrangians. We know that : $$ \partial_\mu(g(x) \phi(x))=[\partial_\mu g(x)] \phi(x) + g(x) [\partial_\mu \phi(x)] $$ So, in practice, what we want to do is to change our $\partial_\mu \rightarrow D_\mu $ such that we cancel the first term of the rhs above. If I want to do this, I will then try with : $$ D_\mu=\partial_\mu + A_\mu $$ Where $A_\mu$ is an operator that acts on $\phi$ but I don't know it is in the Lie algebra at this point. Such that $ D_\mu (g \phi)=g D_\mu \phi $ Thus, what we need is : $$A_\mu(g \phi)=g A_\mu \phi - (\partial_\mu g) \phi \tag{1}$$ And here start the things I don't totally get. In practice we have to define two fields : $A$ and $A'$. But why couldn't we work with only one field $A$ that would follow $(1)$ ? Is it because, if we try to use the same $A$ for all $\phi$ linked by the group transformation, we find that $(1)$ is a too strong condition. Then we "relax" it by saying that when $\phi$ changes, $A$ changes also. And that is why we have a transformation law for $A$ ? Also, if I assume we have two different fields, then I would have : $$A'_\mu(g \phi)=g A_\mu \phi - (\partial_\mu g) \phi$$ $$A'_\mu g =g A_\mu - (\partial_\mu g)$$ $$A'_\mu =g A_\mu g^{-1} - (\partial_\mu g) g^{-1}$$ And I know it is not the good sign on the second part of the rhs of last line. So there is a mistake in my logic somewhere, but I don't know where. 
Remark : I know there is a close link with covariant derivative and differential geometry but I would like an answer really close to my formulation of the question. Else I think I would be lost. So if it is possible to avoid notions of differential geometry it would be very nice ! Answer: We don't "define two fields". When we make a gauge transformation, a priori all dynamical fields can transform under it. $A$ is a dynamical field, so a gauge transformation goes as $\phi \mapsto \phi' = g\phi,A\mapsto A'$, where the expression for $A'$ is as yet unknown. $\phi'$ and $A'$ are not "different operators", they are physically the same operators/fields as before, just gauge transformed. We then carry out the computation you did to determine what the transformation behaviour of the gauge field, $A'$, is. As you know, it turns out $A'$ is not simply $A$, but transforms non-trivially under a gauge transformation - note that setting $A' = A$ in your equations would simply give inconsistent equations that cannot be true for any $A$. In your computations, you aren't really wrong (except maybe for signs but I have no patience to chase these), you're just not doing the correct step to conclude where $A$ lives, which is: "Multiply by $1 = g g^{-1}$" Your last equation is: \begin{align} A'_\mu & = g A_\mu g^{-1} - (\partial_\mu g)g^{-1} = gA_\mu g^{-1} - g g^{-1}(\partial_\mu g) g^{-1} \\ & = g\left(A_\mu - g^{-1} (\partial_\mu g)\right)g^{-1}\end{align} and the following are facts: You can only add/subtract two things that live in the same vector space. $g^{-1}\partial_\mu g$ is an element of the Lie algebra, as you can see by writing $g(x) = \exp(\chi(x))$ for a Lie-algebra-valued function $\chi$. Conjugating a Lie algebra element by a Lie group element $g$, i.e. multiplying by $g$ from one side and $g^{-1}$ from the other, yields a Lie algebra element (this is the adjoint rep of the Lie group on its own algebra). 
Therefore, since $g^{-1}\partial_\mu g$ is algebra-valued, so is $A_\mu$, and therefore also $A'_\mu$.
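As for the sign worry raised in the question, here is a short check (in the same conventions as above): the two common forms of the inhomogeneous term differ only by the identity obtained from differentiating $g g^{-1} = 1$, $$ 0 = \partial_\mu(g g^{-1}) = (\partial_\mu g)\,g^{-1} + g\,\partial_\mu(g^{-1}) \quad\Longrightarrow\quad (\partial_\mu g)\,g^{-1} = -\,g\,\partial_\mu(g^{-1}), $$ so $$ A'_\mu = gA_\mu g^{-1} - (\partial_\mu g)\,g^{-1} = gA_\mu g^{-1} + g\,\partial_\mu(g^{-1}), $$ and the "wrong sign" derived in the question is just the other, equivalent way of writing the same transformation law.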
{ "domain": "physics.stackexchange", "id": 45285, "tags": "gauge-theory" }
what is difference between multilayer perceptron and multilayer neural network?
Question: When do we say that an artificial neural network is a multilayer perceptron? And when do we say that an artificial neural network is multilayer? Is the term perceptron related to the learning rule used to update the weights? Or is it related to the neuron units? Answer: A perceptron is always feedforward, that is, all the arrows are going in the direction of the output. Neural networks in general might have loops, and if so, are often called recurrent networks. A recurrent network is much harder to train than a feedforward network. In addition, it is assumed that in a perceptron, all the arrows are going from layer $i$ to layer $i+1$, and it is also usual (at least to start with) that all the arcs from layer $i$ to $i+1$ are present. Finally, having multiple layers means more than two layers, that is, you have hidden layers. A perceptron is a network with two layers, one input and one output. A multilayered network means that you have at least one hidden layer (we call all the layers between the input and output layers hidden).
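To make the hidden-layer distinction concrete, here is a minimal forward-pass sketch (layer sizes, weights, and activations are arbitrary choices, not part of any standard definition):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))               # batch of 4 inputs with 3 features each

# Perceptron: two layers (input, output), no hidden layer
W, b = rng.normal(size=(3, 2)), np.zeros(2)
y_perceptron = np.heaviside(x @ W + b, 0)

# Multilayer perceptron: input -> hidden (nonlinear) -> output
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # hidden layer of 5 units
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)
hidden = np.tanh(x @ W1 + b1)                   # hidden activations
y_mlp = np.tanh(hidden @ W2 + b2)

print(y_perceptron.shape, y_mlp.shape)   # (4, 2) (4, 2)
```

Both maps are feedforward (arrows only point towards the output); only the second has a hidden layer between input and output.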
{ "domain": "cs.stackexchange", "id": 6173, "tags": "neural-networks, perceptron" }
Sort files in a given folders and provide as a list
Question: I have two folders query and subject. The following function sorts alphabetically and provides the query and subject as a separate list. The file names are mixed of numbers and I found the sort function works perfectly well. Any comments to improve? import os import re subject_path = "/Users/catuf/Desktop/subject_fastafiles/" query_path = "/Users/catuf/Desktop/query_fastafiles" def sorted_nicely(l): """ Sort the given iterable in the way that humans expect. https://blog.codinghorror.com/sorting-for-humans-natural-sort-order/ """ convert = lambda text: int(text) if text.isdigit() else text alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ] return sorted(l, key = alphanum_key) def subject_list_fastafiles(): subject_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(subject_path) if os.path.isfile(os.path.join(subject_path, fastafile))]) return subject_fastafiles def query_list_fastafiles(): query_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(query_path) if os.path.isfile(os.path.join(query_path, fastafile))]) return query_fastafiles def filter_files_ending_with_one(sorted_files): """ The function filters the files end with 1 """ files_end_with_one = [name for name in subject_fastafiles if name[-1].isdigit() and not name[-2].isdigit() == 1] return files_end_with_one subject_fastafiles = subject_list_fastafiles() query_fastafiles = query_list_fastafiles() subject_files_ending_with_one = filter_files_ending_with_one(subject_fastafiles) Answer: Docstrings You should include a docstring at the beginning of every function, class, and module you write. This will allow documentation to identify what your code is supposed to do. This also helps other readers understand how your code works. I see that you already have a couple for your functions, but stay consistent. Parameter Names Parameter names should be descriptive enough to be able to tell what should be passed. 
While l might be obvious to some programmers as an iterable, to others it might not. Since you're passing a list, renaming it to list_ (to avoid using reserved word list) makes it more obvious what you're passing, and accepting. Constant Variables When you have a constant in your program, it should be UPPER_CASE to identify it as such. Code Reduction You want as little code as possible in your program. So, instead of: def subject_list_fastafiles(): """ Method Docstring """ subject_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(subject_path) if os.path.isfile(os.path.join(subject_path, fastafile))]) return subject_fastafiles def query_list_fastafiles(): """ Method Docstring """ query_fastafiles = sorted_nicely([fastafile for fastafile in os.listdir(query_path) if os.path.isfile(os.path.join(query_path, fastafile))]) return query_fastafiles def filter_files_ending_with_one(sorted_files): """ Method Docstring """ files_end_with_one = [name for name in subject_fastafiles if name[-1].isdigit() and not name[-2].isdigit() == 1] return files_end_with_one You can simply return the function call, instead of assigning it to a variable and returning the variable, like so: def subject_list_fastafiles(): """ Method Docstring """ return sorted_nicely([fastafile for fastafile in os.listdir(SUBJECT_PATH) if os.path.isfile(os.path.join(SUBJECT_PATH, fastafile))]) def query_list_fastafiles(): """ Method Docstring """ return sorted_nicely([fastafile for fastafile in os.listdir(QUERY_PATH) if os.path.isfile(os.path.join(QUERY_PATH, fastafile))]) def filter_files_ending_with_one(): """ The function filters the files end with 1 """ return [name for name in SUBJECT_FASTAFILES if name[-1].isdigit() and not name[-2].isdigit() == 1] Main Guard This is an excerpt from this fabulous StackOverflow answer. When your script is run by passing it as a command to the Python interpreter, python myscript.py all of the code that is at indentation level 0 gets executed. 
Functions and classes that are defined are, well, defined, but none of their code gets run. Unlike other languages, there's no main() function that gets run automatically - the main() function is implicitly all the code at the top level. In this case, the top-level code is an if block. __name__ is a built-in variable which evaluates to the name of the current module. However, if a module is being run directly (as in myscript.py above), then __name__ instead is set to the string "__main__". Thus, you can test whether your script is being run directly or being imported by something else by testing if __name__ == "__main__": ... If your script is being imported into another module, its various function and class definitions will be imported and its top-level code will be executed, but the code in the then-body of the if clause above won't get run as the condition is not met. Updated Code """ Module Docstring (A description of your program goes here) """ import os import re SUBJECT_PATH = "/Users/catuf/Desktop/subject_fastafiles/" QUERY_PATH = "/Users/catuf/Desktop/query_fastafiles" def sorted_nicely(list_): """ Sort the given iterable in the way that humans expect. 
https://blog.codinghorror.com/sorting-for-humans-natural-sort-order/ """ convert = lambda text: int(text) if text.isdigit() else text alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)] return sorted(list_, key=alphanum_key) def subject_list_fastafiles(): """ Method Docstring """ return sorted_nicely([fastafile for fastafile in os.listdir(SUBJECT_PATH) if os.path.isfile(os.path.join(SUBJECT_PATH, fastafile))]) def query_list_fastafiles(): """ Method Docstring """ return sorted_nicely([fastafile for fastafile in os.listdir(QUERY_PATH) if os.path.isfile(os.path.join(QUERY_PATH, fastafile))]) def filter_files_ending_with_one(): """ The function filters the files end with 1 """ return [name for name in SUBJECT_FASTAFILES if name[-1].isdigit() and not name[-2].isdigit() == 1] if __name__ == '__main__': SUBJECT_FASTAFILES = subject_list_fastafiles() QUERY_FASTAFILES = query_list_fastafiles() SUBJECT_FILES_ENDING_WITH_ONE = filter_files_ending_with_one()
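For anyone wanting to see the natural-sort behaviour in isolation, here is a quick demonstration with made-up file names (the function body is the same as in the code above):

```python
import re

def sorted_nicely(list_):
    """Sort the given iterable in the way that humans expect."""
    convert = lambda text: int(text) if text.isdigit() else text
    alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
    return sorted(list_, key=alphanum_key)

files = ["seq10.fasta", "seq2.fasta", "seq1.fasta"]
print(sorted_nicely(files))   # ['seq1.fasta', 'seq2.fasta', 'seq10.fasta']
print(sorted(files))          # plain sort: ['seq1.fasta', 'seq10.fasta', 'seq2.fasta']
```

The numeric runs are compared as integers, so `seq2` correctly precedes `seq10`, which a plain lexicographic sort gets wrong.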
{ "domain": "codereview.stackexchange", "id": 35972, "tags": "python, python-3.x, regex" }
Velocity change of objects
Question: Is it possible for a small object (small mass, let's say a bullet) to hit a large object (big mass, let's say a rock) and still move forward (or stop) instead of being reflected (let's say the objects don't crush and the collision takes place in one dimension)? Answer: Yeah, definitely. One example is an inelastic collision, where both masses will have the same velocity after colliding. In this case, let's say a bullet of mass $m$ and speed $v_0$ hits a stationary rock of mass $M$ and they stick together and move with a final speed $v$. Intuitively, you can already tell that their velocity must be in the same direction as the bullet's initial velocity. But let's make it explicit: from conservation of momentum, we have that $$mv_0=Mv+mv$$ $$mv_0=(M+m)v$$ Since mass must always be positive, $v$ must have the same sign as $v_0$, which means that the bullet moves in the same direction as before (it keeps moving forward, although at a slower speed) and is not 'reflected' by the rock. EDIT (to respond to the additional question): Under what conditions does this happen? Essentially, the bullet would have to stick 'into' the rock.
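A quick numerical illustration of the momentum-conservation result above (the masses and speed are made up):

```python
def final_velocity(m, v0, M):
    """Perfectly inelastic collision: bullet (mass m, speed v0) embeds in a rock (mass M, at rest)."""
    return m * v0 / (M + m)

v = final_velocity(m=0.01, v0=400.0, M=50.0)   # 10 g bullet at 400 m/s, 50 kg rock
print(v > 0, v < 400.0)                        # True True: same direction, just much slower
print(abs(0.01 * 400.0 - (50.0 + 0.01) * v) < 1e-9)   # True: momentum is conserved
```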
{ "domain": "physics.stackexchange", "id": 14693, "tags": "newtonian-mechanics, mass, momentum, conservation-laws, collision" }
Key and value mapping from one object to another
Question: I've been trying to figure out a clean way of managing mappings between two objects. In the case of this example, it's two hashes. This spec should illustrate the problem at hand: describe 'key mapping' do let(:have) { { data1: 'foo', data2: 'bar', data3: 'baz' } } let(:want) { { LINE1: 'foo', LINE2: 'bar', LINE3: 'baz' } } let(:result) { {} } after(:each) do expect(result).to eq want end it 'by assigning manually' do result[:LINE1] = have[:data1] result[:LINE2] = have[:data2] result[:LINE3] = have[:data3] end it 'by reading keys from a hash' do mappings = { data1: :LINE1, data2: :LINE2, data3: :LINE3 } mappings.each do |k, v| result[v] = have[k] end end end The notable thing here is that the source and destination can be mapped by known keys and unlike in the example there is no numeric correlation between the source and destination (the example lists numbered keys just to make it easier to read). The number of keys could be rather high, so the latter example will make the code more readable, but is there an even better way of handling this? Answer: If there's no "logical" correlation between the key you have and the key you want (i.e. no consistent way to rewrite them) then you basically have to use a lookup of some sort to do the translation. So your "reading keys from a hash" strategy is your best bet, but it can be improved a little. You can be more functional and just use map (note that in is a reserved word in Ruby, so the block arguments need other names): lookup = { data1: :LINE1, data2: :LINE2, data3: :LINE3 } result = Hash[ lookup.map { |from, to| [to, have[from]] } ] Or you can just go through the ones in the input hash, and translate the ones that are actually there, discarding ones without a translation: result = Hash[ have.map { |k, v| [lookup[k], v] if lookup[k] }.compact ] And for either of those, if you ever need to translate the other way, you can just use lookup.invert to flip the keys/values around. Lastly, for your spec, don't put an expectation in an after(:each) block. Put the expectation in the spec itself.
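The lookup-translation idea carries over directly to other languages; for comparison, here is a sketch of both directions in Python (hash contents taken from the spec above):

```python
have = {"data1": "foo", "data2": "bar", "data3": "baz"}
lookup = {"data1": "LINE1", "data2": "LINE2", "data3": "LINE3"}

# translate keys, silently dropping any without a known mapping
result = {lookup[k]: v for k, v in have.items() if k in lookup}
print(result)   # {'LINE1': 'foo', 'LINE2': 'bar', 'LINE3': 'baz'}

# the reverse translation, analogous to Ruby's lookup.invert
inverse = {v: k for k, v in lookup.items()}
print(inverse["LINE1"])   # data1
```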
{ "domain": "codereview.stackexchange", "id": 7380, "tags": "ruby" }
What form would a "real solution" for the Schrödinger equation of a free particle take?
Question: The usual solution taught in textbooks for free particles (or particles inside constant potentials) is the plane wave solution, $\psi = A \exp (i \vec{k}\cdot \vec{r}-i\omega t)$. However this is not a normalizable solution and therefore it should not represent physical states. Then, what form would a free particle actually take? Answer: If the underlying Hilbert space of your system is $L^2(\mathbb R)$, then a generic state takes the form $$\psi(x) = \frac{1}{\sqrt{2\pi}} \int\mathrm dk \ A(k) e^{ikx}$$ for some square-integrable function $A$, which loosely defines how "much" of the non-normalizable state $e^{ikx}$ is present in $\psi$. If $\psi$ is the initial state of your system, then the state at time $t$ is $$\Psi_t(x) = \frac{1}{\sqrt{2\pi}} \int \mathrm dk \ A(k) e^{ikx} e^{-i\hbar k^2 t/2m}$$ In a more general context, if your Hamiltonian has non-normalizable eigenstates $\phi_k(x)$ which satisfy $$\int \mathrm dx \ \overline{\phi_k(x)} \phi_q(x) = \delta(k-q)$$ and have generalized eigenvalues $E_k$, then a generic state of the system can be written as $$\psi(x) = \int \mathrm dk \ A(k) \phi_k(x)$$ and, if this state is allowed to evolve for a time $t$, we would have $$\Psi_t(x) = \int \mathrm dk \ A(k) \phi_k(x) e^{-iE_k t/\hbar}$$ In the free particle case, we have that $$\phi_k(x) = \frac{1}{\sqrt{2\pi}} e^{ikx} \qquad E_k = \frac{\hbar^2 k^2}{2m}$$ In the most general case, the spectrum of the Hamiltonian will have a discrete part $\sigma_d$ (corresponding to true eigenvectors/eigenvalues) and a continuous part $\sigma_c$ (corresponding to generalized/non-normalizable eigenvectors/eigenvalues). 
In that case, we would have $$\psi(x) = \sum_{\lambda \in \sigma_d} a_\lambda \varphi_\lambda(x) + \int_{\lambda \in \sigma_c}\mathrm d\lambda \ A(\lambda) \phi_\lambda(x)$$ $$\Psi_t(x) =\sum_{\lambda \in \sigma_d} a_\lambda \varphi_\lambda(x)e^{-iE_\lambda t/\hbar}+ \int_{\lambda \in \sigma_c}\mathrm d\lambda \ A(\lambda) \phi_\lambda(x) e^{-i E_\lambda t/\hbar}$$
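A numerical sketch of the free-particle case (the grid sizes and the Gaussian weight $A(k)$ are arbitrary illustrative choices): superposing the non-normalizable plane waves with a square-integrable $A(k)$ produces a normalizable wave packet, since $\int|\psi|^2\,\mathrm dx = \int|A|^2\,\mathrm dk$ by Parseval's theorem.

```python
import numpy as np

k = np.linspace(-8.0, 12.0, 1001)      # momentum grid centred near k0
x = np.linspace(-12.0, 12.0, 1501)     # position grid
dk, dx = k[1] - k[0], x[1] - x[0]

s, k0 = 1.0, 2.0                       # packet width and mean momentum (arbitrary)
A = (2 * np.pi * s**2) ** -0.25 * np.exp(-((k - k0) ** 2) / (4 * s**2))   # ∫|A|² dk = 1

# psi(x) = (1/sqrt(2 pi)) ∫ A(k) e^{ikx} dk, approximated by a Riemann sum
psi = (A[None, :] * np.exp(1j * k[None, :] * x[:, None])).sum(axis=1) * dk / np.sqrt(2 * np.pi)

norm = (np.abs(psi) ** 2).sum() * dx   # ≈ 1: the packet is square-integrable
print(abs(norm - 1.0) < 1e-2)          # True
```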
{ "domain": "physics.stackexchange", "id": 96200, "tags": "quantum-mechanics" }
Formulate the Marriage Problem into a Maximum-flow problem (Graph theory)
Question: Suppose I have $M=\{1,\ldots, n\}$ men and $W = \{1, \ldots, n\}$ women and $B =\{1, \ldots, m\}$ brokers, such that each broker knows a subset of $M \times W$ and for each pair in this subset a marriage can be set up between the corresponding man and woman. Each broker $i$ can set up a maximum of $b_i$ marriages and a person can only be married once. Also we assume all marriages are heterosexual. I want to determine the maximum number of marriages possible and I want to show that the answer can be found by solving a maximum-flow problem. What I've tried: Make source and sink nodes with opposite demand. And then for each ordered pair $(i,j)$, where $i$ is a woman and $j$ is a man, make a node. For each broker $j$ make a corresponding node and introduce an arc with capacity $b_j$. For each node $(i,j)$ make an arc from broker $k$ with capacity $1$ if broker $k$ can arrange a marriage, otherwise $0$. However, after this I stop. I need to keep track of state, that is, no person gets married twice! Answer: You shouldn't need to keep track of state. This can all be handled with capacity constraints over the nodes. The network can be structured as follows: Start with the graph where one partition consists only of the "men" nodes and the other partition consists of the "women" nodes. Now, add a node for each broker $b$ and for each marriage pair $(m, w)$ on $b$'s list create two edges $(m, b)$ and $(b, w)$. Finally add a source node connected to all men and a sink node connected to all women (you can add another arc between those to simplify constraint cases if you want, but it's not necessary). So now we have a graph, but to have a proper flow network we need capacity constraints. For each man and woman node we have a capacity constraint of 1, since they can each only marry at most one person. For the $i$th broker node, add a capacity constraint of $b_i$. 
Compute maximum flow over this graph and you should be good to go.
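The construction can be exercised end to end with a small Edmonds-Karp sketch (node capacities are handled by splitting each broker into an in/out pair; the instance at the bottom is made up for illustration):

```python
from collections import deque, defaultdict

def max_marriages(n_men, n_women, brokers):
    """brokers: list of (quota b_k, list of (man, woman) pairs the broker knows)."""
    cap = defaultdict(lambda: defaultdict(int))   # residual capacities
    for i in range(n_men):
        cap['S'][('M', i)] = 1                    # each man marries at most once
    for j in range(n_women):
        cap[('W', j)]['T'] = 1                    # each woman marries at most once
    for k, (quota, pairs) in enumerate(brokers):
        cap[('Bin', k)][('Bout', k)] = quota      # broker node capacity via node splitting
        for m, w in pairs:
            cap[('M', m)][('Bin', k)] = 1
            cap[('Bout', k)][('W', w)] = 1
    flow = 0
    while True:
        parent, queue = {'S': None}, deque(['S'])  # BFS for an augmenting path
        while queue and 'T' not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if 'T' not in parent:
            return flow                            # no augmenting path left
        path, v = [], 'T'
        while v != 'S':
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                          # push flow, update residuals
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

# one broker with quota 1 can only arrange one of the two marriages it knows
print(max_marriages(2, 2, [(1, [(0, 0), (1, 1)])]))                          # 1
# a second broker lifts the total to 2, capped by the one-marriage-per-person rule
print(max_marriages(2, 2, [(1, [(0, 0), (1, 1)]), (2, [(0, 1), (1, 0)])]))   # 2
```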
{ "domain": "cs.stackexchange", "id": 3680, "tags": "graphs, reductions, optimization, network-flow" }
What elements do black holes expel into relativistic jets?
Question: What does current research say about the elements that can be created in the accretion disc of a black hole, and does the Relativistic Jet contain any of those elements? Curious as to whether the relativistic jet contributes to the dispersion of heavier elements throughout the universe. Answer: Black hole jets will expel whatever heavy elements are present in the material that they are accreting. Heavy elements are not created in the accretion discs of black holes. Even the hottest parts of the hottest accretion discs in the universe (those around "small" black holes and neutron stars) only reach temperatures of $\sim 10^{7}$ K and have densities far below that at the centre of a star. So no nuclear fusion reactions can take place. There is also no source of free neutrons to allow the r- or s-process to take place. Having said that, there is little doubt that accretion-powered jets are important energy sources in the interstellar medium and these can drive outflows that transport metal-enriched material (not necessarily in the accretion disk of the black hole, but in its local environment) to more distant locations, particularly in galaxy clusters (e.g. Kirkpatrick et al. 2011; Prasad et al. 2018). The exception to the above statements are the probable accretion disks that form around the products of neutron star/neutron star and neutron star/black hole mergers. These mergers may be responsible for (short) Gamma Ray burst sources and will be surrounded by neutron-rich material from the neutron star(s). Here it seems quite feasible that r-process elements (heavy, neutron-rich elements) will be formed by rapid neutron capture and expelled by the jets for some brief period of time (e.g. Janiuk 2019).
{ "domain": "physics.stackexchange", "id": 63793, "tags": "black-holes, astrophysics, radiation, relativistic-jets, metallicity" }
gazebo ros plugin make error: Undefined symbols for architecture x86_64
Question: I'm new to ROS and Gazebo programming. I'm trying to create a control plugin for the laser that I've mounted on my robot model. I'm using ROS Indigo and Gazebo 7.0 on Mac OS X 10.11.5. My problem comes when I try to compile, using standard 'cmake' and 'make', the .cc file of my plugin. Following an official tutorial (http://gazebosim.org/tutorials/?tut=plugins_model), I wrote the CMakeLists.txt as: cmake_minimum_required(VERSION 2.8 FATAL_ERROR) find_package(roscpp REQUIRED) find_package(std_msgs REQUIRED) include_directories(${roscpp_INCLUDE_DIRS}) include_directories(${std_msgs_INCLUDE_DIRS}) # Find Gazebo find_package(gazebo REQUIRED) include_directories(${GAZEBO_INCLUDE_DIRS}) link_directories(${GAZEBO_LIBRARY_DIRS}) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${GAZEBO_CXX_FLAGS}") # Build our plugin add_library(velodyne_plugin SHARED velodyne_plugin.cc) target_link_libraries(velodyne_plugin ${GAZEBO_libraries} ${catkin_LIBRARIES} ${Boost_LIBRARIES} ${roscpp_LIBRARIES}) I think that the problem could derive from the libraries' dependencies. 
The 'cmake' command seems to work well while 'make' return the following output: [ 50%] Linking CXX shared library libvelodyne_plugin.dylib Undefined symbols for architecture x86_64: "sdf::Console::ColorMsg(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int, int)", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o bool sdf::Param::Get<double>(double&) const in velodyne_plugin.cc.o "sdf::Console::Instance()", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o bool sdf::Param::Get<double>(double&) const in velodyne_plugin.cc.o sdf::Console::ConsoleStream& sdf::Console::ConsoleStream::operator<<<char [30]>(char const (&) [30]) in velodyne_plugin.cc.o sdf::Console::ConsoleStream& sdf::Console::ConsoleStream::operator<<<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o sdf::Console::ConsoleStream& sdf::Console::ConsoleStream::operator<<<char [3]>(char const (&) [3]) in velodyne_plugin.cc.o sdf::Console::ConsoleStream& sdf::Console::ConsoleStream::operator<<<char [29]>(char const (&) [29]) in velodyne_plugin.cc.o sdf::Console::ConsoleStream& sdf::Console::ConsoleStream::operator<<<char [15]>(char const (&) [15]) in velodyne_plugin.cc.o ... 
"sdf::Element::GetElement(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o "sdf::Element::GetAttribute(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o "sdf::Element::HasElementDescription(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o "gazebo::event::Connection::Connection(gazebo::event::Event*, int)", referenced from: gazebo::event::EventT<void (gazebo::common::UpdateInfo const&)>::Connect(boost::function<void (gazebo::common::UpdateInfo const&)> const&) in velodyne_plugin.cc.o "gazebo::event::Connection::~Connection()", referenced from: void boost::checked_delete<gazebo::event::Connection>(gazebo::event::Connection*) in velodyne_plugin.cc.o "gazebo::event::Events::worldUpdateBegin", referenced from: boost::shared_ptr<gazebo::event::Connection> gazebo::event::Events::ConnectWorldUpdateBegin<boost::_bi::bind_t<void, boost::_mfi::mf0<void, gazebo::VelodynePlugin>, boost::_bi::list1<boost::_bi::value<gazebo::VelodynePlugin*> > > >(boost::_bi::bind_t<void, boost::_mfi::mf0<void, gazebo::VelodynePlugin>, boost::_bi::list1<boost::_bi::value<gazebo::VelodynePlugin*> > >) in velodyne_plugin.cc.o "gazebo::common::Time::Time(double)", referenced from: gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o "gazebo::common::Time::Time()", referenced from: gazebo::VelodynePlugin::VelodynePlugin() in velodyne_plugin.cc.o "gazebo::common::Time::~Time()", 
referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o gazebo::VelodynePlugin::~VelodynePlugin() in velodyne_plugin.cc.o gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o "gazebo::common::Time::operator=(gazebo::common::Time const&)", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o "gazebo::common::Time::operator+=(gazebo::common::Time const&)", referenced from: gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o "ignition::math::IndexException::IndexException()", referenced from: ignition::math::Vector3<double>::operator[](unsigned long) const in velodyne_plugin.cc.o ignition::math::Vector2<int>::operator[](unsigned long) const in velodyne_plugin.cc.o ignition::math::Vector2<double>::operator[](unsigned long) const in velodyne_plugin.cc.o "sdf::Element::HasElement(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o "sdf::Element::GetElementImpl(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in velodyne_plugin.cc.o "sdf::Element::GetElementDescription(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from: double sdf::Element::Get<double>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in 
velodyne_plugin.cc.o "gazebo::common::Time::Double() const", referenced from: gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o "gazebo::common::Time::operator-(gazebo::common::Time const&) const", referenced from: gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o "gazebo::physics::Base::GetWorld() const", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o "gazebo::physics::Model::GetJointCount() const", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o "gazebo::physics::Model::GetJoints() const", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o "gazebo::physics::World::GetSimTime() const", referenced from: gazebo::VelodynePlugin::Load(boost::shared_ptr<gazebo::physics::Model>, std::__1::shared_ptr<sdf::Element>) in velodyne_plugin.cc.o gazebo::VelodynePlugin::UpdateChild() in velodyne_plugin.cc.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) make[2]: *** [libvelodyne_plugin.dylib] Error 1 make[1]: *** [CMakeFiles/velodyne_plugin.dir/all] Error 2 make: *** [all] Error 2 Can someone help me? Thank you, Pietro Originally posted by pietroastolfi on ROS Answers with karma: 15 on 2016-06-07 Post score: 0 Answer: Maybe the capitalization is the problem: try changing GAZEBO_libraries to GAZEBO_LIBRARIES. Does that help? Originally posted by scpeters with karma: 111 on 2016-06-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by pietroastolfi on 2016-06-08: IT WORKS!! I can't believe it! hours and hours for a f**king UPPERCASE. Thanks a lot
{ "domain": "robotics.stackexchange", "id": 24849, "tags": "ros, gazebo, gazebo-plugin, make" }
Adjust the power of a digital signal to a given value
Question: I have two signals: $x(n)$ is a white-noise signal with a given variance, $y(n)$ is the sum of white noise plus a sum of sinusoids. $$ x(n) = v_1(n) $$ $$ y(n) = \sum_{i=1}^{q} [ a_i \cos(\omega_i n) + b_i \sin(\omega_i n) ] + v_2(n) $$ $$ v_{1,2}(n) \sim W.N.(0,\sigma^2) $$ And that is how I generate the two signals in Matlab. Now I'd like to normalise $y(n)$ so that its power is the same as that of $x(n)$. How can I do that? Answer: Multiply y(n) by std(x)/std(y)
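The answer's one-liner is easy to check numerically. The sketch below (in Python/NumPy rather than Matlab; the particular amplitudes and frequencies are made up for illustration) generates both signals and rescales $y(n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(10_000)
sigma = 2.0                        # white-noise standard deviation (made up)

x = rng.normal(0.0, sigma, n.size)                     # x(n) = v1(n)
y = (0.7 * np.cos(0.10 * np.pi * n)                    # sum of sinusoids...
     + 0.3 * np.sin(0.25 * np.pi * n)
     + rng.normal(0.0, sigma, n.size))                 # ...plus v2(n)

y_norm = y * (np.std(x) / np.std(y))   # scale so power(y) matches power(x)
```

After the rescaling, std(y_norm) equals std(x), so the two signals carry the same power about their means.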
{ "domain": "dsp.stackexchange", "id": 3444, "tags": "discrete-signals, normalization" }
understanding a spectrum of a modulated signal
Question: I plot a spectrum of a modulated signal. It is a GMSK modulated signal. My goal is to find the best parameters for realising it. My first spectrum uses oversampling ratio M = 4 and BT (the product of the 3-dB bandwidth of the LPF and the desired Tb) = 0.5; the others use M = 8, BT = 0.5 and M = 6, BT = 0.5. According to the figures, can I conclude M = 4 is the best option for me (~ -60 dB side lobes)? How do I analyse the spectrum of a signal? Answer: The spectrum will be identical for all cases of oversampling. (The frequency axis just scales accordingly). Proper oversampling does not modify the spectrum in band in any way whatsoever, but on the transmit side it provides greater margin to simplify subsequent filtering: either digital filtering if later interpolation / upsampling is performed, or the analog reconstruction filter after the DAC. Higher oversampling creates more spectral space between the images in the signal and therefore relaxes the transition bands in the related filters mentioned. A possible benefit to lower sampling ratios is that the time duration of pulse-shaping filters implemented in the waveform creation (in this case the Gaussian filter) will extend longer in time for the same filter complexity and thus have higher performance. I detail that trade further at this post. Therefore the choice of what oversampling is best is part of a system design with consideration of all subsequent filters needed, and traded with the resource and power requirements needed to run the digital processing at a higher rate. Below is an example demonstrating a 16-QAM spectrum and the same spectrum oversampled by 5, where we can see from the details in the plot that the spectrum in band is not modified in any way, but the oversampling gives us much more range in frequency that facilitates later filtering operations both in the digital and analog domains. 16-QAM 4 samples per symbol: 16-QAM 20 samples per symbol:
{ "domain": "dsp.stackexchange", "id": 11153, "tags": "fft, digital-communications, power-spectral-density, gmsk" }
Which lubricant (oil or not) to use if I want to prevent sticky/gummy buildup?
Question: I am currently tinkering with typewriters and two things struck me when I started to read about their maintenance: They are supposed to run almost completely dry, except for a very tiny amount of lubricant at very specific points; Gummy or sticky gunk buildup - usually due to over-oiling + dust buildup + passage of time - is one of the main causes of typewriter malfunctioning. As an avid cyclist, I have already noticed some oils have a tendency to get sticky as they dry out and mix with dust (e.g. gearbox oil), while others seem to be very stable and somewhat "repel" dust, mostly automatic transmission fluid - ATF, power steering fluid, or suspension oil. I believe these are somewhat intended to remain fluid and to actively repel dust, which seems to "fall out of the way" instead of getting increasingly mixed with the oil and the working surfaces. I know a lot of typewriter folks use sewing machine oil (Singer, mostly), but I wonder if its characteristics are suitable for this need of not getting gunky as the years pass. So the questions are: Given the considerations above, would some type of oil or non-oily lubricant be more adequate than sewing-machine oil? Is the hydraulic fluids hypothesis a good one? Any other relevant consideration? Answer: The best option would probably be "clock oil" which is intended to be long lasting, non-acidic, and (importantly) non-spreading. If you apply it in small enough quantities, it will stay where you put it, and not move around through capillary action. Sewing machine oil is probably a reasonable alternative, but the fact that it is formulated not to mark or damage any fabric or thread that it comes in contact with is probably irrelevant. Also it was not meant to be long-lasting. Singer used to recommend that their sewing machines should be oiled daily if they were in continuous use (i.e. in a factory.) 
Clock repairers sometimes use transmission fluid to lubricate clock springs (if only because of the expense of using clock oil on big rusty springs in a low-quality clock!) but not for lubricating bearings and pivots.
{ "domain": "engineering.stackexchange", "id": 448, "tags": "mechanical-engineering, tribology" }
Worst case using the first fit $2$-approximation for the bin packing problem
Question: I am on my phone, so I can be more verbose when I have a PC at hand, if you desire. The first fit algorithm for approximating the bin packing problem (NP-hard) is a $2$-approximation for the optimum. Can you show me a concrete worst case showing that $2$ is a good (or bad) estimate for the bound? Answer: The approximation ratio of first fit for bin packing is actually 1.7 rather than 2. See Dósa and Sgall, First fit bin packing: a tight analysis. The paper contains references which prove a matching lower bound.
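To make the discussion concrete, here is a minimal first-fit implementation together with a classic bad instance: six items of size 0.4 followed by six of size 0.6 into unit bins. First fit opens 9 bins while the optimum pairs 0.4 + 0.6 into 6 bins, a ratio of 1.5. The code is an illustrative sketch, not from the original post.

```python
def first_fit(items, bin_capacity=1.0):
    """Place each item in the first open bin with room; open a new bin
    if none fits.  Returns the number of bins used."""
    free = []                      # remaining space in each open bin
    for size in items:
        for i, slack in enumerate(free):
            if size <= slack + 1e-12:   # small epsilon guards float rounding
                free[i] = slack - size
                break
        else:
            free.append(bin_capacity - size)
    return len(free)

# A bad instance: first fit pairs the 0.4s (3 bins) and then strands
# every 0.6 in its own bin (6 bins): 9 bins total, versus OPT = 6.
bad_instance = [0.4] * 6 + [0.6] * 6
```

Instances approaching the tight 1.7 bound from the cited paper are more intricate, using items just over 1/7, 1/3 and 1/2.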
{ "domain": "cs.stackexchange", "id": 8445, "tags": "optimization, np-hard, approximation" }
Heatmap not respecting the color bounds
Question: For the following dataframe of a chi2 correlation study, I started to plot a heatmap: import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap columns = ['A', 'B', 'C', 'D', 'E', 'F', 'G'] results = np.array([[0.70709269, 0.17683162, 0.38328705, 0.61449242, 0.43709035, 0.33675627, 0.2661715 ], [0.17683162, 0.70709268, 0.20520211, 0.16044232, 0.07607822, 0.13364355, 0.13093324], [0.38328705, 0.20520211, 0.81649658, 0.37683897, 0.17308779, 0.29541159, 0.29975079], [0.61449242, 0.16044232, 0.37683897, 0.81649658, 0.4991043 , 0.34257853, 0.2786975 ], [0.43709035, 0.07607822, 0.17308779, 0.4991043 , 0.81649658, 0.22700152, 0.17041603], [0.33675627, 0.13364355, 0.29541159, 0.34257853, 0.22700152, 0.81649658, 0.22018705], [0.2661715 , 0.13093324, 0.29975079, 0.2786975 , 0.17041603, 0.22018705, 0.81649658]]) df_matrix = pd.DataFrame(results, columns=columns) category_bounds = [0, 0.2, 0.4, 0.6, 0.8, 1.0] categories = ['Very Weak', 'Weak', 'Moderate', 'Strong', 'Very Strong'] df_heatmap = pd.DataFrame(df_matrix, index=df_matrix.index, columns=df_matrix.columns) colors = sns.color_palette('coolwarm', len(categories)) cmap = ListedColormap(colors) fig, ax = plt.subplots(figsize=(10, 8)) sns.heatmap(df_heatmap, annot=True, cmap=cmap, fmt=".3f", cbar=False, ax=ax, linecolor='white') plt.subplots_adjust(left=0.25, top=0.95) plt.show() But, for some reason (I suppose it is due to rounding of the values), 0.71 and 0.82 are plotted in the same color. Can someone give me some guidance on what the problem is? Answer: The problem, as @yuckyh said, was that seaborn normalizes the variables to produce the colors. 
To avoid that, setting vmin and vmax was necessary: colors = sns.color_palette('coolwarm', len(categories)) cmap = ListedColormap(colors) # Adjust the figure size to avoid overlapping y-axis labels fig, ax = plt.subplots(figsize=(10, 8)) # Plot the heatmap sns.heatmap(df_matrix, cmap=cmap, fmt=".3f", cbar=False, ax=ax, linecolor='white', vmin=0, vmax=1, annot = True) # Number of variables (rows/columns of the matrix) n_variables = df_matrix.shape[0] # Adjust the position of the lines indicating the edges of the rectangles ax.hlines(np.arange(n_variables+1), *ax.get_xlim(), color='white', linewidth=1) ax.vlines(np.arange(n_variables+1), *ax.get_ylim(), color='white', linewidth=1) ax.set_xticks(np.arange(df_matrix.shape[1]) + 0.5, minor=False) ax.set_yticks(np.arange(df_matrix.shape[0]) + 0.5, minor=False) # Configure the tick labels ax.set_xticklabels(df_heatmap.columns, rotation=45, ha="right") ax.set_yticklabels(df_heatmap.index, rotation=0) # Create a custom legend legend_labels = [f'{category}: {category_bounds[i]:.1f} - {category_bounds[i+1]:.1f}' for i, category in enumerate(categories)] legend_elements = [plt.Rectangle((0, 0), 1, 1, fc=colors[i]) for i in range(len(categories))] # Reverse the order of the legend elements and labels legend_elements = legend_elements[::-1] legend_labels = legend_labels[::-1] # Add the legend ax.legend(handles=legend_elements, labels=legend_labels, loc='center left', bbox_to_anchor=(1, 0.5)) # Adjust the spacing to avoid overlapping y-axis labels plt.subplots_adjust(left=0.25, top=0.95) plt.show()
{ "domain": "datascience.stackexchange", "id": 11841, "tags": "python, seaborn, heatmap" }
Need help with constructing a DFA
Question: I am trying to construct the DFA that accepts the following language $$ L_2 := \left\{ w \in \{a,b\}^* \mid \#a(w) \text{ is divisible by } 3 \text{ and } \texttt{babbab} \text{ is a substring of } w \right\} $$ My solution is illustrated below. I feel like my current solution is incomplete/wrong. Answer: Here's an idea to construct such a DFA. Consider the two parts of the language separately: $L = L_1 \cap L_2$ where $L_1 = \{u\in\{a,b\}^*\mid |u|_a \equiv 0 \mod 3\}$ and $L_2 = \{ubabbabv\mid u,v\in \{a,b\}^*\}$. Note that I reworded "being divisible by $3$" into "being equal to $0$ modulo $3$". This will help to construct the automaton. For such an automaton, you need to keep track of the value of $|u|_a \mod 3$. This can be done using only three states. An automaton recognizing $L_1$ could be the following one, where $q_i$ denotes the state where $|u|_a \equiv i \mod 3$: Now to build an automaton recognizing $L$, you can first build an automaton recognizing $L_2$, then triple each state, completing transitions according to the value of $|u|_a$. Though, honestly, this is very tedious and annoying. The person giving this assignment could have chosen a shorter substring condition.
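The product construction described in the answer — 3 counter states times 7 pattern-matching states, 21 states in all — can be simulated directly. The helper below is an illustrative sketch (not from the original answer): it tracks the number of a's mod 3 together with the longest prefix of babbab matching a suffix of the input read so far.

```python
def make_dfa(pattern="babbab", mod=3):
    """Simulate the product DFA: `mod` counter states x (len(pattern)+1)
    matcher states.  State = (count, k), where k is the longest prefix of
    `pattern` matching a suffix of the input, capped once fully matched."""
    def step(state, c):
        count, k = state
        if c == 'a':
            count = (count + 1) % mod
        if k == len(pattern):            # substring already seen: absorbing
            return (count, k)
        s = pattern[:k] + c              # longest suffix that is still a prefix
        while s and not pattern.startswith(s):
            s = s[1:]
        return (count, len(s))

    def accepts(word):
        state = (0, 0)
        for c in word:
            state = step(state, c)
        return state[0] == 0 and state[1] == len(pattern)

    return accepts
```

For example, babbaba (three a's, contains the substring) is accepted, while babbab alone (only two a's) is rejected.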
{ "domain": "cs.stackexchange", "id": 19230, "tags": "regular-languages, automata, finite-automata" }
Does an object gain mass as it moves further away from the Earth
Question: I have been reading about the mass defect in atomic nuclei recently and am trying to understand what it is that causes the defect. To my understanding it is the loss of energy of the nucleus that causes an observed decrease in mass in the nucleus (using $E=mc^2$). Therefore, when an object moves further from the Earth, since it gains gravitational potential energy, will its observed mass increase? Many thanks. Answer: Therefore, when an object moves further from the Earth, since it gains gravitational potential energy, will its observed mass increase? No, but it is possible to change your analogy in order to make it correct. The thing that's analogous to the nucleus is the system consisting of both the object and the earth, O+E. If an external source of energy brings the object farther away from the earth, then the total energy of the O+E system is increased, and by $E=mc^2$ this is equivalent to an increase in the mass of the O+E system. A real-world example of this process, although in reverse, is in the black hole mergers that we observe in gravitational wave events. The black holes transfer a bunch of energy into gravitational waves, which take energy away into the outside world. Typically the system loses about 5-10% of its mass through the merger. Note that if the system does not exchange energy with the environment, then the system's total energy is conserved, and its mass stays the same.
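A quick order-of-magnitude calculation (with made-up everyday numbers) shows how small the mass increase of the object+Earth system is:

```python
c = 299_792_458.0      # speed of light, m/s
g = 9.81               # near-surface gravitational acceleration, m/s^2
m, h = 1.0, 1.0        # lift a 1 kg object by 1 m (illustrative values)

delta_E = m * g * h            # energy added to the O+E system, joules
delta_m = delta_E / c ** 2     # equivalent mass increase of the system, kg
```

The result is about $10^{-16}$ kg — far below anything measurable for everyday masses, which is why the effect only becomes dramatic in systems like the merging black holes mentioned in the answer.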
{ "domain": "physics.stackexchange", "id": 46239, "tags": "gravity, nuclear-physics, potential-energy, mass-energy" }
Reaction of Components with Methyl Ethyl Ketone makes it corrode. How to prevent?
Question: I have an issue with chemical plating of iron: the component is immersed in Methyl Ethyl Ketone for a time span and then the zinc galvanized plating starts to corrode. I am not understanding what measures I should take. I am not at all into chemical engineering; I am an embedded engineer trying to sell my products to companies. My advisor said I should get them alkaline coated. This effect takes place after 3 days and there are no immediate results to the test. Thank you. You guys are awesome. Answer: Ritmesh, just to be clear: are the iron parts cleaned in MEK before zinc coating, or are the zinc-coated iron parts corroding after being exposed to MEK during service as drum clamps? I ask because the usual process for pre-zinc dip coating is an initial acid wash and rinse/dry, not an MEK cleaning dip. Also please note 35μm of zinc is not very thick and may not be able to cover up all the pits & scratches in the iron. Also I notice that the corrosion is pitting corrosion associated with the sheared edges of these parts, which suggests pinholes in the zinc in the roughened zones. Two things to try: a tumble deburring of the parts before zinc coating to remove any sharp edges, and a thicker zinc coat. Finally, if the problem is pinholing then you should be able to rapidly reproduce it by dipping the parts into a warm, well-oxygenated weak acid solution with a little salt in it and then watching closely for gas evolution at the zinc surface, where the pinholes are. You can also do the pinhole test by assembling an electrochemical cell where one electrode is copper and the other is your zinc-coated part and the solution is a weak acid. This will reveal pinholes in seconds if you do it right.
{ "domain": "engineering.stackexchange", "id": 4553, "tags": "chemical-engineering" }
Spiral path of particle moving under circular force
Question: Imagine a situation where a long massless stick is inserted through a ring, with one end of the stick fixed. Then, the system is given an impulsive force such that it starts rotating with an initial angular velocity $\omega_0$; the ring will start moving outwards, and thus the system will experience a negative torque. I want to find the conditions when the system finally comes to (angular) rest, i.e. the distance from the centre, the time elapsed, and the final velocity of the ring. I have obtained the following two equations using angular momentum change and simple free body diagrams, and I'm now stuck. $$\ddot r= \dot \theta^2 r$$ $$r \ddot \theta = -2 \dot \theta \dot r$$ How do I further reduce these? Note: The initial distance from the axis can be taken as $r_0$. Answer: Hint: You can write $\omega= \dot{\theta}$, and cancel the $dt$ in the second equation, so it becomes separable in $r$ and $\omega$.
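Following the hint, $\frac{d\omega}{\omega} = -2\frac{dr}{r}$ integrates to $r^2\omega = r_0^2\omega_0$, i.e. conservation of angular momentum. A quick numerical sanity check with a hand-rolled RK4 integrator (illustrative; the initial values $r_0=\omega_0=1$ are made up):

```python
import math

# State: (r, rdot, theta, omega).  From the question's equations:
#   r'' = omega^2 * r                 (radial equation)
#   omega' = -2 * omega * rdot / r    (from  r * theta'' = -2 * theta' * r')
def deriv(state):
    r, rdot, theta, omega = state
    return (rdot, omega * omega * r, omega, -2.0 * omega * rdot / r)

def rk4_step(state, dt):
    def nudge(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

r0, omega0 = 1.0, 1.0                 # made-up initial conditions
state = (r0, 0.0, 0.0, omega0)
L0 = r0 ** 2 * omega0                 # r^2 * omega should stay constant
for _ in range(2000):                 # integrate to t = 2
    state = rk4_step(state, 1e-3)
r, rdot, theta, omega = state
```

The integration keeps $r^2\omega$ constant to numerical precision while $r$ grows and $\omega$ decays, suggesting the angular velocity only tends to zero as $r \to \infty$ rather than reaching rest at any finite radius.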
{ "domain": "physics.stackexchange", "id": 69645, "tags": "classical-mechanics" }
Changing file encoding, removing mid-line LF, and converting DOS CR-LF to Unix LF
Question: We have several thousand large (10M<lines) text files produced by a windows machine. We need to change the file encoding of these files from cp1252 to utf-8, replace any bare Unix LF sequences (i.e. \n) with spaces, then replace the DOS line end sequences ("CR-LF", i.e. \r\n) with Unix line end sequences (i.e. \n). The dos2unix utility is not available for this task. We've written a bash function to package these operations together using iconv and sed, with iconv doing the encoding and sed dealing with the LF/CRLF sequences. It first changes the encoding, then replaces \r\n sequences with a placeholder. It then replaces remaining \n with spaces, and finally replaces the placeholders with \n: post_extract() { iconv --from-code=CP1252 --to-code=UTF-8 $1 | \ sed 's/\\r\\n/@PLACEHOLDER@/g' | \ sed 's/\\n/ /g' | \ sed 's/@PLACEHOLDER@/\\n/g' > $2 } I'm not used to writing shell functions. There may be many edge cases and other considerations not handled here but this first pass works. To be specific with how this function is used in practice, it is sent to the --to-command= argument of a call to tar, such that the above process is performed for every file extracted from the tar archive. Use case is preprocessing tabular data for upload to a database. I'm not sure if we can do the operation in place or not. Open to using tools other than sed, e.g. tr, awk or perl (though I don't think the latter is necessary). (Highly simplified) example input: apple|orange|\n|lemon\r\nrasperry|strawberry|mango|\n\r\n Example output: apple|orange| |lemon\nrasperry|strawberry|mango| \n Answer: There's nothing Bash-specific here, so consider using plain sh for lower overheads. The unquoted expansions of $1 and $2 are subject to word-splitting, and you probably don't want that - use "$1" and "$2" respectively. It looks like you're over-escaping \r\n, unless you actually meant to handle those four characters (backslash, r, backslash, n). 
If we're handling ASCII CR and NL, then we need single backslashes for sed (the single quotes prevent shell expanding them). There's a small risk that the input might contain the magic @PLACEHOLDER@ sequence, and no matter how unlikely you make it, there's always a finite probability that it will turn up. It's worth cultivating a habit that such things aren't needed - that's the sort of "unlikely" input that gets exploited for security breaches. We're never going to match a newline in our sed commands, because sed is line-oriented. We would need an entirely different sed program, that removes \r$ (i.e. CR at end of line), and for lines that don't end in CR, read in the following line using the N command and replace the resulting newline with space). In any case, sed always writes complete lines of output - if input finishes with an incomplete line (no newline) then there's no way to prevent sed from corrupting that. An alternative would be to use Perl (I know you said that might be overkill). It can simply replace all newlines not preceded by CR by using a negative lookbehind assertion: s/(?<!\r)\n/ /g. And can slurp the entire file rather than operating on individual lines: perl -g -pe 's/(?<!\r)\n/ /g; s/\r\n/\n/g;' We might be able to get Perl to do the character-code conversion too. perl -C6 ensures that standard output and error are UTF-8, so if we can put its input stream into windows-1252, we'll have a short single program rather than needing a pipeline with iconv. I'm not sure we can do that with -p - we might need to provide the read-execute-write loop ourselves in that case.
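The negative-lookbehind approach sketched at the end translates directly into other languages as well; here is an equivalent sketch in Python (illustrative only — the original pipeline uses iconv plus sed or Perl):

```python
import re

def post_extract(raw: bytes) -> bytes:
    """cp1252 -> UTF-8; bare LF -> space; CR-LF -> LF."""
    text = raw.decode("cp1252")
    text = re.sub(r"(?<!\r)\n", " ", text)  # LF not preceded by CR -> space
    text = text.replace("\r\n", "\n")       # DOS line ends -> Unix
    return text.encode("utf-8")
```

Run on the question's simplified example, this produces exactly the expected output, and because the whole file is processed as one string it also handles a trailing line with no newline.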
{ "domain": "codereview.stackexchange", "id": 44197, "tags": "beginner, bash, shell, sed" }
Kinetic friction of a sled vs. static friction of a wheel
Question: Since kinetic friction is lower than static friction, and a sled in motion experiences kinetic friction whereas a wheel experiences static friction, which one would go down a hill faster? Let's have a hypothetical scenario where we have a slope which both a sled and a wheel are stationed at the top of; considering all things equal other than kinetic vs. static friction, which one would get to the bottom of the slope the fastest? In this case the mass of the two objects is the same, the air friction is the same, they experience the same amount of gravitational pull, the surface has the same coefficient of friction for both objects, the surface stays at the same angle for both objects, and the rolling momentum of the wheel is negligible. The only rule is that the wheel rolls and the sled slides, so there is no slippage of the wheel rolling and there is also no tumbling of the sled. Intuitively I think that the sled should be faster as it experiences kinetic friction, which is lower than the static friction experienced by the wheel. Answer: Intuitively I think that the sled should be faster as it experiences kinetic friction which is lower than the static friction experienced by the wheel You appear to not understand the roles of kinetic vs static friction. Static friction does not prevent or inhibit the wheel from going down hill. It enables the wheel to roll without sliding as long as the maximum possible static friction force is not exceeded. Static friction can, however, prevent the sled from sliding if the slope is too shallow. If it does slide, kinetic friction does negative work on the sled taking kinetic energy away from it. If the sled were sliding down a frictionless slope and the wheel was rolling down the same angle slope from the same height but with static friction so that it rolls without slipping, the sled would win the race. 
That's simply because the translational velocity of the center of mass determines the winner and that would be higher for the sled. But that's because the kinetic energy of the sled is all translational, whereas the kinetic energy of the wheel is part translational and part rotational. It is not because the coefficient of kinetic friction is lower than the coefficient of static friction. On the other hand, if there is kinetic friction on the slope with the sled, that friction will do negative work on the sled taking some of its translational kinetic energy away. Whether it takes enough away for it to lose the race, I leave that to you to figure out. Hope this helps.
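The frictionless-sled vs. rolling-wheel race in the answer is easy to quantify with energy conservation. Assuming for illustration that the wheel is a uniform solid cylinder (moment of inertia $I = \frac{1}{2}mr^2$; the height is made up):

```python
import math

g, h = 9.81, 5.0       # slope height of 5 m (made-up numbers)

# Sled on a frictionless slope: m g h = (1/2) m v^2
v_sled = math.sqrt(2 * g * h)

# Uniform solid cylinder rolling without slipping (I = m r^2 / 2):
#   m g h = (1/2) m v^2 + (1/2) I (v/r)^2 = (3/4) m v^2
v_wheel = math.sqrt(4 * g * h / 3)
```

The sled arrives faster by a factor of $\sqrt{3/2}$, because the wheel holds a third of its kinetic energy in rotation rather than translation.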
{ "domain": "physics.stackexchange", "id": 94959, "tags": "newtonian-mechanics, kinematics, friction" }
Dimensional regularization and expansion of gamma function
Question: In my calculations, I used dimensional regularization, i.e. replace $d\rightarrow d-\epsilon$ and calculate the divergent integral. Then, I would like to expand the answer into a series in $\epsilon$ around $\epsilon=0$. But I obtained a strange result. I start from the following integral (where I denote $d=3-\epsilon$): $$\int_{0}^{\infty}dp\frac{p^2}{p^2+m^2}$$ which is divergent. Then, I calculated the integral $$ I(\epsilon)=\int_{0}^{\infty}dp\frac{p^{d-1}}{p^2+m^2}=\frac{m^{d-2}}{2}\Gamma(d/2)\Gamma(1-d/2)$$ which is convergent for $d<2$. I also have an integral over angles, which in the $d$-dimensional case can be written as $$\frac{2\pi^{d/2}}{\Gamma(d/2)}.$$ So, my final answer is $$I(\epsilon)\propto\Gamma(\epsilon/2-1/2)$$ Using Wolfram Mathematica, I find the expansion around $\epsilon=0$. My expectation was that the divergence of my integral would appear as a pole, $1/\epsilon$. But in the expansion I see no singular term. Answer: I would like to present the answer. DimReg replaces any even divergences (log, quadratic, etc.) by a pole $1/\epsilon$. For any odd divergence, DimReg doesn't give the correct answer. This problem can be solved by PDS (Power Divergence Subtraction), which is discussed in the following paper.
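The absence of a pole is easy to see numerically: $\Gamma(z)$ has poles only at $z = 0, -1, -2, \ldots$, and for $d = 3-\epsilon$ the argument $\epsilon/2 - 1/2$ tends to $-1/2$, safely between poles. A quick check (Python standard library, for illustration):

```python
import math

# Gamma(eps/2 - 1/2) as eps -> 0: the argument approaches -1/2, which sits
# between the poles of Gamma (located at 0, -1, -2, ...), so the limit is
# finite -- no 1/eps pole can appear for this linearly divergent integral.
values = [math.gamma(eps / 2 - 0.5) for eps in (0.1, 0.01, 0.001, 0.0)]
gamma_minus_half = -2.0 * math.sqrt(math.pi)    # exact value of Gamma(-1/2)
```

The values converge smoothly to $\Gamma(-1/2) = -2\sqrt{\pi} \approx -3.545$, a perfectly finite number — exactly the "strange result" observed: the linear (odd) divergence leaves no $1/\epsilon$ trace.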
{ "domain": "physics.stackexchange", "id": 56448, "tags": "quantum-field-theory, regularization, dimensional-regularization" }
How to determine the direction of regulation of a gene by comparing gene expressions?
Question: I am just learning about gene expression and regulation. Several research efforts focus on finding the genes with altered expression on a microarray to claim that they have a correlation to a specific disease. I am confused about how people can determine whether a gene is down-regulated or up-regulated from its expression. Assume we have a few samples of a gene: some of the samples are from normal patients and the rest are disease-infected samples. Do we determine the direction of regulation of a gene by the ratio of gene expression of normal/disease-infected samples? For example, if the ratio of expressions is a negative value, do we say that the gene is a down-regulated gene; otherwise, it is an up-regulated gene? Answer: If you have control expression values $c$ and e.g. disease expression values $d$, you take the ratio: $\frac{d}{c}$. If this is greater than one, it's up-regulated. Usually, the log-ratio is computed: $\log\frac{d}{c}$. Now, if this is positive, the gene is up-regulated. Gene expression values are usually measured genome-wide and then normalized before computing the ratios. So you rarely deal with individual raw expression values.
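The answer's rule can be written as a tiny helper (the expression values below are toy numbers; as the answer notes, real pipelines normalize genome-wide measurements first):

```python
import math

def regulation(disease, control):
    """Direction of regulation from the log2 ratio of disease to control
    expression: positive log-ratio = up-regulated, negative = down."""
    log_ratio = math.log2(disease / control)
    if log_ratio > 0:
        return "up-regulated"
    if log_ratio < 0:
        return "down-regulated"
    return "unchanged"
```

With disease expression 200 against control 100, the log2 ratio is +1 (up-regulated); with 50 against 100 it is -1 (down-regulated).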
{ "domain": "biology.stackexchange", "id": 788, "tags": "bioinformatics, gene-expression, gene-regulation" }
Arrow that changes direction home experiment
Question: I saw this cool optical effect in an experiment on Youtube: https://www.youtube.com/watch?v=G303o8pJzls Could anyone explain to me why the arrow points in the opposite direction? I have read something about the refraction index of the glass and water, but it's not clear to me how those cause the effect. Answer: A convex lens produces an image that is inverted. If we put the object at twice the focal length then the image is the same size as the object but the other way up. This is what is happening in the video. When the glass is empty it does not act as a lens, so the light passes straight through and we see both arrows unchanged. When the glass is filled with water the glass+water acts as a cylindrical convex lens, and this produces an inverted image. The glass has been carefully positioned so that the object is twice the focal length away from the glass, so the inverted image is about the same size as the object.
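The thin-lens arithmetic behind this can be sketched as follows (Python; the focal length is an arbitrary assumed value):

```python
def image_distance(f, d_obj):
    # Thin-lens equation: 1/f = 1/d_obj + 1/d_img
    return 1.0 / (1.0 / f - 1.0 / d_obj)

f = 5.0             # focal length in arbitrary units (assumed value)
d_obj = 2 * f       # object placed at twice the focal length
d_img = image_distance(f, d_obj)
m = -d_img / d_obj  # transverse magnification

print(d_img, m)     # image also at 2f; m = -1: inverted, same size
```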
{ "domain": "physics.stackexchange", "id": 19323, "tags": "optics, refraction, lenses" }
Can neural network probability directly be used as inverse sensor model?
Question: For my robot I'm using a semantic segmentation neural network that assigns to every pixel the probability of being "road" (not occupied). Using a homography matrix I re-project the image to a top-down view. The final goal is to build a map (poses are known). I'm going to use the simple "occupancy grid mapping" algorithm described in "Probabilistic Robotics", chapter 9.2. The core component of the algorithm is the "inverse sensor model" $p(m_i |z_t,x_t)$ - the probability of cell $m_i$ being occupied given measurement $z_t$ and robot pose $x_t$. Will it be correct if I directly use the probabilities from the neural network as the "inverse sensor model"? if cell is outside current "top-down": return probability_0.5 else return probability_from_network Answer: Sure, it's possible to assign the conditional probability of a pixel to the map, but there is an easier way for occupancy mapping. A neural network can learn the probabilities internally, so the programmer doesn't have to handle the values on their own. This makes the development process lighter, and modern frameworks like Tensorflow support this kind of model building naturally. So the answer to the question is that the neural network can learn the grid map directly, which allows the programmer to focus on other tasks like path planning and steering control. Probabilistic robotics is overestimated because the weights in the neural network are able to adapt to new situations on their own. [1] Zhang, Jingwei, et al. "Neural SLAM: Learning to explore with external memory." arXiv preprint arXiv:1706.09520 (2017).
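As a hedged sketch of the scheme from the question (Python; `p_meas` stands for the per-cell occupancy probability derived from the network output, and the numbers are illustrative), the standard log-odds form of occupancy grid mapping adds the logit of the inverse sensor model per observation, with p = 0.5 acting as "no information":

```python
from math import log, exp

def logit(p):
    return log(p / (1.0 - p))

def update_cell(l, p_meas):
    # Standard log-odds occupancy update for one cell; p_meas is the
    # inverse-sensor-model probability that the cell is occupied.
    # p_meas = 0.5 (cell outside the current top-down view) adds nothing.
    return l + logit(p_meas)

def occupancy(l):
    # Recover the probability from the accumulated log-odds.
    return 1.0 - 1.0 / (1.0 + exp(l))

l = 0.0                        # prior log-odds, i.e. p = 0.5
for p in (0.9, 0.9, 0.5):      # two confident detections, one "unknown"
    l = update_cell(l, p)
print(occupancy(l))            # cell now strongly believed occupied
```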
{ "domain": "robotics.stackexchange", "id": 1933, "tags": "mapping, probability, neural-networks" }
Equilibrium on a plank
Question: Consider the classic scenario of equilibrium with a man standing at the centre of a plank, the plank being held up by two trestles. Say the man exerts a force of 500 N at the point he is standing at. My book claims that the sum of the upward forces exerted by the trestles must be equal to 500 N since the configuration is in equilibrium. Why is this the case? I would understand that the sum of forces in one direction equals the sum of forces in the opposite direction about the same point, but here we are considering three different points: the centre point and the two endpoints at which the trestles are located… Answer: There is a concept that when a system is in static equilibrium, you can slice any part of the system off and the remaining part must be in equilibrium also. As a result, the part of the plank where the fellow is standing and nearby must be in equilibrium. To do so there are internal forces of 250 N as seen below in the first line. Then consider the part of the plank next to it; it too must have internal forces of 250 N each, as an equal and opposite force from the middle part (Newton's 3rd law). And so move all the way to the supports (third line above) and you will see that the plank needs 250 N force from the supports to be in equilibrium. As a simplification, you can consider the plank as a rigid body and require the sum of the forces acting on it to balance out. The balance equations are that the net force and net moment (about an arbitrary point A, not shown) have to equal zero. $$ \boldsymbol{F}_{\rm net} = 0 \\ \boldsymbol{M}_{{\rm net},A} = 0 $$
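The rigid-body balance equations can be turned into a small sketch (Python; the 2 m span is an assumed value, and by symmetry it cancels for a central load):

```python
def reactions(W, L, a):
    """Support reactions for a weight W at distance a from the left support
    of a plank spanning L between two supports (plank weight neglected)."""
    # Moment balance about the left support: R_right * L = W * a
    R_right = W * a / L
    # Force balance: R_left + R_right = W
    R_left = W - R_right
    return R_left, R_right

# Man of 500 N at the centre of a 2 m span: each trestle carries 250 N,
# and the two reactions always sum to the 500 N load.
print(reactions(500.0, 2.0, 1.0))   # (250.0, 250.0)
```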
{ "domain": "physics.stackexchange", "id": 93765, "tags": "newtonian-mechanics, free-body-diagram, equilibrium, statics" }
Explanation of Lemon Juice-Invisible Ink
Question: A cute homegrown spy-trick we all know: lemon juice invisible ink. But there is no scholarly article available on the internet about this phenomenon. A rough explanation I found in Scientific American: Lemon juice—and the juice of most fruits, for that matter—contains carbon compounds. These compounds are pretty much colorless at room temperature. But heat can break down these compounds, releasing the carbon. If the carbon comes in contact with the air, a process called oxidation occurs, and the substance turns light or dark brown. But, this is not the type of explanation to be content with. Question: What is the mechanism going on here? Please, support your answer with sufficient information and references, don't give a rough explanation. Edit: I don't think my question is a duplicate as I am asking for a well-referenced and informative answer which the other question doesn't have. Answer: Lemon juice works as a sympathetic ink partially due to its caramelization, as @Mithoron profoundly noticed -- especially upon extreme heating. A recent review of the degradation process of lemon juice [1] suggests that browning of lemon juice is associated with 3 processes: ascorbic acid degradation; Maillard reaction; caramelization. One of the first in-depth investigations revealed the major role of ascorbic acid degradation with regard to the browning phenomena [2]: Compared with other fruit products, lemon juice contains few natural pigments to mask. ... Because of the high acidity (pH 2.5) of this product, it was unlikely that browning was due to sugar-amine condensation and the results showed that ascorbic acid was the main precursor. Browning of lemon juice and model systems was proportional to the level of ascorbic acid; the presence of amino-acids in model systems increased the intensity of browning.
The principal role of amino-acids in the non-enzymic browning of fruit products appears not to be in the initial reaction leading to the formation of reactive compounds, but to increase the browning potential after the oxidation of the ascorbic acid to reactive carbonyl compounds. Thus, the pathway for the build-up of carbonyls depends on the pH and nature of the product, but in all cases the formation of melanoidin complexes originates from the polymerisation of carbonyl and $\alpha$-amino-groups. It was also confirmed that the $\alpha/\beta$-unsaturated carbonyls are potent browning agents and also that dicarbonyls of the glyoxal type make a contribution to browning in the early stages. More recent investigation at elevated temperatures (up to $\pu{120^\circ C}$) also supports the major role of ascorbic acid degradation among other factors of browning [3]: Browning of clarified cashew apple juice was caused by the degradation of ascorbic acid. Changes in the absorbance at $\pu{420 nm}$ and the ascorbic acid levels during thermal treatment of clarified cashew apple juice were described by first order kinetics, showing the correlation between ascorbic acid loss and colour formation (browning). It is concluded in all mentioned sources that formation of intermediate undesirable carbonyl compounds (mostly furfural and 5-hydroxymethylfurfural (5-HMF)) and various products of their polymerization and combination with amino acids (furfural is an aldehyde) upon heating is one of the major factors of browning. Maillard reaction is a chemical reaction between an amino acid and a reducing sugar, usually requiring the addition of heat. The reaction is initiated by a condensation between the free amino group of an amino acid, peptide, or protein and the carbonyl group of a reducing sugar, leading to the formation of Amadori compounds (various furoylmethyl derivatives in case of fruit juices)[4]. 
Caramelization generally occurs at high temperatures (above $\pu{140^\circ C}$), when the juice is dehydrated and has high sugar content. Basically it is a pyrolysis and a final step of decomposition. To sum it up, these are the products of the first 2 processes (ascorbic acid degradation and reaction with amino acids (the Maillard process)) (from [3], referred to as browning markers): Bibliography Bharate, S. S.; Bharate, S. B. J Food Sci Technol 2014, 51 (10), 2271–2288. DOI 10.1007/s13197-012-0718-8. Clegg, K. M. Journal of the Science of Food and Agriculture 1964, 15 (12), 878–885. DOI 10.1002/jsfa.2740151212. Damasceno, L. F.; Fernandes, F. A. N.; Magalhães, M. M. A.; Brito, E. S. Food Chemistry 2008, 106 (1), 172–179. DOI 10.1016/j.foodchem.2007.05.063. del Castillo, M. D.; Corzo, N.; Olano, A. Journal of Agricultural and Food Chemistry 1999, 47 (10), 4388–4390. DOI 10.1021/jf990150x.
{ "domain": "chemistry.stackexchange", "id": 8594, "tags": "organic-chemistry, acid-base, everyday-chemistry" }
Proof that Grover's operator can be written as $D_N=-H_n R_N H_n$
Question: I am interested in showing the validity of the Grover operator. Now there are several ways to show it. One way is with complete induction. It has to be shown that the following relationship applies: $D_N=-H_n\cdot R_N \cdot H_n $ For induction proof I have already formulated the induction assumption and the induction condition. Induction hypothesis: $N=2$ $$ D_N=-H_n\cdot R_N \cdot H_n, \quad D_N\text{ see Eq. 1 and } R_N \text{ see Eq. 2}$$ $$N=2$$ $$D_2 = -H\cdot R_2 \cdot H = -\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}\cdot \begin{pmatrix}-1&0\\0&1\end{pmatrix}\cdot\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} = \begin{pmatrix}0&1\\1&0\end{pmatrix}$$ Thus: $$D_2=\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ Induction Prerequisite: For an arbitrary but fixed $N \in \mathbb N $ the statement applies At the induction step, I'm not sure if that's right. Here I need some support. In the induction step we show the assertion for an $N + 1$. I am not sure if I have to show both $n + 1$ and $N + 1$. At least this is the first idea for an induction step: $$D_{N+1}=-H_{n+1}R_{N+1}H_{n+1}$$ This is an important statement that you probably need for the induction step: $H_{n+1}$ is equal to $H_1\otimes H_n=\frac{1}{\sqrt{2}}\begin{pmatrix}H_n&H_n\\H_n&-H_n\end{pmatrix}$ then you would have first: $$D_{N+1}=-\frac{1}{\sqrt{2}}\begin{pmatrix}H_n&H_n\\H_n&-H_n\end{pmatrix}\cdot R_{N+1}\cdot \frac{1}{\sqrt{2}}\begin{pmatrix}H_n&H_n\\H_n&-H_n\end{pmatrix}$$ I'm stuck with this step. I am grateful for the answers and hope that the question is clear and understandable. 
Appendix: Equation 1: $$D_N = \begin{pmatrix}-1+\frac{2}{N}&\frac{2}{N}&...&\frac{2}{N}\\\frac{2}{N}&-1+\frac{2}{N}&...&\frac{2}{N}\\\vdots&\vdots&\ddots&\vdots\\\frac{2}{N}&\frac{2}{N}&...&-1+\frac{2}{N}\end{pmatrix}$$ Equation 2: $$R_N=\begin{pmatrix}-1&0&...&0\\0&1&\ddots&\vdots\\\cdots&\ddots&\ddots&0\\0&...&0&1\end{pmatrix}$$ Equation 3: $$N=2^n$$ Answer: For the proof by induction there isn't really much more than doing some algebra from the equations you already laid out. Let me write for simplicity $D_N\equiv -\mathbb 1_N + \frac{2}{N} \mathcal I_N$, where $\mathbb 1$ denotes the $N$-dimensional identity matrix, while $\mathcal I_N$ denotes the $N$-dimensional matrix whose every element is 1: $(\mathcal I_N)_{ij}=1\forall i,j$. We therefore want to prove that $$D_N\equiv-\mathbb 1_N + \frac{2}{N} \mathcal I_N=-H_n R_N H_n.\tag A$$ Proof by induction Assume (A) to be true for some $N=2^n$, and try to prove it for $n\to n+1$ (note that $n\to n+1$ corresponds to $N\to 2N$): $$ -\mathbb 1_{2N} + \frac{2}{2N} \mathcal I_{2N}=-H_{n+1} R_{2N} H_{n+1}.$$ Observe that $$H_{n+1}=H_n\otimes H_1, \qquad R_{2N}=\begin{pmatrix}R_N & 0\\ 0&\mathbb 1_N\end{pmatrix}.$$ We therefore have $$-H_{n+1} R_{2N} H_{n+1} = - \frac{1}{2}\begin{pmatrix}H_n & H_n\\ H_n& -H_n\end{pmatrix} \begin{pmatrix}R_N & 0\\ 0&\mathbb 1_N\end{pmatrix} \begin{pmatrix}H_n & H_n\\ H_n& -H_n\end{pmatrix} \\ = -\frac{1}{2}\begin{pmatrix}H_n & H_n\\ H_n& -H_n\end{pmatrix} \begin{pmatrix}R_N H_n & R_N H_n\\ H_n& -H_n\end{pmatrix} =-\frac{1}{2}\begin{pmatrix}H_n R_N H_n + \mathbb 1_N & H_n R_N H_n - \mathbb 1_N\\ H_n R_N H_n - \mathbb 1_N& H_n R_N H_n + \mathbb 1_N\end{pmatrix}. 
$$ By the induction hypothesis, $H_n R_N H_n=\mathbb 1_N-\frac{2}{N}\mathcal I_N$, therefore $$ -\frac{1}{2}\begin{pmatrix}H_n R_N H_n + \mathbb 1_N & H_n R_N H_n - \mathbb 1_N\\ H_n R_N H_n - \mathbb 1_N& H_n R_N H_n + \mathbb 1_N\end{pmatrix} = \begin{pmatrix}-\mathbb 1_N + \mathcal I_N/N & \mathcal I_N/N \\ \mathcal I_N/N & -\mathbb 1_N + \mathcal I_N/N\end{pmatrix}, $$ which is what you wanted to prove. An alternative direct proof Here is another way to prove this fact avoiding induction altogether. We want to prove that $-\mathbb 1_N + \frac{2}{N} \mathcal I_N=-H_n R_N H_n$. Observe that, by definition, the components of $H_n$ read $$(H_n)_{ij}=\frac{1}{\sqrt{2^n}} (-1)^{i_B\odot j_B},$$ where $i_B$ denotes the binary vector whose elements are the components of $i$ in its binary representation, and $i_B\odot j_B\equiv \oplus_k i_k j_k$ (that is, the modulo-$2$ sum of the products of the binary components of $i$ and $j$). On the other hand, $R_N$ is written componentwise as $(R_N)_{ij}=\delta_{ij} (-1)^{\delta_{i,0}}$. The product $H_n R_N H_n$ thus gives $$(H_n R_N H_n)_{ij} = \frac{1}{2^n} \sum_k (-1)^{k\odot(i_B\oplus j_B)} (-1)^{\delta_{k,0}},$$ where $i_B\oplus j_B$ denotes the componentwise bitwise sum of the binary vectors $i_B$ and $j_B$. Observe now that the $(-1)^{\delta_{k,0}}$ term only affects the $k=0$ element of the sum, and it does so by simply changing its sign from $+1$ to $-1$. In other words, it simply subtracts $2$ from the overall sum, so that we can rewrite the whole thing as $$(H_n R_N H_n)_{ij} = \frac{1}{2^n} \left(\sum_k (-1)^{k\odot(i_B\oplus j_B)} - 2\right).$$ To reach the conclusion you now only need to observe that $$\sum_k (-1)^{k\odot(i_B\oplus j_B)}=N\delta_{i,j}.$$
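Both proofs can also be sanity-checked numerically; here is a small self-contained sketch (plain Python, no libraries) that verifies $D_N = -H_n R_N H_n$ for the first few $n$:

```python
from math import sqrt

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def kron(A, B):
    # Kronecker product of two matrices given as nested lists.
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def hadamard(n):
    H1 = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
    H = [[1.0]]
    for _ in range(n):
        H = kron(H, H1)
    return H

def check(n):
    N = 2 ** n
    H = hadamard(n)
    R = [[(-1.0 if i == 0 else 1.0) if i == j else 0.0 for j in range(N)]
         for i in range(N)]                                  # R_N (Eq. 2)
    lhs = [[-x for x in row] for row in matmul(matmul(H, R), H)]  # -H R H
    rhs = [[(2.0 / N) - (1.0 if i == j else 0.0) for j in range(N)]
           for i in range(N)]                                # D_N (Eq. 1)
    return all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
               for i in range(N) for j in range(N))

print([check(n) for n in (1, 2, 3)])   # [True, True, True]
```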
{ "domain": "quantumcomputing.stackexchange", "id": 648, "tags": "quantum-algorithms, mathematics, grovers-algorithm" }
Does a Quantum System Really "Jump" to an Eigenstate When Observed?
Question: Warning: This is a highly hypothetical question. I am bothered by Dirac's description of the system when making a measurement. Without quoting his statement (from The Principles of Quantum Mechanics, Dirac, 1930), he simply states that one of the axioms of quantum mechanics is that regardless of the current state of the system, once we make an observation, the system forcefully "jumps" to an eigenstate of the observable. In particular, it jumps to the eigenstate associated with the specific eigenvalue that we measured. Now, my question goes as follows: let us assume a hypothetical continuous observer; that is, an observer that observes (measures) the system continuously without a pause. Let us assume, furthermore, that we observed the system in one state, and hence ensured that it jumped to an eigenstate associated with the measurement (eigenvalue), according to Dirac. Let us assume, additionally, that we start applying external effects to the system (probably forces) so as to change the system's state, which is admissible by quantum mechanics. Let us assume, now, that we let the continuous observer observe the system while applying the external forces (agents or any physical effect). We have two possibilities at this point: It is either the case that the state will NOT change, or that it will ACTUALLY change. If the state changes while continuously observing, then Dirac's axiom fails miserably and this postulate of quantum mechanics collapses. On the other hand, if the state remains the same while observing, then by making the external applied forces arbitrary, we can generate any possible state of the system. But then, this means that regardless of the state of the system, the measurement will always be the same with this specific observer (measurer).
However, since we know that the system can actually have different measurements when measured at different times and in different states (and thus by different observers), we can say that the measurement has nothing to do with the state of the system. Rather, it is associated with the observer. Hence, by studying the observer itself, we can get the measurement without bothering to measure the system. To save you some time, my photonics professor said that there is no such thing as a continuous observer. His answer did not really convince me, for I am creating a hypothetical situation. Answer: What happens after a measurement is controversial and different accounts of what is happening in reality will give different answers. My answer presupposes that quantum mechanical equations of motion such as the Schrödinger or Dirac equation are accurate descriptions of reality. A measurement is an interaction between the measurement device $M$ and a system $S$ that produces a record: a piece of information about the system that can be copied, discussed, criticised etc. A measurement interaction suppresses interference between the different values of the measured observable: https://arxiv.org/abs/0707.2832 As a result the different values of the measured observable evolve autonomously and to reflect this you update the relative state of the system concerned: https://arxiv.org/abs/2008.02328 There are other theories that provide different equations of motion and so different accounts of measurement such as spontaneous collapse or pilot wave theories: https://arxiv.org/abs/2310.14969 https://arxiv.org/abs/2212.12175
{ "domain": "physics.stackexchange", "id": 99855, "tags": "quantum-mechanics, quantum-information, quantum-interpretations, measurement-problem, quantum-measurements" }
Is there a closed form solution to the Universal Kepler Equation, how about the traditional Kepler equation?
Question: I am under the impression that there is no closed form solution, https://en.wikipedia.org/wiki/Kepler%27s_equation# : Kepler's equation is a transcendental equation because sine is a transcendental function, meaning it cannot be solved for E algebraically. Numerical analysis and series expansions are generally required to evaluate E. Yet this paper, https://www.scirp.org/html/12-4500390_52772.htm : Combining these new expressions of the universal functions and their identities, we establish one biquadratic equation for universal anomaly for all conics; solving this new equation, we have a new exact solution of the present problem for the universal anomaly as a function of the time. The verifying of the universal Kepler’s equation and the traditional forms of Kepler’s equation from this new solution are discussed. seems to claim otherwise. The solution to the traditional form of Kepler's equation is given in equations 40 and 41, and I've checked it using MATLAB: starting with e = 0.3, E = 1, Kepler's equation gives M = 0.7476. Using 40 and 41 to solve the equation given e = 0.3 and M = 0.7476 gives E = 1.0097, which is close :) I have an MS in math, but the math in Tokis' paper is beyond me. Can anyone clarify the situation? Tokis' paper is also available as a pdf at https://www.semanticscholar.org/paper/A-Solution-of-Kepler%E2%80%99s-Equation-Tokis/c09e592ad648ec24e8796c797a6b2ec33eb36f28 Answer: Doesn't look too good to me.
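For comparison, the standard numerical route is straightforward; a minimal Newton-iteration sketch (Python) reproduces the check from the question and recovers E = 1 to machine precision, rather than the 1.0097 obtained from equations 40 and 41:

```python
from math import sin, cos

def solve_kepler(M, e, tol=1e-12):
    """Newton iteration for Kepler's equation E - e*sin(E) = M (elliptic case)."""
    E = M if e < 0.8 else 3.14159   # common choice of starting guess
    for _ in range(50):
        dE = (E - e * sin(E) - M) / (1.0 - e * cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# e = 0.3 and E = 1 give M = 1 - 0.3*sin(1) ~ 0.7476, as in the question;
# inverting numerically recovers E = 1.
M = 1.0 - 0.3 * sin(1.0)
print(solve_kepler(M, 0.3))
```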
{ "domain": "physics.stackexchange", "id": 75582, "tags": "classical-mechanics" }
Suppose a hollow metal sphere filled with helium is dropped in a body of water
Question: What are the conditions that would cause said sphere to sink or float? What if the sphere was full of ice instead? Answer: Simply if the average density $\rho_\text{avg}$ of the sphere + helium (or your horse, for that matter) is less than the density of water $\rho_w$. This is because the weight is \begin{align} mg = \rho_\text{avg} V_\text{object} g \end{align} while the buoyancy force is \begin{align} F = \rho_w V_\text{displaced} g, \end{align} where $V_\text{object}$ is the volume of the object and $V_\text{displaced}$ is the volume of water displaced i.e. the volume of the object under water. Of course, $V_\text{object} \geq V_\text{displaced}$, because you can only displace as much volume of water as the volume you take up anyway. If $\rho_\text{avg} \leq \rho_w$ then there exists some $V_\text{displaced}$ which is $ \leq V_\text{object}$ that satisfies $F = mg$, so the object floats, with a bit of it sticking out of the water, and if $\rho_\text{avg} > \rho_w$ then it just sinks and sinks and sinks... cheers
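A small sketch of the criterion (Python; the radius, wall thickness, and densities are assumed, illustrative numbers):

```python
from math import pi

def average_density(R, t, rho_shell, rho_fill=0.18):
    """Average density of a hollow sphere: shell of outer radius R and
    wall thickness t (metres), filled with gas of density rho_fill (kg/m^3)."""
    v_total = 4.0 / 3.0 * pi * R ** 3
    v_inner = 4.0 / 3.0 * pi * (R - t) ** 3
    mass = rho_shell * (v_total - v_inner) + rho_fill * v_inner
    return mass / v_total

RHO_WATER = 1000.0  # kg/m^3
# Assumed example: a 10 cm steel sphere (7850 kg/m^3) filled with helium.
thin = average_density(0.10, 0.001, 7850.0)   # 1 mm wall
thick = average_density(0.10, 0.005, 7850.0)  # 5 mm wall
print(thin < RHO_WATER, thick > RHO_WATER)    # thin wall floats, thick sinks
```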
{ "domain": "physics.stackexchange", "id": 6061, "tags": "mass, density" }
Convert elementtree to dict
Question: Just needed a quick way to convert an elementtree element to a dict. I don't care if attributes/elements clash in name, nor namespaces. The XML files are small enough. If an element has multiple children which have the same name, create a list out of them: def elementtree_to_dict(element): d = dict() if hasattr(element, 'text') and element.text is not None: d['text'] = element.text d.update(element.items()) # element's attributes for c in list(element): # element's children if c.tag not in d: d[c.tag] = elementtree_to_dict(c) # an element with the same tag was already in the dict else: # if it's not a list already, convert it to a list and append if not isinstance(d[c.tag], list): d[c.tag] = [d[c.tag], elementtree_to_dict(c)] # append to the list else: d[c.tag].append(elementtree_to_dict(c)) return d Thoughts? I'm particularly un-fond of the not isinstance part of the last if. Answer: def elementtree_to_dict(element): d = dict() I'd avoid the name d; it's not very helpful. if hasattr(element, 'text') and element.text is not None: d['text'] = element.text getattr has a third parameter, default. That should allow you to simplify this piece of code a bit. d.update(element.items()) # element's attributes for c in list(element): # element's children The list does nothing, except waste memory.
<foo> <bar id="42"/> </foo> Becomes {"bar" : {"id": 42}} Whereas <foo> <bar id="42"/> <bar id="36"/> </foo> Becomes {"bar" : [{"id" : 42}, {"id": 36}]} The XML schema is the same, but the python "schema" will be different. It'll be annoying writing code that correctly handles both of these cases. Having said that, here's my cleanup of your code: def elementtree_to_dict(element): node = dict() text = getattr(element, 'text', None) if text is not None: node['text'] = text node.update(element.items()) # element's attributes child_nodes = {} for child in element: # element's children child_nodes.setdefault(child.tag, []).append( elementtree_to_dict(child) ) # convert all single-element lists into non-lists for key, value in child_nodes.items(): if len(value) == 1: child_nodes[key] = value[0] node.update(child_nodes.items()) return node
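A self-contained version along the lines of the cleanup above (grouping children by `child.tag` and unwrapping single-element lists at the end) behaves as described; note that ElementTree attribute values come back as strings, so the ids are '42', not 42:

```python
import xml.etree.ElementTree as ET

def etree_to_dict(element):
    node = {}
    text = (element.text or '').strip()
    if text:
        node['text'] = text
    node.update(element.items())          # attributes
    children = {}
    for child in element:                 # group children by tag
        children.setdefault(child.tag, []).append(etree_to_dict(child))
    for tag, value in children.items():   # unwrap single-element lists
        node[tag] = value[0] if len(value) == 1 else value
    return node

one = etree_to_dict(ET.fromstring('<foo><bar id="42"/></foo>'))
two = etree_to_dict(ET.fromstring('<foo><bar id="42"/><bar id="36"/></foo>'))
print(one)   # {'bar': {'id': '42'}}
print(two)   # {'bar': [{'id': '42'}, {'id': '36'}]}
```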
{ "domain": "codereview.stackexchange", "id": 1541, "tags": "python, xml" }
Subset of a bipartite graph with maximal number of minimal unmatched vertices
Question: Given a bipartite graph $G = (U \sqcup V, E)$ with sets of vertices $U$ and $V$ and edge set $E \subseteq U \times V$, a matching $M$ is a subset of $E$ whose edges have no common vertices: for all $(u, v)$ and $(u', v') \in M$, $u = u'$ or $v = v'$ implies $(u, v) = (u', v')$. A maximum matching is a matching containing the maximum number of edges. It is immediate that a maximum matching is a matching achieving the minimal number of unmatched vertices, where I say that $u \in U$ is unmatched in $M$ if there is no $(u', v') \in M$ such that $u = u'$, and likewise for $v \in V$. Hence, the minimal number of unmatched vertices in $G$ over all possible matchings of $G$, that I write $f(G)$, can be computed by finding a maximum matching, which can be done in polynomial time with, e.g., the Hopcroft-Karp algorithm. I now consider the following problem: from a bipartite graph $G = (U \sqcup V, E)$, what is the maximum, over all subsets $U'$ of $U$, of $f(G_{|U'})$? By $G_{|U'}$ I mean the bipartite graph which is the restriction of $G$ to $U'$, namely $G_{|U'} = (U' \sqcup V, \{(u, v) \in E \mid u \in U'\})$. Is this problem NP-hard, or can it be solved in PTIME? On the one hand it seems related to set cover or exact cover which are NP-hard, but on the other hand the subproblem of maximum matching is PTIME and maybe there is a clever way to identify what is the worst subset. To motivate this problem, here is a rephrasing in terms of assignments of agents to tasks. $U$ is a set of tasks, $V$ a set of agents, and $E$ indicates which agents can perform which tasks. A matching is a way of assigning tasks to agents such that each agent does at most one task and each task is done by at most one agent. The cost measure is the number of unmatched vertices, namely the number of undone tasks plus the number of unoccupied agents. 
I want to know what is my worst possible cost in this sense, over all subsets of the tasks (think of $U$ as a set of possible tasks from which an arbitrary subset will be requested, and $V$ as a fixed set of agents that I cannot adjust but which I can allocate freely once a subset has been requested). To see why this problem is not trivial, observe that choosing the empty subset $\emptyset \subseteq U$ yields a cost of $|V|$ (the cost of leaving all agents unoccupied), the complete subset $U \subseteq U$ yields a certain cost and may be the worst if e.g. $|U|$ is much larger than $|V|$, but in general my best solution may be to choose a subset of $U$ which retains the tasks that can be done by only a few agents (so that many of them will have to remain undone as each of those agents can do only one task) but would not retain the tasks that can be done by otherwise unoccupied agents (so that those agents remain unoccupied and contribute to the cost). If this problem is in PTIME, I am also interested to know if the following weighted version is also PTIME: each vertex and each edge has a weight, the cost of a matching is the sum of the weights of the unmatched vertices plus the sum of the weights of the retained edges, and I want again to find the subset of $U$ with the worst minimal cost in this sense. In the vocabulary of the assignment problem, this means that each task and each agent has an individual cost of remaining undone or unoccupied, and assigning a task to an agent is not free anymore but carries a certain cost indicated on the edge. Answer: The problem is in $P$ by reducing the problem to submodular minimization. Let $v(A)$ be defined as the size of the maximum matching in $G_{|A}$. $v$ is submodular. One way to see this is to convert the problem to a max flow problem, and use the fact that the set function $c(U)=$ max flow from $U$ to $t$ defined over $V\setminus \{t\}$ is submodular. I will use $f(A)$ to mean the $f(G_{|A})$ in your notation.
$f(A) = |A| + |V| - 2v(A)$. $|V|$ is a constant, so we can instead maximize $f(A)-|V| = |A| - 2v(A)$, which is the same as minimizing $g(A) = 2v(A)-|A|$. $g$ is a submodular function (the difference of a submodular and a modular function), and you can apply any polynomial-time submodular function minimization algorithm. This can be extended to the weighted case if the weighted analogue of $v(A)$ is submodular.
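For small instances the setup can be sanity-checked by brute force; here is a sketch (Python; the example graph is made up) that enumerates all subsets $A \subseteq U$ and evaluates $f(A) = |A| + |V| - 2v(A)$ with a simple augmenting-path matching:

```python
from itertools import combinations

def max_matching(adj, left):
    # Kuhn's augmenting-path algorithm; adj[u] lists the right-side
    # neighbours of left vertex u.
    match = {}                       # right vertex -> matched left vertex
    def augment(u, seen):
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in left)

def worst_cost(adj, U, V):
    # max over all subsets A of U of f(A) = |A| + |V| - 2 v(A).
    return max(len(A) + len(V) - 2 * max_matching(adj, A)
               for r in range(len(U) + 1) for A in combinations(U, r))

# Tasks 1 and 2 can only be done by agent 'a'; task 3 only by agent 'b'.
adj = {1: ['a'], 2: ['a'], 3: ['b']}
print(worst_cost(adj, [1, 2, 3], ['a', 'b']))
# -> 2, achieved e.g. by A = {1, 2}: one task stays undone and agent 'b'
#    stays idle (the empty subset also costs |V| = 2).
```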
{ "domain": "cstheory.stackexchange", "id": 3574, "tags": "ds.algorithms, graph-algorithms, co.combinatorics, optimization, matching" }
Is this grammar right associative (why?)
Question: Given a grammar as follows: $\left<stmt\right> \rightarrow \left<id\right> = \left<expr\right>$ $\left<const\right> \rightarrow A | B$ $\left<expr\right> \rightarrow \left<term\right> \times \left<expr\right> | \left<term\right>$ $\left<term\right> \rightarrow \left<factor\right> + \left<term\right> | \left<factor\right>$ $\left<factor\right> \rightarrow (\left<expr\right>) | \left<id\right>$ I think this grammar is right associative because it expands on the right. Where I am confused is that it can be expanded using other non-terminals on the left. For example, $\left<expr\right> \rightarrow \left<term\right> \times \left<expr\right> | \left<term\right>$ could be expanded via $\left<term\right>$ on the left, I think. Does this make the associativity ambiguous? Assuming it could be ambiguous, how would I fix this to make the grammar completely right associative? Answer: $\left<expr\right> \to \left<term\right>×\left<expr\right> | \left<term\right>$ could certainly expand $\left<term\right>$. But the operator in that expansion (if there is one) is certainly not $×$; it would have to be $+$. So associativity doesn't apply, since associativity is only about expressions involving two of the same operator. So yes, in both operator productions in that grammar, the operator is right-associative.
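A tiny recursive-descent sketch (Python; terms reduced to single identifiers, with 'x' standing in for ×) makes the right associativity of the expr production visible in the parse tree:

```python
def parse_expr(tokens, pos=0):
    # expr -> term 'x' expr | term   (terms simplified to bare identifiers)
    left, pos = tokens[pos], pos + 1
    if pos < len(tokens) and tokens[pos] == 'x':
        right, pos = parse_expr(tokens, pos + 1)   # recurse on the right
        return ('x', left, right), pos
    return left, pos

tree, _ = parse_expr(['A', 'x', 'B', 'x', 'C'])
print(tree)   # ('x', 'A', ('x', 'B', 'C')): A x (B x C), right-associative
```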
{ "domain": "cs.stackexchange", "id": 8364, "tags": "formal-grammars, programming-languages" }
Planning Scene/C++ API Tutorial missing part
Question: Hi, as I was reading this tutorial (sorry but I don't have enough karma to post a real link): xxxx://docs.ros.org/api/pr2_moveit_tutorials/html/planning/src/doc/planning_scene_tutorial.html I saw that the right way to maintain the current planning scene is to use the PlanningSceneMonitor: The PlanningSceneMonitor is the recommended method to create and maintain the current planning scene (and is discussed in detail in the next tutorial) But I haven't found any reference to that class in any of the other MoveIt tutorials. Maybe I'm looking in the wrong place? Tx Originally posted by Wedontplay on ROS Answers with karma: 42 on 2014-07-11 Post score: 0 Answer: Take a look at the answer to this question: http://answers.ros.org/question/157716/obstacles-management-in-moveit/ That contains code snippets too. I hope it helps. That tutorial, according to the answer, was never written. Originally posted by McMurdo with karma: 1247 on 2014-07-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 18588, "tags": "ros, moveit, moveit-tutorial" }
Feature Selection - Conditional Entropy
Question: I've developed an algorithm to compute conditional entropy for feature selection in text classification. I'm following the formula in Machine Learning from Text by Charu C. Aggarwal (5.2.2). The author mentions that conditional entropy values are between (0, log(number of classes)), which in my case is (0, 0.6931472). The author also mentions that features with the largest values can be removed, but he doesn't give further information about the criteria to define 'largest' (is it only the max value of entropy or a set of the largest entropy values?). Have you guys ever applied conditional entropy for feature selection? If so, based on results, what criteria were used to define the features to be removed? Here is a summary of my conditional entropy results: E.tj. Min. :0.5701 1st Qu.:0.6562 Median :0.6563 Mean :0.6558 3rd Qu.:0.6564 Max. :0.6564 Answer: Individual feature selection methods assign a numerical value to every feature so that features can be ranked according to this value. The calculated value is chosen to represent how much the feature contributes to knowing the label/response variable: common choices are conditional entropy, but also information gain or correlation. The actual values assigned to the features are not really useful on their own; what matters is the ordering of the features according to this value. Thus the standard method for selecting the features is not to choose a particular threshold on the value, but simply to choose the number of features one wants to obtain. Example: in a text classification task, there are 1000 documents and a vocabulary of 20000 unique words as candidate features. Using all the words would certainly cause overfitting, so we decide to use only 100 words as features. We can calculate the conditional entropy of every word with respect to the label, and then select the bottom 100 words according to the corresponding ranking as features (the 19900 other words are ignored).
Since individual feature selection is very efficient, it's often possible (and a good idea) to try a range of values as the number of features, and train/test the model for each of these values. This way one can experimentally determine the optimal number of features (the one which maximizes performance on the data). Note that this is a form of hyper-parameter tuning, and therefore one has to use a validation set for the tuning stage, and then the final model (with the selected optimal number of features) should be applied on a fresh test set.
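The rank-then-truncate procedure described above can be sketched in plain Python. The toy document-term matrix and the cutoff k below are illustrative assumptions, not the asker's data:

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(feature, labels):
    """H(labels | feature) = sum over x of p(x) * H(labels | feature = x)."""
    n = len(labels)
    groups = defaultdict(list)
    for x, y in zip(feature, labels):
        groups[x].append(y)
    h = 0.0
    for ys in groups.values():
        p_x = len(ys) / n
        # Inner sum is -H(labels | feature = x), hence the subtraction.
        h -= p_x * sum((c / len(ys)) * math.log(c / len(ys))
                       for c in Counter(ys).values())
    return h

# Toy corpus: rows are documents, columns are word-presence features.
X = [[1, 0, 1],
     [1, 0, 0],
     [0, 1, 1],
     [0, 1, 0]]
y = [0, 0, 1, 1]

# Rank by H(y | feature) and keep the k lowest-entropy (most informative)
# features; no absolute threshold is involved, only the ordering.
k = 2
scores = [conditional_entropy([row[j] for row in X], y) for j in range(len(X[0]))]
selected = sorted(range(len(scores)), key=scores.__getitem__)[:k]
```

Features 0 and 1 perfectly predict the label (entropy 0), while feature 2 is uninformative (entropy log 2, the upper bound for two classes), so the truncation keeps the first two.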
{ "domain": "datascience.stackexchange", "id": 8315, "tags": "feature-selection, text-mining, feature-extraction" }
Right ascension of a star and local sidereal time
Question: Suppose I'd like to observe a star like Betelgeuse, with coordinates right ascension 05h 55m 10.30536s and declination +07° 24′ 25.4304″. If my local sidereal time is 05h 55m 10.30536s, does that mean Betelgeuse has crossed my meridian? And if I want to track it by right ascension on the telescope, can I compute LST - RA of the star, take this number, and set it on the RA axis of the telescope to see the star? Answer: If your local sidereal time is 05h 55m 10.30536s, then Betelgeuse is on the meridian. The local sidereal time is always equal to the right ascension of a point on the meridian.
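As a quick numerical restatement in plain Python (converting the sexagesimal RA to decimal hours): the quantity LST - RA is the hour angle, and it is zero exactly when the object sits on the meridian:

```python
def hour_angle(lst_hours, ra_hours):
    """Hour angle in hours, wrapped into [-12, 12): HA = LST - RA."""
    ha = (lst_hours - ra_hours) % 24.0
    return ha - 24.0 if ha >= 12.0 else ha

ra_betelgeuse = 5 + 55/60 + 10.30536/3600   # 05h 55m 10.30536s
lst = ra_betelgeuse                          # the moment described in the question

print(hour_angle(lst, ra_betelgeuse))        # 0.0 -> on the meridian
```

A positive hour angle means the object has already crossed the meridian (it is to the west); a negative one means it has yet to cross.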
{ "domain": "astronomy.stackexchange", "id": 6843, "tags": "observational-astronomy, telescope, amateur-observing" }
Does plucking a guitar string create a standing wave?
Question: About two weeks ago there was a mock test in Korea, and a physics question asked if a plucked guitar (it was actually a gayageum, a traditional instrument, but I'll just call it a guitar for convenience) string creates a standing wave. I've learned in school that this is true, and the answer was true as well. But today my physics teacher said that this is actually false. Because a standing wave is caused by two identical waves traveling in opposite directions, a guitar string cannot create a standing wave. So a plucked guitar string only makes a vibration, not a standing wave. But this is also mentioned in school textbooks. On the page explaining standing waves, there's a picture of a vibrating string and the caption says, "A string tied at both ends makes a standing wave, causing resonance." I am confused. Does plucking a guitar string make a standing wave on the string? Or is this just a vibration? Answer: Yes, plucking a guitar string does create standing waves, but... No, plucking a guitar string does not create a standing wave, as the sum of standing waves is in general not a standing wave (thanks to Ben Crowell for pointing this out), since a standing wave must have a stationary spatial dependence and a well-defined frequency: $$ y(x,t) \propto \sin(2\pi x/\lambda)\cos(\omega t).$$ The initial perturbation is not sinusoidal, but instead contains a plethora of frequencies, of which only remain, after a transient, the resonant ones - which correspond to some of the possible standing waves. It's the sum of those that composes the vibration you'll observe. If you want to model each standing wave as a pair of counter-propagating waves, those come from the reflections at the string's ends. For more details see this answer and, especially, the answers to the question Why do harmonics occur when you pluck a string?.
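The "plethora of frequencies" can be made concrete with a small Python sketch. The setup below is an illustrative assumption (unit-length string, unit pluck height, ideal triangular initial shape): the triangle decomposes into sine modes with the textbook coefficients b_n = 2h sin(n pi p) / (n^2 pi^2 p (1 - p)), and the observed motion is the superposition of those standing waves:

```python
import math

def pluck_coefficients(p, h=1.0, n_modes=200):
    """Sine-series coefficients of a unit-length string displaced into a
    triangle of height h whose apex sits at fractional position p."""
    return [2.0 * h * math.sin(n * math.pi * p)
            / (math.pi**2 * n**2 * p * (1.0 - p))
            for n in range(1, n_modes + 1)]

def displacement(x, t, coeffs, c=1.0):
    """Superpose the standing-wave modes y_n = b_n sin(n pi x) cos(n pi c t)."""
    return sum(b * math.sin(n * math.pi * x) * math.cos(n * math.pi * c * t)
               for n, b in enumerate(coeffs, start=1))

coeffs = pluck_coefficients(p=0.5)
# Plucking at the midpoint excites no even harmonics, and at t = 0 the
# mode sum reproduces the initial triangular shape of the string.
```

No single term of the sum describes the motion; it is the whole superposition that one sees and hears.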
{ "domain": "physics.stackexchange", "id": 49964, "tags": "waves, everyday-life, string, vibrations" }
Byte addressable vs Word addressable
Question: I am trying to understand the difference between byte addressing and word addressing. A 4-way set-associative cache memory unit with a capacity of 16 KB is built using a block size of 8 words. The word length is 32 bits. The size of the physical address space is 4 GB. No of sets in the cache $= (16 * 1024) / (4 * 8 * 4) = 2^7$ If word addressing is used : Block offset $= 3 \ bits$ Since PAS is $4 \ GB$, total no of addresses = $2^{32} / 2^2 = 2^{30}$ So, total address bits $= 30\ bits$ Address structure : Tag bits : $20 \ bits$ Set bits : $7 \ bits$ Block offset bits : $3 \ bits$ Now, suppose the CPU wants to access the 3rd byte of a particular word. The cache controller will use the $7 \ bits$ set field to index into a set and then compare the higher $20\ bits$ tag field with the tags of all $4$ blocks in the set. If a match is found, a cache hit occurs and the lower $3 \ bits$ block offset is used to select one of the $8$ words in the block and place it in a general-purpose register. The CPU then extracts the 3rd byte from the word and performs the operation. If no tag matches, a cache miss occurs, a memory read signal is sent, and, due to spatial locality of reference, the block containing the word is transferred into the cache. If the CPU is byte addressable: Total address bits $=32$ Address Structure : Tag bits : $20 \ bits$ Set bits : $7 \ bits$ Block offset bits : $5 \ bits$ If the CPU wants to access the 3rd byte of a word: Same as in Step 1 of the word-addressable case, but the CPU can now directly address the 3rd byte of the word, using the lower $2 \ bits$ byte offset. However, I'm confused how that would happen. Since the CPU register has a width of 1 word, one word out of the 8 words in the block will be transferred to the register, just as in the word-addressable case. But how would the "byte extracting" step be easier here? And why do we call it byte addressing if we are still actually addressing a word? Same as in Step 2 of word-addressing. 
A block of data will be transferred from memory to the cache in case of a cache miss. Also, this answer says that physical memory is always byte addressable. Now, what is the difference between the addressability of the memory and the addressability of the CPU architecture? Answer: Word addressing means that the processor's address bus has fewer lines than would be needed to address every individual byte. Let's say we have a 4 byte word (32 bit address space). If this machine is byte addressable, then the address bus of the CPU will have 32 lines, which enables it to access each byte in memory. If this machine is word addressable, then the address bus of the CPU will have 30 lines ($32 - \log_2 4 = 30$), which enables it to access memory ONLY in words/chunks of 4 bytes, and only from addresses that are a multiple of the word size. Now if you ask the CPU to fetch a byte from a particular address, it will first drop the 2 least significant bits of the address (by drop I mean overwrite them with 0s), fetch a word from the resulting address, and return a byte using the 2 least significant bits as an offset within the fetched word. This causes memory access time to increase, since the CPU has to spend more time modifying the address and processing the fetched word. But it also helps reduce hardware cost, since the complexity of the circuits is reduced due to the reduction in address bus lines. This overhead never arises in a byte addressable machine, hence 'byte extracting' is easier.
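The bit manipulation described above is easy to check in Python, using the field widths from the question's byte-addressable case (the concrete addresses below are made up for illustration):

```python
# Field widths from the question's byte-addressable split (32-bit address).
OFFSET_BITS = 5   # 8 words x 4 bytes = 32 bytes per block
SET_BITS = 7      # 128 sets
WORD_BITS = 2     # byte offset within one 4-byte word

def split_address(addr):
    """Split a 32-bit byte address into (tag, set index, block offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    set_index = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_index, offset

def word_fetch(byte_addr):
    """A word-addressable bus drops the low WORD_BITS bits, so the fetch is
    always word-aligned; the dropped bits then select the byte afterwards."""
    aligned = byte_addr & ~((1 << WORD_BITS) - 1)
    byte_in_word = byte_addr & ((1 << WORD_BITS) - 1)
    return aligned, byte_in_word

# CPU asked for byte 0x1003: fetch the word at 0x1000, take byte 3 of it.
```

Note that both addressing schemes fetch the same word; the difference is only whether the CPU itself or the memory interface is responsible for the low bits.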
{ "domain": "cs.stackexchange", "id": 14618, "tags": "memory-access, cache" }
Proper Flow Meter Installation
Question: I have a 12" flow meter that uses a float on a stem and whenever gas is supplied, the initial flow spikes the float to the top and gets wedged. The flow meter is plumbed downstream of a control valve and ball valve that is being used to regulate the flow at 40 cfm. The supply side is helium that can range between 60 - 80 psig and the outlet of the meter is plumbed to a chamber at half atmosphere, ~ 400 Torr. I'm aware that a ball valve should not be used to fine tune flow, that's why I need some clarification to properly correct this issue. I attached an image to help better explain my current setup. Answer: The flow meter needs to be installed such that it is always under pressure (in normal operation). That is it should be after the regulator but before the on/off valve. That way you will get no surge of gas entering an empty pipe and pushing up the float. It is quite a common problem.
{ "domain": "engineering.stackexchange", "id": 2888, "tags": "flow-control" }
What is the meaning of term Variance in Machine Learning Model?
Question: I am familiar with the terms high bias and high variance and their effect on a model. Basically, your model has high variance when it is too complex and sensitive even to outliers. But recently I was asked the meaning of the term variance in a machine learning model in an interview. I would like to know what exactly variance means in an ML model and how it gets introduced into your model. I would really appreciate it if someone could explain this with an example. Answer: It is pretty much what you said. Formally you can say: Variance, in the context of Machine Learning, is a type of error that occurs due to a model's sensitivity to small fluctuations in the training set. High variance would cause an algorithm to model the noise in the training set. This is most commonly referred to as overfitting. When discussing variance in Machine Learning, we also refer to bias. Bias, in the context of Machine Learning, is a type of error that occurs due to erroneous assumptions in the learning algorithm. High bias would cause an algorithm to miss relevant relations between the input features and the target outputs. This is sometimes referred to as underfitting. These terms can be decomposed from the expected error of the trained model, given different samples drawn from a training distribution. See here for a brief mathematical explanation of where the terms come from, and how to formally measure variance in the model. Relationship between bias and variance: In most cases, attempting to minimize one of these two errors would lead to increasing the other. Thus the two are usually seen as a trade-off. Cause of high bias/variance in ML: The most common factor that determines the bias/variance of a model is its capacity (think of this as how complex the model is). Low capacity models (e.g. linear regression) might miss relevant relations between the features and targets, causing them to have high bias. This is evident in the left figure above. 
On the other hand, high capacity models (e.g. high-degree polynomial regression, neural networks with many parameters) might model some of the noise, along with any relevant relations in the training set, causing them to have high variance, as seen in the right figure above. How to reduce the variance in a model? The easiest and most common way of reducing the variance in a ML model is by applying techniques that limit its effective capacity, i.e. regularization. The most common forms of regularization are parameter norm penalties, which limit the parameter updates during the training phase; early stopping, which cuts the training short; pruning for tree-based algorithms; dropout for neural networks, etc. Can a model have both low bias and low variance? Yes. Likewise a model can have both high bias and high variance, as is illustrated in the figure below. How can we achieve both low bias and low variance? In practice the most common methodology is: Select an algorithm with a high enough capacity to sufficiently model the problem. In this stage we want to minimize the bias, so we aren't concerned about the variance yet. Regularize the model above to minimize its variance.
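The definition of variance as sensitivity to the particular training sample can be simulated directly. Below is a pure-Python sketch (a toy setup of my own, not from the answer) comparing a high-bias predictor, the global label mean, which ignores the input entirely, with a high-variance one, 1-nearest-neighbour, across many resampled training sets:

```python
import random

random.seed(0)

def true_f(x):
    return x * x                      # noiseless ground truth

X0, NOISE = 0.9, 0.1                  # query point and label noise level

def training_set(n=20):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return xs, [true_f(x) + random.gauss(0, NOISE) for x in xs]

def predict_mean(xs, ys, x0):
    # High bias: always answers the label average, regardless of x0.
    return sum(ys) / len(ys)

def predict_1nn(xs, ys, x0):
    # High variance: echoes the single (noisy) nearest training point.
    return ys[min(range(len(xs)), key=lambda j: abs(xs[j] - x0))]

preds = {"mean": [], "1nn": []}
for _ in range(500):                  # 500 independent training samples
    xs, ys = training_set()
    preds["mean"].append(predict_mean(xs, ys, X0))
    preds["1nn"].append(predict_1nn(xs, ys, X0))

def bias_sq(vs):
    return (sum(vs) / len(vs) - true_f(X0)) ** 2

def variance(vs):
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)
```

Across the resampled training sets, the mean predictor's answers barely move (low variance) but sit far from the truth (high bias), while the 1-NN answers scatter widely around roughly the right value, which is the trade-off the answer describes.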
{ "domain": "datascience.stackexchange", "id": 4242, "tags": "machine-learning, variance" }
Divergence of Electric Field: Moving Del Operator Inside Integral
Question: I'm reading Griffiths E&M book (4th edition), and on page 71 he starts with the expression for the electric field of a volume charge distribution: $$ \vec{E}(\vec{r}) = \frac{1}{4\pi \epsilon_0}\int_V \frac{\vec{r}-\vec{r}'}{\lvert \vec{r}-\vec{r}'\rvert^{3}} \rho(\vec{r}') d\tau' $$ He then takes the divergence of this expression: $$ \nabla \cdot\vec{E} = \frac{1}{4\pi\epsilon_0}\nabla\cdot\int_V \frac{\vec{r}-\vec{r}'}{\lvert \vec{r}-\vec{r}'\rvert^{3}} \rho(\vec{r}') d\tau' $$ which he then rewrites as (actually, he doesn't include the intermediate step I wrote above; he just immediately writes) $$ \nabla \cdot\vec{E} = \frac{1}{4\pi\epsilon_0}\int_V \left(\nabla\cdot\frac{\vec{r}-\vec{r}'}{\lvert \vec{r}-\vec{r}'\rvert^{3}}\right) \rho(\vec{r}') d\tau' $$ I understand that $\rho$ is independent of $\vec{r}$ and so the $\nabla$ doesn't touch it, but I don't know how to justify moving the del operator from outside to inside the integral. How do I know (or how can I show) that this is allowed? I guess it's related to the Leibniz integral rule. Answer: You can consider each of the partial derivatives on the divergence operator one at a time, and apply the Leibniz rule. The integrals all run from $-\infty$ to $\infty$ so the bounds of integration are constant, and the boundary terms drop out. More heuristically, you can also note (as knzhou points out) that the divergence operator distributes across sums, and an integral is like a sum of many terms.
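Spelled out for one component, the step being justified is the constant-bounds case of the Leibniz rule: differentiation under the integral sign, applied once per partial derivative in the divergence.

```latex
% Constant bounds of integration, so no boundary terms appear:
\frac{\partial}{\partial x}\int_V f(\vec{r},\vec{r}')\,d\tau'
  = \int_V \frac{\partial f(\vec{r},\vec{r}')}{\partial x}\,d\tau'
% Summing the three such identities over x, y, z gives
\nabla\cdot\int_V \vec{F}(\vec{r},\vec{r}')\,d\tau'
  = \int_V \nabla\cdot\vec{F}(\vec{r},\vec{r}')\,d\tau'
```

Here the primed variable is the integration variable and the unprimed one is a parameter, which is exactly the situation in the field expression above.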
{ "domain": "physics.stackexchange", "id": 33563, "tags": "mathematical-physics, electric-fields, gauss-law, integration, vector-fields" }
Change in entropy of the universe due to a carnot heat engine
Question: If you consider a Carnot engine operating between a hot reservoir, $T_b$, and a cold reservoir, $T_s$: I was asked to find an expression for the total change in entropy of the universe (part D). Exact details: I am not sure how I would go about this. The only fact I know is that the change in entropy over 1 cycle for the Carnot engine is 0. I said that based on the second law of thermodynamics: $\Delta S \geqslant 0. $ $\Delta S_{universe}+\Delta S_{T_s \ res} + \Delta S_{T_b \ res} + \Delta S_{probe} \geqslant 0$ (note: the probe is the Carnot engine). I worked out that the change in entropy of the hot and cold reservoir cancels, and by saying that the change in entropy of a Carnot cycle is 0, I get the obvious result that: $\Delta S_{universe} \geqslant 0 $. Answer: I can see what is happening here. Entropy is being generated within the block of radioactive material as a result of the radioactive decay reaction. What they expect you to assume is that this generated entropy is all exactly removed from the block by transferring heat to the engine. So the entropy change of the block is assumed to be exactly zero. If we let $Q_H$ represent the total heat transferred from the block to the engine and $Q_C$ represent the total heat transferred from the engine to the cold reservoir, then the changes in entropy of the hot block, the engine, and cold reservoir are: $$\Delta S_{block}=-\frac{Q_H}{T_b}+\sigma_b=-\frac{\int_0^{\infty}{P_{in}(t)dt}}{T_b}+\sigma_b=-\frac{P_0\tau}{T_b}+\sigma_b=0$$ $$\Delta S_{engine}=\frac{Q_H}{T_b}-\frac{Q_C}{T_s}=0$$ and $$\Delta S_{cold}=+\frac{Q_C}{T_s}$$ where $\sigma_b$ is the total entropy generated within the block as a result of the nuclear reaction. 
If we add these three entropy changes together, we obtain the entropy change of the universe: $$\Delta S_{universe}=\Delta S_{block}+\Delta S_{engine}+\Delta S_{cold}=\frac{Q_C}{T_s}=\frac{Q_H}{T_b}=\sigma_b=\frac{P_0\tau}{T_b}$$ So this all depends on the assumption that the entropy removed from the hot block is exactly compensated by the entropy generated by the nuclear reaction in the hot block, such that the change in entropy of the hot block is zero.
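The bookkeeping in the answer can be restated as a short numeric check (the specific values of $P_0$, $\tau$, $T_b$, $T_s$ below are arbitrary illustrative numbers): every expression in the final chain of equalities agrees.

```python
def entropy_changes(P0, tau, T_b, T_s):
    """Entropy bookkeeping for block / Carnot engine / cold reservoir,
    under the answer's assumption that dS_block = 0 (the generated
    entropy sigma_b exactly offsets the entropy removed as heat)."""
    Q_H = P0 * tau                        # total heat drawn from the block
    Q_C = Q_H * T_s / T_b                 # Carnot engine: Q_C/T_s = Q_H/T_b
    dS_block = -Q_H / T_b + Q_H / T_b     # sigma_b = Q_H/T_b cancels -Q_H/T_b
    dS_engine = Q_H / T_b - Q_C / T_s     # zero over complete cycles
    dS_cold = Q_C / T_s
    return dS_block + dS_engine + dS_cold

# e.g. P0 = 100 W, tau = 10 s, T_b = 500 K, T_s = 300 K
# gives dS_universe = P0 * tau / T_b = 2 J/K
```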
{ "domain": "physics.stackexchange", "id": 63835, "tags": "homework-and-exercises, thermodynamics" }
How to find a good saw function for PRNG?
Question: As part of an assignment I have to write a PRNG using a sine (or any other trigonometric) function and a saw function, which I'm struggling a bit with. So how would you find a good saw function with a roughly uniform distribution? This is what it is supposed to look like: There are two functions I've got now, y=(seed/1000*state*49) mod 101 and z=(y-1 % 10) / 10), and the distribution looks like this (I'm not fine with that 1 and 0 distribution): Answer: I think this is an effect of the rounding in your implementation or in the histogram. Let's say your histogram contains 100 columns. Each represents a width of 0.01. The column at 10 shows how many items fall between 9.995 and 10.005. The column at 0 shows how many items fall between -0.005 and 0.005. However, there are no items below 0, so there is nothing between -0.005 and 0. This column will only show half as many items as the other columns.
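The half-width-bin effect the answer describes is easy to reproduce in Python with synthetic uniform data (not the asker's generator): rounding to the nearest bin centre gives the two edge bins only half an interval each, so they collect roughly half the counts.

```python
import random

random.seed(1)
n = 100_000

# Uniform values on [0, 10], binned by rounding to the nearest integer.
# Bin 5 collects [4.5, 5.5), a full unit, but bin 0 only [0, 0.5) and
# bin 10 only [9.5, 10], so the edge bins end up with ~half the counts.
counts = [0] * 11
for _ in range(n):
    counts[round(random.uniform(0.0, 10.0))] += 1
```

The generator here is perfectly uniform, yet the histogram shows the same dips at 0 and 10 as in the question, which is the point: the artifact lives in the binning, not in the PRNG.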
{ "domain": "cs.stackexchange", "id": 8928, "tags": "randomness, statistics, numerical-algorithms, random-number-generator" }
How to publish a message from a rosserial_arduino subscriber
Question: How do you make a subscriber in a rosserial_arduino node that can publish a message? I'm trying to make a node that listens for a "force_sensors" message, and in response, publishes sensor readings. This is my Arduino Uno sketch: #include <ros.h> #include <std_msgs/Empty.h> #include <std_msgs/Bool.h> #include <std_msgs/Byte.h> #include <std_msgs/Int64.h> #include <std_msgs/Float32.h> #include <std_msgs/Float64MultiArray.h> #include "BooleanSensor.h" BooleanSensor sensor; // If true, all sensors will push their current readings to the host, even if they haven't changed // since last polling. bool force_sensors = false; ros::NodeHandle nh; std_msgs::Bool bool_msg; ros::Publisher sensor_publisher = ros::Publisher("sensor", &bool_msg); void on_force_sensors(const std_msgs::Empty& msg) { force_sensors = true; } ros::Subscriber<std_msgs::Empty> force_sensors_sub("force_sensors", &on_force_sensors); void on_toggle_led(const std_msgs::Empty& msg) { force_sensors = true; digitalWrite(STATUS_LED_PIN, HIGH-digitalRead(STATUS_LED_PIN)); // blink the led } ros::Subscriber<std_msgs::Empty> toggle_led_sub("toggle_led", &on_toggle_led); void setup() { pinMode(STATUS_LED_PIN, OUTPUT); digitalWrite(STATUS_LED_PIN, true); nh.getHardware()->setBaud(57600); nh.initNode(); nh.subscribe(toggle_led_sub); nh.subscribe(force_sensors_sub); nh.advertise(sensor_publisher); } void loop() { if (sensor.changed() || force_sensors) { bool_msg.data = sensor.get(); sensor_publisher.publish(&bool_msg); } nh.spinOnce(); delay(1); force_sensors = false; } It has two subscribers: one that listens for a "toggle_led" message and toggles the state of the onboard LED, and another that listens for a "force_sensors" message and sets a boolean flag to force a sensor message to be published. Attached to the Uno is a simple button whose state is checked when sensor.changed() is called. I've confirmed that when I push this button, I see a message with rostopic echo /uno/sensor. 
I would also expect to see this same message after I run rostopic pub /torso_arduino/force_sensors std_msgs/Empty --once. However, no message is received. I modified the toggle_led subscriber to also set the force_sensors flag. Yet even though this still responds to a message and toggles the LED, it still doesn't force the sending of any sensor messages. What am I doing wrong? Originally posted by Cerin on ROS Answers with karma: 940 on 2017-02-12 Post score: 0 Answer: The problem is the order of the instructions. Change your loop to: void loop() { if (sensor.changed() || force_sensors) { bool_msg.data = sensor.get(); sensor_publisher.publish(&bool_msg); force_sensors = false; } nh.spinOnce(); delay(1); } or: void loop() { if (sensor.changed() || force_sensors) { bool_msg.data = sensor.get(); sensor_publisher.publish(&bool_msg); } force_sensors = false; nh.spinOnce(); delay(1); } The reason is that force_sensors is set inside nh.spinOnce() (that is where the subscriber callbacks run), and just after that you reset it to false, so the flag is always cleared before the next loop iteration can act on it. Originally posted by duck-development with karma: 1999 on 2017-02-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Cerin on 2017-02-12: Good catch.
{ "domain": "robotics.stackexchange", "id": 26992, "tags": "rosserial-arduino" }
Empty an array on focusout - jQuery UI Autocomplete
Question: Is it good to empty an array on focusout? var arr = []; $("#automobil").focus(function() { $.getJSON("auto.json", function(data) { $.each(data, function(key, value) { if ($.inArray(value.name, arr) === -1) { arr.push(value.name) } }) }); }).autocomplete({ source: function (request, response) { var term = $.ui.autocomplete.escapeRegex(request.term) , startsWithMatcher = new RegExp("^" + term, "i") , startsWith = $.grep(arr, function(value) { return startsWithMatcher.test(value.label || value.value || value); }) , containsMatcher = new RegExp(term, "i") , contains = $.grep(arr, function (value) { return $.inArray(value, startsWith) < 0 && containsMatcher.test(value.label || value.value || value); }); response(startsWith.concat(contains)); } }).focusout(function() { arr = []; }); In this case, auto.json isn't big, so arr[] isn't big either. But, in "real world examples", there can be large amounts of data, so the array needs to be empty on focusout (after the job is done), because of resources. Answer: I believe this is a case of being too paranoid in optimization. Here are some problems: Your arr is global. You should probably place this somewhere only the widget knows about. You're clearing the array... by creating another array. You're spawning more objects (in this case, an array) instead of cleaning up. The proper way to clear an array is to set length to 0. If nothing references the items in the array, they'll get GC'ed eventually. autocomplete, focus, and focusout can go out of sync. When you focus on the input, you do getJSON. However, your autocomplete will run regardless of whether the request has returned. So that means the array might be empty when autocomplete runs. When you focus out, you cleared your array, so if you focus in again, this happens. You're making unnecessary AJAX calls. On focus, you're calling AJAX. In the real world, waiting for AJAX is a more painful experience than memory consumption. 
With proper practice, the GC can and will reclaim memory but you can't reclaim the time you waited for AJAX. I suggest you do the following instead: You can limit the returned results to a reasonable length. That way, even if you hit the server with AJAX for each keypress, the return won't take that long. Cache your results. Have some internal logic which caches returned data into an array (add when non-existent, update if existing). The problem here isn't the size of your cached data. It's the way you're discarding data and retrieving them back again.
{ "domain": "codereview.stackexchange", "id": 12787, "tags": "javascript, memory-management, json, event-handling, jquery-ui" }
Passing Context in CoffeeScript
Question: I'm trying to migrate from JavaScript to CoffeeScript. However I'm not sure about the best way to optimize the code generated by js2coffee. Below is the original JavaScript source : var async = require('async'); var context = {}; context.settings = require('./settings'); async.series([setupDb, setupApp, listen], ready); function setupDb(callback) { context.db = require('./db.js'); context.db.init(context, callback); } function setupApp(callback) { context.app = require('./app.js'); context.app.init(context, callback); } // Ready to roll - start listening for connections function listen(callback) { context.app.listen(context.settings.http.port); callback(null); } function ready(err) { if (err) { throw err; } console.log("Ready and listening at http://localhost:" + context.settings.http.port); } And below is the generated CoffeeScript : setupDb = (callback) -> # Create our database object context.db = require("./db") # Set up the database connection, create context.db.posts object context.db.init context, callback setupApp = (callback) -> # Create the Express app object and load our routes context.app = require("./app") context.app.init context, callback # Ready to roll - start listening for connections listen = (callback) -> context.app.listen context.settings.http.port callback null ready = (err) -> throw err if err console.log "Ready and listening at http://localhost:" + context.settings.http.port async = require("async") context = {} context.settings = require("./settings") async.series [setupDb, setupApp, listen], ready Now, I'm not sure if the context variable is required in CoffeeScript. In the JavaScript source its function is to share the common set of setting across the various components of the application such as database and the settings. Will a proper use of the => operator in CoffeeScript help ? If yes how ? Answer: The CoffeeScript looks fine to me. You'll still need the context var just as in plain JS, because you're not dealing with this. 
If you were, you could maybe use the fat arrow to preserve the this context, but even so it'd require some refactoring, and end up very different from the original JS. CoffeeScript's fat arrow is basically meant to be used where you'd otherwise do the var that = this; trick in JavaScript. But since you're not doing such things in your JavaScript, there's no need or opportunity for it in the CoffeeScript either.
{ "domain": "codereview.stackexchange", "id": 3263, "tags": "javascript, coffeescript, callback" }
No JPEG data found in image
Question: I am currently running ROS on VirtualBox. After running $roslaunch usb_cam usb_cam.launch Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://dd-VirtualBox:46806/ SUMMARY PARAMETERS * /image_view/autosize: True * /rosdistro: jade * /rosversion: 1.11.16 * /usb_cam/camera_frame_id: usb_cam * /usb_cam/image_height: 480 * /usb_cam/image_width: 640 * /usb_cam/io_method: mmap * /usb_cam/pixel_format: mjpeg * /usb_cam/video_device: /dev/video0 NODES / image_view (image_view/image_view) usb_cam (usb_cam/usb_cam_node) ROS_MASTER_URI=http://localhost:11311 core service [/rosout] found process[usb_cam-1]: started with pid [12458] process[image_view-2]: started with pid [12459] [ INFO] [1458424521.435064169]: Using transport "raw" [ INFO] [1458424522.048799115]: using default calibration URL [ INFO] [1458424522.048959018]: camera calibration URL: file:///home/dd/.ros/camera_info/head_camera.yaml [ INFO] [1458424522.049046985]: Unable to open camera calibration file [/home/dd/.ros/camera_info/head_camera.yaml] [ WARN] [1458424522.049151010]: Camera calibration file /home/dd/.ros/camera_info/head_camera.yaml not found. [ INFO] [1458424522.049205844]: Starting 'head_camera' (/dev/video0) at 640x480 via mmap (mjpeg) at 30 FPS [mjpeg @ 0xeecda0] No JPEG data found in image [ERROR] [1458424524.182962976]: Error while decoding frame. [mjpeg @ 0xeecda0] No JPEG data found in image [ERROR] [1458424525.317354585]: Error while decoding frame. [mjpeg @ 0xeecda0] No JPEG data found in image [ERROR] [1458424525.449446779]: Error while decoding frame. [mjpeg @ 0xeecda0] No JPEG data found in image [ERROR] [1458424525.986624168]: Error while decoding frame. [mjpeg @ 0xeecda0] No JPEG data found in image [ERROR] [1458424526.986550516]: Error while decoding frame. 
^C[image_view-2] killing on exit [usb_cam-1] killing on exit [mjpeg @ 0xeecda0] No JPEG data found in image shutting down processing monitor... ... shutting down processing monitor complete done The webcam lights up but it seems no image is found. I have followed the instructions here http://askubuntu.com/questions/4875/use-my-webcam-with-ubuntu-running-in-virtualbox and using lsusb shows the camera. Am I missing a driver? Originally posted by dmc.2 on ROS Answers with karma: 11 on 2016-03-19 Post score: 1 Answer: I had the same error with Indigo and the usb_cam driver when using an inexpensive Chinese webcam. The webcam worked fine with other applications, but not with the launch file from ROS by example volume 1 for the Vision examples. What I found in my case was that the pixel_format had to change from mjpeg in the original launch file to yuyv. After I made that change the webcam is working fine. Note that I had MSoft LifeCam 3000 that I was also testing with and that camera worked OK with the pixel_format set to mjpeg. Hope this helps someone in the future. Originally posted by burtbick with karma: 201 on 2017-04-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by arminf82 on 2019-09-24: Worked for me. Thanks! Comment by Abdul Mannan on 2021-04-08: Could you please explain how you changed format from mjped to yuyu?
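For reference, the change that worked amounts to editing a single param in the launch file. The sketch below follows the usb_cam parameter names and values shown in the asker's log; only the pixel_format line differs:

```xml
<node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
  <param name="video_device" value="/dev/video0" />
  <param name="image_width" value="640" />
  <param name="image_height" value="480" />
  <param name="pixel_format" value="yuyv" />  <!-- was: mjpeg -->
  <param name="camera_frame_id" value="usb_cam" />
  <param name="io_method" value="mmap" />
</node>
```

If the camera advertises MJPEG but delivers malformed frames (as some cheap webcams do), requesting raw YUYV sidesteps the JPEG decoder entirely, at the cost of more USB bandwidth.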
{ "domain": "robotics.stackexchange", "id": 24188, "tags": "ros, usb-cam, camera, webcam" }
what kind of expert can analyze drought effect on rock fall risk?
Question: I'm discussing safety with a team doing endangered plant conservation work in a small very narrow gulch with walls hundreds of feet high. We see evidence that rocks fall often, and would like to assess the hazard to our once-a-year visiting team who visit briefly to survey the plants. Is it plausible that drought conditions are causing the rockfall frequency to increase? Who is the type of scientist who could analyze the soil type or geo formations type and answer this kind of question? Answer: You need to consult a geomechanical engineer. Sometimes they can also be referred to as geotechnical engineers. A geologist can tell you overall geology of the region of interest, but to determine ground stability issues and likelihood of failure and failure modes that is the specialization of geomechanical engineers. Geomechanical engineers are usually engaged in civil engineering projects or mining. Geomechanics is the application of engineering and geological principles to the behaviour of the ground and ground water and the use of these principles in civil, mining, offshore and environmental engineering in the widest sense.
{ "domain": "engineering.stackexchange", "id": 4848, "tags": "geotechnical-engineering" }
How do trees manage to grow equally in all directions?
Question: I was walking down a road with these beautifully huge trees when this question occurred to me. Large trees with many thick branches have to grow equally in all directions, or they would tip over. Is there some sort of mechanism to ensure this uniform growth? Or is it just a happy coincidence arising from uniform availability of sunlight on all sides? If one branch of a tree becomes too heavy due to many sub-branches, does this somehow trigger growth on the opposite side of the tree? I have seen that potted plants in houses tend to grow towards sunlight. My mum often turns pots around by 180 degrees to ensure that plants don't bend in any one direction. I assume that this is because sunlight increases the rate of photosynthesis, leading to rapid growth of the meristem. For trees growing in open spaces, this wouldn't be a problem. But there are many large trees that grow in the shadow of buildings without bending away from the buildings, even though this is the only direction from which they would receive any sunlight. Is there any explanation for this? Answer: Growth in plants is tightly controlled by auxins – plant hormones. Auxin itself usually has an inhibitory effect on growth [EDIT: see comments and Richard’s answer for correction]. As far as I know there is no active control to restore plant symmetry once it has gone awry (but I could be wrong!) but the inhibitory effect of auxin synthesised at the meristem and diffusing in all directions causes a symmetrical pattern of inhibition and activation, forming shoots at symmetrical distances around the shoot apical meristem – this is very visible in the symmetry of the romanesco broccoli: Furthermore, there are several mechanisms involving auxin which shape the general growth of the plant. The most important ones are: Apical dominance which causes the apex (the stem) of the plant to grow more strongly than other parts of the plant, ensuring a general centring of the growth. 
Phototropism causes the plant to grow towards sunlight. Contrary to what you hypothesised, this isn't simply due to more photosynthesis and hence faster growth on the side of the plant facing the light; it's actively controlled. Gravitropism is a very interesting effect which causes the plant to grow generally upwards. It's interesting because the mechanism actually uses gravity: the auxin synthesised at the meristem diffuses downwards in the plant due to gravity, inhibiting growth in lower regions (but note that in the root apical meristem the effect is somehow reversed). Hydrotropism causes the plant to grow towards water. All these effects combined cause the plant to grow in a generally upwards, laterally distributed fashion.
{ "domain": "biology.stackexchange", "id": 420, "tags": "botany, plant-physiology" }
How to refactor JQuery interaction with interface?
Question: The question is very simple but also a bit theoretical. Let's imagine you have a long jQuery script which modifies and animate the graphics of the web site. It's objective is to handle the UI. The UI has to be responsive so the real need for this jQuery is to mix some state of visualization (sportlist visible / not visible) with some need due to Responsive UI. Thinking from an MVC / AngularJS point of view. How should a programmer handle that? How to refactor JS / jQuery code to implement separation of concerns described by MVC / AngularJS? I provide an example of jQuery code to speak over something concrete. $.noConflict(); jQuery(document).ready(function ($) { /*variables*/ var sliderMenuVisible = false; /*dom object variables*/ var $document = $(document); var $window = $(window); var $pageHost = $(".page-host"); var $sportsList = $("#sports-list"); var $mainBody = $("#mainBody"); var $toTopButtonContainer = $('#to-top-button-container'); /*eventHandlers*/ var displayError = function (form, error) { $("#error").html(error).removeClass("hidden"); }; var calculatePageLayout = function () { $pageHost.height($(window).height()); if ($window.width() > 697) { $sportsList.removeAttr("style"); $mainBody .removeAttr("style") .unbind('touchmove') .removeClass('stop-scroll'); if ($(".betslip-access-button")[0]) { $(".betslip-access-button").fadeIn(500); } sliderMenuVisible = false; } else { $(".betslip-access-button").fadeOut(500); } }; var formSubmitHandler = function (e) { var $form = $(this); // We check if jQuery.validator exists on the form if (!$form.valid || $form.valid()) { $.post($form.attr("action"), $form.serializeArray()) .done(function (json) { json = json || {}; // In case of success, we redirect to the provided URL or the same page. 
if (json.success) { window.location = json.redirect || location.href; } else if (json.error) { displayError($form, json.error); } }) .error(function () { displayError($form, "Login service not available, please try again later."); }); } // Prevent the normal behavior since we opened the dialog e.preventDefault(); }; //preliminary functions// $window.on("load", calculatePageLayout); $window.on("resize", calculatePageLayout); //$(document).on("click","a",function (event) { // event.preventDefault(); // window.location = $(this).attr("href"); //}); /*evet listeners*/ $("#login-form").submit(formSubmitHandler); $("section.navigation").on("shown hidden", ".collapse", function (e) { var $icon = $(this).parent().children("button").children("i").first(); if (!$icon.hasClass("icon-spin")) { if (e.type === "shown") { $icon.removeClass("icon-caret-right").addClass("icon-caret-down"); } else { $icon.removeClass("icon-caret-down").addClass("icon-caret-right"); } } toggleBackToTopButton(); e.stopPropagation(); }); $(".collapse[data-src]").on("show", function () { var $this = $(this); if (!$this.data("loaded")) { var $icon = $this.parent().children("button").children("i").first(); $icon.removeClass("icon-caret-right icon-caret-down").addClass("icon-refresh icon-spin"); console.log("added class - " + $icon.parent().html()); $this.load($this.data("src"), function () { $this.data("loaded", true); $icon.removeClass("icon-refresh icon-spin icon-caret-right").addClass("icon-caret-down"); console.log("removed class - " + $icon.parent().html()); }); } toggleBackToTopButton(); }); $("#sports-list-button").on("click", function (e) { if (!sliderMenuVisible) { $sportsList.animate({ left: "0" }, 500); $mainBody.animate({ left: "85%" }, 500) .bind('touchmove', function (e2) { e2.preventDefault(); }) .addClass('stop-scroll'); $(".betslip-access-button").fadeOut(500); sliderMenuVisible = true; } else { $sportsList.animate({ left: "-85%" }, 500).removeAttr("style"); $mainBody.animate({ left: "0" 
}, 500).removeAttr("style") .unbind('touchmove').removeClass('stop-scroll'); $(".betslip-access-button").fadeIn(500); sliderMenuVisible = false; } e.preventDefault(); }); $mainBody.on("click", function (e) { if (sliderMenuVisible) { $sportsList.animate({ left: "-85%" }, 500).removeAttr("style"); $mainBody.animate({ left: "0" }, 500) .removeAttr("style") .unbind('touchmove') .removeClass('stop-scroll'); $(".betslip-access-button").fadeIn(500); sliderMenuVisible = false; e.stopPropagation(); e.preventDefault(); } }); $document.on("click", "div.event-info", function () { if (!sliderMenuVisible) { var url = $(this).data("url"); if (url) { window.location = url; } } }); function whatDecimalSeparator() { var n = 1.1; n = n.toLocaleString().substring(1, 2); return n; } function getValue(textBox) { var value = textBox.val(); var separator = whatDecimalSeparator(); var old = separator == "," ? "." : ","; var converted = parseFloat(value.replace(old, separator)); return converted; } $(document).on("click", "a.selection", function (e) { if (sliderMenuVisible) { return; } var $this = $(this); var isLive = $this.data("live"); var url = "/" + _language + "/BetSlip/Add/" + $this.data("selection") + "?odds=" + $this.data("odds") + "&live=" + isLive; var urlHoveringBtn = "/" + _language + '/BetSlip/AddHoveringButton/' + $this.data("selection") + "?odds=" + $this.data("odds") + "&live=" + isLive; $.ajax(urlHoveringBtn).done(function (dataBtn) { if ($(".betslip-access-button").length == 0 && dataBtn.length > 0) { $("body").append(dataBtn); } }); $.ajax(url).done(function (data) { if ($(".betslip-access").length == 0 && data.length > 0) { $(".navbar").append(data); $pageHost.addClass("betslipLinkInHeader"); var placeBetText = $("#live-betslip-popup").data("placebettext"); var continueText = $("#live-betslip-popup").data("continuetext"); var useQuickBetLive = $("#live-betslip-popup").data("usequickbetlive").toLowerCase() == "true"; var useQuickBetPrematch = 
$("#live-betslip-popup").data("usequickbetprematch").toLowerCase() == "true"; if ((isLive && useQuickBetLive) || (!isLive && useQuickBetPrematch)) { var dialog = $("#live-betslip-popup").dialog({ modal: true, dialogClass: "fixed-dialog" }); dialog.dialog("option", "buttons", [ { text: placeBetText, click: function () { var placeBetUrl = "/" + _language + "/BetSlip/QuickBet?amount=" + getValue($("#live-betslip-popup-amount")) + "&live=" + $this.data("live"); window.location = placeBetUrl; } }, { text: continueText, click: function () { dialog.dialog("close"); } } ]); } } if (data.length > 0) { $this.addClass("in-betslip"); } }); e.preventDefault(); }); $(document).on("click", "a.selection.in-betslip", function (e) { if (sliderMenuVisible) { return; } var $this = $(this); var isLive = $this.data("live"); var url = "/" + _language + "/BetSlip/RemoveAjax/" + $this.data("selection") + "?odds=" + $this.data("odds") + "&live=" + isLive; $.ajax(url).done(function (data) { if (data.success) { $this.removeClass("in-betslip"); if (data.selections == 0) { $(".betslip-access").remove(); $(".betslip-access-button").remove(); $(".page-host").removeClass("betslipLinkInHeader"); } } }); e.preventDefault(); }); $("section.betslip .total-stake button.live-betslip-popup-plusminus").click(function (e) { if (sliderMenuVisible) { return; } e.preventDefault(); var action = $(this).data("action"); var amount = parseFloat($(this).data("amount")); if (!isNumeric(amount)) amount = 1; var totalStake = $("#live-betslip-popup-amount").val(); if (isNumeric(totalStake)) { totalStake = parseFloat(totalStake); } else { totalStake = 0; } if (action == "decrease") { if (totalStake < 1.21) { totalStake = 1.21; } totalStake -= amount; } else if (action == "increase") { totalStake += amount; } $("#live-betslip-popup-amount").val(totalStake); }); toggleBackToTopButton(); function toggleBackToTopButton() { isScrollable() ? 
$toTopButtonContainer.show() : $toTopButtonContainer.hide(); } $("#to-top-button").on("click", function () { $("#mainBody").animate({ scrollTop: 0 }); }); function isScrollable() { return $("section.navigation").height() > $(window).height() + 93; } var isNumeric = function (string) { return !isNaN(string) && isFinite(string) && string != ""; }; function enableQuickBet() { } }); Answer: You are asking a high-level question, so I am going to give a high-level answer. Your listeners should have at most 2 lines of code inside: one line that changes data in the model if required, and one line that updates the screen if required. This means that you need to extract all the code from your listeners into functions, which you can group under a view object. This will help tremendously for review, since you can give meaningful names to those functions so that the reader knows what is going on. Your data should live in a model class; you can keep updating the model from the UI, or you can let Angular take care of that. Other than that, I do have some other minor observations: Remove commented-out code; it has no use; use source versioning instead. Magic constants: you have so many; extract them out, name them, and explain their values.
{ "domain": "codereview.stackexchange", "id": 5695, "tags": "javascript, jquery" }
Moveit Path Constraint Failure
Question: I'm using the moveit_commander interface on the Fetch robot and trying to add a path constraint. Here's some example code: constraints = Constraints() oc = OrientationConstraint() oc.link_name = "wrist_roll_link" oc.header.frame_id = "base_link" oc.weight = 1.0 oc.orientation = orientation #current wrist_roll_link orientation oc.absolute_x_axis_tolerance = 3.14 oc.absolute_y_axis_tolerance = 3.14 oc.absolute_z_axis_tolerance = 3.14 constraints.orientation_constraints.append(oc) move_group.set_path_constraints(constraints) Moveit still finds a plan, but the plan seems to ignore the constraints, and it outputs an error that says: /move_group GetPositionIK:1025: 0 is < negative number > The results are the same for position and orientation constraints. The same code seemed to work in the past. Any ideas would be welcome! -Sarah Originally posted by velveteenrobot on ROS Answers with karma: 136 on 2017-06-03 Post score: 0 Answer: Did you figure this out? The example you give is not restricting the orientation at all (tolerance = pi). Originally posted by v4hn with karma: 2950 on 2017-06-27 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by velveteenrobot on 2017-06-27: I didn't figure it out unfortunately. You're right that in my example the tolerance is just set to pi. I was using that code to test if the error still occurs when there isn't any constraint and it does. Even when you make the tolerance = pi, it still gives an error. Comment by velveteenrobot on 2017-06-27: And when the tolerance is not pi, it gives an error and ignores the constraint.
{ "domain": "robotics.stackexchange", "id": 28050, "tags": "moveit" }
Is it possible to transfer classical bits of information faster than light speed?
Question: Is there any known, verifiable way to transfer classical information faster than light, using quantum entanglement or other phenomenon? Does quantum teleportation, or other known phenomenon, allow FTL transfer of information (not "quantum state", but true classical information)? A friend and I have a few $$$ riding on this ... I know I'm right, and I need some answer he will accept as irrefutable evidence one way or the other. (The evidence would be not that it's physically impossible, but that the general consensus in the physics community is that it's impossible) I saw some duplicate questions, but nothing that asks this simple, generic question. Edit - I need a straight "Yes" or "No" answer, and so far I haven't seen it. Answer: Short answer: No. Short answer as a complete sentence: We have no reason to believe that there is any way that teleportation (or any other quantum mechanical effect, or any other physical phenomenon which we believe occurs) allows you to send superluminal signals. Detailed answer: Quantum teleportation can only be used to transfer quantum states. Those quantum states may encode classical information, but they are still quantum states. (Of course, the world being quantum mechanical, even "classical information" is represented by quantum states, albeit those of a very large number of particles at once — but never mind.) Teleportation requires open classical communication to work anyway; so teleporting the information classically won't save you any work, and in fact is totally unhelpful — except as a quantum mechanical version of a Vernam cipher (i.e. a one-time pad). Whether anything is secretly happening in quantum mechanics faster than the speed of light is actually a matter of philosophical debate in the foundations of physics. 
There are people who say that there is (such as advocates of de Broglie–Bohm theory), and people who say that there isn't (advocates of the Many Worlds Interpretation, Consistent Histories, and some Bayesians). What people mostly agree on is that quantum mechanics allows you to realise correlations in probability distributions which are not possible in slower-than-light local hidden variable theories; but that even if there is anything happening faster than the speed of light, you're not going to be able to use it to transmit signals, because everything looks like correlated but uncontrollable random outcomes. Related question: Why can't quantum teleportation be used to transport information?
{ "domain": "physics.stackexchange", "id": 10721, "tags": "quantum-mechanics, special-relativity, faster-than-light" }
How would inertia work in an elevator?
Question: If we imagine standing on a scale inside an elevator, the scale will project our weight. As the elevator starts to move downwards we will not start to move downwards immediately; because of inertia we stay stationary for a very short period of time and then start to accelerate downwards, which causes the scale to project a lower weight. But then, because elevators are not let into free fall, we will catch up with the elevator, since gravity accelerates us downwards. When we have caught up with the elevator, what will the projected weight be? And what if the elevator is in free fall? I would think that while the elevator accelerates we would have a lower weight, as our acceleration relative to the elevator is less than one g. But when the elevator is traveling at a constant speed, I would assume that when we have caught up, the scale will project the weight we had when the elevator was standing still, since there is no longer any acceleration relative to the elevator. Answer: You are correct: once an elevator stops accelerating and moves at a constant speed, then we feel our normal weight. If an elevator were in freefall and there was no air resistance to slow it, then we would be in freefall inside it. If there were air resistance on the elevator in freefall, its acceleration would decrease until it reached terminal velocity, where its weight would equal its air resistance. Once it reached terminal velocity, then we would feel our normal weight inside, until it hit the bottom.
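The answer above boils down to one formula: a scale reads the normal force N = m(g + a), where a is the elevator's acceleration with upward taken as positive. A minimal sketch (the 70 kg mass and the sample accelerations are made-up illustration values, not from the question):

```python
g = 9.81  # m/s^2

def scale_reading(m, a):
    """Scale reading (normal force, in newtons) for a passenger of mass m
    in an elevator with vertical acceleration a (upward positive)."""
    return m * (g + a)

m = 70.0  # kg, hypothetical passenger

starting_down = scale_reading(m, -2.0)   # accelerating downward: reads lighter
constant_speed = scale_reading(m, 0.0)   # constant velocity: normal weight
free_fall = scale_reading(m, -g)         # free fall: reads zero
```

The constant-speed case is the one the question asks about: with a = 0 the reading is exactly mg again, regardless of how fast the cab is moving.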
{ "domain": "physics.stackexchange", "id": 60085, "tags": "newtonian-mechanics, reference-frames, acceleration, inertia" }
What's wrong with my calculation of gravitational potential for a uniform sphere?
Question: This is really embarrassing, but I'm not quite sure where I'm going wrong here... Why is this calculation of the gravitational potential inside a sphere with uniform mass distribution incorrect? Set-Up Let's say the sphere has mass $M$ and radius $R$ (and uniform mass density $\mu$), and what we want to find is the potential at any distance $r$ from the center of the sphere, where $r<R$. We normalize the potential to zero at infinity. Calculation The potential $\phi(r)$ is equal to the potential right outside of the sphere, plus the potential difference between some point inside the sphere and a point right outside. $$ \phi(r)=\phi_0-\int_R^r \frac{\mu G}{r}dV $$ (Sorry for using $r$ for the upper limit of the integral as well as for the variable in the integrand. Hopefully this doesn't cause confusion.) Now to figure out the different aspects of the above equation. The potential right outside of the sphere is: $$\phi_0=-\frac{MG}{R}$$ The differential volume element can be expressed as the constant-potential spherical shell's surface area times the shell's differential width: $$dV=4\pi r^2 dr$$ And one final detail, the mass density of the sphere: $$\mu=\frac{3M}{4\pi R^3}$$ Using this information, $$\phi(r)=-\frac{MG}{R}-\frac{3MG}{R^3}\int_R^r r dr$$ $$\phi(r)=-\frac{MG}{R^3}\left[R^2+\frac{3r^2}{2}-\frac{3R^2}{2}\right]$$ $$\phi(r)=-\frac{MG}{2R^3}(3r^2-R^2)$$ Conclusion This result disagrees with a few places I've visited, like this one, which states that the correct result (in terms of the variables I've used) is $$\phi(r)=-\frac{MG}{2R^3}(3R^2-r^2)$$ Both results give the same potential at $r=R$, obviously, but my result starts to look ridiculous for values like $r=R/2$. The only part in my calculation that seems sketchy to me is that first equation, where I talk about the potential difference at points inside and outside of the sphere; I don't know if it's correct to be dividing by $r$ in the integrand...
Or maybe I just made a stupid algebra mistake somewhere in there. Where did I go wrong? Answer: I disagree with Qmechanic concerning the core of the problem of your calculation although his information about Newton's shell theorem is correct. The problem in your calculation lies in your first equation which is simply wrong. What you are doing, according to your equation, is to somehow calculate and subtract the potential of the shell outside of $r$. However, this is not how you obtain the potential at the point $r$. What you should do instead is integrate the force acting on a test mass $m$ from $R$ to $r$: $$\Delta\phi=\phi(r)-\phi(R)=-\int_R^r \frac{F_{\rm G}(r')}{m}\,\mathrm{d}r'\,.$$ Here, you have to use the fact mentioned before that only the mass inside of $r'$ contributes to $F(r')$. If you do this, you should obtain the correct result already stated in your question.
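The answer's prescription is easy to check numerically: integrate the interior acceleration (only the mass inside $r'$ attracts, so $F(r')/m = -GMr'/R^3$, pointing inward) from $R$ to $r$ and add $\phi(R) = -GM/R$. A quick sketch in arbitrary units chosen so that $G = M = R = 1$ (an illustration, not part of the original exchange):

```python
G, M, R = 1.0, 1.0, 1.0  # units chosen so G = M = R = 1

def phi_numeric(r, steps=100_000):
    """phi(r) = phi(R) - int_R^r F(r')/m dr'.  Inside the sphere
    F/m = -G*M*r'/R**3 (inward), so the integrand flips sign."""
    h = (r - R) / steps
    total = 0.0
    for i in range(steps):
        rp = R + (i + 0.5) * h          # midpoint rule
        total += (G * M * rp / R**3) * h
    return -G * M / R + total

def phi_exact(r):
    """The textbook result: -G*M*(3R^2 - r^2) / (2R^3)."""
    return -G * M * (3 * R**2 - r**2) / (2 * R**3)
```

At $r = R/2$ both give $-1.375\,GM/R$, while the question's sign-flipped formula would give $+0.125\,GM/R$ short of the surface value, confirming which expression is the physical one.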
{ "domain": "physics.stackexchange", "id": 14926, "tags": "homework-and-exercises, classical-mechanics, gravity" }
cloudsim install problem
Question: Hello, I tried to install CloudSim. I installed all of the necessary things, but it gives me this error: :/home/ossl2/Documents# ./cloudsim-1.4.0/bin/create_cloudsim.py ****@gmail.com ****** ***** us-east-1a ERROR:boto:401 Unauthorized ERROR:boto: OptInRequired: You are not subscribed to this service. Please go to http://aws.amazon.com to subscribe. 3408e248-67eb-444f-810c-c4a3b412f256 Traceback (most recent call last): File "./cloudsim-1.4.0/bin/create_cloudsim.py", line 50, in machine = cloudsim.cloudsim_bootstrap(username, tmp_fname.name, auto_launch_constellation) File "/home/ossl2/Documents/cloudsim-1.4.0/cloudsimd/launchers/cloudsim.py", line 376, in cloudsim_bootstrap constellation_directory, website_distribution) File "/home/ossl2/Documents/cloudsim-1.4.0/cloudsimd/launchers/cloudsim.py", line 124, in launch sim_security_group= ec2conn.create_security_group(sim_sg_name, "simulator security group for constellation %s" % constellation_name) File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 2286, in create_security_group SecurityGroup, verb='POST') File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1069, in get_object raise self.ResponseError(response.status, response.reason, body) boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized OptInRequired: You are not subscribed to this service. Please go to http://aws.amazon.com to subscribe. 3408e248-67eb-444f-810c-c4a3b412f256 What is the problem? P.S.: Here I put * instead of the key and secret key. Answer: Hi Robi, it seems to be a credentials problem. Do you have an AWS account? Make sure that you are specifying your public and secret key when you run the create_cloudsim.py tool.
Ex: ./create_cloudsim.py cloudsim@gmail.com RKIAIFWTGCDUX7NZ2BSA aRYYgTc36NFZ31KX+E3Jh40qtgS/qXsFrdW51X9G us-east-1a Originally posted by Carlos Agüero with karma: 626 on 2013-05-05 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 3264, "tags": "gazebo" }
Removing random repeated guesses from brute force program
Question: I came here to ask if there is a way to prevent already guessed guesses from being guessed again in my brute-force Java program. Basically I used a rand.nextInt in my program but the program will guess already guessed passwords again even if they were incorrect. How could I prevent this as it wastes time and processing power to guess already guessed numbers? import java.util.*; import java.io.*; class pwcracker { public static void main (String[] args) { Scanner scan = new Scanner( System.in ); Random rand = new Random(); String pw, choices, guess; long tries; int j, length; System.out.println("Enter a password that is up to 5 chars and contains no numbers: "); pw = "" + scan.nextLine(); length = pw.length(); choices = "abcdefghijklmnopqrstuvwxyz"; tries = 0; guess = ""; System.out.println("Your pw is: " + pw); System.out.println("The length of your pw is: " + length); System.out.println("for TEST- Guess: " + guess + "pw :"+pw); if (!guess.equals(pw)){ while (!guess.equals(pw)) { j = 0; guess = ""; while ( j < length ) { guess = guess + choices.charAt( rand.nextInt( choices.length() ) ); j = j + 1; if (guess .equals(pw)) { System.out.println("Match found, ending loop.."); break; } } System.out.println("Guess: " + guess + " pw :"+pw); tries = tries + 1; } } System.out.println("Here is your password: " + guess); System.out.println("It took " + tries + " tries to guess it."); } } Answer: There are only two ways. Use a sequence which ensures values are not repeated. E.g. aaaaa, aaaab, aaaac ... aaaay, aaaaz, aaaba, aaabb, aaabc etc Keep track of the previously attempted values (e.g. using a HashSet variable) and skip the check when it's already been tried. However, I would have thought that the memory and extra processing used in this method make it pretty pointless since you will normally waste more resources than you will save.
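The answer's first suggestion (enumerate candidates in a fixed order so nothing can ever repeat) is what `itertools.product` gives you directly in Python. This is a hedged sketch of the idea, not a rewrite of the Java program itself:

```python
from itertools import product
from string import ascii_lowercase

def crack(pw):
    """Try every lowercase candidate of len(pw) exactly once, in
    odometer order (aa, ab, ..., az, ba, ...); return the try count."""
    for tries, combo in enumerate(product(ascii_lowercase, repeat=len(pw)),
                                  start=1):
        if ''.join(combo) == pw:
            return tries
    return None  # unreachable for all-lowercase passwords
```

Because the sequence is exhaustive and repetition-free, the worst case is exactly 26^length tries, whereas random guessing has no upper bound at all.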
{ "domain": "codereview.stackexchange", "id": 11678, "tags": "java, random" }
If electrons aren't revolving around the nucleus, why do atoms have orbital magnetic moment?
Question: In most introductory textbooks, the explanation of orbital magnetic moment is based on Bohr's model and electrons orbiting around the nucleus, which can be modeled as a current loop. For example, here. But I've never seen an explanation without Bohr's model, using Schrödinger's equation. Would this be possible, or do we need some experimental hypothesis? In particular, I'd prefer to avoid charge-density-orbital-based explanations, if possible. Answer: Let's consider coupling a charged particle to a magnetic field in quantum mechanics. Assume a uniform magnetic field for simplicity. The prescription for coupling to an EM field is the substitution $\mathbf{p} \rightarrow \mathbf{p} - q\mathbf{A}$. The Hamiltonian is then \begin{equation} H = \frac{\left(\mathbf{p} - q\mathbf{A}\right)^2}{2m} + V \end{equation} Or, expanding, \begin{equation} H = \frac{1}{2m}\left[p^2 +q^2A^2 - q\left(\mathbf{p}\cdot\mathbf{A} +\mathbf{A}\cdot\mathbf{p}\right)\right] + V \end{equation} If we work in the Coulomb gauge, $\nabla\cdot\mathbf{A} =0$, then $\mathbf{p}\cdot\mathbf{A} = \mathbf{A}\cdot\mathbf{p}$ and \begin{equation} H = \frac{1}{2m}\left[p^2 +q^2A^2 - 2q\mathbf{A}\cdot\mathbf{p}\right] + V \end{equation} We still have some gauge freedom here, so let's choose explicitly $\mathbf{A} = \frac{1}{2}\mathbf{B} \times \mathbf{r}$ so that $\mathbf{A} \cdot \mathbf{p} = \frac{1}{2}\left(\mathbf{B} \times \mathbf{r}\right) \cdot \mathbf{p} = \frac{1}{2} \left(\mathbf{r} \times \mathbf{p}\right) \cdot \mathbf{B} = \frac{\mathbf{L}\cdot\mathbf{B}}{2}$ by the cyclic symmetry of the triple product. The full Hamiltonian is \begin{equation} H = \frac{1}{2m}\left[p^2 +q^2A^2\right] - \frac{q}{2m}\mathbf{L}\cdot\mathbf{B} + V \end{equation} and by analogy to the classical energy of interaction between a magnetic dipole and magnetic field, we define $\mathbf{\mu} = \frac{q\mathbf{L}}{2m}$.
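The only nontrivial algebraic step in the derivation is the cyclic symmetry of the scalar triple product, $(\mathbf{B}\times\mathbf{r})\cdot\mathbf{p} = (\mathbf{r}\times\mathbf{p})\cdot\mathbf{B}$. A quick numerical sanity check with arbitrary sample vectors (the numbers carry no physics; they are illustration values only):

```python
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# arbitrary stand-ins for B, r and p
B = (0.0, 0.0, 2.0)
r = (1.0, -3.0, 0.5)
p = (0.2, 4.0, -1.0)

lhs = dot(cross(B, r), p)   # (B x r) . p
rhs = dot(cross(r, p), B)   # (r x p) . B
```

Both evaluate to the same number, which is all the derivation needs to move $\mathbf{B}$ out of the cross product and identify $\mathbf{L} = \mathbf{r}\times\mathbf{p}$.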
{ "domain": "physics.stackexchange", "id": 24314, "tags": "quantum-mechanics, magnetic-moment" }
Where does this equation for the activity in the gas phase equilibrium between N₂, H₂ and NH₃ come from?
Question: I have a question about the equilibrium constant. I understand that for a reaction in equilibrium $aA +bB \rightleftharpoons cC +dD$ $$K_c= \frac{[C]^c[D]^d}{[A]^a[B]^b}$$ and $K_p$ is just using the partial pressures. I'm looking through my PChem1 tutorials and I saw that they wrote for $$\ce{1/2 N2 + 3/2 H2 <=> NH_3}$$ $$K=\frac{a(NH_3)}{a(N_2)^\frac12a(H_2)^\frac32}$$ I believe that $a$ represents activity. However, what I don't get is what follows: $$K=\frac{a(NH_3)}{a(N_2)^\frac12a(H_2)^\frac32}=\frac{\left(\frac{n(NH_3)RT}{p_0V}\right)}{\left(\frac{n(N_2)RT}{p_0V}\right)^\frac12\left(\frac{n(H_2)RT}{p_0V}\right)^\frac32}=\frac{n(NH_3)}{n(N_2)^\frac12n(H_2)^\frac32}\left(\frac{p_0V}{RT}\right)$$ I can't figure out how $a(NH_3)=\frac{n(NH_3)RT}{p_0V}$. It obviously has something to do with the ideal gas equation $pV=nRT$ but I don't get how. Additional context: The question asks for the equilibrium composition starting with 1 mol each of $\ce{N_2}$, $\ce{H_2}$ and $\ce{NH_3}$ in a 1 L reaction vessel at 298 K. P.S.: I get that I am to use the extent of reaction to solve this. Answer: The activity of a molecule $\ce{M}$ as a function of its concentration in the solution can be written as \begin{equation} a_{\ce{M}} = \frac{\gamma_{\ce{M}} c_{\ce{M}}}{c^0} \ , \end{equation} where $c^0$ is the standard concentration and $\gamma_{\ce{M}}$ is the (dimensionless) activity coefficient (see also: here and Wikipedia). If you have a sufficiently dilute solution so that non-ideal contributions become very small, the activity coefficient can be approximated by 1, i.e. $\gamma_{\ce{M}} \approx 1$.
So, what you're left with for the activity under these conditions is \begin{equation} a_{\ce{M}} \approx \frac{c_{\ce{M}}}{c^0} \ , \end{equation} Now, the concentration is defined by $c = \frac{n}{V}$ and with the ideal gas law $\frac{n}{V} = \frac{p}{RT}$ you get: \begin{align} a_{\ce{M}} &\approx \frac{c_{\ce{M}}}{c^0} = \frac{\frac{n_{\ce{M}}}{V}}{\frac{n^0}{V}} \\ &= \frac{\frac{n_{\ce{M}}}{V}}{\frac{p^{0}}{RT}} = \frac{n_{\ce{M}} R T}{p^{0} V} \end{align} where $p^{0}$ is the standard pressure.
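Plugging the question's numbers into the final expression makes the identity concrete. A quick sketch (taking the standard pressure as $p^0 = 10^5$ Pa, i.e. 1 bar; that is an assumption here, since the tutorial might use 1 atm instead):

```python
R = 8.314     # J/(mol K)
T = 298.0     # K
p0 = 1.0e5    # Pa, assumed standard pressure (1 bar)
n = 1.0       # mol (the question starts with 1 mol of each gas)
V = 1.0e-3    # m^3, i.e. 1 L

c = n / V                  # actual concentration, mol/m^3
c0 = p0 / (R * T)          # standard concentration of an ideal gas at p0, T
a_from_conc = c / c0       # a = c / c0
a_formula = n * R * T / (p0 * V)   # a = n R T / (p0 V)
```

Both routes give the same number (about 24.8 here): they are just the two algebraically identical forms in the answer's last equation.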
{ "domain": "chemistry.stackexchange", "id": 2312, "tags": "physical-chemistry, thermodynamics, equilibrium, gas-laws" }
Add two numbers stored as a list in F#
Question: I am solving the following problem: You are given two lists representing two non-negative numbers. The digits are stored in reverse order and each of their nodes contain a single digit. Add the two numbers and return it as a list. I wrote the following code: let addTwoNumbers (a: int list) (b: int list) = let rec loop (a: int list) (b: int list) (c: int list) (carry: int) = match a, b, carry with | (h1::t1), (h2::t2), _ -> let q, r = Math.DivRem(h1 + h2 + carry, 10) loop t1 t2 (List.append c [r]) q | (h1::t1), [], _ -> let q, r = Math.DivRem(h1 + carry, 10) loop t1 [] (List.append c [r]) q | [], (h2::t2), _ -> let q, r = Math.DivRem(h2 + carry, 10) loop [] t2 (List.append c [r]) q | [], [], 0 -> c | [], [], carry -> let q, r = Math.DivRem(carry, 10) loop [] [] (List.append c [r]) q loop a b [] 0 Is there a way to simplify and possibly to de-duplicate some of this code? Answer: List.append is an expensive O(d) operation, making your addTwoNumbers O(d2). To reverse the list, you would be better off calling List.rev once at the end. (By the way, List.append c [r] would be better written as c @ [r].) The loop helper function could use a better name. My suggestions are either add or addTwoNumbers'. I don't like the name c either, as it is somewhat mnemonically similar to carry. Put the base cases first, followed by the recursive cases. One of the cases can be simplified by taking advantage of commutativity. When adding two numbers, carry should never be greater than 1. let addTwoNumbers (a: int list) (b: int list) = let rec add (a: int list) (b: int list) (sum: int list) (carry: int) = match a, b, carry with | [], [], 0 -> sum | [], [], 1 -> 1 :: sum | [], _, _ -> add b a sum carry | (h1::t1), [], _ -> let c, s = Math.DivRem(h1 + carry, 10) add t1 [] (s :: sum) c | (h1::t1), (h2::t2), _ -> let c, s = Math.DivRem(h1 + h2 + carry, 10) add t1 t2 (s :: sum) c add a b [] 0 |> List.rev
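The same algorithm (walk both reversed digit lists with a running carry) translates to a short Python sketch, handy for cross-checking the F# version on concrete inputs; this is an illustration of the technique, not part of the original review:

```python
def add_two_numbers(a, b):
    """Add two non-negative numbers stored as reversed digit lists,
    e.g. 342 -> [2, 4, 3]; returns the sum in the same format."""
    result, carry, i = [], 0, 0
    while i < len(a) or i < len(b) or carry:
        d1 = a[i] if i < len(a) else 0
        d2 = b[i] if i < len(b) else 0
        carry, digit = divmod(d1 + d2 + carry, 10)   # carry is always 0 or 1
        result.append(digit)
        i += 1
    return result
```

Note that appending to the end and never reversing is the list-order trick the review makes for F#: build the result in the cheap direction, then (in F#) reverse once at the end instead of calling an O(d) append per digit.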
{ "domain": "codereview.stackexchange", "id": 15162, "tags": "f#, integer" }
Doubt regarding foreign key and referential integrity
Question: I am studying database from Database System Concepts. The author was explaining foreign keys and referential integrity with the help of the following schemas: $$ instructor(\underline{ID},\space name,\space dept\_name,\space salary)$$ $$ course(\underline{course\_id},\space title, \space dept\_name,\space credits) $$ $$ department(\underline{dept\_name}, \space building,\space budget)$$ $$ section(\underline{course\_id,\space sec\_id,\space semester,\space year},\space building,\space room\_no,\space time\_slot\_id) $$ $$ teaches(\underline{ID,\space course\_id, \space sec\_id,\space semester, \space year})$$ The primary keys are underlined. The relation $section$ is used so that each course could be offered multiple times, across different semesters, or even within a semester. The foreign key was explained with the help of the schemas $instructor$ and $department$. The attribute $dept\_name$ in $instructor$ is a foreign key from $instructor$ referencing $department$. It need not be the case that all $dept\_name$ values are referenced by the $instructor$ relation. The referential integrity constraint was explained with the help of the $section$ and $teaches$ schemas. If a section exists for a course, it must be taught by at least one instructor (it could be taught by more than one instructor). So all the combinations of $(course\_id,\space sec\_id,\space semester, \space year)$ that appear in the $section$ relation must also appear in the $teaches$ relation. But we cannot declare a foreign key from $section$ to $teaches$, since multiple instructors could teach a single section. We could define a foreign key from $teaches$ to $section$. So we can enforce the referential integrity constraint from $instructor$ to $department$ by using a foreign key, but the referential integrity constraint from $section$ to $teaches$ holds without a foreign key. So how is referential integrity different from foreign key constraints?
Also if you know some free resources that could clear database concepts please share. Answer: In general the “referential integrity constraint” is a constraint that states that a foreign key must always be equal to some primary key of a tuple in the referred relation (so establishing a correspondence between a tuple in the first relation and another (and only one) in the other relation. If the constraint is violated, that is if the value of the foreign key of a tuple is not equal to the value of the primary key of another tuple, this means that the foreign key has lost its integrity, since it “does not refer anything”, like a “dangling reference” in a programming language. It is a duty of the Database Management System to enforce this constraint, and prevent any violation of it through some policy, that can be specified by the database administrator (“on delete cascade, no action”, etc.) In the book that you cited, it is in fact said that: a referential integrity constraint requires that the values appearing in specified attributes of any tuple in the referencing relation also appear in specified attributes of at least one tuple in the referenced relation. The comment related to the example explains that, even if, according to the rules of the university, each section should be taught by at least one instructor, it is not possible to express such constraint in the database designed, since it is not possible to define the attributes of section as foreign key for the relation teaches (since they do not form a primary key in such relation). Note that this depends on the fact that the only way to represent a 1-n relationship between two sets of data in the Relational Data Model is to use a foreign key in the relation where there is only one element (at maximum) in the other relation, not vice-versa. So the foreign key is from teaches to section, since a tuple in teaches can be related only to one tuple in section.
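The constraint that cannot be declared as a foreign key can still be stated (and checked) as set inclusion: the key of every $section$ tuple must appear in the projection of $teaches$ onto the same attributes. A small sketch with hypothetical tuples (the course and instructor IDs are made up for illustration):

```python
# Hypothetical tuples; the section key is (course_id, sec_id, semester, year).
section = {
    ("CS-101", 1, "Fall", 2017),
    ("CS-101", 2, "Fall", 2017),
}
# teaches tuples are (ID, course_id, sec_id, semester, year).  Two instructors
# share section 1, which is exactly why section -> teaches cannot be a foreign
# key: the referenced attributes are not a key of teaches.
teaches = {
    ("10101", "CS-101", 1, "Fall", 2017),
    ("12121", "CS-101", 1, "Fall", 2017),
    ("10101", "CS-101", 2, "Fall", 2017),
}

def every_section_is_taught(section, teaches):
    """The referential integrity constraint, checked by hand."""
    taught = {t[1:] for t in teaches}   # project away the instructor ID
    return section <= taught            # subset test
```

A DBMS would enforce this kind of inclusion dependency with a trigger or an assertion rather than a foreign key declaration.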
{ "domain": "cs.stackexchange", "id": 13029, "tags": "database-theory, databases, relational-algebra" }
Is there a way to create an artificial solar eclipse?
Question: I heard this story, where they celebrated the birthday of the late North Korean dictator Kim Il-sung in the 1970s, and as a birthday present they created, through some very complex artillery maneuvers, an artificial total solar eclipse over Pyongyang. At that very moment, the dictator stepped out of his palace in front of the crowd, in a suit covered with reflecting material, and had giant reflectors directed at him, so he practically 'outshone' the sun... While this seems plausible in view of the megalomania and extreme personality cult we all know about, I still wonder whether it is technically feasible. (I'm not looking for an answer whether the story is true or not.) So my question is whether there is any possibility of an artificial, man-made total solar eclipse and, if so, how? Answer: It would be very hard to reduce the sense of daylight at the Earth's surface, as light reaching any point could have been scattered by the atmosphere over a large angle. This rules out just covering the small patch of sky (well under a square degree) that the sun's disc extends over. Solar eclipses get much darker because the light is blocked before it reaches our atmosphere and cannot be scattered over a large angle. Even on a cloudy day, look up and see how uniform the illumination appears, practically from horizon to horizon...
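To put a number on how small the sun's disc is compared with the whole sky, here is a back-of-the-envelope sketch (the 0.53° mean angular diameter is a standard value; the small-angle flat-disc approximation is assumed):

```python
import math

# Mean angular diameter of the sun as seen from Earth, in degrees.
ang_diameter_deg = 0.53

# Solid angle of the solar disc, treating it as a flat circle (small-angle approximation).
radius_rad = math.radians(ang_diameter_deg / 2)
omega_sun = math.pi * radius_rad ** 2  # steradians

# The full sky above the horizon is a hemisphere: 2*pi steradians.
omega_sky = 2 * math.pi

print(f"solar disc: {omega_sun:.2e} sr, "
      f"about {omega_sun / omega_sky:.1e} of the visible sky")
```

The disc covers only about one hundred-thousandth of the visible sky, which is why blocking it locally still leaves all the light scattered in from the rest of the (sunlit) atmosphere.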
{ "domain": "physics.stackexchange", "id": 21358, "tags": "sun" }
Is a ReLU-like activation without the vanishing gradient problem possible?
Question: I'm a beginner in ML. In an ANN, ReLU has a gradient of 1 for x > 0; however, for x <= 0 ReLU has a gradient of 0 and may cause a vanishing gradient problem in deep neural networks. If an activation function like y = x (for all x) has no vanishing gradient problem, why don't we use this function in deep neural networks? Is there any side effect of y = x (for all x)? (Maybe the weights could go to infinity in deep neural networks... however, I think this problem can also happen with ReLU, so I don't think it is the issue.) Answer: If you are using an activation like y = x, then your model is a simple linear one. Multiple layers with such an activation are equivalent to (reduce to) only one layer with a linear activation! Thus you can only fit linear functions satisfactorily with this type of model. To be able to learn complex non-linear functions, you need multiple layers with non-linear activations in between to make the whole model non-linear. To mitigate the vanishing gradient problem, there is a variant of ReLU called Leaky ReLU. This activation is the same as ReLU in the positive region of x. In the negative region of x, it is a linear function with a small slope (e.g. 0.2). The kink at x = 0 keeps Leaky ReLU non-linear while giving it a non-zero gradient everywhere.
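A minimal sketch of Leaky ReLU and its gradient in plain Python (the slope 0.2 is just the example value from the answer; 0.01 is another common choice):

```python
def leaky_relu(x, alpha=0.2):
    """Leaky ReLU: identity for x > 0, small slope alpha for x <= 0."""
    return x if x > 0 else alpha * x

def leaky_relu_grad(x, alpha=0.2):
    """Gradient is 1 for x > 0 and alpha (not 0) for x <= 0,
    so it never vanishes completely, unlike plain ReLU."""
    return 1.0 if x > 0 else alpha

for x in (-2.0, -0.5, 0.5, 2.0):
    print(x, leaky_relu(x), leaky_relu_grad(x))
```

Note the function is still non-linear overall (its slope changes at 0), so stacking layers with it does not collapse to a single linear layer the way y = x would.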
{ "domain": "datascience.stackexchange", "id": 7389, "tags": "machine-learning, activation-function" }
subscribing and publishing
Question: I have an application where 2 nodes are talking to each other over a certain topic; based on where the nodes are in the program flow, they either care about the topic or they don't. My question is something the tutorials didn't help with: is there a command that can check a topic on demand, or check a specific piece of a topic on demand? elevator_talk.publish(status); works great for publishing on demand; is there something similar for subscribing? Originally posted by Rydel on ROS Answers with karma: 600 on 2012-07-03 Post score: 1 Original comments Comment by cagatay on 2012-07-03: I am not sure, but you may keep track of the subscribed topic by checking the last time you received the data. For example, if it was 10 seconds ago, then you would think that a problem might have happened to the publishing node? Or you may use diagnostic_msgs to understand what is happening Comment by Rydel on 2012-07-03: Yeah, your comment doesn't really help me. I guess what I'm asking is: when a node is listening to a topic, how would I have that node extract the most recent data from that topic and use it? Comment by Thomas D on 2012-07-03: You could just always go to your callback function when you receive a message, which will be the most recent one, and have a class variable you check at the beginning of the callback function to see if you should do anything with that data. Elsewhere in your code you can set the class variable. Answer: As Thomas suggested: run your callback all the time, and ignore messages (or parts of messages) you don't care about. Originally posted by joq with karma: 25443 on 2012-07-03 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Rydel on 2012-07-06: For anyone else who may see this in the future, I found ros::spinOnce was what I was looking for, but be careful if your message isn't being published very often.
Comment by joq on 2012-07-06: You should either run ros::spin() or call spinOnce() periodically at some appropriate rate. Comment by Rydel on 2012-07-06: Right, or if you're looking for a particular piece of the message to change, like a status bit or something, put ros::spinOnce() in a while loop, having the status bit break the loop
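The pattern Thomas and joq describe (always receive, cache the latest message, and let a flag decide whether to act) can be sketched independently of ROS. This is a pure-Python illustration; the ROS subscriber/spin boilerplate is omitted, and `Listener` and its fields are illustrative names, not ROS API:

```python
class Listener:
    """Caches the most recent message; a flag decides whether to act on it."""

    def __init__(self):
        self.latest = None       # most recent message seen on the topic
        self.interested = False  # set elsewhere, depending on program flow

    def callback(self, msg):
        # In ROS this would run for every incoming message on the topic.
        self.latest = msg        # always keep the newest data around
        if self.interested:
            self.handle(msg)     # only act when this node currently cares

    def handle(self, msg):
        print("acting on:", msg)

listener = Listener()
listener.callback("status=idle")    # ignored, but cached as the latest value
listener.interested = True
listener.callback("status=moving")  # now acted upon
print("latest:", listener.latest)
```

With this structure, code elsewhere can read `listener.latest` "on demand" at any time, which is effectively the subscribing counterpart to publish-on-demand.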
{ "domain": "robotics.stackexchange", "id": 10036, "tags": "ros, subscribe, publish" }
STL files are too big in rviz
Question: I'm putting my STL files in my URDF file, but when I see them in rviz they are too large. Can someone help me with it? Originally posted by yams on ROS Answers with karma: 1 on 2017-02-05 Post score: 0 Answer: ROS uses SI units for everything, so also in URDF (for link lengths) and for meshes. See REP-003, section Units. You'll have to make sure your meshes use metres as their unit. Meshes exported by CAD tools often use millimetres, so they will be 1000 times too large. If you don't want to or can't rescale your meshes, you could use the scale attribute on the mesh element in your URDF. See wiki/urdf/XML/link - Elements for more information. Originally posted by gvdhoorn with karma: 86574 on 2017-02-05 This answer was ACCEPTED on the original site Post score: 0
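For a mesh exported in millimetres, a scale of 0.001 on each axis brings it down to metres. A sketch of the relevant URDF fragment (the package and file names are placeholders):

```xml
<link name="base_link">
  <visual>
    <geometry>
      <!-- Mesh was exported in mm, so scale each axis by 0.001 to get metres -->
      <mesh filename="package://my_robot/meshes/base.stl" scale="0.001 0.001 0.001"/>
    </geometry>
  </visual>
</link>
```

The same scale attribute would normally also be applied to the mesh in the link's collision element, so the collision geometry matches the visual.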
{ "domain": "robotics.stackexchange", "id": 26932, "tags": "ros" }
Why are Peskin & Schroeder taking functional derivatives of the Lagrangian density when it is not a functional?
Question: I have some doubts regarding equation 11.58 (see below) in the QFT book by Peskin and Schroeder. If I understand correctly, they are expanding the Lagrangian density about $\phi_{\text{cl}}$ by writing $\phi(x) = \phi_{\text{cl}}(x) + \eta(x)$, after which they do the following expansion: $$ \mathcal{L}_1[\phi] = \mathcal{L}_1[\phi_\text{cl}] + \int d^4x\; \frac{\delta\mathcal{L}_1}{\delta\phi(x)} \eta(x) + \int d^4x \; d^4y\; \frac{1}{2!} \frac{\delta^2\mathcal{L}_1}{\delta\phi(x)\delta\phi(y)} \eta(x) \eta(y) + \cdots,$$ where all the functional derivatives are evaluated at $\phi_\text{cl}$. My question is: the Lagrangian density is simply a function of $\phi$, not a functional (cf. e.g. this Phys.SE post), so why are we taking functional derivatives instead of 'ordinary' derivatives? Also, if the formula above is correct, shouldn't there be a double integral in the term linear in $\eta(x)$ in Eq. 11.58? Quoted from p. 371 of Peskin and Schroeder's QFT book: [the original question showed equation (11.58) here as an image] Answer: OP has a point. If $$S_1~:=~\int d^4x ~\mathcal{L}_1\tag{*}$$ denotes the corresponding action functional, then the 3 last appearances of $\mathcal{L}_1$ in eq. (11.58) should strictly speaking be $S_1$, not $\mathcal{L}_1$. Be aware that such abuse of notation as in eq. (11.58) is quite common. See also e.g. this related Phys.SE post.
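With the action $S_1 = \int d^4x\,\mathcal{L}_1$ from (*) substituted for the loose uses of $\mathcal{L}_1$, the expansion becomes a standard functional Taylor series. A sketch of the corrected form, term by term matching the expansion quoted in the question:

```latex
S_1[\phi_{\rm cl} + \eta]
  \;=\; S_1[\phi_{\rm cl}]
  \;+\; \int d^4x\;
        \left.\frac{\delta S_1}{\delta\phi(x)}\right|_{\phi_{\rm cl}} \eta(x)
  \;+\; \frac{1}{2!}\int d^4x\, d^4y\;
        \left.\frac{\delta^2 S_1}{\delta\phi(x)\,\delta\phi(y)}\right|_{\phi_{\rm cl}}
        \eta(x)\,\eta(y)
  \;+\; \cdots
```

Here each order-$n$ term carries exactly $n$ spacetime integrals, one per functional derivative, so the linear term has a single integral and the quadratic term a double one.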
{ "domain": "physics.stackexchange", "id": 38402, "tags": "quantum-field-theory, lagrangian-formalism, field-theory, variational-calculus, functional-derivatives" }
In a three-phase power system, can damage occur from incorrectly connected phases?
Question: A three-phase generator, no grid involved: phase 1 is connected to phase 3 on the load, and phase 3 is connected to phase 1 on the load. What is going to happen? Answer: What will happen is that any three-phase AC motors powered locally will reverse their rotation direction. That might break some items, and will definitely inconvenience folk with right-handed twist drills, when their drill press, under power, can only back out of the hole. Table saws will aim their gums at the cut instead of their teeth. Big fans will stop exhausting and start intaking. Don't use the elevator. Reversing a squirrel-cage blower motor might make bad noises and lose efficiency, but it will still blow forward. Sump pumps, too, will still pump in the right direction. A few 'starter' circuits consist of auxiliary motors of varieties other than three-phase AC, and the large motors that use those starters will spin up as usual, then try to change direction. Expect to replace fuses.
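Why swapping two phases reverses rotation can be seen from the phase sequence. With the three phases at 0°, −120° and −240°, the hookup order 1-2-3 steps by −120° between consecutive terminals (one rotation direction of the stator field), while 3-2-1 steps by +120° (the opposite direction). A small illustrative sketch, not a power-engineering tool:

```python
import cmath
import math

def phase_step(order, angles_deg=(0.0, -120.0, -240.0)):
    """Angular step (degrees) between the first two phases in the given
    hookup order; its sign indicates the rotation direction of the field."""
    a = [angles_deg[i] for i in order]
    # Divide the phasors so the step wraps correctly modulo 360 degrees.
    step = cmath.phase(cmath.exp(1j * math.radians(a[1])) /
                       cmath.exp(1j * math.radians(a[0])))
    return round(math.degrees(step), 6)

print(phase_step((0, 1, 2)))  # normal hookup: negative step, normal rotation
print(phase_step((2, 1, 0)))  # phases 1 and 3 swapped: positive step, reversed rotation
```

Swapping any two of the three phases flips the sign of the step, which is why the motors in the answer all run backwards while purely resistive loads are unaffected.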
{ "domain": "physics.stackexchange", "id": 29823, "tags": "power" }