How can it be that there is a series of integrals in Fourier Series if it’s a projection on a continuous basis?
Question: If the process of finding a Fourier coefficient is finding the projection of a signal onto a member of an orthonormal basis, a basis which is continuous in frequency, how can it be that the Fourier coefficient is an integral while the signal representation, the sum of these coefficients multiplied by the basis members, is a series? The basis members are continuous; how can they be discrete in the signal representation while being continuous in the coefficient representation? Logic says: if we took the projection in a continuous space, we should continue in that continuous space when we multiply by the projection. Why do we have a mix of an integral and a series then? How can there be a “series of integrals” in a Fourier Series if Fourier analysis is just the operation of projecting onto a continuous basis and multiplying the projections by the (continuous) basis members once again? Answer: The short answer is that the sinusoidal components are only orthogonal if their frequencies are integer multiples of each other. Any other choice of frequency would not result in $\langle s_1, s_2 \rangle = 0$, which can easily be confirmed by comparing such cases (integer- and non-integer-related sinusoids). Thus the choice of basis signals is discrete in units of their frequency (the frequency domain), but the time-domain waveform that each of these discrete frequency values represents is continuous. The frequency-domain signal can still be a continuous signal if we allow zero-valued samples, as it is just the non-zero values that are necessarily discrete. That said, we can use discrete signals to describe the waveform in frequency without any loss of information. The intuitive view starts from understanding that the Fourier Series Expansion states that we can represent a continuous-time single-valued function (with no discontinuities) of duration $T$ as a series of sinusoidal components (sines and cosines), each with a frequency of $1/T$ or an integer multiple thereof.
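The orthogonality claim is easy to check numerically. A small sketch (the frequencies and grid size are arbitrary illustrative choices):

```python
import numpy as np

T = 1.0                            # period of the analysis interval
t = np.linspace(0.0, T, 100001)

def trap(y, x):
    # simple trapezoidal integration
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def inner(f1, f2):
    # <s1, s2> over one period for two sinusoids
    s1 = np.sin(2*np.pi*f1*t)
    s2 = np.sin(2*np.pi*f2*t)
    return trap(s1*s2, t)

# integer multiples of 1/T are orthogonal; others generally are not
print(inner(2.0, 3.0))    # essentially 0
print(inner(2.0, 2.7))    # clearly nonzero
```

A sinusoid's inner product with itself over one period is $T/2$, so `inner(3.0, 3.0)` returns about 0.5 here.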
Consider such an arbitrary function below: Mathematically we need not limit time to $T$ in the reconstruction; time can extend to $\pm\infty$, and as long as we allow the signal in time to be periodic with period $T$, we will achieve the same result as derived in the Fourier Series Expansion. Since the waveform repeats exactly in time over every interval $T$, we need only express the result over one period $T$, but extending it this way provides further insight into the Fourier Series Expansion, such as the inability to converge if there is a discontinuity between the end and the beginning of the interval. Here in the graphic I purposely made that boundary continuous so that the repeated waveform has no discontinuity at each transition; thus this is an example where the expansion will converge everywhere from $0$ to $T$, whereas in general such continuity would not exist and you will typically see severe ringing at the start and end of the reconstructed waveform. Thus the waveform components in the decomposition MUST also be cyclical over the interval $T$; otherwise they cannot possibly converge to the same waveform over the next period of $T$, meaning the signal would not be periodic over $T$. (As demonstrated with the possible first two components of the illustrated waveform as shown below.) It is periodicity in the time domain specifically that is associated with discrete values of frequency in the frequency domain, since any other frequencies, if they existed, would destroy the periodicity over that interval: in order to be discrete in frequency, periodicity must exist in the time domain. Consider a sine wave itself, which is periodic in time and exists at one frequency. As demonstrated with the expansion, an arbitrary periodic waveform can have multiple frequency components, but they must all be integer multiples of the fundamental component at frequency $1/T$.
I like to explain it like this: if I could call out your name exactly once per second, and exactly the same way each time, the resulting frequencies of my voice would only exist at 1 Hz and integer multiples of 1 Hz, all the way out to the 4 kHz or so where spectral content exists. This also works in reverse: periodicity in frequency is associated with discrete values in time (sampling). When we sample a time-domain signal, we end up with multiple copies of the signal spectrum in frequency, spaced by the sampling rate. Same thing: in this case the frequency spectrum is continuous and periodic while the time-domain signal is discrete. The DFT is an example that is discrete both in time and frequency, and therefore there is an implied periodicity both in time and in frequency!
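A quick numerical illustration of time-domain periodicity forcing a discrete spectrum (the waveform, sample rate, and period are arbitrary choices):

```python
import numpy as np

fs = 1000.0                        # sample rate, Hz
T = 1.0                            # waveform period, s
t = np.arange(0, 4*T, 1/fs)        # four full periods

# arbitrary T-periodic waveform: harmonics of 1/T = 1 Hz
x = np.sin(2*np.pi*1.0*t) + 0.5*np.sin(2*np.pi*3.0*t + 0.4)

X = np.abs(np.fft.rfft(x)) / len(x)
f = np.fft.rfftfreq(len(x), 1/fs)

peaks = f[X > 1e-6]                # frequencies carrying energy
print(peaks)                       # only 1 Hz and 3 Hz appear
```

All the energy sits on integer multiples of $1/T$; every other frequency bin is zero to numerical precision.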
{ "domain": "dsp.stackexchange", "id": 8773, "tags": "fourier, fourier-series, periodic, projection" }
Deriving the expression for energy density per wavelength
Question: Energy density per frequency is defined by the Planck formula as: $$u(\nu,T)=\frac{8\pi h}{c^3} \frac{\nu^3}{e^{\frac{h\nu}{kT}}-1}$$ The relation between the wavelength, $\lambda$, and the frequency, $\nu$, of a wave in vacuum is given by: $$c=\lambda \nu$$ And the relation between the energy density per frequency, $u(\nu,T)$, and the energy density per wavelength, $w(\lambda,T)$, is expressed as: $$w(\lambda,T)d\lambda=u(\nu,T)d\nu$$ So, $w(\lambda,T)=\frac{d\nu}{d\lambda}u(\nu,T)$. I've seen in books that it's supposed to be the absolute value of $\frac{d\nu}{d\lambda}$, $\left|\frac{d\nu}{d\lambda}\right|$, instead of $\frac{d\nu}{d\lambda}$ as I wrote in the equation. But why? Answer: The reason is that $\lambda$ is a decreasing function of $\nu$, so that if $d\nu$ is positive then $d\lambda$ is (at least formally) negative, but we explicitly want to not care about that. We want $u(\nu,T)d\nu$ to be the energy content per non-directed unit frequency, and ditto for $w(\lambda,T)d\lambda$, and the absolute value ensures that that is the case. More specifically, we want to use $u(\nu,T)$ to get the energy content between frequencies $\nu_1$ and $\nu_2>\nu_1$ as $$ U(\nu_1,\nu_2,T)=\int_{\nu_1}^{\nu_2}u(\nu,T)d\nu $$ and we similarly want to use $w(\lambda,T)$ to get the energy content between wavelengths $\lambda_2=c/\nu_2$ and $\lambda_1=c/\nu_1>\lambda_2$ (note the changed order) as $$ W(\lambda_1,\lambda_2,T)=\int_{\lambda_2}^{\lambda_1}w(\lambda,T)d\lambda, $$ and we want both contents to be equal and positive. However, if you do the change of variable you get $$ W(\lambda_1,\lambda_2,T) =\int_{\lambda_2}^{\lambda_1}w(\lambda,T)d\lambda =\int_{\nu_2}^{\nu_1}w(\lambda,T)\frac{d\lambda}{d\nu}d\nu =\int_{\nu_1}^{\nu_2}w(\lambda,T)\left|\frac{d\lambda}{d\nu}\right|d\nu, $$ with the absolute value coming from the minus sign in switching the limits of integration and the fact that $d\lambda/d\nu$ is negative.
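The equality of the two energy contents is easy to verify numerically. A sketch with an arbitrary temperature and frequency band (SI-unit constants; here $|d\nu/d\lambda| = c/\lambda^2$):

```python
import numpy as np

h = 6.626e-34; c = 2.998e8; k = 1.381e-23
T = 5000.0                     # arbitrary temperature, K

def u_nu(nu):
    # Planck energy density per unit frequency
    return (8*np.pi*h/c**3) * nu**3 / np.expm1(h*nu/(k*T))

def w_lam(lam):
    # energy density per unit wavelength: |dnu/dlam| = c/lam**2
    return u_nu(c/lam) * c/lam**2

def trap(y, x):
    # simple trapezoidal integration
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

nu1, nu2 = 4e14, 6e14
nus = np.linspace(nu1, nu2, 200001)
U = trap(u_nu(nus), nus)

lam2, lam1 = c/nu2, c/nu1      # note the reversed ordering
lams = np.linspace(lam2, lam1, 200001)
W = trap(w_lam(lams), lams)

print(U, W)                    # the two energy contents agree
```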
{ "domain": "physics.stackexchange", "id": 24930, "tags": "quantum-mechanics, energy, wavelength" }
rostopic: publish new data every 1 second
Question: Hey all, I would like to publish a rostopic that contains certain data every 1 second, and I would like the data to change with every publish. Maybe read from a text file and publish line by line every 1 second, or publish the output of a math function ... How can I do that? I'm only using a cmd like this (rostopic pub my_topic std_msgs/String "hello there") and I can't figure it out; I've been looking for a solution online but couldn't find it. Please help me. Thank you. Originally posted by Primo on ROS Answers with karma: 5 on 2020-04-26 Post score: 0 Answer: I would like to publish a rostopic that contains certain data every 1 second [..] I'm only using a cmd like this (rostopic pub my_topic std_msgs/String "hello there") This is not something rostopic is designed for. You'll have to write a node for this. Originally posted by gvdhoorn with karma: 86574 on 2020-04-27 This answer was ACCEPTED on the original site Post score: 0
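For reference, a minimal sketch of such a node in rospy (ROS 1); the topic name and file path are placeholders, and the helper that splits the file into messages is a hypothetical name:

```python
# Sketch of a ROS 1 node that publishes one line of a text file per second.
def lines_to_publish(text):
    # one message per non-empty line
    return [line.strip() for line in text.splitlines() if line.strip()]

def main():
    import rospy                       # ROS imports kept inside main()
    from std_msgs.msg import String
    rospy.init_node('line_publisher')
    pub = rospy.Publisher('my_topic', String, queue_size=10)
    rate = rospy.Rate(1)               # 1 Hz
    with open('data.txt') as f:
        lines = lines_to_publish(f.read())
    for line in lines:
        if rospy.is_shutdown():
            break
        pub.publish(String(data=line))
        rate.sleep()

# call main() from your node's entry point
```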
{ "domain": "robotics.stackexchange", "id": 34839, "tags": "ros, ros-melodic, frequency" }
Amplitude not attenuated after bandpass filtering?
Question: I want to make a bandpass filter for an IQ signal recorded from a Software Defined Radio with centre frequency "Fc" and sampling frequency "Fs". For further analysis I wanted to focus on the signal received in a fixed frequency range and filter out the rest of the frequencies. I generated a "sinc" function using Python SciPy firwin, then I convolved it with my signal using SciPy fftconvolve. But as you see in the plots, the amplitude of the frequency bins outside the passband of my filter is still not attenuated. Approach: Read the data, convert it from uint8 and store it in complex notation. Generate a "sinc" function of width half the difference of the cutoff frequencies, e.g. for 20 MHz to 40 MHz the width of the filter will be 10 MHz. Since "sinc" is symmetric it will filter frequencies from -10 MHz to +10 MHz. Move the centre of the filter to the mean of the cutoff frequencies using the frequency shift property. Convolve the filter with the data. Relevant Code:

data_raw = signal.flatten()
data_raw = data_raw - 127.5
data = np.empty(data_raw.shape[0]//2, dtype=np.complex128)
data.real = data_raw[::2]
data.imag = data_raw[1::2]
nyq = 0.5 * fs  # Normalized frequency to be used for frequency shift
centre = (highcut + lowcut)/2 - fc
width = np.ceil((highcut - lowcut)/2)
h = firwin(513, width, nyq=nyq, window='hanning')
h = np.append(h, np.zeros(1024 - len(h)))
t = np.arange(len(h))
h = h*(np.exp(1j*2*np.pi*t*(centre/fs)))
filtered_data = fftconvolve(data, h)

Details: Fcentre = 137.65 MHz, Fsample = 2 MHz, Lowcut = 137.65 MHz, Highcut = 138.15 MHz. Plots: Firwin filter linear plot; Firwin filter semilog plot; Output after applying filter. Answer: You seem to be aware of the Fourier Transform pair that relates a sinc function in the time domain to a rectangle function in the frequency domain, but you don't seem to be able to correctly apply this to your situation. I am going to take one more stab at answering you. Let me know if you have questions.
A sinc filter can be designed using the relationship \begin{equation} \DeclareMathOperator{\sinc}{sinc} \DeclareMathOperator{\rect}{rect} 2 B \sinc\left(2 B t\right) \iff \rect\left(\frac{f}{2 B}\right) \end{equation} where $\sinc(x) = \frac{\sin(\pi x)}{\pi x}$. This shows us that the desired low-pass bandwidth $B$ must be used in our expression to generate the sinc samples. Note that a modulated sinc has a bandwidth equal to $2B$, whereas a sinc centered at DC has a bandwidth of $B$. In your example, you define several quantities. I am going to give these quantities symbols so I can do some math on them. Fcentre = 137.65 MHz: I'll call this $f_c$. Fsample = 2 MHz: I'll call this $f_s$. Lowcut = 137.65 MHz: I'll call this $f_l$. Highcut = 138.15 MHz: I'll call this $f_h$. For you to generate the sinc with the correct bandwidth, you would sample the sinc function above with \begin{equation} B = \frac{f_h - f_l}{2}. \end{equation} This means that your bandpass filter is given as \begin{equation} h[n] = \left(f_h - f_l\right) \sinc\left(\frac{f_h - f_l}{f_s} n\right) \exp\left(j \pi \frac{f_h + f_l - 2 f_c}{f_s} n \right) \end{equation} where I have ignored the required time shift to make the filter causal (implementable). The amount you shift it by will be determined by the length of the filter. The length of the filter will be determined by how much stopband attenuation you want. Note that no finite-length filter will yield zero gain in the stopband, and all filters have a transition band (a band where the amplitude gradually goes from the desired passband value to the designed stopband value). You can't avoid this. As a side note, you could just as easily have used the Remez Exchange algorithm (a.k.a. Parks-McClellan) to obtain an optimal filter. The window method is computationally simple, but it is in no sense optimal.
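As a sanity check of the design equations above, here is a sketch that builds the modulated, Hann-windowed sinc with the question's numbers and inspects its response. The filter length and FFT size are arbitrary choices, and a normalization by $f_s$ (not in the formula above) is added so the passband gain comes out near 1:

```python
import numpy as np

fs = 2e6                          # sample rate from the question
fc = 137.65e6                     # SDR centre frequency
fl, fh = 137.65e6, 138.15e6       # desired band edges

N = 513
n = np.arange(N) - (N - 1)//2     # centred index -> linear-phase prototype
B2 = (fh - fl)/fs                 # normalized two-sided bandwidth 2B/fs
h = B2 * np.sinc(B2 * n)          # low-pass prototype, passband gain ~1
h = h * np.hanning(N)             # window to control stopband ripple
h = h * np.exp(1j*np.pi*(fh + fl - 2*fc)/fs * n)   # shift to band centre

H = np.fft.fftshift(np.fft.fft(h, 8192))
f = np.fft.fftshift(np.fft.fftfreq(8192, 1/fs))

passband = (f > 0.1e6) & (f < 0.4e6)   # shifted band spans 0..0.5 MHz
stopband = (f < -0.1e6) | (f > 0.6e6)
print(np.abs(H[passband]).max())       # close to 1
print(np.abs(H[stopband]).max())       # small (Hann: roughly -44 dB)
```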
{ "domain": "dsp.stackexchange", "id": 5248, "tags": "filters, python, fast-convolution" }
What does AI software look like, and how is it different from other software?
Question: What does AI software look like? What is the major difference between AI software and other software? Answer: Code in AI is not in principle different from any other computer code. After all, you encode algorithms in a way that computers can process them. Having said that, there are a few points where your typical "AI code" might be different: A lot of (especially early) AI code was more research-based and exploratory, so certain programming languages were favoured that were not mainstream for, say, business applications. For example, much work in early AI was coded in Lisp, and probably not much in Fortran or Cobol, which were more suited to engineering or business. Special languages were developed to make it easy to program with symbols and logic (eg Prolog). The emphasis was more on algorithms than on clever/complex programming. If you look at the source code for ELIZA (there are multiple implementations in many different languages), it's really very simple. Before the advent of neural networks and (statistical) machine learning, most AI programming was symbolic, so there wasn't much emphasis on numerical computing. This changed as probabilities and fuzziness were increasingly used, but even when using general-purpose languages there would be fewer numerical calculations. Self-modifying code is inherently complex; while eg Lisp makes no distinction between code and data (at least not in the same way as eg C or Pascal), self-modification would just complicate development without much gain. Perhaps in the early days this was necessary, when computers had precious little memory and power and you had to work around those constraints, but these days I don't think anybody would use such techniques anymore.
As modern programming languages evolved, Lisp and Prolog (which were the dominant AI languages until probably 20 to 30 years ago) have slowly been replaced by eg Python, probably because it is easier to find programmers comfortable with an imperative paradigm than with a functional one. In general, interpreted languages would be preferred over compiled ones due to speed of development, unless performance is important. The move to deep learning has of course shifted this a lot. Now the core processing is all numeric, so you would want languages that are better at calculations than at symbol handling. Interpreted languages would now mainly make up the 'glue' code to interface between compiled modules, and be used for data pre-processing. So current AI code is probably not really that different from code used in scientific computing these days. There is of course still a difference between R&D and production code. You might explore a subject using an interpreted language, and then re-code your algorithm for production in a compiled language to gain better performance. This depends on how established the area is; there will, for example, be ready-made libraries available for neural networks or genetic algorithms, which are well-established algorithms (where performance matters). In conclusion: I don't think AI code is any more complex than any other code. Of course, that's not very exciting to portray in a film, so artistic licence is used to make it more interesting. I guess self-modifying code also enables the machines to develop their own consciousness and take over the world, which is even more gripping as a story element. However, given that a lot of behaviour nowadays resides in the (training/model/configuration) data rather than in the algorithm, this might even be more straightforward to modify. Note: this is a fairly simplified summary based on my own experience of working in AI; other people's views might vary, without either being 'wrong'.
Update 2021: I now work at a company that extracts business information/events from news data on a large scale using NLP methods. And we're using Lisp... so it's still in active, commercial use in AI.
{ "domain": "ai.stackexchange", "id": 1515, "tags": "comparison, implementation" }
Haskell marking procedure for non-unique lists
Question: I'm writing a function for generating solutions for a backtracking search problem. To that end, I need to mark an item from a list by removing it from that list, and placing it in a second list. So I have a pair of lists: non-marked items, marked items; and my method generates all distinct list pairs of possible markings. Because the list may contain duplicates, I'm selecting the marked item via the index. Example:

mark 0 ([1,2,2],[]) == ([2,2],[1])
selections ([1,2,2],[]) == [([2,2],[1]),([1,2],[2])]

Code so far:

mark :: Int -> ([a], [a]) -> ([a], [a])
mark i (src,tgt) = (src',tgt')
  where
    src' = let (ys,zs) = splitAt i src in ys ++ tail zs
    tgt' = tgt ++ [e]
    e = src !! i

selections :: Eq a => ([a],[a]) -> [([a],[a])]
selections pair@(left,_) = nub [ mark i pair | i <- [0 .. length left - 1] ]

I'm not happy with the implementation: it seems crude, looks ugly, and I think it's obvious that someone with a background in imperative languages wrote this function. Can this be solved more elegantly, with Array or other list mechanisms, e.g. a fold? Answer: Found a deceptively simple solution.

selections (left,right) = nub [(delete o left, right ++ [o]) | o <- left]

List comprehensions are great.
{ "domain": "codereview.stackexchange", "id": 13230, "tags": "haskell" }
gmapping Map not updating correctly
Question: Hi everybody. I am having an issue with gmapping. I have a simulated Rover that is publishing LaserScan messages and odometry data. Basically I am seeing that when I drive my Rover next to an obstacle in a straight line, the occupied cells created by gmapping (seen in rviz) move with the Rover. I have followed REP-103 and REP-105 and the "Setting up your robot using tf" tutorial as closely as I can tell. Sorry for the long post. For tf frames I have:

geometry_msgs::Quaternion base_quat = tf::createQuaternionMsgFromYaw(0.0);
geometry_msgs::TransformStamped base_trans;
base_trans.header.stamp = current_time;
base_trans.header.frame_id = "odom";
base_trans.child_frame_id = "base_link";
base_trans.transform.translation.x = current_Northing_m;
base_trans.transform.translation.y = current_Easting_m;
base_trans.transform.translation.z = -0.2794;
base_trans.transform.rotation = base_quat;
base_broadcaster.sendTransform(base_trans);

geometry_msgs::Quaternion scan_quat = tf::createQuaternionMsgFromYaw(0.0);
geometry_msgs::TransformStamped scan_trans;
scan_trans.header.stamp = ros::Time::now();
scan_trans.header.frame_id = "base_link";
scan_trans.child_frame_id = "laser";
scan_trans.transform.translation.x = 0.0;
scan_trans.transform.translation.y = 0.0;
scan_trans.transform.translation.z = 0.3556;
scan_trans.transform.rotation = scan_quat;
scan_broadcaster.sendTransform(scan_trans);

And my frame links: As you can see I am not doing anything with rotation yet. And here are some pictures of my simulated Rover in my GUI, along with the output of rviz. When I start my program: And now when the Rover is driven alongside the obstacle: And now when the Rover is driven well past the obstacle: As you can see, the map is shifting with the Rover.
And here is some output of the gmapping package:

Odom Pose Theta: 0.000000
MPose Theta: 0.430916
GOT MAP Center x: -0.400000 y: -0.400000
update frame 3624 update ld=2.91054e-14 ad=0
Laser Pose= 7.70492 -3 0
m_count 108
Average Scan Matching Score=499.238
neff= 29.9807
Registering Scans:Done

It is worth mentioning that in rviz, when I drive the Rover forwards, the red axis on the tf frame is the one moving in the correct direction. Originally posted by davidgitz on ROS Answers with karma: 26 on 2015-10-10 Post score: 0 Answer: So the answer was to add the fix described at http://answers.ros.org/question/83566/gmapping-no-transform-from-map-odom_combined/, which then creates a map that doesn't move with the Rover and is re-generated with new scan data. However for my application this won't work, apparently. I am using sonar sensors (not LIDAR) and the map it creates is just not very useful. I knew this would be an issue from the start but hoped otherwise. I will create my own occupancy grid generator. Thanks for your help! Originally posted by davidgitz with karma: 26 on 2015-10-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 22764, "tags": "navigation, laserscan, gmapping, transform" }
If two stars collide, what is the probability that they merge to form a single star?
Question: After looking at What are the odds that the Sun hits another star? and answering it (crudely), now I'd like to ask the following: What is the probability that if two stars collide, their cores merge to form one larger, more massive star? Answer: Fairly good. Two stars of mass $M$ falling from infinity straight towards each other until they merge at distance $2R$ will gain kinetic energy $GM^2/2R$. This is a lot; for two suns it is $1.8978\times 10^{41}$ J. However, compared to the binding energy of even a single star, $\approx 3GM^2/5R$, this is less (the Sun has binding energy $2.2774\times 10^{41}$ J, and a merged star of double mass and the same density, with radius $2^{1/3}R$, has $7.2302\times 10^{41}$ J, 3.17 times more). So there is not enough energy released to blow up the merged star, but it is about a quarter of that star's binding energy: a lot of matter is going to get ejected or end up in orbits through a heated envelope that will take a while to simmer down. The key issue is whether the cores get slowed down enough by the encounter to remain bound, becoming a common-envelope binary. A direct hit clearly would work, but glancing collisions may allow the cores to miss each other: now the question is whether the envelope can absorb enough kinetic energy. A rough estimate is that there is significant slowing if the mass scooped up/pushed aside, $\pi r_{core}^2 \rho_{envelope} R$, becomes comparable to $m_{core}$. For two Sun-like stars with $r_{core}=0.2R_\odot$ this seems to happen, but much hydrodynamics may occur, complicating things. Glebbeek's dissertation on stellar mergers estimates a rough condition for the orbital angular momentum to exceed the maximum spin angular momentum of the merged star as $$\frac{r_p}{R_1+R_2} > k^4\frac{(1+q)^{\xi+4}}{2q^2}$$ where $k^2\approx 0.05$, $\xi\approx 0.6$, $r_p$ is the periastron distance, and $q=M_2/M_1$. This is typically exceeded: there is a lot of angular momentum that needs to be shed (for example by blowing off a lot of heated gas).
For example, two sun-like stars having $r_p=R_\odot/2$ has a LHS of 1/4 and a RHS of 0.0303. That dissertation also contains numerical simulations of various merger cases.
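The numbers quoted above can be reproduced with a few lines (nominal solar values; binding energies for the uniform-density approximation used in the answer):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # solar mass, kg
R = 6.957e8          # solar radius, m

E_kin = G*M**2/(2*R)                        # infall from infinity to separation 2R
E_bind = 3*G*M**2/(5*R)                     # single sun, uniform density
E_merged = 3*G*(2*M)**2/(5*(2**(1/3))*R)    # mass 2M, same density -> radius 2^(1/3) R

print(E_kin, E_bind, E_merged)
print(E_kin/E_merged)    # roughly a quarter of the merged star's binding energy
```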
{ "domain": "astronomy.stackexchange", "id": 5231, "tags": "star, orbit, collision, impact-probability" }
Velocity of a viscous fluid through a tube
Question: When a viscous liquid flows through a tube in a laminar flow, why is its velocity highest at the center? I understand the concept of shear viscosity and why the liquid in contact with a moving plate has the highest velocity. That's because of viscosity itself. But I cannot understand how it explains liquid flow in a tube, or whether it explains it at all. Answer: When a viscous liquid flows through a tube in a laminar flow, why is its velocity highest at the center? Because the boundary condition is that it's zero at the walls? Does that answer your confusion? It shouldn't seem surprising that the point furthest from the walls (the center) has the highest velocity, considering that the fluid exactly at the wall is at exactly zero velocity. I believe it's the Hagen-Poiseuille equation that handles the specifics. With laminar flow, we're talking about equations that are 100% solvable analytically. That solution is: $$ v = - \frac{1}{4 \eta} \frac{\Delta P}{\Delta x} (R^2 - r^2) $$ This equation uses $r$ for the distance from the center. When that is zero, you're at the center-line, and the above expression has its highest value. You can go look up the exact steps for how to get this expression from the fluid momentum equation itself, which is more or less a statement of the definition of viscosity.
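A short sketch evaluating that parabolic profile (the viscosity, pressure gradient, and radius are arbitrary illustrative values):

```python
import numpy as np

eta = 1.0e-3        # viscosity, Pa*s (roughly water)
dPdx = -100.0       # pressure gradient along the tube, Pa/m (negative: flow in +x)
R = 0.01            # tube radius, m

r = np.linspace(0.0, R, 101)
v = -(1/(4*eta)) * dPdx * (R**2 - r**2)

print(v[0], v[-1])  # maximum on the axis, exactly zero at the wall
```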
{ "domain": "physics.stackexchange", "id": 10011, "tags": "fluid-dynamics, velocity, flow, viscosity" }
Number of maximal PAIRS-values
Question: I had an interview question like this: In a company there are different people. One can measure how well they suit pair coding as follows: First, compute the PAIRS-value, which counts the letters P, A, I, R, S in the two people's names. Then compute the sums of adjacent numbers, taking the digit sum of any result that is over 9. Repeat this until you have only two digits, which measure the goodness of the pair. For example, "Erkki Esimerkki" and "Matti Meikäläinen" have PAIRS-value 58: First [P = 0, A = 1, I = 6, R = 2, S = 1] and then [0 + 1 = 1, 1 + 6 = 7, 6 + 2 = 8, 2 + 1 = 3], [1 + 7 = 8, 7 + 8 = 15 => 1 + 5 = 6, 8 + 3 = 11 => 1 + 1 = 2], [8 + 6 = 14 => 1 + 4 = 5, 6 + 2 = 8], [5, 8] = 58%. Now, many worker names are given in http://reaktor.fi/wp-content/uploads/2014/08/fast_track_generoitu_nimilista.txt . Find the number of perfect pairs, i.e. those who have PAIRS-value 99%.
Is there a faster way to solve this than the following code?

import urllib.request

def sum_digits(number):
    while number > 9:
        temp = number
        y1 = number//10
        y2 = temp % 10
        number = y1 + y2
    return number

def compute_percent(pairs):
    x1 = sum_digits(pairs[0] + pairs[1])
    x2 = sum_digits(pairs[1] + pairs[2])
    x3 = sum_digits(pairs[2] + pairs[3])
    x4 = sum_digits(pairs[3] + pairs[4])
    x5 = sum_digits(x1 + x2)
    x6 = sum_digits(x2 + x3)
    x7 = sum_digits(x3 + x4)
    x8 = sum_digits(x5 + x6)
    x9 = sum_digits(x6 + x7)
    return 10*x8 + x9

if __name__ == '__main__':
    address1 = 'http://reaktor.fi/wp-content/uploads/2014/08/'
    address2 = 'fast_track_generoitu_nimilista.txt'
    address = address1 + address2
    agents1 = 'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 '
    agents2 = '(KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36'
    agents = agents1 + agents2
    req = urllib.request.Request(address, headers={'User-Agent': agents})
    r = urllib.request.urlopen(req)
    datas = r.read().decode('utf8')
    name = ""
    names = list()
    matches = list()
    for i in range(0, len(datas)):
        if datas[i] == "\n":
            names.append(name)
            name = ""
        else:
            name += datas[i]
    for i in range(0, len(names)):
        for j in range(i + 1, len(names)):
            common = names[i].lower() + names[j].lower()
            pairs = [0] * 5
            for k in range(0, len(common)):
                if common[k] == "p":
                    pairs[0] += 1
                if common[k] == "a":
                    pairs[1] += 1
                if common[k] == "i":
                    pairs[2] += 1
                if common[k] == "r":
                    pairs[3] += 1
                if common[k] == "s":
                    pairs[4] += 1
            for k in range(0, len(pairs)):
                pairs[k] = sum_digits(pairs[k])
            percent = compute_percent(pairs)
            if percent == 99:
                pari = "" + names[i] + " and " + names[j]
                matches.append(pari)
    matches = list(set(matches))
    print(len(matches))

Answer: This answer has two parts. The first part shows some suggestions for your code, keeping the same algorithmic performance. The second suggests an \$O(n)\$ algorithm (where yours is \$O(n^2)\$).
PART ONE The sum_digits function can be written as:

def sum_digits(n):
    if n > 0:
        n = n % 9
        if n == 0:
            n = 9
    return n

(this is something one can learn in algebra courses but which is very commonly used in elementary school, maybe under the name of "casting out nines") or, more literally:

def sum_digits(n):
    while n > 9:
        n = sum(map(int, str(n)))
    return n

The compute_percent(pairs) function would be more clearly implemented with a reduce(lst) defined as follows:

def reduce(lst):
    return [x + y for (x, y) in zip(lst[1:], lst[:-1])]

and then iterating:

def compute_percent(lst):
    while len(lst) > 2:
        lst = [sum_digits(x) for x in reduce(lst)]
    return lst[0]*10 + lst[1]

(the sum_digits step keeps every intermediate value a single digit, as the problem statement requires; by modulo-9 arithmetic it does not change the final result). The for loop iterating over the characters of common can be replaced by:

for i, c in enumerate('pairs'):
    pairs[i] = common.count(c)

All these simplifications do not change the algorithmic complexity of your code. PART TWO Your algorithm is \$O(n^2)\$ since you are iterating over all pairs of names in a huge list of length \$n\$. This can be terribly slow when \$n\$ becomes very large... so you should try to find out whether a single pass over the list is possible. The key point here is that the computation on a pair of persons only depends on the number of occurrences of the characters P, A, I, R, S in their names. So if two persons share the same count of P, A, I, R, S occurrences, they share the same perfect pairs (if they have any). So you can start by constructing a dictionary mapping each "PAIRS count" to the number of people with that count. Moreover, every PAIRS count can be reduced modulo 9 (because we use modulo-9 arithmetic), hence the number of items in the dictionary is bounded by \$9^5\$, which gives constant-time access and update of the dictionary. So the dictionary can be constructed by a linear sweep over the list of names in \$O(n)\$ time and constant memory. Once the dictionary is constructed, you can make the double iteration over the dictionary.
Every match in the keys of the dictionary must be multiplied by the counts of both "PAIRS" entries to get the count of actual people matching. Since the dictionary has fewer than 100000 entries, this counts as a constant-time operation. Of course, when \$n < 100000\$ this could be no better than your algorithm (but also no worse), but I would predict a gain even for smaller inputs, since real people's names are not that long and contain very few occurrences of the five chosen letters (imagine any name with a (9,9,9,9,9) PAIRS count), hence I expect a lot of collisions in the hash dictionary, which will significantly reduce the runtime of the algorithm. Possible implementation:

from collections import defaultdict
from itertools import combinations
import urllib2

def compute_key(name):
    return tuple(name.lower().count(c) for c in "pairs")

def reduce(n):
    while n > 9:
        n = sum(map(int, str(n)))
    return n

def is_perfect_match(key1, key2):
    key = [a + b for a, b in zip(key1, key2)]
    while len(key) > 2:
        key = [x + y for x, y in zip(key[1:], key[:-1])]
    return map(reduce, key) == [9, 9]

d = defaultdict(int)
url = 'http://reaktor.fi/wp-content/uploads/2014/08/fast_track_generoitu_nimilista.txt'
req = urllib2.Request(url, headers={'User-Agent': "Magic Browser"})
for name in urllib2.urlopen(req).readlines():
    d[compute_key(name)] += 1

count = 0
# compute count of matchings of names with different keys:
for key1, key2 in combinations(d, 2):
    if is_perfect_match(key1, key2):
        count += d[key1]*d[key2]
# compute count of matchings of names with equal keys:
# (a name cannot be matched with itself)
for key in d:
    if is_perfect_match(key, key):
        count += d[key]*(d[key]-1)
print count

The execution is much faster, even if the input is only 1000 names (1 s vs 1 min), and gives 5910 as a result (vs 995).
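As a cross-check of the rules themselves, the worked example from the question can be reproduced with a few lines (a compact reimplementation, not the reviewed code; for this example the initial counts are already single digits):

```python
def sum_digits(n):
    # digit sum until a single digit remains
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def pairs_value(name1, name2):
    common = (name1 + name2).lower()
    row = [common.count(c) for c in "pairs"]
    while len(row) > 2:
        row = [sum_digits(a + b) for a, b in zip(row, row[1:])]
    return 10*row[0] + row[1]

print(pairs_value("Erkki Esimerkki", "Matti Meikäläinen"))  # 58
```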
{ "domain": "codereview.stackexchange", "id": 9476, "tags": "python, interview-questions, python-3.x, http" }
At what scale does quantum mechanics start to influence a body's behaviour?
Question: What is the biggest body that shows quantum behaviour? An atom? All of them or are there ones that don't act in a quantum manner because of their size? Answer: Though it is not currently feasible to test objects of arbitrarily large size in (for example) a double slit experiment, the larger and larger we get we still see quantum mechanical effects. It seems unlikely (and, in my and many physicists' view unphysical) for there to ever be a cutoff where those effects just stop happening completely. If physicists say large objects don't behave quantum mechanically, what they really mean is generally that we just don't need to treat them with quantum mechanics because the effects are too small to notice anyway.
{ "domain": "physics.stackexchange", "id": 95765, "tags": "quantum-mechanics, experimental-physics" }
Testing after_save hooks in Rails 4.2 with MiniTest
Question: I've recently started working on a new application in Rails 4.2.0-beta2. I've been using this as an opportunity to learn more about MiniTest, with a stretch goal of being as strict with myself about testing as possible. My general problem: How do I test the after_save action for an ActiveRecord model in such a way that I can be confident that an ActiveRecord callback will not cause my code to break? Here's what I've got so far (gist, or inline code below). How would you (re)write these files to detect if the NotificationMailer called in the after_save hook prevented a TeamMembership from being persisted? Note that the relevant test fixtures are not included here. You may safely assume that fixtures users(:carol) and teams(:alpha) are not associated with each other.

app/models/team_membership.rb

class TeamMembership < ActiveRecord::Base
  # A proc that will enqueue `NotificationMailer.team_invitation`
  DEFAULT_NOTIFIER = proc do |team, user|
    NotificationMailer.team_invitation(team, user).deliver_later
  end

  class << self
    # This is a class level attribute that is mainly used for testing.
    # Defaults to {TeamMembership::DEFAULT_NOTIFIER}
    attr_accessor :notifier
  end
  self.notifier = DEFAULT_NOTIFIER

  belongs_to :team
  belongs_to :user

  after_create :invite_user_to_team!

  private

  # ActiveRecord callback used to enqueue team invitation emails
  # @return void
  def invite_user_to_team!
    self.class.notifier.call(team, user)
    nil
  end
end

tests/models/team_membership_test.rb

require 'test_helper'

class TeamMemberTest < ActiveSupport::TestCase
  test 'callbacks' do
    # setup
    test_notifier = Minitest::Mock.new
    test_notifier.expect(:call, nil, [teams(:alpha), users(:carol)])
    TeamMembership.notifier = test_notifier

    # test
    TeamMembership.create(user_id: users(:carol).id, team_id: teams(:alpha).id)
    assert test_notifier.verify

    # teardown
    TeamMembership.notifier = TeamMembership::DEFAULT_NOTIFIER
  end
end

Answer: A valid approach would be to check whether NotificationMailer.team_invitation was called.
In order to get this job done, you'll first need to change this hard-coded NotificationMailer.team_invitation call into something injected. Something like this: class TeamMembership < ActiveRecord::Base # other methods after_create :invite_user_to_team! def notification_method(notification_service = NotificationMailer) @notification = notification_service end private def invite_user_to_team! @notification.team_invitation(team, user) nil end end Now you can create an expectation over the injected notifier. Inside your test, you can define your mock and its behavior (note that Minitest::Mock uses expect/verify; the expects(...).with(...).once chain is Mocha syntax): class TeamMemberTest < ActiveSupport::TestCase def test_after_create_new_team_member_it_should_be_notified notification = Minitest::Mock.new notification.expect(:team_invitation, nil, [team, user]) # your TeamMember class working to create new notification.verify end end I hope it helps :)
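The injected-notifier pattern itself is language-agnostic. Here is a minimal sketch of the same mechanics in Python with the standard library's unittest.mock — inject the collaborator, exercise the model, assert the call happened. The class and method names below just mirror the question; they are hypothetical stand-ins, not the Rails API.

```python
from unittest import mock

class TeamMembership:
    """Hypothetical Python stand-in for the Rails model (not the Rails API)."""
    def __init__(self, team, user, notifier):
        self.team = team
        self.user = user
        self.notifier = notifier      # injected collaborator, e.g. a mailer

    def save(self):
        # stand-in for the after_create callback firing on persist
        self.notifier.team_invitation(self.team, self.user)

notifier = mock.Mock()
TeamMembership("alpha", "carol", notifier).save()

# Mirrors `notification.expect(...)` / `verify` in the MiniTest version.
notifier.team_invitation.assert_called_once_with("alpha", "carol")
```

Because the notifier is passed in rather than referenced globally, the test never needs to patch a constant and the assertion fails loudly if the callback stops firing.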
{ "domain": "codereview.stackexchange", "id": 10661, "tags": "ruby, ruby-on-rails, unit-testing" }
In a Transformer model, why does one sum positional encoding to the embedding rather than concatenate it?
Question: While reviewing the Transformer architecture, I realized something I didn't expect, which is that: the positional encoding is summed to the word embeddings rather than concatenated to them. http://jalammar.github.io/images/t/transformer_positional_encoding_example.png Based on the graphs I have seen wrt what the encoding looks like, that means that: the first few bits of the embedding are completely unusable by the network because the position encoding will distort them a lot, while there is also a large amount of positions in the embedding that are only slightly affected by the positional encoding (when you move further towards the end). https://www.tensorflow.org/beta/tutorials/text/transformer_files/output_1kLCla68EloE_1.png So, why not instead have smaller word embeddings (reduce memory usage) and a smaller positional encoding retaining only the most important bits of the encoding, and instead of summing the positional encoding of words keep it concatenated to word embeddings? Answer: When you concatenate, you have to define a priori the size of each vector to be concatenated. This means that, if we were to concatenate the token embedding and the positional embedding, we would have to define two dimensionalities, $d_t$ for the token and $d_p$ for the position, with the total dimensionality $d = d_t + d_p$, so $d>d_t$ and $d>d_p$. We would be decreasing the total size we devote to tokens in favor of positional information. However, adding them together is potentially a generalization of the concatenation: imagine that there is an ideal split of $d$ into $d_t$ and $d_p$ in terms of minimizing the loss; then, training could converge to token embeddings that only use $d_t$ of the components (leaving the rest at zero), while the learned positional encodings use the complementary $d_p$ components (leaving their remainder at zero).
Therefore, by adding them, we leave the optimization of the use of the $d$ dimensions to the optimization process, instead of assuming there is an optimal partition of the vector components and setting a new hyperparameter to tune. Also, the use of the vector space is not restricted by a hard split in the vector components, but takes the whole representation space.
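As a toy illustration of the dimensionality point (pure Python, with a made-up token embedding), here is the sinusoidal encoding from "Attention Is All You Need" summed versus concatenated — summing keeps the full width $d$ available to both signals, while concatenating forces an a-priori split:

```python
import math

def positional_encoding(pos, d):
    """Sinusoidal encoding: PE[pos, 2i] = sin(pos / 10000^(2i/d)),
    PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pe = []
    for i in range(d):
        angle = pos / (10000 ** ((i // 2 * 2) / d))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

d = 8
token_embedding = [0.1] * d          # toy values; real embeddings are learned
pos_enc = positional_encoding(5, d)

# Summing (the Transformer's choice): both signals occupy all d components.
summed = [t + p for t, p in zip(token_embedding, pos_enc)]
assert len(summed) == d

# Concatenating would force a fixed split d = d_t + d_p chosen in advance.
d_t, d_p = 6, 2                       # arbitrary hyperparameter split
concat = token_embedding[:d_t] + pos_enc[:d_p]
assert len(concat) == d_t + d_p
```

With the sum, nothing stops the optimizer from effectively learning such a split itself (zeros in complementary components), which is the answer's point.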
{ "domain": "datascience.stackexchange", "id": 12174, "tags": "nlp, encoding, transformer, attention-mechanism" }
How to obtain sodium oxide from sodium chloride?
Question: Under what conditions can $\ce{Na2O}$ be made from $\ce{NaCl}?$ I know $\ce{NaCl}$ doesn't oxidise under normal ambient conditions, but in the presence of what temperature and pressure ranges is this reaction possible? Answer: There is no need (or possibility, really, in terms of standard lab capabilities) to oxidize sodium(I). In fact, one method relies on sodium(I) reduction to metal as a method of eliminating unwanted chloride. Method 1 Electrolysis of molten sodium chloride: $$\ce{2 NaCl(l) -> 2 Na(l) + Cl2(g)}$$ Oxidation of sodium metal to oxide by burning: $$\ce{4 Na + O2 ->[>\pu{250 °C}] 2 Na2O}$$ Drawback: pure sodium oxide cannot be obtained by direct oxidation of sodium. Instead, a mixture of sodium peroxide and sodium oxide is formed. In order to suppress the formation of peroxide, sodium metal or sodium nitrate is added in excess to the mixture in an inert atmosphere: $$\ce{Na2O2(s) + 2 Na(l) ->[\pu{150 °C}] 2 Na2O(s)}$$ Method 2 Convert sodium chloride to sodium bicarbonate using the Solvay process: $$\ce{NaCl(aq, conc) + H2O(l) + NH3(g) + CO2(g) -> NaHCO3(s) + NH4Cl(aq)}$$ Thermal decomposition of the bicarbonate first yields sodium carbonate: $$\ce{2 NaHCO3(s) ->[\pu{250 - 300 °C}] Na2CO3(s) + CO2(g) + H2O(g)}$$ which is subsequently calcined to form an oxide: $$\ce{Na2CO3(l) ->[>\pu{1000 °C}] Na2O(s) + CO2(g)}$$ This appears to be a preferred method as it is less energy- and resource-consuming and allows one to obtain $\ce{Na2O}$ selectively.
{ "domain": "chemistry.stackexchange", "id": 12848, "tags": "inorganic-chemistry, synthesis, alkali-metals" }
Equivalent Colorings of Graphs
Question: Call two proper graph colorings equivalent if one can be obtained from the other by a permutation of the colors. In other words, they are the "same" coloring. I'm interested in finding proper non-equivalent colorings. Of course, the decision problem of determining whether or not there is such a non-equivalent coloring given one is NP-complete. Are there FPT, approximation, etc. algorithms for finding such a coloring if one exists? I am currently running a randomized greedy coloring algorithm, which does often succeed in finding a proper coloring - however, I am unsure of when it actually produces a non-isomorphic one. If it is helpful, I'm working with graphs which are essentially $k$-trees. Answer: Many existing heuristics for graph coloring can work even if you specify the colors of a few vertices. So, here is one plausible algorithm you could use: We are given an existing coloring $C$. Pick two vertices $v,w$ randomly. We are going to assign colors for $v,w$ (in the new coloring), leave the other vertices unassigned, and use some existing graph coloring heuristic to extend this to a coloring for the whole graph. If $C(v)=C(w)$, assign $v,w$ two different colors in the new coloring (any two, it doesn't matter which two colors you use). If $C(v)\ne C(w)$, assign $v,w$ the same color in the new coloring (any color, it doesn't matter which you pick). Then extend this to a coloring for the new graph. If this finds a coloring, then you're guaranteed that the new coloring will be non-equivalent to the old one. Moreover, if there exists a non-equivalent coloring, you're guaranteed to be able to find it with at most polynomially many invocations of this procedure. In particular, if $C'$ is non-equivalent, there must exist some pair of vertices $v,w$ where $C(v)=C(w)$ and $C'(v) \ne C'(w)$, or where $C(v) \ne C(w)$ and $C'(v) = C'(w)$. 
Therefore, if you repeat the above procedure for all pairs $v,w$ of vertices, at least one of those iterations should find a new non-equivalent coloring. So, this might be one reasonable approach, if you want to find a new non-equivalent coloring once. On the other hand, if you are given $m$ existing colorings and want to find a $m+1$st non-equivalent coloring, that looks more complicated.
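A rough sketch of this procedure in Python, with a simple greedy extender standing in for "some existing graph coloring heuristic" (any heuristic that accepts preassigned colors would do):

```python
import itertools

def greedy_extend(adj, k, preassigned):
    """Greedily extend a partial k-coloring; returns a dict or None.
    A stand-in for any heuristic that honors preassigned colors."""
    coloring = dict(preassigned)
    for v in adj:
        if v in coloring:
            continue
        used = {coloring[u] for u in adj[v] if u in coloring}
        free = [c for c in range(k) if c not in used]
        if not free:
            return None
        coloring[v] = free[0]
    return coloring

def non_equivalent_coloring(adj, k, C):
    """Try each pair v,w: force the opposite same/different pattern from C,
    then extend. Any coloring found is non-equivalent to C by construction."""
    for v, w in itertools.combinations(adj, 2):
        if C[v] == C[w]:
            seed = {v: 0, w: 1}          # force apart
        else:
            if w in adj[v]:
                continue                 # adjacent vertices can't share a color
            seed = {v: 0, w: 0}          # force together
        new = greedy_extend(adj, k, seed)
        if new is not None:
            return new
    return None

# 4-cycle a-b-c-d: C uses two colors; a non-equivalent 3-coloring exists.
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
C = {'a': 0, 'b': 1, 'c': 0, 'd': 1}
new = non_equivalent_coloring(adj, 3, C)
```

On the 4-cycle example, the pair (a, c) shares a color in C, so forcing them apart and extending yields a proper coloring with a different same/different pattern, hence non-equivalent.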
{ "domain": "cs.stackexchange", "id": 11385, "tags": "graphs, approximation, colorings" }
Best practices for organizing a project
Question: Hello all, Sorry for the long winded explanation but my thoughts are a bit scattered on this. I'm fairly new to ROS. I've been working with it for about a month and I feel I'm beginning to understand most of the basics (packages, topics, subscriptions, services, etc), but when it comes to how to put together a practical project I'm struggling. I'm using ROS for my senior design project, which is a mobile autonomous robot with some mapping and computer vision capabilities. The robot is going to have several well-defined modes of operation where some subsystems should be active or inactive. We're using a Kinect and converting the depth cloud to a laserscan with the depthcloud_to_laserscan node. I've already got this working with HectorSLAM as a preliminary mapping system since the robot isn't built yet and I don't have access to physical odometry. Here are examples of a few of the operation modes: Autonomous Mapping Mode: Sonar/tactile based obstacle avoidance system + SLAM system Manual Mapping Mode: Teleoperation system + SLAM system Autonomous Patrol Mode: Sonar/tactile based obstacle avoidance system + Map-based navigation system + Motion detection system What's the best way to initialize/start this system and switch between these states? I don't want all the subsystems to be active at the same time because power and processing will be important. For prototyping, I want to be able to type commands into the terminal to switch between each of these systems, launching all of the relevant nodes and closing them cleanly when the procedures are complete or are interrupted. I suppose this is more of a design philosophy question, I'm sorry if it's not specific enough. I've read through most of the tutorial content and so far I haven't found anything that quite answers my question. All replies welcome, thanks. 
Originally posted by troman on ROS Answers with karma: 30 on 2015-06-10 Post score: 1 Answer: For the first revision of your project, I think launch files are appropriate; have a look at roslaunch For my robot, I have a base launch file which runs the driver for my base and my sensors, and the kalman filter that I use to estimate position. All of my modes use this functionality. For each mode, I have an additional launch file which runs the nodes for that mode. Launch files can include one another as well, so if you have some functionality that is used by some modes, but not all, you can separate out those nodes into their own launch file, and include that launch file within each mode that needs it. For even more things that you can do with roslaunch, have a look at the roslaunch tips for large projects Originally posted by ahendrix with karma: 47576 on 2015-06-11 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by troman on 2015-06-11: Thanks for the helpful suggestions. Do you have any advice for how to stop or shut down nodes cleanly? Would it be sufficient to just put a system call in the nodes to kill themselves when a particular condition is met? Comment by troman on 2015-06-11: So I started delving deeper into the rospy API and found this: Initialization and Shutdown If anyone else has the same question your answer is at the bottom: rospy.signal_shutdown(reason) Comment by ahendrix on 2015-06-11: When you stop a roslaunch, it will stop all of the nodes that it started; I usually use that instead of an explicit shutdown within the node. Comment by ahendrix on 2015-06-11: The capabilities framework is a more advanced system which can manage dependencies and startup and shutdown of launch files, but it also requires more setup.
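A hedged sketch of the layout the answer describes — the package, node, and file names below are illustrative, not from the question:

```xml
<!-- base.launch: drivers and estimation shared by every mode. -->
<launch>
  <node pkg="my_robot_driver" type="base_node" name="base" />
  <node pkg="my_robot_driver" type="sonar_node" name="sonar" />
</launch>

<!-- patrol_mode.launch: pulls in the shared base, then adds the
     mode-specific nodes. Start with
       roslaunch my_robot patrol_mode.launch
     and Ctrl-C stops every node the launch started. -->
<launch>
  <include file="$(find my_robot)/launch/base.launch" />
  <node pkg="move_base" type="move_base" name="move_base" />
</launch>
```

Switching modes from the terminal then amounts to stopping one roslaunch and starting another; the shared base file keeps the common nodes in one place.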
{ "domain": "robotics.stackexchange", "id": 21884, "tags": "ros, best-practices" }
Euler angle convention in TF
Question: I'm a bit confused by ROS conventions. On this page: http://www.ros.org/wiki/geometry/CoordinateFrameConventions It says Euler angles are specified in yaw-pitch-roll (ZYX) format in ROS. However, the tf.transformations.quaternion_from_euler function seems to assume roll-pitch-yaw (XYZ) format, as on this page: http://ros.org/wiki/tf/Overview/Transformations Doesn't this violate the convention, or am I missing something? Originally posted by Jeffrey Kane Johnson on ROS Answers with karma: 452 on 2013-01-29 Post score: 4 Answer: tf uses the Python transformation package written by Christoph Gohlke (see http://www.lfd.uci.edu/~gohlke/). This package supports any definition of Euler angles. Simply pass the desired convention as a second parameter to quaternion_from_euler: def quaternion_from_euler(ai, aj, ak, axes='sxyz'): """Return quaternion from Euler angles and axis sequence. ai, aj, ak : Euler's roll, pitch and yaw angles axes : One of 24 axis sequences as string or encoded tuple >>> q = quaternion_from_euler(1, 2, 3, 'ryxz') >>> numpy.allclose(q, [0.310622, -0.718287, 0.444435, 0.435953]) True """ And from the documentation in the header of transformations.py: """ A triple of Euler angles can be applied/interpreted in 24 ways, which can be specified using a 4 character string or encoded 4-tuple: *Axes 4-string*: e.g. 'sxyz' or 'ryxy' - first character : rotations are applied to 's'tatic or 'r'otating frame - remaining characters : successive rotation axis 'x', 'y', or 'z' """ I suppose the tf maintainers didn't want to modify the underlying (external) python library to change the default behaviour. 
So to comply with the ROS convention for Euler angles, simply use quaternion_from_euler like this: q = tf.transformations.quaternion_from_euler(yaw, pitch, roll, 'rzyx') Originally posted by Stephan with karma: 1924 on 2013-01-29 This answer was ACCEPTED on the original site Post score: 7 Original comments Comment by Jeffrey Kane Johnson on 2013-01-30: That makes sense. Thanks.
{ "domain": "robotics.stackexchange", "id": 12635, "tags": "transform, quaternion" }
Why RNNs often use just one hidden layer?
Question: Did I get it right, that RNNs most often have just one hidden neuron layer? Is there a reason for that? Will RNNs with several hidden layers in each cell work worse? Thank you!! Answer: Definitely you can have multiple hidden layers in an RNN. One of the most common approaches to determining the number of hidden units is to start with a very small network (one hidden unit), apply K-fold cross-validation (k over 30 will give very good accuracy), and estimate the average prediction risk. Then repeat the procedure for increasingly larger networks, for example 1 to 10 hidden units, or more if needed. However, in my experience, if you want the best possible accuracy, you should start with a small number of hidden layers and a simple structure; if you are not satisfied with the resulting accuracy, grow the network in small, fixed steps, starting the training fresh each time.
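The grow-and-cross-validate loop the answer describes can be sketched in a few lines. Here `fit_and_risk` is a hypothetical stand-in for training an RNN with `h` hidden units on the training fold and returning its validation loss; only the selection logic is real:

```python
def k_fold_indices(n, k):
    """Split range(n) into k train/validation index pairs."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def select_hidden_units(data, max_h, k, fit_and_risk):
    """Grow the network and keep the size with the lowest cross-validated
    average prediction risk."""
    best_h, best_risk = None, float('inf')
    for h in range(1, max_h + 1):
        risks = [fit_and_risk(h, [data[i] for i in tr], [data[i] for i in va])
                 for tr, va in k_fold_indices(len(data), k)]
        risk = sum(risks) / len(risks)
        if risk < best_risk:
            best_h, best_risk = h, risk
    return best_h

# Toy check with a stand-in risk minimized at h = 3.
best = select_hidden_units(list(range(20)), 6, 5,
                           lambda h, tr, va: abs(h - 3))
```

In practice `fit_and_risk` would train fresh each time, as the answer recommends, so earlier runs don't bias larger networks.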
{ "domain": "ai.stackexchange", "id": 1467, "tags": "neural-networks, recurrent-neural-networks" }
Identification of an unknown organic molecule from only 1H NMR and 13C NMR
Question: I am currently trying to determine the structure of my unknown compound with both 1H NMR and 13C NMR spectra. My H NMR peaks: 1.16 ppm [singlet, 1H], 1.68 [singlet, 3H], 1.75 [singlet, 3H], 4.2 [doublet, 2H], 5.4 [multiplet, 1H], and 7.24 (solvent peak) C NMR: 18, 26, 59, 77 (solvent peak), 124, and 136 ppm. I have deduced that my molecular formula is $\ce{C5H10O}$. And I am fairly certain I should have a double bond in my structure. However, I do not know intuitively where this pi bond should be. What should I be looking for to determine the placement of this pi bond? I unfortunately do not have characteristic IR peaks to determine whether I have an alcohol, ether, or a ketone. Answer: The doublet at $4.2\ \mathrm{ppm}$ is almost assuredly an alcohol group or ether group. Since we only have one carbon that is bonded to an oxygen at $59\ \mathrm{ppm}$, we conclude this is a primary alcohol. The fact that it is a doublet says something else is lurking around. $5.4\ \mathrm{ppm}$ is far enough downfield that we start thinking alkenes, but the multiplicity is mysterious for the moment. $1.68\ \mathrm{ppm}$ and $1.75\ \mathrm{ppm}$ are in the range of allyl methyl groups, and the carbon signals are okay with this as well. Additionally, singlets for these groups indicate a geminal dimethyl compound with a third substituent on the alkene, justifying the different chemical shifts. Finally, the peak at $1.16\ \mathrm{ppm}$ is difficult to explain. However, we know we have an alcohol, and they can have a fairly wide range of values. If this peak is broadened at all, I would immediately assign it to the alcohol. Its being a singlet is also worrisome, but if exchange is occurring, this is not out of the question. The multiplicity of the alkene proton indicates there are some long-range things going on, and these don't always resolve well. My guess is you are picking up some long-range coupling that is not resolving nicely in the methyl groups, which appear to be singlets.
{ "domain": "chemistry.stackexchange", "id": 5587, "tags": "organic-chemistry, nmr-spectroscopy" }
Which of these 2 methods for calculating the focal length of a concave mirror is more accurate?
Question: I have done an experiment to measure $f$ the focal length of a concave mirror. I have a list of 8 values for $u$ the object distance and 8 values for $v$ the corresponding image distance. I calculated the focal length using 2 methods. Method 1: I got 8 values for $f$ using the formula $f=\frac{uv}{u+v}$ All values fell within $2\sigma$ so I used them all to find an average value. Method 2: I graphed $\frac{1}{u}$ against $\frac{1}{v}$ and from the average of the $x$ and $y$ intercepts of the regression line I found the focal length. My question is: Which of these 2 methods is more accurate? Is there some way of quantitatively calculating the accuracy of each method? NB: I did use the third method of approximating the focal length by focusing a distant object on some paper but let's ignore that method for the purpose of comparing the 2 methods in question. Answer: Generally the second method is preferred. The first method assumes the relation $f=\frac{uv}{u+v}$ holds exactly for every measurement pair, which presumes the mirror is part of a perfect parabola and the light rays are paraxial; neither is quite true in a real setup. The second method fits a regression line through all the points, so random measurement errors are averaged out rather than propagated through each individual pair. Moreover, for precise values, instruments like a spherometer can be used.
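The two methods are easy to compare on synthetic data. The sketch below assumes a mirror with $f = 10$ in arbitrary units and noiseless measurements, so both methods should recover $f$ exactly; with real noisy data the regression's residuals also give an error estimate, which Method 1 does not:

```python
# Synthetic data obeying 1/f = 1/u + 1/v with f_true = 10.
f_true = 10.0
u_vals = [12.0, 15.0, 20.0, 25.0, 30.0, 40.0, 50.0, 60.0]
v_vals = [1.0 / (1.0 / f_true - 1.0 / u) for u in u_vals]

# Method 1: average of f = uv/(u+v) over all measurement pairs.
f1 = sum(u * v / (u + v) for u, v in zip(u_vals, v_vals)) / len(u_vals)

# Method 2: least-squares line through (1/u, 1/v). Since 1/v = -1/u + 1/f,
# the slope is -1 and both intercepts equal 1/f.
xs = [1.0 / u for u in u_vals]
ys = [1.0 / v for v in v_vals]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
y_int = ybar - slope * xbar          # y-intercept = 1/f
x_int = -y_int / slope               # x-intercept = 1/f when slope = -1
f2 = 2.0 / (y_int + x_int)           # f from the average of both intercepts
```

Adding noise to `v_vals` and re-running shows how each method's estimate scatters, which is one quantitative way to compare their accuracy.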
{ "domain": "physics.stackexchange", "id": 56584, "tags": "optics, refraction, geometric-optics, lenses" }
What is an aromatic cage and what does it do?
Question: Epigenetics, 2. ed, Chapter 3.6: Similarly, methylated lysine residues embedded in histone tails can be read by “aromatic cages” present in chromodomains, or similar domains (e.g., MBT, Tudor) contained within complexes that facilitate downstream chromatin modulating events (see Ch. 7 [Patel 2014] for structural insights I understand it is something like a protein motif, but I cannot find a good definition using google. Answer: It refers to the structures in the PHD-finger domain and chromodomains. The aromatic amino acid residues form a cage-like structure which covers and interacts with the methylated ammonium of lysine via a cation–π interaction. The BPTF-PHD structures reveal the main characteristics of PHD fingers that can read H3K4me3. The binding occurs through an aromatic cage where a trimethyl ammonium group is stabilized by van der Waals and cation–π interaction, which is similar to the ones observed in chromodomain, MBT, PWWP, and Tudor domains. This aromatic cage is composed of one Trp and three Tyr residues, and it has three faces and a 'lid' that is beyond the tip of H3K4me3. Subsequently determined structures of other fingers in complex with the H3K4me3 peptides show that the cage varies and can contain a combination of two to four aromatic and hydrophobic residues. Margueron et al., 2009
{ "domain": "biology.stackexchange", "id": 6173, "tags": "biochemistry, molecular-biology, proteins" }
How can I fix this launchfile?
Question: Hi dear ROS community! I started learning ROS a few months ago. Currently I'm trying to create my first launchfile that starts gscam and image_view packages to open a camera and visualize the video stream. I made a launch file, however, gscam and image_view cannot get connected. Gscam node is launching properly, but not the image_view one; it seems that it is not receiving the parameter. If I launch image_view manually ($rosrun image_view image_view image:=/camera/image_raw) everything works fine and I can visualize the video stream from my camera. How can I fix my launch file? Thank you! <launch> <node pkg="gscam" type="gscam" name="gscam01"> <env name="GSCAM_CONFIG" value="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" /> </node> <node pkg="image_view" type="image_view" name="image_view01"> <param name="image" type="string" value="/camera/image_raw" /> </node> </launch> SOLUTION: <launch> <node pkg="gscam" type="gscam" name="gscam01"> <env name="GSCAM_CONFIG" value="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace" /> </node> <node pkg="image_view" type="image_view" name="image_view01"> <remap from="image" to="/camera/image_raw"/> </node> </launch> Originally posted by diegomex on ROS Answers with karma: 13 on 2014-09-17 Post score: 0 Answer: The input to image_view is a topic, not a parameter. (the command-line usage here can be a little confusing). Try using the <remap> tag: <remap from="image" to="/camera/image_raw"/> Originally posted by ahendrix with karma: 47576 on 2014-09-17 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by diegomex on 2014-09-18: Thank you very much, this worked perfectly!
{ "domain": "robotics.stackexchange", "id": 19436, "tags": "roslaunch, gscam, image-view" }
The electric field formula $E=F/q$
Question: If we follow the formula $E=F/q$ it says that when the force is bigger the electric field is bigger as well but if the charge on which the force is being exerted to bigger the electric field is somehow smaller? How does this make sense? Answer: This is a great example of how causal understanding of physics is not manifestly obvious if you naively look at the mathematical expression. What you say is true but is correctly formulated in the following way: for a given force $F$ on a charge $q$, the electric field $E$ (at the location of the charge) would have to get bigger as the charge $q$ gets smaller. In other words, what it says is that in order to produce the same amount of force on a smaller charge, you need a stronger electric field. As you can see, there is no mystery here at all when you understand it this way. Of course, as you already understand (as implied by your question), the electric field acting on a charge doesn't change if you only change the test charge because it is determined by the external configuration of charges. Physically speaking, when you only vary the test charge, only the force acting on it will change. However, what $E\propto 1/q$ tells you is that if you want to maintain a constant force on a varying test charge, you'll need to vary the electric field in inverse proportion to the value of the test charge (via changing the external configuration of charges that produces the electric field).
{ "domain": "physics.stackexchange", "id": 85543, "tags": "forces, electrostatics, electric-fields, charge" }
Problems expected if logs deleted during process?
Question: ###Question: What would be the expected outcome of deleting logs while a process is running? Specifically which of the following categories would that fall under : ) ? Fine Undefined Very Bad ###Use Case: Our logs are growing too big because of a bug in diamondback where roscpp_internal does not respect log levels. As a workaround we have a script that deletes particular log files every 30 seconds or so. We've also seen a crash (assert/-6) that we've yet to diagnose closely following our log workaround change, and we currently suspect our log deletions as the cause. Originally posted by Asomerville on ROS Answers with karma: 2743 on 2012-07-02 Post score: 2 Original comments Comment by Eric Perko on 2012-07-02: Maybe it's a silly question but, could you upgrade to Electric or Fuerte to fix that bug? :) Comment by Asomerville on 2012-07-02: Unfortunately the system where this is occurring is a production machine which is locked to diamondback for the time being. Comment by joq on 2012-07-02: Not silly, but I think we do still need to be able to build, test and release Diamondback fixes. Comment by Thomas on 2012-07-02: If you need help with debugging this assertion, could you please try to send more information about what happens? (a backtrace would be a good start) Comment by Asomerville on 2012-07-02: Thanks for the offer. Because of the nature of the situation (remote machine not internet connected) we're going with treat-first-diagnose-later. I'll update if we find anything interesting. Answer: Actually, what you are trying to do just will not help. If you take a look at /proc/XXX/fd where XXX is the process id of your ROS node, you will be able to see the currently opened files and in particular the log files you thought were deleted. Don't forget that file descriptors are a mechanism of reference counting on a resource, so as long as rosconsole keeps the file open, the file will, in fact, not be deleted.
This explains why what you do is perfectly safe on Linux :) So maybe this assertion is just due to you consuming the whole disk space and making the process crash or something? Monitor your disk space: you will see it decreasing until your process crashes, the file descriptor gets released and the kernel cleans the space kept by the previous log files... So I think it would be better to try to really fix the error instead of trying to work around it if possible ;) Originally posted by Thomas with karma: 4478 on 2012-07-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by joq on 2012-07-02: @Thomas is right, that is how POSIX filesystems work. The space will not be recovered until all references to the inode are gone. Comment by Asomerville on 2012-07-02: Duly noted regarding file deletions while the fd is still held.
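The unlink-while-open behavior is easy to demonstrate from Python on a POSIX system: delete a file a process still holds open, and the data stays reachable through the file descriptor (and keeps occupying disk space) until the last close.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
f = os.fdopen(fd, 'r+')
f.write('log line\n')
f.flush()

os.remove(path)                   # "delete" the log while it is still open
assert not os.path.exists(path)   # the directory entry is gone...

f.seek(0)
assert f.read() == 'log line\n'   # ...but the open fd still reads the data

st = os.fstat(f.fileno())
assert st.st_nlink == 0           # inode has no names left; its space is
f.close()                         # reclaimed only when the last fd closes
```

This is exactly why the deletion script frees no space while the node keeps logging to the same open descriptor.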
{ "domain": "robotics.stackexchange", "id": 10018, "tags": "ros, rosconsole" }
Brownian motion from two gaussian noise processes
Question: Consider some brownian motion for which we obtained the following solution for the langevin equations $$ u\left(t\right)=e^{-\alpha t}\int_{0}^{t}e^{\alpha s}\left(\xi\left(s\right)-\xi'\left(s\right)\right)ds $$ Here, $\xi\left(t\right)$ and $\xi'\left(t\right)$ are two independent gaussian white noises with zero mean. Question: I believe we can compute $\left\langle u\left(t_{1}\right)u\left(t_{2}\right)\right\rangle $ with the usual procedure where we consider $\left\langle \xi\left(t_{1}\right)\xi\left(t_{2}\right)\right\rangle =g\delta\left(t_{2}-t_{1}\right)$ if we consider $\left\langle \xi\left(t_{1}\right)\xi'\left(t_{2}\right)\right\rangle =0$ with the argument that the noise processes are independent; can you confirm if I am correct? The book I'm reading shows without proving a solution for $\left\langle u\left(t\right)\xi\left(t\right)\right\rangle $ and I am trying to understand how this is computed. I don't understand how the $\xi\left(t\right)$ could go inside the integral of $u\left(t\right)$ for one to be able to use the usal identity that yields the dirac delta. Do you have an idea how this is done? Could you please advise? Answer: So the original SDE is probably (see this answer for derivation) $$ \mathrm{d}u_t = \alpha u_t \mathrm{d}t + \mathrm{d}W_t + \mathrm{d}W'_t $$ from which indeed $$ u_t = e^{\alpha t} \left(\int_0^te^{-\alpha s}\mathrm{d}W_s + \int_0^te^{-\alpha s}\mathrm{d}W'_s\right) = \mathcal{N}\left(0, \frac{1}{\alpha}(e^{2\alpha t} - 1)\right) $$ given the two motions are uncorrelated (the correlated case should also be rather simple by the standard mapping to an uncorrelated pair, i.e. $\mathrm{d}W' = \rho \mathrm{d}W + \sqrt{1-\rho^2} \mathrm{d}Z$, where $\mathrm{d}Z$ is a Brownian motion uncorrelated with $\mathrm{d}W$, $\mathrm{d}W'$, and the rest of the math is the same as above). 
And we could verify this easily enough with a bit of ugly Python (see Appendix), plotting the PDFs of analytical vs numerical solutions: So if I interpret your question correctly, you're asking for the Ito integral $$ \mathbb{E}\left(\int_0^tu_\tau\mathrm{d}W_\tau\right) = \mathbb{E}\left(\int_0^te^{\alpha \tau} \left(\int_0^\tau e^{-\alpha s}\mathrm{d}W_s + \int_0^\tau e^{-\alpha s}\mathrm{d}W'_s\right)\mathrm{d}W_\tau\right) = \mathbb{E}\left(\int_0^te^{\alpha \tau} \left(\int_0^\tau e^{-\alpha s}\mathrm{d}W_s\right)\mathrm{d}W_\tau\right) = \mathbb{E}\left(\int_0^te^{\alpha \tau} e^{-\alpha \tau}\mathrm{d}\tau\right) = t $$ Similarly for variance (though that's not what you asked for), $\mathrm{Var}\left(\int_0^tu_\tau\mathrm{d}W_\tau\right) = \frac{1}{2\alpha^2}(e^{2\alpha t} -1) - \frac{t}{\alpha}$ Appendix Analytical vs numerical PDF of the original SDE: import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt alpha = -75. dt = .001 sdt = np.sqrt(dt) N = int(.25/dt) M = 10000 ud = np.zeros(M) for m in range(M): u = 0 w1 = np.random.randn(N)*sdt w2 = np.random.randn(N)*sdt for n in range(N): u += alpha*u*dt + w1[n] + w2[n] ud[m] = u yy, xx = np.histogram(ud, bins=64, density=True) xx = .5*(xx[:-1] + xx[1:]) plt.plot(xx, norm.pdf(xx, 0, np.sqrt(1./alpha*(np.exp(2*alpha*N*dt) - 1))), xx, yy)
{ "domain": "physics.stackexchange", "id": 51761, "tags": "homework-and-exercises, statistical-mechanics, correlation-functions, brownian-motion, stochastic-processes" }
QM: Time evolution with $H = H(t)$
Question: In order to calculate time evolution in QM we use Schrödinger equation \begin{align*} i \partial_t |\psi\rangle_t = H(t) | \psi\rangle_t. \end{align*} If $H\neq H(t)$ then \begin{align*} i \partial_t |\psi\rangle_t = H(0) | \psi \rangle_t \end{align*} and we can expand the state in its Taylor series \begin{align*} | \psi \rangle_t & = |\psi\rangle_0 + t \, \partial_t |\psi\rangle_t \Big|_{t=0} + \frac{1}{2} \, t^2 \, \partial_t^2 |\psi\rangle_t \Big|_{t=0} + ... \\ & = |\psi\rangle_0 + (-i t H(0)) | \psi \rangle_t \Big|_{t=0} + \frac{1}{2} (-itH(0))^2 | \psi \rangle_t \Big|_{t=0} + ...\\ & = |\psi\rangle_0 + (-i t H(0)) | \psi \rangle_0 + \frac{1}{2} (-itH(0))^2 | \psi \rangle_0 + ...\\ & = e^{-itH(0)}| \psi \rangle_0. \end{align*} So far so good. But now we consider $H=H(t)$. My question is: why can't you do the same? Even if now you have $i \partial_t |\psi\rangle_t = H(t) | \psi \rangle_t$ instead of $i \partial_t |\psi\rangle_t = H(0) | \psi \rangle_t$, you still have \begin{align*} \partial_t |\psi\rangle_t \Big|_{t=0} = (-i H(t)) |\psi\rangle_t \Big|_{t=0} = (-iH(0)) |\psi\rangle_0. \end{align*} According to this you would always get the same time evolution operator: \begin{align*} | \psi \rangle_t & = e^{-itH(0)}| \psi \rangle_0, \end{align*} both for time independent and time dependent operator. Of course I realize this doesn't make sense for $H=H(t)$, because it implies that the state at any point is only given by the state and the Hamiltonian at $t=0$, and according to Schrödinger's equation the Hamiltonian "drives" the state at each time. So I just want to know why you can't expand the state in a Taylor series for $H=H(t)$. 
My guess is that, for some reason, in an isolated system the state is "analytic" and equal to its Taylor series, while for a non-isolated system the Taylor series only converges in a neighbourhood of the point, and the correct formula is \begin{align*} | \psi \rangle_{t+\Delta t} & = e^{-i \, \Delta t \, H(t)}| \psi \rangle_t + \mathcal{O}( \Delta t^2), \end{align*} which leads to the general time evolution operator. Or maybe it has nothing to do with "analyticity" and it's just something silly I'm not seeing right now. Answer: The reason why your argument doesn't work for a time-dependent Hamiltonian is that $$ \left. \partial_t^2 |\psi(t)\rangle \right|_{t=0} = \left. \partial_t (\partial_t |\psi(t)\rangle) \right|_{t=0} = \left. \partial_t (-\mathrm i H(t) |\psi(t)\rangle) \right|_{t=0} \neq \left. (-\mathrm i H(t))^2 |\psi(t)\rangle \right|_{t=0} $$ The time evolution is still analytic, as long as the function $H(t)$ is. The correct way to do it instead is using a time-ordered exponential $$ |\psi(t)\rangle = \mathbf{T} \mathrm e^{-\mathrm i \int_{t_0}^t H(\tau)\, \mathrm d\tau} |\psi(t_0)\rangle , $$ which is defined via the Dyson series. (From your question, I assume that you know this already, so I won't write more about it -- feel free to ask if you have more questions!)
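A toy numerical check of why ordering matters, assuming a piecewise-constant $H(t)$ built from two non-commuting Pauli terms: the time-ordered product of exponentials differs from the exponential of the integral of $H$.

```python
import math

# 2x2 complex matrices as nested lists: identity and Pauli X, Z.
I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
SX = [[0j, 1 + 0j], [1 + 0j, 0j]]
SZ = [[1 + 0j, 0j], [0j, -1 + 0j]]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_pauli(a, b):
    """exp(-i (a SX + b SZ)) via exp(-i th n.sigma) = cos(th) I - i sin(th) n.sigma."""
    th = math.hypot(a, b)
    if th == 0:
        return [row[:] for row in I2]
    n_sigma = mat_scale(1 / th, mat_add(mat_scale(a, SX), mat_scale(b, SZ)))
    return mat_add(mat_scale(math.cos(th), I2),
                   mat_scale(-1j * math.sin(th), n_sigma))

# H = SX for an (strength*time) weight a, then SZ for weight b.
a, b = 0.7, 0.4
naive = exp_pauli(a, b)                              # exp(-i \int H dt)
ordered = mat_mul(exp_pauli(0, b), exp_pauli(a, 0))  # later factor on the left

diff = max(abs(naive[i][j] - ordered[i][j]) for i in range(2) for j in range(2))
# diff is nonzero because [SX, SZ] != 0: the naive exponential is wrong.
```

If the two pieces commuted, `diff` would vanish and the naive exponential would be exact, which is exactly the time-independent case.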
{ "domain": "physics.stackexchange", "id": 52309, "tags": "quantum-mechanics, hilbert-space, time, hamiltonian, time-evolution" }
How can I simulate a path for my model rocket?
Question: I made a model rocket. Specifications: total weight 1.635 N (wet). It has a custom solid rocket motor with black powder producing 5 N for 15 s. So how can I calculate how high it will travel? Some equations would be helpful. My rocket is guided and works on thrust vector control; I could've just flown it to find the answer to my question, but in the future I have plans for propulsive landing, so I might need these precise calculations. I did some research and found out that (thrust - mass × 9.8)/mass is my acceleration. Does this apply in all cases, also during descent? Why doesn't the acceleration of 9.8 need to be deducted from my final acceleration? Answer: So first you need to calculate the resultant force, and this is $F_R = F_{Thrust} - F_{Weight} = (5 - 1.635)\,N$. Next you should calculate the net acceleration. This would be the resultant force divided by the mass, and your mass will be $1.635/g$. Let's call this $a$. Now you know that the rocket increases its velocity by $a$ every second, so over the time interval you specify, which is 15 seconds, the velocity will be $a \times 15$. I have not factored in loss of mass as the rocket climbs, since the OP did not provide this information. The OP must be either looking for approximate values or the mass lost is relatively negligible. Note this is not a homework question, so it's OK to provide this much detail.
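The answer's bookkeeping can be turned into a small simulation. This is a sketch using the question's numbers (5 N thrust for 15 s, 1.635 N weight) under strong simplifying assumptions I'm adding myself: constant mass, no aerodynamic drag, purely vertical flight. Note that gravity is subtracted at every instant, including after burnout, which addresses the "also during descent?" part of the question.

```python
# Minimal vertical-flight sketch with the question's numbers.
# Assumptions (mine, not the answer's): constant mass, no drag.
g = 9.8                    # m/s^2
weight = 1.635             # N
mass = weight / g          # kg
thrust, burn_time = 5.0, 15.0

dt = 0.001
t, v, h = 0.0, 0.0, 0.0
while True:
    F = thrust if t < burn_time else 0.0
    a = F / mass - g       # gravity always acts, burn or coast
    v += a * dt
    h += v * dt
    t += dt
    if v <= 0 and t > burn_time:   # apogee reached during coast
        break

print("burnout velocity ~ %.0f m/s" % ((thrust / mass - g) * burn_time))
print("apogee ~ %.0f m" % h)
```

Without drag the apogee comes out around 7 km, which is wildly optimistic for a real black-powder model rocket; a drag term $\tfrac12 \rho C_d A v^2$ would bring it down a great deal.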
{ "domain": "physics.stackexchange", "id": 71955, "tags": "newtonian-mechanics, estimation, rocket-science, propulsion" }
Phase velocity in monatomic chain
Question: When considering a one-dimensional monatomic chain of atoms (identical masses $m$ & spring constant $\kappa$), one finds the following dispersion: $$ \omega(k) = \sqrt\frac{\kappa}{m}\cdot\left|\sin\left(\frac{ka}{2}\right)\right|\, ,$$ which is $\frac{2\mathrm{\pi}}{a}$-periodic. So wavevectors higher than $\mathrm{\pi}/a$ do not provide new physical behaviour. However, when computing the phase velocity, one finds: $$ v_p = \frac{\omega}{k} = \frac{1}{k}\sqrt\frac{\kappa}{m}\cdot\left|\sin\left(\frac{ka}{2}\right)\right|\, .$$ This means that the phase velocity goes like a sinc, which is not periodic; wavevectors outside the first Brillouin zone yield a much lower phase velocity. How is this possible? Is there a good reason to consider only the first Brillouin zone for the phase velocity? Or are there other errors in my calculation? Answer: The phase velocity is kind of meaningless outside the first Brillouin zone. The phase velocity is the speed that the "crest" of a wave travels, but outside the first Brillouin zone, the wavelength is less than the spacing between atoms, so there aren't really crests; most "crests" occur in the gaps between the atoms where there is nothing to displace, so the crests are kind of mathematical artifacts. While you can define a continuous function for the displacement of the atoms from their equilibrium position $u\left(x, t\right)$ for the wave, that doesn't mean that the wave is really continuous; the wave only has a meaningful displacement at the $x$ positions where there are atoms. So, some of the intuition coming from waves in a continuous medium doesn't really apply.
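The mismatch the question notices is easy to confirm numerically. A quick check (units and constants arbitrary, my own sketch) that $\omega(k)$ is periodic under $k \to k + 2\pi/a$ while $\omega/k$ is not:

```python
import numpy as np

kappa_over_m = 1.0   # arbitrary units; only the functional form matters
a = 1.0              # lattice constant

def omega(k):
    return np.sqrt(kappa_over_m) * np.abs(np.sin(k * a / 2))

def v_phase(k):
    return omega(k) / k

k = np.linspace(0.1, np.pi / a, 50)   # inside the first Brillouin zone
G = 2 * np.pi / a                     # reciprocal lattice vector

# The dispersion is G-periodic ...
print(np.allclose(omega(k), omega(k + G)))          # True
# ... but omega/k is not: same physical motion, much smaller "velocity"
print(np.allclose(v_phase(k), v_phase(k + G)))      # False
```

Since $k$ and $k + G$ describe identical atomic displacements, the disagreement in $\omega/k$ is carried entirely by the unphysical "crests" between the atoms, exactly as the answer argues.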
{ "domain": "physics.stackexchange", "id": 70746, "tags": "waves, solid-state-physics, phonons, phase-velocity" }
openni camera launch multikinect
Question: Hi, I am using multiple Kinects, and I want to address a fixed Kinect by referring to the bus to which the Kinects are connected. I think I should do something like this to refer to the bus (e.g. bus 2) in the openni_node launch file: but it doesn't work. Help will be appreciated. Cheers, Kang Originally posted by kang on ROS Answers with karma: 79 on 2011-09-22 Post score: 0 Original comments Comment by Patrick Mihelich on 2011-09-23: When asking a question, please specify what version of the software you're using (e.g. Diamondback) and copy/paste all error output. Answer: "002" is not a recognized format for device_id. Examples are given in comments in the openni_node.launch file. The easiest way to get up and running is to use device IDs "#1", "#2", etc., which opens the first/second enumerated Kinect. Another format, closer to what you're trying to do, is "2@3", which opens the device on USB bus 2, address 3. Enumeration order and USB address can both change when you unplug/replug a Kinect, however. To make your launch configuration repeatable, use the Kinect serial number as the device_id. The driver prints the serial number to console when you open the device, so you can collect these by opening with "#1" etc. It'll look something like "B00367707227042B". Originally posted by Patrick Mihelich with karma: 4336 on 2011-09-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Patrick Mihelich on 2011-09-26: When you connect a USB device, the host assigns it a unique 7-bit address. This address is basically arbitrary. To open multiple Kinects in a repeatable way, open them by serial number. Comment by kang on 2011-09-25: Thanks Patrick for the reply. Usually I use enumeration, which is the easiest way. But now I need to make it fixed for addressing. Btw, for '2@3', '2' is the bus, and what is address '3' here? The port? And can this also change if the device is plugged/unplugged?
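The launch snippet the question refers to was lost in extraction. A sketch of what the accepted answer describes might look like the following; the parameter name device_id follows the comments in openni_node.launch, and the values are the answer's own examples, not verified against any particular OpenNI version:

```xml
<launch>
  <!-- first Kinect: open by USB bus@address (the answer's "2@3" example) -->
  <node pkg="openni_camera" type="openni_node" name="kinect_a" output="screen">
    <param name="device_id" value="2@3"/>
  </node>
  <!-- second Kinect: open by serial number (repeatable across replugs) -->
  <node pkg="openni_camera" type="openni_node" name="kinect_b" output="screen">
    <param name="device_id" value="B00367707227042B"/>
  </node>
</launch>
```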
{ "domain": "robotics.stackexchange", "id": 6755, "tags": "kinect, roslaunch, openni, camera, multiple" }
Find all the factors of a given natural number, N
Question: I was asked this question in an interview: Find all the factors of a given natural number, N 1 <= N <= 10**10 Example: N = 6 => 1, 2, 3, 6 N = 60 => 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 I wrote the following code, but got feedback that the complexity could be improved. How can I optimise the following code? import math def get_all_factors(n): ans = [1] if n==1: return ans factors = [False] * n for i in xrange(2,int(math.sqrt(n))): if not factors[i]: if n%i == 0: factors[i] = True factors[n/i] = True for i, is_factor in enumerate(factors[1:], 1): if is_factor: ans.append(i) ans.append(n) return ans ans = get_all_factors(60) print [x for x in ans] Answer: You don't need to keep an intermediate list here, just add all the divisors to a set: from math import sqrt def get_all_factors(n): divisors = set() for i in xrange(1, int(sqrt(n)) + 1): if n % i == 0: divisors.add(i) divisors.add(n / i) return divisors if __name__ == "__main__": print get_all_factors(60) I also added a if __name__ == "__main__": guard to allow importing this function from another script. Note that there is no requirement for the output to be a sorted list in the problem statement you posted. If there is, you can just call sorted on the output of this. As @kyrill mentioned in the comments, you could also make this a generator: from math import sqrt, ceil def get_all_factors(n): sqrt_n = sqrt(n) for i in xrange(1, int(ceil(sqrt_n))): if n % i == 0: yield i yield n / i if sqrt_n % 1 == 0: yield int(sqrt_n)
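The accepted answer is Python 2 (xrange, integer /). For reference, a Python 3 rendition of the same $O(\sqrt{n})$ idea, with sorted output, might look like this (math.isqrt requires Python 3.8+):

```python
from math import isqrt

def get_all_factors(n):
    """Return all divisors of n in ascending order, trial-dividing
    only up to sqrt(n) -- O(sqrt(n)) instead of O(n)."""
    divisors = set()
    for i in range(1, isqrt(n) + 1):
        if n % i == 0:
            divisors.add(i)
            divisors.add(n // i)   # the paired divisor n/i
    return sorted(divisors)

print(get_all_factors(60))
# [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(get_all_factors(49))   # perfect square: 7 appears only once
# [1, 7, 49]
```

Using a set makes the perfect-square case handle itself: when i == n // i, adding the divisor twice is harmless.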
{ "domain": "codereview.stackexchange", "id": 25268, "tags": "python, factors" }
What is the functionality (reactive sites) of the phenol molecule?
Question: My class 12 chemistry textbook says that the phenol molecule has three reactive sites, or that its functionality is three. But I am confused whether it is so or not, because, as far as I know, phenol has three double bonds, so it must have a functionality of six. Besides, it also has an -OH, which is yet another reactive site, making it seven in all. Please help so that I may have a clear understanding of how to judge the functionality of a monomer molecule. Answer: Phenol has $3$ double-bond equivalents, but not $3$ localized double bonds: Look at its resonance structure: Courtesy Wikipedia. Therefore, to an electrophile, there are $3$ reaction sites, which are the ortho- and the para- positions. A nucleophile does not attack phenol.
{ "domain": "chemistry.stackexchange", "id": 6604, "tags": "organic-chemistry, polymers, phenols" }
Why is one picture of this star blue with red, and the other red with blue?
Question: Someone just retweeted a NASA tweet onto my timeline, and it includes two images, allegedly from the same star that was in the process of dying, taken by the new space telescope, side by side: I don't quite understand what I'm looking at though. If I understood correctly, these are two images of the same star. But, both have different colour schemes. I know stars can change 'colour' based on their type and life cycle (like blue dwarf, red dwarf), but I doubt that's what I'm looking at, as the telescope is relatively new and, from what I understand, those life cycles take ages. The other option that comes to my mind is some kind of artistic freedom, like what is done when artists make images of e.g. dinosaurs and guess their colors. But that seems a bit too unscientific for NASA, so I'm expecting there to be a good reason for the difference in color here. If this is the same star, why is the picture on the left blue with red, and the one on the right red with blue? Answer: They are two pictures of the same object, the Southern Ring Nebula. They look different because we are looking at different wavelengths. The picture on the left is near infrared (about the range 0.7 - 5 $\mu m$), while the one on the right is mid infrared (JWST is sensitive to up to 30 $\mu m$). Both kinds of light are impossible to see with our eyes; therefore, by definition, they don't have a color. There is no such color as infrared. So how do we display these images on an RGB monitor? Essentially, we are entitled to choose the color we want. Scientists usually choose the color so that the image is (i) clear to read, with the important features highlighted, and (ii) pleasant to the eye. In this case (but this is not a rule), the longest wavelengths have been displayed in red, and the shortest in blue, mimicking in some way the fact that in the visible spectrum red has the longest wavelength and blue/violet the shortest. 
This color coding is useful because a scientist can tell at a glance what regions of the nebula are emitting the longest and the shortest wavelengths. If one also knows what processes emit what kind of light, this gives the picture a clear and immediate meaning. For instance, you can tell by looking at the left picture that the central part is mainly ionized gas (blue light, shortest wavelengths) while the external region is dust and molecular hydrogen (longer wavelengths). In the mid infrared image the colors are reversed, because ionized gas emits more strongly in the red part of mid infrared (thus the central region is red), while in the external region we see hydrocarbon grains that emit in the shortest wavelengths of mid infrared. Source: NASA's live coverage of the publication of the first images of JWST
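The "choose the color" step the answer describes can be illustrated with a toy example (synthetic data, my own sketch, nothing resembling the actual JWST pipeline): three narrow-band images of the same field are simply assigned to the R, G, B channels, longest wavelength to red.

```python
import numpy as np

# three synthetic narrow-band exposures of the same 4x4 field,
# ordered short -> long wavelength (stand-ins for real filter images)
band_short = np.random.rand(4, 4)
band_mid   = np.random.rand(4, 4)
band_long  = np.random.rand(4, 4)

def norm(img):
    # stretch each band to [0, 1] so no channel dominates by accident
    return (img - img.min()) / (img.max() - img.min())

# false-color convention from the answer: longest wavelength -> red,
# shortest -> blue
rgb = np.dstack([norm(band_long), norm(band_mid), norm(band_short)])
print(rgb.shape)  # (4, 4, 3) -- displayable with matplotlib's imshow
```

Swapping the channel order is all it takes to make "the same object" look blue-with-red instead of red-with-blue, which is essentially what the two published images do across two different wavelength ranges.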
{ "domain": "astronomy.stackexchange", "id": 6459, "tags": "star, james-webb-space-telescope" }
How do you explain pKa to non-professional?
Question: Let's say we have substance $\ce{A}$, which is mixed with substance $\ce{B}$ to improve shelf-life, because the $\mathrm{p}K_\mathrm{a}$ of substance $\ce{A}$ is $7.9$ and in the mix the $\mathrm{pH}$ is $5.2$. Does this mean that in a solution with $\mathrm{pH}$ near $8$ substance $\ce{A}$ has most of its molecules in the neutral state and does not dissociate, and thus forms precipitates? Does $\mathrm{p}K_\mathrm{a}$ mean that most of the molecules of substance $\ce{A}$ are in the nondissociated state? Does lowering the $\mathrm{pH}$ in this case cause substance $\ce{A}$ to dissociate? Are the bold parts right? Please explain in plain English without the use of external links. Answer: pKa and pH are related concepts but often confused. pKa is a property of a compound that tells us how acidic it is. The lower the pKa, the stronger the acid. pH is a property of a particular solution that depends on the concentrations and identities of the components. For this discussion, I'm going to use the terms protonated and deprotonated to mean that a compound is associated or dissociated with a proton. Based on the relationship between the pKa of a compound and the pH of a solution, we can predict whether a compound will be protonated or deprotonated. If the pH is lower than the pKa, then the compound will be protonated. If the pH is higher than the pKa, then the compound will be deprotonated. A further consideration is the charge on the compound. Acids are neutral when protonated and negatively charged (ionized) when deprotonated. Bases are neutral when deprotonated and positively charged (ionized) when protonated. Given the information you provided, if compound A (with pKa 7.9) is in a solution of pH 5.2, compound A will be in the protonated state. Without knowing anything about the identities of A and B, the following is speculation. Most likely compound A is a base and compound B is an acid. 
The protonated version of A (its "conjugate acid") is more stable for long term storage than A itself. Another possibility is that compound A is an acid and B is a stronger acid. If the deprotonated version of A is unstable, then compound B is added to ensure that if the mixture came in contact with a base, something more acidic than A would be present to react with the base. EDIT BASED ON NEW INFORMATION: As Mateus B said, lidocaine will be protonated by HCl at the amine nitrogen. This is likely done for two reasons. First, as its hydrochloride, it will be more water soluble. In solution, the compound will dissociate into protonated lidocaine cation and chloride anion. Second, as the nitrogen is protonated, it is less reactive to the environment. Over time, amines can react with oxygen to form the corresponding N-oxide and/or can absorb carbon dioxide.
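The answer's qualitative rule (pH below pKa implies protonated) can be made quantitative with the Henderson-Hasselbalch equation. A quick sketch with the numbers from the question; treating A as a base, as the answer speculates:

```python
def fraction_protonated(pH, pKa):
    """Fraction of a base B present as BH+ (Henderson-Hasselbalch):
    pH = pKa + log10([B]/[BH+])  =>  [BH+] / ([B] + [BH+]) below."""
    ratio = 10 ** (pH - pKa)        # [B] / [BH+]
    return 1.0 / (1.0 + ratio)

# compound A (pKa 7.9) at the formulation pH of 5.2
print(round(fraction_protonated(5.2, 7.9), 4))   # ~0.998: essentially all BH+

# at a pH near 8, roughly half the molecules are neutral free base
print(round(fraction_protonated(7.9, 7.9), 2))   # 0.5
```

At pH = pKa the compound is exactly 50% protonated; each further pH unit below the pKa shifts the ratio by another factor of ten toward the protonated form.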
{ "domain": "chemistry.stackexchange", "id": 14851, "tags": "acid-base, solubility" }
Clarification in the mechanism for Molisch's test for glucose
Question: Wikipedia says Molisch's test is a sensitive chemical test, named after Austrian botanist Hans Molisch, for the presence of carbohydrates, based on the dehydration of the carbohydrate by sulfuric acid or hydrochloric acid to produce an aldehyde, which condenses with two molecules of a phenol (usually α-naphthol, though other phenols such as resorcinol and thymol also give colored products), resulting in a red- or purple-coloured compound. Can somebody provide an arrow pushing diagram to illustrate the intermediate steps of these reactions? (especially for the dehydration of glucose) Also, for the reaction of 5-(hydroxymethyl)furfural with 2 moles of phenol, it seems that the acyl group is acting like an electrophile and naphthol is engaged in an electrophilic aromatic substitution, but then again, I am not sure how 2 moles of naphthol combined there. Any help would be appreciated. Answer: YUSUF HASAN: 5-Hydroxymethyl furfural (5-HMF) 1 when protonated on the carbonyl oxygen becomes a reactive electrophile. Addition of phenol 2 at the reactive para-position affords 3, which deprotonates at the para-position, rearomatizing the ring to a phenol and liberating a proton. Secondary alcohol 4 is protonated on oxygen with loss of water to form stabilized cation 5. A second equivalent of phenol adds to the cation as previously described, leading to the triarylmethane 6. The methane hydrogen is susceptible to oxidation by atmospheric oxygen, which is in a triplet state (unpaired electrons behaving as a free radical). Resonance stabilized radical 8 is formed along with the hydroperoxy radical 7, which can abstract the hydrogen from triarylmethane 6, forming more of radical 8 and hydrogen peroxide. Oxidation of radical 8 with oxygen gives the resonance stabilized carbocation 9 and superoxide anion, a.k.a. superoxide radical anion 10. This species can be protonated by the strongly acidic conditions of the Molisch test to form more of the hydroperoxy radical 7.
{ "domain": "chemistry.stackexchange", "id": 11098, "tags": "organic-chemistry, reaction-mechanism, carbohydrates" }
Why does a carbonyl reform in ester reactions
Question: I have noticed that frequently, whenever esters react with a nucleophile, the nucleophile will attack the carbonyl carbon and eventually, in order for the carbonyl to reform, the other oxygen will leave. For example, the first addition of the Grignard reagent to an ester: $\ce{PhMgBr + Ph-(C=O)-O-R}$ will result in $\ce{(Ph)2-(C-O^{-})-O-R}$ as an intermediate, and then rather than the $\ce{O^{-}}$ simply getting protonated with water for example, it's favorable for the $\ce{O-R}$ to leave and the $\ce{C-O^{-}}$ to become $\ce{C=O}$. Why is the carbonyl reforming more favorable; isn't $\ce{O-R}$ a bad leaving group? Answer: Alkoxide ion is only a "bad leaving group" in a relative sense (say compared to chloride). In this case, the tetrahedral intermediate resulting from the initial reaction between the Grignard reagent plus the ester is itself an alkoxide ion, and when the OR group leaves, that then becomes an alkoxide ion - so that step should be close to energetically neutral. Saponification of esters by sodium hydroxide is another situation where alkoxide anion is a leaving group - again because alkoxide and hydroxide ions are close in energy.
{ "domain": "chemistry.stackexchange", "id": 4600, "tags": "esters" }
What is the major product of the reaction given?
Question: I think the substituent of option (a) should attack the meta position as the resultant carbanion gets stabilized by an inductive effect, but no such compound is given. So where am I making a mistake? Answer: Your reasoning is quite correct for an uncatalysed reaction. I think you have forgotten about the hydrochloric acid you also add (in catalytic amounts). You will therefore protonate the double bond first, with the carbocation being more stable in the ortho position. The alcohol can now act as a nucleophile and attack at that carbon. Therefore (c) is the correct answer.
{ "domain": "chemistry.stackexchange", "id": 4110, "tags": "organic-chemistry, reaction-mechanism" }
"Assertion `!pthread_mutex_lock(&m)' failed." runtime error while working with custom message and kinect
Question: Hi all, I'm working with the Microsoft Kinect. I'm using pointcloud_to_laserscan to retrieve a laserscan from kinect. And there is another node which retrieves the rgb image from kinect for feature extraction and advertises the features using a custom ros message on a topic called "features". Now I have another node. This node should subscribe to the pointcloud_to_laserscan and to the features topic using message_filters, because I need the laserscan and the features taken at the same time. I tried to use the tutorial on http://www.ros.org/wiki/message_filters I defined a sync policy: typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::LaserScan, myNode::myMsg> SyncLaserNodePolicy; message_filters::Synchronizer<SyncLaserNodePolicy> sync2_; message_filters::Subscriber<sensor_msgs::LaserScan> kinectLaser_sub; message_filters::Subscriber<myNode::myMsg> feature_sub; In the constructor of the node I initialize: kinectLaser_sub(n,"kinectLaser",1), feature_sub(n,"features",1), sync2_(SyncLaserNodePolicy(10),kinectLaser_sub,feature_sub), My custom message is defined as: Header header surfKeyp2d[] keypoints2d surfKeyp3d[] keypoints3d surfDescMsg[] descriptors uint32 number sensor_msgs/Image image geometry_msgs/TransformStamped trans The code is compiling without errors, but when I try to run my node I get the following error: myNode: /usr/include/boost/thread/pthread/mutex.hpp:50: void boost::mutex::lock(): Assertion `!pthread_mutex_lock(&m)' failed. Aborted Anybody have an idea what's wrong? Thanks a lot! Originally posted by kluessi on ROS Answers with karma: 73 on 2011-07-25 Post score: 2 Answer: Do you see any warnings while compiling this code? I would expect some warnings about the order in which you are initializing your class members. In C++ member variables are initialized according to the order of declaration, not the order you choose in the constructor. 
As you have declared the Synchronizer first, its constructor will be called with uninitialized subscribers. That can cause all kinds of nastiness, including the one you observed. Cheers, Dariush Originally posted by Dariush with karma: 91 on 2011-07-26 This answer was ACCEPTED on the original site Post score: 6
{ "domain": "robotics.stackexchange", "id": 6244, "tags": "ros, message-filters, boost, pointcloud-to-laserscan, synchronization" }
reading a file and publishing data to topic
Question: data in the file is of type int [ ] = {1,2,3,4,5,6,7,8,9,10} Code for Node_1 is as follows: #include<std_msgs/Int32MultiArray.h> ... int main(int argc, char **argv) { int temp; ros::init(argc, argv, "Node_1"); ros::NodeHandle nh; ros::Publisher publisher = nh.advertise<std_msgs::Int32MultiArray>("Topic_1",1); std_msgs::Int32MultiArray vec; ifstream infile("array1.txt"); while(infile >> temp) { ROS_INFO("Extracting the elements of the vector from file"); vec.data.push_back(temp); } publisher.publish(vec); ros::spinOnce(); return 0; } Executables are created when I compile (using catkin_make) But when I run this node, the statement "ROS_INFO("Extracting the elements of the vector from file");" is not at all executed What could probably be going wrong? Originally posted by anonymous25787 on ROS Answers with karma: 31 on 2016-03-01 Post score: 0 Answer: I'm guessing, you have the file in the wrong location. The file array1.txt needs to be in the directory where you execute rosrun <your_package> <your_node>, because this is set as the working directory of your executable. Originally posted by mgruhler with karma: 12390 on 2016-03-02 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 23961, "tags": "ros, nodes" }
Is the bra-ket scalar product the only function invariant under unitary transformations?
Question: In Sabine's recent paper on a proof of Born's rule, she asserts that transition probabilities must be functions of the scalar product of the initial and final states. Section 2 Proof Since $P_N$ is invariant under unitary operations, transition probabilities can only be functions of scalar products. How does one prove this statement? [1] S. Hossenfelder, "A derivation of Born’s rule from symmetry", Annals Phys. 425 (2021) 168394, arXiv:2006.14175. Answer: Let $|\psi_1 \rangle, |\phi_1 \rangle,|\psi_2 \rangle,| \phi_2 \rangle$ be unit vectors such that: $$ \langle \psi_1 |\phi_1 \rangle = \langle \psi_2 |\phi_2 \rangle$$ Then, there is a unitary map $U$ such that: $$U |\psi_1 \rangle = |\psi_2 \rangle $$ and $$U |\phi_1 \rangle = |\phi_2 \rangle $$ (Such a $U$ exists: complete each pair to an orthonormal basis via Gram-Schmidt; since the overlaps agree, the linear map sending the first basis to the second preserves all inner products and is therefore unitary.) Since $P_N$ is invariant under unitary transformations, we have: $$P_N( |\psi_1 \rangle \to |\phi_1 \rangle) = P_N(|\psi_2 \rangle \to | \phi_2 \rangle)$$ This means that $P_N( |\psi_1 \rangle \to |\phi_1 \rangle)$ only depends on $\langle \psi_1 |\phi_1 \rangle$. QED
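The existence of the unitary $U$ can be checked numerically. This is my own sketch (not from the paper): build two pairs with the same overlap, complete each pair to an orthonormal basis by Gram-Schmidt, and take $U$ as the change of basis.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def unit(v):
    return v / np.linalg.norm(v)

def pair_with_overlap(c):
    """Unit vectors (psi, phi, eta) with <psi|phi> = c and eta ⟂ psi."""
    psi = unit(rng.normal(size=d) + 1j * rng.normal(size=d))
    eta = rng.normal(size=d) + 1j * rng.normal(size=d)
    eta = unit(eta - (psi.conj() @ eta) * psi)      # project out psi
    phi = c * psi + np.sqrt(1 - abs(c) ** 2) * eta
    return psi, phi, eta

c = 0.3 + 0.4j                                      # the common overlap
psi1, phi1, eta1 = pair_with_overlap(c)
psi2, phi2, eta2 = pair_with_overlap(c)

def complete_basis(first_two):
    """Gram-Schmidt completion of an orthonormal pair to a full basis."""
    cols = list(first_two)
    while len(cols) < d:
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        for u in cols:
            v = v - (u.conj() @ v) * u
        cols.append(unit(v))
    return np.column_stack(cols)

B1 = complete_basis([psi1, eta1])
B2 = complete_basis([psi2, eta2])
U = B2 @ B1.conj().T     # maps column k of B1 to column k of B2

print(np.allclose(U.conj().T @ U, np.eye(d)))                    # True
print(np.allclose(U @ psi1, psi2), np.allclose(U @ phi1, phi2))  # True True
```

Since $\phi_i = c\,\psi_i + \sqrt{1-|c|^2}\,\eta_i$ with the same $c$ for both pairs, mapping $\psi_1 \mapsto \psi_2$ and $\eta_1 \mapsto \eta_2$ automatically carries $\phi_1$ to $\phi_2$, which is the content of the proof's first step.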
{ "domain": "physics.stackexchange", "id": 78787, "tags": "quantum-mechanics, hilbert-space" }
Dimensions of electric charge
Question: I was studying dimensional analysis, which is a technique used in conversion of units, checking the homogeneity of equations and also sometimes deriving unknown equations, if we can guess the factors on which a physical quantity depends and if the dependence is of the product type. While calculating the dimensional formula of charge, I faced some difficulties. Before mentioning my problem related to this, I want to first pose a general problem which was in my mind. The question is: can a physical quantity have more than one dimensional formula (expressed in the most simplified way and in terms of the seven fundamental physical quantities)? This question occurred to me because of the following problems when calculating the dimensional formula of electric charge, Q. I considered the following equation involving electric charge: $$ F=k\frac {Q_1Q_2}{r^2} $$ and then, after taking $k$ to be dimensionless, arrived at the dimensions $\mathrm{M}^{1/2}\,\mathrm{L}^{3/2}\,\mathrm{T}^{-1}$: 1/2 in mass, 3/2 in length and -1 in time. But in my book they arrived at the dimensions of electric charge using the formula $$Q=It$$ and thus received the dimensions 1 in current and 1 in time. These two dimensional formulae are contradictory and I'm unable to find a cause for it. Any help is appreciated. Answer: You run into a problem because you are considering $k$ to be dimensionless - and it is not... "the electromagnetic properties of vacuum" influence the relationship (as captured in $k$) and if you don't include that in your analysis the answer doesn't make sense.
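The Gaussian-style bookkeeping in the question (with $k$ taken dimensionless) can be checked mechanically by treating a dimension as a tuple of (M, L, T) exponents; this is my own sketch of that arithmetic:

```python
from fractions import Fraction as F

# dimensions as (mass, length, time) exponent tuples
def mul(a, b):  return tuple(x + y for x, y in zip(a, b))
def root(a, n): return tuple(F(x, n) for x in a)

FORCE  = (F(1), F(1), F(-2))        # M L T^-2
LENGTH = (F(0), F(1), F(0))

# F = Q^2 / r^2 with dimensionless k  =>  Q^2 = F * r^2
Q_squared = mul(FORCE, mul(LENGTH, LENGTH))
Q = root(Q_squared, 2)
print(Q)    # (Fraction(1, 2), Fraction(3, 2), Fraction(-1, 1))
```

This reproduces the question's $\mathrm{M}^{1/2}\,\mathrm{L}^{3/2}\,\mathrm{T}^{-1}$. In SI the same exercise with a dimensionful $k = 1/4\pi\varepsilon_0$ gives $[Q] = \mathrm{A\,s}$ instead; the two results belong to different unit systems rather than contradicting each other, which is the answer's point.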
{ "domain": "physics.stackexchange", "id": 40482, "tags": "charge, units, measurements, dimensional-analysis, si-units" }
Can the expansion of the Universe be explained by a giant gravitational wave?
Question: Can the expansion of the Universe be explained by a giant gravitational wave with a wavelength bigger than the observable Universe's diameter? In that case the expansion would be temporary, only happening while we are on the trough of the wave. After billions of years, when the crest of the wave gets to us, we will see contraction instead. Can this type of gravitational wave explain the accelerated expansion we observe? Answer: According to general relativity gravitational waves are transverse. This means they only act in directions perpendicular to their direction of propagation. The way gravitational waves are polarized also means that as they stretch space in one direction, they squish it in a second. For example if we define the direction of propagation of a wave to be the $\hat{z}$ direction, then the stretching/squishing would be happening in the $\hat{x}$ and $\hat{y}$ directions only. If we then define the direction of maximal stretching to be the $\hat{x}$ direction, then space will be maximally squished in the $\hat{y}$ direction. The expansion of the universe is isotropic. This means it happens equally in all directions. If a giant gravitational wave were affecting the whole observable universe, the amount of stretching or squishing (or not at all) would be different in different directions.
{ "domain": "physics.stackexchange", "id": 82021, "tags": "general-relativity, cosmology, space-expansion, gravitational-waves" }
How can you tell a model explosion from the real thing?
Question: Movies and TV shows frequently show buildings being bombed, cars blowing up, etc. Frequently these are really explosions of miniatures filmed up close. Aside from the speed at which the explosion expands relative to the size of the object (which can be adjusted by slowing down the film), are there any features of an explosion that clue us in to the scale on which it occurs? To be definite, let's imagine two tank trucks of gasoline, one 10m long and one 10cm long, but otherwise with the same proportions and made from the same materials. We film the big one exploding from 100m away and film the little one exploding from 1m away. How would we tell which footage is which? Answer: Note that by changing the overall scale by a factor $k$, you are changing the volume of the gasoline by $k^3$, and the area you are viewing by $k^2$. So the overall size of the explosion (i.e. visible flames, etc.) is not invariant. To find out what $k$ is, you'd have to know lots of details about the fuel, I suppose.
{ "domain": "physics.stackexchange", "id": 839, "tags": "classical-mechanics, fluid-dynamics, explosions, scaling" }
undefined reference to tf::poseMsgToEigen
Question: I'm on Ubuntu 14.04 with ROS Indigo. Inside a service callback, I want to convert a geometry_msgs::Pose to Eigen::Affine3d. I'm trying to use tf::poseMsgToEigen. My includes are as follows: #include <tf/tf.h> #include <tf_conversions/tf_eigen.h> #include <eigen_conversions/eigen_msg.h> #include <geometry_msgs/Pose.h> The relevant part of code is: geometry_msgs::Pose pose; Eigen::Affine3d transform; tf::poseMsgToEigen(pose, transform); I get the following error: undefined reference to `tf::poseMsgToEigen(geometry_msgs::Pose_<std::allocator<void> > const&, Eigen::Transform<double, 3, 2, 0>&)' collect2: error: ld returned 1 exit status Inside my CMakeLists.txt I have find_package(catkin REQUIRED COMPONENTS std_msgs geometry_msgs tf tf_conversions ) and in the package.xml I have added the build and exec_depends for the same. Thanks a lot in advance for your help. Originally posted by dhindhimathai on ROS Answers with karma: 136 on 2018-07-26 Post score: 2 Original comments Comment by destogl on 2018-07-26: Are you maybe missing eigen conversion? Comment by dhindhimathai on 2018-07-26: @destogl I already included it: #include <eigen_conversions/eigen_msg.h>. Do I have to include it anywhere else? Comment by Abdulbaki on 2019-04-08: I was missing tf_conversions and had similar error. Answer: Sorry guys, I was missing eigen_conversions inside find_package(catkin REQUIRED COMPONENTS ) in my CMakeLists.txt. Thanks to @destogl for the hint. Originally posted by dhindhimathai with karma: 136 on 2018-07-26 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by destogl on 2018-07-26: That is what I meant
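For reference, the find_package call with the accepted fix applied would look like this (a sketch based on the CMakeLists fragment quoted in the question):

```cmake
find_package(catkin REQUIRED COMPONENTS
  std_msgs
  geometry_msgs
  tf
  tf_conversions
  eigen_conversions   # was missing: provides the library behind tf::poseMsgToEigen
)
```

Including the package in find_package is what puts its library into catkin_LIBRARIES, so the linker can resolve the symbol; merely including the header compiles fine but fails at link time, exactly as observed.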
{ "domain": "robotics.stackexchange", "id": 31373, "tags": "eigen, geometry-msgs, ros-indigo, transform" }
What linux signal is sent when roslaunch kills a node?
Question: My launch file spawns a ROS Qt node. After launching, I tried to quit using Ctrl+C, but the ROS Qt node is not terminating. A few other nodes are also present in the launch file with required == "true". When any of those nodes terminates, the launch process tries to kill the ROS Qt node: no success on the first attempt, but after that it escalates to SIGTERM and then the node terminates. I tried to check the signal sent by the launch process by writing a signal handler inside the ROS Qt node. But no signal is caught when launch initiates the first kill attempt. Originally posted by dharanikumar on ROS Answers with karma: 35 on 2016-11-10 Post score: 0 Answer: You have to do 2 things: Tell ROS you are handling the signal to kill the node. This is done by adding ros::init_options::NoSigintHandler to your ros::init call: ros::init(argc, argv, "YourNodeName", ros::init_options::NoSigintHandler); Register a sigint handler to handle the node termination (I assume you use Qt4). I show you the simple way here: #include <QCoreApplication> #include <ros/ros.h> #include <signal.h> void signalhandler(int sig) { qApp->quit(); } int main(int argc, char *argv[]) { //initialize Ros environment ros::init(argc, argv, "YourNodeName", ros::init_options::NoSigintHandler); QCoreApplication app(argc, argv); signal(SIGQUIT, signalhandler); signal(SIGINT, signalhandler); signal(SIGTERM, signalhandler); signal(SIGHUP, signalhandler); return app.exec(); } There exists a more object-oriented way to do this, documented here: Calling Qt Functions From Unix Signal Handlers Originally posted by Inounx with karma: 293 on 2016-11-10 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 26198, "tags": "roslaunch" }
Quantum mechanics and Classical limit(s)
Question: I have tried to make sense of this and I am not sure I get it. What I gather from this page about the classical limit is: you need coherent states; something like $\hbar \to 0$ is not really enough. Which makes sense to me, because I always thought it to be a strange thing to do. Like assuming $c=\infty$ instead of $ {v \over c} \to 0$. But statements on this seem to vary greatly; here are a few statements from upvoted answers and one from my statistical mechanics professor: 1) "The short answer: No, classical mechanics is not recovered in the ℏ→0 limit of quantum mechanics." - juanrga There seem to be contradictory statements to this, and people trying to get people to read their paper on this; these are cases of that: 2) "It is natural, and intuitive, as explained above, to assume that the classical limit is a property of a certain class of states. As it happens, that view is incorrect. You can actually obtain an exact recovery of Hamiltonian Classical Point Mechanics for any value of ℏ using a different wave-equation:" - Kingsley Jones 3) "What is the limit ℏ→0 of quantum theory?" is that the classical limit of quantum theory is not classical mechanics but a classical statistical theory. - U. Klein 4) This is what shows up in my stat. mech. lecture as: "Häufig spielen jedoch quantenmechanische Effekte keine Rolle; dies sollte der Fall sein, wenn $\hbar $ kleiner als alle relevanten Wirkungen im System ist und wir den Grenzübergang $\hbar $ → 0 machen können. Dann sollten die quantenmechanischen Formeln in die klassischen Formeln übergehen." My translation: "Often, however, quantum mechanical effects play no role; this should be the case when $\hbar$ is smaller than all relevant actions in the system and we can take the limit $ \hbar \to 0 $. Then the quantum mechanical formulas should go over into the classical ones." (Talking about statistical averages $\langle O \rangle = Tr(O\rho)$ and von Neumann entropy here.)
Is this true for statistical mechanics? I am looking forward to your takes on this stuff. Just type "classical limit" in the search and look at some threads; it is quite strange (to me) how many "please don't just link your own work" type comments show up. Answer: The classical limit of quantum theories is understood quite well from a mathematical standpoint nowadays. The so-called semiclassical analysis covers the QM (finite dimensional phase-space) cases; the Hepp method and infinite-dimensional semiclassical analysis cover the systems with classically infinitely many degrees of freedom. The ideas can be summed up in the following quantum-classical dictionary, which can be made rigorous in the limit $\hbar\to 0$ with some technical assumptions: Space. Quantum: Infinite dimensional Hilbert space $\mathscr{H}$, in the easier example $L^2(\mathbb{R}^d)$. Classical: finite or infinite dimensional phase space $Z$. In the example above, $Z=\mathbb{R}^{2d}$. States. Quantum: $(\rho_{h})_{h\in (0,\bar{h})}\subset \mathcal{L}^1(\mathscr{H})$ is a family of quantum normal states (positive trace class self-adjoint operators with trace one). The dependence on the semiclassical parameter $h$, which plays the role of Planck's constant, is made explicit because the corresponding classical quantity is obtained as a suitable limit $h\to 0$ in the above family. Classical: Probability measures $\mu\in \mathcal{P}(Z)$ of the classical phase space. A probability measure is a positive Borel measure such that $\mu(Z)=1$. Observables. Quantum: Families of densely defined operators $A_{h}:D_q\subset\mathscr{H}\to \mathscr{H}$ that depend on the semiclassical parameter $h$ in a suitable (controllable) way. Classical: Densely defined function(al)s of the classical phase space $a:D_c\subset Z\to \mathbb{C}$. Evolution.
Quantum: Family of strongly continuous unitary groups $U_h(\cdot)=e^{-\frac{i}{h}(\cdot) H_{h}}:\mathbb{R}\times \mathscr{H}\to \mathscr{H}$ that depend on the semiclassical parameter ($H_h$ is the Hamiltonian of the system). Classical: Nonlinear one-parameter evolution group $\Phi(\cdot):\mathbb{R}\times Z\to Z$ that solves the classical evolution equations (in the case of $\mathbb{R}^{2d}$, the usual Hamilton equations of classical mechanics). In the limit $h\to 0$, the quantum objects converge to the classical ones, in the following sense (as I said, under suitable technical assumptions and "natural" scaling conditions that ensure everything in the limit is finite): at time zero $$\lim_{h\to 0}\mathrm{Tr}[\rho_h A_h]=\int_Z a(z)d\mu(z)\; ,$$ where $\mu$ is the classical measure corresponding to the family of states $\rho_h$, and the classical observable is averaged over the phase space w.r.t. the classical probability measure; at time $t$ $$\lim_{h\to 0}\mathrm{Tr}[e^{-\frac{i}{h}t H_h}\rho_h e^{\frac{i}{h}t H_h} A_h]=\int_Z a(z)d\Phi(t)_{\#}\mu(z)\; ,$$ where $\Phi(t)_{\#}\mu$ is the push-forward of the measure $\mu$ by the classical non-linear flow $\Phi(t)$ that solves the classical equations. In $L^2(\mathbb{R}^d)$, to families of coherent states of the type $C\bigl((q/\sqrt{h},p/\sqrt{h})\bigr)$ there corresponds a classical delta measure on the phase space $\mathbb{R}^{2d}\ni (x,\xi)$ centered at the point $(q,p)$, i.e. $d\mu(x,\xi)=\delta(x-q)\delta(\xi-p)dxd\xi$. This means that in the classical limit, coherent evolution corresponds to the pointwise evolution of the point $(q,p)$ on the classical phase space. Comment: the answers given in the question you linked are at least inaccurate or incomplete.
I would like to stress that the results I sketched above are obtained in a rigorous fashion in a very vast literature of mathematical papers, for a huge class of interesting physical systems of QM and also (bosonic) QFT (in the few cases where it can be defined from a mathematically rigorous standpoint). The picture that emerges is also, in my opinion, quite natural: to the "probabilistic" knowledge that is intrinsic to quantum mechanics there corresponds, classically, a similar probabilistic knowledge of the phase space; however, the indeterminacy constraint of QM does not hold classically, and for suitable (coherent) initial quantum states a deterministic evolution of observables is recovered in the classical limit.
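The coherent-state statement above can be illustrated numerically. The sketch below is my own illustration, not from the cited literature: it takes a Gaussian position density with variance proportional to $\hbar$ and shows it concentrating on the classical point $q$ as $\hbar\to 0$, which is the position part of the delta measure $d\mu(x,\xi)=\delta(x-q)\delta(\xi-p)dxd\xi$ described above.

```python
import numpy as np

# Sketch: the position density |psi|^2 of a coherent-like Gaussian state
# centered at the classical point q = 1, with Var(x) = hbar/2. As hbar -> 0
# the density concentrates on q, i.e. it approaches the delta measure
# delta(x - q) dx of the classical limit.
q = 1.0
x = np.linspace(-4.0, 6.0, 20001)
dx = x[1] - x[0]

for hbar in (1.0, 0.1, 0.01):
    rho = np.exp(-(x - q) ** 2 / hbar)   # |psi|^2 up to normalization
    rho /= rho.sum() * dx                # normalize to a probability density
    mean = (x * rho).sum() * dx
    var = ((x - mean) ** 2 * rho).sum() * dx
    print(f"hbar={hbar}: <x>={mean:.4f}, Var(x)={var:.4f}")  # Var ~ hbar/2
```

The mean stays pinned at the classical value while the spread vanishes, which is the sense in which coherent states single out a point of the phase space.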
{ "domain": "physics.stackexchange", "id": 21488, "tags": "quantum-mechanics, classical-mechanics" }
Why are the vectors canceled out in this scenario for angular momentum of a particle?
Question: I have a study guide for our next test, and I'm trying to understand the professor's answer, but I don't understand why i^ * i^ = 0. Here is his work. Why do we know that the P vector is in direction i^? Why does it cancel out the other i^ direction once multiplied? I appreciate any and all help in understanding this, thank you! Answer: The vectors cancel because your professor is taking the cross product between them, $\hat{\bf i}\times\hat{\bf i}$, rather than the scalar product, $\hat{\bf i}\cdot\hat{\bf i}$. The cross product between two parallel vectors (or two copies of the same vector) is always zero. With regard to your second question, the momentum vector $\bf P$ is in the $\hat{\bf i}$ direction because the velocity vector $\bf v$ is in this direction, and ${\bf P}=m{\bf v}$, where $m$ is the mass of the object.
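A quick numerical check of the two products with NumPy (my own illustration, not part of the professor's work):

```python
import numpy as np

# The cross product of a vector with itself (or with any parallel vector)
# vanishes, while the dot product of a unit vector with itself is 1.
i_hat = np.array([1.0, 0.0, 0.0])

print(np.cross(i_hat, i_hat))        # [0. 0. 0.]
print(np.dot(i_hat, i_hat))          # 1.0

# The same holds for any parallel pair, e.g. i-hat and 3*i-hat:
print(np.cross(i_hat, 3 * i_hat))    # [0. 0. 0.]
```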
{ "domain": "physics.stackexchange", "id": 58536, "tags": "kinematics, angular-momentum, momentum, vectors" }
How dense would planet earth have to be to have the same gravity as Jupiter?
Question: I was reading this question about how small a planet could be while having earth-like gravitational pull. This got me thinking: how dense would planet Earth have to be to have the same gravitational pull as Jupiter while all the other factors stay the same (even if it is impossible in the real world)? If there are any formulas, could you please explain them so I understand them? Edit Sorry about some of the confusion, but I meant keeping everything but the mass the same. Answer: There are at least two interpretations to this problem: Per Wikipedia, Jupiter's surface gravity is $2.528$ times Earth's. Thus, if the Earth were $2.528$ times denser, it would have the same surface gravity as Jupiter. The Earth's current density is $5.514$ grams per cubic centimeter, so the new density would be $2.528 \times 5.514$, or about $13.9394$ grams per cubic centimeter. This assumes we change Earth's mass, but not its radius. @Rob_Jeffries' answer assumes the Earth's mass remains constant and the radius changes. If the radius shrinks by a factor of $2$, the volume decreases by a factor of $8$, and the planet becomes 8 times more dense. The surface gravity increases by a factor of $4$, since it depends on the radius squared. In general, shrinking the planet's radius by a factor of $x$ will increase the density by $x^3$ and the gravity by $x^2$. If we want the gravity $2.528$ times higher, we choose $x = \sqrt{2.528}$, or right around $1.590$. This makes $x^3$ equal to about $4.019$. Multiplying that by Earth's current density of $5.514$, we get $22.16$ grams per cubic centimeter, pretty close to what Rob got. So, you can't really change the density without changing anything else: either the mass or the volume must change.
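Both interpretations reduce to one-line calculations; here is a small check of my own, using the figures quoted in the answer:

```python
# Figures quoted in the answer: surface-gravity ratio and Earth's mean density.
g_ratio = 2.528      # g_Jupiter / g_Earth
rho_earth = 5.514    # g/cm^3

# Interpretation 1: same radius, heavier Earth. At fixed radius, surface
# gravity g = G*M/R^2 scales linearly with density.
rho_fixed_radius = g_ratio * rho_earth
print(rho_fixed_radius)    # ~13.94 g/cm^3

# Interpretation 2: same mass, radius shrunk by a factor x, so that gravity
# grows by x^2 while density grows by x^3.
x = g_ratio ** 0.5         # ~1.590
rho_fixed_mass = x ** 3 * rho_earth
print(rho_fixed_mass)      # ~22.16 g/cm^3
```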
{ "domain": "astronomy.stackexchange", "id": 772, "tags": "gravity, earth, jupiter" }
unable to update octomap correctly using insertPointcloud
Question: Hi, I would like to use octomap for turtlebot path planning. My issue is: I want to update my map after my robot reaches the desired position. To do this, I use the octomap method insertPointCloud(pointcloud, sensor_origin). The pointcloud data is obtained from the projectLaser method (which converts /scan to /pointcloud2), and sensor_origin is the odometry of the turtlebot. The resulting map I get is "overlapping", which is not correct. (sorry for being unable to upload a pic here) Does anyone have thoughts on what the potential reasons for that could be? Originally posted by zhefan on ROS Answers with karma: 7 on 2020-01-17 Post score: 0 Answer: Hi, The API call you use in Octomap expects the input pointclouds to be correctly registered (i.e. aligned) with respect to the global coordinate frame. (In fact, the Octomap library doesn't do any form of pointcloud alignment at all, so you are responsible for providing the correctly aligned pointclouds.) You can play around with something like PCL pointcloud registration (http://pointclouds.org/documentation/tutorials/#registration-tutorial) to align the pointclouds correctly. Also, you would most definitely require a localization or a SLAM solution that generates a more accurate estimation of the current pose of the sensor, as opposed to using the odometry of the turtlebot. Originally posted by Namal Senarathne with karma: 246 on 2020-01-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by zhefan on 2020-01-19: Thank you very much! Your answer really makes sense to me. I also found another method in Octomap, InsertPointCloud(pointcloud, sensor_origin, frame_origin). I wonder if this method can deal with unregistered pointclouds? (Just want to know if I can avoid using PCL) Comment by Namal Senarathne on 2020-01-23: Hi, no; even with InsertPointCloud(pointcloud, sensor_origin, frame_origin) you still need to compute the transform (i.e. do the registration)
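To make the accepted answer concrete: "registration" here just means expressing every scan in the global map frame before insertion. A minimal sketch of that transform follows; the function name and poses are my own invention, and this is plain NumPy, not the octomap or PCL API:

```python
import numpy as np

# Sketch: before inserting a scan into the map, each point must be expressed
# in the global frame. Given the sensor pose as a rotation R and translation t
# (ideally from a localization/SLAM solution rather than raw odometry), the
# alignment is just p_global = R @ p_sensor + t for every point.
def to_global_frame(points, R, t):
    """points: (N, 3) array of scan points in the sensor frame."""
    return points @ R.T + t

# Example: sensor yawed 90 degrees, placed 2 m along x from the map origin.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([2.0, 0.0, 0.0])
scan = np.array([[1.0, 0.0, 0.0]])    # one point 1 m ahead of the sensor
print(to_global_frame(scan, R, t))    # [[2. 1. 0.]]
```

Feeding clouds that skip this step (or use a drifting pose estimate) is exactly what produces the "overlapping" map described in the question.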
{ "domain": "robotics.stackexchange", "id": 34286, "tags": "ros, octomap, ros-kinetic" }
Does a time-dependent Hamiltonian commute with itself at different times?
Question: Let $H\colon\mathbb{R}\to\operatorname{End}\mathcal{H}$ be a time-dependent Hamiltonian operator, where $\mathcal{H}$ is an arbitrary Hilbert space. Does $H$ commute with itself at different times, i.e. is $\left[H(t_1),H(t_2)\right]=0$ for $t_1\neq t_2$? Since I expect the answer to be negative, I move on to the actual question. Because it commutes with itself at the same time instant, I expect that: $$\lim_{\epsilon\to 0^+} \left[H(t+\epsilon),H(t)\right]=0$$ where $\epsilon>0$. In which limiting procedure would the above limit make sense? And one final question. The formal solution to the initial value problem: $$i\hbar\tfrac{\mathrm{d}}{\mathrm{d}t}\psi(t)=H(t)\psi(t), \ \ \psi(t_0)=\psi_0$$ is given by the time-ordered exponential: $$\psi(t)=\mathcal{T}\exp\left(\tfrac{1}{i\hbar}\int_{t_0}^{t}H(t')\mathrm{d}t'\right)\psi_0$$ where $t\ge t_0$. Taking into account the above, I intuitively expect that as $t\rightarrow t_0$ then: $$\mathcal{T}\exp\left(\tfrac{1}{i\hbar}\int_{t_0}^{t}H(t')\mathrm{d}t'\right)\rightarrow \exp\left(\tfrac{1}{i\hbar}(t-t_0)H(t_0)\right)$$ If the above is in some sense correct, how would you (attempt to) prove it? Answer: Does $H$ commute with itself at different times? In general, no. If it does happen to, then the eigenstates don't change in time and you don't need to time-order the exponential in the time-evolution operator. In which limiting procedure would the above limit make sense? Define the operator $A_t(\epsilon) := [H(t + \epsilon), H(t)]$, which depends on the parameter $\epsilon$. If the Hamiltonian depends continuously on time at time $t$, then as $\epsilon \to 0^+$, $A_t(\epsilon)$ will approach the zero operator with respect to the Hilbert–Schmidt norm, which is proportional to the RMS value of the eigenvalues of $A_t(\epsilon)$. (In practice, it will generally approach the zero operator with respect to any reasonable operator norm.) If the above is in some sense correct, how would you (attempt to) prove it?
The result follows from the Lie product formula. This idea forms the basis of the extremely important Trotter decomposition, which has been studied extensively in the context of almost all areas of quantum mechanics.
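The Lie product formula can be illustrated numerically. The sketch below is my own illustration, not a proof; it uses a homemade truncated-Taylor matrix exponential so the snippet stays self-contained (adequate for these small-norm matrices), and shows the first-order Trotter error shrinking roughly like $1/n$ for two non-commuting Hamiltonians:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two non-commuting Hermitian matrices (Pauli sigma_x and sigma_z).
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)

exact = expm(-1j * (A + B))   # e^{-i(A+B)} at t = 1, hbar = 1
errs = []
for n in (1, 10, 100):
    step = expm(-1j * A / n) @ expm(-1j * B / n)
    trotter = np.linalg.matrix_power(step, n)
    errs.append(np.linalg.norm(trotter - exact))
    print(n, errs[-1])        # error shrinks roughly like 1/n
```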
{ "domain": "physics.stackexchange", "id": 37047, "tags": "quantum-mechanics, mathematical-physics" }
Can't get a good connection remotely with roscore
Question: Hello, For some reason I'm unable to get a good connection remotely with roscore. I have roscore running on my Jetson Nano and set up the .bashrc file with the IP address export ROS_MASTER_URI=http://192.168.178.63:11311 export ROS_IP=192.168.178.63 When I do an ifconfig on Ubuntu (running on VMware) I get as IP address 192.168.204.128 So on my Ubuntu VM I set in the .bashrc file export ROS_MASTER_URI=http://192.168.178.63:11311 export ROS_IP=192.168.204.128 Now when I run a publisher on my Jetson Nano (e.g. Hello World) I can subscribe to the topic on the desktop Ubuntu machine and see the data coming in. But when I run a publisher on my desktop (e.g. Hello World) the Jetson doesn't see the data coming in, although when I do a rostopic list it shows the topic. Does anybody know how to fix this? I must be doing something wrong. I think it is the network settings, but I can't change the IP address of the VM; I tried to change it, but it didn't give me good results and I was even unable to connect to roscore. Some help would be appreciated Originally posted by ElectricRay on ROS Answers with karma: 11 on 2023-08-04 Post score: 0 Original comments Comment by billy on 2023-08-05: I will assume when you say "the Jetson doesn't see the data" you mean you have a subscriber running on the Jetson and the subscriber callback doesn't get entered. I have to ask, does it work if you run both publisher and subscriber on the desktop? You're running the same version of ROS on both machines? Answer: I have figured out what went wrong. It was a couple of things all related to each other. As you can see in my question, the IP addresses of the desktop and the Jetson Nano were not in the same range. This creates issues; in order to fix this when using a Virtual Machine, one needs to bridge the network card. As I was using WiFi on my desktop and have the free version of VMware, I could not bridge the WiFi card.
So what I did is connect my desktop with a LAN cable to my router, and that bridged the network card. By doing this I am able to send data messages between publishers and subscribers in both directions. So problem solved. Originally posted by ElectricRay with karma: 11 on 2023-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Mike Scheutzow on 2023-08-06: Please click on the checkmark in the gray circle to indicate you (the question author) accept this answer. The icon will turn green. Comment by ElectricRay on 2023-08-06: I tried to do so but I don't have enough points to do this Comment by Mike Scheutzow on 2023-08-06: oh yeah, I forgot about that. Comment by ElectricRay on 2023-08-06: np, I don't know how I receive points; I have two questions answered by myself because I figured it out eventually and thought it would be good to close them. Comment by billy on 2023-08-08: I +1'd this post so maybe now you have the points needed. Comment by ElectricRay on 2023-08-08: Thanks! as you can see now I have the rights :)
{ "domain": "robotics.stackexchange", "id": 38484, "tags": "linux" }
first order logic resolution unification
Question: Assuming I have shown part of the knowledge base in clausal format: [1] p1(banana). [2] not p1(X) or p2(Y). [3] p1(X) or not p3(F). ... and more rules. Most of the books would do something like this: [1,2] {X=banana} p2(Y). and more steps. First question: is it equally correct to do something like the following: [2,3] {X=X} p2(Y) or not p3(F). and then continue with resolution? Second question: What if different variables were used in each clause? Could I do the same as above? For example, if we had: [2] not p1(X1) or p2(Y1). [3] p1(X2) or not p3(F2). [2,3] {X1=X2} p2(Y) or not p3(F2). Thank you in advance Answer: Assuming $X$ here is a variable, rather than an atomic proposition, then first you must specify what the quantification is for 2 and 3. I assume that it should be $\forall X,Y \neg p1(X)\vee p2(Y)$, and similarly for 3. In this case, what can be done is to replace $X$ and $Y$ with each atomic proposition available, in order to obtain a propositional knowledge base, and work on that. What you suggest to do with 2,3 is sound only under universal quantification, but if you have only universal quantification, it is useful indeed. For your second question: the name of the variable means nothing, so your substitution is sound there as well. Indeed, the claim $\forall Y, P(Y)$ is equivalent to $\forall Z, P(Z)$. You can first change the names, if it makes you happy :) I'll remark that usually, in resolution-guided proofs, it is more useful to resolve a concrete expression with a quantified rule. For example, resolving $P(a)$ with $\forall X P(X)\to Q(X)$ in order to obtain $Q(a)$. This is more likely (heuristically) to get you towards the proof of a claim.
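The substitutions discussed above can be mechanized. Below is a deliberately minimal unification sketch of my own (it handles only constants and variables, with no function terms and no occurs check), using the convention that uppercase names are variables:

```python
# Minimal unification sketch: terms are plain strings, where uppercase names
# ('X', 'Y1') are variables and lowercase names ('banana') are constants.

def unify(term1, term2, subst=None):
    """Return a substitution making term1 and term2 equal, or None."""
    if subst is None:
        subst = {}
    t1 = subst.get(term1, term1)
    t2 = subst.get(term2, term2)
    if t1 == t2:
        return subst
    if t1[0].isupper():          # t1 is a variable: bind it
        subst[t1] = t2
        return subst
    if t2[0].isupper():          # t2 is a variable: bind it
        subst[t2] = t1
        return subst
    return None                  # two distinct constants: no unifier

# Resolving [1] p1(banana) against [2] not p1(X) or p2(Y):
print(unify('X', 'banana'))      # {'X': 'banana'} -> resolvent p2(Y)

# Renaming variables changes nothing (the second question): X1 unifies with X2.
print(unify('X1', 'X2'))         # {'X1': 'X2'}

# Two distinct constants never unify:
print(unify('banana', 'apple'))  # None
```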
{ "domain": "cs.stackexchange", "id": 1090, "tags": "artificial-intelligence, first-order-logic, knowledge-representation" }
"observer pointer" meant to stay updated when the pointed object is moved in memory
Question: I wasn't sure how to name it; maybe "follow_ptr", "self_updating_ptr", or "stalking_ptr" or something along those lines. For now it's called Identifier. What I'm trying to achieve is a pointer wrapper which will always refer to the same object even when that object is moved in memory (vector resizing is a quite frequent example; algorithms like std::remove_if can also move elements around). EDIT: One requirement is to allow storing objects in sequential containers (like vector and deque) without losing sequential storage as one would by using unique_ptr or shared_ptr. This whole system is not meant to take care of ownership. It's my bad for using the term "smart pointer" in the original title; it's smart in the sense that it follows the pointed object, as opposed to an observer pointer, which wouldn't do that. A requirement is that the object is stored within an "Identified" class. That class is necessary to keep all the Identifiers updated. The trick is having a double indirection, where a raw pointer living in the heap will point to the object to be stalked: #include <memory> #include <stdexcept> template <typename T> class Identifier; template <typename T> class Identified; // A pointer to an identified object. This object lives in the heap and is used to share information with all identifiers about the object moving in memory. template <typename T> class Inner_identifier { public: Inner_identifier() = default; Inner_identifier(T* identified) noexcept : identified{identified} {} Inner_identifier(const Inner_identifier& copy) = delete; Inner_identifier& operator=(const Inner_identifier& copy) = delete; Inner_identifier(Inner_identifier&& move) = delete; Inner_identifier& operator=(Inner_identifier&& move) = delete; T* identified{nullptr}; }; The Identifier, or stalker, acts as something in between a smart pointer and an optional.
The idea is that if Identifiers outlive an object, they're still valid (assuming the user checks with has_value before using them, like with an optional). I'm unsure if I should just delete the default constructor, so that it's always certain that an Identifier's pointer to the Inner_identifier is always valid, and I can get rid of some checks. For now I've left it just to make writing the example simpler. template <typename T> class Identifier { public: Identifier() = default; Identifier(Identified<T>& identified) : inner_identifier{identified.inner_identifier} {} Identifier& operator=(Identified<T>& identified) { inner_identifier = identified.inner_identifier; return *this; } Identifier(const Identifier& copy) = default; Identifier& operator=(const Identifier& copy) = default; Identifier(Identifier&& move) = default; Identifier& operator=(Identifier&& move) = default; const T& operator* () const { check_all(); return *inner_identifier->identified; } T& operator* () { check_all(); return *inner_identifier->identified; } const T* operator->() const { check_all(); return inner_identifier->identified; } T* operator->() { check_all(); return inner_identifier->identified; } const T* get() const { check_initialized(); return inner_identifier->identified; } T* get() { check_initialized(); return inner_identifier->identified; } bool has_value() const noexcept { return inner_identifier && inner_identifier->identified != nullptr; } explicit operator bool() const noexcept { return has_value(); } private: std::shared_ptr<Inner_identifier<T>> inner_identifier{nullptr}; void check_initialized() const { #ifndef NDEBUG if (!inner_identifier) { throw std::runtime_error{"Trying to use an uninitialized Identifier."}; } #endif } void check_has_value() const { #ifndef NDEBUG if (inner_identifier->identified == nullptr) { throw std::runtime_error{"Trying to retrieve object from an identifier whose identified object has already been destroyed."}; } #endif } void check_all() const {
check_initialized(); check_has_value(); } }; Finally the Identified class, which holds the instance of the object to be pointed to by one or more Identifiers. It is responsible for updating the Inner_identifier whenever it is moved around in memory with either the move constructor or move assignment. Conversely, the copy constructor makes sure that the new copy has its own new Inner_identifier and all the existing Identifiers still work with the instance being copied from. Upon destruction, the Inner_identifier is nullified, but it will keep existing for reference as long as at least one Identifier to the now defunct object still exists (hence the internal shared_ptrs). template <typename T> class Identified { friend class Identifier<T>; public: template <typename ...Args> Identified(Args&&... args) : object{std::forward<Args>(args)...}, inner_identifier{std::make_shared<Inner_identifier<T>>(&object)} {} Identified(Identified& copy) : Identified{static_cast<const Identified&>(copy)} {} Identified(const Identified& copy) : object{copy.object}, inner_identifier{std::make_shared<Inner_identifier<T>>(&object)} {} Identified& operator=(const Identified& copy) { object = copy.object; return *this; } //Note: no need to reassign the pointer, already points to current instance Identified(Identified&& move) noexcept : object{std::move(move.object)}, inner_identifier{std::move(move.inner_identifier)} { inner_identifier->identified = &object; } Identified& operator=(Identified&& move) noexcept { object = std::move(move.object); inner_identifier = std::move(move.inner_identifier); inner_identifier->identified = &object; return *this; } ~Identified() { if (inner_identifier) { inner_identifier->identified = nullptr; } } const T& operator* () const { return *get(); } T& operator* () { return *get(); } const T* operator->() const { return get(); } T* operator->() { return get(); } const T* get() const { #ifndef NDEBUG if (!inner_identifier || inner_identifier->identified == nullptr)
{ throw std::runtime_error{"Attempting to retrieve object from an identifier whose identified object has already been destroyed."}; } #endif return &object; } T* get() { #ifndef NDEBUG if (!inner_identifier || inner_identifier->identified == nullptr) { throw std::runtime_error{"Attempting to retrieve object from an identifier whose identified object has already been destroyed."}; } #endif return &object; } T object; private: std::shared_ptr<Inner_identifier<T>> inner_identifier; }; On top of criticism, I'd like some advice on naming. If I were to call the Identifier "follow_ptr", "self_updating_ptr", or "stalking_ptr", I've no idea what to call the other two classes. Aside from the first capital letter of the classes, does the interface feel "standard" enough? Here is a usage example; compile in debug mode for the exceptions: #include <stdexcept> #include <iostream> #include <vector> #include <algorithm> struct Base { int tmp; bool enabled = true; bool alive = true; Base(int tmp) : tmp(tmp) {} virtual volatile void f() { std::cout << "Base::f" << tmp << std::endl; }; void g() { std::cout << "Base::g" << tmp << std::endl; }; }; struct TmpA : public Base { TmpA(int tmp) : Base(tmp) {} virtual volatile void f() override { std::cout << "TmpA::f" << tmp << std::endl; }; void g() { std::cout << "TmpA::g" << tmp << std::endl;/**/ }; }; int main() { //Create empty identifiers Identifier<TmpA> idn; Identifier<TmpA> id1; Identifier<TmpA> id5; std::vector<Identified<TmpA>> vec; if (true) { //Create some data and assign it to identifiers Identified<TmpA> identified_a1{1}; Identified<TmpA> identified_will_die{0}; idn = identified_will_die; id1 = identified_a1; id5 = vec.emplace_back(5); //Move some identified objects around, this also causes the vector to grow, moving the object Identified by id5.
vec.emplace_back(std::move(identified_a1)); } std::cout << " _______________________________________________ " << std::endl; std::cout << "vec[0]: " << " "; try { vec[0]->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "vec[1]: " << " "; try { vec[1]->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "id1: " << " "; try { id1->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "id5: " << " "; try { id5->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "null: " << " "; try { idn->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } //Move some identified objects around std::partition(vec.begin(), vec.end(), [](Identified<TmpA>& idobj) { return idobj->tmp > 2; }); std::cout << " _______________________________________________ " << std::endl; std::cout << "vec[0]: " << " "; try { vec[0]->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "vec[1]: " << " "; try { vec[1]->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "id1: " << " "; try { id1->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "id5: " << " "; try { id5->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } std::cout << "null: " << " "; try { idn->f(); } catch (std::exception& e) { std::cout << e.what() << std::endl; } } Answer: Keeping objects as sequential as possible After reading the comments, it seems the most important use case is for tracking moves of objects in containers, but we want to keep those objects sequential, and be as cache-friendly as possible. It is also likely that you don't want to track all the objects in a container, but just a few. In that case, your implementation has some drawbacks. The main one is that you store a std::shared_ptr along with every object, so they are no longer sequential.
Consider that you had a: std::vector<T> vec; Then in memory you have: T0 T1 T2 T3 ... But now you want to track some of the Ts, then you'd write: std::vector<Identified<T>> vec; Then in memory you would have: T0 std::shared_ptr<Inner_identifier<T>> T1 std::shared_ptr<...> T2 ... Can we do better? Ideally we want to get the T's packed back-to-back like in the original vector. We can get that if we move the tracking to a separate, global registry: template<typename T> std::unordered_map<T *, std::shared_ptr<T *>> registry; Now when you create an instance of Identified<T>, you want it to put the address of the object that it constructed into that map. When the object is moved, you have to update the map and the inner identifier accordingly. However, if no one is tracking a given object, it doesn't even have to be stored in the registry, so we can delay adding an object to the registry until someone wants to create an Identifier<T> from it. Here is an example of what it could look like: template<typename T> class Identifier { std::shared_ptr<T *> object; public: Identifier(Identified<T> &identified) { // Check if this object is already in the registry if (auto it = registry<T>.find(&identified.object); it != registry<T>.end()) { // Yes, we also want a reference to it object = it->second; } else { // No, make a new entry in the registry object = std::make_shared<T *>(&identified.object); registry<T>[&identified.object] = object; } } T &operator*() { if (!*object) throw ...; return **object; } ... }; template<typename T> class Identified { T object; public: // Constructor just constructs the object template <typename ...Args> Identified(Args&&...
args): object{std::forward<Args>(args)...} {} // Move constructor has to update the registry Identified(Identified &&other) { // Check if this object is already in the registry if (auto it = registry<T>.find(&other.object); it != registry<T>.end()) { // Yes, update the stored address *(it->second) = &object; // And also update the key auto nh = registry<T>.extract(it); nh.key() = &object; registry<T>.insert(std::move(nh)); } // Now move the actual contents of the object object = std::move(other.object); } ... }; Naming things Consider renaming the classes to better convey their purpose: Identified -> Trackable Identifier -> Tracker I don't think there is a need for an Inner_identifier if you have the registry.
{ "domain": "codereview.stackexchange", "id": 41032, "tags": "c++, memory-management, c++17, pointers" }
How long before two sidereal months start on the same lunar phase?
Question: I was reading about the difference between the sidereal and synodic month when I started to wonder how many sidereal months need to pass before you get two that start on the same part of the synodic month or lunar phase. I've tried looking on Wikipedia and googling around a bit and haven't turned anything up, so I'm hoping someone here can help me. Thanks in advance! Answer: The synodic and sidereal periods are not in an exact ratio to each other, so there isn't an exact answer. After 12 synodic months (354.3 days), about 13 sidereal months have passed (355.1 days, only one day out). After 99 synodic months (2923.6 days), about 107 sidereal months have passed (2923.5 days, one tenth of a day out). After 235 synodic months (6939.785 days), about 254 sidereal months have passed (6939.788 days, about 5 minutes out). So if you mean "in roughly the same phase": after 13 sidereal months, the synodic month has gone through one cycle and is roughly back to where it started the previous year. And if you mean almost exactly the same: if you wait for 254 sidereal months, the synodic month will be back in line almost perfectly.
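The near-coincidences in the answer are easy to check numerically; this is my own check, with standard mean month lengths that differ from the answer's figures in the last decimals:

```python
# Mean month lengths in days (standard values; the answer's figures differ
# slightly in the last decimals, so the mismatches below differ a little too).
synodic = 29.530589    # new moon to new moon
sidereal = 27.321662   # fixed star to fixed star

for n_syn in (12, 99, 235):
    n_sid = round(n_syn * synodic / sidereal)
    mismatch = abs(n_syn * synodic - n_sid * sidereal)
    print(f"{n_syn} synodic ~ {n_sid} sidereal months, off by {mismatch:.3f} days")
```

The 235-synodic / 254-sidereal coincidence is the well-known 19-year Metonic cycle.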
{ "domain": "astronomy.stackexchange", "id": 6746, "tags": "the-moon, moon-phases, sidereal-period" }
RNA/DNA transcriber
Question: I've been going through some of the exercises over on exercism and this is one of my solutions: a basic RNA/DNA transcriber. I was happy enough at first but now, looking at it again, the solution looks very repetitive to me. The two methods below are needed because there are tests to ensure we've implemented them. It's just the code inside the methods I'm trying to refactor. def self.of_dna(dna) if dna.include? 'U' raise ArgumentError.new('Incorrect nucleotide detected. Please enter a DNA sequence.') else return dna.gsub(/[GCTA]/, 'G' => 'C', 'C' => 'G', 'T' => 'A', 'A' => 'U') end end def self.of_rna(rna) if rna.include? 'T' raise ArgumentError.new('Incorrect nucleotide detected. Please enter an RNA sequence') else return rna.gsub(/[GCAU]/, 'G' => 'C', 'C' => 'G', 'A' => 'T', 'U' => 'A') end end As you can see, in one method I'm performing the following substitutions: 'G' for 'C' 'C' for 'G' 'A' for 'T' 'U' for 'A' In the other method it's the reverse. Can anyone point me in the right direction towards simplifying this and making it more concise? Answer: To make it more concise and readable: you can use tr instead of regex as @200_success pointed out use Ruby's trailing if construct to short-circuit input errors omit the return which is unnecessary omit the message about entering a DNA sequence. User input is not the responsibility of these methods. The message should be closer to where the input is handled. Here's what it condenses to: def self.of_dna(dna) raise ArgumentError, 'Incorrect nucleotide' if dna.include? 'U' dna.tr('GCTA', 'CGAU') end
{ "domain": "codereview.stackexchange", "id": 13562, "tags": "ruby, regex, bioinformatics" }
Python hexdump generator
Question: I wrote the following hexdump generator function. How can I improve it? FMT = '{} {} |{}|' def hexdump_gen(byte_string, _len=16, n=0, sep='-'): while byte_string[n:]: col0, col1, col2 = format(n, '08x'), [], '' for i in bytearray(byte_string[n:n + _len]): col1 += [format(i, '02x')] col2 += chr(i) if 31 < i < 127 else '.' col1 += [' '] * (_len - len(col1)) col1.insert(_len // 2, sep) yield FMT.format(col0, ' '.join(col1), col2) n += _len Example: In[15]: byte_string = b'W\x9a9\x81\xc2\xb5\xb9\xce\x02\x979\xb5\x19\xa0' \ ...: b'\xb9\xca\x02\x979\xb5\x19\xa0\xb9\xca\x02\x979' \ ...: b'\xb5\x19\xa0\xb9\xca\x8c\x969\xfb\x89\x8e\xb9' \ ...: b'\nj\xb19\x81\x18\x84\xb9\x95j\xb19\x81\x18\x84' \ ...: b'\xb9\x95j\xb19\x81\x18\x84\xb9\x95j\xb19\x81\x18' \ ...: b'\x84\xb9\x95j\xb19\x81\x18\x84\xb9\x95' ...: In[16]: from hexdump import hexdump_gen In[17]: for i in hexdump_gen(byte_string, n=32, sep=''): ...: print(i) ...: 00000020 8c 96 39 fb 89 8e b9 0a 6a b1 39 81 18 84 b9 95 |..9.....j.9.....| 00000030 6a b1 39 81 18 84 b9 95 6a b1 39 81 18 84 b9 95 |j.9.....j.9.....| 00000040 6a b1 39 81 18 84 b9 95 6a b1 39 81 18 84 b9 95 |j.9.....j.9.....| Tested in CPython 3.6 on Windows 10. Answer: I think your hexdump implementation looks pretty good. I have no immediate comments on the implementation. I will however comment on the implied requirements. Hex Dumper Definition Most hex dumpers that I am familiar with dump hex as a modulo of the stride length. The example you show implies that, but that is because your example uses n=32, where 32 is an even modulus of the stride length (16). If you pass in different stride lengths, or pass in an n that is not an even modulus of the stride, the output doesn't (to my eye) look quite as nice. So I suggest you consider adding another parameter (let's call it base_addr) which is the address of the beginning of the byte array. 
And, then also consider adding fill at the beginning of the dump to allow it to align the dump with an even modulus of the stride length. Such that: hexdump_gen(byte_string, base_addr=1, n=1, sep='') Would produce: 00000000 9a 39 81 c2 b5 b9 ce 02 97 39 b5 19 a0 b9 | .9.......9....| 00000010 ca 02 97 39 b5 19 a0 b9 ca 02 97 39 b5 19 a0 b9 |...9.......9....| 00000020 ca 8c 96 39 fb 89 8e b9 0a 6a b1 39 81 18 84 b9 |...9.....j.9....| 00000030 95 6a b1 39 81 18 84 b9 95 6a b1 39 81 18 84 b9 |.j.9.....j.9....| 00000040 95 6a b1 39 81 18 84 b9 95 6a b1 39 81 18 84 b9 |.j.9.....j.9....| 00000050 95 |. | One way that could be done: def hexdump_gen(byte_string, _len=16, base_addr=0, n=0, sep='-'): not_shown = [' '] leader = (base_addr + n) % _len next_n = n + _len - leader while byte_string[n:]: col0 = format(n + base_addr - leader, '08x') col1 = not_shown * leader col2 = ' ' * leader leader = 0 for i in bytearray(byte_string[n:next_n]): col1 += [format(i, '02x')] col2 += chr(i) if 31 < i < 127 else '.' trailer = _len - len(col1) if trailer: col1 += not_shown * trailer col2 += ' ' * trailer col1.insert(_len // 2, sep) yield FMT.format(col0, ' '.join(col1), col2) n = next_n next_n += _len Symmetric parameters The n parameter is an offset into the bytearray, which specifies where in the bytearray to start the dump. But there is no equivalent end address. So currently, the dumper always goes to the end of the byte array. From a symmetry perspective it would seem a good idea to also provide a terminal condition.
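Before refactoring either version, it helps to pin the current behavior down with a doctest-style check against a known input; this is a condensed, behavior-identical copy of the generator under review:

```python
FMT = '{} {} |{}|'

def hexdump_gen(byte_string, _len=16, n=0, sep='-'):
    # Condensed copy of the original generator from the question.
    while byte_string[n:]:
        col0, col1, col2 = format(n, '08x'), [], ''
        for i in bytearray(byte_string[n:n + _len]):
            col1 += [format(i, '02x')]
            col2 += chr(i) if 31 < i < 127 else '.'
        col1 += [' '] * (_len - len(col1))
        col1.insert(_len // 2, sep)
        yield FMT.format(col0, ' '.join(col1), col2)
        n += _len

line = next(hexdump_gen(b'ABCDEFGHIJKLMNOP'))
print(line)
# 00000000 41 42 43 44 45 46 47 48 - 49 4a 4b 4c 4d 4e 4f 50 |ABCDEFGHIJKLMNOP|
```

A handful of such checks makes it safe to experiment with the base_addr change suggested above without silently altering the existing output format.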
{ "domain": "codereview.stackexchange", "id": 25374, "tags": "python, python-3.x" }
Why can't un-journaled filesystems be resized?
Question: I have yet to find an un-journaled filesystem that can be resized without data loss or intermediate conversions, and I'm wondering why that is. Filesystems like FAT, for example, leave free space at the end of the block device; it seems pretty simple to me to just add more free space onto the end... Edit: It seems that some filesystems, created without a journal, can still be resized. I created an EXT4 filesystem with mke2fs -t ext4 -O ^has_journal /dev/loop0p1, grew the partition, and successfully resized it. I also created an EXT2 filesystem (which to my knowledge doesn't have a journal) and successfully resized it. My current theory is that it's just a big coincidence that most filesystems without a journal can't be safely resized and most with a journal can. Answer: It's just historical. Old file systems are not journaled, and they cannot be resized. At some point in time journaling was added, and at some point in time resizing was added. So new file systems have both features. You obviously don't need the ability to resize a disk to implement journaling, but there is also no reason why journaling would be needed for resizing a hard drive. It would be good if the file system was designed in such a way that resizing can be done with a minimal amount of moving files, and in such a way that the disk is a valid (old size) disk for as long as possible, so that only one final change is needed to change it from old size to new size.
{ "domain": "cs.stackexchange", "id": 15678, "tags": "filesystems" }
c# make synchronization function waitable and cancelable
Question: I need to call some synchronous functions, and I need them to run in the background and be cancelable. So I wrote this: private static async Task WaitSyncFunction(Action syncFunction, int timeoutMilliseconds, CancellationToken token) { var syncFunctionTask = Task.Run(syncFunction); var timeoutTask = Task.Delay(timeoutMilliseconds, token); await Task.WhenAny(timeoutTask, syncFunctionTask).ContinueWith(task => { if (timeoutTask.IsCanceled) throw new TaskCanceledException(); if (timeoutTask.IsCompletedSuccessfully) throw new TimeoutException(); if (syncFunctionTask.IsFaulted) throw syncFunctionTask.Exception.InnerException; }); } Using it: //...async method of button click event handler in window cancelSource = new CancellationTokenSource(); try { await WaitSyncFunction(() => MySyncFunction(), 5000, cancelSource.Token); MessageBox.Show("Success!"); } catch (TaskCanceledException) { MessageBox.Show("Canceled!"); } catch (Exception ex) { MessageBox.Show("Error! "+ ex.Message); } //... What risks may exist in this code, and is it an anti-pattern?
============Edit============ Some improvements: //Custom exception for process task after timeout public class TaskTimeoutException : TimeoutException { public Task task { get; } public TaskTimeoutException(Task task) => this.task = task; } private static async Task WaitSyncFunction(Action syncFunction, int timeoutMilliseconds, CancellationToken token) { var syncFunctionTask = Task.Run(syncFunction); var timeoutTask = Task.Delay(timeoutMilliseconds, token); await Task.WhenAny(timeoutTask, syncFunctionTask);//.ContinueWith(task => //{ //Unnecessary ContinueWith //if (timeoutTask.IsCanceled) throw new TaskCanceledException(); //return the function task so that it can do something else after it ending if (timeoutTask.IsCanceled) throw new TaskCanceledException(syncFunctionTask); //if (timeoutTask.IsCompletedSuccessfully) throw new TimeoutException(); //return the function task so that it can do something else after it ending if (timeoutTask.IsCompletedSuccessfully) throw new TaskTimeoutException(syncFunctionTask); if (syncFunctionTask.IsFaulted) throw syncFunctionTask.Exception.InnerException; //}); } Answer: Whenever we have an async method and we want to differentiate Canceled from Timeout then we usually do the following: We are anticipating OperationCanceledException (the base class of TaskCanceledException) We examine the IsCancellationRequested property of the CancellationToken Let me show you a simple example: Timeout private static readonly TimeSpan OperationDuration = TimeSpan.FromSeconds(3); // timeoutSource will be triggered private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(2); private static readonly TimeSpan CancelAfter = TimeSpan.FromSeconds(10); static async Task Main(string[] args) { var userCancellationSource = new CancellationTokenSource(CancelAfter); try { await TestAsync(userCancellationSource.Token); } catch (OperationCanceledException) { Console.WriteLine(userCancellationSource.IsCancellationRequested ? 
"Canceled" : "Timed out"); } } public static async Task TestAsync(CancellationToken token = default) { var timeoutSource = new CancellationTokenSource(Timeout); var timeoutOrCancellationSource = CancellationTokenSource.CreateLinkedTokenSource(timeoutSource.Token, token); await Task.Delay(OperationDuration, timeoutOrCancellationSource.Token); } Canceled private static readonly TimeSpan OperationDuration = TimeSpan.FromSeconds(3); private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(2); // userCancellationSource will be triggered private static readonly TimeSpan CancelAfter = TimeSpan.FromSeconds(1); static async Task Main(string[] args) { var userCancellationSource = new CancellationTokenSource(CancelAfter); try { await TestAsync(userCancellationSource.Token); } catch (OperationCanceledException) { Console.WriteLine(userCancellationSource.IsCancellationRequested ? "Canceled" : "Timed out"); } } public static async Task TestAsync(CancellationToken token = default) { var timeoutSource = new CancellationTokenSource(Timeout); var timeoutOrCancellationSource = CancellationTokenSource.CreateLinkedTokenSource(timeoutSource.Token, token); await Task.Delay(OperationDuration, timeoutOrCancellationSource.Token); } I think the same pattern should be followed by your wrapper. 
To have the same behavior I have modified your WaitSyncFunction method in the following way private static async Task WaitSyncFunction(Action syncFunction, int timeoutMilliseconds, CancellationToken token) { var syncFunctionTask = Task.Run(syncFunction); var timeoutTask = Task.Delay(timeoutMilliseconds, token); await Task.WhenAny(timeoutTask, syncFunctionTask); if (timeoutTask.IsCanceled) token.ThrowIfCancellationRequested(); //changed if (timeoutTask.IsCompletedSuccessfully) throw new OperationCanceledException(token); //changed if (syncFunctionTask.IsFaulted) throw syncFunctionTask.Exception.InnerException; } Timeout private static readonly TimeSpan OperationDuration = TimeSpan.FromSeconds(3); // timeoutSource will be triggered private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(2); private static readonly TimeSpan CancelAfter = TimeSpan.FromSeconds(10); static async Task Main(string[] args) { var userCancellationSource = new CancellationTokenSource(CancelAfter); try { await WaitSyncFunction(() => Thread.Sleep((int) OperationDuration.TotalMilliseconds), (int) Timeout.TotalMilliseconds, userCancellationSource.Token); } catch (OperationCanceledException) { Console.WriteLine(userCancellationSource.IsCancellationRequested ? 
"Canceled" : "Timed out"); } } private static async Task WaitSyncFunction(Action syncFunction, int timeoutMilliseconds, CancellationToken token) { var syncFunctionTask = Task.Run(syncFunction); var timeoutTask = Task.Delay(timeoutMilliseconds, token); await Task.WhenAny(timeoutTask, syncFunctionTask); if (timeoutTask.IsCanceled) token.ThrowIfCancellationRequested(); if (timeoutTask.IsCompletedSuccessfully) throw new OperationCanceledException(token); if (syncFunctionTask.IsFaulted) throw syncFunctionTask.Exception.InnerException; } Canceled private static readonly TimeSpan OperationDuration = TimeSpan.FromSeconds(3); private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(2); // userCancellationSource will be triggered private static readonly TimeSpan CancelAfter = TimeSpan.FromSeconds(1); static async Task Main(string[] args) { var userCancellationSource = new CancellationTokenSource(CancelAfter); try { await WaitSyncFunction(() => Thread.Sleep((int) OperationDuration.TotalMilliseconds), (int) Timeout.TotalMilliseconds, userCancellationSource.Token); } catch (OperationCanceledException) { Console.WriteLine(userCancellationSource.IsCancellationRequested ? "Canceled" : "Timed out"); } } private static async Task WaitSyncFunction(Action syncFunction, int timeoutMilliseconds, CancellationToken token) { var syncFunctionTask = Task.Run(syncFunction); var timeoutTask = Task.Delay(timeoutMilliseconds, token); await Task.WhenAny(timeoutTask, syncFunctionTask); if (timeoutTask.IsCanceled) token.ThrowIfCancellationRequested(); if (timeoutTask.IsCompletedSuccessfully) throw new OperationCanceledException(token); if (syncFunctionTask.IsFaulted) throw syncFunctionTask.Exception.InnerException; } Please bear in mind that with this implementation the syncFunction is not aborted in case of timeout. 
To support cooperative cancellation you have to pass the CancellationToken to the syncFunction and examine its IsCancellationRequested property periodically (at each checkpoint / milestone). Check out these for more details: Cancellation in Managed Threads CodeReview topic: C# asynchronous tasks training (turn-based simulation)
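For comparison, Python's asyncio draws the same timeout-versus-cancellation distinction, surfacing the two outcomes as different exception types; this sketch is an analogy, not a translation of the C# above:

```python
import asyncio

async def slow_work():
    await asyncio.sleep(10)  # stands in for the long-running function

async def timed_out_case():
    try:
        await asyncio.wait_for(slow_work(), timeout=0.01)
    except asyncio.TimeoutError:  # the deadline elapsed
        return "Timed out"

async def canceled_case():
    task = asyncio.ensure_future(slow_work())
    await asyncio.sleep(0)  # let the task start
    task.cancel()           # user-initiated cancellation
    try:
        await task
    except asyncio.CancelledError:
        return "Canceled"

print(asyncio.run(timed_out_case()))  # Timed out
print(asyncio.run(canceled_case()))   # Canceled
```

As in the C# answer, the caller distinguishes the cases by what it catches rather than by inspecting task state, which keeps the wrapper's contract simple.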
{ "domain": "codereview.stackexchange", "id": 40821, "tags": "c#, async-await" }
DFA to regular expression conversion
Question: I was looking at the question How to convert finite automata to regular expressions? to convert a DFA to a regex. The question I was trying to solve is: I have got the following equations: $Q_0=aQ_0 \cup bQ_1 \cup \epsilon$ $Q_1=aQ_1 \cup bQ_1 \cup \epsilon$ When solved, we will get $Q_0=a^*b(a \cup b)^* \cup\ \epsilon$ But my doubt is that in the DFA the starting state is also a final state, so even if we don't give any $b$, the string will be accepted as long as we give some $a$. But in the regex we have $b$, instead of $b^*$. Why is it so? Is it because we have that regex $\cup$ $\epsilon$? Answer: I'll be using the solution to $Q = \alpha Q \cup \beta$ given by $Q = \alpha^* \beta$, essentially as you would go about solving a system of linear equations by hand: $$ \begin{align*} Q_0 &= a Q_0 \cup b Q_1 \cup \epsilon \\ Q_1 &= (a \cup b) Q_1 \cup \epsilon \end{align*} $$ From the first equation you have $Q_0 = a^* (b Q_1 \cup \epsilon)$, the second one reduces to $Q_1 = (a \cup b)^* \epsilon = (a \cup b)^*$. Replacing $Q_1$ in $Q_0$ gives: $$ Q_0 = a^* (b (a \cup b)^* \cup \epsilon) = a^* b (a \cup b)^* \cup a^* $$
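A quick way to gain confidence in such a derivation is a brute-force comparison of the DFA against the solved expression over all short strings (a sketch using Python's re module, where | stands for $\cup$):

```python
import re
from itertools import product

def dfa_accepts(s):
    # Q0 loops on 'a' and moves to Q1 on 'b'; Q1 loops on both symbols.
    state = 0
    for ch in s:
        if state == 0 and ch == 'b':
            state = 1
    return True  # both Q0 and Q1 are accepting (the epsilon terms)

solved = re.compile(r'a*b(a|b)*|a*')  # the solved a*b(a ∪ b)* ∪ a*
agree = all(
    dfa_accepts(s) == bool(solved.fullmatch(s))
    for n in range(7)
    for s in map(''.join, product('ab', repeat=n))
)
print(agree)  # True
```

In particular strings of only $a$'s fall under the $a^*$ branch, which is exactly the term the question was missing.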
{ "domain": "cs.stackexchange", "id": 21912, "tags": "regular-languages, automata, finite-automata, regular-expressions" }
Java code for generating 16bit DMX values using sine and cosine
Question: This is my try, based on good inputs from different places. I ended up saving the DMX-values in two arrays (one for sine and one for cosine) as I guess that will make stuff easier later on when I want to add offset (and so on) to the lamps receiving the dmx-values. This is how I declare the two arrays and how the code is saving dmx-values in the two arrays: int[] sineValues = new int[(360 * 1000) + 1]; int[] coSineValues = new int[(360 * 1000) + 1]; public void generateSineValuesInArray() { int y = 0; for (double x = 0; x <= 360; x = x + 0.001) { double degrees = x; double radians = Math.toRadians(degrees); double sine = Math.sin(radians); sineValues[y] = (int) ((sine * 127) * 255) + (127 * 255); y++; } } public void generateCosineValuesInArray() { int y = 0; for (double x = 0; x <= 360; x = x + 0.001) { double degrees = x; double radians = Math.toRadians(degrees); double sine = Math.cos(radians); coSineValues[y] = (int) ((sine * 127) * 255) + (127 * 255); y++; } } Yes, there are many elements in the two arrays and very small (0.001) increments in the two loops, but that is what I found works best for getting a resolution that can drive the fine DMX channel, so I can achieve smooth movements even at very low speeds.
This is the code for getting and sending the actual dmx-values: public void doMovement() { valFromArrayPan = sineValues[counter]; valFromArrayTilt = coSineValues[counter]; coarsePan = (valFromArrayPan >> 8) & 255; finePan = valFromArrayPan & 255; coarseTilt = (valFromArrayTilt >> 8) & 255; fineTilt = valFromArrayTilt & 255; //Send dmx-values counter++; if (counter == sineValues.length) { counter = 1; } } This is the code and what I do for running the movement: SineMovement SineMovement = new SineMovement(); SineMovement.generateSineValuesInArray(); SineMovement.generateCosineValuesInArray(); speed = 20000000; //Slow speed ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor(); executorService.scheduleAtFixedRate(SineMovement::doMovement, 0, speed, TimeUnit.NANOSECONDS); I have tested the above with a "real" 16bit moving head and I think, overall, it is working as I want it to. Answer: Initialization First about your idea of storing the precalculated values for sine and cosine: this is OK, and done for many different applications. But the way you initialize those arrays seems a bit complicated: I would use just one loop for sine and cosine You should never compare a floating point number with an == or similar, as there is a chance that this never will be true due to rounding errors. In your case the <= 360 is not necessary at all and can be replaced by a < 360 without doubt because sin(0°) is the same as sin(360°) I would not use the angle as loop counter at all - you have a nice index in the array that you want to fill, so instead use the array index as loop counter and calculate your angle from it. This way you also don't have to fiddle with degrees to radian conversion. You can make the tables and their initialization static because the sine and cosine tables will never change, no matter how often you load that class.
And it's good practice to minimize visibility, and you never need to exchange the array, so set it private static final and use a static initializer. Naming conventions say that you should name static final fields in ALL_UPPER_SNAKE_CASE. As long as you do double calculations, use only double literals (= add a .0 to numbers if you don't have any other decimal places) to help the compiler in optimizing the code. Summary: I would do the initialization like this: private static final int[] SIN_TABLE = new int[(360 * 1000) + 1]; private static final int[] COS_TABLE = new int[(360 * 1000) + 1]; private static final double INDEX_RAD_FACTOR = 2 * Math.PI / SIN_TABLE.length; private static final double STRETCH_SIN = 127.0 * 255.0; static { for (int x = 0; x < SIN_TABLE.length; x++) { double radians = x * INDEX_RAD_FACTOR; SIN_TABLE[x] = (int) (Math.sin(radians) * STRETCH_SIN + STRETCH_SIN); COS_TABLE[x] = (int) (Math.cos(radians) * STRETCH_SIN + STRETCH_SIN); } } Ongoing movement repeated calculation You should create a private method for the splitting of the high and low byte as you use it multiple times. private int[] splitInt(int toSplit) { int[] result = new int[2]; result[0] = toSplit & 255; result[1] = (toSplit >> 8) & 255; return result; } Instead of an int you could also use a short, but don't use a byte because all of those datatypes are signed and byte goes from -128 to +127. timing relevant code Also as you are working with stuff that needs exact timing, I would not use an if in that method - the case where the condition is true and where it's not could take a different amount of time and therefore create a weird pause if your application requires short execution times (when the "speed" is not slow, but fast). Also you should start counter at 0 and reset it to 0 - arrays are 0-based in Java.
private int counter = 0; public void doMovement() { int[] overallPan = splitInt(SIN_TABLE[counter]); int[] overallTilt = splitInt(COS_TABLE[counter]); int coarsePan = overallPan[1]; int finePan = overallPan[0]; int coarseTilt = overallTilt[1]; int fineTilt = overallTilt[0]; // TODO: Send dmx-values counter = (counter + 1) % SIN_TABLE.length; } Starting The starting of your timed code looks quite good; with the above changes to the initialization you no longer have to call the initialization of sine and cosine tables manually. And I would not use nanoseconds as time unit; probably your code wouldn't be fast enough to be called that often, and also with 360000 possible steps in the sine curve, even with a microsecond resolution you would only need one third of a second to have set all possible values. Probably even that is too fast to be visible. I would not use just millisecond resolution, because then all values would need 6 minutes to be shown, and that's probably too slow. Again, if speed is a long variable it deserves to be initialized with a long literal to help the compiler in optimization; this is denoted by an L at the end of the number. (And I divided the value you gave by 1000 because I reduced the resolution to microseconds.) SineMovement SineMovement = new SineMovement(); long speed = 20000L; //Slow speed ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor(); executorService.scheduleAtFixedRate(SineMovement::doMovement, 0L, speed, TimeUnit.MICROSECONDS);
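The coarse/fine packing in doMovement can be sanity-checked independently of the DMX hardware: shifting out the high byte and masking the low byte must recombine losslessly to the original 16-bit value (a quick sketch in Python):

```python
def split16(value):
    # High byte feeds the coarse DMX channel, low byte the fine channel.
    coarse = (value >> 8) & 255
    fine = value & 255
    return coarse, fine

# 127 * 255 is the mid-point offset used by the sine/cosine tables above.
for value in (0, 127 * 255, 2 * 127 * 255, 65535):
    coarse, fine = split16(value)
    assert coarse * 256 + fine == value  # lossless round trip
    print(f"{value:5d} -> coarse {coarse:3d}, fine {fine:3d}")
```

The round-trip property coarse * 256 + fine == value is exactly what guarantees the fixture sees the same 16-bit position the table stored.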
{ "domain": "codereview.stackexchange", "id": 44490, "tags": "java" }
Implementation of Radix Sort algorithm for sorting integers in c++
Question: I wrote the following code for Radix Sort algorithm for number sorting. Any review is highly appreciated. You can also view and run/test the code here Radix Sort #include<iostream> using namespace std; int getNumberAtPosition(int num,int position){ return (num/position)%10; } void radixSort(int array[],int length){ int sizeOfEachBucket = length; int numberOfBuckets = 10; int buckets[10][sizeOfEachBucket]; int large = 0; int maxPasses = 0; //finding largest number from array for(int i=0; i<length; i++){ if(array[i]>large){ large = array[i]; } } //finding the number of passes while(large != 0){ maxPasses++; large = large/10; } cout<<"Max passes ="<<maxPasses<<endl; int position = 1; int bucketIndex = 0; int newListIndex = 0; int arrayLengths[10]; for(int i=0; i<maxPasses; i++){ //cout<<"i ="<<i<<endl; for(int k=0; k<=9; k++){ //cout<<"k ="<<k<<endl; bucketIndex = 0; for(int j=0; j<length; j++){ if(k==getNumberAtPosition(array[j],position)){ buckets[k][bucketIndex] = array[j]; bucketIndex++; } } arrayLengths[k] = bucketIndex; } position = position*10; int newArrayIndex = 0; for(int k=0; k<=9; k++){ //cout<<"k ="<<k<<endl; bucketIndex = 0; for(int x=0; x<arrayLengths[k];x++){ array[newArrayIndex] = buckets[k][x]; newArrayIndex++; } } } for(int i=0; i<length; i++){ cout<<array[i]<<"\t"; } } Answer: First things first, don't use using namespace std in header files, where your radix sort implementation should be. Besides, convenience isn't a valid excuse when the only thing you import from std is cout: just type std::cout and you're done. One liner functions are often useless and noisy: you need to refer to the implementation to know exactly what they do, so they don't make the code more readable, and you have to come up with a name, which is not always easy. 
In the case of getNumberAtPosition, for instance, it's impossible to tell from the name if the position is meant for the most or least significant digit, and both are equally likely in a radix sort algorithm. Everything that isn't meant to change should be const. The only place where it isn't idiomatic is in function signatures, where you often don't tag built-in type arguments passed by value as const. Also, don't alias variables: length is only used to define sizeOfEachBucket, that's two names to track down instead of one. Use the standard library: there may not be many things inside compared to other languages, but what's inside is very well implemented. It will also make your code more concise and expressive. For instance, the largest element in a [first, last) sequence is the result of std::max_element(first, last) (std::max_element resides in <algorithm>). Using standard containers is also a statement: a constant-size array will be a std::array, whereas a variable-size one will be a std::vector. Avoid naked loops: by that I mean loops like for (int i = 0; i < z; ++i). The nested ones are particularly difficult to read, with all those meaningless one-letter variable names. Use either range-based for loops when you iterate over the whole container (e.g. for (auto item : vector)) or named algorithms when your loop has a standardly implemented purpose (std::for_each, std::copy, etc.). When implementing your own algorithms, try to use an STL-like interface: iterators are preferable because they abstract the concrete container type. Your algorithm won't work on a std::list although it very well could (radix sort doesn't rely on random access like quick sort does).
Here's an example of better-looking (though not thoroughly tested) code: #include <algorithm> #include <vector> #include <array> #include <cmath> template <typename Iterator> void radix_sort(Iterator first, Iterator last) { const int max_divisor = std::pow(10, std::log10(*std::max_element(first, last))); for (int divisor = 1; divisor < max_divisor; divisor *= 10) { std::array<std::vector<int>, 10> buckets; std::for_each(first, last, [&buckets, divisor](auto i) { buckets[(i / divisor) % 10].push_back(i); }); auto out = first; for (const auto& bucket : buckets) { out = std::copy(bucket.begin(), bucket.end(), out); } } } EDIT: since the algorithm exposed in the question relies on decimal digits, I also formulated a base-10 algorithm. But thinking back about this, I feel like my answer isn't complete if I don't point out that a base-two approach is more optimal (and more generally used as far as I know). Why is that? Because binary arithmetic is easier for a computer (not a very strong reason since decimal arithmetic is often optimized into binary arithmetic by the compiler); because - and that's a much stronger reason - you can then rely on a very well-known algorithm to distribute your numbers between buckets, an algorithm that moreover does it in place, thus without any memory allocation; by the way, that algorithm is std::stable_partition. And here is a sample: template <typename Iterator> void binary_radix_sort(Iterator first, Iterator last) { using integer_type = std::decay_t<decltype(*first)>; bool finished = false; for (integer_type mask = 1; !finished; mask <<= 1) { finished = true; std::stable_partition(first, last, [mask, &finished](auto i) { if (mask < i) finished = false; return !(mask & i); }); } }
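For readers more comfortable outside C++, the same least-significant-digit bucket idea fits in a few lines of Python (illustrative only, not part of the review):

```python
def radix_sort(nums):
    # LSD radix sort for non-negative integers, one decimal digit per pass.
    divisor = 1
    while any(n // divisor for n in nums):
        buckets = [[] for _ in range(10)]
        for n in nums:
            buckets[(n // divisor) % 10].append(n)
        nums = [n for bucket in buckets for n in bucket]
        divisor *= 10
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

The stability of the per-digit bucketing (elements keep their relative order within a bucket) is what makes repeated passes converge to a fully sorted list.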
{ "domain": "codereview.stackexchange", "id": 34038, "tags": "c++, algorithm, sorting" }
message header doesn't build
Question: I know this question has been asked a number of times, and I've gone through and followed the instructions on a number of those answers (for example- here and here), as well as in the documentation for building a custom message. Yet still my message header is not building. This is the error I get: /home/erinline/Documents/testsim/src/m-explore/explore/src/explore.cpp:39:10: fatal error: msg/util.h: No such file or directory #include <msg/util.h> ^~~~~~~~~~~~ compilation terminated. m-explore/explore/CMakeFiles/explore.dir/build.make:86: recipe for target 'm-explore/explore/CMakeFiles/explore.dir/src/explore.cpp.o' failed make[2]: *** [m-explore/explore/CMakeFiles/explore.dir/src/explore.cpp.o] Error 1 CMakeFiles/Makefile2:23711: recipe for target 'm-explore/explore/CMakeFiles/explore.dir/all' failed make[1]: *** [m-explore/explore/CMakeFiles/explore.dir/all] Error 2 Makefile:140: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j4 -l4" failed This is my CMakeLists.txt: cmake_minimum_required(VERSION 2.8.3) project(explore_lite) ## Find catkin macros and libraries find_package(catkin REQUIRED COMPONENTS actionlib actionlib_msgs costmap_2d geometry_msgs map_msgs move_base_msgs nav_msgs roscpp std_msgs tf visualization_msgs message_generation ) add_message_files( FILES util.msg ) generate_messages( DEPENDENCIES std_msgs ) ################################### ## catkin specific configuration ## ################################### catkin_package( CATKIN_DEPENDS actionlib_msgs geometry_msgs map_msgs move_base_msgs nav_msgs std_msgs visualization_msgs message_runtime ) ########### ## Build ## ########### # c++11 support required include(CheckCXXCompilerFlag) check_cxx_compiler_flag("-std=c++11" COMPILER_SUPPORTS_CXX11) if(COMPILER_SUPPORTS_CXX11) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") else() message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. 
Please use a different C++ compiler.") endif() ## Specify additional locations of header files include_directories( ${catkin_INCLUDE_DIRS} include ) add_executable(explore src/costmap_client.cpp src/explore.cpp src/frontier_search.cpp ) #add_dependencies(explore ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS}) add_dependencies(explore explore_lite_generate_messages_cpp) target_link_libraries(explore ${catkin_LIBRARIES}) ############# ## Install ## ############# # install nodes install(TARGETS explore ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) # install roslaunch files install(DIRECTORY launch/ DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/launch ) ############# ## Testing ## ############# if(CATKIN_ENABLE_TESTING) find_package(roslaunch REQUIRED) # test all launch files roslaunch_add_file_check(launch) endif() My message is called util.msg, it is in a file called msg within the package file called explore. I must be missing something, since those previous answers seem to work for some folks. Any advice on where I am going wrong would be greatly appreciated! Originally posted by kitkatme on ROS Answers with karma: 47 on 2019-04-13 Post score: 0 Answer: Your CMakeLists.txt shows: project(explore_lite) [..] add_message_files( FILES util.msg ) The error output shows: /home/erinline/Documents/testsim/src/m-explore/explore/src/explore.cpp:39:10: fatal error: msg/util.h: No such file or directory #include <msg/util.h> You write: My message is called util.msg, it is in a file called msg within the package file called explore [..] 
There are a few discrepancies here: according to your build script, the package is actually called explore_lite, not explore the compiler output suggests it's in a directory called m-explore (or the package is actually called m-explore) As to why the include fails: message header #include lines follow the pattern: pkg_that_hosts_the_msg/MsgHeader.h. In your case this should probably be explore_lite/util.h, not msg/util.h. Two additional comments: it might be good to store messages in a separate package (it will make interfacing with your package much easier, as I don't have to build your node in order to use the messages) try to follow ROS naming guidelines: util as a name does not convey much semantics and message (file)names should start with an uppercase character Originally posted by gvdhoorn with karma: 86574 on 2019-04-14 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by kitkatme on 2019-04-14: thanks so much!! your suggestion to change the include line to explore_lite/util.h solved the issue. also thanks for the notes on guidelines! will update my files later.
{ "domain": "robotics.stackexchange", "id": 32867, "tags": "ros-melodic, catkin-make, custom-message" }
How do I evaluate the expectation value $\left \langle \hat{p}^{2} \right \rangle$ for a quantum harmonic oscillator?
Question: The probability function of a quantum SHO is: $$P(x)=Ae^{-\frac{x^{2}}{\sigma ^{2}}}$$ where $A$ is a factor required for normalisation. The operator is: $$\hat{p}^{2}=(-i\hbar\frac{\textrm{d}}{\textrm{d}x})^{2}=-\hbar^{2}\frac{\textrm{d}^{2}}{\textrm{d}x^{2}}$$ Therefore the expectation value is: $$\left \langle \hat{p}^{2} \right \rangle=\int_{-\infty}^{\infty}(-\hbar^{2}\frac{\textrm{d}^{2}}{\textrm{d}x^{2}}\times A^{2}e^{-\frac{x^{2}}{\sigma ^{2}}})\textrm{d}x$$ However I had difficulty evaluating this integral, and when I inserted it into WolframAlpha out of laziness I obtained an answer of 0, which should be false since otherwise it would not agree with the uncertainty principle inequality. Where have I gone wrong in the above process? WolframAlpha calculations (I removed the constants for simplicity): http://www.wolframalpha.com/input/?i=second+order+derivative+of+e%5E(-x%5E2%2Fsigma%5E2) http://www.wolframalpha.com/input/?i=integral+of+(e%5E(-x%5E2%2Fsigma%5E2)(4x%5E2-2sigma%5E2))%2Fsigma%5E4+from+-infinity+to+infinity Answer: The problem is that you apply the derivative to the wrong expression: it goes on the right-hand wavefunction, not on the entire probability distribution. You should get something like: \begin{align} \langle\hat p^2\rangle & =-\hbar^{2}\int_{-\infty}^{\infty} \psi(x)\frac{d^2}{dx^2}\psi(x)\,dx\\\\ & = \int_{-\infty}^{\infty}(\frac{m\omega}{\hbar \pi})^{1/2}[-(m\omega x)^2+\hbar m\omega]\exp(-\frac{m\omega}{\hbar}x^2)dx \\\\ & = \frac1 2\hbar m \omega. \end{align} If you apply the derivative to $\psi(x)^*\psi(x)$ instead of $\psi(x)$, you bring down the wrong terms.
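The distinction can be checked numerically: taking $\psi = \sqrt{P}$ (so $\psi \propto e^{-x^2/2\sigma^2}$) and computing $-\hbar^2\int\psi\,\psi''\,dx$ gives $\hbar^2/2\sigma^2$, while putting the derivative on $P$ itself integrates to exactly zero, which is the 0 WolframAlpha reported. A pure-Python sketch with $\hbar = \sigma = 1$:

```python
import math

def psi(x):
    # Normalized ground-state wavefunction, the square root of P(x).
    return math.pi ** -0.25 * math.exp(-x * x / 2)

def psi_2nd(x):
    # Analytic second derivative: psi'' = (x^2 - 1) * psi for this Gaussian.
    return (x * x - 1) * psi(x)

dx = 1e-3
xs = [-10 + i * dx for i in range(20001)]

# Correct recipe: <p^2> = -hbar^2 * integral of psi * psi'' dx
p2 = -sum(psi(x) * psi_2nd(x) for x in xs) * dx
# Naive recipe from the question: integrate d^2/dx^2 of P(x) itself
naive = sum((4 * x * x - 2) * math.exp(-x * x) for x in xs) * dx

print(round(p2, 4), round(naive, 4))  # 0.5 0.0
```

The naive integral vanishes for any density whose first derivative dies off at infinity, since $\int P''\,dx = [P']_{-\infty}^{\infty} = 0$.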
{ "domain": "physics.stackexchange", "id": 33276, "tags": "quantum-mechanics, homework-and-exercises, harmonic-oscillator" }
Why are black holes depicted as disks and not spheres?
Question: Surely light gets 'pulled' in from all directions in 3D space, so the event horizon would not be a round disk, but rather a sphere of light - meaning black holes should actually be light holes? My only guess is it would be due to the same phenomenon that causes the shapes of our solar system, planetary rings and galaxies to generally be 2-dimensional rather than spherical. Answer: Static black holes are spheres, while rotating black holes have an oblate horizon, so they are deformed spheres. I suspect you are thinking of the accretion disk when you say "Why are black holes depicted as disks and not spheres?", and this is indeed the same reason that galaxies form disks.
{ "domain": "physics.stackexchange", "id": 41073, "tags": "black-holes, angular-momentum, orbital-motion, event-horizon" }
Casimir operators for Poincare algebra
Question: I have seen in various places the comment that the operator $P_\mu P^\mu$ is a Casimir operator of the Lorentz algebra and thus it satisfies an on-shell condition like $P_\mu P^\mu=m^2$. Given the Poincare algebra \begin{aligned} i\left[M^{\mu \nu}, M^{\rho \sigma}\right] &=g^{\mu \sigma} M^{\nu \rho}+g^{\nu \rho} M^{\mu \sigma}-g^{\mu \rho} M^{\nu \sigma}-g^{\nu \sigma} M^{\mu \rho} \\ i\left[P^{\mu}, M^{\rho \sigma}\right] &=g^{\mu \rho} P^{\sigma}-g^{\mu \sigma} P^{\rho} \\ \left[P^{\mu}, P^{\nu}\right] &=0. \end{aligned} How does one derive its Casimir operators, especially the one $P_\mu P^\mu$? Can someone show the crucial steps? Also, does the method work for any other similar algebra? Moreover, if an operator, say, $A$ commutes with the generators $M^{\mu\nu}$, i.e., $[A,M^{\mu\nu}]=0$, can it be said that $A$ is a Casimir operator? Answer: Casimir operators commute with all generators. That's what you need to check. $P^\mu P_\mu$ does commute with $M$ and $P$. A fast way to say it is that (1) $[P^2,P_\mu] \propto [P_\nu,P_\mu] =0$, and (2) $P^2$ is a scalar and therefore it's annihilated by the rotation generators. But if you don't believe (2) you can just check $$ \begin{aligned} i[M_{\mu\nu},P^2] &= 2\,(g_{\rho\mu}P_\nu-g_{\rho\nu}P_\mu)P^\rho\\ &=2 P_\mu P_\nu - 2 P_\nu P_\mu \\&= 0\,. \end{aligned} $$ For other Casimirs such as $$ W^\mu W_\mu\,,\qquad W_\mu := \tfrac12 \varepsilon_{\mu\nu\rho\lambda} M^{\nu\rho}P^\lambda\,, $$ you can do the same. This is just a bit harder. Argument (2) still works because it's a scalar. Then by explicit computation $$ [P_\mu,W_\nu] = 0\,, $$ so also its square commutes with $P$.
{ "domain": "physics.stackexchange", "id": 65685, "tags": "special-relativity, lorentz-symmetry, lie-algebra, poincare-symmetry, invariants" }
Error launching Rplidar node
Question: Hello everyone. I am using an RPLidar A1M8 to generate obstacles using move_base and costmap_2d. The problem arises when I launch the rplidar node with the whole system .launch; you can see the include where I call the .launch file of rplidar: <include file="$(find rplidar_ros)/launch/rplidar.launch" /> The following error appears in the terminal when I launch it: [ERROR] [1585914589.956502460]: Error, operation time out. RESULT_OPERATION_TIMEOUT! [rplidarNode-20] process has died [pid 14017, exit code 255, cmd /home/sara/catkin_ws/devel/lib/rplidar_ros/rplidarNode __name:=rplidarNode __log:=/home/sara/.ros/log/35a653a0-75a1-11ea-83b0-701ce7079383/rplidarNode-20.log]. log file: /home/sara/.ros/log/35a653a0-75a1-11ea-83b0-701ce7079383/rplidarNode-20*.log I tried to read the .log file, but it doesn't appear in that directory. By the way, if I first launch the whole system in one terminal and then launch the rplidar node in another terminal, it works and I can see rplidar working and generating obstacles in rviz. What can I do to solve the problem and launch rplidar with the system together? Thanks in advance. Best regards. Alessandro EDIT: I solved the problem by not launching another node that uses the same USB port as the RPLidar, so it was my fault. Thank you. Originally posted by Alessandro Melino on ROS Answers with karma: 113 on 2020-04-03 Post score: 0 Original comments Comment by gvdhoorn on 2020-04-21: Please post your last edit as an answer instead. And then accept your own answer. This would convey the fact that your issue was resolved much better than closing a question. Comment by Alessandro Melino on 2020-04-21: Done. Thanks, I am still adapting to ROS Answers. Answer: I solved the problem by not launching another node that uses the same USB port as the RPLidar, so it was my fault. Thank you. Originally posted by Alessandro Melino with karma: 113 on 2020-04-21 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34688, "tags": "ros, roslaunch, rplidar, ros-kinetic" }
Failure when under both tensile and shear stress
Question: Suppose I have some element which is under both shear and tensile stress. I know the material properties for ultimate tensile stress and for ultimate shear stress. However, presumably the material will break at a lower total stress than when both the ultimate shear and tensile stress would be applied. I'm not sure about this. Is there some formula relating the ultimate shear stress and ultimate tensile stress to the ultimate combined stress? How do I know when the material will break in such a combined loading scenario? I'm looking for a simplified model, not the 100% accurate result. Answer: I believe you're looking for a 3D failure criterion. Pick the one most appropriate to your material type. For ductile materials, for example, the von Mises criterion is frequently used. This criterion assumes—quite accurately—that hydrostatic stress of essentially any magnitude cannot disrupt any uniform solid and that only deviations from the hydrostatic state can lead to failure. (This is why the criterion appears graphically as a cylinder of safety around $\sigma_x=\sigma_y=\sigma_z$.) Interestingly, it's not always the case that an additional load, even an additional load of the same sign (tensile or compressive), in another direction or mode necessarily brings the material closer to failure. Since ductile materials generally fail in shear, a material subjected to an x-direction normal stress $\sigma$ actually becomes less likely to fail if identical stresses are applied in the y- and z-directions, as the original resultant shear of $\sigma/2$ is reduced to zero under the latter equitriaxial stress state. Alternatively a shear stress of $\tau_{xy}$ can be alleviated by applying a pair of normal stresses $-\sigma_x$, ${\sigma_y}$. I wrote a little about this here in the context of the Tresca criterion.
Edit: Hertzberg et al.'s Deformation and Fracture Mechanics of Engineering Materials and Kinloch's and Young's Fracture Behaviour of Polymers have extensive information on polypropylene fracture. I haven't used this online resource, Christensen's Failure Theory for Materials Science and Engineering, but it may be useful. For PLA information (especially for recent 3D printing applications), you may need to look at the recent literature, such as here and here.
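For the question's specific loading case (one normal stress $\sigma$ plus one shear stress $\tau$), the von Mises criterion mentioned in the answer reduces to an equivalent stress $\sigma_{vm} = \sqrt{\sigma^2 + 3\tau^2}$, compared against the uniaxial strength. A minimal sketch (the numbers and the safety check are illustrative, not from the post):

```python
import math

# von Mises equivalent stress for combined axial tension (sigma) and
# shear (tau); failure is predicted when it exceeds the material's
# uniaxial yield/ultimate strength.
def von_mises(sigma, tau):
    return math.sqrt(sigma ** 2 + 3 * tau ** 2)

def is_safe(sigma, tau, strength, safety_factor=1.0):
    # simple check: equivalent stress (scaled) must stay below strength
    return von_mises(sigma, tau) * safety_factor <= strength

# Example: 100 MPa tension + 60 MPa shear vs. a 250 MPa strength
print(von_mises(100.0, 60.0))       # ≈ 144.2 MPa
print(is_safe(100.0, 60.0, 250.0))  # True
```

This is the "simplified model" flavour the question asks for; brittle materials would call for a different criterion (e.g. maximum normal stress or Mohr-Coulomb).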
{ "domain": "physics.stackexchange", "id": 78713, "tags": "material-science, stress-strain, fracture" }
Lockless multi user, multi consumer FIFO queue using C++11
Question: One of the features that is missing from C++11 are lockless queues. Therefore I have decided to write one myself. A few remarks: I know the code isn't technically lockless. However the lock is only used for a blocking call and can be omitted. I have tested the code and it seems to work. However I found I had some trouble coming up with good tests. So if anybody has a good suggestion I'd be happy to hear them. I'm aware of there being a good implementation for this in the Boost library. However I don't want to include Boost in all my projects. Just for this. #include <atomic> #include <memory> #include <condition_variable> namespace threading { template<typename T> class lockless_queue { private: template<typename U> struct node { friend class lockless_queue; node(const U data) : data(new U(data)), next(nullptr), isDummy(false) {} private: node(bool isDummy) : data(nullptr), next(nullptr), isDummy(isDummy) {} public: const bool isDummy; std::shared_ptr<U> data; std::shared_ptr<node<U>> next; }; public: lockless_queue() : m_head(new node<T>(true)), m_running(true) {} ~lockless_queue() { m_running = false; m_newDataWaiter.notify_all(); } //adds a new element to the end of the array void produce(const T &&data) { //bool indicating whether a notification should be sent after adding bool l_notifyUponAdding; //the new node to be added at the end of the array std::shared_ptr<node<T>> l_newNode(new node<T>(std::forward<const T&&>(data))); //pointer to the last node std::shared_ptr<node<T>> l_lastNode(std::atomic_load(&m_head)); //value to compare the next of the last node with std::shared_ptr<node<T>> l_expectedNullPointer; //notify if this isn't the only node l_notifyUponAdding = l_lastNode->isDummy; do { l_expectedNullPointer.reset(); while (l_lastNode->next) { l_lastNode = std::atomic_load(&(l_lastNode->next)); } } while (!std::atomic_compare_exchange_weak(&(l_lastNode->next), &l_expectedNullPointer, l_newNode)); if (l_notifyUponAdding) 
m_newDataWaiter.notify_one(); } //Removes an element from the end of the array std::shared_ptr<T> consume(bool blockingCall = false) { //the pointer to the element we will consume std::shared_ptr<node<T>> l_head = std::atomic_load(&m_head); std::shared_ptr<node<T>> l_snack = std::atomic_load(&(l_head->next)); do { //Check if the first node is null if (!l_snack) { //and if it is : if (blockingCall && m_running)//And this is a blocking call, { std::unique_lock<std::mutex> l_newDataWaiterLock(m_newDataWaiterMutex); while (!l_head->next) { m_newDataWaiter.wait(l_newDataWaiterLock);//we block until if (!this || !m_running)//break if the object was destroyed during the wait return nullptr; l_snack = std::atomic_load(&(l_head->next)); }// the load yields a head that is not null(to avoid unnecessary calls on spurious wake ups) } else//And this is not a blocking call we { return nullptr; } } } /*Not that if the atomic CAS fails The new l_snack gets updated. Since it might also be updated to nullptr if another thread has consumed the last node. We will have to check for this again. Hence the do while loop */ while (!std::atomic_compare_exchange_weak(&(l_head->next), &l_snack, l_snack->next)); if (l_snack) return l_snack->data; else return std::shared_ptr<T>(); } private: //should be used as atomic std::shared_ptr<node<T>> m_head; std::mutex m_newDataWaiterMutex; std::condition_variable m_newDataWaiter; bool m_running; }; } Answer: General comments The node structure is defined as a struct with private fields. This makes it a class. If it has private fields, prefer a class. The node structure is an implementation detail that is not exposed, you do not need to bother with private here. Just remove the friend declaration and remove the private and public declarations. Keep it simple silly ;) Writing lock-free data structures is difficult and error-prone at best with risk for very subtle bugs and race conditions that only occur very rarely. 
Are you really sure you need a lock-free queue? Do you have profiling data to back this up? You mentioned boost was out of the question, but for your own mental health and hair growth, please do consider using a well tested lock-free queue implemented by experts. The use of template<typename U> is unnecessary. The nested class is automatically a template class with the same parameters as the enclosing class. Simply change node<T> to node and remove template<typename U> from the class declaration. Also, I'm pretty sure that this: if (!this || !m_running) //break if the object was destroyed during the wait is undefined behaviour. If the object has been destroyed, there is nothing that says that this will have been set to nullptr in fact I'd wager it won't. At any rate as your waiters are reading from this you need to inhibit destruction of this until all waiters have returned. Otherwise you risk reading freed memory. You should initialize all variables when they are declared: bool l_notifyUponAdding; should be: bool l_notifyUponAdding = l_lastNode->isDummy; API and Naming To me the names of produce and consume are not very apt as they don't reflect the way I think about a queue and they don't match the naming of the STL queue. I would much prefer if your class implemented the same API as std::queue where applicable. Or at least used the same terminology such as push and pop. Performance This: std::shared_ptr<node> l_newNode(new node(std::forward<const T&&>(data))); should be: auto l_newNode = std::make_shared<node>(std::forward<const T&&>(data)); this only does one memory allocation and gives you better performance when using the shared_ptr as the reference count will be allocated together with the data. Which brings me to my next point: Use Forwarding Reference Correctly This: void produce(const T &&data){ ... std::shared_ptr<node> l_newNode(new node(std::forward<const T&&>(data))); really should be: void produce(T&& data){ ... 
std::shared_ptr<node> l_newNode(new node(std::forward<T&&>(data))); in the template context T&& denotes a forwarding reference (universal reference to some). And will take the correct type depending on how it is called. Edit You should also properly forward the argument in the node constructor: node(U&& data) :data(new U(std::forward<U>(data))) Thread Safety I'm not going to review thread safety as I'm not confident enough in the correct behaviour of the code. Addendum: Graceful Shutdown Requested in comments. To make a graceful shutdown when you may have other threads waiting on data on the queue you need two things: Ability to determine if there are any waiters. Defer destruction until no one is waiting. I haven't tested the following but it shows the concept: #include <atomic> #include <queue> #include <condition_variable> #include <mutex> #include <thread> class counter_guard{ public: counter_guard(std::atomic<int>& a) : v(a) { v++; } ~counter_guard(){ v--; } private: std::atomic<int>& v; }; class blocking_pipe{ public: ~blocking_pipe(){ m_enabled = false; m_signal.notify_all(); // Busy wait or you can use another condition_variable while (0 != m_users){ std::this_thread::yield(); } } void push(int val){ counter_guard cg(m_users); // Prevents "this" from being destroyed until we leave the function body. assert(m_enabled); // It's the users responsibility to not push to a pipe being destroyed. std::lock_guard<std::mutex> lg(m_mutex); m_queue.push(val); } int pop(){ counter_guard cg(m_users); // Prevents "this" from being destroyed until we leave the function body. assert(m_enabled); // It's the users responsibility to not pop a pipe being destroyed. std::unique_lock<std::mutex> lg(m_mutex); m_signal.wait(lg, [this](){ return !m_enabled || !m_queue.empty(); }); if (!m_queue.empty()){ // Here m_enabled might be false, but the destructor has not ran yet (we hold a user count) // so we can still return useful data to the caller. 
auto ans = m_queue.front(); m_queue.pop(); return ans; } else{ // This means m_enabled == false definitively. throw std::exception("Pipe severed!"); // non-standard VS2013 constructor } } private: std::queue<int> m_queue; std::atomic<bool> m_enabled{ true }; std::atomic<int> m_users{0}; std::condition_variable m_signal; std::mutex m_mutex; };
{ "domain": "codereview.stackexchange", "id": 13576, "tags": "c++, c++11, queue, lock-free, atomic" }
Setting use_sim_time globally across all ROS2 nodes
Question: Is there a way to set the use_sim_time ROS2 parameter across all nodes? I want all my nodes to use_sim_time globally by default since most of my work is in simulation. Originally posted by Blake McHale on ROS Answers with karma: 5 on 2021-09-18 Post score: 0 Answer: I think you can use SetParameter in this case. https://github.com/tier4/AutowareArchitectureProposal_launcher/blob/8cd04aabd4f07d867bd9f3d77241550a5b9263ab/autoware_launch/launch/logging_simulator.launch.xml#L25 https://github.com/tier4/AutowareArchitectureProposal.iv/blob/6adeec28d85dd41beaeb5a398ace899573bfda2a/common/util/autoware_global_parameter_loader/launch/global_params.launch.py#L27 Originally posted by Kenji Miyake with karma: 307 on 2021-09-19 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Blake McHale on 2021-09-22: Thanks! I ended up using SetParameter since I only needed to change one parameter across all nodes. I did not look into RewrittenYaml.
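A sketch of the accepted SetParameter approach as a Python launch file (package and executable names below are placeholders, and this assumes the launch_ros API available in recent ROS 2 distributions):

```python
# Hypothetical launch file: SetParameter applies use_sim_time to every
# node declared in this launch description, so it is not repeated per node.
from launch import LaunchDescription
from launch_ros.actions import Node, SetParameter

def generate_launch_description():
    return LaunchDescription([
        SetParameter(name='use_sim_time', value=True),
        Node(package='my_robot_pkg', executable='controller_node'),    # placeholder
        Node(package='my_robot_pkg', executable='localization_node'),  # placeholder
    ])
```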
{ "domain": "robotics.stackexchange", "id": 36925, "tags": "ros2" }
What's the importance of all four fundamental forces being "curvature"?
Question: I've heard about how, in a gauge theory, the gauge covariant derivative of the field around a closed curve is generally not zero, and this is how you can quantify force or field strength. And that this is the same basic idea as curvature, with the gauge field being equivalent to the connection. So since gravity is already known to be curvature, we can say that all the forces of nature are curvature in their own way. So what's the significance of that? Is there some deeper reason that we should expect that to be the case? And are the current unification programs based on that similarity? Answer: When we study non-gravitational fundamental interactions, we distinguish internal symmetries associated with only such interactions from the external symmetries of spacetime. For all fundamental interactions, there is a finite-dimensional Lie group characterizing that interaction's symmetries. In the case of gravity, the Lie derivative of Killing vector fields on the spacetime manifold defines the associated Lie algebra's structure constants; for the other interactions, there is a "space" that plays a role analogous to this manifold, but it's not spacetime itself. Instead, it's a space of legal values for a field over spacetime. For example, electromagnetism's $U(1)$ symmetry (let's put electroweak unification aside for the moment) is the rotational invariance of $|\phi|$ for $\phi\in\Bbb C$ with $|\phi|:=\sqrt{\phi^\ast\phi}$, or equivalently for $\phi\in\Bbb R^2$ with $|\phi|:=\sqrt{\phi\cdot\phi}$. (I'm denoting the set of values $\phi$ can have at each point in spacetime, say $\Bbb R^4$, so as a function $\phi\in X^{\Bbb R^4}$ for $X=\Bbb C$ or, somewhat less helpfully in QFT, $X=\Bbb R^2$.) So if there is a space which is "curved" in this context by electromagnetism, it is not spacetime per se. As for unification implications, Wikipedia notes For ordinary Lie algebras, the gauge covariant derivative on the space symmetries... 
cannot be intertwined with the internal gauge symmetries... this is the content of the Coleman–Mandula theorem. However, a premise of this theorem is violated by the Lie superalgebras (which are not${}^\dagger$ Lie algebras!) thus offering hope that a single unified symmetry can describe both spatial and internal symmetries: this is the foundation of supersymmetry ${}^\dagger$ emphasis in the original.
{ "domain": "physics.stackexchange", "id": 89816, "tags": "general-relativity, quantum-field-theory, gauge-theory, curvature, interactions" }
If the brain can store as much information as a billion hard disks, why can't I memorize a single Word document of random letters?
Question: I read a lot of articles on this and all seem to agree that the brain's storage capacity in neural connections is tremendous, but that doesn't explain why we forget things so easily, have such a modest memory in our subjective lives, and use much humbler devices to store information. I specifically want to know whether we really can store this information but can't access it as efficiently as a computer can, or whether the whole process of storing this information is different from a hard disk on a computer. None of the articles I have read make any comments addressing this question. Links to relevant information on memory processing and storage will be appreciated. Answer: The brain is trained to remember patterns and predictable associations. Randomness is the absence of patterns, so it's the exact opposite of what the human brain is for. A human can memorize about 67,890 random digits, which is the world record for recited digits of Pi. That's about 20 pages of an irrational number. Some people can remember 20 pages of word documents. To memorize random numbers, the brain has to attach colors, places, objects, people and animals onto the randomness and construct a story to which the randomness can correspond. It gives an idea of what the human brain is very good at memorizing, which is real-world phenomena through experience. The human brain actually predicts what it has seen before it knows: it conjures an expected object, personality or event from imagination (for example if you see a rabbit/cat beside the road and it turns out to be a bag). New objects like a page of random letters are compared to previous patterns of sensory input. There are regions of the brain which are specially suited to tasks more like computer memory, especially geographic memory used for path-finding and remembering food sources; this is handled by the hippocampi, which are highly developed in taxi drivers. A computer actually uses man-made tools to do the memorization.
A computer can't design a hard disk or a memory module; the computer uses human-made memorization tools, which humans can use too if you change the rules of the game a bit... they can use pen and paper... in that sense humans and computers are equal, with the computers being subservient to the humans.
{ "domain": "biology.stackexchange", "id": 10706, "tags": "neuroscience, brain, neurology, memory" }
Euler-Lagrange Equations for charged particle in Einstein notation
Question: I'm a math guy, just wanting to clarify some notation. This is the Euler-Lagrange equation associated with a charge in an electric field that I found in a book, where $\phi$ and $A$ are the scalar and vector potential related to the Maxwell equations. I am just curious: this last term, the one with the subscript $j$, is it supposed to be a sum over $j=1,2,3$ of that quantity, i.e. will it be three terms? I just wanted clarification on this, since I'm not sure why this sum is omitted if there should be a summation here. If there is not a summation here, can someone tell me what $j$ is referencing? I know $i$ is referencing different components of the position vector. Answer: I'd add a small comment, which is to give a name to the notation to possibly help in the future, but I don't have enough reputation yet. Therefore I'm writing this as an answer: Yes, it's extremely powerful, and it's known as the Einstein Summation Convention. There are nice articles on Wikipedia and Wolfram's MathWorld on the topic.
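Since the book's equation itself is not reproduced in the post, here is a generic illustration of the convention the answer names (all numbers made up): a repeated index like $j$ in a single term implies a sum over its three values.

```python
# Einstein summation convention: a_j b_j means sum over the repeated
# index j (here indexed 0..2 in code for j = 1, 2, 3).
def contract(a, b):
    return sum(a[j] * b[j] for j in range(3))

grad_A_row = [1.0, 2.0, 3.0]   # e.g. dA_j/dx_i for one fixed i (made-up)
velocity = [0.5, -1.0, 2.0]    # v_j (made-up)

print(contract(grad_A_row, velocity))  # 1*0.5 + 2*(-1) + 3*2 = 4.5
```

So the "missing" sum in the book is there implicitly: one repeated index, one hidden summation, three terms.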
{ "domain": "physics.stackexchange", "id": 40704, "tags": "electromagnetism, lagrangian-formalism, tensor-calculus, notation" }
What reinforcement learning algorithm should I use in continuous states?
Question: I want to use reinforcement learning in an environment I made. The exact environment doesn't really matter, but it comes down to this: the number of different states in the environment is infinite, e.g. the number of ways you can put 4 cars at an intersection, but the number of different actions is only 3, e.g. go forward, right or left. The state consists of five numbers. My question is: what algorithm should I use, or at least what kind of algorithm? Answer: I would recommend looking at Deep Q-Learning.
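A minimal sketch of the setup Deep Q-Learning assumes, matched to the question's shapes (everything here is illustrative: a linear function approximator stands in for the deep network, weights start at zero, and a real DQN would also need replay memory, a target network and gradient updates):

```python
import random

N_STATE, N_ACTIONS = 5, 3   # state = five numbers; actions: forward/right/left

# One weight vector per action (a neural network would replace this).
weights = [[0.0] * N_STATE for _ in range(N_ACTIONS)]

def q_values(state):
    """One Q-value per discrete action for a continuous state vector."""
    return [sum(w * s for w, s in zip(weights[a], state))
            for a in range(N_ACTIONS)]

def select_action(state, epsilon=0.1):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    q = q_values(state)
    return q.index(max(q))

print(select_action([0.1, -0.3, 0.7, 0.0, 0.2], epsilon=0.0))
```

The key point for the question: the continuous state never needs to be discretized; the function approximator maps it directly to one value per discrete action.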
{ "domain": "ai.stackexchange", "id": 2141, "tags": "reinforcement-learning, algorithm, dqn, ddpg" }
What is the unit for work done?
Question: My textbook's equation for work done is: work done = force * distance So this means that the unit should be Nm. However, when I researched on Google, a lot of people were saying that the unit is J. Answer: J (joule) is a derived unit for energy (or work done) named after the physicist James Joule. Since $W = F \cdot d$, we have 1 J = 1 Nm. We can also express it in terms of base SI units, giving 1 J = 1 kg m$^2$ s$^{-2}$.
{ "domain": "physics.stackexchange", "id": 49701, "tags": "work, units, dimensional-analysis, si-units" }
Topological Sort without modifying the graph or marking edges
Question: I have a DAG which I want to traverse in a topological order. Wikipedia describes two algorithms for topological sorting, which both work in theory but seem impractical to me from a design point of view: Kahn's algorithm modifies the graph (by removing edges) and the DFS-based one marks nodes, which would require me to modify my node classes (by adding a boolean field) and is furthermore not thread safe. Are there more practical approaches that preserve the asymptotic runtime but do not interfere with my business logic so much? Answer: Instead of setting a 'mark' flag; node.Marked = true; you can maintain a set of marked nodes in a hashtable or similar; hashTable[node] = true; You now have to pass the hash table around, but it's O(n) for space and O(1) to check if a node is marked.
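The answer's suggestion can be sketched as a DFS topological sort that keeps all per-node state in an external dictionary, so the graph is never modified (a sketch; recursion is used for brevity, and an explicit stack would avoid recursion limits on deep graphs):

```python
def topological_sort(graph):
    """graph: dict mapping each node to an iterable of its successors."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / in progress / done
    color = {}                        # the external 'mark' table
    order = []

    def visit(node):
        state = color.get(node, WHITE)
        if state == GRAY:
            raise ValueError("not a DAG: cycle detected")
        if state == BLACK:
            return
        color[node] = GRAY
        for succ in graph.get(node, ()):
            visit(succ)
        color[node] = BLACK
        order.append(node)

    for node in graph:
        visit(node)
    return order[::-1]                # reverse postorder = topological order

print(topological_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}))
# a valid order, e.g. ['a', 'c', 'b', 'd']
```

For thread safety, each traversal owns its own color table, so concurrent traversals of the same shared graph do not interfere.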
{ "domain": "cs.stackexchange", "id": 7115, "tags": "dag" }
Explain why the following conditions should be satisfied for a sustained interference
Question: My book says that the following conditions should be satisfied for sustained interference: (1) the two coherent sources, placed in front of two slits separated by some distance, should be close to each other so as to produce sustained interference on the screen kept at some distance from the slits; (2) the amplitudes of the incoming waves should be equal; (3) the two sources should be strong, with the least background. I have come up with the following justifications. For the first point: maybe if the coherent sources are not close to each other then the phase difference changes continuously and won't give interference. According to me the second point is wrong, because even if the amplitudes of the waves are not equal I have still seen an interference pattern forming. I didn't understand the third point. Please help me. Answer: I think that they give "practical conditions" to observe interference: (1) If the two slits are too far apart, the fringes will be too close together to be easily observed (the fringe spacing is lambda*D/a and a is too great). Moreover there will be a problem with the spatial coherence of the source which illuminates the slits. (2) If the amplitudes are too different, the contrast of the fringes will be low. (3) If there is a highly illuminated background, you will not see the fringes (low contrast!) Hope you can understand my poor English!
{ "domain": "physics.stackexchange", "id": 54812, "tags": "optics, interference" }
Is it possible to make an all natural smoke detector from Brazil nuts?
Question: After reading about Brazil nuts, I discovered they have very high levels of radiation due to trace amounts of Ra-226 and Ra-228 and their decay products. A kilogram of the nut, for instance, gives a reading between 40 and 2660 becquerels. Ionizing smoke detectors use a strong alpha-emitter to ionize air molecules between two plates. When smoke particulates in the air obstruct the flow of ions between the plates an alarm is triggered. Radium-226 was the first radioactive source in smoke detectors before it was switched over to Americium-241. Given that today's smoke detectors operate on 0.05 µCi or 1850 Bq of radiation, is it possible to build a smoke detector apparatus using Brazil nuts as a radiation source? Citations: Smoke detectors Brazil nuts Answer: The smoke detector in the hallway outside my bedroom door—purchased new just a year ago—says it contains 0.9 µCi of 241-Am. That's 18 times what you said for "today's smoke detectors." If we go with your highest estimate, 2660 Bq per kg of nut meat, we're going to need 12.5 kg of nut meat to get as much radioactivity as my smoke detector needs. But here's the thing. A smoke detector depends on alpha particles to ionize the air in the chamber, and no alpha particles can escape from the interior of a mass of nut meat. You need the radioactive source to be concentrated in a thin film. By the time you have processed the 12.5 kg of nut meat to sufficiently concentrate the radium for the application, can you honestly call it "all natural" at that point?
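The answer's arithmetic can be checked directly (constants taken from the two posts; the only added fact is the standard conversion 1 µCi = 37,000 Bq):

```python
# Back-of-envelope check of the figures quoted above.
UCI_TO_BQ = 37_000            # 1 microcurie = 3.7e4 becquerel
detector_uci = 0.9            # activity printed on the answerer's detector
nut_bq_per_kg = 2660          # highest Brazil-nut activity quoted

detector_bq = detector_uci * UCI_TO_BQ
kg_needed = detector_bq / nut_bq_per_kg
print(round(detector_bq))     # 33300 Bq, i.e. 18x the question's 1850 Bq
print(round(kg_needed, 1))    # 12.5 kg of nut meat
```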
{ "domain": "physics.stackexchange", "id": 96124, "tags": "nuclear-physics, radiation, nuclear-engineering" }
Cannot clone object ''
Question: My model- #define model model = Sequential() model.add(Dense(128, activation='relu', input_dim=n_input_1)) model.add(Dense(64, activation='relu')) #model.add(Dense(32, activation='relu')) #model.add(Dense(16, activation='relu')) model.add(Dense(1)) model.compile(optimizer='adam', loss='mse',metrics=['mse']) Now I am trying to tune the hyper parameters by this code- from sklearn.model_selection import GridSearchCV # fix random seed for reproducibility seed = 7 np.random.seed(seed) # define the grid search parameters batch_size = [10, 20, 40, 60, 80, 100] epochs = [10, 50, 100] param_grid = dict(batch_size=batch_size, epochs=epochs) grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3,scoring='neg_mean_absolute_error') grid_result = grid.fit(scaled_train,y_train_c) # summarize results print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) means = grid_result.cv_results_['mean_test_score'] stds = grid_result.cv_results_['std_test_score'] params = grid_result.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) I am getting this error- TypeError Traceback (most recent call last) <ipython-input-10-c6e1e39d878e> in <module> 9 param_grid = dict(batch_size=batch_size, epochs=epochs) 10 grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3,scoring='neg_mean_absolute_error') ---> 11 grid_result = grid.fit(scaled_train,y_train_c) 12 # summarize results 13 print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) ~\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params) 631 n_splits = cv.get_n_splits(X, y, groups) 632 --> 633 base_estimator = clone(self.estimator) 634 635 parallel = Parallel(n_jobs=self.n_jobs, verbose=self.verbose, ~\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\base.py in clone(estimator, safe) 58 "it 
does not seem to be a scikit-learn estimator " 59 "as it does not implement a 'get_params' methods." ---> 60 % (repr(estimator), type(estimator))) 61 klass = estimator.__class__ 62 new_object_params = estimator.get_params(deep=False) TypeError: Cannot clone object '<keras.engine.sequential.Sequential object at 0x0000023DD4D5F488>' (type <class 'keras.engine.sequential.Sequential'>): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods. I have no idea what's causing this issue. I am implementing an MLP for time series forecasting. Answer: The xyzSearchCV classes in sklearn require the estimator to be compatible with the sklearn API; in particular, the hyperparameters of the estimator's clones get set using set_params, which needs to be defined as a class method for the estimator. So a Keras Sequential model will not work. Keras does provide an sklearn wrapper: https://keras.io/scikit-learn-api/
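To see what clone() is missing, here is a simplified pure-Python sketch of the protocol it relies on (an illustration, not sklearn's actual code; the wrapper class and its parameter names are made up). The Keras scikit-learn wrappers implement essentially this protocol for you:

```python
# clone() reads the constructor parameters via get_params() and rebuilds a
# fresh estimator; set_params() lets GridSearchCV assign new hyperparameters.
# A bare Keras Sequential model implements neither, hence the TypeError.
class SklearnStyleWrapper:
    def __init__(self, build_fn=None, epochs=10, batch_size=32):
        self.build_fn = build_fn
        self.epochs = epochs
        self.batch_size = batch_size

    def get_params(self, deep=False):
        return {'build_fn': self.build_fn,
                'epochs': self.epochs,
                'batch_size': self.batch_size}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

def clone_like_sklearn(estimator):
    # roughly what sklearn.base.clone does with the extracted params
    return estimator.__class__(**estimator.get_params())

est = SklearnStyleWrapper(epochs=50)
print(clone_like_sklearn(est).epochs)  # 50
```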
{ "domain": "datascience.stackexchange", "id": 7134, "tags": "machine-learning, python, keras, scikit-learn" }
communication gz->ROS2 doesn't work, how to debug?
Question: Hello, I have a problem with ros_gz_bridge. I use ROS2 Humble and Gazebo Garden, got ros_gz from source (humble branch), and after compilation (and sourcing my env in each shell) in a new workspace I tried the examples from ros_gz_bridge (changing ignition to gz in the commands). Example 1a: communication gz -> ROS2 => KO, nothing is received in the ROS2 topic. Example 1b: communication ROS2 -> gz => OK. I can't understand what's going wrong. # shell A $ ros2 run ros_gz_bridge parameter_bridge /chatter@std_msgs/msg/String@gz.msgs.StringMsg [INFO] [1683186627.556779317] [ros_gz_bridge]: Creating GZ->ROS Bridge: [/chatter (gz.msgs.StringMsg) -> /chatter (std_msgs/msg/String)] (Lazy 0) [INFO] [1683186627.558527086] [ros_gz_bridge]: Creating ROS->GZ Bridge: [/chatter (std_msgs/msg/String) -> /chatter (gz.msgs.StringMsg)] (Lazy 0) # shell B $ ros2 topic echo /chatter # nothing # shell C $ gz topic -t /chatter -m gz.msgs.StringMsg -p 'data:"Hello"' # shell D (to check the msg is really sent) $ gz topic --echo -t /chatter data: "Hello" As far as I understand, it is probably a problem with the message type. $ ros2 topic type /chatter std_msgs/msg/String $ gz topic -t /chatter --info Publishers [Address, Message Type]: tcp://192.168.21.112:37511, ignition.msgs.StringMsg This ignition message is suspicious to me. Originally posted by SébastienL on ROS Answers with karma: 32 on 2023-05-04 Post score: 0 Answer: Ok, got it from https://github.com/gazebosim/ros_gz/issues/365: I have to export GZ_VERSION=garden before compiling. $ gz topic --info -t /chatter Publishers [Address, Message Type]: tcp://127.0.0.1:39363, gz.msgs.StringMsg Originally posted by SébastienL with karma: 32 on 2023-05-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38371, "tags": "ros2" }
Can Expectation Maximization estimate truth and confusion matrix from multiple noisy sources?
Question: Suppose we have $m$ sources, each of which noisily observe the same set of $n$ independent events from the outcome set $\{A,B,C\}$. Each source has a confusion matrix, for example for source $i$: $$C_i = \begin{bmatrix} 0.98 & 0.01 & 0.07 \\ 0.01 & 0.97 & 0.00 \\0.01 & 0.02 & 0.93\end{bmatrix} $$ where each column relates to the truth, and each row relates to the observation. Eg. if the true event is $B$ then source $i$ will get it right 97% of the time, and observe $A$ 1% of the time and $C$ 2% of the time. We can assume the diagonal elements are > 95% Given a sequence of $n$ events, where each event $j$ was observed by source $i$ as $O_{i,j}$, it is trivial to estimate the pmf of the truth $T_j$ by solving $P(T_j | O_{1,j},\dots,O_{m,j})$ using Bayesian formula (given some reasonable priors about the probabilities of the events themselves, say, uniform). However suppose we didn't have the confusion matrices, nor the ground truth, and instead wanted to estimate them both. One algorithm is: Start with some reasonable confusion matrix for each source $C_{i,0}$ Fixing the confusion matrices $C_{i,k}$, estimate the most likely truths $T_{j,k}$ using Bayes formula Fixing truths $T_{j,k}$, estimate new confusion matrices $C_{i,k+1}$ based on how often each source got it "wrong" (allegedly) Repeat the last two steps incrementing $k$ until convergence This seems like the EM algorithm but I don't know how to show this formally. (No this is not homework.) 1) Is this EM, or some other standard algorithm in data fusion? 2) Does it have convergence guarantees? 3) Does it have any guarantees about the quality of the solution and how well the final confusion matrices will approximate the true confusion matrices? 4) Are there issues about the number of parameters being estimated vs. the number of samples? Eg. 
it seems there are $n + 6m$ parameters being estimated - the $n$ truths and the $6m$ elements across all confusion matrices (the last cell of each column is determined by the others). EDIT These two papers describe exactly the problem and how to use EM to solve it: Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm http://crowdsourcing-class.org/readings/downloads/ml/EM.pdf LEARNING FROM NOISY SINGLY-LABELED DATA https://openreview.net/pdf?id=H1sUHgb0Z So the answers are: 1) Properly interpreted, yes, this is EM 2) EM in general converges to a local optimum 3) Not really. In this problem though, since the sources are 97%+ accurate, I expect the estimates to be pretty good 4) I don't think this is an issue - this is a "non-parametric" EM algorithm, as the confusion matrices are not parameterized in any way. The sample sizes I deal with are in the 100000's+ so this shouldn't be an issue Answer: I don't have a full answer to your question, but I wanted to help (and would love to know a complete answer). In a simpler case when you have one source $m=i=1$, it seems to me you are describing a scenario that resembles a hidden Markov model, which uses an EM-algorithm-based solution (the Baum–Welch algorithm) - that is, if you make the further hypothesis that a Markov property is satisfied. Because Markov chains converge to their stationary distribution exponentially fast, the algorithm's convergence may be limited mainly by the implementation. I hope that info helps point you in the right direction.
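The alternating scheme described in the question (a hard-assignment variant of the Dawid-Skene estimator from the first linked paper) can be sketched in plain Python. The function names, the Laplace smoothing, and the mostly-diagonal starting matrices are my own choices; truths are hard-assigned each round rather than kept as posteriors:

```python
OUTCOMES = (0, 1, 2)  # the outcome set {A, B, C} encoded as indices

def estimate_truths(obs, confusions):
    """Given fixed confusion matrices, pick the most likely truth for each
    event under a uniform prior (hard assignment via Bayes' formula)."""
    m, n = len(obs), len(obs[0])
    truths = []
    for j in range(n):
        best_t, best_p = 0, -1.0
        for t in OUTCOMES:
            p = 1.0
            for i in range(m):
                p *= confusions[i][obs[i][j]][t]  # P(observed | truth = t)
            if p > best_p:
                best_t, best_p = t, p
        truths.append(best_t)
    return truths

def estimate_confusions(obs, truths, smoothing=1.0):
    """Given fixed truths, re-estimate each source's confusion matrix by
    counting, with Laplace smoothing (my choice) so no entry hits zero."""
    m, n = len(obs), len(obs[0])
    confusions = []
    for i in range(m):
        C = [[smoothing] * 3 for _ in range(3)]  # C[observed][truth]
        for j in range(n):
            C[obs[i][j]][truths[j]] += 1.0
        for t in OUTCOMES:  # normalise each truth-column to a distribution
            total = sum(C[o][t] for o in OUTCOMES)
            for o in OUTCOMES:
                C[o][t] /= total
        confusions.append(C)
    return confusions

def hard_em(obs, iters=10):
    """Alternate the two steps above, starting every source from a
    'reasonable' mostly-diagonal confusion matrix."""
    start = [[0.9 if o == t else 0.05 for t in OUTCOMES] for o in OUTCOMES]
    confusions = [start for _ in range(len(obs))]
    truths = None
    for _ in range(iters):
        truths = estimate_truths(obs, confusions)
        confusions = estimate_confusions(obs, truths)
    return truths, confusions
```

With 95%+ accurate sources, a handful of iterations typically pins down both the truths and the confusion matrices, consistent with point 3 above.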
{ "domain": "datascience.stackexchange", "id": 4031, "tags": "data-mining, confusion-matrix, parameter-estimation, expectation-maximization" }
What is the chemistry behind amalgamation?
Question: Mercury can form amalgams with metals, such as gold, but how exactly does the "reaction", if you can call it that, take place? My understanding of metal bonds is that the atoms of a metal are surrounded by a "sea" of electrons; however, I don't understand how mercury can "penetrate" metals and weaken their structure or "dissolve" them. How does it do this? Also, does this apply to gallium? Answer: An amalgam is simply an alloy of some metal with mercury. Alloys are simply solid solutions with metals. Since the mixture of multiple elements disrupts the metal bonding, the properties are often quite different from those of the components. (In some cases, alloys form so-called intermetallic compounds with defined stoichiometry, but more often, the ratios are very flexible.) I wouldn't describe the mixture as a reaction, in much the same way that I wouldn't really call dissolving sodium chloride in water a reaction. You're forming a mixture. Water disrupts the ionic bonding in NaCl, and mercury and other metals can also accept other components or impurities. Gallium does alloy with a lot of metals, and indium-gallium eutectic alloy is a liquid at room temperature, as is "galinstan". As to your question about mercury dissolving the metal atoms, consider that mercury is also a metal, so it also participates in similar metal bonding.
{ "domain": "chemistry.stackexchange", "id": 1867, "tags": "metal" }
Check whether a date is a valid future date
Question: I'm looking to increase the conciseness of this code. I realize that I can use Joda (or Java 8's new date API), but if I were to keep this to just Java 7, any suggestions? (I care less about whitespace and formatting.) /** * Tests whether the date input represents * a real date in mm/dd/YYYY format that is after the current date. * Useful for testing send dates and expiration dates. * * Tests against current date minus 24 hours * so users in different time zones from the server can * be accommodated. * * @param pDateString date to be tested * @return true if date (@12:01 am) >= yesterday (@12:02am) */ public static boolean isValidFutureDate(String pDateString) { if (!isValidDate(pDateString)) { return false; } StringTokenizer st = new StringTokenizer(pDateString.trim(), "/"); if (st.countTokens() != 3) { throw new NumberFormatException("Date format should be MM/DD/YYYY."); } String month = st.nextToken(); String day = st.nextToken(); String year = st.nextToken(); long oneDayInMillis = 86400000; // set reference point to yesterday's date, 12:01am GregorianCalendar ref = new GregorianCalendar(); ref.setTime(new Date(System.currentTimeMillis() - oneDayInMillis)); ref.set(Calendar.HOUR_OF_DAY, 0); ref.set(Calendar.MINUTE, 1); ref.set(Calendar.AM_PM, Calendar.AM); // set comparison time to entered day, 12:02am GregorianCalendar now = toDate(year, month, day); now.set(Calendar.HOUR_OF_DAY, 0); now.set(Calendar.MINUTE, 2); ref.set(Calendar.AM_PM, Calendar.AM); return now.after(ref); } /** * This method tests whether the date string passed in represents * a real date in mm/dd/YYYY format. 
* * @param pDateString date to be tested * @return a <code>boolean</code> value */ public static boolean isValidDate(String pDateString) { StringTokenizer st= new StringTokenizer(pDateString.trim(), "/"); if (st.countTokens() != 3) { throw new NumberFormatException("Date format should be MM/DD/YYYY."); } String month = st.nextToken(); String day = st.nextToken(); String year = st.nextToken(); return toDate(year, month, day) != null; } /** * Converts a String set of date values to a GregorianCalendar object. * * @param year a <code>String</code> value * @param month a <code>String</code> value * @param day a <code>String</code> value * @return a <code>GregorianCalendar</code> value */ public static GregorianCalendar toDate(final String year, final String month, final String day) { int mm, dd, yyyy; try { if(year.length() != 4) { throw new NumberFormatException("Please provide four(4) digits for the year."); } yyyy = Integer.parseInt(year); if(yyyy == 0) { throw new NumberFormatException("zero is an invalid year."); } } catch(NumberFormatException nfe) { throw new NumberFormatException(year + " is an invalid year."); } try { mm = Integer.parseInt(month); if(mm < 1 || mm > 12) { throw new NumberFormatException(month + " is an invalid month."); } } catch(NumberFormatException nfe) { throw new NumberFormatException(month + " is an invalid month."); } try { dd = Integer.parseInt(day); } catch(NumberFormatException nfe) { throw new NumberFormatException(day + " is an invalid day."); } GregorianCalendar gc = new GregorianCalendar( yyyy, --mm, 1 ); if(dd > gc.getActualMaximum(GregorianCalendar.DATE)) { throw new NumberFormatException(CalendarUtils.months[gc.get(GregorianCalendar.MONTH)] + " " + dd + " is an invalid day in "+gc.get(GregorianCalendar.YEAR) + "."); } if(dd < gc.getActualMinimum(GregorianCalendar.DATE)) { throw new NumberFormatException(CalendarUtils.months[gc.get(GregorianCalendar.MONTH)] + " " + dd + " is an invalid day in "+gc.get(GregorianCalendar.YEAR) + 
"."); } return new GregorianCalendar(yyyy,mm,dd); } Answer: Correctness My first point is that the current date is accepted as a valid future date. That is odd, and it is certainly not according to the documentation: Tests whether the date input represents a real date in mm/dd/YYYY format that is after the current date. Simplicity Secondly, assuming the current date should return false, the entire implementation can be as simple as: public static boolean isValidDate(String pDateString) throws ParseException { Date date = new SimpleDateFormat("MM/dd/yyyy").parse(pDateString); return new Date().before(date); } Testable Lastly, this method is difficult to test. Does it work when now falls in a DST overlap period? Does it work on the 29th of February? Does it work in the time zone "Australia/Darwin"? While you may not use Java 8, you can certainly make a Clock abstraction of your own to get around this.
{ "domain": "codereview.stackexchange", "id": 12091, "tags": "java, performance, datetime, validation, cyclomatic-complexity" }
Javascript condition optimization
Question: Can the following condition be optimized/simplified ? if (!this.properties.width || this.properties.width <= 0 || !this.properties.height || this.properties.height <= 0) return; So basically if properties does not have a width or height property or the width or height property is less than or equal to zero then return Answer: I'd simply flip it around: if( !(this.properties.width > 0 && this.properties.height > 0) ) { return; } This will catch null, undefined, NaN, etc., etc.. It will allow numeric strings (provided they can be coerced to a number greater than zero), and of course it'll allow straight-up positive, non-zero numbers. Update: There's one edge-case I've come across: Arrays. It's some unfortunate JavaScript weirdness to do with type coercion: [] > 0; // => false (so far so good) [0] > 0; // => false (as it should be) [1] > 0; // => true (what?) [1, 1] > 0; // => false (WHAT?) So an array with just a single numeric element will be treated as a number when comparing. Well, actually, as far as I can tell, the array is coerced to a string by coercing each element to a string, and joining them with a comma, and the joined string is then compared lexicographically coerced to a number in order to be compared. (Thanks to Robert in the comments for the correction) [1] + 0; // => "10" [1, 1] + 0; // => "1,10" "0" > 0; // => false "1" > 0; // => true "1,1" > 0; // => false I love JavaScript, but sometimes... jeez...
{ "domain": "codereview.stackexchange", "id": 10192, "tags": "javascript" }
What is the time complexity of a binary multiplication using Karatsuba Algorithm?
Question: My apologies if the question sounds naive, but I'm trying to wrap my head around the idea of time complexity. In general, the Karatsuba Multiplication is said to have a time complexity of O(n^1.5...). The algorithm assumes that the addition and subtraction take about O(1) each. However, for binary addition and subtraction, I don't think it will be O(1). If I'm not mistaken, a typical addition or subtraction of two binary numbers takes O(n) time. What will be the total time complexity of the following program then that multiplies two binary numbers using Karatsuba Algo that in turn performs binary addition and subtraction? long multKaratsuba(long num1, long num2) { if ((num1>=0 && num1<=1) && (num2>=0 && num2<=1)) { return num1*num2; } int length1 = String.valueOf(num1).length(); //takes O(n)? Not sure int length2 = String.valueOf(num2).length(); //takes O(n)? Not sure int max = length1 > length2 ? length1 : length2; int halfMax = max/2; // x = xHigh + xLow long num1High = findHigh(num1, halfMax); // takes O(1) long num1Low = findLow(num1, halfMax); // takes O(1) // y = yHigh + yLow long num2High = findHigh(num2, halfMax); // takes O(1) long num2Low = findLow(num2, halfMax); // takes O(1) // a = (xHigh*yHigh) long a = multKaratsuba(num1High, num2High); // b = (xLow*yLow) long b = multKaratsuba(num1Low, num2Low); //c = (xHigh + xLow)*(yHigh + yLow) - (a + b); long cX = add(xHigh,xLow); // this ideally takes O(n) time long cY = add(yHigh,yLow); // this ideally takes O(n) time long cXY = multKaratsuba(cX, cY); long cAB = add(a, b) // this ideally takes O(n) time long c = subtract(cXY, cAB) // this ideally takes O(n) time // res = a*(10^(2*m)) + c*(10^m) + b long resA = a * (long) Math.pow(10, (2*halfMax)); // takes O(1) long resC = c * (long) Math.pow(10, halfMax); // takes O(1) long resAC = add(resA, resC); // takes O(n) long res = add(resAC, b); // takes O(n) return res; } Answer: You're mistaken.
The analysis of the Karatsuba algorithm takes addition and subtraction into account. If you analyze your pseudo-code, you can see that you have exactly 3 recursive function calls with arguments of size $n/2$, and all other operations like addition, subtraction, extracting the higher and lower bits, ..., run in $O(n)$. Therefore you get the recursion $T(n) = 3 \cdot T(n/2) + O(n)$ for the complexity, which gives the following according to the Master theorem: $$T(n) = \Theta(n^{\log_2 3}) = \Theta(n^{1.58...})$$ Fun fact: even if you assume that all the bitwise operations (addition, subtraction, ...) take $O(1)$, the code will still have the same complexity, since the number of calls to the three subproblems dominates the other work. But I doubt that any book/lecturer would assume that the bitwise operations are constant, since the whole point of the algorithm is that you want to multiply large numbers. Btw, you have some small errors in your pseudo-code/analysis. E.g. here: long resA = a * (long) Math.pow(10, (2*halfMax)); // takes O(1) You want to multiply by a power of 2 to make it efficient (since you are working with binary numbers). And even corrected, it will not run in $O(1)$, because you need to copy/move $O(n)$ bits. So this line will also run in $O(n)$.
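To see the recurrence in action, here is a hedged sketch in Python (not the asker's Java; Python ints are arbitrary-precision), splitting on a power of two via shifts and masks as the answer recommends, so every non-recursive step costs $O(n)$ bit operations:

```python
def karatsuba(x, y):
    """Karatsuba on non-negative Python ints: 3 recursive half-size
    multiplications plus O(n)-bit additions, subtractions and shifts,
    matching the recurrence T(n) = 3*T(n/2) + O(n)."""
    if x < 10 or y < 10:  # base case: one operand is tiny
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    # split on a power of two, so splitting and recombining are shifts/masks
    x_hi, x_lo = x >> half, x & ((1 << half) - 1)
    y_hi, y_lo = y >> half, y & ((1 << half) - 1)
    a = karatsuba(x_hi, y_hi)                          # high * high
    b = karatsuba(x_lo, y_lo)                          # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b    # cross terms
    return (a << (2 * half)) + (c << half) + b
```

The identity behind the last line is a*2^(2*half) + (x_hi*y_lo + x_lo*y_hi)*2^half + b, with the cross terms obtained from one multiplication instead of two.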
{ "domain": "cs.stackexchange", "id": 14496, "tags": "algorithms, algorithm-analysis, time-complexity, arithmetic" }
Deciding emptiness of intersection of regular languages in subquadratic time
Question: Let $L_1,L_2$ be two regular languages given by NFAs $M_1,M_2$ as input. Assume we would like to check whether $L_1\cap L_2\neq \emptyset$. This can clearly be done by a quadratic algorithm which computes the product automaton of $M_1,M_2$, but I was wondering if anything more efficient is known. Is there a $o(n^2)$ algorithm for deciding whether $L_1\cap L_2\neq \emptyset$? What is the fastest known algorithm? Answer: Simple answer: If there does exist a more efficient algorithm that runs in $O(n^{\delta})$ time for some $\delta < 2$, then the strong exponential time hypothesis would be refuted. We will prove a stronger theorem and then the simple answer will follow. Theorem: If we can solve the intersection non-emptiness problem for two DFA's in $O(n^{\delta})$ time, then any problem that's non-deterministically solvable using only n bits of memory is deterministically solvable in $poly(n)\cdot2^{(\delta n/2)}$ time. Justification: Suppose that we can solve intersection non-emptiness for two DFA's in $O(n^{\delta})$ time. Let a non-deterministic Turing machine M with a read only input tape and a read/write binary work tape be given. Let an input string x of length n be given. Suppose that M doesn't access more than n bits of memory on the binary work tape. A computation of M on input x can be represented by a finite list of configurations. Each configuration consists of a state, a position on the input tape, a position on the work tape, and up to n bits of memory that represent the work tape. Now, consider that the work tape was split in half. In other words, we have a left section of $\frac{n}{2}$ cells and a right section of $\frac{n}{2}$ cells. Each configuration can be broken up into a left piece and a right piece. The left piece consists of the state, the position on the input tape, the position on the work tape, and the $\frac{n}{2}$ bits from the left section. 
The right piece consists of the state, the position on the input tape, the position on the work tape, and the $\frac{n}{2}$ bits from the right section. Now, we build a DFA $D_1$ whose states are left pieces and a DFA $D_2$ whose states are right pieces. The alphabet characters are instructions that say which state to go to, how the tape heads should move, and how the work tape's active cell should be manipulated. The idea is that $D_1$ and $D_2$ read in a list of instructions corresponding to a computation of M on input x and together verify that it is valid and accepting. Both $D_1$ and $D_2$ will always agree on where the tape heads are because that information is included in their input characters. Therefore, we can have $D_1$ verify that the instruction is appropriate when the work tape position is in the left piece and $D_2$ verify when in the right piece. In total, there are at most $poly(n) \cdot 2^{n/2}$ states for each DFA and at most $poly(n)$ distinct alphabet characters. By the initial assumption, it follows that we can solve intersection non-emptiness for the two DFA's in $poly(n) \cdot 2^{(\delta n /2)}$ time. You might find this helpful: https://rjlipton.wordpress.com/2009/08/17/on-the-intersection-of-finite-automata/ CNF-SAT is solvable using $k+O(\log(n))$ bits of memory where k is the number of variables. The preceding construction can be used to show that if we can solve intersection non-emptiness for two DFA's in $O(n^{\delta})$ time, then we can solve CNF-SAT in $poly(n) \cdot 2^{(\delta k/2)}$ time. Therefore, the simple answer holds. Comments, corrections, suggestions, and questions are welcomed. :)
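For reference, the quadratic baseline mentioned in the question (breadth-first search over the product automaton) can be sketched as follows. The encoding of an NFA as a (starts, accepts, delta) triple is a hypothetical convention of this sketch, and epsilon-transitions are not handled:

```python
from collections import deque

def intersection_nonempty(nfa1, nfa2):
    """BFS over the product automaton: O(|M1| * |M2|) states and edges.
    Each NFA is a triple (starts, accepts, delta), where delta maps
    (state, symbol) to a set of successor states."""
    starts1, accepts1, delta1 = nfa1
    starts2, accepts2, delta2 = nfa2
    alphabet = {a for (_, a) in delta1} | {a for (_, a) in delta2}
    frontier = deque((p, q) for p in starts1 for q in starts2)
    seen = set(frontier)
    while frontier:
        p, q = frontier.popleft()
        if p in accepts1 and q in accepts2:
            return True  # some word reaches accepting states in both NFAs
        for a in alphabet:
            for p2 in delta1.get((p, a), ()):
                for q2 in delta2.get((q, a), ()):
                    if (p2, q2) not in seen:
                        seen.add((p2, q2))
                        frontier.append((p2, q2))
    return False
```

The result above says that beating this product construction by a polynomial factor would have the stated consequences for SETH.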
{ "domain": "cstheory.stackexchange", "id": 3075, "tags": "ds.algorithms, automata-theory, regular-language, nondeterminism" }
Nim game in Haskell implementing optimal strategy
Question: I was inspired by Stas Kurlin's Nim game to write my own. I'm new to Haskell, and quite unfamiliar with monads, do notation, and -- in general -- functional design patterns. In the game of nim, two players begin with a number of piles of stones of various sizes. Each player moves by taking a (non-zero) number of stones from one pile. The winning player is the one who takes the last stone (i.e. the one who makes every pile size identically zero). In my game, a NimPosition is a Map from Word64s to Word64s, where the keys are distinct pile sizes, and the values are the number of piles with that size. The user interacts with the game by entering space-separated pile sizes, which are then parsed into a list of Word64s, and these Word64s are converted to a NimPosition using the fromList function. The goal of this Map implementation is to ensure that each NimPosition has a unique representation without making the user have to think too hard about how to enter a position during play. However, I'm not too sure Data.Map is necessary; it makes more sense to me now to have a NimPosition be a list of Word64s, and ensure that each NimPosition is unique by having fromList be a sort function. The function nextMove (which I realize now is not a terribly descriptive name) calculates the optimal move to make from a given NimPosition. In the case that the bitwise-xor (aka nim-sum) of all the pile sizes isn't zero, the optimal play is the (not necessarily unique) move that makes the nim-sum zero. If the nim-sum is already zero, every move makes it non-zero, so there is no optimal move. (In this case nextMove reduces the size of the largest pile by one; I don't have any good reason why, except that it probably makes it inconvenient for the human opponent, who must calculate the nim-sum to play optimally, but probably can't calculate the bitwise-xor of a list of large integers as fast as she could a list of smaller integers.)
(See this) Like I said, I'm unfamiliar with Haskell and design patterns in general. But this is my first Haskell program of any length, but I guess ya gotta start somewhere. GitHub import qualified Data.Bits as Bit import qualified Data.Map as Map import Data.Word (Word64) import Data.List import Data.Char data NimPosition = NimPosition (Map.Map Word64 Word64) deriving (Eq) -- A NimPosition is constructed from a map from Word64 to Word64. The -- keys correspond to the distinct pile sizes, and the values -- correspond to the number of piles with that size. data Player = Human | Computer data GameState = Game { player :: Player , position :: NimPosition } data Bit = Bit Bool deriving (Eq, Ord) data Binary = Binary [Bit] deriving (Eq, Ord) insertWithCounts :: Word64 -> Map.Map Word64 Word64 -> Map.Map Word64 Word64 -- Insert an Word64 into a map as a key. If that Word64 is already present -- in the map as a key, then increase the value by 1. If the Word64 is -- not already present, give it the default value of 1. insertWithCounts pileSize oldMap = Map.insertWith (\_ y -> y + 1) pileSize 1 oldMap fromList :: [Word64] -> NimPosition -- Construct a NimPosition from a list of Word64, where each Word64 is a -- pile. fromList xs = NimPosition (foldr insertWithCounts Map.empty xs) toList :: NimPosition -> [Word64] -- Convert a NimPosition into a list of Word64, where each Word64 in the list -- corresponds to a pile. toList (NimPosition position) = let pileSizes = Map.keys position pileQtys = Map.elems position pileLists = zipWith replicate (map fromIntegral pileQtys) pileSizes in foldr1 (++) pileLists instance Show NimPosition where show = unwords . map show . 
toList instance Show GameState where show (Game Human position) = "Computer's play....=> " ++ show position ++ "\n" ++ "Your turn..........=> " show (Game Computer position) = "" toBit 0 = Bit False toBit _ = Bit True instance Show Bit where show (Bit False) = "0" show (Bit True ) = "1" toBitList :: Integral a => a -> [Bit] toBitList 0 = [] toBitList n = let (q, r) = n `divMod` 2 in (toBit r) : toBitList q toBinary :: Integral a => a -> Binary toBinary n = (Binary . toBitList) n instance Show Binary where show (Binary bitList) = concat $ (map show) . reverse $ bitList positionSum :: NimPosition -> Word64 -- Compute the bitwise xor of the pile sizes. positionSum position = foldr1 (Bit.xor) (toList position) winning :: NimPosition -> Bool -- According to Bouton's theorem, a position in nim is winning if the -- bitwise exclusive or of the pile sizes is exactly zero. winning position = (positionSum position == 0) losing :: NimPosition -> Bool losing position = (sum . toList) position == 1 terminal :: NimPosition -> Bool terminal position = (sum . toList) position == 0 findNumWithLeadingBit :: [Word64] -> Maybe Word64 findNumWithLeadingBit xs | maxBinaryLengthIsUnique = lookup maxBinaryLength lengthValueAlist | otherwise = Nothing where binaryExpansions = map (show . 
toBinary) xs binaryLengths = map length binaryExpansions lengthValueAlist = zip binaryLengths xs maxBinaryLength = maximum binaryLengths numsWithMaxBinaryLength = filter (== maxBinaryLength) binaryLengths maxBinaryLengthIsUnique = length numsWithMaxBinaryLength == 1 isValidMove :: NimPosition -> NimPosition -> Bool isValidMove prevPosition nextPosition = let prevPiles = toList prevPosition nextPiles = toList nextPosition pilesNotInPrevPosition = nextPiles \\ prevPiles pilesNotInNextPosition = prevPiles \\ nextPiles in case (pilesNotInNextPosition, pilesNotInPrevPosition) of (originalSize:[],resultantSize:[]) | resultantSize < originalSize -> True | otherwise -> False _ -> False nextMove :: NimPosition -> NimPosition nextMove prevPosition = if winning prevPosition then let prevList = (reverse . toList) prevPosition nextList = (head prevList - 1) : (tail prevList) in fromList nextList else let prevList = toList prevPosition in case findNumWithLeadingBit prevList of Just bigPile -> fromList (newPile:otherPiles) where otherPiles = delete bigPile prevList newPile = foldr1 (Bit.xor) otherPiles Nothing -> head possibleMoves where remainingPiles = zipWith delete prevList (repeat prevList) remainingNimSums = map (foldr1 Bit.xor) remainingPiles candidateLists = zipWith (:) remainingNimSums remainingPiles candidateMoves = map fromList candidateLists possibleMoves = filter (isValidMove prevPosition) candidateMoves readIntListFromString :: String -> [Word64] readIntListFromString input = case readIntFromString input of (Nothing, _) -> [] (Just intRead, remainder) -> intRead : (readIntListFromString remainder) readIntFromString :: String -> (Maybe Word64, String) readIntFromString string = let (_, newString) = span (isSpace) string (intString, remainder) = span (isNumber) newString numberRead = case null intString of True -> Nothing False -> Just (read intString) in (numberRead, remainder) getIntList :: IO [Word64] getIntList = do line <- getLine let intListRead = 
readIntListFromString line in case null intListRead of True -> do putStrLn "Parse error: can't read list of integers" getIntList False -> return intListRead getNimPosition :: IO NimPosition getNimPosition = do intList <- getIntList return $ fromList intList getValidNimPosition :: NimPosition -> IO NimPosition getValidNimPosition oldPosition = do newPosition <- getNimPosition case isValidMove oldPosition newPosition of False -> do putStrLn "Player error: not a valid position" getValidNimPosition oldPosition True -> return newPosition takeTurns :: Maybe GameState -> IO (Maybe GameState) takeTurns Nothing = do putStrLn "Game Over!"; return Nothing takeTurns (Just currentState) = let currentPosition = position currentState in do (putStr . show) currentState case (losing currentPosition) || (terminal currentPosition) of True -> takeTurns Nothing _ -> case player currentState of Computer -> let computersNextMove = nextMove $ position currentState nextState = currentState { player = Human, position = computersNextMove} in takeTurns $ Just nextState Human -> do playersNextMove <- getValidNimPosition $ position currentState let nextState = currentState { player = Computer , position = playersNextMove} in do takeTurns $ Just nextState data YesNo = Yes | No getYesOrNo :: IO (YesNo) getYesOrNo = do input <- getLine case input of "yes" -> return Yes "y" -> return Yes "no" -> return No "n" -> return No _ -> do putStr "Please enter 'yes' or 'no': "; getYesOrNo introduceGame :: IO () introduceGame = putStrLn "Welcome to Nim! To get started, enter your initial position, e.g. '1 3 5'" main = do introduceGame putStr "Initial position => " startingPosition <- getNimPosition let initialGameState = Just Game { player = Computer , position = startingPosition } in takeTurns initialGameState putStr "Would you like to continue? 
(y/n): " shouldContinue <- getYesOrNo case shouldContinue of Yes -> main No -> do putStrLn "Goodbye!"; return () Answer: Some ideas: For Bit and Binary use newtype rather than data to get rid of data's run-time overhead. Instead of custom Bits you could use the Data.Bits instance of Integer. This would simplify or remove a lot related code. As you noted, for NimPosition you could either use just a list, or even a multi-set. For findNumWithLeadingBit function maximumBy seems to be useful. Or perhaps even more simplified, something like (untested) where withLengths = map (id &&& (length . show . toBinary)) xs maxBinaryLength = maximum . map snd $ withLengths numsWithMaxBinaryLength = filter ((== maxBinaryLength) . snd) withLengths maxBinaryLengthIsUnique = length numsWithMaxBinaryLength == 1 Rather than if it's often more readable to use guards, for example: nextMove prevPosition | winning prevPosition = ... | otherwise = ... Code that tries various options, to eventually find one that matches some criteria, can be often nicely expressed using the [] or Maybe monad using MonadPlus functions. Package monadplus has more useful functions, for example mfromList. It's strongly recommended to include types for all top-level functions. Otherwise nice program! I also like that you meaningfully named variables, this really helps reading the code.
{ "domain": "codereview.stackexchange", "id": 21043, "tags": "beginner, game, haskell" }
Alternating split of a given singly linked list
Question: Write a function AlternatingSplit() that takes one list and divides its nodes to make two smaller lists a and b. The sublists should be made from alternating elements in the original list. Example: if the original list is 0->1->0->1->0->1, then one sublist should be 0->0->0 and the other should be 1->1->1. This code is attributed to geeksforgeeks. I'm looking for code review, optimizations and best practices. Specifically, please let me know if the way I have used a private constructor in code is acceptable within coding standards, and if not, an alternative. Why I don't extend or reuse: I am prepping for interviews, and interviewers explicitly want you to code, in my experience. I request the reviewer to not insist on reusing, as I am aware in real life reusability is the right approach. This does not work in interviews. Why don't I use a Util class instead nesting method inside linked list? That is because I need the Node to be an internal data structure. If I made a Util class, it would have no access to internal data structure and perform operations on the node's pointers. final class AlternateSplitData<T> { private final AlternateSplit<T> evenLL; private final AlternateSplit<T> oddLL; public AlternateSplitData(AlternateSplit<T> evenLL, AlternateSplit<T> oddLL) { this.evenLL = evenLL; this.oddLL = oddLL; } public AlternateSplit<T> getEvenLL() { return evenLL; } public AlternateSplit<T> getOddLL() { return oddLL; } } public class AlternateSplit<T> { private Node<T> first; // is private constructor the right thing to do ? 
private AlternateSplit(Node<T> first) { this.first = first; } public AlternateSplit(List<T> items) { add(items); } private void add(List<T> items) { Node<T> prev = null; for (T item : items) { Node<T> node = new Node<>(item); if (first == null) { first = prev = node; } else { prev.next = node; prev = node; } } } private static class Node<T> { private Node<T> next; private T item; Node(T item) { this.item = item; } } public AlternateSplitData<T> alternate() { Node<T> temp = first; Node<T> oddhead = null; Node<T> odd = null; Node<T> evenhead = null; Node<T> even = null; int count = 0; while (temp != null) { if (count % 2 == 0) { if (evenhead == null) { evenhead = temp; even = temp; } else { even.next = temp; even = temp; } } else { if (oddhead == null) { oddhead = temp; odd = temp; } else { odd.next = temp; odd = temp; } } count++; temp = temp.next; } if (even != null) { even.next = null; } if (odd != null) { odd.next = null; } return new AlternateSplitData<T>(new AlternateSplit<T>(evenhead), new AlternateSplit<T>(oddhead)); } // size of new linkedlist is unknown to us, in such a case simply return the list rather than an array. public List<T> toList() { final List<T> list = new ArrayList<>(); if (first == null) return list; for (Node<T> x = first; x != null; x = x.next) { list.add(x.item); } return list; } @Override public int hashCode() { int hashCode = 1; for (Node<T> x = first; x != null; x = x.next) hashCode = 31*hashCode + (x.item == null ? 
0 : x.item.hashCode()); return hashCode; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; AlternateSplit<T> other = (AlternateSplit<T>) obj; Node<T> currentListNode = first; Node<T> otherListNode = other.first; while (currentListNode != null && otherListNode != null) { if (currentListNode.item != otherListNode.item) return false; currentListNode = currentListNode.next; otherListNode = otherListNode.next; } return currentListNode == null && otherListNode == null; } } public class AlternateSplitTest { @Test public void testEvenLength() { AlternateSplit<Integer> as1 = new AlternateSplit<>(Arrays.asList(0, 1, 2, 3, 4, 5)); AlternateSplitData<Integer> list1 = as1.alternate(); assertEquals(new AlternateSplit<Integer>(Arrays.asList(0, 2, 4)), list1.getEvenLL()); assertEquals(new AlternateSplit<Integer>(Arrays.asList(1, 3, 5)), list1.getOddLL()); } @Test public void testOddLength() { AlternateSplit<Integer> as2 = new AlternateSplit<>(Arrays.asList(0, 1, 2, 3, 4)); AlternateSplitData<Integer> list2 = as2.alternate(); assertEquals(new AlternateSplit<Integer>(Arrays.asList(0, 2, 4)), list2.getEvenLL()); assertEquals(new AlternateSplit<Integer>(Arrays.asList(1, 3)), list2.getOddLL()); } @Test public void testSingleElement() { AlternateSplit<Integer> as3 = new AlternateSplit<>(Arrays.asList(0)); AlternateSplitData<Integer> list3 = as3.alternate(); assertEquals(new AlternateSplit<Integer>(Arrays.asList(0)), list3.getEvenLL()); assertEquals(new AlternateSplit<Integer>(Collections.EMPTY_LIST), list3.getOddLL()); } } Answer: Having a list whose type is AlternateSplitData<T> makes it totally impractical to use. It's a singly linked list. I don't care that it came from an alternating splitting process. I just want it to interoperate with any other code that works with linked lists. alternate() seems to contain a lot of code for my taste. 
In particular, the tail termination at the end is easy to overlook but still needed in some form: after redistributing the nodes, one of the two tails still points into the other sublist. It can be written once, at the end. I suggest naming variables like oddTail for clarity, and adhering to Java capitalization conventions:

public AlternateSplitData<T> alternate() {
    Node<T> oddHead = null, oddTail = null, evenHead = null, evenTail = null;
    if (first != null) {
        evenHead = evenTail = first;
        if (first.next != null) {
            oddHead = oddTail = first.next;
        }
    }
    if (oddHead != null) {
        boolean isEven = false;
        for (Node<T> temp = oddHead.next; temp != null; temp = temp.next) {
            if ((isEven = !isEven)) {
                evenTail = (evenTail.next = temp);
            } else {
                oddTail = (oddTail.next = temp);
            }
        }
    }
    if (evenTail != null) { evenTail.next = null; }
    if (oddTail != null) { oddTail.next = null; }
    return new AlternateSplitData<T>(new AlternateSplit<T>(evenHead), new AlternateSplit<T>(oddHead));
}
{ "domain": "codereview.stackexchange", "id": 9082, "tags": "java, linked-list, interview-questions" }
Linear Probing in Python
Question: About to get back into coding, so I implemented linear probing in Python. How did I do? Anything unnecessary? Am I using "global" right? I originally wanted to use an array with a fixed length, but couldn't get it to work. list = ["x", "x", "x", "x", "x", "x", "x"] state = 0 def main(): #tests linearProbing(1) linearProbing(1) linearProbing(1) print(list) def linearProbing(x): global state global list c = x % len(list) while state != 1: if list[c] == "x": list[c] = x state = 1 elif c == 0: c = len(list) - 1 else: c -= 1 state = 0 if __name__ == '__main__': main() Answer: Am I using "global" right? It would be better to not use global at all, by wrapping this in a class, with state and list as attributes, so an instance could reference them with self.state, for example. How did I do? state is used as a boolean. So it should be a boolean value, set to False instead of 0 and True instead of 1. list is not a good name for a variable, as it shadows a built-in with the same name. A quick tip: you can write ["x", "x", "x", "x", "x", "x", "x"] more simply as ["x"] * 7. In any case, duplicating "x" in multiple places (first when you initialize list, and then later in linearProbing) is not great. It would be better to put the value "x" in a variable, say FREESPOT = "x", and reuse that everywhere.
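Pulling the answer's suggestions together, a minimal class-based sketch might look like this (the class name, table size, and the True/False return value are illustrative choices, not part of the original code):

```python
FREESPOT = "x"  # sentinel marking an empty slot, as the answer suggests

class LinearProbingTable:
    def __init__(self, size=7):
        # avoid shadowing the built-in `list`, and avoid globals entirely
        self.slots = [FREESPOT] * size

    def insert(self, x):
        """Probe downward (wrapping around) from x % size until a free slot is found."""
        c = x % len(self.slots)
        for _ in range(len(self.slots)):  # at most one full pass; no `state` flag needed
            if self.slots[c] == FREESPOT:
                self.slots[c] = x
                return True
            c = len(self.slots) - 1 if c == 0 else c - 1
        return False  # table is full

table = LinearProbingTable()
table.insert(1)
table.insert(1)
table.insert(1)
```

The bounded for-loop replaces both globals: there is no shared `state` flag to reset, and a full table terminates cleanly instead of looping forever.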
{ "domain": "codereview.stackexchange", "id": 13770, "tags": "python, circular-list" }
Deciding whether a list of sum types is homogeneous
Question: I ran into a problem recently where I had to decide whether a set of sum types was homogeneous (all the same). In itself, this is a pretty simple problem. However, there were a few complicating factors: There exists a type within the types (call it Id) that is ignored in the comparisons. There exists a type (call it Mix) that immediately makes the set non-homogeneous, even if all the types are Mix. The extension of this problem was, given a list of types, is there a single type that can be removed that will make the rest of the types homogeneous (excepting, of course, the Mix type). For example, if we define our type as: data Foo = Bar | Baz | Quux | Id | Mix deriving (Eq, Ord, Show) then [Bar, Bar, Bar] is homogeneous, [Bar, Bar, Baz] would be homogeneous if Baz were removed. Further, [Bar, Bar, Id] is homogeneous, and [Mix, Mix, Mix] is not. Finally, [Bar, Bar, Mix] contains a Mix type, and hence would not be homogeneous even if this type were removed. import Data.List as List import Data.Map as Map data Foo = Bar | Baz | Quux | Id | Mix deriving (Eq, Ord, Show) -- Is a list of the above sum types homogeneous? -- Note that Id is ignored in the comparisons, and Mix immediately -- makes this False. homogeneous :: [Foo] -> Bool homogeneous [] = True homogeneous [x] = if x == Mix then False else True homogeneous (x:xs) = all isEqNotMix xs where isEqNotMix y = ((y == x || y == Id) && y /= Mix) -- Find a type that would make the set homogeneous if it were removed. -- Note that if we have an already homogeneous set, then removing the -- single homogeneous type from the list will leave it being homogeneous. -- The final part of the tuple returns whether it was a homogeneous list or not. -- Note that in the case of no such type existing, the result is -- (Mix, Mix, False). 
homogeneousButOne :: [Foo] -> (Foo, Foo, Bool) homogeneousButOne xs = case xs of [] -> (Id, Id, True) [x] -> (x, x, homogeneous [x]) xs@(x:_) -> if homogeneous xs then (x, x, True) else (fst mixed, snd mixed, False) where mixed = mixedCase $ List.filter (/= Id) xs -- Deal with the case where the list is not homogeneous in homogeneousButOne. mixedCase :: [Foo] -> (Foo, Foo) mixedCase xs | Mix `elem` keys = (Mix, Mix) | Map.size fooMap == 2 = decideTwo fooMap | otherwise = (Mix, Mix) where fooMap = List.foldl (\acc x -> Map.insertWith (+) x 1 acc) Map.empty xs keys = Map.keys fooMap single = Map.filter (== 1) fooMap multiple = Map.filter (> 1) fooMap firstKey = head . Map.keys secondKey = head . tail . Map.keys -- Only case where we may not have (Mix, Mix) is if there were -- only two types found, and neither of them was `Mix`. decideTwo mp = case Map.size single of -- Both types exist > 1 times 0 -> (Mix, Mix) -- One type exists once, it is the type that can be removed 1 -> (firstKey multiple, firstKey single) -- Both types only exist once (e.g. [Bar, Baz]) 2 -> (firstKey single, secondKey single) Any improvements to this code are welcome (excepting the names of the sum types: I realise Foo, Bar, Baz and Quux aren't great, but in the original problem, the names were no more descriptive). Answer: homogeneous In this line homogeneous [x] = if x == Mix then False else True you have something like if expr then False else True which is the same as not expr. That is: homogeneous [x] = x /= Mix In fact the whole line could be just: homogeneous [Mix] = False because the line homogeneous (x:xs) = ... would do the right thing for the cases that are not covered by homogeneous [Mix]. homogeneousButOne A Map is overkill in this case, since the problem can be solved in linear time looping through the list of Foos. 
Here's one possible implementation: homogeneousButOne :: [Foo] -> (Foo, Foo, Bool) homogeneousButOne foos | sansId == [] = (Id, Id, True) | homogeneous sansId = (ref, ref, True) | elem Mix sansId = (Mix, Mix, False) | all (== other) others = (ref, other, False) | otherwise = (Mix, Mix, False) where sansId = filter (/= Id) foos (ref : remaining) = sansId (other : others) = filter (/= ref) remaining Note however that in this implementation I valued clarity over efficiency.
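For readers less familiar with Haskell, here is a rough Python transcription of the answer's linear-time logic (the string encoding of the constructors is illustrative; like the answer's sansId version, it ignores Id anywhere in the list):

```python
ID, MIX = "Id", "Mix"

def homogeneous(foos):
    # Id is ignored in the comparisons; any Mix makes the list non-homogeneous.
    core = [f for f in foos if f != ID]
    return MIX not in core and len(set(core)) <= 1

def homogeneous_but_one(foos):
    """Mirror of the answer's guards: returns (reference type, removable type, was homogeneous)."""
    sans_id = [f for f in foos if f != ID]
    if not sans_id:
        return (ID, ID, True)
    ref, remaining = sans_id[0], sans_id[1:]
    if homogeneous(sans_id):
        return (ref, ref, True)
    if MIX in sans_id:
        return (MIX, MIX, False)
    others = [f for f in remaining if f != ref]  # nonempty here: list is not homogeneous
    other = others[0]
    if all(f == other for f in others[1:]):
        return (ref, other, False)
    return (MIX, MIX, False)
```

As in the Haskell version, clarity is valued over efficiency (the list is traversed several times).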
{ "domain": "codereview.stackexchange", "id": 7854, "tags": "haskell" }
Collision between photon and an isolated electron
Question: Consider an isolated electron, and a photon of energy 'hf' that suffers a collision with it. Will the electron absorb all the energy of the photon, or only some of it? If so, does the photon after the collision have a lower frequency than before? Answer: When a photon scatters off a free electron, it can either scatter elastically, i.e. with no change in the frequency of the outgoing photon, or undergo what is called Compton scattering. The distribution of the outgoing photon can be calculated to first order using the expansion in Feynman diagrams. The outgoing photon will have a smaller frequency, as the incoming photon transferred part of its momentum and energy to the electron. For the history see here.
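As a quick numerical illustration (my own addition, not part of the answer), the Compton formula λ' − λ = (h/mₑc)(1 − cos θ) shows directly that the scattered photon's wavelength grows, i.e. its frequency drops:

```python
import math

h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

lambda_compton = h / (m_e * c)   # Compton wavelength of the electron, ~2.43 pm

def scattered_wavelength(lam, theta):
    """Wavelength after Compton scattering through angle theta (radians)."""
    return lam + lambda_compton * (1 - math.cos(theta))

# A 500 nm photon backscattered (theta = pi) gains two Compton wavelengths,
# so f' = c / lambda' is lower than the incoming frequency:
lam_out = scattered_wavelength(500e-9, math.pi)
```

For visible light the shift is tiny (picometres against hundreds of nanometres), which is why the effect is usually demonstrated with X-rays or gamma rays.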
{ "domain": "physics.stackexchange", "id": 43969, "tags": "particle-physics, photons, electrons, collision" }
NP-Completeness of Finding Minimum Automaton, in Gold's paper
Question: I have been investigating "learning automata", and I came across references to Gold's papers several times: "Complexity Of Automaton Identification From Given Data", and "System Identification Via State Characterization", and I am confused now. I also checked Is finding the minimum regular expression an NP-complete problem? but I am still confused. Let us call Problem 1: Finding a minimum regular expression for a given regular expression which has the same language L. Let us call Problem 2a: Finding an automaton which can explain some data D. Let us call Problem 2b: Finding an automaton with minimum states for Problem 2a. In the "Complexity of Automaton.." paper, it states that "However, it is the objective of this paper to show that the construction of a minimum state automaton which agrees with given data is, in general, computationally difficult." Then later, in Theorem 2: "Minimum Automaton Identification Question Qmin(D, n). Given: data D, positive integer n. Question: Is there a finite automaton with n states which agrees with D?" Now, I can easily create an automaton which fits the data. Let's call it G. I can also minimize G to get G*, and count the number of states. As I wrote the whole question, I realized that if the answer to the following statement is "NO", then I do need further help. If the answer is "YES", elaboration is appreciated, but a simple yes will clear lots of confusion. Given an arbitrary automaton G for language L, if we use some polynomial algorithm to minimize it to G* (and P =/= NP), can there exist another smaller automaton S < G (in number of states) such that the language of S is exactly L? If the answer is "NO", please proceed. If the answer is "YES", feel free to elaborate. Any help is appreciated. First question: Gold assumes that a blackbox can be "identified in the limit". However, I find that a regular expression such as ab(ab*)* does not seem to apply here, since we don't know the number of states in the FSM we are trying to identify. 
Hence, this makes me believe Problem 1 is different from Problems 2a and 2b. Correct? Second question: Given the data, I can easily create an automaton, then minimize it. Based on what I've seen (and yet don't understand), the minimal automaton G* is not necessarily the "minimum automaton which can explain the data". There is confusion about what "explain the data" means. If I minimize G to get G* using some known polynomial algorithm, wouldn't that be an "exact" fit for the data? Meaning all the data is accepted by G* and G*'s language is only all the data? Third question: Depending on how the second question is answered, if there could be another smaller automaton which is difficult to obtain, then that should imply that polynomial minimization algorithms may produce different "minimal" automata, based either on the algorithm, which I find absurd, or on the "initial constructed automaton". I think question 3 is absurd, which is forcing me to believe that Gold meant something specific with "agrees with given data". Any help is appreciated. Answer: After more and more digging, here is what I found: First reference: Introduction to Automata Theory, Languages, and Computation 3rd Edition. Specifically, theorem 4.26 indicates that the provided algorithm constructs a minimum state machine M for a DFA A such that M has as few states as any DFA equivalent to A. This was my original understanding, so the answer is "NO" to the pre-question. Second reference: Grammatical Inference: Learning Automata and Grammars First question: This book was an excellent source for my understanding. It turns out, Gold's construction does not necessarily "EXACTLY" fit the data. 
The data may be a subset of the language of the inferred automaton, and this is a key reason as to why, eventually, identification in the limit can happen, so ab(ab*)* can be learnt, eventually, but we can never tell we learned it until we wait forever (I see this as a halting problem sort of issue). Second question: The confusion occurred from the meaning of "consistent". The construction used in the paper has "holes" where an "experiment" is not known to accept or reject a string at that "point in time". It is difficult to summarize a whole chapter here, but what ends up happening is "consistent" does not mean "EXACT". Eventually, identification in the limit produces an "EXACT" fit for "historical" and "future" experiments. The reason why identification is NP-Hard is due to the holes in the algorithm. At some point a value is assumed to fill the holes, but based on the value, some inconsistency may arise, and a parsing tree is returned instead (which has a large number of states). The other option is to use backtracking to implement nondeterminism and try different values (or run this in parallel...) to find the reduced automaton. Since the algorithm is nondeterministic and can find the correct consistent minimum automaton, finding the minimum automaton from data, in the limit, is NP-hard. Some Final Remarks If we have a "complete" finite log, we can find the minimum automaton in P. Then one may ask, what if we generate lots of experiments before we attempt to find an automaton, and assume that the log is complete? The answer is: If the log was indeed complete, then an algorithm in P can find the minimum state. Otherwise, the algorithm did not find the minimum state. I believe this makes the question "Is the log complete?" the real problem here, because if we could answer that with a YES, then identification of automata from finite data would be possible (not in the limit...), which is not possible. 
One way to circumvent this is to add domain knowledge, such as having an upper bound on the number of possible states.
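The uniqueness claim from the first reference (every DFA for a language minimizes to the same state count) can be checked with a small partition-refinement sketch. This is a simplified Moore-style algorithm written purely for illustration; it assumes a complete transition function and that all states are reachable:

```python
def minimal_state_count(states, alphabet, delta, accepting):
    """Partition refinement: number of states of the minimal DFA
    equivalent to the given (complete, fully reachable) DFA."""
    block = {s: (s in accepting) for s in states}  # initial split: accepting vs not
    while True:
        # Signature of a state: its own block plus the blocks reached on each symbol.
        sig = {s: (block[s],) + tuple(block[delta[s][a]] for a in alphabet)
               for s in states}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        refined = {s: ids[sig[s]] for s in states}
        if len(set(refined.values())) == len(set(block.values())):
            return len(set(block.values()))  # partition is stable
        block = refined

# Two different DFAs for "even number of a's" both minimize to 2 states:
four = minimal_state_count([0, 1, 2, 3], "a",
                           {s: {"a": (s + 1) % 4} for s in range(4)}, {0, 2})
six = minimal_state_count(list(range(6)), "a",
                          {s: {"a": (s + 1) % 6} for s in range(6)}, {0, 2, 4})
```

Both calls return 2, illustrating the "NO" answer to the pre-question: the minimal equivalent DFA's state count does not depend on which automaton you started from or which (correct) minimization algorithm you ran.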
{ "domain": "cstheory.stackexchange", "id": 5203, "tags": "automata-theory, regular-language, minimization" }
Question about drop tower deceleration
Question: I have a really basic question. If we release an object from a drop tower $700\, \mathrm{m}$ tall, the object free falls for $500\,\mathrm{m}$ and reaches a velocity of $99\mathrm{\frac{m}{s}}$. For the last $200\mathrm{m}$, a deceleration is applied. The deceleration required to reduce the velocity to zero is calculated to be about $-24 \mathrm{\frac{m}{s^2}}$. I can't get my head around why we don't need to add the gravitational acceleration $9.81\mathrm{\frac{m}{s^2}}$ on top of this? I understand we need $-24\mathrm{\frac{m}{s^2}}$ to bring an object traveling at $99\mathrm{\frac{m}{s}}$ to zero velocity over a $200\,\mathrm{m}$ distance, but this is if the object is travelling at a constant speed of $99\mathrm{\frac{m}{s}}$ to start with. In our case, the object is still affected by the gravitational acceleration, don't we need to account for that? $-24 - 9.81 =$ a total deceleration of $-34\mathrm{\frac{m}{s^2}}$? Answer: The first thing to note is that your question is in the realm of kinematics (the branch of mechanics concerned with the motion of objects without reference to the forces which cause the motion) so the fact that there is a gravitational force is totally irrelevant when answering the question. Given the assumption that all the accelerations are constant and down is defined as the positive direction then using the constant acceleration kinematic equations the answer to the question is that the acceleration of the body is $-24 \,\rm m\, s^{-2}$. If you wish to apply Newton's second law $F=ma$ to the problem then you need to proceed as follows with down as positive noting that the acceleration of the body is still $-24 \,\rm m\, s^{-2}$. $$F+mg = m\left ( -24 \,\rm m\, s^{-2} \right)$$ where $F$ is the force on the body and $m$ is the mass of the body and all the forces which are acting on the body are on the left hand side of the equation. 
With $g = 10 \,\rm m\, s^{-2}$ this results in the force acting on the body equal to $m(-24-10)\,{\rm N} = -34\,m \,\rm N$. This is the (upward) force which is required to stop the body in the gravitational field of the Earth. With no other forces present, if a force of this magnitude had been applied to a body of mass $m$ then its acceleration could have been found by applying Newton's second law: $m \, (-34) = m \,a \Rightarrow a = -34 \,\rm m\, s^{-2}$
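A quick numerical check of the kinematics (constant acceleration assumed throughout, down taken as positive, my own numbers with g = 9.81 rather than the rounded 10 used above):

```python
import math

g = 9.81           # m/s^2, free-fall acceleration
free_fall = 500.0  # m, distance fallen before braking
stop_dist = 200.0  # m, braking distance

v = math.sqrt(2 * g * free_fall)   # speed entering the brakes, ~99 m/s
a = -v**2 / (2 * stop_dist)        # net acceleration needed to stop, ~-24.5 m/s^2

# Newton's second law with down positive: F + m*g = m*a, so per unit mass
force_per_kg = a - g               # ~-34.3 N/kg, the upward braking force
```

The kinematic answer is the net acceleration (about -24.5 m/s²); gravity only enters when you ask what force produces it, which is where the -34 figure comes from.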
{ "domain": "physics.stackexchange", "id": 52914, "tags": "homework-and-exercises, newtonian-gravity, kinematics, acceleration" }
What are the antisymmetric terms in $\sigma_{mn}$ in the expression for the Fisher information?
Question: Given a parameter-dependent density operator $\hat\rho^\lambda$ and its spectral decomposition $\{\rho_n^\lambda, |\psi_n^\lambda\rangle\}$, Eq. $(17)$ from this review shows that one can compute its quantum Fisher information (QFI) as \begin{align} H(\lambda)&:=H_C(\lambda) +H_Q(\lambda) \\ &=\sum_n\frac{(\partial_\lambda\rho_n^\lambda)^2}{\rho_n^\lambda} + \left(2\sum_{n\ne m}\sigma_{mn}\left|\langle \psi_m^\lambda|\partial_\lambda\psi_n^\lambda\rangle\right|^2\right) \end{align} where the matrix $\sigma_{mn}$ is rather mysteriously defined as $$\sigma_{mn}:=\frac{(\rho_n^\lambda-\rho_m^\lambda)^2}{\rho_m^\lambda+\rho_n^\lambda}+\text{any antisymmetric terms}. $$ When the eigenvectors do not depend on $\lambda$, then $H(\lambda)=H_C(\lambda)$, which is just the classical FI associated with the distribution of the eigenvalues. I have two questions regarding this expression, including one that is perhaps a little vague. What are these antisymmetric terms, and why aren't they written explicitly? Suppose a state $\hat\rho^\lambda$ is such that $H_Q(\lambda)=0$, either because the eigenvectors do not depend on $\lambda$ or because the scalar product vanishes, as is the case for the state $p_\lambda|\Psi_\lambda\rangle\langle \Psi_\lambda|+(1-p_\lambda)|0\rangle\langle 0|$, where $|0\rangle$ is the empty state with no photons and $|\Psi_\lambda\rangle\propto \int d\mathbf r \ A_\lambda(\mathbf r)\hat a^\dagger(\mathbf r)|0\rangle$ is a general one-photon state. What does it mean for a state to have purely 'classical' Fisher information? Do we need, for instance, entanglement between two or more photons in order for $H_Q$ to be nonzero? Cross-posted on physics.SE Answer: Antisymmetric means that $\sigma_{nm}=-\sigma_{mn}$. 
Since the sum ranges over all values of $m$ and $n$, adding an antisymmetric term adds something proportional to $$|\langle \psi_m^\lambda|\partial_\lambda\psi_n^\lambda\rangle|^2-|\langle \psi_n^\lambda|\partial_\lambda\psi_m^\lambda\rangle|^2.$$ Now, these two quantities are equal in magnitude and so their difference is zero, as can be seen by taking the derivative \begin{align}0=\partial_\lambda \delta_{mn}=\partial_\lambda \langle \psi_m^\lambda|\psi_n^\lambda\rangle= \langle \partial_\lambda\psi_m^\lambda|\psi_n^\lambda\rangle+\langle \psi_m^\lambda|\partial_\lambda\psi_n^\lambda\rangle \\ \Rightarrow \langle \partial_\lambda\psi_m^\lambda|\psi_n^\lambda\rangle=-\langle \psi_m^\lambda|\partial_\lambda\psi_n^\lambda\rangle\\ \Rightarrow |\langle \partial_\lambda\psi_m^\lambda|\psi_n^\lambda\rangle|=|\langle \psi_m^\lambda|\partial_\lambda\psi_n^\lambda\rangle|. \end{align} (We also need that $|\langle \partial_\lambda\psi_m^\lambda|\psi_n^\lambda\rangle|=|\langle \psi_n^\lambda|\partial_\lambda\psi_m^\lambda\rangle|$.) Since any antisymmetric term adds 0 to the Fisher information, we can include as many extra antisymmetric terms as we like (obviously the easiest thing to do is to not include any antisymmetric term). For a state to have purely "classical" Fisher information, it can still be as quantum mechanical as you like. I can write $$\rho(\lambda)=\sum_k p_k(\lambda) |\psi_k\rangle\langle \psi_k|$$ with basis states $|\psi_k\rangle$ that are as entangled as I like. Even a boring state like $\rho=|\psi\rangle\langle \psi|$ that is independent from the parameter can be a maximally entangled pure state in any dimension! The question about having "quantum" additions to the Fisher information is not so much about the state's nonclassical properties; it instead depends on how the parameter dependence is encoded in the state. 
As you have seen, this simply means that the eigenbasis of the state must change with the parameter, otherwise there are only classical contributions to the Fisher information.
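The identity $\langle \partial_\lambda\psi_m|\psi_n\rangle = -\langle \psi_m|\partial_\lambda\psi_n\rangle$ is easy to check numerically for a toy $\lambda$-dependent orthonormal basis (a rotation in the plane, with finite-difference derivatives; purely illustrative, not from the answer):

```python
import math

def psi1(lam):  # first vector of a lambda-dependent orthonormal basis
    return [math.cos(lam), math.sin(lam)]

def psi2(lam):  # second vector, orthogonal to psi1 for every lambda
    return [-math.sin(lam), math.cos(lam)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def d(f, lam, h=1e-6):
    """Central finite-difference derivative of a vector-valued function."""
    up, down = f(lam + h), f(lam - h)
    return [(a - b) / (2 * h) for a, b in zip(up, down)]

lam = 0.7
lhs = inner(d(psi1, lam), psi2(lam))  # <d psi1 | psi2>
rhs = inner(psi1(lam), d(psi2, lam))  # <psi1 | d psi2>
# lhs == -rhs up to finite-difference error, so the two magnitudes agree
# and any antisymmetric addition to sigma cancels in the QFI sum
```

Here the vectors are real, so the inner product is just a dot product; for complex states one would conjugate the bra.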
{ "domain": "quantumcomputing.stackexchange", "id": 4867, "tags": "information-theory, quantum-optics, quantum-fisher-information" }
What is "the famous 3R experiment" for quark colours?
Question: This page says: "The famous $3R$ experiment also suggests that whatever force binds the quarks together has 3 types of charge (called the 3 colors)." Google seems to think that the $3R$ experiment isn't at all famous! Does it have a different name nowadays? Can anyone tell me what it is? Answer: The experiment is the measurement of the total cross section for electron-positron annihilation into hadronic final states, $e^{-}+e^{+}\rightarrow\, {\rm hadrons}$. Because the hadrons interact strongly, the details of the sub-processes that make up the inclusive $e^{-}+e^{+}\rightarrow\, {\rm hadrons}$ are very difficult to work out quantitatively. However, the total for all external hadron states tells us quite a bit of useful information, and precise measurements of the total rate of hadron production have been goals at successive high-energy electron-positron colliders for decades. The rate for the process with hadrons in the final state is compared to the total rate for annihilation into muons, $e^{-}+e^{+}\rightarrow\mu^{-}+\mu^{+}$ (which is easy to calculate, because it involves only leptons at tree level and is totally dominated by a single QED diagram). The ratio of rates is the quantity $R$: $$R=\frac{\sigma(e^{-}+e^{+}\rightarrow\, {\rm hadrons})}{\sigma(e^{-}+e^{+}\rightarrow\mu^{-}+\mu^{+})}.$$ From the quark model predictions of hadron charges and masses, it was inferred that the up and down(/strange) quarks had charges $+\frac{2}{3}|e|$ and $-\frac{1}{3}|e|$, respectively. This was further confirmed (although with a lot of noise) by deep inelastic scattering experiments, which saw pointlike charged quarks when high-energy electrons were scattered off nucleons. With those charges known, it was possible to calculate the expected rate for $e^{-}+e^{+}\rightarrow u+\bar{u}$, $e^{-}+e^{+}\rightarrow d+\bar{d}$, and $e^{-}+e^{+}\rightarrow s+\bar{s}$ (in the ultrarelativistic limit of QED). 
The process of $e^{-}+e^{+}\rightarrow\, {\rm hadrons}$ begins with a very hard (large momentum exchange) $e^{-}+e^{+}\rightarrow q+\bar{q}$ reaction, which controls the overall cross section; after this, there are soft final state interactions that cause hadrons or hadron jets to coalesce out of the initial quark and antiquark products. So measuring $e^{-}+e^{+}\rightarrow\,{\rm hadrons}$ is a good experimental proxy for measuring the total rate of $e^{-}+e^{+}\rightarrow q+\bar{q}$. The total rate is approximately proportional to the number of quark types that can be created, and when $R$ was measured, it was about three times larger than would naively have been expected. The reason is that there are three kinds of up quark, three kinds of down quark, etc. These three kinds are the colors. That factor of three is what gives the $3R$ measurement its name. The state of the art of electron-positron annihilation experiments at the time when these questions became important and interesting is explained in detail in R. F. Schwitters, K. Strauch, Annual Reviews of Nuclear Science 26, 89–149 (1976).
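The factor of three is easy to see numerically. Below the charm threshold only u, d, s are accessible and the tree-level prediction is R = N_c Σ q², with N_c the number of colors (a small sketch, using the quark-model charges quoted above):

```python
# Quark charges in units of |e|, as inferred from the quark model
charges = {"u": 2/3, "d": -1/3, "s": -1/3}

def R(n_colors):
    """Tree-level sigma(hadrons)/sigma(muons) = N_c * sum of squared quark charges."""
    return n_colors * sum(q**2 for q in charges.values())

# Without color, R would be 2/3; with three colors it is 2,
# three times the naive expectation -- the value seen below the charm threshold.
```

Above each new quark threshold the sum gains a term, which is why the measured R steps upward with energy.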
{ "domain": "physics.stackexchange", "id": 82515, "tags": "particle-physics, quarks, color-charge" }
Why was water freezing almost instantaneously when shaking a bottle that spent the night outside during a frosty night?
Question: Due to the forecasted frost last night, I placed, yesterday evening, some 1.5l standard PET bottles filled up to 90% with warm tap water (+60°C) close to some vegetables that I wanted to protect in my garden. The temperature dropped to roughly -3 ~ -4°C last night. This morning I went to see how it went. Of the 10 bottles, one was filled half with water, half with ice. In all the other bottles, the water was still in a liquid state. So I decided to empty them. Here comes the interesting thing: I uncap the bottles, flip them upside down to empty the water, and give them a shake/twist so that it empties faster, and I noticed that some poorly structured ice (it looked more like melting snow, actually) was forming almost instantaneously. Curious, I decided to give the next one a strong shake while emptying it, and well, the mix of ice forming at that moment reminded me of the texture of the icy fruit smoothies one can find during summertime. How do you explain that the water, when still, was 100% liquid, and that when I shook the bottle, ice formed in no time? I mean, for me, shaking = adding energy, so it should warm the water, not cool it to the point it will form ice? From this experiment I guess not, and that instead, it more or less 'helps' the remaining energy of the 0°C water to dissipate, forming ice super quickly. Am I right in my reasoning? I'll redo the experiment the next night, trying to take a photo to add it here. Edit 1: I've placed the same bottles, with the same warm tap water in them, at the same position as yesterday. The upcoming night might even be colder... I'll try to take some photos tomorrow morning. Edit 2: Okay, so this morning it wasn't as impressive as yesterday but it happened again: Fig.1 When you start to empty the bottle, only cold, clear water comes out of it. Shaking a little, then Fig.2 Ice particles have attached to the inside of the bottle as it is being emptied. 
Fig.3 Unstructured ice has accumulated on the ground. It's a totally "wild" and uncontrolled experiment so it's not as impressive as the videos linked by Philip hereunder. Here are the videos from which the screenshots were extracted: https://vimeo.com/534346291 https://vimeo.com/534347556 I also made a tiny additional observation, but this is probably entirely due to chance: because I filled the bottles with warm water yesterday, they were a little depressurized this morning, having kind of a global concave shape. I had shaken them all before opening, but the water stayed clear. It's only once I opened them, and emptied them, that ice formed. Answer: Congratulations, it sounds to me like you've just observed supercooled water! There are many videos on YouTube that describe this phenomenon, and explain it much better than I could; see here for a Veritasium video where this is discussed, for example. The basic idea is this: when water freezes it forms ice, which is a nice regular crystalline structure. However, ice crystals need a nucleation site, a point where the crystal can start to grow, before they can actually form. In normal situations, water usually has some impurities which can serve as such nucleation sites, around which the crystal starts to grow and ice starts to form. However, if you use very pure water, there are no such "natural" nucleation points and so there is a chance that the water molecules want to form ice, but can't quite get around to it. As a result, the liquid is trapped in a "metastable" state well below its freezing point, but such a state has a precarious stability that can easily be disturbed. Shaking the bottle is one way to disturb this stability, as it gives a couple of the water molecules the chance to align in just the right way to start the crystallisation process, and once it's done, it is energetically favourable for the system to form ice, so all the other water molecules hop on as well. 
As a result, you would usually see the crystal "growing" in one direction until all the water becomes ice. Of course, you don't need to shake it: you could just introduce a different type of nucleation point as can be seen in this very pretty video and it would produce the same results, or alternatively, you could very carefully pour the supercooled water on top of an ice cube and form a sort of ice sculpture (see this video from The Action Lab). Supercooled water has a higher energy than ice at the same temperature, meaning that when the water transitions to ice it actually releases some heat (the latent heat of fusion), which warms the ice/water mixture back up towards the freezing point. Incidentally, this process is also observed in other materials, notably sodium acetate, which is used in making heat packs like this one. I've never actually managed to see supercooled water myself, though I've tried quite a few times. I'm quite surprised that you were able to get it from tap water, since it usually requires very pure water. I hope you're able to reproduce the experiment!
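A rough back-of-the-envelope estimate (my own numbers, not from the answer) of how much of the water can freeze once nucleation starts: the latent heat released must warm the rest of the water back to 0 °C, so only a fraction roughly c·ΔT/L solidifies:

```python
c_water = 4186.0     # J/(kg K), specific heat of liquid water (approximate)
L_fusion = 334000.0  # J/kg, latent heat of fusion of ice (approximate)

def frozen_fraction(supercool_degrees):
    """Fraction of supercooled water that freezes before the mix reaches 0 C."""
    return c_water * supercool_degrees / L_fusion

f = frozen_fraction(4.0)  # water supercooled to -4 C: only ~5% turns to ice
```

That small fraction is consistent with the slushy, "melting snow" texture observed here rather than a solid block of ice.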
{ "domain": "physics.stackexchange", "id": 77990, "tags": "thermodynamics, water, phase-transition, ice, phase-diagram" }
Log the alive time of a rosnode?
Question: I am currently working on a rosnode whose purpose is to generate CSV files about certain information. One of the pieces of information I am seeking is to know how much time a rosnode has been alive. I know that using ros::master::getNodes you can get a vector of the active nodes, but is it possible to know how long it has been turned on? Originally posted by 215 on ROS Answers with karma: 156 on 2016-12-05 Post score: 2 Answer: The ROS master does not track how long a node has been online, and the node list is reported by the nodes themselves, so there are some cases where a node may not be removed from the list because it shuts down improperly (i.e. segfault or runtime exception). You'll probably need to write your own code which monitors which nodes are online (using the node ping method), and keeps track of the PID for each process; changes in the PID will indicate that the process has been restarted or replaced by a new node with the same name. The rosprofiler package keeps track of which nodes are running, their PID and their CPU time, but doesn't track overall node uptime. It might be a good starting point for the tool you're trying to build. Originally posted by ahendrix with karma: 47576 on 2016-12-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by 215 on 2016-12-05: Thats exactly what i need it for.. aren't there any other way.. like using a timer or something like that?. The node i monitor is a control_node.. to measure some performance, i need to know much time it has been on. Comment by ahendrix on 2016-12-06: Maybe querying the process creation time from the PID is what you're looking for: https://pythonhosted.org/psutil/#psutil.Process.create_time
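Following the answer's suggestion, a minimal uptime tracker might look like the sketch below. It is pure Python for illustration; the node-name-to-PID mapping would come from your own polling loop (pinging nodes and querying PIDs), which is stubbed out here:

```python
import time

class NodeUptimeTracker:
    """Tracks when each (node name, PID) pair was first seen.
    A PID change means the node was restarted, so its timer resets."""

    def __init__(self):
        self._seen = {}  # node name -> (pid, first_seen_monotonic)

    def update(self, alive):
        """alive: dict mapping node name -> current PID, from your polling loop."""
        now = time.monotonic()
        for name, pid in alive.items():
            if name not in self._seen or self._seen[name][0] != pid:
                self._seen[name] = (pid, now)  # new node, or restarted node
        for name in list(self._seen):
            if name not in alive:
                del self._seen[name]  # node went offline
    def uptime(self, name):
        """Seconds since this node (with its current PID) was first seen."""
        pid, since = self._seen[name]
        return time.monotonic() - since
```

Calling update() periodically (say, once a second) and logging uptime() at each tick is enough for the CSV use case; for the true process start time, the psutil create_time approach from the comments avoids missing uptime accrued before your monitor started.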
{ "domain": "robotics.stackexchange", "id": 26402, "tags": "rosnode" }
Is a "kernel" just the quantum equivalent of classical SVMs?
Question: I'm confused about the relationship between kernel methods and SVM methods used in quantum machine learning. Sometimes the two seem to be used interchangeably, but often I'll see them both in the same sentence. I understand what an SVM is in the classical context. So is a kernel just the quantum equivalent? Or is there such a thing as a 'classical' kernel and a 'quantum' SVM? Answer: Consider a simple implementation of a Support Vector Machine (SVM) that finds a hyperplane (defined by its normal vector $w$) that maximally separates vectors $\{v_1, \dots, v_m\}$ according to their labels $\{y_1, \dots, y_m\}$, where each $y$ is either $-1$ or $1$. For simplicity we'll assume that such a $w$ exists (i.e. the vectors $\{v_k\}$ are linearly separable, or that the hard margin SVM is realizable). Training the SVM results in a linear prediction function, which assigns a label to some new vector $v_i$ according to $$\tag{1} f(v_i) = \text{sign}(\langle w, v_i\rangle) $$ If $\langle w, v_i\rangle$ comes out positive, it means that $v_i$ lives on the half of the plane $w$ where the set of positively-labeled vectors $\{v_k: y_k=+1\}$ also live, and so the best inference we can make is that $v_i$ should also be assigned a positive label. Up to this point I have not explicitly stated what space the vectors $v, w$ live in, but that space should at least have an associated inner product. Without losing much generality we can enforce that $v, w$ live in a Hilbert space to make the following comparison simpler. Then imagine two different applications of this algorithm: Define our dataset $\{v_k\}$ to be a set of $d$-dimensional real vectors $\{x_1, \dots, x_m\} = \mathcal{X}\subset \mathbb{R}^d$, so that we end up doing linear classification with hyperplane $w$ on $\mathcal{X}$ according to the label of each datapoint $x_i$. 
Define our dataset $\{v_k\}$ to be a set of $p$-dimensional complex vectors $\{\phi_1, \dots, \phi_m\} = \mathcal{\Phi}\subset \mathbb{C}^p$, so that we end up doing linear classification with hyperplane $w'$ on $\mathcal{\Phi}$ according to the label of each $\phi_i$. These are still both linear classifiers. But now we make a connection between the two by defining $$\tag{2} \phi_i \equiv \phi(x_i) $$ such that $\phi: \mathbb{R}^d \rightarrow \mathbb{C}^p$ is a feature map that sends each real datapoint $x_i$ in my $d$-dimensional space to some other element $\phi(x_i)$ in my (possibly much higher-dimensional) $p$-dimensional space. The feature map $\phi$ will generally be non-linear. This has the powerful side effect of making application #2 a nonlinear version of application #1, that is $$\tag{3} f'(x_i) = \text{sign}(\langle w', \phi(x_i)\rangle) $$ will result in a decision boundary that is more expressive in our input space $\mathbb{R}^d$ than the corresponding linear decision boundary $f(x_i) = \text{sign}(\langle w, x_i\rangle)$. So to answer some of your questions Why do I see SVM and quantum kernel methods used interchangeably - they should generally not be. A quantum kernel method describes (a) constructing the feature map $\phi$ to map input data $x_i$ into a state $\phi(x_i)$ that lives in quantum state space, typically (but not quite rigorously correct) of the form $$\tag{4} \phi(x) \equiv U(x)|0\rangle $$ for some unitary $U(x)$ that you can run on a quantum circuit, and then (b) processing our input data $\{x_i\}$ by evaluating inner products of the form $\langle \phi(x_i), \phi(x_j)\rangle \equiv k(x_i, x_j)$. This $k$ is the kernel we define with respect to our data, hence quantum kernel method. Running an SVM on our data using this quantum kernel is just the very specific choice of learning a function of the form of Equation (3). 
In general we can decompose $w'$ as $$\tag{5} w' = \sum_{k=1}^m \alpha_k \phi(x_k) $$ and so the classifier (3) can indeed be expressed strictly in terms of $k(x_i, x_j)$. Is a kernel just the quantum equivalent [of a classical SVM]? No! The only thing special about a quantum kernel method is that $k$ is computed (or $\phi$ is constructed) using a quantum computer. There are plenty of classical kernels (Gaussian, polynomial, Laplace, etc.) you can efficiently compute on a classical machine to do nonlinear classification. Implementing an SVM using a quantum kernel just means making a choice not to use one of those out-of-the-box classical kernel functions and instead use a feature map like Equation (4). Furthermore, not all kernel methods involve an SVM; consider Kernel Nearest Neighbors, Kernel Principal Component Analysis, Kernel Spectral Clustering, etc. One could choose to compute $k$ on a quantum computer and then use it as input to one of the other algorithms instead of an SVM and still have something called a 'quantum kernel method'. Is there such a thing as a 'classical' kernel and a 'quantum' SVM? I'm not sure; it's hard for me to imagine what that setup would look like, but quantum kernels are an active area of research so maybe something like this will be proposed.
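As a concrete, purely classical illustration that a kernel method touches the data only through $k(x_i, x_j)$, here is a sketch of a kernel perceptron with a Gaussian (RBF) kernel. (A perceptron is used instead of a full SVM solver only to keep the sketch short; the decomposition of $w'$ in Equation (5) is exactly why training and prediction need only the dual weights $\alpha_k$ and kernel evaluations. All names and parameter choices here are illustrative, not from the answer above.)

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # A classical kernel: k(a, b) = exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, kernel, epochs=20):
    # Dual form: w' = sum_k alpha_k y_k phi(x_k), so phi itself is never needed --
    # only the Gram matrix K[i, j] = k(x_i, x_j).
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if np.sign(alpha * y @ K[:, i]) != y[i]:
                alpha[i] += 1  # mistake-driven update on the dual weights
    return alpha

def predict(x, X, y, alpha, kernel):
    # f(x) = sign(<w', phi(x)>) = sign(sum_k alpha_k y_k k(x_k, x))
    return np.sign(alpha * y @ np.array([kernel(xk, x) for xk in X]))
```

On the XOR labeling of four points, which is not linearly separable in $\mathbb{R}^2$, this classifier fits the data because the RBF feature space makes it separable, mirroring the role the quantum feature map of Equation (4) plays.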
{ "domain": "quantumcomputing.stackexchange", "id": 2599, "tags": "machine-learning, quantum-enhanced-machine-learning, kernel-methods" }
How can single-slit diffraction produce an interference pattern?
Question: How can light passed through a single slit produce an interference pattern similar to that of the double-slit experiment? How does the diffracted wave produce the points of cancellation and reinforcement, if there is only one wave? Answer: One way of understanding this which has always had intuitive appeal to me is the so-called Huygens Principle, which basically states that every point on a wavefront can be considered a point source for a new spherical wave, and that the evolution of the wavefront can be determined by superposing all of these spherical waves at later times. The Wikipedia article that I linked to has some really nice pictures of this. Diffraction effects can then be explained using this principle. Imagine, for example, that you shine light through an extremely small slit, say a slit about the size of the wavelength itself; then when plane waves pass through this slit, the part of the wave that goes through the slit acts as a point source and generates a spherical wave, so the light diffracts. If the slit is larger, however, then the parts of the wavefronts that pass through the slit act as multiple little point sources for spherical waves, and these spherical waves interfere with each other to give an interference pattern. In this way, the diffraction pattern is very much like multiple-slit interference, except instead of multiple slits, the wavefront itself splits into a bunch of adjacent point sources that interfere with each other.
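This Huygens picture can be checked numerically: treat the slit as many closely spaced point sources, superpose their wavelets on a distant screen, and the familiar single-slit pattern appears, with minima near $\sin\theta = m\lambda/a$. A minimal sketch follows; every number in it (wavelength, slit width, screen distance) is an illustrative choice, not from the answer above.

```python
import numpy as np

wavelength = 500e-9                 # 500 nm light (illustrative)
k = 2 * np.pi / wavelength
a = 5e-6                            # slit width: ten wavelengths
L = 1.0                             # slit-to-screen distance in metres
sources = np.linspace(-a / 2, a / 2, 200)    # Huygens point sources across the slit
screen = np.linspace(-0.3, 0.3, 1001)        # positions on the screen

# Superpose a wavelet exp(i k r) / r from every source at every screen point.
r = np.sqrt(L**2 + (screen[None, :] - sources[:, None])**2)
field = np.sum(np.exp(1j * k * r) / r, axis=0)
intensity = np.abs(field)**2
intensity /= intensity.max()

# With these numbers the first minimum is expected near sin(theta) = lambda/a = 0.1.
```

The central maximum and the near-zero intensity around $x \approx 0.1\,$m on the screen are exactly the reinforcement and cancellation the question asks about, arising from one wavefront interfering with itself.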
{ "domain": "physics.stackexchange", "id": 84964, "tags": "double-slit-experiment, diffraction" }
Conditional and unconditional interpretations of mean inter-collision times
Question: This question appeared in a study guide for my graduate level written exam in physics. (It may have been the one from the University of Chicago.) I see that a similar question was asked here What is the average value of time since last collision in Drude model? but the discussion did not focus on the aspect that I'm going to present. A particle in a gas undergoes random collisions with other gas particles. The inter-collision times are exponentially distributed, based on the idea that the number of collisions per unit time is describable as a Poisson stochastic process. Consider a particle at time $t_0$. The distribution of times until the next collision is exponential: $f(t | \tau) = (1/\tau)\exp(-(t-t_0)/\tau)$. Clearly, $\mathbb{E}[t-t_0] = \tau$. But by time reversal symmetry, this also describes the time since the last collision. So which is it? Is the mean time between collisions $\tau$ or $2\tau$? The question always felt like a swindle $-$ in part because I was unable to track down a generally agreed upon answer. (This was long before stackoverflow existed.) My take is that conditional on knowing that the particle just endured a collision, the mean time to the next collision is indeed $\tau.$ But unconditionally, the time since/until the last/next collision is properly $\tau$, so we compute the last-to-next time to be $2\tau.$ The problem is with the exact definition of the term "mean inter-collision time." It can be defined as: 1) unconditionally, "time to next collision," in which case it is $\tau$ (but this makes no reference to the last collision, so perhaaps this definition is non-responsive to the name); 2) conditional on a collision having just occurred, "time to next collision," which is $\tau$; or 3) unconditionally, "time since last collision to next collision," for a randomly selected particle whose past is as unknown as its future, in which case it is in fact $2\tau.$ Is there a consensus on this point here?
Answer: The Poisson process is defined axiomatically as a counting process $N(t)$ for $t>0$ with $N(t=0)=0$, such that: 1. The increments are independent. 2. The number of events in any time interval of length $t$ is Poisson distributed with mean $\lambda t$; that is, $$ \mathbb{P}(N(t)=n)=\mathrm{e}^{-\lambda t}\left(\lambda t\right)^n/n!$$ for $n\in\mathbb{N}_0$. From Conditions #1 and #2, and the fact that the starting point is arbitrary, it should be clear that your first choice (being unconditioned) is the correct answer: $\tau$ is the average time from an arbitrary point to the next event, rather than a mean time between events (or even half that).
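The distinction the question draws, a mean of $\tau$ to the next event from an arbitrary instant versus a length-biased $2\tau$ for the interval straddling that instant (the inspection paradox), is easy to demonstrate by simulation. A sketch, with arbitrary parameter choices:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                          # mean inter-collision time (arbitrary units)
gaps = rng.exponential(tau, size=200_000)
arrivals = np.cumsum(gaps)         # collision times of one particle
T = arrivals[-1]

# Inspect the particle at uniformly random times ("past as unknown as its future").
t0 = rng.uniform(0, T, size=100_000)
nxt = np.searchsorted(arrivals, t0)    # index of the next collision after t0
forward = arrivals[nxt] - t0           # time until the next collision
straddle = gaps[nxt]                   # full gap containing t0 (length-biased)

print(forward.mean())    # ≈ tau: unconditional time to the next collision
print(straddle.mean())   # ≈ 2 * tau: last-to-next time around a random instant
```

Long gaps are more likely to contain a randomly chosen instant, which is why the straddling interval averages $2\tau$ even though every gap is drawn with mean $\tau$.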
{ "domain": "physics.stackexchange", "id": 91068, "tags": "statistical-mechanics, condensed-matter, stochastic-processes, randomness" }
What spectral type of star has an absolute magnitude of exactly 0?
Question: We know that Vega is the star that serves as the zero point for the UBV color scale, and has an apparent magnitude of nearly zero (+0.02). But its absolute magnitude is +0.58, making it rather far from $M=0$. So what spectral type would have an absolute magnitude of $0 \pm0.02$, and are there any stars that satisfy this? Answer: There isn't a one-to-one relationship between spectral type and absolute magnitude. Instead, there is a mean relationship with a fair bit of scatter around it. The reason is that the luminosity of a star of a given effective temperature depends on its composition/metallicity and how far along in its main sequence lifetime it is. Basically, late B-type main sequence stars (say B7/B8V) have an absolute magnitude of about zero. Alternatively there are low mass stars ascending the hydrogen shell burning giant branch (types of about K2-K5 III) that would have an absolute magnitude of zero.
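For reference, the distance modulus connects the two magnitudes quoted in the question: $M = m - 5\log_{10}(d/10\,\mathrm{pc})$. The sketch below plugs in the question's apparent magnitude for Vega together with an assumed parallax of about 130 mas (so $d \approx 7.7$ pc, a value not stated in the text), and lands near the quoted $M \approx +0.58$.

```python
import math

def absolute_magnitude(m, distance_pc):
    # distance modulus: M = m - 5 * log10(d / 10 pc)
    return m - 5 * math.log10(distance_pc / 10)

parallax_arcsec = 0.130        # assumed parallax for Vega, ~130 mas
d = 1 / parallax_arcsec        # distance in parsecs, ~7.7 pc
M_vega = absolute_magnitude(0.02, d)   # apparent magnitude from the question
```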
{ "domain": "astronomy.stackexchange", "id": 4973, "tags": "star, temperature, absolute-magnitude, spectral-type" }
Greedy proof: Correctness versus optimality
Question: I am really confused after surveying a bunch of material online about correctness versus optimality proofs for greedy algorithms. Some websites even use both "correctness" and "optimality" in the same sentence! From my best unconfirmed understanding: the optimality proof uses "greedy stays ahead," where I need to show that the greedy algorithm constructs a solution set that is no worse than the optimal set. The correctness proof utilizes the swapping argument to show that any difference between the output set A and the optimal set OPT can be eliminated by swapping items in the optimal set. Can someone clarify if I must only use the "greedy stays ahead" proof method for the optimality proof and not the correctness proof? And must I use the swapping argument (with contradiction) to show that swapping items in the optimal set does not make it any worse? Greedy stay ahead: http://www.cs.cornell.edu/Courses/cs482/2007sp/ahead.pdf Swapping: http://www.cs.oberlin.edu/~asharp/cs280/handouts/greedy_exchange.pdf (Note that the author states that this proves correctness and ends with proving optimality) Instance where swapping was used to prove optimality, greedy stay ahead used to prove correctness: http://test.scripts.psu.edu/users/d/j/djh300/cmpsc465/notes-4985903869437/notes-5z-unit-5-filled-in.pdf Answer: You can use whatever proof method you want. Proofs aren't even limited to existing patterns such as "greedy stays ahead" and "swapping". Indeed, in some cases, such as the greedy algorithm for maximizing a submodular function over a uniform matroid, the proof consists of adding together a bunch of inequalities expressing the fact that each choice was (greedily) optimal. Usually the proof that a greedy algorithm works compares the greedy solution against an optimal solution, though when proving approximation guarantees, it could be enough to compare the greedy solution to the theoretical maximum (a case in point is the derandomized version of the randomized 3SAT algorithm).
Also, I suspect that "correctness" and "optimality" mean the same thing.
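For concreteness, here is the canonical example both proof styles are usually demonstrated on: greedy interval scheduling by earliest finish time. "Greedy stays ahead" shows the $i$-th greedy finish time is no later than the $i$-th finish time in any optimal solution; the exchange argument swaps an optimal solution's first deviation for the greedy choice without shrinking it. (A sketch of the standard algorithm, not taken from any of the linked handouts.)

```python
def max_nonoverlapping(intervals):
    """Greedy interval scheduling: repeatedly take the compatible interval
    with the earliest finish time ("greedy stays ahead" on finish times)."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:   # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Either proof style certifies the same fact about this code: the returned set is as large as any conflict-free subset of the input.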
{ "domain": "cs.stackexchange", "id": 16042, "tags": "proof-techniques, greedy-algorithms" }