What are some examples of Statistical AI applications?
Question: I believe that statistical AI uses inductive thought processes: for example, inferring a trend from a pattern after training. What are some examples of Statistical AI successfully applied to real-world problems? Answer: There are several examples. One instance of using Statistical AI from my workplace: analyzing customers' behavior and their food-ordering trends, then trying to upsell by recommending dishes they might like to order. This can be done through the Apriori and FP-growth algorithms. We then automated the algorithm, and it improves itself through an Ordered/Not-Ordered metric. Self-driving cars are another example: they use reinforcement and supervised learning algorithms to learn the route and the gradient/texture of the road.
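The Apriori algorithm mentioned above can be sketched in a few lines of Python. This is a minimal illustration of the level-wise frequent-itemset search, not the production system described in the answer; the transaction data and the `min_support` threshold are invented for the example.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent itemset: support} using the Apriori level-wise search."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # level 1: frequent single items
    singletons = {frozenset([x]) for t in transactions for x in t}
    current = {s for s in singletons if support(s) >= min_support}
    frequent = {}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        # candidate generation: join frequent (k-1)-itemsets into k-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # prune: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))}
        current = {c for c in candidates if support(c) >= min_support}
    return frequent

orders = [{"burger", "fries"}, {"burger", "fries", "cola"},
          {"burger", "cola"}, {"fries"}]
itemsets = apriori(orders, min_support=0.5)
```

Frequent pairs like `{"burger", "fries"}` would then drive the recommendations ("customers who order burgers often order fries").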
{ "domain": "ai.stackexchange", "id": 549, "tags": "applications, statistical-ai" }
Description of the stress in an elastic ring
Question: How can one describe, in terms of elasticity, the stress necessary to stretch an elastic ring of Young modulus $E$, rest radius $R_0$, and width $d$? My guess is that in the linear regime the force can be written $F=k(R-R_0)$, with $k=E\frac{4\pi R_0^2}{d}$ for an elastic sphere and $k=E\frac{2\pi R d_1}{d_2}$ for a ring, where $d_1$ is the width perpendicular to the radius and $d_2$ the width in the radial direction. Hence the stress reads, respectively, $\sigma=E(R-R_0)/d$ and $\sigma=E(R-R_0)/d_2$. Is this correct? Answer: The hoop strain is $$\epsilon_{\theta}=\frac{(2\pi R)-(2\pi R_0)}{2\pi R_0}=\frac{R-R_0}{R_0}$$ Hence, the hoop stress is $$\sigma_{\theta}=E\frac{R-R_0}{R_0}$$ If $F$ represents the radial force per unit length applied to the ring (directed radially), then $$F(2R_0)=2\sigma_{\theta}d_1d_2=2E\frac{(R-R_0)}{R_0}d_1d_2$$ or $$F=E\frac{(R-R_0)}{R_0^2}d_1d_2$$
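As a quick numerical check of the final formula, here is a short Python sketch; the function name and the material/geometry values (a rubber-like ring in SI units) are invented for illustration.

```python
def ring_restoring_force(E, R0, R, d1, d2):
    """Radial force per unit length needed to hold the ring at radius R,
    from F = E (R - R0) d1 d2 / R0^2 (the result derived above)."""
    return E * (R - R0) * d1 * d2 / R0**2

# e.g. E = 1 MPa, rest radius 1 cm stretched to 1.2 cm,
# 1 mm x 1 mm cross-section (all SI units)
F = ring_restoring_force(E=1e6, R0=0.01, R=0.012, d1=1e-3, d2=1e-3)
```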
{ "domain": "physics.stackexchange", "id": 57682, "tags": "elasticity" }
Auctioneer calling "Going once… going twice… sold!"
Question: I feel like the if-else combination isn't the most optimal and I'm also not sure if I need a loop var Auktionator = function(){ var rennt = false; this.versteigern = function(objekt){ if(rennt === false){ for(var i = 1; i<=4; i++){ if(i === 1){ setTimeout(function(){console.log(objekt + " zum ersten")},1000*i); } else if(i === 2){ setTimeout(function(){console.log(objekt + " zum zweiten")},1000*i); } else if(i === 3){ setTimeout(function(){console.log(objekt + " zum dritten")},1000*i); } else if(i === 4){ setTimeout(function(){console.log(objekt + " verkauft!")},1000*i); } rennt = true; } } } } Answer: Definitely not. The loop is useless (and un-optimizing) here. As far as I can see, the index i is only used to calculate some constants' values. Realise that your original loop has the same effect as dropping the loop and the if clauses and hard-coding the values computed from i: setTimeout(function(){console.log(objekt + " zum ersten")},1000); setTimeout(function(){console.log(objekt + " zum zweiten")},2000); setTimeout(function(){console.log(objekt + " zum dritten")},3000); setTimeout(function(){console.log(objekt + " verkauft!")},4000); Or, another valid alternative could be taking advantage of the resemblance among the actions and generalizing them by using data: var messages=[" zum ersten", " zum zweiten", " zum dritten", " verkauft!"]; This would be more compact and flexible, since it allows adding more messages without changing the logic. Update I recommend RobH's answer, which is based upon arrays, and works fine.
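For completeness, the data-driven alternative hinted at above can be finished like this. A sketch following the original names; as in the question, `rennt` guards against starting a second auction while one is running:

```javascript
var Auktionator = function () {
    var rennt = false;
    this.versteigern = function (objekt) {
        if (rennt) { return; }
        rennt = true;
        // the schedule is pure data: message i fires after (i + 1) seconds
        var messages = [" zum ersten", " zum zweiten", " zum dritten", " verkauft!"];
        messages.forEach(function (msg, i) {
            setTimeout(function () { console.log(objekt + msg); }, 1000 * (i + 1));
        });
    };
};
```

Adding a fifth call now means adding one array entry, with no change to the logic.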
{ "domain": "codereview.stackexchange", "id": 16843, "tags": "javascript, timer" }
Strategies for cable vibration: eigenvalues are suitable?
Question: This is a theoretical question. It is known that in "classic" vibration analysis natural frequencies can be found by solving an eigenvalue problem, from the undamped vibration equation like $\mathbf{M}\mathbf{\ddot{u}} + \mathbf{K}\mathbf{u} = \mathbf{0}$ where $\mathbf{u}$ is the displacement vector. But, with reference to this situation I started to wonder if this type of analysis is not correct here because $\mathbf{K}$ (stiffness) of the cable is not constant but $\mathbf{u}$-dependent (equal to zero when $\mathbf{u}$ is a displacement of compression). So I would like to ask if someone could suggest another strategy, or show why the eigenvalue analysis can also be used in this case. Thank you all. Answer: Mathematically, an eigenvalue analysis assumes the modal displacements are infinitesimally small, so the change in stiffness in the cable caused by the vibration is a second order effect which can be ignored. This very-small-amplitude vibration is a good place to start understanding the behaviour of the structure - for example, look at how the frequency depends on the angle of the beam to the horizontal. Of course if you are interested in large amplitude vibrations, the change in stiffness during each cycle of oscillation can not be ignored, but the vibrations are not sinusoidal anyway. Really, you can only simulate this by a time-marching transient dynamics analysis, updating the stiffness at each time step. The stiffness will vary along the length of the cable because its self weight will cause the tension to change along its length, and also because it is not straight! In simple situations (for example oscillations of a simple pendulum) you can investigate the large amplitude behaviour analytically, but that assumes you know the deformed shape of the structure (which is trivial for a simple pendulum, because the massless string is always straight so long as it remains in tension).
You might be able to formulate the large-amplitude motion of your structure the same way as for a pendulum, if you assume the beam is rigid and the cable is massless and stretches along its length. You then will have only one degree of freedom, measured by the change in angle of the beam (or the cable) from its equilibrium position. Even for large amplitude oscillations of a pendulum, the period involves a complete elliptic integral of the amplitude, which can only be evaluated numerically (e.g. by a power series approximation). For anything more complicated, attempting an analytic solution is unlikely to give much quantitative information - though a phase-plane plot may tell you some interesting things qualitatively.
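The small-amplitude eigenvalue analysis described above takes only a few lines with NumPy. The 2-DOF mass and stiffness matrices below are invented purely for illustration; for a real cable model $\mathbf{K}$ would come from the tangent stiffness at the tensioned equilibrium.

```python
import numpy as np

# toy 2-DOF system: M u'' + K u = 0  (units arbitrary)
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# generalized eigenproblem K v = omega^2 M v, solved via M^{-1} K
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K))
omega = np.sort(np.sqrt(eigvals.real))   # natural frequencies, rad/s
```

For the large-amplitude case you would instead time-march the nonlinear equations, recomputing `K` (zero in compression) at every step, exactly as the answer describes.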
{ "domain": "engineering.stackexchange", "id": 1633, "tags": "vibration, eigenvalue-analysis" }
Patent-free Interest Point Detector?
Question: Possible Duplicate: Free alternatives to SIFT/SURF for commercial use There are some very good algorithms for interest point detection/description, like: SIFT (Scale-Invariant Feature Transform) SURF (Speeded-Up Robust Features) MOPs (Multi-Scale Oriented Patches) Regrettably, at least the first two are patented and cannot be used commercially without paying huge fees to the authors. I am currently negotiating about the third one, which is patented as well. In a search for a patent-free algorithm, I found there is a multitude of patents like this one. Huh?! Is there anything usable at all, then? Maybe at least Harris-Laplace or some basic detector can be used. But how to be sure? Answer: (IANAL...) If you only want detectors: Harris is probably OK. According to http://users.fmrib.ox.ac.uk/~steve/susan/, SUSAN is out of patent now. I've not seen any claims that FAST is patented. Descriptors are harder... Histograms of Oriented Gradients might be worth considering - again I've not seen any claims of patent on the original form.
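To show how little machinery the basic Harris detector needs, here is a NumPy-only sketch. The 3x3 smoothing window, the constant k = 0.05, and the synthetic test image are illustrative choices; this is a bare-bones sketch of the classic response formula, not a substitute for a library implementation.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(S) - k * trace(S)^2, where S is the
    locally averaged structure tensor of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))

    def box3(a):
        # 3x3 box smoothing via nine shifted views of an edge-padded copy
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# synthetic image: a bright square whose top-left corner sits at (10, 10);
# the response should peak near that corner and be negative along the edges
img = np.zeros((20, 20))
img[10:, 10:] = 1.0
R = harris_response(img)
r, c = np.unravel_index(np.argmax(R), R.shape)
```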
{ "domain": "dsp.stackexchange", "id": 201, "tags": "local-features" }
$g^{\mu\nu}g_{\alpha\beta}$=?
Question: Good day. I am self-studying general relativity, and I have a quick question on the inverse metric tensor: I am aware that $g^{\mu\nu}g_{\nu\alpha}=\delta^{\mu}_{\alpha}$. I do not know, however, what $g^{\mu\nu}g_{\alpha\beta}$ is. I thought I could perhaps manipulate the expression as follows: \begin{align} g^{\mu\nu}g_{\alpha\beta} & = g_{\alpha\mu}g_{\beta\nu}g^{\mu\nu}g^{\mu\nu} \\ & = \delta^{\nu}_{\alpha}\delta^{\mu}_{\beta} \\ \end{align} However, a similar procedure generates $\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta}$ So perhaps I am completely off-base. Help would be appreciated. Answer: It's nothing in particular — or, in other words, it doesn't simplify like the contraction $g^{\mu\nu}g_{\nu\alpha}=\delta^{\mu}_{\alpha}$ does. It's simply a tensor of type ${2 \choose 2}$.
{ "domain": "physics.stackexchange", "id": 59449, "tags": "metric-tensor, relativity" }
Are there physical properties that can be used to differentiate stainless steel from copper in a home environment?
Question: So the backstory is that I purchased a reusable drinking straw that is copper coloured, but is advertised to be stainless steel. That got me thinking about whether I could be sure it was one or the other without having access to a laboratory. I saw this answer that mentions that the conductivity of steel is much lower, but I don't think my voltmeter could really measure something this small. The other effect I know of would be the Hall effect (nice because it's a tube, so it's easy to demonstrate), but I wasn't able to find what the predicted behaviour is for a steel tube. My question is then: are there any at-home/readily available ways to differentiate copper vs. stainless steel? Answer: Take advantage of the large difference in thermal conductivity between copper and stainless steel (approximately $400$ and $16$ $\mathrm{Wm^{-1}K^{-1}}$ respectively). If you put one end of a metal rod into contact with something held at a constant high or low temperature $T_C$, you would expect the other end to asymptotically approach that temperature like: $$ T(t) \sim T_C + A e^{-\lambda t} $$ where $$ \lambda = \frac{k}{\rho c_p L^2} $$ where $k$ is the thermal conductivity, $\rho$ is the mass density, $c_p$ is the specific heat capacity and $L$ is the length of the metal rod. Assuming the straw is approximately $0.1$ m long, you should get $\lambda$ values of approximately $0.011$ $\mathrm{s^{-1}}$ for copper and approximately $0.0004$ $\mathrm{s^{-1}}$ for stainless steel. A simple experiment would consist of putting one end of your straw into contact with a container of ice water, or maybe a pot of boiling water, while you carefully measure and record the temperature at the other end as a function of time. (If the straw is longer than $0.1$ m, immerse the extra length of the straw into the water.)
The best thing would be if you had some kind of digital thermometer that allows you to log data, but it could probably also be done with an analog thermometer, a clock, and a notebook. After taking the measurements it should be relatively easy to determine whether the temperature difference decreases with a half-life of one minute or half an hour. There are many potential error sources in a simple experiment like this, but since the difference between copper and stainless steel is more than an order of magnitude, it should be relatively easy to tell them apart despite these errors. The experiment could also be carried out for other rods that are known to be made of copper or stainless steel (or of some other metal), to validate that the experiment gives approximately the expected result for them.
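The decay-rate estimate above is a one-line calculation. The density and specific heat values below are typical handbook numbers I am assuming (they are not given in the answer), with $L = 0.1$ m as suggested:

```python
def decay_rate(k, rho, cp, L):
    """lambda = k / (rho * cp * L^2): the exponential rate at which the far
    end of a rod of length L approaches the bath temperature."""
    return k / (rho * cp * L**2)

# typical handbook values, SI units (assumed for illustration)
lam_copper = decay_rate(k=400.0, rho=8960.0, cp=385.0, L=0.1)
lam_steel = decay_rate(k=16.0, rho=8000.0, cp=500.0, L=0.1)
ratio = lam_copper / lam_steel
```

This reproduces the answer's figures of roughly 0.011 s⁻¹ for copper and 0.0004 s⁻¹ for stainless steel, a factor of about 30 that should be unmistakable in a kitchen experiment.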
{ "domain": "physics.stackexchange", "id": 55398, "tags": "material-science, physical-chemistry, density, home-experiment, metals" }
The PPI Chain is composed of 3 steps: Does only the 2nd step release energy?
Question: The PPI chain is \begin{align} \rm\ ^1H + {}^1H &\rm\rightarrow {}^2D + e^+ + \nu_e \tag 1 \\ \rm\ ^2D + {}^1H &\rm\rightarrow {}^3He + \gamma \tag 2 \\ \rm\ ^3He + {}^3He &\rm\rightarrow {}^4He + {}^1H + {}^1H \tag 3 \end{align} I know that, overall, the PP chain should release ~26 MeV. I can see clearly that (2) releases energy in the form of a photon but I cannot say the same about (1) and (3). Does it mean that the ~26 MeV comes solely from (2)? Thank you for your help! Answer: No, all three steps release energy. The easy way to see this is to define the mass excess for a particle, generally a neutral atom at rest in its ground state, as $\Delta = M - A\rm\,amu$. This mass difference can equally be measured in energy units, $\Delta_E = \Delta_m c^2$. From a good reference table, we have $$ \begin{array}{c|c} \text{particle} & \Delta\text{ (MeV)} \\\hline \gamma & 0.0 \\ \nu & \lesssim10^{-6} \\ \rm e^\pm & 0.511 \\ \rm n & 8.0713 \\ ^1\rm H & 7.2889 \\ ^2\rm H & 13.1357 \\ ^3\rm He & 14.9312 \\ ^4\rm He & 2.4249 \end{array} $$ The total mass excesses before and after your reactions are \begin{align} 2\times 7.2889 &\to 13.1357 + 0.511^* + 0.511 + 10^{-6} & \text{change} &= -0.42\rm\,MeV \\ 13.1357 + 7.2889 &\to 14.9312 + 0 & \text{change} &= -5.49\rm\,MeV \\ 2\times 14.9312 &\to 2.4249 + 2\times7.2889 & \text{change} &= -12.86\rm\,MeV \end{align} In all cases the energy difference is carried away as the kinetic energies of the various decay products, subject to the constraint that, in the rest frame of the reaction, the total momentum is zero. For the massive particles, where the kinetic energy is approximately $K_i=p^2/2m_i$, that means the lighter products carry more of the energy than the heavy products. For the reaction with the photon, where momentum conservation makes the energies obey $$K_\gamma/c = \sqrt{2m_\mathrm{He}K_\mathrm{He}}$$ the trend of having the massless particle carry away most of the energy continues.
$^*$ Note: since the mass excesses are tabulated for neutral atoms, while the participants in proton-proton fusion are completely ionized, it might be better to write \begin{align} \rm \Delta\left( {}^1H^+ + {}^1H^+ \right) &= \rm 2\times(7.2889 - 0.511)\, MeV \\ \rm \Delta\left( {}^2H^+ + e^+ + \nu_e \right) &= \left( [13.1357 - 0.511] + 0.511 + 10^{-6} \right)\rm\,MeV \end{align} The result is the same, though.
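The bookkeeping is easy to script. Below is a sketch using the tabulated mass excesses (MeV), with the first reaction done in the bare-nucleus convention from the footnote; the dictionary keys are just illustrative names.

```python
delta = {  # mass excesses of neutral atoms, MeV (from the table above)
    "H1": 7.2889, "H2": 13.1357, "He3": 14.9312, "He4": 2.4249,
}
m_e = 0.511  # electron/positron mass, MeV

# (1) p + p -> d + e+ + nu : strip the atomic electrons, then add the positron
q1 = 2 * (delta["H1"] - m_e) - ((delta["H2"] - m_e) + m_e)
# (2) d + p -> 3He + gamma : electron counts balance, use neutral atoms directly
q2 = (delta["H2"] + delta["H1"]) - delta["He3"]
# (3) 3He + 3He -> 4He + p + p : electron counts balance here too
q3 = 2 * delta["He3"] - (delta["He4"] + 2 * delta["H1"])
```

Running the numbers reproduces the three energy releases quoted in the answer (about 0.42, 5.49 and 12.86 MeV).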
{ "domain": "physics.stackexchange", "id": 45364, "tags": "nuclear-physics, astrophysics, stars, stellar-evolution, nucleosynthesis" }
Why isn't the time between apogee and perigee constant?
Question: I've assumed since the translational speed of the moon along its orbit undergoes the same boosts and reductions over its orbital course, the time between the apogee and the perigee (and respectively, the time between the perigee and the apogee) should be constant. However, according to this table, it seems they both range from 12 days to 16 days. Does proximity of the moon/earth system to the Sun cause the moon to go faster along its orbit as well? The in-between times do not appear to increase during July/August (i.e. near Aphelion), so I guess that's not a contributing factor. So, what is causing the moon to reach apogee/perigee faster at one time and slower at other times? edit: I've just found some info which may explain this phenomenon: "The tidal effect of the Sun's gravitational field increases the eccentricity when the orbit's major axis is aligned with the Sun-Earth vector or, in other words, the Moon is full or new." So, I would assume whenever the Moon is full or new, its orbit changes its shape to some extent, which causes the time spent between apogee and perigee to be inconstant. Am I correct? And is this the only reason? Answer: The Moon's orbit would be nearly Keplerian were it not for the perturbing effects of the Sun. The time from perigee to perigee and from apogee to apogee wouldn't change, and the time from perigee to apogee would be exactly half the orbital period. What you are seeing are perturbing effects of the Sun on the Moon's orbit. If you push the site you found, http://www.timeanddate.com/astronomy/moon/distance.html, to just beyond 2015, you'll see that only 24.69 days transpire between the last lunar perigee of 2015 and the first lunar perigee of 2016. That's almost three days less than the average value of 27.55455 days. There's something curious going on here! Below is a plot of the times between successive perigees (red curve) and successive apogees (blue curve).
Note that the perigean anomalistic month is subject to significantly greater variations than is the apogean anomalistic month. Also note that the two curves have the same frequency (about 7 months) but are almost 180 degrees out of phase. These variations are the direct cause of the phenomenon you observed. You'll see a marked difference between perigee-to-apogee versus apogee-to-perigee when the time from perigee to successive perigee is changing fastest. It's the Sun that makes these variations occur. The Sun makes the lunar apse line (the line from Earth to the Moon at perigee) precess. This, in combination with the way tidal forces themselves vary, makes the Moon's orbit more eccentric than normal when the lunar apse line (the line from perigee to apogee) aligns with the syzygies (new moon or full moon), and less eccentric than normal when the apse line is more or less orthogonal to the line from the Earth to the Sun. The perigean months are longest when perigee more or less coincides with a new or full moon, and shortest when perigee more or less coincides with the moon in its quarter phases.
{ "domain": "physics.stackexchange", "id": 20273, "tags": "orbital-motion, solar-system, moon, celestial-mechanics" }
Positioning items to maximize separation
Question: Say we want to place n items on the real line. Let us denote the position of item i by $p_i$. We have interval constraints on the positions, i.e. we are given $l_i, r_i$ such that $l_i \le p_i \le r_i$. My problem is: given a specific item s, how do I compute the maximum gap possible to the right of s? By maximum gap to the right of s I mean the distance between s and the next item to its right. Mathematical Description: More formally, given $s \in [n]$ and $(l_i,r_i)$ for $i\in[n]$, and writing $t$ for the item placed immediately to the right of $s$, I want to find $$f(s) = \max_{p_1,\dots,p_n} p_t - p_s$$ subject to \begin{align*} 1)\,& l_i \le p_i \le r_i\, \forall\,i \in [n] \\ 2)\,& \forall\,i\ne s,t, \text{ either } p_i \le p_s \text{ or }p_i \ge p_t \text{ (there is no item between s and t)} \end{align*} In the absence of the second constraint the problem would have been a linear program. The second constraint makes it difficult. I know how to model this question as an integer program (see https://bit.ly/2Jhf7kJ), but I am interested in an actual algorithm. Answer: Given a specific item $s$, how do I compute the maximum gap possible to the right of $s$? By maximum gap to the right of $s$ I mean the distance between $s$ and the next item to its right. This is much more useful than the mathematical description. First note that items that are to the left of $s$ don't hurt us in any way, so if it is possible to put an item to the left of $s$, do so. Then we note that if an item can't go to the left of $s$ we want it as far to the right as possible. The conclusion? Each item is always at either its left or right boundary, with the exception of $s$. And once you know the position of $s$ you'll also know all the other positions. So just loop over each interesting position of $s$, discard all items that can go on the left, and then choose the closest item on the right. Out of these closest items, whichever is furthest is your answer (or rather, its associated position of $s$).
Now what are the interesting positions of $s$? Well it's each time you can put another item to the left of $s$. So it's all the left boundaries (plus one epsilon) of items that fall within $[l_s, r_s]$, and $l_s$ itself.
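The scheme above can be sketched directly in Python. One detail: if ties are allowed (an item sitting exactly at $p_s$ counts as "not between"), the epsilon can be dropped and the candidates are the left boundaries themselves. This is an illustrative sketch of the answer's idea, not code validated against the original integer program.

```python
import math

def max_right_gap(intervals, s):
    """intervals[i] = (l_i, r_i); return the largest achievable gap between
    item s and the nearest item placed strictly to its right."""
    ls, rs = intervals[s]
    # candidate positions for s: its own left end, plus every left boundary
    # in (ls, rs] -- placing s there lets that item slip over to the left side
    candidates = {ls} | {l for i, (l, _) in enumerate(intervals)
                         if i != s and ls < l <= rs}
    best = 0.0
    for p in candidates:
        # items that cannot sit at or left of p must go right;
        # each of them is pushed to its right boundary
        forced = [r for i, (l, r) in enumerate(intervals) if i != s and l > p]
        gap = (min(forced) - p) if forced else math.inf
        best = max(best, gap)
    return best
```

For example, with `intervals = [(0, 4), (2, 3), (7, 9)]` and `s = 0`, moving item 0 to position 2 lets item 1 sit at 2 as well, forcing only item 2 (at 9) to the right, for a gap of 7.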
{ "domain": "cs.stackexchange", "id": 12443, "tags": "algorithms, optimization, np-complete, linear-programming" }
How likely are planets to form after neutron star collisions?
Question: It is well known that planetary collisions can create moons orbiting the result of the merger if they happen in the correct way, and this is how the Earth's moon is believed to have been formed. See the animations on this Durham University page to get an idea of how the mechanism works http://icc.dur.ac.uk/giant_impacts/. It seems to me that it should be at least theoretically possible for the same process to happen when neutron stars collide, which would produce bizarre extremely-high-metallicity (or rather high-average-atomic-mass) planets. However, I also know that the physics is very different in some ways: the colliding objects are much denser; the collision is much higher-energy; radioactive decay creates a burst of extra energy from any matter thrown off the objects; the gravity and velocity are high enough that relativity matters a lot; they probably are in very circular orbits spiraling toward each other rather than hitting each other from the angles that protoplanets do; etc. It's also possible that most of the mass of the neutron star might be thrown away and leave a low-mass remnant that might expand into a high-atomic-weight planet or white dwarf, or that some bit of ejected matter might be thrown out at similar enough velocities (speed AND direction) to eventually coalesce into a rogue planet. I'm just wondering whether anyone has looked into this before, or if anyone has any input as to whether this would be more or less likely than moons forming from planetary collisions, or if anyone knows how to test this with simulations.
EDIT: I've just realized the reason why it is probably impossible for a planet to form in the same way the Moon formed around the Earth: The outward force is way stronger than gravity except for close objects, which would be inside the Roche limit of the resultant black hole or neutron star and thus form an accretion disc or ring rather than a planet (due to the fact that any potential planet would be ripped apart by tidal forces). I haven't done any math on this, and this is just my impression, though. In addition, this doesn't mean a planet couldn't form from the ejecta in other ways; for instance, the disc of matter close enough to be held in orbit after the initial explosion might be pushed out to include a planet-forming region outside the Roche limit during a later phase of the event. EDIT 2: I've had an idea for how this might happen, but I think this might really be a different question. The idea is that, if another star was in the same system as a kilonova (collision between stellar remnants that ejects matter and radiation), the kilonova might leave enough of the star to stay in the system, or perhaps leave enough matter for the other star to capture it somehow. One thing about this scenario, though, is that the idea of another star being in the same system as a compact binary merger rather implies that this third star has already been hit by at least one supernova, possibly multiple and maybe several novae, depending on whether a parasitic binary was formed. (This wouldn't apply if the third star was captured into the system after both of the other stars had already died, though.) Supernovae are stronger than kilonovae in terms of energy that gets thrown out, so the previous supernovae would already have had a stronger effect on the object.
I believe that kilonovae are thought to produce heavier elements than any type of supernova, so stars hit by kilonovae would be different in composition than ones hit by supernovae, but it's still basically the same question: What kind of remnants can survive from stars hit by supernovae/kilonovae/novae at close range? I think it's pretty obvious that this could form some kind of remnant, possibly depending on the distance to the third star, so that already answers my question, though I don't know what compositions are possible or what masses are likely. But I think this is really a different question that should probably be asked separately if I or anyone else want it answered on Stack Exchange. Answer: There do appear to have been some studies on the properties of potential fallback discs formed after neutron star mergers, for example: Rosswog (2007) "Fallback accretion in the aftermath of a compact binary merger" Cannizzo et al. (2011) "Fall-back Disks in Long and Short Gamma-Ray Bursts" Lyutikov (2013) "The Electromagnetic Model of Short GRBs, the Nature of Prompt Tails, Supernova-less Long GRBs, and Highly Efficient Episodic Accretion" These studies focus on explaining X-ray flaring in the aftermath of gamma-ray bursts rather than the potential to form exotic planets in these environments. It does seem fairly likely that some kind of disc does form around the remnant of a neutron star merger, but it's going to be extremely hot and likely so close to the remnant that it will be unable to form planets. As noted in Menou et al. (2001) "Stability and Evolution of Supernova Fallback Disks", planet formation in fallback discs depends on the timescales for the disc cooling and how long it takes to spread beyond the Roche limit: if the disc becomes neutral before it spreads beyond the Roche limit, spreading becomes reliant on interactions within the remaining disc of rocks.
While they consider the case of merging white dwarfs (noting that this scenario leads to a more favourable environment for planets than post-supernova fallback discs around black holes or neutron stars), they do not study the case of merging neutron stars.
{ "domain": "astronomy.stackexchange", "id": 4972, "tags": "neutron-star, planetary-formation, collision" }
Sending a string from anonymous thread to UI with PostMessage
Question: I am sending a string from an anonymous thread to the UI with PostMessage in the following code: unit Unit2; interface uses Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls; const WM_SETCAPTION = WM_USER; type TForm2 = class(TForm) Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } procedure WMSetCaption(var msg: TMessage); message WM_SETCAPTION; end; var Form2: TForm2; implementation {$R *.dfm} procedure TForm2.WMSetCaption(var msg: TMessage); begin Self.Caption := PChar(msg.LParam); end; procedure TForm2.Button1Click(Sender: TObject); begin System.Classes.TThread.CreateAnonymousThread( procedure begin PostMessage(Handle, WM_SETCAPTION, 0, LParam(PChar('My new caption'))); end).Start; end; end. This works seemingly well, but could this potentially create memory leaks? Or is there a better way to accomplish this? Answer: Create an object containing the string as a property and pass the pointer to the object as the LParam. Then, in the message handler, use the string and free the object. In this way, you can send not only strings but other data as well.
unit SendingStringWithPostMessageUsingObjectMainForm; interface uses Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls; const WM_STRINGMESSAGE = WM_USER; type TStrMsgObject = class(TObject) private fMsg: string; public constructor Create(const aMsg: string); property Msg: string read fMsg; end; type TForm2 = class(TForm) btnSetCaption: TButton; btnChangeCaption: TButton; procedure btnChangeCaptionClick(Sender: TObject); procedure btnSetCaptionClick(Sender: TObject); private { Private declarations } procedure WMStringMessage(var Msg: TMessage); message WM_STRINGMESSAGE; public { Public declarations } end; var Form2: TForm2; implementation {$R *.dfm} const // WM_STRINGMESSAGE Actions: ACTION_SETCAPTION = 0; ACTION_CHANGECAPTION = 1; constructor TStrMsgObject.Create(const aMsg: string); begin inherited Create; fMsg := aMsg; end; procedure TForm2.WMStringMessage(var Msg: TMessage); // Process WM_STRINGMESSAGE messages var mo: TStrMsgObject; begin mo := TStrMsgObject(Msg.LParam); try case Msg.WParam of ACTION_SETCAPTION: begin Caption := mo.Msg; end; ACTION_CHANGECAPTION: begin Caption := Caption + mo.Msg; end; end; finally mo.Free; end; end; procedure TForm2.btnSetCaptionClick(Sender: TObject); begin System.Classes.TThread.CreateAnonymousThread( procedure begin PostMessage(Handle, WM_STRINGMESSAGE, ACTION_SETCAPTION, LPARAM(TStrMsgObject.Create('My new caption'))); end).Start; end; procedure TForm2.btnChangeCaptionClick(Sender: TObject); begin System.Classes.TThread.CreateAnonymousThread( procedure begin PostMessage(Handle, WM_STRINGMESSAGE, ACTION_CHANGECAPTION, LPARAM(TStrMsgObject.Create(' changed'))); end).Start; end; end.
{ "domain": "codereview.stackexchange", "id": 25526, "tags": "multithreading, thread-safety, delphi" }
Angular Displacement
Question: If something is rotating about a point and it covers a complete circle, should we take its angular displacement as 360 degrees or 0? Please give a link to some established material on this subject that supports your answer, whether it should be taken as 0 or 360 degrees. Question that led to this problem: The angular speed of a motor wheel is increased from 1200 rpm to 3120 rpm in 16 seconds. 1. What is its angular acceleration, assuming the acceleration to be uniform? 2. How many revolutions does the wheel make during this time? In the solution to the above question, the authors solve part 2 using the equations of motion and state that the result is the angular displacement. This implies that the rotational equations of motion yield an angular displacement that does not become zero on returning to the original point, which is not what we would expect if we treat it as analogous to displacement in linear motion. Addendum: Even here ( Angular displacement and the displacement vector ) the selected answer says that on completing the circle, angular displacement is 360 degrees; is there some established text to support this? Similarly here ( http://www.ask.com/question/calculating-angular-displacement ) something else is said in the answer. Is angular displacement then ambiguous, with no single correct definition? If there is a correct one, please guide me to some established text. Answer: The answer to your question is sometimes! In most cases when we're dealing with angles we are using the trigonometric functions, and since these are periodic in angle with period $2\pi$ it doesn't matter whether you use zero, $2\pi$ or any multiple of $2\pi$ as your equations will give the same result. Alternatively you could be describing some object moving in a circle in an external field e.g. a gravitational field, and again most of the time tracing one circle is the same as tracing any number of circles. This is true of all conservative fields.
The exception is in electrodynamics, e.g. when a charged object is moving in a circle, because in that case it will be generating a magnetic field and each revolution of the circle puts energy into the magnetic field. In that case how many times you go round the circle does matter. Re the edit to the question: Aha, you're mixing up two different concepts. The angle can mean the position or it can mean the total angle moved. Let me attempt to give an example. Suppose you walk 1 km north then turn round and walk 1 km south back to where you started. Then your position in space hasn't changed, but you have still walked 2 km. Likewise if you rotate an object by $2\pi$ its angle hasn't changed, but it has still travelled through $2\pi$ radians. In the question you cite, the total angle moved is just the integral of angular velocity with respect to time, just as in linear motion the total distance moved is the integral of speed with respect to time.
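For the motor-wheel problem quoted in the question, the "total angle moved" reading is the one that makes part 2 answerable. A quick worked computation:

```python
import math

w1 = 1200 * 2 * math.pi / 60          # initial angular speed, rad/s
w2 = 3120 * 2 * math.pi / 60          # final angular speed, rad/s
t = 16.0                              # duration, s

alpha = (w2 - w1) / t                 # uniform angular acceleration (= 4*pi rad/s^2)
theta = w1 * t + 0.5 * alpha * t**2   # total angle swept, rad
revs = theta / (2 * math.pi)          # number of revolutions
```

The wheel makes 576 revolutions, a total angle far beyond $2\pi$, which is only meaningful if angular displacement accumulates rather than resetting each turn.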
{ "domain": "physics.stackexchange", "id": 10428, "tags": "definition, rotation" }
Subset of regular languages
Question: I have a system that is deciding a subset of regular languages and am curious if anyone has seen this type before and if it has a name I could use to research more. Specifically consider the collection of formal grammars with the following restrictions: There is a bijection between the terminal symbols and non-terminal symbols. In this post, I will use lowercase letters for terminal symbols and their uppercase counterparts as the associated non-terminal. All production rules are of the form: A -> bB or A -> ε. So, these are basically regular grammars with the restriction that all rules which produce a terminal b must then have the associated non-terminal B directly afterwards. From the point of view of finite automata, any automaton that writes a b must enter the same state B. I believe the collection of languages described by these grammars is a proper subset of the regular languages. For example, I don't think it can describe the regular language (ac*a|bc*b) because it cannot "remember" whether the string started with a or b while processing through the cs. Does anyone know if this type of language/grammar has been studied and how I could find more information about it if so? Thanks! Answer: Yes, this is essentially a bigram model: an n-gram model with n = 2, i.e. a first-order Markov model. Briefly, each state corresponds to a context of up to n-1 symbols, and each arc represents the nth symbol that transitions you to another state (again encoding a context of up to n-1 symbols, so you lose some history). Since your transition function is conditioned on the state (which only holds one token, in this example) your intuition about not being able to remember the a or b in (ac*a|bc*b) is correct, except that it would need to be (ac+a|bc+b) to ensure you enter the c state.
For example, starting in a state with an empty context (sometimes called the unigram state), reading an a brings you to the a state, and reading a c brings you to the c state, but loses the information about where you came from. You may be interested in this chapter on n-gram language models (exactly what you described, but describing a probability distribution rather than set inclusion): Speech and Language Processing, Chapter 3. One major issue with this type of model is that if you increase the context length (rather than remembering just one token, you remember several), the number of states grows exponentially in the context length, as V^(n-1) for vocabulary size V. There is a special type of n-gram model called a backoff n-gram model designed specifically to address this issue. Sections 3.5.3 and 3.7 of that chapter discuss this. A few more links about n-gram and backoff language models: An Empirical Study of Smoothing Techniques for Language Modeling, Unary Data Structures for Language Models, Faster and Smaller N-Gram Language Models.
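As a concrete sketch of the correspondence (the '^' start marker, state names, and rule encoding below are my own illustrative conventions, not from the question), such a grammar runs as a one-symbol-of-memory automaton: reading symbol b always enters state B, so membership depends only on which adjacent symbol pairs are allowed and which states may terminate.

```python
def make_recognizer(bigrams, final):
    """bigrams: set of pairs (A, b) meaning a rule A -> bB exists,
    with '^' standing for the start nonterminal.
    final: states A that have the rule A -> epsilon."""
    def accepts(w):
        state = '^'
        for c in w:
            if (state, c) not in bigrams:
                return False
            state = c  # reading c always enters the state C
        return state in final
    return accepts

# The language (ac+a | bc+b): once in state 'c' the automaton has
# forgotten whether the string began with 'a' or 'b', so any rule set
# that accepts the wanted strings also accepts the crossed ones.
acc = make_recognizer(
    bigrams={('^', 'a'), ('^', 'b'), ('a', 'c'), ('b', 'c'),
             ('c', 'c'), ('c', 'a'), ('c', 'b')},
    final={'a', 'b'})
print(acc('acca'))   # True, as desired
print(acc('accb'))   # also True: the "crossed" string gets in
```

This makes the limitation in the question concrete: there is no way to encode the first symbol in the state once a 'c' has been read.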
{ "domain": "cstheory.stackexchange", "id": 5561, "tags": "fl.formal-languages, regular-language, grammars" }
Ambidentate Behavior of Cyanide
Question: Why do haloalkanes give alkyl cyanides when treated with KCN but give alkyl isocyanide with AgCN? Answer: The $\ce{KCN}$ substance is made of two ions : $\ce{K^+}$ and $\ce{CN^-}$ The active ion is cyanide, and the active atom in cyanide ion is the carbon atom, which is charged $-1$. This negative C atom attacks haloalkanes to produce alkylcyanide. On the contrary, $\ce{AgCN}$ is not made of two ions. It is not soluble in water. The bond $\ce{Ag-C}$ is mainly covalent. So the active atom in $\ce{AgCN}$ is the outer $\ce{N}$ atom. When interacting with a haloalkane, the $\ce{N}$ atom has a free doublet which can attack it via a $\ce{S_N2}$ mechanism to produce an isocyanide.
{ "domain": "chemistry.stackexchange", "id": 17173, "tags": "organic-chemistry" }
Is $a^n b^n c^n$ context-free?
Question: I am new to grammars and I want to learn context-free grammars, which are the base of programming languages. After solving some problems, I encountered the language $$\{a^nb^nc^n\mid n\geq 1\}\,.$$ Can anybody tell me if this is a context-free language? I can't make any context-free grammar for it and I don't know any other proof. Answer: This is my approach to prove that a given language is not a CFL. Try hard to come up with a context-free grammar for the given language. If you can come up with such a grammar, then the language is indeed a CFL. If you can't, then you can use the pumping lemma to show that the language is not a CFL. Assume L is context free. Then L satisfies the pumping lemma, so there exists an n given by the lemma. Let z = a^n b^n c^n. Because |z| >= n and z is in L, by the pumping lemma there exist u, v, w, x, y such that z = uvwxy, |vwx| <= n, |vx| >= 1, and for all i >= 0, u v^i w x^i y is in L. But for any u, v, w, x, y satisfying the first three constraints, we can find an i for which the fourth constraint fails. Case: vwx consists only of "a"s. Then for i = 0 the string is not in L (fewer "a"s than "b"s or "c"s). Case: vwx contains only "b"s - similar. Case: vwx contains only "c"s - similar. Case: vwx contains exactly two types of characters from {a,b,c}. Then for i = 0 the string uwy has more of the remaining type of character than of at least one pumped type. The string vwx cannot contain all three of "a"s, "b"s, and "c"s, since vwx is a substring of z of length <= n (it cannot contain both the last "a" and the first "c", which are separated by n "b"s). This is a contradiction, so our assumption that L is a CFL is wrong.
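As a small sanity check of the case analysis (this brute-force search is my own addition, not part of the original proof), one can verify by machine that for z = a^n b^n c^n, every split z = uvwxy obeying |vwx| <= n and |vx| >= 1 already falls outside the language when pumped down with i = 0:

```python
def in_L(s):
    # membership in { a^m b^m c^m : m >= 1 }
    m = len(s) // 3
    return m >= 1 and s == 'a' * m + 'b' * m + 'c' * m

def pumping_down_always_escapes(n):
    """Check every decomposition z = uvwxy with |vwx| <= n, |vx| >= 1:
    removing v and x (the i = 0 pump) must leave the language."""
    z = 'a' * n + 'b' * n + 'c' * n
    for i in range(len(z) + 1):                       # v = z[i:j]
        for j in range(i, min(i + n, len(z)) + 1):    # w = z[j:k]
            for k in range(j, min(i + n, len(z)) + 1):
                for l in range(k, min(i + n, len(z)) + 1):  # x = z[k:l]
                    if (j - i) + (l - k) == 0:        # need |vx| >= 1
                        continue
                    u, w, y = z[:i], z[j:k], z[l:]
                    if in_L(u + w + y):               # i = 0 pump
                        return False
    return True

print(pumping_down_always_escapes(4))   # True
```

This works because vwx, being of length at most n, misses at least one letter type, so pumping down leaves that letter's count at n while shrinking another.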
{ "domain": "cs.stackexchange", "id": 3621, "tags": "formal-languages, context-free" }
What is the matrix of the iSwap gate?
Question: Mostly I'm confused over whether the common convention is to use +$i$ or -$i$ along the anti-diagonal of the middle $2\times 2$ block. Answer: Mostly I'm confused over whether the common convention is to use +i or -i along the anti-diagonal of the middle 2x2 block. The former. There are two $+i$'s along the anti-diagonal of the middle $2\times 2$ block of the iSWAP gate. See page 95 here[$\dagger$]. [$\dagger$]: Explorations in Computer Science (Quantum Gates) - Colin P. Williams
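For completeness, written out in the computational basis $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$ with the convention the answer cites, the gate is $$\mathrm{iSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$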
{ "domain": "quantumcomputing.stackexchange", "id": 261, "tags": "quantum-gate, matrix-representation" }
Decision directed TED (timing error detector). What does it mean?
Question: I am reading about symbol rate recovery and there is a term that I didn't grasp - 'decision directed' TED. Can somebody explain what it means or point me to relevant reading? Thanks Answer: "Decision directed" means that the inputs to the timing error determination use decisions (best guesses) of what the ideal symbol is, based on the sample read. This is in contrast to "symbol-aided", where a known symbol is transmitted and a replica of that symbol can be used in the receiver. You can visualize how this allows a much wider error-determination range before the results alias. The diagrams I have are specific to phase detection, but they demonstrate the concept and utility of decision-directed approaches, which applies to timing recovery as well. As an example for the purpose of phase detection (used in carrier recovery): a decision-directed phase detector for QPSK measures the actual sample $V_2$ at the ideal sampling location in the symbol, from which we assume (best guess) that the symbol transmitted must have been $V_1$. This is just an assumption based on the nearest neighbor and only serves to drive the correction loop to that state, even though the whole constellation may end up being rotated once that correction converges (and then other techniques, such as pilot symbols or higher-level inspection of the data itself, can be used to determine whether such a rotation occurred). In contrast, consider a QAM constellation where a symbol-aided approach is used for phase detection. The lock range as limited by the nearest-neighbor feature of a decision-directed approach is greatly extended: an actual received symbol can be compared against the known transmitted location (say, the outer symbol at +45 degrees), so a much greater error can be determined.
In a symbol-aided approach, the known symbols are transmitted over a portion of the frame, such as in a pilot, which is then used for carrier and timing recovery; this can operate in much lower SNR conditions. Decision-directed approaches outperform other approaches that derive everything from the received signal itself (such as the Gardner TED, which typically uses neither a pilot nor decisions) at higher SNR levels, since a correct decision removes all of the noise from that input to the timing detector.
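As an illustrative sketch (my own example, not from the answer above): one classic decision-directed TED is the Mueller and Müller detector, which for real-valued (BPSK) samples forms its error from the current and previous samples and the corresponding symbol decisions. At the ideal sampling instant with correct decisions, the error is zero, so the timing loop sits at equilibrium:

```python
import random

def mm_ted(y_curr, y_prev, d_curr, d_prev):
    """Mueller & Muller decision-directed timing error (real/BPSK form).
    d_* are the receiver's own symbol decisions, not known pilots."""
    return d_prev * y_curr - d_curr * y_prev

def decide(v):
    # nearest BPSK constellation point (the "best guess" decision)
    return 1.0 if v >= 0 else -1.0

random.seed(0)
symbols = [random.choice([-1.0, 1.0]) for _ in range(10)]
# With ideal timing and no noise the sample equals the symbol, the
# decision is correct, and every TED output is exactly zero.
errors = [mm_ted(symbols[k], symbols[k - 1],
                 decide(symbols[k]), decide(symbols[k - 1]))
          for k in range(1, len(symbols))]
print(errors)
```

With timing offset or noise, the residual in `errors` becomes nonzero and drives the interpolator back toward the ideal sampling instant.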
{ "domain": "dsp.stackexchange", "id": 10091, "tags": "digital-communications" }
Hamiltonian cycle vs co-NP
Question: I am trying to understand co-NP and its implications properly. The French Wikipedia page describing co-NP provides the "complementary" version of the Hamiltonian cycle in co-NP as follows: Considering a graph G, is it true that it does not have an Hamiltonian cycle? Is this correct? If so, then how is this complementary version a different class of complexity than the original Hamiltonian cycle problem? The decision question is the same. To put it differently, what is the difference between NP and co-NP here? Otherwise, if this complementary version is false, what would be the correct version (if any exists)? Update To put it simply, returning an opposite answer for one input does not cover for all possible inputs. In the comments of dkaeae's answer below, I say "NP is not only about checking if a cycle is Hamiltonian. It is about finding any that would be Hamiltonian." Fact is if a P-machine can decide whether a graph is Hamiltonian or not, it can also deliver an instance of such a cycle in P-time if any exist. This does not harm the formal definition of NP. Answer: The complementary version given by Wikipedia is correct. coNP consists of the complements of the problems in NP. In this case, the universe of discourse (i.e., the encoded instances) is the set of (properly encoded) graphs. The Hamiltonian cycle decision problem contains those graphs which are Hamiltonian; its complement is exactly the set of graphs which are not Hamiltonian. You are correct; the two problems do look almost the same. This is a vital realization concerning coNP (or any co-class in fact). The key difference is that, for some problem classes, being able to efficiently check whether a solution is correct (NP) does not necessarily mean you can also efficiently verify all answers are incorrect (coNP) and vice-versa.
{ "domain": "cstheory.stackexchange", "id": 4599, "tags": "complexity-classes, np" }
Celestial age calculator
Question: Our solar system has 8 planets, including earth. Our calendar has some very confusing concepts such as leap years, which I still don't fully understand. In fact, since where I live, we use a different solar calendar that is different from the Gregorian reforms of the Julian Western calendar, I think I'm born in March but I should be born in February! Anyways, I attempted to create a simple program that calculates any given age, subtracting the leap days of each four years, in any planet of our solar system except Pluto which I believe is not a planet. I'm not sure about it. def calculate_days(age): leap_days = 0 #holds number of leap days leap_years_list = [i for i in range(age) if i % 4 == 0] #holds the essence of leap years for j in leap_years_list: if j % 4 == 0 or j % 400 == 0 and j % 100 != 0: leap_days += 1 #if leap years are divisible by four and not divisible by a hundred, add to leap days return (age * 365) - leap_days def calculate_celestial_age(planet, age): number_of_days = calculate_days(age) #calculate the days in age assert str.islower(planet), "Planet name must be entered in lower case e.g. 'mercury'" #if planet name is in lowercase, give error days_in_year = {"mercury": 88, "venus": 224, "earth": 365, "mars": 687, "jupiter": 4332, "saturn": 10759, "uranus": 30688, "neptune": 60182} #days in each planet's year if planet == "mercury": return number_of_days / days_in_year["mercury"] elif planet == "venus": return number_of_days / days_in_year["venus"] elif planet == "earth": return number_of_days / days_in_year["earth"] elif planet == "mars": return number_of_days / days_in_year["mars"] elif planet == "jupiter": return number_of_days / days_in_year["jupiter"] elif planet == "saturn": return number_of_days / days_in_year["saturn"] elif planet == "uranus": return number_of_days / days_in_year["uranus"] elif planet == "neptune": return number_of_days / days_in_year["neptune"] else: raise Exception("Unknown Planet! 
Are you sure you've enter a planet?") age = 23 planets = ["mercury", "venus", "earth", "mars", "jupiter", "saturn", "uranus", "neptune"] for planet in planets: print("{0}: {1}".format(planet, calculate_celestial_age(planet, age))) For my age, 23, I got these results: mercury: 95.3409090909091 venus: 37.455357142857146 earth: 22.986301369863014 mars: 12.212518195050945 jupiter: 1.9367497691597415 saturn: 0.7798122502091273 uranus: 0.2733967674661105 neptune: 0.1394104549533083 According to this, my results are accurate. Answer: Simplify the logic It seems like: if planet == "mercury": return number_of_days / days_in_year["mercury"] elif planet == "venus": return number_of_days / days_in_year["venus"] elif planet == "earth": return number_of_days / days_in_year["earth"] elif planet == "mars": return number_of_days / days_in_year["mars"] elif planet == "jupiter": return number_of_days / days_in_year["jupiter"] elif planet == "saturn": return number_of_days / days_in_year["saturn"] elif planet == "uranus": return number_of_days / days_in_year["uranus"] elif planet == "neptune": return number_of_days / days_in_year["neptune"] can be easily rewritten: if planet in days_in_year: return number_of_days / days_in_year[planet] raise Exception("Unknown Planet! Are you sure you've enter a planet?") Also, one could consider keeping it even more simple and use the fact that: return number_of_days / days_in_year[planet] will throw the relevant exception for an invalid value. Do not repeat yourself You can try to avoid duplicated values and have a single source of information. 
In your case, the list of planets is indirectly hardcoded twice : days_in_year = {"mercury": 88, "venus": 224, "earth": 365, "mars": 687, "jupiter": 4332, "saturn": 10759, "uranus": 30688, "neptune": 60182} #days in each planet's year and planets = ["mercury", "venus", "earth", "mars", "jupiter", "saturn", "uranus", "neptune"] Maybe you could define a constant like DAYS_IN_YEAR_PER_PLANET (corresponding to your actual days_in_year dictionary) and use it where you are using the list of the planets (for planet in DAYS_IN_YEAR_PER_PLANET: for instance). Do not perform more operations than required number_of_days = calculate_days(age) is computed for every planet in the list. It would be better from a performance point of view to feed the function a number of days. Useless list (or useless test) You are creating a list with values divisible by 4. Then you iterate over it and check if the value is divisible by 4. It seems like a waste of effort. Let's get rid of the list creation. Code is based on the initial version of your code. def calculate_days(age): leap_days = 0 #holds number of leap days for j in range(age): if j % 4 == 0 and j % 100 != 0: leap_days += 1 #if leap years are divisible by four and not divisible by a hundred, add to leap days return (age * 365) - leap_days Also, this can somehow be written by abusing the generator expression and the sum builtin: def calculate_days(age): leap_days = sum(1 for j in range(age) if j % 4 == 0 and j % 100 != 0) return (age * 365) - leap_days Also, you could find mathematical expressions to compute the number of leap days in a more efficient way (constant time instead of linear time) but I'll keep this out of the review.
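Sketching the constant-time expression alluded to at the end (this matches the loop in the reviewed code, which counts years divisible by 4 but not by 100; note that, like the reviewed code, it does not apply the divisible-by-400 exception):

```python
def calculate_days(age):
    # Count j in range(age) with j % 4 == 0 and j % 100 != 0, without
    # looping: multiples of 4 in [0, age) minus multiples of 100.
    leap_days = (age + 3) // 4 - (age + 99) // 100
    return age * 365 - leap_days

print(calculate_days(23))   # 8390, same as the loop-based version
```

The ceiling divisions count how many multiples of 4 (respectively 100) fall in [0, age), since 0 itself is counted by the original loop.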
{ "domain": "codereview.stackexchange", "id": 20984, "tags": "python, datetime, calculator" }
Problems with moveit. Exit code -11
Question: Hello, I'm using Hydro, Ubuntu 12.04 and I installed Moveit from the source. When I try to load my .urdf with the setup_assistant, moveit prints that: [ INFO] [1411283688.252217911]: Loaded motoman_mh5 robot model. [ INFO] [1411283688.252495879]: Setting Param Server with Robot Description [ INFO] [1411283688.258372674]: Robot semantic model successfully loaded. [ INFO] [1411283688.258451525]: Setting Param Server with Robot Semantic Description [ INFO] [1411283688.290750255]: Loading robot model 'motoman_mh5'... [ INFO] [1411283688.290813881]: No root joint specified. Assuming fixed joint ================================================================================ REQUIRED process [moveit_setup_assistant-1] has died! process has died [pid 14258, exit code -11, cmd /home/lower/moveit/devel/lib/moveit_setup_assistant/moveit_setup_assistant __name:=moveit_setup_assistant __log:=/home/lower/.ros/log/e5208dd8-415d-11e4-a760-00216a36b28c/moveit_setup_assistant-1.log]. log file: /home/lower/.ros/log/e5208dd8-415d-11e4-a760-00216a36b28c/moveit_setup_assistant-1*.log Initiating shutdown! In other computer with the same system, it works. So I assume that the .urdf is ok. Thanks! Running it with GDB, that is what I get: GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2.1) 7.4-2012.04 Copyright (C) 2012 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". For bug reporting instructions, please see: <http://bugs.launchpad.net/gdb-linaro/>... Reading symbols from /home/lower/catkin_workspace/devel/lib/moveit_setup_assistant/moveit_setup_assistant...(no debugging symbols found)...done. 
Starting program: /home/lower/catkin_workspace/devel/lib/moveit_setup_assistant/moveit_setup_assistant __name:=moveit_setup_assistant __log:=/home/lower/.ros/log/18c0b614-417b-11e4-a388-00216a36b28c/moveit_setup_assistant-1.log [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". [New Thread 0x7fffddf64700 (LWP 771)] [New Thread 0x7fffdd763700 (LWP 772)] [New Thread 0x7fffdcf62700 (LWP 773)] [New Thread 0x7fffcffff700 (LWP 778)] [New Thread 0x7fffcf7fe700 (LWP 779)] [New Thread 0x7fffbffff700 (LWP 780)] [New Thread 0x7fffbf7fe700 (LWP 781)] [New Thread 0x7fffbe38d700 (LWP 782)] [New Thread 0x7fffad724700 (LWP 783)] [New Thread 0x7fffacf23700 (LWP 784)] [New Thread 0x7fffa7fff700 (LWP 785)] [New Thread 0x7fffa77fe700 (LWP 786)] [Thread 0x7fffad724700 (LWP 783) exited] [Thread 0x7fffbe38d700 (LWP 782) exited] [Thread 0x7fffa7fff700 (LWP 785) exited] [Thread 0x7fffa77fe700 (LWP 786) exited] [New Thread 0x7fffa77fe700 (LWP 787)] [New Thread 0x7fffa7fff700 (LWP 788)] [New Thread 0x7fffbe38d700 (LWP 789)] [Thread 0x7fffa77fe700 (LWP 787) exited] [Thread 0x7fffa7fff700 (LWP 788) exited] [Thread 0x7fffacf23700 (LWP 784) exited] [New Thread 0x7fffacf23700 (LWP 790)] [New Thread 0x7fffa7fff700 (LWP 791)] [New Thread 0x7fffa77fe700 (LWP 792)] [Thread 0x7fffbe38d700 (LWP 789) exited] [Thread 0x7fffacf23700 (LWP 790) exited] [Thread 0x7fffa77fe700 (LWP 792) exited] [New Thread 0x7fffa77fe700 (LWP 793)] [Thread 0x7fffa77fe700 (LWP 793) exited] [New Thread 0x7fffa77fe700 (LWP 794)] [New Thread 0x7fffacf23700 (LWP 795)] [New Thread 0x7fffbe38d700 (LWP 796)] [New Thread 0x7fffad724700 (LWP 797)] [New Thread 0x7fffa6ffd700 (LWP 798)] [New Thread 0x7fffa67fc700 (LWP 799)] [New Thread 0x7fffa5ffb700 (LWP 800)] [New Thread 0x7fffa57fa700 (LWP 801)] [New Thread 0x7fffa4ff9700 (LWP 802)] [Thread 0x7fffa5ffb700 (LWP 800) exited] [Thread 0x7fffa57fa700 (LWP 801) exited] [Thread 0x7fffacf23700 (LWP 795) exited] [Thread 
0x7fffa7fff700 (LWP 791) exited] [Thread 0x7fffa67fc700 (LWP 799) exited] [Thread 0x7fffa6ffd700 (LWP 798) exited] [Thread 0x7fffbe38d700 (LWP 796) exited] [Thread 0x7fffa4ff9700 (LWP 802) exited] [Thread 0x7fffa77fe700 (LWP 794) exited] [New Thread 0x7fffa77fe700 (LWP 803)] [New Thread 0x7fffa4ff9700 (LWP 804)] [New Thread 0x7fffbe38d700 (LWP 805)] [New Thread 0x7fffa6ffd700 (LWP 806)] [New Thread 0x7fffacf23700 (LWP 807)] [New Thread 0x7fffa7fff700 (LWP 808)] [New Thread 0x7fffa67fc700 (LWP 809)] [New Thread 0x7fffa5ffb700 (LWP 810)] [New Thread 0x7fffa57fa700 (LWP 811)] [Thread 0x7fffa57fa700 (LWP 811) exited] [Thread 0x7fffa5ffb700 (LWP 810) exited] [Thread 0x7fffa6ffd700 (LWP 806) exited] [Thread 0x7fffa4ff9700 (LWP 804) exited] [Thread 0x7fffad724700 (LWP 797) exited] [Thread 0x7fffbe38d700 (LWP 805) exited] [Thread 0x7fffacf23700 (LWP 807) exited] [Thread 0x7fffa67fc700 (LWP 809) exited] [Thread 0x7fffa7fff700 (LWP 808) exited] [New Thread 0x7fffa7fff700 (LWP 812)] [New Thread 0x7fffa67fc700 (LWP 813)] [New Thread 0x7fffacf23700 (LWP 816)] [New Thread 0x7fffbe38d700 (LWP 818)] [Thread 0x7fffa67fc700 (LWP 813) exited] [Thread 0x7fffbe38d700 (LWP 818) exited] [Thread 0x7fffacf23700 (LWP 816) exited] [Thread 0x7fffa77fe700 (LWP 803) exited] [New Thread 0x7fffa77fe700 (LWP 819)] [New Thread 0x7fffacf23700 (LWP 820)] [Thread 0x7fffacf23700 (LWP 820) exited] [Thread 0x7fffa7fff700 (LWP 812) exited] Program received signal SIGSEGV, Segmentation fault. 0x00007fffecfdb24e in glXCreateContext () from /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 (gdb) Originally posted by Bastbeat on ROS Answers with karma: 131 on 2014-09-21 Post score: 0 Answer: It seems that I had a problem with my graphic drivers. 
I tried "glxinfo | grep OpenGL" and the result was "Error: couldn't find RGB GLX visual or fbconfig" I googled it, and I found this: "http://askubuntu.com/questions/475972/error-couldnt-find-rgb-glx-visual-or-fbconfig-ubuntu-12-04" Finally, I could run "glxinfo | grep OpenGL", and I could load the .urdf in moveit_setup_assistant. I could run the "demo.launch" when I finished the tutorial(I never before could do this), too. Originally posted by Bastbeat with karma: 131 on 2014-09-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Bastbeat on 2014-09-21: Thank you anyway, gvdhoom.
{ "domain": "robotics.stackexchange", "id": 19468, "tags": "ros, urdf, moveit, moveit-setup-assistant" }
FIR Filter implementation without padding
Question: Is it possible to implement FIR filtering action without padding the input and coefficients? i.e. Let's say if the input and filter coefficients are of size 4, then the output will be 7 samples. So, while implementing, we generally add 3 more zeros to both input and filter coefficients making them equal to output size. But, if the input and filter coefficients are of size 1024, then the output will be of 2047 samples. So, now, we need to add 1023 zeros to both input and filter coefficients. This is inefficient, right? So, I just want to know: is there any other way to implement FIR filtering without padding? The below code gives the idea I was talking about. int main() { int x[4],h[4],y[7]={0}; int i,j; for(i=0;i<4;i++) { x[i] = i+1; h[i] = i+1; } for(int i=0;i<7;i++) { for(int j=0;j<4;j++) { y[i+j] = y[i+j]+x[i]*h[j]; //filtered signal of length M+N-1 } } for(i=0;i<7;i++) printf("%d\n", y[i]); } Output: 1,4,10,20,25,24,16 - expected But obtained results - 1,4,10,20,garbage value, garbage value, garbage value Answer: As I said in a comment, no padding is necessary. A simple FIR implementation might look like (it is assumed that the filter kernel h[] has already been implemented) float y[N+M-1]; //set y[] to zero for(int i=0;i<N;i++){ //N - signal length for(int j=0;j<M;j++){ //M - filter kernel length (usually much shorter than N) y[i+j] = y[i+j]+x[i]*h[j]; //filtered signal of length M+N-1 } } h[] represents the filter kernel and is usually implemented as a windowed-sinc filter of length M (in your case it's just the original signal). For your use case, you'd have int x[7],h[7],y[13]={0}; int i,j; for(i=0;i<7;i++) { if(i<4) { x[i] = i+1; h[i] = i+1; } if(i>=4) { x[i] = 0; h[i] = 0; } } for(int i=0;i<7;i++) { for(int j=0;j<7;j++) { y[i+j] = y[i+j]+x[i]*h[j]; //filtered signal of length M+N-1 } } Note that since you're convolving a signal with itself (x*h), you don't even need h[] but I kept it for clarity.
{ "domain": "dsp.stackexchange", "id": 8418, "tags": "convolution, finite-impulse-response, c, zero-padding" }
Scattering from a step potential barrier
Question: Suppose a potential barrier of the form $$ V(x) = \begin{cases} V_0 & x>0 \\ 0 & x<0 \end{cases} $$ Then, for energy $E$ such that $E < V_0$, we have that the transmission and reflection coefficients for the probabilities are $R = 1, T = 0$. In the case where $V_0$ is not enormously large with respect to $E$, the wave function will decay in $x>0$, but not rapidly. This means there is a reasonable, non-negligible probability for the particle to tunnel through. How does this agree with the fact that $R=1$, which means all of the probability has been reflected? Answer: $T$ and $R$ are transmission and reflection coefficients for waves. They refer to the probability that an incident wave will penetrate the barrier and continue propagating infinitely far. Physically, you should think of it as sending a constant sine wave in from the far left and looking to see what amplitude of constant sine wave you get at the far right. In this case, since $E < V_0$ no matter how far to the right you go, the amplitude keeps decaying exponentially for all positive $x$. It never recovers its sinusoidal form, as it would with a rectangular barrier of finite extent, so there is no transmission. Now, if you think of this wave as a probability wave, then yes, it seems like there is a chance that the particle will materialize inside the barrier. And there is. That doesn't count as transmission, though. The process of a wave "turning into a particle" (i.e. wavefunction collapse) is not taken into account by the calculation of the transmission and reflection coefficients.
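To make this quantitative (a standard textbook result, with $k=\sqrt{2mE}/\hbar$ and $\kappa=\sqrt{2m(V_0-E)}/\hbar$): matching $\psi$ and $\psi'$ at $x=0$ gives $$\psi(x)=\begin{cases} e^{ikx}+r\,e^{-ikx}, & x<0,\\ t\,e^{-\kappa x}, & x>0, \end{cases} \qquad r=\frac{ik+\kappa}{ik-\kappa},\qquad t=1+r=\frac{2ik}{ik-\kappa},$$ so $|r|^2=1$ (everything is eventually reflected) even though $t\neq 0$: the evanescent tail $t\,e^{-\kappa x}$ gives a nonzero probability of finding the particle at $x>0$, but it carries no probability current off to $x\to\infty$.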
{ "domain": "physics.stackexchange", "id": 32197, "tags": "quantum-mechanics, schroedinger-equation, quantum-tunneling" }
How are train crossing signals and highway traffic signals coordinated?
Question: Some roadway intersections that are near railroad tracks have signs that light up when a train is approaching. These signs warn that certain turns are not allowed because of the train. One of these signs is shown below from the City of Edmonton. My understanding is that normal train crossing signals are the responsibility of the railroad and that traffic signals are the responsibility of the highway department. This wouldn't normally seem like a big problem, but these are two completely different systems and organizations. Obviously there is some way that the two owners coordinate and communicate the train warning information. How does the train warning signal get passed from the railroad signal to the traffic signal? Is this as simple as a wire run from the train signal that is "high" while lights are on? Is there standard way that this connection is done? Answer: The answer is slightly more complex than what the OP proposes -- this is due to two factors: The interconnect must be fail safe -- i.e. if the interconnect circuit fails, the system must be able to detect this and report a problem to both highway and railroad maintenance crews. To this end, special supervised circuits are used, using two relays in opposite states to detect a failure of any single relay to operate correctly. The interconnect must trigger the traffic light preemption cycle sufficiently before the railroad crossing signals trigger to allow queues to clear off the track before the train arrives. This may require both extended advance warning times (in excess of the 20 second regulatory minimum) and the use of advance warning outputs from the railroad grade crossing predictor -- just about all constant warning time predictors support advance preemption, but older DC or AC/DC three-track-circuit systems and Audio Frequency Overlay detectors may not. 
Furthermore, there are special programming concerns on the highway side as well -- the grade crossing preemption cycle must be set up correctly to provide sufficient track clearance green before it blocks all movements over the grade crossing, either using red arrows or illuminated blank-out signs to prohibit turning movements onto the tracks, while also allowing pedestrians to clear from the railroad crossing (this can be critical in geometries where the railroad crossing bisects or is in close proximity to the intersection). Preemption reservice (where the traffic signal controller handles a second preemption request that closely follows the first) and extra logic on both sides (highway and rail) may be necessary when multiple tracks or switching movements are involved, and coordinated signalization and preemption are needed when multiple traffic signals are in close proximity to the grade crossing. Diagonal crossings (either cutting multiple approaches or bisecting the intersection) pose even more challenges, sometimes requiring multiple grade crossing predictors to be interconnected to each other as well as to the highway traffic signal controller. If you want more gory detail on the intricacies of this than you ever need, the FHWA has an excellent handbook chapter on this topic. In particular, Exhibit 3 in the pre-emption sidebar (reproduced below) provides a sample diagram of a fail-safe interconnect relay circuit.
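As a toy illustration of the fail-safe supervision idea from the answer (the function name and the clear/preempt convention here are my own, not actual signal-engineering practice): driving two relays to opposite states lets the controller treat any agreement between them as a circuit failure, rather than silently acting on a wrong reading.

```python
def interconnect_status(relay_a_energized, relay_b_energized):
    """Supervised two-relay interconnect: the relays are always driven
    to opposite states, so matching readings indicate a failed relay
    or a broken/shorted circuit."""
    if relay_a_energized == relay_b_energized:
        return "fault"   # report to highway and railroad maintenance
    # Assumed convention: relay A energized means no train approaching.
    return "clear" if relay_a_energized else "preempt"

print(interconnect_status(True, False))   # clear
print(interconnect_status(False, True))   # preempt
print(interconnect_status(False, False))  # fault (e.g. power loss)
```

Note how a total power loss de-energizes both relays and is flagged as a fault rather than misread as either traffic state.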
{ "domain": "engineering.stackexchange", "id": 119, "tags": "rail, traffic-intersections, traffic-light" }
[Navigation stack] self defined Actionlib or move_base?
Question: Hi, I have a crucial question about the realization of a navigation system for my robot model. The robot is already moving very well and as expected in a simulated environment (RViz). Given a target point, it is able to calculate the desired trajectory and move to the point autonomously. One drawback in my design is: I have implemented everything on my own, and since the dynamic model was really difficult to realize, I never used the move_base package or anything like that. My controller reads points, not velocity commands (unlike the output of move_base, which is /cmd_vel). Now I want to implement a navigation stack for my robot but I am a little bit confused about which way to go: First option: I could take inspiration from the tutorials about move_base and leave it putting out velocity commands, which I'll later feed into my robot's model. Pro: exhaustive tutorials about move_base on this website and lots of explanations. Contra: my robot reads goals defined as points and moves accordingly to reach them; feeding velocity commands instead of positions will not work. Second option: I feed the goal's position into the model and write an actionlib (server + client) to track the progress toward the goals. At each goal it will trigger the next goal. Pro: it sounds much easier and more logical than the above idea. Contra: no idea how to start developing such a server/client application and, most importantly, I could not implement something like the base_local_planner if I need to generate a simple map to avoid obstacles (till now not necessary). UPDATE #1: The controller reads the published points (goals) with respect to the /odom frame and drives the robot there. Since the only transformation is between /odom and /frame_link, I made the assumption that goals are "fixed" with respect to the world (in RViz). Later I am going to publish them with respect to /map. I hope my question is clear enough. If not, I will update it!
Thanks in advance Originally posted by Andromeda on ROS Answers with karma: 893 on 2014-09-30 Post score: 2 Original comments Comment by David Lu on 2014-09-30: So the robot controller takes in points? In what reference frame? Answer: It looks like you partially replicated the functionality of move_base - specifically the goal-to-trajectory part. However, in a navigation context, the usefulness of this part is somewhat limited without processing the additional data you need for obstacle avoidance (costmaps for move_base). As David suggests, you can pipe the output path of the local planner into your trajectory generator; however, you'll have a lot of wasted cycles there - the path is created because the local planner has already evaluated sensory data, generated the trajectories, and calculated the endpoints for the paths (someone correct me if I'm wrong). So you'll be doing the same work twice. While it's really going to hurt, I'd suggest stripping out the trajectory generation and accepting cmd_vel input instead. It's a learning experience to try to use the tools available and not reinvent the wheel :) Besides, the ROS way would have been to develop these parts as separate modules in the first place, so that you could connect your robot's driver (which accepts cmd_vel and outputs odom) to any number of navigation implementations. Also, I know nothing about your robot. If your trajectory generator is very well suited to your system, and performs better than the two already implemented in the navigation stack, you should totally rewrite it to use the base_local_planner interface (processing costmap data and outputting velocity commands), taking advantage of the infrastructure provided by move_base as a state machine, and publish it to contribute back to the ROS ecosystem! Originally posted by paulbovbel with karma: 4518 on 2014-10-02 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by marco on 2014-11-04: Thanks dude
{ "domain": "robotics.stackexchange", "id": 19578, "tags": "navigation" }
Why Benzene Cannot be chlorinated using HOCl?
Question: Why doesn't HOCl do the job? Can anyone show me the mechanism? Answer: One possibility: When you chlorinate with, let us say, $\ce{Cl2 + FeCl3}$, you generate a "leaving group", $\ce{FeCl4^-}$, that comes off easily because it is highly polarizable and only weakly basic. The $\ce{HOCl + H^+}$ combination would have to displace a more strongly basic and less polarizable water molecule. Chlorination can be effected with hypochlorous acid by adding silver ion as another catalyst/co-reagent. See here.
{ "domain": "chemistry.stackexchange", "id": 10718, "tags": "organic-chemistry" }
Questions on the electric field formed in a conductor and the magnetic field it moves through
Question: When a conductor (e.g. a wire) is moving through a uniform magnetic field, an electric field is created in the wire as the magnetic field exerts a force on the moving conduction electrons and creates a potential difference. And here are my questions: (If we can get the magnetic field to exert a super strong force on the electrons) Is it ever possible for the electrons in the wire to get separated from the wire, like what happens when a high-energy photon hits an atom and knocks off its electrons? Since $F_e$ and $F_b$ always act in opposite directions, why don't they cancel one another (so the net force is 0) when the charges are in the electromagnetic equilibrium ($qE = qvB$)? What's so significant about this equilibrium ($qE = qvB$)? The formula for motional emf $emf = vBl$ is derived from the fact that $E = vB$, and that was because the charges are in the electromagnetic equilibrium. But wouldn't this formula $emf = vBl$ be inaccurate if we have a motional emf caused by charges not in the electromagnetic equilibrium ($qE = qvB$)? Why, when there is a motional emf, do we assume that the charges are in this equilibrium? I'm still learning secondary school physics and I'm very new to this topic and so please explain it in layman's terms if possible. Thank you so much. Answer: Yes, if the generated electric field is high enough, field emission can occur--electrons are forced out the negatively charged end purely by the high electric field. Note that in this case the electrons don't move down the conductor because of an electric force, but because of the magnetic force (aka Lorentz force). It is because they have settled at the lower end that an electric field occurred (electrons in this position cause the electric field, not vice versa). Note that the electrons are all forced to the bottom, creating a large negative charge there.
Naturally, electrons will want to spread back out, but they are kept in that uncomfortable position by the magnetic force. You are right to say they are in equilibrium--as long as the rod is not being accelerated, the electric force wanting to push the electrons up the conductor is matched by the magnetic force pushing them down. Therefore the electrons stay stuck at the lower end. Note that a net force of zero doesn't mean that the electrons are all spread out normally; it just means they stopped moving (no net force, no acceleration). Where they stopped is a matter of how strong the Lorentz force is (how fast the rod is being moved). The higher the force, the more strongly it can pack all the electrons at one end. The moment the rod starts to slow down, the Lorentz force decreases below the electric force (the electric force is not dependent on how fast the rod is moving), and the forces are no longer in equilibrium--the charges spread out accordingly, until equilibrium is reached again and the charges stop moving.
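As a small numeric illustration of the equilibrium condition discussed above (the values of $v$, $B$ and the rod length $l$ are my own example numbers, not from the question):

```python
# Hypothetical example: a rod of length l moving at speed v through field B
v = 10.0        # m/s
B = 0.5         # T
l = 0.2         # m
q = 1.602e-19   # magnitude of the electron charge, C

E = v * B       # electric field at equilibrium, from qE = qvB
emf = E * l     # motional emf, emf = vBl (about 1.0 V for these values)

# At equilibrium the two forces on each charge balance exactly:
F_electric = q * E
F_magnetic = q * v * B
```

The balance $qE = qvB$ is what makes $E = vB$, and hence $emf = vBl$, hold once the charges have stopped redistributing.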
{ "domain": "physics.stackexchange", "id": 9696, "tags": "electromagnetism, electric-fields, magnetic-fields" }
What is the IUPAC preferred name of this tetracyclic natural product?
Question: On page 819 of Vollhardt, Organic Chemistry 7th Ed., the following "subunit of the antibacterial platensimycin" appears. I have attempted to apply von Baeyer and skeletal replacement ('a') nomenclature but am stuck deciding between numberings. Here are some vector drawings: Is von Baeyer the preferred nomenclature for this molecule? And, if so, what is its correct numbering and preferred IUPAC name? Answer: The PubChem website gives the IUPAC name of platensimycin as $3$-$[3$-$[(1S,5S,6R,7S,9S,10S)$-$5,9$-dimethyl-$4$-oxo-$8$-oxatetracyclo$[7.2.1.1^{7,10}.0^{1,6}]$tridec-$2$-en-$5$-yl]propanoylamino]-$2,4$-dihydroxybenzoic acid (see attached picture): Accordingly, since https://pubchem.ncbi.nlm.nih.gov is usually updated to reflect changes, I believe the name of the cyclic system (without the 5,9-dimethyl groups) the OP is asking about is the same as the one in the comment by @KarstenTheis (including the correct stereochemistry): $(1S,5S,6R,7S,9S,10S)$-$4$-oxo-$8$-oxatetracyclo$[7.2.1.1^{7,10}.0^{1,6}]$tridec-$2$-ene (see the blue numbering in the picture). Late correction: After exchanging a few comments with Loong, I realized that the given system should be treated as a chemical compound. Accordingly, the name should have been: $(1S,5S,6R,7S,9S,10S)$-$8$-oxatetracyclo$[7.2.1.1^{7,10}.0^{1,6}]$tridec-$2$-en-$4$-one (as suggested by Loong).
{ "domain": "chemistry.stackexchange", "id": 11976, "tags": "nomenclature, heterocyclic-compounds" }
Hi there! I've recently been getting into Java and was wondering how I can clean up or condense the code for this problem I've been working on
Question: The program asks the user to enter values for items bought, the price of each item, the GST rate, the QST rate. The program calculates the subtotal based on these inputs. For every invalid input entered by the user the program will prompt the user to re-enter that value until it is a valid input. Items bought must be between 1 and 10. The price of each item must be between 1 and 1000. The GST must be between 0 and 14. The QST must be between 0 and 17. I'm wondering if there's a way I can make this code more efficient by including more methods or anything else. Thanks! import java.util.Scanner; public class Taxes{ public static void main(String[] args){ double subtotal = 0; int errors = 0; Scanner scan = new Scanner(System.in); System.out.println("Please enter the number of items bought (1-10): "); int num_items = scan.nextInt(); while (num_items < 1 || 10 < num_items){ errors += 1; System.out.println("Please enter the number of items bought (1-10): "); num_items = scan.nextInt(); } for (int i = 1; i <= num_items; i++){ Scanner scn = new Scanner(System.in); System.out.println("Please enter the price of item " + i); Double item_cost = scn.nextDouble(); while (item_cost < 1 || 1000 < item_cost){ errors += 1; System.out.println("Please enter the price of item " + i); item_cost = scn.nextDouble(); } subtotal += item_cost; } System.out.println("Please enter the tax rate of GST in %: "); double gRate = scan.nextDouble(); while (gRate < 0 || 14 < gRate){ errors += 1; System.out.println("Please enter the tax rate of GST in %: "); gRate = scan.nextDouble(); } System.out.println("Please enter the tax rate of QST in %: "); double qRate = scan.nextDouble(); while (qRate < 0 || 17 < qRate){ errors += 1; System.out.println("Please enter the tax rate of QST in %: "); qRate = scan.nextDouble(); } calculate(subtotal, gRate, qRate, errors); } public static void calculate(double subtotal, double gRate, double qRate, int errors) { double gst = subtotal * (gRate/100); double
qst = (subtotal + gst) * (qRate/100); double total = subtotal + gst + qst; System.out.println("GST: " + gst); System.out.println("QST: " + qst); System.out.println("Subtotal: " + total); System.out.println("You entered " + errors + " invalid inputs"); } } Answer: Style Please use proper indentation. It is generally considered good practice to indent the code inside a method. This will make your code more legible. The use of curly brackets is inconsistent (have a look at the calculate method). In Java, the most common use of curly brackets looks like this: public static void calculate(...) { //Commands here } For variable names, lowerCamelCase is used. So num_items becomes numItems and item_cost becomes itemCost. Input You have been thinking about how to stop the user from making invalid inputs. You are using: while (numItems < 1 || 10 < numItems) { errors += 1; System.out.println("Please enter the number of items bought (1-10): "); numItems = scan.nextInt(); } This will stop the user from entering numbers outside the intended range. The problem is that your program crashes when the user enters something that is not a number at all (for example "hi"). This can be solved with the following code (use import java.util.InputMismatchException;): while(true) { try { numItems = scn.nextInt(); if(numItems < 1 || 10 < numItems) { throw new InputMismatchException(); } break; } catch(InputMismatchException e) { System.out.println("Invalid Input!"); scn.nextLine(); } } Also, you have initialized more than one scanner in your program. That's not necessary. Just initialize one in the beginning and use it for the whole time. Other I removed the variable errors, because it is not useful to tell the user how many errors he/she made. But that's just a personal opinion. It's not necessary to restrict the number of items and their prices, but I have left it in the code so you can see how to restrict it.
All in all your code could look like this: import java.util.Scanner; import java.util.InputMismatchException; public class Taxes { public static void main(String[] args) { double subtotal = 0; Scanner scan = new Scanner(System.in); System.out.println("Please enter the number of items bought (1-10): "); int numItems; while(true) { try { numItems = scan.nextInt(); if(numItems < 1 || 10 < numItems) { throw new InputMismatchException(); } break; } catch(InputMismatchException e) { System.out.println("Invalid Input!"); scan.nextLine(); } } for (int i = 1; i <= numItems; i++) { System.out.println("Please enter the price of item " + i); Double itemCost; while(true) { try { itemCost = scan.nextDouble(); if(itemCost < 1 || 1000 < itemCost) { throw new InputMismatchException(); } break; } catch(InputMismatchException e) { System.out.println("Invalid Input!"); scan.nextLine(); } } subtotal += itemCost; } System.out.println("Please enter the tax rate of GST in %: "); double gRate; while(true) { try { gRate = scan.nextDouble(); if(gRate < 0 || 14 < gRate) { throw new InputMismatchException(); } break; } catch(InputMismatchException e) { System.out.println("Invalid Input!"); scan.nextLine(); } } System.out.println("Please enter the tax rate of QST in %: "); double qRate; while(true) { try { qRate = scan.nextDouble(); if(qRate < 0 || 17 < qRate) { throw new InputMismatchException(); } break; } catch(InputMismatchException e) { System.out.println("Invalid Input!"); scan.nextLine(); } } calculate(subtotal, gRate, qRate); } public static void calculate(double subtotal, double gRate, double qRate) { double gst = subtotal * (gRate/100); double qst = (subtotal + gst) * (qRate/100); double total = subtotal + gst + qst; System.out.println("GST: " + gst); System.out.println("QST: " + qst); System.out.println("Subtotal: " + total); } } I will leave it to you to comment the code properly.
{ "domain": "codereview.stackexchange", "id": 38150, "tags": "java, calculator" }
Why is information in a volume proportional to its surface area?
Question: I read the wiki article on the holographic principle, but it never answered this question. Can anyone explain the math that leads to this conclusion? Answer: Explaining the math behind the holographic principle would be a lengthy exercise. Is that really what you want? A short hand-waving argument would be that you can pack a limited number of qubits (in the form of photons) together in a given space. If you take long wavelength photons, you can pack a lot of them together before a black hole forms. Two fundamental principles limit the number of photons you can put in a sphere of radius $R$: 1) you can't make their wavelength too long as this would prevent you from localizing the photons within the sphere, and 2) if you make their wavelengths too short, the energy content within the sphere becomes too high and a black hole forms that would have a radius larger than $R$. The bottom line is that you can pack no more than a number of photons proportional to $R^2$ into the sphere of radius $R$, provided these have wavelengths comparable to $R$. If you would select massive qubits (rather than massless photons) things get worse.
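A back-of-the-envelope version of this counting argument (my own sketch, not part of the original answer): take photons with wavelength comparable to $R$, and cap the total energy at that of a black hole whose Schwarzschild radius is $R$. The resulting photon count scales as $R^2$ — in fact as the area in Planck units, up to a numerical factor.

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
hbar = 1.0546e-34   # J s
c = 2.9979e8        # m/s

def max_photons(R):
    """Rough upper bound on the number of wavelength-~R photons in radius R."""
    E_max = R * c**4 / (2 * G)             # energy of a black hole with Schwarzschild radius R
    E_photon = 2 * math.pi * hbar * c / R  # energy of one photon with wavelength R
    return E_max / E_photon

# The bound equals the surface area in Planck units, divided by 4*pi:
l_planck = math.sqrt(hbar * G / c**3)
ratio = max_photons(1.0) / (1.0**2 / l_planck**2)   # ~ 1/(4*pi)
```

Doubling $R$ quadruples the bound, which is the area scaling the answer describes.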
{ "domain": "physics.stackexchange", "id": 5686, "tags": "holographic-principle" }
Stress-energy tensor for a fermionic Lagrangian in curved spacetime - which one appears in the EFE?
Question: So, suppose I have an action of the type: $$ S =\int \text{d}^4 x\sqrt{-g}( \frac{i}{2} (\bar{\psi} \gamma_\mu \nabla^\mu\psi - \nabla^\mu\bar{\psi} \gamma_\mu \psi) +\alpha \bar{\psi} \gamma_\mu \psi\bar{\psi}\gamma_\nu \psi g^{\mu\nu})$$ Where $\psi$ is a fermionic field and the rest has the usual meaning ($\alpha$ is a coupling constant). Now, if I write down the canonical energy-momentum tensor, I find $$ \tilde{T}_{\mu\nu}= \frac{\delta L}{\delta \nabla^\mu\psi} \nabla_\nu\psi+ \nabla_\nu\bar{\psi} \frac{\delta L}{\delta \nabla^\mu\bar{\psi}}- g_{\mu\nu} L = 2i\bar{\psi} \gamma_{(\mu}\nabla_{\nu)}\psi -g_{\mu\nu} L $$ But, if I compute the stress tensor that sources Einstein's equations in general relativity, I get $$T_{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g^{\mu\nu}}=2 i\bar{\psi} \gamma_{(\mu}\nabla_{\nu)}\psi + 2 \alpha \bar{\psi} \gamma_\mu \psi\bar{\psi}\gamma_\nu \psi- g_{\mu\nu} L$$ The two are obviously different. So, which one should I use in Einstein's equations? The problem comes when you write an interaction term of the type $A_\mu A^\mu$, where $A$ is some current. Because otherwise the two tensors coincide. The first energy-momentum tensor is the one invariant under translations, so it is the one satisfying $$\nabla_\mu \tilde{T}^{\mu\nu} = 0$$ While the second satisfies the same identity only if $$\nabla_\mu A^\mu = 0$$ Basically my question is, which one of the two should be used in Einstein's equations? $G_{\mu\nu} = \kappa \overset{?}{T}_{\mu\nu}$ Or am I doing something wrong and the two tensors do actually coincide?
Apart from a theory that contains only scalars, the canonical stress tensor is never the one that enters the EFE. This is because, in general, the canonical stress tensor is not symmetric and therefore cannot possibly be the same stress tensor that enters in the EFE. For instance the canonical stress tensor for electromagnetism is $$ (T^{EM}_{\mu\nu})_{\text{canonical}} = F^\rho{}_\mu \partial_\nu A_\rho + \frac{1}{4} g_{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} $$ which is not only not symmetric, but also not gauge invariant. PS - The non-symmetry is due to the spin of the field involved and is closely related to the angular momentum tensor. However, there is an ambiguity in the construction of the stress tensor (the ambiguity does not change the conserved charges which are physical quantities). This ambiguity allows construction of an improved stress tensor (often known as the Belinfante tensor) that is symmetric and conserved. It is this improved tensor that enters the EFE. (ref. this book) To see the equivalence, let us recall the standard construction of the stress-tensor. Consider a coordinate transformation $$ x^\mu \to x^\mu + a^\mu (x) $$ Since the original Lagrangian is invariant under translations (where $a^\mu$ is constant), the change in the action under such a coordinate transformation is $$ \delta S = \int d^d x \sqrt{-g} \nabla_\mu a_\nu T_B^{\mu\nu} $$ Now, if the stress-tensor is symmetric then we can write $$ \delta S = \frac{1}{2} \int d^d x \sqrt{-g} \left( \nabla_\mu a_\nu + \nabla_\nu a_\mu \right) T_B^{\mu\nu} $$ Note that the term in the parenthesis is precisely the change in the metric under the coordinate transformation. Thus, $$ \delta S = \frac{1}{2} \int d^d x \sqrt{-g} \delta g_{\mu\nu} T_B^{\mu\nu} \implies \frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g_{\mu\nu}} = T_B^{\mu\nu} $$ Thus, we see that the symmetric Belinfante stress tensor is precisely the gravitational stress tensor. 
Note of course that what I've said holds specifically in a Minkowskian background, since the construction of $T_B^{\mu\nu}$ assumes Lorentz invariance.
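To make the EM example above concrete (a standard textbook computation, stated here under the assumptions of the flat-space Lagrangian $\mathcal{L} = -\frac{1}{4}F_{\alpha\beta}F^{\alpha\beta}$ and the free equation of motion $\partial^\rho F_{\rho\mu}=0$): the Belinfante improvement amounts to adding a total derivative to the canonical tensor, $$ T^B_{\mu\nu} = (T^{EM}_{\mu\nu})_{\text{canonical}} - \partial^\rho\left(F_{\rho\mu} A_\nu\right) = F^\rho{}_{\mu} F_{\nu\rho} + \frac{1}{4} g_{\mu\nu} F_{\alpha\beta}F^{\alpha\beta} $$ which is symmetric, gauge invariant, and still conserved: the added piece changes no conserved charges and is identically divergence-free in $\mu$, because $\partial^\mu \partial^\rho$ is symmetric while $F_{\rho\mu}$ is antisymmetric.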
{ "domain": "physics.stackexchange", "id": 23158, "tags": "general-relativity, lagrangian-formalism, fermions, stress-energy-momentum-tensor, qft-in-curved-spacetime" }
Cauchy stress tensor in different coordinate system
Question: The general form of the cauchy stress tensor is given by the dyadic decomposition $$\boldsymbol \sigma = \sigma_{ij}\,\,\mathbf{e}_i \otimes \mathbf{e}_j$$ I want to know how this can be expanded in a different coordinate system such as spherical coordinates. Related: link Answer: You need to express the coordinate basis vectors for the current system in terms of a linear combination of the coordinate basis vectors for spherical coordinates, and substitute into your equation. Alternatively, there is a formula for directly converting the components of your tensor from one coordinate system to another. Both methods give you the same answer. Method 1: $$\vec{i}_x=\sin{\theta}\cos{\phi}\vec{i}_r+\cos{\theta}\cos{\phi}\vec{i}_{\theta}-sin{\phi}\vec{i}_{\phi}$$ $$\vec{i}_y=\sin{\theta}\sin{\phi}\vec{i}_r+\cos{\theta}\sin{\phi}\vec{i}_{\theta}+cos{\phi}\vec{i}_{\phi}$$ $$\vec{i}_z=\cos{\theta}\vec{i}_r-\sin{\theta}\vec{i}_{\theta}$$ I'm only going to do it for a simple case in which only one component of the stress tensor is non-zero in cartesian coordinates, the z-z component. So, $$\vec{\sigma}=\sigma_{zz}\vec{i}_z \otimes \vec{i}_z=\sigma_{zz}(\cos{\theta}\vec{i}_r-\sin{\theta}\vec{i}_{\theta})\otimes(\cos{\theta}\vec{i}_r-\sin{\theta}\vec{i}_{\theta})$$ So, $$\vec{\sigma}=\sigma_{zz}\vec{i}_z \otimes \vec{i}_z=\sigma_{zz}(cos^2\theta\vec{i}_r \otimes \vec{i}_r-sin\theta cos\theta(\vec{i}_r \otimes \vec{i}_{\theta}+\vec{i}_{\theta} \otimes \vec{i}_r)+sin^2\theta\vec{i}_{\theta} \otimes \vec{i}_{\theta})$$ So, in this case, it follows that: $$\sigma_{rr}=\sigma_{zz}cos^2\theta$$ $$\sigma_{r\theta}=\sigma_{\theta r}=-\sigma_{zz}sin\theta cos\theta$$ $$\sigma_{\theta \theta}=\sigma_{zz}sin^2\theta$$
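A quick numerical check of Method 1 (my own sketch; the values of $\theta$, $\phi$ and $\sigma_{zz}$ are arbitrary test numbers): collect the spherical basis vectors, expressed in Cartesian components, as the rows of a matrix $Q$, and transform the components via $\sigma' = Q\,\sigma\,Q^T$.

```python
import math

theta, phi, sigma_zz = 0.7, 1.2, 5.0   # arbitrary test values

# rows: spherical unit vectors (i_r, i_theta, i_phi) in Cartesian components
Q = [
    [math.sin(theta)*math.cos(phi), math.sin(theta)*math.sin(phi),  math.cos(theta)],
    [math.cos(theta)*math.cos(phi), math.cos(theta)*math.sin(phi), -math.sin(theta)],
    [-math.sin(phi),                math.cos(phi),                  0.0],
]

# Cartesian stress tensor with only the z-z component non-zero
sigma = [[0.0]*3 for _ in range(3)]
sigma[2][2] = sigma_zz

# sigma'_{ab} = Q_{ai} sigma_{ij} Q_{bj}
sigma_sph = [[sum(Q[a][i]*sigma[i][j]*Q[b][j] for i in range(3) for j in range(3))
              for b in range(3)] for a in range(3)]

sigma_rr          = sigma_sph[0][0]   # should equal sigma_zz cos^2(theta)
sigma_r_theta     = sigma_sph[0][1]   # should equal -sigma_zz sin(theta) cos(theta)
sigma_theta_theta = sigma_sph[1][1]   # should equal sigma_zz sin^2(theta)
```

The numerical components reproduce the three closed-form results quoted at the end of the answer, and all $\phi$-components vanish, as expected for a stress with no $\vec{i}_\phi$ content.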
{ "domain": "physics.stackexchange", "id": 27389, "tags": "fluid-dynamics, tensor-calculus, stress-strain" }
Can a neutron star become a black hole via cooling?
Question: How much does thermal expansion affect neutron stars? Would the loss of temperature cause a neutron star to be more densely packed and thus collapse into a black hole? Answer: No (or at least not much). One of the essential properties of stars that are largely supported by degeneracy pressure is that this pressure is independent of temperature, and that is because although a neutron star may be hot, it has such a small heat capacity that it contains very little thermal energy$^{*}$. When a neutron star forms, it cools extremely rapidly by the emission of neutrinos, on timescales of seconds. During this phase, the neutron star does contract a little bit (tens of per cent), but by the time its interior has cooled to a billion Kelvin, the interior neutrons are degenerate and the contraction is basically halted. It is possible that a (massive) neutron star could make the transition to a black hole before this point. If it does not do so, then from there, the neutron star continues to cool (but actually possesses very little thermal energy, despite its high temperature), but this makes almost no difference to its radius. $^{*}$ In a highly degenerate gas the occupation index of quantum states is unity up to the Fermi energy and zero beyond this. In this idealised case, the heat capacity would be zero - no kinetic energy can be extracted from the fermions, since there are no free lower energy states. In practice, and at finite temperatures, there are fermions $\sim kT$ above the Fermi energy that can fall into the few free states at $\sim kT$ below the Fermi energy. However, the fraction of fermions able to do so is only $\sim kT/E_F$, where $E_F$ is the kinetic energy of fermions at the Fermi energy. At typical neutron star densities, this fraction is of order $T/10^{12}\ {\rm K}$, so is very small once neutron stars cool (within seconds) below $10^{10}$ K.
What this means is that the heat capacity is extremely small and that whilst the neutrons in a neutron star contain an enormous reservoir of kinetic energy (thus providing a pressure), almost none of this can be extracted as heat.
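To put a number on the $kT/E_F$ fraction (my own order-of-magnitude sketch; the chosen density and the non-relativistic Fermi-energy formula are simplifying assumptions, not part of the answer):

```python
import math

hbar = 1.0546e-34   # J s
c    = 2.9979e8     # m/s
k_B  = 1.3807e-23   # J/K
m_n  = 1.6749e-27   # neutron mass, kg

n = 1.6e44          # assumed neutron number density, ~0.16 per cubic femtometre

p_F = hbar * (3 * math.pi**2 * n) ** (1.0/3.0)   # Fermi momentum of a degenerate gas
E_F = (p_F * c)**2 / (2 * m_n * c**2)            # non-relativistic Fermi energy, joules
# E_F comes out at a few tens of MeV, while kT at 1e10 K is below 1 MeV
T = 1e10
fraction = k_B * T / E_F    # fraction of neutrons free to exchange heat, ~0.01
```

The fraction is of order a percent at $10^{10}$ K, consistent with the $\sim T/10^{12}\ \mathrm{K}$ scaling quoted in the answer.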
{ "domain": "physics.stackexchange", "id": 37998, "tags": "thermodynamics, black-holes, astrophysics, neutron-stars, stellar-physics" }
Are there natural separations in the nondeterministic time hierarchy?
Question: The original Nondeterministic Time Hierarchy Theorem is due to Cook (the link is to S. Cook, A hierarchy for nondeterministic time complexity, JCSS 7 343–353, 1973). The theorem states that for any real numbers $r_1$ and $r_2$, if $1 \le r_1 \lt r_2$ then NTIME($n^{r_1}$) is strictly contained in NTIME($n^{r_2}$). One key part of the proof uses (an unspecified) diagonalization to construct a separating language from the elements of the smaller class. Not only is this a nonconstructive argument, but languages obtained by diagonalization usually provide no insight other than the separation itself. If we want to understand the structure of the NTIME hierarchy, the following question probably needs to be answered: Is there a natural language in NTIME($n^{k+1}$) but not in NTIME($n^k$)? One candidate might be k-ISOLATED SAT, which requires finding a solution to a CNF formula with no other solutions within Hamming distance k. However, proving the lower bound is tricky, as usual. It seems obvious that checking that a Hamming k-ball is clear of potential solutions "should" require $\Omega(n^k)$ different assignments to be checked, but this is by no means easy to prove. (Note: Ryan Williams points out this lower bound for $k$-ISOLATED SAT would actually prove P ≠ NP, so this problem does not seem to be the right candidate.) Note that the theorem holds unconditionally, regardless of unproved separations such as P vs. NP. An affirmative answer to this question would therefore not resolve P vs. NP, unless it has additional properties like $k$-ISOLATED SAT above. A natural separation of NTIME would perhaps help to illuminate part of the "difficult" behaviour of NP, the part which derives its difficulty from an infinite ascending sequence of hardness. Since lower bounds are hard, I will accept as an answer natural languages for which we may have a good reason to believe a lower bound, even though there may not yet be a proof.
For instance, if this question had been about DTIME, then I would have accepted $f(k)$-CLIQUE, for a non-decreasing function $f(x) \in \Theta(x)$, as a natural language that probably provides the required separations, based on Razborov's and Rossman's circuit lower bounds and the $n^{1-\epsilon}$-inapproximability of CLIQUE. (Edited to address Kaveh's comment and Ryan's answer.) Answer: As far as I know, we don't know of such languages, or if we do, there is significant controversy about the "naturalness" of them. I know this isn't really a satisfying answer, but I can say: (a) If you prove an $\Omega(n^k)$ time lower bound for k-ISOLATED SAT for every $k$, then you have actually proved $P \neq NP$. (b) One way you might hope to show that k-ISOLATED SAT is one of these natural problems in $NTIME[n^{k+1}] - NTIME[n^k]$ is to show that k-ISOLATED SAT problem is hard (in the usual, formal sense of having efficient reductions) for $NTIME[n^k]$. In fact this is the only way we know how to prove such results. But k-ISOLATED SAT is probably not hard in this sense, there are some very unlikely consequences. The main reason is that k-ISOLATED SAT instances are solvable in $\Sigma_2 TIME[n]$, independently of $k$. You can existentially guess the isolated assignment, then universally verify (for all $O(\log (\sum_{i=1}^k {n \choose i}))$ ways to flip up to $k$ bits in the assignment) that none of the other "local" assignments work. Here is the proof of part (a). Let ISOLATED SAT be the version of the problem with $k$ given as part of the input (in unary, say). Suppose we prove that ISOLATED SAT requires $\Omega(n^k)$ time for all $k$. If $P=NP$, then $\Sigma_2 TIME[n]$ is in $TIME[n^c]$ for some fixed $c$ (the proof uses an efficient version of Cook's theorem: if there is a SAT algorithm running in time $n^d$, then any $c > d^2$ suffices). But we proved that there is a language in $\Sigma_2 TIME[n]$ that is not in $TIME[n^k]$ for every $k$. 
This is a contradiction, so $P \neq NP$. Here is the proof of part (b). If every $L \in NTIME[n^k]$ could be efficiently reduced to a k-ISOLATED SAT formula (e.g., all $n$ bit instances of $L$ get reduced to $k$-ISOLATED SAT formulas of at most $f(k) n^{c}$ size) then $NP=\bigcup_{k} NTIME[n^k] \subseteq \Sigma_2 TIME[n^{c+1}]$. This would immediately imply $coNP \neq NP$, but moreover it just looks very unlikely that all of $NP$ can be simulated so efficiently within the polynomial hierarchy.
{ "domain": "cstheory.stackexchange", "id": 78, "tags": "cc.complexity-theory, nondeterminism, time-hierarchy" }
Should I lookahead in the lexer or parser?
Question: So I have this Recursive Descent Parser which outputs a Syntax Tree for arithmetic expressions. While doing some reading, I learned that LL(1) grammars have a lookahead of 1 - I remembered using a lookahead so I grepped for it. But instead of finding it in my parser, I found it in the lexer. case "*": if self._peek() == "*": self._next(2) return Token(SyntaxKind.TWO_STAR, self.position - 2, "*") self._next(1) return Token(SyntaxKind.STAR, self.position - 1, "*") I use this to differentiate between the power (**) and multiplication (*) operators. So is this the right way to do things? Or should this code be in my parser? And is my grammar still LL(1) if I don't actually look ahead in the parser? Answer: "The right way to do things" depends on the specifics of the language. The lexical syntax of most modern programming languages uses what is known as the maximal munch rule, which states that given more than one possibility as to what the next lexeme could be, choose the one that consumes more of the input. This inevitably means that a lexical analyser must backtrack (or look ahead) in general. An example is in C++, where 0x0 is an integer constant expressed in hexadecimal, but 0x is an integer literal followed by a suffix. To determine the next token, the lexical analyser must look past the x to see what follows it. This is a design decision by programming language designers to make programming languages easier to implement. It's not the only option. To answer your last question, any LL(0) grammar is also LL(1) even though it never needs the lookahead symbol. Either way, LL(k)-ness is a property of the grammar, and the grammar doesn't care where the terminal symbols came from.
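A minimal maximal-munch sketch (my own toy lexer, paralleling the `*` vs `**` case from the question; names and token kinds are made up):

```python
def tokenize(src):
    """One-character lookahead in the lexer: prefer the longer operator '**' over '*'."""
    tokens, i = [], 0
    while i < len(src):
        ch = src[i]
        if ch == "*":
            if i + 1 < len(src) and src[i + 1] == "*":   # the lookahead
                tokens.append(("TWO_STAR", "**"))
                i += 2
            else:
                tokens.append(("STAR", "*"))
                i += 1
        elif ch.isdigit():
            j = i
            while j < len(src) and src[j].isdigit():     # maximal munch on digits too
                j += 1
            tokens.append(("NUMBER", src[i:j]))
            i = j
        else:
            i += 1   # skip anything else in this toy lexer
    return tokens
```

For example, `tokenize("2**3*4")` yields NUMBER, TWO_STAR, NUMBER, STAR, NUMBER — the lookahead lives entirely in the lexer, and the parser sees unambiguous terminals.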
{ "domain": "cs.stackexchange", "id": 20696, "tags": "compilers, parsers, lexical-analysis" }
Building a toy from Lego toys
Question: I recently attempted a coding problem, but my solution was not well received during the interview (modified slightly to reduce Google-ability). Suppose a child has several Lego blocks, and wants to build a toy from them. Every piece has a size, and when two pieces are combined a new piece is created whose size is the sum of the attached blocks. The time it takes to combine two blocks is equal to the sum of the sizes of the two individual blocks Find the sequence of combinations that takes the least amount of time to combine all of the Legos Here is an example: Given the following blocks: 5, 2, 8 the child could combine them in the following ways: 5 + 8 = 13 --> Kid spent 13 units of time 13 + 2 = 15 --> Kid spent 15 units of time Total time spent: 13 + 15 = 28 Another combination would be: 2 + 5 = 7 7 + 8 = 15 Total time spent: 22 I came up with the following algorithm to find the shortest time required to combine all of the blocks: import java.util.Arrays; import java.util.Collections; import java.util.List; class AssemblyTime { int shortestAssemblyTime(List<Integer> lego) { Collections.sort(lego); int counter = lego.size() - 1; int sum = lego.get(0) * (lego.size() - 1); for (int i = 1; i < lego.size(); i++) { sum += lego.get(i) * counter; counter--; } return sum; } public static void main(String[] args) { int i = new AssemblyTime().shortestAssemblyTime(Arrays.asList(5, 2, 8, 4)); assert i == 36; } } The test cases are not available to me, however my solution timed out on 6 of the 10 (it passed the other 4). The idea of my algorithm is as follows. Given my test case in code, the process is something like this: 2 + 4 2 + 4 + 5 2 + 4 + 5 + 8 So the last element is added once, the last-1 is added twice until elements at index 0 and 1, which are both added list.length - 1 times. How can I improve the performance for this algorithm? Is there any other data structure I can use?
Answer: You have a more fundamental problem here than performance: your solution isn't actually correct. Consider 4 lego pieces all of size 1. Your solution combines them as \$1+1=2\$ \$2+1=3\$ \$3+1=4\$ for a total cost of \$2+3+4 = 9\$. However, we can combine more efficiently in the following way \$1+1=2\$ \$1+1=2\$ \$2+2=4\$ for a total cost of \$8\$. In general this problem is asking you to rediscover the famous Huffman coding. If you consider your size \$w\$ blocks to be code words with frequency \$w\$, then the solution is asking you to find the cost of the optimal prefix code for those code words. The optimal combination is obtained by at each step picking the two smallest size blocks and combining them. We need a data structure where we can efficiently find and remove the smallest element, and also insert new elements (we need to reinsert combined blocks at each step). This can be done efficiently with a min priority queue. Strictly speaking, in an actual interview, you might be asked to justify why the above algorithm gives the most efficient combination. For this, see a resource on the Huffman code.
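A sketch of the priority-queue approach described above (my own implementation, using Python's heapq as the min priority queue):

```python
import heapq

def min_combine_time(sizes):
    """Repeatedly merge the two smallest blocks; each merge costs the sum of the pair."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # smallest remaining block
        b = heapq.heappop(heap)   # second smallest
        total += a + b            # time to combine this pair
        heapq.heappush(heap, a + b)  # the merged block goes back in
    return total
```

`min_combine_time([5, 2, 8])` gives 22, matching the question's best sequence, while `min_combine_time([1, 1, 1, 1])` gives 8, beating the 9 produced by the sorted-prefix formula — the counterexample discussed above. Each of the n−1 merges costs O(log n) heap work, so the whole thing runs in O(n log n).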
{ "domain": "codereview.stackexchange", "id": 36213, "tags": "java, time-limit-exceeded, interview-questions" }
Classification - Divide the interval (0 - 1] into, let's say, 100 classes and use each class to make a calculation
Question: class-1 represents 0.01, class-i represents 0.01*i, class-100 represents 1.00. Thus, when the classifier predicts class-y but it should have predicted class-(y+1), there is only a small error, so we can accept class-y. Is there a way to express this behaviour in a neural network? Maybe with a distribution or something? PS: Not interested in regression. Answer: Correct me if I am wrong, but if I understand your question correctly, what you want is a classifier such that classes close to each other (say class 2 and class 3) are preferable to those far away (class 2 and class 99). If this is the case, this problem is called "Ordinal Categorical Classification". I was working on a similar problem a while ago, and I found this loss function during my research. I ended up not using it, so I don't really know how well it works, but anyway, hope that helps.
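One standard way to express this ordinal structure in a network (my own sketch of the cumulative-binary encoding, sometimes called ordinal regression via K−1 binary classifiers; the function names are mine) is to replace the single 100-way softmax with 99 "is the class greater than k?" sigmoid outputs, so that near-miss predictions differ from the target on only a few outputs:

```python
def to_ordinal_targets(class_index, num_classes=100):
    """Class k (0-based) -> num_classes-1 cumulative binary targets: first k are 1, rest 0."""
    return [1 if i < class_index else 0 for i in range(num_classes - 1)]

def decode_ordinal(outputs, threshold=0.5):
    """Predicted class = number of 'greater than k' thresholds the outputs pass."""
    return sum(1 for p in outputs if p > threshold)
```

Training each output with binary cross-entropy then penalizes a prediction roughly in proportion to its distance from the true class, which is exactly the "accept class-y for class-(y+1)" behaviour asked about.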
{ "domain": "datascience.stackexchange", "id": 6339, "tags": "machine-learning, classification, predictive-modeling, classifier" }
Function that converts decimal to binary
Question: I have a function that converts an integer to its binary representation. I am wondering if there is any way I can improve the function. public List<string> Conversion(int x) { var bitConversion = new List<string>(); var result = x; while (result >= 0) { if (result == 0) { bitConversion.Add("0"); break; } bitConversion.Add((result % 2).ToString(CultureInfo.InvariantCulture)); result = result / 2; } bitConversion.Reverse(); return bitConversion; } Answer: One major improvement would be to just use what is supplied through .NET: Convert.ToString(int, 2); where int is your supplied argument. Convert.ToString Method (Int32, Int32) Converts the value of a 32-bit signed integer to its equivalent string representation in a specified base. Note that this returns you a string value like 10000110100001, which is the representation of a binary number. In your code you store them as string representations and in separate entries in your collection, which is not the way a number should be stored. It's like creating an array with values "1" and "2" to represent the number 12. If this is your intention then you can always work towards that of course but it might be an indication that something else is wrong for needing it like that. However if you want to stay with your own implementation, there are a few things you could change around: public List<string> Conversion2(int x) { var bitConversion = new List<string>(); while (x >= 0) { if (x == 0) { bitConversion.Add("0"); break; } bitConversion.Add((x % 2).ToString(CultureInfo.InvariantCulture)); x /= 2; } bitConversion.Reverse(); return bitConversion; } Remove the unnecessary result variable Contract x = x / 2 to x /= 2 (compound assignment operator)
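The same divide-by-two idea, sketched in Python (my own illustration, not part of the review), returning a single string rather than a list of one-character digit strings:

```python
def to_binary(x):
    """Binary representation of a non-negative integer, as a string."""
    if x == 0:
        return "0"
    bits = []
    while x > 0:
        bits.append(str(x % 2))  # least significant bit first
        x //= 2
    bits.reverse()               # most significant bit first
    return "".join(bits)
```

For any positive integer this matches Python's built-in `bin(x)[2:]`, which plays the same role as `Convert.ToString(x, 2)` in .NET.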
{ "domain": "codereview.stackexchange", "id": 13105, "tags": "c#, performance, converting" }
Why is a meteorite impact on the moon visible as white light?
Question: You can watch a video of a meteorite impacting the moon at the npr.org website. Why does the impact produce white light? I understand that the impact is like an explosion, but there is no air on the moon to burn material. If it was the result of superheating rock from the impact, then why does it fade so fast? White light emitted by heated rocks should last longer, as it takes time to cool. We can see meteorites breaking up in Earth's atmosphere as a streak of light, but there is no atmosphere on the moon. So why isn't it just a big black puff of smoke? Answer: The impactor's velocity of tens of kilometers per second provides enough energy to heat the impactor and parts of the target to several thousand kelvin, so that parts are converted to plasma, or at least to vapor. According to Planck's law, the color at these temperatures is white or bluish. According to the Stefan–Boltzmann law, the total emitted energy is proportional to the fourth power of the absolute temperature. Hence material which isn't heated to thousands of degrees, and may glow yellowish or reddish, is too dim to be noticeable in comparison. Since most of the heated ejecta are finely distributed, they cool down rapidly by the emitted radiation, and also after contact with the surface. After cooling there will certainly be some fine dust, which still has to fall back to the Moon's surface, but you won't see this on a video from a distance, since it's cooled down and spread over a wider area.
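The $T^4$ scaling is what makes the contrast so stark. A tiny numeric illustration (the temperatures are made-up round numbers, chosen only to show the scale of the effect):

```python
def radiated_power_ratio(t_hot, t_cool):
    # Stefan-Boltzmann law: total emitted power per unit area goes as
    # T^4, so plasma at ~6000 K outshines merely warm rock at ~1500 K
    # by a factor of (6000/1500)^4 = 256 per unit area.
    return (t_hot / t_cool) ** 4
```

That factor of hundreds is why only the briefly superheated material registers on video, and why the flash fades as soon as the ejecta cool by a modest factor.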
{ "domain": "astronomy.stackexchange", "id": 283, "tags": "the-moon, meteorite" }
K-Means clustering for mixed numeric and categorical data
Question: My data set contains a number of numeric attributes and one categorical. Say, NumericAttr1, NumericAttr2, ..., NumericAttrN, CategoricalAttr, where CategoricalAttr takes one of three possible values: CategoricalAttrValue1, CategoricalAttrValue2 or CategoricalAttrValue3. I'm using default k-means clustering algorithm implementation for Octave. It works with numeric data only. So my question: is it correct to split the categorical attribute CategoricalAttr into three numeric (binary) variables, like IsCategoricalAttrValue1, IsCategoricalAttrValue2, IsCategoricalAttrValue3 ? Answer: The standard k-means algorithm isn't directly applicable to categorical data, for various reasons. The sample space for categorical data is discrete, and doesn't have a natural origin. A Euclidean distance function on such a space isn't really meaningful. As someone put it, "The fact a snake possesses neither wheels nor legs allows us to say nothing about the relative value of wheels and legs." (from here) There's a variation of k-means known as k-modes, introduced in this paper by Zhexue Huang, which is suitable for categorical data. Note that the solutions you get are sensitive to initial conditions, as discussed here (PDF), for instance. Huang's paper (linked above) also has a section on "k-prototypes" which applies to data with a mix of categorical and numeric features. It uses a distance measure which mixes the Hamming distance for categorical features and the Euclidean distance for numeric features. A Google search for "k-means mix of categorical data" turns up quite a few more recent papers on various algorithms for k-means-like clustering with a mix of categorical and numeric data. (I haven't yet read them, so I can't comment on their merits.) Actually, what you suggest (converting categorical attributes to binary values, and then doing k-means as if these were numeric values) is another approach that has been tried before (predating k-modes). (See Ralambondrainy, H. 
1995. A conceptual version of the k-means algorithm. Pattern Recognition Letters, 16:1147–1157.) But I believe the k-modes approach is preferred for the reasons I indicated above.
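Huang's mixed dissimilarity (the "k-prototypes" measure mentioned above) can be sketched as squared Euclidean distance on the numeric features plus a weighted count of categorical mismatches. This is a bare-bones illustration; the weight `gamma` and the function name are my own choices:

```python
def mixed_distance(x_num, x_cat, y_num, y_cat, gamma=1.0):
    # Squared Euclidean distance on the numeric part...
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, y_num))
    # ...plus a Hamming-style mismatch count on the categorical part,
    # scaled by gamma to balance the two contributions.
    categorical = sum(a != b for a, b in zip(x_cat, y_cat))
    return numeric + gamma * categorical
```

Choosing `gamma` matters: too small and the categorical attribute is effectively ignored, too large and it dominates the numeric features.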
{ "domain": "datascience.stackexchange", "id": 5951, "tags": "data-mining, clustering, octave, k-means, categorical-data" }
Structure of carbon fiber
Question: What is the structure of carbon fiber? Wikipedia doesn't give much insight. The atomic structure of carbon fiber is similar to that of graphite, consisting of sheets of carbon atoms (graphene sheets) arranged in a regular hexagonal pattern, the difference being in the way these sheets interlock. Graphite is a crystalline material in which the sheets are stacked parallel to one another in regular fashion. The intermolecular forces between the sheets are relatively weak Van der Waals forces, giving graphite its soft and brittle characteristics. The Wikipedia description raises the question of how exactly the sheets of graphene are interlocked. Are there pi-stacking interactions? Answer: The key difference between graphite and carbon fibers is the degree (or range) of ordering. In graphite there are large, extensive planes of carbon stacked one atop another, resulting in long-range order. In the fibers, there are small ribbons of graphite-like material oriented along the fiber axis (it is not all carbon; due to the way the fibers are prepared, there is still some nitrogen present in the molecular array), and there is only short-range order. As a consequence, graphite, with its extensive three-dimensional planar array of ordered carbon atoms, is crystalline, whereas the fibers, being ordered over a much smaller range, are amorphous. The amorphous nature of the fiber allows the smaller ribbons to bend and fold, and mechanically interlock with one another. This mechanical interlocking is what serves to increase the fiber's strength. See here for a very nice and concise description of how the fibers are prepared. The pictures in the link make clear how the short-range ordering in the fibers results in a highly folded and interlocked structure. Further heating of the fiber will convert it to graphite.
{ "domain": "chemistry.stackexchange", "id": 2218, "tags": "materials, aromatic-compounds, carbon-allotropes" }
What would Hawking radiation look like from inside the event horizon?
Question: Let’s say you fall into a rotating black hole. The inner horizon of the black hole is an infinite-blueshift surface, so you should be able to observe events from the arbitrarily far future before reaching it. I’ve heard that Hawking radiation results in positive-energy particles escaping the black hole while causing negative-energy particles to fall in. What would this look like from the inside? Answer: … so you should be able to observe events from the arbitrarily far future before reaching it. This is wrong for “almost inertial” observers inside the outer horizon (i.e. observers whose trajectories do not deviate much from inertial trajectories of bodies falling into the black hole). Recall that the black hole inner horizon has two components, “outgoing” and “ingoing”. The Cauchy horizon, dependent on the entirety of future history, is the ingoing component, but a particle falling from the outside would cross the outgoing component. So, unless you are in a spaceship capable of achieving relativistic velocities in a short time of order $r_g/c$, the ingoing component is inaccessible to you. On the other hand, if you do have a spaceship capable of ultrarelativistic velocities, there is no need for a black hole if you wish to see the distant future: the ordinary flat-space twin paradox suffices. The misconception ultimately originates from the use of Penrose diagrams for reasoning about spacetime structure. While such diagrams are great for considering causality, they depend on the conformal structure of spacetime and do not retain information about lengths, durations, momentum changes, etc. Having said that, the question of what happens to the Hawking radiation at the inner horizon has indeed been studied by several groups in the last few years: one, two, three, four, five. The papers convey a picture of quantum divergences that observers would encounter near the (would-be) inner horizon, independent of classical instabilities developing there.
The most accessible source would be the essay McMaken, T. (2023). Pancakification and negative Hawking temperature, arXiv:2305.09019. Here are some quotes: … one of the key insights we wish to convey here is that the miniscule levels of Hawking radiation that manage to leak out to infinity are nothing compared to the roiling atmosphere of quantum radiation near the inner horizon. An observer falling into a black hole will be met with an increasingly dense atmosphere of Hawking radiation as they plunge through the interior until they reach a wall of infinite energy at the inner horizon. But once the observer approaches the inner horizon, the spectrum no longer appears Planckian. Instead, the exponential drop-off at high frequencies inverts, leading to an ultraviolet divergence. As shown by the dashed curves, these spectra approximately match the magnitudes one would obtain for a Planckian distribution with negative temperatures $\kappa_\text{eff} ^{−}/(2\pi)$ and $\kappa_\text{eff} ^{+}/(2\pi)$ at the left and right inner horizons, respectively …
{ "domain": "physics.stackexchange", "id": 96695, "tags": "black-holes, event-horizon, hawking-radiation, kerr-metric" }
Maxwell equations from Euler-Lagrange equation: I keep obtaining the wrong equation
Question: I'm deriving the Maxwell equations from this Lagrangian: $$ \mathscr{L} \, = \, -\frac{1}{4} F^{\mu \nu}F_{\mu \nu} + J^\mu A_\mu \tag{1}$$ My signature is $$(+ - - -)\tag{2}$$ and $$ F^{\mu \nu} \, = \,\left(\begin{matrix}0 & -E_x & -E_{y} & -E_{z} \\ E_{x} & 0 & -B_{z} & B_{y} \\ E_y & B_{z} & 0 & -B_x \\ E_z & -B_{y} & B_{x} & 0\end{matrix}\right)\tag{3} $$ My procedure is almost exactly the same as this one: https://physics.stackexchange.com/a/14854/121554 But he has a $+\frac{1}{4}F^{\mu \nu}F_{\mu \nu}$ in the Lagrangian. So, while he obtains the right equation $$\partial_\mu F^{\mu \nu}\, = \, J^\nu,\tag{4}$$ I carry a minus sign till the end, and my final equation is $$ J^\nu \,=\, -\partial_\mu F^{\mu \nu}\, = \, \partial_\mu F^{\nu \mu}\, , \tag{5} $$ which is clearly wrong if you write it down explicitly in terms of the fields, the charge, and the currents. Is my Lagrangian wrong for my metric and my definition of the electromagnetic field tensor? We spent some time discussing that Lagrangian in class, and the professor stressed the importance of that minus sign for getting a positive kinetic term. Am I missing something? I can write down all my calculations if requested, but they are basically the same as in the link I provided above. Answer: I suspect the error is in your source term: with reference to Jackson's "Classical Electrodynamics", the correct Lagrangian density is $$ {\cal L}=-\frac{1}{16\pi} F_{\alpha\beta}F^{\alpha\beta}-\frac{1}{c}J_\alpha A^\alpha\, , $$ which differs from yours by a sign in the source term. (The other factors $1/16\pi$ and $1/c$ are linked to the use of Gaussian units.) The article you link to also has the same sign for both terms in the Lagrangian density.
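For completeness, a sketch of the sign bookkeeping with the question's metric and tensor conventions, taking the source term with the minus sign as the answer recommends:

```latex
\mathscr{L} = -\tfrac{1}{4}F^{\mu\nu}F_{\mu\nu} - J^\mu A_\mu,
\qquad
\frac{\partial\mathscr{L}}{\partial(\partial_\mu A_\nu)} = -F^{\mu\nu},
\qquad
\frac{\partial\mathscr{L}}{\partial A_\nu} = -J^\nu,
```

so the Euler–Lagrange equation $\partial_\mu \frac{\partial\mathscr{L}}{\partial(\partial_\mu A_\nu)} - \frac{\partial\mathscr{L}}{\partial A_\nu} = 0$ gives $-\partial_\mu F^{\mu\nu} + J^\nu = 0$, i.e. exactly $\partial_\mu F^{\mu\nu} = J^\nu$ with no stray sign.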
{ "domain": "physics.stackexchange", "id": 35856, "tags": "homework-and-exercises, electromagnetism, lagrangian-formalism, conventions, maxwell-equations" }
How can a transformer produce a high voltage and a low current?
Question: I understand that in ideal transformers, power is conserved. Because of this, the product of voltage and current in the secondary winding is a constant. This means that voltage and current are inversely related, which seems unintuitive because they are directly related by Ohm's law. Shouldn't the emf induced in the secondary winding by the alternating magnetic flux be directly related to the current by some constant, such as the resistance of the secondary winding? I also came across a term known as impedance that seems to be related to the question. Wondering if it is of any relevance. Answer: I understand that in ideal transformers, power is conserved. Because of this, the product of voltage and current in the secondary winding is a constant. This isn't true. The expression of power conservation for an ideal transformer is $$V_s\cdot I_s = V_p\cdot I_p$$ There is no requirement for $V_s\cdot I_s$ to be equal to a constant. This means that voltage and current are inversely related, which seems unintuitive because they are directly related by Ohm's law. Power conservation doesn't imply that the secondary voltage and current are inversely related. Further, the secondary voltage and current (for an ideal transformer) are related by Ohm's law only if the load is a resistor but not otherwise. For example, when the load is a resistor of resistance $R$ then... Ohm's Law $$V_s = R\cdot I_s,\quad V_s\cdot I_s = \frac{V^2_s}{R}$$ Power conservation $$V_p \cdot I_p = V_s\cdot I_s$$ Ideal transformer voltage relation $$V_s = N\cdot V_p$$ where $N = \frac{N_s}{N_p}$. Thus $$I_p = N^2\frac{V_p}{R} = \frac{V_p}{R/N^2}$$ That is, when the load is a resistor, both the primary and secondary voltage and current are related by Ohm's law and power is conserved. See that the load resistance $R$ connected to the secondary appears as a resistance $R/N^2$ to the circuit connected to the primary.
Shouldn't the emf induced in the secondary winding by the alternating magnetic flux be directly related to the current by some constant, such as the resistance of the secondary winding? The resistance of the secondary winding is zero for an ideal transformer. If the secondary winding has non-zero resistance, power conservation does not hold, i.e., the power delivered to the load is less than the power delivered to the primary.
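The reflected-resistance relation above can be made concrete with numbers: an ideal transformer presents a secondary load $R$ to the primary circuit as $R/N^2$. A small numeric sketch (the turns ratio and component values are invented for illustration):

```python
def reflected_resistance(r_load, n_s, n_p):
    # A load R on the secondary looks like R / N^2 from the primary,
    # with turns ratio N = Ns / Np.
    n = n_s / n_p
    return r_load / n ** 2

def primary_current(v_p, r_load, n_s, n_p):
    # Ohm's law applied to the reflected resistance.
    return v_p / reflected_resistance(r_load, n_s, n_p)
```

With a 1:10 step-up (N = 10), 120 V on the primary, and a 1200 ohm load: the secondary sees 1200 V at 1 A while the primary sees 120 V at 10 A, and both sides carry 1200 W. High voltage pairs with low current on the secondary without any conflict with Ohm's law, because Ohm's law relates each side's own voltage and current through its own resistance.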
{ "domain": "physics.stackexchange", "id": 51255, "tags": "electric-circuits" }
when will move_base_msgs be catkinized?
Question: In my catkin package ,if I import "move_base_msgs", then "ImportError:No module named move_base_msgs". I know it is because it has not be catkinized,so I want to know when it will be completed. My catkin is thirsty for this module. Originally posted by bobliao on ROS Answers with karma: 46 on 2013-03-28 Post score: 0 Answer: The navigation stack is not yet catkinized. Do you have a package that depends on just the messages, but not the rest of that stack? I don't see any current catkin work in that repository: https://github.com/ros-planning/navigation Originally posted by joq with karma: 25443 on 2013-03-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by joq on 2015-11-20: For future readers: note that this question was posted back in 2013. The move_base_msgs package was converted to catkin in the ROS Hydro distribution and is still basically the same for Indigo and Jade.
{ "domain": "robotics.stackexchange", "id": 13592, "tags": "ros" }
Maximum power transfer proof
Question: I have the following homework problem. Consider a power supply with fixed emf $ε$ and internal resistance $r$ causing current in a load resistance $R$. In this problem, $R$ is fixed and $r$ is a variable. The efficiency is defined as the energy delivered to the load divided by the energy delivered by the emf. When the internal resistance is adjusted for maximum power transfer, what is the efficiency? The answer is 50% but I'm confused how this is calculated. Here is my thought process so far. The power dissipated in the load is $I^2R$ and $\displaystyle I = \frac{ε}{r+R}$ so $\displaystyle P = \frac{ε^2R}{(r+R)^2} = \frac{ε^2}{\frac{r^2}{R} + 2r + R}$ and so power transferred to the load will be maximum when $\displaystyle \frac{r^2}{R} + 2r + R$ is a minimum. Taking the first derivative $\displaystyle \frac {d}{dr}(\frac {r^2}{R} + 2r + R) = (\frac{2r}{R} + 2)$ and this equals 0 when r = -R. The second derivative is $\displaystyle \frac{2}{R} > 0$ so this point would be a minimum and therefore max power transferred when r = -R. But having $r = -R$ doesn't make sense to me. From what I read, for maximum power transfer $r$ should equal $+R$. So I've probably done something wrong in my calculations or assumptions in the previous paragraph. Also wouldn't max power be transferred when the internal resistance is as close to zero as possible? Can someone please show me a proof that shows why $r = R$ for max power transfer and then how to calculate the efficiency. Answer: Are you sure you mean the internal resistance "r"? The internal resistance typically can not be adjusted, often this question is phrased in terms of the load resistance "R". The power dissipated in the external load is: $$ P=\frac{V^2 R}{(r+R)^2}\;, $$ which you need to maximize. If you maximize with respect to R, you find R=r... If you maximize with respect to r, you find r=0; algebraically you found a zero at r=-R, but r can not equal -R! 
It has a physical boundary at r=0, which gives the maximum for fixed R (you are maximizing on a fixed interval; you need to check the endpoints!). However, are you sure you want to maximize with respect to "r", or do you actually mean "R"? Finally, divide the maximized load power by the total power dissipated, which is $$ \frac{V^2}{(r+R)} $$ evaluated at the correct resistance, to determine the efficiency.
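Both claims check out numerically: maximizing over the load resistance puts the peak at R = r, where the efficiency is 50%, while maximizing over the internal resistance just drives r toward 0. A quick sketch (the voltage and resistance values are arbitrary):

```python
def load_power(v, r_int, r_load):
    # P = I^2 * R with I = V / (r + R)
    i = v / (r_int + r_load)
    return i ** 2 * r_load

def efficiency(r_int, r_load):
    # Load power over total power: I^2*R / (I^2*(r+R)) = R / (r + R).
    return r_load / (r_int + r_load)
```

Scanning `r_load` for fixed `r_int = 5` puts the peak at `r_load = 5`, where the efficiency is exactly 0.5; for fixed `r_load`, `load_power` only grows as `r_int` shrinks toward 0, matching the answer.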
{ "domain": "physics.stackexchange", "id": 20545, "tags": "homework-and-exercises, electricity, electrical-resistance, power, efficient-energy-use" }
Why is a bigger stream of water in a metal sink quieter than a smaller one?
Question: I have a metal sink. I don't want to waste much water, so I prefer smaller streams of water. But when the stream is small it makes a big noise. However, when I increase the flow, the tapping sound becomes absent and I can only hear the sound of the water flow. My question is: why does the additional water reduce the noise of flowing water? I believe it has something to do with damping, but I can't imagine the process. Probably the water particles in the middle don't make much noise because they are surrounded by other particles and sound waves are trapped inside. But what about the water around the stream? The stream's diameter is big, and therefore there are more water particles on the borders than there are in the smaller stream. And more particles should make more noise. I can make a video of the process I've described if I didn't make myself clear enough. Tell me in the comments if you need it. Answer: I tried this on my kitchen sink and it's true. A narrow stream almost separated into many dollops makes a loud resounding thudding, but a thick fast column of water is almost silent on the metal. The reason is that the narrow, weaker, less continuous stream allows the metal to vibrate. The thick, weighty, continuous stream deforms the metal and keeps it that way, doesn't allow the metal to spring back into shape, and there is no vibration. The greater weight and pressure of the thick stream damps the resonance of the metal. If you pressed a drumstick into a drumhead continuously, there would be no vibration. But when you allow the drumstick to bounce on the drumhead, there's vibration and sound.
{ "domain": "physics.stackexchange", "id": 22014, "tags": "water" }
How to break and come out of the for loop?
Question: Hello guys ! I am working on Morse and ROS- Hydro version. I want to break and come out of the for loop when my flag is set to 1 ( and this flag is set in opencv part) once the flag is set, it should enter an another function and print 'flag is set' The following is the code:- #!/usr/bin/env python import roslib import sys import time import math import rospy import cv2 import time #import cv2.cv as cv import numpy as np from std_msgs.msg import Float64 from geometry_msgs.msg import PoseStamped from geometry_msgs.msg import Vector3 from geometry_msgs.msg import Twist from geometry_msgs.msg import Wrench from geometry_msgs.msg import Pose from geometry_msgs.msg import Point from sensor_msgs.msg import NavSatFix from sensor_msgs.msg import CameraInfo from sensor_msgs.msg import Image from sensor_msgs.msg import RegionOfInterest from cv_bridge import CvBridge, CvBridgeError from rospy.numpy_msg import numpy_msg a=0 b=0 c=0 timer = 0 flag =0 #def permanent_stop(): # my_value = 1 # stop(x=5) # return 1 def bridge_opencv(): image_pub = rospy.Publisher("quadrotor/videocamera1/camera_info",Image) cv2.namedWindow("Image window", 1) image_sub = rospy.Subscriber("quadrotor/videocamera1/image",Image, callback) def callback(data): global timer global dis global my_var1 global my_var2 global my_var3 global a global b global c bridge = CvBridge() try: cv_image = bridge.imgmsg_to_cv2(data, "bgr8") except CvBridgeError, e: print e (rows,cols,channels) = cv_image.shape if cols > 60 and rows > 60 : cv2.circle(cv_image, (50,50), 10, 255) #converting bgr to hsv hsv=cv2.cvtColor(cv_image,cv2.COLOR_BGR2HSV) # define range of blue color in HSV lower_blue = np.array([60,0,0],dtype=np.uint8) upper_blue = np.array([255,255,255],dtype=np.uint8) # Threshold the HSV image to get only blue colors mask = cv2.inRange(hsv, lower_blue, upper_blue) new_mask = mask.copy() # Bitwise-AND mask and original image res = cv2.bitwise_and(cv_image,cv_image, mask= mask) #removing noise kernel = 
np.ones((12,12),np.uint8) new_mask = cv2.morphologyEx(new_mask, cv2.MORPH_CLOSE, kernel) new_mask = cv2.morphologyEx(new_mask, cv2.MORPH_OPEN, kernel) contours, hierarchy = cv2.findContours(new_mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(cv_image, contours, -1, (0,0,0), 3) if(contours): cv2.drawContours(cv_image, contours, -1, (0,0,0), 3) cnt = contours[0] area = cv2.contourArea(cnt) #print area if area > 6000: print('i found the object') dis = timer*0.4 #print 'dis',dis my_var1= a+dis my_var2 = b my_var3 = c flag = 1 coorda = Float64() coordb = Float64() coordc = Float64() #my_value2 = False cv2.imshow('mask',mask) cv2.imshow('res',res) cv2.imshow("Image window", cv_image) cv2.waitKey(3) def stop(x): now=time.time() global timer print 'stop' cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: motion = Twist() motion.linear.x = +0.0 motion.linear.y = +0.0 motion.linear.z = +0.0 cmd.publish(motion) end = time.time() timer = round(end-now) def left(): now = time.time() global timer print 'left' global b cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=10: motion = Twist() motion.linear.y = -0.4 cmd.publish(motion) end = time.time() timer = round(end-now) b = b-4 def right(): now=time.time() global timer print 'right' global b cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=5: motion = Twist() motion.linear.y = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) b = b+2 def straight(x): now = time.time() global timer print 'straight' global a cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: if flag != 1: motion = Twist() motion.linear.x = -0.4 cmd.publish(motion) end = time.time() timer = round(end-now) print flag elif flag ==1: print 'flag',flag print('i was here') break a = a+16 def back(x): now = time.time() global timer print 'back' global a cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: 
motion = Twist() motion.linear.x = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) a = a-16 def up(): now=time.time() global timer print 'up' global c cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=10: motion = Twist() motion.linear.z = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) c=c+4 def pilot(): rospy.init_node("pilot") puba = rospy.Publisher("box_positiona", Float64) pubb = rospy.Publisher("box_positionb", Float64) pubc = rospy.Publisher("box_positionc", Float64) print "my inital position" bridge_opencv() if flag != 1: print('i am finding..') up() stop(x=5) left() for i in range(1,4): if flag != 1: stop(x=20) else: break if flag !=1: straight(x=40) else: break if flag !=1: stop(x=20) else: break if flag !=1: right() else: break if flag !=1: stop(x=20) else: break if flag !=1: back(x=40) else: break if flag !=1: stop(x=20) else: break if flag !=1: right() else: break print over print my_value if flag ==1: print("i am true") stop(x=50) print('hi') #rospy.spin() # this will block untill you hit Ctrl+C if __name__ == '__main__': pilot() Looking forward for answers ! Thanks ! Originally posted by jashanvir on ROS Answers with karma: 68 on 2014-06-27 Post score: 0 Answer: I have tried this and it worked for me. 
#!/usr/bin/env python import roslib import sys import time import math import rospy import cv2 import time #import cv2.cv as cv import numpy as np from std_msgs.msg import Float64 from geometry_msgs.msg import PoseStamped from geometry_msgs.msg import Vector3 from geometry_msgs.msg import Twist from geometry_msgs.msg import Wrench from geometry_msgs.msg import Pose from geometry_msgs.msg import Point from sensor_msgs.msg import NavSatFix from sensor_msgs.msg import CameraInfo from sensor_msgs.msg import Image from sensor_msgs.msg import RegionOfInterest from cv_bridge import CvBridge, CvBridgeError from rospy.numpy_msg import numpy_msg a=0 b=0 c=0 timer = 0 def bridge_opencv(): image_pub = rospy.Publisher("quadrotor/videocamera1/camera_info",Image) cv2.namedWindow("Image window", 1) image_sub = rospy.Subscriber("quadrotor/videocamera1/image",Image, callback) def callback(data): global timer global dis global my_var1 global my_var2 global my_var3 global a global b global c global flag bridge = CvBridge() try: cv_image = bridge.imgmsg_to_cv2(data, "bgr8") except CvBridgeError, e: print e (rows,cols,channels) = cv_image.shape if cols > 60 and rows > 60 : cv2.circle(cv_image, (50,50), 10, 255) #converting bgr to hsv hsv=cv2.cvtColor(cv_image,cv2.COLOR_BGR2HSV) # define range of blue color in HSV lower_blue = np.array([60,0,0],dtype=np.uint8) upper_blue = np.array([255,255,255],dtype=np.uint8) # Threshold the HSV image to get only blue colors mask = cv2.inRange(hsv, lower_blue, upper_blue) new_mask = mask.copy() # Bitwise-AND mask and original image res = cv2.bitwise_and(cv_image,cv_image, mask= mask) #removing noise kernel = np.ones((12,12),np.uint8) new_mask = cv2.morphologyEx(new_mask, cv2.MORPH_CLOSE, kernel) new_mask = cv2.morphologyEx(new_mask, cv2.MORPH_OPEN, kernel) contours, hierarchy = cv2.findContours(new_mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(cv_image, contours, -1, (0,0,0), 3) if(contours): cv2.drawContours(cv_image, contours, -1, 
(0,0,0), 3) cnt = contours[0] area = cv2.contourArea(cnt) #print area if area > 500: dis = timer*0.4 my_var1= a+dis my_var2 = b my_var3 = c flag = 1 cv2.imshow('mask',mask) cv2.imshow('res',res) cv2.imshow("Image window", cv_image) cv2.waitKey(3) def stop(x): global flag now=time.time() global timer print 'stop' cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: motion = Twist() motion.linear.x = +0.0 motion.linear.y = +0.0 motion.linear.z = +0.0 cmd.publish(motion) end = time.time() timer = round(end-now) def left(): global flag now = time.time() global timer print 'left' global b cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=10: if flag !=1: motion = Twist() motion.linear.y = -0.4 cmd.publish(motion) end = time.time() timer = round(end-now) if flag ==1: print 'flag', flag print('i was here') break if flag !=1: b = b-4 def right(): global flag now=time.time() global timer print 'right' global b cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=5: if flag !=1: motion = Twist() motion.linear.y = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) if flag == 1: print 'flag', flag print('i was here') break if flag !=1: b = b+2 def straight(x): global flag now = time.time() global timer print 'straight' global a cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: if flag != 1: motion = Twist() motion.linear.x = -0.4 cmd.publish(motion) end = time.time() timer = round(end-now) if flag ==1: print 'flag',flag print('i was here') break if flag !=1: a = a+16 def back(x): global flag now = time.time() global timer print 'back' global a cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=x: if flag !=1: motion = Twist() motion.linear.x = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) if flag ==1: print 'flag', flag print('i was here') break if flag !=1: a = a-16 def up(): global flag now=time.time() global 
timer print 'up' global c cmd = rospy.Publisher("/quadrotor/rotorcraftvelocity", Twist) while timer !=10: if flag !=1: motion = Twist() motion.linear.z = +0.4 cmd.publish(motion) end = time.time() timer = round(end-now) if flag ==1: print 'flag', flag print('i was here') break if flag !=1: c=c+4 def shutdown(): rospy.loginfo("Shutting down head tracking node...") def print_abc(): global my_var1 global my_var2 global my_var3 print ('a',my_var1) print('b',my_var2) print('c',my_var3) def pilot(): global my_var1 global my_var2 global my_var3 rospy.init_node("pilot") puba = rospy.Publisher("box_positiona", Float64) pubb = rospy.Publisher("box_positionb", Float64) pubc = rospy.Publisher("box_positionc", Float64) print "my inital position" global flag flag = 0 bridge_opencv() if flag != 1: print('i am finding..') up() stop(x=5) left() for i in range(1,4): if flag != 1: stop(x=20) straight(x=40) stop(x=20) right() stop(x=20) back(x=40) stop(x=20) right() else: break print_abc() stop(20) coorda = Float64() coordb = Float64() coordc = Float64() coorda.data = my_var1 coordb.data = my_var2 coordc.data = my_var3 while not rospy.is_shutdown(): if flag == 1: puba.publish(coorda) pubb.publish(coordb) pubc.publish(coordc) #rospy.spin() # this will block untill you hit Ctrl+C if __name__ == '__main__': try: pilot() except rospy.ROSInterruptException: rospy.loginfo("Head tracking node is shut down.") Originally posted by jashanvir with karma: 68 on 2014-06-30 This answer was ACCEPTED on the original site Post score: 0
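The accepted answer works by checking the shared flag at the top of every step and breaking. The same intent can be expressed more compactly with a `threading.Event` shared between the callback and the sequence loop; this is a minimal, ROS-free sketch with invented step names, not a drop-in replacement for the code above:

```python
import threading

stop_event = threading.Event()  # the vision callback would call stop_event.set()

def run_steps(steps):
    # Execute the steps in order, but abandon the whole sequence as soon
    # as the event is set -- one check per step instead of paired
    # if/else-break blocks around every call.
    completed = []
    for step in steps:
        if stop_event.is_set():
            break
        completed.append(step)  # stand-in for stop()/left()/straight()/...
    return completed
```

Because `Event` is thread-safe, it also avoids the `global flag` bookkeeping when the callback runs on a different thread than the pilot loop.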
{ "domain": "robotics.stackexchange", "id": 18412, "tags": "python, callback, opencv2, nodes, cv-bridge" }
obtaining images from a camera
Question: I'm relatively new to ROS, so please help me connect a camera with ROS. babu@ubuntu:~/ros_workspace/cv_test/launch$ roslaunch cv_test cam.launch ... logging to /home/babu/.ros/log/83437bc6-8d6d-11e1-a097-00269eec6613/roslaunch-ubuntu-6224.log Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server http://ubuntu:47219/ SUMMARY ======== PARAMETERS * /uvc_cam_node/height * /uvc_cam_node/width * /rosdistro * /rosversion * /uvc_cam_node/frame_rate * /uvc_cam_node/device NODES / uvc_cam_node (uvc_cam/uvc_cam_node) dynamic_reconfigure (dynamic_reconfigure/reconfigure_gui) ROS_MASTER_URI=http://localhost:11311 core service [/rosout] found process[uvc_cam_node-1]: started with pid [6242] process[dynamic_reconfigure-2]: started with pid [6243] [ INFO] [1335205172.229326077]: using default calibration URL [ INFO] [1335205172.229482182]: camera calibration URL: file:///home/babu/.ros/camera_info/camera.yaml [ERROR] [1335205172.229596017]: Unable to open camera calibration file [/home/babu/.ros/camera_info/camera.yaml] [ WARN] [1335205172.229652605]: Camera calibration file /home/babu/.ros/camera_info/camera.yaml not found.
[ INFO] [1335205172.358249625]: opening uvc_cam at 320x240, 20.000000 fps opening /dev/video0 capabilities 4000001 pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)' discrete: 640x480: 1/24 discrete: 320x240: 1/24 discrete: 160x120: 1/24 camera formats supported format 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)' discrete: 640x480: 1/24 discrete: 320x240: 1/24 discrete: 160x120: 1/24 int (Brightness, 0, id = 980900): -10 to 10 (1) int (Contrast, 0, id = 980901): 0 to 20 (1) int (Saturation, 0, id = 980902): 0 to 10 (1) int (Gamma, 0, id = 980910): 100 to 200 (1) menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1) 0: Disabled 1: 50 Hz 2: 60 Hz int (Sharpness, 0, id = 98091b): 0 to 10 (1) setting control 9a0901 unable to get control : Invalid argument [ERROR] [1335205172.411106873]: Problem setting exposure. Exception was unable to get control setting control 9a0902 unable to get control : Invalid argument [ERROR] [1335205172.411317500]: Problem setting absolute exposure. Exception was unable to get control setting control 98091b current value of 98091b is 10 new value of 98091b is 142 setting control 98090c unable to get control : Invalid argument [ERROR] [1335205172.411917887]: Problem setting white balance temperature. Exception was unable to get control setting control 980913 unable to get control : Invalid argument [ERROR] [1335205172.412057032]: Problem setting gain. 
Exception was unable to get control setting control 980902 current value of 980902 is 10 new value of 980902 is 50 setting control 980901 current value of 980901 is 20 new value of 980901 is 50 setting control 980900 current value of 980900 is 10 new value of 980900 is 66 select timeout in grab [ WARN] [1335205173.462441519]: Could not grab image [ WARN] [1335205173.977795152]: [camera] calibration does not match video mode (publishing uncalibrated data) What do these errors mean, and how can they be avoided? Originally posted by paul91 on ROS Answers with karma: 1 on 2012-04-23 Post score: 0 Original comments Comment by Ryan on 2012-04-23: What are you wondering? Comment by karthik on 2012-04-23: Please explain the issue Comment by Kevin on 2012-04-23: I did a search on ros.org for uvc_cam_node and couldn't find anything ... do you mean uvc_camera or uvc_cam2? Can you provide a link to the nodes you are using? Answer: Check your video address. Originally posted by fins with karma: 76 on 2012-08-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9093, "tags": "ros" }
Generate random numbers without repetitions
Question: I want to generate a list of N different random numbers: public static List<int> GetRandomNumbers(int count) { List<int> randomNumbers = new List<int>(); for (int i=0; i<count; i++) { int number; do number = random.Next(); while (randomNumbers.Contains(number)); randomNumbers.Add(number); } return randomNumbers; } But I feel there is a better way. This do...while loop especially makes this ugly. Any suggestions on how to improve this? Answer: Updated answer in response to bounty: See Is that your final answer? at the end, and other changes - basically the answer is significantly rewritten. To break your problem down into requirements: you need a set of random numbers the numbers need to be unique the order of the returned numbers needs to be random Your current code indicates that the range of random numbers is specified by Random.Next(), which returns values in the [0 .. Int32.MaxValue) range (note, it excludes Int32.MaxValue). This is significant for the purpose of this question, because other answers have assumed that the range is configurable, and 'small'. If the range should be configurable, then the recommended algorithm would potentially be much larger. Based on those assumptions, let's do a code review... Code Style do ... while The most glaring problem here is the un-braced do-while loop. You already knew it, but this code is ugly: do number = random.Next(); while (randomNumbers.Contains(number)); It really should be braced: do { number = random.Next(); } while (randomNumbers.Contains(number)); This makes the statement clear, and significantly reduces confusion. Always use braces for 1-liners. List Construction The List class allows an initial capacity to be used. Since the capacity needs to just be count, it makes sense to initialize the list with this capacity: List<int> randomNumbers = new List<int>(count); Current Algorithm This is where the most interesting observations can be made. 
Let's analyze your current algorithm: Create a container for the results repeat until you have selected N values: Select a random value check if it has been previously selected if it is 'new', then add it to the container This algorithm will produce random values, in a random order, and with good random characteristics (no skews, biases, gaps, etc.). In other words, your results are good. The problem is with performance.... There are two performance concerns here, one small, the other large: the do-while loop to avoid collisions the List container do-while performance The do-while has a very low impact on performance... almost negligible. This is hotly debated, but the reality is that you would need a very, very large count before this becomes a problem. The reasoning is as follows: Collisions happen when the random value was previously selected. For the specified range of [0 .. Int32.MaxValue), you would need a very large count before collisions actually happened. For example, count would have to be about 65,000 before there was better than a 50% chance that there was even a single collision. In a general sense, given a Range of \$N\$, select \$M\$ numbers. If \$M < \sqrt{N}\$ then the probability of a single collision is < 50%. Since the Range is very large, the probability is small. Obviously, if the range was small, then the probabilities would be significantly affected. But the range is fixed at Int32.MaxValue, so that's OK. Additionally, if the count was large, then the probabilities would also be affected. How large would be very large? Well, you would be running into very large arrays before you run into significant problems..... I would venture you are hitting close to \$\frac{Int32.MaxValue}{2}\$ before you run into a significant issue with performance. List performance This is without doubt your largest concern. You use the randomNumbers.Contains(number) call to determine whether a value was previously selected. 
This requires a scan of all previously-selected values to determine. As mentioned, this will almost always return false, and will thus have to scan the entire list. As the count value increases, the length of time to perform the Contains will increase at a quadratic rate, \$O(n^2)\$ where n is count. This performance problem will become critical much sooner than the random-collision problem. Putting it together The problem you have in your code is that you are trying to do too much at once, you are using a List because that is your return value, when a HashSet would be better. If you break the problem down into stages, you will be able to solve things more elegantly. If you add a duplicate value to a HashSet, it does not grow, and the operation performance is not dependent on the amount of data in the HashSet (it is \$O(1)\$). You can use the Count of the HashSet to manage the data uniqueness. Once you have a clean set of unique random numbers, you can dump them into a List, then shuffle the list using an efficient shuffle. Combining these data structures, in the right way, leads to an overall \$O(n)\$ solution, which will scale fairly well. Here is some code, which works in Ideone too. Note, my C# is weak, so I have tried to make the logic clear. using System; using System.Collections.Generic; public class Test { static Random random = new Random(); public static List<int> GenerateRandom(int count) { // generate count random values. HashSet<int> candidates = new HashSet<int>(); while (candidates.Count < count) { // May strike a duplicate. candidates.Add(random.Next()); } // load them in to a list. 
List<int> result = new List<int>(); result.AddRange(candidates); // shuffle the results: int i = result.Count; while (i > 1) { i--; int k = random.Next(i + 1); int value = result[k]; result[k] = result[i]; result[i] = value; } return result; } public static void Main() { List<int> vals = GenerateRandom(10); Console.WriteLine("Result: " + vals.Count); vals.ForEach(Console.WriteLine); } } The above code is my initial recommendation, and it will work well, and scale well for any reasonable number of values to return. Second Alternate Algorithm The problem with the above algorithm is threefold: When count is very large, the probability of collision is increased, and performance may be affected Data will need to be in both the HashSet and the List at some point, so the space usage is doubled. The shuffle at the end is needed to keep the data in a random order (HashSet does not keep the data in any specific order, and the hashing algorithm will cause the order to become biased, and skewed). These are only performance issues when the count is very large. Note that only the collisions at large count will impact the scalability of the solution (at large count it is no longer quite \$O(n)\$, and it will become progressively worse when count approaches Int32.MaxValue. Note that in real life this will not likely ever happen.... and memory will become a problem before performance does. @JerryCoffin pointed to an alternate algorithm from Bob Floyd, where a trick is played to ensure that collisions never happen. This solves the problem of scalability at very large counts. It does not solve the need for both a HashSet and a List, and it does not solve the need for the shuffle. The algorithm as presented: initialize set S to empty for J := N-M + 1 to N do T := RandInt(1, J) if T is not in S then insert T in S else insert J in S assumes that RandInt(1, J) returns values inclusive of J. 
To understand the above algorithm, you need to realize that you choose a random value from a range that is smaller than the full range, and then after each value, you extend that to include one more. In the event of a collision, you can safely insert the max because it was never possible to include it before. The chances of a collision increase at the same rate that the number of values decreases, so the probability of any one number being in the result is not skewed, or biased. Is this almost a final answer? No. So, the above solution, in C#, would look like (in Ideone) (note, code is now different to Ideone): public static List<int> GenerateRandom(int count) { // generate count random values. HashSet<int> candidates = new HashSet<int>(); for (Int32 top = Int32.MaxValue - count; top < Int32.MaxValue; top++) { // May strike a duplicate. if (!candidates.Add(random.Next(top + 1))) { candidates.Add(top); } } // load them in to a list. List<int> result = candidates.ToList(); // shuffle the results: int i = result.Count; while (i > 1) { i--; int k = random.Next(i + 1); int value = result[k]; result[k] = result[i]; result[i] = value; } return result; } Note that you need to shuffle the results still, so that the HashSet issue is resolved. Also note the need for care with the loop condition, because at overflow time, things get messy. Can you avoid the shuffle? So, this solves the need to do the collision loop, but what about the shuffle? Can that be solved by maintaining the HashSet and the List at the same time? No! Consider this function (in Ideone too): public static List<int> GenerateRandom(int count) { List<int> result = new List<int>(count); // generate count random values. HashSet<int> candidates = new HashSet<int>(); for (Int32 top = Int32.MaxValue - count; top < Int32.MaxValue; top++) { // May strike a duplicate. 
int value = random.Next(top + 1); if (candidates.Add(value)) { result.Add(value); } else { result.Add(top); candidates.Add(top); } } return result; } The above answer is never going to allow the first value in the result to have any of the Max - Count to Max values (because there will never be a collision on the first value, and the range is not complete at that point), and this is thus a broken random generator. Even with this collision-avoiding algorithm, you still need to shuffle the results in order to ensure a clean bias on the numbers. TL;DR Is This Your Final Answer? Yes! Having played with this code a lot, it is apparent that it is useful to have a range-based input, as well as a Int32.MaxValue system. Messing with large ranges leads to potential overflows in the 32-bit integer space as well. Working with @mjolka, the following code would be the best of both worlds: static Random random = new Random(); // Note, max is exclusive here! public static List<int> GenerateRandom(int count, int min, int max) { // initialize set S to empty // for J := N-M + 1 to N do // T := RandInt(1, J) // if T is not in S then // insert T in S // else // insert J in S // // adapted for C# which does not have an inclusive Next(..) // and to make it from configurable range not just 1. if (max <= min || count < 0 || // max - min > 0 required to avoid overflow (count > max - min && max - min > 0)) { // need to use 64-bit to support big ranges (negative min, positive max) throw new ArgumentOutOfRangeException("Range " + min + " to " + max + " (" + ((Int64)max - (Int64)min) + " values), or count " + count + " is illegal"); } // generate count random values. HashSet<int> candidates = new HashSet<int>(); // start count values before max, and end at max for (int top = max - count; top < max; top++) { // May strike a duplicate. 
// Need to add +1 to make inclusive generator // +1 is safe even for MaxVal max value because top < max if (!candidates.Add(random.Next(min, top + 1))) { // collision, add inclusive max. // which could not possibly have been added before. candidates.Add(top); } } // load them in to a list, to sort List<int> result = candidates.ToList(); // shuffle the results because HashSet has messed // with the order, and the algorithm does not produce // random-ordered results (e.g. max-1 will never be the first value) for (int i = result.Count - 1; i > 0; i--) { int k = random.Next(i + 1); int tmp = result[k]; result[k] = result[i]; result[i] = tmp; } return result; } public static List<int> GenerateRandom(int count) { return GenerateRandom(count, 0, Int32.MaxValue); }
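For readers outside C#, the same Bob Floyd idea can be sketched in a few lines of Python. This is my own illustrative translation, not code from the original answer; the function name and the toy range are hypothetical:

```python
import random

def sample_without_replacement(count, min_val, max_val):
    """Bob Floyd's algorithm: draw `count` distinct integers from
    [min_val, max_val) with no retry loop, then shuffle so that the
    order carries no bias."""
    if not 0 <= count <= max_val - min_val:
        raise ValueError("count must fit inside the range")
    chosen = set()
    # Widen the candidate range by one value each iteration; on a
    # collision, `top` itself cannot already be in the set, so the
    # insertion always succeeds and every subset is equally likely.
    for top in range(max_val - count, max_val):
        value = random.randint(min_val, top)  # inclusive upper bound
        chosen.add(top if value in chosen else value)
    result = list(chosen)
    random.shuffle(result)  # a set does not preserve a random order
    return result

sample = sample_without_replacement(10, 0, 1000)
```

As in the C# version, the final shuffle is still required: Floyd's trick removes collisions, not the ordering bias introduced by the set.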
{ "domain": "codereview.stackexchange", "id": 26650, "tags": "c#, random" }
Energy-Momentum Lorentz Transformation
Question: Suppose a micrometeorite of mass $10^{-9}$ kg moves past Earth at a speed of $0.01c$. What values will be measured for the momentum of the particle by an observer in a system $S'$ moving relative to Earth at $0.5c$ in the same direction as the micrometeorite? Book Solution According to the book, the momentum of the micrometeorite as measured by the Earth observer is $$p_x = mu_x = 10^{-9} \times 0.01c~ \rm kg \cdot m \cdot s^{-1}$$ My solution I argue the following. Define the momentum and energy of the micrometeorite in the micrometeorite frame as $p_m$ and $E_m$, which are $0$ and $mc^2$ respectively (since in its frame its velocity is $0$). Applying the Lorentz transformation from the $A_m$ frame to $A_e$ (the Earth frame; $0.01c$ is the frame velocity) gives us $$p_e = \gamma_{0.01} \left(p_m +\frac{v}{c^2}E_m\right)$$ Plugging in the respective values gives us $$p_e = \gamma_{0.01}\frac{v}{c^2}E_m = \gamma_{0.01}mv \neq mu_x$$ Why is my approach to calculating the momentum in the Earth frame wrong? Answer: My guess is just that when they quote the mass, they are giving the "relativistic mass" measured relative to Earth (which is just $\gamma$ times the rest mass), rather than giving the rest mass, which is what you're assuming. I think your assumption is quite reasonable, but I also think the book's interpretation is reasonable as well; they're just different ways of talking about things. Moreover, the difference really is quite small. If you look at the significant digits, your answer and the book's are the same. In this case, $\gamma = 1.00005$, so multiplying the book's answer by that doesn't change it significantly. Otherwise, you've done nothing wrong. Your approach is entirely valid. It shows that you have a good handle on the math, and that you're interested in looking at things from multiple perspectives. My only piece of advice is to not psych yourself out too much by using harder approaches to solve problems than are really necessary.
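A quick numeric check of the answer's point (a Python sketch; the rest mass cancels out of the comparison, so only the speed matters): at $0.01c$, $\gamma$ differs from 1 by only about $5\times10^{-5}$, so $\gamma m v$ and $mv$ agree to the precision quoted in the book.

```python
import math

c = 299_792_458.0   # speed of light in m/s
v = 0.01 * c        # micrometeorite speed relative to Earth

# Lorentz factor for the boost from the micrometeorite's rest frame
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# relative difference between the OP's gamma*m*v and the book's m*v;
# the rest mass m divides out, leaving just gamma - 1
rel_diff = gamma - 1.0
```

The relative difference is about $5\times10^{-5}$, far below the precision of the quoted answer, which is why both expressions look the same to the stated significant digits.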
{ "domain": "physics.stackexchange", "id": 48847, "tags": "homework-and-exercises, special-relativity" }
How to perform molecular dynamics simulations of charged systems?
Question: The GROMACS official documentation (see here) states that a system with non-zero total charge will yield an error: System has non-zero total charge Notifies you that counter-ions may be required for the system to neutralize the charge or there may be problems with the topology. If the charge is a non-integer, then this indicates that there is a problem with the topology. If pdb2gmx has been used, then look at the right hand comment column of the atom listing, which lists the cumulative charge. This should be an integer after every residue (and/or charge group where applicable). This will assist in finding the residue where things start departing from integer values. Also check the capping groups that have been used. If the charge is already close to an integer, then the difference is caused by rounding errors and not a major problem. Note for PME users: It is possible to use a uniform neutralizing background charge in PME to compensate for a system with a net background charge. There is probably nothing wrong with this in principle, because the uniform charge will not perturb the dynamics. Nevertheless, it is standard practice to actually add counter-ions to make the system net neutral. If I understand this correctly, it means that a uniform background charge will be added to a net-charged system. What if I wanted to study a charged system, e.g. a peptide with basic groups in the gas phase, where counter ions are absent. Is it still possible to perform a gromacs calculation on such a system and how will the introduction of the compensating background charge affect the molecular dynamics? I then looked into other packages and noted that net-charged systems seem to be generally problematic in mm calculations. Can anyone point me into a direction like literature on this? Answer: This is intrinsic to Ewald summation methods, not software implementations. The uniform charge arises from neglect of a reciprocal sum term. 
It does not directly affect the dynamics and may be a reasonable model of a spatially homogeneous system. See https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2015-October/101544.html for further details and https://www.mpibpc.mpg.de/14063977/Hub_2014_JCTC.pdf for a description and explanation of the likely artefacts.
{ "domain": "chemistry.stackexchange", "id": 7568, "tags": "reference-request, software, molecular-mechanics, molecular-dynamics" }
Hamming Window LUT function
Question: I'm working on a project where I need to generate a LUT for a Hamming window to perform an FFT. I'm using this function (building the table with MATLAB): n = 0:4095; LUT_length = length(n); WIN_HAM_v = round(4095*(0.54-(0.46*cos(2*pi*n/(LUT_length-1))))); I have some doubts about whether the function is correct. Answer: Your code is correct. This can also be implemented by calling the built-in MATLAB function hamming: WIN_HAM_v = round(4095*hamming(LUT_length)); You can (should) compare the two methods yourself: win1 = 4095*(0.54-(0.46*cos(2*pi*n/(LUT_length-1)))); win2 = 4095*hamming(LUT_length); maxError = max(abs(win1(:)-win2(:))); % result: 2.2737e-12
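The LUT can also be sanity-checked outside MATLAB. Here is a hypothetical Python re-implementation of the same formula (0-based n, as in the MATLAB code), checking a few expected values:

```python
import math

LUT_LENGTH = 4096  # n = 0 .. 4095, as in the MATLAB snippet

# Hamming window scaled to a 12-bit range; note Python's round() uses
# banker's rounding at exact .5 ties, unlike MATLAB's round(), which
# only matters for values landing exactly on a half-integer
lut = [
    round(4095 * (0.54 - 0.46 * math.cos(2 * math.pi * n / (LUT_LENGTH - 1))))
    for n in range(LUT_LENGTH)
]
```

The endpoints come out as round(4095 * 0.08) = 328 rather than 0: the Hamming window deliberately does not reach zero at its edges (unlike the Hann window), which is one easy sanity check on the table.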
{ "domain": "dsp.stackexchange", "id": 4725, "tags": "matlab, fft, filters, window" }
Understanding the partial trace and deriving $\langle l|R_{B}|k\rangle = \text{Tr}((\mathbb{I}_{A} \otimes |k\rangle \langle l|)(R_{AB}))$
Question: By definition, according to the notes I am looking through: The partial trace $\text{Tr}_A:L(H_A \otimes H_B) \rightarrow L(H_B)$ is the unique map that satisfies: $$\text{Tr}(L_B \cdot \text{Tr}_A(R_{AB})) = \text{Tr}((\mathbb{I} \otimes L_B)R_{AB})$$ for all $L_B \in L(H_B)$ and $R_{AB} \in L(H_A \otimes H_B)$. Now, according to the Wikipedia definition, $$T \in L(V \otimes W) \mapsto \text{Tr}_W(T) \in L(V).$$ Now if, in trying to relate the two definitions, I let $W = H_A$ and $T = R_{AB}$ in the second definition, then I recover $\text{Tr}_A(R_{AB})$. However, in the first definition I have the outer trace and $L_B$ - the first part, which is $\text{Tr}(L_B \cdots)$. Why are these different (at least seemingly)? $\leftarrow$ Question (1) Now I am trying to understand why, as in the subject line: $$\langle l|R_{B}|k\rangle = \text{Tr}((\mathbb{I}_{A} \otimes |k\rangle \langle l|)(R_{AB}))$$ First, I know that $\langle l|R_{B}|k\rangle = \text{Tr}(|k\rangle \langle l|R_{B})$, but I don't see how to get to the result. I think some of the confusion lies in that I don't know how to operate on $(\mathbb{I}_{A} \otimes |k\rangle \langle l|)(R_{AB})$. Since $R_{AB}$ is a matrix, I don't know what to do with it with regard to the tensor product (I couldn't find or recognize a similar example of distributing a matrix over a tensor product). Question 2: How do we get $\langle l|R_{B}|k\rangle = \text{Tr}((\mathbb{I}_{A} \otimes |k\rangle \langle l|)(R_{AB}))$? Answer: I quite like your characterization of the partial trace! I think you perceive a conflict with the Wikipedia definition because you are only taking part of the latter: given an operator $T\in L(V\otimes W)$, the requirement that its partial trace obey $$\text{Tr}_W(T)\in L(V)$$ simply says that the partial trace over $W$ be an operator on $V$, but that doesn't say which operator. (The specification of that is done in a more concrete, basis-dependent way.) 
To obtain the first result that confuses you, $\langle k |R|l\rangle=\text{Tr}(|l\rangle\langle k|R)$ for some operator $R$, simply take the trace in the same orthogonal basis where $|k\rangle$ and $|l\rangle$ came from: $$ \text{Tr}(|l\rangle\langle k|R)=\sum_j \langle j|l\rangle\langle k|R|j\rangle =\sum_j \langle k|R|j\rangle\langle j|l\rangle =\langle k|R|l\rangle. $$ Now, if you take $R=R_B=\text{Tr}_A(R_{AB})$, the matrix elements of this partial trace in the $B$ basis are, from the above, $$ \langle k|R_B|l\rangle=\text{Tr}_B(|l\rangle\langle k|R_B)=\text{Tr}_{AB}((\mathbb I\otimes|l\rangle\langle k|)R_{AB}), $$ where the second equality is simply the fundamental definition of the partial trace, as you formulated it. Now, I can understand it if all this simply looks complicated and does not provide any insight into what is going on - though that simply means that you need to look more closely into what your fundamental definition is saying. Say I have a bipartite system $A\leftrightarrow B$, which may be initially entangled, and then I completely forget about the $A$ part of the system. Thus, I need to trade my full (possibly entangled) density matrix $\rho_{AB}$ for one I can deal with locally: a density matrix $\rho_B$ which acts only on the $B$ side, which I can act on with operators in $L(H_B)$, and which I can take the $B$ trace on. That is, I need to be able to speak of the object $$\text{Tr}(L_B\rho_B),$$ and that object embodies all I need in order to make predictions. However, in terms of the full system, the state is $\rho_{AB}$, when I operate on it I am really using the operator $\mathbb I\otimes L_B$, and when I take the trace I am really taking the full trace $\text{Tr}_{AB}$ over the full space. 
Since both viewpoints must match, these objects must obey $$ \text{Tr}(L_B\rho_B)=\text{Tr}_{AB}((\mathbb I\otimes L_B)\rho_{AB}), \tag{1} $$ and this equation is simply a requirement on the only free object we have, $\rho_B$, which we call the partial trace $\rho_B:=\text{Tr}_A(\rho_{AB})$. As it happens, requiring $\text{Tr}_A$ to obey this for all $L_B\in L(H_B)$ and $\rho_{AB}\in L(H_A\otimes H_B)$* is enough to uniquely determine it, so that requirement can act as a definition (though, of course, you can have simpler definitions based on explicit basis-dependent formulae). * Note that I am taking $\rho_{AB}$ to be a general operator, instead of only a density matrix, since we want to be able to act on $\rho_{AB}$ using entangling or correlated measurements before we forget about $A$. However, requiring (1) for all $L_B\in L(H_B)$ and only those $\rho_{AB}\in L(H_A\otimes H_B)$ such that $\rho_{AB}\geq 0$ and $\text{Tr}_{AB}(\rho_{AB})=1$ is enough to determine $\text{Tr}_A$ uniquely by linearity, as any operator $R=R_{AB}$ can be decomposed into positive-definite, trace-one operators $R_k$ as $R=r_1 R_1+ir_2R_2-r_3R_3-ir_4R_4$, with each $r_k\geq0$, by taking positive and negative parts of its hermitian and antihermitian parts.
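Both the defining property and the matrix-element identity from the question can be verified numerically. A NumPy sketch (the dimensions, the random operator, and the index choices are mine):

```python
import numpy as np

rng = np.random.default_rng(42)
dA, dB = 2, 3  # dim(H_A), dim(H_B)

# a generic operator on H_A (x) H_B (not necessarily Hermitian)
R_AB = (rng.normal(size=(dA * dB, dA * dB))
        + 1j * rng.normal(size=(dA * dB, dA * dB)))

# partial trace over A: view R_AB as R[a, i, a', j] and contract a = a'
R_B = np.einsum('aiaj->ij', R_AB.reshape(dA, dB, dA, dB))

# defining property: Tr(L_B . Tr_A(R_AB)) = Tr((I (x) L_B) R_AB)
L_B = rng.normal(size=(dB, dB))
lhs = np.trace(L_B @ R_B)
rhs = np.trace(np.kron(np.eye(dA), L_B) @ R_AB)

# matrix-element identity: <l|R_B|k> = Tr((I_A (x) |k><l|) R_AB)
k, l = 1, 2
ket_bra = np.zeros((dB, dB))
ket_bra[k, l] = 1.0       # |k><l| has its single 1 in row k, column l
elem = np.trace(np.kron(np.eye(dA), ket_bra) @ R_AB)
```

The last two lines make question (2) concrete: multiplying by $\mathbb{I}_A \otimes |k\rangle\langle l|$ and taking the full trace picks out exactly the $(l,k)$ matrix element of the partial trace.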
{ "domain": "physics.stackexchange", "id": 9182, "tags": "quantum-mechanics, quantum-entanglement, trace" }
Did rospy remapping arguments change in Fuerte?
Question: I'm having a weird error with remapping arguments on Fuerte. In my rospy node, I have a line like baud_rate = rospy.get_param('~baud', 4800) so when I'd like to change that on the command line, I tried rosrun nmea_gps_driver nmea_gps_driver.py _baud:=38400 but instead of the parameter being set, I get the following error: ERROR: Invalid remapping argument '_baud:=38400' That remapping works just fine in Electric and I don't see anything about it in the rosrun docs or the Fuerte Migration Guide. In case it matters, I'm working against the latest Fuerte .debs on Ubuntu 11.10. UPDATE: Minimal reproducible example: # test.py import roslib roslib.load_manifest('rospy') import rospy rospy.init_node('test_node') test_param = rospy.get_param('~test', 4800) print "Param was: %d" % (test_param) If I run this like python test.py _test:=38400, I get the error output. See below. $ python test.py _test:=38400 ERROR: Invalid remapping argument '_test:=38400' Param was: 4800 $ Originally posted by Eric Perko on ROS Answers with karma: 8406 on 2012-02-19 Post score: 2 Answer: This was a bug due to some code refactoring. Fixed in r16352. Will be included in the next ros_comm update. BTW: thanks for the simplified test case, it made this quick to fix. Originally posted by kwc with karma: 12244 on 2012-02-20 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 8292, "tags": "ros, rospy, ros-fuerte, remapping, rosrun" }
Free-particle solution to Schrödinger Equation
Question: The free-particle solution in a stationary state (with definite energy) to the Schrödinger equation is $$\psi(x,t) =Ae^{i(kx-\omega t)} + Be^{-i(kx+\omega t)}$$ Since the energy is definite, and hence the momentum is definite, the uncertainty in position must be infinite. How is this reflected by the probability distribution function: $$\Psi = |\psi(x,t)\psi^*(x,t)| $$ The book that I am using just looks at the first term of the solution, and derives that the probability distribution function is $A^2$. However, I do not understand why we can do that. Does it imply that if a wave function is made up of n terms such that each individual term has a constant probability distribution function, the whole wave function also has a constant probability distribution function? If so, how can I prove it? I know my question might be very vague, but that is precisely the problem I am facing now; I don't even know how to ask about the things that I don't understand. Answer: Any function of $x$ and $t$ that depends on the combination $x\pm vt$ (for constant $v$) represents a wave with a fixed shape that travels in the $\mp x$ direction with speed $v$. That is to say, the shape is constant along lines where $$ x\pm vt={\rm constant} $$ In your wave function, $$ \psi(x)=Ae^{i(kx-\omega t)}+Be^{-i(kx+\omega t)}\tag{0}, $$ the first term represents a right-moving wave while the second term represents a left-moving wave. Since they differ only by the sign of $k$, we can simplify the wavefunction to $$ \psi(x)=Ae^{i(kx-\omega t)}\tag{1} $$ where $k>0$ means the right-moving wave (the first term of Eq (0) ) and $k<0$ means a left-moving wave (the second term of Eq (0)). Using Equation (1) and multiplying by its complex conjugate, we get $$ \psi^*\psi(x)=\left(Ae^{-i(kx-\omega t)}\right)\left(Ae^{i(kx-\omega t)}\right)=A^2 $$
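The flat probability density of the single travelling-wave term is easy to confirm numerically. A small sketch with arbitrary values of $A$, $k$ and $\omega$ (not taken from the book):

```python
import numpy as np

A, k, w = 2.0, 3.0, 5.0               # arbitrary real constants
x = np.linspace(-10.0, 10.0, 401)     # positions to sample
t = 0.7                               # any fixed time

psi = A * np.exp(1j * (k * x - w * t))   # plane wave A e^{i(kx - wt)}
density = (psi * np.conj(psi)).real      # |psi|^2 at each x
```

The density equals $A^2$ at every $x$ and every $t$: the particle is completely delocalized, exactly as infinite position uncertainty demands for a state of definite momentum. Keeping both terms ($B \neq 0$) adds cross terms, and the density develops an interference pattern instead of staying flat.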
{ "domain": "physics.stackexchange", "id": 44343, "tags": "quantum-mechanics, schroedinger-equation, probability" }
Explaining how we cannot account for changing acceleration questions without calculus
Question: For context, I am a high school physics teacher. I am teaching students about the basics of electromagnetic force between two point charges. The equation we use is $F=\frac{kq_1q_2}{r^2}$. This gives us the instantaneous force and also gives us the instantaneous acceleration. I have a student asking me why we do not analyze the changing forces and acceleration as the two bodies move towards or away from each other. For example a proton and electron starting a distance apart and accelerating towards each other. The best answer I can give is that we cannot analyze this without the use of calculus to allow for a changing acceleration. This particular student is fairly advanced, but I am looking for an answer that may be used as I move forward in my career. Overall my question is if there is there a better explanation I can give them that will give a more in-depth understanding? Or does any understanding of changing acceleration require the use of calculus in all or most circumstances? I am not sure if I just need a more in depth understanding of where the equation is derived from or how these interactions are summarized into the simple equation. This is also of course for my own knowledge base. Answer: You do, in fact, have to take into account the change in separation distance between charged objects when analyzing the dynamics of the system. This is done mathematically through the use of differential equations, which are "self-updating" equations that tell you how quantities are expected to change in the next (infinitesimal) time step, given the current configuration of positions and velocities. So to answer your students' question, you would have to write out a differential equation and solve it using calculus methods, as you've mentioned. 
For this particular problem, the relevant differential equation would be $F=ma$, or $$m\frac{\text{d}^2\mathbf{r}}{\text{d}t^2} = \frac{kqq' \hat{\mathbf{r}}}{|\mathbf{r}|^2},$$ and finding the dynamics $\mathbf{r}(t)$ as a function of time is accomplished by solving this second-order differential equation and plugging in initial conditions such as $\mathbf{r}(t=0) = \mathbf{r}_0$ and $\mathbf{v}(t=0) = \partial_t\mathbf{r}|_{t=0} = 0$ (a second-order equation needs exactly these two pieces of initial data; the initial acceleration is then fixed by the equation itself). Pedagogically, you could use this exact scenario as motivation for the need for differential equations. You can construct discrete time steps $t_1, t_2, \dots$ and compute, at each time, what the positions, velocities, and accelerations are (you could make a neat table). The solution provided by calculus is then the limit as the separation between time steps approaches zero.
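The discrete-time-step table suggested above takes only a few lines of code. A sketch in toy units (everything here, the choice $k=m=1$, the charges $\pm1$, and the time step, is an illustrative assumption of mine), using the simplest "update velocity, then position" rule:

```python
# 1-D toy model: a unit negative charge released from rest at r = 1,
# attracted toward a fixed unit positive charge at the origin
# (units chosen so that k = m = 1)
r, v = 1.0, 0.0
dt = 1e-4
table = []  # rows of (time, position, velocity, acceleration)

for step in range(2000):
    a = -1.0 / r**2      # F/m = k q1 q2 / (m r^2), here just -1/r^2
    v += a * dt          # velocity updated from the *current* force
    r += v * dt          # position updated from the new velocity
    if step % 500 == 0:
        table.append((step * dt, r, v, a))
```

Each row of the table shows the acceleration growing as the separation shrinks, the "self-updating" behaviour that the differential equation captures exactly. Shrinking $\mathrm{d}t$ toward zero is precisely the limit that calculus handles.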
{ "domain": "physics.stackexchange", "id": 79662, "tags": "newtonian-mechanics, electromagnetism, forces, education, calculus" }
The spectrum of the sum of two periodic signals
Question: Consider the continuous, periodic signals $s_1(t)$ and $s_2(t)$, with periods $P_1$ and $P_2$ respectively. Consider now a new signal $s$, the sum of the two previous signals. I have come to learn that the spectrum of the sum is not necessarily the sum of the spectra unless the two signals $s_1(t)$ and $s_2(t)$ have the same period. By spectrum I mean: $$X\in [\mathbb{Z} \rightarrow \mathbb{C} ]$$ $$X[k] = \langle x(t), \phi_k\rangle = \frac{1}{P} \int_P x(t)e^{-ik\omega_0 t}\,\mathrm{d}t$$ with $\omega_0 = \frac{2\pi}{P}$. $P$ here represents the period. Note that $x(t)\in [\mathbb{R} \rightarrow \mathbb{C}]$ is periodic with period $P$. I don't understand why, because it seems this violates the linearity properties. Answer: the Fourier Series for a single periodic function, having period $P$, $$ x(t) = x(t+P) \qquad \forall \ -\infty < t < +\infty $$ is $$ x(t) = \sum\limits_{k=-\infty}^{\infty} c_k \, e^{j(2\pi k/P)t} $$ each harmonic $c_k$ is labelled with its harmonic number, $k$, in the above. the actual frequency of each harmonic, in Hz or units of 1/time, is $\frac{k}{P}$. the fundamental frequency is the reciprocal $\frac{1}{P}$ of the period. in angular frequency (radians per unit time), multiply both by $2\pi$. but the harmonic number $k$ alone lacks enough specific information to tell you the actual frequency of the harmonic. e.g. the second harmonic of a periodic tone of 220 Hz is the same frequency as the fourth harmonic of a periodic tone of 110 Hz. so the spectrum of $x(t)$ is about which actual frequencies carry the energy of the signal. it doesn't matter whether $x(t)$ is periodic or not (actually it does with the esoteric mathematics, but we won't go there). and we use the Fourier Transform, not the Fourier Series, to visualize it. So each harmonic in the Fourier Series above is attached to a Dirac delta function, $\delta(f)$. 
$$\begin{align} X(f) &\triangleq \mathscr{F} \Big\{ x(t) \Big\} \\ &= \int\limits_{-\infty}^{\infty} x(t) \, e^{-j2\pi f t} \, \mathrm{d}t \\ &= \mathscr{F} \Big\{ \sum\limits_{k=-\infty}^{\infty} c_k \, e^{j(2\pi k/P)t} \Big\} \\ &= \sum\limits_{k=-\infty}^{\infty} c_k \,\mathscr{F} \Big\{ e^{j(2\pi k/P)t} \Big\} \\ &= \sum\limits_{k=-\infty}^{\infty} c_k \, \delta(f-k\tfrac{1}{P}) \\ \end{align}$$ If you express the spectra of your periodic signals in this form, the spectra sum to a single spectrum.
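Linearity itself never fails: the Fourier transform (or DFT) of a sum is always the sum of the transforms. What cannot be added term-by-term are Fourier-series coefficients indexed by harmonic number when the periods differ. A NumPy sketch (the sample rate and the 110/220 Hz tones are my choices, echoing the example above):

```python
import numpy as np

fs = 1000                          # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples

s1 = np.cos(2 * np.pi * 110 * t)   # period P1 = 1/110 s
s2 = np.cos(2 * np.pi * 220 * t)   # period P2 = 1/220 s

# linearity of the transform: DFT(s1 + s2) == DFT(s1) + DFT(s2)
lin_err = np.max(np.abs(np.fft.fft(s1 + s2)
                        - (np.fft.fft(s1) + np.fft.fft(s2))))
```

The transforms agree to rounding error. Meanwhile, the second harmonic of a 110 Hz period ($k=2$) and the first harmonic of a 220 Hz period ($k=1$) both sit at 220 Hz, so series coefficients with equal $k$ belong to different physical frequencies and cannot simply be added.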
{ "domain": "dsp.stackexchange", "id": 7991, "tags": "frequency-spectrum, linear-systems" }
Sine wave, $\pi$ and frequency
Question: Please explain the expression $\sin(2\pi ft)$: how does $\pi$ (which is actually the ratio of circumference to diameter of a circle) relate to a sine wave describing a longitudinal vibration? Answer: It's just using radian measure for angles. If you have a sine or cosine function, you feed it an angle, and if you choose the angle to be measured in fractions of a turn, then the relation would be $\sin(ft)$ and $\cos(ft)$. This choice is inconvenient because the derivatives of the sine and cosine functions would pick up $2\pi$ factors. So instead, people parametrize the angles by radians, so that the derivatives are simple, but the radian measure gives the $2\pi$ in the frequency definition, so that $\sin(\omega t)$ repeats every $2\pi/\omega$ time-units, and the frequency is $f={\omega\over 2\pi}$. The factor $\omega$ is the radian frequency, and it is often just called the frequency in sloppy physics-talk.
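A small check of this bookkeeping (a sketch; the frequency value is an arbitrary example): with the $2\pi$ inside, $\sin(2\pi f t)$ repeats every $1/f$ seconds, and the derivative carries the $2\pi f$ factor that radian measure keeps out of the derivative rule itself.

```python
import math

f = 5.0           # frequency in Hz (arbitrary example value)
T = 1.0 / f       # period in seconds

def s(t):
    return math.sin(2 * math.pi * f * t)

# The signal repeats every T seconds:
assert abs(s(0.123) - s(0.123 + T)) < 1e-9

# d/dt sin(2*pi*f*t) = 2*pi*f*cos(2*pi*f*t); at t = 0 a central
# difference recovers the 2*pi*f factor.
h = 1e-6
numeric = (s(h) - s(-h)) / (2 * h)
assert abs(numeric - 2 * math.pi * f) < 1e-3
```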
{ "domain": "physics.stackexchange", "id": 1963, "tags": "waves, mathematics" }
Rosjava + (OpenCV) => (2D to 3D)
Question: Hi, I want to compute the 3D position of a 2D point from a video stream (from a Kinect) in rosjava. I know this is possible in roscpp/rospy, because CV_Bridge exists there and I can then compute the 3D position via the OpenCV library. My question: is something similar possible in rosjava, or do I have to compute it manually? Thanks for any advice. Originally posted by LeopoldPodmolík on ROS Answers with karma: 66 on 2013-02-27 Post score: 0 Original comments Comment by fei on 2015-06-08: Could you please tell me which function you use from the OpenCV library to compute the 3D position? I am doing similar work to yours. Thank you. Answer: Did you take a look at the announcement that OpenCV now supports desktop Java? Originally posted by 130s with karma: 10937 on 2013-07-21 This answer was ACCEPTED on the original site Post score: 0
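For reference, the computation in question is not rosjava-specific; with a registered depth reading it is just the pinhole camera model. A sketch (the intrinsics below are assumed, roughly Kinect-like illustrative values, not taken from the post — real values come from the camera's calibration):

```python
# Back-project a pixel (u, v) with a depth reading into a 3D point in the
# camera frame using the pinhole model. fx, fy, cx, cy are ASSUMED
# illustrative intrinsics.
fx, fy = 525.0, 525.0     # focal lengths in pixels
cx, cy = 319.5, 239.5     # principal point

def pixel_to_3d(u, v, depth):
    """depth is metres along the optical axis; returns (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

print(pixel_to_3d(319.5, 239.5, 1.0))   # principal point -> (0.0, 0.0, 1.0)
```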
{ "domain": "robotics.stackexchange", "id": 13106, "tags": "opencv, rosjava" }
Saturation Potential in photoelectric effect
Question: I was reading about saturation current in the photoelectric effect, and it was said that the saturation potential is the minimum potential required to accelerate the slowest-moving photoelectrons so that they reach the other plate in a photoelectric circuit. Then, won't the formula for saturation potential be the same as that of stopping potential with the opposite sign? Answer: Stopping potential is well defined and equal to the photon energy minus the work function of the photoelectric material. This potential level is sure to prevent any photoelectron from approaching the anode, even the best-positioned electron on a "freeway" to the anode with its velocity directed towards the anode. Saturation potential is more vague. This potential must ensure that all photoelectrons will reach the anode in spite of possibly being directed at the glass tube, possibly colliding with gas atoms, being deflected by a cloud of other slow-moving electrons, and similar mishaps. This voltage depends on tube structure, gas pressure and similar characteristics. Conversely, saturation current is again well defined, hence it is usually the quantity used together with the stopping potential.
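A worked number for the stopping potential relation $eV_0 = hf - \phi$ (a sketch with illustrative values; the work function and frequency below are assumed, not from the question):

```python
# Stopping potential from the photoelectric equation e*V0 = h*f - phi.
h = 6.626e-34     # Planck constant, J*s
e = 1.602e-19     # elementary charge, C
phi_eV = 2.3      # work function of the cathode in eV (assumed value)
f = 1.0e15        # photon frequency, Hz (assumed value)

photon_eV = h * f / e          # photon energy in eV, about 4.14 eV
V0 = photon_eV - phi_eV        # stopping potential in volts
print(round(V0, 2))            # -> 1.84
```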
{ "domain": "physics.stackexchange", "id": 47257, "tags": "photoelectric-effect" }
HackerRank - Candies
Question: Here is the solution I've written for the Hacker Rank Candies challenge. It uses dynamic programming (memoization in a boxed array). I'm interested in what could be done to improve this code in terms of style and performance (although it passes all the test cases of the challenge). For example, I wonder if the bunch of guards I used could be made prettier. import System.IO import Data.Array main = do contents <- getContents print $ runTest contents runTest :: String -> Int runTest = sum . candies . map read . tail . lines candies :: [Int] -> [Int] candies rs = elems candies where go i | n == 1 = 1 | i == 1 = if (ri > riplus) then candies ! (2) + 1 else 1 | i == n = if (ri > riminus) then candies ! (n - 1) + 1 else 1 | ri > riminus && ri > riplus = max (candies! (i-1)) (candies! (i+1)) + 1 | ri > riminus = candies! (i-1) + 1 | ri > riplus = candies! (i+1) + 1 | otherwise = 1 where ri = ratings!i riplus = ratings!(i+1) riminus = ratings!(i-1) ratings = listArray (1,n) rs candies = listArray (1,n) [go i | i <- [1..n]] n = length rs Answer: You might consider using the trick of extending your array by one on both ends, i.e. to include indices 0 and n+1, and assign: ratings[0] = ratings[n+1] = 0 candies[0] = candies[n+1] = 0 and then go 1 and go n will be handled by the other clauses. Also, you can eliminate the boundary assignments in go with: candies = listArray (0,n+1) $ [0] ++ [go i | i <- [1..n] ] ++ [0] Finally you can use these definitions to make the guards more succinct: go i | incL && incR = 1 + max cL cR | incL = 1 + cL | incR = 1 + cR | otherwise = 1 where incL = (ratings!(i-1)) < (ratings!i) incR = (ratings!i) > (ratings!(i+1)) cL = candies!(i-1) cR = candies!(i+1)
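As a cross-check of the recurrence (a sketch in Python, not the author's Haskell): the same problem has a common two-pass formulation whose totals can be compared against the memoized version.

```python
def candies(ratings):
    n = len(ratings)
    c = [1] * n
    # left-to-right: a strictly higher rating than the left neighbour
    # earns one candy more than that neighbour
    for i in range(1, n):
        if ratings[i] > ratings[i - 1]:
            c[i] = c[i - 1] + 1
    # right-to-left: same rule against the right neighbour, keeping the
    # maximum of the two passes
    for i in range(n - 2, -1, -1):
        if ratings[i] > ratings[i + 1]:
            c[i] = max(c[i], c[i + 1] + 1)
    return sum(c)

print(candies([1, 2, 2]))                         # -> 4
print(candies([2, 4, 2, 6, 1, 7, 8, 9, 2, 1]))    # -> 19
```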
{ "domain": "codereview.stackexchange", "id": 15354, "tags": "programming-challenge, haskell, functional-programming, dynamic-programming" }
Water level changes after scrap iron thrown off a barge
Question: I'm currently trying to work through Physics by HRK, Edition 5. I've come across this problem in the Fluid Statics section which I, being an amateur, decided to google. The exercise reads: A barge filled with scrap iron is in a canal lock. If the iron is thrown overboard, what happens to the water level in the lock? What if it is thrown onto the land beside the canal? Mixed results so far from: https://www.askiitians.com/forums/Mechanics/a-barge-filled-with-scrap-iron-is-in-a-canal-lock_122702.htm https://answers.yahoo.com/question/index?qid=20080316062239AAVmZxk http://cosweb1.fau.edu/~jordanrg/busters_9/answers_9.htm#Ex_7 Also, if anyone knows where I can find solutions to ALL the questions and exercises of HRK, I'd really appreciate it! Answer: While the iron is on the barge, the barge and all its load are buoyant. It is displacing all of its weight in water, so the water level is high. When the iron is thrown into the water, the barge and its remaining load weigh less, so the barge rises somewhat, displacing less water. The iron thrown in the water sinks; it weighs more than the water that its volume displaces. Now the barge and its original load are no longer displacing their full weight in water, as some of the iron's weight is resting against the bottom of the canal. Since less water is now displaced, its level will go down. When the iron is unloaded onto the land, even less water is displaced by the unloaded barge's weight. Then the water level will be even lower.
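The displaced volumes can be put to numbers (a sketch with assumed figures, not from the answer): floating, the iron displaces its weight in water; sunk, only its own volume.

```python
# Water displaced by 1000 kg of scrap iron, floating on the barge versus
# resting on the canal bottom (densities are standard approximate values).
rho_water = 1000.0    # kg/m^3
rho_iron = 7870.0     # kg/m^3
m_iron = 1000.0       # kg thrown overboard (assumed figure)

V_floating = m_iron / rho_water    # displaces its weight: 1.0 m^3
V_sunk = m_iron / rho_iron         # displaces its volume: ~0.127 m^3

assert V_sunk < V_floating         # less displacement -> water level drops
print(round(V_floating - V_sunk, 3))   # -> 0.873 m^3 less water displaced
```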
{ "domain": "physics.stackexchange", "id": 71259, "tags": "fluid-statics" }
Could rocks from Earth have reached the Kuiper belt, or Neptune at least? If so, how?
Question: This answer (currently edit 3) to How certain are we that we have not sent life to other planets/moons? begins: First of all, rocks from Earth are probably just about everywhere in the Solar System. One simple example is this rock found on the Moon. A number of pieces of Mars have been found on Earth, and if that has happened, no doubt there are Earth rocks on Mars. If Earth life can survive a vacuum it has probably gotten to everywhere in the Solar System anyways. (emphasis added) I started to write the following as a comment: "...rocks from Earth are probably just about everywhere in the Solar System" is quite a statement, and ...from which point they will travel throughout the solar system. requires energy. What mechanism accelerates a rock from Earth to the outer solar-system? Mars is not really a distant object compared to the size of the solar system. New Horizons is in the Kuiper belt already. Is it even remotely plausible to suggest that Earth rocks can become KBOs? Question: Can rocks from Earth have reached the Kuiper belt or Neptune at least? If so, how? I'll even allow a solid particle of proper space-dust to serve as proxy for the rock for the purposes of this question, even sub-micron in size if necessary, but not simply an atom or a molecule. We know from answers to What is the origin of the dust near the sun? that dust is surprisingly (to me at least) mobile in the solar system due in part to its large surface to mass ratio and interaction with static EM fields and photons for example. Answer: (I have tracked down the reference that I made in my comment on the question.) Presumably earth rocks once blasted into space (by volcanism or meteorite impact) gain or lose energy primarily by orbital interactions with planets or moons. Since these interactions will occasionally give enough energy to escape the solar system entirely, presumably anything short of that (Neptune's orbit, the Kuiper belt) is also possible. 
The question is whether this will happen often enough to create a reasonable expectation that an Earth rock could have collided with any given solar system body. Various attempts to identify the odds have been made. In one, researchers simulated the paths of fragments from the Chicxulub impact, and found that many may have made it to the outer Solar System within 10 million years. This is the paper in Astrobiology
{ "domain": "astronomy.stackexchange", "id": 4839, "tags": "solar-system, planetary-formation, dust, planetary-science, kuiper-belt" }
MaxHeap implementation
Question: I'd like this to be reviewed: public class MaxHeap<E extends Comparable<E>> { private E[] heap; private int capacity; // maximum size of heap private int numberOfNodes; // number of nodes in current heap /** * Create a new MaxHeap object. * * @param heap * @param capacity * @param numberOfNodes */ public MaxHeap(E[] heap, int capacity, int numberOfNodes) { this.heap = heap; this.capacity = capacity; this.numberOfNodes = numberOfNodes; this.buildHeap(); } /** * Put all nodes within the max heap in the correct position. */ void buildHeap() { for (int i = (this.numberOfNodes / 2 - 1); i >= 0; i--) { this.correctNodeIndexByShifting(i); } } /** * Insert a new node at the current position within the max-heap. * * @param nodeValue * The node to be inserted. */ public void insert(E nodeValue) { if (this.capacity <= this.numberOfNodes) { throw new IllegalArgumentException("In method insert of class " + "MaxHeap the element: " + nodeValue + " could not be inserted because the max-heap is full"); } int currentNodePosition = this.numberOfNodes++; this.heap[currentNodePosition] = nodeValue; // start at the end of most bottom right leaf node and shift up // until the nodeValue has a parent with a greater or equal value while ((currentNodePosition != 0) && (this.heap[currentNodePosition].compareTo(this.heap[this .getParentIndex(currentNodePosition)]) > 0)) { this.swap(currentNodePosition, this.getParentIndex(currentNodePosition)); currentNodePosition = this.getParentIndex(currentNodePosition); } } /** * Remove the node at arrayIndex within the MaxHeap and return the node * value that the removed node is replaced with. * * @param arrayIndex * Index of the node within the array based max-heap to be * removed. * @return The element that was removed. 
*/ public E remove(int arrayIndex) { int changingArrayIndex = arrayIndex; if ((changingArrayIndex < 0) || (changingArrayIndex >= this.numberOfNodes)) { throw new IllegalArgumentException("In method remove of class " + "MaxHeap the input node postion to be removed is invalid"); } // if the most bottom right node is being removed there is no work to be // done if (changingArrayIndex == (this.numberOfNodes - 1)) { this.numberOfNodes--; } else { // swap node to be removed with most bottom right node this.swap(changingArrayIndex, --this.numberOfNodes); // if swapped node is large, shift it up the tree while ((changingArrayIndex > 0) && (this.heap[changingArrayIndex].compareTo(this.heap[this .getParentIndex(changingArrayIndex)]) > 0)) { this.swap(changingArrayIndex, this.getParentIndex(changingArrayIndex)); changingArrayIndex = this.getParentIndex(changingArrayIndex); } if (this.numberOfNodes != 0) { // if swapped node is small, shift it down the tree this.correctNodeIndexByShifting(changingArrayIndex); } } return this.heap[changingArrayIndex]; } /** * @return maximum node value in max-heap. */ public E removeMaximumValue() { if (this.numberOfNodes <= 0) { throw new IllegalStateException( "In method removeMaximumValue of class " + "MaxHeap the value you cannot remove a value from an " + "empty max-heap"); } // swap maximum with last value this.swap(0, --this.numberOfNodes); if (this.numberOfNodes != 0) { // if not the last element this.correctNodeIndexByShifting(0); } return this.heap[this.numberOfNodes]; } /** * Place given node position in the correct position within the complete * binary tree. * * @param arrayIndex * Index of node to be correctly shifted to the correct position. 
*/ void correctNodeIndexByShifting(int arrayIndex) { int changingArrayIndex = arrayIndex; if ((changingArrayIndex < 0) || (changingArrayIndex >= this.numberOfNodes)) { throw new IllegalArgumentException( "In method shiftDown of class " + "MaxHeap the value: " + changingArrayIndex + " represents a node that does not exist in the current heap"); } while (!this.isLeafNode(changingArrayIndex)) { int childIndex = this.getLeftChildIndex(changingArrayIndex); if ((childIndex < (this.numberOfNodes - 1)) && (this.heap[childIndex] .compareTo(this.heap[childIndex + 1]) < 0)) { childIndex++; // childIndex is not at index of child with // greater node value } if (this.heap[changingArrayIndex].compareTo(this.heap[childIndex]) >= 0) { return; } this.swap(changingArrayIndex, childIndex); changingArrayIndex = childIndex; // node shifted down } } /** * Switch the node at arrayIndex1 into node at arrayIndex2 and vice versa. * * @param arrayIndex1 * @param arrayIndex2 */ void swap(int arrayIndex1, int arrayIndex2) { if (arrayIndex1 < 0 || arrayIndex1 > this.numberOfNodes) { throw new IllegalArgumentException( "In method swap of class " + "MaxHeap the input arrayIndex1 is not a valid node position"); } else if (arrayIndex2 < 0 || arrayIndex2 > this.numberOfNodes) { throw new IllegalArgumentException( "In method swap of class " + "MaxHeap the input arrayIndex2 is not a valid node position"); } E tempNodeValue = this.heap[arrayIndex1]; this.heap[arrayIndex1] = this.heap[arrayIndex2]; this.heap[arrayIndex2] = tempNodeValue; } /** * @param arrayIndex * Index of child node. * @return Index of parent to given index of child. * */ public int getParentIndex(int arrayIndex) { if (arrayIndex <= 0) { throw new IllegalArgumentException( "In method getParentPosition of class " + "MaxHeap your input node position at " + arrayIndex + " must be > 0"); } else { return (arrayIndex - 1) / 2; } } /** * @param arrayIndex * Index of parent node. 
* @return Index of right child within array based max-heap to given parent * node. */ public int getRightChildIndex(int arrayIndex) { if (arrayIndex >= (this.numberOfNodes / 2)) { throw new IllegalArgumentException("In method rightChild of class " + "MaxHeap your input node position at " + arrayIndex + " does not have a right child."); } else { return 2 * arrayIndex + 2; } } /** * @param arrayIndex * Index of parent node. * @return Index of left child within array based max-heap to given parent * node. */ public int getLeftChildIndex(int arrayIndex) { if (arrayIndex >= (this.numberOfNodes / 2)) { throw new IllegalArgumentException("In method leftChild of class " + "MaxHeap your input node position at " + arrayIndex + " does not have a left child."); } else { return 2 * arrayIndex + 1; } } /** * @param arrayIndex * Index of node to be checked. * @return True if node at given arrayIndex is a leaf node; otherwise return * false. */ public boolean isLeafNode(int arrayIndex) { if ((arrayIndex >= (this.numberOfNodes / 2)) && (arrayIndex < this.numberOfNodes)) { return true; } else { return false; } } /** * @return The number of nodes in this max-heap. */ public int getNumberOfNodes() { return this.numberOfNodes; } /** * @return The height of the heap. */ public int getHeapHeight() { double approximateHeight = Math.log(this.numberOfNodes) / Math.log(2); int actualHeight = (int) (Math.floor(approximateHeight) + 1); return actualHeight; } /** * @return String representation of elements in the array used to implement * the max-heap. */ public String printMaxHeapArray() { StringBuilder maxHeapArray = new StringBuilder(); for (int i = 0; i < this.heap.length; i++) { maxHeapArray.append(this.heap[i] + " "); } return maxHeapArray.toString(); } } Answer: I haven't checked the details of the algorithm, just some general feedback on the code, API, etc: It's usually a good practice to make a copy of mutable input parameters. (E[] heap in this case.) 
It prevents malicious clients from modifying the heap's internal structure and it could save you from a few hours of debugging. (Effective Java, 2nd Edition, Item 39: Make defensive copies when needed) Copying the input array would make the capacity field redundant. You could use heap.length instead of it. The constructor should validate its input parameters. Currently it's too easy to call it with a wrong-sized array: String[] heap = {"aa", "ac", "bb"}; int capacity = 4; int numberOfNodes = 4; MaxHeap<String> maxHeap = new MaxHeap<String>(heap, capacity, numberOfNodes); If you call maxHeap.insert("ab") after that you get a mysterious ArrayIndexOutOfBoundsException. (Effective Java, Second Edition, Item 38: Check parameters for validity) (Copying the input array to a properly sized array could solve this issue too.) It's easy to misunderstand the parameter of remove(). Currently it's an index. A client could easily think that the following code removes 20 from the heap: final Integer[] initialData = {10, 30, 20, 40}; int capacity = 4; int numberOfNodes = 3; MaxHeap<Integer> heap = new MaxHeap<Integer>(initialData, capacity, numberOfNodes); heap.remove(20); Actually it throws an exception, since there aren't 20 items in the heap. Note that there isn't any other method which returns an index, so how could a client figure out a valid parameter for the remove method? A remove(E item) method would be better. The implementation of printMaxHeapArray could be replaced with return Arrays.toString(heap); if the output format is not fixed. (The output of Arrays.toString() is slightly different: [cc, bb, aa, ab].) Otherwise, I'd use a more compact foreach loop: for (final E item: heap) { maxHeapArray.append(item + " "); } Checking it again I guess i < this.heap.length should be i < this.numberOfNodes. I think you don't want to print the removed items. Java already has a toString() method.
If you use printMaxHeapArray only for debugging or logging purposes and clients don't rely on its exact output you should implement (override) toString() instead. toString() is more convenient for most developers. (Effective Java, 2nd Edition, Item 10: Always override toString) Comments like this are unnecessary: /** * Create a new MaxHeap object. * * @param heap * @param capacity * @param numberOfNodes */ They say nothing more than the code already does, it's rather noise. (Clean Code by Robert C. Martin: Chapter 4: Comments, Noise Comments) Instead of comments like this: * @param arrayIndex * Index of parent node. rename the parameter to parentIndex or parentNodeIndex. It would help readers, make the code readable and you could get rid of the comment. The same is true for getParentIndex(final int arrayIndex) (childIndex). boolean isLeafNode(final int arrayIndex) returns false when the given index is higher than the size of the array. I guess calling the method with a definitely invalid index is a bug in the client code. Crash early, throw an IllegalArgumentException (as the getLeftChildIndex() method does). See: The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas: Dead Programs Tell No Lies. The else keyword is unnecessary here: if (arrayIndex1 < 0 || arrayIndex1 > this.numberOfNodes) { throw new IllegalArgumentException("In method swap of class " + "MaxHeap the input arrayIndex1 is not a valid node position"); } else if (arrayIndex2 < 0 || arrayIndex2 > this.numberOfNodes) { throw new IllegalArgumentException("In method swap of class " + "MaxHeap the input arrayIndex2 is not a valid node position"); } It could be simply if (arrayIndex1 < 0 || arrayIndex1 > this.numberOfNodes) { throw new ... } if (arrayIndex2 < 0 || arrayIndex2 > this.numberOfNodes) { throw new ... 
} Furthermore, you could extract out the checking logic (since it's duplicated, used twice for the two parameters and the same logic is in other methods too) to a validator method: void swap(final int arrayIndex1, final int arrayIndex2) { checkValidIndex(arrayIndex1, "In method swap of class MaxHeap the input arrayIndex1 is not a valid node position: " + arrayIndex1); checkValidIndex(arrayIndex2, "In method swap of class MaxHeap the input arrayIndex2 is not a valid node position: " + arrayIndex2); ... } private void checkValidIndex(final int arrayIndex, final String message) { if (arrayIndex < 0 || arrayIndex > this.numberOfNodes) { throw new IllegalArgumentException(message); } } Google Guava has similar checkArgument method which supports handy template strings (%s) too. (See also: Effective Java, 2nd edition, Item 47: Know and use the libraries The author mentions only the JDK's built-in libraries but I think the reasoning could be true for other libraries too.) I'd use a little bit shorter exception messages, like "invalid arrayIndex: " + arrayIndex1. The stacktrace will show the place of the error. It helps debugging if you put the invalid value to the message. I think that the tempNodeValue could have a better name. Currently it does not express the developer's intent and the purpose of the variable. (Every local variable is temporary.) I'd call it oldValue instead since it stores the old value of index1. The this.heap[changingArrayIndex].compareTo(this.heap[this.getParentIndex(changingArrayIndex)]) > 0 condition is duplicated. It could be extracted out to a helper method: private boolean compareWithParent(final int arrayIndex) { final int parentIndex = this.getParentIndex(arrayIndex); return heap[arrayIndex].compareTo(heap[parentIndex]) > 0; }
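The index arithmetic this review leans on can be sanity-checked directly (a sketch in Python; 0-based indices as in the Java code):

```python
# Array-heap index relations: parent(i) = (i - 1) // 2, children at
# 2*i + 1 and 2*i + 2; a node is a leaf exactly when i >= n // 2.
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

# Parent/child round-trips hold for every index:
for i in range(100):
    assert parent(left(i)) == i and parent(right(i)) == i

# Leaf test used by isLeafNode: in a 7-node heap, indices 3..6 are leaves.
n = 7
leaves = [i for i in range(n) if left(i) >= n]
assert leaves == list(range(n // 2, n)) == [3, 4, 5, 6]
```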
{ "domain": "codereview.stackexchange", "id": 6185, "tags": "java, heap" }
GPSD socket connection and decoding JSON into Python dictionaries
Question: GPS3 is a python 2.7-3.5 interface to GPSD. I've stripped back everything to two classes. #!/usr/bin/env python3 # coding=utf-8 """ GPS3 (gps3.py) is a Python 2.7-3.5 GPSD interface (http://www.catb.org/gpsd) Defaults host='127.0.0.1', port=2947, gpsd_protocol='json' GPS3 has two classes. 1) 'GPSDSocket' to create a socket connection and retrieve the output from GPSD. 2) 'Fix' unpacks the streamed gpsd data into python dictionaries. These dictionaries are populated from the JSON data packet sent from the GPSD. Import import gps3 Instantiate gps_connection = gps3.GPSDSocket() gps_fix = gps3.Fix() Use print('Altitude = ', gps_fix.TPV['alt']) print('Latitude = ', gps_fix.TPV['lat']) Consult Lines 150-ff for Attribute/Key possibilities. or http://www.catb.org/gpsd/gpsd_json.html Run human.py; python[X] human.py [arguments] for a human experience. """ from __future__ import print_function import json import select import socket import sys __author__ = 'Moe' __copyright__ = "Copyright 2015-2016 Moe" __license__ = "MIT" __version__ = "0.11a" HOST = "127.0.0.1" # gpsd defaults GPSD_PORT = 2947 # " PROTOCOL = 'json' # " class GPSDSocket(object): """Establish a socket with gpsd, by which to send commands and receive data. """ def __init__(self, host=HOST, port=GPSD_PORT, gpsd_protocol=PROTOCOL, devicepath=None): self.devicepath_alternate = devicepath self.response = None self.protocol = gpsd_protocol self.streamSock = None if host: self.connect(host, port) def connect(self, host, port): """Connect to a host on a given port.
:param port: :param host: """ for alotta_stuff in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM): family, socktype, proto, _canonname, host_port = alotta_stuff try: self.streamSock = socket.socket(family, socktype, proto) self.streamSock.connect(host_port) self.streamSock.setblocking(False) except OSError as error: sys.stderr.write('\nGPSDSocket.connect OSError is-->', error) sys.stderr.write('\nAttempt to connect to a gpsd at {0} on port \'{1}\' failed:\n'.format(host, port)) sys.stderr.write('Please, check your number and dial again.\n') self.close() sys.exit(1) # TODO: gpsd existence check and start finally: self.watch(gpsd_protocol=self.protocol) def watch(self, enable=True, gpsd_protocol='json', devicepath=None): """watch gpsd in various gpsd_protocols or devices. Arguments: self: enable: (bool) stream data to socket gpsd_protocol: (str) 'json', 'nmea', 'rare', 'raw', 'scaled', 'split24', or 'pps' devicepath: option for non-default device path Returns: command: (str) e.g., '?WATCH={{"enable":true,"json":true}}' """ # TODO: 'timing' requires special attention, as it is undocumented and lives with dragons command = '?WATCH={{"enable":true,"{0}":true}}'.format(gpsd_protocol) if gpsd_protocol == 'rare': # 1 for a channel, gpsd reports the unprocessed NMEA or AIVDM data stream command = command.replace('"rare":true', '"raw":1') if gpsd_protocol == 'raw': # 2 channel that processes binary data, received data verbatim without hex-dumping. command = command.replace('"raw":true', '"raw",2') if not enable: command = command.replace('true', 'false') # sets -all- command values false . if devicepath: command = command.replace('}', ',"device":"') + devicepath + '"}' return self.send(command) def send(self, commands): """Ship commands to the daemon :param commands: """ # session.send("?POLL;") # TODO: Figure a way to work this in. # The POLL command requests data from the last-seen fixes on all active GPS devices. 
# Devices must previously have been activated by ?WATCH to be pollable. if sys.version_info[0] < 3: # Not less than 3, but 'broken hearted' because self.streamSock.send(commands) # 2.7 chokes on 'bytes' and 'encoding=' else: self.streamSock.send(bytes(commands, encoding='utf-8')) # It craps out here when there is no gpsd running # TODO: Add recovery, check gpsd existence, re/start, etc.. def __iter__(self): """banana""" # <------- for scale return self def next(self, timeout=0): """Return empty unless new data is ready for the client. Will sit and wait for timeout seconds :param timeout: """ try: (waitin, _waitout, _waiterror) = select.select((self.streamSock,), (), (), timeout) if not waitin: return else: gpsd_response = self.streamSock.makefile() # was '.makefile(buffering=4096)' In strictly Python3 self.response = gpsd_response.readline() return self.response except OSError as error: sys.stderr.write('The readline OSError in GPSDSocket.next is this: ', error) return __next__ = next # Workaround for changes in iterating between Python 2.7 and 3.5 def close(self): """turn off stream and close socket""" if self.streamSock: self.watch(enable=False) self.streamSock.close() self.streamSock = None return class Fix(object): """Retrieve JSON Object(s) from GPSDSocket and unpack it into respective gpsd 'class' dictionaries, TPV, SKY, etc. yielding hours of fun and entertainment. 
""" def __init__(self): """Sets of potential data packages from a device through gpsd, as a generator of class attribute dictionaries""" version = {"release", "proto_major", "proto_minor", "remote", "rev"} tpv = {"alt", "climb", "device", "epc", "epd", "eps", "ept", "epv", "epx", "epy", "lat", "lon", "mode", "speed", "tag", "time", "track"} sky = {"satellites", "gdop", "hdop", "pdop", "tdop", "vdop", "xdop", "ydop"} gst = {"alt", "device", "lat", "lon", "major", "minor", "orient", "rms", "time"} att = {"acc_len", "acc_x", "acc_y", "acc_z", "depth", "device", "dip", "gyro_x", "gyro_y", "heading", "mag_len", "mag_st", "mag_x", "mag_y", "mag_z", "pitch", "pitch_st", "roll", "roll_st", "temperature", "time", "yaw", "yaw_st"} # TODO: Check Device flags pps = {"device", "clock_sec", "clock_nsec", "real_sec", "real_nsec"} device = {"activated", "bps", "cycle", "mincycle", "driver", "flags", "native", "parity", "path", "stopbits", "subtype"} # TODO: Check Device flags poll = {"active", "fixes", "skyviews", "time"} devices = {"devices", "remote"} # ais = {} # see: http://catb.org/gpsd/AIVDM.html error = {"message"} # 'repository' of dictionaries possible, and possibly 'not applicable' packages = {"VERSION": version, "TPV": tpv, "SKY": sky, "GST": gst, "ATT": att, "PPS": pps, "DEVICE": device, "POLL": poll, "DEVICES": devices, "ERROR": error} # etc. 
# TODO: Create the full suite of possible JSON objects and a better way for deal with subsets for package_name, datalist in packages.items(): _emptydict = {key: 'n/a' for (key) in datalist} # There is a case for using None instead of 'n/a' setattr(self, package_name, _emptydict) self.SKY['satellites'] = [{'PRN': 'n/a', 'ss': 'n/a', 'el': 'n/a', 'az': 'n/a', 'used': 'n/a'}] self.DEVICES['devices'] = [{"class": 'n/a', "path": 'n/a', "activated": 'n/a', "flags": 'n/a', "driver": 'n/a', "native": 'n/a', "bps": 'n/a', "parity": 'n/a', "stopbits": 'n/a', "cycle": 'n/a'}] def refresh(self, gpsd_data_package): """Sets new socket data as Fix attributes Arguments: self (class): gpsd_data_package (json object): Returns: self attribute dictionaries, e.g., self.TPV['lat'] Raises: AttributeError: 'str' object has no attribute 'keys' when the device falls out of the system ValueError, KeyError: stray data, should not happen """ try: fresh_data = json.loads(gpsd_data_package) # The reserved word 'class' is popped from JSON object class package_name = fresh_data.pop('class', 'ERROR') # gpsd data package errors are also 'ERROR'. package = getattr(self, package_name, package_name) # packages are named for JSON object class for key in package.keys(): # TODO: Rollover and retry. It fails here when device disappears package[key] = fresh_data.get(key, 'n/a') # Updates and restores 'n/a' if key is absent in the socket # response, present --> "key: 'n/a'" instead.' except AttributeError: # 'str' object has no attribute 'keys' TODO: if returning 'None' is a good idea print("No Data") return None except (ValueError, KeyError) as error: sys.stderr.write(str(error)) # Look for extra data in stream return None if __name__ == '__main__': print('\n', __doc__) # # Someday a cleaner Python interface will live here # # End While 'refreshing' the data from the GPSD socket read, the JSON object is loaded into a JSON decoder module. 
This fresh data output has 'class' popped and its value becomes an attribute of the instance. Remaining data goes into a dictionary with the new values, such as gps_fix.TPV['lat'] = -33.123456789. If data is missing from the socket, key or value, persistently or sporadically, the key has its value replaced with 'n/a', the initialised value. Answer: In general it looks good and well documented. In the close method the return statement is unnecessary. Quotes are inconsistently used. Most of the :param annotations in the docstrings are unused. If you're not going to document them, just leave them out. The watch method is also not using the syntax at all, where it would make a lot of sense to use it. The finally block in connect seems weird. If I'm not mistaken it will be executed even if sys.exit is called (since that's implemented using a SystemExit exception) - is that intentional? I'd put a comment on it if so. Also, is the watch method intended to be called from outside the class? If not, then the default arguments are moot. Possibly also prefix it to avoid calling it from outside the class. In next the else block can be put inline as the if already returns from the method. Again, the return in the except handler is not necessary. Also, return None is the same as return, but I imagine that's done for clarity. In the _emptydict creation, the parens around key aren't needed: _emptydict = {key: 'n/a' for key in datalist} If possible I'd use the same construction for SKY and DEVICES btw. The documentation for refresh is wrong; there's nothing returned from that method (well, None, but that doesn't count).
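The refresh() pattern described here, in miniature (a sketch; the `refresh` helper and `TPV` dict below only mimic the class for illustration, they are not the GPS3 API):

```python
import json

# Pop the JSON "class" field to select a target dictionary, then update
# the known keys, restoring 'n/a' for any key absent from the packet.
TPV = {key: 'n/a' for key in ('lat', 'lon', 'alt', 'time')}

def refresh(packet, targets):
    data = json.loads(packet)
    name = data.pop('class', 'ERROR')
    package = targets.get(name)
    if package is None:
        return    # unknown package class; the real class handles ERROR
    for key in package:
        package[key] = data.get(key, 'n/a')

refresh('{"class": "TPV", "lat": -33.5, "lon": 151.2}', {'TPV': TPV})
assert TPV == {'lat': -33.5, 'lon': 151.2, 'alt': 'n/a', 'time': 'n/a'}
```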
{ "domain": "codereview.stackexchange", "id": 18681, "tags": "python, python-3.x, json, socket" }
Stochastic simulation event timing
Question: So I'm learning bits and pieces of python. Here's some code that just doesn't seem pythonic, and I'd like to improve it. I'm doing stochastic simulations with a Gillespie style approach, but if that doesn't mean anything to you, no worries. I'm trying to avoid some iterations and replace them with something like a list comprehension. The code will work, only I'm looking for a better way to do the same thing. First I calculate a stopping time (maxTime). Then I calculate the time of an event (and store it if it's less than maxTime). Then the time of the next event (and store again). I repeat until I finally get an event happening after maxTime. import random maxTime = random.expovariate(1) L = [] eventTime = random.expovariate(10) while eventTime<maxTime: L.append(eventTime) eventTime += random.expovariate(10) Any cleaner way to write this code? Answer: If you're writing a simulation, it's probably worthwhile to add some abstractions to your code so that you can free your mind to think at a more abstract level. Ideally, I would like to see L = list(until_total(stop_time, poisson_process(10))) (Consider not calling list() if you just need an iterable and not a list.) Here is one way to get there: from itertools import takewhile from random import expovariate def poisson_process(rate): while True: yield expovariate(rate) def until_total(limit, iterable): total = [0] # http://stackoverflow.com/q/2009402 def under_total(i): total[0] += i return total[0] < limit return takewhile(under_total, iterable) stop_time = next(poisson_process(1)) L = until_total(stop_time, poisson_process(10)) Also, consider using more meaningful identifiers: customer_arrivals = poisson_process(10) cashier_yawns = poisson_process(1) customer_interarrival_times = until_total(next(cashier_yawns), customer_arrivals)
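On Python 3, the one-element-list workaround in `until_total` can be avoided with `nonlocal` — a sketch keeping the answer's interface, checked here with fixed waiting times instead of random draws:

```python
from itertools import takewhile
from random import expovariate

def poisson_process(rate):
    while True:
        yield expovariate(rate)

def until_total(limit, iterable):
    total = 0
    def under_total(i):
        nonlocal total          # Python 3 replacement for the total = [0] trick
        total += i
        return total < limit
    return takewhile(under_total, iterable)

# Deterministic check: running totals 0.3, 0.6, 0.9 stay under 1.0;
# the fourth event would cross the limit and is dropped.
events = list(until_total(1.0, [0.3, 0.3, 0.3, 0.3]))
print(events)  # [0.3, 0.3, 0.3]
```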
{ "domain": "codereview.stackexchange", "id": 11138, "tags": "python, simulation" }
$\left(H^\dagger H\right)^2$ is invariant under $U(1)\times SU(2)$?
Question: Is it true that $\left(H^\dagger H\right)^2$ is invariant under $U\left(1\right) \times SU\left(2\right)$ where $H$ is the Higgs field $(1,2,1/2)$? Does this invariance imply that its hypercharge is invariant under $U\left(1\right)$ and its spin is invariant under $SU\left(2\right)$? $$H = [H_+, H_0]$$ $$H_+ = [H_-]$$ but $$H_0 = [?]$$ $$H^\dagger H = [H_-][H_+] + [?][H_0]$$ Answer: Yes, $(H^\dagger H)^2$ is invariant under $SU(2)\times U(1)$ because even without the second power, $H^\dagger H$ is invariant under it. By that, we mean $$\sum_{i=1}^2 H_i^* H_i$$ Of course it's invariant under $U(1)$ because the $U(1)$ charges of $H^\dagger$ and $H$ are opposite in sign and add up to zero. Note that the transformation of a charge-$Q$ field is $F\to F\exp(iQ\lambda)$. The invariance under $SU(2)$ is also self-evident because $\sum_i z^*_i z_i$ is exactly the bilinear (with one asterisk) invariant that defines the unitary groups. No, the invariance of $H^\dagger H$ or its square does not mean that the hypercharge of $H$ itself is zero or isospin is zero. Moreover, the OP seems to confuse the spin and the isospin.
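A quick numerical check (illustration only, not part of the answer) that both invariances hold for an explicit doublet, using the standard SU(2) parametrization:

```python
import cmath
import math

# An explicit SU(2) matrix U = [[a, b], [-b*, a*]] with |a|^2 + |b|^2 = 1
# preserves sum_i H_i* H_i; the angles below are arbitrary choices.
a = cmath.exp(0.7j) * math.cos(0.4)
b = cmath.exp(-1.2j) * math.sin(0.4)
U = [[a, b], [-b.conjugate(), a.conjugate()]]

H = [1.5 + 0.5j, -0.3 + 2.0j]          # an arbitrary doublet
UH = [U[0][0] * H[0] + U[0][1] * H[1],
      U[1][0] * H[0] + U[1][1] * H[1]]

norm = lambda v: sum(abs(z) ** 2 for z in v)
print(abs(norm(H) - norm(UH)) < 1e-12)  # True: H†H is SU(2)-invariant

# A global U(1) phase exp(i*Q*lambda) on both components also cancels:
phase = cmath.exp(0.9j)
print(abs(norm([phase * z for z in H]) - norm(H)) < 1e-12)  # True
```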
{ "domain": "physics.stackexchange", "id": 8685, "tags": "homework-and-exercises, symmetry, standard-model, electroweak, higgs" }
Why multiply grayscale images by 256
Question: I am a newbie in image processing. In open-source code for convolutional neural networks, the author is multiplying a grayscale image from the MNIST dataset by 256. Please can anyone tell me why? ADDITION: the grayscale images have a max value of 0.8681 and a min of -0.1319 Many thanks. Answer: Grayscale images are typically stored as 8 bits per pixel in files. Perhaps the network was trained to work with such data. $2^8 = 256,$ so 8 bits can represent any number from 0 to 255. It may well be that the author was converting from range $[0, 1]$ to $[0, 255]$. Note that $0.8681 - (-0.1319) = 1.$ Multiplying by 255 to stay within the range would have been more appropriate, but multiplying by 256 is appealing because it does not introduce any rounding error in finite precision floating point arithmetic. Subtracting the mean does not change the story as it can be equivalently done before or after the multiplication.
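A small illustration (not the question's code) of the range arithmetic: multiplying a [0, 1] image by 256 sends the value 1.0 to 256, one past the top of the 8-bit range, while multiplying by 255 stays inside [0, 255].

```python
# Illustrative pixel values assumed to lie in [0, 1].
pixels = [0.0, 0.5, 0.8681, 1.0]
print(max(p * 256 for p in pixels))   # 256.0 -- would overflow uint8
print(max(p * 255 for p in pixels))   # 255.0 -- fits
# Scaling by a power of two adds no floating-point rounding error:
print(0.5 * 256 == 128.0)             # True
```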
{ "domain": "dsp.stackexchange", "id": 6459, "tags": "image-processing, normalization" }
Non-deterministic Turing machine to solve graph colouring
Question: Consider the graph coloring problem: given an undirected graph $G$ and a natural number $n$, return yes if we can color the graph with $n$ different colors and no otherwise. I am able to design a deterministic Turing machine that would solve it with a greedy approach trying the different combinations of coloring with the $n$ colors. I think this takes exponential time, although I'm not sure how to prove it. However, I am not able to conceive a nondeterministic Turing machine that would do so. Can someone guide me in designing an algorithm for this machine? Answer: The algorithm would be as follows: Go through each vertex and "guess" a coloring using non-determinism. This takes time $O(V\log n)$, since this is the time it takes to write all the colors. Check that the coloring is legal, which also takes polynomial time. If it's not legal, it rejects, and otherwise it accepts. By definition, a non-deterministic Turing machine accepts an input if there is some accepting computation path. So the machine accepts iff some valid coloring exists.
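A deterministic simulation of this machine (an illustrative helper, not from the answer): the outer loop plays the role of branching over every possible "guess", and the inner check is the polynomial-time verification — which is why the simulation, unlike the nondeterministic machine itself, takes exponential time.

```python
from itertools import product

def colorable(edges, num_vertices, n_colors):
    # Try every "guessed" coloring; accept if some guess passes the check.
    for coloring in product(range(n_colors), repeat=num_vertices):
        # Polynomial-time legality check: no edge joins two same-colored vertices.
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
print(colorable(triangle, 3, 2))  # False: a triangle needs 3 colors
print(colorable(triangle, 3, 3))  # True
```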
{ "domain": "cs.stackexchange", "id": 4460, "tags": "complexity-theory, turing-machines" }
What is a Hilbert Space?
Question: The Hilbert Space is the space where wavefunctions live. But how would I describe it in words? Would it be something like: The infinite dimensional vector space consisting of all functions of position $\psi(\vec x)$ given the conditions that $\psi(\vec x)$ is a smooth, continuous function. (I am not saying this is right, it is merely an example). Furthermore, how does $\Psi(\vec x, t)$ fit into this? Would it be appropriate to say that $\Psi(\vec x, t)$ itself is not in the Hilbert space, but at any given time $t_0$ the function $\psi_0 (\vec x)\equiv \Psi(\vec x,t_0)$ is a member of the Hilbert Space? And all operators $\hat Q$ have to act on members in the Hilbert Space and therefore cannot have time-derivatives but can have time dependencies. Answer: Your idea of what $\psi(\vec x,t)$ is supposed to be is essentially correct. Given a space of states $\mathcal{H}$, the "Schrödinger state" is a map $$ \psi : \mathbb{R}\to\mathcal{H}, t\mapsto\lvert\psi(t)\rangle$$ where $\lvert \psi(t)\rangle\in\mathcal{H}$ for every instant $t$. If $\mathcal{H}$ is a space of functions in a variable $\vec x$, then $\lvert \psi(t)\rangle$ is often written $\psi(\vec x,t)$. The space of wavefunctions in usual quantum mechanics is crucially not the space of smooth functions $C^\infty(\mathbb{R}^3,\mathbb{C})$, but the space of equivalence classes of square integrable functions $L^2(\mathbb{R^3},\mathbb{C})$ (link to Wikipedia article on $L^p$-spaces). This space contains the smooth compactly supported functions $C_c^\infty(\mathbb{R}^3,\mathbb{C})$ (non-compactly supported functions are not necessarily square-integrable) and also all smooth square-integrable functions, but is larger.
The reason for this is twofold: For one, there is no reason to demand a generic wavefunction be continuous - the whole physical content of the wavefunction is encapsulated in the probability density $$ \rho(\vec x) = \lvert\psi(x)\rvert^2$$ and this needs to have $\int\rho(\vec x)\mathrm{d}^3x = 1$ to be probability density, hence $\psi$ must be square integrable, but nothing else. Another reason is that the smooth compactly supported functions do not form a Hilbert space under the scalar product $$ (f,g) = \int \overline{f(x)}g(x)\mathrm{d}^3 x$$ since they are not complete - there are sequences which are Cauchy with respect to the $L^2$-norm induced by this inner product, but which do not converge. The space of square-integrable functions is precisely the completion of the smooth compactly supported functions under this norm, and hence a Hilbert space. In other words, every wavefunction may be arbitrarily accurately be approximated by a smooth compactly supported function, but is itself not guaranteed to be a smooth function. Among other difficulties commonly overlooked, this means that, strictly speaking, one cannot evaluate wavefunctions at points, since the $L^2$-space elements are only defined up to a zero measure set, and points are of zero measure. This is again meaningful when considering $\rho(\vec x)$: The value of a probability density on a zero measure set is physically meaningless, since only the integration of it over a set of non-zero measure gives a physically relevant probability.
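The normalization constraint can be made concrete with a small numerical illustration (not part of the answer): a smooth, square-integrable state — the Gaussian ground state — whose density $\rho = |\psi|^2$ integrates to 1, which is the only condition imposed above on a physical wavefunction.

```python
import math

def psi(x):
    # Normalized Gaussian: psi(x) = pi^(-1/4) * exp(-x^2 / 2)
    return math.pi ** -0.25 * math.exp(-x * x / 2)

dx = 0.001
grid = [i * dx for i in range(-10000, 10001)]      # the interval [-10, 10]
total = sum(abs(psi(x)) ** 2 for x in grid) * dx   # Riemann sum for the norm
print(round(total, 6))  # 1.0
```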
{ "domain": "physics.stackexchange", "id": 26547, "tags": "quantum-mechanics, hilbert-space" }
Grouping all mathematical results using a number sequence with basic math operators
Question: Not sure how to make the title smaller. I was reading the question A truly amazing way of making the number 2016 and some of the answers referred to programatically determining the answer set. What I am doing is related to that in concept. Given a range of number 1 to X determine all possible mathematical expressions that can be used with those numbers and group the results together to see which whole numbers prevail. function ConvertTo-Base { [CmdletBinding()] param ( [parameter(ValueFromPipeline=$true,Mandatory=$True, HelpMessage="Base10 Integer number to convert to another base")] [int]$Number=1000, [parameter(Mandatory=$True)] [ValidateRange(2,20)] [int]$Base ) [char[]]$alphanumerics = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" do { # Determine the remainder $Remainder = ($Number % $Base) # Get the associated character and add it to the beginning of the string. $newBaseValue = "$($alphanumerics[$Remainder])$newBaseValue" # Adjust the number to remove the calculated portion $Number = ($Number - $Remainder) / $Base # Loop until we have processed the whole number } while ($Number -gt 0) return $newBaseValue } # Variables $maxRange = 3 #13 would make for the largest values 13!. Cap the script as 13 $lowCountThreshold = 1 # Only show group results where the match exists more than five times. # Mathematical Operators [char[]]$operators = "+-*/" # Define the number range for calculations. 13 would make for the largest values 13!. Cap the script as 13 $range = 1..$maxRange # Build the format string that will be used for invoking. Will look like 1{0}2{1}3. Acting as place holders for mathematic operators $formatString = -join (1..($range.count - 1) | ForEach-Object{"$_{$([int]$_ - 1)}"}) + $range[-1] # Determine the number of possible permutations of those operators inbetween the number set. 
$permutations = [System.Math]::Pow($operators.Count,$range.count - 1) # Cycle each permutation 0..($permutations - 1) | ForEach-Object{ # Convert the number to a base equal to the element count in operators. Use those values to represent the index of the operators array. $mathString = $formatString -f @([string[]][char[]]((ConvertTo-Base -Number $_ -Base $operators.Count).PadLeft($range.count - 1,"0")) | ForEach-Object{$operators[[int]$_]}) # Build an object that contains the result and the mathematical expression [pscustomobject]@{ Expression = $mathString Value = Invoke-Expression $mathString } # Since this takes a while try and give the user some semblance of progress. Write-Progress -Activity "Performing mathematical calculations" -Status "Please wait." -PercentComplete ($_ / $permutations * 100) -CurrentOperation "$([math]::Round($_ / $permutations * 100))% Completed." # Filter for whole number and only give group results } | Where-Object{$_.Value -is [int32]} | Group-Object Value | Where-Object{$_.Count -ge $lowCountThreshold} | Sort-Object Count -Descending So if you were to run this with $maxRange set to something small like 3, you would get: Count Name Group ----- ---- ----- 2 6 {@{Expression=1+2+3; Value=6}, @{Expression=1*2*3; Value=6}} 1 0 {@{Expression=1+2-3; Value=0}} 1 7 {@{Expression=1+2*3; Value=7}} 1 2 {@{Expression=1-2+3; Value=2}} 1 -4 {@{Expression=1-2-3; Value=-4}} 1 -5 {@{Expression=1-2*3; Value=-5}} 1 5 {@{Expression=1*2+3; Value=5}} 1 -1 {@{Expression=1*2-3; Value=-1}} So there are two operations that would get 6: 1+2+3 and 1*2*3. Who would have thought! If you are testing this be careful of using larger numbers here. Running something like a 9 would take about 7 minutes give or take since it would have 65536 permutations to figure out and then group. The function helps me convert each permutation number into its mathematical operator sequence. I take a number and convert it into base 4.
Then take that and use each number to pull an element out of the operator array. Then use the format operator to populate the string and use invoke expression to get the result. Answer: About performance (there's a good article to get your started: Slow Code: Top 5 ways to make your Powershell scripts run faster); I apply that advice as follows: Problem #1: Expensive operations repeated, e.g. Invoke-Expression: rather than this time consuming cmdlet, I evaluate math expressions using the Compute method of the DataTable class; Write-Progress: show progress bar. We don't call Write-Progress each time through the loop because that is slow. Edit: due to this Dangph's answer; Problem #0: Not using cmdlet parameter filters, e.g. instead of collecting all the [System.Math]::Pow($operators.Count,$range.count-1) permutations and then narrowing the huge array using the Where-Object cmdlet, I collect only desired values; Problem #4: Searching text: the original script computes a string in the (commonly usable) ConvertTo-Base function and then casts it as [string[]][char[]]; … and some further (maybe minor) enhancements are visible in the following (partially commented) 122635answer.ps1 script: # [CmdletBinding(PositionalBinding=$false)] # slows script execution cca 50% param ( # Variables [parameter()] # [ValidateRange(3,20)] # ??? [int]$maxRange = 9, [parameter()] [int]$lowCountThreshold = 5, [parameter()] [ValidateNotNullOrEmpty()] [string]$opString = '+-*/' # Mathematical Operators as string ) Begin { Set-StrictMode -Version Latest # cast $operators variable as an array of characters $operators = [char[]]$opString $opsCount = $operators.Count # Define the number range for calculations. 13 would make for the largest values 13!. 
Cap the script as 13 $maxRangeMinus1 = $maxRange - 1 # Build an array for extending $maxOpsArray = 1..$maxRange for ( $i=0; $i -lt $maxRange; $i++ ) { $maxOpsArray[$maxRangeMinus1 -$i] = ,$operators[0] * $i } # Build the format string that will be used for invoking. # Will look like 1{2}2{1}3{0}4. Acting as place holders for mathematic operators [string]$formatString = -join (1..($maxRangeMinus1) | ForEach-Object{"$_{$([int]$maxRangeMinus1 - $_)}"}) + $maxRange # reverse order # ForEach-Object{"$_{$([int]$_ - 1)}"}) + $maxRange # $range[-1] # ascending order # ascending order would require `[array]::Reverse($newOperatorArr)` below in the process loop if ( $maxRange -gt 11 ) { # force decimal computing in following `$DataTable.Compute( $mathString, '')` $formatString = $formatString.Replace('{','.0{') + '.0' } # Determine the number of possible permutations of those operators inbetween the number set. [int64]$permutations = [System.Math]::Pow($opsCount, $maxRangeMinus1) # Narrow down $alphanumerics array size to necessary count $alphanumerics = $([char[]]'0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' )[0..($opsCount -1)] Write-Verbose -Verbose -Message ` ("maxRange=$maxRange, lowCountThreshold=$lowCountThreshold, operators=""$(-join $operators)""" ` + "`r`npermutations=$permutations formatString=""$formatString""") $DataTable=[System.Data.DataTable]::new() $Error.Clear() # for debugging purposes } Process { # Cycle each permutation. 
Use `for` loop instead of `0..($permutations - 1) | ForEach-Object` $( for ( $i=0; $i -lt $permutations; $i++ ) { # Build an array of operators: # ( based on the number converted to base `$opsCount` ) $Number = $i $newOperatorArr = @( $( do { $Remainder = $Number % $opsCount # Get the associated character $operators[$Remainder] $Number = ($Number - $Remainder) / $opsCount } while ($Number -gt 0) )) # Extend array of operators to appropriate length if necessary if ( $newOperatorArr.Count -lt $maxRangeMinus1 ) { $newOperatorArr += $maxOpsArray[$newOperatorArr.Count] } ### [array]::Reverse($newOperatorArr) # only if `$formatString` is in ascending order $mathString = $formatString -f @( $newOperatorArr ) # evaluate math expression using the Compute method of the DataTable class # rather than time consuming Invoke-Expression $value122635 = $DataTable.Compute( $mathString, '') # Effectively reduce the output size in advance: refuse "non-integers" if ( $value122635 -eq [System.Math]::Floor($value122635) ) { # Build an object that contains the result and the mathematical expression [pscustomobject]@{ Expression = $mathString Value = [System.Math]::Floor($value122635) # [int64]$value122635 } } # Show progress bar. We don't call Write-Progress each time through the loop # because that is slow. # Due to Dangph's answer https://codereview.stackexchange.com/a/221098/88771 if ($i % 1000 -eq 0) { Write-Progress -Activity "Performing mathematical calculations" ` -Status "Please wait." -PercentComplete (100 * $i / $permutations) ` -CurrentOperation "$([math]::Round(100 * $i / $permutations))% Completed." } # Only give group results } ) | Group-Object Value | Where-Object{$_.Count -ge $lowCountThreshold} | Sort-Object -property Count <# -Descending <##>, @{Expression = {[int]$_.Name} } About correctness: the $_.Value -is [int32] condition seems to be too strong. 
For instance, 1/2*3*4 gives 6, however (1/2*3*4).GetTypeCode() gives Double; hence, the (1/2*3*4) -is [int32] condition (incorrectly) rejects the value from the result. For the sake of comparison, I added an appropriate Write-Verbose cmdlet (immediately before the main loop) and used $maxRange = 7; $lowCountThreshold = 1 in the original script. The comparison of the latter with the adapted one I performed using the following wrapper: param ( [parameter()] [ValidateRange(8,13)] [int]$maxLoop = 12 ) $y = (Measure-Command {$x = D:\PShell\CR\122635.ps1}).TotalSeconds $z = ($x | Measure-Object -Property Count -Sum).Sum 'orig. {0,4} {1,9} {2,9} {3,16}' -f 7, $x.count, $z, $y for ( $icnt=7; $icnt -lt $maxLoop; $icnt++ ) { $y = (Measure-Command { $x = D:\PShell\CR\122635answer.ps1 -maxRange $icnt -lowCountThreshold 1 }).TotalSeconds $z = ($x | Measure-Object -Property Count -Sum).Sum 'answer {0,4} {1,9} {2,9} {3,16}' -f $icnt, $x.count, $z, $y if ($icnt -eq 7) {''} } Comparison result shows that the 122635answer.ps1 script runs approximately 26× faster (21.33 / 0.80) than the original for the given $maxRange=7 and $lowCountThreshold=1 (in fact, I see no practical use if maxRange is greater than 9; that's merely to show exponential time growth…): pwsh -noprofile -file D:\PShell\CR\122635wrapper.ps1 VERBOSE: maxRange=7, lowCountThreshold=1, operators="+-*/" permutations=4096 formatString="1{0}2{1}3{2}4{3}5{4}6{5}7" orig.
7 269 756 21,3338469 VERBOSE: maxRange=7, lowCountThreshold=1, operators="+-*/" permutations=4096 formatString="1{5}2{4}3{3}4{2}5{1}6{0}7" answer 7 284 839 0,7970644 VERBOSE: maxRange=8, lowCountThreshold=1, operators="+-*/" permutations=16384 formatString="1{6}2{5}3{4}4{3}5{2}6{1}7{0}8" answer 8 663 2605 1,8726185 VERBOSE: maxRange=9, lowCountThreshold=1, operators="+-*/" permutations=65536 formatString="1{7}2{6}3{5}4{4}5{3}6{2}7{1}8{0}9" answer 9 1514 7897 7,5665315 VERBOSE: maxRange=10, lowCountThreshold=1, operators="+-*/" permutations=262144 formatString="1{8}2{7}3{6}4{5}5{4}6{3}7{2}8{1}9{0}10" answer 10 3286 24349 32,0042106 VERBOSE: maxRange=11, lowCountThreshold=1, operators="+-*/" permutations=1048576 formatString="1{9}2{8}3{7}4{6}5{5}6{4}7{3}8{2}9{1}10{0}11" answer 11 7089 73434 158,3116746 Edit: due to this Dangph's answer, above performance improvements become even better…
{ "domain": "codereview.stackexchange", "id": 34693, "tags": "math-expression-eval, powershell" }
Get list of dates that are the last day of each month in a date range
Question: What is a good way to get rid of these if/elif/else inside the for loop? How would you improve its readability? from datetime import datetime from calendar import monthrange def iter_completed_months(initial_date, today): delta_months_years = (today.year - initial_date.year)*12 delta_months_current_year = today.month - initial_date.month return delta_months_years + delta_months_current_year initial_date = datetime(2021, 11, 30) today = datetime.now() # populate a list that goes from initial_date until today with all the dates # representing the very end of each month year = initial_date.year month = initial_date.month end_month_dates = [] for _iter_month in range(iter_completed_months(initial_date, today)): if month <= 12: day = monthrange(year, month)[1] elif month > 12: year = year + 1 month = 1 day = monthrange(year, month)[1] else: day = monthrange(year, month)[1] end_month_dates.append(datetime(year, month, day)) month = month + 1 print(end_month_dates) This code populates a list end_month_dates with all "end of months" from initial_date to today. Answer: Put your code in functions. The benefits are numerous. Even if you don't appreciate them fully yet, you should do it anyway. Stand on the shoulders of the computing giants, who knew that functions were the way to go. You are using datetimes, but dates seem more appropriate. Based on the information you have given us, the function should reason with dates. A simpler approach. Compute the month start and end dates. Yield the latter if it's still within the desired bounds. Then advance to the next month by adding 31 days to the month start. [See the comment from RootTwo for a further simplification, allowing one to drop month_start.] from calendar import monthrange from datetime import date, timedelta def last_dates_of_months(start_date, end_date): d = start_date while True: # Compute first and last dates of the month.
month_start = d.replace(day = 1) n_days = monthrange(d.year, d.month)[1] month_end = d.replace(day = n_days) # Yield or break. if month_end <= end_date: yield month_end else: break # Advance to a date in the next month. d = month_start + timedelta(days = 31)
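To see the generator in action, here is the answer's function with the month-start simplification folded in, plus a usage run on illustrative dates (the end date is an arbitrary choice, not from the question):

```python
from calendar import monthrange
from datetime import date, timedelta

def last_dates_of_months(start_date, end_date):
    d = start_date
    while True:
        # Last date of the current month.
        n_days = monthrange(d.year, d.month)[1]
        month_end = d.replace(day=n_days)
        if month_end <= end_date:
            yield month_end
        else:
            break
        # Advance to a date in the next month.
        d = d.replace(day=1) + timedelta(days=31)

dates = list(last_dates_of_months(date(2021, 11, 30), date(2022, 2, 28)))
print(dates)  # 2021-11-30, 2021-12-31, 2022-01-31, 2022-02-28
```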
{ "domain": "codereview.stackexchange", "id": 44586, "tags": "python" }
Relationship between entanglement and complex vector space
Question: In the article Quantum Algorithm Implementations for Beginners I found the following sentence: Entanglement makes it possible to create a complete $2^n$ dimensional complex vector space to do our computations in, using just $n$ physical qubits. My thought process: Let there be three qubits $(Q_1, Q_2, Q_3)$ with corresponding Hilbert spaces $\mathcal{H_1^{2}}$,$\mathcal{H_2^{2}}$,$\mathcal{H_3^{2}}$. Using tensor products I can write a $2^n$ dimensional Hilbert space as \begin{equation} \mathcal{H^{2^3}} = \mathcal{H_1^{2}} \otimes\mathcal{H_2^{2}}\otimes \mathcal{H_3^{2}}. \end{equation} So with my logic, I've created a $2^n$ dimensional Hilbert space without entanglement. My question: Why does entanglement make it possible, and where am I wrong? Answer: The error lies in the assertion that $\mathcal{H^{2^3}} = \mathcal{H_1^{2}} \otimes\mathcal{H_2^{2}}\otimes \mathcal{H_3^{2}}$ is without entanglement. For example, $\frac{|000\rangle+|111\rangle}{\sqrt2}\in\mathcal{H^{2^3}}$ is the GHZ state famous for exemplifying one type of tripartite entanglement. Nevertheless, the misunderstanding has an interesting lesson about the nature of entanglement. Note that we do not explicitly use or assume the existence of entanglement anywhere in the construction of $\mathcal{H^{2^3}}$. Instead, entanglement emerges as a result of using the tensor product in the definition. In physics terms, the tensor product expresses the fact that we allow$^1$ $\mathcal{H^{2^3}}$ to contain superpositions of product states such as $|000\rangle$ and $|111\rangle$. It turns out that most such superpositions are in fact entangled states. This is related to the fact that entanglement is not a primitive notion of quantum mechanics and does not explicitly appear in any of its postulates. Instead, it is a consequence of the postulate that defines the state space of a composite system to be the tensor product of the state spaces of the component systems.
$^1$The question whether a superposition of two states of a physical system is allowed or not is not a trivial matter. See superselection rules for more details on when superpositions are disallowed.
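A small numerical illustration (not from the answer) that the GHZ state is entangled while $|000\rangle$ is not: a state in the $2^3$-dimensional product space is a product state across the 1|23 bipartition iff its 2x4 reshape has rank 1, i.e. all of its 2x2 minors vanish.

```python
import math
from itertools import combinations

def is_product_across_first_qubit(state):           # state: 8 amplitudes
    m = [state[:4], state[4:]]                      # 2 x 4 reshape
    # Rank 1 <=> every 2x2 minor vanishes.
    return all(abs(m[0][i] * m[1][j] - m[0][j] * m[1][i]) < 1e-12
               for i, j in combinations(range(4), 2))

ghz = [1 / math.sqrt(2), 0, 0, 0, 0, 0, 0, 1 / math.sqrt(2)]
print(is_product_across_first_qubit(ghz))           # False -> entangled

product_000 = [1, 0, 0, 0, 0, 0, 0, 0]             # |000>, a product state
print(is_product_across_first_qubit(product_000))  # True
```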
{ "domain": "quantumcomputing.stackexchange", "id": 3482, "tags": "quantum-state, entanglement, linear-algebra" }
Where does the electric force come from if an electron has no definite location?
Question: Say electron A is nearby another electron (B), so that they may repel each other. Electron B is in a position eigenstate (so it has a definite position). But electron A is not. How does electron A affect the acceleration of electron B? Does it "divide up" its electromagnetic force as if it were a charged object spanning the space that the wave function occupies, whose charge density is proportional to the value of the probability density function? Otherwise, how can electron B decide where to move? Simply: if an electron can be in "multiple places at once", and the force it produces depends on its location, which location is "chosen" for that force? ...I know that $\exists$ a whole theory on this, Quantum Electrodynamics (thanks Feynman!!!), but I have not studied it. I have only ever taken an intro QM class as an undergraduate. Edit: If the position eigenstate causes problems, let B be in an arbitrary eigenstate as well. The question is rephrased: if the positions are indeterminate, how is the force, which depends on them, calculated? Answer: Note that the problem you pose is non-realistic. If at a certain moment B is in a position eigenstate, $\delta (\vec r)$, an extremely short time later B can be everywhere in the universe with equal probability. You will see the effect of this below. But let's first calculate the force $<\vec F>$. In QM, the influence between A and B goes as follows: let $\psi_A(\vec r)$ be the wave-function of the electron A, where the vector $\vec r$ connects A, wherever A is, with B. Then the force of interaction is $\vec F(\vec r) = \frac {e^2 \vec r}{4 \pi \epsilon_0 |\vec r|^3}$. The average force between the two electrons is $<\vec F> = \int d\vec r \int d\vec r' \psi_A^* (\vec r)\delta (\vec r') \frac {e^2 (\vec r - \vec r')}{4 \pi \epsilon_0 |\vec r - \vec r'|^3} \psi_A (\vec r) \delta (\vec r')$.
$=\int d\vec r d\vec r' \delta (0) |\psi_A (\vec r)|^2 \frac {e^2 \vec r}{4 \pi \epsilon_0 |\vec r|^3} = \delta (0)\int d\vec r |\psi_A (\vec r)|^2 \frac {e^2 \vec r}{4 \pi \epsilon_0 |\vec r|^3}$ So, we have a problem because the function $\delta (\vec r')$ has infinite norm. On the other hand, if $\psi_A$ is spherically symmetrical, one gets $<\vec F> = 0$. For the case that $\psi_A$ is not spherically symmetrical, we have to replace the wave-function of B by another function, let's name it $\psi_B (\vec r')$, highly localized around the point $\vec r' = 0$, but normalized. In that case $<\vec F>=\int d\vec r d\vec r' |\psi_B (\vec r')|^2 |\psi_A (\vec r)|^2 \frac {e^2 (\vec r - \vec r')}{4 \pi \epsilon_0 |\vec r - \vec r'|^3}$, and since $\psi_B (\vec r')$ is highly localized around $\vec r' = 0$ we can approximate $<\vec F>=\int d\vec r d\vec r' |\psi_B (\vec r')|^2 |\psi_A (\vec r)|^2 \frac {e^2 \vec r}{4 \pi \epsilon_0 |\vec r|^3} = \int d\vec r |\psi_A (\vec r)|^2 \frac {e^2 \vec r}{4 \pi \epsilon_0 |\vec r|^3}$. Now I return to the next moment after localization. The function $\psi_B (\vec r')$ will be practically zero everywhere. So, in the second-to-last equation we will get $<\vec F> = 0$.
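The spherical-symmetry cancellation can be checked numerically (a toy discretization, not part of the answer): for a spherically symmetric $|\psi_A|^2$, the averaged Coulomb field at B's location vanishes, because contributions from opposite grid points cancel pairwise.

```python
import math

# Symmetric grid about the origin; the singular point r = 0 is excluded.
pts = [0.5 * i for i in range(-6, 7)]               # -3.0, -2.5, ..., 3.0
F = [0.0, 0.0, 0.0]
for x in pts:
    for y in pts:
        for z in pts:
            r2 = x * x + y * y + z * z
            if r2 == 0:
                continue                             # skip the singularity
            rho = math.exp(-math.sqrt(r2))           # spherically symmetric density
            w = rho / r2 ** 1.5                      # rho * r_vec / r^3 weighting
            F[0] += w * x; F[1] += w * y; F[2] += w * z
print(all(abs(c) < 1e-9 for c in F))                 # True: <F> = 0 by symmetry
```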
{ "domain": "physics.stackexchange", "id": 18894, "tags": "quantum-mechanics, quantum-field-theory, particle-physics" }
What is the physics in a spacetime with only two point masses?
Question: Imagine a universe consisting of only two point masses, neither of which are in motion. Would they be drawn towards each other? Does there need to be motion for the curvature of spacetime to affect them? Answer: If "spacetime" is assumed to have curvatures obeying general relativity then gravitational fields can do work, hence carry energy, hence have mass. So physically speaking this is not a universe "consisting of only two point masses" because spacetime is not empty. And yes, they will interact with the gravitational field, which will also interact with itself, according to the Einstein equations, producing changes in spacetime curvature that will pull them together. If space is assumed an empty shell and they interact via Newtonian action at a distance say there can be no change in curvatures by assumption. But there are issues with whether space is even physically meaningful if all that exists are two points.
{ "domain": "physics.stackexchange", "id": 24607, "tags": "spacetime, universe" }
Why does current vary continuously with potential in the photoelectric effect?
Question: In the photoelectric effect, for a given intensity and frequency of light source, why does the current increase as you decrease the retarding potential, below the stopping potential. Isn't the current a measure of the number of electrons passing through the gap successfully? Below the stopping potential, shouldn't this depend only on the intensity? Answer: Think of emitted electrons and the stopping potential as the equivalent of a bunch of bullets fired in all directions in a gravitational field. Depending on the direction a given bullet is fired in, it will reach a different height before it falls back to earth. If you find the maximum altitude the bullets reach, you can figure out the muzzle velocity of the guns. You could do this by flying an aircraft above the field, and listening for bullets hitting the belly. If you gradually fly higher and higher the number of impacts will gradually decrease, and when they stop altogether you've got the maximum altitude. In the same way, electrons are emitted at varying angles, with different velocities perpendicular to the surface, and by gradually increasing the voltage you gradually select out the electrons with maximum velocity/energy. When the number of electrons detected (the current) drops to zero, you've got the maximum possible energy from the electrons.
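A toy numerical version of this picture (all numbers assumed for illustration): every electron leaves the surface with the same kinetic energy $E_\text{max}$ but at a random angle, so the energy along the retarding-field direction varies from electron to electron, and the surviving fraction — the current — falls off continuously.

```python
import random

random.seed(42)
E_max = 2.0                                        # maximum energy, in eV
# E_perp = E_max * cos^2(theta); model cos(theta) as uniform in (0, 1)
energies = [E_max * random.uniform(0, 1) ** 2 for _ in range(50000)]

def current_fraction(V):
    """Fraction of electrons that clear a retarding potential V."""
    return sum(e > V for e in energies) / len(energies)

for V in (0.0, 0.5, 1.0, 1.5, 2.5):
    print(V, round(current_fraction(V), 3))
# The fraction falls continuously as V grows and reaches zero only
# once V exceeds E_max -- the stopping potential.
```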
{ "domain": "physics.stackexchange", "id": 25145, "tags": "quantum-mechanics, photoelectric-effect" }
Feynman's Lost Lecture: what is the significance of $\frac{d\mathfrak{v}}{d\theta}=-\frac{GMm}{\left|\mathfrak{L}\right|}\hat{\mathfrak{r}}?$
Question: My question pertains to a fact used by Richard Feynman in his so-called Lost Lecture. http://books.wwnorton.com/books/Feynmans-Lost-Lecture/. I have only skimmed the book, so I have much more to learn from it. One of the important insights in Feynman's modified version of Newton's derivation of Kepler's Laws is that the change in the velocity vector per change in polar angle is of constant magnitude and directed parallel to the force. The following is not Feynman's demonstration of this fact, which did not use calculus. It is my demonstration using calculus. The introduction of the areal velocity is not necessary for the final result. It is included with the hope that it may motivate understanding. Notation: $m:$ constant mass of planet. $M:$ constant mass of (immovable) Sun. $\hat{\mathfrak{r}}:$ unit vector directed from Sun to the planet. $\theta:$ polar angle of $\hat{\mathfrak{r}}$, measured in the orbital plane. $\hat{\mathfrak{k}}:$ unit vector normal to the orbital plane. $\omega\hat{\mathfrak{k}}\equiv\frac{d\theta}{dt}\hat{\mathfrak{k}}:$ angular velocity of $\hat{\mathfrak{r}}$. $\hat{\theta}\equiv\frac{d\hat{\mathfrak{r}}}{d\theta}:$ unit vector perpendicular to $\hat{\mathfrak{r}}$ and lying in the orbital plane. $\mathfrak{r}\equiv r\hat{\mathfrak{r}}:$ radial position of the planet relative to Sun. $\mathfrak{v}_{\parallel}:$ planet's velocity component parallel to $\hat{\mathfrak{r}}$. $\mathfrak{v}_{\perp}:$ planet's velocity component perpendicular to $\hat{\mathfrak{r}}$. $\mathfrak{v}\equiv\frac{d\mathfrak{r}}{dt}=\frac{dr}{dt}\hat{\mathfrak{r}}+r\frac{d\theta}{dt}\hat{\theta}=\mathfrak{v}_{\parallel}+\mathfrak{v}_{\perp}:$ velocity of the planet. $\mathfrak{p}\equiv m\mathfrak{v}:$ momentum of the planet. $\mathfrak{L}\equiv\mathfrak{r}\times\mathfrak{p}:$ angular momentum of the planet.
$\vec{\mathscr{L}}\equiv\frac{\mathfrak{L}}{m}=\mathfrak{r}\times\mathfrak{v}=\mathscr{L}\hat{\mathfrak{k}}:$ angular momentum of a unit mass in the planet's orbit. $\mathfrak{A}_{\theta}\equiv A_{\theta}\hat{\mathfrak{k}}:$ oriented area of the region enclosed by the planet's radial vector with polar angle $\theta_{0}=0$, the radial vector with polar angle $\theta<2\pi,$ and the path of the planet between those points. $\mathfrak{a}\equiv\frac{d\mathfrak{v}}{dt}:$ acceleration of the planet toward Sun. $\mathfrak{F}\equiv-\frac{GMm}{r^{2}}\hat{\mathfrak{r}}=m\mathfrak{a}:$ gravitational force of Sun on the planet. Demonstration: Angular momentum is a conserved geometric object $$ \frac{d\mathfrak{L}}{dt}=\mathfrak{0}. $$ The parallelogram associated with the cross product gives $$ \mathfrak{r}\times d\mathfrak{r}=2d\mathfrak{A}_{\theta}. $$ This is related to the unit-mass angular momentum $$ \mathfrak{r}\times\frac{d\mathfrak{r}}{dt}=2\frac{d\mathfrak{A}_{\theta}}{dt}=\vec{\mathscr{L}} $$ $$ =\mathfrak{r}\times\left(\mathfrak{v}_{\parallel}+\mathfrak{v}_{\perp}\right) $$ $$ =r\hat{\mathfrak{r}}\times r\frac{d\theta}{dt}\hat{\theta} $$ $$ =r^{2}\frac{d\theta}{dt}\hat{\mathfrak{k}}. $$ It follows that the distance and angular speed are related to the constant magnitude of angular momentum by $$ \mathscr{L}=r^{2}\frac{d\theta}{dt}. $$ Rewriting this result gives $$ \frac{1}{r^{2}}=\frac{1}{\mathscr{L}}\frac{d\theta}{dt}. $$ Substituting this into the force equation gives $$ \frac{d\mathfrak{v}}{dt}=-\frac{GM}{\mathscr{L}}\frac{d\theta}{dt}\hat{\mathfrak{r}}. $$ A simple manipulation shows that the derivative of velocity with respect to the polar angle is of constant magnitude with the same sense and direction as the gravitational force acting on the planet. $$ \frac{d\mathfrak{v}}{d\theta}=-\frac{GM}{\mathscr{L}}\hat{\mathfrak{r}} $$ $$ =-\frac{GMm}{\left|\mathfrak{L}\right|}\hat{\mathfrak{r}}.
$$ Written in terms of force $$ \frac{\mathfrak{F}}{\omega}=\frac{1}{\omega}\frac{d\mathfrak{p}}{dt}=-\frac{GMm}{\mathscr{L}}\hat{\mathfrak{r}}. $$ Question: Is this relationship associated with some specific conservation law, or with some stationary value such as "action"? Is there any other valuable insight to be had from this relationship? Answer: Forgive me if I'm going to use my own notations. Using the polar unit vectors \begin{align} \boldsymbol{e}_r &= \cos\theta \boldsymbol{e}_x + \sin\theta \boldsymbol{e}_y,\\ \boldsymbol{e}_\theta &= -\sin\theta \boldsymbol{e}_x + \cos\theta \boldsymbol{e}_y, \end{align} we find \begin{align} \boldsymbol{r} &= r\boldsymbol{e}_r,\\ \boldsymbol{v} &= \dot{r}\boldsymbol{e}_r + r\dot{\theta}\boldsymbol{e}_\theta = \dot{r}\boldsymbol{e}_r + \boldsymbol{\omega} \times \boldsymbol{r},\tag{1} \end{align} with $\boldsymbol{\omega} = \dot{\theta}\boldsymbol{e}_z$. Let's also define the specific angular momentum $\boldsymbol{h}$ as \begin{equation} \boldsymbol{h} = \boldsymbol{r} \times \boldsymbol{v} = r^2\boldsymbol{\omega}, \end{equation} so that $h = r^2\dot{\theta}$. The acceleration $\boldsymbol{a}$ is given by \begin{equation} \boldsymbol{a} = -\frac{\mu}{r^2}\boldsymbol{e}_r, \end{equation} which can be written as \begin{equation} \boldsymbol{a} = -\frac{\mu}{h}\dot{\theta}\boldsymbol{e}_r = \frac{\mu}{h}\dot{\boldsymbol{e}}_\theta= \boldsymbol{\omega} \times \boldsymbol{v}_\text{c},\tag{2} \end{equation} with \begin{equation} \boldsymbol{v}_\text{c} = \frac{\mu}{h}\boldsymbol{e}_\theta. \end{equation} This is equivalent with the equation you derived: \begin{equation} \frac{\text{d}\boldsymbol{v}}{\text{d}\theta} = -\frac{\mu}{h}\boldsymbol{e}_r. \end{equation} The significance of this equation is the cornerstone of Feynman's proof, namely that the velocity diagram of a Kepler orbit is a circle. Indeed, integrating Eq. 
$(2)$ yields \begin{equation} \boldsymbol{v} = \frac{\mu}{h}\boldsymbol{e}_\theta + \boldsymbol{w} = \boldsymbol{v}_\text{c} + \boldsymbol{w},\tag{3} \end{equation} where $\boldsymbol{w}$ is a constant vector. Without loss of generality, we can orientate the $(x,y)$ coordinate axes such that $\boldsymbol{w} = (0,w)$, so that $$ v_x = -v_\text{c}\sin\theta,\qquad v_y-w = v_\text{c}\cos\theta, $$ which is indeed the equation of a circle, with radius $v_\text{c}$. And because $\boldsymbol{w}$ is constant, we do have an additional conservation law. The diagrams below illustrate this further: Eq. $(3)$ actually provides us with an elegant method to derive $r(\theta)$. We can combine Eqs. $(1)$ and $(3)$ to get an expression for $\boldsymbol{w}$: \begin{align} \boldsymbol{w} &= \boldsymbol{v} - \frac{\mu}{h}\boldsymbol{e}_\theta = \dot{r}\boldsymbol{e}_r + \left(r\dot{\theta} - \frac{\mu}{h}\right)\boldsymbol{e}_\theta = \dot{r}\boldsymbol{e}_r + \left(\frac{h}{r} - \frac{\mu}{h}\right)\boldsymbol{e}_\theta. \end{align} Multiplying $\boldsymbol{w}$ by $\boldsymbol{e}_x$ and $\boldsymbol{e}_y$ respectively, we obtain \begin{align} \dot{r}\cos\theta - \left(\frac{h}{r} - \frac{\mu}{h}\right)\sin\theta &= 0, \\ \dot{r}\sin\theta + \left(\frac{h}{r} - \frac{\mu}{h}\right)\cos\theta &= w. \end{align} Multiplying these equations by $\sin\theta$ and $\cos\theta$ respectively, and subtracting them, we get \begin{equation} \frac{h}{r} - \frac{\mu}{h} = w\cos\theta, \end{equation} or \begin{equation} r = \frac{h^2/\mu}{1 + e\cos\theta}, \end{equation} with $e = h w /\mu$ the orbital eccentricity. It also follows that \begin{equation} \boldsymbol{v} = \frac{\mu}{h}\left(\boldsymbol{e}_\theta + e\boldsymbol{e}_y\right), \end{equation} which means that $\boldsymbol{v} = \boldsymbol{v}_\text{c}$ for circular orbits. 
The conservation law associated with $\boldsymbol{w}$ is more commonly expressed in terms of the Laplace-Runge-Lenz vector: \begin{equation} \boldsymbol{A} = m^2(\boldsymbol{w} \times \boldsymbol{h}). \end{equation} For more information on the symmetry associated with this vector, see this post.
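As a quick numerical sanity check (not part of the original answer), one can integrate the planar two-body equations and verify that $\boldsymbol{w} = \boldsymbol{v} - (\mu/h)\boldsymbol{e}_\theta$ stays (numerically) constant along the orbit, i.e. that the velocity hodograph really is a circle. A minimal sketch, with arbitrary units and illustrative initial conditions:

```python
import math

mu = 1.0  # gravitational parameter GM (arbitrary units)

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -mu * x / r3, -mu * y / r3

def rk4_step(state, dt):
    # Classical 4th-order Runge-Kutta step for (x, y, vx, vy).
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Elliptic initial conditions: r0 = 1, speed below circular speed.
state = [1.0, 0.0, 0.0, 0.9]
h = state[0] * state[3] - state[1] * state[2]  # specific angular momentum (conserved)

ws = []
for _ in range(20000):
    state = rk4_step(state, 1e-3)
    x, y, vx, vy = state
    th = math.atan2(y, x)
    # w = v - (mu/h) * e_theta; constant along the orbit per Eq. (3)
    wx = vx - (mu / h) * (-math.sin(th))
    wy = vy - (mu / h) * math.cos(th)
    ws.append((wx, wy))

spread = max(max(abs(a - ws[0][0]), abs(b - ws[0][1])) for a, b in ws)
print(f"max deviation of w from its initial value: {spread:.2e}")
```

The deviation is at the level of the integrator error, several orders of magnitude below $|\boldsymbol{w}| = e\mu/h \approx 0.21$ for these initial conditions.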
{ "domain": "physics.stackexchange", "id": 48716, "tags": "classical-mechanics, lagrangian-formalism, orbital-motion, hamiltonian-formalism, celestial-mechanics" }
/proc/net/tcp results filter and converter
Question: Being unable to easily read the output of /proc/net/tcp I made a small C# console app that modifies the output to be in decimal notation instead of the standard hexadecimal, show the corresponding enum states of the state code and added the ability to filter out remote addresses. With the ability to further tweak to your own needs/add custom messages. By default /proc/net/tcp returns the result in the following format: 2: 6400A8C0:A21F 6400A8C0:ADC1 01 00000000:00000000 00:00000000 00000000 10296 0 802001 1 0000000000000000 25 4 0 21 -1 After running it through the assistant the output changes to: 2: 192.168.0.100:41503 192.168.0.100:44481 ESTABLISHED 00000000:00000000 00:00000000 00000000 10296 0 802001 1 0000000000000000 25 4 0 21 -1 The following code (github view) works when run in the adb shell with any connected Android device: using System; using System.Collections.Generic; using System.Diagnostics; using System.Globalization; using System.Linq; using System.Net; using System.Threading; class ProcNetTcpConverter { //These values can either be hardcoded, or left empty. If left empty the programm will ask for the user to input manually. 
static string filePath = @""; internal static string[] remoteIPFilter = new string[] { "" }; static readonly string header = string.Format("{0, 5} {1, 20} {2, 20} {3, 12} {4, 5} {5} {6} {7, 5} {8, 10} {9, 7} {10, 8} {11}", "sl", "local_address", "rem_address", "state", "tx_queue", "rx_queue", "tr", "tm->when", "retrnsmt", "uid", "timeout", "inode"); static readonly string ipRegex = @"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"; static void Main(string[] args) { while (true) { try { Console.SetWindowSize(220, 50); if (filePath.Equals("")) { Console.WriteLine("Please specify file path"); filePath = Console.ReadLine(); } if (remoteIPFilter.Length == 0) { Console.WriteLine("IPv4 Filters (seperated by space, leave blank if none):"); remoteIPFilter = Console.ReadLine().Split(' '); if (remoteIPFilter[0] != "") { IPCheck(); } } Console.WriteLine("Press escape or ^c to pause"); do { while (!Console.KeyAvailable) { RunBatch(); Thread.Sleep(100); } } while (Console.ReadKey(true).Key != ConsoleKey.Escape); Console.WriteLine("\nPress any key to resume"); Console.ReadLine(); } catch (Exception e) { Console.WriteLine(e); Console.ReadLine(); } } } /// <summary> /// Check if the given ip is valid. /// Checks ip recursively after entering new IP. /// </summary> static void IPCheck() { for (int i = 0; i < remoteIPFilter.Length; i++) { var match = System.Text.RegularExpressions.Regex.Match(remoteIPFilter[i], ipRegex, System.Text.RegularExpressions.RegexOptions.IgnoreCase); if (!match.Success) { Console.WriteLine("{0} is not a valid ipv4 address, please re-enter, or leave blank to continue.", remoteIPFilter[i]); remoteIPFilter[i] = Console.ReadLine(); if (remoteIPFilter[i] != "") { IPCheck(); } } } } /// <summary> /// Run a batch script that checks for all open TCP connections on a connected device. 
/// </summary> static void RunBatch() { var process = new Process { StartInfo = { UseShellExecute = false, RedirectStandardOutput = true, FileName = filePath } }; process.Start(); string rawResult = process.StandardOutput.ReadToEnd(); ReplaceHexNotation(rawResult); } /// <summary> /// Repalce the ip address + port that is in hex notation with an ip address + port that is in decimal notation /// </summary> /// <param name="rawResult"></param> static void ReplaceHexNotation(string rawResult) { string[] splitResults = rawResult.Trim().Split('\n'); Console.WriteLine("\n" + splitResults[0]); for (int i = 0; i < splitResults.Length; i++) { if (i == 1) { Console.WriteLine(header); } if (i > 2) { TCPResult result = new TCPResult(splitResults[i]); string message = result.GetMessage(); if (message != null) { Console.WriteLine(message); } } } } /// <summary> /// Convert an ip address + port (format: 00AABB11:CD23) from hex notation to decimal notation. /// Implementation taken from: https://stackoverflow.com/a/1355163/8628766 /// </summary> /// <param name="input">Hex "ip:port" to convert</param> /// <returns>Input ip as decimal string</returns> static internal string ConvertFromInput(string input) { string[] ipPart = input.Split(':'); var ip = new IPAddress(long.Parse(ipPart[0], NumberStyles.AllowHexSpecifier)).ToString(); var port = long.Parse(ipPart[1], NumberStyles.AllowHexSpecifier).ToString(); return ip + ":" + port; } /// <summary> /// Find the index of a character's n'th occurance /// </summary> /// <param name="s">Input string</param> /// <param name="t">Character to check</param> /// <param name="n">Occurance index</param> /// <returns></returns> internal static int GetNthIndex(string s, char t, int n) { int count = 0; for (int i = 0; i < s.Length; i++) { if (s[i] == t) { count++; if (count == n) { return i; } } } throw new IndexOutOfRangeException("GetNthIndex Exception: Index was out of range."); } } class TCPResult { private readonly string tcpResultMessage; enum 
TcpStates { ESTABLISHED = 1, SYN_SENT, SYN_RECV, FIN_WAIT1, FIN_WAIT2, TIME_WAIT, CLOSE, CLOSE_WAIT, LAST_ACK, LISTEN, CLOSING, NEW_SYN_RECV, TCP_MAX_STATES }; /// <summary> /// Recreate the Tcp result message so we can re-format it with the new ip format /// </summary> /// <param name="rawInput">input string</param> internal TCPResult(string rawInput) { rawInput = rawInput.Trim(); List<string> inputList = rawInput.Split(' ').ToList(); inputList.RemoveAll(o => o.Equals("")); string sl = inputList[0]; string local_address = ProcNetTcpConverter.ConvertFromInput(inputList[1]); string rem_address = ProcNetTcpConverter.ConvertFromInput(inputList[2]); TcpStates st = ((TcpStates)Convert.ToInt32(inputList[3], 16)); string tx_queue = inputList[4].Substring(ProcNetTcpConverter.GetNthIndex(inputList[4], ':', 1) - 8, 8); string rx_queue = inputList[4].Substring(ProcNetTcpConverter.GetNthIndex(inputList[4], ':', 1) + 1, 8); string tr = inputList[5].Substring(ProcNetTcpConverter.GetNthIndex(inputList[5], ':', 1) - 2, 2); string tmWhen = inputList[5].Substring(ProcNetTcpConverter.GetNthIndex(inputList[5], ':', 1) + 1, 8); string retrnsmt = inputList[6]; string uid = inputList[7]; string timeout = inputList[8]; string inode = ""; //iNode doens't always have 8 parameters. 
Sometimes it only has 3 if (inputList.Count > 12) { inode = string.Format("{0, 6} {1, 3} {2, 16} {3, 3} {4, 3} {5, 3} {6, 3} {7,3} ", inputList[9], inputList[10], inputList[11], inputList[12], inputList[13], inputList[14], inputList[15], inputList[16]); } else { inode = string.Format("{0, 6} {1, 3} {2, 16} {3, 3} {4, 3} {5, 3} {6, 3} {7,3} ", inputList[9], inputList[10], inputList[11], "-1", "-1", "-1", "-1", "-1"); } //return if the remote address is in the filter array if (ProcNetTcpConverter.remoteIPFilter.Contains(rem_address.Substring(0, rem_address.IndexOf(':')))) { return; } tcpResultMessage = string.Format("{0, 5} {1, 20} {2, 20} {3, 12} {4, 5}:{5} {6}:{7, 5} {8, 10} {9, 7} {10, 8} {11}", sl, local_address, rem_address, st.ToString(), tx_queue, rx_queue, tr, tmWhen, retrnsmt, uid, timeout, inode); if (Convert.ToInt32(tx_queue, 16) > 0 && st == TcpStates.ESTABLISHED)//Indicates this is the current active transmissiting connection { tcpResultMessage += " <= Active transmitting connection"; } else if (Convert.ToInt32(rx_queue, 16) > 0 && st == TcpStates.ESTABLISHED)//Indicates this is the current active receiving connection { tcpResultMessage += " <= Active receiving connection"; } } internal string GetMessage() { return tcpResultMessage; } } Running the following batch script: @echo off echo %time% adb.exe shell cat /proc/net/tcp The program works. But I'm interested to learn if i'm using any bad practises, or doing other things wrong. Especially the TCPResults class/constructor could be implemented a lot better I think, but unable to come up with how myself. Answer: Regular expressions You might consider compiling your regular expressions. ipRegex = new Regex(@"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" , RegexOptions.Compiled); Path normalization Console.WriteLine("Please specify file path"); filePath = Console.ReadLine(); -> append the following method to have some leverage on user input. 
filePath = System.IO.Path.GetFullPath(filePath) New lines rawResult.Trim().Split('\n'); Are you sure you want to use \n? Perhaps this is fine, but consider using Environment.NewLine instead; it depends on how this tool encodes new lines. Separation of concerns The method ReplaceHexNotation performs both tokenizing and outputting to the console. You should extract the algorithm from the output code for better reusability and maintainability.
{ "domain": "codereview.stackexchange", "id": 36318, "tags": "c#, .net, formatting, io, ip-address" }
Losses through holes drilled in a pipe
Question: I asked a previous question about pipe t-junctions with pipes of varying diameters, and it got me wondering... How are losses handled and calculated in pipes with holes drilled directly into them? Flow runs perpendicular to the hole, and most of the theory I can find assumes either flow through a pipe exit where the pipe and orifice are coaxial, or an orifice plate directly in the pipe flow, neither of which seems appropriate. I'm guessing some theory must exist, since common objects like lawn sprinklers contain pipes with many holes drilled into them. I have been thinking about a pipe with one hole of small diameter drilled in it. Essentially we have a free jet exiting perpendicular to the main pipe flow. Answer: So, I attempted to solve a simplified problem using Bernoulli's equation. It is an approximation, though, since it doesn't account for the change in flow direction and the subsequent turbulence caused after the hole. I rearranged equations for volumetric flow rate and head loss. Please let me know if you spot any errors. Note: the last two lines rearrange from an earlier form of the equation to find pressure drop, and SG refers to specific gravity of the fluid.
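The Bernoulli approach above reduces, for a small sharp-edged hole discharging a free jet, to the standard orifice equation $Q = C_d A \sqrt{2\,\Delta P/\rho}$, with the driving $\Delta P$ taken as the static pressure inside the pipe (the flow being perpendicular to the hole). A rough illustrative sketch; the discharge coefficient $C_d \approx 0.62$ for a sharp-edged orifice is an assumed typical value, not taken from the answer:

```python
import math

def orifice_flow(dp_pa, hole_diameter_m, rho=998.0, cd=0.62):
    """Volumetric flow out of a small sharp-edged orifice (free jet).

    Q = Cd * A * sqrt(2 * dP / rho), from Bernoulli between the
    pipe interior (static pressure) and the ambient jet.
    """
    area = math.pi * (hole_diameter_m / 2) ** 2
    return cd * area * math.sqrt(2 * dp_pa / rho)

# Example: 2 bar gauge pressure in the pipe, a 3 mm hole, water.
q = orifice_flow(2e5, 3e-3, rho=998.0, cd=0.62)
print(f"Q = {q * 1000:.3f} L/s")  # leakage through a single hole
```

For a sprinkler pipe with many holes, the same formula would be applied per hole, with the local static pressure decreasing along the pipe as flow bleeds off.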
{ "domain": "engineering.stackexchange", "id": 1931, "tags": "mechanical-engineering, fluid-mechanics, civil-engineering, pipelines, fluid" }
Can't process image using cv_bridge
Question: I'm using a library called 'tpofinder' to recognize textured objects,but it can only work using opencv's videocapture() function,and it's very slow,about 1 fps or less. I'm using CMake to build it,in the same project,the one using videocapture can process image,but the other one using cv_bridge can't process,it just return the origin image. ####The one using videocapture#### //Include headers for OpenCV Image processing #include <opencv2/imgproc/imgproc.hpp> //Include headers for OpenCV GUI handling #include <opencv2/highgui/highgui.hpp> #include <boost/foreach.hpp> #include <boost/format.hpp> #include <iostream> #include "tpofinder/configure.h" #include "tpofinder/detect.h" #include "tpofinder/visualize.h" using namespace cv; using namespace tpofinder; using namespace std; VideoCapture *capture = NULL; bool verbose = true; Mat image; Detector detector; void openCamera() { capture = new VideoCapture(0); if (!capture->isOpened()) { cerr << "Could not open default camera." << endl; exit(-1); } } void loadModel(Modelbase& modelbase, const string& path) { if (verbose) { cout << boost::format("Loading object %-20s ... ") % path; } modelbase.add(path); if (verbose) { cout << "[DONE]" << endl; } } void processImage(Detector& detector) { if (!image.empty()) { cout << "Detecting objects on image ... "; Scene scene = detector.describe(image); vector<Detection> detections = detector.detect(scene); cout << "[DONE]" << endl; BOOST_FOREACH(Detection d, detections) { cout << "Drawing ..."; drawDetection(image, d); cout << "[Done]" << endl; } } } void nextImage() { if (verbose) { cout << "Reading from webcam ... "; } *capture >> image; if (verbose) { cout << "[DONE]" << endl; } } /** * This tutorial demonstrates simple image conversion between ROS image message and OpenCV formats and image processing */ int main(int argc, char **argv) { // TODO: adapt to OpenCV 2.4. 
// TODO: remove duplication // TODO: support SIFT Ptr<FeatureDetector> fd = new OrbFeatureDetector(1000, 1.2, 8); Ptr<FeatureDetector> trainFd = new OrbFeatureDetector(250, 1.2, 8); Ptr<DescriptorExtractor> de = new OrbDescriptorExtractor(1000, 1.2, 8); Ptr<flann::IndexParams> indexParams = new flann::LshIndexParams(15, 12, 2); Ptr<DescriptorMatcher> dm = new FlannBasedMatcher(indexParams); Feature trainFeature(trainFd, de, dm); Modelbase modelbase(trainFeature); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/adapter"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/blokus"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book1"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book2"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book3"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/coffee"); Feature feature(fd, de, dm); Ptr<DetectionFilter> filter = new AndFilter( Ptr<DetectionFilter> (new EigenvalueFilter(-1, 4.0)), Ptr<DetectionFilter> (new InliersRatioFilter(0.30))); Detector detector(modelbase, feature, filter); openCamera(); while (true) { nextImage(); processImage(detector); //Display the image using OpenCV cv::imshow("image processed", image); //Add some delay in miliseconds. The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active. cv::waitKey(3); } //OpenCV HighGUI call to destroy a display window on shut-down. cv::destroyWindow("image processed"); } ####The one using cv_bridge#### //Includes all the headers necessary to use the most common public pieces of the ROS system. #include <ros/ros.h> //Use image_transport for publishing and subscribing to images in ROS #include <image_transport/image_transport.h> //Use cv_bridge to convert between ROS and OpenCV Image formats #include <cv_bridge/cv_bridge.h> //Include some useful constants for image encoding. 
Refer to: http://www.ros.org/doc/api/sensor_msgs/html/namespacesensor__msgs_1_1image__encodings.html for more info. #include <sensor_msgs/image_encodings.h> //Include headers for OpenCV Image processing #include <opencv2/imgproc/imgproc.hpp> //Include headers for OpenCV GUI handling #include <opencv2/highgui/highgui.hpp> #include <boost/foreach.hpp> #include <boost/format.hpp> #include <iostream> #include "tpofinder/configure.h" #include "tpofinder/detect.h" #include "tpofinder/visualize.h" using namespace cv; using namespace tpofinder; using namespace std; bool verbose = true; //Store all constants for image encodings in the enc namespace to be used later. namespace enc = sensor_msgs::image_encodings; //Declare a string with the name of the window that we will create using OpenCV where processed images will be displayed. static const char WINDOW[] = "Image Processed"; //Use method of ImageTransport to create image publisher image_transport::Publisher pub; cv_bridge::CvImagePtr cv_ptr; Detector detector; void loadModel(Modelbase& modelbase, const string& path) { if (verbose) { cout << boost::format("Loading object %-20s ... ") % path; } modelbase.add(path); if (verbose) { cout << "[DONE]" << endl; } } void processImage(Detector& detector) { if (!cv_ptr->image.empty()) { cout << "Detecting objects on image ... "; Scene scene = detector.describe(cv_ptr->image); vector<Detection> detections = detector.detect(scene); cout << "[DONE]" << endl; BOOST_FOREACH(Detection d, detections) { cout << "Drawing ..."; drawDetection(cv_ptr->image, d); cout << "[Done]" << endl; } } } //This function is called everytime a new image is published void imageCallback(const sensor_msgs::ImageConstPtr& original_image) { //Convert from the ROS image message to a CvImage suitable for working with OpenCV for processing try { //Always copy, returning a mutable CvImage //OpenCV expects color images to use BGR channel order. 
cv_ptr = cv_bridge::toCvCopy(original_image, enc::BGR8); } catch (cv_bridge::Exception& e) { //if there is an error during conversion, display it ROS_ERROR("tutorialROSOpenCV::main.cpp::cv_bridge exception: %s", e.what()); return; } processImage(detector); //Display the image using OpenCV cv::imshow(WINDOW, cv_ptr->image); //Add some delay in miliseconds. The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active. cv::waitKey(3); imwrite("2.jpg",cv_ptr->image); /** * The publish() function is how you send messages. The parameter * is the message object. The type of this object must agree with the type * given as a template parameter to the advertise<>() call, as was done * in the constructor in main(). */ //Convert the CvImage to a ROS image message and publish it on the "camera/image_processed" topic. pub.publish(cv_ptr->toImageMsg()); } /** * This tutorial demonstrates simple image conversion between ROS image message and OpenCV formats and image processing */ int main(int argc, char **argv) { /** * The ros::init() function needs to see argc and argv so that it can perform * any ROS arguments and name remapping that were provided at the command line. For programmatic * remappings you can use a different version of init() which takes remappings * directly, but for most command-line programs, passing argc and argv is the easiest * way to do it. The third argument to init() is the name of the node. Node names must be unique in a running system. * The name used here must be a base name, ie. it cannot have a / in it. * You must call one of the versions of ros::init() before using any other * part of the ROS system. */ ros::init(argc, argv, "image_processor"); /** * NodeHandle is the main access point to communications with the ROS system. * The first NodeHandle constructed will fully initialize this node, and the last * NodeHandle destructed will close down the node. 
*/ ros::NodeHandle nh; //Create an ImageTransport instance, initializing it with our NodeHandle. image_transport::ImageTransport it(nh); //OpenCV HighGUI call to create a display window on start-up. cv::namedWindow(WINDOW, CV_WINDOW_AUTOSIZE); // TODO: adapt to OpenCV 2.4. // TODO: remove duplication // TODO: support SIFT Ptr<FeatureDetector> fd = new OrbFeatureDetector(1000, 1.2, 8); Ptr<FeatureDetector> trainFd = new OrbFeatureDetector(250, 1.2, 8); Ptr<DescriptorExtractor> de = new OrbDescriptorExtractor(1000, 1.2, 8); Ptr<flann::IndexParams> indexParams = new flann::LshIndexParams(15, 12, 2); Ptr<DescriptorMatcher> dm = new FlannBasedMatcher(indexParams); Feature trainFeature(trainFd, de, dm); Modelbase modelbase(trainFeature); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/adapter"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/blokus"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book1"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book2"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/book3"); loadModel(modelbase, PROJECT_BINARY_DIR + "/data/coffee"); Feature feature(fd, de, dm); Ptr<DetectionFilter> filter = new AndFilter( Ptr<DetectionFilter> (new EigenvalueFilter(-1, 4.0)), Ptr<DetectionFilter> (new InliersRatioFilter(0.30))); Detector detector(modelbase, feature, filter); /** * Subscribe to the "camera/image_raw" base topic. The actual ROS topic subscribed to depends on which transport is used. * In the default case, "raw" transport, the topic is in fact "camera/image_raw" with type sensor_msgs/Image. ROS will call * the "imageCallback" function whenever a new image arrives. The 2nd argument is the queue size. * subscribe() returns an image_transport::Subscriber object, that you must hold on to until you want to unsubscribe. * When the Subscriber object is destructed, it will automatically unsubscribe from the "camera/image_raw" base topic. 
*/ image_transport::Subscriber sub = it.subscribe("camera/image_raw", 1, imageCallback); //OpenCV HighGUI call to destroy a display window on shut-down. cv::destroyWindow(WINDOW); /** * The advertise() function is how you tell ROS that you want to * publish on a given topic name. This invokes a call to the ROS * master node, which keeps a registry of who is publishing and who * is subscribing. After this advertise() call is made, the master * node will notify anyone who is trying to subscribe to this topic name, * and they will in turn negotiate a peer-to-peer connection with this * node. advertise() returns a Publisher object which allows you to * publish messages on that topic through a call to publish(). Once * all copies of the returned Publisher object are destroyed, the topic * will be automatically unadvertised. * * The second parameter to advertise() is the size of the message queue * used for publishing messages. If messages are published more quickly * than we can send them, the number here specifies how many messages to * buffer up before throwing some away. */ pub = it.advertise("camera/image_processed", 1); /** * In this application all user callbacks will be called from within the ros::spin() call. * ros::spin() will not return until the node has been shutdown, either through a call * to ros::shutdown() or a Ctrl-C. */ ros::spin(); //ROS_INFO is the replacement for printf/cout. ROS_INFO("tutorialROSOpenCV::main.cpp::No error."); } I found the main problem is that the vector "detections" in vector<Detection> detections = detector.detect(scene); is empty,but can't find the reason. I tried drawing a circle using opencv,it can show normally in the cv_bridge one. Then I tested other cameras,but only using opencv's videocapture() can work(recognized the objects and draw something on the image),they are in the same project,so i'm confused,what's the difference between cv_ptr->image and cv::mat image?That's really strange... 
You can find tpofinder's other source here: tpofinder source If necessary, I can provide the full source code... Originally posted by Starsky Wong on ROS Answers with karma: 13 on 2013-05-06 Post score: 0 Answer: You "use" cv_bridge, but you show the images differently from how I normally do: std_msgs::Header h; h.stamp = ros::Time::now(); ..... cv_bridge::CvImage cv_ptr(h, sensor_msgs::image_encodings::BGR8, img); cv::imshow(OPENCV_WINDOW_IMAGE,cv_ptr.image);//cv::imshow(WINDOW,cv_ptr.image); for you Sorry, but your solution takes too much time to be solved in an easy way (too many lines of code), I hope someone else will ;-) Originally posted by pablocesar with karma: 252 on 2016-06-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14073, "tags": "ros, opencv, cv-bridge, uvc-camera" }
How can I see which language type will result from the union or intersection of different language types?
Question: I have to decide which language type will result from the union of a type-2 (context-free) and a type-3 (regular) language. Is there a way or rule to decide this for all language types? Answer: First, a minor point of terminology. In the Chomsky hierarchy, type-2 languages are the context-free family, not the context-sensitive one. I do not know which you meant. In general, the terminology used is family of languages rather than type of language. The fact that some operation $\diamond$ applied to languages in families $\mathcal F_i$ always gives a language in the family $\mathcal F$ is called a closure property. As stated by Hendrik Jan in his answer, there are no general rules to determine closure properties, and they can vary a lot. To complement his examples, context-free (type 2) languages are not closed under intersection: the intersection of two context-free languages may not be a context-free language. However, the intersection or union of a context-free language with a regular language is always context-free (proving it is an excellent exercise). Since your question is ambiguous as to which family you meant, it so happens that the context-sensitive languages are also closed under union or intersection with regular languages. Actually, many (most?) interesting families of languages are closed under union or intersection with regular languages. Though there are no general rules for proving it, there are many other operations one may consider on the languages of a family, which leads to many studies of closure properties of languages. It turns out that closure properties are not independent, because many operations can be defined by composition, or complex combinations, of other operations. This has led to the study of what is usually called Abstract Families of Languages, i.e. families that are closed under the same kinds of operations. But that is now far beyond your question.
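The closure of regular languages under intersection comes from the product construction: run two DFAs in lockstep and accept when both accept. (The same idea, with a PDA in place of one of the DFAs, is the usual proof of the "excellent exercise" above that CFL ∩ regular is context-free.) A minimal sketch in Python:

```python
from itertools import product

def intersect_dfas(d1, d2):
    """Product construction: a DFA accepting L(d1) ∩ L(d2).

    Each DFA is (states, alphabet, delta, start, accepting), with
    delta a dict mapping (state, symbol) -> state.
    """
    q1, sigma, t1, s1, f1 = d1
    q2, _, t2, s2, f2 = d2
    states = set(product(q1, q2))
    delta = {((p, q), a): (t1[(p, a)], t2[(q, a)])
             for (p, q) in states for a in sigma}
    accepting = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return states, sigma, delta, (s1, s2), accepting

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for a in word:
        state = delta[(state, a)]
    return state in accepting

# L1 = words over {a,b} with an even number of a's
d1 = ({0, 1}, {'a', 'b'},
      {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
# L2 = words ending in b (state 1 means "last symbol was b")
d2 = ({0, 1}, {'a', 'b'},
      {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}, 0, {1})

both = intersect_dfas(d1, d2)
print(accepts(both, "aab"))   # even number of a's AND ends in b -> True
print(accepts(both, "ab"))    # odd number of a's -> False
```

Replacing "both accept" with "either accepts" in the accepting-state set gives closure under union by the same construction.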
{ "domain": "cs.stackexchange", "id": 4343, "tags": "formal-languages, regular-languages" }
Does a callback complete before being called again?
Question: Hi! Suppose I have a subscriber with a callback C. C manipulates a number of global variables and then calls a second method M, that builds a message from those global variables and publishes the message. I am now wondering: Is is possible that C and M interfere, i.e., that M uses global variables that have been modified by a different call to C? Or, stated differently, is it possible to call C before the previous call to C has returned? I was not able to figure out if ros::getGlobalCallbackQueue()->callAvailable blocks till completion. If interference in this situation can happen, how does one usually deal with it? Thank you! Originally posted by Jay4Ros on ROS Answers with karma: 35 on 2015-12-08 Post score: 1 Answer: Callback execution is single-threaded, unless you setup things differently, by using a Multithreaded or Asynchronous spinner fi. See also roscpp/Overview/Callbacks and Spinning. Btw, this has been asked (and answered) many times before. A quick search immediately turned up ROS callbacks, threads and spinning fi. Originally posted by gvdhoorn with karma: 86574 on 2015-12-08 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Jay4Ros on 2015-12-08: Hey! I saw the question you linked before, but I must have missed "Single thread spinner: the default one, takes the messages contained in a callback queue and process the callbacks one by one while blocking the execution of the thread that called it." Thanks! Comment by Jay4Ros on 2015-12-08: Oh, I guess if a program is single threaded a callback can only be called one by one.... My bad!
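To illustrate why a single-threaded spinner rules out the interference asked about, here is a toy callback queue in Python (an analogy only — the names are illustrative and this is not the roscpp API): callAvailable-style draining runs each callback to completion, including any method M it invokes on shared globals, before the next callback starts, even if messages arrive from many producer threads.

```python
import queue
import threading

class CallbackQueue:
    """Toy model of a single-threaded callback queue (not the roscpp API)."""

    def __init__(self):
        self._q = queue.Queue()

    def add(self, callback, msg):
        self._q.put((callback, msg))

    def call_available(self):
        # Drains the queue on the *calling* thread: callbacks never overlap.
        while True:
            try:
                cb, msg = self._q.get_nowait()
            except queue.Empty:
                return
            cb(msg)  # runs to completion before the next callback starts

state = {"value": 0, "published": []}

def publish():          # "M": builds output from the shared globals
    state["published"].append(state["value"])

def on_message(msg):    # "C": writes the globals, then calls M
    state["value"] = msg
    publish()

cq = CallbackQueue()
threads = [threading.Thread(target=cq.add, args=(on_message, i)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
cq.call_available()     # serial dispatch, analogous to single-threaded spinning
print(state["published"])  # each entry matches the value its own callback wrote
```

With a multithreaded or asynchronous spinner the equivalent of call_available runs on several threads at once, and C and M would then need explicit locking around the shared state.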
{ "domain": "robotics.stackexchange", "id": 23176, "tags": "ros, thread, callback" }
Why doesn't my perceptron train well, and why does it produce bad results on test data?
Question: I am a newbie in Machine learning and I am writing a small code for a Perceptron. This is the first time I am writing code in Python. I have four training data points (X). As they are used for supervised learning, each data point has its corresponding correct output (D). I have implemented SGD and used the generalized Delta rule (wij ← wij + α δixj). I have trained my perceptron 10,000 times (epochs = 10,000). Although everything looks fine to me, I don't get the right results when I test it with test values. I need some suggestions so that I can improve my results on test data. P.S. How can I improve this code? Code

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def Delta_SGD(W, X, D):
    N = 4
    for x in range(N):
        v1 = np.dot(X[x][0], W[0])
        v2 = np.dot(X[x][1], W[1])
        v3 = np.dot(X[x][2], W[2])
        # weighted sum
        V = v1 + v2 + v3
        # output of neuron
        y = sigmoid(V)
        # error
        e = D[x] - y
        # derivative of sigmoid(y)
        delta = (y) * (1 - y) * e
        # Delta rule
        DW = alpha * delta * X[x]
        # updated weights
        W[0] = W[0] + DW[0]
        W[1] = W[1] + DW[1]
        W[2] = W[2] + DW[2]
    return W

# input data points
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
# correct output pairs
D = np.array([[0, 0, 1, 1]]).T
# learning rate
alpha = 0.9
# random weights
W = 2 * np.random.random((3, 1)) - 1

# 10000 epochs
for epoch in range(10000):
    W = Delta_SGD(W, X, D)
    print(epoch)

# final weights after all epochs
print("Final weights are \n", W)

# testing network
N = 4
for x in range(N):
    v1 = np.dot(X[x][0], W[0])
    v2 = np.dot(X[x][1], W[1])
    v3 = np.dot(X[x][2], W[2])
    V = v1 + v2 + v3
    y = sigmoid(V)
    print("output of neuron is \n ", y)

Answer: Two points about the whole thing. You did not test yet. The point behind the training process is to make the machine able to learn from the data, conditioned on the ability to generalize this to predicting samples which it has not seen before. Otherwise, the apparently good training is actually overfitting.
So here you trained on X, and you need to create new samples and check the result to really call it testing (this is an introductory explanation). Machine Learning is a lot about features! Playing with features and cleaning, modifying and filtering them is a key point. In your example, the last dimension of your 3d data is always 1. Does it distinguish anything? (In my course you get a complete explanation of this in Lecture 2.) So that feature (dimension, element of the vector) can/should be removed. To understand this better, imagine the 3d spread of your data. The z axis is always 1, which means the topology of the points is what you see in the x-y plane. So use only that one.
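To illustrate the answer's first point, here is a sketch of a proper held-out evaluation for the question's data: train on three of the four points and evaluate on the fourth, which the model never saw. All names and the compact weight update below are illustrative, not taken from the question's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
D = np.array([0, 0, 1, 1], dtype=float)   # target happens to equal the first feature

# Hold one sample out: train on three points, test on the unseen fourth.
X_train, d_train = X[:3], D[:3]
x_test, d_test = X[3], D[3]

w = rng.normal(size=3)
alpha = 0.9
for _ in range(10000):
    for x, d in zip(X_train, d_train):
        y = sigmoid(w @ x)
        # generalized delta rule: w += alpha * delta * x
        w += alpha * (d - y) * y * (1 - y) * x

y_test = sigmoid(w @ x_test)
print(round(float(y_test), 3), "target:", d_test)
```

Because the target is linearly separable here, the held-out prediction should land near the correct class; with harder data the gap between training and held-out accuracy is exactly the overfitting the answer warns about.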
{ "domain": "datascience.stackexchange", "id": 3975, "tags": "python, numpy, perceptron" }
Is "Light Field" a useful concept in mathematics or physics, or just in marketing?
Question: The company lytro.com has an Illum and an Immerge [Light Field Camera] which presumably photograph Light Fields. If I understand correctly the term light field refers to a 5D concept - at each point in 3D space there are rays of light from all directions. Those directions use 2 more dimensions (e.g. $(\theta, \phi)$) and $2+3=5$. This could be called the 5D plenoptic function. I think at each point in this 5D space you have some kind of spectral intensity information. It could be RGB values or a complete spectrum, and would have units of (for example) power per unit area, per unit solid angle, per unit wavelength. Is this just a name for a thing, or are there real, substantial mathematical things you can do with a light field? For example, if I know the Electric and Magnetic fields in a volume, I suppose I could calculate the resulting light field in a straightforward way. But what can I actually DO with a light field besides market it? Related question and answer, and a random popular article that repeats the term light field over and over. Answer: The short answer is this: yes, the light field is useful in geometric optics, where you can simulate light as though it were a bunch of classical particles moving on straight lines with definite positions and momenta. This is wrong, since light consists of excitations in a field, but it works as an approximation. This is the idea behind the framework used in astronomy called radiative transfer, and ray tracing optics in computer graphics. So, a phase space number density of photons can be written as $\rho(\mathbf{x}, \mathbf{p})$, a density over real space, which is 3D, and momentum space, which is also 3D. The magnitude of $\mathbf{p}$ carries the spectral information, and the direction of $\mathbf{p}$ is the direction the light is traveling.
This is related to the spectral radiance, $I_\nu$, by: $$\begin{align}I_\nu(\mathbf{x},\theta, \phi, \nu) &= hcp^3 \rho(\mathbf{x}, \mathbf{p}) \\ &= \frac{h^4 \nu^3}{c^2} \rho(\mathbf{x}, \mathbf{p}), \end{align}$$ where $\mathbf{p} = [ \sin \theta \cos \phi,\ \sin\theta \sin \phi,\ \cos\theta] \times h\nu /c.$ As for what it's useful for in a camera the answer is: it depends. You're engaging in a trade-off where spatial resolution on the chip is sacrificed for improved resolution in other ways. For example, integral field spectrographs use the lost spatial resolution to encode spectral information. In the case of light field cameras, they're using it to encode direction of travel for the light. Roughly, this is equivalent to taking many different images at different focuses at the same time, so you can do fun depth of field tricks with Photoshop by combining them, or just selecting the one with the desired focus.
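As a quick sanity check of the conversion above, the two quoted prefactors $hcp^3$ and $h^4\nu^3/c^2$ must agree when $p = h\nu/c$. A short numerical verification, with constants hardcoded for illustration:

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

nu = 5.0e14          # a visible-light frequency, Hz
p = h * nu / c       # photon momentum magnitude, kg m/s

# The two equivalent prefactors from the answer's equation:
a = h * c * p**3
b = h**4 * nu**3 / c**2
print(abs(a - b) / b)  # ~0 up to floating-point rounding
```

Algebraically, $hc\,p^3 = hc\,(h\nu/c)^3 = h^4\nu^3/c^2$, so any discrepancy here is pure rounding error.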
{ "domain": "physics.stackexchange", "id": 33378, "tags": "optics, geometric-optics" }
How does AlphaZero use its value and policy heads in conjunction?
Question: I have a question about how the value and policy heads are used in AlphaZero (not AlphaGo Zero), and where the leaf nodes are relative to the root node. Specifically, there seem to be several possible interpretations: (1) Policy estimation only. This would be most similar to DFS, in which an average evaluation is computed through MCTS rollouts (though as others have noted the AlphaZero implementation actually seems to be deterministic apart from the exploration component, so 'rollout' may not be the most appropriate term) after reaching the leaf. Here each leaf node would be at the end of the game. (2) Value estimation only. It seems that if the value network is to be used effectively, there should be a limit on the depth to which any position is searched, e.g. 1 or 2 ply. If so, what should the depth be? (3) They are combined in some way. If I understand correctly, there is a limit on the maximum number of moves imposed - so is this really the depth? By which I mean, if the game has still not ended, this is the chance to use the value head to produce the value estimation? The thing is that the paper states that the maximum number of moves for chess and shogi games was 512, while it was 722 moves for Go. These are extremely deep - evaluations based on these seem to be rather too far from the starting state, even when averaged over many rollouts. My search for answers elsewhere hasn't yielded anything definitive, because they've focused more on one side or the other. For example, at https://nikcheerla.github.io/deeplearningschool/2018/01/01/AlphaZero-Explained/ the emphasis seems to be on the value estimation. However, in the AlphaZero pseudocode, e.g. from https://science.sciencemag.org/highwire/filestream/719481/field_highwire_adjunct_files/1/aar6404_DataS1.zip the emphasis seems to be on the policy selection. Indeed, it's not 100% clear if the value head is used at all (value seems to return -1 by default). Is there a gap in my understanding somewhere? Thanks!
Edit: To explain this better, here's the bit of pseudocode given that I found slightly confusing:

class Network(object):

    def inference(self, image):
        return (-1, {})  # Value, Policy

    def get_weights(self):
        # Returns the weights of this network.
        return []

So the (-1, {}) can either be placeholders, or -1 could be an actual value and {} a placeholder. My understanding is that they are both placeholders (because otherwise the value head would never be used), but -1 is the default value for unvisited nodes (this interpretation is taken from here, from the line about the First Play Urgency value: http://blog.lczero.org/2018/12/alphazero-paper-and-lc0-v0191.html). Now, if I understand correctly, inference is called both during training and playing by the evaluate function. So my core question is: how deep into the tree are the leaf nodes (i.e. where the evaluate function would be called)? Here is the bit of code that confused me. In the official pseudocode as below, the 'rollout' seems to last until the game is over (expansion stops when a node has no children). So this means that under most circumstances you'll have a concrete game result - the player to move doesn't have a single move, and hence has lost (so -1 also makes sense here).

def run_mcts(config: AlphaZeroConfig, game: Game, network: Network):
    root = Node(0)
    evaluate(root, game, network)
    add_exploration_noise(config, root)

    for _ in range(config.num_simulations):
        node = root
        scratch_game = game.clone()
        search_path = [node]

        while node.expanded():
            action, node = select_child(config, node)
            scratch_game.apply(action)
            search_path.append(node)

        value = evaluate(node, scratch_game, network)
        backpropagate(search_path, value, scratch_game.to_play())
    return select_action(config, game, root), root

But under such conditions, the value head still doesn't get very much action (you'll almost always return -1 at the leaf nodes). There are a couple of exceptions to this.
(1) When you reach the maximum number of allowable moves - however, this number is a massive 512 for chess & shogi and 722 for Go, and seems to be too deep to be representative of the 1-ply positions, even averaged over MCTS rollouts. (2) When you are at the root node itself - but the value here isn't used for move selection (though it is used for the backprop of the rewards). So does that mean that the value head is only used for the backprop part of AlphaZero (and for super-long games)? Or did I misunderstand the depth of the leaf nodes? Answer: I am also a bit confused by your wording but I will try to clear some things up. During MCTS the policy head is used to guide the search, while the value head is used as a replacement for rollouts to estimate how good the game position looks. One iteration of the search procedure in MCTS finds a new leaf node which has not been evaluated by the network yet. This leaf node does not have to be a terminal state of the actual game. In fact, in the rare cases towards the end of the game where it IS a terminal state, the network evaluation can be skipped and the real value is used instead. In the code you find slightly confusing, -1 and {} are both placeholders. The value -1 is a placeholder for the value head output and {} a placeholder for the policy head output of the neural network evaluation. The policy head output should be a vector of values (one value for each child of the node). It is not related to FPU at all; the code you linked uses FPU 0.

def value(self):
    if self.visit_count == 0:
        return 0
    return self.value_sum / self.visit_count

The maximum numbers of moves (512 for chess and 722 for Go) you mention are used when creating training games to train the neural network. The current network is playing against itself and the goal is to create many useful training samples, so the games are cut off if they take too long (lots of repetition in chess for example).
Note that for chess the actual maximum number of moves is 5899, or infinite if claiming a draw is not mandatory. Maybe you will find these illustrations regarding AlphaZero's MCTS search useful: http://tim.hibal.org/blog/alpha-zero-how-and-why-it-works/
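The leaf-evaluation rule described in the answer can be sketched as follows. The `Game` and `Network` stubs here are purely illustrative stand-ins, not the actual AlphaZero classes: at a terminal leaf the exact game result is backed up, otherwise the value head's estimate is used in place of a rollout.

```python
class Game:
    """Minimal stand-in for a game state."""
    def __init__(self, result=None):
        self._result = result        # None while the game is still running
    def terminal(self):
        return self._result is not None
    def terminal_value(self, to_play):
        return self._result
    def to_play(self):
        return 0
    def make_image(self):
        return None                  # would be the network input tensor

class Network:
    """Minimal stand-in for the two-headed network."""
    def inference(self, image):
        return 0.3, {}               # (value-head output, policy-head output)

def evaluate_leaf(game, network):
    # Terminal leaf: use the exact game result.
    # Non-terminal leaf: the value head stands in for a rollout.
    if game.terminal():
        return game.terminal_value(game.to_play())
    value, _policy = network.inference(game.make_image())
    return value

print(evaluate_leaf(Game(), Network()))      # non-terminal: value head, 0.3
print(evaluate_leaf(Game(-1), Network()))    # terminal loss: -1
```

The backed-up value is then averaged into `value_sum / visit_count` along the search path, which is exactly the `value()` snippet quoted in the answer.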
{ "domain": "ai.stackexchange", "id": 1245, "tags": "reinforcement-learning, alphazero" }
Why do lines in atomic spectra have thickness? (Bohr's Model)
Question: Consider the atomic spectrum (absorption) of hydrogen. The Bohr model postulates that there are only certain fixed orbits allowed in the atom. An atom will only be excited to a higher orbit if it is supplied with light that precisely matches the difference in energies between the two orbits. But how precise is 'precisely'? Of course, if we need energy $E$ to excite the electron to a higher energy level, and I supply a photon with just $E/2$, I would expect nothing to happen (since the electron cannot occupy an orbit between the allowed ones). But what if I supplied a photon with energy $0.99E$, or $1.0001E$, or some such number? What will happen then? I think that the electron should still undergo excitation, precisely because the lines we observe in the line spectrum have some thickness, which means that for a given transition the atom absorbs frequencies in a certain range. Is my reasoning correct? If not, why? How does Bohr's model explain this? How about modern theory? If I'm right, what is the range of values that an atom can 'accept' for a given transition? Answer: According to the Bohr model, the absorption and emission lines should be infinitely narrow, because there is only one discrete value for the energy. There are a few mechanisms that broaden the line width - natural line width, Lorentz pressure broadening, Doppler broadening, Stark and Zeeman broadening, etc. Only the first one isn't described in Bohr theory - it's clearly a quantum effect, a direct consequence of the time-energy uncertainty principle: $$\Delta E\Delta t \ge \frac{\hbar}{2}$$ where $\Delta E$ is the energy uncertainty, and $\Delta t$ is the decay time of the state. Most excited states have lifetimes of $10^{-8}-10^{-10}\,\mathrm{s}$, so the uncertainty in the energy slightly broadens the spectral line, by on the order of $10^{-4}\,\mathrm{Å}$.
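The quoted order of magnitude of $10^{-4}\,\mathrm{Å}$ can be checked numerically. This sketch assumes, for illustration, a visible transition at 500 nm and an excited-state lifetime of $10^{-8}$ s:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J s
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

tau = 1e-8               # assumed excited-state lifetime, s
lam = 500e-9             # assumed transition wavelength, m

dE = hbar / (2 * tau)            # energy uncertainty from dE * dt >= hbar/2
dlam = lam**2 * dE / (h * c)     # corresponding wavelength spread, |dlam| = lam^2 dE / (h c)
print(dlam / 1e-10, "angstrom")  # a few times 1e-5 angstrom
```

With the shorter lifetimes of $10^{-10}$ s the spread grows by two orders of magnitude, which brackets the $10^{-4}\,\mathrm{Å}$ figure in the answer.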
{ "domain": "physics.stackexchange", "id": 20785, "tags": "quantum-mechanics, atomic-physics, spectroscopy, doppler-effect, absorption" }
Does data anonymization conflict with GDPR rules?
Question: There are GDPR articles that relate to a person's ownership of their data, e.g., Art. 17 GDPR Right to erasure ('right to be forgotten') and Art. 20 GDPR Right to data portability. If one were to anonymize the data without a way to "restore" the relation between the person (name + e-mail address) and their data (which in turn would allow handling of the person-specific data), I'd say this would conflict with these GDPR articles. Are there data anonymization techniques that allow one to "restore" the relation to name + contact e-mail after the data has been anonymized? This would allow satisfying these GDPR rules. Answer: Formally speaking, this is clarified in GDPR Recital 26, unofficially titled Not Applicable to Anonymous Data: The principles of data protection should therefore not apply to anonymous information, namely, information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not, therefore, concern the processing of such anonymous information, including for statistical or research purposes. Informally speaking, the claim that data anonymization would violate the data subject's rights to data erasure and portability, hence we should seek to use reversible anonymization techniques, sounds awkward and against the very spirit of GDPR; and the official interpretation regarding anonymization is very clear: Effective data anonymization is made up of two parts: It is irreversible. It is done in such a way that it is impossible (or extremely impractical) to identify the data subject.
In other words:
- If a subject's data have been effectively anonymized, they are no longer personal data, hence they are no longer governed by GDPR; consequently, Articles 17 & 20 are not applicable, and this does not constitute any conflict.
- If any personal data used as a source for the anonymized ones remain, they are subject to GDPR; data subjects can exercise their right to erasure and portability for these non-anonymized data.
- If the personal data used as a source for the anonymized ones are already erased (possibly in compliance with GDPR), then neither the right to erasure nor the right to portability are applicable anymore, and this does not constitute any kind of conflict either.
Notice that the fact that ensuring effective anonymization is not as clear-cut as it may sound has also been legally recognized in the Opinion 05/2014 on Anonymisation Techniques of the Data Protection Working Party: Thus, anonymisation should not be regarded as a one-off exercise and the attending risks should be reassessed regularly by data controllers.
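The reversible/irreversible distinction in the answer can be illustrated with a toy sketch (illustrative only; real anonymization must also consider re-identification from the remaining fields): a keyed lookup table is pseudonymization and can be reversed, while a salted hash with the salt discarded cannot.

```python
import hashlib
import secrets

record = {"name": "Jane Doe", "email": "jane@example.com", "order": "pizza"}

# Pseudonymization: a stored mapping lets you restore the identity,
# so the result is still personal data under GDPR.
pseudonym_map = {}
token = secrets.token_hex(8)
pseudonym_map[token] = (record["name"], record["email"])
pseudonymized = {"id": token, "order": record["order"]}

# Anonymization attempt: a salted hash with the salt thrown away leaves
# no practical way back to the identity (assuming the remaining fields
# cannot re-identify the person on their own).
salt = secrets.token_bytes(16)
digest = hashlib.sha256(salt + record["email"].encode()).hexdigest()
del salt  # the mapping is gone for good
anonymized = {"id": digest, "order": record["order"]}

print(pseudonym_map[pseudonymized["id"]])  # identity restorable
print(anonymized["order"])                 # identity not restorable
```

This is exactly why pseudonymized records remain in scope for Articles 17 & 20 while effectively anonymized ones do not.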
{ "domain": "datascience.stackexchange", "id": 8445, "tags": "anonymization" }
Doesn't the acquired charge on the insulator also get polarized?
Question: I was wondering about experiments such as a comb attracting tiny paper pieces, or a charged balloon sticking to the wall. The comb or balloon acquires negative charge that is localized on the surface, and the electric field due to that negative charge polarizes the paper/wall, which results in attraction. My doubt was: won't the polarization of the paper/wall in turn affect the distribution of negative charge on the comb/balloon? Paper and walls are also insulators, but they get polarized - so why not the comb or balloon, for that matter? It would be great if anybody could help me out, considering I have just started electrostatics. Answer: Although induced dipoles can experience a force in an exterior field (that of the balloon or comb), they are very bad at generating a field themselves. This is due to the fact that the fields of two opposite charges in close proximity (i.e. in a dipole) almost cancel. As a result, the electric field strength of a dipole is proportional to $1/r^3$ while the electric field strength of a monopole (single charge) is proportional to $1/r^2$, where $r$ is the distance from the source. This is the reason why you can often neglect the fields of dipoles if significant net charge is also present at the same time. Nevertheless, there actually is an effect, even if it is small. In the case of the balloon: it will mainly be polarized by its own charges, but it will also be polarized (to a very small extent) by the dipoles in the wall. This is why, strictly speaking, you can only solve the problem with both media being polarizable, which is done by Maxwell's equations in matter. By the way, dipoles only experience a force in an inhomogeneous field. For the localized charges of the balloon or comb this inhomogeneity is given. But if you ever deal with dipoles in a homogeneous field (e.g. a capacitor), don't be surprised to find no force. In a homogeneous field, dipoles only experience torque, and only if they are oriented at an angle to the field.
This could be the case, e.g., for permanent dipoles or induced dipoles in a non-isotropic medium.
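The $1/r^3$ versus $1/r^2$ falloff from the answer can be checked numerically with point charges; the values below are arbitrary illustrative choices. Doubling the distance cuts a monopole field by a factor of 4, but an on-axis dipole field by roughly 8:

```python
k = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
q = 1e-9             # charge magnitude, C
d = 1e-3             # dipole separation, m (small compared to r)

def monopole(r):
    # field of a single point charge q
    return k * q / r**2

def dipole(r):
    # exact on-axis field of +q at +d/2 and -q at -d/2
    return k * q / (r - d/2)**2 - k * q / (r + d/2)**2

for r in (0.1, 0.2):
    print(f"r={r}: monopole {monopole(r):.3e}  dipole {dipole(r):.3e}")

# doubling r: monopole falls by ~4, dipole by ~8
print(monopole(0.1) / monopole(0.2), dipole(0.1) / dipole(0.2))
```

The dipole field is also far weaker in absolute terms at these distances, which is the answer's point about neglecting induced dipoles when net charge is present.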
{ "domain": "physics.stackexchange", "id": 78097, "tags": "electrostatics, charge, coulombs-law, dielectric" }
Is the electron's ability to produce mechanical motion considered the only reason for it to be a material particle?
Question: "Cathode rays (streams of electrons) produce mechanical motion of a small paddle wheel placed in their path, indicating that they are material particles."$_1$ Isn't this statement wrong? Assuming electrons to be material particles, "if we use light to observe the position of an electron, photons can transfer momentum to the electron at the time of collision"$_2$. Isn't the photon here causing mechanical motion of the electron? Then, by the above reasoning for considering the electron a material particle, we could also consider the photon to be a material particle, couldn't we? I don't think the photon is considered to be a material particle. Is it a material particle? So, is the first quote correct that the electron can be considered a material particle only because of its ability to produce mechanical motion? Reference: $_1$ Principles of Physical Chemistry - Puri, Sharma, Pathania - Page No. 21. $_2$ Modern's abc of Chemistry - Dr. S.P. Jauhar - Page No. 169. To my knowledge the quotes may be lightly reworded without loss of meaning, and page numbers may change depending on the edition. Answer: Short-short version: We can measure the mass of electrons. They have some. Surely that is enough. Details: To perform the measurement, obtain a beam of electrons of known energy. You can know the energy either by knowing the accelerating potential or by measuring the energy with a velocity selector. Pass the beam through a region of known and constant magnetic field but zero electric field, and note the radius of curvature. This is the method used by most mass spectrometers, albeit tuned for much more massive projectiles. You must also know the value of the charge on the electron. Millikan's method will work but is difficult to do well.
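The measurement described in the answer boils down to $r = mv/(qB)$, i.e. $m = qBr/v$. A numerical sketch with illustrative beam parameters (here the radius is generated from the known electron mass, and then the mass is "measured" back from that radius):

```python
q = 1.602176634e-19      # elementary charge, C
m_e = 9.1093837015e-31   # reference electron mass, kg

v = 1.0e7                # selected beam speed, m/s (illustrative)
B = 1.0e-3               # magnetic field strength, T (illustrative)

# Circular motion in a magnetic field: qvB = m v^2 / r  =>  r = m v / (q B)
r = m_e * v / (q * B)          # radius the beam would trace out
m_measured = q * B * r / v     # what the experimenter infers from r
print(f"radius {r*100:.2f} cm -> mass {m_measured:.4e} kg")
```

A radius of a few centimetres for these parameters is comfortably measurable, which is why the method works so well in practice.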
{ "domain": "physics.stackexchange", "id": 22013, "tags": "particle-physics, material-science" }
Creating an RVIZ plugin to add semantic information to a map
Question: I repeat this question posted 2 years ago, except that we need a plugin for a different purpose. We plan to create a graphical editor to add semantic information to 2D/3D maps, and an RViz plugin could be the ideal solution. Rqt plugins look great for simple interfaces, but we need the fully-fledged 3D interface that RViz provides. So is it still not recommended to make RViz plugins? I already started a plugin, reusing rqt_nav_view to load a 2D map, but I'm afraid that making it fully 3D will be a lot of development effort, and I don't have enough time. Thanks for any input! Originally posted by jorge on ROS Answers with karma: 2284 on 2014-08-23 Post score: 1 Answer: The RViz plugin API is stable now, and it's possible to write display plugins for specialized data types. I maintain a number of custom rviz plugins as part of my day job. The rviz wiki page has tutorials for writing new display types, panels and tools. Originally posted by ahendrix with karma: 47576 on 2014-08-23 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jorge on 2014-08-23: Excellent! Thanks for the information.
{ "domain": "robotics.stackexchange", "id": 19164, "tags": "navigation, mapping, rviz, plugin, rqt" }
Is it possible for someone to set up ROS for me?
Question: And have it be for Windows? What about for Windows XP? Originally posted by ADVANCESSSS on ROS Answers with karma: 21 on 2016-01-13 Post score: -2 Answer: I don't know if it's possible to get ROS working on Windows. But you can find some answers here: or by using a virtual machine on your Windows system: http://www.robotappstore.com/Knowledge-Base/ROS-Installation-for-Windows-Users/137.html Originally posted by F.Brosseau with karma: 379 on 2016-01-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 23435, "tags": "ros" }
Complexity of testing if a number is a primorial
Question: For the $n$-th prime number $p_n$, the primorial $p_n\#$ is defined as the product of the first $n$ primes. What is the complexity of testing if a given number $N$ is a primorial? Is it related in some way to the FACTORING problem? Answer: This is simpler than factoring. You can simply multiply successive prime numbers up to the point where the running product (a) reaches $N$ or (b) becomes larger than $N$. In the first case you answer yes, otherwise you answer no. The reason it is simpler is that the prime factorization of a primorial is trivial, whereas in the FACTORING problem you don't know how many of each prime number you need.
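The procedure in the answer translates directly into code. A minimal sketch (trial division is used to enumerate primes, which is fine for illustration since the primes involved stay tiny relative to the input):

```python
def is_primorial(n):
    """Multiply successive primes until the running product
    reaches or exceeds n; n is a primorial iff we hit it exactly."""
    if n < 2:
        return False
    product = 1
    candidate = 2
    while product < n:
        # advance to the next prime by trial division
        while any(candidate % p == 0 for p in range(2, int(candidate**0.5) + 1)):
            candidate += 1
        product *= candidate
        candidate += 1
    return product == n

print([m for m in range(1, 250) if is_primorial(m)])  # [2, 6, 30, 210]
```

Since the primorials grow faster than exponentially, only about log N primes are ever multiplied, so the whole test is polynomial in the bit length of the input.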
{ "domain": "cs.stackexchange", "id": 909, "tags": "complexity-theory, reference-request" }
Principle of Sufficient Reason on light travelling in straight line
Question: I was reading a book Laws and Symmetry by Bas C. Van Fraassen I found that there is an argument for arguing that light travel in straight line: Leibniz's reconstruction of these arguments goes roughly like this. Let it be given that the light travels from point A to point B; demonstrate that its path will be the straight line AB, if these points lie within an entirely homogeneous medium. This is the problem; how does one approach its solution? The problem itself introduces a geometric figure: the pair of points A, B, and the direction from A to B. To define any other direction in space, not along the line AB, one would need to refer to some other point, line, plane or figure, which has not been introduced in the given. Any rule governing the motion of light in this case must therefore either (a) imply that light follows the line AB, or (b) draw upon some feature X of the situation which could single out a direction not along that line. But the assumption that the medium is homogeneous, rules out the presence of such a feature X to be drawn upon. Therefore. . . . We cannot quite yet conclude that therefore light travels along the straight line. As Leibniz clearly perceived, we need some bridge to get us from the fact that we could not possibly formulate any other rule here to the conclusion that light—a real phenomenon in nature—must follow that rule. The bridge, for Leibniz, is that the world was created by Someone who was in the position of having to solve these problems in the course of creation, and who would not choose without a reason for choice. If there was no such feature X to be preferred, obviously none of the choices of type (b) could then have been made. That God does not act without sufficient reason, implies that any correct science of nature, satisfies the constraint of Sufficient Reason. 
In the above problem, the conclusion that we cannot formulate any rule for the motion of light under these conditions, except that of rectilinear motion, yields then the corollary that light can follow no other path. The Principle of Sufficient Reason is introduced to fill the gap in the sort of arguments (the above, and also Hero's and Fermat's) here represented, and is in turn grounded in a certain conception of creation. From my understanding, the basic idea is that when there are only two points $A,B$ defined in a homogeneous space (i.e. every point is the same), we can only draw a straight line between $A$ and $B$. I was not a physics student, but I think of Euclid's postulates: given any two points, besides drawing a straight line (Postulate 1), we can also draw a circle using $AB$ as radius (Postulate 3). Therefore, for light starting from point $A$ and given another point $B$, the light could in principle travel around $B$ in orbit (a circle). Is this argument flawed? Can I conclude the possible existence of a photon sphere without reference to any theory of gravity? Answer: Your own source says: Let it be given that the light travels from point A to point B. The whole argument is prefaced with the restriction that we have light emitted from A and received at B. In your example, you want light to start at A and go around B, which is an entirely different thing altogether and has nothing to do with Leibniz. Note also that Leibniz really can only claim to show that the photon path must be axisymmetric about the line AB. Only if we take a photon to be perfectly localized in space and time does this then mean that it travels along the straight line in question. Back to your claim of anticipating the photon sphere: the argument "X isn't obviously wrong, therefore X possibly exists" isn't really substantive, physically or philosophically.
In fact, unless you have a definition of "possibly exists" that is somehow more strict than "isn't obviously wrong," this line of reasoning hasn't shown anything at all. For a more extreme (but qualitatively identical) situation, imagine someone proposing "I assume the Pythagorean theorem, and I conceptualize the Higgs Boson, and I see no conflict between the two, so the Higgs Boson possibly exists." Sure the Higgs Boson possibly exists, and you knew that the moment you conceptualized it. Checking that the Pythagorean theorem doesn't disallow the Higgs Boson doesn't really accomplish much. Moreover, if you know nothing other than the Pythagorean theorem, it's questionable how much your concept of the Higgs Boson is what others take it to be.
{ "domain": "physics.stackexchange", "id": 21839, "tags": "visible-light, symmetry, geodesics" }
Get js0 data without joy publishing
Question: I'm trying to protect the input data stream into my robot, but not by subscribing to joy. Rather I'd like to add joy code into my cpp or just directly get the info from /dev/input/js0. I am using ROS Kinetic. I've searched and haven't been able to quite find the answer that will work with ROS. I appreciate any help, and would be happy to answer any questions to help. Originally posted by jamesR on ROS Answers with karma: 1 on 2018-08-14 Post score: 0 Original comments Comment by gvdhoorn on 2018-08-14: Well: my first question would be: what do you hope to gain by that? I'm particularly interested in your "I'm trying to protect the input data stream to my robot" remark there. Comment by jarvisschultz on 2018-08-14: You could always start by modifying the source code from the joystick_drivers package to do what you desire with the /dev/js0 data instead of publishing it to /joy Comment by PeteBlackerThe3rd on 2018-08-15: Are you trying to 'hide' the joystick values from the wider ROS system? Answer: Are you sure that you can not do what you need by setting the device or device name parameters in joy? You might also consider combining this with some udev rules. Originally posted by qnetjoe with karma: 30 on 2018-08-14 This answer was ACCEPTED on the original site Post score: 0
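For reading /dev/input/js0 directly, the Linux joystick interface delivers fixed-size 8-byte events. This sketch parses them with the struct module; the device-reading part is shown only as a comment, since it needs real hardware:

```python
import struct

# Linux joystick events are 8 bytes: u32 timestamp (ms), s16 value,
# u8 event type (0x01 button, 0x02 axis, 0x80 init flag), u8 index.
JS_EVENT_FORMAT = "IhBB"
JS_EVENT_SIZE = struct.calcsize(JS_EVENT_FORMAT)

def parse_event(buf):
    timestamp, value, ev_type, number = struct.unpack(JS_EVENT_FORMAT, buf)
    return {"time": timestamp, "value": value,
            "type": ev_type & ~0x80,  # strip the init flag
            "number": number}

# In a real node you would read from the device, e.g.:
#   with open("/dev/input/js0", "rb") as dev:
#       while True:
#           event = parse_event(dev.read(JS_EVENT_SIZE))
#           ...  # use event directly instead of subscribing to /joy

# Synthetic event for illustration: axis 1 pushed to -32767 at t=1234 ms
buf = struct.pack(JS_EVENT_FORMAT, 1234, -32767, 0x02, 1)
print(parse_event(buf))  # {'time': 1234, 'value': -32767, 'type': 2, 'number': 1}
```

This is essentially what the joystick_drivers `joy` node does before publishing, so adapting its source (as suggested in the comments) gives the same data without the intermediate topic.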
{ "domain": "robotics.stackexchange", "id": 31539, "tags": "ros, ros-kinetic, joy-node, joystick, joy" }
Boundary Terms in Matrix Elements
Question: Recently I have become confused about how boundary terms in the action affect transition amplitudes. Ultimately I am concerned about field theory, but I think my confusion can be demonstrated with a free point particle example as well. Consider the matrix element $$ \,_{H}\langle x_2, t_2|x_1, t_1\rangle_{H} = \,_S\langle x_2|e^{iH(t_2 - t_1)}|x_1\rangle_S$$ where $H$ and $S$ denote Heisenberg and Schrödinger vectors, respectively. I am interested in understanding this matrix element from the path integral point of view: $$\,_{H}\langle x_2, t_2|x_1, t_1\rangle_{H} = \int_{q(t_1) = x_1}^{q(t_2) = x_2}\mathcal{D}q(t)\,e^{iS[q]}.$$ My confusion comes from the fact that there are many different representations of the action which could describe a free point particle, which differ by boundary terms. For example, one could use $$S_1[q] = \frac{1}{2}\int_{t_1}^{t_2}dt\,\dot q^2.$$ Alternatively, one could also use $$S_2[q] = -\frac{1}{2}\int_{t_1}^{t_2}dt\,q\ddot q = S_1[q] - \frac{1}{2}\Big(x_2\dot x_2 -x_1\dot x_1\Big).$$ The first choice of action is traditionally what one would use, and it comes from the Hamiltonian $$H_1 = \frac{p^2}{2}$$ However, I am wondering if the second choice of action has any physical meaning. In particular my questions are the following: (1) Does using $S_2$ actually compute a transition amplitude at all? For instance, if we use this action, we would have $$\,_{H}\langle x_2, t_2|x_1, t_1\rangle_{H} = e^{ - \frac{i}{2}(x_2\dot x_2 -x_1\dot x_1)}\int_{q(t_1) = x_1}^{q(t_2) = x_2}\mathcal{D}q(t)\,e^{iS_1[q]}$$ This does not seem like a well-defined object because $\dot x_{1,2}$ are not well defined. If this path integral does not compute a transition amplitude, does it have another interpretation? (2) What effect do the boundary terms in $S_2$ have on the Hamiltonian? Is there a different Hamiltonian $H_2$ that gives $S_2$?
Note: For question (1), one could argue that the problematic pieces only appear as a phase, and thus drop out after taking the modulus. This is true for this simple example, but it may not be in general. For instance, one might consider $$\,_H\langle x_2, t_2|\psi\rangle_H = \int dx_1\int_{q(t_1) = x_1}^{q(t_2) = x_2}\mathcal{D}q(t)\,e^{iS_2[q]}\,_{H}\langle x_1, t_1|\psi\rangle_H \\= e^{ - \frac{i}{2}x_2\dot x_2}\int dx_1 \,e^{\frac{i}{2}x_1\dot x_1}\int_{q(t_1) = x_1}^{q(t_2) = x_2}\mathcal{D}q(t)\,e^{iS_1[q]}\,_{H}\langle x_1, t_1|\psi\rangle_H $$ where the $x_1 \dot x_1$ piece does not appear as a phase. Answer: The point-particle path integral can allegedly be defined unambiguously in continuous time, but I'm not familiar with the high-brow details of that definition, so I'll do something barbaric: I'll discretize time. And to avoid issues with non-normalizable states, I'll use... normalizable states. Then the quantity of interest is $$ \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \la\tilde\psi,t_N|\psi,t_1\ra \equiv \int \prod_t dq_t\ e^{iS[q]} \tilde\psi^*(q_N)\psi(q_1). \tag{1} $$ This is well-defined as long as (a) $\psi$ and $\tilde\psi$ are normalizable, and (b) doing the integral over $q_1$ gives an expression of the same form but with a new "initial" state that is a normalizable function of $q_2$, and so on. What happens if we add a total-derivative term to the lagrangian, so that the modified action $S'$ picks up terms that depend on $q_1$ and $q_N$? It modifies the initial and final states. It's the same model as before, with the same action as before, but with different initial and final states. Now let's increase the number of spacetime dimensions from $1$ to $D$, still discretized. If we imposed periodic boundary conditions in the spatial dimensions, then nothing new would happen. The answer would be the same as above. But what if we don't impose periodic boundary conditions in the spatial dimensions? 
In that case, adding a total-derivative term to the lagrangian must change the boundary conditions on the spatial dimensions, in addition to changing the initial/final states. In other words, it changes the boundary conditions on all of the spacetime dimensions. I'm using the phrase "boundary conditions" in a broad sense: not just Neumann or Dirichlet, but also including generalizations like integrating over a function of the boundary-variables, just like we normally do to specify the initial/final states. Just for fun, let's generalize even further. What if we cut a hole inside spacetime? Now we need to specify boundary conditions on the boundary of the hole, and adding a total-derivative term to the lagrangian will change those boundary conditions just like it changed the "outside" boundary conditions. How should we interpret this? If I'm not mistaken, cutting a hole and specifying its boundary conditions is equivalent to inserting a local operator into the path integral (I mean localized within the hole, not necessarily at a point), so that now we're calculating $\la\tilde\psi,t_N|O|\psi,t_1\ra$ for some nontrivial operator $O$. Which operator? That's what the boundary conditions specify. I infer that adding a total-derivative term to the lagrangian changes the operator that we inserted, in addition to changing the initial/final states and the boundary conditions at the far edges of space. Those two things (changing operators and changing states) look different in the canonical formulation, but from this perspective, they look like the same kind of thing, just in different parts of spacetime. Disclaimer: I'm not an expert in this subject, and what little intuition I do have is based on local lattice QFT, which might not be sufficient to capture everything that can happen in local continuum QFT (cf the not-strictly-local overlap lattice Dirac operator).
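To make the state-redefinition claim above explicit (a sketch; $f$ is an arbitrary real function introduced here only for illustration, not taken from the answer): adding a total derivative $\frac{d}{dt}f(q)$ to the lagrangian shifts the discretized action only by boundary terms, $S'[q] = S[q] + f(q_N) - f(q_1)$, so the integrand of (1) can be regrouped as $$e^{iS'[q]}\,\tilde\psi^*(q_N)\,\psi(q_1) = e^{iS[q]}\,\big(e^{-if(q_N)}\tilde\psi(q_N)\big)^{*}\,\big(e^{-if(q_1)}\psi(q_1)\big),$$ i.e. the modified action computes exactly the same amplitude as the original one, but between the phase-rotated (still normalizable) states $e^{-if}\psi$ and $e^{-if}\tilde\psi$.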
{ "domain": "physics.stackexchange", "id": 81812, "tags": "quantum-mechanics, lagrangian-formalism, hamiltonian-formalism, path-integral, boundary-conditions" }
jQuery tab groups
Question: I'm new to jQuery, but I have a fair bit of experience with JavaScript, mainly using Angular. What I'm wanting to do is implement a horizontal list of tabs, where clicking each button will display a corresponding div, and hide the others. Here's how I'm doing it: $(document).ready(function(){ $(".tab-group").each(function(i, tg){ $(tg).children().each(function(i, child){ $(child).click(function(){ $(tg).children().removeClass('btn-selected'); $(child).addClass('btn-selected'); $(tg).children().each(function(i, child) { $('#'+$(child)[0].attributes['tabid'].nodeValue).hide(); }) $('#' + $(child)[0].attributes['tabid'].nodeValue).show(); }); }); $(tg).children('.btn-selected').click(); }); }); See the plunker This works how I want it to, but I have questions about the code style, because it looks a bit gnarly: Is there a better way to do ID selectors other than $('#' + ....) What about where I'm always referencing the first item in an array $(child)[0] - that seems messy. Am I reinventing the wheel here? - Is there a better way of doing this? I'm doing a 'hide all the children, then show the clicked one' algorithm. Is there a better way to do this? Answer: Some thoughts in reviewing both your javascript code and your HTML on Plunker. Code Style Don't use spaces between HTML tag properties and their values. It makes your markup hard to read. Example: <div id = "footab" class = "module"> Should be: <div id="footab" class="module"> Think of the spaces as the ONLY separator between each element property, since in HTML you don't have the luxury of commas. The only usage for spaces in element definition is for this purpose (excepting spaces inside property string values, of course). Lots of unnecessary vertical white space both in your HTML and javascript code. Code Content I question introducing your own custom HTML tag properties. This is what data attributes are meant to do. 
You could change this: <div id = "foo-btn" tabid = "footab" class = "btn btn-underline btn-selected">FOO</div> To: <div id="foo-btn" data-tab-target-id="footab" class="btn btn-underline btn-selected">FOO</div> So when trying to determine the target ID in javascript, you go from this: $('#'+$(child)[0].attributes['tabid'].nodeValue).hide(); To this: var target = $(this).data('tabTargetId'); $('#'+target).hide(); Much cleaner. Also note I used this here as... You are introducing a scope confusion problem by using child instead of just using this. This: $(tg).children().each(function(i, child){ Should just be: $(tg).children().each(function(){ Since you are not really using the index value anyway, there is no reason to specify parameters for the callback function. You would then use this or $(this) (if you need jQuery wrapper on current element) from within the callback. Attach listeners at a higher level (delegate event handling). There is no reason to attach a click handler to every tab in a loop like this. Instead consider using on() with a selector (ideally you could add a class to tab group elements to make this selector simple). So this: $(".tab-group").each(function(){ $(this).children().each(function(){ Can simply become: $(".tab-group").on('click', '.tab-option', function() { You have removed two levels of nested loops in doing this. Think about whether it truly makes sense to use <h1> tags for the titles of the tabbed content. Semantically, I would think you should be using <h2> here perhaps. This is more meaningful for things such as Google search indexing. Putting it together, you may end up with code like this. 
HTML: <body> <h1>Hello Plunker!</h1> <div class="tab-group"> <div id="foo-btn" data-tab-target-id="footab" class="tab-option btn btn-underline btn-selected">FOO</div> <div id="bar-btn" data-tab-target-id="bartab" class="tab-option btn btn-underline">BAR</div> </div> <div id="footab" class="module"> <h2>FOO</h2> </div> <div id="bartab" class="module"> <h2>BAR</h2> </div> <div class="tab-group"> <div data-tab-target-id="a" class="tab-option btn btn-underline btn-selected"> A</div> <div data-tab-target-id="b" class="tab-option btn btn-underline"> B</div> <div data-tab-target-id="c" class="tab-option btn btn-underline"> C</div> </div> <div id="a" class="module"> <h2>ALPHA</h2> </div> <div id="b" class="module"> <h2>BETA</h2> </div> <div id="c" class="module"> <h2>CHARLIE</h2> </div> </body> Javascript: $(document).ready(function(){ $(".tab-group").on('click', '.tab-option', function() { var $selectedTab = $(this); var selectedTargetId = $selectedTab.data('tab-target-id'); var $siblingTabs = $selectedTab.siblings('.tab-option'); $siblingTabs .removeClass('btn-selected') .each(function() { var target = $(this).data('tab-target-id'); $('#'+target).hide(); }); $selectedTab.addClass('btn-selected'); $('#'+selectedTargetId).show(); }); }); A few thoughts for further exploration: Something like this might be something you consider writing as a jQuery plug-in. In fact there are probably a large number of such plug-ins available around tabbed interfaces that you can either use or reference for inspiration. You could optimize performance here. This code (both your version and mine) would still do some unnecessary DOM traversal that could be designed away if you REALLY needed to squeeze performance out of this functionality. You could certainly better store references between tab group and target content elements such that you would not need to re-calculate these relationships every time this event handler fires. 
You might take a look at my answer on this code review - https://codereview.stackexchange.com/a/135858/23727 - which presented a more complex use case similar to yours in behavior (controls that show/hide elements). For this use case, I had suggested using a javascript "class" structure to: Support efficient storage of jQuery collection references Build and store a data structure for efficient mapping of control elements to the elements they control Execute various show/hide (filtering) operations.
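As a rough illustration of that last performance point (a hypothetical sketch; `buildTabMap` and the sample ids are mine, not part of the review above), the tab-to-panel relationships could be computed once at startup instead of being re-derived from `data-*` attributes inside every click:

```javascript
// Build a plain lookup object mapping a tab's id to the id of the
// panel it controls, so the click handler does a constant-time
// lookup instead of re-reading data-* attributes on every click.
function buildTabMap(tabs) {
  var map = {};
  tabs.forEach(function (tab) {
    map[tab.tabId] = tab.targetId;
  });
  return map;
}

// Built once at startup (in real code, collected from the data-* attributes):
var tabMap = buildTabMap([
  { tabId: 'foo-btn', targetId: 'footab' },
  { tabId: 'bar-btn', targetId: 'bartab' }
]);
```

A click handler could then simply do `$('#' + tabMap[this.id]).show()` and hide the others.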
{ "domain": "codereview.stackexchange", "id": 22016, "tags": "javascript, jquery" }
Is this a manifestation of some infinite-dimensional Cayley-Hamilton theorem?
Question: In classical field theory, when you have a free real scalar field $\phi$ with Lagrangian (density): $$ L = \frac{1}{2} \, \eta^{\mu \nu} \, \partial_{\mu} \phi \,\partial_{\nu} \phi - \frac{1}{2} m^2 \phi^2,$$ where $\eta^{\mu \nu}$ is the Minkowski metric with signature $(+, -, -, -)$, the corresponding Euler-Lagrange equations are the Klein-Gordon equations: $$ \eta^{\mu \nu} \partial_{\mu} \partial_{\nu} \, \phi + m^2 \phi = 0. $$ Moreover, it turns out that after quantization, in QFT, the quantized field $\hat{\phi}(x)$ in the Heisenberg picture also obeys the Klein-Gordon equations. This vaguely reminds me of the Cayley-Hamilton theorem, which states that if $A$ is a complex $n \times n$ matrix, then $p(A) = 0$, where $p(x)$ is the characteristic polynomial of $A$. Indeed, we know that any eigenvalue $\lambda$ of $A$ satisfies $p(\lambda) = 0$, and the CH theorem tells us that when you promote a generic eigenvalue $\lambda$ to the "operator" $A$, then $A$ satisfies the same equation, namely $p(A) = 0$. I realize it is a "formal" question for most physicists, but I wonder if what I am thinking of is essentially true. Answer: This is the ultimate "soft" question, so I can't possibly give you a "hard" answer, but, offhand, no. Indeed, in your vision, the C-H theorem amounts to a functor "promoting" a characteristic polynomial with n roots $\lambda_n$ to the same polynomial of a diagonal (direct sum) operator with these roots along its diagonal; and then scrambling the operator to a generic hermitian one (you seem to think of strictly diagonalizable operators, the all but trivial case of the theorem, which is fine for your purposes). However, quantization is not quite a map between quantum operators and their eigenvalues, despite the coherent state vision you might conceivably be basing this on. 
That is to say, the complex coefficients $\alpha(p), \alpha^*(p)$ you are quantizing to $a(p), a^\dagger(p)$ are in no way eigenvalues of your quantum result--and the linear K-G equation is not readily interpretable as a characteristic polynomial of the $\alpha(p), \alpha^*(p)$s... Quantization here is just arraying a slew of decoupled classical momentum (normal) modes with arbitrary coefficients to the same modes with quantum coefficients, and the similarity is predicated on the identical component equations. In a sense, this is the trivial part encoding Lorentz invariance. The real fun starts when you project the nontrivial implications of the quantized system, and how features unthinkable for the classical precursor start emerging. But, as indicated, absent further specific hard questions, it is hard to comment on your vision.
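For reference, the diagonalizable case of Cayley-Hamilton alluded to above really is immediate: if $A = P\Lambda P^{-1}$ with $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_n)$ and characteristic polynomial $p$, then $$p(A) = P\,p(\Lambda)\,P^{-1} = P\,\mathrm{diag}\big(p(\lambda_1),\dots,p(\lambda_n)\big)\,P^{-1} = 0,$$ since every eigenvalue is a root of $p$. The general case follows because the map $A \mapsto p_A(A)$ is polynomial in the entries of $A$ and vanishes on the dense set of diagonalizable complex matrices, hence everywhere.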
{ "domain": "physics.stackexchange", "id": 85604, "tags": "quantum-field-theory, linear-algebra, klein-gordon-equation, classical-field-theory" }
What causes these circular swirls of islands?
Question: I was following the border of the US and Canada on Google Maps and found these interesting circular patterns of islands near the Northwest Angle. I have heard of geologic folding but I have never heard of swirls or circles. What is the cause of this? Swirl of islands on Google maps Answer: Near the Lake of the Woods, the island swirl or vortex is due to structures in bedrock beneath the lake. The rocks in this area are old and have been folded by tectonic action. The area has either been folded into a dome or a basin, exposing different layers of bedrock. Glacial activity afterwards has 'flattened' the area, and differential erosion within the folded area formed valleys that flooded where the rock eroded faster. Geologic history of The Lake of the Woods.
{ "domain": "earthscience.stackexchange", "id": 1112, "tags": "geomorphology, structural-geology, regional-geology" }
Why CMakeLists's rosbuild_add_executable couldn't have a name 'test'?
Question: My package couldn't generate an executable file when CMakeLists had this line: rosbuild_add_executable(test src/t.cpp) My package can generate an executable file when CMakeLists has this line: rosbuild_add_executable(test2 src/t.cpp) I found that only with the name 'test' does it fail to generate an executable file. What happened? I use ROS diamondback. Thank you~ Originally posted by sam on ROS Answers with karma: 2570 on 2011-08-26 Post score: 0 Answer: From the rosbuild/ CMakeLists wiki page: Do not call your executable test. This target name is reserved for unit testing, and CMake will fail if you try it. The page describes a workaround if you really need to call it test. Originally posted by Ivan Dryanovski with karma: 4954 on 2011-08-27 This answer was ACCEPTED on the original site Post score: 1
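One standard CMake workaround of the kind the wiki describes (a sketch; the target name `my_test` is just an example): give the build target a non-reserved name, then rename only the produced binary.

```cmake
# 'test' is a reserved CMake target (created for CTest), so use a
# different target name and set the output file name instead.
rosbuild_add_executable(my_test src/t.cpp)
set_target_properties(my_test PROPERTIES OUTPUT_NAME test)
```

`set_target_properties` with `OUTPUT_NAME` is plain CMake, so the resulting executable on disk is still called `test` while the target name stays legal.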
{ "domain": "robotics.stackexchange", "id": 6536, "tags": "cmake" }