How can I tell whether or not a molecule is planar?
Question: I am doing several questions involving judgment on the planarity of a compound. Which of the following is not a planar molecule? $\ce{H_2C=C=CH_2}$ $\ce{H_2C=C=C=CH_2}$ $\ce{H_2C=C=O}$ $\ce{NC-HC=CH-CN}$ I had the idea that a compound in which the central atom has $\ce{sp}$ hybridisation is planar, or that a compound in which all the atoms have the same hybridization is planar. But that is not working in this case. I know how to find the hybridization of an atom in a compound, but I feel stuck when it comes to deciding the planarity of certain compounds, and the question does not address this. Does anyone have an idea how to solve this and related questions? Answer: I think these general rules work: 1) If there is an $\ce{sp^3}$ hybridized carbon (or nitrogen), the molecule is NOT planar. 2) If there are no $\ce{sp^3}$ hybridized carbons (or nitrogens), and there is only one $\ce{sp^2}$ hybridized atom (carbon or nitrogen), it will be planar. 3) If there are no $\ce{sp^3}$ hybridized atoms, and there are two $\ce{sp^2}$ hybridized atoms (carbon or nitrogen) that are separated by an even number of double bonds and no single bonds, then the molecule will not be planar. So a simple general rule: the molecule will not be planar if there is an $\ce{sp^3}$ hybridized carbon (or nitrogen) atom, or two $\ce{sp^2}$ hybridized carbon/nitrogen atoms separated by an even number of double bonds and no single bonds. Otherwise, its structure allows it to be planar. Even when the structure allows a planar conformation, some molecules will not persist in one, due to steric effects or complex three-dimensional geometries. Applying this rule to the problems you listed above: 1) Not planar, because there are no $\ce{sp^3}$ atoms and the two $\ce{sp^2}$ carbons are separated by an even number (2) of double bonds.
2) Planar, because the two $\ce{sp^2}$ carbons are separated by an odd number (3) of double bonds (and no single bonds). 3) Planar, because there are no $\ce{sp^3}$ atoms and only one $\ce{sp^2}$ atom that makes three or more bonds (C or N); the orbital geometry is NOT planar, because the $\ce{sp^2}$ oxygen is separated from the $\ce{sp^2}$ carbon by an even number of double bonds. 4) Planar, because the two $\ce{sp^2}$ carbons are separated by an odd number (1) of double bonds (and no single bonds).
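The heuristic above can be written down as a tiny function. This is only a sketch of the answer's rule of thumb, not established chemistry, and the way each molecule is encoded as flags is my own assumption:

```python
def allows_planar(has_sp3, sp2_pair_separated_by_even_double_bonds_only):
    """Heuristic from the answer: a molecule cannot be planar if it has an
    sp3 C/N, or two sp2 C/N atoms separated by an even number of double
    bonds (and no single bonds). Otherwise its structure allows planarity."""
    return not has_sp3 and not sp2_pair_separated_by_even_double_bonds_only

# The four molecules from the question (flag encodings are assumptions):
molecules = {
    "H2C=C=CH2":   (False, True),   # two sp2 C separated by 2 double bonds
    "H2C=C=C=CH2": (False, False),  # separated by 3 (odd) double bonds
    "H2C=C=O":     (False, False),  # treated as planar in the answer
    "NC-HC=CH-CN": (False, False),  # sp2 pair separated by 1 double bond
}
for name, flags in molecules.items():
    print(name, "planar" if allows_planar(*flags) else "not planar")
```

Only the allene comes out non-planar, matching the answer's verdicts.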
{ "domain": "chemistry.stackexchange", "id": 6916, "tags": "organic-chemistry, molecular-structure" }
How to justify the chosen neural architecture?
Question: I had a task to implement a neural network that would carry out multiclass classification of traffic by several parameters. On the advice of colleagues, I chose the "Multilayer Perceptron" architecture. One of these days I will have to defend my work, but I do not understand at all how to answer the question: "Why did you choose this type of architecture?". Please tell me: are there any arguments for why the multilayer perceptron architecture is better than other neural network architectures for multi-class traffic classification? Answer: This is a very general question, so I'll just point to a reference that should be a good starting point. Deep Learning for Encrypted Traffic Classification: An Overview seems to contain exactly what you're looking for: Several factors affect the choice of deep learning models for network traffic classification. The most important one is the choice of features. ... Table II summarizes features, the corresponding models, and their properties.
{ "domain": "ai.stackexchange", "id": 3365, "tags": "neural-networks, deep-learning, classification" }
ADCs: by what mechanism are antibodies internalised
Question: I read that ADCs (Antibody-Drug Conjugates) act by a -mab for a particular target being bonded to a cytotoxic compound. From my high-school-with-crayons knowledge of antibodies, however, one part of the mechanism stands out as strange: the internalisation of the complex into the cell. In my head, antibodies are either free in plasma or expressed bound to the surface of cells. How and why (mechanistically/evolutionarily/functionally) does this antibody internalisation take place? Is it a natural process in most organisms, or a backdoor which must be engineered into the ADC? The usual sources for neophytes are quiet in this area, and the advanced material is bewildering. Answer: (Figure: what we're basically looking at with ADC internalization. Courtesy of Bayer.) So the way an ADC works: the antibody-drug conjugate binds to a target antigen, such as a transmembrane receptor; in response, the cell engulfs the entire complex and sends it to an endosome. What the cell can do with the endosome depends on the cargo. Digressing, the ADC internalization mechanism is often attributed to receptor-mediated endocytosis, where the most prominent mechanism of receptor internalization is the clathrin-mediated pathway. This is the pathway we'll be referring to most of the time. Clathrin-mediated endocytosis commences with the recruitment of adaptor proteins, accessory proteins and a clathrin polymeric lattice to phosphatidylinositol-4,5-bisphosphate-enriched plasma membrane regions. A common adaptor protein here is AP2, which binds motifs present on the cytoplasmic tails of membrane receptors; in effect, it selects which receptors become the cargo. The clathrin then mobilizes to the now-enriched membrane regions, and its polymerization causes the membrane to displace and curve.
Dynamins, large helical GTPase proteins capable of stretching the invagination in the membrane to a vesicle, bind the phosphatidylinositol-4,5-bisphosphates in the membrane, and may work in concert with BAR domain-containing proteins and actin tension to stretch the membrane into a vesicle and cut it away (Doherty & McMahon, 2009). Obviously there's some missing information, as we don't fully understand it. I think reading the first reference, published in mAbs in 2013, is a really good place to start however!
{ "domain": "biology.stackexchange", "id": 5094, "tags": "cell-biology, antibody" }
Conserving momentum along $y$-axis
Question: We know that the momentum of a system is conserved if no external force acts on it, and since gravity acts along the $y$ axis, the momentum of a system excluding the earth cannot be conserved along the $y$ axis. But I have a doubt about this. What difference does it make to include or exclude the earth? If we include the earth in our system, gravity is an internal force. Then we are just supposed to take into account the momentum of the earth before and after, right? The earth is so huge that neither its mass nor its velocity changes. So won't the initial and final momentum of the earth always remain the same? Doesn't that mean we can always conserve momentum along the $y$ axis as well? Mathematically, suppose a vertically projected body and a falling body collide along the $y$ axis. Taking the earth into the system, $m_1u_1+m_2u_2+M_{\mathrm{earth}}u_{\mathrm{earth}}=m_1v_1+m_2v_2+M_{\mathrm{earth}}v_{\mathrm{earth}}$ Here the terms $M_{\mathrm{earth}}u_{\mathrm{earth}}$ and $M_{\mathrm{earth}}v_{\mathrm{earth}}$ cancel, since $u_{\mathrm{earth}}=v_{\mathrm{earth}}$. So it doesn't seem to make any difference whether gravity is an external force when conserving momentum. Hence, can't we say that momentum is always conserved even when gravity is an external force? I may be wrong. Please enlighten me. Answer: Can't we say that momentum is always conserved when gravity is an external force? No. An object at rest, like a stone after I drop it, will be accelerated by gravity, meaning $\dot P \neq 0$: momentum is not conserved if we take the earth out of the equation. What you got right, though, is that the change in the earth's momentum is very small; but if we decide to neglect it, that means that even if we take the earth into the system, momentum is not conserved, not the other way around. The earth is so huge that neither its mass changes nor velocity.
This is not true. It's a good approximation, but saying it is like saying Newton's third law does not apply to the earth, which is the same as saying gravity is not an interaction force but a spooky force from outside, an external force. Of course then we don't have conservation of momentum, as you well know: "momentum of a system excluding the earth cannot be conserved". To finish, here is the calculation showing that the total momentum does not change. Let's say you let an apple ($m$) fall and look at the forces between the apple and the earth ($M$), taking down as the negative direction: $$F_{{e\to a}}=-\frac{G m M}{ r^2}=-F_ {{a\to e}}$$ Acceleration of the apple: $$a_a=F_{{e\to a}}/m$$ Acceleration of the earth: $$a_e=F_{{a\to e}}/M$$ The masses of both don't change, so the change in momentum of each is just $\dot P = m a$: $$\dot P_a = m a_a=m F_{{e\to a}}/m = -\frac{G m M}{ r^2} $$ And for the earth: $$\dot P_e = M a_e=M F_{{a\to e}}/M= \frac{G m M}{ r^2} $$ We see that the total change in momentum is zero: $$\dot P = \dot P_e + \dot P_a =0 $$ I.e. momentum is conserved. If you now took $\dot P_e =0$, you would of course have a nonzero total change in momentum, i.e. no momentum conservation.
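The cancellation in that calculation is easy to check numerically. A minimal sketch, where the apple's mass and the distance are illustrative values, not anything from the question:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m = 0.1                # apple mass, kg (illustrative)
M = 5.97e24            # Earth mass, kg
r = 6.37e6             # Earth radius, m (illustrative separation)

F = G * m * M / r**2   # magnitude of the mutual gravitational force

P_dot_apple = -F       # apple is pulled down (negative direction)
P_dot_earth = +F       # Earth is pulled up, by Newton's third law

# The total rate of change of momentum vanishes exactly
print(P_dot_apple + P_dot_earth)  # 0.0
```

Dropping `P_dot_earth` (treating gravity as external) leaves a nonzero total, which is exactly the answer's point.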
{ "domain": "physics.stackexchange", "id": 90607, "tags": "newtonian-mechanics, momentum, conservation-laws" }
Old-way of asynchronous programming in ASP.NET MVC 3
Question: In ASP.NET MVC, I am trying to write an async controller action with the old asynchronous programming model (actually, it is the current one; the new one is still a CTP). Here, I am running 4 operations in parallel, and it works great. Here is the complete code: public class SampleController : AsyncController { public void IndexAsync() { AsyncManager.OutstandingOperations.Increment(4); var task1 = Task<string>.Factory.StartNew(() => { return GetReponse1(); }); var task2 = Task<string>.Factory.StartNew(() => { return GetResponse2(); }); var task3 = Task<string>.Factory.StartNew(() => { return GetResponse3(); }); var task4 = Task<string>.Factory.StartNew(() => { return GetResponse4(); }); task1.ContinueWith(t => { AsyncManager.Parameters["headers1"] = t.Result; AsyncManager.OutstandingOperations.Decrement(); }); task2.ContinueWith(t => { AsyncManager.Parameters["headers2"] = t.Result; AsyncManager.OutstandingOperations.Decrement(); }); task3.ContinueWith(t => { AsyncManager.Parameters["headers3"] = t.Result; AsyncManager.OutstandingOperations.Decrement(); }); task4.ContinueWith(t => { AsyncManager.Parameters["headers4"] = t.Result; AsyncManager.OutstandingOperations.Decrement(); }); task3.ContinueWith(t => { AsyncManager.OutstandingOperations.Decrement(); }, TaskContinuationOptions.OnlyOnFaulted); } public ActionResult IndexCompleted(string headers1, string headers2, string headers3, string headers4) { ViewBag.Headers = string.Join("<br/><br/>", headers1, headers2, headers3, headers4); return View(); } public ActionResult Index2() { ViewBag.Headers = string.Join("<br/><br/>", GetReponse1(), GetResponse2(), GetResponse3(), GetResponse4()); return View(); } #region helpers string GetReponse1() { var req = (HttpWebRequest)WebRequest.Create("http://www.twitter.com"); req.Method = "HEAD"; var resp = (HttpWebResponse)req.GetResponse(); return FormatHeaders(resp.Headers); } string GetResponse2() { var req2 =
(HttpWebRequest)WebRequest.Create("http://stackoverflow.com"); req2.Method = "HEAD"; var resp2 = (HttpWebResponse)req2.GetResponse(); return FormatHeaders(resp2.Headers); } string GetResponse3() { var req = (HttpWebRequest)WebRequest.Create("http://google.com"); req.Method = "HEAD"; var resp = (HttpWebResponse)req.GetResponse(); return FormatHeaders(resp.Headers); } string GetResponse4() { var req = (HttpWebRequest)WebRequest.Create("http://github.com"); req.Method = "HEAD"; var resp = (HttpWebResponse)req.GetResponse(); return FormatHeaders(resp.Headers); } private static string FormatHeaders(WebHeaderCollection headers) { var headerStrings = from header in headers.Keys.Cast<string>() select string.Format("{0}: {1}", header, headers[header]); return string.Join("<br />", headerStrings.ToArray()); } #endregion } I also have an Index2 method here which is synchronous and does the same thing. I compared the execution times of the two, and there is a major difference (approx. 2 seconds). But I think I am missing lots of things here (exception handling, timeouts, etc.). I only implemented exception handling on task3, but I don't think it is the right way of doing it. What is your opinion on this code? How can it be improved? Answer: I would certainly include exception handling in the process. I normally use try catch statements. This at least allows me to catch the exceptions and return a more meaningful and informative message, vs letting a method throw a silent exception that may not be surfaced. Example within the helper methods: string GetReponse1() { try { var req = (HttpWebRequest)WebRequest.Create("http://www.twitter.com"); req.Method = "HEAD"; var resp = (HttpWebResponse)req.GetResponse(); return FormatHeaders(resp.Headers); } catch (Exception ex) { throw new Exception("There was a problem creating the web request", ex); } } Since you are making async calls, I would look to handle timeouts or prolonged responses.
You could do this by creating a timer (the Stopwatch class). I would allow the timeout to be configurable by the user or within a config file, but I would include a stopwatch object in the method that cancels the async call and returns a timeout response once some limit (10 seconds, for example) is reached. You could include this in the helper methods so that the call expires after a certain time. Additional examples of try catch statements and the Stopwatch class may be found at: Stopwatch class MSDN Try Catch Statements
{ "domain": "codereview.stackexchange", "id": 6326, "tags": "c#, asp.net, asp.net-mvc-3, asynchronous" }
How did the scientific community receive this measurement of speed of gravity
Question: This link and this one concern a recent measurement, by Chinese scientists, of the speed of gravity using Earth tides. They find it is consistent with a speed equal with the speed of light, with an error of about 5%. Is it real? Was it done before another way, with better precision? Is it something particularly important for the validation of gravitation theories? Answer: K.Y. Tang is a geophysicist who is known for work on the Allais effect, which is pathological science dating back to the 1950's, when Allais claimed anomalous effects on a Foucault pendulum during an eclipse. A Google Scholar search shows no citations yet to Tang et al.'s February 2013 paper claiming to have measured the speed of gravity. As is often the case with pathological science, there seems to be a certain set of people who take the subject seriously and cite each other's papers, while people outside their circle can't be bothered to debunk them. This particular subgroup includes kooks like van Flandern, who has claimed, for example, that light propagates faster than $c$. As discussed in the answers to this question, we have strong indirect confirmation from binary pulsars of GR's prediction that gravity propagates at $c$, whereas attempts at a direct measurement have been thwarted by the lack of any test theory that predicts any other speed for gravity. As with the previous bogus claim by Kopeikin, Tang et al. seem to have made no effort to seek the involvement of anyone competent in general relativity to help with analyzing and interpreting their data. A Google Scholar search shows a couple of papers, Amador 2008 and Duif 2004, that reference Tang's previous work on the Allais effect. Amador, "Review on possible Gravitational Anomalies," 2008, http://arxiv.org/abs/gr-qc/0604069 Duif, "A review of conventional explanations of anomalous observations during solar eclipses," 2004, http://arxiv.org/abs/gr-qc/0408023
{ "domain": "physics.stackexchange", "id": 12604, "tags": "gravity, speed-of-light" }
Quantum Tunnelling with Delta Potential
Question: I'm trying to create an animation of Quantum Tunnelling like this one. I've been learning some QM on my own, so please forgive and correct any mistakes. I considered the potential barrier $\alpha \delta(x)$ where $\alpha$ is a real constant and $\delta$ is the Dirac delta. I assumed a wave coming in from the left (travelling to the right) that either reflects off the barrier or tunnels through it. Solving the time-independent Schrödinger equation gave me $$\psi(x) = \left\{ \begin{array}{ccc} 1\mathrm e^{\mathrm ikx} + R\mathrm e^{-\mathrm ikx} & : & x<0 \\ T\mathrm e^{\mathrm i kx} & : & x > 0 \end{array}\right.$$ where $k = \frac{\sqrt{2mE}}{\hbar}$. Here $|R|^2$ gives the probability of the wave reflecting and $|T|^2$ the probability of the wave tunnelling through the barrier. We want $\psi$ to be continuous and we want, as $\varepsilon \to 0^+$, $$-\frac{\hbar^2}{2m}\int_{-\varepsilon}^{\varepsilon}\frac{\mathrm d^2\psi}{\mathrm dx^2}~\mathrm dx+\alpha\int_{-\varepsilon}^{\varepsilon}\delta(x)\psi(x)~\mathrm dx= E\int_{-\varepsilon}^{\varepsilon}\psi(x)~\mathrm dx$$ $$\lim_{\varepsilon \to 0}\left[\frac{\mathrm d\psi}{\mathrm dx}\right]_{-\varepsilon}^{\varepsilon} = \frac{2m\alpha}{\hbar^2}\psi(0)$$ Applying these conditions gives me $$R=\frac{\alpha}{2\mathrm ik-\alpha} \ \ \ \mbox{ and } \ \ \ T=\frac{2\mathrm ik}{2\mathrm ik-\alpha}$$ $$\psi(x) = \left\{ \begin{array}{ccc} \mathrm e^{\mathrm ikx} + \left(\frac{\alpha}{2\mathrm ik-\alpha}\right)\mathrm e^{-\mathrm ikx} & : & x<0 \\ \left(\frac{2\mathrm ik}{2\mathrm ik-\alpha}\right)\mathrm e^{\mathrm i kx} & : & x > 0 \end{array}\right.$$ Including the time-dependent term $\varphi(t)=\mathrm e^{-\mathrm iEt/\hbar} = \mathrm e^{-\mathrm ik^2\hbar t/2m}$ gives $$\psi(x)\varphi(t) = \left\{ \begin{array}{ccc} \mathrm e^{\mathrm ik(x-k\hbar t/2m)} + \left(\frac{\alpha}{2\mathrm ik-\alpha}\right)\mathrm e^{-\mathrm ik(x+k\hbar t/2m)} & : & x<0 \\ \left(\frac{2\mathrm
ik}{2\mathrm ik-\alpha}\right)\mathrm e^{\mathrm ik(x-k\hbar t/2m)} & : & x > 0 \end{array}\right.$$ I've looked at $|\psi(x)\varphi(t)|^2$ and this is independent of $t$. Griffiths mentions taking a linear combination of the $\psi(x)\varphi(t)$, but does not give any details. Any ideas? Answer: If what you wanted was to get something that changes in time, then you started off on the wrong foot when you went looking for solutions of the time-independent Schrödinger equation. The wavefunction you have written down is an eigenfunction of the hamiltonian, and as such, no physical observable will ever change in time. If what you want is to construct a solution with a wavepacket that actually moves, then that's never going to be a solution of the TISE; instead, you need to build a solution of the time-dependent Schrödinger equation, with a suitable initial condition, and then let that propagate. Luckily, you've already done most of the required work, in building out the relevant continuum eigenstates $\psi_k(x)$ (and therefore their associated TDSE solutions, $e^{-i\hbar k^2 t/2m}\psi_k(x)$), and all you need is to assemble those into a wavepacket. The way that's normally done is by starting with a gaussian on the left and with momentum to the right, $$ \psi_0(x,t_0) = N \exp\left(-\frac{1}{2\sigma^2}(x-x_0)^2+ip_0x\right), $$ decompose that via a Fourier transform into a sum of plane waves, extend those plane waves into the barrier eigenstates you've found, add the time-dependent phase, and then do the Fourier transform back into position space.
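That recipe can be sketched numerically. This is a rough illustration in units where $\hbar = m = 1$, with the constants absorbed into $\alpha$ as in the post; the barrier strength, packet parameters, and grid sizes are all illustrative choices:

```python
import numpy as np

alpha = 1.0                                   # barrier strength (post's convention)

def R(k):                                     # reflection amplitude
    return alpha / (2j * k - alpha)

def T(k):                                     # transmission amplitude
    return 2j * k / (2j * k - alpha)

def psi_k(x, k):
    """Scattering eigenstate for a wave incident from the left."""
    left = np.exp(1j * k * x) + R(k) * np.exp(-1j * k * x)
    right = T(k) * np.exp(1j * k * x)
    return np.where(x < 0, left, right)

def wavepacket(x, t, x0=-30.0, p0=3.0, sigma=2.0, nk=400):
    """Superpose eigenstates, weighted by the Gaussian's momentum amplitudes."""
    k = np.linspace(0.05, p0 + 5.0, nk)       # positive k only: packet moves right
    dk = k[1] - k[0]
    # momentum-space amplitude of N exp(-(x-x0)^2/(2 sigma^2) + i p0 x)
    phi = np.exp(-sigma**2 * (k - p0)**2 / 2 - 1j * (k - p0) * x0)
    phase = np.exp(-1j * k**2 * t / 2)        # e^{-i E_k t}, with E_k = k^2 / 2
    return (phi * phase * psi_k(x[:, None], k)).sum(axis=1) * dk

x = np.linspace(-60, 60, 1200)
# |R|^2 + |T|^2 = 1 at every k, as probability conservation requires;
# evaluating |wavepacket(x, t)|^2 at increasing t shows the packet hit the
# barrier and split into reflected and transmitted parts.
```

At `t = 0` the packet sits on the left; propagating `t` forward animates the tunnelling event the question is after.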
{ "domain": "physics.stackexchange", "id": 42175, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, scattering, quantum-tunneling" }
Does dark matter follow all principles of regular physics?
Question: Is dark matter bound by all the laws of regular physics? I.e., the laws of thermodynamics, the speed of light, length contraction, the mass-energy relation. What about Newton's laws of motion (since all of Newton's laws assume an interaction between particles)? Answer: Is dark matter bound by all laws of regular physics? i.e. laws of thermodynamics, speed of light, length contraction, mass-energy relation. Yes. The four things you mention are assumed to apply to dark matter. It does not need to violate any known laws of physics. The only way in which dark matter needs to differ from regular matter is that it doesn't interact electromagnetically, or it has electromagnetic interactions that are so weak that they can't be observed. It definitely needs to interact gravitationally. It could have weak and strong nuclear interactions, and perhaps new interactions that regular matter doesn't have. It doesn't need to be weird. All it needs to be is dark: not emitting or absorbing an observable amount of light or any other electromagnetic radiation. What about Newton's laws of motion (since all of Newton's laws assume an interaction between particles)? Yes. But Newton's Laws do not assume or require any particular interaction, or even any interaction at all. The First Law tells you what happens when there are no forces. The Second and Third tell you what happens if there are forces.
{ "domain": "physics.stackexchange", "id": 70444, "tags": "thermodynamics, general-relativity, special-relativity, dark-matter" }
Brightness of 2 Lightbulbs hooked up to Capacitor
Question: The figure shows a circuit consisting of a battery, a switch, two identical lightbulbs, and a capacitor that is initially uncharged. a. Immediately after the switch is closed, are either or both bulbs glowing? Explain. b. If both bulbs are glowing, which is brighter? Or are they equally bright? Explain. c. For any bulb (A or B or both) that lights up immediately after the switch is closed, does its brightness increase with time, decrease with time, or remain unchanged? Explain. I really don't understand how a capacitor affects the flow of current in a circuit, and am having an incredibly hard time figuring out this question. At least an idea of where to start would be great. I had thought that for part A both light bulbs would begin glowing since the capacitor isn't charged, but I have no idea how to tell which one is brighter. I also think that for part C the brightness of the bulbs would decrease over time as the capacitor charges, because once it's charged the voltage of the capacitor equals the voltage of the battery, hence there is no potential difference left and no current. Am I on the right track at all? Answer: You're on the right track. You're probably familiar with how the current decreases exponentially after closing the switch. $$I(t)=I_0 e^{-t/\tau}$$ Where $\tau$ is the time constant of the circuit, given by $\tau = RC$, and $R$ is the total resistance of the bulbs. So when the switch is closed, the current will be at a maximum, and the bulbs brightest. As time goes on, the bulbs will dim according to the exponential decay of the current. To help you understand the capacitor's behaviour better: the usual circuit rule of current in = current out applies. This means the current on either side of the capacitor is the same, and also the current through either bulb is the same, and thus the bulbs are equally bright.
When you have charge flowing into one side of the capacitor, the same charge is flowing out the other side of the capacitor, to keep both plates of the capacitor equally and oppositely charged.
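A quick numerical illustration of that exponential decay; the component values here are made up for illustration, not taken from the figure:

```python
import math

V = 9.0        # battery voltage (illustrative)
R = 100.0      # total resistance of the two bulbs in series, ohms (illustrative)
C = 1e-3       # capacitance, farads (illustrative)

tau = R * C                     # time constant of the circuit
I0 = V / R                      # current the instant the switch closes

def current(t):
    """Charging current I(t) = I0 * exp(-t / tau)."""
    return I0 * math.exp(-t / tau)

print(current(0.0))             # maximum current: bulbs are brightest
print(current(5 * tau))         # under 1% of I0: bulbs have effectively gone dark
```

Since the same current flows through both bulbs at every instant, both dim together on this curve.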
{ "domain": "physics.stackexchange", "id": 30496, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, batteries" }
Using the continuity equation against gravity
Question: Studying Fluid Mechanics right now and in my textbook there is an example of getting water up to a bathroom in a house. We're given the diameter of the inlet pipe and bathroom pipe, but only the velocity at the inlet pipe. Why does the continuity equation apply if gravity is accelerating the water down? The pipe is getting smaller so velocity increases, but then velocity also decreases because gravity is acting against the water flow. Answer: This started as a comment but I will flesh it out some. The continuity equation is: $$ \frac{\partial \rho}{\partial t} + \frac{\partial \rho u_i}{\partial x_i} = 0$$ and is an expression of the conservation of mass. Effectively it is saying "What comes in, must go out, or density must increase/decrease accordingly." The example your book gives is using water. It is probably (without telling you as much) assuming that the water is incompressible. This is a good assumption, but nothing is truly incompressible. Anyway, if it is incompressible then the density cannot increase nor can it decrease. So you are left with "What comes in, must go out." That is why you can still use the 1D simplification of $A_1 u_1 = A_2 u_2$ which is likely what your book is doing.
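As a concrete sketch of that 1D simplification, here is the mass-conservation calculation with invented numbers (the diameters and inlet velocity are assumptions, not the textbook's values):

```python
import math

d1 = 0.02      # inlet pipe diameter, m (assumed)
d2 = 0.01      # bathroom pipe diameter, m (assumed)
v1 = 1.5       # velocity at the inlet, m/s (assumed)

A1 = math.pi * d1**2 / 4
A2 = math.pi * d2**2 / 4

# Incompressible flow: what comes in must go out, so A1 * v1 = A2 * v2
v2 = A1 * v1 / A2
print(v2)      # 6.0 m/s: the area shrank by 4x, so the velocity is 4x larger
```

Gravity doesn't enter this balance at all; it shows up in the energy (Bernoulli) equation instead, changing the pressure needed to drive the flow, not the mass flux.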
{ "domain": "physics.stackexchange", "id": 20981, "tags": "fluid-dynamics" }
SUMMARIST: Automated Text Summarization
Question: There is a text summarization project called SUMMARIST. Apparently it is able to perform abstractive text summarization. I want to give it a try but unfortunately the demo links on the website do not work. Does anybody have any information regarding this? How can I test this tool? http://www.isi.edu/natural-language/projects/SUMMARIST.html Regards, PasMod Answer: It dates back to 1998, so most likely has been abandoned, or "acquired" by Microsoft as the creator currently works there and has done since publishing that research. See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/ists97.pdf and http://research.microsoft.com/en-us/people/cyl for the author. Maybe you could try to contact him.
{ "domain": "datascience.stackexchange", "id": 94, "tags": "text-mining" }
How long must a sample be irradiated before all 59-Co atoms are converted to 60-Co?
Question: For the interaction, n$^0$ + $^{59}$Co $\to$ $^{60}$Co Answer: As you probably know, $^{59}$Co is the only stable isotope of cobalt. Neutron irradiation results in the formation of $^{60}$Co (plus an emitted gamma), which has a half-life of 5.27 years. The decay is through beta decay, resulting in the formation of stable $^{60}$Ni. For nuclear physics cross sections, a very useful website is the National Nuclear Data Center, hosted at BNL: NNDC entry page. Selecting the Evaluated Nuclear Data File option pops up a page where you can select co-59, n (for neutrons), and sig (for cross sections sigma). Then you get a huge long list of the evaluated data for about a gazillion possible neutron reactions with $^{59}$Co. One of them is the $^{59}$Co (n,$\gamma$) $^{60}$Co reaction. Pick those, and plot them to get the cross section versus neutron energy, or ask for the text listing. The thermal neutron cross section is on the order of 40 barns. One barn is $10^{-24}$ cm$^2$. So, what is your neutron flux? If I have 25 monolayers of Co (where I'm taking a monolayer as $10^{15}$ at/cm$^{2}$), we have a reaction probability of 1 in a million. For your particular problem, you now have a set of coupled equations, based on the neutron flux: the conversion of $^{59}$Co to $^{60}$Co by the neutron flux, and the decay of $^{60}$Co to $^{60}$Ni with the 5.27-year half-life. I'll note on a final reading of your question that it might really be about how long before all the $^{59}$Co has been made into $^{60}$Co, not how much $^{60}$Co is remaining. That question is just a simple calculation from the initial number of $^{59}$Co atoms, the capture cross section, and the neutron flux. EDIT - @CuriousOne raised a good point: what is happening to the $^{60}$Co under this flux of neutrons? Well, as it turns out, a variety of things.
Sticking with thermal neutrons (~25 meV, or whatever is closest in the cross section tables) we have: $^{60}$Co(n, tot) has $\sigma=3.4$ barns (the total cross section for any interaction, including plain scattering); $^{60}$Co(n,$\gamma$)$^{61}$Co has $\sigma=2.08$ barns (so most of the cross section); $^{60}$Co(n,p)$^{60}$Fe has $\sigma=9\times 10^{-12}$ barns (wow, that is small!). There are many other possible reactions, almost all of which occur at MeV neutron energies, not thermal. So, under a neutron flux, the $^{60}$Co will transform into $^{61}$Co, although the cross section to do that is about 5% of that of the $^{59}$Co to $^{60}$Co transformation. $^{61}$Co has a half-life of 1.65 hours, and decays to $^{61}$Ni, which is a stable isotope. Then the $^{61}$Ni (n,$\gamma$) $^{62}$Ni cross section is about 2.6 barns. I could keep going, but will let the reader peruse the NNDC pages for their own edification. Looks like you could have a lot of fun with the coupled equations to determine
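To make the "simple calculation" concrete, here is a back-of-the-envelope sketch. The flux is an assumed value (a typical reactor thermal flux), not something given in the question, and burn-up of the product is ignored:

```python
import math

sigma = 37e-24       # 59Co(n,gamma) thermal cross section, cm^2 (~37 barns)
phi = 1e14           # assumed thermal neutron flux, n / cm^2 / s

# Ignoring burn-up of the 60Co, the 59Co population decays exponentially,
# N(t) = N0 * exp(-phi * sigma * t), so it is never *all* converted.
# Ask instead for, say, 99.9% conversion:
rate = phi * sigma                     # per-atom conversion rate, 1/s
t = -math.log(0.001) / rate            # time to reach 0.1% of N0
print(t / 3.156e7)                     # in years: several decades
```

Even at this healthy flux, the answer comes out around sixty years, which is why activation sources are made by producing a useful *fraction* of $^{60}$Co, not full conversion.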
{ "domain": "physics.stackexchange", "id": 16788, "tags": "nuclear-engineering" }
Automatic seat assignment algorithm
Question: I am looking for articles relating to algorithms that deal with automatic selection of seating assignments. I need an algorithm (preferably more than one) that can automatically select a seat while enforcing certain predefined constraints. Originally I was planning on having the seats selected on the fly, meaning whenever a new person arrives, the system selects the optimal seat for him based on the seats already taken, but I guess that is not a must; if there is a more general algorithm whose approach also fits my problem, that is also great. Let's call the seated people "players", and picture our seating domain as a 2D matrix. Say we have several groups among our players, and you can seat players anywhere within the matrix as long as they are not next to other players from their own group. I am not claiming there is a perfect solution; I am looking for articles that deal with some approach to a solution. If you can direct me to an article, or even give me a name for this kind of problem, that is also good for me. Thanks, Olaf Answer: You could start your investigation from an online bipartite matching [PDF] point of view. Specifically, in this case, you know one side of the bipartition, namely the seats and their properties, and then you get requests on the fly (online). The performance you are looking to analyze is the number of matched players in the online algorithm versus the number of matched players if you had known all the players in advance (offline).
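For the online variant described in the question, a simple greedy baseline is easy to sketch. This is my own illustration, not something from the referenced matching paper: each arriving player takes the first free seat with no neighbour from their own group.

```python
def neighbours(r, c, rows, cols):
    """4-neighbourhood of a seat (extend to 8 if diagonals count as adjacent)."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield r + dr, c + dc

def assign_seat(grid, group):
    """Greedy online assignment: first free seat whose neighbours are all
    from other groups. Returns (row, col), or None if no seat qualifies."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None and all(
                grid[nr][nc] != group for nr, nc in neighbours(r, c, rows, cols)
            ):
                grid[r][c] = group
                return r, c
    return None

grid = [[None] * 4 for _ in range(3)]
for g in ["A", "A", "B", "A", "B"]:
    print(g, assign_seat(grid, g))
```

Greedy can paint itself into a corner (it may refuse a player a later ordering could have seated), which is exactly the online-vs-offline gap the competitive-analysis literature in the answer studies.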
{ "domain": "cs.stackexchange", "id": 2651, "tags": "algorithms, optimization" }
Lowering index of Riemann tensor
Question: I'm trying to understand the lowering of an index of the Riemann curvature tensor, but I'm not sure what I have to do. I know that $R_{ebcd} = g_{ea}{R^a}_{bcd}$. But let's say I have the coordinates ($t,r,\theta, \phi$) and that the metric is diagonal. If I want, for example, $R_{\theta r t\phi}$, do I have to compute the sum $\sum_{a=1}^4 g_{\theta a} {R^a}_{rt\phi}$ to get it? In this case, only $g_{\theta \theta}$ is nonzero, so, if I'm right, $R_{\theta r t\phi} = g_{\theta \theta}{R^\theta}_{rt\phi}$. I only need someone to confirm it for me. Answer: You do have to do that full sum, in general. In any case where there's only one non-zero $g_{\theta a}$ component, that sum will reduce to just that one term; in particular, if $g_{\theta \theta}$ is the only non-zero component of $g_{\theta a}$, then all you get is $R_{\theta r t\phi} = g_{\theta \theta}{R^\theta}_{rt\phi}$, as you said.
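This is easy to sanity-check numerically. A sketch with a random stand-in tensor and a diagonal metric, with NumPy indices 0..3 standing for $t, r, \theta, \phi$ (the metric entries are illustrative, not any particular spacetime):

```python
import numpy as np

rng = np.random.default_rng(0)
R_up = rng.standard_normal((4, 4, 4, 4))   # stand-in for R^a_{bcd}
g = np.diag([-1.0, 2.0, 3.0, 4.0])         # a diagonal metric (illustrative)

# R_{ebcd} = g_{ea} R^a_{bcd}: contract the metric with the first index
R_down = np.einsum('ea,abcd->ebcd', g, R_up)

# With a diagonal metric the sum over a collapses to a single term, e.g.
# R_{theta r t phi} = g_{theta theta} * R^theta_{r t phi}:
th, r, t, ph = 2, 1, 0, 3
print(np.isclose(R_down[th, r, t, ph], g[th, th] * R_up[th, r, t, ph]))  # True
```

For a non-diagonal metric the same `einsum` still gives the right answer; it is only the one-term shortcut that stops applying.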
{ "domain": "physics.stackexchange", "id": 59092, "tags": "general-relativity, metric-tensor, tensor-calculus, curvature" }
Question about scan.ranges information
Question: Just to make sure I understand laser scan data: ranges is an array of scan distances, so: ranges[0] is the scan distance at angle_min. ranges[1] is the scan distance at angle_min + angle_increment. ranges[2] is the scan distance at angle_min + (2 * angle_increment), and so on? Is this correct? Originally posted by distro on ROS Answers with karma: 167 on 2022-11-28 Post score: 0 Answer: Your understanding seems correct. http://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/LaserScan.html Originally posted by Martin Peris with karma: 5625 on 2022-11-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by distro on 2022-12-01: @Martin Peris The values in the ranges array are the distances with reference to the sensor frame? Comment by Martin Peris on 2022-12-18: Sorry, I missed your comment. Yes, the values in the ranges array are the distances with reference to the sensor frame (specified in the header of the message)
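That indexing can be written down explicitly. A minimal sketch, with the field values invented for illustration rather than read from a real message:

```python
import math

# Illustrative values standing in for a sensor_msgs/LaserScan message
angle_min = -math.pi / 2
angle_increment = math.pi / 180          # one-degree steps (assumed)
ranges = [1.0, 1.2, 0.9, 2.5]            # made-up scan distances, metres

# ranges[i] is the distance measured at angle_min + i * angle_increment,
# expressed in the sensor frame named in the message header
for i, r in enumerate(ranges):
    angle = angle_min + i * angle_increment
    print(f"beam {i}: angle {math.degrees(angle):.1f} deg, range {r} m")
```

Converting each (angle, range) pair to Cartesian coordinates with `x = r*cos(angle)`, `y = r*sin(angle)` then gives points in that same sensor frame.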
{ "domain": "robotics.stackexchange", "id": 38159, "tags": "ros-melodic, ros-kinetic" }
RC Transmitter Quadcopter with Arduino
Question: I have a WL v262 quadcopter and I want to control it using an Arduino instead of the joysticks on the transmitter. I opened up the transmitter and saw that each joystick has 2 potentiometers on the PCB and that the voltage for each pot goes from 0-3.3V. I used the Arduino's PWM and a low-pass filter, and connected the filtered output to the potentiometer's analog pin which is connected to the PCB (I cannot desolder and take out the pots from the PCB), but even with this $V_{out}$ going onto the analog pin, my transmitter's display gave ???? Now I am really confused and frustrated because I don't know how else to control this transmitter other than attaching stepper motors to the joysticks and manually controlling the transmitter, but this is really my last resort. Can someone help me with this? I have spent hours and hours on trial and error but I am getting nowhere. Here is the PCB of the transmitter: Answer: Did it actually output only "????"? Does removing the PWM restore normal operation? It is likely that the load from the PWM is not initially within the correct voltage range at the RC's boot-up, and it is detecting this and faulting. With the pots still in place the range will be limited, but you could try to match it. Initially (and even while ????'ing) try to set the PWM's output to be the same voltage as when it is not connected. This may solve your ???? problem. I also note from your picture the transmitter module at the top between the joysticks. I would suspect that you can better decipher the stream of information sent into this module by the controller and send your own information. This would be similar to what I did for IR helicopter controllers
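For reference, a hedged sketch of the arithmetic behind matching a pot voltage with a filtered PWM signal; the 5 V rail, 8-bit resolution, PWM frequency, and RC component values below are all assumptions for illustration, not details from the original post:

```python
# Picking an analogWrite()-style value so the filtered PWM average
# matches a target pot voltage, plus the RC low-pass cutoff check.
import math

V_RAIL = 5.0   # assumed Arduino PWM high level

def pwm_value(v_target):
    """8-bit PWM value whose filtered (average) output equals v_target."""
    duty = v_target / V_RAIL
    return round(duty * 255)

def cutoff_hz(r_ohms, c_farads):
    """RC low-pass cutoff; keep it well below the PWM frequency (~490 Hz)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Mid-stick on a 0-3.3 V pot:
assert pwm_value(1.65) == 84       # 1.65/5 * 255 ~= 84.15 -> 84
# e.g. 4.7 kOhm and 10 uF gives a cutoff of a few Hz:
assert cutoff_hz(4700, 10e-6) < 10
```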
{ "domain": "robotics.stackexchange", "id": 1489, "tags": "arduino, sensors, radio-control, wireless" }
What properties of atoms can be derived to high accuracy from a theory of QM?
Question: I'm trying to understand which predictions some of the theories of quantum mechanics can make. This is the picture I estimate in my layman's attempt: the fundamental properties of particles can be derived to high accuracy by quantum field theory the situation is much more dissatisfying for the fundamental properties of atoms (mass, electronegativity, etc.) for molecules, there may be some semi-accurate result for a specific one here and there (boiling point for water would be awesome) If that's correct in broad strokes, it would leave mostly the atomic layer for me to learn about what can be predicted there. I'd love to hear about what properties of whole atoms can be derived from first principles of a quantum theory. Answer: Interestingly, the properties of fundamental particles (their mass, charge and strength of interactions with the weak and strong forces) are not predicted by quantum mechanics. Instead they are measured by experiment, and they are then used as inputs to the Standard Model of particle physics. One of the unsolved questions in physics is why these masses and coupling constants take the values that they do, and whether these values are constrained in any way. Once the properties of the fundamental particles are known, the behaviour of any system composed of those particles can in principle be understood by setting up and solving the Schrödinger equation for that system to find its wave function. However, in practice an exact solution of the Schrödinger equation can only be found for the very simplest systems. The Schrödinger equation can be solved exactly for one electron orbiting one proton to give the possible orbital states of the electron in a hydrogen atom. For other atoms apart from hydrogen, approximate solutions have to be used, although these do give very accurate predictions for the behaviour of actual electrons in these atoms. 
The structure of the atomic nucleus is less well understood, and the state of the art here relies on various ad-hoc and semi-classical models such as the nuclear shell model. Modelling the behaviour of molecules from first principles is even less advanced, and molecular modelling relies on a combination of ad-hoc models, such as the Lennard-Jones potential, and computer simulations.
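A minimal sketch of one such first-principles atomic prediction mentioned above: the hydrogen bound-state energies that fall out of the Schrödinger equation, $E_n = -13.6\ \mathrm{eV}/n^2$ (fine structure and other small corrections ignored):

```python
# Hydrogen energy levels from the Rydberg energy.
RYDBERG_EV = 13.6057   # ionisation energy of hydrogen, in eV

def hydrogen_level(n):
    """Energy of the n-th bound state of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

# Lyman-alpha photon energy (n=2 -> n=1 transition) is about 10.2 eV:
lyman_alpha = hydrogen_level(2) - hydrogen_level(1)
assert abs(lyman_alpha - 10.2) < 0.01
```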
{ "domain": "physics.stackexchange", "id": 82600, "tags": "quantum-mechanics, quantum-field-theory" }
How is resolution defined?
Question: So I'm looking at a 1 meter square object in my digital camera. What size of pixel is considered fine enough that the object is considered to be resolved? Obviously a pixel size of one meter squared will 'notice' the object, but it will likely affect four pixels in varying degrees, so we can hardly call it an image. A pixel size of 10cm will at least let me see that I'm looking at a square, but is that considered fine enough? Answer: How is resolution defined? Differently in different contexts, and even differently by different people within the same context. Some definitions are objective, expressed as an equation or a computational procedure, others may be more subjective. We can always start with the grandparent of all resolution definitions, which is The Rayleigh criterion Two airy disks at various spacings: (top right) twice the distance to the first minimum, (middle) exactly the distance to the first minimum (the Rayleigh criterion), and (bottom left) half the distance. This image uses a nonlinear color scale (specifically, the fourth root) in order to better show the minima and maxima. (cropped and rotated from source) The middle image doesn't look as resolved as it should because the author chose to use the fourth root of brightness instead of a linear scale, but you can see a dark band between the two "stars" in the middle one even in this image. These patterns are what diffraction from a circular aperture causes, either your pupil or a telescope or your camera's lens. It doesn't take into account your square pixels though, so it doesn't help. What size of pixel is considered fine enough that the object is considered to be resolved? I'd say that some form of Beauty is in the eye of the beholder can be used here. Astronomers push things to the max. 
Loosely speaking they can measure the apparent size of a "resolved" object if it's 6 pixels wide for example as long as all the other stars are no more than 4 pixels wide, but they'll do a thorough computer analysis first, including optical blurring and simulations shifting the positions of the stars fractions of a pixel in every direction, to make sure that object is definitely bigger than all the unresolved stars in the field. above: from @PeterIrwin's answer to Has Hubble ever been used to try to image a near Earth asteroid? Resolved? Not? Here's another case in point. It's from this answer to How big would a QR code have to be on my roof for a satellite to be able to scan it given today's allowable resolution? in Space Exploration SE. It's the result of a simulation of a 6 meter pixel "QR code" (not exactly) on Earth's surface seen from a 9 cm aperture telescope in orbit 575 kilometers above Earth with the sensor's pixels having a resolution at Earth of 3 meters. I've used an Airy function to simulate the image, the same thing that generates those concentric rings in the first image here. Resolved? Not? Perhaps "barely" or "almost" or "mostly, except for some parts..."? We can recognize that it might be QR-code-ish, but our eyes can't quite read it. You can see that I was naughty and rotated the image sensor's axis by 45° on purpose just to make it more interesting. If you gave the right image to a computer program, it could probably recover the data that was encoded into the original "QR code" most of the time, but it might be much harder for a person to figure out. Punch line It's up to you really, based on your application. For amateur astrophotographers or lay people like me, I'd say that if other folks look at a tiny few-pixel image of the Moon and recognize it as probably the Moon without being told, then it's resolved. So maybe 10 pixels wide? Is it the Moon? If I told you it was, would it be believable?
If I asked you what it was without any hints, would Moon be your first guess? Resolved? Not? Now if we had a thousand Moons and you had to identify which one, then you might need to go to maybe 16 pixels wide. From barcodeart.com/artwork/portraits/pixel_presiden: Resolved? Not? Leon Harmon - 1973 In November 1973, a researcher at Bell Labs named Leon Harmon wrote an article for Scientific American titled, "The Recognition of Faces." It includes several "block portrait" illustrations, most notably this one of Abraham Lincoln. He created the portraits with some prehistoric computer equipped with a "flying-spot scanner." Harmon used these pixelated portraits to test human perception and automatic pattern recognition. The article actually doesn't have the word "pixel" in it, but certainly introduced a new way of seeing. Salvador Dali - 1976 A few years after Harmon's article, Salvador Dali completed this painting titled, "Gala contemplating the Mediterranean Sea, which at 30 meters becomes the portrait of Abraham Lincoln (Homage to Rothko)." Not only did Dali appropriate Harmon's portrait of Lincoln into the overall composition, but Dali also reincorporated a smaller grayscale version into a single tile.
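The Rayleigh criterion from the start of the answer turns into a quick calculation; the 550 nm wavelength below is an assumed mid-visible value, applied to the 9 cm aperture / 575 km satellite scenario in the answer:

```python
# Diffraction-limited angular resolution: theta = 1.22 * lambda / D.
def rayleigh_rad(wavelength_m, aperture_m):
    """Rayleigh criterion angle in radians for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

# 9 cm aperture at 550 nm, projected to the ground from 575 km altitude:
theta = rayleigh_rad(550e-9, 0.09)
ground_res_m = theta * 575e3

# A few metres -- comparable to the 3 m sensor pixels in the simulation,
# so the optics and the pixels are roughly matched.
assert 4.0 < ground_res_m < 4.6
```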
{ "domain": "astronomy.stackexchange", "id": 5976, "tags": "telescope, photography" }
Reuse .sdf using different mesh files
Question: Say I have an sdf file like below: <?xml version="1.0" ?> <sdf version="1.4"> <model name="bookertshelf"> <static>true</static> <link name="body"> <visual name="visual"> <geometry> <mesh><uri>model://bookertshelf/dummy.dae</uri></mesh> </geometry> </visual> </link> </model> </sdf> I have bookertshelf-%X%.dae files for all of which I like to use the same SDF config above. Is there a way to reuse this SDF file without doubling the file? In my world file I tried the following that doesn't seem to be doing what I want (without error on console). <include> <name>bookertshelf1</name> <pose>0 -6.0 0 1.5708 0 -1.5708</pose> <static>true</static> <uri>model://bookshelf</uri> <mesh><uri>model://bookertshelf/bookertshelf1.dae</uri></mesh> </include> I'm on Ubuntu Trusty, Gazebo 2.2.5. Upper version of Gazebo can't be an option since I'm using ROS Indigo where 2.2 seems like a must. Originally posted by IsaacS on Gazebo Answers with karma: 118 on 2015-06-20 Post score: 0 Answer: You could use erb to automatically generate your SDF as in this example. You would need a file, named for example meshes.world.erb, similar to this: <?xml version="1.0" ?> <% mesh_list = ["mesh0.dae", "mesh1.dae", "mesh2.dae"] %> <sdf version="1.4"> <world name="default"> <% # Loop through list i = 0 mesh_list.each do |m| name = 'model_' + i.to_s i = i + 1 %> <%= "<model name='#{name}'>" %> <static>true</static> <link name="body"> <visual name="visual"> <geometry> <mesh><uri><%= "model://bookertshelf/" + m %></uri></mesh> </geometry> </visual> </link> </model> <% end %> </world> </sdf> Then use erb to generate the world file (which I called meshes.world), from the command line: erb meshes.world.erb > meshes.world Then open the world on gazebo: gazebo meshes.world Originally posted by chapulina with karma: 7504 on 2015-06-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by IsaacS on 2015-07-03: Thanks. 
Originally I was hoping for a way to "reuse" a single SDF, not replicating SDF files, but after having seen discussions at another places (e.g. this) I now understand using erb is the best available option for SDF.
{ "domain": "robotics.stackexchange", "id": 3786, "tags": "sdformat" }
Capturing a string in a specific format
Question: I have a requirement to capture a string in a specific format of * [Numeric Digits] *. This is how I have done it right now, but I think it would be faster with Regular Expressions. I don't have a lot of experience with RegEx, so please help me optimize this code using RegEx. if (string.IsNullOrEmpty(BarcodeScan) && e.KeyChar.ToString() == "*") BarcodeScan = e.KeyChar.ToString(); else { if (BarcodeScan.StartsWith("*")) { if (int.TryParse(e.KeyChar.ToString(), out i)) BarcodeScan += i.ToString(); else if (e.KeyChar.ToString() == "*") { BarcodeScan += "*"; ArticleID = BarcodeScan.Substring(1, BarcodeScan.Length - 2); } else BarcodeScan = string.Empty; } } The above code is written in the KeyPress event, so I have to capture the string as the user is doing the input. Basically the first * means that the user has started entering an Article ID and I keep on capturing numeric digits till he enters another *. This means that *2323 is valid but incomplete *34h is invalid *343f33 is invalid *3434hsds3 * is invalid *3412 * is valid and complete How do I check for *2323 in regex? I tried ^\*\d+ but it allows *22f as well. Answer: Could you possibly provide more samples of your data? In any case, try this Regex regex = new Regex(@"^[*]\d+[*]$"); If you actually expect literal brackets (e.g. [ ]), use the following: Regex regex = new Regex(@"^[*][\[]\d+[\]][*]$");
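As a quick sanity check of the accepted pattern against the examples from the question, the same anchored expression is exercised below; Python's re module is used here purely for illustration, and the pattern itself carries over to .NET unchanged:

```python
# The anchored "star, digits, star" pattern from the answer.
import re

pattern = re.compile(r'^\*\d+\*$')

assert pattern.match('*3412*')        # valid and complete
assert not pattern.match('*2323')     # incomplete: no closing *
assert not pattern.match('*34h*')     # letters are rejected
assert not pattern.match('*343f33*')  # letters anywhere are rejected
```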
{ "domain": "codereview.stackexchange", "id": 1009, "tags": "c#, .net, asp.net, regex, winforms" }
OS X Install – Problems With PIL
Question: I'm trying to install Hydro on a fresh OS X Mavericks system, but I get hung up during the following command: rosdep install --from-paths src --ignore-src --rosdistro hydro -y The issue I'm getting is with PIL: Could not find any downloads that satisfy the requirement PIL. When I try to install PIL manually (using sudo pip install -U PIL --allow-external PIL --allow-unverified PIL), I get stuck at _imagingft.c:73:10: fatal error: 'freetype/fterrors.h' file not found, even though freetype is installed and up to date. Has anyone else experienced the same issue? Originally posted by nckswt on ROS Answers with karma: 539 on 2014-01-07 Post score: 1 Original comments Comment by nckswt on 2014-01-08: Worked perfectly! Feel free to copy/paste my answer so I can give you sweet, sweet karma =) Comment by lanyusea on 2014-01-08: @nckswt, done. as a newbie I'm not sure whether it should be a comment or answer. Answer: can that help? ln -s /usr/local/Cellar/freetype/2.5.1/include/freetype2 /usr/local/include/freetype reference: stackoverflow.com/questions/20325473/error-installing-python-image-library-using-pip-on-mac-os-x-10-9 Originally posted by lanyusea with karma: 279 on 2014-01-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by demmeln on 2014-01-20: Note that if you later decide to install ogre with brew install ogre, you might need to remove that symlink again for it to compile (at least when I did this a while back it was neccessary).
{ "domain": "robotics.stackexchange", "id": 16608, "tags": "ros, installation, rosdep, ros-hydro, osx" }
What is the relationship between poles and system stability?
Question: I see two notions that describe the relationship between poles and system stability. But they are not the same from my understanding The system is BIBO stable if and only if all the poles are in the left half of the complex plane A LTI system with a rational system function H(z) is stable if and only if all of the poles of H(z) lie inside the unit circle. Why these two notions are different? Is that in different conditions? Answer: The two are both true, but they are for different cases. Case 1 is true for continuous-time systems, and the transform is the Laplace transform and the variable is the derivative operator, $s$. Case 2 is true for discrete-time systems, and the transform is the $z$-transform and the variable is the delay operator, $z$.
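Both criteria are easy to check numerically; the pole locations below are placeholder values chosen to illustrate each rule:

```python
# Stability tests for the two cases in the answer.
def ct_stable(poles):
    """Continuous time (Laplace): all poles in the open left half-plane."""
    return all(p.real < 0 for p in poles)

def dt_stable(poles):
    """Discrete time (z-transform): all poles strictly inside the unit circle."""
    return all(abs(p) < 1 for p in poles)

s_poles = [-1 + 2j, -1 - 2j, -0.5]        # a stable continuous-time system
z_poles = [0.9, 0.5 + 0.4j, 0.5 - 0.4j]   # a stable discrete-time system

assert ct_stable(s_poles) and dt_stable(z_poles)
assert not dt_stable(s_poles)   # |-1+2j| > 1: the criteria are not interchangeable
```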
{ "domain": "dsp.stackexchange", "id": 5227, "tags": "filters, linear-systems, transfer-function, poles-zeros, stability" }
Question about branch hazards on 4-stage pipeline
Question: Let's say that conditional branches are resolved at the 2nd stage of a 4-stage pipeline. Why are there different penalties for a taken branch versus an untaken branch? Shouldn't the penalty be the same for both? I always assumed that the taken and not-taken branch penalty is 2 cycles, but this is incorrect according to my friend. Could someone clarify this? Answer: Broadly there are two kinds of branch instructions: unconditional and conditional. A conditional branch can either be taken or not taken. Unconditional branches are usually known as jump or goto instructions. Jump instructions in a 4-stage pipeline Assuming that jumps can be resolved at the 2nd stage (for a normal IF, ID, EX, WB pipeline) and that the 1st stage can always be done independently, the penalty is 1 stall cycle. The main reason is simply that the pipeline fetches the next instruction following the jump (IF must update the PC, and the next sequential address is the only address known at this point). At the 2nd stage, when the jump resolves, the pipeline realizes that the fetch it issued was to a wrong address. It will then re-issue the fetch of the correct instruction in the next cycle (i + 1), causing a one-cycle stall. Taken branch instructions in a 4-stage pipeline Assuming that branches are resolved at the 3rd stage, the pipeline then realizes that it must re-issue the fetch for the next instruction (assuming the next instruction fetched is not already the branch target), thus creating a two-cycle stall. Not-taken branch instructions in a 4-stage pipeline For not-taken branches, the instruction fetched after the branch is actually correct, because the branch falls through (meaning the instruction after it will execute).
Since branches are resolved at the 3rd stage, the pipeline then realizes that it doesn't have to re-issue the fetch and can therefore resume executing the instruction that was fetched after the branch. However, the branch instruction cannot leave the 1st stage (IF) since it has to resolve the branch, thus incurring a penalty of 1 cycle. Here is a picture showing every scenario:
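The penalty accounting above can be folded into a rough effective-CPI estimate; this sketch assumes branches resolve in the 3rd stage (taken penalty 2 cycles, not-taken penalty 1), and the branch frequencies are made-up illustration values:

```python
# Effective CPI for a base-CPI-1 pipeline with asymmetric branch penalties.
def effective_cpi(branch_frac, taken_frac,
                  taken_penalty=2, not_taken_penalty=1):
    """Average stall cycles per instruction added on top of the ideal CPI of 1."""
    stalls = branch_frac * (taken_frac * taken_penalty
                            + (1 - taken_frac) * not_taken_penalty)
    return 1.0 + stalls

# Example: 20% of instructions are branches, 60% of those are taken.
cpi = effective_cpi(0.20, 0.60)
assert abs(cpi - 1.32) < 1e-9   # 1 + 0.2 * (0.6*2 + 0.4*1)
```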
{ "domain": "cs.stackexchange", "id": 7166, "tags": "cpu-pipelines" }
Convention for numbering of carbons
Question: I'm trying to understand how carbons are numbered in organic molecules (i.e. as in the following image). (source: Principles of Biochemistry by Nelson and Cox, p. 532) I've read and heard different methods, among which: the end of the carbon chain with the highest oxidation state is assigned the lowest number. the end of the carbon chain with the highest total mass is assigned the lowest number. the number of bonds is used as a kind of multiplier (i.e. O=C-OH corresponds to a total mass of O*2+C+O+H). (source: What is the convention of numbering carbon atoms in organic molecules?) the end of the carbon chain with the substituent of the highest atomic number is assigned the lowest number (source: Principles of Biochemistry by Nelson and Cox, p. 73). the carbonyl, acetal or hemiacetal carbon is given the lowest possible number (source: Carbon numbering in carbohydrates) These methods could come to contradictory conclusions, though. One example I could think of: Phosphorus would be the highest atomic number constituent, while the remaining rules (carbonyl carbon, total mass and oxidation state) would assign the carbonyl carbon the lowest number. I assume some are just simplifications. What is the correct method, or is there no single general one? Answer: Your initial example showing derivatives of glucose and fructose might be misleading since the nomenclature of natural product parent compounds does not necessarily follow the usual nomenclature rules of general organic chemistry. In particular, P-102.2.2 Numbering parent structures The carbon atoms of a monosaccharide are numbered consecutively in such a way that: (1) a (potential) aldehyde group receives the locant 1 (even if a more senior characteristic group is present); (…) (This rule is probably easier to understand if you draw the monosaccharides in their acyclic form.) Generally, however, nomenclature of organic chemistry has different rules for the numbering.
The most important simplified criteria for the numbering are: lower locants for the group that is expressed as suffix lower locants for multiple bonds lower locants for prefixes lower locants for substituents cited first as a prefix in the name The corresponding actual wording in the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) reads as follows: P-14.4 NUMBERING When several structural features appear in cyclic and acyclic compounds, low locants are assigned to them in the following decreasing order of seniority: (…) (c) principal characteristic groups and free valences (suffixes); (…) (e) saturation/unsaturation: (i) low locants are given to hydro/dehydro prefixes (…) and ‘ene’ and ‘yne’ endings; (ii) low locants are given first to multiple bonds as a set and then to double bonds (…); (f) detachable alphabetized prefixes, all considered together in a series of increasing numerical order; (g) lowest locants for the substituent cited first as a prefix in the name; (…) Therefore, the numbering in your own example corresponds to the name 3-phosphinopropanoic acid since the acid group is the principal characteristic group of this compound.
{ "domain": "chemistry.stackexchange", "id": 10409, "tags": "organic-chemistry, nomenclature" }
In Monte Carlo learning, what do you do when an end state is reached, after having recorded the previously visited states and taken actions?
Question: When you train a model using Monte Carlo-based learning, the state and action taken at each step are recorded, and then at some point an end state is reached and the agent receives some reward - what do you do at that point? Let's say there were 100 steps taken to reach this final reward state, would you update the full rollout of those 100 state/action/rewards and then begin the next episode, or do you then 'bubble up' that final reward to the previous states and update on those as well? E.g. Process an update for the full 100 experiences. Can either stop here, or... Bubble up the final reward to the 99th step and process an update for the 99 state/action/reward. Bubble up the final reward to the 98th step and process an update for the 98 state/action/reward. and so on right the way to the first step... Or, do you just process an update for the full 100-step roll-out and that's it? Or perhaps these are two different approaches? Is there a situation where you'd use one rather than the other? Answer: I am assuming you are asking about Monte Carlo simulation for value estimates, perhaps as part of a Monte Carlo control learning agent. The basic approach of all value-based methods is to estimate an expected return, often the action value $Q(s,a)$ which is a sum of expected future reward from taking action $a$ in state $s$. Monte Carlo methods take a direct and simple approach to this, which is to run the environment to the end of an episode and measure the return. This return is a sample out of all possible returns, so it can just be averaged with other observed returns to obtain an estimate. A minor complication is that the return depends on the current policy, and in control scenarios that will change, so the average needs to be recency-weighted for control e.g.
using a fixed learning rate $\alpha$ in an update like $Q(s,a) \leftarrow Q(s,a) + \alpha(G - Q(s,a))$ Given this, you can run pretty much any approach that calculates the returns from observed state/action pairs. You will find that the "bubble up" approach is used commonly - the process usually termed backing up - working backwards from the end of the episode. If you have an episode from $t=0$ to $t=T$ and records of states, actions, rewards $s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3 . . . s_{T-1}, a_{T-1}, r_T, s_T$ (note indexing, reward follows state/action, there is no $r_0$ and no $a_T$), then the following algorithm could be used to calculate individual returns $g_t$: $g \leftarrow 0$ for $t = T-1$ down to $0$: $\qquad g \leftarrow r_{t+1} + \gamma g$ $\qquad Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha(g - Q(s_t,a_t))$ This working backwards is an efficient way to process rewards and assign them with discounting to action values for all state, action pairs observed in the episode. Or perhaps these are two different approaches? It would be valid to calculate only the return for the first state/action, and randomly select state/actions to start from (called exploring starts). Or in fact take any arbitrary set of estimates generated this way. You don't have to use all return estimates, but you do need to have an algorithm that is guaranteed to update values of all state/action pairs in the long term. Is there a situation where you'd use one rather than the other? Most usually you will see the backed up return estimates to all observed state/action pairs, as this is more sample efficient, and Monte Carlo is already a high variance method that requires lots of samples to get good estimates (especially for early state/action pairs at the start of long episodes). However, if you work with function approximation such as neural networks, you need to avoid feeding in correlated data to learn from.
A sequence of state/action pairs from a single episode are going to be correlated. There are a few ways to avoid that, but one simple approach could be to take just one sample from each rollout. You might do this if rollouts could be run very fast, possibly simulated on computer. But other alternatives may be better - for instance put all the state, action, return values into a data set, shuffle it after N episodes and learn from everything.
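The backward pass described in the answer can be sketched as a tabular every-visit update with a fixed learning rate; the episode data below are made up:

```python
# Every-visit Monte Carlo backup, working backwards from the episode end.
from collections import defaultdict

def mc_update(Q, episode, gamma=0.9, alpha=0.1):
    """episode: list of (state, action, reward), with reward following (s, a)."""
    g = 0.0
    for s, a, r in reversed(episode):
        g = r + gamma * g                      # discounted return from this step
        Q[(s, a)] += alpha * (g - Q[(s, a)])   # recency-weighted average
    return Q

Q = defaultdict(float)
episode = [('s0', 'a0', 0.0), ('s1', 'a1', 0.0), ('s2', 'a2', 1.0)]
mc_update(Q, episode)

# Returns: g2 = 1, g1 = 0.9, g0 = 0.81; first update from Q = 0 gives alpha * g.
assert abs(Q[('s2', 'a2')] - 0.1) < 1e-12
assert abs(Q[('s0', 'a0')] - 0.081) < 1e-12
```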
{ "domain": "ai.stackexchange", "id": 1375, "tags": "reinforcement-learning, monte-carlo-methods" }
Extracting elements from multiple sorted lists efficiently
Question: I was asked this interview question recently. I was able to come up with this which runs in \$O(k \log n)\$. Given k <= n sorted arrays each of size n, there exists a data structure requiring \$O(kn)\$ preprocessing time and memory that answers iterated search queries in \$O(k + \log n)\$ time. I have k sorted Lists, each of size \$n\$. I currently have hard-coded 5 sorted Lists each of size 3, but in general that can be a very high number. I would like to search for a single element in each of the \$k\$ Lists. Obviously, I can binary search each array individually, which will result in \$O(k \log n)\$ where \$k\$ is number of sorted arrays. Can I do it in \$O(k + \log n)\$ where \$k\$ is the number of sorted arrays? I think there might be some better way of doing it as we're doing the same searches \$k\$ times as of now. private List<List<Integer>> dataInput; public SearchItem(final List<List<Integer>> inputs) { dataInput = new ArrayList<List<Integer>>(); for (List<Integer> input : inputs) { dataInput.add(new ArrayList<Integer>(input)); } } public List<Integer> getItem(final Integer x) { List<Integer> outputs = new ArrayList<Integer>(); for (List<Integer> data : dataInput) { int i = Collections.binarySearch(data, x); // binary searching the item if (i < 0) i = -(i + 1); outputs.add(i == data.size() ? 
null : data.get(i)); } return outputs; } public static void main(String[] args) { List<List<Integer>> lists = new ArrayList<List<Integer>>(); List<Integer> list1 = new ArrayList<Integer>(Arrays.asList(3, 4, 6)); List<Integer> list2 = new ArrayList<Integer>(Arrays.asList(1, 2, 3)); List<Integer> list3 = new ArrayList<Integer>(Arrays.asList(2, 3, 6)); List<Integer> list4 = new ArrayList<Integer>(Arrays.asList(1, 2, 3)); List<Integer> list5 = new ArrayList<Integer>(Arrays.asList(4, 8, 13)); lists.add(list1); lists.add(list2); lists.add(list3); lists.add(list4); lists.add(list5); SearchItem search = new SearchItem(lists); System.out.println(lists); List<Integer> dataOutput = search.getItem(5); System.out.println(dataOutput); } The new approach should produce the same output as my code above, while running in \$O(k + \log n)\$. Is this possible to achieve? Can anyone provide an example of how this would work based on my example? Answer: This is sooo off-topic for CR... but, since it is related to a previous answer, and since I want to see a blown mind ....: consider the following code: class Value<T> { private final T value; private final int[] arraypointers; private int arraycursor = 0; Value(T value, int maxindex) { this.value = value; this.arraypointers = new int[maxindex]; } public void addIndex(int pointer) { arraypointers[arraycursor++] = pointer; } ... some other stuff. }
This would be stored as (using zero-based list indices): Value<Integer> v = new Value<>(4, k); v.addIndex(0); v.addIndex(4); Now, start with a LinkedList: LinkedList<Value<Integer>> values = new LinkedList<>(); Then, iterate through each of your lists, and merge the values into the linked list: for (int datapointer = 0; datapointer < datalists.size(); datapointer++) { ListIterator<Value<Integer>> valit = values.listIterator(); List<Integer> data = datalists.get(datapointer); for (Integer addval : data) { boolean found = false; while (valit.hasNext()) { if (valit.next().value.compareTo(addval) >= 0) { found = true; Value<Integer> val = valit.previous(); if (val.value.equals(addval)) { // update existing value val.addIndex(datapointer); // leave the iterator pointing backwards to // allow for dup values in the data. } else { // add a new value Value<Integer> newval = new Value<>(addval, k); newval.addIndex(datapointer); valit.add(newval); // leave the iterator pointing backwards. // but need to move it back one. valit.previous(); } } } if (!found) { Value<Integer> newval = new Value<>(addval, k); newval.addIndex(datapointer); valit.add(newval); valit.previous(); } } } then, convert the LinkedList into an ArrayList List<Value<Integer>> sortedvalues = new ArrayList<Value<Integer>>(values); Right, here we now have a sorted list of Value<Integer> objects. Each Value has pointers back to the list(s) it came from. The space complexity for this is \$O(kn)\$ and we got there by doing a complexity \$O(kn)\$ nested loop (the inside while loop does not count because it is on an iterator that is outside the for loop, and it is part of the same complexity as the inner for loop)... OK, so that is the \$O(kn)\$ preprocessing. The lookup is a case of doing a binary search on the ArrayList (\$O(\log n)\$) and then iterating over the index pointers (\$O(k)\$). Thus, the search is \$O(k + \log n)\$. Voila!
Working solution Right, putting all the pieces together in a working solution: Value.java import java.util.Arrays; class Value<T extends Comparable<T>> implements Comparable<Value<T>> { private final T value; private final T[] indices; public Value(T value, T[] data) { super(); this.value = value; this.indices = data; } public void setIndex(int index, T val) { if (indices[index] == null) { indices[index] = val; } } public T[] getIndices() { return Arrays.copyOf(indices, indices.length); } public int compareToValue(T o) { return value.compareTo(o); } @Override public int compareTo(Value<T> o) { return value.compareTo(o.value); } @Override public int hashCode() { return value.hashCode(); } @Override public boolean equals(Object obj) { return obj instanceof Value && value.equals(((Value<?>)obj).value); } @Override public String toString() { return String.format("%s -> %s", value, Arrays.toString(indices)); } } MultiListIndex.java package listsearch; import java.lang.reflect.Array; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.ListIterator; public class MultiListIndex<T extends Comparable<T>> { private final List<Value<T>> index; private final Class<T> clazz; private final int width; public MultiListIndex(Class<T> clazz, Collection<List<T>> data) { this.clazz = clazz; this.width = data.size(); this.index = preprocess(new ArrayList<>(data)); } private final List<Value<T>> preprocess(List<List<T>> data) { LinkedList<Value<T>> processed = new LinkedList<>(); Value<T> target = null; for (int listid = 0; listid < data.size(); listid++) { ListIterator<Value<T>> valit = processed.listIterator(); Iterator<T> datait = data.get(listid).iterator(); while (datait.hasNext()) { final T toadd = datait.next(); boolean found = false; while (valit.hasNext()) { final int compare = (target = valit.next()).compareToValue(toadd); 
if (compare >= 0) { // we have a match, or gone past. valit.previous(); found = true; if (compare == 0) { target.setIndex(listid, toadd); } else { Value<T> newtarget = new Value<>(toadd, Arrays.copyOf(target.getIndices(), width)); valit.add(newtarget); newtarget.setIndex(listid, toadd); if (newtarget != valit.previous()) { throw new IllegalStateException("Bad math!"); } } break; } target.setIndex(listid, toadd); } if (!found) { Value<T> newtarget = new Value<>(toadd, buildArray(clazz, width)); valit.add(newtarget); newtarget.setIndex(listid, toadd); } } } return new ArrayList<>(processed); } @SuppressWarnings("unchecked") private static final <T> T[] buildArray(Class<T> clazz, int size) { return (T[])Array.newInstance(clazz, size); } public List<T> searchValues(T value) { Value<T> key = new Value<>(value, null); int pos = Collections.binarySearch(index, key); if (pos < 0) { pos = -pos - 1; } if (pos >= index.size()) { return Arrays.asList(buildArray(clazz, width)); } return Arrays.asList(Arrays.copyOf(index.get(pos).getIndices(), width)); } } MultiListMain.java package listsearch; import java.util.ArrayList; import java.util.Arrays; import java.util.List; public class MultiListMain { public static void main(String[] args) { List<List<Integer>> lists = new ArrayList<List<Integer>>(); List<Integer> list1 = new ArrayList<Integer>(Arrays.asList(3, 4, 6)); List<Integer> list2 = new ArrayList<Integer>(Arrays.asList(1, 2, 3)); List<Integer> list3 = new ArrayList<Integer>(Arrays.asList(2, 3, 6)); List<Integer> list4 = new ArrayList<Integer>(Arrays.asList(1, 2, 3)); List<Integer> list5 = new ArrayList<Integer>(Arrays.asList(4, 8, 13)); lists.add(list1); lists.add(list2); lists.add(list3); lists.add(list4); lists.add(list5); MultiListIndex<Integer> search = new MultiListIndex<Integer>( Integer.class, lists); // System.out.println(dataInput); System.out.println(search.searchValues(0)); System.out.println(search.searchValues(1)); System.out.println(search.searchValues(2)); 
System.out.println(search.searchValues(5)); } } Output [3, 1, 2, 1, 4] [3, 1, 2, 1, 4] [3, 2, 2, 2, 4] [6, null, 6, null, 8]
{ "domain": "codereview.stackexchange", "id": 6353, "tags": "java, algorithm, interview-questions, search" }
Program to check an item's price against available credit
Question: This is a working program to check an item's price against available credit. I'm trying to simplify this program to the bare minimum it would require to run, so I'll be better able to understand each part of the 'if' and 'else' processes. Is there a simpler way to write a program that accomplishes the task below? I'm new to C#, just trying to figure this stuff out. Write a program named CheckCredit that prompts users to enter a purchase price for an item. If the value entered is greater than a credit limit of $8,000, display you have exceeded the credit limit; otherwise, display Approved. using static System.Console; namespace CheckCredit { class Program { static void Main(string[] args) { const double CreditCheck = 8000; string userInput; double price; WriteLine("This is a program designed to check an item's price against your amount of available credit."); WriteLine("Your credit limit is $8,000.00.\n"); do { Write("Please type the item's price:"); userInput = ReadLine(); if (!double.TryParse(userInput, out _)) { WriteLine("Invalid input, please enter a whole or decimal number."); userInput = null; } } while (!double.TryParse(userInput, out price)); if (price > CreditCheck) { WriteLine(" You have exceeded the credit limit", price); } else if (price == CreditCheck) { WriteLine( "Approved.(*)\n\n\n" + "(*) It is exactly your credit limit."); } else { WriteLine("Approved."); } ReadKey(); } } } Answer: Since you are a beginner, I will try to go easy on you. That said, this looks very beginner-ish. As noted in the comments above, you inadvertently put a line break which causes a compiler error. There are many here who would CLOSE the question on that alone. Beginners frequently use double for binary floating point values. However, any time you are working with money or currency, you should use Decimal. Decimal is floating point as well but it's Base 10 rather than Base 2. The static import of System.Console may be allowed but in general is frowned upon. 
My eyes would rather see Console.WriteLine instead of WriteLine due to over a dozen years of writing .NET apps. Naming is important. CreditCheck is a bad name. It should be creditLimit or currentBalance if not simply balance. But "Check" is an action verb and it would be a method that would perform that checking action, not a variable. Note too that for local variable naming I am using camelCase. Organization is less than desired. Beginners will frequently put everything in the Main method. How about you create a separate method and pass the credit limit as an argument to it? Or even a method that accepts the credit limit and item price? You get to decide whether such a method should return a bool to denote you have sufficient funds, or perhaps return the remaining funds after the price. If it returned remaining funds, you now have extra information, such as a negative value denoting insufficient funds (that is, the price check failed). I don't think you need a special check for price == CreditCheck. Nice of you to throw in something extra, but why not tell them "Approved. Remaining funds = ?". You use decent indentation and it's nice that braces are wrapped around even one line of code. Why fix the credit to 8000? Again, pass it as an argument. If you do that, you should see why it's wrong that you hardcode it with: WriteLine("Your credit limit is $8,000.00.\n"); I get that you want a blank line to appear after that. For some people "\n" is perfectly acceptable. Others would recommend using Environment.NewLine. Putting many suggestions together, and using String Interpolation, this would become: Console.WriteLine($"Your credit limit is ${creditLimit.ToString("N2")}.{Environment.NewLine}"); Or you could issue a simple Console.WriteLine(); to avoid the whole "\n" versus NewLine debate.
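The double-versus-decimal point above is language-agnostic; a quick sketch (in Python here, whose float/Decimal split mirrors C#'s double/decimal) shows why base-2 floating point is the wrong tool for currency:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so repeated
# currency arithmetic accumulates representation error.
binary_total = sum(0.10 for _ in range(3))
print(binary_total == 0.30)  # False: the sum is 0.30000000000000004

# A base-10 decimal type represents 0.10 exactly, so the same
# arithmetic is exact -- this is what C#'s decimal gives you.
decimal_total = sum(Decimal("0.10") for _ in range(3))
print(decimal_total == Decimal("0.30"))  # True
```

In the original program, the same idea would mean declaring `const decimal creditLimit = 8000m;` and parsing the input with `decimal.TryParse`.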
{ "domain": "codereview.stackexchange", "id": 34999, "tags": "c#, beginner, console" }
Can hydrogen peroxide reduce ferric ion to ferrous ion?
Question: I have studied that hydrogen peroxide always oxidizes ferrous ion to ferric ion (source of study: NCERT Chemistry Part II, Textbook for Class XI), but a question came in IIT JEE 2015 which states: The answer given is A and B but I doubt that the answer is correct as I have studied the following reactions in my book: $$\ce{2Fe^{2+}(aq) + 2H^{+}(aq) + H_2O_2(aq) -> 2Fe^{3+}(aq) + 2H_2O(l)}$$ $$\ce{2Fe^{2+} + H_2O_2(aq) -> 2Fe^{3+} + 2OH^{-}}$$ Where am I possibly going wrong? Answer: You are right. This question is "practically" incorrect, although on paper it might appear correct to someone unaware of real chemistry (the question setter). Iron(II) will readily precipitate in alkaline medium and then immediately oxidize to iron(III) in the presence of hydrogen peroxide. It does not proceed backwards, i.e., hydrogen peroxide will not reduce Fe(III). Rather, it will start catalytic decomposition of the peroxide. In fact, the iron(II) + hydrogen peroxide mixture is a very interesting system and is used in environmental cleaning. It involves quite complex free-radical chemistry. It is called Fenton's reagent (always used in mildly acidic conditions).
{ "domain": "chemistry.stackexchange", "id": 15976, "tags": "redox, hydrogen, iron" }
Can the Auger effect cause a second electron to be just excited instead of ionised and emitted from the atom?
Question: From what I understand, the Auger effect is usually defined as when an electron deexcites but instead of releasing its change in binding energy as a photon, it transfers it as kinetic energy to another electron which, if greater than its binding energy, will cause this second electron to be emitted from the atom. My question is why is this process defined with the second electron being emitted from the atom instead of just excited to a higher energy state sometimes. My guess is maybe it has something to do with entropy and the fact that there are so many more possible states for the second electron final state if it is emitted that maybe only in this case will this process actually occur (instead of just emitting a photon as usual). Answer: My question is why is this process defined with the second electron being emitted from the atom instead of just excited to a higher energy state sometimes. The atom is a unit tied up quantum mechanically. To observe transformations of an atom, there must be an interaction that can be measured. An emitted photon can be measured. An emitted electron can also be measured. If the whole process happens within an atom, there is no measurable/observable effect. An electron just going to a higher energy level can emit a photon when de-exciting, but there is no measurable way to determine that it comes from a transfer from a different electron, the way there is for an ejected electron to be identified with a different energy level: Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected.
{ "domain": "physics.stackexchange", "id": 76646, "tags": "quantum-mechanics, quantum-field-theory, particle-physics, atomic-physics, quantum-electrodynamics" }
Isn't the work done on a spring-mass system zero, so that there is no change in potential energy?
Question: So far what I had understood about potential energy is that it is defined for a system of particles (at least two particles), with the particles exerting forces on each other of equal magnitude and opposite direction, as $$-W_\mathrm{conservative} = ΔU.$$ For instance, in the case of gravitational potential energy, we only consider the work done on the small body, as there is negligible work done on the earth. But in cases such as the following example, the concept is not completely clear. Consider a massless spring, the left side of which is attached to the wall, and there is a block attached to the right side of the spring. If we consider the block $\cup$ spring as our system, then the net work done by internal forces would be zero. So what is increasing the potential energy of the system as we stretch the spring? As per one of the answers to this question, it seems we need to consider the earth to be a part of the system and the spring just stores potential energy similar to the gravitational field while not really being part of the system. Is this reasoning correct? If not, please help me clarify the concept. Answer: I was able to clear up my misconception, so I am writing this in case it helps anybody else. First of all, I learned that only "internal" conservative forces change potential energy, as: $$-W_{internal} = ΔU \tag{1}$$ Note: I am considering non-conservative internal forces to be zero. Note that when we write the work-energy theorem, the $W_{net}$ includes both $W_{internal}$ and $W_{external}$. $$W_{internal} + W_{external} = ΔKE$$ By using $(1)$, $$W_{external} = ΔU + ΔKE \tag{2}$$ If $W_{external}$ is zero, then we say the total energy of the system is conserved. Now let's consider the spring as our system, where $W_{internal}$ means the work done by the internal particles of the spring on each other due to elastic forces; the block applies the force $kx$ on the spring, which is $W_{external}$. 
As the spring is massless, $ΔKE = 0$, so we can write the work-energy theorem for the spring as $$W_{internal} + W_{external} = 0$$ Using (1), $$W_{external} = ΔU_{spring}$$ By calculation, $W_{external}$ turns out to be $kx^2/2$. $$ΔU_{spring} = kx^2/2$$ Now if I were to consider block ∪ spring as my system, then: $$W_{\text{block on spring}} + W_{\text{spring on block}} = 0$$ Considering no external force on the block: $$W_{internal} = ΔKE$$ By using (1), $$ΔKE + ΔU = 0$$ So my statement "If we consider the block ∪ spring as our system, then the net work done by internal forces would be zero" was wrong, as there are internal forces of the spring which are responsible for the potential energy! I would also like to point out that in a field the source remains fixed, so the potential energy of the "source + particle" system is commonly attributed to the particle alone, since the source has $ΔKE = 0$. Reference: H. C. Verma, Concepts of Physics, Volume 1
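The value $W_{external} = kx^2/2$ used above is the standard integral of the applied force over the stretch, since the external agent must match the spring force $kx'$ at each intermediate extension $x'$:

$$W_{external} = \int_0^x kx'\,\mathrm{d}x' = \frac{1}{2}kx^2$$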
{ "domain": "physics.stackexchange", "id": 77461, "tags": "newtonian-mechanics, energy-conservation, work, potential-energy, conventions" }
Fatty Acid Synthesis
Question: I have a problem in my reasoning on the fatty acid synthesis in the human body. In the synthesis process you have this homodimer. So the synthesis starts with the transfer of an acetyl group from acetyl-CoA to the sulfhydryl group of the condensing enzyme (CE), with the help of the acetyltransferase (AT) enzyme, on one of the two monomers. At the same time a malonyl group from malonyl-CoA is transferred to the sulfhydryl terminus of the phosphopantetheine of the ACP, on the other monomer, with the help of the malonyltransferase (MT) enzyme. AND HERE THE PROBLEM STARTS The acetyl residue will condense with the malonyl-ACP and at the same time there will be a decarboxylation (CO2 is released) and the CE is released from the acetyl group. At the end acetoacetyl-ACP is formed. So if I got it right: the acetyl residue is being 'cut off' from the CE of the first monomer and binds to the malonyl-ACP on the other monomer. Right? And after that the acetoacetyl-ACP will be swept back to the other monomer (to the reduction part), is this also right? Then the reduction-dehydration-reduction steps happen, and at the end of the road you will get butyryl-ACP, which will be cut off from the ACP and migrate on the same monomer to the cysteine residue at the condensing enzyme (CE) (= translocation). And after that you can start a new condensation-reduction-translocation cycle and go on until you end up at palmitoyl-ACP. So the actual question is: is my reasoning right about how the product is going from one monomer to the other several times? In my book it's a pretty messy explanation and so I want to be sure I'm right. I hope somebody can help me out =) Greetings Answer: The textbook descriptions of fatty acid synthesis can be confusing because although the underlying chemistry of the process is universal, the way that it is organised is different in the systems that have been characterised, which include E. coli, yeast and vertebrates. 
In vertebrates: The fatty acid synthase is a dimer of identical multifunctional single polypeptides. The synthesis process involves two -SH groups: one is the terminal group of the pantothenate of the acyl carrier protein domain (Pan-SH) and the other is an -SH group of the condensing enzyme (CE-SH). For the first condensation step the initiating acetyl group is attached to the CE-SH of one monomer and the incoming unit (malonyl CoA) is attached to the Pan-SH of the other monomer. The condensation reaction between these two results in the new elongated unit being attached to the Pan-SH. This unit then goes through the reductase/dehydratase/reductase steps to form the new acyl group, which is finally transferred to a CE-SH group, opening up the Pan-SH for the next incoming malonyl CoA. This will happen repeatedly in each cycle of the process. Obviously, since each monomer has both -SH groups, a single fatty acid synthase dimer can be engaged in two elongations at the same time, but these will always include a transfer. It is thought that the initiating acetyl group comes in via the Pan-SH group. You say: And after that the acetoacetyl-ACP will be swept back to the other monomer (to the reduction part), is this also right? Then the reduction-dehydration-reduction steps are happening and at the end of the road you will get butyryl-ACP which will be cut off from the ACP and migrate on the same monomer to the cysteine residue at the condensing enzyme (CE) (= translocation). The diagram that you included clearly shows that the acetoacetyl group goes through the reduction-dehydration-reduction steps on the same Pan-SH where it was formed by condensation. 
I have seen other sources which seem to suggest that this step does involve a transfer between monomers. In fact it would appear from the discussion on the WP page for fatty acid synthase that this is an unresolved issue - see for example: Joshi et al. (2003) Engineering of an Active Animal Fatty Acid Synthase Dimer with Only One Competent Subunit Chemistry & Biology 10:169 - 173 Abstract Animal fatty acid synthases are large polypeptides containing seven functional domains that are active only in the dimeric form. Inactivity of the monomeric form has long been attributed to the obligatory participation of domains from both subunits in catalysis of substrate loading and condensation reactions. However, we have engineered a fatty acid synthase containing one wild-type subunit and one subunit compromised by mutations in all seven functional domains that is active in fatty acid synthesis. This finding indicates that a single subunit, in the context of a dimer, is able to catalyze the entire biosynthetic pathway and suggests that, in the natural complex, each of the two subunits forms a scaffold that optimizes the conformation of the companion subunit.
{ "domain": "biology.stackexchange", "id": 2281, "tags": "lipids, biochemistry, fatty-acid-synthase, fat-metabolism" }
If the neutral bar is connected to literal ground, why doesn't all electricity flow into the ground?
Question: Consider a single phase system with 2 wires coming from the transformer, one hot and the other neutral. These 2 wires are connected to my main box. The hot wire is connected to the breakers, then leading to the live pin of sockets. The neutral wire is connected to the neutral bar, and then connected to the neutral pin of sockets. A green ground wire connects the ground pin of a socket to the neutral bar. A wire connects the neutral bar to the literal ground in my garden. Why doesn't all the electricity go into the ground? Answer: The answer to this is that electricity does not always take "the shortest path to ground." That phrase is an oversimplification that can be useful, but in this case is not. A more correct statement is that electricity flows from a high potential to a low potential. In the case of the power to your house, that means it travels in a loop between the power company and your house. That loop happens to have a single point where it connects to the "ground" under your feet, but that single point doesn't make a loop. Well, it kinda does. The power plant is on the ground too, so in theory there are two parallel connections. One goes through one wire into your house, exits, and returns on the other wire, and the other goes through one wire into your house, exits, and then heads out to the "ground" and travels to the power plant along that path. In reality, both paths are followed, all the time. However, the resistance of the ground between you and the power company is far larger than that of the wire, so the vast majority of the current travels through the wires.
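The final point, that the wire carries the vast majority of the current because the earth path's resistance is far larger, is just a two-branch current divider. A rough numeric sketch (the resistance values below are made-up illustrative numbers, not measurements):

```python
# Two parallel return paths from the house to the transformer:
# the neutral wire and the literal earth.
r_wire = 0.05      # ohms, typical-scale service conductor (assumed)
r_earth = 500.0    # ohms, soil path incl. electrode resistance (assumed)

# Current divider: each branch carries a share inversely
# proportional to its resistance.
share_wire = r_earth / (r_wire + r_earth)
print(f"{share_wire:.4%} of the return current uses the wire")
# ~99.99% -- effectively all of it.
```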
{ "domain": "physics.stackexchange", "id": 78593, "tags": "electricity, electric-circuits" }
Why don't electromagnetic waves interact with each other?
Question: My exact question is: what explains this phenomenon? I also saw a video of Richard Feynman in which he talks about light and says that the light waves coming from the thing we look at are not disturbed by any other electromagnetic waves. He explains it this way: I can see things clearly in front of me, while someone standing to my right can also clearly see anything to my left; our light waves cross each other, but they are not disturbed by each other. This is a pretty cool explanation, but I don't understand it exactly, because I am not convinced that, if those two electromagnetic waves did interact, I couldn't see the thing in front of me clearly. Answer: Here are three explanations of how to understand "why" electromagnetic waves don't directly interact electromagnetically with each other, which are all equivalent to each other: Maxwell's equations are linear in the electric and magnetic fields, and in their sources, so the superposition of two solutions is also a solution. (For example, in Coulomb's Law you can just add up the fields of multiple charges.) Photons do not carry any electric charge and do not have their own electromagnetic field. (Note: By contrast, gluons do carry color charge and do interact with each other.) The gauge group for electromagnetism is an abelian (i.e., commutative) group. (Gauge groups are something you learn about in more advanced physics courses.) Notice that I said photons don't directly interact with each other. They do indirectly interact via virtual electrons and positrons (or other charged particle-antiparticle pairs). Until you get to extremely intense electric and magnetic fields, this is a very tiny effect and was only recently measured. An even tinier effect, which we will probably never be able to detect, is the gravitational interaction of electromagnetic waves or photons. 
Physicists believe there would be a gravitational interaction because electromagnetic waves and photons carry energy and momentum, even though photons are massless.
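Feynman's crossing-beams observation is exactly the linearity point above: evolving the sum of two fields gives the sum of the separately evolved fields. A small sketch with a 1-D scalar wave equation (the grid parameters and Gaussian pulses are arbitrary choices) shows two crossing pulses obeying superposition:

```python
import numpy as np

# Linearity of the wave equation is what "waves pass through each
# other" means: evolving (f + g) gives the same field as evolving
# f and g separately and adding.
n, c, dx = 400, 1.0, 1.0
dt = 0.5 * dx / c                      # CFL-stable time step
x = np.arange(n) * dx

def evolve(u0, steps):
    """Leapfrog update for u_tt = c^2 u_xx with zero initial velocity."""
    u_prev, u = u0.copy(), u0.copy()
    r2 = (c * dt / dx) ** 2
    for _ in range(steps):
        u_next = np.zeros_like(u)      # fixed (zero) boundaries
        u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
        u_prev, u = u, u_next
    return u

pulse1 = np.exp(-((x - 120) / 8) ** 2)   # pulse starting on the left
pulse2 = np.exp(-((x - 280) / 8) ** 2)   # pulse starting on the right

both = evolve(pulse1 + pulse2, 600)
separate = evolve(pulse1, 600) + evolve(pulse2, 600)
print(np.allclose(both, separate))        # True: superposition holds
```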
{ "domain": "physics.stackexchange", "id": 57288, "tags": "electromagnetic-radiation" }
Gravitational potential of a disc
Question: The question says: Find the potential at the center of a disc whose surface area density varies as $$\sigma = \sigma_0(1+\cos\theta)r$$ where $\theta$ is the angle made by the radius with the horizontal and $r$ is the distance of the point from the center. My textbook says to first integrate $$-G\sigma_0(1+\cos\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta$$ with respect to $\mathrm{d}r$, then integrate the result with respect to $\mathrm{d}\theta$. I understand the integration process; I wanted to know the physical meaning behind the integration. For instance, when finding the potential of a ring, we choose a small portion $dm$ and then sum over the complete ring, but here the mass density varies at every point, so how exactly does the integration work? Can this process be physically interpreted like the one we did for the ring? Answer: In this setup, the mass density of the disc is a function of two variables, namely $\theta$ and $r$. Hence, double integration is required to solve for the gravitational potential of the disc at the centre. We first consider a differential ring of thickness $dr$ and then a differential patch of length $r\,d\theta$ on this ring. The area of this patch is $dA$. While doing double integration, we first integrate w.r.t. the variable whose element is considered last, here $\theta$. While integrating w.r.t. $\theta$, $r$ is considered to be constant. Here, we are basically finding the gravitational potential at the centre due to a differential ring by summing up the contribution of each small patch on the ring. Next, we integrate w.r.t. $r$. Here, we sum up the contribution of each differential ring to find the gravitational potential due to the whole disc. Note: For this particular integral, the result will come out the same if we first integrate w.r.t. $r$ and then w.r.t. $\theta$. However, it is more intuitive to first integrate w.r.t. $\theta$.
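To make the bookkeeping concrete, the double integral can be carried out explicitly for the potential at the centre (assuming the disc has radius $R$, which the problem statement leaves unnamed). Each patch has mass $\mathrm{d}m = \sigma\,\mathrm{d}A = \sigma_0(1+\cos\theta)r \cdot r\,\mathrm{d}\theta\,\mathrm{d}r$ and sits at distance $r$ from the centre, so

$$V = -\int_0^{R}\int_0^{2\pi} \frac{G\,\mathrm{d}m}{r} = -\int_0^{R}\int_0^{2\pi} G\sigma_0(1+\cos\theta)\,r\,\mathrm{d}\theta\,\mathrm{d}r$$

The $\cos\theta$ term integrates to zero over a full revolution, so the $\theta$ integral contributes just $2\pi$, leaving

$$V = -2\pi G\sigma_0\int_0^{R} r\,\mathrm{d}r = -\pi G\sigma_0 R^2$$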
{ "domain": "physics.stackexchange", "id": 84057, "tags": "homework-and-exercises, newtonian-gravity, potential, integration" }
Does the uncertainty principle make simulation of systems impossible?
Question: Is it possible to fully define a system, then be incapable of simulating or calculating its future states due to the Uncertainty Principle? If it can be done, how? Answer: The Uncertainty Principle will never, as far as we know, prevent you from simulating any physical system. The reason for this is that quantum mechanics is - except for that little problem with measurements - completely deterministic. To be more precise, say you want to simulate a given system within quantum mechanics. You begin by describing your preparation procedure of the initial state, you describe the hamiltonian which drives the evolution of the system, and you describe any measurements you will do at any given point. Then quantum mechanics allows you to calculate, at least in principle, the evolution of the system's state via the quantum Liouville equation, $$i\hbar\frac{\partial\rho}{\partial t}=[H,\rho].$$ When you perform measurements, the formalism will tell you the probabilities of each outcome and the state you should use to continue the unitary evolution. The whole thing is completely simulatable. (On the other hand, there is no guarantee on you being able to find a computer that will do this in less than the age of the universe.) Even in classical mechanics, this is not an issue. Say you have a classical particle which you want to simulate using some hamiltonian mechanics, but you're worried that you can never have full information about both position and momentum. The Uncertainty Principle does limit your precision to a patch of area $\hbar$ in phase space. However, your preparation procedure will produce some sort of definite probability distribution over phase space which determines what positions and momenta are more likely than others. This probability density can then be propagated deterministically in time using liouvillian mechanics. 
This formalism will give you, at any given time, the probability distribution over the position and momentum of the particle; if you repeat the experiment over your ensemble then you can simulate the distribution of final values.
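As a toy illustration of that determinism, here is a minimal sketch (assuming $\hbar = 1$ and a made-up two-level Hamiltonian) of unitary density-matrix evolution, which is the closed-form solution of the quantum Liouville equation above for a time-independent $H$:

```python
import numpy as np

# Deterministic evolution of a density matrix under the quantum
# Liouville equation: for time-independent H,
#   rho(t) = U rho(0) U^dagger,  U = exp(-iHt/hbar).
# Two-level Hamiltonian below is an arbitrary example; hbar = 1.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
rho0 = np.array([[1.0, 0.0],
                 [0.0, 0.0]])          # pure state |0><0|

t = 2.0
w, V = np.linalg.eigh(H)               # diagonalize the Hermitian H
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
rho_t = U @ rho0 @ U.conj().T

# Unitary evolution preserves trace, Hermiticity, and purity:
# nothing about the state is "lost" along the way.
print(np.isclose(np.trace(rho_t).real, 1.0))          # True
print(np.isclose(np.trace(rho_t @ rho_t).real, 1.0))  # True (still pure)
```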
{ "domain": "physics.stackexchange", "id": 10639, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, measurement-problem, determinism" }
Putting a sensor_msgs/Image into a message, getting it back, and converting it for OpenCV?
Question: I'm just doing this wrong somehow. I've got a sensor_msgs/Image already. I want to put it into my message which has "sensor_msgs/Image image" in its definition. Can I just do this: msg.image = *imgPtr; Or do I need to do some form of deep copy? The data types are a bit too complicated for me to figure out what's going on. Later, in a service, I want to extract that image and convert it to an IplImage* for use with OpenCV. I'm trying this: cv_img = cv_bridge::toCvCopy(image, enc::BGR8); IplImage img; img = cv_img->image; And it seems to be working alright now, but it's hard to tell since I think that first bit is not working. Originally posted by Murph on ROS Answers with karma: 1033 on 2011-05-01 Post score: 1 Answer: I suggest you take a look at the cv_bridge tutorials Originally posted by tfoote with karma: 58457 on 2011-05-01 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tfoote on 2011-05-06: All messages should do a full copy on assignment. Comment by Murph on 2011-05-02: Any idea on the first part - do I need to do a deep copy or something to put an image into the message file? I think the Image I'm sending is not working right now.
{ "domain": "robotics.stackexchange", "id": 5483, "tags": "ros, opencv, sensor-msgs, image, messages" }
Is it proven that quantum computation is no better at solving NP complete problems than classical computation?
Question: Is it proven that quantum computation is no better at solving NP-complete problems than classical computation, or is it just believed? Answer: It is suspected that NP-complete problems cannot be solved in quantum polynomial time (i.e., that they are not in BQP), but this hasn't been proved. We don't expect a proof in the near future, since this would imply that P is different from NP.
{ "domain": "cs.stackexchange", "id": 8173, "tags": "np-complete, quantum-computing" }
What is electrical conductivity divided by scattering time in DFT calculations?
Question: In DFT calculations, I see many graphs showing electrical conductivity divided by scattering time, $\sigma/\tau$. But it is treated as electrical conductivity in the papers. What does this mean, and why do they plot it like this? Answer: Density Functional Theory cannot calculate the scattering time $\tau$. Boltzmann transport post-processing of DFT band structures is typically done in the constant relaxation-time approximation, in which $\tau$ factors out of the transport integrals, so $\sigma/\tau$ is what the calculation can actually produce. $\tau$ itself is a parameter that must be experimentally determined from the width of the Drude peak in optical conductivity (the frequency dependence of the AC conductivity).
{ "domain": "physics.stackexchange", "id": 65113, "tags": "solid-state-physics, density-functional-theory" }
SMACH tutorial: ImportError: No module named msg
Question: I got an error while trying to run ./examples/state_machine.py in the smach_tutorials package: Traceback (most recent call last): File "/home/tienthanh/workspace/ros/fuerte/mebios/tienthanh/executive_smach_tutorials/smach_tutorials/examples/state_machine.py", line 21, in <module> import smach_ros File "/opt/ros/fuerte/stacks/executive_smach/smach_ros/src/smach_ros/__init__.py", line 55, in <module> from action_server_wrapper import ActionServerWrapper File "/opt/ros/fuerte/stacks/executive_smach/smach_ros/src/smach_ros/action_server_wrapper.py", line 9, in <module> from actionlib.simple_action_server import SimpleActionServer File "/home/tienthanh/workspace/ros/fuerte/mebios/tienthanh/executive_smach_tutorials/smach_tutorials/examples/actionlib.py", line 27, in <module> from actionlib.msg import * ImportError: No module named msg Output of "echo $PYTHONPATH" is /opt/ros/fuerte/share/ros/core/roslib/src:/opt/ros/fuerte/lib/python2.7/dist-packages: I use ROS fuerte, Ubuntu 12.04 32 bit. Originally posted by Tien Thanh on ROS Answers with karma: 231 on 2012-09-18 Post score: 0 Answer: It looks like the tutorial was never updated when actionlib's messages were moved into actionlib_msgs. The line should read: from actionlib_msgs.msg import * I've filed a ticket here: https://kforge.ros.org/smach/trac/ticket/2 Originally posted by jbohren with karma: 5809 on 2012-09-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Tien Thanh on 2012-09-19: I change in smach_tutorials/examples/actionlib.py as you advice: from actionlib_msgs.msg import * But I got another error: ... action_server_wrapper.py", line 9, in from actionlib.simple_action_server import SimpleActionServer ImportError: No module named simple_action_server
{ "domain": "robotics.stackexchange", "id": 11060, "tags": "ros, executive-smach, smach" }
How to solve the recurrence $T(n) = T(n/2) + T(n/4) + T(n/8)$?
Question: How to solve the recurrence $T(n) = T(n/2) + T(n/4) + T(n/8)$? We assume that $T(n)$ is constant for sufficiently small $n$. Answer: Use the Akra-Bazzi theorem, a generalization of the master theorem which captures recurrences such as yours. The method immediately gives $T(n) = \Theta(n^p)$, where $(1/2)^p + (1/4)^p + (1/8)^p = 1$. In contrast to your method, the Akra-Bazzi theorem can also handle inhomogeneous recurrence relations (i.e., $T(n) = T(n/2) + T(n/4) + T(n/8) + g(n)$), and it can also handle floors and ceilings (i.e., $T(\lfloor n/2 \rfloor)$ instead of $T(n/2)$) and beyond.
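The Akra-Bazzi exponent can be pinned down numerically. A small bisection sketch on the characteristic equation (the bracket and iteration count are arbitrary choices):

```python
# Solve (1/2)^p + (1/4)^p + (1/8)^p = 1 for the Akra-Bazzi exponent p.
# f is strictly decreasing in p, so plain bisection works.
def f(p):
    return 0.5**p + 0.25**p + 0.125**p - 1.0

lo, hi = 0.0, 1.0          # f(0) = 2 > 0, f(1) = -0.125 < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

p = (lo + hi) / 2
print(round(p, 3))          # 0.879, so T(n) = Theta(n^0.879...)
```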
{ "domain": "cs.stackexchange", "id": 11228, "tags": "recurrence-relation" }
Cantor's diagonal method in simple terms?
Question: Could anyone please explain Cantor's diagonalization principle in simple terms? Answer: Here's the standard application explained in the simplest terms I can: Theorem: There are more real numbers than there are integers. Lemma: A real number has a decimal representation (that might not terminate), and all decimal representations create real numbers. Proof of Theorem: Suppose there are as many integers as reals. Then we can list the reals in some order, $r_1 = 3.141592...$ $r_2 = 2.718281...$ $r_3 = 1.000000...$ and so on for each $r_i$ where $i$ is an integer There is a contradiction because I can construct a real that is not in the above list. Let $r' = 0.d_1d_2d_3d_4\ldots$ where $d_1$ is not equal to the 1st digit after the decimal of $r_1$, $d_2$ is not equal to the 2nd digit after the decimal of $r_2$, and so forth. For example, I could add 5 and take it mod 10. In the example this gives $r' = 0.665\ldots$. The contradiction is that $r'$ is not in the list, because for any $i$ I know that $r'$ and $r_i$ differ by at least $4*10^{-i}$ (since they differ in $d_i$). The fact that the reals cannot be enumerated as such shows that they are larger than the integers. But, the important result of Cantor is simply that they cannot be enumerated.
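The diagonal construction runs mechanically on any finite prefix of such a list; here is a toy sketch using the answer's "+5 mod 10" rule on the three listed reals:

```python
# Diagonalize a (finite prefix of a) list of decimal expansions:
# digit i of r' differs from digit i of r_i by 5 (mod 10).
listing = ["141592", "718281", "000000"]   # digits after the decimal point

def diagonalize(rows):
    return "".join(str((int(rows[i][i]) + 5) % 10) for i in range(len(rows)))

r_prime = diagonalize(listing)
print(r_prime)   # "665", matching the 0.665... in the answer

# r' disagrees with every listed number at the diagonal position,
# so it cannot appear anywhere in the list.
for i, row in enumerate(listing):
    assert r_prime[i] != row[i]
```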
{ "domain": "cs.stackexchange", "id": 684, "tags": "complexity-theory, sets, uncountability" }
How can I determine the interaction knowing the decay formula?
Question: I know the kind of interaction occurring in very common reactions. For example, I know that the interaction $e^- + e^+ \longrightarrow \mu^+ + \mu^-$ is driven by the electromagnetic force (there is a $\gamma$ in the Feynman diagram). But in general, given only the formula, on which criteria can I rely to find the type of interaction (strong, weak, EM) occurring? Has it something to do with the characteristics of the mediators? I know that the W, Z bosons have a definite electric charge, mass, etc., but I can't understand how this can help me in finding the interaction. Answer: "is driven by the electromagnetic force (there is a γ in the Feynman diagram)" — you are putting the cart before the horse. Interactions are classified by their strength. The strongest, i.e. most probable, is the strong interaction: its cross sections, i.e. probability distributions, are much larger than those of the electromagnetic interaction, which in turn are larger than those of the weak interaction, first seen in decays. This table of fundamental forces makes this clear, where the exchanged particle is in the rightmost column. Many Feynman diagrams exist for your example process, since the cross section is computed as a perturbative expansion, with photon exchanges and Z exchanges; depending on the energy and the particles involved, different diagrams (see page 5) become dominant in the expansion. Photon exchanges are the electromagnetic contribution, Z exchanges the weak contribution to the cross section. Thus the only rule of thumb for picking out the dominant diagram, which will characterize the interaction, is the energy of the interaction under study. At low energies the exchanged Z is very much off mass shell, in your example, and the weak contribution is suppressed. At the Z energy the weak contribution is dominant. One has to calculate the Feynman diagrams and fit the specific data with the theoretical calculation.
This is a summary of the way the electroweak interactions have been studied.
{ "domain": "physics.stackexchange", "id": 31396, "tags": "particle-physics, nuclear-physics, interactions, weak-interaction, strong-force" }
Can Principal Component Analysis (PCA) Solve the Cocktail Party Problem?
Question: I'm looking into the cocktail party problem and trying to figure out whether something like Principal Component Analysis is enough to separate out all the various voices at the cocktail party into its constituent sound sources. If it's not enough, why? What other techniques should be used in conjunction with it, so that I wind up with distinct signals for each cocktail party patron's voice? Spatial filtering, such as various beamforming methods, has been suggested. But in researching PCA it seems that it should (possibly) be enough to "split" the total signal of the cocktail party into individual signals for each human voice attending the party. Beamforming and similar methods appear to be filters for subsequently focusing on just one of those voices and filtering the rest out. Can anybody with PCA experience weigh in here on whether or not PCA can solve this, or whether it requires additional processing? Answer: The Cocktail Party Problem is a Blind Source Separation (BSS) problem. Given a linear mixture of signals: $$ \boldsymbol{y} \left[ n \right] = A \boldsymbol{x} \left[ n \right] $$ We're trying to estimate the signal $ \boldsymbol{x} \left[ n \right] $. The model can get even more complex with $ A $ being time varying: $$ \boldsymbol{y} \left[ n \right] = A \left[ n \right] \boldsymbol{x} \left[ n \right] $$ We have 3 main approaches to this problem: Probabilistic Approach Looking at the signals as an ensemble of points of a distribution and finding the linear coordinate transform that guarantees some property. The PCA approach tries to remove correlation (2nd-moment information) while ICA tries to remove correlation in higher moments (basically, statistical independence). Time Signal Processing Approach In the case of 2 signals, one of them being a reference, we can use the adaptive decorrelation filter. Basically we're after removing any time correlation from the signals.
Spatial Signal Processing Approach We can utilize the known locations of the microphones in the room to create adaptive beamforming. The idea is that delayed adaptive summation of the data can change the spatial response of the array, making it match a certain direction. Of course, in recent years we can find work on the subject utilizing Deep Learning approaches. Their main advantage is being able to incorporate additional information (like using the properties of the signal, be it in a certain language, or incorporating visual data on the scene like images and videos [who is moving their lips when?]). This is a vast subject and the main idea is to tailor the solution to your specific case. Modern robust ICA and IVA (Independent Vector Analysis) can be very effective. I'd try them first unless you have the case which matches the adaptive filter (which can be proven to match the beamforming solution under some conditions).
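A quick numerical illustration of why PCA alone is not enough: it can decorrelate two mixed microphone signals, but decorrelation is only a second-moment constraint and does not by itself undo the mixing. The toy signals and mixing matrix below are made up for the sketch:

```python
import numpy as np

# Two toy "voices" and a made-up 2-microphone mixing matrix
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * np.pi * t)                  # source 1
s2 = np.sign(np.sin(3 * np.pi * t + 0.5))   # source 2 (non-Gaussian)
S = np.column_stack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # hypothetical room mixing
X = S @ A.T                                 # what the microphones record

# PCA: rotate onto the eigenvectors of the covariance of the mixtures
Xc = X - X.mean(axis=0)
eigvals, V = np.linalg.eigh(Xc.T @ Xc / len(Xc))
Y = Xc @ V                                  # principal components

# The components are now uncorrelated (2nd moments only)...
C = Y.T @ Y / len(Y)
print(abs(C[0, 1]))                         # ~0 (machine precision)
# ...but nothing forces each component to be a single voice; in general
# each one is still a mixture of both sources, which is why ICA
# (statistical independence, higher moments) is needed for separation.
```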
{ "domain": "dsp.stackexchange", "id": 9723, "tags": "audio, audio-processing, inverse-problem, pca, source-separation" }
Preparation of alcohols from Monosubstituted cyclic ethers
Question: Can methyloxirane (epoxide) (a monosubstituted 3-membered cyclic ether) form alcohols by reacting with a Grignard reagent? If yes, can you please explain the reaction with its mechanism? If no, why? Answer: Yes, Grignard reagents can act as nucleophiles in the ring opening of epoxides. Nucleophilic attack occurs much faster at the less hindered carbon, so the product of this reaction will predominate. The reaction results in the formation of an alkoxide, and does not form the alcohol until treated with a protic acid. The mechanism for Grignard addition to epoxypropane (methyloxirane) is: (mechanism scheme not shown) For 2-methyl-2,3-epoxybutane, a more hindered epoxide, the addition still occurs at the less hindered carbon: (scheme not shown)
{ "domain": "chemistry.stackexchange", "id": 5717, "tags": "organic-chemistry, reaction-mechanism, alcohols" }
Liquid-liquid caffeine extraction
Question: I am performing a lab experiment in which I am extracting crude caffeine from different samples of tea. I have read a couple of similar experiments in which methylene chloride has been used to extract caffeine, owing to its volatile nature and caffeine's affinity for methylene chloride. I know chloroform can be used as a substitute for methylene chloride. I was wondering if carbon disulphide can be used as a substitute in the experiment. Thank you very much! Answer: I suggest using a 1:1 mixture of dichloromethane and 0.2 M NaOH at room temperature. The protocol was described by Tetsuo Onami and Hitoshi Kanazawa in J. Chem. Educ., 1996, 73, 556-557. The authors mention that only trace amounts of caffeine are extracted from tea leaves with dichloromethane alone. They reason that due to the high amount of tannins in the tea leaves, most of the caffeine exists in protonated form and is therefore bound to the tannins. As an experimentally proven alternative, they suggest treating the tea leaves with a mixture of dichloromethane and 0.2 M NaOH at room temperature! According to the article, 20-30 mg of caffeine were extracted from a 2 g tea bag after shaking the above mixture for about ten minutes. UPDATE If you insist on the liquid-liquid extraction of previously prepared aqueous solutions and feel the urge to set up a percolator-type apparatus for continuous extraction, note that the Kutscher-Steudel glassware will not work with solvents that have a higher density than water. For continuous extractions with solvents like $\ce{CH2Cl2}$, a different piece of glassware has been designed; it was described by S. Wehrli in Ein Apparat zur Extraktion von Lösungen mit schweren Lösungsmitteln, published in Helv. Chim. Acta, 1937, 20, 927–931 (DOI)
{ "domain": "chemistry.stackexchange", "id": 5242, "tags": "safety, extraction" }
Momentum exerted on an external object by a Photon
Question: I am trying to calculate the momentum transferred by a photon, for solar sail speeds (not exactly a photon because of wave-particle duality, but I will consider it a photon for this question). I had the following result: 2.998×10^-47 kilogram meters per second (with rest mass only). I used Wolfram Alpha with p = mv to calculate it. Since light has no acceleration but a constant velocity, I calculated its momentum as p = mv. I read in this ScienceDirect article that photons have a rest mass of 10^-54 kg. I didn't find the relativistic mass of a photon of 780 nanometer wavelength light. I want to find out if my value is correct, and how much it would differ with relativistic mass (since I calculated it with rest mass). Can you please help me? I am new to equations. Answer: When dealing with photons it is better to think in terms of momentum rather than mass. A photon with frequency $\nu$ has momentum with magnitude $\frac {h \nu} c$. If the photon hits the light sail at an incident angle of $90$ degrees and if the light sail is perfectly reflective then the momentum of the photon is reversed, so the change in momentum of the photon is $2\frac {h \nu} c$. By conservation of momentum, the change in momentum of the light sail must also be $2\frac {h \nu} c$. If $N$ photons hit the light sail every second then the force exerted by the photons on the light sail is $2N\frac {h \nu} c$.
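For the 780 nm light in the question, it is easiest to skip mass altogether and use $p = h/\lambda = h\nu/c$ directly. A quick check of the numbers (the 1 kW source power is just an example value, not from the question):

```python
h = 6.626_070_15e-34   # Planck constant, J s (exact, SI)
c = 2.997_924_58e8     # speed of light, m/s (exact, SI)
lam = 780e-9           # wavelength, m

p_photon = h / lam     # momentum of one photon, ~8.5e-28 kg m/s

# Perfect mirror, normal incidence: each photon transfers 2*h*nu/c
dp_sail = 2 * p_photon

# Force from a light source of power P hitting the sail head-on:
P = 1000.0             # watts (example value)
N = P * lam / (h * c)  # photons per second = P / (h*nu)
F = N * dp_sail        # algebraically equal to 2*P/c
print(p_photon, F)
```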
{ "domain": "physics.stackexchange", "id": 92043, "tags": "forces, visible-light, photons, mass, solar-sails" }
How to draw one-loop corrections for a certain QFT theory
Question: Consider that I'm given the following Lagrangian: $$L=L_{QED}+\frac{1}{2}\partial_\mu\phi\partial^\mu\phi+\frac{1}{2}\partial_\mu\chi\partial^\mu\chi-\frac{1}{2}m_\phi^2\phi^2-\frac{1}{2}m_\chi^2\chi^2-\frac{1}{2}\mu_1\phi^2\chi-\frac{1}{2}\mu_2\chi^2\phi-g\bar{\psi}\psi\chi$$ where $\phi$ and $\chi$ are neutral scalar fields and $\psi$ is the electron field. From this Lagrangian I extract new Feynman rules besides QED's, for instance, the propagator for the scalar fields, the vertices of electrons and the $\chi$ field, vertices between the scalar fields, etc. Within QED, in particular, the one-loop correction to the vertex is done by introducing a photon propagator "connecting" the fermionic lines. Furthermore, the one-loop correction for the electron propagator introduces the electron's self-energy, and for the photon propagator, the vacuum polarization. My question is, why are these the one-loop 1PI corrections to QED? How do I know that for the vertex, I need to connect those two electron lines with a photon? The reason I ask this is because I'm trying to draw the one-loop corrections for the scalar self-energy $\chi$ and for the new vertices (so I can discuss their superficial degree of divergence afterwards).
Now, in your lagrangian you have three more interaction vertices between the fields: a vertex with two $\phi$ and a $\chi$ given by the term $$\mu_1\phi^2\chi$$ for which the Feynman rule gives $-i\mu_1$, a vertex with two $\chi$ and one $\phi$ $$\mu_2\chi^2\phi$$ for which the Feynman rule gives $-i\mu_2$, and a vertex with two fermions and a $\chi$ $$g\bar\psi\psi\chi$$ for which the Feynman rule gives $-ig$. For example this last vertex gives another one loop contribution to the fermion propagator which is identical to the fermion self energy diagram but with the photon replaced by a $\chi$. If you're searching for the one loop corrections to the scalar propagator of the $\chi$ you'll have three diagrams The first is given by the interaction term $\bar\psi\psi\chi$, the second by the interaction $\chi^2\phi$ (sorry if both the $\chi$ and the $\phi$ are given by dashed lines), and the third is the remaining term $\phi^2\chi$.
{ "domain": "physics.stackexchange", "id": 68791, "tags": "homework-and-exercises, quantum-field-theory" }
Spring Damper System: Recoil Reduction
Question: I am designing a man-"portable" 30mm sniper rifle, although it may be classified as an anti-materiel cannon. I have the mass of the round in total, and the mass of the projectile leaving the barrel, as well as the pressure inside the barrel. I have the weapon's weight as well. Here is my source: http://www.pmp.co.za/index.php?page=mediumcalibre6 Now, my problem. How do I calculate the force that this bullet imparts upon the rifle? If I use a spring-damper system to reduce the force on the shooter, how do I actually determine the force on the shooter? If you do not find the need to supply me with a direct answer, which is understandable, could you please redirect me to a website/book that might help me? Answer: My initial guess for modelling the forces involved in firing a rifle would be to model the rifle, bullet and pressurized gas as a piston in which the gas undergoes ideal adiabatic expansion until the bullet leaves the barrel. The force exerted by the pressure of the gas on the barrel in the radial direction will cancel out. The pressure does exert a force on the rifle and the bullet, both proportional to the cross-sectional area, $A$, of the barrel (the base of the cylindrical volume of the gas in the barrel). These forces will contribute to the acceleration of both objects, which will result in an increase of the enclosed volume of the gas and therefore a drop in pressure.
When assuming the only force involved is due to pressure, the following differential equations can be composed: $$ p=p_0\left(1+\frac{A}{V_0}(x_b-x_r)\right)^{-\gamma}, $$ $$ \frac{\partial^2}{\partial t^2}x_b=\frac{pA}{m_b}=\frac{p_0A}{m_b}\left(1+\frac{A}{V_0}(x_b-x_r)\right)^{-\gamma}, $$ $$ \frac{\partial^2}{\partial t^2}x_r=-\frac{pA}{m_r}=-\frac{p_0A}{m_r}\left(1+\frac{A}{V_0}(x_b-x_r)\right)^{-\gamma}, $$ where $x_b$ and $x_r$ are the positions of the bullet and rifle, $m_b$ and $m_r$ the masses of the bullet and rifle, $p$ is the pressure of the gas, $p_0$ is the starting pressure of the gas, $V_0$ is the starting volume of the pressurized gas and $\gamma$ is the adiabatic index. This model does have quite some simplifications, such as no friction between barrel and bullet, no influence from the air in front of the bullet in the barrel, and no high-pressure gas escaping between the bullet and the barrel. However, even this simplified model might be too complex and unnecessary: knowing the muzzle velocity and using conservation of momentum, you know that you have to dissipate the same impulse as the bullet has gained, but distributed over a longer period to reduce the average force. Ideally you would want some kind of friction between the moving and static parts of the rifle, because this would mean that there is a constant force acting between the two parts, and thus on you, thereby minimizing the maximum force experienced and the distance needed to transfer the impulse of the moving part to the user. However, friction often means wear, so other systems might be a better option if the smallest distance is not that important. To answer your second problem I would have to simplify it. Say that after firing a bullet of mass $m_b$ at velocity $v_b$, the barrel (and other parts of the rifle moving along with it) with mass $m_r$ should have a momentum equal but opposite to that of the bullet.
From this you can find the initial speed, $v_r$, of the moving part of the rifle: $$ v_r=v_b\frac{m_b}{m_r} $$ After this the spring and the damper will exert a force on this moving part, however the same force will also be exerted onto you: $$ F=m_r\ddot{x}=-kx-c\dot{x} $$ This equation has a known solution and by using the initial conditions $x(t=0)=0$ and $\dot{x}(t=0)=v_r$ you can find the equation of motion and thus the exerted force.
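Putting numbers on that last part: one can integrate $m_r\ddot{x} = -kx - c\dot{x}$ numerically from the initial conditions above and read off the peak force transmitted to the shooter. All parameter values below are invented placeholders, not figures from the PMP datasheet:

```python
# Hypothetical numbers only -- not taken from the PMP datasheet
m_b = 0.36      # projectile mass, kg (assumed)
v_b = 800.0     # muzzle velocity, m/s (assumed)
m_r = 25.0      # recoiling mass of the rifle, kg (assumed)
k = 2.0e4       # spring stiffness, N/m (assumed)
c = 1.2e3       # damping coefficient, N s/m (assumed)

v_r = v_b * m_b / m_r   # recoil velocity from conservation of momentum

# Semi-implicit Euler integration of m_r * x'' = -k*x - c*x'
x, v = 0.0, v_r
dt = 1e-5
peak_force = 0.0
for _ in range(200_000):                  # simulate 2 s
    F = -k * x - c * v                    # spring + damper force on the mass
    peak_force = max(peak_force, abs(F))  # same magnitude acts on the shooter
    v += (F / m_r) * dt
    x += v * dt
print(v_r, peak_force)
```

With these made-up numbers the damper dominates at $t=0$ (force $\approx c\,v_r$), which illustrates the trade-off in the answer: stiffer damping shortens the recoil stroke but raises the peak force felt by the shooter.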
{ "domain": "physics.stackexchange", "id": 12542, "tags": "homework-and-exercises, newtonian-mechanics, momentum, spring" }
Calling fgets() twice
Question: I am a beginner. I have really struggled with fgets() when using the function twice. I am on Windows with VS Code and C++ extension. My test program is working, but it is IMHO lengthy and complicated for such a simple task. Furthermore, the use of goto is bad practice, I have read. Is there a simpler solution? #include <iostream> #include <string.h> using namespace std; #define max 5 void killNL(char *str) { int i = 0; while (*(str+i) != EOF){ if (*(str+i) == '\n'){ *(str+i) = '\0'; *(str+i+1) = EOF; } i++; } } int main() { size_t len = 0; char string[5] = {0}; int c = 0; begin: printf("Enter string: "); fgets(string, max, stdin); killNL(string); //Is used only for the len count when length is 1 less //than max. It depends on the space that is left for //'\n' AND '\0' which raises the count to when the //(getchar()) != gets activated. Which needs a leftover //'\n' in the buffer to not go into endless loop. len = strlen(string); for (int i = 0 ; i <= len ; i++) { if (string[i] == '\n'){ printf("\\n \n");} else if (string[i] == '\0'){ printf("\\0 \n");} else { printf("%c \n", string[i]);} } askAgain: if (len >= max - 1) { while (c = (getchar()) != '\n' && c != EOF) {} } printf("Continue? 'y'/'n' :"); fgets(string , 3, stdin); if (*(string) == 'y'){ goto begin; } else if (*(string) == 'n') { goto end; } else { printf("\nWrong answer!\n"); goto askAgain; } end: printf("END!\n"); return 0; } I have been searching here, on Youtube and then in a book where I found the killNL function, which finally did make it work. This question has many variants on Stack Overflow, with a combination with sscanf, with reading a file and more. But not this variant, which is why I am posting it despite the fact that it is an old subject. Answer: Well, for starters, you really want to decide whether you are coding in C or C++. You #include <iostream>, but make no use of any iostream functions (presumably you thought about using std::cout or the like). 
You also specify using namespace std;, but that is not valid in C. Rule: only include the necessary headers for the functions used in your code. We will presume you want to write code in C given your choice of input and output functions. If not, drop a comment and I'm happy to help with C++ as well. Working through your code: max 5 -- Don't SKIMP on buffer size. While you would use std::string if writing in C++, when declaring an array to hold input in C, don't skimp. While fgets() will only attempt to write size number of characters (minus 1 to provide room for the nul-terminating character), your purpose in using fgets(), among others, is to read a complete line of input at a time without leaving extraneous characters in the input stream. You do that by providing a sufficiently sized buffer. When taking input, you generally loop continually and then break your read-loop if the input fails (stream error or manual EOF, generated by the user pressing Ctrl + d, or Ctrl + z on Windows), if successful input is received, or if one of your exit conditions is satisfied (e.g. the user pressing Enter alone for the string, indicating they are done with input). In C, to trim the '\n' from the end of the string read by fgets() (e.g. fgets (string, MAXC, stdin)), you simply call string[strcspn (string, "\n")] = 0; effectively overwriting the '\n' with the nul-terminating character '\0' (which is just ASCII 0). Declare your variables in the scope where they are needed. And there is absolutely nothing wrong with goto used properly; it is the only way to break out of nested loops in a single expression, etc. With that as a backdrop, you can simplify and clean up your code similar to the following.
Many of the choices are up to you, but this is one general way to approach reading input, checking if the input is valid, checking if the user generated an EOF, checking if the user is done with input, checking if the input matches a string you want, and finally looping again if the input isn't what is needed. (a rough interpretation of what it looked like you were wanting to do) The code (with additional comments) can be: // #include <iostream> /* only include needed headers for functions used */ #include <stdio.h> /* or <cstdio> for modern C++ */ #include <string.h> /* or <cstring> for modern C++ */ // using namespace std; /* using nothing from std:: namespace */ #define MAXC 1024 /* don't SKIMP on buffer size, UPPERCASE defines */ #define ANS "guess" int main() { char string[MAXC] = {0}; /* declare string */ /* read-loop continually, break on condition of your choice */ while (1) { fputs ("\nEnter string: ", stdout); /* no conversion, fputs is fine */ /* validate EVERY input based on return of read function */ if (!fgets (string, MAXC, stdin)) { puts ("(user generated manual EOF)"); return 0; } /* trim '\n' from string */ string[strcspn (string, "\n")] = 0; /* exit if empty-string (user just pressed [Enter], or any condition) */ if (*string == '\0') { puts ("(all done)"); break; } /* check if correct answer ANS given */ if (strcmp (string, ANS) == 0) { puts ("(correct!)"); break; } puts ("(answer didn't match)"); /* otherwise, loop again */ } puts ("(That's All Folks!)"); } Example Use/Output $ ./bin/read-loop Enter string: my dog has fleas (answer didn't match) Enter string: why? (answer didn't match) Enter string: what if I guess? (answer didn't match) Enter string: guess (correct!) (That's All Folks!) 
Or canceling with a Ctrl + d: $ ./bin/read-loop Enter string: (user generated manual EOF) Or using the exit-condition of the user just pressing Enter alone at the prompt indicating they are done guessing: $ ./bin/read-loop Enter string: next we will just press Enter when prompted, to quit input (answer didn't match) Enter string: (all done) (That's All Folks!) You can add (or remove) as many of the exit conditions for the read-loop as you like. The key take-aways are: loop continually until the user provides the required input, always condition exiting your read-loop on the return of your read-function, validate EVERY user-input, provide the needed exit conditions to control exit from your read-loop, and handle any errors that occur as needed. Let me know if this is what you needed. If not, I'm happy to help further.
{ "domain": "codereview.stackexchange", "id": 45209, "tags": "c++, c" }
Can acceleration of a falling mass be positive when the upwards direction is positive? What am I missing?
Question: As shown in the figure: the (+) direction is upwards, displacement is $y$, and time is $t$. Let's say that $ y=30-5t^{2} $; the second derivative then gives the acceleration as $-10$, which is negative as expected. Now let's say $$ y=30 + 5e^{-t} $$ As $t$ increases from 0 to infinity, $y$ decreases from 35 to 30, so the mass is falling. But the second derivative, the acceleration, is $ 5e^{-t} $, which is always positive for $t>0$. So, how can a falling object have positive acceleration while it's falling? What am I missing here? Answer: At $t=0$ the particle has negative velocity $v=-5$. If it were to carry on at this velocity it would sail on past $y=30$ and keep heading in the negative $y$ direction. It does not do this - its speed decreases as it approaches $y=30$ and it eventually comes to rest at $y=30$ in the limit $t\rightarrow\infty$. In order for this to happen it must be accelerating in the positive $y$ direction so that its speed towards the negative direction decreases with time.
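The sign pattern is easy to confirm numerically for $y = 30 + 5e^{-t}$: the velocity is negative (the mass is falling) while the acceleration is positive (the fall keeps slowing). Finite differences as a sanity check:

```python
import math

def y(t):
    return 30 + 5 * math.exp(-t)

h, t = 1e-4, 1.0
v = (y(t + h) - y(t - h)) / (2 * h)            # dy/dt   = -5 e^{-t} < 0
a = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # d2y/dt2 = +5 e^{-t} > 0
print(v, a)  # negative velocity, positive acceleration
```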
{ "domain": "physics.stackexchange", "id": 78210, "tags": "kinematics, acceleration, displacement" }
Add data to DataTable
Question: I created a class to operate with a concrete table and I want to generalize this code to develop a more universal class. Here is one method of this class: public void FillColumns(DataTable table, OracleDataReader reader) { if (reader != null) if (reader.HasRows) { table.Rows.Clear(); while (reader.Read()) table.Rows.Add ( reader["ColumnName1"], reader["ColumnName2"], reader["ColumnName3"], reader["ColumnName4"] ); } } How can I refactor this concrete method to make the concrete column names input parameters (even independent of the number of columns)? Answer: params (version 1) You can use the same trick the Rows.Add method uses, namely the params keyword By using the params keyword, you can specify a method parameter that takes a variable number of arguments. and pass the column names as the last parameters. Then you read the values from the reader with some LINQ and turn them into an array because it's just that, an array (but one that is created dynamically if each value is passed separately). Besides, you don't need two ifs; use the && operator instead. public void FillColumns( DataTable table, OracleDataReader reader, params string[] columnNames) { if (reader != null && reader.HasRows) { table.Rows.Clear(); while (reader.Read()) { table.Rows.Add(columnNames.Select(name => reader[name]).ToArray()); } } } Example call: .FillColumns(table, reader, "Column1", "Column2"); table.Columns (version 2) the second approach would be to read the column names from the data table public void FillColumns(DataTable table, OracleDataReader reader) { if (reader != null && reader.HasRows) { table.Rows.Clear(); while (reader.Read()) { table.Rows.Add( table .Columns .Cast<DataColumn>() .Select(column => reader[column.ColumnName]) .ToArray()); } } } table.Load (version 3) and the third is to just load the data table from the reader: public void FillColumns(DataTable table, OracleDataReader reader) { if (reader != null && reader.HasRows) { table.Rows.Clear(); table.Load(reader); } }
{ "domain": "codereview.stackexchange", "id": 25378, "tags": "c#, .net-datatable" }
How does a telescope lens work?
Question: 1. How does a telescope work? 2. What factors increase the magnification of the lens? Answer: It's not quite clear what you mean by "telescope lens" - do you mean the system of lenses that make up a telescope? If so, there are two basic types. The actual lenses in your telescope are probably more complicated and correct for all kinds of aberrations, but they work like this. The Keplerian telescope (top one in the diagram) consists of two positive lenses, with different focal distances, with their foci at the same point, in between the lenses. Imagine your eye on the left. Two parallel rays will converge to a point at the focus of the right-hand lens, and since this point is also at the focus of the left-hand lens, they will become parallel again, but inverted. Of course you are usually not looking at parallel rays with your telescope, but picturing it this way is what helped me to understand why we see a magnified image. You might think at first glance from the diagram that this makes the image smaller; but what it does is take parallel rays traveling at different angles to the optical axis, and make them parallel again on the other side of the telescope, but traveling at a larger angle. This increases the apparent size of the object. (Google Docs isn't very good for drawing detailed diagrams - you could take a look at the more complicated ones on the Wikipedia page on telescopes.) The Galilean telescope (bottom one in the diagram) consists of a negative and a positive lens, again with their foci at the same point. This time the point is on the outside of the telescope, where your eye is. The positive lens focuses the parallel rays to that point, and the negative lens takes the converging rays and makes them parallel again. This time, the image is not inverted. Then the second part of your question is about magnification.
The focal lengths of the lenses are the only factors that influence the magnification: it is equal to $$M=-\frac{f_2}{f_1}$$ The minus sign seems counter-intuitive, but think about it - we fill in a negative focal length for the negative lens in the Galilean telescope, so the magnification comes out positive. For the Keplerian telescope, the magnification comes out negative - this indicates that the image is magnified, but also inverted.
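The sign bookkeeping in $M=-f_2/f_1$ can be checked in a couple of lines. The answer does not say which lens is $f_1$ and which is $f_2$; the sketch below assumes the usual convention, objective focal length over eyepiece focal length, with arbitrary example values:

```python
def magnification(f_objective, f_eyepiece):
    # Angular magnification M = -f_objective / f_eyepiece
    # Negative result => inverted image, positive => upright
    return -f_objective / f_eyepiece

# Keplerian: two positive lenses -> magnified AND inverted
m_kepler = magnification(1000.0, 25.0)    # -40.0
# Galilean: negative (diverging) eyepiece -> magnified and upright
m_galileo = magnification(1000.0, -25.0)  # +40.0
print(m_kepler, m_galileo)
```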
{ "domain": "physics.stackexchange", "id": 853, "tags": "optics" }
Why were the GW detections at Livingston and Hanford separated by 7 ms if the light travel time between them is 10 ms?
Question: How did a gravitational wave travel from Livingston, Louisiana to Hanford, Washington in 7 milliseconds, when they are separated by 10 milli-light seconds (3002 km)? Answer: The time delay depends on the direction the wave is travelling. If it is travelling along the line connecting Livingston and Hanford then the delay time would indeed be the Livingston-Hanford distance divided by $c$: However suppose the wave was travelling normal to the line connecting the two detectors. In that case the wave would arrive at both of them at exactly the same time and the delay would have been zero: So the delay can be anything from zero up to $d/c$ depending on the direction the wave is travelling. The only real upset would be if the delay was greater than $d/c$ as that would mean the wave was travelling slower than light.
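The geometry in the answer can also be run backwards: the measured delay fixes the angle between the propagation direction and the detector baseline (so the source is constrained to a ring on the sky, not a point). A quick sketch using the numbers from the question:

```python
import math

d = 3.002e6           # Livingston-Hanford separation, m
c = 299_792_458.0     # speed of light, m/s
max_delay = d / c     # ~10.0 ms: wave traveling along the baseline
measured = 0.007      # the ~7 ms delay from the question, s

cos_theta = measured / max_delay        # component of travel along baseline
theta = math.degrees(math.acos(cos_theta))
print(max_delay * 1e3, theta)           # ~10 ms ceiling, angle ~45 degrees
```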
{ "domain": "physics.stackexchange", "id": 31623, "tags": "speed-of-light, gravitational-waves" }
Siphon water from washing machine into sink next to it
Question: I have a top-loading washing machine that has broken down and is full of water. Instead of getting a bucket to manually collect the water, I was thinking I could use a tube to siphon the water out of it, but gravity could possibly get in the way. I don't really want to leave the house to buy a pump, and I figured maybe I don't need one. The sink height is the same as the top of the washing machine; the water will have to travel upwards about 1 foot, then sideways about half a foot, then down into the neighboring sink. Is this possible with just a tube, no pump? Answer: A siphon from one open vessel (A) to another vessel (B) requires the water level in B to be lower than the water level in A. As long as there is not a big air bubble in the siphon and the siphon is full of water, water will flow; but the flow rate will be approximately proportional to the difference in water height in the two vessels. You might do better to siphon into a bucket on the floor, then dump the bucket into the sink.
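To put a number on "flow rate approximately proportional to the height difference": for a long, narrow tube the flow is viscosity-limited (Hagen-Poiseuille regime), which gives exactly that linear scaling. All the dimensions below are guesses for typical thin flexible tubing, not values from the question:

```python
import math

rho, g, mu = 1000.0, 9.81, 1.0e-3   # water: density, gravity, viscosity (SI)
r, L = 0.0015, 2.0                  # tube inner radius and length, m (assumed)

def flow_rate(dh):
    # Hagen-Poiseuille: Q scales linearly with the driving height difference
    return math.pi * r**4 * rho * g * dh / (8 * mu * L)

q1 = flow_rate(0.30)                # 30 cm level difference
q2 = flow_rate(0.05)                # washer nearly drained: flow slows down
print(q1 * 1000, "L/s")             # a few mL per second for this tubing
```

This also shows why a bucket on the floor is the better target: a bigger height difference means a proportionally faster drain.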
{ "domain": "physics.stackexchange", "id": 54726, "tags": "fluid-dynamics" }
Why does the unstabilised Wittig reaction selectively form cis alkenes?
Question: This question is meant for a simple unstabilised ylide. The mechanism of the Wittig reaction, as given on ChemTube3d, involves a concerted formation of the oxaphosphetane (this is generally favoured over the traditional stepwise mechanism with betaine formation): Here the puckered transition state has been clearly shown but involves the direct formation of the oxaphosphetane (without any betaine intermediate). In this transition state, the two possibilities are for the methyl groups to be either cis or trans to each other. Surely the transition state with the methyl groups trans to each other will be more stable. Doesn't this support formation of the trans product? Answer: To my knowledge the mechanism of the Wittig reaction isn't fully resolved yet. But maybe I can give you some ideas about why the Wittig reaction with unstabilized ylides is (Z)-selective (well, with the exception of the Schlosser modification) instead of (E)-selective. In the excellent book by Clayden, Warren, and Greeves, there is a section beginning p. 690 that describes the oxaphosphetane formation as a concerted, antarafacial [2+2]-cycloaddition reaction (similar to your "puckered transition state"). Here, the phosphonium ylide and the carbonyl compound approach each other at right angles. The substituents are arranged in order to minimise steric repulsions in the transition state. In particular, the phenyl group (Ph) points away from the PPh3 group, and then the R group on the ylide points away from the Ph. The formation of the oxaphosphetane is irreversible and kinetically controlled. Hence, even though the trans oxaphosphetane is more stable, it is not formed. (By the way, the book also has some justification why the Wittig reaction of stabilized ylides is E-selective.) Edit: Since the OP asked for it I will try to give some justification on why the ylide and the carbonyl group approach one another perpendicularly. 
For a proper description one would need the frontier orbitals of both compounds and then reason about the orbital interactions that bring about the [2+2] cycloaddition. Since I don't have those I will try to give some oversimplified description that should convey the general principle at work here. In the approximate orbital pictures I'm going to use I'll leave out the substituents for clarity. To begin with, if one imagines the orbital interactions between the ylide and the carbonyl group, it is clear that the most important (primary) interaction for the reaction is the one between the HOMO of the ylide (nucleophile) and the LUMO of the carbonyl group (electrophile). What happens when both groups approach one another head-on or at right angles is shown in the following picture (for the perpendicular approach a top-view is used). In both cases there are as many bonding interactions (black) as there are antibonding interactions (red). These situations are overall non-bonding and cannot lead to the desired product. This is the reason why [2+2] cycloadditions are usually thermally forbidden (by symmetry). However, some compounds have additional orbitals that: (1) are similar in energy to the frontier orbitals drawn in the picture, and (2) possess the right symmetry to interact with the other orbitals. These "secondary orbital interactions" are weaker than the HOMO-LUMO interaction, but might provide enough bonding to tip the balance from an overall non-bonding to a bonding situation. So, how does this translate to the situation of the Wittig reaction? Since O is more electronegative than C, the HOMO of the carbonyl group will have a larger orbital coefficient at O, while the LUMO will have a larger orbital coefficient at C. The situation is similar with the ylide. Since C is more electronegative than P, the HOMO of the ylide group will have a larger orbital coefficient at C. 
The ylide group is the nucleophile so the primary interaction will be between its HOMO and the LUMO of the electrophilic carbonyl group. Since the orbitals interact best where the coefficients are large, this primary interaction will be biased towards C–C bonding. The question is now, is there a suitable secondary interaction? One can argue that P has a lot of orbitals (valence shell extension) and that the LUMO of the ylide might be some empty phosphorus orbital (or at least an orbital with a big coefficient on $\ce{P}$) - this could be a p-orbital perpendicular to the ylide's C=P π-orbitals, or a d-orbital, or something like that. This phosphorus orbital can then interact with the carbonyl group's HOMO, which has its biggest coefficient on O. So this bonding interaction will be biased towards P–O bonding. The situation is shown in the following picture: The biases in the primary and secondary orbital interactions will lead to some distortions from the pure perpendicular approach which will become more pronounced the nearer the two species get to each other. For example, the ylide might rotate a bit so that its C and P atoms are closer to the carbonyl's C and O atoms respectively, in order to maximize the bonding overlap. And of course the perpendicular approach of ylide and carbonyl group will proceed in such a way that their respective biggest substituents are as far away from each other as possible (see diagram above). After this quite lengthy answer I want to make clear that this description of mine is only a guess. The real reaction path is still a matter of debate and there are some compounds that rather react via a radical or ionic pathway. But I think it gives a good explanation for most of the observed behaviour and hope it helped the understanding. A more exact frontier orbital description of another thermal [2+2] cycloaddition, namely that of ketene with ethene, can be found in the textbook by Brueckner on p 653.
{ "domain": "chemistry.stackexchange", "id": 492, "tags": "organic-chemistry, reaction-mechanism, wittig-reactions, stereoselectivity" }
How do I prove the identity for ${\rm tr}_p [e^{-iS\Delta t}(\rho\otimes\sigma)e^{iS\Delta t}]$ in Seth Lloyd's 2014 Quantum PCA Paper?
Question: Equation (1) in Seth Lloyd's paper on Quantum PCA says: $\text{tr}_{p}\text{e}^{-iS\Delta t} \rho \otimes \sigma \text{e}^{iS\Delta t} = \cos^2(\Delta t)\sigma + \sin^2(\Delta t) \rho - i \sin(\Delta t)\cos(\Delta t) [\rho, \sigma]$ where $S$ is the swap matrix, $\Delta t$ is a small slice of time $t/n$, and $\sigma$, $\rho$ are density matrices (we wish to apply $\text{e}^{-i\rho t}$ to density matrix $\sigma$). How would I go about proving this? Attempt: From Wikipedia, we have that $\text{e}^{tA} = \text{e}^{st}[(\cosh(qt) - s \frac{\sinh(qt)}{q}) I + \frac{\sinh(qt)}{q} A]$ where $s = \frac{\text{tr}A}{2}$ and $q = \pm \sqrt{-\text{det}(A-sI)}$, by Cayley-Hamilton. Thus, we can expand: $\text{tr}_{p} \text{e}^{-iS\Delta t} \rho \otimes \sigma \text{e}^{iS\Delta t} = \text{tr}_{p}(\text{e}^{-i\Delta t}(-I + S)\rho \otimes \sigma \text{e}^{i\Delta t} (-I + S)) = \text{tr}_{p}(\text{e}^{-i\Delta t}[\rho \otimes \sigma - S(\rho \otimes \sigma) - (\rho \otimes \sigma) S + \sigma \otimes \rho] \text{e}^{i\Delta t})$ However, I do not see how this simplifies to $\cos^2(\Delta t)\sigma + \sin^2(\Delta t) \rho - i \sin(\Delta t)\cos(\Delta t) [\rho, \sigma]$. Am I making a mistake in my math, or is there a trick that I am not seeing to simplify the expression I obtained down to the one in the paper? Answer: Note that, for any pair of matrices $A,B$, you have $$e^A B e^{-A} = e^{{\rm ad}(A)} B \equiv \sum_{k=0}^\infty \frac{1}{k!}[\underbrace{A,[A,\cdots ,[A}_k,B]\cdots]] \equiv B + [A,B] + \frac12 [A,[A,B]] + \dots,$$ where ${\rm ad}(A)$ denotes the adjoint operator, ${\rm ad}(A):B\mapsto [A,B]$, and the complicated-looking object in the series is a repeated commutator with $k$ terms. Note that, if $S$ is the SWAP operator, then the partial traces give operators (not scalars): $$ \operatorname{Tr}_2\left(S(\rho\otimes\sigma)\right) = \sigma\rho, \qquad \operatorname{Tr}_2\left((\rho\otimes\sigma)S\right) = \rho\sigma.
$$ Apart from showing this explicitly via the matrix elements of the components of the expression, you can see this identity quite nicely in diagrammatic notation. I think this is pretty much all you need to get to the reported expression.
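As a sanity check, Lloyd's Eq. (1) is exact, not just correct to $O(\Delta t^2)$: since $S^2 = I$, one has $e^{-iS\Delta t} = \cos(\Delta t)\,I - i\sin(\Delta t)\,S$, and the partial trace can then be evaluated term by term. A sketch of the check in plain Python (hand-rolled $4\times 4$ linear algebra so no libraries are needed; the sample $\rho$ and $\sigma$ are arbitrary valid density matrices):

```python
import math

# Arbitrary valid 2x2 density matrices (Hermitian, trace 1, positive)
rho   = [[0.7, 0.1 + 0.2j], [0.1 - 0.2j, 0.3]]
sigma = [[0.4, 0.2 - 0.1j], [0.2 + 0.1j, 0.6]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):  # 2x2 (x) 2x2 -> 4x4; the first factor is the "p" register
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def ptrace_p(X):  # trace out the first qubit (the rho register) of a 4x4 matrix
    return [[X[i][j] + X[2 + i][2 + j] for j in range(2)] for i in range(2)]

S  = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]  # SWAP
I4 = [[float(i == j) for j in range(4)] for i in range(4)]

def check(dt):
    """Max |LHS - RHS| entry of Lloyd's Eq. (1) for a given time step."""
    c, s = math.cos(dt), math.sin(dt)
    # S*S = I, so exp(-i S dt) = cos(dt) I - i sin(dt) S holds exactly
    U    = [[c * I4[i][j] - 1j * s * S[i][j] for j in range(4)] for i in range(4)]
    Udag = [[c * I4[i][j] + 1j * s * S[i][j] for j in range(4)] for i in range(4)]
    lhs = ptrace_p(matmul(matmul(U, kron(rho, sigma)), Udag))
    rs, sr = matmul(rho, sigma), matmul(sigma, rho)
    rhs = [[c * c * sigma[i][j] + s * s * rho[i][j]
            - 1j * s * c * (rs[i][j] - sr[i][j]) for j in range(2)]
           for i in range(2)]
    return max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))

print(check(0.3))  # ~1e-16, i.e. the identity holds to machine precision
```

The residual stays at machine precision for any $\Delta t$, confirming that no truncation in $\Delta t$ is involved in Eq. (1) itself; the approximation in the paper only enters when repeating the step $n$ times.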
{ "domain": "physics.stackexchange", "id": 80148, "tags": "quantum-mechanics, quantum-information, quantum-computer, density-operator, trace" }
What insect is in the PHP bugs logo?
Question: Well, maybe this is a question more suited to Stack Overflow, but anyway, I am curious: which insect is the one in the logo of the PHP bugs site? Is it an actual insect or just a generic "bug"? It seems like some kind of Hemiptera to me, probably some kind of "true bug". A quick Google search about these insects turned up the Hemiptera suborder Heteroptera, and this critter is probably from this group. I tried to search by the image in Google Images, but only found the logo itself, in the same low resolution as the image above (I was not able to find a better version of it). I then tried to remove the background from the image, but I still didn't get any photo of the possible species in this logo. EDIT: this was the PHP bugs logo up until 24th July 2017; the next day it was changed to another, different "bug". This question is about the former logo, not the current one. Answer: It is an actual insect, Lethocerus americanus, also known as the Giant Water Bug. It matches the colors, body shape, and small eyes and head. And yes, they are very annoying. As expected from real bugs. You know, like the first one.
{ "domain": "biology.stackexchange", "id": 7750, "tags": "species-identification, entomology" }
A question about gravitation and magnetism
Question: Suppose I have a magnet and I put a piece of iron next to it; then the magnet will attract it. Now if I put a piece of wood in front of the magnet and the piece of iron, the iron will not get attracted. Why? Now I have heard that gravitation is also magnetism, so it should also show properties similar to those of a magnet, but it is not so: even if I put a big building between the ground and the object, it gets attracted. Why? Answer: There are a couple of mistaken assumptions in your questions, which render your questions meaningless. First, wood has almost no effect on a steady magnetic field. The relative permeability of wood is $1.00000043$, which for most purposes is negligibly different from the relative permeability of air ($1.00000037$) or vacuum ($1$). Second, saying that "gravitation is also magnetism" isn't really true. Kaluza-Klein theory provides a unification of gravitation and electromagnetism into one theory, but that doesn't mean that gravitation and electromagnetism are the same thing any more than the electromagnetic force and the strong force can be said to be the "same thing" because quantum field theory provides a model which encompasses them both. Gravitoelectromagnetism also points out similarities between gravity and electromagnetism, but gravitoelectromagnetism is only an analogy that's only approximately valid under a limited set of circumstances. Gravitation and magnetism are different phenomena.
{ "domain": "physics.stackexchange", "id": 28923, "tags": "gravity, magnetic-fields" }
docker: how to use RViz and Gazebo from a container?
Question: "sudo docker run -it osrf/ros:kinetic-desktop-full-xenial" is working and roscore is also working, but "rosrun rviz rviz" is not working. I am using Ubuntu 18.04 and I have installed Docker, and I need to use ROS Kinetic, so for that I have pulled the image "osrf/ros:kinetic-desktop-full-xenial". But now I am not able to use RViz and Gazebo. How do I install and use RViz and Gazebo while using Docker? Originally posted by muhammed rushid s on ROS Answers with karma: 71 on 2018-08-08 Post score: 7 Original comments Comment by NEngelhard on 2018-08-09: Could you please use only a single sentence or question as title? (like the rest of the questions?) It does not make much sense to just paste the same text in both fields.. Comment by gvdhoorn on 2018-08-09: Please see if the information on the wiki/docker pages sheds some light on things. Answer: Hi there, we got RViz to work through nvidia-docker2 (with OpenGL) by following the guide here: http://wiki.ros.org/action/login/docker/Tutorials/Hardware%20Acceleration#nvidia-docker2 We had to modify the script a bit to make it work:

# If not working, first do: sudo rm -rf /tmp/.docker.xauth
# If still not working, try running the script as root.
XAUTH=/tmp/.docker.xauth

echo "Preparing Xauthority data..."
xauth_list=$(xauth nlist :0 | tail -n 1 | sed -e 's/^..../ffff/')
if [ ! -f $XAUTH ]; then
    if [ ! -z "$xauth_list" ]; then
        echo $xauth_list | xauth -f $XAUTH nmerge -
    else
        touch $XAUTH
    fi
    chmod a+r $XAUTH
fi

echo "Done."
echo ""
echo "Verifying file contents:"
file $XAUTH
echo "--> It should say \"X11 Xauthority data\"."
echo ""
echo "Permissions:"
ls -FAlh $XAUTH
echo ""
echo "Running docker..."
docker run -it \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --env="XAUTHORITY=$XAUTH" \
    --volume="$XAUTH:$XAUTH" \
    --net=host \
    --privileged \
    --runtime=nvidia \
    our-nvidia-based-ros-melodic-image-plus-nvidia-env-vars-from-the-guide:latest \
    bash

echo "Done."
Note 1: We added the tail -n 1 into the xauth sequence since the original command resulted in two identical lines after the sed replacement. So we only took one of them. Note 2: The --net=host and --privileged are only needed if you want to join the host pc network. We added it to test communication with RViz by playing a bag-file from the outside using rosbag play and seeing the images being received by RViz. Originally posted by JBruun with karma: 46 on 2020-07-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by sgarciav on 2021-06-09: The link you provided for installing the nvidia-docker2 package was excellent thanks! I did find these instructions for installing the package easier to follow. The provided script for running the container did not work for me. I had to rely on docker-compose to build the image and run the container. You can find the docker-compose.yml file I'm using in this Github gist. Make sure you download it into the same directory where you have your Dockerfile. cd into the directory where both files live and execute the following:

$ docker-compose build: to build the image
$ docker-compose up -d: to spin the container
$ docker exec -it [CONTAINER NAME] /bin/bash: to be dropped into the container

At this point you should be dropped into the container and you Comment by zkytony on 2022-01-11: This worked for me. Thanks! You don't have to use nvidia. I built a Dockerfile from the ros:kinetic image.
{ "domain": "robotics.stackexchange", "id": 31502, "tags": "docker, ros-kinetic" }
Infinite potential well and spin $\frac{1}{2}$
Question: problem A particle with spin $\frac{1}{2}$, mass $m$ and electric charge $q$ is bound in the 2-dimensional infinite potential well. (a) Find the eigenvalues and eigenstates of this particle. (b) Put a particle with spin $1/2$ in the well. The interaction of the two particles is represented by $V = -\alpha \vec{S} \cdot \vec{B}$ and $\vec{B}$ has only the $\hat z$ direction. Find the first and second order corrections of the energy by perturbation theory. I have three questions about the above problem. It is easy to solve (a). $$\psi = \sqrt{\frac{2}{a}} \sqrt{\frac{2}{b}} \sin {\frac{n\pi x}{a}}\sin \frac{l\pi y}{b}$$ $$E = E_n + E_l = \frac{n^2 \pi^2 \hbar^2}{2ma^2} +\frac{l^2 \pi^2 \hbar^2}{2mb^2}$$ Here, I'm confused about how to write the answer to (a). The problem gives us the spin $1/2$. Do we have to add the spin eigenstate factor to the above one? Like below. $$\psi = \sqrt{\frac{2}{a}} \sqrt{\frac{2}{b}} \sin {\frac{n\pi x}{a}}\sin \frac{l\pi y}{b} {( \alpha |+\rangle + \beta |-\rangle)} $$ Here $|+\rangle, |-\rangle$ are spin up and down. If we have to add the spin factor to the eigenstate when writing the answer, how can we find $\alpha$ and $\beta$? That is the first question of this problem. Now, in (b), do we have to add the energy of the "new" particle to the first order correction? That is, should $$E' = E_{n'} + E_{l'} = \frac{n'^2 \pi^2 \hbar^2}{2m'a^2} +\frac{l'^2 \pi^2 \hbar^2}{2m'b^2}$$ be added to the first-order correction? I think that is false, isn't it? I think we have to find the first and second order corrections of $V$. Am I right? Moreover, $V$ is represented by a $2 \times 2$ matrix. I don't know how to solve it. Here I rearrange my questions. Do we have to add the spin eigenstate factor to the answer? Do we have to add the energy of the "new" particle in calculating the first order correction? $V$ is represented by a matrix and the eigenstate is represented by a real function. How can we solve (b)? Answer: For part (a), spin does not play any role.
Spin matters only if there is an external magnetic/electric field or if there are multiple particles (the exclusion principle does not allow two fermions to occupy the same eigenstate). For part (b), all you have to do is find the expectation value of the perturbing potential. As you pointed out, the perturbing potential is really an operator, and therefore a 2 by 2 matrix. Find the eigenvalues of this 2 by 2 matrix in order to find the energy correction.
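Since $\vec B = B\hat z$, the spin part of (b) indeed reduces to diagonalizing a $2 \times 2$ matrix. A sketch of this step, assuming the perturbation acts on a single spin as $V = -\alpha B S_z$ (for two spins, replace $S_z$ by $S_{1z} + S_{2z}$ and work in the four-dimensional product basis):

```latex
V = -\alpha B S_z
  = -\frac{\alpha B \hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\qquad
E^{(1)}_{\pm} = \langle \pm | V | \pm \rangle = \mp \frac{\alpha B \hbar}{2}.
```

Because $V$ is already diagonal in the $S_z$ basis and does not touch the spatial wavefunction, it has no off-diagonal matrix elements between unperturbed eigenstates, so every term $|\langle k|V|n\rangle|^2$ with $k \neq n$ vanishes and the second-order correction $E^{(2)}_n = \sum_{k\neq n} |\langle k|V|n\rangle|^2/(E_n - E_k)$ is zero.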
{ "domain": "physics.stackexchange", "id": 32915, "tags": "quantum-mechanics, homework-and-exercises, schroedinger-equation, potential, perturbation-theory" }
Number of photons through glass
Question: I am getting mixed information on the total photon count involved with light transmission through glass. I'm not looking for percentages and I don't have equipment to count photons. For simplicity, say you shine a laser that produces 1000 photons per second through glass. I know that depending on the glass thickness the reflection from the front surface can range between 0 and 16%, but that doesn't tell me the number of photons. Aside from photons absorbed in the glass (hopefully minimal), are there charts that show the actual photon counts: (1) After going through the glass. (2) Reflected back. Are there charts for different thicknesses of glass? For example, could a light source of 1000 photons per second transmit 656 photons, reflect 125, and refract or absorb the other 219? In short, are all the photons accounted for? Thanks Answer: In the world of linear optics we assume that $T+R+A=1$, Transmitted, Reflected, Absorbed. For optical glass absorptance is very low, so set $A=0$. This leads us to the result described in the comments: $T=1-R$. I use this when doing quantum optics in the lab; in order to maximize transmission we use (a) fine optics that are designed for the wavelengths being used, (b) anti-reflection coatings, (c) the minimum number of passive optical elements required to get the job done. In the end you will always lose some photons, but if the system works for large numbers of photons, it will also (most of the time) work for small numbers of photons. This is because most passive optical devices can be modeled as unitary operators and there is (usually) no feedback into the laser cavity, so the experimental Hamiltonian isn't being perturbed. Once you have your experiment set up and working properly you can test it with phase plates of various thickness, and report back if the quantum counting still agrees with the $T=1-R$ prediction.
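To put illustrative numbers on $T = 1 - R$: at normal incidence the single-surface reflectance follows from the Fresnel equations, and for a thick slab the two surfaces can be summed incoherently (ignoring interference). A sketch in Python, assuming ordinary glass with refractive index $n \approx 1.5$ and $A = 0$:

```python
def fresnel_reflectance_normal(n1, n2):
    """Normal-incidence Fresnel power reflectance of a single interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def slab_counts(n_photons, n_glass, n_air=1.0):
    """Expected transmitted/reflected photon counts for a lossless slab
    (A = 0), summing the multiple internal reflections incoherently
    (thick glass, no interference)."""
    R = fresnel_reflectance_normal(n_air, n_glass)
    T_slab = (1 - R) / (1 + R)   # total transmitted fraction
    R_slab = 2 * R / (1 + R)     # total reflected fraction; T_slab + R_slab == 1
    return n_photons * T_slab, n_photons * R_slab

t, r = slab_counts(1000, 1.5)
print(round(t, 1), round(r, 1))  # 923.1 76.9 -> all 1000 photons accounted for
```

For $n = 1.5$ each surface reflects 4%, so the slab as a whole reflects about 77 of 1000 photons and transmits about 923 on average: every photon is accounted for, though individual one-second counts fluctuate statistically around these expectation values.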
{ "domain": "physics.stackexchange", "id": 29535, "tags": "quantum-mechanics, photons, reflection" }
Can I store the Subscriber object on the heap?
Question: Hi, I'd like to subscribe to topics dynamically. This is a simplified version of my current solution:

class MyClass {
  std::vector<ros::Subscriber> rosSubscriberList;

  void doSubscribe() {
    ros::Subscriber sub = n.subscribe("chatter", 1000, &whatevercallback);
    rosSubscriberList.push_back(sub);
  }
}

Which works, but I was just wondering whether it is safe to do this. I believe the subscriber object gets copied into the vector, then gets destructed. So for a short time there are two subscribers? Also, when the MyClass object gets destructed, does the subscriber in the list get destructed too? I was wondering if it would be better to have a vector of type std::vector<ros::Subscriber*> and handle object (subscriber) destruction manually. But then there is a copy in that solution too. Which one would you suggest? Originally posted by Gabor Juhasz on ROS Answers with karma: 13 on 2013-04-08 Post score: 0 Answer: Yes, the rosSubscriberList will also be destroyed with the class and thus all its subscribers. Using Subscriber* in the list would solve that, but: You'll get some undestroyable pointers/memory leaks. You could use shared_ptr here in the list, but still you'd need to copy those out to somewhere. Given that the list belongs to the class, intuitively I'd say that this is the behaviour you'd want. If not, maybe a redesign might be better suited. Originally posted by dornhege with karma: 31395 on 2013-04-08 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Gabor Juhasz on 2013-04-14: Yes, that is the behavior I want. I want to clean up everything when my class gets destroyed. Thanks for explaining.
{ "domain": "robotics.stackexchange", "id": 13728, "tags": "ros, c++, roscpp, stack" }
2 dimensional massless scalar field propagator in position space
Question: I have been trying to calculate the massless scalar field propagator in position space by directly Fourier transforming the momentum space propagator. $$\int{d^2p\frac{1}{(p^0)^2-(p^1)^2}e^{-i(p^0t-p^1x)}}$$ Upon referring to multiple sources (linked below), I realize that the answer is actually proportional to $\ln|x|$ but I don't see how this integral will result in that answer. All of these sources obtain that answer by finding the massive propagator and then taking the $m\rightarrow 0$ limit. I don't see what I am missing by directly doing the above integral. To see how the above integral does not give $\ln|x|$: Evaluate the $dp^0$ integral using the Feynman prescription for avoiding the poles and this will give: $$\int\frac{i}{2\pi p^1}e^{-ip^1 (t-x)}dp^1$$This integral is actually a constant multiplied by a step function. I also ran into a similar problem in the (1+3)-D case where a direct Fourier transform gives a different answer from the known propagator and from the answer obtained by taking the limit on the massive case. So, what am I missing by directly Fourier transforming the propagator from momentum space? Sources: http://max2.physics.sunysb.edu/~rastelli/HW4Solutions.pdf H. Zhang, K. Feng, S. Qiu, A. Zhao and X. Li, "On analytic formulas of Feynman propagators in position space", Chinese Phys. C 34 (2010) 1576, arXiv:0811.1261. Phys.SE Q: Two-point function of massless scalar theory in 2d CFT Phys.SE Q: Massless limit of the Klein-Gordon propagator Answer: The idea for this kind of computation is the following. Firstly, add a mass term to the propagator. This will yield $$ \int \frac{d^2p}{(2\pi)^2}\frac{1}{p^2-m^2}e^{ip\cdot x}. $$ This integral can be evaluated provided we make the rotation $p_0\rightarrow ip_0$ that yields $$ i\int \frac{d^2p}{(2\pi)^2}\frac{1}{p^2+m^2}e^{ip\cdot x}.
$$ Now, one uses $d^2p=pdpd\theta$ and $p\cdot x = pr\cos\theta$ and one has to evaluate the integral $$ \frac{1}{4\pi^2}\int_0^\infty dp\int_0^{2\pi}d\theta\frac{p}{p^2+m^2}e^{ipr\cos\theta}. $$ First, we integrate over $\theta$. This can be done by remembering that $$ e^{ia\cos\theta}=\sum_{n=-\infty}^\infty i^nJ_n(a)e^{in\theta} $$ where $J_n$ are the Bessel functions of the first kind of integer order. Integration in $\theta$ leaves just $J_0$, and so our integral becomes $$ -\frac{1}{4\pi^2i}\int_0^\infty dp\frac{p}{p^2+m^2}J_0(pr). $$ This integral can be evaluated with techniques of complex integration, with a proper choice of the integration path, yielding $$ G(r)=-\frac{1}{2\pi}K_0(mr) $$ where $K_0$ is the modified Bessel function of order zero. This is the point to which the references you cite bring you. The next step is to note that, for $m\rightarrow 0$, the massless limit, $$ K_0(mr)\sim -\ln r $$ and you are done. Note the presence of an infinite constant, $\ln m$, that is generally omitted when taking the massless limit. The reason is that, in the massless limit, one can always add an arbitrary constant to the propagator.
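The small-argument behaviour $K_0(mr)\sim -\ln r$ quoted at the end can be checked numerically from the integral representation $K_0(x)=\int_0^\infty e^{-x\cosh t}\,dt$. A plain-Python sketch (composite Simpson's rule, no external libraries; for small $x$ one expects $K_0(x)\approx \ln 2-\gamma-\ln x$, with $\gamma$ the Euler–Mascheroni constant):

```python
import math

def K0(x, t_max=40.0, n=40001):
    """Modified Bessel function K0 via its integral representation
    K0(x) = ∫_0^∞ exp(-x cosh t) dt, using composite Simpson's rule."""
    h = t_max / (n - 1)          # n is odd, so Simpson's rule applies
    total = 0.0
    for i in range(n):
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        total += w * math.exp(-x * math.cosh(i * h))
    return total * h / 3

gamma = 0.5772156649015329       # Euler–Mascheroni constant
for x in (1e-2, 1e-3):
    # small-argument limit: K0(x) -> ln 2 - gamma - ln x
    print(x, K0(x), math.log(2) - gamma - math.log(x))
```

For $x = 10^{-3}$ the quadrature agrees with $\ln 2-\gamma-\ln x\approx 7.02$ to better than $10^{-3}$; this logarithmic divergence in $x=mr$ is exactly the $\ln r$ (plus the divergent constant $\ln m$) of the massless propagator.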
{ "domain": "physics.stackexchange", "id": 84540, "tags": "homework-and-exercises, field-theory, propagator" }
Fetching uniprot seq using bash command
Question: I am trying to get a protein sequence using this command:

curl https://www.uniprot.org/uniprot/A2Z669.fasta

But I don't get any output:

$ curl https://www.uniprot.org/uniprot/A2Z669.fasta
$

And no file is created. Answer: The URL https://www.uniprot.org/uniprot/A2Z669.fasta redirects to https://rest.uniprot.org/uniprotkb/A2Z669.fasta. You can see this if you open it in your browser: clicking on the first link will take you to the address of the second. Now, curl doesn't follow redirects by default, you need the -L option for that (see man curl):

-L, --location
    (HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option makes curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages are shown.

So you need -L for curl to work here:

$ curl -L https://www.uniprot.org/uniprot/A2Z669.fasta
>sp|A2Z669|CSPLT_ORYSI CASP-like protein 5A2 OS=Oryza sativa subsp. indica OX=39946 GN=OsI_33147 PE=3 SV=1
MRASRPVVHPVEAPPPAALAVAAAAVAVEAGVGAGGGAAAHGGENAQPRGVRMKDPPGAP
GTPGGLGLRLVQAFFAAAALAVMASTDDFPSVSAFCYLVAAAILQCLWSLSLAVVDIYAL
LVKRSLRNPQAVCIFTIGDGITGTLTLGAACASAGITVLIGNDLNICANNHCASFETATA
MAFISWFALAPSCVLNFWSMASR

Or, alternatively, you can use wget which does follow redirects by default:

$ wget "https://www.uniprot.org/uniprot/A2Z669.fasta"
--2023-12-10 16:21:46-- https://www.uniprot.org/uniprot/A2Z669.fasta
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving www.uniprot.org (www.uniprot.org)... 193.62.193.81
Connecting to www.uniprot.org (www.uniprot.org)|193.62.193.81|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://rest.uniprot.org/uniprot/A2Z669.fasta [following]
--2023-12-10 16:21:46-- https://rest.uniprot.org/uniprot/A2Z669.fasta
Resolving rest.uniprot.org (rest.uniprot.org)... 193.62.193.81
Connecting to rest.uniprot.org (rest.uniprot.org)|193.62.193.81|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://rest.uniprot.org/uniprotkb/A2Z669.fasta [following]
--2023-12-10 16:21:46-- https://rest.uniprot.org/uniprotkb/A2Z669.fasta
Reusing existing connection to rest.uniprot.org:443.
HTTP request sent, awaiting response... 200 OK
Length: 314 [text/plain]
Saving to: ‘A2Z669.fasta.1’

A2Z669.fasta 100%[=======================================>] 314 --.-KB/s in 0s

2023-12-10 16:21:47 (8.57 MB/s) - ‘A2Z669.fasta.1’ saved [314/314]

That creates a A2Z669.fasta file in the current directory which will contain the sequence.
{ "domain": "bioinformatics.stackexchange", "id": 2626, "tags": "uniprot" }
Which literature/study/myth spurred the idea that urine is sterile?
Question: There is a seemingly unfounded mantra that the urine of a person not suffering from a UTI is sterile. So far I have found convincing evidence to the contrary; urine is not sterile. This poster abstract and publication by Hilt et al., 2014 shows that urine may not be sterile. They list 6 other articles (references 3-8) that also found evidence for microflora in healthy urine, the urethra, and the bladder. Some of those studies used males, and some used females. This makes a convincing case for non-sterile urine. But why does this mantra persist; where does it come from? Is there a paper that demonstrates the sterility of urine that I have missed? Even if it is old literature, or pop culture, I'd appreciate it. Answer: But why does this mantra persist; where does it come from? Behind this line of inquiry is a fascinating observation about the disconnect between the laboratory and the clinic. The basic answer, from my perspective, goes like this: It is doctors who talk with people about their urine, and any testing thereof; for doctors, urine is practically1 sterile; and among those who are aware of the microbiologic reality, doctors inevitably gloss details when explaining laboratory results to patients. The average clinician learns most of what he knows about his patients' urine from urinalyses. Most clinical urinalysis assays show negative nitrites and negative leukocyte esterase in normal people. A glance at the reference ranges for a clinical assay will demonstrate this. (See also the clinical review below.) Ideally, urinalyses should not be performed in patients without relevant symptoms, but they are sent anyway, and they're mostly negative.2 Even in people who have symptoms possibly suggesting a UTI they're frequently negative. Furthermore, urine cultures from properly collected specimens grown on the media used in clinical laboratories mostly return at 72 hours: "no growth". This point is emphasized in the abstract by Hilt et al.
linked in the question. Of the cultures that grew in their "expanded quantitative urine culture" protocol, 90% of these specimens were deemed "No Growth" by the standard urine culture technique, highlighting its limitations. The last phrase "highlighting its limitations" is an interesting one. It appears to me to be yet to be proven that this is a clinically meaningful "limitation" of such assays. Without attempting to refute these data — fortunately this question doesn't require me to do so — I will note that the two linked studies aimed to show that people with overactive bladder are characterized by increased urinary flora. The explicitly stated hypothesis is that this may contribute to symptoms. Even in this study demonstrating the existence of such a micro-biome even in normal people, then, it is understood as a potentially pathologic state. The reason this "mantra persists", then, is that it remains true for clinical purposes. In contrast to most (all?) other bodily discharges, urine is not packed with bacteria, and the commonly used assays reflect this fact. Medicine and laboratory science have different "modes of discourse", each calibrated to convey levels of precision that are appropriate to the outcomes of interest. Notes 1. In American English at least, that adverb is ambiguous, meaning either "for practical purposes" or (idiomatically) "almost". Both senses are intended here. 2. Of course, there are plenty of data in various populations demonstrating that some substantial minority of urinalyses among asymptomatic people return positive, "proving" that we shouldn't be checking them (amen!). But this is not the point. The gestalt remains: normal = negative. 3. The nitrite parameter in particular is not especially sensitive. It is, however, quite specific (92-100%), meaning it reliably returns negative in people without infection. (See review, below.) 
Of course, a negative nitrite result does not mean sterile in the sense that the OP has used it; see "modes of discourse", above. Reference Simerville JA, et al. Urinalysis: A Comprehensive Review. Am Fam Physician. 2005 Mar;71(6):1153-1162.
{ "domain": "biology.stackexchange", "id": 5368, "tags": "human-biology, microbiology, literature" }
Cavity and black body radiation
Question: When one says that a cavity with a hole produces blackbody radiation to a good approximation, is the blackbody the hole itself, i.e. the place where the radiation exits the cavity? Then the concept would make much more sense to me; however, I find it hard to accept that the blackbody is then only a thought object and not a material object, because we are only talking about the hole. Answer: Black body was originally defined to be a thought object, but in a slightly different sense: it is a hypothetical object which absorbs all EM radiation falling on its surface, and reflects (from its surface or inside) none of the incoming EM radiation (it also transmits no incoming radiation to the other side). It can however emit EM radiation of its own, in the sense that characteristics of the emitted radiation such as frequency spectrum or angular distribution of emission intensity depend only on the state of the radiating body (its absolute temperature), not on other sources which may have supplied the radiation energy in the past. Then the question of how to approximate this hypothetical body for the purpose of measuring black body radiation arose. There are several possibilities, and there was an important series of experiments by Lummer and Kurlbaum, and by Lummer and Pringsheim. They made a specially shaped cavity with metallic walls reflective to EM radiation, with a large inside surface covered with a well-absorbing layer (metal oxides), and with a small hole in the reflective wall to study properties of the emitted radiation. Such a cavity is not itself a black body in the original sense, because it reflects a substantial part of incoming EM radiation on its outside due to its metallic reflective walls; but the hole in the cavity behaves as a black body surface of the same size as the hole, because any radiation energy that comes in through the hole will lose all its characteristics, and those cannot be detected in the outgoing radiation from the hole.
This is because the radiation that comes in through the hole interacts with the absorbing layers in such a way that its energy gets dissipated into many different frequencies and directions. A radiation beam reflects many times and dilutes itself in frequency and direction; the hole emits radiation that reveals only the temperature of the cavity's interior, nothing else. This makes the inside of the cavity behave as a perfectly absorbing body, a black body.
{ "domain": "physics.stackexchange", "id": 88351, "tags": "electromagnetic-radiation, radiation, thermal-radiation" }
What is the Bloch sphere representation of $\rho\to\mathcal{E}(\rho) = |+\rangle\langle+|ρ|+\rangle\langle+| + |−\rangle\langle−|ρ|−\rangle\langle−|$?
Question: Suppose a projective measurement is performed on a single qubit in the basis $|+\rangle, |−\rangle$, where $|±\rangle \equiv (|0\rangle\pm |1\rangle)/\sqrt{2}$. In the event that we are ignorant of the result of the measurement, the density matrix evolves according to the equation $$ \rho\to\mathcal{E}(\rho) = |+\rangle\langle+|\rho|+\rangle\langle+| + |-\rangle\langle-|\rho|-\rangle\langle-| $$ Illustrate this transformation on the Bloch sphere. This is given as Exercise 8.15 on Page 378, Chapter 8, Quantum Computation and Quantum Information by Nielsen and Chuang. My Attempt Thanks @GaussStrife for pointing out the mistake. $|+\rangle\langle+|=\frac{1}{2}\begin{bmatrix}1&1\\1&1\end{bmatrix}$ and $|-\rangle\langle-|=\frac{1}{2}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}$ $$ \rho=\frac{1}{2}[I+\vec{r}\cdot\vec{\sigma}]=\frac{1}{2}\begin{bmatrix}1+z&x-iy\\x+iy&1-z\end{bmatrix} $$ $$ \mathcal{E}(\rho)=\frac{1}{8}\begin{bmatrix}1&1\\1&1\end{bmatrix}\begin{bmatrix}1+z&x-iy\\x+iy&1-z\end{bmatrix}\begin{bmatrix}1&1\\1&1\end{bmatrix}+\frac{1}{8}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}\begin{bmatrix}1+z&x-iy\\x+iy&1-z\end{bmatrix}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}\\ =\frac{1}{8}\begin{bmatrix}2+2x&2+2x\\2+2x&2+2x\end{bmatrix}+\frac{1}{8}\begin{bmatrix}2-2x&-2+2x\\-2+2x&2-2x\end{bmatrix}=\frac{1}{2}\begin{bmatrix}1&x\\x&1\end{bmatrix} $$ My understanding is that, in the Bloch sphere representation, an arbitrary trace-preserving quantum operation is equivalent to an affine map (see: Affine map of single qubit quantum operations) $$ \vec{r}\xrightarrow{\mathcal{E}}\vec{r}'=M\vec{r}+\vec{c} $$ and we have $I=\begin{bmatrix}1&0\\0&1\end{bmatrix},X=\begin{bmatrix}0&1\\1&0\end{bmatrix},Y=\begin{bmatrix}0&-i\\i&0\end{bmatrix},Z=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$ $$ \frac{1}{2}\begin{bmatrix}1&x\\x&1\end{bmatrix}=\frac{1}{2}\Big[I+xX\Big] $$ Therefore, under the quantum operation, the Bloch vector is transformed as, 
$\vec{r}=(x,y,z)\xrightarrow{\mathcal{E}}\vec{r}'=(x,0,0)$ If we were to carry out the projective measurement on the basis $|0\rangle,|1\rangle$ then it would be $$ \mathcal{E}(\rho)=|0\rangle\langle 0|\rho|0\rangle\langle 0| + |1\rangle\langle 1|\rho|1\rangle\langle 1|\\ =\frac{1}{2}\begin{bmatrix}1+z&0\\0&1-z\end{bmatrix}\\ =\frac{1}{2}[I+zZ] \implies \vec{r}'=(0,0,z) $$ Answer: Let $\rho'=\mathcal{E}(\rho)$. We can find out the elements of the Bloch vector by calculating $$ \vec{n}'=(\text{Tr}(X\rho'),\text{Tr}(Y\rho'),\text{Tr}(Z\rho')). $$ Consider the $Z$ term first: $$ \text{Tr}(Z\rho')=\text{Tr}(Z|+\rangle\langle +|\langle +|\rho|+\rangle+Z|-\rangle\langle -|\langle -|\rho|-\rangle)=\langle +|Z|+\rangle\langle +|\rho|+\rangle+\langle -|Z|-\rangle\langle -|\rho|-\rangle $$ But you can quickly check that $\langle +|Z|+\rangle=0$. The $Y$ term is similar. Now consider the $X$ term $$ \text{Tr}(X\rho')=\text{Tr}(X|+\rangle\langle +|\langle +|\rho|+\rangle+X|-\rangle\langle -|\langle -|\rho|-\rangle)=\langle +|\rho|+\rangle-\langle -|\rho|-\rangle=\text{Tr}(X\rho). $$ Thus, if we started with a Bloch vector $\vec{n}=(n_X,n_Y,n_Z)$, it transforms into $\vec{n}'=(n_X,0,0)$. This should be quite straightforward to visualise on the Bloch sphere as a projection onto the $x$ axis.
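This projection can also be verified numerically; a quick sketch (NumPy only, with an arbitrary valid Bloch vector chosen for illustration):

```python
import numpy as np

# Pauli matrices and the |+>, |-> projectors
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)
plus = np.array([[1], [1]]) / np.sqrt(2)
minus = np.array([[1], [-1]]) / np.sqrt(2)
Pp, Pm = plus @ plus.conj().T, minus @ minus.conj().T

def bloch(rho):
    """Bloch vector (Tr(X rho), Tr(Y rho), Tr(Z rho)) of a density matrix."""
    return np.real([np.trace(P @ rho) for P in (X, Y, Z)])

# Arbitrary state with Bloch vector (x, y, z), |r| <= 1
x, y, z = 0.3, -0.4, 0.5
rho = 0.5 * (I + x * X + y * Y + z * Z)

rho_out = Pp @ rho @ Pp + Pm @ rho @ Pm   # measure in |+>,|-> and forget outcome
print(bloch(rho_out))                     # ~ (0.3, 0, 0): only n_X survives
```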
{ "domain": "quantumcomputing.stackexchange", "id": 4269, "tags": "textbook-and-exercises, quantum-operation, nielsen-and-chuang, bloch-sphere" }
Will the bar rotate about the center of mass or not?
Question: Here, $O$ is the center of mass of the unconstrained rigid bar: none of the points, including $O$, are affixed. According to Salman Khan, due to $\vec{F}$ the bar will start to rotate about $O$. However, according to @Asad, the rotation will not be about $O$. Who is correct? Answer: After some useful conversations with @Asad, I think this question is ill-posed, in the sense that it can be answered in different ways depending on one's point of view. The one invariant statement that everyone should agree on is that the motion is not purely a rotation about the center of mass, nor is it purely a translation. At this point, you can make different statements depending on how you interpret the question. After some discussions with @Asad, I think his point of view (which is reasonable) is that this question is asking whether the motion is a pure rotation about the center of mass. Then, the answer is no. The motion is a pure rotation about the center of rotation, which is a point on the rod that is not moving at all, at least instantaneously. One can say the rod is undergoing a pure rotation about this point, instantaneously. From this point of view, the equations of motion of the rod are irrelevant for answering the question "what point is the rod rotating around." The question is purely kinematic and geometrical, and should be answered in those terms. Another point of view (which is the way I would tend to interpret the question, and I think some others in the comments) is that even though the question is, on its face, just about what the rod is doing at the instant the force is applied, in physics we are usually interested in how to solve the dynamical equations of motion for the rod. When we solve for the motion of a freely rotating rigid body, it is useful to decompose the motion into a translation of the center of mass, and a rotation about the center of mass. This decomposition simplifies the equations of motion for the rod. 
If we perform this decomposition, then the motion is a combination of a translation of the center of mass, and a rotation about the center of mass. The motion of the bar is not simply a translation; there is some component of "rotation about the center of mass" in this decomposition. To summarize, I would say the answer to the question in the title, "will the bar rotate about the center of mass?", depends a bit on how you interpret the English language. Everyone should agree that the bar will not just rotate about the center of mass, nor will the motion simply be a translation of the bar. Beyond that, it is correct to say both "the bar instantaneously rotates about the center of rotation, which is not the center of mass in this example" (which might make you say "no" to the question in the title), and "the bar moves in a combination of translation of the center of mass and rotation about the center of mass" (which might make you say "yes").
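The two views can be reconciled with a short calculation. A sketch (all numbers are made up: a uniform bar of mass m and length L, with F applied perpendicular to the bar a distance d from the center) locating the instantaneous center of rotation from the standard decomposition:

```python
# Uniform bar, force F applied perpendicular to the bar at distance d
# from the center of mass O. Decompose into translation of O plus
# rotation about O, then find the point whose acceleration is zero.
m, L, F, d = 2.0, 1.0, 10.0, 0.3   # made-up values (SI units)

I_cm = m * L**2 / 12               # moment of inertia about the center
a_cm = F / m                       # translational acceleration of O
alpha = F * d / I_cm               # angular acceleration about O

# A point at distance s from O, on the side opposite the force, has
# total acceleration a_cm - alpha * s; it vanishes at:
s_zero = a_cm / alpha              # = I_cm / (m d) = L^2 / (12 d)
print(s_zero)                      # 0.2777..., which lies on the bar (< L/2)
```

So the "center of rotation" sits a distance L²/(12d) from the center of mass, on the opposite side from the applied force; both descriptions refer to the same motion.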
{ "domain": "physics.stackexchange", "id": 87285, "tags": "newtonian-mechanics, rotational-dynamics, reference-frames, torque" }
Range search in a max-heap
Question: I am having trouble coming up with a suitable algorithm for this question. A max-heap is essentially visualized as a binary tree, not a binary search tree. Also, the runtime of the algorithm must depend only on the number of elements in the output. I was thinking of doing a preorder traversal on the max-heap. While doing the preorder traversal, if the value of a node is less than the given value x, we return to the previous recursive call. All child nodes in a max-heap are no greater than the parent node. Otherwise we output the current node and recur on the children. I am not sure however if the runtime of this algorithm depends only on the number of elements in the output. Anybody have other suggestions/thoughts? Thanks. Answer: The algorithm you're describing is basically correct. To summarize in one sentence: "If the current value is less than $x$, turn back." The reason it has the desired time complexity is that the set of nodes you visit is precisely: the root node, plus all the nodes which are direct descendants of the nodes in the output (since every node has no more than two children, there are no more than $2k$ of these). So the total number of times the body of your function gets executed is no more than $2k+1$.
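A sketch of this pruned traversal in Python, using the usual array representation of a binary heap:

```python
def larger_than(heap, x, i=0, out=None):
    """Return all values > x in an array-based max-heap rooted at index i."""
    if out is None:
        out = []
    # Stop descending as soon as a value <= x is seen: in a max-heap,
    # every descendant of that node is also <= x.
    if i < len(heap) and heap[i] > x:
        out.append(heap[i])
        larger_than(heap, x, 2 * i + 1, out)   # left child
        larger_than(heap, x, 2 * i + 2, out)   # right child
    return out

heap = [90, 70, 80, 10, 60, 30, 75]   # a valid max-heap in array form
print(sorted(larger_than(heap, 65)))  # [70, 75, 80, 90]
```

Each recursive call is either on an output node or on a direct child of one, which is exactly the $2k+1$ bound from the answer.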
{ "domain": "cs.stackexchange", "id": 16723, "tags": "binary-trees, heaps" }
Splitting an NSArray into an NSDictionary of array more elegantly
Question: I need to split one NSArray into NSDictionary. Every key in NSDictionary will contain an NSArray with the object with the same value. i.e. I have an array with 1000 customers and I want create an NSDictionary based on their zip code. I wrote this code into an NSArray category and it works, but I'm looking for a better name and a way (if it exists) to do the same job with the KVC. -(NSDictionary *)groupArrayWithBlock:(id<NSCopying> (^)(id obj))block { NSMutableDictionary *dictionary = [NSMutableDictionary dictionary]; for (id obj in self) { id<NSCopying> key = block(obj); if (! dictionary[key]) { NSMutableArray *arr = [NSMutableArray array]; dictionary[key] = arr; } [dictionary[key] addObject:obj]; } return [dictionary copy]; } Answer: As far as name of the method is concerned I have two points: There is no need of the work Array here, as it is NSArray instance method. The work With gives wrong impression here. (the phrase "group a with b" will generally mean to group them together). So in my opinion groupUsingBlock or dictionaryGroupedUsingBlock, might be better. Regarding KVC, if grouping is required to be done on a single property and that property is in complaint with the standard, you can have the function as following: -(NSDictionary *)groupByKey:(NSString *) key { NSMutableDictionary *dictionary = [NSMutableDictionary dictionary]; for (id obj in self) { id keyValue = [obj valueForKey:key]; NSMutableArray *arr = dictionary[keyValue]; if (! arr) { arr = [NSMutableArray array]; dictionary[keyValue] = arr; } [arr addObject:obj]; } return [dictionary copy]; } This will make the method slightly easy to use, but will also seriously limit the flexibility. So I would suggest that you keep the method which uses Block and implement the KVC version of the method by using it, like following: -(NSDictionary *)groupByKey:(NSString *) key { return [self groupUsingBlock:^(id obj) { return [obj valueForKey:key]; }]; }
{ "domain": "codereview.stackexchange", "id": 5956, "tags": "array, objective-c, hash-map" }
Amortised cost - transferring tokens
Question: I'm trying to solve a problem from one of the older exams. Question: There's an infinite, one-dimensional board, with fields numbered consecutively $\ldots, -2, -1, 0, 1, 2, \ldots$ A move in the game consists of selecting a field and placing a token on it. If after placing the token it turns out that on two adjacent fields there are an equal number of tokens (at least one each), then we move all tokens from one of these fields to the other, clearing the first field and doubling the number of tokens on the second field. If there's a choice between two adjacent fields, we make that choice arbitrarily. Then we continue the described process of clearing the field and doubling the number of tokens on the adjacent field until there's not an equal number of tokens on two adjacent fields. Example: Suppose that on fields numbered $1, 2, 3, 4, 5$ there are $0, 1, 2, 4, 6$ tokens respectively. After adding one token to the field numbered $1$, we get the following arrangement of tokens on the board: $0, 0, 0, 8, 6$. A basic move in the game involves placing or removing a token. Therefore, moving $k$ tokens from one field to another requires performing $k$ removal operations and $k$ placement operations on the board. Analyze the amortized cost of a single move in the game measured by the number of basic moves. I think the first part of the solution goes like this: Suppose there are $n$ tokens on the board. No token could be moved more than $\log n$ times because with each move, it would be on a stack twice as large, and otherwise the height of that stack would be greater than the total number of tokens. For example, if there are $8$ tokens on the board, then no token could be moved 4 times because it would be on a stack of height $16$. Hence, each token could be moved at most $\log n = \log 8 = 3$ times. Therefore, the amortized cost is at most $\log n$. What is the method to establish a lower bound? Answer: To prove a lower bound, simply take a worst-case example. 
We will prove the following: Induction Hypothesis: There exists a sequence of moves that results in $n$ tokens at position $\log n$ that requires at least $n \log n$ place and remove operations. For simplicity, assume that $n$ is some power of $2$, and $\log$ with base $2$. Base Case: Let $n = 2$. Then, place a token at position $1$ and $2$, each. Total place and remove operations are $4 > n \log n = 2$. Thus, the base case holds correctly. Induction Case: Suppose there exists a sequence of moves that results in $n/2$ tokens at position $\log(n/2)$, which requires at least $(n/2) \log (n/2)$ operations. Similarly, $n/2$ tokens can be obtained at position $\log(n/2)-1$ if the placement of tokens starts from index $0$. It also requires at least $(n/2) \log (n/2)$ operations. Since the two sets are adjacent to each other, we move the tokens from position $\log n -1$ to position $\log n$. It requires $n$ remove and place operations. Total operations are thus $\geq (n/2) \log (n/2) + (n/2) \log (n/2) + n = n \log n$. Hence, the induction hypothesis holds.
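The process is small enough to simulate. A sketch (my own construction, following the spirit of the recursive argument: build two piles of $2^{j-1}$ on adjacent fields, let them merge; I assume merges always move tokens to the higher-numbered field):

```python
def place(board, pos):
    """Place one token at pos and resolve merges; return basic moves used."""
    board[pos] = board.get(pos, 0) + 1
    moves = 1
    merged = True
    while merged:
        merged = False
        for p in sorted(board):
            k = board[p]
            if k and board.get(p + 1) == k:   # equal piles on adjacent fields
                board[p] = 0                  # move all k tokens to field p+1
                board[p + 1] = 2 * k
                moves += 2 * k                # k removals + k placements
                merged = True
                break
    return moves

def build(board, j, base=0):
    """Pile 2**j tokens onto field base + j via the recursive worst case."""
    if j == 0:
        return place(board, base)
    return build(board, j - 1, base + 1) + build(board, j - 1, base)

board = {}
cost = build(board, 3)        # n = 8 tokens end up on field 3
print(cost, board[3])         # 32 8  -- and 32 >= n*log2(n) = 24
```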
{ "domain": "cs.stackexchange", "id": 21717, "tags": "algorithm-analysis, proof-techniques, amortized-analysis" }
Is it possible for the kinetic energy integral to be negative?
Question: Is it possible for the kinetic-energy integral, Tij, to be negative? I was messing around with some HF code and found that the integral became negative on some off-diagonal terms. (This could also be a bug with the code.) If the kinetic energy integral is negative, what would the physical meaning be? Answer: Generally, the off-diagonal elements of matrices, especially in physical systems, are interpreted as the coupling between whatever the $i$ and $j$ elements correspond to. So, in this case, a negative element of $T_{ij}$ corresponds to a negative kinetic coupling between atomic orbital basis functions. In plain language, this means that basis functions $i$ and $j$ tend to mutually lower the kinetic energy of an electron placed in one of those orbitals. I am not sure it really makes sense to give much of an interpretation to this, however, because what one is really interested in is diagonalizing the Fock matrix, of which the kinetic energy is only one part. Also, it is always possible to choose a basis where the kinetic energy matrix is diagonal, but again, this would not get you anywhere as in solving the HF problem, you will diagonalize the Fock Matrix which will surely put you in a basis in which the kinetic energy matrix is non-diagonal. What might be more interesting would be to take the actual molecular orbitals which are part of the solutions of the Roothan-Hall equations and re-compute the kinetic energy matrix over these orbitals. Looking at the coupling between these orbitals may be more easily interpretable for e.g. aromatic $\pi$-systems.
{ "domain": "chemistry.stackexchange", "id": 12578, "tags": "computational-chemistry" }
Python not finding interpreter
Question: I created a very simple hello world node in Python. However, I cannot get rosrun to execute it because the Python script will not find the interpreter. When I run whereis python, the first path I get is /user/bin/python3.4m. There are several more bin paths listed and I have tried them all. Not sure why it won't find it. See the code below. Code ! /user/bin/python3.4m import rospy from std_msgs.msg import String def talker(): pub = rospy.Publisher('hello_pub',String,queue_size=10) rospy.init_node('hello_world_publisher',anonymous=True) r = rospy.Rate(10) #10hz while not rospy.is_shutdown(): str = "hello world %S"%rospy.get_time() rospy.login(str) pub.publish(str) r.sleep() if__name__=='__main__': try: talker() except rospy.ROSInterruptException:pass Originally posted by AgentNoise on ROS Answers with karma: 38 on 2016-01-07 Post score: 0 Answer: On most standard Linux/Mac/POSIX systems, python will be in /usr/bin (no e). The scripts you've posted are also missing the leading # which informs the OS that the first line defines the interpreter to be used. Try using this as the first line of your program instead: #!/usr/bin/python3.4m Originally posted by ahendrix with karma: 47576 on 2016-01-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by AgentNoise on 2016-01-07: Oh, that was so obvious. Need to pay more attention. Thank you.
{ "domain": "robotics.stackexchange", "id": 23371, "tags": "python" }
What is the purpose and benefit of applying CNN to a graph?
Question: I'm new to the graph convolution network. I wonder what is the main purpose of applying data with graph structure to CNN? Answer: There are some problems that involve graphs and manifolds (sometimes collectively called non-Euclidean data), such as molecule design and generation, drug repositioning, social networks analysis, brain imaging, fake news detection, recommender systems, neutrino detection, computer vision and graphics and shape (e.g. hand or face) completion or generation (generative models). The main benefit of geometric deep learning (deep learning applied to graphs and manifolds) is that you do not lose the information encoded in the graphs (or manifolds), which, otherwise, you would likely lose because you would need to convert your graphs (or manifolds) to an equivalent vector representation that you can feed into the existing CNN or other standard neural networks. Note that you cannot directly apply the usual convolution operation to graphs, because, for example, graphs do not have the notion of relative positions of the nodes. Furthermore, note that graph networks have little to do with CNNs, even if they are sometimes called graph convolution networks.
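As a concrete illustration of how adjacency information enters the computation, here is a minimal sketch of one graph-convolution layer in the style of Kipf and Welling's GCN propagation rule (NumPy only; the toy graph, features, and weights are made up):

```python
import numpy as np

# Toy undirected graph: 4 nodes on a path, edges 0-1, 1-2, 2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))   # node feature vectors
W = np.random.default_rng(1).normal(size=(3, 2))   # learnable weight matrix

# GCN layer: H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
A_hat = A + np.eye(4)                  # add self-loops
d = A_hat.sum(axis=1)                  # node degrees (with self-loops)
D_inv_sqrt = np.diag(d ** -0.5)
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next.shape)                    # (4, 2): one new embedding per node
```

Each node's new embedding mixes its own features with its neighbours' (via A), which is exactly the structural information that would be lost if the graph were flattened into a plain feature vector.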
{ "domain": "ai.stackexchange", "id": 1208, "tags": "machine-learning, deep-learning, convolutional-neural-networks, geometric-deep-learning" }
Improving Shell Installation Script
Question: I'm currently working on a project and currently have this code. I already improved it by having some pointers, but I want to make it more professional looking and right. What should I implement next? installation() { echo '------------------------------------'>&2 echo 'Installing Dependensies'>&2 echo '------------------------------------'>&2 echo "">&2 echo 'Installing pkgconf ..'>&2 apt install pkgconf echo '------------------------------------'>&2 echo 'Installing libvte-2.91-dev ..'>&2 apt install libvte-2.91-dev echo '------------------------------------'>&2 echo 'Installing meson ..'>&2 apt install meson echo '------------------------------------'>&2 echo 'Installing libcairo2-dev ..' apt install libcairo2-dev echo '------------------------------------'>&2 echo 'Installing libpango1.0-dev ..'>&2 apt install libpango1.0-dev echo '------------------------------------'>&2 echo 'Installing libgnutls28-dev ..'>&2 apt install libgnutls28-dev echo '------------------------------------'>&2 echo 'Installing libgtk-3-dev ..'>&2 apt install build-essential libgtk-3-dev echo '------------------------------------'>&2 echo 'Installing libsystemd-dev ..'>&2 apt install libsystemd-dev echo '------------------------------------'>&2 echo 'Installing libgirepository1.0-dev ..'>&2 apt install libgirepository1.0-dev echo '------------------------------------'>&2 echo 'Installing valac ..'>&2 apt install valac echo '------------------------------------'>&2 echo 'Finished Installing the Dependensies'>&2 echo '------------------------------------'>&2 echo 'Cloning https://gitlab.gnome.org/GNOME/vte.git/'>&2 git clone https://gitlab.gnome.org/GNOME/vte.git/ echo 'Entering vte directory'>&2 cd vte echo 'Building VTE'>&2 meson _build ninja -C _build ninja -C _build install echo 'Done'>&2 } installation Any recommendations are welcome. It's pretty simple, all it does is install different dependencies. Answer: Please start with a shebang: #! 
/usr/bin/env bash It's unclear why you're logging ordinary messages to stderr, but OK, maybe that's a requirement. Define a log function for that, and prefer it over echo. echo 'Installing Dependensies'>&2 The typo is definitely not professional. Prefer the conventional spelling of "Dependencies", both here and for the "finished" message. (Pretty sure there's no "dependensy" British-ism.) echo 'Installing pkgconf ..'>&2 apt install pkgconf ... echo 'Installing libvte-2.91-dev ..'>&2 apt install libvte-2.91-dev ... It just goes on and on in this vein, it makes my eyes water. DRY. Write a loop already: for PKG in pkgconf libvte-2.91-dev ... Or put the package names in an env var, one-per-line for convenient git diff'ing, and loop over that. apt install ... I wonder if you maybe want apt install -y ..., to prevent user prompting? It is possible to ask apt to install a bunch of packages all at once in a single command. But that would alter the apt output and your progress reporting, so maybe you prefer not to. Total time spent waiting for downloads will often be reduced if you batch requests together, as some downloads will happen in parallel and the bottleneck router might have lots of bandwidth available. You didn't show us where you're already cd'd to, so it's unclear where the git clone will write to, but I imagine you've got that worked out already. ninja -C _build ninja -C _build install What a curious idiom! First build wasn't enough? Ok, I accept that maybe that's a "feature" of the build system and is really necessary for a correct install. Here are a few non-default settings you might choose to embrace. set -e will bail upon error, that is, upon running cd, apt, or any other command that returns non-zero status. It is useful for preventing an errant script from running amok. For example cd /tmp/baz; git clone $URL won't pollute $CWD with some annoying repo if destination dir doesn't exist. 
A related idiom is cd /tmp/baz && git clone $URL, another way of insisting the dest dir exists. set -o pipefail is similar to set -e, so e.g. echo | false | false | true will bail instead of reporting $? status of 0. I didn't notice any pipelines in your script, but you might add some over time. Suppose we never set the env var FOO. set -u will make e.g. echo ${FOO} a fatal error, instead of interpolating a null-string value. Think of it as "lint for bash". You don't need any of these. But they can help improve the robustness of what you write. Often I will set -x to see which stage a script has progressed to, similar to make. But you have explicit echo statements for that, so it isn't needed in this script.
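Putting those suggestions together, a sketch of the looped version might look like this (the installer command is parameterised and defaults to a dry-run echo here, so the sketch is safe to run; set INSTALLER='apt install -y' for real use):

```shell
#!/usr/bin/env bash
set -euo pipefail

log() { printf '%s\n' "$*" >&2; }

# One package per line for convenient git diff'ing
PACKAGES=(
  pkgconf
  libvte-2.91-dev
  meson
  libcairo2-dev
  libpango1.0-dev
  libgnutls28-dev
  build-essential
  libgtk-3-dev
  libsystemd-dev
  libgirepository1.0-dev
  valac
)

install_dependencies() {
  # Dry run by default; override with e.g. INSTALLER='apt install -y'
  local installer=${INSTALLER:-echo DRY-RUN apt install -y}
  local pkg
  for pkg in "${PACKAGES[@]}"; do
    log '------------------------------------'
    log "Installing ${pkg} .."
    $installer "$pkg"
  done
  log 'Finished installing the dependencies'
}

install_dependencies
```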
{ "domain": "codereview.stackexchange", "id": 44576, "tags": "linux, shell" }
Calculating Pipe Leak Flow Rate
Question: Just curious and working on some hypothetical problems on my own...I'm currently studying for my PE license. Also, trying my best here with the LaTeX stuff. Let's say there's a crude oil pipeline and it somehow gets a hole in it and begins to leak. If you wanted to calculate the flow rate through the hole, my understanding is that you would approach the problem in the following manner: $$ Q = CA_0\sqrt{2g_c\frac{P_1-P_2}{\rho}}$$ Where: $A_0$ = cross-sectional area of orifice (leak hole) = $\frac{\pi d^2}{4}$ = $\frac{\pi(2)^2}{4}$ = $3.14\ in^2$ $P_1$ = 1,000 psi (pressure in pipeline) $P_2$ = 14.7 psi (atmospheric pressure on outside of pipe) $\rho$ = density of crude oil = 870$\left(\frac{kg}{m^3}\right)$ $\left(\frac{2.2\ lb}{kg}\right)$ $\left(\frac{1\ m}{3.28\ ft}\right)^3 $ $\left(\frac{1\ ft}{12\ in}\right)^3$ = $0.031\frac{lb_m}{in^3}$ $g_c$ = 32.2$\frac{lb_m*ft}{lb_f*s^2}$ = 386.4$\frac{lb_m*in}{lb_f*s^2}$ So I get the following: $$Q = C(3.14\ in^2)\sqrt{\left(772.8\frac{lb_m*in}{lb_f*s^2}\right)\left(\frac{\frac{985.3\ lb_f}{in^2}}{\frac{0.031\ lb_m}{in^3}}\right)}$$ Is this the correct approach? Answer: This is a correct method, given you have a good estimate of the discharge coefficient (C). There are a lot of resources out there on estimates for C for an orifice in a cross flow configuration such as this. "Handbook of Hydraulic Resistance" is a good resource, although, I am not sure what you would have available to you during the PE exam. A value such as 0.7 may be a good initial guess for C.
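Carrying the arithmetic through with the suggested C = 0.7, a quick sketch of the numbers:

```python
import math

C = 0.7                      # discharge coefficient (initial guess from the answer)
d = 2.0                      # hole diameter, in
A0 = math.pi * d**2 / 4      # orifice area, in^2 (~3.14)
P1, P2 = 1000.0, 14.7        # pipeline and atmospheric pressure, psi
rho = 0.031                  # crude oil density, lbm/in^3
gc = 386.4                   # lbm*in/(lbf*s^2)

v = math.sqrt(2 * gc * (P1 - P2) / rho)   # ideal exit velocity, ~4956 in/s
Q = C * A0 * v                            # volumetric flow, ~1.09e4 in^3/s
gpm = Q * 60 / 231                        # 231 in^3 per US gallon, ~2830 gal/min
print(v, Q, gpm)
```

A flow on the order of thousands of gallons per minute is plausible for a 2-inch hole at 1,000 psi, which is a useful sanity check on the unit bookkeeping.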
{ "domain": "engineering.stackexchange", "id": 3839, "tags": "bernoulli" }
Better / Cheaper Material Choice?
Question: Current State Currently I have designed a coffee roaster that mostly uses the stainless steel tri clamps / tubes that many brewers use. The system can get up to 450 degrees F (232°C) even with moving air. Question I was wondering if someone has come across a cheaper material or system than the tri clamp one? The requirements are to withstand 450 degrees F (232°C) or more, and to be food safe. Current material is stainless steel which works for food safe but the pipes with all the clamps are up to $300 or so which is more than the rest of the project. What I have investigated so far I looked at many plastics available and while some can handle higher heats, those are generally more expensive. I also looked at using HVAC ducting but learned that galvanized steel can give off gases when heated up. So are the stainless steel tri clamps the best solution for the money or is there something cheaper out there that I am not thinking of? Answer: In the food industry not many materials are used because of strict policies. The most commonly used ones are definitely stainless steels (AISI304/316 and the even better AISI316/L). The other common material is nylon. For applications where only food contact is required (i.e. mostly containers) BOPP or other polyolefins are also used, but they are not common in food processing. In your case stainless steel is probably the best choice in terms of thermal stability, food safety and also from an esthetics point of view. I would say that PA66 (high-grade nylon) could make it, but you'd be really at the edge of the service limit; I wouldn't use it without good testing of the thermal aspects. Would PA66 be better than stainless? Well it might be cheaper, but machining is more complex than with stainless and might cost more. I wouldn't consider 3D printed nylon because its properties are too low. The only other option for food safety is ceramics; this is feasible if you are not relying on any elastic behavior of the material. 
One other point is that it will fail in a brittle manner, not a ductile one like steel. On the other hand, this could be 3D printed and still be capable of withstanding the environment you have. It depends on how many parts you're looking to build.
{ "domain": "engineering.stackexchange", "id": 3497, "tags": "materials, cost-engineering" }
Trying to improve my javascript code in this simple challenge from coderbyte
Question: Here is a slightly modified challenge from Coderbyte: Determine if a given string is acceptable. The str parameter will be composed of + and = symbols with several letters between them (ie. ++d+===+c++==a) and for the string to be true each letter must be surrounded by a + symbol. So the string to the left would be false. The string will not be empty and will have at least one letter. For whatever reason, I have struggled mightily with this challenge. I tried to use a regular expression in my Booleans, I tried cascading if statements, I tried a while statement surrounding if statements, I tried to structure my code as a function. After hours of failure, I finally resorted to the following: var str = prompt("Please enter a test string: ").split(""); answer = "true"; if (((str[0] != "+") && (str[0] != "=")) || ((str[str.length-1] != "+") && (str[str.length-1] != "="))){ answer = "false"; } for (var i = 1; i < str.length-1; i++){ if (((str[i] != "+") && (str[i] != "=")) && (str[i-1] != "+" || str[i+1] != "+")){ answer = "false"; } } console.log(answer); I think it works, but of all the possible solutions to this problem, I am guessing that it is toward the bottom in terms of efficiency and elegance. I'd really appreciate some guidance on how this could be improved. Below is one of my "solutions" that did not work. Among other things, the Booleans are not evaluating correctly. I must be using the regular expressions incorrectly. var str = prompt("Please enter a test string: ").split(""); answer = "true"; if ((str[0] === /[a-z]/gi) || (str[str.length-1] === /[a-z]/gi)){ answer = "false"; } for (var i = 1; i < str.length-1; i++){ if (str[i] === /[a-z]/gi && (str[i-1] != "+" || str[i+1] != "+")){ answer = "false"; } } console.log(answer); Answer: Your version is actually not that bad but could be simplified slightly and wrapped in a function. You could use charAt to get the character from a string instead of splitting the string. 
You don't need the initial check on the first and last characters; you can just do these as part of the main loop. (This works because str.charAt(-1) and str.charAt(str.length) both just return empty strings. Similarly if you were using an array, arr[-1] returns undefined.) Wrapping this in a function and returning false when a non-matching character is found makes the code more efficient, as it does not have to run through the remaining loops when it already knows the answer (this would be the same as inserting break into your code.) The below code also checks that the characters are either +, =, or a letter, which your original code does not do (so it would accept "===+!+==+3+", for example). var str = "++d+===+c++=a"; console.log(checkString(str)); function checkString(str) { for (var i = 0; i < str.length; i++) { if (str.charAt(i) != '+' && str.charAt(i) != '=' && !(str.charAt(i) >= 'a' && str.charAt(i) <= 'z' && str.charAt(i - 1) == '+' && str.charAt(i + 1) == '+')) { return false; } } return true; } To use regular expressions you need to write pattern.test(str), e.g. /[a-z]/gi.test('a') returns true. However the point of using regex is that you can test a whole string in one go, rather than character-by-character. I think the following will work: /^((\+[a-z])*\+|=)+$/gi.test('++d+===+c++=a')
{ "domain": "codereview.stackexchange", "id": 4760, "tags": "javascript, regex, programming-challenge" }
Typing by value
Question: I wondered if there was a generalized name for typing a variable by assigning a specific value to it. For instance a = 4 This would make the variable a an integer, since 4 is an integer. Likewise, with b = 0.8, b is a float in this instance. Technically this is not dynamic typing (since there is only one defined type, it is just written as the value), but then what is this form called? Answer: Determining the type of a variable from the type of the value that is assigned to it is a form of type inference. In dynamically typed languages, variables usually don't have types, only values have types. In statically typed languages, variables do have types. Most modern statically typed general purpose languages have a form of type inference that can at least infer the type of a variable from the type of what is assigned to it. Type inference can infer types in other circumstances. The power of type inference depends on the language: full type inference limits the expressive power of the language, as there are many useful typed languages where full type inference is undecidable, but type verification is easy. Different languages make different compromises on automatic inference vs expressiveness.
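The dynamically typed side of this is easy to see in Python, where the type belongs to the value and the "type of the variable" is simply the type of whatever it currently holds:

```python
a = 4
print(type(a).__name__)   # int
b = 0.8
print(type(b).__name__)   # float

# In a dynamically typed language the same name can later hold a value
# of a different type; no declared variable type constrains it:
a = "four"
print(type(a).__name__)   # str
```

In a statically typed language with local type inference (e.g. `var a = 4;` in Java, or `let a = 4;` in Rust), the compiler would instead fix the variable's type to int at the first assignment, and the rebinding to a string would be rejected at compile time.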
{ "domain": "cs.stackexchange", "id": 10390, "tags": "programming-languages, typing, type-inference" }
simulink inverse notch filter
Question: Is there a way to make an inverse notch filter block in MATLAB Simulink? I have found the peak-notch block, but I need to amplify the signal instead of attenuating it. Thank you. Answer: Here is my solution. I don't know whether it is the best one, but it works. Simply, I created a transfer function with two resonant poles at 50 Hz and two zeros right after. In this way, the Bode diagram is the opposite of the classical notch filter's.
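The pole/zero placement is easy to verify outside Simulink. A hypothetical NumPy sketch (not the actual Simulink block): a lightly damped pole pair at 50 Hz with a more heavily damped zero pair at the same frequency gives a gain peak instead of a notch.

```python
import numpy as np

f = np.linspace(1.0, 200.0, 2001)        # frequency axis, Hz
w = 2 * np.pi * f
w0 = 2 * np.pi * 50.0                    # resonance at 50 Hz
zeta_z, zeta_p = 0.5, 0.05               # zeros damped more than poles -> peak

s = 1j * w
H = (s**2 + 2*zeta_z*w0*s + w0**2) / (s**2 + 2*zeta_p*w0*s + w0**2)
gain_db = 20 * np.log10(np.abs(H))

# Peak gain at w0 is zeta_z/zeta_p = 10x (20 dB); unity gain far from resonance
print(round(float(f[np.argmax(gain_db)])))   # -> 50
```

Swapping the two damping ratios (zeta_z < zeta_p) turns the same transfer function back into a classical notch.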
{ "domain": "dsp.stackexchange", "id": 986, "tags": "matlab, filters" }
What goes on in an AC circuit when the power delivered to the inductor is negative?
Question: The instantaneous power delivered to a resistor by an alternating voltage source $v(t)=V_0\sin\omega t$ is always nonnegative. But the instantaneous power delivered to an inductor by $v(t)=V_0\sin\omega t$ is positive for $0<\omega t<\pi/2$ and negative for $\pi/2<\omega t<\pi$. During the first quarter of the cycle ($0<\omega t<\pi/2$), both $v_L(t)$ and $i(t)$ have the same signs, and the power delivered to the inductor is positive. This energy is stored in the inductor as magnetic energy. In the next quarter of the cycle ($\pi/2<\omega t<\pi$), $v_L(t)$ and $i(t)$ have opposite signs, and the power delivered to the inductor is negative! I do not understand this part of the cycle in terms of what the circuit is doing. During this interval, is the magnetic energy stored in the inductor negative? What would that mean? During this interval, the inductor delivers power to the source? How does an inductor deliver power to the source? Answer: During this interval, the inductor delivers power to the source? How does an inductor deliver power to the source? Yes, by doing work against the electrostatic field of the voltage source. An inductor can accept work from the voltage source and store it as magnetic energy during one time interval (while the current magnitude increases and goes in the direction of potential drop), and during another time interval, it can release this magnetic energy by doing work on mobile charges running through it (current magnitude decreases and goes in the opposite direction to the potential drop). When an inductor releases energy via work back to the source, this work is done by induced electric forces (due to the induced electric field of the inductor) acting on the mobile charge carriers in the direction of their motion. Since these charge carriers' kinetic energy remains negligible all the time, all this work is spent to increase the electric potential energy of all the charges.
In other words, the induced electric field forces deliver positive work against the forces of the conservative electric field, and this work is stored (usually) as increased electrostatic energy of those charges in the source and on the surfaces of all circuit elements, including the inductor. In short, magnetic energy turns into electric energy.
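The energy bookkeeping can be checked numerically. A sketch assuming an ideal inductor driven in steady state (hypothetical values: V0 = 10 V, L = 0.1 H, 50 Hz); the running integral of the instantaneous power equals the change in the stored magnetic energy ½Li², and the net energy over a full cycle is zero:

```python
import numpy as np

V0, L = 10.0, 0.1                       # hypothetical source amplitude and inductance
w = 2 * np.pi * 50.0
t = np.linspace(0.0, 0.02, 20001)       # one full 50 Hz cycle

v = V0 * np.sin(w * t)
i = -(V0 / (w * L)) * np.cos(w * t)     # steady-state current, 90 degrees behind v
p = v * i                               # instantaneous power delivered to the inductor

# Running integral of p (trapezoid rule) vs. change in stored energy (1/2) L i^2
W = np.concatenate(([0.0], np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(t))))
dE = 0.5 * L * i**2 - 0.5 * L * i[0]**2

print(np.allclose(W, dE, atol=1e-6))    # True: power flow just moves stored energy
print(abs(W[-1]) < 1e-6)                # True: zero net energy over a full cycle
```

The intervals where p < 0 are exactly the quarter-cycles in which the inductor hands its stored energy back to the source.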
{ "domain": "physics.stackexchange", "id": 89187, "tags": "electric-circuits, electric-current, electrical-resistance, power, inductance" }
Does testing a training dataset guarantee successful results?
Question: If I test an image that has been previously used to train a classification model, is it guaranteed to classify correctly? My guess is that since the parameters have been trained with other images as well, there is no guarantee of getting a correct classification, just a high probability. Answer: This is correct: there's no guarantee at all, not even a high probability. As usual, it depends on the type of model, the data, and the number and distribution of the classes. However, there's of course a higher chance that the instance will be correctly classified. That's why one shouldn't use a test set containing training instances to estimate the performance of the model, since there's a high risk the performance would be overestimated (data leakage).
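A toy illustration of why a training instance can still be misclassified — a hypothetical 1-D nearest-centroid classifier, where an outlier in the training set ends up closer to the other class's centroid:

```python
# Hypothetical 1-D training data as (value, label); 9.0 is an outlier of class 0
train = [(0.0, 0), (1.0, 0), (9.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]

def centroid(label):
    vals = [x for x, y in train if y == label]
    return sum(vals) / len(vals)

centroids = {0: centroid(0), 1: centroid(1)}   # {0: 3.33..., 1: 5.0}

def predict(x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# The model was *trained* on (9.0, 0), yet it predicts class 1 for that point:
print(predict(9.0))   # -> 1
```

The outlier pulls its own class centroid only a little, so the fitted parameters still misclassify it — the same effect, at scale, is why training accuracy is rarely 100%.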
{ "domain": "datascience.stackexchange", "id": 8755, "tags": "machine-learning-model, training" }
How to find the number of times a package has been installed for apt
Question: Hi I'm trying to generate metrics for packages I maintain to see how many times they've been installed from ros buildfarm deployment system through apt-get/other package managers. Is there a way to find this info from repositories.ros.org or any other links? Anyone I can contact to get these metrics? Originally posted by stevemacenski on ROS Answers with karma: 8272 on 2018-12-26 Post score: 1 Answer: It's a bit indirect and your package(s) may not be listed, but from ROS Metrics Report extension on ROS Discourse you can get to awstats.osuosl.org/list/packages.ros.org which is the Awstats page for packages.ros.org. From there to this page for December, 2018 and then finally to the full list for the Downloads (Top 10) section. ctrl+f on your package name should give you some statistics. Originally posted by gvdhoorn with karma: 86574 on 2018-12-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by stevemacenski on 2018-12-27: Thanks for the answer, that's a really solid find thanks! However, my packages (and most packages) don't appear in that list, they look like mostly the core packages or things with very high install rates. Comment by gvdhoorn on 2018-12-28: Well, as I wrote: your package(s) may not be listed It's all parsed data, so things below certain thresholds will not be shown. Afaik this is as much detail as you're going to get, so if this doesn't cover it then I'm afraid the answer -- in your specific case -- is going to be .. Comment by gvdhoorn on 2018-12-28: .. that you can't get those statistics (for now at least).
{ "domain": "robotics.stackexchange", "id": 32211, "tags": "ros-kinetic" }
Rabi oscillation normalisation
Question: Given that the probability of an atom being in an excited state when a resonant field is applied is: $$P=\sin^2(\frac{\Omega_0t}{2})$$ Find an analogous formula in the case of non-resonant excitation when there is detuning $\delta$, given that $\Omega=\sqrt{\Omega_0^2+\delta^2}$. Is there a quick way to do this without solving the time-dependent Schrödinger equation? I know that the new probability should look like $$P=A\sin^2(\frac{\Omega{t}}{2})$$ so I was trying to apply a normalisation condition where integrating the probability over the period should give one. But this does not give me the correct answer, which looks like: $$P=\frac{\Omega_0^2}{\Omega^2}\sin^2(\frac{\Omega{t}}{2})$$ Answer: I would try to approach this problem from a time-independent perspective. For example, the resonant oscillation $P=\sin^2(\Omega_0 t/2)$ can be seen from the Hamiltonian of a resonantly-coupled two-level system (in the basis $|0\rangle$, $|1\rangle$): $$ H = \frac{1}{2}\begin{pmatrix} 0 & \Omega_0 \\ \Omega_0 & 0 \end{pmatrix} $$ Diagonalizing this Hamiltonian gives that the two eigenstates are $|\pm\rangle = \frac{1}{\sqrt{2}}(|0\rangle \pm |1\rangle)$, with energy splitting $\Omega_0$. Therefore when the system begins in $|0\rangle = \frac{1}{\sqrt{2}}(|+\rangle + |-\rangle)$, phase accumulates at a rate $\Omega_0$. This phase accumulation is equivalent to oscillations between $|0\rangle$ and $|1\rangle$. Now, in the detuned case the Hamiltonian can be written (in the symmetric convention) as $$ H = \frac{1}{2}\begin{pmatrix} -\delta & \Omega_0 \\ \Omega_0 & \delta \end{pmatrix} $$ In this case, the eigenvectors are similarly linear combinations of $|0\rangle$ and $|1\rangle$ with weights that depend on $\delta$ and $\Omega_0$. The two eigenvectors will have an energy splitting of $\sqrt{\delta^2 + \Omega_0^2} = \Omega$.
Therefore when the system is initialized in $|0\rangle$ (a superposition of the two eigenvectors), phase will accumulate at the rate of the eigenvector energy splitting, which corresponds to oscillations between $|0\rangle$ and $|1\rangle$ at a faster frequency but lower amplitude.
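A quick numerical check of the generalized Rabi formula, diagonalizing the symmetric-convention Hamiltonian H = ½(−δσz + Ω0σx) with ħ = 1 (the parameter values below are arbitrary examples):

```python
import numpy as np

Omega0, delta = 1.0, 0.7
Omega = np.hypot(Omega0, delta)                   # generalized Rabi frequency
H = 0.5 * np.array([[-delta, Omega0],
                    [Omega0,  delta]])

E, V = np.linalg.eigh(H)                          # diagonalize once ...
t = np.linspace(0.0, 20.0, 401)
c = V.conj().T @ np.array([1.0, 0.0])             # ... start in |0> ...
psi = V @ (np.exp(-1j * np.outer(E, t)) * c[:, None])  # ... then evolve exactly
P1 = np.abs(psi[1])**2                            # excited-state population

P_formula = (Omega0**2 / Omega**2) * np.sin(Omega * t / 2)**2
print(np.allclose(P1, P_formula))                 # -> True
```

The maximum population transfer is Ω0²/Ω² < 1 off resonance, which is exactly the amplitude A the question was after.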
{ "domain": "physics.stackexchange", "id": 49161, "tags": "electrons, atomic-physics, two-level-system" }
$T_2>2T_1$ qubits on the ibm_washington quantum processor
Question: I have been checking out the parameters of the new ibm_washington processor and I have the following doubt about the calibration data provided by them. Checking out the relaxation and dephasing times I found out that some of their qubits are said to have $T_2>2T_1$. For example, see qubits Q2 or Q21. I understand that the dephasing times they provide are the ones obtained by Ramsey's experiments. However, relaxation and dephasing times are related by the expression \begin{equation} \frac{1}{T_2}=\frac{1}{2T_1}+\frac{1}{T_\phi}, \end{equation} where $T_\phi$ is the pure dephasing time. From this equation, it can be seen that qubits that have $T_2>2T_1$ make no physical sense since that would imply that the pure dephasing time is negative. Therefore, I am wondering what's going on with the decoherence time values that are being provided by IBM for the newest processor. I have thought about measurement error, but qubits Q2 and Q21 are not even close to the Ramsey limit $T_2\approx 2T_1$. Maybe that due to the novelty of the system the data is not still accurate? Or may I be missing something? Answer: Good catch! This is a result of the T1 and T2 properties of the qubits being estimated in separate measurement batches. What was happening is that a qubit fluctuation such as a TLS would cause a low T1 to be measured. Sometime later the fluctuation would disappear (and the qubit would recover its inherent T1) followed by the measurement of the T2s. The effect of separating these measurements is the appearance that some T2s might violate the T1 limit. Going forward T1/T2s will be measured together to reduce the incidence of such events.
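The unphysical-pure-dephasing argument is easy to make concrete by inverting the relation $1/T_2 = 1/(2T_1) + 1/T_\phi$ (hypothetical T1/T2 values, in microseconds):

```python
def pure_dephasing_time(T1, T2):
    """Invert 1/T2 = 1/(2*T1) + 1/T_phi for the pure dephasing time T_phi."""
    return 1.0 / (1.0 / T2 - 1.0 / (2.0 * T1))

print(pure_dephasing_time(100.0, 150.0))   # ~600 us: physical, since T2 < 2*T1
print(pure_dephasing_time(100.0, 250.0))   # ~-1000 us: T2 > 2*T1 implies negative T_phi
```

A negative result is the algebraic signature of the measurement artifact described above: the T1 and T2 estimates did not come from the same qubit state.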
{ "domain": "quantumcomputing.stackexchange", "id": 3274, "tags": "ibm-q-experience, decoherence" }
Solving Young Sheldon's 100th episode vanity card
Question: Please define the terms. Below is Chuck Lorre's 700th vanity card which congratulates Young Sheldon on reaching 100 episodes. Part 1. Find $x$ in $J_0(x)=0$. --> I guess this refers to Bessel of the 1st kind, but wolfram alpha doesn't give me a unique answer. What's going on? I forgot this already. I just remember in calculus class that Bessel is like a series solution to some ordinary differential equation. I somehow think we're supposed to have $x=2.4041$ and so $yz^2 = 10$ or something. Part 2. $y=R_y$ --> I guess this refers to Rydberg constant (Rydberg unit of energy). Soooo $y = 2.1798723611035 \times 10^{-18}$ ? Part 3. $z = \frac{\mu_D}{\mu_N}$. 2.1. $\mu_N$ I guess is nuclear magneton? 3.2. As for $\mu_D$, no idea. Wasn't able to find in the list of physical constants except possibly...Bohr magneton ($\mu_B$) ? Or maybe $z$ is the W-to-Z mass ratio? Answer: This is how I solved it on 1 April. Here are the essential details: x = 2.4048 (the first zero crossing of an order zero Bessel function), y = 13.606 (the Rydberg unit of energy in eV) and z = 0.857 (the ratio of deuterium’s magnetic dipole moment to the nuclear magneton value). The answer is 100.0 to four significant digits. Further details, as well as notes on the nonsensical units can be found in the linked PDF (which will not change, and should be read with an awareness of Young Sheldon’s subject matter and the document's publication date). Note that the number 1648777428 is a Unix timestamp. The Young Sheldon Episode Count equation only works for that one episode, so I specify the date of the episode's first airing to distinguish it from any future YSEC equation.
{ "domain": "physics.stackexchange", "id": 88138, "tags": "mathematics, popular-science, rydberg-states" }
Project Euler Problem 11: Largest product in a grid
Question: Here is my implementation to project Euler Problem 11. I did this problem a bit later, when I learned how to input from a txt file. The problem context can be seen here. I did add "\n" to the end of each line, after copying the numbers into a txt file, I'm not sure of any other way to do that. Any other improvements anyone can think of? #include <fstream> #include <iostream> #include <vector> #include <sstream> #include <string> //splitting stream into ints. std::vector<int> split(std::string line){ std::stringstream ss (line); std::vector<int> result; std::string num; while(std::getline(ss, num, ' ')) result.push_back(std::stoi(num)); return result; } //comparing all int's that are horizontally next to each other unsigned long long Horizontal(int a, int b, std::vector<std::vector<int>> grid){ if(b < grid[a].size()-3){ return (grid[a][b] * grid[a][b+1] * grid[a][b+2] * grid[a][b+3]); } } //comparing all int's that are vertically next to each other unsigned long long Vertical(int a, int b, std::vector<std::vector<int>> grid){ if(a < grid.size()-3){ return (grid[a][b] * grid[a+1][b] * grid[a+2][b] * grid[a+3][b]); } } //all int's that are diagonally(forward) next to each other unsigned long long ForDiag(int a, int b, std::vector<std::vector<int>>grid){ if (a < grid.size()-3 && b < grid[a].size()-3){ return (grid[a][b] * grid[a+1][b+1] * grid[a+2][b+2] * grid[a+3][b+3]); } } //all int's that are diagonally(backward) next to each outher unsigned long long BackDiag(int a, int b, std::vector<std::vector<int>>grid){ b+=3; // b needs to be 3 larger for this function if(a < grid.size()-3 && b < grid[a].size()){ return (grid[a][b] * grid[a+1][b-1] * grid[a+2][b-2] * grid[a+3][b-3]); } } //Calls the calculation functions, and compares them for the largest. 
unsigned long long Largest(std::vector<std::vector<int>> grid ){ int gwidth = grid[0].size(); int glength = grid.size(); unsigned long long largest = 0; for(int a = 0; a < gwidth; ++a){ for(int b = 0; b < glength; ++b){ largest = std::max(largest,Horizontal(a,b,grid)); largest = std::max(largest,Vertical(a,b,grid)); largest = std::max(largest,ForDiag(a,b,grid)); largest = std::max(largest,BackDiag(a,b,grid)); } } return largest; } int main(){ std::vector<std::vector<int>> grid; //opening file. std::ifstream nums; nums.open("grid.txt"); std::string row; //Calling function, and pushing back into the vector while(std::getline(nums, row, '\n')){ grid.push_back(split(row)); } std::cout << Largest(grid); } Answer: Your functions returning an unsigned long long are missing a return which leads to undefined behavior (more details on this question). You could throw an exception to handle this or you could just return 0. Not a real issue and mostly a matter of personal preference, but the way you check that you are not going out of the bounds of the array could be written in a more natural way. To check that index b + 3 is in the array, I'd rather read b + 3 < a.size() than b < a.size() - 3. Even more unusual is the up-front increment in BackDiag(): you add 3 to b and then you consider b - 3. Once your code is rewritten to take into account these comments, it looks like: unsigned long long Horizontal(int a, int b, std::vector<std::vector<int>> grid){ return (b + 3 < grid[a].size()) ? (grid[a][b] * grid[a][b+1] * grid[a][b+2] * grid[a][b+3]) : 0; } //comparing all int's that are vertically next to each other unsigned long long Vertical(int a, int b, std::vector<std::vector<int>> grid){ return (a + 3 < grid.size()) ? (grid[a][b] * grid[a+1][b] * grid[a+2][b] * grid[a+3][b]) : 0; } unsigned long long ForDiag(int a, int b, std::vector<std::vector<int>>grid){ return (a + 3 < grid.size() && b + 3 < grid[a].size()) ?
(grid[a][b] * grid[a+1][b+1] * grid[a+2][b+2] * grid[a+3][b+3]) : 0; } unsigned long long BackDiag(int a, int b, std::vector<std::vector<int>>grid){ return (a + 3 < grid.size() && b + 3 < grid[a].size()) ? (grid[a][b+3] * grid[a+1][b+2] * grid[a+2][b+1] * grid[a+3][b]) : 0; } Now, one thing to notice is that this code is basically always the same. You should try to write a generic function that you can reuse. The signature would be something like: unsigned long long computeProduct(int a, int b, int incrA, int incrB, std::vector<std::vector<int>>grid)
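A direction-vector version of that generic function is easy to sketch (shown here in Python rather than C++ for brevity; the structure carries over directly — each of the four directions becomes an (incrA, incrB) pair):

```python
def largest_product(grid, run=4):
    """Largest product of `run` adjacent values in any of the four directions."""
    rows, cols = len(grid), len(grid[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, diag, anti-diag
    best = 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                r_end, c_end = r + (run - 1) * dr, c + (run - 1) * dc
                if 0 <= r_end < rows and 0 <= c_end < cols:  # whole run in bounds
                    product = 1
                    for k in range(run):
                        product *= grid[r + k * dr][c + k * dc]
                    best = max(best, product)
    return best

demo = [[ 1,  2,  3,  4],
        [ 5,  6,  7,  8],
        [ 9, 10, 11, 12],
        [13, 14, 15, 16]]
print(largest_product(demo))   # -> 43680, i.e. 13*14*15*16 along the bottom row
```

One bounds check per (position, direction) replaces the four near-duplicate functions, and out-of-bounds runs simply contribute nothing instead of invoking undefined behavior.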
{ "domain": "codereview.stackexchange", "id": 7118, "tags": "c++, programming-challenge" }
Confusion in comparing melting point
Question: We have to compare melting points of $\ce{LiH}$, $\ce{NaH}$, $\ce{KH}$, $\ce{CsH}$. I know the melting point of an ionic compound is more than that of a covalent compound. So according to Fajans' rule, ionic character increases in $\ce{LiH}$, $\ce{NaH}$, $\ce{KH}$, $\ce{CsH}$. As ionic character increases, melting point should also increase in $\ce{LiH}$, $\ce{NaH}$, $\ce{KH}$, $\ce{CsH}$. But actually, the melting point decreases in $\ce{LiH}$, $\ce{NaH}$, $\ce{KH}$, $\ce{CsH}$. What could be the reason behind this? Answer: You can get an estimate of the bond strength by using Coulomb's law and periodic trends for atomic radius. $$F = k\frac{q_1 q_2}{r^2}$$ The charge of all these alkali metals is the same in these compounds, so the only difference in bond strength comes from the distance between the charges. The distance between the charges is related to each atom's radius, which follows an increasing trend as you move down the alkali metals. So, if distance is increasing, the force holding the solid together is decreasing, which would justify the trend $\ce{LiH} > \ce{NaH} > \ce{KH} > \ce{CsH}$ for melting point.
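A rough back-of-the-envelope version of this argument, using approximate six-coordinate Shannon radii for the cations and an assumed radius for hydride (values in pm; the absolute numbers don't matter here, only the trend):

```python
# Approximate ionic radii in pm; the hydride radius is a rough assumed value
r_cation = {"Li": 76, "Na": 102, "K": 138, "Cs": 167}
r_hydride = 140

# Coulomb's law with equal charges: relative lattice force scales as 1/r^2
rel_force = {M: 1.0 / (r + r_hydride) ** 2 for M, r in r_cation.items()}

order = sorted(rel_force, key=rel_force.get, reverse=True)
print(order)   # -> ['Li', 'Na', 'K', 'Cs'], matching the melting-point trend
```

Since the charges are fixed, the ordering by force is just the reverse ordering by interionic distance, which is exactly the qualitative argument above.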
{ "domain": "chemistry.stackexchange", "id": 7851, "tags": "melting-point" }
State Spaces in Classical vs Quantum systems, in the context of classical & quantum computers
Question: I am currently reading "Quantum Computing: A Gentle Introduction" by Rieffel & Polak. In describing the difference between classical and quantum state spaces, they say: In classical physics, the possible states of a system of n objects, whose individual states can be described by a vector in a two-dimensional vector space, can be described by vectors in a vector space of 2n dimensions. Classical state spaces combine through the direct sum. However, the combined state space of n quantum systems, each with states modeled by two-dimensional vectors, is much larger. The vector spaces associated with the quantum systems combine through the tensor product, resulting in a vector space of $2^{n}$ dimensions. In the context of general classical physics and general quantum physics, this makes very good sense to me. An object in classical physics can be fully described by its position and momentum (which is the two-dimensional vector space described above) and the time evolution is governed by Hamilton's equations. If we add more objects, the state space grows via the direct sum of the individual vector spaces. For a two-state quantum system however, when we add more particles the overall Hilbert space grows like the direct product of the individual vector spaces and therefore grows in size like $2^{n}$. What doesn't make sense to me is how this directly relates to classical analog computers and classical digital computers? Take for instance the analog computer example shown here. I suppose it would be possible to convert the equation which it models from a Newtonian form (i.e. $F=ma$) to a Hamiltonian form and perhaps model Hamiltons equations using two coupled active differentiators. Am I to conclude then that this is what is meant by the state space growing by 2n? I'm not sure this is even correct however because there is a friction term I am neglecting (the shock absorber). And even more confusing, how does this relate to digital computers? 
If I have a state with 8 bits, then by definition I have 8 bits of information. It seems then that the state space of a digital computer scales like $n$, rather than the $2n$ mentioned previously. Answer: I can only make sense of the paragraph by assuming it's about two completely unrelated systems. The $n=1$ classical system is a particle that can be described by a vector in $\mathbb R^2$, which I suppose is either a position and momentum (as you guessed) or two position coordinates. It's not a bit, nor the state of any sort of classical computer. The $n=1$ quantum system is a qubit. There is no correspondence between the two, so it doesn't make much sense as an analogy. The state of the analog computer that you linked could be described by the position of the mass and its first time derivative, which could be seen as a vector in $\mathbb R^2$, but I really don't think that's what Rieffel & Polak had in mind. My impression is that they were thinking that if a single qubit is described by a vector in $\mathbb C^2$ then the classical equivalent of it should also be a vector in a two-dimensional space. But, as you said, it isn't.
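The dimension counting itself is mechanical — e.g. with NumPy, combining n two-dimensional state vectors by concatenation (direct sum) versus np.kron (tensor product):

```python
import numpy as np
from functools import reduce

n = 5
v = np.array([1.0, 0.0])                 # one two-dimensional subsystem

direct_sum = np.concatenate([v] * n)     # classical combination: 2n dimensions
tensor = reduce(np.kron, [v] * n)        # quantum combination: 2**n dimensions

print(direct_sum.size, tensor.size)      # -> 10 32
```

The gap between 2n and 2^n is the "much larger" state space the book refers to, independent of any analogy to particular classical hardware.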
{ "domain": "physics.stackexchange", "id": 94492, "tags": "quantum-computer, computer" }
Power Dissipated by Resistor in AC Circuit
Question: I'm having a problem with the following review question: Given the circuit: I am asked to find the power dissipated by R1 and C. I know that for C it is 0. For R1, I know I need to use $V_\text{rms} = (120\sqrt{2}\ \text{V})/\sqrt{2} = 120\ \text{V}$. I'm stuck as to where to go from here. I thought about using a voltage divider to find the voltage on R1, then using V^2/R to get the power, but I don't know how to do this with three resistors. Answer: Break it into two problems. First, consider the three resistors in series, and replace them with a single equivalent resistor. Now you can figure out how much current flows through the capacitor, and how much through the resistor. You correctly figured out there is no power dissipation through the capacitor, since current and voltage will be in quadrature; and since there is no internal impedance on the voltage source, the current through the resistor doesn't depend on the presence of the capacitor. Once you know how much current is flowing, you can remove the capacitor and figure out what fraction of the current flows through the resistor you are asked about. And use $\frac12 I^2 R$ (with $I$ the current amplitude) to obtain the power.
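A sketch of the method with made-up numbers — the figure's component values aren't in the post, so the topology (capacitor across the source, three resistors in series) and the resistances below are hypothetical:

```python
# Hypothetical values: ideal 120 V rms source, three series resistors, ideal capacitor
V_rms = 120.0
R1, R2, R3 = 100.0, 200.0, 300.0   # ohms (assumed)

R_total = R1 + R2 + R3             # step 1: combine the series resistors
I_rms = V_rms / R_total            # step 2: current through the resistor branch

# step 3: average power in R1. With rms current this is I^2*R; with the *peak*
# current I0 = sqrt(2)*I_rms it is (1/2)*I0^2*R -- the same number.
P_R1 = I_rms ** 2 * R1
print(round(P_R1, 6))   # -> 4.0 (watts, for these assumed values)
```

The capacitor never enters the calculation, which is the point of the answer: with an ideal source it only changes the source current, not the resistor current.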
{ "domain": "physics.stackexchange", "id": 26874, "tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, power" }
Figure of merit for 'locality of power'
Question: I have a raw signal which, if interpreted correctly with various 'tuning' parameters set to their optimum values, can be seen to consist of a relatively small (but a priori unknown) set of discrete frequency components. With the tuning parameters set at sub-optimum values, the power spectrum is spread out - each frequency component is broader or, in the very poorly-tuned case, there can be many 'false' frequency peaks. Some example plots would be: Good tuning: Poor tuning: Awful tuning: What I'd like is to be able to auto-tune by identifying a figure-of-merit which is highest for the 'good tuning' case, lowest for the 'awful tuning' case, and somewhere in the middle for the 'poor tuning' case. If I knew a priori that there was a single frequency component, this would be fairly straightforward, but I haven't been able to think of a good approach when the number of components is unknown. Is there a standard approach to this problem that I'm unaware of? Answer: You could try calculating Shannon entropy of the spectrum. Normalize the Fourier transform $f(x)$ of your signal so that $\int_{-\infty}^\infty |f(x)|^2\, dx = 1$ and calculate Shannon entropy as $- \int_{-\infty}^\infty |f(x)|^2 \log |f(x)|^2\, dx$. You can clamp $|f(x)|^2 \log |f(x)|^2$ to zero for very small values of $f(x)$ if its logarithm blows up and ruins the calculations. Zero-padding before windowing & FFT will give you more frequency resolution and a better approximation, especially if you have narrow peaks in the spectrum. If you want something more standard, have a look at spectral flatness. But it seems even less stable numerically.
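A discrete version of this figure of merit is straightforward (a NumPy sketch; note the sign convention — lower entropy means a more concentrated spectrum, i.e. better tuning):

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy (in nats) of the normalized power spectrum of x."""
    p = np.abs(np.fft.rfft(x)) ** 2
    p = p / p.sum()
    p = p[p > 1e-12]               # clamp tiny bins so 0*log(0) terms drop out
    return float(-(p * np.log(p)).sum())

n = 2048
t = np.arange(n) / n
well_tuned = np.sin(2 * np.pi * 100 * t)                  # one sharp spectral line
rng = np.random.default_rng(0)
badly_tuned = well_tuned + 0.5 * rng.standard_normal(n)   # power smeared everywhere

print(spectral_entropy(well_tuned) < spectral_entropy(badly_tuned))   # -> True
```

To match the question's "highest is best" convention, the negated entropy (or its exponential inverse) can be used as the auto-tuning objective.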
{ "domain": "dsp.stackexchange", "id": 2615, "tags": "power-spectral-density, source-separation" }
Objects on inclined planes and the normal force
Question: Say we had an object lying on an inclined plane, at an angle of $\theta$ to the horizontal, and the object has a mass of $m$. If we take the object to have an acceleration of 0 perpendicular to the plane (i.e. it accelerates down the slope), we can conclude that $mg\cos\theta = N$ where $N$ is the normal reaction. However, I have seen online the expression $N\cos\theta = mg$ and I would like a detailed explanation of why that is: where does the second formula come from, and why does it not "agree" with the first one (I don't think they can both be true at the same time, which is why I am confused)? Many thanks in advance!!! Answer: Only gravity $w$ and the normal force $N$ are involved. Set up Newton's 1st law in the perpendicular direction (let's call it the $y$ direction): $$\sum F_y=0\quad\Leftrightarrow\quad -w_y+N=0 \quad\Leftrightarrow\quad N=w_y.$$ Now, what is $w_y$? Write up the right-angled triangle and you will see that $w_y$ constitutes the adjacent side from the angle $\theta$ in that triangle. Thus, you use the cosine function to retrieve it: $$w_y=w\cos(\theta)=mg\cos(\theta).$$ So, your first-mentioned expression is correct. Not the second-mentioned one. We can do a quick check to see why the second-mentioned expression isn't true. Let's try to imagine the $y$ axis not perpendicular to the surface but vertical. If we used Newton's 1st law again just like before, then we might expect to get something like this: $$\sum F_y=0\quad\Leftrightarrow\quad -w+N_y=0 \quad\Leftrightarrow\quad w=N_y.$$ Draw up the right-angled triangle in this scenario, and you will see that $N_y$ is the adjacent side to the angle, so $N_y=N\cos(\theta)$, and we would expect the following result, which is your second-mentioned expression: $$w=mg=N\cos(\theta).$$ But note: this second result is incorrect, because in fact Newton's 1st law does not apply here.
If we choose a vertical direction to resolve the forces along for Newton's law, there will be a small acceleration component. Only in the direction perpendicular to the acceleration - which was the first scenario - will there be no acceleration component at all, so only then will Newton's first law apply. In this second scenario we actually should have used Newton's 2nd law. With $a=g\sin(\theta)$ being the acceleration down the slope, its vertical component is $a\sin(\theta)$, so: $$\sum F_y=ma_y\quad\Leftrightarrow\quad -w+N_y=ma_y \quad\Leftrightarrow\quad w=mg=N\cos(\theta)+ma\sin(\theta).$$ So, we see that a term was missing from the second expression before it would be correct. In some other scenarios, where the angle was measured differently or where the acceleration is angled differently, it is possible to achieve your second expression. The other answers provide such other examples.
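The two directions can be cross-checked numerically (arbitrary example values, frictionless incline):

```python
import math

m, g, theta = 2.0, 9.81, math.radians(30)   # arbitrary example values

N = m * g * math.cos(theta)                 # Newton's 1st law, perpendicular to incline
a = g * math.sin(theta)                     # acceleration down the frictionless incline

# The vertical direction needs Newton's 2nd law; the vertical part of a is a*sin(theta):
lhs = m * g
rhs = N * math.cos(theta) + m * a * math.sin(theta)
print(math.isclose(lhs, rhs))               # -> True

# The naive N = m*g/cos(theta) overestimates the normal force:
print(m * g / math.cos(theta) > N)          # -> True
```

Algebraically the check is just $mg\cos^2\theta + mg\sin^2\theta = mg$, which is why it closes exactly.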
{ "domain": "physics.stackexchange", "id": 92276, "tags": "newtonian-mechanics, forces, vectors, free-body-diagram" }
Is this XKCD comic an accurate timeline of the earth's average temperature?
Question: Apologies for the length of the image, which I found at http://xkcd.com/1732/. I'm mostly interested in the temperature line itself, both the historical data and the forward projections. The reason I ask is that I'd like to (in a classroom context) hold this image up as a good example of the visualisation of data, that could be emulated in the area of science relevant to that classroom (zoology). However, I'd first like to check that the data itself is correct. (click to enlarge) Answer: Yes, it's accurate, see e.g. this picture taken from the Wikipedia Geologic temperature record page: You could dig up the source if you wanted from the wikipedia article (or the graph description page), but any graph will be generally the same. Since there was an ice age, you'd expect global temperature to be low ~20k years ago and so on.
{ "domain": "earthscience.stackexchange", "id": 1733, "tags": "climate-change, temperature" }
Clarification on the Keras Recurrent Unit Cell
Question: I paste below the Keras documentation on the Recurrent layer model = Sequential() model.add(LSTM(32, input_shape=(10, 64))) # now model.output_shape == (None, 32) # note: `None` is the batch dimension. What does the first argument of LSTM (32) mean here? Is it creating 32 LSTM blocks (by block I mean one consisting of input, forget and output gates)? Could you please explain the meaning of the first argument and how that contributes to the output dimension of the LSTM, both when return_sequences is True and when return_sequences is False? Answer: The notation used for LSTM is quite confusing, and this took me some time to get my head around as well. When you see this graphic (often used to explain RNNs): You need to consider that X is the data at timestep t; it's not just a single scalar input value (as we're used to with feed-forward networks); it's an array / tensor of data. The diagram shows how there is an output at time-step t, but that it then also feeds into the next time-step, t+1, when the next array/tensor is then fed in. A better/clearer way (in my opinion) is to look at it like this: So an LSTM 'cell' is actually what you might consider a layer. And a unit (one of the circles) is one of these: which you can consider a neuron in a hidden layer. Initially, I thought this was a cell, but it isn't; it's a single unit. So when you specify 32 units, for example, you're actually saying how many of these units (neurons) you want in the cell (layer). This is what gives the model its capacity to learn the data that's presented to it. And just like hidden layers/neurons in a feed-forward network, it's a hyperparameter that you'll need to experiment with; too low and the model will under-fit, too high and it will over-fit.
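The "32" also shows up directly in the layer's parameter count — each of the 4 gates has an input kernel, a recurrent kernel and a bias (a quick arithmetic check; the output shapes quoted in the comments are Keras' documented behavior):

```python
def lstm_param_count(units, input_dim):
    """Trainable parameters of a standard LSTM layer with bias: 4 gates."""
    return 4 * (input_dim * units + units * units + units)

print(lstm_param_count(32, 64))   # -> 12416, what model.summary() reports

# The output shape depends only on `units` (and on return_sequences):
#   LSTM(32, input_shape=(10, 64))                        -> (None, 32)
#   LSTM(32, input_shape=(10, 64), return_sequences=True) -> (None, 10, 32)
```

With return_sequences=True the layer emits its 32-dimensional state at every one of the 10 timesteps; with False it emits only the final state.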
{ "domain": "datascience.stackexchange", "id": 6884, "tags": "python, deep-learning, nlp, keras, rnn" }
Assign values to a matrix based on the location of non-zero elements in another matrix
Question: I have a matrix A containing ones and zeros. I want to create a new matrix B where the non-zero elements are in the same position as those in A, but where the values are given by the location of the first non-zero element in that row. A few examples to show what I mean: 1 1 0 0 -> 1 1 0 0 (the first non-zero element is in column 1) 0 1 0 1 -> 0 2 0 2 (the first non-zero element is in column 2) 0 0 1 0 -> 0 0 3 0 (the first non-zero element is in column 3) 0 0 0 1 -> 0 0 0 4 (the first non-zero element is in column 4) The code I have now is: for i = 1:size(A,1) for j = 1:size(A,2) if A(i,j) == 1 B(i,:) = A(i,:) * find(A(i,:) == 1,1); end end end This gives the following results for a given A: A A = 0 0 1 0 0 1 1 1 0 0 0 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 0 0 1 1 B B = 0 0 3 0 0 2 2 2 0 0 0 4 1 1 1 1 1 1 0 1 0 2 2 2 1 1 1 1 1 1 1 1 0 0 3 3 0 0 3 3 This code is slow if A is big. Is there a way to improve this code? I'm looking for improvements both on performance, and on the coding style / best practices etc. Answer: First, let's go through the code: You have two calls to size() in the beginning of your code, where the dimension is specified. It's better to make a single call to size, and save the two variables. This won't improve performance much, but will make it possible to reuse the two dimensions: [rows, cols] = size(A); You are using i and j as names for the iterators. In general, using those names is discouraged. It is not a problem when working with only real numbers, but once you start working with complex numbers this could be a problem. As an example, consider the following code, where we're adding complex numbers: x = 1 + i; % Complex number, Real(x) = 1, Imag(x) = 1 for i = 1:10 x = x + i end In this case, x will be 56 + 1i (the loop counter shadows the imaginary unit), instead of the intended 1 + 11i. Similarly, you might get code that doesn't behave as expected, if you forget to initialize i to a variable.
For instance, if you think you have written i = true, i = false, i = 0 or something similar, statements like i == true or i == false will not give an error. This might be hard to debug since you don't get any error messages. You have a growing loop inside your code. A row will be added to B for each iteration of the outer loop. This is very slow, and MATLAB actually warns you about this: The variable B appears to change size on every loop iteration You should always preallocate memory. In this case, the size of the B matrix is known: B = zeros(size(A)); % or B = zeros(rows, cols); You loop through all columns and use if A(i,j) == 1 to check if there are any non-zero elements. In the following code you use find to find the index of that element. A better way to do this is: for ii = 1:rows idx = find(A(ii,:) == 1, 1); if ~isempty(idx) B(ii,:) = A(ii,:) * idx; end end This way, you can save one loop, and only loop through each element once. You should use proper indentation. This can easily be achieved in MATLAB. Press Ctrl+a followed by Ctrl+i, and MATLAB will do the indentation for you.
{ "domain": "codereview.stackexchange", "id": 20624, "tags": "performance, matrix, matlab" }
How to interpret training and testing accuracy which are almost the same?
Question: Note - I have read this post but still don't understand. I have a Naive Bayes classifier; when I input my training data to test the accuracy, I get 63.05%. When I input my test data, the accuracy is 65.00%. Why are the training and test accuracy almost identical? For information, my data is split 70/30. Does this mean that there is no overfitting? Answer: Why are the training and test accuracy almost identical? Nearly identical performance on the training set and test set is a good outcome; it means the model is doing what it's supposed to do. To give an intuitive comparison: The performance on the training set is equivalent to how well a student can redo the exercises which have been solved by the teacher during class. The student might just have memorized the answers by heart, so it's not a proof that they understand. The performance on the test set is equivalent to how well the student can solve some similar exercises that they haven't seen before in a test. This is a much better indication that the student truly understands the topic. Does this mean that there is no overfitting? Yes, it shows that there's no overfitting. To keep with my comparison, overfitting is equivalent to memorizing the answers. However there can be other problems which bias the result: The performance on the test set is 2 points higher than the performance on the training set. This probably means that the test set is quite small, because with a large enough sample the test performance would be unlikely to exceed the training performance. If the test set is too small, the performance is less reliable (any statistic obtained on a small sample is less reliable). Accuracy can be a misleading evaluation measure. It only counts the proportion of correct predictions, so if a large proportion of instances belong to the same class then the classifier can just predict any instance as this class and obtain high accuracy. 
For example here if the majority class is around 63-65%, then it's possible that the classifier didn't learn anything at all. Looking at precision/recall/F1-score gives a more accurate picture of what happens. [edit] Important note: as Nikos explained in a comment below, my answer assumes that you have a proper test set, i.e. that the train and test sets are sufficiently distinct from each other (otherwise there could be data leakage and the test set performance would be meaningless).
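The majority-class caveat can be made concrete with a small sketch (the labels below are hypothetical, chosen so that 65% of instances belong to one class, matching the accuracy in the question): a "classifier" that always predicts the majority class reaches the same accuracy while completely missing the minority class.

```python
from collections import Counter

# Hypothetical labels: 65% of instances belong to class 0 (the majority).
y_true = [0] * 65 + [1] * 35

# A "classifier" that learned nothing and always predicts the majority class.
majority = Counter(y_true).most_common(1)[0][0]
y_pred = [majority] * len(y_true)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall for the minority class: what fraction of true 1s were found.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall_minority = tp / sum(t == 1 for t in y_true)

print(accuracy)          # 0.65 -- looks as "good" as the model in the question
print(recall_minority)   # 0.0  -- but the minority class is never predicted
```

This is exactly why looking at per-class precision/recall/F1 alongside accuracy matters when the class distribution is skewed.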
{ "domain": "datascience.stackexchange", "id": 9179, "tags": "machine-learning, accuracy, naive-bayes-classifier" }
Determining location of Big bang
Question: I am just a high school student. I have heard that we can't determine the exact location of the Big Bang, so now I have a question: can't we at least approximate it? Say we divide the observable universe into many grid cells, take the average age of each cell, and do this for all the cells. I intuitively assume that, on average, the parts of the observable universe which formed earlier would be closer to the point of origin of the Big Bang, right? I know that there are other effects to consider, such as the expansion of space, but it is still not possible, so my question is: why? Answer: Think of a large elastic sheet of cloth. Gather a few friends and now pull from all sides. It expands. By that I mean that any two points are separating and moving farther away from each other. If you drew a mesh pattern on the cloth beforehand, you would see all mesh cells widening. All of them. So where did the stretching start? Where is the point of origin of this stretching phenomenon? This is unanswerable. It quite obviously did not start at just one point; rather, it started everywhere at once. This is how you can think of the Big Bang for an intuitive picture of why your question, as intuitive as it may feel, is unanswerable. An unintuitive part about the Big Bang that skews this analogy a bit is that you must imagine a cloth with a mesh of infinitely many infinitesimally small cells to begin with, because the very concept of space didn't even exist before the Big Bang took place. So asking "where" had no meaning. Also, time itself didn't exist, so I can't even meaningfully use the word "before". The Big Bang is what we call a singularity. It's an abstract topic, and the above cloth-stretching analogy is good for realising this.
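The stretching-cloth picture can be sketched numerically (with made-up coordinates, purely for illustration): mark a few points in a 1-D "universe", stretch all of space uniformly, and check that every point sees every other point recede by the same factor, so no point qualifies as the centre of the expansion.

```python
# Toy 1-D universe: arbitrary marked points, then a uniform stretch of space.
scale = 2.0
points = [-2.0, -1.0, 0.0, 1.0, 3.0]
stretched = [scale * x for x in points]

ratios = []
for i in range(len(points)):
    for j in range(len(points)):
        if i != j:
            before = abs(points[i] - points[j])
            after = abs(stretched[i] - stretched[j])
            ratios.append(after / before)

# Every pairwise distance grew by the same factor, no matter which point you
# pick as "home" -- from anywhere, it looks like the expansion is centred on you.
print(set(ratios))   # {2.0}
```

The same symmetry is what the answer's mesh-cell argument expresses: the stretching has no privileged starting point.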
{ "domain": "physics.stackexchange", "id": 98777, "tags": "cosmology, big-bang" }
Generalized Büchi Automata - Formal definition of a state appearing infinitely often?
Question: I am studying generalized Büchi automata and I don't really understand when a state is considered to appear infinitely often. The definition I have is: A state $s$ appears infinitely often if there exists an infinite set of points $i \in \mathbb{N}$ such that the $i$th state of the execution is $s$. But there's also an example which I think contradicts this definition. According to the example, the language accepted by this automaton is the language where the string $ab$ appears infinitely often. Why isn't it just $a$ appearing infinitely often? State 2 would be reached even if we only had $a$ as input. Which is wrong, the example or the definition? Or did I misunderstand the definition? Answer: The definition and the example are both correct. If the automaton reads $a$ in state $2$ or reads $b$ in state $1$, then it rejects, because it has no state to go to. So, if the machine reads an infinite amount of its input, then it must read $a$ every time it is in state $1$ and read $b$ every time it is in state $2$. If the machine reads $b$ only a finite number of times, it only enters state $2$ a finite number of times, so it must reject. Therefore, any string that's accepted must contain an infinite number of $b$s. Since $b$s can only be read in state $2$ and the only way of getting there is to read an $a$ from state $1$, any accepted string must contain an infinite number of $ab$s.
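The automaton's diagram isn't reproduced in the question, so the transitions below are an assumed reconstruction consistent with the answer's reasoning: state 1 can only read 'a' (staying in 1 or moving to 2), and state 2 can only read 'b' (going back to 1). Simulating finite prefixes then shows which runs survive and which get stuck:

```python
# Assumed transition relation (not shown in the original question).
delta = {
    (1, 'a'): {1, 2},
    (2, 'b'): {1},
}

def reachable(prefix, start=(1,)):
    """States some run can be in after reading a finite prefix; an empty set
    means every run got stuck, so no infinite accepting run extends it."""
    states = set(start)
    for ch in prefix:
        states = {t for s in states for t in delta.get((s, ch), set())}
    return states

print(reachable("abab"))  # {1} -- runs survive; each 'ab' block visits state 2
print(reachable("aab"))   # {1} -- also fine; the extra 'a' just loops in state 1
print(reachable("ba"))    # set() -- reading 'b' in state 1 kills every run
```

This mirrors the answer's argument: visiting state 2 (and hence accepting) requires reading 'a' then 'b', so an accepted infinite word must contain infinitely many occurrences of 'ab'.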
{ "domain": "cs.stackexchange", "id": 9170, "tags": "automata, buchi-automata" }
Gravitational potential energy in CM frame
Question: If the centre of mass is taken as the origin, then the gravitational potential energy of two bodies is \begin{equation} V=-\frac{Gm_{1}m_{2}}{(r_{1}+r_{2})} \end{equation} where $r_{1}$ and $r_{2}$ are the distances from the center of mass of the system to the bodies $m_{1}$ and $m_{2}$ respectively. Is this right? Correct me if I'm wrong. Answer: This is almost correct. The formula is $ V=-\frac{Gm_{1}m_{2}}{r_{12}} $ where $r_{12} $ is the distance between the centers of mass of the two particles.
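For two point masses the two expressions actually coincide, which a quick numeric check makes clear (the masses and positions below are arbitrary illustrative numbers): the centre of mass lies on the segment between the bodies, so $r_1 + r_2$ equals the separation $r_{12}$.

```python
# Two point masses on a line; the centre of mass sits between them, so the
# distance from m1 to the CM plus the distance from m2 to the CM equals the
# full separation r12.
m1, m2 = 3.0, 5.0
x1, x2 = -2.0, 6.0                       # positions of the two bodies

x_cm = (m1 * x1 + m2 * x2) / (m1 + m2)   # centre of mass
r1 = abs(x1 - x_cm)                      # distance of m1 from the CM
r2 = abs(x2 - x_cm)                      # distance of m2 from the CM
r12 = abs(x2 - x1)                       # separation of the bodies

print(r1 + r2)   # 8.0
print(r12)       # 8.0  -- so -G*m1*m2/(r1 + r2) equals -G*m1*m2/r12
```

The answer's formula in terms of $r_{12}$ is still the one to remember, since it does not depend on any particular choice of origin.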
{ "domain": "physics.stackexchange", "id": 49379, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, reference-frames, potential-energy" }
Time reversal of a QM Hamiltonian
Question: I'm interested in the time-reversal properties of a term in the non-relativistic QM Hamiltonian proportional (up to a true scalar) to $$ H \propto (\vec S_1 \times \vec S_2) \cdot \vec L $$ The situation with $\vec L$ is clear: it does change sign. What about the first factor in the product? Doesn't its sign under time reversal depend on the particular spin state? Answer: $\mathbf{L}$, $\mathbf{J}$ and $\mathbf{S}$ all change sign under time reversal. For $\mathbf{L}$ this is trivial, since it depends on $\mathbf{p} = m\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}$ and with $t \rightarrow -t$ you get $\mathbf{p} \rightarrow -\mathbf{p}$. For spin there is no equally direct argument, but the rule is rooted in treating spin like a magnetisation $\mathbf{M}$. And $\mathbf{M}$, like $\mathbf{B}$, is generated by currents, i.e. by terms like $\frac{\mathrm{d}q}{\mathrm{d}t}$, so again $t \rightarrow -t$ flips the sign. See here for more discussion.
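The sign flip of $\mathbf{L}$ is easy to check numerically. This is a minimal sketch with made-up vectors, representing time reversal as $\mathbf{p} \rightarrow -\mathbf{p}$ with $\mathbf{x}$ unchanged:

```python
# Check that L = r x p flips sign under time reversal (p -> -p, r unchanged).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

r = (1.0, 2.0, 3.0)    # position: even under time reversal
p = (0.5, -1.0, 2.0)   # momentum: odd under time reversal

L = cross(r, p)
L_reversed = cross(r, tuple(-x for x in p))

print(L)           # (7.0, -0.5, -2.0)
print(L_reversed)  # (-7.0, 0.5, 2.0), i.e. L -> -L
```

For the full term in the question: since both $\vec S_1$ and $\vec S_2$ flip sign, the cross product $\vec S_1 \times \vec S_2$ is unchanged, while $\vec L$ flips, so $(\vec S_1 \times \vec S_2) \cdot \vec L$ is odd under time reversal, independent of the particular spin state.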
{ "domain": "physics.stackexchange", "id": 55775, "tags": "quantum-mechanics, angular-momentum, hamiltonian, time-reversal-symmetry" }