Problem with Faraday's Law for a Circuit with Battery
Question: From Maxwell's equations, we know, $$\nabla \times \vec E=-\frac{\partial \vec B}{\partial t}\Rightarrow\oint \vec E\cdotp d\vec l=-\int\frac{\partial \vec B}{\partial t}\cdotp d\vec a \tag{1}\label{1}$$ and from the definition of EMF, $$\varepsilon=\oint \vec f\cdotp d\vec l \tag{2}\label{2}$$ where $\vec f$ is the force per unit charge. Now, consider a simple circuit with just a battery and a resistor. The circuit is placed in a time-varying magnetic field. So, $\vec f=\vec f_s+\vec E$, where $\vec f_s$ is the force per unit charge due to the battery. So, the EMF is $$\varepsilon=\oint(\vec f_s+\vec E)\cdotp d\vec l=\oint\vec f_s\cdotp d\vec l+\oint\vec E\cdotp d\vec l=\oint\vec f_s\cdotp d\vec l-\int\frac{\partial \vec B}{\partial t}\cdotp d\vec a \tag{3}\label{3}$$ But from Faraday's law, we know that the EMF is $$\varepsilon=-\frac{d\phi}{dt}=-\int\frac{\partial \vec B}{\partial t}\cdotp d\vec a \tag{4}\label{4}$$ Combining these two expressions for $\varepsilon$, we get $$\oint\vec f_s\cdotp d\vec l-\int\frac{\partial \vec B}{\partial t}\cdotp d\vec a=-\int\frac{\partial \vec B}{\partial t}\cdotp d\vec a \tag{5}\label{5}$$ $$\Rightarrow \oint\vec f_s\cdotp d\vec l=0$$ But $\oint\vec f_s\cdotp d\vec l$ is actually the voltage across the battery terminals, which shouldn't be zero. Where is the problem with this analysis? Answer: You're confusing the induced EMF with the total EMF, which is the sum of all contributions to the EMF, including the induced EMF (due to the induced electric field, described by Faraday's law) and the battery EMF (due to electrochemical processes in the battery, not described by Faraday's law). The net EMF due to both the induced field and the battery can be written as $$ \mathscr{E}_{net} = \mathscr{E}_{induced} + \mathscr{E}_{battery} $$ $$ \mathscr{E}_{net} = \oint \mathbf E_{induced} \cdot d\mathbf s + \mathscr{E}_{battery}. $$ $$ \mathscr{E}_{net} = -\frac{d}{dt}\int \mathbf B \cdot d\mathbf S + \mathscr{E}_{battery}. $$ If the resistance of the circuit is $R$, then Kirchhoff's voltage law predicts that the current due to both sources of EMF will be $$ I = \frac{\mathscr{E}_{net}}{R}. $$
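As a quick numeric sanity check of the last formula (a minimal sketch; all values are made up for illustration):

```python
# Hedged sketch: the current is driven by the *net* EMF, i.e. the battery
# EMF plus the induced EMF -d(phi)/dt, per Kirchhoff's voltage law.
# The numbers below are hypothetical.

def net_current(emf_battery, dphi_dt, resistance):
    emf_induced = -dphi_dt          # Faraday's law contribution
    return (emf_battery + emf_induced) / resistance

# 9 V battery, flux increasing at 2 Wb/s, R = 5 ohm:
print(net_current(9.0, 2.0, 5.0))   # 1.4 (amperes)
```

With no time-varying field the formula reduces to the familiar $I = \mathscr{E}_{battery}/R$, which is the resolution of the apparent paradox above.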
{ "domain": "physics.stackexchange", "id": 93736, "tags": "electromagnetism, electric-circuits, maxwell-equations, electromagnetic-induction, batteries" }
Search for Tree
Question: Here is a search implementation for my tree. It searches each node in a breadth-first manner first, but goes deeper in a depth-first manner when it needs to. I know there may be multiple identical items in the tree (not in my current use of it, but I still need to allow for that), so it just returns the children of the first instance found. If I ever have duplicate items with different children, other things will need to change anyway. For reference, this method is being implemented as part of my tree here: Trees and their uses public IEnumerable<T> GetChildren(T item) { IEnumerable<KeyValuePair<int, T>> itemSet = items.Where(x => item.Equals(x.Value)); if (itemSet.Count() != 0) { int index = itemSet.ElementAt(0).Key; IEnumerable<KeyValuePair<int, Tree<T>>> branchSet = branches.Where(x => x.Key == index); if (branchSet.Count() != 0) { return branchSet.ElementAt(0).Value.GetChildren(); } } foreach (KeyValuePair<int, Tree<T>> kvp in branches) { IEnumerable<T> coll = kvp.Value.GetChildren(item); if (coll != Enumerable.Empty<T>()) { return coll; } } return Enumerable.Empty<T>(); } I'm sure there are ways I can improve this, although it looks good to me. Edit: Some people asked me for clarification on how my search combines the breadth-first and depth-first searches in The 2nd Monitor. [Images omitted: a breadth-first search diagram and a depth-first search diagram (both from Wikipedia), plus a diagram of my combined search.] Answer: I would use First() instead of ElementAt(0) as it conveys the intention better. Instead of using something.Count() != 0 you should use something.Any(). This avoids having to iterate over the entire sequence just to find out whether there are any items in it. Also, you are not interested in the entire itemSet; you're just interested in the first element, if it exists, so you should make use of FirstOrDefault, which also provides a predicate overload.
This foreach loop: foreach (KeyValuePair<int, Tree<T>> kvp in branches) { IEnumerable<T> coll = kvp.Value.GetChildren(item); if (coll != Enumerable.Empty<T>()) { return coll; } } can be condensed to: var result = branches.Select(kvp => kvp.Value.GetChildren(item)) .FirstOrDefault(coll => coll != Enumerable.Empty<T>()); return result ?? Enumerable.Empty<T>(); Update: Because KeyValuePair is a value type, a little helper is needed to make this a bit neater: public static class Extensions { public static bool EqualsDefault<T>(this T obj) { return EqualityComparer<T>.Default.Equals(obj, default(T)); } } With this the method could be rewritten as: public IEnumerable<T> GetChildren(T item) { var firstMatch = items.FirstOrDefault(x => item.Equals(x.Value)); if (!firstMatch.EqualsDefault()) { var branch = branches.FirstOrDefault(x => x.Key == firstMatch.Key); if (!branch.EqualsDefault()) { return branch.Value.GetChildren(); } } var result = branches.Select(kvp => kvp.Value.GetChildren(item)) .FirstOrDefault(coll => coll != Enumerable.Empty<T>()); return result ?? Enumerable.Empty<T>(); }
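For readers who don't follow C#, the overall search strategy (check the current node's direct children first, then recurse into each subtree in order, returning the children of the first match) can be sketched in Python. The (value, children) tuple representation and the names are mine, not the original Tree<T>:

```python
# Hedged sketch of the combined breadth-then-depth search.
# A node is a (value, children) pair, where children is a list of nodes.

def get_children(node, item):
    value, children = node
    # breadth-first at this level: is `item` a direct child?
    for child in children:
        if child[0] == item:
            return [c[0] for c in child[1]]
    # otherwise go deeper, depth-first, child by child
    for child in children:
        found = get_children(child, item)
        if found:
            return found
    return []

tree = ("root", [("a", [("a1", []), ("a2", [])]),
                 ("b", [("b1", [("b1x", [])])])])
print(get_children(tree, "b1"))  # ['b1x']
```

As in the original, only the first instance found contributes its children.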
{ "domain": "codereview.stackexchange", "id": 12875, "tags": "c#, tree, breadth-first-search, depth-first-search" }
Ampère's circuital law in differential form
Question: I'm having trouble understanding how certain conclusions are made in the explanation of Ampère's circuital law. Here's part of what's in my book: Consider a magnetic field with induction $\vec B$. Find the circulation along the edges of an infinitesimal rectangle $PQRS$ in the xy-plane (the lengths of $SR$ and $PQ$ are both $dx$). The circulation (if the direction you're going in is from $R$ to $S$) is: $\Lambda_B=\oint_{PQRS}\vec B \cdot d\vec l = \int_{PQ}+\int_{QR}+\int_{RS}+\int_{SP}$ Now, along $QR$, parallel with the Y-direction, $d\vec l=\vec e_y dy$ and: $\int_{QR}\vec B \cdot d\vec l=\vec B \cdot \vec e_y\,dy = B_y\,dy$ Along $SP$, parallel with the -Y direction, $d\vec l=-\vec e_y dy$ so that: $\int_{SP}\vec B \cdot d\vec l=-\vec B' \cdot \vec e_y\,dy = -B'_y\,dy$ And so: $\int_{QR}+\int_{SP}=(B_y-B'_y)dy$ But because $PQ = dx$, $B_y-B'_y = dB'_y = (\partial B_y/\partial x)dx$ ... Here are the things I don't understand: I assume that the magnetic field isn't homogeneous, since that isn't specified and since a distinction is made between $B_y$ and $B'_y$ on resp. $QR$ and $SP$. Why is that not taken into account when integrating over $QR$ and $SP$? The magnetic field isn't constant along the line. I first figured that this is why it's explicitly stated we're working with an "infinitesimal" rectangle, and then it made sense to me, except for the fact that now I'm confused as to why $B_y$ and $B'_y$ aren't the same. How do you go from $dB'_y$ to $(\partial B_y/\partial x)dx$? Answer: I don't know which book you're reading from. I think you don't have to consider $B(x,y)$ as homogeneous. Consider that when you integrate along your path, only the component $B_y$ will contribute along segments $QR$ and $SP$, while $B_x$ will contribute along $PQ$ and $RS$. Consider the integral along $QR$.
$$ \tag{1} \Gamma_{QR}=\int_{QR} \vec B(x,y)\cdot d\vec l=\int_{QR} B_y(x,y)dy $$ If you consider that the segment $QR$ is small and $B_y$ doesn't vary very much along it, you can substitute $B_y$ with its mean value $\bar B_{y,QR}$ and you get: $$ \tag{2} \Gamma_{QR}=\bar B_{y,QR}dy $$ Now at $Q$ we have $B_y(x,y)$, while at $R$ we have $B_y(x,y+dy)$ and we can write: $$ \tag{3} \bar B_{y,QR}=\frac{1}{2} [B_y(x,y)+B_y(x,y+dy)]=B_y(x,y)+\frac{1}{2}\frac{\partial B_y}{\partial y}dy $$ where I've used Taylor expansion. At $S$ we have $B_y(x+dx,y+dy)$ and at $P$ we have $B_y(x+dx,y)$. With the same reasoning we get: $$ \tag{4} \bar B_{y,SP} =\frac{1}{2} [B_y(x+dx,y+dy)+B_y(x+dx,y)]=\\ =B_y(x,y)+\frac{\partial B_y}{\partial x}dx+\frac{1}{2}\frac{\partial B_y}{\partial y}dy $$ Similarly to (2), the line integral along $SP$ is $$\tag{5} \Gamma_{SP}=-\bar B_{y,SP}dy $$ Finally from (3) and (4) $$ \tag{6} \Gamma_{QR}+\Gamma_{SP}=-\frac{\partial B_y}{\partial x}dxdy $$ The same reasoning for $B_x$ along $PQ$ and $RS$ will give: $$ \tag{7} \Gamma_{PQ}+\Gamma_{RS}=\frac{\partial B_x}{\partial y}dxdy $$ Your line integral along the path will give $$\tag{8} \oint_{PQRS} \vec B(x,y)\cdot d\vec l=\Bigg(\frac{\partial B_x}{\partial y}-\frac{\partial B_y}{\partial x}\Bigg)dxdy $$
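The infinitesimal result (8) can be checked numerically for a concrete field (an illustrative finite-size check in Python; the field $B_x = xy$, $B_y = x^2$ and the sample point are my own choices):

```python
# Midpoint-rule line integral of B.dl around P->Q->R->S->P with
# Q=(x,y), R=(x,y+dy), S=(x+dx,y+dy), P=(x+dx,y), i.e. the same
# corner labels and orientation as in the answer.
def circulation(Bx, By, x, y, dx, dy, n=400):
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        total += -Bx(x + dx * (1 - t), y) * dx * h        # P -> Q (-x)
        total += By(x, y + dy * t) * dy * h               # Q -> R (+y)
        total += Bx(x + dx * t, y + dy) * dx * h          # R -> S (+x)
        total += -By(x + dx, y + dy * (1 - t)) * dy * h   # S -> P (-y)
    return total

Bx = lambda x, y: x * y       # dBx/dy = x
By = lambda x, y: x * x       # dBy/dx = 2x
x, y, dx, dy = 1.0, 2.0, 1e-3, 1e-3
lhs = circulation(Bx, By, x, y, dx, dy)
rhs = (x - 2 * x) * dx * dy   # (dBx/dy - dBy/dx) dx dy, evaluated at (x, y)
print(lhs, rhs)               # both approximately -1e-6
```

For a small rectangle the two agree to leading order, which is exactly equation (8).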
{ "domain": "physics.stackexchange", "id": 13808, "tags": "homework-and-exercises, electromagnetism" }
Basic random password generator
Question: I'm trying to pick up some Haskell skills so thought I'd write a random password generator to get to grips with using IO. It was far trickier than I expected and I employed rather more trial and error than I'd have liked. What I've ended up with works, and while I'm happy with it generally I'd like to know if this is reasonable, idiomatic Haskell or if I've done anything peculiar. import System.Random -- Generate a random number in closed interval [lo, hi] random_int :: Int -> Int -> IO Int random_int lo hi = (randomRIO (lo,hi) :: IO Int) -- Generate a random character from an internal alphabet random_char :: IO Char random_char = do index <- random_int 0 $ (length alphabet) - 1 return $ alphabet !! index where alphabet = ['A'..'Z'] ++ ['a'..'z'] ++ ['0'..'9'] ++ "!\"£$%^&*()-_+={}[]:;@'~#|\\<,>.?/'" -- Generate a random password of given length random_password :: Int -> IO String random_password length = sequence $ map (\x -> random_char) [1..length] main :: IO () main = random_password 8 >>= putStrLn The implementation of random_password feels odd and hacky, but I struggled to find another way to create a list of results from successive invocations of random_char or any way to avoid the need for sequence. Is it hacky? Answer: hlint You should run hlint on your program - it's a good way to become familiar with techniques and practices used by expert Haskellers. Running hlint on your code returned the following suggestions: * use camelCase instead of underscores ./passwd.hs:7:20: Warning: Redundant bracket Found: (randomRIO (lo, hi) :: IO Int) Why not: randomRIO (lo, hi) :: IO Int ./passwd.hs:19:5: Error: Use mapM Found: sequence $ map (\ x -> random_char) [1 .. length] Why not: mapM (\ x -> random_char) [1 .. length] ./passwd.hs:19:21: Warning: Use const Found: \ x -> random_char Why not: const random_char sequence . map You're right that the code for random_password feels hacky since the value x is ignored.
The canonical way to write this is: import Control.Monad (replicateM) random_password length = replicateM length random_char replicateM repeats a monadic action a specified number of times, collecting the results in a list. !! You probably already know that using !! to index into a list is inefficient. A better data structure to use for alphabet is Text from Data.Text, which indexes much more cheaply than walking a linked list. import qualified Data.Text as T alphabet = T.pack $ ['A'..'Z'] ++ ['a'..'z'] ++ ... random_char = do index <- randomRIO (0, T.length alphabet - 1) return $ T.index alphabet index
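For readers coming from other languages, the replicateM pattern (repeat an effectful draw n times and collect the results) looks like this in Python. This is a rough analogue, not a translation of the Haskell above, and the exact symbol set is illustrative:

```python
import random
import string

# Same spirit as the original alphabet (letters, digits, punctuation).
ALPHABET = string.ascii_letters + string.digits + "!\"$%^&*()-_+={}[]:;@'~#|<,>.?/"

def random_password(n, rng=random):
    # "replicateM n random_char": n independent draws, collected into a string
    return ''.join(rng.choice(ALPHABET) for _ in range(n))

print(random_password(8))
```

Note that Python's `random` module, like System.Random, is not cryptographically secure; for real passwords the `secrets` module would be the appropriate source of randomness.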
{ "domain": "codereview.stackexchange", "id": 16022, "tags": "strings, haskell, random" }
How fast can we find all Four-Square combinations that sum to N?
Question: A question was asked at Stack Overflow here: Given an integer $N$, print out all possible combinations of integer values of $A,B,C$ and $D$ which solve the equation $A^2+B^2+C^2+D^2 = N$. This question is of course related to Bachet's Conjecture in number theory (sometimes called Lagrange's Four Square Theorem because of his proof). There are some papers that discuss how to find a single solution, but I have been unable to find anything that talks about how fast we can find all solutions for a particular $N$ (that is, all combinations, not all permutations). I have been thinking about it quite a bit and it seems to me that it can be solved in $O(N)$ time and space, where $N$ is the desired sum. However, lacking any prior information on the subject, I am not sure if that is a significant claim on my part or just a trivial, obvious or already known result. So, the question then is, how fast can we find all of the Four-Square Sums for a given $N$? OK, here's the (nearly) $O(N)$ algorithm that I was thinking of. First two supporting functions, a nearest integer square root function: // The nearest integer whose square is less than or equal to N public int SquRt(int N) { return (int) Math.Sqrt((double) N); } And a function to return all TwoSquare pairs summing from 0 to N: // Returns a list of all sums of two squares less than or equal to N, in order. 
public List<List<int[]>> TwoSquareSumsLessThan(int N) { // Make the index array List<int[]>[] Sum2Sqs = new List<int[]>[N + 1]; // Get the base square root, which is the maximum possible root value int baseRt = SquRt(N); for (int i = baseRt; i >= 0; i--) { for (int j = 0; j <= i; j++) { int sum = (i * i) + (j * j); if (sum > N) { break; } else { // Make the new pair int[] sumPair = { i, j }; // Get the sumList entry List<int[]> sumLst; if (Sum2Sqs[sum] == null) { // make it if we need to sumLst = new List<int[]>(); Sum2Sqs[sum] = sumLst; } else { sumLst = Sum2Sqs[sum]; } // Add the pair to the correct list sumLst.Add(sumPair); } } } // Collapse the index array down to a sequential list List<List<int[]>> result = new List<List<int[]>>(); for (int nn = 0; nn <= N; nn++) { if (Sum2Sqs[nn] != null) result.Add(Sum2Sqs[nn]); } return result; } Finally, the algorithm itself: // Return a list of all integer quads (a,b,c,d), where: // a^2 + b^2 + c^2 + d^2 = N, // and a >= b >= c >= d, // and a,b,c,d >= 0 public List<int[]> FindAllFourSquares(int N) { // get all two-square sums <= N, in descending order List<List<int[]>> Sqr2s = TwoSquareSumsLessThan(N); // Cross the descending list of two-square sums <= N with // the same list in ascending order, using a Merge-Match // algorithm to find all combinations of pairs of two-square // sums that add up to N List<int[]> hiList, loList; int[] hp, lp; int hiSum, loSum; List<int[]> results = new List<int[]>(); int prevHi = -1, prevLo = -1; // Set the Merge sources to the highest and lowest entries in the list int hi = Sqr2s.Count - 1; int lo = 0; // Merge until done .. 
while (hi >= lo) { // check to see if the points have moved if (hi != prevHi) { hiList = Sqr2s[hi]; hp = hiList[0]; // these lists cannot be empty hiSum = hp[0] * hp[0] + hp[1] * hp[1]; prevHi = hi; } if (lo != prevLo) { loList = Sqr2s[lo]; lp = loList[0]; // these lists cannot be empty loSum = lp[0] * lp[0] + lp[1] * lp[1]; prevLo = lo; } // Do the two entries' sums together add up to N? if (hiSum + loSum == N) { // they add up, so cross the two sum-lists over each other foreach (int[] hiPair in hiList) { foreach (int[] loPair in loList) { // Make a new 4-tuple and fill it int[] quad = new int[4]; quad[0] = hiPair[0]; quad[1] = hiPair[1]; quad[2] = loPair[0]; quad[3] = loPair[1]; // Only keep those cases where the tuple is already sorted //(Otherwise it's a duplicate entry) if (quad[1] >= quad[2]) { // (only need to check this one case, the others are implicit) results.Add(quad); } // (there's a special case where all values of the 4-tuple are equal // that should be handled to prevent duplicate entries, but I'm // skipping it for now) } } // Both the HI and LO points must be moved after a Match hi--; lo++; } else if (hiSum + loSum < N) { lo++; // too low, so must increase the LO point } else { // must be > N hi--; // too high, so must decrease the HI point } } return results; } As I said before, it should be pretty close to $O(N)$. However, as Yuval Filmus points out, since the number of four-square solutions for $N$ can be of order $N \ln\ln N$, this algorithm cannot do better than that. Answer: Juho's algorithm can be improved to an $O(N)$ algorithm using meet-in-the-middle. Go over all pairs $A,B \leq \sqrt{N}$; for each pair such that $M=A^2+B^2 \leq N$, store $(A,B)$ in some array $T$ of length $N$ (each position $M$ could contain several pairs, which might be stored in a linked list). Now go over pairs $M,N-M$ such that the corresponding cells in $T$ are non-empty. This way we get an implicit representation of all quadruples.
If we want to list all of them, then we can't do any better than $\Omega(N\log\log N)$, since Jacobi's four-square theorem shows that (for odd $N$) the number of representations is $8\sigma(N)$, and there are infinitely many integers such that $\sigma(N) \geq (e^\gamma - \epsilon) N\log\log N$ (see Grönwall's theorem). In order to get less trivial algorithms, one can try to factor $N$ over the appropriate quaternion ring, since we know that the representations as sums of four squares correspond (in some sense) to these factorizations, through Lagrange's four-square identity. We would still need to find all representations of any relevant prime.
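The meet-in-the-middle idea can be sketched concretely (Python; a small-N illustration of the answer's scheme, not tuned for the asymptotics discussed):

```python
# Bucket all two-square sums a^2 + b^2 <= N by their value, then pair
# buckets m and N - m to enumerate quadruples a >= b >= c >= d with
# a^2 + b^2 + c^2 + d^2 = N.
from math import isqrt

def four_squares(N):
    pairs = [[] for _ in range(N + 1)]   # pairs[m] = all (a, b), a >= b, a^2 + b^2 = m
    for a in range(isqrt(N), -1, -1):
        for b in range(a + 1):
            m = a * a + b * b
            if m > N:
                break
            pairs[m].append((a, b))
    quads = set()
    for m in range(N // 2 + 1):          # m <= N - m avoids double counting
        for c, d in pairs[m]:
            for a, b in pairs[N - m]:
                if a >= b >= c >= d:
                    quads.add((a, b, c, d))
    return sorted(quads)

print(four_squares(15))  # [(3, 2, 1, 1)]
```

Restricting the lower bucket to $m \leq N/2$ is safe because with $a \geq b \geq c \geq d$ we always have $c^2 + d^2 \leq N/2$.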
{ "domain": "cs.stackexchange", "id": 5096, "tags": "complexity-theory, time-complexity, number-theory, enumeration" }
What are some comparative studies on program termination verification tools?
Question: Comparative studies of tools like AProVE, 2LS Answer: As mentioned by nekketsuuu, SV-COMP has a termination category, and the Termination Competition has been going on for quite some time (there is some overlap). It depends on what kind of termination you are interested in. This paper on AProVE gives an overview of that tool, which covers many of the techniques involved in general program termination analysis and the types of programs that can be handled, though it does not really analyze the comparative strengths of different tools. I'm not aware of any modern overview paper that does this, which is unfortunate. The tool description papers of the competition participants for your category of choice would be a good place to start.
{ "domain": "cs.stackexchange", "id": 11896, "tags": "software-verification" }
Regarding buoyant force acting on a cone in an accelerated container
Question: While solving questions related to fluid mechanics, I came across this particular question: [Question image not reproduced here.] The correct options are (A), (B) and (C). I was only able to mark option (A) with certainty. I have some conceptual doubts here: What exactly is the significance of $a = g$? (I thought of considering a pseudo-force, and hence assumed the cone to be slanted due to it, but the question got complicated.) Does it really influence the answers in any way, i.e., will the answers differ if the container wasn't accelerating, or even moving at all? The second option has an obvious typo (a dimensional error), but even if the force in the option were $(\pi r^3\rho g)/3$, it still doesn't match my answer. I was able to find the relation between the height ($h$) of the part of the cone in liquid 1 and $r$ as $h^3 = r^3/2$. So $F = (\pi r^3\rho g)/6$ (I have considered only the buoyant force). Is there any other force applied by liquid 1 which I might be missing here? Could anyone please clarify both of my doubts? Answer: Ignoring the indicated acceleration, I agreed with statement (A). Assuming that the upper liquid could only push down on the upper section, I integrated the pressure times the horizontal component of the surface elements to get a total force of $\pi\rho g r^3/3$. (A poorly written question.)
{ "domain": "physics.stackexchange", "id": 79048, "tags": "fluid-dynamics, fluid-statics, density, buoyancy" }
Is there a simple way to test services (like actionlib's axclient)?
Question: I want to test a service connection as simply as possible (i.e. without setting up a dummy program). Actionlib has the very nice axserver.py tool for this, which just displays the entries of an action and allows a user to respond to it. I would like to have the same for services: some script that displays an arbitrary service in a simple GUI and allows the user to reply to service calls. Does something like this exist? Originally posted by dornhege on ROS Answers with karma: 31395 on 2011-11-25 Post score: 0 Answer: I guess the answer is: nobody knows of anything. So I ripped axclient.py. If anyone needs something like this, it is available here: sxserver. Originally posted by dornhege with karma: 31395 on 2011-11-28 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Timple86 on 2020-03-24: Since the link is outdated: https://github.com/nobleo/rx_service_tools
{ "domain": "robotics.stackexchange", "id": 7420, "tags": "ros, actionlib, roscpp, service" }
What is the difference between velocity and temperature?
Question: When talking about liquids or gases, velocity and temperature are both measures of the kinetic energy of a collection of molecules. And in devices like jet engines, temperature is converted directly into velocity. So why isn't something hot when it picks up velocity? And why doesn't something move when it gets really hot? My main question is: is the only difference between temperature and velocity the direction in which the volume moves? Answer: The average random molecular velocity (whose direction is, in a stochastic sense, evenly distributed in space and changes constantly through collisions) corresponds to temperature in e.g. kinetic gas theory. This has nothing to do with the macroscopic velocity, which has a definite direction. So yes, the difference between the velocity that can be associated with temperature and the macroscopic velocity we talk about in fluid/continuum mechanics is the randomness or non-randomness of their direction. Which is a very important distinction, though. It means that something does not move when it gets really hot because the displacement due to thermal motion is on average zero: the thermal velocity that can be assigned to every molecule/atom changes its direction constantly through collisions with other molecules/atoms. For the same reason, something does not get hot when it is moving, because the random velocity of the individual particles does not change. As has been pointed out in the comments, it is also worth mentioning the scale separation of those velocities. To quote Pirx from the comments: As an aside, the average molecular velocity in an ideal gas is of the order of the speed of sound in that gas. Thus, for practical, every-day situations, the velocity of the gas molecules is much, much higher than the macroscopic velocity of the gas flow.
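To put rough numbers on that scale separation (a back-of-the-envelope sketch in Python; the nitrogen mass and the 10 m/s flow speed are illustrative values):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
m_N2 = 4.65e-26        # approximate mass of one N2 molecule, kg

def mean_speed(T, m):
    """Mean molecular speed sqrt(8 k T / (pi m)) from the
    Maxwell-Boltzmann distribution."""
    return math.sqrt(8 * k_B * T / (math.pi * m))

v_thermal = mean_speed(300, m_N2)   # about 476 m/s at room temperature
v_flow = 10.0                       # a brisk macroscopic wind, m/s
print(v_thermal, v_flow)
```

The thermal speed comes out comparable to the speed of sound in air (~343 m/s), and far above everyday flow speeds, consistent with the quoted comment.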
{ "domain": "physics.stackexchange", "id": 35617, "tags": "thermodynamics, fluid-dynamics, temperature, velocity" }
Executing rosjava programs
Question: Leaving Android aside for the moment, what's the recommended way of executing a rosjava program buried in a main() in one of your classes? I'm asking because of the complexity of the CLASSPATH. This seems to be handled in the compilation, but I'm not sure how to bring it in when trying to run something. Originally posted by Daniel Stonier on ROS Answers with karma: 3170 on 2011-08-26 Post score: 0 Original comments Comment by Daniel Stonier on 2011-08-27: Just noticed rosjava_bootstrap's run.py, which will bootstrap the classpath for a NodeMain-type class. Is there a way to do this for a generic class with a generic main? Answer: If you are implementing your own main(), I recommend looking at the rosjava_bootstrap run.py script and adapting it to your own needs. The main thing of note is the Python routines for constructing the classpath. Also of note is RosRun.java, which shows the basic bootstrapping steps that you'll need for constructing a context. All of this will get better over time -- we are evolving the toolchain to try and find the sweet spot of compatibility with the many different and varied frameworks that Java provides (e.g. Eclipse, ant, Maven, OSGi, Android, etc.). Originally posted by kwc with karma: 12244 on 2011-08-27 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by kwc on 2011-08-28: There is someone who is working on support for OSGi containers and has done several different deployment scenarios with it. You should see the basic hooks in place for that, though it's not well documented. Comment by Daniel Stonier on 2011-08-27: Ended up doing exactly that (re-implementing my own run.py). Also found I could add ros.runtime.classpath from ros.properties directly into an osgi xxx.bnd file which gives it rpath-like tracking across jars. That will do for now. So many frameworks :)
{ "domain": "robotics.stackexchange", "id": 6537, "tags": "rosjava" }
Does every nerve ending send information to the brain separately?
Question: Does every nerve ending send information to the brain separately? Is there a nerve path (I don't know their scientific name) from every nerve ending to the brain; or are they sent to brain from the same paths in the dorsal root ganglion? If not, how can we determine the (almost) exact location of pain in our hand? I am not very familiar with the biology except the lessons I had taken in the high school. So please try to use daily language explaining this. Answer: There are two main structures in the human nervous system: The central nervous system, which includes the brain and spinal cord The peripheral nervous system, which is all of the nerves in the rest of the body (fingers, arms, feet, etc.) The signals taken by the peripheral nerves mainly travel to the brain through the spinal cord, and the brain sends signals back to the peripheral nerves through the central nervous system. There are a lot more complex mechanisms and various exceptions, but essentially: the nervous system is a vast network of signals, and the majority of these signals travel through the spinal cord to the brain and from the brain to the target nerves. The signals travel via different nerve "branches" — think of it like a tree of nerves, with the common root being the brain. For more information please see: Overview of the nervous system for dummies Wikipedia (with strong references and lots of detailed information)
{ "domain": "biology.stackexchange", "id": 7046, "tags": "human-biology, neuroscience, human-physiology, peripheral-nervous-system" }
Langevin equations in translational and rotational direction
Question: I want to describe the following system. A bead is connected to a tether. There is a force $\vec{F}_{up}=F_{up}\hat{y}$ that acts on the bead. The tether acts with a force on the bead; this force $F_{spring}$ depends only on the length/extension of the spring. A Brownian force acts in the rotational and translational directions. Just consider these as random variables in the x, y and $\theta$ directions. Now I want to formulate the Langevin equation for the rotational motion. I know that if there is no angular dependence, $\theta=0$ for all time, the Langevin equations are: $\gamma_{y}\frac{dy}{dt}+F_{y}^{tether}(x,y)+\frac{dF_{y}^{tether}(x,y)}{dy}\hat{y}=F_{y}^{therm}+F_{mag}-mg$ $\gamma_{x}\frac{dx}{dt}+F_{x}^{tether}(x,y)+\frac{dF_{y}^{tether}(x,y)}{dx}\hat{x}=F_{x}^{therm}$ What I know: $\gamma_{\theta}$ and $F_{therm}^{\theta}$ are known from the literature. Should I construct a Langevin equation in the same way, such as $\gamma_{\theta} \frac{d\theta}{dt}+\frac{d\tau_{tether}}{d\theta}\theta+\tau_{tether}=F^{therm}_{\theta}$ ? More specifically, should it include the term $\frac{d\tau_{tether}}{d\theta}\theta+\tau_{tether}$, which is completely analogous to the translational equations? Trivia: In the end I want to extend this into 3 dimensions and solve the system numerically, using Langevin dynamics. But I'm stuck on how to describe the force. EDIT: I've edited the question into something more specific. Answer: All the forces acting on the bead will accelerate the center of mass as though the force is acting there - so the mass will accelerate with a magnitude and direction given by the vector sum $$\frac{\vec{F}_{up} + \vec{F}_{spring}}{m}.$$ The bead will further undergo a rotation. The torque is entirely due to the spring - but we are missing the angle of the spring to the vertical. Let's call that angle $\alpha$.
Then the component of the force of the spring causing rotation is $F_{spring}\cos(\pi - \alpha - \theta - \pi / 2)=F_{spring}\sin(\alpha + \theta)$. [Diagram omitted.] From the diagram it follows that the angle $\alpha$ can be calculated from $x$, $y$ and $\theta$ (as I defined them in the drawing - I assume this is how you define them as well... not quite clear from your diagram). Now you can see that $$\tan\alpha = \frac{x + r\sin\theta}{y - r\cos\theta}$$ After which you can calculate $\beta$ from $$\beta = \pi/2 - (\alpha + \theta)$$ And finally, the torque on the bead is $$\Gamma = F_{spring} r \cos\beta$$
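The final expressions translate directly into code (a Python sketch; the symbols follow the answer, and the sample numbers are made up):

```python
import math

def torque(F_spring, r, x, y, theta):
    """Torque on the bead: tan(alpha) = (x + r sin(theta)) / (y - r cos(theta)),
    beta = pi/2 - (alpha + theta), Gamma = F_spring * r * cos(beta)."""
    alpha = math.atan2(x + r * math.sin(theta), y - r * math.cos(theta))
    beta = math.pi / 2 - (alpha + theta)
    return F_spring * r * math.cos(beta)

# Sanity check: with theta = 0 and the tether anchor directly below the
# bead (x = 0), alpha = 0 and beta = pi/2, so the torque vanishes.
print(abs(torque(1.0, 0.5, 0.0, 2.0, 0.0)) < 1e-12)  # True
```

In a Langevin integrator this torque would enter the rotational equation alongside the thermal torque, just as the spring force enters the translational equations.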
{ "domain": "physics.stackexchange", "id": 17422, "tags": "classical-mechanics, forces, fluid-dynamics, coordinate-systems, brownian-motion" }
Tic-Tac-Toe in C++11
Question: I have found Tic-Tac-Toe source code: Tic Tac Toe in C++ I have rewritten the source code in C++11 as shown below. How can I minimize hardcoding in the game logic? #include <iostream> #include <cctype> #include <algorithm> #include <functional> #include <array> class TicTacToe { public: bool isFull() const; void draw() const; void turn(char player); bool check(char player) const; private: bool fill(char player, int position); private: static const std::size_t mDim = 3; std::array<char, mDim * mDim> grid { { '-', '-', '-', '-', '-', '-', '-', '-', '-' } }; }; template<std::size_t dim> struct column : public std::unary_function<int, bool> { column(int i) : colNum(i){} bool operator() (int number) { return (number % dim == colNum); } int colNum; }; bool TicTacToe::fill(char player, int position) { if (grid[position] != '-') return false; grid[position] = player; return true; } bool TicTacToe::isFull() const { return 0 == std::count_if(grid.begin(), grid.end(), [](const char& i) { return i == '-'; }); } bool TicTacToe::check(char player) const { column<mDim>::argument_type input; // check for row or column wins bool row1 = true, row2 = true, row3 = true, col1 = true, col2 = true, col3 = true, diag1 = true, diag2 = true; int j = 0; // columns std::for_each(grid.begin(), grid.end(), [&](char i) { int x = j++; if (column<mDim>(0)(input = x)) col1 &= i == player; if (column<mDim>(1)(input = x)) col2 &= i == player; if (column<mDim>(2)( input = x)) col3 &= i == player; }); // diagonals j = 0; for (const auto& i : grid) { int x = j++; if (x == 0 || x == 4 || x == 8) diag1 &= i == player; if (x == 2 || x == 4 || x == 6) diag2 &= i == player; } if (col1 || col2 || col3 || diag1 || diag2) return true; // rows return std::search_n(grid.begin(), grid.end(), 3, player) != grid.end(); } void TicTacToe::draw() const { // Creating an onscreen grid std::cout << ' '; for (std::size_t i = 1; i <= mDim; ++i) std::cout << " " << i; int j = 0; char A = 'A'; column<mDim>::argument_type
input; for (auto& i : grid) { int x = j++; if (column<mDim>(0)(input = x )) std::cout << "\n " << A++; std::cout << ' ' << i << ' '; } std::cout << "\n\n"; } void TicTacToe::turn(char player) { char row = 0; char column = 0; std::size_t position = 0; bool applied = false; std::cout << "\n" << player << ": Please play. \n"; while (!applied) { std::cout << "Row(1,2,3,...): "; std::cin >> row; std::cout << player << ": Column(A,B,C,...): "; std::cin >> column; position = mDim * (std::toupper(column) - 'A') + (row - '1'); if (position < grid.size()) { applied = fill(player, position); if (!applied) std::cout << "Already Used. Try Again. \n"; } else { std::cout << "Invalid position. Try again.\n"; } } std::cout << "\n\n"; } class Game { public: Game(); void run(); private: TicTacToe ttt; std::array<char, 2> players{ { 'X', 'O' } }; int player = 0; std::function<void()> display = std::bind(&TicTacToe::draw, &ttt); std::function<void(char)> turn = std::bind(&TicTacToe::turn, &ttt, std::placeholders::_1); std::function<bool(char)> win = std::bind(&TicTacToe::check, &ttt, std::placeholders::_1); std::function<bool()> full = std::bind(&TicTacToe::isFull, &ttt); }; Game::Game() :ttt() { } void Game::run() { while (!win(players[player]) && !full()) { player ^= 1; display(); turn(players[player]); } display(); if (win) { std::cout << "\n" << players[player] << " is the Winner!\n"; } else { std::cout << "\nTie game!\n"; } } int main() { Game game; game.run(); } Answer: Here are some things that may allow you to improve your code: Eliminate unused variables In the check routine, the variables row1, row2 and row3 are initialized but unused. It would be best to eliminate them. Fix check code The code does not correctly evaluate when a player has won. For example, if I redirect this file to the game: 2 B 1 A 2 A 2 C 1 C 3 A 3 B The result is this: 1 2 3 A X O X B - O O C O X - O is the Winner! Clearly that's not right. 
I'd write it like this: bool TicTacToe::check(char player) const { // check for row or column wins for(unsigned i = 0; i < mDim; ++i){ bool rowwin = true; bool colwin = true; for (unsigned j=0; j < mDim; ++j) { rowwin &= grid[i*mDim+j] == player; colwin &= grid[j*mDim+i] == player; } if (colwin || rowwin) return true; } // check for diagonal wins bool diagwin = true; for (unsigned i=0; i < mDim; ++i) diagwin &= grid[i*mDim+i] == player; if (diagwin) return true; diagwin = true; for (unsigned i=0; i < mDim; ++i) diagwin &= grid[i*mDim+(mDim-i-1)] == player; return diagwin; } Fix Game::run The code currently includes the following code in Game::run(): if (win) { std::cout << "\n" << players[player] << " is the Winner!\n"; } However, that's an error because win is std::function<bool(char)> and this is not an invocation of that function. In fact, what it's doing is checking to see if the address of the function is nullptr. It never is, so this code will always claim there's a winner, even if the game was actually a tie. Either invoke the function again as win(players[player]) or, better, save the result from the previous invocation a few lines above and test that. Avoid unnecessary obfuscation The code for Game::run uses std::bind to essentially redefine four functions of the TicTacToe class. It would be much simpler to simply call those functions directly. For example, instead of writing !full() you could simply call !ttt.isFull(). Eliminate column The column function makes the code more complex rather than less and is created and destroyed many many times during the course of a regular game. (On a sample run with a tie game here, 90 column objects were created and destroyed.) 
For example, the code for TicTacToe::draw() contains this code: column<mDim>::argument_type input; for (auto& i : grid) { int x = j++; if (column<mDim>(0)(input = x )) std::cout << "\n " << A++; std::cout << ' ' << i << ' '; } It can be rewritten in much simpler form without column: for (auto& i : grid) { if (j++ % mDim == 0) std::cout << "\n " << A++; std::cout << ' ' << i << ' '; } Consider function names carefully The function named isFull() is well-named, but others are not. For instance, the fill function would probably make more sense named apply or applyMove. The word fill implies that the entire array is filled, which is not actually the purpose for this function. Use a constructor for TicTacToe Most of the code carefully uses mDim as a potentially changeable parameter denoting the dimension of the board. However, the grid member is statically initialized with a hand-created array of - characters. Better would be to simply define grid like this: std::array<char, mDim * mDim> grid; and then create a constructor: TicTacToe() { grid.fill('-'); } Avoid "magic numbers" A few places within the code use '-' to signify an empty square. However, this should be a static const member of the TicTacToe class instead. Simplify isFull() Rather than check each square in the grid each turn, it would probably be simpler just to maintain a number of empty squares. Consider revising the class responsibilities Right now, the Game class keeps track of the turns, does some of the I/O and contains a TicTacToe object. Better might be to make it responsible for all of the I/O and have TicTacToe incorporate only the logic of the game. This is a step toward what is called the Model-View-Controller pattern. The benefit is that if, at some future point, you wished to convert this game into, say, a graphical program with a touch-screen interface, you would only have to rework Game and not the TicTacToe class. 
Thoroughly validate user input Right now, the code will allow the user to input a position of (0,B) which it interprets as (3,A) and (0,D) as (3,C). Even stranger input, such as (/,B) is also accepted. It would be better to make sure that what the user inputs is actually valid. Most of the logic is already there -- it just needs some improvement.
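The parse-and-validate logic this last point asks for can be sketched compactly; here is a minimal Python version of the same index computation the C++ `turn` function uses (`mDim * column + row`). The function name and the use of Python are illustrative only, not part of the reviewed code:

```python
def parse_position(row_char, col_char, dim=3):
    """Map inputs like ('2', 'B') to a grid index, or None if out of range."""
    row = ord(row_char) - ord('1')          # '1'..'3' -> 0..2
    col = ord(col_char.upper()) - ord('A')  # 'A'..'C' / 'a'..'c' -> 0..2
    if 0 <= row < dim and 0 <= col < dim:
        return dim * col + row              # same layout as the C++ code
    return None
```

With the range check in place, the odd inputs mentioned above, such as (0,B) or (/,B), are rejected instead of silently wrapping to another square.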
{ "domain": "codereview.stackexchange", "id": 10773, "tags": "c++, game, c++11, tic-tac-toe" }
Is it correct to define the F-measure as the harmonic mean of specificity and sensitivity in such a way?
Question: It is common to define the F-measure as a function of precision and recall, as mentioned in [1]: $F_{\beta}=\frac{(1+\beta^2)PR}{\beta^2 P+R}$ However, in some other cases I came across, another definition is used [2] (without weights): $F = H(sensitivity, 1- specificity)$ where H is the harmonic mean. Reference: F - measure derivation (harmonic mean of precision and recall) https://link.springer.com/chapter/10.1007/978-3-540-68947-8_133. https://stackoverflow.com/a/52892413/2243842 Answer: The first is the general formula; the second is what you get for Beta=1. A beta value greater than 1 means we want our model to pay more attention to recall as compared to precision. On the other hand, a value of less than 1 puts more emphasis on precision. So you just want to generalise, and punish certain mistakes more. So to conclude: the mathematically correct approach is always to generalise and derive special cases; in that sense the first one is preferable, since setting beta to one gives you the 'standard' F-1-harmonic-mean formula. http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html
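The reduction to the plain harmonic mean at β = 1 is easy to verify numerically; a small sketch (function names are mine):

```python
def fbeta(precision, recall, beta=1.0):
    """General weighted F-measure; beta = 1 reduces to the harmonic mean."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def harmonic_mean(a, b):
    return 2 * a * b / (a + b) if a + b else 0.0
```

With recall larger than precision, increasing β above 1 raises the score and decreasing it below 1 lowers it, which is the weighting behaviour described above.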
{ "domain": "datascience.stackexchange", "id": 7059, "tags": "model-evaluations, metric, definitions" }
costmap does not subscribe /scan
Question: hi, i have a little problem, my costmap2d node does not subscribe to my /scan topic which publishes sensor_msgs/LaserScan Rosparam get /costmap_node/scan/: {clearing: true, data_type: LaserScan, expected_update_rate: 0.5, marking: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08, observation_persistence: 0.0, topic: /scan} In rxgraph i see that it does not subscribe to my /scan topic, but my gmapping is using /scan and all works fine. here is my costmap_common.yaml : costmap/transform_tolerance: 1.0 costmap/footprint: [[-0.3, -0.3], [-0.3, 0.3], [0.3, 0.3], [0.3, -0.3]] observation_sources: scan scan: {data_type: LaserScan, expected_update_rate: 0.5, topic: /scan, observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08} any ideas ? Originally posted by pkohout on ROS Answers with karma: 336 on 2012-07-19 Post score: 0 Answer: Try this configuration: observation_sources: laser_scan_sensor laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true, expected_update_rate: 0.5} If you are still having issues, please post the rest of your configuration file. Not a lot of info was given to figure out your issues... Also, a screenshot of RXGRAPH would be nice alongside the output of this command: rostopic info scan Originally posted by allenh1 with karma: 3055 on 2012-07-20 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by pkohout on 2012-07-22: I have tried your configuration; it doesn't work. 
A picture of rxgraph : http://i.imgur.com/DBeHE.png output of rostopic info scan: Type: sensor_msgs/LaserScan Publishers: /gazebo (http://peter-XPS-15Z:35525/) Subscribers: /hector_mapping (http://peter-XPS-15Z:50795/) Comment by pkohout on 2012-07-22: global_frame: /map robot_base_frame: base_link costmap/update_frequency: 0.2 costmap/static_map: true costmap/transform_tolerance: 1.0 costmap/footprint: [[-0.3, -0.3], [-0.3, 0.3], [0.3, 0.3], [0.3, -0.3]] observation_sources: laser_scan_sensor Comment by pkohout on 2012-07-22: observation_sources: laser_scan_sensor laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true, expected_update_rate: 0.5} the whole config Comment by pkohout on 2012-07-23: ok, found my fault... i was setting observation_sources: laser_scan_sensor wrong... now it works. but now i get an error, [ERROR] [13, 5.46]: Client [/costmap_node] wants topic /scan to have datatype/md5sum [sensor_msgs/PointCloud/d8.. but our version has sensor_msgs/LaserScan[...] drop connection Comment by allenh1 on 2012-07-23: Please post your configuration file again... I think you have a data type issue. Comment by pkohout on 2012-07-23: ok, was looking at my file once again and there was really a little issue, i forgot to delete my laser_scan_sensor entry. thanks a lot dude (: Comment by pkohout on 2012-07-23: global_frame: /map robot_base_frame: base_link costmap: update_frequency: 0.2 publish_frequency: 0.2 static_map: true transform_tolerance: 1.0 footprint: [[-0.3, -0.3], [-0.3, 0.3], [0.3, 0.3], [0.3, -0.3]] observation_sources: laser_scan_sensor Comment by pkohout on 2012-07-23: laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true, expected_update_rate: 0.5} take a look at it... maybe there is something wrong i don't see
{ "domain": "robotics.stackexchange", "id": 10287, "tags": "gazebo, navigation, laserscan, costmap-2d" }
How to write a simple Python Qt dialog on ROS Service?
Question: I want to write a ROS Service that receives a Question and then shows a Qt Pop-up Dialog with the Question and a yes or no Button. The service should return the Answer as a Boolean. I have it with wx but since it is deprecated in ROS, I want to use Qt now. This is how it looked with wx: rospy.init_node('dialog_server') s = rospy.Service('dialog', Dialog, self.handle_dialog) rospy.spin() def handle_dialog(self, req): ex = wx.App() dial = wx.MessageDialog(None, req.message, 'Question', wx.YES_NO | wx.ICON_QUESTION) ret = dial.ShowModal() if ret == wx.ID_YES: answer = True else: answer = False return DialogResponse(answer) I tried simply replacing wx with the equivalent qt (like wx.MessageDialog -> QtGui.QMessageBox.question) but it's not working because ros is another Thread. Error: "It is not safe to use pixmaps outside the GUI thread". I am not familiar with Qt, rqt python_qt_binding and plug-ins at all. What do I need to achieve this? Do I have to use Slots and Signals between GUI and ROS, or even write it as a plug-in? What should it look like? Originally posted by msieber on ROS Answers with karma: 181 on 2013-07-26 Post score: 0 Answer: ROS does NOT deprecate wx (which is technically impossible, because wx is externally hosted from ROS). ROS only deprecates a particular wx-based tool suite, rx. That said, you are free to continue using your own wx-based program. The advantage of doing so, especially in your case, is that you don't have to rewrite anything. If you decide to continue using wx, you can look at the source of rx tools. As for the particular error you mention, I can't tell why it happens without looking at your Qt code. And yes, if you choose to write in Qt, using its standard architecture of signals and slots is a good idea. If you extend it to an rqt plugin, you get more benefits. This tutorial has a link to sample source of rqt plugin. 
Originally posted by 130s with karma: 10937 on 2013-07-26 This answer was ACCEPTED on the original site Post score: 2
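The signal-slot advice boils down to one rule: only the GUI thread may touch widgets, so the ROS service callback must hand its question over to the GUI thread and block until the reply arrives. Here is a toy, GUI-free sketch of that hand-off using only the Python standard library; the "GUI loop" below is a stand-in for the Qt event loop (in real PyQt you would emit a signal connected to a slot, and the slot would call `QtGui.QMessageBox.question`):

```python
import queue
import threading

def gui_main_loop(requests):
    """Stand-in for the Qt event loop: only this thread may 'show dialogs'."""
    while True:
        question, reply = requests.get()
        if question is None:                  # shutdown sentinel
            break
        # A real slot would call QMessageBox.question(...) here; for the
        # sketch we fake the user's answer: yes iff the message is a question.
        reply.put(question.endswith("?"))

def ask_from_service_thread(requests, question):
    """Called from the ROS service callback thread; blocks for the answer."""
    reply = queue.Queue(maxsize=1)
    requests.put((question, reply))
    return reply.get()
```

The service handler never touches a widget; it only blocks on `reply.get()`, which is exactly what a `rospy.Service` callback is allowed to do while the GUI thread does the drawing.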
{ "domain": "robotics.stackexchange", "id": 15062, "tags": "ros" }
Is my Insertion Sort optimal?
Question: I'm learning sorting algorithms and wrote my own implementation of Insertion Sort. Is it optimal? Is there anything that can be done better? #include <iostream> #include <vector> #include <algorithm> #include <vector> void display(std::vector<int> arr) { for (int i = 0; i < arr.size(); ++i) { std::cout << arr[i] << ' '; } std::cout << std::endl; } std::vector<int>& insertionSort(std::vector<int>& arr) { int i = 1; while (i < arr.size()) { int j = i; while ((j > 0) && (arr[j-1] > arr[j])) { std::swap(arr[j-1],arr[j]); --j; } ++i; } return arr; } int main() { std::vector<int> arr{1,4,3,5,6,2}; display(insertionSort(arr)); return 0; } Answer: int i = 1; while (i < arr.size()) { // body of loop ++i; } That is the quintessential for loop. Write it as: const auto ASize = arr.size(); for (size_t i= 1; i < ASize; ++i) { // body of loop } I also saved the array size since it doesn't change... but your variable arr is not const and the compiler can't really deduce that it doesn't change so it will call the function every time. (OTOH, the inlined size() function might be simple enough that it's just as fast as a variable, but generally we don't assume this) Meanwhile, you are comparing a signed and unsigned value, which should be giving you a warning. Make i the same type as the thing it is compared against. frozenca shows the main loop written as two calls to existing STL algorithms, which is indeed the way it ought to be written today. This means understanding what those algorithms do. Back in the day, implementing sorts from scratch as part of learning how sorting works, I didn't have library calls that did so much of the work. To really show how it works "from scratch" you should write a binary search and insert code too. You step back one element at a time (not a binary search) and swap every pair as you go; it is more optimal to find the insertion position using a binary search and then move them all with one call. 
The former changes the cost of locating each insertion point from n/2 comparisons to log n comparisons, so the total number of comparisons drops from O(n²) to O(n log n), which is an important detail! (The element moves still leave the overall worst-case sort at O(n²).) The latter does not change the formal complexity but is much faster (by a constant factor). (Actually, I did use a built-in library call when implementing the binary tree sort; this was a machine language primitive used by the system's own compiler and tools! The instructor had me re-write to actually do all the primitive manipulations, to show that I understood what it was doing.) Since you are sorting in-place, you don't really care that you have a std::vector. You should accept two iterators in the same manner as the standard algorithms, and then the same code can sort std::vector, a plain array, or anything that has suitable iterators. So, don't use subscript index values i and j, but use two iterators instead, when doing the work. Using iterators, you can sort a subrange of a collection as well, which is very useful when you write the more advanced sorts: For example, when a quicksort gets down to a small enough span it calls the simple insertion sort instead.
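The binary-search variant described above, sketched in Python for brevity (`bisect_right` keeps the sort stable; the analogous C++ calls would be `std::upper_bound` plus `std::rotate`):

```python
import bisect

def binary_insertion_sort(a):
    """Insertion sort that locates each insertion point by binary search.

    Comparisons drop to O(n log n) overall; the block move per element keeps
    the worst case at O(n^2), but one slice move beats i separate swaps.
    """
    for i in range(1, len(a)):
        x = a[i]
        pos = bisect.bisect_right(a, x, 0, i)  # ~log2(i) comparisons
        a[pos + 1:i + 1] = a[pos:i]            # shift the tail up by one
        a[pos] = x
    return a
```

Sorting the question's own input `[1, 4, 3, 5, 6, 2]` this way performs the same insertions as the swap loop, just with far fewer comparisons.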
{ "domain": "codereview.stackexchange", "id": 41273, "tags": "c++, sorting, insertion-sort" }
Finding phase of frequency response as a function of frequency
Question: I am trying to calculate manually the phase of the frequency response of an LTI system as a function of frequency and plot the results. The original system is described as $y[n]=0.1(x[n]-x[n-1]+x[n-2])$, from which I was able to write the frequency response $H(e^{j\hat{\omega}})=0.1(1-e^{-j\hat{\omega}}+e^{-j2\hat{\omega}})$. I used MATLAB to plot the phase using the code below: tt = -pi : 1/200 : pi; H = 0.1*(1-exp(-j*tt)+exp(-j*tt*2)); plot(tt, angle(H)) And the output was as shown below: This does not match the result I got by calculating as shown below: $0.1(1-e^{-j\hat{\omega}}+e^{-j2\hat{\omega}})$ $0.1e^{-j\hat{\omega}}(e^{j\hat{\omega}}-1+e^{-j\hat{\omega}})$ $0.1e^{-j\hat{\omega}}(2\cos(\hat{\omega})-1)$ $0.1(\cos(\hat{\omega})-j\sin(\hat{\omega}))(2\cos(\hat{\omega})-1)$ $0.1(2\cos^2(\hat{\omega})-\cos(\hat{\omega}) - 2j\sin(\hat{\omega})\cos(\hat{\omega})+j\sin(\hat{\omega}))$ $0.1\left(\left(2\cos^2(\hat{\omega})-\cos(\hat{\omega})\right)+j\left(- 2\sin(\hat{\omega})\cos(\hat{\omega})+\sin(\hat{\omega})\right)\right)$ $\theta = \tan^{-1}\left(\dfrac{-2\sin(\hat{\omega})\cos(\hat{\omega})+\sin(\hat{\omega})}{2\cos^2(\hat{\omega})-\cos(\hat{\omega})}\right)$ $\theta = \tan^{-1}\left(\dfrac{-\sin(\hat{\omega})\left(2\cos(\hat{\omega})-1\right)}{\cos(\hat{\omega})\left(2\cos(\hat{\omega})-1\right)}\right)$ $\theta = \tan^{-1}\left(-\tan(\hat{\omega})\right)$ I am not sure if I am misunderstanding how to calculate the phase function, or if there is an error in my work somewhere. Is this the correct way to calculate the phase? If so, is there an error somewhere that I am missing? Answer: I assume that you used the arctangent function to evaluate the result of your calculations. That's the problem. You cannot compute the argument of a complex number by using the arctangent function, unless you're lucky and the argument is in the range $(-\pi/2,\pi/2)$. 
Note that when you divide the imaginary part and the real part before computing the arctangent, the individual signs get lost. That's why all arguments are mapped to the range $(-\pi/2,\pi/2)$. So you need to use the real and imaginary parts directly, without dividing them. This can be done with the atan2 function. It will give you the correct argument in the range $(-\pi,\pi]$. Concerning your calculation, you did too much work (which also didn't give you any extra information). You should have stopped after obtaining $$H(e^{j\omega})=0.1e^{-j\omega}\left(2\cos\omega-1\right)\tag{1}$$ Since the term $2\cos\omega-1$ is real-valued, you can see that the phase is basically linear, but that it jumps by $\pm\pi$ when that real-valued term changes sign, which happens at $\cos\omega=\frac12$, i.e., at $\omega=\pm\pi/3$. You can see this in your first plot. The jumps in the second plot are not real phase jumps; they just occur due to the use of the arctangent function. So from $(1)$, the result of the calculation could be written as $$\phi(\omega)=\begin{cases}-\omega,&-\pi/3<\omega<\pi/3\\-\omega+\pi,&\pi/3<\omega\le\pi\\-\omega-\pi,&-\pi<\omega<-\pi/3\end{cases}$$ where I chose the sign of the phase jump such that the phase $\phi(\omega)$ remains in the range $(-\pi,\pi]$, just like the return value of the angle function.
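The difference is easy to demonstrate numerically; here is a Python sketch mirroring the MATLAB snippet (ω = 1.2 is simply a convenient point in (π/3, π/2), where the true phase −ω + π lies outside (−π/2, π/2)):

```python
import cmath
import math

def H(w):
    """Frequency response 0.1*(1 - e^{-jw} + e^{-j2w})."""
    return 0.1 * (1 - cmath.exp(-1j * w) + cmath.exp(-2j * w))

w = 1.2                                 # pi/3 < w < pi/2
h = H(w)
wrong = math.atan(h.imag / h.real)      # sign info lost: forced into (-pi/2, pi/2)
right = math.atan2(h.imag, h.real)      # proper argument in (-pi, pi]
```

Here `right` equals π − ω, matching the piecewise formula above, while the plain arctangent returns −ω, off by exactly π because the negative real part was divided away.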
{ "domain": "dsp.stackexchange", "id": 12326, "tags": "phase, frequency-response" }
What reasons are there to choose Illumina if PacBio provides longer and better reads?
Question: PacBio provides longer read length than Illumina's short-length reads. Longer reads offer better opportunity for genome assembly, structural variant calling. It is not worse than short reads for calling SNP/indels, quantifying transcripts. Sounds like PacBio can do whatever Illumina platform can offer. Would there be any reason for going with Illumina if PacBio's long reads can do everything and more? Answer: There are so many reasons why one might want to prefer Illumina over PacBio (also note that it's a false dichotomy, at least Oxford Nanopore is a competitive sequencing platform): The first (IMHO and the most common reason) is still the cost of both sequencing and the instruments. Illumina can sequence a Gbp of data for \$7 - \$93. PacBio sequencing is, according to the same webpage, \$115 per Gbp, however at our sequencing center it's ~$200. Though ONT might already have a cheaper solution. Edit: I just found a Google sheet with prices that seems to be frequently updated; it seems that the ratio still holds, Illumina short reads being ~10x cheaper than PacBio. RNA-seq (i.e. analysis of gene expression) is not possible with PacBio due to preferential sequencing of smaller fragments; shorter genes would always be shown to be more expressed. To be clear, it's possible to sequence RNA with PacBio (the keyword is iso-seq), but the analysis of gene expression is problematic. It's way easier to extract fragmented DNA (concerns small non-model organisms; although recently a single mosquito was sequenced, so we can expect a further improvement) There are other sequencing techniques such as RAD-seq that allow genotyping with very little effort and cost; I have never seen anybody even consider using long reads for such genotyping Genome profiling (assembly-less genome evaluation) based on kmer spectra analysis is not possible with PacBio data due to higher error rates. 
Conflict of interest: I am a developer of one of the tools for genome profiling (smudgeplot). DNA in Formalin-Fixed Paraffin-Embedded samples is fragmented anyway (~300bp), therefore there is no point in sequencing them with more expensive long-read technology (contributed by @mRoten). I bet there are plenty of other applications I am not aware of.
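Putting the quoted per-Gbp prices into a quick back-of-the-envelope calculation for a 30x human genome (the coverage choice and the assumption that prices scale linearly with data volume are mine, not from the answer):

```python
# Quoted prices from the answer, USD per Gbp; they change quickly.
ILLUMINA_LOW, ILLUMINA_HIGH = 7.0, 93.0
PACBIO = 200.0                      # the answer's sequencing-center figure

GENOME_GBP = 3.1                    # human genome size in Gbp
COVERAGE = 30                       # a typical resequencing depth
data = GENOME_GBP * COVERAGE        # ~93 Gbp of raw data

illumina_cost = (data * ILLUMINA_LOW, data * ILLUMINA_HIGH)
pacbio_cost = data * PACBIO
ratio = (PACBIO / ILLUMINA_HIGH, PACBIO / ILLUMINA_LOW)
```

The PacBio-to-Illumina price ratio spans roughly 2x to 29x depending on which end of the Illumina range applies, so the "~10x cheaper" rule of thumb sits comfortably inside that range.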
{ "domain": "bioinformatics.stackexchange", "id": 2345, "tags": "phylogenetics, ngs, phylogeny, long-reads, pacbio" }
The age of the universe
Question: Many times I have read statements like, "the age of the universe is 14 billion years". For example, the Wikipedia page on the Big Bang. Now, my question is, which observer's time intervals are these? According to whom is it 14 billion years? Answer: An observer with zero comoving velocity (i.e. zero peculiar velocity). Such an observer can be defined at every point in space. They will all see the same Universe, and the Universe will look the same in all directions ("isotropic"). Note that here I'm talking about an "idealized" Universe described by the FLRW metric: $$\mathrm{d}s^2 = a^2(\tau)\left[\mathrm{d}\tau^2-\mathrm{d}\chi^2-f_K^2(\chi)(\mathrm{d}\theta^2 + \sin^2\theta\;\mathrm{d}\phi^2)\right]$$ where $a(\tau)$ is the "scale factor" and: $$f_K(\chi) = \sin\chi\;\mathrm{if}\;(K=+1)$$ $$f_K(\chi) = \chi\;\mathrm{if}\;(K=0)$$ $$f_K(\chi) = \sinh\chi\;\mathrm{if}\;(K=-1)$$ and $\tau$ is the conformal time: $$\tau(t)=\int_0^t \frac{cdt'}{a(t')}$$ The peculiar velocity is defined: $$v_\mathrm{pec} = a(t)\dot{\chi}(t)$$ so the condition of zero peculiar velocity can be expressed: $$\dot{\chi}(t) = 0\;\forall\; t$$ The "age of the Universe" of about $14\;\mathrm{Gyr}$ you frequently hear about is a good approximation for any observer whose peculiar velocity is non-relativistic at all times. In practice these are the only observers we're interested in, since peculiar velocities for any bulk object (like galaxies) tend to be non-relativistic. If you happened to be interested in the time experienced by a relativistic particle since the beginning of the Universe, it wouldn't be terribly hard to calculate.
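For such a comoving observer the age is the proper-time integral t₀ = ∫₀¹ da / (a·H(a)). A numeric sketch for a flat ΛCDM model follows; the parameter values are assumed Planck-like round numbers, not taken from the post:

```python
import math

H0 = 67.7                  # km/s/Mpc (assumed Planck-like value)
OMEGA_M, OMEGA_L = 0.31, 0.69
KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.156e16

def age_gyr(n=50_000):
    """t0 = integral_0^1 da / (a H(a)) by the midpoint rule, in Gyr."""
    h0 = H0 / KM_PER_MPC                  # H0 in 1/s
    da = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * da
        total += da / (a * h0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L))
    return total / SEC_PER_GYR
```

This evaluates to roughly 13.8 Gyr, which is the "age of the Universe" quoted for clocks carried by such comoving observers.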
{ "domain": "physics.stackexchange", "id": 66317, "tags": "cosmology, time, reference-frames, big-bang, observers" }
Do other supercluster like Laniakea also get pulled by the Great Attractor?
Question: I'm not sure whether there are any other observable superclusters like our local one, Laniakea, and if they do exist, do they also get pulled by the Great Attractor? I am a little confused about superclusters and the Great Attractor, how they exist relative to each other and how we can see/know this from our galaxy, specifically from the Earth. Answer: In short, not much gravitational pull at all, not relative to the distance and relative velocity. Everything in the observable universe is gravitationally attracted to everything else, at least within the appropriate horizon. There should be some objects far enough apart, taking into account cosmic inflation, that gravity can't travel that far over the age of the Universe. For example the farthest galaxy we can see in one direction and the farthest one we can see in the opposite direction, those two galaxies can't see each other and they likely experience no gravitational pull from each other, but that's not really relevant to your question. The Great Attractor is a comparatively small but very dense part of Laniakea, with an estimated mass thousands of times the mass of the Milky Way. The Laniakea supercluster has a mass of about 100,000 Milky Ways, so any gravitational attraction one supercluster experiences would primarily be from other entire superclusters, and the Great Attractor would be a small percentage of that. Superclusters experience gravity from other superclusters but the Universe is very uniform, so that gravitational pull is largely balanced out because it's coming from all sides. This touches on one of the modern puzzles in physics, called the Horizon problem. This question goes back to the big bang and why the young universe was very close to, but not precisely, uniform. It might be a fun exercise to calculate the gravitational pull neighboring superclusters have on each other. I imagine it's pretty negligible compared to the expansion of space between them.
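Taking up that suggested exercise with toy numbers (all assumptions: ~10⁵ Milky Ways of ~1.5×10¹² solar masses each, two such superclusters 200 million light-years apart):

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_PER_LY = 9.461e15      # metres per light-year

mass = 1e5 * 1.5e12 * M_SUN        # a Laniakea-scale supercluster (assumed)
d = 200e6 * M_PER_LY               # assumed separation: 200 Mly

accel = G * mass / d**2            # mutual gravitational acceleration
H0 = 67.7 / 3.0857e19              # Hubble constant in 1/s (assumed value)
recession = H0 * d                 # Hubble-flow recession speed, m/s
```

The acceleration comes out near 10⁻¹¹ m/s² while the Hubble-flow recession speed at that distance is thousands of km/s; and as the answer notes, comparable pulls from superclusters in other directions largely cancel anyway.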
{ "domain": "astronomy.stackexchange", "id": 3037, "tags": "redshift, galaxy-cluster, local-group, apparent-motion" }
"Team formation" challenge
Question: I'm trying to solve this problem but am getting a timeout. Can someone help me to improve? using namespace std; typedef multimap<int,int> MMap; int main() { /* Enter your code here. Read input from STDIN. Print output to STDOUT */ int t; cin>>t; for(int tcase=0; tcase<t; tcase++) { // contains input for each test case vector<int> in_vec; // array of different teams vector<vector<int> > vec_teams; int size; cin>> size; int val; int min = size; for(int i=0;i<size;++i) { cin>>val; in_vec.push_back(val); } if(size==0) { cout<<"0"<<endl; continue; } sort(in_vec.begin(),in_vec.end()); vec_teams.push_back(vector<int>()); // maintains ordered, key value pair of size and its position(index) in vec_teams MMap mmap; mmap.clear(); vec_teams[0].push_back(in_vec[0]); mmap.insert(pair<int,int>(vec_teams[0].size(),0)); for(int id=1; id<in_vec.size(); ++id) { bool added = false; MMap::iterator it = mmap.begin(); //inserts the skill level to correct team // if not possible to add to any // then new team is created after the loop for(int i=0; i<vec_teams.size(); ++i,++it) { int mpos = (*it).second; if (vec_teams[mpos][vec_teams[mpos].size() -1 ] + 1 == in_vec[id]) { pair <MMap::iterator, MMap::iterator> pa= mmap.equal_range(vec_teams[mpos].size()); MMap::iterator mmit; for (mmit=pa.first; (*mmit).second!=mpos; ++mmit); mmap.erase(mmit); vec_teams[mpos].push_back(in_vec[id]); mmap.insert(pair<int,int>(vec_teams[mpos].size(),mpos)); added = true; break; } } // Since not able to add to existing teams // new team is created and added to it if(!added) { vec_teams.push_back(vector<int>()); vec_teams[vec_teams.size() -1].push_back(in_vec[id]); mmap.insert(pair<int,int>(1,vec_teams.size() -1)); } } int ans = (*(mmap.begin())).second; cout<<vec_teams[ans].size()<<endl; } return 0; } Answer: Separation of concerns At the moment, all your code is in a convoluted main function handling : getting the input from stdin solving the problem printing the answer in stdout. 
It would be better to separate the concerns. On top of making things clearer and easier to maintain, it would also make things easier to test. Writing tests Because you'll probably want to improve the algorithm, it can be a safe option to write tests to ensure you are aware if you break your code. Great thing is that you have test cases provided with the problem, you just need to write them as code, run them and be happy. At this stage, my code looks like : #include <iostream> #include <map> #include <vector> #include <algorithm> #include <assert.h> using namespace std; typedef multimap<int,int> MMap; vector<int> get_input_from_std() { // contains input for each test case vector<int> in_vec; int size; cin>> size; int val; for(int i=0;i<size;++i) { cin>>val; in_vec.push_back(val); } return in_vec; } int get_min_team_size(vector<int> in_vec) { if(in_vec.size()==0) { return 0; } sort(in_vec.begin(),in_vec.end()); vector<vector<int> > vec_teams; vec_teams.push_back(vector<int>()); // maintains ordered, key value pair of size and its position(index) in vec_teams MMap mmap; mmap.clear(); vec_teams[0].push_back(in_vec[0]); mmap.insert(pair<int,int>(vec_teams[0].size(),0)); for(int id=1; id<in_vec.size(); ++id) { bool added = false; MMap::iterator it = mmap.begin(); //inserts the skill level to correct team // if not possible to add to any // then new team is created after the loop for(int i=0; i<vec_teams.size(); ++i,++it) { int mpos = (*it).second; if (vec_teams[mpos][vec_teams[mpos].size() -1 ] + 1 == in_vec[id]) { pair <MMap::iterator, MMap::iterator> pa= mmap.equal_range(vec_teams[mpos].size()); MMap::iterator mmit; for (mmit=pa.first; (*mmit).second!=mpos; ++mmit); mmap.erase(mmit); vec_teams[mpos].push_back(in_vec[id]); mmap.insert(pair<int,int>(vec_teams[mpos].size(),mpos)); added = true; break; } } // Since not able to add to existing teams // new team is created and added to it if(!added) { vec_teams.push_back(vector<int>()); vec_teams[vec_teams.size() 
-1].push_back(in_vec[id]); mmap.insert(pair<int,int>(1,vec_teams.size() -1)); } } int ans = (*(mmap.begin())).second; return vec_teams[ans].size(); } /* Using http://stackoverflow.com/questions/8906545/how-to-initialize-a-vector-in-c */ void unit_tests() { { int vv[]{4, 5, 2, 3, -4, -3, -5}; vector<int> v(begin(vv), end(vv)); assert(get_min_team_size(v) == 3); } { int vv[]{-4}; vector<int> v(begin(vv), end(vv)); assert(get_min_team_size(v) == 1); } { int vv[]{3, 2, 3, 1}; vector<int> v(begin(vv), end(vv)); assert(get_min_team_size(v) == 1); } { int vv[]{1, -2, -3, -4, 2, 0, -1}; vector<int> v(begin(vv), end(vv)); assert(get_min_team_size(v) == 7); } } void stdin_tests() { /* Enter your code here. Read input from STDIN. Print output to STDOUT */ int t; cin>>t; for(int tcase=0; tcase<t; tcase++) { vector<int> in_vec = get_input_from_std(); cout<< get_min_team_size(in_vec) <<endl; } } int main() { unit_tests(); return 0; } Algorithms and performances In the case of programming problems like the one you are trying to solve, the real issue is usually the complexity of your algorithm, how it behaves on large inputs. In order to see how your code behaves on larges inputs, you have two main strategies (and you should try apply both): pen, paper and mathematics tests : great thing is that now, it is quite easy to define tests of any size you like. You just need to time your program. 
For instance, you could add the following tests to your test suite : int nb_elts = 100000; { // Single big team vector<int> v; for (int i = 0; i < nb_elts; i++) v.push_back(i); assert(get_min_team_size(v) == nb_elts); } { // Double big team vector<int> v; for (int i = 0; i < nb_elts; i++) { v.push_back(i); v.push_back(i); } assert(get_min_team_size(v) == nb_elts); } { // Multiple different one-player teams vector<int> v; for (int i = 0; i < nb_elts; i++) v.push_back(2 * i); assert(get_min_team_size(v) == 1); } { // Multiple identical one-player teams vector<int> v; for (int i = 0; i < nb_elts; i++) v.push_back(0); assert(get_min_team_size(v) == 1); } { // Overlapping three-player teams vector<int> v; for (int i = 0; i < nb_elts; i++) { v.push_back(2*i); v.push_back(2*i+1); v.push_back(2*i+2); } cout << get_min_team_size(v) << endl; assert(get_min_team_size(v) == 3); } A quick look at your code tells me it is probably somewhere around O(n^2), which grows "very" fast. I have designed an algorithm which seems to be faster on the various inputs I have tried but I might have forgotten edge-cases. In any case, the principle is the following : we sort the individuals by levels just like you did; we count how many individuals we have at each level; when considering a new group of individuals : if we have more people than the number of teams, we create as many new teams, keeping track of the level of the smallest level; if we have more teams than the number of people, we end the formation of as many teams, starting with the longest ones (the ones with the smallest starting levels). I hope I managed to make this explanation clear and that this actually works. 
And here is the corresponding code : int get_min_team_size(vector<int> levels) { if (levels.size()==0) return 0; sort(levels.begin(),levels.end()); int sol = levels.size(); // Solution will not be bigger that the size of the input queue<int> lowest_lvl; // Storing the (sorted) levels of the less skilled member of the teams being formed. int level = levels[0] - 2; // Level of the individuals being processed (the minus 2 is a trick not to have to handle the beginning in a special way). int nb_in_level = 0; // Number of individual being processed levels.push_back(levels.back() + 2); // trick not to have to handle the end in a special way for(int id=0; id<levels.size(); ++id) { int ind_lvl = levels[id]; if (ind_lvl == level) { // Same level : just counting the new individual nb_in_level++; } else { bool lvl_gap = ind_lvl != level + 1; // Different level : processing what needs to be done for previous level : // 1) If we haven't started enough teams, let's start them from that level while (lowest_lvl.size() < nb_in_level) lowest_lvl.push(level); // 2) If we have too many teams started, let's end the one starting with the longest ones // (they correspond to the first teams as we have them sorted by the level of the smallest individual) // and check their length. If we have a gap of level, we want to end them all. if (lvl_gap) nb_in_level = 0; while (lowest_lvl.size() > nb_in_level) { // Theoritically, lowest_lvl is sorted so only the last element is relevant but we have to unpop them anyway sol = min(sol, 1 + level - lowest_lvl.front()); lowest_lvl.pop(); } // 3) If there is a gap of level, we add the new beginning of team. if (lvl_gap) { lowest_lvl.push(ind_lvl); } nb_in_level = 1; level = ind_lvl; } } assert(nb_in_level == 1); assert(lowest_lvl.size() == 1); cout << sol << endl; return sol; }
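As an independent cross-check on small inputs, the same greedy idea can be written compactly in Python: extend the shortest existing run ending at level−1, otherwise start a new run (this "extend the shortest run" choice is the same greedy assumption the C++ code relies on for optimality):

```python
import heapq
from collections import defaultdict

def min_team_size(levels):
    """Size of the smallest team when teams are runs of consecutive levels."""
    if not levels:
        return 0
    ends = defaultdict(list)                 # level -> min-heap of run lengths
    for x in sorted(levels):
        if ends[x - 1]:
            n = heapq.heappop(ends[x - 1])   # extend the shortest such run
        else:
            n = 0                            # start a brand-new run
        heapq.heappush(ends[x], n + 1)
    return min(min(h) for h in ends.values() if h)
```

Running it on the unit tests above ({4,5,2,3,-4,-3,-5} → 3, {3,2,3,1} → 1, and so on) reproduces the expected answers.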
{ "domain": "codereview.stackexchange", "id": 12036, "tags": "c++, programming-challenge, time-limit-exceeded" }
Different ways to orient the spin in the same direction
Question: I know that all electrons have quantised spin. But how can one orient all the spins of a given bunch of electrons in the same direction? I know one way is to pass the electrons through a uniform magnetic field (which is, I guess, where the 'magnetic quantum number' name of the spin comes from). But why does this method do so? And is there any other way one can achieve this? Answer: A (uniform) magnetic field by itself does not orient spins in the same direction. What really happens is that in the magnetic field the spins of different orientations have different energy: $$E_\pm = \pm \frac{\hbar\omega_g}{2} $$ The spins undergo equilibration towards thermodynamic equilibrium - e.g., by emitting photons of frequency $\omega_g$, accompanied by a spin-flip. So we end up with spins occupying each state with probability (up to normalization): $$ P_\pm = e^{\mp \frac{\hbar\omega_g}{2k_B T}} $$ Thus one ends up with most of the spins oriented in one direction. Another widespread method to polarize spins is by using a strongly inhomogeneous magnetic field, where differently polarized spins are pulled in different directions. Details of this method are usually described in connection with the Stern-Gerlach experiment (the simplest treatment is presented in the Feynman Lectures), but this method is also important for creating population inversion in the ammonia maser (the first quantum generator) and hydrogen masers (the current frequency standard).
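To get a feel for the numbers, here is a small sketch (not part of the original answer; the field strength and temperatures are illustrative) of the equilibrium polarization, which for a two-level spin system reduces to $\tanh(\hbar\omega_g/2k_BT)$:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def spin_polarization(omega_g, T):
    """Equilibrium polarization (P- - P+)/(P- + P+) = tanh(hbar*omega_g/(2*k_B*T))."""
    return math.tanh(hbar * omega_g / (2 * k_B * T))

# Electron in a 1 T field: omega_g = g * mu_B * B / hbar ~ 1.76e11 rad/s
omega = 1.76e11
room = spin_polarization(omega, 300.0)  # tiny polarization at room temperature
cold = spin_polarization(omega, 0.1)    # nearly complete at 100 mK
```

This is why strong polarization in thermal equilibrium requires either very strong fields or very low temperatures.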
{ "domain": "physics.stackexchange", "id": 76974, "tags": "quantum-mechanics, particle-physics, quantum-spin, quantum-electrodynamics" }
Why do we express the radiation pattern of an antenna array in terms of the electric field and not the magnetic field?
Question: Is this because in the far zone we are talking about TEM waves and the relation between the magnitudes of the fields is E=ηH, therefore the patterns of the fields should be identical in terms of shape and differ only by a factor η? Answer: The radiation pattern of an antenna is the radiation intensity $S = E \times H$ usually plotted in spherical angle coordinates (azimuth, elevation), be it far or near field. It is always a relative measurement, normalized to some reference value, because the radiated intensity is proportional to the actual (actual = incident $-$ reflected $-$ absorbed) transmitted power at the antenna input. Because in a linear medium $E$ and $H$ are proportional, $S \propto E^2 \propto H^2$, and a radiation pattern represents all three: $S$, $E^2$ and $H^2$.
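A tiny numerical check of the last point: the $\eta$ factor cancels under normalization, so the normalized $E^2$ and $H^2$ patterns coincide (the sine-shaped pattern below is just an illustrative example, not a specific antenna):

```python
import math

eta = 377.0  # free-space wave impedance, ohms

# A hypothetical dipole-like field pattern sampled at a few angles (radians)
angles = [0.2, 0.5, 1.0, 1.57]
E = [math.sin(t) for t in angles]
H = [e / eta for e in E]  # TEM far field: E = eta * H

def normalized(pattern):
    """Scale a pattern so its peak is 1, as radiation patterns are plotted."""
    peak = max(pattern)
    return [p / peak for p in pattern]

E2_norm = normalized([e ** 2 for e in E])
H2_norm = normalized([h ** 2 for h in H])
# eta cancels in the normalization, so the two shapes are identical
```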
{ "domain": "physics.stackexchange", "id": 76188, "tags": "electromagnetism, antennas" }
Vue.js code to update selection on screen
Question: I have the following code snippet: <tr> <td @click="changeTeam('WHU')">WHU</td> <td>20 Oct</td> <td @click="changeTeam('BHA')">BHA</td> </tr> <tr> <td @click="changeTeam('CHE')">CHE</td> <td>21 Oct</td> <td>WAT</td> </tr> and the following VueJS method: methods: { changeTeam(selection) { this.team = selection; } } So as you can see, I'm calling this function and passing in the value in the table cell (i.e. <td> tag) to change a selection. This works for me, but is there a way I can capture the contents of the table cell, by means of some other method? Something a little more reusable or would this just be introducing extra overhead? The table itself is the result of a dynamically generated page, so I'm not overly bothered about the apparent duplication. Answer: One technique would be to apply fundamentals of event delegation by adding a method to handle all clicks. For instance, an event listener could be added to the root element (or one of its child nodes, like a <table>) mounted: function() { this.$el.addEventListener('click', this.clickHandler); } Then have the method accept the click event argument: clickHandler(clickEvent){ That method could check if the target of the event was a table cell by checking the tagName attribute (for TD). To restrict to only certain table cells, add a class attribute - e.g. team. If that class is found in the target element's classList, set the team property to the innerHTML of the target. clickHandler(clickEvent){ const target = clickEvent.target; if (target.tagName == 'TD' && target.classList.contains('team')) { this.team = target.innerHTML; } } See a demonstration in the sample below. Notice how there is now a clean separation of the event handling logic (JS) and the markup for display (HTML). 
new Vue({ el: '#app', data: { message: 'Hello Vue.js!', team: '' }, mounted: function() { this.$el.addEventListener('click', this.clickHandler); }, methods: { clickHandler(clickEvent) { const target = clickEvent.target; if (target.tagName == 'TD' && target.classList.contains('team')) { this.team = target.innerHTML; } } } }) <script src="https://unpkg.com/vue"></script> <div id="app"> <p>Selected Team: {{ team }}</p> <table> <tr> <td class="team">WHU</td> <td>20 Oct</td> <td class="team">BHA</td> </tr> <tr> <td class="team">CHE</td> <td>21 Oct</td> <td>WAT</td> </tr> </table> </div>
{ "domain": "codereview.stackexchange", "id": 28060, "tags": "javascript, event-handling, vue.js" }
Deflection angle of an alpha particle when colliding with a stationary nucleus
Question: Is it possible to calculate the deflection angle of an alpha particle after colliding with a stationary nucleus with atomic number Z, without actually knowing Z, or for that matter without knowing the identity or anything about it? And how do particle accelerators measure the angle of deflection? Answer: If you measure alpha particles scattering off heavy nuclei you can just place particle detectors around the scattering location. Maybe a small cylinder of lead, for example. Or have your detector on a track. Then you can move the detector around from 0 to maybe 180 degrees. Then use a particle accelerator to bombard the sample, or use a collimated radioactive sample that emits alpha particles. This is known as Rutherford scattering and the scattering cross section can be calculated accurately based on Coulomb scattering. More info can be found at these sites. http://hyperphysics.phy-astr.gsu.edu/hbase/rutsca.html and https://en.wikipedia.org/wiki/Stefan–Boltzmann_law Rutherford scattering was performed by scattering alpha particles off gold nuclei. This is essentially scattering off a Coulomb potential unless the alpha particle's energy is enough to actually reach the nucleus. As far as alpha particle scattering off light nuclei goes, that is a whole different field of study, and you can google something like 'alpha particle scattering off light nuclei'. You'll find many good sources, including some recent works.
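As a rough illustration of what such a detector sweep would record, here is a sketch of the angular shape of the Rutherford cross section, $d\sigma/d\Omega \propto 1/\sin^4(\theta/2)$ (normalized to its value at 90°; the constants are dropped since only the shape matters here):

```python
import math

def rutherford_relative(theta_deg):
    """Angular shape of the Rutherford cross section, dsigma/dOmega ~ 1/sin^4(theta/2),
    expressed relative to its value at 90 degrees."""
    theta = math.radians(theta_deg)
    ref = math.radians(90.0)
    return (math.sin(ref / 2) ** 4) / (math.sin(theta / 2) ** 4)

forward = rutherford_relative(10.0)  # forward angles dominate by orders of magnitude
back = rutherford_relative(170.0)    # backscatter is rare but nonzero
```

The steep forward peak is why the occasional large-angle event in the gold-foil experiment was so surprising.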
{ "domain": "physics.stackexchange", "id": 75925, "tags": "electromagnetism, kinematics, experimental-physics, radiation, interactions" }
Small Single Board Computers suitable for ROS
Question: Hi, I've been wondering which SBCs ROS users recommend. I'm particularly interested in small designs (less than or around 10x10cm) lightweight (less than 200g, preferably around 100g) high processing power, approx. 2GHz (single or multiple cores), 2GB RAM, or more 2 or more USBs 5V or 12V DC power supply preferably x386 ROS Diamondback friendly, i.e. supports Ubuntu 10.04 or newer. I've used BeagleBoard with ROS, but unfortunately I found its processing power too little for my needs. I'm also aware of PandaBoard, but, after playing around with BB-xM, I think x386 architectures will be better suited for fast prototyping. I'm also aware of AscTec's Atom Board, but I find it too expensive (AFAIK it costs 1200Eur + VAT in Germany). The option I'm considering now is IEI's PM-PV-D5251-R10, but the manufacturer is only assuring its compatibility up to Ubuntu 8.10 (though there is a good chance newer versions will work out-of-the-box, I suppose) - it costs around 400$. Am I missing something worth knowing? Originally posted by tom on ROS Answers with karma: 1079 on 2011-06-20 Post score: 2 Answer: I know of: Advantech Cogent Toradex Taskit And a little bit larger Mini-itx Originally posted by KoenBuys with karma: 2314 on 2011-06-20 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5898, "tags": "ros, hardware" }
If the temperature of 2 materials is the same, does that mean the molecules are vibrating at the same speed?
Question: Pretty much what the title says. My base question is this. Assuming I take a piece of steel and a piece of PVC plastic, and I measure both their temperatures and find they are the same. If I then take a look at the vibration speeds of the individual molecules, would they be the same as well? Here's a rough example: I measure both the steel and PVC and find them both at 100F, and then I measure the vibrations of a molecule in the steel and find it to be moving at 10 miles an hour. Would the PVC molecules also be moving at 10 miles an hour? I'm sure I'm not using the correct units of measure for the vibration, but I didn't know what the correct unit of measure is for something like that. Hopefully it gets the point across. Answer: No, because the atoms in steel and plastic have different masses. Your example is a bit more complicated than it need be, because steel (well, iron) is an element while plastic is a compound. This complicates things because molecules can have internal motions that contribute to the energy. A better comparison might be between lead and lithium. These are both elements, so you just need to consider the motions of the Pb and Li atoms. For a given kinetic energy a lead atom will be moving more slowly than a lithium atom because it's heavier. You could think of it this way: if you touch the piece of lead to the piece of lithium, the atoms will be in contact so they'll swap energy by bashing into each other. If a lead atom hits a lithium atom, the Li atom will recoil a lot faster than the Pb atom.
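The lead/lithium comparison can be made quantitative with the equipartition relation $\tfrac12 m \langle v^2\rangle = \tfrac32 k_B T$, so the rms speed scales as $1/\sqrt{m}$; a quick sketch using standard atomic masses:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
u = 1.66053906660e-27    # atomic mass unit, kg

def rms_speed(mass_kg, T):
    """RMS speed from (1/2) m <v^2> = (3/2) k_B T."""
    return math.sqrt(3 * k_B * T / mass_kg)

T = 300.0                        # same temperature for both samples
v_li = rms_speed(6.94 * u, T)    # lithium, ~6.94 u
v_pb = rms_speed(207.2 * u, T)   # lead, ~207.2 u
ratio = v_li / v_pb              # = sqrt(207.2 / 6.94), roughly 5.5
```

At the same temperature the light lithium atoms jiggle about five and a half times faster than the heavy lead atoms.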
{ "domain": "physics.stackexchange", "id": 3652, "tags": "temperature, molecules" }
Testing mapping code
Question: Obviously, you have to test mapping code somehow, even (or especially) if you use AutoMapper. Is there any way to make it less verbose? [Test] public void Map_Always_SetsSimpleProperties() { var auctionPlace = fixture.Create<string>(); var submissionCloseDateTime = fixture.Create<DateTime>(); var quotationForm = fixture.Create<string>(); var quotationExaminationDateTime = fixture.Create<DateTime>(); var envelopeOpeningTime = fixture.Create<DateTime>(); var envelopeOpeningPlace = fixture.Create<string>(); var auctionDateTime = fixture.Create<DateTime>(); var applExamPeriodDateTime = fixture.Create<DateTime>(); var considerationSecondPartDate = fixture.Create<DateTime>(); var doc = fixture.Create<Notification223>(); doc.AuctionPlace = auctionPlace; doc.SubmissionCloseDateTime = submissionCloseDateTime; doc.QuotationForm = quotationForm; doc.QuotationExaminationTime = quotationExaminationDateTime; doc.EnvelopeOpeningTime = envelopeOpeningTime; doc.EnvelopeOpeningPlace = envelopeOpeningPlace; doc.AuctionTime = auctionDateTime; doc.ApplExamPeriodTime = applExamPeriodDateTime; doc.ConsiderationSecondPartDate = considerationSecondPartDate; var sut = CreateSut(); var actual = sut.Map(doc); Assert.That(actual.AuctionPlace, Is.EqualTo(auctionPlace)); Assert.That(actual.SubmissionCloseDateTime, Is.EqualTo(submissionCloseDateTime)); Assert.That(actual.QuotationForm, Is.EqualTo(quotationForm)); Assert.That(actual.QuotationExaminationDateTime, Is.EqualTo(quotationExaminationDateTime)); Assert.That(actual.EnvelopeOpeningTime, Is.EqualTo(envelopeOpeningTime)); Assert.That(actual.EnvelopeOpeningPlace, Is.EqualTo(envelopeOpeningPlace)); Assert.That(actual.AuctionDateTime, Is.EqualTo(auctionDateTime)); Assert.That(actual.ApplExamPeriodDateTime, Is.EqualTo(applExamPeriodDateTime)); Assert.That(actual.ConsiderationSecondPartDate, Is.EqualTo(considerationSecondPartDate)); } Answer: Unfortunately, I don't think so. 
There is a question about this topic on the website of Dozer (which is a similar library for Java) which mentions a trick: Should I write unit tests for data mapping logic that I use Dozer to perform? [...] Regardless of whether or not you use Dozer, unit testing data mapping logic is tedious and a necessary evil, but there is a trick that may help. If you have an assembler that supports mapping 2 objects bi-directionally, in your unit test you can do something similar to the following example. This also assumes you have done a good job of implementing the equals() method for your data objects. The idea is that if you map a source object to a destination object and then back again, the original src object should equal the object returned from the last mapping if fields were mapped correctly. [...] I think this trick won't find bugs when, for example, doc.ApplExamPeriodTime is mapped to actual.ConsiderationSecondPartDate and doc.ConsiderationSecondPartDate is mapped to actual.ApplExamPeriodTime. I don't know whether these kinds of bugs are possible with AutoMapper or not. Furthermore, if you need only one-directional mapping, I think the simplest solution is the one that's already in your question, so I'd go with that. Adding code for the reverse mapping to the production code would be a test smell (Test Logic in Production).
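Language aside, the round-trip trick from the Dozer FAQ can be sketched in a few lines; the field names below are illustrative stand-ins, not the question's actual AutoMapper configuration:

```python
# Hypothetical bidirectional mapping: rename fields forward, then backward,
# and assert the round trip is the identity.
FORWARD = {
    "QuotationExaminationTime": "QuotationExaminationDateTime",
    "AuctionTime": "AuctionDateTime",
    "AuctionPlace": "AuctionPlace",
}
BACKWARD = {dst: src for src, dst in FORWARD.items()}

def map_with(mapping, obj):
    """Rename each field of obj according to mapping."""
    return {mapping[key]: value for key, value in obj.items()}

source = {"QuotationExaminationTime": "t1", "AuctionTime": "t2", "AuctionPlace": "p"}
round_tripped = map_with(BACKWARD, map_with(FORWARD, source))
assert round_tripped == source
```

Note the blind spot described above: if FORWARD swapped two fields of the same type and BACKWARD is its inverse, the round trip would still be the identity, so this test would still pass.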
{ "domain": "codereview.stackexchange", "id": 18555, "tags": "c#, unit-testing, automapper" }
What's a segmented copy number profile
Question: I am studying copy-number variation. I am reading C. H. Mermel, S. E. Schumacher, B. Hill, M. L. Meyerson, R. Beroukhim, and G. Getz, “GISTIC2.0 facilitates sensitive and confident localization of the targets of focal somatic copy-number alteration in human cancers,” Genome Biol., vol. 12, no. 4, p. R41, 2011. Here, it is written that Segmented copy number profiles represent the summed outcome of all the SCNAs [somatic copy number alterations] that occurred during cancer development. Accurate modeling of the background rate of copy-number alteration requires analysis of the individual SCNAs. However, because SCNAs may overlap, it is impossible to directly infer the underlying events from the final segmented copy-number profile alone. It is not clear to me how a segmented copy number profile represents the summed outcome of all the SCNAs. Is it because different alterations can be present in the same sample, or can alter the copy-number in different moments, or both? And, do they overlap spatially, temporally, or both? Answer: Yes, one sample can contain different alterations. For each patient there is typically one tumour specimen that is removed. That specimen may be divided up into several samples (e.g. one for DNA sequencing, one for RNA sequencing, one for methylation microarray, and one for copy-number variation microarray), however each sample contains thousands of individual cells and two adjacent cells may have different CNVs (depending on their clonal ancestry, etc.). In other words, a tumour is a heterogeneous collection of cells. For some tumour types there can even be healthy cells mixed in. The term in the literature is clonal evolution, there is a nice image in this article: Tumor Heterogeneity
{ "domain": "biology.stackexchange", "id": 5768, "tags": "cancer, copy-number-variation" }
Is it possible to make a satellite orbit Earth, the same way Earth orbits Sun? ( Same orbital path pattern)
Question: Is it possible to make a satellite orbit Earth, the same way Earth orbits Sun? (Same orbital path pattern) Earth's Orbit Answer: Orbits are described by their eccentricity, period, and semi-major axis. If you mean can all three be the same then no, it is not possible. The sun and the earth have different standard gravitational parameters. Since $$\mu = 4\pi^2 \frac{a^3}{T^2}$$ if two orbits have the same semi-major axis $a$ and period $T$ then they must have the same standard gravitational parameter $\mu$. Since the earth and the sun do not have the same $\mu$ they cannot have both the same $a$ and the same $T$. If you mean only can the eccentricity be the same, then yes. The eccentricity does not enter in to the above formula. So you can have identical eccentricities. You can also have either an identical period or an identical semi-major axis.
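As a sanity check of $\mu = 4\pi^2 a^3/T^2$, plugging in Earth's own orbit recovers the Sun's standard gravitational parameter:

```python
import math

a = 1.495978707e11     # Earth's semi-major axis, m (1 AU)
T = 365.256 * 86400.0  # sidereal year, s

# mu = 4 pi^2 a^3 / T^2 should come out near the Sun's mu ~ 1.327e20 m^3/s^2
mu = 4 * math.pi ** 2 * a ** 3 / T ** 2
```

A satellite orbiting Earth (with Earth's much smaller $\mu \approx 3.986 \times 10^{14}$ m³/s²) plugged into the same formula necessarily gives a different $a$, $T$ pair.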
{ "domain": "physics.stackexchange", "id": 90208, "tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, satellites" }
Velocity of separation
Question: Defining velocity of separation/approach, my textbook states that it is the component of velocity of one body with respect to another along the line joining them. Why is it the component of velocity **along the line joining them**? Is there a mathematical proof or an intuitive way to understand the phrase "along the line joining them"? Answer: It's all in the wording: velocity of separation is the velocity at which two objects may be approaching or separating from each other. So, if objects $A$ and $B$ have velocities $\vec v_A$ and $\vec v_B$, say towards a third object $C$, then $\vec v_A$ and $\vec v_B$ themselves tell us nothing about the rate at which $A$ and $B$ approach each other - they tell us how they are approaching this third object they point to. As such, if the objects have position vectors $\vec r_A$ and $\vec r_B$ respectively, and we want to know how fast they are approaching each other, then we are interested in the quantity, $$\vec v_A \cdot \hat r_{AB} = \frac{\vec v_A \cdot (\vec r_B - \vec r_A)}{|\vec r_B - \vec r_A|}$$ which is the component of the velocity along the line that joins $A$ and $B$, and mathematically is known as the scalar projection. To expand on the answer for the OP, recall that for a vector $\vec V$, we can find the $x$-component for example by taking the dot product with the unit vector $\hat x$, i.e. $$\vec V = V_x \hat x + V_y \hat y, \quad \vec V \cdot \hat x = V_x.$$ We can see this since $\hat x = (1,0)^T$ and that $\hat x \cdot \hat x = 1$ and $\hat x \cdot \hat y = 0$. So, if we want instead the component along the line between these two objects, we have to take the dot product with the unit vector in the direction along the line that joins them.
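The scalar projection is easy to compute directly; a small sketch with made-up numbers:

```python
import math

def separation_speed(r_a, v_a, r_b):
    """Scalar projection of A's velocity onto the line from A to B."""
    d = [rb - ra for ra, rb in zip(r_a, r_b)]    # r_B - r_A
    norm = math.sqrt(sum(c * c for c in d))
    return sum(vc * dc for vc, dc in zip(v_a, d)) / norm

# A at the origin moving with velocity (3, 4); B sits on the x-axis:
speed_along_ab = separation_speed([0.0, 0.0], [3.0, 4.0], [10.0, 0.0])
# Only the x-component (3.0) counts; the (0, 4) part is perpendicular to AB
# and changes the direction of the line, not the distance along it.
```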
{ "domain": "physics.stackexchange", "id": 41295, "tags": "homework-and-exercises, kinematics, relative-motion" }
What is the terminal velocity of a sheep?
Question: Inspired by this question on Gaming.SE Using actual in-real-life physics, what would the terminal velocity of a sheep actually be? I would assume it would be around 50m/s, but I might be wrong. Bonus question: What would the terminal velocity of a chicken be? Animal-friendly answers preferred. Answer: That's a very hard question to answer theoretically because the aerodynamic drag, and therefore the terminal velocity, is highly dependent on the shape of the falling object. Assuming we're in the turbulent regime, the aerodynamic drag is given by the equation: $$ F_d = \tfrac{1}{2}\rho v^2 C_d A $$ where $\rho$ is the density of the air, $v$ is the velocity, $A$ is the cross sectional area in the direction of motion and $C_d$ is the drag coefficient. The terminal velocity is just the velocity at which the drag force is equal to the weight $mg$, and a quick rearrangement gives: $$ v_t = \sqrt{\frac{2mg}{\rho C_d A}} $$ In the UK a popular breed is the Texel, which weighs around 80kg. I can't find any info on their size, but from my memories of sheep I'd guess they're about a metre long and the diameter of the body is about half a metre. The legs are pretty spindly, so the cross sectional area is going to be around $A = \pi/16$ m$^2$ if the sheep is falling head first or around 0.5 m$^2$ if it's falling sideways. Putting in the relevant figures we get: $$ v_t \approx \frac{80}{\sqrt{C_d}} $$ The problem is what value to use for $C_d$. A sphere has $C_d = 0.47$, which would give $v \approx 117$ m/sec. But a cylinder (with sharp rims) has $C_d = 0.82$, which would give $v \approx 88$ m/sec. I'd guess the truth is somewhere in between. Both these figures are higher than the terminal velocity of a human falling, which varies from around 60 to 90 m/sec depending on your orientation. But then sheep are pretty dense. If you've ever tried to pick one up you were probably surprised by how heavy they seem for their size. 
Bear in mind the Texel sheep I used as an example weighs 10kg more than I do and I'm 5'10". It's quite reasonable that the terminal velocity of a sheep would be higher than that of a human.
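Plugging the answer's numbers into $v_t = \sqrt{2mg/(\rho C_d A)}$ directly (with $\rho \approx 1.2$ kg/m³ assumed for sea-level air, which the answer does not state explicitly) reproduces the quoted figures to within rounding:

```python
import math

rho = 1.2   # sea-level air density, kg/m^3 (assumed)
g = 9.81    # m/s^2

def terminal_velocity(m, C_d, A):
    """Speed at which drag (1/2 rho v^2 C_d A) balances the weight m g."""
    return math.sqrt(2 * m * g / (rho * C_d * A))

m = 80.0                     # Texel sheep, kg
A_head_first = math.pi / 16  # cross section of a 0.5 m diameter body, ~0.196 m^2
v_sphere = terminal_velocity(m, 0.47, A_head_first)    # answer rounds this to ~117 m/s
v_cylinder = terminal_velocity(m, 0.82, A_head_first)  # answer rounds this to ~88 m/s
```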
{ "domain": "physics.stackexchange", "id": 15366, "tags": "newtonian-gravity, velocity, drag" }
Why does the sun make me feel warm?
Question: For a while I thought that the reason I felt warmth from the sun was because my skin was being hit by photons, but then I realized that photons also hit me when I take an X-ray, but I don't feel any heat from that. So, why is it that you feel warmth from the sun, or any hot object for that matter? Answer: X-rays do warm you up. It's just that the X-rays are more dangerous per photon (they can do major damage to cells and DNA, and are known to cause tumors and cancer), so they limit the amount of time you're exposed to the bare minimum needed for a clear image. The total energy from standing in the sunlight for several seconds is much higher than the energy from all the X-rays you're likely to take in your life, which is why you feel it but don't feel the X-ray. Addendum: Total Energy from an X-ray. I found this page with radiation doses from common radiology treatments. The maximum radiation dose listed is 10 millisieverts (mSv). 1 Sv is defined as one joule (J) of energy per kilogram (kg) body mass. Assuming an average person is around 70 kg, 10 mSv corresponds to 0.7 J of energy absorbed. Note that the actual energy is probably a bit lower, because the sievert definitions also account for the biological effects (so a dose to an internal organ is weighted higher per actual joule than a dose to a finger or something). But this should be a decent approximation. Total Energy from Sunlight. The sun outputs about 1300 watts per square meter (W/m²) in space near the earth, which gets reduced to around 650 W/m² in the middle of the day after going through the atmosphere. 1 watt is defined as 1 joule per second (J/s). So that's about 650 joules of energy per second per square meter. According to this site, the surface area of a human body is between 1 and 2 m². 
At least half of that is on the side of your body not facing the sun, and you'll get less radiation if the sun is shining on the end of your body (like your head/shoulders) instead of the front or back of your body. If we assume the sun is right overhead, and you're laying on your back facing up at the sun, then you've got 0.5 to 1 m² in the sunlight (it's a little less because some of that surface area is parallel to the sunlight, but we're in the ballpark). Ok, combining all of the above, sunlight shines down with 650 J/(s*m²) * [0.5 to 1] m² = 325 to 650 J/s. One second of direct sunlight is therefore 465 to 930 times more energy than one X-ray image, hence why it feels so much hotter.
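The arithmetic above can be checked in a few lines (all inputs are the estimates already made in the answer):

```python
body_mass = 70.0    # kg, the assumed average person
dose_sv = 0.010     # 10 mSv, the maximum dose quoted for common radiology
xray_energy = dose_sv * body_mass  # sieverts are J/kg, so ~0.7 J in total

solar_flux = 650.0           # W/m^2 at ground level around midday
area_lo, area_hi = 0.5, 1.0  # m^2 of body surface facing the sun

power_lo = solar_flux * area_lo    # 325 J absorbed each second
power_hi = solar_flux * area_hi    # 650 J absorbed each second
ratio_lo = power_lo / xray_energy  # each second of sun ~ hundreds of X-rays
ratio_hi = power_hi / xray_energy
```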
{ "domain": "physics.stackexchange", "id": 47014, "tags": "energy, kinematics, photons" }
Take a desired string, iterate through objects to see if it exists in a given field and append a number until a unique string is found
Question: I wanted to in the most generic way possible write a method in C# to achieve the following. Take in a string, a set of objects, and a function to access the field of a given object Look at all of the strings from the fields of these objects to match against provided string If provided string is not unique in a case insensitive manner, append _x to the end, where x is an incrementing integer until a unique string is found With that in mind, I created this. private string GetUniqueStringForModel<T>(string originalString, IEnumerable<T> enumerableObjects, Func<T, string> stringFieldFunction) where T : class { var uniqueString = originalString; var duplicateCount = 1; while (enumerableObjects.Select(stringFieldFunction).ToList().Any(currentString => string.Equals(currentString, uniqueString, StringComparison.InvariantCultureIgnoreCase))) { uniqueString = originalString + "_" + duplicateCount++; } return uniqueString; } I would rather not have put the ToList() in, but when working with Entity Framework it was complaining about doing the string comparison in LINQ (presumably because it couldn't compile it to SQL). Any thoughts or ideas for improvement? Answer: There's certainly room for improvement. Move that Select(stringFieldFunction).ToList() out of the while loop. Iterating an IEnumerable might be quite expensive, and there's no need to repeat that work when you find a duplicate. Select only those strings that have originalString as a prefix. That will reduce the number of items you need to check against when adjusting the suffix. Instead of making a list, turn those results into a (hash)set, for faster lookup. Regarding EF complaining: apparently it doesn't support those StringComparison overloads, so it'll have to fetch all rows from the database and run your Any predicate in memory, which may be slower than expected. Maybe an alternative approach like using ToUpperInvariant can be translated to SQL? 
Applying the above changes will give you the following: private string GetUniqueValue<T>(string value, IEnumerable<T> items, Func<T, string> getValue) { var possibleDuplicates = items .Select(getValue) .Where(val => val.StartsWith(value, StringComparison.InvariantCultureIgnoreCase)) .ToHashSet(StringComparer.InvariantCultureIgnoreCase); // case-insensitive set, to preserve the original comparison semantics var result = value; var suffix = 1; while (possibleDuplicates.Contains(result)) { result = value + "_" + suffix; suffix += 1; } return result; } Addendum As t3chb0t and ErikE already pointed out, if you want the benefits of Linq-to-SQL, then you need to use IQueryable<T>. Not only that, you also need to make sure that you're using the Queryable Linq extension methods, not the Enumerable variants. The difference is that Enumerable methods have Func<> parameters, while the Queryable variants have Expression<Func<>> parameters. An Expression is a data structure that represents a piece of code, which makes translation to SQL possible. Because stringFieldFunction is a Func, not an Expression<Func>, the Enumerable.Select variant is used. That cannot be translated to SQL, so all data has to be loaded from the database before the Select (and any subsequent operation) can be performed on it. To recap: use IQueryable<T>, make sure you're using Queryable Linq methods, and only use supported methods within your expressions.
{ "domain": "codereview.stackexchange", "id": 32102, "tags": "c#, strings, linq, generics, iterator" }
What does the concept of an "infinite universe" actually mean?
Question: When physicists talk about the universe being infinite, or wondering whether it is or not, what do these two options actually mean? I am not interested whether the universe is infinite or not, I am interested in what are the two options actually looking like. Does an "infinite universe" mean infinite space, but when you go far enough there isn't any matter anymore, just more of infinite empty space in that direction? Or does an "infinite universe" mean infinite matter, where it doesn't matter how far you go, you will find infinite stars all along the way? In short, is the "infiniteness" meant to apply to space or to matter (which then would also include space of course)? Answer: Generally when physicists talk about the universe being finite, they are talking about the existence of an upper bound $R$ on the distance between any two points in space. Such an upper bound could arise in several ways - perhaps the universe has an edge - a boundary which cannot be crossed - or perhaps the universe has the topology of a 3-sphere, and so if one travels sufficiently far in any direction they would eventually return to their starting point. A spatially infinite universe is one which does not have this feature - given any real number $M$, there exist two points in the universe which are separated by a distance which is greater than $M$. A finite universe of course can presumably only host a finite amount of matter. An infinite universe could in principle host either a finite or infinite amount of matter. Mainstream cosmology generally assumes the cosmological principle, which states that on sufficiently large scales the distribution of matter in the universe is homogeneous; in such cases, an infinite universe would have an infinite amount of matter in it, but of course this principle may not be accurate.
{ "domain": "physics.stackexchange", "id": 77051, "tags": "cosmology, spacetime, terminology, universe, definition" }
Gunicorn workers timeout
Question: I'm using Flask, where I load some pre-trained machine learning models once. I'm also using Gunicorn, usually with 2 or 4 workers to handle parallel requests. Every request contains some texts that I want to analyze. I'll explain my problem with an example: My Flask server with Gunicorn and 2 workers is up and loads my models once for every worker. Then I send two parallel requests. The first will run analysis on the 1st worker with 500 texts and the second on the 2nd worker with 2000 texts. The problem is that the second request will stop the analysis after some period of time and reload the models for this worker. Does Gunicorn have a default worker timeout and how can it be solved? Answer: You can use the --timeout flag to increase the worker timeout. Run gunicorn --help for more information about the available flags
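For example, the default 30-second worker timeout could be raised like this (`app:app` is a placeholder for your own module:variable pair):

```shell
# Start 2 workers and allow each request up to 600 s before the worker
# is killed and restarted; the same settings can also go in a
# gunicorn.conf.py file as "workers = 2" and "timeout = 600".
gunicorn --workers 2 --timeout 600 app:app
```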
{ "domain": "datascience.stackexchange", "id": 4015, "tags": "machine-learning, parallel, processing" }
Why is a material such as plastic air-tight?
Question: I'm not too familiar with fluid mechanics, but I always wondered how, let's say, plastic containers are able to contain a fluid, such as gas or water. How does plastic prevent the molecules of the fluid from crossing it? Is it because of a high density relation at the boundary? Does it have anything to do with energy levels? Answer: When a molecule from a fluid approaches molecules from a solid boundary, repulsive interaction forces between them increase. As molecules from the solid are prevented from moving by their interactions with other molecules from the solid, it is the molecule from the fluid that keeps all of the momentum and bounces back. In doing so, it exerts a force on the solid which averages out as the fluid's pressure when you consider all the molecules from it hitting a patch of surface. If the material is porous, the fluid molecule may enter a cavity and bounce within it. If it is porous enough, there are some odds that, from cavity to cavity, it reaches the other surface of the solid material. Of course, the size of the molecule from the gas is important for this. This is called permeation.
{ "domain": "physics.stackexchange", "id": 13754, "tags": "fluid-dynamics, water, air, ideal-gas" }
What is the formula to calculate the velocity of an airstream based on pressure differential?
Question: I have an air tank at 100 psi above ambient pressure (roughly 14 psi). If a small spigot is opened, what will the velocity of the core of the air stream be? I realize friction will slow the outside of the stream, but was curious, as large storms such as hurricanes do a lot with a pressure differential of around 3 psi. What mathematical formula would describe the velocity of the stream? Under what conditions? Answer: The flow depends on the ratio of static pressures and the nozzle design. In your case, without a properly designed nozzle the flow will be choked and reach the speed of sound $c_s^{(air)} \approx 340 \frac{m}{s}$. The following formulas can be derived assuming one-dimensional isentropic flow of an ideal gas. The main criterion for the flow velocity (generally given by the Mach number $Ma := \frac{U}{c_s}$) is the ratio of ambient pressure $p_{ambient}$ to static pressure inside the container $p_0$ as you can see in this graph. As a reference pressure ratio we take the ratio of critical pressure $p_{crit}$ (which corresponds to the static pressure in the sonic case, $Ma = 1$) to $p_0$, as the Mach number characterises the communication up- and downstream.* $$\frac{p_{crit}}{p_0} = \left( \frac{2}{\gamma + 1}\right)^\frac{\gamma}{\gamma - 1} \approx 0.528$$ For a ratio of ambient pressure to static pressure inside the container $\frac{p_{ambient}}{p_0}$ higher than the ratio of critical to static pressure, the flow stays subsonic: it will accelerate towards the smallest cross-section of the nozzle and then decelerate (top line in the aforementioned graph). If the ambient pressure is lower than the critical pressure but the nozzle is not designed accordingly, the flow will accelerate towards the smallest cross-section, reach the speed of sound $c_s$ but not be able to accelerate any further, even though the pressure ratio would be large enough. This type of flow is said to be choked (second line from top in the graph). 
The reason for this is that information cannot propagate upstream in sonic and supersonic flows. If the ambient pressure is lower than the critical pressure and the nozzle is a properly designed convergent-divergent nozzle, the flow can accelerate further and reach supersonic velocities. For more information see. *) For the limiting case of $Ma=0$, which corresponds to an incompressible fluid, any information travels up- and downstream equally fast. With increasing Mach number the information travels up- and downstream increasingly non-uniformly, and for sonic $Ma = 1$ and supersonic flows $Ma > 1$ there will be no communication upstream anymore. This is also the reason for the difference in sub- and supersonic jet design: A subsonic airliner will be streamlined as the incoming fluid senses the geometry of the airplane in advance, while for a supersonic jet traveling at supersonic speeds this is not the case and it will have to "slice" the fluid.
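A quick numerical sketch of the criterion above for the asker's tank. The gas properties and tank temperature are my assumptions (γ = 1.4, R = 287 J/(kg·K), T0 = 293 K), not values given in the answer:

```python
import math

# Isentropic, ideal-gas sketch of the choked-flow criterion above.
# Assumed values (not stated in the answer): gamma = 1.4, R = 287 J/(kg K),
# stagnation temperature T0 = 293 K in the tank.
gamma = 1.4
R = 287.0          # J/(kg K), specific gas constant of air
T0 = 293.0         # K

# Critical pressure ratio p_crit / p0
p_crit_ratio = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))

# Tank at 100 psi gauge, ambient ~14.7 psi absolute
p0 = 114.7         # psia, absolute tank pressure
p_ambient = 14.7   # psia

choked = (p_ambient / p0) < p_crit_ratio

# For choked flow the throat reaches Ma = 1; throat temperature and the
# local speed of sound follow from the isentropic relations
T_throat = T0 * 2.0 / (gamma + 1.0)
c_throat = math.sqrt(gamma * R * T_throat)

print(f"p_crit/p0 = {p_crit_ratio:.3f}, choked = {choked}, "
      f"throat speed ~ {c_throat:.0f} m/s")
```

For the stated pressures the ratio 14.7/114.7 ≈ 0.128 is far below 0.528, so the flow is indeed choked and the core exits at roughly the speed of sound.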
{ "domain": "physics.stackexchange", "id": 63824, "tags": "fluid-dynamics, pressure, flow" }
In a mixture of NaOH and Na2CO3, why doesn't phenolphthalein change its color until Na2CO3 is half neutralised?
Question: Let us consider a mixture of $\ce{NaOH}$ and $\ce{Na2CO3}$. Based on what I have understood, double titration works in the below manner: Phenolphthalein turns pink when added to the mixture. As $\ce{HCl}$ is added, the pH of the mixture keeps reducing and phenolphthalein becomes colorless when the mixture reaches a pH less than $8.3$, i.e. at the stage when $\ce{NaOH}$ is completely neutralised and $\ce{Na2CO3}$ is half neutralized. Then, we add methyl orange which makes the solution yellow. As we add more $\ce{HCl}$, the pH reduces and hence, methyl orange turns red when the neutralization of $\ce{Na2CO3}$ is complete. My questions are: a) When phenolphthalein is added, I have read elsewhere that $\ce{NaOH}$ reacts with $\ce{HCl}$ first. In that case, phenolphthalein should make the solution colorless as $\ce{NaOH}$ neutralizes with $\ce{HCl}$ to form $\ce{NaCl}$ which has a pH of $7$ (not between $8.3$ and $10.5$). Why doesn't phenolphthalein change its color until $\ce{Na2CO3}$ is also half neutralized? Is it because the reactions take place simultaneously? b) When methyl orange is added, $\ce{NaHCO3}$ is neutralized to $\ce{NaCl}$. But the pH range of methyl orange is 3-4.5, while the solution obtained is neutral. How does methyl orange change its color, then? I think it is because of the $\ce{H2CO3}$ ($\ce{H2O + CO2}$) that is formed along with $\ce{NaCl}$, which makes the solution slightly acidic. Is that a correct explanation? Answer: a) That's right, $\ce{NaOH}$ reacts first. But you are wrong in thinking that $\ce{NaCl}$ has a pH of 7. In a way, $\ce{NaCl}$ has "no pH at all". It just sits there and does nothing. It is the other component(s) of the solution that determine its pH. At that point, the other component is $\ce{Na2CO3}$, which is alkaline due to hydrolysis. As the titration progresses, it turns to $\ce{NaHCO3}$, hence the change in pH.
b) Well, $\ce{H2CO3}$ would indeed make the solution slightly acidic, but not enough so to trigger the color change in methyl orange. It is $\ce{HCl}$ that does the job. After the neutralization is completed, one tiny extra drop is enough.
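A quick numerical sanity check of point (a), using textbook pKa values for carbonic acid (pKa1 ≈ 6.35, pKa2 ≈ 10.33, my assumed values, not given in the answer): the pH of an amphiprotic $\ce{NaHCO3}$ solution is approximately the mean of the two pKa's, which lands almost exactly at the lower end of the phenolphthalein transition range.

```python
# Sketch: why phenolphthalein turns colorless right when the Na2CO3 has been
# converted to NaHCO3. Assumed textbook values for H2CO3: pKa1, pKa2.
pKa1, pKa2 = 6.35, 10.33

# pH of an amphiprotic salt (NaHCO3) is approximately (pKa1 + pKa2) / 2
pH_bicarbonate = (pKa1 + pKa2) / 2

print(f"pH of NaHCO3 solution ~ {pH_bicarbonate:.2f}")
# ~8.34: right at the edge of the phenolphthalein range (about 8.3-10),
# so the indicator fades exactly when all carbonate has become bicarbonate.
```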
{ "domain": "chemistry.stackexchange", "id": 8235, "tags": "stoichiometry, titration" }
Does enthalpy equal heat when PV work is done?
Question: The derivation that enthalpy equals heat at constant pressure goes like: $$\begin{align} H &= U + P_{\mathrm{int}}V \\ \Delta H &= \Delta U + \Delta (P_{\mathrm{int}}V) \end{align}$$ If $P_{\mathrm{int}}$ is constant, then $$\Delta H = \Delta U + P_{\mathrm{int}}\Delta V$$ We also have $$q = \Delta U - w$$ If external pressure is constant, $w = -P_{\mathrm{ext}}\Delta V$. Then $$q = \Delta U + P_{\mathrm{ext}}\Delta V$$ So in order for $\Delta H$ to equal $q$, we need $P_{\mathrm{int}} = P_{\mathrm{ext}} = \mathrm{constant}$. This is true when the system is open to a constant external pressure and no PV work is done. But when there's PV work done (and the temperature is still constant), $P_{\mathrm{int}}$ clearly changes, so does $\Delta H$ still equal $q$? Answer: The external pressure applied by the surroundings on the system at the interface between the system and the surroundings is always equal to the force per unit area exerted by the system on the surroundings at the interface, even if the process is irreversible. It's just that, in the irreversible case, the pressure within the system is not uniform (so there is no single unique value for the pressure) and the force per unit area exerted by the system on the surroundings at the interface also includes viscous stresses. So, if the external pressure applied by the surroundings is constant, the work done by the system on the surroundings is equal to $p_{ext}\Delta V$. If the constant external pressure applied by the surroundings is also equal to the initial pressure p exhibited by the system in its initial thermodynamic equilibrium state, then the enthalpy change is $\Delta H=\Delta U + p_{ext}\Delta V=\Delta U + p\Delta V$. But, by the first law, this is just equal to the heat added to the system. 
So, irrespective of what the thermal conditions are (say, isothermal or adiabatic) or whether chemical reaction is occurring, if the external pressure is held constant at the initial pressure of the system in its initial thermodynamic equilibrium state, the heat transferred is equal to $\Delta H$. If the process is such that the initial and final temperatures of the system are the same, we call the corresponding $Q = \Delta H$ the heat of reaction (at least for ideal gases for which the heat of mixing is zero).
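A small ideal-gas check of the claim that $q = \Delta H$ when the external pressure is constant and equal to the system's initial pressure. The monatomic heat capacities and the particular $n$, $\Delta T$ are my illustrative assumptions:

```python
# Constant-pressure heating of n moles of a monatomic ideal gas:
# q = n*Cp*dT should equal dH = dU + p*dV = n*Cv*dT + n*R*dT.
R = 8.314          # J/(mol K)
Cv = 1.5 * R       # monatomic ideal gas (assumed for illustration)
Cp = Cv + R

n, dT = 2.0, 50.0  # arbitrary illustrative values

q = n * Cp * dT    # heat added at constant pressure
dU = n * Cv * dT   # internal-energy change
p_dV = n * R * dT  # p*dV work term, since p*dV = n*R*dT at constant p
dH = dU + p_dV

print(q, dH)       # the two numbers coincide
```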
{ "domain": "chemistry.stackexchange", "id": 4440, "tags": "thermodynamics, enthalpy, pressure" }
Two luminous points disappearing in the sky
Question: I saw a few days ago two luminous points in the sky, which were quite close (about 1-2 degree of separation between them) and not apparently moving (or to slow to be noticed with the naked eye). The apparent luminosity of these points was close to the one of Jupiter (a bit less maybe, but far more luminous than Saturn). The luminosities of these points increased a bit and after a few seconds they faded. I guess these were not satellites entering in the shadow of the Earth as they were not moving. And I think that the probability for this to be two meteors falling exactly towards me is quite low... Any suggestion this might be ? Answer: Edit 1: @JohnHoltz correctly pointed out two errors - geosats are on the equator, not ecliptic. My brain was thinking "equator" but my fingers had other plans. I read about geosat flares quite some time ago and thought it was solstices when they could be seen but it's around equinoxes, not solstices. I've changed my mind and think planes are the most likely answer as to what was seen. However, I do stand by the statement that geosat flares end due to the Earth's shadow. They are unlike other sat flares in this regard according to CalSky which I've always trusted above all other sources for info like this. The link is in the first sentence of my original post below. End of edit. Edit 2: (by @uhoh) answers to How bright are geostationary satellites due to reflected sunlight? explain that flares from GEO can be bright enough to be visible, and when you look at how big those "mirrors" are it's not hard to imagine! The heat radiators are dark in thermal IR but mirror-like in the visible to prevent absorbing heat from sunlight. The solar panels are larger but have fairly low reflectivity. I'm 90% sure that you saw geostationary satellites catching the Sun just right causing a flare. It's the right time of the year for them to be seen, too. 
Jupiter is sitting on the ecliptic at the moment which is where geosats are parked in clusters (the reason there were two dots visible at 1-2 degrees in separation). Their brightness increased because they were moving towards opposition (reflecting more sunlight), then faded because of the Earth's shadow exactly as you thought. Geosats move slowly against the background stars since their job is to remain over the same point on the Earth; this is how people can get satellite TV with satellite dishes on their rooftops pointing at the same spot all the time. It'd be hard to notice their motion without keeping a steady eye on them for 15 mins or so. Your description matches perfectly with a geosat flare and I'm a bit jealous because I haven't caught one yet! Congratulations, you witnessed an event that isn't often visible to the naked eye! They show up in my astropics all the time, but I have yet to see one with only my eyes. A comment on the video in the link: Geosats stay in the same position over the Earth but this video makes them appear to be moving left as the stars set behind them. I was puzzled by this until I realized the ground objects visible in the bottom of the video are slowly moving, so the time-lapse video was made with a camera slowly panning to the right. The video shows the clusters well but with the wide angle the individual geosats in each cluster aren't visible. I think the ones you saw were in the same grouping.
{ "domain": "astronomy.stackexchange", "id": 4649, "tags": "amateur-observing, identify-this-object, naked-eye" }
How to prove ww^r is context free using pumping lemma for context free languages
Question: I am having a hard time proving it. What I know is that we cannot prove a language is regular by using the pumping lemma, because even if the "pumped string" is in the language, the language could still be non-regular. But since we already know that ww^r is a context-free language (we can design a PDA for it), we should be able to divide a string from ww^r into 5 parts and pump it, and the result should still be in the language. But I fail to do so. I have done the following: assume w = 010, then ww^r = 010010. Then, u = 0, v = 10, x = 0, y = 1, z = 0. And then I pumped it once and I got: 010100110, which obviously isn't in the language produced by ww^r, which again is contradictory because we can design a PDA for it. Where am I going wrong? How exactly do I use the pumping lemma for CFLs? Answer: The pumping lemma for context-free languages is used to prove that a given language is not a context-free language. There exists a PDA accepting the language $L = \{w w^r: w \in \{0,1\}^*\}$, and so $L$ is a context-free language. The pumping lemma states that all regular languages and all CFL's satisfy a certain property: there exists a pumping length $p$ such that for every string of length at least $p$ in the language, there exists a partition of the string (into 3 parts or 5 parts) such that the pumped string belongs to the language. This property claims only that there exists a $p$ and (for every string of length at least $p$) a partition satisfying some conditions - if you choose an arbitrary value of $p$ and an arbitrary partition, the pumped strings might not be in the language.
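A small script (my own illustration, not part of the original answer) that shows both points at once: the asker's arbitrary partition of 010010 pumps out of the language, while a different partition of the same string pumps and stays in it, which is all the lemma promises.

```python
def in_L(s):
    # membership in L = { w w^r : w in {0,1}* }: exactly the even-length palindromes
    return len(s) % 2 == 0 and s == s[::-1]

def pump(u, v, x, y, z, i):
    # the pumped string u v^i x y^i z from a 5-part partition
    return u + v * i + x + y * i + z

s = "010010"
assert in_L(s)

# The asker's arbitrary partition: pumping once leaves the language...
u, v, x, y, z = "0", "10", "0", "1", "0"
assert not in_L(pump(u, v, x, y, z, 2))   # "010100110" is not in L

# ...but a suitable partition exists: take v = first char, y = last char.
# Since s[0] == s[-1], pumping keeps an even-length palindrome for every i.
u, v, x, y, z = "", "0", "1001", "0", ""
for i in range(5):
    assert in_L(pump(u, v, x, y, z, i))
print("the lemma holds for the right partition")
```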
{ "domain": "cs.stackexchange", "id": 19988, "tags": "context-free, pushdown-automata, pumping-lemma, context-sensitive, theory" }
Understanding gauge fields as connections on a principal G-bundle
Question: I have read that gauge theories may be formulated in the language of differential geometry through the use of Ehresmann connections on a principal $G$-bundle, where $G$ is the Lie group of the theory. While I understand the various equivalent definitions of the connection as either a $\mathfrak{g}$-valued 1-form or a decomposition of the tangent bundle into vertical and horizontal subspaces, I have yet to understand how the connection may be identified with a gauge field or how its curvature may be identified with the field strength. On an abstract level, I see how there could be a correspondence between these ideas. It makes sense that a gauge transformation could be thought of as a type of parallel transport and hence could be described via a connection, but I have not seen an explicit mathematical formalism that describes this correspondence. Could someone perhaps provide me with some insight? Answer: A gauge field is a connection on a principal bundle. I'll try to show here roughly how this formulation is related to the common construction in physics of the gauge fields in the process of making locally invariant a theory that only has global invariance. Let $G$ be a Lie group, $G\times V\rightarrow V:(g,v)\mapsto gv$ a representation of $G$, $M$ the space-time manifold, $E\rightarrow M$ a principal $G$-bundle and $F\overset{\pi}{\rightarrow} M$ an associated bundle with fiber isomorphic to $V$. The space of gauge transformations $\Gamma(E)$ is equipped with a canonical action over the space of fields $\Gamma(F)$ as $(g,\phi)\mapsto g\phi$ where $(g\phi)(x)=g(x)\phi(x)$. Observe that for a constant $g$, derivatives have the nice property that $\partial_\mu(g\phi)=g\partial_\mu\phi$. This isn't true for a general gauge transformation, so we want to construct an object similar to $\partial_\mu$ that satisfies the previous equation for any $g$.
By adding a connection $A$ of the principal bundle $E$ (and using the Lie algebra representation $(T,v)\mapsto Tv$ corresponding to the representation of $G$ over $V$) we can define the covariant derivative $D_\mu$ to be $D_\mu\phi(x)=\partial_\mu\phi(x)+A_\mu\phi(x)$. Now we have our object $D_\mu$ for which the equation $D_\mu(g\phi)=gD_\mu\phi$ holds for any $g$. The action functional is a map $S:\Gamma(F)\rightarrow\mathbb{R}$. It is said to have global invariance under $G$ if $S[\phi]=S[g\phi]$ for every constant gauge transformation $g$. If the condition $S[\phi]=S[g\phi]$ holds for any $g\in\Gamma(E)$ the action is said to be locally (gauge) invariant. Suppose that we want to construct locally invariant actions from globally invariant ones. Usually, global invariance will depend on the fact that $\partial_\mu(g\phi)=g\partial_\mu\phi$, so an action with global invariance will not have in general local invariance. However, by replacing every $\partial_\mu$ by a $D_\mu$ in the expression for the action one obtains a new one which extends invariance to every gauge transformation because now $D_\mu(g\phi)=gD_\mu\phi$ is satisfied for all $g$.
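The key identity $D_\mu(g\phi)=gD_\mu\phi$ can be verified symbolically in the simplest setting, a U(1) gauge field in one dimension. This is a toy check I'm adding, not from the original answer; here $A = i\,a(x)$ is the anti-Hermitian connection and the transformed connection is $A' = A - (\partial g)g^{-1}$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
alpha = sp.Function('alpha', real=True)(x)  # local gauge parameter
phi = sp.Function('phi')(x)                 # matter field
a = sp.Function('a', real=True)(x)          # real gauge potential

g = sp.exp(sp.I * alpha)   # U(1) gauge transformation g(x)
A = sp.I * a               # Lie-algebra (anti-Hermitian) valued connection

def D(f, conn):
    # covariant derivative D = d/dx + A
    return sp.diff(f, x) + conn * f

# Under phi -> g*phi the connection transforms as A -> A - (dg/dx) * g**(-1)
A_transformed = A - sp.diff(g, x) / g

lhs = D(g * phi, A_transformed)  # transformed D acting on the transformed field
rhs = g * D(phi, A)              # g times the original covariant derivative
print(sp.simplify(lhs - rhs))    # 0, i.e. D(g*phi) = g*D(phi)
```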
{ "domain": "physics.stackexchange", "id": 35767, "tags": "differential-geometry, gauge-theory" }
Why do we always use square kernels for filters?
Question: In image processing, why do we always use odd square kernels (3x3, 5x5, 7x7, etc.) for filters? Why can't we use kernels of size 3x1, 5x3, etc. that are rectangular kernels? Also, why do we not prefer even kernels (2x2, 4x4, 6x6, etc.)? Answer: You can use whatever size of kernel you like. The kernel does not need to be square, especially when you want to pay more attention to processing along a specific orientation. In fact, a moving average along a specific axis of an image is a simple filter with a rectangular shape. Gaussian filters, probably among the most used filters in image processing, are based on the Gaussian function, whose peak value is achieved on the axis of symmetry. This is the main reason why such kernels are preferred to be odd. Kernel size selection is often supported in the filter kernel options of image processing packages, such as all the imfilter-related functions in the Matlab Image Processing Toolbox. Yet I would also like to give a high-pass filter example in case you design the filters yourself. For a square kernel with odd dimensions, all filter weights can be set to a negative value, except for the center cell, which has a positive value that increases with the size of the kernel; a square kernel with even dimensions has the positive value in the central group of four cells; a rectangular kernel has a center group of positive-value cells in proportion to the kernel's dimensions. They all work as high-pass filters.
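A minimal NumPy sketch (mine, not from the answer) showing that a rectangular 1x3 kernel is a perfectly valid filter: it smooths only along rows, exactly the directional moving average the answer describes.

```python
import numpy as np

def correlate2d_same(img, kernel):
    # Naive 'same'-size 2D correlation with zero padding; the kernel can be
    # any shape: square or rectangular, odd- or even-sized.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)

horiz = np.ones((1, 3)) / 3.0   # rectangular kernel: averages along rows only
smoothed = correlate2d_same(img, horiz)

# An interior pixel becomes the mean of its row neighbours:
print(smoothed[2, 2])   # (11 + 12 + 13) / 3 = 12.0
```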
{ "domain": "dsp.stackexchange", "id": 6334, "tags": "image-processing, filters" }
Surface terms for path integrals in field theory?
Question: My question relates to something that I've seen in many books and appears in all its glory here: Ryder, pg 198. My question is about eq. 6.74, which I repeat below: $$i \int {\cal D}\phi \frac{\delta \hat{Z} [\phi] }{\delta \phi} \exp \left(i\int J(x) \phi(x) dx\right)$$ $$= i\; \exp \left(i\int J(x) \phi(x) dx\right) \hat{Z}[\phi] \Bigg|_{\phi\rightarrow\infty}$$ $$+ \int {\cal D}\phi J(x) \hat{Z}[\phi] \exp \left(i\int J(x) \phi(x) dx\right).\tag{6.74}$$ $\phi$ is a scalar field, $J$ is a source, $x = x_{\mu}$ in 4D Minkowski space and $$\hat{Z}[\phi] = \frac{e^{iS}}{\int {\cal D}\phi\; e^{iS}}.$$ The author is clearly doing an integration by parts and the first term on the right hand side is a kind of surface term for the path integral. He then considers this term to be zero and the second one gives us: $$i \int {\cal D}\phi \frac{\delta \hat{Z} [\phi] }{\delta \phi} \exp \left(i\int J(x) \phi(x) dx\right) = J(x) Z[J]$$ The tricky thing here is that the integration limits for $\int{\cal D}\phi$ are not very obvious (at least not to me). You are in fact summing over all field configurations. So, there are actually two problems in my mind: For what configuration of $\phi$ is the surface term calculated? (the author says it is $\phi \rightarrow \infty$) Assuming the author is right about taking huge $\phi$: why is this term zero? This applies to path integrals in general: can we do the usual trick of throwing out surface terms safely? Answer: One mustn't confuse field space with physical space. The field space is some sort of manifold without boundaries (for a nonlinear sigma model), or $R^n$ for usual field theories; in either case, integration by parts works in Euclidean space, or if you add a little imaginary part to the propagators so that the action is decaying at large values of $\phi$.
The integration by parts in field space is simple--- there are no boundaries in field space, except at infinite field values, and the Euclidean or slightly off-Minkowski action decays at infinity. There is no relation to the integration by parts in physical space involved for instantons or other topological things.
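A zero-dimensional toy version of the argument (my illustration, not from the answer): with a Euclidean weight $e^{-\phi^2/2+J\phi}$ that decays at $\phi\to\pm\infty$, the boundary term of the integration by parts vanishes, and the identity it produces reduces to the familiar $\langle\phi\rangle = J$.

```python
import sympy as sp

phi, J = sp.symbols('phi J', real=True)

# Euclidean analogue of exp(iS + i J phi): the weight decays at large |phi|,
# so the "surface term" at the boundary of field space vanishes.
weight = sp.exp(-phi**2 / 2 + J * phi)

boundary_term = sp.limit(weight, phi, sp.oo) - sp.limit(weight, phi, -sp.oo)
print(boundary_term)   # 0: nothing survives at phi -> +/- infinity

# With the boundary term gone, integration by parts gives a Schwinger-Dyson
# type identity; for this Gaussian weight it reduces to <phi> = J:
Z = sp.integrate(weight, (phi, -sp.oo, sp.oo))   # sqrt(2*pi)*exp(J**2/2)
phi_avg = sp.simplify(sp.diff(sp.log(Z), J))     # <phi> = d(log Z)/dJ
print(phi_avg)   # J
```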
{ "domain": "physics.stackexchange", "id": 74253, "tags": "quantum-field-theory, path-integral, partition-function, boundary-terms" }
Ring opening of epoxides
Question: For the above question, here's what I'd thought: 1. H+ ion from HCl protonates the oxygen 2. Chloride ion attacks the electrophilic carbon, opening the epoxide ring 3. So the final product formed is a chlorohydrin (which it really isn't) Also, I didn't know what to do with methanol (guessing it just acts as the solvent, HOWEVER, the product contains a methoxy group) Could someone please explain the mechanism to get to the actual product, and also highlight the mistake here? Answer: If you think about the quantities involved in this reaction, you'll realize why the product contains the methoxy group and not the halide. Methanol is present in large excess since it is the solvent of this reaction. Cl- ions will also be present and a very small amount of the chlorohydrin may be formed as well. The reason why methanol is a reagent and does not only act as a solvent in this reaction is because its oxygen atom acts as a nucleophile, just like halides or any other molecule containing electronegative atoms (such as O, N, S etc). I found the following mechanism on Google and it is drawn as if the reaction were being performed in water, however you can replace the blue water molecule with methanol or any other alcohol since the mechanism is the same for both. Whenever you look at reaction schemes like the one you've posted, you should always address the reactivity of solvents and other reaction components; sometimes they end up reacting with your substrate rather than your desired reagent, and this is a very good example of it. So if you intend to synthesize a chlorohydrin, you cannot use a nucleophilic solvent, otherwise, it will react with the epoxide and form the corresponding adduct. I hope this answers your question.
{ "domain": "chemistry.stackexchange", "id": 8645, "tags": "organic-chemistry, reaction-mechanism, ethers" }
What is the difference between a signal peptide and a transit peptide?
Question: From what I know, the two names are used interchangeably and I haven't found any resource which says otherwise either. Is there at all any difference, is there a transit peptide that is not a signal peptide or vice versa? Answer: Signal peptides are typically located at the N terminus of a protein. The signal peptides are processed by the translocon machinery and are cleaved off after sorting through the membranes of organelles in the secretory system: the endoplasmic reticulum, Golgi apparatus, ER-Golgi transition vesicles, plasma membrane, and lysosomes. Transit peptides target the protein to other subcellular organelles such as (from UniProt): the mitochondrion, apicoplast, chromoplast, chloroplast, cyanelle, thylakoid, amyloplast, peroxisome, glyoxysome, and hydrogenosome. N-terminal transit peptides are quite rare. C-terminal transit peptide motifs are much more common. UniProt holds transit peptides as a discrete controlled vocabulary, separate from signal peptides.
{ "domain": "biology.stackexchange", "id": 9325, "tags": "biochemistry, molecular-biology, cell-biology, proteins, cell-membrane" }
ogre build fails with freetype.h error, OS X 10.8, Hydro
Question: While trying to install Hydro on OS X 10.8, with XCode 5.0.2, installing ogre through brew install ogre fails with a freetype.h not found error. /tmp/ogre-ZCKQ/ogre_src_v1-7-4/OgreMain/src/OgreFont.cpp:44:10: fatal error: 'freetype.h' file not found #include FT_FREETYPE_H ^ /usr/local/include/freetype/config/ftheader.h:173:24: note: expanded from macro 'FT_FREETYPE_H' #define FT_FREETYPE_H <freetype.h> I have the following symbolic links in my machine for freetype. ln -s /usr/local/Cellar/freetype/2.5.3_1/include/freetype2 /opt/x11/include/freetype2 ln -s /usr/local/Cellar/freetype/2.5.3_1/include/freetype2 /usr/local/include/freetype The complete error stream is gisted here : https://gist.github.com/karthikkovalam/11376100 I am not sure if I have all the symbolic links properly set. Kindly help me with this issue. Originally posted by karthik_ms on ROS Answers with karma: 71 on 2014-04-28 Post score: 1 Answer: I think this is related: https://github.com/osrf/homebrew-simulation/issues/6 Basically if you use the symbolic link to get one of the Python packages to install, then ogre will not work. Originally posted by William with karma: 17335 on 2014-04-28 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by karthik_ms on 2014-04-28: That perfectly worked. Thanks. Comment by Arjav on 2014-06-19: Hi! I am trying to install ROS Hydro on my MAC 10.8.5. Can you please outline the steps which you carried out to install ogre? I have been stuck here for a while and can't seem to find a way out. Thanks.
{ "domain": "robotics.stackexchange", "id": 17804, "tags": "ros-hydro, osx, ogre" }
Is time complexity $O(n^{n/log(n)})$ considered subexponential time?
Question: If there is an algorithm with time complexity $O(n^{n/log(n)})$, is that already exponential time or still subexponential time? It shouldn't be considered quasi-polynomial since the exponent also depends on $n$, right? Answer: $$n^{\tfrac{n}{\log_2 n}} = (2^{\log_2 n})^{\tfrac{n}{\log_2 n}} = 2^{\tfrac{n}{\log_2 n}\times \log_2 n} = 2^n$$
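The identity is easy to spot-check with exact integer arithmetic by choosing $n = 2^k$ with $k$ itself a power of two, so that $n/\log_2 n$ is an integer:

```python
import math

# Spot-check n^(n / log2 n) == 2^n exactly, for n = 2^k with k a power of two
# (then n / log2(n) = 2^k / k is an integer and everything stays exact).
for k in [1, 2, 4, 8]:
    n = 2 ** k
    exponent = n // int(math.log2(n))   # n / log2(n), exact for these n
    assert n ** exponent == 2 ** n      # e.g. 16**4 == 2**16 == 65536

print("n^(n/log n) is exactly 2^n, i.e. full exponential time")
```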
{ "domain": "cs.stackexchange", "id": 20729, "tags": "time-complexity, runtime-analysis" }
Understanding the Jahn–Teller elongation and compression for octahedral complexes (stabilisation/destabilisation of d orbitals)
Question: I am currently learning about Jahn–Teller effect. For elongation I wondered why the d orbitals with $z$ component are stabilised when the metal–ligand bonds are getting longer? I thought a longer bond would mean a less efficient overlap of the d orbital on the transition metal with the orbital on the ligand. And if yes, shouldn't it be raised so the difference in energy between the d orbital and the ligand’s orbital be greater? I thought the ligands’ orbitals would be lower in energy each time. Where am I going wrong with my thinking? Answer: Before you consider J-T distortion, take a look at the d orbitals of an octahedral complex. Note that the orbitals that are on the axes where the ligands are are higher in energy. This is because these "d orbitals" are actually anti-bonding orbitals comprised primarily of the metal d orbitals. Because they are anti-bonding, they get higher in energy as the bonding interaction between the ligand and metal gets stronger. Thus, a longer distance between the ligand and metal (ie weaker bonding interaction) results in less destabilization. Another way to think about it is to consider the interaction of a ligand lone pair (ie negatively charged electron density) with a filled d orbital on the metal (also negatively charged electron density). The closer they are together, the more unfavorable the interaction is.
{ "domain": "chemistry.stackexchange", "id": 14653, "tags": "transition-metals, crystal-field-theory" }
Percolation using quick union connectivity algorithm
Question: This is my attempt to solve the percolation problem for the Princeton Algorithms I course; the problem is well-defined here. My implementation is in C++, not Java, and the following procedure is followed: create an NxN grid using the QuickUnion class and keep track of each grid site by labeling it (except the first row, which is all labeled with zero, to make the percolates() and isFull() functions easier to implement). Each site is a struct type that contains its id and the boolean open state. The solving methodology is as follows: open a random site, check if the system percolates and, if not, repeat. My problem is with the open() function, specifically the part of the code where I try to get the neighbor sites in order to connect the current site with the open neighbor site, where I make multiple checks for the location of the current site and what to do in each case. QuickUnion.h (the implementation is not attached because it's not the issue here) #ifndef QUICKUNION_H #define QUICKUNION_H #include <vector> struct Site { bool open = false; std::size_t id; }; typedef std::vector<std::vector<Site>> Matrix2D; typedef std::pair<int ,int> Location; class QuickUnion { public: QuickUnion(const int N); bool isConnected(int p, int q); void Union(int p, int q); int Root(int p); Location getLocation(int p); virtual ~QuickUnion(){ } Matrix2D grid; std::size_t count = 0; }; #endif /* QUICKUNION_H */ Percolation.h #ifndef PERCOLATION_H #define PERCOLATION_H #include "QuickUnion.h" inline std::size_t getRandom(std::size_t N); class Percolation { public: Percolation(int N); // create n-by-n grid, with all sites blocked void open(int row, int col); // open site (row, col) if it is not open already bool isOpen(int row, int col); // is site (row, col) open? bool isFull(int row, int col); // is site (row, col) full? bool percolates(); // does the system percolate?
~Percolation(); QuickUnion *qu; }; #endif /* PERCOLATION_H */ Percolation.cpp #include <cstdlib> #include <random> #include <iostream> #include <vector> #include "QuickUnion.h" #include "Percolation.h" inline std::size_t getRandom(std::size_t N) { return ( std::rand() % ( N ) ); } Percolation::Percolation(int N) { qu = new QuickUnion(N); } Percolation::~Percolation() { delete qu; } void Percolation::open(int row, int col) { // row-> j & col->i if (isOpen(row, col)) return; qu->grid[row][col].open = true; auto current_site_id = qu->grid[row][col].id; // connect the just-opened site to the neighbor open sites // upper boundary. Don't connect to any site, all the opened sites here shall be roots. if (row == 0) { if (col == 0) { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row + 1, col + 1)); neighbors.push_back(std::make_pair(row, col + 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(id, current_site_id); } } return; } if (col == qu->count - 1) { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row + 1, col - 1)); neighbors.push_back(std::make_pair(row, col - 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(id, current_site_id); } } return; } else { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row + 1, col - 1)); neighbors.push_back(std::make_pair(row + 1, col + 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(id, current_site_id); } } return; } } // lower boundary if (row == qu->count - 1) { if (col == 0) { 
std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row - 1, col + 1)); neighbors.push_back(std::make_pair(row, col + 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(current_site_id, id); } } return; } if (col == qu->count - 1) { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row - 1, col - 1)); neighbors.push_back(std::make_pair(row, col - 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(current_site_id, id); } } return; } else { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row - 1, col - 1)); neighbors.push_back(std::make_pair(row - 1, col + 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(current_site_id, id); } } return; } } // left boundary if (col == 0) { std::vector<std::pair<int, int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row, col + 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(current_site_id, id); } } return; } // right boundary if (col == qu->count - 1) { std::vector<std::pair<int, int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row, col - 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; 
qu->Union(current_site_id, id); } } return; } else { std::vector<std::pair<int ,int>> neighbors; neighbors.push_back(std::make_pair(row + 1, col)); neighbors.push_back(std::make_pair(row + 1, col + 1)); neighbors.push_back(std::make_pair(row + 1, col - 1)); neighbors.push_back(std::make_pair(row - 1, col)); neighbors.push_back(std::make_pair(row - 1, col + 1)); neighbors.push_back(std::make_pair(row - 1, col - 1)); neighbors.push_back(std::make_pair(row, col + 1)); neighbors.push_back(std::make_pair(row, col - 1)); for (auto &neighbor: neighbors) { if (isOpen(neighbor.first, neighbor.second)) { auto id = qu->grid[neighbor.first][neighbor.second].id; qu->Union(current_site_id, id); } } return; } return; } bool Percolation::isOpen(int row, int col) { return qu->grid[row][col].open; } bool Percolation::isFull(int row, int col) { return qu->isConnected(qu->grid[row][col].id, 0); } bool Percolation::percolates() { for (int i = 0; i < qu->count; i++) { if (qu->isConnected(qu->grid[qu->count - 1][i].id, 0)) return true; } return false; } int main(int argc, char *argv[]) { if (argc < 2) { std::cout << "Too few arguments.\n"; return -1; } auto size = std::atoi(argv[1]); Percolation p(size); int threshold = 0; int i, j; while (!p.percolates()) { i = getRandom(size); j = getRandom(size); p.open(j, i); threshold++; } std::cout << "System percolates at " << float(threshold)/float(size*size) << " open sites probability.\n"; } In terms of performance, how can I speed up the code (it's an algorithm course!)? Is there an easier way to avoid the ugly code block in open() to solve the connectivity versus site location problem? Answer: Because no details were provided regarding the implementation of the QuickUnion class, it's not possible to provide any meaningful comment on performance. However, here are a few things that may help you improve your code.
Consider using a better random number generator You are currently using inline std::size_t getRandom(std::size_t N) { return ( std::rand() % ( N ) ); } The major problem with this approach is that the low order bits of the random number generator are not particularly random. On my machine, there's a slight but measurable bias toward 0 with that. A better solution, if your compiler and library support it, would be to use the C++11 std::uniform_int_distribution. In this case, I'd instead add a std::uniform_int_distribution to the class as private data members like this: std::uniform_int_distribution<std::size_t> randomSquare; static std::mt19937 gen; Then in the implementation file, define gen: std::mt19937 Percolation::gen([]()->std::random_device::result_type{std::random_device rd; return rd();}()); Now whenever you want a random square, use the member function randomSquare() like this: randomSquare(0,size*size); Or if you feel you must keep the x,y style the code is currently using, use this: i = randomSquare(0, size); j = randomSquare(0, size); Save the constructor argument The size N is the only constructor argument but it doesn't appear to be explicitly saved which would be convenient in a number of places. One could refer perhaps to qu->grid.size() each time, but that seems a little awkward. I'd suggest a private member variable that stores the size for convenience. Fix the bug Right now, the open function begins with this: if (isOpen(row, col)) return; However, the threshold is incremented even if no new square is opened. That's a bug and will skew your calculation.
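Going back to the modulo-bias point in the first section above: the skew can be made concrete with a toy sketch (in Python for brevity; the generator's range is shrunk to 256 to exaggerate the effect — these numbers are illustrative, not what `std::rand` actually uses):

```python
from collections import Counter

RANGE = 2 ** 8   # pretend RAND_MAX + 1 is only 256 so the skew is visible
N = 200

counts = Counter(x % N for x in range(RANGE))

# residues 0..55 can be produced by two source values each (x and x + 200),
# while 56..199 come from only one -- so low values are twice as likely
print(counts[0], counts[100])  # -> 2 1
```

With a real 31-bit range the skew is tiny but still present; `std::uniform_int_distribution` removes it entirely by rejecting the overhanging source values.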
Make the object do the work If I were writing this program, I'd prefer to write a main like this: int main(int argc, char *argv[]) { if (argc < 2) { std::cout << "Too few arguments.\n"; return -1; } auto size = std::atoi(argv[1]); Percolation p(size); std::cout << "System percolates at " << p.run() << " open sites probability.\n"; } I'm sure you can figure out what would need to go into a run function. Simplify your code As you have already realized, the open function is waaaay too long and complex. Consider instead what's required. When a site is opened, you simply need to call qu->Union() with each of its neighbors. If the site is on the edge of the grid, the neighbor that would be outside the grid simply needs to be skipped. Also, it appears that id is simply an alias for row + col * size -- I'd recommend choosing the single id rather than the row,col approach or a messy mix of both. Anyway, a much simpler open might be this: void Percolation::open(int row, int col) { if (isOpen(row, col)) return; qu->grid[row][col].open = true; auto current_site_id = qu->grid[row][col].id; // process neighbor above if (row != 0 && qu->grid[row-1][col].open) { qu->Union(qu->grid[row-1][col].id, current_site_id); } // process neighbor below if (row != size-1 && qu->grid[row+1][col].open) { qu->Union(qu->grid[row+1][col].id, current_site_id); } // process neighbor left if (col != 0 && qu->grid[row][col-1].open) { qu->Union(qu->grid[row][col-1].id, current_site_id); } // process neighbor right if (col != size-1 && qu->grid[row][col+1].open) { qu->Union(qu->grid[row][col+1].id, current_site_id); } }
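On the performance question: since the QuickUnion implementation wasn't posted, here is a hedged sketch (in Python for brevity) of the two standard speed-ups — union by size and path compression — which together make each operation near-constant amortized. A further classic trick from this style of percolation exercise is to add two virtual sites wired to the whole top and bottom rows, so percolates() becomes a single connectivity query instead of a loop over the bottom row:

```python
class WeightedQuickUnion:
    """Union by size plus path halving -- near-constant amortized operations."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            # path halving: point every other node at its grandparent
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # hang the smaller tree under the larger
        self.size[ra] += self.size[rb]

    def connected(self, a, b):
        return self.find(a) == self.find(b)

uf = WeightedQuickUnion(10)
uf.union(1, 2)
uf.union(2, 3)
print(uf.connected(1, 3), uf.connected(1, 5))  # -> True False
```

If the posted QuickUnion does neither of these, the tree can degenerate into a linked list and each Union/isConnected costs linear time, which dominates the whole simulation.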
{ "domain": "codereview.stackexchange", "id": 23361, "tags": "c++, performance, algorithm, simulation" }
Testing if a random signal was modified or if it contains noise
Question: Given a random signal, is there a method of finding if it was modified or if it contains noise if you don't know the original signal? E.g.: given a random image, can I find out if it was modified if I don't know the original? Answer: There is no way to say for sure. But you can make some assumptions about the signal. If we assume that the signal is not compressed and is not encrypted, you can assume that entropy gets higher as more noise is added. That is, a photo of a beach will take up fewer kB when compressed than a white noise picture, if lossless compression is used. https://en.wikipedia.org/wiki/Kolmogorov_complexity This is a way to estimate entropy from a theoretical point of view. The practical way is lossless compression. This is the only thing that fits your question exactly. If we assume that the signal is sending data at a much lower frequency than the carrier frequency, which is the case for almost all consumer grade tech, and our sampling rate is much faster than the carrier wave of the signal, which requires very expensive tech, we can search for the carrier wave itself. The ratio of noise to signal within a single symbol will indicate the noise. This will work with any information being sent, even encrypted. Frequency hopping is seen as a particular time on a particular frequency, like 1 ms at a frequency of 2.396 MHz. Ultrawideband is seen as separate pulses with repetition after a natural number of pauses, like a delta pulse after 1 ns * N, where N is 1, 2, 3, 4... https://en.wikipedia.org/wiki/Carrier_wave The second case is just a special case of the first. You can collect the samples and try to compress them, and use the resulting size reduction as a rough estimate of the amount of non-natural signals. There is a theoretical way to transmit information on a radio frequency so that it will not be detected as quickly as in the examples above.
It would require operating at a frequency much lower than the carrier frequency, or the maximum switching frequency of the device; emitting a wideband signal; emitting a signal that is below the natural noise level for a desired target distance (it can still be detected if the detector is closer than that range); using encryption to make the signal look random; using direct sequence spread spectrum, https://en.wikipedia.org/wiki/Direct-sequence_spread_spectrum or a similar technique to code 1 bit of data as many bits of output signal; and using precise clocks, since normal synchronisation methods would be visible. As far as I know, no one bothers with such extremes, and almost all of our radio signals can be detected by patterns characteristic of their modulation. https://en.m.wikipedia.org/wiki/Modulation If we assume this picture was taken by a camera, then even this extreme method won't protect from much simpler detection methods, such as retroreflection of the optical system (camera) https://jc6guaarxlwqeirmsf35gvp7ki-adwhj77lcyoafdy-ru-m-wikipedia-org.translate.goog/wiki/%D0%90%D0%BD%D1%82%D0%B8%D1%81%D0%BD%D0%B0%D0%B9%D0%BF%D0%B5%D1%80_(%D0%BF%D1%80%D0%B8%D1%86%D0%B5%D0%BB) And even without optics, PCB parts near the camera can be found using a metal detector, and in particular the ringing of its antenna, even if it's not working. But this only works in close proximity, less than 1 m. You may want to specify the device you are talking about to get a more precise answer, since there isn't much to be done with the image alone. TLDR: it comes down to a lot of tricks. To select the trick for your case, more information about the system is needed. One that sort of fits sometimes is entropy estimating.
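The compression-based entropy estimate mentioned above can be sketched in a few lines (Python, with zlib standing in for any lossless compressor; the two signals are synthetic stand-ins, not real measurements):

```python
import zlib
import random

random.seed(0)
# structured "signal": long runs of slowly changing byte values
smooth = bytes((i // 64) % 256 for i in range(4096))
# white noise: uniformly random bytes
noise = bytes(random.randrange(256) for _ in range(4096))

def ratio(data):
    """Compressed size over raw size -- a crude entropy estimate."""
    return len(zlib.compress(data, 9)) / len(data)

# structured data compresses far better; noise is near-incompressible
print(ratio(smooth) < ratio(noise))  # -> True
```

The absolute ratios depend on the compressor, but the ordering is robust: the closer the ratio is to 1, the more noise-like the data.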
{ "domain": "dsp.stackexchange", "id": 9625, "tags": "image-processing, signal-analysis" }
Can a classical circuit of size $2^k$ be modelled by a quantum circuit of size $k$ or vice versa?
Question: There is something fundamental I don't understand about quantum computing and hence the following question may be very trivial or stupid, for which I apologize in advance. A boolean function $f:\{0,1\}^n \to \{0,1\}$ has a certain (classical) complexity (say with respect to the basis $\{and, or, not\}$) which is defined as the smallest number of gates (i.e. the size of the circuit) in a classical circuit using only $and$, $or$, and $not$ and which computes $f$. Say a quantum circuit computes $f$ if applied to the initial state $e_{x0^r}$ with $x\in\{0,1\}^n$ it ends in a quantum state where the probability of measuring $f(x)$ is $\geq \frac{3}{4}$. Moreover, say the quantum circuit complexity is the smallest number of quantum gates (using only some from a basis, say, and let's also call this the size as well) needed for computing $f$. I have two questions: If I am not mistaken, I have read/heard somewhere that one can ‘obviously emulate' a quantum circuit of size $k$ by a classical circuit of size $2^k$. (‘Emulated' means here that the same boolean function is computed.) Why is this trivial, if it is true? :) Is quantum circuit complexity as defined above at least bounded by the logarithm of the classical circuit complexity? This question is about ‘the other direction' and as a motivation for quantum circuits it is even more interesting to me: Can a classical circuit of size $2^k$ be emulated by a quantum circuit of size $k$? Thank you. Comment: Perhaps the thing similar to 1 above which was meant by some people is concerning the inputs. But I don't even see how one can encode a general state of $k$ qubits (a unit vector in $(\mathbb{C}^2)^k$) by using $2^k$ classical bits as the latter can be only zero and one each - let alone the gates. Hence I don't see how one can simulate any quantum circuit using a classical circuit.
Answer: Before answering let me generalize and formalize your framing a bit, so that I can say things concretely without fussing over details. I believe the heart of your question concerns the power of an exponentially-sized classical circuit vs. a polynomially-sized quantum circuit. But I will also focus on the case where the circuits are exponential only in depth, and preclude the possibility of exponentially many bits. Thus, we are focused on matters of exponential time, not space. My answers below would need to change otherwise. Computer scientists like to define complexity classes to classify the difficulty of various computational problems. For example, the class EXP denotes the class of problems classically solvable in exponential time (with a polynomial number of bits, however). Your first question concerns whether the simulation of quantum circuits is in EXP. Indeed, it is, because one can simply perform the appropriate matrix multiplication on a (complex) vector space of dimension $2^n$. Given exponential time, a classical computer can emulate a quantum computer. To address question (2), suppose one could emulate a classical circuit of size $N$ with a quantum circuit of size $O(\log N)$. From the standpoint of complexity theory, this would imply that quantum computers could efficiently solve all problems in EXP. The class of problems that can be efficiently solved on a quantum computer is termed BQP. Thus, we would have EXP $\subset$ BQP. On the other hand, we've already essentially shown above that BQP $\subset$ EXP. We would be forced to conclude that BQP = EXP. Though we cannot prove (as far as I know) whether this is indeed the case, we should be quite suspicious. Most experts don't believe that BQP contains NP, let alone EXP. This would preclude efficient solutions to the likes of the traveling salesman problem. For a better discussion of this than I can give, see this other post.
To summarize, it would be very, very surprising if quantum computers were as powerful as you describe. I doubt there are many people in the field who believe so.
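A minimal sketch of the "obvious" classical emulation mentioned above (Python/NumPy, purely illustrative): the state of $n$ qubits is a vector of $2^n$ complex amplitudes, and each gate acts as a $2^n \times 2^n$ matrix — which is exactly where the exponential classical cost lives.

```python
import numpy as np

n = 3                                   # n qubits -> a 2**n-entry state vector
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit(gate, qubit, state, n):
    """Build I (x) ... (x) gate (x) ... (x) I by Kronecker products and apply it.
    The full operator is 2**n x 2**n -- exponential in the number of qubits."""
    op = np.array([[1.0 + 0j]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

for q in range(n):
    state = apply_single_qubit(H, q, state, n)

probs = np.abs(state) ** 2              # uniform over all 2**n outcomes
print(np.allclose(probs, 1 / 2 ** n))   # -> True
```

Real simulators avoid materializing the full operator (applying a 1-qubit gate in place costs $O(2^n)$, not $O(4^n)$), but the state vector itself is unavoidably exponential in $n$.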
{ "domain": "quantumcomputing.stackexchange", "id": 4766, "tags": "quantum-gate, circuit-construction, quantum-circuit, classical-computing" }
Generating and calling code on the fly
Question: Delegate This class module defines what I'm calling, in this context, a Delegate - here a function that can take a number of parameters, evaluate a result, and return a value. Close enough to the actual "delegate" thing I find. Example usage Set x = Delegate.Create("(x) => MsgBox(""Hello, "" & x & ""!"")") x.Execute "Mug" The Execute call will generate this code in a dedicated code module found in the Reflection project (I know, it should be indented... but hey it's generated code!): Public Function AnonymousFunction(ByVal x As Variant) As Variant AnonymousFunction = MsgBox("Hello, " & x & "!") End Function Then it will call it (here with parameter value "Mug"), resulting in this: And this would output VbMsgBoxResult.vbOK, which has a value of 1: Debug.Print x.Execute("Mug") Now that's all nice, but I didn't write this class to display "Hello" message boxes; with it I can create a Delegate instance, and pass it as a parameter to a function, say, this member of some Enumerable class: Public Function Where(predicate As Delegate) As Enumerable Dim result As New Collection Dim element As Variant For Each element In this.Encapsulated If predicate.Execute(element) Then result.Add element Next Set Where = Enumerable.FromCollection(result) End Function I've always wanted to be able to do this. Enough talk, here's the code that enables this sorcery! 
Option Explicit Private Type TDelegate Body As String Parameters As New Collection End Type Private Const methodName As String = "AnonymousFunction" Private this As TDelegate Friend Property Get Body() As String Body = this.Body End Property Friend Property Let Body(ByVal value As String) this.Body = value End Property Public Function Create(ByVal expression As String) As Delegate Dim result As New Delegate Dim regex As New RegExp regex.Pattern = "\((.*)\)\s\=\>\s(.*)" Dim regexMatches As MatchCollection Set regexMatches = regex.Execute(expression) If regexMatches.Count = 0 Then Err.Raise 5, "Delegate", "Invalid anonymous function expression." End If Dim regexMatch As Match For Each regexMatch In regexMatches If regexMatch.SubMatches(0) = vbNullString Then result.Body = methodName & " = " & Right(expression, Len(expression) - 6) Else Dim params() As String params = Split(regexMatch.SubMatches(0), ",") Dim i As Integer For i = LBound(params) To UBound(params) result.AddParameter Trim(params(i)) Next result.Body = methodName & " = " & regexMatch.SubMatches(1) End If Next Set Create = result End Function Public Function Execute(ParamArray params()) As Variant On Error GoTo CleanFail Dim paramCount As Integer paramCount = UBound(params) + 1 GenerateAnonymousMethod 'cannot break beyond this point Select Case paramCount Case 0 Execute = Application.Run(methodName) Case 1 Execute = Application.Run(methodName, params(0)) Case 2 Execute = Application.Run(methodName, params(0), params(1)) Case 3 Execute = Application.Run(methodName, params(0), params(1), params(2)) Case 4 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3)) Case 5 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4)) Case 6 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4), params(5)) Case 7 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4), 
params(5), _ params(6)) Case 8 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4), params(5), _ params(6), params(7)) Case 9 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4), params(5), _ params(6), params(7), params(8)) Case 10 Execute = Application.Run(methodName, params(0), params(1), params(2), _ params(3), params(4), params(5), _ params(6), params(7), params(8), _ params(9)) Case Else Err.Raise 5, "Execute", "Too many parameters." End Select CleanExit: DestroyAnonymousMethod Exit Function CleanFail: Resume CleanExit End Function Friend Sub AddParameter(ByVal paramName As String) this.Parameters.Add "ByVal " & paramName & " As Variant" End Sub Private Sub GenerateAnonymousMethod() Dim component As VBComponent Set component = Application.VBE.VBProjects("Reflection").VBComponents("AnonymousCode") Dim params As String If this.Parameters.Count > 0 Then params = Join(Enumerable.FromCollection(this.Parameters).ToArray, ", ") End If Dim signature As String signature = "Public Function " & methodName & "(" & params & ") As Variant" & vbNewLine Dim content As String content = vbNewLine & signature & this.Body & vbNewLine & "End Function" & vbNewLine component.CodeModule.DeleteLines 1, component.CodeModule.CountOfLines component.CodeModule.AddFromString content End Sub Private Sub DestroyAnonymousMethod() Dim component As VBComponent Set component = Application.VBE.VBProjects("Reflection").VBComponents("AnonymousCode") component.CodeModule.DeleteLines 1, component.CodeModule.CountOfLines End Sub The regular expression is pretty permissive; I'm basically allowing anything between parentheses, followed by =>, and then anything goes. I'd like a regex that enforces an optional comma-separated list of parameters between the parentheses, at least. 
The reason I'd want a stiffer regex is that it's my only chance to catch and prevent syntax errors that would generate uncompilable code, like: Set x = Delegate.Create("(this is a bad parameter) => MsgBox(""Hello, "" & x & ""!"")") Which generates this uncompilable code: Public Function AnonymousFunction(ByVal this is a bad parameter As Variant) As Variant AnonymousFunction = MsgBox("Hello, " & x & "!") End Function The actual anonymous function doesn't get generated until the Execute function is called, and then the anonymous function gets destroyed before Execute exits - this way one could have 20 Delegate objects with as many different anonymous functions waiting to be executed. The flipside is an obvious performance hit, especially with usages such as the Where method shown above - the same method would get created, executed and destroyed 200 times if the encapsulated collection has 200 elements. Appending the expression body to the function's name induces a limitation - the "body" may only be a one-liner. I can live with that, but I wonder if there wouldn't be a way to make it smarter. Answer: NOTE - if you decide to stick with paramArray() it wouldn't be a bad idea to check the boundaries of the paramArray() before going any further -> into Select case in the Execute(). Application.Run() is capable of taking up to 30 parameters so a quick check that your Ubound(params) < 30 would probably be sufficient. But also: something ... super tiny ;) but why take a paramArray() in the Execute() since currently Execute() can only proceed with 10 arguments? (could do with up to 30 due to Application.Run() limit of 30 optional arguments) Application.Run can take 30 Optional Parameters so I am just thinking that possibly a better idea would be to take up to 10 (or 30) optional parameters rather than a whole paramArray(). The function's definition may not look too pretty with all those Optional Parameters but it would allow for a (IMO) better function body.
I suspect that you wouldn't have to drastically change anything in the way you call Execute() but I haven't tested so this may still need verification. So...something along these lines: '// '// Application.Run() is limited to up to 30 optional arguments '// '// firstParameter may actually not need to be passed because it's a global constant '// I have used it here "just in case" for now '// Public Function Execute(methodName As String, _ Optional Arg1 As Variant, Optional Arg2 As Variant, Optional Arg3 As Variant, _ Optional Arg4 As Variant, Optional Arg5 As Variant, Optional Arg6 As Variant, _ Optional Arg7 As Variant, Optional Arg8 As Variant, Optional Arg9 As Variant, _ Optional Arg10 As Variant, Optional Arg11 As Variant, Optional Arg12 As Variant, _ Optional Arg13 As Variant, Optional Arg14 As Variant, Optional Arg15 As Variant, _ Optional Arg16 As Variant, Optional Arg17 As Variant, Optional Arg18 As Variant, _ Optional Arg19 As Variant, Optional Arg20 As Variant, Optional Arg21 As Variant, _ Optional Arg22 As Variant, Optional Arg23 As Variant, Optional Arg24 As Variant, _ Optional Arg25 As Variant, Optional Arg26 As Variant, Optional Arg27 As Variant, _ Optional Arg28 As Variant, Optional Arg29 As Variant, Optional Arg30 As Variant _ ) As Variant On Error GoTo CleanFail GenerateAnonymousMethod 'cannot break beyond this point Execute = Application.Run(methodName, Arg1, Arg2, Arg3, Arg4, Arg5, Arg6, Arg7, Arg8, Arg9, _ Arg10, Arg11, Arg12, Arg13, Arg14, Arg15, Arg16, Arg17, Arg18, Arg19, _ Arg20, Arg21, Arg22, Arg23, Arg24, Arg25, Arg26, Arg27, Arg28, Arg29, Arg30) CleanExit: DestroyAnonymousMethod Exit Function CleanFail: Resume CleanExit End Function Ok, so you will need to modify the AddParameter() too...because Variant can be Missing Friend Sub AddParameter(ByVal paramName As String) this.Parameters.Add "Optional ByVal " & paramName & " As Variant = vbNullString" End Sub This reduces all the Select Case 1-30 to a single: Execute =
Application.Run(methodName, Arg1, Arg2, Arg3, Arg4, Arg5, Arg6, Arg7, Arg8, Arg9, _ Arg10, Arg11, Arg12, Arg13, Arg14, Arg15, Arg16, Arg17, Arg18, Arg19, _ Arg20, Arg21, Arg22, Arg23, Arg24, Arg25, Arg26, Arg27, Arg28, Arg29, Arg30) A super easy repro to get an idea, in case the above is a bit overwhelming: Sub Main() ExecuteExt ExecuteExt "hello" ExecuteExt "hello", "world" End Sub ' your execute without the select Function ExecuteExt(Optional ByVal Arg1 As Variant, Optional ByVal Arg2 As Variant) ExecuteExt = Application.Run("PrintArgs", Arg1, Arg2) End Function ' this would be the generated anonymous method Sub PrintArgs(Optional ByVal Arg1 As Variant = vbNullString, Optional ByVal Arg2 As Variant = vbNullString) Debug.Print Arg1, Arg2 End Sub
{ "domain": "codereview.stackexchange", "id": 10068, "tags": "object-oriented, regex, vba, delegates, meta-programming" }
Complexity of a certain leaf language with Prime & Composite number of accepting paths.
Question: Given a non-deterministic Turing Machine that runs in polynomial time, it accepts if the number of accepting paths is composite, it rejects if the number of accepting paths is prime, and it outputs I DO NOT KNOW if the number of accepting paths is in {0,1}. Let's call the above language CA-PR (Composite Accept - Prime Reject). Then we have co-CA-PR = PA-CR (Prime Accept, Composite Reject). Both of the above languages output DON'T KNOW when the number of accepting paths is in {0,1}. Questions: Do CA-PR & PA-CR not contain UP? A #P Oracle can definitely solve these problems, can a PP oracle too? How about a ParityOracle? What can we say about the intersection and union of these languages? Where can we place this complexity class? Is it in the polynomial hierarchy? Answer: @Tayfun Pay, you are trying to get us to solve a problem for you! As it turns out, you only need to apply primality on the count of the number of accepting paths, and this count requires a polynomial number of bits to write down. But the concatenated string on the paths of the NP machine is exponentially long. It is as if the primality algorithm is run on a "padded/unary" version. So it is quite weak and runs within logspace, and hence the whole thing is within PSPACE. Now, if you refer to the table (Figure 1) in this paper by [Jenner et al], (or directly to [Hertrampf et al. 93]), you will notice that classes within PSPACE (ModP/PH etc) emerge only when the leaf language is a subset of regular languages such as solvable/aperiodic etc. But as is well known, unary PRIMES is not regular (by a standard application of the pumping lemma). This much I know. Now to your question. I would suspect PSPACE is the right bound and the proof should not be too difficult, given the techniques from the papers quoted above. If you spend some time thinking, I am sure you will be able to prove the correct bound one way or the other. And especially if these are your first steps into research, I will say, go for it!
{ "domain": "cstheory.stackexchange", "id": 640, "tags": "cc.complexity-theory, counting-complexity, polynomial-hierarchy" }
What is the Davis Equation and why is it used in a Train Simulator?
Question: I have been trying to understand how Microsoft Train Simulator works and people seem to use some Davis equation to calculate friction. So my questions are: What is it? Why do they use it? Are there alternatives to calculate train friction/are there other ways to calculate train friction? Answer: Basically, the Davis Equation is a resistance formula mainly used in basic go-stop situations like trains. The basic formula: $$R'=1.3+\frac{29}{w}+.045v+\frac{.0005av^2}{wn}$$ Here $R'$ is resistance (in pounds per short ton), w is the axle load in short tons, n is the number of axles, v is the speed in mph, and a is the frontal area of the train in sq. feet. According to Szanto in Rolling Resistance Revisited you can modify the equation to fit standard freight cars, but the concepts are the same, factoring in air resistance as well. When you or the simulator substitutes the values above to yield certain necessary values for the simulation, you can find relatively accurate coefficients of drag. When you get resistance/drag the simulator will then compute whatever other factors are necessary and then create the appropriate image. This (according to Microsoft Train Simulator) happens hundreds of times a second at the highest settings to give high quality data for the discerning user. Now as to your third question, yes, there are other ways of calculating friction, but the Davis Equation was designed specifically for this purpose and requires no extraneous values and in a sense is the most 'streamlined' equation for this purpose. Some come close though, most prominent being the Canadian National modification for double deck EMUs: $$R=14*\sqrt{10(M)(n)}$$ This square root function will yield more accurate resistance coefficients for taller wagons.
If anybody has found more accurate and EFFICIENT methods of finding resistance for trains than the Davis, please edit or answer thusly; but from my point of view the Train Simulator, as with all computer programs, uses this equation to balance accuracy and speed of calculation.
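The formula above is straightforward to transcribe into code. A sketch in Python; the numerical example is purely illustrative (a hypothetical car, not data from any particular train):

```python
def davis_resistance(w, n, v, a):
    """Davis rolling resistance, lb per short ton (classic coefficients).

    w: axle load (short tons), n: number of axles,
    v: speed (mph), a: frontal area (sq ft).
    """
    return 1.3 + 29.0 / w + 0.045 * v + 0.0005 * a * v ** 2 / (w * n)

# hypothetical example: 25-ton axle load, 4 axles, 60 mph, 120 sq ft
print(round(davis_resistance(25, 4, 60, 120), 2))  # -> 7.32
```

A simulator would evaluate this (or a variant with tuned coefficients) every physics tick, multiply by the train's weight to get a retarding force, and feed that into the equations of motion.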
{ "domain": "physics.stackexchange", "id": 46985, "tags": "newtonian-mechanics, forces, friction, terminology, drag" }
Composite vs metal gearing
Question: I am planning on buying a kitchen stand mixer with planetary action. One important factor to consider is the type of gears the mixer uses, they can be plastic, composite or metal. As far as I know metal gears are the best and plastic the worst in terms of durability and handling stress. However I don't know what are composite gears and how they fare, compared to metal or plastic. Answer: Composite is a very broad term in engineering - it's essentially any material that is made of two or more substances merged at macroscopic level - so not a mixture or alloy. This may be just a mix of larger grains, or it can be a complex structure. One most common composite is reinforced concrete, where structure of rebar is embedded in concrete. Another is carbon fiber reinforced polymer (CFRP), where a texture/fabric of carbon fiber is embedded in a resin. Even plain concrete, even if rarely defined as such, is a composite - cement binder plus gravel. In mechanisms, composite is usually the middle ground between plain polymer and metal. This may be metal-reinforced polymer (thin sheet metal gears embedded in plastic for stability), or it may be a CFRP cut-out, or a gear with metal axis mount and plastic teeth. Or it may be plain plastic with some waste organic fiber (cotton) added, increasing its strength insignificantly. It's hard to say without knowing the exact device in question, and while that may be something of performance quite comparable to metal, the marketing department can get away with the buzzword to peddle a total BS that way - any additive that won't dissolve in plastic will make the plastic "a composite" regardless of any performance gains - or losses.
{ "domain": "engineering.stackexchange", "id": 1377, "tags": "materials, gears" }
Perturbation method & eigenvalues
Question: I have a problem but I don't understand the question. It says: "Show that, to first order in energy, the eigenvalues are unchanged." What does it mean? Does it mean that, if the Hamiltonian has the form $$H=H^{(0)}+\lambda H^{(1)}$$ where $H^{(0)}$ is the Hamiltonian of the unperturbed system, $H^{(1)}$ is the perturbation and $\lambda$ is a small parameter, then, given $$E_{n}=E_{n}^{(0)}+\lambda E_{n}^{(1)}$$ where $$E_{n}^{(1)}=\left\langle \psi_{n}^{(0)}|H^{(1)}|\psi_{n}^{(0)}\right\rangle $$ I have to show that $$E_{n}^{(1)}=0$$ ? I'm confused. Thanks for your answers.
{ "domain": "physics.stackexchange", "id": 5371, "tags": "quantum-mechanics, homework-and-exercises, hamiltonian, perturbation-theory, eigenvalue" }
How to handle out of scope value?
Question: I have a data set in which the score column has to be between 0 and 100 and the subject column has to be one of ['Math','Science','English']. However, my data set has different values for some rows. How should I handle those rows? subject score ... 1 Math 90 ... 2 Science 85 ... 3 English 105 ... 4 Comp 95 ... 5 Math 80 ... 6 Science 70 ... Answer: If you need to clean your data, you can either drop the rows which contain invalid values, or try to correct them. Here are two examples: # If you want to change the score, so values below 0 # are changed to 0 and values above 100 are changed # to 100, you can do that like this: df['score']= df['score'].clip(0, 100) # Alternatively (in case you have more complicated # operations), you can also use where. For the # correction of the scores, this would look like df['score']= df['score'].where(df['score']<=100, 100) df['score']= df['score'].where(df['score']>0, 0) # If you want to drop the rows that contain undefined # subjects, you can do that as follows: valid_subjects= ['Math','Science','English'] # define an indexer that contains True for all rows which are invalid invalid_subj_indexer= ~df['subject'].isin(valid_subjects) # now drop them df.drop(invalid_subj_indexer.index[invalid_subj_indexer], inplace=True)
{ "domain": "datascience.stackexchange", "id": 6149, "tags": "python, pandas, data-cleaning" }
Analytic Manifolds and General Relativity
Question: I'm currently taking a course in General Relativity and we've discussed the mathematical formulation of GR. The professor told us that an analytic manifold $C^{\omega}$ is not a good choice for a manifold, but I can't think of a reason why we shouldn't choose an analytic one. Answer: Analyticity is a really strong requirement. If you are dealing with an analytic function, the function is completely defined on the entire complex plane if you manage to get information about it on an arbitrarily small neighborhood of any point of the plane. For example, if you know the function is analytic on the plane and you know $f(z)$ on the region $|z-a| < \epsilon$, then you get the function on the whole plane by writing the Taylor series $$f(z) = \sum_{n=0}^{+\infty} \frac{f^{(n)}(a) (z-a)^n}{n!}.$$ From just a small neighborhood, you got the whole function. A problem with using analytic manifolds in GR is similar: knowing information on a small piece of spacetime would determine it everywhere. This seems nice at first, but it is not. Suppose, for example, that you were modelling a spherically symmetric star of radius $R$. Now, the way we usually do this is of the sort $$g_{ab} = \begin{cases}\text{vacuum solution for } r > R, \\ \text{interior solution for } r < R. \\\end{cases} \tag{1}$$ The vacuum solution is just the Schwarzschild solution, which, by Birkhoff's theorem, is the only spherically symmetric vacuum solution of the Einstein Field Equations. Notice that the scheme of Eq. (1) applies to all spherically symmetric stars (no matter what they are made of), to black holes, to planets, etc. If it is spherically symmetric, we can model it in that way. However, if the spacetime were analytic, then a tiny bit of the outer vacuum solution would already be sufficient to determine the entire spacetime, in analogy with the Taylor series. This is not what we have in real life.
In real life, the gravitational field of the Sun as felt on Earth won't depend on the details of the Sun's composition (if we assume it to be spherically symmetric), just on its mass, charge and angular momentum. The Earth would move in the same way if instead of the Sun we had a black hole with the same mass, charge and angular momentum. Nevertheless, inside the Sun, the gravitational field is quite different from that of a black hole. To go back to my analogy with one-variable functions, the gravitational field of the Sun and of a black hole are like two functions $f$ and $g$ that respect $f(z) = g(z)$ for $|z| > R$, but might differ for $|z| < R$. This is only possible if they are not analytic. If they were analytic, being equal on any neighborhood would imply being equal on the whole domain, and that is certainly not interesting in gravitational physics. Edit: an analytic manifold need not have an analytic metric This was pointed out in the comments and I agree with it. My answer used the metric as an example, so I should give some more detail, but the main point is still the same: analytic functions are way too restrictive. I'll first argue why one would not want to work with analytic manifolds, but at the end I'll also show some arguments for why there is no issue in doing it. Some constructions often used in Differential Geometry are partitions of unity. A partition of unity is a collection of functions $\lbrace f_{\alpha}\rbrace$ that always add to $1$ ($\sum_\alpha f_{\alpha} = 1$), but for each point of the manifold there is a neighborhood in which only finitely many of them are non-vanishing. Hence, the functions belonging to a partition of unity will often have to vanish on much of the manifold and assume non-zero values at other points. See nLab for an example on the real line, in case it helps to grasp the concept. The Wikipedia page also has a really nice picture.
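A concrete illustration of how smooth functions can do what analytic ones cannot — the textbook bump-function building block, included here only for illustration — is

$$f(x) = \begin{cases} e^{-1/x}, & x > 0, \\ 0, & x \leq 0. \end{cases}$$

This $f$ is infinitely differentiable, and every derivative vanishes at $x = 0$, so its Taylor series at $0$ is identically zero even though $f$ is nonzero for all $x > 0$. Hence $f$ is smooth but not analytic, and functions of exactly this type are glued together to build the partitions of unity just described.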
It can be shown that smooth partitions of unity always exist on (paracompact) smooth manifolds (see nLab). However, you typically won't find analytic partitions of unity on an analytic manifold, simply because analytic functions are too restrictive and refuse to give non-vanishing values at a point if they vanish on a neighborhood somewhere else. In principle this might sound a bit too technical, but it becomes relevant because partitions of unity are used, for example, to define integration on manifolds. The trick is to write an integral over $M$ as $$\int_M F(x) \mathrm{d}x = \sum_{\alpha} \int_{U_\alpha} F(x) f_\alpha(x) \mathrm{d}x,$$ where $\lbrace f_\alpha\rbrace$ is a partition of unity, $\lbrace U_\alpha\rbrace$ covers the manifold, and for each $\alpha$, $f_\alpha$ vanishes outside of $U_\alpha$. In the absence of analytic partitions of unity, one can't define an "analytic integral". According to this MathOverflow post, what one usually does is exploit the fact that analytic manifolds are smooth manifolds and define integration in a smooth, non-analytic, manner. (Let me point out that the comments to that very same question also point to other posts that apparently discuss how to define integration without the need for partitions of unity.) In short, while analyticity is nice, it is also quite restrictive. One might need to resort to smooth, non-analytic results. Is analyticity a problem? I should also add a comment from Hawking & Ellis' The Large Scale Structure of Spacetime, p. 58: If the metric is assumed to be $C^r$, the atlas of the manifold must be $C^{r+1}$. However, one can always find an analytic subatlas in any $C^s$ atlas ($s \geq 1$) (Whitney (1936), cf. Munkres (1954)). Thus it is no restriction to assume from the start that the atlas is analytic, even though one could physically determine only a $C^{r+1}$ atlas if the metric were $C^r$. (The link for Munkres (1954) corresponds to what I could find online.)
Hence, Hawking & Ellis see no issue if you want to consider an analytic manifold structure (notice they don't mention restricting the metric to only analytic metrics: that would definitely be way too restrictive). This might give you a bit more of a headache when working with some things such as partitions of unity (which can be handled by treating the analytic manifold as a smooth manifold).
{ "domain": "physics.stackexchange", "id": 87289, "tags": "general-relativity, differential-geometry" }
How curvature information in second order optimization methods helps
Question: It is said that second order optimization methods in neural networks work better than first order methods because they contain information about the rate of change of the gradient, i.e. the curvature. This information helps to choose a better step size for moving forward on the error surface. It is not clear how the rate of change of the gradient controls the step size and leads to better optimization. For simplicity, consider only one weight update iteration. Answer: You need to consider two steps of your first order optimisation process to see why a second order method can be useful. (For clarity we'll work in one dimension.) First step: calculate the derivative and move your evaluation point accordingly. Then, second step: calculate the derivative again and move your evaluation point accordingly. If the derivative in the second step is larger/smaller than in the first step, you'll move more/less in the second step than in the first step. So the rate of change of the derivative along your path impacts how your step size changes. If you had information on the rate of change of the derivative at the first step, it may have been better to incorporate it directly into your first step calculation. In the end it is rather intuitive that the second derivative may be able to find better suited step sizes. In practice it is a bit more complex, as the actual gain in performance depends on how the second order derivative is calculated / approximated. (See here)
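As a toy illustration (not from the original answer; the quadratic, learning rate, and values are made up), compare a plain gradient step with a Newton step on $f(x) = \frac{a}{2}x^2$. The Newton step divides the gradient by the second derivative $f''(x) = a$, which automatically rescales the step to the local curvature:

```python
def grad(x, a):
    # derivative of f(x) = (a / 2) * x**2
    return a * x

def gd_step(x, a, lr=0.1):
    # first-order update: fixed learning rate, ignores curvature
    return x - lr * grad(x, a)

def newton_step(x, a):
    # second-order update: divide by f''(x) = a (curvature-adapted step)
    return x - grad(x, a) / a

# For a quadratic, one Newton step lands on the minimum (x = 0, up to
# rounding) whatever the curvature; gradient descent's progress depends
# on the product a * lr and can be far too timid or too aggressive.
for a in (0.1, 1.0, 10.0):
    print(a, gd_step(5.0, a), newton_step(5.0, a))
```

The point is that the same fixed learning rate behaves very differently for different curvatures, while the curvature-aware step does not.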
{ "domain": "datascience.stackexchange", "id": 6694, "tags": "neural-network, optimization, gradient-descent" }
Why temperature of planets decreases as we move far from Sun?
Question: Is there friction in space? If yes, then does that affect the speed, wavelength and amplitude of an EM wave? And I also want to know, why does the temperature of planets decrease as we move far from the Sun? Answer: Is there friction in space? The interplanetary space: The interplanetary medium includes interplanetary dust, cosmic rays and hot plasma from the solar wind. The temperature of the interplanetary medium varies. For dust particles within the asteroid belt, typical temperatures range from 200 K (−73 °C) at 2.2 AU down to 165 K (−108 °C) at 3.2 AU.[3] The density of the interplanetary medium is very low, about 5 particles per cubic centimeter in the vicinity of the Earth;[citation needed] it decreases with increasing distance from the Sun, in inverse proportion to the square of the distance. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. It may rise to as high as 100 particles/cm3. So some very rarefied matter is there. If yes, then does that affect the speed, wavelength and amplitude of an EM wave? As seen from the numbers above, the density is not enough to affect electromagnetic radiation, which is composed of zillions of photons. And I also want to know, why does the temperature of planets decrease as we move far from the Sun? To first order, the temperature of planets comes from the radiation of the Sun. Radiation falls off as $1/r^2$, while the angle subtended by the planets gets smaller the further away they are. A second order contribution comes from nuclear reactions still going on at the center of planets, but it is not enough to add much to the temperature. As Rob observes in a comment, planets that are gas giants, such as Jupiter, are still undergoing gravitational contraction, which:
Jupiter would have to be almost 80 times larger to have enough mass to ignite a nuclear furnace. So it is not that simple; more details are needed for the planets of a specific planetary system.
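The first-order $1/r^2$ argument can be made quantitative with the standard blackbody equilibrium-temperature formula, $T_{eq} = T_\odot \sqrt{R_\odot / 2d}\,(1-A)^{1/4}$, which gives $T \propto 1/\sqrt{d}$. This is a sketch I'm adding for illustration (the solar values and the fast-rotating, zero-albedo assumptions are mine, not from the answer):

```python
import math

T_SUN = 5778.0   # K, solar surface temperature (assumed standard value)
R_SUN = 6.957e8  # m, solar radius
AU = 1.496e11    # m, astronomical unit

def equilibrium_temperature(d_au, albedo=0.0):
    """Blackbody equilibrium temperature of a fast-rotating planet.

    Absorbed sunlight falls off as 1/d^2, re-radiated power goes as T^4,
    so balancing the two gives T proportional to 1/sqrt(d).
    """
    d = d_au * AU
    return T_SUN * math.sqrt(R_SUN / (2.0 * d)) * (1.0 - albedo) ** 0.25

for name, d in [("Mercury", 0.39), ("Earth", 1.0), ("Jupiter", 5.2)]:
    print(name, round(equilibrium_temperature(d)), "K")
```

For Earth this yields roughly 279 K with zero albedo, and the halving of temperature out to Jupiter's distance follows directly from the $1/\sqrt{d}$ scaling.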
{ "domain": "physics.stackexchange", "id": 63400, "tags": "temperature, astrophysics, planets, solar-system" }
Finding the drag force (Air resistance force) for accelerated ball?
Question: As you know, if I want to find the force on an accelerated object I will use the law $F_o=ma$ so I can get its accelerating force. But there is another force acting against the object: the air resistance force. So I will have to calculate the drag force (air resistance force) and then subtract it from the accelerated object's force to get the exact net force $F_{net}=F_o-F_d$, where $F_o$ is the accelerated object's force and $F_d$ is the drag force (air resistance force). Now, how can I calculate the drag force (air resistance force)? OK, actually I know the formula below: $F_d=\dfrac{\rho\,v^2AC_d}{2}$ where: $\rho$ is the air density, $v$ is the speed of the object relative to the air, $A$ is the cross sectional area of the object and $C_d$ is the drag coefficient. So my problem is: I don't know what the cross sectional area $A$ of the sphere is (in my case I used a ball), and I don't know what the drag coefficient $C_d$ of a sphere (ball) is. Answer: $C_d$ is a function of speed via the Reynolds number. See here and here. For some numeric values of $C_d$ vs. $\rm Re$ use the following from here, of which you take the log values to interpolate. Example The kinematic viscosity of air is $\nu = 14.8\; \rm{cSt} = 14.8 \cdot 10^{-6}\; \rm{m^2/s}$. At a speed of $v=20\;\rm{m/s}$ a ball with diameter $d=5\,\rm{cm} = 0.05\;\rm{m}$ has a Reynolds number of $\rm{Re} = \frac{v d}{\nu} \approx 67600 $. When you look at the $C_d$ vs. $\rm Re$ graph you get $C_d = 0.47$. The area of the ball is $A=\pi \frac{d^2}{4} = 0.001964\;\rm{m^2}$ so the drag force is $F_d = \frac{1}{2} \rho A C_d v^2 = \frac{1}{2} (1.2\;\rm{kg/m^3}) (0.001964\;\rm{m^2}) (0.47) (20\;\rm{m/s})^2 = 0.2215\;\rm{N}$
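The worked example can be reproduced with a short script (purely illustrative; the air density, viscosity, and $C_d = 0.47$ values are the ones quoted above):

```python
import math

def reynolds(v, d, nu=14.8e-6):
    # Reynolds number Re = v d / nu decides which Cd to read off the chart
    return v * d / nu

def drag_force(v, d, rho=1.2, Cd=0.47):
    """Drag on a sphere: F_d = 0.5 * rho * A * Cd * v^2,
    with A = pi * d^2 / 4 the cross-sectional area of the ball."""
    A = math.pi * d**2 / 4.0
    return 0.5 * rho * A * Cd * v**2

print(reynolds(20.0, 0.05))    # ~6.8e4
print(drag_force(20.0, 0.05))  # ~0.22 N
```

Note that for a more careful calculation $C_d$ would itself be looked up (or interpolated) from the Reynolds number rather than fixed.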
{ "domain": "physics.stackexchange", "id": 8919, "tags": "newtonian-mechanics, forces, projectile, geometry, drag" }
$\frac{d}{dr}=0$ and $\frac{d}{dz}=0$ (cylindrical coordinates) for a 1D ring
Question: In http://ritchie.chem.ox.ac.uk/Grant%20Teaching/2010/Lecture%204%202010.pdf slide 21 of 26, he says "Radius of ring is fixed and so derivatives in $r$ are 0." Presumably this goes for $\frac{d}{dz}$ too. But for points on the ring, as you step on/off the ring, aren't you transitioning from some non-zero value of the wave function, to 0 (since the wave function is 0 everywhere off the ring) so the slope is infinite? -- (note: the .pdf is "The Quantum Theory of Atoms and Molecules: particles in boxes and applications," by Dr.Grant Ritchie, 26 slides.) Answer: Yes. In general this sort of double-speak is very common. What he really means is something more like: "We want to consider only the angular part of the cylindrical Schrodinger equation, and the proper limit to take here is to send $\nabla^2 \mapsto r^{-2}~\partial_\phi^2.$ We know that is proper because if $\partial_r$ and $\partial_z$ were zero and hence irrelevant, then that would be the only term of $\nabla^2$ that would be left."
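For reference (a standard formula, not taken from the slides), the cylindrical Laplacian is

$$\nabla^2 = \frac{1}{r}\frac{\partial}{\partial r}\left(r\,\frac{\partial}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2}{\partial \phi^2} + \frac{\partial^2}{\partial z^2},$$

so once the $\partial_r$ and $\partial_z$ terms are declared irrelevant for a ring of fixed radius at fixed height, only $r^{-2}\,\partial_\phi^2$ survives — exactly the substitution $\nabla^2 \mapsto r^{-2}\,\partial_\phi^2$ described above.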
{ "domain": "physics.stackexchange", "id": 25757, "tags": "quantum-mechanics, coordinate-systems, differentiation" }
Correct way to assign variables before a try/catch/finally
Question:

using Excel = Microsoft.Office.Interop.Excel;
...
Excel.Application app = null;
Excel.Workbooks books = null;
Excel.Workbook book = null;
object[,] data;

try
{
    app = new Excel.Application();
    books = app.Workbooks;
    book = books.Open("C:\...");
    Excel.Worksheet sheet = book.Worksheets[1];
    Excel.Range range = sheet.UsedRange;
    data = range.Value;
}
catch (Exception ex)
{
    throw ex; //To Do...
}
finally
{
    book.Close();
    app.Quit();
}

//code that then uses the data object from above e.g. ...

In the code above I have to declare the objects app, books and book outside of the try block so that I have access to them in the finally block. However, if I do not assign anything to them (i.e. Excel.Application app; instead of Excel.Application app = null;) then the compiler complains about the line app.Quit(), saying "Use of unassigned local variable...". So is it correct to assign null to app etc. in this case? Also, books doesn't get used in the finally block, so is it better to declare it as I have, or should I declare it in the try block? Answer: Two main issues I can see with your code: app and book can potentially remain null, and therefore your finally code has the potential to throw a NullReferenceException. You need to check for null first before calling methods on these objects. The way you re-throw the exception will destroy its stacktrace. Instead of this:

catch (Exception ex)
{
    throw ex; //To Do...
}

you should do this:

catch (Exception ex)
{
    //To Do...
    throw;
}

As to your question where to declare books - it is best practice to limit the scope of variables and declare them as close to their point of use as possible. As mentioned in the comments, if you want to pass a custom error message via the exception then the best way forward is to create a custom exception which accepts the original exception as a parameter. If you derive from Exception, one of the constructors accepts an inner exception parameter.
So something along these lines:

public class DocumentProcessingException : Exception
{
    public DocumentProcessingException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}
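For the first issue — guarding the finally block against nulls — a minimal sketch could look like this (illustrative only; a fuller interop cleanup might also release the COM objects, which is beyond what the answer prescribes):

```csharp
finally
{
    // Only call into objects that were actually assigned in the try block
    if (book != null)
    {
        book.Close();
    }
    if (app != null)
    {
        app.Quit();
    }
}
```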
{ "domain": "codereview.stackexchange", "id": 10186, "tags": "c#, exception-handling, excel" }
What are all the contributions to libration; is there a self-consistent formalism?
Question: If I understand correctly, in a two-body system with at least one of them more-or-less tidally locked (mean rotational period = mean orbital period), if we draw a line between the centers of mass and look at the point where that line passes through (one of) the locked bodies, that point will roughly periodically migrate around the surface, and that is called, or at least attributed to, libration. The most familiar example is the libration of Earth's Moon, and the primary cause is the Moon's eccentricity and also various axial and orbital tilts (which precess over time as well). In Space SE: What are the "Moon L, B, C" angles shown in this solar eclipse simulation? How to get lunar L, B, C parameters from the Moon's 3x3 rotation matrix from the Python package Skyfield? (currently unanswered) In History of Science and Mathematics SE: How did Cassini measure the "Cassini state" of the Moon? What measurements were made; what did the data look like? (currently unanswered) In addition to the effects of eccentricity and axial and orbital inclinations, I suppose that: for a just recently locked body there could still be residual pendulum-like harmonic motion that hasn't yet been damped out (see Is there any residual oscillation left from the Moon rotation?); there could be motion excited by third body gravitational effects; there could be "inner sloshing" of magma, liquid core, or subsurface (or surface) oceans. Question: What are all the contributions to libration; is there a self-consistent formalism? Related: Moon's rotation and revolution Was lunar libration first observed or first predicted? In either case, who was the responsible party? How old is the idea of the far side of the Moon? Explanation about the resonance, mean motion resonance and libration Answer: TLDR: The primary ~28 day lunar orbital librations are well modeled by taking into account the changing orbital speed due to the Moon's orbital eccentricity.
The forced physical librations (which include Cassini's 2nd and 3rd laws) can be modeled closed form with assumptions about the Moon's interior layers. The free librations can be fitted to Lunar Laser Ranging data. Setup: Even though the Moon's rotation rate is the same as its orbital period, its orbital speed changes due to its orbital eccentricity. It moves faster at perigee than apogee, in accordance with Kepler's 2nd law. This mismatch between orbital speed and rotation rate is the primary contributor to lunar libration and (as expected) cycles with the Moon's orbital period. This libration isn't quite along a single axis due to the 6.8 degree obliquity of the Moon's rotation. The Moon is tidally deformed by the Earth. This tidal deformation axis leads or trails the vector from the Moon to the Earth through apogee and perigee, respectively, leading to torque on the Moon. Earth's gravitational field is irregular. The Earth is well modeled by an oblate spheroid and subject to its own tidal deformations, mainly due to the Moon and the Sun. The orientation of the Earth's lunar tidal axis to the Earth/Moon vector is mostly transferring angular momentum from the Earth's spin into the Moon's orbit. However, it is also imparting torque to the Moon (an irregular shape in an irregular gravitational field). Gravitation from other bodies in the Solar System, even when treated as point masses, will cause tidal torques on the Moon. I did a little alteration to Bate, Mueller, White's gravitation table to show gravitational acceleration from some of these other bodies on the Moon. Note the tidal torques are a function of both gravitational force and gradient, so even though the Sun exerts higher gravitational forces on the Moon than the Earth exerts on the Moon, the Earth exerts more tidal torque since its gravitational gradient is higher. The sum of all these torques affects the rotation of the Moon and thus librations.
Also, we can't separate the long term lunar orbital effects from libration, since eccentricity is a primary cause of libration. Normally in a two body system we would expect circularization of the lunar orbit. But for our Moon, the eccentricity is actually growing due to gravitational interactions from the Sun! Answer: The librations of the Moon are the result of the changing geometry of the Earth/Moon system due to the sum of forces on the Moon combined with its orbital and angular momentum. The Moon is irregular in shape and composition, and changes shape due to tidal forces. It also passes through an irregular gravitational field since the Earth is similarly deformed. This causes various torques on the Moon, which result in changes to its rotation. However, the dominant force of tidal torque prevents the average orbital and rotation periods from ever deviating. Hence, the librations. Pavlov et al. 2018 wrote a paper called Determining parameters of Moon's orbital and rotational motion from LLR observations using GRAIL and IERS-recommended models. Here are the parameters they use: I am surprised they don't explicitly include "solar radiation pressure" explicitly as one of the forces on the Moon, but perhaps it is hidden in one of the other variables. Instead of using a complete physical model for the free physical librations, they use various models to fit to the LLR (Lunar Laser Ranging) measurements. For the forced librations (including Cassini's 2nd and 3rd laws), there are some self-consistent formalisms, like in https://arxiv.org/abs/2201.00803. However, they require some assumptions about the mantle and core of the Moon and they are extremely complex.
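The dominant ~28-day optical libration in longitude can be roughly estimated from the equation of the centre alone: to first order in eccentricity the true anomaly leads or lags the mean anomaly by about $2e\sin M$, and since the rotation tracks the mean motion, that offset is (approximately) the libration angle. A small sketch, with the mean lunar eccentricity assumed by me:

```python
import math

E_MOON = 0.0549  # lunar orbital eccentricity (assumed mean value)

def libration_longitude(mean_anomaly, e=E_MOON):
    """First-order equation of the centre: nu - M ~ 2 e sin(M), in radians.

    The Moon rotates at the mean rate but sweeps its orbit at the true
    rate, so this difference approximates the optical libration in
    longitude (ignoring obliquity and physical librations)."""
    return 2.0 * e * math.sin(mean_anomaly)

# Peak value ~ 2e ~ 0.11 rad ~ 6.3 degrees, cycling each anomalistic month.
print(round(math.degrees(libration_longitude(math.pi / 2)), 1))
```

This captures only the eccentricity term; the full observed libration budget also includes the inclination and physical-libration contributions discussed above.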
{ "domain": "astronomy.stackexchange", "id": 6232, "tags": "astrophysics, planetary-science, lunar-libration" }
Do the following circuits violate the principle of "no fast-forwarding"?
Question: The no fast-forwarding principle states roughly that, given that we can simulate a Hamiltonian for time $t$ using $r$ gates, in order to perform the Hamiltonian simulation for time $2t$, we must in general use $2r$ gates. But there are some noteworthy exceptions to this principle, such as Shor's use of modular exponentiation by repeated squaring, and finding out precisely when many-body systems can be fast-forwarded is an interesting open problem. With this in mind, I initially had a SWAP circuit to rotate colors on the vertices of a square by $90^\circ$, using three SWAP gates with depth three. Naively, following the principle above, I would have guessed that a $180^\circ$ rotation would require six SWAP gates, but, indeed, we can do a $180^\circ$ rotation of the vertices of a square with only two gates, with depth only one! Is it fair to call this (mostly curious) observation an example of a mild fast-forward? Letting $H$ be the circuit for a ninety degree rotation and $H^2$ be the circuit for the 180 degree rotation, we can do $H^2$ quicker than we can even do $H$, so can we say that we can fast-forward the evolution of $H^2$? Answer: All swap circuits can be fast forwarded efficiently. In fact any power of any permutation can be done in at most $n-1$ swap gates with a circuit depth of exactly 2. An n-qubit permutation can be represented as a list of n integers, where the integer at position $k$ says which position the qubit at position $k$ ends up at in the output. Take an $n$ qubit circuit containing swap gates. Compute its permutation $P$ by starting from a sorted list of integers from 0 to $n-1$, and applying each swap gate to the list. Given a permutation $P$, you can compute $P^2$ by following the list entries twice. For entry $k$ of $P^2$, look up $r = P_k$ and then look up $P_r$. In other words, $P^2_k = P_{P_k}$. This is efficient to compute. More generally, $(P \cdot Q)_k = P_{Q_k}$.
You can repeat this squaring process to get $P$, $P^2$, $P^4$, $P^8$, $P^{16}$, and so forth up to $P^{2^m}$ for any reasonably sized $m$ you'd like. If you want to simulate for $t$ steps, set $m = \lceil \lg t \rceil$. You can then compute $P^t$ by multiplying together the powers of 2 with a corresponding bit in $t$'s binary representation. For example, if $t = 100001010_2 = 2^8 + 2^3 + 2^1$ then $P^t = P^{256} \cdot P^{8} \cdot P^{2}$. This strategy for computing $P^t$ is called repeated squaring. It takes time $O(n \lg t)$. Once you have the permutation, you can efficiently decompose it into a set of $n-1$ swaps as follows. First, find the disjoint cycles in the permutation by tracking each element forward through the permutation repeatedly until it runs into itself. Doing all cycles this way takes time $O(n)$ total. Each cycle is decomposed separately. To decompose a cycle into two layers, number the cycle's entries in order around the cycle starting from some arbitrary point. Apply one layer of swaps to reverse their order. Then another layer of swaps to reverse the order again, but excluding the first entry. This implements the cycle. Implementing all the cycles implements the permutation. This also takes time $O(n)$ to do. So in total, by using $O(n \lg t)$ classical computation, you found an $O(n)$ cost quantum circuit to fast forward iterating the swap network.
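The repeated-squaring step can be sketched directly from the composition rule $(P \cdot Q)_k = P_{Q_k}$ (a minimal illustration, not part of the original answer):

```python
def compose(p, q):
    # (p . q)[k] = p[q[k]], matching the composition rule above
    return [p[q[k]] for k in range(len(p))]

def perm_power(p, t):
    """Compute P^t in O(n log t) time by repeated squaring."""
    result = list(range(len(p)))  # identity permutation
    square = list(p)              # holds P, P^2, P^4, ... in turn
    while t > 0:
        if t & 1:  # this bit of t's binary representation is set
            result = compose(square, result)
        square = compose(square, square)
        t >>= 1
    return result

# A 4-cycle: applying it 10**12 times is the same as applying it
# (10**12 mod 4) = 0 times, and repeated squaring finds that quickly
# without ever iterating the permutation a trillion times.
cycle = [1, 2, 3, 0]
print(perm_power(cycle, 2))       # [2, 3, 0, 1]
print(perm_power(cycle, 10**12))  # [0, 1, 2, 3]
```

The resulting permutation would then be decomposed into the two layers of swaps per cycle, as described above.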
{ "domain": "quantumcomputing.stackexchange", "id": 4788, "tags": "hamiltonian-simulation, terminology-and-notation" }
Convergence of harmonic oscillator to the free particle and the issue of asymptotic freedom in QM
Question: For any $\epsilon >0$, consider the following harmonic oscillator with $m= \hbar =1$: \begin{equation} i\frac{\partial}{\partial t}\psi(x,t)= -\frac{1}{2}\Delta \psi+ \frac{\epsilon}{2}x^2 \psi \end{equation} where $x \in \mathbb{R}$ and $t$ is time. Then, I wonder if solutions of the above Schrödinger equation converge to those of the free particle as $\epsilon \to 0^+$. If they converge, may I understand this as an example of asymptotic freedom? Could anyone please provide some insight? Answer: No, the spectrum of the Hamilton operator changes discontinuously from a pure point spectrum for $\epsilon >0$ to a purely continuous spectrum ($\mathbb{R}^+$) for $\epsilon =0$.
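To see the discontinuity concretely (a standard computation, added here for illustration): for every $\epsilon > 0$ the Hamiltonian $H_\epsilon = -\frac{1}{2}\Delta + \frac{\epsilon}{2}x^2$ is a harmonic oscillator with frequency $\omega = \sqrt{\epsilon}$, so its spectrum is the discrete set

$$E_n = \sqrt{\epsilon}\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots$$

The level spacing $\sqrt{\epsilon}$ shrinks to zero as $\epsilon \to 0^+$, but the spectrum remains pure point for every positive $\epsilon$; only at $\epsilon = 0$ exactly does it become the continuum $[0,\infty)$, which is why the limit is singular rather than an instance of asymptotic freedom.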
{ "domain": "physics.stackexchange", "id": 99553, "tags": "quantum-mechanics, harmonic-oscillator, differential-equations" }
Fetching detail for a given ID, which might not exist
Question: I'm actually working on a piece of code which uses the following:

public Detail GetDetail(int id)
{
    if (!_detail.ContainsKey(id))
    {
        GetDetailForNewObjects(id);
    }
    return _detail[id].Data;
}

The last line throws a NullReferenceException. Since it should never happen that this method is called in a way that throws, they propose not to catch the exception and rethrow a more specific and descriptive one. I don't agree. The readability and maintainability of the code fall dramatically when we get a NullReferenceException. How would you write that piece of code? Answer: In this case, I would suggest that this exception should be classified as a 'Boneheaded' exception as defined by Eric Lippert, because there is no reason why the code should throw the exception at all. Exceptions are expensive, as is handling them, so I would make every effort to have code that only throws them in TRULY exceptional circumstances. It seems to me that the reason _detail[id] throws a NullReferenceException is because the GetDetailForNewObjects method had a problem, which could come in many forms, but I'll outline two below. This problem could be one of:

Your DB query (assuming that is what is happening) couldn't find any records.
Solution: return a null object. Don't try to access something you know has a reasonable possibility of being null; instead check again for null and return null if no object exists after the call. Let the calling code figure out what to do with a null (maybe search results are null; this is not an exception).

Your DB had a connection issue / timeout.
Solution: Again, THIS is the exception that should be passed up. We shouldn't handle a DB exception (since there is typically nothing that can be done about it), just to throw a null reference exception. This behavior would make debugging more difficult since it hides the actual DB connection issue.
Bubble the DB exception upwards so the code / developer can respond to that, instead of an ambiguous NullReferenceException, because ultimately we need to know WHY it was null, since clearly we weren't expecting it to be. In both cases, we as developers know how to handle the situation (given the assumptions I laid out). I would write the above code something like this:

public Detail GetDetail(int id)
{
    if (id < 0)
    {
        throw new ArgumentException("Argument should have a positive value.", "id");
    }

    Detail returnValue = null; // only assigned if the lookup succeeds

    if (!_detail.ContainsKey(id))
    {
        GetDetailForNewObjects(id); // assuming this adds the key to the dictionary
    }

    if (_detail.ContainsKey(id)) // cheaper and safer than handling an exception
    {
        returnValue = _detail[id].Data;
    }

    return returnValue;
} // end function GetDetail

Eric Lippert has a great description of when / how to handle different types of exceptions. Further reading on Stack Exchange: How to avoid throwing vexing exceptions? Is catching 'expected' exceptions that bad?
{ "domain": "codereview.stackexchange", "id": 9935, "tags": "c#, error-handling" }
Paradox? Pure Prolog is Turing-complete and yet incapable of expressing list intersection?
Question: Pure Prolog (Prolog limited to Horn clauses only) is Turing-complete. In fact, a single Horn clause is enough for Turing-completeness. However, pure Prolog is incapable of expressing list intersection. (Disequality, dif/2, would allow it to do so, but dif/2 is not Horn, unlike equality.) This seems like a paradox, at first glance. Is there a simple explanation? Answer: Turing-complete means "can compute every function on natural numbers that a Turing machine can compute". It means exactly that and only that. A list is not a natural number, and list intersection is not a function on natural numbers. Note: it is, of course, possible to encode lists as natural numbers, which would then make list intersection a function on natural numbers. And I have no doubt that, given a suitable choice of encoding of lists, Pure Prolog will be perfectly capable of expressing list intersection. To put it another way: just because Pure Prolog is not capable of expressing list intersection using the particular representation of lists that was chosen for General Prolog does not mean that there does not exist a representation of lists more suitable for use with Pure Prolog such that Pure Prolog is capable of expressing intersection of those particular lists.
{ "domain": "cs.stackexchange", "id": 17491, "tags": "programming-languages, turing-completeness, prolog, logic-programming" }
Why do matter waves show refraction? / Why does wavelength change when a particle enters a medium?
Question: I know why electromagnetic waves show refraction, but why matter waves? I found this in section 2-5 in “An Introduction to Quantum Physics” by French and Taylor while reading about the Davisson and Germer experiment. Answer: While it is true that matter waves and electromagnetic waves have different properties (for instance, Maxwell's equations describe the evolution of electric and magnetic fields, which are vector fields, whereas matter waves are scalar, so only EM waves can exhibit polarization effects), they still share the same universal properties of waves. One of them is that waves diffract. This comes from the fact that the most basic solution to the wave equation in both cases is a plane wave: $$\phi(\overrightarrow{r}, t) = \exp \left(i\left(\overrightarrow{k}\cdot \overrightarrow{r} - \omega t\right)\right)$$ These go in a straight line, at constant velocity, as long as the medium is homogeneous. However, plane waves are objects that are infinitely spread both in time and space, which is not physical. In real life, you would need to describe any finite-sized wavepacket/beam by an infinite combination of these plane waves, which will all evolve slightly differently. This is what causes diffraction. Note that this is not dependent on the exact nature of the waves (scalar or vector), nor on the dispersion relation between $\omega$ and $k$ (linear for EM waves, quadratic for matter waves without a potential). As for what causes a change in wavelength when you go into a medium, you have to look at the dispersion relation. For EM waves, it reads: $$\left| \overrightarrow{k} \right| = n(\omega) \frac{\omega}{c},$$ where the optical index $n(\omega)$ depends on the way the medium responds to the EM field. We have a similar dispersion relation for matter waves.
Indeed, the Schrödinger equation can be written as: $$E \psi = \frac{\hbar^2 k^2}{2m} \psi + U(\overrightarrow{r}) \psi$$ Assuming $U(\overrightarrow{r})$ is locally constant and equal to $U_0 < E$, the solutions are plane waves: $$\psi(\overrightarrow{r}, t) = \exp \left(i\left(\overrightarrow{k}\cdot \overrightarrow{r} - \omega t\right)\right), \quad \mathrm{with} \, \left| \overrightarrow{k} \right| = \sqrt{\frac{2m \omega}{\hbar}- \frac{2m U_0}{\hbar^2}}.$$ As before, if we move from a region of space with a certain uniform $U$ to a region of space with a different $U$ (for instance, a potential step $U(x<0) = 0$ to $U(x > 0) = U_0 < E$), the wavelength must change accordingly. Note that in real life, $U(\overrightarrow{r})$ is rarely uniform, so the previous description is not exactly true. For an electron in a crystal for instance, $U(\overrightarrow{r})$ is periodic, which will lead to diffraction in just the same way as a periodic optical index $n(\overrightarrow{r})$ would lead to light diffraction.
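To make the wavelength change concrete, here is a small numerical sketch in units where $\hbar = m = 1$ (the values of $E$ and $U_0$ are made up for illustration). From $E = k^2/2 + U_0$, the wavenumber in a region of potential $U_0$ is $k = \sqrt{2(E - U_0)}$, so stepping into a region of higher potential stretches the de Broglie wavelength:

```python
import math

def wavenumber(E, U0):
    # From E = k^2 / 2 + U0 (with hbar = m = 1): k = sqrt(2 (E - U0))
    assert E > U0, "a propagating solution requires E > U0"
    return math.sqrt(2.0 * (E - U0))

def wavelength(E, U0):
    # de Broglie wavelength lambda = 2 pi / k
    return 2.0 * math.pi / wavenumber(E, U0)

E = 2.0  # particle energy (arbitrary illustrative value)
print(wavelength(E, 0.0))  # free region
print(wavelength(E, 1.0))  # inside the step: longer wavelength
```

The step in $U$ plays the same role here as a step in optical index for light, which is why matching the waves at the boundary produces refraction in both cases.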
{ "domain": "physics.stackexchange", "id": 67179, "tags": "quantum-mechanics, refraction, wavelength" }
What is (negative) wind stress curl?
Question: I'm currently doing research for a paper in school where we need to research at a university-like level. When I read the paper Causes and impacts of the 2014 warm anomaly in the NE Pacific, I found the sentence: The wind stress curl was negative, which has precedence but is still quite unusual. The wind stress curl was given as $-0.5\times10^{-6} \ \text{N}\,\text{m}^{-3}$. I neither know what wind stress curl is, nor what the negative sign means, nor what the unit exactly describes (of course, pressure per meter, but what does that mean?). Can anyone explain what it is? Answer: Skimming the paper, I believe the relevance of the wind stress curl is its relation to "Ekman pumping". I haven't found a simple, concise reference for this, but this page might be a good start, and this page has a couple of formulas about wind stress curl. I'll try to summarize here. When wind blows over water, the top of the water starts moving. It shears against the water below it, so that water starts moving too. The momentum from the wind is transferred down into lower layers of the water. This water also feels the Coriolis force. The direction it ends up moving in depends on the balance of friction/drag and Coriolis force. On average, the water moves to the right of the wind in the northern hemisphere; if the wind is blowing northward, the water moves eastward. Now imagine you have strong wind blowing northward at one location and weaker wind to the right of it. The water at the first location moves to the right, and it does so faster than the water at the second location (because the wind forcing the water is stronger at the first location). The water converges at the second location, pushing the water downward. This is how the curl of the wind stress (the northward wind changing in the east-west direction) is related to the water convergence (the eastward current changing in the east-west direction) and hence to water being pushed down or pulled up.
Positive wind stress curl pulls water up; negative wind stress curl pushes it down. The last relevant part here is that this kind of motion suppresses ocean mixing. The relevant sentence from that paper is The wind stress curl and hence Ekman pumping anomalies were negative, which also is consistent with relatively weak entrainment. "Entrainment" is how much of the deep, cold ocean water mixes with the relatively warm upper ocean water, cooling it. The negative wind stress curl leads to water being pushed down and less deep water mixing with the upper ocean. The upper ocean stayed warmer, so the whole heat blob lasted longer.
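The qualitative picture above has a standard steady-state formula behind it: the Ekman pumping velocity is $w_E = \frac{(\nabla\times\vec\tau)_z}{\rho_0 f}$, where $\tau$ is the wind stress, $\rho_0$ the seawater density, and $f$ the Coriolis parameter. A minimal numerical sketch (the latitude and curl values below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

# Illustrative sketch: Ekman pumping velocity from the wind stress curl,
#   w_E = curl(tau) / (rho0 * f)
# Negative w_E means water is pushed down (downwelling).
rho0 = 1025.0          # seawater density, kg/m^3
omega = 7.2921e-5      # Earth's rotation rate, rad/s
lat = 45.0             # latitude, degrees north (assumed for illustration)
f = 2 * omega * np.sin(np.radians(lat))   # Coriolis parameter, 1/s

# A typical open-ocean wind stress curl magnitude is of order 1e-7 N/m^3.
curl_tau = -1e-7       # negative curl, as in the warm-anomaly situation

w_ekman = curl_tau / (rho0 * f)           # m/s
print(f"Ekman pumping velocity: {w_ekman:.2e} m/s "
      f"({'downwelling' if w_ekman < 0 else 'upwelling'})")
```

With a negative curl the result comes out negative, i.e. downwelling, matching the "water pushed down, less entrainment of cold deep water" story above.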
{ "domain": "earthscience.stackexchange", "id": 910, "tags": "geophysics, ocean, oceanography, water, wind" }
Mathematical model for this graph of a simplified binary star system?
Question: Unnecessary background for question: I had a school assignment asking us to relate a quadratic equation to a real-life example relating to our future dream career, making sure to express the accuracy of the equation (knowing it would be practically impossible to find an exact match considering parabolas go on infinitely both ways on a graph). I want to be a theoretical physicist and so the best I could come up with was modeling the velocity of a star of a simplified binary star system. Here is an animation of the simplified model in question. It assumes inertia is never lost. Here is a rough graph modeling a star from the system. So, is there a good mathematical equation for this graph? The equation should model the spikes out infinitely. Answer: This problem can be treated as an elliptical Kepler orbit. But for a Kepler orbit it is assumed that one mass is much more massive than the other: $m_1\gg m_2$, which means that $m_1$ basically sits still at their center of mass. But both masses will move symmetrically around their center of mass (if one moves closer the other will move closer as well, inversely proportional to their masses), which allows you to write their attractive force as a function of their distance to their center of mass ($r_{COM}$). $$ F_1=\frac{Gm_1m_2}{\left(\frac{m_1+m_2}{m_2}r_{COM}\right)^2} $$ This only scales the force and is equal to an object orbiting a much more massive object (located at the previous center of mass), like a Kepler orbit, but with different masses. However, Kepler orbits do not have an explicit solution for the position as a function of time; it is usually calculated numerically. But you can calculate it explicitly the other way around: time as a function of position, and the exact trajectory an object will take can also be found explicitly. Edit: You can also approximate an orbit with a Fourier series, which has the advantage that it goes on infinitely, but will contain small errors. 
I did some testing with this and got the following results: $$\theta(t)=\bar{\omega}t+\sum_{n=1}^\infty{A(n)\sin\left(n\bar{\omega}t\right)}$$ for $e=0.5$ (orbital eccentricity) $A(n)\approx 0.9757633n^{-1.944954}$. $$ v(t)=\sqrt{\frac{\mu}{a(1-e^2)}(1+e(2\cos{\theta(t)}+e))} $$ When choosing $\frac{\mu}{a}=1$, using the approximation and a limited sum of $n=50$, the graph looks like this:
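The series above can be evaluated directly. A minimal sketch, assuming $\bar{\omega}=1$ (a choice of time unit) and truncating the sum at $n=50$ as in the answer:

```python
import numpy as np

# Fourier-series approximation of the orbit from the answer, for e = 0.5
# and mu/a = 1.  A(n) ~ 0.9757633 * n**-1.944954 is the fitted coefficient
# quoted in the answer; omega_bar = 1 is an assumed time unit.
e = 0.5
mu_over_a = 1.0
omega_bar = 1.0
n = np.arange(1, 51)                     # truncate the series at n = 50
A = 0.9757633 * n ** -1.944954

def theta(t):
    # angular position as a function of time (Fourier approximation)
    return omega_bar * t + np.sum(A * np.sin(n * omega_bar * t))

def speed(t):
    # vis-viva speed expressed through the angle:
    # v = sqrt( mu / (a (1 - e^2)) * (1 + e (2 cos(theta) + e)) )
    th = theta(t)
    return np.sqrt(mu_over_a / (1 - e**2) * (1 + e * (2 * np.cos(th) + e)))

ts = np.linspace(0, 4 * np.pi, 400)      # two full periods
vs = np.array([speed(t) for t in ts])
print(f"v ranges roughly from {vs.min():.3f} to {vs.max():.3f}")
```

The speed peaks sharply near periapsis (here $v_{max}=\sqrt{3}$) and bottoms out near apoapsis ($v_{min}=1/\sqrt{3}$), reproducing the periodic spikes in the graph.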
{ "domain": "physics.stackexchange", "id": 9809, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, orbital-motion, velocity" }
Simple syntax highlighter in Javascript - SQL highlighting
Question: Yesterday and today I've made a very basic syntax highlight. It creates a function in the window object that handles part of the job: (function(window){ var f=window.highlight = function(lang, element){ var lang_defs = f[lang]; if(!lang_defs) { throw new TypeError( 'The language "' + lang + '" was not yet defined' ); } if(!(element instanceof Element)) { throw new TypeError( 'The 2nd parameter must be an Element' ); } element.className += ' highlight ' + lang; for(var i = 0, l = lang_defs.length; i<l; i++) { var html = ''; for(var j = 0, k = element.childNodes.length; j<k; j++) { if(element.childNodes[j].nodeType === 3) { html += element.childNodes[j].nodeValue .replace( lang_defs[i].match, /*shortcut to decide if the lang_defs[i].replace is one of those types *if so, passes it directly *otherwise, makes a string matching based on the object */ {'string':1, 'function':1}[ typeof lang_defs[i].replace ] ? lang_defs[i].replace : '<' + lang_defs[i].replace.tag + ' class="' + lang_defs[i]['class'] + '">' + lang_defs[i].replace.text + '</' + lang_defs[i].replace.tag + '>' ); } else { html += element.childNodes[j].outerHTML; } } element.innerHTML = html; if('function' === typeof lang_defs[i].patch) { var returned = lang_defs[i].patch.call( element ); if('string' === typeof returned) { element.innerHTML = returned; } } } }; //default replace object f.default_replace = {'tag': 'span', 'text': '$1'}; })(Function('return this')());//just be sure that we have the real window Each language is a property in the function, which is then read (sql example): (function(window){ if('function' === typeof window.highlight) { window.highlight.sql=[ { 'class':'string', 'match':/([bn]?"(?:[^"]|[\\"]")*"|[bn]?'(?:[^']|[\\']')*')(?=[\b\s\(\),;\$#\+\-\*\/]|$)/g, 'replace':window.highlight.default_replace }, { 'class':'comment', 'match':/((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, 'replace':window.highlight.default_replace, 'patch':function(){ return 
this.innerHTML.replace( /((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, '$1</span>' ).replace(//matches single-line comments /<span class="comment">((?:#|-- |\/\/)(?:.|<\/span><span class="[^"]+">([^<])<\/span>)*)([\r\n]|$)/g, function(_,part1,part2,part3){ return '<span class="comment">'+ //cleans up all spans ((part1||'')+(part2||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'+ (part3||''); } ).replace(//matches multi-line comments /<span class="comment">(\/\*(?:[^*]|\*[^\/])+(?:\*\/(?:<\/span>)?|$))/g, function(_,part1){ return '<span class="comment">'+ //cleans up all spans ((part1||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'; } ); } }, { /* * numbers aren't that 'regular' and many edge-cases were left behind * with the help of @MLM (http://stackoverflow.com/users/796832/mlm), * we were able to make this work. * he took over the regex and patched it all up, I did the replace string */ 'match':/((?:^|\b|\(|\s|,))(?![a-z_]+)([+\-]?\d+(?:\.\d+)?(?:[eE]-?\d+)?)((?=$|\b|\s|\(|\)|,|;))/g, 'replace':'$1<span class="number">$2</span>$3' }, { 'class':'name', 'match':/(`[^`]+`)/g, 'replace':window.highlight.default_replace }, { 'class':'var', 'match':/(@@?[a-z_][a-z_\d]*)/g, 'replace':window.highlight.default_replace }, { 'class':'keyword', //the keyword replace must have an aditional check (`(?!\()` after the name), due to the function replace() 
'match':/\b(accessible|add|all|alter|analyze|and|as|asc|asensitive|before|between|bigint|binary|blob|both|by|call|cascade|case|change|char|character|check|collate|column|condition|constraint|continue|convert|create|cross|current_date|current_time|current_timestamp|current_user|cursor|database|databases|day_hour|day_microsecond|day_minute|day_second|dec|decimal|declare|default|delayed|delete|desc|describe|deterministic|distinct|distinctrow|div|double|drop|dual|each|else|elseif|enclosed|escaped|exists|exit|explain|false|fetch|float|float4|float8|for|force|foreign|from|fulltext|generated|get|grant|group|having|high_priority|hour_microsecond|hour_minute|hour_second|if|ignore|in|index|infile|inner|inout|insensitive|insert|int|int1|int2|int3|int4|int8|integer|interval|into|io_after_gtids|io_before_gtids|is|iterate|join|key|keys|kill|leading|leave|left|like|limit|linear|lines|load|localtime|localtimestamp|lock|long|longblob|longtext|loop|low_priority|master_bind|master_ssl_verify_server_cert|match|maxvalue|mediumblob|mediumint|mediumtext|middleint|minute_microsecond|minute_second|mod|modifies|natural|nonblocking|not|no_write_to_binlog|null|numeric|on|optimize|optimizer_costs|option|optionally|or|order|out|outer|outfile|parse_gcol_expr|partition|precision|primary|procedure|purge|range|read|reads|read_write|real|references|regexp|release|rename|repeat|replace(?!\()|require|resignal|restrict|return|revoke|right|rlike|schema|schemas|second_microsecond|select|sensitive|separator|set|show|signal|smallint|spatial|specific|sql|sqlexception|sqlstate|sqlwarning|sql_big_result|sql_calc_found_rows|sql_small_result|ssl|starting|stored|straight_join|table|terminated|then|tinyblob|tinyint|tinytext|to|trailing|trigger|true|undo|union|unique|unlock|unsigned|update|usage|use|using|utc_date|utc_time|utc_timestamp|values|varbinary|varchar|varcharacter|varying|virtual|when|where|while|with|write|xor|year_month|zerofill)\b/gi, 'replace':window.highlight.default_replace }, { 'class':'func', 
'match':/\b([a-z_][a-z_\d]*)\b(?=\()/gi, 'replace':window.highlight.default_replace }, { 'class':'name', 'match':/\b([a-z\_][a-z_\d]*)\b/gi, 'replace':window.highlight.default_replace } ]; } })(Function('return this')()); There are 2 themes: the default one and then a console-like one. Example of the execution: // highlight function, separated file (function(window){ var f=window.highlight = function(lang, element){ var lang_defs = f[lang]; if(!lang_defs) { throw new TypeError( 'The language "' + lang + '" was not yet defined' ); } if(!(element instanceof Element)) { throw new TypeError( 'The 2nd parameter must be an Element' ); } element.className += ' highlight ' + lang; for(var i = 0, l = lang_defs.length; i<l; i++) { var html = ''; for(var j = 0, k = element.childNodes.length; j<k; j++) { if(element.childNodes[j].nodeType === 3) { html += element.childNodes[j].nodeValue .replace( lang_defs[i].match, /*shortcut to decide if the lang_defs[i].replace is one of those types *if so, passes it directly *otherwise, makes a string matching based on the object */ {'string':1, 'function':1}[ typeof lang_defs[i].replace ] ? 
lang_defs[i].replace : '<' + lang_defs[i].replace.tag + ' class="' + lang_defs[i]['class'] + '">' + lang_defs[i].replace.text + '</' + lang_defs[i].replace.tag + '>' ); } else { html += element.childNodes[j].outerHTML; } } element.innerHTML = html; if('function' === typeof lang_defs[i].patch) { var returned = lang_defs[i].patch.call( element ); if('string' === typeof returned) { element.innerHTML = returned; } } } }; //default replace object f.default_replace = {'tag': 'span', 'text': '$1'}; })(Function('return this')());//just be sure that we have the real window //========================================================== // sql syntax highlight, anothed different file (function(window){ if('function' === typeof window.highlight) { window.highlight.sql=[ { 'class':'string', 'match':/([bn]?"(?:[^"]|[\\"]")*"|[bn]?'(?:[^']|[\\']')*')(?=[\b\s\(\),;\$#\+\-\*\/]|$)/g, 'replace':window.highlight.default_replace }, { 'class':'comment', 'match':/((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, 'replace':window.highlight.default_replace, 'patch':function(){ return this.innerHTML.replace( /((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, '$1</span>' ).replace(//matches single-line comments /<span class="comment">((?:#|-- |\/\/)(?:.|<\/span><span class="[^"]+">([^<])<\/span>)*)([\r\n]|$)/g, function(_,part1,part2,part3){ return '<span class="comment">'+ //cleans up all spans ((part1||'')+(part2||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'+ (part3||''); } ).replace(//matches multi-line comments /<span class="comment">(\/\*(?:[^*]|\*[^\/])+(?:\*\/(?:<\/span>)?|$))/g, function(_,part1){ return '<span class="comment">'+ //cleans up all spans ((part1||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'; } ); } }, { /* * numbers aren't that 'regular' and many edge-cases were left behind * with the help of @MLM (http://stackoverflow.com/users/796832/mlm), * we were able to make this work. 
* he took over the regex and patched it all up, I did the replace string */ 'match':/((?:^|\b|\(|\s|,))(?![a-z_]+)([+\-]?\d+(?:\.\d+)?(?:[eE]-?\d+)?)((?=$|\b|\s|\(|\)|,|;))/g, 'replace':'$1<span class="number">$2</span>$3' }, { 'class':'name', 'match':/(`[^`]+`)/g, 'replace':window.highlight.default_replace }, { 'class':'var', 'match':/(@@?[a-z_][a-z_\d]*)/g, 'replace':window.highlight.default_replace }, { 'class':'keyword', //the keyword replace must have an aditional check (`(?!\()` after the name), due to the function replace() 'match':/\b(accessible|add|all|alter|analyze|and|as|asc|asensitive|before|between|bigint|binary|blob|both|by|call|cascade|case|change|char|character|check|collate|column|condition|constraint|continue|convert|create|cross|current_date|current_time|current_timestamp|current_user|cursor|database|databases|day_hour|day_microsecond|day_minute|day_second|dec|decimal|declare|default|delayed|delete|desc|describe|deterministic|distinct|distinctrow|div|double|drop|dual|each|else|elseif|enclosed|escaped|exists|exit|explain|false|fetch|float|float4|float8|for|force|foreign|from|fulltext|generated|get|grant|group|having|high_priority|hour_microsecond|hour_minute|hour_second|if|ignore|in|index|infile|inner|inout|insensitive|insert|int|int1|int2|int3|int4|int8|integer|interval|into|io_after_gtids|io_before_gtids|is|iterate|join|key|keys|kill|leading|leave|left|like|limit|linear|lines|load|localtime|localtimestamp|lock|long|longblob|longtext|loop|low_priority|master_bind|master_ssl_verify_server_cert|match|maxvalue|mediumblob|mediumint|mediumtext|middleint|minute_microsecond|minute_second|mod|modifies|natural|nonblocking|not|no_write_to_binlog|null|numeric|on|optimize|optimizer_costs|option|optionally|or|order|out|outer|outfile|parse_gcol_expr|partition|precision|primary|procedure|purge|range|read|reads|read_write|real|references|regexp|release|rename|repeat|replace(?!\()|require|resignal|restrict|return|revoke|right|rlike|schema|schemas|second_microsecon
d|select|sensitive|separator|set|show|signal|smallint|spatial|specific|sql|sqlexception|sqlstate|sqlwarning|sql_big_result|sql_calc_found_rows|sql_small_result|ssl|starting|stored|straight_join|table|terminated|then|tinyblob|tinyint|tinytext|to|trailing|trigger|true|undo|union|unique|unlock|unsigned|update|usage|use|using|utc_date|utc_time|utc_timestamp|values|varbinary|varchar|varcharacter|varying|virtual|when|where|while|with|write|xor|year_month|zerofill)\b/gi, 'replace':window.highlight.default_replace }, { 'class':'func', 'match':/\b([a-z_][a-z_\d]*)\b(?=\()/gi, 'replace':window.highlight.default_replace }, { 'class':'name', 'match':/\b([a-z\_][a-z_\d]*)\b/gi, 'replace':window.highlight.default_replace } ]; } })(Function('return this')()); //========================================================= // execution example: window.onload = function(){ highlight('sql', document.getElementsByTagName('pre')[0]); } /*console theme, main file*/ .highlight, .highlight *{ background:black; color:white; font-family:'Consolas',monospace; font-size:16px; word-wrap: break-word; white-space: pre; } /*sql style, different file*/ .highlight.sql .keyword{color:teal;} .highlight.sql .string{color:red;} .highlight.sql .func{color:purple;} .highlight.sql .number{color:#0F0;} .highlight.sql .name{color:olive;} .highlight.sql .var{color:green;} .highlight.sql .comment{color:gray;} <!-- example code, found somewhere on google, with some edge cases --> <pre> CREATE TABLE `shop` ( article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL, dealer CHAR(20) DEFAULT '' NOT NULL, price DOUBLE(16,2) DEFAULT '0.00' NOT NULL, PRIMARY KEY(article, dealer)); INSERT INTO shop VALUES (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45), (3,'C',1.69),(3,'D',-1.25),(4,'D',19.95); #This is an example of sql a = 'b', 'c' SELECT MAX(article) AS article FROM shop; SELECT s1.article, s1.dealer, s1.price FROM shop s1 LEFT JOIN shop s2 ON s1.price &lt; s2.price WHERE s2.article IS NULL; SELECT article, 
MAX(price) AS price FROM shop GROUP BY article; SELECT article, dealer, price FROM shop s1 WHERE price=(SELECT MAX(s2.price) FROM shop s2 WHERE s1.article = s2.article); SELECT @min_price:=MIN(price),@max_price:=MAX(price) FROM shop; SELECT * FROM shop WHERE price=@min_price OR price=@max_price; /* a string =' ', */ select '1'#comment /*''*/ or 2; </pre> I'm actually happy with the results, but the comment highlighting is concerning me. Is there anything I can do to improve the syntax highlighting? What can I do to improve the performance? Am I handling comments correctly or should I try a different way? Answer: After closely watching the code, I realized that I've made a few mistakes: Mistake 1: The languages are being added directly as properties of the function. That's just begging for trouble! I've added an object where all the languages will be added. Mistake 2: To generate the new HTML to search for the text nodes (the un-highlighted text), I was refreshing the element itself. That means that all the styles related to that element were being refreshed over and over and over again. Now, with a document fragment, the number of reflows was reduced to 2! Mistake 3 (or 2.5?): I was setting the class at the beginning, before any processing, which added to the number of reflows. This is a total waste of time for the CPU. The class assignment was moved to the end, just before the new HTML is applied. Mistake 4: The lack of strictness disturbed some people. And it is a good point! Now, 'use strict'; is present in the code. Now, with all the mistakes sorted out, I've also made some changes: Change 1: The language can now be set directly in the element, or as an optional parameter in the function. Change 2: If you forget to set a language, both on the element and in the parameter, it throws a friendly exception. Also, the exception about the language being missing was adjusted. 
Change 3: You can now pass a set of elements (NodeList or HTMLCollection) and the elements will be highlighted. The exceptions that are thrown will be handled differently, giving the other elements a chance. The individual results for each element are returned in the form of an array. Change 4: @Mast pointed out something very simple and small. The initialization had the following line: var f=window.highlight = function(element, lang){//... At first, it is completely unclear what f is doing. I've changed its name to fn, and added a description of what that variable is doing there. After all the changes, this is the final result: (function(window){ 'use strict'; //fn keeps an internal reference to avoid problems with rewriting the window.highlight var fn = window.highlight = function(element, lang){ 'use strict'; if(element instanceof NodeList || element instanceof HTMLCollection) { for(var i = 0, l = element.length, results = []; i<l; i++) { try { results[i] = fn( element[i], lang ); } catch(e) { //logs the message, to give a chance to all the other elements ( console.error || console.log ).call( console, e.message ); results[i] = false; } } return results; } if(!(element instanceof Element)) { throw new TypeError( 'The 1st parameter must be an Element or NodeList' ); } lang = lang || element.getAttribute('data-lang'); if(!lang) { throw new TypeError( 'Missing language definition. 
Set the 2nd parameter or the attribute data-lang' ); } var lang_defs = fn.langs[lang]; if(!lang_defs) { throw new TypeError( 'The language "' + lang + '" was not yet defined' ); } //create a document fragment, to avoid reflow and increase performance var fragment = document.createDocumentFragment(), div = document.createElement('div'); div.innerHTML = element.innerHTML; fragment.appendChild(div); for(var i = 0, l = lang_defs.length; i<l; i++) { var html = ''; for(var j = 0, k = div.childNodes.length; j<k; j++) { if(div.childNodes[j].nodeType === 3) { html += div.childNodes[j].nodeValue .replace( lang_defs[i].match, /*shortcut to decide if the lang_defs[i].replace is one of those types *if so, passes it directly *otherwise, makes a string matching based on the object */ {'string':1, 'function':1}[ typeof lang_defs[i].replace ] ? lang_defs[i].replace : '<' + lang_defs[i].replace.tag + ' class="' + lang_defs[i]['class'] + '">' + lang_defs[i].replace.text + '</' + lang_defs[i].replace.tag + '>' ); } else { html += div.childNodes[j].outerHTML; } } //refreshes the HTML, before doing anything else div.innerHTML = html; if('function' === typeof lang_defs[i].patch) { var returned = lang_defs[i].patch.call( div ); if('string' === typeof returned) { div.innerHTML = returned; } } } //only change at the end, to avoid unnecessary reflows element.className += ' highlight ' + lang; element.innerHTML = div.innerHTML; return true; }; //default replace object fn.default_replace = {'tag': 'span', 'text': '$1'}; //all the languages will be added here fn.langs = {}; })(Function('return this')());//just be sure that we have the real window Example of an execution (same HTML and CSS): //main file, containing the main function (function(window){ 'use strict'; //fn keeps an internal reference to avoid problems with rewritting the window.highlight var fn = window.highlight = function(element, lang){ 'use strict'; if(element instanceof NodeList || element instanceof HTMLCollection) { for(var 
i = 0, l = element.length, results = []; i<l; i++) { try { results[i] = fn( element[i], lang ); } catch(e) { //logs the message, to give a chance to all the other elements ( console.error || console.log ).call( console, e.message ); results[i] = false; } } return results; } if(!(element instanceof Element)) { throw new TypeError( 'The 1st parameter must be an Element or NodeList' ); } lang = lang || element.getAttribute('data-lang'); if(!lang) { throw new TypeError( 'Missing language definition. Set the 2nd parameter or the attribute data-lang' ); } var lang_defs = fn.langs[lang]; if(!lang_defs) { throw new TypeError( 'The language "' + lang + '" was not yet defined' ); } //create a document fragment, to avoid reflow and increase performance var fragment = document.createDocumentFragment(), div = document.createElement('div'); div.innerHTML = element.innerHTML; fragment.appendChild(div); for(var i = 0, l = lang_defs.length; i<l; i++) { var html = ''; for(var j = 0, k = div.childNodes.length; j<k; j++) { if(div.childNodes[j].nodeType === 3) { html += div.childNodes[j].nodeValue .replace( lang_defs[i].match, /*shortcut to decide if the lang_defs[i].replace is one of those types *if so, passes it directly *otherwise, makes a string matching based on the object */ {'string':1, 'function':1}[ typeof lang_defs[i].replace ] ? 
lang_defs[i].replace : '<' + lang_defs[i].replace.tag + ' class="' + lang_defs[i]['class'] + '">' + lang_defs[i].replace.text + '</' + lang_defs[i].replace.tag + '>' ); } else { html += div.childNodes[j].outerHTML; } } //refreshes the HTML, before doing anything else div.innerHTML = html; if('function' === typeof lang_defs[i].patch) { var returned = lang_defs[i].patch.call( div ); if('string' === typeof returned) { div.innerHTML = returned; } } } //only change at the end, to avoid unnecessary reflows element.className += ' highlight ' + lang; element.innerHTML = div.innerHTML; return true; }; //default replace object fn.default_replace = {'tag': 'span', 'text': '$1'}; //all the languages will be added here fn.langs = {}; })(Function('return this')());//just be sure that we have the real window //========================================================== // sql syntax highlight, anothed different file (function(window){ 'use strict'; if('function' === typeof window.highlight) { window.highlight.langs.sql=[ { 'class':'string', 'match':/([bn]?"(?:[^"]|[\\"]")*"|[bn]?'(?:[^']|[\\']')*')(?=[\b\s\(\),;\$#\+\-\*\/]|$)/g, 'replace':window.highlight.default_replace }, { 'class':'comment', 'match':/((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, 'replace':window.highlight.default_replace, 'patch':function(){ 'use strict'; return this.innerHTML.replace( /((?:\/\/|\-\-\s|#)[^\r\n]*|\/\*(?:[^*]|\*[^\/])*(?:\*\/|$))/g, '$1</span>' ).replace(//matches single-line comments /<span class="comment">((?:#|-- |\/\/)(?:.|<\/span><span class="[^"]+">([^<])<\/span>)*)([\r\n]|$)/g, function(_,part1,part2,part3){ return '<span class="comment">'+ //cleans up all spans ((part1||'')+(part2||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'+ (part3||''); } ).replace(//matches multi-line comments /<span class="comment">(\/\*(?:[^*]|\*[^\/])+(?:\*\/(?:<\/span>)?|$))/g, function(_,part1){ return '<span class="comment">'+ //cleans up all spans 
((part1||'')).replace(/<\/?span(?: class="[^"]+")?>/g,'')+ '</span>'; } ); } }, { /* * numbers aren't that 'regular' and many edge-cases were left behind * with the help of @MLM (http://stackoverflow.com/users/796832/mlm), * we were able to make this work. * he took over the regex and patched it all up, I did the replace string */ 'match':/((?:^|\b|\(|\s|,))(?![a-z_]+)([+\-]?\d+(?:\.\d+)?(?:[eE]-?\d+)?)((?=$|\b|\s|\(|\)|,|;))/g, 'replace':'$1<span class="number">$2</span>$3' }, { 'class':'name', 'match':/(`[^`]+`)/g, 'replace':window.highlight.default_replace }, { 'class':'var', 'match':/(@@?[a-z_][a-z_\d]*)/g, 'replace':window.highlight.default_replace }, { 'class':'keyword', //the keyword replace must have an aditional check (`(?!\()` after the name), due to the function replace() 'match':/\b(accessible|add|all|alter|analyze|and|as|asc|asensitive|before|between|bigint|binary|blob|both|by|call|cascade|case|change|char|character|check|collate|column|condition|constraint|continue|convert|create|cross|current_date|current_time|current_timestamp|current_user|cursor|database|databases|day_hour|day_microsecond|day_minute|day_second|dec|decimal|declare|default|delayed|delete|desc|describe|deterministic|distinct|distinctrow|div|double|drop|dual|each|else|elseif|enclosed|escaped|exists|exit|explain|false|fetch|float|float4|float8|for|force|foreign|from|fulltext|generated|get|grant|group|having|high_priority|hour_microsecond|hour_minute|hour_second|if|ignore|in|index|infile|inner|inout|insensitive|insert|int|int1|int2|int3|int4|int8|integer|interval|into|io_after_gtids|io_before_gtids|is|iterate|join|key|keys|kill|leading|leave|left|like|limit|linear|lines|load|localtime|localtimestamp|lock|long|longblob|longtext|loop|low_priority|master_bind|master_ssl_verify_server_cert|match|maxvalue|mediumblob|mediumint|mediumtext|middleint|minute_microsecond|minute_second|mod|modifies|natural|nonblocking|not|no_write_to_binlog|null|numeric|on|optimize|optimizer_costs|option|optionally|o
r|order|out|outer|outfile|parse_gcol_expr|partition|precision|primary|procedure|purge|range|read|reads|read_write|real|references|regexp|release|rename|repeat|replace(?!\()|require|resignal|restrict|return|revoke|right|rlike|schema|schemas|second_microsecond|select|sensitive|separator|set|show|signal|smallint|spatial|specific|sql|sqlexception|sqlstate|sqlwarning|sql_big_result|sql_calc_found_rows|sql_small_result|ssl|starting|stored|straight_join|table|terminated|then|tinyblob|tinyint|tinytext|to|trailing|trigger|true|undo|union|unique|unlock|unsigned|update|usage|use|using|utc_date|utc_time|utc_timestamp|values|varbinary|varchar|varcharacter|varying|virtual|when|where|while|with|write|xor|year_month|zerofill)\b/gi, 'replace':window.highlight.default_replace }, { 'class':'func', 'match':/\b([a-z_][a-z_\d]*)\b(?=\()/gi, 'replace':window.highlight.default_replace }, { 'class':'name', 'match':/\b([a-z\_][a-z_\d]*)\b/gi, 'replace':window.highlight.default_replace } ]; } })(Function('return this')()); //========================================================= // execution example: window.onload = function(){ highlight(document.getElementsByTagName('pre')); } /*styling for this example only*/ p { font-family:sans-serif; margin-bottom: 0px; } pre {margin-top:5px;} /*console theme, main file*/ .highlight, .highlight *{ background:black; color:white; font-family:'Consolas',monospace; font-size:16px; word-wrap: break-word; white-space: pre; } /*sql style, different file*/ .highlight.sql .keyword{color:teal;} .highlight.sql .string{color:red;} .highlight.sql .func{color:purple;} .highlight.sql .number{color:#0F0;} .highlight.sql .name{color:olive;} .highlight.sql .var{color:green;} .highlight.sql .comment{color:gray;} <p>Simple example:</p> <pre data-lang="sql"> insert into `table` select * from `_copy` where id &lt; 55; </pre> <p>Simple example, without language defined:</p> <pre> select "This will be 'unhighlighted'"; </pre> <p>Complex example:</p> <pre data-lang="sql"> 
CREATE TABLE `shop` ( article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL, dealer CHAR(20) DEFAULT '' NOT NULL, price DOUBLE(16,2) DEFAULT '0.00' NOT NULL, PRIMARY KEY(article, dealer)); INSERT INTO shop VALUES (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45), (3,'C',1.69),(3,'D',-1.25),(4,'D',19.95); #This is an example of sql a = 'b', 'c' SELECT MAX(article) AS article FROM shop; SELECT s1.article, s1.dealer, s1.price FROM shop s1 LEFT JOIN shop s2 ON s1.price &lt; s2.price WHERE s2.article IS NULL; SELECT article, MAX(price) AS price FROM shop GROUP BY article; SELECT article, dealer, price FROM shop s1 WHERE price=(SELECT MAX(s2.price) FROM shop s2 WHERE s1.article = s2.article); SELECT @min_price:=MIN(price),@max_price:=MAX(price) FROM shop; SELECT * FROM shop WHERE price=@min_price OR price=@max_price; /* a string =' ', */ select '1'#comment /*''*/ or 2; select "Also has support for "" &lt;-- that style of quotes", '''Like this as well!'''; </pre>
{ "domain": "codereview.stackexchange", "id": 13664, "tags": "javascript, strings" }
Is there an optimal way to quantize log-likelihood ratios of an AWGN channel?
Question: My understanding is that given an AWGN channel and BPSK modulation, an LDPC decoder that uses message passing takes as input log-likelihood ratios $L$ of the following form by Bayes' rule: $$ L=\ln\frac{P(a=1|r)}{P(a=-1|r)}=\frac{2}{\sigma^2}r $$ where $a\in\{\pm1\}$, $n\sim \mathcal{N}(0,\sigma^2)$, and $r=a+n$. I have been referencing page 22 of this report, which uses a uniform quantizer and mentions that a larger dynamic range results in larger step sizes assuming a fixed number of bits. Are there other quantization methods that might be better suited to the Gaussian nature of this channel? Said differently, are there quantization methods that better account for $\sigma^2$? Answer: It depends on your actual decoder architecture, but if memory serves me right, uniform quantization is optimal for Min-Sum or Sum-Product decoders. (My memory was not fully correct, though. For message-passing decoders using LLRs with bit widths larger than 4 bits, uniform quantization is indeed approximately optimal. For smaller bit widths, you can find better quantizers¹.) The question is where you put your quantization boundaries: If you have a fixed number of bits for your LLRs, do you quantize the region from, say, $-2\sigma$ to $+2\sigma$, because everything larger is sufficiently "safely" the sign-indicated value, anyway? Or do you have a large code word length, and you want to make sure that in a sea of large LLRs, where some represent bit errors, the "righter" ones are still understood as such? That's a question of looking at your design SNR and your target FER; no general answer or formula is known, as far as I'm aware. ¹ Geiselhart, Marvin, et al. “Learning Quantization in LDPC Decoders.” 2022 IEEE Globecom Workshops (GC Wkshps), Dec. 2022. Crossref, https://doi.org/10.1109/gcwkshps56602.2022.10008635.
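To make the setup concrete, here is a small sketch of the pieces being discussed: BPSK over AWGN, channel LLRs $L=2r/\sigma^2$, and a uniform quantizer with a clipping range. The clipping value below (mean $|L|$ plus a few LLR standard deviations) is an illustrative design choice of mine, not a recommendation from the report:

```python
import numpy as np

# Sketch: BPSK over AWGN, channel LLRs L = 2r/sigma^2, and a uniform
# mid-tread quantizer over [-clip, clip].  The clip value is an assumed
# design choice for illustration only.
rng = np.random.default_rng(0)
sigma = 0.8
a = rng.choice([-1.0, 1.0], size=10_000)        # BPSK symbols
r = a + rng.normal(0.0, sigma, size=a.shape)    # AWGN channel
llr = 2.0 * r / sigma**2                        # channel LLRs

def quantize_uniform(x, bits, clip):
    """Uniformly quantize x to at most 2**bits levels over [-clip, clip]."""
    step = 2.0 * clip / 2**bits
    q = np.clip(np.round(x / step), -(2**(bits - 1)), 2**(bits - 1) - 1)
    return q * step

clip = 2.0 / sigma**2 + 3.0 * (2.0 / sigma)     # mean |L| + 3 std of L
llr_q = quantize_uniform(llr, bits=4, clip=clip)
hard_ok = np.mean(np.where(llr_q >= 0, 1.0, -1.0) == a)
print(f"hard-decision agreement after 4-bit quantization: {hard_ok:.3f}")
```

Sweeping `clip` and `bits` against the decoder's FER at the design SNR is exactly the boundary-placement trade-off described in the answer.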
{ "domain": "dsp.stackexchange", "id": 9708, "tags": "digital-communications, channelcoding, quantization, ldpc" }
[how to] Crank mechanism under URDF notation
Question: Hi guys, I have a slider-crank-like mechanism to control. How can I represent/write under URDF notation a bearing which is connected to a link and slides inside a prismatic joint? Is it possible to represent that "crank joint"? Thank you! Originally posted by Bemfica on ROS Answers with karma: 482 on 2011-09-02 Post score: 1 Answer: Yes, it's possible; you can lay out the links and joints as shown in the animation you've linked above. Here's one that has both ends weighted down by gravity. <robot name="crank"> <link name="link1"> <!-- left most red link in the animation --> <inertial> <mass value="100.0" /> <!-- center of mass (com) is defined w.r.t. link local coordinate system --> <origin xyz="0 0 0" /> <inertia ixx="10.0" ixy="0.0" ixz="0.0" iyy="10.0" iyz="0.0" izz="10.0" /> </inertial> <visual> <origin xyz="0 0 0.25" rpy="0 0 0" /> <geometry> <box size="0.1 0.1 0.5"/> </geometry> </visual> <collision> <origin xyz="0 0 0.025" rpy="0 0 0" /> <geometry> <box size="1.0 1.0 0.05"/> </geometry> </collision> </link> <joint name="link1_link2_joint" type="continuous"> <parent link="link1" /> <child link="link2" /> <origin xyz="0 0 0.5" rpy="0 0 0" /> <axis xyz="0 1 0"/> </joint> <link name="link2"> <!-- blue link in the animation --> <inertial> <mass value="1.0" /> <!-- center of mass (com) is defined w.r.t. 
link local coordinate system --> <origin xyz="0.1 0 0" /> <inertia ixx="0.01" ixy="0.0" ixz="0.0" iyy="0.01" iyz="0.0" izz="0.01" /> </inertial> <visual> <origin xyz="0.1 0 0" rpy="0 0 0" /> <geometry> <box size="0.2 0.05 0.05"/> </geometry> </visual> <collision> <origin xyz="0.1 0 0" rpy="0 0 0" /> <geometry> <box size="0.2 0.05 0.05"/> </geometry> </collision> </link> <joint name="link2_link3_joint" type="continuous"> <parent link="link2" /> <child link="link3" /> <origin xyz="0.2 0 0" rpy="0 0 0" /> <axis xyz="0 1 0"/> </joint> <link name="link3"> <!-- red sliding link in the animation --> <inertial> <mass value="1.0" /> <origin xyz="0.25 0 0" /> <inertia ixx="0.01" ixy="0.0" ixz="0.0" iyy="0.01" iyz="0.0" izz="0.01" /> </inertial> <visual> <origin xyz="0.25 0 0" rpy="0 0 0" /> <geometry> <box size="0.5 0.05 0.05"/> </geometry> </visual> <collision> <origin xyz="0.25 0 0" rpy="0 0 0" /> <geometry> <box size="0.5 0.05 0.05"/> </geometry> </collision> </link> <joint name="link3_link4_joint" type="continuous"> <parent link="link3" /> <child link="link4" /> <origin xyz="0.5 0 0" rpy="0 0 0" /> <axis xyz="0 1 0"/> </joint> <link name="link4"> <!-- right most gray link in the animation --> <inertial> <mass value="1.0" /> <origin xyz="0 0 0" /> <inertia ixx="0.01" ixy="0.0" ixz="0.0" iyy="0.01" iyz="0.0" izz="0.01" /> </inertial> <visual> <origin xyz="0 0 0" rpy="0 0 0" /> <geometry> <sphere radius="0.1"/> </geometry> </visual> <collision> <origin xyz="0 0 0" rpy="0 0 0" /> <geometry> <sphere radius="0.1"/> </geometry> </collision> </link> <joint name="link4_link5_joint" type="prismatic"> <parent link="link4" /> <child link="link5" /> <origin xyz="0 0 0" rpy="0 0 0" /> <axis xyz="1 0 0"/> <limit upper="0.4" lower="0" effort="1000.00" velocity="1000.00"/> </joint> <link name="link5"> <!-- right most gray link in the animation --> <inertial> <mass value="100.0" /> <origin xyz="0 0 0" /> <inertia ixx="10.0" ixy="0.0" ixz="0.0" iyy="10.0" iyz="0.0" izz="10.0" /> 
</inertial> <visual> <origin xyz="-0.2 0 0" rpy="0 0 0" /> <geometry> <box size="0.4 0.1 0.1"/> </geometry> </visual> <collision> <origin xyz="0 0 -0.475" rpy="0 0 0" /> <geometry> <box size="1.0 1.0 0.05"/> </geometry> </collision> </link> <gazebo reference="link1"> <turnGravityOff>false</turnGravityOff> <material>Gazebo/Red</material> </gazebo> <gazebo reference="link2"> <turnGravityOff>true</turnGravityOff> <material>Gazebo/Blue</material> </gazebo> <gazebo reference="link3"> <turnGravityOff>true</turnGravityOff> <material>Gazebo/Red</material> </gazebo> <gazebo reference="link4"> <turnGravityOff>true</turnGravityOff> <material>Gazebo/Blue</material> </gazebo> <gazebo reference="link5"> <turnGravityOff>false</turnGravityOff> <material>Gazebo/Gray</material> </gazebo> </robot> Apply torque to see it rotate rosservice call /gazebo/apply_joint_effort '{joint_name: link1_link2_joint, effort: 10, start_time: 0, duration: 10000000}' Originally posted by hsu with karma: 5780 on 2011-09-06 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by hsu on 2011-09-06: modified, both ends weighted down by mass :) Comment by hsu on 2011-09-06: @lu you're right, the last joint from link4 to the world creates a loop. removed. Comment by David Lu on 2011-09-06: I don't think this will work, since it breaks the fundamental tree structure of URDF. Comment by David Lu on 2011-09-06: s/slider/prismatic/
{ "domain": "robotics.stackexchange", "id": 6590, "tags": "ros, joint, urdf" }
Sucrose skeletal formula - missing Carbon and Hydrogen
Question: I was reading the Wikipedia article about Sucrose, when I noticed that the skeletal formula was missing some $\ce{C}$ and $\ce{H}$ atoms. The chemical formula being $\ce{C12H22O11}$, I counted the $11$ Oxygen parts. But there seem to be only $3$ $\ce{C}$ atoms and $14\, \ce{H}$ atoms, if I counted correctly. My chemistry lessons date back about $15$ years, so I'd like to understand where the missing carbon and hydrogen is hidden within the skeletal formula. Answer: Since your chemistry lessons date back about 15 years, I'd like to refresh your organic chemistry knowledge. You may need to be aware that carbon can make a maximum of 4 single bonds. Therefore, when you have a line structure of an organic compound like sucrose (see below), keep in mind that each corner represents a carbon atom and the appropriate hydrogen atoms, which are not shown. For example, if a corner contains only three bonds (as shown in most of the sucrose molecule), then the fourth bond is a $\ce{C-H}$ bond: I have put those $\ce{C-H}$ bonds at each corner that represents them. However, the anomeric corner of the fructose molecule already has 4 bonds. Therefore, that corner represents only $\ce{C}$, as indicated in turquoise color. Counted this way, the molecular formula of sucrose comes out as $\ce{C12H22O11}$.
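As a sanity check on the bookkeeping, a small Python sketch (a hypothetical helper, not tied to any chemistry library) can count the atoms in a simple, parenthesis-free molecular formula:

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a parenthesis-free molecular formula like 'C12H22O11'."""
    counts = Counter()
    # element symbol = one uppercase letter, optionally one lowercase letter,
    # followed by an optional multiplicity (default 1)
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

print(atom_counts("C12H22O11"))  # Counter({'H': 22, 'C': 12, 'O': 11})
```

Counting corners (implicit carbons) plus the drawn-in C−H bonds on the skeletal formula should reproduce exactly these numbers.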
{ "domain": "chemistry.stackexchange", "id": 14419, "tags": "molecular-structure, carbohydrates, structural-formula, chemical-formula" }
Refactor selective item by highest create date or update date
Question: I have to check for the highest date from CostRepository, whether it is the create date or the edit date, and then get the value from that field. The edit date can be null. I don't know how to fix the code smell. var allCostsInThisPo = CostRepository .GetAll() .WhereInPo(po.po_surr_key); if (allCostsInThisPo.Any()) { var costWithMaxCreatedDate = allCostsInThisPo.OrderByDescending(item => item.create_date).First(); var allEditedCosts = allCostsInThisPo.Where(item=>item.update_date != null); if (allEditedCosts.Any()) { var costWithMaxEditedDate = allEditedCosts.OrderByDescending(item => item.update_date).First(); if (costWithMaxCreatedDate.create_date > costWithMaxEditedDate.update_date) { history.AllocationInfo = BuilkConstant.LogHistorySetting.GetAllocationHistoryLogDisplay(costWithMaxCreatedDate.CreatedByUser.user_fullname, costWithMaxCreatedDate.create_date); } else { history.AllocationInfo = BuilkConstant.LogHistorySetting.GetAllocationHistoryLogDisplay(costWithMaxEditedDate.EditedByUser.user_fullname, costWithMaxEditedDate.update_date.Value); } } else { history.AllocationInfo = BuilkConstant.LogHistorySetting.GetAllocationHistoryLogDisplay(costWithMaxCreatedDate.CreatedByUser.user_fullname, costWithMaxCreatedDate.create_date); } } Answer: You could try and make LINQ do more; to me it is more readable, but it may not be to your style. var allCostsInThisPo = CostRepository .GetAll() .WhereInPo(po.po_surr_key); if (allCostsInThisPo.Any()) { var costWithMaxCreatedDate = allCostsInThisPo.OrderByDescending(item => item.create_date) .Select(c => new { Name = c.CreatedByUser.user_fullname, Date = c.create_date}) .First(); // if there is a max edited date greater than the cost created date then this will // be populated, otherwise will be null. 
var costWithMaxEditedDate = allCostsInThisPo.Where(item => item.update_date.HasValue && item.update_date.Value > costWithMaxCreatedDate.Date) .OrderByDescending(c => c.update_date.Value) .Select(c => new { Name = c.EditedByUser.user_fullname, Date = c.update_date.Value}) .FirstOrDefault(); var maxCost = costWithMaxEditedDate ?? costWithMaxCreatedDate; history.AllocationInfo = BuilkConstant.LogHistorySetting.GetAllocationHistoryLogDisplay(maxCost.Name, maxCost.Date); }
{ "domain": "codereview.stackexchange", "id": 4053, "tags": "c#" }
Stack Exchange Post Reminder
Question: I've created a UserScript for adding follow-up reminders to any post (question or answer) here on the Stack Exchange network. I did this in response to a stackoverflow meta post feature request which sparked my interest. It adds a calendar icon into the vote cell which displays a datepicker where you can select a reminder date, at which time you'll be notified via a notification dialog similar to the current inbox and achievements dialog. Reminders are displayed at the top of the screen in the navbar alongside your inbox/achievements and can be dismissed by clicking on them. Everything works but feels sloppy/spaghetti-ish and I would like to get some feedback on how I can improve it; I'm sure I made a few mistakes. reminders.js var Reminder = function (reminderId, postId, postUrl, postTitle, postType, siteName, reminderDate) { this.reminderId = reminderId; this.postId = postId; this.postUrl = postUrl; this.postTitle = postTitle; this.postType = postType; this.siteName = siteName; this.reminderDate = reminderDate; }; var Reminders = { Add(reminder) { reminders[reminder.reminderId] = { "reminderId": reminder.reminderId, "postId": reminder.postId, "postUrl": reminder.postUrl, "postTitle": reminder.postTitle, "postType": reminder.postType, "siteName": reminder.siteName, "reminderDate": reminder.reminderDate }; }, Clear() { reminders = {}; }, HasReminder(reminderId) { return reminders.hasOwnProperty(reminderId); }, Load() { if (GM_getValue('reminders', undefined) == undefined) { GM_setValue('reminders', JSON.stringify(reminders)); } else { reminders = JSON.parse(GM_getValue('reminders')); } }, Remove(reminderId) { delete reminders[reminderId]; }, Save() { GM_setValue('reminders', JSON.stringify(reminders)); } }; post-reminder.user.js // ==UserScript== // @name SPR-DEV // @version 1.0 // @namespace https://stackoverflow.com/users/1454538/ // @author enki // @match *://*.stackexchange.com/* // @match *://*.stackoverflow.com/* // @match *://*.superuser.com/* // @match 
*://*.serverfault.com/* // @match *://*.askubuntu.com/* // @match *://*.stackapps.com/* // @match *://*.mathoverflow.net/* // @grant GM_getValue // @grant GM_setValue // @grant GM_addValueChangeListener // @require https://code.jquery.com/ui/1.11.4/jquery-ui.min.js // @require https://rawgit.com/enki-code/UserScripts/master/reminders.js // @run-at document-end // ==/UserScript== var reminders = {}, sitename = window.location.hostname, title = $("#question-header h1 a").text(); $(function () { $("head").append("<link rel='stylesheet' href='https://maxcdn.bootstrapcdn.com/font-awesome/4.5.0/css/font-awesome.min.css'>") .append("<link rel='stylesheet' href='https://code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css'>") .append("<style>.reminder, #reminders {color: #999;} .active-reminder, #reminders.active-reminder { color: dodgerblue; }</style>"); $("div.network-items").append("<a id='reminders'\ class='topbar-icon'\ style='background-image: none; padding: 10px 0 0 10px; font-size: 12px; '\ title='Post Reminders'>\ <i class=' fa fa-calendar-o'></i>\ </a> '") .on('click', '#reminders', function (e) { $("#reminder-dialog").toggle(); }); $(".js-topbar-dialog-corral").append("<div id='reminder-dialog' class='topbar-dialog inbox-dialog dno' style='top: 34px; left: 236px; width: 377px; display: block; display:none;'>\ <div class='header'>\ <h3>post reminders</h3>\ </div>\ <div class='modal-content'>\ <ul id='reminderlist'>\ </ul>\ </div>\ </div>"); Reminders.Load(); notify(); // listen for changes and reload reminders GM_addValueChangeListener("reminders", function () { console.log("reminder data has changed, updating reminder list now..."); Reminders.Load(); notify(); }); $(".vote").each(function () { // add calendar icon to each vote cell and add generate reminderId from postId and sitename since post ids are not unique across all sites var postId = $(this).find("input[name='_id_']").val(), reminderId = postId + sitename, type = 
$(this).parent().next().attr("class"), postType = (type == "postcell" ? "question" : "answer"); $(this).append("<a class='reminder'\ data-reminderid='" + reminderId + "'\ title='Remind me of this post'\ style=' padding-left:1px;'>\ <i class='fa fa-calendar-plus-o fa-2x' style='padding-top:5px;'></i>\ </a>\ <input type='text' class='datepicker' data-reminderid='" + reminderId + "' data-posttype='" + postType + "' style='display:none;'>") .on('click', '.vote a.reminder', function (e) { $(this).next().show().focus().hide(); }); }); $(".datepicker").datepicker({ minDate: 0, onSelect: function (dateText, inst) { // date selected, add new reminder and save changes. var postId = $(this).data("postid"), postUrl = $(this).closest("tr").find(".post-menu").find(".short-link").attr("href"), postType = $(this).data("posttype"), reminderId = $(this).data("reminderid"), reminderDate = new Date($(this).val()), calendar = $(this).prev(), rem = new Reminder(reminderId, postId, postUrl, title, postType, sitename, reminderDate.getTime()); Reminders.Add(rem); Reminders.Save(); }, beforeShow: function (input, instance) { instance.dpDiv.css({ marginTop: '-35px', marginLeft: '10px' }); } }); setTimeout(function () { // had to delay this or it wouldn't work, still need to investigate why. 
$('#reminder-dialog .modal-content #reminderlist li a').click(function (e) { //notification item clicked, remove item and open link in new tab e.preventDefault(); var id = $(this).data("reminderid"); Reminders.Remove(id); Reminders.Save(); $(this).remove(); $("#reminder-dialog").hide(); window.open($(this).attr('href'), '_blank'); }); }, 600); }); function notify() { setTimeout(function () { // remove active reminder class from any calendars and hide the notification dialog $("#reminders, a.reminder").removeClass("active-reminder"); $("#reminder-dialog").hide(); $("#reminderlist").empty(); $.each(reminders, function (id, val) { // find calendar associated with reminder and highlight it var calendar = $("a.reminder[data-reminderid='" + id + "']"), time = reminders[id].reminderDate, currentTime = new Date().getTime(); $(calendar).addClass("active-reminder").attr("title", "This post has a reminder set for " + new Date(time).toDateString()); // check if it is time to display reminder notification if (new Date().getTime() > time) { var reminderDate = new Date(reminders[id].reminderDate).toDateString(); $("#reminders").addClass("active-reminder"); $("#reminderlist").append("<li class='inbox-item '>\ <a href='https://" + reminders[id].siteName + reminders[id].postUrl + "' data-reminderid='" + id + "'>\ <div class='site-icon fa fa-calendar-o' title='Post Reminder'></div>\ <div class='item-content'>\ <div class='item-header'>\ <span class='item-type'>Reminder &mdash; " + reminders[id].postType + "</span>\ <span class='item-creation'><span title='" + reminderDate + "'>" + reminderDate + "</span></span>\ </div>\ <div class='item-location'>" + reminders[id].postTitle + "</div>\ <div class='item-summary'>" + reminders[id].siteName + "</div>\ </div>\ </a>\ </li>"); }//end if }); //end each }, 500);//end setTimeout } //end Notify Answer: Overall concept What about shared computers and users who have several devices? 
Could storage be made independent of a particular device (eg cloud storage) and somehow take account of StackOverflow login. reminders.js Needs an explanatory comment/link. I for one don't understand the pattern. If you are looking for a more OO way of doing things then with a little thought, Reminder() instances could be full-blooded objects with methods, not just raw data. Reminders.Save() should ignore any non-enumerables on stringification and Reminders.Load() could be modified to re-create full-blooded Reminder() objects from the retrieved raw data. Reminder objects with say .activate(), .isDue() and .notification() methods would allow notify() to be simplified. That would be a lot of work for the sake of elegance but possibly worth while. post-reminder.user.js A bunch of nit-picks : Move inline styles into the stylesheet. Test .hasClass('...') rather than .attr('class') == '...'. $.each(reminders, function (id, val) makes val available but reminders[id] is used instead, several times. calendar = $(this).prev() is not used. currentTime = new Date().getTime(); is not used. new Date().getTime() is more efficiently written as Date.now(). $(this).data('postid') appears not to be set. e.preventDefault() in click handlers won't hurt even where not strictly necessary. In the onSelect handler new Reminder() parameter list could be composed directly rather than via a series of assignments. Suggest trawling through for other unnecessary assignments (gives GC less to do). Even better, pass a hash (javascript plain object) to the Reminder() constructor instead of individual params. In the $(".vote").each(...) loop, it should be possible to do the .datepicker() widgetization as you go, rather than rediscover the .datepicker elements after the loop has finished. For efficiency, you would need external, named functions for onSelect and beforeShow. The need for timeouts is worrying. Definitely needs investigating. Possibly due to async loading of SO content?
{ "domain": "codereview.stackexchange", "id": 17573, "tags": "javascript, jquery, stackexchange, userscript" }
I need a kilogram of neutrinos. What are the challenges?
Question: So I am a benevolent genius that figured out that if only I had a kilo of neutrinos in a bottle, I could solve some long-standing problems (climate change, rockets landing upright, world peace, the usual). What are the challenges? So far, collecting neutrinos turned out to be... difficult. They only interact weakly (and gravitationally, I presume). The neutrinos we know of (coming from the Sun or supernovae or radioactive decay) are high energy and travel near the speed of light. My problems are How can I slow them down? Nuclear reactors use moderation to "cool" down fast neutrons. Can we imagine a process to cool down neutrinos? What could we bounce them off of to transfer energy? Or maybe there are cold neutrinos everywhere we just haven't detected them? How can I store them? Could there be some material providing a kind of electro-weak wall, like an Erlenmeyer flask for bubbly neutrino soup (probably invisible due to missing electromagnetic interaction)? Could I generate them already cold/slow? Anything else I've missed? Answer: Neutrinos have very little mass and interact extremely weakly with just about everything. One way to trap them is just to use gravity. Even photons can get trapped by black holes so I think it's pretty reasonable to try to collect neutrinos by placing an extremely heavy object near the sun and placing a bottle around the neutrinos that orbit it (the actual "bottle" is just for appearances and doesn't do anything, although I guess you could attach it to the black hole so something could carry the bottle around, which would move the black hole, which would cause the neutrinos to follow). In your case you want 1kg of neutrinos in a "bottle" so having these neutrinos circling in a large orbit is not enough. But if you created a black hole that has a photon sphere (the radius at which photons are trapped) around the size of a bottle, then it's possible for neutrinos to be able to circle around in a bottle-sized orbit. 
Jupiter roughly has the mass necessary to have a "Schwarzschild radius" of $\approx 3~\mathrm{m}$, so if you had something around that order of magnitude, then it could be used to collect the neutrinos. (Although, in my example I work it out with light, while the orbits will be different since neutrinos have a tiny mass. Maybe someone here can work out the details more rigorously?) Not so easy carrying around a mini-black hole with the mass of Jupiter. Also, actually collecting these neutrinos is going to be a hard task, as the cross-section of neutrinos from the sun that have bottle-sized stable orbits (that are trapped by our black hole) is probably very small. One trick that might work here is to use gravitational lensing to try to focus the neutrinos coming from the sun to a smaller area (although wikipedia is telling me that gravitational lensing isn't like optics and doesn't have a focal point so I'm not sure if this works). Also, to add to the complications, neutrinos both famously and bizarrely oscillate between "flavors" as they propagate over long distances, which I can imagine only makes the situation more complicated.
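The "≈3 m" figure is easy to check with a back-of-the-envelope Python sketch using the Schwarzschild radius $r_s = 2GM/c^2$, plus the photon-sphere radius $1.5\,r_s$ as a rough proxy for where (near-)massless particles can orbit:

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
M_jupiter = 1.898e27  # mass of Jupiter, kg

r_s = 2 * G * M_jupiter / c**2  # Schwarzschild radius
r_photon = 1.5 * r_s            # photon-sphere radius

print(f"r_s ≈ {r_s:.2f} m, photon sphere ≈ {r_photon:.2f} m")
# r_s comes out just under 3 m, so "bottle-sized" (well, barrel-sized) orbits check out
```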
{ "domain": "physics.stackexchange", "id": 77527, "tags": "soft-question, neutrinos, weak-interaction" }
Infinitely charged wire and Differential form of Gauss' Law
Question: I have tried calculating the potential of a charged wire the direct way. If lambda is the charge density of the wire, then I get $$\phi(r) = \frac{\lambda}{4 \pi \epsilon_0 r} \int_{-\infty}^\infty \frac{1}{(1+z^2/r^2)^{1/2}} dz.$$ But this comes to $+\infty$ unless I am doing the calculation wrong. Why doesn't this work the direct way? Also, is it possible to calculate the potential of a charged wire using Gauss' differential law? What about in the case of an infinite charged sheet? Or does Gauss' differential law only apply to charged volumes? Answer: The infinitely long wire has an infinite charge $$Q~=~\lambda \int_{-\infty}^{\infty} \! dz ~=~ \infty,\tag{1}$$ and EM has an infinite range, so one shouldn't be surprised to learn that the result $$\phi(r)~=~ \frac{\lambda}{4 \pi \epsilon_0} \int_{-\infty}^{\infty} \frac{dz}{\sqrt{z^2+r^2}} ~=~ \frac{\lambda}{4 \pi \epsilon_0} \left[ {\rm arsinh} \left(\frac{z}{r}\right)\right]_{z=-\infty}^{z=\infty} ~=~\infty \tag{2}$$ is infinite. (From a mathematical point of view, the integrand fails to be integrable wrt. the $z$ variable.) A similar situation happens often in Newtonian gravity if the total mass is infinite, see e.g. this question. However, as Pygmalion mentions in his answer, the electric field $\vec{E}$ is well-defined for $r\neq 0$, and the corresponding integrand is integrable wrt. the $z$ variable. E.g. the radial component (in cylindrical coordinates) reads $$E_r(r)~=~ \frac{\lambda r}{4 \pi \epsilon_0} \int_{-\infty}^{\infty} \frac{dz}{(z^2+r^2)^{3/2}} ~=~ \frac{\lambda r}{4 \pi \epsilon_0} \left[ \frac{z}{r^2\sqrt{z^2+r^2}}\right]_{z=-\infty}^{z=\infty} ~=~\frac{\lambda}{2\pi\epsilon_0 r}\tag{3} $$ for $r\neq 0$. Alternatively, apply Gauss' law $$d\Phi_{E} ~=~\frac{dQ}{\epsilon_0},\tag{4}$$ using an infinitesimally thin disk perpendicular to the wire. The disk has radius $r$ and thickness $dz$. 
The total electric flux $d\Phi_{E}$ out of the disk is $$ E_r \cdot 2\pi r dz ~=~ \frac{\lambda dz}{\epsilon_0},\tag{5}$$ which leads to the same electric field $E_r$. This electric field $\vec{E}=-\vec{\nabla}\phi$ is consistent with a potential of the form $$\phi(r) ~=~-\frac{\lambda}{2\pi\epsilon_0}\ln r \qquad \text{for}\qquad r\neq 0.\tag{6}$$
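The closed form in eq. (3) can be verified numerically. A short Python sketch (with arbitrary test values for $\lambda$ and $r$) integrates the $z$ integrand by the midpoint rule after the substitution $z = r\tan\theta$, which maps the infinite $z$ range to a finite $\theta$ range:

```python
import math

eps0 = 8.854e-12  # vacuum permittivity, F/m
lam = 1e-9        # line charge density, C/m (arbitrary test value)
r = 0.05          # distance of the field point from the wire, m (arbitrary)

# integral of dz/(z^2 + r^2)^(3/2) over all z, via z = r*tan(theta):
# the integrand becomes cos(theta)/r^2 on theta in (-pi/2, pi/2)
N = 100_000
integral = sum(
    math.cos(-math.pi / 2 + (i + 0.5) * math.pi / N) * (math.pi / N)
    for i in range(N)
) / r**2

E_numeric = lam * r / (4 * math.pi * eps0) * integral
E_closed = lam / (2 * math.pi * eps0 * r)
print(E_numeric, E_closed)  # the two values agree to high precision
```

Unlike the potential integral, this one converges: the integrand falls off like $1/z^3$, so the infinite wire gives a finite field even though its potential (with the reference at infinity) diverges.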
{ "domain": "physics.stackexchange", "id": 2835, "tags": "homework-and-exercises, electrostatics, potential, gauss-law" }
Conditionals for time specific data
Question: This is a fun piece. I have this block of code here: if (minutes >= 60 && hours < 24) { return hours + " hours ago"; } else if (hours >= 24 && days <= 30) { return days + " days ago"; } else if (minutes <= -60 && hours > -24) { return -hours + " hours from now"; } else if (hours <= -24 && days >= -30) { return -days + " days from now"; } else if (minutes >= -60 && minutes < 0) { return -minutes + " minutes from now"; } else if (days > 30 && days < 365) { if (days/30 == 1) { return "Last month"; } return days/30 + " months ago"; } else if (days < -30 && days > -365) { if (-days/30 == 1) { return "Next month"; } return -(days/30) + " months from now"; } else if (days >= 365) { if (days/365 == 1) { return "Last year"; } return days/365 + " years ago"; } else if (days <= -365) { if (-days/365 == 1) { return "Next year"; } return -(days/365) + " years from now"; } else { return minutes + " minutes ago"; } I want to reduce the number of if-else statements. The ways I was thinking of tackling this was: Store these into a hashmap and then call it \$O(1)\$ time Use enums Ternary operators Switch statements I don't want to use enums since this is on a mobile device, and enums are heavier on memory. I was thinking that maybe ternary operators are best here. Thoughts? The negative is intentional. It's with an API that I am working with and they use negatives to imply future events. Input from API: This is the following example input from the API: "time": "2015-03-06 20:00:00 EST" It is passed to the following code block, and it is expected to return something like: "45 minutes ago" "56 seconds ago" "1 hour from now" "2 days from now" "2 months from now" "Next year" "Two years from now" "A month ago" "28 days ago" "5 mins from now" //etc. Answer: I'd just say, WTF is this doing? You're testing minutes and hours first as if they were the most important things. So with 10 minutes, 1 hours and 1000 days, you output something like "1 hours ago". 
Maybe it's impossible as the variables all get computed from a single time interval. But you didn't show us. I guess it's better for my sanity not to try too hard to figure out what exactly it does. So let me assume that all variables come from a single time interval. Then a condition like minutes >= 60 && hours < 24 can be expressed like 1 <= hours && hours < 24 This doesn't buy us much as the order is still confusing: " hours ago" " days ago" " minutes from now" " months ago" Again, not trying to find a system there. What about something like String suffix = minutes < 0 ? "from now" : "ago"; if (Math.abs(days) >= 365) { return (Math.abs(days) / 365) + " years " + suffix; } else if (Math.abs(days) >= 2*30) { return (Math.abs(days) / 30) + " months " + suffix; } else if (Math.abs(days) >= 30) { return days < 0 ? "Next month" : "Last month"; } else if (Math.abs(days) >= 1) { return Math.abs(days) + " days " + suffix; } else if (Math.abs(hours) >= 1) { return Math.abs(hours) + " hours " + suffix; } else { return Math.abs(minutes) + " minutes " + suffix; } I'm not claiming it's correct, but it's short and damn simple, so you find and fix all bugs in a few seconds. So what I dislike is The high count of cases, which could be cut in half using the abs idea stolen from Hosch250. Placing two tests in each if when one is enough. You need no range tests if you do it systematically. Even with range tests, there should be a clean order. I always prefer to keep things simple from the very beginning. Otherwise, it can easily happen that you get a cool idea, but can't apply it because of the need to preserve compatibility with some quirks in the original. Another disadvantage of starting with complicated stuff is that you may miss some simplifications the way I did above. Concerning correctness, all the divisions and comparisons with 365 and 30 are slightly wrong. Note also that the task is not exactly defined. 
Given the dates 2000-12-31T23:23:00 and 2001-01-01T00:11:22, multiple answers are correct: "48 minutes from now" "nearly one hour from now" "tomorrow" "next month" "next year" And maybe also "next century" and "next millennium", but both are disputable (and irrelevant most of the time). There's no clear solution for this and you should specify the behavior precisely and cover it with tests. The most straightforward solution would be to look at the calendar year first, which would give us the answer "next year". Pretty impractical, but every other solution needs some arbitrary choices. I'd probably start with String timeDifferenceString(long currentTimeMillis, long otherMillis) { final boolean isPast = otherMillis < currentTimeMillis; final Calendar first = Calendar.getInstance(); first.setTimeInMillis(Math.min(currentTimeMillis, otherMillis)); final Calendar second = Calendar.getInstance(); second.setTimeInMillis(Math.max(currentTimeMillis, otherMillis)); return timeDifferenceString(first, second, isPast); } so that no Math.abs is needed anymore. Obviously, Joda-Time or the corresponding JDK8 classes are a better choice than Calendar, but let's keep it simple for this answer. Now you can write private String timeDifferenceString(Calendar first, Calendar second, boolean isPast) { final String agoOrFromNow = isPast ? "ago" : "from now"; final String lastOrNext = isPast ? "last " : "next "; final int years = second.get(Calendar.YEAR) - first.get(Calendar.YEAR); if (years > 1) { return years + " years " + agoOrFromNow; } else if (years > 0) { return lastOrNext + "year"; } final int months = second.get(Calendar.MONTH) - first.get(Calendar.MONTH); if (months > 1) { return months + " months " + agoOrFromNow; } else if (months > 0) { return lastOrNext + "month"; } ... return "now"; } This leads to a rather lengthy but trivial method. Now you may want to split it as suggested by h.j.k. This is not easy, as the parts above only conditionally return something. 
Using a guard condition, this can be solved like this if (years > 0) { if (years > 1) { return years + " years " + agoOrFromNow; } else if (years > 0) { return lastOrNext + "year"; } else { return "this year"; } } After extracting the method, you'll see that it's about the same for every time unit, so you may want to generalize it to private String timeDifferenceString(int count, String unitName, boolean isPast) { if (count > 1) { return count + " " + unitName + "s " + agoOrFromNow(isPast); } else if (count > 0) { return lastOrNext(isPast) + " " + unitName; } else { return "this " + unitName; } } Note that this won't work for other languages for many reasons including pluralization rules. Now, you have a few trivial methods and one important one looking like private String timeDifferenceString(Calendar first, Calendar second, boolean isPast) { final int years = second.get(Calendar.YEAR) - first.get(Calendar.YEAR); if (years > 0) { return timeDifferenceString(years, "year", isPast); } final int months = second.get(Calendar.MONTH) - first.get(Calendar.MONTH); if (months > 0) { return timeDifferenceString(months, "month", isPast); } ... return "now"; } This is the right time for making changes, allowing you to output "48 minutes from now" instead of "next year" in my above example.
{ "domain": "codereview.stackexchange", "id": 14328, "tags": "java, datetime" }
Critical electric field that spontaneously generates real pairs
Question: With the current QED framework, if an electric field is strong enough (say, near a nucleus with $Z > 140$), will pair production occur spontaneously? Is this a real effect or an artifact before renormalization is carried out? How can energy be conserved in such a scenario? Answer: The Schwinger pair production can be understood as follows: Suppose you have a constant electric field $E$ in some region of space, pointing in the x direction. This is created for example by a large capacitor. Inside this capacitor, the energy of an electron-positron pair separated by some distance $\Delta x$ is $V(\Delta x)= 2m - q E \Delta x$, where $m$ is the mass of the electron (and positron) and $q$ is its charge. The first term is the rest energy of two massive particles, and the second is their potential energy in the presence of the electric field. You can see that for large enough separation the total energy is negative - it becomes energetically favourable to have a pair present instead of an empty capacitor. This process can be thought of as tunnelling: the configurations of empty space and of an electron-positron pair are separated by a barrier $V(\Delta x)$. You therefore have an exponentially small probability to create pairs at the classical turning point $V(\Delta x)=0$, or $\Delta x = \frac{2m}{qE}$. As always in tunnelling processes, energy is conserved - it is zero before and after the tunnelling in this case. Once the particles are created, they accelerate away from each other, and eventually end up partly neutralizing the capacitor, in other words reducing the electric field. Note that there is no critical electric field - pair creation occurs for arbitrarily small electric field, though the probability is exponentially suppressed, roughly as $\exp(- \pi m^2/ q E)$. The derivation of this formula by Schwinger (and corrections to all orders) is a real joy to see; I'd recommend at least for theorists to take a look at the original paper. 
This may well be the first use of instanton methods in quantum mechanics, though I'm far from being an expert on the history. Also: There is no real connection of this to renormalization. Variants of this calculation are useful in cosmology and QFT in curved spacetime, for example for particle production by time varying backgrounds. As far as I know, the effect has never been observed due to difficulties creating large enough electric field. I may be wrong on that as well though.
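For a sense of scale, a quick Python estimate in SI units. The conventionally quoted "Schwinger field" is where the suppression exponent $\pi m^2 c^3/(q\hbar E)$ becomes of order one; below it, pair creation still happens but is astronomically rare:

```python
import math

m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
q = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J*s

# field strength at which pi*m^2*c^3/(q*hbar*E) ~ 1
E_s = m_e**2 * c**3 / (q * hbar)
print(f"Schwinger field: {E_s:.2e} V/m")  # roughly 1.3e18 V/m

# suppression factor exp(-pi*E_s/E) for a few field strengths
for E in (1e15, 1e17, 1e18):
    print(f"E = {E:.0e} V/m -> suppression {math.exp(-math.pi * E_s / E):.3e}")
```

This makes the answer's last point quantitative: at any field achievable in the lab the exponential factor is effectively zero, which is why the effect has not yet been observed directly.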
{ "domain": "physics.stackexchange", "id": 718, "tags": "nuclear-physics, quantum-electrodynamics, binding-energy, elements, pair-production" }
Dyson Air Blade as a propulsion system?
Question: I've read from many sources that Dyson Air Multipliers are more efficient and quieter than normal fans. Now, with the proof of concept, is it possible to use its principles as a propulsion system for, say, a quieter helicopter? Answer: Unlike John Rennie, I think that the problem is not in the efficiency of this system but in the fact that it will not generate considerable lift. So even if the marketing materials are completely true and the Dyson Air Multiplier is more energy-efficient than conventional fans, this efficiency only applies to moving air (which is its intended use) but not to the lifting force. The principle behind the Air Multiplier (see this video) is creating a flow around the surface of the duct which induces a considerably greater flow through the duct. However, the resulting flow would be nearly potential and the net force on the duct would be quite small. A somewhat similar effect does occur in helicopters: the vortex ring state, where under certain conditions increasing the air flow through the rotor does not produce additional lift. In helicopters this is harmful and could even cause a crash, but the Air Multiplier effectively creates a similar 'vortex' around the duct for the purpose of moving air.
{ "domain": "physics.stackexchange", "id": 14921, "tags": "fluid-dynamics, pressure, air, fan" }
Waveguides Transmission Mode Determination
Question: How do I know if I have a TE, TM, or TEM rectangular conductive waveguide? For instance, I am doing a lab where we want maximum magnetic field in the waveguide; does that mean we want TE because $\vec{E}=0$ near the boundary? Is it the material of the waveguide that tells you, the frequency of the signal, or something else? Thank you! BTW I am using microwave radiation. Answer: Firstly, if your waveguide is a hollow conductor, it cannot support TEM modes. There must be at least two separate (electrically insulated from one another) conductors in the waveguide's cross section for TEM modes to propagate. The reason is that the transverse field dependence of a TEM mode is the same as that of a static field, as I explain in detail in this answer here. That is, the fields are of the form $\vec{E}(x,y) \,E_z(z \pm c\,t)$ and $\vec{H}(x,y) \,H_z(z \pm c\,t)$, where $\vec{E}(x,y) = -\nabla \phi_E(x,y)$ and $\vec{H}(x,y) = -\nabla \phi_H(x,y)$. So, at a given cross-section, a conductor must be an equipotential surface. If there were only one such conductor, in the farfield it would look like a charged thread, with field lines directed radially, and so the potential in the farfield would vary like $\log r$, which is unphysical because it is divergent. There must be two conductors for field lines to separately begin and end on. Likewise, inside a hollow conductor, there can be no static field, therefore no TEM field. So TEM waveguides are things like co-axial cables (outer and centre conductor), twisted pairs and microwave strip lines with dielectric sandwiched between two conductors. To tell whether the waveguide will be TE or TM (or indeed hybrid) you need to look at the details of the cross section and the frequency it is working at. The whole answer is that you need to look at the full boundary value problems for Maxwell's equations for the waveguide's cross section.
If it is a rectangular waveguide, the propagation constant as a function of frequency is given by the same expression for both TE and TM modes; the one difference is that TE modes have a lower cutoff frequency, so if you are working above the TE cutoff but below the TM cutoff, the field will be TE. See this document: Section 13.4 in the MIT online lecture course "Electromagnetic Fields and Energy" by Herman Haus and James Melcher. The particular section you need (13.4) is at: http://web.mit.edu/6.013_book/www/chapter13/13.4.html
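To make the "above TE cutoff but below TM cutoff" window concrete, here is a short sketch computing rectangular-guide cutoff frequencies (the WR-90 X-band dimensions are assumed purely for illustration):

```python
import math

def cutoff_hz(m, n, a, b, c=2.998e8):
    # Cutoff frequency of the TE_mn or TM_mn mode of an air-filled
    # rectangular waveguide with cross-section a x b (metres).
    # TE modes allow one of (m, n) to be zero; TM modes need m, n >= 1.
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

a, b = 22.86e-3, 10.16e-3          # WR-90 X-band guide
f_te10 = cutoff_hz(1, 0, a, b)     # dominant TE10 cutoff, ~6.56 GHz
f_tm11 = cutoff_hz(1, 1, a, b)     # lowest TM cutoff, ~16.2 GHz
```

Between roughly 6.6 GHz and 16.2 GHz every propagating mode in this guide is TE, matching the answer's point that operating above the TE cutoff but below the TM cutoff forces a TE field.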
{ "domain": "physics.stackexchange", "id": 12143, "tags": "electromagnetism, electromagnetic-radiation, experimental-physics, magnetic-fields, waveguide" }
Using Senz3d with depthimage_to_laserscan
Question: Hi everyone, I'm using a Senz3d to get depth image data, and I would like to use this depth image data with the package depthimage_to_laserscan. But when I run this data with the depthimage_to_laserscan package, it showed the warning message below: [ WARN] [1407311794.272714061]: [image_transport] Topics '/softkinetic_camera/depth_registered/image' and '/softkinetic_camera/depth_registered/camera_info' do not appear to be synchronized. In the last 10s: Image messages received: 33 CameraInfo messages received: 0 Synchronized pairs: 0 I think I have to publish '/softkinetic_camera/depth_registered/camera_info', but I don't have any experience with this. So, can anyone help me? Thanks, Duong Originally posted by buihaduong on ROS Answers with karma: 5 on 2014-08-06 Post score: 0 Answer: It looks like your camera driver isn't publishing CameraInfo messages for the depth image properly. The driver probably needs to be updated to support maintaining and publishing CameraInfo. The camera_info_manager library makes this easier. Originally posted by ahendrix with karma: 47576 on 2014-08-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by buihaduong on 2014-08-06: Hi ahendrix, Thank for your answer. Do you have any guides or tutorials to work with camera_info_manager and CameraInfo ? Comment by ahendrix on 2014-08-07: Not really. The only tutorial available in the camera_info_manager package just links to the source code the camera1394 driver: http://wiki.ros.org/camera_info_manager/Tutorials/UsingCameraInfoManagerInCameraDriver%28C%2B%2B%29 Comment by buihaduong on 2014-08-07: Thank ahendrix, I'm trying to use camera_info_manager. I will post the solution when success. Comment by buihaduong on 2014-08-07: Hi @ahendrix, do you have experienced with following error: Could not convert depth image to laserscan: scan_height ( 1 pixels) is too large for the image height.?
{ "domain": "robotics.stackexchange", "id": 18929, "tags": "ros, camera-depth-points, depthimage-to-laserscan" }
What is the smallest result you publish on ArXiv?
Question: In essence, the question is: What is the least publishable unit for the ArXiv? Of particular interest are fields that use the ArXiv extensively such as quantum computing. But comments on other fields and preprint services (like, ECCC & ePrint) are also welcome. Detailed question This is based on the following two questions: When should you say what you know? How do you decide when you have enough research results to write a paper and to which journal you submit the paper In particular, on Jukka Suomela's comment on this answer: I think ArXiving your results ASAP is a good idea. Please keep in mind that an ArXiv manuscript does not need to constitute a minimum publishable unit. I think it is perfectly ok to submit a 2-page proof to ArXiv, even though it would be obviously too short as a conference or journal paper. Resolving an open problem that other people would like to solve is more than enough. In my field (quantum computing) it seems that every preprint I see on the ArXiv is a publication-level paper, released early so that we don't have to wait for conference proceedings or journal turnaround. It is intimidating to submit something that is not at publication-level. Is it alright to put up results which are partial or only slight extensions of existing work? Is it alright to put up results that are potentially interesting (i.e. you've given some talks on them and not everybody fell asleep) but you doubt would get into a top-conference or journal? Do you have advice on when to share results on ArXiv or similar preprint-servers? Can sharing results early hurt you? Some specific background Just to make the question more personal, I'll include a further motivation. However, I am hoping to receive answers that give more general guidelines that I (and others) could follow in the future. 
I did some work on unitary t-designs in which I extended an existing theorem (in a way that is kind of useful, but the proof of the original just needs to be modified slightly -- so no new idea; i.e. when I talked to the author of the earlier paper his comment was along the lines of "oh cool, didn't think about that", and for the proof I had to say about a sentence and then he was like "okay, I see how you would prove that"), proved some easy results, and provided an alternative proof of a lower bound. I wrote up a pretty verbose paper that I keep on my website, but unfortunately I am not well read enough in the field to really understand how it fits in the bigger picture (and I think that is the biggest weak-point, that I doubt I could overcome easily). I keep the text around mostly as a sort of "I worked on this" note and since I give talks on the topic sometimes. It has also come in useful once to a friend since I make a pretty gentle intro and so he used it as a basic starting point to relate some of his work to designs (although he didn't use any of the results in the paper, just like a lecture note on definitions). Would this be an example of something that I should put up on the ArXiv? or is the appropriate measure to keep it in on my website? Answer: ArXiv papers still need to be recognizable as papers. I'd only put something on the arXiv if I'd feel comfortable publishing it as a letter in a journal (like, say, Information Processing Letters). For stuff that's even smaller than that, but that I still want to put on some sort of public record, I'll just make a blog post. But in your case, if you've written it up as a preprint anyway, and you clearly state in it how much or how little is new, then why not? ArXiv papers don't actually have to have any new research content — survey papers are also welcome — so a paper that's mostly a survey but that extends the problem a small step in some direction doesn't sound problematic to me.
{ "domain": "cstheory.stackexchange", "id": 1015, "tags": "soft-question, advice-request, research-practice, paper-review" }
Confusion regarding application of time invariance to flip system?
Question: I am reading the book Signal Processing First, Chapter 5, specifically the article on time invariance, where the author mentions the example of a flip system as shown in the attached photo (I have drawn a red curve above the confusing equation). I am confused about how he is doing this when he says "if we delay the input and then reverse the order". I think he is not properly reversing the order, as he reversed the sign of only $n$ but he didn't reverse the sign of $n_0$. Why this discrepancy in reversal/flipping? Answer: The book is correct; there is no discrepancy. When we reverse a system in time, only the time variable gets negated, not the shift. Time-reversal does not mean that the whole argument of $x[n]$ gets negated. Take the example of a sequence: $x[n] = \{\hat{0},1,2,3,4,5\}$. Shift this by 2 samples, so $x_{shift}[n] = \{\hat{0},0,0,1,2,3,4,5\}$. Now, reverse the time; what should the expected sequence be? $x_{final}[n] = 5,4,3,2,1,0,0, \hat{0}$. You will get it only when $x_{final}[n] = x[-n-2]$ and not $x[-n+2]$. Check it out. Whatever operation happens - time scaling, shifting or reversal - it happens on $n$ and not on the complete argument. That is why, when the author reverses the time first and then delays, the shift gets negated too. Time-reversal: $x_{reverted}[n] = x[-n]$, and then a shift of $n_o$; again the shift will happen to $n$ and not to $-n$, and hence $x_{final}[n] = x[-(n-n_o)]$. Let's take the same sequence as an example: $x[n] = \hat{0},1,2,3,4,5$. Revert time first: $x_{reverted}[n] = 5,4,3,2,1,\hat{0}$. Now, when we delay the sequence by 2 samples, the expected sequence should look like: $x_{final}[n] = 5,4,3,\hat{2},1,0$. Check that this happens iff $x_{final}[n] = x[-n+2]$ and not if $x_{final}[n] = x[-n-2]$. Just put in the values of $n$ and it will be clear. It always helps to take an example sequence, do the shifting, delaying and reversal on that sequence, and then see what you should do to the argument of $x$ to get the final expected sequence.
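The two orderings described in the answer can be checked directly; here is a small sketch using the answer's example sequence (taken as zero outside its support):

```python
def x(n):
    # the answer's example: x[n] = n for 0 <= n <= 5, zero elsewhere
    return n if 0 <= n <= 5 else 0

# delay by 2, then reverse time: y[n] = x[-n - 2]
delay_then_reverse = [x(-n - 2) for n in range(-7, 1)]
# reverse time, then delay by 2: y[n] = x[-(n - 2)] = x[-n + 2]
reverse_then_delay = [x(-n + 2) for n in range(-3, 3)]

print(delay_then_reverse)  # [5, 4, 3, 2, 1, 0, 0, 0]
print(reverse_then_delay)  # [5, 4, 3, 2, 1, 0]
```

Both printed sequences match the hand-worked ones in the answer, confirming that the shift is applied to $n$ before or after the reversal, never to the whole argument at once.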
{ "domain": "dsp.stackexchange", "id": 8634, "tags": "system-identification" }
Unable to resolve dependencies while ROS Install on Mac
Question: Hi, I am trying to install ROS Indigo on Mac OS X El Capitan and am following this tutorial. However, along the tutorial I am getting stuck at step 2.1.2, wherein I need to run the following command: $ rosdep install --from-paths src --ignore-src --rosdistro indigo -y Upon doing so, I get the following error: $ rosdep install --from-paths src --ignore-src --rosdistro=indigo -y executing command [sudo -H pip install -U urlgrabber] Password: Collecting urlgrabber Using cached urlgrabber-3.9.1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/tmp/pip-build-iJQJmX/urlgrabber/setup.py", line 3, in <module> import urlgrabber as _urlgrabber File "urlgrabber/__init__.py", line 54, in <module> from grabber import urlgrab, urlopen, urlread File "urlgrabber/grabber.py", line 427, in <module> import pycurl ImportError: No module named pycurl ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-iJQJmX/urlgrabber/ ERROR: the following rosdeps failed to install pip: command [sudo -H pip install -U urlgrabber] failed I tried looking for help, but am not able to get through. Does anyone have a clue why? Thanks :) Originally posted by dexter05 on ROS Answers with karma: 26 on 2016-08-20 Post score: 1 Original comments Comment by spmaniato on 2016-08-21: You may wanna use this script instead: https://github.com/mikepurvis/ros-install-osx Has been working great for many many people. Answer: I tried Mike Purvis's solution and it works! Thanks for the tip spmaniato! It takes a long time though. Just a tip for all those who might need help in the future. Packages like pyqt, sip may throw ImportError during compilation. A simple brew unlink <package that throws ImportError> && brew link <same package> works like magic.
Originally posted by dexter05 with karma: 26 on 2016-08-22 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25570, "tags": "ros, ros-indigo, macosx" }
Is there any legitimate information about Microsoft quantum hardware?
Question: I'm wondering if anyone has information regarding the current status of Microsoft quantum hardware? How many working qubits do they have? What are the gate depth/fidelity? Any details about realization? (besides "we're focused on sustainable solutions that will change the world"; "in the future, topological qubits will allow us to reach fault-tolerance", etc.) Answer: I'm glad that you inserted this quote: "in the future, topological qubits will allow us to reach fault-tolerance" So you are at least aware that Microsoft is invested in topological quantum computing. You can even see it directly from microsoft.com: So now the answer to your question: Topological quantum computing is quantum computing that would use anyons. If you read this question of mine you will see that confirming the existence of anyons is still an open problem (unfortunately there is an "answer" to the question, but it is about simulating anyons, not actually realizing them in real life). My answer here explains what anyons are in more detail, and my answer here explains that since using anyons is part of the definition of topological quantum computing, there is no way for Microsoft to make a topological quantum computer by skipping the step of confirming the existence of anyons. So you ask: "I'm wondering if anyone has information regarding the current status of Microsoft quantum hardware?" How many working qubits do they have? What are the gate depth/fidelity? This is what they have so far: # of qubits = 0 gate depth = 0 fidelity = 0 We need to wait until anyons are confirmed to actually exist, before we can create a quantum computer out of them. While it is possible that Microsoft is working on some other type of quantum computing without telling us, all we know for sure is that their website says "Our approach focuses on topological quantum computing" which means that they do not have any physical device yet.
They have made plenty of achievements in quantum computing theory though, as well as software such as LIQUi|>. No hardware though.
{ "domain": "quantumcomputing.stackexchange", "id": 1616, "tags": "experimental-realization" }
Draw-Your-Own-Cards One-file Memory Match Game
Question: I've made a memory match game, where the users can draw each card. When the program is ran, there would be a canvas that allows the user to draw in. When the user is done drawing, they can press ENTER, and another fresh canvas will appear. The user may repeat the process, drawing as many card designs as needed, and when enough card designs are drawn, press ENTER without drawing anything, and the memory match game will begin. Here is a sped-up demonstration how it works: My code: import pygame from time import sleep from random import shuffle, choice ROWS = 5 COLUMNS = 8 pygame.init() wn = pygame.display.set_mode((435, 500)) class CreateShape: def __init__(self, x, y, w, h): self.rect = pygame.Rect(x, y, w, h) self.drawing = False self.color = (255, 0, 0) self.cors = [] def under(self, pos): return self.rect.collidepoint(pos) def draw(self): pygame.draw.rect(wn, (255, 255, 255), self.rect) for cor in self.cors: if len(cor) > 2: pygame.draw.polygon(wn, self.color, cor) def create(self): while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() elif event.type == pygame.MOUSEBUTTONDOWN: if shape.under(event.pos): shape.cors.append([]) shape.drawing = True elif event.type == pygame.MOUSEBUTTONUP: shape.drawing = False elif event.type == pygame.MOUSEMOTION: if shape.under(event.pos): if shape.drawing: shape.cors[-1].append(event.pos) elif event.type == pygame.KEYDOWN: return self.cors wn.fill((0, 0, 0)) shape.draw() pygame.display.update() class Card: def __init__(self, x, y, w, h, shape, original): self.x = x self.y = y self.w = w self.h = h self.card_color = (200, 200, 200) self.shape_color = (255, 0, 0) self.rect = pygame.Rect(x, y, w, h) self.shape = shape self.original = original self.turned = False def flip(self): self.turned = not self.turned def clicked(self, pos): return self.rect.collidepoint(pos) def draw(self): pygame.draw.rect(wn, self.card_color, self.rect) if self.turned: for shape in self.shape: 
pygame.draw.polygon(wn, self.shape_color, shape) class Deck: def __init__(self, x, y, w, h, rows, cols, score_keeper, shapes=[], space=5): shapes *= 2 self.turned = [] self.cards = [] index = 0 while len(shapes) < rows * cols: shapes += [choice(shapes)] * 2 shuffle(shapes) for i in range(rows): for j in range(cols): shapey = [[[c[0] / 2, c[1] / 2] for c in s] for s in shapes[index]] card_x, card_y = x + j * (w + space), y + i * (h + space) cors_x = sorted([cor[0] for shape in shapey for cor in shape]) cors_y = sorted([cor[1] for shape in shapey for cor in shape]) max_x, min_x = cors_x[0], cors_x[-1] max_y, min_y = cors_y[0], cors_y[-1] shaper = [[(i + card_x - min_x + (w - max_x + min_x) / 2, j + card_y - min_y + (h - max_y + min_y) / 2) for i, j in shape] for shape in shapey] self.cards.append(Card(card_x, card_y, w, h, shaper, shapes[index])) index += 1 def check_equal(self, c1, c2): for shape1, shape2 in zip(c1.original, c2.original): for s1, s2 in zip(shape1, shape2): for cor1, cor2 in zip(s1, s2): if cor1 != cor2: return False return True def clicked(self, pos): for card in self.cards: if card.clicked(pos): card.flip() if card.turned: self.turned.append(card) else: self.turned.remove(card) if len(self.turned) == 2: self.draw() pygame.display.update() sleep(0.5) if self.check_equal(*self.turned): for card in self.turned: self.cards.remove(card) score.add() else: for card in self.turned: card.flip() score.remove() self.turned.clear() def draw(self): for card in self.cards: card.draw() class Score: def __init__(self, x, y, size=40): self.x = x self.y = y self.size = size self.font = pygame.font.SysFont('Arial', size) self.total = 0 def add(self, amt=10): self.total += amt def remove(self, amt=5): self.total -= amt def draw(self): text = self.font.render(f'Score: {self.total}', False, (255, 255, 255)) wn.blit(text, (self.x, self.y)) shapes = [] for i in range(ROWS * COLUMNS // 2): shape = CreateShape(168, 200, 100, 100) my_shape = shape.create() if not my_shape: 
break shapes.append(my_shape) score = Score(20, 20) deck = Deck(20, 90, 45, 65, ROWS, COLUMNS, score, shapes) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() elif event.type == pygame.MOUSEBUTTONDOWN: deck.clicked(event.pos) wn.fill((0, 0, 0)) deck.draw() score.draw() pygame.display.update() I believe my check_equal function can be greatly simplified, as all it does is return whether two lists (c1 and c2) of lists of tuples are equal: def check_equal(self, c1, c2): for shape1, shape2 in zip(c1.original, c2.original): for s1, s2 in zip(shape1, shape2): for cor1, cor2 in zip(s1, s2): if cor1 != cor2: return False return True Can you please point out where my code can be simplified, and where I can improve its efficiency? Bug reports will also be greatly appreciated! Answer: check_equal() can be greatly simplified: def check_equal(self, c1, c2): return c1.original is c2.original The main code creates a list of shapes. Deck.__init__() uses shapes *= 2 to double the list of shapes. That is, shapes now contains two copies of each shape. Actually, it contains two references to the same shape. Then a Card is made from each shape, which stores the shape in Card.original. So you can use is to check if the cards have the same shape.
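The identity trick works because `shapes *= 2` duplicates references, not data; a minimal sketch of that aliasing (with a made-up one-polygon shape):

```python
shape = [[(0, 0), (1, 1), (0, 1)]]   # one drawn shape: a list of polygons
shapes = [shape]
shapes *= 2                           # doubles the list with *references*

# both entries point at the very same object, so identity comparison works
assert shapes[0] is shapes[1]

# a structurally equal but separately built shape is == without being `is`
copy = [[(0, 0), (1, 1), (0, 1)]]
assert copy == shape and copy is not shape
```

This is why `c1.original is c2.original` is enough here: matching cards hold the same shape object, while two independently drawn shapes are never identical even if they happen to be equal.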
{ "domain": "codereview.stackexchange", "id": 40126, "tags": "python, performance, beginner, pygame" }
Can nucleophilic attack be faster than rearrangement?
Question: Let's say in a reaction, unimolecular (SN1), we have a $\ce{CH3-CH(CH3)-CH2+}$ carbocation, and a nucleophile $\ce{Nu-}$. Generally the carbocation is first rearranged and then there is attack of $\ce{Nu-}$, but can't it be that a very strong nucleophile, strongly attracted towards the cation, attacks the cation before it rearranges to give the kinetically controlled product (KCP) (at least at lower temperatures)? Answer: You are asking what would happen if something like 1-bromo-2-methylpropane were to undergo an $\mathrm{S_N1}$ reaction. The simple answer is: Your premise won't happen. The first step in an $\mathrm{S_N1}$ reaction is the formation of a carbocation by the leaving group leaving. A general equation for this would be: $$\ce{R3C-Br <=> R3C+ + Br-}$$ Where this reaction has its equilibrium, and thus what concentration of the carbocation can be expected, depends on the relative stabilities of reactant and carbocation (the bromide is always the same so it doesn't matter). Now primary carbocations are so unstable (in a thermodynamic sense) that thermodynamics predicts an equilibrium far to the left for primary haloalkanes: $$\ce{RCH2-Br <<=> RCH2+ + Br-}$$ (This reaction arrow doesn't do it justice. It should be a humungous left-pointing arrow and a minute right-pointing one.) For tertiary or otherwise stabilised carbocations, we can expect a significant concentration of the right-hand side of the equation at equilibrium. Thus, we have a potential carbocation concentration that can potentially react with another nucleophile in an $\mathrm{S_N1}$ reaction. But primary haloalkanes will not even allow the carbocation to form, or in a different view, will pull back the bromide to regenerate the haloalkane before any nucleophile has a chance of attacking the carbocation.
That's why primary haloalkanes predominantly react via the $\mathrm{S_N2}$ mechanism: there simply isn't any significant amount of carbocation present for $\mathrm{S_N1}$ or any type of Wagner-Meerwein rearrangement.
{ "domain": "chemistry.stackexchange", "id": 15126, "tags": "organic-chemistry, reaction-mechanism" }
Why don't breast enlargements leave any marks of surgery on breasts?
Question: If we get stitches, we get marks left on the skin, but there are no marks for breast enlargements. I saw a YouTube video about a breast enlargement wherein a doctor makes a cut. Where does the cut mark go after surgery? Answer: All cuts that go through the dermis (the full thickness of skin) will leave a scar, no matter what (and no matter what anyone tells you). The visibility of scar tissue has a lot to do with how a person heals, how much stress is put on the incision as it's healing, where the scar is, and whether it crosses the natural direction of the skin (called Langer's lines) or goes along the lines. Plastic surgeons have many years of training and experience in cosmetic skin surgery, and know how to minimize scar formation by their more fastidious closure methods and wound care, as well as hiding scars in creases and along Langer's lines. But even the best plastic surgeons cannot prevent scars, only minimize them. If you still don't believe breast augmentation leaves scars, you can look at this page (NSFW).
{ "domain": "biology.stackexchange", "id": 3422, "tags": "human-biology, surgery" }
Electric field in a cylinder
Question: We have electric charge density $\rho(r) = kr$ in a cylinder of infinite height and radius $a$. I'm asked to find the electric field. I'm doing it using two methods and I don't understand why they don't yield the same result. Method 1 Gauss' theorem applied to a cylindrical surface; $$E(r) 2\pi rh = \frac{Q}{\epsilon_0}$$ $Q = h\int_A \rho = h\int_A kr$, where A is the unit circle $\Rightarrow Q = h\pi k r^2$ So I find $$E(r) = \frac{kr}{2\epsilon_0}$$ Method 2 Divergence in polar coordinates: $$\nabla \cdot E (r) = \frac{\rho}{\epsilon_0}$$ $$\nabla \cdot E(r) = \frac{1}{r} \frac{\partial rE(r)}{\partial r} = \frac{1}{r} (\frac{\partial E(r)}{\partial r} + r\frac{\partial E(r)}{\partial r}) = \frac{\partial E(r)}{\partial r} \frac{r+1}{r} = \frac{\rho}{\epsilon_0}$$ $$\frac{\partial E(r)}{\partial r} = \frac{k}{\epsilon_0} \frac{r^2}{1+r}$$ $$\Rightarrow E(r) = \frac{k}{\epsilon_0} (\frac{r^2}{2} + r + \ln{(1+r)})$$ What's wrong with that? Answer: You just made some math mistakes. You made a mistake when you did $Q = h\int_A kr$. You got $Q = h\pi k r^2$, but you should have gotten $Q = \frac{2}{3} h\pi k r^3$. Notice how this second expression has units of charge while the first one doesn't. Another mistake you make is that you say $\frac{1}{r} \frac{\partial rE(r)}{\partial r} = \frac{1}{r} (\frac{\partial E(r)}{\partial r} + r\frac{\partial E(r)}{\partial r})$. This is not a proper application of the product rule. If you correct these mistakes you will get the right answer, unless there are other mistakes I didn't find.
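For reference, a sketch of the corrected algebra for both methods (they then agree):

```latex
\text{Gauss: } Q(r) = h\int_0^r k r'\,2\pi r'\,dr' = \frac{2\pi k h r^3}{3}
\;\Rightarrow\; E(r)\,2\pi r h = \frac{2\pi k h r^3}{3\epsilon_0}
\;\Rightarrow\; E(r) = \frac{k r^2}{3\epsilon_0}

\text{Divergence: } \frac{1}{r}\frac{\partial\big(rE(r)\big)}{\partial r} = \frac{kr}{\epsilon_0}
\;\Rightarrow\; \frac{\partial\big(rE(r)\big)}{\partial r} = \frac{kr^2}{\epsilon_0}
\;\Rightarrow\; rE(r) = \frac{kr^3}{3\epsilon_0}
\;\Rightarrow\; E(r) = \frac{kr^2}{3\epsilon_0}
```

Note that in the divergence method the product rule is never needed: $rE(r)$ is integrated as a whole, which is where the original attempt went wrong.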
{ "domain": "physics.stackexchange", "id": 12147, "tags": "electrostatics, operators, coordinate-systems, gauss-law" }
show that $L=\{a^*\}\cup\{b^ja^{n^2}|0<j,1\leq n \}$ Holds the pumping lemma for context-free languages
Question: Prove that this language satisfies the conclusion of the pumping lemma: show that $L=\{a^*\}\cup\{b^ja^{n^2}|0<j,1\leq n \}$ satisfies the pumping lemma for context-free languages. The problem is that I know how to use the pumping lemma to show that a language is not context-free, but I don't know how to prove the direction "language satisfies the pumping lemma $\Rightarrow$ it is a context-free language". Answer: One can never prove a language to be context-free with the use of the pumping lemma alone. The pumping lemma for context-free languages states a property that is necessary but not sufficient for context-free languages. There are no context-free languages that lack the context-free pumping property, but there are languages that have this property despite not being context-free. Let $s \in L$ with $|s| \geq p$ for some pumping length $p$ of $L$. I am going to show that for any such $s$, the $uvwxy$ division required by the pumping lemma is possible. Due to the definition of $L$, we know that one of the following is true: $s = a^n$ for some $n \geq p$, or $s = b^ma^n$ for some $m \geq 1$ where $n$ is a natural square. For each of these cases, there exists a trivial decomposition that satisfies the conditions of the pumping lemma. Can you find them? Note that in the end, $L$ is not context-free, so this will serve as a proof of the non-invertibility of the pumping lemma - you cannot use it to prove a language context-free.
{ "domain": "cs.stackexchange", "id": 20100, "tags": "automata, context-free, pumping-lemma" }
What is the difference between "mutation" and "crossover"?
Question: In the context of evolutionary computation, in particular genetic algorithms, there are two stochastic operations, "mutation" and "crossover". What are the differences between them? Answer: The mutation is an operation that is applied to a single individual in the population. It can e.g. introduce some noise in the chromosome. For example, if the chromosomes are binary, a mutation may simply be the flip of a bit (or gene). The crossover is an operation which takes as input two individuals (often called the "parents") and somehow combines their chromosomes, so as to produce (usually) two other chromosomes (the "children"), which inherit, in a certain way, the genes of both parents. For more details about these operations, you can use the book Genetic Algorithms in Search, Optimization, and Machine Learning by David E. Goldberg (who is an expert in genetic algorithms and was advised by John H. Holland). You can also take a look at the book Computational Intelligence: An Introduction (2nd edition, 2007) by Andries P. Engelbrecht.
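A minimal sketch of both operators on binary chromosomes (the mutation rate and single-point cut are illustrative choices, not the only possible ones):

```python
import random

def mutate(chromosome, rate=0.1, rng=random):
    # bit-flip mutation: each gene flips independently with probability `rate`
    return [bit ^ 1 if rng.random() < rate else bit for bit in chromosome]

def crossover(parent1, parent2, rng=random):
    # single-point crossover: children swap tails after a random cut point
    point = rng.randrange(1, len(parent1))
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

p1, p2 = [0] * 8, [1] * 8
c1, c2 = crossover(p1, p2)
```

The contrast is visible here: every child gene comes from one of the two parents at the same position, whereas a mutated chromosome can contain gene values present in neither parent.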
{ "domain": "ai.stackexchange", "id": 1424, "tags": "comparison, evolutionary-algorithms, crossover-operators, mutation-operators, genetic-operators" }
Mesh convergence and its effect on results
Question: Let's say I had a mesh with 10000 nodes, then made another with 13000 nodes. Why would the finer one give me slightly different results from the other, even though it appears that the results converged? Also, why do results vary a lot before they converge? Why don't they converge straight away? Answer: Remember that you are trying to approximate the real solution using simple shape functions. Depending on the problem, you may need to use lots of elements to be able to describe the solution with sufficient accuracy. It is a feature of FEA that, if your model is correct, the value you are trying to calculate should converge to the exact solution as you refine the mesh. Just try a "mesh convergence" study: when you right-click on a result in the tree, you can add "convergence" to it and ANSYS will try to refine the mesh automatically. Regarding varying results before convergence in nonlinear analysis, ANSYS uses the Newton method and, especially in cases where the load step is big, the method may make a lot of wrong guesses all over the place. After a certain number of iterations where a solution has not been found, this is stopped, the load step is halved (bisection) and the Newton method starts again.
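The same behaviour shows up in a toy 1-D analogue: approximate a smooth function with piecewise-linear "elements" and watch the error fall as the mesh is refined (quadratically here, because linear shape functions are used; this is a sketch of the idea, not an ANSYS calculation):

```python
import math

def max_interp_error(n_elements):
    # worst-case error of piecewise-linear interpolation of sin(x) on [0, pi]
    h = math.pi / n_elements
    worst = 0.0
    for k in range(2000):
        x = math.pi * (k + 0.5) / 2000
        i = min(int(x / h), n_elements - 1)      # element containing x
        x0, x1 = i * h, (i + 1) * h
        lin = math.sin(x0) + (math.sin(x1) - math.sin(x0)) * (x - x0) / h
        worst = max(worst, abs(math.sin(x) - lin))
    return worst

errors = [max_interp_error(n) for n in (5, 10, 20, 40)]
```

Each doubling of the element count cuts the error by roughly a factor of four, so successive meshes give slightly different answers that settle toward the exact solution rather than jumping there at once.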
{ "domain": "engineering.stackexchange", "id": 4949, "tags": "mechanical-engineering, ansys, meshing" }
Number of different wavelengths in the visible spectrum reaching Earth
Question: I recognize the visible wavelengths of light extend from approximately 400 - 700 nm. But how many different wavelengths exist in that range? 300? 30,000? (400.01 - 699.99) If it's completely continuous, then how can photons be so easily absorbed, as they need to match an electron potential fairly precisely to be absorbed? (I did a naive calculation some time ago, and came up with a wavelength about every 1.5 nm, but I don't know if that is correct.) Answer: As you can see, the spectrum at the top of the atmosphere is continuous, with some saw-tooth excesses, but still continuous. The absorption does create a saw-tooth pattern, but even so there is continuity. To dispel doubts, here is the sun's spectrum showing continuity and absorption spectra: Solar spectrum with Fraunhofer lines as it appears visually. The first figure was made by registering intensities at each wavelength. You ask: how can photons be so easily absorbed, as they need to match an electron potential fairly precisely to be absorbed? Photons are absorbed when they impinge on atoms with matching energies; the others are not absorbed. Absorption is not the only interaction that photons can have. They can scatter off electrons and lose some energy (Compton scattering), or off whole atoms (Raman scattering). These do not have precise energy levels.
{ "domain": "physics.stackexchange", "id": 19886, "tags": "visible-light, photons, wavelength" }
How to convert molecule structure to 3D PyTorch tensors for CNN?
Question: I want to try convolutional neural networks for drug classification. I use PyTorch for the 3D CNN implementation. Is there a way to obtain 3D tensors from SMILES or SDF/PDB structures? Answer: I've seen the conversion of SMILES into 1D and 2D representations. Is there any reason you specifically wish to use 3D tensors? I haven't seen 3D tensors in the literature, but that isn't to say that they don't exist. torchdrug converts SMILES molecules into graphs based on NetworkX. You can install it using: pip install torchdrug Be aware that it only works for Python 3.6-3.9. pysmilesutils converts SMILES into a vector. You can install it using the following commands: python -m pip install git+https://github.com/MolecularAI/pysmilesutils.git PyTorch Geometric converts SMILES into a matrix constructed of property vectors of each atom in the molecule.
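As a toy illustration of the 1D route (character-level one-hot encoding, the kind of vectorization a SMILES tokenizer performs; the vocabulary below is made up for the example and is far from a complete SMILES alphabet):

```python
def smiles_to_onehot(smiles, vocab="CNOcno()=#123456789"):
    # hypothetical minimal featurizer: one row per character, one-hot over vocab
    index = {ch: i for i, ch in enumerate(vocab)}
    return [[1 if index[ch] == j else 0 for j in range(len(vocab))]
            for ch in smiles]

encoded = smiles_to_onehot("CCO")   # ethanol: 3 rows, one per character
```

Such a nested list can then be handed to `torch.tensor(...)` for a 1D CNN; a true 3D voxel grid would additionally need conformer coordinates (e.g. generated with RDKit) binned onto a spatial grid, which is a much heavier pipeline.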
{ "domain": "chemistry.stackexchange", "id": 17012, "tags": "computational-chemistry, software" }