Implementation of Generic SQL Data Reader
Question: I am using the below virtual method to read the data from a SqlDataReader, like: public IList<District> GetList() { IList<District> _list = new List<District>(); SqlConnection con = new SqlConnection(ConStr); try { string StoreProcedure = ConfigurationManager.AppSettings["SP"].ToString(); SqlCommand cmd = new SqlCommand(StoreProcedure, con); cmd.CommandType = CommandType.StoredProcedure; con.Open(); SqlDataReader rdr = cmd.ExecuteReader(); _list = new GenericReader<District>().CreateList(rdr); rdr.Close(); con.Close(); } finally { IsConnectionOpenThenClose(con); } return _list; } District Class: public class District { public int id { get; set; } public string name { get; set; } } And GenericReader Class as: public class GenericReader<T> { public virtual List<T> CreateList(SqlDataReader reader) { var results = new List<T>(); while (reader.Read()) { var item = Activator.CreateInstance<T>(); foreach (var property in typeof(T).GetProperties()) { if (!reader.IsDBNull(reader.GetOrdinal(property.Name))) { Type convertTo = Nullable.GetUnderlyingType(property.PropertyType) ?? property.PropertyType; property.SetValue(item, Convert.ChangeType(reader[property.Name], convertTo), null); } } results.Add(item); } return results; } } Is this approach good, or can we refactor it further? Answer: GetList() SqlConnection, SqlCommand and SqlDataReader all implement the IDisposable interface, hence you should either call Dispose() on those objects or enclose them in a using block. You should use var instead of the concrete type if the right-hand side of an assignment makes the concrete type obvious. E.g. for the line SqlConnection con = new SqlConnection(ConStr); we can see at first glance that the concrete type is SqlConnection, and therefore we should use var con = new SqlConnection(ConStr); instead. Don't use abbreviations for naming things, because they make reading and maintaining the code so much harder.
Underscore-prefixed variable names are usually used for class-level variables. Method-scoped variables should be named using camelCase casing, hence list would be better than _list, because Sam the maintainer wouldn't wonder about it. You return an IList<>, which is good because coding against interfaces is the way to go.
{ "domain": "codereview.stackexchange", "id": 33062, "tags": "c#, generics" }
Is the Universe still believed to be flat?
Question: I have read a handful of old articles from mid 2013 expressing that the Universe may, in fact, be curved. http://www.nature.com/news/universe-may-be-curved-not-flat-1.13776 http://www.nature.com/news/planck-snaps-infant-universe-1.12671 etc... My question is how is the apparent "lopsidedness" of the CMB Radiation explained in a flat universe model? I understand that the vast majority of the evidence indicates a spatially flat universe, but I would like to know if there is any merit to these claims: The temperature of the cosmic microwave background radiation fluctuates more on one side of the sky (the right side of this projection) than on the opposite side, a sign that space might be curved. Answer: From here: Before proceeding, it should be mentioned that the statistical significance of the result is still under debate. While the asymmetry is significant at the ≳3σ level, some question whether it is simply a consequence of the “look-elsewhere” effect: i.e., we test for all kinds of anomalies in the CMB, and the investigated parameter space is so vast that it’s no surprise that, by chance, one of the parameters shows a positive result. Cosmological models make statistical predictions about the distribution of temperature fluctuations on an ensemble of CMB skies, but we have only one CMB sky to observe. Therefore, if the observed asymmetry is a statistical fluke, we are stuck with it because there is no way to increase the statistics on this particular measurement. But if the asymmetry is real and not just a statistical fluke, then it is extremely important. It may well be a remnant of the preinflationary Universe! The dipole anisotropy in the power spectrum seems real (not the doppler dipole, but rather the one discussed in these articles). Alright, that's out of the way. This observation needs to be interpreted to get any further, though. One possibility is that this is just a $3\sigma$ excursion from the expected isotropy. 
Without a statistical sample of additional Universes to observe, we have no way of knowing for sure. And as the article points out, if you look at enough parameters, eventually you'd actually be surprised if you didn't find one off by a couple of $\sigma$. Another interpretation is that the Universe is a little bit curved (in the "Open" direction, i.e. negatively). This depends on the model some theorists are proposing being at least broadly correct. However, for the moment, this class of models offers no other presently testable predictions. I'm sure the theorists are working on more predictions that are testable, and CMB observers are working on measuring the predictions that have been made (the signal is supposed to be very faint, and the measurement is very difficult). But for the moment, no other tested predictions means that this is just another class of theories, and there is no compelling reason to prefer it over the usual flat Universe model. In fact, I would prefer the flat model, as it has fewer parameters and also explains the observations (even though I have to live with a $3\sigma$ statistical anomaly). The same article also mentions that the anomaly is seen in the two hemispheres roughly separated by the ecliptic, which is somewhat worrying. Alignment with the ecliptic, or galactic equator, or other preferred direction, to me is suggestive of some uncorrected systematic effect. Not to say that this can't be a real anomaly because it's aligned with the ecliptic, but it's worrying... So to sum up, there doesn't seem to be any compelling evidence for an open Universe. There is this anomaly in the isotropy of the power spectrum, but it's not so large that it couldn't just be happenstance. If the proposed curvaton model makes some additional predictions (differing from the predictions of the usual cosmology) that are later borne out by observation, that would be more strongly suggestive of an Open geometry.
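The look-elsewhere argument above can be made quantitative: if you scan many independent statistics, the chance that at least one of them fluctuates past 3σ by accident grows quickly. A rough sketch (assuming independent Gaussian tests, which real CMB anomaly searches are not, so treat the numbers as illustrative only):

```python
import math

def p_at_least_one(n_tests, sigma=3.0):
    # Two-sided tail probability of a single Gaussian test exceeding |sigma|
    p_single = math.erfc(sigma / math.sqrt(2.0))
    # Probability that at least one of n independent tests exceeds it
    return 1.0 - (1.0 - p_single) ** n_tests

# A single 3-sigma result is rare (~0.27%)...
p1 = p_at_least_one(1)
# ...but scanning ~100 anomaly statistics makes one quite likely by chance
p100 = p_at_least_one(100)
```

This is why a 3σ anomaly found by scanning the whole sky for all kinds of asymmetries is far less impressive than a 3σ result from a single pre-registered test.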
{ "domain": "physics.stackexchange", "id": 17250, "tags": "cosmology, universe, curvature, cosmological-inflation, cosmic-microwave-background" }
‘class octomap::ColorOcTree’ has no member named ‘genKey’ (hydro)
Question: Hi guys, I have to make fuerte code runnable under hydro. When compiling one package I get the following error: ‘class octomap::ColorOcTree’ has no member named ‘genKey’ I read in this changelog entry https://code.google.com/p/mrpt/source/browse/trunk/otherlibs/octomap/CHANGELOG.txt?r=3081 that genKey is deprecated. The author seems to use coordToKey instead. I tried that, but the problem is that this function doesn't return a boolean value like genKey does, and in the code this boolean is needed for an if clause. Does anyone know which function does the same as genKey and returns a boolean? Thanks Originally posted by mr42 on ROS Answers with karma: 3 on 2014-05-29 Post score: 0 Answer: Please refer to the official API documentation of OctoMap online: http://octomap.github.io/octomap/doc/ If you need the bool return value, there is a function coordToKeyChecked as a replacement. Originally posted by AHornung with karma: 5904 on 2014-06-02 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 18109, "tags": "ros-hydro" }
What defines stationary vs spinning in space?
Question: To produce artificial gravity on a space station, we simply spin it around some central point, and the acceleration makes objects "fall" outwards. If our space station is so far in deep space that it cannot detect any light from stars whatsoever, we can still tell if we are spinning or not based on the artificial gravity produced. Why? What are we spinning relative to? If I were to take a stationary space station and spin everything else in the universe around it, would it produce the same effect? Or to put it another way, suppose I took all of the matter in the Universe and made it into a gigantic disk. Would it be possible to spin the disk? What would it be spinning relative to? Answer: Isaac Newton described an experiment in which a bucket containing water is spun. As the water in the bucket starts to rotate, its surface becomes concave. The reason for this can be understood in terms of rotating frames of reference. Newton supposed (as an axiom of mechanics) that there are frames of reference (a system of locating particles in space relative to an origin, and in time) in which his 3 laws of motion are true, and momentum is conserved. Such a frame is called "inertial". If one inertial frame exists, then any frame with an origin that moves at constant velocity relative to the inertial frame is also inertial. As I mentioned, the existence of inertial frames is an axiom of mechanics. The truth of the axiom is verified by observation. It can't be proved. If a frame of reference is rotating with respect to an inertial frame, then the rotating frame is not inertial, and Newton's laws (in their simplest form) don't hold. Instead of "F=ma" there are extra terms for the Coriolis and centrifugal forces. In the case of a spinning space station, it is spinning relative to an inertial frame, and that is why there is a centrifugal force. For Newton, the bucket experiment proves that there is a notion of "absolute space".
This interpretation is not accepted by all, and in particular, Ernst Mach rejected the idea of any absolute space. His ideas were influential on Einstein. You can read more about Newton's bucket experiment at http://www-history.mcs.st-andrews.ac.uk/HistTopics/Newton_bucket.html and you can read about Ernst Mach's ideas regarding absolute space at https://en.wikipedia.org/wiki/Mach%27s_principle
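In the station's rotating (non-inertial) frame, the centrifugal term is simply $a = \omega^2 r$, so the spin rate needed for a given artificial gravity follows directly. A small sketch with an illustrative 100 m radius (the numbers are not from the question):

```python
import math

def spin_rate_for_gravity(radius_m, g=9.81):
    """Angular speed (rad/s) at which the centrifugal acceleration
    omega^2 * r at the rim equals the desired gravity g."""
    return math.sqrt(g / radius_m)

# A hypothetical 100 m radius station needs about 3 rpm for 1 g at the rim
omega = spin_rate_for_gravity(100.0)        # rad/s
rpm = omega * 60.0 / (2.0 * math.pi)        # revolutions per minute
```

Crucially, this ω is measured relative to an inertial frame; no outside stars are needed to detect it, which is exactly the point of Newton's bucket.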
{ "domain": "astronomy.stackexchange", "id": 2607, "tags": "gravity, space-time" }
workstation multi-robot with rviz
Question: Hi there! We've got a problem (my classmate and I) with multi-robot supervision in Rviz. We would like to see 2 turtlebots in Rviz on the workstation, but we don't know whether we have to write a .launch file containing the turtlebot commands and names; we aren't sure. We think ROS_NAMESPACE could cause a conflict on the network when we're trying to put 2 turtlebots in Rviz. For the moment, we use: roscore on the workstation. ROS_NAMESPACE=turtlebotX roslaunch turtlebot_bringup minimal.launch for each turtlebot, where X is a letter identifying a turtlebot (turtlebotA, turtlebotB). ROS_NAMESPACE=turtlebotX roslaunch turtlebot_bringup 3dsensor.launch for each turtlebot. After this command, we get a flood of warnings in each turtlebot computer's prompt. There seems to be a warning about tf_old_data on turtlebotB only. roslaunch turtlebot_rviz_launchers view_robot.launch The workstation can control one of them, but only if they're not connected to it at the same time. If A and B are connected, the workstation only sees the same turtlebot in Rviz (in our case, A). So maybe it's about alphabetical order and the workstation always stores turtlebotA. We can see the turtlebot's camera and we can move it with the teleop keyboard command. So, the real questions are: do we need a .launch file? If yes, what kind of information do we write inside? Did we make a mistake with a command, or forget a useful one? Thanks for listening. Originally posted by florian2 on ROS Answers with karma: 16 on 2015-02-12 Post score: 0 Answer: We found the real problem thanks to you. It was the school's proxy, which blocked port 123 (NTP) and so prevented time synchronization. The new problem is with Rviz, and more precisely 'odom'. We still have 2 turtlebots connected to the workstation and we want to see both in Rviz, but we don't know how to display them correctly. For the moment, Rviz gets information from the two turtlebots and thinks they are the same robot.
Do we have to write a launch file, and if so, what exactly would it contain for an example with 2 turtlebots? Could tf_frames be part of the problem too? Originally posted by florian2 with karma: 16 on 2015-03-11 This answer was ACCEPTED on the original site Post score: 0
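For the "Rviz thinks they are the same robot" symptom, a minimal launch-file sketch along these lines is the usual approach: put each robot in its own namespace and prefix its tf frames so turtlebotA/odom and turtlebotB/odom stay distinct. The names and the tf_prefix parameter here are illustrative assumptions; the exact includes depend on your turtlebot packages and ROS version:

```xml
<launch>
  <!-- One namespaced group per robot; repeat the group for turtlebotB -->
  <group ns="turtlebotA">
    <!-- tf_prefix makes frame names like turtlebotA/odom unique,
         so RViz can display the two robots separately -->
    <param name="tf_prefix" value="turtlebotA" />
    <include file="$(find turtlebot_bringup)/launch/minimal.launch" />
  </group>
</launch>
```

In RViz you would then add one RobotModel/TF display per prefixed frame tree.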
{ "domain": "robotics.stackexchange", "id": 20862, "tags": "rviz, multi-robot" }
Freeze water bottles during colder nights, and place them in front of a fan during warmer days?
Question: Is this advice correct? Are they alluding to Freezing Point Depression? Adding salt to tap water lowers the solution's freezing point. Pour 3 tbsp (51 g) of salt into each of your 3 plastic water bottles. Use disposable plastic bottles for the easiest set-up and clean up. Pour 3 tablespoons (51 g) of table salt per bottle. Put the caps back on and shake the bottles to thoroughly mix the salt. I live in a studio flat. I know of the laws of thermodynamics, also here – but my plan is different. Where I live, night time is way cooler than day time. I'll freeze the bottles over night, because even if my freezer heats up more than usual, my night room temperature will still be cool enough. During day time, when it's hot, I'll place the frozen bottles in front of my fan to air condition my studio flat. Does this work? I read Quora. Answer: There are a few things to consider. The freezer won't be 100% efficient. It'll generate more heat than you might expect while freezing the bottles, and this will add to the average temperature in your flat. Operating the fan also generates heat. The freezer might make more noise at night than otherwise. Air conditioners also remove moisture from the air, as well as cooling. The bottle/fan method wouldn't do this. It's true that the method would generate a cool breeze, but the overall effect would be a slight increase of the average temperature. So instead, try shutting the curtains on bright days to stop the sun heating your flat; also, open windows at night and shut them in the morning on hot days to keep the cool night air inside.
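The energy balance behind "the freezer won't be 100% efficient" can be sketched numerically. A freezer moving heat Q out of the bottles dumps Q plus its electrical work back into the room, and the bottle heat returns when the ice melts the next day, so the net effect over a full cycle is the work input. The COP of 2 below is an assumed, illustrative figure:

```python
def net_heat_into_room(q_removed_from_bottles_J, cop=2.0):
    """Heat bookkeeping for freezing bottles indoors.
    work = Q / COP is the electrical input; the freezer dumps Q + work
    into the room at night, and the bottles give back Q when they melt,
    so the flat gains 'work' joules net over the whole cycle."""
    work = q_removed_from_bottles_J / cop
    heat_dumped_at_night = q_removed_from_bottles_J + work
    net_over_cycle = work
    return heat_dumped_at_night, net_over_cycle

# Freezing ~3 L of water (latent heat ~334 kJ/kg) removes about 1 MJ:
dumped, net = net_heat_into_room(3.0 * 334e3)
```

The asker's plan partly dodges this by running the freezer at night with windows open, which is why venting the night-time heat matters so much.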
{ "domain": "physics.stackexchange", "id": 78087, "tags": "thermodynamics" }
Why does this rocket stop rolling while going up
Question: I have been watching a ludicrous video about this insane notion of a flat earth and, among a lot of wrong and ill presented proofs (some fake ones) there is an interesting one which I don't have the knowledge to debunk. In the video, at the half hour mark (https://youtu.be/PItu4aeGUrE?t=1775) we are shown an amateur rocket flying up straight into the sky, then the camera switches to the one strapped on the rocket, which is spinning while flying. I can understand that the rocket flies straight instead of bending as space shuttles do (they have to in order to enter orbit) and the video cuts too early to start seeing the normal bending effect due to the earth rotation. I can also understand that the rocket rolls in order to gain stability, but what I cannot understand is why the rocket STOPS rolling at some point. The video claims that it hit something and also stopped progressing, which is not really believable. What I believe is that it ran out of fuel and both the forward and the roll propulsion just stopped and the atmosphere simply acted as a brake (regardless of the claim that "rockets have no brake"). Is there any other possible explanation? Is my understanding of the roll correct? At one point I thought that maybe the rocket did not have a propulsion system that induced the roll but only used the fins to roll, but that will make explaining the abrupt stop harder. Answer: The spin was stopped by a yo-yo despin device. A yo-yo despin device releases a pair of tethered masses. The tether becomes rigid due to tension when the masses are fully deployed, thereby significantly increasing the moment of inertia about the roll axis. Think of how figure skaters stop their rapid spin by extending their arms.
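The yo-yo despin mechanism described above follows directly from conservation of angular momentum: deploying the tethered masses sharply increases the moment of inertia, so the spin rate drops. A sketch with hypothetical numbers (rocket inertia, masses, and tether length are all made up for illustration):

```python
def spin_after_deploy(I_body, omega0, m_each, tether_len):
    """Spin rate after two tethered masses deploy, from conservation of
    angular momentum: I0*omega0 = (I0 + 2*m*l^2)*omega1."""
    I_new = I_body + 2.0 * m_each * tether_len ** 2
    return I_body * omega0 / I_new

# Hypothetical: 0.5 kg*m^2 rocket spinning at 20 rad/s,
# two 0.2 kg masses on 2 m tethers
omega1 = spin_after_deploy(0.5, 20.0, 0.2, 2.0)
```

In a real yo-yo despin, the masses and tethers are then released, carrying the angular momentum away entirely, which is why the roll can stop almost dead rather than just slow down.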
{ "domain": "physics.stackexchange", "id": 49843, "tags": "rocket-science" }
Weibull distribution Units - ENGIEQSOL
Question: I am using Engineering Equation Solver, and it has been telling me the units of my distribution are inconsistent. According to the book I am reading, I think it should be unitless (as it is a probability). However, EES disagrees about: $$ h = \frac{k}{c}\cdot \left(\frac{v}{c}\right)^{k-1}\cdot \exp\left(-\left(\frac{v}{c}\right)^{k}\right)$$ In theory, v and c have the same units [m/s] and k is unitless. I do not understand why the units are not working out. Hope you can help me out. Answer: If you have defined the same units for v and c then the equation should work just fine. In all likelihood, h is the probability density, so its units should be $\frac{s}{m}$. In order to see where the problem lies, I would suggest breaking up the terms and checking their units independently within the software. I.e. I would find the units in EES for: $\frac{k}{c}$ : the units should be $\frac{s}{m}$ $\frac{v}{c}$ : it should be unitless $\left(\frac{v}{c}\right)^{k-1}$ : it should be unitless $\exp\left(-\left(\frac{v}{c}\right)^k\right)$ : it should be unitless Then I would build the equation back up to see if the problem persists.
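The point that h is a density with units s/m (not unitless) can be sanity-checked numerically: a density in s/m times a speed increment dv in m/s integrates to a dimensionless 1. A quick sketch, with arbitrary illustrative parameters k=2 and c=8 m/s (EES itself is not involved here):

```python
import math

def weibull_pdf(v, k, c):
    """Weibull density of wind speed v. Its value has units 1/[v],
    i.e. s/m, so that integrating over v (m/s) gives a pure number."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

# Crude rectangle-rule check that the density integrates to ~1
dv = 0.01
total = sum(weibull_pdf(i * dv, 2.0, 8.0) * dv for i in range(1, 10000))
```

So the probability (the integral) is unitless, but the density h itself is not, which is exactly what EES is complaining about.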
{ "domain": "engineering.stackexchange", "id": 3821, "tags": "power, energy, turbines, wind-power, unit" }
Which approach is the common one in the literature for determining the bacterial growth rate?
Question: I have the following data, which is OD600 (the second component) vs. time (the first component): data = {{0, 0.046}, {40, 0.111}, {80, 0.291}, {120, 0.808}, {160, 1.742}, {200, 3.319}, {240, 5.017}, {280, 5.503}, {320, 5.897}} I want to obtain the growth rate of bacteria from the above data. If I fit a logistic function, that is, $$f(t) = \frac{L}{1 + e^{-k(t - t_0)}},$$ where $k$ is the growth rate, I obtain the following curve: In this approach, the growth rate is: $k = 0.028$. Now, if I fit the natural logarithm of the logistic function to the natural logarithm of OD600, I obtain: In this case, the growth rate is: $k = 0.025$. Which of these values (or, approaches) is the correct and common approach for determining the growth rate of bacteria? Answer: The simple logistic function that you are using here is a fine first-order approximation for modeling microbial growth. Both logarithmic and linear scales are reasonable, with different tradeoffs as explained below. It is unsurprising that these two scales will give slightly different numbers, however, as the exponential approach gives more relative weight to fit errors on the lower values. I wouldn't be too concerned about the difference in numbers, though, since they are only 1.1-fold different, and that's within the likely limits of your assay precision. Thinking about this from a biological perspective, however, there are a number of key limitations to fit accuracy that you need to take into account when you consider and present your data: You show only a single replicate. I would not trust this data without at least three replicates, which will give some understanding of the degree of variability encountered with your protocol. At the low end of the scale, precision will likely be limited by instrument precision and media density. What are your control blanks? Are the OD values that you present with or without media subtraction? 
Since you are showing a high final OD, I would guess that you are working with rich media, which typically has a significant OD even without any cells present. At the high end of the scale, remember that OD itself is a logarithmic measure. Once the level of light penetrating is very low (below ~1%, i.e., above OD 2.0), most instruments will start having difficulty quantifying the penetrating light. Are you actually measuring an OD of 6, or is this a compensated measure from a much shorter path length? If it's a compensated measure, that will again imply decreased precision on the low end. These questions can be addressed by using controls that determine the effective linear range of your instrument with respect to your media and path length, such as the method that we validated in this interlaboratory study. Once you know the range in which you can trust your data, then you can determine your answer on how to fit: Since your primary interest is growth rate, you want to focus on the exponential portion of the behavior. For this, the log-scale fit is better, since it places more weight on the range where the exponential growth dominates the behavior. Based on what you've presented, this is likely the approach that will work for you. If your low data is mostly outside of the effective linear range, however, you can fall back to the linear-scale fit, which will de-emphasize the low data. If your primary interest were the saturated level, then you would want to prioritize the linear-scale fit instead, for the converse reason. Bottom line: you need to figure out your data limitations, based on the interaction between biology and instrument, then see if that supports the log-scale fit or if you need to fall back to linear.
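The log-scale fit recommended above can be sketched without any fitting library: restrict to the early, exponential-phase points from the question and take the least-squares slope of ln(OD) versus time. The choice of "first five points" as the exponential window is an illustrative assumption, which is exactly the kind of decision the effective-linear-range controls should inform:

```python
import math

data = [(0, 0.046), (40, 0.111), (80, 0.291), (120, 0.808), (160, 1.742),
        (200, 3.319), (240, 5.017), (280, 5.503), (320, 5.897)]

def log_linear_rate(points):
    """Least-squares slope of ln(OD) vs time, i.e. the specific growth
    rate during exponential growth."""
    ts = [t for t, _ in points]
    ys = [math.log(od) for _, od in points]
    n = len(points)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    sxy = sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
    sxx = sum((t - t_bar) ** 2 for t in ts)
    return sxy / sxx

# Use only the early points, where growth is still roughly exponential
k = log_linear_rate(data[:5])
```

The slope comes out near the asker's log-scale value of ~0.025, and shifting the window boundaries shows how sensitive the estimate is to which points you trust.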
{ "domain": "biology.stackexchange", "id": 11686, "tags": "theoretical-biology, population-biology, growth" }
Producer - multiple consumer implementation using futures
Question: I've tried to implement a producer-multiple consumer pattern using futures and I was wondering should it ever deadlock, and if it should, why? I have this method RunNTimes which I implemented in order to run the implementation, e.g. 1000 times with 500 consumers. The implementation shouldn't deadlock as far as I know and it often doesn't (running it multiple times and telling RunNTimes to repeat for 1000 times). However, every once in a while it blocks at the first or second run (I'm running it from sbt). For the sake of simplicity, ignore the RunNTimes method, I've included it only to explain why the occasional deadlock seems strange to me. package comparison_examples.futuresimplementation import common.MeasurementHelpers._ import common._ import scala.collection.mutable import scala.concurrent.blocking import scala.concurrent.ExecutionContext.Implicits.global import scala.concurrent.duration.Duration import scala.concurrent.{Await, Future, Promise} case class Item(value: Int) object ProducerConsumer { private val sharedQueue = mutable.Queue[Item]() def main(args: Array[String]): Unit = { val n: Double = if (args.nonEmpty) args(0).toDouble else Configuration.runTimes.toDouble val numConsumers = if (args.length == 2) args(1).toInt else Configuration.numberOfConsumers val funRunNTimes = MeasurementHelpers.runNTimes(n.toInt) _ val results = funRunNTimes { val p = Promise[Boolean]() val producer = new Producer(p, sharedQueue) val cs = startConsumers(numConsumers, List[Consumer](), sharedQueue, p) val fp = producer.start() val fs = cs.map(x => x.start()) val f = Future.sequence(fs) Await.result(fp, Duration.Inf) Await.result(f, Duration.Inf) val allElementsObtained = cs.flatMap(_.getObtainedItems) println(allElementsObtained.length) } println(s"Run times: $n") println(s"Average number of distinct threads: ${results.map(x => x._3.length).sum / n }") println(s"Average duration: ${results.map(x => x._2).sum / n} milliseconds") println() } def startConsumers(max: 
Int, result: List[Consumer], sharedQueue: mutable.Queue[Item], p: Promise[Boolean]): List[Consumer] = if (max == 0) result else { val consumer = new Consumer(p, sharedQueue) startConsumers(max - 1, consumer :: result, sharedQueue, p) } } class Producer(p: Promise[Boolean], sharedQueue: mutable.Queue[Item]) { def start(): Future[Unit] = Future { val n = Configuration.workToProduce for (i <- 1 to n) { val item = Item(i) sharedQueue.synchronized { sharedQueue.enqueue(item) sharedQueue.notifyAll() } } p success true } } class Consumer(p: Promise[Boolean], sharedQueue: mutable.Queue[Item]) { private var obtainedItems = List[Item]() def getObtainedItems: List[Item] = obtainedItems def start(): Future[Unit] = Future { addCurrentThread() while (sharedQueue.nonEmpty || !p.isCompleted) { obtainedItems = getItem match { case None => obtainedItems case Some(item) => item :: obtainedItems } } } def getItem: Option[Item] = blocking { sharedQueue.synchronized { while (sharedQueue.isEmpty && !p.isCompleted) { sharedQueue.wait() } val result = if (sharedQueue.nonEmpty) Some(sharedQueue.dequeue()) else None sharedQueue.notifyAll() result } } def printObtainedItems(): Unit = obtainedItems.foreach(x => print(s"\t ${x.value}")) } Answer: I notice several things. You have a data race on the shared queue. The consumer has unsynchronized access to the nonEmpty property. The notifyAll after you removed an element from the queue is unnecessary. Consumers shouldn't care about other consumers removing items. But the main problem, which I suspect is the source of your deadlocks, is the race condition between the producer and the consumers. Here's what I think happens: The producer produces the last item. It notifies all threads, releases the lock, but is preempted before setting the promise. Consumer C1 wakes up, sees the queue is non-empty and grabs the element. It notifies all threads, releases the lock, and goes out to add the element to its list.
The other consumers wake up, see that the queue is empty but the promise is not set, and go back to sleep. The producer sets the promise. Consumer C1 sees that the promise is set and terminates. The other consumers continue sleeping for all eternity. To solve this, you need to notifyAll after you've set the promise. You can also probably optimize the whole thing by only calling notify() (waking a single thread) after adding an element.
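The corrected wake-up ordering is language-independent; here is a compact Python threading analogue of the fix (not the Scala code itself), where the key line is the notify_all issued after the completion flag is set:

```python
import threading
from collections import deque

queue, done = deque(), [False]
cond = threading.Condition()
results = []

def producer(n):
    for i in range(n):
        with cond:
            queue.append(i)
            cond.notify_all()
    with cond:
        done[0] = True
        cond.notify_all()  # crucial: wake everyone AFTER marking completion

def consumer():
    while True:
        with cond:
            # Sleep only while there is nothing to do and work may still come
            while not queue and not done[0]:
                cond.wait()
            if queue:
                results.append(queue.popleft())
            elif done[0]:
                return

threads = [threading.Thread(target=consumer) for _ in range(4)]
for t in threads:
    t.start()
producer(100)
for t in threads:
    t.join()
```

If the final notify_all were issued before done[0] = True (or omitted), a consumer could check the flag, go to sleep, and never be woken again: the same lost-wakeup deadlock as in the Scala version.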
{ "domain": "codereview.stackexchange", "id": 25662, "tags": "scala, concurrency, producer-consumer" }
Geometry of the curvature of spacetime around rotating or spinning revolving objects
Question: What is the geometry of the curvature of spacetime around rotating or spinning revolving objects, or anything that has an angular momentum? Answer: Take a small object, such as a golf ball; it will have an undetectable spacetime distortion, neither from its own mass nor from any angular momentum. So leave the mass fixed, and spin the golf ball up to 99.999999...% of light speed, assuming it will not explode long before that. There will still be no detectable mass-induced spacetime distortion, but frame dragging will occur. Rotational frame-dragging (the Lense–Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under the Lense–Thirring effect, the frame of reference in which a clock ticks the fastest is one which is revolving around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move past the massive object faster than light moving against the rotation, as seen by a distant observer. It is now the best known frame-dragging effect, partly thanks to the Gravity Probe B experiment. Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction. Gravity Probe B is well covered on Wikipedia.
What is the geometry of the curvature of spacetime around rotating or spinning revolving objects or anything that has angular momentum? Frame-dragging and its effects on spacetime geometry are best described by the Kerr metric, taking a mass $M$ and angular momentum $J$ $${\displaystyle {\begin{aligned}c^{2}d\tau ^{2}=&\left(1-{\frac {r_{s}r}{\rho ^{2}}}\right)c^{2}dt^{2}-{\frac {\rho ^{2}}{\Lambda ^{2}}}dr^{2}-\rho ^{2}d\theta ^{2}\\&{}-\left(r^{2}+\alpha ^{2}+{\frac {r_{s}r\alpha ^{2}}{\rho ^{2}}}\sin ^{2}\theta \right)\sin ^{2}\theta \ d\phi ^{2}+{\frac {2r_{s}r\alpha c\sin ^{2}\theta }{\rho ^{2}}}d\phi dt\end{aligned}}}$$ $r_s$ is the Schwarzschild radius ${\displaystyle r_{s}={\frac {2GM}{c^{2}}}}$ ${\displaystyle \alpha ={\frac {J}{Mc}}}$ ${\displaystyle \rho ^{2}=r^{2}+\alpha ^{2}\cos ^{2}\theta \,\!}$ ${\displaystyle \Lambda ^{2}=r^{2}-r_{s}r+\alpha ^{2}\,\!}$ In the non-relativistic limit where $M$ (or, equivalently, $r_s$) goes to zero, the Kerr metric becomes the orthogonal metric for the oblate spheroidal coordinates $${\displaystyle c^{2}d\tau ^{2}=c^{2}dt^{2}-{\frac {\rho ^{2}}{r^{2}+\alpha ^{2}}}dr^{2}-\rho ^{2}d\theta ^{2}-\left(r^{2}+\alpha ^{2}\right)\sin ^{2}\theta d\phi ^{2}}$$ We may rewrite the Kerr metric in the following form $${\displaystyle c^{2}d\tau ^{2}=\left(g_{tt}-{\frac {g_{t\phi }^{2}}{g_{\phi \phi }}}\right)dt^{2}+g_{rr}dr^{2}+g_{\theta \theta }d\theta ^{2}+g_{\phi \phi }\left(d\phi +{\frac {g_{t\phi }}{g_{\phi \phi }}}dt\right)^{2}}$$ This metric is equivalent to a co-rotating reference frame that is rotating with angular speed Ω that depends on both the radius $r$ and the colatitude θ $${\displaystyle \Omega =-{\frac {g_{t\phi }}{g_{\phi \phi }}}={\frac {r_{s}\alpha rc}{\rho ^{2}\left(r^{2}+\alpha ^{2}\right)+r_{s}\alpha ^{2}r\sin ^{2}\theta }}}$$ In the plane of the equator this simplifies to ${\displaystyle \Omega ={\frac {r_{s}\alpha c}{r^{3}+\alpha ^{2}r+r_{s}\alpha ^{2}}}}$ Thus, an inertial reference frame is
entrained by the rotating central mass to participate in the latter's rotation; this is frame-dragging.
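To get a feel for how tiny this entrainment is for ordinary bodies, the equatorial Ω formula can be evaluated with approximate Earth-like numbers (the angular momentum value is a rough textbook figure, so treat the result as order-of-magnitude only):

```python
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8     # m/s, speed of light
M = 5.972e24    # kg, Earth mass
J = 7.07e33     # kg m^2/s, Earth's spin angular momentum (approximate)

r_s = 2 * G * M / c**2   # Schwarzschild radius of Earth (~9 mm)
alpha = J / (M * c)      # Kerr spin parameter, in metres

def omega_equator(r):
    """Frame-dragging angular speed in the equatorial plane (rad/s),
    Omega = r_s*alpha*c / (r^3 + alpha^2*r + r_s*alpha^2)."""
    return r_s * alpha * c / (r**3 + alpha**2 * r + r_s * alpha**2)

omega = omega_equator(6.371e6)   # at Earth's surface
```

The result is of order 10^-14 rad/s, which is why detecting frame dragging took a dedicated mission like Gravity Probe B.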
{ "domain": "physics.stackexchange", "id": 34972, "tags": "general-relativity, angular-momentum, astrophysics, metric-tensor, frame-dragging" }
Need help in understanding a small example
Question: Pardon me, I agree the title of the question is not clear. I would like to understand the below steps, which are picked from the textbook "Hands on machine learning". >>> housing['income_cat'].value_counts() >>> 3.0 7236 2.0 6581 4.0 3639 5.0 2362 1.0 822 If I am not wrong, the above step is to get the counts for each class. For example, for class '3' there are 7236 instances. Likewise, for class '2' there are 6581 instances. >>> housing['income_cat'].value_counts() / len(housing) >>> 3.0 0.350581 2.0 0.318847 4.0 0.176308 5.0 0.114438 1.0 0.039826 Next, I was not clear what the intention behind the above step was. By doing the above step, what am I supposed to learn? and >>> from sklearn.model_selection import StratifiedShuffleSplit >>> split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) >>> for train_index, test_index in split.split(housing, housing["income_cat"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] >>> strat_test_set['income_cat'].value_counts() / len(strat_test_set) >>> 3.0 0.350533 2.0 0.318798 4.0 0.176357 5.0 0.114583 1.0 0.039729 Name: income_cat, dtype: float64 How come the strat_test_set['income_cat'].value_counts() / len(strat_test_set) results are almost the same as the results of housing['income_cat'].value_counts() / len(housing)? Answer: Regarding your first question, by dividing by the length, you get the percentage of each category, and you can see that one category has a very low percentage. If you use the regular train_test_split, the proportion of the categories will be different in each set, as the split is made randomly, and this can introduce a bias. Imagine categories with a very low number of observations, you could even have categories missing entirely in either of the sets, which will cause trouble for your model. The use of stratified sampling allows you to have the same proportion for the categories.
That is why you see the same results: it is the goal of this sampling.
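The mechanism behind StratifiedShuffleSplit can be sketched without sklearn: sample the test fraction within each class separately, so the test set reproduces the overall class proportions by construction. The label counts below are made-up illustrative numbers, not the housing data:

```python
import random
from collections import Counter

random.seed(42)
# Hypothetical labels with uneven class frequencies (1000 samples)
labels = [1] * 40 + [2] * 320 + [3] * 350 + [4] * 180 + [5] * 110

def stratified_indices(labels, test_frac=0.2):
    """Take test_frac of the indices *within each class*, so each class's
    share of the test set matches its share of the whole dataset."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    test = []
    for idxs in by_class.values():
        random.shuffle(idxs)
        test.extend(idxs[: int(len(idxs) * test_frac)])
    return test

test_idx = stratified_indices(labels)
overall = Counter(labels)
test = Counter(labels[i] for i in test_idx)
# Each class's proportion in the test set matches the overall proportion
```

That per-class sampling is exactly why the two value_counts ratios in the question come out nearly identical.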
{ "domain": "datascience.stackexchange", "id": 3221, "tags": "python, pandas, preprocessing" }
Determining Base Configuration Tsb from End-Effector Configuration Tse for youbot
Question: Hello there, hope you are doing great. Right now I am trying to control the youBot to pick up a cube and place it at a certain position, in simulation. However, I believe that the reference trajectory is very prone to singularities. Because of that, I am trying to create a reference trajectory that does not contain singularity possibilities. I am trying out new initial end-effector configurations for the youBot. However, I need to extract the Tsb matrix (base frame relative to space frame) from the Tse matrix (end-effector frame relative to space frame), because, in order to tune my PI gains, I need to start the robot with 0 error. How can I manage to do that? For example, to get Tse from Tsb I do this: Tse = Tsb * Tb0 (which has a fixed value) * T0e (comes from forward kinematics) Answer: Do I understand correctly that the problem is with the matrix equation you have posted and how to calculate $T_{sb}$ from that equation? You can solve the matrix equation by multiplying from the right with the inverse of the last matrix on the rhs, step by step. $$\require{cancel} $$ $$T_{se} = T_{sb} \times T_{b0} \times T_{0e}$$ $$T_{se} \times T_{0e}^{-1}= T_{sb} \times T_{b0} \times T_{0e} \times T_{0e}^{-1}$$ $$T_{se} \times T_{0e}^{-1}= T_{sb} \times T_{b0} \times \cancel{ T_{0e} \times T_{0e}^{-1}}$$ $$T_{se} \times T_{0e}^{-1}= T_{sb} \times T_{b0}$$ $$T_{se} \times T_{0e}^{-1} \times T_{b0}^{-1}= T_{sb} \times T_{b0} \times T_{b0}^{-1}$$ $$T_{se} \times T_{0e}^{-1} \times T_{b0}^{-1}= T_{sb} \times \cancel{T_{b0} \times T_{b0}^{-1}}$$ $$T_{se} \times T_{0e}^{-1} \times T_{b0}^{-1}= T_{sb}$$ $$T_{sb} = T_{se} \times T_{0e}^{-1} \times T_{b0}^{-1}$$
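The derivation can be checked numerically with homogeneous transforms: build an arbitrary Tsb, Tb0, and T0e, compose Tse, then recover Tsb with the inverses. The rotations and translations below are made-up illustrative values, not youBot parameters:

```python
import numpy as np

def transform(z_rot_deg, p):
    """4x4 homogeneous transform from a z-axis rotation (degrees)
    and a translation vector p."""
    th = np.radians(z_rot_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)],
                 [np.sin(th),  np.cos(th)]]
    T[:3, 3] = p
    return T

# Hypothetical frames: base in space, fixed arm-base offset, end-effector
Tsb = transform(30, [1.0, 0.5, 0.0])
Tb0 = transform(0, [0.17, 0.0, 0.0026])
T0e = transform(45, [0.0, 0.0, 0.3])

Tse = Tsb @ Tb0 @ T0e
# Recover the base frame exactly as derived in the answer:
Tsb_recovered = Tse @ np.linalg.inv(T0e) @ np.linalg.inv(Tb0)
```

In practice one would invert the homogeneous transforms analytically ([R^T, -R^T p]) rather than with a generic matrix inverse, but the recovered Tsb is the same either way.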
{ "domain": "robotics.stackexchange", "id": 2231, "tags": "mobile-robot, robotic-arm, forward-kinematics, transforms" }
Maximum height for a given width of a wall?
Question: Say I have a wall that can support 8 feet of height for a width of 20 inches. Is there any way, from this information alone, to determine how much more height this material can support by increasing its width to 2 feet? Answer: You can't use this information without a model that tells how the supported height changes with the width, i.e. a function H(W). However, you can assume that it is approximately linear or quadratic, depending on the context of your question.
{ "domain": "physics.stackexchange", "id": 34316, "tags": "gravity" }
Is the oscillating EM field and probability interference caused by the same wave property of light?
Question: Light travels as an oscillating EM wave. It can also interfere with itself to create wavelike probability distributions. Are these wave behaviors one and the same, just different shades of the wave nature of light? Or are these two separate phenomena, and light just happens to exhibit both? Hopefully an analogy can better illustrate what I'm asking. If I pluck a guitar string and leave my finger on it, I will see the string vibrate with a wave pattern and I will feel the string against my finger with a pressure pattern like a wave. Both things are sensed because of one wavelike thing, the string getting plucked. Now, if I drop something in water, two things can also happen. I can see the water oscillate transversely and I will hear sound because of longitudinal air displacement. The event had two wavelike outcomes, water ripples and sound. However, the mechanisms causing these phenomena are different, and it is just because of the circumstances that they are sensed together. In the case of light's EM fields and probability distributions, is it more like the first or the second analogy? *Sorry, I know the analogy isn't particularly strong Answer: Light interference is linked to the phase of the EM wave. It is constructive when signals are in phase, destructive when they are out of phase. For an oscillating EM wave, the oscillating part refers to phase oscillations. Frequency and amplitude (at least in a vacuum) are constant; the phase is oscillating. So yes, it is the same phenomenon. The image in the Optical interference paragraph on Wikipedia shows this clearly. The EM wave shown travels a bit farther, so its phase oscillates a bit more, and when the waves are superposed, interference is due to them being in phase or out of phase.
{ "domain": "physics.stackexchange", "id": 35289, "tags": "quantum-mechanics, electromagnetism, wave-particle-duality" }
Grep for pattern recursive and disable file
Question: On a shared host, I'd like to set up a cron job which scans folders recursively for some base64 malware strings. Therefore, I've written the following script: #!/bin/bash if [ $# -ne 1 ]; then echo $0: usage: ./findone folder_to_start_with exit 1 fi folder=$1 IFS=$'\n' searchfiles=($(grep -r -F -n -f malware-strings.dat $folder)) for (( i=0; i<${#searchfiles[@]}; i++ )); do STR=$(echo ${searchfiles[i]} | awk -F':' '{print $1}') if [ -z "$STR" ]; then true; else chmod 000 $STR; fi done ## Do something else like mail results etc. printf '%s\n' "${searchfiles[@]}" My local tests are doing what I expect: if a string pattern from "malware-strings.dat" is found, the file's permissions are changed to 000. Before scanning the production sites, I wanted to ask for a code review as I'm new to Bash and do not want to mess things up. I would also like your judgement on whether disabling the file with chmod is enough, or whether it is advisable to move the file outside of the www directory. Answer: Your input checking is not very strict. Any argument could pass that check, even an empty string, but the script needs specifically a valid path to a folder. So why not check for that: if [ ! -d "$1" ]; then echo $0: usage: ./findone folder_to_start_with exit 1 fi The grep command will output the matched lines, with the line number. But you don't need those, only the filename. And there could be multiple matches per file. It would be better to use the -l flag, and that way you won't need the awk at all; you will get only the file names, ready to use. This condition is very awkward: if [ -z "$STR" ]; then true; else chmod 000 $STR; fi Instead of an empty branch that does nothing, you could have flipped the condition by simply dropping the -z, and the unused branch with it, resulting in the simpler: if [ "$STR" ]; then chmod 000 $STR fi (In any case, when you rewrite the grep to use -l, this condition will not be needed, so this part will be simply gone.) 
As you guessed, in addition to chmod it would be better to move the file out of the vulnerable directory, to a safer place. One reason is to make it less vulnerable; another is that such files, buried deep within your web directories, might otherwise go unnoticed more easily. I'd also suggest adding email alerts after detecting such malware.
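Putting the suggestions together, a revised script might look like the sketch below (untested on the asker's host; `scan_and_disable` is a hypothetical helper name):

```shell
# Hypothetical rewrite combining the review's points: validate the folder
# argument, and let grep -l emit only the matching file names (no awk needed).
scan_and_disable() {
    local patterns=$1 folder=$2
    if [ ! -d "$folder" ]; then
        echo "usage: scan_and_disable patterns_file folder_to_scan" >&2
        return 1
    fi
    # -r recursive, -l file names only, -F fixed strings, -f patterns from file
    grep -rlF -f "$patterns" "$folder" | while IFS= read -r file; do
        chmod 000 "$file"
        echo "disabled: $file"
    done
}
```

As noted above, moving matched files into a quarantine directory outside the web root would be stronger than chmod alone, since 000 files sitting in place can still be restored or overlooked.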
{ "domain": "codereview.stackexchange", "id": 18310, "tags": "bash" }
Get list of claims based on priorities
Question: I have written the following function which works as expected but I still see there is some room for improving its readability def get_claims_to_search(): claims = [] database_priority = DatabasePriority.objects.first() low_priority_claims_databases = ClaimsDatabase.objects.filter( deleted=False, priority="low" ) normal_priority_claims_databases = ClaimsDatabase.objects.filter( deleted=False, priority="normal" ) high_priority_claims_databases = ClaimsDatabase.objects.filter( deleted=False, priority="high" ) low_priority_count = database_priority.low normal_priority_count = database_priority.normal high_priority_count = database_priority.high if not low_priority_claims_databases.count(): low_priority_count = 0 if normal_priority_claims_databases.count(): normal_priority_count += int( database_priority.normal / (database_priority.normal + database_priority.high) * database_priority.low ) if high_priority_claims_databases.count(): high_priority_count += int( database_priority.high / (database_priority.normal + database_priority.high) * database_priority.low ) if not normal_priority_claims_databases.count(): normal_priority_count = 0 if low_priority_claims_databases.count(): low_priority_count += int( database_priority.low / (database_priority.low + database_priority.high) * database_priority.normal ) if high_priority_claims_databases.count(): high_priority_count += int( database_priority.high / (database_priority.low + database_priority.high) * database_priority.normal ) if not high_priority_claims_databases.count(): high_priority_count = 0 if low_priority_claims_databases.count(): low_priority_count += int( database_priority.low / (database_priority.low + database_priority.high) * database_priority.normal ) if normal_priority_claims_databases.count(): normal_priority_count += int( database_priority.normal / (database_priority.normal + database_priority.high) * database_priority.high ) priority_databases = { "low": low_priority_claims_databases, "normal": 
normal_priority_claims_databases, "high": high_priority_claims_databases, } priority_count = { "low": low_priority_count, "normal": normal_priority_count, "high": high_priority_count, } for priority in priority_count: if priority_count[priority]: priority_count[priority] = int( ( (priority_count[priority] / 100) / priority_databases[priority].count() ) * settings.DEBUNKBOT_SEARCHEABLE_CLAIMS_COUNT ) for claim_database in priority_databases[priority]: claims.append( claim_database.claims.filter(processed=False, rating=False).values( "claim_first_appearance" )[: priority_count[priority]] ) return claims Any suggestions on how I can improve/rewrite it? Answer: The function seems to be inconsistent; I don't know what it is intended to do, so if you say it works as expected, you can probably lose some functionality through unnecessary "beautification" of the code. What inconsistency am I talking about: the central (and longest) part of the code transforms 6 input values, *_priority_claims_databases.count() and database_priority.* (where * is low, normal and high), into 3 output values, *_priority_count, using something that seems to be one formula. I'll use short names dl for database_priority.low and pl for low_priority_count etc. to make the formulas more readable. So we have (omitting conditions): pn += int( dn / (dn + dh) * dl ) ph += int( dh / (dn + dh) * dl ) pl += int( dl / (dl + dh) * dn ) ph += int( dh / (dl + dh) * dn ) pl += int( dl / (dl + dh) * dn ) #<--!!!!! pn += int( dn / (dn + dh) * dh ) At this point it is clear that the first two lines (in if not low_priority_claims_databases.count():) end with dl, the second two lines with dn, and the 6th line with dh, which corresponds with the condition, but the 5th line stands out. If we rewrite the denominators using s=dh+dn+dl (so dn+dh becomes s-dl, etc.), the problem gets even worse: pn += int( dn / (s - dl) * dl ) ph += int( dh / (s - dl) * dl ) pl += int( dl / (s - dn) * dn ) ph += int( dh / (s - dn) * dn ) pl += int( dl / (s - dn) * dn ) #<--!!!!! 
pn += int( dn / (s - dl) * dh ) #<--!!!!! Now I see two lines are out of pattern. Sorry, I can't help here without a description. Maybe the last two lines should be pl += int( dl / (s - dh) * dh ) pn += int( dn / (s - dh) * dh ) i.e. low_priority_count += int( database_priority.low / (database_priority.low + database_priority.normal) * database_priority.high ) normal_priority_count += int( database_priority.normal / (database_priority.low + database_priority.normal) * database_priority.high ) If so, fix the code first. Still, we can do something with the last portion of the code. Let's create dicts with keys "low", "normal" and "high" (or a list and constants) instead of groups of variables; so we'll have if priority_count["low"]: priority_count["low"] = ( priority_count["low"] // priority_claims_databases["low"].count() ) * settings.DEBUNKBOT_SEARCHEABLE_CLAIMS_COUNT for claim_database in priority_claims_databases["low"]: claims.append( claim_database.claims.filter(processed=False, rating=False).values( "claim_first_appearance" )[:priority_count["low"]] and two other code chunks with the only change being "low" to "normal" and "high". This can be changed into the loop: for priority in ["low","normal","high"]: if priority_count[priority]: priority_count[priority] = ( priority_count[priority] // priority_claims_databases[priority].count() ) * settings.DEBUNKBOT_SEARCHEABLE_CLAIMS_COUNT for claim_database in priority_claims_databases[priority]: claims.append( claim_database.claims.filter(processed=False, rating=False).values( "claim_first_appearance" )[:priority_count[priority]] This is at least as readable as your code but almost 3 times shorter. One more question: what happens if all *_priority_claims_databases.count() are greater than zero? Is it intended that the code returns an empty list?
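If the intended behaviour really is "when a priority level has no databases, split its share among the remaining levels in proportion to their own weights", that logic could be expressed once in a small helper. This is a hypothetical sketch of that apparent intent, not necessarily the OP's exact semantics:

```python
def redistribute(weights, present):
    """Give each absent priority's weight to the present ones,
    proportionally to the present priorities' own weights.
    Assumes at least one priority in `present`."""
    result = {p: (weights[p] if p in present else 0) for p in weights}
    total_present = sum(weights[p] for p in present)
    for absent in (p for p in weights if p not in present):
        for p in present:
            result[p] += int(weights[p] / total_present * weights[absent])
    return result

result = redistribute({'low': 20, 'normal': 30, 'high': 50}, {'normal', 'high'})
print(result)   # {'low': 0, 'normal': 37, 'high': 62}
```

A single symmetric helper like this makes the six near-identical formulas impossible to mistype, which is exactly where the original code appears to have gone wrong.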
{ "domain": "codereview.stackexchange", "id": 41961, "tags": "python, django" }
How to make circuit for n-control Z gate( i.e $C^3Z$ )?
Question: I am trying to make a circuit for the $C^3Z$ gate. I have seen a circuit for the $C^2Z$ or $CCZ$ gate made by using the $CCX$ gate, so is there any way to make a circuit for $C^3Z$ in a similar manner (i.e. by using the $CCX$ gate), or do I have to do something different from this method? The figure below is for the $CCZ$ gate. Answer: The circuit you showed above for the double-controlled $Z$ gate can be extended to a triple-controlled $Z$ by adding an extra Toffoli and ancilla: Qiskit offers such circuits readily in the circuit library, where you have many different possibilities to implement your multi-controlled Z gate. Using the MCMT (multi-controlled multi-target circuit) is one option. You can either use the v-chain version with ancillas, which produces the same circuit as above: from qiskit.circuit.library import MCMTVChain c3z = MCMTVChain('z', num_ctrl_qubits=3, num_target_qubits=1) c3z.draw(output='mpl') Or you can use an ancilla-free version: from qiskit.circuit.library import MCMT c3z = MCMT('z', num_ctrl_qubits=3, num_target_qubits=1) c3z.decompose().decompose().draw(output='mpl') In principle there's always a tradeoff between the number of ancilla qubits you can use and the depth of the circuit. More ancillas usually allow using fewer gates, but ancillas are costly or may not be available at all! Excursion to multi-controlled $X$ gates Since you know that $Z = HXH$ another possibility would be to use the multi-controlled $X$ gate from Qiskit. 
Since there are different methods on how the multi-controlled $X$ can be implemented you can choose the mode you want as either of 'noancilla' 'recursion' 'v-chain' 'v-chain-dirty-ancilla': from qiskit import QuantumCircuit noancilla = QuantumCircuit(4) noancilla.h(3) # H on target qubit noancilla.mcx([0, 1, 2], 3, mode='noancilla') noancilla.h(3) # again H on target qubit noancilla.draw() q_0: ───────■─────── │ q_1: ───────■─────── │ q_2: ───────■─────── ┌───┐┌─┴─┐┌───┐ q_3: ┤ H ├┤ X ├┤ H ├ └───┘└───┘└───┘ The recursion mode uses only one ancilla and recursively splits the number of controls until we have a 3 or 4 controls for which the controlled-X is hardcoded. Here, since you only have 3 controls, it does not need an ancilla (since Qiskit knows a concrete 3-controlled X implementation). But if you have more than 4 qubits you need an ancilla. n = 5 # number of controls recursion = QuantumCircuit(n + 1 + 1) # one for target, one as ancilla recursion.h(n) # H on target qubit recursion.mcx(list(range(n)), n, ancilla_qubits=[n + 1], mode='recursion') recursion.h(n) # again H on target qubit recursion.decompose().draw() q_0: ──────────────■─────────■─────────────────── │ │ q_1: ──────────────■─────────■─────────────────── │ │ q_2: ──────────────■─────────■─────────────────── │ │ q_3: ──────────────┼────■────┼────■────────────── │ │ │ │ q_4: ──────────────┼────■────┼────■────────────── ┌──────────┐ │ ┌─┴─┐ │ ┌─┴─┐┌──────────┐ q_5: ┤ U2(0,pi) ├──┼──┤ X ├──┼──┤ X ├┤ U2(0,pi) ├ └──────────┘┌─┴─┐└─┬─┘┌─┴─┐└─┬─┘└──────────┘ q_6: ────────────┤ X ├──■──┤ X ├──■────────────── └───┘ └───┘ The v-chain implementation is similar to the $Z$ gate implementations with the Toffolis. Here you need $n - 2$ ancillas, if $n$ is the number of controls. 
vchain = QuantumCircuit(n + 1 + n - 2) # needs n - 2 ancillas vchain.h(n) # H on target qubit vchain.mcx(list(range(n)), n, ancilla_qubits=list(range(n+1, 2*n-1)), mode='v-chain') vchain.h(n) # again H on target qubit q_0: ───────■──────── │ q_1: ───────■──────── │ q_2: ───────■──────── ┌───┐┌─┴──┐┌───┐ q_3: ┤ H ├┤0 ├┤ H ├ # if you decompose this you'll see └───┘│ X │└───┘ # the exact implementation, try q_4: ─────┤1 ├───── # vchain.decompose().decompose().draw() └────┘
{ "domain": "quantumcomputing.stackexchange", "id": 1526, "tags": "qiskit, programming, circuit-construction, ibm-q-experience" }
Chi-Squared test: ok for selecting significant features?
Question: I have a question on the contingency table and its results. I was analysing names starting with symbols as a possible feature, getting the following values: Label 0.0 1.0 with_symb 1584 241 without_symb 16 14 getting a p-value which lets me conclude that the variables are associated (since it is less than 0.05). My question is whether this is a good result based on the chi-squared test, i.e. whether I can include the feature in the model. I am selecting features individually to enter the model based on the chi-squared test. Maybe there is another way to select the most appropriate and significant features for the model. Any suggestions on this would be great. Answer: I will raise several issues that could arise if you are selecting features based on chi-2 tests. Repeated use of the chi-2 test can lead to spurious results unless you correct for the number of times you run it. You can also include features that are correlated with each other, e.g. A is correlated with B, and both are correlated with the label. Not sure, but I think this can lead to results where the model performs worse with more features. I would try starting with all the features and removing the ones that are linearly correlated. But this is just a suggestion. Also, mutual information can be used to estimate how well any given feature describes the label.
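For reference, the test on the table from the question can be run with scipy, including a Bonferroni-style correction for repeated testing (the number of repeated tests below is an assumed placeholder, not from the question):

```python
# Chi-squared test on the question's contingency table, with a Bonferroni
# correction for running the test once per candidate feature.
from scipy.stats import chi2_contingency

table = [[1584, 241],   # with_symb:    label 0, label 1
         [16,   14]]    # without_symb: label 0, label 1

chi2, p, dof, expected = chi2_contingency(table)

n_tests = 10            # assumed number of features tested (placeholder)
alpha = 0.05 / n_tests  # corrected significance threshold
print(dof, p < alpha)   # 1 True
```

Dividing alpha by the number of tests is the simplest correction; less conservative procedures (e.g. Holm or Benjamini-Hochberg) exist if many features are screened.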
{ "domain": "datascience.stackexchange", "id": 9071, "tags": "classification, feature-selection, correlation, chi-square-test" }
sending goal to move_base_simple has no effect
Question: Hi! Still getting acquainted, so please be patient. From ROS electric (have to deal with legacy code, can't upgrade), I'm trying to simply move my turtlebot around using the move_base_simple topic. At first, I tried using actionlib: I succeeded in connecting to the server and setting up a goal, but sending it actually had no effect. For debugging purposes, I'm now trying to simply publish the goal from the command line. Here is my current setup. First I ran: roslaunch turtlebot_bringup minimal.launch roslaunch turtlebot_navigation move_base_turtlebot.launch Now from rostopic list I have: /move_base/goal /move_base_node/current_goal /move_base_simple/goal I then try to publish the goal as follows: rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped '{ header: { frame_id: "/base_link"}, pose: { position: { x: 0.2, y: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1 } } }' Nothing happens. If I try rostopic echo /move_base_simple/goal, I can see that the goal is set. I tried changing the turtlebot_drive_mode to either drive, twist or turtle, with no effect. I also tried to teleop the bot with the keyboard, and it works. Sending velocity commands also works. If anyone can shed some light on why the robot isn't responding to the commands, it'll be greatly appreciated! (I've spent more than 5 hours digging into this issue.) Thanks! Originally posted by Strav on ROS Answers with karma: 11 on 2013-12-01 Post score: 1 Original comments Comment by Tirjen on 2013-12-03: Could it be some problem with costmaps? Did you try to visualize them in rviz? Comment by Strav on 2013-12-03: Costmaps? Forgive me if this is obvious, but I thought I could use move_base without providing any map nor any transformation frame other than base_link. I am somewhat loosely following this tutorial: http://wiki.ros.org/navigation/Tutorials/SendingSimpleGoals; I haven't seen anything in that code that involved a map of any kind. 
Answer: Honestly, I never tried to use the navigation stack without a map, but I'm pretty sure it isn't possible. Moreover, reading the first line of the description of the tutorial ("The Navigation Stack serves to drive a mobile base from one location to another while safely avoiding obstacles.") makes me think so. Also, in the prerequisites it is written: "This tutorial assumes basic knowledge of how to bring up and configure the navigation stack.". Take into consideration that, independently of the move_base planner, without a map and a sensor used for localization, any planner usually wouldn't work very well using only the odometry of the robot. What sensor do you have on your robot? What I suggest you do is look at this tutorial and, in general, at all the other tutorials of the navigation stack. Originally posted by Tirjen with karma: 808 on 2013-12-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by rotaryopt on 2021-04-25: hi @Tirjen @Strav having this same issue in ROS2: sending a PoseStamped goal on the move_base_simple Topic, is echoed correctly in ROS2 introspection, but has no effect on the Robot in Gazebo. What are the minimum required interfaces or Gazebo plug-ins for PoseStamped to be used for control? thank you!
{ "domain": "robotics.stackexchange", "id": 16314, "tags": "turtlebot" }
Existence / non-existence of a sequence with short longest increasing subsequence and decreasing subsequence?
Question: Can there exist any integer sequence $A$ of length $N$ with all unique elements such that the length of its Longest Increasing Subsequence as well as that of its Longest Decreasing Subsequence is less than $ \displaystyle \lfloor \frac{N}{2} \rfloor $? If yes, then give an example of such a sequence. Otherwise, can anyone present a proof that there cannot exist such a sequence? (Just to add some substance, can it be shown that such sequences exist for any arbitrary value of $ N > 1 $?) Answer: The answer to the OP's question is: no if $N\le 7$, and yes otherwise. For any positive integers $r$ and $s$, the celebrated Erdős–Szekeres theorem shows that any sequence of distinct real numbers with length at least $(r-1)(s-1)+1$ contains an increasing subsequence of length $r$ or a decreasing subsequence of length $s$. It turns out that the bound $(r-1)(s-1)+1$ is tight. That is, for any positive integers $r$ and $s$, there is a sequence of distinct numbers with length $(r-1)(s-1)$ that contains no increasing subsequence of length $r$ and no decreasing subsequence of length $s$. Here is such an example. $$\begin{array}{cccccc} &s-1, &s-2, &\cdots,&2, &1\\ &2(s-1), &(s-1)+ s-2, &\cdots, &(s-1)+ 2, &(s-1)+ 1\\ &\vdots &\vdots &\vdots &\vdots &\vdots \\ &(r-2)(s-1), &(r-3)(s-1)+s-2, &\cdots, &(r-3)(s-1)+2, &(r-3)(s-1)+1\\ &(r-1)(s-1), &(r-2)(s-1)+s-2, &\cdots, &(r-2)(s-1)+2, &(r-2)(s-1)+1\\ \end{array}$$ Consider the numbers above, reading from left to right and then from top to bottom. In other words, the sequence is $s-1$ down to $1$, followed by $2(s-1)$ down to $(s-1)+1$, etc., and finally followed by $(r-1)(s-1)$ down to $(r-2)(s-1)+1$, all in steps of $1$. It is easy to see that there is no increasing subsequence of length $r$ and no decreasing subsequence of length $s$. 
For example, when $r=s=5$, we have $$4,3,2,1,\ \, 8,7,6,5,\ \,12,11,10,9,\ \,16,15,14,13$$ which has no increasing subsequence of length $5$ and no decreasing subsequence of length $5$. If we let $r=s$, the construction above implies that, for any positive integer $N$, there exists an integer sequence of length $N$ with all unique elements such that the length of its longest increasing subsequence as well as that of its longest decreasing subsequence is at most $\lceil\sqrt N\rceil$. And $\lceil\sqrt N\rceil$ is the tight upper bound. Since $$\lceil\sqrt N\rceil\ge \lfloor\frac N2\rfloor\ \text{ for all } N\le 7$$ and $$\lceil\sqrt N\rceil\lt \lfloor\frac N2\rfloor\ \text{ for all } N\gt 7,$$ the answer to the OP's question is: no if $N\le 7$, and yes otherwise. For example, for $N=8$, we have the sequence $3,2,1,6,5,4,9,8$ (the construction for $r=s=4$, truncated to $8$ elements), whose longest increasing and longest decreasing subsequences both have length $3 < \lfloor 8/2 \rfloor = 4$.
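The claim about the $N=8$ sequence is easy to verify mechanically; here is a short sketch using the standard $O(n^2)$ dynamic program for the longest increasing subsequence:

```python
def lis_length(seq):
    # Longest strictly increasing subsequence, O(n^2) dynamic programming:
    # best[i] = length of the longest increasing subsequence ending at i.
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

seq = [3, 2, 1, 6, 5, 4, 9, 8]            # the construction for N = 8
lis = lis_length(seq)
lds = lis_length([-x for x in seq])       # LDS = LIS of the negated sequence
print(lis, lds)                           # 3 3, both below floor(8/2) = 4
```

The same check applied to the full $r=s=5$ example above would report $4$ and $4$.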
{ "domain": "cs.stackexchange", "id": 16603, "tags": "algorithms, correctness-proof, subsequences" }
Is it possible that antimatter has positive inertial mass but negative gravitational mass?
Question: Newtonian mechanics seems to allow for both positive and negative gravitational mass as long as the inertial mass is always positive. The situation is analogous to electrostatics but with the opposite sign. Two positive masses or two negative masses are attracted to each other, whereas one positive and one negative mass repel each other. General relativity says gravitational and inertial mass are the same thing through the equivalence principle. This has been confirmed experimentally to a very high degree of accuracy, though not for very small masses and only for normal matter. Antimatter is known to have positive inertial mass from observing the trajectories of particles in electric or magnetic fields. Presumably it is also known that the $m$ in the famous $E=mc^2$ is positive. The gravitational mass of elementary particles is currently too small to measure, but is it possible that antimatter could have negative gravitational mass - or is this absolutely precluded in general relativity? Answer: A long comment: AEGIS is a collaboration of physicists from all over Europe. In the first phase of the experiment, the AEGIS team is using antiprotons from the Antiproton Decelerator to make a beam of antihydrogen atoms. They then pass the antihydrogen beam through an instrument called a Moire deflectometer coupled to a position-sensitive detector to measure the strength of the gravitational interaction between matter and antimatter to a precision of 1%. A system of gratings in the deflectometer splits the antihydrogen beam into parallel rays, forming a periodic pattern. From this pattern, the physicists can measure how much the antihydrogen beam drops during its horizontal flight. Combining this shift with the time each atom takes to fly and fall, the AEGIS team can then determine the strength of the gravitational force between the Earth and the antihydrogen atoms. Also, new experiments are in progress. 
In total there are three experiments at CERN to measure the effect of the earth's gravitational field on antimatter. Patience.
{ "domain": "physics.stackexchange", "id": 72783, "tags": "general-relativity, gravity, newtonian-gravity, mass, antimatter" }
How can I make my local plan path in navigation stack longer?
Question: Hi, I have a pretty large robot which I'm trying to navigate using the ROS Navigation stack. It is about 1.2m long and 0.8m wide. The problem is, when moving, the robot trajectory oscillates around the calculated global path (in other words, the robot is not moving straight). At first I thought that the problem could be in the control loop frequency, which couldn't reach higher than 4Hz, but after solving that problem the robot was still acting basically the same. Here is the link to the question about control loop frequency http://answers.ros.org/question/227137/controller-frequency-in-navigation-stack/ When looking in rviz at the calculated local plan, I find it very short (practically less than 30cm) when comparing it with my robot's dimensions. It is always a lot shorter than my robot's footprint. So I was thinking that if I could make my local plan longer, the robot wouldn't oscillate that much. So the question is - how can I make my local plan path in the navigation stack longer? Thanks in advance Originally posted by Double X on ROS Answers with karma: 17 on 2016-02-24 Post score: 0 Original comments Comment by jiecuiok on 2016-07-03: hello, i meet the same problem, have you solved this one? Comment by gvdhoorn on 2016-07-03: Hi @jiecuiok, could you please not post answers when you're not answering the question? Please use comments for these kinds of interactions. Thanks. Comment by Procópio on 2016-07-04: what is the local planner you are using? Comment by jiecuiok on 2016-07-04: the default local planner. I have solved this problem by tuning the sim_time, pdist_scale and gdist_scale. Answer: If you are using the default local planner of ROS, you can check the Forward Simulation Parameters (section 4.2.3) here. Try to play with those parameters, especially sim_time, which is only 1.0 s by default. But beware that these parameters will have a visible impact on your cpu resources. 
Originally posted by Procópio with karma: 4402 on 2016-07-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Double X on 2016-07-04: Yes, I changed it with sim_time, although I have written my own node for local and global planing so I'm not using Navigation Stack any more. Thnx Comment by kiran on 2016-09-25: Hey, how is the performance using your own global and local planners? Even I am facing that problem. Depending on your answer I will try to solve this problem
{ "domain": "robotics.stackexchange", "id": 23893, "tags": "ros, navigation, path, stack" }
SKlearn PolynomialFeatures R^2 score
Question: I'm trying to create a linear regression model using PolynomialFeatures. But when I evaluate it, I get really strange scores. I know that R^2 can be applied to this model, and I think I've tried everything. I'd really appreciate some good advice. Here is my code. X = df_all[['Elevation_gain', 'Distance']] y = df_all['Avg_tempo_in_seconds'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42) for n in range(2,10,1): poly_feat = PolynomialFeatures(degree=n, include_bias = True) X_poly_train = poly_feat.fit_transform(X_train) X_poly_test = poly_feat.transform(X_test) lin_reg_2 = LinearRegression() lin_reg_2.fit(X_poly_train, y_train) test_pred_2 = lin_reg_2.predict(X_poly_test) #testset evaluation r2 = metrics.r2_score(y_true = y_test, y_pred = test_pred_2) mse = metrics.mean_squared_error(y_true = y_test, y_pred = test_pred_2) print(round(r2,2)) #print(round(mse,2)) And this is the output I get: 0.36 -3.99 -59.96 -1299.38 -627.37 -1773329.36 -19673802.94 -23125681.65 Here is the sample data: Elevation_gain Distance Avg_tempo_in_seconds 70 6.13 290.1 135 9.27 301.0 10 4.94 287.5 270 15.74 310.2 120 8.11 298.5 Answer: The scores you are seeing indicate that a linear regression with high-degree polynomial features does not fit the data well, with performance decreasing drastically on new data when using polynomial features of degree 5/6 and higher (likely because of overfitting and/or multicollinearity). R-squared can be negative; for what this means exactly, see for example this question on stats.stackexchange.com.
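One thing worth checking alongside the overfitting concern is numerical conditioning: raw features on very different scales, raised to high powers, produce huge condition numbers, and scaling before the polynomial expansion often stabilises the fit. Here is a self-contained sketch on synthetic data shaped loosely like the question's features (not the OP's dataset):

```python
# Sketch: pipeline with scaling before PolynomialFeatures, on synthetic data
# resembling (elevation gain, distance) -> average tempo. Values are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 300, 200),    # "elevation gain"
                     rng.uniform(4, 16, 200)])    # "distance"
y = 250 + 0.05 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 2, 200)

model = make_pipeline(StandardScaler(),
                      PolynomialFeatures(degree=4),
                      LinearRegression())
model.fit(X[:150], y[:150])
r2 = r2_score(y[150:], model.predict(X[150:]))
print(r2 > 0.9)   # True
```

On real data, combining the scaler with a regularised model such as `Ridge` instead of plain `LinearRegression` is a common way to keep high-degree fits from exploding on the test set.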
{ "domain": "datascience.stackexchange", "id": 10412, "tags": "machine-learning, python, scikit-learn, r-squared" }
titrating a weak acid with strong base in the present of buffer
Question: I would like to get a neutral salt solution for a weak acid whose pKa is 3.42. This weak acid is in the form of solution (now is 20% w/w), thus I cannot directly dissolve the powder into the buffer etc. And I would like the resulting solution to be as concentrated as possible. I am thinking to titrate the acid with NaOH (1M) or other bases (that only generates the salt of the acid and water). After calculation, I found that 2.0998 ml of NaOH will give me around pH 7.4 (which is what I want as it is to be used in a biological system). The problem is that: it is very difficult to accurately add 0.0998 ml of NaOH. And I found that 0.0001 ml of NaOH deviation will lead to huge changes in pH, however, I want to make sure the final pH is between 7.2 - 7.4. Another consideration is I would want to avoid using pH meter because I would like to maintain the solution as sterile as possible (which I will prepare everything in a biological hood). In short, I would like to know how to prepare a neutral solution for that acid precisely, e.g. through calculations of the exact volume of base needed or use diluted NaOH (though I still want the resulting salt solution to be as concentrated as possible). I would like to know whether it would help if I add some buffer to the system (e.g. PBS that is often used in biological experiments)? Answer: I may be missing something, but a rough reality check says that if you are thinking of adding 2.0998 mL of 1 N NaOH solution, you have ~0.0021 moles of base, which for a good buffer, should have about ~0.0021 moles of acid. If your acid solution is 20% by weight, and guessing 200 for a molecular weight, your total acid volume is 2.0998 mL also. It seems to me that your accuracy problem with the base is in addition to the accuracy problem with measuring the acid. Unless the acid is very expensive (isotope?), scale up! 
In your sterile hood, make 2 batches of the buffer solution, starting with 50 mL of the 20% acid in each vessel (of about 200 mL capacity). Add 51 mL of 1 N NaOH solution to one and 49 mL of 1 N NaOH solution to the other. Measure the pH of both by using an eyedropper to take out a few mL for a regular pH meter, or just one drop to place on a micro pH meter, such as https://news.thomasnet.com/fullstory/pocket-ph-meters-utilize-non-glass-silicon-chip-sensor-608720 One of the 100 mL solutions may be exactly what you want. If not, it would be nice if their two pHs straddle your desired pH. Using an eyedropper, transfer some of one solution into the other, stir, and check the pH by withdrawing a drop to test on the micro pH meter. The pH electrode will never be immersed into the buffer. When you get one of the solutions to the pH you need, you can still save the residue for further adjustments. Some micro pH meters claim to be able to measure as little as 5 microliters, so if the acid is very expensive, you could scale down. Calculations are very fine for exams and theorizing, but when it comes down to needing an exact pH near 7, you may have to take other variables into consideration, including the preparation, i.e., bringing the calculations into reality. If you calculate, you are guessing you will get it right. If you measure, you'll know if you got it right. (Assuming you calibrate the pH meter, and it works as expected, etc., etc.)
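On why the pH jumps so violently near 7.4: with pKa 3.42, the acid is essentially fully deprotonated there, so you are titrating past the buffer region and almost no buffering capacity is left. A quick Henderson-Hasselbalch sketch:

```python
# Henderson-Hasselbalch sanity check: fraction of the weak acid (pKa 3.42)
# that is deprotonated at the target pH of 7.4.
pKa = 3.42
target_pH = 7.4
ratio = 10 ** (target_pH - pKa)            # [A-]/[HA]
fraction_deprotonated = ratio / (1 + ratio)
print(f"{fraction_deprotonated:.6f}")      # 0.999895
```

This is why tiny additions of base swing the pH so strongly near the target, and why measuring (or mixing two straddling batches, as above) beats calculating.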
{ "domain": "chemistry.stackexchange", "id": 15510, "tags": "titration, buffer" }
Sharing one propagation constant in circular waveguide
Question: I need to know more about this sentence for a circular waveguide structure: "When mode degeneration occurs, two modes sharing one propagation constant may be linearly combined. There are two possibilities to define the polarization of degenerated modes." What does "sharing one propagation constant" mean? Why do two different polarization possibilities exist? Answer: It is not always possible to be certain one understands what is meant by a description like this without being able to ask the person that wrote it. However, I'll try to clarify as I understand it. The phrase "sharing one propagation constant" is more or less what "degenerate" means. When two modes in a waveguide are degenerate, they have the same propagation constant. Since they have the same propagation constant, any linear combination of the two modes would again be a mode of the waveguide. As a result, degenerate modes are not unique. I think what the description means is that there are always two orthogonal states of polarization in terms of which all other states of polarization can be expressed as linear combinations. If the two degenerate modes in a waveguide have linearly independent states of polarization then one can use them to find a mutually orthogonal pair in terms of which one can then represent any other state of polarization. Hope it helps.
{ "domain": "physics.stackexchange", "id": 82744, "tags": "waveguide" }
Thermodynamics - please check my proof that $\partial C_p/\partial p$ = 0 for an ideal gas
Question: Prove $$\left(\frac{\partial C_p}{\partial p}\right)_T = 0$$ for an ideal gas. (All the $\partial$s are partial derivatives.) Please check to see if this makes sense. We know that $$C_p = \left(\frac{\partial H}{\partial T}\right)_P$$ Observe that, since mixed partial derivatives commute, $$\left(\frac{\partial C_p}{\partial p}\right)_T = \left(\frac{\partial}{\partial P}\left(\frac{\partial H}{\partial T}\right)_P\right)_T = \left(\frac{\partial}{\partial T}\left(\frac{\partial H}{\partial P}\right)_T\right)_P $$ Enthalpy is defined as $$H=U+PV$$ Equipartition tells us that $$U=\frac{f}{2}NkT$$ and the ideal gas law tells us that $$PV=NkT$$ Therefore, $$H=\frac{f}{2}NkT+NkT=\left(1+\frac{f}{2}\right)NkT$$ From knowing $$H=\left(1+\frac{f}{2}\right)NkT$$ we can see that $$\left(\frac{\partial H}{\partial P}\right)_T = 0$$ and hence that $$\left(\frac{\partial C_p}{\partial p}\right)_T =0$$ Answer: It can be done in an easier way. Use the identity $(\partial C_p /\partial P)_T = - T (\partial^2 V /\partial T^2)_P$. If you plug in your ideal gas $PV=nRT$ you directly get the result asked for. That said, your reasoning is fine if you don't know the identity I showed you.
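The shortcut identity in the answer, $(\partial C_p/\partial P)_T = -T(\partial^2 V/\partial T^2)_P$, can also be sanity-checked numerically: the ideal-gas volume $V(T,P)=nRT/P$ is linear in $T$, so its second temperature derivative vanishes. A quick finite-difference sketch (the values of $T$, $P$, and $n$ are arbitrary illustrative choices):

```python
R = 8.314  # gas constant, J/(mol K)

def V(T, P, n=1.0):
    # Ideal-gas volume: V = nRT/P
    return n * R * T / P

def d2V_dT2(T, P, h=1e-3):
    # Central second difference in T at fixed P
    return (V(T + h, P) - 2.0 * V(T, P) + V(T - h, P)) / h**2

def dCp_dP(T, P):
    # The answer's identity: (dCp/dP)_T = -T (d^2 V / dT^2)_P
    return -T * d2V_dT2(T, P)
```

Evaluating `dCp_dP` at any state point returns zero up to floating-point rounding, matching the analytic result.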
{ "domain": "physics.stackexchange", "id": 57072, "tags": "homework-and-exercises, thermodynamics, differentiation, ideal-gas" }
Relative performance of RLS and LMS filters
Question: It's known that the RLS filter converges faster than the LMS filter in general, but that if you're tracking time varying parameters the LMS algorithm can perform better. My question is under what conditions does this hold? I understand that the LMS filter is like a point estimate, but the RLS uses more data - when would using less data be helpful? Answer: Using less data is helpful when, as you said, the parameters are time varying, and in particular when they change a lot. The key difference is that LMS is a Markov process. It has its current state, but other than that it does not remember data from the past. For time-varying signals this is a feature because past data will give you erroneous information about the current parameters. The RLS algorithm uses all of the information, past and present, but that can be a problem if the past data is misleading for the current parameters. If you are looking for a quantitative rule for when to use one or the other, I don't have one. RLS is more computationally intensive than LMS, so if LMS is good enough then that is the safe one to go with. RLS converges faster, but is more computationally intensive and has the time-varying weakness, so I would only use it if the parameters don't vary much and you really needed the fast convergence.
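The "point estimate vs. all past data" distinction can be seen in the LMS update itself, which uses only the current sample. A minimal scalar-parameter sketch (step size, signal model, and names are illustrative assumptions, not anything from the question):

```python
import random

def lms_track(d, x, mu=0.1):
    """Scalar LMS: estimate w such that d[n] ~ w * x[n].
    Each update uses only the current error, so past data is
    gradually forgotten (effective memory ~ 1/mu samples)."""
    w, history = 0.0, []
    for dn, xn in zip(d, x):
        e = dn - w * xn      # a-priori error on the current sample only
        w += mu * e * xn     # stochastic-gradient (Markov) update
        history.append(w)
    return history
```

If the true parameter jumps halfway through the data, the LMS estimate re-converges to the new value because old samples stop influencing the update; an RLS filter with no forgetting factor would instead keep weighting the stale, misleading past data, which is exactly the weakness the answer describes.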
{ "domain": "dsp.stackexchange", "id": 776, "tags": "filter-design, adaptive-filters" }
generate a 2d array that could be used to create an image with Java
Question: This Python code im_data = np.ones((100,100)) im_data[20:50,20:50]=np.zeros((30,30)) can generate a 2d array that could be used to create an image plt.axis('off') axi = plt.imshow(im_data, cmap='Greys') I'm trying to do the same job with Java class Arr100{ public static void main(String[] args){ System.out.println("H"); int[][] arr = new int[100][100]; for(int i=0; i<arr.length; i++){ for (int j=0; j<arr[i].length; j++){ arr[i][j]=1; } } for(int i=20; i<50; i++){ for (int j=20; j<50; j++){ arr[i][j]=0; } } } } In terms of GPU and memory, is there a better way to do the same thing? Answer: You don't need to manually manage nested for loops. int[][] pixels = new int[100][100]; for (int[] row : pixels) { Arrays.fill(row, 1); } for (int i = 20; i < 50; i++) { Arrays.fill(pixels[i], 20, 50, 0); } This will make better use of optimizations available for managing array values. I personally don't like arr as a name for an array. If I need a generic name, I tend to use data. I used pixels here. Or colors would work. This may be more or less efficient than int[][] pixels = new int[100][100]; int i = 0; for (; i < 20; i++) { Arrays.fill(pixels[i], 1); } for (; i < 50; i++) { Arrays.fill(pixels[i], 0, 20, 1); Arrays.fill(pixels[i], 50, pixels[i].length, 1); } for (; i < pixels.length; i++) { Arrays.fill(pixels[i], 1); } That doesn't set any of the pixels twice and relies on the default values for the 0 pixels. That may be more efficient. Although it is also possible that it is easier to set the entire array to one value than to work with rows and parts of rows. Note that if you are willing to change the representation, then int[][] pixels = new int[100][100]; for (int i = 20; i < 50; i++) { Arrays.fill(pixels[i], 20, 50, 1); } would probably be better than either. In this, black is the default 0 while 1 is the white square. The third form is the least code and probably fastest. 
The only way that I could see it being as slow or slower than either of the others is if something is happening implicitly. For example, if there is a memory initialization that the first form skips because it immediately does its own initialization. The first form is less code than the second. The comparative speed would probably depend on the platform (and possibly compiler). Platforms that allow for managing memory as blocks would probably find the first quicker, while platforms that can only address one word at a time would probably find the second quicker. You would have to do timing tests on all platforms where you expect to run to get a real idea of the speed. There is probably a streams-based version that is faster as well. If you want to go further and try to pass the information for regions, you would either have to write your own solution or use a third party library. Note that such a thing would probably be slower. But it might be less code. I suspect that your Python example is also slower on some platforms (albeit less code). My guess is that Python implements it much as the first example code block in this answer.
{ "domain": "codereview.stackexchange", "id": 41045, "tags": "java" }
Ramifications of P vs. NP Being Undecidable
Question: Gödel proved that for any sufficiently strong formal axiomatic system (FAS) there are true statements in arithmetic that can't be proven true within it. The authors of this paper use similar arguments to prove there are TMs that belong to a certain complexity class but cannot be proven to belong to that class. They refer to these types of algorithms as "hidden machines". ON THE EXISTENCE OF HIDDEN MACHINES IN COMPUTATIONAL TIME HIERARCHIES What happens to the question of P vs. NP if there exists a TM that can solve any satisfiable 3SAT instance in polynomially many steps, yet it is impossible to prove that this TM does so? Answer: In some sense, nothing much. We still have P = NP. It's possible for there to be statements that are true even though we can't prove them true. In another sense, something interesting happens: in your scenario, we can explicitly write down a new algorithm that is guaranteed to solve 3SAT in polynomial time, using Levin universal search. See https://cs.stackexchange.com/a/92095/755. So, if there is a non-constructive proof that 3SAT is in P, then you can find a constructive proof that 3SAT is in P. See also Is P decidable?, Assuming P = NP, how would one solve the graph coloring problem in polynomial time?, Explicit algorithms and algorithms involving unknowns.
{ "domain": "cs.stackexchange", "id": 16851, "tags": "complexity-theory" }
How does gravity truly work in the bend of spacetime?
Question: If gravity is caused by the bending of spacetime by a large mass, why do all objects fall towards Earth's center and not straight down to below the Earth? Sorry, I am not an expert in any field, just trying to understand and study relativity; I'm also fairly new to the whole bending of space and time and do not truly understand how it works. Any help would be appreciated, thanks ahead of time. Not everyone understood my question, so: picture the Earth bending spacetime, with gravity pulling objects to the center of the planet. Why don't falling objects get pulled toward the space warp instead of the Earth's center? How is it that at the bottom of the Earth objects fall upwards toward the Earth and not down to the space warp? I guess what I don't fully understand is whether gravity is created by the space warp, or by the Earth itself with the space warp being a result of the Earth's gravity. – C.Julch Answer: The problem is that the images that you have seen of spacetime bending depict a two-dimensional 'fabric'. Try picturing it in three dimensions and it will be easier to understand. If you are still not able to visualise it, then see: https://www.dropbox.com/s/h6h5pfe37stxdrv/Photo%2002-06-16%2C%2022%2028%2003.jpg?dl=0
{ "domain": "physics.stackexchange", "id": 31470, "tags": "general-relativity, gravity, spacetime, curvature" }
What happens if the impulse of a collision results in the change in velocity being greater than the speed of sound?
Question: It is my understanding that impulse travels through materials at the speed of sound through that material (i.e. impulse through steel travels through the steel at ~5000 m/s, which is the speed of sound through steel). So let's say we have a collision between 2 objects named A and B. Object B is a solid sphere of some material with a speed of sound of ~5000 m/s, a mass of 1 kg, and a velocity of 0 m/s. Object A is an identical sphere but is travelling at, let's say, 10,000 m/s. Object A collides with Object B and Object A transfers all of its momentum to Object B. This would mean that Object B has had an impulse that resulted in Object B travelling faster than impulses can travel through Object B (being that impulses can only travel at the speed of sound). I cannot really comprehend how such a collision would play out in all honesty. I'm leaning towards the idea that this scenario is just not possible for some variety of reasons but I can't really work out what those reasons are, so any help would be appreciated! Answer: One can imagine in your example there will be some permanent deformation of the object, the form of which will depend on factors which you have not mentioned, e.g. masses, shapes and materials of the objects. The speed of sound is in effect the speed of communication between different parts of the object, so it will take time for the information that there is a collision between the two objects to reach the parts of the object remote from the initial collision; a shock wave will be produced. The video Balloon Bullet Time - The Slow Mo Guys shows two objects (bullet and balloon) colliding. You can see a ripple on the third balloon surface in the second experiment after the bullet has hit it.
{ "domain": "physics.stackexchange", "id": 98336, "tags": "newtonian-mechanics, classical-mechanics, waves" }
Does muon halflife have any application in cold fusion concept?
Question: As we know, the muon is more massive than the electron, so muonic species are more likely to fuse; but the muon decays quickly. Does that have any implications for nuclear fusion? Answer: In the muon-catalyzed fusion process, the muons basically replace the electrons in the hydrogen isotopes Deuterium and Tritium. This makes the muonic atoms very small, which increases the probability for interaction processes. Starting with a muon being captured by a Deuterium, it can, for example, be transferred from the Deuterium to a Tritium, $$\mathrm{Dµ} + \mathrm{T} \rightarrow \mathrm{Tµ} + \mathrm{D},$$ which happens on a time-scale of roughly $10^{-9}\mathrm{s}$. A molecule can then be formed, $$\mathrm{Tµ} + \mathrm{D} \rightarrow \mathrm{DµT},$$ which happens on an even faster time-scale. Note that I have omitted the released energies on the right-hand side of both reactions! With Deuterium and Tritium being very close to each other, they fuse (and release the muon and some energy); this takes roughly $10^{-12}\mathrm{s}$. So, in summary, all these processes happen on time-scales much shorter than the mean lifetime of a muon (which is $2.2\,\mathrm{µs}$), therefore the answer is no. Just a note: the muon-catalyzed fusion process cannot be used as a potential source of "free" energy, as it requires a tremendous amount of energy to create the muons, and the catalysis process is far from perfect, as the muons have a certain probability to stick to the alpha-particles created in the D-T fusion process, which means they are lost for further fusion processes.
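The timescale argument can be made concrete by adding up the quoted steps and comparing with the muon lifetime; the result is that a muon survives on the order of a thousand catalytic cycles (this simple tally ignores the alpha-sticking loss mentioned at the end, so it is an upper bound, and the molecule-formation time is only bounded by the transfer time, since the answer just says "even faster"):

```python
MUON_LIFETIME = 2.2e-6  # s, mean muon lifetime (from the answer)
T_TRANSFER = 1e-9       # s, Dmu + T -> Tmu + D (from the answer)
T_MOLECULE = 1e-9       # s, upper bound; the answer says "even faster"
T_FUSION = 1e-12        # s, fusion inside the DmuT molecule

cycle_time = T_TRANSFER + T_MOLECULE + T_FUSION
cycles_per_lifetime = MUON_LIFETIME / cycle_time  # order of 10^3
```

Since one lifetime spans roughly a thousand cycles, the decay itself is not the limiting factor, which is the answer's point.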
{ "domain": "physics.stackexchange", "id": 50869, "tags": "particle-physics, cold-fusion" }
Joule heating due to the (slow) electron drift velocity?
Question: I understand the concept of why the signal speed is higher than the electron drift velocity, but I can't understand the concept of joule heating. If electrons move slowly, then how do they produce a lot of heat when they hit the nuclei? Besides, my friend once told me that the drift velocity is the net movement and that electrons move fast in all directions; if that is the case, why do they move like that? Answer: The absolute velocity of the electrons actually doesn't matter for joule heating. Think about it this way: if there is no current flowing, there wouldn't be any joule heating. So, even if electrons are moving quickly and randomly when no current is flowing, we know no joule heating would occur and that joule heating is really about the net change in effect caused by the current. That is, the base electron velocity doesn't have an effect. All that matters is the $\Delta V$ over the base electron velocity, which is given by the drift velocity. Joule heating is really about electrical energy lost to heat due to resistance. Even if the average drift velocity of an electron is tiny, there are so many electrons moving that the tiny energy loss to heat for each electron adds up. As you know, current is the result of a huge number of moving electrons. It's a numbers game. The more electrons losing a tiny bit of energy there are, the more total heat is generated. Via Ohm's Law you can see that $P = I^2 R$, so it's no wonder that heat generation is proportional to $I^2 R$. Also, in your question you mentioned an electron bumping into a nucleus. That is not what's happening. Electrons are colliding with the electron cloud of atoms and via electromagnetic repulsion are pushing the whole atom a bit, increasing its kinetic energy. It's just free electrons interacting with bound electrons.
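The "numbers game" can be illustrated with a textbook copper-wire calculation using the standard relation $I = n q v_d A$ (all figures below are common order-of-magnitude assumptions, not taken from the question): the drift velocity comes out well under a millimetre per second, yet the $I^2R$ heating is on the order of watts.

```python
# Textbook copper-wire example: v_d = I / (n q A)
I = 10.0        # current, A
n = 8.5e28      # free-electron density of copper, m^-3 (typical value)
q = 1.602e-19   # elementary charge, C
A = 1.0e-6      # cross-section, m^2 (about 1 mm^2)
R = 0.017       # resistance of roughly 1 m of such wire, ohm

v_drift = I / (n * q * A)  # m/s; a fraction of a millimetre per second
P_joule = I**2 * R         # W of heat despite the tiny drift speed
```

The enormous value of $n$ is what reconciles the two numbers: each electron moves, and loses energy, very slowly, but there are ~$10^{23}$ of them per centimetre of wire.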
{ "domain": "physics.stackexchange", "id": 7415, "tags": "electricity, electrons" }
Simple Task-Assignment Problem
Question: I have this simple 'assignment' problem: We have a set of agents $A = \{a_1, a_2, \dotso, a_n\}$ and a set of tasks $T= \{t_1, t_2, \dotso, t_m\}$. Note that $m$ is not necessarily equal to $n$. Unlike the general assignment formulation in Wikipedia, a task $t_c$ can only be assigned to an agent based on the task's preferred agents $ta_c \subseteq A$. For example, if we have $ta_1= \{a_1, a_3\}$, that means that task $t_1$ can only be assigned to either agent $a_1$ or $a_3$. Now, each agent $a_d$ has a quota $q_d$, where $q_d$ is a positive integer. This means that $a_d$ must be assigned $q_d$ tasks. The Problem: Given the above and a set of quotas $\{q_1, q_2, \dotso, q_n\}$, is there an assignment of tasks to agents such that all agents meet their respective quotas? Note that not all tasks need be assigned to an agent. Possible Solution: I have tried reformulating this problem in terms of a bipartite graph $G(A, T, E = \cup ta_c)$ and expressing it as a form of matching problem where, given a matching $M$, an agent vertex $a_d\in A$ is matched up to $q_d$ times, i.e. is incident to $q_d$ edges in $M$, but each vertex in $T$ is incident to only one edge in $M$. This is not quite the usual matching problem, which requires that the edges in $M$ are pairwise non-adjacent. However, it was suggested by someone (from cstheory, he knows who he is) that I could actually work this out as a maximum matching problem, by replicating an agent $a_d$ into $q_d$ vertices and 'conceptually' treating them as different vertices in the input to the matching algorithm. The set of edges $E$ is also modified accordingly. Call the modified graph $G'$. It is possible to have more than one maximum matching of the graph $G'$. Now, if I understand this correctly, I still have to check each of the resulting maximum matchings and see that at least one of them satisfies the quota constraint of each agent to actually provide the solution to the problem.
Now, I want to prove that if no maximum matching $M$ in the set of all maximum matchings of the graph $G'$ satisfies the quota constraint of the problem, then there really exists no solution to the problem instance; otherwise, a solution exists, namely $M$. I want to show that this algorithm always gives the correct result. Question: Can you share some intuition on how I might go about showing this? Answer: As I stated on the CSTheory post, this is solved via maximum matching. The following should give enough intuition to show that each agent $a_i$ has a $q_i$-matching iff a transformed graph $G'$ has a matching. First, construct the graph $G$. Now, for each agent $a_i$ and quota $q_i$, make a new graph $G'$ that has $q_i$ copies of $a_i$. That is to say, if agent $a_i$ has quota $q_i = 3$, make 2 new nodes $a_i', a_i''$ that have the same tasks as $a_i$. Here is an illustration to help. Suppose we have the complete bipartite graph $K_{2,3}$ where the agents have quotas $q_1 = 2, q_2 = 1$. Original graph $G$, and modified graph $G'$: We see that agent $a_1$ has two copies in the new graph $G'$, while agent $a_2$ keeps just its lone copy. Solve for the maximum matching in $G'$, then merge all copies of an agent back into the original, and you will have your best possible assignment. Since the edges are unweighted, one possible solution is (and merged solution):
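The replication construction can be sketched directly with Kuhn's augmenting-path algorithm (the dict-based interface and names below are my own convention, not the poster's code). Each agent $a_i$ becomes $q_i$ copies; the quotas are all satisfiable iff the maximum matching saturates every copy, which is exactly the criterion the answer relies on:

```python
def feasible_assignment(quotas, task_agents):
    """quotas: {agent: required number of tasks};
    task_agents: {task: set of agents allowed to take it}.
    Returns {task: agent} meeting every quota exactly, or None."""
    # Replicate agent a into quotas[a] copies, then run Kuhn's
    # augmenting-path maximum matching from the copy side.
    copies = [(a, k) for a, q in quotas.items() for k in range(q)]
    match = {}  # task -> agent copy currently assigned to it

    def augment(copy, visited):
        agent = copy[0]
        for task, allowed in task_agents.items():
            if agent in allowed and task not in visited:
                visited.add(task)
                if task not in match or augment(match[task], visited):
                    match[task] = copy
                    return True
        return False

    matched = sum(augment(c, set()) for c in copies)
    if matched < len(copies):
        return None  # some copy unmatched: quotas cannot all be met
    return {task: copy[0] for task, copy in match.items()}
```

On the $K_{2,3}$ example above ($q_1 = 2, q_2 = 1$) this returns an assignment giving $a_1$ two tasks and $a_2$ one; when the quotas are infeasible it returns `None`, mirroring the "no maximum matching saturates all copies" case the poster wants to argue about.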
{ "domain": "cs.stackexchange", "id": 490, "tags": "algorithms, graphs, proof-techniques, assignment-problem" }
In a General Relativistic metric, what is (intuitively) the physical meaning of the parameter $t$?
Question: While studying the 3+1 formalism of General Relativity, I am confused about the physical essence of the slices of constant $t$. For example (and I've asked another related question about this, after starting to study a 3+1 GR book), in simulations where a Black Hole - Neutron Star merger is happening, the parameter $t$ refers to these slices, but I fail to see the physical relation. Answer: It's a time coordinate for the spacetime. It's not a unique choice, of course, because we have diffeomorphism invariance; any other scalar function $t'$ on our manifold with $(\nabla_a t') (\nabla^a t') < 0$ is just as valid of a choice. This coordinate freedom is why we have all that business with the lapse and the shift and the Hamiltonian and momentum constraints. But as far as the equations of motion are concerned, $t$ plays the same role that $t$ does in any other field theory, allowing us to cast the equations of motion in the form $$ {\text{rate of change} \choose \text{of fields w.r.t. } t} = {\text{some expression involving the fields} \choose \text{& their derivatives at time } t}. $$
{ "domain": "physics.stackexchange", "id": 73568, "tags": "general-relativity, spacetime, coordinate-systems, time, hamiltonian-formalism" }
What is the mechanism through which mass is converted to thermal energy in the accretion disc of a black hole?
Question: In the book The Cosmic Perspective, it is stated that as matter is falling into a supermassive black hole, up to $40\%$ of its mass is converted to thermal energy, making the accretion of matter around a black hole a vastly more efficient energy source than even fusion. But what is the mechanism behind this conversion of mass to thermal energy? Answer: The "mass" falling in is the rest mass (at infinity). As the matter falls it gains kinetic energy. Most of the matter cannot fall directly into the black hole because it encounters a potential barrier due to its angular momentum with respect to the black hole, so it enters some sort of orbit. The orbiting matter accumulates in an accretion disk. To fall into the black hole, the matter must lose angular momentum. It does this via friction. At a microscopic level, interactions between particles, and possibly with magnetic fields, transfer angular momentum outwards and also heat the disk material. The hot disk effectively radiates away (some of) the kinetic energy that the matter gained by falling towards the black hole and the matter moves inwards, eventually falling into the black hole. This radiated energy can be a significant fraction of the rest mass energy of the matter because it is moving relativistically when it gets close to the black hole. One way of looking at the whole process is in terms of conservation of mass/energy. The start point is the black hole plus the rest mass energy of the material that is to be accreted. After accretion the black hole has accreted some of that mass-energy, but a fraction of it has been radiated away as it passes through the accretion disk. Thus, as @Sten points out, the mass accreted by the black hole is less than the rest mass that fell into it - the difference emerges as radiation.
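The "up to 40%" figure corresponds to the binding energy released by matter spiralling down to the innermost stable circular orbit (ISCO). For a non-rotating (Schwarzschild) hole the specific energy at the ISCO is $E/mc^2 = 2\sqrt{2}/3$, and for a maximally rotating (extremal Kerr) hole with a prograde orbit it is $1/\sqrt{3}$; the radiated fraction is $1 - E/mc^2$. A quick check of the standard values:

```python
import math

def accretion_efficiency(e_isco_over_mc2):
    # Fraction of rest-mass energy radiated while spiralling in
    # to the innermost stable circular orbit (ISCO)
    return 1.0 - e_isco_over_mc2

eff_schwarzschild = accretion_efficiency(2.0 * math.sqrt(2.0) / 3.0)
eff_extremal_kerr = accretion_efficiency(1.0 / math.sqrt(3.0))
```

These evaluate to about 5.7% and 42% respectively, so the book's figure is the (near-)extremal Kerr limit.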
{ "domain": "astronomy.stackexchange", "id": 6982, "tags": "black-hole, supermassive-black-hole, quasars, accretion" }
Supervisors and employees query, with a subselect and inner join
Question: I want to replace or improve the SELECT command of tblVisor to make it faster. Is there any way to improve this SQL command? SELECT tblVisor.supervisor_id, tblVisor.last_name, tblVisor.first_names, tblVisor.employee_job_profile_id, org_employees.last_name, org_employees.first_name, org_employees.job_code FROM (SELECT cp_supervisor_properties.supervisor_id, persons.last_name, persons.first_names, cp_supervisor_properties.employee_job_profile_id FROM cp_supervisor_properties INNER JOIN persons ON persons.person_id = cp_supervisor_properties.supervisor_id) as tblVisor INNER JOIN org_employees ON org_employees.employee_number = tblVisor.employee_job_profile_id LIMIT 100 Answer: Unless I'm missing something, there doesn't seem to be a reason for the subselect; just join all three tables directly. Also, INNER is the default, so you could drop that prefix as well. The query would then become something like the following: SELECT cp_supervisor_properties.supervisor_id, persons.last_name, persons.first_names, cp_supervisor_properties.employee_job_profile_id, org_employees.last_name, org_employees.first_name, org_employees.job_code FROM cp_supervisor_properties INNER JOIN persons ON persons.person_id = cp_supervisor_properties.supervisor_id INNER JOIN org_employees ON org_employees.employee_number = cp_supervisor_properties.employee_job_profile_id LIMIT 100; Without more information (about how the data looks and so on) I don't see possible performance improvements. If you have problems with the query time you should probably check for missing indexes on the join columns and in general look into query optimisation.
{ "domain": "codereview.stackexchange", "id": 13232, "tags": "sql, postgresql" }
Why does cyclobutadieneiron tricarbonyl behave aromatically?
Question: It is said that $\ce{(\eta^4-C4H4)Fe(CO)3}$ can undergo electrophilic substitution reactions. Therefore, it displays aromaticity. For the iron atom, it has $8$ electrons in its outer shell initially and it receives $6$ electrons from three carbonyls. It achieves the stable state of $18e$ after bonding with cyclobutadiene. Therefore, the oxidation state of iron should be zero, and the number of $\pi$ electrons of cyclobutadiene in the complex is still $4$. But this result contradicts Hückel's rule, which states that the number of $\pi$ electrons should be $4n+2$. Where did I go wrong? Answer: In the paper "(Cyclobutadiene)iron Tricarbonyls - A Case of Theory before Experiment", Organometallics 2003, 22, 2-20 https://pubs.acs.org/doi/pdf/10.1021/om020946c, on page 12 a nice explanation of the bonding in this compound is provided: The electronic structure of (cyclobutadiene)iron tricarbonyl has been the subject of many papers. He I and He II low-energy photoelectron spectra provided useful information [77,78]. The eight observed bands in the low-energy PE spectrum of (cyclobutadiene)iron tricarbonyl in the range 7.65-20.31 eV all were assigned [78]. The calculations (ab initio SCF MO) showed that there is a net negative charge on the cyclobutadiene ligand that results from π back-bonding from the iron atom into an antibonding MO of the ligand (δ bond). Better agreement between the experimental assignments and theoretical calculations was obtained by Chinn and Hall using generalized MO calculations with configuration interaction [79]. A simple textbook approach to the bonding is shown in Figure 8 [80]. It is assumed that it is triplet state cyclobutadiene that is involved with unpaired electrons in the two degenerate ψ2 and ψ3 molecular orbitals. These interact with two singly occupied iron orbitals, generating two covalent bonds.
{ "domain": "chemistry.stackexchange", "id": 12647, "tags": "organic-chemistry, organometallic-compounds, aromaticity" }
Is it possible to disprove water memory with an entropy argument?
Question: Water memory was a controversial claim offered as an explanation supporting homeopathy. The results were largely dismissed as being tainted by experimental error. One possible mechanism invoked was that water molecules, on account of being polar, would form a structured network, thus storing information about other molecules they had been in contact with. I wonder if it is possible to disprove this explanation with an entropy argument. If water molecules did arrange themselves in such a fashion, this would have an effect on their entropy, which in turn would impact other thermodynamic functions such as Gibbs free energy. In other words, it might be possible to look for such an entropic effect by looking at deviations from expected behavior in a steam table. Would the magnitude of such an effect be measurable? Answer: Quite simply, no. Water memory doesn't appear to violate any physical laws, and the claims made about it are not well-defined or specific enough to be falsified (e.g. with an entropic argument). It's revealing that while a scientist could be convinced that he's wrong, there's nothing that could change the mind of a homeopath. The best we can do is test the specific mechanisms that have been proposed to produce a memory effect. For example, most homeopathy theories revolve around some sort of persistent water structure, and this structure would presumably be the result of hydrogen-bonded networks. But it's well-established from experiment as well as computer simulations (e.g. ab initio MD) that hydrogen-bonded networks last for a matter of femtoseconds and therefore could not give rise to long-lasting memory effects. Indeed, there does not currently exist any plausible mechanism (or evidence) for water memory that does not contradict well-established science. And for this reason the scientific community does not accept it.
But, as was seen to be the case with Jacques Benveniste's work, if some evidence were to arise for it then the scientific community would be willing to take it seriously and investigate.
{ "domain": "physics.stackexchange", "id": 19963, "tags": "thermodynamics, water, entropy" }
What does torque depend upon?
Question: What does torque depend upon? I know torque depends on force and moment arm, but does it depend on choice of origin? Because I think choice of origin determines its moment arm. Answer: Torque is defined as $\vec \tau = \vec r \times \vec F$, where $\vec r$ is the displacement vector from the origin to the point at which the force is applied. This means that torque depends very much on the choice of origin. Then again, the choice of origin also affects the inertia tensor. So long as you get all of the physics correct, you can choose any origin you want. The ultimate answer will be the same regardless of choice of origin. That said, some choices vastly complicate the equations of motion while other choices vastly simplify the equations of motion. The "best" choice of origin is the one that results in the simplest equations of motion. This varies from problem to problem. There is no hard and fast rule that says always choose origin X (whatever "X" may be).
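The origin dependence is easy to demonstrate numerically: the same force applied at the same point gives different torques $\vec\tau = \vec r \times \vec F$ about different origins (the vectors below are arbitrary illustrative values):

```python
def cross(a, b):
    # 3D cross product, a x b
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def torque(origin, point, force):
    # tau = r x F, with r the displacement vector from the chosen
    # origin to the point where the force is applied
    r = tuple(p - o for p, o in zip(point, origin))
    return cross(r, force)
```

For a unit force along $z$ applied at $(1,0,0)$, the torque about the origin $(0,0,0)$ points along $-y$, while about the origin $(2,0,0)$ it points along $+y$: same force, same application point, opposite torques.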
{ "domain": "physics.stackexchange", "id": 16952, "tags": "reference-frames, torque" }
Prove that $L^r$ is context free without alphabet
Question: I'm stuck with this problem: Given $L$ a CFL on the alphabet $\Sigma$. Prove that $L^r=\{x^r|x\in L\}$, where for each $a\in\Sigma$ and $y\in\Sigma^*$, $$\epsilon^r=\epsilon,$$ $$(ay)^r=y^ra,$$ is context free or not. Since I don't have the alphabet I cannot think of a grammar that generates this language, so I decided to prove that it's not context free by applying the pumping lemma for CFLs. So I started with the hypothesis that $L^r$ is context free, thus if $x\in L^r$ then $x^r\in L$. Then I tried to find different possible strings that once pumped didn't belong anymore to $L^r$, but I'm not able to find such a string. Is this a bad approach? Where am I wrong? Answer: Informally, by construction $L^r$ consists of the strings in $L$ reversed. Since $L$ is context-free, it has a grammar in Chomsky normal form. All production rules in this grammar will fall into one of three classes: $A \rightarrow BC$ $A \rightarrow a$ $S \rightarrow ε$ Thus, by reversing the order of non-terminals in the RHS of all rules of category 1., a new grammar can be derived that produces $L^r$. This new grammar is also in Chomsky normal form, thus $L^r$ is context-free.
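The construction in the answer generalizes beyond CNF: reversing the right-hand side of every production yields a grammar for $L^r$ (for CNF rules this reduces to swapping $BC$ to $CB$). A small sketch with a brute-force generator to check it on $L = \{a^n b^n\}$; the dict-based grammar encoding is my own convention, and the generator is only meant for tiny test grammars:

```python
from collections import deque

def reverse_grammar(rules):
    # rules: dict nonterminal -> list of right-hand sides (tuples of symbols)
    return {nt: [tuple(reversed(rhs)) for rhs in alts]
            for nt, alts in rules.items()}

def generate(rules, start, max_len):
    """Brute-force all terminal strings of length <= max_len by
    expanding the leftmost nonterminal; symbols absent from `rules`
    are treated as terminals."""
    results, seen, queue = set(), set(), deque([(start,)])
    while queue:
        form = queue.popleft()
        nt_pos = next((i for i, s in enumerate(form) if s in rules), None)
        if nt_pos is None:
            if len(form) <= max_len:
                results.add(''.join(form))
            continue
        if sum(1 for s in form if s not in rules) > max_len:
            continue  # too many terminals already; prune this branch
        for rhs in rules[form[nt_pos]]:
            new = form[:nt_pos] + rhs + form[nt_pos + 1:]
            if new not in seen:
                seen.add(new)
                queue.append(new)
    return results
```

For the grammar $S \rightarrow aSb \mid \epsilon$, the reversed grammar $S \rightarrow bSa \mid \epsilon$ generates exactly the reversals $b^n a^n$, as the construction promises.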
{ "domain": "cs.stackexchange", "id": 11867, "tags": "context-free, proof-techniques, pumping-lemma" }
Building SQL query
Question: I wrote a messy function and I am wondering if you see any way I could clean it up. Essentially it takes in a list e_1=2&e_2=23&e_3=1 and makes a queryset out of it, maintaining the ordering. from operator import itemgetter def ids_to_students(items, prefix=0): if prefix == 0: # Make ordered QuerySets of Students based on there ids etuples = sorted([(k,v) for k,v in items if k[:2] == 'e_'], key=itemgetter(0)) ituples = sorted([(k,v) for k,v in items if k[:2] == 'i_'], key=itemgetter(0)) tuples = etuples+ituples else: tuples = sorted([(k,v) for k,v in items if k[:2] == '%s_'%prefix], key=itemgetter(0)) pk_list = [v for (k,v) in tuples] clauses = ' '.join(['WHEN id=%s THEN %s' % (pk, i) for i, pk in enumerate(pk_list)]) ordering = 'CASE %s END' % clauses students = Student.objects.filter(pk__in=pk_list).extra( select={'ordering': ordering}, order_by=('ordering',)) return students It's called like this: students = ids_to_students(request.GET.items()) e_students = ids_to_students(request.GET.items(), 'e') Answer: Tuples are sorted primarily by the first item of each. If they are all unique, specifying key=itemgetter(0) makes no difference. This code if prefix == 0: etuples = sorted([(k,v) for k,v in items if k[:2] == 'e_'], key=itemgetter(0)) ituples = sorted([(k,v) for k,v in items if k[:2] == 'i_'], key=itemgetter(0)) tuples = etuples+ituples else: tuples = sorted([(k,v) for k,v in items if k[:2] == '%s_'%prefix], key=itemgetter(0)) can be rearranged like this to avoid repetition: prefixes = ['%s_'%prefix] if prefix else ['e_', 'i_'] tuples = sorted(item for item in items if item[0][:2] in prefixes)
{ "domain": "codereview.stackexchange", "id": 6710, "tags": "python, django" }
A Turing-recognizable but not Turing-decidable language must have infinitely many inputs on which its recognizer does not halt
Question: Sorry, I think I misunderstood the question. It should read: if $L$ is Turing-recognizable but not decidable, then there exist infinitely many inputs on which any TM recognizing it will not halt... Answer: The question shows several misconceptions. I'll try to clarify the key aspects. If $L$ is recognizable but not decidable, then $L$ has to be infinite (otherwise, it is decidable). If a TM recognizes such an $L$, it has to accept all the words in $L$ (by definition of "recognizes"), hence it must halt in infinitely many cases. A TM halting on infinitely many cases does not imply that the recognized language is decidable. (I'm unsure why you think this would be the case; it would make the halting problem decidable, for instance.) There is no such thing as an uncountable language, at least in a conventional setting where the alphabet is countable and words have finite length. Hence, a TM can never halt on "uncountably many" words.
{ "domain": "cs.stackexchange", "id": 14985, "tags": "formal-languages, turing-machines, automata, undecidability, halting-problem" }
Interpretation of Clarke's Doppler power spectral density
Question: What I understand of Doppler spread is that the relative motion between transmitter (TX) and receiver (RX) changes the exposure time of the signal. Compared to a constant-distance TX-RX pair, a TX-RX pair moving toward each other "compresses" the signal in time (the signal takes less time to propagate), so the signal is "expanded" in the frequency domain. Similarly, a TX-RX pair moving apart "expands" the signal in time and "compresses" its spectrum. In short, that is a scaling of the Fourier transform. These two extreme cases set the left and right bounds of spreading an original frequency between $-f_d$ and $+f_d$, where $f_d$ is the max Doppler spread. Looking at the Clarke model, it is just a multipath propagation model with a rich scattering environment and uniform angle of arrival. (link for more details Clarke model) If I understand well, there are two assumptions which are reasonable in an urban environment: Rayleigh fading equal angle of arrival, or equal receiver sensitivity I have followed the math from the original article, and it seems ok. The final Doppler power spectrum is then $\displaystyle S(f) = \frac{1}{\pi f_d \sqrt{1 - \left(\frac{f}{f_d}\right)^2}}$ What I don't understand is why the energy is concentrated at the two extreme spread frequencies $-f_d$ and $f_d$ while the angles of arrival are uniform. Is there any physical interpretation? What am I missing from the famous Clarke model? Personally, this model seems to model the typical urban environment well. R. H. Clarke, A Statistical Theory of Mobile-Radio Reception, The Bell System Technical Journal, July/Aug 1968, p. 957ff Answers Although the answer of Carlos captures the most fundamental mathematical part, the real answer is in his comment about "mapping between angle and frequency". Moreover, the answer of Maximilian is interesting too. Answer: A simple, "non-technical" way of thinking of it is the fact that the Doppler frequency is proportional to $\cos\theta$.
The amplitudes of cosine, however, are not uniformly distributed, but are heavily weighted towards $\pm 1$. Example plot to demonstrate, using Python/Pylab code: theta = linspace(0, 2*pi, 1001) x = cos(theta) hist(x) More rigor can be seen by noting that \begin{align} f &= f_d \cos\theta\\ \theta &= \cos^{-1}\left(\frac{f}{f_d}\right) \end{align} and the power received at any angle is proportional to a small angle increment $d\theta$: $$ P(\theta) \propto d\theta = \frac{-1}{f_d\sqrt{1-\left(\frac{f}{f_d}\right)^2}} df $$ And the total power can be determined by integrating the above quantity, which is identically what defines a power spectral density.
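A self-contained version of that demonstration (explicit NumPy imports instead of pylab's star import): for $\theta$ uniform on $[0, 2\pi)$, $\cos\theta$ follows the arcsine density $1/(\pi\sqrt{1-x^2})$, smallest at $x=0$ and diverging at $x=\pm 1$ — the same U-shape as $S(f)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)  # uniform angles of arrival
x = np.cos(theta)                                      # normalised Doppler shift f/f_d

# empirical probability density of cos(theta)
density, edges = np.histogram(x, bins=40, range=(-1.0, 1.0), density=True)

mid_bin = density[20]   # bin just above x = 0; analytic value is 1/pi ~ 0.318
edge_bin = density[0]   # bin at x = -1; the analytic density diverges here
```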
{ "domain": "dsp.stackexchange", "id": 5017, "tags": "power-spectral-density, doppler" }
Fourier Series of Aperiodic convolution of periodic functions
Question: we were given the following classic exercise: Given two periodic signals $x(t), y(t)$ with fundamental period $T$ with Fourier series coefficients $c_m^x, c_m^y$ respectively, find the Fourier coefficients of the signal $z(t) = x(t) * y(t)$ with relation to $T, c_m^x, c_m^y$. Now, this can easily be solved when the aforementioned convolution is the circular convolution (integral over a period only). However, in class our professor noted that it can be solved even when we have an aperiodic convolution (that is, convolution as an integral from $-\infty$ to $+\infty$). We argued that, in this case, that infinite integral doesn't converge, and he responded that, even though the convolution integral doesn't converge (i.e. might be infinite), the Fourier Series coefficients are still finite and can be calculated!! Is this true? If yes, then is the relation the usual one: $c_m^z=Tc_m^xc_m^y$ or another and how do you prove that? If not, why? Austere mathematical proofs would be appreciated. Answer: okay, now that the popcorn is eaten and my fingers are clean... 
$$ x(t) = \sum\limits_{m=-\infty}^{+\infty} c_m^x e^{j 2 \pi (m/T) t} $$ $$ y(t) = \sum\limits_{m=-\infty}^{+\infty} c_m^y e^{j 2 \pi (m/T) t} $$ then $$\begin{align} z(t) &= \sum\limits_{m=-\infty}^{+\infty} c_m^z e^{j 2 \pi (m/T) t} \\ \\ &= x(t) \circledast y(t) \\ \\ &= \int\limits_{-\infty}^{+\infty} x(\tau) y(t-\tau) \, d\tau \\ \\ &= \int\limits_{-\infty}^{+\infty} \sum\limits_{m=-\infty}^{+\infty} c_m^x e^{j 2 \pi (m/T) \tau} \sum\limits_{n=-\infty}^{+\infty} c_n^y e^{j 2 \pi (n/T) (t-\tau)} \, d\tau \\ \\ &= \sum\limits_{n=-\infty}^{+\infty} \sum\limits_{m=-\infty}^{+\infty} c_m^x c_n^y \int\limits_{-\infty}^{+\infty} e^{j 2 \pi (m/T) \tau} e^{j 2 \pi (n/T) (t-\tau)} \, d\tau \\ \\ &= \sum\limits_{n=-\infty}^{+\infty} \sum\limits_{m=-\infty}^{+\infty} c_m^x c_n^y \int\limits_{-\infty}^{+\infty} e^{j 2 \pi ((m-n)/T) \tau} \, d\tau \ e^{j 2 \pi (n/T) t} \\ \end{align} $$ this integral: $\int\limits_{-\infty}^{+\infty} e^{j 2 \pi ((m-n)/T) \tau} \, d\tau$ ain't converging when $n \ne m$ and certainly not when $n=m$. you will not be getting finite values for $c_m^z$.
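For contrast, the circular (one-period) convolution does converge, and the textbook relation $c_m^z = T c_m^x c_m^y$ can be checked numerically by approximating the Fourier coefficients with a DFT over one period:

```python
import numpy as np

T = 2 * np.pi
N = 256
t = np.arange(N) * T / N  # one period sampled on N points

x = np.cos(t) + 0.5 * np.sin(3 * t)
y = 2 * np.cos(2 * t) + np.sin(t)

# Fourier series coefficients: c_m = (1/T) * (integral over one period) ~ FFT/N
cx = np.fft.fft(x) / N
cy = np.fft.fft(y) / N

# circular convolution z(t) = integral_0^T x(tau) y(t - tau) dtau
z = (T / N) * np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))
cz = np.fft.fft(z) / N   # should equal T * cx * cy, coefficient by coefficient
```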
{ "domain": "dsp.stackexchange", "id": 5936, "tags": "fourier-series, periodic" }
"This operator is odd under parity"
Question: In problem 8.10 of Schaum's Quantum Mechanics they say: "We see that under the parity operator $r \rightarrow r$, $\theta \rightarrow \pi - \theta$ and $\phi \rightarrow \pi + \phi$ .. since $\frac{d}{d\theta} \rightarrow -\frac{d}{d\theta}$ and $\frac{d}{d\phi} \rightarrow \frac{d}{d\phi}$, it follows that the operators $\hat{L}_\pm$ are not affected by the parity operation." (Here $\theta$ is the $z$-axis spherical angle and $\phi$ is the azimuthal spherical angle.) Another source, http://itp.uni-frankfurt.de/~valenti/SS14/QMII_2014_chap3.pdf, also refers to this idea of an OPERATOR being "odd" under parity. Do the operators really change? If you represented the $\frac{d}{d\theta}$ operator for example "under the parity operation" (what does that mean?) as a matrix, wouldn't it be the exact same matrix? It's just that the input wavefunction being fed to the $\frac{d}{d\theta}$ operator has had all its $+$ and $-$ signs flipped (from the perspective of $xyz$ coordinates), so naturally $\frac{d}{d\theta}$ outputs $-1$ times its result. Correct? Answer: If you represented the $\frac{d}{d\theta}$ operator for example "under the parity operation" (what does that mean?) as a matrix, wouldn't it be the exact same matrix? When we talk about an operator undergoing some unitary transformation, be it spatial inversion, rotation, time reversal, etc., we are saying $$U^{-1}AU = \,\,?$$ Saying an operator is odd/even means $$U^{-1}AU = \pm A$$ with $+$ for even and $-$ for odd. When you look at the matrix elements of a transformed operator, $$\langle \alpha | U^{-1}AU | \beta \rangle$$ you can see that these are not the same as those of the original operator, $$\langle \alpha | A | \beta \rangle$$ So the matrix is not the same. It's just that the input wavefunction being fed to the $\frac{d}{d\theta}$ operator has had all its $+$ and $-$ signs flipped That's a valid way of looking at it as well.
Instead of viewing $\langle \alpha | U^{-1}AU | \beta \rangle$ as some new operator $A' = U^{-1}AU$ acting on the old states, you can see it as $A$ acting on the transformed states $\langle \tilde \alpha | = \langle \alpha | U^{-1}$ and $| \tilde \beta \rangle = U | \beta \rangle$. I'll stress that the matrix is still not the same, provided you're using the same basis in either case. To illustrate this point, let's consider the expectation value of an operator $A$ - which commutes with the position operator - after a parity transformation. We have $$\langle A' \rangle = \langle \Psi | \Pi^{-1}A\Pi | \Psi \rangle = \int d\mathbf{x'}d\mathbf{x''} \langle \Psi | \Pi^{-1} | \mathbf{x'} \rangle \langle \mathbf{x'} | A | \mathbf{x''} \rangle \langle \mathbf{x''} | \Pi | \Psi\rangle = \int d\mathbf{x'} \Psi^*(-\mathbf{x'})A\Psi(-\mathbf{x'})$$ where in the last step I've made use of the orthogonality of position states and the Hermiticity of the parity operator. So either way is valid (although the relation $U^{-1}AU = \pm A$ is independent of basis). In the same way that we can speak of time-evolved operators with time-independent kets in the Schrödinger picture and time-evolved kets with time-independent operators in the Heisenberg picture, we can choose to transform the "inputs" or the operator.
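The "matrix is not the same" point can be made concrete with finite differences: represent $\frac{d}{d\theta}$ as a matrix $D$ on a grid, and parity $\theta \to \pi - \theta$ as the grid-reversal matrix $R$. Then $R^{-1}DR = -D$, a genuinely different matrix. (A numerical sketch of my own, not from the answer.)

```python
import numpy as np

N = 101
theta = np.linspace(0.0, np.pi, N)
h = theta[1] - theta[0]

# central-difference matrix representing d/dtheta on the grid
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

# parity theta -> pi - theta just reverses the grid; R is its own inverse
R = np.flipud(np.eye(N))

Dp = R @ D @ R   # the transformed operator Pi^{-1} (d/dtheta) Pi
```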
{ "domain": "physics.stackexchange", "id": 24710, "tags": "operators, parity" }
Models in a Simple PyMongo-based Blogging Web App without ORM/ODM
Question: I am currently using PyMongo + Flask for building a simple blogging application. I am not using any kind of ODM, instead I decided to use PyMongo directly. I need to know how to improve my code since learning is my main motivation for building this application. This portion of code is implemented in a view controller called post.py and it is used to edit a post: @posts.route('/<_id>/edit', methods=['GET', 'POST']) def edit(_id): post = Post() post.get(_id) if request.method == 'POST': post.title = request.form.get('title') post.tags = [tag.strip() for tag in request.form.get('tags').split(',')] post.body = request.form.get('body') post.update() return render_template('posts_edit.html', post=post) The Post model looks something like this: class Post: def __init__(self, _id=None, title=None, status=None, tags=None, body=None, date=None, author=None): self._id = _id self.title = title self.status = status self.tags = tags self.body = body self.date = date self.author = author def save(self): mongo.db.posts.insert_one({ "_id": ObjectId(), "title": self.title, "status": self.status, "tags": self.tags, "body": self.body, "date": datetime.utcnow(), "author": self.author }) def update(self): mongo.db.posts.update({ '_id': self._id }, { "title": self.title, "status": self.status, "tags": self.tags, "body": self.body, "date": self.date, "author": self.author }) def get(self, _id): post = mongo.db.posts.find_one({'_id': ObjectId(_id)}) if post: self.__init__(**post) return self return None @classmethod def get_all(cls): return mongo.db.posts.find() Is this the right direction? Answer: For the route I'd probably use fewer temporary variables, not that it hurts a lot, but it can be more compact. Additionally if Post implemented some sort of dict-like update function it could be even more compact, but that's splitting hairs. 
@posts.route('/<_id>/edit', methods=['GET', 'POST']) def edit(_id): post = Post().get(_id) if request.method == 'POST': post.title = request.form.get('title') post.tags = [tag.strip() for tag in request.form.get('tags').split(',')] post.body = request.form.get('body') post.update() return render_template('posts_edit.html', post=post) (Since get returns self, the two temporaries collapse into one expression.) With the latter point I mean something like this (implementation left as an exercise for the reader): @posts.route('/<_id>/edit', methods=['GET', 'POST']) def edit(_id): post = Post().get(_id) if request.method == 'POST': post.update({ "title": request.form.get('title'), "tags": [tag.strip() for tag in request.form.get('tags').split(',')], "body": request.form.get('body') }) return render_template('posts_edit.html', post=post) For the other file I'd suggest looking at the usual 80 character limit, meaning wrapping the longer lines a bit. In save the date and _id attributes of the object aren't being used - that looks inconsistent; either self.date and self._id should be updated too, or neither, right? That should probably also be documented clearly. With get and the __init__ call, AFAIK that's probably safe, but coming from any background where constructors are, well, that, this looks fishy. Consider making that a class method too: @classmethod def get(cls, _id): return cls(**mongo.db.posts.find_one({'_id': ObjectId(_id)})) get_all doesn't use its argument, so it's actually more of a staticmethod right now. If you're not going to use inheritance, make it one. If you are, possibly move mongo.db.posts into a class variable so that you could override it for subclasses and the methods would keep on working with the correct collections.
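The dict-like update hinted at above could look roughly like this. This is a sketch with an in-memory stand-in for the Mongo collection (class names and the FIELDS tuple are my own, for illustration only), so the shape is demonstrable without a database:

```python
class FakeCollection:
    """In-memory stand-in for mongo.db.posts, just enough for the sketch."""
    def __init__(self):
        self.docs = {}

    def update(self, query, doc):
        self.docs[query['_id']] = doc

posts = FakeCollection()

class Post:
    FIELDS = ('title', 'status', 'tags', 'body', 'date', 'author')

    def __init__(self, _id=None, **fields):
        self._id = _id
        for name in self.FIELDS:
            setattr(self, name, fields.get(name))

    def update(self, fields=None):
        # merge the new values onto the object, then persist in one call
        for name, value in (fields or {}).items():
            setattr(self, name, value)
        posts.update({'_id': self._id},
                     {name: getattr(self, name) for name in self.FIELDS})

p = Post(_id=1, title='old', body='text')
p.update({'title': 'new', 'tags': ['a', 'b']})
```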
{ "domain": "codereview.stackexchange", "id": 25331, "tags": "python, mongodb, flask, pymongo" }
Qiskit - Statevector measurement of single qubit in GHZ state collapses the entire state
Question: I have a GHZ state. I want to measure the third qubit in the Hadamard basis, after which the state left behind should be a maximally entangled state as mentioned here. But when I measure the third qubit, the entire state collapses into either $|000\rangle$ or $|{111}\rangle$. I am representing the quantum state as a statevector and then measuring that. import numpy as np from qiskit import QuantumCircuit from qiskit.quantum_info import Statevector #Create a circuit to generate GHZ state circ = QuantumCircuit(3) circ.h(0) circ.cx(0, 1) circ.cx(0, 2) #Get the statevector from circuit ghz_statevec = Statevector(circ) H_matrix = 1/np.sqrt(2)*np.array([[1, 1], [1,-1]]) #To measure in X basis, apply the Hadamard transform evolved_state = ghz_statevec.evolve(H_matrix, [0]) evolved_state.draw('latex') The output after applying Hadamard to the third qubit is $ \frac{1}{2} |{000}\rangle + \frac{1}{2} \ |001\rangle + \frac{1}{2} |110\rangle - \frac{1}{2} |111\rangle$ outcome, state = ghz_statevec.measure([0]) state.draw('latex') But the state after measuring the third qubit is $|000\rangle$ When I use the circuit representation and do the measurement, I get the expected outcome, but not in the statevector representation.
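The fix can also be sanity-checked with plain NumPy, mirroring Qiskit's little-endian convention (qubit 0 is the rightmost bit of the basis label): apply H to qubit 0 of the GHZ state, project onto outcome 1, and the remaining pair is left in $(|00\rangle - |11\rangle)/\sqrt{2} \otimes |1\rangle$. (A hand-rolled check, not Qiskit API code.)

```python
import numpy as np

# GHZ state (|000> + |111>)/sqrt(2); basis index = 4*q2 + 2*q1 + q0
ghz = np.zeros(8)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

# Hadamard on qubit 0, the rightmost factor in the q2 (x) q1 (x) q0 ordering
U = np.kron(I2, np.kron(I2, H))
evolved = U @ ghz          # (|000> + |001> + |110> - |111>)/2

# keep only the amplitudes where qubit 0 reads 1, then renormalise
mask = (np.arange(8) & 1).astype(bool)
post = np.where(mask, evolved, 0.0)
post = post / np.linalg.norm(post)

expected = np.zeros(8)
expected[0b001] = 1 / np.sqrt(2)    #  |001> component
expected[0b111] = -1 / np.sqrt(2)   # -|111> component
```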
{ "domain": "quantumcomputing.stackexchange", "id": 4322, "tags": "qiskit, measurement" }
Conformal Compactification of spacetime
Question: I have been reading Penrose's paper titled "Relativistic Symmetry Groups" where the concept of conformal compactification of a space-time is discussed. My other references have been this and this. In case you cannot see the paper above, let me describe what I understand so far: The idea of conformal compactification is to bring points at "infinity" on a non-compact pseudo-Riemannian manifold $M$ (equipped with metric $g$) to a finite distance (in a new metric) by a conformal rescaling of the metric ${\tilde g} = \Omega^2 g$. This is done so that $(M,{\tilde g})$ can be isometrically embedded into a compact domain ${\tilde M}$ of another (possibly non-compact) pseudo-Riemannian manifold $M'$. This allows us to discuss the asymptotic behaviour of the manifold under consideration. In the references above, they mention the following without explanation: Then observe that any regular extension of $\phi$[$=\Omega^2$] to the conformal boundary $\partial {\tilde M} \subset M'$ must vanish on said boundary. This reflects the property of a conformal compactification that “brings infinity to a finite distance”. This I do not understand. Firstly, what is meant by conformal boundary? Secondly, why should $\Omega = 0$ on the conformal boundary? Is there any good reference for this material? Answer: Often (like in the case of the standard compactifications of $\mathbb R^{d,1}$ or $\mathrm{AdS}_{d+1}$), the non-compact manifold $M$ that we start with is mapped to the interior of a compact manifold $\tilde M$ that also happens to be a manifold with boundary. In these cases, we define the conformal boundary as $\partial\tilde M$ as indicated by Ben Crowell. Note, however, that this does not always happen. Consider, for example, the standard stereographic projection which maps $M=\mathbb R^2$ onto $\tilde M=S^2\setminus \{(0,0,1)\}$ where in my notation I'm treating the sphere as an embedded submanifold of $\mathbb R^3$ with north pole $(0,0,1)$. 
Notice, in this case, that $\tilde M$ is not the interior of a compact manifold with boundary; when we include the north pole, we get $S^2$ which is a compact manifold without boundary. In contrast, consider $\mathbb R^{d,1}$ with metric $$ ds^2 = -dt^2 + dr^2 + r^2 d\Omega_{d-1}^2 $$ where $d\Omega_{d-1}^2$ is the metric on $S^{d-1}$. Let $\vec\theta$ be coordinates on the sphere, then the diffeomorphism $$ f(t,r,\vec\theta) = (T(t,r,\vec\theta), R(t,r,\vec\theta), \vec\theta) $$ where \begin{align} T(r,t,\vec\theta) &= \tan^{-1}(t+r) + \tan^{-1}(t-r)\\ R(r,t,\vec\theta) &= \tan^{-1}(t+r) -\tan^{-1}(t-r) \end{align} leaves the sphere factor unchanged but maps all of the $(r,t)$ plane to the interior of the triangular region in the $(R,T)$ plane satisfying $$ 0\leq R \leq \pi, \qquad |T|\leq \pi-R $$ This region does have a boundary (the edges of the triangle) which allows us to define the conformal boundary of $\mathbb R^{d,1}$. As for the $\Omega^2 = 0$ constraint, this is how I think of it intuitively (and pretty imprecisely). The new metric $\tilde g$ on the compact manifold is related to the original metric $g$ by the conformal factor: $$ \tilde g = \Omega^2 g $$ Now in the original manifold, as you go out to infinity, $g$ allows for distances between points to be arbitrarily large. But after the compactification, all points will be some finite distance away from one another. In order for this to happen, distances between points need to be multiplied by a smaller and smaller number as you go further and further out so that the product remains finite. The factor $\Omega^2$ multiplying $g$ is precisely what does this for you. This roughly is what the quote means when it says This reflects the property of a conformal compactification that “brings infinity to a finite distance." By the way, I found it useful to explicitly go through the $\mathrm{AdS}_{d+1}$ example. 
In particular, you can, for example, verify for yourself that for the explicit mapping written above, the conformal factor is $$ \Omega(t,r,\vec\theta)^2 = \frac{1}{\frac{1}{4}(1+(r-t)^2)(1+(r+t)^2)} $$ which vanishes as $r,t\to\infty$.
{ "domain": "physics.stackexchange", "id": 8230, "tags": "general-relativity, string-theory, symmetry, asymptotics" }
Navigation planning based on kinect data in 2.5D?
Question: I have a wheeled robot with a front mounted SICK laser scanner. The recent addition of a kinect allows me to use the SICK laser for longer range nav planning and mapping, and the kinect for various 3d stuff. So, my idea is to convert the kinect point cloud to laser scans at various heights (via pcl). Say for instance my SICK laser is at 8" above the ground. That will not tell me about a curb or some other obstacle that lies just under 8". So, if I were to map the appropriate Z value of the kinect's point cloud data to a new laser scan topic, I could then use it for navigation, and write some code to decide what to do at that Z level. A simple example would be to determine the height of an object that my robot could negotiate based on wheel size, and just slow it down to the appropriate speed. It could also check for height clearance when driving around by creating a laser scan that correlates to the highest Z value of the robot. I see this being useful for quad copters... it is still not full 3D navigation, but it could allow for some decent object avoidance in the Z dimension by writing some code to determine which Z height has the most clear path. My question is, is anyone using laser scans at various heights to evaluate navigation at different Z levels? Is the kinect2laser a viable solution, or is there a better way to do this? I see this as a possible workaround for this problem. Originally posted by evanmj on ROS Answers with karma: 250 on 2011-02-27 Post score: 5 Original comments Comment by joq on 2011-03-06: Have you considered using a Voxel Grid? Costmap2d offers that option. Comment by KoenBuys on 2011-02-27: I had the same idea with a master thesis student at our lab (Enea Scioni, also following the list), however he left back to Italy where he's doing a PhD now, perhaps he continued the work. 
The idea we had is to include Z acquisition information in a map, so that robots could reason about how to interpret maps that were acquired by other types of robots. I also think this could be useful to quadrotors as long as you can encode it in a memory-optimized way. Right now the data coming from the kinect is too large to build full 3D maps with the limited onboard resources. Comment by Eric Perko on 2011-02-27: Why convert to laser scans in the first place? Much of the existing navigation stack can use PointClouds for sensory input. Answer: The navigation stack will support most of your use case out of the box -- because you have a true long-range laser for localization (which that other problem you reference did not). You might note that this is nearly identical input to what the PR2 uses, which has a Hokuyo laser on the base for localization and obstacle avoidance, and a stereo camera rig for additional obstacle avoidance in 3d. When configuring your robot's launch and parameter files, use only the SICK laser as input to AMCL for localization. Then use both the SICK and the Kinect data as observation sources for the local costmap (see http://www.ros.org/wiki/costmap_2d for details on parameters). It might also be advisable to set up a voxel_grid to downsample and clean your Kinect data before sending it into costmap_2d. The pcl_ros package contains a nodelet that can do this, so you could configure it entirely in a launch file without any new custom code (see http://www.ros.org/wiki/pcl_ros/Tutorials/VoxelGrid%20filtering for details) Originally posted by fergs with karma: 13902 on 2011-03-06 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by ctguell on 2013-07-28: Hi, could you accomplish the navigation using gmapping? Any help would be really appreciated
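As a rough sketch of that dual-sensor setup (parameter names follow the costmap_2d wiki, but treat the exact values, frames, and topic names as placeholders for your robot), the local costmap configuration might look like:

```yaml
local_costmap:
  observation_sources: base_laser kinect_cloud
  base_laser:
    sensor_frame: base_laser_link
    data_type: LaserScan
    topic: /scan
    marking: true
    clearing: true
  kinect_cloud:
    sensor_frame: openni_camera
    data_type: PointCloud2
    topic: /voxel_grid/output   # downsampled by the pcl_ros VoxelGrid nodelet
    marking: true
    clearing: true
    min_obstacle_height: 0.05   # ignore returns from the floor
    max_obstacle_height: 1.5    # clearance height of the robot
```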
{ "domain": "robotics.stackexchange", "id": 4887, "tags": "navigation, kinect, pcl" }
Are $|\pm x\rangle$ and $|\pm y\rangle$ always defined the same way for a two-level system?
Question: I'm a little confused about the definitions of the states $|+x\rangle$ and $|+y\rangle$. We've started out talking about spin-${1\over 2}$, and in this first chapter of my textbook (A Modern Approach to Quantum Mechanics - Townsend) we are told that the definitions for $|+x\rangle$ and $|+y\rangle$ are \begin{align} |+x\rangle &= {1\over \sqrt{2}}|+z\rangle\ + \ {1\over \sqrt{2}}|-z\rangle \\ |+y\rangle &= {1\over \sqrt{2}}|+z\rangle\ + \ {i\over \sqrt{2}}|-z\rangle \, . \end{align} All of the calculations in this chapter involving these two states use these two equations, and one homework problem says to prove that a given state reduces to the $|+x\rangle$ and $|+y\rangle$ that are "given in this chapter." To me this seems like an implication that these state equations are always equal to what I've written above. However, this doesn't seem right to me, because we have only been talking about spin-${1\over 2}$ and there are other spins, so these can hardly be a blanket statement for everything. Are these the equations of state for only spin-${1\over 2}$, or are they true in all cases? Answer: These are the spin-$1/2$ up states for the $\hat x$ and $\hat y$ directions. They are always the same. Of course, if you have different values of $S$, then you have more than just the spin-up and spin-down projections and the states are more complicated. For instance, for $S=1$, we have $$ L_x=\frac{1}{\sqrt{2}}\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{array} \right) $$ and the state $\vert 11\rangle_x$, with $m_x=1$ (largest possible projection along $\hat x$) is a combination of the $\vert 1m\rangle_z$ states of projection $m_z$ $$ \vert 1 1\rangle_x=\frac{1}{2}\vert 11\rangle_z+\frac{1}{\sqrt{2}}\vert 10\rangle_z+\frac{1}{2}\vert 1,-1\rangle_z $$ There is also (of course) a $\vert 10\rangle_x$ and a state $\vert 1,-1\rangle_x$, and they are just the normalized eigenvectors of the matrix $L_x$ given above.
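A quick numerical check of the $S=1$ expansion (note that the relative sign of the $\vert 10\rangle$ component is fixed by the matrix itself, not a free convention): diagonalise $L_x$ and compare its $m_x = +1$ eigenvector, up to overall phase, with $(1/2,\ 1/\sqrt{2},\ 1/2)$ in the $(\vert 11\rangle, \vert 10\rangle, \vert 1,-1\rangle)$ basis.

```python
import numpy as np

Lx = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]]) / np.sqrt(2)

vals, vecs = np.linalg.eigh(Lx)       # eigenvalues come out sorted: -1, 0, +1
v_plus = vecs[:, np.argmax(vals)]     # the m_x = +1 eigenvector
v_plus = v_plus * np.sign(v_plus[0])  # fix the arbitrary overall phase

expected = np.array([0.5, 1 / np.sqrt(2), 0.5])
```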
{ "domain": "physics.stackexchange", "id": 48318, "tags": "quantum-mechanics, angular-momentum, hilbert-space, quantum-spin, two-level-system" }
Product operator analysis for CHn groups in HSQC and HMQC
Question: I'm taking an advanced NMR course this semester and we are learning about product operator formalism. I have a homework where I'm supposed to apply this formalism to an HSQC and HMQC experiment. The problem I have is that I should use 3- and 4-spin systems ($\ce{CH2}$ and $\ce{CH3}$ groups) instead of just simply $\ce{CH}$, and we only did two-spin systems in class. I understand how to use the formalism and how to apply it to the experiment, but I need, let's say, a concept clarification. We use basic pulse sequences for HSQC and HMQC. In the exercise, we assume that the hydrogens do not couple to each other, only to the carbon. I did the example and found out that the result (in the product operator formalism it was the same observable term with which I was left at the end) was the same for the $\ce{CH3}$ group as for $\ce{CH}$ after optimization of the delay time. After I thought about it, I feel it makes sense. Since the three hydrogens will do the same as one hydrogen, only I will have a different intensity afterward in the spectrum and of course different shifts. I even calculated that the optimal tau delay is the same for both $\ce{CH}$ and $\ce{CH3}$ groups. And since HMQC basically gives the same info, I assume it will work the same, and results for $\ce{CH}$ and $\ce{CH2}$ groups will also give me the same. Is this correct? Is there any other difference in what happens during the experiment to the $\ce{CH}$ and $\ce{CH3}$ group (or $\ce{CH2}$)? Just to clarify, it is not a multiplicity editing experiment, just a simple HSQC/HMQC applied to $\ce{AX2}$ and $\ce{AX3}$ spin systems. Answer: Your analysis is generally correct. In the case where all the protons are equivalent, you don't need to worry about proton–proton coupling. For equivalent protons, the main thing you might need to worry about is that when you have coherences on carbon, such as $S_x$, you will get evolution of multiple proton–carbon couplings, represented by (e.g.)
the Hamiltonian $2\pi J_{IS} I_{1z}S_z + 2\pi J_{IS}I_{2z}S_z$ for a $\ce{CH2}$ pair. This could, in theory, generate more complicated product operators such as $4I_{1z}I_{2z}S_x$. To analyse this, you can separately evaluate the effects of the two J-couplings. The first one goes as: $$S_x \xrightarrow{2\pi J_{IS} I_{1z}S_z \tau} (\cos \theta) S_x + (\sin \theta) 2I_{1z}S_y$$ where $\theta = \pi J_{IS}\tau$. To evaluate the second coupling, the $S_x$ term becomes $$(\cos\theta)S_x \xrightarrow{2\pi J_{IS} I_{2z}S_z \tau} (\cos^2 \theta) S_x + (\cos\theta \sin \theta) 2I_{2z}S_y$$ and for the $I_{1z}S_y$ term, we can just "carry through" the $I_{1z}$ terms because they don't evolve under a Hamiltonian on different nuclei*: $$(\sin\theta)2I_{1z}S_y \xrightarrow{2\pi J_{IS} I_{2z}S_z \tau} (\cos \theta\sin\theta) 2I_{1z}S_y - (\sin^2 \theta) 4I_{1z}I_{2z}S_x$$ So all in all, single-quantum coherence on carbon evolves under the two couplings together to give four terms: $$S_x \xrightarrow{2\pi J_{IS} (I_{1z}+I_{2z})S_z \tau} (\cos^2 \theta) S_x + (\cos\theta \sin \theta) (2I_{1z}S_y + 2I_{2z}S_y) - (\sin^2 \theta) 4I_{1z}I_{2z}S_x.$$ Thankfully, you do not need to worry about this, because the only time that single-quantum coherence on carbon exists is during the $t_1$ periods of the HSQC. Conveniently, the HSQC also has a proton 180° pulse in the middle of $t_1$, which refocuses all C–H couplings, so this whole evolution of $J_{IS}$ can be neglected. Outside of the $t_1$ periods the magnetisation of interest is single-quantum on proton, so the only coupling that evolves is that one proton–carbon coupling, which behaves in exactly the same way as for a CH system. For a more detailed treatment I suggest consulting Chapter 12 of James Keeler's Understanding NMR Spectroscopy, 2nd ed. (2010).
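Because all the operators in this Hamiltonian are diagonal in the Zeeman basis, the four-term result is easy to verify numerically. A sketch (my own, with spin order $I_1 \otimes I_2 \otimes S$, single-spin operators as Pauli matrices over 2; the coefficient of a basis product operator $B$ in $\rho$ is $\mathrm{Tr}(\rho B)/\mathrm{Tr}(B^2)$):

```python
import numpy as np

# single-spin operators (Pauli matrices over 2)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
E = np.eye(2)

def kron3(a, b, c):          # spin order I1 (x) I2 (x) S
    return np.kron(a, np.kron(b, c))

I1z, I2z = kron3(sz, E, E), kron3(E, sz, E)
Sx, Sy = kron3(E, E, sx), kron3(E, E, sy)
Sz = kron3(E, E, sz)

J, tau = 145.0, 2.2e-3       # arbitrary coupling (Hz) and delay (s)
theta = np.pi * J * tau

# H = 2*pi*J*(I1z + I2z)*Sz is diagonal in the Zeeman basis,
# so the propagator is just a diagonal matrix of phases
H = 2 * np.pi * J * (I1z @ Sz + I2z @ Sz)
U = np.diag(np.exp(-1j * np.diag(H) * tau))

rho = U @ Sx @ U.conj().T    # evolve S_x for a time tau

def coeff(op):
    """Coefficient of a product operator in rho: Tr(rho B) / Tr(B B)."""
    return np.real(np.trace(rho @ op) / np.trace(op @ op))
```

Running this reproduces the amplitudes derived above: $\cos^2\theta$ on $S_x$, $\cos\theta\sin\theta$ on $2I_{1z}S_y$, and $-\sin^2\theta$ on $4I_{1z}I_{2z}S_x$.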
If the protons are equivalent, then the above also applies to the HMQC: the only time when there are coherences on carbon is during $t_1$, and there is also a 180° pulse in the middle of $t_1$ which removes the effects of carbon–proton couplings. However, if the protons are not equivalent, then you will start to get evolution of H–H homonuclear couplings, which can't simply be refocused by a 180° pulse. For the HSQC this is less important as it just leads to a reduction in intensity from magnetisation that goes down non-useful coherence pathways. Because the magnetisation during the HSQC $t_1$ is single-quantum carbon, it does not evolve under homonuclear couplings. However, for the HMQC which has multiple-quantum coherence present during $t_1$, the proton component will evolve under homonuclear couplings, which means that you will get multiplets in the indirect dimension after Fourier transformation. In practice it is difficult to resolve these splittings and so it is just manifested as line broadening. There are also considerations to be made when using a sensitivity-enhanced HSQC sequence, but I assume those are not relevant for your current situation. The interested reader is directed to J. Biomol. NMR 1994, 4, 301–306. As a final point, in the HSQC above, notice that when we allow single-quantum carbon coherence to evolve under $J_{IS}$, the $\ce{CH2}$ group obtains a phase factor of $cos^2\theta$, whereas a $\ce{CH}$ group would only have a factor of $\cos \theta$. This is the basis of multiplicity editing in the HSQC: immediately after $t_1$, a spin echo of total duration $\tau = 1/J_{IS}$ is added in order to allow $J_{IS}$ to evolve. This choice means that $\theta = \pi$, so $\cos\theta = -1$: thus, $\ce{CH}$ groups are inverted and $\ce{CH2}$ groups are not. The resulting spectrum will therefore have different signs for peaks belonging to $\ce{CH}$ and $\ce{CH2}$ groups. 
($\ce{CH3}$ groups get a factor of $\cos^3\theta$, so have the same sign as $\ce{CH}$ groups.) * To be more precise, this is because $I_{1z}$ commutes with the Hamiltonian. So, it also commutes with the unitary propagator $U = \exp(-\mathrm i H\tau)$, and we can write $$2I_{1z}S_x \xrightarrow{H\tau} U(2I_{1z}S_x)U^\dagger = 2I_{1z} \cdot US_xU^\dagger$$
{ "domain": "chemistry.stackexchange", "id": 13852, "tags": "physical-chemistry, spectroscopy, nmr-spectroscopy" }
Systems and Gravitational PE
Question: Can an object that is part of a system of objects have gravitational potential energy if the Earth is not part of the said system? For example, say I have an incline and a block near the surface of the Earth. If the block is at the top of the incline and I define my system to be the block and the incline, then does the block have any gravitational PE? Sorry if this is a stupid question. Answer: Gravitational potential energy is not "inside an object". It is shared between two objects that attract each other. It is stored in the system or field between them. If you define your system as one of these objects alone, then it gets a bit difficult. Your system will not be isolated. Then you cannot rely on, e.g., the energy conservation law, because you essentially are allowing energy to exit and enter the system; and leaving the Earth out severely changes the system you have defined, since the external force changes. And without this force there is no tendency of the object to move, thus no stored energy. So it is best to avoid defining your system as only one part of a gravitationally interacting pair.
{ "domain": "physics.stackexchange", "id": 39318, "tags": "newtonian-gravity, potential-energy" }
Principle of stationary action vs Euler-Lagrange Equation
Question: I am a bit confused as to what I should use to derive the equations of motion from the Lagrangian. Suppose I have a Lagrange function: $$L(x(t), \dot{x}(t)) = \frac{1}{2}m\dot{x}^2-\frac{1}{2}k(\sqrt{x^2+a^2}-a)$$ Method 1: Principle of least action $$\delta L = \delta \dot{x}(m\dot{x})-\delta x \frac{kx(\sqrt{x^2+a^2}-a)}{\sqrt{x^2+a^2}}$$ $$\delta W = \int_{t_0}^{t_1} \delta L \ dt$$ After doing integration by parts, I obtain: $$\delta W= -\int_{t_0}^{t_1} \delta x \biggl[m \ddot{x}+\frac{kx(\sqrt{x^2+a^2}-a)}{\sqrt{x^2+a^2}} \biggr] dt$$ For stationary points, $\delta W = 0$. Hence, inside the integral, $$m\ddot{x}+\frac{kx(\sqrt{x^2+a^2}-a)}{\sqrt{x^2+a^2}} = 0$$ and this is the equation of motion. Method 2: Euler-Lagrange Equation Alternatively, we can consider the Euler-Lagrange equation: $$\frac{\partial L}{\partial x} - \frac{d}{dt}\biggl(\frac{\partial L}{\partial \dot{x}} \biggr) = 0$$ By substituting $L$ into the Euler-Lagrange equation, we get the same equation of motion: $$m\ddot{x}+\frac{kx(\sqrt{x^2+a^2}-a)}{\sqrt{x^2+a^2}} = 0$$ So method 2 is a lot easier than method 1, but why do we arrive at the same answer? I have a hunch that both methods are essentially calculating the same thing, but I am not sure if this hunch is right because the Euler-Lagrange equations seem a bit too simple compared to the principle of least action. Is there something I'm missing here? Answer: First, I think there is something wrong with your partial derivative of the Lagrangian with respect to $x$. Second, the Euler-Lagrange equations are nothing more than the process that you performed in Method 1, done without committing to a specific form for $L$ but leaving it generic. In your first step you took partial derivatives of $L$ with respect to its position and velocity terms; in your second step you took the velocity derivative and involved it in an integration by parts, where you took a total time derivative and then added a minus sign. 
If your Lagrangian also involved $\ddot x$ you would then have a $+\frac{\mathrm d^2~}{\mathrm dt^2}\frac{\partial L}{\partial\ddot x}$ term from two integrations by parts, for example.
{ "domain": "physics.stackexchange", "id": 62692, "tags": "classical-mechanics, lagrangian-formalism, variational-principle, action, variational-calculus" }
Critique of LinkedList class
Question: I'm working my way through The Java Programming Language, 4th edition. This is exercise 2.2: Write a LinkedList class that has a field of type Object and a reference to the next LinkedList element in the list. Is this an adequate solution? Is there a more efficient way to build up a test list for this example? class Node { private Object data; private Node next; public Node(Object data) { this(data, null); } public Node(Object data, Node next) { this.data = data; this.next = next; } public void setData(Object data) { this.data = data; } public Object getData() { return data; } public Node getNext() { return next; } } class LinkedListTest { public static void main(String[] args) { Node node3 = new Node("puke"); Node node2 = new Node("vomit", node3); Node node1 = new Node("blah", node2); Node node = node1; while (node != null) { System.out.println("Data = " + node.getData()); node = node.getNext(); } } } Answer: I would say it is an adequate solution. The product may not feel very useful yet, but it's probably not supposed to be. In some sense, the exercise is formulated ambiguously. Your Node class can be used to construct linked lists and as such is a building block of instantiated linked lists. However, I would not consider that a LinkedList implementation. A LinkedList implementation would consist of a wrapper class that hides this Node from the user and allows you to say things like List list = new LinkedList(); list.add("foo"); list.add("bar"); list.remove("foo"); int index = list.indexOf("bar"); etc. It may seem somewhat unsatisfactory that you have to construct your test list so clumsily, but ease of use wasn't a goal of the exercise ;)
{ "domain": "codereview.stackexchange", "id": 4438, "tags": "java, linked-list" }
Advice for Teleoperation of a Robot
Question: Hi, My project is to teleoperate a robot over a ROS network. I have been able to demonstrate that the concept works over a local ROS network. But, now I have to implement it so that robot can be operated remotely. I have a control station that has some controls to control the robot. The control station is connected to internet. I have a the robot which then subscribes to the control commands being published by the control station and performs the appropriate actions. Also the robot has a camera mounted on it which publishes a live video stream for the control station. I am able to achieve this cross communication if my control station and robot are connected to the same network. But, now I have to implement it where they are connected to separate internet networks. I saw following solutions over the internet: Use Robot Web Tools to achieve this (Could someone point to a good tutorial, that I can use as a starting point) Use Port Forwarding (I am using Netgear At&T 770S Aircards for internet connections as I want my communication to happen over 4G LTE. I am facing issues to get the port forwarding work. I think this is the easiest route, but I don't know if Netgear Aircards can be used to do port forwarding. I have tried to setup port forwarding on port 80 by logging into the modem on a browser but open port checker tool says that port is closed.) Please let me know, which route is the best way to go. Also, it'll be really helpful if you could point me towards some useful resources. If there is some solution which would be better than these two solutions, please let me know. Looking forward to your responses. Thank you! Originally posted by zulfiz on ROS Answers with karma: 23 on 2019-11-25 Post score: 1 Answer: You will need both port forwarding and robotwebtools if you plan to make the interface a web browser. 
You don't say in your question how you are doing it today but I'll assume you're using the standard route of having ROS on multiple machines and using standard pub/sub. If you plan to have the remote machine running ROS and controlling same as now, I think you'll find a VPN in your future but I have zero understanding of how that will work so will not comment further. If you do not plan to have ROS running on the remote machine, then rosbridge, robotwebtools and port forwarding will be required. As you note, robotwebtools is how to interface using a browser. The robotwebtools site example code includes a teleop example, so much of the work has been done for you, but the last time I looked at that site the code had some dead links (URLs to libraries that no longer work). You'll also find that getting teleop to run on an iPhone will be very difficult as iOS will not present a keyboard unless there is a textbox open. When I started the same thing you're trying I gave up on the keyboard-based teleop and instead created a click-a-button based code for "teleop"ing. If you're planning on the remote machine being a PC with keyboard, then you're probably OK. But if control via a phone is required, you should not plan on a keyboard input method. You'll need to use and understand rosbridge, robotwebtools, apache, HTML, and at least a little about how networks work. I cannot help you with your port forwarding issue. You haven't provided any info on your network (school, work, home, port 80 blocked, intermediate firewalls, etc). If you tell me you're not working on a homework assignment I can provide the HTML code for my page that allows for driving the robot around by clicking buttons on a webpage, with the camera streaming video. EDIT after follow-on questions: 1 - see below for HTML code that worked on my setup. This code resides on the robot computer and is accessed through the page being served by Apache that is also running on the robot computer. 
2 - Google tells me that port forwarding can be set up on those air cards. You'll need to forward ports 80, 8080 and 9090, I think (can't remember for sure), to run the code below. The air card that needs the port forwarding is the one that supports the robot, not the one being used by the remote. You need to know the IP address for the aircard supporting the robot and route the ports specifically to the IP address of the robot on the wifi network. 3 - As I noted in the original answer, if you plan to run ROS on the remote machine I cannot help you. I think you need VPN for that. I don't know anything about VPN (never tried). My information here is only dealing with controlling the robot through a web browser from an internet-connected device. If this is not what you want, then disregard everything I have provided. If you're going to try the HTML route (proven) I suggest you get it running via HTML on a local network. Once it's running then you can work out the internet access and port forwarding issues. To do this the aircard needs to be configured to allow communication between machines on the network. Not sure they allow that. Follow the ROS tutorials for rosbridge and spend time on robotwebtools.org. Install apache and get a simple page running within the local network. Then get familiar with the debug page on your favorite browser. Have fun with it but don't expect it to go smoothly. It took me a few weeks to get the page below running from a remote (internet connected) machine. You can review a couple other questions I've answered that give additional info on how it is set up: https://answers.ros.org/question/315015/what-is-the-best-way-to-monitor-and-remote-control-the-robot-from-tablet/ https://answers.ros.org/question/319626/real-time-map-generate-on-web-like-a-rviz/#319636 And finally: No comments about messy code - I'm not a SW guy. 
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1"> <script type="text/javascript" src="http://static.robotwebtools.org/EventEmitter2/current/eventemitter2.min.js"></script> <script type="text/javascript" src="http://static.robotwebtools.org/roslibjs/current/roslib.min.js"></script> <script type="text/javascript"> var pi_img = new Image(); var ip = location.host; </script> <script type="text/javascript" type="text/javascript"> var ip = String(location.host); var x = 0; var z = 0; var scale = 0; var scale_max = 0.2; var out_message = ""; // Connecting to ROS // ----------------- var newURL = "ws://" + ip + ":9090"; var ros = new ROSLIB.Ros({ url : newURL }); ros.on('connection', function() { console.log('Connected to websocket server.'); }); ros.on('error', function(error) { console.log('Error connecting to websocket server: ', error); }); ros.on('close', function() { console.log('Connection to websocket server closed.'); }); function sendTelop(){ var cmdVel = new ROSLIB.Topic({ ros : ros, name : '/cmd_vel', messageType : 'geometry_msgs/Twist' }); var outputTopic = new ROSLIB.Topic({ ros : ros, name : '/webpage', messageType : 'std_msgs/String' }); var out_string = new ROSLIB.Message({ }); out_string.data = out_message; var twist = new ROSLIB.Message({ linear : { x : x, y : 0, z : 0 }, angular : { x : 0, y : 0, z : z } }); cmdVel.publish(twist); outputTopic.publish(out_string); console.log('output publisher: ' + out_string.data); } // Subscribing to a Topic // ---------------------- var listener = new ROSLIB.Topic({ ros : ros, name : '/webpage', messageType : 'std_msgs/String' }); listener.subscribe(function(message) { console.log('Received message on ' + listener.name + ': ' + message.data); document.getElementById('statusbar').innerHTML = message.data + '<br>' + document.getElementById('statusbar').innerHTML; // listener.unsubscribe(); }); function forward(){ x = 0.5 * scale_max; z = 0; 
out_message = "go forward"; sendTelop(); } function left(){ x = 0; z = scale_max; out_message = "rotate left"; sendTelop(); } function right(){ x = 0; z = -scale_max; out_message = "rotate right"; sendTelop(); } function backward(){ x = -0.5 * scale_max; z = 0; out_message = "go backward"; sendTelop(); } function stop(){ x = 0; z = 0; out_message = "stop"; sendTelop(); } //init(); </script> <style> #buttonup{ width: 200px; height: 40px;} #buttondown{ width: 200px; height: 40px;} #buttonleft, #buttonright{ display:inline-block; width: 98px; height: 40px;} </style> </head> <body> <h1>Robot Web Interface</h1> <h2>Allows remote access to the family robot.</h2> <h2>Press-and-hold (for at least 1 sec) a button to move robot.</h2> <p>*Reload page if needed as cached pages don't work. </p> <input type="button" id="buttonup" onmousedown="inter=setInterval(forward, 500);" onmouseup="clearInterval(inter);stop();" ontouchstart= "inter=setInterval(forward, 500);" ontouchend ="clearInterval(inter);stop();" value="Go forward" /> <br> <input type="button" id="buttonleft" onmousedown="inter=setInterval(left, 500);" onmouseup="clearInterval(inter);stop();" ontouchstart= "inter=setInterval(left, 500);" ontouchend ="clearInterval(inter);stop();" value="Rotate left" /> <input type="button" id="buttonright" onmousedown="inter=setInterval(right, 500);" onmouseup="clearInterval(inter);stop();" ontouchstart= "inter=setInterval(right, 500);" ontouchend ="clearInterval(inter);stop();" value="Rotate right" /> <br> <input type="button" id="buttondown" onmousedown="inter=setInterval(backward, 500);" onmouseup="clearInterval(inter);stop();" ontouchstart= "inter=setInterval(backward, 500);" ontouchend ="clearInterval(inter);stop();" value="Go backward" /> <p></p> <iframe src= '' id= 'target_frame' name='target_frame' width="410" Height="310"></iframe> <p></p> <script> document.getElementById('target_frame').src = "http://" + ip + ":8080/stream?topic=/convertedimage"; </script> <p>Commands being 
sent to robot:</p> <div class="messages" id="statusbar" style = "width:200px; height:500px; overflow:hidden; background-color:grey"></div> </body> </html> Originally posted by billy with karma: 1850 on 2019-11-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by zulfiz on 2019-11-25: Thank you @Bily for your response. I'll try to answer your questions below: No, this is not a homework assignment. So, if you could provide me with some examples you implemented it'll be great. I cannot use my work WiFi for this project. They have setup their firewalls, which make the port forwarding issue very difficult. So, I have gotten two AT&T AirCards 770S. One provides internet coverage to robot, and the other provides internet coverage to the Remote Station. Let me know any other information you might need to comment on this. I'll have ROS running on both ends and my control station would be a laptop. So, I don't think I would have the issue of a keyboard. Looking forward to your reply.Thank you! Comment by lukelu on 2020-09-17: Got similar issue in 2020. We were wondering if you solve the issue, so you could teleoperate your robot via 4G LTE network.
{ "domain": "robotics.stackexchange", "id": 34062, "tags": "ros, ros-kinetic, remote, robotwebtools" }
How to compute the power bands of an eeg signal using python?
Question: So I have an EEG signal (edf format) that has 25 channels and 248832 entries, sampling frequency of 512Hz. I have to compute the frequency bands: – Delta: 0.1-4Hz – Theta: 4-8Hz – Alpha: 8-12Hz – Sigma: 12-16Hz – Beta: 16-36Hz – Gamma: >36Hz and plot them accordingly. I am using Python for this with scipy, numpy, etc. and I should get to something like this: Does anyone have any pointers/ideas/tutorials that could help me compute the bands and then get such a plot (probably a histogram)? Thanks! Answer: Here is some code that may solve your problem: from scipy.io import loadmat import scipy import numpy as np from pylab import * import matplotlib.pyplot as plt eeg = loadmat("mydata.mat"); eeg1=eeg['eeg1'][0] fs = eeg['fs'][0][0] fft1 = scipy.fft(eeg1) f = np.linspace (0,fs,len(eeg1), endpoint=False) plt.figure(1) plt.plot (f, abs (fft1)) plt.title ('Magnitude spectrum of the signal') plt.xlabel ('Frequency (Hz)') show() You can also check this other link: http://forrestbao.blogspot.pt/2009/10/eeg-signal-processing-in-python-and.html
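The answer above only plots the magnitude spectrum; it does not actually compute the per-band powers asked about. Here is a minimal pure-NumPy sketch of that step, using the band edges from the question. The function and variable names (band_powers, BANDS) are illustrative, not from any library, and this uses a plain periodogram rather than a smoothed PSD estimate:

```python
import numpy as np

# Band edges from the question; None = up to the highest available frequency.
BANDS = {"Delta": (0.1, 4), "Theta": (4, 8), "Alpha": (8, 12),
         "Sigma": (12, 16), "Beta": (16, 36), "Gamma": (36, None)}

def band_powers(signal, fs):
    """Absolute power per band for one channel, via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        hi = freqs[-1] if hi is None else hi
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum()
    return powers

# Demo: a pure 10 Hz tone should land almost entirely in the Alpha band.
fs = 512
t = np.arange(fs * 4) / fs
sig = np.sin(2 * np.pi * 10 * t)
p = band_powers(sig, fs)
print(max(p, key=p.get))  # Alpha
```

The dictionary p can then be fed straight into a bar plot (plt.bar(p.keys(), p.values())) to get the histogram-style figure the question describes; for a 25-channel recording you would call band_powers once per channel.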
{ "domain": "dsp.stackexchange", "id": 5761, "tags": "python, eeg, scipy" }
What does this notation for spin mean? $\mathbf{\frac 1 2}\otimes\mathbf{\frac 1 2}=\mathbf{1}\oplus\mathbf 0$
Question: In my quantum mechanics courses I have come across this notation many times: $$\mathbf{\frac 1 2}\otimes\mathbf{\frac 1 2}=\mathbf{1}\oplus\mathbf 0$$ but I feel like I've never fully understood what this notation actually means. I know that it represents the fact that you can combine two spin 1/2 as either a spin 1 (triplet) or a spin 0 (singlet). This way they are eigenvectors of the total spin operator $(\vec S_1+\vec S_2)^2.$ I also know what the tensor product (Kronecker product) and direct sum do numerically, but what does this notation actually represent? Does the 1/2 refer to the states? Or to the subspaces? Subspaces of what exactly (I've also heard subspaces many times but likewise do not fully understand it). Is the equal sign exact or is it up to some transformation? And finally is there some (iterative) way to write a product of many of these spin 1/2's as a direct sum? $$\mathbf{\frac 1 2}\otimes\mathbf{\frac 1 2}\otimes\mathbf{\frac 1 2}\otimes\dots=\left(\mathbf{1}\oplus\mathbf 0\right)\otimes\mathbf{\frac 1 2}\dots=\dots$$ Answer: The $\otimes$ sign denotes the tensor product. Given two matrices (let’s say $2\times 2$ although they can be $n\times n$ and $m\times m$) $A$ and $B$, then $A\otimes B$ is the $4\times 4$ matrix \begin{align} A\otimes B =\left( \begin{array}{cc} A_{11}B&A_{12}B\\ A_{21}B&A_{22}B \end{array}\right)= \left(\begin{array}{cccc} A_{11}B_{11}&A_{11}B_{12}&A_{12}B_{11}&A_{12}B_{12}\\ A_{11}B_{21}&A_{11}B_{22}&A_{12}B_{21}&A_{12}B_{22}\\ A_{21}B_{11}&A_{21}B_{12}&A_{22}B_{11}&A_{22}B_{12}\\ A_{21}B_{21}&A_{21}B_{22}&A_{22}B_{21}&A_{22}B_{22} \end{array}\right) \, . 
\end{align} A basis for this space is spanned by the vectors \begin{align} a_{1}b_{1}&\to \left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array} \right)\, ,\quad a_1b_2 \to \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right)\, ,\quad a_2b_1\to \left(\begin{array}{c} 0 \\ 0 \\ 1 \\0\end{array}\right)\, ,\quad a_2b_2\to \left(\begin{array}{c} 0\\0\\0\\1\end{array}\right) \end{align} In terms of $a_1\to \vert +\rangle_1$, $a_2\to \vert -\rangle_1$ etc we have \begin{align} a_1b_1\to \vert{+}\rangle_1\vert {+}\rangle _2\, ,\quad a_1b_2\to \vert{+}\rangle_1\vert{-}\rangle _2 \, ,\quad a_2 b_1\to \vert{-}\rangle_1\vert {+}\rangle _2 \, ,\quad a_2b_2\to \vert{-}\rangle_1\vert{-}\rangle_2\, . \end{align} In the case of two spin-$1/2$ systems, $\frac{1}{2}\otimes \frac{1}{2}$ implies your are taking $\sigma_x\otimes \sigma_x$, $\sigma_y\otimes \sigma_y$, $\sigma_z\otimes \sigma_z$, since these are operators acting on individual spin-$1/2$ systems. The resulting matrices can be simultaneously block diagonalized by using the basis states $a_1b_1$, $\frac{1}{\sqrt{2}}(a_1b_2\pm a_2b_1)$ and $a_2b_2$. There is a $3\times 3$ block consisting of $a_1b_1, \frac{1}{\sqrt{2}}(a_1b_2+a_2b_1)$ and $a_2b_2$ and a $1\times 1$ block with basis vector $\frac{1}{\sqrt{2}}(a_1b_2-a_2b_1)$. The $3\times 3$ block never mixes with the $1\times 1$ block when considering the operators $S_x=s_x^{1}+s_x^{2}$ etc. The basis vectors of the $3\times 3$ block transform as states with $S=1$, in the sense that matrix elements of $S_x$, $S_y$ and $S_z$ are precisely those of states with $S=1$; the basis vector of the $1\times 1$ block transforms like a state of $S=0$. Hence one commonly writes \begin{align} \frac{1}{2}\otimes \frac{1}{2} = 1\oplus 0 \end{align} with the $\oplus$ symbol signifying that the total Hilbert space is spanned by those vectors spanning the $S=1$ block plus the vector spanning the $S=0$ part; note that those vectors are product states of the type $a_1b_1$ etc.
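The block decomposition described above can also be checked numerically: build the two-spin operators as Kronecker products, form $(\vec S_1+\vec S_2)^2$, and diagonalise. This is a sanity-check sketch in NumPy (with $\hbar=1$, so the eigenvalues are $S(S+1)$); the variable names are my own:

```python
import numpy as np

# Spin-1/2 operators: Pauli matrices divided by 2 (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

# Total spin components on the 4-dimensional product space:
# S = s (x) 1 + 1 (x) s, built with the Kronecker product
Sx = np.kron(sx, I2) + np.kron(I2, sx)
Sy = np.kron(sy, I2) + np.kron(I2, sy)
Sz = np.kron(sz, I2) + np.kron(I2, sz)

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz  # (S1 + S2)^2

# Eigenvalues S(S+1): expect 0 once (singlet, S=0) and 2 three times (triplet, S=1)
eigvals = np.sort(np.linalg.eigvalsh(S2).round(10))
print(eigvals)  # [0. 2. 2. 2.]
```

The multiplicities 1 and 3 are exactly the dimensions of the $\mathbf 0$ and $\mathbf 1$ blocks, which is the content of $\frac12\otimes\frac12 = 1\oplus 0$.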
{ "domain": "physics.stackexchange", "id": 65442, "tags": "quantum-mechanics, angular-momentum, quantum-spin, notation, representation-theory" }
Preservation under Substitution with Telescopes
Question: In the simply typed lambda calculus, one can show the following result, known as "preservation under substitution": If $\Gamma \vdash v : \tau_1$ and $(x : \tau_1) \vdash t : \tau_2$, then $\Gamma \vdash [v/x]t : \tau_2 $. However, the proof of this relies on the property of permutation, that we can rearrange contexts and it will preserve typing of terms. I'm wondering, can we prove a similar property for dependently typed languages? The problem is that, here, permutation may not hold, since telescopes are used in place of environments, and the types themselves may refer to variables. Moreover, since the types and terms overlap, we have to substitute in $\Gamma$ and $\tau_2$ as well. Does anyone have a good reference to proving such a preservation property for a dependently typed language? Are there tricks that are used to avoid the permutations? Answer: The property, which I would call "typing of substitution" should hold in any type theory, and is not dependent on the exchange property (which I assume is what you mean by permutation) The key is that you need to generalize the inductive hypothesis to when the variable in t appears in a context. So for a dependent type theory you prove If $\Gamma \vdash t_1 : \tau_1$ and $\Gamma, x : \tau_1, \Delta \vdash t_2 : \tau_2$ then $\Gamma,\Delta[t_1/x] \vdash t_2[t_1/x] : \tau_2[t_1/x]$ For a reference, see Lemma 2.3.2 on page 55 in Advanced Topics in Types and Programming Languages.
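To make the lemma concrete, here is a toy simply-typed checker in Python that verifies one instance of it (for the simply typed case, where contexts are just dictionaries and exchange is trivial). All datatypes and names are invented for illustration, and the substitution is naive: it assumes variable names are already fresh, so no capture-avoidance is needed:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    ptype: Any
    body: Any

@dataclass(frozen=True)
class App:
    fn: Any
    arg: Any

def typeof(ctx, t):
    """Type of t in context ctx (a dict from names to types)."""
    if isinstance(t, Var):
        return ctx[t.name]
    if isinstance(t, Lam):
        return ("->", t.ptype, typeof({**ctx, t.param: t.ptype}, t.body))
    if isinstance(t, App):
        fn_ty, arg_ty = typeof(ctx, t.fn), typeof(ctx, t.arg)
        assert fn_ty[0] == "->" and fn_ty[1] == arg_ty, "argument type mismatch"
        return fn_ty[2]

def subst(t, x, v):
    """t[v/x], naive: assumes no variable capture can occur."""
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, t.ptype, subst(t.body, x, v))
    if isinstance(t, App):
        return App(subst(t.fn, x, v), subst(t.arg, x, v))

# Gamma = {y : B},  v = y : B,  and  Gamma, x:B  |-  \z:B. x  :  B -> B
gamma = {"y": "B"}
v = Var("y")
t = Lam("z", "B", Var("x"))
lhs = typeof(gamma, subst(t, "x", v))  # Gamma |- t[v/x]
rhs = typeof({**gamma, "x": "B"}, t)   # Gamma, x:B |- t
print(lhs == rhs, lhs)  # True ('->', 'B', 'B')
```

In a dependent theory the same shape persists, except that subst must also be mapped over the remaining telescope entries and the result type, exactly as in the generalized statement above.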
{ "domain": "cstheory.stackexchange", "id": 4552, "tags": "reference-request, pl.programming-languages, type-theory, lambda-calculus, dependent-type" }
Initialization Error for 'effort_controllers/JointGroupPositionController'
Question: I see that effort_controllers/JointPositionController looks for the particular joint and assigns the corresponding PID value to the control_toolbox::Pid variable. effort_controllers/JointGroupPositionController also does the same thing for all the joints inside the controller in a for loop. So essentially there is no difference between the two except for in the .yaml file and the launch files. When I try to load a effort_controllers/JointGroupPositionController with the following .yaml file and launch file, I am getting the following error: "Failed to getParam 'joints' (namespace: /my_6dof_robot/joint_group_position_controller)". [INFO] [1572904736.013589, 0.511000]: Controller Spawner: Waiting for service controller_manager/switch_controller [INFO] [1572904736.015633, 0.512000]: Controller Spawner: Waiting for service controller_manager/unload_controller [INFO] [1572904736.017165, 0.513000]: Loading controller: joint_group_position_controller [ERROR] [1572904736.025517675, 0.519000000]: Failed to getParam 'joints' (namespace: /my_6dof_robot/joint_group_position_controller). [ERROR] [1572904736.025622082, 0.519000000]: Failed to initialize the controller [ERROR] [1572904736.025671797, 0.519000000]: Initializing controller 'joint_group_position_controller' failed [ERROR] [1572904737.027262, 1.296000]: Failed to load joint_group_position_controller [INFO] [1572904737.027650, 1.296000]: Loading controller: joint_state_controller [INFO] [1572904737.035878, 1.303000]: Controller Spawner: Loaded controllers: joint_state_controller [INFO] [1572904737.045714, 1.309000]: Started controllers: joint_state_controller It succeeds in loading the joint_state_controller. 
The .yaml file (my_6dof_robot_effort_control.yaml) is as follows: my_6dof_robot: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint_group_position_controller: type: effort_controllers/JointGroupPositionController joints: joint_base_link1: pid: {p: 600.0, i: 80.0, d: 200.0} joint_link1_link2: pid: {p: 11000.0, i: 100.0, d: 500.0} joint_link2_link3: pid: {p: 8000.0, i: 300.0, d: 400.0} joint_link3_link4: pid: {p: 400.0, i: 40.0, d: 60.0} joint_link4_link5: pid: {p: 400.0, i: 10.0, d: 10.0} joint_link5_link6: pid: {p: 20.0, i: 0.8, d: 0.1} The gazebo and control launch file is as follows: <launch> <arg name="paused" default="false"/> <arg name="use_sim_time" default="true"/> <arg name="gui" default="true"/> <arg name="headless" default="false"/> <arg name="debug" default="false"/> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg paused)"/> <arg name="use_sim_time" value="$(arg use_sim_time)"/> <arg name="headless" value="$(arg headless)"/> </include> <param name="robot_description" command="$(find xacro)/xacro --inorder $(find my_6dof_robot_description)/urdf/my_6dof_robot_robot.xacro"/> <node name="spawn_my_6dof_robot" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -x 0 -y 0 -z 0 -model my_6dof_robot" respawn="false" output="screen"> </node> <rosparam file="$(find my_6dof_robot_control)/config/my_6dof_robot_effort_control.yaml" command="load" /> <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" ns="/my_6dof_robot_robot" output="screen" args="joint_group_position_controller joint_state_controller --timeout 50"> </node> <node name = "robot_state_publisher" pkg = "robot_state_publisher" type = "robot_state_publisher" respawn="false" output="screen"> <remap from="/joint_states" to="/my_6dof_robot_robot/joint_states" /> </node> </launch> 
However, when I load the effort_controllers/JointPositionController with the following .yaml and launch file, it is a success. my_6dof__robot: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint1_position_controller: type: effort_controllers/JointPositionController joint: joint_base_link1 pid: {p: 600.0, i: 80.0, d: 200.0} joint2_position_controller: type: effort_controllers/JointPositionController joint: joint_link1_link2 pid: {p: 11000.0, i: 100.0, d: 500.0} joint3_position_controller: type: effort_controllers/JointPositionController joint: joint_link2_link3 pid: {p: 8000.0, i: 300.0, d: 400.0} joint4_position_controller: type: effort_controllers/JointPositionController joint: joint_link3_link4 pid: {p: 400.0, i: 40.0, d: 60.0} joint5_position_controller: type: effort_controllers/JointPositionController joint: joint_link4_link5 pid: {p: 400.0, i: 10.0, d: 10.0} joint6_position_controller: type: effort_controllers/JointPositionController joint: joint_link5_link6 pid: {p: 20.0, i: 0.8, d: 0.1} .launch file: <launch> <arg name="paused" default="false"/> <arg name="use_sim_time" default="true"/> <arg name="gui" default="true"/> <arg name="headless" default="false"/> <arg name="debug" default="false"/> <include file="$(find gazebo_ros)/launch/empty_world.launch"> <arg name="debug" value="$(arg debug)" /> <arg name="gui" value="$(arg gui)" /> <arg name="paused" value="$(arg paused)"/> <arg name="use_sim_time" value="$(arg use_sim_time)"/> <arg name="headless" value="$(arg headless)"/> </include> <param name="robot_description" command="$(find xacro)/xacro --inorder $(find my_6dof_robot_description)/urdf/my_6dof_robot_robot.xacro"/> <node name="spawn_my_6dof_robot" pkg="gazebo_ros" type="spawn_model" args="-param robot_description -urdf -x 0 -y 0 -z 0 -model my_6dof_robot" respawn="false" output="screen"> </node> <rosparam file="$(find my_6dof_robot_control)/config/my_6dof_robot_effort_control.yaml" command="load" /> <node 
name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" ns="/my_6dof_robot_robot" output="screen" args="joint1_position_controller joint2_position_controller joint3_position_controller joint4_position_controller joint5_position_controller joint6_position_controller joint_state_controller joint_state_controller --timeout 50"> </node> <node name = "robot_state_publisher" pkg = "robot_state_publisher" type = "robot_state_publisher" respawn="false" output="screen"> <remap from="/joint_states" to="/my_6dof_robot_robot/joint_states" /> </node> </launch> The my_6dof_robot.urdf has the following gazebo_ros_plugin tag in both cases: <gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType> <robotNamespace>/my_6dof_robot</robotNamespace> <robotParam>robot_description</robotParam> <legacyModeNS>true</legacyModeNS> </plugin> </gazebo> I think the error has something to do with the namespace. I want to know what I am doing wrong. The same error is appearing when I try to load my custom controller instead of the joint_group_position_controller. Any help is highly appreciated. Thank you!! Originally posted by sthakar on ROS Answers with karma: 1 on 2019-10-29 Post score: 0 Original comments Comment by gvdhoorn on 2019-10-29: Please show a verbatim copy of the actual (and complete) error message. Comment by sthakar on 2019-10-30: Thanks! I have edited the question. Please let me know if you need any more information. Thank you! Answer: I think the issue is that your parameters are not correct for the effort_controllers/JointGroupPositionController controller type. In general, it's currently a bit tough to find documentation on what parameters are required by each different ros_controllers controller -- I often find it's easiest to look at the source code. 
If you look at that previous link, you'll see they are expecting a param named joints that can be converted to a std::vector< std::string > (this is currently missing). They also initialize a PID Controller for each joint by passing a node handle constructed as ros::NodeHandle(n, joint_name + "/pid")). So, maybe if your file looked something like the following (untested): my_6dof_robot: joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 joint_group_position_controller: type: effort_controllers/JointGroupPositionController joints: - joint_base_link1 - joint_link1_link2 - joint_link2_link3 - joint_link3_link4 - joint_link4_link5 - joint_link5_link6 joint_base_link1: pid: {p: 600.0, i: 80.0, d: 200.0} joint_link1_link2: pid: {p: 11000.0, i: 100.0, d: 500.0} joint_link2_link3: pid: {p: 8000.0, i: 300.0, d: 400.0} joint_link3_link4: pid: {p: 400.0, i: 40.0, d: 60.0} joint_link4_link5: pid: {p: 400.0, i: 10.0, d: 10.0} joint_link5_link6: pid: {p: 20.0, i: 0.8, d: 0.1} Originally posted by jarvisschultz with karma: 9031 on 2019-11-04 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by sthakar on 2019-11-04: This actually works! Thanks a ton for your reply!! I went through the source code you pointed towards and now understand how it works! Appreciate your help! Comment by jarvisschultz on 2019-11-05: Awesome! I didn't have the time to test what I posted, so I'm glad it worked.
{ "domain": "robotics.stackexchange", "id": 33948, "tags": "ros, microcontroller, ros-control, ros-kinetic, ros-controllers" }
Efficient counting sort for large byte arrays in Java
Question: I have this counting sort for byte values running in linear time with respect to array length: Arrays.java package net.coderodde.util; import java.util.Random; /** * This class contains static methods for sorting {@code byte} arrays. * * @author Rodion "rodde" Efremov * @version 1.6 (Apr 24, 2019) */ public final class Arrays { /** * Sorts the given {@code byte} array in its entirety. * * @param arrays the array to sort. */ public static void sort(byte[] arrays) { sort(arrays, 0, arrays.length); } /** * Sorts the given {@code byte} array omitting first {@code fromIndex} * array components starting from beginning, and omitting last * {@code array.length - toIndex} array components from the ending. * * @param array the array holding the target range. * @param fromIndex the starting index of the target range. * @param toIndex one position to the right from the last element * belonging to the target range. */ public static void sort(byte[] array, int fromIndex, int toIndex) { rangeCheck(array.length, fromIndex, toIndex); int[] bucketCounters = new int[256]; for (int index = fromIndex; index < toIndex; index++) { bucketCounters[Byte.toUnsignedInt(array[index])]++; } int index = fromIndex; // Insert the negative values first: for (int bucketIndex = 128; bucketIndex != 256; bucketIndex++) { java.util.Arrays.fill(array, index, index += bucketCounters[bucketIndex], (byte) bucketIndex); } // Insert the positive values next: for (int bucketIndex = 0; bucketIndex != 128; bucketIndex++) { java.util.Arrays.fill(array, index, index += bucketCounters[bucketIndex], (byte) bucketIndex); } } /** * Checks that {@code fromIndex} and {@code toIndex} are in * the range and throws an exception if they aren't. 
*/ private static void rangeCheck(int arrayLength, int fromIndex, int toIndex) { if (fromIndex > toIndex) { throw new IllegalArgumentException( "fromIndex(" + fromIndex + ") > toIndex(" + toIndex + ")"); } if (fromIndex < 0) { throw new ArrayIndexOutOfBoundsException(fromIndex); } if (toIndex > arrayLength) { throw new ArrayIndexOutOfBoundsException(toIndex); } } public static void main(String[] args) { warmup(); benchmark(); } private static final int LENGTH = 50_000_000; private static final void warmup() { runBenchmark(false); } private static final void benchmark() { runBenchmark(true); } private static final void runBenchmark(boolean output) { long seed = System.currentTimeMillis(); Random random = new Random(); byte[] array1 = createRandomByteArray(LENGTH, random); byte[] array2 = array1.clone(); byte[] array3 = array1.clone(); if (output) { System.out.println("seed = " + seed); } long startTime = System.nanoTime(); java.util.Arrays.sort(array1); long endTime = System.nanoTime(); if (output) { System.out.println("java.util.Arrays.sort(byte[]) in " + (endTime - startTime) / 1e6 + " milliseconds."); } startTime = System.nanoTime(); java.util.Arrays.parallelSort(array2); endTime = System.nanoTime(); if (output) { System.out.println("java.util.Arrays.parallelSort(byte[]) in " + (endTime - startTime) / 1e6 + " milliseconds."); } startTime = System.nanoTime(); net.coderodde.util.Arrays.sort(array3); endTime = System.nanoTime(); if (output) { System.out.println("net.coderodde.Arrays.sort(byte[]) in " + (endTime - startTime) / 1e6 + " milliseconds."); System.out.println("Algorithms agree: " + (java.util.Arrays.equals(array1, array2) && java.util.Arrays.equals(array1, array3))); } } private static final byte[] createRandomByteArray(int length, Random random) { byte[] array = new byte[length]; for (int i = 0; i < length; i++) { array[i] = (byte) random.nextInt(); } return array; } } Typical output seed = 1556112137029 java.util.Arrays.sort(byte[]) in 67.6446 
milliseconds. java.util.Arrays.parallelSort(byte[]) in 210.0057 milliseconds. net.coderodde.Arrays.sort(byte[]) in 46.6332 milliseconds. Algorithms agree: true Answer: I find the big points covered well: class and methods have one clear, documented purpose the API follows the well-known java.util.Arrays (if not to the point of documenting RuntimeExceptions thrown) I'd try to get rid of magic literals and code replication: size counts = new int[Byte.MAX_VALUE-Byte.MIN_VALUE+1] (or 1<<Byte.SIZE?), use for (int bucket = Byte.MIN_VALUE ; bucket <= Byte.MAX_VALUE; bucket++) Arrays.fill(array, index, index += counts[Byte.toUnsignedInt((byte) bucket)], (byte) bucket); I found it interesting to ogle the RTE source code I use.
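The two fill loops exploit the fact that the signed byte values −128..−1 map to the unsigned buckets 128..255, so emitting buckets 128..255 before buckets 0..127 yields ascending signed order. Here is a quick Python sketch of the same idea (illustration only, not the Java under review):

```python
def counting_sort_bytes(values):
    """Counting sort for signed byte values (-128..127), O(n + 256)."""
    counts = [0] * 256
    for b in values:
        counts[b & 0xFF] += 1  # maps -128..-1 to unsigned buckets 128..255
    out = []
    # Negative values occupy unsigned buckets 128..255; emit them first.
    for bucket in range(128, 256):
        out.extend([bucket - 256] * counts[bucket])
    # Non-negative values occupy buckets 0..127.
    for bucket in range(128):
        out.extend([bucket] * counts[bucket])
    return out
```

A single pass counts occurrences, and two ordered passes over the fixed-size bucket table write the result, mirroring the `Arrays.fill` calls in the Java version.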
{ "domain": "codereview.stackexchange", "id": 37035, "tags": "java, array, sorting" }
Restriction of a Lagrangian
Question: I'm wondering if anyone could help me with the following questions. Let $M$ be the Minkowski spacetime, given $f\in C^{\infty}(M) ; f(m)=x^{0}(m)$, with $\{x^{\mu}\}$ being a global Cartesian coordinate system, given the 3-dimensional submanifold $M\supset F_{t}=f^{-1}(t)$ relative to a regular value $t\in\mathbb{R}$ of $f$, and given the Lagrangian: $$ \mathcal{L}\in C^{\infty}(TM) $$ $$ \mathcal{L}(x^{\mu},\dot{x}^{\mu})=-\sqrt{\eta_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}} $$ where $\eta$ is the Minkowski metric and $\{x^{\mu}\}$ a global Cartesian coordinate system; what is the coordinate expression of the Lagrangian restricted to $F_{t}$: $$ (T\iota_{t})^{*}\mathcal{L}\in C^{\infty}(TF_{t}) $$ I "know" from other "sources" that I should find: $$ (T\iota_{t})^{*}\mathcal{L}(x^{i},\dot{x}^{i})=\mathcal{L}\circ T\iota_{t}(x^{i},\dot{x}^{i})=-\sqrt{1 - \delta_{ij}\dot{x}^{i}\dot{x}^{j}} $$ Is it totally wrong? Answer: Your suspicions are correct: it is wrong! At least as it is written presently. Let us start from the embedding of the manifold $$\imath_t : F_t \ni p \mapsto p \in M\:.$$ It induces an embedding of the corresponding tangent bundles: $$T\imath_t : TF_t \ni (p,v) \mapsto (p, d\imath_t (v)) \in TM$$ The latter can only preserve the vectors tangent to $F_t$ seen as an embedded submanifold in $M$. It cannot say anything about components non-tangent to $F_t \subset M$. When one fixes a coordinate system $x^0,x^1,x^2,x^3$ adapted to $F_t$, i.e. $F_t$ coincides with the set of points with $x^0=0$ (your $x^0$ is my $t+x^0$), then he/she also fixes a similar coordinate system referring to $TF_t$ and $TM$, passing in the naturally associated charts with coordinates, respectively, $x^0,x^1,x^2,x^3,\dot{x}^0,\dot{x}^1,\dot{x}^2,\dot{x}^3$ in $TM$ and $x^1,x^2,x^3,\dot{x}^1,\dot{x}^2,\dot{x}^3$ in $TF_t$. With our choice of coordinates the bases turn out to be identical and thus $T\imath_t$ preserves the $3$ components $\dot{x}^i$.
In other words, as said above, any vector transported from $F_t$ to $M$ remains tangent to $F_t$ seen as a submanifold of $M$: $$T\imath_t : TF_t \ni (x^1,x^2,x^3,\dot{x}^1,\dot{x}^2,\dot{x}^3) \mapsto (0, x^1,x^2,x^3,0,\dot{x}^1,\dot{x}^2,\dot{x}^3) \in TM$$ Therefore $$ (T\iota_{t})^{*}\mathcal{L}(x^{i},\dot{x}^{i})=\mathcal{L}\circ T\iota_{t}(x^{i},\dot{x}^{i})=-\sqrt{0 - \delta_{ij}\dot{x}^{i}\dot{x}^{j}} $$ which makes sense if you are allowed to consider complex values. Otherwise you should define the Lagrangian including an absolute value (the point is that, as it stands, the initial, unrestricted $\mathcal{L}$ is not defined on $TM$, but only on the subset of causal elements $(p,v) \in T_pM$ with $v$ causal). If you want to obtain the expression $\sqrt{1 - \delta_{ij}\dot{x}^{i}\dot{x}^{j}}$, you should fix the temporal component of vectors making use of a jet bundle over $x^0$ for instance... (However, to be completely honest, all that seems to me like killing a fly with a gun.) ADDENDUM: As I wrote in a comment now erased, every differentiable coordinate function like $x^0$ in a coordinate patch on a manifold is such that all its values are always regular. (In fact $dx^0|_p$ has to be an element of a basis of $T_p^*M$ and thus it cannot vanish.) So it is not necessary to assume it separately, as you did in your question.
{ "domain": "physics.stackexchange", "id": 13117, "tags": "homework-and-exercises, general-relativity, spacetime, differential-geometry" }
What does pO2 of blood mean and why do we use it?
Question: I understand the basic Dalton's law of partial pressures in gases. Also, Henry's law of diffusion, says, the concentration of gas dissolved in a fluid is proportional to the partial pressure above it. So if we say that the $p(\ce{O2})$ of oxygenated blood is $\pu{100 mmHg}$, where is the free gas existing in equilibrium with dissolved gas? Does it mean that the blood has a concentration of oxygen equal to that when placed in a surrounding of $p(\ce{O2}) = \pu{100 mmHg}$? If yes, why don't we directly report in concentrations instead? Is it easier to measure? Wikipedia also says that the Henry's law doesn't stand if the gas is reacting. But isn't oxygen reacting with the Haemoglobin? Answer: There is a good explanation in Relating oxygen partial pressure, saturation and content: the haemoglobin–oxygen dissociation curve Breathe 2015; 11: 194–201 The partial pressure of oxygen (also known as the oxygen tension) is a concept which often causes confusion. In a mixture of gases, the total pressure is the sum of the contributions of each constituent, with the partial pressure of each individual gas representing the pressure which that gas would exert if it alone occupied the volume. In a liquid (such as blood), the partial pressure of a gas is equivalent to the partial pressure which would prevail in a gas phase in equilibrium with the liquid at the same temperature. With a mixture of gases in either the gas or liquid phase, the rate of diffusion of an individual gas is determined by the relevant gradient of its partial pressure, rather than by its concentration. While in a gas mixture, the partial pressure and concentration of each gas are directly proportional, with oxygen in blood the relationship is more complex because of its chemical combination with haemoglobin. This allows blood to carry an enormously greater concentration (content) of oxygen than, for example, water (or blood plasma). 
Measurement of $p_\ce{O_2}$, therefore, does not give direct information about the amount of oxygen carried by blood. So blood $p_\ce{O_2}$ does not correspond to a particular concentration of oxygen, because the concentration of haemoglobin can vary, and most of the oxygen is bound to the heme iron. $P_\ce{O_2}$ is the partial pressure of oxygen in a hypothetical gas phase which would make the blood oxygen and gas phase oxygen be in equilibrium.
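To see why the same $p_\ce{O_2}$ can correspond to very different oxygen contents, here is a rough back-of-the-envelope sketch in Python, using commonly quoted textbook constants (the exact coefficients vary slightly between sources, so treat these as illustrative assumptions):

```python
def o2_content(pO2_mmHg, hb_g_per_dL, saturation):
    """Approximate arterial O2 content, in mL O2 per dL of blood.

    Assumed textbook constants:
      0.003 mL O2 / dL / mmHg -- Henry's-law solubility of O2 in plasma
      1.34  mL O2 / g Hb      -- O2-carrying capacity of haemoglobin
    Returns (dissolved, haemoglobin-bound) contributions separately.
    """
    dissolved = 0.003 * pO2_mmHg                 # Henry's law: linear in pO2
    bound = 1.34 * hb_g_per_dL * saturation      # chemical binding to Hb
    return dissolved, bound

# Normal arterial blood: pO2 = 100 mmHg, Hb = 15 g/dL, SaO2 = 97%
dissolved, bound = o2_content(100, 15, 0.97)
```

At the same $p_\ce{O_2}$ of 100 mmHg, the dissolved fraction is only about 0.3 mL/dL while the haemoglobin-bound fraction is roughly 19.5 mL/dL, and halving the haemoglobin concentration halves the bound content without changing $p_\ce{O_2}$ at all.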
{ "domain": "chemistry.stackexchange", "id": 9623, "tags": "biochemistry, solubility, gas-laws" }
What does the hominin phylogenetic tree look like?
Question: I'm no biologist, but I'm curious what the rough phylogenetic tree looks like for Hominin. Could you create a rough sketch that includes: Homo rudolfensis Homo ergaster H. georgicus H. antecessor H. cepranensis H. rhodesiensis Homo neanderthalensis Denisova hominin Homo floresiensis H. heidelbergensis H. neanderthalensis H. sapiens H. erectus H. habilis any others I may have missed Feel free to correct any incorrect assumptions I may have made here, or clarify anything that could be instructive. Answer: Gonzalez-Jose et al. (2008) published the following cladograms, based on two analyses (parsimony versus maximum likelihood). The table shows the legend. The interesting case of Homo floresiensis, among others, is not included, likely because of their recent discoveries and the limitations of the study cited. Reference - Gonzalez-Jose et al. Nature (2008); 453 775-79
{ "domain": "biology.stackexchange", "id": 4133, "tags": "phylogenetics" }
Is Fourier analysis applicable to lightwaves?
Question: I'm a mathematician with little understanding of physics. My questions: In mathematics, we decompose a wave into its elementary parts by using the Fourier transform. Is this process applicable to lightwaves, i.e. may we think of lightwaves as complicated functions arising from several elementary functions that overlay each other? Do the elementary functions correspond to single photons, or to groups of photons? Does a glass prism do the job of a Fourier transform? Answer: The short answer is yes. Not just a yes, a YES! Optics is one of the subjects that uses Fourier transforms all of the time (like a lot of other subjects in physics and engineering). If we think about the light wave (or more precisely, the electric field) at one point in space, we'll see it varying with time. If it has a specific frequency $\omega$, meaning that the electric field will be $E=E_0 \cos (\omega t +\varphi ) $, we will see it as a specific color. For example, the light is called "red" when $\omega = 2.7\cdot 10^{15}\ \mathrm{rad/s}$, "yellow" when $\omega = 3.25\cdot 10^{15}\ \mathrm{rad/s}$ and "blue" when $\omega = 4.2\cdot 10^{15}\ \mathrm{rad/s}$, etc... If you mix these colors, each with a different initial phase and amplitude, you can create any function you want*. White light, for example, is just an addition of a lot of cosines with a lot of different frequencies. The discussion above was at a specific point in space, but now I want to talk about a specific temporal frequency. Let's say you have a red laser; by telling you it's red, you know that at each point in space the wave will oscillate at a specific frequency. It can be shown that the spatial frequency (often written as $\vec{k}$) is responsible for the direction of propagation, and by introducing spatial perturbations into the laser beam, it will split into different directions eventually.
This phenomenon is called "diffraction" and when analyzing diffraction in the far field region, Fourier transforming the field (in space) will give you information about how much energy will go in different directions. A prism is a device that gives for each $\omega$ a different $\vec{k}$, meaning you can think of it as performing a Fourier transform and sending each color in a different direction. It should be noted that if you want to find the amplitude and phase of each frequency component experimentally by performing manipulations and detection at the other end of a prism (and in principle, the frequency spectrum is continuous, so in that case it's impossible) it will be quite difficult. footnote: Well, not any function, you can't create $E=e^{t^2}$ but that is not considered physical. As a mathematician you are likely aware of the limitations of the Fourier transform, but these limitations exist in the "real world" too so that's ok. I think it should be emphasized that the answer to your second question is no. The elementary functions are cosines (as always in the Fourier transform) which correspond to colors (for transforming in time) and directions (for transforming in space). Photons are a totally different thing which are not connected at all to this subject.
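The time-domain decomposition described above can be demonstrated numerically: sample a superposition of three cosines and recover each amplitude with a discrete Fourier transform. The frequencies here are arbitrary demo values, far below real optical frequencies (which are on the order of $10^{15}$ rad/s):

```python
import cmath
import math

N, fs = 1000, 1000.0                         # samples, sampling rate (Hz)
t = [n / fs for n in range(N)]

# "White-ish" field: three cosines ("colors") with different amplitudes.
E = [1.0 * math.cos(2 * math.pi * 50 * tn)
     + 0.5 * math.cos(2 * math.pi * 120 * tn)
     + 0.2 * math.cos(2 * math.pi * 300 * tn) for tn in t]

def dft_amplitude(signal, k):
    """Amplitude of the cosine component at frequency bin k (naive DFT)."""
    n_samples = len(signal)
    X = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
            for n in range(n_samples))
    return 2 * abs(X) / n_samples
```

Probing the spectrum recovers exactly the three amplitudes that were mixed in, and (near) zero everywhere else, which is the mathematical content of "the prism separates the colors."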
{ "domain": "physics.stackexchange", "id": 63667, "tags": "optics, visible-light, electromagnetic-radiation, photons, fourier-transform" }
Array-based parameters check
Question: I've got a simple JavaScript function that takes multiple types of arguments. Inside the function I'm doing a check on the arguments to determine which way the user called the function: var build = function() { var args = Array.prototype.slice.call(arguments) var options = { to: {}, from: {}, relationship: null } if (_.isString(args[3]) && !_.isUuid(args[3]) && _.isString(args[4])) { options.to.type = args[3] } else if (_.isString(args[2]) && !_.isUuid(args[2]) && _.isString(args[3])) { options.to.type = args[2] } } // called like: build(client, dataObj, 'relationship', 'type', 'name') // or build(client, dataObj, 'relationship', '<uuid>') This looks messy and isn't really scalable. Is there a more succinct way I can write this (perhaps using .map(), .filter(), or .reduce())? Answer: Define the functions that correspond to a given argument configuration separately: const processStringNumber = (string, number) => console.log(number + ' is a number') const processStringString= (string, string2) => console.log(string2+ ' is a string') Put the argument configurations and the corresponding functions in a data structure: const processingObj = [ { argsTypes: ['string', 'number'], process: processStringNumber }, { argsTypes: ['string', 'string'], process: processStringString } ] Write a helper that does the checks and dispatches the corresponding function: //Retrieve the types of an array of args (can have different implementations) const getTypes = (args) => args.map((arg) => typeof arg) //Calls the appropriate function from 'processingObj' const process = (args, processingObj) => processingObj .find((process) => _.isEqual(process.argsTypes, getTypes(args))) .process(...args) Usage: process(["aaa", 1], processingObj) // 1 is a number process(["aaa", "aaa"], processingObj) // aaa is a string Here is also an ES5 version (via Babel)
{ "domain": "codereview.stackexchange", "id": 17760, "tags": "javascript" }
Determine if an acid base reaction will occur
Question: I'm wondering why some acid-base reactions occur. Let's take, for example: $$\ce{CH3CH2OH + H2O <<=> CH3CH2O- + H3O+}$$ Why does this reaction occur, given that the alkoxide ion is a really strong base? Why should its conjugate acid give off a $\ce{H+}$ to water? If it were a strong base, say $\ce{NaOH}$, it would make sense to me, because the $\ce{OH-}$ is probably a stronger base than the alkoxide ion. Question: Why does the above reaction take place, considering the fact that water is a really weak base? Answer: The first thing you need to realise is that every reaction is in equilibrium, but some reactions have an equilibrium so far to one side that they are effectively complete (or they don't go at all). The position of equilibrium is determined by the relative stability of the products and the reactants. $$\ce{CH3CH2OH + H2O <<=> CH3CH2O- + H3O+}$$ The strength of acids can be compared by looking at $\mathrm{p}K_\mathrm{a}$ values. The $\mathrm{p}K_\mathrm{a}$ of $\ce{H3O+}$ in water is -1.7 and the $\mathrm{p}K_\mathrm{a}$ of ethanol in water is 16. So you are correct that hydronium is a much stronger acid than ethanol - or equivalently, ethoxide is a much stronger base than water. However, some of the water will deprotonate the ethanol and so the reaction will go to a small extent. This is why the equilibrium arrows are written with a big arrow on the reverse reaction and a small arrow on the forward reaction. The equilibrium constant for the reaction is related to the standard Gibbs energy change for the reaction by: $$\Delta_\mathrm{r} G^\circ = -RT\ln K$$
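To put numbers on "goes to a small extent": the reaction shown is just the acid dissociation of ethanol in water, so its equilibrium constant is $K_\mathrm{a} = 10^{-\mathrm{p}K_\mathrm{a}}$. A small sketch of the arithmetic (assuming 298 K):

```python
import math

# pKa of ethanol in water, as quoted in the answer
pKa_ethanol = 16.0

# Equilibrium constant for CH3CH2OH + H2O <=> CH3CH2O- + H3O+
K = 10 ** (-pKa_ethanol)          # ~1e-16: equilibrium lies far to the left

# Standard Gibbs energy change at 298 K: dG = -RT ln K
R, T = 8.314, 298.0               # J/(mol K), K
dG_kJ = -R * T * math.log(K) / 1000.0   # ~ +91 kJ/mol, strongly unfavourable
```

A positive $\Delta_\mathrm{r} G^\circ$ of roughly 91 kJ/mol is why the reverse arrow is drawn large: the forward reaction proceeds, but only to a tiny extent.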
{ "domain": "chemistry.stackexchange", "id": 5557, "tags": "acid-base, equilibrium" }
Notify owner of post and other commenters
Question: I have a Rails 4 app. When a user comments on a post it should send a notification to everyone who commented on the post and to the post creator. I have a working method in the controller, but it's kinda ugly and I'd like to refactor it. Could somebody tell me the Rails convention for refactoring this? Comment belongs_to :post, Post has_many :comments

create#comment action

if @post_comment.save
  ((@post.users + [@post.user]).uniq - [current_user]).each do |post_commenter|
    Notification.create(recipient_id: post_commenter.id, sender_id: current_user.id, notifiable: @post_comment.post, action: "commented")
  end
  ....
end

Answer: First of all I would add a method to Post:

# app/models/post.rb
def notify_others_about_new_comment(comment)
  recipients = [user]             # the author of the original post
  recipients.concat(users)        # append the people that wrote a comment
  recipients.delete(comment.user) # writer of the new comment
  recipients.uniq.each do |recipient|
    Notification.create(
      recipient_id: recipient.id,
      sender_id: comment.user.id,
      notifiable: self,
      action: 'commented'
    )
  end
end

And just call that method in your controller:

if @post_comment.save
  @post.notify_others_about_new_comment(@post_comment)
  #...
end
{ "domain": "codereview.stackexchange", "id": 20156, "tags": "performance, ruby, ruby-on-rails, mvc" }
Are SNPs and alleles the same thing?
Question: It seems to be quite difficult to find an answer to this. Are SNPs the same thing as alleles? Answer: Alleles are variations of the same locus that codes for a protein (gene). These alleles can come in different forms, one of which is the SNP. For example, sickle cell anemia arises from an allele of the beta-globin gene which has had a change from A to T. Meanwhile, for the ABO gene that determines your blood group, the O allele has a missing nucleotide (G) that leads to a frameshift in the gene and a loss of function. So alleles can be caused by SNPs, but can also be due to deletions, additions, insertions and other genetic changes. Note that SNPs do not always lead to new alleles. Sometimes they occur in non-coding areas and nothing happens. Edit: SNPs do not need to be gene specific, but this was for simplicity. @Artem added nicely to the answer, I'm quoting it here: "Single Nucleotide Polymorphisms (SNPs) are Single Nucleotide Variants (SNVs) at a population allele frequency greater than 1%. Alleles are any variants of the same position of DNA, which includes SNVs, insertion/deletions, or structural variants and at any frequency." - @Artem
{ "domain": "biology.stackexchange", "id": 6810, "tags": "molecular-genetics, dna-sequencing, genomics, human-genome" }
How to distinguish between angular frequency $\omega$ and frequency $f$
Question: The relation between the "regular" frequency $f$ and the angular frequency $\omega$ ($\omega = 2\pi f$) is clear to me. However, every time I see "rotations per second" I really get confused as to what is meant there. How do I know whether the author means frequency or angular frequency, if "rotations per minute" can refer to both the frequency of rotations and the angular frequency of rotations as they both have the exact same unit $1 \over s$? Before you mark this question as a duplicate, I've seen many questions regarding the differences between $f$ and $\omega $ but that's NOT what I'm asking here. I just want to know how to distinguish them when I come across a problem involving rotations and waves. Answer: “Rotations per second” always means regular frequency. “Radians per second” always means angular frequency. A “rotation” always means a full rotation, which is through $2\pi$ radians.
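A one-line conversion makes the distinction concrete: divide by 60 to turn rotations per minute into ordinary frequency $f$ (rotations per second), and additionally multiply by $2\pi$ for angular frequency $\omega$ (radians per second):

```python
import math

def rpm_to_frequencies(rpm):
    """Convert rotations per minute to (f in Hz, omega in rad/s)."""
    f = rpm / 60.0           # ordinary frequency: rotations per second
    omega = 2 * math.pi * f  # angular frequency: one rotation = 2*pi radians
    return f, omega
```

For example, 300 rpm is 5 rotations per second but about 31.4 rad/s; the two quantities share the dimension 1/s, yet differ by the factor $2\pi$ because a full rotation sweeps $2\pi$ radians.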
{ "domain": "physics.stackexchange", "id": 55619, "tags": "frequency, rotation, angular-velocity" }
Question Concerning Position Of A Particle At Any Given Time
Question: After years of procrastinating I've decided not to "move ahead" with physics without getting this ridiculously trivial question clear! I know I had asked a similar question as silly and stupid as this one, however for some reason this equation keeps haunting me because I see it almost everywhere in kinematics! We know that the position of a particle at any time is given by $$x = x_{0} + v_{0}t + \frac{1}{2}at^2.$$ I'm aware that $x_{0}$ and $v_{0}$ are obtained by solving for the constants of integration at $t = 0$. But why should I even care what the velocity or the acceleration is? What purpose do they serve? What have these two quantities got to do with anything? OK, to be more "technical": why should one "involve" velocity and acceleration when the equation is for determining position at any given point in time? To be honest, I've tried to digest what this equation is really telling me by watching and reading numerous materials both online and offline regarding fundamentals of physics. Nothing seems to be working. I've decided to leave it to you as I seriously lack intuition. Let me reiterate by saying I've no problem with deriving this equation. Answer: Suppose you are holding an apple. You obviously know where the apple is, and as long as you don't drop the apple you can predict its future position. But suppose you now drop the apple and you want to predict where it will be in one second, two seconds or even ten seconds if you're standing on a tall building. As soon as you drop the apple it starts accelerating downwards due to gravity. Therefore, to predict the position of the apple 1 second after it leaves your hand you have to know the acceleration.
If you repeat the experiment on the moon you need to know that the acceleration on the moon is different, because of its lower gravity, and if you feed this lower acceleration into your equation you'll find the apple has moved a smaller distance after 1 second, as indeed you'll know from watching videos of the lunar astronauts.
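The point can be sketched numerically: feed different accelerations into $x = x_0 + v_0 t + \frac{1}{2}at^2$ and the predicted positions differ. The lunar value of about $1.62\ \mathrm{m/s^2}$ used below is an assumed round figure:

```python
def position(t, x0=0.0, v0=0.0, a=-9.81):
    """x(t) = x0 + v0*t + 0.5*a*t**2, with a = -9.81 m/s^2 near Earth."""
    return x0 + v0 * t + 0.5 * a * t * t

# Apple dropped from rest from a 50 m building, on Earth vs on the Moon:
earth = [position(t, x0=50.0) for t in (0, 1, 2, 3)]
moon = [position(t, x0=50.0, a=-1.62) for t in (0, 1, 2, 3)]
```

The starting positions agree (both 50 m at $t=0$, since only $x_0$ matters then), but after one second the lunar apple has fallen far less; the equation needs $v_0$ and $a$ precisely because they control how the position evolves away from $x_0$.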
{ "domain": "physics.stackexchange", "id": 2350, "tags": "kinematics, acceleration" }
simpleActionClient not connecting to simpleActionServer on separate computers
Question: This is on two machines both running ROS Noetic on Ubuntu20.04, done with rospy I have a remote machine, which is running roscore. It will run a node which starts an action server. Here is how I start it, within the constructor of the node. So if I run the node, it will start. self.actionServer = actionlib.SimpleActionServer('ids_1_video_capture', ids_1_video_captureAction, self.execute_action, False) self.actionServer.start() Below are the topics the server generates on the remote machine, after running the node. /ids_1_video_capture/cancel /ids_1_video_capture/feedback /ids_1_video_capture/goal /ids_1_video_capture/result /ids_1_video_capture/status /rosout /rosout_agg My local machine is configured to connect to the remote machine, with ROS_MASTER_URI and ROS_HOSTNAME, as I can see the same topics, shown below /ids_1_video_capture/cancel /ids_1_video_capture/feedback /ids_1_video_capture/goal /ids_1_video_capture/result /ids_1_video_capture/status /rosout /rosout_agg And here is the code for connecting the server from the local machine, as a client client = actionlib.SimpleActionClient('ids_1_video_capture', ids_1_video_captureAction) if not client.wait_for_server(rospy.Duration(30)): raise rospy.ROSException('Could not connect to action server...') However the local machine times out when waiting for the server. This same exact code will work fine if I run it on the remote machine (so client + server on the same machine works fine). When I do this, can echo the data on the local computer from the topics that the remote computer publishes on. I'm confident that I have my network set up correctly as I had no issues with calling a service on the remote computer from the local computer (using rospy.wait_for_service), so I thought I could follow the same logic for actionlib but I guess not ? In the worst case, I can just use a service to create client on the remote computer (since it works there), but this is not ideal. Any help is appreciated ! 
thanks Originally posted by turtlesnbacon on ROS Answers with karma: 16 on 2021-07-29 Post score: 0 Answer: Fixed it. It turns out that you can't just expect one type of connection to imply that the other connections are working. In this case, just because services worked, I thought actions would also work, and topics too. It turns out I wasn't able to remotely echo the data I published locally, and there was a missing entry in /etc/hosts causing the issue. I guess both computers need this file to be filled in. When I was able to publish to a remote topic, actions also worked. I guess this is because internally ROS action clients will publish to a goal topic on the server, even when you just do action_client.send_goal(). So if something as basic as a topic wouldn't work, there was no way actions would work. Originally posted by turtlesnbacon with karma: 16 on 2021-08-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 36759, "tags": "ros, network" }
Longest substring with unique characters
Question: Find the length of the longest substring without repeating characters Any comments on my code? public class LongestSubstring { /** * First version. O(n^2) runtime * For each character, find the longest substring starting * with this character and update max * @param s * @return */ public int lengthOfLongestSubstringV1(String s) { char[] c = s.toCharArray(); int n = c.length; int max = 0; for (int i = 0; i < n; i++){ // localMax is the size of the longest substring starting with character i // and that has no repeated character int localMax = findLocalMax(c,i); // update max if localMax > max max = (max > localMax) ? max : localMax; } return max; } /** * find the largest substring that has no repeated character * starting at character i in array c * @param c * @param i * @return */ public int findLocalMax(char[] c, int i){ int n = c.length; // seen: characters already seen HashSet<Character> seen = new HashSet<Character>(); int localMax = 0; for (int j = i; j < n; j++){ // c[j] was seen already if (seen.contains(c[j])){ return localMax; } else{ seen.add(c[j]); localMax++; } } return localMax; } /** * Second version. 
O(n^2) runtime * if a character is seen again, * @param s * @return */ public int lengthOfLongestSubstringV2(String s) { char[] c = s.toCharArray(); int n = c.length; // currentLetters: HashSet of the letters contained in the current substring HashSet<Character> currentLetters = new HashSet<Character>(); // pointer1 points to the beginning of the substring int pointer1 = 0; int max = 0; int currentCount = 0; // pointer2 points to the end of the substring for (int pointer2 = 0; pointer2 < n; pointer2++){ if (currentLetters.contains(c[pointer2])){ // if the letter c[pointer2] has already been seen // remove all the letters before the first occurrence // of this letter to consider the substring beginning after // this first occurrence while (c[pointer1] != c[pointer2]){ currentCount--; currentLetters.remove(c[pointer1]); pointer1++; } } else{ // otherwise, add the letter to the substring currentCount++; currentLetters.add(c[pointer2]); } max = (max > currentCount) ? max : currentCount; } return max; } } public class TestLongestSubstring { LongestSubstring lgs = new LongestSubstring(); @Test public void testEmptyString(){ String s = ""; assertEquals(0,lgs.lengthOfLongestSubstringV1(s)); assertEquals(0,lgs.lengthOfLongestSubstringV2(s)); } @Test public void testSameCharacterExpects1(){ String s = "bbbb"; assertEquals(1,lgs.lengthOfLongestSubstringV1(s)); assertEquals(1,lgs.lengthOfLongestSubstringV2(s)); } @Test public void testabcabcbbExpects3(){ String s = "abcabcbb"; assertEquals(3,lgs.lengthOfLongestSubstringV1(s)); assertEquals(3,lgs.lengthOfLongestSubstringV2(s)); } } public class TestLongestSubstringRunner { public static void main(String args[]){ Result result = JUnitCore.runClasses(TestLongestSubstring.class); for (Failure failure : result.getFailures()){ System.out.println(failure.toString()); } System.out.println(result.wasSuccessful()); } } Answer: Time complexity The time complexity is not \$O(N^2)\$ as you estimated, but actually \$O(N M)\$, where n is the 
length of the input string and m is the number of unique characters in the input string. This makes a big difference, because in practice m is typically bounded by a constant, the size of the alphabet, which in the case of ascii is 256. If m is a constant then the asymptotic complexity becomes simply \$O(N)\$, which is great news for you. Optimizations A small tweak can make the implementation significantly faster: change the loop condition in the outer function to go until n - max instead of n. If you know the size of the alphabet in advance, then you can use a boolean[] instead of a set, with the character codes as indexes. It's simpler, and by reducing the autoboxing, faster too. Use interface types instead of implementations You could declare the HashSet as a Set instead.
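For comparison, the two-pointer idea of V2 can be written compactly in Python. Each character enters and leaves the window at most once, which is why the scan is linear for a bounded alphabet (this is an illustrative sketch, not the Java under review):

```python
def length_of_longest_substring(s):
    """Length of the longest substring of s with no repeated character,
    using a sliding window: 'head' and 'tail' each advance at most n times."""
    seen = set()   # characters currently inside the window
    head = 0       # left edge of the window
    best = 0
    for tail, ch in enumerate(s):
        while ch in seen:          # shrink past the first occurrence of ch
            seen.remove(s[head])
            head += 1
        seen.add(ch)
        best = max(best, tail - head + 1)
    return best
```

The test cases from the original JUnit suite carry over directly: the empty string gives 0, "bbbb" gives 1, and "abcabcbb" gives 3.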
{ "domain": "codereview.stackexchange", "id": 13675, "tags": "java, algorithm, strings, unit-testing, comparative-review" }
Logging in a different thread using circular buffer C++
Question: What it does The code creates a logger class which instantiates a circular buffer at construction and uses producer-consumer style approach using condition_variable to log and print the messages to stdout. Code logger.h #pragma once #include <chrono> #include <condition_variable> #include <cstdio> #include <iomanip> #include <iostream> #include <memory> #include <mutex> #include <sstream> #include <stdio.h> #include <string> #include <sys/time.h> #include <thread> #include <utility> #include <vector> namespace TIME { void get_time(char*); } // namespace TIME class my_logger { static constexpr size_t TIME_BUF_SIZE = 19; static constexpr size_t MESSAGE_BUFFER_SIZE = 300; static constexpr size_t BUFFER_SIZE = 500; static constexpr size_t MESSAGE_PRINT_THRESHOLD = 450; static_assert(MESSAGE_PRINT_THRESHOLD < BUFFER_SIZE, "Message print threshold must be smaller than msg buffer size"); std::vector<std::array<char, MESSAGE_BUFFER_SIZE>> m_to_print; size_t m_head; size_t m_tail; std::mutex mu_buffer; std::condition_variable m_consumer_cond_var; std::condition_variable m_producer_cond_var; std::atomic<bool> m_continue_logging; std::thread m_printing_thread; public: my_logger(); ~my_logger(); void logging_thread(); size_t msg_count() const { if(m_tail >= m_head) return m_tail - m_head; return m_tail + BUFFER_SIZE - m_head; } template <typename... Args> void log(const char* format, const char msg_type[], Args... 
args) { std::unique_lock lg(mu_buffer); // m_producer_cond_var.wait(lg, [this] { return msg_count() <= MESSAGE_PRINT_THRESHOLD; }); auto buf_ptr = m_to_print[m_tail].data(); const size_t N = std::strlen(msg_type); TIME::get_time(buf_ptr); std::snprintf(buf_ptr + TIME_BUF_SIZE - 1, N + 1, "%s", msg_type); std::snprintf(buf_ptr + TIME_BUF_SIZE + N - 1, MESSAGE_BUFFER_SIZE - N - TIME_BUF_SIZE + 1, format, std::forward<Args>(args)...); m_tail = (m_tail + 1) % BUFFER_SIZE; if(msg_count() > MESSAGE_PRINT_THRESHOLD) { lg.unlock(); m_consumer_cond_var.notify_one(); // notify the single consumer } } template <typename... Args> void info(const char* format, Args... args) { log(format, " [INFO] ", std::forward<Args>(args)...); } template <typename... Args> void warn(const char* format, Args... args) { log(format, " [WARN] ", std::forward<Args>(args)...); } template <typename... Args> void error(const char* format, Args... args) { log(format, " [ERROR] ", std::forward<Args>(args)...); } }; logger.cpp #include "logger.h" #include <algorithm> #include <mutex> namespace TIME { void get_time(char* buf) { auto currentTime = std::chrono::high_resolution_clock::now(); auto nanoSeconds = std::chrono::time_point_cast<std::chrono::nanoseconds>(currentTime); auto nanoSecondsCount = nanoSeconds.time_since_epoch().count(); // Convert nanoseconds to seconds and fractional seconds auto fracSeconds = nanoSecondsCount % 1'000'000'000; // Convert seconds to std::time_t std::time_t time = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now()); // Convert time to struct tm in local time zone std::tm* tm = std::localtime(&time); std::strftime(buf, 9, "%H:%M:%S.", std::localtime(&time)); std::snprintf(buf + 9, 10, "%09lld", fracSeconds); } } // namespace TIME my_logger::my_logger() : m_head(0) , m_tail(0) , m_continue_logging(true) { m_to_print.assign(BUFFER_SIZE, {}); m_printing_thread = std::thread(&my_logger::logging_thread, this); } void my_logger::logging_thread() { 
while(m_continue_logging) { std::unique_lock lg(mu_buffer); m_consumer_cond_var.wait(lg, [this] { return m_head != m_tail; }); while(m_head != m_tail) { std::printf("%s\n", m_to_print[m_head].data()); m_head = (m_head + 1) % BUFFER_SIZE; } lg.unlock(); m_producer_cond_var.notify_all(); //notify all producers } } my_logger::~my_logger() { m_continue_logging = false; m_consumer_cond_var.notify_one(); if(m_printing_thread.joinable()) m_printing_thread.join(); } Benchmarking code #include "logger.h" #include <chrono> #include <fstream> #include <string> int main(int argc, char* argv[]) { if(argc != 2) { std::cerr << "Usage: logger <file>"; return 1; } constexpr unsigned RUNS = 33333; const std::string benchmark_file_path = argv[1]; std::fstream file(benchmark_file_path, std::ios::out | std::ios::app); if(!file.is_open()) { std::cerr << "Couldn't open " << benchmark_file_path << "\n"; return 1; } auto start = std::chrono::high_resolution_clock::now(); { my_logger logger; for(size_t i = 0; i < RUNS; i++) { logger.info("this is a %s string with id: %u and double: %f", "sundar", i, static_cast<double>(i + 0.5)); logger.warn("this is a %s string with id: %u and double: %f", "WARN", 2 * i, static_cast<double>(2 * i)); logger.error("this is a %s string with id: %u and double: %f", "ERROR", 4 * i, static_cast<double>(4 * i)); } } auto finish = std::chrono::high_resolution_clock::now(); auto time1 = std::chrono::duration_cast<std::chrono::milliseconds>(finish - start).count(); file << "RUNS: " << 3 * RUNS << " TIME_MULTITHREADED: " << time1 << " ms\n"; start = std::chrono::high_resolution_clock::now(); { char time_buf[19]; for(size_t i = 0; i < RUNS; i++) { TIME::get_time(time_buf); fprintf(stdout, "%s" " [INFO] " "this is a %s string with id: %zu and double: %f\n", time_buf, "atisundar", i, static_cast<double>(i + 0.5)); TIME::get_time(time_buf); fprintf(stdout, "%s" " [WARN] " "this is a %s string with id: %zu and double: %f\n", time_buf, "atisundar", i, static_cast<double>(2 
* i)); TIME::get_time(time_buf); fprintf(stdout, "%s" " [ERROR] " "this is a %s string with id: %zu and double: %f\n", time_buf, "atisundar", i, static_cast<double>(4 * i)); } } finish = std::chrono::high_resolution_clock::now(); auto time2 = std::chrono::duration_cast<std::chrono::milliseconds>(finish - start).count(); file << "RUNS: " << 3 * RUNS << " TIME_PLAIN_LOGGER: " << time2 << " ms\n"; return 0; } Current performance RUNS: 99999 TIME_MULTITHREADED: 282 ms RUNS: 99999 TIME_PLAIN_LOGGER: 970 ms RUNS: 99999 TIME_MULTITHREADED: 203 ms RUNS: 99999 TIME_PLAIN_LOGGER: 972 ms RUNS: 99999 TIME_MULTITHREADED: 217 ms RUNS: 99999 TIME_PLAIN_LOGGER: 970 ms Known issues The log function in logger.h isn't thread safe because when multiple threads are trying to log and one of them causes total messages to exceed MESSAGE_PRINT_THRESHOLD, then after the unlock step in log and before notifying the m_consumer_cond_var, one of the other logging threads might take ownership of the mutex and keep writing into the buffer and potentially lead to overwriting the circular buffer. The solution I can see for this is to uncomment the m_producer_cond_var.wait line in log function, but that is causing the code to become slower than plain logger function by almost 1.5x . Can you please suggest what I may be doing wrong here? Answer: Create a separate class for the message queue Your class my_logger has too many responsibilities and is therefore quite complex. The first thing I would do is to create a separate class for a thread-safe message queue. Your logger class can then use that queue. msg_count() should not be public msg_count() should never be called from outside the class. It doesn't take a lock, so it can return incorrect values. Even if it would take a lock, by the time an outside caller would get the result, it might no longer be valid. log() should do as little as possible It's clear from your code that you want to optimize the throughput of the log() function. 
So do as much as possible without holding the lock: you can format the time and build the whole log message first, then take the lock to just push that log message onto the queue. Even better, don't do that at all inside log(), but defer as much as possible to the logging_thread(): you still have to get the current time in log(), but instead of immediately formatting it, just store the result from now() directly in the queue. Of course, all this requires some changes to your code, in particular how you store data in the queue. logging_thread() should not hold the lock for a long time Consider that you only notify the logging thread when there are at least MESSAGE_PRINT_THRESHOLD messages in the queue. It will then lock the mutex, and print all those messages. During that time, other threads that want to add a log message to the queue are blocked. Note that you don't need to hold the lock while you are printing, at least if you make sure that the loggers wait if the queue is full. Log messages can be dropped or printed late Since you currently don't block threads from adding log messages if the queue is already full, it can be that messages are being dropped. Is that OK? If not, then do add the wait() call back, and find other ways to improve performance. However, another issue is that you wait for a threshold to start printing log messages. What if you have a situation where there is just one thread, and it only has one very important log message to print? Unfortunately, the printing thread will not be woken up, and you have to wait for the destructor to be called before it will finally wake up. Also, it can happen that m_continue_logging is set to false when there are still messages in the buffer, and the printing thread could then exit before it has printed all messages. More about that below. Avoid using C functions You are using C functions like strftime(), snprintf(), and so on, when there are much better C++ equivalents for them.
In particular, you can use std::format_to_n() to format strings to a buffer. It can also directly format time. I also see you mixing fprintf(stderr, …) and std::cerr << …. Don't mix C's stdio and C++'s iostreams, it's not guaranteed how those will interact. Avoid std::chrono::high_resolution_clock() There is no guarantee what kind of time std::chrono::high_resolution_clock() actually returns. It might give you something that follows wall clock time, or it might follow some other timer that doesn't track the actual time. Either use std::chrono::system_clock() if you need wall clock time (so you can compare timestamps with events outside the local computer), or if you really need high resolution timestamps so you can more accurately see in which order things happen on the local computer, use std::chrono::steady_clock(). Don't mix atomics and mutexes Because m_continue_logging is not guarded by the same mutex as m_head and m_tail, it can happen that something sets m_continue_logging to false between the check in the while-condition and the subsequent m_consumer_cond_var.wait(). Note that any notifications sent before wait() is called are ignored. So this can cause the printing thread to hang indefinitely. Make m_continue_logging a regular bool, and check it in the predicate you pass to wait(). And in the destructor, you still need to take a lock when setting it to false. No need to check for m_printing_thread.joinable() Since you always start the thread in the constructor, m_printing_thread will always be joinable when you call the destructor. But even better: Use std::jthread Instead of manually joining a std::thread, use std::jthread; it takes care of this automatically. You only need one condition variable It is extremely unlikely that the printing thread is waiting for the queue to become non-empty at the same time as other threads are waiting for it to become non-full. So there is no need to have two condition variables.
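To make the first suggestion concrete, here is a minimal sketch of such a thread-safe message queue, using a single condition variable and a mutex-guarded shutdown flag; the class and member names are my own illustration, not part of the original code:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>
#include <string>

// Minimal sketch of a bounded, thread-safe message queue with a single
// condition variable; all names here are illustrative.
class message_queue {
    std::deque<std::string> m_messages;
    std::mutex m_mutex;
    std::condition_variable m_cv;   // serves both producers and the consumer
    bool m_closed = false;          // guarded by the mutex, not atomic
    static constexpr std::size_t capacity = 500;

public:
    // Blocks while the queue is full, so messages are never dropped.
    void push(std::string msg) {
        std::unique_lock lock(m_mutex);
        m_cv.wait(lock, [this] { return m_messages.size() < capacity || m_closed; });
        if (m_closed) return;       // discard after shutdown
        m_messages.push_back(std::move(msg));
        lock.unlock();
        m_cv.notify_all();          // wake the consumer
    }

    // Returns std::nullopt only once the queue is closed and fully drained,
    // so the printing thread never exits with messages still pending.
    std::optional<std::string> pop() {
        std::unique_lock lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_messages.empty() || m_closed; });
        if (m_messages.empty()) return std::nullopt;
        std::string msg = std::move(m_messages.front());
        m_messages.pop_front();
        lock.unlock();
        m_cv.notify_all();          // wake producers waiting for space
        return msg;
    }

    void close() {
        { std::lock_guard lock(m_mutex); m_closed = true; }
        m_cv.notify_all();
    }
};
```

Because the closed flag is checked inside the same predicates that guard the queue, the lost-wakeup race between checking the flag and calling wait() cannot occur, and pop() only reports shutdown after every queued message has been handed out.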
{ "domain": "codereview.stackexchange", "id": 45520, "tags": "c++, multithreading, thread-safety, logging, c++20" }
Transition/Transversion ratio
Question: I'm taking a bioinformatics class and at this point we are just going over basic stuff about molecular biology. Half of the class is doing reading/research on our own, so there isn't much class time to ask questions or go over examples. One topic is transition/transversion ratio, and I'm not 100% sure I'm understanding it. Do you just compare the first symbol of string 1 to the first symbol of string 2? What if the two symbols you're comparing are the same? Do you just ignore it? Let's say string 1 is ACGATG and string 2 is TCAGTG. Would the ratio be 2/1? Here are the comparisons I made. Transitions: G-->A, A-->G, Transversions: A-->T, Neither: C-->C, T-->T, G-->G Is this correct or am I completely on the wrong path? Answer: You are almost on the right path. This schematic from Wikipedia is quite clear in my opinion. Slightly more chemical explanation (which you do not necessarily need to do bioinformatics, however I think it is always better to know exactly what you are dealing with outside of the computer!) The idea is that you have four basic nucleotides: cytosine (C) and thymine (T), which are called pyrimidines, and adenine (A) and guanine (G), which are called purines. Pyrimidines are characterised by a six-membered nitrogen-containing ring called a pyrimidine ring, while purines have a larger double ring, formed by a pyrimidine ring fused to an imidazole ring. A transition is defined as the passage purine -> purine or pyrimidine -> pyrimidine. A transversion is the passage purine -> pyrimidine, or vice versa. These mutations can be caused for instance by certain chemicals, such as alkylating agents, or by ionizing radiation. In your specific example, given the sequences ACGATG TCAGTG You have 3 mutations: A -> T G -> A A -> G So you have 1 transversion (from the purine A to the pyrimidine T) and two transitions (between the purines A and G).
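Since the site-by-site comparison is purely mechanical, it can be sketched in a few lines of code (my own illustration; the function name is made up):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <utility>

// Illustrative sketch: classify aligned mismatches as transitions or
// transversions. A and G are purines; C and T are pyrimidines.
static bool is_purine(char base) { return base == 'A' || base == 'G'; }

// Returns {transitions, transversions}; identical sites are ignored.
std::pair<int, int> ti_tv_counts(const std::string& s1, const std::string& s2) {
    if (s1.size() != s2.size())
        throw std::invalid_argument("sequences must be aligned (equal length)");
    int transitions = 0, transversions = 0;
    for (std::size_t i = 0; i < s1.size(); ++i) {
        if (s1[i] == s2[i]) continue;             // no mutation at this site
        if (is_purine(s1[i]) == is_purine(s2[i]))
            ++transitions;                        // purine<->purine or pyr<->pyr
        else
            ++transversions;                      // purine<->pyrimidine
    }
    return {transitions, transversions};
}
```

For the example above, ti_tv_counts("ACGATG", "TCAGTG") gives 2 transitions and 1 transversion, i.e. a ratio of 2/1, as the answer explains.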
{ "domain": "biology.stackexchange", "id": 1760, "tags": "molecular-biology, bioinformatics" }
What is the step response curve of a second order low pass filter? And what if it is set for resonance?
Question: I am doing a lot of audio synthesis DSP where I need to use low pass filters to shape the decay of an impulse. I understand that a one pole filter fed a discontinuity, say from a steady state of 1's to a steady state of 0's, will decay with a perfectly exponential curve. I believe the decay will be such that the signal will reach an amplitude of $1/e$ at $time = 1/(2*pi*f)$. This makes conceptualizing one pole low pass impulse modeling very easy. But I am trying to understand how other filter orders will do the same. I understand that a second order filter can be approximated running two one pole filters in sequence (ie. each will attenuate by 6 dB/oct, leading to a 12 dB/oct curve). If that is the case, then I would expect the step response of a second order filter without resonance to behave the same as a one pole filter and remain exponential, only that much faster. So perhaps the time to 1/e would be: $time = 1/(2*pi*f)^2$ Would that be correct? I asked here about a lower slope filter and no one has replied so I am guessing it is not common knowledge: A one-pole LPF (6 dB/oct) has a step response to $1/e$ amplitude of $time = 1/(2∗pi∗f)$. What would the response time be of a 3 dB/oct filter? But I presume the same principle would then apply. A 3 dB/oct low pass filter would have a time constant of: $time = 1/(2*pi*f)^{0.5}$ And if this is all correct, then we could say the time constants of any non-resonant low pass filter would be roughly: $time = 1/(2*pi*f)^{filter-order}$ What do you think? Does all this sound correct? If so, there is no difference between low pass filtering with any order of non-resonant filter, as they can all be set up to create the same outcome with different parameters. Lastly, then the remaining question would be: What does the step response of a resonant second order low pass filter look like, again going from steady 1's to steady 0's? I presume the resonance ruins the precisely exponential decay, but in what manner? 
Does it create a more compressed curve? Will it resonate creating a wobbling of the output? Will it dip below zero as it tries to settle out? I tried testing it with a resonant second order LPF but I was just getting almost random maximum amplitudes coming out. Very unpredictable. I'm not sure if I did something wrong or that's to be expected from the resonance. Thanks for any help understanding all this. Answer: I can help with some of your multiple questions. First, a cascade of n buffered RC low pass filters (LPFs), a so-called n-th order synchronous LPF, has the impulse response and step response shown in the screenshot below: This is a screenshot from my paper 1 referenced at the bottom. All R values are the same and all C values are the same. Buffering simply means there is a unity gain buffer between each filter and on the first input and final output as well. So no filter is loaded or gets loaded. Notice that the conventional unit step function goes from 0 to 1, so if you want the response to be 1 to 0, just subtract y(t) from 1. Then you can find 1/e times as a function of n, etc. With regard to the four "time = ..." equations you give, only the first one, for n = 1, is correct. In general, second (and higher) order filters do not all behave the same. The simple RC LPF is the first (primitive) member of several filter families. But things get much more complicated (and useful) as the various properties of the filter families are taken into account and utilized. Note that a cascade of RC LPFs cannot have overshoot or undershoot. On the other hand, Butterworth LPFs can have "ring" in response to a unit step input. This is nicely illustrated by the following figure from Blinchikoff and Zverev 2: References: 1 E. Voigtman, J.D. Winefordner, “Low-pass filters for signal averaging”, Rev. Sci. Instrum. 57 (1986) 957-966. 2 H.J. Blinchikoff, A.I. Zverev, "Filtering in the Time and Frequency Domains", Wiley-Interscience, John Wiley & Sons, NY, ©1976, p. 114.
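As a side note (my addition, not a formula from the cited paper): for $n$ identical buffered RC stages with time constant $\tau = RC$, the step response shown in that screenshot has the standard closed form

$$ y_n(t) \;=\; 1 - e^{-t/\tau}\sum_{k=0}^{n-1}\frac{1}{k!}\left(\frac{t}{\tau}\right)^{k}, \qquad t \ge 0. $$

For $n=1$ this reduces to the familiar $1-e^{-t/\tau}$, but for $n\ge 2$ it is not a pure exponential, which is one way to see why only the first of the four "time = ..." formulas in the question can be correct.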
{ "domain": "dsp.stackexchange", "id": 8726, "tags": "filters, lowpass-filter, amplitude, step-response, resonance" }
Interact with Rviz, send multiple goals to path planner
Question: Hi all, I want to send multiple goals to the path planner (move_base) using Rviz. I mean, normally the 2D Nav Goal button is used to send a unique goal to the planner. My question is whether it is possible to concatenate goals by clicking on different points on the map. Is there any Rviz functionality, or is it necessary to install a plugin or create something ad-hoc "home-made"? Thank you very much in advance. Originally posted by Jose Luis on ROS Answers with karma: 375 on 2015-10-28 Post score: 1 Answer: Finally, I have used a combination of two packages that are provided by ROS. On one hand I have used an RViz plugin example for creating a new plugin in the Rviz interface. The example that I took is plant_flag_tool in the rviz_plugin_tutorials package. Once I have a button in the Rviz menu, I have created a custom interactive marker which is capable of moving in the XY plane and rotating in Z. This way it is possible to send a position and orientation to the path planner. Each time the user presses the plugin button in the Rviz menu, a new custom marker (new goal) is created. Originally posted by Jose Luis with karma: 375 on 2015-12-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 22851, "tags": "navigation, rviz, interactive-markers, move-base, pathplanning" }
How can we find the ip address of roscore?
Question: How can we find the IP address of roscore if multiple machines are connected? Originally posted by Joy16 on ROS Answers with karma: 112 on 2017-03-25 Post score: 1 Answer: Okay, so I just saw that running roscore will actually tell you the ROS_MASTER_URI as well. Originally posted by Joy16 with karma: 112 on 2017-03-25 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 27426, "tags": "ros, roscore" }
Is potential enough to determine dipole distribution?
Question: I was working on a problem and I had to mathematically distinguish a charge and its distribution from a dipole and its distribution in space. Following is an example where I am confused about how to find the dipole distribution from the total electric potential of the system. Suppose there exists a dipole at the origin, $\vec{p} = p\,\hat{i}$, in the plane, and the potential of the system is given by $\text{V} = \frac{\text{kp}\cos(\phi)}{r^2}$. Taking the Laplacian gives $\nabla^2\text{V} = \frac{3\text{kp}\cos(\phi)}{r^4}$, which is not the distribution of the charge (since a pure dipole is not a charge). Specifically, I was trying to solve for the potential of a system given a dipole in the system, and from a mathematical point of view, I couldn't paraphrase the system in terms of some PDE. E.g., for the above situation, how does one write a PDE with some boundary conditions so that its solution is uniquely that of a dipole at the origin? To compare, if I said the Laplacian is $q\delta^3(x,y,z)$ on the whole $\mathbb{R}^3$ and the potential at infinity is zero, I think the solution is the field of a charge at the origin. Answer: The main issue is that you made a mistake in your calculation. Taking the appropriate expression of the Laplacian in spherical coordinates, and normalizing the dipole moment so that $p=e_z$, you'll rather find that if (for $r\neq0$): $$ V(r,\theta,\phi)=\frac{\cos\theta}{4\pi r^2} $$ then (for $r\neq0$): $$ \Delta V=0 $$ To include the origin, you'll need distributions. You can check that: $$ -\Delta V=-\partial_z\delta $$ which shows unambiguously that it is the field generated by an ideal dipole. Note that it is consistent with the previous result. Indeed, assuming Gauss' law, you just need to know the charge density of an ideal dipole.
As you pointed out, a single monopole would have charge density: $$ \rho(x)=q\delta(x) $$ So a real dipole has charge density: $$ \rho(x)=q\delta(x-d)-q\delta(x) $$ The ideal dipole is obtained by taking the limit $d\to0$, $q\to\infty$ while keeping $qd=p$ giving: $$ \rho(x)=-p\cdot \nabla\delta(x) $$ Hope this helps.
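To spell out why the questioner's Laplacian was wrong (my worked check, using $\theta$ for the polar angle as in the answer): with $V=\dfrac{\cos\theta}{4\pi r^{2}}$, the spherical Laplacian

$$ \Delta V=\frac{1}{r^{2}}\,\partial_r\!\bigl(r^{2}\,\partial_r V\bigr)+\frac{1}{r^{2}\sin\theta}\,\partial_\theta\!\bigl(\sin\theta\,\partial_\theta V\bigr) $$

has radial part

$$ \frac{1}{r^{2}}\,\partial_r\!\left(-\frac{\cos\theta}{2\pi r}\right)=\frac{\cos\theta}{2\pi r^{4}} $$

and angular part

$$ \frac{1}{r^{2}\sin\theta}\,\partial_\theta\!\left(-\frac{\sin^{2}\theta}{4\pi r^{2}}\right)=-\frac{2\sin\theta\cos\theta}{4\pi r^{4}\sin\theta}=-\frac{\cos\theta}{2\pi r^{4}}, $$

so the two contributions cancel and $\Delta V=0$ for every $r\neq0$, as stated.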
{ "domain": "physics.stackexchange", "id": 95394, "tags": "electrostatics, potential, dipole" }
Count Characters Function PHP
Question: As part of studying algorithms, I needed to make a function to count occurrences of characters in a string. My solution is below. I'd like some feedback about how I did - could the implementation be cleaner? Are there any conceptual gaps my code suggests? <?php function ra_count_chars($str){ $dict = str_split(strtolower(str_replace(' ', '', $str))); foreach($dict as $key=>$value){ $dict[$value] = 0; unset($dict[$key]); } for($i = 0; $i < strlen($str); $i++){ $dict[$str[$i]] += 1; } return $dict; } print_r(ra_count_chars('ccaaat')); Answer: You can achieve your goal with only one loop. I also find it easier to understand and to maintain if you have two variables instead of just one. Unsetting the numeric keys while filling associative keys could get tricky at some point. function ra_count_chars($str) { $dict = []; $chars = str_split(strtolower(str_replace(' ', '', $str))); foreach ($chars as $char) { isset($dict[$char]) ? ++$dict[$char] : $dict[$char] = 1; } return $dict; } It simply checks whether the character is already in the dictionary: if no, it's set; if yes, its count is increased. Keep in mind that this does not work for multi-byte characters. Yes, this is PHP and so this would work, too: foreach ($chars as $char) { ++$dict[$char]; }. But you should always check array values beforehand to not raise a PHP notice.
{ "domain": "codereview.stackexchange", "id": 23864, "tags": "php, algorithm, strings, hash-map" }
How to import stl files into urdf files
Question: I am trying to use stl files with the tag mesh with this code: <geometry> <mesh filename="package://auriga_model/auriga_base.stl"/> </geometry> The package exists and the file is located there, but I get this message when I try to launch it: [ERROR] [1301591093.704357996]: Malformed geometry for Visual element [ERROR] [1301591093.704545807]: Could not parse visual element for Link '/base_laser' [ERROR] [1301591093.704624976]: link xml is not initialized correctly I have been doing some tests with the PR2 stl model and I had no success. Here are some of my tries: <!-- description of the robot --> <link name="/base_link"> <visual> <origin rpy="0 0 0" xyz="0.15 0 0.35"/> <geometry> <!-- mesh filename="package://auriga_model/meshes/auriga_base.stl" /--> <!-- mesh filename="package://auriga_model/meshes/head_pan.stl" /--> <mesh filename="package://pr2_description/meshes/head_v0/head_pan.stl"/> </geometry> </visual> </link> <link name="/base_laser"> <visual> <origin rpy="0 0 0" xyz="0 0 0"/> </visual> </link> For me, it looks like it can't properly resolve filename="package://". Is that possible? I tested, and the "rospack find" command works fine for me. Any clue? Now I am able to load PR2 models but not the one of my robot. The original model was built in SolidWorks and maybe this is the problem. I will try to port it from Catia and I will report whether it works. Originally posted by jsogorb on ROS Answers with karma: 77 on 2011-03-31 Post score: 3 Original comments Comment by hsu on 2011-04-07: If solidworks was the problem, one thing you can try is open the solidworks mesh in a text editor (e.g. vi), and replace the first word "solid" with spaces. Comment by David Lu on 2011-03-31: That seems correct at first pass. What node are you launching that gives you this error? Also, could you post the whole Link xml, please? Answer: The link "base_link" is just fine. The problem is that the second visual tag (in the link "base_laser") is missing the required <geometry> element.
You also need to specify a joint between the two links, otherwise the robot_state_publisher will complain that there are two root links. Below are working versions of the URDF and launch file. Also see the URDF XML documentation and the URDF tutorials. URDF file: <robot name="your_robot_name"> <link name="base_link"> <visual> <origin rpy="0 0 0" xyz="0.15 0 0.35"/> <geometry> <mesh filename="package://pr2_description/meshes/head_v0/head_pan.stl"/> </geometry> </visual> </link> <joint name="base_link_to_base_laser_joint" type="fixed"> <parent link="base_link"/> <child link="base_laser"/> <origin xyz="0 0 1.0"/> </joint> <link name="base_laser"> <visual> <origin rpy="0 0 0" xyz="0 0 0"/> <geometry> <cylinder length="0.6" radius="0.2"/> </geometry> </visual> </link> </robot> launch file: <launch> <!-- upload urdf --> <param name="robot_description" textfile="$(find your_package_name)/urdf/your_robot.urdf" /> <!-- robot state publisher --> <node pkg="robot_state_publisher" type="state_publisher" name="robot_state_publisher" /> <!-- joint state publisher with gui --> <param name="use_gui" value="true" /> <node pkg="joint_state_publisher" type="joint_state_publisher" name="joint_state_publisher"/> </launch> Originally posted by Martin Günther with karma: 11816 on 2011-04-03 This answer was ACCEPTED on the original site Post score: 6 Original comments Comment by David Lu on 2011-04-26: Please post a new question with all of the relevant XML. Comment by Tõnu Samuel on 2011-04-26: I am not sure if I need to ask new question or extend it. I see on similar issue strace output like this: stat64("/home/it/ros/visualization_common/ogre_tools/media/materials/programs/package://xml/Roboti_algus - roomuk-1 Jalg-1 1-1.STL"... Why "package://" is not parsed/rewritten? Some trick? 
Comment by PrasadNR on 2016-12-16: +1 But, I would still like to know how a model created in Catia/Blender (say a simple two joint three link structure hand equivalent) can be simulated in ROS Gazebo interface through C++ code (from scratch). Comment by Alessandro Melino on 2020-06-19: Hello, I see that you declare the base_laser link in the urdf file, but is it possible to declare it using tf package and in the urdf file just declare the base_link with the stl model? Comment by Martin Günther on 2020-06-19: Sure, you can have a URDF that only has the base_laser link in it, and publish the TF transform yourself. If you want to display meshes for both the base_laser and base_link while publishing the base_link to base_laser joint yourself via TF, you cannot simply delete the base_link_to_base_laser_joint, because URDF doesn't allow two unconnected trees of links. Instead, you should change the joint type from fixed to floating. This will cause robot_state_publisher to ignore that joint and not publish TFs for it: https://github.com/ros/robot_state_publisher/blob/d8600f658aa09c3cb68f034feec357dc1d24bb72/src/robot_state_publisher.cpp#L66 Then you can publish that TF yourself. Comment by Alessandro Melino on 2020-06-22: Okay, I understood it perfectly. Thank you for your answer.
{ "domain": "robotics.stackexchange", "id": 5253, "tags": "ros, urdf, robot-model, solidworks, stl" }
Free fall time after being accelerated
Question: An elevator car whose floor-to-ceiling distance is equal to $2.7m$ starts ascending with constant acceleration $1.2 m/s^2$; $2.0 s$ after the start a bolt begins falling from the ceiling of the car. Find the bolt's free fall time. $l=2.7m$ and $w=1.2m/s^2$ I am trying to solve using the absolute frame of reference. This is my wrong attempt, the bolt's equation for $t\geq 2s$ would be $y_b(t)=-\frac{1}{2}g(t-2)^2+e(2)+l$ where $e(t)=\frac{1}{2}wt^2$ the position of the elevator's floor in the absolute frame of reference. $d(t)=y_b(t)-e(t)=-\frac{1}{2}(w+g)t^2+2gt+2(w-g)+l$ the distance between the bolt and the elevator for $t\geq 2s$. I get $\Delta=4g^2+(2g+2w)(2w-2g+l)$ and so $t=\frac{-2g\pm\sqrt{4g^2+(2g+2w)(2w-2g+l)}}{-(g+w)}$ which yields a wrong answer. My mistake is probably in the bolt's position equation, I don't see how it's wrong though. The correct answer is $0.7s$. EDIT: I didn't take into account the fact that the bolt would have a velocity when it's set free. After editing my equations, however, I still get a wrong result (two seconds later than it should be). Here's what I did : $$y_b(t)=-\frac{1}{2}g(t-2)^2+2w(t-2)+(l+2w) \text{ for }t\geq2s$$ $$e(t)=\frac{1}{2}wt^2$$ $$\begin{align*} d(t)&=y_b(t)-e(t)\\ &=-\frac{1}{2}g(t^2-4t+4)-\frac{1}{2}wt^2+2wt+(-4w+l+2w)\\ &=-\frac{1}{2}gt^2+2gt-2g-\frac{1}{2}wt^2+2wt+(l-2w)\\ &=-\frac{1}{2}(g+w)t^2+2(g+w)t+(l-2w-2g) \end{align*}$$ We have then : $$\begin{align*} \Delta &=4(g+w)^2+2(g+w)(l-2w-2g)\\ &=2(g+w)[2(g+w)+l-2w-2g]\\ &=2l(g+w) \end{align*}$$ Finally : $$\begin{align*} t&=\frac{-2(g+w)\pm\sqrt{2l(g+w)}}{-(g+w)}\\ &=2\mp\sqrt{\frac{2l}{w+g}} \end{align*}$$ Does someone know where I have gone wrong? EDIT 2: Actually, my EDIT solution is correct, I just need to subtract 2 from it. I thought I am looking for the time at which the bolt makes contact with the floor, but that's incorrect, I am looking for the duration. 
Answer: We don't provide complete answers to homework and exercise questions, only guidance. So consider the following in your calculations: With respect to a fixed frame of reference outside the elevator, once the bolt becomes free of the ceiling what downward acceleration does it experience? At the same time with respect to a fixed frame of reference outside the elevator, what is the upward acceleration of the elevator? Given 1 and 2, what is the acceleration of the bolt with respect to the frame of reference of the elevator? In the frame of reference of the elevator, what will be the initial velocity of the bolt when it is released from the ceiling? Having answered the above you should be able to apply the kinematic equation relating distance traveled vs acceleration and time. Hope this helps.
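Putting those hints together (and agreeing with the questioner's own EDIT 2): in the elevator's frame the bolt is released at rest and falls with relative acceleration $g+w$, so the floor-to-ceiling distance $l$ is covered in

$$ l=\tfrac{1}{2}(g+w)t^{2}\quad\Rightarrow\quad t=\sqrt{\frac{2l}{g+w}}=\sqrt{\frac{2\cdot 2.7}{9.8+1.2}}\ \mathrm{s}\approx 0.7\ \mathrm{s}, $$

which matches the stated answer.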
{ "domain": "physics.stackexchange", "id": 60060, "tags": "homework-and-exercises, newtonian-mechanics, kinematics, reference-frames, relative-motion" }
Difficulties on understand a particular derivative of position vector field
Question: Please, I'm struggling with this particular second derivative: $\vec{r} = \vec{r}[\vec{r}'(t),t] $ Then in component form: $x^k = x^{k}[x'^{h}(t),t]$ I know that we have a chain rule: $ \displaystyle \frac{dx^k }{dt} = \sum_{h}\frac{\partial x^k}{\partial x'^{h}}\frac{dx'^h }{dt} + \frac{\partial x^k}{\partial t} $ OK, but now for the second differentiation I really don't know how to manage it. $$ \frac{d}{dt}\left(\frac{dx^k }{dt}\right) = \frac{d}{dt}\left(\sum_{\color{red}{h}}\frac{\partial x^k}{\partial x'^{h}}\frac{dx'^h }{dt} + \frac{\partial x^k}{\partial t} \right) = \frac{d}{dt}\left(\sum_{\color{red}{h}}\frac{\partial x^k}{\partial x'^{h}}\frac{dx'^h }{dt}\right) + \frac{d}{dt}\left(\frac{\partial x^k}{\partial t} \right) $$ Answer: $\newcommand{\dd}[2]{\frac{{\rm d}#1}{{\rm d}#2}}$ $\newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}}$ $\newcommand{\ddtwo}[2]{\frac{{\rm d^2}#1}{{\rm d}#2^2}}$ $\newcommand{\pdtwo}[2]{\frac{\partial^2 #1}{\partial #2^2}}$ $\newcommand{\pdthree}[3]{\frac{\partial^2 #1}{\partial #2\partial #3}}$ If $x^k = x^k(x'^h(t),t)$ then $$ \dd{x^k}{t} = \sum_i \pd{x^k}{x'^i}\dd{x'^i}{t} + \pd{x^k}{t} $$ And \begin{eqnarray} \ddtwo{x^k}{t} &=& \sum_i \dd{}{t}\left( \pd{x^k}{x'^i}\dd{x'^i}{t} \right) + \dd{}{t}\pd{x^k}{t} \\ &=& \sum_i \left(\dd{}{t} \pd{x^k}{x'^i}\right)\dd{x'^i}{t} + \pd{x^k}{x'^i}\left(\dd{}{t}\dd{x'^i}{t} \right) + \dd{}{t}\pd{x^k}{t} \\ &=& \sum_i \left(\sum_j\pdthree{x^k}{x'^i}{x'^j}\dd{x'^j}{t} + \pdthree{x^k}{x'^i}{t}\right)\dd{x'^i}{t} + \pd{x^k}{x'^i}\left(\ddtwo{x'^i}{t}\right) + \sum_j\pdthree{x^k}{t}{x'^j}\dd{x'^j}{t} + \pdtwo{x^k}{t} \end{eqnarray}
{ "domain": "physics.stackexchange", "id": 43593, "tags": "homework-and-exercises, calculus" }
Question about Eq. 10.9 in Ashcroft and Mermin
Question: I have a doubt concerning the assumptions made in deriving Eq. 10.9 in Ashcroft and Mermin's Solid State Physics text. We have two entities in the equation: $\psi_m (\textbf{r})$ and $\psi(\textbf{r})$, referring to a localized and a Bloch wavefunction, respectively. The first equality amounts to interchanging the atomic Hamiltonian operator on the Bloch and atomic wavefunctions. The relevant equality is reproduced below: $$ \int \psi_m^* (\textbf{r}) H_{at} \psi (\textbf{r})d\textbf{r} = \int(H_{at}\psi_m(\textbf{r}))^*\psi(\textbf{r})d\textbf{r}. \tag{10.9} $$ Presumably the matrix elements of the Hamiltonian operator are equal when $\psi_m (\textbf{r})$ and $\psi(\textbf{r})$ are interchanged. But how can this be justified? I am new to this forum so I apologize if this question is too naive. Answer: The Hamiltonian operator is Hermitian, so $H_{at} = H_{at}^{\dagger}$. So looking at the relevant part inside the integral, we have: $$(RHS) \quad (H_{at}\psi_m(\vec{r}))^{*} = \psi_m^*(\vec{r})H_{at}^{\dagger} = \psi_m^*(\vec{r})H_{at} \quad (LHS)$$ The first equal sign is because, if you think in terms of matrices, $(AB)^\dagger=B^\dagger A^\dagger$.
{ "domain": "physics.stackexchange", "id": 55390, "tags": "quantum-mechanics, operators, hilbert-space, solid-state-physics, wavefunction" }
Estimates of historical human population size
Question: What are the estimates of minimum historical human population size, and how are they obtained from the current human genetic diversity? I seem to recall a Scientific American article from over 30 years ago claiming a figure of 500-1000, but I can't find any estimate now. Answer: Although I cannot find the Scientific American article to which the poster refers, I assume that the "historic human population size" is the size of the human population in Africa before it underwent the expansion that accompanied the emergence from the African continent. Some important papers on this subject were published in the late 1990s (somewhat later than 30 years ago), and there is a suggestion in one of the earlier estimates that I have not yet found a reference for. There is an implication in the Harpending et al. paper (below) of earlier estimates of the order of 1,000. I will try to find the source (help on this welcomed). Harpending et al. published a paper in PNAS of February 1998 in which they conclude an effective population size of the order of 10,000 individuals for most of the Pleistocene. Reich and Goldstein published a paper in PNAS of July 1998 in which they conclude a maximum pre-expansion population size of 5,900. A more recent study by Huff et al., published in PNAS in February 2010, estimated that the effective population size of human ancestors living before 1.2 million years ago was 18,500, and rejected all models where the ancient effective population size was larger than 26,000. Why should the later estimate be more likely than the earlier ones? Although the details of the calculations are complex (or at least too complex for me), it appears that the genetic markers that are used are crucial, and that it is important to have markers that represent rare mutational events. The earliest studies apparently used mitochondrial genes and the non-recombining parts of the Y chromosome as markers.
(These are used and are valid for studies of more recent times.) The limitations in these resulted in a shift to Alu repeats and microsatellites in the two 1998 papers, respectively. The 2010 paper used the insertion of the longer LINE repeats, which occur about one tenth as often as Alu insertions. There is a Wikipedia article on this topic that may also be of interest.
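As a back-of-the-envelope illustration (my own sketch, not the actual coalescent likelihood machinery of the papers above): under the standard neutral model, nucleotide diversity is roughly $\pi \approx \theta = 4 N_e \mu$, so a crude point estimate of the effective population size is $N_e = \pi / (4\mu)$. The numbers below are hypothetical values of roughly human magnitude, not data from the cited studies.

```python
# Toy illustration (NOT the method of the cited papers): under the standard
# neutral coalescent, nucleotide diversity pi ~ theta = 4 * Ne * mu, so a
# point estimate of the effective population size is Ne = pi / (4 * mu).
def effective_population_size(pi, mu):
    """Estimate Ne from nucleotide diversity pi and per-site mutation rate mu."""
    return pi / (4.0 * mu)

# Hypothetical numbers of roughly human magnitude:
# pi ~ 0.001 differences/site, mu ~ 2.5e-8 mutations/site/generation
print(round(effective_population_size(0.001, 2.5e-8)))  # -> 10000
```

With these illustrative inputs the estimate lands at the same order of magnitude as the Harpending et al. figure of ~10,000; the real papers differ in which markers they use and how they model the mutation process, not in this basic relationship.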
{ "domain": "biology.stackexchange", "id": 10869, "tags": "human-genetics, population-genetics, palaeontology" }
bounded length CoNP proof
Question: Let $A \subseteq \{0,1\}^*$ be a language which satisfies $|A \cap \{0,1\}^n| = n^3$ for all $n\ge 10$. Prove that $A \in NP$ implies $A \in coNP$. Thoughts: I've been having difficulty with this problem. My idea is somehow showing that each set of $n^3$ words cannot be a verifier for A and therefore A is in coNP. The problem is that, as I see it, proving this will take exponential time, as I have an exponential number of strings from which to choose the $n^3$ words. I would like some suggestions about how to prove this. Answer: Assume that $A\in NP$, and let $V$ be a verifier for it. That is, $V$ takes as input $(x,y)$ and outputs 1 if $y$ witnesses that $x\in A$. That is, we can write $A=\{x:\exists y,\ V(x,y)=1\}$. We show that $A\in coNP$, by showing that $\overline{A}\in NP$. We construct a verifier $V'$ for $\overline{A}$ as follows: Given input $x$, $V'$ expects a witness of the form $(w_1,y_1,...,w_{n^3},y_{n^3})$, where $|x|=n$, such that the $w_i$ are distinct words of length $n$, $y_i$ is a witness that $w_i\in A$, and for every $i$ we have $w_i\neq x$. That is, a witness for $x$ not being in $A$ is simply a list of all the $n^3$ words of length $n$ that are in $A$; since there are exactly $n^3$ of them, such a list avoiding $x$ exists if and only if $x\notin A$, and it has polynomial total length. Clearly verifying this can be done in polynomial time using $V$ (which we assume to run in polynomial time).
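A toy sketch of the verifier $V'$ (my own illustration, with a made-up language in place of $A$ and a trivial membership check in place of $V$; the counting function stands in for $n^3$): a certificate that $x \notin A$ is the complete list of length-$n$ members of $A$, none equal to $x$.

```python
from itertools import product

# Toy stand-ins: "A" is the set of bit strings ending in '1', and the
# membership witness y_i is trivial.  count(n) plays the role of n^3.
def in_A(w, witness=None):          # stand-in for the NP verifier V
    return w.endswith("1")

def count(n):                        # |A ∩ {0,1}^n|  (n^3 in the exercise)
    return 2 ** (n - 1)

def verify_not_in_A(x, certificate):
    """V': accept iff the certificate lists all distinct length-|x| members
    of A, and x is not among them."""
    n = len(x)
    words = set(certificate)
    return (len(words) == count(n)
            and all(len(w) == n and in_A(w) and w != x for w in words))

n = 3
members = ["".join(bits) for bits in product("01", repeat=n) if in_A("".join(bits))]
print(verify_not_in_A("000", members))  # True:  "000" is not in A
print(verify_not_in_A("001", members))  # False: "001" is in A
```

The point mirrors the proof: because $|A \cap \{0,1\}^n|$ is known exactly, the full membership list is a short (polynomial-size) certificate for non-membership, so no exponential search is needed at verification time.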
{ "domain": "cs.stackexchange", "id": 4637, "tags": "complexity-theory, time-complexity" }
Seed std::mt19937 from std::random_device
Question: Many people seed their Mersenne Twister engines like this: std::mt19937 rng(std::random_device{}()); However, this only provides a single unsigned int, i.e. 32 bits on most systems, of seed randomness, which seems quite tiny when compared to the 19937 bit state space we want to seed. Indeed, if I find out the first number generated, my PC (Intel i7-4790K) only needs about 10 minutes to search through all 32 bit numbers and find the used seed. (I know that MT is not a cryptographic RNG, but I just did that to get a feel for how small 32 bit really is these days.) I am trying to build a function to properly seed a mt19937 and came up with this: #include <algorithm> #include <iostream> #include <random> auto RandomlySeededMersenneTwister () { // Magic number 624: The number of unsigned ints the MT uses as state std::vector<unsigned int> random_data(624); std::random_device source; std::generate(begin(random_data), end(random_data), [&](){return source();}); std::seed_seq seeds(begin(random_data), end(random_data)); std::mt19937 seededEngine (seeds); return seededEngine; } int main() { auto rng = RandomlySeededMersenneTwister(); for (int i = 0; i < 10; ++i) std::cout << rng() << "\n"; } This does look like a safe solution to me; however, I have learned that problems with RNG are oftentimes quite subtle. Provided std::random_device produces good, random data on my system, does the code give me a correctly seeded std::mt19937? Answer: Well, first off, why do you use a std::vector for a comparatively small sequence of known length? A raw array or std::array suffices and avoids any dynamic allocation. Next, avoid needless magic numbers. Use std::mt19937::state_size instead of manually specifying 624. Why do you use a lambda? A simple std::ref(source) suffices. The seeding itself looks perfectly fine and there's no actual error anywhere in your code. 
template<class T = std::mt19937, std::size_t N = T::state_size * sizeof(typename T::result_type)> auto ProperlySeededRandomEngine () -> typename std::enable_if<N, T>::type { std::random_device source; std::random_device::result_type random_data[(N - 1) / sizeof(source()) + 1]; std::generate(std::begin(random_data), std::end(random_data), std::ref(source)); std::seed_seq seeds(std::begin(random_data), std::end(random_data)); return T(seeds); } You could avoid the need for random_data by using counting and transforming iterators as detailed in "Sequence iterator? Isn't there one in boost?". This is not simpler, but maybe more efficient: template<class T = std::mt19937, std::size_t N = T::state_size * sizeof(typename T::result_type)> T ProperlySeededRandomEngine () { std::random_device source; auto make_iter = [&](std::size_t n) { return boost::make_transform_iterator( boost::counting_iterator<std::size_t>(n), [&](size_t){return source();}); }; std::seed_seq seeds(make_iter(0), make_iter((N - 1) / sizeof(source()) + 1)); return T(seeds); } On coliru If you can upgrade to C++20, use ranges and views (godbolt): template<class T = std::mt19937, std::size_t N = T::state_size * sizeof(typename T::result_type)> T ProperlySeededRandomEngine () { std::random_device source; auto random_data = std::views::iota(std::size_t(), (N - 1) / sizeof(source()) + 1) | std::views::transform([&](auto){ return source(); }); std::seed_seq seeds(std::begin(random_data), std::end(random_data)); return T(seeds); }
{ "domain": "codereview.stackexchange", "id": 33445, "tags": "c++, c++11, random" }
Define a traceless part of $\rho$
Question: I saw in a paper: $|\bar{\rho}\rangle\rangle=|\rho\rangle\rangle-|\hat{I}\rangle\rangle / 2^{n / 2}$ for the $4^n$-dimensional vector representing the traceless part of $\rho$. https://arxiv.org/abs/2308.15648v1 For example, $n=1$, $ |\rho\rangle\rangle = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix}, $ $a+d=1$. \begin{equation} |\hat{I}\rangle\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \end{equation} \begin{equation} |\bar{\rho}\rangle\rangle = \begin{pmatrix} a - \frac{1}{\sqrt{2}} \\ b \\ c \\ d - \frac{1}{\sqrt{2}} \end{pmatrix}, \end{equation} $tr(\bar{\rho})=a+d-\frac{2}{\sqrt{2}}\ne0$. Here, why not $|\bar{\rho}\rangle\rangle=|\rho\rangle\rangle-|\hat{I}\rangle\rangle / 2^{n}$? Answer: You've not been careful enough with the factors of $2^{n/2}$. There are two of them: one in making $\hat I$ instead of $I$ and another when subtracting $|\hat I\rangle\rangle$. In other words, your $|\bar\rho\rangle\rangle$ is incorrect and the terms really are $a-\frac12$ etc. as you expect.
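A quick numerical check of the answer's point (my own sketch, using the questioner's column-stacking vectorization for $n = 1$): with the normalized identity $\hat I = I / 2^{n/2}$ and the subtraction $|\rho\rangle\rangle - |\hat I\rangle\rangle / 2^{n/2}$, the two factors of $2^{n/2}$ combine to $I/2^n$, so the result really is traceless.

```python
import numpy as np

# Check: the two factors of 2^(n/2) -- one inside Ihat = I / 2^(n/2), one in
# the subtraction -- combine to I / 2^n, so |rho_bar>> is traceless.
n = 1
rho = np.array([[0.7, 0.1], [0.1, 0.3]])           # any trace-1 matrix (a=0.7, d=0.3)
Ihat = np.eye(2 ** n) / 2 ** (n / 2)               # normalized identity
rho_vec = rho.flatten()                            # |rho>> = (a, b, c, d)
rho_bar = rho_vec - Ihat.flatten() / 2 ** (n / 2)  # |rho_bar>>

trace = rho_bar.reshape(2 ** n, 2 ** n).trace()
print(abs(trace) < 1e-12)  # True: the entries are a - 1/2 and d - 1/2
```

This confirms the answer: the subtracted entries are $a - \frac12$ and $d - \frac12$ (not $a - \frac{1}{\sqrt2}$), so the trace is $a + d - 1 = 0$.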
{ "domain": "quantumcomputing.stackexchange", "id": 5223, "tags": "textbook-and-exercises, density-matrix" }
How do we choose the basis of a Hilbert Space?
Question: When we define a basis for the Hilbert Space for a spin half particle, I understand it being done using the principle of mutual exclusivity: that is, if $S_z = +\hbar/2$ then it cannot be $S_z = -\hbar/2$, hence they are orthonormal state vectors. The $S_z$ operator is nice in the sense that it has the above same state vectors as its eigenvectors and therefore just gets scaled to produce an observable. So the choice of basis beforehand seems useless, as the operator anyway will only produce an observable along its eigenvectors. So do we automatically make the observables of our experiment the basis of our Hilbert space by default, since they are the only observed values when we apply a specific operator based on our experiment on it? Answer: Observables aren’t a basis of the Hilbert space: their eigenvectors form a basis. The strategy is to find the largest possible set of commuting operators, so that their common eigenvectors are uniquely labelled by the eigenvalues in the commuting set. In your example, the Hilbert space is 2-dimensional and the eigenvalues of $\hat S_z$ are $\pm \frac{1}{2}$, so that’s enough to uniquely label the basis of your Hilbert space, so you don’t need anything else. In your example, you could choose your basis vectors $(1,0)^\top$ and $(0,1)^\top$ to be the eigenvectors of $\hat S_x$, and this would be just as fine: the matrix representation of $\hat S_z$ and $\hat S_y$ would then be non-diagonal. The dimension of the Hilbert space is tied to the number of distinct mutually exclusive outcomes: experiment shows there are only 2 possible distinct outcomes to measuring the spin of a spin-1/2 particle, and since these outcomes do not depend on the direction, the eigenstates of any operator of the form $$ n_x\hat S_x+n_y\hat S_y+n_z\hat S_z\,, \qquad n_x^2+n_y^2+n_z^2=1 $$ could serve as a basis for the 2-dimensional Hilbert space. It is convention to choose a basis where $\hat S_z$ is diagonal, but that’s just convention.
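A small numerical illustration of the answer's last point (my own sketch, in units with $\hbar = 1$): for any unit direction $n$, the operator $n\cdot\hat S$ has eigenvalues $\pm\frac12$ and its eigenvectors form an orthonormal basis of the 2-dimensional space, so every direction gives an equally valid basis.

```python
import numpy as np

# For any unit vector n, n·S (spin-1/2, hbar = 1) has eigenvalues +-1/2, and
# its two eigenvectors are an orthonormal basis -- no direction is preferred.
Sx = np.array([[0, 1], [1, 0]]) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]]) / 2

theta, phi = 0.7, 1.9                       # arbitrary direction on the sphere
nvec = (np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta))
Sn = nvec[0] * Sx + nvec[1] * Sy + nvec[2] * Sz

vals, vecs = np.linalg.eigh(Sn)             # Hermitian eigendecomposition
print(np.allclose(sorted(vals), [-0.5, 0.5]))        # True
print(np.allclose(vecs.conj().T @ vecs, np.eye(2)))  # True: orthonormal basis
```

Changing `theta` and `phi` changes the eigenvectors but never the eigenvalues, which is exactly why choosing the $\hat S_z$ eigenbasis is pure convention.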
{ "domain": "physics.stackexchange", "id": 84466, "tags": "quantum-mechanics, hilbert-space, quantum-spin" }
Pymol: select low confidence regions from AlphaFold pdb file
Question: I have downloaded a predicted structure from AlphaFold as a pdb file (https://alphafold.com/entry/O75376) and loaded it into Pymol (2.3.0). There is quite a large portion of the structure that was modelled with very low confidence (pLDDT < 50). Is there a way to select these regions in the structure so that I can assign a different representation to them? Thank you. Answer: Answer from @matteo-ferla, converted from comment: They are stored as b-factors. This is well described in the PyMol wiki. [please edit to improve this answer, if possible]
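Putting that comment into practice (a hypothetical sketch: AlphaFold PDB files store the per-residue pLDDT in the B-factor column, so PyMOL's `b` property selector applies; the selection name `low_conf` and the chosen representations are arbitrary):

```
# In the PyMOL command line: select everything modelled with pLDDT < 50
select low_conf, b < 50

# then give those regions a different representation, e.g.
hide cartoon, low_conf
show ribbon, low_conf
color orange, low_conf
```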
{ "domain": "bioinformatics.stackexchange", "id": 2225, "tags": "protein-structure, pymol, structural-biology" }
A queue that switches from FIFO mode to priority mode
Question: I implemented a queue capable of operating both in the FIFO mode and in the priority mode: when the priority mode is enabled, the elements are taken in order of decreasing priority; when the priority mode is disabled, the elements are taken in order of their arrival. In order to manage the priority, I thought of using multiple queues (an array of Queue objects, that is m_PriorityQueues in the following code sample), one for each type of element; in this way, I can manage a priority based on the element type: just take the elements from the highest-priority queue first and then progressively from lower priority queues. In order to set the priority of different types of elements, I thought I'd pass an array of Type objects in ascending order of priority, so that each Type is associated with the index of a queue. The user does not see multiple queues, but uses the queue as if it were a single queue, so that the elements leave the queue in the order they arrive when the priority mode is disabled. In order to properly manage the switch from priority mode to FIFO mode, I thought of using an additional queue (that is the m_FifoOrder field in the following code sample) to manage the order of arrival of the types of elements: essentially, when a new item is enqueued, it is added to the i-th queue, and the i value which indexes the array of queues is inserted into this additional queue of integers. 
public class PriorityQueue { private Queue[] m_PriorityQueues; private LinkedList<int> m_FifoOrder; private Dictionary<Type, int> m_TypeMapping; public PriorityQueue(Type[] prioritySet) { m_TypeMapping = new Dictionary<Type, int>(); for (int p = 0; p < prioritySet.Length; p++) { if (!m_TypeMapping.ContainsKey(prioritySet[p])) m_TypeMapping.Add(prioritySet[p], p); } m_PriorityQueues = new Queue[m_TypeMapping.Count]; for (int i = 0; i < m_PriorityQueues.Length; i++) m_PriorityQueues[i] = new Queue(); m_FifoOrder = new LinkedList<int>(); } // Enable or disable the priority mode. public bool IsPriorityEnabled { get; set; } // Gets the priority count. public int PriorityCount { get { return m_TypeMapping.Count; } } // Gets the number of items actually enqueued in this queue. public int Count { get { return m_FifoOrder.Count; } } // Removes all objects from this queue. public void Clear() { for (int i = 0; i < m_PriorityQueues.Length; i++) m_PriorityQueues[i].Clear(); m_FifoOrder.Clear(); } // Add an object to the end of this queue public int Enqueue(object item) { int priority; if (item == null) { priority = PriorityCount - 1; // higher priority } else if (!m_TypeMapping.TryGetValue(item.GetType(), out priority)) { priority = 0; // lower priority for unknown types } m_PriorityQueues[priority].Enqueue(item); m_FifoOrder.AddLast(priority); return priority; } // Removes and returns the object at the beginning of this queue. 
public bool TryDequeue(out object item) { if (IsPriorityEnabled) { for (int p = PriorityCount - 1; p >= 0; p--) { if (m_PriorityQueues[p].Count > 0) { item = m_PriorityQueues[p].Dequeue(); m_FifoOrder.Remove(p); return true; } } } else { if (m_FifoOrder.Count > 0) { int index = m_FifoOrder.First.Value; item = m_PriorityQueues[index].Dequeue(); m_FifoOrder.RemoveFirst(); return true; } } item = null; return false; } } UPDATE: I performed some tests and I noticed that the TryDequeue method of the version proposed above suffers from performance issues when the priority mode is enabled: the Remove method, called on the m_FifoOrder linked list, performs a linear search, which is an O(n) operation. Obviously, the performance degrades even more when n is very large. In order to reduce the latency caused by this method, I created a new version of the priority queue: the FastPriorityQueue class. The inner class ItemInfo simply contains the object to be enqueued and the priority that is assigned during the queuing operation. An ItemInfo object is always inserted at the end of the m_FifoOrder linked list, so that the AddLast method returns a reference to the last added LinkedListNode<ItemInfo>: this reference is enqueued to one of the m_PriorityQueues queues depending on the chosen priority. 
public class FastPriorityQueue { private class ItemInfo { public object Data { get; set; } public int Priority { get; set; } } private LinkedList<ItemInfo> m_FifoOrder; private Queue<LinkedListNode<ItemInfo>>[] m_PriorityQueues; private Dictionary<Type, int> m_TypeMapping; public FastPriorityQueue(Type[] prioritySet) { m_TypeMapping = new Dictionary<Type, int>(); for (int p = 0; p < prioritySet.Length; p++) { if (!m_TypeMapping.ContainsKey(prioritySet[p])) m_TypeMapping.Add(prioritySet[p], p); } m_PriorityQueues = new Queue<LinkedListNode<ItemInfo>>[m_TypeMapping.Count]; for (int i = 0; i < m_PriorityQueues.Length; i++) m_PriorityQueues[i] = new Queue<LinkedListNode<ItemInfo>>(); m_FifoOrder = new LinkedList<ItemInfo>(); } // Enable or disable the priority mode. public bool IsPriorityEnabled { get; set; } // Gets the priority count. public int PriorityCount { get { return m_TypeMapping.Count; } } // Gets the number of items actually enqueued in this queue. public int Count { get { return m_FifoOrder.Count; } } // Removes all objects from this queue. public void Clear() { for (int i = 0; i < m_PriorityQueues.Length; i++) m_PriorityQueues[i].Clear(); m_FifoOrder.Clear(); } // Add an object to the end of this queue public int Enqueue(object item) { int priority; if (item == null) { priority = PriorityCount - 1; // higher priority } else if (!m_TypeMapping.TryGetValue(item.GetType(), out priority)) { priority = 0; // lower priority for unknown types } LinkedListNode<ItemInfo> enqueued = m_FifoOrder.AddLast( new ItemInfo { Data = item, Priority = priority }); m_PriorityQueues[priority].Enqueue(enqueued); return priority; } // Removes and returns the object at the beginning of this queue. 
public bool TryDequeue(out object item) { if (IsPriorityEnabled) { for (int p = PriorityCount - 1; p >= 0; p--) { if (m_PriorityQueues[p].Count > 0) { LinkedListNode<ItemInfo> dequeued = m_PriorityQueues[p].Dequeue(); item = dequeued.Value.Data; m_FifoOrder.Remove(dequeued); // This method is an O(1) operation. return true; } } } else { if (m_FifoOrder.Count > 0) { ItemInfo nodeItem = m_FifoOrder.First.Value; item = nodeItem.Data; m_PriorityQueues[nodeItem.Priority].Dequeue(); m_FifoOrder.RemoveFirst(); return true; } } item = null; return false; } } Please review the above code samples and provide suggestions on how to improve them. Are there simpler or more efficient solutions than these? Also, if there are other solutions to switch between the two modes (FIFO mode or priority mode), please provide some details. UPDATE 2: here is an example of initialization of a PriorityQueue object. When the queue works in FIFO mode, the priorities are ignored. Instead, when the priority mode is enabled, the next item removed from the queue depends on the priority of its type: items with the highest priority will be dequeued first. // list of types in order of priority Type[] priorities = new Type[] { typeof(ObjectWithLowerPriority), typeof(ObjectWithIntermediatePriority), typeof(ObjectWithHigherPriority) }; PriorityQueue queue = new PriorityQueue(priorities); queue.Enqueue(...); queue.Enqueue(...); // ... other calls to Enqueue method queue.IsPriorityEnabled = false; queue.Dequeue(); // ... queue.IsPriorityEnabled = true; // priority mode enabled queue.Dequeue(); // ... Answer: Layout looks good, I like the use of white space, and tabs. You have too many redundant comments that are unneeded. I am not big on the use of m_ to signify class members, I think the _ is more commonly used, but that is a debate for another time. I have changed it to _ because that is what my eyes are used to. 
I would get rid of the Queue array and make it a Dictionary<int, LinkedList<ItemInfo>>. This will keep the data structures you are using in the class more consistent, and makes them a little easier to reason about. The three class members can all be made readonly. I would change the Type[] variable in the constructor to IEnumerable. This will block any changes to the list within the class, and might improve speed a little. To initialize it, you can use the LINQ .ToList() which changes it back to a list, and enables you to run the initialization code. Any changes made to the new list will not propagate back to the calling code. I would also change the p variable to priority; this makes the code more readable. The initialization of the _typeMapping dictionary should be moved to its own method to declutter the constructor. I would also initialize your IsPriorityEnabled property here, just to make sure it starts off in a known condition. So the constructor now looks like: public FastPriorityQueue(IEnumerable<Type> prioritySet) { _typeMapping = new Dictionary<Type, int>(); _priorityQueues = new Dictionary<int, LinkedList<ItemInfo>>(); _fifoOrder = new LinkedList<ItemInfo>(); IsPriorityEnabled = false; InitializeTypeMapping(prioritySet); } private void InitializeTypeMapping(IEnumerable<Type> prioritySet) { var priorityList = prioritySet.ToList(); for (var priority = 0; priority < priorityList.Count; priority++) { if (!_typeMapping.ContainsKey(priorityList[priority])) { _typeMapping.Add(priorityList[priority], priority); } } } The Clear method could be streamlined a little by using a foreach loop. public void Clear() { foreach (var list in _priorityQueues.Values) { list.Clear(); } _fifoOrder.Clear(); } The Enqueue method can be cleaned up a lot. I'd start by pulling the logic to determine the priority out into its own method. At the same time, you could redo the if statements to make it much more readable. First check should be the null item check. 
If it is null, no point in going any further. The second check can be either an if statement, or the ? : operator as I used. private int DeterminePriority(object item) { if (item == null) { return PriorityCount - 1; // higher priority } var itemType = item.GetType(); return _typeMapping.ContainsKey(itemType) ? _typeMapping[itemType] : 0; } You could then create the queueItem and add it to the queues. I suggest moving priority enqueue and the regular enqueue out into their own methods. I moved the regular Enqueue(ItemInfo x) into its own method to keep things consistent. public int Enqueue(object item) { var priority = DeterminePriority(item); var queueItem = new ItemInfo { Data = item, Priority = priority }; PriorityEnqueue(queueItem); Enqueue(queueItem); return priority; } private void Enqueue(ItemInfo queueItem) { _fifoOrder.AddLast(queueItem); } private void PriorityEnqueue(ItemInfo item) { if (!_priorityQueues.ContainsKey(item.Priority)) { _priorityQueues[item.Priority] = new LinkedList<ItemInfo>(); } _priorityQueues[item.Priority].AddLast(item); } For your TryDequeue method, I would again separate the priority and regular dequeue into their own methods. This will allow you to use the ? : operator in the TryDequeue method and really clean it up. Again, do your fail check first in each method so you don't execute more code than you need to. You'll notice I've added a RemoveItem() method which removes the item from both Queues. Because of the way the item was found, there is no need to figure out which dictionary row the item came from. public bool TryDequeue(out object item) { return IsPriorityEnabled ? 
PriorityDequeue(out item) : Dequeue(out item); } private bool Dequeue(out object item) { if (_fifoOrder.Count == 0) { item = null; return false; } var nextItem = _fifoOrder.First.Value; item = nextItem.Data; RemoveItem(nextItem); return true; } private bool PriorityDequeue(out object item) { var priorityQueue = _priorityQueues.OrderByDescending(kv => kv.Key).Select(kv => kv.Value).FirstOrDefault(v => v.Count > 0); if (priorityQueue == null) { item = null; return false; } var nextItem = priorityQueue.First.Value; item = nextItem.Data; RemoveItem(nextItem); return true; } private void RemoveItem(ItemInfo item) { _priorityQueues[item.Priority].Remove(item); _fifoOrder.Remove(item); } So the whole class looks like: public class FastPriorityQueue { private class ItemInfo { public object Data { get; set; } public int Priority { get; set; } } private readonly LinkedList<ItemInfo> _fifoOrder; private readonly IDictionary<int, LinkedList<ItemInfo>> _priorityQueues; private readonly IDictionary<Type, int> _typeMapping; public FastPriorityQueue(IEnumerable<Type> prioritySet) { _typeMapping = new Dictionary<Type, int>(); _priorityQueues = new Dictionary<int, LinkedList<ItemInfo>>(); _fifoOrder = new LinkedList<ItemInfo>(); InitializeTypeMapping(prioritySet); } private void InitializeTypeMapping(IEnumerable<Type> prioritySet) { var priorityList = prioritySet.ToList(); for (var priority = 0; priority < priorityList.Count; priority++) { if (!_typeMapping.ContainsKey(priorityList[priority])) { _typeMapping.Add(priorityList[priority], priority); } } } public bool IsPriorityEnabled { get; set; } public int PriorityCount { get { return _typeMapping.Count; } } public int Count { get { return _fifoOrder.Count; } } public void Clear() { foreach (var t in _priorityQueues.Values) { t.Clear(); } _fifoOrder.Clear(); } public int Enqueue(object item) { var priority = DeterminePriority(item); var queueItem = new ItemInfo { Data = item, Priority = priority }; PriorityEnqueue(queueItem); Enqueue(queueItem); return priority; } private void 
Enqueue(ItemInfo queueItem) { _fifoOrder.AddLast(queueItem); } private void PriorityEnqueue(ItemInfo item) { if (!_priorityQueues.ContainsKey(item.Priority)) { _priorityQueues[item.Priority] = new LinkedList<ItemInfo>(); } _priorityQueues[item.Priority].AddLast(item); } private int DeterminePriority(object item) { if (item == null) { return PriorityCount - 1; // higher priority } var itemType = item.GetType(); return _typeMapping.ContainsKey(itemType) ? _typeMapping[itemType] : 0; } public bool TryDequeue(out object item) { return IsPriorityEnabled ? DequeuePriority(out item) : Dequeue(out item); } private bool Dequeue(out object item) { if (_fifoOrder.Count == 0) { item = null; return false; } var nextItem = _fifoOrder.First.Value; item = nextItem.Data; RemoveItem(nextItem); return true; } private bool DequeuePriority(out object item) { var priorityQueue = _priorityQueues.OrderByDescending(kv => kv.Key).Select(kv => kv.Value).FirstOrDefault(v => v.Count > 0); if (priorityQueue == null) { item = null; return false; } var nextItem = priorityQueue.First.Value; item = nextItem.Data; RemoveItem(nextItem); return true; } private void RemoveItem(ItemInfo item) { _priorityQueues[item.Priority].Remove(item); _fifoOrder.Remove(item); } } I hope you picked up a few pointers with this. Good luck.
{ "domain": "codereview.stackexchange", "id": 2466, "tags": "c#, performance, queue" }
How is Vertex Cover reducible to Independent Set using parameterized reduction with parameter k?
Question: We have the following Lemma and proof: Lemma 5.5. If $A$ is FPT, then $A\leq_{\mathrm{fpt}}$ Independent Set. Proof. We reduce $A$ to Independent Set parametrised by $k'$, where $k'$ is the size of a sought independent set. Given an instance $(x,k)$ of $A$, solve $(x,k)$ in $f(k)\cdot \mathrm{poly}(|x|)$ time, (the running time of this reduction uses the running time of problem $A$) if $(x,k)$ is a yes-instance, then output a one-vertex graph and $k'=1$. if $(x,k)$ is a no-instance, then output a one-vertex graph and $k'=2$. (Could this need more time to compute?) It is clear that the input instance is a yes-instance if and only if the output IS instance is, and $k'\leq k+2$. $\quad\square$ The definition of parameterized reduction is as follows: Definition 13.1 (Parameterized reduction). Let $A,B\subseteq \Sigma^*\times\mathbb{N}$ be two parameterized problems. A parameterized reduction from $A$ to $B$ is an algorithm that, given an instance $(x,k)$ of $A$, outputs an instance $(x',k')$ of $B$ such that $(x,k)$ is a yes-instance of $A$ if and only if $(x',k')$ is a yes-instance of $B$, $k'\leq g(k)$ for some computable function $g$, and the running time is $f(k)\cdot |x|^{\mathcal{O}(1)}$ for some computable function $f$. Now it seems what he did is: if $(x,k)$ is a yes-instance, then output a one-vertex graph with $k'=1$, otherwise if $(x,k)$ is a no-instance, then output a one-vertex graph with $k'=2$. This makes the first condition of parameterized reduction true. Now, for the second condition, we should say $k' \leq g(k)$, and the constant function $g(k)=2$ works, since $k'$ is either $1$ or $2$. The running time would be just the running time of the parameterized algorithm. Done. My question is that when I want to do a reduction, I should take the input of $A$ (like vertex cover) and turn it into the input of an IS instance. 
For example, assume we have a clique of $4$, with $k=3$: the vertex cover algorithm will return a yes-instance. Now if we want to turn this instance into an IS instance that is also a yes-instance, we need to use $n-k$. But he uses a different way. Also, when we prove IS to VC using parameterized reduction, it gives us a wrong reduction (because $n-k$ is not a function of $k$ alone). I just want to know here how "VC to IS" using parameterized reduction works. Answer: It seems your confusion comes from the fact that Lemma 5.5 is a bit of a strange statement. In fact, we can replace Lemma 5.5 with the statement that for any problem $B$ such that $B_Y$ is a yes-instance of $B$ and $B_N$ is a no-instance of $B$, we have that $A\leq_{\mathrm{fpt}}B$. The proof of this adapted lemma is precisely the same as Lemma 5.5, only now we use $B_Y$ in the second bullet-point and $B_N$ in the third. The crucial reason why this works is that we already know that $A$ is fixed parameter tractable. In a sense, this lemma shows that we learn nothing from an fpt-reduction from a problem we already know is fpt. So, this lemma is in fact useless when trying to create a reduction from a problem we don't know is fpt, such as $k$-Clique. When you try to apply Lemma 5.5 when $A$ is not fpt (or at least not known to be), you fail to satisfy part 3 of the definition of an fpt-reduction.
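A toy sketch of why the Lemma 5.5 "reduction" is so strange (my own illustration; `solve_A` is a hypothetical stand-in for the assumed $f(k)\cdot\mathrm{poly}(|x|)$ algorithm): the reduction simply solves $(x,k)$ outright and maps the answer to one of two fixed Independent Set instances, so $k' \le 2$ trivially.

```python
# Once A is known to be FPT, the "reduction" just SOLVES (x, k) and outputs a
# fixed yes/no instance of Independent Set: a one-vertex graph K1, with k' = 1
# (K1 has an independent set of size 1: yes) or k' = 2 (it has none of size 2: no).
def solve_A(x, k):                 # hypothetical stand-in for the FPT algorithm
    return len(x) >= k             # toy FPT problem

def reduce_to_IS(x, k):
    if solve_A(x, k):
        return ("K1", 1)           # yes-instance of Independent Set
    return ("K1", 2)               # no-instance of Independent Set

print(reduce_to_IS("abcd", 3))     # ('K1', 1)
print(reduce_to_IS("ab", 3))       # ('K1', 2)
```

This makes the answer's point concrete: the construction never transforms the instance at all, so it tells us nothing when $A$ (say $k$-Clique) is not already known to be FPT, because then step one, running `solve_A` within the allowed time, is exactly what we cannot do.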
{ "domain": "cs.stackexchange", "id": 10634, "tags": "complexity-theory, reductions, parameterized-complexity" }
Max-Cut Of Minor Closed Family
Question: It's well known that planar graphs form a minor-closed family with forbidden minors $K_{3,3}, K_{5}$, and graphs with bounded treewidth are also a minor-closed family, with no $H_{k}$ as a minor. I assume that graphs with bounded max cut form a minor-closed family as well. Given an arbitrary graph $G$ that doesn't contain $H$ as a minor, how can one find the max cut approximately? Thanks! Addendum: The relevant topic can be found in On the complexity of the Maximum Cut problem, Chapter 6, Graphs with bounded treewidth. The PTAS begins by making modifications to the tree decomposition without increasing its treewidth. 1) $T$ is a binary tree. 2) If a node $i \in I$ has two children $j_{1}$ and $j_{2}$, then $X_{i}=X_{j_1}=X_{j_2}$. 3) If a node $i \in I$ has one child $j$, then either $X_{j} \subset X_{i}$ and $|X_{i}-X_{j}|=1$, or $X_{i} \subset X_{j}$ and $|X_{j}-X_{i}|=1$. In my opinion it's a very strong modification, and actually I don't get the idea behind it. On the 2nd condition, if I understood right: if there is a node with two children, then both children contain exactly the same set of vertices as their parent, but what for? Answer: MaxCut can be solved in polynomial time in $K_5$-minor-free graphs but is NP-hard in $K_6$-minor-free graphs (in particular, for apex graphs of planar graphs) [Barahona 1983]. See also this WG 2010 paper and slides by Marcin Kamiński.
{ "domain": "cstheory.stackexchange", "id": 1320, "tags": "graph-algorithms" }
Intuition behind the definition of Continuous Symmetry of a Lagrangian (Proof of Noether's Theorem)
Question: Suppose there is a one-parameter family of continuous transformations that maps co-ordinates $q(t)\rightarrow Q(s,t)$ where $s$ is the continuous parameter. Also, when $s=0$ the transformation is the identity, i.e. $Q(0,t)=q(t)$. Then if we have a Lagrangian $L$ which is invariant under the replacement of $q\rightarrow Q$, why is it intuitively true that: $$ \frac{d}{ds}L|_{s=0}=0$$ In other words, why is the derivative taken at $s=0$? Answer: Let there be a transformation $q(t) \rightarrow Q(s,t)$ such that $L$ remains invariant. There exists an identity point $s_{0}$ where: \begin{equation} Q(s_{0},t) = q(t) \end{equation} $L$ is initially a functional of $q$, $\dot{q}$ and $t$, but after the transformation it becomes a functional of $Q(s,t)$ and $\dot{Q}(s,t)$. Furthermore, this is the only way that $L$ depends on $s$, so: \begin{equation} \frac{d}{ds} L[Q(s,t), \dot{Q}(s,t),t] = \int ds' \, \left \{ \frac{\partial Q(s',t)}{\partial s} \frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t)} + \frac{\partial \dot{Q}(s',t)}{\partial s} \frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta \dot{Q}(s',t)} \right \} \end{equation} More specifically, calculated at the identity point: \begin{equation} \frac{dL}{ds} \Bigg |_{s_{0}} = \int ds' \, \left \{ \frac{\partial Q(s',t)}{\partial s} \Bigg | _{s_{0}}\frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t)} \Bigg|_{Q(s_{0},t)} + \frac{\partial \dot{Q}(s',t)}{\partial s} \Bigg|_{s_{0}}\frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta \dot{Q}(s',t)} \Bigg|_{\dot{Q}(s_{0},t)} \right \} \end{equation} $\dot{Q}(s_{0},t)$ here denotes the function $\dot{Q}(s,t)$ calculated at $s_{0}$, but it is easy to show that it is also equivalent to the time derivative of $Q(s_{0},t) = q(t)$. 
Next, we Taylor expand the Lagrangian around $Q(s_{0},t)$ as such: \begin{equation} \begin{split} L[Q(s,t), \dot{Q}(s,t),t] &= L[Q(s_{0},t), \dot{Q}(s,t),t] + \int ds' \, [Q(s,t)-Q(s_{0},t)] \frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t)} \Bigg |_{Q(s_{0},t)} \\ &+ \frac{1}{2} \int ds' \int ds'' \, [Q(s,t)-Q(s_{0},t)]^{2} \frac{\delta ^{2} L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t) \, \delta Q(s'',t)} \Bigg |_{Q(s_{0},t)} + \ldots \end{split} \end{equation} Similarly we may expand around $\dot{Q}(s_{0},t)$ for the first term: \begin{equation} \begin{split} &L[Q(s,t), \dot{Q}(s,t),t] = L[Q(s_{0},t), \dot{Q}(s_{0},t),t] \\ +\int ds' &\, \left \{ [Q(s,t)-Q(s_{0},t)] \frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t)} \Bigg |_{Q(s_{0},t)} + [\dot{Q}(s,t)-\dot{Q}(s_{0},t)] \frac{\delta L[Q(s,t), \dot{Q}(s,t),t]}{\delta \dot{Q}(s',t)} \Bigg |_{\dot{Q}(s_{0},t)} \right \} \\ + &\frac{1}{2} \int ds' \int ds'' \, \Bigg \{ [Q(s,t)-Q(s_{0},t)]^{2} \frac{\delta ^{2} L[Q(s,t), \dot{Q}(s,t),t]}{\delta Q(s',t) \, \delta Q(s'',t)} \Bigg |_{Q(s_{0},t)} \\ & \quad \quad \quad \quad \quad \quad + [\dot{Q}(s,t)-\dot{Q}(s_{0},t)]^{2} \frac{\delta ^{2} L[Q(s,t), \dot{Q}(s,t),t]}{\delta \dot{Q}(s',t) \, \delta \dot{Q}(s'',t)} \Bigg |_{\dot{Q}(s_{0},t)} \Bigg \} + \ldots \end{split} \end{equation} Since $Q(s_{0},t) = q(t), \, \dot{Q}(s_{0},t) = \dot{q}(t)$ and the Lagrangian is invariant under the transformation: \begin{equation} L[Q(s,t), \dot{Q}(s,t),t] = L[q(t), \dot{q}(t),t] \, , \quad \forall \, s,t \end{equation} the first term of the Taylor expansion is equal to the Lagrangian on the lhs. It follows that the infinitely many remaining terms of the Taylor expansion(s) must vanish. The two cases which ensure this are $Q(s,t) = Q(s_{0},t), \, \dot{Q}(s,t) = \dot{Q}(s_{0},t)$, i.e. the trivial case in which we transform $q(t)$ into itself, and the case of the first-order functional derivatives vanishing: \begin{equation} \frac{\delta L[Q(s,t),\dot{Q}(s,t),t]}{\delta Q(s',t)} \Bigg |_{Q(s_{0},t)} = \frac{\delta L[Q(s,t),\dot{Q}(s,t),t]}{\delta \dot{Q}(s',t)} \Bigg |_{\dot{Q}(s_{0},t)} = 0 \end{equation} Discarding the trivial case, by virtue of the third equation we find that indeed: \begin{equation} \frac{dL}{ds} \Bigg|_{s_{0}} = 0 \end{equation} In your case you have $s_{0} = 0$, but it could be anything, depending on the particular transformation at hand. Hence the choice of $s=0$ for the derivative of the Lagrangian is "intuitive" in the sense that you implicitly need to Taylor expand $L$ around the functions at that specific value. That makes the $s$-independent term of the expansion the same as the Lagrangian before the transformation, which leads to the conclusions discussed above.
{ "domain": "physics.stackexchange", "id": 90725, "tags": "classical-mechanics, lagrangian-formalism, symmetry, definition, noethers-theorem" }
Column operations in LDPC generator matrix
Question: I'm trying to produce a generator matrix from a starting low density parity check matrix. There are lots of references on the topic (including this Signal Processing Stack Overflow answer here). They make sense, with the exception of one problem I'm having. My resource is the book Error Correction Coding: Mathematical Methods and Algorithms by Todd K. Moon. In it, there is a section that describes this that I will replicate below, and then highlight the part I'm stuck on. For a code specified by a parity check matrix $A$, it is expedient for encoding purposes to determine the corresponding generator matrix $G$. A systematic generator matrix may be found as follows. Using Gaussian elimination with column pivoting as necessary (with binary arithmetic) determine an $M \times M$ matrix $A_p^{-1}$ so that $$H = A_p^{-1} A = [I \quad A_2].$$ (If such a matrix $A_p^{-1}$ does not exist, then $A$ is rank deficient, $r = \mathrm{rank}(A) < M$. In this case, form $H$ by truncating the linearly dependent rows from $A_p^{-1} A$. The corresponding code has $R = K/N > (N - M)/N$, so it is a higher rate code than the dimensions of $A$ would suggest.) Having found $H$, form $$G=\begin{bmatrix}A_2 \\ I\end{bmatrix}.$$ Then $H G = 0$, so $A_p H G = A G = 0$, so $G$ is a generator matrix for $A$. While $A$ may be sparse (as discussed below), neither the systematic generator $G$ nor $H$ is necessarily sparse. Here's the problem I'm running into: I start with $A$ and do Gauss-Jordan elimination in $\mathrm{GF}(2)$ to get it into reduced row echelon form. Sometimes the matrix I've been given (I'm not designing it, I have to use what I'm given) can't be put into that form with solely row operations. It requires actually swapping the order of some columns (is that what he means by "column pivoting" in the reference above?). Everything in me screams that's wrong, but it turns out if I just do the same thing on the decoding side it works out.
But then I'm left with the conundrum that when I go to decode something encoded with this parity check matrix I have to know how the matrix columns were reordered on the transmit side. Basically I have to re-do the generator matrix, which is by far the slowest part of my process (I'm going to make some improvements there though, using the QC-LDPC structure of the parity matrix). It seems bizarre to me that a designed system would put in a parity check matrix that requires column reordering to work. Is there something I'm missing? For what it's worth I also have the example matlab code from the aforementioned book and this is what he does (column reordering and then re-applying that re-ordering on the decode side). Answer: Everything in me screams that's wrong, but it turns out if I just do the same thing on the decoding side it works out. Why would it be wrong? That just means you reorder some bits. On the other end, when you revert that reordering, that's an operation that changes nothing about the code itself. In fact, your system almost certainly has an interleaver anyway, so such reordering operations are absolutely integral to how it works! the conundrum that when I go to decode something encoded with this parity check matrix I have to know how the matrix columns were reordered on the transmit side. What's surprising about that? Your receiver needs to have a decoder matching the encoder. Don't forget that swapping columns is just a permutation matrix that you apply to your data. Done! It seems bizarre to me that a designed system would put in a parity check matrix that requires column reordering to work. Why? You say that, but you don't back it up with any argument. In fact, many coding-theory proofs are based on random codes... Again, think of your reordering operations as permutation matrices that you just use after / before applying the parity check / generator matrix.
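To make the answer's point concrete: the column reordering found during Gaussian elimination is just a permutation, and undoing it at the receiver is a single index pass, not a re-derivation of the generator matrix. A minimal sketch (the permutation array and codeword length below are illustrative, not from the book):

```c
#include <stddef.h>

/* Forward permutation: out[i] = in[perm[i]], i.e. column i of the
 * rearranged matrix came from column perm[i] of the original. */
static void permute(const int *in, int *out, const size_t *perm, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[i] = in[perm[i]];
}

/* Inverse permutation, applied at the receiver to restore the
 * original bit order before (or after) decoding. */
static void unpermute(const int *in, int *out, const size_t *perm, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        out[perm[i]] = in[i];
}
```

So the "conundrum" reduces to storing one small integer array alongside the code: compute the permutation once when you derive $G$, ship it to (or hard-code it in) the receiver, and apply the inverse pass per codeword.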
{ "domain": "dsp.stackexchange", "id": 8383, "tags": "forward-error-correction, ldpc" }
Why do large cations stabilise large anions and small cations stabilise small anions?
Question: I see this trend in enthalpy-of-decomposition data, though I struggle to find a decent explanation for why. Answer: For more detailed info, search for "cation-anion radius ratio" or "Pauling's rules". Below is my review of this idea. The ratio is $R_{C}/R_{A}$ (C = cation, A = anion). A cation that is too small attracts the anions in such a way that they come too close to each other, and anion-anion repulsion comes into play. This happens when the ratio is below 0.155, so a larger anion is better matched by a larger cation. The ratio lies between 0.155 and 1 for stable compounds, and it also determines the coordination number and the type of void! (Check the picture below; recall that since ratios below 0.155 give unstable compounds, the "example" row there is empty for that range.)
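The threshold structure of this rule can be summarised in a small lookup. The cut-offs below are the standard radius-ratio limits from Pauling's first rule; the function itself is just an illustrative sketch, not from the answer:

```c
/* Coordination number predicted by the radius-ratio rule.
 * rc = cation radius, ra = anion radius (same units).
 * Returns 0 when the cation is too small to touch its anions
 * (ratio < 0.155), i.e. the arrangement is unstable. */
static int coordination_number(double rc, double ra)
{
    double ratio = rc / ra;
    if (ratio < 0.155) return 0;  /* unstable: anion-anion contact */
    if (ratio < 0.225) return 3;  /* trigonal planar */
    if (ratio < 0.414) return 4;  /* tetrahedral */
    if (ratio < 0.732) return 6;  /* octahedral */
    return 8;                     /* cubic, for ratios up to 1 */
}
```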
{ "domain": "chemistry.stackexchange", "id": 11035, "tags": "inorganic-chemistry" }
C function to move and collapse cells in 2048 game
Question: I'm implementing a clone of 2048 in C with SDL, and I have the following function for performing a movement on a column vector/array:

    /**
     * collapseVector: Collapse a vector "2048" style, flush the cells in one
     * direction, joining them if legal.
     * @param _vector 1-D array to be collapsed
     * @param _score Score counter
     * @param invert Whether or not to inverse-flush vector. Useful for
     *               Up(false) vs. Down(true), and so on
     */
    void collapseVector(int _vector[MAX_BOARD_POS], int **_score, bool invert)
    {
        int idx;
        int buf[MAX_BOARD_POS] = {0};
        if (invert) {
            idx = MAX_BOARD_POS - 1;
            for (int i = MAX_BOARD_POS - 1; (MAX_BOARD_POS - i) < MAX_BOARD_POS + 1; --i) {
                if (!_vector[i]) {
                    continue;
                }
                if (!buf[idx]) {
                    buf[idx] = _vector[i];
                } else if (_vector[i] == buf[idx]) {
                    ++buf[idx];
                    **_score += 1 << buf[idx];
                } else {
                    buf[--idx] = _vector[i];
                }
            }
        } else {
            idx = 0;
            for (int i = 0; i < MAX_BOARD_POS; ++i) {
                // If it's an empty cell (0) ignore it. The buf vector is zero-filled anyway
                if (!_vector[i]) {
                    continue;
                }
                // Special case for filling in the first cell of buffer
                if (!buf[idx]) {
                    buf[idx] = _vector[i];
                } else if (_vector[i] == buf[idx]) {
                    // If we have identical neighbouring cells join them
                    ++buf[idx];
                    // Add 2^buf[idx] to our score counter.
                    **_score += 1 << buf[idx];
                } else {
                    // Flush cells to the end of vector
                    buf[++idx] = _vector[i];
                }
            }
        }
        memcpy(_vector, buf, MAX_BOARD_POS * sizeof(*_vector));
    }

This code works perfectly, but it's long and repetitive. How can I improve this? The two for loops in here are very similar and I wonder whether I could join them perhaps.
Answer:

Complicated loop condition

I had to stare for a long time at this loop termination condition from your first loop:

    (MAX_BOARD_POS - i) < MAX_BOARD_POS + 1

Eventually, I turned to mathematics to figure it out:

    Subtract MAX_BOARD_POS from both sides:  -i < 1
    Add i to both sides:                      0 < i + 1
    Subtract 1 from both sides:              -1 < i
    Reverse the inequality:                   i > -1
    Use >= instead of >:                      i >= 0

So in other words, your loop was actually the simple downward loop of:

    for (int i = MAX_BOARD_POS - 1; i >= 0; --i)

Now it made perfect sense!

Combining the loops

So one loop goes from N-1 to 0, and the other goes from 0 to N-1. One way of combining the loops is to make three variables: start, end, and delta, where end is one past the end of the loop. That way, you can do:

    for (i = start; i != end; i += delta) { ... }

Now all you have to do is initialize those three variables to the correct values, depending on whether invert is true or false. There are a couple of other places that need adjustment depending on invert as well, but those are easy to change.

Sample rewrite

Here is how I would have written your function using only one loop:

    void collapseVector(int _vector[MAX_BOARD_POS], int **_score, bool invert)
    {
        int buf[MAX_BOARD_POS] = {0};
        int start = invert ? MAX_BOARD_POS - 1 : 0;
        int end   = invert ? -1 : MAX_BOARD_POS;
        int delta = invert ? -1 : 1;
        int idx   = start;

        for (int i = start; i != end; i += delta) {
            if (!_vector[i]) {
                // If it's an empty cell (0) ignore it. The buf vector is
                // zero-filled anyway.
                continue;
            }
            if (!buf[idx]) {
                // Special case for filling in the first cell of buffer.
                buf[idx] = _vector[i];
            } else if (_vector[i] == buf[idx]) {
                // If we have identical neighbouring cells join them.
                ++buf[idx];
                // Add 2^buf[idx] to our score counter.
                **_score += 1 << buf[idx];
            } else {
                // Flush cells to the end of vector.
                idx += delta;
                buf[idx] = _vector[i];
            }
        }
        memcpy(_vector, buf, MAX_BOARD_POS * sizeof(*_vector));
    }
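The combined-loop version is easy to sanity-check with a couple of concrete vectors. Here is a self-contained sketch of the same algorithm, assuming MAX_BOARD_POS is 4 and taking the score by a plain int pointer (the double pointer in the original adds nothing to the demonstration). Note that cells store exponents, so joining two exponent-1 tiles (value 2 each) yields exponent 2 and scores 1 << 2 = 4:

```c
#include <stdbool.h>
#include <string.h>

#define MAX_BOARD_POS 4  /* assumed board size for this sketch */

static void collapse(int v[MAX_BOARD_POS], int *score, bool invert)
{
    int buf[MAX_BOARD_POS] = {0};
    int start = invert ? MAX_BOARD_POS - 1 : 0;
    int end   = invert ? -1 : MAX_BOARD_POS;
    int delta = invert ? -1 : 1;
    int idx   = start;

    for (int i = start; i != end; i += delta) {
        if (!v[i]) continue;          /* skip empty cells */
        if (!buf[idx]) {
            buf[idx] = v[i];          /* first cell lands in the buffer */
        } else if (v[i] == buf[idx]) {
            ++buf[idx];               /* join equal neighbours */
            *score += 1 << buf[idx];  /* score the merged tile's value */
        } else {
            idx += delta;             /* flush toward the chosen end */
            buf[idx] = v[i];
        }
    }
    memcpy(v, buf, sizeof buf);
}
```

For example, collapsing {1, 1, 3, 0} leftwards merges the two 1s into a 2 and packs the 3 behind it, while {0, 3, 1, 1} collapsed rightwards (invert = true) mirrors the result.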
{ "domain": "codereview.stackexchange", "id": 24970, "tags": "c, sdl, 2048" }
Is there a tool to get the quantum circuit corresponding to a sparse matrix?
Question: If I know a sparse matrix, is there any tool that allows me to get the corresponding quantum circuit directly? If not, what should I do? For example, I want to try Hamiltonian simulation and I have the sparse matrix. How can I get the corresponding quantum circuit? By code, by some software, or does it require training? I'm new to this field and would appreciate your help. Answer: Concerning Hamiltonian simulation, you can find a very useful guide in this question. The general approach to quantum circuit construction is explained in the paper Elementary gates for quantum computation. The paper Optimal Quantum Circuits for General Two-Qubit Gates can also be helpful.
{ "domain": "quantumcomputing.stackexchange", "id": 1752, "tags": "quantum-gate, programming, circuit-construction, hamiltonian-simulation" }
For potential energy problems, is it enough to state the potential energy at infinity?
Question: I've seen two different ways of stating the potential energy of an object (specifically for electric potential). For some problems, I've seen it defined as at $d = 0$, $PE_{object} = 0$. When you lift an object up, you increase its potential energy. In other problems, I've seen it defined as at $d = \infty$, $PE_{object} = 0$. In this case, how would I find the potential energy at some finite point in the field? I understand this is probably a basic concept, but I'd appreciate some help understanding it. Answer: These two different ways are actually equivalent. Let's consider a situation when there is an electric field of some charge $Q$ and we choose infinity as a zero level of potential field. In this case the formula of potential energy of some other charge $q$ in this potential field is: $$E = k * \frac{Q * q}{r}$$ I guess this answers your question - this is the formula which gives you the potential energy. But you can choose any point as zero level of potential energy! For example you can choose that potential energy is zero at some distance $r_0$ from the $Q$ charge. The formula for potential energy at some distance $r$ will be a little more complicated: $$E = k * \frac{Q * q}{r} - k * \frac{Q * q}{r_0}$$ You can even choose to define potential energy as $$E = k * \frac{Q * q}{r} - C$$ and choose any $C$ - even such that potential energy is not zero anywhere! But whatever formula you choose, when you "lift" your body (that is increase the distance between $Q$ and $q$ from $r$ to $r+dr$) the difference of potential energy would be the same! In our case if both $Q$ and $q$ are positive the energy will decrease. If you need to calculate how the speed of the body would change, or how much gasoline you need to move the bodies - the answers would be the same whatever formula you choose. Because only the potential energy difference matters. You only need to decide which one of the formulas you will use and use the chosen one for all the calculations. 
But which one do you need to choose? The first one looks simplest, but for some problems it can be more convenient to choose one of the other versions. It depends on the problem you need to solve.
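The claim that the constant never matters is easy to verify numerically. A quick sketch (the charges and distances below are arbitrary illustrative values): the difference in potential energy between two points comes out the same for any choice of $C$:

```c
#include <math.h>

/* E(r) = k*Q*q/r - C: the constant C only shifts the zero level. */
static double potential_energy(double r, double C)
{
    const double k = 8.99e9;          /* Coulomb constant, N*m^2/C^2 */
    const double Q = 1e-6, q = 1e-6;  /* two illustrative 1 uC charges */
    return k * Q * q / r - C;
}
```

Computing potential_energy(0.5, C) - potential_energy(2.0, C) gives the same number whether C is 0, 7, or anything else, because C cancels in the subtraction.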
{ "domain": "physics.stackexchange", "id": 46390, "tags": "electric-fields, potential" }