Is this game EXPSPACE-complete?
Question: Let $M$ be a polynomial-time deterministic machine that can ask questions to some oracle $A$. Initially $A$ is empty, but this can be changed in the game described below. Let $x$ be some string. Consider the following game between Alice and Bob. Initially, Alice and Bob have $m_A$ and $m_B$ dollars respectively. Alice wants $M^A(x)=1$ and Bob wants $M^A(x)=0$. At every step of the game a player can add one string to $A$; this costs one dollar. A player can also skip his or her turn. The game ends when both players have spent all their money, or when some player skips a turn while in a losing position (as determined by the current value of $M^A(x)$). Question: is the problem of determining the winner of this game, for given $M, x, m_A, m_B$, EXPSPACE-complete? Note that $M$ can query $A$ only on strings of polynomial length, so there is no point in Alice or Bob adding longer strings to $A$. Hence this problem is in EXPSPACE. Answer: I don't have an exact characterization, but it's unlikely this problem is EXPSPACE-complete. Suppose $M^{\Sigma^*}(x)$ accepts, and let $S$ be the polynomial-size set of strings queried by this machine. If I understand the game right, Alice can win by playing every string in $S$. The only way to prevent this is if $m_A$ is polynomially bounded, but that would put the game somewhere inside the exponential-time hierarchy (or lower).
{ "domain": "cstheory.stackexchange", "id": 4497, "tags": "cc.complexity-theory, complexity-classes, gt.game-theory" }
Complexity of numerical derivation for general nonlinear functions
Question: In classical optimization literature, numerical derivation of functions is often mentioned as a computationally expensive step. For example, Quasi-Newton methods are presented as a way to avoid the computation of first and/or second derivatives when these are "too expensive" to compute. What are the state-of-the-art approaches to computing derivatives, and what is their time complexity? If this is heavily problem-dependent, I am particularly interested in the computation of first- and second-order derivatives for Nonlinear Least Squares problems, specifically the part concerning first-order derivatives (Jacobians). Answer: The time to compute first-order derivatives (gradients) and second-order derivatives (Hessians) depends heavily on the particular function. In general, I am aware of three approaches: Analytical: With pencil and paper, figure out an analytical expression for the derivatives using the rules of calculus, then implement those expressions. Here the running time depends entirely on how easy or hard it is to compute those expressions. In some cases, you may be able to use a computer algebra system to help with this calculation. Automatic differentiation: Use the computer to compute the derivatives for you, given a program to compute the function itself. See, e.g., https://en.wikipedia.org/wiki/Automatic_differentiation. This will construct a program that evaluates the derivative at a value of your choice. The running time, and whether this is possible at all, depends entirely on the program you are differentiating. Generally speaking, if you are differentiating a function $f$ that can be computed in $O(n)$ time as a straight-line expression (loop-free code that uses only elementary operations and conditionals, but no arrays or random-access memory lookups), then automatic differentiation can construct a program that evaluates the derivative at an arbitrary input in $O(n)$ time. 
This is true even if the function is over multiple variables and you want to compute the gradient. The Hessian is slower: if $f$ is a function of one variable, you can evaluate the Hessian at an arbitrary point in $O(n)$ time, but if $f$ is a function of $m$ variables, it will take $O(mn)$ time. Numerical differentiation: You can use the method of finite differences to approximate the gradient, given only a black box that can compute the function (you don't even need the code of that black box). See https://en.wikipedia.org/wiki/Numerical_differentiation. Here, if you have a function $f:\mathbb{R} \to \mathbb{R}$ of one variable that can be evaluated in $O(n)$ time using some black box, you can approximate the derivative or second derivative in $O(n)$ time. If $f$ is a function of $m$ variables, you can approximate the gradient at an arbitrary point in $O(mn)$ time, and estimate the Hessian in $O(m^2n)$ time. Finally, one more method that may be acceptable in some settings is to evaluate the function $f$ at several points, fit a model $\hat{f}$ to those points, compute or estimate the first- or second-order derivatives of $\hat{f}$ using any of the above methods, and use those as estimates of the corresponding derivatives of $f$. Saying you are interested in "nonlinear least squares" does not narrow things down, because "nonlinear" allows a completely arbitrary function, as long as it is not linear.
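The finite-difference costs quoted above are easy to see in code. Here is a minimal sketch (function names and step sizes are illustrative, not from any particular library) of central-difference approximations for the gradient and Hessian:

```python
# Minimal sketch of numerical differentiation by central finite differences,
# treating f as a black box. The step sizes h are illustrative choices.

def gradient_fd(f, x, h=1e-6):
    """Approximate the gradient of f: R^m -> R at x.
    Costs 2*m evaluations of f, i.e. O(m*n) if f runs in O(n)."""
    m = len(x)
    grad = [0.0] * m
    for i in range(m):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)
    return grad

def hessian_fd(f, x, h=1e-4):
    """Approximate the Hessian of f at x via second-order central differences.
    Costs O(m^2) evaluations, i.e. O(m^2 * n) total."""
    m = len(x)
    H = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Example black box: f(x, y) = x^2 + 3*x*y, exact gradient (2x + 3y, 3x)
f = lambda v: v[0] ** 2 + 3 * v[0] * v[1]
g = gradient_fd(f, [1.0, 2.0])
```

In practice the step size $h$ trades truncation error against floating-point cancellation, which is why library implementations pick it relative to machine precision and the scale of $x$.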
{ "domain": "cs.stackexchange", "id": 16240, "tags": "complexity-theory, numerical-algorithms" }
Have two hurricanes ever merged? And what was the result?
Question: I was just reading about how Hurricane Ethel could have merged with Hurricane Dora in 1964. Has such a merge ever happened before in history? If so, what was the result? Would the storms become twice as powerful? Or would they disrupt and dissipate each other? They don't have to be hurricanes or typhoons per se, just large storms. I would think the low-pressure regions of two storms would tend to attract each other if they were nearby, but it's apparently rare or unheard of, because a quick Google search showed nothing. Answer: Yes, two hurricanes/tropical cyclones/typhoons can merge with each other, and the phenomenon is known as the Fujiwhara effect. The National Weather Service defines the Fujiwhara effect as "binary interaction where tropical cyclones within a certain distance (300-375 nautical miles depending on the size of the cyclones) of each other begin to rotate about a common midpoint". What really happens is that the centers of both systems begin to orbit in a counterclockwise direction about a midpoint that is determined by the relative mass and intensity of the cyclones. Eventually the smaller cyclone may merge into the larger cyclone. There are several examples of the Fujiwhara effect; one would be Hurricane Connie and Hurricane Diane back in 1955. Shimokawa et al. discuss the various kinds of interactions that can take place among typhoons (note that the Fujiwhara effect is not restricted to two systems): complete merger, partial merger, complete straining out, partial straining out, and elastic straining out. Complete straining and complete merger interactions lead to the destruction of one of the vortices. Partial merger and partial straining lead to partial destruction of one vortex, and elastic straining is an interaction in which both vortices survive with their initial circulation. 
Partial merger and partial straining out have received less attention in the literature on binary tropical cyclone interactions, as the interactions are extremely complex. Prieto et al. claim that during a partial merger repeated mass exchanges occur between the vortices; as these are nonlinear effects, a quantification is only possible by direct numerical integration with precise initial conditions. The period of orbit may be as small as one day, while other pairs, such as Cyclone Kathy and Cyclone Marie, orbited for a period of 5 days prior to merging into each other, as pointed out by Lander et al. If the period of orbit is longer, then there is a greater probability of a merger. Region-wise, binary cyclones are more common in the Western North Pacific than the North Atlantic, as pointed out by Dong et al. ("Relative Motion of Binary Tropical Cyclones"). Regarding the predictability of the track of binary cyclones, Dong et al. state that prediction of the steering forces of even a single tropical cyclone is replete with numerical forecasting uncertainties, and the problem is accentuated by the presence of another tropical cyclone in close proximity. Those who followed the progress of Hurricane Sandy in late October 2012 (an instance of the Fujiwhara effect, but in this case a tropical cyclone merged with an extratropical storm) will remember that the ECMWF model correctly predicted the landfall location.
{ "domain": "earthscience.stackexchange", "id": 598, "tags": "meteorology, tropical-cyclone" }
What happens to matter in extremely high gravity?
Question: Though I am a software engineer, I have a bit of interest in the sciences as well. I was reading about black holes and wondered whether there are any existing research results on how matter is affected by extremely high gravity. I tried searching, but the long equations really didn't help me. Can someone please put it in layman's terms? More specifically: why doesn't the infinite gravity tear atoms down into quarks, and does it have enough potential to affect something at the Planck scale? Answer: Extreme gravity essentially equates to extreme pressure. We see a progression in stellar evolution. The high pressure from the huge gravitational pull of a star is at first counteracted by electromagnetic/thermal interactions between gas particles. However, at a certain point (with enough gravitational pull) these interactions are not enough and the gravitational force overwhelms the electromagnetic forces. Positively charged nuclei collide with positively charged nuclei and hydrogen fusion occurs. The star is supported from further collapse via electron degeneracy pressure, since two electrons cannot occupy the same quantum state. As elements fuse into heavier elements, fusion becomes more and more difficult and requires more energy. Eventually the nuclear forces are not sufficient and the star collapses further, allowing carbon to fuse. This carbon fusion is much more energetic than the preceding fusion and the star explodes. If there is sufficient mass, a supernova occurs and the remnant could be a neutron star (if the star exceeds the Chandrasekhar limit of about 1.44 solar masses) or a black hole (if the star exceeds the Tolman-Oppenheimer-Volkoff limit of about 3 solar masses). In a neutron star, Fermi degeneracy pressure keeps the particles from collapsing down to a gravitational singularity. 
Essentially there is enough pressure to force electrons into protons to form neutrons (inverse beta decay), and the neutrons are only stopped from collapsing into each other by neutron degeneracy pressure. With a large enough mass, though, even this is not enough to stop the collapse, either to a theoretical quark star or to a black hole. So essentially we see that as pressure increases, the various forces that keep matter matter-like get overcome: first electromagnetic interactions, then electron degeneracy pressure, then neutron degeneracy pressure, and finally a collapse into a singularity/black hole (or something like that). Edit: In response to the original poster's question, the particles can theoretically condense further into quark-degenerate matter. The specifics at this level get fuzzier, since the strong force is difficult to model accurately due to asymptotic freedom. Unless there are particles that make up quarks, this is the lowest level of possible degeneracy.
{ "domain": "physics.stackexchange", "id": 3861, "tags": "gravity, black-holes, quantum-gravity, singularities" }
Will increasing the threshold always increase precision?
Question: Here, precision at threshold 0.85 > precision at threshold 0.90. Shouldn't it be the other way round? Increasing the threshold will reduce false positives, so shouldn't precision be greater than before? Answer: "Here, precision at threshold 0.85 > precision at threshold 0.90. Shouldn't it be the other way round? Increasing the threshold will reduce false positives, so shouldn't precision be greater than before?" Precision is $\frac{\text{TP}}{\text{TP}+\text{FP}}$. Both $\text{TP}$ and $\text{FP}$ are reduced when you increase the threshold. If both decrease in proportion to the current precision (i.e. they are spread evenly at each confidence value), then precision will remain the same. Most models on most datasets will tend to increase precision as the threshold increases, at least initially (e.g. moving from 0.5 to 0.6), because false positives are commonly uncertain edge cases with low confidence; that is, false positives tend to occur more frequently at low confidence, so increasing the threshold excludes a higher ratio of false positives than true positives relative to the current precision. However, there is no guarantee of that. The value of precision will vary in practice depending on what the model predicted for each example. If you have a cluster of highly confident false positives, they can cause precision to drop as the threshold grows, until they get excluded. The most extreme example would be where the most confident classification is incorrect, in which case the highest possible threshold will score zero precision.
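The "cluster of confident false positives" case is easy to reproduce with toy numbers (the scores and labels below are made up purely for illustration):

```python
# Toy demonstration that precision need not increase with the threshold.
def precision_at(scores, labels, t):
    """Precision = TP / (TP + FP) over predictions with score >= t."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    return tp / (tp + fp) if tp + fp else 0.0

# One highly confident false positive (score 0.92, label 0) sits above 0.90.
scores = [0.95, 0.92, 0.88, 0.87, 0.70]
labels = [1,    0,    1,    1,    0]

p_85 = precision_at(scores, labels, 0.85)  # 3 TP, 1 FP -> 0.75
p_90 = precision_at(scores, labels, 0.90)  # 1 TP, 1 FP -> 0.50
```

Raising the threshold from 0.85 to 0.90 here drops precision from 0.75 to 0.50, exactly the behaviour the question observed.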
{ "domain": "datascience.stackexchange", "id": 9621, "tags": "machine-learning, deep-learning, classification, statistics, data-science-model" }
Reduce variant of Vertex Cover to original decision-version Vertex cover problem
Question: Consider the following variation (let us call it Q) on the Vertex Cover problem: given a graph $G$ and a number $k$, we are asked whether there is a $k$-cover of $G$ that is also a minimum cover. My question is: how may one prove that Q can be reduced to the general Vertex Cover problem? And, more generally, what is the approach to such a reduction, from "is there a $k$..." to "is $k$ the minimum/maximum?"? I have a hunch on the methodology but am not sure and would appreciate a wiser opinion than my own on the subject. My hunch is the following. First, take into consideration the following fact: if there is a $(k-1)$-cover of $G$, then we surely have a $k$-cover of $G$, just by adding an arbitrary node to the $(k-1)$-cover (the cover property holds if we add nodes). Thus, we can reformulate Q this way: is there a $k$-cover of $G$ such that there is no $(k-1)$-cover of the graph? From this reformulation it is clear that an instance of Q reduces to 2 instances of the original decision version of Vertex Cover. Answer: Your reduction works. A more efficient version would use binary search to find the optimal value through a series of queries "Is there a vertex cover of size $k$?" However, note that this is a polynomial-time Turing reduction (also known as a Cook reduction): you're solving the vertex cover optimization problem by making multiple queries to an oracle for the decision version. On the other hand, NP-completeness is defined in terms of polynomial-time many-one reductions (a.k.a. Karp reductions): there, you're only allowed to make one query to the decision version. I'm not aware of a many-one reduction between these two problems. Cook reductions have the advantage that they correspond more closely to the idea of "If I have an efficient algorithm for problem A and a reduction from B to A, I have an efficient algorithm for B, too." On the other hand, Karp reductions allow for finer-grained distinctions in complexity theory. 
For example, there are Cook reductions between NP-complete problems and coNP-complete problems, but a Karp reduction between an NP-complete problem and a coNP-complete problem would prove that NP = coNP.
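The binary-search Cook reduction described in the answer can be sketched as follows; the brute-force `has_cover` is only a stand-in for the NP decision oracle, included to make the example self-contained:

```python
# Sketch of the Cook reduction: find the minimum vertex cover size by binary
# search over a decision oracle "is there a vertex cover of size <= k?".
from itertools import combinations

def has_cover(edges, vertices, k):
    """Decision oracle (brute force here): a cover of size <= k exists?"""
    if k < 0:
        return False
    return any(all(u in c or v in c for u, v in edges)
               for c in map(set, combinations(vertices, k)))

def min_cover_size(edges, vertices):
    lo, hi = 0, len(vertices)        # the answer lies in [0, |V|]
    while lo < hi:
        mid = (lo + hi) // 2
        if has_cover(edges, vertices, mid):
            hi = mid                 # a cover of size mid exists; try smaller
        else:
            lo = mid + 1             # no cover of size mid; need more vertices
    return lo                        # O(log |V|) oracle calls in total

# The reformulated Q ("is k the minimum?") needs just two decision queries:
def is_exact_minimum(edges, vertices, k):
    return has_cover(edges, vertices, k) and not has_cover(edges, vertices, k - 1)
```

`min_cover_size` makes $O(\log |V|)$ oracle calls, while `is_exact_minimum` is exactly the two-query reformulation from the question.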
{ "domain": "cs.stackexchange", "id": 2148, "tags": "complexity-theory, np-complete" }
Send velocity commands to the wheels of a pioneer3at in the standalone version of Gazebo
Question: Hi all, I installed the standalone version of Gazebo and put a pioneer3at robot with a laser scanner in the environment. I can now read the laser data in my code, but I don't know how to send commands to the robot to move it from my code. I am new to Gazebo and I want to test my navigation algorithm on a simulated pioneer3at robot. I don't want to use ROS; I just want to work with the standalone version of Gazebo. Can anyone help me? Originally posted by Vahid on Gazebo Answers with karma: 91 on 2013-05-10 Post score: 0 Answer: You can try to use the SkidSteerDrivePlugin to drive a pioneer3at. Looking at the source code, it expects you to publish a pose msg to the ~/your_model_name_here/vel_cmd topic. Originally posted by iche033 with karma: 1018 on 2013-05-11 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by SL Remy on 2013-05-15: Sending a Pose msg to a vel_cmd topic just seems wrong.. Comment by Vahid on 2013-05-16: Why is it wrong? I want to test my navigation algorithm, so I need to be able to control the robot; I think it is correct. If possible, please explain what is wrong with it. Comment by SL Remy on 2013-05-16: A Twist message seems more appropriate for a "velocity command" topic.. maybe these messages don't exist as yet? Comment by Vahid on 2013-05-24: Yes, you are right; the Twist message does not exist, only the Pose message exists for the "velocity command" topic.
{ "domain": "robotics.stackexchange", "id": 3281, "tags": "gazebo, gazebo-plugin" }
Instantaneous transfer of information?
Question: Suppose there is some charge that is not under the influence of anything else. For observation, let us surround this charge with circles of pointers pointing in the direction of its field. If I move this charge a little, the pointers will change their direction to match the new field generated at their point in space. My question is: when I move the charge, will "all" the pointers also move at the same instant? Or will each pointer register the change depending on how far away it is from the charge? Is the transfer of information instantaneous in this case? If not, why? Can there be any other possible scenario where the transfer of information is instantaneous (except for the case of entanglement)? Answer: There is a name for machines which move charges around to cause the field to vary: wireless transmitters. As is well known, the disturbance propagates at $c$. No: if special relativity is correct (and it is extremely well tested) there is no way of transmitting information 'instantaneously'; in fact, there is not even a well-defined notion of what it would mean to do so, since whether two distinct events occur simultaneously or not depends on who you ask. Indeed, because of this observer-dependence of simultaneity, if such a thing were possible it would immediately be possible for someone to send information into their own past. Entanglement is not an exception to this rule. As an aside, note that if it were possible to do this, there is essentially no limit to the amount of money a financial-market trader would pay for a device that did it. Yet no one has ever constructed such a system: this should be a huge hint that none is possible.
{ "domain": "physics.stackexchange", "id": 32173, "tags": "quantum-field-theory, magnetic-fields, electric-fields" }
Topological sort in Java
Question: I am learning algorithms in graphs. I have implemented topological sort using Java. Kindly review my code and provide me with feedback. import java.util.LinkedList; import java.util.Queue; import java.util.Stack; public class TopologicalSortGraph { /** * This Topological Sort implementation takes the example graph in * Version 1: implementation with unweighted * Assumption : Graph is directed */ TopologicalSortGraph Graph = new TopologicalSortGraph(); //public LinkedList<Node> nodes = new LinkedList<Node>(); public static void topologicalSort(Graph graph) { Queue<Node> q = new LinkedList<Node>(); int vertexProcessesCtr = 0; for(Node m : graph.nodes){ if(m.inDegree==0){ ++vertexProcessesCtr; q.add(m); System.out.println(m.data); } } while(!q.isEmpty()) { Node m = q.poll(); //System.out.println(m.data); for(Node child : m.AdjacenctNode){ --child.inDegree; if(child.inDegree==0){ q.add(child); ++vertexProcessesCtr; System.out.println(child.data); } } } if(vertexProcessesCtr > graph.vertices) { System.out.println(); } } public static void main(String[] args) { Graph g= new Graph(); g.vertices=8; Node TEN = new Node("10"); Node ELEVEN = new Node("11"); Node TWO = new Node("2"); Node THREE = new Node("3"); Node FIVE = new Node("5"); Node SEVEN = new Node("7"); Node EIGHT = new Node("8"); Node NINE = new Node("9"); SEVEN.AdjacenctNode.add(ELEVEN); ELEVEN.inDegree++; SEVEN.AdjacenctNode.add(EIGHT); EIGHT.inDegree++; FIVE.AdjacenctNode.add(ELEVEN); ELEVEN.inDegree++; THREE.AdjacenctNode.add(EIGHT); EIGHT.inDegree++; THREE.AdjacenctNode.add(TEN); TEN.inDegree++; ELEVEN.AdjacenctNode.add(TEN); TEN.inDegree++; ELEVEN.AdjacenctNode.add(TWO); TWO.inDegree++; ELEVEN.AdjacenctNode.add(NINE); NINE.inDegree++; EIGHT.AdjacenctNode.add(NINE); NINE.inDegree++; g.nodes.add(TWO); g.nodes.add(THREE); g.nodes.add(FIVE); g.nodes.add(SEVEN); g.nodes.add(EIGHT); g.nodes.add(NINE); System.out.println("Now calling the topologial sorts"); topologicalSort(g); } } Graph class: class Graph 
{ public int vertices; LinkedList<Node> nodes = new LinkedList<Node>(); } Node Class: class Node { public String data; public int dist; public int inDegree; LinkedList<Node> AdjacenctNode = new LinkedList<Node>( ); public void addAdjNode(final Node Child){ AdjacenctNode.add(Child); Child.inDegree++; } public Node(String data) { super(); this.data = data; } } Answer: class Graph { public int vertices; LinkedList<Node> nodes = new LinkedList<Node>(); } Graph Graph, like other data structures in general, should be declared public, because you want it to be usable outside of the package it is declared in. nodes and vertices should be private so that you can know that they are not changed outside the Graph class. nodes should not be a LinkedList. You should depend on abstractions as much as possible, preferring interfaces such as List over specific implementations like LinkedList. Moreover, the nodes of a graph are not a List, they are a Set: a standard graph cannot have multiple copies of a node. You should prefer a Set to represent a set unless you have a good reason not to. Node All of the above points also apply to Node. Apart from those: AdjacenctNode should be named adjacentNode by Java naming convention. Feel free to remove the parameterless call to super(); although Eclipse adds it by default, it's just noise. TopologicalSortGraph Remove unused code: TopologicalSortGraph Graph = new TopologicalSortGraph(); Always remove commented-out code; if you need to see previous versions of the code, use a version control system: //public LinkedList<Node> nodes = new LinkedList<Node>(); Do not put more than one space between tokens; use your IDE's autoformat to fix formatting after changing a piece of code: public static void topologicalSort(Graph graph) { The snippets if(child.inDegree==0){ q.add(child); ++vertexProcessesCtr; System.out.println(child.data); } and if(m.inDegree==0){ ++vertexProcessesCtr; q.add(m); System.out.println(m.data); } are duplicates. 
They should be extracted to a private method; you are missing some kind of abstraction there. You are changing the internals of an object passed in as a parameter: --child.inDegree; I do not expect my graph to change after I ask to see its nodes printed in topological order. What if I want to print them again? You are also mixing the calculation and the printing of the calculation result; I do not expect to see printlns in a method implementing an algorithm: System.out.println( .... ); What if I want the result to be printed somewhere other than System.out? What if I do not want the result to be printed at all and want it to be used as an intermediate step in a bigger calculation instead? You probably want to return a List<Node> from a topological sort algorithm. List is the standard return type when you are sorting some collection, that is, when the result is a collection whose order is important. You can then print that list as many times as you want or pass it as a parameter to wherever you like. public static void main(String[] args) { You should separate test code from your main code. If your actual class is named TopologicalSortGraph, put your test code in TopologicalSortGraphTest. Use the de facto standard JUnit so that instead of one big main you can have many small tests; you can run any one of them, or all of them, easily from within your IDE or from the command line. You should try to separate the test code into separate source directories or even into separate projects: your implementation code should not need or know about your test code to compile. Another spacing (indentation) problem: Node TEN = new Node("10"); Your code should align well, so that it reads neatly top to bottom and the scopes in it are easily identified. TEN should be ten by Java naming convention. 
Instead of these two lines you should use your addAdjNode method: SEVEN.AdjacenctNode.add(ELEVEN); ELEVEN.inDegree++; Also, access chains like node.AdjacenctNode.add(otherNode) or x.y.z are usually a sign that your encapsulation is not good enough. In this case you are modifying a collection of one class from another class. It's a problem waiting to happen. The same encapsulation problem is present in ELEVEN.inDegree++, too. The root problem here is that adjacency is a property of the graph (remember G = (V, E) from school?) and not of the nodes themselves. Instead of node.addAdjNode(otherNode), you should use a method like graph.addEdge(node, otherNode). The same problem also exists here: g.nodes.add(TWO); You should have a Graph.addNode(node) method instead. Coming back to G = (V, E): it says, if you listen carefully, that you need a set of vertices and a set of edges to have a graph, and that you should not add nodes one by one. Ideally, Graph would have a constructor like public Graph(Set<Node> vertices, Set<Edge> edges).
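Putting the review's main points together (return a list instead of printing, leave the input unmodified, keep the edges in the Graph), a sketch of the same Kahn-style algorithm could look like the following, written in Python rather than Java purely for brevity:

```python
# Kahn's algorithm restructured along the review's lines: edges live in the
# Graph, in-degrees are computed locally so the graph is not mutated, and the
# result is returned rather than printed.
from collections import deque

class Graph:
    def __init__(self, vertices, edges):
        self.vertices = set(vertices)
        self.edges = set(edges)          # set of (u, v) pairs meaning u -> v

    def topological_sort(self):
        indegree = {v: 0 for v in self.vertices}
        for _, v in self.edges:
            indegree[v] += 1
        queue = deque(sorted(v for v in self.vertices if indegree[v] == 0))
        order = []
        while queue:
            u = queue.popleft()
            order.append(u)
            # Scanning all edges per node is O(V*E); a precomputed adjacency
            # map would bring this down to O(V+E). Kept simple for the sketch.
            for a, b in self.edges:
                if a == u:
                    indegree[b] -= 1
                    if indegree[b] == 0:
                        queue.append(b)
        if len(order) != len(self.vertices):
            raise ValueError("graph contains a cycle")
        return order
```

The caller decides what to do with the returned list, so printing, testing, and further computation all stay outside the algorithm.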
{ "domain": "codereview.stackexchange", "id": 6653, "tags": "java, algorithm, graph" }
Why does Avogadro's number have the value it has?
Question: As I was learning about Avogadro's number, I wondered: why is Avogadro's number equal to $6.02214\cdot 10^{23}$? I mean, how did chemists come up with this particular number? Answer: Because it was chosen to be that way. The picture is that at some point in chemical history it was realised that atoms are made up of a nucleus containing protons and neutrons, with electrons, which have little mass in comparison, in ‘shells’ around the outside. (Beware: this picture is a simplification!) At some point it was also found out how heavy these particles are. Things like the Law of Constant Proportions and the Law of Multiple Proportions were also known, the first stating that two compounds react completely only in a fixed mass ratio (with any excess mass of one compound remaining as unreacted residue) and the second stating that if compounds or elements react in different ways, the mass proportions are integer ratios of each other. So obviously the idea of particles which have a mass was correct. Now how do we ‘count atoms’? Similarly to how we ‘count sugar’. A recipe won’t tell you to mix $x$ sugar crystals with butter, eggs and flour to make a cake; it will state a value in grams (or ounces, if you’re in that part of the world). It is common to do that for large amounts where you can’t count the individuals. Applied to atoms, this meant that at one point in history the amount of an element $\ce{Y}$ that weighs $z~\mathrm{g}$ was defined as ‘one count’ — one mole. That definition was later changed; currently, the number of atoms in $12~\mathrm{g}$ of $\ce{^12C}$ is the number of atoms in a mole. Because of these choices, the Avogadro constant had a fixed value and could be calculated; it was determined to be the well-known $6.022 \times 10^{23}$. Had a different choice been made, it would have been a different number. Also, this is due to change: the SI unit system is to be modified at some point in the future. 
From then on, the value of $N_\mathrm{A}$ will be fixed, much like the speed of light in vacuum is fixed. The definition of the mole will then depend on this fixed constant.
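The arithmetic of the ¹²C-based definition can be checked directly: dividing 12 g by the measured mass of a single ¹²C atom (12 atomic mass units) recovers the familiar value.

```python
# Sanity check: one mole = the number of 12C atoms in 12 g of carbon-12.
u = 1.66053906660e-27                # kg, one atomic mass unit (CODATA value)
mass_c12_atom = 12 * u               # mass of a single 12C atom in kg
n_avogadro = 0.012 / mass_c12_atom   # 12 g expressed in kg, divided per atom
# n_avogadro comes out near 6.022e23
```

Had the reference been, say, 16 g of oxygen or 1 g of hydrogen, the same division would have produced a different constant, which is the answer's point.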
{ "domain": "chemistry.stackexchange", "id": 6868, "tags": "mole, units" }
Comparing 2 CSV files
Question: Here's the exercise in brief: Consider the following file: Code: before.csv A; ; B; B; A; H; C; ; D; D; C; G; E; D; F; F; E; H; G; D; ; H; G; ; And a modified version of the file: after.csv A; ; B; B; A; H; C; ; D; D; ; G; E; D; F; F; E; H; G; D; ; K; ; E; The first field of the CSV is a unique identifier of each line. The exercise consists of detecting the changes applied to the file, by comparing before and after. There are 3 types of changes you should detect: ADDED (line is present in after.csv but not in before.csv) REMOVED (line is present in before.csv but not in after.csv) MODIFIED (line is present in both, but second and/or third field are modified) In my example, there are three modifications: ADDED line (K) REMOVED line (H) MODIFIED line (D) And my code: import collections import csv import sys class P_CSV(dict): '''A P_CSV is a dict representation of the csv file: {"id": dict(csvfile)} ''' fieldnames = ["id", "col2", "col3"] def __init__(self, input): map(self.readline, csv.DictReader(input, self.fieldnames, delimiter=";",\ skipinitialspace=True)) def readline(self, line): self[line["id"]] = line def get_elem(self, name): for i in self: if i == name: return self[i] class Change: '''a Change element will be instantiated each time a difference is found''' def __init__(self, *args): self.args=args def echo(self): print "\t".join(self.args) class P_Comparator(collections.Counter): '''This class holds 2 P_CSV objects and counts the number of occurrence of each line.''' def __init__(self, in_pcsv, out_pcsv): self.change_list = [] self.in_pcsv = in_pcsv self.out_pcsv = out_pcsv self.readfile(in_pcsv, 1) self.readfile(out_pcsv, -1) def readfile(self, file, factor): for key in file: self[key] += factor def scan(self): for i in self: if self[i] == -1: self.change_list.append(Change("ADD", i)) elif self[i] == 1: self.change_list.append(Change("DELETE", i)) else: # element exists in two files. 
Check if modified j = J_Comparator(self.in_pcsv.get_elem(i), self.out_pcsv.get_elem(i)) if len(j) > 0: self.change_list += j class J_Comparator(list): '''This class compares the attributes of two lines and return a list of Changes object for every difference''' def __init__(self, indict, outdict): for i in indict: if indict[i] != outdict[i]: self.append(Change("MODIFY", indict["id"], i, indict[i], "BECOMES", outdict[i])) if len(self) == 0: self = None #Main p = P_Comparator(P_CSV(open(sys.argv[1], "rb")), P_CSV(open(sys.argv[2], "rb"))) p.scan() print "{} changes".format(len(p.change_list)) [c.echo() for c in p.change_list] In real life, the code is supposed to compare two very large files (>6500 lines) with many more fields (>10). How can I improve both the style of my programming and the performance of the script? For the record, I'm using Python 2.7. Answer: Implementation The P_CSV trick is a good idea. I don't know if "input" is supposed to be a file name, a string, a file object, and so on (this is Python's fault, but still). Please use a better name and document that it is a file. What does {"id": dict(csvfile)} mean in your docstring? get_elem could be implemented with return self.get(name) (note that dict.get takes its default positionally, not as a keyword). The way you're doing it is misleading, since you're relying on the fact that no return means return None. This means you can either remove get_elem or find a better name explaining that it's just like get except that it returns None instead of throwing an exception. I guess you want to use __str__ in Change, and maybe __repr__ (but not echo). Do you really need Change as it is? Simply store lists in change_list, instead of Changes. readfile? I'd say readcsv since it is no longer a file, but a P_CSV. If you have a more descriptive name of what those files contain, then use that instead. J_Comparator doesn't work as requested in the exercise, since it also says what columns were modified. Setting a list to None is wrong. "No values" is the empty list. 
You can then use self.change_list.extend(j) without needing to worry about the empty list. It's much more elegant. Why P and J for the comparators? Performance Performance is good: you're using a linear algorithm, even though you're going through the files twice. If you're worried about very, very large files that won't fit in memory, you can use the assumption that the files are sorted to advance in both files simultaneously, and make sure to always have the same unique id in both files. I don't think this is needed, 6000 lines is quite small!
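The single-pass idea over sorted files can be sketched like this (a rough illustration, not drop-in code for the exercise: `diff_sorted` and the tuple shapes are my own names, and real use would stream rows from `csv.reader` instead of in-memory lists):

```python
def diff_sorted(before, after):
    """Single-pass diff of two row sequences sorted by their first field (the id).

    Yields ("ADD", id), ("DELETE", id) or ("MODIFY", id, column, old, new).
    Memory use is O(1) in the number of rows, unlike a dict-based approach.
    """
    ib, ia = iter(before), iter(after)
    b, a = next(ib, None), next(ia, None)
    while b is not None or a is not None:
        if a is None or (b is not None and b[0] < a[0]):
            yield ("DELETE", b[0])               # id only in the old file
            b = next(ib, None)
        elif b is None or a[0] < b[0]:
            yield ("ADD", a[0])                  # id only in the new file
            a = next(ia, None)
        else:                                    # same id: compare the fields
            for col, (old, new) in enumerate(zip(b[1:], a[1:]), start=1):
                if old != new:
                    yield ("MODIFY", b[0], col, old, new)
            b, a = next(ib, None), next(ia, None)
```

On the example files this yields the MODIFY for D, the DELETE for H and the ADD for K in one pass.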
{ "domain": "codereview.stackexchange", "id": 3330, "tags": "python, performance, python-2.x, csv" }
Is there a name for the opposite reaction to the dissolution?
Question: When atmospheric $\ce{CO2}$ reacts with water to form $\ce{H2CO3}$, this is called dissolution, isn't it? What term would you use for the opposite reaction when it occurs at atmospheric pressure (e.g. due to temperature change)? For a solid it would be precipitation, but here it is for a gas. Answer: Effervescence if there are bubbles and Degasification more generally speaking. There is also the term outgassing, but that term is broader than just gas coming out of liquid solution.
{ "domain": "chemistry.stackexchange", "id": 2597, "tags": "terminology" }
Multiple NAOs, one roscore
Question: Hello, I would like to use the nao_driver ros stack to control multiple NAO robots at the same time. Does anyone have any experience with this? The nao_driver stack is run locally on a computer using a roscore instance on the same computer. When launching nao_driver it uses an environment variable with the NAO robot's IP, which it then connects to. ROS topics are then available to publish commands to the NAO. My guess is that if I launch multiple instances of the nao_driver (on the same computer) the ros topics will collide, i.e. two topics "/cmd_vel" would be subscribed. My idea for a solution is to modify the nao_driver stack so that it uses different topic names, for example specified by a launch file. I could then publish commands to for example "nao1/cmd_vel" or "nao2/cmd_vel" in order to make both nao1 and nao2 walk. Is there a simpler/nicer way to do this? Did I miss some basic ROS functionality for handling issues like this? Thanks, Isak Tjernberg Originally posted by I.T on ROS Answers with karma: 67 on 2013-02-20 Post score: 0 Answer: That is what namespaces are for. Either set the ROS_NAMESPACE environment variable before using rosrun or make a launchfile that contains a <group> tag for each robot. There you can define the parameter with the IP and, if the driver nodes work properly (i.e. subscribing to "cmd_vel" and not "/cmd_vel"), the topics should now read like /naoX/cmd_vel and so on. If you want to use tf, you should also set the tf_prefix param and be sure that all software you use (and especially the one you develop) uses the tf methods to resolve frame_ids with the tf_prefix. Look for tf_prefix in the ros wiki. Originally posted by Achim with karma: 390 on 2013-02-20 This answer was ACCEPTED on the original site Post score: 2
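A minimal launch file along these lines might look as follows (a sketch only: the `nao_ip` parameter name, the node `type`, and the IP addresses are illustrative, so check the actual nao_driver launch files for the real entry points):

```xml
<launch>
  <group ns="nao1">
    <param name="tf_prefix" value="nao1" />
    <!-- illustrative: pass this robot's IP to the driver as a private param -->
    <node pkg="nao_driver" type="nao_driver.py" name="nao_driver" output="screen">
      <param name="nao_ip" value="192.168.0.101" />
    </node>
  </group>
  <group ns="nao2">
    <param name="tf_prefix" value="nao2" />
    <node pkg="nao_driver" type="nao_driver.py" name="nao_driver" output="screen">
      <param name="nao_ip" value="192.168.0.102" />
    </node>
  </group>
</launch>
```

A driver subscribing to the relative name cmd_vel then resolves it to /nao1/cmd_vel and /nao2/cmd_vel respectively.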
{ "domain": "robotics.stackexchange", "id": 12974, "tags": "ros, nao-driver, nao" }
How best to prepare a uniform superposition over all strings of balanced parentheses?
Question: Consider the set $D_n\subset \{(,)\}^{2n}$ of all Dyck words, i.e. strings of balanced brackets or balanced parentheses of length $2n$. For example, for $n=5$, $\sigma=()()()()()$ is balanced, while $\tau=(()))(()()$ is not. Thought of as basis vectors in a Hilbert space on $2n$ qubits $\{|(\rangle,|)\rangle\}^{\otimes 2n}$, an interesting highly entangled state is the uniform superposition of all such Dyck words. Indeed, from Sergey Bravyi's lecture on stoquastic Hamiltonians at the Israeli Institute of Advanced Studies, I learned that Salberger and Korepin proved that for a fixed $n$ all such Dyck words can be generated with only two moves, each acting on three (adjacent) symbols: $$l:(()\leftrightarrow ()(,\\ r:\:)()\leftrightarrow ()).$$ With some similarity to a CSWAP involution, such moves are referred to in the literature as Fredkin moves, and the chain of qubits as a Fredkin spin chain. Again fixing $n$ and starting from a string such as $\sigma$, a random walk on the unweighted graph formed by such moves will eventually reach all such Dyck words. Accordingly, Salberger and Korepin write a three-local stoquastic Hamiltonian out of projectors $|l\rangle\langle l|$ and $|r\rangle\langle r|$. The appropriately normalized state $|D_n\rangle$ over all Dyck words of a fixed length is the unique ground state of this stoquastic Hamiltonian. It may be natural to ask: How can we efficiently prepare such a properly normalized state corresponding to the uniform distribution over all Dyck words on $2n$ qubits? Determining the balancedness of a string of parentheses is often introduced early in the study of algorithms. For example, opening and closing parentheses correspond to pushing and popping items onto a stack; the string is balanced only if the stack never experiences underflow. 
Also, famously, the number of strings of balanced parentheses of length $2n$ is given by the $n$th Catalan number: $$C_n=\frac{1}{n+1}{2n\choose n},$$ while the central binomial coefficient is ${2n\choose n}$. So one approach may be to (1) initially prepare the uniform superposition over all ${2n\choose n}$ strings with the same number $n$ of open parentheses as closed parentheses, (2) evaluate in superposition and into a flag register whether the strings so constructed are balanced or not, for example using the above stack underflow algorithm, and (3) post-select on the flag register indicating that the strings are balanced, with success probability $\frac{1}{n+1}$. But is there another, simpler approach? Answer: Take the parent Hamiltonian (the one you describe above) and adiabatically prepare the ground state, by interpolating from a trivial Hamiltonian. Given the structure of the problem, I would expect such an interpolation can be designed with a 1/poly gap, and thus carried out in polynomial time. (One idea would be to start with a Hamiltonian where the bracketing patterns are constrained to one end of the chain, and then slowly let them expand -- this path should have a nice and uniform behavior.)
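The stack-underflow check and the Catalan count mentioned in the question can be sketched classically (function names are mine; since every pushed item is identical, a simple depth counter stands in for the stack):

```python
from itertools import product
from math import comb

def is_balanced(word):
    """True iff `word` over {'(', ')'} is a Dyck word: the running depth
    never goes negative (no stack underflow) and ends at zero."""
    depth = 0
    for c in word:
        depth += 1 if c == "(" else -1
        if depth < 0:        # a ')' arrived with an empty stack
            return False
    return depth == 0

def count_dyck(n):
    """Count Dyck words of length 2n by brute-force enumeration."""
    return sum(is_balanced("".join(w)) for w in product("()", repeat=2 * n))

def catalan(n):
    """C_n = (1/(n+1)) * binom(2n, n), the closed form from the question."""
    return comb(2 * n, n) // (n + 1)
```

For $n=5$ this gives count_dyck(5) == catalan(5) == 42, and the post-selection step succeeds with probability catalan(n)/comb(2n, n) = 1/(n+1).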
{ "domain": "quantumcomputing.stackexchange", "id": 4429, "tags": "state-preparation, stoquatic-matrices" }
Card-fighting game Part 2
Question: This is part 2 of the game I am building. After some good feedback on my first post I decided it was time to post my updated code. The differences in this part are: Updated code after feedback, new dodge mechanic and better formatted code. What I am looking for is: performance feedback, code style feedback, things I can do in a better way and JavaScript best practices. For more information about the game I would advise you to visit my first question. var cards = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10]; var shuffledCards = shuffle(cards); var playerDeck = shuffledCards.slice(0, 20); var computerDeck = shuffledCards.slice(20); var playerlife = 25; var computerLife = 25; var playerFeedback = document.getElementById('playerFeedback'); var playerName = ''; var swordSound = new Audio('sounds/sword.mp3'); var blockSound = new Audio('sounds/block.mp3'); var dodgeSound = new Audio('sounds/dodged.mp3'); var dodged = false; /** * @param array * @returns {*} */ function shuffle(array) { "use strict"; var m = array.length; var t; var i; // While there remain elements to shuffle… while (m) { // Pick a remaining element… i = Math.floor(Math.random() * m--); // And swap it with the current element. 
t = array[m]; array[m] = array[i]; array[i] = t; } return array; } /** * @returns {boolean} */ function getPlayerName() { "use strict"; playerName = document.getElementById('userInput').value; if (playerName.length === 0) { return false; } document.getElementById('playerName').innerHTML = playerName; document.getElementById('overlay').style.display = 'none'; } function checkIsGameOver() { "use strict"; var gameResult = document.getElementById('gameResult'); var gameResultContent = gameResult.innerHTML; var replay = document.getElementById('replay'); var click = document.getElementById('click'); var dodgeL = document.getElementById('dodgeL'); var dodgeR = document.getElementById('dodgeR'); function gameIsOver() { playerFeedback.innerHTML = 'Game Over, do you want to play again?'; gameResult.innerHTML = gameResultContent; gameResult.style.display = 'block'; replay.style.display = 'inline'; click.style.display = 'none'; dodgeL.style.display = 'none'; dodgeR.style.display = 'none'; } if (playerlife <= 0) { playerlife = 'Lost'; computerLife = 'Won'; gameResultContent = '<p>Hunter won the game!</p>'; gameIsOver(); } else if (computerLife <= 0) { computerLife = 'Lost'; playerlife = 'Won'; gameResultContent = '<p>' + playerName + ' won the game!</p>'; gameIsOver(); } } function whoWon() { "use strict"; var playerTopCard = playerDeck[0]; var computerTopCard = computerDeck[0]; document.getElementById('computerDamageImage').setAttribute('src', 'images/empty.png'); document.getElementById('playerDamageImage').setAttribute('src', 'images/empty.png'); if (playerTopCard > computerTopCard) { computerLife = computerLife - (playerTopCard - computerTopCard); swordSound.play(); document.getElementById('computerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = 'Hunter lost: ' + (playerTopCard - computerTopCard) + ' HP'; } else if (playerTopCard < computerTopCard) { playerlife = playerlife - (computerTopCard - playerTopCard); swordSound.play(); 
document.getElementById('playerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = playerName + ' lost: ' + (computerTopCard - playerTopCard) + ' HP'; } else { blockSound.play(); playerFeedback.innerHTML = 'It was a tie!'; } checkIsGameOver(); playerDeck.shift(); computerDeck.shift(); } function didPlayerDodged() { "use strict"; var playerTopCard = playerDeck[0]; var computerTopCard = computerDeck[0]; document.getElementById('computerDamageImage').setAttribute('src', 'images/empty.png'); document.getElementById('playerDamageImage').setAttribute('src', 'images/empty.png'); if (dodged === true) { computerLife = computerLife - playerTopCard; dodgeSound.play(); document.getElementById('computerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = 'You just dodged the attack! Hunter lost: ' + playerTopCard + ' HP'; } else { playerlife = playerlife - computerTopCard; swordSound.play(); document.getElementById('playerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = 'You could not dodged the attack! 
' + playerName + ' lost: ' + computerTopCard + ' HP'; } checkIsGameOver(); playerDeck.shift(); computerDeck.shift(); } /** * @param type */ function dealCards(type) { "use strict"; var playerTopCard = playerDeck[0]; var computerTopCard = computerDeck[0]; var cardImage = "<img src='images/cardBack.png'/>"; if (type === 'attack') { whoWon(); } else if (type === 'dodgeTrue') { dodged = true; didPlayerDodged(); } else { dodged = false; didPlayerDodged(); } document.getElementById("card1").innerHTML = "<div class='cardNumber'>" + playerTopCard + '</div>' + cardImage; document.getElementById("card2").innerHTML = "<div class='cardNumber'>" + computerTopCard + '</div>' + cardImage; document.getElementById("scoreComputer").innerHTML = computerLife; document.getElementById("scorePlayer").innerHTML = playerlife; } function playAgain() { "use strict"; window.location.reload(); } /** * @param direction * @returns {boolean} */ function dodgeDirection(direction) { "use strict"; var goodDirection = Math.floor(Math.random() * 2) + 1; return direction === goodDirection; } (function startGame() { "use strict"; document.getElementById("replay").addEventListener("click", playAgain); document.getElementById("userSubmit").addEventListener("click", getPlayerName); document.getElementById("click").addEventListener("click", function () { dealCards('attack'); }); document.getElementById("dodgeL").addEventListener("click", function () { if (dodgeDirection(1)) { dealCards('dodgeTrue'); } else { dealCards('dodgeFalse'); } }); document.getElementById("dodgeR").addEventListener("click", function () { if (dodgeDirection(2)) { dealCards('dodgeTrue'); } else { dealCards('dodgeFalse'); } }); }()); Answer: Organize the data Sound Library You're starting to get a lot of global variables in your project. As your project grows, you'll want to organize it more so that it is easier to grasp. Some of your data would be better organized into classes and objects. 
For example you could make a sounds library object for sound effects: var sounds = { block: new Audio('sounds/block.mp3'), dodge: new Audio('sounds/dodged.mp3'), sword: new Audio('sounds/sword.mp3') }; sounds.sword.play(); // usage Now when you want to add another sound or edit the ones you have, all that logic will be contained in the sounds object. Player Class You might want to consider making a player class. There are a few ways to make classes in JavaScript, but I prefer a factory method (a function that constructs an instance of your class from parameters): function createPlayer(name, maxLife) { // factory functions var player = { name: name, life: maxLife, deck: undefined // no cards yet }; return player; } var player = createPlayer(undefined, 25); // usage var computer = createPlayer("Hunter", 25); Deck Class You have a lot of global functions that you're applying to a deck of cards. And a lot of your functions need to know that your decks are arrays. This deck class abstracts your deck logic into one place: function createDeck(array) { var cards = array; // private variable return { // removes the card off the top of the deck draw: function () { return cards.pop(); // the backside of the array the top of the deck }, // returns the value of the card on the top of the deck peek: function () { return cards[cards.length - 1]; }, // returns the size of the deck size: function () { return cards.length; }, // randomizes the order of the cards in the deck shuffle: function () { cards = shuffle(cards); // call your global shuffle function }, // returns an array to be assigned to two players. 
Second array gets the extra card if the deck's length is odd deal: function () { var half = Math.floor(cards.length / 2); var deck0 = createDeck(cards.slice(0, half)); var deck1 = createDeck(cards.slice(half)); return [deck0, deck1]; }, // returns a string of the deck toString: function () { return cards.toString(); } }; } var deck = createDeck([1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]); deck.shuffle(); var hands = deck.deal(); player.deck = hands[0]; computer.deck = hands[1]; Now you can refactor your whoWon function like so: function whoWon() { "use strict"; var playerCard = player.deck.draw(); var computerCard = computer.deck.draw(); document.getElementById('computerDamageImage').setAttribute('src', 'images/empty.png'); document.getElementById('playerDamageImage').setAttribute('src', 'images/empty.png'); if (playerCard > computerCard) { computerLife = computerLife - (playerCard - computerCard); sounds.sword.play(); document.getElementById('computerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = computer.name + ' lost: ' + (playerCard - computerCard) + ' HP'; } else if (playerCard < computerCard) { playerlife = playerlife - (computerCard - playerCard); sounds.sword.play(); document.getElementById('playerDamageImage').setAttribute('src', 'images/damage.gif'); playerFeedback.innerHTML = player.name + ' lost: ' + (computerCard - playerCard) + ' HP'; } else { sounds.block.play(); playerFeedback.innerHTML = 'It was a tie!'; } checkIsGameOver(); } This might not seem like much, but now your decks are able to manage themselves. They've become their own module. Your whoWon function doesn't need to know that the cards are stored as an array; it doesn't need to know that the top of the deck is the front of the array or the back. All it knows is that the decks have a draw method that "draws" a card from a player's deck. 
DOM Element Library - Calling document.getElementById like 100000 times Nearly every single one of your functions needs to access a DOM element at some point. You could introduce jquery to simplify the redundancy of document.getElementById, but then you would have to include jquery, adding a dependency to your code that you don't really need. I really like these two functions that I found in a cheeky little article about the "harmfulness" of jquery: // Returns first element that matches CSS selector {expr}. // Querying can optionally be restricted to {container}’s descendants function $(expr, container) { return typeof expr === "string"? (container || document).querySelector(expr) : expr || null; } // Returns all elements that match CSS selector {expr} as an array. // Querying can optionally be restricted to {container}’s descendants function $$(expr, container) { return [].slice.call((container || document).querySelectorAll(expr)); } Using the first function, $, you could simplify document.getElementById('playerFeedback') to: $('#playerFeedback'). As for getting the elements over and over again, I think you can assume the browser will be fast enough. But if you want, you could create a library of the DOM elements you want to use. 
var elements = { playerFeedback: $('#playerFeedback'), computerDamageImage: $('#computerDamageImage'), playerDamageImage: $('#playerDamageImage') }; Let's do some more refactoring to whoWon: function whoWon() { "use strict"; var playerCard = player.deck.draw(); var computerCard = computer.deck.draw(); elements.computerDamageImage.setAttribute('src', 'images/empty.png'); elements.playerDamageImage.setAttribute('src', 'images/empty.png'); if (playerCard > computerCard) { computer.life = computer.life - (playerCard - computerCard); sounds.sword.play(); elements.computerDamageImage.setAttribute('src', 'images/damage.gif'); elements.playerFeedback.innerHTML = computer.name + ' lost: ' + (playerCard - computerCard) + ' HP'; } else if (playerCard < computerCard) { player.life = player.life - (computerCard - playerCard); sounds.sword.play(); elements.playerDamageImage.setAttribute('src', 'images/damage.gif'); elements.playerFeedback.innerHTML = player.name + ' lost: ' + (computerCard - playerCard) + ' HP'; } else { sounds.block.play(); elements.playerFeedback.innerHTML = 'It was a tie!'; } checkIsGameOver(); } There are a few other things I'd like to help you with, but this is good for now. Atillio makes some really good points. I particularly agree with the Logic/displaying section; look into MVC (Model-View-Controller) Architecture.
{ "domain": "codereview.stackexchange", "id": 20096, "tags": "javascript, performance, beginner, game, playing-cards" }
ar_track_alvar crashes camera_nodelet_manager
Question: Hi, I am struggling with the following problem: I am running openni_launch and ar_track_alvar. To minimize data sent over our network, we have set the image and depth mode of the camera to 5 (QVGA). However, when I want to detect some markers I set the camera image and depth mode to 2 to provide a higher resolution. I do this by running the following line from C++ code (found this example at the ROS wiki): system("rosrun dynamic_reconfigure dynparam set_from_parameters camera/driver _image_mode:=2 && rosrun dynamic_reconfigure dynparam set_from_parameters camera/driver _depth_mode:=2"); At a certain moment in time I received the following error: [camera/camera_nodelet_manager-5] process has died [pid 30792, exit code -11, cmd /opt/ros/hydro/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/rose/.ros/log/9534a258-21f7-11e4-aed4-000cf6bedafb/camera-camera_nodelet_manager-5.log] I thought this was caused by changing the resolution. I tested the openni_launch package separately by changing the resolution again and again from the terminal in a while-loop. But, the camera nodelet did not crash. When I started ar_track_alvar in combination with the previous set up (changing the resolution in this while-loop), the error occurred when the marker was in the camera view. Hence, I think ar_track_alvar is causing the camera_nodelet_manager to crash. I have tried to change the output_frame to /camera_depth_optical_frame. However, that did not work.. Launching the camera_nodelet_manager in gbd shows the following error: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fff9297f700 (LWP 25102)] 0x00007fffa309e522 in openni_wrapper::ImageBayerGRBG::fillRGB(unsigned int, unsigned int, unsigned char*, unsigned int) const () from /opt/ros/hydro/lib/libopenni_driver.so Has anyone experienced the same issues? Are there solutions for this problem? 
--- Extra information --- Launch files: ar_track_alvar.launch <launch> <arg name="marker_size" default="4.4" /> <arg name="max_new_marker_error" default="0.08" /> <arg name="max_track_error" default="0.2" /> <arg name="cam_image_topic" default="/camera/depth_registered/points" /> <arg name="cam_info_topic" default="/camera/rgb/camera_info" /> <arg name="output_frame" default="/camera_link" /> <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers" respawn="false" output="screen" args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error) $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" /> <node pkg="tf" type="static_transform_publisher" name="right_gripper_marker_tf" args="0.087 0 0.025 3.1415 0 3.1415 /right_arm_wrist /right_arm_grippermarker_expected 10" /> <node pkg="tf" type="static_transform_publisher" name="left_gripper_marker_tf" args="0.087 0 0.065 3.1415 0 3.1415 /left_arm_wrist /left_arm_grippermarker_expected 10" /> <node pkg="tf" type="static_transform_publisher" name="ar_marker_1_rename" args="0 0 0 0 0 0 /ar_marker_1 /right_arm_grippermarker_observed 10" /> <node pkg="tf" type="static_transform_publisher" name="ar_marker_2_rename" args="0 0 0 0 0 0 /ar_marker_2 /left_arm_grippermarker_observed 10" /> </launch> kinect.launch <launch> <arg name="kinect_camera_name" default="camera" /> <!-- Openni kinect --> <include file="$(find openni_launch)/launch/openni.launch"> <arg name="depth_registration" value="true"/> <arg name="num_worker_threads" value="8"/> <arg name="sw_registered_processing" value="false"/> <arg name="camera" value="$(arg kinect_camera_name)" /> </include> <param name="/$(arg kinect_camera_name)/driver/image_mode" value="5" /> <!-- 2 is default, 5 is QVGA --> <param name="/$(arg kinect_camera_name)/driver/depth_mode" value="5" /> <!-- 2 is default, 5 is QVGA --> <!-- Point cloud filters --> <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen" /> <!-- Run a 
VoxelGrid filter to clean NaNs and downsample the data --> <node pkg="nodelet" type="nodelet" name="voxel_grid" args="load pcl/VoxelGrid pcl_manager" output="screen"> <remap from="~input" to="/camera/depth_registered/points" /> <remap from="~output" to="/point_cloud/downsample" /> <rosparam> leaf_size: 0.04 filter_field_name: z filter_limit_min: 0.30 filter_limit_max: 3.00 filter_limit_negative: False </rosparam> </node> <!-- Run a StatisticalOutlierRemoval filter to remove outliers --> <node pkg="nodelet" type="nodelet" name="statistical_outlier_removal" args="load pcl/StatisticalOutlierRemoval pcl_manager" output="screen"> <remap from="~input" to="/point_cloud/downsample" /> <remap from="~output" to="/point_cloud/downsample_outlier" /> <rosparam> <!-- The number of points (k) to use for mean distance estimation Range: 2 to 100 --> mean_k: 2 <!-- The standard deviation multiplier threshold. All points outside the mean +- sigma * std_mul will be considered outliers. Range: 0.0 to 5.0 --> stddev: 0.4 <!-- Set whether the inliers should be returned (true) or the outliers (false) --> negative: false </rosparam> </node> </launch> [Ubuntu 12.04 LTS 64-bits, ROS Hydro] Originally posted by mathijsdelangen on ROS Answers with karma: 88 on 2014-08-12 Post score: 0 Answer: I have not fixed the issue with ar_track_alvar crashing the camera nodelet. However, the original problem I had was sending too much data over the network. "To minimize data sent over our network, we have set the image and depth mode of the camera to 5 (QVGA)." This problem has been fixed by using theora compression at the side of the receiver. Using this, the image and depth mode could be set at 2 and no switching between modes has to be done anymore. In the launch file: <param name="image_transport" value="theora"/> Originally posted by mathijsdelangen with karma: 88 on 2014-09-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 19009, "tags": "ros, ros-hydro, openni-launch, ar-track-alvar" }
Why do some planets have rings?
Question: Some planets, specifically Jupiter, Saturn, Uranus, and Neptune in our solar system, have planetary rings. Why do some planets have rings? How are they made and from what? Most importantly, will I be able to observe the rings on any planet with an amateur telescope? Answer: Rings are made up of tiny (and not so tiny) pieces of rock and ice that are in some way the bits "left over" from the formation of the planet. The theory involves the Roche limit: particles that are already within this limit can't accrete into a larger body because of the tidal forces involved. Another theory is that they are formed when a moon comes closer to a planet than the Roche limit; the tidal forces cause it to break up and form a ring. Though the presence of "shepherd" moons in the rings of Saturn does hint that this may not be a major source of material. Both explanations, to me, imply that you'd only get major ring systems around larger (gas giant) planets, though it doesn't preclude rings around smaller (rocky) planets. This seems to be borne out by our solar system where the gas giants have rings whereas the rocky planets don't. Source As to being able to see them, you should be able to see the rings of Saturn with an amateur telescope that has a magnification of 50 to 100. With binoculars you'll probably see a misshapen blob. Source
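The Roche-limit argument can be made roughly quantitative with the standard fluid-body formula $d \approx 2.44\,R_p\,(\rho_p/\rho_m)^{1/3}$ (the densities below are approximate textbook values, so treat the result as an order-of-magnitude sketch):

```python
def roche_limit_fluid(planet_radius_km, planet_density, moon_density):
    """Fluid-body Roche limit: d = 2.44 * R_p * (rho_p / rho_m)**(1/3).

    Densities in any common unit (they only enter as a ratio);
    radius and result in kilometres.
    """
    return 2.44 * planet_radius_km * (planet_density / moon_density) ** (1 / 3)

# Saturn (R ~ 60,268 km, rho ~ 0.687 g/cm^3) vs. an icy moon (~0.9 g/cm^3)
d_saturn = roche_limit_fluid(60268, 0.687, 0.9)   # roughly 134,000 km
```

That is comparable to the roughly 140,000 km outer edge of Saturn's main rings, consistent with the ring material sitting inside the Roche limit.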
{ "domain": "astronomy.stackexchange", "id": 33, "tags": "neptune, uranus, saturn, jupiter, planetary-ring" }
Web page for blog post demonstration
Question: I am a CSS and HTML5 newbie. I'm creating a minimal web page to demonstrate something in a blog post. So although it doesn't need to look great, I would like it to make sure it's solid. It has no errors or warnings when I use the W3C validator. .navigation { float: left; } .content { float: left; } table { border:1px solid #000; border-spacing: 0px; background-color: #EEE; border-collapse:collapse; } thead { font-weight: bold; text-decoration: underline; } th { border:1px solid #000; padding: 4px; } td { text-align: center; vertical-align: middle; padding: 4px; border:1px solid #000; } .liked { background-color: #CD0; } .unliked { background-color: white; } .color_like { text-align: center; } .color_name { text-align: right; } <div class="navigation"> <form id="search_colors_form_id" method="get" action="/colorliker/"> <input type="text" id="search_text" name="search_text"/> <input type='hidden' name='csrfmiddlewaretoken' value='LvwycpyDh8xhACAh9DaqTUaPh6YkqoAe' /> <input id="id_pic_submit_button" type="submit" value="Search for color"/><BR> (Requires two or more characters) </form> <BR>Searching for "<CODE>gr</CODE>":<UL> <LI>gray</LI> <LI>green</LI> </UL> </div> <div class="content"> <div class="content_body"> <H1>Color Likenatorizer</H1> <TABLE><TR> <TH>Title</TH> <TH>Favorite?</TH> </TR><TR> <TD CLASS="color_name">aqua</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_12/">No</A></TD> </TR><TR> <TD CLASS="color_name">black</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_13/">No</A></TD> </TR><TR> <TD CLASS="color_name">blue</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_14/">No</A></TD> </TR><TR> <TD CLASS="color_name">fuchsia</TD> <TD CLASS="liked"><A HREF="/colorliker/like_color_15/">Yes</A></TD> </TR><TR> <TD CLASS="color_name">gray</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_16/">No</A></TD> </TR><TR> <TD CLASS="color_name">green</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_17/">No</A></TD> 
</TR><TR> <TD CLASS="color_name">lime</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_18/">No</A></TD> </TR><TR> <TD CLASS="color_name">maroon</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_19/">No</A></TD> </TR><TR> <TD CLASS="color_name">navy</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_20/">No</A></TD> </TR><TR> <TD CLASS="color_name">olive</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_21/">No</A></TD> </TR><TR> <TD CLASS="color_name">orange</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_22/">No</A></TD> </TR><TR> <TD CLASS="color_name">purple</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_23/">No</A></TD> </TR><TR> <TD CLASS="color_name">red</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_24/">No</A></TD> </TR><TR> <TD CLASS="color_name">silver</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_25/">No</A></TD> </TR><TR> <TD CLASS="color_name">teal</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_26/">No</A></TD> </TR><TR> <TD CLASS="color_name">white</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_27/">No</A></TD> </TR><TR> <TD CLASS="color_name">yellow</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_28/">No</A></TD> </TR></TABLE> </div> </div> Answer: Some of your HTML is off and a little bit odd to me <div class="navigation"> <form id="search_colors_form_id" method="get" action="/colorliker/"> <input type="text" id="search_text" name="search_text"/> <input type='hidden' name='csrfmiddlewaretoken' value='LvwycpyDh8xhACAh9DaqTUaPh6YkqoAe' /> <input id="id_pic_submit_button" type="submit" value="Search for color"/><BR> (Requires two or more characters) </form> <BR>Searching for "<CODE>gr</CODE>":<UL> <LI>gray</LI> <LI>green</LI> </UL> </div> Stay consistent with your Capitalization, don't use SCREAMCASE for HTML tags Be consistent with your tag terminations, always terminate tags <br/> When you comment your code, make sure that you use Comment Tags <!-- 
(Requires two or more characters) --> This might not be a comment, if it isn't then see point #4 Text should always be housed in a tag <p> Searching for "<code>gr</code>": </p> Structure of HTML tags, your <ul> tag should not follow all that other stuff, it should look like this <ul> <li>gray</li> <li>green</li> </ul> You have a div inside of a div and they aren't used for anything. <div class="content"> <div class="content_body"> these could easily be one div like this <div class="content content_body"> because both are surrounding the same piece of HTML. The way that you wrote your table bothers me, it's not standard formatting, this isn't something that will cause errors but it is harder to read <TABLE><TR> <TH>Title</TH> <TH>Favorite?</TH> </TR><TR> <TD CLASS="color_name">aqua</TD> <TD CLASS="unliked"><A HREF="/colorliker/like_color_12/">No</A></TD> </TR><TR> This is how I would write the same HTML <table> <tr> <th>Title</th> <th>Favorite?</th> </tr> <tr> <td class="color_name">aqua</td> <td class="unliked"> <a href="/colorliker/like_color_12/">No</a> </td> </tr> <tr> <!-- ... --> </table> There are different variations, but most are very similar to this format
{ "domain": "codereview.stackexchange", "id": 9621, "tags": "beginner, html5, css3" }
Electrons only affected by electric field of EM wave misconception?
Question: When an EM wave is vertically polarised, and an aerial vertically aligned, a signal is received because it is in the correct alignment to absorb the wave that is vertically polarised (relative to the electric field oscillation). The same would happen with a horizontally aligned aerial if the EM wave was horizontally polarised. However, if the aerial was aligned vertically and the wave polarised horizontally, no signal would be detected. This implies that the electrons in the aerial are only affected by the electric field component of an EM wave and not the magnetic field component. Is this true? And how can this be explained intuitively? Answer: Yes. The force on electrons due to the electric component of the wave is so much larger than that due to the magnetic component, that the latter can be ignored for most practical engineering purposes. There are two reasons for this: the first is that in SI units the magnetic component ${\bf B}$ is $1/c$ times the electric field ${\bf E}$, but the principal one is that the electrons in the antenna are moving so slowly that the Lorentz force ${\bf v}\times {\bf B}$ is very small. They may be oscillating rapidly (MHz) but they do not move very far --- and remember that even in a typical electric circuit electrons are only moving on the order of cm/sec.
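A back-of-the-envelope version of the answer's estimate (the ~1 cm/s drift speed is the answer's own figure; for a plane wave $|\mathbf B| = |\mathbf E|/c$, so the magnetic force is at most $v/c$ times the electric one):

```python
c = 2.998e8   # speed of light, m/s
v = 1e-2      # typical electron drift speed in a conductor, m/s (~cm/s)

# |F_mag| / |F_el| = |q v x B| / |q E| <= v * (E / c) / E = v / c
force_ratio = v / c   # on the order of 3e-11
```

A ratio of order $10^{-11}$ is why the magnetic component can safely be ignored in antenna engineering.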
{ "domain": "physics.stackexchange", "id": 66560, "tags": "electromagnetism, waves, polarization, absorption" }
Cut-off energy necessary to avoid vacuum catastrophe
Question: My understanding is that to obtain a finite vacuum energy density prediction from QFT, one must choose a cut-off point for the maximum allowed energy of a photon. Two seemingly natural choices are the Planck energy, which gives the oft-cited $10^{112}$ ergs/cubic cm figure, and the electroweak energy, which I recall reading gives a figure closer to $10^{40}$ ergs/cubic cm. My question then is: what cut-off would be required to give the value derived from cosmological observations ($10^{-8}$ ergs/cubic cm), and are photons above this cut-off known to exist? Answer: The vacuum energy density of a quantum field is generically given by the sum of the ground state energies of each oscillator mode composing the quantum field - I will use the term "cosmological constant" synonymously with the total vacuum energy density. In the continuum, each mode of a free scalar field contributes an energy $\frac12\hbar\omega_k$, where $\omega_k$ is the frequency of the oscillator, and the bare (unregularised) value for the vacuum energy density is given by an integral over all the zero modes: $$ \rho=\frac1V\langle\hat H\rangle=\hbar\int\frac{\mathrm d^3 k}{(2\pi)^3}\frac{\omega_k}2 \\=\hbar\int\frac{\mathrm d^3 k}{(2\pi)^3}\frac{\sqrt{k^2+m^2}}2 \\=\frac{\hbar}{4\pi^2}\int_0^\infty\mathrm dk\ k^2\sqrt{k^2+m^2} $$ and is naïvely UV infinite without regularisation, since modes of arbitrarily short wavelength are taken into account. We choose to impose a hard momentum cutoff $\Lambda$ to regularise the integral. The choice of energy scale as the cutoff reflects, roughly, our confidence in that QFT is predictive at and up to this energy scale, and that above this scale, QFT breaks down, "new" physics is required to explain phenomena above this scale, and modes above the cutoff do not contribute in the QFT prediction. 
$$ \rho=\frac{\hbar}{4\pi^2}\int_0^\Lambda\mathrm dk\ k^2\sqrt{k^2+m^2} $$ Although, strictly speaking, we should perform this zero-mode analysis in terms of an interacting photon field rather than a free scalar field, this does not change the final result appreciably, and so this calculation is accurate. For example, this is $\rho_\gamma=\frac{\hbar\Lambda^4}{16\pi^2}$ for massless photons. More generally, the vacuum energy density goes as $\rho\sim\hbar\Lambda^4$, which you can see via dimensional analysis alone. Anyway, if we make the claim that our theory is valid up to the Planck scale, we can set the cutoff equal to the (reduced) Planck mass $(8\pi G)^{-1/2}\sim10^{18}\ \mathrm{GeV}$, whereupon $\rho$ is ${\sim}10^{110}\ \mathrm{erg/cm^3}$. Importantly, there are also natural contributions from other (even composite) fields due to broken symmetry phases, notably at the electroweak breaking scale, and at the QCD scale due to chiral symmetry breaking. However, these contributions are minuscule if we integrate up to the Planck scale. This theoretical prediction is 120 orders of magnitude greater than the upper bound on the actual cosmological constant, which is on the order of $10^{-10}\ \mathrm{erg/cm^3}$. By the dimensional relation between $\rho$ and $\Lambda$, we see that a 120-order-of-magnitude difference between $\rho_\text{Predicted}$ and $\rho_\text{Observed}$ corresponds to a 30-order-of-magnitude difference between $\Lambda_\text{Planck}$ and $\Lambda_\text{Match}$, which is the cutoff scale that we need to choose in our analysis to match the observed value of the cosmological constant. So the momentum cutoff that reproduces the observed vacuum energy density is $10^{18}/10^{30}\ \mathrm{GeV} \sim 10^{-3}\ \mathrm{eV}$, or alternatively a wavelength of around $1\ \mathrm{mm}$. In the grand scheme of things, these are ridiculously low-energy photons: for instance, a photon of visible light has about 2000 times as much energy as the photons at the cutoff.
In other words: yes, we have observed photons above this cutoff (and in fact you are doing so right now by reading this answer).
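The cutoff arithmetic in the answer is easy to reproduce. A quick sketch, assuming only the stated values (a Planck-scale cutoff of $10^{18}\ \mathrm{GeV}$, a $10^{120}$ mismatch in $\rho$, and $hc \approx 1.24\ \mathrm{eV\,\mu m}$):

```python
rho_mismatch = 1e120        # predicted / observed vacuum energy density
cutoff_planck_gev = 1e18    # Planck-scale cutoff from the answer, in GeV

# rho ~ Lambda^4, so a factor of 1e120 in rho is a factor of 1e30 in Lambda
cutoff_gev = cutoff_planck_gev / rho_mismatch ** 0.25
cutoff_ev = cutoff_gev * 1e9          # -> 1e-3 eV

# Photon wavelength at the cutoff: lambda = hc / E, with hc ~ 1.24 eV*um
wavelength_m = 1.24e-6 / cutoff_ev    # -> about 1.2 mm
```

Visible-light photons (~2 eV) then carry roughly 2000 times the cutoff energy, matching the answer's closing remark.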
{ "domain": "physics.stackexchange", "id": 77840, "tags": "quantum-field-theory, cosmology, vacuum, dark-energy, cosmological-constant" }
Undo Framework Design (Revert the changes in the collection)
Question: The requirement is to monitor changes in a List<T>; possible changes are Add / Remove / Update, which are registered in an audit log, which I do in the code underneath using a Dictionary. Now the user can take an action to revert each of the operations, using an integrated Action delegate. Each operation runs its respective Revert operation. Please note that undo here is not about notification, like ObservableCollection<T>, but about reverting at a later time at the user's discretion. Following is my design; please suggest what could be done to improve it further. public class ActionWrapper<T> { public int Index {get;set;} public T OriginalValue {get;set;} public T NewValue {get;set;} public Action<int,T> Action {get;set;} } public class ChangeList<T> : List<T> where T:class,IEquatable<T> { public Dictionary<T,ActionWrapper<T>> ActionMap {get;set;} public ChangeList() { ActionMap = new Dictionary<T,ActionWrapper<T>>(); } public new void Add(T item) { base.Add(item); var actionWrapper = new ActionWrapper<T> { Action = new Action<int,T>(RevertAdd), Index = this.FindIndex(x => x.Equals(item)), NewValue = item, OriginalValue = null }; ActionMap[actionWrapper.NewValue] = actionWrapper; } public new void Remove(T item) { var actionWrapper = new ActionWrapper<T> { Action = new Action<int, T>(RevertRemove), Index = this.FindIndex(x => x.Equals(item)), NewValue = null, OriginalValue = item }; if(actionWrapper.Index < 0) return; base.Remove(actionWrapper.OriginalValue); ActionMap[actionWrapper.OriginalValue] = actionWrapper; } public void Update(T actualValue,T newValue) { var actionWrapper = new ActionWrapper<T> { Action = new Action<int, T>(RevertUpdate), Index = this.FindIndex(x => x.Equals(actualValue)), NewValue = newValue, OriginalValue = actualValue }; if (actionWrapper.Index < 0) return; base[actionWrapper.Index] = newValue; ActionMap[actionWrapper.NewValue] = actionWrapper; } public void RevertAdd(int index, T actual) { base.Remove(actual); } public void RevertRemove(int
index,T actual) { base.Add(actual); } public void RevertUpdate(int index,T actual) { base[index] = actual; } } Use Case void Main() { var changeList = new ChangeList<string>(); changeList.Add("Person1"); changeList.Add("Person2"); changeList.Add("Person3"); changeList.Add("Person4"); changeList.Add("Person5"); changeList.Add("Person6"); changeList.Add("Person7"); changeList.Dump(); // Print statement var actionMapUpdateAdd = changeList.ActionMap["Person5"]; actionMapUpdateAdd.Action(actionMapUpdateAdd.Index, actionMapUpdateAdd.NewValue); changeList.Dump(); // Print statement changeList.Update("Person7","Person77"); changeList.Dump(); // Print statement var actionMapUpdate = changeList.ActionMap["Person77"]; actionMapUpdate.Action(actionMapUpdate.Index,actionMapUpdate.OriginalValue); changeList.Dump(); // Print statement changeList.Remove("Person6"); changeList.Dump(); // Print statement var actionMapRemove = changeList.ActionMap["Person6"]; actionMapRemove.Action(actionMapRemove.Index, actionMapRemove.OriginalValue); changeList.Dump(); // Print statement } Answer: Edge cases This fails to handle several edge-cases: Undoing a remove action multiple times results in that item being added back multiple times. Undoing an update action after other items have been inserted at a lower index causes the wrong items to be replaced. Lists can contain duplicate items, but only the last operation for each distinct item is remembered. Updating an item leaves the add-operation for the original item, but attempting to undo that add-operation fails unless the update action has first been undone. As the last point demonstrates, you can't just undo an action without undoing all actions that followed it first. If you do want to support something like that, you'll have to clearly define the requirements and figure out what the desired behavior is for a variety of edge-cases. You'll also want to make this information available to those that will use this code (documentation, see below). 
Implementation notes Hiding methods with new is rarely a good idea: (changeList as IList<string>).Add("untracked item"); probably does not do what you want it to do. In this case, don't inherit from List<T>: implement the necessary interfaces manually, and use an internal List<T> for the actual storage. List<T> (and IList<T>) provides some other methods (Insert, [int index] and Clear) that are not being 'intercepted', resulting in untracked changes. Undoing an action is complicated. Why should the caller need to know whether to use NewValue or OriginalValue? That makes it difficult to use correctly. Why does the caller need to pass in any arguments at all? Why use a wrapper class if you can just create a closure with all the necessary state? Try to use clear, descriptive names. UndoableAction and Undo are much clearer than ActionWrapper and Action, and Replace is a more accurate description of what the Update method does. Those RevertAdd/Remove/Update methods don't seem to be intended for public use, so don't make them public. They only clutter the interface of your class. Those ActionWrapper properties should probably not be public either, but if they have to be, then at least make them read-only. You don't want other code to be able to mess with the internals of your change-tracking/undo system. The same goes for that ActionMap property: it should only be exposed as a get-only IReadOnlyDictionary. Documentation is entirely absent. That makes it even more difficult to tell how this class is meant to be used (or even what its exact purpose is), and various details such as Remove only removing the first matching item are left to the caller to figure out. It also makes it difficult for others to distinguish between intended and incorrect behavior. 
Alternative design I'd go for a different design, one that doesn't expose internal details, doesn't allow for out-of-order undoing (which means less edge-cases), and that provides a simple interface that's easy to use correctly (note how it's not possible to undo the same action multiple times): public class ChangeTrackingList<T> // implements IList<T> and/or other interfaces { private List<T> _items = new List<T>(); private Stack<Action> _undoActions = new Stack<Action>(); public bool UndoLastAction() { if (!_undoActions.Any()) return false; var undoLastAction = _undoActions.Pop(); undoLastAction(); return true; } public void Add(T item) { _items.Add(item); // Ensure that this item gets removed, and not an identical earlier occurrence: var index = _items.Count - 1; _undoActions.Push(() => _items.RemoveAt(index)); } ... }
{ "domain": "codereview.stackexchange", "id": 31908, "tags": "c#, framework" }
Is this "superset existence" problem NP-complete?
Question: The "Superset Existence Problem": Let there be a set $S$, and $x$ subsets of $S$. Does there exist a set of size $y < |S|$, which is a superset of at least $z$ of those subsets? To me, this feels like it should be NP-complete (in $|S|$), since it seems intuitively related to covering problems and minimum $k$-union. But I'm not sure where to start with the reduction. So: is this problem NP-complete? Answer: The problem is NP complete. It is trivially in $NP$ (a certificate is the collection of the $y$ selected subsets of $S$), and it is $NP$-hard since it is the decision version of the Maximum Coverage Problem.
{ "domain": "cs.stackexchange", "id": 17518, "tags": "np-complete, decision-problem" }
Damped Oscillations: Incoherence between a general solution and a specific one
Question: My book of classical mechanics ('Classical Dynamics of Particles and Systems', Thornton/Marion, 5th Edition) gives the following general solution for a damped oscillation, solving $\ddot{x}+2\beta\dot{x}+w_0^2x=0$: $$x(t)=e^{-\beta t}[A_{1}\exp(\sqrt{\beta^2-w_0^2}t)+A_{2}\exp(-\sqrt{\beta^2-w_0^2}t)]$$ with $A_1$ and $A_2$ arbitrary constants, $\beta$ the damping parameter and $w_0$ the characteristic angular frequency. So, if I want to find the $x(t)$ expression for the critically damped case, I have to set $$w_0^2=\beta^2,$$ so my critically damped motion is described by: $$x(t)=(A_1+A_2)e^{-\beta t}$$ But, if we solve the specific differential equation for the critically damped case, the result is (as equal roots exist): $$x(t)=(A+B t)e^{-\beta t}$$ So, my question is: what am I missing? Answer: The first solution is not correct since it implies a strange constraint $\dot x (0)=-\beta x(0)$. Check the general solution for its region of validity.
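That $(A+Bt)e^{-\beta t}$ really does solve the critically damped equation (with $w_0=\beta$) can be verified numerically. A small sketch, with arbitrarily chosen constants, checking the ODE residual by central differences:

```python
import math

A, B, beta = 1.0, 2.0, 0.5   # arbitrary example constants

def x(t):
    # Critically damped solution (A + B t) e^{-beta t}
    return (A + B * t) * math.exp(-beta * t)

def residual(t, h=1e-5):
    # Approximate x' and x'' by central differences, then evaluate
    # x'' + 2*beta*x' + w0^2 * x with w0 = beta (critical damping)
    xm, x0, xp = x(t - h), x(t), x(t + h)
    xdot = (xp - xm) / (2 * h)
    xddot = (xp - 2 * x0 + xm) / h ** 2
    return xddot + 2 * beta * xdot + beta ** 2 * x0

# Residual is ~0 (up to finite-difference error) at any t
print(max(abs(residual(t)) for t in (0.0, 0.5, 1.0, 3.0)))
```
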
{ "domain": "physics.stackexchange", "id": 17526, "tags": "homework-and-exercises, newtonian-mechanics, harmonic-oscillator, oscillators" }
Does the "lowest layer" refer to the first or last layer of the neural network?
Question: People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer immediately follows the input layer called 1st layer? How about the lowest layer and highest layer? Answer: People sometimes use 1st layer, 2nd layer to refer to a specific layer in a neural net. Is the layer immediately follows the input layer called 1st layer? The 1st layer should typically refer to the layer that comes after the input layer. Similarly, the 2nd layer should refer to the layer that comes after the 1st layer, and so on. However, note that this convention and terminology may not be applicable in all cases. You should always take into account your context! How about lowest layer and highest layer? To be honest, I also dislike this ambiguous terminology. From my experience, I don't think there's an agreement on the actual meaning of "lowest" or "highest". It depends on how you depict the neural network, but it's possible that "lowest" refers to the layers closer to the inputs, because, if you think of a neural network as a hierarchy that starts from the inputs and builds more complex representations of it, the "lowest" may refer to the "lowest in the hierarchy" (but who knows!).
{ "domain": "ai.stackexchange", "id": 1926, "tags": "neural-networks, machine-learning, deep-learning, terminology" }
Why do Greenhouse Gases absorb heavily in certain wavelengths?
Question: What molecular properties make greenhouse gases absorb and reemit primarily IR radiation? That is, why are CO2, H2O and NO2 all greenhouse gases (GHGs), but others (such as helium and neon) aren't? Furthermore, is there a mathematical approach to determining the distribution of absorption of a certain chemical, based on its molecular bonds and structure? Thanks for your help. Answer: CO2, H2O etc. are molecules that have vibrational and rotational states that can be excited due to their inner structure (they consist of several atoms bound together). Vibrational and rotational transitions have lower energy than the electronic transitions that you need to excite in atoms like helium. Thus, greenhouse gases can absorb longer wavelengths, i.e. IR radiation.
{ "domain": "physics.stackexchange", "id": 89164, "tags": "thermodynamics, electromagnetic-radiation, earth, gas, absorption" }
What kind of LTL formula can be represented by DBAs
Question: I am looking for the fragment of LTL formulas that can be expressed by deterministic Büchi automata. Is there any such classification? Answer: You can define syntactic fragments of LTL that ensure that all properties expressible in these fragments are representable as DBAs. An example is given in the paper "A LTL Fragment for GR(1)-Synthesis". Also, the common fragment of ACTL and LTL only contains properties that are representable as DBAs. But note that such a fragment will never be complete in the "if a formula is not in the fragment, then it is not representable as a DBA" sense. The reason is that if we have LTL formulas that are not representable as DBAs, then we may be able to combine them into a formula that is representable as a DBA. For example, the properties "FG p | GF q" and "FG q | GF p" are both not expressible by DBA, but both their conjunction and their disjunction are. Note that there are also DBAs that are not representable in LTL. So a fragment of LTL cannot be equiexpressive to DBAs.
{ "domain": "cs.stackexchange", "id": 8748, "tags": "automata, logic, linear-temporal-logic, buchi-automata" }
Is it true that the distance between the Earth and the Sun is smaller in the winter season (January) and larger in the summer season in the Northern Hemisphere?
Question: Recently I heard that in the winter season the distance between the Earth and the Sun is smallest, and that it is largest in summer. If that is true, then why is it hotter in summer and cooler in winter? Answer: Yes, it's true (in the northern hemisphere). The small eccentricity of the Earth's orbit is not anywhere close to a key driver in the seasons. The key driver of the seasons is the Earth's obliquity. In the northern hemisphere, the axial tilt of Earth's rotation axis has the northern half of the Earth facing a bit toward the Sun in June/July/August and away from the Sun in December/January/February. The opposite is true in the southern hemisphere. Eccentricity would be a driver of the seasons if the Earth's rotation and orbital axes were much closer in line with one another than they are. If that were the case, summer and winter would be world-wide phenomena. As it stands, when it's summertime in the northern hemisphere it's wintertime in the southern hemisphere, and vice versa. Somewhat paradoxically, even though the Earth is closest to the Sun in early January and furthest from the Sun in early July, the Earth as a whole is cooler during December/January/February than it is during June/July/August. The reason is the uneven distribution of land and ocean between the northern and southern hemispheres. In 13000 years, northern hemisphere summers will occur near perihelion passage and winters near aphelion passage. Those will be brutal times! Fortunately, I won't be around to see them.
{ "domain": "earthscience.stackexchange", "id": 355, "tags": "earth-rotation, earth-system" }
Basic Relativistic Question - length measurement
Question: A while ago we did an easy, introductory exercise on length measurement. Back then it seemed pretty straightforward, but now when I look at it I have trouble understanding the assumption which led to the answer in the blink of an eye. Jack (say his frame is U') flew over a house at the speed of V=0.8c and it took him T=100 ns. What's the length of the house in Earth's reference frame? We solved this by plugging in $\Delta x'$ = 0 and the rest came easily. I don't get it... If we do a measurement in whatever reference frame, we should make it simultaneously at two points. My best guess: Jack uses a clock which doesn't move in his frame. I'd be really grateful if someone could explain this situation to me or come up with some solution where this assumption is not made or comes up naturally. Answer: Flying at $0.8c$, Jack travels in $100 ns$ the distance $L_J=24m$ $(x=vt)$, where I made the approximation $c=3 \cdot 10^8 m/s$. However, the length he sees is contracted by the factor $\gamma=\frac{1}{\sqrt{1-\beta^2}}$, where $\beta=\frac{v}{c}$. So taking this length contraction into account, the length $L$ of the house in the reference frame of the earth is $L=L_J \gamma = 40m$. If you are familiar with Minkowski diagrams, then you just draw the situation and the result is obvious. (As a personal note: almost all special relativity problems can easily be solved by drawing the corresponding Minkowski diagram. Unfortunately I don't know how to draw here in SE...)
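The two numbers in the answer follow directly from $x=vt$ and the Lorentz factor. A quick check, using the same $c \approx 3\cdot 10^8\ \mathrm{m/s}$ approximation as the answer:

```python
import math

c = 3e8                 # m/s, same approximation as the answer
v = 0.8 * c             # Jack's speed
t = 100e-9              # flyover time in Jack's frame, 100 ns

L_jack = v * t                              # length in Jack's frame: 24 m
gamma = 1 / math.sqrt(1 - (v / c) ** 2)     # Lorentz factor: 5/3
L_earth = L_jack * gamma                    # length in Earth's frame: 40 m
print(L_jack, gamma, L_earth)
```
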
{ "domain": "physics.stackexchange", "id": 19409, "tags": "homework-and-exercises, special-relativity, time-dilation, length-contraction" }
How to study (by heart) all the equations in inorganic chemistry?
Question: Physical chemistry is easy to study as I have to just understand the concepts (like in physics) and apply some equations for calculations. In organic chemistry, reaction mechanisms help in remembering a lot of reactions, and doing conversion problems also helps a lot. But in the case of inorganic chemistry it is very difficult for me to remember those equations. My high school syllabus includes a lot of preparation methods of different compounds and reactions showing their chemical properties. But, unlike organic chemistry, there are no problems based on it for deep understanding. So, what is actually needed is a good memory. But I'm not able to remember all those equations. So, how should I study them? Answer: In organic chemistry, everything is based around C, H, O and N. Therefore, to understand reactivity (somehow), there is just a limited number of options. In contrast, inorganic chemistry deals with the rest of the periodic table, and each of the elements has its special properties. But there are general trends as well; they are just more difficult to decipher and are often expressed by a set of typical reactions you should learn. Usually they concern the stability of (oxidation) states: how to go from a less stable to a more stable one to obtain the desired compound, and a few processes for how to achieve the least stable state from the one found in nature. It is difficult to state some simple rules, but there are several most typical anions and cations, and the chemistry of a given element is just its interactions with them. During my learning it helped me to have "all" the reactions (several hundred) on a paper and for each one try to understand what happens and whether there are others in the set which are similar, and why. This helps you to spot the similarities. And - it is better to have more reactions than fewer, because then you have a higher probability of spotting the trends.
{ "domain": "chemistry.stackexchange", "id": 2983, "tags": "inorganic-chemistry" }
ros1_bridge does not support sensor_msgs/msg/Image
Question: Hi, I wanted to transport sensor_msgs/msg/Image from a ros2 (eloquent) node to a ros1 (melodic) node, so I did the following steps in this exact order: Installed melodic from debian packages based on this http://wiki.ros.org/melodic/Installation/Ubuntu Installed eloquent from debian packages based on this https://index.ros.org/doc/ros2/Installation/Eloquent/Linux-Install-Debians/ After this I realized that when installing eloquent certain packages were removed from the melodic workspace (e.g. gazebo_ros, image_pipeline, rqt and a lot of others) ... so I removed melodic and installed it again like in step 1 After this both workspaces seemed fine to me and I installed ros1_bridge with sudo apt install ros-eloquent-ros1-bridge I tried to follow the ros1_bridge tutorials that I found at the link below. Example 1 worked just fine https://github.com/ros2/ros1_bridge/blob/master/README.md When trying Example 2 it gave me the following error: failed to create 2to1 bridge for topic '/image' with ROS 2 type 'sensor_msgs/msg/Image' and ROS 1 type '': No template specialization for the pair check the list of supported pairs with the --print-pairs option I printed the supported pairs, which gave the following list.
Supported ROS 2 <=> ROS 1 message type conversion pairs: 'builtin_interfaces/msg/Duration' (ROS 2) <=> 'std_msgs/Duration' (ROS 1) 'builtin_interfaces/msg/Time' (ROS 2) <=> 'std_msgs/Time' (ROS 1) 'diagnostic_msgs/msg/DiagnosticArray' (ROS 2) <=> 'diagnostic_msgs/DiagnosticArray' (ROS 1) 'diagnostic_msgs/msg/DiagnosticStatus' (ROS 2) <=> 'diagnostic_msgs/DiagnosticStatus' (ROS 1) 'diagnostic_msgs/msg/KeyValue' (ROS 2) <=> 'diagnostic_msgs/KeyValue' (ROS 1) 'geometry_msgs/msg/Accel' (ROS 2) <=> 'geometry_msgs/Accel' (ROS 1) 'geometry_msgs/msg/AccelStamped' (ROS 2) <=> 'geometry_msgs/AccelStamped' (ROS 1) 'geometry_msgs/msg/AccelWithCovariance' (ROS 2) <=> 'geometry_msgs/AccelWithCovariance' (ROS 1) 'geometry_msgs/msg/AccelWithCovarianceStamped' (ROS 2) <=> 'geometry_msgs/AccelWithCovarianceStamped' (ROS 1) 'geometry_msgs/msg/Inertia' (ROS 2) <=> 'geometry_msgs/Inertia' (ROS 1) 'geometry_msgs/msg/InertiaStamped' (ROS 2) <=> 'geometry_msgs/InertiaStamped' (ROS 1) 'geometry_msgs/msg/Point' (ROS 2) <=> 'geometry_msgs/Point' (ROS 1) 'geometry_msgs/msg/Point32' (ROS 2) <=> 'geometry_msgs/Point32' (ROS 1) 'geometry_msgs/msg/PointStamped' (ROS 2) <=> 'geometry_msgs/PointStamped' (ROS 1) 'geometry_msgs/msg/Polygon' (ROS 2) <=> 'geometry_msgs/Polygon' (ROS 1) 'geometry_msgs/msg/PolygonStamped' (ROS 2) <=> 'geometry_msgs/PolygonStamped' (ROS 1) 'geometry_msgs/msg/Pose' (ROS 2) <=> 'geometry_msgs/Pose' (ROS 1) 'geometry_msgs/msg/Pose2D' (ROS 2) <=> 'geometry_msgs/Pose2D' (ROS 1) 'geometry_msgs/msg/PoseArray' (ROS 2) <=> 'geometry_msgs/PoseArray' (ROS 1) 'geometry_msgs/msg/PoseStamped' (ROS 2) <=> 'geometry_msgs/PoseStamped' (ROS 1) 'geometry_msgs/msg/PoseWithCovariance' (ROS 2) <=> 'geometry_msgs/PoseWithCovariance' (ROS 1) 'geometry_msgs/msg/PoseWithCovarianceStamped' (ROS 2) <=> 'geometry_msgs/PoseWithCovarianceStamped' (ROS 1) 'geometry_msgs/msg/Quaternion' (ROS 2) <=> 'geometry_msgs/Quaternion' (ROS 1) 'geometry_msgs/msg/QuaternionStamped' (ROS 2) <=> 
'geometry_msgs/QuaternionStamped' (ROS 1) 'geometry_msgs/msg/Transform' (ROS 2) <=> 'geometry_msgs/Transform' (ROS 1) 'geometry_msgs/msg/TransformStamped' (ROS 2) <=> 'geometry_msgs/TransformStamped' (ROS 1) 'geometry_msgs/msg/Twist' (ROS 2) <=> 'geometry_msgs/Twist' (ROS 1) 'geometry_msgs/msg/TwistStamped' (ROS 2) <=> 'geometry_msgs/TwistStamped' (ROS 1) 'geometry_msgs/msg/TwistWithCovariance' (ROS 2) <=> 'geometry_msgs/TwistWithCovariance' (ROS 1) 'geometry_msgs/msg/TwistWithCovarianceStamped' (ROS 2) <=> 'geometry_msgs/TwistWithCovarianceStamped' (ROS 1) 'geometry_msgs/msg/Vector3' (ROS 2) <=> 'geometry_msgs/Vector3' (ROS 1) 'geometry_msgs/msg/Vector3Stamped' (ROS 2) <=> 'geometry_msgs/Vector3Stamped' (ROS 1) 'geometry_msgs/msg/Wrench' (ROS 2) <=> 'geometry_msgs/Wrench' (ROS 1) 'geometry_msgs/msg/WrenchStamped' (ROS 2) <=> 'geometry_msgs/WrenchStamped' (ROS 1) 'rcl_interfaces/msg/Log' (ROS 2) <=> 'rosgraph_msgs/Log' (ROS 1) 'rosgraph_msgs/msg/Clock' (ROS 2) <=> 'rosgraph_msgs/Clock' (ROS 1) 'std_msgs/msg/Bool' (ROS 2) <=> 'std_msgs/Bool' (ROS 1) 'std_msgs/msg/Byte' (ROS 2) <=> 'std_msgs/Byte' (ROS 1) 'std_msgs/msg/ByteMultiArray' (ROS 2) <=> 'std_msgs/ByteMultiArray' (ROS 1) 'std_msgs/msg/Char' (ROS 2) <=> 'std_msgs/Char' (ROS 1) 'std_msgs/msg/ColorRGBA' (ROS 2) <=> 'std_msgs/ColorRGBA' (ROS 1) 'std_msgs/msg/Empty' (ROS 2) <=> 'std_msgs/Empty' (ROS 1) 'std_msgs/msg/Float32' (ROS 2) <=> 'std_msgs/Float32' (ROS 1) 'std_msgs/msg/Float32MultiArray' (ROS 2) <=> 'std_msgs/Float32MultiArray' (ROS 1) 'std_msgs/msg/Float64' (ROS 2) <=> 'std_msgs/Float64' (ROS 1) 'std_msgs/msg/Float64MultiArray' (ROS 2) <=> 'std_msgs/Float64MultiArray' (ROS 1) 'std_msgs/msg/Header' (ROS 2) <=> 'std_msgs/Header' (ROS 1) 'std_msgs/msg/Int16' (ROS 2) <=> 'std_msgs/Int16' (ROS 1) 'std_msgs/msg/Int16MultiArray' (ROS 2) <=> 'std_msgs/Int16MultiArray' (ROS 1) 'std_msgs/msg/Int32' (ROS 2) <=> 'std_msgs/Int32' (ROS 1) 'std_msgs/msg/Int32MultiArray' (ROS 2) <=> 'std_msgs/Int32MultiArray' (ROS 1) 
'std_msgs/msg/Int64' (ROS 2) <=> 'std_msgs/Int64' (ROS 1) 'std_msgs/msg/Int64MultiArray' (ROS 2) <=> 'std_msgs/Int64MultiArray' (ROS 1) 'std_msgs/msg/Int8' (ROS 2) <=> 'std_msgs/Int8' (ROS 1) 'std_msgs/msg/Int8MultiArray' (ROS 2) <=> 'std_msgs/Int8MultiArray' (ROS 1) 'std_msgs/msg/MultiArrayDimension' (ROS 2) <=> 'std_msgs/MultiArrayDimension' (ROS 1) 'std_msgs/msg/MultiArrayLayout' (ROS 2) <=> 'std_msgs/MultiArrayLayout' (ROS 1) 'std_msgs/msg/String' (ROS 2) <=> 'std_msgs/String' (ROS 1) 'std_msgs/msg/UInt16' (ROS 2) <=> 'std_msgs/UInt16' (ROS 1) 'std_msgs/msg/UInt16MultiArray' (ROS 2) <=> 'std_msgs/UInt16MultiArray' (ROS 1) 'std_msgs/msg/UInt32' (ROS 2) <=> 'std_msgs/UInt32' (ROS 1) 'std_msgs/msg/UInt32MultiArray' (ROS 2) <=> 'std_msgs/UInt32MultiArray' (ROS 1) 'std_msgs/msg/UInt64' (ROS 2) <=> 'std_msgs/UInt64' (ROS 1) 'std_msgs/msg/UInt64MultiArray' (ROS 2) <=> 'std_msgs/UInt64MultiArray' (ROS 1) 'std_msgs/msg/UInt8' (ROS 2) <=> 'std_msgs/UInt8' (ROS 1) 'std_msgs/msg/UInt8MultiArray' (ROS 2) <=> 'std_msgs/UInt8MultiArray' (ROS 1) Supported ROS 2 <=> ROS 1 service type conversion pairs: 'diagnostic_msgs/srv/AddDiagnostics' (ROS 2) <=> 'diagnostic_msgs/AddDiagnostics' (ROS 1) 'diagnostic_msgs/srv/SelfTest' (ROS 2) <=> 'diagnostic_msgs/SelfTest' (ROS 1) 'example_interfaces/srv/AddTwoInts' (ROS 2) <=> 'roscpp_tutorials/TwoInts' (ROS 1) Apparantly it does not contain sensors_msgs so no wonder it did not work... I tried the same but building the ROS2 workspace from source https://index.ros.org/doc/ros2/Installation/Eloquent/Linux-Development-Setup/ With this, also Example 2 worked for me. when I printed the supported pairs it also gave me a much longer list. One thing I also realized: in Example 2 the tutorial tells me to source . workspace-with-bridge'install/setup.bash which seems like it assumes i have built the ros2 workspace from source including ros1_bridge. 
The introduction of the readme states "The bridge provided with the prebuilt ROS 2 binaries includes support for common ROS interfaces (messages/services), such as the interface packages listed in the ros2/common_interfaces repository and tf2_msgs", so based on this sentence it should work from prebuilt packages as well. I would like to resolve this contradiction, so I would be glad if somebody could help me. Originally posted by tlaci on ROS Answers with karma: 48 on 2019-11-28 Post score: 0 Answer: It looks like a bug; I faced the same issue when Dashing was released. Once https://github.com/ros2-gbp/ros1_bridge-release/pull/6 is merged and released, the bridge should support all the "default" messages Originally posted by marguedas with karma: 3606 on 2019-11-29 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34072, "tags": "ros, ros-melodic, tutorial" }
cmake error: attempt to add link library
Question: Hi, as a starting point to work with MoveIt and my custom robot I downloaded and modified xxx://github.com/ros-planning/moveit_pr2 I put the folder in my catkin workspace under /src, renamed everything according to my robot and used a lot of examples. Now I need to launch some original pr2 tutorials, so I downloaded the same package again; this time I left everything intact, but when I launch catkin_make I get lots of linking errors like this one: CMake Error at moveit_pr2-hydro-devel/pr2_moveit_tutorials/planning/CMakeLists.txt:18 (target_link_libraries): Attempt to add link library "/opt/ros/hydro/lib/libimage_transport.so" to target "move_group_interface_tutorial" which is not built in this directory. If I delete the modified package version I use with my custom robot, everything compiles fine, but then I can't launch the examples anymore. If I launch rospack find pr2_moveit_tutorials I see correctly: /home/gabri/catkin_ws/src/moveit_pr2-hydro-devel/pr2_moveit_tutorials but when I launch the tutorial with: roslaunch pr2_moveit_tutorials motion_planning_interface_tutorial.launch I get this error: [motion_planning_interface_tutorial.launch] is neither a launch file in package [pr2_moveit_tutorials] nor is [pr2_moveit_tutorials] a launch file name What am I missing? Thanks Originally posted by Wedontplay on ROS Answers with karma: 42 on 2014-07-11 Post score: 1 Answer: When installing from source, be sure that all the dependencies it requires are either installed in your ROS ecosystem or are also being compiled from source.
When trying to use programs installed by source, you'll also have to source that workspace by source devel/setup.bash And when wanting to specify something installed in the ROS ecosystem at /opt/ros/hydro/* source /opt/ros/hydro/setup.bash Cheers, Devon Originally posted by DevonW with karma: 644 on 2014-07-13 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Wedontplay on 2014-07-14: Thank you, actually the second problem was caused just by a mispelling, the launch file mentioned in the tutorial "Motion Planners/C++ API" has a different name in the pr_2_moveit_tutorials package. real launch file is: motion_planning_api_tutorial.launch. Comment by DevonW on 2014-07-14: Great! Could you please file an issue so that this can be patched in the future? I would greatly appreciate your support :D https://github.com/ros-planning/moveit_pr2 Is this ticket solved now? Comment by Wedontplay on 2014-07-17: issue created, yes it is solved. Thank you for your support. G.
{ "domain": "robotics.stackexchange", "id": 18587, "tags": "catkin-make, catkin" }
Why to pad zeros at the middle of sequence instead at the end of the sequence?
Question: One of the implementations of Bluestein's algorithm, as shown in the link below (Bluestein's Algorithm), [conj(W), zeros(1, L-2*N+1), conj(W(N:-1:2)) ] pads the zeros in the middle of the sequence and eventually gives the correct results. But padding zeros just at the end does not give the correct results. Question: When should we pad the zeros in the middle of the sequence and when at the end? Answer: When working with the DFT and IFFT we can zero pad the signal, which serves to interpolate new samples in the other domain. We will often see this applied with padding at the end of the sequence or alternatively in the middle of the sequence, as referenced in the application linked by the OP. Below I answer the question as to why we would consider padding in the center of the sequence and the implications of doing that. Specific to how that is used in an implementation of Bluestein’s algorithm, I have detailed in another post here. In general, when working with the Discrete Fourier Transform (and inverse), if we don't pad in the middle of a sequence, a linear phase will be introduced in the resulting transform. In applications where our only concern is with the interpolated magnitude of the result, this would be of no consequence. This answer explains the general considerations and motivations for when we would want to insert zeros in the middle of a sequence in either the time or frequency domain rather than zero padding at the end. Padding a sequence with zeros in one domain in either location (middle or end) interpolates samples in the other domain. This interpolation can be accomplished without introducing additional phase distortion in the other domain by padding in the proper "middle" of the sequence rather than at the end. In many cases this phase distortion is inconsequential since it is a linear phase: A linear phase in the frequency domain is a time shift or delay in the time domain. 
Similarly, a linear phase in the time domain is a frequency shift or translation in the frequency domain. Alternatively we can zero pad at the end of the sequence and then correct for the linear phase error in the result, which may be more convenient than the approaches outlined here. Proper "Padding in the Center" Proper symmetry must be maintained when padding in the center, such that we maintain the same number of "positive" and "negative" domain samples. Padding in the "true center" for an odd sequence is done by placing the zeros in between the first $(N+1)/2$ samples at the beginning and the remaining $(N-1)/2$ samples at the end, as in: $$[x_0, x_1, x_2, x_3, x_4]$$ $$[x_0, x_1, x_2, 0, 0, x_3, x_4]$$ As shown in the link jomega shared in the comments, the location of the zero padding is clear when we consider the alternate and equivalent positive and negative indexing as: Values: $[x_0, x_1, x_2, x_3, x_4]$ Indexes: $[0, 1, 2, 3, 4]$ Is the same as the following given the periodicity property of the DFT: Values: $[x_0, x_1, x_2, x_3, x_4]$ Indexes: $[0, 1, 2, -2, -1]$ Thus a zero insert after index 2 can increase the time duration in both the positive and negative direction, which serves to not introduce any additional delay: Values: $[x_0, x_1, x_2, 0, 0, x_3, x_4]$ Indexes: $[0, 1, 2, 3, -3, -2, -1]$ For even sequences, the center bin is shared between the "positive" and "negative" domain, and therefore must be split in complex conjugate halves if not zero. To pad in the center for even sequences, we must split the bin located at $n=N/2$ (for $n=0\ldots N-1$) into complex conjugate halves. 
For example, with $N=6$, the sample $x_3$ is the shared sample that is right on the boundary between what would be considered the positive domain samples and negative domain samples, and the zero padding would be done as follows: $$[x_0, x_1, x_2, x_3, x_4, x_5]$$ $$[x_0, x_1, x_2, x_3/2, 0, 0, (x_3/2)^*, x_4, x_5]$$ Where $(x_3/2)^*$ represents the complex conjugate of $x_3/2$. Related questions with additional details related to the proper splitting are here and here. Intuition for Padding in the Time Domain Due to the reciprocity in the DFT, similar considerations in the frequency domain would apply in the time domain by swapping maximum time with maximum frequency. The more detailed frequency domain explanation is given to be more intuitive for anyone familiar with sampling. However, in general for either domain, padding in the center of a sequence will increase the "length" in either domain without distorting the original samples. ("Length" implying the sampling frequency in the frequency domain, or the time duration in the time domain). Below is a simple time domain example, followed by a more detailed frequency domain explanation where we see the same property holds and why. Consider the time domain sequence given by: $$x[n] = [1, -1, 1, 1, -1]$$ The DFT of this sequence is: $$X[k] = [ 1, -1.236, 3.236, 3.236, -1.236]$$ The Discrete Time Fourier Transform (which the DFT samples lie on) is plotted below together with the selection given as $X[k]$ above. Note that the result $X[k]$ for the particular time domain sequence used is real (due to the symmetry I chose in the sequence and that the samples are real). Likewise the DTFT plotted above is completely real. Only when we pad zeros in the proper center of the sequence, will the result continue to be real and fall exactly on the plot above. 
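The statements about the time-domain example are easy to confirm numerically; a minimal sketch, assuming numpy is available:

```python
import numpy as np

x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # the N = 5 example sequence

# Pad in the "true center" ([x0, x1, x2, 0, 0, x3, x4]) vs. at the end
x_center = np.concatenate([x[:3], [0.0, 0.0], x[3:]])
x_end = np.concatenate([x, [0.0, 0.0]])

X_center = np.fft.fft(x_center)
X_end = np.fft.fft(x_end)

# Center padding: the 7-point DFT stays (numerically) real,
# i.e. the interpolated samples still fall on a real-valued DTFT.
print(np.max(np.abs(X_center.imag)) < 1e-9)   # True

# End padding: the DFT samples pick up frequency-dependent phase.
print(np.max(np.abs(X_end.imag)) > 0.1)       # True
```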
Padding zeros to the end or off-center will result in the samples from the DTFT having the same magnitude as above but with introduced phase offsets (so introduces a linear phase distortion; linear as the phase introduced is proportional to the frequency index). Intuition for Padding in the Frequency Domain The N-sample DFT typically has the bins extending from bin 0 to bin N-1 corresponding to the frequencies of DC up to nearly the sampling rate (1 bin less than what would correspond to the sampling rate). Due to periodicity in the DFT, these are equivalent to the DFT bins corresponding to the frequencies in the first Nyquist zone ($|f|< f_s/2$ where $f_s$ is the sampling rate) and as mapped with the fftshift function in MATLAB/Octave and Python (numpy.fft.fftshift). If we wish to increase the sampling rate associated with the DFT samples while maintaining all signals in the first Nyquist zone, we should then pad the frequency domain DFT result in the center. This may be made clearer by observing the spectrum plots below showing the representation of a continuous-time (CT) sinusoid in the DFT, along with the spectrum of the sampling process and the resulting spectrum of the discrete-time (DT) sinusoid. If these graphics aren't completely clear, I add more detail on their background at the end of this post. We see how the DFT covers the frequency range from DC to the sampling rate, but also the periodicity in the DFT result such that if we moved the shaded region to the left or right, we could still convey all the information contained about the original signal: the signal component at the upper end of the DFT equivalently represents the "negative frequency" components in the signal. If we only wanted to indirectly create a new DFT that would be equivalent to sampling the original signal at a higher sampling frequency, then we should zero pad in the middle of the DFT as demonstrated in the graphic below. 
Side note: in the actual DFT outputs for the DFT of a sinusoidal tone we would also see many additional non-zero samples as "spectral leakage" except for the convenient case that the sampling rate is an integer multiple of the frequency of the sinusoidal tone. If the above plots are confusing, please see this post for more background information. Basically the sampling of a signal is the process of multiplying a signal with periodic impulses in time. The Fourier Transform of periodic impulses in time is periodic impulses in frequency (which is what we see in the graphics above). Multiplication in the time domain is identical to convolution in frequency domain. The spectrum for the discrete-time (DT) sinusoid is the result of convolving the CT Sinusoid with the spectrum of the sampling process. For further examples of this specific to the Bluestein algorithm in this question and the DFT used for efficient circular convolution, and how the zero padding in the middle of a sequence can be done, please see this other post.
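The even-length rule (splitting the shared bin into complex conjugate halves) can also be confirmed numerically. Below is a sketch, assuming numpy, that applies the split in the frequency domain to upsample a real length-6 signal by 2; since the signal is real, its Nyquist bin X[3] is real, so the two conjugate halves are simply X[3]/2 each:

```python
import numpy as np

x = np.array([3.0, 1.0, -2.0, 5.0, 0.0, 4.0])   # real, even length N = 6
X = np.fft.fft(x)

# Zero pad the spectrum to length 12, splitting the Nyquist bin X[3]
Xp = np.zeros(12, dtype=complex)
Xp[0:3] = X[0:3]
Xp[3] = X[3] / 2          # half of the shared bin...
Xp[9] = X[3] / 2          # ...and its conjugate half (X[3] is real here)
Xp[10:12] = X[4:6]

y = np.fft.ifft(Xp) * 2   # scale by the upsampling factor

print(np.max(np.abs(y.imag)) < 1e-9)      # True: interpolated signal stays real
print(bool(np.allclose(y.real[::2], x)))  # True: original samples are preserved
```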
{ "domain": "dsp.stackexchange", "id": 11044, "tags": "zero-padding" }
Restructure code to wrap optional resources in a using block
Question: I have some C# I/O code that can take one or two input files. I would like to wrap the Stream objects in a using block, but cannot find a way to succinctly express this in code. Currently, I have two large files (>1 GB) that contain concatenated TIFF files and PDF files respectively that need to be extracted into individual files. The metadata of the individual files (TIFF Tags and PDF Keywords) cross-reference one another, so the processing rules are different if I receive both files at once, i.e. there is more information and verification logic. I have a FileProcessor class that implements an IEnumerable<Range> to return the byte ranges of each individual file within the archive. My current implementation just wraps the main processing loop in a try/finally block and calls Dispose manually. // Struct to represent the byte range of a file in the archive struct Range { public long start; public long end; } // Custom class that takes a Stream object in the constructor and implements IEnumerable<Range> to return the individual files in sequence public class FileProcessor : IDisposable, IEnumerable, IEnumerable<Range> { private Stream _stream; public FileProcessor(Stream stream) { this._stream = stream; } public virtual void Dispose() { if (_stream != null) { _stream.Dispose(); _stream = null; } } } public class TiffProcessor : FileProcessor { ... } public class PdfProcessor : FileProcessor { ... } // Snippet of the dispatching/processing logic TiffProcessor infile1 = null; PdfProcessor infile2 = null; try { if (HasFirstInputFile) { infile1 = new TiffProcessor(File.OpenRead(FirstInputFileName)); } if (HasSecondInputFile) { infile2 = new PdfProcessor(File.OpenRead(SecondInputFileName)); } if (infile1 != null && infile2 == null) { foreach (var range in infile1) { ... } } if (infile1 == null && infile2 != null) { foreach (var range in infile2) { ... 
} } if (infile1 != null && infile2 != null) { foreach (var ranges in infile1.Zip(infile2, (tiff, pdf) => new Range[] { tiff, pdf })) { ... } } } finally { if (infile1 != null) { infile1.Dispose(); } if (infile2 != null) { infile2.Dispose(); } } Is there any construct that can help clean up and organize this code structure? It's not too bad now, but could become exponentially more complex if additional input streams are required in the future. Edit Based on the comments received, perhaps creating an intermediate Strategy object that manages resources would work? using (var strategy = StrategyFactory.Create(CommandLineArgs)) { strategy.Process(); } public static class StrategyFactory { public static IStrategy Create(CommandLineArguments args) { if (args.FirstInputFile != null && args.SecondInputFile == null) { return new FirstStrategy(args.FirstInputFile); } if (args.FirstInputFile == null && args.SecondInputFile != null) { return new SecondStrategy(args.SecondInputFile); } if (args.FirstInputFile != null && args.SecondInputFile != null) { return new ThirdStrategy(args.FirstInputFile, args.SecondInputFile); } } } Answer: The only thing that I would do differently is that I wouldn't dispose of streams inside your FileProcessor classes. Disposing of resources should be done at the same level (or scope) where they were initialized. Unfortunately, even BCL classes like StreamWriter and BinaryWriter dispose underlying streams when disposed, so I cannot claim that this is unexpected behavior either. IMO, there is nothing wrong with your code. Going out of your way simply to save two lines of code might make your code less readable, so your first version might easily be the simplest one. 
Having said that, if you really want to wrap it in a single using block, one idea (and I don't actually find it "better" than yours in any way) might be to wrap all your disposable resources into a single class: public class InputFiles : IDisposable { private readonly Dictionary<string, Stream> _files = new Dictionary<string, Stream>(); public Stream GetStream(string type) { Stream input = null; _files.TryGetValue(type, out input); return input; } public InputFiles(string[] args) { // this is just an idea, you probably don't // use the extension to determine the type, // but it still seems a bit more general than // having strongly typed properties foreach (var path in args) { var ext = Path.GetExtension(path); // pdf or tif? _files[ext] = File.OpenRead(path); } } public void Dispose() { foreach (var stream in _files) stream.Value.Dispose(); } } And then dispose the entire object when done: static void Main(string[] args) { using (var input = new InputFiles(args)) { var pdf = input.GetStream("pdf"); var tif = input.GetStream("tif"); // use the appropriate strategy // (parsing strategy should not be concerned with // disposing of resources) Process(pdf, tif); } }
{ "domain": "codereview.stackexchange", "id": 2055, "tags": "c#" }
Morin 2.16 - Balancing a semi-infinite stick
Question: Problem 2.16 from Morin's Classical Mechanics: Given a semi-infinite stick (one that goes off to infinity in one direction), determine how its density should depend on position so that it has the following property: If the stick is cut at an arbitrary location, the remaining semi-infinite piece will balance on a support that is located a distance $l$ from the end. I am having a hard time understanding part of the solution to the above worked problem (2.16) in Morin’s Classical Mechanics text. He arrives at the following expression for total torque applied to the balanced rod which makes sense to me: $\tau = \int_{x_0}^\infty \rho(x)(x-(x_0+l))gdx=0$ He then argues that because $\tau = 0$ regardless of where the rod is cut (ie for all $x_0$), that $\tau’=0$, which I also understand. In order to find a differential equation to solve for $\rho(x)$, he then replaces $x_0$ with $x_0 + dx_0$ to differentiate $\tau$ with respect to $x_0$ to arrive at: $0 = \frac{d\tau}{dx_0} = gl\rho(x_0) - g\int_{x_0}^{\infty}\rho(x)dx$ What I don’t understand is how he can just replace $x_0$ with $x_0 + dx_0$, what he is expanding as a first order approximation, and why he has to do that in the first place. Answer: When differentiating an integral with respect to limits, according to the Wikipedia page on the Leibniz integral rule \begin{equation} \frac{\text{d}}{\text{d}x}\left( \int_{a(x)}^{b(x)}f(x,t)\text{d}t \right ) = f(x,b(x))b'(x)-f(x,a(x))a'(x)+\int_{a(x)}^{b(x)}\frac{\partial f(x,t)}{\partial x}\text{d}t. 
\end{equation} Therefore, for your equation \begin{equation} \begin{split} \frac{\text{d}}{\text{d}x_{0}}\int_{x_{0}}^{\infty}\rho(x)\left(x - (x_{0} + l)\right)g \,\text{d}x &= - \rho(x_{0})(x_{0} - x_{0} -l)g + \int_{x_{0}}^{\infty}\frac{\partial}{\partial x_{0}} \rho(x)\left(x - (x_{0} + l)\right)g \,\text{d}x \\ &= \rho(x_{0})lg - g\int_{x_{0}}^{\infty}\rho(x)\,\text{d}x = 0 \end{split} \end{equation} Morin does this because the problem is that if we change $x_{0}$, $\tau$ should not change, i.e. $\tau$ is not a function of $x_{0}$ and therefore should be a constant value $\tau = 0$ for all $x_{0}$ (just imagine plotting $\tau$ against $x_{0}$ - the gradient is flat). As stated, the remainder of the problem is solved by differentiating both sides wrt $x_{0}$ once more (use the Leibniz rule again) and solving the differential equation.
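For what it's worth, the resulting differential equation $l\rho'(x_0) = -\rho(x_0)$ is solved by $\rho(x) \propto e^{-x/l}$, and one can check symbolically (sympy assumed) that this density makes the torque vanish for every cut point $x_0$:

```python
import sympy as sp

x, x0, l = sp.symbols('x x_0 l', positive=True)
rho = sp.exp(-x / l)   # candidate density, up to an overall constant factor

# Net torque about the support at x0 + l, for a cut at arbitrary x0
# (the constant factor g is divided out)
torque = sp.integrate(rho * (x - (x0 + l)), (x, x0, sp.oo))
print(sp.simplify(torque))   # 0
```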
{ "domain": "physics.stackexchange", "id": 89246, "tags": "homework-and-exercises, torque, equilibrium, statics" }
How to realize SWAP operation using iSWAP gate?
Question: The following are the matrices for the SWAP and iSWAP gates. SWAP = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} iSWAP = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} Both are similar except that iSWAP adds a phase to the $|01\rangle$ and $|10\rangle$ amplitudes. How can the SWAP operation be realized using the iSWAP gate? Answer: It can't be done in one or two uses of the iSWAP, because an iSWAP is equivalent (up to single qubit rotations) to a SWAP+CZ. A single SWAP+CZ is not a swap. The two swaps in a pair of SWAP+CZs cancel out, leaving you with two CZs (and arbitrary single qubit operations around them), which is also not enough to do a swap. But you can do it with three.
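The equivalence claimed in the first sentence of the answer can be checked directly. The identity below (a standard one; numpy assumed) writes iSWAP as a SWAP followed by a CZ and an S phase gate on each qubit:

```python
import numpy as np

S = np.diag([1, 1j])            # single-qubit phase gate
CZ = np.diag([1, 1, 1, -1])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
iSWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]])

# iSWAP = (S tensor S) * CZ * SWAP, i.e. SWAP+CZ up to single-qubit rotations
print(bool(np.allclose(np.kron(S, S) @ CZ @ SWAP, iSWAP)))  # True
```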
{ "domain": "quantumcomputing.stackexchange", "id": 4333, "tags": "quantum-gate, circuit-construction" }
What molecular changes occur inside solid Electromagnetic materials?
Question: It is a very elementary question to ask, but I could not understand the inside picture. What happens inside a transformer material at an atomic/molecular level when magnetic flux changes? (i.e., while EMF is being produced as a reaction?) What exactly changes, and where do the changes occur? In the nucleus? In the $s,p,d,f$ electronic orbitals, in nuclear distances, or in speeds? Please give a reference for the internal picture. (I am clear about what is happening in the coils outside.) Answer: Transformer cores are typically made of laminated electrical steel, which is not steel (iron with carbon) but iron with a few percent silicon. This material has a narrow magnetic hysteresis loop because the magnetic domain walls are very mobile. When an external magnetic field is applied, the domains that are aligned with the field will grow at the expense of the other domains. This causes a large induced magnetic moment in the transformer core material. This moment is mostly due to unpaired spins of the iron $3d$-electrons. Each electron is a small magnet; when they are aligned, as in ferromagnetic materials, this produces a large magnetization. Normally this is just in a small area, a magnetic domain, with sizes of micrometers. When the directions of domains are random, there is no net magnetization, no field outside the material. There are no changes in the nuclei. The only thing that happens in an applied field is that the spins of electrons get turned around. This gives some torque because of conservation of angular momentum, the Einstein-De Haas effect.
{ "domain": "physics.stackexchange", "id": 45835, "tags": "electromagnetism, ferromagnetism" }
Physical significance of Killing vector field along geodesic
Question: Let us denote by $X^i=(1,\vec 0)$ the Killing vector field and by $u^i(s)$ a tangent vector field of a geodesic, where $s$ is some affine parameter. What physical significance do the scalar quantity $X_iu^i$ and its conservation hold? If any...? I have seen this in many books and exam questions. I wonder what it means... Answer: In general, if $\xi^\mu$ is a Killing vector field on a spacetime, and if $u^\mu$ is a tangent field along a geodesic in that spacetime, then $\xi_\mu u^\mu$ is a conserved quantity along the geodesic. (See for example Wald's GR proposition C.3.1). 
{ "domain": "physics.stackexchange", "id": 6192, "tags": "general-relativity, symmetry, differential-geometry, vector-fields, geodesics" }
Get canonical transcript from UCSC
Question: I am using the following command to get all refseq genes from UCSC: /usr/bin/mysql --user=genomep --password=password --host=genome-mysql.cse.ucsc.edu \ -A -D hg38 -e 'select concat(t.name, ".", i.version) name, \ k.locusLinkId as "EntrezId", t.chrom, t.strand, t.txStart, \ t.txEnd, t.cdsStart, t.cdsEnd, t.exonCount, t.exonStarts, \ t.exonEnds, t.score, t.name2 from refGene t join hgFixed.gbCdnaInfo i \ on t.name = i.acc join hgFixed.refLink k on t.name = k.mrnaAcc' That returns data in the following format (showing the 1st 5 lines): +-------------+-----------+-------+--------+---------+-------+----------+--------+-----------+--------------------+--------------------+-------+-----------+ | name | EntrezId | chrom | strand | txStart | txEnd | cdsStart | cdsEnd | exonCount | exonStarts | exonEnds | score | name2 | +-------------+-----------+-------+--------+---------+-------+----------+--------+-----------+--------------------+--------------------+-------+-----------+ | NR_046018.2 | 100287102 | chr1 | + | 11873 | 14409 | 14409 | 14409 | 3 | 11873,12612,13220, | 12227,12721,14409, | 0 | DDX11L1 | | NR_106918.1 | 102466751 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-1 | | NR_107062.1 | 102465909 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-2 | | NR_107063.1 | 102465910 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-3 | | NR_128720.1 | 103504738 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-4 | +-------------+-----------+-------+--------+---------+-------+----------+--------+-----------+--------------------+--------------------+-------+-----------+ I also want to find the accession of the canonical transcript of each of the genes returned by the command above. 
Those seem to be stored in the knownCanonical table: $ /usr/bin/mysql --user=genomep --password=password --host=genome-mysql.cse.ucsc.edu -A -D hg38 -e 'select * from knownCanonical limit 5' +-------+------------+-----------+-----------+------------+--------------------+ | chrom | chromStart | chromEnd | clusterId | transcript | protein | +-------+------------+-----------+-----------+------------+--------------------+ | chrX | 100628669 | 100636806 | 1 | uc004ega.3 | ENSG00000000003.14 | | chrX | 100584801 | 100599885 | 2 | uc004efy.5 | ENSG00000000005.5 | | chr20 | 50934866 | 50958550 | 3 | uc002xvw.2 | ENSG00000000419.12 | | chr1 | 169853073 | 169893959 | 4 | uc001ggs.5 | ENSG00000000457.13 | | chr1 | 169795048 | 169854080 | 5 | uc001ggp.4 | ENSG00000000460.16 | +-------+------------+-----------+-----------+------------+--------------------+ However, there seems to be no obvious way to link the knownCanonical table to the refGene, hgFixed.gbCdnaInfo and hgFixed.refLink tables used above. So, how can I modify my 1st query (or, if necessary write a new one) so that my results also include the accession of the gene's canonical transcript? Answer: You can link them with the kgXref table, since kgXref.refseq == refGene.name and kgXref.kgID == knownCanonical.transcript. 
Since it seems that knownCanonical.transcript is what you want anyway, you don't even need to join on it: mysql --user=genomep --password=password --host=genome-mysql.cse.ucsc.edu \ -A -D hg38 -e 'select concat(t.name, ".", i.version) name, x.kgID, \ k.locusLinkId as "EntrezId", t.chrom, t.strand, t.txStart, \ t.txEnd, t.cdsStart, t.cdsEnd, t.exonCount, t.exonStarts, \ t.exonEnds, t.score, t.name2 from refGene t join (hgFixed.gbCdnaInfo i, hgFixed.refLink k, kgXref x) \ on (t.name = i.acc and t.name = k.mrnaAcc and t.name = x.refseq) limit 10' The output is then: +-------------+------------+-----------+-------+--------+---------+--------+----------+--------+-----------+--------------------+--------------------+-------+------------+ | name | kgID | EntrezId | chrom | strand | txStart | txEnd | cdsStart | cdsEnd | exonCount | exonStarts | exonEnds | score | name2 | +-------------+------------+-----------+-------+--------+---------+--------+----------+--------+-----------+--------------------+--------------------+-------+------------+ | NR_106918.1 | uc031tla.1 | 102466751 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-1 | | NR_107062.1 | uc031tlm.1 | 102465909 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-2 | | NR_107063.1 | uc032cta.1 | 102465910 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-3 | | NR_128720.1 | uc032dmn.1 | 103504738 | chr1 | - | 17368 | 17436 | 17436 | 17436 | 1 | 17368, | 17436, | 0 | MIR6859-4 | | NR_036051.1 | uc031tlb.1 | 100302278 | chr1 | + | 30365 | 30503 | 30503 | 30503 | 1 | 30365, | 30503, | 0 | MIR1302-2 | | NR_036266.1 | uc033cjs.1 | 100422831 | chr1 | + | 30365 | 30503 | 30503 | 30503 | 1 | 30365, | 30503, | 0 | MIR1302-9 | | NR_036267.1 | uc032csz.1 | 100422834 | chr1 | + | 30365 | 30503 | 30503 | 30503 | 1 | 30365, | 30503, | 0 | MIR1302-10 | | NR_036268.1 | uc032hiw.1 | 100422919 | chr1 | + | 30365 | 30503 | 30503 | 30503 | 1 | 30365, | 
30503, | 0 | MIR1302-11 | | NR_026822.1 | uc001aak.4 | 654835 | chr1 | - | 34610 | 36081 | 36081 | 36081 | 3 | 34610,35276,35720, | 35174,35481,36081, | 0 | FAM138C | | NR_106918.1 | uc031tla.1 | 102466751 | chr1 | - | 187890 | 187958 | 187958 | 187958 | 1 | 187890, | 187958, | 0 | MIR6859-1 | +-------------+------------+-----------+-------+--------+---------+--------+----------+--------+-----------+--------------------+--------------------+-------+------------+ If you want the protein ID then join knownCanonical on x.kgID.
{ "domain": "bioinformatics.stackexchange", "id": 189, "tags": "public-databases, identifiers, ucsc" }
How do I derive this formula?
Question: How do I derive the range of a projectile formula? $$d = \frac{v\cos\theta}{g} \left( v\sin\theta + \sqrt{v^2 \sin^2\theta+ 2gy_0} \right)$$ Answer: You can derive this from $t = d/v_{0x}$, where $v_{0x} = v\cos\theta$ is the horizontal velocity, combined with the vertical equation of motion. For more check out the following: https://en.wikipedia.org/wiki/Projectile_motion#Maximum_distance_of_projectile
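In full, with launch speed $v$, angle $\theta$, and initial height $y_0$ (standard constant-acceleration kinematics): set the height to zero, take the positive root for the landing time, and multiply by the horizontal velocity.

```latex
y(t) = y_0 + (v\sin\theta)\,t - \tfrac{1}{2} g t^{2} = 0
\;\Longrightarrow\;
t_{\text{land}} = \frac{v\sin\theta + \sqrt{v^{2}\sin^{2}\theta + 2 g y_0}}{g},
\qquad
d = (v\cos\theta)\, t_{\text{land}}
  = \frac{v\cos\theta}{g}\left( v\sin\theta + \sqrt{v^{2}\sin^{2}\theta + 2 g y_0} \right).
```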
{ "domain": "physics.stackexchange", "id": 77495, "tags": "homework-and-exercises, kinematics, projectile" }
Why aren't human babies considered larvae?
Question: I've seen people who are anti-baby describe babies, or children in general, as human larvae. This is generally done in order to make them seem weird. While I feel like this statement is wrong, I can't quite put my finger on exactly why. Many definitions of "larva" specify that they must metamorphosize, or go through significant morphological changes on the path to adulthood. However, tadpoles gradually change form, rather than metamorphosizing, and hemimetabolic insects have larval instars that aren't substantially different from adults, aside from being sexually immature and smaller. "Sexually immature and smaller", of course, also describes babies fairly well. Unfortunately, I'm no biologist and my knowledge of what defines animal young as larval vs. non-larval is rudimentary, at best, so I'm not sure if there's a more precise definition of "larva" than the one I found on wikipedia, which would disqualify human babies. What traits differentiate larva from non-larva, and which of those traits to babies lack? Answer: For animals with a clear larval stage, the presence of such a stage indicates that there is a metamorphosis, a change in body morphology at some point in development. Metamorphosis does not necessarily refer to an 'instant' transformation or one that requires a pupation step or something similar; gradual change can still be metamorphosis, as long as there is a clear 'before' and 'after.' Human infants, on the other hand, don't go through much of a metamorphosis: sure, they grow quite a bit, but their general body plan does not change. Contrast this with insects, or your tadpole example, and it is clear there are major body plan changes from larvae to adult stages. The Wikipedia page on larvae describes the characteristics of larvae fairly clearly. Like most traits in biology that vary across taxa, you are likely to find some intermediate creatures where the presence of a larval stage is somewhat controversial or depends on opinion. 
There could be a difference of opinion on how much of a change is sufficient to describe a juvenile form as a larva. Humans don't have a postnatal stage that approaches that potential boundary. The part in your question about people who are "anti-baby" sounds a bit judgmental, but I think what you are describing is just a phrasing meant to be somewhat humorous and to evoke images of "gross" larvae; I wouldn't take that for any biological meaning and I wouldn't focus your time on proving it "wrong" - the actual difference of opinion you have is something entirely different.
{ "domain": "biology.stackexchange", "id": 9704, "tags": "human-biology, development" }
Find MST on grid graph with only weight of 1 and 2 in $O(|V|+|E|)$
Question: Given a grid graph $G=(V,E)$ which has only two different integer costs/weights of 1 and 2. Find a Minimum Spanning Tree in $O(|V|+|E|)$. I tried the following: Changing Kruskal to use a counting sort in $O(|E|)$. But can I say that this results in $O(|E|+|V|)$? Since Kruskal would normally be ${\displaystyle O(T_{sort}(|E|)+|E|\cdot \alpha (|V|))} =O(|E|)$ when $\alpha (|V|) \in O(1)$. Can this be stated for this case? I lack detailed understanding of the inverse Ackermann behaviour. Another possibility: changing Prim so that I use a priority queue which supports del_min, decreaseKey and insert in $O(1)$. I thought about using two simple stacks or queues and only doing del_min from the one which holds the weight-1 entries, but decreaseKey seems not efficient since I have to loop through the lists to find the elements. So maybe combine this with some kind of hash mapping to directly access each element in $O(1)$ for decreaseKey? Both seem really close to the actual result, though I am struggling to see the right solution for this case. Answer: Let $n=|V|$ and $m=|E|$. Intuitively you want to return the union of the edges in 1) a maximal spanning forest $F$ of the graph induced by the edges of weight $1$, with 2) a maximal spanning forest $F'$ of the graph obtained by contracting each tree in $F$ into a single vertex (where each edge in $F'$ actually represents an edge of $G$). Some care is required to attain a running time of $O(n)$. The details are as follows. Let $G_1$ be the subgraph of $G$ induced by the edges of weight $1$. Let $C_1, \dots, C_k$ be the connected components of $G_1$. For each $C_i$, compute any spanning tree $T_i = (V_i, E_i)$ of $C_i$. This requires $O(m)=O(n)$ time in total. For each edge $e=(u,v)$ of weight $2$ in $G$ let $i$ and $j$ be such that $u \in C_i$ and $v \in C_j$. Let $k(e) = (\min\{i,j\}, \max\{i,j\})$. Sort the edges of $G$ in nondecreasing order of $k(\cdot)$, keeping at most one edge for each value of $k(\cdot)$. 
Let $S$ be the resulting ordered set of edges. Notice that $S$ can be found in time $O(m)=O(n)$ using radix sort. Create a graph $G_2$ with vertex set $\{1, \dots, k\}$ and edge set $\{ k(e) : e \in S \}$. For an edge $(i,j)$ in $G_2$ let $\ell(i,j)$ be an edge $e$ in $G$ such that $k(e) = (i,j)$. This label $\ell(i,j)$ can be stored along with the edge $(i,j)$ itself, so that given $(i,j)$ we can find $\ell(i,j)$ in $O(1)$ time. This step also requires $O(n)$ time. Finally, compute any spanning tree $T' = (V', E')$ of $G_2$. An MST of $G$ is the tree induced by the edges in $\left( \bigcup_{i=1}^k E_i \right) \cup \{ \ell(i,j) \mid (i,j) \in E'\}$. Overall, the whole algorithm takes $O(n)$ time. The same algorithm extends naturally to any constant number of distinct edge weights. Here is a visualization of the algorithm on the graph you proposed in the comments. Blue edges have weight $1$, red edges have weight $2$. The connected components of $G_1$ are highlighted in gray (and each connected component will be represented by a vertex in $G_2$). In this particular example $S$ contains all red edges of $G$ since each red edge connects a different (unordered) pair of connected components in $G_1$.
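A compact Python sketch of the three steps described in the answer (dictionaries and DFS stand in for the radix sort and the O(1)-time bookkeeping, so treat this as the idea rather than a strict O(n) implementation; the function name is mine):

```python
from collections import defaultdict

def mst_two_weights(n, edges):
    """MST of a connected graph whose edge weights are all 1 or 2.

    edges: list of (u, v, w) with w in {1, 2}; vertices are 0..n-1.
    Returns a list of original edges forming an MST.
    """
    # Step 1: spanning forest of the subgraph induced by weight-1 edges.
    adj1 = defaultdict(list)
    for u, v, w in edges:
        if w == 1:
            adj1[u].append((v, (u, v, w)))
            adj1[v].append((u, (u, v, w)))
    comp = [-1] * n            # component label C_i of each vertex
    mst = []
    c = 0
    for s in range(n):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            x = stack.pop()
            for y, e in adj1[x]:
                if comp[y] == -1:
                    comp[y] = c
                    mst.append(e)      # tree edge of this component
                    stack.append(y)
        c += 1
    # Step 2: keep one weight-2 edge per unordered component pair k(e).
    rep = {}
    for u, v, w in edges:
        if w == 2 and comp[u] != comp[v]:
            key = (min(comp[u], comp[v]), max(comp[u], comp[v]))
            rep.setdefault(key, (u, v, w))   # the label ell(i, j)
    # Step 3: spanning tree of the contracted graph G2.
    adj2 = defaultdict(list)
    for (i, j), e in rep.items():
        adj2[i].append((j, e))
        adj2[j].append((i, e))
    seen = [False] * c
    seen[0] = True
    stack = [0]
    while stack:
        i = stack.pop()
        for j, e in adj2[i]:
            if not seen[j]:
                seen[j] = True
                mst.append(e)
                stack.append(j)
    return mst
```

For example, on a 4-cycle with two weight-1 edges and two weight-2 edges, the result has 3 edges of total weight 4.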
{ "domain": "cs.stackexchange", "id": 17761, "tags": "weighted-graphs, minimum-spanning-tree, prims-algorithm" }
By what factor would you have to slow down time for water to feel like glass?
Question: I have been told that though glass seems like a solid, it is somehow, in theory, a liquid -- but is just somehow a liquid that is so thick that it appears to be solid. (Of course --- if this premise to my question is an urban myth then let me know, and that will qualify as an answer.) My question is this ---- by what factor would you have to slow down time if you want regular water to appear as though solid to you the way glass does? Answer: Contrary to popular misconception, below a specific temperature, glasses do not flow. At all. A glass by definition is a solid sans repeating crystalline structure. Anything which flows (see the "pitch-drop experiment", which produces a drop roughly once a decade) is a liquid, however viscous. Liquid glasses tend to have reasonably high viscosity, but once they freeze, they're solid and do not flow or deform. So the strict answer to your question about water and time is that you'll need to freeze time to make liquid water behave like solid glass.
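That said, if one insists on the naive "very viscous liquid" picture, a back-of-envelope time factor falls out of the ratio of viscosities. Both numbers below are assumed reference figures, not measurements: $10^{12}$ Pa·s is the conventional viscosity used to define the glass transition, and $10^{-3}$ Pa·s is water at room temperature:

```python
# Back-of-envelope only: both viscosities are assumed textbook values.
eta_glass = 1e12   # Pa*s, conventional viscosity at the glass transition
eta_water = 1e-3   # Pa*s, water at room temperature

slowdown = eta_glass / eta_water
print(f"naive slowdown factor: {slowdown:.0e}")  # ~1e15
```

So even under the (incorrect) flowing-glass premise, the factor would be of order $10^{15}$.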
{ "domain": "physics.stackexchange", "id": 22809, "tags": "classical-mechanics, fluid-dynamics, time, water" }
Orbits in a binary star system
Question: I know of three sets of stable orbits in a binary star system: orbiting closely around star A, orbiting closely around star B, or orbiting distantly around both stars (and their mutual center of gravity) at once. Is there a fourth set of stable orbits, around the mutual center of gravity, but inside both stars' orbits? Answer: The point you appear to refer to is called the Lagrangian point $L_1$. This point is a saddle in the field of gravity, hence not stable in the strict sense. Two other Lagrangian points, called $L_4$ and $L_5$, can be stable, provided the considered orbiting objects are of small mass in comparison to the two main bodies of the system, and the masses of the binary components are sufficiently different. According to theorem 4.1 of this paper, $L_4$ and $L_5$ are stable in all directions if and only if the mass ratio of the two main binary components $\frac{m_1}{m_2}\geq\frac{25+3\sqrt{69}}{2}\approx 24.9599$. According to theorem 3.1 of the same paper all Lagrangian points are stable in the z-direction, which is the direction perpendicular to the orbital plane of the binary system. (Credits for this corrected version go to user DylanSp.)
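The stability threshold quoted from the paper is easy to evaluate:

```python
import math

# L4/L5 linear-stability threshold for the mass ratio m1/m2
# (theorem 4.1 of the cited paper).
critical_ratio = (25 + 3 * math.sqrt(69)) / 2
print(round(critical_ratio, 4))  # 24.9599
```

For the Sun-Jupiter system the ratio is about 1047, comfortably above the threshold, which is consistent with the Trojan asteroids sitting at Jupiter's $L_4$ and $L_5$.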
{ "domain": "astronomy.stackexchange", "id": 1237, "tags": "orbit, binary-star" }
External work required to move charge
Question: There are 2 charges which are equal in magnitude and opposite in sign, and they are equidistant from the vertical axis AB. The voltage at both A and B would be 0. If I introduce a new charge at point A, what would be the external work required to get it to point B? In a case like this, I know that the electric field generated by the blue and red charges would be perpendicular to the movement along the vertical axis, so the work done by the electric field is 0. However, to move the charge from A to B, would there be any external work required (even though the charge is moving along an equipotential surface)? Answer: As you correctly note, the electric field will be perpendicular to the line AB and will not do any work on the test charge if it moves along that path. But there must be some constraint to keep it on that path. Assuming there is some constraint, let's now move the test charge from point A toward point B. To get the charge moving, you must do work (call it $W_1$) on it, exerting a force toward B as it moves in that direction ($W_1>0$). The charge is now moving along the line with kinetic energy $K=W_1$, and you want it to stop at point B. Again you do work on it ($W_2$), exerting a force opposite its velocity, so the kinetic energy decreases to zero, so $W_2=-K=-W_1$. The net work done by your outside force in moving the charge from rest at point A to rest at point B is zero: $$W_{\mathrm{net}}=W_1+W_2=W_1+(-W_1)=0.$$ On the other hand, if the charge doesn't stop at point B, the net work from A to B is not zero.
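The geometry is quick to check numerically. The coordinates and values below are assumed for illustration: $+q$ at $(d,0)$, $-q$ at $(-d,0)$, with the axis AB along the $y$-axis:

```python
k, q, d = 8.99e9, 1e-9, 0.5   # assumed illustrative values

def net_field_on_axis(y):
    """Net field of +q at (d, 0) and -q at (-d, 0), evaluated at (0, y)."""
    r3 = (d * d + y * y) ** 1.5            # same distance to both charges
    Ex = k * q * (0 - d) / r3 + k * (-q) * (0 - (-d)) / r3
    Ey = k * q * (y - 0) / r3 + k * (-q) * (y - 0) / r3   # cancels exactly
    return Ex, Ey

Ex, Ey = net_field_on_axis(0.3)
print(Ex, Ey)   # Ey == 0: the field is perpendicular to AB everywhere
```

Since the $y$-component vanishes at every point of the axis, $\int q\,\vec E \cdot d\vec\ell = 0$ along AB, matching the energy argument above.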
{ "domain": "physics.stackexchange", "id": 37907, "tags": "homework-and-exercises, electricity, electric-fields, charge" }
Determining the uncertainty of an autocorrelation
Question: My problem should probably be built up from the beginning, so let's start there. I performed a certain experiment 25 times. Every time, the experiment consists of 5000 measurements, and each measurement returns either 1 ('yes') or 0 ('no'). I know that each time I measure 1, the probability that this measurement is correct is $F_1$, while if I measure 0 I know that it's correct with probability $F_0$. Now, subsequently I want to compute the autocorrelation function of these measurements. I use the following formula with a slightly different normalization, but in principle that should not matter, because what I want to do is fit this autocorrelation to an exponential decay and find the decay time. To do so, I thought I had two options: Either I fit the 25 datasets separately, and then find the mean of the decay time $t_1$ and maybe say something about its variance, or I can find the mean autocorrelation of the 25 datasets, and fit that one. I've decided that the latter is probably the best option, as the individual datasets can be quite noisy with strange fluctuations, while the mean looks much cleaner. But now I'm stuck wondering what I should be doing to find the uncertainty in the fit. Should I have introduced some sort of uncertainty due to the imperfect measurements ($F_0$ and $F_1$), or should I have introduced some sort of standard error of the mean for the 'mean autocorrelation' data? That last one seems plausible, as of course there is also a standard deviation in the 25 datasets, but then I get a little confused as to how I should do that. The formula for the standard error of the mean seems to be $\frac{\sigma}{\sqrt{n}}$, but this is a bit unfair as in my 5000 measurements, the autocorrelation drops to 0 after around 50 measurements. For the fit I therefore also only use the first ~100 points of the autocorrelation, as there's no real need to fit the next 4000 points that are all pretty much equal to 0.
So should I instead only calculate the mean and standard error of the first 100 points? Would that be a fair way of finding the uncertainty? The subsequent fitting method will already give error bars for the $t_1$, but this of course heavily depends on the uncertainty in the data. A final side question: what quantitative measure of goodness of fit should I use for exponential decay? Reduced chi square, adjusted R squared, or something else entirely? I suppose this is not intended for dsp though, more for a statistics forum, so feel free to ignore it. Answer: As far as I can see the main question is: How to account for the uncertainty in the measurements of $X_t$ (i.e. $F$). One approach to do this would be to run some sort of Monte Carlo simulation on the data. I.e. for each experiment you can resample a large number of times (say 1000 or 10000) where each point has $1-F$ probability of being flipped. By calculating the auto-correlation of each resample you get a distribution of the correlation for each $k$. Then take the 95% (or another value) confidence interval to get an uncertainty in $R(k)$. It may also be possible to do this analytically but I'm not sure for binary data (my instinct says not or at least not simply). This uncertainty in $R$ should be accounted for in your fitting of the data when calculating the decay time. This can definitely be done analytically and may be implemented in whatever you are using to do the fit (I think matlab does). Alternatively a similar Monte Carlo simulation could be done but is probably more effort. If you think all the data follows the same trend (not unreasonable if it is a repeated experiment) you could calculate a mean for each auto-correlation distance as you suggest. This is effectively taking a manual Monte Carlo with 25 resamples. Then you can calculate the standard error as you suggest (note: $n=25$, not $5000$).
I would shy away from this approach as it ignores the uncertainty in your initial measurement, which you presumably have a good reason for giving.
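A sketch of the resampling idea, assuming for brevity a single flip probability $1-F$ for both outcomes (with $F_0 \neq F_1$ the flip probability would simply depend on the measured value):

```python
import numpy as np

def autocorr(y, max_lag):
    """Normalized autocorrelation R(k) for k = 0..max_lag-1."""
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.array([np.dot(y[:len(y) - k], y[k:]) / denom
                     for k in range(max_lag)])

def resampled_bounds(x, F, n_resamples=1000, max_lag=100, seed=0):
    """95% Monte Carlo band on R(k): flip each point with prob 1 - F."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    acs = np.empty((n_resamples, max_lag))
    for i in range(n_resamples):
        flips = rng.random(len(x)) < (1 - F)
        acs[i] = autocorr(np.where(flips, 1 - x, x), max_lag)
    return np.percentile(acs, [2.5, 97.5], axis=0)
```

The band at each lag can then be fed into a weighted exponential fit as per-point uncertainties.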
{ "domain": "dsp.stackexchange", "id": 1723, "tags": "autocorrelation" }
Why does the fundamental mode of a recorder disappear when you blow harder?
Question: I have a simple recorder, like this: When I cover all the holes and blow gently, it blows at about 550 Hz, but when I blow more forcefully, it jumps an octave and blows 1100 Hz. What's the physical difference between blowing gently and blowing forcefully that the recorder suddenly jumps an octave and the fundamental mode is no longer audible? Answer: Overblowing is a phenomenon that exists in all wind instruments. The details of the physics are different from one instrument to the next, but there is a broad similarity, which is that it's the result of a nonlinear interaction between the air column and whatever is driving the air column. The recorder is in fact one of the simpler examples to understand. The mechanism that drives the air column is called an edge tone. The mouthpiece of the recorder contains a knife edge. The stream of air encounters the knife edge, but doesn't split smoothly onto the two sides. Instead, it forms a vortex which carries the energy to one side of the edge. However, a feedback process then causes this pattern of flow to deflect until it flips to the other side of the edge. So this is a highly nonlinear system. It's binary. Air either flows to one side of the edge or the other, and if we label the two states 0 and 1, we get a pattern over time that looks like 0000111100001111... You could graph it as (approximately) a square wave. When this edge-tone system is coupled to an air column, it's forced to accommodate its frequency to the resonant frequencies of the column. For example, if you imagine a pulse emitted from the edge, this pulse then travels down the tube, is partially reflected at the open end, returns, and slaps against the air in the edge-tone system, influencing its evolution. There is a tendency for the edge-tone system's vibrations to become locked into one of the resonant frequencies of the column. In overblowing, the pattern switches from 0000111100001111... to 00110011...
The square wave doubles its frequency from the fundamental frequency $f_0$ to the first harmonic $2f_0$. The original square wave contained Fourier components $f_0$, $2f_0$, $3f_0$, ... The new one contains $2f_0$, $4f_0$, $6f_0$, ... As you observed on the oscilloscope, $f_0$ is absent from the overblown spectrum. The ear's sensation of pitch is based on the frequency of the fundamental, so we hear a jump in pitch. Bamboo flutes and whistles also use edge tones, so exactly the same analysis applies. I think the original classic work on this was an analysis of organ-pipe acoustics in a German-language paper by Cremer and Ising. In general, the edge-tone system could be replaced by a reed, lip reed (as in brass instruments), or air reed (flute). There can be overblowing at the octave, or, in instruments such as the clarinet that have asymmetric boundary conditions, at an octave plus a fifth (i.e., a factor of 3 in frequency). On the saxophone, for example, a skilled player using a stiff reed can overblow to frequencies corresponding to several higher harmonics beyond the first. References: Cremer and Ising, "Die selbsterregten Schwingungen von Orgelpfeifen," Acustica 19 (1967) 143. Fletcher, "Sound production by organ flue pipes," J Acoust Soc Am 60 (1976) 926, http://www.ausgo.unsw.edu.au/music/people/publications/Fletcher1976.pdf Backus, The acoustical foundations of music, Norton, 1969, pp. 184-186.
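The spectral claim is easy to verify numerically: take the two binary flow patterns from the answer and compare their Fourier components (the sampling choices below are arbitrary):

```python
import numpy as np

# 16 periods of each edge-tone flow pattern, 8 samples per f0 period.
fundamental = np.tile([0, 0, 0, 0, 1, 1, 1, 1], 16).astype(float)
overblown   = np.tile([0, 0, 1, 1, 0, 0, 1, 1], 16).astype(float)

def spectrum(x):
    """Magnitude spectrum, normalized so the largest component is 1."""
    mag = np.abs(np.fft.rfft(x - x.mean()))
    return mag / mag.max()

s1, s2 = spectrum(fundamental), spectrum(overblown)
# With 128 samples, f0 sits in bin 16 and 2*f0 in bin 32.
print(s1[16], s2[16], s2[32])  # strong, ~0, strong
```

The overblown pattern has no component at $f_0$ at all; its lowest component is $2f_0$, exactly as described above.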
{ "domain": "physics.stackexchange", "id": 7466, "tags": "fluid-dynamics, acoustics" }
structure constants in $U(N)$ Yang-Mills Theory (t'Hooft)
Question: Consider the Yang-Mills action $S = -\frac{1}{2g^{2}} \int d^{4}x\ \mathrm{Tr}\left( F_{\mu \nu} F^{\mu \nu} \right)$, where $F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} - i [A_{\mu},A_{\nu}]$ is the field strength tensor. Let the gauge group be $U(N)$, so that $A_{\mu}$ belongs to the adjoint representation of $U(N)$. I'm trying to do a problem which says the following: the adjoint representation of $U(N)$ can be thought of as products of fundamental and anti-fundamental representations - so $\bar{N} \times N$. They say, adopt the following way to write the group index of $A_{\mu}^{a}$, such that $a = (\bar{j}, k)$. In this way, the above means the following in terms of matrix elements: $$ A^{a}_{\mu} \ \to \ A^{(\bar{j},k)}_{\mu} = \left[ A_{\mu} \right]_{jk} $$ I'm asked to compute the structure constants $f^{(\bar{m},n)}_{(\bar{j},k),(\bar{p},q)}$ which are defined by the expression: $$ [ A_{\mu}, A_{\nu} ]^{(\bar{m}, n)} = i f^{(\bar{m},n)}_{(\bar{j},k),(\bar{p},q)} A^{(\bar{j},k)}_{\mu} A^{(\bar{p},q)}_{\nu} $$ What exactly am I trying to do here? My Attempt: My thinking is that I pick some set of generators $\{ T^{a} \}$ for $U(N)$, which defines a set of structure constants $\{f_{abc}\}$ (relative to my choice of generators). The generators of the adjoint representation $\{T_{\mathrm{AD}}^{a}\}$ have elements determined by $[T_{\mathrm{AD}}^{a}]_{bc} = i f_{abc}$. Then I think I can expand $A_{\mu}$ as: $$ A_{\mu} = \sum _{a} A_{\mu}^{a} T_{\mathrm{AD}}^{a} $$ But from here I don't have any idea what I am doing... Am I supposed to write the structure constants $f^{(\bar{m},n)}_{(\bar{j},k),(\bar{p},q)}$ in terms of the $f_{abc}$? (Eventually this problem is to lead me towards the 't Hooft double-line formalism.) EDIT: It occurred to me that maybe I need to write the $f^{(\bar{m},n)}_{(\bar{j},k),(\bar{p},q)}$ in terms of the structure constants of the fundamental and anti-fundamental representations?
Answer: Everything happens in the Lie algebra so consider $u$ to be the relevant Lie algebra (I'm tired of typing mathfrak...). The adjoint representation is the representation $\rho$ given by the Lie algebra acting on itself by the Lie bracket: $\rho:u\longrightarrow M(u),\quad X\longmapsto [X,\,\cdot\,].$ It is a representation because the Lie bracket is linear and $u$ itself is a vector space, hence the action can be represented as a matrix. Remember that this map is an algebra homomorphism. Thus algebraic relations are preserved. Now, take a basis $\{T^i\}_{i\in I}$ of $u$ with $[T^i,T^j]=f_{k}^{ij} T^k$. What we want is to express this last algebraic relation in terms of objects in the adjoint representation. Let us first express these matrices as their matrix elements: For $X\in u$, $X=X_i T^i \implies \rho(X)_{i}^j=[X,T^j]_i=X_k f_i^{kj}$. In particular, for $X=T^k$, we have that $\rho(T^k)_{i}^j=f_i^{kj}.$ So we can write $\rho([T^a,T^b])_{i}^j=f_{c}^{ab}\rho(T^c)_{i}^j=f_c^{ab}f_i^{cj}$. And knowing that $(\rho(T^a)\rho(T^b))_i^j=f_i^{ak}f_k^{bj}$, we have that $\rho([X_n T^n,Y_m T^m])_i^j=X_n Y_m\, \rho([T^n,T^m])_i^j=-X_n Y_m f_i^{jk} f_k^{nm}=X_n Y_m (f_i^{mk}f_k^{nj}-f_i^{nk}f_k^{mj})$, using the Jacobi identity. So with $\rho(X)_a^b\, \rho(Y)_c^{d} F_{(ac)i}^{(bd)j}=X_n Y_m f_a^{nb}f_c^{m d}F_{(ac)i}^{(bd)j}\implies F_{(bd)i}^{(ac)j}=(\delta_{d}^a\delta_{b}^j\delta_{i}^c-\delta_{b}^c\delta_{d}^j\delta_{i}^a)$
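The collapse of the structure constants into Kronecker deltas can be checked directly. Assuming (consistent with writing $A^{(\bar j,k)}_\mu = [A_\mu]_{jk}$) that the $U(N)$ generators are the matrix units $(E_{jk})_{mn}=\delta_{jm}\delta_{kn}$, the commutator closes with purely delta-valued coefficients:

```python
import numpy as np

N = 3  # any N works

def E(j, k):
    """Matrix unit E_jk: the generator labelled (j-bar, k)."""
    m = np.zeros((N, N))
    m[j, k] = 1.0
    return m

# [E_jk, E_pq] = delta_{kp} E_jq - delta_{qj} E_pk: the structure
# constants collapse to products of Kronecker deltas, which is what
# the 't Hooft double-line notation encodes graphically.
for j in range(N):
    for k in range(N):
        for p in range(N):
            for q in range(N):
                lhs = E(j, k) @ E(p, q) - E(p, q) @ E(j, k)
                rhs = (k == p) * E(j, q) - (q == j) * E(p, k)
                assert np.allclose(lhs, rhs)
print("commutator identity holds for all index choices")
```

Each Kronecker delta ties a fundamental index to an anti-fundamental one, i.e. one line of the double-line propagator.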
{ "domain": "physics.stackexchange", "id": 36046, "tags": "homework-and-exercises, quantum-field-theory, gauge-theory, group-representations, yang-mills" }
Rosserial Hydro unpack requires string length 4
Question: Hi, I installed rosserial hydro for Arduino and ros_lib as written in the wiki. When I try the ros hello world example found in ros_lib it compiles and works fine, but when I try a ros_lib example with a subscriber, whatever the topic name is, I get (from the pubsub ros example): [INFO] [WallTime: 1380287883.995053] ROS Serial Python Node [INFO] [WallTime: 1380287884.001497] Connecting to /dev/ttyACM0 at 57600 baud [INFO] [WallTime: 1380287886.607383] Note: publish buffer size is 512 bytes [INFO] [WallTime: 1380287886.607882] Setup publisher on chatter [std_msgs/String] [ERROR] [WallTime: 1380287886.609524] Creation of subscriber failed: unpack requires a string argument of length 4. So the publisher is created but the subscriber fails! I use hydro on Ubuntu 12.04 64-bit. Is there a way to solve it? I'm stuck here! Can someone help me? Thanks! Originally posted by GioRos on ROS Answers with karma: 11 on 2013-09-27 Post score: 1 Answer: I believe it is temporarily broken, but they are at work on fixing it (see https://github.com/ros-drivers/rosserial/issues/76) -- in the meantime, you could do a git checkout, and roll back to the 0.5.2 tag to get back to the working version until a patch is released. In your catkin workspace you would do:
git clone https://github.com/ros-drivers/rosserial.git
cd rosserial
git checkout 0.5.2
Originally posted by fergs with karma: 13902 on 2013-10-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by tonybaltovski on 2013-10-02: When I roll back to 0.5.2, the rosserial_xbee gives me the following error: Tried to publish before configured, topic id 125 but it works fine using rosserial_python. I am using the same microcontroller with the same code on it. However, it works on 0.5.1. Should I open an issue on GitHub?
{ "domain": "robotics.stackexchange", "id": 15686, "tags": "arduino, rosserial, ros-hydro" }
Why is batch gradient descent performing worse than stochastic and minibatch gradient descent?
Question: I have implemented a neural network from scratch (only using numpy) and I am having problems understanding why the results are so different between stochastic/minibatch gradient descent and batch gradient descent: The training data is a collection of point coordinates (x,y). The labels are 0s or 1s (below or above the parabola). As a test, I am doing a classification task. My objective is to make the NN learn which points are above the parabola (yellow) and which points are below the parabola (purple). Here is the link to the notebook: https://github.com/Pign4/ScratchML/blob/master/Neural%20Network.ipynb Why is the batch gradient descent performing so poorly with respect to the other two methods? Is it a bug? But how can it be since the code is almost identical to the minibatch gradient descent? I am using the same (randomly chosen by trial and error) hyperparameters for all three neural networks. Does batch gradient descent need a more accurate technique to find the correct hyperparameters? If yes, why so? Answer: Assuming the problem at hand is a classification (Above or Below parabola), this is probably because of the nature of batch gradient descent. Since the gradient is being calculated on the whole batch, it tends to work well only on convex loss functions. The reason batch gradient descent is not working well is probably the high number of local minima in the error manifold, so it ends up learning nothing relevant. You can change the loss function and observe the change in results; they might not be great (batch GD usually isn't) but you'll be able to see differences. You can check this out for more on the differences between the three. Hope this helped!
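For readers comparing the three schemes, they differ only in how many samples feed each gradient step. A minimal sketch, using a linear model and squared loss purely for illustration (the original notebook's network and loss are different):

```python
import numpy as np

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(X, y, lr=0.1, epochs=100, batch_size=None, seed=0):
    """batch_size=None -> batch GD, =1 -> stochastic, else minibatch."""
    rng = np.random.default_rng(seed)
    n = len(y)
    bs = n if batch_size is None else batch_size
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(n)       # reshuffle every epoch
        for s in range(0, n, bs):
            b = idx[s:s + bs]
            w = w - lr * grad(w, X[b], y[b])
    return w

X = np.linspace(0.1, 1.0, 50).reshape(-1, 1)
y = 2.0 * X[:, 0]
print(train(X, y))                  # full-batch GD
print(train(X, y, batch_size=8))    # minibatch GD
```

On this convex toy problem all three variants reach the same answer; the differences the question observes only appear on non-convex losses, where the noise in small-batch gradients helps escape poor minima.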
{ "domain": "ai.stackexchange", "id": 1399, "tags": "python, gradient-descent" }
Why do square shaped cups spill easier than round cups?
Question: I've noticed that when I use a cup that is square shaped with convex sides, it spills more easily than a circular cup. Why does this happen? What is the most spill-prone cup shape? What is the most spill-resistant basic cup shape? Answer: When a square cup is displaced slightly from its position, the water inside it moves to a diagonal corner of the cup. If we look at the direction of motion of the fluid, we will see that the fluid moves from the lower corner to the upper diagonal corner. You can imagine your room as the cup and consider water in it. Suppose the room is slightly disturbed. The water will move towards a corner (visualize it in your room). The fluid hitting the sides of the walls gets reflected (actually it experiences a normal force) and adds to the velocity of the fluid in the diagonal direction. Also, as the fluid moves towards the corner, the cross-sectional area perpendicular to the diagonal keeps decreasing (imagine the diagonal of your room and visualize its perpendicular planes). However, your cup has no ceiling to bind the fluid, so the fluid moves over the top of the cup and is spilled. The closer the cup shape is to a sphere, the more spill-resistant it is (as then it will not have corners in any direction, and the rate of decrease of the perpendicular plane's cross-sectional area will be the least).
{ "domain": "physics.stackexchange", "id": 18737, "tags": "fluid-dynamics, water" }
Collapse of wave function
Question: Suppose a quantum system is initially at a state $\psi_0$ and that a measurement of an observable $f$ is performed. Immediately after the measurement, the system will be in a state that is an eigenvector of the operator $\hat f$ associated to $f$, the eigenvalue being the result of the measurement. My question is the following: What if the candidate for this eigenvector does not represent a valid state? For example, the space of states of a 1D-system is $L^2(\mathbb{R})$ and there are operators acting on the space of all functions on $\mathbb{R}$ whose eigenvectors may not belong to $L^2(\mathbb{R})$. How does the wave function collapse to such an eigenvector? Answer: Every observable is described by a self-adjoint operator $A : D(A) \to {\cal H}$, where $D(A)$ is a dense subspace of $\cal H$ and coincides with $\cal H$ if and only if the set $\sigma(A)\subset \mathbb R$ (the spectrum of $A$) of values which $A$ may attain is bounded. The spectral theorem says that $A$ has an associated projection-valued measure (PVM). That is a map associating every (Borel) subset $E\subset \sigma(A)$, for instance $E= [a,b]$ or a single point $E= \{\lambda\}$, with an orthogonal projector $P_E : \cal H \to \cal H$. It turns out that the "formal eigenvectors", like $\delta$ functions, are always associated with the continuous part of $\sigma(A)$, whereas the proper eigenvectors $\psi_\lambda$ are associated with the elements $\lambda$ of the point-spectrum part of $\sigma(A)$, which are the proper eigenvalues $\lambda$ of $A$. Regarding outcomes $E$ of the measurement procedure belonging to the continuous spectrum, what one actually measures is an interval $E= [a,b]$.
In this situation the postulate of collapse, known as the von Neumann-Lüders postulate, states that, if a pure state is represented by the normalized vector $\psi$ before the measurement of $A$ and the outcome of measurement is $E$, the post-measurement pure state is $$\psi_E = \frac{P_E \psi}{||P_E\psi||}\:.\tag{1}$$ The probability to obtain $E$ in the state $\psi$ if measuring $A$ is, in particular, $$||P_E\psi||^2 \tag{2}$$ Remarks. (1) This postulate concerns non-destructive idealized measurement processes. In the experimental practice with realistic instruments, the post-measurement state is described by a quantum operation, which is a more sophisticated mathematical tool extending the notion of PVM. (2) The von Neumann-Lüders postulate includes the case of a measurement of a discrete value $\lambda$ which belongs to the point spectrum, i.e., a proper eigenvalue. In the absence of degeneracy, $$P_{\{\lambda\}} = |\psi_\lambda \rangle \langle \psi_\lambda |\:.$$ and applying (1) and (2) you obtain the standard elementary results. If the eigenspace of $\lambda$ has dimension $d \leq +\infty$ and thus there is a Hilbert basis of eigenvectors $\{\psi_{\lambda k}\}$, more generally, $$P_{\{\lambda\}} = \sum_{k=1}^d|\psi_{\lambda k} \rangle \langle \psi_{\lambda k} |\:.$$ (3) The von Neumann-Lüders postulate can be extended to mixed states trivially. In this context it has a natural meaning in terms of conditional probability over the non-Boolean quantum lattice of elementary events (see my answer here)
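Equations (1) and (2) are easy to play with in a finite-dimensional toy model; the three-level state below is an arbitrary choice:

```python
import numpy as np

# Toy qutrit: measure whether the outcome lies in E = span{|0>, |1>}.
psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
P_E = np.diag([1.0, 1.0, 0.0])       # orthogonal projector P_E

prob = np.linalg.norm(P_E @ psi) ** 2            # eq. (2): ||P_E psi||^2
psi_E = P_E @ psi / np.linalg.norm(P_E @ psi)    # eq. (1): Lueders rule

print(prob)    # 2/3
print(psi_E)   # (1, 1, 0)/sqrt(2)
```

Note that the post-measurement state stays a legitimate normalized vector of the Hilbert space, which is exactly how the formalism sidesteps the "non-normalizable eigenvector" worry in the question.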
{ "domain": "physics.stackexchange", "id": 33644, "tags": "quantum-mechanics, wavefunction, measurement-problem, observables, wavefunction-collapse" }
Tension in pendulum
Question: I am asked to calculate the tension in the rope of a pendulum at (a) its initial position as well as at (b) its lowest position. $L = 3\,\mathrm{m}$, $\alpha = 10^\circ$, mass $= 2\,\mathrm{kg}$. (a) For the initial point I used the equation $T=mg \cos (\alpha)$ and got the answer $T=19.3\,\mathrm{N}$. (b) To calculate the tension at the lowest point I need to use the equation $T=mg +mv^2/r$; however, since I am not given the velocity I can't use that equation, so how do I calculate the tension at (b)? Answer: Since this is a homework question, I won't provide the full solution, but here is a guide. Gravitational potential energy is converted to kinetic energy. Thus, we apply conservation of energy to obtain the velocity: $$mgL(1- \cos{\alpha}) = \frac{1}{2}mv^2$$ You should be able to calculate the tension from there.
{ "domain": "physics.stackexchange", "id": 51601, "tags": "homework-and-exercises, newtonian-mechanics, harmonic-oscillator, free-body-diagram, string" }
Given the logical address, how to extract the page number?
Question: I am studying Computer Systems. I have the following question and its answer: Given the logical address 0xAEF9 (in hexadecimal) with a page size of 256 bytes, what is the page number? Answer: 0xAE (I found this answer on the web, but I want to know how I can figure it out myself. How can I figure out the page number for a given logical address?) Answer: Your logical address is made of 16 bits, which means you have an addressable space of $2^{16}$ bytes. The page size is typically a power of 2, $2^n$; in this case $2^n = 256 \Rightarrow n = 8$. The width of the page number is found by subtracting $n$ from the width of your logical address: $16 - 8 = 8$, so the most significant 8 bits of the address are your missing page number, that is 0xAE
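In code the split is one shift for the page number and one mask for the offset:

```python
logical_address = 0xAEF9
page_size = 256                           # 2**8 -> 8 offset bits
offset_bits = page_size.bit_length() - 1  # 8

page_number = logical_address >> offset_bits       # high bits
offset = logical_address & (page_size - 1)         # low bits

print(hex(page_number), hex(offset))   # 0xae 0xf9
```

Equivalently, in hexadecimal each digit is 4 bits, so with 8 offset bits the last two hex digits (F9) are the offset and whatever remains (AE) is the page number.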
{ "domain": "cs.stackexchange", "id": 7934, "tags": "operating-systems, memory-management, paging, virtual-memory" }
Calculate amount of necessary ingredients
Question: I have started to learn C and decided to recreate my bakery task in C. As I am new to the language, I am unsure if I have approached the task in the right way using structs. Feedback on the style of the code would also be appreciated. #include <stdio.h> double cup_ingredients[4] = {4.0,0.1,12.0,14.0}; // Amount of each ingredient for 1 cupcake = {Butter, eggs, flour, sugar} double lemon_ingredients[4] = {80.0,4.5,240.0,300.0}; // Amount of each ingredient for 1 lemon cake = {Butter, eggs, flour, sugar} double total[4]; double cup_req; double lemon_req; struct Bags { int big_bag; int med_bag; int small_bag; }; void calc_bag(double total_ingredient, struct Bags* bag_sizes, struct Bags* type); int main() { printf("How many cupcakes would you like? "); scanf("%lf", &cup_req); for (int x = 0; x<cup_req; x++){ // For the number of cupcakes required: for (int y = 0; y<4; y++){ // For each ingredient total[y] += cup_ingredients[y]; // Add the amount of each ingredient to the total amount of that ingredient } } printf("How many lemon cakes would you like? 
"); scanf("%lf", &lemon_req); for (int x = 0; x<lemon_req; x++){ // For the number of lemon cakes: for (int y = 0; y<4; y++){ // For each ingredient total[y] += lemon_ingredients[y]; // Add the amount of each ingredient to the total amount of that ingredient } } //Structs for the amount of each ingredient a bag can hold struct Bags Butter_size = {.big_bag = 500, .med_bag = 250, .small_bag = 125}; struct Bags Egg_size = {.big_bag = 12, .med_bag = 10, .small_bag = 6}; struct Bags Flour_size = {.big_bag = 750, .med_bag = 500, .small_bag = 250}; struct Bags Sugar_size = {.big_bag = 600, .med_bag = 400, .small_bag = 200}; //Set the bags required to 0 struct Bags Butter_req = {0,0,0}; struct Bags Egg_req = {0,0,0}; struct Bags Flour_req = {0,0,0}; struct Bags Sugar_req = {0,0,0}; //Calculate the amount of each ingredient bag required calc_bag(total[0], &Butter_size, &Butter_req); calc_bag(total[1], &Egg_size, &Egg_req); calc_bag(total[2], &Flour_size, &Flour_req); calc_bag(total[3], &Sugar_size, &Sugar_req); printf("\nButter: %d large bags, %d medium bags, %d small bags.", Butter_req.big_bag, Butter_req.med_bag, Butter_req.small_bag); printf("\nEgg: %d large bags, %d medium bags, %d small bags.", Egg_req.big_bag, Egg_req.med_bag, Egg_req.small_bag); printf("\nFlour: %d large bags, %d medium bags, %d small bags.", Flour_req.big_bag, Flour_req.med_bag, Flour_req.small_bag); printf("\nSugar: %d large bags, %d medium bags, %d small bags.", Sugar_req.big_bag, Sugar_req.med_bag, Sugar_req.small_bag); } void calc_bag(double total_ingredient, struct Bags* bag_sizes, struct Bags* type){ while (total_ingredient > 0){ if (total_ingredient > bag_sizes->big_bag) { type->big_bag++; total_ingredient -= bag_sizes->big_bag; } else if (total_ingredient > bag_sizes->med_bag) { type->med_bag++; total_ingredient -= bag_sizes->med_bag; } else if (total_ingredient > bag_sizes->small_bag) { type->small_bag++; total_ingredient -= bag_sizes->small_bag; } else { type->small_bag++; total_ingredient 
= 0; } } } Answer: I see some things that may help you improve your code. Eliminate function prototypes by ordering If you put the calc_bag implementations above main in the source code, you don't need the function prototype. Avoid the use of global variables I see that cup_ingredients and lemon_ingredients, etc. are declared as global variables rather than as local variables. It's generally better to explicitly pass variables your function will need rather than using the vague implicit linkage of a global variable. In this case, these should all be in main rather than global. Initialize variables Global variables are initialized for you (to 0 for numeric variables), but local variables are not. For that reason, you should also get into the habit of initializing variables, ideally when they're declared. For example: double total[4] = {0.0, 0.0, 0.0, 0.0}; double cup_req = 0.0; double lemon_req = 0.0; Use const where practical The ingredients lists cup_ingredients and lemon_ingredients, as well as the bag capacities Butter_size, etc. should all be constant. For that reason, they should all be declared static const as in: static const struct Bags Butter_size = {.big_bag = 500, .med_bag = 250, .small_bag = 125}; Then the calc_bag function should be this: void calc_bag(double total_ingredient, const struct Bags* bag_sizes, struct Bags* type); Simplify by using a typedef The code you have isn't wrong, but it's often convenient to use a typedef for structures that are used frequently. In this case, I'd suggest that your Bags structure could be this: typedef struct bags_s { int big_bag; int med_bag; int small_bag; } Bags; Then instead of writing struct Bags everywhere, you can simply write Bags. Prefer multiplication to iteration Especially when using floating point numbers, it's most often better to multiply than to use iteration. For example, the code currently has this: printf("How many cupcakes would you like? 
"); scanf("%lf", &cup_req); for (int x = 0; x<cup_req; x++){ for (int y = 0; y<4; y++){ total[y] += cup_ingredients[y]; } } That could be replaced by this: for (int i = 0; i < 4; ++i) { total[i] += cup_req * cup_ingredients[i]; } Similarly, your calc_bag function could use division rather than iteration. Break up the code into smaller functions The main function is quite long and does a series of identifiable steps. Rather than having everything in one long function, it would be easier to read and maintain if each discrete step were its own function. I'd be inclined to divide it into separate input, calculation, and output stages, each with the appropriate function. Eliminate "magic values" The value 4 is sprinkled through the code, but it really ought to be a named constant instead. I'd give it a meaningful name like this: #define INGREDIENT_COUNT 4 Rethink your data structures The only difference between the cup_ingredients and lemon_ingredients is the name. They're parallel structures. This relationship could be made more clear by defining another structure which includes the name. One might write it like this: typedef struct recipe_s { char *name; double ingredients[INGREDIENT_COUNT]; } Recipe; With that structure in place, one might rewrite main like this: int main() { static const char *ingredient_name[INGREDIENT_COUNT] = { "butter", "eggs", "flour", "sugar", }; static const Recipe recipes[] = { { "cupcakes", {4.0,0.1,12.0,14.0} }, { "lemon cakes", {80.0,4.5,240.0,300.0} }, }; static const int recipe_count = sizeof(recipes) / sizeof(recipes[0]); Recipe total = { "total", {0.0, 0.0, 0.0, 0.0} }; for (int i = 0; i < recipe_count; ++i) { double qty; printf("How many %s would you like? 
", recipes[0].name); scanf("%lf", &qty); for (int j = 0; j < INGREDIENT_COUNT; ++j) { total.ingredients[j] += qty * recipes[i].ingredients[j]; } } //Structs for the amount of each ingredient a bag can hold static const Bags capacity[INGREDIENT_COUNT] = { { 500, 250, 125}, // butter { 12, 10, 6}, // eggs { 750, 500, 250}, // flour { 600, 400, 200}, // sugar }; // make a shopping list Bags shopping_list[INGREDIENT_COUNT] = { {0, 0, 0}, // butter {0, 0, 0}, // eggs {0, 0, 0}, // flour {0, 0, 0}, // sugar }; //Calculate the amount of each ingredient bag required for (int i = 0; i < INGREDIENT_COUNT; ++i) { calc_bag(total.ingredients[i], &capacity[i], &shopping_list[i]); printf("%s: %d large bags, %d medium bags, %d small bags.\n", ingredient_name[i], shopping_list[i].big_bag, shopping_list[i].med_bag, shopping_list[i].small_bag ); } } I'll leave it to you to divide that into smaller functions, but it should help you get an idea of how to write better C. Other enhancements I would be very disappointed if my grocer actually handed me a "bag of eggs." Instead, the common quantities for different things have different units of measure such as a "dozen" or a "kilogram" or a "pound". Using a similar idea of associating a name with constants (as with the recipes shown above), you might want to associate units of measure with each kind of ingredient. Also, our lemon cake does not appear to have lemon as an ingredient, which makes it a somewhat less appealing confection. Consider adding the ability to create arbitrary lists of named ingredients, consolidating them into a shopping list as above.
{ "domain": "codereview.stackexchange", "id": 26391, "tags": "beginner, c" }
Invariant terms of Chiral Lagrangian
Question: Stupid question. Consider a global SU(N) theory spontaneously broken. I want to write the EFT of the Goldstone bosons in terms of the field $$ \Pi = e^{i\pi^a T^a} $$ where $T^a$ are the SU(N) generators normalized such that $\text{Tr}\left[T^a T^b\right]=1/2\,\delta^{ab}$. At second order in derivatives in the EFT expansion, the following term is certainly allowed $$ \mathcal{L}_\pi = -\frac{f_\pi^4}{2}\text{Tr}\left[\partial_\mu \Pi\partial^\mu \Pi^\dagger\right] $$ This term gives the kinetic term plus pion self-interactions. However, I can build another invariant term which does not contribute to the kinetic term but just gives corrections to the self-interactions $$ \text{Tr}\left[\Pi^\dagger\partial_\mu\Pi\right]\text{Tr}\left[\partial^\mu\Pi^\dagger \Pi \right] $$ Notice that this term is second order in derivatives and fourth order in field insertions. It seems to me that this term is not considered in the literature. Why? Is it zero? Is it a redundant operator? Answer: For simplicity, I will denote $\hat{\pi} = \pi^a T^a$. The invariant trace term is zero. Indeed $$ \partial_\mu \Pi \cdot \Pi^\dagger = \left(i\partial_\mu \hat {\pi}\right)\Pi\cdot \Pi^\dagger = \left(i\partial_\mu \hat {\pi}\right) $$ (Strictly speaking, when $\hat{\pi}$ and $\partial_\mu \hat{\pi}$ do not commute one has $\partial_\mu \Pi = \int_0^1 ds\, e^{is\hat{\pi}}\left(i\partial_\mu \hat{\pi}\right) e^{i(1-s)\hat{\pi}}$, but cyclicity of the trace collapses $\text{Tr}\left[\partial_\mu \Pi \cdot \Pi^\dagger\right]$ to the same result.) Then you get $\text{Tr}\left[\partial_\mu \Pi \cdot \Pi^\dagger\right] = i\,\text{Tr}\left[\partial_\mu \hat {\pi}\right]=0 $ because the generators are traceless.
{ "domain": "physics.stackexchange", "id": 50980, "tags": "quantum-field-theory, effective-field-theory" }
Computable Functions
Question: I'm learning about computable functions. Our definition for computable function is as follows: Informally, a computable function is a function f : A → B such that there is a mechanical procedure for computing the result f(a) ∈ B for every a ∈ A. They go on to give the following non-example: A function that takes an input p (which I assume to be a program), and checks if p is a syntactically valid Python program without any user interaction that terminates and returns 1 if this is true, 0 otherwise. I was just trying to understand why this is not computable. I know when I write a program with incorrect syntax that I get an error when I try to run it, so I am assuming that it is possible to check whether a program is syntactically correct, but I have a hunch that the terminating part has to do with the halting problem. Is this simply an obscured halting problem? Answer: Yes, "that terminates" is the halting problem; it's not really in disguise, just not pointed out as the problematic part. The syntax check on its own is computable, as you suspected.
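To see the split concretely: the syntax half of the non-example is computable, because a parser always halts on any finite input. A small Python sketch (my own illustration; the function name is made up, not from the course notes):

```python
def is_valid_python(source: str) -> bool:
    """Return True iff `source` parses as syntactically valid Python.

    This half of the task is computable: the parser always terminates.
    Deciding whether the program *itself* terminates on a given run is
    the halting problem, which no checker can solve in general.
    """
    try:
        compile(source, "<string>", "exec")
        return True
    except SyntaxError:
        return False
```

Note that `compile` only parses; it never runs the program, so the check terminates even when the checked program would not.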
{ "domain": "cs.stackexchange", "id": 19281, "tags": "computability" }
Hormones - biotic or abiotic
Question: Are hormones biotic or abiotic? I have tried reading different articles, and I've found that they are both, but can that be true? Answer: Hormones are signaling molecules that occur inside organisms; they arise from biological activity and regulate the development and homeostasis of the organism. The biotic/abiotic distinction, however, applies to ecological factors, so hormones are considered neither biotic nor abiotic in ecology. (Terrible analogy warning!) It's a bit like asking whether the wheel of a car is petrol- or diesel-based. It makes no sense! Here's a helpful, comprehensive website. Biotic (living) factors are living organisms in an ecosystem that must share common resources or compete in a habitat one way or another. Abiotic (non-living) factors are things like temperature, wind, salinity, etc. that affect individuals or the community of an ecosystem. As you can see, hormones are just one of many components of a living organism. They aren't living themselves. It's a silly question to ask, as I hope you can see now.
{ "domain": "biology.stackexchange", "id": 9446, "tags": "endocrinology" }
Do the axes of rotation of most stars in the Milky Way align reasonably closely with the axis of galactic rotation?
Question: The axis of rotation of the Solar System makes a large angle of about 60 degrees relative to the axis of rotation of the Milky Way. That seems unusual - for example, most of the bodies within the Solar System are better behaved than that. Do most stars or planetary systems in the Milky Way rotate in close accord with the galactic rotation? Or is there a large scatter, so that, in fact, our Sun is not atypical? Answer: There is very likely to be a random scatter. Unlike planets orbiting the Sun in the Solar System, most of the stars in the Galaxy did not form at the same time as the Galaxy itself. There is therefore no strong reason to suspect that the angular momentum vectors would be aligned for similar reasons. On the other hand, the Galactic gravitational potential does depart from spherical symmetry in its inner regions, because the visible matter, which becomes dominant in the inner regions, is concentrated into a disc - so presumably this, or perhaps the tidal forces exerted by this on molecular clouds, could imprint some angular momentum preference. The evidence is sketchy but suggests random orientations, at least in the solar neighbourhood. I refer you to Detection of exo-planets, where I discuss this in the context of detecting transiting exoplanets. In a series of papers, my colleagues and I have investigated the distribution of spin axes in clusters of stars. The idea here, which is not far-fetched, is that large clouds from which clusters form will have some angular momentum. The question is how much of that angular momentum is inherited by the stars it forms, or to what degree turbulence in the collapsing gas can essentially randomise the spin vectors of collapsing fragments.
Our technique was to combine rotation periods (latterly from Kepler observations) with careful measurements of projected equatorial velocities ($v \sin i$, where $i$ is the spin inclination to the line of sight) to get projected radii ($R \sin i$) and then to model the distribution of $R \sin i$ with various assumptions about the spin-axis distribution. In all three of the clusters we have studied (Pleiades, Alpha Per, Praesepe), the distribution was consistent with a random distribution, with quite strong limits on the amount of alignment that was possible (Jackson & Jeffries 2011; Jackson, Deliyannis & Jeffries 2018; Jackson et al. 2019). The technique has been replicated in a fourth cluster, NGC 2516, by Healy & McCullough (2020), with the same conclusion. Other authors have claimed alignments in some cases. Notably, using Kepler asteroseismology of red giants in two clusters in the Kepler main field, Corsaro et al. (2017) claimed a quite tight alignment of spin axes, pointing almost towards us in each case. Since the Kepler field is not far from the Galactic plane and these were distant clusters, the spin axes would be almost in the Galactic plane (a bit like Uranus and the Sun). However, the likelihood of finding such a result if individual clusters had random average angular momentum vectors raised question marks - the probability of seeing that vector pointing towards you is very low. Work by Kamiaka et al. (2018) shows that the asteroseismological estimates may be systematically biased towards low inclinations. A further piece of evidence for some alignment was in the orientations of bi-polar planetary nebulae towards the Galactic bulge. Rees & Zijlstra (2013) found a non-random distribution that suggested that the orbital angular momenta of binary systems, responsible for the bipolar shape of the nebulae, were aligned with the Galactic plane (again, like Uranus around the Sun).
The result is highly statistically significant but as far as I know has not been followed up despite its obvious implications for estimations of transit yields from exoplanetary surveys. I think the biggest argument that there is no significant effect for average stars in the field of the Galaxy is that the exoplanet people working on the TESS survey (which covers the whole sky) would have found a drastic spatial dependence on their yield of transiting planets as a function of Galactic latitude. The majority of transiting planets (or at least hot Jupiters) have orbital axes coincident with the spin axis of the star (like planets in the Solar System). If these orbital axes were aligned with Galactic north (or any other direction) it would mean you would see far fewer transiting planets when looking towards those directions. I have heard no reports of such a spatial dependence.
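As an aside, the geometry behind the $R \sin i$ modelling is easy to check numerically: for isotropically oriented spin axes, $\cos i$ is uniform on [0, 1], so the inclination distribution is $p(i) \propto \sin i$ and the mean projection factor is $\langle \sin i \rangle = \pi/4 \approx 0.79$. A standard-library Python sketch (purely illustrative; this is not the analysis code from the papers cited above):

```python
import math
import random

def random_sin_i(n: int, seed: int = 42) -> float:
    """Mean of sin(i) for n isotropically oriented spin axes.

    Drawing cos(i) uniformly on [0, 1] is equivalent to the
    inclination distribution p(i) = sin(i) for random axes.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        cos_i = rng.random()                     # uniform cos(i) <=> isotropic axis
        total += math.sqrt(1.0 - cos_i * cos_i)  # sin(i)
    return total / n

# With many samples this approaches pi/4, the average factor by which
# v*sin(i) underestimates true equatorial speed if spins are random.
```

This is why a sample of $v \sin i$ or $R \sin i$ values carries a statistical signature of the underlying spin-axis distribution even though no individual inclination is known.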
{ "domain": "astronomy.stackexchange", "id": 5098, "tags": "galaxy, solar-system, milky-way, rotation, stellar-dynamics" }
what is meta-sympathetic nervous system?
Question: I always knew about the sympathetic and para-sympathetic nervous systems, and today I was told about the meta-sympathetic nervous system, but I didn't understand the man who told me about it very well, and I couldn't find enough reliable information about this system, or about whether it's accepted in the scientific world or is disputed. Answer: The concept of a metasympathetic part of the autonomic nervous system (distinct from the sympathetic and parasympathetic parts) seems to be an idea discussed almost exclusively in the Russian scientific literature.(src) Many, if not a majority, of the papers concerned with it seem to have one A. D. Nozdrachev as one of the authors. He has been publishing on it at least since 1980 ("Structuro-functional organization of the vegetative (autonomic) nervous system"); the most recent is from 2006, quoted in another answer ("The Metasympathetic System of the Brain"). The presentation in the latter paper seems almost incoherent to me, and apparently not due to translation. This idea doesn't seem to have much traction, particularly outside of Russia.
{ "domain": "biology.stackexchange", "id": 5348, "tags": "neuroscience, autonomic-nervous-system" }
A graph interface for Wikipedia
Question: I'm building a graph-based interface to explore Wikipedia, but I'm not really familiar with TypeScript/React (especially state management), so I really feel like I'm just Frankensteining things together, which makes me feel pretty uncomfortable. I've found many things in my code so far that have made me facepalm. I'd appreciate any feedback on state management, modularity, style, implementation, or anything else! If anyone wants to take a look at the entire source-code: https://github.com/lee-janice/wikigraph Otherwise, here's the core bit: App.tsx import "./styles/App.css"; import WikiGraph from "./components/wikigraph"; import { useEffect, useState } from "react"; const NEO4J_DB = String(process.env.REACT_APP_NEO4J_DB); const NEO4J_URI = String(process.env.REACT_APP_NEO4J_URI); const NEO4J_USER = String(process.env.REACT_APP_NEO4J_USER); const NEO4J_PASSWORD = String(process.env.REACT_APP_NEO4J_PASSWORD); function App() { // set initial theme and keep track of dark mode state const [darkMode, setDarkMode] = useState(window.matchMedia("(prefers-color-scheme: dark)").matches); // handle change in dark mode toggle useEffect(() => { if (darkMode) { document.body.classList.add("dark"); document.body.classList.remove("light"); } else { document.body.classList.add("light"); document.body.classList.remove("dark"); } }, [darkMode]); return ( <> <header> <h1> <strong>WikiGraph</strong> </h1> <p className="subtitle">A graph-based approach to exploring the depths of Wikipedia</p> </header> <div className="App"> {/* graph visualization */} <WikiGraph containerId={"vis"} serverDatabase={NEO4J_DB} serverURI={NEO4J_URI} serverUser={NEO4J_USER} serverPassword={NEO4J_PASSWORD} darkMode={darkMode} /> {/* light/dark mode toggle */} <label id="theme-toggle"> <input type="checkbox" checked={darkMode} onChange={() => setDarkMode(!darkMode)} /> Dark mode </label> </div> </> ); } export default App; wikigraph.tsx import { useEffect, useRef, useState } from "react"; import 
NeoVis, { NeoVisEvents } from "neovis.js/dist/neovis.js"; import ContextMenu, { ContextMenuState, ContextMenuType } from "./contextMenu"; import NavBar, { NavTab } from "./sidebar/navbar"; import UserManual from "./sidebar/userManual"; import About from "./sidebar/about"; import WikipediaSummaries, { WikiSummary } from "./sidebar/wikipediaSummaries"; import styled from "styled-components"; import { createConfig } from "../util/neo4jConfig"; const StyledCanvas = styled.div` height: ${(props) => (props.theme.expanded ? "100%;" : "80%;")} width: ${(props) => (props.theme.expanded ? "100%;" : "60%;")} top: ${(props) => (props.theme.expanded ? "0px;" : "inherit;")} left: ${(props) => (props.theme.expanded ? "0px;" : "inherit;")} z-index: ${(props) => (props.theme.expanded ? "100000;" : "100;")} position: fixed; @media (max-width: 1100px) { height: ${(props) => (props.theme.expanded ? "100%;" : "55%;")} width: ${(props) => (props.theme.expanded ? "100%;" : "90%;")} } `; StyledCanvas.defaultProps = { theme: { expanded: false, }, }; /* https://www.w3schools.com/howto/howto_css_fixed_sidebar.asp */ const StyledSidebar = styled.div` height: 100%; width: 33%; padding-top: 20px; top: 0; right: 0; position: fixed; /* stay in place on scroll */ z-index: 100; overflow-x: hidden; /* disable horizontal scroll */ border-left: 1px solid var(--borderColor); background-color: var(--primaryBackgroundColor); @media (max-width: 1100px) { height: 100%; width: 100%; top: 80%; display: block; position: absolute; z-index: 10000; border-left: none; border-top: 1px solid var(--borderColor); } `; // TODO: figure out how to import this from vis.js export type IdType = string | number; interface Props { containerId: string; serverDatabase: string; serverURI: string; serverUser: string; serverPassword: string; darkMode: boolean; } const WikiGraph: React.FC<Props> = ({ containerId, serverDatabase, serverURI, serverUser, serverPassword, darkMode, }) => { // keep vis object in state const [vis, 
setVis] = useState<NeoVis | null>(null); const [visIsExpanded, setVisIsExpanded] = useState(false); // keep track of selected nodes and labels // TODO: combine into one object const [selection, setSelection] = useState<IdType[]>([]); const [selectionLabels, setSelectionLabels] = useState([""]); // keep track of summaries // TODO: combine into one object const [summaries, setSummaries] = useState<WikiSummary[]>([]); const [currentSummary, setCurrentSummary] = useState<WikiSummary | null>(null); // keep track of search bar input const [input, setInput] = useState(""); // keep track of nav bar tab state const [currentNavTab, setCurrentNavTab] = useState<NavTab>(NavTab.Home); // keep track of whether the context menu is open or closed const [contextMenuState, setContextMenuState] = useState<ContextMenuState>({ open: false, type: ContextMenuType.Canvas, mobile: window.innerWidth < 1100, x: 0, y: 0, }); window.onresize = () => { if (window.innerWidth < 1100) { if (!contextMenuState.mobile) { setContextMenuState({ ...contextMenuState, mobile: true }); } } else { if (contextMenuState.mobile) { setContextMenuState({ ...contextMenuState, mobile: false }); } } }; // get reference to selection so that we can use the current value in the vis event listeners // otherwise, the value lags behind const selectionRef = useRef(selection); // so that we only register event listeners once const completionRef = useRef(false); // ----- initialize visualization and neovis object ----- useEffect(() => { const vis = createConfig(containerId, serverDatabase, serverURI, serverUser, serverPassword); vis.render(); setVis(vis); // create event listeners once the visualization is rendered vis?.registerOnEvent(NeoVisEvents.CompletionEvent, (e) => { if (!completionRef.current) { completionRef.current = true; const updateSelectionState = (nodeIds: IdType[]) => { // update selection setSelection(nodeIds); selectionRef.current = nodeIds; // update selection labels var labels = vis.nodes .get() 
.filter((node: any) => (nodeIds ? nodeIds.includes(node.id) : "")) .map(({ label }: { label?: any }) => { return label; }); setSelectionLabels(labels); }; // 1. listener for "select" vis.network?.on("select", (e) => { var nodeIds = vis.network?.getSelectedNodes(); if (nodeIds) { updateSelectionState(nodeIds); } }); // 2. listener for "click" vis.network?.on("click", (click) => { setContextMenuState({ open: false, type: ContextMenuType.Canvas, mobile: window.innerWidth < 1100, x: 0, y: 0, }); }); // 3. listener for "double click" vis.network?.on("doubleClick", (click) => { // if there's a node under the cursor, update visualization with its links if (click.nodes.length > 0) { const nodeId = click.nodes[0]; var cypher = `MATCH (p1: Page)-[l: LINKS_TO]-(p2: Page) WHERE ID(p1) = ${nodeId} RETURN p1, l, p2`; vis?.updateWithCypher(cypher); } }); // 4. listener for "right click" vis.network?.on("oncontext", (click) => { click.event.preventDefault(); // TODO: figure out why click.nodes is not accurate on right click // get adjusted coordinates to place the context menu var rect = click.event.target.getBoundingClientRect(); let correctedX = click.event.x - rect.x; let correctedY = click.event.y - rect.y; var type = ContextMenuType.Canvas; // check if there's a node under the cursor var nodeId = vis.network?.getNodeAt({ x: correctedX, y: correctedY }); if (nodeId) { // select node that was right-clicked if (selectionRef.current) { vis.network?.selectNodes([...selectionRef.current, nodeId]); } else { vis.network?.selectNodes([nodeId]); } // update selection state const nodeIds = vis.network?.getSelectedNodes(); if (nodeIds) { updateSelectionState(nodeIds); nodeIds.length > 1 ? 
(type = ContextMenuType.Nodes) : (type = ContextMenuType.Node); } } else { type = ContextMenuType.Canvas; } setContextMenuState({ open: true, type: type, mobile: window.screen.width < 1100, x: correctedX, y: correctedY, }); }); } }); }, [containerId, serverDatabase, serverURI, serverUser, serverPassword]); // ----- execute cypher query when user inputs search, update visualization ----- const createNewGraph = () => { // TODO: replace this with something that does not open the DB up to an injection attack var cypher = 'CALL { MATCH (p:Page) WHERE apoc.text.levenshteinSimilarity(p.title, "' + input + '") > 0.65 RETURN p.title as title ORDER BY apoc.text.levenshteinSimilarity(p.title, "' + input + '") DESC LIMIT 1 } MATCH (p1:Page)-[l:LINKS_TO]-(p2:Page) WHERE p1.title = title RETURN p1, l, p2'; // TODO: only render if the query returns > 0 nodes, otherwise tell user no nodes were found vis?.renderWithCypher(cypher); vis?.network?.moveTo({ position: { x: 0, y: 0 } }); }; const addToGraph = () => { var cypher = 'CALL { MATCH (p:Page) WHERE apoc.text.levenshteinSimilarity(p.title, "' + input + '") > 0.65 RETURN p.title as title ORDER BY apoc.text.levenshteinSimilarity(p.title, "' + input + '") DESC LIMIT 1 } MATCH (p1:Page)-[l:LINKS_TO]-(p2:Page) WHERE p1.title = title RETURN p1, l, p2'; vis?.updateWithCypher(cypher); vis?.network?.moveTo({ position: { x: 0, y: 0 } }); }; return ( <> {/* graph visualization */} <StyledCanvas theme={{ expanded: visIsExpanded }} id="canvas"> <div id={containerId} /> <img src={ visIsExpanded ? darkMode ? "icons/collapse-white.png" : "icons/collapse.png" : darkMode ? "icons/expand-white.png" : "icons/expand.png" } alt={visIsExpanded ? "Collapse visualization button" : "Expand visualization button"} className="vis-expand-button" onClick={() => setVisIsExpanded(!visIsExpanded)} /> {contextMenuState.mobile && ( <img src={ contextMenuState.open ? darkMode ? "icons/close-white.png" : "icons/close.png" : darkMode ? 
"icons/kebab-white.png" : "icons/kebab.png" } alt={visIsExpanded ? "Collapse visualization button" : "Expand visualization button"} className="mobile-context-button" onClick={() => { var type; if (selection.length === 0) { type = ContextMenuType.Canvas; } else if (selection.length === 1) { type = ContextMenuType.Node; } else { type = ContextMenuType.Nodes; } setContextMenuState({ ...contextMenuState, open: !contextMenuState.open, type: type }); }} /> )} <input type="submit" value="Stabilize" id="stabilize-button" onClick={() => { vis?.stabilize(); }} /> <input type="submit" value="Center" id="center-button" onClick={() => vis?.network?.fit()} /> <ContextMenu vis={vis} darkMode={darkMode} state={contextMenuState} setState={setContextMenuState} selection={selection} setSelection={setSelection} selectionLabels={selectionLabels} setSelectionLabels={setSelectionLabels} summaries={summaries} setSummaries={setSummaries} setCurrentSummary={setCurrentSummary} /> </StyledCanvas> {/* sidebar */} <StyledSidebar className="sidebar"> <NavBar currentNavTab={currentNavTab} setCurrentNavTab={setCurrentNavTab} /> {currentNavTab === NavTab.Home && ( <> <WikipediaSummaries summaries={summaries} setSummaries={setSummaries} currentSummary={currentSummary} setCurrentSummary={setCurrentSummary} /> <div className="search-bar"> Search for a Wikipedia article: <br /> <input type="search" placeholder="Article title" onChange={(e) => setInput(e.target.value)} /> <br /> <input type="submit" value="Create new graph" onClick={createNewGraph} /> <input type="submit" value="Add to graph" onClick={addToGraph} /> </div> </> )} {currentNavTab === NavTab.About && <About />} {currentNavTab === NavTab.UserManual && <UserManual />} </StyledSidebar> </> ); }; export default WikiGraph; Answer: Took me a while to find the time to write a proper review. General I like your project and what you are trying to achieve with it. 
In order to keep it maintainable and understandable you might want to consider the following recommendations: Reduce nesting I noticed places where the so-called arrow anti-pattern is noticeable. Try to reduce nesting; this can be done by checking for values not being present rather than checking for values being present. Consider this: // ✅ const doIt = () => { if (value === null) { return; } if (valueTwo === null) { return; } // Do some stuff with value and valueTwo as it is now present } // vs. // ❌ const dont = () => { if (value) { if (valueTwo) { // work on value and valueTwo } } } Avoid direct DOM manipulation To set the dark mode class you can use a style sheet and directly set it at the outermost div you have inside the App component. Something like this: const [darkMode, setDarkMode] = useState(window.matchMedia("(prefers-color-scheme: dark)").matches); return <div className={darkMode ? "dark" : "light"}>{/* ... */}</div> Avoid direct DOM manipulation, as it can have negative side effects on your React experience, and ultimately that's what React is for: keeping your UI up-to-date with the application state of the app. WikiGraph.tsx As I already pointed out in the comments, the WikiGraph.tsx file is doing a lot of things. So I'd start to extract things out of there. After thinking about it, you might want to start by factoring out general stuff without external state management libraries. If you are still curious about it though, I'd highly recommend trying the Redux getting-started guide: https://redux.js.org/introduction/getting-started From the naming, my assumption of this component is that it is responsible for rendering the graph and only the graph. Given that, let's have a look at the currentNavTab and setCurrentNavTab state first.
Sidebar To get rid of the sidebar state we could extract a separate component, let's call it SideBar, which could look something along these lines: export const SideBar = (/* introduce your own props */) => { const [currentNavTab, setCurrentNavTab] = useState<NavTab>(NavTab.Home); return (<StyledSidebar className="sidebar"> <NavBar currentNavTab={currentNavTab} setCurrentNavTab={setCurrentNavTab} /> {currentNavTab === NavTab.Home && ( <> <WikipediaSummaries summaries={summaries} setSummaries={setSummaries} currentSummary={currentSummary} setCurrentSummary={setCurrentSummary} /> <div className="search-bar"> Search for a Wikipedia article: <br /> <input type="search" placeholder="Article title" onChange={(e) => setInput(e.target.value)} /> <br /> <input type="submit" value="Create new graph" onClick={createNewGraph} /> <input type="submit" value="Add to graph" onClick={addToGraph} /> </div> </> )} {currentNavTab === NavTab.About && <About />} {currentNavTab === NavTab.UserManual && <UserManual />} </StyledSidebar>); } The general problem here is that the sidebar is tightly coupled with the graph component, so some methods are still missing here. A simple solution would be to pass those methods via props to this component, which I think would be fine at this point in time - but in the future you might want to re-evaluate this decision. An example (incomplete) interface: interface SideBarProps { summaries: WikiSummary[]; currentSummary: WikiSummary | null; setSummaries: (e: WikiSummary | null) => void; // ... and so on } What to do with the vis object For now, it might be sufficient to extract it out of the component, with something like this: // vis.ts const createVis = () => createConfig(containerId, serverDatabase, serverURI, serverUser, serverPassword); let _vis: NeoVis | undefined; const getOrCreateVis = () => _vis === undefined ?
(_vis = createVis()) : _vis; export { getOrCreateVis }; Hint: At a later point in time you can consider moving it into a global state management library like Redux, where your business logic is then held in reducers. This would move the responsibility of creating the Vis object into the getOrCreateVis method. You might want to extend/split/refactor this method to your needs. Thus it can be used like this: // wikigraph.tsx omitting not relevant code const WikiGraph = (/*...*/) => { useEffect(() => { getOrCreateVis().registerOnEvent(NeoVisEvents.CompletionEvent, (e) => {/**..*/}); }); } From there it is desirable to extract the registerOnEvent method as well and just pass the methods you want to register. I'd scribble this like this: createVis({ select: () => { /* do your stuff on the select event */ }, click: () => {/* ... */} }); Events The events like select and so on look a bit odd to me, especially the select event. That's because I'd assume that the event would return the selected nodes inside the event object, rather than checking the global vis object and asking it for the selected nodes. In an ideal world I'd expect something like this: const select = (e) => updateSelectionState(e); Assuming that those calls are valid, the above registration call could look like this: getOrCreateVis({ select: updateSelectionState, /* ... more events like: */ doubleClick: (click) => { if (click.nodes.length === 0) { return; } const nodeId = click.nodes[0]; const cypher = `MATCH (p1: Page)-[l: LINKS_TO]-(p2: Page) WHERE ID(p1) = ${nodeId} RETURN p1, l, p2`; getOrCreateVis().updateWithCypher(cypher); } }); CreateNewGraph & addToGraph Those methods are basically static and only depend on the input. Pull them out of the component.
// anywhere outside of wikigraph const createNewGraph = (input) => { // TODO: replace this with something that does not open the DB up to an injection attack var cypher = 'CALL { MATCH (p:Page) WHERE apoc.text.levenshteinSimilarity(p.title, "' + input + '") > 0.65 RETURN p.title as title ORDER BY apoc.text.levenshteinSimilarity(p.title, "' + input + '") DESC LIMIT 1 } MATCH (p1:Page)-[l:LINKS_TO]-(p2:Page) WHERE p1.title = title RETURN p1, l, p2'; // TODO: only render if the query returns > 0 nodes, otherwise tell user no nodes were found getOrCreateVis().renderWithCypher(cypher); getOrCreateVis().network?.moveTo({ position: { x: 0, y: 0 } }); }; const addToGraph = (input) => { var cypher = 'CALL { MATCH (p:Page) WHERE apoc.text.levenshteinSimilarity(p.title, "' + input + '") > 0.65 RETURN p.title as title ORDER BY apoc.text.levenshteinSimilarity(p.title, "' + input + '") DESC LIMIT 1 } MATCH (p1:Page)-[l:LINKS_TO]-(p2:Page) WHERE p1.title = title RETURN p1, l, p2'; getOrCreateVis().updateWithCypher(cypher); getOrCreateVis().network?.moveTo({ position: { x: 0, y: 0 } }); }; // inside wikigraph const onCreate = () => createNewGraph(input); const onAdd = () => addToGraph(input); /* Use our newly created sidebar component and simply pass on the props ... */ <SideBar create={onCreate} add={onAdd} /> Summary I left out the Redux (or basically any other global state management library) part on purpose, because once you get the hang of pulling out common logic, you should be pretty fast in adapting it to such a state management framework. That being said, don't be too scared to pull things out of the component. In my eyes, working on that helps you to enhance your code and adapt to other things.
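One more thought on the "create the vis object once" idea: the caching can be factored into a generic memoized accessor, which makes the create-once behavior explicit and testable on its own. NeoVis and createConfig belong to the real project; the helper below is my own sketch:

```typescript
// A generic "create once, reuse afterwards" accessor. In the project this
// would wrap the real factory: makeGetOrCreate(() => createConfig(...)).
type Factory<T> = () => T;

export function makeGetOrCreate<T>(create: Factory<T>): Factory<T> {
    let instance: T | undefined;
    return () => {
        if (instance === undefined) {
            instance = create(); // cache the first result for all later calls
        }
        return instance;
    };
}
```

Usage would be something like const getOrCreateVis = makeGetOrCreate(() => createConfig(containerId, serverDatabase, serverURI, serverUser, serverPassword)); so every call site shares one NeoVis instance without touching module-level mutable state directly.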
{ "domain": "codereview.stackexchange", "id": 44337, "tags": "graph, react.js, typescript" }
Importing ROS msgs from a directory other than /msg
Question: I'm trying to create a flexible way of defining enumerations for different datatypes, such that adding a new enum doesn't invalidate the message hash, thereby ruining the bag. A traditional message like: uint16 VOLTAGE uint16 CURRENT uint16 data would be split into 2 messages, one simply containing uint16 data and the other containing the enumeration, essentially a documentation file. Ideally these 2 files would be named the same but in separate, parallel folders. So the enumeration file could be defined in constants/Battery.msg and the actual ROS message could be defined in msg/Battery.msg. I've managed to add the other file in the CMakeLists.txt file using the DIRECTORY variable in add_message_files, but this doesn't namespace the import, so there is a name conflict. So, is there any way to define a new build location for message generation within the same CMakeLists.txt file in ROS? Such that I can import from both custom_msgs.msg and custom_msgs.constants? Originally posted by lynx on ROS Answers with karma: 3 on 2021-06-07 Post score: 0 Answer: Edit 2: Warning: long piece of text ahead about semantics of messages and why what we have is actually not that bad. Thank you for the time and in-depth answer! Not the answer I was hoping for, but what I feared... I was hoping that hardcoded /msg in genpy wouldn't be there. Thanks for the alternative solution; I was aware of the rosbag migration, but dealing with dozens of bags with 10s of thousands of messages makes adding any new enumeration extremely cumbersome. At the risk of "gold plating" this, I feel there is actually a big advantage to the way things are implemented now.
This is important both for archival reasons (ie: when stored in .bags from say 10 years ago) as well as to help ensure that consumers will be able to figure out what is being communicated right now. Both at the syntax level (ie: form of the data) as well as semantics (ie: meaning). The MD5 hash of message structures ensures that the form of .msgs expected by consumers is exactly the same as those produced by publishers. This not only guarantees that consumers will be able to deserialise the data (ie: they know in what form the data is), but it will also go a long way to make sure that how they interpret the meaning of that data is as the producer expected them to (ie: their responses and behaviour match with what the intentions of the producer were). What you propose in your example (using a plain uint16 to contain the value of an enum value and then storing the possible (legal?) values of that uint16 in a separate message type) goes against this (if it doesn't completely make it impossible). With the proposed approach, it would be possible to change the members of the "enum" (in quotes, as the msg spec does not really support enums) without any producer or consumer being able to detect this, as there is no direct coupling between the enum and the uint16. None of the hashes would change, as the "enum message" is not stored anywhere. The proper way to do this would be to either keep the enumeration values in the same .msg, or use the type of the enumeration .msg as the type of the field containing the value (ie: instead of uint16). I realise this doesn't help make things any easier, but I believe the current system does help avoid all sorts of problems. I was aware of the rosbag migration, just dealing with dozens of bags with 10s of thousands of messages makes adding any new enumeration extremely cumbersome. This is easy for me to say (as I don't have to do it), but automation should go a long way here. 
Migration rules should be almost generatable themselves if we're only talking about adding values to enumeration fields. With the rule in place, batch processing .bags should also be relatively straightforward. Edit: I'm not entirely sure, but from this: I'm trying to create a flexible way of defining enumerations for different datatypes, such that adding a new enum doesn't invalidate the message hash, thereby ruining the bag. it might be this is an xy-problem. While changing a .msg does change its hash, it does not immediately "ruin" .bags which contain the old definition. rosbag supports migration of .bags containing one version of a .msg to another. This requires bag migration rules to be provided, which tell rosbag how to transform old instances of a message type into the new one. The linked page has information on how to write those rules. Such a migration does require processing the entire .bag, but it is a one-time process, and keeps the old .bag as-is. On modern systems, with high throughput IO systems and high performance CPUs, a moderately sized bag should not take too long to migrate. Original answer: tl;dr: no, I don't believe this is supported. Longer: looking at the genmsg_py.py script's help, this could be possible:

$ ./genmsg_py.py --help
Usage: genmsg_py.py file

Options:
  -h, --help      show this help message and exit
  --initpy
  -p PACKAGE
  -o OUTDIR
  -I INCLUDEPATH

notice the -o OUTDIR command line argument. genpy is the package providing the rospy compatible code generator used to convert your .msg defs into code. gencpp.py supports a similar option, and the other message code generators might as well, I haven't checked. So on the generation side, this could be possible.
However, as you never really invoke any of the message code generation scripts directly, but use the message_generation and message_runtime CMake infrastructure, I see two possible problems/challenges (although I haven't checked whether they are problems/challenges): the default implementation of the CMake-side to genpy, gencpp et al. hard-codes the output directory to ${ARG_GEN_OUTPUT_DIR}/msg (see here for genpy fi). To change this, you'd have to fork that package and make it do something else, which will not be very scalable. none of the consumers of messages will be 'aware' they have to add additional directories to their msg/srv include paths, leading to lookup failures in downstream consumer code-paths. While technically you could fork the relevant packages and update them, I'm not sure that would be the kind of overhead you'd find acceptable for what you're trying to do. Originally posted by gvdhoorn with karma: 86574 on 2021-06-07 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by lynx on 2021-06-07: Thank you for the time and in-depth answer! Not the answer I was hoping for, but what I feared... I was hoping that hardcoded /msg in genpy wouldn't be there. Thanks for the alternative solution, I was aware of the rosbag migration, just dealing with dozens of bags with 10s of thousands of messages makes adding any new enumeration extremely cumbersome. "Ruin" might have been too strong of a word :p Comment by lynx on 2021-06-08: In reply to edit 2: I don't want to seem like I'm just complaining about ROS, it's an incredible piece of software that I've loved using and seeing what it can do. I hear you, I want to have my cake and eat it too. But, I generally don't treat rosbags as the gold standard of data storage, generally they're exploded into either sql databases or something more standardized if the data is that important to store for ~10 years. 
What I'm proposing would make this data not standalone, that's true, but in combination with other version control systems (i.e. git), for projects that are shorter temporally (like over a semester/year) I could see the tradeoff for additional flexibility being worth it. But, that's not the goal of msgs and ROS, as I now understand (via the standalone principle). I haven't invested that much time into rosbag migration, I'll try and generate some rules and see if I can make something work. Thanks again! I appreciate the explanation, it's neat to see. Comment by gvdhoorn on 2021-06-08: "I don't want to seem like I'm just complaining about ROS" - no need to apologise; I just wanted to provide some insight into why the current implementation is as-it-is :)
{ "domain": "robotics.stackexchange", "id": 36501, "tags": "ros, ros-melodic, messages, msg" }
Crowd/Pedestrian Simulator in ROS?
Question: Hi there. Does anyone know (or is using) a crowd/pedestrian simulator integrated with ROS? My intention is to have a robot navigating among a realistic group of people. Originally posted by Procópio on ROS Answers with karma: 4402 on 2013-03-27 Post score: 3 Original comments Comment by scopus on 2014-10-09: hi, Dear Stein, have you got an answer? which simulator for crowd/pedestrian are you using? Thank you! Comment by Procópio on 2014-10-23: hi, I had to develop my own, based on the Helbing Social Forces model, together with Stage. Comment by scopus on 2014-10-23: hi, Have you got some progress? I am beginning to develop a simulator based on Stage. I also found an open-source one, http://pedsim.silmaril.org/. If you are developing a simulator based on Stage, I am interested in joining your project. Thank you Comment by Procópio on 2014-10-23: I just found an implementation of PedSim for ROS, check the answer below. I believe it is much more flexible than my implementation in Stage. Answer: Here is an implementation of PedSim for ROS: https://github.com/srl-freiburg/pedsim_ros Originally posted by Procópio with karma: 4402 on 2014-10-23 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 13559, "tags": "ros, gazebo, simulation, stage" }
Get only the current map from GMapping
Question: Hi, I'm new to ROS and I want to do a DATMO application with laser scan. For the first step of this application, I only need to get the current map created by GMapping. Is that possible? Could you please tell me what I should do in detail, or show me some references? Thank you. Originally posted by sadek on ROS Answers with karma: 58 on 2015-03-23 Post score: 0 Answer: Dear Sadek, gmapping is designed to work with odometry and laser scan data and you cannot use it if one of them is missing. If I understand correctly, you need to transform laser data into an occupancy grid (which is the same topic type as the map published by gmapping): in that case some kind of Costmap2D is what you need. You should set static_map to false, so no external map is required and only laser data will be used, and rolling_window to true, so the robot will always be posed at the centre of the costmap. Costmap2D will publish /grid, which should be exactly what you need. Unfortunately (for you) the Costmap2D is not provided as a node since it is embedded in the move_base node, so you should get the sources and write your own node. Furthermore, it does more than transform laser data to an occupancy grid: it defines costs used for path planning; maybe you can find a setup that works fine for your goal. Originally posted by afranceson with karma: 497 on 2015-03-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by sadek on 2015-03-23: Thank you for your response. It's exactly what I need. However, is there any other package (simpler) which converts laser data into an occupancy grid? Comment by afranceson on 2015-03-23: Not to my knowledge, sorry! Comment by sadek on 2015-03-23: Thanks afranceson. I'll try Costmap2D then.
{ "domain": "robotics.stackexchange", "id": 21204, "tags": "navigation, mapping, occupancy-grid, gmapping" }
Obtaining position from a curved velocity vs time graph
Question: I have just entered AP Physics and I'm struggling with the following: I need to obtain position from a curved velocity vs time graph, i.e. the acceleration slope is not constant. I first attempted to use the displacement formula $$x = v_0t + \frac{1}{2}at^2$$ where the initial velocity at time $0$ is $0\ \mathrm{m/s}$. First I knew I had to get instantaneous acceleration - never done that before but I drew tangent lines to the points at the different intervals and did: $$a = \frac{v_f-v_i}{t}$$ So far, this is what I've done. Acceleration is on the left side: However this is not correct because on the 2nd interval at $20\ \mathrm{s}$ where velocity has just started going backwards, I got $-80\ \mathrm{m}$. It can't jump from $450\ \mathrm{m}$ to $-80\ \mathrm{m}$ in 10 seconds based on the graph. My thinking is I'm not plugging in the right value for $v_0$, meaning original velocity. At $20\ \mathrm{s}$, should I be using the velocity from $10\ \mathrm{s}$? EDIT: this is how I solved it and the graph: I used the formula $x = \frac{(v_i+v_f)}{2}t$, although I'm sure that's wrong. Answer: The area between the x axis and the curve on a velocity-time graph represents displacement. When this area is above the x axis the displacement is +ve; when the area is below the x axis the displacement is -ve. The last formula you quoted is correct for calculating this area: $\Delta x \approx \frac12(v_i+v_f)\Delta t$. This formula should be applied for each interval. Ideally you should aim to choose intervals over which the curve is approx. a straight line; then the formula is exact. For each interval you have $\Delta t=5s$. For the 1st interval $(0-5s)$ you have $v_i=0m/s$ and $v_f=3m/s$ so then $\Delta x \approx \frac12(0+3)*5=7.5m$. For the 2nd interval $(5-10s)$ you have $v_i=3m/s$ and $v_f=9m/s$ so $\Delta x \approx \frac12(3+9)*5=30m$ giving a cumulative displacement of $37.5m$ at the end of $t=10s$.
Alternatively you can count the number of rectangles under the curve, estimating fractions. This is what I would do. You need to exercise care when the area becomes -ve; when velocity becomes -ve the area is above the curve. For your graph I would use intervals $\Delta t$ of $5s$. The unit of area (au=displacement) is $5s \times 2m/s=10m$. The estimates which I make are : In the 3rd column I have converted 1au to 10m and accumulated the distance - ie added the interval amounts to a running total.
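The interval-by-interval averaging described above is easy to automate. Below is a small Python sketch of the same trapezoid-rule accumulation; the sample times and velocities are illustrative values matching the worked example, not data read off the actual graph.

```python
# Trapezoid-rule displacement from sampled velocities:
# each interval contributes dx = 0.5 * (v_i + v_f) * dt,
# accumulated into a running total (the cumulative displacement column).

def cumulative_displacement(times, velocities):
    """Running displacement at each sample time, starting from 0."""
    total = 0.0
    out = [0.0]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        total += 0.5 * (velocities[k - 1] + velocities[k]) * dt
        out.append(total)
    return out

# Illustrative readings at 5 s intervals (m/s), matching the worked example
print(cumulative_displacement([0, 5, 10], [0, 3, 9]))  # [0.0, 7.5, 37.5]
```

Negative velocities automatically contribute negative area, which reproduces the sign convention in the answer.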
{ "domain": "physics.stackexchange", "id": 33501, "tags": "kinematics, velocity, integration, displacement" }
Can a long enough tube block all air flow?
Question: I'm asking because of the concept of a fluid flow creating a boundary layer region that increases in size as the fluid progresses down the length of the tube, like in a wind tunnel. The flow can apparently be restricted by the boundary layer and, just like a normal constriction, speed it up in the center. Is there a length at which the boundary layer created by the flow in the tube completely obstructs the flow? Here's a diagram of what I'm talking about: Answer: The flow can apparently be restricted by the boundary layer and, just like a normal constriction, speed it up in the center. The boundary layer is not a constriction in the same way that a hard wall is. The boundary layer is that region of the flow where fluid has slowed down (but is still flowing), by virtue of fluid viscosity and the no-slip condition at the tunnel walls. If the flow slows down inside the boundary layer (the wall region of the pipe), then it must speed up outside the boundary layer (the central region of the pipe) to maintain the same mass flow rate. However, the speeding up of the flow outside the boundary layer doesn't proceed indefinitely, but reaches a maximum value when the boundary layers from the walls merge, resulting in a fully developed flow. Is there a length at which the boundary layer created by the flow in the tube completely obstructs the flow? No. The "constriction-like" effect (speeding up of the flow at the pipe centre) is created by the flow itself (i.e. the slowing of fluid inside the boundary layer).
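The merging of the wall boundary layers happens over a finite entrance length. As a rough illustration, a commonly quoted correlation for laminar pipe flow is $L_e \approx 0.05\,Re\,D$; the sketch below just evaluates that formula with made-up numbers, not values from the question's diagram.

```python
# Laminar entrance length: L_e ≈ 0.05 * Re * D, where Re = U * D / nu.
# Beyond roughly this distance the wall boundary layers have merged and the
# velocity profile stops changing: the flow is fully developed, not blocked.

def entrance_length_laminar(velocity, diameter, nu):
    reynolds = velocity * diameter / nu  # Reynolds number
    return 0.05 * reynolds * diameter

# Illustrative case: air (nu ≈ 1.5e-5 m^2/s) at 1 m/s in a 2 cm tube
L_e = entrance_length_laminar(1.0, 0.02, 1.5e-5)  # ≈ 1.33 m
```

Past this length the tube can be made arbitrarily long without the flow ever being obstructed; only viscous pressure drop keeps accumulating.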
{ "domain": "physics.stackexchange", "id": 53532, "tags": "fluid-dynamics" }
K°P vs KP in equilibrium thermodynamics
Question: I'm having a little trouble with definitions. Here's a copy and paste from my hwk: ...K°P, and K°c have no units...Please enter the KP in 1/bar in 3 significant figures... So these values even have different units, it's not a typo or anything. Given: $\ce{aA + bB <=> cC + dD}$ Is $$K°P=\frac {[C]^c[D]^d}{[A]^a[B]^b}$$ or is $$KP=\frac {[C]^c[D]^d}{[A]^a[B]^b}$$ the question stated that K°P had no units, and the equation could be unitless if a+b=c+d, but that may not always be the case. Can somebody tell me which "KP" matches the definition, and also provide, or link, the formula for the other "KP"? Sorry for asking such a trivial question but I'm having a lot of trouble finding these definitions. Thanks all. Answer: The equilibrium constant should be dimensionless; it has to be in order to be used in equations such as the free energy for reaction at equilibrium, $\Delta G^0_R = -RT\ln(K_p)$. If the reaction is $\ce{\alpha A + \beta B <=> \gamma C + \delta D}$ and we write $$K_p= \frac{P_C^{\gamma}P_D^{\delta}}{P_A^{\alpha} P_B^{\beta}} $$ where $P_A$ etc. are partial pressures at equilibrium, then $K_p$ is going to have dimensions in general depending on the values of $\alpha, \beta$ etc., as you state in your question. In the past it was sort of implicitly assumed that each pressure was divided by 1 atm so then $K_p$ becomes dimensionless (or the units were simply ignored and numerical values used) but nowadays this is explicitly added so we obtain $$K_p= \frac{\left(\frac{P_C}{P^0}\right)^{\gamma}\left(\frac{P_D}{P^0}\right)^{\delta}}{\left(\frac{P_A}{P^0}\right)^{\alpha} \left(\frac{P_B}{P^0}\right)^{\beta}} $$ where $P^0 = 1 \pu{atm}$, which makes things dimensionless no matter what the values of $\alpha, \beta$ etc. are but does not change any numerical value. Although I have used partial pressures the same applies to concentrations $\ce{[A], [B]}$ etc., where now each concentration is divided by $\ce{[C^0]}$ where this is $\pu{1 mol/dm^3}$.
I'm not quite sure what your notation $K^0P$ etc. means but I suspect it refers to the two cases I have described. Hope this helps.
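As a numerical illustration of why dividing by $P^0$ affects only the units, here is a small Python sketch that evaluates an equilibrium constant from hypothetical partial pressures; the reaction and pressure values are invented for illustration.

```python
# Dimensionless K_p: every partial pressure is divided by a standard
# pressure p0 (in the same units) before being raised to its stoichiometric
# power, so the result carries no units regardless of the stoichiometry.

def kp(p_products, nu_products, p_reactants, nu_reactants, p0=1.0):
    k = 1.0
    for p, nu in zip(p_products, nu_products):
        k *= (p / p0) ** nu
    for p, nu in zip(p_reactants, nu_reactants):
        k /= (p / p0) ** nu
    return k

# Invented equilibrium pressures (atm) for N2 + 3 H2 <=> 2 NH3
k = kp([0.5], [2], [1.0, 2.0], [1, 3], p0=1.0)  # 0.25 / (1.0 * 8.0) = 0.03125
```

With p0 = 1 in the same pressure unit, the numerical value is unchanged from the "raw" quotient; only the unit bookkeeping differs, which is exactly what allows $\ln K_p$ to be taken.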
{ "domain": "chemistry.stackexchange", "id": 6724, "tags": "thermodynamics" }
Are we totally limited to the Milky Way for all stellar astrophysics?
Question: Simply put: are we able to make any observations of the individual stars in other galaxies? Is the depth of our scientific knowledge on star systems based solely on stars in the Milky Way Galaxy alone? Today, I just had a realization of how big the distances to other galaxies are compared to the stars in our own Galaxy. I have been into space/pop-sci since I was a kid. I never thought about how limiting that huge gap must be in terms of astronomy. Then I thought: is this knowledge so easy to miss? I have been pondering questions like the Fermi Paradox and the Drake Equation for many, many years without this core awareness regarding observational limitations on stars in other galaxies. But in light of this observational limit on individual star systems in other galaxies, many popular science concepts... just feel overly confident. For example: Dark Energy or Dark Matter. Did we come up with all of those cosmological theories purely/entirely based on detailed observation in our own Galaxy alone? Answer: Just today, a new record was established in a brand new paper reporting the discovery of an observable star almost as distant in the Universe as it is possible to see. The discovery was possible due to a fortuitously placed cluster of galaxies whose gravity acts as a lens, focusing the light of the star in our direction. Observations of stars in external galaxies are commonplace in astronomy. Of course, the dimmest stars are only observable in our own galactic neighborhood, but brighter stars are observable farther away. As for dark energy and dark matter, dark energy has no measurable effect on our galaxy: its effects are only seen on much larger scales. Dark matter is very evident in clusters of galaxies, which don't contain enough visible matter to bind them together gravitationally.
{ "domain": "physics.stackexchange", "id": 87284, "tags": "cosmology, astronomy, stars, galaxies, milky-way" }
State Ket Interpretations
Question: I have a question that came from lecture earlier in the semester, that I never fully understood: Let's suppose that we have an electron, that at $t=0$ is in an eigenstate of $\hat{S}\bullet\hat{n}$, with an eigenvalue of $\hbar/2$. My question is... is it more correct to write the eigenvalue equation for this situation as $$\hat{S}\bullet\hat{n}\rvert \psi \rangle = \hbar/2\rvert\psi\rangle$$ OR as $$\hat{S}\bullet\hat{n}\rvert \hat{S}\bullet\hat{n} \rangle = \hbar/2\rvert\hat{S}\bullet\hat{n}\rangle~?$$ Here $\bullet$ is the dot product in 3D space. Someone please explain the difference between these two setups. Answer: Fundamentally this is just a difference in labelling, and, as long as you are clear about what you mean, you can label things however you like. Having said that, in the second case you seem to be labelling your state with the operator of which it is an eigenket. This is probably not a great choice in most situations because one operator will generally have many different eigenkets, which this notation will not distinguish between. A very common choice, however, is to label the eigenkets of an operator by their eigenvalues, so in your example the state would be labelled $|+\hbar/2\rangle$ (in your specific case you might also want to include the vector $\hat{n}$ if you are going to be considering many different directions). This is often enough to uniquely identify the state and has the upshot of telling us something useful about the state. $|\psi\rangle$ is often used to signify a 'generic' state in whatever context we are considering, but it is also used as the go-to name when we can't think of anything more interesting.
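For a concrete check of the labelling convention, the sketch below builds the matrix $\hat{S}\bullet\hat{n}$ for a spin-1/2 particle (in units where $\hbar = 1$) and verifies that its eigenvalues are $\pm 1/2$ for any direction $\hat{n}$, so the eigenvalue (together with $\hat{n}$) labels each eigenket uniquely. The direction angles are arbitrary illustrative values.

```python
import numpy as np

# S·n for spin-1/2 (hbar = 1): 0.5 * (nx*sigma_x + ny*sigma_y + nz*sigma_z).
# Since (sigma·n)^2 = I for a unit vector n, the eigenvalues are ±1/2 for
# every direction, which is why labelling kets only by the operator S·n
# would be ambiguous, while labelling by eigenvalue (plus n) is not.

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def s_dot_n(theta, phi):
    nx = np.sin(theta) * np.cos(phi)
    ny = np.sin(theta) * np.sin(phi)
    nz = np.cos(theta)
    return 0.5 * (nx * sigma_x + ny * sigma_y + nz * sigma_z)

# Arbitrary direction; eigh returns eigenvalues in ascending order
vals, kets = np.linalg.eigh(s_dot_n(0.7, 1.2))  # vals ≈ [-0.5, +0.5]
```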
{ "domain": "physics.stackexchange", "id": 34299, "tags": "quantum-mechanics, angular-momentum, hilbert-space, quantum-spin, notation" }
Loop optimization of non-tail recursion
Question: When researching how to optimize recursion into loops, I came upon (on Wikipedia) a general rule about this: Whenever a function is in form:

fn F(x):
    if p(x):
        return F(a(x))
    else:
        return b(x)

Then it may be written as:

fn F(x):
    while p(x):
        x = a(x)
    return b(x)

However, I was wondering about how to do this if the recursion is not in tail call position, i.e. the value is modified after the recursive call like this:

fn F(x):
    if p(x):
        return h(F(a(x)))
    else:
        return b(x)

And I came upon this solution:

fn F(x):
    i = 0
    while p(x):
        x = a(x)
        i++
    x = b(x)
    repeat i times:
        x = h(x)
    return x

Are those two functions equivalent in functionality? Furthermore, is there a general way to make a recursive function into a loop? Answer: One systematic way to analyze these problems is to rewrite the function in continuation-passing style. For our purposes this means that the function takes an extra argument which is another function describing the postprocessing that is to be applied to the returned value. Your third function F in continuation-passing style would look like

fn F(x, k):
    if p(x):
        return F(a(x), k ∘ h)
    else:
        return k(b(x))

where ∘ denotes function composition. Note that F is now tail recursive. You can then look for alternate (more efficient) representations of the continuation argument. In this case, the continuation always has the form $k_0\circ h\circ\cdots\circ h = k_0\circ h^i$ where $k_0$ is the continuation passed by the external caller. This can be represented by two arguments, $k_0$ and $i$:

fn F(x, k0, i):
    if p(x):
        return F(a(x), k0, i+1)
    else:
        x = b(x)
        repeat i times:
            x = h(x)
        return k0(x)

If you convert this to an iterative function in the usual way, and assume the caller always passes the identity function and zero for the second and third arguments, you get your iterative version of the function. Usually you won't find such a concise encoding for the continuation.
But quite generally the continuation will either be the original continuation, or one of a finite number of other functions that contains the previous continuation as a free variable (and possibly some other free variables). You can represent this as an array with one element for each "level", each element being a function number and the values of the non-continuation free variables. When you recurse downward you push another element, and when you return upward you pop an element. This gets you a stack-array-based iterative algorithm with the same space and time complexity as the original algorithm, but probably better constant factors.
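As a sanity check of the equivalence asked about in the question, here is a small Python sketch comparing the recursive form with the counter-based loop; the particular p, a, b and h are arbitrary illustrative choices.

```python
# Recursive form vs. the counter-based loop: the loop counts how many
# pending applications of h the recursion would have stacked up (the
# continuation h∘h∘…∘h) and replays them after the base case.
# p, a, b, h are arbitrary illustrative choices.

def p(x): return x < 10
def a(x): return x + 3
def b(x): return 2 * x
def h(x): return x + 1

def f_recursive(x):
    return h(f_recursive(a(x))) if p(x) else b(x)

def f_iterative(x):
    i = 0
    while p(x):
        x = a(x)
        i += 1
    x = b(x)
    for _ in range(i):  # "repeat i times: x = h(x)"
        x = h(x)
    return x

assert all(f_recursive(x) == f_iterative(x) for x in range(-5, 20))
```

For example, starting from x = 0 both versions step through 3, 6, 9, 12, take b(12) = 24 and then apply h four times, returning 28.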
{ "domain": "cs.stackexchange", "id": 17201, "tags": "optimization, recursion, functional-programming, tail-recursion" }
Thermodynamic Modeling for adiabatic Real-Gas Supply Tank Blowdown
Question: I am working on a program which allows our university team to size our inert-gas supply tank and how much inert gas we will need to buy for rocket engine development, and hopefully a rocket launch. The program is a time-step simulation, which takes inputs on how much mass flow is required to maintain the specific volume of the propellant tanks which the supply-tank feeds. The equations that I am using to model the expansion of the gases are: $ P=P_0\left(\frac{\rho}{\rho_0}\right)^\gamma $ and $ T=T_0\left(\frac{\rho}{\rho_0}\right)^{\gamma -1} $ While I do not expect perfection, I need to do better than assuming constant $ \gamma $ since I want to spend as little on this system as I can, and to use the code to understand the temperature and pressure of the tank at any time to assess minimum flow instrument diameters to prevent choked flow and to ensure there will be enough pressurization authority at all stages in the launch. The tool that I am using is the CoolProp wrapper for python, which is causing, what I would call, EXTREME funny business which I think is due to the way I modeled changes in $ \gamma $. While Helium seems to behave relatively nicely, Nitrogen causes some wacky errors, like negative enthalpies being reported by CoolProp and massively fluctuating temperatures as the tank blows down. I think this might be because I am updating the $ \gamma $ of the time step and then using the equations I have above, which are explicitly for constant $ \gamma $. Is there any suggested way to improve this model? One way I was thinking was to calculate the $ \gamma $ at every timestep again but instead use the average $ \gamma $ over the simulation time for the above equations. Another way I am thinking of would be to take the derivative of the above equations in some way and implement some numerical method, to solve the DE to find the $ P $ and $ T $ over time. Any thoughts or insight would be greatly appreciated.
Answer: If you neglect the heat transfer between the tank metal and the gas, then the gas remaining within the tank at any time has expanded adiabatically and reversibly. So, for this gas, the entropy per unit mass is constant. The variation in entropy per unit mass is given as a function of temperature and specific volume by $$ds=\frac{C_v}{T}dT+\left(\frac{\partial P}{\partial T}\right)_vdv$$ where $C_v$ is the heat capacity at constant volume. In general, Cv is a function of temperature and specific volume, but usually we will only know its behavior in the ideal gas region, $C_v^{IG}(T)$, where its value is a function only of temperature (i.e., at very large specific volume). Therefore, to use this equation in practice, one will have to integrate the equation relative to a reference state in the ideal gas region (i.e., high specific volume): $$s(T,v)=s(T_{ref},v_{ref})+\int_{T_{ref}}^T{\frac{C_v^{IG}(T')}{T'}dT'}-\int_v^{v_{ref}}{\left[\left(\frac{\partial P}{\partial T}\right)_v\right]_{T,v'}dv'}$$ I suggest generating a 2D table of s as a function of T and v, and, for any given value of v, interpolating to get T for which there is no change in s from the initial state. Another thing you could do is to approximate the real gas behavior by the van der Waals equation of state. For this equation of state, Cv is a function only of temperature, but not specific volume. So Cv is the same as the ideal gas heat capacity at all specific volumes. In addition, for this equation of state, $$\left(\frac{\partial P}{\partial T}\right)_v=\frac{R}{v-b}$$ So, $$ds=\frac{C_v^{IG}(T)}{T}dT+\frac{R}{v-b}dv$$ which can immediately be integrated analytically.
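To illustrate the last point, here is a small Python sketch that integrates the van der Waals isentrope numerically. Treating $C_v$ as constant, $ds = C_v\,dT/T + R\,dv/(v-b) = 0$ has the closed form $T(v-b)^{R/C_v} = \text{const}$, and the crude Euler integration is compared against that closed form. All numbers are illustrative (a roughly nitrogen-like co-volume and $C_v = 5R/2$), not taken from the question's tank.

```python
# Van der Waals gas with constant Cv along an isentrope:
# ds = Cv*dT/T + R*dv/(v - b) = 0  =>  dT/dv = -R*T / (Cv*(v - b)),
# whose closed form is T*(v - b)**(R/Cv) = const.

R = 8.314        # J/(mol K)
b = 3.9e-5       # m^3/mol, illustrative van der Waals co-volume
Cv = 2.5 * R     # J/(mol K), held constant for this sketch

def isentrope_T(T0, v0, v1, steps=200_000):
    """Integrate dT = -R*T/(Cv*(v - b)) dv from v0 to v1 (explicit Euler)."""
    T, v = T0, v0
    dv = (v1 - v0) / steps
    for _ in range(steps):
        T += -R * T / (Cv * (v - b)) * dv
        v += dv
    return T

T0, v0, v1 = 300.0, 1e-3, 5e-3   # expand from 1 L/mol to 5 L/mol at 300 K
T_numeric = isentrope_T(T0, v0, v1)
T_exact = T0 * ((v0 - b) / (v1 - b)) ** (R / Cv)
```

The same marching scheme generalizes directly to a temperature-dependent $C_v^{IG}(T)$, which is the DE-based approach suggested at the end of the question.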
{ "domain": "physics.stackexchange", "id": 63500, "tags": "thermodynamics, fluid-dynamics" }
What's the difference between male and female voice?
Question: If I record the voice of a man and a woman, what are the main differences I get in the various spectra and harmonics in Fourier analysis? Answer: The main difference between female and male voices is the pitch range of the fundamental frequency (of all vowels and consonants that have a pitch). The difference is significant: it's on average a factor of two or thereabouts. Females also tend to have higher ranges but the relevance of this depends highly on the specific language and the difference is somewhat less significant.
{ "domain": "dsp.stackexchange", "id": 11138, "tags": "frequency-spectrum, fourier, fourier-series, voice" }
Simplifying member functions in Blackjack game
Question: For my display functions in my Blackjack game, I want to control each player's stats output based on whether a turn is in progress or has ended. Normally, you could only see one card that your opponent(s) hold, while the rest are only revealed at the end of each turn. My code does that well, but I'm having trouble making it more concise. The display function in Game is called to display each player's stats (name, chips, and hand). It takes a Boolean argument to determine if it should display each CPU's partial stats (during each turn) or full stats (after each turn). At the function call in Game, the bool argument is passed to the respective display function in Player, where the human player and CPU player have individual display procedures. The display functions in Player primarily display the chip amounts and card rank totals. They also call respective Hand display functions which, again, solely depend on the Boolean argument for displaying the CPUs' stats. How could I simplify this code to achieve the same output (shown below the code)? 
Game.cpp

displayStats(false); // function call elsewhere in Game

void Game::displayStats(bool showCPUResults) const
{
    players[0].displayPlayerStats(); // displays player stats
    for (int i = 1; i < numPlrs; i++)
        players[i].displayCPUStats(showCPUResults); // displays CPU stats
}

Player.cpp

void Player::displayPlayerStats() const
{
    // always shown
    std::stringstream ss;
    std::cout << "\n* " << name << ": ";
    std::cout << "($" << chips << ") ";

    // displays two-digit card rank total (always shown)
    if (totalCardsValue <= 9)
        ss << "(0" << totalCardsValue << ") ";
    else
        ss << "(" << totalCardsValue << ") ";
    std::cout << ss.str();

    // displays human player's hand
    playerHand[0].displayPlayerHand();
}

void Player::displayCPUStats(bool showResults) const
{
    // always shown
    std::stringstream ss;
    std::cout << "\n* " << name << ": ";
    std::cout << "($" << chips << ") ";

    if (!showResults)
    {
        std::cout << " ";
        playerHand[0].displayCPUHand(showResults);
    }
    else
    {
        // only shown after each turn
        if (totalCardsValue <= 9)
            ss << "(0" << totalCardsValue << ") ";
        else
            ss << "(" << totalCardsValue << ") ";
        std::cout << ss.str();

        // displays CPU's hand
        playerHand[0].displayCPUHand(showResults);
    }
}

Hand.cpp

void Hand::displayPlayerHand() const
{
    // displays player's full hand (always)
    for (unsigned i = 0; i < cards.size(); i++)
        std::cout << cards[i] << " ";
}

void Hand::displayCPUHand(bool showCPUHand) const
{
    // displays CPU's first card (always)
    std::cout << cards[0] << " ";

    // displays the rest of the CPU's cards (only after each turn)
    if (showCPUHand)
    {
        for (unsigned i = 1; i < cards.size(); i++)
            std::cout << cards[i] << " ";
    }
}

Output during a turn...

Jamal: ($1000) (23) [8C] [6H] [9C]
CPU1: ($1000) [AD]

...and after a turn...
Jamal: ($1000) (23) [8C] [6H] [9C]
CPU1: ($1000) (21) [AD] [KH]

Answer: If you want to make things more concise, you could change a few things:

Number of cards printed on 2 digits:

if (totalCardsValue <= 9)
    ss << "(0" << totalCardsValue << ") ";
else
    ss << "(" << totalCardsValue << ") ";

could be written

ss << ((totalCardsValue <= 9) ? "(0" : "(") << totalCardsValue << ") ";

or

ss << "(" << std::setw(2) << std::setfill('0') << totalCardsValue << ") ";

No need for temporary std::stringstream. I think you could just std::cout directly without using std::stringstream ss.

Removing duplicated code: void Hand::displayPlayerHand() does the same as void Hand::displayCPUHand(true). This would probably be better if you were naming this method display and calling the parameter displayAllCards or displayOnlyFirst. Also, you might want to give it a default value. Corresponding code for Hand.cpp is:

void Hand::display(bool displayAllCards) const
{
    // displays the first card
    std::cout << cards[0] << " ";

    // displays the rest of the cards if required
    if (displayAllCards)
    {
        for (unsigned i = 1; i < cards.size(); i++)
            std::cout << cards[i] << " ";
    }
}

Removing duplicated code: things can be changed in Player.cpp too as Player::displayCPUStats and Player::displayPlayerStats really look alike. Edited to finish my answer: Indeed, displayPlayerStats() is nothing but displayCPUStats(true). Here again, it might be worth changing the names of methods and variables. Corresponding code for Player.cpp is:

void Player::displayStats(bool displayAllCards) const
{
    std::cout << "\n* " << name << ": ";
    std::cout << "($" << chips << ") ";
    if (displayAllCards)
    {
        std::cout << ((totalCardsValue <= 9) ? "(0" : "(") << totalCardsValue << ") ";
    }
    playerHand[0].displayHand(displayAllCards);
}
{ "domain": "codereview.stackexchange", "id": 3568, "tags": "c++, game, classes, playing-cards" }
Preferred method to discuss wiki changes?
Question: On other wikis I've contributed to, there is often a discussion method. I'm happy to help with changes, but being new to ROS I second guess whether or not these changes would be helpful or even accurate. Is this site appropriate, should I be doing this on the discussion forums, other? As an example, I stumbled on this question about ROS_INFO_STREAM and actually don't think it's a horrible question in principle. I'm quite new, and was looking for a handy equivalence table between ROS_VERBOSITY_TYPE and equivalent rospy.logVerbosity() calls using python. In looking through the wiki on logging, I think it would help to provide the base format up front: ROS_VERBOSITY_TYPE As it is, a beginner has to figure out that DEBUG in all of the examples could be any of the other verbosity levels. Also, I noticed that the intro says: roscpp uses the rosconsole package to provide its client-side API. That API takes the form of a number of ROS_ macros: rosconsole provides four different types of logging statements, at 5 different verbosity levels, with both printf- and stream-style formatting. I think the "types" are what are spelled out next, but this yields a list much longer than 4: base, named, conditional, conditional named, once, throttle, delayed throttle, filter. I could make various changes to remedy this... but it helps to have confirmation that I'm not just experiencing noob confusion and the things that don't make sense are legitimate. Thanks for any suggestions. Originally posted by jwhendy on ROS Answers with karma: 60 on 2017-05-10 Post score: 0 Answer: I think discourse.ros.org would be a good place to discuss these sorts of things. Moinmoin doesn't have the "talk" or "discuss" capabilities like MediaWiki, so that being absent, and ROS Answers really being more of a Q&A site, discourse.ros.org seems like it is the only option. PS: thanks for making the effort of first discussing things btw. The MO has most of the time been to just change things.
Perhaps exactly because of a lack of a place to discuss these kinds of things. Originally posted by gvdhoorn with karma: 86574 on 2017-05-10 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jwhendy on 2017-05-10: Thanks for clarifying. Is there someone who could be contacted about the idea of putting a wiki subforum in place? I contribute to Arch Linux and it has one, for example. I think the "Site Feedback" section could work for this? Comment by gvdhoorn on 2017-05-10: I think it's ok to put it in the general category for now. The approach has been to propose new categories only when enough posts around a certain topic start to accumulate. Not sure about whether Site Feedback would be a good category.
{ "domain": "robotics.stackexchange", "id": 27864, "tags": "ros" }
CodeReview question markdown downloader
Question: This is an update to my earlier question From Q to compiler in less than 30 seconds. As with that version, this Python script automatically downloads the markdown from any question on Code Review and saves it to a local file using Unix-style line endings. For instance, to fetch the markdown for that older question, one could write: python fetchQ 124479 fetchquestion.md I'm interested in a general review including style, error handling, or anything else that could be improved. This also has a new feature, which I'll be showing here soon, which is that this also serves as a companion application to a browser extension that I'm currently testing. In that mode, this same Python script will receive two arguments: the path to the native application app manifest and a special tag that identifies the application. See https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging for details on how the messaging works. That version uses the environment variable AUTOPROJECT_DIR to determine the directory into which the file is placed, and the file is named after the question number. So this question, for example, would be saved as 234084.md. This is intended to be used on Linux with Python 3 only. fetchQ #!/usr/bin/env python """ Code Review question fetcher. Given the number of the question, uses the StackExchange API version 2.2 to fetch the markdown of the question and write it to a local file with the name given as the second argument. 
""" import sys import urllib.request import urllib.parse import urllib.error import io import os import gzip import json import struct import html.parser from subprocess import call def make_URL(qnumber): return 'https://api.stackexchange.com/2.2/questions/' + \ str(qnumber) + \ '/?order=desc&sort=activity&site=codereview' + \ '&filter=!)5IYc5cM9scVj-ftqnOnMD(3TmXe' def fetch_compressed_data(url): compressed = urllib.request.urlopen(url).read() stream = io.BytesIO(compressed) return gzip.GzipFile(fileobj=stream).read() def fetch_question_markdown(qnumber): url = make_URL(qnumber) try: data = fetch_compressed_data(url) except urllib.error.URLError as err: if hasattr(err, 'reason'): print('Could not reach server.') print(('Reason: ', err.reason)) sys.exit(1) elif hasattr(err, 'code'): print(f'Error: {err.code}: while fetching data from {url}') sys.exit(1) try: m = json.loads(data) except json.JSONDecodeError as err: print(f'Error: {err.msg}') sys.exit(1) return m['items'][0] def getMessage(): rawLength = sys.stdin.buffer.read(4) if len(rawLength) == 0: sys.exit(0) messageLength = struct.unpack('@I', rawLength)[0] sendMessage(encodeMessage(f'attempting to read {messageLength} bytes')) message = sys.stdin.buffer.read(messageLength).decode('utf-8') return json.loads(message) # Encode a message for transmission, # given its content. def encodeMessage(messageContent): encodedContent = json.dumps(messageContent).encode('utf-8') encodedLength = struct.pack('@I', len(encodedContent)) return {'length': encodedLength, 'content': encodedContent} # Send an encoded message to stdout def sendMessage(encodedMessage): sys.stdout.buffer.write(encodedMessage['length']) sys.stdout.buffer.write(encodedMessage['content']) sys.stdout.buffer.flush() if __name__ == '__main__': if len(sys.argv) != 3: print(f'Usage: {sys.argv[0]} fetchQ questionnumber mdfilename') sys.exit(1) qnumber, qname = sys.argv[1:3] # are we being called as a Web Extension? 
if (qname == 'autoproject@beroset.com'): msg = getMessage() basedir = os.getenv('AUTOPROJECT_DIR', '/tmp') qnumber = msg['question_id'] qname = f'{basedir}/{qnumber}.md' else: msg = fetch_question_markdown(qnumber) md = html.unescape(msg['body_markdown']).replace('\r\n', '\n').encode('utf-8') title = html.unescape(msg['title']).encode('utf-8') header = b'# [{title}](https://codereview.stackexchange.com/questions/{qnumber})\n\n' with open(qname, 'wb') as f: f.write(header) f.write(md) call(["autoproject", qname]) Answer: PyCharm complains on this line: m = json.loads(data) If the above call to fetch_compressed_data fails, and the resulting error doesn't contain a reason or code attribute, the program won't close despite the error, and will then give a not-super-helpful NameError when you try to use data. I don't know if such a situation is possible, but I might add some protection just in case. Maybe add an else and move the call to exit down to reduce redundancy: except urllib.error.URLError as err: if hasattr(err, 'reason'): print('Could not reach server.') print(('Reason: ', err.reason)) elif hasattr(err, 'code'): print(f'Error: {err.code}: while fetching data from {url}') else: print("Unexpected problem:", err) sys.exit(1) Arguably, if len(rawLength) == 0: would be more idiomatic as if not rawLength: You can rely on empty collections being falsey (and non-empty collections being truthy). With {'length': encodedLength, 'content': encodedContent} This has the problem that you're needing to use strings to create and reference the "fields" of the returned object. Strings are notorious for allowing for typo problems though, and are outside of what static checking can help you with. It's a little more involved, but I might use a NamedTuple here: from typing import NamedTuple class Message(NamedTuple): length: bytes content: str ... 
encodedContent = json.dumps(messageContent).encode('utf-8') encodedLength = struct.pack('@I', len(encodedContent)) return Message(encodedLength, encodedContent) # or, for clarity (although redundant in this case) return Message(length=encodedLength, content=encodedContent) ... sys.stdout.buffer.write(encodedMessage.length) sys.stdout.buffer.write(encodedMessage.content) Now, no more messy-looking string access, and the IDE can assist you.
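The reviewer's NamedTuple suggestion can be made concrete with a short, self-contained sketch. This is an illustration rather than the answer's exact code: the helper names (encode_message, decode_message) and the round-trip check are mine, but the 4-byte native-endian framing matches the struct.pack('@I', ...) calls in the original script.

```python
import json
import struct
from typing import NamedTuple

class Message(NamedTuple):
    length: bytes   # 4-byte native-endian length prefix
    content: bytes  # UTF-8 encoded JSON payload

def encode_message(message_content) -> Message:
    # Serialize the payload and prefix it with its length, matching the
    # struct.pack('@I', ...) framing used in the original script.
    encoded_content = json.dumps(message_content).encode('utf-8')
    encoded_length = struct.pack('@I', len(encoded_content))
    return Message(length=encoded_length, content=encoded_content)

def decode_message(msg: Message):
    # Invert the framing: read the declared length, then parse the payload.
    (declared_length,) = struct.unpack('@I', msg.length)
    assert declared_length == len(msg.content)
    return json.loads(msg.content.decode('utf-8'))

if __name__ == '__main__':
    original = {'question_id': 234084}
    msg = encode_message(original)
    # Field access is attribute-based, so typos are caught by static checkers.
    print(decode_message(msg) == original)  # True
```

One small note: the answer's snippet annotates content as str, but the value stored is the encoded bytes; the sketch above types it as bytes accordingly.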
{ "domain": "codereview.stackexchange", "id": 36878, "tags": "python, python-3.x, json, stackexchange" }
In a P-N junction, why doesn't charge flow?
Question: I've linked a P-N diagram. My question is why doesn't the positive charge flow with the field and negative charge against the field when there is an electric field pointing from N to P? In equilibrium there is no flow of charge, but in the presence of the electric field, shouldn't there be flow? https://upload.wikimedia.org/wikipedia/commons/f/fa/Pn-junction-equilibrium-graphs.png Answer: If carriers enter the E-field zone, they are accelerated by the field as expected. The issue is that the region is depleted. If a battery is connected in the reverse polarity, there are no electrons in the conduction band of the P-side to go to the other side. In the forward polarity, there are electrons in the conduction band of the N-side to go to the other side. They only need enough voltage from the battery to overcome the junction's E-field. And the process continues because the battery is a source of carriers to the wires. The diagram only shows the excess charge on each side. The negative charges on the left side, for example, came from the conduction band of the N-side and are filling lower-energy states in the valence band of the P-side, eliminating available states in that band close to the junction. For a current to flow, available neighboring states are necessary, so that the average momentum can change from zero toward the current direction. Otherwise, movement is only possible through sparks, where an electron is ejected from its stable position by a high E-field; but that would damage the material, and it is different from a continuous current.
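The depletion the answer describes is maintained by the junction's built-in potential, which a forward bias must overcome before appreciable current flows. As a rough illustration (this is the standard abrupt-junction textbook estimate, $V_{bi} = (kT/q)\ln(N_A N_D / n_i^2)$, not something stated in the answer, and the doping values below are typical silicon numbers chosen only for illustration):

```python
import math

def built_in_potential(N_a, N_d, n_i=1e10, kT_over_q=0.0259):
    """Built-in potential (volts) of an abrupt p-n junction.

    N_a, N_d: acceptor/donor doping densities (cm^-3)
    n_i: intrinsic carrier density (cm^-3), roughly 1e10 for Si at 300 K
    kT_over_q: thermal voltage (V) at 300 K
    """
    return kT_over_q * math.log(N_a * N_d / n_i**2)

# A moderately doped silicon junction comes out around 0.7 V, which is why
# a forward bias of about that size is needed before current flows freely.
V_bi = built_in_potential(1e16, 1e16)
print(f"{V_bi:.2f} V")
```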
{ "domain": "physics.stackexchange", "id": 83374, "tags": "solid-state-physics" }
Concurrent stack in C
Question: (See also the follow-up question.) I was in the mood for pthread.h and decided to write a concurrent stack data structure. My requirements: the stack has maximum capacity, pushing an element, removing the top element without returning it, peeking the top element, client is blocked whenever popping from an empty stack, client is blocked whenever pushing to a full stack. My codez looks like this: concurrent_stack.h #ifndef CONCURRENT_STACK_H #define CONCURRENT_STACK_H #include <stdlib.h> typedef struct concurrent_stack { /******************************************* The number of elements stored in this stack. *******************************************/ size_t size; /************************************************** The maximum number of elements this stack can hold. **************************************************/ size_t capacity; /********************************************** The actual array holding the data of the stack. **********************************************/ void** storage; /************************************************ The mutual exclusion lock for updating the stack. ************************************************/ pthread_mutex_t mutex; /***************************** Guards against an empty stack. *****************************/ pthread_cond_t empty_condition_variable; /*************************** Guards against a full stack. ***************************/ pthread_cond_t full_condition_variable; } concurrent_stack; /***************************************** Initializes a new, empty concurrent stack. *****************************************/ void concurrent_stack_init(concurrent_stack* stack, size_t capacity); /**************************************** Pushes a datum onto the top of the stack. ****************************************/ void concurrent_stack_push(concurrent_stack* stack, void* datum); /****************************************** Returns, but does not remove the top datum. 
******************************************/ void* concurrent_stack_top(concurrent_stack* stack); /**************************************** Removes the topmost datum from the stack. ****************************************/ void concurrent_stack_pop(concurrent_stack* stack); /******************************************* Returns the number of elements in the stack. *******************************************/ size_t concurrent_stack_size(concurrent_stack* stack); /********************************* Returns the capacity of the stack. *********************************/ size_t concurrent_stack_capacity(concurrent_stack* stack); /*********************************** Releases all resources of the stack. ***********************************/ void concurrent_stack_free(concurrent_stack* stack); #endif /* CONCURRENT_STACK_H */ concurrent_stack.c #include "concurrent_stack.h" #include <pthread.h> #define MAX(A,B) (((A) > (B)) ? (A) : (B)) static const size_t MINIMUM_CAPACITY = 10; void concurrent_stack_init(concurrent_stack* stack, size_t capacity) { stack->size = 0; stack->capacity = MAX(capacity, MINIMUM_CAPACITY); stack->storage = malloc(sizeof(void*) * stack->capacity); pthread_mutex_init(&stack->mutex, NULL); pthread_cond_init(&stack->empty_condition_variable, NULL); pthread_cond_init(&stack->full_condition_variable, NULL); } void concurrent_stack_push(concurrent_stack* stack, void* datum) { pthread_mutex_lock(&stack->mutex); while (stack->size == stack->capacity) { pthread_cond_wait(&stack->full_condition_variable, &stack->mutex); } stack->storage[stack->size++] = datum; pthread_cond_signal(&stack->empty_condition_variable); pthread_mutex_unlock(&stack->mutex); } void concurrent_stack_pop(concurrent_stack* stack) { pthread_mutex_lock(&stack->mutex); while (stack->size == 0) { pthread_cond_wait(&stack->empty_condition_variable, &stack->mutex); } stack->size--; pthread_cond_signal(&stack->full_condition_variable); pthread_mutex_unlock(&stack->mutex); } void* 
concurrent_stack_top(concurrent_stack* stack) { void* ret; pthread_mutex_lock(&stack->mutex); while (stack->size == 0) { pthread_cond_wait(&stack->empty_condition_variable, &stack->mutex); } ret = stack->storage[stack->size - 1]; pthread_cond_signal(&stack->full_condition_variable); pthread_mutex_unlock(&stack->mutex); return ret; } size_t concurrent_stack_size(concurrent_stack* stack) { size_t size; pthread_mutex_lock(&stack->mutex); size = stack->size; pthread_mutex_unlock(&stack->mutex); return size; } size_t concurrent_stack_capacity(concurrent_stack* stack) { return stack->capacity; } void concurrent_stack_free(concurrent_stack* stack) { free(stack->storage); pthread_mutex_destroy(&stack->mutex); pthread_cond_destroy(&stack->empty_condition_variable); pthread_cond_destroy(&stack->full_condition_variable); } main.c #include "concurrent_stack.h" #include <pthread.h> #include <stdio.h> typedef struct thread_config { size_t element_count; concurrent_stack* stack; } thread_config; void* producer_code(void* data) { thread_config* cfg = (thread_config*) data; size_t limit = cfg->element_count; concurrent_stack* stack = cfg->stack; for (size_t i = 0; i != limit; ++i) { concurrent_stack_push(stack, (void*) i); } return NULL; } void* inspector_code(void* data) { thread_config* cfg = (thread_config*) data; size_t limit = cfg->element_count; concurrent_stack* stack = cfg->stack; for (size_t i = 0; i != limit; ++i) { concurrent_stack_top(stack); concurrent_stack_size(stack); } return NULL; } void* consumer_code(void* data) { thread_config* cfg = (thread_config*) data; size_t limit = cfg->element_count; concurrent_stack* stack = cfg->stack; for (size_t i = 0; i != limit; ++i) { concurrent_stack_pop(stack); } return NULL; } static const size_t PRODUCERS = 3; static const size_t CONSUMERS = 3; static const size_t INSPECTORS = 1; static const size_t PRODUCER_ELEMENTS = 91 * 1000; static const size_t CONSUMER_ELEMENTS = 90 * 1000; static const size_t INSPECTOR_ELEMENTS = 50 * 
1000; /** In order to make sure that the program exits, you must guarantee that: PRODUCER_ELEMENTS * PRODUCERS - CONSUMER_ELEMENTS * CONSUMERS <= STACK_CAPACITY. Otherwise, after all consumers have done their job, the producers will fill it again and finally block on it. */ static const size_t STACK_CAPACITY = 5000; int main() { concurrent_stack st; concurrent_stack_init(&st, STACK_CAPACITY); pthread_t threads[PRODUCERS + CONSUMERS + INSPECTORS]; size_t next_thread_slot = 0; thread_config producer_thread_config = { PRODUCER_ELEMENTS, &st }; thread_config consumer_thread_config = { CONSUMER_ELEMENTS, &st }; thread_config inspector_thread_config = { INSPECTOR_ELEMENTS, &st }; for (size_t i = 0; i != CONSUMERS; ++i) { pthread_create(&threads[next_thread_slot++], NULL, consumer_code, (void*) &consumer_thread_config); } for (size_t i = 0; i != INSPECTORS; ++i) { pthread_create(&threads[next_thread_slot++], NULL, inspector_code, (void*) &inspector_thread_config); } for (size_t i = 0; i != PRODUCERS; ++i) { pthread_create(&threads[next_thread_slot++], NULL, producer_code, (void*) &producer_thread_config); } /* All threads created. Now join them. */ for (size_t i = 0; i != INSPECTORS + CONSUMERS + PRODUCERS; ++i) { pthread_join(threads[i], NULL); } size_t expected_stack_size = PRODUCERS * PRODUCER_ELEMENTS - CONSUMERS * CONSUMER_ELEMENTS; printf("Expected final stack size: %zu\n", expected_stack_size); printf("Actual final stack size: %zu\n", concurrent_stack_size(&st)); concurrent_stack_free(&st); } Critique request Please tell me anything that comes to mind, yet especially I am interested in comments relating to: code layout, correctness of synchronization. Answer: Extraneous line I quickly scanned your stack code and it looked correct to me. However, I did spot an extraneous line in concurrent_stack_top(). 
You have this line that was probably copied from concurrent_stack_pop(): pthread_cond_signal(&stack->full_condition_variable); Since concurrent_stack_top() doesn't remove anything from the stack, you shouldn't need to signal anything to the producers. Pop should return top also I think that pop() should return the top element, so that the operation is atomic. Otherwise, if you do top() followed by pop(), you might get some element from top() but then pop a different element.
{ "domain": "codereview.stackexchange", "id": 23738, "tags": "c, multithreading, stack, pthreads, producer-consumer" }
$\mathrm{strict}$-$\mathrm{SUBEXP} \subset \mathrm{P}/\mathrm{poly} \implies \mathrm{strict}$-$\mathrm{SUBEXP} \subset \mathrm{MA}$
Question: Is anyone able to give a concise proof for the implication stated in the title? This is in stark contrast to this question. For the definition of $\mathrm{strict}$-$\mathrm{SUBEXP}$, see here. Answer: Yes, I can. First of all, $\mathrm{poly}(\mathrm{subexp}) = \mathrm{subexp}$, so we can simulate a subexp TM in subexp time. (This is in stark contrast to standard $\mathrm{SUBEXP}$, where we do not have any particular subexp machine.) Using this, we are able to put $\mathrm{strict}$-$\mathrm{SUBEXP}$ inside $\mathrm{PSPACE}$. The poly-space TM would try every possible poly-size circuit and, for each circuit, perform the $2^n \times 2^n$ local checks of the tableau using the current circuit. Note that the subexp TM never touches the $2^n$th cell and would cease to run long before $2^n$ time, but that does not affect the poly-space bound. After putting $\mathrm{strict}$-$\mathrm{SUBEXP}$ inside $\mathrm{PSPACE}$, we use the interactive proof protocol in which the prover's power is at most $\mathrm{P}^\mathrm{L}$ (where $\mathrm{L}$ is the language at hand) to push it down to $\mathrm{MA}$.
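The opening claim $\mathrm{poly}(\mathrm{subexp}) = \mathrm{subexp}$ can be spelled out in one line. This is a sketch under the assumption (following the "strict" definition referenced above) that the language is decided by a single machine with one fixed subexponential time bound $t(n) = 2^{n^{o(1)}}$:

```latex
% Assume one fixed machine with time bound t(n) = 2^{n^{o(1)}}.
% Composing it with any polynomial overhead p(m) = m^k keeps the bound
% subexponential, since for every \varepsilon > 0 and all large enough n:
p\bigl(t(n)\bigr) \le \bigl(2^{n^{\varepsilon}}\bigr)^{k}
                   = 2^{k\,n^{\varepsilon}}
                   \le 2^{n^{2\varepsilon}}.
```

This is the step that fails for standard $\mathrm{SUBEXP}$, where no single machine attains all the bounds simultaneously.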
{ "domain": "cs.stackexchange", "id": 12089, "tags": "complexity-theory, complexity-classes" }
The giant 6,000 km$^2$ iceberg A-68; will ground-truth telemetry supplement satellite tracking data?
Question: update 2: The FIRST images of the A-68 iceberg taken not from a satellite in space but from the air are out. On this NASA Earth Observatory web page are aerial images from NASA's Operation IceBridge taken from a P-3 aircraft. Will the placement of a GPS tracker/weather monitor or two, or even a seismometer, be next? above: The edge of A-68, the iceberg that calved from the Larsen C ice shelf. Photo by NASA/Nathan Kurtz. update 1: According to the BBC article British mission to giant A-68 berg approved: The British Antarctic Survey has won funding to visit the berg and its calving zone in February next year. It will use the Royal Research Ship James Clark Ross. This would present a potential opportunity to add some ground-truth telemetry to A-68. Details of its short-term and long-term motion could then be monitored with higher resolution both in space and time than what can be obtained from periodic photographs from polar-orbiting satellites. Video: https://youtu.be/OT-gW9sZn_8 Study of the movement of cold fresh water in the form of icebergs is important for climate models. The plain language summary in the recent paper A simulation of small to giant Antarctic iceberg evolution: Differential impact on climatology estimates; Rackow, T., C. Wesche, R. Timmermann, H. H. Hellmer, S. Juricke, and T. Jung (2017), J. Geophys. Res. Oceans, 122, 3170–3190, doi:10.1002/2016JC012513 says: Antarctic icebergs are large blocks of frozen fresh water that melt around the Antarctic continent while moving under the influence of winds, sea ice, and ocean currents. Small icebergs (< 2.2 km) are mainly driven by winds and ocean currents, whereas giant icebergs (> 10 km) tend to 'surf' the tilted sea surface and are less sensitive to changes in the wind. 
The relative importance between melting at the iceberg's base and mass loss at the side walls is also different for small and large icebergs. We present a computer simulation of Antarctic iceberg movement and melting that includes not only small icebergs, but at the same time also larger icebergs with side lengths of 10 km or more. The study highlights the necessity to account for larger icebergs in order to obtain an accurate depiction of the spatial distribution of iceberg meltwater, which, e.g., stabilizes and fertilizes the upper water column and thus supports phytoplankton growth. Future climate change simulations will benefit from the improved distribution, where the impact of iceberg melting is likely to increase due to increased calving from the Antarctic ice sheet. The authors describe the use of the Finite Element Sea ice-Ocean Model (FESOM) to address the effect of the three-dimensional ocean currents and wind pattern on the paths of icebergs and their addition of cold fresh water to the ocean. At some point any predictive model will be compared against measured data, and traditionally satellite imaging provides the capability to track large numbers of icebergs with a wide range of sizes. From the new BBC article Drifting Antarctic iceberg A-68 opens up clear water: A-68 should follow the highway up the eastern coast of the Antarctic Peninsula, leading from the Weddell Sea towards the Atlantic. "It will most likely follow a northeasterly course, heading roughly for South Georgia and the South Sandwich Islands," Dr Rackow told BBC News. "It will be very interesting to see whether the iceberg will move as expected, as a kind of 'reality-check' for the current models and our physical understanding." The large size (~6,000 km^2) of the recently calved iceberg A-68 from the Larsen C ice shelf suggests it will be relatively long-lived, and so telemetry equipment placed on it is likely to collect data for a longer time than smaller icebergs. 
I'm wondering if it might in fact be usable as a sort of "meteorological station", recording a variety of temperatures and currents and relaying the data via satellite. But my question is not whether that is possible, but whether it is likely to be done. Will A-68 get GPS beacons or more elaborate environmental telemetry in order to measure the local conditions? Having both the actual movement data together with the local environmental data may be of more value than satellite reconnaissance alone. More information is available at this question and its answers. As noted in this comment and by the BBC and the Washington Post, a very large chunk of the Larsen C ice shelf in the Antarctic has calved and is now a free iceberg. below: Photo from BBC, source: Rackow et al. below: Photo from BBC, source: Deimos Imaging, an UrtheCast company Answer: I can't imagine a moving object more suitable for remote sensing tracking than Iceberg A-68, with such slow displacement and huge size. So I don't think it will be particularly useful to install any instrumentation to track its position on site. However, there would be plenty of other measurements that could be done, about mass balance, stress fields, etc. I'm not aware of any plans to install such instrumentation. I've emailed Thomas Rackow (the author of the paper cited above), and he is not aware of any plans either, and confirmed that no instrumentation has been installed so far. But that can easily change in the future. He pointed me to an expedition happening next month to study the marine ecosystem that has been exposed by the departure of A-68; you can find out more in this link. By the way, a cool tool to track A-68's position is Worldview; this is an image taken on January 16th (there are images as fresh as today, but it has been cloudy since then): But due to the frequent cloud coverage your best bet is radar imagery. A great search portal is VERTEX from the Alaska Satellite Facility. 
I just did a search, and there is an image from today, but I liked this one more, taken yesterday (January 31st, 2018) by the Sentinel-1A satellite.
{ "domain": "earthscience.stackexchange", "id": 1317, "tags": "glaciology, remote-sensing, antarctic, gnss, gps" }
Understanding hollow waveguides. What is $\lambda_g$ on the figure?
Question: I am trying to understand the field distribution in waveguides. The figure shows the $HE_{12}$ mode of a hollow fiber. What is $\lambda_g$? Is it $\lambda_g = \lambda_0 / n_{eff}$? Does the z field component repeat (circles) many times ($L/\lambda_g$) over the length $L$ of the fiber? Answer: $\lambda_g$ is the wavelength inside the waveguide. Yes, that pattern repeats itself all along the waveguide, and it propagates at a velocity that is less than the velocity of electromagnetic waves in free space.
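A small numeric sketch, taking the asker's proposed relation $\lambda_g = \lambda_0 / n_{eff}$ at face value and counting the pattern repeats $L/\lambda_g$; the effective-index and fiber-length values below are purely illustrative, not taken from the figure:

```python
def guide_wavelength(lambda0, n_eff):
    """Wavelength of the guided mode, assuming lambda_g = lambda0 / n_eff."""
    return lambda0 / n_eff

def pattern_repeats(L, lambda0, n_eff):
    """How many times the longitudinal field pattern repeats over length L."""
    return L / guide_wavelength(lambda0, n_eff)

# Illustrative numbers only: a 10 cm hollow fiber guiding 1.55 um light,
# with an effective index slightly below 1 (so lambda_g is slightly
# longer than the free-space wavelength).
lam_g = guide_wavelength(1.55e-6, 0.998)
print(f"lambda_g = {lam_g * 1e6:.4f} um")
print(f"repeats over 10 cm: {pattern_repeats(0.10, 1.55e-6, 0.998):.0f}")
```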
{ "domain": "physics.stackexchange", "id": 25967, "tags": "electromagnetic-radiation, fiber-optics" }
Presenting and passing data to a modal view controller without using prepare(for:sender:) method
Question: I am using a toolbar button to present a modal view controller (in which I let the user export data as a PDF file). The main section of my app is a UITableViewController subclass embedded in a UINavigationController. Here is a schematic of my layout. The modal itself is embedded in a UINavigationController as I need it to have a bottom toolbar. It also has a transparent background and is presented using .overCurrentContext, so the main screen of the user's data blurs underneath. I found that to get it to float over everything else (including the navigation bar, etc.), I had to present it from the UINavigationController (otherwise the main navigation bar and toolbar appeared on top of it). The problem with this is that the UITableViewController method prepare(for:sender:) is not called. I call the segue to the modal view controller like this (from the UITableViewController subclass): // User taps EXPORT button @objc func exportButtonTapped(_ sender: UIBarButtonItem) { self.navigationController?.performSegue(withIdentifier: "showExport", sender: nil) } In order to transfer the array of user data to the modal view controller, I call the following code in the modal view controller: override func viewDidLoad() { super.viewDidLoad() // Get data from array in main table view controller let masterNav = navigationController?.presentingViewController as! UINavigationController let myTableVC = masterNav.topViewController as! MyTableViewController self.userData = myTableVC.userData // This is of type: [MyObject] } The data is then rendered to a PDF (using HTML templating) in the modal view controller's viewWillAppear() method. This works as expected. However, I have some concerns about doing it this way: Is it guaranteed that viewDidLoad() will finish before viewWillAppear() is called? Will even a larger data set be available for rendering as PDF in viewWillAppear()? Is it acceptable to present modally from the UINavigationController? 
Should I be subclassing the main UINavigationController and using its prepare(for:sender:) method (if this is even an option)? In the performSegue(withIdentifier:sender:) method, does the sender argument make any difference? Is it preferable to use present() rather than a segue? I would of course be grateful for any other advice or refinements to the code. It seems to work as expected but I just want to make sure I'm following best practice as far as possible. Answer: Is it guaranteed that viewDidLoad() will finish before viewWillAppear() is called? Will even a larger data set be available for rendering as PDF in viewWillAppear()? Yes. It needs to be loaded before it will appear. Is it acceptable to present modally from the UINavigationController? I think it is. Should I be subclassing the main UINavigationController and using its prepare(for:sender:) method (if this is even an option)? It sounds a bit complicated. prepare(for:sender:) is not a very clean way to transfer data to begin with and only useful when you use segues in a regular way. Why don't you create the ModalViewController in code, set the value and then push it through code instead? In the performSegue(withIdentifier:sender:) method, does the sender argument make any difference? I used it on rare occasions to understand where the push was coming from. Is it preferable to use present() rather than a segue? I think in your case yes.
{ "domain": "codereview.stackexchange", "id": 31868, "tags": "swift, ios, cocoa-touch" }
How does the star that has collapsed to form a Schwarschild black hole appear to an observer falling into the black hole?
Question: I understand that to an outside observer, the light from a star that is collapsing into a black hole will become more and more red-shifted as the surface of the star appears to approach the black hole event horizon. The outside observer will never actually see the surface of the star cross the black hole event horizon. This applies to all outside observers: at infinity, in orbit around the star/black hole, or those using a rocket to hover above the black hole. Conversely, I know that for someone on the surface of the star that is collapsing to form a black hole it will appear quite different. The observer on the surface will not see anything unusual happen as they cross the event horizon, and in a finite time they will reach the singularity at the center of the black hole, where we do not know what will happen since general relativity breaks down in a singularity. So, now consider an observer that starts at a great distance from the star who is continually falling directly into the star that has formed a black hole. Assume that he is falling in an exactly radial direction with no angular momentum. While the observer is still very far from the black hole, he will see the (original) visible light of the star get red-shifted to infrared, to microwaves, and then to longer and longer radio waves. But as he approaches the black hole, he starts to fall faster and faster, so I assume he starts to see that red-shifted photons from the star's surface will begin to be blue-shifted by his increasing speed as he falls. So, I think that the red-shifted photons will be blue-shifted such that when the observer crosses the event horizon he will see the "visible light" photons that were sitting there at the horizon waiting for him. Since he is falling at the speed of light (in some sense) when he crosses the horizon, these photons will become "visible light" again. So the first question: is this true? 
When he crosses the event horizon, will the surface of the star be back to having no net red or blue shift? Will a visible-light photon emitted by the star as it crosses the horizon be once again a visible-light photon? The second part of my question is: what about the number density of photons? Will it look as "bright" as it would have looked for an observer falling with the surface of the star, or will it look "dimmer", as if the surface was further away? Finally, what happens as the falling observer continues his fall past the event horizon of the black hole? Will the photons from the original in-falling star be red- or blue-shifted? What will the observer see during the short time before he himself hits the singularity? This is a follow-on to my previous question since this was not answered there. Answer: You need to be a lot more careful when you use the phrase red-shift, due to how frequency is measured in general relativity. Roughly speaking, a photon is characterised by its wave vector $k$, which is a light-like four-vector. The frequency measured by an observer is $g(\tau,k)$ where $g$ is the metric tensor, and $\tau$ is the unit vector tangent to the observer's world-line. A little bit of basic Lorentzian geometry tells you that for any given photon $k$, instantaneously there can be observers seeing that photon with arbitrarily high and arbitrarily low frequency. So: start with your space-time. Fix a point on the event horizon. Fix a photon passing through that space-time event. For any frequency you want to see, you can choose a time-like vector at that space-time event that realises that frequency. Now, since the vector is time-like and the event horizon is null, the geodesic generated by that vector must start from outside the event horizon and cross inside. Being a geodesic, it represents a free fall. 
So the conclusion is: For any frequency you want to see, you can find a free-falling observer starting outside of the black hole, such that it crosses the event horizon at the given space-time event and observes the frequency you want him to see. So you ask, what is this whole business about gravitational red-shift of Schwarzschild black holes? I wrote a longer blog post on this topic some time ago and I won't be as detailed here. But the point is that on the Schwarzschild black holes (and in general, on any spherically symmetric solution of Einstein's equations), one can break the freedom given by local Lorentz invariance by using the global geometry. On Schwarzschild we have that the solution is stationary. Hence we can use the time-like Killing vector field for the time-translation symmetry as a "global ruler" with respect to which to measure the frequency of photons. This is what is meant by "gravitational redshift" in most textbooks on general relativity (see, e.g. Wald). Note that since we fixed a background ruler, the frequency that is being talked about is different from the frequency "as seen by an arbitrary infalling observer". (There is another sense in which redshift is often talked about, which involves two infalling observers, one "departing first" with the second "to follow". In this case you again need the time-translation symmetry to make sense of the statement that the second observer "departed from the same spatial point as the first observer, but at a later time".) It turns out, for general spherically symmetric solutions, there is this thing called a Kodama vector field, which happens to coincide with the Killing vector field on Schwarzschild. Outside of the event horizon, the Kodama vector field is time-like, and hence can be used as a substitute for the global ruler with respect to which to measure red-shift, when the space-time is assumed to be spherically symmetric, but not necessarily stationary. 
Again, this notion of redshift is observer independent. And it has played important roles (though sometimes manifesting in ways that are not immediately apparent as related to red-shift, through choices of coordinates and what-not) in the study of dynamical, spherically symmetric gravitational collapse in the mathematical physics literature. To summarise: If you just compare the frequency of light measured (a) at its emission at the surface of the star in the rest frame associated to the collapse and (b) by an arbitrary free-falling observer, you can get basically any values you want. (Basically because the Doppler effect depends on the velocity of the observer, and you can change that to anything you like by choosing appropriate initial data for the free fall.) One last comment about your last question: You asked about what happens in the interior of the black hole. Again, any frequency can be realised by time-like observers locally. The question then boils down to whether you can construct such time-like observers to have come from free fall starting outside the black hole. By basic causality considerations, if you start with a time-like vector at a space-time event inside the black hole formed from gravitational collapse, going backwards along the time-like geodesic generated by the vector you will either hit the surface of your star, or exit the black hole. Though precisely how the two are divided depends on the precise nature of the gravitational collapse. I should add that if you use the "global ruler" point of view, arguments have been put forth that, analogous to how one expects red shift near the event horizon, one should also expect blue shifts near any Cauchy horizon that should exist. This has been demonstrated (mathematically) in the Reissner-Nordstrom (and similar) black holes. 
But as even the red-shift can sometimes run into problems (extreme charged black holes), one should not expect the statement about blue shifts near the Cauchy horizons to be true for all space-times.
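Not part of the answer itself, but the observer dependence it describes can be checked symbolically for the radial-infall case (a sketch; geometric units $G = c = 1$ and the standard Schwarzschild components are my assumptions): the frequency $-g(u,k)$ that an observer falling from rest at infinity assigns to radially ingoing light of Killing energy $E$ works out to $E/(1+\sqrt{2M/r})$, so at the horizon he sees light from far away redshifted by a finite factor of 2 — illustrating that "infinite redshift at the horizon" is a statement about the static ruler, not about infalling observers.

```python
import sympy as sp

M, r, E = sp.symbols('M r E', positive=True)
f = 1 - 2*M/r                       # Schwarzschild: g_tt = -f, g_rr = 1/f

# Observer in radial free fall from rest at infinity (standard geodesic result):
u_t, u_r = 1/f, -sp.sqrt(2*M/r)     # u^t, u^r

# Radially ingoing photon with conserved Killing energy E:
k_t, k_r = E/f, -E                  # k^t, k^r

# Frequency measured by the infaller: omega = -g(u, k)
omega = -(-f*u_t*k_t + (1/f)*u_r*k_r)
omega = sp.simplify(omega)          # mathematically equal to E/(1 + sqrt(2M/r))

# At the horizon r = 2M this equals E/2: light from infinity arrives
# redshifted by a finite factor of 2, not infinitely shifted.
print(omega)
```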
{ "domain": "physics.stackexchange", "id": 2186, "tags": "general-relativity, black-holes, gravitational-redshift" }
Pulling quarks from one another creates a new pair of quarks?
Question: It is said that the force between two quarks increases as they are pulled apart which is ultimately resisted by the strong nuclear force. Correct me if I'm wrong, but my physics teacher once said that if I put enough energy to separate two quarks, I end up creating a new pair of quarks. Why is that? Answer: The standard picture of quark creation looks like this: as two quarks are pulled apart, work is stored in the field that connects them. When the quantity of work stored this way is sufficient to pull another pair of quarks out of the vacuum, that's what happens, and you end up not with a pair of disconnected quarks, but two pairs. This is a simplified picture to which I invite the experts here to add details as they deem necessary.
{ "domain": "physics.stackexchange", "id": 51918, "tags": "particle-physics, standard-model, quarks" }
move_arm_warehouse package error
Question: How do you use the Arm Navigation Warehouse viewer on ROS Electric? I tried to run rosrun move_arm_warehouse create_launch_files.py pr2_test from the IROS 2011 motion planning tutorials and it returned this error message: ([rospack] couldn't find package [move_arm_warehouse]). Thanks Originally posted by OptiX on ROS Answers with karma: 63 on 2011-12-14 Post score: 0 Original comments Comment by sam on 2012-02-23: Maybe you can try 'sudo apt-get install ros-electric-*'. I can roscd to that package. Answer: Thanks Sam, I hadn't installed all the dependencies yet; now the problem is solved. Originally posted by OptiX with karma: 63 on 2012-03-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 7638, "tags": "ros, move-arm" }
How does a star ignite?
Question: I remember reading that X-Rays are generated by 'braking' electrons in a Coolidge tube. Is it fundamentally a matter that the extreme gravity immediately before a star ignites is so strong that it affects the hydrogen atoms to the point the velocity of its components must be let off in the form of heat & light? How does a star ignite? Answer: The nuclear fusion that powers stars has little to nothing to do with electrons. In the cores of stars, temperatures are high enough that all the electrons are stripped from the nuclei, leaving a pure plasma. As stars contract and condense out of interstellar dust, their gravitational potential energy is converted to heat faster than this heat can be radiated away. Once the temperature reaches roughly $10^7\ \mathrm{K}$, protons (hydrogen nuclei, stripped of their electrons) have a nonnegligible chance of sticking together when they collide, with one of them converting to a neutron along the way: $$ {}^1H + {}^1H \to {}^2H + e^+ + \nu_e. $$ This is the first step of the PP chain, and it releases energy. There are more steps that ultimately turn four protons into a helium-4 nucleus. In more massive stars than the Sun, there are other ways (e.g. the CNO cycle) to catalyze this process with the help of carbon, nitrogen, and oxygen. In any event, there is nothing extreme about the gravity. It just happened to pull matter from a huge distance close together. If you took infinitely spread apart particles totaling mass $M$ and formed a uniformly dense sphere of radius $R$, the gravitational potential energy released would be $$ \frac{3GM^2}{5R}, $$ about half of which you expect to go into heating the material. Once hot, hydrogen naturally forms helium in exothermic processes. Stellar reactions are self-regulating in the sense that if the rate of fusion increases, the additional luminosity would push the outer layers of the star, causing the star to expand and cool, thus reducing the reaction rate. 
Thus as long as there is hydrogen in the core, stars more or less burn at a steady rate once ignited.
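As a rough numerical check of the $10^7\ \mathrm{K}$ ignition figure (a sketch I added; the per-particle virial estimate $k_B T \sim GMm_p/2R$ and the Solar values are my assumptions, not from the answer):

```python
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M   = 1.989e30     # solar mass, kg
R   = 6.957e8      # solar radius, m
m_p = 1.673e-27    # proton mass, kg
k_B = 1.381e-23    # Boltzmann constant, J/K

# Gravitational energy released per proton ~ G M m_p / R; by the virial
# theorem roughly half goes into heat, so k_B T ~ G M m_p / (2 R).
# Order-of-magnitude only: prefactors like 3/5 are dropped.
T = G * M * m_p / (2 * k_B * R)
print(f"central temperature ~ {T:.1e} K")  # ~1e7 K, hot enough for the pp chain
```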
{ "domain": "physics.stackexchange", "id": 56070, "tags": "astrophysics, stars" }
String theory hilbert space - Gas of free gravitons
Question: I am trying to understand the arguments given in MAGOO in chapter 3.4.1 (Hilbert Space of String Theory). The authors give descriptions of the Hilbert space of String Theory when we consider our theory in different energy regimes. There are 4 such regimes. Since each regime is enough material for one question I will spread this in 4 different questions. Let's start here with the first regime where we have 1: Gas of Free Gravitons ($E\ll m_s$) In this low energy regime we can approximate the states of our Hilbert space by the Fock space of gravitons in $AdS_5\times S^5$. Then the states are the stationary wave solutions as shown in section 2.2.2 which are the solutions of the scalar KG-equation in $AdS_{p+2}$ $$ \phi=e^{i\omega\tau}G(\theta)Y_l(\Omega_p) $$ with $G(\theta)=(\sin\theta)^l(\cos\theta)^{\lambda_{\pm}} \;_2F_1(a,b,c;\sin\theta)$ and $$ a=\frac{1}{2}(l+\lambda_{\pm}-\omega R)\\ b=\frac{1}{2}(l+\lambda_{\pm}+\omega R)\\ c=l+\frac{1}{2}(p+1) $$ and $$ \lambda_\pm=\frac{1}{2}(p+1)\pm\frac{1}{2}\sqrt{(p+1)^2+4(mR)^2} $$ Then they argue that stationary modes are quantised "in the unit set by R" so they view the supergravity particles as confined in a box of size R (the curvature radius). Why are they confined in this box of size R? I understand that I'm probably missing something in the derivation of this, but the next point is very confusing to me. They also state the entropy, as a function of density of states $$ S(E)\sim(ER)^\frac{9}{10} $$ where did this come from? How is this entropy calculated (estimated?) Answer: For the purposes of thermodynamics, you can treat the propagating modes of the graviton (and superpartners) as independent scalar fields satisfying \begin{equation} \nabla_\mu \nabla^\mu \phi + \xi R \phi = 0. \end{equation} And then there's a very quick and dirty way to verify that $AdS_{d + 1}$ will behave like a box. 
Instead of a minimally coupled scalar with $\xi = 0$, you can consider a conformally coupled scalar with $\xi = \frac{d - 1}{4d}$. This will allow you to perform a Weyl rescaling on the metric to get \begin{align} &ds^2 = -\left ( 1 + \frac{r^2}{R^2} \right ) dt^2 + \left ( 1 + \frac{r^2}{R^2} \right )^{-1} dr^2 + r^2 d\Omega_{d - 1}^2 \\ \mapsto \;\; &ds^2 = -dt^2 + \left ( 1 + \frac{r^2}{R^2} \right )^{-2} dr^2 + \left ( 1 + \frac{r^2}{R^2} \right )^{-1} r^2 d\Omega_{d - 1}^2. \end{align} We can now get an effective volume of global $AdS_5$ as \begin{align} V &= \int \sqrt{g} \, dr \, d\Omega_3 \\ &= \mathrm{Vol}(S^3) \int_0^\infty \left ( 1 + \frac{r^2}{R^2} \right )^{-5/2} r^3 dr \\ &= \frac{2}{3} \mathrm{Vol}(S^3) R^4. \end{align} In the present problem there should of course be another piece due to the $S^5$. To proceed with analyzing a gas of free fields, we know that for a fermion, there can be 0 or 1 quanta with momentum $p$ so we get a single mode partition function of $Z(p) = 1 + e^{-\beta |p|}$. For a boson, there can be arbitrarily many excitations so \begin{equation} Z(p) = 1 + e^{-\beta |p|} + e^{-2\beta |p|} + \dots = (1 - e^{-\beta |p|})^{-1}. \end{equation} To get the log of the full partition function, we sum over all modes, leading to \begin{align} \log Z &= \sum_p \log Z(p) \\ &\approx \int | \log (1 \pm e^{-\beta |p|}) | \frac{V d^d p}{(2\pi)^d} \\ &= \frac{V}{(2\pi \beta)^d} \Gamma(d) \mathrm{Vol}(S^{d - 1}) \zeta^{\pm}(d + 1). \end{align} In the last step, we have Taylor expanded the log and integrated term by term. The answer involves the Riemann zeta function $\zeta^-$ for bosons and the so called alternating zeta function $\zeta^+$ for fermions. Knowing that the free energy is \begin{equation} F = E - TS = -T \log Z, \end{equation} the formula for the entropy now follows from $S = - \frac{\partial F}{\partial T}$. However, the above is overkill if we aren't interested in the precise prefactor. 
The slick way to find that $S \sim E^{d/(d + 1)}$ (so $E^{9/10}$ in $9 + 1$ dimensions) is to use scale invariance. Gravitons are massless and cannot introduce a scale beyond the $AdS$ radius $R$. Therefore at temperatures much larger than $R^{-1}$ (but still much smaller than the string scale), entropy must be extensive. But entropy is a dimensionless quantity so the only way to achieve this is to have \begin{equation} S \sim V T^d. \end{equation} The same goes for internal energy but that needs to have one more power of temperature \begin{equation} E \sim V T^{d + 1}. \end{equation} Eliminating $T$ then yields $S$ in terms of $E$. As an aside, a free theory is not the only case where it's possible to determine the prefactor. Another is a 2d conformal field theory where modular invariance leads to the Cardy formula for the density of states.
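The elimination in the last step can be done mechanically (a sketch, with all proportionality constants set to 1): solving $E \sim VT^{d+1}$ for $T$ and substituting into $S \sim VT^d$ gives $S \sim V^{1/(d+1)}E^{d/(d+1)}$, hence the $9/10$ power for $d = 9$.

```python
import sympy as sp

d = 9  # spatial dimensions of 10d supergravity
V, T, E = sp.symbols('V T E', positive=True)

# E ~ V T^(d+1)  =>  T ~ (E/V)^(1/(d+1))
T_of_E = sp.solve(sp.Eq(E, V*T**(d + 1)), T)[0]

# S ~ V T^d  =>  S ~ V^(1/(d+1)) E^(d/(d+1))
S_of_E = sp.powsimp(V*T_of_E**d)
print(S_of_E)   # for d = 9 the energy exponent is 9/10, as in S ~ (ER)^(9/10)
```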
{ "domain": "physics.stackexchange", "id": 88559, "tags": "hilbert-space, entropy, string-theory, research-level, ads-cft" }
Matrix "dimensional analysis" of Lagrangians in QFT
Question: Since the important things in the QFT Lagrangian are vectors and matrices, I wanted to do a "matrix dimensional analysis" of each term. The electromagnetic Lagrangian (ignoring all constants and signs) is : $\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi + \bar{\psi}\gamma^{\mu}\partial^{\mu}A_{\mu}\psi + \bar{\psi}\psi + F_{\mu\nu}F^{\mu\nu}$ Since $\psi$ is a (4x1) vector, $\bar{\psi}$ is (1x4), and each $\gamma^\mu$ is (4x4), the first term is : (1x4)(4x4)(4x1) = (1x1) or a scalar. Since $A_{\mu}$ for a given $\mu$ is a scalar, the second term is : (1x4)(4x4)(1x1)(4x1) = (1x1), a scalar. The third term is : (1x4)(4x1) = (1x1) And because of the summations in the fourth term, the two (4x4) terms collapse into a scalar as well. The problem I'm having is with the QCD Lagrangian : $\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi + \bar{\psi}\gamma^{\mu}\lambda\cdot A_{\mu}\psi + \bar{\psi}\psi + F_{\mu\nu}^aF^{a\mu\nu}$ Now, $\psi$ is primarily a 4-vector spinor, so it's still (4x1) (which it must be in order to be multiplied by the $\gamma$'s). So the first and third terms don't change at all. But the $\lambda$'s are 3x3 color matrices, and it was clarified here that each $A_\mu$ is a 3x3 matrix, which makes sense if they're to be multiplied together. And I'm assuming $\lambda\cdot A_{\mu}$ = $\lambda_aA^a_\mu$ (a=1..8) is still a 3x3 matrix (a "weighted sum", if you will, of each $A_\mu$). So then the second term looks like : (1x4)(4x4)(3x3)(3x3)(4x1), which does not work. Now, I understand that $\psi$ in QCD is a Dirac spinor of the quark field, so some component of $\psi$ must represent the colors $[\psi^r \psi^b \psi^g]^T$. But even if $\psi$ represents a color vector in the third term, there's still the 4x4 $\gamma$'s. Question 1 : what is the resolution to this? How can $\psi$ represent both a spinor and color space in the same equation? And is it only in the third term that $\psi$ represents the color space? 
(Ok, questions 1-3 :) Regarding the $F_{\mu\nu}$ term, since it is created from the $A_\mu$ term, is it a 4x4 matrix of 3x3 matrices? If so, the summations over $\mu$ and $\nu$ (0..3) would still leave a 3x3 matrix. Question 2 : what is the resolution to this? Is there an implied summation over the color matrices? Or something else? What I would most appreciate is the QCD Lagrangian, with both spinor and color indices included in every variable. (edited to correct mistakes pointed out below) Answer: In the case of QCD $\psi$ has both a spinor and a color index. If you put all appropriate indices on everything you will see how it works. Edit: Answer extended after a comment from OP. Let $\mu, \nu, \cdots=0, \cdots, 3$ be Lorentz indices, $\alpha,\beta, \cdots = 1,\cdots, 4 $ spinor indices, $a,b,\cdots = 1,\cdots ,8$ indices for the generators of the Lie algebra ($su(3)$ for QCD) and $r,s,\cdots = 1,2,3$ be the indices for the $\bf{3}$ representation of $su(3)$. We then have $\psi_{\alpha \, r}$ for the spinor, $(\lambda^a)_{rs}$ is a matrix element of a generator, and $A_{\mu a}$ is the gauge field. These are now all $1\times 1$ in your notation. As an example $$ \bar{\psi} \gamma^\mu \lambda \cdot A_\mu \psi = \psi^\dagger_{\alpha r} (\gamma^0)_{\alpha\beta} (\gamma^\mu)_{\beta\delta}(\lambda^a)_{rs} A_{\mu a} \psi_{\delta s} $$ As you see all indices are contracted. The $\partial_\mu$ should not be present in that term. I trust you can fill in the rest.
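The index bookkeeping can be spot-checked numerically (a sketch with random placeholder arrays, not physical gamma or Gell-Mann matrices): contracting $\alpha, \beta, \delta, r, s$ and summing over $\mu$ and $a$ exactly as in the displayed formula leaves a single number, i.e. a (1x1) object in the question's counting.

```python
import numpy as np

rng = np.random.default_rng(0)
psi    = rng.normal(size=(4, 3))     # psi_{alpha r}: 4 spinor x 3 color components
gamma  = rng.normal(size=(4, 4, 4))  # (gamma^mu)_{alpha beta}, mu = 0..3 (placeholder)
lam    = rng.normal(size=(8, 3, 3))  # (lambda^a)_{rs}, a = 1..8 (placeholder)
A      = rng.normal(size=(4, 8))     # gauge field A_{mu a}
gamma0 = gamma[0]

# psi^dag_{ir} (gamma^0)_{ij} (gamma^m)_{jk} (lambda^c)_{rs} A_{mc} psi_{ks}
term = np.einsum('ir,ij,mjk,crs,mc,ks->',
                 psi.conj(), gamma0, gamma, lam, A, psi)

print(np.asarray(term).shape)  # () -- every index contracted, a genuine scalar
```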
{ "domain": "physics.stackexchange", "id": 68721, "tags": "quantum-field-theory, tensor-calculus, dimensional-analysis, yang-mills, dirac-matrices" }
How do I correctly typeset metastable radionuclide symbols?
Question: Does anyone know if there is an officially sanctioned way to typeset symbols like technetium-99m (99Tcm or 99mTc)? I have seen both, although in more recent publications, I think, the latter predominates. For my part, it looks somewhat wrong to place the symbol for metastable in the position you'd normally expect the charge. How would you write the ion, 99Tcm+? That looks quite clumsy. So does anyone know if there's some IUPAC (or similarly authoritative) guidance on this? Answer: According to the international standard ISO 80000 Quantities and units – Part 9: Physical chemistry and molecular physics, Amendment 1, a metastable nuclide is indicated by adding the letter m (in roman type) to the mass number of the nuclide, as in the following examples: $$\mathrm{^{133m}Xe}$$ $$\mathrm{^{99m}Tc^+}$$
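A minimal LaTeX sketch of the ISO form (my own illustration, not part of the standard; with the mhchem package, \ce{^{99m}Tc+} produces the same symbol):

```latex
\documentclass{article}
\begin{document}
% Metastable marker attached to the mass number, per ISO 80000-9/Amd 1:
$\mathrm{^{133m}Xe}$ \qquad $\mathrm{^{99m}Tc^{+}}$
\end{document}
```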
{ "domain": "chemistry.stackexchange", "id": 9645, "tags": "elements, notation, radioactivity" }
definition of tempo and mode in evolution
Question: I am reading Punctuated Equilibria: The Tempo and Mode of Evolution Reconsidered, which mentions the "tempo" and "mode" of evolution. I'm not familiar with the related literature and when I google the concept, what I got is something like: One general topic, suggested by the word "tempo," has to do with "evolutionary rates..., their acceleration and deceleration, the conditions of exceptionally slow or rapid evolutions, and phenomena suggestive of inertia and momentum." A group of related problems, implied by the word "mode," involves "the study of the way, manner, or pattern of evolution, a study in which tempo is a basic factor, but which embraces considerably more than tempo'' This is as elusive as a definition can be. A clear definition should be something like "tempo is ...." So what are the definitions of mode and tempo in evolution? Answer: Actually, we can go right to the source on this: George Gaylord Simpson's Tempo and Mode in Evolution (1944). Tempo is the rate of evolution. Simpson was a paleontologist, so he was trying to answer questions like "what is the percent change in the size of this tooth per million years?" He has a chapter estimating these rates for various characters in horses, which have a very good North American fossil record. Mode encompasses the mechanisms responsible for evolution and their patterns. Simpson was working at the time of the Modern Synthesis (the wikipedia page is decent and brief enough on the topic, about which books are written), coming from the perspective of a paleontologist trying to reconcile genetic evolution with the macroevolutionary patterns he was seeing in the fossil record. Simpson was really the first paleontologist to look at fossils in a quantitative way, trying to understand of connections between microevolutionary processes and macroevolutionary patterns. He even co-authored one of the first quantitative and statistical methods books for "organismal" biologists: Quantitative Zoology. 
Specifically, of tempo, he says (p. xvii): "...evolutionary rates under natural conditions, their acceleration and deceleration, the conditions of exceptionally slow or rapid evolutions, and phenomena suggestive of inertia and momentum." Of mode, he says (p. xviii) "...the study of the way, manner, or pattern of evolution, a study in which tempo is a basic factor, but which embraces considerably more than tempo. The purpose is to determine how populations became genetically and morphologically differentiated, to see how they passed from one way of living to another or failed to do so, to examine the figurative outline of the stream of life and the circumstances surrounding each characteristic element in that pattern." Mode basically encompasses all the processes of evolution. Tempo is the subset of mode that concerns how fast evolution is (or is not) happening. The great thing is that 70 years after Simpson's book, biologists are still trying to understand tempo and mode of evolution.
{ "domain": "biology.stackexchange", "id": 2139, "tags": "evolution" }
Most efficient solution for USACO: Cow Gymnastics - Python
Question: I was trying to figure out this problem, and I did. However, my code is so abhorrently ugly that I want to tear out my eyeballs when I look at it:

with open("gymnastics.in", "r") as fin:
    rounds, cows = [int(i) for i in fin.readline().split()]
    nums = [tuple(map(int, line.split())) for line in fin]

def populatePairs(cows):
    pairs = []
    for i in range(cows):
        for j in range(cows):
            if i != j:
                pairs.append((i+1,j+1))
    return pairs

def consistentPairs(pairs, nums):
    results = []
    for i in pairs:
        asdf = True
        for num in nums:
            for j in num:
                if j == i[1]:
                    asdf = False
                    break
                if j == i[0]:
                    break
            if not asdf:
                break
        if asdf:
            results.append(i)
    return results

pairs = populatePairs(cows)

with open("gymnastics.out", "w+") as fout:
    print(len(consistentPairs(pairs, nums)), file=fout)

I feel like there should definitely be a better solution that is faster than \$O(n^3)\$, and without the triple nested for-loop with the if-statements trailing behind them, but I cannot, for the love of god, think of a better solution. Problem synopsis: Given an \$n\$ by \$m\$ grid of points, find the number of pairs in which one number is consistently placed before the other. Example:

Input:
3 4
4 1 2 3
4 1 3 2
4 2 1 3

Output:
4

Explanation: The consistent pairs of cows are (1,4), (2,4), (3,4), and (3,1), in which case 4 is consistently greater than all of them, and 1 is always greater than 3. Answer: Suggestions: Use the variable names from the problem specification instead of inventing your own (just make them lower-case since that's preferred in Python code). Makes it easier to see the connections. Unless yours are much better. I guess nums is plural so num is a single number, but you can iterate it? Bad name. And wth does asdf mean? Python prefers snake_case for function names. To be honest, the lack of any explanation of your method, the name asdf and the highly convoluted code made me give up reading it. 
But here's my solution, simply counting occurring pairs and then the result is the number of pairs that appeared K times:

from itertools import combinations
from collections import Counter

ctr = Counter()
with open('gymnastics.in') as f:
    k, _ = map(int, next(f).split())
    for session in f:
        cows = session.split()
        ctr.update(combinations(cows, 2))

with open('gymnastics.out', 'w') as f:
    print(list(ctr.values()).count(k), file=f)
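To see that this counting approach reproduces the worked example from the question, here is the same idea run on in-memory data instead of the gymnastics.in/gymnastics.out files (a sketch, not part of the original answer):

```python
from collections import Counter
from itertools import combinations

k = 3  # number of observed rounds
sessions = [
    [4, 1, 2, 3],   # the three rankings from the sample input
    [4, 1, 3, 2],
    [4, 2, 1, 3],
]

ctr = Counter()
for session in sessions:
    # every ordered pair (x, y) with x ranked ahead of y in this round
    ctr.update(combinations(session, 2))

# a pair is consistent iff it appears with the same orientation in all k rounds
consistent = sum(1 for count in ctr.values() if count == k)
print(consistent)  # 4: the pairs (4,1), (4,2), (4,3) and (1,3)
```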
{ "domain": "codereview.stackexchange", "id": 39752, "tags": "python, complexity" }
How can you accurately determine your own mass?
Question: I'm trying to figure out whether it is possible for an individual to accurately determine his/her own mass, to within 100g, using equipment that is readily accessible or can be purchased at a reasonable price. It may be tempting to say "go to the supermarket and buy a bathroom scale" -- however as an anecdotal point of reference I can say that I own 3 and they give me readings which differ by several kilograms. Also, with many digital scales I've used, the readings change if I place them in different locations on the ground, as well as if I change the battery. Is there a practical yet effective way for an individual to determine their mass, with confidence that the results are accurate? Quick summary of suggestions offered in the comments: Use the scale at the doctor's office (or get a better scale from somewhere) ==> That's cheating, the problem is to find it by yourself at home. Take measurements from 3 scales and calculate the mean ==> That's wrong, what if all 3 scales are incorrect? It doesn't matter how many samples you take, theoretically we don't know the relationship between the real weight and the shown weight, nor do we know the distribution in the case of inaccuracies. (It could be the case that old scales will always show a higher value, or that certain manufacturers skew their results down to make overweight people happier.) Use known weights (such as weightlifting plates or known volumes of water) and determine a calibration curve ==> The problem here is twofold. First of all, it is not so easy to find objects with accurate weights. Weightlifting plates can be a few percent off. Also, the error will accumulate. For example, if you calibrate with water and you want your curve to go up to 80kg, you need 80 liters of water. If every liter is measured with 1% error, you can end up with anything between 79.2 and 80.8kg at the end. Answer: I'm not sure that it makes sense to try to measure your body weight to a precision of 100 g. 
For example I was just thirsty and drank a 20 ounce bottle of water, which transferred about 600 g extra mass to my stomach. Even just breathing changes your mass: if you take ten half-liter breaths per minute and your exhalations contain 5% carbon dioxide by volume, that's a mass loss of tens of grams of carbon per hour. (Moisture is probably a bigger effect there, too.) To measure a 50–100 kg mass to a precision of 0.1 kg is a fractional uncertainty of $10^{-4}$, which is about two orders of magnitude better precision than most college-course laboratory experiments. Furthermore you would expect to see changes of several hundreds of grams over the course of the day (which would be interesting, which is maybe why you're asking). You won't in general find $10^{-4}$ precision in cheap consumer electronics. I'd expect a bathroom scale to have an absolute precision of 1%–5%, or one to seven pounds for a 150 pound person, with poorer quality loosely associated with cheaper scales. If what you want is a well-calibrated absolute weight with three or four significant figures, the best setup for you is going to be a cantilever system with well-calibrated reference weights. That's what's at your doctor's office — sorry. If it weren't the most cost-effective way to get a reasonably accurate weight, then doctors would buy something else. If, on the other hand, you're interested in seeing kilogram-level changes in your weight with sub-kilogram precision, you might not need the absolute weight after all. If you can convince yourself that your scale is linear for small deviations from your weight, then maybe you can take your scale's last two digits, the kilogram and decigram digits, at face value. Here's one way you could test that: Get several similar-but-different sized weights, about the size of the mass differences you're hoping to measure. Brick pieces might work. Label them somehow: A, B, C, etc. 
Put a base load on the scale so that it reads somewhere near the value that you're interested in. For instance, if you're weighing yourself, you could stand on the scale and have someone help you with the next steps. One at a time, add your test weights to the scale. Each one will increment the reading on the scale by some amount. You'll make a data table like

reading   load
-------   -----
80.0 kg   just you
81.2 kg   you + A
82.2 kg   you + A + B

and so on. From this you can find the mass of each little weight. (This is how veterinarians weigh stubborn cats, but they do it one cat at a time). Now repeat the measurements with the same base, but with the other weights in a different order:

reading   load
-------   ----
95.2 kg   you plus all your weights
80.2 kg   just you
81.1 kg   you + B
82.3 kg   you + B + A

There are a couple of things that you might learn from this procedure: One is the random error inherent in each scale. For instance, I made the two "just you" weights different in the last digit. That's not unreasonable: essentially all digital readouts have what's called a "Schmidt trigger" that puts some hysteresis in the last digit, so that it doesn't flicker between adjacent values; however that means that the uncertainty in the last digit of a digital readout is at least $\pm1$ in the final digit. You might also find that the same brick fragment C reliably takes the scale from 80.0 kg to 82.0 kg with one base load, but from 95.0 kg to 97.2 kg with another base load. That would mean that your scale is "nonlinear," since the same increase in signal gives a different increase in output starting from a different place. You'd have to decide how much this bothers you, if you find it. This technique doesn't address the question of stability: presumably you're interested in measuring your weight over many days. 
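The arithmetic on those tables is just successive differencing; a small sketch using the made-up readings from the answer (the helper name is mine):

```python
def step_masses(readings):
    """Mass added at each step, from a run of cumulative scale readings."""
    return [round(b - a, 1) for a, b in zip(readings, readings[1:])]

run1 = [80.0, 81.2, 82.2]   # just you, you + A, you + A + B
run2 = [80.2, 81.1, 82.3]   # just you, you + B, you + B + A

mass_a1, mass_b1 = step_masses(run1)   # A = 1.2 kg, B = 1.0 kg
mass_b2, mass_a2 = step_masses(run2)   # B = 0.9 kg, A = 1.2 kg

# B comes out 0.1 kg different between the runs: about the +/-1
# jitter you expect in the readout's last digit.
print(round(abs(mass_b1 - mass_b2), 1))
```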
I'd suggest essentially the same test for measuring the stability of your scale(s): find an inert weight that's close enough to your body weight that you expect the scale to be linear for nearby values, and compare your weight on the scale to that rather than simply to the reading of the scale. Depending on the precision you're interested in, you may still have some strange stuff happen. For instance some electronics will respond differently in humid weather than in dry weather; also some weights will absorb moisture from the air and have different masses in humid weather than in dry weather. As the saying goes: quick, cheap, or correct, pick any two. You're not going to be able to get the precision that you want without some expenditure of money or time, but you can perhaps get the result that you want a little easier.
{ "domain": "physics.stackexchange", "id": 16454, "tags": "mass, everyday-life, measurements" }
Is $L=\{\langle M_1,M_2\rangle|L(M_1)\cap L(M_2)\neq \emptyset \}$ R, RE or coRE?
Question: Below is the language; determine (R), (RE), or (coRE), and prove your answer. $L=\{\langle M_1,M_2\rangle|M_1,M_2$ are Turing-machines and $L(M_1)\cap L(M_2)\neq \emptyset \}$ Attempt: I think $L\in RE$, because we can use a universal TM to search for a string accepted by both machines: if some string lies in both $L(M_1)$ and $L(M_2)$, the search will eventually find it, so $L(M_1)\cap L(M_2)\neq \emptyset$ can be verified. I would like someone to verify and clarify this, and give me some steps for the proof. Thanks! Answer: First, $L$ is not recursive: there is an easy reduction from $L_{ne}=\{\langle M\rangle \mid \mathcal{L}(M) \neq \emptyset\}$ which is a well known non-recursive language (for the reduction, just use a TM $M_2$ such that $\mathcal{L}(M_2) = \Sigma^*$). Now, $L$ is indeed recursively enumerable. The intuition behind this is the following; considering inputs ordered lexicographically:

n ← 1
repeat:
    simulate n computation steps for the n first inputs for M1 and M2
    n ← n + 1
until: M1 and M2 accept the same input x
return true

If $\langle M_1, M_2\rangle\in L$, then the previous algorithm will stop and return true. Otherwise, the algorithm will never stop. Finally, since $L$ is not R but RE, it means that $\overline{L}$ is not RE, so $L$ is not coRE.
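The dovetailing loop can be mimicked concretely by replacing the two Turing machines with step-bounded acceptors (a toy sketch I added; accepts(x, steps) stands for "simulating the machine on x for that many steps ends in acceptance", and inputs are natural numbers):

```python
from itertools import count

# Toy stand-ins for M1 and M2: True iff the "machine" accepts x within `steps` steps.
def m1(x, steps):
    return x > 0 and x % 2 == 0 and steps >= x       # even numbers, after ~x steps

def m2(x, steps):
    return x > 0 and x % 3 == 0 and steps >= 2 * x   # multiples of 3, more slowly

def dovetail(m1, m2):
    """Halts iff some input is accepted by both machines; loops forever otherwise."""
    for n in count(1):
        for x in range(n):               # the n first inputs ...
            if m1(x, n) and m2(x, n):    # ... each simulated for n steps
                return x

print(dovetail(m1, m2))  # 6: the first positive number both machines accept
```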
{ "domain": "cs.stackexchange", "id": 20375, "tags": "formal-languages, turing-machines, computability" }
Karp hardness of searching for a matching split
Question: UPDATE: In 2 days, if no more convincing answer is posted, then bounty of 50 rep. will go to xskxzr. Due to lack of connectedness and a clean & clear cut, the bounty is still open for 2 days. (UTC is now 17 Sep 01:57) This is yet another follow-up question in the series: Karp hardness of searching for a matching cut Karp hardness of searching for a matching erosion In this question, we further restrict the notion of a matching erosion (which is already a restricted notion of a matching cut). Formally, our definition is as follows. Given an undirected graph $G(V, E)$, a matching split $M=(A, B)$ is a partition of $V$ into two disjoint subsets, i.e. $A\cap B = \emptyset \land A\cup B = V$ that satisfies the following conditions: $G[A]$ and $G[B]$ are two disjoint induced subgraphs The edge set of the cut $M = \{uv\in E\vert u\in A \land v \in B\}$ is a matching (here, we abuse the notation a little bit, $M$ is both the partition $(A, B)$ and the edge set of the matching cut) Each vertex $u\in A$ is incident to exactly one edge in $M$ Each vertex $u\in B$ is incident to exactly one edge in $M$ Clearly, we should have $|A|=|B|=|V|/2$, hence the name of this concept. Our decision problem Matching Split naturally asks whether a given graph $G$ has a matching split. Our question is: What is the complexity of deciding Matching Split? Answer: It is NP-complete. Clearly it belongs to NP. To prove its NP-completeness, we'll reduce not-all-equal 3-satisfiability (NAE3SAT) to this problem. Given an instance of NAE3SAT with $n$ variables and $m$ clauses, construct a graph as follows: For each literal $l$, construct a complete graph with $m+1$ vertices $v_0(l),\ldots,v_m(l)$. For each variable whose positive and negative literals are respectively $l_i$ and $l_j$, add an edge between $v_0(l_i)$ and $v_0(l_j)$. 
For each clause $c_i=l_i^1\vee l_i^2\vee l_i^3$, add the following structure: In addition, for each literal $l$ other than $l_i^1,l_i^2,l_i^3$, add a new vertex $v_i'(l)$ and an edge between $v_i(l)$ and $v_i'(l)$. Now we assert that the instance of NAE3SAT is satisfiable if and only if the graph has a matching split. If the instance of NAE3SAT is satisfiable, consider a valid truth assignment. If $l$ is a true literal, we put $v_i(l)$'s into $A$ and $v_i'(l)$'s (if they exist) into $B$ for all $i=0,\ldots,m$, otherwise we put $v_i(l)$'s into $B$ and $v_i'(l)$'s into $A$. In addition, for the structure corresponding to $c_i$ (we assume $l_i^1$ and $l_i^2$ are true literals while $l_i^3$ is a false literal without loss of generality), we put $v_i^{12}, v_i^{123}$ into $A$ and put $v_i^{23}, v_i^{31}, v_i^{122331}$ into $B$. We can see this partition is a matching split. If the graph has a matching split, we first observe that $v_0(l),\ldots,v_m(l)$ should be put into the same part since they make up a complete graph (assume $m\ge 2$ without loss of generality). In addition, if $l_i$ and $l_j$ correspond respectively to the positive and negative literals of the same variable, $v_k(l_i)$ and $v_k(l_j)$ should be put into different parts, otherwise $v_0(l_i)$ or $v_0(l_j)$ has no neighbor in the other part. If the positive literal corresponding to a variable is put into $A$, we assign the variable to True, otherwise we assign it to False. Note that for each clause $c_i$, $v_i(l_i^1), v_i(l_i^2)$ and $v_i(l_i^3)$ cannot be put into the same part, otherwise $v_i^{123}$ has either no neighbor or $3$ neighbors in the other part. Hence this assignment is a valid assignment for the instance of NAE3SAT. Edit: One can simply add edges among $v_0(l)$'s to make the graph connected. For example, if the clauses are $l_1\vee l_2\vee l_3$ and $l_4\vee l_5\vee l_6$, one can simply add an edge between $v_0(l_3)$ and $v_0(l_4)$. 
The idea is that for a valid assignment, if we flip all the values of variables involved in one connected component, the assignment is still valid. In the example, this means we can always assume $v_0(l_3)$ and $v_0(l_4)$ are put into the same part.
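The conditions in the definition of a matching split can be checked mechanically: since the partition covers all of $V$, it suffices that every vertex is incident to exactly one cut edge, which simultaneously forces the cut edges to form a perfect matching between $A$ and $B$. A small Python sketch of this verifier (illustrative, not part of the proof):

```python
def is_matching_split(vertices, edges, A):
    """Check whether (A, V\\A) is a matching split of the graph.

    Every vertex must be incident to exactly one edge crossing the cut;
    this implies the cut edges form a perfect matching between A and B.
    """
    A = set(A)
    cut_degree = {v: 0 for v in vertices}
    for u, v in edges:
        if (u in A) != (v in A):  # edge crosses the cut
            cut_degree[u] += 1
            cut_degree[v] += 1
    return all(d == 1 for d in cut_degree.values())

# A 4-cycle 1-2-3-4 splits into A={1,2}, B={3,4}: cut edges are 2-3 and 4-1.
vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_matching_split(vertices, edges, [1, 2]))  # True
print(is_matching_split(vertices, edges, [1, 3]))  # False: every vertex gets 2 cut edges
```

Such a check also witnesses that Matching Split is in NP: the partition is the certificate and the verification is linear in the size of the graph.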
{ "domain": "cs.stackexchange", "id": 12158, "tags": "complexity-theory, graphs, np-complete, np-hard, np" }
Conservation of momentum in an elastic collision
Question: At around 23:00 minutes in Walter Lewin's Lecture #16 on Elastic and Inelastic Collisions, he introduced a problem as follows: a ball with mass $m$ and velocity $v$ undergoes an elastic collision with a rigid wall. So only the ball has kinetic energy before and after the collision; the wall is at rest, so its momentum is $0$. Later he said that after the collision the momentum of the ball changed by $2mv$. Please explain how the ball's momentum changed by $2mv$. Answer: This is one of Walter Lewin's interesting questions in this course. The fallacy is that the mass of the wall is infinite. If you give the wall (and the Earth to which it is attached) a mass $M$ and work through the conservation of momentum and conservation of kinetic energy equations, you get that the rebound velocity of the mass $m$ is less than $v$ in magnitude $\left(=\dfrac{m-M}{m+M}v\right)$ and the wall has a finite speed $\left(= \dfrac{2m}{m+M}v\right)$. Then assuming that $M \rightarrow \infty$, which in practice is impossible, gives you Professor Lewin's result. Update Look at $\dfrac{m-M}{m+M}v$ if $M\gg m$. You get that the rebound velocity of mass $m$ is $-v$, and so the change in momentum of mass $m$ is $-mv - (mv) = -2mv$
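The finite-mass formulas in the answer are the standard 1-D elastic-collision results, and the $M \rightarrow \infty$ limit can be checked numerically (a small sketch with unit values chosen purely for illustration):

```python
def elastic_1d(m, M, v):
    """1-D elastic collision: mass m moving at speed v hits mass M at rest.
    Returns (v_m_after, v_M_after) from momentum + kinetic-energy conservation."""
    v_m = (m - M) / (m + M) * v
    v_M = 2 * m / (m + M) * v
    return v_m, v_M

m, v = 1.0, 3.0
for M in [1.0, 10.0, 1e6]:
    v_m, v_M = elastic_1d(m, M, v)
    dp = m * v_m - m * v  # change in the ball's momentum
    print(f"M={M:>9}: rebound v={v_m:+.4f}, wall v={v_M:.6f}, dp={dp:+.4f}")
# As M grows, the rebound velocity tends to -v, the wall velocity to 0,
# and the ball's momentum change to -2mv (here -6).
```

Momentum is conserved exactly at every $M$; only in the limit does the ball's momentum change reach $-2mv$ while the wall carries away momentum $2mv$ with vanishing kinetic energy.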
{ "domain": "physics.stackexchange", "id": 35678, "tags": "newtonian-mechanics, momentum, conservation-laws, collision" }
Pointers towards using robot_localization for VO + Wheel odometry
Question: Hey, I am trying to set up an EKF fusion for combining odometry coming from two topics: Visual Odometry (X, Y and yaw) and Wheel Encoders (X, Y and yaw). The main source of odometry is the wheel encoders; the visual odometry is there much like a correction mechanism, as when using IMUs. Having said that, my VO node publishes Twist values; is it better to fuse the wheel odometry using Twist values with robot_localization? I am getting the following error when I launch rviz to view the odometry: Assertion `!pos.isNaN() && "Invalid vector supplied as parameter"' failed. I am confused about how the TF should be set up. The TF of the wheel encoder odometry is as follows: frame_id: "odom" child_frame_id: "base_link" The TF of the visual odometry is as follows: frame_id: "raspicam" child_frame_id: '' AFTER LAUNCHING THE EKF NODE BEFORE LAUNCHING THE EKF NODE ** MY EKF LAUNCH FILE LOOKS LIKE THIS ** <node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization_custom" clear_params="true"> <!-- renamed to ekf_localization_custom to prevent name conflicts --> <param name="frequency" value="30"/> <!-- this value should be fine --> <param name="sensor_timeout" value="0.1"/> <!-- this value should be fine --> <param name="two_d_mode" value="false"/> <!-- could make this true to stick to XY plane.. 
--> <!-- based these values off of the ROScon presentation --> <param name="map_frame" value="map"/> <param name="odom_frame" value="odom"/> <param name="base_link_frame" value="base_link"/> <param name="world_frame" value="odom"/> <param name="odom0" value="/optical_flow"/> <param name="odom1" value="/ubiquity_velocity_controller/odom"/> <!-- settings for using Twist --> <rosparam param="odom0_config">[false, false, false, false, false, false, true, true, false, false, false, true, false, false, false]</rosparam> <rosparam param="odom1_config">[false, false, false, false, false, true, true, true, false, false, false, true, false, false, false]</rosparam> <param name="odom0_differential" value="false"/> <param name="odom1_differential" value="false"/> <param name="publish_null_when_lost" value="false"/> <param name="odom0_relative" value="false"/> <param name="odom1_relative" value="false"/> <param name="odom0_queue_size" value="10"/> <remap from="/odometry/filtered" to="/odometry/visual" /> </node> [EDIT #1 adding sensor data samples ] Wheel Odometry header: seq: 3786 stamp: secs: 1529497452 nsecs: 845127379 frame_id: "odom" child_frame_id: "base_link" pose: pose: position: x: 0.50 y: 0.20 z: 0.0 orientation: x: 0.0 y: 0.0 z: 0.01 w: 1.0 covariance: [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2] twist: twist: linear: x: 0.8334 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.1233 covariance: [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2] Visual Odometry header: seq: 540 stamp: secs: 1529499662 nsecs: 800348997 frame_id: "raspicam" child_frame_id: '' pose: pose: position: x: 0.0 y: 0.0 z: 0.0 orientation: x: 0.0 y: 0.0 z: 0.0 w: 0.0 covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] twist: twist: linear: x: 0.0 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0 covariance: [0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03] Originally posted by chrissunny94 on ROS Answers with karma: 142 on 2018-06-20 Post score: 1 Original comments Comment by Tom Moore on 2018-06-20: Please post sample messages for every sensor input. Comment by chrissunny94 on 2018-06-20: Hey , I have edited the question and added the sample data . I think I have resolved the issue of rviz crashing , it was because the TF not set properly . Now ,The covariance on the fussed output keeps building up into a larger and larger value. How do I systematically tackle this ? Answer: You have a few issues. You are fusing velocities from your visual odometry source, but your visual odometry source has no child_frame_id. If you look at the definition of nav_msgs/Odometry, you'll see that the child_frame_id is the TF frame of the twist (velocity) data, but you are leaving this empty. That means the EKF is going to ignore it completely. You have two_d_mode set to false, which means you want a full 3D state estimate, but you are only fusing 2D variables. Unmeasured variables in the EKF will result in absolute explosions of those variables' covariance values, which will have irritating effects on other variables. Turn on two_d_mode or start measuring the other 3D variables. Your state estimate covariance is growing without bound for your X and Y position, because you are only fusing velocity data for those. The filter will integrate those velocities, but it will also integrate their errors. If you had, for example, a laser scanner, you could localize against the map to provide X/Y position that would cap the growth in covariance. 
You are fusing two data sources that accumulate error over time, though, so endless growth in your covariance is actually appropriate. You should really fuse yaw velocity from your wheel encoders, and not absolute yaw. Note, though, that if you fuse absolute X, Y, and yaw from your wheel odometry, AND your wheel odometry covariance is NOT calculated correctly (e.g., it's static, as opposed to growing without bound), that will cap the covariance as well, but it will be incorrect. In general, yes, I recommend fusing velocity data for odometry sources, including visual odometry. If you had some means of localizing the robot globally (e.g., GPS or scan-to-map localization or overhead camera localization), you would fuse those absolute pose values. EDIT in response to comments: I'm not sure we're on the same page with what an IMU does or how it constrains drift (or doesn't). An IMU typically produces the following quantities: Absolute roll, pitch, and yaw Angular velocity Linear acceleration If you fuse roll, pitch, and yaw into the EKF, that data is coming from accelerometers and magnetometers, which are measuring the IMU's orientation with respect to a fixed frame. So if you fuse those values, you will constrain drift only for those variables, and their covariances will reach a steady-state and will stop growing. If you fuse velocity or linear acceleration data from an IMU, but do NOT fuse the roll, pitch, or yaw data, your roll, pitch, and yaw covariance will grow without bound. In this EKF, we are concerned with the following quantities: X, Y, Z position - can be measured directly, or integrated from linear velocity data. 
Roll, pitch, yaw - can be measured directly, or integrated from angular velocity data X, Y, Z velocity - can be measured directly, or integrated from linear acceleration data Roll, pitch, and yaw velocity - can only be measured directly X, Y, Z acceleration - can only be measured directly Any time your only data source for a variable comes from integration, that variable's covariance will grow without bound. So, if you fuse X, Y, and Z velocity, your X, Y, and Z velocity covariance will be bounded, but your X, Y, and Z position covariance will grow without bound. Originally posted by Tom Moore with karma: 13689 on 2018-06-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by chrissunny94 on 2018-06-26: https://avisingh599.github.io/vision/visual-odometry-full/ I am taking this approach to build my visual odometry node . But the drift is just too much . What if i use the Visual Node as simply an error correction node much like a IMU enabled EKF localization node . Are there other similar work? Comment by chrissunny94 on 2018-06-26: My greatest constraint is Computational power . I have to run everything on a RPI3B+ . I have already tried Viso2 , ORB-SLAM2 ,RPG-SVO .(Too Slow!) recently came across Rebvo , yet to test it . Comment by Tom Moore on 2018-06-26: Even if you did that, the error you're correcting is error in velocity measurement, not pose measurement. Odometry accumulates error over time, and you covariance will reflect that. The only way to constrain drift in the EKF output is to have an absolute pose measurement. Comment by chrissunny94 on 2018-06-26: While using an IMU , we are not getting an absolute pose , right? So can a visual odometry node be used in a similar way ? Comment by chrissunny94 on 2018-06-26: okay , now i understand better . One last question , is there a way to estimate Angular Velocity and Linear Acceleration ? 
recoverPose returns , R – Recovered relative t – Recoverd relative transl Comment by chrissunny94 on 2018-06-26: "Another problem occurs when the camera performs just pure rotation: even if there are enough features, the linear system to calculate the F matrix degenerates. " What if i use wheel odometry as an absolute scale ? Comment by Tom Moore on 2018-06-26: Your wheel odometry is generated by integrating encoder ticks. It has the same issue. If it's written correctly, the pose variables will have covariances that grow without bound. Is it a problem for your covariances to grow? Or are you trying to reject bad measurements from your VO?
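The unbounded covariance growth discussed above is easy to reproduce with a toy one-dimensional dead-reckoning model: integrating a noisy velocity adds the velocity variance into the position variance at every step (a simplified illustration of the principle, not robot_localization itself):

```python
def integrate_position_variance(vel_var, dt, steps):
    """Toy 1-D dead reckoning: x += v*dt with Var(v) = vel_var.
    Each prediction step adds dt**2 * vel_var to the position variance,
    so position uncertainty grows linearly with elapsed time."""
    pos_var = 0.0
    history = []
    for _ in range(steps):
        pos_var += dt ** 2 * vel_var
        history.append(pos_var)
    return history

hist = integrate_position_variance(vel_var=0.2, dt=0.1, steps=100)
print(round(hist[9], 6), round(hist[99], 6))  # 0.02 0.2 -- 10x the steps, 10x the variance
```

This is why fusing only velocity sources (wheel or visual odometry) can never cap the position covariance, while a single absolute pose source (GPS, scan-to-map, overhead camera) immediately does.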
{ "domain": "robotics.stackexchange", "id": 31044, "tags": "navigation, ekf, visual-odometry, ros-kinetic, robot-localization" }
How is a quartic oscillator solved in classical mechanics?
Question: Quantum mechanically, a quartic anharmonic oscillator with potential $$V(x)=\frac{1}{2}m\omega^2x^2+\lambda x^4$$ is dealt with using perturbation theory: the approximate energies $E_n$ and energy eigenstates $|\phi_n\rangle$ are obtained using time-independent perturbation theory. Classically, the problem amounts to solving for the trajectory $x(t)$. At this point, one is stuck with a nonlinear differential equation which cannot be solved in closed analytical form. How do we go about solving such problems classically? Do we use a similar perturbation technique to obtain corrections to the trajectory order by order? Any suggestions? In short, I am curious about the classical behaviour of this system. Answer: This problem is discussed as an example of secular perturbation theory in José, J. and Saletan, E., 2000. Classical dynamics: a contemporary approach. To first order in $\epsilon$, the (rationalized) equation of motion $$ \ddot{x}+\omega_0^2 x+\epsilon x^3=0 $$ has the approximate solution $$ x(t)=a \cos(\omega_0 t)-\epsilon \frac{a^3}{8\omega_0^2} \left(3\omega_0 t \sin\omega_0 t + \frac{1}{4}(\cos\omega_0 t-\cos 3\omega_0 t)\right) $$ with the secular term (linear in $t$) appearing explicitly. One can get rid of the secular term using Poincaré-Lindstedt theory, i.e. by introducing corrections to the unperturbed frequency $\omega_0$. The same problem is used in José and Saletan as an example of canonical perturbation theory (or Hamilton-Jacobi perturbation theory), where the solution is found through successive canonical transformations, using the unperturbed canonical variables $$ \phi_0=\arctan(m\omega_0 q/p)\, ,\qquad J_0=\frac{1}{2}\left(\frac{p^2}{m\omega_0}+m\omega_0 q^2\right)\, . $$ The first order correction to the frequency is $\omega\approx\omega_0+\epsilon \frac{3a^2}{8\omega_0}$ when the initial conditions are $p(0)=0, q(0)=a$. 
This problem, also as an example of the canonical perturbation approach, is similarly discussed in Example 8.3 of Percival, I.C. and Richards, D., 1982. Introduction to dynamics. Cambridge University Press.
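The first-order frequency shift $\omega\approx\omega_0+\epsilon\,3a^2/(8\omega_0)$ can be checked against a direct numerical integration of $\ddot{x}+\omega_0^2 x+\epsilon x^3=0$. The following is a sketch (my own simple leapfrog integrator, not code from either textbook), measuring the frequency from the time of the first turning point:

```python
import math

def duffing_frequency(omega0, eps, a, dt=1e-5):
    """Numerically measure the oscillation frequency of
    x'' + omega0^2 x + eps x^3 = 0 started from x(0)=a, v(0)=0.
    Leapfrog (kick-drift-kick) integration; the first sign change of v
    from negative to positive marks the half period t, so omega = pi/t."""
    acc = lambda x: -omega0 ** 2 * x - eps * x ** 3
    x, v, t = a, 0.0, 0.0
    v += 0.5 * dt * acc(x)  # initial half kick
    while True:
        x += dt * v
        v_prev = v
        v += dt * acc(x)
        t += dt
        if v_prev < 0 <= v:  # turning point at x ~ -a: half period reached
            return math.pi / t

omega0, eps, a = 1.0, 0.1, 1.0
numeric = duffing_frequency(omega0, eps, a)
first_order = omega0 + eps * 3 * a ** 2 / (8 * omega0)
print(numeric, first_order)  # both close to 1.037, agreeing to first order in eps
```

The small residual difference is the expected second-order term, of size $\sim\epsilon^2 a^4/\omega_0^3$.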
{ "domain": "physics.stackexchange", "id": 59854, "tags": "newtonian-mechanics, oscillators, non-linear-systems, anharmonic-oscillators" }
Energy level of hybrid orbitals in the molecular orbital energy diagram
Question: I would like to know what the energy level of a hybrid orbital is. For instance, let's consider the $\ce{N2}$ molecule. According to its geometry, we know that one 2p orbital and the 2s orbital are going to form 2 sp hybrid orbitals. In the molecular orbital energy diagram, where should we place these 2 sp orbitals? Would they be midway between a 2p and a 2s? And which p orbital do we take for the average ($p_x$, $p_y$, $p_z$, $\pi$ or $\sigma$)? Will the sp* be higher or lower than the $2p_x$ and $2p_y$? Answer: When we have an sp hybrid orbital, it is usually made of the s orbital and the p orbital that points along the bonding axis ($p_z$). Its energy will be the mean of the energies of the initial orbitals. In the example of $\ce{N2}$, it is essential to bear in mind that not every sp pair will form a $\sigma$ and a $\sigma^*$ orbital. Only the one pair of sp orbitals that overlaps the most will do so. The other pair will contribute to the $\ce{N2}$ lone pairs, because there is nearly no overlap. The two other bonds that form the triple bond are made with the two remaining orbitals, $p_x$ and $p_y$. By the way, it is not necessary to use hybrid orbitals for molecular orbital theory. Credits to Philipp and Ben Norris.
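The "mean of the energies" rule from the answer is simple enough to write out explicitly. As a toy numerical illustration (the orbital energies below are placeholder values for the sketch, not measured data for nitrogen):

```python
def sp_hybrid_energy(e_s, e_p):
    """An sp hybrid mixes one s and one p orbital with equal weight,
    so to first approximation its energy is the average of the two."""
    return (e_s + e_p) / 2

E_2s, E_2p = -25.6, -13.2  # placeholder energies in eV, illustrative only
print(round(sp_hybrid_energy(E_2s, E_2p), 2))  # -19.4
```

On the diagram this puts both sp hybrids halfway between the 2s and the $2p_z$ level, i.e. below the untouched $2p_x$ and $2p_y$ orbitals.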
{ "domain": "chemistry.stackexchange", "id": 5852, "tags": "molecular-orbital-theory, orbitals" }
Tuple/Lookup conundrum
Question: I have a list of plans and a list of contacts, and I have to check whether contact.ProviderId matches plan.ProviderId. If they match, I need to store plan.Name in a contactUI. Provider to plan is a 0..1-to-many relationship, and that's why I couldn't use the Dictionary I tried in the first instance. Retrieving the list of objects requires calling the DB, so I want to avoid calling it more than needed. I came up with this var offeredPlans = new List<Tuple<int, string>>(); foreach (var plan in plans) { // ....Some code... offeredPlans .Add(new Tuple<int, string>(providerId, plan.Name)); } var compareTo = offeringPlans.ToLookup(pair => pair.Item1, pair => pair.Item2); foreach(var contact in contacts) { var plansAttachedTo = Check(providerId.Value, compareTo); foreach (var plan in plansAttachedTo) { // New contactUI with plan.Name as one of its properties } } where Check is private static IEnumerable<string> Check(int providerId, ILookup<int, string> plans) { return offeringPlans.Where(p => p.Key == providerId).SelectMany(p => p); } Is this terrible or does it make sense? I've never used these classes (Tuple, Lookup...) before. I would like my code to be as clear as possible, even if it's not the cleverest solution (which it certainly isn't; actually, I realize now I could have done some kind of SQL join, right?), but I would like to know if someone can point out any obvious errors in the current code. Edit The code inside the loop is as follows (in this case I don't omit any code, as I'm not sure whether what you suggest is possible, p.s.w.g). I am using reflection since only some of the classes that inherit from plan have the ProviderId property. 
foreach (var plan in plans) { var offeringDetail = new OfferingDetail(); if (plan.GetType() == typeof (OwnedProductSummary)) { var productSummary = plan as OwnedProductSummary; offeringDetail = _offeringBLL.GetById(productSummary.OfferingID); } if (plan.GetType() == typeof (OwnedServiceSummary)) { var serviceSummary = plan as OwnedServiceSummary; offeringDetail = _offeringBLL.GetById(serviceSummary.OfferingID); } // For the other types of summary it will be 0 var providerId = offeringDetail.ProviderID; offeredPlans .Add(new Tuple<int, string>(providerId, plan.Name)); } Answer: Once you've gotten your code into an ILookup you can just use the Item property (which in C# is called with [...]) to get all values with a given key. So the Check can be entirely replaced by using the ILookup like this: ILookup<int, string> plansLookup = ... IEnumerable<string> plansForProvider = plansLookup[providerId]; // Finds all plans for this provider However, it's not clear that you need to be creating the List<Tuple<int, string>> in the first place. You can just use Linq to generate your ILookup from scratch: var plansLookup = (from plan in plans let productSummary = plan as OwnedProductSummary let serviceSummary = plan as OwnedServiceSummary let offeringDetail = (productSummary != null) ? _offeringBLL.GetById(productSummary.OfferingID) : (serviceSummary != null) ? _offeringBLL.GetById(serviceSummary.OfferingID) : new OfferingDetail() select new { offeringDetail.ProviderID, plan.Name }) .ToLookup(x => x.ProviderID, x => x.Name);
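For readers more familiar with Python than C#, the ILookup grouping the answer builds corresponds to a plain dict-of-lists. This is only a rough analogue for illustration (the provider ids and plan names are made up, and `build_plan_lookup` is a hypothetical helper, not part of the review):

```python
from collections import defaultdict

def build_plan_lookup(pairs):
    """Group (provider_id, plan_name) pairs, like C#'s ToLookup:
    lookup[provider_id] yields the list of plan names for that provider,
    and an unknown key yields an empty sequence instead of raising."""
    lookup = defaultdict(list)
    for provider_id, name in pairs:
        lookup[provider_id].append(name)
    return lookup

pairs = [(1, "Basic"), (1, "Premium"), (2, "Basic")]
lookup = build_plan_lookup(pairs)
print(lookup[1])   # ['Basic', 'Premium']
print(lookup[99])  # [] -- like ILookup, a missing key gives an empty sequence
```

The key point carried over from the answer is the same: build the grouping once, then key into it, rather than re-filtering the full list per contact.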
{ "domain": "codereview.stackexchange", "id": 3640, "tags": "c#, lookup" }
What is the generated grammar for this language?
Question: I want to construct a regular grammar that generates the words containing both "ab" and "bc" as subwords, over the terminal alphabet {a,b,c}. My solution so far is G=(Vn={S,X,Y},Vt={a,b,c},S,F={ S-> aS | bS | cS | abX | cbX, X-> aX | bX | cX | ε}) Answer: Your solution so far is incorrect. According to your solution, we have the derivation S -> abX -> ab, which does not contain bc as a substring. Your solution also has a non-terminal Y, but it does not appear in any production rule. A correct grammar follows, with start symbol $S$: $S \to aS$ $S \to bS$ $S \to cS$ $S \to abcA$ $S \to abX$ $S \to bcY$ $A \to aA$ $A \to bA$ $A \to cA$ $A \to \epsilon$ $X \to aX$ $X \to bX$ $X \to cX$ $X \to bcA$ $Y \to aY$ $Y \to bY$ $Y \to cY$ $Y \to abA$
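One can sanity-check such a grammar by enumerating all derivations up to a bounded word length and comparing against the direct substring test. A Python sketch of this check (the production table encodes the answer's grammar; the enumeration code itself is my own illustration):

```python
from itertools import product

# Production table of the answer's grammar; nonterminals are uppercase.
RULES = {
    'S': ['aS', 'bS', 'cS', 'abcA', 'abX', 'bcY'],
    'A': ['aA', 'bA', 'cA', ''],
    'X': ['aX', 'bX', 'cX', 'bcA'],
    'Y': ['aY', 'bY', 'cY', 'abA'],
}

def generated(max_len):
    """All terminal words of length <= max_len derivable from S.
    The grammar is right-linear, so every sentential form is a
    terminal prefix followed by at most one nonterminal."""
    words, frontier = set(), {('', 'S')}
    while frontier:
        nxt = set()
        for prefix, nt in frontier:
            for rhs in RULES[nt]:
                if rhs and rhs[-1].isupper():   # rhs ends in a nonterminal
                    new_prefix = prefix + rhs[:-1]
                    if len(new_prefix) <= max_len:
                        nxt.add((new_prefix, rhs[-1]))
                else:                           # all-terminal rhs (here: epsilon)
                    if len(prefix + rhs) <= max_len:
                        words.add(prefix + rhs)
        frontier = nxt
    return words

n = 7
target = {''.join(w) for k in range(n + 1) for w in product('abc', repeat=k)
          if 'ab' in ''.join(w) and 'bc' in ''.join(w)}
print(generated(n) == target)  # True
```

Every production appends at least one terminal (except $A \to \epsilon$, which terminates a derivation), so the enumeration stops after at most max_len + 1 rounds.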
{ "domain": "cs.stackexchange", "id": 17939, "tags": "complexity-theory" }
A* (A star algorithm) implementation in gazebo
Question: Hi everyone, I am working on my project but am now puzzled by the A* algorithm implementation. I have successfully done these things; please see the attached image https://s12.postimg.org/f3o1xtprx/Working.png Could anyone please guide me on how to implement the A* algorithm? Thanks. Originally posted by xeeshan on ROS Answers with karma: 5 on 2016-11-03 Post score: 0 Original comments Comment by mgruhler on 2016-11-03: What is your question? This seems like asking for help on homework. Where is your actual problem in implementing it? Don't you understand the algorithm? Gazebo is a simulator; you don't implement a navigation algorithm in there. What are those images supposed to tell us? Comment by xeeshan on 2016-11-03: Sorry for the confusion. Actually, I have followed the tutorial for the Pioneer robot model and successfully executed the simulation in Gazebo and RViz. The attached image shows the current progress: the red arrow tells the robot to move in that direction. Comment by xeeshan on 2016-11-03: But I need to set a point on the map and calculate the distance with the A* algorithm. For this I need help with where I have to write the A* code (in the global planner or in the local planner). Answer: The move_base package will do that for you. By default it uses Dijkstra's algorithm, which is very similar to A*, but you can change it to A* if you like. Originally posted by shoemakerlevy9 with karma: 545 on 2016-11-03 This answer was ACCEPTED on the original site Post score: 0
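For reference, the planning step itself is compact. Below is a minimal grid-based A* in Python (an illustrative sketch; in ROS you would normally configure move_base's global planner rather than write this from scratch, and the 4-connected grid with Manhattan heuristic is my own choice for the example):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 means the cell is blocked.
    Heuristic: Manhattan distance, admissible for 4-connectivity.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        f, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already closed
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct path via parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            nr, nc = nb
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and nb not in came_from):
                ng = g + 1
                if ng < g_cost.get(nb, float('inf')):
                    g_cost[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, nb, cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

With the heuristic set to zero this reduces to Dijkstra's algorithm, which is exactly the relationship the answer alludes to.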
{ "domain": "robotics.stackexchange", "id": 26140, "tags": "ros, gazebo, rviz, path" }
Flask-SQLAlchemy models and unit-testing
Question: This is my first time building anything remotely like a robust database. I'm midway through designing it and I would be grateful if you have suggestions or have found any logical errors. Goal: allow authenticated users to enter data for multiple hospitals. My database has three tables: User, Hospital, and Data. A User might have many Hospitals (and vice-versa). Because of this, I have created a many-to-many scheme. A Hospital might have many Datas so I created a one to many relationship. from petalapp import db from datetime import datetime ROLE_USER = 0 ROLE_ADMIN = 1 #TODO:rename? hospitals = db.Table('hospitals', db.Column('hospital_id', db.Integer, db.ForeignKey('hospital.id')), db.Column('user_id', db.Integer, db.ForeignKey('user.id')) ) # tags bmarks time class User(db.Model): """User has a many-to-many relationship with Hospital""" id = db.Column(db.Integer, primary_key=True) nickname = db.Column(db.String(64), unique = True) email = db.Column(db.String(150), unique=True) role = db.Column(db.SmallInteger, default=ROLE_USER) hospitals = db.relationship('Hospital', secondary=hospitals, backref=db.backref('users', lazy='dynamic')) def __init__(self, nickname, email, role=ROLE_USER): self.nickname= nickname self.role = role self.email = email #TODO what information to show? 
def __repr__(self): return '<Name : %r>' % (self.nickname) def is_authenticated(self): return True def is_active(self): return True def is_anonymous(self): return False def get_id(self): return unicode(self.id) def add_hospital(self, hospital): if not (hospital in self.hospitals): self.hospitals.append(hospital) def remove_hospital(self, hospital): if not (hospital in self.hospital): self.hospitals.remove(hospital) @staticmethod def make_unique_nickname(nickname): if User.query.filter_by(nickname = nickname).first() == None: return nickname version = 2 while True: new_nickname = nickname + str(version) if User.query.filter_by(nickname = new_nickname).first() == None: break version += 1 return new_nickname class Hospital(db.Model): """Hospital's has a one-to-many relationship with DATA and a many-to-many relationship with User""" id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(80)) data = db.relationship('Data', backref='hospital', lazy = 'dynamic') def __init__(self, name): self.name = name def __repr__(self): return '<Name %r>' % self.name class Data(db.Model): """Data has a many-to-one relationship with Hospital""" id = db.Column(db.Integer, primary_key=True) standard_form = db.Column(db.Integer) marketing_education = db.Column(db.Integer) record_availability = db.Column(db.Integer) family_centerdness = db.Column(db.Integer) pc_networking = db.Column(db.Integer) education_and_training = db.Column(db.Integer) team_funding = db.Column(db.Integer) coverage = db.Column(db.Integer) pc_for_expired_pts = db.Column(db.Integer) hospital_pc_screening = db.Column(db.Integer) pc_follow_up = db.Column(db.Integer) post_discharge_services = db.Column(db.Integer) bereavement_contacts = db.Column(db.Integer) certification = db.Column(db.Integer) team_wellness = db.Column(db.Integer) care_coordination = db.Column(db.Integer) timestamp = db.Column(db.DateTime) hospital_id = db.Column(db.Integer, db.ForeignKey('hospital.id')) def __init__(self, 
standard_form=0, marketing_education=0, record_availability=0, family_centerdness=0, pc_networking=0, education_and_training=0, team_funding=0, coverage=0, pc_for_expired_pts=0, hospital_pc_screening=0, pc_follow_up=0, post_discharge_services=0, bereavement_contacts=0, certification=0, team_wellness=0, care_coordination=0, timestamp=datetime.utcnow()): self.standard_form = standard_form self.marketing_education = marketing_education self.record_availability = record_availability self.family_centerdness = family_centerdness self.pc_networking = pc_networking self.education_and_training = education_and_training self.team_funding = team_funding self.coverage = coverage self.pc_for_expired_pts = pc_for_expired_pts self.hospital_pc_screening = hospital_pc_screening self.pc_follow_up = pc_follow_up self.post_discharge_services = post_discharge_services self.bereavement_contacts = bereavement_contacts self.certification = certification self.team_wellness = team_wellness self.care_coordination = care_coordination self.timestamp = timestamp def __repr__(self): return """ <standard_form : %r>\n <marketing_education : %r>\n <record_availability : %r>\n <family_centerdness : %r>\n <pc_networking : %r>\n <education_and_training : %r>\n <team_funding : %r>\n <coverage : %r>\n <pc_for_expired_pts : %r>\n <hospital_pc_screening : %r>\n <pc_follow_up : %r>\n <post_discharge_services : %r>\n <bereavement_contacts : %r>\n <certification : %r>\n <team_wellness : %r>\n <care_coordination : %r>\n <datetime_utc : %r>""" % ( self.standard_form, self.marketing_education, self.record_availability, self.family_centerdness, self.pc_networking, self.education_and_training, self.team_funding, self.coverage, self.pc_for_expired_pts, self.hospital_pc_screening , self.pc_follow_up, self.post_discharge_services, self.bereavement_contacts, self.certification, self.team_wellness, self.care_coordination, self.timestamp) Here are my tests so far: ''' File: db_test_many_create.py Date: 2012-12-06 
Author: Drew Verlee Description: functions to build a many-to-many relationship ''' import unittest from petalapp.database.models import User, Hospital, Data, ROLE_USER from petalapp import db class BuildDestroyTables(unittest.TestCase): def setUp(self): db.drop_all() db.create_all() def tearDown(self): db.session.remove() db.drop_all() def test_user_setup(self): user_test_1 = User("test_user_nickname","user_email",ROLE_USER) db.session.add(user_test_1) db.session.commit() def test_data_setup(self): data_test_1 = Data(1) db.session.add(data_test_1) db.session.commit() def test_hospital_setup(self): hospital_test_1 = Hospital("test_hospital_1") db.session.add(hospital_test_1) db.session.commit() def test_make_unique_nickname(self): u = User(nickname = 'john', email = 'john@example.com') db.session.add(u) db.session.commit() nickname = User.make_unique_nickname('john') assert nickname != 'john' u = User(nickname = nickname, email = 'susan@example.com') db.session.add(u) db.session.commit() nickname2 = User.make_unique_nickname('john') assert nickname2 != 'john' assert nickname2 != nickname def test_multiple_link_table(self): #create drew = User(nickname= "Drew", email="Drew@gmail.com",role=ROLE_USER) mac_hospital = Hospital("Mac_hospital") pro_hospital = Hospital("pro_hospital") mac_data = Data(1,2,3) pro_data = Data(10,9,8) #add db.session.add(drew) db.session.add(mac_hospital) db.session.add(pro_hospital) db.session.add(mac_data) db.session.add(pro_data) #commit db.session.commit() #create links mac_hospital.data.append(mac_data) pro_hospital.data.append(pro_data) drew.hospitals.append(mac_hospital) drew.hospitals.append(pro_hospital) db.session.commit() def functions_of_add_remove(self): johns_hospital_data = Data('johns_hospital_data') johns_hospital = Hospital('johns_hospital') john = User('john', 'john@gmail.com') db.session.add(johns_hospital_data) db.session.add(johns_hospital) db.session.add(john) #TODO make a function for this? 
johns_hospital.append(johns_hospital_data) #do i need a commit? db.session.commit() self.assertEqual(john.remove_hospital(johns_hospital), None) john_has_hospital = john.add_hospital(johns_hospital) db.session.add(john_has_hospital) db.session.commit() self.assertEqual(john.add_hospital(johns_hospital), None) self.assertEqual(len(john.hospitals), 1) if __name__ == "__main__": unittest.main() Answer: Models - General The comments on the model definitions don't add much value. I could figure out the relationships from the db schema; tell me instead something about what the classes are meant to represent, any invariants they have, anything not made explicit in the code. Models - User When defining a backreference, as you do in User, I like to go to the other class and add a comment saying something along the lines of "A backreference 'users' is created to the Users class". With that said, I would avoid naming the variable users; what does the variable name 'users' tell me when reading the class? It tells me the type of the object I'm getting, but nothing about what significance those objects have. Something like registered_users, patients (or whatever makes sense) might be a better option. You could replace your is_active, is_anonymous methods with read only properties, but that's personal preference. The get_id method seems very odd, not sure I see the value. I question also the value of the add_hospital and remove_hospital methods, though that may be due to lack of knowledge of your use case. Regardless, the remove_hospital is incorrect; it should be: def remove_hospital(self, hospital): if hospital in self.hospitals: # You have if not (hospital in self.hospital) self.hospitals.remove(hospital) Models - Data Personally, any variable named data sets off alarm bells. As it is, the Data class seems to store a lot of "stuff", and no where do I get told why this stuff should be grouped together, what its conceptual value is, etc etc. 
Would something like InspectionResult (or whatever the data comes from) be more useful? Regardless, I can't help but feel this class does too much. Your __init__ for the Data class is horrendous. Use the default parameter of the Column constructor to set defaults, and make sure to call super(Data, self).__init__() if you are overriding the __init__ of a subclass of anything you don't control. Tests I can't say I like the name of the test class; it calls itself BuildDestroyTables and then goes on to do a whole lot more than that. I've personally moved away from class based testing, but back when I used it, I would define a BaseDatabaseTest class that other tests could inherit from to absolve them of having to worry about database setup/cleanup while not violating the single responsibility principle. You use raw sessions; I would encourage you to use a contextmanager or something to manage your session. Here's one from a project I'm currently working on: import contextlib @contextlib.contextmanager def managed_session(**kwargs): """ Provides "proper" session management to a block. Features: - Rollback if an exception is raised - Commit if no exception is raised - Session cleanup on exit """ session = db_session(**kwargs) # Or however you create a new session try: yield session except: session.rollback() raise else: session.commit() finally: session.close() You can then use it like this: with managed_session() as session: session.add(foo) raise RuntimeError("foobar") # But it's ok, everything will get rolled back nicely and we'll have a viable session still Stuff like the automatic commit is not to everyone's taste, but I think in general, the exception safety is nice. With that said, this is probably the most controversial thing I'll say in this answer; there are many approaches to session management, and this is only one.
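One concrete bug lurking in that __init__ is worth spelling out: `timestamp=datetime.utcnow()` evaluates the default once, at function-definition time, so every Data row created with the default shares the same timestamp. A stdlib-only sketch of the pitfall (`make_record_bad`/`make_record_good` are hypothetical helpers for illustration):

```python
import time
from datetime import datetime

# Pitfall: the default is evaluated ONCE, when the function is defined.
def make_record_bad(timestamp=datetime.utcnow()):
    return timestamp

# Fix: evaluate the default at call time instead.
def make_record_good(timestamp=None):
    return timestamp if timestamp is not None else datetime.utcnow()

a = make_record_bad(); time.sleep(0.01); b = make_record_bad()
c = make_record_good(); time.sleep(0.01); d = make_record_good()
print(a == b)  # True  -- the default was frozen at definition time
print(c == d)  # False -- a fresh value per call
```

This is also why SQLAlchemy's Column takes a callable for its default (e.g. `default=datetime.utcnow`, without parentheses): the library calls it per row instead of freezing one value.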
{ "domain": "codereview.stackexchange", "id": 6183, "tags": "python, unit-testing, flask" }