The mean of Langevin equation
Question: I have a very basic question regarding the mean of the Langevin equation. We have an equation of the form: $$\dot{v}(t)=-\beta v(t)+ \xi (t)$$ where $\xi (t)$ is a Gaussian white noise with zero average and a $\delta$ correlation in time. As is known, this equation has the following solution: $$v(t)=v(0) e^{-\beta t}+\int_0^t dt' e^{-\beta (t-t')} \xi(t')$$ and I want to take the mean of this equation, i.e., $\langle v(t)\rangle$. The second term is zero since $\langle\xi (t)\rangle=0$, which leaves the first term. This, according to some books I have been reading, should be: $$\langle v(t)\rangle=v(0) e^{-\beta t},$$ but I really don't get how we arrive at this result. It's a bit confusing to me.

Answer: Let me answer the original question as I understood it literally. It has a methodological value. I understand taking a mean value as an integration over a limited period $T$: $$\langle f(t)\rangle=\frac{1}{T}\int_t^{t+T}f(\tau)d\tau$$ Only by doing this may we "eliminate" the fluctuating force $\xi(t)$. Now, applying this averaging to the solution $v(t)$ we obtain: $$\langle v(t)\rangle=\frac{1}{T}\int_t^{t+T}v(0)\text{e}^{-\beta \tau}d\tau + \frac{1}{T}\int_t^{t+T}\text{e}^{-\beta \tau}d\tau\int_0^{\tau} dt' \text{e}^{\beta t'} \xi(t')$$ Now let us take the first integral: $$\frac{1}{T}\int_t^{t+T}v(0)\text{e}^{-\beta \tau}d\tau=v(0)\frac{\text{e}^{-\beta t}(1-\text{e}^{-\beta T})}{\beta T}$$ This is an exact result of our averaging over some period $T$. Now, if the inequality $\beta T\ll 1$ holds, then to first approximation the difference in the numerator equals $\beta T$ with good precision, and thus you obtain the answer to the original question. In order to make sure that the second term in the exact solution vanishes after averaging, you must - in addition! - respect another inequality, namely $T/\delta\gg 1$, where $\delta$ is the correlation time of the noise. Otherwise some "long-time" fluctuations will still be present in the mean solution.
I leave the proof of this as an exercise to those who downvoted my answer without explanation. Finally, let me note that although the noise force has zero mean value, this does not mean that the force does not displace the particle in space. A particle initially at rest ($v(0)=0$) and without friction ($\beta=0$) may be found elsewhere: $x(t)=\int_0^t dt'\int_0^{t'}\xi(t'')dt''\ne x(0)$; remember Brownian motion, for example. Another (funny) example: a periodic external force $F(t)=F_0\cdot \sin(\omega t)$. On average this force is zero, but over time it displaces the particle far, far away ;-) One cannot obtain this result by just dropping this force from the exact equation of motion.

EDIT: As pointed out by Alexander in his comment, "Langevin equation is about stochastic processes. Averages of stochastic processes are (usually) with respect to different noise realizations (all taken from same distribution). In different language - tracing out the ensemble of stochastic environment." In other words, the averaging is in fact done over some parameter independent of time. It is like averaging over a random initial phase $\varphi_0$ in my last example with a periodic external force $F_0\cdot\sin(\omega t + \varphi_0)$. Thus, it is not an averaging of the equation or its solution over $t\le\tau\le (t+T)$. Although mathematically that is possible, I do not fully understand how it may naturally arise in physical calculations and what meaningful conclusions one can draw from such averaged quantities.
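To see the ensemble-average point of the EDIT in practice, here is a small numerical sketch (mine, not part of the original posts; the noise amplitude, time step, and realization count are arbitrary choices) that integrates an Euler-Maruyama discretization of the Langevin equation over many independent noise realizations. The ensemble mean tracks $v(0)e^{-\beta t}$ because the noise term averages out across realizations:

```python
import numpy as np

# Euler-Maruyama simulation of dv = -beta*v*dt + dW, averaged over
# independent noise realizations (the ensemble average, not a time average).
rng = np.random.default_rng(0)
beta, v0 = 1.0, 1.0
dt, n_steps, n_real = 0.01, 200, 20000

v = np.full(n_real, v0)
for _ in range(n_steps):
    # The noise amplitude is arbitrary here; it drops out of the mean.
    v += -beta * v * dt + np.sqrt(dt) * rng.normal(size=n_real)

t = n_steps * dt
ensemble_mean = v.mean()
print(ensemble_mean, v0 * np.exp(-beta * t))  # agree up to sampling error
```

Each single realization fluctuates strongly; only the average over the 20000 trajectories reproduces the exponential decay.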
{ "domain": "physics.stackexchange", "id": 54677, "tags": "statistical-mechanics, differential-equations, statistics, brownian-motion, stochastic-processes" }
LIFE in Python 3
Question: I have started to learn Python and have chosen Conway's Game of Life as my first program. I would be interested in reading how to write more idiomatic Python. Also, what threw me off for some time was that everything is passed by reference and assignment of a list doesn't copy its values but copies the reference. Therefore, I have used the deepcopy function, but I am thinking that lists might be the wrong choice in this case. What would be a better choice in Python?

"""
Implementation of LIFE
"""

import copy

# PARAMETERS

# Number of generations to simulate
N_GENERATIONS = 10

# Define the field. Dots (.) are dead cells, the letter "o" represents living cells
INITIAL_FIELD = \
"""
...................
...................
...................
...................
.ooooo.ooooo.ooooo.
...................
...................
...................
...................
"""

# FUNCTIONS

def print_field(field_copy, dead_cells=' ', living_cells='x'):
    """Pretty-print the current field."""
    field_string = "\n".join(["".join(x) for x in field_copy])
    field_string = field_string.replace('.', dead_cells)
    field_string = field_string.replace('o', living_cells)
    print(field_string)

def get_neighbours(field_copy, x, y):
    """Get all neighbours around a cell with position x and y and return them in a list."""
    n_rows = len(field_copy)
    n_cols = len(field_copy[0])
    if y == 0:
        y_idx = [y, y+1]
    elif y == n_rows - 1:
        y_idx = [y-1, y]
    else:
        y_idx = [y-1, y, y+1]
    if x == 0:
        x_idx = [x, x+1]
    elif x == n_cols - 1:
        x_idx = [x-1, x]
    else:
        x_idx = [x-1, x, x+1]
    neigbours = [field_copy[row][col] for row in y_idx for col in x_idx
                 if (row, col) != (y, x)]
    return neigbours

def count_living_cells(cell_list):
    """Count the living cells."""
    accu = 0
    for cell in cell_list:
        if cell == 'o':
            accu = accu + 1
    return accu

def update_field(field_copy):
    """Update the field to the next generation."""
    new_field = copy.deepcopy(field_copy)
    for row in range(len(field_copy)):
        for col in range(len(field_copy[0])):
            living_neighbours = count_living_cells(get_neighbours(field_copy, col, row))
            if living_neighbours < 2 or living_neighbours > 3:
                new_field[row][col] = '.'
            elif living_neighbours == 3:
                new_field[row][col] = 'o'
    return new_field

# MAIN

# Convert the initial playfield to an array
field = str.splitlines(INITIAL_FIELD)
field = field[1:]  # Getting rid of the empty first element due to the multiline string
field = [list(x) for x in field]

print("Generation 0")
print_field(field)

for generation in range(1, N_GENERATIONS+1):
    field = update_field(field)
    print(f"Generation {generation}")
    print("")
    print_field(field)
    print("")

Answer: I think your get_neighbours function can be cleaned up using min and max, and by making use of ranges:

def get_neighbours(field_copy, x, y):
    """Get all neighbours around a cell with position x and y and return them in a list."""
    n_rows = len(field_copy)
    n_cols = len(field_copy[0])
    min_x = max(0, x - 1)
    max_x = min(x + 1, n_cols - 1)
    min_y = max(0, y - 1)
    max_y = min(y + 1, n_rows - 1)
    return [field_copy[row][col]
            for row in range(min_y, max_y + 1)
            for col in range(min_x, max_x + 1)
            if (row, col) != (y, x)]

It's still quite long, but it does away with all the messy if dispatching to hard-coded lists of indices. I also broke up the list comprehension over a few lines. Whenever my comprehensions start to get a little long, I break them up like that. I find it significantly helps readability.

For

"\n".join(["".join(x) for x in field_copy])

you don't need the []:

"\n".join("".join(x) for x in field_copy)

Without the square brackets, it's a generator expression instead of a list comprehension. Generator expressions are lazy, which saves you from creating a list just so it can be fed into join. The difference here isn't huge, but for long lists that can save memory.

I wouldn't represent the board as a 2D list of strings. This likely uses up more memory than necessary, and especially with how you have it now, you're forced to remember what string symbol represents what.
On top of that, you have two sets of string symbols: one used internally for logic ('o' and '.'), and the other for when you print out (' ' and 'x'). This is more confusing than it needs to be. If you really wanted to use strings, you should have a global constant at the top that clearly defines what string is what:

DEAD_CELL = '.'   # At the very top somewhere
ALIVE_CELL = 'o'

. . .

if living_neighbours < 2 or living_neighbours > 3:  # Later on in a function
    new_field[row][col] = DEAD_CELL
elif living_neighbours == 3:
    new_field[row][col] = ALIVE_CELL

Strings like '.' floating around fall into the category of "magic numbers": values that are used loose in a program that don't have a self-explanatory meaning. If the purpose of a value isn't self-evident, store it in a variable with a descriptive name so you and your readers know exactly what's going on in the code.

Personally though, when I write GoL implementations, I use a 1D or 2D list of Boolean values, or a set of tuples representing alive cells. For the Boolean list versions, if a cell is alive, it's true, and if it's dead it's false. For the set version, a cell is alive if it's in the set, otherwise it's dead.

I'd tuck all the stuff at the bottom into a main function. You don't necessarily always want all of that running simply because you loaded the file.

For the sake of efficiency, instead of constantly creating new field copies every generation, a common trick is to create two right at the start, then swap them every generation. The way I do it is one field is the write_field and one is the read_field. As the names suggest, all writes happen to the write_field, and all reads from read_field. After each "tick", you simply swap them; read_field becomes the new write_field and write_field becomes read_field. This saves you from the expensive deepcopy call once per tick. You can do this swap quite simply in Python:

write_field, read_field = read_field, write_field
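As a sketch of the set-of-tuples representation mentioned above (the function name and the blinker example are my own, not from the original post), a whole generation can be advanced against a set of live-cell coordinates, with no deepcopy and no board-edge special cases:

```python
from collections import Counter

def step(alive):
    """Advance one Game of Life generation; `alive` is a set of (row, col) tuples."""
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter((r + dr, c + dc)
                     for r, c in alive
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A horizontal blinker flips to a vertical one:
print(step({(1, 0), (1, 1), (1, 2)}))  # {(0, 1), (1, 1), (2, 1)}
```

A nice side effect of this representation is that the board is unbounded: patterns can grow in any direction without the index clamping the 2D-list version needs.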
{ "domain": "codereview.stackexchange", "id": 39252, "tags": "python, python-3.x, game-of-life" }
Understanding InteractiveMarkers
Question: I have followed the interactive markers tutorial, and I successfully achieved what I wanted: to mix the quadrocopter and button markers. However, I do not understand how the code works. To define the marker, I have the following code:

def makeMarker():
    marker = Marker()
    marker.type = Marker.CYLINDER
    marker.scale.x = 0.3
    marker.scale.y = 0.3
    marker.scale.z = 0.3
    marker.color.g = 1.0
    marker.color.a = 1.0
    return marker

# create an interactive marker for our server
int_marker = InteractiveMarker()
int_marker.header.frame_id = "world"
int_marker.name = "goal_marker"
int_marker.description = "goal_marker"
int_marker.scale = 0.7

# create marker and a non-interactive control containing the marker
control = InteractiveMarkerControl()
control.always_visible = True
control.markers.append(makeMarker())

# add control to interactive marker
int_marker.controls.append(control)

# add control for position (as it has no marker, RViz decides which one to use).
control = InteractiveMarkerControl()
control.orientation.w = 1
control.orientation.x = 0
control.orientation.y = 1
control.orientation.z = 0
control.interaction_mode = InteractiveMarkerControl.MOVE_PLANE
int_marker.controls.append(copy.deepcopy(control))

# add control for the height (as it has no marker, RViz decides which one to use).
control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS
int_marker.controls.append(copy.deepcopy(control))

# add control for pose publishing.
control.interaction_mode = InteractiveMarkerControl.BUTTON
control.markers.append(makeMarker())
int_marker.controls.append(control)

First, the interactive marker is created (int_marker), then a control is associated with the marker created by makeMarker, and this control is inserted into int_marker. Then, two more controls are created to provide the planar movement and the height adjustment, and inserted into int_marker. So far so good.
However, when adding the button functionality, I have to include the extra line control.markers.append(makeMarker()), that is, to create a new visual marker and append it to the control. Why do I need this? It will not work otherwise.

FINAL SOLUTION: Removing the extra appends and thus other extra code:

# create an interactive marker for our server
int_marker = InteractiveMarker()
int_marker.header.frame_id = "world"
int_marker.name = "int_marker"
int_marker.description = "int_marker"
int_marker.scale = 0.7

# add control for position (as it has no marker, RViz decides which one to use).
control = InteractiveMarkerControl()
control.orientation.w = 1
control.orientation.x = 0
control.orientation.y = 1
control.orientation.z = 0
control.interaction_mode = InteractiveMarkerControl.MOVE_PLANE
int_marker.controls.append(copy.deepcopy(control))

# add control for the height (as it has no marker, RViz decides which one to use).
control.interaction_mode = InteractiveMarkerControl.MOVE_AXIS
int_marker.controls.append(copy.deepcopy(control))

# add control for pose publishing. A marker is required in this case, RViz does not
# assign one.
control.interaction_mode = InteractiveMarkerControl.BUTTON
control.markers.append(makeMarker())  # This is required, it will not work otherwise.
int_marker.controls.append(control)

Originally posted by Javier V. Gómez on ROS Answers with karma: 1305 on 2015-07-07
Post score: 0

Answer: Just look at the makeMarker function; it gives you the visual form of the marker controlled by the interactive object (cylinder, box, sphere, ...) and its size and color:

marker = Marker()
marker.type = Marker.CYLINDER
marker.scale.x = 0.3
marker.scale.y = 0.3
marker.scale.z = 0.3
marker.color.g = 1.0
marker.color.a = 1.0

UPDATE: In my opinion it probably should work without the first append; I do not see any reason why it should not, maybe the author added it twice by accident.
Originally posted by cyborg-x1 with karma: 1376 on 2015-07-07
This answer was ACCEPTED on the original site
Post score: 1

Original comments

Comment by Javier V. Gómez on 2015-07-07: I understand that. My concern is about the second line from the last: why do I have to add another marker to be able to click on it, and not just add the control as for the other two controls (move plane and move axis)?

Comment by cyborg-x1 on 2015-07-07: hmm, that's a good point have you tried to compile it without the first append? In my opinion I guess it probably should work, I do not see any reason why it should not ...

Comment by Javier V. Gómez on 2015-07-08: Can you post this last comment as an answer so I can accept it? It solved (and simplified) my code :)

Comment by Javier V. Gómez on 2015-07-08: The problem was not the last append, but the first. It solved some other problems I had by just using the last marker.

Comment by cyborg-x1 on 2015-07-08: Oh yes, I corrected it ;-)
{ "domain": "robotics.stackexchange", "id": 22099, "tags": "ros, interactive-markers" }
How to prove transitivity of < (Software Foundations exercise)?
Question: I'm working through the "Properties of Relations" chapter of Software Foundations, but have got stuck on one of the exercises, lt_trans'':

Theorem lt_trans'' : transitive lt.
Proof.
  unfold lt. unfold transitive.
  intros n m o Hnm Hmo.
  induction o as [| o'].
  (* FILL IN HERE *)
Admitted.

The base case for the induction is straightforward, but I'm stumped as to how to progress on the successor case. The proof state is:

n, m, o' : nat
Hnm : S n <= m
Hmo : S m <= S o'
IHo' : S m <= o' -> S n <= o'
============================
S n <= S o'

To apply the induction hypothesis, I need S m <= o', but I only know that S m <= S o', and so it feels there is no way to make progress. Hoping someone can point me in the right direction as to how to solve this exercise.

Answer: Not all relations are transitive, so you're going to need to use the definition of lt and le, or some lemma that you've already proved about them. Intuitively speaking, there are two cases: either $m = o'$ or $m \lt o'$. In the first case, Hnm says that $S(n) \le o'$, from which the conclusion is one application of le_S away. In the second case, you can apply IHo'. This form of reasoning assumes some very well-known facts about arithmetic. (It's usually hard to formally prove something if you don't already have some intuition about the topic.) But you haven't proved them yet, so you can't directly use them. Instead, you have to make these cases appear via the construction of the data or the proof. It's very common to reason about the structure of a proof hypothesis: such-and-such hypothesis is of this form, and therefore it can only have been constructed in a certain way. An important tactic for reasoning about the structure of a hypothesis in a non-recursive way is inversion. Roughly speaking, inversion does case analysis on a hypothesis and works out how each case can be possible, unlike destruct, which does case analysis but loses the information about how the hypothesis could be possible.
We want to reason about how $m \le o'$ can be possible, and there's a hypothesis that's very close to this: Hmo. So apply the tactic inversion Hmo. I'll let you work out the details of the proof. If you don't care about the details, Coq will fill them in for you, but as a learning exercise you should complete these auto steps by hand.

Theorem lt_trans'' : forall n m o, n < m -> m < o -> n < o.
Proof.
  unfold lt.
  intros n m o Hnm Hmo.
  induction o; inversion Hmo; subst; auto.
Qed.
{ "domain": "cs.stackexchange", "id": 16407, "tags": "coq" }
What determines the particular frequency of infrared any given object emits?
Question: Infrared (IR) includes EM waves between 780 nm and 1 mm in wavelength. (Source) As an object gets hotter, it emits infrared with greater intensity. What determines the particular frequency(s) of IR an object emits? The temperature alone, independently of the object itself? The molecular makeup of the object, independently of the temperature? Or does it depend on both?

Answer: If an object is in thermal equilibrium, then it will be emitting on all frequencies, but the intensity of emission depends on frequency. This dependence is given by Planck's law, which has a peak, with most of the radiation falling around this peak. The peak frequency is proportional to the temperature and in everyday conditions usually falls in the infrared. (In the figure below, the intensity of the radiation is shown as a function of wavelength - the higher the temperature, the shorter the wavelength corresponding to the peak, and the more the "color" of the object shifts towards blue.) Related: Why are only infrared rays classified as "heat rays"?
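The "peak shifts with temperature" statement can be made quantitative via Wien's displacement law, which follows from maximizing Planck's law (the function name and the specific temperatures below are my own illustrative choices):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

def peak_wavelength(temperature_k):
    """Wavelength (m) at which Planck's law peaks for a blackbody at temperature_k."""
    return WIEN_B / temperature_k

print(peak_wavelength(300))   # ~9.7e-6 m: room-temperature objects peak in the mid-infrared
print(peak_wavelength(5800))  # ~5.0e-7 m: the Sun's surface peaks in the visible
```

So for everyday objects the answer is that the temperature sets where the thermal peak lies (usually in the IR), while the material only modulates how close the spectrum is to the ideal blackbody curve.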
{ "domain": "physics.stackexchange", "id": 94111, "tags": "thermodynamics, temperature, thermal-radiation, frequency, infrared-radiation" }
Can you convert a positively weighted DAG into a non-weighted DAG in polynomial time?
Question: Given a positively weighted DAG (directed acyclic graph) $D = (V,E)$, can you create a new non-weighted DAG $D'$ by converting each edge with weight $w(e) = x$ into x non-weighted edges and vertices? I believe this would take $O(|E|+W)$ time where $|E|$ is the number of edges and $W$ is the total weight of all edges. My concern is whether I can include this weight variable and still consider this algorithm to be in polynomial time. (NOTE: This algorithm may apply to all positively weighted graphs, not just DAGs.) Answer: I believe that this would qualify as pseudo-polynomial time. See http://en.wikipedia.org/wiki/Pseudo-polynomial_time. The idea is that, usually we represent the time complexity as a function of the length of the string (bit) representation of the input. So your algorithm runs in polynomial time, given a fixed (or at least bounded) value for $W$. To follow your algorithm, if you perform $O(|w|) $ operations for each edge, you perform $O(2^{b})$ operations where $b$ is the length of the binary-representation of the weight. This means that overall, the algorithm runs in $O(|E|2^{b_{max}})$, where $b_{max}$ is the length of the largest edge weight. In summary, if you fix an upper bound on the edge weights, $b_{max}$ becomes a constant and the algorithm runs in linear time, but if you allow edge weights to be unbounded, the algorithm is exponential. This is a common pattern we see when dealing with numerical inputs. The Knapsack problem, as well as factoring the product of two primes, face the same issue.
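The subdivision itself can be sketched in a few lines (the function and vertex names here are mine): each weighted edge $(u, v, w)$ becomes a path of $w$ unit edges through $w - 1$ fresh intermediate vertices, so the output size, and hence the running time, is $\Theta(|E| + W)$, which is exactly why the construction is pseudo-polynomial rather than polynomial in the input length:

```python
def unweight(edges):
    """Expand each positively weighted edge (u, v, w) into w unit edges."""
    new_edges, fresh = [], 0
    for u, v, w in edges:
        prev = u
        for _ in range(w - 1):
            mid = ("aux", fresh)  # a fresh intermediate vertex
            fresh += 1
            new_edges.append((prev, mid))
            prev = mid
        new_edges.append((prev, v))
    return new_edges

# Two edges of total weight 4 become 4 unit edges:
print(len(unweight([("a", "b", 3), ("b", "c", 1)])))  # 4
```

Doubling the number of bits in a weight squares the number of auxiliary vertices the corresponding edge produces, which is the exponential blow-up the answer describes.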
{ "domain": "cs.stackexchange", "id": 1294, "tags": "algorithms, graphs, algorithm-analysis, polynomial-time" }
In which situations is it hard to find an ansatz?
Question: Looking at the single-qubit toy example of VQE, it's pretty much trivial that arbitrary X and Y rotations are sufficient to cover all of state space for our toy system. Unfortunately, the toy example doesn't do enough to illustrate why it would be any harder to do the same for a larger system. So in what situations is it hard to find an ansatz?

EDIT: After reading through some of the resources from the accepted answer, I came up with a response that works best for me. The number of parameters required to describe all possible states of an n-qubit quantum system scales exponentially with n, so an ansatz which can cover all possible states would need an exponential number of parameters. And that just won't do, because:

We'd then need a classical optimisation algorithm which can search through an exponentially large parameter space.
We'd need an exponential number of gates to actually prepare the state (certainly not good for a NISQ device with short coherence times).

So we actually need to find ansatzes whose parameters grow at most polynomially with the size of the system. But then, of course, we can't cover all states. The challenge is therefore in balancing the tradeoff between keeping the number of ansatz parameters small while still being confident that the spanning space of the ansatz covers our ground state.

Answer: If you haven't read the Qiskit chapter on Simulating Molecules Using VQE yet, that's a good place to start. There's also a related response to a similar question here, which you might find helpful. If you want to see an example of a problem that researchers are actively dealing with, you might try reading up on the ongoing progress around FeMoco simulation. A better understanding of this molecule could drastically reduce the energy cost of fertilizer production (which is about 1.2% of worldwide energy consumption), so it's a very attractive target for quantum computing.
This paper from Google was a significant advance in that effort, and it has a good introduction section that should give you a sense of the challenges that researchers are facing in constructing effective and efficient molecule simulations.
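The parameter-counting argument in the question's EDIT can be made concrete with a quick back-of-the-envelope sketch (the layered-ansatz count is a typical assumption for hardware-efficient circuits, not a universal rule): a generic n-qubit pure state needs $2\cdot 2^n - 2$ real parameters, while a layered ansatz uses only a few angles per qubit per layer:

```python
def full_state_params(n_qubits):
    """Real parameters of a generic n-qubit pure state: 2^n complex
    amplitudes = 2*2^n real numbers, minus normalization and global phase."""
    return 2 * 2**n_qubits - 2

def layered_ansatz_params(n_qubits, layers, angles_per_qubit=2):
    """Parameters of a typical hardware-efficient ansatz: polynomial in n."""
    return angles_per_qubit * n_qubits * layers

for n in (1, 10, 20):
    print(n, full_state_params(n), layered_ansatz_params(n, layers=4))
```

For one qubit the two counts coincide (2 Bloch-sphere angles), which is why the toy example hides the problem; by 20 qubits the generic state needs about two million parameters while the layered ansatz uses 160.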
{ "domain": "quantumcomputing.stackexchange", "id": 1596, "tags": "vqe" }
Freight cars and sand
Question: I have come across a few situations involving leaking and loading freight cars being pushed with a constant force. Here, I present two situations that leave me in doubt. 1) A car of mass $M_0$( sand of mass $m$ included ) at rest experiences a constant force $F$ at $t=0$ and starts to leak from the bottom at a constant rate $b$ at the same time. I am to calculate its speed when all the sand leaks out. Here, I am concerned with the system's horizontal momentum. I have, $P(0) = 0$ and $P(t) = (M_0 - m)v + x $, where $t$ is the time taken for all the sand to leak out, $v$ is the speed attained by the car at $t$ and $x$ is the momentum of the escaped sand subsystem. Since the mass of the system remains unchanged, I have, $\Delta P = (M_0 - m)v + x = \int_{0}^{t} F \mathrm dt$ I do not see myself in a position to calculate $x$, and that is why I'm not sure if this approach is correct. While I do know the solution to this question( a different approach ), I would like to know if there are any flaws with my reasoning here. Can I expect the right answer if $x$ is correctly known? The next question concerns the loading of a freight car. 2) A freight car of mass $M$ starts from rest under an applied force $F$. At the same time, sand begins to run into the car at a steady rate $b$ from a hopper at rest along the track. I am to calculate its speed when a mass $m$ of sand has been transferred. Once again, I'm concerned with the system's$( M + m )$ horizontal momentum. I am given, $\frac{\mathrm d M_{\mathrm car}}{\mathrm dt} =b$. Mass of the freight car at any time $t$ is then found out to be $M + bt$. At some time $t$ I have, $P(t) = (M + bt) v$, where $v$ is the speed at that time. 
Sometime later, $P(t+\Delta t) = (M + bt + b\Delta t) (v + \Delta v)$. The average rate of change of momentum is given by $ \frac{\Delta P}{\Delta t} = ( M + bt) \frac{\Delta v}{\Delta t} + b( v + \Delta v) $. When I take $\Delta t \rightarrow 0 $, I see that the net external force depends on the speed of the car. This is an absurd result, because if I shift my origin from my current inertial frame to another inertial frame (moving with a different speed), the force acting on the car will be different. This is not possible. Where could I have gone wrong with my reasoning? The solution to this question is simple. However, I am curious to know where I went wrong.

Answer: There is an asymmetry in your two problems (as well as an error in an answer by Mark): when sand with initially zero horizontal velocity lands on the car already moving with a speed $v$, we have a case of inelastic collision in which a mass of sand $dm=b \, dt$ instantaneously acquires speed $v$, so there should also be a force, corresponding to the momentum we need to impart to accelerate this inflowing sand. The magnitude of this additional force is $$|F_1|=\frac{dp}{dt} = \frac{v\, dm }{dt}=\frac{v\, b\, dt}{dt}=b v,$$ and its direction is opposite to $v$. This additional force is absent in your first case, since the sand leaves with the same horizontal speed as the car. If you consider this situation in a different inertial frame moving horizontally with a speed $u$, then the sand dumped into the car would have a nonzero impulse $dp = u \, dm $, and so its speed would need to be changed by $v-u$ to move with the speed of the car. So, you are not wrong: the net force in the second case does depend on the speed of the car. And so, if while the car is moving under the hopper we dump into it a total mass $m$ of sand, the speed at the end is found by solving the differential equation $$ \frac{dv(t)}{dt}=\frac{F_1+ F}{M(t)}=\frac{-b\, v(t)+F}{M+b\,t}, $$ subject to the initial condition $v(0)=0$.
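As a sanity check on the final differential equation (the numerical scheme and the constants below are my own, not from the original answer), note that it can be rewritten as $\frac{d}{dt}\left[(M+bt)\,v\right]=F$, which integrates to the closed form $v(t)=Ft/(M+bt)$ for $v(0)=0$. A simple forward-Euler integration reproduces this:

```python
# Forward-Euler integration of dv/dt = (F - b*v) / (M + b*t), v(0) = 0.
F, M, b = 2.0, 10.0, 0.5   # applied force, initial car mass, sand inflow rate
dt, T = 1e-4, 20.0

v, t = 0.0, 0.0
while t < T - 1e-12:
    v += dt * (F - b * v) / (M + b * t)
    t += dt

exact = F * T / (M + b * T)  # closed form from d/dt[(M + b*t) v] = F
print(v, exact)              # the numerical and analytic values agree
```

The closed form also makes the physics transparent: as $t\to\infty$ the speed saturates at $F/b$, where the drag-like term $bv$ from accelerating the inflowing sand balances the applied force.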
{ "domain": "physics.stackexchange", "id": 46069, "tags": "homework-and-exercises, newtonian-mechanics, momentum" }
Hibernate MassIndexer using chained invocation for building a batch job
Question: I've a class called MassIndexer.java (on GitHub) and it uses chained invocation for building a batch job. Each method assigns a property of this job and finally the job will be started using the method MassIndexer#start(). Here's an example for Java EE:

long executionId = new MassIndexer()
        .addRootEntities(Company.class, Employee.class)
        .start();

Depending on the environment, the requirements differ: some methods are required in Java SE while all of them are optional in Java EE. If users use MassIndexer under Java SE and forget to set the properties through those required methods, the job will fail. These required methods on Java SE are:

isJavaSE(boolean)
entityManagerFactory(EntityManagerFactory)
jobOperator(JobOperator)

Here's another example for Java SE:

long executionId = new MassIndexer()
        .addRootEntities(Company.class, Employee.class)
        .isJavaSE(true)
        .entityManagerFactory(emf)
        .jobOperator(jobOperator)
        .start();

However, the design of this class is not clear: people don't know which methods are required and which aren't. The only way to know is to read the documentation. I'm wondering how I can improve the design of this class so that people can easily understand what to do and how to do it. For example, should this class be split into MassIndexerEE and MassIndexerSE? Should it be refactored into a builder class, e.g. BatchIndexingJobBuilder? Also, there's no order or logic among these methods: you can call foo() then bar(), but bar() then foo() works too. Properties are used for configuration of:

class types (about persistence)
database interaction (about persistence)
parallel processing (about job)
checkpointing algorithm (about job)

Someone is bound to be confused when using this class, so I'm seeking advice. Every review is welcomed.
Answer: Make a second-level Builder class:

long executionId = new MassIndexer()
    .addRootEntities(Company.class, Employee.class)
    .javaSE()
    .entityManagerFactory(emf)
    .jobOperator(jobOperator)
    .start();

long executionId = new MassIndexer()
    .addRootEntities(Company.class, Employee.class)
    .javaEE()
    .start();

class MassIndexer
    MassIndexer addRootEntities(Class<?>...)
    SEBuilder javaSE()
    EEBuilder javaEE()

class MassIndexer.SEBuilder
    long start()
    SEBuilder entityManagerFactory(EntityManagerFactory)
    ...

I am not entirely happy with the names: addRootEntities might go in the constructor of MassIndexer, and javaSE/javaEE could perhaps instead be standalone/server, or container?
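The same two-level idea can be sketched in executable form (in Python here for brevity; all class and method names are illustrative, not the real Hibernate Search API): once the environment is chosen, the SE-only settings become required constructor arguments of the second-level builder, so forgetting them is an immediate error rather than a job that fails at runtime:

```python
class MassIndexerBuilder:
    """Illustrative sketch only - not the actual Hibernate Search API."""

    def __init__(self):
        self._roots = []

    def add_root_entities(self, *entities):
        self._roots.extend(entities)
        return self

    def java_ee(self):
        return _EEBuilder(self._roots)

    def java_se(self, entity_manager_factory, job_operator):
        # SE-mandatory settings are required arguments: you cannot forget them.
        return _SEBuilder(self._roots, entity_manager_factory, job_operator)


class _EEBuilder:
    def __init__(self, roots):
        self._roots = roots

    def start(self):
        return len(self._roots)  # stands in for the real execution id


class _SEBuilder(_EEBuilder):
    def __init__(self, roots, entity_manager_factory, job_operator):
        super().__init__(roots)
        self._emf = entity_manager_factory
        self._job_operator = job_operator


execution_id = (MassIndexerBuilder()
                .add_root_entities("Company", "Employee")
                .java_ee()
                .start())
print(execution_id)  # 2
```

The key design point carries over to the Java version: the type returned by javaSE()/javaEE() determines which setters are even visible, so the compiler (or here, the constructor signature) documents what is required in each environment.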
{ "domain": "codereview.stackexchange", "id": 22210, "tags": "java, hibernate, fluent-interface" }
How are the three character IAU Minor Planet Center's Observatory Codes assigned? Why are some letters so popular?
Question: Answers to Latitude, longitude and altitude of Ckoirama Observatory; where can things like this be looked-up? point to https://www.minorplanetcenter.net/iau/lists/ObsCodesF.html which currently lists 2198 observatories. The first character of the three-character codes has 36 possible alphanumeric values, and the second two are only digits. That allows for 36*10*10 = 3600 possible codes. I plotted the ones that are currently used and the pattern is interesting. Why are some letter codes fully used (all 100 two-digit suffixes are populated) whereas others are completely empty? I haven't noticed any alphabetical association. In the plot, Y = 0 to 9 are the digits and 10 to 35 are A through Z; shading means the code is used. For some reason I can't stop comparing the data to the famous Arecibo message, a tiny 1-bit bitmapped image beamed to space containing DNA, amino acids, our solar system and other goodies.

import numpy as np
import matplotlib.pyplot as plt

# blob is the unformatted lines from https://www.minorplanetcenter.net/iau/lists/ObsCodes.html
threes = [line[:3] for line in blob.splitlines()]

key = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
a = [[x] for x in key]
for t in threes:
    a[key.find(t[0])].append(t)

b = np.zeros((36, 100))
for i, thing in enumerate(a):
    c = [int(x[1:]) for x in thing[1:]]
    for d in c:
        b[i, d] = 1

plt.imshow(-b, interpolation='nearest', cmap='gray')
plt.gca().set_aspect(2)
plt.show()

pairs = []
todegs = 180/np.pi
for line in blob.splitlines():
    try:
        cos, sin = [float(x) for x in (line[13:21], line[21:30])]
        lat = todegs * np.arctan2(sin, cos)
        lon = float(line[4:13])
        pairs.append([lon, lat])
    except:
        pass

print(len(pairs), len(blob.splitlines()),
      float(len(pairs))/len(blob.splitlines()))

lon, lat = np.array(list(zip(*pairs)))
lon = np.mod(lon+180, 360) - 180

plt.figure()
plt.plot(lon, lat, 'ok', markersize=1)
plt.ylim(-90, 90)
plt.xlim(-180, 180)
plt.gca().set_aspect('equal')
plt.show()

Answer: I asked at the Minor Planet Center how the codes were decided, and the answer was

Historically, the observatory codes were assigned ascending by longitude toward east (from prime meridian): 360 degrees were divided by numbers. When three digit numerical codes were not sufficient, letters plus two numbers were used again in bands toward the east. Some bands are already full (e.g. H**). New observatory code is assigned in the available letter+number band (based on its longitude).
{ "domain": "astronomy.stackexchange", "id": 4493, "tags": "observational-astronomy, asteroids, observatory, iau" }
Why doesn't the Meissner effect follow from zero resistance?
Question: In articles on superconductivity (see, for example, the original BCS paper or Leggett's review article), the Meissner effect is treated as a separate phenomenon that must be explained in addition to the fact that the material has zero resistance. In Leggett's article, he especially mentions that the two phenomena are independent, and one may arise without the other. But thinking about it, it seems to me that any material that has zero resistance must expel magnetic fields. What's wrong with the following argument? Say you have a ring with zero resistance. The ring has some inductance, L, and you want to thread some flux $\Phi_{ext}$ through the ring. At any time, the total flux through the ring will be given by $$ \Phi_{total}(t)=\Phi_{ext}(t)+LI(t) $$ If you start with zero current and zero external flux and increase the external flux over time, the EMF around the loop is given by $$ \mathcal{E}(t)=-\frac{d\Phi_{total}}{dt}=-\frac{d\Phi_{ext}}{dt}-L\frac{dI}{dt} $$ Since the ring has zero resistance, the EMF at any time must be zero, so $$ 0=-\frac{d\Phi_{ext}}{dt}-L\frac{dI}{dt} $$ $$ \frac{d\Phi_{ext}}{dt}=-L\frac{dI}{dt} $$ or, integrating both sides from $t=0$, $$ \Phi_{ext}(t)=-LI(t) $$ so that $\Phi_{total}(t)=0$ at all times, and we've completely expelled the external magnetic field. Where does this argument break down? Answer: Your argument assumes the flux in the medium begins at zero, and concludes that it will remain zero, which is true. However, consider a medium that begins with nonzero resistance, has magnetic flux introduced, and then is cooled to the point of superconductivity. In that case, the classical theory you're using just says that whatever magnetic field is in the material when it becomes superconducting will remain there, "frozen" in place as it were. Zero resistance implies constant field (no diffusion), not necessarily zero field.
In the Meissner effect, by contrast, any field present in the material up to the transition is actually forcefully ejected when the material becomes superconducting. Something causes the field to diffuse out of the medium very fast during the transition. (The converse is also true: by making the magnetic pressure on a superconductor big enough, you can force magnetic field into it, but in doing so you destroy its superconductivity.)
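The frozen-flux picture in the answer can be checked with a toy integration of the ring circuit. The sketch below assumes a flux $\Phi_0$ was trapped while the ring was still resistive, then ramps the external flux away; with $R=0$ the total flux stays frozen at $\Phi_0$ (it is not expelled), while with $R>0$ it decays to zero. All names and numbers here are invented for illustration.

```python
# Toy ring: Kirchhoff gives I*R = -dPhi_total/dt with Phi_total = Phi_ext + L*I,
# i.e. L dI/dt = -dPhi_ext/dt - R*I.  Flux phi0 is assumed trapped at t = 0
# (the ring became superconducting with field already inside).

def final_total_flux(R, L=1.0, phi0=2.0, T=20.0, dt=1e-3):
    I = 0.0
    steps = int(T / dt)
    for k in range(steps):
        t = k * dt
        # external flux ramps linearly from phi0 down to 0 during t in [0, 5)
        dphi_ext_dt = -phi0 / 5.0 if t < 5.0 else 0.0
        dI_dt = (-dphi_ext_dt - R * I) / L   # EMF = I*R (Ohm's law)
        I += dI_dt * dt                      # forward Euler step
    phi_ext_final = 0.0
    return phi_ext_final + L * I             # total flux at the end
```

With `R = 0` the induced current exactly compensates the removed external flux, so the total flux remains at its initial trapped value; with `R = 1` the current, and with it the flux, relaxes away — zero resistance conserves flux rather than expelling it.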
{ "domain": "physics.stackexchange", "id": 35108, "tags": "magnetic-fields, superconductivity" }
Multi-particle Hamiltonian for the free Klein-Gordon field
Question: The text I am reading (Peskin and Schroeder) gives the Hamiltonian for the free Klein-Gordon field as: $$H=\int {d^3 p\over (2\pi)^3}\; E_p\; a^{\dagger}_{\vec p}a_{\vec p}$$ This does not seem to be good for multi-particle states. How would this Hamiltonian be modified for multi-particle states? Answer: $$H=\int {d^3 p\over (2\pi)^3}\; E_p\; a^{\dagger}_{\vec p}a_{\vec p}\tag{1}$$ This does not seem to be good for multi-particle states. It is perfectly fine for multi-particle states. How would this Hamiltonian be modified for multi-particle states? It does not need to be modified. For example, consider the two-particle state: $$ |\vec p_1, \vec p_2\rangle\equiv a^{\dagger}_{\vec p_1}a^{\dagger}_{\vec p_2}|0\rangle\;, $$ and, if you please, the values of $p_1$ and $p_2$ can be the same. The action of the Hamiltonian in Eq. (1) is: $$ \hat H|\vec p_1, \vec p_2\rangle $$ $$ =\int \frac{d^3 p}{(2\pi)^3} E_p a^{\dagger}_{\vec p}a_{\vec p}a^{\dagger}_{\vec p_1}a^{\dagger}_{\vec p_2}|0\rangle $$ $$ =\int \frac{d^3 p}{(2\pi)^3} E_p a^{\dagger}_{\vec p}[a_{\vec p},a^{\dagger}_{\vec p_1}a^{\dagger}_{\vec p_2}]|0\rangle $$ $$ =\int \frac{d^3 p}{(2\pi)^3} E_p a^{\dagger}_{\vec p} \left( [a_{\vec p},a^{\dagger}_{\vec p_1}]a^{\dagger}_{\vec p_2} +a^{\dagger}_{\vec p_1}[a_{\vec p},a^{\dagger}_{\vec p_2}] \right)|0\rangle\;. $$ Then use: $$ [a_{\vec p}, a_{\vec q}^\dagger] = (2\pi)^3\delta^3(\vec p - \vec q) $$ to see that: $$ \hat H|\vec p_1, \vec p_2\rangle $$ $$ =\int d^3 p\, E_p\, a^{\dagger}_{\vec p} \left( \delta^3(\vec p - \vec p_1)a^{\dagger}_{\vec p_2} +a^{\dagger}_{\vec p_1}\delta^3(\vec p - \vec p_2) \right)|0\rangle $$ $$ =(E_{\vec p_1} + E_{\vec p_2})|\vec p_1, \vec p_2\rangle\;. $$
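The eigenvalue computation above can be checked numerically in a truncated two-mode Fock space. This is only a sketch: the continuum normalization $(2\pi)^3\delta^3$ is replaced by the discrete $[a_i, a_j^\dagger]=\delta_{ij}$, and the mode energies, truncation, and all names are arbitrary choices.

```python
from math import sqrt

d = 3                                   # Fock levels kept per mode
E1, E2 = 1.5, 2.5                       # two arbitrary mode energies

def kron(A, B):
    m = len(B)
    N = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(N)]
            for i in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def matvec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

def dagger(A):
    return [list(col) for col in zip(*A)]   # real matrices: transpose

# lowering operator a|n> = sqrt(n)|n-1> and the identity, each d x d
a = [[sqrt(j) if i == j - 1 else 0.0 for j in range(d)] for i in range(d)]
I = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

a1, a2 = kron(a, I), kron(I, a)         # mode operators on the product space
n1, n2 = matmul(dagger(a1), a1), matmul(dagger(a2), a2)
H = [[E1 * n1[i][j] + E2 * n2[i][j] for j in range(d * d)] for i in range(d * d)]

vac = [1.0] + [0.0] * (d * d - 1)
state = matvec(dagger(a1), matvec(dagger(a2), vac))   # a1† a2† |0>
Hstate = matvec(H, state)                             # should equal (E1+E2)*state
```

The two-particle state built by two creation operators comes out as an exact eigenstate of $H=\sum_i E_i a_i^\dagger a_i$ with eigenvalue $E_1+E_2$, mirroring the continuum derivation above.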
{ "domain": "physics.stackexchange", "id": 99552, "tags": "quantum-field-theory, operators, hilbert-space, hamiltonian, klein-gordon-equation" }
Code to find the sums of building heights
Question: Remains You've recently stumbled upon the remains of a ruined ancient city. Luckily, you've studied enough ancient architecture to know how the buildings were laid out. The city had \$n\$ buildings in a row. Unfortunately, all but the first two buildings have deteriorated. All you can see in the city are the heights of the first two buildings. The height of the first building is \$x\$, and the height of the second building is \$y\$. Your studies show ancient traditions dictate for any three consecutive buildings, the heights of the two smaller buildings sum up to the height of the largest building within that group of three. This property holds for all consecutive triples of buildings in the city. Of course, all building heights were nonnegative, and it is possible to have a building that has height zero. You would like to compute the sum of heights of all the buildings in the row before they were ruined. You note there can be multiple possible answers, so you would like to compute the minimum possible sum of heights that is consistent with the information given. It can be proven under given constraints that the answer will fit within a 64-bit integer. Input Format The first line of the input will contain an integer \$T\$, denoting the number of test cases. Each test case will be on a single line that contains 3 integers \$x, y, n\$. Output Format Print a single line per test case, the minimum sum of heights of all buildings consistent with the given information. Constraints For all files \$1 ≤ T ≤ 10\,000\$ \$0 ≤ x, y\$ \$2 ≤ n\$ File 1 -- 61 pts: \$x, y, n ≤ 10\$ File 2 -- 26 pts: \$x, y, n ≤ 100\$ File 3 -- 13 pts: \$x, y, n ≤ 1\,000\,000\,000\$ Sample Input 3 10 7 5 50 100 50 1000000000 999999999 1000000000 Sample Output 25 1750 444444445222222222 Explanation In the first sample case, the city had 5 buildings, and the first building has height 10 and the second building has height 7. We know the third building either had height 17 or 3. 
One possible sequence of building heights that minimizes the sum of heights is {10, 7, 3, 4, 1}. The sum of all heights is 10+7+3+4+1 = 25. In the second sample case, note that it's possible for some buildings to have height zero. Environment Time Limit: 5.0 sec(s) for each input file. Memory Limit: 256 MB Source Limit: 1024 KB TLDR; Heights of first two buildings and the total number of buildings are given. For any three consecutive buildings, the heights of the two smaller buildings sum up to the height of the largest building within that group of three. This property holds for all consecutive triples of buildings in the city. Buildings of size 0 are allowed. Compute the minimum sum of heights of all the buildings. Runtime Per Input = 5 sec My Code: #include <iostream> #include <cmath> #include <vector> #include <cstdio> using namespace std; int main(void) { ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr); long long int t; // t = number of test cases cin >> t; long long int x, y, n; //x, y = Height of first & second building. n = number of buildings long long int sum; //sum of the height of buildings int arr[3]; //Not required as pointed out by Juno long long int z; //Temporary variable to store the height of next building for (long long int foo = 0; foo < t; foo++) { cin >> x >> y >> n; //Takes the input sum = x + y; //Adds to sum the height of the first two buildings for (long long int i = 2; i < n; i++) { z = abs(x - y); sum += z; x = y; y = z; } cout << sum << endl; } } Input 1: x, y, n ≤ 10 Input 2: 26 pts: x, y, n ≤ 100 Input 3: 13 pts: x, y, n ≤ 1,000,000,000 My code works for the first two inputs but goes on time limit exceeded on the third input. Can anyone help in thinking a better algorithm for solving this question? During the last input, the program's execution was not complete. Answer: There are a few observations we can make about the nature of the problem. 
The first and most obvious observation is that the building heights will keep decreasing until the first building hits 0. As soon as that happens, the building heights will follow the form \$h, 0, h, h, 0, \ldots \$, repeating the last non-zero height twice before adding in a zero again. Note that the same is true for equal building heights (though that much should've been obvious). The next observation is the following: When the first building is smaller than the second, the output of cumulative_heights(a, b, n) is a + cumulative_heights(b, b - a, n - 1) Now that we handled "special cases", we should take a look at the properties of the "general case". In the following, \$a > b\$ Now let's examine the behaviour of four simple sequences: $$ \begin{align} (1) & 20, 5, 15, 10, 5, 5, 0, \ldots \\ (2) & 20, 9, 11, 2, 9, 7, 2, 5, 3, 2, 1, 1, 0, \ldots\\ (3) & 20, 4, 16, 12, 4, 8, 4, 4, 0, \ldots\\ (4) & 20, 6, 14, 8, 6, 2, 4, 2, 2, 0, \ldots \end{align} $$ Note how in all of these sequences the smaller building height keeps repeating. I've chosen these for certain additional properties they exhibit. It's important to understand how the divisibility of the building heights comes into play here. Using \$(\lfloor \frac{a}{b} \rfloor, a \mod b)\$ to classify these sequences gives us an insight: The sequences where \$a \mod b = 0\$ very quickly result in a repeating pattern. Because the repeated subtraction returns 0 at some point, these sequences (namely (1) and (3)) "end" the descent of the building heights. The other two sequences are somewhat more interesting. When we look at them somewhat differently we get the following picture: $$ 20, 9, 11, 2, 9\\ 2, 9 \\ 9, 7, 2, 5, 3, 2, 1\\ 2, 1, 1, 0 \\ ... \\ 20, 6, 14, 8, 6, 2\\ 6, 2, 4, 2, 2, 0 $$ Now when you see this I hope you notice that there's multiple subsequences here. I repeated the "start" of each subsequence to make it easier to notice. 
The deciding factor about what exactly happens when the first subsequence ends is whether \$\lfloor \frac{a}{b} \rfloor \$ is even or not. If it is, that's equivalent to the first case (sequence (2)). If it isn't we get the somewhat cleaner and easier to reason about result of sequence (4). Note how upon reaching \$a \mod b\$ the even case "adds another \$b\$", which will always be larger than \$a \mod b\$. The sequence then continues with \$(b, b - (a \mod b))\$. The odd case however just "starts sooner" and follows a sequence based on \$(b, a \mod b)\$. The only remaining puzzle piece to turn this into a working analytical (and possibly recursive) solution is to understand how many steps elapse before the "next subsequence" begins and to encapsulate this behaviour into a separate function that takes the two starting heights and the number of buildings as arguments. I leave the hammering out of that detail to you :)
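The first observations in the answer can be sketched in Python (the function name is mine). The sketch implements the greedy $|x-y|$ step plus the closed-form periodic tails once a zero or a repeated height appears; it reproduces the first two sample cases, but it is still linear-time in the worst case — the Euclid-style batching of whole subsequences described at the end of the answer is what is needed for the largest inputs.

```python
def min_total_height(x, y, n):
    """Minimum sum of heights for n buildings starting x, y (sketch only)."""
    total, a, b, remaining = 0, x, y, n
    while True:
        if remaining == 1:
            return total + a
        if remaining == 2:
            return total + a + b
        if b == 0:
            # tail a, 0, a, a, 0, ...: one leading a, then period (0, a, a)
            q, r = divmod(remaining - 1, 3)
            return total + a + 2 * a * q + (0, 0, a)[r]
        if a == b:
            # tail a, a, 0, a, a, 0, ...: period (a, a, 0), each summing 2a
            q, r = divmod(remaining, 3)
            return total + 2 * a * q + (0, a, 2 * a)[r]
        total += a                      # greedy: next height is |a - b|
        a, b = b, abs(a - b)
        remaining -= 1
```

This also covers the "first building smaller than the second" case automatically, since the swap happens in the same update step.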
{ "domain": "codereview.stackexchange", "id": 33892, "tags": "c++, time-limit-exceeded, c++14" }
c++ file in a gazebo model plugin?
Question: Hey all! I'm a new ros-gazebo user, I've read some tutorials and I have a question. Is it possible to use a cpp file in a gazebo plugin model? For example, can I put the cpp file somewhere in this template explained in the tutorial? Because I have a cpp file with a main function (and some more functions), and I have a gazebo model (an iRobot create). Is it possible for the model to follow the code (through the plugin, or directly, or another method)? I guess the last solution is to convert all the cpp code into a plugin language, but is there an easier way? Thanks in advance and have a great day. Axxeel Originally posted by Axxeel on Gazebo Answers with karma: 3 on 2017-05-06 Post score: 0 Answer: All gazebo plugins are in C++; there is no plugin language. Any file can be included as long as you make sure the file is in your include path. In your CMakeLists.txt this is done using include_directories. In the case of source files you'll have to make sure all are listed in the call to add_library. You won't be able to call or run your main function. main as a symbol is already defined by the gazebo executable. You should treat the Load() method on the plugin as the entry point. Originally posted by sloretz with karma: 558 on 2017-05-08 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4097, "tags": "c++, gazebo-plugin" }
How to reliably compute the group delay of a comb filter
Question: I applied the following FIR comb filter in real-time: y[n]=x[n]-x[n-40] Since this is an FIR, the group delay is D=(N-1)/2=20 samples. After applying the filter to a signal, I tried to use cross correlation between the filtered and unfiltered signal, to reproduce D computationally by determining the argmax of the cross correlation (I do have a need to compute the delay this way). The issue is that I get two peaks in the cross correlation, one at zero lag and another at 20 lag. But the peak at zero lag is the maximum, which means the peak at 20 lag, which is the correct lag, is ignored. This method works really well with other filters like averaging filters. Does anyone know why I get the peak at zero which is overshadowing the real peak? Is this normal for comb filters? Is there another method to compute delays using the filtered and unfiltered signal other than cross correlation? Answer: Since this is an FIR, the group delay is D=(N-1)/2=20 samples. No, since this is a linear phase (i.e. symmetric or anti-symmetric) filter, the group delay is half the length! (Being FIR isn't sufficient.) The issue is that I get two peaks in the cross correlation, one at zero lag and another at 20 lag. Write down the formula for auto-correlation at zero lag. Compare that to the formula of "energy of a signal". They are identical! This really shouldn't surprise you! This method works really well with other filters like averaging filters. This method works with anything that has a non-zero zero-lag coefficient. Does anyone know why I get the peak at zero which is overshadowing the real peak? Yes, because autocorrelation at zero is simply the energy. And since correlation is linear, and your system passes through the original signal, plus a delayed version of it, you get the sum of the auto-correlation of the input signal and the cross-correlation of your delayed signal and the input signal. The 20-lag peak is no "realer" than the 0-lag peak. Is this normal for comb filters? 
This is normal for any linear time-invariant system. Is there another method to compute delays using the filtered and unfiltered signal other than cross correlation? The group delay is really defined as the negative derivative of the system's phase response with respect to frequency. If in doubt, estimate the spectrum of your system, and derive its phase. You'll notice that only a few specific systems (linear-phase, see above) have constant group delay. Hence, I'm not sure your cross-correlation had much to do with group delay to begin with.
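The decomposition described in the answer can be reproduced numerically. The sketch below (white-noise input, made-up lengths) shows that for y[n] = x[n] - x[n-40] the cross-correlation of y with x has its dominant peak at lag 0, and that peak is essentially the energy of x passing straight through; the delayed copy shows up at the comb's full delay D = 40 with a negative sign, since it is subtracted. The group delay (N-1)/2 = 20 of the linear-phase filter appears at neither peak.

```python
import random

random.seed(0)
N, D = 8_000, 40
x = [random.gauss(0.0, 1.0) for _ in range(N)]

# the FIR comb: y[n] = x[n] - x[n - D]
y = [x[n] - (x[n - D] if n >= D else 0.0) for n in range(N)]

def xcorr(y, x, k):
    """r[k] = sum_n y[n] * x[n - k] for non-negative lag k."""
    return sum(y[n] * x[n - k] for n in range(k, len(x)))

r = [xcorr(y, x, k) for k in range(2 * D + 1)]
energy = sum(v * v for v in x)
# r[0] ~ +energy (pass-through term), r[D] ~ -energy (the subtracted copy)
```

So argmax of the signed cross-correlation lands at 0 and argmin at 40; picking peaks of this function cannot recover a group delay of 20.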
{ "domain": "dsp.stackexchange", "id": 8099, "tags": "filters, finite-impulse-response, cross-correlation, comb, channel-estimation" }
Critical 2d Ising Model
Question: The 2d Ising model is extremely well studied, nevertheless I have encountered two facts which seem to contradict one another, and I have not been able to find the resolution in the literature. The puzzle is the following. The critical Ising model is well known to be described by a CFT, and in particular a minimal model. This is described in many places, for example Ginsparg's CFT notes https://arxiv.org/abs/hep-th/9108028. To find the critical temperature, for which the CFT description is valid, perhaps the easiest way is to exploit the Kramers-Wannier duality, which relates the high-temperature/weak-coupling theory to the low-temperature/strong-coupling theory. The critical temperature is then given by the self-dual temperature. This makes it clear that the critical theory is just the usual 2d Ising Hamiltonian, but with the critical value of the coupling constant $\beta J \equiv K = K_{*}$. The defining property of the theory at the critical point is that it is invariant under RG flows. In general if $\mathcal{R}$ denotes the RG operation (in any given scheme, for example block-spin RG) and if the Hamiltonian $H$ depends on the coupling constants $\lambda_{1}, \lambda_{2}, \ldots$, then this may be written schematically as $$ \mathcal{R} H[\lambda_1^*, \lambda_2^*, \ldots ]= H[\lambda_1^*, \lambda_2^*, \ldots ],$$ where $*$ denotes fixed point quantities. Here is where the puzzle arises. Applied to the 2d Ising model with nearest-neighbor (NN) interactions only, the standard block-spin RG generates next-to-nearest-neighbor (NNN) interactions, and even NNNN interactions. See this for a demonstration of this fact. By examining the RG recursion relations, one finds that for no finite $K$ do these new interactions vanish. Therefore, with this RG scheme at least, the 2d Ising model with NN interactions can never be a fixed point of the RG transformation. Any critical theory will necessarily involve additional higher-spin couplings. 
So is the critical Ising model $H = - J_* \sum_{<i,j>} s_i s_j$, with only NN interactions, or are there an infinite number of additional higher-spin interactions (which may become negligible in the continuum limit)? Answer: This is a great question. Pages 15 and 16 of these notes argues that no nontrivial spin Hamiltonian can ever be a fixed point under spin decimation, but I don't understand why their argument doesn't hold in the 1D case. The notes end with the cryptic comment there are many RG’s. The goal is not to see how many don’t work, but rather to find one that does. I suspect that under repeated spin decimation, the 2D Ising model Hamiltonian will eventually converge to an extremely complicated fixed-point Hamiltonian with $n$-spin couplings for all even $n$ (odd-$n$ couplings would break the Hamiltonian's $\mathbb{Z}_2$ symmetry $\sigma_i \to -\sigma_i$), with coupling coefficients that decay exponentially with $n$. While obviously much more complicated than the original Hamiltonian, this fixed-point Hamiltonian would lie in the same Ising universality class and would therefore have the same long-distance physics. Since the particular choice of spin renormalization scheme is UV-sensitive, the resulting fixed-point Hamiltonian will depend on your choice of RG scheme - but every possible RG scheme should yield a fixed-point Hamiltonian in the same universality class, which is all that matters for the low-energy physics.
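As a concrete point of reference for the 1D case mentioned in the answer: decimation (tracing out every other spin) is exact for the 1D Ising chain and gives the standard recursion $\tanh K' = \tanh^2 K$. Iterating it, as sketched below, shows every finite coupling flowing to the trivial fixed point $K=0$, so no nontrivial finite-coupling Hamiltonian is a fixed point there either.

```python
import math

def decimate(K):
    # exact 1D Ising spin decimation (blocking factor b = 2):
    # tanh(K') = tanh(K)**2
    return math.atanh(math.tanh(K) ** 2)

flow = [1.0]                      # start from an arbitrary finite coupling
for _ in range(10):
    flow.append(decimate(flow[-1]))
# the coupling decreases monotonically toward the trivial fixed point K = 0
```

Only $K=0$ and $K\to\infty$ are fixed points of this map, which is the RG statement that the 1D model has no finite-temperature transition.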
{ "domain": "physics.stackexchange", "id": 39446, "tags": "statistical-mechanics, renormalization, conformal-field-theory, ising-model, lattice-model" }
Non-observance and the Schrödinger equation
Question: I was thinking today about configurations where one measures that a certain observable is not in a certain state. I was getting confused about what this means for decoherence. If I observe a detector and I measure when a particle does not interact with it, then I don’t understand how this can be entirely equivalent to allowing the particle to interact with further macroscopic objects (f.i. detectors, my brain) in such a way that the wave functions collapse. I’m detecting when it doesn’t interact, so I’m not interacting with it. If the Schrödinger equation yields solutions that show the probability as the square of the amplitude, then the ‘negative’ Schrödinger equation’s solution is an operator $\sqrt{1-x^2}$ applied to the normal solution. Under what conditions is that still a solution of the Schrödinger equation? And is it possible to define Hermitian operators that give the probability of “not observing” a property? I don’t see how physically decoherence of non-observance can happen in the same way as regular observing, and at the same time it feels like it has to, although this may just be another aspect of QP that defies intuition. Answer: If the Schrödinger equation yields solutions that show the probability as the square of the amplitude, then the ‘negative’ Schrödinger equation’s solution is an operator $\sqrt{1-x^2}$ applied to the normal solution. This expression doesn't make sense on dimensional grounds. If you intend $x$ to be the wavefunction, then $x^2$ has units, so you can't subtract it from 1. I was getting confused about what this means for decoherence. If I observe a detector and I measure when a particle does not interact with it, then I don’t understand how this can be entirely equivalent to allowing the particle to interact with further macroscopic objects (f.i. detectors, my brain) in such a way that the wave functions collapse. Decoherence isn't the collapse of a wavefunction. 
And is it possible to define Hermitian operators that give the probability of “not observing” a property? Yes. You can define this kind of does-it-have-this-property operator as a projection operator $P$, which is one that has eigenvalues all equal to 0 or 1. In Mackey, The Mathematical Foundations of Quantum Mechanics, 1963, these are referred to as "questions." The logical negation of the operator is defined as $1-P$. The act of measuring (an eigenvalue of) $1-P$ requires an interaction, just as the act of measuring $P$ requires an interaction. For example, in the Stern-Gerlach experiment, $P$ would be an operator whose eigenvalue is 0 on the spin-down state and 1 on the spin-up state. The measurement to implement $1-P$ is implemented using exactly the same apparatus as the measurement to implement $P$.
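A minimal matrix illustration of the "questions" formalism in the Stern-Gerlach example (basis ordering and names are my own choices): $P$ projects onto spin-up, its logical negation is $1-P$, and both are projectors with eigenvalues 0 and 1.

```python
# Basis ordering: (up, down).  P asks "is the spin up?"; Q = 1 - P asks
# "is the spin NOT up?".  Both are projectors: P^2 = P, Q^2 = Q.

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
P = [[1, 0], [0, 0]]                                              # "is it up?"
Q = [[I2[i][j] - P[i][j] for j in range(2)] for i in range(2)]    # "is it not up?"

up, down = [1, 0], [0, 1]

def expect(op, v):
    # <v|op|v> for a real state vector v
    return sum(v[i] * op[i][j] * v[j] for i in range(2) for j in range(2))
```

On the spin-down state the "not up" question $Q$ returns probability 1, exactly as the answer describes, and both questions are answered by the same apparatus.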
{ "domain": "physics.stackexchange", "id": 69310, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, observables, decoherence" }
Length of Contigs in Transcriptome and Whole Genome Assembly
Question: Why are there shorter contigs from transcriptome assembly than from a whole genome assembly? I know the difference between transcriptome and genome, but don't really understand what contigs are in the context of sequencing in bioinformatics, and why are the contigs shorter in transcriptome assembly than in whole genome assembly. Anyone mind explaining? Much appreciated. Answer: The longest possible transcriptome contig reflects the longest possible transcript. In the human genome, that's possibly Titin, which is ~35K aa (that's ~105Kb) and will be longer pre-splicing. The longest possible genome contig reflects the longest chromosome. In the human genome that's chr1 at ~250Mb. In practise your contigs/scaffolds will be a lot shorter for various reasons (e.g. hard to sequence the telomeres and repeats) but it illustrates the difference since all the chromosomes consist of contiguous DNA but many separate RNA transcripts (and not all DNA is transcribed).
{ "domain": "bioinformatics.stackexchange", "id": 2015, "tags": "assembly, transcriptome, genome-sequencing" }
Can entanglements themselves be entangled?
Question: In other words, could there be higher dimensional entanglements between entanglements? For instance, this could allow us to entangle two entangled-far-away pairs to create a system of four entangled particles. I am sorry if this sounds like a silly question. Answer: Entanglement is a property, so it doesn't make sense to "entangle entanglements". However you can entangle entangled objects. And indeed, if you have two entangled pairs, you can create four-particle entanglement that way. For example, you could create a four-particle cluster state (cluster states are special entangled states useful for quantum computing) by taking two Bell pairs, and do an entangling operation between one of the particles of each pair. Note however that you still need to bring those particles together. You don't need to bring all the particles together, however; in my example above, the second electron of the two Bell pairs could be both far away, and yet they'd still be part of the four-particle entangled state. You can even "measure out" the entanglement from the two particles you did the entanglement operations on, and end up with the two other particles forming an entangled Bell pair despite them never having been close to each other. That basically is what happens in entanglement swapping (except that instead of first entangling and then "measuring out" you do an entangling measurement, but as far as the remote particles are concerned, that's equivalent). However note that you cannot use that to entangle previously unentangled particles. While in the entanglement swapping case, the two particles that get entangled need never have come close to each other, they both already have been entangled with another particle each, and those "entanglement partners" indeed did come close to each other. 
Especially if you have two entangled pairs which are in different places, there's no way to entangle them without either getting at least one of the particles of each pair together, or using a third entangled system where one pair directly interacts with one part of that third system, and the other pair directly interacts with the other part. So entanglement can only be created locally, but it can be "redistributed" along existing "entanglement connections".
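The two-Bell-pairs construction in the answer can be made concrete with a small state-vector calculation. The sketch below (function names are mine; the "entangling operation" is simplified to a single CZ gate) tracks the Schmidt rank across the cut between the two pairs: it is 1 before the gate (a product of two independent pairs) and 2 after (genuine entanglement across the cut, hence a four-particle entangled state).

```python
from itertools import product

def two_bell_pairs():
    # (|00> + |11>)/sqrt(2) on pair (a1,a2) and on pair (b1,b2);
    # amplitudes stored by basis string "a1 a2 b1 b2"
    return {a + a + b + b: 0.5 for a, b in product("01", repeat=2)}

def cz(amp, i, j):
    # controlled-Z between qubits i and j: flip the sign when both are 1
    return {s: (-v if s[i] == "1" and s[j] == "1" else v) for s, v in amp.items()}

def schmidt_rank(amp):
    # amplitude matrix across the (a1,a2) | (b1,b2) cut, then Gaussian rank
    labels = ["00", "01", "10", "11"]
    M = [[amp.get(r + c, 0.0) for c in labels] for r in labels]
    rank = 0
    for col in range(4):
        piv = next((r for r in range(rank, 4) if abs(M[r][col]) > 1e-12), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, 4):
            f = M[r][col] / M[rank][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

before = two_bell_pairs()   # independent pairs: product state across the cut
after = cz(before, 1, 2)    # entangle a2 with b1 (one particle from each pair)
```

Note that the gate acts only on a2 and b1, which is exactly the locality point of the answer: the far-away partners a1 and b2 never meet, yet they end up part of one four-particle entangled state.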
{ "domain": "physics.stackexchange", "id": 16068, "tags": "quantum-mechanics, quantum-entanglement" }
Why do rotons correspond to the minimum of the dispersion curve?
Question: The dispersion curve for superfluid helium-4 is given above. To my knowledge, the first paper that was able to argue that the curve should take this shape from first principles was Feynman's 1954 paper (although I could be wrong). This curve gives the energy for quasi-particle excitations of given momenta. The smallest energy excitations are phonons, i.e. sound waves. This is understood. The paper argues that the energy of an excitation is $$E = \frac{\hbar^2 k^2}{2 m S(k)}$$ (equation 18). $S(k)$ is defined to be the Fourier transform of $p(r)$ $$S(k) = \int p(\vec r) \exp(i \vec k \cdot \vec r) d^3 r$$ where $p(\vec r_1 - \vec r_2)$ is the probability per unit volume of finding a helium atom at position $\vec r_2$ given that an atom is present at $\vec r_1$. Therefore, the shape of the dispersion curve is an artifact of the distribution of helium atoms in the superfluid. (Please correct me if anything I said was incorrect.) Having said that, I don't understand the relation of the minimum of E(k) to "rotons." We are now pretty confident that rotons are in fact tiny vortex rings in the superfluid. The vorticity of the ring is quantized by the fact that the phase of a wave function is equivalent up to multiples of $2\pi$. My main question is: What exactly does the minimum of this dispersion curve have to do with tiny vortex rings? Does it have anything to do with the group velocity of the excitation being 0? Side question: Is there any microscopic picture of what a "maxon" is? Answer: The name "roton" is historical, Landau originally thought that the roton corresponds to a separate branch of the dispersion relation, somehow related to vorticity. He later realized that this is not the case, but the name stuck. Indeed, Feynman's variational ansatz shows that the roton minimum is continuously connected to the phonon, suggesting that the excitations are not fundamentally different. 
Note that the minimum arises from a maximum in the static structure function, suggesting that strong correlations between the atoms are important (There is no roton in conventional dilute gas BECs). Indeed, the strong maximum in the structure factor shows that helium is not at all like a dilute gas, but is a dense liquid, on the verge of solidifying. At the liquid-solid transition the short range order encoded in the structure factor becomes the long range order of the solid. Of course there are vortex rings in liquid helium, and these have been experimentally observed. Indeed, there are papers on the interaction of vortex rings with phonons and rotons.
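The link between a maximum of $S(k)$ and the roton minimum can be illustrated with a toy structure factor. Everything in the sketch below (functional forms, peak position, units with $\hbar=m=1$) is invented for illustration — it is not helium data — but it shows how Feynman's $E(k)=\hbar^2k^2/2mS(k)$ turns a peak in $S(k)$ into a dip in $E(k)$.

```python
import math

def S(k, k0=2.0, width=0.4, height=2.0):
    # toy structure factor: S ~ k at small k (phonon regime) plus a strong
    # correlation peak near k0, mimicking a dense liquid
    base = k / math.sqrt(k * k + 1.0)
    peak = height * math.exp(-((k - k0) / width) ** 2)
    return base + peak

def E(k):
    return k * k / (2.0 * S(k))       # Feynman spectrum with hbar = m = 1

ks = [0.05 * i for i in range(1, 101)]
Es = [E(k) for k in ks]

# interior extrema of the toy dispersion curve
maxima = [ks[i] for i in range(1, 99) if Es[i - 1] < Es[i] > Es[i + 1]]
minima = [ks[i] for i in range(1, 99) if Es[i - 1] > Es[i] < Es[i + 1]]
k_maxon, k_roton = maxima[0], minima[0]   # the "maxon" hump, the "roton" dip
k_peak_S = max(ks, key=S)                 # the dip sits near the peak of S(k)
```

The curve rises linearly (phonon), turns over at a maxon-like hump, dips at a roton-like minimum located near the peak of $S(k)$, and rises again — a one-line demonstration of the answer's point that strong density correlations, not vorticity, produce the minimum.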
{ "domain": "physics.stackexchange", "id": 46607, "tags": "quantum-mechanics, superfluidity" }
laser_scan_assembler outputs no data
Question: Hi forum, this has been driving me mad for a couple of days, can anyone help? I'm running ROS Indigo on Xubuntu 14.04 with a Hokuyo lidar on a remote machine. On my base machine I can see the /scan topic and get read data off it using rostopic echo /scan the output shows the data is in the "laser" frame. I am trying to assemble scans using the laser_scan_assembler as in the tutorial, currently with this launch, And I broadcast tf's so that rosrun tf tf_echo laser map gives At time 1429639619.311 Translation: [-0.050, 1.000, -0.250] Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000] in RPY [0.000, -0.000, 0.000] However when I examine the assembler output there's no data, rosservice call assemble_scans 0 10 cloud: header: seq: 0 stamp: secs: 0 nsecs: 10 frame_id: map points: [] channels: [] I can't find any further info on this, and the tutorial doesn't give a full example, so could anyone help? I'm running here on a live robot, but have had similar problems running from a bag. Originally posted by charles.fox on ROS Answers with karma: 120 on 2015-04-21 Post score: 0 Answer: Answering my own question for future reference: Problem was due to syntax of the "rosservice call assemble_scans 0 10". The two arguments should instead be the starttime and endtime of the requested scans in NANOSECONDS of ROS TIME, not seconds since sim start. It was useful to run the test scripts in https://github.com/ros-perception/laser_assembler/tree/hydro-devel/test to reach this -- maybe would useful to mention the nanosecond convention on the laser_assemble tutorial wiki to clarify? Originally posted by charles.fox with karma: 120 on 2015-04-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 21490, "tags": "ros, laser-pipeline" }
Inserting nodes into a singly linked list based on size of number
Question: Given a set of numbers insert them so that they are ordered ie. add(5) add(3) add(6) add(2) Output: 6 5 3 2 GetNumber() returns the number of that term insertAfter() inserts that node after the pointer insertAtHead() inserts as the head node Term is the Node which holds the number How can I refactor this working code? boolean inserted = false; Node pointer = head; if (pointer == null || term.getNumber() > pointer.data.getNumber()) { // If head is empty insert there or if term is larger than head. insertAtHead(term); } else if(pointer.next == null) { insertAfter(term, head); } else { while (pointer.next != null) { //Insert at end. if (term.getNumber() > pointer.next.data.getNumber()) { insertAfter(term, pointer); inserted = true; break; } pointer = pointer.next; } if (!inserted) { //If not inserted it must be the smallest so add to the end. insertAfter(term, pointer); } } printList(); Answer: You don't need the else if(pointer.next == null) case, it's covered by if (!inserted). Declare variables in as small a scope as possible to increase readability. inserted and pointer aren't needed until the while loop. You can save a nesting level if you put your code into a insertSorted method, and then call printList afterwards in the calling method. That way, you can write the initial head insert as a guard clause and thus save the else. You could also save the break by adding && !inserted to the while clause. many of your comments just repeat the code they comment on, and are thus not really needed. // Insert at end. is quite misleading, as the insert doesn't happen at the end of the list. 
With all these changes, your code already looks a bit cleaner: if (head == null || term.getNumber() > head.data.getNumber()) { insertAtHead(term); return; } Node pointer = head; boolean inserted = false; while (pointer.next != null && !inserted) { if (term.getNumber() > pointer.next.data.getNumber()) { insertAfter(term, pointer); inserted = true; } pointer = pointer.next; } if (!inserted) { //If not inserted it must be the smallest so add to the end. insertAfter(term, pointer); } You could also return early after inserting, and thus get rid of the inserted variable: // guard clause for head insert Node pointer = head; while (pointer.next != null) { if (term.getNumber() > pointer.next.data.getNumber()) { insertAfter(term, pointer); return; } pointer = pointer.next; } //If not inserted it must be the smallest so add to the end. insertAfter(term, pointer); But you still have a separate case for the last insert which is a bit ugly. You should be able to get rid of it with something like this: // guard clause for head insert Node pointer = head; while (pointer.next != null && term.getNumber() < pointer.next.data.getNumber()) { pointer = pointer.next; } insertAfter(term, pointer);
{ "domain": "codereview.stackexchange", "id": 12636, "tags": "java, linked-list" }
NP-complete problem 3-SAT, is there a difference in complexity between just providing yes/no without exact solution
Question: The 3-SAT problem is NP-complete, meaning that no known algorithm can provide an exact solution in polynomial time, while a solution can be tested very quickly in polynomial time. My question is, if asking for an algorithm that only provides yes/no, or solvable/not solvable, without providing an exact solution of the 3-SAT formula, will this result in another complexity class? The reason I ask is a similar situation when looking for prime numbers and factorizations, where it is known that it's much easier to test whether a number is prime than to find an exact prime factorization of a number. Answer: Check Bellare and Goldwasser, "The complexity of decision versus search", SIAM J. on Computing, 23:1 (Feb. 1994), pp. 97-119. Bellare has notes for a class.
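One side of the decision-versus-search question can be illustrated with SAT's self-reducibility: given only a yes/no oracle, a satisfying assignment is recovered with one extra oracle call per variable, so for SAT the search problem is no harder than the decision problem up to a polynomial factor. The sketch below uses a brute-force oracle purely for illustration; all names are mine, and clauses are lists of nonzero ints with negatives meaning negated literals.

```python
from itertools import product

def sat_oracle(clauses, n, fixed):
    """Yes/no: does some assignment extending `fixed` satisfy all clauses?"""
    free = [v for v in range(1, n + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            return True
    return False

def search_via_decision(clauses, n):
    if not sat_oracle(clauses, n, {}):
        return None                          # decision answer: "no"
    fixed = {}
    for v in range(1, n + 1):                # n more decision queries
        fixed[v] = True
        if not sat_oracle(clauses, n, fixed):
            fixed[v] = False                 # fixing True failed, so False works
    return fixed
```

The interesting direction studied by Bellare and Goldwasser is the converse question — whether there are NP problems where deciding is strictly easier than searching under plausible assumptions.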
{ "domain": "cs.stackexchange", "id": 15682, "tags": "complexity-theory, time-complexity, computability, np-complete, p-vs-np" }
XMLRPC and Nodes
Question: Is there a single XMLRPC server running for all of the topics that are registered under a given node? Or does each topic maintain its own server? If there is a single XMLRPC server for all of the topics, at what point does communication switch to the new connection between subscriber/publisher -- is it immediately after requesting the topic, or does the connection header get exchanged first? Originally posted by chris-smith on ROS Answers with karma: 1 on 2013-12-05 Post score: 0 Answer: There's one XMLRPC server per node. The XMLRPC communication is documented here: http://wiki.ros.org/ROS/Technical%20Overview Originally posted by tfoote with karma: 58457 on 2013-12-07 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16349, "tags": "ros, xmlrpc, nodes" }
Energy and momentum of a relativistic electron
Question: The question is to find the magnitude $p$ of the electron's momentum in units of MeV/$c$, given that the kinetic energy of the electron is 2.53 MeV. The answer provided by the book says, \begin{align} p&=c^{-1}\sqrt{E^2-m^2c^4} \\ &=c^{-1}\sqrt{3.04^2-0.511^2}=3.00\,{\rm MeV}/c \end{align} But I don't understand why I can't do it another way. That is, divide the total energy by the speed of light and get the momentum: \begin{align} p&=\frac{E}{c}\\ &=3.04\,{\rm MeV}/c \end{align} Answer: $E = pc$ is only true for massless particles. For massive particles you have the mass-shell relation: $E^2 = m^2c^4+p^2c^2$. Use $E=T+mc^2$ to get the total energy, and then you can find $p$.
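Numerically, the book's route is just two lines (working in MeV, with the electron rest energy taken as 0.511 MeV):

```python
import math

mc2 = 0.511          # electron rest energy, MeV
T = 2.53             # kinetic energy, MeV
E = T + mc2          # total energy: E = T + mc^2
p = math.sqrt(E**2 - mc2**2)   # mass-shell relation, p in MeV/c
print(f"E = {E:.3f} MeV, p = {p:.2f} MeV/c")
```

Using $p = E/c$ instead overestimates the momentum by about 1.4% here, because it silently drops the rest-energy term; the relative error grows for slower electrons.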
{ "domain": "physics.stackexchange", "id": 23193, "tags": "special-relativity, energy, momentum, electrons" }
Does the Dirac Hamiltonian uniquely define the Dirac Lagrangian?
Question: Example Hamiltonian: the linearised graphene Hamiltonian In condensed matter, we typically write down Hamiltonians instead of Lagrangians. An example is given by the Hamiltonian for graphene. When we linearise the theory about the Dirac points of the model, we obtain a pair of Dirac Hamiltonians, see Eq. (18) of this review for example. Let's focus on one of these Hamiltonians; it is given by $$ H = \int dx \psi^\dagger (-i\alpha^i \partial_i + \beta m) \psi \equiv \int dx \mathcal{H}, $$ where I have set $v_F = 1$ and $\alpha^i$, $\beta$ are the usual Dirac alpha and beta matrices used for expressing the Dirac equation in the form of a Schrödinger equation. What is the Lagrangian of this Hamiltonian? Now I ask, what is the Lagrangian of this model? Well, I could write the obvious Lagrangian as $$ \mathcal{L} = i \psi^\dagger \partial_t \psi - \mathcal{H} = \bar{\psi}( i \gamma^\mu \partial_\mu-m) \psi$$ where I have combined the space and time derivatives into a single object $\partial_\mu = (\partial_t , \partial_i)$ and defined the objects $\gamma^0 = \beta$, $\gamma^i = \beta \alpha^i$ and $\bar{\psi} = \psi^\dagger \gamma^0$ which brings the action into its standard covariant form. This is nothing but the Dirac Lagrangian and it returns the Hamiltonian after a Legendre transformation as expected. However, what if I decided to insert some arbitrary matrix $M$ into the time derivative part of my Lagrangian such as $$ \mathcal{L}'= i \psi^\dagger M \partial_t \psi -\mathcal{H}$$ where I could let $M$ be horribly space-dependent if I wish. This Lagrangian $\mathcal{L}'$ would yield a different equation of motion to the Dirac equation, however it would still yield the same Hamiltonian.
Using the definition of the Legendre transformation, we have $$ \mathcal{H}' = \frac{\partial \mathcal{L'}}{\partial (\partial_t \psi)} \partial_t \psi - \mathcal{L}'= i\psi^\dagger M \partial_t \psi-\mathcal{L}'=\mathcal{H}$$ My question: It appears that the Hamiltonian can be derived from both of the Lagrangians $\mathcal{L}$ and $\mathcal{L}'$, however the equations of motion of these two Lagrangians differ. The equations of motion are $$ \mathcal{L} \quad \Rightarrow \quad (i \gamma^\mu \partial_\mu - m)\psi = 0 $$ $$ \mathcal{L}' \quad \Rightarrow \quad i \gamma^0 M \partial_t \psi + i \gamma^i \partial_i \psi - m \psi = 0$$ This is puzzling, because when I quantise the theory, I have to construct the mode expansion from the equations of motion. This will diagonalise the Hamiltonian. But it appears that this process would yield different results depending on whether I let $M \neq \mathbb{I}$ or not. For this reason my question is the following: if the Lagrangian corresponding to a Hamiltonian is not unique, which Lagrangian do I choose? Answer: Putting the $M$ in changes the commutation relation of $\psi$ and $\psi^\dagger$. The conjugate field is now $\Pi=\psi^\dagger M$. You will therefore have $\{\psi^a,(\psi^b)^\dagger\}= (M^{-1})^{ab}\delta^3(x-x')$ and $H$ may look the same, but it does not have the same Hamiltonian equations of motion.
{ "domain": "physics.stackexchange", "id": 86613, "tags": "lagrangian-formalism, field-theory, hamiltonian-formalism, action" }
Explosions of black holes
Question: I was bopping around YouTube and observed this enjoyably produced video. In it, when describing the behavior of a black hole with the mass of a US nickel, the narrator says, "Its 5 grams of mass will be converted to 450 terajoules of energy, which will lead to an explosion roughly three times bigger than the bombs dropped on Hiroshima and Nagasaki combined." Of all the fun things illustrated there, that was the one whose pretense I didn't understand. Do black holes explode after they've radiated away all of their mass? Or would the "explosion" just come from the rapid pace at which the black hole would consume nearby matter? The Googling I've done thus far has provided no firm answer. The closest I've come is from the Wikipedia page on Hawking Radiation, which states, "For a black hole of one solar mass, we get an evaporation time of 2.098 × 10^67 years—much longer than the current age of the universe at 13.799 ± 0.021 x 10^9 years. But for a black hole of 10^11 kg, the evaporation time is 2.667 billion years. This is why some astronomers are searching for signs of exploding primordial black holes." Some other websites refer to the last "explosion" of the Milky Way's supermassive black hole being some 2 million years ago, but is that the same mechanic mentioned on the Wikipedia page? Or the YouTube video, for that matter? Answer: What this video is talking about is Hawking Radiation, as you've linked. Hawking Radiation is a proposed hypothetical (by no means verified or proven) way for a black hole to radiate its energy into space. The basic idea is that a black hole is nothing but mass/energy compressed to an infinitesimal point, which is radiating its energy into space over time. For large black holes (such as solar mass or bigger), this radiation process is tiny and the time taken to leak all the black hole's energy into space (and thus for the black hole to "evaporate") is exceedingly long. 
For tiny black holes however, the time to radiate all the black hole's energy is exceedingly short. You can calculate how long it will take for a black hole of mass $m$ to evaporate (and release all its mass/energy) with the equation $$t_{\text{ev}} = \frac{5120\pi G^2 m^3}{\hbar c^4} = (8.41\times 10^{-17}\:\mathrm{s}\:\mathrm{kg}^{-3})\:m^3$$ For $m = 5\:\mathrm{g}=0.005\:\mathrm{kg}$, you get $t_{\text{ev}} \simeq 1\times10^{-23}\:\mathrm{s}$. Now that means in this tiny amount of time, the black hole will radiate all of its mass/energy away and completely evaporate. But releasing all the energy in 5 g of mass is a huge output. Putting out 450 terajoules of energy in $10^{-23}\:\mathrm{s}$ is basically just an explosion. You can determine the total energy output from the famous equation $$E=mc^2$$ Just plug in $m = 0.005\:\mathrm{kg}$ and $c = 3\times 10^8\:\mathrm{m/s}$ and you'll get $E=4.5\times 10^{14}\:\mathrm{J} = 450\:\mathrm{terajoules}$. So in short, hypothetical calculations (not even theory at this point) suggest that a tiny black hole with the mass of a nickel would immediately explode out in a huge amount of energy. Whether such a black hole can form, or if such an evaporation would/could occur is still highly debated and ultimately unknown at this point.
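Both formulas are easy to evaluate directly (a quick sketch with SI constants; note the $8.41\times10^{-17}\:\mathrm{s\,kg^{-3}}$ prefactor puts the 5 g evaporation time near $10^{-23}\:\mathrm{s}$):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
m = 0.005            # kg, the nickel-mass black hole

t_ev = 5120 * math.pi * G**2 * m**3 / (hbar * c**4)   # evaporation time, s
E = m * c**2                                          # total energy released, J
print(f"t_ev ~ {t_ev:.2e} s, E ~ {E:.2e} J")
```

The energy comes out at about 4.5 x 10^14 J (450 terajoules), released essentially instantaneously.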
{ "domain": "astronomy.stackexchange", "id": 1758, "tags": "black-hole, hawking-radiation, explosion" }
Small amount of training data set for naive Bayes classifier for binary classification
Question: I'm implementing a prediction system for young cricketers in the ODI format using a Naive Bayes classifier. The output of the system is a prediction of whether a young player is a rising star or not. I have collected data from the Statsguru API of ESPNcricinfo, but I'm getting only about 300 records of players from ODIs. Is that too small for a training dataset? Answer: In machine learning, more data generally means more accuracy, and with a 300-sample dataset the classifier has little room to work with. But if you have a small number of classes and features, you may still get good results. When I did a project based on sensor data, I used only about 100 sample records from a sensor; it actually predicted very well, reaching an accuracy of 77%, and the accuracy grew once I increased the dataset. Just don't train it on noise such as outliers and unwanted features; they hurt your accuracy a lot.
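To get a feel for how far a few hundred records can go, here is a self-contained Gaussian Naive Bayes sketch on a synthetic stand-in dataset (the two "player stat" features, class sizes, and distributions are invented for illustration, not taken from Statsguru):

```python
import math
import random

def fit_gaussian_nb(X, y):
    """Estimate per-class priors and per-feature means/variances."""
    params = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n_feat = len(rows[0])
        means = [sum(r[j] for r in rows) / len(rows) for j in range(n_feat)]
        variances = [sum((r[j] - means[j]) ** 2 for r in rows) / len(rows) + 1e-9
                     for j in range(n_feat)]
        params[c] = (len(rows) / len(y), means, variances)
    return params

def predict(params, x):
    """Pick the class with the highest log-prior + Gaussian log-likelihood."""
    best, best_lp = None, float("-inf")
    for c, (prior, means, variances) in params.items():
        lp = math.log(prior)
        for xj, m, v in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Synthetic stand-in features: [batting average, strike rate]; both invented.
random.seed(0)
rising = [[random.gauss(0.45, 0.05), random.gauss(95, 5)] for _ in range(150)]
others = [[random.gauss(0.30, 0.05), random.gauss(75, 5)] for _ in range(150)]
X = rising + others
y = [1] * 150 + [0] * 150
model = fit_gaussian_nb(X, y)
acc = sum(predict(model, x) == t for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

With two well-separated features and two classes, 300 rows are plenty; with many noisy features the same sample size can fail, which is the trade-off the answer describes. Cross-validation on the real data is the honest check.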
{ "domain": "datascience.stackexchange", "id": 2380, "tags": "machine-learning, classification, supervised-learning, naive-bayes-classifier, bayesian-networks" }
ipc_bridge Example Publisher MATLAB Error
Question: Hi, I've been following the ipc_bridge package (https://alliance.seas.upenn.edu/~meam620/wiki/index.php?n=Roslab.IpcBridge) in order to connect MATLAB to ROS. I've installed the MEX compiler and am able to compile every package with the correct version of GCC (after linking the MEX compiler to the correct GCC version). Note that I'm using Groovy on Ubuntu 12.04. However, when I try to run ~/catkin_ws/src/ipc-bridge/ipc_bridge_stack/ipc_bridge_example/example_publisher.m from MATLAB, I get the following errors:

    Warning: Name is nonexistent or not a directory: rospack.
    > In path at 110 In addpath at 87 In example_publisher at 9
    Warning: Directory access failure: /usr/local/MATLAB/R2012a/sys/os/glnxa64/libstdc++.so.6.
    > In path at 110 In addpath at 87 In example_publisher at 9
    Warning: Name is nonexistent or not a directory: version `GLIBCXX_3.4.15' not found (required by /opt/ros/groovy/lib/librospack.so)/bin.
    > In path at 110 In addpath at 87 In example_publisher at 9
    Undefined function 'geometry_msgs_Twist' for input arguments of type 'char'.
    Error in example_publisher (line 13)
    pid=geometry_msgs_Twist('connect','publisher','example_module','twist');

What should I do to overcome this? Any help will be greatly appreciated. Originally posted by mozcelikors on ROS Answers with karma: 181 on 2013-12-02 Post score: 1 Original comments Comment by ZYS on 2016-01-20: Did you solve this problem? I met a problem which is the same as yours. Comment by gvdhoorn on 2016-01-21: @ZYS: please don't post answers, unless you can actually answer a question. Use comments for these kind of things. Thanks. Answer: If you want to use the ipc_bridge functions, you must first add the path of the specific binary to MATLAB: addpath('Address to the /bin directory of specific message which includes all the mex-compiled binary files') Originally posted by ZYS with karma: 108 on 2016-01-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 16327, "tags": "matlab" }
Flappy Bird game clone for a beginners' programming class
Question: I'll soon begin teaching a beginners' programming class. It's voluntary, so I thought I'd make it interesting by teaching Python programming and then introduce the kids to Pygame, so that they can make their own games. To try out Pygame (I've never used it before) and to see exactly how easy it is to make a game, I made a clone of Flappy Bird. What do you think? Could this be made simpler/shorter? Is there anything in there that I shouldn't teach my students? Github repo, with images #!/usr/bin/env python3 """Flappy Bird, implemented using Pygame.""" import math import os from random import randint import pygame from pygame.locals import * FPS = 60 EVENT_NEWPIPE = USEREVENT + 1 # custom event PIPE_ADD_INTERVAL = 3000 # milliseconds FRAME_ANIMATION_WIDTH = 3 # pixels per frame FRAME_BIRD_DROP_HEIGHT = 3 # pixels per frame FRAME_BIRD_JUMP_HEIGHT = 5 # pixels per frame BIRD_JUMP_STEPS = 20 # see get_frame_jump_height docstring WIN_WIDTH = 284 * 2 # BG image size: 284x512 px; tiled twice WIN_HEIGHT = 512 PIPE_WIDTH = 80 PIPE_PIECE_HEIGHT = BIRD_WIDTH = BIRD_HEIGHT = 32 class PipePair: """Represents an obstacle. A PipePair has a top and a bottom pipe, and only between them can the bird pass -- if it collides with either part, the game is over. Attributes: x: The PipePair's X position. Note that there is no y attribute, as it will only ever be 0. surface: A pygame.Surface which can be blitted to the main surface to display the PipePair. top_pieces: The number of pieces, including the end piece, in the top pipe. bottom_pieces: The number of pieces, including the end piece, in the bottom pipe. """ def __init__(self, surface, top_pieces, bottom_pieces): """Initialises a new PipePair with the given arguments. The new PipePair will automatically be assigned an x attribute of WIN_WIDTH. Arguments: surface: A pygame.Surface which can be blitted to the main surface to display the PipePair. You are responsible for converting it, if desired. 
top_pieces: The number of pieces, including the end piece, which make up the top pipe. bottom_pieces: The number of pieces, including the end piece, which make up the bottom pipe. """ self.x = WIN_WIDTH self.surface = surface self.top_pieces = top_pieces self.bottom_pieces = bottom_pieces self.score_counted = False @property def top_height_px(self): """Get the top pipe's height, in pixels.""" return self.top_pieces * PIPE_PIECE_HEIGHT @property def bottom_height_px(self): """Get the bottom pipe's height, in pixels.""" return self.bottom_pieces * PIPE_PIECE_HEIGHT def is_bird_collision(self, bird_position): """Get whether the bird crashed into a pipe in this PipePair. Arguments: bird_position: The bird's position on screen, as a tuple in the form (X, Y). """ bx, by = bird_position in_x_range = bx + BIRD_WIDTH > self.x and bx < self.x + PIPE_WIDTH in_y_range = (by < self.top_height_px or by + BIRD_HEIGHT > WIN_HEIGHT - self.bottom_height_px) return in_x_range and in_y_range def load_images(): """Load all images required by the game and return a dict of them. The returned dict has the following keys: background: The game's background image. bird-wingup: An image of the bird with its wing pointing upward. Use this and bird-wingdown to create a flapping bird. bird-wingdown: An image of the bird with its wing pointing downward. Use this and bird-wingup to create a flapping bird. pipe-end: An image of a pipe's end piece (the slightly wider bit). Use this and pipe-body to make pipes. pipe-body: An image of a slice of a pipe's body. Use this and pipe-body to make pipes. """ def load_image(img_file_name): """Return the loaded pygame image with the specified file name. This function looks for images in the game's images folder (./images/). All images are converted before being returned to speed up blitting. Arguments: img_file_name: The file name (including its extension, e.g. '.png') of the required image, without a file path. 
""" file_name = os.path.join('.', 'images', img_file_name) img = pygame.image.load(file_name) # converting all images before use speeds up blitting img.convert() return img return {'background': load_image('background.png'), 'pipe-end': load_image('pipe_end.png'), 'pipe-body': load_image('pipe_body.png'), # images for animating the flapping bird -- animated GIFs are # not supported in pygame 'bird-wingup': load_image('bird_wing_up.png'), 'bird-wingdown': load_image('bird_wing_down.png')} def get_frame_jump_height(jump_step): """Calculate how high the bird should jump in a particular frame. This function uses the cosine function to achieve a smooth jump: In the first and last few frames, the bird jumps very little, in the middle of the jump, it jumps a lot. After a completed jump, the bird will have jumped FRAME_BIRD_JUMP_HEIGHT * BIRD_JUMP_STEPS pixels high, thus jumping, on average, FRAME_BIRD_JUMP_HEIGHT pixels every step. Arguments: jump_step: Which frame of the jump this is, where one complete jump consists of BIRD_JUMP_STEPS frames. """ frac_jump_done = jump_step / float(BIRD_JUMP_STEPS) return (1 - math.cos(frac_jump_done * math.pi)) * FRAME_BIRD_JUMP_HEIGHT def random_pipe_pair(pipe_end_img, pipe_body_img): """Return a PipePair with pipes of random height. The returned PipePair's surface will contain one bottom-up pipe and one top-down pipe. The pipes will have a distance of BIRD_HEIGHT*3. Both passed images are assumed to have a size of (PIPE_WIDTH, PIPE_PIECE_HEIGHT). Arguments: pipe_end_img: The image to use to represent a pipe's endpiece. pipe_body_img: The image to use to represent one horizontal slice of a pipe's body. 
""" surface = pygame.Surface((PIPE_WIDTH, WIN_HEIGHT), SRCALPHA) surface.convert() # speeds up blitting surface.fill((0, 0, 0, 0)) max_pipe_body_pieces = int( (WIN_HEIGHT - # fill window from top to bottom 3 * BIRD_HEIGHT - # make room for bird to fit through 3 * PIPE_PIECE_HEIGHT) / # 2 end pieces and 1 body piece for top pipe PIPE_PIECE_HEIGHT # to get number of pipe pieces ) bottom_pipe_pieces = randint(1, max_pipe_body_pieces) top_pipe_pieces = max_pipe_body_pieces - bottom_pipe_pieces # bottom pipe for i in range(1, bottom_pipe_pieces + 1): surface.blit(pipe_body_img, (0, WIN_HEIGHT - i*PIPE_PIECE_HEIGHT)) bottom_pipe_end_y = WIN_HEIGHT - bottom_pipe_pieces*PIPE_PIECE_HEIGHT surface.blit(pipe_end_img, (0, bottom_pipe_end_y - PIPE_PIECE_HEIGHT)) # top pipe for i in range(top_pipe_pieces): surface.blit(pipe_body_img, (0, i * PIPE_PIECE_HEIGHT)) top_pipe_end_y = top_pipe_pieces * PIPE_PIECE_HEIGHT surface.blit(pipe_end_img, (0, top_pipe_end_y)) # compensate for added end pieces top_pipe_pieces += 1 bottom_pipe_pieces += 1 return PipePair(surface, top_pipe_pieces, bottom_pipe_pieces) def main(): """The application's entry point. If someone executes this module (instead of importing it, for example), this function is called. 
""" pygame.init() display_surface = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT)) pygame.display.set_caption('Pygame Flappy Bird') clock = pygame.time.Clock() score_font = pygame.font.SysFont(None, 32, bold=True) # default font # the bird stays in the same x position, so BIRD_X is a constant BIRD_X = 50 bird_y = int(WIN_HEIGHT/2 - BIRD_HEIGHT/2) # center bird on screen images = load_images() # timer for adding new pipes pygame.time.set_timer(EVENT_NEWPIPE, PIPE_ADD_INTERVAL) pipes = [] steps_to_jump = 2 score = 0 done = paused = False while not done: for e in pygame.event.get(): if e.type == QUIT or (e.type == KEYUP and e.key == K_ESCAPE): done = True break elif e.type == KEYUP and e.key in (K_PAUSE, K_p): paused = not paused elif e.type == MOUSEBUTTONUP or (e.type == KEYUP and e.key in (K_UP, K_RETURN, K_SPACE)): steps_to_jump = BIRD_JUMP_STEPS elif e.type == EVENT_NEWPIPE: pp = random_pipe_pair(images['pipe-end'], images['pipe-body']) pipes.append(pp) clock.tick(FPS) if paused: continue # don't draw anything for x in (0, WIN_WIDTH / 2): display_surface.blit(images['background'], (x, 0)) for p in pipes: p.x -= FRAME_ANIMATION_WIDTH if p.x <= -PIPE_WIDTH: # PipePair is off screen pipes.remove(p) else: display_surface.blit(p.surface, (p.x, 0)) # calculate position of jumping bird if steps_to_jump > 0: bird_y -= get_frame_jump_height(BIRD_JUMP_STEPS - steps_to_jump) steps_to_jump -= 1 else: bird_y += FRAME_BIRD_DROP_HEIGHT # because pygame doesn't support animated GIFs, we have to # animate the flapping bird ourselves if pygame.time.get_ticks() % 500 >= 250: display_surface.blit(images['bird-wingup'], (BIRD_X, bird_y)) else: display_surface.blit(images['bird-wingdown'], (BIRD_X, bird_y)) # update and display score for p in pipes: if p.x + PIPE_WIDTH < BIRD_X and not p.score_counted: score += 1 p.score_counted = True score_surface = score_font.render(str(score), True, (255, 255, 255)) score_x = WIN_WIDTH/2 - score_surface.get_width()/2 
display_surface.blit(score_surface, (score_x, PIPE_PIECE_HEIGHT)) pygame.display.update() # check for collisions pipe_collisions = [p.is_bird_collision((BIRD_X, bird_y)) for p in pipes] if (0 >= bird_y or bird_y >= WIN_HEIGHT - BIRD_HEIGHT or True in pipe_collisions): print('You crashed! Score: %i' % score) break pygame.quit() if __name__ == '__main__': # If this module had been imported, __name__ would be 'flappybird'. # It was executed (e.g. by double-clicking the file), so call main. main() Answer: You have docstrings for your functions and classes, which makes your code better than 95% of code submitted to Code Review. The behaviour of the pipes is split into several pieces: (i) the PipePair class; (ii) the motion, drawing, and destruction logic in main; (iii) the scoring logic in main; (iv) the factory function random_pipe_pair. It would make the code easier to understand and maintain if you collected all the pipe logic into methods on the PipePair class. Similarly, the behaviour of the bird is distributed among several places: (i) the local variables bird_y and steps_to_jump in main; (ii) the "calculate position of jumping bird" logic; (iii) the flapping animation logic; (iv) the get_frame_jump_height function. It would make the code easier to understand if you collected all the bird logic into methods on a Bird class. The word "jumping" doesn't seem like a good description of the behaviour of the bird. The name is_bird_collision doesn't make sense as English. In the collision logic you're effectively testing for intersection of rectangular hit boxes. Pygame provides a Rect class with various collide methods that would make your code clearer, and would make it easier to do things like drawing the hit boxes to help with debugging. You store the pipes in a list, but this is inefficient when it comes to removing a pipe: list.remove takes time proportional to the length of the list.
You should use a set, or, since you know that pipes get created on the right and destroyed on the left, a collections.deque. When you test for collisions, you store the collision results in a list and then test to see if True is an element of the list. Instead, you should use the built-in function any: if any(p.collides_with(bird) for p in pipes): (This has the additional advantage of short-circuiting: that is, stopping as soon as a collision has been detected, instead of going on to test the remaining pipes.) You measure time in frames (for example, pipes move leftwards at a particular number of pixels per frame). This has the consequence that you cannot change the framerate without having to change many other parameters. It is more general to measure time in seconds: this makes it possible to vary the framerate. (In this kind of simple game you can get away with measuring in frames, but in more sophisticated games you'll need to be able to vary the framerate and so it's worth practicing the necessary techniques.) In commit 583c3e49 you broke the game by (i) removing the function random_pipe_pair without changing the caller; and (ii) changing the local variable surface to an attribute self.surface in some places but not others. We all make commits in error from time to time, but there are four commits after this one, which suggests that you haven't been testing your code before committing it. This is a bad habit to get into!
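The short-circuiting point is easy to see without pygame (collides below is a hypothetical stand-in for p.collides_with(bird)):

```python
calls = []

def collides(p):
    # Stand-in for p.collides_with(bird); records which pipes were tested.
    calls.append(p)
    return p == "pipe2"

pipes = ["pipe1", "pipe2", "pipe3", "pipe4"]
# any() with a generator stops at the first True, so pipe3/pipe4 are never tested
hit = any(collides(p) for p in pipes)
print(hit, calls)
```

With the original list-comprehension version, every pipe is tested even after a collision has been found.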
{ "domain": "codereview.stackexchange", "id": 9345, "tags": "python, pygame" }
Does a gun exert enough gravity on the bullet it fired to stop it?
Question: My question is set in the following situation: You have a completely empty universe without boundaries. In this universe is a single gun which holds one bullet. The gun fires the bullet and the recoil sends both flying in opposite directions. For simplicity I'll take the inertial frame of reference of the gun. The gun fired the bullet from its center of mass so it does not rotate. We now have a bullet speeding away from the gun. There is no friction. The only thing in this universe to exert gravity is the gun and the bullet. Would, given a large enough amount of time, the bullet fall back to the gun? Or is there a limit to the distance gravity can reach? Answer: Does a gun exert enough gravity on the bullet it fired to stop it? No. Would, given a large enough amount of time, the bullet fall back to the gun? No. Or is there a limit to the distance gravity can reach? No. But the bullet's velocity exceeds escape velocity. See Wikipedia where you can read that escape velocity at a given distance is calculated by the formula $$v_e = \sqrt{\frac{2GM}{r}}$$ Imagine you play this scenario in reverse. You have a bullet and a gun, a zillion light years apart, motionless with respect to another. You watch and wait, and after a gazillion years you notice that they're moving towards one another due to gravity. (To simplify matters we'll say the gun is motionless and the bullet is falling towards the gun). After another bazillion years you've followed the bullet all the way back to the gun, and you notice that they collide at 0.001 m/s. You check your sums and you work out that this is about right, given that if the gun was as massive as the Earth's 5.972 × 10$^{24}$ kg, the bullet would have collided with it at 11.7 km/s. Escape velocity is the final speed of a falling body that starts out at an "infinite" distance. If you launch a projectile from Earth with more than escape velocity, it ain't ever coming back. OK, now let's go back to the original scenario. 
You fire the gun, and the bullet departs at 1000 m/s. When the bullet is a zillion light years away, its speed has reduced to 999.999 m/s. Because the gun's escape velocity is 0.001 m/s. The gun's gravity is never going to be enough to stop that bullet, even if it had all the time in the world and all the tea in China.
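For concreteness, here is a rough number for the gun itself (assuming, say, a 1 kg gun and measuring from 10 cm away; both figures are made up for illustration):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.0         # assumed gun mass, kg
r = 0.10        # assumed distance from the gun's centre of mass, m

v_e = math.sqrt(2 * G * M / r)   # escape velocity at that distance
print(f"escape velocity ~ {v_e:.2e} m/s")
```

That comes out to a few hundredths of a millimetre per second, even smaller than the answer's illustrative 0.001 m/s, and no match for a bullet leaving at around 1000 m/s.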
{ "domain": "physics.stackexchange", "id": 24851, "tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, estimation, escape-velocity" }
Implementing a logging system in C++17
Question: I've been programming for what is a probably a little bit and made a very simple logging system for my personal projects. While it works out fairly well so far, I would like advice on how to make it perform better and stuff I can do to make it more usable in general. Main questions include: Is my code thread safe? If not, how do I make it thread safe? Is there a way to make the interface a bit cleaner for use? Right now #define feels a bit hacky. Is there a way to avoid the use of a global unique_ptr? Should I just make it a header only library? Is there any benefits in doing so? .hpp: #pragma once /* Use it by including this header in every file required, and in your main function start a new log. Logger::startLog("Log.txt"); Use the various error levels by naming them and simply passing the info and what you want to output. Logger::log(ERROR, "Something went wrong."); */ // For the unique pointers. #include <memory> // Filestream. #include <fstream> // String class for names and parameters passed around. #include <string> #define FATAL Logger::Level::Fatal #define ERROR Logger::Level::Error #define WARNING Logger::Level::Warning #define INFO Logger::Level::Info #define DEBUG Logger::Level::Debug namespace Logger { // Severity level enum. enum class Level { Fatal, Error, Warning, Info, Debug }; // Initialize the log. void startLog(const std::string& filepath); // Log a message. void log(Level s, const std::string& msg); // Logging class. class Log { public: Log(const std::string& filepath); void addLog(Level s, const std::string& msg); ~Log(); private: // File for logging. std::ofstream m_logfile; std::string levels[5] = {"Fatal", "Error", "Warning", "Info", "Debug"}; }; } .cpp: #include "Log.hpp" namespace Logger { // Global Logging Object. std::unique_ptr<Log> g_log; // Initalize our logging object. 
void startLog(const std::string& filepath) { g_log = std::make_unique<Log>(filepath); Logger::log(Level::Info, "Started logging system."); } // Method which logs. void log(Level s, const std::string& msg) { g_log->addLog(s, msg); } // Create our global logging object. Log::Log(const std::string& filepath) : m_logfile{} { m_logfile.open(filepath); } // Add a message to our log. void Log::addLog(Level s, const std::string& msg) { if (m_logfile.is_open()) { m_logfile << levels[static_cast<int>(s)] << ": " << msg << std::endl; } } Log::~Log() { addLog(Level::Info, "Stopped logging system."); m_logfile.close(); } } Answer: Is my code thread safe? If not, how do I make it thread safe? No, of course it's not thread-safe. You don't do anything to make it thread-safe. A more nuanced answer would be: It's thread-safe as long as you don't use it in an unsafe way. For example, calling Logger::log(FATAL, "hello world") from two different threads concurrently would of course be unsafe. But if your program has only one thread... :) If you want to allow calling Logger::log from two threads concurrently, you'll have to do something to eliminate the data race on m_logfile which is caused by the two threads' both calling m_logfile << levels[static_cast<int>(s)] at the same time. For example, you could throw a mutex lock around addLog. Is there a way to make the interface a bit cleaner for use? Right now #define feels a bit hacky. The only place you use #define is in #define FATAL Logger::Level::Fatal and so on. (By the way, technically #define ERROR ... triggers undefined behavior, because all macros of the form EXXXX are reserved for use by POSIX error codes.) My question is, if you wanted these values to be referred to as FATAL, ERROR, etc., why didn't you just declare them that way? inline constexpr int FATAL = 0; inline constexpr int ERROR = 1; inline constexpr int WARNING = 2; // ... Or, probably better: namespace Logger { enum Level { FATAL, ERROR, WARNING, // ... 
}; } Making this an enum (rather than an enum class) allows your user to refer to the enumerators without needing to redundantly name the enum type: just Logger::FATAL, Logger::ERROR, et cetera. Personally, I would consider writing convenience functions to eliminate the boilerplate: namespace Logger { void log_fatal(const std::string& msg) { log(FATAL, msg); } void log_error(const std::string& msg) { log(ERROR, msg); } // ... } By the way, I think your numbering scheme is backwards. "Level" represents the severity of the message, right? Some messages have higher severity than others? So how would I test whether one message had a higher severity than another? Well, I think I'd write: if (one_severity > another_severity) ... But with the values you gave your enumeration, this is actually going to be completely backwards! And so my code for testing severity levels is going to have a bug (or else I'll catch the bug, but then have to write unintuitive code that uses < to mean "greater than"). So, I recommend switching the values around. Is there a way to avoid the use of a global unique_ptr? Sure; declare it static! And to make it really non-global, stick it in a function. It'll still have static lifetime, though. There's not much getting around that. inline Log *get_glog() { static std::unique_ptr<Log> glog = std::make_unique<Log>(); return glog.get(); } void startLog(const std::string& filepath) { Log *glog = get_glog(); glog->set_filepath_and_open(filepath); } Should I just make it a header only library? Is there any benefits in doing so? The big benefit of a header-only library is that it's super easy to incorporate into another project — the user just drops the header file into his include/ directory and he's good to go. Single-header libraries are particularly nice because they can easily be dropped into Godbolt or Wandbox. 
In your case, the tradeoff is that presumably this header is going to get included all over the place (because logging is ubiquitous), and so the bigger you make it, the more work you're forcing the compiler to do in every translation unit. Since you said "C++17", consider rewriting all your void foo(const std::string& s) signatures into void foo(std::string_view sv) signatures. (Yes, pass string_view by value.)
{ "domain": "codereview.stackexchange", "id": 33631, "tags": "c++, logging, c++17" }
Using model's prediction score as movement quality evaluator
Question: Let's take the task of evaluating very short dance movements (phrases) using sensor data (accelerometer and gyro from an iPhone device) as an example. If the model's confidence is 100% on a particular dance phrase, it does not necessarily follow that the user performed this movement phrase perfectly. Given this task that consists of very short movements (1-2sec), given that a very high-quality dataset (sensor data) is at our disposal, and given that the model has very high accuracy in classifying these movement phrases (actions), would it be fair to assume that this action classifier can also serve as a movement evaluator? For example, we can set a threshold of 50% and evaluate the movements based on the model's confidence, i.e. if the model is 40% confident that this movement (we know the ground-truth beforehand) is X we say that the user didn't perform the movement correctly, but if the model has a 90% confidence we say that the movement was performed correctly. In other words, we give feedback to the user about his performance based on the model's confidence. Or does it still not matter, and we can't simply draw this conclusion (robust action classifier = potential action evaluator)? Alternatively, how much sense would it (theoretically) make if I fed certain qualitative characteristics of the data, such as the 25th, 50th, and 75th percentile (certain spikes at these points make up for the quality of my kind of data) as well as the mean and S.D. for each sensor, as features to an attention model, reasoning that, since I feed these as input features to the model, the classifier's prediction might now have been slightly nudged toward an evaluator's prediction? Answer: Tested it out in practice for my case. Turns out the assumption didn't hold.
{ "domain": "datascience.stackexchange", "id": 10261, "tags": "deep-learning, classification, model-evaluations" }
Will ideal gas law apply to plasma?
Question: I have read that plasma is a state of matter that resembles gas but it consists of ions and electrons coexisting. So my question is: if plasma is just ionized gas, will the ideal gas law apply to it? Answer: Electrically charged particles interact via their fields and so there is, in general, long-range interaction throughout the gas. The electromagnetic interactions between particles of the gas/plasma can give rise to effects which are significantly different from those in a neutral gas, such as e.g. waves. So to what extent the ideal gas law can be considered to "hold" for a plasma will depend on the parameters of the system, temperature, pressure, etc., but foremost on the ionization degree of the gas/plasma. It's an involved issue, as this quantity will depend on all the other parameters. One commonly cited relation for certain parameter ranges is the Saha equation, which relates temperature and particle density - which are both part of $PV=k_B T\cdot N$ too. Microscopic considerations in such a "chemical system", where the constituents can be ionized and thereby change their properties, lead you to the observation that the value of the charge density depends on the surroundings. So e.g. the Poisson equation takes a nonlinear form $\Delta\Phi=\rho(\Phi)$. It's then also related to newish system parameters like the Debye length, which characterize the overall behaviour you ask for. I'm sure there are Debye length-temperature ranges where it's perfectly reasonable to apply a gas law, just watch out which part of the system makes up charged particles or neutral ones. E.g. I think in space, there are a whole lot of charged particles, but people work with ideal gas laws.
A general rigorous classical look at it will lead you to $PV=k_B T\cdot \ln(\mathcal Z)$, where the partition function contains the Hamiltonian of the system, which includes the potentials $\Phi$ = energy expressions involving multiple variable-integrals over statistically weighted interaction potentials, see this link.
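Since the Saha equation is the quantitative handle here, a minimal numerical sketch may help (pure hydrogen, single ionization only, statistical-weight factor taken as 1, SI constants; an illustration rather than a production plasma model):

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23              # Boltzmann constant, J/K
M_E = 9.1093837015e-31          # electron mass, kg
H_PLANCK = 6.62607015e-34       # Planck constant, J s
CHI_H = 13.6 * 1.602176634e-19  # hydrogen ionization energy, J

def saha_ionization_fraction(T, n_total):
    """Ionization fraction x of a pure hydrogen gas at temperature T [K]
    and total number density n_total [m^-3], from the Saha relation
    x^2 / (1 - x) = S / n_total, with the statistical-weight factor ~ 1."""
    S = (2.0 * math.pi * M_E * K_B * T / H_PLANCK**2) ** 1.5 \
        * math.exp(-CHI_H / (K_B * T))
    A = S / n_total
    # Positive root of x^2 + A*x - A = 0
    return (-A + math.sqrt(A * A + 4.0 * A)) / 2.0
```

At fixed density the fraction rises steeply with temperature, which is the sense in which "how ideal" the gas behaves depends on where you sit in parameter space.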
{ "domain": "physics.stackexchange", "id": 5261, "tags": "plasma-physics, ideal-gas" }
Interpreting phonon dispersion relations
Question: I have been working with phonon dispersion relations for a while now on the topic of metamaterials (phononic band gaps). However, I still do not feel that I have fully grasped how to interpret these dispersion relation diagrams. The frequency $\omega$ is plotted against the wave vector $k$, but how do I actually read it? Do I search for a frequency and see which modes are "(co)existing" at that frequency? Or do I pick a wave vector (a direction) and see which frequencies are allowed for these values of $k$? I can probably read it both ways, but where is cause and effect exactly? Here's what I know: Let's assume a 2D case with a simple Brillouin Zone $\Gamma$-X-Y-$\Gamma$. The sections of the dispersion relation correspond to values of $k$, where $\Gamma$ denotes the point where $k$ is very small and the wavelength $\lambda$ is very large. Traveling along the x-axis is basically like traversing the edges of the Brillouin Zone, covering all possible directions of the wave vector. Suppose a dispersion branch for $\Gamma$-X has two possible frequencies. What is the "real world meaning" of that? Do both these modes exist at a certain excitation frequency? Now assume there are two different branches that occur at the same frequency inside $\Gamma$-X. Does that make it any different from case 1, where the same branch has one frequency twice? Answer: I think the short version is that dispersion relations only tell a small part of the story of a phonon. They say nothing about the motion of the corresponding wave; all they tell you is how the oscillatory frequency is related to the wavevector. As for how to read the curves: you read them just like any function. There is no inherent cause and effect, just as there is no inherent cause and effect for $y = x^2$. $x=2$ does not cause $y=4$ any more than $y=4$ causes $x=2$.
For phonons, an oscillatory source with a given frequency would cause phonon(s) with specific wavelengths, and an excitation with a given wavelength would cause phonon(s) with specific frequencies. The cause depends on what you do --- not on a curve. Let's start with a simpler system: an infinite, isotropic, continuum solid (e.g. jello). This system supports two kinds of waves: longitudinal (also known as pressure or p-waves) and transverse (also known as shear or s-waves). The two types of waves will, in general, have different velocities --- say $c_l$ and $c_t$, respectively. The dispersion relation for "phonons" in this system will then have two branches: $\omega_l = c_l \left|\vec{k}\right|$ and $\omega_t = c_t \left|\vec{k}\right|$. The real world meaning is that, at any given $\vec{k}$, both longitudinal and transverse waves can exist. They will, however, look quite different (see below --- taken from Wikipedia). In this system, you can find longitudinal and transverse waves to match any frequency. When you're working with atoms (or metamaterials), things become more complicated, but the long-wavelength limit of acoustic phonons converges to elastic waves in a continuum system. To answer your questions: I interpret this to mean that you have a single branch with $\omega_1\left(a\hat{x}\right) = \omega_1\left(b\hat{x}\right)$ where $a \neq b$. This means that there are two excitations with different wavelengths that have the same oscillatory frequency. If you were to visualize their motion, it would be easy to distinguish them because they would have different wavelengths. (They might also look quite different for other reasons too.) I interpret this to mean that you have two branches with $\omega_1\left(a\hat{x}\right) = \omega_2\left(b\hat{x}\right)$ where $a \neq b$. You can achieve this with the continuum material we started with; simply choose $c_l a = c_t b$.
All we are now saying is that longitudinal elastic waves can have the same frequency as transverse elastic waves. Again, if you visualize the motion, you'll very easily see that the waves are different even though they oscillate with the same frequency.
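The continuum construction at the end ($c_l a = c_t b$) is easy to check numerically (the sound speeds below are illustrative values, not data for any particular material): pick one frequency, invert each linear branch for $|k|$, and you get two distinct wavevectors sharing that frequency.

```python
# Two linear acoustic branches of an isotropic continuum:
#   omega_l = c_l * |k|   (longitudinal)
#   omega_t = c_t * |k|   (transverse)
c_l = 6000.0   # longitudinal sound speed, m/s (illustrative)
c_t = 3000.0   # transverse sound speed, m/s (illustrative)

def k_on_branch(omega, c):
    """Invert a linear branch omega = c*|k| for the wavevector magnitude."""
    return omega / c

omega = 2.0e9  # one chosen angular frequency, rad/s
k_long = k_on_branch(omega, c_l)
k_trans = k_on_branch(omega, c_t)
# Same frequency on two branches, two different wavelengths:
# exactly the "c_l * a = c_t * b" construction from the answer.
```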
{ "domain": "physics.stackexchange", "id": 60152, "tags": "waves, solid-state-physics, phonons, dispersion" }
Electric current in a circuit with two batteries
Question: I have the following circuit in which both batteries provide the same voltage and the capacitors are uncharged. I need to find the current in each path of the circuit at time $t=0$ when I connect the batteries. I know that at $t=0$ the charge of the capacitors is zero, so $V_c=\frac qC=0$, and when I apply Kirchhoff's laws over the circuit I can ignore them. Then my circuit is But... what's next? Can I make an equivalent resistance? Because I can't think of any. If I want to continue with Kirchhoff's laws... in what direction does the current flow? And how does the potential decrease/grow while passing through each resistance? Answer: Indeed, you must do this using Kirchhoff's laws. The direction of the current is something that you can CHOOSE. Do not worry: if the current was flowing in the opposite direction, you'll get a minus sign when you solve it. In other words, now you have to decide the reference direction for the currents, just like you choose where you put your origin of coordinates. Just draw how currents could flow. Suppose that that is the way they flow. Then solve the circuit. If your assumption was wrong, you will get a minus sign somewhere. That just means that you guessed wrong, but it does not matter, because the numerical value is correct. Only the sign can vary. The sign is telling you if the flow was in that direction or opposite. Actually, what we are doing is a choice of references. "I set that currents will be positive rightwards, and negative leftwards". But this is only a reference. The currents will be unknown until you solve the whole system.
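Since the actual circuit is only shown as an image, here is a sketch of the procedure on a hypothetical two-loop circuit (two equal batteries $V$, loop resistors $R_1, R_2$, shared resistor $R_3$; all component values are made up): choose mesh-current directions, write KVL, and solve the linear system. A negative result simply means the guessed direction was wrong, exactly as the answer says.

```python
# Hypothetical two-loop circuit (the original is only shown as an image):
# each loop has a battery V and a resistor (R1, R2), sharing R3.
# Mesh currents i1, i2 are *chosen* clockwise; a negative answer just
# means the real current flows opposite to the chosen reference.
V = 9.0
R1, R2, R3 = 100.0, 100.0, 50.0

# KVL mesh equations (the second battery drives its loop the other way):
#   (R1+R3)*i1 -     R3*i2 =  V
#      -R3*i1 + (R2+R3)*i2 = -V
a11, a12, b1 = R1 + R3, -R3, V
a21, a22, b2 = -R3, R2 + R3, -V

det = a11 * a22 - a12 * a21          # Cramer's rule for the 2x2 system
i1 = (b1 * a22 - a12 * b2) / det
i2 = (a11 * b2 - b1 * a21) / det
```

With these made-up values, i2 comes out negative: the current in the second mesh actually circulates opposite to the clockwise guess, and the magnitude is still correct.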
{ "domain": "physics.stackexchange", "id": 57780, "tags": "homework-and-exercises, electricity, electric-circuits" }
Flow around a rock in a river: which differential equation?
Question: I'm a canoeist, so I know that when I go with my kayak behind a rock in a river, I feel a current that is opposite to the river current. I'm also a student mathematician, so I would like to see this phenomenon from the equations. But I don't know anything about the equations that govern fluid dynamics. So my problem is: let $W = \mathbb{R} \times [-2, 2]$ be the river and $R = B_1(0)$ the rock (a ball inside the river). Let $M = W \backslash R$ be the river without the rock, i.e. where the water can flow. Let $\phi: \mathbb{R} \times M \to M$ be a function such that $\phi(t, x)$ is the position at time $t$ of the fluid particle that at time $t = 0$ was in position $x$. Let the water flow from left to right. My question is: what differential equation for $\phi$ do I have to solve? And does this equation predict the correct flow behind the rock? Answer: The simplest model that fits is potential flow around a cylinder (or a circle in 2D). This assumes an inviscid, incompressible fluid with no vorticity, which is too simple to model the backflow. The backflow occurs because viscosity produces boundary layer separation. I think the second simplest model possible would be to solve the steady-state Navier-Stokes equations for an incompressible fluid. Then it is just $$(\mathbf{v}\cdot \nabla)\mathbf{v}-\nu\nabla^2\mathbf{v}=-\nabla w$$ $$\nabla\cdot \mathbf{v}=0$$ where $\nu$ is the viscosity and $\nabla w=(\nabla p)/\rho$ is the pressure term (or other forces expressed as hydraulic head). Since you are interested in the 2D case it can also be turned into an equation for stream functions that leaves out the pressure term. The backflow starts at a Reynolds number of about 40. While there are no doubt some analytic solutions for this geometry and low velocities, as you approach higher flow rates/Reynolds numbers they become unstable and you have to rely on numerical models.
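The potential-flow model mentioned first can be written down in closed form, which makes a nice contrast with the Navier-Stokes route: the complex potential $w(z) = U(z + R^2/z)$ solves the inviscid, irrotational problem around the disk $B_1(0)$ exactly, but its velocity field contains no recirculation behind the rock. A small sketch of this standard result ($U$ and $R$ are just the free-stream speed and rock radius):

```python
# Potential flow past a cylinder of radius R in a uniform stream U.
# Complex potential:  w(z) = U * (z + R**2 / z)
# Velocity:           u - i*v = dw/dz = U * (1 - R**2 / z**2)
# Inviscid and irrotational, so no backflow eddy can appear.
U = 1.0
R = 1.0

def velocity(z):
    """Return (u, v) at complex position z (outside the cylinder)."""
    w_prime = U * (1.0 - R**2 / z**2)
    return w_prime.real, -w_prime.imag

# Stagnation points sit at the front and back of the rock (z = -R, +R);
# the speed at the top of the cylinder (z = i*R) is 2*U.
```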
{ "domain": "physics.stackexchange", "id": 57144, "tags": "fluid-dynamics, differential-equations" }
Spreadsheet function that gives the number of Google indexed pages
Question: I've developed this spreadsheet in order to scrape a website's number of indexed pages through Google and Google Spreadsheets. I'm not a developer, so how can I improve this code in order to have less code, to use less resources, or to go faster? I've explained everything here. //------------------------------------------- // scrape a website's number of indexed pages through Google and Google Spreadsheets. //------------------------------------------- function indexedpages(myUrl) { var request = "http://www.google.com/search?&hl=en&q=site:http://" + unescape(myUrl); // Google Request for indexed pages var sourcecode = UrlFetchApp.fetch(request).getContentText(); // scrape the page content Utilities.sleep(1000); // 1000ms pause to prevent spam var codebefore = '<div id="resultStats">About '; // the code before the number we want var codeafter = ' results<'; // the code after the number we want var theresult = sourcecode.substring(sourcecode.indexOf(codebefore)+codebefore.length, sourcecode.indexOf(codeafter)); // scrape the number we want var theresult = theresult.replace(",", ""); // delete the "," if (isNaN(theresult)) // if the result is not a number { var codebefore = '<div id="resultStats">'; // the code before the number we want var codeafter = ' result<'; // the code after the number we want var theresult = sourcecode.substring(sourcecode.indexOf(codebefore)+codebefore.length, sourcecode.indexOf(codeafter)); // scrape the number we want if (isNaN(theresult)) // if the result is not a number { var codebefore = '<div id="resultStats">'; // the code before the number we want var codeafter = ' results<'; // the code after the number we want var theresult = sourcecode.substring(sourcecode.indexOf(codebefore)+codebefore.length, sourcecode.indexOf(codeafter)); // scrape the number we want if (isNaN(theresult)) // if the result is not a number (0 result) { return 0; // returns 0 } else { return theresult; // returns the number we want } } else { return 
theresult; // returns the number we want } } else { return theresult; // returns the number we want } } Answer: Use a regular expression to look for the relevant text in the page. It makes more sense to return a number instead of a string. In the parameter name, "my…" is pointless. Just call the parameter site. Google now encourages the use of HTTPS everywhere, so it's probably a good idea to scrape Google using HTTPS as well. Also, Google doesn't need the site:... query argument to start with "http://". Finally, the correct way to build a query string is to use encodeURIComponent(). function indexedpages(site) { // Google request for relevant pages var request = "https://www.google.com/search?&hl=en&q=site:" + encodeURIComponent(site); // scrape the page content var sourcecode = UrlFetchApp.fetch(request).getContentText(); // 1000ms pause for rate limiting Utilities.sleep(1000); var match = /<div id="resultStats">(?:About )?([0-9,]*) results?</.exec(sourcecode); return (match) ? parseInt(match[1].replace(/,/g, ''), 10) : 0; }
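The scraping regex can be exercised offline against hand-written snippets (the sample strings below are mine, not live Google HTML, whose markup may well have changed since this was written):

```javascript
// The three result formats the original code handled with nested ifs,
// collapsed into one pattern: optional "About ", digits with commas,
// singular or plural "result(s)".
const pattern = /<div id="resultStats">(?:About )?([0-9,]*) results?</;

function countFrom(html) {
  const match = pattern.exec(html);
  // Strip thousands separators before parsing the number.
  return match ? parseInt(match[1].replace(/,/g, ''), 10) : 0;
}

const about = '<div id="resultStats">About 1,230 results<nobr>';
const exact = '<div id="resultStats">1 result<nobr>';
const none = '<div id="no-results">nothing here</div>';
```

One pattern replaces three string-search branches, and the no-match case falls through to 0 just as the nested ifs did.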
{ "domain": "codereview.stackexchange", "id": 4801, "tags": "javascript, web-scraping, google-apps-script, google-sheets" }
NUMAH LITTERS OV KITTEHS ON TEH NETZ
Question: It's not fair that 1% of the users hold 75% of the lolcode questions. #OccupyMatsMug ~ user2296177 I agree. Without further ado, here's the LOLCODE version of 99 bottles of beer on the wall: OBTW PRINT TEH LOLCODE VERSHUN OV N BOTTLEZ OV BER ON TEH WALL. TLDR HAI 1.3 HOW IZ I COUNTDOWN YR NUMAH I HAS A NAUW ITZ A NUMBR I HAS A LEZZ ITZ A NUMBR IM IN YR LOOPZ NERFIN YR COWNTR TIL BOTH SAEM COWNTR AN DIFF OF 1 AN NUMAH NAUW R SUM OF NUMAH AN COWNTR LEZZ R DIFF OF NAUW AN 1 VISIBLE SMOOSH NAUW AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE SMOOSH NAUW AN " LITTERS OV KITTEHS" MKAY VISIBLE "WAN FALLS DOWN AN BREAKZ PAH" VISIBLE SMOOSH LEZZ AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE "" IM OUTTA YR LOOPZ VISIBLE "1 LITTER OV KITTEHS ON TEH NETZ" VISIBLE "1 LITTER OV KITTEHS" VISIBLE "WAN FALLS DOWN AN BREAKZ PAH" VISIBLE "NO MOAR LITTERs OV KITTEHS ON TEH NETZ" IF U SAY SO I IZ COUNTDOWN YR 99 MKAY KTHXBYE Disclaimer: No actual kittens were harmed while writing this program. The number of iterations is held in the variable NUMAH. I know NUMBR is a more fitting name, but that's already claimed as a type. The above is compiled using the latest lci using the 1.3 specification (thanks to Pimgd for the link). Since the 1.3 specification doesn't contain every feature of the language, see the 1.2 specification as well. I don't think LOLCODE has templates or the likes, so getting all the output on screen is a bit tedious. Output: 99 LITTERS OV KITTEHS ON TEH NETZ 99 LITTERS OV KITTEHS WAN FALLS DOWN AN BREAKZ PAH 98 LITTERS OV KITTEHS ON TEH NETZ etc. 1 LITTER OV KITTEHS ON TEH NETZ 1 LITTER OV KITTEHS WAN FALLS DOWN AN BREAKZ PAH NO MOAR LITTERs OV KITTEHS ON TEH NETZ Is this idiomatic LOLCODE? 
Answer: Variable capitalization IT IZ PRETTY GUD LOLCODE HOWEVEZ ME THINKS THAT TEH LOLCODEZ R EZYR 2 READ IF TEH VARZ R lowercase ZIS BECUZ ALL CAPS IS HARD TOO READ (all spelling "mistakes" made by my cat, who assisted me in that part of this review) More seriously, it is an issue that there are no good LOLCODE syntax highlighters. To be "stylish" and keep everything uppercase is a choice you can make, but personally I prefer to use lowercase or camelCase variable names whilst keeping the language constructs all caps. Similar to SQL, this allows you to see what parts of the code are variables, and what parts are language constructs. Compare: I HAS A NAUW ITZ A NUMBR I HAS A LEZZ ITZ A NUMBR IM IN YR LOOPZ NERFIN YR COWNTR TIL BOTH SAEM COWNTR AN DIFF OF 1 AN NUMAH NAUW R SUM OF NUMAH AN COWNTR LEZZ R DIFF OF NAUW AN 1 VISIBLE SMOOSH NAUW AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE SMOOSH NAUW AN " LITTERS OV KITTEHS" MKAY VISIBLE "WAN FALLS DOWN AN BREAKZ PAH" VISIBLE SMOOSH LEZZ AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE "" IM OUTTA YR LOOPZ with... I HAS A nauw ITZ A NUMBR I HAS A lezz ITZ A NUMBR IM IN YR LOOPZ NERFIN YR cowntr TIL BOTH SAEM cowntr AN DIFF OF 1 AN numah nauw R SUM OF numah AN cowntr lezz R DIFF OF nauw AN 1 VISIBLE SMOOSH nauw AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE SMOOSH nauw AN " LITTERS OV KITTEHS" MKAY VISIBLE "WAN FALLS DOWN AN BREAKZ PAH" VISIBLE SMOOSH lezz AN " LITTERS OV KITTEHS ON TEH NETZ" MKAY VISIBLE "" IM OUTTA YR LOOPZ Variable naming Regarding NUMAH, have you considered using NUMBAH? Similar meaning, but uses a more common mispronunciation. Working around the spec IM IN YR LOOPZ NERFIN YR COWNTR TIL BOTH SAEM COWNTR AN DIFF OF 1 AN NUMAH NAUW R SUM OF NUMAH AN COWNTR This construct is non-obvious. You had to work around the spec to do something that you wanted to do, and I think it warrants explanation. 
Add a comment via BTW: IM IN YR LOOPZ NERFIN YR COWNTR TIL BOTH SAEM COWNTR AN DIFF OF 1 AN NUMAH NAUW R SUM OF NUMAH AN COWNTR BTW workaround loop counters starting at 0 Correctness in corner cases Your function doesn't check if the value passed in is greater than 0. This means that it falsely forces a KITTEH to break their paw when it wasn't necessary. Add a guard clause at the top of the function; you can do an early return via GTFO. DIFFRINT numbah AN BIGGR OF numbah AN 0 O RLY? YA RLY GTFO OIC Program definition OBTW PRINT TEH LOLCODE VERSHUN OV N BOTTLEZ OV BER ON TEH WALL. TLDR HAI 1.3 Aside from the weird phrasing (does this code print the lolcode version of/and bottles of beer on the wall?), there's something else wrong here: Your comment is before the HAI 1.3. So any interpreter loading your file might not know what version you are using. Put your version number at the top. Possible alternative dialects VISIBLE "NO MOAR LITTERs OV KITTEHS ON TEH NETZ" Did you make a typo here, is this an alternative kitty dialect that uses lower case 's' for certain plurals? I'm not sure. Lastly, I think this version of 99 LITTERS OV KITTEHS ON TEH NETZ is not authentic, as anyone knows that when KITTEHS fall, they'll be alright. This version is sad, as all the KITTEHS just break their paws. Proper LOLCODE uses cats, it doesn't abuse them.
{ "domain": "codereview.stackexchange", "id": 20980, "tags": "strings, 99-bottles-of-beer, lolcode" }
Redshift near Kerr Black Hole
Question: The question is related to Sean Carroll's Spacetime and Geometry ex 6.6. Consider a Kerr black hole with an accretion disk of negligible mass. Particles in the disk follow geodesics. Some iron in the disk emits photons at constant frequency $\nu_0$ in the rest frame. Then what is the observed frequency from the edge of the disk at the edges and center of the disk? Consider both cases where the disk and the black hole rotate in the same and opposite directions. Restrict everything to the equatorial plane so $\theta=\pi/2$. By the edge of the disk: imagine we are looking at the accretion disk edge-on; then we expect the source on one edge to be approaching and the other receding. I'll use -+++ as the signature. It immediately came to my mind that we can use $\omega=-g_{\mu\nu}\frac{dx^\mu}{d\lambda} U^\nu$, where $x$ is a null geodesic connecting emitter and detector parametrized by $\lambda$, and $U^\nu = \frac{dX^\nu}{d\tau}$, which describes the iron and the observer. With some algebra, we should find the frequency to be $$\omega = EU^t - LU^\varphi + g_{rr}\frac{dr}{d\lambda}U^r$$ where $E$ and $L$ are the energy and angular momentum obtained via Killing vectors. The redshift is, therefore, $$ 1 + z = \frac{\omega_e}{\omega_d} = \frac{EU_e^t - LU_e^\varphi + g_{rr}\frac{dr}{d\lambda}U_e^r}{EU_d^t - LU_d^\varphi + g_{rr}\frac{dr}{d\lambda}U_d^r} $$ where I use e and d to denote emitter and detector respectively. This is where I got stuck. We may consider the iron atoms to move in a circular orbit so that we ignore the $U_e^r$ term. In addition, for the observer to be stationary far away, we can also take $U_d^r = U_d^\varphi = 0$. $$ 1 + z = \frac{EU_e^t - LU_e^\varphi}{EU_d^t} $$ This formula seems to be dependent only on the radius $r$. As an observer, we should expect the photon to have the same redshift at a fixed radius $r$. This conclusion is not making much sense. Wouldn't we expect different redshifts due to the motion of the source on opposite edges of the disk?
What am I missing here? Many thanks for any help in advance! Answer: The thing you are missing is that the null geodesic connecting the emitter to the detector (and therefore $E$ and $L$, or more specifically the ratio $b=L/E$) will depend on the location of the emitter along its orbit. When the emitter is approaching, the value of $b$ will be different than when it is receding. Consequently, the formula you find doesn't just depend on $r$.
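To make the answer's point explicit in the question's own notation (a sketch, not a full solution of ex. 6.6): for a circular emitter write $U_e^\varphi = \Omega\, U_e^t$ with $\Omega$ the orbital angular velocity, so $$\omega_e = E U_e^t - L U_e^\varphi = E\, U_e^t\left(1 - \Omega b\right), \qquad b \equiv \frac{L}{E}.$$ Here $E$ and $L$ are the photon's conserved quantities, and only the ratio $b$ enters the redshift. Photons that reach the distant observer from the approaching edge of the disk carry a different impact parameter $b$ (and opposite sign of $\Omega b$) than those from the receding edge, so $1+z$ varies around the orbit even at fixed $r$.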
{ "domain": "physics.stackexchange", "id": 98755, "tags": "black-holes, redshift, kerr-metric" }
jQuery drop down
Question: This does the job, and it's pretty much all the functionality I need at the moment. However, I feel like it could be optimized a bit. Namely, is there a way I can do this without the div#boundary? Also, the drop down re-fires if I go back up into the menu, which is not a big deal, but it would be nice to prevent this behavior. Demo $('#press, #contact, #about').bind('mouseenter', function() { var n = $(this).attr("id"); $('#dd_'+n).stop(true, true).slideDown('fast'); $('#menu').children().not('#'+n).bind('mouseenter', function () { $('#dd_'+n).slideUp('fast'); }); $('#boundary').bind('mouseenter', function () { $('#dd_'+n).slideUp('fast'); }); $('#dd_'+n).bind('mouseleave', function () { $('#dd_'+n).slideUp('fast'); }); }); <div id="container"> <div id="header"> <div id="boundary"></div> <div id="menu"> <div id="press"></div> <div id="contact"></div> <div id="about"></div> </div> </div> <div id="dd_press"></div> <div id="dd_contact"></div> <div id="dd_about"></div> </div> Answer: I've come up with two options, but I'm not sure if either is an acceptable replacement for you. Option 1 Overlay the expanding menu on the original menu. This allows simplification of the eventing. The only tricky part is that by moving quickly, you could leave one menu and enter another before your mouse was in the expanding menu. Hence the last line of the "mouseover" function. Also, the original menu basically must be duplicated between the two menus, as you are overlaying it. If you were going to have some sort of effect on the original menu anyway, this could be an OK thing. http://jsfiddle.net/KY2kY/1/ Option 2 Add everything to the header menu, but hide most of it. Expand and contract that element. The biggest downside here is that now your heights must be set in the code, not the CSS. But it is very, very clean. http://jsfiddle.net/KY2kY/2/
{ "domain": "codereview.stackexchange", "id": 1249, "tags": "javascript, jquery" }
Stringy corrections of Einstein's vacuum field equations
Question: From string theory, the vacuum field equations obtain corrections of order $O[\alpha'R]^n$ such that they can be written as $$ R_{\alpha\beta} -\frac{1}{2}g_{\alpha\beta}R + O[\alpha'R] = 0 $$ where $\alpha' = \frac{1}{2\pi T_s}$, when including just the first order term for example. What is the physical interpretation of these corrections, what do they look like more explicitly, and how can they be obtained? Answer: As Witten explains in his NOTICES OF THE AMS article (please see also his more recent lecture), the fully quantum string theory is characterized by two coupling constants (or in the language of deformation quantization: two deformation parameters). The string coupling $g_s$ and the string tension $\alpha^{'}$. In perturbation theory, one gets dependence of the string amplitudes on powers of $g_s$ (or equivalently in $\hbar$) through the genus expansion. The dependence of the amplitudes on $\alpha^{'}$ is obtained once one takes into account that in the presence of background fields, the string Lagrangian is not free, it is described by a sigma model. If we compute the quantum corrections to this sigma model we get terms with more and more derivatives multiplied by more powers of the string tension (as in chiral perturbation theory). When the quantum corrections to the trace of the energy-momentum tensor are calculated, terms depending on powers of $\alpha^{'}$ will also appear here, and the condition of vanishing of the beta function will result in Einstein's equations with correction terms proportional to $\alpha^{'}$. Please see equation 3.7.14 in Polchinski's first volume, where the beta functions are given to the first power of $\alpha^{'}$. Witten explains that for a while, the work on string theory concentrated on finding candidates of $\alpha^{'}$-deformed theories (as conformal field theories), then $\hbar$-quantizing them as in ordinary quantum theory.
But, as Witten explains, after the discovery of the full set of string dualities and the role of membranes, it was realized that in order to fully quantize string theory, the two quantizations or two deformations ($\hbar$, $\alpha^{'}$) must be performed together. This route has profound consequences, for example, it leads to the conclusion that the full quantum string theory should be in the realm of noncommutative geometry, because in the presence of a $B$-field and brane boundary conditions, the position-position commutation relations will obtain $\alpha^{'}$ deformation and become noncommutative. As a consequence, the ordinary uncertainty relation will get $\alpha^{'}$ deformation and turn into a generalized (minimal scale) uncertainty, in which the position uncertainty has a nonvanishing minimum: $\Delta x \geqslant \frac{\hbar}{\Delta p}+ \alpha^{'}\frac{\Delta p}{\hbar}$
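For reference, the first-order graviton beta function the answer points to (Polchinski vol. 1, eq. 3.7.14; quoted here from memory, so check the book for conventions) has the schematic form $$\beta^G_{\mu\nu} = \alpha' R_{\mu\nu} + 2\alpha' \nabla_\mu \nabla_\nu \Phi - \frac{\alpha'}{4} H_{\mu\lambda\omega} H_\nu{}^{\lambda\omega} + O(\alpha'^2),$$ with analogous expressions for the $B$-field and the dilaton. With trivial $\Phi$ and $H$, Weyl invariance ($\beta^G_{\mu\nu}=0$) reduces to $R_{\mu\nu}=0$, the vacuum Einstein equations, and higher sigma-model loops supply the curvature-squared and higher corrections denoted $O[\alpha' R]$ in the question.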
{ "domain": "physics.stackexchange", "id": 7968, "tags": "general-relativity, string-theory" }
Async db repository
Question: This is a simple repository where I want to save all the invoices of some queried users. I wanted this module to only expose a simple saveInvoices method and return a Promise. As a DB, I use Firebase whose API is async but does not return standard JS promises, so I promisify the db queries. I haven't yet taken every reject and fail scenario into account. Looking forward to reading your feedback. const _ = require('lodash'); const Firebase = require('firebase'); const Config = require('./../../shared/config'); const Repository = (function() { let _conn; const _connect = function() { return new Promise((resolve, reject) => { _conn = new Firebase(Config.FIREBASESERVER); _conn.authWithCustomToken(Config.FIREBASESECRET, () => { resolve(); }); }); }; const _getQueuedUsers = function() { return new Promise((resolve, reject) => { _conn.child('users') .orderByChild('nextInvoiceDate') .startAt(1) .endAt(Date.now()) .once('value', (usersSnap) => { resolve(_.values(usersSnap.val())); }); }); }; const _saveInvoice = function(user) { return new Promise((resolve, reject) => { // ... resolve(); }); }; const saveInvoices = function() { return new Promise((resolve, reject) => { _connect() .then(() => _getQueuedUsers()) .then((users) => { resolve(Promise.all(users.map((u) => _saveInvoice(u)))); }, (rejection) => { console.log(rejection); }); }); }; return { saveInvoices } })(); Repository.saveInvoices .then(() => { console.log('done'); }) .catch((err) => { console.error(err); }); Answer: I see you're using modules. Consider moving Repository to its own module. That way, you can take advantage of encapsulation inside a module file and remove the need for an IIFE. You can then export only the APIs you want exposed. I don't even use _ anymore to mark private functions, as export is a good enough indicator that it is exposed outside the module.
// repository.js const _ = require('lodash'); const Firebase = require('firebase'); const Config = require('./../../shared/config'); let conn = null; const connect = function() { return new Promise((resolve, reject) => { conn = new Firebase(Config.FIREBASESERVER); conn.authWithCustomToken(Config.FIREBASESECRET, () => resolve()); }); }; const getQueuedUsers = function() { return new Promise((resolve, reject) => { conn.child('users') .orderByChild('nextInvoiceDate') .startAt(1) .endAt(Date.now()) .once('value', usersSnap => resolve(_.values(usersSnap.val()))); }); }; const saveInvoice = function(user) { return new Promise((resolve, reject) => { // ... resolve(); }); }; export function saveInvoices() { return new Promise((resolve, reject) => { connect() .then(() => getQueuedUsers()) .then( users => resolve(Promise.all(users.map(u => saveInvoice(u)))), rejection => console.log(rejection) ); }); }; // your-code.js import * as Repository from 'repository.js'; Repository.saveInvoices() .then(() => console.log('done')) .catch(err => console.error(err)); A few other things: the () around an arrow function's arguments is optional when there is exactly one argument (() is required for none or multiple). {} is also optional if you just do one-liner arrow function bodies. Also, your configs are in a file? I suggest you use environment variables instead, especially when you're dealing with API keys. Config files are easily accidentally checked into the repo and we all know Git doesn't forget anything that's checked in. You wouldn't want anyone using an API under your name for malicious purposes. Other than that, the code looks ok!
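One more structural point worth sketching: both versions wrap an already-promise-returning chain in `new Promise(...)` (the explicit-construction antipattern). Since `connect()` already returns a promise, `saveInvoices` can simply return the chain. The stubs below stand in for the Firebase-backed steps so the flow runs standalone:

```javascript
// Sketch of saveInvoices returning the chain directly instead of
// wrapping it in `new Promise(...)`. The three steps are stubbed here;
// real code would keep the Firebase calls behind the same names.
const connect = () => Promise.resolve();
const getQueuedUsers = () => Promise.resolve([{ id: 1 }, { id: 2 }]);
const saveInvoice = (user) => Promise.resolve(user.id);

function saveInvoices() {
  return connect()
    .then(getQueuedUsers)
    .then((users) => Promise.all(users.map(saveInvoice)));
}
```

Dropping the inline rejection `console.log` also lets errors propagate to the caller's `.catch`, matching how the repository is consumed at the bottom of the question.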
{ "domain": "codereview.stackexchange", "id": 17004, "tags": "javascript, promise, firebase" }
Aho-Corasick algorithm to scan through a list of strings
Question: This is a follow-up to my previous question about finding min and max values of an iterable. The Aho-Corasick algorithm was suggested to solve the problem. Below is my solution using the ahocorapy library. Short recap of the problem: You are given 2 arrays (genes and health), one of which holds the 'gene' names, and the other the 'gene' weights (aka health). You are then given a bunch of strings, each containing values m and n, which denote the start and end of the slice to be applied to the genes and health arrays, and the 'gene'-string, for which we need to determine healthiness. Then we need to return health-values for the most and the least healthy strings. I think there might be something off with the code, but not sure what. It works quite fine for small testcases, giving more or less the same timing as previous versions of the solution showed, but when it comes to large testcases, my PC basically hangs. Example of a small testcase: genes = ['a', 'b', 'c', 'aa', 'd', 'b'] health = [1, 2, 3, 4, 5, 6] gene1 = "1 5 caaab" (result = 19 = max) gene2 = "0 4 xyz" (result = 0 = min) gene3 = "2 4 bcdybc" (result = 11) Large testcase (2 lists 100K elements each; testcase 41K+ elements): txt in my dropbox (2.80 MB) (too large for pastebin) So, I have 2 questions: 1) What is wrong with my code, and how can I improve its performance? 2) How do I apply Aho-Corasick without turning to any non-standard library (because, most likely, it cannot be installed on the HackerRank server)? def geneshealth(genes, health, testcase): from ahocorapy.keywordtree import KeywordTree import math min_weight = math.inf max_weight = -math.inf for case in testcase: #construct the keyword tree from appropriately sliced "genes" list kwtree = KeywordTree(case_insensitive=True) fl, ceil, g = case.split() for i in genes[int(fl):int(ceil)+1]: kwtree.add(i) kwtree.finalize() #search the testcase list for matches result = list(kwtree.search_all(g)) hea = 0 for gn, _ in result: for idx, val in enumerate(genes): 
if val == gn: hea += health[idx] if hea < min_weight: min_weight = hea if hea > max_weight: max_weight = hea return(min_weight, max_weight) Answer: This code is slow because: It builds a new keyword tree for each test case. Just build it once, using all the genes. It builds a list of all the matching keywords. KeywordTree.search_all() is a generator, just loop over it directly. And it loops over the list of genes to find the gene index, so that it can find the health. Instead, build a dict with the genes as keys and an (index, health) tuple for the value. Something like this (untested): import math from collections import defaultdict from ahocorapy.keywordtree import KeywordTree def geneshealth(genes, health, testcases): # build the kwtree using all the genes kwtree = KeywordTree(case_insensitive=True) for gene in genes: kwtree.add(gene) kwtree.finalize() # build a dict that maps a gene to a list of (index, health) tuples index_and_health = defaultdict(list) for gene, data in zip(genes, enumerate(health)): index_and_health[gene].append(data) min_dna_health = math.inf max_dna_health = -math.inf for case in testcases: start, end, dna = case.split() start = int(start) end = int(end) dna_health = 0 # search the dna for any genes in the kwtree # note: we don't care where the gene is in the dna for gene, _ in kwtree.search_all(dna): for gene_index, gene_health in index_and_health[gene]: # only genes that are within the testcase limits # contribute dna_health if start <= gene_index <= end: dna_health += gene_health # keep the min/max weight if dna_health < min_dna_health: min_dna_health = dna_health if dna_health > max_dna_health: max_dna_health = dna_health return(min_dna_health, max_dna_health)
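To address question 2 (no third-party libraries on the HackerRank server), the automaton itself is small enough to hand-roll with just the standard library. Below is a sketch combining a minimal Aho-Corasick implementation with the single-tree, index-filtering strategy from the answer; the function names (`build_automaton`, `find_matches`, `query`) are made up for illustration and are not the ahocorapy API:

```python
from collections import deque

def build_automaton(patterns):
    """Trie + failure links + output lists (indices into patterns)."""
    goto, fail, out = [{}], [0], [[]]
    for idx, pat in enumerate(patterns):
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({})
                fail.append(0)
                out.append([])
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].append(idx)
    queue = deque(goto[0].values())          # BFS from the root's children
    while queue:
        node = queue.popleft()
        for ch, nxt in goto[node].items():
            queue.append(nxt)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt].extend(out[fail[nxt]])  # inherit matches ending at the suffix
    return goto, fail, out

def find_matches(text, goto, fail, out):
    """Yield one pattern index per occurrence (duplicates included)."""
    node = 0
    for ch in text:
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        yield from out[node]

genes = ['a', 'b', 'c', 'aa', 'd', 'b']
health = [1, 2, 3, 4, 5, 6]
tables = build_automaton(genes)

def query(case):
    start, end, dna = case.split()
    lo, hi = int(start), int(end)
    return sum(health[i] for i in find_matches(dna, *tables) if lo <= i <= hi)

print(query('1 5 caaab'), query('0 4 xyz'), query('2 4 bcdybc'))  # 19 0 11
```

On the question's own small testcase this reproduces the expected 19, 0, and 11, and because the tree is built once over all genes (with the slice applied as an index filter), each query is linear in the dna length plus the number of matches.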
{ "domain": "codereview.stackexchange", "id": 38518, "tags": "python, beginner, python-3.x, programming-challenge" }
images from training set are different from images of test set
Question: I am doing image classification with a CNN and I have a training set and a test set with different distributions. To try to overcome this problem I am thinking about doing a standardization using ImageDataGenerator, but I am encountering some problems. Here is the part of the code I am working on: trainingset = '/content/drive/My Drive/Colab Notebooks/Train' testset = '/content/drive/My Drive/Colab Notebooks/Test' batch_size = 32 train_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255,\ zoom_range=0.1,\ rotation_range=10,\ width_shift_range=0.1,\ height_shift_range=0.1,\ horizontal_flip=True,\ vertical_flip=False) train_datagen.fit(trainingset); train_generator = train_datagen.flow_from_directory( directory=trainingset, #target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=True ) test_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255) test_datagen.fit(testset); test_generator = test_datagen.flow_from_directory( directory=testset, #target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) num_samples = train_generator.n num_classes = train_generator.num_classes input_shape = train_generator.image_shape classnames = [k for k,v in train_generator.class_indices.items()] print("Image input %s" %str(input_shape)) print("Classes: %r" %classnames) print('Loaded %d training samples from %d classes.' % (num_samples,num_classes)) print('Loaded %d test samples from %d classes.' 
% (test_generator.n,test_generator.num_classes)) So, what I am trying to do is use the ImageDataGenerator fields featurewise_center=True and featurewise_std_normalization=True to do standardization, but if I try to fit the generator to the trainingset by doing train_datagen.fit(trainingset); I get the following error: ValueError Traceback (most recent call last) <ipython-input-16-28e4ebb819be> in <module>() 23 vertical_flip=False) 24 ---> 25 train_datagen.fit(trainingset); 26 27 train_generator = train_datagen.flow_from_directory( 1 frames /usr/local/lib/python3.6/dist-packages/numpy/core/_asarray.py in asarray(a, dtype, order) 83 84 """ ---> 85 return array(a, dtype, copy=False, order=order) 86 87 ValueError: could not convert string to float: '/content/drive/My Drive/Colab Notebooks/Train' Can somebody please help me? Thanks in advance. [EDIT] I am trying to adapt what is written here to my problem. [EDIT_2] I think the problem is that .fit() takes as a parameter a numpy array, while I am trying to pass it a string, which is the path for the images. But I don't understand now what to do, because I would have to transform this into a numpy array in order to do the fit. Answer: The way you use ImageDataGenerator is wrong. The .fit() method is trying to read in the directory path, which is a string. To be able to run your code, you should remove the train_datagen.fit(trainingset) call; your script should then work, and all the images will be preprocessed the way you want them. Here's a link on how to use the ImageDataGenerator. The .flow_from_directory() function is used for raw images. The .fit() method is used on numerical arrays.
{ "domain": "datascience.stackexchange", "id": 6550, "tags": "machine-learning, neural-network, cnn, image-classification" }
RVIZ will not execute
Question: I am trying to run a LiDar taken from the Neato XV-11 vacuum cleaner on ROS and am using RVIZ to visualize the data coming from the sensor. However, I ran into the problem that RVIZ was not receiving any map data to display. I consulted the site and, as per the video tutorial under map, I found that I did not have any topics being displayed. I tried to enter the topic manually, but now RVIZ will not initialize and I am receiving the following error. [ERROR] [1338754771.604753719]: Caught exception while loading: Character [ ] at element [3] is not valid in Graph Resource Name [map (nav_msgs/OccupancyGrid)]. Valid characters are a-z, A-Z, 0-9, / and _. Kindly suggest what went wrong and what can be done regarding the same. Originally posted by rzv0004 on ROS Answers with karma: 1 on 2012-06-03 Post score: 0 Answer: Seems to me you placed a space in map (nav_msgs/OccupancyGrid), which is not valid. Remove that space in ~/.rviz/display_config or ~/.rviz/config and start rviz. Previously: http://ros-users.122217.n3.nabble.com/RVIZ-Graph-Resource-Name-Problem-td1678927.html http://answers.ros.org/question/11321/caught-exception-while-loading-character-/ Originally posted by felix k with karma: 1650 on 2012-06-04 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 9644, "tags": "lidar, rviz" }
Kinematics LHS≠RHS?
Question: For example, $v_f=v_i+a\Delta t=1+1(1)=2$. Then, after finding the final velocity, I try to find $\Delta x$: $\Delta x=\frac 12(v_i+v_f)\Delta t = \frac 12(1+2)(1)=1.5\,\mathrm m$. However, $\Delta x=v_f-v_i=2-1=1\,\mathrm m$, and $1\,\mathrm m \neq 1.5\,\mathrm m$. Edit: try to forget that this question exists. Answer: The general formula for displacement under constant acceleration is $$s=v_{i}t+\frac{at^2}{2}$$ Therefore, for your example $\Delta x = (1)(1) + \frac{(1)(1)^{2}}{2} = 1.5$ m. (Note that $v_f-v_i$ is a change in velocity, not a displacement, so there is no reason for it to equal $\Delta x$.) Hope this helps.
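The two consistent constant-acceleration formulas can be sanity-checked in a few lines:

```python
v_i, a, dt = 1.0, 1.0, 1.0                  # initial velocity, acceleration, time

v_f = v_i + a * dt                          # final velocity: 2.0 m/s
dx_avg = 0.5 * (v_i + v_f) * dt             # average-velocity form: 1.5 m
dx_suvat = v_i * dt + 0.5 * a * dt**2       # s = v_i t + a t^2 / 2: also 1.5 m

print(v_f, dx_avg, dx_suvat)
```

Both displacement formulas agree at 1.5 m; the mismatch in the question comes only from treating the velocity change $v_f-v_i$ as if it were a distance.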
{ "domain": "physics.stackexchange", "id": 66523, "tags": "homework-and-exercises, kinematics" }
Accretion disk physics - Stellar formation
Question: I was going through the Wikipedia page for Accretion disks, and I couldn't comprehend what the meaning of this is: "If matter is to fall inwards it must lose not only gravitational energy but also lose angular momentum." What does that mean? Also, why are these accretion disks planar? I learned that the reason is conservation of angular momentum, but what is the physics behind it? Answer: The Setup This is a good question because whenever we say "conservation of momentum" we are really dodging the issue entirely. It's just that "conservation of momentum" has become a key phrase in astrophysics for summarizing the process of disk formation. So let's start from the beginning. You have a generally 3D distribution of matter in the form of a gas cloud in space. It has a center of mass. You can ascribe to each particle of mass $m$ an angular momentum $\vec{L}$ according to $$ \vec{L} = m \vec{r} \times \vec{v}, $$ where $\vec{r}$ is the vector from the center of mass to the particle, and $\vec{v}$ is its velocity. Now the following two facts about angular momentum can be shown: As the particles are influencing each other via gravity or even via exchanging photons, the sum of the angular momenta of any pair of particles is conserved in an interaction between those particles; When two particles collide, the sum of their angular momenta is conserved. It can be seen then that the total angular momentum of the system, summed over all the particles, stays constant. Now it is extremely unlikely that the total angular momentum will be $0$ - adding a bunch of random numbers together, especially considering there are actually large-scale correlations due to eddies and winds and shock waves etc. - will probably get you a rather nonzero value. So this total angular momentum vector picks out a preferred direction (parallel to itself) and a preferred family of planes (perpendicular to itself). 
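The claim that the summed angular momentum is "rather nonzero" is easy to check numerically. A quick sketch with unit-mass particles and uncorrelated random velocities (real clouds, with correlated eddies and winds, would give an even larger total):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
r = rng.normal(size=(n, 3))
r -= r.mean(axis=0)                    # measure positions from the center of mass
v = rng.normal(size=(n, 3))

L_total = np.cross(r, v).sum(axis=0)   # sum of m (r x v) with m = 1

# For uncorrelated particles |L| grows like sqrt(n): generically nonzero,
# so it singles out one direction and one perpendicular family of planes.
print(np.linalg.norm(L_total))
```

The individual angular momenta partially cancel, but the residual sum essentially never lands exactly on zero, which is all the disk-orientation argument needs.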
Why Disks Actually Form We have shown that if a disk of material were to form, we know what orientation it would have, because anything else would have a total angular momentum pointing in the wrong direction. But most sources stop here and don't explain why collapse occurs at all, which is a serious omission. Some sources will tell you that given the constraint of fixed angular momentum, collapsing to a disk will minimize the potential energy of the system. While this statement is true, it does not in and of itself show that collapse will happen. For that we need a mechanism, which I will describe. The basic idea is that collisions enable the system to relax to a disk. Consider some particle whose angular momentum is quite out of line with the average of the rest of the particles. In particular suppose its orbit is oriented in the wrong plane.1 Then most of the time when it collides with another particle, it will lose some of that spurious angular momentum and be brought more in line with the average.2 If all particles have their angular momenta pointing in the same direction, then necessarily they are all moving in the same plane, and this is the state toward which collisions bring us. Now if there were no collisions, we wouldn't have this mechanism. That is why gas forms accretion disks more easily than populations of stars. For example, globular clusters are roughly spherical populations of stars that have remained like that for billions of years.3 One other point here: So far we have shown only that each individual orbit will be brought into a plane perpendicular to the total angular momentum vector, but not that these planes will coincide. That they will coincide follows from the fact that you cannot have a steady-state planar orbit in which the center of mass of the system is not in that plane. 
If you did, then you would feel an acceleration perpendicular to the plane in the general direction of the center of mass at all points in the orbit and so you would necessarily accelerate out of that plane.4 Why Matter Cannot Migrate Inward If we have matter orbiting in a disk, those particles cannot decide on their own to move into the center. Often we say "because angular momentum," which is again true but not very explanatory. The fact is, orbits in free space are generally stable. Yes, you are always accelerating toward the center, but your tangential velocity is always keeping you moving away from the center. The planets aren't falling into the Sun because they are moving too fast and nothing is slowing them down. How Matter Migrates Inward So now we have a disk. We know these are used to feed matter into the central object in various astrophysical systems, but how does this occur given the argument above? The answer, in short, again comes down to collisions and angular momentum, but this time to the magnitude rather than the direction of the latter. Consider for simplicity a lightweight disk where gravity is determined by the central object rather than the disk material.5 Each particle in the disk will, in the absence of interactions with other disk particles, follow an elliptical Keplerian orbit. The length of such an orbit scales as the semi-major axis $a$, and the time it takes to complete one orbit scales as $a^{3/2}$ according to Kepler's Third Law. Thus for particles with "average" separations from the center $a$, their "average" speeds will scale as $a/a^{3/2} = 1/\sqrt{a}$. That is, the further you are from the center, the slower you will be moving. Consider two particles in adjacent orbits in the disk, one slightly outside the other. The inner particle will be moving slightly faster. 
If these particles (whose orbits are not necessarily perfect circles) collide,6 the outer one may drag the inner one, slowing it down while getting sped up itself. So angular momentum has transferred outward. How does this cause matter to move inward? Well, that inner particle is now moving too slowly for its current separation from the center, and so it enters upon a new orbit with a new eccentricity that brings it closer to the center. That is, if it started off in a circular orbit of radius $a_0$, its new orbit will be an ellipse with semi-major axis $a_1 < a_0$. It is thus interactions and angular momentum exchange that allow and cause matter to slowly move inward in a disk. Note though that something has to pick up the extra angular momentum, so in general a fraction of the disk will not fall in but rather will be pushed further away. 1 For a generic distribution of matter, the orbits won't necessarily be planar. However, at any moment you can construct the unique osculating orbit that matches the particle's instantaneous position and velocity. 2 If you don't believe that the tendency will be in this direction on average, consider that an inanimate object like a balloon will quickly be turned around if thrown into a headwind. 3 In fact there is an alternate mechanism to transfer angular momentum between non-colliding bodies gravitationally - dynamical friction - but in most cases this is far too slow to be worth considering. This is further proof that potential energy arguments are incomplete without consideration of mechanisms. 4 This is the same reason you cannot have a satellite orbit Earth above a fixed latitude unless that latitude is $0^\circ$. 5 The argument holds even if the disk has nonnegligible self-gravity, but this just fixes certain quantities for a more concrete discussion. 6 Now is as good a time as any to note that collisions are not the only reasonable way of transferring angular momentum. 
In addition to the aforementioned dynamical friction present, many disks have charged particles and magnetic fields. Charged particles can interact across otherwise empty space if there is a magnetic field coupling them. This can lead to rather complicated phenomena, and studying magnetic effects on disks is one of the topics at the forefront of astrophysical research.
{ "domain": "physics.stackexchange", "id": 9044, "tags": "astrophysics, angular-momentum, conservation-laws, newtonian-mechanics" }
What's the matrix representation of the CSWAP?
Question: I don't know how to represent the matrix format of the CSWAP gate in the circuit: Despite reviewing some material about CSWAP from Qiskit CSWAP, I am still unable to understand the concept. I am seeking to obtain the 16x16 matrix format in order to calculate the quantum state vector after the CSWAP. In addition, it can be noted that q2 and q3 are fully entangled (I am unsure if this information is relevant). Any help would be greatly appreciated, thank you! Answer: To get the $CSWAP$ matrix representation in Qiskit, you can use the Operator class defined in the qiskit.quantum_info module: from qiskit import QuantumCircuit from qiskit.quantum_info import Operator from qiskit.visualization import array_to_latex qc = QuantumCircuit(4) qc.cswap(0, 1, 3) array_to_latex(Operator(qc), max_size=16) If you want to do all the actual maths, you should start from the following: $$ CSWAP_{0 \rightarrow 1, 3} = I \otimes I \otimes I \otimes |0 \rangle \langle 0| + SWAP_{1, 3} \otimes |1 \rangle \langle 1| $$ The formula above basically means: "if qubit $q_0$ (right-most) is in state $|0\rangle$, don't do anything; if it is in state $|1\rangle$, swap qubit $q_1$ and $q_3$". The $SWAP_{i,j}$ gate can then be decomposed in 3 controlled-not operations as $CX_{i,j} \cdot CX_{j,i} \cdot CX_{i,j}$. So, in this specific case, you have to compute: $$ CX_{1, 3} = I \otimes I \otimes |0 \rangle \langle 0| + X \otimes I \otimes |1 \rangle \langle 1| $$ $$ CX_{3, 1} = |0 \rangle \langle 0| \otimes I \otimes I + |1 \rangle \langle 1| \otimes I \otimes X $$ Finally, if you put all together and perform the calculations, you will get the same 16x16 unitary matrix returned by Qiskit.
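If you want to carry out the projector calculations above without Qiskit, they translate directly into NumPy. A sketch in Qiskit's little-endian ordering ($q_3 \otimes q_2 \otimes q_1 \otimes q_0$, with $q_0$ right-most); the `kron` helper is just shorthand introduced here:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0])               # |0><0|
P1 = np.diag([0.0, 1.0])               # |1><1|

def kron(*ops):                        # Kronecker product, left to right
    return reduce(np.kron, ops)

# operators on (q3, q2, q1); the control qubit q0 is handled separately below
CX13 = kron(I2, I2, P0) + kron(X, I2, P1)    # CX: control q1, target q3
CX31 = kron(P0, I2, I2) + kron(P1, I2, X)    # CX: control q3, target q1
SWAP13 = CX13 @ CX31 @ CX13                  # SWAP as three CNOTs

CSWAP = kron(np.eye(8), P0) + kron(SWAP13, P1)   # 16x16, controlled on q0

# sanity checks: applying it twice is the identity, and the basis state
# |q3 q2 q1 q0> = |0011> (index 3) maps to |1001> (index 9), since q0 = 1
# triggers the q1 <-> q3 swap
assert np.allclose(CSWAP @ CSWAP, np.eye(16))
assert np.allclose(CSWAP @ np.eye(16)[:, 0b0011], np.eye(16)[:, 0b1001])
```

The resulting matrix is a real permutation matrix, which is why it is its own inverse; it should match the 16x16 unitary that `Operator(qc)` returns for the same qubit assignment.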
{ "domain": "quantumcomputing.stackexchange", "id": 4565, "tags": "qiskit, quantum-gate, quantum-state, swap-test" }
How to Create a Filter that Matches a Desired Magnitude Response
Question: I'm very, very new to this stuff, so please forgive me if the answer is obvious. I haven't been able to find any clear answers to this issue, even after looking through source code for various filter implementations and scouring the net for similar questions. I have a function $f(x)=y$, such that: $x$ is a frequency in Hz, and $y$ is the amount in dB by which that frequency should, ideally, be quieted down. I would like to apply this function to a digitally-stored sound - represented of course by samples in the time domain, as is standard. Ideally, I would like this to give the same output as if one manually took each frequency out of a sound, scaled each one by the function's output, then added them back together. I know this method itself is beyond impractical, so I'm sure my answer lies somewhere in the realm of IIR or FIR filtering algorithms. As an aside, I would prefer to avoid approximation-based approaches where possible; I'd like to get as close as I can to applying the function exactly, rather than approximating its curve. This is for research purposes - I don't plan to apply this filter in real time, and my programming language of choice (Lua) isn't really suited to that anyway - so the increased speed and memory usage this might incur is not an issue for me. Where should I begin? Is this possible to do? Answer: What you seem to want is a multiplicative modification of the spectrum of the input signal in the following form: $$Y(\omega)=H(\omega)X(\omega)\tag{1}$$ where $X(\omega)$ is the spectrum of the input signal, $Y(\omega)$ is the spectrum of the output signal, and $H(\omega)$ is the function that modifies the input spectrum. Equation $(1)$ is exactly what a (linear and time-invariant) filter does. Examples of such filters in the discrete domain are FIR and IIR filters. In general, the function $H(\omega)$ - called the filter's frequency response - is complex-valued. That means that not only the magnitude of the input spectrum is changed but also its phase. 
If I understand your question correctly, you just care about the magnitude. For clarification, the frequency response's influence on a sinusoidal input signal $x[n]=\sin(\omega_0n)$ is described by the following equation: $$y[n]=\big|H(\omega_0)\big|\sin\big(\omega_0n+\phi(\omega_0)\big)\tag{2}$$ where $\phi(\omega)$ is the phase of the complex-valued frequency response $H(\omega)$: $$H(\omega)=\big|H(\omega)\big|e^{j\phi(\omega)}$$ Eq. $(2)$ shows that the magnitude of a sinusoid at frequency $\omega_0$ is modified by the magnitude of the frequency response at that frequency. Furthermore, its phase is modified by the phase of the frequency response at that frequency. The catch is that you can't just prescribe some $H(\omega)$ and expect a realizable filter to exactly implement the given response. You will always have to live with an approximation of the desired response. However, if computational cost and memory requirements are of no concern you can get very close to the prescribed response. I would recommend to use an FIR filter to approximate the given desired response. Reasons why one might want to avoid FIR filters are their large delay (at least for linear phase filters) and their large computational complexity, both when compared to equivalent IIR filters. However, I think that for your application these drawbacks are not very relevant, and the advantages of FIR filters outweigh the disadvantages. The advantages are ease of design and inherent stability. The frequency response of a causal FIR filter is $$H(\omega)=\sum_{n=0}^{N-1}h[n]e^{-jn\omega}\tag{3}$$ where $h[n]$ are the filter coefficients ("taps"), and $N$ is the filter length (= number of taps). Obviously, the larger $N$, i.e., the more taps, the better the approximation of a given response will be. 
Given the filter coefficients $h[n]$, the output $y[n]$ is computed by the discrete convolution of the filter coefficients with the input $x[n]$: $$y[n]=\sum_{k=0}^{N-1}h[k]x[n-k]$$ Designing a filter means finding the "best" filter coefficients $h[n]$ for a given specification. There are countless filter design methods and it's not always straightforward to find the one that best fits a given problem. In your case I would probably start experimenting with the frequency sampling method. In this case, the desired response is sampled on a dense equidistant frequency grid. This discretized response is then transformed to the time domain by applying an inverse discrete Fourier transform (IDFT), which is usually implemented by the (inverse) Fast Fourier Transform (IFFT). Finally, the resulting impulse response is usually windowed to smooth out the resulting response and to reduce the impulse response to the desired filter length $N$. For illustration purposes I used the frequency sampling method to design a filter approximating a rather silly fantasy specification. The desired magnitude response has a staircase shape and was sampled at $1000$ equidistant points. The desired filter length was chosen to be $N=201$. The figure below shows the result. Clearly, an actual filter cannot perfectly follow the discontinuities of the desired response. However, increasing the filter length will make the transitions narrower. Here is a link to a simple Octave/Matlab implementation of the frequency sampling method used in the design example shown above. The filter was designed with the following commands: mag = [ones(1,200),.75*ones(1,200),.5*ones(1,200),.25*ones(1,200),zeros(1,200)]; h = fsamp( mag, 201 );
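For readers working in Python rather than Octave, SciPy ships the same frequency-sampling design as scipy.signal.firwin2. A sketch reproducing the staircase example (the grid density, tap count, and band edges below are chosen to mirror the answer's fsamp snippet, not taken from it):

```python
import numpy as np
from scipy import signal

# staircase magnitude spec sampled at 1000 points on [0, 1] (1 = Nyquist)
freq = np.linspace(0.0, 1.0, 1000)
gain = np.select([freq < 0.2, freq < 0.4, freq < 0.6, freq < 0.8],
                 [1.0, 0.75, 0.5, 0.25], default=0.0)

h = signal.firwin2(201, freq, gain)    # 201-tap linear-phase FIR

# realized response vs. the spec, on a dense grid
w, H = signal.freqz(h, worN=4096)
f_norm = w / np.pi                     # normalized frequency, 1 = Nyquist
mag = np.abs(H)
```

As in the figure described above, the realized magnitude tracks each plateau closely and smooths the discontinuities over a transition whose width shrinks as the tap count grows.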
{ "domain": "dsp.stackexchange", "id": 11865, "tags": "filter-design, programming, frequency-filters" }
Quantum commutation relation $[e^{-x^{2}},e^{\alpha i p}] = ?$
Question: I have been trying to find a closed-form solution, or at least something neat, for the commutation relation $$[e^{-x^{2}},e^{\alpha i p}] = ?$$ (where $[x,p] = i\mathbb{I}$) but have had little luck. I have tried using the BCH (Baker-Campbell-Hausdorff) theorem but this does not get me very far. I think that there must be some simple relation that I am overlooking. Answer: You have set $\hbar=1$, so $p= -i\partial_x$ in the coordinate representation, so one of your operators is a bland Lagrange shift operator, and hence $$ [e^{\alpha i p}, f(x) ] = (f(x+\alpha)-f(x)) e^{\alpha i p} ~~~~\leadsto \\ [e^{-x^2}, e^{\alpha i p}]= - (e^{-(x+\alpha)^2}-e^{-x^2}) e^{\alpha i p} . $$ (With a tip of the hat to @thedude 's comment! The linked WP article reminds you that $e^{i\alpha p} f(x) e^{-i\alpha p}= f(x+\alpha) $, in operator calculus language; when it acts on a constant, it reduces to just $e^{i\alpha p} f(x)= f(x+\alpha) $.) Make sure to confirm for small α.
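The confirmation can also be done numerically: realize $e^{i\alpha p}$ as an FFT-based translation on a periodic grid and compare both sides of the identity applied to a test state. A sketch (the box is wide enough that Gaussian tails make wrap-around negligible):

```python
import numpy as np

# hbar = 1, p = -i d/dx; on a periodic grid exp(i*alpha*p) is a translation
N, Lbox, alpha = 256, 20.0, 1.3
x = (np.arange(N) - N // 2) * (Lbox / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lbox / N)

def translate(psi):                    # exp(i*alpha*p) psi(x) = psi(x + alpha)
    return np.fft.ifft(np.exp(1j * alpha * k) * np.fft.fft(psi))

psi = np.exp(-0.5 * (x - 1.0) ** 2)    # arbitrary test state
f = np.exp(-x ** 2)

lhs = f * translate(psi) - translate(f * psi)            # [f, e^{i a p}] psi
rhs = -(np.exp(-(x + alpha) ** 2) - f) * translate(psi)  # claimed closed form

print(np.max(np.abs(lhs - rhs)))       # tiny: the two sides agree
```

The agreement holds for any well-resolved test state, not just small $\alpha$, since the shift-operator identity is exact.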
{ "domain": "physics.stackexchange", "id": 89015, "tags": "quantum-mechanics, homework-and-exercises, operators, commutator" }
Definition of pressure in Euler's equation for incompressible inviscid fluid
Question: In fluid dynamics, Euler's equations describe an inviscid fluid. For an incompressible fluid with a constant and uniform density they read (cf. Wikipedia article): $$ \begin{align} {\partial\mathbf{u} \over \partial t} + (\mathbf{u} \cdot \nabla) \mathbf{u} &= -\frac 1 {\rho_0} \nabla p + \mathbf{g} \\ \nabla \cdot \mathbf{u} &= 0 \end{align} $$ In order to completely define the problem, e.g. to numerically simulate it, I will also need to know how $p$ is defined in terms of $\mathbf{u}$, the function I want to solve for. To my surprise, none of the places talking about Euler's equations I've found so far give a definition of $p = p(\mathbf{x}, \mathbf{u}, t)$... Answer: For incompressible fluids the $p$ is what it needs to be in order to satisfy $\nabla\cdot {\bf v}=0$. In other words, you do not use $p$ to solve for the motion, but instead use the motion to find $p$. A brief discussion of this, and the strategy for solving incompressible flow, is in exercise 67 in our book. A draft version can be found here. The exercise is on page 228 in the draft.
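Concretely, taking the divergence of the momentum equation and using $\nabla\cdot\mathbf u = 0$ (with $\mathbf g$ constant) gives a Poisson equation for the pressure, $\nabla^2 p = -\rho_0\,\nabla\cdot\left[(\mathbf u\cdot\nabla)\mathbf u\right]$. A spectral sketch of this on a periodic box, using the Taylor-Green vortex (a known steady Euler solution) as the test field:

```python
import numpy as np

N, rho = 64, 1.0
xs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers, 2*pi box
KX, KY = np.meshgrid(k, k, indexing='ij')

def ddx(f): return np.fft.ifft2(1j * KX * np.fft.fft2(f)).real
def ddy(f): return np.fft.ifft2(1j * KY * np.fft.fft2(f)).real

u = np.cos(X) * np.sin(Y)                    # Taylor-Green: div u = 0
v = -np.sin(X) * np.cos(Y)

ax = u * ddx(u) + v * ddy(u)                 # (u . grad) u
ay = u * ddx(v) + v * ddy(v)

# pressure Poisson equation: lap p = -rho div[(u.grad)u]
rhs_hat = np.fft.fft2(-rho * (ddx(ax) + ddy(ay)))
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                               # mean of p is arbitrary
p = np.fft.ifft2(-rhs_hat / K2).real
p -= p.mean()

dudt = -ax - ddx(p) / rho                    # Euler momentum equation, g = 0
dvdt = -ay - ddy(p) / rho

# Taylor-Green is steady: the computed p exactly balances the advection term
print(np.max(np.abs(dudt)), np.max(np.abs(dvdt)))
```

By construction the resulting $\partial\mathbf u/\partial t$ is divergence-free, and for this particular field it vanishes entirely, matching the analytic pressure $p = -\tfrac{\rho}{4}(\cos 2x + \cos 2y)$.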
{ "domain": "physics.stackexchange", "id": 79231, "tags": "fluid-dynamics, computational-physics, stress-strain" }
SQL QueryBuilder
Question: I've made a simple querybuilder for my job, but I'm kind of insecure about starting to use it because I'm afraid of errors happening when I start using it. I've done some tests with where, join, etc... So far I haven't found anything, but if someone could take a look and see if there's something missing, I would appreciate it a lot. Is this class reliable enough? P.S.: I've only made it work for SELECT queries because it's the only thing I need. P.P.S.: I know I could use an external lib, but it is worthwhile trying to make one myself. (And libs are kind of overwhelming for what I need) <?php class QueryBuilder { private $query; private $table; public function __constructor() { $this->query = ''; $this->table = null; } public function __call($method, $arguments) { $index = 1; preg_match('/where/i', $method, $isWhere); if(!empty($isWhere) && strlen($method) > 6) { $method = str_ireplace($isWhere[0], '', $method); } preg_match_all('/[A-Z]/', $method, $temp, PREG_OFFSET_CAPTURE); $temp = $temp[0]; if(!empty($temp)) { foreach($temp as $offset) { if($offset > 1 || $offset[1] > 0) { $this->query .= strtoupper(substr_replace($method, ' ', $offset[1], 0)); $this->query .= ' '; } } } else { $this->query .= strtoupper($method); $this->query .= ' '; } if(count($arguments) == 1) { if(!empty($arguments)) { foreach($arguments as $arg) { foreach($arg as $key => $value) { $this->query .= $value; if($index < count($arg)) $this->query .= ', '; else $this->query .= ' '; } } } } else { if(!empty($arguments)) { $this->query .= $arguments[0]; $this->query .= ' ON '; unset($arguments[0]); foreach($arguments as $arg) { foreach($arg as $key => $value) { $this->query .= $value; if($index < count($arg)) $this->query .= ', '; else $this->query .= ' '; } } } } return $this; } public function table($tables = []) { $index = 1; if($tables === null || empty($tables)) { throw new Exception('Table value cannot be null.'); return; } foreach($tables as $table) { $this->table .= $table; 
if($index < count($tables)) { $this->table .= ', '; } else { $this->table .= ' '; } } return $this; } public function select($cols = null) { $index = 1; $this->query .= 'SELECT '; if($cols === null || empty($cols) || $cols == '') { $this->query .= ' * '; } else { foreach($cols as $value) { $this->query .= $value; if($index < count($cols)) { $this->query .= ', '; } $index++; } } $this->query .= ' FROM '.$this->table; return $this; } private function makeQuery() { return $this->query; } public function get() { return $this->makeQuery(); } } $test = new QueryBuilder(); echo $test ->table() ->select(['id', 'name']) ->Where(['name= \'john\'']) ->whereNotIn(['id > 1']) ->get(); Answer: Bug with WHERE conditions I tried running the sample code: $test = new QueryBuilder(); echo $test ->table() ->select(['id', 'name']) ->Where(['name= \'john\'']) ->whereNotIn(['id > 1']) ->get(); It threw an exception because the argument to the table method was null (which would yield an empty array per the default argument value), so I changed it to pass an array with a single string literal ['users'] to ->table(). Then when I ran it again, I saw the string returned below: SELECT id, name FROM users WHERE name= 'john' NOTIN NOT IN id > 1 Correct me if this is wrong, but most SQL engines need to have the predicates combined with the AND and OR keywords, and those where conditions NOTIN NOT IN would definitely yield an error. It is unclear how the NOT IN should be combined with the id > 1... My best guess is that a sub-query would be needed for that to work... something like SELECT id, name FROM users WHERE name= 'john' AND id NOT IN (SELECT id FROM users WHERE id > 1) Given that issue, I would say to your question "Is this class reliable enough?": No, it isn't reliable, but maybe if that issue is resolved then it would be. Constructor is useless The only effects of the constructor are to set the two properties (instance variables) to primitive values (i.e. 
an empty string literal and null). Those could be initialized when declared, since those values can be evaluated at compile time. Thus the constructor can be removed once those initializations are added to the declarations: class Builder { private $query = ''; private $table = null; One advantage here would be that if this class had a parent class, then any method that overrides the same method in the parent class would need to have the same signature, or at least pass sufficient parameters when calling the parent method if that is needed. Variables declared even if not used While the next section describes how to eliminate variables like $index, I do notice that variable is often declared as a local variable assigned the value 1 at the start of methods (like __call(), table(), select()). However in some cases the method may return early - for example in table() an exception is thrown if the $tables argument is null or empty. While it is only an integer, it is wise to not assign values to variables until they are needed. Imagine a large object was assigned to a variable there after calling a function (or multiple functions) - if the method returned early, then the CPU cycles used to compute that value would have been wasted. Use implode() instead of conditionally appending separators I see a few places like the block from select() below, where array elements are appended to the query property and then commas are added if the end of the list hasn't been reached: foreach($cols as $value) { $this->query .= $value; if($index < count($cols)) { $this->query .= ', '; } $index++; } That can be simplified using implode() with the comma and space used as the $glue and the array $cols as the $pieces parameters. $this->query .= implode(', ', $cols); And this makes $index superfluous so it can be removed.
{ "domain": "codereview.stackexchange", "id": 31065, "tags": "php, sql" }
Magnetic field of a stationary electron
Question: As far as I know, a magnetic field can only be produced by a moving electric charge, or from a particle's spin (this is how a permanent magnet works: all the spins are in the same direction). What is the strength and direction of the magnetic field of a stationary electron at the origin with spin oriented straight up? I suspect a function of cylindrical coordinates is most convenient.

Answer: An electron's magnetic field is a dipole field - that is, the field is given by the standard dipole expression $$\mathbf B(\mathbf r)=\frac{\mu_0}{4\pi}\,\frac{3(\mathbf m\cdot\hat{\mathbf r})\hat{\mathbf r}-\mathbf m}{r^3}$$ (source: http://www2.ph.ed.ac.uk/~playfer/EMlect4.pdf). In this expression, $\mathbf m$ is the magnetic dipole moment. For an electron, this has the value $$m=-928.476377 \times 10^{-26}\ \mathrm{J/T}$$ The magnetic permeability is $$\mu_0=4\pi\cdot 10^{-7}\frac{V\cdot s}{ A \cdot m}$$ Note - when the spin is up, the magnetic field at the origin points down, because the electron has negative charge. Hence the negative sign in the value of $m$.
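As a sanity check on the magnitudes (this sketch is my own illustration, not part of the answer), the dipole formula can be evaluated on the spin axis, where $|\mathbf B| = \mu_0 m/(2\pi z^3)$, and in the equatorial plane, where it is half as large:

```python
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, V*s/(A*m)
M_E = 9.28476377e-24            # magnitude of the electron magnetic moment, J/T

def dipole_field_on_axis(z):
    """|B| on the dipole (spin) axis at distance z, in tesla."""
    return MU0 * M_E / (2 * math.pi * z ** 3)

def dipole_field_equatorial(r):
    """|B| in the equatorial plane at distance r, in tesla."""
    return MU0 * M_E / (4 * math.pi * r ** 3)

# At one angstrom (roughly an atomic length scale) the field is already of
# order one tesla, which is why electron moments dominate atomic magnetism.
print(dipole_field_on_axis(1e-10))
```

Note the $1/r^3$ falloff: doubling the distance cuts the field by a factor of eight.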
{ "domain": "physics.stackexchange", "id": 15302, "tags": "electromagnetism" }
How to calculate the derivative of scale factor as a function of conformal time from the solution of Friedmann equation
Question: For the flat geometry of the Lambda-CDM model, the solution of the Friedmann equation is $$ a(t) = \left\{ \frac{Ω_{m,0}}{Ω_{Λ,0}} \sinh^2 \left[\frac{3}{2} \sqrt{Ω_{Λ,0}} H_0(t - t_0)\right] \right\}^{1/3}, $$ here the scale factor $a$ is a function of cosmic time; today $a(t) = 1$, $t = 13.7$ GY, $\eta = 47.7$ GY, and the expression for $t_0$ is given by: $$ t_0 = \frac{2}{3 H_0 \sqrt{Ω_{Λ,0}}} \sinh^{-1}\sqrt{\frac{Ω_{m,0}}{Ω_{Λ,0}}}. $$ How can I get the derivative of the scale factor $a$ as a function of conformal time, i.e., how can I numerically calculate $\dot a(\eta)$ from the above expression? The relation between the scale factor $a$ and conformal time $\eta$ is given by $$\eta(t)=\int\frac{dt}{a(t)}.$$ Edit 1: For the Lambda-CDM model the Friedmann equation can be written as $$ \frac{da(t)}{dt} = H_0 a(t) \sqrt{\frac{\Omega_{m,0}}{a(t)^3} + \Omega_{Λ,0}} $$ and the conformal time can be written as $$ \frac{d\eta}{dt} = \frac{1}{a(t)}. $$

Answer: You've asked how to do it numerically because, as we've already discussed in your previous question, there is no analytic formula for $a(\eta)$. The numeric approach just uses numerical differentiation. Basically you just approximate the derivative as $\Delta a/\Delta\eta$ using a small finite interval. Using your numbers for the constants, we have $$a=0.753947\sinh^{2/3}{(0.0840843\,t)}$$ where $t$ is gigayears since the Big Bang. (You've already agreed that $t_0$ should be 0, not what you wrote.) To get the conformal time $\eta$ corresponding to any cosmological time $t$, numerically integrate $1/a$ from 0 to $t$. I use the $\mathbf{NIntegrate}$ function in Mathematica to do this. Pick some coordinate time, say $t_1=10$. Calculate the scale factor at this time, $a_1=0.725268$, and the corresponding conformal time, $\eta_1=44.1803$. Now calculate the scale factor and conformal time at a slightly later cosmological time, say $t_2=10.01$. You get $a_2=0.725860$ and $\eta_2=44.1941$.
Now calculate $\Delta a=a_2-a_1=0.000592479$ and $\Delta\eta=\eta_2-\eta_1=0.0137824$. Divide them to get $\Delta a/\Delta\eta=0.0429882$ when $\eta=44.1803$. This should be a good approximation to the derivative $da/d\eta$ at that conformal time. You can try a smaller time interval; it shouldn't change much. If you use too small a time interval, you may need to use a program that can do high-precision arithmetic because you will be subtracting numbers that are almost equal. Repeat for lots of other cosmological times, and you will have plenty of points to plot $da/d\eta$ vs. $\eta$. For example, I made this plot by doing this for cosmological times between 0 and 15, in steps of 0.1, using 0.01 as the finite time interval for approximating the derivative. The horizontal axis is $\eta$ and the vertical axis is $da/d\eta$. The fact that $da/d\eta$ isn't quite zero at $\eta=0$, as it should be, is because of the numerical approximation. It would probably be closer to zero if I used a smaller time interval and high-precision arithmetic. There are fancier ways to do numerical differentiation, which you can read about in the Wikipedia article. The approach I’ve described is the simplest version. Using the analytic formulas I provided in the previous question would also let you parametrically plot this very accurately, without having a formula for $a(\eta)$.
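The recipe above can be sketched in plain Python (my own illustration: the constants are the ones quoted in the answer, and the substitution $t' = u^3$ is just one way to tame the integrable $t'^{-2/3}$ singularity at the Big Bang when using a simple midpoint rule instead of Mathematica's NIntegrate):

```python
import math

A, K = 0.753947, 0.0840843       # constants quoted above (t in gigayears)

def a(t):
    return A * math.sinh(K * t) ** (2.0 / 3.0)

def eta(t, n=20000):
    """Conformal time: integrate 1/a from 0 to t with a midpoint rule.
    The substitution t' = u**3 removes the t'**(-2/3) singularity at t' = 0."""
    umax = t ** (1.0 / 3.0)
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += 3.0 * u * u / a(u ** 3) * h
    return total

def dadeta(t, dt=0.01):
    """Finite-difference approximation of da/d(eta) at cosmic time t."""
    return (a(t + dt) - a(t)) / (eta(t + dt) - eta(t))

print(a(10.0), eta(10.0), dadeta(10.0))
```

Running this reproduces the numbers above to the quoted precision: $a(10)\approx 0.7253$, $\eta(10)\approx 44.18$ and $da/d\eta \approx 0.0430$.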
{ "domain": "physics.stackexchange", "id": 56844, "tags": "general-relativity, cosmology, differential-geometry, space-expansion, integration" }
Does the movement of a passenger change the velocity of an aeroplane?
Question: I encountered a problem in my physics textbook today. An aeroplane of total mass of 50,000 kg is travelling at a speed of 200 m/s. If a passenger of mass 100 kg then walks toward the front of the aeroplane at a speed of 2 m/s, what change in the speed of the aeroplane does this cause? It's easy enough to solve with the law of conservation of linear momentum. But my question is why? How? Because as you move forward in the plane, there is no external force on the system. So it shouldn't affect the plane. p.s. I'm new to this concept and would be grateful for an in-depth walkthrough. Edit: The answer in the book says that it's supposed to be a 4 mm/s difference in velocity. No direction stated. Also, by looking at all of the answers you have given me, I think it's safe to say that there is no 'true' answer. Both interpretations are equally correct.

Answer: Approach 1: Consider the plane and the man as a single system. Since there are no external forces acting on them, the center of mass shouldn't accelerate. But if the man starts running to the right and the plane doesn't react to this, the center of mass would go to the right as well. So in order for the center of mass to stay where it is, it is the plane that should accelerate in the opposite direction.

Approach 2: When you start walking on a plane your feet "rub" against the floor of the plane, thus pushing it in the opposite direction to the one you are moving in. As you gain speed in one direction, the plane does in the other. But here's the thing: when you're trying to stop, the reverse happens, and as you come to a stop, the plane goes back to its original speed.

A little more elaboration: Someone asked for a mathematical proof for the first approach. So first check out the attached photo. As you see, their accelerations are opposite. Furthermore note that $a_p m_p = -a_m m_m$. The left side of the equation is the total force on the plane, whereas the right side is that on the man times $-1$.
These two forces make up an action-reaction pair. Since it is the man pushing against the floor of the plane, it makes sense for the plane to push the man back with an equal but opposite force. So we have also derived Newton's third law from his second law: the man+plane system has no external forces on it, so we asserted that the center of mass doesn't accelerate. Another outcome of this result can be understood by realising that the mass of the plane is so much larger than that of the man that the ratio $m_m/m_p$ is almost zero. Thus, even though the plane slows down due to the acceleration of the man, the decrease in its velocity is not large enough to be noticeable.
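With the book's numbers, the momentum bookkeeping of Approach 1 takes only a few lines (assuming, as the book apparently does, that the 2 m/s is measured relative to the plane and that the 50,000 kg total includes the passenger):

```python
M_total = 50_000.0   # total mass of plane + passenger, kg
m = 100.0            # passenger mass, kg
v0 = 200.0           # initial common speed, m/s
u = 2.0              # passenger's walking speed relative to the plane, m/s

# Conservation of momentum in the ground frame:
#   M_total * v0 = (M_total - m) * v_plane + m * (v_plane + u)
v_plane = v0 - m * u / M_total
delta_v = v_plane - v0    # about -0.004 m/s: the plane slows by 4 mm/s
print(delta_v)
```

When the passenger stops walking, the same bookkeeping runs in reverse and the plane returns to 200 m/s, exactly as Approach 2 describes.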
{ "domain": "physics.stackexchange", "id": 78060, "tags": "homework-and-exercises, newtonian-mechanics, momentum, conservation-laws" }
Closing shapes at non-endpoints
Question: Consider how one would represent the following image in vector graphics: Pretty simple, right? The entire shape can be represented by a single path element. But suppose additionally that you want to color the heart at the top red. The path element is an open shape, so trying to fill it results in an appropriately red heart but also implementation-dependent bleeding between the spiral endpoints. Obviously, one could just draw the heart and the spiral tails as separate elements, but then the vector graphics representation no longer mirrors how a human being would draw the same image, and makes it more difficult to manipulate as a single object. One needs a way to communicate to the computer that two particular path segments within the larger path intersect in such a manner that they close a sub-shape. Is there a vector graphics format capable of doing this? More relevantly, how is it implemented and are there any papers on it? Answer: If we are not given the region to fill and have to figure that out for ourselves, one simple heuristic is: We're probably looking for a closed, bounded region. If the region is unbounded (it connects to the edges of the frame), it's probably not the one we're looking for. So, you could compute connected components and look for a bounded component. In your example, this means: look just at the white pixels, and consider two white pixels to be connected to each other if they are adjacent. Now compute the connected components of the resulting set. In your picture, there are 7 connected components, but only one of them is bounded (it does not intersect the top edge, bottom edge, right edge, or left edge of the picture). Therefore, that's probably the one that we want to fill. If you follow this heuristic, it does indeed fill the heart shape. 
If there are multiple components that are all bounded (have no intersection with any of the edges), then this heuristic fails and some interaction with the user is required to determine which component the user wanted to fill. But that's probably unavoidable. Your problem statement is ambiguous and does not give us enough information to disambiguate which was desired, so it's not surprising that this might require user interaction in some cases. After all, the computer can't read our minds....
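As a toy illustration of this heuristic (my own sketch, not tied to any particular vector format): rasterize the strokes and flood-fill the background; any background component that never touches the border is a bounded candidate for filling.

```python
from collections import deque

def bounded_components(grid):
    """Connected components of background ('.') cells that do not touch the
    image border. grid: list of equal-length strings, '#' = stroke."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != '.' or seen[sy][sx]:
                continue
            comp, touches_border = [], False
            q = deque([(sy, sx)])
            seen[sy][sx] = True
            while q:                       # breadth-first flood fill
                y, x = q.popleft()
                comp.append((y, x))
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w \
                            and grid[ny][nx] == '.' and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if not touches_border:
                comps.append(comp)
    return comps

art = ["......",
       ".####.",
       ".#..#.",
       ".####.",
       "......"]
print(bounded_components(art))   # only the 2-cell interior is bounded
```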
{ "domain": "cs.stackexchange", "id": 1723, "tags": "computational-geometry, graphics" }
A problem related to Wick's theorem from RG analysis of KT transition
Question: Recently, I was reading the review paper by John B. Kogut, An introduction to lattice gauge theory and spin systems. In the RG analysis for the X-Y model, on page 702, to go from (7.61a) to (7.61b), it seems there is a step where we need to use the following identity (at least when I use it, I can get the correct result...): $$ \langle[h(x)+h(y)]^{2n}\rangle_0=\frac{1}{n!}C_{2n}^2C_{2n-2}^2\cdots C_{2}^2 \{\langle[h(x)+h(y)]^{2}\rangle_0\}^n $$ where $\langle \cdots \rangle_0$ means an average over the free (Gaussian) action of the $\textbf{real bosonic}$ field $h(x)$, and the factor $C_n^k$ is the binomial coefficient $C_n^k=\frac{n!}{k!(n-k)!}$. It looks like Wick's theorem for "$h(x)+h(y)$". I wonder if this relation is true and, if it's true, how to get it from Wick's theorem?

Answer: I just got an idea of how to prove it. Write the equation as: $\langle [h(x)+h(y)]^{2n} \rangle_0=\sum_{ \{\alpha_i=x \ or\ y\} } \langle h(\alpha_1)h(\alpha_2) \cdots h(\alpha_{2n})\rangle_0$, where the summation over the $\alpha_i$'s runs over all the possible cases. As for $\langle h(\alpha_1)h(\alpha_2) \cdots h(\alpha_{2n})\rangle_0$, we can use Wick's theorem, and from combinatorics we know that there are $\frac{1}{n!}C_{2n}^2C_{2n-2}^2 \cdots C_{2}^2$ ways to do the contraction; in other words, we will be left with $\frac{1}{n!}C_{2n}^2C_{2n-2}^2 \cdots C_{2}^2$ terms after the contraction of $\langle h(\alpha_1)h(\alpha_2) \cdots h(\alpha_{2n})\rangle_0$. (Remember there is still a sum over the $\alpha_i$'s!)
For each way of doing the contraction, we will have, for example: $\begin{align} &\sum_{ \{\alpha_i \} } \langle h(\alpha_1)h(\alpha_i)\rangle_0 \langle h(\alpha_j)h(\alpha_k)\rangle_0 \cdots \langle h(\alpha_l)h(\alpha_m)\rangle_0 \\ &= \left( \sum_{\alpha_1,\alpha_i} G(\alpha_1,\alpha_i) \right) \times \left( \sum_{\alpha_j,\alpha_k} G(\alpha_j,\alpha_k) \right) \times \cdots \left( \sum_{\alpha_l,\alpha_m} G(\alpha_l,\alpha_m) \right) \\ &= \left[ \langle [h(x)+h(y)]^{2} \rangle_0 \right]^n \end{align} $ so we see that each term given by Wick's theorem gives the same result after the sum $\sum_{ \{\alpha_i \} }$. Combined with the number of ways of doing the contraction, which is $\frac{1}{n!}C_{2n}^2C_{2n-2}^2 \cdots C_{2}^2$, we finally arrive at the answer.
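The combinatorial factor is just the number of perfect pairings of $2n$ objects, i.e. $\frac{1}{n!}C_{2n}^2 C_{2n-2}^2\cdots C_2^2 = \frac{(2n)!}{2^n\, n!} = (2n-1)!!$, which is quick to verify numerically (my own check, not part of the original answer):

```python
import math
from functools import reduce

def pairings_closed_form(n):
    """(2n)! / (2**n * n!), i.e. the double factorial (2n-1)!!"""
    return math.factorial(2 * n) // (2 ** n * math.factorial(n))

def pairings_from_binomials(n):
    """(1/n!) * C(2n,2) * C(2n-2,2) * ... * C(2,2), as written above."""
    prod = reduce(lambda acc, k: acc * math.comb(2 * k, 2), range(1, n + 1), 1)
    return prod // math.factorial(n)

def pairings_brute_force(n):
    """Count perfect matchings of 2n points by direct recursion:
    pair the first point with each remaining point, then recurse."""
    def count(items):
        if not items:
            return 1
        rest = items[1:]
        return sum(count(rest[:i] + rest[i + 1:]) for i in range(len(rest)))
    return count(tuple(range(2 * n)))

for n in range(1, 6):
    assert pairings_closed_form(n) == pairings_from_binomials(n) == pairings_brute_force(n)
print([pairings_closed_form(n) for n in range(1, 6)])   # [1, 3, 15, 105, 945]
```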
{ "domain": "physics.stackexchange", "id": 34124, "tags": "quantum-field-theory, condensed-matter, renormalization, topological-phase, wick-theorem" }
Merge sort in Clojure
Question: I made my own version of merge sort:

(defn rsort [a]
  (cond
    (<= (count a) 1) a
    :else (let [half (/ (count a) 2)
                [lh rh] (split-at half a)]
            (loop [res () slh (rsort lh) srh (rsort rh)]
              (cond
                (empty? slh) (into srh res)
                (empty? srh) (into slh res)
                :else (if (< (first slh) (first srh))
                        (recur (cons (first slh) res) (rest slh) srh)
                        (recur (cons (first srh) res) slh (rest srh))))))))

Any suggestions on how to improve this code?

Answer: I tested with a couple of input values and for the most part, it seems fine. Note however, that the standard sort also works with e.g. [1 nil] (with output [nil 1]) whereas this code breaks with an exception during the comparison.

Code looks fine with a few minor issues:

- The name should be merge-sort; rsort is not meaningful.
- The values (first slh) and (first srh) are written down twice; the compiler might optimise that away, but IMO it would be nicer to have a separate let for them.
- Emacs' clojure-mode indents the :else branch differently, dunno about that.

Some suggestions:

- Support the same signature as the standard sort.
- Add a docstring explaining the function.
- Add tests, possibly with randomised input as well.
{ "domain": "codereview.stackexchange", "id": 16877, "tags": "algorithm, sorting, clojure, mergesort" }
NP-complete problems on posets?
Question: I'm in the midst of some doctoral research and trying to figure out a particularly tricky reduction. I think my best shot is to reduce from an NP-complete problem on posets, if one exists. I did some digging but did not find anything NP-complete; I found a couple of items that are FPT with respect to the diameter of the poset, but I don't think that's terribly helpful in my search. I know I may not find anything here, but I figure it doesn't hurt to ask. Does anyone know of any NP-complete problems on posets?

Answer: Let $P$ be a poset and $L$ a linear extension of $P$. We say $(x,y)$ is a jump in $L$ if $x$ and $y$ are incomparable in $P$ and there is no $z$ with $x <_L z <_L y$ (i.e., $x$ and $y$ are consecutive in $L$). Given a poset $P$ and an integer $k$, it is NP-hard to decide whether there is a linear extension with at most $k$ jumps. This is known as the Jump Number Problem. See, for example, this paper by Bouchitte and Habib:

Bouchitte, Vincent; Habib, Michel, NP-completeness properties about linear extensions, Order 4, No. 1-3, 143-154 (1987). ZBL0627.06005.

There are other problems which ask for extensions with some properties. For instance, given posets $P$ and $Q$, one may be interested in finding a linear extension of $P$ that disagrees with $Q$ as much as possible. This problem has some interesting applications and it has been shown to be NP-hard:

da Silva, Rodrigo Ferreira; Urrutia, Sebastián; dos Santos, Vinícius Fernandes, One-sided weak dominance drawing, Theor. Comput. Sci. 757, 36-43 (2019). ZBL1422.68186.
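To make the definition concrete, here is a tiny brute-force sketch in Python (an illustration only; the NP-hardness result is precisely the statement that nothing like this enumeration scales). The strict order is given as a set of pairs, assumed transitively closed:

```python
from itertools import permutations

def is_linear_extension(perm, lt):
    pos = {x: i for i, x in enumerate(perm)}
    return all(pos[x] < pos[y] for (x, y) in lt)

def jump_count(perm, lt):
    """Jumps: consecutive pairs of the extension that are incomparable in P."""
    comparable = set(lt) | {(y, x) for (x, y) in lt}
    return sum((x, y) not in comparable for x, y in zip(perm, perm[1:]))

def jump_number(elements, lt):
    """Minimum number of jumps over all linear extensions (exponential!)."""
    return min(jump_count(p, lt)
               for p in permutations(elements) if is_linear_extension(p, lt))

# Two disjoint 2-chains a < b and c < d: every linear extension must switch
# chains at least once, so the jump number is 1.
print(jump_number("abcd", {("a", "b"), ("c", "d")}))   # 1
```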
{ "domain": "cstheory.stackexchange", "id": 5647, "tags": "np-hardness, np-complete" }
Programming challenge "Friend Request in Social Network"
Question: I am trying to solve a programming problem on a coding platform. When I execute it on my PC, it works perfectly, but when I submit the code on the coding platform, it throws a "Time Limit Exceeded" error. Can someone check my solution and help optimize it?

In a social network, a person can invite friends of his/her friend. John wants to invite and add new friends. Complete the program below so that it prints the names of the persons to whom John can send a friend request.

Input format: The first line contains the value of N, which represents the number of friends. N lines follow, each containing the name of a friend F, followed by the number of friends of F, and finally their names, separated by spaces.

Input: 3 Mani 3 Ram Raj Guna Ram 2 Kumar Kishore Mughil 3 Praveen Naveen Ramesh

Output: Raj Guna Kumar Kishore Praveen Naveen Ramesh

Explanation: Ram is not present in the output as Ram is already John's friend.

My Approach: Extract the first word of each line and store them in a HashSet and remove them from the string. Names stored in the HashSet are already friends of the person (John). Now extract the names from the String using StringTokenizer and check whether each name is contained in the HashSet. If it is not present, then print it. Also, can we solve this problem using graphs/trees?

The problem statement and my code can be found here.
import java.util.HashSet; import java.util.Scanner; import java.util.StringTokenizer; class Ideone { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int N = sc.nextInt(); // No of friends sc.nextLine(); HashSet<String> hs = new HashSet<>(); // to store name of the John's friends String invitation = ""; for(int i =0; i < N; i++) { String str = sc.nextLine(); String friend = ""; int j =0; while(str.charAt(j)!= ' ') { friend = friend + str.charAt(j++); hs.add(friend); // add the name before the number to HashSet } j= j+2; invitation=invitation+str.substring(j)+" "; // remove name of John's friend from the string and store back remaining string } int j =0; StringTokenizer st = new StringTokenizer(invitation); // divide string into tokens while(st.hasMoreTokens()) { String str = st.nextToken(); if(!hs.contains(str)) { /* check if token(string) is already in hashset ie if the person already friend with John or not if he is not then print str */ System.out.print(str+" "); } } } } Answer: The big performance issue with the current solution is that you're relying on building a String to keep the list of persons that were sent a friend request. A String is immutable in Java so every single operation on it will result in the creation and allocation of a new String. When doing those sorts of repeated allocation in a loop, the performance degrades rapidly. We need another approach to the problem that doesn't rely on a simple String, and with that, a better data structure. From the problem description, we need to maintain a collection of the friends of John. To pick a suitable data structure, let's examine what we'll need to do with it: we want to add elements to it (each friend name read), and we want to check whether that collection contains a specific name (to not add it to the list of invited persons). Additionally, we don't care about a particular order for this collection. 
The perfect data structure for that is a HashSet, just like the one you used: it has constant-time add and contains methods. Then, we also need to maintain a collection of the invited persons, those that were sent the friend request. This is where you used a String and we can do better. Like the above, we want a data structure capable of adding elements (the names of the invited persons), but we also want it to be capable of removing elements (the friend names we might read further down the input). Also, we want to keep an order here, specifically, the order in which the elements were added. For that, LinkedHashSet is perfect: add and remove are both constant-time operations, and it keeps the insertion order. With that in mind, we can refactor the code to use those data structures:

Collection<String> friends = new HashSet<>(); // to store the names of John's friends
Collection<String> invited = new LinkedHashSet<>();
for (int i = 0; i < N; i++) {
    Scanner scanLine = new Scanner(sc.nextLine());
    String friend = scanLine.next();
    friends.add(friend);
    invited.remove(friend); // potentially remove this invited person as they are actually a friend
    scanLine.next(); // ignore the number of friends, we can deduce it anyway
    while (scanLine.hasNext()) {
        String name = scanLine.next();
        if (!friends.contains(name)) {
            invited.add(name);
        }
    }
}

Notice that I changed other things in there: When you have a line of input that is space-separated, you don't need to read it manually character by character. You can either use split(" "), which would return an array of all the space-separated tokens, or you can use another Scanner, which by default tokenizes over white space, meaning every call to Scanner.next() will return the next word. The variables have meaningful names: friends represents the collection of John's friends and invited represents the collection of persons to whom John sent a friend request.
At the end of the process, we have our wanted collection inside invited, so we can finally print it. For that, we can use String.join starting from Java 8, which is built-in.

System.out.println(String.join(" ", invited));

If you are on a lower Java version, you would need to write the for loop explicitly and use a StringBuilder to construct the result, as shown here. Don't use the old StringTokenizer: it isn't officially deprecated, but it is only kept for compatibility reasons and is considered a legacy class.
{ "domain": "codereview.stackexchange", "id": 21293, "tags": "java, programming-challenge, time-limit-exceeded" }
Specify the Stress Energy Tensor and Calculate the Curvature
Question: I have a simple question about general relativity and the Einstein field equations. I wonder if you can specify the stress-energy tensor, i.e. specify some mass distribution in space, and then calculate the curvature to later find equations of motion etc., instead of starting out with how the geometry would look. I am quite new to general relativity and so I am bound to have misconceptions.

Edit: 2013 December 19th. I have found this article which at page 10, Chapter 5, section 5.2 does something similar to what I meant; apparently there is a general form for the stress-energy tensor (for what is known as a perfect fluid(?)), and from it they derive something similar to the second component in the normal Schwarzschild metric, i.e. $$A(r)=(1-\frac{2U}{r})^{-1}$$ where $U$ is the energy. I do have one remaining question: the name of the general form of the stress-energy tensor confuses me somewhat. Is "perfect fluid" just its name, and is it still fully capable of describing the stress-energy tensor in general relativity?

Answer: In principle, yes, you can specify the stress tensor and solve the resulting equations, but in practice, this is hard to do because the field equations are non-linear PDEs...darn. The simplest possible example is the case in which the stress tensor vanishes, $T_{\mu\nu} = 0$, namely the vacuum equations. The field equations with vanishing cosmological constant then become \begin{align} R_{\mu\nu} = 0. \end{align} Manifolds with this property are called Ricci flat. Minkowski space $\mathbb R^{3,1}$ is such a manifold; it is Ricci flat everywhere, but so is, for example, the exterior of the Schwarzschild black hole. I am far from an expert on this stuff, but in my experience solutions for a given stress tensor are usually obtained by constructing an ansatz containing some free parameters (usually exhibiting certain symmetries), and then determining if the ansatz solves the equations for certain values of the parameters.
One is often forced to consider numerical solutions, and there is a whole industry dedicated to this: numerical relativity. You may also find this article interesting.
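On the terminology question asked above: "perfect fluid" is not just a name, it is a physical idealization (no viscosity, no shear stresses, no heat conduction), so it covers many common sources (stars, cosmological matter) but cannot describe anisotropic stresses. With metric signature $(-,+,+,+)$ it reads:

```latex
T_{\mu\nu} = (\rho + p)\, u_\mu u_\nu + p\, g_{\mu\nu}
```

where $\rho$ is the energy density, $p$ the pressure, and $u^\mu$ the fluid four-velocity; dust ($p=0$) and radiation ($p=\rho/3$) are special cases.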
{ "domain": "physics.stackexchange", "id": 10865, "tags": "homework-and-exercises, general-relativity" }
What will I see if launch a thing into a black hole?
Question: Suppose that I launch an object into a black hole from a secure distance; say the black hole is 2 meters away and floating over my yard - the details don't matter. What will I see? Will I see the object increase its speed and fall quickly into the hole? Or will I see it decrease its speed and fall slowly, ever more slowly?

Answer: You'll see the object at first accelerate towards the hole (under gravity) and then slow more and more as it approaches the event horizon. It will asymptotically freeze in place at the event horizon and then gradually shift redder and redder until it disappears. This is assuming that the black hole is big enough that the acceleration is similar across the body. For small black holes the tidal effects would rip the object up because of the great difference in acceleration between the parts of the object that are closer to the black hole and those that are further away.
{ "domain": "physics.stackexchange", "id": 25182, "tags": "general-relativity, black-holes, reference-frames, observers" }
Regarding (regular) kinetic energy and rotational kinetic energy
Question: In my physics class we saw this problem: A disc of mass $M$ and radius $r$ is standing vertically and can rotate freely about an axis that goes through its center of mass. A small particle of mass $m$ is attached to the top border of the disc. A small perturbation makes the disc rotate and so the particle goes down. Determine the angular velocity of the disc when the particle is at the lowest part. And my professor said this: As the energy is conserved in this case, we have that the initial energy $2r\,m\,g$ is equal to the final energy $\frac m 2 v^2+\frac {I}{2}\omega^2$... I don't understand why we have to put in the $\frac I2\omega^2$ part. When I was trying to solve the exercise I just put the (regular) kinetic energy $\frac {m}{2}v^2$ and I was told this was wrong, but I was not told why. By putting in the two kinetic energies, it feels like I'm counting the same thing twice, as the particle is just rotating! Could someone clear up my confusion?

Answer: The $\frac m 2 v^2$ term is the kinetic energy of the "small particle". The $\frac {I}{2}\omega^2$ is the rotational kinetic energy of the disc of mass $M$. You are just counting the kinetic energy of each mass once.
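Putting in numbers, and additionally assuming a uniform disc so that $I = \frac{1}{2}Mr^2$ (the problem doesn't say; this is my assumption for the sketch), with $v=\omega r$ for the particle on the rim, energy conservation gives $\omega = \sqrt{8mg/\big(r(2m+M)\big)}$:

```python
import math

def omega_final(M, m, r, g=9.81):
    """Angular speed when the particle reaches the bottom, from
    2*r*m*g = (1/2)*m*(omega*r)**2 + (1/2)*I*omega**2,
    assuming a uniform disc: I = (1/2)*M*r**2."""
    I = 0.5 * M * r ** 2
    return math.sqrt(2.0 * r * m * g / (0.5 * m * r ** 2 + 0.5 * I))

# Sanity check: for a massless disc (M -> 0) this reduces to a particle
# sliding down a circular guide through a height 2r: omega = sqrt(4*g/r).
print(omega_final(2.0, 0.5, 0.3))
```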
{ "domain": "physics.stackexchange", "id": 30890, "tags": "homework-and-exercises, newtonian-mechanics, energy, rotational-dynamics" }
General Relativity: change of coordinates in tangent space
Question: For starters, in the context of the tangent space of a manifold in GR, we can derive that: $$g'_{\mu \nu}=\frac{\partial x^\rho}{\partial x'^\mu}\frac{\partial x^\sigma}{\partial x'^\nu}g_{\rho \sigma} \ \ \ \ \ \ \ \ (1)$$ where of course $g$ is the metric tensor and where we have indicated with $'$ the objects in the new coordinate system. From here we can derive that: $$\partial '_\mu \cdot \partial '_\nu =\frac{\partial x^\rho}{\partial x'^\mu}\frac{\partial x^\sigma}{\partial x'^\nu}\partial _\rho \cdot \partial _\sigma \ \ \ \ \ \ \ \ (2)$$ where $\partial _\mu \ , \ \partial _\nu$ are the basis of the $\mathbb{M}^4$ tangent space of the manifold. (we can derive this because the metric tensor is defined as $g_{\mu \nu}=\partial _\mu \cdot \partial _\nu$) Then we can get: $$\partial '_\mu=\frac{\partial x^\sigma}{\partial x'^\mu} \partial _\sigma \ \ \ \ \ \ \ \ (3)$$ and at last: $$V'^\mu=\frac{\partial x'^\mu}{\partial x^\sigma}V^\sigma \ \ \ \ \ \ \ \ (4)$$ where $V$ is a vector of the tangent space. Ok, the tedious part is over, as you can see the topic in which I am interested regards change of coordinates in the tangent space of a manifold. Regarding all the above I have a couple of questions: The vectors $x$ in the partial derivatives are part of the manifold or part of the tangent space of the manifold? (I strongly suspect that the first option is the correct one, but I am not completely sure) We can derive $(3)$ from $(2)$ thanks to the linearity of the scalar product? 
The formula $(4)$ for the change of coordinates should be general, so it should apply in the special case of a manifold equal to $\mathbb{M}^4$; in this case we should get the tensor corresponding to the usual Lorentz's Transformation: $$\Lambda =\begin{bmatrix}\gamma & -\beta \gamma & 0 & 0\\-\beta \gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}$$ so we want to be able to prove that: $$\frac{\partial x'^\mu}{\partial x^\sigma}=\Lambda^\mu _\sigma$$ how should we do it? Answer: The numbers $x^\mu$ which appear in the partial derivatives are not vectors (or the components of vectors) at all, they are the coordinates in the two coordinate charts. (3) is simply the chain rule from elementary calculus. If your coordinate transformation is a simple Lorentz transformation on Minkowski space, then $x'^\mu = \Lambda^\mu_{\ \ \sigma} x^\sigma$, so the derivative is trivial.
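On the last point, a quick numerical check (my own sketch) is easier than a symbolic proof: since $\partial x^\rho/\partial x'^\mu = (\Lambda^{-1})^\rho{}_\mu$ and the inverse of a boost is again a boost, equation (1) applied to $g=\eta$ reduces to the statement $\Lambda^T \eta\, \Lambda = \eta$, which we can verify directly for the matrix given above:

```python
import math

def boost(beta):
    """Lorentz boost along x with velocity beta, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    return [[g, -beta * g, 0.0, 0.0],
            [-beta * g, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(row) for row in zip(*A)]

eta = [[-1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]

L = boost(0.6)
g_prime = matmul(transpose(L), matmul(eta, L))
# The transformed metric equals eta again, up to floating-point rounding:
print(max(abs(g_prime[i][j] - eta[i][j]) for i in range(4) for j in range(4)))
```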
{ "domain": "physics.stackexchange", "id": 69199, "tags": "general-relativity, special-relativity, metric-tensor, tensor-calculus, linear-algebra" }
extra dependencies listed for urdf_tutorials?
Question: While listing out the dependencies of the package urdf_tutorials using sudo apt-cache depends ros-kinetic-urdf-tutorials, I am obtaining the list : Depends: ros-kinetic-joint-state-publisher Depends: ros-kinetic-pr2-description Depends: ros-kinetic-robot-state-publisher Depends: ros-kinetic-rviz Depends: ros-kinetic-urdf Depends: ros-kinetic-xacro However, when I refer to the documentation (http://wiki.ros.org/urdf_tutorial), there are more dependencies listed. So, why aren't these extra ones displayed with sudo apt-cache depends. Also, cross checking with the package.xml file, I have noticed that this command is displaying only the runtime dependencies of a package. Is it the case? Originally posted by sam26 on ROS Answers with karma: 231 on 2017-03-01 Post score: 1 Answer: The current release of urdf_tutorial is 0.2.4 0.2.4 was released April 2015 Many of those extra dependencies came in June 2015. So, all that is to say, we're due for a new release. Originally posted by David Lu with karma: 10932 on 2017-03-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 27168, "tags": "ros, urdf-tutorial, dependencies" }
Clouds in closed hydrosphere
Question: Is it possible to estimate the needed size of a geodesic dome (like in the Eden Project) for creating a real hydrosphere - especially clouds (and rain)? In other words, under what circumstances can cloud formation happen in a closed space, isolated from the outer atmosphere?

Answer: The most important physics with respect to cloud formation happens in what is called the Atmospheric Boundary Layer (ABL). A lot of research is done in this field, since the effect of clouds is the major source of uncertainty in all climate prediction models. To get some sort of cloud formation, in my opinion you would need to have some kind of ABL inside your dome. The typical height of this ABL is ~800 m (in the morning) up to 2 km (in the evening). So if your dome is that large, you will be able to form clouds. But it is not that easy: heat fluxes (representing radiation), temperature gradient, humidity, pressure, condensation - everything should be controlled in the dome to spontaneously form these clouds. Another aspect to be aware of is the size of the base of the dome. This has to be larger than the largest turbulent structures in this domain, so to be sure to let these structures develop, it should be at least a few times the height of the dome. If this dome is too large for you, it is probably possible to make clouds in a smaller dome, but then you still need to capture the ABL. To achieve this, you need to change the conditions: increase temperatures and temperature gradients, replace the air by some other fluid, etc. It will in no case be pleasant for people. Actually, some experimental physicists have been able to model this ABL in a water tank (~1 m³). However, cloud formation was of course impossible to achieve in such an experimental set-up.
{ "domain": "physics.stackexchange", "id": 2238, "tags": "fluid-dynamics, water, air, climate-science" }
Why is infrared radiation associated with heat?
Question: I am a little confused about infrared radiation. I understand that when an object is hotter, it radiates electromagnetic waves of a higher frequency, and these waves are also more energetic; that is why blue stars are hotter than red stars. So that means that objects that radiate infrared are cooler, and their waves are less energetic than those of objects that radiate visible light. On the other hand, I have heard professors saying things like: "the reason why you feel the heat of a wood fire as you get closer is because it radiates infrared radiation"; nobody says the reason is that it radiates red radiation. Another example: "the old incandescent light bulb was inefficient because it radiates infrared light, so it wastes energy to radiate light that we don't even see, but at least it keeps the room warm"; nobody says the modern lamps are better because they radiate only visible light and, since visible light is more energetic than infrared, also keep you warmer. Is this a general misconception in physics? Answer: It sounds like you know some of the most important summary points about blackbody radiation, but here is a reference on the subject, since I will be talking almost entirely about blackbody radiation: https://en.wikipedia.org/wiki/Black-body_radiation Given any temperature, there is a certain emission spectrum (see https://en.wikipedia.org/wiki/Planck%27s_law) describing the mixture of photons which a body at that temperature emits. As you already noted, the peak frequency of this spectrum increases as the temperature increases. It is also of interest to note that an increase in temperature increases photon emission at ALL frequencies, not just higher frequencies. The short form answer to your question is that we live on Earth, where temperatures tend to be in the 200-400K range. Even a campfire doesn't make it much above 1500K. At all of these temperatures (yes, including the campfire) the VAST majority of the energy radiated is in the infrared range. 
If you put a filter between yourself and a campfire which absorbed all visible light and transmitted all infrared light, you would feel just as warm. So it is natural for us to associate infrared radiation with heat. The sun is the only everyday example of something that warms us noticeably with visible light, and sunlight already holds its own unique place in the human experience. Sunlight feels warm. A physicist can get out sensitive instruments and observe that in fact all light warms us slightly, but as far as what we can feel with our own nerves goes, it is only infrared and sunlight that seem warm. If we were plasma beings that inhabited the core of the sun, perhaps we would associate visible light (or some energetic subatomic particle or other...) with the transmission of thermal energy, but we aren't and we don't. In the end, all light transmits energy, and heat is just energy in the form of atoms exercising their degrees of freedom. So there is no clear cut distinction between the way infrared radiation interacts with heat and the way any other radiation does. But over most of the wide range of commonly studied environments, heat is mostly radiated as infrared photons. Thus the association. Regarding your thought about incandescent lights vs more efficient alternatives, we replaced our 60-100 Watt bulbs with 7-20 Watt bulbs, so they really don't warm us up anywhere near as much as the old ones. If we had replaced the bulbs with equivalent wattages, then your thought would be correct, but we would be blinded by our lamps!
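The claim that even a campfire radiates the vast majority of its energy in the infrared can be checked numerically from Planck's law. The sketch below is illustrative, not from the answer: it does a simple trapezoidal integration of the blackbody spectral emissive power over wavelengths longer than 700 nm and compares it with the Stefan-Boltzmann total; the constants are standard CODATA values.

```python
import math

# Physical constants (SI, standard CODATA values).
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
k = 1.380649e-23         # Boltzmann constant, J/K
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck(lam, T):
    """Blackbody spectral emissive power, W per m^2 per m of wavelength."""
    return (2 * math.pi * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def band_fraction(T, lam_lo, lam_hi, steps=20000):
    """Fraction of total blackbody power emitted between lam_lo and lam_hi,
    via trapezoidal integration divided by the Stefan-Boltzmann total."""
    dlam = (lam_hi - lam_lo) / steps
    total = 0.0
    for i in range(steps):
        a = lam_lo + i * dlam
        total += 0.5 * (planck(a, T) + planck(a + dlam, T)) * dlam
    return total / (sigma * T**4)

# A hot campfire at ~1500 K: how much of its output is infrared (lambda > 700 nm)?
ir_fraction = band_fraction(1500, 700e-9, 1e-3)
print(f"infrared fraction at 1500 K: {ir_fraction:.4f}")
```

At 1500 K the Wien peak sits near 1.9 μm, well into the infrared, so the visible share is a small fraction of a percent.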
{ "domain": "physics.stackexchange", "id": 30001, "tags": "electromagnetism, optics, visible-light, infrared-radiation" }
Make ros to link against Opencv3
Question: I have ROS Indigo and OpenCV 3.0.0 installed on my system and wanted to install Object Recognition Kitchen (ORK) with ork.rosinstall from their installation tutorial. I can't even compile ORK. Before, when I had OpenCV 2.4.8, everything was alright. I read somewhere that ROS is linked against OpenCV 2.4, so I wonder if it is even possible to run it with OpenCV 3. While compiling ORK I get these errors, which I didn't get when I had the 2.4.8 version of OpenCV. I could install the old version back, but that doesn't seem like a good solution to me. Base path: /home/tomas/ws Source space: /home/tomas/ws/src Build space: /home/tomas/ws/build Devel space: /home/tomas/ws/devel Install space: /home/tomas/ws/install #### #### Running command: "make cmake_check_build_system" in "/home/tomas/ws/build" #### #### #### Running command: "make -j8 -l8" in "/home/tomas/ws/build" #### [ 0%] Built target rosgraph_msgs_generate_messages_py [ 1%] Built target openni_wrapper [ 1%] Built target nav_msgs_generate_messages_cpp [ 1%] Built target roscpp_generate_messages_lisp [ 1%] [ 1%] [ 1%] Building CXX object ork_renderer/src/CMakeFiles/object_recognition_renderer_2d.dir/renderer2d.cpp.o Building CXX object opencv_candidate/src/opencv_candidate/CMakeFiles/opencv_candidate.dir/datamatrix.cpp.o Built target object_recognition_renderer_3d [ 1%] Built target sensor_msgs_generate_messages_cpp [ 1%] Built target sensor_msgs_generate_messages_lisp [ 1%] Building CXX object ecto_image_pipeline/src/CMakeFiles/ecto_image_pipeline.dir/calibration.cpp.o [ 1%] Built target sensor_msgs_generate_messages_py [ 1%] Built target topic_tools_generate_messages_cpp [ 4%] Built target ecto [ 4%] Built target geometry_msgs_generate_messages_py [ 4%] Built target geometry_msgs_generate_messages_cpp [ 4%] [ 4%] Built target topic_tools_generate_messages_lisp Building CXX object ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/cv_mat.cpp.o [ 4%] [ 5%] Built target 
geometry_msgs_generate_messages_lisp Building CXX object ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/cv_highgui.cpp.o [ 5%] Built target roscpp_generate_messages_py make[2]: *** No rule to make target `/usr/lib/x86_64-linux-gnu/libopencv_videostab.so.2.4.8', needed by `/home/tomas/ws/devel/lib/python2.7/dist-packages/ecto_opencv/cv_bp.so'. Stop. make[2]: *** Waiting for unfinished jobs.... [ 5%] Building CXX object ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/highgui_defines.cpp.o [ 5%] [ 5%] Built target rosgraph_msgs_generate_messages_lisp Built target topic_tools_generate_messages_py [ 5%] Built target actionlib_msgs_generate_messages_py [ 5%] Built target std_msgs_generate_messages_lisp [ 5%] Built target nav_msgs_generate_messages_py [ 5%] Built target rosgraph_msgs_generate_messages_cpp [ 5%] Built target std_msgs_generate_messages_cpp [ 5%] Built target actionlib_msgs_generate_messages_lisp [ 5%] Built target std_msgs_generate_messages_py [ 5%] Built target nav_msgs_generate_messages_lisp [ 5%] Built target actionlib_msgs_generate_messages_cpp [ 5%] Built target roscpp_generate_messages_cpp /home/tomas/ws/src/opencv_candidate/src/opencv_candidate/datamatrix.cpp:4:37: fatal error: opencv2/legacy/compat.hpp: No such file or directory #include <opencv2/legacy/compat.hpp> ^ compilation terminated. make[2]: *** [opencv_candidate/src/opencv_candidate/CMakeFiles/opencv_candidate.dir/datamatrix.cpp.o] Error 1 make[1]: *** [opencv_candidate/src/opencv_candidate/CMakeFiles/opencv_candidate.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... make[2]: *** No rule to make target `/usr/lib/x86_64-linux-gnu/libopencv_videostab.so.2.4.8', needed by `/home/tomas/ws/devel/lib/python2.7/dist-packages/ecto_ros/ecto_ros_main.so'. Stop. 
make[1]: *** [ecto_ros/src/CMakeFiles/ecto_ros_main_ectomodule.dir/all] Error 2 /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp: In member function ‘virtual void Renderer2d::render(cv::Mat&, cv::Mat&, cv::Mat&, cv::Rect&) const’: /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:126:17: error: ‘numeric_limits’ is not a member of ‘std’ float x_min = std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:126:37: error: expected primary-expression before ‘float’ float x_min = std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:132:5: error: ‘y_min’ was not declared in this scope y_min = std::min(res[1] / res[2], y_min); ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:138:44: error: ‘y_min’ was not declared in this scope T_img = cv::Matx33f(1, 0, -x_min, 0, 1, -y_min, 0, 0, 1) * T_img; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp: In member function ‘virtual void Renderer2d::renderDepthOnly(cv::Mat&, cv::Mat&, cv::Rect&) const’: /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:227:17: error: ‘numeric_limits’ is not a member of ‘std’ float x_min = std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:227:37: error: expected primary-expression before ‘float’ float x_min = std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:233:5: error: ‘y_min’ was not declared in this scope y_min = std::min(res[1] / res[2], y_min); ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:239:44: error: ‘y_min’ was not declared in this scope T_img = cv::Matx33f(1, 0, -x_min, 0, 1, -y_min, 0, 0, 1) * T_img; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp: In member function ‘virtual void Renderer2d::renderImageOnly(cv::Mat&, const Rect&) const’: /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:325:17: error: ‘numeric_limits’ is not a member of ‘std’ float x_min = 
std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:325:37: error: expected primary-expression before ‘float’ float x_min = std::numeric_limits<float>::max(), y_min = x_min; ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:330:5: error: ‘y_min’ was not declared in this scope y_min = std::min(res[1] / res[2], y_min); ^ /home/tomas/ws/src/ork_renderer/src/renderer2d.cpp:336:44: error: ‘y_min’ was not declared in this scope T_img = cv::Matx33f(1, 0, -x_min, 0, 1, -y_min, 0, 0, 1) * T_img; ^ make[2]: *** [ork_renderer/src/CMakeFiles/object_recognition_renderer_2d.dir/renderer2d.cpp.o] Error 1 make[1]: *** [ork_renderer/src/CMakeFiles/object_recognition_renderer_2d.dir/all] Error 2 [ 10%] Built target ecto_pcl_ectomodule /home/tomas/ws/src/ecto_image_pipeline/src/calibration.cpp: In function ‘void image_pipeline::calibrate_stereo(const observation_pts_v_t&, const observation_pts_v_t&, const object_pts_v_t&, const Size&, image_pipeline::PinholeCameraModel&, image_pipeline::PinholeCameraModel&)’: /home/tomas/ws/src/ecto_image_pipeline/src/calibration.cpp:27:76: error: cannot convert ‘cv::TermCriteria’ to ‘int’ for argument ‘13’ to ‘double cv::stereoCalibrate(cv::InputArrayOfArrays, cv::InputArrayOfArrays, cv::InputArrayOfArrays, cv::InputOutputArray, cv::InputOutputArray, cv::InputOutputArray, cv::InputOutputArray, cv::Size, cv::OutputArray, cv::OutputArray, cv::OutputArray, cv::OutputArray, int, cv::TermCriteria)’ cv::CALIB_FIX_INTRINSIC | flags); ^ make[2]: *** [ecto_image_pipeline/src/CMakeFiles/ecto_image_pipeline.dir/calibration.cpp.o] Error 1 make[1]: *** [ecto_image_pipeline/src/CMakeFiles/ecto_image_pipeline.dir/all] Error 2 In file included from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4, from 
/home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_mat.cpp:3: /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:31:12: error: ‘map’ in namespace ‘std’ does not name a type static std::map<std::string,PyMCallBackData*> callbacks_; ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:34:3: error: ‘map’ in namespace ‘std’ does not name a type std::map<std::string,PyMCallBackData*> PyMCallBackData::callbacks_; ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp: In function ‘void {anonymous}::setMouseCallback_(const string&, boost::python::api::object, boost::python::api::object)’: /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:41:9: error: ‘callbacks_’ is not a member of ‘{anonymous}::PyMCallBackData’ PyMCallBackData::callbacks_[windowName] = NULL; ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:49:5: error: ‘callbacks_’ is not a member of ‘{anonymous}::PyMCallBackData’ PyMCallBackData::callbacks_[windowName] = d; ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp: In function ‘void opencv_wrappers::wrap_video_capture()’: /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:64:61: error: no matches converting function ‘open’ to type ‘open_1 {aka bool (class cv::VideoCapture::*)(const class std::basic_string<char>&)}’ VideoCapture_.def("open", open_1(&cv::VideoCapture::open)); ^ In file included from /usr/local/include/opencv2/highgui.hpp:48:0, from /usr/local/include/opencv2/highgui/highgui.hpp:48, from /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_highgui.cpp:8: /usr/local/include/opencv2/videoio.hpp:419:26: note: candidates are: virtual bool cv::VideoCapture::open(int) CV_WRAP 
virtual bool open(int device); ^ /usr/local/include/opencv2/videoio.hpp:414:26: note: virtual bool cv::VideoCapture::open(const cv::String&) CV_WRAP virtual bool open(const String& filename); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp: In function ‘void opencv_wrappers::wrap_highgui_defines()’: /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp:124:54: error: ‘CV_CAP_PROP_WHITE_BALANCE_U’ was not declared in this scope opencv.attr("CV_CAP_PROP_WHITE_BALANCE_U") = int(CV_CAP_PROP_WHITE_BALANCE_U); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp:129:48: error: ‘CV_CAP_PROP_MONOCROME’ was not declared in this scope opencv.attr("CV_CAP_PROP_MONOCROME") = int(CV_CAP_PROP_MONOCROME); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp:139:54: error: ‘CV_CAP_PROP_WHITE_BALANCE_V’ was not declared in this scope opencv.attr("CV_CAP_PROP_WHITE_BALANCE_V") = int(CV_CAP_PROP_WHITE_BALANCE_V); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp:169:53: error: ‘CV_CAP_ANDROID_COLOR_FRAME’ was not declared in this scope opencv.attr("CV_CAP_ANDROID_COLOR_FRAME") = int(CV_CAP_ANDROID_COLOR_FRAME); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/highgui_defines.cpp:170:52: error: ‘CV_CAP_ANDROID_GREY_FRAME’ was not declared in this scope opencv.attr("CV_CAP_ANDROID_GREY_FRAME") = int(CV_CAP_ANDROID_GREY_FRAME); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_mat.cpp: In function ‘std::string {anonymous}::tostr(cv::Mat&)’: /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_mat.cpp:311:32: error: invalid conversion from ‘const char*’ to ‘int’ [-fpermissive] cv::Formatter::get("python")->write(ss,m); ^ In file included from /usr/local/include/opencv2/core/core.hpp:48:0, from /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_mat.cpp:10: /usr/local/include/opencv2/core.hpp:2874:27: error: initializing argument 1 of ‘static cv::Ptr<cv::Formatter> 
cv::Formatter::get(int)’ [-fpermissive] static Ptr<Formatter> get(int fmt = FMT_DEFAULT); ^ /home/tomas/ws/src/ecto_opencv/cells/cv_bp/opencv/cv_mat.cpp:311:35: error: ‘class cv::Formatter’ has no member named ‘write’ cv::Formatter::get("python")->write(ss,m); ^ make[2]: *** [ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/highgui_defines.cpp.o] Error 1 make[2]: *** [ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/cv_highgui.cpp.o] Error 1 make[2]: *** [ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/cv_mat.cpp.o] Error 1 make[1]: *** [ecto_opencv/cells/cv_bp/opencv/CMakeFiles/opencv_boost_python.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j8 -l8" failed Originally posted by sykatch on ROS Answers with karma: 109 on 2015-12-08 Post score: 0 Answer: This problem of using OpenCV 3.0 in ROS-indigo has already been discussed here Originally posted by Willson Amalraj with karma: 206 on 2015-12-08 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Will Chamberlain on 2018-09-27: Also see https://github.com/ros-perception/image_pipeline/issues/176 for detection of "2.4.8' apparently hardcoded in the ROS code.
{ "domain": "robotics.stackexchange", "id": 23181, "tags": "ros, opencv3" }
Periodic Wave formula; need explanation?
Question: I was listening to a Quantum Mechanics lecture and there was an explanation of waves; to be exact, of periodic waves. Their formula is v (speed) = lambda (wavelength) × f (frequency, cycles/second). The professor said that, for example, if the wavelength is shorter, the frequency becomes higher, and if the wavelength is longer, the frequency becomes lower. I don't get it; I thought that if we have an object with many cycles per second, such as 50 c/s, then it has much more opportunity than one at 30 c/s to reach a far place, as far as I know... Could someone please explain what the professor means? Answer: Imagine a wave that advances 10 meters per second. If every second a wave crashes on the shore, the frequency is one per second (1 hertz). The wavelength is then 10 meters, since each second the wave advances 10 meters and therefore the waves are 10 meters apart. If we halve the wavelength (the wave still advances at 10 meters per second), then the waves are 5 meters apart. If waves 5 meters apart advance at 10 meters per second, then 2 waves will crash on the shore per second. This means that the frequency of the waves has doubled. Conclusion: if the speed of a wave in a medium is constant, then doubling the frequency is equivalent to dividing the wavelength by two. Constant = frequency * wavelength; for electromagnetic waves: speed of light = frequency * wavelength.
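The shoreline arithmetic in the answer can be written out in a couple of lines; this is just the relation v = λf rearranged, nothing beyond what the answer states:

```python
def frequency(speed, wavelength):
    """Rearranged wave relation: v = lambda * f  =>  f = v / lambda."""
    return speed / wavelength

# The answer's shoreline example: waves advancing at 10 m/s.
print(frequency(10, 10))  # wavelength 10 m -> 1 wave per second
print(frequency(10, 5))   # halve the wavelength -> frequency doubles to 2
```

Because the speed is fixed by the medium, frequency and wavelength are forced to move in opposite directions.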
{ "domain": "physics.stackexchange", "id": 10670, "tags": "waves" }
how can I modify the value of a transform when playing back a bag file?
Question: Just like there are message filters, would it be possible to make tf filters? I often find I want to modify the value of a single node in the tf tree when I have a bagfile, or to eliminate broadcasts on tf altogether. It also could be useful to associate/utilize multiple tf's together, even when broadcast at different rates. Like ApproximateTime can do with messages Originally posted by phil0stine on ROS Answers with karma: 682 on 2011-06-03 Post score: 0 Answer: There's a tool called tf_remap which does approximately what you want. You can use the mappings to rename the unwanted frames and then republish them using another process. Originally posted by tfoote with karma: 58457 on 2011-06-06 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 5750, "tags": "ros, geometry, transform, time" }
When we talk about natural frequency of an object in the context of resonance, what exactly is vibrating, the electrons or the entire atoms?
Question: And how exactly do objects acquire "natural frequencies"? Is it due to the temperature and the lattice structure (the type of bonds they form with other atoms)? And thus, is resonance just a phenomenon that preserves energy within the lattice than leak it outside? And is the below illustration accurate for explaining resonance? Case 1: A hollow metal sphere. Say its natural frequency is 20 Hz. So, if we hit it with a hammer and the resultant energy (by accident/chance) makes it vibrate at 20 Hz, then it takes $n$ seconds to cool down and loses $x$ Joules to the atmosphere as heat energy (per second). Case 2: Same sphere as above. We hit it with another hammer but more forcibly. It vibrates at 22 Hz in the beginning, and because that's not its natural frequency, it comes to a standstill at $n+4$ seconds, losing $x+4$ Joules as heat energy in the process (per second). Case 3: Same sphere as above. We hit it with a smaller hammer, lightly. It initially vibrates at 18 Hz. Since that's also not the right frequency as the natural one, it loses energy fast but not as fast as Case 1, as the total energy, in this case, is also low (Law of Equilibrium says that one side has too much energy, it loses energy fast to reach the equilibrium). Say it takes $n+2$ seconds and loses $x+2$ energy per second. What if we increase the impact in Case 1 by a thousand times (assuming the object doesn't break)? Will the resonant frequency still last longer than the frequency generated by the impact? Is there a threshold that tells the exact amount of energy that is required to cross this resonant threshold of losing energy? Answer: We will take the case of mechanical vibrations and resonance. This will let me answer the question posed in the title line of your post. 
If you attach something with mass to something with springiness and then kick it, you set the mass into motion and it compresses the spring until the spring has stored up all the kinetic energy in the mass, at which point the mass has stopped moving. Then the spring pushes back on the mass, reversing its direction of motion, and it stops pushing when the mass is back at its starting position. But now the mass has kinetic energy and it overshoots its starting position and is now pulling on the spring, which pulls back on the mass until the spring is stretched out and the mass has come to a halt... and this process goes on, over and over, until the friction in the system has dissipated the energy you put in with your first kick. In a mechanical system, the atoms all move together as a single mass, and their electrons (necessarily) move with them in unison. So the necessary conditions for imparting a resonant frequency to a system is that it contain an inertance coupled with a compliance. A big mass connected to a soft spring will yield a low resonant frequency and a small mass coupled to a stiff spring will yield a high resonant frequency.
{ "domain": "physics.stackexchange", "id": 81514, "tags": "solid-state-physics, oscillators, resonance, vibrations" }
Logging Entity Framework Changes - improve performance
Question: I have the following code to log changes using Entity Framework 6. At the moment it only logs changes to a particular table - but I intend to expand it to cope with any table. I'm not sure of the best way to extract the table name. I call this code before Db.SaveChanges. Db is the DBContext. private void LogHistory() { var entries = this.Db.ChangeTracker.Entries() .Where(e => e.State == EntityState.Added || e.State == EntityState.Modified || e.State == EntityState.Deleted) .ToList(); foreach (var entry in entries) { foreach (string o in entry.CurrentValues.PropertyNames) { var property = entry.Property(o); var currentVal = property.CurrentValue == null ? "" : property.CurrentValue.ToString(); var originalVal = property.OriginalValue == null ? "" : property.OriginalValue.ToString(); if (currentVal != originalVal) { if (entry.Entity.GetType().Name.Contains("PropetyPair")) { // make and add log record } } } } } Answer: Not a big deal, but this: var currentVal = property.CurrentValue == null ? "" : property.CurrentValue.ToString(); var originalVal = property.OriginalValue == null ? "" : property.OriginalValue.ToString(); Can be shortened to this: var currentVal = (property.CurrentValue ?? string.Empty).ToString(); var originalVal = (property.OriginalValue ?? string.Empty).ToString(); I'm not sure of the best way to extract the table name. You're not doing that. You're looking at the name of the type of the entity - being an object/relational mapper, EF maps that entity to a table; if you need the table name then you need to look at the entity's table mappings. I intend to expand it to cope with any table How about having an IEnumerable<Type> where you have all the entity types you want to log changes for, and then instead of getting the type's name and comparing with some magic string, you can just do _monitoredTypes.Contains(typeof(entry.Entity)) (not tested, just a thought). 
In terms of performance, you're looping too much: if (entry.Entity.GetType().Name.Contains("PropetyPair")) This condition doesn't need string o, as it's working off entry - thus, you can move that condition up two levels and only enter the 2nd loop when the entity type is interesting.
{ "domain": "codereview.stackexchange", "id": 5782, "tags": "c#, performance, entity-framework" }
Is it possible that every single isotope is radioactive, and isotopes which we call stable are actually unstable but have an extremely long half-life?
Question: I've read that tellurium-128 has a half-life of $2.2 \times 10^{24}$ years, much bigger than the age of the universe. So I've thought that maybe every single isotope of every single atom is radioactive, and isotopes which we call "stable" are actually unstable, but their half-lives are immensely long (but not infinite), like $10^{100}$ years. Is this a possible theory or are we truly $100$% sure that stable isotopes are really eternal? Answer: If protons decay, then what you say is true: all atomic nuclei are indeed unstable, and a so-called "stable" nucleus simply has too long a half-life for its decay to be observed. The most tightly bound nucleus is $^{62}$Ni, with a binding energy per nucleon of 8.79 MeV [source], which is less than 1% of the mass of a nucleon. On the other hand, the decay of a proton through a process such as $$p \to e^+ + \pi^0$$ results in the loss of most of the mass of the proton. So if the proton can decay then it's pretty clear that an atomic nucleus always has much more mass than a hypothetical final state in which some or all of the protons have decayed. In other words, while neutrons do not decay inside "stable" atomic nuclei because of the binding energy of the nucleus, protons cannot be so protected because their decay would be much more energetically favourable (than that of a neutron to a proton). The question of whether protons do decay is still unresolved, as far as I know. If protons do not decay, then the $^1$H nucleus, by definition, is stable, so there is at least one stable nucleus. Now, you might be wondering how we can establish that a nucleus is stable (assuming no proton decay). We make the assumption that energy is conserved, and it's impossible for a nucleus to be created if there isn't enough energy in the system to make up its rest mass. Given that assumption, say we have a nucleus. 
If we know the masses of the ground states of all nuclei with an equal or smaller number of nucleons, then we can rule out the possibility of there being a state that the given nucleus can transform into with less total mass. That in turn guarantees that the given nucleus is stable, since it can't decay into a final state with greater mass without violating conservation of energy. For a simple example, consider a deuteron, $^2$H. Its minimal possible decay products would be: a proton plus a neutron; two protons (plus an electron and an electron antineutrino); two neutrons (plus a positron and an electron neutrino); a diproton (plus an electron and an electron antineutrino); a dineutron (plus a positron and an electron neutrino). But all of those states have higher mass than the deuteron, so the deuteron is stable; it has no decay channel. Of course, you might wonder whether there are possible daughter nuclei whose masses we don't know because we've never observed them. Could, say, the "stable" $^{32}$S decay into $^{16}$P (with 15 protons and 1 neutron) and $^{16}$H (with 1 proton and 15 neutrons)? After all, we don't know the masses of these hypothetical nuclei. But if nuclei so far away from the drip line actually have masses low enough for that to happen, then there would have to be some radically new, unknown nuclear physics that would allow this to happen. Within anything remotely similar to existing models, this simply isn't possible.
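The deuteron argument can be made concrete with a few rest masses. The values below are standard figures in MeV/c², rounded; the diproton and dineutron channels from the answer are omitted here as an assumption, since those systems are unbound and at least as heavy as their separated constituents:

```python
# Rest masses in MeV/c^2 (standard values, rounded); neutrino masses ~ 0.
m_p, m_n, m_e = 938.272, 939.565, 0.511
m_deuteron = 1875.613

# Minimal candidate final states for deuteron decay, as total rest mass:
channels = {
    "p + n":               m_p + m_n,
    "2p + e- + anti-nu":   2 * m_p + m_e,
    "2n + e+ + nu":        2 * m_n + m_e,
}

for name, mass in channels.items():
    # Energy conservation forbids decay into a heavier final state.
    verdict = "forbidden" if mass > m_deuteron else "allowed"
    print(f"{name}: {mass:.3f} MeV ({verdict})")
```

Every channel comes out heavier than the deuteron's 1875.613 MeV, which is exactly the "no decay channel" conclusion of the answer.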
{ "domain": "physics.stackexchange", "id": 63761, "tags": "atomic-physics, atoms, radioactivity, stability, isotopes" }
Quantum advantage with only Clifford gates (Gottesman Knill theorem)
Question: Let's say I want to solve a computational task whose input can be encoded in $n$ bits of information. The search for a quantum advantage (usually) means finding a quantum algorithm that requires exponentially fewer gates and qubits than the best known classical algorithm. The Gottesman-Knill theorem shows that it is possible to simulate in polynomial time a quantum algorithm composed of Clifford gates only. For this reason, it removes the ability to find a quantum advantage with circuits composed only of such gates (the non-Clifford gates are very "costly" in terms of physical resources). However, if a classical algorithm requires (for instance) $O(n^{800})$ gates while the quantum one requires $O(n)$, the gain with the quantum algorithm would still be phenomenal. My question is thus: Are there examples of quantum algorithms composed only of Clifford operations that show "for all practical purposes" a clear advantage in computational speed over the best known classical algorithm? A reduction in the "same spirit" as the $n^{800} \to n$ one, for instance. Such a result would be interesting because fault-tolerant quantum computing can be efficiently implemented with only Clifford gates. Also, my formulation of the quantum advantage is probably a bit "handwavy", so if you believe it is not entirely correct I would be interested in a better way to phrase it. Answer: Are there examples of quantum algorithms only composed of Clifford operations that show [...] A reduction in the "same spirit" of the $n^{800}→n$ for instance. No. An $n$ qubit Clifford+measure circuit with $m$ operations can be simulated in $O(n^2m)$ time (arXiv:quant-ph/0406196) with small constant factors (arXiv:2103.02202).
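The polynomial-time simulation behind the answer can be illustrated with a toy stabilizer tracker. This is only a sketch, not the full Aaronson-Gottesman tableau of arXiv:quant-ph/0406196: it tracks stabilizer generators under H and CNOT only (no measurement), using the standard bit-update rules. Each generator is a Pauli string stored as bit vectors, and each gate touches O(1) bits per generator, which is where the polynomial scaling comes from:

```python
# Each stabilizer generator is [x bits, z bits, sign], one (x, z) pair per
# qubit: (0,0)=I, (1,0)=X, (0,1)=Z, (1,1)=Y.  Clifford gates act by cheap
# bit updates, so an n-qubit circuit simulates in polynomial time.

def zero_state(n):
    # |0...0> is stabilized by Z_1, ..., Z_n.
    gens = []
    for i in range(n):
        x, z = [0] * n, [0] * n
        z[i] = 1
        gens.append([x, z, 0])
    return gens

def hadamard(gens, q):
    # H swaps X and Z on qubit q; Y picks up a sign.
    for g in gens:
        x, z = g[0], g[1]
        g[2] ^= x[q] & z[q]
        x[q], z[q] = z[q], x[q]

def cnot(gens, a, b):
    # CNOT with control a, target b: X propagates a->b, Z propagates b->a.
    for g in gens:
        x, z = g[0], g[1]
        g[2] ^= x[a] & z[b] & (x[b] ^ z[a] ^ 1)
        x[b] ^= x[a]
        z[a] ^= z[b]

def pauli_string(g):
    label = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
    x, z, s = g
    return ("-" if s else "+") + "".join(label[(xi, zi)] for xi, zi in zip(x, z))

# Prepare a Bell pair: H on qubit 0, then CNOT 0 -> 1.
gens = zero_state(2)
hadamard(gens, 0)
cnot(gens, 0, 1)
print([pauli_string(g) for g in gens])  # ['+XX', '+ZZ']
```

Two bit vectors per generator suffice to describe an entangled state exactly, which is the essence of why Clifford-only circuits cannot, by themselves, yield an exponential advantage.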
{ "domain": "quantumcomputing.stackexchange", "id": 3187, "tags": "quantum-algorithms, complexity-theory, quantum-advantage, stabilizer-state" }
Commutator relationships and the exponential
Question: I am currently trying to prove that the two following commutator relationships are equivalent (for an operator $\hat{A}(s)$ that depends on a continuous parameter $s$), so that if one holds, the other should hold as well: $$0=\left[\frac{d\hat{A}(s)}{ds},\exp(\hat{A}(s))\right]$$ and $$0=\left[\frac{d\hat{A}(s)}{ds},\hat{A}(s)\right].$$ Proof from the second relation to the first That the second equation implies the first one is easy to see. Just start from the first equation and fill in the series expansion of the exponential, yielding: $$0=\sum\limits_{n=0}^\infty\frac{1}{n!}\left[\frac{d\hat{A}(s)}{ds},\hat{A}^n(s)\right].$$ The above commutator can be expanded in terms of the second commutator, proving that if the second relation holds, the first one is implied. Proof from the first equation to the second The other way around is much harder, and it's here that I am stuck! I can again insert the series and expand the first relation as: $$0=\left[\frac{d\hat{A}(s)}{ds},\exp(\hat{A}(s))\right]=\left[\frac{d\hat{A}(s)}{ds},\hat{1}\right]+\left[\frac{d\hat{A}(s)}{ds},\hat{A}(s)\right]+\frac{1}{2}\left[\frac{d\hat{A}(s)}{ds},\hat{A}(s)^2\right]+...,$$ but I don't see how this might help me. So my question is: do these relations truly imply each other? And if not, what are the conditions under which they do? Answer: Theorem. Let $B :D(B) \to H$, $A: D(A) \to H$ be densely defined self-adjoint operators in the Hilbert space $H$. Suppose that $e^{A}$ is bounded (it happens in particular if $\sigma(A)$ is bounded from above). $$Be^A\psi = e^AB\psi$$ for every $\psi \in D(B)$ is equivalent to the fact that $B$ commutes with the spectral measure of $e^A$, which in turn is equivalent to the fact that $B$ commutes with the spectral measure of $A$. Therefore $BA\psi = AB\psi$ whenever both sides are defined. Applying this theorem you prove (2) out of (1). 
I think the statement is valid also if $A$ and $B$ are closed and normal provided $e^A$ is bounded, but I do not have spare time to produce a proof.
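The self-adjointness hypothesis in the theorem is not decorative. As a small numerical illustration (a sketch with a deliberately non-self-adjoint, anti-Hermitian $A$ whose eigenvalues differ by $2\pi i$, so that $e^A = \hat{1}$):

```python
import numpy as np

# A counterexample showing why self-adjointness matters: this A is
# anti-Hermitian (purely imaginary spectrum), NOT self-adjoint.
A = np.diag([0.0 + 0.0j, 2j * np.pi])
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

# exp of a diagonal matrix is the elementwise exponential of the
# diagonal: exp(0) = 1 and exp(2*pi*i) = 1, so exp(A) = identity.
expA = np.diag(np.exp(np.diag(A)))

comm_exp = B @ expA - expA @ B   # [B, exp(A)] vanishes trivially
comm_A = B @ A - A @ B           # but [B, A] does not

print(np.allclose(comm_exp, 0))  # True
print(np.allclose(comm_A, 0))    # False
```

Here the first commutator relation holds while the second fails, so the implication from (1) to (2) genuinely needs a real spectrum (or the conditions stated in the theorem).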
{ "domain": "physics.stackexchange", "id": 29009, "tags": "quantum-mechanics, homework-and-exercises, operators, lie-algebra, commutator" }
Do black holes have zero stress-energy tensor?
Question: The Einstein field equations are, in geometrized units: $$G_{\mu \nu} = 8 \pi T_{\mu \nu}$$ I know that black holes (take the simplest case of a Schwarzschild black hole) are vacuum solutions to the Einstein field equations. Does this imply that $T_{\mu \nu} = 0$? If so, how then does $G_{\mu \nu}$ have nonzero components? If not, what then does $T_{\mu \nu}$ physically represent for a Schwarzschild black hole? Answer: There is more than one usage of the term "vacuum" in physics, which will explain why you may get more than one answer. First of all, the simplest black hole, called Schwarzschild black hole, is a vacuum solution and it gives for the Einstein tensor $$ G_{ab} = 0 $$ everywhere that this quantity is well-defined, which is to say everywhere except at the singularity. But note, the Einstein tensor is not the complete information about the spacetime curvature. It is a sum of curvature components in various directions. The Riemann curvature tensor $R^a_{\;bcd}$ is not zero anywhere for this solution (but it tends to zero in the limit $r \rightarrow \infty$, i.e. far from the black hole). The situation is not so different from more familiar physics, where you can have a solution of Laplace's equation $\nabla^2 \phi = 0$ but this does not necessarily imply that $\phi$ itself is zero. Now to expand a little to other considerations. First, the Schwarzschild black hole has mass (in the sense that other things will orbit it or more generally be attracted to it) and one would like to be able to say where this mass is located. But the stress-energy tensor is zero everywhere ($T_{ab} = 0$) except at the singularity. So it seems we have to say that once any process of collapse which created the black hole has settled down, so that the Schwarzschild metric holds everywhere, then the mass is located at the very place where our theory breaks down! Oh dear. But we can live with this situation as far as practical physics is concerned. 
In practice, in order to understand how the black hole influences bodies around it, it is enough to say that the mass is located inside (or beyond) the horizon in a spherically symmetric way. More fully, one has to see the black hole as a dynamic not a completely static entity, because the metric within the horizon is not static. The goings-on within the horizon are of interest from a theoretical point of view, but have strictly no impact on the rest of spacetime. The mass which influences events outside the horizon is the mass in the past light cone of those events---the mass which was at some stage collapsing before the horizon formed. (This paragraph, and a tweak to the previous one, were added after an exchange of comments with safesphere.) Finally, a brief comment on Kerr and Reissner-Nordstrom solutions. The former has $T_{ab} = 0$, the latter does not (and all bets are off at the singularity). Therefore, from a GR point of view one would say that the former is a vacuum solution and the latter is not (and I am taking cosmological constant zero throughout). But some people might want to call a region of space with an electric field but nothing else in it a 'vacuum'. That would be quite common terminology for people not interested in the specifically gravitational effects.
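The point above, that the Riemann tensor is nonzero even though $G_{ab} = 0$, can be made quantitative by quoting the standard result for the Kretschmann scalar of the Schwarzschild solution (geometrized units, $G = c = 1$):

$$R_{abcd}R^{abcd} = \frac{48 M^2}{r^6},$$

which is nonzero at every finite $r$ and diverges as $r \to 0$, while the Einstein tensor vanishes everywhere away from the singularity.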
{ "domain": "physics.stackexchange", "id": 67604, "tags": "general-relativity, black-holes, metric-tensor, vacuum, stress-energy-momentum-tensor" }
Species Identification: Spider
Question: I usually would search for myself, but I need to be super sure about the identification because I need to know if I need to get an exterminator in here or not. (I think it might be a giant house spider, but don't let me influence your analysis.) So I live in the northern part of Ireland (about 40 minutes from Derry, Northern Ireland, over in County Donegal) and I went to my bathroom to a VERY fun surprise this morning. It was this wonderful spider in my bathroom. For reference, the lip of the glasses it is depicted in is 8 - 8.5 cm in diameter. The missing leg was not my fault either; I was very gentle with the spider and released it outside post photo-shoot. (Warning: these are macro shots of the said spider) More high resolution photos available on my Flickr (SolarLunix). Answer: The images in the question are of a giant house spider, as hypothesized in the question (Eratigena duellica/atrica/saeva). These (along with the hobo spider, Eratigena agrestis) do not have any banding or annulations on their legs (dark/light stripes/rings). The spider in the other answer, Tegenaria domestica, does (always) exhibit leg banding, so it is not the spider in question. Both Eratigena and Tegenaria are in the funnel weaver family.
{ "domain": "biology.stackexchange", "id": 11314, "tags": "species-identification, arachnology" }
Entropy change in a cycle with two isochoric and two adiabatic processes
Question: Prove that the change of entropy in a cycle with two isochoric and two adiabatic processes is 0. How can I prove that? Thanks! Answer: We don't give full solutions here. But here is a hint: write $dS=\frac{\delta q_\text{rev}}{T}$ and apply the first law. Or just say: entropy is a state function, so it does not change in a cyclic process.
{ "domain": "physics.stackexchange", "id": 2518, "tags": "homework-and-exercises, thermodynamics, entropy" }
Connect ROS with RobotStudio via abb_robot_driver
Question: Hello, I installed the abb_robot_driver node from here in my workspace, and it built without any errors. I am running: Ubuntu 20.04, kernel: 5.14.0-1054-oem, ROS Noetic. I have now installed RobotStudio on a separate Windows PC, and RobotWare 7 is also installed. I open the robot I want (ABB IRB4600-40/255) and create a controller. It is indicated that the controller is running. When I now try to use the first example of the abb_robot_driver here, which is: roslaunch abb_robot_bringup_examples ex1_rws_only.launch robot_ip:=<WindowsPC IP> I get the warning [ WARN] [1667469851.804625427]: Failed to establish RWS connection to the robot controller (attempt 1/5), reason: 'Failed to collect general system info' Until I receive: [FATAL] [1667469856.931221918]: Runtime error: 'Failed to establish RWS connection to the robot controller' I am pretty sure I did not set up everything correctly, but I was also not able to find a tutorial on how to do so. Can anybody give me a hint? Thank you. Cheers Originally posted by joff on ROS Answers with karma: 40 on 2022-11-03 Post score: 0 Answer: Hi, here is my update: I shut off all firewalls of the Windows PC (I switched to a VM). I was able to ping from both directions this way. I still was not able to connect to the virtual controller of RobotStudio. For that I needed to follow this instruction here. Afterwards I was able to launch the first example of the bringup. I still have issues with the bringup examples, but I'll open another question for that. Thanks for the help! Originally posted by joff with karma: 40 on 2022-11-09 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 38100, "tags": "ros" }
Is interpolation of an audio signal to increase frequency resolution possible?
Question: I apologize if some of what I ask is not entirely correct, I'm new to this field, but extremely interested. I have an audio signal with a sample rate of 44.1 kHz that I want to segment into 30 frames, and get the DFT of each frame to find the magnitude of certain frequencies for that frame. However, this means that I have a frequency resolution of 30 Hz in each bin, which isn't narrow enough. Is it possible to interpolate the data to attain more data samples? As far as I'm aware, doubling the number of points would give an 88.2 kHz sampling rate, but still give a frequency resolution of 30 Hz. Would it be possible to treat the interpolated data as still having a sample rate of 44.1 kHz? Answer: The (very) short answer is no, interpolation does not increase resolution: no new data, no new information (Note that strictly speaking, the usage of "resolution" is not appropriate according to common definitions of the term. Check also the comment of hotpaw2, and also his answer). A longer answer needs the following spectrum visualization (time domain, continuous frequency domain, and discrete frequency domain from left to right). I assume no aliasing for the sake of simplicity and therefore clarity. The interpolation technique preserves the information of the spectrum. The output of the DFT is a set of frequency bins spanning from $-f_s/2$ to $+f_s/2$. First, look at the continuous frequency domain: if you upsample your signal correctly, it is equivalent to increasing the sampling frequency. Doubling the number of data points just removes one-half of the spectrum replicas created by the sampling process. Now, look at the discrete frequency domain. This version is the continuous frequency counterpart normalized to the range $-f_s/2$ to $+f_s/2$. As $f_s$ is doubled, the spectrum is then shrunk by a factor of $1/2$. If we call $0 < \alpha < 1$ the proportion of non-zero frequencies before upsampling, it becomes $\alpha/2$ after upsampling.
Before upsampling, the DFT gives you $N$ bins, of which $N\alpha$ cover the spectrum; after upsampling it is $2N$ bins for $\alpha/2$, which is still $2N \times \alpha/2 = N\alpha$ bins for the same spectrum. No, your resolution does not change at all. To get a "better" resolution, the only way is to add more data. In your example, instead of dividing into 30 frames, divide your audio file into 15 frames.
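A quick numpy sketch of this point (the frame size and the two tone frequencies, 20 Hz apart, are made-up illustrative values): zero-padding the FFT, the usual way of "interpolating" a spectrum, reproduces the original bins exactly and only adds points between them, whereas actually recording more data does resolve the two tones.

```python
import numpy as np

fs = 44100
f_a, f_b = 1000.0, 1020.0            # two tones only 20 Hz apart (made up)

def spectrum(n_samples, n_fft=None):
    t = np.arange(n_samples) / fs
    x = np.sin(2 * np.pi * f_a * t) + np.sin(2 * np.pi * f_b * t)
    n_fft = n_fft or n_samples
    return np.fft.rfftfreq(n_fft, 1 / fs), np.abs(np.fft.rfft(x, n=n_fft))

def count_peaks(f, X, lo=900.0, hi=1100.0):
    # strict local maxima above half the band maximum
    m = (f > lo) & (f < hi)
    Xb = X[m]
    th = Xb.max() / 2
    return sum(Xb[i] > th and Xb[i] > Xb[i - 1] and Xb[i] > Xb[i + 1]
               for i in range(1, len(Xb) - 1))

N = 1470                              # one frame: 44100 / 1470 = 30 Hz bins
f1, X1 = spectrum(N)                  # plain FFT of the short frame
f4, X4 = spectrum(N, n_fft=4 * N)     # same frame, zero-padded ("interpolated")
fL, XL = spectrum(6 * N)              # six times MORE DATA: 5 Hz bins

# Zero-padding only interpolates: every 4th padded bin is EXACTLY an
# original bin, so no new information appears at the original frequencies.
print(np.allclose(X4[::4], X1))       # True
print(count_peaks(f1, X1))            # 1: the 20 Hz-apart tones merge
print(count_peaks(fL, XL))            # 2: only more data separates them
```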
{ "domain": "dsp.stackexchange", "id": 5016, "tags": "fourier-transform, interpolation" }
What Assumptions Govern the Applicability of the Boltzmann Distribution?
Question: In the book "Concepts in Thermal Physics" the Boltzmann distribution is derived with the following assumptions: There are two systems, one enormous heat reservoir and one comparatively minuscule system. The two systems are in thermal equilibrium. The heat reservoir is so large that any energy the smaller system can remove makes no change to its overall temperature. The large system has an incredibly large number of possible microstates. By contrast the small system is assumed to have 1 microstate for every possible energy. Therefore the energies of each system are $\epsilon$ for the small system and $(E-\epsilon)$ for the large system, with the total energy being $E$. This allows you to formulate the probability of the small system having energy $\epsilon$ as: $$P(\epsilon) \propto \Omega(E-\epsilon)\times1$$ Where $\Omega(E-\epsilon)$ is the total number of microstates for the reservoir, and $1$ represents the total number of microstates for the system. This allows you to formulate the Boltzmann distribution: $$P(\epsilon) \propto e^{-\epsilon/k_BT}$$ My question is, when does assumption 5 apply? Is it allowed because the reservoir is assumed to be so much larger than the small system? Or would you need to derive the Boltzmann distribution differently for different systems? Edit: The answer to this question seems to be given here Derivation of Boltzmann distribution - two questions Answer: Assumption 4 isn't needed to talk about the Boltzmann distribution. Each state has a probability which is proportional to $\exp(-E/k_B T)$, and if there are $g(E)$ different states with the same energy (degeneracy), then the probability to be at any state of energy E is just proportional to $g(E) \cdot \exp(-E/k_B T)$.
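As a tiny numerical sketch of how degeneracy enters (the toy energy levels are made up, and the units are chosen so that $k_B T = 1$):

```python
import numpy as np

kT = 1.0                                # units with k_B * T = 1 (assumed)
E = np.array([0.0, 1.0, 1.0, 2.0])      # toy levels: E = 1 is doubly degenerate

w = np.exp(-E / kT)                     # per-STATE Boltzmann weights
Z = w.sum()                             # partition function
p_states = w / Z

# The probability of OBSERVING energy 1 picks up the degeneracy g(1) = 2:
p_E1 = p_states[E == 1.0].sum()
expected = 2 * np.exp(-1.0 / kT) / Z
print(np.isclose(p_E1, expected))       # True
```

Each individual state still carries the bare weight $e^{-E/k_B T}$; the factor $g(E)$ appears only because you sum over the states sharing that energy.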
{ "domain": "physics.stackexchange", "id": 82894, "tags": "thermodynamics, statistical-mechanics, kinetic-theory, boltzmann-equation" }
Construct a CFG for $L = \{ w \in \{0,1\}^*\text{ } |\text{ } w = w^R \text{ and } |w| \text{ is even}\}$
Question: I need to construct a CFG for the following language$$L = \{ w \in \{0,1\}^*\text{ } |\text{ } w = w^R \text{ and } |w| \text{ is even}\}$$ I know that the two middle positions should always be the same. E.g. $11,00,0\underline{11}0,0\underline{00}0,\dots$ Each position on the opposite sides of the middle points should also be the same. So these are the production rules of the CFG I came up with $$ S\rightarrow00$$ $$S\rightarrow11$$ $$S\rightarrow ASA$$ $$A\rightarrow1$$ $$A\rightarrow0$$ $$A\rightarrow \epsilon$$ Is this solution correct? Answer: As HendrikJan stated, we can for example derive the following strings with the above CFG $$011, 0001,\dots$$ which do not meet the required conditions. Instead, we need to make sure that the ends are always the same. We can do that with the following production rules. $$ S\rightarrow 0S0$$ $$S\rightarrow 1S1$$ $$S\rightarrow \epsilon$$
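As a quick sanity check of the corrected grammar, one can enumerate everything it derives up to a bounded number of expansions and verify the language condition (a small Python sketch):

```python
# Enumerate everything the corrected grammar  S -> 0S0 | 1S1 | epsilon
# derives with at most `depth` applications of the recursive rules.
def derive(depth):
    if depth == 0:
        return {""}                      # S -> epsilon
    inner = derive(depth - 1)
    out = set(inner)                     # stopping early is always allowed
    for w in inner:
        out |= {"0" + w + "0", "1" + w + "1"}
    return out

words = derive(4)                        # all derivations up to length 8
print(len(words))                        # 31 even-length binary palindromes
print(all(w == w[::-1] and len(w) % 2 == 0 for w in words))   # True
```

Every derived string is an even-length palindrome, and counting confirms nothing is missed: there are $1 + 2 + 4 + 8 + 16 = 31$ even-length binary palindromes of length at most 8.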
{ "domain": "cs.stackexchange", "id": 17344, "tags": "regular-languages, context-free" }
What happens when someone is stung by a jellyfish?
Question: What happens in the human body when someone is stung by a jellyfish, namely a box jelly? Judging by what I have heard about the stings, I'm guessing that they involve a neurotoxin. But what is actually happening? What are the symptoms and what happens after the sting (treatments and survivability)? Answer: The problem is that "box jellyfish" doesn't specify one jellyfish but a group of different jellyfish. Some of these are highly venomous - I pick here Chironex fleckeri, as this is often called "the most venomous jellyfish in the world". Chironex fleckeri has long tentacles which are covered with millions of explosive cells called cnidocytes, which inject a dart with the very powerful toxin upon touch. It is therefore very dangerous to touch the tentacles with bare hands to remove them. The toxin is a mix of different bioactive proteins which have cytolytic, cytotoxic, inflammatory or hemolytic activity. Case reports (you can find information in reference 3 about it) show that the tentacles "burn" through all skin layers, causing immediate strong pain and lasting scars (if the victims survive) which tend to show signs of necrosis. The toxins themselves paralyze the muscles of the heart and the respiration and also cause hypokalemia by hemolyzing red blood cells (one of the toxins acts as a membrane pore in the blood cells). This causes further problems and often a cardiac arrest. See references 1 and 2 for details on the toxins. Reference 2 is especially interesting, as it is a PhD thesis on this topic with a lot of references. Treatment depends on how severe the injury is - in severe cases an antivenom can be used. Besides this, typical measures are CPR, giving oxygen and removing the tentacles (with appropriate protection). See reference 3 (good overview) to 5 for details on this topic.
References: Chironex fleckeri (Box Jellyfish) Venom Proteins: Expansion of a Cnidarian Toxin Family that Elicits Variable Cytolytic and Cardiovascular Effects The molecular and biochemical characterisation of venom proteins from the box jellyfish, Chironex fleckeri Chironex fleckeri (Multi-tentacled Box Jellyfish) Jellyfish stings and their management: a review. Cytotoxic and cytolytic cnidarian venoms. A review on health implications and possible therapeutic applications.
{ "domain": "biology.stackexchange", "id": 3338, "tags": "human-biology, neuroscience, toxicology, ichthyology" }
Quantum harmonic oscillator: ground state solution derivation step
Question: I have what is probably a silly question about one step that is made in the derivation of the ground state solution for a quantum harmonic oscillator. My textbook gives no explanation, only the solution, so I could not get an answer from that. The following link shows how one can obtain the solution: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hosc3.html#c1 What I do not understand is the bit that is described with ''For this to be a solution to the Schrodinger equation for all values of x, the coefficients of each power of x must be equal. That gives us a method for fitting the boundary conditions in the differential equation. Setting the coefficients of the square of x equal to each other...'' Basically, I do not see why those terms should be equal/ what it means. So I was wondering if anyone could explain what it means or why that step is made? Answer: The solution must satisfy the Schrödinger equation for every $x$. The equation after the ansatz follows the shape \begin{align} (-\alpha+\alpha^2 x^2) f(x) = (b x^2 +c)f(x) \end{align} Now, in order for this to hold for every $x$, the polynomial prefactors of $f(x)$ on both sides must be equal term by term.
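Written out for the Gaussian ansatz $\psi(x) = C e^{-\alpha x^2}$ used in the linked derivation: since $\psi'' = (4\alpha^2 x^2 - 2\alpha)\psi$, the Schrödinger equation becomes

$$\frac{\hbar^2}{2m}\left(2\alpha - 4\alpha^2 x^2\right)\psi + \frac{1}{2}m\omega^2 x^2 \psi = E\psi.$$

For this to hold at every $x$, the $x^2$ coefficients must cancel separately from the constant ones:

$$x^2:\quad -\frac{2\hbar^2\alpha^2}{m} + \frac{1}{2}m\omega^2 = 0 \;\Rightarrow\; \alpha = \frac{m\omega}{2\hbar},$$

$$x^0:\quad \frac{\hbar^2\alpha}{m} = E \;\Rightarrow\; E = \frac{\hbar\omega}{2}.$$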
{ "domain": "physics.stackexchange", "id": 38318, "tags": "quantum-mechanics, schroedinger-equation, harmonic-oscillator" }
How does one correctly apply the FFT in image denoising?
Question: I'm writing a program (Qt widgets/C++) for removing noise from images. As the denoising method, I selected the non-local means method. This method has incredible quality of restored images (that's why it's the only denoising method in OpenCV), but has a huge computation cost, so I made a lot of modified variants of this method (some with multithreading, some algorithmic). But I'm having a problem with the one involving FFT. I followed all the steps of this article (only one page, 1430) and all works perfectly, except for the FFT part; there are just 2 lines about it in the paper and I can't understand HOW one should use the FFT. This problem has bothered me for months; any help or insight would be greatly appreciated. Shortened version of the question: How can I quickly get the summed squared difference of two arrays on the image (the one at the top and one in the middle, values are colors)? ($O(n^2)$ is a huge cost, and there are lots of this kind of operations; the paper above states that it can be done via FFT with $O(n \log n)$ (it says that these 2 arrays form a circular convolution somehow)) Answer: The trick inside the paper is the following: What you want to compute is $\sum_{i \in W} |I(x+i)-I(y+i)|^2$, where $I$ is an image, $x$ and $y$ two noisy pixels and $i$ is a 2D offset used to define a patch. Expanding the expression yields: $\sum_i I^2(x+i) + \sum_i I^2(y+i) - 2 \sum_i I(x+i)I(y+i) = A + B - 2C$. $A$ and $B$ are computed using a squared integral image, i.e., an integral image from the squared original image. $C$ is the convolution between the two patches centered on $x$ and $y$. Thus, it can be computed in the Fourier domain, where it becomes a multiplication. You get the value of $C$ by computing the Fourier transform of the patch around $x$, the patch around $y$, pointwise-multiplying these results and taking the inverse Fourier transform of the multiplication result. The Fourier transform is obviously a 2D transform since you are working with 2D data.
What you obtain for a given patch is a 2D array of complex values. Additional notes In my opinion this article is not the best NL-means speedup strategy. Experiments I did way back in 2007/2008 show that pre-selection of patches is better (both in terms of speed and quality of the results). I have started blogging about these here, but unfortunately I am looking for time to finish the posts. The original NL-means papers mention blockwise implementations that can be interesting. There are fundamentally 2 ways of implementing NL-means: writing a denoising loop for every pixel in the image writing a denoising loop for each patch, then back-project the patches to form an image. The first implementation is the original approach, because in 2005 memory and multicore CPUs were expensive. On recent hardware, I have on the other hand chosen number 2 in the past 2 years. It depends on your typical image size and if you want to be able to compute domain transforms like DFT/DCT (as in the proposed paper and in BM3D).
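A compact numpy sketch of the $A + B - 2C$ trick described above (the image and the patch size are made up; `box_sum` plays the role of the squared integral image, and the FFT computes the correlation term $C$ for all pixels at once):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))
p = 5                                   # patch side length (made up)

def box_sum(a, p):
    # Sum over every p x p window via 2D cumulative sums (integral image).
    s = np.pad(a, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    return s[p:, p:] - s[:-p, p:] - s[p:, :-p] + s[:-p, :-p]

# Fixed reference patch around pixel x = (10, 10); we want the SSD to the
# patch around EVERY pixel y in one shot.
ref = img[10:10 + p, 10:10 + p]

A = box_sum(img ** 2, p)                # sum_i I^2(y+i), all y at once
B = (ref ** 2).sum()                    # sum_i I^2(x+i), a single scalar

# C(y) = sum_i I(y+i) I(x+i): a correlation, computed via FFT by
# convolving the image with the flipped reference patch.
K = np.fft.rfft2(ref[::-1, ::-1], s=img.shape)
C = np.fft.irfft2(np.fft.rfft2(img) * K, s=img.shape)[p - 1:, p - 1:]

ssd = A + B - 2 * C                     # SSD field over all patch positions

# Sanity check against the direct per-pixel computation at one position y:
y0, y1 = 3, 7
direct = ((img[y0:y0 + p, y1:y1 + p] - ref) ** 2).sum()
print(np.isclose(ssd[y0, y1], direct))  # True
```

The direct computation costs $O(p^2)$ per pixel; here one pair of 2D FFTs plus an integral image yields the whole SSD map, which is the $O(n \log n)$ behaviour the paper refers to.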
{ "domain": "dsp.stackexchange", "id": 958, "tags": "image-processing, fft, filters, fourier-transform, denoising" }
Browser quiz game
Question: I have made a little game in JS. Are there any remarks? The game asks a question through the console and accepts the answer through prompt. If the answer is correct you'll get points. To stop the game you need to write exit. These are the game rules:
Let's build a fun quiz game in the console!
1. Build a function constructor called Question to describe a question. A question should include: a) the question itself b) the answers from which the player can choose the correct one (choose an adequate data structure here, array, object, etc.) c) the correct answer (I would use a number for this)
2. Create a couple of questions using the constructor
3. Store them all inside an array
4. Select one random question and log it on the console, together with the possible answers (each question should have a number) (Hint: write a method for the Question objects for this task).
5. Use the 'prompt' function to ask the user for the correct answer. The user should input the number of the correct answer such as you displayed it on Task 4.
6. Check if the answer is correct and print to the console whether the answer is correct or not (Hint: write another method for this).
7. Suppose this code would be a plugin for other programmers to use in their code. So make sure that all your code is private and doesn't interfere with the other programmers' code (Hint: we learned a special technique to do exactly that).
8. After you display the result, display the next random question, so that the game never ends (Hint: write a function for this and call it right after displaying the result)
9. Be careful: after Task 8, the game literally never ends. So include the option to quit the game if the user writes 'exit' instead of the answer. In this case, DON'T call the function from task 8.
10. Track the user's score to make the game more fun! So each time an answer is correct, add 1 point to the score (Hint: I'm going to use the power of closures for this, but you don't have to, just do this with the tools you feel more comfortable at this point). Display the score in the console. Use yet another method for this.

var Question = function(question , answer1 , answer2, answer3 , correctAnswer) {
    this.question = question;
    this.answer1 = answer1;
    this.answer2 = answer2;
    this.answer3 = answer3;
    this.correctAnswer = correctAnswer;
    this.askQuestion = function() {
        console.log(this.question);
        console.log(this.answer1);
        console.log(this.answer2);
        console.log(this.answer3);
    }
    this.chekQuestion = function(){
        var answerQuestion = prompt('Answer the question');
        console.log(answerQuestion);
        if (answerQuestion == this.correctAnswer ){
            console.log(answerQuestion + '--- is correct answer');
            scorePlayer ++;
            console.log("Your score is ---" + scorePlayer);
            initGame()
        } else if (answerQuestion == "exit"){
            alert('game is stopped');
        }else {
            console.log(answerQuestion + '--- wrong answer')
            console.log("Your score is ---" + scorePlayer);
            initGame()
        }
    }
}

var authorOfCourseQuestion = new Question('Who is an author of course?', '0:John' ,'1:Jane' ,'2:Jonas', '2');
var whatIsJS = new Question('What is JS for you?', '0:fun' ,'1:boring' ,'2:not interesting', '0');
var whatIsAFunctionInJS = new Question('What a function in JS?', '0:string' ,'1:obj' ,'2:number', '1');

var arrayQuestions = [authorOfCourseQuestion , whatIsJS , whatIsAFunctionInJS ];
var scorePlayer = 0;

function initGame() {
    var randomNumber = Math.floor(Math.random()*3);
    arrayQuestions[randomNumber].askQuestion();
    arrayQuestions[randomNumber].chekQuestion();
}

initGame();

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Section 5: Advanced JavaScript: Objects and Functions</title>
</head>
<body>
    <h1>Section 5: Advanced JavaScript: Objects and Functions</h1>
    <script src="script.js"></script>
</body>
</html>

Answer: Your layout is all over the place.
Sometimes, you indent by 8 spaces, sometimes by 4. Sometimes, you have whitespace around else on both sides, sometimes only on the right. Sometimes, you have whitespace around , on both sides, sometimes only on the right. Sometimes, you have one space before and after an operator, sometimes none, sometimes, you have two spaces after an operator. Be consistent! Consistency is important, because if you express the same thing in two different ways, a reader of the code will think that you wanted to express two different things, and they will waste time wondering what the difference is. You have empty lines to break your code up into logical units, but they are in weird places, such as before an else or before a closing curly brace. Sometimes, you use semicolon, sometimes you don't. You should be consistent and follow a fixed set of rules. Ideally, you should use an automated tool to format your code, so that you don't have to make those decisions at all. For example, jslint with default settings detects 3 errors and then stops and doesn't even parse the whole file, jshint with the default settings detects 6 errors, and eslint with some reasonable settings detects a whopping 63 errors and 14 warnings. With my settings, the numbers are even worse: 20 errors for jshint and 155(!!!) for eslint. You should always use a linter and make sure that your code is lint-clean.
{ "domain": "codereview.stackexchange", "id": 35923, "tags": "javascript, quiz" }
Does radiation force depend on group velocity or on phase velocity?
Question: What is the radiation force $F$ due to a beam of photons of power $P$ undergoing perfect reflection? Is it $$a) F = 2 P / c$$ or $$b) F = 2 P / v_g$$ where $v_g$ is the group velocity ? Note that $v_g$ varies spatially in waveguides and in asymmetric cavities too, even in the absence of a dielectric. Answer: The radiation pressure varies with the phase velocity of the light, not the group velocity, although you must keep in mind that the phase velocity in matter might be different from $c$ and vary with wavelength in a dispersive medium. This has been experimentally demonstrated and described in this paper, in which it was found that the radiation force felt on a mirror will vary in direct proportion to the index of refraction of the material in which the mirror is immersed. Intuitively, for higher index of refraction (slower phase velocity), there are more photons per unit length, increasing their energy density and pressure. While the pressure on the mirror once the photons arrive will depend on the phase velocity, the group velocity still controls the rate at which information can be transferred. Update To be explicit, for a beam of photons with wavelength $\lambda_0$ emitted with power $P$ traveling through a medium with index of refraction $n(\lambda_0)$ impinging directly on a mirror immersed in that medium (angle of incidence is zero), the force $F$ felt by the mirror is given by $$ F = \frac{2 n(\lambda_0) P}{c} \, \, .$$ The index of refraction is related to the phase velocity at that given wavelength, $v_{ph}(\lambda_0)$, by the relation $$ n(\lambda_0) = \frac{c}{v_{ph}(\lambda_0)} \, \, ,$$ so that the radiation force for this single-wavelength beam can also be written as $$ F = \frac{2 P}{v_{ph}(\lambda_0)} \, \, .$$ For light composed of a range of wavelengths, you must integrate this expression over the photon distribution of the beam with respect to wavelength, taking into account how the index of refraction varies with wavelength. 
Further update (microscopic description) Microscopically, you can think of the momentum being carried simultaneously by the E&M fields, and "an additional co-traveling momentum within the medium from the mechanical momentum of the electrons in the molecular dipoles in response to the incident traveling wave" (J.D. Jackson, Classical Electrodynamics 3rd edition, p. 262, and its corresponding reference to this paper).
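For concreteness, a trivial numerical sketch of $F = 2nP/c$ (the 1 W beam power and $n = 1.33$ for water are assumed example values):

```python
# F = 2 n P / c for normal-incidence reflection off a mirror immersed
# in a medium of refractive index n. The 1 W power and n = 1.33 (water)
# are assumed example values.
c = 299_792_458.0                      # speed of light in vacuum, m/s

def radiation_force(power_w, n=1.0):
    return 2.0 * n * power_w / c

F_vacuum = radiation_force(1.0)            # mirror in vacuum
F_water = radiation_force(1.0, n=1.33)     # same beam, mirror in water

print(F_vacuum)                 # ~6.67e-9 N
print(F_water / F_vacuum)       # 1.33: the force scales with the index
```

As in the experiment cited above, immersing the mirror scales the force by exactly the index of refraction of the surrounding medium.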
{ "domain": "physics.stackexchange", "id": 23680, "tags": "electromagnetism, reflection, phase-velocity" }
Why do the two power relations for a resistive load, $P = I^2 R$ and $P = V^2/R$ (assume ideal resistances), seem contradictory in this situation?
Question: I have seen similar questions in the forum, but they fail to be very specific. I would ask the question from the viewpoint of problem solving. Let me put this question from a textbook: A heater coil is cut into two equal parts and only one part is now used in the heater. The heat generated will now be? Now, since resistance is proportional to conductor length ($R = \rho l / A$), we have $R \propto l$, where $l$ is the length of the conductor. So, since the length is halved, $R$ is halved. Since there is no mention in the question that the supply is voltage-limited or current-limited, I assumed that the supply is a standard household supply, which is voltage-limited, and I used the relation $P=\frac{V^2}{R}$ and the fact that heat generated is $H= P\cdot t $, where $t$ is time. I got my answer correct, which is: The heat generated would be doubled. Now, the real question begins: What if we used the relation $P=I^2 R$, assuming the supply to be current-limited? Then the answer would be: The heat generated would be halved. Why does such a contradiction arise? Is my assumption correct or is there some mistake that just somehow led to a right answer? Answer: You just answered your question yourself: if the supply is voltage-limited, the first answer is correct. Under the assumption of constant voltage, what the expression $P = I^2 R$ really means is $$P = I(R)^2 R.$$ That is, the current is a function of the resistance. Since $I(R) = V/R$ by Ohm's law, you get back $P = V^2/R$ by substitution. Of course, the conceptual opposite occurs if the supply is current-limited instead -- all your expressions in that case would reduce to $P = I^2 R$ with voltage being a function of the resistance.
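A quick numerical check of both assumptions (the 230 V, 10 A and 23 Ω values are made up for illustration):

```python
# Halving a heater coil: compare the two supply assumptions numerically.
# The 230 V / 10 A / 23-ohm numbers are made up for illustration.
R = 23.0
V = 230.0            # voltage-limited (household) supply
I = 10.0             # hypothetical current-limited supply

def p_const_v(r):    # current adjusts: I(R) = V / R, so P = V^2 / R
    return V ** 2 / r

def p_const_i(r):    # voltage adjusts: V(R) = I * R, so P = I^2 * R
    return I ** 2 * r

print(p_const_v(R / 2) / p_const_v(R))   # 2.0 -> heat doubles
print(p_const_i(R / 2) / p_const_i(R))   # 0.5 -> heat halves
```

Both formulas are always true at any instant; the "contradiction" is only about which quantity the supply holds fixed while $R$ changes.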
{ "domain": "physics.stackexchange", "id": 77979, "tags": "electricity, electric-current, electrical-engineering" }
How to determine the poles from a graph
Question: From my knowledge of stability, I understand that if the function approaches a finite number then the system will be stable. Thus if a system is stable its poles will be on the left of the $j\omega$-axis. This means that $\text{Re}(p_{1}), \text{Re}(p_{2}) < 0$. What I don't understand is how you determine whether the imaginary part is equal to 0 or not. Answer: If you have complex conjugate poles (i.e., $\text{Im}\{p_i\}\neq 0$) you have oscillations in the step response. Only real-valued poles will give you a monotonic step response as shown in the figure. Take a look at this related answer.
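A small simulation makes the distinction concrete (the pole locations are made-up examples; this is a crude explicit-Euler integration of the second-order system $H(s) = p_1 p_2/((s-p_1)(s-p_2))$):

```python
import numpy as np

def step_response(p1, p2, t_end=20.0, dt=1e-3):
    # (s - p1)(s - p2) = s^2 - (p1 + p2) s + p1 p2
    # -> y'' + a1 y' + a0 y = a0 * u, integrated by explicit Euler.
    a1, a0 = -(p1 + p2).real, (p1 * p2).real
    y, v = 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        y, v = y + dt * v, v + dt * (a0 * 1.0 - a1 * v - a0 * y)
        ys.append(y)
    return np.array(ys)

y_real = step_response(-1.0, -2.0)            # two real LHP poles
y_cplx = step_response(-0.3 + 3j, -0.3 - 3j)  # complex-conjugate LHP pair

# Real poles -> monotonic rise; complex poles -> overshoot/ringing.
print(np.all(np.diff(y_real) >= -1e-9))   # True: never decreases
print(y_cplx.max() > 1.0)                 # True: overshoots the final value
```

So from a step-response plot: any overshoot or ringing means $\text{Im}(p_i) \neq 0$; a purely monotonic approach to the final value is consistent with real poles.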
{ "domain": "dsp.stackexchange", "id": 5928, "tags": "continuous-signals, poles-zeros, stability, step-response" }
Phase shift between two signals
Question: So I got some oscilloscope captures for a project I'm doing and I'd like to find a phase shift between them because I don't trust the scope calculation. So I extracted data from a .csv file and loaded it into Matlab and now I'd like to find a phase shift using FFT (I think). Is there a way to do this even though the second signal, output, is not a perfect sinusoid? Answer: Your waveforms appear to have a low enough noise level to use interpolation of FFT results for phase estimation. First do an fftshift (to rotate the data halfway around your FFT vector) so that the FFT result phase reference point is in the center of you original data (not at the discontinuity or edges of your waveform data). Then do the FFTs, and estimate the location of the frequency peak (if between FFT result bins by interpolation and/or successive approximation, or by knowledge of the exact frequency by other means if available), then interpolate the complex phase between bins, if needed. Then compare the two phase estimations between the two waveforms.
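A minimal numpy sketch of the peak-bin approach for the easy case where the tone sits exactly on an FFT bin, so no interpolation between bins is needed (all signal parameters here are made up):

```python
import numpy as np

fs = 10_000.0
f0 = 50.0                         # tone frequency (assumed known, on a bin)
n = 2000                          # 0.2 s record -> exactly 10 cycles, bin 10
t = np.arange(n) / fs

true_shift = 0.7                  # radians, ground truth for this demo
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f0 * t)
y = 0.8 * np.sin(2 * np.pi * f0 * t - true_shift) + 0.01 * rng.standard_normal(n)

X, Y = np.fft.rfft(x), np.fft.rfft(y)
k = int(np.argmax(np.abs(X)))     # peak bin; here exactly f0 * n / fs = 10

# Phase difference at the peak bin; amplitude differences drop out.
shift = np.angle(X[k]) - np.angle(Y[k])
print(k)                               # 10
print(abs(shift - true_shift) < 0.01)  # True
```

For a tone that falls between bins, or for a non-sinusoidal output like yours, this is where the interpolation (or the fftshift phase-reference trick above) comes in; the bin-phase comparison itself stays the same.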
{ "domain": "dsp.stackexchange", "id": 4110, "tags": "fft, phase, delay" }
Are diffeomorphisms a proper subgroup of conformal transformations?
Question: The title pretty much sums it up. Are all diffeomorphism transformations also conformal transformations? If the answer is that they are not, what is the set of diffeomorphisms that are not conformal called? General Relativity is invariant under diffeomorphisms, but it certainly is not invariant under conformal transformations; if conformal transformations were a subgroup of Diff, you would have a contradiction. Or am I overlooking something important? Answer: A general diffeomorphism is not part of the conformal group. Rather, the conformal group is a subgroup of the diffeomorphism group. For a diffeomorphism to be conformal, the metric must change as, $$g_{\mu\nu}\to \Omega^2(x)g_{\mu\nu}$$ and only then may it be deemed a conformal transformation. In addition, all conformal groups are Lie groups, i.e. with elements arbitrarily close to the identity, by applying infinitesimal transformations. Example: Conformal Group of Riemann Sphere The conformal group of the Riemann sphere, also known as the complex projective space, $\mathbb{C}P^1$, is called the Möbius group. A general transformation is written as, $$f(z)= \frac{az+b}{cz+d}$$ for $a,b,c,d \in \mathbb{C}$ satisfying $ad-bc\neq 0$. Example: Flat $\mathbb{R}^{p,q}$ Space For flat Euclidean space, the metric is given by $$ds^2 = dz d\bar{z}$$ where we treat $z,\bar{z}$ as independent variables, but the condition $\bar{z}=z^{\star}$ signifies we are really on the real slice of the complex plane. A conformal transformation takes the form, $$z\to f(z)\quad \bar{z}\to\bar{f}(\bar{z})$$ which is simply a coordinate transformation, and the metric changes by, $$dzd\bar{z}\to\left( \frac{df}{dz}\right)^{\star}\left( \frac{df}{dz}\right)dzd\bar{z}$$ as required to ensure it is conformal. We can specify an infinite number of $f(z)$, and hence an infinite number of conformal transformations. However, for general $\mathbb{R}^{p,q}$, this is not the case, and the conformal group is $SO(p+1,q+1)$, for $p+q > 2$.
{ "domain": "physics.stackexchange", "id": 65731, "tags": "differential-geometry, symmetry, conformal-field-theory, covariance, diffeomorphism-invariance" }
Electric charges
Question: It is known that we see a small bit of lightning or feel an electrostatic shock when placing a negatively charged conductor near a neutral conductor, isn't it? My question is: why do we feel hurt or shocked when this happens? For example, if we touch a conducting object when we are negatively charged, we feel a bit of a shock. This is because the charges in our body rapidly go to the conducting object, but then why are we hurt? Or even, why do we see a bit of lightning? Is it connected to electromagnetism? Answer: The "shock" or "hurt" you feel is the result of current in the body - which is what happens when charge flows from one place to another. Your body contains nerves - and they signal "pain" and other feelings with very small currents. Your body is not very good at distinguishing "pain sensor is sending a message" from "a current is flowing through the nerve, making me think there is pain". Also - currents can cause local heating of tissue, ionization, polarization, ... all of which may in turn cause nerves to fire so that you feel "real" pain. The amount of current flowing in nerves is very small, because it's just a few ions of potassium crossing the membrane. A voltage on the order of several tens of mV (40 - 100 mV) is enough to fire a nerve; static electricity can easily give voltages in the 1000's of Volts. See also this link on neurotransmission
{ "domain": "physics.stackexchange", "id": 40637, "tags": "electrostatics, electricity, charge, conductors, lightning" }
Current induced in an inductor modifies the inducing magnetic field. Wouldn't that cause an infinite oscillating feedback?
Question: Imagine a simple circuit with an inductor and resistor in series. Now pass a varying flux, phi, through the inductor. The varying flux induces a voltage on the inductor to oppose the flux, causing a current to flow in the circuit. That current generates its own flux, phi_i. phi_i would counter phi, causing the total flux through the inductor to lessen. The voltage induced on the inductor is now less, the current in the circuit is less, phi_i would lessen, so the total flux through the inductor, phi - phi_i, now rebounds a bit. This will then cause phi_i to increase and start the whole chain again. The current in the coil is affected by the total flux, and the total flux is affected by the current in the coil, so we have a "chicken-and-egg" scenario here. I admit this is a very crude way of thinking, as the voltage induced on the inductor doesn't depend on the magnitude of phi but on its rate of change. However, the current and voltage aren't induced instantaneously either, so the real dynamics of the total flux is quite complex to think through, I think. Answer: You are getting tangled up in a verbal description of what is actually an easily analyzable problem. One must use mathematics to analyze it. The result is: if the external flux changes only once at the beginning, there is no infinite oscillation, because the resistor dissipates energy into heat. If the circuit had only an inductor and capacitor instead of a resistor, it would oscillate much longer, but still not indefinitely, because it would lose energy by other means - by radiation. If the external flux keeps changing, the current in the circuit will keep changing too. The external flux change acts as a driving force for the circuit and supplies energy to it.
For a circuit of an inductor in series with a resistor, with the inductor driven by the external flux change $\frac{d\Phi_0}{dt}$ ($\Phi_0$ is the total effective flux for the coil at hand; for a different coil the same magnetic field would give a different flux), the equation of motion for the electric current $I$ is $$ RI + L\frac{dI}{dt} + \frac{d\Phi_0}{dt} = 0. $$ Since it is a first-order differential equation, there can be no oscillation in the solution for the current $I$, unless the oscillation is put in via $\Phi_0$.
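A quick numerical sketch of this point (the component values below are made up): integrating the first-order equation with a flux that changes only at the beginning gives a current that simply relaxes with time constant $L/R$, with no oscillation.

```python
# Integrate R*I + L*dI/dt + dPhi0/dt = 0 with forward Euler, using a flux
# that changes only during the first millisecond (hypothetical values).
R, L = 10.0, 0.5            # ohms, henries (assumed, not from the question)
dt, steps = 1e-4, 20000     # 2 seconds of simulated time

def dphi0_dt(t):
    # external flux ramps for the first millisecond, then stays constant
    return 1.0 if t < 1e-3 else 0.0

I, t = 0.0, 0.0
history = []
for _ in range(steps):
    dI = -(R * I + dphi0_dt(t)) / L
    I += dI * dt
    t += dt
    history.append(I)

# After the drive stops, |I| decays monotonically toward zero: no oscillation,
# exactly as the first-order equation predicts.
tail = history[len(history) // 2:]
assert all(abs(tail[i + 1]) <= abs(tail[i]) for i in range(len(tail) - 1))
```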
{ "domain": "physics.stackexchange", "id": 58084, "tags": "electromagnetic-induction" }
Infinite Square Well Normalization Issue
Question: Let's say you have a well from $(0,a)$ and you want to find how some wave function changes with time. For our function we have: $$\Psi(x,0)=Ax(x-a)$$ We can normalize this and everything is fine with bounds 0 to a. Now let's say we move the well to $(-a,a)$ and want to try to normalize a similar function and see how it changes with time. Let's take the function $$\Psi(x,0)=A(x-a)(x+a)$$ But if we normalize this from $(-a,a)$ we find that the integral becomes $0$ and the function is un-normalizable. I don't understand why, or if I am just making some dumb error. Answer: The integral is not 0; for example - https://www.wolframalpha.com/input/?i=integrat+(x-3)(x%2B3)+from+x%3D-3+to+x%3D3 Also, you can easily see that your function does not change sign, so its integral cannot be 0.
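A symbolic check of the answer's point: the normalization integral of $|\Psi|^2$ over $(-a,a)$ is manifestly nonzero, and solving $\langle\Psi|\Psi\rangle=1$ gives the normalization constant (a sketch using sympy):

```python
import sympy as sp

x, a, A = sp.symbols('x a A', positive=True)
psi = A * (x - a) * (x + a)

# normalization integral of |psi|^2 over (-a, a) -- clearly nonzero
norm = sp.integrate(psi**2, (x, -a, a))
assert sp.simplify(norm - sp.Rational(16, 15) * A**2 * a**5) == 0

# solving <psi|psi> = 1 gives the normalization constant
A_val = sp.solve(sp.Eq(norm, 1), A)[0]
assert sp.simplify(A_val**2 - sp.Rational(15, 16) / a**5) == 0
```

So the well-defined result is $A = \sqrt{15/(16 a^5)}$; only the integral of $\Psi$ itself (not $|\Psi|^2$) could ever come out negative here, and even that is nonzero since the function never changes sign.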
{ "domain": "physics.stackexchange", "id": 33002, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, normalization" }
Does milk drinking prevent long-term chemical poisoning?
Question: I have heard some rumors that drinking milk prevents chemical poisoning. I have done a bit of research and some sources confirm that. Corrosive Poisons The best first aid is to dilute the poison as quickly as possible. For acids or alkalis (bases), give the patient water or preferably milk or ice cream — one (1) cup for victims under five (5) years; or one (1) to two (2) glasses for patients over five (5) years. Milk or ice cream is better than water because it dilutes and helps neutralize the poison. Water only dilutes the poison. Cornell University In Case of Poison Ingestion: Drink Milk? As @Jan pointed out, in the case of chemical poisoning the paramedics are immediately called. So my question is: does milk help against chemical poisoning, e.g., for people that daily work with chemicals or work in chemical facilities (working with or in the presence of $\ce{HCl}$, $\ce{HF}$, $\ce{NaOH}$, $\ce{HNO3}$, $\ce{H3PO4}$, etc.)? Or what would be beneficial? And I mean long-term exposure and not an immediate threat to life or health. Answer: This post only concerns chronic poisoning, i.e. exposure to a harmful chemical in low amounts for an extended period of time. The answer is not applicable to acute poisoning; in that case a doctor should immediately be called. Basically there are three types of chemicals that have a harmfulness threshold to which one can be exposed over extended periods of time. Type one includes substances such as mercury that are primarily aggregated in the body rather than excreted. In rare cases, drinking milk may enhance excretion, thereby decontaminating the body. I haven’t heard of any such examples, though, and I would assume any such effect to be very rare and thus negligible. Type two includes those substances that the body can actually metabolise to harmless products. This would include most protein poisons. However, the metabolism shouldn’t be enhanced by drinking milk, so it will likely have no effect.
Type three includes those that, while they aren’t metabolised, just don’t do enough damage at low concentrations. That would basically include your list of acidic and basic substances but also non-aggregating metals at low concentrations. For these, excretion is not significantly enhanced or reduced by drinking milk so again no effect is expected. Comparing the acute exposure to acids to the chronic one, in the former giving a person milk to drink is actively encouraged. This is because milk contains a lot of proteins that can act as buffers, increasing the body’s buffering potential and reducing the effects of the acid. The body does have more than enough buffering capabilities to cope with chronic cases though, so even there regularly drinking milk would not help. Tl;dr: Milk is a great drink but cannot act as a type of preventive antidote.
{ "domain": "chemistry.stackexchange", "id": 10839, "tags": "everyday-chemistry, safety, reference-request, food-chemistry, toxicity" }
Tail-recursive call vs looping for a Poker app
Question: I'm developing a Poker app. It is almost done, and I'm looking for improvement. One of the things I wonder is whether I should change how I perform iterations. I currently iterate by using tail-recursive calls. However, a friend of mine suggested that I use a while loop instead because loops don't require stack space. private async Task Turns() { _turns = ReturnTurns(); GC.KeepAlive(Updates); if (!Player.FoldTurn && Player.Chips > 0) { if (Player.Turn) { SetPlayerStuff(true); Call -= Player.PreviousCall; _up = int.MaxValue; _turnCount++; Bot1.Turn = true; _restart = true; } } if (!Player.Turn) { await Flip(0); } if (Player.FoldTurn || !Player.Turn || Player.Chips <= 0) { Call = TempCall; if (StatusLabels[Player.EnumCasted].Contains(RepetitiveVariables.Fold)) { Bot1.Turn = true; } SetPlayerStuff(false); Bot1 = (Bot)await RotateTurns(Bot1, Bot1.EnumCasted); Bot2 = (Bot)await RotateTurns(Bot2, Bot2.EnumCasted); Bot3 = (Bot)await RotateTurns(Bot3, Bot3.EnumCasted); Bot4 = (Bot)await RotateTurns(Bot4, Bot4.EnumCasted); Bot5 = (Bot)await RotateTurns(Bot5, Bot5.EnumCasted); _restart = false; } if (!_restart) { await Turns(); } } That's how I think it should look like with a while loop: private async Task Turns() { while (true) { _turns = ReturnTurns(); GC.KeepAlive(Updates); if (!Player.FoldTurn && Player.Chips > 0) { if (Player.Turn) { SetPlayerStuff(true); Call -= Player.PreviousCall; _up = int.MaxValue; _turnCount++; Bot1.Turn = true; _restart = true; } } if (!Player.Turn) { await Flip(0); } if (Player.FoldTurn || !Player.Turn || Player.Chips <= 0) { Call = TempCall; if (StatusLabels[Player.EnumCasted].Contains(RepetitiveVariables.Fold)) { Bot1.Turn = true; } SetPlayerStuff(false); Bot1 = (Bot) await RotateTurns(Bot1, Bot1.EnumCasted); Bot2 = (Bot) await RotateTurns(Bot2, Bot2.EnumCasted); Bot3 = (Bot) await RotateTurns(Bot3, Bot3.EnumCasted); Bot4 = (Bot) await RotateTurns(Bot4, Bot4.EnumCasted); Bot5 = (Bot) await RotateTurns(Bot5, Bot5.EnumCasted); 
_restart = false; } if (!_restart) { continue; } break; } } Answer: GC.KeepAlive(Updates); Do you really need to do this? Objects are not collected as long as there is a valid reference to them, meaning that Updates should always be available if you are not replacing the reference. If you are replacing the reference, it means that you are not keeping your object alive long enough for the current game, which sounds leaky. Bot1 = (Bot)RotateTurns(Bot1, Bot1.EnumCasted); RotateTurns should be an instance method on Bot and should not be present in the current scope. And once again you are replacing the object reference. The bots should be alive at least as long as a game instance. It would be better to keep a list of all the bots so you could iterate over them. foreach(var bot in _bots){ await bot.RotateTurns(); } if (!_restart) { continue; } break; The !_restart condition could be placed in the while condition. Also consider keeping this a method variable instead of a field. while(!_restart)
{ "domain": "codereview.stackexchange", "id": 18688, "tags": "c#, recursion, stack, iteration" }
SARS-CoV - relative size of the spike protein
Question: I was given the task of determining the percentage of the S-protein of the SARS-CoV relative to the total of its proteins from the attached image. However, I have been given no explanation of the image, and with a physics background, I simply do not understand it. Can someone please explain to me how it would be possible to solve the task from the given image (i.e., explain the image). Answer: I'm not going to give you the final answer, because this is still a class assignment, after all, but I'll give you some tips. What you are looking at is a Western blot of whole-cell extracts from a cell line called Vero, either on its own (vertical lane 9, from the numbering across the bottom) or infected with various viruses carrying specific genes, as indicated. Lanes 1-7 are from Vero cells infected with the BHPIV3 viral vector (a live-attenuated virus that can carry genes from another organism). Lane 1 is just the viral vector itself, with its own proteins. Lanes 2-7 are from cells infected with the viral vector carrying certain genes from the SARS coronavirus. Lane 8 is from cells infected with SARS-CoV itself (so it carries all the previous genes). The numbers along the far right side are molecular weights, in kilodaltons. A Western blot is performed in three parts - 1) separating the protein samples (whole-cell extracts) by molecular weight (size) in an SDS-PAGE gel by using an electric field, 2) transferring the separated proteins to a membrane, also using an electric field, and 3) using some sort of probe (usually a specific antibody) to determine what is present and in what amount. In this case, two antibodies were used. The lower, narrow image shows the samples probed with an antibody that recognizes the HPIV3 F protein, which is present in all the BHPIV3-infected lanes, in relatively equal amounts.
This can be determined because the size/intensity of the different bands in each vertical lane is directly proportional to the amount of that protein present in the original sample. The upper part of the image shows the extracts probed with an antibody that evidently recognizes the SARS S, M, and N proteins, but apparently not the E protein (as Lane 6, with the SARS-E virus, is blank). Using this information, you can now determine which bands correspond to which protein. Keeping in mind that Lane 8 contains all the proteins from the SARS-CoV virus, while the previous lanes just contain certain ones as labeled along the top, you should now be able to use ImageJ to answer the question. I will give you one hint - there is much more information present than you need to answer the question. You actually only need 4 (well, maybe 5) of the lanes. I'll leave it to you to figure out which ones.
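Once ImageJ has reported an integrated intensity for each band, the remaining arithmetic is just a ratio. A minimal sketch (the intensity values below are invented placeholders, NOT read off the blot):

```python
# Hypothetical sketch of the final step: the percentage of S protein is its
# band intensity as a share of the summed intensities of all viral proteins.
def s_protein_percentage(band_intensities):
    """band_intensities: dict mapping protein name -> ImageJ integrated density."""
    total = sum(band_intensities.values())
    return 100.0 * band_intensities["S"] / total

# made-up example values for the SARS-CoV lane (Lane 8)
example = {"S": 120.0, "M": 200.0, "N": 60.0, "E": 20.0}
print(s_protein_percentage(example))  # 30.0 with these invented numbers
```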
{ "domain": "biology.stackexchange", "id": 11350, "tags": "proteins, virology, coronavirus, protein-structure, covid" }
What's the difference between the Photon Sphere and the Marginally bound orbit?
Question: What's the difference between the photon sphere and the marginally bound orbit? Why does the photon sphere have a radius of 1.5Rs, while the marginally bound orbit has a radius of 2Rs? Answer: Both the light ring (photon orbit) and the marginally bound circular orbit are cases of unstable circular orbits. The light ring is a null orbit, i.e. the orbit followed by a massless particle (such as a photon). Any massive particle would need infinite energy to follow this orbit. The marginally bound circular orbit, on the other hand, is defined as the unstable circular orbit whose energy is equal to the rest mass of the particle. This means that if the orbit is perturbed towards infinity, the particle would reach infinity with zero velocity. (A particle escaping the light ring, on the other hand, would reach infinity travelling at the speed of light.) So, why is the radius of the light ring smaller than that of the marginally bound circular orbit? One way to see this is to note that the energy of unstable circular orbits increases as the radius of the orbit decreases. Since infinity is bigger than any finite energy, the marginally bound circular orbit must lie outside the light ring. (Alternatively, we could have noted that the light ring is the smallest possible circular orbit.) The actual values (2Rs and 1.5Rs) for these orbits are simply how these work out for a Schwarzschild black hole. If one were to add spin or charge to the black hole, the radii of these orbits would change. However, it would still be true that the marginally bound circular orbit lies outside the light ring.
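The quoted Schwarzschild values can be recovered from the standard textbook expression for the specific energy of a circular orbit, $E^2 = (r-2M)^2/\big(r(r-3M)\big)$ in units $G=c=1$ (so $R_s = 2M$). A sympy sketch:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

# Specific energy squared of a circular Schwarzschild orbit (G = c = 1):
E2 = (r - 2*M)**2 / (r * (r - 3*M))

# Marginally bound orbit: E = 1 (energy equal to the rest mass)
sols = sp.solve(sp.Eq(E2, 1), r)
assert sols == [4*M]                         # r = 4M = 2 Rs

# The light ring sits where E diverges for massive particles:
assert sp.limit(E2, r, 3*M, '+') == sp.oo    # r = 3M = 1.5 Rs
```

This also makes the ordering argument of the answer concrete: $E^2$ grows without bound as $r$ decreases toward $3M$, so the finite-energy ($E=1$) orbit at $4M$ necessarily lies outside the light ring.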
{ "domain": "physics.stackexchange", "id": 98519, "tags": "black-holes, photons, orbital-motion" }
Would it help if you jump inside a free falling elevator?
Question: Imagine you're trapped inside a free falling elevator. Would you decrease your impact impulse by jumping during the fall? When? Answer: While everyone agrees that jumping in a falling elevator doesn't help much, I think it is very instructive to do the calculation. General Remarks The general nature of the problem is the following: while jumping, the human injects muscle energy into the system. Of course, the human doesn't want to gain even more energy himself, instead he hopes to transfer most of it onto the elevator. Thanks to momentum conservation, his own velocity will be reduced. I should clarify what is meant by momentum conservation. Denoting the momenta of the human and the elevator with $p_1=m_1 v_1$ and $p_2=m_2 v_2$ respectively, the equations of motion are $$ \dot p_1 = -m_1 g + f_{12} $$ $$ \dot p_2 = -m_2 g + f_{21} $$ Here, $f_{21}$ is the force that the human exerts on the elevator. By Newton's third law, we have $f_{21} = -f_{12}$, so the total momentum $p=p_1+p_2$ obeys $$ \frac{d}{dt} (p_1 + p_2) = -(m_1+m_2) g $$ Clearly, this is not a conserved quantity, but the point is that it only depends on the external gravity field, not on the interaction between human and elevator. Change of Momentum As a first approximation, we treat the jump as instantaneous. In other words, from one moment to the other, the momenta change by $$ p_1 \to p_1 + \Delta p_1, \qquad p_2 \to p_2 + \Delta p_2 .$$ Thanks to momentum "conservation", we can write $$ \Delta p := -\Delta p_1 = \Delta p_2 .$$ (Note that trying to find a force $f_{12}$ that models this instantaneous change will probably give you a headache.) How much energy did this change of momentum inject into the system? 
$$ \Delta E = \frac{(p_1-\Delta p)^2}{2m_1} + \frac{(p_2+\Delta p)^2}{2m_2} - \frac{p_1^2}{2m_1} - \frac{p_2^2}{2m_2} $$ $$ = \Delta p\left(\frac{p_2}{m_2} - \frac{p_1}{m_1}\right) + (\Delta p)^2\left(\frac1{2m_1}+\frac1{2m_2}\right) .$$ Now we make use of the fact that before jumping, the velocities of the elevator and the human are equal, $p_1/m_1 = p_2/m_2$. Hence, only the quadratic term remains and we have $$ (\Delta p)^2 = \frac2{\frac1{m_1}+\frac1{m_2}} \Delta E .$$ Note that the mass of the elevator is important, but since elevators are usually very heavy, $m_1 \ll m_2$, we can approximate this with $$ (\Delta p)^2 = 2m_1 \Delta E .$$ Energy reduction How much did we manage to reduce the kinetic energy of the human? After the jump, his/her kinetic energy is $$ E' = \frac{(p_1-\Delta p)^2}{2m_1} = \frac{p_1^2}{2m_1} - 2\frac{\Delta p\cdot p_1}{2m_1} + \frac{(\Delta p)^2}{2m_1}.$$ Writing $E$ for the previous kinetic energy, we have $$ E' = E - 2\sqrt{E \Delta E} + \Delta E = (\sqrt E - \sqrt{\Delta E})^2 $$ or $$ \frac{E'}{E} = (1 - \sqrt{\Delta E / E})^2 .$$ It is very useful to estimate the energy $\Delta E$ generated by the human in terms of the maximum height that he can jump. For a human, that's roughly $h_1 = 1m$. Denoting the total height of the fall with $h$, we obtain $$ \frac{E'}{E} = (1 - \sqrt{h_1/h})^2 .$$ Thus, if a human is athletic enough to jump $1m$ in normal circumstances, then he might hope to reduce the impact energy of a fall from $16m$ to a fraction of $$ \frac{E'}{E} = (1 - \sqrt{1/16})^2 \approx 56 \% .$$ Not bad. Then again, jumping while being weightless in a falling elevator is likely very difficult...
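The final estimate is easy to reproduce numerically ($h_1 = 1\,\mathrm{m}$ is the jump height assumed in the derivation above):

```python
import math

# Fraction of impact energy remaining after jumping during a fall of
# height h, given a ground-level jump height h1: E'/E = (1 - sqrt(h1/h))^2
def remaining_energy_fraction(h, h1=1.0):
    return (1.0 - math.sqrt(h1 / h)) ** 2

print(round(remaining_energy_fraction(16.0), 4))  # 0.5625, i.e. ~56%
```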
{ "domain": "physics.stackexchange", "id": 45689, "tags": "acceleration, newtonian-gravity, collision, equivalence-principle, free-fall" }