Rust implementation of next-permutation
Question: I am translating this next-permutation algorithm in C++:

    template<class BidirIt>
    bool next_permutation(BidirIt first, BidirIt last)
    {
        if (first == last) return false;
        BidirIt i = last;
        if (first == --i) return false;
        while (true) {
            BidirIt i1, i2;
            i1 = i;
            if (*--i < *i1) {
                i2 = last;
                while (!(*i < *--i2))
                    ;
                std::iter_swap(i, i2);
                std::reverse(i1, last);
                return true;
            }
            if (i == first) {
                std::reverse(first, last);
                return false;
            }
        }
    }

into this Rust code:

    struct Solution;

    impl Solution {
        pub fn next_permutation(nums: &mut Vec<i32>) -> bool {
            if nums.len() < 2 {
                return false;
            }
            let first = 0;
            let last = nums.len();
            let mut i = last - 1;
            loop {
                let i1 = i;
                i -= 1;
                if nums[i] < nums[i1] {
                    let mut i2 = last;
                    loop {
                        i2 -= 1;
                        if nums[i] < nums[i2] {
                            break;
                        }
                    }
                    let tmp = nums[i];
                    nums[i] = nums[i2];
                    nums[i2] = tmp;
                    nums[i1..last].reverse();
                    return true;
                }
                if i == first {
                    nums[first..last].reverse();
                    return false;
                }
            }
        }
    }

Though it works, I think it's not very good-looking. I am a beginner in Rust; any improvement will be welcome!

Answer: In no particular order:

- Create test cases so you can have some confidence while refactoring.
- Don't create empty structs (Solution) just to give them functions. next_permutation is perfectly fine as a free function. If you have to create a bad API to satisfy LeetCode or whatever, write the good API first, and then wrap it. If the required API is so bad you can't wrap a good API with it, it's probably a waste of time.
- Use rustfmt.
- Accept &mut [T] instead of &mut Vec<T> when you don't need to change the size.

Next:

    let first = 0;
    let last = nums.len();

Don't give unnecessary symbolic names to things that aren't used symbolically. 0 is far more obviously the index of the first element in a slice than first. Furthermore, you don't even need to provide the bounds of the slice when indexing with [..]; they're implied. So you need these even less than you perhaps thought.

i, i1, i2 are meaningless. How about some descriptive names? I prefer to put each loop in a block with its state variables to limit the scope of the mutability.

    let i1 = i;

Similar to the note on first and last above, don't make variables that can be trivially calculated from other variables. The relationship between i and i + 1 is way more obvious than that between i and i1.

    if nums[i] < nums[i + 1] { ... }

Why is this body nested in the outer loop? The algorithm breaks fairly easily into four steps:

1. Get the index of the rightmost ascending pair of values.
2. Get the index of the rightmost value greater than the first element of that pair.
3. Swap the values at these two indices.
4. Reverse the slice starting just after the first index.

You have 2, 3, and 4 nested inside 1, which makes the algorithm look more complicated than it actually is.

    let tmp = nums[i];
    nums[i] = nums[i2];
    nums[i2] = tmp;

Use nums.swap(i, i2) instead.

As mentioned in the Reddit comments, there's an iterator method rposition that finds the last item matching some predicate (if the iterator is one that can be iterated from both ends). You can use this to linearize the first (outer) loop, which may be easier to reason about. Don't go overboard, though: iterators are great for certain tasks and not so great for others. If you feel the logic and flow of the code are better with a loop, just write the loop.

As for the inner loop, since the trailing subslice is by definition monotonically decreasing, you can search for the swap point using binary search. It's not likely to make a big performance difference, though, and the logic might be a little harder to follow, so you might want to use rposition here as well.

Applied:

    pub fn next_permutation(nums: &mut [i32]) -> bool {
        use std::cmp::Ordering; // or use feature(array_windows) on nightly

        let last_ascending = match nums.windows(2).rposition(|w| w[0] < w[1]) {
            Some(i) => i,
            None => {
                nums.reverse();
                return false;
            }
        };

        let swap_with = nums[last_ascending + 1..]
            .binary_search_by(|n| i32::cmp(&nums[last_ascending], n).then(Ordering::Less))
            .unwrap_err(); // cannot fail because the binary search will never succeed
        nums.swap(last_ascending, last_ascending + swap_with);
        nums[last_ascending + 1..].reverse();
        true
    }
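The "create test cases" advice can be made concrete with a property test: starting from sorted order, repeated calls should enumerate every permutation in lexicographic order. Here is a sketch in Python (chosen so the snippet is self-contained; the function mirrors the four-step algorithm from the answer, and the same property translates directly to a Rust #[test]):

```python
from itertools import permutations

def next_permutation(nums):
    """In-place next lexicographic permutation; returns False after the last one."""
    # step 1: index of the rightmost ascending adjacent pair
    i = len(nums) - 2
    while i >= 0 and nums[i] >= nums[i + 1]:
        i -= 1
    if i < 0:
        nums.reverse()
        return False
    # step 2: rightmost value greater than nums[i]
    j = len(nums) - 1
    while nums[j] <= nums[i]:
        j -= 1
    # steps 3 and 4: swap, then reverse the decreasing tail
    nums[i], nums[j] = nums[j], nums[i]
    nums[i + 1:] = reversed(nums[i + 1:])
    return True

# property: starting from sorted order, repeated calls enumerate every
# permutation in lexicographic order
seq = [1, 2, 3, 4]
seen = [tuple(seq)]
while next_permutation(seq):
    seen.append(tuple(seq))
assert seen == sorted(permutations([1, 2, 3, 4]))
```

A test like this catches off-by-one mistakes in any of the four steps while refactoring.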
{ "domain": "codereview.stackexchange", "id": 41063, "tags": "rust, combinatorics" }
Can I use grayscale images when working with ImageJ?
Question: I am using ImageJ to analyze Western blots. I have scanned films in as grayscale images because this is how we did it in my old lab. People in my current lab are not satisfied with that explanation and think I should consider using color images. I've been searching for protocols, but they all seem to address how to use the program once you have some films scanned. Can anyone help me figure out what considerations go into choosing how I scan my films? Answer: There is no evidence that one is better than the other, most likely because it differs from case to case. Neither you nor your critics are right. There is a tiny bit of science in a paper on digitizing blots, generalizing from blots of a specific protein (PMID: 19517440), and they use grayscale for no given reason. Come to think of it, that is the best paper in the field of immunoblot quantification, and it still lacks evidence. An immunoblot is semiquantitative, meaning it may sort and order bands (and protein amounts), but it often fails at estimating differences and ratios, and may even fail to see differences. Just try your best to find a difference, in the knowledge that if there is no difference in protein amounts, any fiddling with contrast won't create a difference in band quantitation. There is almost no difference between color and black-and-white digitized images of an immunoblot. The parts of the blot that are black will still appear in the computer as White = 0, even though a tri-color image file will detail it to Red = 0, Green = 0, Blue = 0. The parts that are nearly black will still be very close to zero. The only thing that changes a lot between grayscale and RGB is the background. The background may change, for example, from White = 200 to Red = 190, Green = 190, Blue = 220. There is one consequence of such switching between background values. (These changes may be induced by switching between grayscale and RGB, but also by altering exposure time, etc.)
When you use a method where the background gets too close to the blots, the "amplitude" of the blots, i.e., the height of the peaks in ImageJ, will be reduced. This loss of contrast should wash out some of the differences between blot bands and lessen support for differences between bands (i.e., for the alternative hypothesis), making you miss a genuine protein difference. Again, if your blot bands are different enough that your percent increase/decrease is huge and your p-value is small, you are all set, regardless of the method you chose. Ask your critics to explain your observed differences other than by differences in protein. But if your quantification fails the significance test, I think it would be an honest effort to change from grayscale to RGB, or the other way around, just in case your first choice had been the one that dampens contrast.
{ "domain": "biology.stackexchange", "id": 2867, "tags": "proteins, protocol, western-blot, imagej" }
Radius Local Search Algorithm for MAX-SAT problem: approximation ratio
Question: Assume that in the classical local search algorithm for MAX-SAT we could flip no more than $r \leq n/2$ variables (let's call it an $r$-flip) on every iteration. More precisely: on every iteration we're finding an $r$-flip which satisfies more clauses than were satisfied before the flip. What is the approximation ratio of this algorithm? Is there any worst case where my algorithm has multiplicative error 1/2? Answer: Suppose that the all-zero assignment is a local optimum for your algorithm, for some CNF $\varphi$, say satisfying an $\alpha$ fraction of the clauses. If we choose any assignment of weight at most $n/2$, it should satisfy at most an $\alpha$ fraction of the clauses. In particular, a random assignment of weight exactly $n/2$ satisfies at most an $\alpha$ fraction of clauses. If we assume that no clauses are empty (empty clauses only work in our favor), then every clause is satisfied with probability at least $1/2$ under this distribution. Hence $\alpha \geq 1/2$, and so your algorithm gives a $1/2$ approximation. If $\alpha = 1/2$ then all clauses are unit clauses. Since the all-zero assignment is a local optimum, for each variable $x_i$ there must be at least as many clauses $\bar{x}_i$ as clauses $x_i$. Hence the all-zero assignment is actually optimal. This shows that there is no worst case in which the approximation ratio is exactly $1/2$. Perhaps this gives you a start on finding the actual approximation ratio of your local search algorithm.
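The key probabilistic step, that every nonempty clause is satisfied with probability at least $1/2$ under a uniformly random assignment of weight exactly $n/2$, can be checked by brute force on small instances. A sketch in Python (the clause encoding, signed variable indices, is a made-up convention for illustration):

```python
from itertools import combinations

def sat_probability(clause, n):
    """Fraction of weight-n/2 assignments over n variables satisfying the clause.
    A clause is a list of literals: +i means x_i, -i means not x_i (i >= 1)."""
    total = sat = 0
    for ones in combinations(range(1, n + 1), n // 2):
        assignment = set(ones)  # variables set to 1
        total += 1
        # a positive literal is satisfied iff its variable is in the assignment
        if any((lit > 0) == (abs(lit) in assignment) for lit in clause):
            sat += 1
    return sat / total

# check every nonempty clause of width <= 3 over n = 6 variables
n = 6
for w in (1, 2, 3):
    for vars_ in combinations(range(1, n + 1), w):
        for signs in range(2 ** w):
            clause = [v if (signs >> k) & 1 else -v for k, v in enumerate(vars_)]
            assert sat_probability(clause, n) >= 0.5
```

The bound is tight exactly for unit clauses (probability exactly $1/2$), matching the argument that $\alpha = 1/2$ forces all clauses to be unit clauses.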
{ "domain": "cs.stackexchange", "id": 19621, "tags": "satisfiability, approximation, maxsat" }
How do permanent magnets manage to focus field on one side?
Question: The actuator of a hard drive head consists of two very strong neodymium magnets, with an electromagnetic coil between them. If you take that apart, the magnets attract each other very strongly. There's no doubt the field between them is very strong. But if you flip them back to back, there is no repulsion force - there is pretty much nothing. While the magnets are very strong on one side, they seem completely inert on the other. I understand how a U-shaped magnet focuses its field near the two exposed poles, but how does a flat rectangular plate manage to keep the field over one flat, wide surface and nearly none near the other? [Sorry about the heavy edit, but it seems the question got totally muddled with irrelevant discussion about unipolar magnets and the possibility or impossibility of removing the magnetic field from one side. Only heavy rephrasing may help.] Edit: some photos: 1. The magnets stuck together. They hold really hard; I'd be unable to just pull them apart, but I can slide one against the other to separate them. 2. The magnets in "inert position" - they don't act on each other at all, just like two inert pieces of metal. 3. The magnets seem to have two poles located on the surface off-center. Here, a normal magnet placed against the hard disk magnet, centering itself on one side, then flipped - on another. 4. The metal shield seems to act like completely unmagnetized ferromagnetic material. I can stick the "normal magnet" any way I like to it, and it doesn't act on another piece of ferromagnetic material (a needle) at all. When I apply a small magnet to it, it becomes magnetic like any normal "soft" ferromagnet - it attracts the needle weakly. It behaves as if the (very powerful) neodymium magnet glued to the other side weren't there at all. Unfortunately, the neodymium magnets are glued very well and are so fragile that I was unable to separate any without snapping it, and then the "special properties" seem to be gone.
Answer: The metal plates the magnets are glued to are an iron-nickel alloy with very high magnetic permeability, called mu-metal. I don't understand all of the details of magnetism or how mu-metal works, but that should get you started.
{ "domain": "physics.stackexchange", "id": 7029, "tags": "electromagnetism, magnetic-fields" }
Algorithm or function or model that encourages clustered classification?
Question: I have a soft classification problem, i.e., the correct label for a certain instance is not just one class with 100% probability, but rather a bunch of classes with probabilities that sum to one. What I know as a priori information (I know it because I understand the underlying physical phenomenon) is that the classes are close together, like a cluster. E.g., classes one, five, and eight always come together because they're adjacent (PS: I can create an adjacency matrix if required). What do I want? I want some way to tell the NN about this fact. Currently it is a vanilla classification neural net. Any suggestions, readings, or guidance are appreciated. Answer: Why not create another set of classes which corresponds to the groups you want? This works if you care more about the groups. Later on you can subsequently classify within groups. Another idea would be to devise a loss function which penalizes within-group misclassification less and out-of-group misclassification more, but this involves going into the implementation details of your NN. As an idea, it would involve writing something like the softmax output multiplied by a misclassification cost matrix.
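The second suggestion, a loss weighted by a misclassification cost matrix, might look like the plain-Python sketch below. The group structure, cost values, and function name are all made up for illustration:

```python
def cost_weighted_loss(pred, target, cost):
    """Expected misclassification cost: sum_ij target[i] * cost[i][j] * pred[j].
    cost[i][j] is small when classes i and j sit in the same group."""
    return sum(target[i] * cost[i][j] * pred[j]
               for i in range(len(target)) for j in range(len(pred)))

# 4 classes; assume classes {0, 1} and {2, 3} form two groups (made-up numbers)
cost = [[0.0, 0.2, 1.0, 1.0],
        [0.2, 0.0, 1.0, 1.0],
        [1.0, 1.0, 0.0, 0.2],
        [1.0, 1.0, 0.2, 0.0]]
target = [0.7, 0.3, 0.0, 0.0]   # soft label concentrated in group {0, 1}
within = [0.3, 0.7, 0.0, 0.0]   # prediction confused inside the group
across = [0.3, 0.0, 0.7, 0.0]   # same mass leaked into the other group

# within-group confusion is penalized less than cross-group leakage
assert cost_weighted_loss(within, target, cost) < cost_weighted_loss(across, target, cost)
```

In a framework such as PyTorch the same bilinear form, applied to the network's softmax output, stays differentiable, so it can replace or augment cross-entropy directly; the cost matrix could be derived from the adjacency matrix the questioner mentions.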
{ "domain": "datascience.stackexchange", "id": 5662, "tags": "machine-learning, deep-learning, classification" }
Block on cart, equation of motion
Question: Consider a rigid block of $b \times h$ with mass $m$ on a cart (as depicted below). The cart is given an acceleration $a$, which leads to overturning of the block. The angle of rotation is indicated by $\theta$. This is how far I got (not considering the movement of the cart): The Lagrangian $L=T-V$ is calculated using $$T = \frac12 J \dot{\theta}^2$$ $$V = m g \Delta_y = m g \bigg(r \cos(\alpha - \theta) - \frac{h}{2}\bigg) $$ so that $$\frac{\mathrm{d}}{\mathrm{d} t} \bigg( \frac{\partial L}{\partial \dot{\theta}} \bigg) - \frac{\partial L}{\partial \theta} = 0$$ yields the EOM $$J\ddot{\theta} + m g r \sin(\alpha-\theta) = 0$$ Now, my question is: how do I add the acceleration of the cart to the RHS? My initial guess would be $$J\ddot{\theta} + m g r \sin(\alpha-\theta) = m a r \cos(\alpha - \theta) $$ where $ma$ is the force and $r \cos(\alpha - \theta)$ the lever arm. But I don't believe this is true, since the block does not experience the acceleration $a$ over its full body. Can anyone help or provide some literature? Thanks. Answer: Verified it with FEA; it is correct. Also, taking into account the "rocking" effect requires the piecewise description: $$ J\ddot{\theta} + \begin{cases} -m g r \sin(\alpha + \theta) & \theta < 0 \\ m g r \sin(\alpha - \theta) & \theta \geq 0 \end{cases} = m a r \cos(\alpha - \theta) $$
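A quick numerical sanity check of the proposed EOM (a Python sketch with made-up block dimensions; $J$, $r$, and $\alpha$ are taken for a uniform rectangular block pivoting about a bottom corner, $J = m(b^2+h^2)/3$, $r = \sqrt{b^2+h^2}/2$, $\tan\alpha = b/h$, which is an assumption, not something stated in the question): at $\theta = 0$ the forcing moment exceeds the gravity moment exactly when $a > g\,b/h$, the familiar tipping threshold.

```python
import math

# hypothetical block: width b, height h, mass m, pivoting about a bottom corner
b, h, m, g = 0.2, 0.5, 1.0, 9.81
J = m * (b**2 + h**2) / 3.0       # moment of inertia about the corner
r = math.sqrt(b**2 + h**2) / 2.0  # corner-to-center distance
alpha = math.atan2(b, h)          # angle of r from the vertical

def theta_ddot(theta, a):
    """Angular acceleration from J*theta'' + G(theta) = m*a*r*cos(alpha - theta),
    with G(theta) the piecewise gravity moment from the answer."""
    if theta >= 0:
        gravity = m * g * r * math.sin(alpha - theta)
    else:
        gravity = -m * g * r * math.sin(alpha + theta)
    return (m * a * r * math.cos(alpha - theta) - gravity) / J

a_tip = g * b / h  # predicted overturning threshold
assert theta_ddot(0.0, 0.9 * a_tip) < 0  # below threshold: the block rocks back
assert theta_ddot(0.0, 1.1 * a_tip) > 0  # above threshold: the block overturns
```

That the tipping condition $a > g\tan\alpha$ drops out of the equation at $\theta = 0$ is consistent with the elementary moment balance about the corner.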
{ "domain": "physics.stackexchange", "id": 11771, "tags": "homework-and-exercises, newtonian-mechanics, lagrangian-formalism, rotational-dynamics" }
Do photons emitted from an LED show bunching?
Question: If photons are emitted from a thermal source, we get photon bunching. For coherent radiation, the detection probability doesn't change after detecting a photon. For single-photon sources, we get anti-bunching. An LED isn't a thermal source, but it isn't coherent either. Does this mean that we still have some amount of bunching? I didn't find anything in the literature because everybody is all over single-photon LEDs. Answer: One thing that you can be sure of is that, for a large enough LED, you will get Poisson statistics to a very good approximation. Neither bunching nor anti-bunching. The reason is simple: one photon comes from a certain part of the LED; the next photon is likely to come from a totally different part of the LED and head in a totally different direction. There's no way that either of these photons can influence the other. The question is, what do I mean by "large enough LED"? 100 microns is definitely large enough. 100 nanometers is probably not. In between those, I don't know. I hope someone else will give a better answer!! :-D (I'm referring to the size of the active area on the chip, not the size of the package.)
{ "domain": "physics.stackexchange", "id": 25255, "tags": "photons, quantum-optics, light-emitting-diodes" }
Calculating total maintenance costs of a car
Question: Introduction I drive a white 2CV from home to work, and intend to keep it that way until retirement. Once retired, I will sell the 2CV to buy a yacht and sail away. My home is far, far away from my work, so I put a lot of kilometers on the 2CV every day. The 2CV is beautiful and nice, but it is also an old car, so a lot of maintenance is required. Every month I'm facing this dilemma: should I repair the 2CV engine or should I exchange it for a reconditioned one? You see: as the engine gets older, its maintenance cost increases, and at the same time its return value in the exchange process decreases (the older the engine, the more I have to spend to exchange it for a reconditioned one). On the other hand, the older the engine, the lower the final selling price of the car. Obviously, at the end (i.e., at my retirement) I would like to have the maximum money possible for the yacht. This forces me to make the right decision each month, i.e., either maintain or exchange the engine. Task Write a program for computing the total minimum cost I'll have to spend by the end of N months (the period from now till retirement), knowing the initial age of the engine I (in months), the series of maintenance costs for 2CV engines over the months, the price of an engine exchange as a function of its age (in months), and the selling prices of the 2CV as a function of the age of its engine. I don't want to count every cent, so all the above values are integers. Notice that the mooshak timeout for this task is 1 second. That is, your code should output an answer in 1 second max. A Time Limit Exceeded or Run Time Error will be issued otherwise. Input The input consists of 5 lines. The first line has an integer N representing the number of months till retirement, 1 ≤ N ≤ 240. The second line has the integer I, the initial age of the engine in months, 0 ≤ I ≤ 100.
The third line has a space-separated sequence of integer maintenance costs C(i), for a one-month period, of an engine aged i months at the beginning of the current month, 0 ≤ i ≤ N + I - 1. The fourth line has a space-separated sequence of integer exchange prices T(i) for an engine aged i months, 1 ≤ i ≤ N + I - 1. The fifth line has a space-separated sequence of integer selling prices S(i) of the 2CV equipped with an engine that just turned i months old at the end of the N months, 1 ≤ i ≤ N + I + 1.

Output The output line has an integer representing the maximum possible money for the given input.

Input example 1
1
3
10 50 100 350
18 75 170
5000 4980 4750 4000

Output example 1
4820

Input example 2
5
2
100 150 200 250 330 450 499
180 290 390 450 500 500
5000 4980 4750 4000 3950 3730 3000

Output example 2
3620

My solution

    import java.util.Scanner;

    public class Main {
        static int[] maintenance;
        static int[] exchange;
        static int[] sell;
        static int truemax = Integer.MIN_VALUE;

        public static void main(String[] args) {
            Scanner sc = new Scanner(System.in);
            int months = sc.nextInt();
            int engine = sc.nextInt();
            maintenance = new int[months + engine];
            exchange = new int[months + engine - 1];
            sell = new int[months + engine];
            for (int i = 0; i < months + engine; i++) {
                maintenance[i] = sc.nextInt();
            }
            for (int i = 0; i < months + engine - 1; i++) {
                exchange[i] = sc.nextInt();
            }
            for (int i = 0; i < months + engine; i++) {
                sell[i] = sc.nextInt();
            }
            sc.close();
            int routes = (int) Math.pow(2, months);
            int f = routes / 2;
            route(0, f, engine, engine, 0, 0);
            System.out.println(truemax);
        }

        static int type;

        public static int route(int curr, int per, int eng, int inv_eng, int z, int inv_z) {
            type = Math.floorDiv(curr, per);
            if (type % 2 == 0) {
                z += exchange[eng - 1] + maintenance[0];
                eng = 1;
                inv_z += maintenance[inv_eng];
                inv_eng++;
            } else {
                z += maintenance[eng];
                eng++;
                inv_z += exchange[inv_eng - 1] + maintenance[0];
                inv_eng = 1;
            }
            if (per != 1) {
                for (int i = 0; i < 2; i++) {
                    curr = route(curr, per / 2, eng, inv_eng, z, inv_z);
                }
            } else {
                z = sell[eng - 1] - z;
                inv_z = sell[inv_eng - 1] - inv_z;
                curr++;
                if (z > truemax || inv_z > truemax) {
                    if (z > inv_z) {
                        truemax = z;
                    } else {
                        truemax = inv_z;
                    }
                }
            }
            return curr;
        }
    }

My program is taking too long for bigger inputs. What can I change so that it executes in under 1 second for those inputs?

Answer: This is a textbook example asking for dynamic programming. The time limit is chosen such that success is impossible without dynamic programming. Dynamic programming can be applied to recursive problems. The idea is to store the results of all sub-problems in a table, and if you encounter the same sub-problem again in another recursion path, you can return early. Change your code to something like this:

    public class Main {
        [...]
        static Hashtable optima; // table to store results for re-use

        public static void main(String[] args) {
            optima = new Hashtable();
            [...] // read input
            route(...); // call route
        }

        public static int route(...) {
            // if the optimum for the sub-problem is already known, fetch it
            Object optimum = optima.get(new Pair(remainingMonths, engineAge));
            if (optimum != null) {
                return (int) optimum;
            }
            // if not, calculate it normally using recursion
            [...]
            // store the newly calculated result for re-use
            optima.put(new Pair(remainingMonths, engineAge), optimum);
            return (int) optimum;
        }
    }
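The full memoized recurrence is small enough to sketch end to end. The state is (month, engine age), exactly the pair the answer suggests caching; the transitions mirror the cost model in the question's Java code (maintain: pay C(age); exchange: pay T(age) plus C(0) for the fresh engine). Python is used here for compactness; the Java version is the same recursion plus the lookup table:

```python
from functools import lru_cache

def max_money(n_months, init_age, C, T, S):
    """C[i]: cost of maintaining an engine aged i for one month (i >= 0).
    T[i]: price of exchanging an engine aged i (i >= 1).
    S[i]: selling price of the car with an engine aged i months (i >= 1)."""
    @lru_cache(maxsize=None)
    def best(month, age):
        if month == n_months:
            return S[age]                          # retirement: sell the car
        keep = best(month + 1, age + 1) - C[age]   # maintain this month
        swap = best(month + 1, 1) - T[age] - C[0]  # exchange, then run the new engine
        return max(keep, swap)
    return best(0, init_age)

# the two examples from the problem statement
C1, T1 = [10, 50, 100, 350], {1: 18, 2: 75, 3: 170}
S1 = {1: 5000, 2: 4980, 3: 4750, 4: 4000}
assert max_money(1, 3, C1, T1, S1) == 4820

C2 = [100, 150, 200, 250, 330, 450, 499]
T2 = {1: 180, 2: 290, 3: 390, 4: 450, 5: 500, 6: 500}
S2 = {1: 5000, 2: 4980, 3: 4750, 4: 4000, 5: 3950, 6: 3730, 7: 3000}
assert max_money(5, 2, C2, T2, S2) == 3620
```

With at most 240 months and engine ages up to 340, the memo table has a few tens of thousands of entries, comfortably within the 1-second limit, versus the 2^N paths the original exhaustive recursion explores.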
{ "domain": "codereview.stackexchange", "id": 30242, "tags": "java, algorithm, programming-challenge, time-limit-exceeded" }
Coplanar conducting sheets at different potentials
Question: There is an exercise in Zangwill (7.23 Contact Potential) shown below; it is not very clear in the diagram, but $\rho$ is normal to $z$. The aim is to compute the potential in space, and you are advised to argue that the potential is univariate in $\phi$ by symmetry arguments. Consider a cylindrical coordinate system with its axis along the boundary between the sheets. The radial direction away from the boundary has no scale, so we expect no variation in that direction. There is also symmetry along the line of the boundary, so we expect no dependence there either. This leaves just the angular variable. I have two results for the potential of this system, but they aren't the same. This violates uniqueness. The textbook solution is that the problem is univariate in $\phi$ and therefore linear in $\phi$. And if it's linear in $\phi$, then the $E$ field lines have power-law spacing. However, when I did this, I thought of a solution via a conformal map, where I take the solution of an infinite parallel-plate capacitor and map the coordinates with a function so that the two plates become collinear, as in the problem statement. (Credit for the Desmos tool found here goes to the YouTube channel Partial Science; I had to make very few edits, and it was useful for visualizing the map continuously with a parameter.) Starting with the infinite parallel-plate capacitor, each sheet on $z=x\pm i\pi/2$, shown below, and mapping the same curves into the domain $w=\exp(z)$, we get the upper/lower plate on $w=\pm iv, v=\exp(x)$, where the gap near zero vanishes as the lower limit of $x$ approaches $-\infty$. But under this map, the field lines start at uniform spacing and then get exponential spacing, disagreeing with the first solution. I looked up the exponential map and found that it is conformal. I've run across some other properties on a region of $\mathbb C$ near zero, but none of them make it clear to me that this should fail. What am I missing? Turns out I messed up the method.
I don't know if I should or shouldn't answer my own question if I just made a mistake. Answer: I had some misunderstanding of the conformal map technique with the complex potential. So I'll go through the solution, my misunderstanding, and a clarification of the technique. Before this, a known solution to the problem is that $\varphi$ is a linear function of the angle around the edge of contact, $\varphi(\theta)=V\theta/\pi$, and uniqueness implies any other solution must be equal to this one. Note that the $E$ field for such a solution would be inverse to the distance from the edge of contact. The solution to the problem (the potential as a function of space for two half planes in contact) via conformal maps is as follows. Consider an infinite parallel-plate capacitor with distance $\pi$ between the surfaces. One surface is at $\varphi=V$, the other at $\varphi=0$. Each surface is a contour described by $z = x\pm i\pi/2$. If we want these two curves to lie parallel to one another and intersect at one point, then we can use the map $$w=\exp(z)=\exp(x)e^{\pm i\pi/2}=\pm i\exp(x)$$ and these two contours will both lie on the purely imaginary axis and share one point. It occurs as $x\rightarrow-\infty$, where $w\rightarrow0$. My misunderstanding happened here. I have one solution that has $\exp(x)$, and I considered this to be related to the "spacing" of field lines and therefore the strength of the $E$ field, but from the prior result I expected the field strength to be inverse to distance, not exponential. It turns out I jumped the gun on a conclusion. The conformal map of the coordinates is not a complete description of the problem. The technique is to leverage a change of coordinates and the following fact: $$ f(z) = f(z(w)) = \varphi + i\psi $$ for a potential $\varphi$ and a function $\psi$ describing the field lines, to describe the "same potential" in different coordinates.
In both cases the physical potential $\varphi$ is described by the same $f$, each time in a different coordinate system. So I describe $\varphi$ in the $z$ coordinate system. The function $f(z)=-(V/\pi)iz$ has uniform vertical field lines, just like a parallel-plate capacitor with horizontal plates. But $z=\log w$, so $$ f(z) = -(V/\pi)iz = f(z(w)) = -(V/\pi)i \log w = f(w) $$ and if we use the polar parametrization $w=r\exp(i\theta)$, then $$ f(w) = (V/\pi) (\theta-i\log r) $$ which has real part $\varphi = {\mathrm{Re}}\,f =(V/\pi) \theta\,$ as expected. The imaginary part shows that the field lines have logarithmic spacing, which I did expect but misinterpreted. So my misunderstanding was twofold: first, I didn't carry the technique through to the end to find the potential; second, the spacing of the field lines (related to the exponential) was not the form of the field strength (an inverse power).
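The final identity, $\mathrm{Re}\,f = (V/\pi)\theta$ independent of $r$, can be spot-checked numerically. A small Python sketch using the principal branch of the complex logarithm ($\theta \in (-\pi, \pi]$, matching the branch cut along the conducting sheets' plane):

```python
import cmath
import math

V = 1.0

def f(w):
    """Complex potential f(w) = -(V/pi) * i * log(w)."""
    return -1j * (V / math.pi) * cmath.log(w)

# Re f should equal (V/pi)*theta for any radius r (the linear-in-angle solution),
# while Im f = -(V/pi)*ln r encodes the logarithmic spacing of field lines.
for r in (0.1, 1.0, 10.0):
    for theta in (-2.5, -1.0, 0.0, 0.5, 3.0):
        w = r * cmath.exp(1j * theta)
        assert abs(f(w).real - (V / math.pi) * theta) < 1e-9
        assert abs(f(w).imag + (V / math.pi) * math.log(r)) < 1e-9
```

The radius drops out of the real part entirely, which is exactly the statement that the equipotentials are rays from the contact edge.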
{ "domain": "physics.stackexchange", "id": 74565, "tags": "homework-and-exercises, electrostatics, potential" }
Is Callen really being sloppy saying $\text{đ}Q=T\mathrm{d}S$ for *all* quasi-static processes?
Question: I am reading Callen's Thermodynamics and an Introduction to Thermostatistics (second edition). The textbook says in chapter 4-2 that $\text{đ}Q=T\mathrm{d}S$ always holds for quasi-static processes, reversible or irreversible: "The identification of $-P\,\mathrm{d}V$ as the mechanical work and of $T\,\mathrm{d}S$ as the heat transfer is valid only for quasi-static processes." This says nothing about irreversible processes, but later Callen applies the $\text{đ}Q=T\mathrm{d}S$ formula to the process of free heat exchange between systems of different temperatures, which is clearly irreversible, in equation (4.9) on page 105. For the record, quasi-static means a slow succession of equilibrium states; reversible means that the process can be run in the opposite direction, and this is equivalent to the conservation of the total entropy of the closed system in question. I also stumbled upon this Stack Exchange answer saying: "The condition to write $\text{đ}Q=T\mathrm{d}S$ is that the process be reversible. In general, we have $\text{đ}Q\le T\mathrm{d}S$. For a concrete example, consider two gases separated by an insulating piston with friction. One can consider a quasi-static process where the piston slowly moves to the right. Even an infinitesimal motion is not reversible, because heat is produced by friction. The equality $\text{đ}Q=T\mathrm{d}S$ is violated, as $\mathrm{d}S>0$, while $\text{đ}Q=0$." I assume here that there are two subsystems: (the vessel with the piston) and (the two gases). Both $T\mathrm{d}S$ and $\text{đ}Q=0$ refer to the two gases, their entropy change and heat output. This example kind of contradicts Callen's approach. So, I have the following questions: Why exactly is $\mathrm{d}S>0$ for the two gases in this example? Does Callen's approach really fail for this example? It sounds reasonable that processes involving friction are irreversible. Does Callen address this in his book at all?
Answer: Understanding entropy change is much simpler than it seems from your description of Callen. Here are the basics: Entropy is a physical (state) property of the material(s) comprising a system at thermodynamic equilibrium, and the entropy change between two thermodynamic equilibrium states of a system depends only on the two end states (and not on any specific process path between the two end states). For a closed system, there are only two ways that the entropy of the system can change: (a) by heat flow across the system boundary with its surroundings at the temperature present at the boundary, $T_B$; this is equal to the integral from the initial state to the final state of $dQ/T_B$ along whatever path is taken between the two end states; (b) by entropy generation within the system as a result of irreversibility; this is equal to the integral from the initial state to the final state of $d\sigma$ along whatever path is taken between the two end states, where $d\sigma$ is the differential amount of entropy generated along the path. Contribution (a) is present both for reversible and irreversible paths. Contribution (b) is positive for irreversible paths and approaches zero for reversible paths. For any arbitrary path between the two end states, the two contributions add linearly: $$\Delta S=\int{\frac{dQ}{T_B}}+\sigma$$ Determining the amount of entropy generation along an irreversible path is very complicated, so, to determine the entropy change for a system between any initial and final thermodynamic equilibrium states, we are forced to choose only from the set of possible paths that are reversible in applying our equation. The reversible path we choose does not have to bear any resemblance to the actual path for the process of interest. All reversible paths will give the same result, and will also provide the entropy change for any of the irreversible paths.
Armed only with these basics, one can determine the change in entropy for a closed system experiencing any process, provided that application of the 1st law of thermodynamics is sufficient to establish the final thermodynamic equilibrium state.
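As a worked example of this recipe (a Python sketch with made-up masses and temperatures): let two identical blocks exchange heat irreversibly until they reach a common temperature. The entropy change of each block is computed along a reversible heating/cooling path, and since the isolated pair exchanges no heat with its surroundings, the total change is pure generation, $\sigma > 0$:

```python
import math

# hypothetical blocks: equal mass m, specific heat c, initial temperatures T1, T2
m, c = 1.0, 4186.0       # kg, J/(kg K)
T1, T2 = 400.0, 300.0    # K
Tf = (T1 + T2) / 2       # equal heat capacities -> arithmetic-mean final temperature

# reversible path for each block: dS = m c dT / T, integrated from Ti to Tf
dS_hot = m * c * math.log(Tf / T1)   # negative: the hot block loses entropy
dS_cold = m * c * math.log(Tf / T2)  # positive: the cold block gains more

# no heat crosses the boundary of the isolated pair, so the integral of dQ/T_B
# vanishes and the whole of Delta S is generated entropy
sigma = dS_hot + dS_cold
assert dS_hot < 0 < dS_cold
assert sigma > 0  # = m c ln(Tf^2 / (T1 T2)) > 0 by the AM-GM inequality
```

The actual equilibration path is irreversible and was never integrated over; only the reversible stand-in paths for each block were, which is exactly the point of the answer.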
{ "domain": "physics.stackexchange", "id": 85847, "tags": "thermodynamics, entropy, reversibility" }
Location of free charge in insulators
Question: I'm going through the introductory section on Electrostatics in Materials in Griffiths, and I have a question that I can't seem to find a satisfactory answer to. If I have an insulator with free charge, is it necessarily confined to the surface? In the case of a conductor, Gauss's law immediately gives a "yes", because no electric field can exist in a conductor, leading us to conclude that there is no free charge inside the surface. But insulators can have electric fields inside them. Does this mean that free charge can exist inside the volume? Or does the free charge still move to the surface of the insulator? Answer: The question is not entirely clear; I wonder if you are really asking about bound charge (polarization charge). Free charge is charge that is free to move macroscopically; bound charge is charge that can only move on a microscopic or submicroscopic scale. Free charge ends up on the surface of a dielectric medium under static conditions. In a uniform dielectric slab subjected to a uniform electric field, bound charge is displaced in proportion to the strength of the electric field, but there is no net bound charge density except at the surfaces of the slab. However, a net density of bound charge is confined to the surface of a dielectric only if the dielectric and the electric field are uniform. See this Feynman lecture. If, for example, there are two slabs of different dielectric media laminated together and an electric field is imposed across the combined slab, bound charge will exist at the interface between the two. Or, if a slab of material were prepared containing a gradient in the ratio of two substances with different dielectric constants - so that the net dielectric constant in the material has a gradient - and an electric field is applied across the slab, bound charge will be distributed throughout the volume of the slab.
{ "domain": "physics.stackexchange", "id": 57713, "tags": "electrostatics, charge, dielectric, insulators" }
When water is about to boil
Question: Have you ever noticed? When water is about to boil, no matter the kettle, there is some sound, and I have no idea where it comes from, sometimes long before it boils. Is there any explanation for this phenomenon? Answer: The water near the heating element turns into water vapor. This vapor then rises up to the surface, but as it meets colder water above, it turns back into water. As the water/steam transition is not smooth (e.g. the volume changes rather rapidly during the phase change), the constant transition between vapor and liquid states produces noise. As the water reaches 100 °C (the boiling point), the vapor bubbles no longer turn back into liquid before reaching the surface, thus making less noise. For more on this click this.
{ "domain": "physics.stackexchange", "id": 927, "tags": "fluid-dynamics, acoustics" }
Error correction that adapts to different error rates
Question: Say we have $N$ bits that we'd like to store in an $M$-bit error correcting code, where $M > N$. Given $\epsilon > 0$, as long as $N > N_0(\epsilon)$, we can recover the original bits provided any $N(1+\epsilon)$ of the $M$ bits are correct. Now say the bits are ordered in decreasing order of importance. Can we pick a single $M$-bit code so that if $K > K_0(\epsilon)$ out of the $M$ bits are uncorrupted, we can recover the first $K(1 - \epsilon)$ original bits correctly? Here the difference is that the code is independent of $K$. Answer: This is impossible as stated. Consider $M = N, K = N/2$. Up to $\epsilon$, we're asking to be able to recover the whole input from the whole code, and the first half of the input from either the first half of the code or the second half. But if we can recover the first half of the input from the first half of the code, this implies we can recover the second half of the input from the second half of the code, and vice versa. Thus we can recover both halves of the input from the first half of the code, which is impossible.
{ "domain": "cstheory.stackexchange", "id": 4210, "tags": "it.information-theory" }
When can one write $a=v \cdot dv/dx$?
Question: Referring to unidimensional motion, it is obvious that it doesn't always make sense to write the speed as a function of position. It seems to me that this is a necessary condition to derive formulas like: $$v^2=v_0 ^2 +2\int_{x_0}^{x}a\cdot dx$$ In fact, in the first step of the derivation (the one I saw, but I think this step is crucial) it's required to write $a=dv/dt=(dv/dx)(dx/dt)$, which doesn't make sense if $v$ isn't a function of $x$. When can one rigorously write $v=v(x)$? Answer: This is going to be essentially the same in content as Jerry Schirmer's response, but I thought you might like to hear it in more mathematical terms. The velocity function $v$ is defined as $$ v(t) = \dot{x}(t) $$ Let's take the domain of the position function to be the open interval $(t_1, t_2)$ and suppose that it has the property that given any point $x_0$ in the range of $x$, there is a unique point $t_0$ in its domain $(t_1, t_2)$ such that $x(t_0) = x_0$. Then there exists a function $x^{-1}$ (the inverse of $x$) defined on the range of $x$ satisfying $$ x^{-1}(x(t)) = t $$ Now we define a function $\bar v$ on the range of $x$ by $$ \bar v(x) = v(x^{-1}(x)) $$ It is common to abuse notation here and use $v$ in place of $\bar v$ for this function, but let's keep things notationally rigorous. Then on one hand the chain rule gives $$ \frac{d}{dt}\bar v(x(t)) = \frac{d\bar v}{dx}(x(t))\,\dot x(t) = \frac{d\bar v}{dx}(x(t))\,v(t) $$ While on the other hand we use the definition of $\bar v$ to write $$ \frac{d}{dt}\bar v(x(t)) = \frac{d}{dt} v(x^{-1}(x(t))) = \frac{dv}{dt}(t) = a(t) $$ and combining these observations gives the identity you wanted $$ a(t) = \frac{d\bar v}{dx}(x(t))\,v(t) $$ Notice that if we indulge in the usual abuse of notation, then we can simply write this as $$ a = v \frac{dv}{dx} $$
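As a sanity check of the final identity, here is a small numerical sketch (all numbers are made up for illustration): for constant acceleration the question's own formula gives $\bar v(x)=\sqrt{v_0^2+2ax}$ (taking $x_0=0$), and a central-difference derivative confirms $\bar v\,d\bar v/dx = a$:

```python
import math

# For constant acceleration a, the question's formula gives
# v(x) = sqrt(v0^2 + 2*a*x) (taking x0 = 0). Check that v * dv/dx = a.
a, v0 = 3.0, 2.0
v = lambda x: math.sqrt(v0**2 + 2*a*x)

x, h = 1.5, 1e-6
dv_dx = (v(x + h) - v(x - h)) / (2*h)  # central-difference derivative of v(x)
print(round(v(x) * dv_dx, 6))  # -> 3.0
```

Analytically $d\bar v/dx = a/\bar v$ here, so the product recovers $a$ exactly, as the printout shows.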
{ "domain": "physics.stackexchange", "id": 6552, "tags": "newtonian-mechanics, energy, kinematics, work" }
Wasn't the Hawking Paradox solved by Einstein?
Question: I just watched a BBC Horizon episode where they talked about the Hawking Paradox. They mentioned a controversy about information being lost but I couldn't get my head around this. Black hole thermodynamics gives us the formula $$S ~=~ \frac{A k c^3 }{4 \hbar G}=4\pi k\dfrac{M^2}{M_P^2}.$$ where $M_P$ is the Planck mass. And we also have Einstein's famous $E = M c^2$, which means that mass can be turned into energy, right? Hence information is either lost or it is preserved, but now in energy-form instead of mass-form. I can't understand why radiation from black holes would be any different than an atomic bomb for example, where mass is also turned into energy? Answer: Radiation normally contains subtle correlations. For all practical purposes you can't use it, but it's there. Hawking radiation is, according to the theory, perfectly thermal and does not contain any more information than the temperature itself. The problem is that then the process of black hole evaporation is not reversible, in principle. Unlike all other processes that we know of (which might be irreversible in practice, but are reversible in principle). That irreversibility (which implies non-unitarity) is incompatible with quantum mechanics. That is the problem in a nutshell. It is really a conflict between quantum mechanics and semi-classical general relativity. There are many more things to be said but I get the impression you haven't done a lot of reading about this and details would be rather pointless. I suggest you browse around for a bit with that starting point in mind.
{ "domain": "physics.stackexchange", "id": 3624, "tags": "thermodynamics, black-holes, entropy, information, hawking-radiation" }
With ebikes, is there friction from the motor when riding with the electronics turned off?
Question: One may want to ride an electric bike with the electronics turned off. There are many reasons to do this, e.g. a flat battery, or fitness (who would have thought). When riding an ebike conventionally, is there a resistive friction caused by the DC motor? If so, do e-bikes have a way to 'disengage' the motor from the wheel? Potentially, a clutch? Answer: Different electric assist bikes will present different circumstances. For instance, a standard hub motor (front or rear wheel) will have permanent magnets which react with the iron in the stator. You can feel what is called cogging when rotating the wheel by hand, without power. This is energy expended by hand, but also applies when pedaling without applying throttle. It's not zero, but it's also not a substantial amount. Geared hub motors will have an internal freewheel, which prevents this cogging. One can rotate the wheel (in the forward direction) and not feel interaction with the magnets. This particular design has less energy loss to consider when pedaling without applying throttle. Mid-drive motors can be considered as geared hub motors, as they also have freewheels that permit pedaling without motor rotation, and the description above applies there as well. My now-sold velomobile used a Stokemonkey electric assist which required me to pedal whenever power was applied by battery/throttle, but would not spin when I pedaled without application of throttle. There is a freewheel on the motor which had minimal resistance, but not zero. Typically, there will be very little additional friction or load to consider, although the non-geared hub motor design is going to be the least efficient when operating without battery power applied.
{ "domain": "engineering.stackexchange", "id": 3564, "tags": "electrical-engineering, motors, electric-vehicles, bicycles" }
Is there any planet out there which is half gaseous and half rocky ? is this possible?
Question: I have heard many times that this planet is gaseous like Jupiter, that planet is a rocky super-Earth, etc. So is there any planet which is a mix of gas and rock? Answer: Probably. Relatively little is known about exoplanets because they're very hard to get a good look at, but there's no reason why a rocky world couldn't accumulate enough ices and/or gas to also resemble a gas giant. Now, half rocky half gas giant (hydrogen/helium) might be rare. Half rocky half "ices" is very possible, and those have likely already been observed. It helps to understand planet formation and elemental abundance. There's likely a size limit to rocky planets with thin atmospheres because mass tends to accumulate and retain atmosphere, and with hydrogen and helium being the most abundant gases in the universe, once a planet gets massive enough, it should collect and retain hydrogen and helium and begin to resemble Jupiter or Saturn, even if it started out as a rocky world. Rocky worlds need elements like magnesium, silicon, and iron, often bound with oxygen. Close to 90% of the mass of rocky worlds in our solar system comes from those 4 elements, and 90% isn't a bad estimate for rocky exoplanets. Based on our solar system: iron makes up about 0.117%, silicon 0.065% and magnesium 0.051%, for about 0.234% of the mass of the solar system. Oxygen bound to those elements, using Earth's composition as an estimate, adds about another 50%, up to about 0.35%. Hydrogen and helium make up about 97.6%, which leaves about 2% of the mass of the solar system in the form of ices and heavier gases, not hydrogen and helium. Those are primarily water, ammonia, CO2 and methane, with smaller amounts of other gases/ices. These numbers are rough but good enough for an estimate. In our solar system there's about 6 times as much ice and heavier gases by mass than there is rocky material, and there's close to 40 times as much hydrogen and helium as the other elements.
Planets need to be quite large to hold hydrogen and helium. Jupiter and Saturn are the only planets in our solar system that are hydrogen and helium abundant. Even the smaller gas giants, Uranus and Neptune, have comparatively little hydrogen and helium. So there's probably no such thing as a small gas giant. Gas giants need to be large or they don't exist at all. (I should make a footnote that as a gas giant loses atmosphere, you can get a small gas giant, but that would be a transition phase.) I don't like the term "ice giants", though it's often used to define Uranus and Neptune because they are primarily ices (water, methane, CO2, NH3), not primarily hydrogen/helium. I don't like calling them "ice giants" when they can also be hot, so I prefer the term Neptune-like planets, or Neptunes. Uranus (a Neptune-type planet) is mostly made up of ices and heavier gases, with an estimate of just 3%-10% hydrogen and helium. Estimates are about 70%-90% ices and non-hydrogen/helium gas, with the remaining roughly 7%-20% denser material. That would be some other elements, perhaps some solidified carbon (diamonds), higher sulfur and lower iron by percentage than Earth, as well as some Earth-like silicates, magnesium oxides, iron and iron oxides, though the high internal temperature might not leave much in the way of chemical bonds towards the center of the planet, where temperatures reach 9,000 degrees C. Neptune is similar to Uranus but more massive and more dense. It's thought to have more water and perhaps a larger internal mantle, though like Uranus, the majority of its mass is ices and heavy gases (not hydrogen/helium). Neptune and Uranus have hydrogen in the form of methane, ammonia and water, but I'm counting that hydrogen as part of their "heavy gases/ices". They both have a comparatively low percentage of free hydrogen (3%-10%), making a clear distinction between Neptune-like planets and gas giants like Jupiter and Saturn, for which the majority of the mass is hydrogen.
It helps to think of how planets form. Smaller planets have too little gravity to retain hydrogen and helium gas, and solar systems are mostly too warm for hydrogen to freeze, so planets are made up of solid material for the most part, either rock or ices that clump and stick together during formation. When they have enough gravity, they can begin to hold onto an atmosphere, and being further from the star makes this easier. The moon Titan, as an example, is quite small for a body with an atmosphere, but it's quite far from the sun and largely made up of ices on its surface, so as the ice on its surface melts, it basically replenishes its atmosphere. Titan is still out-gassing, and it is losing atmosphere, but it loses atmosphere slowly enough that it retains one; in a sense, though, it's in a transition, where its atmosphere is outgassed. When it runs low on surface ice that readily thaws, it should begin to resemble the icy moons of Jupiter. So, back to your original question. Many combinations are possible, some are not. Baby Jupiters (the mass of Neptune) are probably unlikely, though a gas giant close to its star that's lost a lot of its atmosphere could resemble a baby Jupiter, but like Titan, that would be transitional. Rocky Jupiters are unlikely because hydrogen outnumbers rocky material by so much, over 200 to 1 in most of the Milky Way. Now, some solar systems are likely more "metallic" than others, so that ratio will have some variation, but once there's enough mass for a gas giant to form, hydrogen is likely to be the most abundant element, helium #2, and any rocky core would be dwarfed by the hydrogen and helium. A rocky Neptune, however: no reason why not. A planet with 8 Earth masses of Earth-like material and 8 Earth masses of ices (and 5%-10% hydrogen) would basically be a rocky Neptune. It would likely be a little smaller and certainly denser, but not all that different in outward appearance.
"Water worlds" is a common term that might qualify as a "rocky Neptune". We don't know exactly what they are made of, but density estimates suggest a high percentage of water (and presumably CO2, CH4, NH3), which is the majority of Uranus and Neptune. Here's a chart of exoplanets of less than 20 Earth masses. I suspect there's a considerable margin of error in these estimates, but it more or less agrees with the rocky-Neptune argument. Planets with 2 or 3 grams per cc would be in that range. See chart: www.hpcf.upr.edu/~abel/phl/hec_plots/exoplanet_df.png Source. Baby Neptunes of just 1 or 2 Earth masses might be possible too, but they'd probably need to be quite cold and far from their star or they'd be in danger of losing their atmosphere. Ice worlds like Pluto can get quite small, but Pluto has very little atmosphere. Planets generally need to be fairly large to retain their atmosphere. If they are too cold, that atmosphere freezes. Titan, as I mentioned above, is in what could be called a slow transition where its atmosphere comes from its surface and it loses it slowly. Ceres, based on its density, is an icy-moon-like object too, though it's lost nearly all its surface ice; it probably still has a lot of water/"ices" below its surface.
{ "domain": "astronomy.stackexchange", "id": 2579, "tags": "planet, exoplanet, planetary-formation, gas-giants" }
Big O vs Big $\Theta$ during coding interview
Question: Almost every time I see an article about time or space complexity, people are expressing the complexity with Big O, whereas it should be $\Theta$. From the book "Cracking the Coding Interview": "In industry (and therefore in interviews), people seem to have merged $\Theta$ and O together. Industry's meaning of big O is closer to what academics mean by $\Theta$, in that it would be seen as incorrect to describe printing an array as $O(n^2)$. Industry would just say this is O(N)." In an interview context, would it be considered OK to say $\Theta$ instead of O? If the interviewer is asking: "What's the Big O of this algorithm?", is it alright to answer: "The time complexity of this algorithm is $\Theta(n)$"? I'm wondering if most interviewers would think I'm trying to outsmart them by saying that. But I don't feel comfortable replacing O with $\Theta$, since they don't have the same meaning. Answer: Here is an answer from reddit that I found the most useful: I guess I would say, if someone says, "what's O of insertion sort?", you want to say "it's $O(n^2)$". Sure it's not precisely the same thing as saying that it's "$\Theta(n^2)$ in the worst case", but it's a convention that everyone understands, and it takes less time to get the words out of your mouth.
{ "domain": "cs.stackexchange", "id": 16443, "tags": "time-complexity, big-o-notation, ambiguity" }
Operator norm and Action
Question: We define the norm of the operator as $\left\lVert A \right\rVert = \sup_{\psi \neq 0} \frac{\left\lVert A\psi \right\rVert}{\left\lVert \psi \right\rVert} = \sup_{\left\lVert \psi \right\rVert = 1} \left\lVert A\psi \right\rVert$ for $A ∈ L(H)$. It is said that $||A||$ measures the magnitude of the action of $A$. What is meant by the action of $A$, and what is $\sup$ in this equation? Also, how can we check if $A$ is bounded or unbounded with this statement? Answer: An operator is unbounded if its supremum norm is infinite. This follows trivially by negating the definition of a bounded operator in a normed space: "A linear operator $A :D(A) \rightarrow X$ is called bounded, iff $$\exists k\in \mathbb R \text{ so that } \forall \psi\in D(A): ||A\psi|| \leq k ||\psi||.$$ The smallest such $k$ is called the operator (supremum) norm of $A$."
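To illustrate the "magnitude of the action" idea with a toy finite-dimensional example of my own (not from the question): for a matrix, $\sup_{\lVert \psi \rVert = 1} \lVert A\psi \rVert$ is the largest singular value, which a stdlib-only power iteration on $A^{T}A$ can estimate:

```python
import math

# Toy operator: a 2x2 matrix. Its operator norm is the largest singular
# value, i.e. the sqrt of the largest eigenvalue of A^T A (here sqrt(45)).
A = [[3.0, 0.0], [4.0, 5.0]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def op_norm(M, iters=200):
    # Power iteration on M^T M converges to the top singular direction.
    v = [1.0, 0.5]  # fixed start vector, not orthogonal to the top direction
    Mt = transpose(M)
    for _ in range(iters):
        w = matvec(Mt, matvec(M, v))
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = matvec(M, v)
    return math.sqrt(sum(x * x for x in Av))

print(round(op_norm(A), 3))  # -> 6.708
```

The supremum here is attained; for genuinely unbounded operators (e.g. differentiation on $L^2$) no such finite supremum exists, which is exactly the criterion in the answer.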
{ "domain": "physics.stackexchange", "id": 72612, "tags": "quantum-mechanics, hilbert-space, operators, action" }
Integrating custom controller to ros_control package
Question: Hi everyone, I am new to ROS and ros_control and I don't know how to integrate my custom controller into the ros_control package. I have implemented this tutorial and everything works OK. I can run default/existing controllers, but I need to know how I can add my new controller to the controller_manager. In my workspace within the src folder, I have three folders: rrbot_control, rrbot_description, and rrbot_gazebo, and each has its own CMakeLists.txt and package.xml files. Right now I have a .cpp file of my new controller: Where should I put it (in which folder)? Shall I change the CMakeLists.txt and package.xml files? And how? I'm using ROS Melodic and Ubuntu 18.04. Thanks a lot for your help. Originally posted by abata on ROS Answers with karma: 33 on 2020-07-18 Post score: 0 Answer: I found this video and it is the answer to my question :) Thanks. Originally posted by abata with karma: 33 on 2020-07-21 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 35302, "tags": "microcontroller, ros-melodic, ros-control" }
Can $M_3$ move upwards and $M_2$ move to the right?
Question: Can M3 move upwards and M2 move to the right? In the solutions they assume this, but I find it impossible; it doesn't make sense physically. I get the answer wrong if I assume M3 moving down and M2 moving right in the next question, which is the one that makes sense. Answer: Well, let's assume F = 0 -> M3 goes down and M2 to the right. Now assume F > M3g -> M2 goes to the left and M3 goes up. So that's 2 cases, and they can be solved in 1 set of equations. Those equations assume M3 and M2 are free to move in whatever direction they please... The correct paths will be given by doing the math properly. I suppose that's why they assume this. Assume the string is always at tension and doing the math will give you the right directions. Just keep a sharp eye for signs and directions.
{ "domain": "physics.stackexchange", "id": 14676, "tags": "newtonian-mechanics, forces, reference-frames" }
Why is the size of image increasing as observer moves away from lens?
Question: I was using a convex lens and placed the object at principal axis at a distance from optical center lesser than focal length (between $F_1$ and optical center). Then I started observing the size of the image from other side of lens. At first I had placed my eye close to the $F_2$ and between $F_2$ and $2F_2,$ then moved it away from that towards $2F_2.$ I found that as I moved away from lens, the image was getting bigger and bigger. That's where my confusion comes in. What I understand is that the size of the image formed at any point is only dependent on its position from lens and lens. It should not be dependent on observer but the size of object seen by observer can get smaller and smaller as he moves away from lens just like a tree when seen from a distance appears small as compared to looking at it from closer distance. Why is the size of image of object increasing? Answer: It's really not easy to judge the absolute angular size of an object (see Moon illusion). The image you see may get larger relative to the lens frame, but a bit smaller in angular size due to perspective. In any case, with a perfect lens you're watching the virtual image at a fixed distance behind the lens, farther from the lens than the object, and magnified. This is equivalent to an experiment with a window (without optical power, just flat glass) and an object behind it. As you go away from the window, the object will seem larger—compared to the window frame. But it actually becomes smaller, as you can confirm if you try to measure its angular size e.g. by using a coin at an arm's length as a reference. I've done the experiment you describe, and indeed the image grew relative to the lens but shrank relative to a SIM card I placed at a fixed distance to my eye.
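The thin-lens equation makes the "virtual, farther away, magnified" statement concrete. A quick sketch with hypothetical numbers (focal length 10 cm, object 6 cm from the lens, i.e. inside the focal length):

```python
# Thin lens: 1/d_i = 1/f - 1/d_o. With the object inside the focal length,
# d_i comes out negative (virtual image), with |d_i| > d_o and m > 1.
f, d_o = 10.0, 6.0  # cm (hypothetical values)

d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # image distance
m = -d_i / d_o                     # transverse magnification

print(round(d_i, 6), round(m, 6))  # -> -15.0 2.5
```

So the virtual image sits 15 cm behind the lens (farther than the 6 cm object) and is 2.5 times the object's size, matching the answer's description; the perceived growth as you back away is the perspective effect the answer describes.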
{ "domain": "physics.stackexchange", "id": 85701, "tags": "optics, visible-light, observers, lenses" }
A variation of the maximum bipartite matching problem
Question: Given a bipartite simple graph $G=(V,E)$, where $V=A\cup B$ and $A\cap B=\emptyset$, any edge in $E$ connects two vertices in $A$ and $B$, respectively. The maximum bipartite matching problem is to find the largest subset $E'\subseteq E$ such that no two edges in $E'$ share common vertices. We know that this can be addressed in $O\left(\sqrt{|V|}|E|\right)$ time. Now there is a variation. Each vertex in $A$ has exactly two edges, each connecting it with some vertex in $B$. We restrict each vertex in $A$ to be either unmatched, or matched with exactly two vertices in $B$. Each vertex in $B$ can only match at most one vertex in $A$. The question is: how many vertices in $A$ can be matched at most? I've considered reducing this problem to max flow. I set up a super source $s$ and a super sink $t$. $s$ connects to all vertices in $A$, with the capacity of the edges being $2$. $t$ connects to all vertices in $B$, with the capacity of the edges being $1$. The edges between $A$ and $B$ have capacity $1$, and they're added according to the edge set $E$. Now the answer to the question is the max flow of the graph divided by $2$. But in this way, I can't prevent a vertex in $A$ from matching only one vertex in $B$, so it doesn't work. My question is: can this problem be reduced to some problem known to be in $\mathbf{P}$? Background: This problem is derived from a "string merging" game. Let there be $m$ distinct strings of length $n$, each consisting of $0$s and $1$s. If there is one and only one position at which the corresponding characters of two strings are complementary (i.e. $0$ and $1$), we can merge the two strings into a new string with that character replaced by $x$. For example, $10\color{green}{1}11$ and $10\color{green}{0}11$ can be merged into $10\color{blue}{x}11$. Each string can only be used once, and new strings cannot be further merged. Now given $m$ of such strings, the goal of the game is to merge them into as few strings as we can.
However, one string can be merged with many strings, so we need to find the most clever way. Here is another example. Consider four strings: (a) 0110 (b) 0010 (c) 0000 (d) 1000 (a), (b) can be merged into $0x10$, (b), (c) can be merged into $00x0$, and (c), (d) can be merged into $x000$. There are two strategies to perform merging: the first way is to merge (a), (b) and (c), (d); then we'll get two strings. The second is to merge (b), (c), but then (a) and (d) cannot be merged, so we'll get three strings. Clearly the first strategy is more clever. Then the question is: what is the most clever way? I've reduced the problem into a graph theory problem. Let the new strings form set $A$ and the original strings form set $B$. The vertex of each new string is connected to the two strings merged into it. Then the problem is to find the maximum matching as I mentioned. Answer: Yes, the problem is in $\mathbf{P}$. For every vertex $u\in A$, let $M(u)$ be the set of the two vertices in $B$ that are connected with $u$. For every vertex $v\in B$, let $N(v)=\{u\in A|(u,v)\in E\}$. We construct another undirected graph $G'=(V',E^*)$, whose vertex set is $V'=B$. For every $v_1,v_2\in B\,(v_1\ne v_2)$, we let $(v_1,v_2)\in E^*$ if and only if $\exists u\in A$ such that $M(u)=\{v_1,v_2\}$. Now each (effective) $u\in A$ is converted to an $e^*\in E^*$ (for multiple $u$ that share a common $M(u)$, we only take one of them). In $G$ we require that each vertex $v\in B$ can only match at most one $u\in A$, so equivalently we require that the edges in $E^*$ we select do not share common vertices. Now the problem becomes the maximum matching problem, which can be addressed in $O\left(|E^*|{|B|}^2\right)$ time using the blossom algorithm. For the version of the "string merging" game, the graph $G'$ is actually a bipartite graph, and hence the problem can be solved with the Hopcroft-Karp algorithm. We can see this in the following way.
For two strings $s_1, s_2$, if they can be merged, we denote by $g(s_1,s_2)$ the specific position being merged. For example, $g(00\color{green}{1}1,00\color{green}{0}1)=3$, since the third characters in the strings are merged. The method to build $G'$ is as follows. Each string is equivalent to a vertex $v\in B$, so $B$ is simply the string set. We define $(s_1,s_2)\in E^*$ iff the two strings $s_1,s_2$ can be merged. We now show that any cycle in $G'$ contains an even number of vertices (strings). Let $C$ be a cycle and $V(C)$ be its vertex set. We assert that if $s_1,s_2\in V(C)$ and $g(s_1,s_2)=p$, then there must be $s_3,s_4\in V(C)$ such that $\{s_1,s_2\}\cap\{s_3,s_4\}=\emptyset$ and $g(s_3,s_4)=p$. This is easy to understand. We take a tour along $C$: $s_1\to s_2\to \cdots\to s_1$. Note that when the $p$-th character alters, the two adjacent strings can be merged. Now $s_1\to s_2$ is just an example of such a situation. Without loss of generality, let's say the $p$-th character in $s_1$ is $0$ (written as $s_1[p]=0$). Then we must have $s_2[p]=1$. Now since the destination of our tour is $s_1$, the $p$-th character must transform from $1$ back to $0$ somewhere along the tour. Where it transforms, we find $s_3$ and $s_4$. By this approach we can prove that the graph $G'$ is a bipartite graph.
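A compact way to see the bipartition, plus a runnable sketch of the whole reduction (function names are my own): two mergeable strings differ in exactly one character, so their counts of 1s have different parity, and parity therefore splits the string set into the two sides of $G'$. A naive augmenting-path matching (a simple stand-in for Hopcroft-Karp) then solves the four-string example from the question:

```python
def mergeable(s1, s2):
    # Two equal-length strings can be merged iff they differ in exactly one position.
    return sum(a != b for a, b in zip(s1, s2)) == 1

def max_merges(strings):
    # Strings differing in one character have different parity of 1-counts,
    # so the merge graph is bipartite: split by popcount parity.
    left = [s for s in strings if s.count('1') % 2 == 0]
    right = [s for s in strings if s.count('1') % 2 == 1]
    adj = {u: [v for v in right if mergeable(u, v)] for u in left}
    match = {}  # right vertex -> matched left vertex

    def augment(u, seen):
        # Try to find an augmenting path starting from left vertex u.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in left)

print(max_merges(["0110", "0010", "0000", "1000"]))  # -> 2
```

The result 2 corresponds to the first (better) strategy in the question: merge (a) with (b) and (c) with (d).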
{ "domain": "cs.stackexchange", "id": 20859, "tags": "algorithms, graphs, network-flow, bipartite-matching" }
How to find the size of gaps between entries in an array so that the first and last element of the array is filled and the gaps are all of equal size?
Question: I have an array a of n entries. I need to place a token on the first and last position of that array, so a[0] = 1 and a[n-1] = 1. I now want to place additional tokens into that array such that the distance between consecutive indices i where a[i] = 1 is greater than 2 (so placing a token on every index is invalid, as is alternating between used and unused entries). Phrased differently: I want sum(a) < n/2. The gap between tokens should always be the same, so say with an array of size 16, a[0] = 1, a[3] = 1, a[6] = 1, a[9] = 1, a[12] = 1, a[15] = 1 would be a solution with a gap size of 2 (a distance of 3). How do I find all gap sizes with which it is possible to fill said array under the given constraints? Imagine a street between two crossroads where a lamppost should be placed on each crossroad, and then additional lampposts should be placed equidistant from each other, and for some reason only natural-number distances are allowed. (The actual problem I want to solve is where to place Sea Lanterns in my Minecraft project, so do not disregard this problem as an assignment question I want a solution for.) Answer: If I understand your problem correctly, the tokens (lanterns) can be placed every $x$ blocks (starting from $0$) if and only if $x>2$ is a divisor of $n-1$. For example, if the array has $n=31$ elements the valid values of $x$ are $3,5,6,10,15,$ and $30$.
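The divisor test is easy to turn into code. A minimal sketch (function name my own) that lists the valid distances $x$ for an array of $n$ entries:

```python
def valid_spacings(n):
    # Tokens sit at indices 0, x, 2x, ..., n-1, so x must divide n - 1;
    # the "gap greater than 2" constraint means the distance x must exceed 2.
    return [x for x in range(3, n) if (n - 1) % x == 0]

print(valid_spacings(31))  # -> [3, 5, 6, 10, 15, 30]
print(valid_spacings(16))  # -> [3, 5, 15]
```

For the size-16 array in the question this yields distances 3, 5, and 15, consistent with the distance-3 example given there.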
{ "domain": "cs.stackexchange", "id": 16557, "tags": "arrays" }
What is the largest hydrogen-burning star?
Question: I am wondering what is the largest known core hydrogen-burning star? A look at the list of largest known stars on Wikipedia seems to indicate VV Cephei B (at the bottom of the list), but I would like to know for sure if it is the largest known. In addition to knowing which star it is, I would also like to know its temperature, size, and expected lifetime. I am also curious to know if the largest known core hydrogen-burning star is similar to what astrophysicists theorize is the largest possible core hydrogen-burning star (given current metallicity conditions in the universe; I know star-formation timescales have a metallicity dependence), and the expected temperature, size, and lifetime of such an object. Answer: I assume by largest, you mean largest radius. Well, it won't be VV Cep B, since this is merely a B-type main sequence star. O-type main sequence stars are known, and these have both larger masses and larger radii on the main sequence (when they are burning hydrogen in their cores). A selection of the most massive objects can be found in the R136 star-forming region in the Large Magellanic Cloud. If you look at this list (though I recommend having a look at the primary literature), you will see that O3V stars are listed. Such objects are also present in our Galaxy, for instance in the massive star cluster NGC 3603 (Crowther & Dessart 1998). Such stars have masses of maybe $100 M_{\odot}$, luminosities of $2\times 10^{6} L_{\odot}$ and temperatures of 50,000 K. Using Stefan's law, we can deduce radii of $\sim 20 R_{\odot}$. There are suggestions that even more massive main sequence stars have existed in R136 and NGC 3603 (see Crowther et al. 2010), which are now seen as evolved Wolf-Rayet objects, possibly up to $300 M_{\odot}$ on the main sequence (though this is a model-dependent extrapolation), and these would have had radii $>20 R_{\odot}$.
In the very early universe, population III main sequence stars without metals could have been much more massive and larger.
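The radius quoted above follows from Stefan's law, $L = 4\pi R^2 \sigma T^4$, which in solar units reads $R/R_\odot = \sqrt{L/L_\odot}\,(T_\odot/T)^2$. A quick check with the numbers in the answer:

```python
import math

T_SUN = 5772.0  # K, solar effective temperature

def radius_solar(lum_solar, temp_k):
    # L = 4*pi*R^2*sigma*T^4  =>  R/R_sun = sqrt(L/L_sun) * (T_sun/T)^2
    return math.sqrt(lum_solar) * (T_SUN / temp_k) ** 2

# L = 2e6 L_sun and T = 50,000 K, as quoted for an O3V star in the answer
print(round(radius_solar(2e6, 5e4)))  # -> 19, i.e. roughly the ~20 R_sun quoted
```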
{ "domain": "astronomy.stackexchange", "id": 1045, "tags": "star, star-formation, main-sequence" }
Classical, identical particles which are distinguishable
Question: Aren't classical, identical particles, always indistinguishable? Consider monitoring the trajectory of one particle. After it collides with an identical particle how would one continue to keep track of which particle is which? However, textbooks on statistical mechanics routinely discuss classical, identical (but distinguishable) particles in the context of the Gibbs Paradox (https://en.wikipedia.org/wiki/Gibbs_paradox) and other situations. Following query does not address the issue: Distinguishing identical particles Answer: In classical mechanics, you can actually "watch" the particles. You can track the position of each particle before and after the collision. You can label one particle as "particle 1" and the other as "particle 2", and those labels will stay consistent as you watch the system do what it does. In QM that is not the case. We only observe our system through measurements, but before those measurements we cannot "watch" the particles do what they are doing. Therefore, there is no way to give the particles unique labels. This is why we impose certain symmetries under particle exchange for the state vectors of systems of indistinguishable particles.
{ "domain": "physics.stackexchange", "id": 66554, "tags": "statistical-mechanics, identical-particles" }
Why does positive work done by internal conservative forces $\implies$ decrease of potential energy?
Question: Potential energy can be thought of as the amount of work that the force can potentially do on the point because of its position. $$W=-\Delta U=U_{initial}-U_{final}$$ A positive work done by a force translates into a negative variation of potential energy. That sounds ok, given the interpretation of $U$ stated above. If a force does some work, then the "potentiality" of doing more must decrease. But the equation also says that any time the force does a negative work, the potential energy increases. Why does this happen, in the light of such interpretation of $U$? Answer: If a force does negative work, it is in fact trying to work against another force, doing positive work. When you lift up a book from the floor, gravity does negative work on the book, while you do positive work. And the book rises higher up, so $U$ increases. Negative work just means "receiving" instead of "giving" away energy. Which basically is the same as saying that something else is doing work on it. There is no more to it than that.
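The book example can be made concrete in a couple of lines of Python; the mass and height are illustrative values of my own choosing:

```python
m, g, h = 1.0, 9.8, 2.0  # kg, m/s^2, m -- lifting a book 2 m (illustrative values)

W_gravity = -m * g * h   # gravity points down while the book moves up
delta_U = -W_gravity     # dU = -W: negative work by gravity -> U increases

print(W_gravity, delta_U)  # -19.6 19.6
```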
{ "domain": "physics.stackexchange", "id": 35321, "tags": "energy, energy-conservation, work, potential-energy, conventions" }
What is a complete reaction?
Question: For a lab assignment, I combined $\pu{10 mL}$ of $\pu{1 M}$ sodium sulfate solution with $\pu{10 mL}$ of $\pu{1 M}$ calcium chloride solution. The ionic equation I came up with is: $$\begin{multline} \ce{2Na^+ (aq) + SO4^2- (aq) + Ca^2+ (aq) + 2Cl^- (aq) ->}\\ \ce{CaSO4(s) + 2Na^+ (aq) + 2Cl^- (aq)} \end{multline}$$ I was then asked, in a lab question, the following: If the reaction called for $\pu{1 M}$ chloride ion with $\pu{1 M}$ calcium ion, would you still use $\pu{10 mL}$ of each solution for a complete reaction? Explain your reasoning. To my understanding, I initially had $\pu{2 M}$ chloride ion and $\pu{1 M}$ calcium ion. With only $\pu{1 M}$ chloride ion instead, the ionic equation would change to this, or so I'm led to believe: $$\begin{multline} \ce{2Na^+ (aq) + SO4^2- (aq) + Ca^2+ (aq) + Cl^- (aq) ->}\\ \ce{CaSO4(s) + 2Na^+ (aq) + Cl^- (aq)} \end{multline}$$ As a (balanced) molecular equation, to my understanding, this would be: $$\begin{multline} \ce{Na2SO4 (aq) + Ca (aq) + Cl (aq) ->}\\ \ce{CaSO4(s) + NaCl (aq) + Na (aq)} \end{multline}$$ Is this a complete reaction? I have seen conflicting definitions of a complete reaction online: answers.com says that a complete reaction means all of the reactants are reacted into products. However, someone on chemicalforums.com says that a complete reaction means that all of the limiting reactant is used up. The University School of Milwaukee seems to back up what the user on chemicalforums.com said ("all of at least one of the available reactants is used up and converted into products"). My main question is thus: what is a complete reaction? No matter my final equation for this lab question, I still need to know what a complete reaction is so that I can identify them in general. My intuition says that this is a complete reaction, since all of the calcium ions and the chloride ions are converted to products. Is this correct? The leftover sodium ion makes me suspicious, so I don't know how to answer. 
I later spoke to my lab instructor and he clarified that the question is asking for me to come up with two arbitrary aqueous solutions, one which contains $\pu{1 M}$ of chloride ion and another which contains $\pu{1 M}$ of calcium ion, e.g. $\ce{KCl(aq)}$ and $\ce{Ca(NO3)2 (aq)}$. So in this case, I would need twice the volume of potassium chloride to get the number of chloride ions I need to balance the equation and prevent explosions, i.e. no extra sodium ions leftover. Answer: I suspect that you would have both these molecular equations: \begin{align} \ce{2KCl(aq) + Na2SO4(aq) &-> K2SO4(aq) + 2NaCl(aq)}\\ \ce{Ca(NO3)2(aq) + Na2SO4(aq) &-> 2NaNO3(aq) + CaSO4(aq)} \end{align} To answer your question with "What is a complete reaction?": A complete reaction is one where all of at least one of the available reactants is used up and converted into products. So for example $\ce{KCl}$ and $\ce{Na2SO4}$: In this equation you will need $\pu{2 M}$ $\ce{KCl}$ and $\pu{1 M}$ $\ce{Na2SO4}$ in order for all the reactants to react. However, you only have $\pu{1 M}$ $\ce{KCl}$ (so $\pu{1 M}$ chloride ions). This means that, as you mentioned, you will need double the concentration for the $\ce{KCl}$ solution to achieve a complete reaction. So in this case the only way for all $\ce{Na2SO4}$ molecules to react is to add double the amount of $\ce{KCl}$ solution. Then this is a complete reaction: at least one reactant is used up in the reaction. But say we have a situation in which we have a $\pu{2 M}$ $\ce{KCl}$ solution, but a $\pu{2 M}$ $\ce{Na2SO4}$ solution; then all the $\ce{KCl}$ would react, but $\pu{1 M}$ of $\ce{Na2SO4}$ will not have reacted. If in your lab assignment the intention is that all $\ce{Na2SO4}$ reacts, then in this case you don't have a complete reaction. Only when a two times larger volume of the $\ce{KCl}$ is added (to the $\pu{2 M}$ $\ce{Na2SO4}$) will you once again achieve a complete reaction. Furthermore you would just get a solution of solvated ions.
In the case of $\ce{Ca(NO3)2}$ and $\ce{Na2SO4}$: In this equation you will need $\pu{1 M}$ $\ce{Ca(NO3)2}$ and $\pu{1 M}$ $\ce{Na2SO4}$ in order for all the reactants to react. You have $\pu{10 ml}$ of both compounds, each $\pu{1 M}$ in concentration: so in this case there is no need to use a larger volume of aqueous $\ce{Ca(NO3)2}$.
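The volume bookkeeping in this answer can be sketched in Python. The `moles` helper is my own illustrative function, not part of the lab:

```python
def moles(concentration_M, volume_mL):
    """Moles of solute in a solution of given molarity and volume."""
    return concentration_M * volume_mL / 1000.0

# 2 KCl + Na2SO4 -> K2SO4 + 2 NaCl: two Cl- are needed per SO4^2-
chloride = moles(1.0, 10.0)   # 10 mL of 1 M KCl
sulfate = moles(1.0, 10.0)    # 10 mL of 1 M Na2SO4
enough = chloride >= 2 * sulfate
print(enough)                           # False: the reaction is not complete
print(moles(1.0, 20.0) >= 2 * sulfate)  # True: doubling the KCl volume fixes it
```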
{ "domain": "chemistry.stackexchange", "id": 1028, "tags": "aqueous-solution, concentration, terminology" }
Is it possible to prove that units can be manipulated algebraically?
Question: With expressions such as $$4\ \mathrm{\frac{m}{s}} \times 2\ \mathrm{kg} = 8\ \mathrm{\frac{m}{s}} \times 1\ \mathrm{kg}$$ We can justify that a $2\ \mathrm{kg}$ mass moving at $4\ \mathrm{m/s}$ has the same momentum as a $1\ \mathrm{kg}$ mass moving at $8\ \mathrm{m/s}$. This might make sense at an intuitive level, but is there a fundamental argument that says units can be manipulated algebraically such that in this case all we've employed is the commutative property? Or take for instance when we have to do unit conversion and all we do is just cancel stuff according to some proportionalities. We're manipulating units algebraically. Is this way of doing things blindingly obvious? Or did we have to go out there and find out that it works? Also, if I have a differential equation such as $$Lq''+Rq'+\frac{q}{c}=E(t)$$ I usually solve it as if it were an empty math problem and had no units. That is, I only care about the numbers involved. But how do I prove to myself that this way of doing things is right and won't produce contradictions in terms of its units? Answer: I suppose that the best way to argue for this is to consider the units as indicative of the vector space from which the quantities in question originate. The algebra that we have defined in physics is one such that these quantities behave under the natural rules of commutative and associate multiplication, and so when we multiply quantities $m$ and $v$ (to stay consistent with your example above), we have a quantity with units $[m] \times [v]$, which is indicative of that quantity belonging to a different vector space than either of the two original spaces (specified by units $[m]$ and $[v]$). This is my interpretation, however. It’s somewhat (crudely) analogous to the case of matrix multiplication; that is, you cannot add matrices of unlike dimensions. 
But, the product of two matrices (not necessarily commutative) can belong to a different vector space than the one the matrices originated from, in analogy to the above scenario. Too long, didn’t read: units specify the vector/Hilbert (to be precise) space to which a quantity belongs, and it so happens that we use the unit algebra to specify exactly to what space the quantity in question belongs. We happen to treat them as nifty tools to help keep track of what space we’re landing in when we perform these sorts of operations. On your second point, I don’t know exactly what you’re asking, but I know that whenever I’m working through a differential equation or any sort of calculation, I frequently check my units to make sure I’m not adding quantities of unlike units (i.e., of different vector spaces, to stay consistent), that I have arguments of transcendental/trigonometric functions that are unitless, etc. If you’re ever in a predicament in which you’re violating some of these rules of sorts, that’s a major red flag that something has gone wrong in your computation.
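The idea that "units label the space a quantity lives in" can be sketched with a toy Python class that carries unit exponents along with the number. This is my own illustrative construction, not a standard library:

```python
class Quantity:
    """Toy dimensioned number: a value plus unit exponents, e.g. m/s -> {'m': 1, 's': -1}."""
    def __init__(self, value, units):
        self.value = value
        self.units = {u: e for u, e in units.items() if e != 0}

    def __mul__(self, other):
        # multiplying quantities adds unit exponents: the result lives in a new "space"
        merged = dict(self.units)
        for u, e in other.units.items():
            merged[u] = merged.get(u, 0) + e
        return Quantity(self.value * other.value, merged)

    def __eq__(self, other):
        return self.value == other.value and self.units == other.units

m_per_s = lambda v: Quantity(v, {"m": 1, "s": -1})
kg = lambda v: Quantity(v, {"kg": 1})

p1 = m_per_s(4) * kg(2)  # momentum of 2 kg moving at 4 m/s
p2 = kg(1) * m_per_s(8)  # momentum of 1 kg at 8 m/s, factors in the other order
print(p1 == p2)          # True: same value, same kg*m/s "space"
```

The commutativity of the multiplication falls straight out of the ordinary number and dictionary operations, which is the point of the question's example.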
{ "domain": "physics.stackexchange", "id": 17776, "tags": "units, dimensional-analysis" }
Box operator in FLRW metric
Question: The definition of the box operator in curved spacetime is $g^{\mu \nu}\partial_{\mu}\partial_{\nu}$, and in the FLRW metric $g_{\mu \nu}$ is $\mathrm{diag}(1, -a^2(t), -a^2(t), -a^2(t))$, so the box operator should be $\partial^2_t- a^{-2}(t)(\partial^2_x+\partial^2_y+\partial^2_z)$, but according to David Toms' book QFT in Curved Spacetime the box operator should be $a^{-3}\partial_t(a^3\partial_t...)-a^{-2}(\partial^2_x+\partial^2_y+\partial^2_z)...$ So basically my first term is not matching; can anyone tell me where I am making the mistake? Also he is using the signature $(+,-,-,-)$. Answer: If you take the scalar field in the curved spacetime the action is, \begin{equation} S=\int d^4x\sqrt{-g}\Big(g^{\mu\nu}\partial_\mu\phi\partial_\nu\phi-m^2\phi^2\Big) \end{equation} Now the trick is that when you integrate by parts the term with the variation $\partial_\mu\delta\phi$ the factor $\sqrt{-g}$ gets inside the derivative. So the resulting equation of motion is, \begin{equation} \frac{1}{\sqrt{-g}}\partial_\mu\Big(\sqrt{-g}g^{\mu\nu}\partial_\nu\phi\Big)+m^2\phi=0 \end{equation} Now, why is this equation much better than yours? The simple derivative of a scalar is covariant. However $\partial_\mu\phi$ is no longer a scalar but a vector, and its simple derivative is not covariant. So your equation will take a different form if you change the coordinates. In contrast the correct one is nothing else than, \begin{equation} \nabla_\mu\nabla^\mu\phi+m^2\phi=0 \end{equation} and thus is covariant with respect to coordinate transformations. Edit: note that you can write the box operator as follows: $$\nabla_{\mu}\nabla^{\mu}\phi=\frac{1}{\sqrt {-g}}\partial_{\mu}(\sqrt {-g}\partial^{\mu}\phi)$$
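For FLRW, $\sqrt{-g}=a^3$, so the time part of the covariant box is $a^{-3}\partial_t(a^3\partial_t\phi) = \partial_t^2\phi + 3(\dot a/a)\,\partial_t\phi$, which differs from the naive $\partial_t^2\phi$ by the Hubble-friction term. This can be checked with finite differences; the sample $a(t)$ and $\phi(t)$ below are arbitrary smooth choices of mine:

```python
import math

h = 1e-4
a = lambda t: t**2            # sample scale factor (assumption for the check)
phi = lambda t: math.sin(t)   # sample scalar field

def d(f, t):                  # central first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t):                 # central second derivative
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = 1.3
# covariant time part: a^-3 d/dt ( a^3 dphi/dt )
lhs = d(lambda s: a(s)**3 * d(phi, s), t) / a(t)**3
# expanded form: phi'' + 3 (a'/a) phi'
rhs = d2(phi, t) + 3 * d(a, t) / a(t) * d(phi, t)
naive = d2(phi, t)            # the questioner's g^{mu nu} d_mu d_nu guess

print(abs(lhs - rhs) < 1e-5, abs(lhs - naive) > 0.1)  # True True
```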
{ "domain": "physics.stackexchange", "id": 59021, "tags": "general-relativity, metric-tensor, differentiation" }
Is it possible to create a regular language from a non-regular language? (details inside)
Question: I am wondering, is it possible to create a regular language from a non-regular language if we add or remove a finite number of words from it? Say L is irregular; if we add or remove a finite number of words can we create a regular language? I might be mistaken, but since all regular languages are finite - if we add a finite amount to a non-regular language - it still stays non-regular, but if we subtract, let's say a finite amount from infinity, it is still infinity. So is it safe to say that in both cases a regular language cannot be obtained by adding/subtracting a finite amount of words? I was told to ask this question here rather than in softwareengineering. Thank you very much for your help. Really curious about that. Answer: We have a language $L$ which is non-regular. We subtract a finite subset $S \subset L$ of words in $L$ to get the rest: $R = L \setminus S$. Assume that $R$ is regular. Since regular languages are closed under union, and finite sets of words are trivially regular, we can construct a language $L' = R \cup S$ which is also regular. Since $S \subset L$, we have $(L \setminus S) \cup S = L$, thus $L' = L$. But $L'$ is regular while $L$ is not - a contradiction. Thus our assumption that $R$ is regular was wrong. The same argument handles adding finitely many words: regular languages are also closed under difference with a finite set, so if $L \cup S$ were regular, then $(L \cup S) \setminus (S \setminus L) = L$ would be regular too.
{ "domain": "cs.stackexchange", "id": 12702, "tags": "automata, regular-languages, finite-automata" }
Failure tolerant factor coding
Question: There are a lot of ml-algorithms which cannot directly deal with categorical variables. A very common solution is to apply binary (dummy-) coding to still properly handle the categorical nature of the data. Very often e.g. in sk-learn or apache-spark the actual dummy-coder can only handle numeric values. So label-encoding needs to be performed beforehand. In a real live ml-scenario, the fitted model will encounter new and formerly not known data. Usually, such a label-encoder (string-indexer) for spark has the option to either skip (ignore) a row of data which contains any unknown value or to throw an error. If multiple values require coding this can lead to a big loss of "new" data. Are there any approaches which "tolerate" up to x new values per row and still properly evaluate the fitted pipeline? An example for spark string-indexing + dummy-coding is shown below. val df = spark.createDataFrame(Seq( (0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c") )).toDF("id", "category") val indexer = new StringIndexer() .setInputCol("category") .setOutputCol("categoryIndex") .fit(df) val indexed = indexer.transform(df) val encoder = new OneHotEncoder() .setInputCol("categoryIndex") .setOutputCol("categoryVec") val encoded = encoder.transform(indexed) encoded.select("id", "categoryVec").show() http://spark.apache.org/docs/latest/ml-features.html#onehotencoder Answer: For a categorical variable, if the fitted model encounters a previously "unseen" category, i.e. it did not exist in the training set when the model was trained; then, you should skip that record. You could also opt to throw an error so that you're notified of the existence of new categories and can re-train the model based on that trigger. If you skip the records with new categories, you would still be able to evaluate the pipeline successfully. That may be the preferred option for a fully automated production setup. 
The one-hot encoding creates a new "column" of data for a new category, and if this hasn't been used for training, the ML algorithm has no way of knowing how to use the new dummy variable. In your sample code, each of the categories is encoded into a new variable, e.g. is_category_a = 0/1, is_category_b = 0/1, is_category_c = 0/1, etc. If such a model is sent data with a new category "d", then it would be encoded in another column called is_category_d = 0/1, but the model would ignore this column (or throw an error) since it doesn't expect its input matrix to contain is_category_d. Let's assume you've fit a linear regression model with the coefficients as: $$ y = 1 + 2 \cdot is\_category\_a + 3 \cdot is\_category\_b + 4 \cdot is\_category\_c $$ Now, when you try to evaluate a record with new category = "d", the model is not able to use the new dummy variable since it doesn't have a coefficient for is_category_d. Hence, you should not "tolerate" new categories and process such records.
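A minimal Python sketch of a fitted one-hot encoder that tolerates unseen categories by emitting an all-zeros row, rather than erroring or dropping the record. This mirrors what scikit-learn's `OneHotEncoder(handle_unknown="ignore")` does, but the helper itself is my own toy construction:

```python
def fit_one_hot(train_values):
    """Learn the category -> column mapping from training data only."""
    cats = sorted(set(train_values))
    index = {c: i for i, c in enumerate(cats)}

    def transform(value):
        # unknown categories get an all-zeros row instead of an error
        row = [0] * len(cats)
        if value in index:
            row[index[value]] = 1
        return row

    return cats, transform

cats, enc = fit_one_hot(["a", "b", "c", "a", "a", "c"])
print(cats)      # ['a', 'b', 'c']
print(enc("b"))  # [0, 1, 0]
print(enc("d"))  # [0, 0, 0] -- unseen category tolerated, no new column appears
```

Note the all-zeros row still only defers the modeling question raised in the answer: the model has learned no coefficient for the new category, so its prediction for such rows reflects only the baseline.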
{ "domain": "datascience.stackexchange", "id": 1317, "tags": "scikit-learn, apache-spark, binary, labels, encoding" }
How do catalysts provide alternative routes for reactions?
Question: According to collision theory, a reaction will take place between, say, two molecules, if the collision between the atoms has a sufficiently high kinetic energy (to meet the activation energy threshold) and has the correct orientation. If these two conditions are not met, the collision will not result in a reaction. However, there is one way to make a reaction easier, and that's with the use of a catalyst. Catalysts do not lower the activation energy or increase the kinetic energy, as I've just learned trying to figure out this question, but they provide an alternative activation energy, lower than the initial one, that, if met, will allow the reaction to take place. My question is this: what is the actual chemistry behind how catalysts provide this alternative route? What's happening with the molecules or atoms? How does it work? For example, if I wrap zinc in oxidized copper wire and mix it with some hydrochloric acid, the oxidized copper will act as a catalyst for the reaction between zinc and hydrochloric acid. I'm not sure how this takes place, though. Thanks. Answer: So catalysts provide an alternative pathway by allowing a new intermediate to be formed. For example consider the reaction of $A$ and $B$ to form $C$. This means that $$\ce{A + B -> C}$$ Say this reaction has an activation energy of $x\ \pu{kJ mol^{-1}}$. This reaction could also be possible with a catalyst $X$. This means that a new possible route for the reaction could be (there are multiple routes the new reaction could take; this is just a simple example): $$\ce{A + X->AX}$$ $$\ce{AX + B->C +X}$$ Now you end up in the same position whether or not you use the catalyst, since if you add both reactions and cancel out common species you get back to the equation of: $$\ce{A + B -> C}$$ These two reactions with the catalyst will have their own activation energies.
Say these two reactions have activation energies $p$ and $q$ $\pu{kJ mol^{-1}}$ and that $p>q$ (it doesn't matter if $p$ applies to the first or second reaction); then if $x>p$ a particle needs less energy to proceed via the route with the catalyst compared to the route without the catalyst. As a particle now needs less energy for the product $C$ to be formed, more particles will now be able to take part in the reaction, thus increasing the number of successful collisions per second, which is the same as an increase in rate. This can be seen nicely in a general Maxwell-Boltzmann distribution of molecular energies, as the new route will mean there's a lower activation energy overall, meaning a lot more particles can react.
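The "more particles clear a lower barrier" argument can be quantified with the Boltzmann/Arrhenius factor $e^{-E_a/RT}$, which estimates the fraction of collisions energetic enough to react. The activation energies below are hypothetical numbers chosen for illustration:

```python
import math

R = 8.314  # gas constant, J / (mol K)

def boltzmann_fraction(Ea_kJ_per_mol, T=298.0):
    """Rough fraction of collisions with energy >= Ea (the Arrhenius factor)."""
    return math.exp(-Ea_kJ_per_mol * 1000.0 / (R * T))

uncatalysed = boltzmann_fraction(75.0)  # hypothetical x
catalysed = boltzmann_fraction(50.0)    # hypothetical p, with p < x

# dropping the barrier by 25 kJ/mol boosts the reacting fraction ~2.4e4-fold
print(catalysed / uncatalysed)
```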
{ "domain": "chemistry.stackexchange", "id": 11074, "tags": "catalysis" }
On the current enclosed by an Amperian loop
Question: Ampère’s law says: $$\oint \mathbf{B}\cdot \mathbf{dl} = \mu_{0}I_{inc} $$ Where $$I_{inc} = \int \mathbf J \cdot \mathbf {da} $$ But if my Amperian loop encloses a wire at an angle: What is $I_{inc}$ equal to in the case where the wire is one-dimensional? Is it $I$ or $I\cos \theta$? Answer: Suppose you had a uniform current density and your wire had a cross-sectional area $\vec{A}$. The magnitude of $\vec{J}$ is $I/A$. Now move to your Amperian loop and suppose for the sake of this argument that it bounds a flat surface which cuts the wire at an angle $\theta$ as shown in your diagram. The current density is now at an angle $\theta$ to the area vector defined by the Amperian loop and the size of this area vector is $A'$. Note that $A' > A$, because you have cut the wire at an angle. In fact $A' = A/\cos \theta$. Thus when you do the scalar product $$\int \vec{J}\cdot d\vec{a} = \frac{I}{A}A'\cos\theta = I$$
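The cancellation at the end can be verified numerically for a few tilt angles; the current and cross-section values below are arbitrary:

```python
import math

I, A = 3.0, 1e-6  # arbitrary current (A) and true cross-section (m^2)
J = I / A         # uniform current density magnitude

for theta in (0.0, 0.3, 1.0):                # tilt of the Amperian surface
    A_slanted = A / math.cos(theta)          # the oblique cut is bigger: A' = A/cos(theta)
    I_enc = J * A_slanted * math.cos(theta)  # the dot product restores the cos(theta)
    print(round(I_enc, 12))                  # always 3.0: the angle factors cancel
```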
{ "domain": "physics.stackexchange", "id": 74389, "tags": "electromagnetism, magnetic-fields, electricity, electric-fields, electric-current" }
MLTT not being Turing Complete
Question: Where can I find a proof that Martin-Löf Type Theory isn't Turing Complete, if such proof exists? Answer: It is a general feature of reasonable total programming languages that they do not have self-interpreters, but interpreters for reasonable programming languages are Turing-computable. So, a concrete example of a total computable function which is not definable in a total programming language is an interpreter for that programming language. See Definition 2.1, Theorem 2.2, and Corollary 2.3 of this note. It proves that a self-interpreter for Gödel's T is not definable in Gödel's T. You can use the exact same proof for MLTT. It is generally well known that a confluent terminating normalization system such as that of MLTT leads to a Turing-computable normalization procedure.
{ "domain": "cs.stackexchange", "id": 7258, "tags": "type-theory" }
Find nearest k points from the central point
Question: The task: Given a list of points, a central point, and an integer k, find the nearest k points from the central point. For example, given the list of points [(0, 0), (5, 4), (3, 1)], the central point (1, 2), and k = 2, return [(0, 0), (3, 1)]. My solution: const lst = [[0, 0], [5, 4], [3, 1]]; const center = [1, 2]; const k = 2; const findNearestPoints = ({lst, center, k}) => { // I assume the data are valid; no error checks const calcHypo = x => Math.sqrt((x[0] - center[0])**2 + (x[1] - center[1])**2); const sortPoints = (a,b) => calcHypo(a) - calcHypo(b); return lst .sort(sortPoints) .slice(0,k); }; console.log(findNearestPoints({lst, center, k})); Answer: Don't sqrt the distance. It is a common mistake when filtering distances to use the complete distance calculation. Given 2 non-negative values a and b, if a < b then it is also true that sqrt(a) < sqrt(b). Hence you don't need the expensive sqrt calculation to know if a point is closer than another. To find the closest, the following does not use the sqrt of the distance. function closestPoint(points, point, dist){ var x, y, found, min = dist * dist; // sqr distance for(const p of points) { x = p[0] - point[0]; y = p[1] - point[1]; x *= x; y *= y; if(x + y < min){ min = x + y; if (min === 0) { return p } // early exit found = p; } } return found; } Not in the sort!!! You are doing the distance calculation in the sort. DON'T!!! That means you repeat the same calculations over and over. To improve your throughput, the following will reduce the overall time. The improvement is linear and does not change the complexity.
Note that in JS a ** 2 is slightly slower than a * a. A more efficient version of your solution: function findNearestPoints({list, center, k}) { const res = []; const cx = center[0], cy = center[1]; // alias and reduce indexing overhead const distSqr = (x, y) => (x -= cx) * x + (y -= cy) * y; const sort = (a, b) => a[1] - b[1]; for (const p of list) { res.push([p, distSqr(p[0], p[1])]) } res.sort(sort).length = k; return res.map(p => p[0]); } The ** operator for roots Note that JS has the ** operator. You can use it to get roots by making the right side the inverse, 1 over the power. Thus the sqrt is **(1/2) and the cube root is **(1/3), eg if 2 ** 2 === 4 then 4 ** (1/2) === 2 if 2 ** 3 === 8 then 8 ** (1/3) === 2 Don't approximate 8 ** 0.33 !== 2 if 2 ** 4 === 16 then 16 ** (1/4) === 2 Better sort The sort is the bottleneck in this problem. You can use a binary tree sort as it is the least complex for real numbers (every coder should learn how to implement a binary tree sort) Do you need the sort? However I think (think means might be, I am going by instinct) that there is a faster solution that does not involve a sort and that is at most \$O(n)\$ Remember that the order of the points is not important, that you need only separate the points in two. It may take a few passes to do, but as long as the number of passes is not related to the number of points or 'k' you will have an \$O(n)\$ solution. I am not going to give you the solution this time (if there is one) as there is no problem-solving experience gained copying code.
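For comparison, here is a hedged Python sketch combining two of the points made in this review: compare on squared distance (no sqrt) and do a partial selection instead of a full sort. `heapq.nsmallest` maintains a heap of size k, so it is O(n log k) rather than O(n log n); it is not the withheld O(n) approach:

```python
import heapq

def k_nearest(points, center, k):
    """k nearest points without sqrt and without fully sorting the input."""
    cx, cy = center

    def dist_sqr(p):
        dx, dy = p[0] - cx, p[1] - cy
        return dx * dx + dy * dy  # monotone in the true distance, so sqrt is unneeded

    # partial selection with a size-k heap instead of an O(n log n) full sort
    return heapq.nsmallest(k, points, key=dist_sqr)

print(k_nearest([(0, 0), (5, 4), (3, 1)], (1, 2), 2))  # [(0, 0), (3, 1)]
```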
{ "domain": "codereview.stackexchange", "id": 40431, "tags": "javascript, algorithm, programming-challenge, functional-programming" }
Query a product from a list of brands
Question: I am quite new to F# and just wrote my first program. It checks if the query exists in the brand list and returns the matching brand. Query is the string you are searching for in the brand list. So someone could be looking for "Miso Power Washer X1000"; the idea is that my function analyses the query to see if there is a matching brand. Later the remainder ("Power Washer X1000") could be parsed as well. In this way I am able to "parse" the query of the user and give it context. This metadata will be used to give better search results when the database is queried. I used stringInArray in order to split the logic into a separate function. Of course I could do something like Array.filter(fun (elem: string) -> query.IndexOf(elem) > -1)) brands But I felt it was less readable. Sometimes when learning a new language you get things to work. However someone more experienced says: you can use this or that which is more common, faster etc. I am looking for this kind of feedback before creating a bunch of code which turns out to be rubbish through the eyes of an experienced F# programmer. Is this the correct way to use F# function structure? Code module Program = [<EntryPoint>] let main argv = let query = "Miso Power Washer X1000" let brands = [| "Hayo" "Miso" "The Master" "Vector" |] let stringInArray = fun (elem: string) -> query.IndexOf(elem) > -1 let getBrands (query: string): string[] = Array.filter(stringInArray) brands let result = getBrands query printfn "%A" result 0 Output [|"Miso"|] Answer: Unfortunately, there is at least one blatant issue with the program in its current state: unused arguments. Let's have a look at it and find out why that's important. Before that, a short disclaimer, though: I'm not an F# developer. However, I know functional programming (e.g. Haskell). I don't know the .NET lands by heart. Take this review with a grain of salt on the arguments that concern F#.
Functions and the world When we write a function in a context, it gains knowledge of that context. For example, we bind the value of y into addFive in the following example: let y = 5 let addFive x = x + y However, there is a possible issue with the code above: we might accidentally use y at a place we didn't intend, for example: let add x z = x + y This is exactly what happened in your functions: let stringInArray = fun (elem: string) -> query.IndexOf(elem) > -1 let getBrands (query: string): string[] = Array.filter(stringInArray) brands Note how stringInArray just uses query? And how getBrands completely ignores the given query? This means that we could use let result = getBrands "" and still end up with [|"Miso"|]. That's not what we intended! Instead, let's go back to the drawing board. We need to make sure that the query gets used. So we need to add at least one argument to stringInArray: let stringInArray (query: string) (elem: string) = query.IndexOf(elem) > -1 Now we can use query in getBrands: let getBrands query = Array.filter(stringInArray query) brands Great! Now let result = getBrands "" leads to an empty array. Success! Names and tales However, now that we changed stringInArray, we note that the name isn't quite fitting: if we add type signatures, we end up with: let stringInArray (query: string) (elem: string) = query.IndexOf(elem) > -1 Neither of the arguments is an array. We should call this function contains or similar.
However, we could introduce another function that gets matching elements from an array: let isSubstringOf (haystack: string) (needle: string) = haystack.IndexOf(needle) > -1 let matchingElements arr haystack = Array.filter(isSubstringOf haystack) arr let getBrands query = matchingElements brands query // or even // getBrands = matchingElements brands Note that with this approach we can keep the definition of getBrands to a minimum: let getBrands = matchingElements brands General purpose functions and the world Now that we used proper naming and split the functionality of our functions, it's time to re-evaluate whether they really belong in main. Remember how functions have their context saved? They provide a closure. It's therefore a good idea to keep the context small. What functions should we therefore move out of main? We have the following at hand: isSubstringOf, which searches in a string for another string matchingElements, which filters an array of strings whether they are contained in the second argument getBrands, which filters brands given a query The first two functions sound very generic, so let's move them out of main: module Program = let isSubstringOf (haystack: string) (needle: string) = haystack.IndexOf(needle) > -1 let matchingElements arr haystack = Array.filter(isSubstringOf haystack) arr [<EntryPoint>] let main argv = let query = "Miso Power Washer X1000" let brands = [| "Hayo" "Miso" "The Master" "Vector" |] let getBrands = matchingElements brands let result = getBrands query printfn "%A" result 0 Note how short our main got. It only contains the essential elements: query, brands, getBrands and result. One could argue that we can just replace getBrands by its definition, but premature brevity in source code is the source of future confusion, so let's keep it a little bit more verbose but self-explanatory. 
Moving the functions might seem like overkill, but note that this approach would have shown an error immediately if we had followed it right from the beginning. If we now accidentally use query in isSubstringOf, we immediately get a compiler error (and probably an IntelliJ warning/note/error). That can be a huge boon in finding errors! Furthermore, this approach makes it easy to unit test the functions later. Maybe we want to improve isSubstringOf to use fuzzy logic so that it also works for "Mizo Power Washer". (Unit) tests can make sure that we don't accidentally break old functionality on the way.
{ "domain": "codereview.stackexchange", "id": 36109, "tags": "beginner, f#" }
Does The Earth appear to orbit the ISS from its point of view?
Question: Does the International Space Station orbit the Earth with one side constantly facing the Earth? Or does it look more like the Earth is orbiting the ISS from its perspective? Answer: Once it goes into orbit, it will remain in the same attitude; what I mean is the cupola the astronauts look out of always looks "down," as far as I know. I don't think the ISS has any spin, so it should stay in the same position. The International Space Station orbits about 354 kilometers (220 miles) above the Earth and travels at approximately 27,700 km/hr (17,211 mph), so it takes about 92 minutes to circle the Earth once. For this reason, every 45 minutes the astronauts on-board see a sunrise or a sunset, with a total of 15 – 16 of each every 24 hours. Actually, if the astronauts see the maximum number of sunrises possible, then it must effectively point down, but I will try to answer James's point about a slight spin if I can find an orbital illustration. I would imagine, no matter what their physical senses say, the astronauts will see the Earth as if from a very high-flying plane, which some, or most, are used to anyway. It would take a large amount of fuel to move the ISS, and to kill the angular momentum (AM), so the loading craft would do as much of the manoeuvring as possible. ISS Manoeuvres No more million-dollar maneuvers. When the space station must rotate for operations such as docking of resupply vehicles, it uses thrusters that run on propellant costing nearly $10,000 per pound. This demonstration successfully rotated the station 90 and 180 degrees without propellant, saving more than 1 million dollars' worth of propellant on the 180-degree maneuver. The new technology uses gyroscopes, or spinning momentum-storage devices powered by solar energy, to maneuver along special attitude trajectories. It will substantially reduce propellant use and contamination of solar arrays and loads.
With this technology, long-duration space exploration missions can carry less propellant and more provisions.
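The orbital numbers quoted earlier are self-consistent, as a quick back-of-the-envelope Python check shows; the mean Earth radius is my assumed input:

```python
import math

R_EARTH = 6371.0  # km, mean Earth radius (assumed)
altitude = 354.0  # km, from the answer
speed = 27700.0   # km/h, from the answer

circumference = 2 * math.pi * (R_EARTH + altitude)
period_min = circumference / speed * 60
sunrises_per_day = 24 * 60 / period_min

print(round(period_min), round(sunrises_per_day))  # ~92 minutes, ~16 sunrises
```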
{ "domain": "physics.stackexchange", "id": 24803, "tags": "orbital-motion, reference-frames" }
Method for returning valid URLs from a sitemap URL
Question: I need a method which fetches all of the text from a URL (generally a sitemap URL), and returns an IEnumerable of all valid URLs contained in the text returned from the initial address. What I have so far is: public IEnumerable<Uri> GetSitemapUrls(Uri sitemapUrl) { var sitemapText = GetSitemapText(sitemapUrl); if (string.IsNullOrWhiteSpace(sitemapText)) yield break; var urls = new List<string>(); var urlRegex = new Regex(@"\b(?:https?://|www\.)[^ \f\n\r\t\v\]]+\b", RegexOptions.Compiled | RegexOptions.IgnoreCase); foreach (Match m in urlRegex.Matches(sitemapText)) urls.Add(CleanUriString(m.Value)); foreach (var url in urls) { var cleanedUriString = CleanUriString(url); if (Uri.IsWellFormedUriString(cleanedUriString, UriKind.RelativeOrAbsolute)) yield return new Uri(cleanedUriString); } } string GetSitemapText(Uri sitemapUri) { var wc = new WebClient { Encoding = System.Text.Encoding.UTF8 }; return wc.DownloadString(sitemapUri); } string CleanUriString(string dirtyUriString) { var legalCharacters = @"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=".ToCharArray(); var cleanedString = dirtyUriString; foreach (var character in dirtyUriString) { var matchIndex = dirtyUriString.IndexOf(character); if (!legalCharacters.Any(x => x.Equals(character)) && matchIndex > 0) cleanedString = dirtyUriString.Substring(0, matchIndex); } return cleanedString; } It seems to work as intended for actual sitemaps, null/empty responses from the URL it receives, and URLs with URL-illegal characters in them. I have a feeling I'm missing out on potential issues or letting bad URLs through anyway, not to mention I haven't thought about spidering through URLs that the initial sitemap (if it is a sitemap) returns. Is there anything I can do to improve it? Answer: inside your GetSiteMapUrls Method you clean the Urls twice, and I don't see a reason for this. 
foreach (Match m in urlRegex.Matches(sitemapText)) urls.Add(CleanUriString(m.Value)); foreach (var url in urls) { var cleanedUriString = CleanUriString(url); if (Uri.IsWellFormedUriString(cleanedUriString, UriKind.RelativeOrAbsolute)) yield return new Uri(cleanedUriString); } before you add the URL to the urls list you run them through the CleanUriString() method then you traverse the list that you just made and run the urls through the same method before returning the new uri. This is redundant. instead you could just use a single foreach loop and return the list of good urls from it, like this: public IEnumerable<Uri> GetSitemapUrls(Uri sitemapUrl) { var sitemapText = GetSitemapText(sitemapUrl); if (string.IsNullOrWhiteSpace(sitemapText)) yield break; var urlRegex = new Regex(@"\b(?:https?://|www\.)[^ \f\n\r\t\v\]]+\b", RegexOptions.Compiled | RegexOptions.IgnoreCase); foreach (Match m in urlRegex.Matches(sitemapText)) { var clean = CleanUriString(m.Value); if (Uri.IsWellFormedUriString(clean, UriKind.RelativeOrAbsolute)) yield return new Uri(clean); } } and personally I like Braces on my If's and Loops
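To make the single-pass point concrete, here is a rough Python sketch of the same extract-clean-validate flow (the regex and character whitelist mirror the C# above, but not exact .NET semantics). One hedged observation: the original CleanUriString truncates at IndexOf(character), which is the first occurrence of that character *value*, and later illegal characters overwrite the truncation point, so it appears to misbehave on inputs like "a<b>c"; the sketch below truncates at the first illegal *position* instead:

```python
import re

# Pattern roughly mirroring the C# regex above (not exact .NET semantics)
URL_RE = re.compile(r"\b(?:https?://|www\.)[^ \f\n\r\t\v\]]+", re.IGNORECASE)

# RFC 3986 unreserved + reserved characters, same whitelist as CleanUriString
LEGAL = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
            "0123456789-._~:/?#[]@!$&'()*+,;=")

def clean(uri):
    """Truncate at the first character outside the whitelist."""
    for i, ch in enumerate(uri):
        if ch not in LEGAL:
            return uri[:i]
    return uri

def sitemap_urls(text):
    """Single pass: match, clean once, yield only plausible candidates."""
    for match in URL_RE.finditer(text):
        candidate = clean(match.group())
        if candidate.startswith(("http://", "https://", "www.")):
            yield candidate
```

The startswith check stands in for Uri.IsWellFormedUriString, which has no exact Python counterpart.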
{ "domain": "codereview.stackexchange", "id": 31654, "tags": "c#, .net, regex" }
Does there exist a context-free language $L$ such that $L\cap L^2$ is not context-free?
Question: I can see that $L$ has to be context-free but not regular here as regular languages are closed under concatenation and intersection. But $L\cap L^2$ looks too weird. I couldn't think of any $L$ that gives rise to meaningful $L\cap L^2$. Any hint would be appreciated! Answer: You can take $$ L = \{ a^n b^n : n \geq 1 \} \cup \{ a^k b^n a^n b^\ell : n,k,\ell \geq 1 \}. $$ You can check that $$ L \cap L^2 = \{ a^n b^n a^n b^n : n \geq 1 \}. $$
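The claimed identity can be sanity-checked by brute force over short strings; a small Python check (the length bound 12 is arbitrary, just enough to see the first three members of $\{a^nb^na^nb^n\}$):

```python
import re
from itertools import product

def in_L(w):
    """Membership in L = {a^n b^n : n>=1} union {a^k b^n a^n b^l : k,n,l>=1}."""
    m = re.fullmatch(r"(a+)(b+)", w)
    if m and len(m.group(1)) == len(m.group(2)):
        return True
    m = re.fullmatch(r"(a+)(b+)(a+)(b+)", w)
    return bool(m) and len(m.group(2)) == len(m.group(3))

LIMIT = 12  # check all words up to this length

# Enumerate L up to the bound directly from its two defining patterns.
L = {"a" * n + "b" * n for n in range(1, LIMIT // 2 + 1)}
for k in range(1, LIMIT):
    for n in range(1, LIMIT):
        for l in range(1, LIMIT):
            w = "a" * k + "b" * n + "a" * n + "b" * l
            if len(w) <= LIMIT:
                L.add(w)

# L^2 restricted to the same length bound, then intersected with L.
L2 = {u + v for u, v in product(L, L) if len(u + v) <= LIMIT}
inter = {w for w in L2 if in_L(w)}
```

Up to length 12 the intersection is exactly {abab, aabbaabb, aaabbbaaabbb}, matching the claim.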
{ "domain": "cs.stackexchange", "id": 13300, "tags": "regular-languages, context-free, regular-expressions" }
What does negative electrical energy signify?
Question: When we derive the formula for potential energy caused by the torque of a dipole in uniform electrical field we get $U = -pE \cos \theta$. And my textbook tells me that the when the dipole is kept parallel to the electric field, the angle made is zero $\cos\theta = 1$, thus the potential energy $U$ is $-pE$. The textbook also tells me that this is the minimum energy attained by a dipole in external electric field ($U = -pE$) and the configuration is stable. But since energy is a scalar quantity, it isn't supposed to have direction, So, what does the negative symbol signify? Answer: Absolute values of the potential energies of systems do not have any physical meaning. It is the change in potential energy that has a physical meaning. When the potential energy of a dipole system is derived, the following approach is used: $$U_f - U_i = \int dW = \int_{\theta_i}^{\theta_f} \tau\cdot d\theta$$ $$U_f - U_i = pE \int_{\theta_i}^{\theta_f}\sin\theta\cdot d\theta,$$ and consequently, $$\boxed{\Delta U= -pE(\cos\theta_f - \cos\theta_i).}$$ From here, a reference configuration is chosen for the dipole system such that $U_i = 0 \ \text{at} \ \theta_i = 90^{\circ}$, purely for convenience, which gives the formula $U = -pE\cdot\cos\theta$. Naturally, any positive or negative signs arise only due to this choice of convention and do not possess any meaning as such. Hope this helps.
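A quick numerical check of the boxed formula, integrating the torque $pE\sin\theta$ with the midpoint rule (the values of $p$ and $E$ are illustrative, not from the question):

```python
import math

def delta_U(p, E, theta_i, theta_f, steps=50_000):
    """Work done against the torque p*E*sin(theta), midpoint rule."""
    h = (theta_f - theta_i) / steps
    total = 0.0
    for k in range(steps):
        theta = theta_i + (k + 0.5) * h
        total += p * E * math.sin(theta) * h
    return total

p, E = 2.0, 3.0   # illustrative dipole moment and field strength

def U(theta):
    """Potential energy with the usual convention U = 0 at theta = 90 degrees."""
    return delta_U(p, E, math.pi / 2, theta)
```

The aligned configuration gives U = -pE (the minimum), the anti-aligned one +pE, confirming that the sign only encodes the choice of reference angle.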
{ "domain": "physics.stackexchange", "id": 88619, "tags": "electrostatics, electricity, electric-fields, potential, potential-energy" }
Any chance of getting the gazebo2 and the collada-dom packages added to the armhf archive?
Question: gazebo2 and collada-dom can't be found in any of the ubuntu repositories and are typically found in the ros repo for other architectures. These will be useful for bootstrapping any of the armhf full builds. PCL and sbcl are also missing, but I can build these in a PPA. Originally posted by jolting on ROS Answers with karma: 21 on 2014-06-17 Post score: 1 Answer: I have collada-dom-dev builds for most of the Ubuntu armhf variants, and PCL builds for 12.04 and 13.04. Which ones are you missing? I haven't tried to build gazebo... it doesn't really make sense to try to run gazebo on a low-powered ARM cpu, and I don't want to spend time on something that won't be useful. (The same line of reasoning goes for rviz). If you can manage to do builds of SBCL for armhf, you will be my hero. EDIT I've added builds of collada-dom-dev and pcl to my repository. I'm currently working on full builds of ROS. I also have a build of SBCL in the works. Fingers crossed. Originally posted by ahendrix with karma: 47576 on 2014-06-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by jolting on 2014-06-18: BeagleBone Black supports 14.04. Some of those arm processors aren't that low powered. For example the exynos processors have 8 cores and nvidia has a board with cuda built in. I'm trying to figure out how adam conrad bootstraped sbcl for ppc. Then I'll have a ppa with sbcl.
{ "domain": "robotics.stackexchange", "id": 18298, "tags": "ros, armhf" }
Prime factoring function using recursion
Question: As a programming exercise, I decided to make a prime factoring function using recursion: static bool isPrime(int n) { if (n < 2) return false; if (n % 2 == 0) return (n == 2); int root = (int)Math.Sqrt((double)n); for (int i = 3; i <= root; i += 2) if (n % i == 0) return false; return true; } static bool iseven(int n) { return n % 2 == 0; } static int[] prfact(int n, int curr, List<int> nums) { if (nums.All(x => isPrime(x)) && isPrime(curr)) { nums.Add(curr); return nums.ToArray(); } else if (iseven(curr)) { nums.Add(2); return prfact(n, curr / 2, nums); } else if (!iseven(curr)) { int div = 3; while (curr % div != 0 || !isPrime(div)) div += 2; nums.Add(div); return prfact(n, curr / div, nums); } else return null; } static int[] factor(int n) { return prfact(n, n, new List<int>()); } How efficient is it and how can it be improved? Answer: +1 for stopping your isPrime search at the SqRt(n). That's a nice little optimization. +1 for incrementing by 2. It's nice to see you realized that since it wasn't possibly even by this point, that you could optimize. Next, consider a sieve or memento implementation for further performance improvements on isPrime. Consider renaming root to limit. +1 for extracting an isEven method. -1 for neglecting to use it in your isPrime method. Instead of having prfact and factor, Rename prfact to factor, overloading the latter. Make it private scoped to hide the implementation details from the outside world. Sorry for the mostly non-algo review. Not my strong suit & short on time.
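For comparison, the whole recursive structure collapses into plain iterative trial division. This is a hedged Python sketch of that simpler shape, not a line-by-line translation of the C# above; note that no separate isPrime test is needed, because any d that still divides n must be prime (all smaller factors were already stripped):

```python
def factor(n):
    """Prime factorization by simple trial division (ascending order)."""
    factors = []
    while n % 2 == 0:          # strip factors of 2 so the loop can step by 2
        factors.append(2)
        n //= 2
    d = 3
    while d * d <= n:          # same sqrt cutoff idea as isPrime above
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 2
    if n > 1:                  # any leftover > 1 is necessarily prime
        factors.append(n)
    return factors
```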
{ "domain": "codereview.stackexchange", "id": 22777, "tags": "c#, algorithm, recursion, primes" }
Find the normal force on beam in asymmetrical scenario
Question: A thin beam with evenly distributed mass $M$ and length $L$ is resting on two spikes as shown below. What is the vertical normal force $F_l$ on the beam exerted by the left spike? Solution The solution uses something called the "moment-torque theorem". They use the theorem on the spike to the left and comes up with the equation $$F_l \cdot \frac{2}{3}L-Mg \bigg(\frac{2}{3}-\frac{1}{2}\bigg)L =0$$ From where they conclude that $F_l=\frac{1}{4}M$ It is not intuitively clear to me at all how they arrive at that equation. Can anybody explain that? Furthermore, is it possible to solve this problem by force analysis? If so, I would be very interested to see that. Answer: Furthermore, is it possible to solve this problem by force analysis? If so, I would be very interested to see that. Yes, it is possible. Use Newton's Second Law: in order for the beam to be immobile all forces and torques acting on it must cancel out. Start with a force body diagram (FBD): All forces must cancel out, so: $$F_l+F_r-Mg=0\tag{1}$$ All torques must cancel out. Here we've taken the torques about the point $x=0$ ($x=L/2$ is the CoG of the beam): $$Mg\frac{L}{2}-F_r \frac{2L}{3}=0\tag{2}$$ Then solve for $F_l$ and $F_r$ from $(1)$ and $(2)$: $$F_r=\frac{3}{4}Mg$$ $$F_l+\frac{3}{4}Mg=Mg\Rightarrow \boxed{F_l=\frac{1}{4}Mg}$$ (Note that you wrote $F_l=\frac{1}{4}M$ which is incorrect, as it is a mass and not a force)
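The two equilibrium equations form a 2x2 linear system that can be solved directly; a small Python check with made-up values for M, g and L (the fractions 1/4 and 3/4 come out regardless of the values chosen):

```python
# Unknowns F_l and F_r; illustrative values for M, g, L.
M, g, L = 6.0, 9.8, 3.0
Mg = M * g

# Torque about the left spike (taken as x = 0), as in equation (2):
#   Mg*(L/2) = F_r*(2L/3)
F_r = Mg * (L / 2) / (2 * L / 3)

# Force balance, equation (1): F_l + F_r = Mg
F_l = Mg - F_r
```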
{ "domain": "physics.stackexchange", "id": 76392, "tags": "homework-and-exercises, forces, momentum, torque" }
How to convert cc to bar?
Question: In astronomy/astrophysics, medium density is often given in cc, particles per cubic centimeter. Also, the temperature of the medium is usually given, in Kelvins. For some materials the melting point differs significantly even at very low pressures; water melts at $10\mu{bar}$ at about 230 K, while at $1\mu{bar}$ its melting point is close to 1 K. And the pressure is given in common pressure units - bar, Pascals etc. So, if I want to know the melting points of various materials in space (and various areas of it), using common phase-temperature-pressure diagrams (possibly extrapolating a little), I need to find the 'ambient pressure' of the inter[planetary|stellar|galactic] medium in units the graph is in, usually bar. How can I calculate the gas pressure given particles per cubic centimeter, and its temperature in Kelvin? Answer: How can I calculate the gas pressure given particles per cubic centimeter, and its temperature in Kelvin? as pointed out in comment by KyleKanos $$pV=Nk_\mathrm BT$$ where $p$ is pressure, $V$ is volume (in $\mathrm{m^3}$), $N$ is the number of particles, $k_\mathrm B$ is Boltzmann's constant and $T$ is temperature in Kelvin. If you rearrange it $$p= \frac NVk_\mathrm BT$$ so you can use this to convert particles per volume to pressure, but note that the volume units in the equation are $\mathrm{m^3}$, so you need to convert from $\mathrm{cm^3}$ (multiply by $10^6$).
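Wrapped up as a function (the Boltzmann constant is the exact 2019 SI value; the example density and temperature are merely illustrative, of the order quoted for the warm interstellar medium):

```python
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact since the 2019 SI)

def pressure_bar(n_per_cc, temperature_K):
    """Ideal-gas pressure from number density in particles per cm^3."""
    n_per_m3 = n_per_cc * 1e6          # cm^-3 -> m^-3
    p_pascal = n_per_m3 * K_B * temperature_K
    return p_pascal / 1e5              # 1 bar = 10^5 Pa

# Illustrative: 0.1 particles per cc at 10^4 K gives ~1.4e-19 bar
p = pressure_bar(0.1, 1e4)
```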
{ "domain": "physics.stackexchange", "id": 17840, "tags": "thermodynamics, particle-physics, pressure, temperature, density" }
Why is current measured in coulombs per second and not in electron per second?
Question: If electricity is produced by flow of electron, that is current, then current should be measured in electron per second, why is it measured in coulombs per second? Answer: If electricity is produced by flow of electron, that is current, Current and electricity are not only electrons flowing. Sure, it is electrons flowing in metal wires, but it is holes (positively charged) in semiconductors such as solar panels, protons in ion beams and fission processes, both negatively and positively charged ions (not electrons but charged molecules) in electrolytes and other liquids such as in a fuel cell etc. So, in general, current is not about electrons. If it was only about electrons, then I agree that a unit like electrons-per-second would make sense. But it isn't just about electrons. Instead, we have to make it something that all the above have in common. What is that? They all have an electric charge. And charge is measured in Coulomb-units (just like e.g. mass is measured in kilogram-units). Thus, Coulomb-per-second is the unit for current.
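For the special case of a wire where every carrier is an electron, the two units are related by the elementary charge; a quick Python conversion:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact since 2019 SI)

def electrons_per_second(current_amps):
    """Carrier count per second if every carrier carries one elementary charge."""
    return current_amps / E_CHARGE

# One ampere (one coulomb per second) is about 6.24e18 electrons per second
rate = electrons_per_second(1.0)
```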
{ "domain": "physics.stackexchange", "id": 64941, "tags": "electric-current, units, si-units, metrology" }
Does "soluble in alcohol" imply ethanol?
Question: Sources like Wikipedia and SolubilityOfThings say that copper (II) acetate is soluble in alcohol. Does that mean any alcohol, or does "alcohol" in this context mean ethanol? Answer: The term "alcohol", without modification (i.e. "propyl" alcohol, etc.), in the context of something like a solubility table, almost always refers to ethanol (as it does in your sources for copper (II) acetate solubility). Note, for example, that both pubchem and Wikipedia list alcohol as a synonym for ethanol. This is not the case for other alcohols like isopropanol.
{ "domain": "chemistry.stackexchange", "id": 8015, "tags": "solubility, alcohols" }
merging of launch files
Question: I have 2 launch files which are launching the same node, but one launches with a namespace and the other with no namespace. How do I combine both of them? Is it possible to make the namespace argument optional? launch_file1 : <node pkg="node_pkg" type="node_type" name="node_name" > launch_file2: so I am launching the same node with no_namespace and with namespace. <node pkg="node_pkg" type="node_type" name="node_name" if="$(arg use_namespace)" ns="$(arg namespace)"> so I combined both launch files with the above line in one single launch file, and I am using use_namespace to specify whether I want to launch the node with a namespace or not. But now it is launching nodes with only a namespace. My understanding of the above line is: only use ns if the "if" condition is satisfied, and otherwise launch it with <node pkg="node_pkg" type="node_type" name="node_name" Originally posted by debonair on ROS Answers with karma: 17 on 2018-08-31 Post score: 0 Original comments Comment by Reamees on 2018-09-03: It is not clear what you want to achieve. Do you want your nodes to be in the same namespace/no namespace? Namespace is an optional attribute. If @gvdhoorn 's answer is what you were looking for accept it, otherwise elaborate on "combine". Answer: Namespace attributes cannot be empty, so it's not directly possible to make something like that optional I believe. Something like this might work: first launch file: <launch> <arg name="use_namespace" default="false" /> <include file="other.launch" if="$(arg use_namespace)" ns="my_ns" /> <include file="other.launch" unless="$(arg use_namespace)" /> </launch> second launch file (just an example): <launch> <param name="la" value="123" /> </launch> Originally posted by gvdhoorn with karma: 86574 on 2018-09-01 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by debonair on 2018-09-04: what should be my other.launch file contain? just launch node? and we don't specify namespace there?
Comment by gvdhoorn on 2018-09-04: what should be my other.launch file contain? whatever you want it to contain. Could launch nodes, include other launch files, everything. and we don't specify namespace there? Exactly. the first launch file essentially "pushes" the other launch file down into a ns. Comment by debonair on 2018-09-04: is it possible to combine include code with if condition in your answer. Comment by debonair on 2018-09-04: I had to write node launch code <node pkg> with params twice. one with if tag and one with unless tag. I was trying to find a way to avoid writing params twice. Comment by gvdhoorn on 2018-09-04: Which you can do, if you follow my approach. The "second launch file" has all the nodes without a ns. If you start that, nothing will be in a namespace. The "first launch file" starts the second but in a namespace. Don't start the second launch file: no namespace. No need to duplicate. Comment by debonair on 2018-09-04: great. Thanks so much!
{ "domain": "robotics.stackexchange", "id": 31693, "tags": "ros-kinetic" }
Hardware interrupt handler in C++
Question: I wrote this class to use a C++ function as hardware interrupt handler. The idea is to have a single entry point where I can switch stacks, and call a chain of std::function objects to handle the interrupt. It works... but I think it's a pretty ugly hack. Is there anything I could improve here? class __attribute__((packed)) interrupt_wrapper { using function_ptr = void(*)(unsigned); unsigned int_vector; // [eax-0x1C] selector ds; // [eax-0x18] selector es; // [eax-0x16] selector fs; // [eax-0x14] selector gs; // [eax-0x12] function_ptr entry_point; // [eax-0x10] std::array<byte, 0x40> code; // [eax-0x0C] public: interrupt_wrapper(unsigned vec, function_ptr f) : int_vector(vec), entry_point(f) { byte* start; std::size_t size; asm volatile ( ".intel_syntax noprefix;" "jmp interrupt_wrapper_end%=;" // --- \/\/\/\/\/\/ --- // "interrupt_wrapper_begin%=:;" // On entry, the only known register is CS. "push ds; push es; push fs; push gs; pusha;" // 7 bytes "call get_eip%=;" // call near/relative (E8) // 5 bytes "get_eip%=: pop eax;" // Pop EIP into EAX and use it to find our variables "mov ds, cs:[eax-0x18];" // Restore segment registers "mov es, cs:[eax-0x16];" "mov fs, cs:[eax-0x14];" "mov gs, cs:[eax-0x12];" "push cs:[eax-0x1C];" // Pass our interrupt number along "call cs:[eax-0x10];" // Call the entry point "add esp, 4;" "popa; pop gs; pop fs; pop es; pop ds;" "sti;" // IRET is not guaranteed to set the interrupt flag. 
"iret;" "interrupt_wrapper_end%=:;" // --- /\/\/\/\/\/\ --- // "mov %0, offset interrupt_wrapper_begin%=;" "mov %1, offset interrupt_wrapper_end%=;" "sub %1, %0;" // size = end - begin ".att_syntax prefix" : "=r" (start) , "=r" (size)); assert(size <= code.size()); auto* ptr = memory_descriptor(get_cs(), start).get_ptr<byte>(); // Get near pointer to cs:[start] std::copy_n(ptr, size, code.data()); asm volatile ( ".intel_syntax noprefix;" "mov ax, ds;" "mov bx, es;" "mov cx, fs;" "mov dx, gs;" ".att_syntax prefix" : "=a" (ds) , "=b" (es) , "=c" (fs) , "=d" (gs)); } auto get_ptr(selector cs) { return far_ptr32 { cs, reinterpret_cast<std::size_t>(code.data()) }; } }; Answer: We're missing some pieces, such as the memory_descriptor and far_ptr32 classes and the get_cs function, and I guessed at the definitions of selector and byte, but with that said, here's what I found that may help you improve your code. Make the constructor noexcept It's unlikely that an exception within the constructor would be a welcome event. For that reason, your constructor should be declared noexcept. Make function_ptr noexcept For similar reasons, I'd suggest making function_ptr also noexcept: using function_ptr = void(*)(unsigned) noexcept; Be careful with interrupt flags Instead of unconditionally enabling interrupts with sti, it might be better to restore the interrupt flag to whatever it had been. The interrupt could not have been invoked by hardware if the interrupt flag had been reset (that is, interrupts disabled), but it's not uncommon to chain interrupts by having one hardware interrupt chain to another via an explicit call. In that circumstance, the interrupt flag should not be re-enabled, lest the interrupt chain be incorrectly interrupted.
{ "domain": "codereview.stackexchange", "id": 18929, "tags": "c++, assembly" }
Constructing gauge invariants
Question: Is there an efficient way of constructing gauge invariants, given that the number of operators one can use is fixed? For example, if I am given some boson in $\mathbf{3}$ of $SU(2)$, and I want to find out the number of possible invariants constructed when I have 20 such objects. One can construct an object like \begin{equation} \mathcal{O}_{a_1,b_1}...\mathcal{O}_{a_{20},b_{20}}\epsilon^{a_1 a_3}\cdots\epsilon^{a_6 b_3} \end{equation} I want to find out how many such objects are possible. Is there an efficient way of doing such things for other representations like $\mathbf{5}$, $\mathbf{6}$, etc., for fermions, or maybe mixed objects like $\mathbf{4}$ and $\mathbf{7}$? I was trying to use Mathematica for contractions, but the number of partitions grows very fast and doing it brute force doesn't seem to be an option. Any suggestion? Thanks Answer: I am using this reference, Spin Multiplicities, T Curtright, T van Kortryk, and C Zachos, Phys Lett A381 (2017) 422-427. The character of a spin j, dimension 2j+1 irrep of SU(2) is $$ \chi_j (\theta)= \frac{\sin((2j+1)\theta/2)}{\sin (\theta/2)}, \tag{4} $$ and the multiplicity of spin s in their n-fold composition is $$ M(s;n;j)= \frac{1}{\pi}\int_0^{2\pi} \!\! d\vartheta ~\sin^2{\vartheta} ~~ (\chi_j(2\vartheta))^n ~\chi_s(2\vartheta), \tag{5} $$ whence, in your case, $$ M(0;20;1)= \frac{1}{\pi}\int_0^{2\pi} \!\! d\vartheta ~\sin^2{\vartheta} ~~ (\chi_1(2\vartheta))^{20} , $$ the hypergeometric function of (17). Since this n is large, this is quite close to $$ { 3^{20.5} \over 8\sqrt{\pi} ~ 20^{3/2} }, $$ by (31). Millions... The method addresses your further questions. Try a few simple examples, to ensure you understand the language.
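Equation (5) is easy to evaluate numerically. Here is a naive Python quadrature (midpoint rule, not the closed hypergeometric form of the paper); the small test cases are standard SU(2) decompositions, e.g. $\mathbf{3}\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{3}\oplus\mathbf{5}$, and four spin-1 objects couple to a singlet in three ways:

```python
import math

def chi(two_j, theta):
    """SU(2) character for spin j, passed as 2j so half-integer spins work."""
    return math.sin((two_j + 1) * theta / 2) / math.sin(theta / 2)

def multiplicity(s, n, j, steps=40_000):
    """M(s; n; j) from eq. (5), midpoint rule over [0, 2*pi].

    Midpoints avoid the (removable) zeros of sin at 0, pi, 2*pi."""
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += math.sin(t) ** 2 * chi(2 * j, 2 * t) ** n * chi(2 * s, 2 * t)
    return total * h / math.pi
```

With this, multiplicity(0, 20, 1) reproduces the "millions" estimate quoted above.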
{ "domain": "physics.stackexchange", "id": 88824, "tags": "gauge-theory, group-theory, representation-theory" }
First law of thermodynamics (derivation)
Question: According to the first law of thermodynamics, $$dQ=dU+dW$$ The derivation of this formula is as follows: $dW=Fdx$, where $dx$ stands for an infinitesimal displacement. $dW=P(Adx)$, as force = pressure × area, and so $dW= PdV$. Finally, $dQ= dU+dW$. What does $dV$ stand for? And how is $(Adx)=dV$? Please answer my question in the simplest way possible. Answer: $dV$ is the incremental volume. If you have a cylinder with a base of area $A$ and with height $dx$, then the volume will be $dV=A\,dx$.
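Numerically, with made-up piston values (all three numbers below are illustrative, not from the question):

```python
# A piston of face area A pushed out a small distance dx sweeps out a thin
# cylindrical slab: dV = A*dx.  Then dW = F*dx = (P*A)*dx = P*dV.
P = 1.0e5    # pressure in Pa (about 1 atm)
A = 0.01     # piston face area in m^2
dx = 0.002   # small displacement in m

dV = A * dx
dW_from_force = (P * A) * dx   # work as force times displacement
dW_from_PdV = P * dV           # the same work written as P*dV
```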
{ "domain": "physics.stackexchange", "id": 37951, "tags": "homework-and-exercises, thermodynamics" }
Spacing of G and K class stars
Question: I was looking at the Wikipedia article "List of potentially habitable exoplanets" and I noticed that many of the closest planets listed (tens or hundreds instead of thousands of light-years away) orbited M-class stars. I have also read that scientists are more certain about the possibility of habitable planets around G and K class stars, because the habitability of red dwarf systems is still debated. I was wondering whether there is something that makes the larger stars farther away from each other, or whether it is just a coincidence. So is it possible to have many G and K class stars within tens of light years of each other? Answer: The solution to your mystery about why habitable planets are found around nearby low-mass stars, but distant G/K stars, is all to do with observational selection effects and biases. There are many G/K stars within 100 light years of the Sun, but almost none have been examined in the detail required to reveal small, habitable planets around them. Almost all the small "Earthlike" planets in habitable zones have been found by the transit technique. It is much easier to find such planets around small stars for two reasons: The transit technique yields a signal that depends on $(R_p/R_{*})^2$. Thus small planet transits are easier to see around small stars. The probability of a transit depends on the ratio of the star size to orbital radius. This sounds like it works against finding planets around small stars; and it does at a fixed orbital radius. But, because the habitable zone is much closer-in to a smaller, less luminous star, the habitable planets around such stars are more likely to transit. In addition, their orbital periods will be tens of days rather than the months to a year around G/K stars, and this makes it much easier to observe repeated transits.
In fact, the combination of these two selection effects makes it possible to find habitable zone planets around small stars all over the sky from ground-based observatories. The downside is that small stars are of low luminosity, so the planets that can be detected are around nearby examples at tens of light years. In contrast, habitable zone planets around G/K stars are mostly found in orbits with periods of months to a year by Kepler, which stared at a single small field for 4 years. The majority of the stars targeted by Kepler for exoplanet searches were 10th-15th magnitude G/K stars (there are few brighter examples in such a limited patch of sky), which are at distances of hundreds to thousands of light years.
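The two selection effects can be put into numbers; a Python sketch comparing an Earth-size planet around the Sun with one in the habitable zone of a mid-M dwarf (the stellar radius of 0.2 solar radii and the habitable-zone distance of 0.05 au are assumed round numbers, not measurements):

```python
R_SUN = 6.957e8     # m
R_EARTH = 6.371e6   # m
AU = 1.496e11       # m

def transit_depth(r_planet, r_star):
    """Fractional dimming during transit: (R_p / R_*)^2."""
    return (r_planet / r_star) ** 2

def transit_probability(r_star, orbit_radius):
    """Geometric chance a randomly oriented orbit transits: roughly R_* / a."""
    return r_star / orbit_radius

depth_sun = transit_depth(R_EARTH, R_SUN)
depth_m = transit_depth(R_EARTH, 0.2 * R_SUN)
prob_sun = transit_probability(R_SUN, 1.0 * AU)
prob_m = transit_probability(0.2 * R_SUN, 0.05 * AU)
```

With these assumptions the M-dwarf transit is 25 times deeper and 4 times more likely than the Earth-Sun case.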
{ "domain": "astronomy.stackexchange", "id": 2228, "tags": "star, distances, star-formation" }
C++17 allocator_traits implementation
Question: Inspired by my earlier question C++17 pointer_traits implementation, I re-implemented allocator_traits under the name my_std::allocator_traits, put in a separate header allocator_traits.hpp because <memory> is way too comprehensive: // C++17 allocator_traits implementation #ifndef INC_ALLOCATOR_TRAITS_HPP_D6XSISB6AD #define INC_ALLOCATOR_TRAITS_HPP_D6XSISB6AD #include <limits> // for std::numeric_limits #include <memory> // for std::pointer_traits #include <type_traits> // for std::false_type, etc. #include <utility> // for std::forward namespace my_std { template <class Alloc> struct allocator_traits; namespace at_detail { template <class Tmpl, class U> struct rebind_first_param { }; template <template <class, class...> class Tmpl, class T, class... Args, class U> struct rebind_first_param<Tmpl<T, Args...>, U> { using type = Tmpl<U, Args...>; }; template <class Tmpl, class U> using rebind_first_param_t = typename rebind_first_param<Tmpl, U>::type; template <class Alloc, class T> auto pointer(int) -> typename Alloc::pointer; template <class Alloc, class T> auto pointer(long) -> T*; template <class Alloc, class T, class Ptr> auto const_pointer(int) -> typename Alloc::const_pointer; template <class Alloc, class T, class Ptr> auto const_pointer(long) -> typename std::pointer_traits<Ptr>::template rebind<const T>; template <class Alloc, class Ptr> auto void_pointer(int) -> typename Alloc::void_pointer; template <class Alloc, class Ptr> auto void_pointer(long) -> typename std::pointer_traits<Ptr>::template rebind<void>; template <class Alloc, class Ptr> auto const_void_pointer(int) -> typename Alloc::const_void_pointer; template <class Alloc, class Ptr> auto const_void_pointer(long) -> typename std::pointer_traits<Ptr>::template rebind<const void>; template <class Alloc, class Ptr> auto difference_type(int) -> typename Alloc::difference_type; template <class Alloc, class Ptr> auto difference_type(long) -> typename std::pointer_traits<Ptr>::difference_type; 
template <class Alloc, class Diff> auto size_type(int) -> typename Alloc::size_type; template <class Alloc, class Diff> auto size_type(long) -> std::make_unsigned_t<Diff>; template <class Alloc> auto pocca(int) -> typename Alloc::propagate_on_container_copy_assignment; template <class Alloc> auto pocca(long) -> std::false_type; template <class Alloc> auto pocma(int) -> typename Alloc::propagate_on_container_move_assignment; template <class Alloc> auto pocma(long) -> std::false_type; template <class Alloc> auto pocw(int) -> typename Alloc::propagate_on_container_swap; template <class Alloc> auto pocw(long) -> std::false_type; template <class Alloc> auto iae(int) -> typename Alloc::is_always_equal; template <class Alloc> auto iae(long) -> typename std::is_empty<Alloc>::type; template <class Alloc, class T> auto rebind_alloc(int) -> typename Alloc::template rebind<T>::other; template <class Alloc, class T> auto rebind_alloc(long) -> rebind_first_param_t<Alloc, T>; } template <class Alloc> struct allocator_traits { using allocator_type = Alloc; using value_type = typename Alloc::value_type; using pointer = decltype(at_detail::pointer<Alloc, value_type>(0)); using const_pointer = decltype(at_detail::const_pointer<Alloc, value_type, pointer>(0)); using void_pointer = decltype(at_detail::void_pointer<Alloc, pointer>(0)); using const_void_pointer = decltype(at_detail::const_void_pointer<Alloc, pointer>(0)); using difference_type = decltype(at_detail::difference_type<Alloc, pointer>(0)); using size_type = decltype(at_detail::size_type<Alloc, difference_type>(0)); using propagate_on_container_copy_assignment = decltype(at_detail::pocca<Alloc>(0)); using propagate_on_container_move_assignment = decltype(at_detail::pocma<Alloc>(0)); using propagate_on_container_swap = decltype(at_detail::pocw<Alloc>(0)); using is_always_equal = decltype(at_detail::iae<Alloc>(0)); template <class T> using rebind_alloc = decltype(at_detail::rebind_alloc<Alloc, T>(0)); static pointer allocate(Alloc& a, size_type
n) { return a.allocate(n); } static pointer allocate(Alloc& a, size_type n, const_void_pointer hint) { return allocate_(a, n, hint, 0); } static void deallocate(Alloc& a, pointer p, size_type n) { a.deallocate(p, n); } template <class T, class... Args> static void construct(Alloc& a, T* p, Args&&... args) { construct_(a, p, 0, std::forward<Args>(args)...); } template <class T> static void destroy(Alloc& a, T* p) { destroy_(a, p, 0); } static size_type max_size(const Alloc& a) noexcept { return max_size_(a, 0); } static Alloc select_on_container_copy_construction(const Alloc& rhs) { return soccc(rhs, 0); } private: static auto allocate_(Alloc& a, size_type n, const_void_pointer hint, int) -> decltype(a.allocate(n, hint), void(), std::declval<pointer>()) { return a.allocate(n, hint); } static auto allocate_(Alloc& a, size_type n, const_void_pointer, long) -> pointer { return a.allocate(n); } template <class T, class... Args> static auto construct_(Alloc& a, T* p, int, Args&&... args) -> decltype(a.construct(p, std::forward<Args>(args)...), void()) { a.construct(p, std::forward<Args>(args)...); } template <class T, class... Args> static void construct_(Alloc&, T* p, long, Args&&... 
args) { ::new(static_cast<void*>(p)) T(std::forward<Args>(args)...); } template <class T> static auto destroy_(Alloc& a, T* p, int) -> decltype(a.destroy(p), void()) { a.destroy(p); } template <class T> static void destroy_(Alloc&, T* p, long) { p->~T(); } static auto max_size_(const Alloc& a, int) noexcept -> decltype(a.max_size(), std::declval<size_type>()) { return a.max_size(); } static auto max_size_(const Alloc&, long) noexcept -> size_type { return std::numeric_limits<size_type>::max() / sizeof(value_type); } static auto soccc(const Alloc& rhs, int) -> decltype(rhs.select_on_container_copy_construction(), std::declval<Alloc>()) { return rhs.select_on_container_copy_construction(); } static auto soccc(const Alloc& rhs, long) -> Alloc { return rhs; } }; } #endif I used N4659 as a reference. Answer: Well, it looks pretty clean, nice and right. Of course, if it really was part of the implementation, it would have to use solely reserved identifiers to avoid interacting with weird and ill-advised user-defined macros, making it look much less nice. Tmpl is a curious name for the primary type template-parameter. Please stay with the customary T, unless you have a much more telling name like Alloc. Tmpl is also a curious name for a template template parameter. TT is customary and more concise. Consider leaving names out if you don't need one, and they do not pull their weight conveying useful extra-information to the reader. I wonder what kind of logic you used to decide whether to put something as a private member, or in a private namespace for implementation-details. While there are good reasons for either, better use only one. A real implementation would probably mark ODR-used internal functions as always_inline in some implementation-defined way.
{ "domain": "codereview.stackexchange", "id": 34509, "tags": "c++, reinventing-the-wheel, c++17, template-meta-programming" }
Why is the transform in Schönhage–Strassen's multiplication algorithm cheap?
Question: The Schönhage–Strassen multiplication algorithm works by turning multiplications of size $N$ into many multiplications of size $lg(N)$ with a number-theoretic transform, and recursing. At least I think that's what it does because there's some other cleverness and I really don't understand it well enough to summarize accurately. It finishes in $O(N \cdot lg(N) \cdot lg(lg(N)))$ time. A number-theoretic transform is exactly like a Discrete Fourier Transform, except it's done in the finite field $F_{2^N+1}$ of integers modulo $2^N+1$. This makes the operations a lot cheaper, since the Fourier transform has a lot of multiplying by roots of unity, and $F_{2^N+1}$'s roots of unity are all powers of 2 so we can just shift! Also, integers are a lot easier to work with than floating point complex numbers. Anyways, the thing that confuses me is that $F_{2^N+1}$ is very large. If I give you a random element from $F_{2^N+1}$, it takes $O(N)$ bits to specify it. So adding two elements should take $O(N)$ time. And the DFT does a lot of adding. Schönhage–Strassen splits the input into $\frac{N}{lg(N)}$ groups with $lg(N)$ bits. These groups are the values of $F_{2^N+1}$ that it will transform. Each pass of the DFT will have $O(\frac{N}{lg(N)})$ additions/subtractions, and there are $O(lg(\frac{N}{lg(N)}))$ passes. So based on addition taking $O(N)$ time it seems like the cost of all those additions should be $O(N \frac{N}{lg(N)} lg(\frac{N}{lg(N)}))$, which is asymptotically the same as $O(N^2)$. We can do a bit better than that... because the values start out so small, the additions are quite sparse. The first pass' additions really only cost $O(lg(N))$ each, and the second pass' cost $2^1 O(lg(N))$ each, and the i'th pass' cost $O(min(N, 2^i \cdot lg(N)))$ each, but that still all totals out to a terrible $\frac{N^2}{lg(N)}$. How is Schönhage–Strassen making the additions cheap? How much do they cost overall? 
Is it because the algorithm actually uses $F_{N+1}$ (with $N$ guaranteed to be a power of 2)? There's enough stacked $2^{2^{n}}$ and German in the paper that I'm really not sure. On the other hand, I don't think that guarantees enough roots of unity for things to work out. Answer: According to the Wikipedia article, at each step the length of the integers is reduced from $N$ to (roughly) $\sqrt{N}$, and there are (roughly) $\sqrt{N}$ of them, and so the additions only cost $O(N)$. There is a detailed analysis of the running time in the final paragraph of the linked section, copied here in case it changes: In the recursive step, the observation is used that: Each element of the input vectors has at most $N/2^k$ bits; The product of any two input vector elements has at most $2N/2^k$ bits; Each element of the convolution is the sum of at most $2^k$ such products, and so cannot exceed $2N/2^k + k$ bits. Here $N$ is the current input length and $2^k = \Theta(\sqrt{N})$. Arithmetic is done modulo $2^n+1$, where $n$ is some multiple of $2^k$ which is larger than $2N/2^k + k$; note that $n = \Theta(\sqrt{N})$.
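To make the quoted bookkeeping concrete, here is a rough Python sketch; the exact rounding choices for $k$ and $n$ are assumptions on my part, not taken from the paper. It shows that one transform pass touches about $2^k \cdot n = O(N)$ bits, which is why the additions stay cheap:

```python
import math

def recursion_params(N):
    """Sketch of the parameter choices from the quoted analysis (rounding
    assumed): split an N-bit input into 2**k pieces with 2**k ~ sqrt(N),
    so each piece has ~sqrt(N) bits, and do arithmetic mod 2**n + 1 with
    n the smallest multiple of 2**k exceeding 2*N/2**k + k."""
    k = max(1, round(math.log2(math.isqrt(N))))
    pieces = 2 ** k
    lower = 2 * N // pieces + k
    n = ((lower // pieces) + 1) * pieces  # smallest multiple of 2**k > lower
    # One butterfly pass does ~2**k additions of n-bit numbers,
    # so a pass costs ~2**k * n bit operations.
    return pieces, n, pieces * n

for N in (2**10, 2**16, 2**20):
    pieces, n, pass_cost = recursion_params(N)
    print(N, pieces, n, pass_cost / N)  # cost-per-pass over N stays O(1)
```

The ratio in the last column stays a small constant, i.e. each of the $O(\log N)$ passes costs $O(N)$ bit operations rather than the $O(N^2/\lg N)$ feared in the question.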
{ "domain": "cs.stackexchange", "id": 3084, "tags": "algorithm-analysis, time-complexity, multiplication" }
amu and g/mol relation
Question: Do we have that $\pu{1 g/mol} = \pu{1 amu}$ ? Because we have, for the mass of an atom of carbon 12, call it $m(\ce{^12C})$, that $$m(\ce{^12C}) = \pu{12 amu}$$ and furthermore $$\pu{1 mol} \cdot m(\ce{^12C}) = \pu{12 g}$$ therefore $$m(\ce{^12C}) = \pu{12 amu} = \pu{12 g/mol}$$ So finally we get that $\pu{1 g/mol} = \pu{1 amu}$ . However, my chemistry teacher is telling me that those are two completely different things and that I am confused between the mass per atom and the mass per $6.022\cdot10^{23}$ atoms. I can't understand how, and this is really bugging me, so help is very appreciated. Note that this requires the mole to be a number (or a "constant"), which may be where I'm wrong. Answer: You are correct, but to make it a little more clear you can include the assumed "atom" in the denominator of amu: $$ \begin{align} m_{\ce{C}^{12}} &= \pu{12amu atom^-1} \\ \\ m_{\ce{C}^{12}} &= \pu{12g mol^-1} \\ \\ \pu{12amu atom^-1} &= \pu{12g mol^-1} \\ \\ \pu{1amu atom^-1} &= \pu{1g mol^-1} \end{align} $$ In other words, the ratio of amu/atom is the same as the ratio of g/mol. The definitions of amu and moles were intentionally chosen to make that happen (I'm surprised your teacher didn't explain this, actually). This allows us to easily relate masses at the atomic scale to masses at the macroscopic scale. To check this, look at the mass of an amu when converted to grams: $\pu{1amu}= \pu{1.6605E-24 g}$ Now divide one gram by one mole: $\pu{1g mol^-1}= \frac{\pu{1 g}}{\pu{6.022E23 atom}} = \pu{1.6605E-24 g atom^-1}$ It's the same number! Therefore: $\pu{1g mol^-1}= \pu{ 1 amu atom^-1}$
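A quick numeric check of the answer's final step (the constants are standard CODATA-style values):

```python
# 1 g/mol expressed per atom equals 1 amu expressed in grams.
AVOGADRO = 6.02214076e23      # atoms per mole
AMU_IN_G = 1.66053907e-24     # grams per atomic mass unit

grams_per_atom_for_1g_per_mol = 1.0 / AVOGADRO
print(grams_per_atom_for_1g_per_mol)   # ~1.6605e-24 g/atom
print(AMU_IN_G)                        # the same number
```

This is no coincidence: the current SI definitions fix Avogadro's number so that the two quantities agree.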
{ "domain": "chemistry.stackexchange", "id": 9823, "tags": "mole" }
What is the meaning of capacitance for a cell membrane?
Question: I fail to appreciate the significance of voltage, capacitance and charge by studying electrical circuits, but understanding these concepts in the context of a cell makes more sense to me. In the article below, I got stuck on the following sentence under capacitance: "we don't need much voltage to separate the charges and therefore the membrane capacitance is quite high" http://www.scholarpedia.org/article/Electrical_properties_of_cell_membranes What does it mean for a cell to use voltage to separate charges? What are the implications of a cell membrane being thicker? Like why would larger distances require a greater voltage? Answer: "we don't need much voltage to separate the charges and therefore the membrane capacitance is quite high" I don't think this sentence is particularly useful. The capacitance of the cell capacitor, which is formed by two conductors (electrolytes inside and outside the cell) and a dielectric (membrane), is determined by its physical characteristics, such as the thickness and the dielectric constant of the membrane and the area of the "plates", not by the voltage needed to separate the charges. What does it mean for a cell to use voltage to separate charges? It is rather the other way around: the cell, or more specifically, the ion pumps built into the cell membrane, acting as a battery, separate charges, by shuttling ions across the membrane, and, by doing so, charge the cell capacitor to a particular voltage. What are the implications of a cell membrane being thicker? A thicker membrane implies a smaller cell capacitance, which is similar to the effect of the dielectric thickness on the capacitance of a man-made capacitor. ...why would larger distances require a greater voltage? Again, this phrasing is not particularly useful. 
To function properly, the cell capacitor has to be charged to a particular voltage level (a resting membrane potential), therefore, regardless of the distance (the thickness of the membrane), the ion pumps will continue moving the charges across the membrane until this voltage level is reached.
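As a rough illustration of the thickness point, one can treat the membrane as a parallel-plate capacitor, $C = \varepsilon_0 \varepsilon_r A / d$. The values below are assumed textbook-style numbers, not taken from the linked article:

```python
# Back-of-envelope specific membrane capacitance (assumed values).
eps0 = 8.854e-12        # F/m, vacuum permittivity
eps_r = 3.0             # assumed dielectric constant of the lipid bilayer
d = 5e-9                # assumed membrane thickness, ~5 nm

c_per_area = eps0 * eps_r / d          # F/m^2
print(round(c_per_area * 100, 2))      # in uF/cm^2: ~0.53, near the ~1 uF/cm^2 textbook value

# Doubling the thickness halves the capacitance:
print((eps0 * eps_r / (2 * d)) / c_per_area)  # 0.5
```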
{ "domain": "physics.stackexchange", "id": 50014, "tags": "capacitance" }
Calculator in C - Parsing arithmetic expression using the shunting-yard algorithm
Question: This is a simple arithmetic calculator, which parses mathematical expressions specified in infix notation using the shunting-yard algorithm. This is one of my personal projects and I would love to receive expert advice. After compiling, you can run the program with one command-line argument: $ calc <arith_expr> This is an example of running the calculator: $ calc "3^2 + 4 * (2 - 1)" The passed arithmetic expression is tokenized and the operands and operators are stored in two different stacks, implemented as linked lists. The calculator currently supports these operators + - * / ^. In calc.c I have: #include <ctype.h> #include <math.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include "stack.h" struct node_float *operands = NULL; struct node_char *operators = NULL; enum token_type { TOKEN_TYPE_OPERATOR, TOKEN_TYPE_OPERAND, }; /* Can store an operator or an operand */ struct token { enum token_type type; union { char operator; float operand; } data; }; /* Return precedence of a given operator */ int prec(char op) { switch (op) { case '+': case '-': return 2; case '*': case '/': return 3; case '^': return 4; default: fprintf(stderr, "Error in %s: invalid operator '%c'\n", __func__, op); exit(EXIT_FAILURE); } } /* Check if given token is an operator */ bool is_operator(char token) { return token == '+' || token == '-' || token == '*' || token == '/' || token == '^'; } /* Apply mathematical operation to top two elements on the stack */ float eval(char op) { float tmp = pop_float(&operands); switch (op) { case '+': return pop_float(&operands) + tmp; case '-': return pop_float(&operands) - tmp; case '*': return pop_float(&operands) * tmp; case '/': return pop_float(&operands) / tmp; case '^': return pow(pop_float(&operands), tmp); default: fprintf(stderr, "Error in %s: invalid operator '%c'\n", __func__, op); exit(EXIT_FAILURE); } } /* Remove all spaces from string */ void rmspaces(char *str) { const char *dup = str; do { 
while (isspace(*dup)) ++dup; } while (*str++ = *dup++); } /* Return first token of given arithmetic expression */ struct token *tokenize(char **expr) { static bool op_allowed = false; struct token *token = malloc(sizeof *token); if (!token) { fprintf(stderr, "Error in %s: memory allocation failed\n", __func__); exit(EXIT_FAILURE); } if (op_allowed && is_operator(*expr[0])) { token->type = TOKEN_TYPE_OPERATOR; token->data.operator = *expr[0]; ++(*expr); op_allowed = false; } else if (!op_allowed && *expr[0] == '(') { token->type = TOKEN_TYPE_OPERATOR; token->data.operator = *expr[0]; ++(*expr); } else if (op_allowed && *expr[0] == ')') { token->type = TOKEN_TYPE_OPERATOR; token->data.operator = *expr[0]; ++(*expr); } else { token->type = TOKEN_TYPE_OPERAND; char *rest; token->data.operand = strtof(*expr, &rest); if (*expr == rest) { fprintf(stderr, "Error in %s: invalid expression\n", __func__); exit(EXIT_FAILURE); } strcpy(*expr, rest); op_allowed = true; } return token; } /* Handle a given token, which might be an operand or an operator */ void handle_token(struct token *token) { if (token->type == TOKEN_TYPE_OPERAND) { push_float(&operands, token->data.operand); } else if (is_operator(token->data.operator)) { while (operators != NULL && operators->data != '(' && prec(token->data.operator) <= prec(operators->data)) { float result = eval(pop_char(&operators)); push_float(&operands, result); } push_char(&operators, token->data.operator); } else if (token->data.operator == '(') { push_char(&operators, token->data.operator); } else if (token->data.operator == ')') { while (operators != NULL && operators->data != '(') { float result = eval(pop_char(&operators)); push_float(&operands, result); } pop_char(&operators); } else { fprintf(stderr, "Error in %s: invalid operator '%c'\n", __func__, token->data.operator); exit(EXIT_FAILURE); } } /* Handle command line arguments */ int main(int argc, char *argv[]) { if (argc != 2) { printf("Usage: %s <arith_expr>\n" "Example: %s 
\"5 2 3 * +\"\n", argv[0], argv[0]); return EXIT_FAILURE; } char *expr = argv[1]; rmspaces(expr); struct token *token; while (expr[0] != '\0') { token = tokenize(&expr); handle_token(token); } free(token); while (operators != NULL) { float result = eval(pop_char(&operators)); push_float(&operands, result); } if (operands == NULL || operands->next != NULL) { fprintf(stderr, "Error in %s: too many operands on stack\n", __func__); exit(EXIT_FAILURE); } printf("Result: %f\n", operands->data); return EXIT_SUCCESS; } In stack.c I have: #include <stdio.h> #include <stdlib.h> #include "stack.h" /* Push float onto stack */ void push_float(struct node_float **head, float data) { struct node_float *new = malloc(sizeof *new); if (!new) { fprintf(stderr, "Error in %s: memory allocation failed\n", __func__); exit(EXIT_FAILURE); } new->data = data; new->next = *head; *head = new; } /* Pop float from stack */ float pop_float(struct node_float **head) { if (*head == NULL) { fprintf(stderr, "Error in %s: stack underflow\n", __func__); exit(EXIT_FAILURE); } struct node_float *tmp = *head; float data = tmp->data; *head = tmp->next; free(tmp); return data; } /* Push char onto stack */ void push_char(struct node_char **head, char data) { struct node_char *new = malloc(sizeof *new); if (!new) { fprintf(stderr, "Error in %s: memory allocation failed\n", __func__); exit(EXIT_FAILURE); } new->data = data; new->next = *head; *head = new; } /* Pop char from stack */ char pop_char(struct node_char **head) { if (*head == NULL) { fprintf(stderr, "Error in %s: stack underflow\n", __func__); exit(EXIT_FAILURE); } struct node_char *tmp = *head; char data = tmp->data; *head = tmp->next; free(tmp); return data; } In stack.h I have: #ifndef STACK_H #define STACK_H struct node_float { float data; struct node_float *next; }; struct node_char { char data; struct node_char *next; }; /* Push float onto stack */ void push_float(struct node_float **head, float data); /* Pop float from stack */ float 
pop_float(struct node_float **head); /* Push char onto stack */ void push_char(struct node_char **head, char data); /* Pop char from stack */ char pop_char(struct node_char **head); #endif /* STACK_H */ Answer: Keep the stack as simple as possible You have given your stack implementation knowledge of the types that are stored in the stack; either float or char. This complicates the code, since now you have to have two node structs, and two sets of push and pop functions, one for each type. You should try to apply the separation of concerns design principle here, and limit the stack code to just managing the stack itself, not its contents. In this case, I would do this by making the stack manage tokens, and let the calling code worry about whether a token holds a float or a char. For example, like so: struct node { struct token data; struct node *next; }; void push(struct node **head, struct token data); struct token pop(struct node **head); This will actually simplify other code as well. For example, instead of: push_float(&operands, token->data.operand); You can now just write: push(&operands, *token); And even a line like: return pop_float(&operands) + tmp; Can still be a one-liner, by writing: return pop(&operands).data.operand + tmp; Consider passing tokens by value everywhere A struct token is a rather small structure, in fact on a 64-bit operating system it's just as big as a single pointer. So I would just pass it by value where possible, and avoid unnecessary memory allocations. This will also get rid of a memory leak, since you called free(token) outside the first while-loop in main(). Fix the example usage If you just run ./calc without arguments, you print usage information, including an example. This is very nice! However, the example you print makes it look like your program wants the input in reverse polish notation, but your program actually expects infix notation. I would just copy the example you gave here on Code Review. 
Move the code calling the tokenizer and evaluation function into its own function Again, try to separate concerns, and let main() just worry about parsing the command line arguments and printing the final result, but move the code calling the tokenizer and evaluator into its own function. For example: float calculate(char *expr) { rmspaces(expr); ... return operands->data.data.operand; } int main(int argc, char *argv[]) { if (argc != 2) { ... } printf("Result: %f\n", calculate(argv[1])); } Avoid static and global variables if possible There are global variables operands and operators, and a static variable op_allowed in tokenize(). While it is fine for a simple application like this, in larger applications this can lead to problems. A good practice is to put all the state related to the calculator in a single struct, and pass a pointer to such a struct to any functions that need to access this state. For example: struct calculator_state { struct node *operands; struct node *operators; bool op_allowed; }; float calculate(char *expr) { rmspaces(expr); struct calculator_state state = {NULL, NULL, false}; while (expr[0] != '\0') { struct token token = tokenize(&state, &expr); handle_token(token); } ... } Of course you have to modify tokenize(), handle_token() and eval() to use this state as well: float eval(struct calculator_state *state, char op) { float tmp = pop(&state->operands); ... } Ensure you free all memory At the end of the calculation, you have a single node left on the stack that contains the result. It is good practice to also ensure you free that node before returning from calculate(), otherwise you will have a memory leak. This is not a big deal in your current program, but if you were to use calculate() multiple times in a larger program, then it does become a problem. Don't copy strings over themselves There is a bug in this line of code: strcpy(*expr, rest); You are copying part of a string over itself. 
This has the same issues as memcpy() with overlapping source and destination: there is no guarantee in which order strcpy() is doing the actual copying, so it might corrupt parts of rest before it finishes the copy. There is no strmove() unfortunately, but you don't need it at all, you can just replace this line with: *expr = rest;
{ "domain": "codereview.stackexchange", "id": 41010, "tags": "algorithm, c, parsing, calculator, stack" }
Structures of cyclodextrin complexed with small ligands
Question: For some structural study I am looking for cyclodextrin structures (in 3D format such as pdb, mol2, etc) complexed with small molecule ligands, such as cholesterol and even smaller. Right now I could only find on the PDB database some proteins complexed with cyclodextrin, but no structure of cyclodextrin with small ligands, could you help me? Answer: I looked for relevant publications at Web of Science using 'structur* AND cyclodextrin' in the Title field. For the period 2011-2012 there were 56 hits including: Racz et al (2012) Structure of the inclusion complex of beta-cyclodextrin with lipoic acid from laboratory powder diffraction data. Acta Crystallographica Section B 68: 164-170 Ali et al (2012) Structure determination of fexofenadine-a-cyclodextrin complex by quantitative 2D ROESY analysis and molecular mechanics studies. Magnetic Resonance in Chemistry 50:299-304 Lula et al (2012) Interaction between bradykinin potentiating nonapeptide (BPP9a) and beta-cyclodextrin: A structural and thermodynamic study. Materials Science & Engineering C 32: 244-253 I looked at some of the papers but I didn't find any specific case where a structure has been deposited in a database. There were, however, several mentions of the Cambridge Crystallographic Data Centre as a source of small molecule structures.
{ "domain": "biology.stackexchange", "id": 722, "tags": "structural-biology, structure-prediction" }
Given airplane mass, velocity of air under wing, and a wing area, find velocity of air over wing
Question: I attempted to solve this problem as a tutor for a student and struggled, but want to be convinced the professor didn't provide enough information. The problem is essentially: We wish to maintain a plane in flight. The plane has a mass of 1.9E6 kg, the wings have a surface area of 1500 m^2, and the velocity of the air underneath the wing is 97 m/s. I set up: P1 + 1/2*density*velocity1^2 + density*gravity*y1 = P2 + 1/2*density*velocity2^2 + density*gravity*y2 where the 1 sub terms are beneath the wing, and the 2 sub terms are above; we are essentially looking for velocity2. I realized that without a thickness of the wing, the professor is probably wanting us to recognize that (y2-y1) ~ 0, thus P1 + 1/2*density*velocity1^2 = P2 + 1/2*density*velocity2^2 Recognizing that the upward and downward forces must be equal to the pressure exerted downward by the force of gravity on the mass of the plane only, we have P2 = (1.9E6 kg * g) / Area. P1 is different, however the force is the same, thus P1 = (1.9E6 kg * g) / (1500 m^2), since area is given, and I can only assume it is the bottom area. We now have everything we need except the top area of the wing, which given the equation A1*v1 = A2*v2 allows us to relate the area of the top to the area of the bottom; this results in A2 = A1*v1 / v2. Putting all of this together results in a quadratic equation that gives essentially the same velocity over the top as was given for the bottom (our result was 97.02 m/s). This of course was not the answer expected, which is why I am asking for help. What did I do incorrectly here, or is there truly not enough information given? Answer: If the plane is just flying at constant altitude then a vertical force balance requires that lift from the wings be equal to the plane's weight. 
The lift force, $L$ comes from a pressure difference above and below the wing so that $$ L = (p_1-p_2)A = mg $$ You can use the Bernoulli equation assuming a negligible difference in height to express the pressure difference as $$ p_1-p_2 = \rho/2 (v_2^2-v_1^2)$$ You should then be able to rearrange for $v_2$ and solve. As @CarlWitthroft pointed out, this ain't how planes actually fly, but it does seem to answer your question.
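Plugging in the problem's numbers with an assumed sea-level air density (the problem does not give one) yields a concrete $v_2$:

```python
import math

# Rearranging the answer's two equations: v2 = sqrt(v1^2 + 2mg/(rho A)).
m = 1.9e6        # kg, as given (unusually heavy, but it's the problem's number)
A = 1500.0       # m^2 wing area
v1 = 97.0        # m/s under the wing
g = 9.81         # m/s^2
rho = 1.2        # kg/m^3, assumed sea-level air density

v2 = math.sqrt(v1**2 + 2 * m * g / (rho * A))
print(round(v2, 1))   # ~173.5 m/s over the wing
```

Note the key difference from the question's attempt: no continuity relation A1*v1 = A2*v2 is needed, only the force balance and Bernoulli.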
{ "domain": "physics.stackexchange", "id": 21062, "tags": "homework-and-exercises, fluid-dynamics, lift" }
How to estimate the maximum chlorine hazard from 1 g of Palladium(II) chloride?
Question: I plan to work with an electrolyte that contains 1 g of Palladium(II) chloride. I am trying to estimate how much of a hazard this system could possibly pose (and provide ventilation, chlorine monitoring and other safety measures accordingly). How would you go about determining what max. ppm concentration of chlorine might one need to expect in this case? Answer: Consider the molecular mass, and the mass of chlorine in 1 g of $\ce{PdCl2}$. Then consider the volume of the work space, and how quickly the $\ce{Cl2}$ disperses to fill that space. Until it disperses, the concentration at the source might be 1E6 ppm (i.e. 100%), but is that meaningful? If it disperses to fill a flask, what is that volume? Are you doing this under a fume hood? BTW, if no hood is available, you could enclose the reaction vessel(s) in a container with some sodium thiosulfate, $\ce{Na2S2O3}$, solution, which was used to absorb chlorine in gas mask canisters.
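A rough worst-case estimate along the lines the answer suggests, assuming every chlorine atom in the salt is released as $\ce{Cl2}$ and an assumed 30 m³ workspace (both are illustrative assumptions, not part of the question):

```python
# Worst case: all Cl in 1 g PdCl2 evolved as Cl2 gas.
M_PD, M_CL = 106.42, 35.45            # g/mol
m_pdcl2 = 1.0                          # g

mol_pdcl2 = m_pdcl2 / (M_PD + 2 * M_CL)
mol_cl2 = mol_pdcl2                    # at most one Cl2 per PdCl2
vol_cl2_l = mol_cl2 * 24.45            # L at ~25 degC, 1 atm (molar volume)

room_m3 = 30.0                         # assumed workspace volume
ppm = vol_cl2_l * 1e-3 / room_m3 * 1e6
print(round(mol_cl2 * 1e3, 2), "mmol Cl2,", round(ppm, 1), "ppm in a 30 m^3 room")
```

So even the worst case, fully dispersed in a small room, is a few ppm, which is above typical exposure limits near 1 ppm but far from the 1E6 ppm at the source; the dispersal volume dominates the hazard estimate.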
{ "domain": "chemistry.stackexchange", "id": 12444, "tags": "safety, electrolysis" }
Inductor connected to an AC source
Question: Consider an inductor connected to an AC source, $V=V_0\sin\omega t$. Let the switch in the circuit be closed at $t=0$. Then by Kirchhoff's voltage law, $$ V-L\frac{dI}{dt}=0 $$ where $I$ is the current in the circuit. It follows from this that $$ I=-\frac{V_0}{L\omega }\cos\omega t+c. $$ My question is regarding the value of the constant $c$. Since the current in the circuit should be zero at $t=0$, I assumed it should be equal to $\frac{V_0}{L\omega }$, so the equation for current in the circuit becomes $$ I=\frac{V_0}{L\omega }(1-\cos\omega t). $$ However, it is printed in my textbook that the value of $c$ equals zero, so that $$ I=-\frac{V_0}{L\omega }\cos\omega t. $$ Why is my assumption wrong and how does current flow through the circuit if the initial voltage of the AC source at $t=0$ is zero? Answer: Both your equations are correct; they apply to different situations. The equation $i=\frac{V_0}{L\omega }(1-\cos\omega t) = \frac{V_0}{L\omega } - \frac{V_0}{L\omega }\cos\omega t$ (1) is the general one, whereas $i=-\frac{V_0}{L\omega }\cos\omega t$ (2) is more specific. Let's look at equation (1) first. It can be split into a transient part, $\frac{V_0}{L\omega }$, and a steady-state part, $i=-\frac{V_0}{L\omega }\cos\omega t$. It has to be this way because you made the current through the inductor, $i$, zero when time, $t$, was zero. According to the dictionary, transient means lasting only for a short time, but the transient term in this example lasts forever! Why is that? It is because there is no resistance in the circuit. The book solution is just the steady state with the transient having died away, which in practical situations would always be true. To illustrate this I have used the Spinning Numbers sandbox with no resistance $\rm(0\,k\Omega)$ in the circuit. You will note a sinusoidal variation (steady-state) with a dc offset (transient). 
Now with $1\,\rm k\Omega$ in the circuit, you can see the transient dc offset decaying to zero, which is the book solution. You can run the first simulation yourself by clicking on the link I have provided above and then clicking on the $\fbox{TRAN}$ button. You can change a component value by clicking on the component. The $\Large \bf ?$ at the top left will help guide you, or go to the Spinning Numbers website for more information.
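The same transient-versus-steady-state behaviour can be reproduced numerically; this sketch integrates $L\,di/dt + Ri = V_0\sin\omega t$ with forward Euler (all component values assumed for illustration):

```python
import math

# Mean current over the last drive period: with R = 0 the dc offset
# ("transient") V0/(L*w) persists; with R > 0 it has died away.
def mean_over_last_period(R, L=1.0, V0=1.0, w=2 * math.pi,
                          periods=20, steps_per_period=10000):
    dt = (2 * math.pi / w) / steps_per_period
    i, t, tail = 0.0, 0.0, []
    for n in range(periods * steps_per_period):
        i += (V0 * math.sin(w * t) - R * i) / L * dt   # forward Euler step
        t += dt
        if n >= (periods - 1) * steps_per_period:
            tail.append(i)
    return sum(tail) / len(tail)

print(round(mean_over_last_period(R=0.0), 3))  # ~0.159 = V0/(L*w): offset stays
print(round(mean_over_last_period(R=1.0), 3))  # ~0: transient has died away
```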
{ "domain": "physics.stackexchange", "id": 91016, "tags": "electromagnetism, electric-circuits, electric-current, electromagnetic-induction, inductance" }
What is the origin of the Quadratic factor in calculation of XRD peaks?
Question: Whenever we are to calculate the d-spacing (for a cubic system) in XRD peaks with reference to quadratic factors, we do that with the help of the formula $$\frac{1}{d_{hkl}^2}=\frac{h^2+k^2+l^2}{a^2}.$$ Please tell me how we arrive at this form and where the factor ${h^2+k^2+l^2}$ comes from. I know that things are different for hexagonal. Would be happy if both are explained with more emphasis on the cubic one. Answer: We know that the planes (h k l) are the planes which are perpendicular to the vector (h, k, l). Thus, their equations are: $$hx+ky+lz=D$$ in which D is a real number. But these planes have to pass through atoms of the crystal. So, $$D=na$$ where a is the lattice parameter and n is a natural number (e.g., 1, 2, 3, ...). So, the planes have the equation: $$hx+ky+lz=na$$ The distance between two adjacent planes is, thus, the difference between their distances from the origin of the Cartesian coordinate system. $$d=\frac{na}{\sqrt{h^2+k^2+l^2}}-\frac{(n-1)a}{\sqrt{h^2+k^2+l^2}}$$ $$d=\frac{a}{\sqrt{h^2+k^2+l^2}}$$ And thus your formula is correct.
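The cubic formula, and for contrast the standard hexagonal relation $1/d^2 = \frac{4}{3}\frac{h^2+hk+k^2}{a^2} + \frac{l^2}{c^2}$, in code form (the Cu lattice parameter used in the example is an assumed textbook value in Å):

```python
import math

def d_cubic(a, h, k, l):
    """d-spacing for a cubic lattice: d = a / sqrt(h^2 + k^2 + l^2)."""
    return a / math.sqrt(h * h + k * k + l * l)

def d_hexagonal(a, c, h, k, l):
    """Standard hexagonal relation, with separate a and c parameters."""
    inv_d2 = 4.0 / 3.0 * (h * h + h * k + k * k) / a**2 + l * l / c**2
    return 1.0 / math.sqrt(inv_d2)

print(round(d_cubic(3.615, 1, 1, 1), 4))   # Cu (111): ~2.0871 Angstrom
```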
{ "domain": "chemistry.stackexchange", "id": 12437, "tags": "inorganic-chemistry, physical-chemistry, materials, x-ray-diffraction" }
Why don't different output weights break symmetry?
Question: My deep learning lecturer told us that if a hidden node has identical input weights to another, then the weights will remain the same over the training/there will be no separation. This is confusing to me, because my expectation was that you should only need the input or output weights to be different. Even if the input nodes are identical, the output weights would affect the backpropagated gradient at the hidden nodes and create divergence. Why isn't this the case? Answer: It is fine if a hidden node has identical initial weights with nodes in a different layer, which is what I assume you mean by output weights. The problem with weight-symmetry arises when nodes within the same layer that are connected to the same inputs with the same activation function are initialized identically. To see this, the output of a node $i$ within a hidden layer is given by $$\alpha_i = \sigma(W_i^{T}x + b) $$ where $\sigma$ is the activation function, $W$ is the weight matrix, $x$ is input, $b$ is bias. If the weights $W_{i}=W_{j}$ are identical for nodes $i,j$ (note that bias is typically initialized to 0), then $\alpha_i = \alpha_j$ and the backpropagation pass will update both nodes identically.
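A toy numerical demonstration of the answer's claim; the network shape, initial values, and learning rate are all assumed. Two hidden units initialized identically, with zero bias and identical output weights, receive identical gradients at every step and therefore never separate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = [1.0, -2.0], 1.0
W = [[0.5, -0.3], [0.5, -0.3]]    # identical rows: the problem case
v = [0.7, 0.7]                    # identical output weights too (bias = 0)
lr = 0.1

for _ in range(100):
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in W]
    y = sum(vi * hi for vi, hi in zip(v, h))
    err = y - target
    for i in range(2):
        # Backprop: identical h[i] and v[i] give identical gradients.
        grad_h = err * v[i] * h[i] * (1 - h[i])
        W[i] = [wi - lr * grad_h * xi for wi, xi in zip(W[i], x)]
        v[i] -= lr * err * h[i]

print(W[0] == W[1], v[0] == v[1])  # True True: symmetry is never broken
```

If the output weights were initialized differently instead, `grad_h` would differ between the two units and the rows of `W` would diverge, which is the scenario the question describes.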
{ "domain": "datascience.stackexchange", "id": 7696, "tags": "weight-initialization" }
Time for bead to slide along the chord of a vertical circle
Question: I ran into this problem in my mechanics homework. Here's my go at it. I hit a wall at the end and I just don't know what to do. assuming this circle Please note that $\alpha \neq 90$ degrees. It's just faulty sketching. Sorry. $\because Chord Length (L) = 2rSin(\frac{\theta}{2})$ $\therefore S_1 = 2rSin(\frac{\theta}{2}), S_2 = 2rSin(\frac{\alpha}{2})$ $\because S = V_i t + \frac{1}{2}at^2 $ $\therefore S_1 = 0 + \frac{1}{2}a(t_1)^2, S_2 = 0 + \frac{1}{2}a(t_2)^2$ $\therefore 2rSin(\frac{\theta}{2}) = \frac{1}{2}a(t_1)^2, 2rSin(\frac{\alpha}{2}) = \frac{1}{2}a(t_2)^2 $ $(t_1)^2 = \frac{4rSin(\frac{\theta}{2})}{a}$, $(t_2)^2 = \frac{4rSin(\frac{\alpha}{2})}{a}$ $\therefore \frac{(t_1)^2}{(t_2)^2} = \frac{\frac{4rSin(\frac{\theta}{2})}{a}}{\frac{4rSin(\frac{\alpha}{2})}{a}}$ $\frac{(t_1)^2}{(t_2)^2} = \frac{Sin(\frac{\theta}{2})}{Sin(\frac{\alpha}{2})}$ $\frac{t_1}{t_2} = \frac{\sqrt{Sin(\frac{\theta}{2})}}{\sqrt{Sin(\frac{\alpha}{2})}}$ That's it. That's the wall I hit. I don't know what to do anymore. Can someone help? Answer: Using geometry you can get for the distance AC: $S=2r\cos\alpha=2r\sin\beta$ where $\alpha$ is the angle MAC and $\beta=90-\alpha$ is the slope of AC. (The polar equation of a circle of radius $a$ with origin at A is $r=2a\sin\theta$.) The acceleration down AC is $a=g\sin\beta$. The distance is $S=2r\sin\beta$. So the time of descent is $t=\sqrt{\frac{2S}{a}}=\sqrt{\frac{4r}{g}}$. This is independent of $\beta$ (or equivalently $\theta$). Therefore $t_1=t_2$. More generally, the bead will descend in the same time $\sqrt{\frac{4r}{g}}$ along any chord. Your calculation is close to success. $\frac{\theta}{2}=\beta$, the slope of AC. The accelerations $a=g\sin\beta$ along AC and AB are not equal, they depend on the slope $\beta$.
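A quick numeric check of the answer's result, evaluating the descent time for several chord slopes:

```python
import math

# Descent time along a chord of slope beta from the top of a vertical
# circle: S = 2r sin(beta), a = g sin(beta), so t = sqrt(2S/a) = sqrt(4r/g)
# for every beta.
g, r = 9.81, 1.0
times = []
for beta_deg in (15, 30, 45, 60, 75):
    beta = math.radians(beta_deg)
    S = 2 * r * math.sin(beta)        # chord length
    a = g * math.sin(beta)            # acceleration component along chord
    times.append(math.sqrt(2 * S / a))

print([round(t, 6) for t in times])          # all identical
print(round(math.sqrt(4 * r / g), 6))        # sqrt(4r/g)
```

The $\sin\beta$ factors cancel exactly, which is why the time is slope-independent.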
{ "domain": "physics.stackexchange", "id": 38651, "tags": "newtonian-mechanics, classical-mechanics, kinematics" }
What would be the pressure of 1 kg of photon gas at room temperature put in a volume of 1 liter?
Question: Suppose a number of photons with spectrum corresponding to black body spectrum at 293 K with total energy corresponding to 1 kg put in a box with ideal mirror walls with volume of 1/1000 of a cubic meter (1 liter). What pressure this photonic gas will manifest on the walls of the box? Answer: Photons are radiation so their equation of state is $$ p = \frac{\rho}{3} $$ where $\rho$ is the energy density. So we have $$ p = \frac{mc^2}{3V} = \frac{1\times 9\times 10^{16}\,\,{\rm J}}{0.003\,{\rm m}^3} = 3 \times 10^{19}\,\,{\rm Pa}$$ It's a huge pressure. Not a surprising fact because the actual mass of photons we can produce is negligible. One kilogram worth of photons could be obtained by detonating a reasonable number of H-bombs: the mass would be taken from the difference between the helium and hydrogen nuclear masses. Note that in this parameterization, the result doesn't depend on the frequency/wavelength/temperature of the photons.
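The answer's arithmetic, spelled out:

```python
# p = rho/3 = m c^2 / (3V) for a photon gas.
c = 2.998e8          # m/s
m = 1.0              # kg of photon energy-equivalent
V = 1e-3             # m^3 (1 liter)

p = m * c**2 / (3 * V)
print(f"{p:.2e} Pa")  # ~3e19 Pa
```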
{ "domain": "physics.stackexchange", "id": 2138, "tags": "homework-and-exercises, photons, ideal-gas" }
What should I do when I have a variable-length sequence when instantiating an LSTM in Keras?
Question: In Keras, when we use an LSTM/RNN model, we need to specify the node [i.e., LSTM(128)]. I have a doubt regarding how it actually works. From the LSTM/RNN unfolding image or description, I found that each RNN cell takes one time step at a time. What if my sequence is larger than 128? How do I interpret this? Can anyone please explain? Thanks in advance. Answer: In Keras, what you specify is the hidden layer size. So: LSTM(128) gives you a Keras layer representing an LSTM with a hidden layer size of 128. As you said: From the LSTM/RNN unfolding image or description, I found that each RNN cell takes one time step at a time So if you picture your RNN for one time step, it will look like this: And if you unfold it in time, it looks like this: You are not limited in your sequence size; this is one of the features of RNNs: since you input your sequence element by element, the size of the sequence can be variable. That number, 128, represents just the size of the hidden layer of your LSTM. You can see the hidden layer of the LSTM as the memory of the RNN. Of course the goal is not for the LSTM to remember everything of the sequence, just the links between elements. That's why the size of the hidden layer can be smaller than the size of your sequence. Sources: Keras documentation This blog Edit From this blog: The larger the network, the more powerful, but it’s also easier to overfit. Don’t want to try to learn a million parameters from 10,000 examples – parameters > examples = trouble. So the consequence of reducing the size of the LSTM's hidden state is that the model will be simpler. It might not be able to capture the links between the elements of the sequence. But if you make the size too big, your network will overfit! And you absolutely don't want that. Another really good blog on LSTM: this link
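A pure-Python vanilla-RNN sketch (not an LSTM and not Keras code, just an illustration of the principle) showing that the hidden size is independent of the sequence length:

```python
import math, random

# A 4-unit recurrent cell consumes sequences of ANY length, one element
# at a time; the state it carries always has HIDDEN entries.
random.seed(0)
HIDDEN = 4
Wx = [random.uniform(-0.1, 0.1) for _ in range(HIDDEN)]           # input->hidden
Wh = [[random.uniform(-0.1, 0.1) for _ in range(HIDDEN)]
      for _ in range(HIDDEN)]                                      # hidden->hidden

def run(sequence):
    h = [0.0] * HIDDEN
    for x in sequence:                 # one time step per sequence element
        h = [math.tanh(Wx[i] * x + sum(Wh[i][j] * h[j] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
    return h                           # final state: always length HIDDEN

print(len(run([1.0, 2.0, 3.0])))       # 4
print(len(run([0.5] * 300)))           # still 4: 300 steps, 4-unit state
```

So a sequence longer than 128 is no problem for LSTM(128); the 128 only sizes the state vector that is threaded through the time steps.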
{ "domain": "ai.stackexchange", "id": 771, "tags": "machine-learning, recurrent-neural-networks, long-short-term-memory" }
Noughts & Crosses (Final Version)
Question: I've made a major update to my Noughts & Crosses program, this ISN'T a duplicate! I know it's not good to use system("pause") and system("cls"), however, I just wanted to make the console screen more readable so resorted to these methods to achieve this. I would like feedback on how well I've used std::shared_ptr, the std::algorithm library, and any suggestions on how I can improve. I have also returned a std::shared_ptr null-ptr. A lot of what I've done has been intuitive and I'm unsure if what I've done is the best way to go about doing certain things. If you run this program, please, just press enter where needed, it's two robots playing against each other. If you'd like to manually play it, please change the arguments passed into the game.play function to a human. #include <iostream> #include <vector> #include <map> #include <utility> #include <algorithm> #include <random> #include <ctime> #include <cstdlib> #include <ranges> class Player { private: std::string m_type; unsigned char m_name; int m_winTally; int m_drawed; public: std::string GetType()const { return m_type; }; Player(unsigned char name, std::string&& type = "Player") :m_name(name), m_type(type), m_winTally(0), m_drawed(0) {} virtual unsigned char GetName()const { return m_name; } virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) = 0; virtual int NextMove(std::map<int, unsigned char>& board) = 0; virtual ~Player() = default; void AddWinToTally() { m_winTally++; } void Drawed() { m_drawed++; } int GetGameWins() const { return m_winTally; } int GetDraws() const { return m_drawed; } }; class Human : public Player { public: Human(unsigned char name) :Player(name, "Human") {} virtual int NextMove(std::map<int, unsigned char>& board) override { int move; std::cout << "Enter a number on the board (e.g. 
1): "; std::cin >> move; return move; } virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) { auto validSquare = std::find_if(board.begin(), board.end(), [&](auto pair) { return pair.first == move; }); if (validSquare != board.end()) { if (validSquare->second == '-') { validSquare->second = Player::GetName(); return true; } else { std::cout << "This square has already been claimed. Choose a different square!" << std::endl; return false; } } return false; } virtual ~Human() = default; }; class Robot : public Player { public: Robot(unsigned char name) :Player(name, "Robot") {} bool CheckAvailability(std::map<int, unsigned char>& board, int number, std::vector<int>& keys) { for (auto& cell : board) { if (cell.first == number) { if (cell.second == '-') { return true; } } } std::remove_if(keys.begin(), keys.end(), [&](auto& key) { return key == number; }); return false; } virtual int NextMove(std::map<int, unsigned char>& board) override { std::vector<int>number = { 1,2,3,4,5,6,7,8,9 }; int randNum = 0; std::srand(std::time(0)); do { randNum = rand() % 9 + 1; } while (CheckAvailability(board, randNum, number) == false); return randNum; } virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) { auto validSquare = std::find_if(board.begin(), board.end(), [&](auto pair) { return pair.first == move; }); if (validSquare != board.end()) { if (validSquare->second == '-') { validSquare->second = Player::GetName(); return true; } else { std::cout << "This square has already been claimed. Choose a different square!" 
<< std::endl; return false; } } return false; } virtual ~Robot() = default; }; class NoughtsAndCrosses { private: //std::vector<Player*> m_p; std::map<int, unsigned char>board; void DisplayBoard() { for (auto const& cell : board) { if (cell.first % 3 == 1) { std::cout << "\n\n"; } if (cell.second != '-') { std::cout << cell.second << " "; } else { std::cout << cell.first << " "; } } std::cout << "\n\n"; } auto CheckForAWinner(std::map<int, unsigned char>& board, std::shared_ptr<Player>& player) { if (board.at(1) == player->GetName() && board.at(2) == player->GetName() && board.at(3) == player->GetName()) { return true; } else if (board.at(4) == player->GetName() && board.at(5) == player->GetName() && board.at(6) == player->GetName()) { return true; } else if (board.at(7) == player->GetName() && board.at(8) == player->GetName() && board.at(9) == player->GetName()) { return true; } else if (board.at(1) == player->GetName() && board.at(4) == player->GetName() && board.at(7) == player->GetName()) { return true; } else if (board.at(2) == player->GetName() && board.at(5) == player->GetName() && board.at(8) == player->GetName()) { return true; } else if (board.at(3) == player->GetName() && board.at(6) == player->GetName() && board.at(9) == player->GetName()) { return true; } else if (board.at(1) == player->GetName() && board.at(5) == player->GetName() && board.at(9) == player->GetName()) { return true; } else if (board.at(7) == player->GetName() && board.at(5) == player->GetName() && board.at(3) == player->GetName()) { return true; } else { return false; } } bool CheckForDraw(std::map<int, unsigned char>& board) { return std::all_of(board.begin(), board.end(), [&](auto& pair) {return pair.second != '-'; }); } public: NoughtsAndCrosses() { board = { std::make_pair(1,'-'),std::make_pair(2,'-'),std::make_pair(3,'-'), std::make_pair(4,'-'),std::make_pair(5,'-'),std::make_pair(6,'-'), std::make_pair(7,'-'),std::make_pair(8,'-'),std::make_pair(9,'-') }; } void ResetBoard() { 
std::for_each(board.begin(), board.end(), [&](auto& pair) { pair.second = '-'; }); } auto play(std::shared_ptr<Player>& p1, std::shared_ptr<Player>& p2) { int currentPlayer = 1; bool isWinner = false; bool isDraw = false; std::vector<std::shared_ptr<Player>>m_player = { p1, p2 }; do { currentPlayer = (currentPlayer + 1) % 2; do { system("cls"); std::cout << m_player.at(currentPlayer)->GetType() << ": " << m_player.at(currentPlayer)->GetName() << " turn: " << std::endl; DisplayBoard(); system("pause"); } while (m_player.at(currentPlayer)->ClaimSquare(board, m_player.at(currentPlayer)->NextMove(board)) == false); //std::cout << "\nPress enter to make the robot move. . ."; //std::cin.get(); //system("cls"); } while (CheckForDraw(board) == false && (isWinner = CheckForAWinner(board, m_player.at(currentPlayer))) == false); if (isWinner == true) { return m_player.at(currentPlayer); } m_player.at(0)->Drawed(); m_player.at(1)->Drawed(); DisplayBoard(); ResetBoard(); return std::shared_ptr<Player>(nullptr); } }; int main() { std::shared_ptr<Player> human1 = std::make_shared<Human>('O'); std::shared_ptr<Player> human2 = std::make_shared<Human>('X'); std::shared_ptr<Player> robot1 = std::make_shared<Robot>('O'); std::shared_ptr<Player> robot2 = std::make_shared<Robot>('X'); std::vector<std::shared_ptr<Player>>player = { robot1, robot2 }; NoughtsAndCrosses game; int round = 3; int roundCount = 0; std::shared_ptr<Player>winner; do { int gameCount = 1; int totalGamesinRound = 3; std::cout << "START GAME!\n"; system("pause"); system("cls"); std::cout << "\nROUND " << ++roundCount << ". . 
.\n"; do { std::cout << "Game " << gameCount << " of round " << roundCount << "\n"; winner = game.play(robot1, robot2); if (winner != std::shared_ptr<Player>(nullptr)) { std::cout << "Winner of game " << gameCount << " is type: " << winner->GetType() << ": " << winner->GetName() << "\n"; winner->AddWinToTally(); } else { system("cls"); std::cout << "Game " << gameCount << " is a draw!\n"; } gameCount++; totalGamesinRound--; } while (totalGamesinRound != 0); /* std::cout << "Game 2: Human vs Robot\n"; game.play(robot1, robot1);*/ std::cout << "Wins for " << robot1->GetType() << ": Player : " << robot1->GetName() << " - " << robot1->GetGameWins() << "\n"; std::cout << "Wins for " << robot2->GetType() << ": Player : " << robot2->GetName() << " - " << robot2->GetGameWins() << "\n"; std::cout << "Drawed: " << robot1->GetDraws() << "\n"; auto playerWithMostWins = std::max_element(player.begin(), player.end(), [](const auto& lhs, const auto& rhs) { return lhs->GetGameWins() < rhs->GetGameWins(); }); std::cout << "Winner of round " << roundCount << " is " << playerWithMostWins->get()->GetName() << "\n"; round--; } while (round != 0); } Answer: Make system() more portable The reason system("cls") isn't portable is that the command on Windows is cls but on most other platforms it is clear. You can check for the OS using certain macros. void clear_screen() { #ifdef _WIN32 // windows system("cls"); #else // macOS and linux system("clear"); #endif //_WIN32 } A similar situation for pause void pause() { #ifdef _WIN32 // windows system("pause"); #else // macOS and linux system("read"); #endif //_WIN32 } I know it's not good to use system("pause") and system("cls") however, I just wanted to make the console screen more readable It isn't not good, but pure evil Code structure There is a lot of room for improvement here. Let me start with the simplest one Unnecessary getters and setters!
class Player { private: std::string m_type; unsigned char m_name; int m_winTally; int m_drawed; public: std::string GetType()const { return m_type; }; Player(unsigned char name, std::string&& type = "Player") :m_name(name), m_type(type), m_winTally(0), m_drawed(0) {} virtual unsigned char GetName() const { return m_name; } virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) = 0; virtual int NextMove(std::map<int, unsigned char>& board) = 0; void AddWinToTally() { m_winTally++; } void Drawed() { m_drawed++; } int GetGameWins() const { return m_winTally; } int GetDraws() const { return m_drawed; } }; How about class Player { public: std::string m_type; unsigned char m_name; int m_winTally; int m_drawed; Player(unsigned char name, std::string&& type = "Player") : m_name(name), m_type(type), m_winTally(0), m_drawed(0) {} virtual unsigned char GetName() const { return m_name; } virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) = 0; virtual int NextMove(std::map<int, unsigned char>& board) = 0; }; ClaimSquare() Your function ClaimSquare() is virtual and has been overridden in both of the derived classes, let's look at them Human virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) { auto validSquare = std::find_if(board.begin(), board.end(), [&](auto pair) { return pair.first == move; }); if (validSquare != board.end()) { if (validSquare->second == '-') { validSquare->second = Player::GetName(); return true; } else { std::cout << "This square has already been claimed. Choose a different square!" << std::endl; return false; } } return false; } Robot virtual bool ClaimSquare(std::map<int, unsigned char>& board, int move) { auto validSquare = std::find_if(board.begin(), board.end(), [&](auto pair) { return pair.first == move; }); if (validSquare != board.end()) { if (validSquare->second == '-') { validSquare->second = Player::GetName(); return true; } else { std::cout << "This square has already been claimed. 
Choose a different square!" << std::endl; return false; } } return false; } They both are the EXACT same. The point of polymorphism is to reuse the same code, i.e. reduce repetition. Here you have done the EXACT opposite, introduced more repetition. Why is it virtual? On the contrary, it shouldn't be a part of the player class. It should be a part of your NoughtsAndCrosses since that contains the game board. The same applies to CheckAvailability(). What does that have to do with a player? Nothing at all. All of that belongs to NoughtsAndCrosses. With all of that being said, here is what I think it should be struct Player { char symbol; const std::string type; int wins; int draws; virtual int nextMove() const = 0; Player (const char symbol, std::string&& type) : symbol(symbol), type(std::move(type)), wins(0), draws(0) {} }; // Edit: As pointed out by G.Sliepen, Player::type remains unchanged, therefore `const` struct Human : public Player { Human(const char symbol) : Player(symbol, "Human") {} int nextMove() const override { // get move from STDIN and return } }; struct Robot : public Player { Robot(const char symbol) : Player(symbol, "Robot") {} int nextMove() const override { // generate random number and return } }; Using this, your game class can have two objects, one player and one human, and you can switch between them when you need to. class Game{ Player* one; Player* two; Player* turn; // either points to player one or player two, switch accordingly public: Game(Player& one, Player& two) : one(&one), two(&two), turn(&one) {} // board, checkWin(), checkDraw().... // if player one wins -> one.wins++ // if player two wins -> two.wins++ }; With that, you can easily create games, human vs human, robot vs human, robot vs robot: int main() { Human h('x'); Human h2('o'); Game game1(h, h2); // human vs human, can be anything else } No need for shared_ptr here.
With all that being said, I suggest you create a gameloop() function for the Game class that automatically plays the game. That is calling nextMove() until someone wins. Represent the board with std::array I'm not sure why you chose std::map to represent the board. Using std::array here is enough and will simplify everything further. Pseudo code class Game { Player* one; Player* two; Player* turn; // either points to player one or player two public: Game(Player& one, Player& two) : one(&one), two(&two), turn(&one) {} void gameloop(){ for(;;){ clear_screen(); playerMove(); if (win()) ;// bla bla else if (draw()) ; // bla bla bla } } private: bool moveIsValid(const int move){ // check if square is occupied, or out of range and return accordingly } void performMove(const int move){ // perform move on the board } void playerMove(){ int move = -1; for(;;){ move = turn->nextMove(); if (moveIsValid(move)) break; } performMove(move); switchPlayers(); } void switchPlayers(){ turn = turn == one ? two : one; } char win(){ // self explanatory } bool draw(){ // self explanatory } std::array < char, 9 > board; };
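The turn-switching design sketched in the pseudocode above can be shown runnable in a few lines; a Python sketch (Python rather than C++ for brevity, with hypothetical names, and the board/win logic omitted):

```python
class Player:
    """Minimal player: just a symbol and a win counter."""
    def __init__(self, symbol, kind):
        self.symbol = symbol
        self.kind = kind
        self.wins = 0

class Game:
    def __init__(self, one, two):
        self.one, self.two = one, two
        self.turn = one  # reference to whichever player moves next

    def switch_players(self):
        # flip the turn reference between the two players
        self.turn = self.two if self.turn is self.one else self.one

h = Player('X', 'Human')
r = Player('O', 'Robot')
game = Game(h, r)
moves = []
for _ in range(4):           # record whose turn it is for four moves
    moves.append(game.turn.symbol)
    game.switch_players()
print(moves)                 # ['X', 'O', 'X', 'O']
```

The point is that the game loop only ever talks to `game.turn`; whether that player is a human or a robot is decided once, at construction, exactly as in the C++ `Game(Player&, Player&)` above.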
{ "domain": "codereview.stackexchange", "id": 40168, "tags": "object-oriented, tic-tac-toe, c++17, classes, inheritance" }
Is cubic complexity still the state of the art for LP?
Question: According to D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming, 1994, a linear program with $n$ variables, $n$ constraints and precision $L$ is solvable in $O(n^3L)$ time. Has that been improved upon? Answer: It seems that K. M. Anstreicher has improved the result to $O((n^3/\ln n)L)$ in Anstreicher, Kurt M. "Linear programming in $O([n^3/\ln n]L)$ operations." SIAM Journal on Optimization 9, no. 4 (1999): 803–812. I have not read this paper, but I hope that this answer will help you to some extent.
{ "domain": "cstheory.stackexchange", "id": 2854, "tags": "cc.complexity-theory, reference-request, linear-programming" }
A recursive_count Function For Various Type Arbitrary Nested Iterable Implementation in C++
Question: This is a follow-up question for A Summation Function For Arbitrary Nested Vector Implementation In C++ and A Summation Function For Various Type Arbitrary Nested Iterable Implementation in C++. Besides the summation result, I am trying to get the element count in arbitrary nested iterable things. For example, there are three elements in std::vector<int> test_vector = { 1, 2, 3 };, so the element count of test_vector is 3. The experimental implementation of recursive_count is as below. size_t recursive_count() { return 0; } template<class T> requires (!is_elements_iterable<T> && !is_iterable<T>) size_t recursive_count(const T& input) { return 1; } template<class T> requires (!is_elements_iterable<T> && is_iterable<T>) size_t recursive_count(const T& input) { return input.size(); } template<class T> requires (is_elements_iterable<T> && is_iterable<T>) size_t recursive_count(const T& input) { size_t output{}; for (auto &element : input) { output += recursive_count(element); } return output; } Some test cases of this recursive_count template function. 
// std::vector<int> case std::vector<int> test_vector = { 1, 2, 3 }; auto recursive_count_result1 = recursive_count(test_vector); std::cout << recursive_count_result1 << std::endl; // std::vector<std::vector<int>> case std::vector<decltype(test_vector)> test_vector2 = { test_vector, test_vector, test_vector }; auto recursive_count_result2 = recursive_count(test_vector2); std::cout << recursive_count_result2 << std::endl; // std::deque<int> case std::deque<int> test_deque; test_deque.push_back(1); test_deque.push_back(1); test_deque.push_back(1); auto recursive_count_result3 = recursive_count(test_deque); std::cout << recursive_count_result3 << std::endl; // std::deque<std::deque<int>> case std::deque<decltype(test_deque)> test_deque2; test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); test_deque2.push_back(test_deque); auto recursive_count_result4 = recursive_count(test_deque2); std::cout << recursive_count_result4 << std::endl; A Godbolt link is here. All suggestions are welcome. The summary information: Which question is it a follow-up to? A Summation Function For Arbitrary Nested Vector Implementation In C++ and A Summation Function For Various Type Arbitrary Nested Iterable Implementation in C++ What changes have been made in the code since the last question? The previous questions focused on the summation operation, and I am trying to implement a recursive_count template function here. Why is a new review being asked for? In my opinion, I am not sure whether the name of this function is good or not. I know there is a function std::count in the STL whose purpose is to count the elements that are equal to a given specific value. If there is any better naming suggestion, please let me know. Answer: No need for an overload that takes no parameters Nothing calls recursive_count() without parameters, so that overload is unnecessary.
Unnecessary requirements You don't need to require both is_elements_iterable<T> and is_iterable<T>; the former already covers the requirements of the latter, and if you negate them then the latter covers the former. So the overloads should be: template<class T> requires (!is_iterable<T>) size_t recursive_count(const T& input) { return 1; } template<class T> requires (!is_elements_iterable<T> && is_iterable<T>) size_t recursive_count(const T& input) { return input.size(); } template<class T> requires is_elements_iterable<T> size_t recursive_count(const T& input) { size_t output{}; for (auto &element : input) { output += recursive_count(element); } return output; } (Note that a negated constraint needs parentheses in a requires-clause, since only a primary expression may follow requires.) Naming Indeed you already noticed that the STL has a std::count that does something different. However there is a function that does what you want for non-nested containers: std::size. So consider naming your function recursive_size. Consider making more use of STL algorithms You are implementing your own algorithms but you are not making use of STL's algorithms. This can reduce the amount of code you write. For example, the last overload can be rewritten with std::transform_reduce (from <numeric>) as: template<class T> requires is_elements_iterable<T> size_t recursive_count(const T& input) { return std::transform_reduce(std::begin(input), std::end(input), std::size_t{}, std::plus<std::size_t>(), [](auto &element){ return recursive_count(element); }); } Although I'm not sure this is more readable than your version.
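For comparison, the same dispatch-on-iterability idea is compact in a duck-typed language; a Python sketch of the recursive counting (treating strings as atoms, since they would otherwise recurse forever):

```python
def recursive_count(value):
    """Count leaf elements in an arbitrarily nested iterable."""
    if isinstance(value, str):      # a string is an iterable of iterables
        return 1
    try:
        iterator = iter(value)
    except TypeError:
        return 1                    # not iterable: a single leaf element
    return sum(recursive_count(element) for element in iterator)

print(recursive_count([1, 2, 3]))               # 3
print(recursive_count([[1, 2, 3]] * 3))         # 9
print(recursive_count([1, [2, [3, 4]], "ab"]))  # 5
```

Here the try/except on `iter()` plays the role of the `is_iterable` concept, and the recursive branch plays the role of the `is_elements_iterable` overload.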
{ "domain": "codereview.stackexchange", "id": 39927, "tags": "c++, recursion, c++20" }
Gazebo and Rviz have different velocity when robot_localization package is used
Question: ROS2 Humble, Gazebo Fortress, Gravity in gazebo set to zero. I am trying to fuse IMU and GPS data with robot_localization package. Already went through the following tutorial 1, tutorial 2. I am publishing a static transform from map to odom. The parameters are the following ekf_filter_node_map: ros__parameters: frequency: 30.0 two_d_mode: true # Recommended to use 2d mode for nav2 in mostly planar environments print_diagnostics: true debug: false publish_tf: true map_frame: map odom_frame: odom base_link_frame: base_link # the frame id used by the turtlebot's diff drive plugin world_frame: odom odom0: odometry/gps odom0_config: [true, true, false, false, false, false, false, false, false, false, false, false, false, false, false] odom0_queue_size: 10 odom0_differential: false odom0_relative: false imu0: imu/data imu0_config: [false, false, false, false, false, true, false, false, false, false, false, false, false, false, false] imu0_differential: false # If using a real robot you might want to set this to true, since usually absolute measurements from real imu's are not very accurate imu0_relative: false imu0_queue_size: 10 imu0_remove_gravitational_acceleration: true use_control: false process_noise_covariance: [ 1e-3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00000005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.005, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0 ] navsat_transform: ros__parameters: frequency: 30.0 delay: 3.0 magnetic_declination_radians: 0.0 yaw_offset: 0.0 zero_altitude: true broadcast_utm_transform: true publish_filtered_gps: true use_odometry_yaw: true wait_for_datum: false I tried several values for the process_noise_covariance but I see that the velocity in Gazebo and Rviz don't match when I send a Twist msg with only z angular velocity. This video explains it better I think. If I use ground truth odometry from Gazebo, then the robot can reach a target goal, but with robot_localization this doesn't happen. Is there something else that I could try out to get the localization working? Any help would be great, thanks. Answer: I found that the simulated IMU in Gazebo Fortress was already giving me filtered data (imu/data as per imu_filter_madgwick). I thought it would be raw imu/data_raw and used imu_filter_madgwick to filter it again, which made the orientation estimation wrong.
{ "domain": "robotics.stackexchange", "id": 38697, "tags": "robot-localization, ros-humble, ignition-fortress" }
Form the largest number by swapping one pair of digits
Question: You are given a number. You are given the opportunity to swap only once to produce the largest number. My approach was to use buckets whose values were their indexes and their locations were their values. I walked the array checking if the largest value was greater than the current value. If the current value was smaller I would decrement the largest value. Checking if it existed and that, that bucket was in a location greater than the current location in the iteration. At which point I would know the bucket's location and that it contained the largest value outside a successive set and the nearest lowest value it would replace, to produce the largest number. Analysis: Creating Buckets: O(n) time and O(n) space Walking Values Space and Time: O(1) + finding the lowest high and swapping: O(1) Total: O(n) const maximumSwap = function(array) { array = array.toString().split(''); const buckets = [] for (let location = 0; location < array.length; location++) { buckets[array[location]] = location } let largest = buckets.length - 1 for (let current_location = 0; current_location < array.length; current_location++) { let current_val = array[current_location] for (; largest > current_val; largest--) { if (buckets[largest] > current_location) { array[current_location] = [array[buckets[largest]], array[buckets[largest]] = array[current_location]][0] return +array.join('') } } } return +array.join('') }; console.log(maximumSwap(99739)) console.log(maximumSwap(9273)) console.log(maximumSwap(9732)) Answer: Style critique Either terminate your statements consistently with semicolons, or not at all. If you're going to use const to define the function, then I suggest using the arrow notation as well, instead of the function keyword. The variable names could be improved: Why is the function's parameter called array? It's clearly supposed to be a number. After splitting the digits, what does array mean, and what does it contain? I'd rename it to digits. What are the buckets? 
I suggest renaming it to lastIndex. Instead of location and current_location, I'd just use i, which is a conventional variable name to use for an array index. (Also, the underscore is unconventional in JavaScript.) largest doesn't remain the largest digit of the number. Perhaps wantDigit would be a better name, since it's the digit that we would like to put at the current index, if possible. The nastiest line of code is the one where you perform the swap: array[current_location] = [array[buckets[largest]], array[buckets[largest]] = array[current_location]][0] A better way to do parallel assignment would be to use destructuring assignment: let array = ['a', 'b', 'c']; [array[0], array[2]] = [array[2], array[0]]; console.log(array); But here, you don't even need to do parallel assignments, because you have already stored the values to be swapped in other variables: array[buckets[largest]], as an rvalue, is just largest. (With the renamings I proposed above, it would have been digits[lastIndex[wantDigit]], which might make it more obvious that it's actually just wantDigit.) array[current_location], as an rvalue, is just current_val. So, just write two simple and clear assignment statements instead. If no swap occurred, then you can just return the original number. const maximumSwap = (number) => { let digits = number.toString().split(''); let lastIndex = []; for (let i = 0; i < digits.length; i++) { lastIndex[digits[i]] = i; } let wantDigit = lastIndex.length - 1; for (let i = 0; i < digits.length; i++) { for (let iDigit = digits[i]; iDigit < wantDigit; wantDigit--) { if (i < lastIndex[wantDigit]) { digits[i] = wantDigit; digits[lastIndex[wantDigit]] = iDigit; return +digits.join(''); } } } return number; }; console.log(maximumSwap(99739)); console.log(maximumSwap(9273)); console.log(maximumSwap(9732)); Complexity analysis Your algorithm is pretty efficient, and I couldn't find any significant optimizations to make. Be careful to specify what you mean by "n". 
You probably used it to mean the length of the input number. (If I had to choose my conventions, I'd say instead that the input number itself is n, and its length is d = log10 n.) Creating the buckets takes O(n) time, because location loops up to array.length. But I would say that it takes O(1) space, since the length of buckets will not exceed 10. Walking the values clearly takes more than O(1) time, because the outer loop is O(n): current_location iterates up to array.length. The inner loop, which executes at most 10 times, can be said to take O(1) time. Altogether, then, the algorithm takes O(n) time and O(1) space.
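For readers who want to experiment with the algorithm outside JavaScript, the same idea ports directly to Python (an illustrative sketch, not part of the original answer; digit characters compare correctly as strings since '0' < '1' < … < '9'):

```python
def maximum_swap(number):
    """Form the largest number reachable with at most one digit swap."""
    digits = list(str(number))
    # last index at which each digit value occurs
    last_index = {d: i for i, d in enumerate(digits)}
    for i, digit in enumerate(digits):
        # try to place the largest digit that occurs later than position i
        for want in "987654321":
            if want <= digit:          # nothing bigger is available here
                break
            j = last_index.get(want, -1)
            if j > i:                  # a bigger digit exists further right
                digits[i], digits[j] = digits[j], digits[i]
                return int("".join(digits))
    return number                      # already the maximum: no swap made

print(maximum_swap(99739))  # 99937
print(maximum_swap(9273))   # 9723
print(maximum_swap(9732))   # 9732
```

The structure mirrors the reviewed code: the outer loop is O(n) over the digits, and the inner loop over the ten digit values is O(1), so the whole thing is O(n) time and O(1) extra space.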
{ "domain": "codereview.stackexchange", "id": 32061, "tags": "javascript, algorithm, complexity" }
This question is about negative velocity
Question: So let's say I have an object moving from the points X to Y and my positive direction is to the right. Now suppose the object moves back from Y to a point W on the line between X and Y. Is the displacement for the second journey deemed to be negative? The object does not go past point X. And now let's say I want to calculate that displacement with the SUVAT equation of motion: $s = ut + (1/2)at^2$. If I put the velocity to be negative, I will obtain a different answer than the actual distance travelled from Y to W. Why is that? Answer: This is a common confusion. You must take into account that there is a separation between reality and how we mathematically model reality. A distance is always positive, that's well known. However, if we want to talk about position, we need a coordinate system, and coordinates can be either positive or negative. You must not mix position with "distances". Both $x=+5$ and $x=-5$ are at distance $5$ units from the origin. In other words, positions are just coordinates, and they can have a sign. Position: x) Positive at the right side of the origin, negative at the left side. y) Positive if above the origin, negative below the origin Now, what happens to velocity? In the same way, the speed is a concept that can only be positive, but the coordinates of velocity can be negative, and there's nothing wrong with it. To compute the sign of velocity, you will recall the definition of velocity: $$v=\frac{\Delta x}{\Delta t} = \frac{ x_{final}-x_{initial}}{t_{final}-t_{initial}}$$ As time always flows forwards (at least in this kind of exercise) the denominator will always be positive, so the sign of velocity will depend on the numerator only. So check that the numerator contains the difference in positions. It is $x_{final}-x_{initial}$. The numerator, and hence velocity, will be positive as long as the final position is bigger than the initial one. When I say bigger, I'm including negative values.
If you go from $-9$ to $-7$, the value is "less negative", so the position is also increasing. Consequently, you can deduce a practical rule for velocities. Velocity $v_x$) Positive if moving rightwards, negative if moving leftwards. $v_y$) Positive if moving upwards, negative if moving downwards It is also important that you do not mix position and velocity. Position is where you are, and velocity is how fast you move away from there. You can be far away, in Russia, for example (that's position), but moving quickly to the USA (that's velocity). How to use them? In uniform straight movement (USM), and uniformly accelerated straight movement (UASM), formulae require signs for all quantities: position, velocity and acceleration. All of them must have the signs included. So, the first thing you must do is make a picture of your situation. Never try to do an exercise without a picture! Once you have a scheme of what is happening, you must decide where your origin of coordinates is and what your coordinate axes are. Once you have decided where your reference system is, then you only have to decide signs according to the criteria listed above. Tip: if possible, try to place your origin as low and as far leftwards as possible, so that most coordinates are positive, in order to avoid minus signs. Edit: yes, you will obtain different results if you put the signs wrong. The reason is that the equation $x_{final} = x_{initial} + v\cdot t $ means that the final position is the same as the initial plus a variation. A positive variation means increasing position, while a negative one means decreasing. It's not the same; you must say whether the rate of change is towards one side or the other, and that's what the signs are for.
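A small numeric check of the sign convention (hypothetical numbers, just for illustration): an object moving leftwards from Y at 3 m/s for 2 s with no acceleration has displacement $s = ut + \frac{1}{2}at^2 = -6$ m, i.e. 6 m of distance travelled in the negative direction:

```python
def displacement(u, a, t):
    """SUVAT: s = u*t + 0.5*a*t**2, all quantities carry their signs."""
    return u * t + 0.5 * a * t**2

# Rightwards is positive, so moving from Y back towards X means u = -3 m/s.
s = displacement(u=-3.0, a=0.0, t=2.0)
distance = abs(s)       # distance is always positive
print(s, distance)      # -6.0 6.0
```

The negative sign of `s` encodes the direction of travel; taking the absolute value recovers the distance, which is exactly the separation between "position/displacement" and "distance" described above.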
{ "domain": "physics.stackexchange", "id": 79499, "tags": "kinematics, velocity" }
Different frequencies and the same $n-$revolutions
Question: The frequency $\nu$ is defined as: $$\nu=\frac1T \tag 1$$ where $1$ indicates one cycle and $T$ is the period, i.e. the time in which a material point completes a full circle. If I want to generalize $(1)$, I write: $$\nu^*=\frac{n^*}{\Delta t}$$ where $n\in \Bbb N\smallsetminus\{0\}$ is the number of revolutions and $\Delta t$ is the time interval required to complete $n$ revolutions. If $\nu=\nu^*=\text{constant}$, then $$\bbox[5px,border:2px solid #138D75]{\nu=\nu^*=\text{constant} \implies\frac1T = \frac{n}{\Delta t} \iff n=\frac{\Delta t}{T}}\tag 2$$ If I have $\nu\ne \nu^*$, does there exist a relation (or a practical example) where I can get a relation different from $(2)$ and obtain the same $n$ revolutions? Addendum (further explanation): If we have $\nu\ne \nu^*$, obviously we will have $$\frac{n}{\Delta t}\ne \frac{n^*}{\Delta t^*} \implies $$ $$\Delta t\neq \Delta t^* \,\text{ or } \, n\neq n^*\tag 3$$ If $n\neq n^*$, the case is closed. If $$\bbox[5px,border:2px solid #BA4A00 ]{\Delta t\neq \Delta t^* \stackrel{?}{\implies} n=Kn^*, \, K\in\Bbb R} \tag 4 $$ When does case $(4)$ occur, i.e. $\nu\neq \nu^* \implies n\propto n^*$, and in particular for $K=1$, i.e. $n= n^*$? Answer: I don't know what this question is supposed to be useful for. But the answer is pretty simple. We have the functional relationship $$n:\Bbb R\times\Bbb R\to \Bbb R$$ $$n:(\nu,\Delta t)\to \nu\cdot\Delta t$$ Since you require $$n=Kn^*$$ and you know that $\nu$ and $\nu^*$ have to satisfy $\nu\ne\nu^*$, we substitute our function $n(\nu,\Delta t)$ and rearrange the above relation so that the known quantities are on the right-hand side: $$\frac{\Delta t}{\Delta t^*}=K\frac{\nu^*}{\nu}$$ If I have $\nu\ne \nu^*$, does there exist a relation (or a practical example) where I can get a relation different from $(2)$ and obtain the same $n$ revolutions?
If the number of revolutions needs to be the same, we have $K=1$ and so $$\frac{\Delta t}{\Delta t^*}=\frac{\nu^*}{\nu}$$ Put your specific numbers $\nu\ne\nu^*$ in there, and you get the relation that $\Delta t$ and $\Delta t^*$ have to satisfy. $$\Delta t\neq \Delta t^* \stackrel{?}{\implies} n=Kn^*, \, K\in\Bbb R$$ When does case $(4)$ occur, i.e. $\nu\neq \nu^* \implies n\propto n^*$, and in particular for $K=1$, i.e. $n= n^*$? The case $K=1$ has already been answered above. If $K\in\Bbb R$ is arbitrary and fixed, the relation for $\Delta t$ and $\Delta t^*$ you have asked for is $$\frac{\Delta t}{\Delta t^*}=K\frac{\nu^*}{\nu}$$ If $K$, or $\nu$, or $\nu^*$ is completely arbitrary, there are no constraints on $\Delta t$ and $\Delta t^*$, other than being nonzero (and presumably non-negative). Just choose anything you like; for example, point your finger blindly at some entry in the phone book, take the square root of it, multiply it by $\pi$, and so forth. Caveat: due to the fact that $n$ and $n^*$ are nonzero integers, $K$ can only be, mathematically, a rational number, not an arbitrary real, i.e. $$K\in\Bbb Q$$ However, since any real number can be approximated arbitrarily accurately by a rational, and on the other hand measurements of $\nu$ and $\nu^*$ can't cover the reals anyway, $K$ can also be assumed to be an arbitrary real for all practical (physical) purposes.
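A quick numeric check of the derived constraint (with arbitrary example values): take $\nu = 2$ Hz and $\nu^* = 3$ Hz; equal revolution counts ($K=1$) then require $\Delta t/\Delta t^* = \nu^*/\nu = 1.5$:

```python
nu, nu_star = 2.0, 3.0   # two different frequencies (Hz)
K = 1.0                  # we want n = K * n_star, here equal counts

dt_star = 4.0                         # pick any interval for the second motion
dt = K * (nu_star / nu) * dt_star     # the derived constraint fixes dt

n = nu * dt              # revolutions completed by the first motion
n_star = nu_star * dt_star            # revolutions completed by the second
print(dt / dt_star, n, n_star)        # 1.5 12.0 12.0
```

Despite the different frequencies, both motions complete 12 revolutions, because the time intervals were chosen in exactly the ratio $\nu^*/\nu$ the answer derives.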
{ "domain": "physics.stackexchange", "id": 76071, "tags": "frequency, rotational-kinematics" }
How to calculate molar mass of (6)UO2+2D and (6)UO2OH+D?
Question: For a hydrogeochemistry project, I want to calculate the molar masses of some compounds. I came across a problem when encountering species like (6)UO2+2D (aq) and (6)UO2OH+D(aq). I don't know what the (6) and the +2D mean. I started off with ICP-MS results for several trace elements and fed them to VMinteq for the speciation modelling. I tried to google the species, but that query produced no useful results. I am no chemist and I don't know "how" or "where" to search efficiently for an answer. EDIT: another (6) appeared in FA1-UO2(6)(aq) Answer: These notations signify someone performed calculations in Visual MINTEQ accounting for dissolved organic matter (DOM) using a NICA-Donnan modeling method. From the manual nicadonnan.pdf bundled with the installer: In Visual MINTEQ, a “D” suffix is used to identify a diffuse-layer species such as a Donnan species. […] The numbers 6, 7, 8 and 9 have no conceptual significance; they are only used by Visual MINTEQ to distinguish different humic components, and they reflect the order in which the components appear on the NICA-Donnan menu (with 6 as the starting number). Considering the default numbering layout and that FA1 and FA2 refer to fulvic acids containing carboxylic (FA1) and phenolic groups (FA2), OP's species can be interpreted as follows: (6)UO2+2D(aq): Weakly (electrostatically) bound uranyl $\ce{UO2^2+}$ to fulvic acid in the aqueous phase. (6)UO2OH+D(aq): Weakly (electrostatically) bound uranyl monohydroxide $\ce{UO2(OH)+}$ to fulvic acid in the aqueous phase. FA1-UO2(6)(aq): Organically complexed uranyl $\ce{UO2^2+}$ to carboxylic fulvate in the aqueous phase. Several examples from the manual: FA1-Zn(aq), FA2-Zn(aq): Organically complexed Zn to dissolved fulvic acid. Sites 1 and 2 refer to carboxylic and phenolic functional groups, respectively. FA1-Zn(s), FA2-Zn(s): Organically complexed Zn to fulvic acid in the solid phase. HA1-Zn(s), HA2-Zn(s): Organically complexed Zn to humic acid in the solid phase. 
(8)Zn+2D: Weakly (electrostatically) bound Zn to dissolved fulvic acid. Zn+2D(s)(6): Weakly (electrostatically) bound Zn to fulvic acid in the solid phase. Zn+2D(s)(7): Weakly (electrostatically) bound Zn to humic acid in the solid phase.
{ "domain": "chemistry.stackexchange", "id": 13054, "tags": "computational-chemistry, terminology, software, notation, geochemistry" }
Center of gravity for best in-flight behavior of oblong projectile
Question: I am interested in improving the in-flight behavior of an oblong projectile that is launched from something like a trebuchet (a rotating arm).* This is a hobby project and I'm not an engineer or physicist. My problem is that the oblong projectile tends to tumble. I want it to fly straight like an arrow.** What I want to know is simply where the center of gravity of an oblong projectile should be given my parameters. Those parameters are: the launch has a rotational component that will tend to make the oblong projectile tumble speeds are relatively slow: max of 150 feet per second, and usually half that. distances are relatively short: max of 100 feet, and typically half or a quarter of that. the oblong projectile is very dense: it's solid steel, weighs about 250 grams, and has a length of 250 mm and a diameter on the order of 10 mm (approx). the oblong projectile has very minimal spin around its longitudinal (leading to trailing) axis, and it is not possible to give it more spin (such as a rifled bullet has). the oblong projectile has no fins or stabilizer -- it's cylinder shaped with a point at the front. Think of a small torpedo, missile or spear without any fins. The cylinder does not have to be constant diameter nor have straight sides. air resistance is negligible (not worth considering); assume fins would not help even if they could be added (which they can't). I mention specific speeds, lengths, etc. only to give an example. The problem is a general one. The important conclusion from the above parameters is that solving this problem cannot rely on the usual methods such as the center of pressure being behind the center of gravity. A different approach is required. My question is simply: should the center of gravity be toward the front, balanced or toward the rear? (And, if possible, why?) UPDATE: The projectile is positioned radially on the launcher with the "front" furthest from the pivot of rotation and the back closest to the pivot. 
Therefore, the front of the projectile has higher velocity (compared to the rear) during the launch. It's similar to a spoke in a wheel, where in this case, the front tip of the projectile is mounted near the rim with the back end toward the hub. When the projectile is released from the rotating arm, it should fly straight without tumbling. I will work separately on engineering issues related to the release of the projectile from the rotating arm. For this general problem, I only wish to know where the center of gravity should be given that air/fluid pressure won't help stabilize the projectile. *NOTE 1: I have tried to simplify the description to avoid a long distracting explanation of how the projectile is thrown. I'm not using a trebuchet. I'm using something with more pivots. But I want to keep the question general and simple. Engineering details of the launcher are outside the scope of this community discussion. **NOTE 2: a lot of the tumbling will relate to the characteristics of the launch (the throw). But, again, those issues are outside the scope of this question. Answer: If you want your projectile to rotate less, the easiest way to do so (in the absence of air resistance) is to make it difficult to rotate. This is equivalent to maximizing the moment of inertia of your projectile when rotating on an arbitrary axis through the center of mass and perpendicular to the travel direction. Since your projectile has cylindrical symmetry, it doesn't matter which particular axis we pick, as long as it's perpendicular to the axis of the cylinder. There are generally three different ways we could proceed: Taper heavily toward the back, Taper heavily toward the front, or Taper in both directions, creating a dumbbell shape. Let's examine the limit of these three cases. In case 1 and 2, we get a thin disk* (i.e. basically all of the mass is at the front or back), and in case 3 we get two thin disks (one at each end). 
Assume each case has the same total mass $M$ and total area $A$. Cases 1 and 2 (taper to front or back) Cases 1 and 2 are single disks with radius $R=\sqrt{\frac{A}{\pi}}$, so their moment of inertia is $$I_1=I_2=\frac{MR^2}{4}=\frac{MA}{4\pi}$$ Case 3 (make a dumbbell) The radius of the disks in case 3 is such that $2\pi R^2=A$, so $R=\sqrt{\frac{A}{2\pi}}$ in this case. Each disk carries half the total mass, so its moment of inertia is $\frac{(M/2)R^2}{4}=\frac{MA}{16\pi}$ when rotated about its own center of mass. But now we apply the parallel axis theorem, which moves each disk's axis of rotation a distance $L/2$ away (where $L$ is the total length of the projectile). So the moment of inertia of each disk about this new axis is $\frac{MA}{16\pi}+\frac{M}{2}\left(\frac{L}{2}\right)^2$ by the parallel axis theorem, and there are 2 identical disks, so the total moment of inertia is $$I_3=2\left(\frac{MA}{16\pi}+\frac{ML^2}{8}\right)=\frac{MA}{8\pi}+\frac{ML^2}{4}$$ Since $\frac{ML^2}{4}>\frac{MA}{8\pi}$ whenever $L^2>\frac{A}{2\pi}$ (true for any projectile longer than it is wide), the moment of inertia in case 3 is greater than that in case 1 or 2. So, in the absence of air resistance, making the projectile dumbbell-shaped is the best option. *It may be counterintuitive that tapering toward the front and the back would be the same, but note that a front-tapered projectile is just a back-tapered one rotated by 180 degrees, so they would be the same object.
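As a sanity check, here is a rough numeric comparison of the limiting cases, using the projectile's stated mass and length, splitting the dumbbell's mass equally between its two end disks; the cross-sectional area used is an assumed nominal value (a 10 mm diameter circle), not from the answer.

```python
# Compare the single-end-disk cases (1 and 2) with the dumbbell (case 3)
# for M = 0.25 kg, L = 0.25 m. Each dumbbell disk has mass M/2 and
# radius^2 = A/(2*pi), and sits a distance L/2 from the rotation axis.
import math

M, L = 0.25, 0.25       # kg, m (from the question)
A = math.pi * 0.005**2  # assumed nominal cross-section, m^2

I_disk = M * A / (4 * math.pi)  # cases 1 and 2: one thin disk of mass M
I_dumbbell = 2 * ((M / 2) * (A / (2 * math.pi)) / 4   # disk about its center
                  + (M / 2) * (L / 2) ** 2)           # parallel axis term

assert I_dumbbell > 1000 * I_disk  # dumbbell wins by orders of magnitude
```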
{ "domain": "physics.stackexchange", "id": 42164, "tags": "newtonian-mechanics, projectile, drag, aerodynamics" }
Smallest string length to contain all types of beads
Question: I read this question somewhere, and could not come up with an efficient answer. A string of some length has beads fixed on it at some given arbitrary distances from each other. There are $k$ different types of beads and $n$ beads in total on the string, and each bead type is present at least once. We need to find one consecutive section of the string, such that: that section contains all of the $k$ different types of beads at least once. the length of this section is as small as possible, provided the first condition is met. We are given the positions of each bead on the string, or alternatively, the distances between each pair of consecutive beads. Of course, a simple brute force method would be to start from every bead (assume that the section starts from this bead), and go on until at least one instance of each bead type is found, while keeping track of the length. Repeat for every starting position, and find the minimum among them. This will give an $O(n^2)$ solution, where $n$ is the number of beads on the string. I think a dynamic programming approach would also probably be $O(n^2)$, but I may be wrong. Is there a faster algorithm? Space complexity has to be sub-quadratic. Thanks! Edit: $k$ can be $O(n)$. Answer: With a little care your own suggestion can be implemented in $O(kn)$, if my idea is correct. Keep $k$ pointers, one for each colour, and a general pointer, the possible start of the segment. At each moment each of these colour pointers holds the next position of its colour that follows the segment pointer. One colour pointer points to the segment pointer. That colour pointer is updated when the segment pointer moves to the next position. Each colour pointer in total moves only $n$ positions. For each position of the segment pointer we compute the maximal distance to the colour pointers, and we take the overall minimum of that. Or, intuitively perhaps simpler, let the pointers look into the past, not the future. 
Let the colour pointers denote the distance to the respective colours last seen. In each step add the distance to the last bead to each pointer, except the one of the current colour, which is set to zero. (edit: answer to question) If $k$ is large, in the order of $n$ as suggested, then one may keep the $k$ pointers in a max heap. An update of a pointer costs $\log k$ in each of the $n$ steps. We may find the max (the farthest colour, hence the interval length) in constant time, in each of the $n$ steps. So $n \log k$ total, plus initialization. Now we also have to find the element/colour in the heap that we have to update. This is done by keeping an index of elements. Each time we swap two elements in the heap (a usual heap operation) we also swap the positions stored in the index. This is usually done when computing Dijkstra's algorithm with a heap: when a new edge is found some distances to vertices have to be decreased, and one needs to find them.
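The "look into the past" idea above can be sketched as follows. This is a hypothetical helper, not code from the answer; it recomputes the minimum last-seen position over all colours at every bead, which gives the $O(kn)$ bound (the heap variant replaces that scan to reach $O(n \log k)$).

```python
def shortest_span(positions, colours, k):
    """positions[i] = coordinate of bead i (sorted ascending),
    colours[i] = its colour. Returns the length of the shortest
    section of string containing all k colours at least once."""
    last_seen = {}  # colour -> position of its most recent bead
    best = float('inf')
    for pos, col in zip(positions, colours):
        last_seen[col] = pos
        if len(last_seen) == k:
            # the colour seen farthest back fixes the section's left end
            best = min(best, pos - min(last_seen.values()))
    return best

# beads at positions 0,1,3,6,10 with colours 1,2,1,3,2: the best section
# runs from position 1 to position 6 (length 5) and covers all 3 colours
assert shortest_span([0, 1, 3, 6, 10], [1, 2, 1, 3, 2], 3) == 5
```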
{ "domain": "cs.stackexchange", "id": 776, "tags": "algorithms, dynamic-programming, search-algorithms" }
Charge stored in Series Capacitor = $\frac{Q}{3}$
Question: I was reading a problem in class today concerning capacitors of equal capacitance C, and I wanted to ask a question about it. If we consider 3 of these capacitors connected in series: --||--||--||-- then I know that the magnitude of the charge on each of these capacitors is the same. However, if I was to plot a graph of current against time, and knowing that $I \times \Delta t = \Delta Q$ the book states that the total charge stored by the 3 capacitors in series is $\frac{Q}{3}$. I'm confused by this. Why is this the case if each capacitor stores charge of magnitude Q when all are connected in series? Answer: It looks like this figure is using "Q" to express the amount of charge that one capacitor would have when given a certain potential difference "V". Now, when we hook 3 of these capacitors up in series, since they are identical the potential across them will be V/3, so the charge on each capacitor is Q/3. Similarly, if 3 of these capacitors are hooked up in parallel, each capacitor has a potential of V, so each capacitor will have a charge Q. Now let's think about what the graph is actually saying. We are looking at currents over time. In series, we only need to "pull" Q/3 amount of charge from the battery to get the final configuration, whereas we need to "pull" 3Q in the parallel configuration. When we talk about "total charge" here we do not just add up the amount of charge on each plate. Adding up the associated "q" for each capacitor does not make much physical sense here, since technically when we say a charge "q" is on a capacitor what we mean is that one plate has net +q on it and the other has net -q on it. Instead, what we need to do is think about the total charge that we needed to "move" or "store". Hence this is why we have Q/3 for the series configuration and 3Q for the parallel configuration.
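The answer's bookkeeping can be verified with assumed example values (C = 1 F, V = 3 V are illustrative, not from the question):

```python
# One capacitor at potential V holds Q = C*V. Three identical capacitors in
# series see V/3 each, so the battery only moves Q/3 of charge; three in
# parallel each see V, so the battery moves 3Q in total.
C, V = 1.0, 3.0
Q = C * V                # charge on one capacitor at potential V

C_series = 1 / (3 / C)   # 1/C_s = 1/C + 1/C + 1/C
Q_series = C_series * V  # charge pulled from the battery in series

C_parallel = 3 * C
Q_parallel = C_parallel * V  # charge pulled from the battery in parallel

assert abs(Q_series - Q / 3) < 1e-12
assert Q_parallel == 3 * Q
```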
{ "domain": "physics.stackexchange", "id": 49238, "tags": "homework-and-exercises, electric-circuits, electric-current, charge, capacitance" }
Electric shadow-hand error with haptic glove
Question: Hi. I created a pkg for a haptic glove a few months ago, on the ROS Fuerte version, and it was working very nicely with the Shadow robot (shadow_hand). For some reasons I had to install the ROS Electric version. I also installed the shadow-robot stack for Electric, but it doesn't work. I think and suppose it is because of the dependencies that aren't on ROS Electric, like sr_teleop. Now the problem is how to install sr_teleop on ROS Electric, because an overlay was very problematic too. Originally posted by monidiaz on ROS Answers with karma: 92 on 2013-02-04 Post score: 0 Answer: Can you consider upgrading to Fuerte? Our latest stacks are released in fuerte (sr-teleop etc...). You can install them by simply running: sudo apt-get install ros-fuerte-shadow-robot ros-fuerte-sr-visualization ros-fuerte-sr-teleop If you really want to install them on electric, see my answer to this thread. Originally posted by Ugo with karma: 1620 on 2013-02-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by monidiaz on 2013-02-05: I was working on Fuerte, and didn't have problems with Shadow, but I had them with Gazebo, because I need the collision too, and Gazebo's Fuerte version doesn't work. Comment by Ugo on 2013-02-05: right, then I'd suggest to go with the source install. Is this working for you? Comment by monidiaz on 2013-02-05: the source install for Gazebo? I tried many times but that failed, many errors. Comment by Ugo on 2013-02-05: No, the source install for our code under Electric. Never managed to get Gazebo working from sources either... :S Comment by monidiaz on 2013-02-05: mm no, I don't know what the source code for Shadow is, and why do you say don't get Gazebo from sources? Well, I got some errors, so I installed Ubuntu one more time, with ROS Electric now, and this is my 4th attempt to get Gazebo with collision + shadow-robot (sr_hand) + haptic glove working. 
:'( Comment by Ugo on 2013-02-05: I think there's a misunderstanding :S I'm pointing you to another thread where I give a rosinstall file which can be used to install our code from sources under ROS electric. Is the problem that you don't know how to use the rosinstall file? Comment by monidiaz on 2013-02-05: ok that sounds better, and you're right, really I don't know how to use the rosinstall file. Thanks Comment by Ugo on 2013-02-05: edited the other thread. Can we continue the conversation there? It'll be better. Comment by monidiaz on 2013-02-05: sorry, on which thread? or should I make another question? Thank you Comment by Ugo on 2013-02-05: as pointed above: http://answers.ros.org/question/53819/how-to-install-shadow_hand-in-ros-electric
{ "domain": "robotics.stackexchange", "id": 12733, "tags": "ros, shadow-robot" }
Why does the light curve go down when the planet is behind the star?
Question: There is a video explaining the transiting exoplanet light curve — https://www.youtube.com/watch?v=RrusIZaWDW8 It is clear to me why the curve goes down when the planet is between the observer and the star, but I don't understand why the curve goes a little bit down when the planet is behind the star. I expected that this should not change the light curve at all. Answer: Think of when the planet is at the "side" - there's a little bit of light from the planet (i.e., reflected off the planet) shining towards us. Could it be due to that little bit of light - when the planet is behind the star, it no longer reflects towards us? Maybe that's the effect you have in mind?
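A toy flux model of this explanation (all numbers made up) reproduces both dips: a big one when the planet blocks the star, and a small one when the planet's reflected light disappears behind the star.

```python
# Out of eclipse we see star + reflected planet light; in transit the
# planet blocks a fraction of the star; in secondary eclipse (planet
# behind the star) only the reflected contribution is lost.
star, reflected = 1.0, 0.001  # star flux, planet's reflected flux
blocked = 0.01                # fraction of star blocked during transit

out_of_eclipse = star + reflected            # 1.001
transit = star * (1 - blocked) + reflected   # deep dip
secondary = star                             # shallow dip

assert transit < secondary < out_of_eclipse
```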
{ "domain": "astronomy.stackexchange", "id": 1669, "tags": "star, planet, observational-astronomy, exoplanet, light" }
Is presuming that any linear uniform motion is transformed into another one sufficient to assume that the Lorentz transformation must be linear?
Question: I've read this condition in a script: The Lorentz transformation must be linear because any uniform motion must be a uniform motion in any inertial frame. All other proofs that I've read so far need some kind of homogeneity of space-time: Suppose we have two different inertial frames of reference $S$ and $S'$ and want to transform their coordinates into each other, so that we have a coordinate transformation law like: $$ \mathrm{d}{x^\mu}'= \frac{\partial {x^\mu}'}{\partial x^\nu}\Big|_{x=\tilde x}\, \mathrm{d}x^\nu $$ which is supposed to be invariant under any modification of the space-time coordinate like $\tilde x \rightarrow \tilde x'$, i.e. the Lorentz transformation looks the same at each event. Now is there a way to prove that this statement is correct or false? If I'm not mistaken, the condition of transforming uniform linear motions into one another dictates something like, assuming motion along the x- and x'-axis respectively, $x=vt$ and $x'=v't'$ to be true. Let us drop covariant notation, as I deem it a bit overloaded for the translation along 1 axis. Also we would get $z=z'$, $y=y'$ and finally something like $x'=f(x,t)$ and $t'=g(x,t)$. What happens now? I am apparently missing a condition that this enforces, since I seem to have no way to determine a general form of $f$ and $g$ at this point, which could show whether the transformation is linear or not. Could I substitute the linear motion into the transformation function, i.e. say that we must have $f=f(x-vt)$ and the same for $g$? That seems not illegitimate, since that is what the Lorentz transformation looks like. I'm curious about if/how this can be resolved. Answer: Yes, the statement is correct. One proof I am familiar with was given by V. Fock (of Fock states, etc.) in his book "Theory of Space, Time, and Gravitation". See Chap. 1, Sec. 8 therein (pg. 20, bottom). 
In modern notation the idea is as follows: In inertial frame $S$ parametrize straight lines as ($x_0 = ct$) $$ x_i = \xi_i + \beta_i s, \;\;\; i=0,1,2,3 $$ where the $\xi_i$, $\beta_i$, and $s$ are independent real parameters. Let coordinate transformations from $S$ to another inertial frame $S'$ be given by ($x'_0=ct'$) $$ x'_i = f_i(x) \equiv f_i(x_0, x_1, x_2, x_3),\;\;i=0,1,2,3 $$ Then the $f_i$-s must be such that for $x_i$ as above the transformed $x'_i$-s read $$ x'_i = \xi'_i + \beta'_i s', \;\;\; i=0,1,2,3 $$ for some real $\xi'_i$, $\beta'_i$, and $s'$. A first step is to note that the $f_i$-s must be functions of $s$ and that from $ dx'_i/dx'_0 = \frac{dx'_i}{ds'}\Big/\frac{dx'_0}{ds'} = \beta'_i/\beta'_0 = const$ we must also have $$ \frac{dx'_i}{dx'_0} = \frac{\frac{df_i}{ds}}{\frac{df_0}{ds}} = \frac{\beta_j\partial^j f_i}{\beta_j\partial^j f_0} = \frac{\beta'_i}{\beta'_0} = const. $$ Taking the derivative over $s$ again, $$ \frac{d}{ds}\frac{\frac{df_i}{ds}}{\frac{df_0}{ds}}= 0 \;\; \Rightarrow \frac{\frac{d^2f_i}{ds^2}}{\frac{df_i}{ds}} = \frac{\frac{d^2f_0}{ds^2}}{\frac{df_0}{ds}} $$ gives after rearranging $$ \frac{\beta_j\beta_k\partial^j\partial^k f_i}{\beta_j\partial^j f_i} = \frac{\beta_j\beta_k\partial^j\partial^k f_0}{\beta_j\partial^j f_0} $$ The identity above has the advantage that it involves only the functions $f_i$ and the parameters defining the $x_i$-s. Since the $\beta_i$, $\xi_i$, and $s$ are independent, the $\beta_i$-s and the derivatives $\partial^jf_i$, $\partial^j\partial^kf_i$ can also be varied independently. Keep $\beta_j\neq 0$ for some $j$, put $\beta_k = 0$ for $k\neq j$, and simplify. We get $$ \frac{(\partial^j)^2f_i}{\partial^j f_i} = \frac{(\partial^j)^2f_0}{\partial^j f_0} \;\;\;\Rightarrow \;\;\; (\partial^j)^2f_i = a_j \partial^jf_i\;\; \forall i $$ for $a_j = a_j(x)$ some function of $x$. 
Put this back into the original identity, consider next the case where $\beta_j\neq 0$ and $\beta_k\neq 0$ for $j\neq k$, rearrange as polynomial in the $\beta$-s, and conclude that its coefficients must be null. Some algebra with the ensuing equations yields $$ 2 \partial^j\partial^k f_i = a_k \partial^j f_i + a_j \partial^k f_i,\;\;\;\forall i=0,1,2,3\;\; \text{and}\;\; j\neq k $$ Alternatively, use Fock's much more elegant argument that the derivative ratios in the original identity must be linear functions of the $\beta_i$-s to arrive at the same conclusions. The bottom line is that any transformations $f_i$ that take straight lines into straight lines must satisfy the above relations between second and first derivatives. The 2nd step is to take into account the light speed postulate. That is, if the straight lines are light rays in $S$, then they must be light rays in $S'$. This amounts to the requirement that $$ \left(\frac{dx_1}{dx_0}\right)^2 + \left(\frac{dx_2}{dx_0}\right)^2 + \left(\frac{dx_3}{dx_0}\right)^2 = \left(\frac{dx_1/ds}{dx_0/ds}\right)^2 + \left(\frac{dx_2/ds}{dx_0/ds}\right)^2 + \left(\frac{dx_3/ds}{dx_0/ds}\right)^2 = 1 \\ \text{or} \;\;\;\beta_1^2 + \beta_2^2 + \beta_3^2 = \beta_0^2 $$ implies $$ \left(\frac{dx'_1}{dx'_0}\right)^2 + \left(\frac{dx'_2}{dx'_0}\right)^2 + \left(\frac{dx'_3}{dx'_0}\right)^2 = 1 \\ \text{or} \;\;\;\left(\frac{df_1}{ds}\right)^2 + \left(\frac{df_2}{ds}\right)^2 + \left(\frac{df_3}{ds}\right)^2 = \left(\frac{df_0}{ds}\right)^2 $$ The last condition means $$ \eta^{mn}\left(\beta_j\partial^j f_m\right)\left(\beta_k\partial^k f_n\right) = 0 $$ must be equivalent to $$ \eta^{mn}\beta_m\beta_n = 0 $$ where as usual $\eta^{jk} = 0$ for $j\neq k$, $\eta^{00} = -1$, $\eta^{jj} = 1$ otherwise. 
By the same argument that the $\beta_j$-s and $\partial^jf_i$-s are independent variables it gives $$ \eta^{mn}\left(\partial^j f_m \right)\left(\partial^k f_n\right) = \eta^{jk}\lambda $$ for $\lambda = \lambda(x)$ some function of $x$. As before, taking the derivative on $x_i$-s on both sides brings in the second derivatives of the $f$-s, $$ \eta^{mn}\left[\left(\partial^i\partial^j f_m \right)\left(\partial^k f_n\right) + \left(\partial^j f_m \right)\left(\partial^i\partial^k f_n\right)\right]= \eta^{jk}\partial^i\lambda $$ Use the previously derived conditions $2 \partial^j\partial^k f_i = a_k \partial^j f_i + a_j \partial^k f_i$, as well as $\eta^{mn}\left(\partial^j f_m \right)\left(\partial^k f_n\right) = \eta^{jk}\lambda$, and obtain the following condition relating the functions $a_i$ and $\lambda$: $$ 2\eta^{jk}a_i + \eta^{ik}a_j + \eta^{ij}a_k = 2\eta^{jk}\frac{\partial^i \lambda}{\lambda} $$ Now let us check what this means. On the one hand, $$ i\neq j = k \;\; \Rightarrow \;\;a_i = \frac{\partial^i \lambda}{\lambda} $$ but then $$ i = j \neq k \;\; \text{or} \;\; i = k \neq j\;\; \Rightarrow\;\; a_k = a_j = 0 \;\; \forall j,k $$ The simple conclusion is that transformations $f_i$ that take straight lines into straight lines and are compatible with the light speed postulate must be such that $$ \partial^j\partial^k f_i = 0 \;\;\; \forall \;i,j,k = 0,1,2,3 $$ and therefore must be linear.
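As a quick sanity check of the (easy) converse direction, one can verify numerically that a linear map, here an example boost with $v=0.6$ and $c=1$, sends an affine-in-$s$ world line to another affine-in-$s$ world line: the second difference of the image in $s$ vanishes.

```python
# A map of a line is affine in s iff its second difference
# f(s+1) - 2 f(s) + f(s-1) vanishes; we check this for a sample line
# and a boost along x1 (c = 1, all numbers are arbitrary examples).
import math

v = 0.6
g = 1 / math.sqrt(1 - v * v)  # gamma

def boost(x):
    x0, x1, x2, x3 = x
    return (g * (x0 - v * x1), g * (x1 - v * x0), x2, x3)

xi   = (0.3, -1.2, 0.5, 2.0)  # line offsets  (the xi_i above)
beta = (1.0, 0.4, -0.7, 0.2)  # line direction (the beta_i above)

def line(s):
    return tuple(xi[i] + beta[i] * s for i in range(4))

a, b, c = boost(line(0.0)), boost(line(1.0)), boost(line(2.0))
assert all(abs(c[i] - 2 * b[i] + a[i]) < 1e-12 for i in range(4))
```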
{ "domain": "physics.stackexchange", "id": 26462, "tags": "special-relativity, spacetime" }
How are cells rechargeable?
Question: I have learnt that you can test if a reaction is thermodynamically feasible by testing if the cell voltage $E_\mathrm{cell}$ is greater than $0$. Surely, if a reaction is feasible, then the $E_\mathrm{cell}$ of the reverse reaction is negative, and so not thermodynamically feasible; in which case, how can a cell be recharged? Taking the following example: $$ \begin{align} \ce{Li+ + CoO2 + e- &<=> LiCoO2} &\quad &\pu{+0.60 V}\tag{1a}\\ \ce{Li+ + e- &<=> Li} &\quad &\pu{-3.00 V}\tag{1b}\\ \end{align} $$ $$ \begin{align} \text{In use:} &\quad &\ce{CoO2 + Li &<=> LiCoO2} &&\tag{2a}\\ \text{Charging:} &\quad &\ce{LiCoO2 &<=> CoO2 + Li} &\quad \mathrm{EMF}&= \pu{+3.60 V}\tag{2b}\\ \end{align} $$ Is it the case that putting an EMF against the EMF makes the reaction feasible? In this case, could any reaction be made thermodynamically feasible by putting a potential difference across it? Answer: Firstly, I would replace the word "feasible" with "favorable". I'm also going to replace $E_{cell}$ with $E_{OCV}$, where the open circuit voltage (OCV) is the potential of the cell without any applied electric potential. So you're right in that if the $E_{OCV}$ is positive, the net reaction is thermodynamically favorable: it will occur spontaneously if allowed to and will produce energy that can be used by the load connected to the battery (i.e. the cell can do work). You're correct in saying that the reverse reaction is thermodynamically unfavorable, but that doesn't mean the reaction won't happen, just that you need to do work on the battery in order for it to happen. In most cases, this means applying a bias potential ($E_{cell}$) that is more positive than the $E_{OCV}$. When that happens, the current flowing across this bias potential takes energy into the cell and recharges it. However, not every reaction will proceed in this way. In many cases, if a high potential is applied, another reaction will occur. 
If you have water in your battery and apply a high potential, you may electrolyze water, splitting it into elemental hydrogen and oxygen. This will then reduce the usable capacity and/or cause rupture of the battery. Lastly, all of this ignores the role of kinetics. Just because a reaction is thermodynamically favorable doesn't mean that it happens at a rate that is useful. If you have a cell with $E_{OCV} = 3.60V$, applying $3.601V$ will begin to charge the cell but at such a slow rate that you'll never notice. You generally need to apply at least a few tens of millivolts of over-potential to a lithium ion battery in order to get any appreciable current. So in the case of non-rechargeable (chemically non-reversible) batteries, the over-potential needed to reverse the reaction will be big enough that you will instead hit the potential for some other, undesirable reaction.
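To put a number on "unfavorable but drivable": the Gibbs energy of the cell reaction is $\Delta G = -nFE$. For the lithium cell in the question ($n = 1$ electron transferred, $E_{OCV} = 3.60\,\mathrm{V}$), discharge is spontaneous and recharging needs at least that much work, supplied by biasing the cell above its open-circuit voltage.

```python
# Gibbs energy of the cell reaction: dG = -n * F * E.
F = 96485           # Faraday constant, C/mol
n, E_ocv = 1, 3.60  # electrons transferred, open-circuit voltage (V)

dG_discharge = -n * F * E_ocv  # J/mol; negative -> discharge is spontaneous
dG_charge = -dG_discharge      # minimum work input needed to recharge, J/mol

assert dG_discharge < 0        # about -347 kJ/mol
```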
{ "domain": "chemistry.stackexchange", "id": 13112, "tags": "physical-chemistry, thermodynamics, electrochemistry, redox" }
move_base crashes with custom global planner whenever instantiating base_local_planner::CostmapModel
Question: I have written my own global planner and have successfully registered the plugin so that move_base can detect it. I am getting a problem where move_base will crash whenever I instantiate an instance of base_local_planner::CostmapModel. I've removed all the code from my global planner so that it will run as a plugin that does nothing. This is what my initialize function looks like: void FrontierPlanner::initialize(string name, Costmap2DROS* costmap_ros) { map = costmap_ros; Costmap2D* costmap = map->getCostmap(); base_local_planner::CostmapModel model(*costmap); //The offending line } This will cause move_base to crash. If I comment out the third line then it will run no problem. It compiles in both cases as well. The error move_base gives me is: [FATAL] [1438809026.524028208]: Failed to create the bob/FrontierPlanner planner, are you sure it is properly registered and that the containing library is built? Exception: Failed to load library /home/viki/catkin_ws/devel/lib//libfrontier_planner_lib.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/viki/catkin_ws/devel/lib//libfrontier_planner_lib.so: undefined symbol: _ZTVN18base_local_planner12CostmapModelE) [move_base-5] process has died [pid 2554, exit code 1, cmd /opt/ros/indigo/lib/move_base/move_base __name:=move_base __log:=/home/viki/.ros/log/6207c02a-3bb6-11e5-8754-080027c434d6/move_base-5.log]. log file: /home/viki/.ros/log/6207c02a-3bb6-11e5-8754-080027c434d6/move_base-5*.log Any ideas? Originally posted by Sebastian on ROS Answers with karma: 363 on 2015-08-05 Post score: 0 Original comments Comment by mgruhler on 2015-08-06: Can you please give some more details (maybe link to a gist)? Showing your *plugin.xml for this planner, more details on the implementation as well as the CMakeLists.txt and package.xml. 
Otherwise, we can only guess... But something seems to be wrong with the way you export your library... Comment by Sebastian on 2015-08-06: I forgot a target_link_libraries in my CMakeLists.txt. I wasn't linking the library with my plugin to catkin_libraries. Comment by mgruhler on 2015-08-07: can you then post your solution as an answer and mark it as correct? Helps keep ROS Answers clean ;-) Comment by Sebastian on 2015-08-07: I can't accept my own answer because I don't have a high enough reputation lol Answer: I forgot a target_link_libraries line in my CMakeLists.txt file. I wasn't linking the library with my plugin to catkin_libraries. Originally posted by Sebastian with karma: 363 on 2015-08-07 This answer was ACCEPTED on the original site Post score: 1
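For reference, a sketch of what the fix looks like in CMake. The target name comes from the error message (`libfrontier_planner_lib.so`); the source file path is hypothetical. The "undefined symbol" error means the plugin `.so` was built without being linked against the library that defines `base_local_planner::CostmapModel`, so the unresolved symbol only surfaces when pluginlib tries to `dlopen` the plugin.

```cmake
# Hypothetical excerpt from the planner package's CMakeLists.txt.
add_library(frontier_planner_lib src/frontier_planner.cpp)

# This line was missing: link the plugin against the catkin libraries,
# which include base_local_planner (and thus CostmapModel).
target_link_libraries(frontier_planner_lib ${catkin_LIBRARIES})
```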
{ "domain": "robotics.stackexchange", "id": 22384, "tags": "navigation, base-global-planner, plugin, global-planner, move-base" }
Deriving Tsiolkovsky Rocket Equation using Conservation of Momentum
Question: I recently tried to derive the rocket equation using conservation of momentum, and did not get very far; I was wondering what I am missing. Here's my attempt: Let's say that over some small time $dt$, the rocket ejects $dm$ mass at speed $c$ going the opposite way, and speeds up by $dv$ as a result. So, $$p_i = mv$$ $$p_f = (m-dm)(v+dv)-cdm$$ The $cdm$ term is subtracted since it is going the opposite way. Expanding $p_f$, simplifying the equation $p_i=p_f$, and ignoring the $dmdv$ term, we get $$(c+v)dm = mdv \Rightarrow \frac{dv}{c+v}=\frac{dm}{m}$$ This is clearly wrong because after integrating it says that $v$ has a linear relation to the mass. What forbidden moves have I made? Answer: You have forgotten that the velocity of the exhaust relative to the "observer" depends on the velocity of the rocket relative to the observer, as we have a constant velocity of the exhaust relative to the rocket. $c$ is not a constant; therefore, you need to take into account that $c=c(v)=-v_e-v$, where $v_e$ is the velocity of the exhaust relative to the rocket.$^*$ $^*$Since in your $p_f$ equation you made the $cdm$ term negative, technically $c$ is the velocity of the observer relative to the exhaust. If you want the velocities to all be relative to the observer in the $p_f$ equation that term should be positive. Also keep in mind that just because the exhaust is traveling backwards relative to the rocket does not necessarily mean that the exhaust is traveling backwards relative to the observer. Your mistakes here most likely come from not being careful enough in really thinking about these relative velocities.
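With the answer's correction, the quantity $c+v$ in the relation $(c+v)dm = m\,dv$ equals the constant exhaust speed relative to the rocket, so the relation integrates to the Tsiolkovsky equation $\Delta v = v_e \ln(m_0/m_f)$. A small step-by-step integration with example values (not from the question) confirms this:

```python
# Integrate dv = v_e * dm / m from m0 down to mf in small mass steps and
# compare against the closed-form Tsiolkovsky result.
import math

v_e = 3000.0            # exhaust speed relative to rocket, m/s (example)
m0, mf = 1000.0, 100.0  # initial and final rocket mass, kg (example)
steps = 100_000

v, m = 0.0, m0
dm = (m0 - mf) / steps
for _ in range(steps):
    v += v_e * dm / m   # dv = v_e * dm / m, with v_e constant
    m -= dm

# closed form: delta v = v_e * ln(m0 / mf) ≈ 6907.8 m/s for these numbers
assert abs(v - v_e * math.log(m0 / mf)) < 1.0
```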
{ "domain": "physics.stackexchange", "id": 71725, "tags": "newtonian-mechanics, momentum, conservation-laws, rocket-science, propulsion" }
How to compute denominator in Naive Bayes?
Question: Suppose we have classes C_k and an input feature vector x in a dataset. How do I calculate the probability p(x)? Answer: In the Examples section of the Wikipedia article there is a nice example. The calculation of $p(\mathbf{x})$ can be done via $$p(\mathbf{x}) = \sum_k p(C_k) \ p(\mathbf{x} \mid C_k)$$ Note that using the conditional independence assumption of the Naive Bayes one can write $$ p(\mathbf{x} \mid C_k) = \Pi_{i} \, p(x_i \mid C_k) $$
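A tiny worked example of this marginalisation, with made-up numbers: two classes and two conditionally independent binary features.

```python
# p(x) = sum_k p(C_k) * p(x | C_k), with the Naive Bayes factorisation
# p(x | C_k) = prod_i p(x_i | C_k). All probabilities are invented.
priors = {'A': 0.6, 'B': 0.4}              # p(C_k)
cond = {'A': [0.9, 0.2], 'B': [0.3, 0.7]}  # p(x_i = 1 | C_k) per feature

x = [1, 0]  # observed feature vector

def likelihood(c):
    p = 1.0
    for xi, pi in zip(x, cond[c]):
        p *= pi if xi == 1 else (1 - pi)
    return p

p_x = sum(priors[c] * likelihood(c) for c in priors)
# 0.6 * (0.9 * 0.8) + 0.4 * (0.3 * 0.3) = 0.468
assert abs(p_x - 0.468) < 1e-9
```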
{ "domain": "datascience.stackexchange", "id": 7139, "tags": "machine-learning, probability, naive-bayes-classifier" }
Methods for dissolving Scotch/Kapton/etc. Tape for Electron Microscopy?
Question: The famous method behind the Nobel prize for studying graphene is the so-called "scotch tape" method. Here, one takes a weakly-bonded van der Waals-type material and peels a layer off. From that layer, another piece of tape is taped onto the original piece of tape with the layer to split it into two pieces. This is repeated until nanometer-thick layers are produced. At this point, you have a single monolayer on a piece of tape, and need to transfer it onto a grid to look at it with a TEM (transmission electron microscope). However, the monolayer is much too thin to peel off the tape, so the tape must be completely dissolved. The question I have is: what are the best options for dissolving tape? Obviously there are many, many variables at play here. But here are some relevant constraints: it completely dissolves the tape, leaving little sticky residue; the solvent is relatively non-reactive toward inorganics and metals; it is safe and inexpensive. For example, I found a question on ResearchGate that suggests using an alkaline solution to completely dissolve Kapton. I wonder if there are other methods for scotch tape. Since this method for TEM preparation is so common, there must be some great literature resources on this topic, so resource recommendations would be great! I don't fully understand the mechanics of "stickiness", so a description from the atomic perspective would be useful, especially in understanding what good solvents should do. Answer: Generally you use acetone. The following is taken from Enoki, Toshiaki, and Tsuneya Andō. 2018. Physics and Chemistry of Graphene: Graphene to Nanographene, p. 94: The Scotch tape method is very common in the fabrication of graphene devices, which are mainly used in research into the fundamental properties of graphene. It can produce high-quality graphene with lateral sizes ≤ 0.1 mm; however, controlling the size and position of the graphene is almost impossible.
The following is the typical procedure of the Scotch tape method:

(i) Place a few flakes of graphite (≈ 1 mm) on the adhesive side of a plastic sticky tape with tweezers (Fig. 3.1(a)). Scotch tape (3M) and Nitto tape are commonly used. Natural graphite, Kish graphite, or HOPG (highly oriented pyrolytic graphite) is usually used for the starting material.
(ii) Fold the adhesive side to sandwich the graphite flakes and press the tape firmly (Fig. 3.1(b)).
(iii) Peel the tape apart slowly, so that the graphite flakes are cleaved and attached to both sides of the tape (Fig. 3.1(c)).
(iv) Repeat the second and third steps while slightly shifting the fold line, so that the graphite flakes do not overlap (Fig. 3.1(d)).
(v) Stop repeating the cleavage when graphite flakes spread over the sticky tape (Fig. 3.1(e)).
(vi) Prepare a silicon substrate with a silicon dioxide layer on the top surface (Fig. 3.1(f)). Place address markers on the surface in advance using photolithography so that one can easily locate the position of a graphene flake in an image. These markers are indispensable for further microfabrication on graphene. An example of the address markers is shown in Fig. 3.2.
(vii) Attach the adhesive side of the tape with the graphite flakes to a silicon substrate, and gently rub the surface to remove any air between the substrate and the tape (Fig. 3.1(g)).
(viii) Slowly peel the tape off the substrate (for example, more than 2 min for a 1 cm substrate) (Fig. 3.1(h)). Not only graphite flakes but also some adhesive remains on the substrate. The latter can be removed by submerging the substrate in acetone for a few seconds.

It is also reported that in the Scotch tape method, not only graphene flakes but also a large amount of adhesive is attached to the substrate. You get rid of it by immersing the substrate in acetone. Other methods are reported in the reference too. About adhesion: there are many aspects to adhesion.
Regarding scotch tape specifically, the most interesting answer I have found is at "https://www.scientificamerican.com/article/what-exactly-is-the-physi/" and reads as follows: "The simplest answer that I can give to the question is that pressure-sensitive adhesives (which are polymers) are 'tacky' or 'sticky' because they are essentially very high viscosity liquids that also have some elastic characteristics--in technical terms, they are 'viscoelastic.' This property means that they exhibit some of the characteristics of liquids, and so they will 'wet' a surface to which they are pressed. But then, because of their elasticity, they will resist separation when stressed. Thus, 'stickiness' is strictly a physical (viscoelastic) phenomenon, not a chemical one."
{ "domain": "chemistry.stackexchange", "id": 9580, "tags": "organic-chemistry, experimental-chemistry" }
Need help with the calculations/conversion of a celestial object
Question: I'm developing a telescope controller open-source application. I started this project with very little knowledge of astronomy. Basically, the app is going to send data to a telescope over a wireless connection. The data for a celestial object is as follows: ["SAO#": 308, "HD": 8890, "Con": "Alpha Ursae Minoris", "StarName": "Polaris", "RAH": 2, "RAM": 31.812, "DED": 89, "DEM": 15.85, "Mag": 2.02, "PRA": 0.038, "PDec": -0.015] RAH = RA hours, RAM = RA minutes, DED = Dec degrees, DEM = Dec minutes (PRA and PDec are the drift of the star in arc-seconds per year, from the catalog epoch J2000.0. I don't need to use these in the app, I'm told.) The controller server (data receiver) accepts this format: Set target RA- HH:MM:SS, Set target Dec- DD:MM:SS, Set target Azm- DDD:MM:SS, Set target Alt- DD:MM:SS I want to convert the given celestial object data to RA, Dec, Azm, Alt. I'm told that: RA = ((RAH + (RAM / 60.0)) * 15.0); //in degrees, RA is an earthly longitude projected onto the sky DEC = (DED + (DEM / 60.0)); //in degrees, Dec is an earthly latitude projected onto the sky For a star or other celestial object from the catalogs there is a fixed epoch (J2000). I need to apply a correction for precession/nutation (the wobble of the Earth's axis of rotation) to get a half-decent estimate of the star's RA/Dec "now". I'm using a code library that seems to have support for this and does the calculation on its own. I would like to know the procedure for converting the data from equatorial coordinates to horizontal coordinates.
For instance, this is how the code works: let jd = Date().julianDay let RAH = 2.0 let RAM = 31.812 let DED = 89.0 let DEM = 15.85 let eqCoor = EquatorialCoordinates.init(rightAscension: Hour((RAH+(RAM/60.0))*15.0), declination: (Degree(DED+(DEM/60.0))), epoch: Epoch.J2000, equinox: Equinox.standardJ2000) let annualAbb = eqCoor.correctedForAnnualAberration(julianDay: jd, highPrecision: true) let initEQ = AstronomicalObject.init(name: "Polaris", coordinates: annualAbb, julianDay: jd, highPrecision: true) print("EquatorialCoordinates -> RA(α):", initEQ.equatorialCoordinates.alpha, "DEC(δ):", initEQ.equatorialCoordinates.delta) let userLocation = GeographicCoordinates(positivelyWestwardLongitude: Degree(.plus, 75, 51, 13.65), latitude: Degree(.plus, 30, 54, 43.55), altitude: 256) print("User Location -> Latitude:", userLocation.latitude, "Longitude:", userLocation.longitude) let preccCoor = annualAbb.precessedCoordinates(to: Equinox.standardJ2000) let horizontalCoor = preccCoor.makeHorizontalCoordinates(for: userLocation, at: jd) print("Horizontal Coordinates -> Azimuth:", horizontalCoor.azimuth.inHours, "Altitude:", horizontalCoor.altitude.inRadians.inDegrees) This is the data that the code returns: EquatorialCoordinates -> RA(α): +37h56m06.009s DEC(δ): +89°16'05.165" User Location -> Latitude: +30°54'43.550" Longitude: +75°51'13.650" Horizontal Coordinates -> Azimuth: +12h1m07.396s Altitude: +30°13'16.650" Here's the data from Stellarium for the same location as in my code. The data calculated by my code is not even close to Stellarium's, except for the apparent altitude. Answer: I couldn't really figure out how to get RA/Dec from the data mentioned. I ended up using a catalog that already has the RA/Dec of objects.
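One likely culprit for the mismatch: $(2 + 31.812/60)\times 15 \approx 37.95$ is the RA in degrees, yet it is passed to an Hour initializer, which would explain the impossible RA of +37h56m in the output. Independent of any particular library, the equatorial-to-horizontal step itself is standard spherical trigonometry; below is a hedged, library-free sketch in Python (the local sidereal time is assumed to be given, and a real application would compute it from the Julian day and longitude):

```python
import math

def radec_to_altaz(ra_hours, dec_deg, lat_deg, lst_hours):
    """Convert equatorial (RA/Dec) to horizontal (Alt/Az) coordinates.

    Uses the standard formulas with hour angle HA = LST - RA;
    azimuth is measured from north, increasing toward east.
    """
    ha = math.radians((lst_hours - ra_hours) * 15.0)   # hour angle, radians
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)

    sin_alt = (math.sin(dec) * math.sin(lat) +
               math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.degrees(math.asin(sin_alt))

    az = math.degrees(math.atan2(
        -math.sin(ha) * math.cos(dec),
        (math.sin(dec) - sin_alt * math.sin(lat)) / math.cos(lat)))
    return alt, az % 360.0

# Polaris-like example at upper culmination (HA = 0) from latitude 31 N:
# the altitude should equal lat + (90 - dec), the azimuth due north.
print(radec_to_altaz(ra_hours=2.53, dec_deg=89.27, lat_deg=31.0, lst_hours=2.53))
```

This omits refraction, precession, and aberration, so it is only a cross-check against a full library, not a replacement for one.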
{ "domain": "astronomy.stackexchange", "id": 3182, "tags": "telescope, amateur-observing, fundamental-astronomy, coordinate" }
What is the molecular structure of polymetatelluric acid?
Question: Telluric acid, $\ce{Te(OH)6}$, dehydrates to form polymetatelluric acid: $\ce{(H2TeO4)10}$. However, I have been unable to locate the exact structure of polymetatelluric acid, and it does not seem entirely obvious. Are the tellurium centers connected directly to each other or are they separated by oxygen? Are these polymeric bonds single bonds or double bonds or some mixture of the two? My best guess is that the $\ce{Te}$ centers are separated by oxygens with three other oxygens surrounding each $\ce{Te}$ center, two of which are $\ce{-OH}$ groups and the third being a $\ce{=O}$ group: this would make the oxygens between the $\ce{Te}$ centers alternate single and double bonds, possibly conjugated on some level. Is there any chance polymetatelluric acid has direct $\ce{Te-Te}$ bonds? Answer: It would be ideal to check x-ray analysis data, but it seems that polymetatelluric acid is a rather amorphous substance and does not form suitable crystals. As of now, I haven't discovered anything relevant in either the ICSD or the CCDC. Another method of structural analysis would be vibrational spectroscopy. There is an article "Ultrarotspektren von Tellursäuren, Telluraten und Antimonaten" [1, p.
165] (in German) containing an extensive study of various $\ce{Te}$-containing substances performed with infrared and Raman spectroscopy. (I reduced the table to the frequencies for polymetatelluric acid exclusively; in the German intensity labels, s = weak, m = medium, st = strong, mst = medium-strong, sst = very strong, b = broad, Sch = shoulder, and "Zuordnung" means assignment.) \begin{array}{ll} \hline \ce{(H2TeO4)_x} &\text{Zuordnung} \\ \hline 450~\mathrm{(m, b)} & \delta~(\ce{TeO}) \\ \hline 600~\mathrm{(st, b, Sch)} & \nu~(\ce{TeO}) \\ 720~\mathrm{(sst, b)} & \\ 600~\mathrm{(sst, b, Sch)} & \\ \hline 1085~\mathrm{(mst, b)} & \delta~(\ce{TeOH}) \\ \hline 1618~\mathrm{(s, b)} & \delta~(\ce{H2O}) \\ \hline 2360~\mathrm{(m)} & 2\delta~(\ce{TeOH}) \\ \hline 3200~\mathrm{(sst, b)} & \nu~(\ce{TeOH}) \\ 3360~\mathrm{(sst, b, Sch)} & \nu~(\ce{H2O}) \\ \hline \end{array} Translated from the German: Polymetatelluric acid $\ce{(H2TeO4)_x}$ is a solid amorphous substance. In its IR spectrum all bands are very broad, as is usually the case for amorphous substances. The tellurium in this compound is probably six-coordinated, as in $\ce{Te(OH)6}$, as shown by the approximately equal positions of the $\ce{TeO}$ stretching and deformation vibrations in both compounds. It follows that $\ce{TeOTe}$ bonds must be present. With four-fold coordination the stretching vibrations would lie higher, as is the case in the comparable species $\ce{IO4-}$ and $\ce{H5IO6}$. [...] In contrast to $\ce{Te(OH)6}$, the spectrum of $\ce{(H2TeO4)_x}$ shows a band at $\pu{1618 cm-1}$ that can only arise from free water. Likewise, besides the $\ce{OH}$ stretching vibration of the $\ce{TeOH}$ groups ($\pu{3200 cm-1}$), the free-water band at $\pu{3360 cm-1}$ is observed. From the intensities of these bands it follows that $\ce{H2O}$ and $\ce{TeOH}$ groups are present in quantities of the same order of magnitude. It will probably not be possible to assign a stoichiometric constitutional formula to this substance.
Briefly, the IR bands of $\ce{[H2TeO4]_x}$ are very broad, suggesting an amorphous, non-stoichiometric compound (as Mithoron suggested) with a polymeric structure in which the $\ce{Te}$ atom is 6-coordinated. Bands were assigned only to $\ce{Te-O}$ and $\ce{Te-OH}$ bonds and to $\ce{H2O}$. The IR spectra do not show the presence of $\ce{Te-Te}$ bonds. Most likely there are $\ce{[TeO6]}$ units with edge- or corner-sharing, linked together in a chain-like fashion. Reference Siebert, H. Z. anorg. allg. Chem. 1959, 301 (3–4), 161–170. DOI: 10.1002/zaac.19593010305 (in German).
{ "domain": "chemistry.stackexchange", "id": 8423, "tags": "inorganic-chemistry, molecular-structure" }
Does this variant of Multiplicative Linear Logic with mix rule enjoy cut elimination?
Question: In Multiplicative Linear Logic (MLL), addition of the mix rule eliminates 'connectedness' from the Danos-Regnier criterion. I'm investigating how the criterion changes if we do not distinguish between tensor and par. Let's take the MLL inference rules with the mix rule and forget the difference between tensor and par: $$ \frac{}{\vdash A, A^\bot} \;\mathtt{id} $$ $$ \frac{\vdash \Gamma_1, A\quad \vdash \Gamma_2, A^\bot}{\vdash \Gamma_1, \Gamma_2} \;\mathtt{cut} $$ $$ \frac{\vdash \Gamma_1 \quad \vdash \Gamma_2}{\vdash \Gamma_1, \Gamma_2} \;\mathtt{mix}$$ $$ \frac{\vdash \Gamma, A, B}{\vdash \Gamma, A \cdot B} \;\mathtt{par}$$ The tensor rule is obsolete, as it can be derived from the mix and par rules: $$ \frac{\vdash \Gamma_1, A\quad \vdash \Gamma_2, B}{\vdash \Gamma_1, \Gamma_2, A \cdot B} \;\mathtt{tensor} $$ My intuition is that not all proof-structures are valid, i.e. some variant of the Danos-Regnier criterion is still necessary for this system. The intuition is that it admits cycles, but only the trivial ones, not 'real' deadlocks. But I don't know how to formalize it, so I'll move to a cut-elimination formalization. The above might be considered a type system for an interaction net with a single self-annihilating node $ \mu $ (notation defined in the footnote): $$ \{\ldots, e \frown \mu (a_1, a_2), e \frown \mu (b_1, b_2), \ldots, \} \rightsquigarrow \{\ldots, a_1 \frown b_1, a_2 \frown b_2, \ldots \}$$ Let's extend the standard cut-elimination procedure with one more rule: the trivial cycle, made from identity and cut only, disappears: $$ \{\ldots, a \frown b, b \frown a, \ldots, \} \rightsquigarrow \{\ldots, a \frown a, \ldots \} \rightsquigarrow \{\ldots, \ldots \} $$ Example of a deadlock: $a \frown \mu(a,b)$. Questions: Does the described type system, when applied in an obvious way to the interaction net, eliminate the possibility of deadlocks? Does the type system enjoy cut elimination? Are these two questions equivalent? Are there any publications about it?
Footnote Notation for the nets: A net is a set of links. Variables denote edges. $ e \frown \mu(a,b)$ represents a node $\mu$ with two auxiliary edges $a, b$ and a principal edge $e$. Each variable is present in the set exactly 1 or 2 times; 1 means it is a 'free edge'. $\frown$ is commutative. Two connected edges are just an edge: $\{\ldots, a \frown b, b\frown c, \ldots \} \rightsquigarrow \{\ldots, a \frown c, \ldots \}$ Answer: For question 1, if by "deadlock" you mean "non-trivial vicious circle", then the answer is obviously yes, simply because a non-trivial vicious circle cannot be typed: you will have a cut between a formula $A$ and a formula $F$ containing $A^\bot$ as a strict subformula, so it is impossible that $F=A^\bot$. But this is kind of trivial: by refusing a priori to consider cyclic wires as deadlocks, you are eliminating the only possible typed deadlock, so of course you are deadlock-free... For question 2, there are definitely problems with your sequent calculus. First, I assume that you are working with orthogonality defined by $(A\cdot B)^\bot=A^\bot\cdot B^\bot$. Then, you have that the formula $A^\bot\cdot A$ is essentially self-dual and, since it is provable, you have a proof of the empty sequent. But your inference rules (except cut) still verify the subformula property, which means that there is no cut-free proof of the empty sequent. In other words, you are forgetting the rule $$\frac{}{\vdash}$$ Without this rule, cut-elimination does not hold. There is a further problem though, namely that you cannot represent the cut-elimination steps inside the sequent calculus: in other words, as a type system, you do not have subject reduction. For that, you need to consider a unary cut rule: $$\frac{\vdash\Gamma,A^\bot,A}{\vdash\Gamma}$$ Note that, like the tensor rule, the binary cut rule is derivable from this one (and mix). For question 3, I assume that you are asking whether the system captures your notion of deadlock-freedom.
Since you are excluding cyclic wires a priori (i.e., you are artificially adding a cut-elimination rule that makes them disappear), the answer is, again, trivially yes: a net normalizes (with your additional reduction rule) to what Lafont calls a reduced net (no vicious circle, no active pair) iff it is typable in the above system. This is provable by standard arguments: subject reduction/subject expansion and typability of reduced nets. For question 4, I doubt anyone has published anything because this system is so degenerate that I honestly do not see what its interest may be. In fact, one may further simplify the types by eliminating duality altogether: there is no negation, formulas are just $$A,B::=\alpha\mathrel |A\cdot B$$ and axioms/cuts introduce/eliminate pairs of equal formulas. This system, albeit logically trivial (because of self-duality), has the same normalization property as above. It boils down to the so-called relational semantics of MLL, which is applicable to untyped structures too, in which tensor equals par. Regarding your search for a Danos-Regnier correctness criterion for single-agent nets, there is not much to be said: if you make cyclic wires disappear "by fiat", then the self-dual "typing" above suffices and correctness criteria play no role; if you stick to the standard definitions, then there can be no meaningful correctness criterion: every one-conclusion net cut with itself generates cyclic wires.
{ "domain": "cstheory.stackexchange", "id": 4404, "tags": "lo.logic, linear-logic, interaction-nets" }
Splitting up a list with datetimes
Question: I have this ordered list of datetimes that comprises 30-minute slots: available = [ datetime.datetime(2010, 1, 1, 9, 0), datetime.datetime(2010, 1, 1, 9, 30), datetime.datetime(2010, 1, 1, 10, 0), datetime.datetime(2010, 1, 1, 10, 30), datetime.datetime(2010, 1, 1, 13, 0), # gap datetime.datetime(2010, 1, 1, 13, 30), datetime.datetime(2010, 1, 1, 15, 30), # gap datetime.datetime(2010, 1, 1, 16, 0), datetime.datetime(2010, 1, 1, 16, 30) ] And I would like to split the ones with time gaps in between into a list of dictionaries with start and end values, like so: [ {'start': datetime.datetime(2010, 1, 1, 9, 0), 'end': datetime.datetime(2010, 1, 1, 10, 30)}, {'start': datetime.datetime(2010, 1, 1, 13, 0), 'end': datetime.datetime(2010, 1, 1, 13, 30)}, {'start': datetime.datetime(2010, 1, 1, 15, 30), 'end': datetime.datetime(2010, 1, 1, 16, 30)} ] So I threw this snippet of code together that does exactly that. result = [] row = {} for index, date in enumerate(available): exists = False if not row: row['start'] = date try: if date + timedelta(minutes = 30) != available[index + 1]: exists = True except IndexError: exists = True if exists: row['end'] = date result.append(row) row = {} But needless to say this is super ugly and there just has to be a better, shorter and more elegant way to do this.
Answer: It is possible to utilize itertools.groupby() using indexes to calculate if there is a gap: import datetime from datetime import timedelta from functools import partial from itertools import groupby from operator import itemgetter def is_consecutive(start_date, item): """A grouping key function that highlights the gaps in a list of datetimes with a 30 minute interval.""" index, value = item return (start_date + index * timedelta(minutes=30)) - value available = [ datetime.datetime(2010, 1, 1, 9, 0), datetime.datetime(2010, 1, 1, 9, 30), datetime.datetime(2010, 1, 1, 10, 0), datetime.datetime(2010, 1, 1, 10, 30), datetime.datetime(2010, 1, 1, 13, 0), # gap datetime.datetime(2010, 1, 1, 13, 30), datetime.datetime(2010, 1, 1, 15, 30), # gap datetime.datetime(2010, 1, 1, 16, 0), datetime.datetime(2010, 1, 1, 16, 30) ] start_date = available[0] result = [] for _, group in groupby(enumerate(available), key=partial(is_consecutive, start_date)): current_range = list(map(itemgetter(1), group)) result.append({ 'start': current_range[0], 'end': current_range[-1] }) print(result) Prints: [ {'start': datetime.datetime(2010, 1, 1, 9, 0), 'end': datetime.datetime(2010, 1, 1, 10, 30)}, {'start': datetime.datetime(2010, 1, 1, 13, 0), 'end': datetime.datetime(2010, 1, 1, 13, 30)}, {'start': datetime.datetime(2010, 1, 1, 15, 30), 'end': datetime.datetime(2010, 1, 1, 16, 30)} ]
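For comparison, the same grouping can also be done with a plain single-pass loop and no groupby machinery; a sketch (the helper name is mine):

```python
import datetime
from datetime import timedelta

def group_slots(slots, step=timedelta(minutes=30)):
    """Collapse a sorted list of datetimes into contiguous start/end ranges."""
    ranges = []
    for slot in slots:
        if ranges and slot - ranges[-1]['end'] == step:
            ranges[-1]['end'] = slot                      # extend the current run
        else:
            ranges.append({'start': slot, 'end': slot})   # begin a new run
    return ranges

available = [datetime.datetime(2010, 1, 1, 9, 0), datetime.datetime(2010, 1, 1, 9, 30),
             datetime.datetime(2010, 1, 1, 10, 0), datetime.datetime(2010, 1, 1, 10, 30),
             datetime.datetime(2010, 1, 1, 13, 0), datetime.datetime(2010, 1, 1, 13, 30),
             datetime.datetime(2010, 1, 1, 15, 30), datetime.datetime(2010, 1, 1, 16, 0),
             datetime.datetime(2010, 1, 1, 16, 30)]
print(group_slots(available))
```

This avoids the enumerate/partial indirection at the cost of mutating the last dict in place; both versions are O(n).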
{ "domain": "codereview.stackexchange", "id": 26484, "tags": "python, python-2.x, datetime" }
Classification of 2D time dependent diffusion equation
Question: I was trying to classify the following PDE: $$\frac{\partial{u}}{\partial{t}}=\frac{\partial^2{u}}{\partial{x^2}}+\frac{\partial^2{u}}{\partial{y^2}}$$ where $u = u(x,y,t)$. I was originally using the definition of $B^2-4AC$ and found this equation to be elliptic, which is true for the Laplace equation; however, I was wondering if the dependence on time changes this. I was also wondering: is this PDE inhomogeneous and linear? Thank you! Answer: Homogeneous, linear, and parabolic. Generalizing from the 2-dimensional equation, any equation of the form $$ \partial_t u = -L u $$ where $L$ is positive elliptic (such as $-\nabla^2$) is said to be parabolic. It shares with the 2D case the fact that it has well-defined solutions given initial-value data on a slice $t=\text{const}$.
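The initial-value character of a parabolic equation can be seen numerically: given $u$ on a $t=\text{const}$ slice, one can march forward in time. A minimal explicit (FTCS) sketch on a periodic grid, with an illustrative step size chosen to satisfy the 2D stability bound $\Delta t \le h^2/4$:

```python
import numpy as np

def step(u, dt, h):
    """One forward-Euler step of u_t = u_xx + u_yy on a periodic grid."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    return u + dt * lap

h, dt = 0.1, 0.002            # dt <= h**2 / 4 keeps the scheme stable
u = np.zeros((32, 32))
u[16, 16] = 1.0               # initial heat spike
for _ in range(100):
    u = step(u, dt, h)

print(u.sum(), u.max())       # total heat is conserved; the peak diffuses down
```

Were the spatial part hyperbolic instead of elliptic, forward time-marching from an initial slice would not behave this benignly.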
{ "domain": "physics.stackexchange", "id": 56186, "tags": "diffusion" }
How is the joint state of these qubits derived?
Question: Can someone show me the steps to derive the joint state at the bottom of this image, please? I tried to follow his explanation but I didn't get the same results… This is taken from the lecture notes of Ronald de Wolf, in case it may help. Answer: Since this is a homework-type question, I'll just outline the method: You begin in the state $(\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. It can be written as $(\alpha_0|0\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B}) + (\alpha_1|1\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B})$ Apply the CNOT to qubits $\text{A1}$ and $\text{A2}$. If $\text{A1}$ is in $|0\rangle$, $\text{A2}$ remains unchanged; otherwise it flips. You get to the state: $(\alpha_0|0\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|0\rangle_{A2} |0\rangle_B + |1\rangle_{A2} |1\rangle_{B}) + (\alpha_1|1\rangle_{A1}) \otimes \frac{1}{\sqrt{2}}(|1\rangle_{A2} |0\rangle_B + |0\rangle_{A2} |1\rangle_{B})$ Then apply the Hadamard gate on $\text{A1}$. Remember that the Hadamard gate maps $|0\rangle_{A1}$ to $\frac{|0\rangle + |1\rangle}{\sqrt 2}$ and $|1\rangle_{A1}$ to $\frac{|0\rangle - |1\rangle}{\sqrt 2}$. You finally get to the state shown in the diagram. Note: $\text{A1}$ refers to Alice's first qubit. $\text{A2}$ refers to Alice's second qubit. $\text{B}$ refers to Bob's qubit.
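The three steps can also be checked mechanically with a few Kronecker products; a small numerical sketch (the amplitudes 0.6 and 0.8 are arbitrary):

```python
import numpy as np

a0, a1 = 0.6, 0.8                      # arbitrary real amplitudes, |a0|^2 + |a1|^2 = 1
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

# |psi> (x) (|00> + |11>)/sqrt(2), qubit order A1, A2, B
psi = a0 * zero + a1 * one
epr = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2.0)
state = np.kron(psi, epr)

# CNOT with A1 as control and A2 as target (identity on B)
P0, P1 = np.outer(zero, zero), np.outer(one, one)
cnot = np.kron(P0, np.kron(I2, I2)) + np.kron(P1, np.kron(X, I2))
state = np.kron(H, np.kron(I2, I2)) @ (cnot @ state)

print(state.round(3))   # amplitudes of |000>, |001>, ..., |111>
```

The eight amplitudes reproduce the four branches $|00\rangle(\alpha_0|0\rangle+\alpha_1|1\rangle)$, $|01\rangle(\alpha_1|0\rangle+\alpha_0|1\rangle)$, $|10\rangle(\alpha_0|0\rangle-\alpha_1|1\rangle)$, $|11\rangle(-\alpha_1|0\rangle+\alpha_0|1\rangle)$, each with weight $1/2$.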
{ "domain": "quantumcomputing.stackexchange", "id": 270, "tags": "quantum-gate, quantum-state, linear-algebra, teleportation, communication" }
Fusion in the sun for 4 hydrogen to Helium-4. How is the energy produced
Question: So please correct me if I am wrong, but: First, 2 protons (each with an electron) fuse together. The mass stays the same, so no energy is produced? Then 1 of the protons turns into a neutron, with a neutrino and a positron. The positron and the electron collide and produce energy equivalent to the mass of 2 electrons ($E=mc^2$). The neutrino flies away and doesn't interact with anything (at least in the fusion process) again. So now we have 1 proton, 1 neutron and 1 electron, a.k.a. deuterium. Then another proton and electron fuse with the deuterium to form helium-3. How does energy work here? It can't be the mass of 1 proton and electron, because that is over 100 MeV, which is wrong, as the entire process (according to Google) is only meant to be around 24 MeV, and there isn't an apparent gain/loss of mass. Then the 2 helium-3 nuclei fuse and have the same problem as before (the mass stays the same but energy is still produced). The 2 helium-3 nuclei then lose 2 protons and 2 electrons to form helium-4. My question is how to work out how the energy is produced in hydrogen -> helium fusion (the type of fusion that happens in the Sun). Answer: (0) This process is called "the proton-proton chain", and should be referred to as such in question titles and what-not. Two protons do not fuse into ${}^2_2{\rm He}$; that has a half-life in the $10^{-22}$ second region... compare that with the mean collision time for protons in the core of the Sun to see that it cannot contribute. Rather, two protons undergo a weak interaction: $$ p + p \rightarrow {}^2_1{\rm D} + e^+ + \nu_e + 0.42\,{\rm MeV}$$ (that it is weak means it is unlikely, hence a 10-billion-year-lived Sun). The initial (final) mass-sum on the LHS (RHS) is: $$ M_0 = 2m_p = 1876.544163\,{\rm MeV} $$ $$ M_1 = m_d + m_e +m_{\nu_e}= 1876.1239416\,{\rm MeV} $$ (where I've ignored the neutrino mass). Note that $$M_1 - M_0 = -0.420221\,{\rm MeV} < 0$$ So the total mass is reduced.
This is generally referred to as "binding energy", which is negative. The deuteron binding energy (the only binding energy every nuclear physicist has memorized) is 2.2 MeV, which gives the deuteron a lower mass than that of a free proton plus a free neutron. (Here, some of the 2.2 MeV is required to turn a proton into a neutron and create the final-state leptons.) That mass difference is generally liberated as kinetic energy according to: $$ E = mc^2 $$ The positron goes on to annihilate an unrelated plasma electron, releasing: $$2m_e = 1.022\,{\rm MeV}$$ as 2, 3, 4, ... gamma rays. Note that this is not fusion, and does not require temperature or pressure to occur, though number density is obviously a factor. The deuteron is stable, and finds a proton: $$ d + p \rightarrow {}^3_2{\rm He} + \gamma + 5.493\,{\rm MeV}$$ which is a fairly hard gamma. https://en.wikipedia.org/wiki/Proton–proton_chain says the mean residence time of the helium-3 is 400 years. From here, there are 4 branches to helium-4. The main one is: $$ {}^3_2{\rm He} + {}^3_2{\rm He} \rightarrow {}^4_2{\rm He} + 2p + 12.859\,{\rm MeV}$$ (So I retract my hard gamma quip, and now apply it here.) You can see the above reference for details on the other branches. They are: Lithium burning (https://en.wikipedia.org/wiki/Lithium_burning) The pp-III branch, involving beryllium-7, beryllium-8 and boron-8, which is dominant above 25 MK. pp-IV (Hep), which is a theoretical weak-interaction branch: ${}^3_2{\rm He} + p \rightarrow \alpha + {\rm appropriate\ leptons}$. It's important to appreciate the difference between strong- and weak-interaction fusion processes, as the time-scales differ by orders of magnitude in the exponent of "orders of magnitude". Finally, to address your question, "How to work out the energy released".
Find the difference between the initial mass and the final mass: $$ E_{pp} = \Big[4(m_p + m_e)\Big] - \Big[m_{\alpha} + 2(m_e + m_{\nu_e})\Big]$$ Of course the neutrino mass isn't known (and the electron neutrino isn't even a mass eigenstate), but it is tiny ($ m_{\nu_e} \approx 0.07\,{\rm eV}$)... 200 times smaller than the binding energy of hydrogen. Since the neutrinos carry away their mass and kinetic energy, a full analysis of the energy available for heating would require a detailed analysis of the neutrino spectrum. See https://jila.colorado.edu/~pja/astr3730/lecture21.pdf . IIRC, the neutrino luminosity is around 1% of the solar luminosity. Note: IMHO, an interesting but often overlooked role the neutrinos play is radiating lepton number. In the Standard Model, lepton number is conserved. The core contains $0.34 \times (M_{\odot}/{\rm g}) \times N_A \approx 4\times 10^{56}$ protons and electrons, each. Over the lifetime of the Sun, that becomes $2\times 10^{56}$ protons, neutrons and electrons, each (assuming 100% burning; idk if that is correct), so baryon number is conserved, but $L=2\times 10^{56}$ units of lepton number have "gone missing", radiated via neutrinos.
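That prescription is a two-line computation; a sketch with rounded particle masses (values assumed here for illustration, neutrino masses neglected):

```python
# Particle rest masses in MeV/c^2 (rounded, assumed for illustration)
m_p, m_e, m_alpha = 938.272, 0.511, 3727.379

# E_pp = [4(m_p + m_e)] - [m_alpha + 2 m_e]
q = 4.0 * (m_p + m_e) - (m_alpha + 2.0 * m_e)
print(f"{q:.2f} MeV liberated per helium-4 nucleus")
```

This gives roughly 26.7 MeV per helium-4 nucleus, a few percent of which leaves with the neutrinos rather than heating the plasma.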
{ "domain": "physics.stackexchange", "id": 100193, "tags": "energy, sun, hydrogen, fusion" }
Quadratic relationship between nondeterministic and deterministic space?
Question: Savitch's theorem shows that $\mathrm{NSPACE}(f(n)) \subseteq \mathrm{DSPACE}(f(n)^2)$ for all large enough functions $f$, and proving that this is tight has been an open problem for decades. Suppose we approach the problem from the other end. For simplicity, assume the Boolean alphabet. The amount of space used by a TM to decide a computable language is often closely related to the logarithm of the number of states used by the automaton simulating the TM for each regular slice of a language. This motivates the following question. Let $D_n$ be the number of syntactically distinct DFAs with $n$ states, and let $N_n$ be the number of distinct NFAs with $n$ states. It is straightforward to show that $\lg N_n$ is close to $(\lg D_n)^2$. Further, let $D_n'$ be the number of distinct regular languages that can be recognised by a DFA with $n$ states, and let $N_n'$ be the number recognised by an NFA. Is it known whether $\lg N_n'$ is close to $(\lg D_n')^2$? It is not clear to me how $D_n$ and $D_n'$, or $N_n$ and $N_n'$, are related to each other, or how closely. If all this relates to a well known question in automata theory then a hint or pointer would be appreciated. The same question is also relevant for two-way automata, due to the same reasoning, and I am especially interested in this version. Answer: In my paper with Domaratzki and Kisman, "On the number of distinct languages accepted by finite automata with n states" published in J. Automata, Languages, and Combinatorics 7 (2002) we proved that if $G_k (n)$ is the number of distinct languages accepted by NFA's with $n$ states over a $k$-letter alphabet, and $g_k (n)$ is similarly the number of distinct languages accepted by DFA's, then for fixed $k \geq 2$ (i) $\log g_k (n)$ is, up to smaller order terms, asymptotically $kn\log n$ (ii) $\log G_k (n)$ is, up to smaller order terms, asymptotically between $(k-1)n^2$ and $kn^2$.
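To make the "straightforward to show" claim concrete: with a fixed start state and free choice of transition structure and accepting set, there are $n^{kn} 2^n$ DFAs and $2^{kn^2} 2^n$ NFAs with $n$ states over a $k$-letter alphabet, so $\lg D_n$ grows like $kn \lg n$ while $\lg N_n$ grows like $kn^2$; hence $(\lg D_n)^2$ and $\lg N_n$ agree up to polylogarithmic factors. A quick illustrative computation (my own sketch, not from the question or answer):

```python
import math

def log2_dfa_count(n, k):
    # n^(k*n) transition functions times 2^n accepting sets
    return k * n * math.log2(n) + n

def log2_nfa_count(n, k):
    # 2^(k*n^2) transition relations times 2^n accepting sets
    return k * n * n + n

n, k = 100, 2
ratio = log2_dfa_count(n, k) ** 2 / log2_nfa_count(n, k)
print(ratio)   # off by a polylog(n) factor, not a polynomial one
```

For the language counts $D_n'$ and $N_n'$ the same counting gives only upper bounds, since distinct automata can recognise the same language.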
{ "domain": "cstheory.stackexchange", "id": 3998, "tags": "cc.complexity-theory, automata-theory, space-bounded, space-complexity" }
Is evolution always unidirectional?
Question: Is it possible, at least in theory, for a species to evolve into another species and then evolve back into the first species? Answer: It's possible, but very unlikely. Usually it takes a substantial number of genetic differences to qualify two organisms as different species. One might think that reversing the direction of selection pressure could reverse a genetic change, but genetic change begins with random mutations which are not determined by selection pressure. It's very unlikely for one random process to undo another random process (imagine mixing black and white sand in a jar by shaking it, then trying to unmix by unshaking!).
{ "domain": "biology.stackexchange", "id": 10231, "tags": "evolution, natural-selection" }
Reading a vector of structs as parameters in ROS2
Question: I would like to be able to read ROS2 parameters of the form: /**: ros__parameters: people: - name: "Alice" age: 30 grade: "A+" - name: "Bob" age: 25 grade: "B-" From what I can tell, there doesn't seem to be a great way to do this in ROS2. The workaround I've been using looks like this: /**: ros__parameters: people_id: - "Alice" - "Bob" people: Alice: name: "Alice" age: 30 grade: "A+" Bob: name: "Bob" age: 25 grade: "B-" This method works alright. However, it's a pain to need to forward declare all the people you want to configure. (In this example we could technically drop the name field since that's what we're using for the ID.) In ROS1 I would just read the people parameter as yaml and parse it directly, without the need to forward declare. Another option I've tried (and like least) is just declaring vectors of everything /**: ros__parameters: names: ["Alice", "Bob"] ages: [30, 25] grades: ["A+", "B-"] For this simple example it's OK; however, I find it easy to get an "off by 1" error when you're configuring more complex objects. It also doesn't support defaults for name, age, and grade if you don't want to specify them in your config. Answer: I'm working on the same problem and have found a workaround that lets you get parameters from a yaml file as you would in ROS1, without forward declaring them as is the default in ROS2. You need to set the node options in your node's constructor to accept undeclared parameters and to automatically declare parameters from overrides. rclcpp::NodeOptions().allow_undeclared_parameters(true).automatically_declare_parameters_from_overrides(true) And your yaml file... /your_node_namespace: your_node_name: ros__parameters: people: Alice: name: "Alice" age: 30 grade: "A+" Bob: name: "Bob" age: 25 grade: "B-" You can then use the get_parameters() ROS2 API to get all the people parameters by specifying people as a prefix. This puts the people key-value pairs into a std::map as shown below.
std::map<std::string, rclcpp::Parameter> people_map; node->get_parameters("people", people_map); So your params would be accessible from the map as key-value pairs, where the key is a string like "Alice.age" and the value is an rclcpp::Parameter, which can be used with the built-in methods .as_int(), .as_double(), .as_string(), etc. This isn't perfect, as I'd prefer the people params in a struct rather than a map. You also have to be careful not to redeclare parameters that are specified in the YAML file, as that will throw a ROS error. If you want to be able to forward declare parameters AND load them from a YAML file, you'll have to implement logic to check whether the node already has the parameter: if (!node->has_parameter("your_param")) { node->declare_parameter("your_param", your_param_default); }
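If you do want the people params in structs rather than a flat map, one option is a small helper that splits the "Alice.age"-style keys and groups the fields per person. A minimal sketch, using plain std::string values as a stand-in for rclcpp::Parameter so it runs outside ROS (in a real node you would store the rclcpp::Parameter and call .as_int()/.as_string() instead); the Person struct and group_people name are made up for illustration:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical struct mirroring the fields in the YAML config.
struct Person {
    std::string name;
    std::string age;
    std::string grade;
};

// Groups a flat map with keys like "Alice.age" (as returned by
// node->get_parameters("people", people_map)) into per-person structs.
std::map<std::string, Person> group_people(
    const std::map<std::string, std::string>& flat) {
    std::map<std::string, Person> people;
    for (const auto& [key, value] : flat) {
        // Split "Alice.age" at the first '.' into the person id and field.
        const auto dot = key.find('.');
        if (dot == std::string::npos) continue;
        const std::string id = key.substr(0, dot);
        const std::string field = key.substr(dot + 1);
        Person& p = people[id];  // default-constructs on first access
        if (field == "name") p.name = value;
        else if (field == "age") p.age = value;
        else if (field == "grade") p.grade = value;
    }
    return people;
}
```

The same splitting logic works with rclcpp::Parameter values; only the assignment lines change to the appropriate .as_*() calls.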
{ "domain": "robotics.stackexchange", "id": 39069, "tags": "ros2, ros-humble, parameters" }
Orientation of the LIGO Arms
Question: The orientation of the interferometer arms at both sites is approximately Northeast-Southwest and Northwest-Southeast, though I assume that, on account of the Earth's curvature, no pair of arms is exactly parallel. Is there a preferred orientation for the instruments, and is there a benefit in having them in (approximately) the same orientation? My guess on the latter issue is that if an interferometer has blind spots, it is better for those at both sites to be aligned (or at least overlap as much as possible), as a blind spot for one is a blind spot for the instrument as a whole. Answer: For the German GEO 600, I happen to know that the orientation (and the fact that the two arms do not form an angle of $\pi/2$) is mainly due to building-site constraints. As LIGO seems to be in the middle of nowhere, this might not have played as big a role. However, it seems that your assumption that most interferometers have a similar alignment is wrong - see here: http://www.ligo.org/scientists/GW100916/GW100916-geometry.html - the orientation actually changes quite a lot.
{ "domain": "physics.stackexchange", "id": 28481, "tags": "gravitational-waves, interferometry, ligo" }
Why are tunnel boring machines not using cone-shaped drills?
Question: The surface area of the head of a tunnel boring machine is usually flat. A cone-shaped head would increase the surface area. The question is whether it could speed up the boring process. I know that with current machines the boring process is also limited by other factors, but that is not part of the question. Answer: Because with a flat head the waste material can be (relatively easily) collected and removed from the cutting face. With a cone-shaped bit, that removal is not so easily accomplished. Cone-shaped bits tend to be used to push or compress the material out of the way, which is fine for "softer" materials ...
{ "domain": "engineering.stackexchange", "id": 2390, "tags": "tunnels" }
Using gazebo_ros_control with SDF instead of URDF
Question: My robot has a closed-loop linkage, and therefore I can't use a URDF file, as URDF only supports tree structures. Since SDF supports graph structures, I need to use that in order to simulate my robot. With that being said, I have 2 actuators as well as a servo that I need to control. Normally, if I were to use a URDF, I would set up the <transmission> tags in the URDF and then just include the gazebo_ros_control plugin in the URDF. But since the SDF specification (sdformat.org) does not support <transmission> tags, I'm not sure how to get the gazebo_ros_control plugin to work with my SDF file. All the ros_control tutorials are based upon URDF, although there are lines in the tutorials stating "for your URDF/SDF file". Currently, I'm simply adding this tag to the bottom of my SDF file (I won't include the whole thing, for space reasons, for now at least): .... <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotNamespace>/surus_sim</robotNamespace> <robotSimType>gazebo_ros_control/DefaultRobotHWSim</robotSimType> </plugin> </model> </sdf> And when I launch the simulation, this is the error I receive, whose source I can't pinpoint (I searched through the ros_control plugin code): [INFO] [WallTime: 1451341627.255617] [0.000000] Loading model xml from ros parameter [INFO] [WallTime: 1451341627.264068] [0.000000] Waiting for service /gazebo/spawn_sdf_model [INFO] [WallTime: 1451341628.176358] [0.000000] Calling service /gazebo/spawn_sdf_model [INFO] [WallTime: 1451341629.094573] [0.001000] Spawn status: SpawnModel: Successfully spawned model [spawn_model-3] process has finished cleanly log file: /home/krystian/.ros/log/1fbfd810-adb2-11e5-9876-001c4239cab7/spawn_model-3*.log [ INFO] [1451341632.830481140, 0.001000000]: Loading gazebo_ros_control plugin [ INFO] [1451341632.830768827, 0.001000000]: Starting gazebo_ros_control plugin in namespace: /surus_sim [ INFO] [1451341632.832180175, 0.001000000]: gazebo_ros_control plugin is waiting for model URDF in
parameter [/robot_description] on the ROS param server. [ERROR] [1451341633.054706078, 0.001000000]: Could not find the 'robot' element in the xml file [ INFO] [1451341633.104100859, 0.001000000]: Loaded gazebo_ros_control. [ INFO] [1451341633.150833544, 0.022000000]: waitForService: Service [/gazebo/set_physics_properties] is now available. [ INFO] [1451341633.232730090, 0.053000000]: Physics dynamic reconfigure ready. The error [ERROR] [1451341633.054706078, 0.001000000]: Could not find the 'robot' element in the xml file is what I cannot figure out, as only a URDF file contains the <robot> tag; an SDF file does not. surus_sim is the name of the model in the SDF as well. Any help would be appreciated. Would anyone know if the gazebo_ros_control plugin is meant to work directly with an SDF file? I've looked around, and a few people asked the question a while back but received no responses. Originally posted by l0g1x on ROS Answers with karma: 1526 on 2015-12-28 Post score: 5 Answer: So I did manage to figure out a workaround to this, but it's super hacked. Basically, to at least be able to use ros_control to control 2 actuators (you can't view it in RViz since it's a closed linkage) you need 2 robot_description parameters. ros_control loads from robot_description, so you need to specify the path to your URDF file (if you only have an SDF, you can reference my answer here about converting from SDF to URDF). You need to specify the URDF file so that you can insert the <transmission> tags and associate which joints you want to use.
If you convert URDF -> SDF or vice versa, you will still have all the same joint and link names. For example, in my URDF file: <joint name="left_mid_actuator_joint" type="prismatic"> <origin xyz="-0.5588 -0.010351 0" rpy="0 0 -3.9443E-31" /> <parent link="left_top_actuator_link" /> <child link="left_bot_actuator_link" /> <axis xyz="-1 0 0" /> <limit effort="100.0" lower="0" upper="1.4" velocity="0.2"/> </joint> <transmission name="tran1"> <type>transmission_interface/SimpleTransmission</type> <joint name="left_mid_actuator_joint"> <hardwareInterface>EffortJointInterface</hardwareInterface> </joint> <actuator name="motor1"> <hardwareInterface>EffortJointInterface</hardwareInterface> <mechanicalReduction>1</mechanicalReduction> </actuator> </transmission> and then you can see in my SDF file that I will have the same joint name left_mid_actuator_joint <joint name='left_mid_actuator_joint' type='prismatic'> <child>left_bot_actuator_link</child> <parent>left_top_actuator_link</parent> <axis> <xyz>-0.685505 -4.48206e-15 -0.728068</xyz> <limit> <lower>0</lower> <upper>1.4</upper> <effort>100</effort> <velocity>0.2</velocity> </limit> <dynamics/> </axis> </joint> You then want to load both the SDF, to spawn the model in Gazebo (this actually eventually gets converted into URDF anyway), and the URDF, so that ros_control can find the <transmission> tags.
Make sure the URDF is loaded into robot_description and the SDF is loaded into some other parameter, which you then pass as an argument to spawn_model: <param name="robot_description" textfile="$(find rmc_simulation)/surus_sim/robots/surus_sim.URDF" /> <param name="robot_description_sdf" textfile="$(find rmc_simulation)/surus_sim/robots/surus_sim.sdf" /> <node name="spawn_model" pkg="gazebo_ros" type="spawn_model" args="-sdf -param robot_description_sdf -model surus_sim -x 0 -y 0.6 -z 0.3" output="screen"> </node> <rosparam file="$(find rmc_simulation)/gazebo_config/control/surus_control.yaml" command="load"/> <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="joint1_position_controller joint2_position_controller joint_state_controller"/> Where my YAML is: # Publish all joint states ----------------------------------- joint_state_controller: type: joint_state_controller/JointStateController publish_rate: 50 # Position Controllers --------------------------------------- joint1_position_controller: type: effort_controllers/JointPositionController joint: right_mid_actuator_joint pid: {p: 100.0, i: 0.01, d: 10.0} joint2_position_controller: type: effort_controllers/JointPositionController joint: left_mid_actuator_joint pid: {p: 100.0, i: 0.01, d: 10.0} Like I said, this is NOT the proper way of doing this. If someone could still post an answer as to how to directly include ros_control in an SDF, that would be splendid. Originally posted by l0g1x with karma: 1526 on 2016-01-27 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by Adnen on 2016-04-04: Hi Krystian, Did you try to create a custom plugin? Comment by l0g1x on 2016-04-13: @Adnen, I didn't create any custom plugin for this, as I currently don't see any easy way of implementing a fix for this by creating a plugin. Did you have anything in mind?
Comment by Adnen on 2016-04-18: @Krystian, I'm still a newbie in ROS and Gazebo, so when I tried your solution I couldn't spawn the model using two files. I followed the Gazebo tutorial and created a model plugin. Now I can control the velocities of the joints via ROS topics. Comment by srees304 on 2016-10-28: Have you played at all with getting this to be viewable inside gzweb? I was passed a ROS-based launch file to view a ros-controlled vehicle in Gazebo. It works fine in gzclient, but I can't seem to figure out how to get the models to show up in gzweb... Comment by nathan_u on 2016-12-01: @Adnen Could you elaborate a bit more on how you created a model plugin to control velocities? Do you implement your own PID controller within the plugin, or something like that? I am trying to control joints defined in an SDF using a PID controller receiving commands from ROS. Comment by rosroll on 2017-08-07: Hey @Krystian, could you explain your method to convert SDF to URDF? I have trouble launching the model in Gazebo. Comment by Angel-J on 2019-11-01: You can refer to https://github.com/wojiaojiao/pegasus_gazebo_plugins to solve the issue that URDF does not support closed-loop chains. Comment by AidenPierce on 2022-05-03: Interesting workaround. I have 2 joints. For the first I am trying to load a controller of type velocity_controllers/JointVelocityController, and the second one is of type velocity_controllers/JointPositionController. But only the first one is loaded correctly and can be controlled. The second does not load correctly and gives the error: Failed to initialize the controller. Did you encounter this error by any chance?
{ "domain": "robotics.stackexchange", "id": 23312, "tags": "ros, urdf, sdf, transmission, gazebo-ros-control" }
Non-localities in Wilsonian effective action
Question: Why are terms in the effective action (in momentum space) that depend non-analytically on the momenta non-local? How can one see this directly? Answer: Local terms always have fields/operators at the same spacetime point, i.e. $S = m^2\int d^4x \phi(x) \phi(x) = m^2\int d^4x \int d^4 y \delta^{(4)}(x-y) \phi(x) \phi(y)$ is local, whereas $S = m^2\int d^4x \int d^4 y f(|x-y|) \phi(x) \phi(y)$ is not local. Now we typically perform calculations in momentum space, so a common term that might arise would be something like $S = \int d^4p \hspace{1mm} p^2 \phi(p) \phi(-p) = \int d^4x \partial_\mu \phi(x) \partial^\mu \phi(x)$ which is local. Naively we might pick up other terms like $S = \int d^4p \hspace{1mm} \frac{1}{p^n} \phi(p) \phi(-p) \sim \int d^4 x d^4 y (x-y)^{n-4} \phi(x) \phi(y)$ which is not local, where $0 < \mathrm{Re}(n)<d$ and $d$ is the space-time dimension. Another possible non-local term that could conceivably arise, and that we wouldn't want, is $\int d^4 p \hspace{1mm} \log p^2 \hspace{1mm} \phi (p) \phi(-p)$, and in fact terms of this form do arise at intermediate stages of calculations but are subtracted off by similarly non-local counterterms (see Peskin pg 335-338). I couldn't find the result for this integral in 4 dimensions, but in 2 dimensions it's certainly non-local: $\int d^2 p \hspace{1mm} \log \frac{p^2}{\Lambda^2} \hspace{1mm} \phi (p) \phi(-p) \sim \Lambda^4 \int d^2 x d^2 y \frac{1}{(x-y)^2} \phi (x) \phi(y)$ In general, really all we ever want to see in our effective action in position space is derivatives with a delta function, and the only thing that will give us this from momentum space is: $ \int d^4 p \hspace{1mm} p^n \phi (p) \phi(-p) \sim \int d^4 x d^4 y \delta^{(n)}(x-y) \phi(x) \phi(y) \sim \int d^4 x d^4 y \delta (x-y) \partial_x^n \phi(x) \phi(y) = \int d^4 x \partial_x^n \phi(x) \phi(x) $ where I have integrated by parts and $n$ is taken to be a positive integer.
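As a concrete sanity check of the $1/p^n$ case (a standard result, spelled out here for the $n=2$ instance): in four Euclidean dimensions the Fourier transform of $1/p^2$ is the massless propagator, $$\int \frac{d^4 p}{(2\pi)^4}\, \frac{e^{i p\cdot(x-y)}}{p^2} = \frac{1}{4\pi^2 (x-y)^2},$$ so a momentum-space term $\int d^4 p \hspace{1mm} \frac{1}{p^2} \phi(p)\phi(-p)$ becomes $\sim \int d^4 x \, d^4 y \, \frac{\phi(x)\phi(y)}{(x-y)^2}$ in position space: a bilocal interaction with power-law support rather than a delta function, matching the $(x-y)^{n-4}$ scaling quoted above at $n=2$.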
Throughout I have been sloppy with numerical prefactors, signs, dimensions and notation, and lack any rigor (and grammar) whatsoever, but hopefully this gets the point across. To summarize: an effective action that is non-analytic in the momentum is an action that can't be written as a bunch of operators at the same space-time point, or rather, the Fourier transform from momentum space to position space isn't just a delta function (plus possible derivatives). A caveat: if you are working with a non-relativistic theory/action, none of this holds, since then it's admissible to have action-at-a-distance etc. There is probably a better source for this, but I have at least seen it on pg 10 of http://arxiv.org/pdf/0905.4752v2.pdf .
{ "domain": "physics.stackexchange", "id": 3839, "tags": "quantum-field-theory" }
Would inspiraling binary wormholes produce gravitational waves?
Question: Mathematically, I wonder whether two binary wormholes would radiate intense energy as gravitational waves as they get closer and closer together. I'd like to know what happens to the mass (or negative mass) of these wormholes. Answer: Most wormhole solutions involve significant curvature*, so if they change rapidly over time we should expect them to radiate gravitational waves. However, exactly what happens is going to be tremendously model-dependent. (Footnote: One can make wormhole solutions that lack curvature simply by topologically gluing together parts of the spacetime manifold. But it is less clear what it would mean to have the holes orbit each other in this case.)
{ "domain": "physics.stackexchange", "id": 57030, "tags": "general-relativity, gravitational-waves, mass-energy, wormholes" }