MultiCameraSensor question
Question: I am looking for a way to synchronize multiple cameras in gazebo. I came across MultiCameraSensor and believe this may be what I am looking for. Does anyone know if MultiCameraSensor is capable of providing depth data as well (similar to DepthCameraSensor) as I would like to synchronize multiple depth cameras. Thanks. W Originally posted by Dubya on Gazebo Answers with karma: 1 on 2013-02-19 Post score: 0 Answer: It currently does not provide depth data. But you can use stereo image processing provided by ROS. Here is an issue to track this enhancement: https://bitbucket.org/osrf/gazebo/issue/562/create-a-multicamera-stereo-sensor Originally posted by nkoenig with karma: 7676 on 2013-03-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Dubya on 2013-03-12: Thanks for the response. Currently I haven't seen a plugin specifically for the MultiCameraSensor. Does one exist? Comment by nkoenig on 2013-03-17: You can create your own sensor plugin: http://gazebosim.org/wiki/Tutorials/1.5/plugins
{ "domain": "robotics.stackexchange", "id": 3053, "tags": "gazebo" }
Get consecutive integer number ranges from list of int
Question: This is an algorithm to return a list of consecutive integer ranges from a list of integers. The practical use of such an algorithm is to filter lists more efficiently, i.e. rather than check e.g. 1000 items (1..1000; x==1 || x==2 || ... || x==1000), it might be possible to check only 2 items (x >= 1 && x <= 1000). Does this algorithm have any mistakes, can it be optimized, or are there any other improvements you can suggest? (As a side note: I know this code does not follow the standard C# naming conventions. I do not like to follow that particular convention; I prefer "snake_case" as it is easier on my eyes.) Sample output: (x >= 0 && x <= 1690) (x >= 13642 && x <= 15331) (x >= 27283 && x <= 27296) (x >= 27769 && x <= 27776) (x >= 28249 && x <= 28256) (x >= 28729 && x <= 28736) (x >= 29209 && x <= 29222) The algorithm (built Visual Studio 2017, C# 7.3, .NET 4.7.2): public static List<(int from, int to)> get_consecutive_ranges(List<int> fids) { if (fids == null || fids.Count == 0) return null; fids = fids.OrderBy(a => a).Distinct().ToList(); var fids_fast = new List<(int from, int to)>(); var is_conseq_with_last = true; var start_index = 0; var end_index = 0; for (var fids_index = 0; fids_index < fids.Count; fids_index++) { var first = fids_index == 0; var last = fids_index == fids.Count - 1; if (!first && fids[fids_index - 1] == fids[fids_index] - 1) { is_conseq_with_last = true; } else if (!first) { is_conseq_with_last = false; } if (!is_conseq_with_last && !first && !last) { end_index = !first && !last ? 
fids_index - 1 : fids_index; fids_fast.Add((fids[start_index], fids[end_index])); start_index = fids_index; } else if (last) { if (!is_conseq_with_last) { if (!first) { end_index = fids_index - 1; } fids_fast.Add((fids[start_index], fids[end_index])); start_index = fids_index; } end_index = fids_index; fids_fast.Add((fids[start_index], fids[end_index])); } } fids_fast.ForEach(a => Console.WriteLine($"(x >= {a.@from} && x <= {a.to})")); return fids_fast; } Example use: // slow: body = body.Where(a => fids.Contains(a.fid)).ToList(); // fast: body = body.Where(a => fids_fast.Any(x => a.fid >= x.from && a.fid <= x.to)).ToList(); Answer: Your code is really complicated. First, your code should be abstracted a little more. It is not specific to feature IDs, therefore the terminology should not use these words. The same algorithm can be used to select which pages to print from a document, therefore the variables should be just nums and ranges. To test your current code, I wrote: [Test] public void TestRanges() { Assert.AreEqual("", Str(Ranges(new List<int>()))); Assert.AreEqual("1", Str(Ranges(new List<int> { 1 }))); Assert.AreEqual("1-5", Str(Ranges(new List<int> { 1, 2, 3, 4, 5 }))); Assert.AreEqual("1-3, 5", Str(Ranges(new List<int> { 1, 2, 3, 5 }))); Assert.AreEqual("1, 3, 5-6", Str(Ranges(new List<int> { 1, 3, 5, 6 }))); } I wrote a helper function Str so that I don't have to construct a list of ranges for each test case: public static string Str(List<(int from, int to)> ranges) { var parts = new List<string>(); foreach (var range in ranges) { if (range.from == range.to) { parts.Add(range.from.ToString()); } else { parts.Add(range.@from + "-" + range.to); } } return string.Join(", ", parts); } After renaming your function to Ranges, these tests ran successfully. So I was ready to refactor your code. I did not really do this since your code looked too complicated to start with. 
Instead, I remembered that I had successfully used the following pattern quite often: var start = ...; while (start < nums.Count) { var end = ...; while (end < nums.Count) { } } With this knowledge I wrote the following code: public static List<(int from, int to)> Ranges(List<int> nums) { nums = nums.OrderBy(a => a).Distinct().ToList(); var ranges = new List<(int from, int to)>(); var start = 0; while (start < nums.Count) { var end = start + 1; // the range is from [start, end). while (end < nums.Count && nums[end - 1] + 1 == nums[end]) { end++; } ranges.Add((nums[start], nums[end - 1])); start = end; // continue after the current range } return ranges; } This code doesn't need any special cases for the last range, or anything else. A range either stops when the end of the numbers is reached, or when the following number is not consecutive. This sounds sensible, and this is exactly what the code is doing. I removed the check for nums == null since it is not necessary. Collections should never be null, and if they are, the code immediately throws an exception, which is fine. I also removed the special case for nums.Count == 0 since returning an empty list is much better than returning null. Again, expressions that have collection type should never be null. The test cases cover this case, so there's nothing to worry about.
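The reviewer's two-pointer pattern ports almost verbatim to other languages; here is a minimal Python sketch of the same Ranges function (an illustrative port, not part of the original answer):

```python
def ranges(nums):
    # Sort and deduplicate, mirroring OrderBy().Distinct() in the C# version.
    nums = sorted(set(nums))
    result = []
    start = 0
    while start < len(nums):
        end = start + 1  # the current range covers nums[start:end]
        while end < len(nums) and nums[end - 1] + 1 == nums[end]:
            end += 1
        result.append((nums[start], nums[end - 1]))
        start = end  # continue after the current range
    return result
```

As in the C# version, an empty input yields an empty list rather than None, and no special case is needed for the last range.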
{ "domain": "codereview.stackexchange", "id": 34355, "tags": "c#, algorithm, interval" }
How do I get the Unitary matrix of a circuit without using the 'unitary_simulator'?
Question: I am using jupyter notebook and qiskit. I have a simple quantum circuit and I want to know how to get the unitary matrix of the circuit without using 'get_unitary' from the Aer unitary_simulator. That is, using only numpy and normal matrix properties, how do I get the unitary matrix of the circuit below? The result should equal this: [[1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j] [0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j] [0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j] [0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j] [0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j] [0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j] [0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j] [0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]] I tried doing the following, but it did not result in the correct unitary matrix: swapcnot = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]) layer1 = np.kron( swapcnot,np.eye(2) ) layer2 = np.kron( np.eye(2),swapcnot ) print( np.matmul(layer2,layer1) ) The result: [[1. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 1.] [0. 0. 0. 0. 0. 0. 1. 0.] [0. 1. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 1. 0. 0. 0.] [0. 0. 0. 1. 0. 0. 0. 0.] [0. 0. 1. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 1. 0. 0.]] Answer: For the first layer of your circuit, compute the tensor product between the unitary matrix of the (swapped) CNOT gate and the identity matrix (using numpy's kron()). Do a similar operation for the second layer. You will obtain two 8x8 matrices. Then multiply them using numpy's matmul(). Here you have the working code: import numpy as np swapcnot = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]) layer1 = np.kron(np.eye(2),swapcnot ) layer2 = np.kron( swapcnot, np.eye(2) ) print( np.matmul(layer2,layer1) ) Output: [[1. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 1. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 1. 0.] [0. 0. 0. 0. 0. 1. 0. 0.] [0. 0. 0. 0. 1. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 1.] [0. 0. 1. 0. 0. 0. 0. 0.] [0. 1. 0. 0. 0. 0. 0. 0.]]
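The layer-by-layer construction in the answer can be checked directly against the target permutation; here is a small numpy sketch (assuming little-endian qubit ordering, as qiskit uses, which is why the identity goes on the opposite side of each kron compared with the question's attempt):

```python
import numpy as np

# The "swapped" CNOT from the question: control on the lower-indexed qubit.
swapcnot = np.array([[1, 0, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0]])

# Layer 1 acts on qubits (0, 1), so the identity for qubit 2 goes on the
# LEFT of the kron; layer 2 acts on qubits (1, 2), identity on the RIGHT.
# Getting this ordering backwards was the original mistake.
layer1 = np.kron(np.eye(2), swapcnot)
layer2 = np.kron(swapcnot, np.eye(2))

# Gates applied later in the circuit multiply on the left.
unitary = layer2 @ layer1
```

The resulting permutation matrix matches the unitary_simulator output quoted in the question.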
{ "domain": "quantumcomputing.stackexchange", "id": 3277, "tags": "programming, qiskit, quantum-state, unitarity" }
A Shortest Path Strange Formulation, or new modeling?
Question: We have a directed graph $G=(V,E)$ with vertex set $V=\left\{ 1,2,...,n\right\}$. The weight of each edge $(i,j)$ is denoted $w(i,j)$; if edge $(i,j)$ is not present, set $w(i,j)= + \infty$, and for each vertex $i$ put $w(i,i)=0$. We want to use dynamic programming to find the shortest path between any two vertices of this graph. For which of the following recursive relations is $d[i,j,n]$ equal to the length of the shortest path between vertices $i$ and $j$? $I)$ $d[i,j,k]=\begin{cases}w(i,j) & k=1\\ \min_{1 \leq r \leq n} \left\{ d[i,r,k-1] +w(r,j) \right\}& k>1 \end{cases}$ $II) d[i,j,k]=\begin{cases}w(i,j) & k=0\\ \min \left\{ {d[i,j,k-1],d[i,k,k-1]+d[k,j,k-1]}\right\} & k>0 \end{cases}$ $III) d[i,j,k]=\begin{cases}w(i,j) & k=1\\ \min_{1 \leq r \leq n} \left\{ {d[i,r,\lfloor k/2\rfloor ]}+d[r,j, \lceil k/2\rceil ] \right\} & k>1 \end{cases}$ This is from a 2011 exam whose published solution says that all of them are correct. Can anyone help me understand this confusing question better? Answer: Solution I) is a recursive definition in which $k$ is the number of edges (links); this is the formulation underlying the Bellman-Ford-Moore algorithm. Solution II) is a recursive definition in which $k$ is an intermediate node: $d[i,j,k]$ is the length of the shortest path from $i$ to $j$ involving only $1..k$ as intermediate nodes. This formulation comes from the Floyd-Warshall algorithm. Solution III) is a recursive definition in which $k$ is again the number of edges; I) and III) give the same value for $d[i,j,k]$. (See Shortest Paths.) In every case the final answer is $d[i,j,n]$.
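The first two recurrences can be checked against each other on a small example; here is a hedged Python sketch (illustrative, not from the exam), comparing recurrence I (path length counted in edges, Bellman-Ford-Moore style) with recurrence II (Floyd-Warshall):

```python
INF = float('inf')

def shortest_paths_i(w):
    # Recurrence I: d[i][j][k] = shortest i->j walk of at most k edges
    # (w[i][i] = 0 makes "at most" work).
    n = len(w)
    d = [row[:] for row in w]  # k = 1
    for _ in range(n - 1):     # relax up to k = n
        d = [[min(d[i][r] + w[r][j] for r in range(n)) for j in range(n)]
             for i in range(n)]
    return d

def shortest_paths_ii(w):
    # Recurrence II: d[i][j][k] allows only the first k vertices as
    # intermediate nodes; this is Floyd-Warshall.
    n = len(w)
    d = [row[:] for row in w]  # k = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

Despite counting different things in $k$, both arrive at the same all-pairs shortest-path matrix $d[i,j,n]$.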
{ "domain": "cs.stackexchange", "id": 6061, "tags": "algorithms, graphs, data-structures, trees, shortest-path" }
HTML form with confirmation before submission
Question: I'm creating a website about an auto show that the user is going to. I'm collecting input from them via textboxes, check boxes, radio buttons, etc. I'm presenting them with a confirmation that the information they entered is correct. Is the way I'm doing this the most efficient, injecting the HTML into the website via JavaScript? online_form.html <!DOCTYPE html> <html> <head> <title>Online Form - BA</title> <link rel="stylesheet" type="text/css" href="external/style.css"> <script src="external/script.js"></script> </head> <body> <!-- NAV START --> <hr> <a href="index.html">Main Page</a> - <a href="online_form.html">Online Form</a> - <a href="special.html">Specialty Car</a> <hr> <!-- NAV END --> <h2>Complete this online form to get into the Ann Arbor Auto Show for <i>FREE</i></h2> <form action="" method="post"> <fieldset> <label>Name:</label> <input type="text" id="name"><br> <label>Age:</label> <input type="text" id="age"><br> <label>Email:</label> <input type="text" id="email"><br> </fieldset> <fieldset> <h3>What is your reason for attending the Ann Arbor Auto Show?</h3> <input type="radio" name="reason" value="cars">I like cars<br> <input type="radio" name="reason" value="been_before">I've been here before<br> <input type="radio" name="reason" value="friend">A friend told me<br> <input type="radio" name="reason" value="other">Other<br> <p>If other, please explain:</p> <textarea rows="2" cols="20"></textarea> </fieldset> <fieldset> <h3>What color cars do you like?</h3> <input type="checkbox" name="color" value="red">Red<br> <input type="checkbox" name="color" value="blue">Blue<br> <input type="checkbox" name="color" value="green">Green<br> <input type="checkbox" name="color" value="orange">Orange<br> <input type="checkbox" name="color" value="yellow">Yellow<br> <input type="checkbox" name="color" value="purple">Purple<br> <input type="checkbox" name="color" value="black">Black<br> <input type="checkbox" name="color" value="white">White<br> </fieldset> 
<button type="button" onclick="checkInformation()">Submit</button> <button type="reset">Reset</button> <div id="conformation"></div> </form> </body> </html> script.js function Form() { this._name = document.getElementById('name').value; this.age = document.getElementById('age').value; this.email = document.getElementById('email').value; this.conformation = document.getElementById('conformation'); /* * Reset this.conformation if user clicks `no` button */ this.conformation.innerHTML = ""; this.response = '<h3>Is this information correct?</h3>\n'; this.response += '<p>Name: ' + this._name + '</p>\n'; this.response += '<p>Age: ' + this.age + '</p>\n'; this.response += '<p>Email: ' + this.email + '</p>\n'; this.response += '<button type="button" onclick="yes()">Yes</button>'; this.response += '<button type="reset" onclick="no()">No</button>'; this.send_conformation = function() { this.conformation.style.display = "block"; this.conformation.innerHTML = this.response; } } var form; function checkInformation() { form = new Form(); form.send_conformation(); } function yes() { /* To be implemented */} function no() { var conf = document.getElementById('conformation'); conf.style.display = "none"; } Answer: Replace the document.getElementById calls with a forEach and the this.response lines with template literals to keep the code DRY: function Form () { ["_name", "age", "email"].forEach(key => { const id = key.replace(/^_/, ''); // removes the _ in the beginning this[key] = document.getElementById(id).value; }); this.conformation = document.getElementById('conformation'); // an element, not a value /* * Reset this.conformation if user clicks `no` button */ this.conformation.innerHTML = ""; this.response = ` <h3>Is this information correct?</h3>\n <p>Name: ${this._name}</p>\n <p>Age: ${this.age}</p>\n <p>Email: ${this.email}</p>\n <button type="button" onclick="yes()">Yes</button> <button type="reset" onclick="no()">No</button>`; this.send_conformation = function() { this.conformation.style.display = "block"; this.conformation.innerHTML = this.response; 
} } var form; function checkInformation () { form = new Form(); form.send_conformation(); } function yes() { /* To be implemented */ } function no() { document.getElementById('conformation').style.display = "none"; }
{ "domain": "codereview.stackexchange", "id": 33483, "tags": "javascript, html, form, dom" }
How does 3cm microwave pass through a 0.5 cm grating?
Question: This YouTube video prompted a question. If the wavelength of the PASCO microwave generator is about 3 cm, how do we explain that a significant portion passes through a 1/2 $cm^2$ screen? I have a naive impression that the wave won't pass through an opening significantly smaller than the wavelength. My guess is some variation of the idea that the wave is both reflected and propagated at the screen, and the propagated part reassembles on the other side...? I have noticed this with my microwave oven too. The screen is supposed to shield and mostly does, but there is a strong and measurable reading across a room on my cheapo RF detector. It seems reasonable that energy can get through "gaps" in the shielding, but I am not sure what the precise explanation should be, or how to visualize this. (I assume the Gunn diode partially polarizes the light but this does not really change the question). (This is a cross-post from the Chemistry site at the suggestion of a senior member). Answer: General rule of thumb is that the opening in a Faraday cage should be smaller than 1/10th of the wavelength that should be blocked. For example, in order to block EM fields with frequencies of 10 GHz and lower, the hole size of the Faraday cage should be smaller than 3 mm.
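The rule of thumb in the answer translates into a one-line calculation; here is a small Python sketch using the answer's own 10 GHz example:

```python
c = 3.0e8        # speed of light, in m/s
f = 10.0e9       # highest frequency to block, in Hz (10 GHz)
wavelength = c / f          # 0.03 m, i.e. 3 cm
max_hole = wavelength / 10  # rule of thumb: holes below ~lambda/10
# max_hole comes out at about 3 mm, matching the answer
```

A 2.45 GHz microwave oven (wavelength about 12 cm) can afford much larger holes in its door screen by the same rule, which is why the mesh is visibly coarse yet still mostly effective.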
{ "domain": "physics.stackexchange", "id": 93007, "tags": "waves, microwaves, light-emitting-diodes" }
2D array as service response
Question: I want to use a service that has a 2D array response this is my .srv file uint32 a --- uint32[][] tree catkin_make error: genmsg.base.InvalidMsgSpec: invalid field: Currently only support 1-dimensional array types: uint32[][] Originally posted by Moon on ROS Answers with karma: 19 on 2016-04-26 Post score: 0 Answer: Well, the error says it all: two dimensional arrays are currently not supported. I see two possibilities: use one of the MultiArray message types in std_msgs. create your service similar to what e.g. sensor_msgs/Image is doing, by having a 1D array for the data and an additional parameter for the length of a row, i.e. int32 a --- uint32 step # Full row length in bytes uint32[] data # actual matrix data, size is (step * rows) Originally posted by mgruhler with karma: 12390 on 2016-04-27 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Moon on 2016-04-27: thanks . :) Comment by Moon on 2016-04-27: if I chose the second possibility ; do you know how to return step and data to my client node ?? Comment by mgruhler on 2016-04-27: Sorry, I don't understand your question.. This is a service, so the client sends the request, and receives the response, which contains step and data in this case.... Comment by Moon on 2016-04-27: I mean how to write it in server but I found this way resp=stResponse() resp.step = 7 resp.data = cell #cell is an array in my server code return resp but I have another problem now: when I tried to print the received array in client it is printed as (1,2,3,4,..) not as the data that was assigned? Comment by mgruhler on 2016-04-27: please post a new question and provide the respective code (make sure to format it properly using the preformatted text button, the one with 101 on it). 
Comment by Moon on 2016-04-27: I did, thanks a lot :) and this is the question if you want to help http://answers.ros.org/question/233049/array-as-response-in-python/ Comment by thejose on 2020-10-14: I don't understand how the second option works. Could you explain it in a little more detail? Comment by mgruhler on 2020-10-15: Instead of a two-dimensional array, create a one-dimensional array by putting all rows "behind" each other. With the step field you define how long (either in bytes, or you could also use number of elements) each row is. Does this answer your question? If not, I suggest you create a new question where you detail what exactly you don't understand... Comment by thejose on 2020-10-15: Thank you! That answered my question.
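The row-major flattening described in the second option can be sketched in plain Python; flatten and unflatten here are illustrative helper names, not part of any ROS API:

```python
def flatten(matrix):
    # Pack a 2-D list row-major into (step, data), the way
    # sensor_msgs/Image stores pixel rows in a flat array.
    step = len(matrix[0])
    data = [x for row in matrix for x in row]
    return step, data

def unflatten(step, data):
    # Rebuild the 2-D structure from the service response fields.
    return [data[i:i + step] for i in range(0, len(data), step)]
```

The server fills step and data into the response; the client calls the equivalent of unflatten to recover the matrix.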
{ "domain": "robotics.stackexchange", "id": 24481, "tags": "ros, python, array, service" }
Bit magnetic in one handle, but not another
Question: Notice that the driver bit in this video is magnetically attracted to both handles, yet it is magnetically attracted to the fastener only when it is in the green handle: Video on Imgur. Video description: Driver bit is in black handle. It is magnetically attracted to the handle, but not to a screw. Driver bit is moved to green handle. It is magnetically attracted to the handle and also to the screw. Driver bit is moved back to black handle. It is magnetically attracted to the handle, but not to the screw, just as before. My 13 year old daughter noticed the phenomenon. Today she'll hopefully learn not only about magnetism, but also discover the amazing results of asking in the right place after showing that she tried to research by herself. Why is the bit attracted to the screw only when in a specific handle? Note that the screw is magnetically attracted to both handles, though to the green handle far more strongly. The screw is not attracted to the bit when the bit is not in a handle. The bit is magnetically attracted to both handles, it is not only friction holding the bit in place (this is demonstrated in the video). Answer: Could it be that the black handle retains the bit using a magnet that is mounted sideways with two pole pieces (NS) - so that the bit 'closes the magnetic circuit'; whereas the green handle just has a simple magnet mounted axially (N)? If this were the case then with the black handle there would be less induced magnetism in the part of the bit touching the screw.
{ "domain": "physics.stackexchange", "id": 85529, "tags": "electromagnetism" }
Why is the ring in this simulation of Sgr A* off center?
Question: In the recent releases of images of Sgr A*, simulated versions of what they expected were included alongside the actual images they were able to get. What confuses me about these simulated images (and I believe this was the case for M87 as well) is that the shadow of the black hole seems to be off center from the ring that is visible in the image. So my question is: why is that the case? My initial thoughts are that it might be related to the anisotropic nature of accretion disks and the fact that we are looking at it at an unknown inclination, but even then it still seems a bit odd. It reminds me of an Einstein ring in the sense that gravitational lensing is playing a role here for light found around the black hole, but the fact that it’s off center confuses me, and I’m not sure if that’s the right term given this isn’t lensing in the sense that it’s normally used. Included is an image of what I’m referencing, taken from one of the videos from the press release from the other day. I got this image from a drive folder shared as ‘Additional visuals’ at the bottom of this website Answer: The photon ring around a non-spinning Schwarzschild black hole is perfectly circular and centered on the black hole. The photon ring around a spinning Kerr black hole is almost circular (except for very high spins) but is displaced from the centre of the black hole (e.g. see Takahashi 2005; Johannsen 2015). The amount of displacement is related to the spin and the inclination of the black hole spin to the line of sight (if the rotation axis points towards you with $i=0$, then the ring is undisplaced). The plot below (from Johannsen 2015) shows a calculation of the displacement (in units of $M$, where the Schwarzschild radius is $2M$) as a function of inclination for values of the spin parameter ($a = Jc/GM^2$) varying from $a=0.0$ (no displacement) to 0.9 in steps of 0.1 and then 0.998.
{ "domain": "astronomy.stackexchange", "id": 6351, "tags": "black-hole, general-relativity, supermassive-black-hole, event-horizon-telescope" }
Date Time - Seconds Difference
Question: For trivial reasons, I decided to have a go at differentiating dates. Lo and behold, I had no idea what a non-trivial task it would become. It was originally a small sidetrack from a project I'm doing. And, whilst performance isn't a huge concern here, the code I've posted below performs far better than its alternative (shown below it). This is preferred, as originally this was used in a real-time program, and without other changes to the high-level algorithm, the cost of re-calculating the date difference every frame (up to 60 FPS) was creating a significant run-time penalty. But what I'm looking for in my solution is algorithmic improvements, not optimizations (it runs more than fast enough): such as removing the for loop for calculating which years are leap years (perhaps using the 365.242199 constant?), and especially techniques for getting rid of that huge tree of comparisons for the initial swap; that just doesn't look like good practice... ever. I'm sure it can be done in the algorithm, but my attempts failed and I ran out of time. 
long calculate_seconds_between( uint Y1, uint M1, uint D1, uint H1, uint m1, uint S1, uint Y2, uint M2, uint D2, uint H2, uint m2, uint S2 ) { bool invert = false; if (Y1 > Y2) { invert = true; } else if (Y1 == Y2) { if (M1 > M2) { invert = true; } else if (M1 == M2) { if (D1 > D2) { invert = true; } else if (D1 == D2) { if (H1 > H2) { invert = true; } else if (H1 == H2) { if (m1 > m2) { invert = true; } else if (m1 == m2 && S1 > S2) { invert = true; } } } } } if (invert) { std::swap(Y1, Y2); std::swap(M1, M2); std::swap(D1, D2); std::swap(H1, H2); std::swap(m1, m2); std::swap(S1, S2); } static const int month_days_sum[] = {0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365}; const uint Y1_days = month_days_sum[M1 - 1]; const uint Y2_days = month_days_sum[M2 - 1]; int years_days = (Y2 - Y1) * 365; // Leap Years for (uint i = Y1 + 1; i < Y2;) { if (is_leap_year(i)) { ++years_days; i += 4; } else { ++i; } } const bool lY1 = is_leap_year(Y1) && (M1 < 2 || (M1 == 2 && D1 < 29)); const bool lY2 = is_leap_year(Y2) && (M2 > 2 || (M2 == 2 && D2 > 28)); if (Y1 == Y2) { if (lY1 && lY2) ++years_days; } else { if (lY1) ++years_days; if (lY2) ++years_days; } // Convert years to seconds const long years_seconds = years_days * 86400; // Time difference in seconds const long S1s = ((Y1_days + D1) * 86400) + (H1 * 3600) + (m1 * 60) + S1; const long S2s = ((Y2_days + D2) * 86400) + (H2 * 3600) + (m2 * 60) + S2; const long total = years_seconds + (S2s - S1s); if (invert) return -total; else return total; } Standard C++ Alternative Note: very slow, up to (8000 / 35) 228x slower than the above. 
time_t calculate_seconds_between2( const uint Y1, const uint M1, const uint D1, const uint H1, const uint m1, const uint S1, // YY/MM/DD HH:mm:SS const uint Y2, const uint M2, const uint D2, const uint H2, const uint m2, const uint S2 ) { time_t raw; time(&raw); struct tm t1, t2; gmtime_r(&raw, &t1); t2 = t1; t1.tm_year = Y1 - 1900; t1.tm_mon = M1 - 1; t1.tm_mday = D1; t1.tm_hour = H1; t1.tm_min = m1; t1.tm_sec = S1; t2.tm_year = Y2 - 1900; t2.tm_mon = M2 - 1; t2.tm_mday = D2; t2.tm_hour = H2; t2.tm_min = m2; t2.tm_sec = S2; time_t tt1, tt2; tt1 = mktime(&t1); tt2 = mktime(&t2); return (tt2 - tt1); } As shown in the Unit Testing, every single date (excluding tests on time) from 1990 to 2020 has been tested against every date from 1990 to 2020 (n^2) without failure, so the algorithm appears to be correct in terms of accuracy against the GNU implementation on my platform. Unit Testing Code: http://pastie.org/2933904 Benchmark Code: http://pastie.org/2933893 Tagged with C as this is barely a far cry from being completely transferable. Answer: I think if i was doing it, I'd try to structure it more like the standard code: turn each Y/M/D/H/m/S into seconds since some epoch, then use fairly straightforward subtraction to compute the difference. unsigned calculate_seconds_between2(unsigned Y1, unsigned M1, unsigned D1, unsigned H1, unsigned m1, unsigned S1, unsigned Y2, unsigned M2, unsigned D2, unsigned H2, unsigned m2, unsigned S2) { // JSN = seconds since some epoch: unsigned T1 = JSN(Y1, M1, D1, H1, m1, S1); unsigned T2 = JSN(Y2, M2, D2, H2, m2, S2); return T1>T2 ? 
T1-T2 : T2-T1; } For the seconds since epoch, I'd probably use something like a normal Julian Day Number, but with a more recent epoch (to reduce magnitudes, and with them the possibility of overflow), then calculate seconds into the day, something like this: unsigned JSN(unsigned Y, unsigned M, unsigned D, unsigned H, unsigned m, unsigned S) { static const int unsigned secs_per_day = 24 * 60 * 60; return mJDN(Y-1900, M, D) * secs_per_day + H * 3600 + m * 60 + S; } That leaves only calculating the modified JDN. It's not exactly transparent, but: unsigned mJDN(unsigned Y, unsigned M, unsigned D) { return 367*Y - 7*(Y+(M+9)/12)/4 + 275*M/9 + D; } This formula is from a 1991 Usenet post by Tom Van Flandern, with an even more modified JDN (i.e., an even more recent epoch). Another way to help avoid overflow would be to model it a bit more closely after your code: compute a difference in days, and a difference in seconds, and only then convert the days to seconds, and add on the difference in seconds within the day: unsigned time_diff(/* ...*/) { unsigned D1 = JDN(Y1, M1, D1); unsigned D2 = JDN(Y2, M2, D2); unsigned T1 = H1 * 3600 + m1 * 60 + S1; unsigned T2 = H2 * 3600 + m2 * 60 + S1; if (D1 == D2) return T1>T2 ? T1-T2 : T2-T1; return D1>D2 ? (D1-D2)*secs_per_day + T1-T2 : (D2-D1)*secs_per_day + T2-T1; } In particular, this would make it easier to avoid overflow while still using standard Julian day numbers. This would be useful if (for example) you were using standard Julian day numbers for other purposes, so you wanted to re-use those standard routines. I haven't run full regression tests for accuracy (since the point is more about the overall structure than the actual code implementing it), but I'm reasonably certain the approach can/will produce accurate results. A quick test for speed indicates that it should be reasonably competitive in that regard as well -- at least with the compilers I have handy, it's fairly consistently somewhat faster. 
Even if (for example) I've messed something up in transcribing Tom's formula into C++, I doubt that fixing it will have any major effect on speed. Readability is open to a bit more question. Most of this code is very simple and straightforward, with one line of nearly impenetrable "magic math". Yours "distributes" the complexity, so there's no one part that's terribly difficult, but also no part that's really easy, obvious, or reusable either. Edit: As written this produces the absolute value of the difference. Eliminating that simplifies the code to something like this: int mJDN(int Y, int M, int D) { return 367*Y - 7*(Y+(M+9)/12)/4 + 275*M/9 + D; } int JSN(ull Y, ull M, ull D, ull H, ull m, ull S) { static const int secs_per_day = 24 * 60 * 60; return mJDN(Y-1900, M, D) * secs_per_day + H * 3600 + m * 60 + S; } int calculate_seconds_between3(int Y1, int M1, int D1, int H1, int m1, int S1, int Y2, int M2, int D2, int H2, int m2, int S2) { int T1 = JSN(Y1, M1, D1, H1, m1, S1); int T2 = JSN(Y2, M2, D2, H2, m2, S2); return T2-T1; }
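The "magic math" line is easier to trust after a quick numerical check; here is a hedged Python port of the day-number formula (the formula itself is Van Flandern's; only differences between day numbers are meaningful here):

```python
def mjdn(y, m, d):
    # Modified Julian day number via Van Flandern's formula; integer
    # division throughout, mirroring the unsigned arithmetic in C++.
    return 367 * y - 7 * (y + (m + 9) // 12) // 4 + 275 * m // 9 + d

# Leap-year boundary: 2020-02-28 to 2020-03-01 is two days (Feb 29
# exists in 2020), while 2019-02-28 to 2019-03-01 is one day.
```

Since only differences are used, the epoch offset baked into the formula cancels out, which is why the answer can shift the epoch freely to dodge overflow.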
{ "domain": "codereview.stackexchange", "id": 30868, "tags": "c++, optimization, c, algorithm" }
x86 FASM assembly Reverse FizzBuzz
Question: This program counts down from 100 to 1 and: If the current number is a multiple of 3 it prints "Fizz" instead of the number If it is a multiple of 5 it prints "Buzz" instead of the number If it is a multiple of 3 and 5 it prints "FizzBuzz" instead of the number format PE console entry main include 'macro/import32.inc' section '.rdata' data readable msg db '%d',13,10, 0 fizz db 'Fizz', 13, 10, 0 buzz db 'Buzz', 13, 10, 0 p db 'pause>nul', 0 fizzbuzz db 'FizzBuzz', 13, 10, 0 section '.data' data readable writeable vdiv_by_3 dd 0 vdiv_by_5 dd 0 main: push ebp mov ebp, esp mov ecx, 100 load_3: mov eax, ecx mov ebx, 3 xor edx, edx div ebx mov ebx, edx mov [vdiv_by_3], ebx ; Store the remainder in div_by_3 load_5: ; Now check if its divisible by 5 mov eax, ecx mov ebx, 5 xor edx, edx div ebx mov [vdiv_by_5], edx ; Remainder in div_by_5 cmp edx, 0 ; Checking 5 jne check_3_not_5 check_3: mov eax, [vdiv_by_3] cmp eax, 0 je print_fizzbuzz check_3_not_5: mov eax, [vdiv_by_3] cmp eax, 0 je print_fizz check_5_not_3: mov eax, [vdiv_by_5] cmp eax, 0 je print_buzz print_num: ; Problem: This is printing 101 first, we need to start at 1 push ecx push msg call [printf] ; This call will mess with ecx so we have to store it add esp, 4 pop ecx ; Get the counter back into ecx jmp endme print_fizz: push ecx push fizz call [printf] add esp, 4 pop ecx jmp endme print_buzz: push ecx push buzz call [printf] add esp, 4 pop ecx jmp endme print_fizzbuzz: push ecx push fizzbuzz call [printf] add esp, 4 pop ecx jmp endme print_number: push ecx push msg call [printf] add esp, 4 pop ecx endme: dec ecx cmp ecx, 0 jne load_3 push p call [system] add esp, 4 push 0 call [exit] section '.idata' import data readable library msvcrt, 'msvcrt.dll' import msvcrt, \ printf, 'printf', \ system, 'system', \ exit, 'exit' Answer: Since you are writing FizzBuzz in assembler, you are obviously concerned about performance and code size. For performance, div is one of the worst instructions. 
Since the divisibility repeats modulo 15, you could define some constants: const divisible_by_3 = 0b1001001001001001 const divisible_by_5 = 0b1000010000100001 To test the divisibility, have an extra register that is initialized with i mod 15 and adjusted whenever i changes. The basic idea is: dec i dec i_mod_15 cmovs i_mod_15, 14 ; the maximum value mod 15 You can also combine divisible_by_3 and divisible_by_5 into a bit vector of two-bit entries (divisible) and define a jump table based on that. To do the actual testing, use bit-shifting. const divisible_by_3 = 0b_1_0_0_1_0_0_1_0_0_1_0_0_1_0_0_1 const divisible_by_5 = 0b1_0_0_0_0_1_0_0_0_0_1_0_0_0_0_1_ const divisible = 0b11000001001001000001100001000011 Another idea is to unroll the loop using Duff's Device. Right now, your code is a really boring, straightforward translation of some higher-level language, probably C. In assembler, you have much more potential to compress the code (DRY principle). For example, you can jmp do_printf instead of writing push ecx; call printf; pop ecx several times. … 90 minutes later … Based on the above ideas, the code might look like this; it is tested and works. format PE console entry main include 'macro/import32.inc' section '.rdata' data readable ; Each of these messages takes exactly 8 bytes, ; except for the last one. ; This is important for addressing them efficiently. messages db '%d', 13, 10, 0, 0, 0, 0 db 'Fizz', 13, 10, 0, 0 db 'Buzz', 13, 10, 0, 0 db 'FizzBuzz', 13, 10, 0 ; This bit mask selects one of the above messages ; to be printed. The counter is always passed to ; printf, and in 8/15 cases it is ignored. ; The uu entry at the end is unused. 
; ; 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ; div3 mask = 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 u ; div5 mask = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 u div_mask = 11_00_00_01_00_10_01_00_00_01_10_00_01_00_00_00b section '.text' code readable main: push ebp mov ebp, esp mov ecx, 100 ; ecx = the counter mov eax, ecx xor edx, edx mov ebx, 15 div ebx ; edx = counter mod 15 mov ebx, div_mask ; ebx = bit mask for selecting the message rol ebx, 2 xchg edx, ecx ; Variable-width rotation is only possible rol ebx, cl ; with cl, therefore the temporary swap rol ebx, cl ; of edx and ecx. xchg edx, ecx ; .again: mov eax, ebx ; Select the message format for printf. and eax, 11b shl eax, 3 add eax, messages push ebx ; save registers before printf push ecx push edx push ecx ; actually call printf push eax call [printf] add esp, 8 pop edx ; restore registers after printf pop ecx pop ebx dec edx ; Adjust counter, counter mod 15 jns .normal ; and bit mask. add edx, 15 ror ebx, 2 ; Rotate one extra time since .normal: ; the bit mask has 16 entries. ror ebx, 2 dec ecx jnz .again xor eax, eax pop ebp ret section '.idata' import data readable library msvcrt, 'msvcrt.dll' import msvcrt, printf, 'printf', system, 'system' Some more things I took care of: The code must not be in a writeable section. Since the program can return from main, it should do so. To make that work, I had to add the pop ebp that corresponds to the push ebp at the very top; in your code that was missing. I carefully avoided many branching statements, since they are poisonous to performance. See Why is it faster to process a sorted array. The one remaining conditional behaves well in that it follows the jump in 14/15 cases, which is easily predictable. Of course, using printf and the C stdio for output ruins all performance effects. But that's outside the scope for this little fun experiment. All arguments to printf may be modified by printf. 
There's no guarantee that you get your ecx back at the point where you commented ; Get the counter back into ecx. To hide the counter from printf, you must push it once more to the stack. That's why I explicitly commented on this saving/calling/restoring in my code. A nice thing is that you can play with the bit mask, which feels just like an ordinary configuration file.
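To make the bit-mask trick above easier to follow, here is a small Python sketch (my own illustration, not part of the original answer) that decodes the same 32-bit div_mask and checks it against a straightforward FizzBuzz for the first 100 numbers. The names MESSAGES and message_for are hypothetical helpers of mine, not labels from the assembly.

```python
# Each two-bit entry of DIV_MASK (leftmost entry = counter mod 15 == 0)
# encodes: bit 0 set when divisible by 3, bit 1 set when divisible by 5.
# The assembly uses the entry (times 8) as an offset into its message table.
DIV_MASK = 0b11_00_00_01_00_10_01_00_00_01_10_00_01_00_00_00

MESSAGES = ["%d", "Fizz", "Buzz", "FizzBuzz"]  # same order as the .rdata table

def message_for(n):
    """Return the printf format the assembly would select for counter n."""
    r = n % 15
    entry = (DIV_MASK >> (2 * (15 - r))) & 0b11  # entry r, counted from the left
    return MESSAGES[entry]

# Cross-check the mask against a plain FizzBuzz for 1..100.
for n in range(1, 101):
    expected = ("FizzBuzz" if n % 15 == 0
                else "Fizz" if n % 3 == 0
                else "Buzz" if n % 5 == 0
                else "%d")
    assert message_for(n) == expected
```

This confirms that the mask's 15 used entries agree with the mod-3/mod-5 tests, so the assembly never needs a div inside the loop.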
{ "domain": "codereview.stackexchange", "id": 29762, "tags": "assembly, fizzbuzz" }
Depth-first search algorithm in Clojure
Question: Context As an exercise for myself (I'm learning clojure). I wanted to implement the Depth-first search algorithm. How I did it Using recursion (def graph {:s {:a 3 :d 4} :a {:s 3 :d 5 :b 4} :b {:a 4 :e 5 :c 4} :c {:b 4} :d {:s 4 :a 5 :e 2} :e {:d 2 :b 5 :f 4} :f {:e 4 :g 1}}) (def stack [[:s]]) (def goal :g) (defn cost [Graph start goal] (goal (start Graph))) (defn hasloop? [path] (not (= (count path) (count (set path))))) (defn atgoal? [path] (= goal (last path))) (defn solved? [stack] (some true? (map atgoal? stack))) (defn addtopath [path node] (conj path node)) (defn pop* [stack] (last stack)) (defn findpath [stack] (if (not (solved? stack)) (let [first* (pop* stack) l (last first*) ] (findpath (drop-last (remove hasloop? (lazy-cat (map #(addtopath first* %) (keys (l graph))) stack))))) [(first stack)])) How to use (findpath stack) Question I'm really really interested in how this code can be improved. Both in readability, efficiency and performance. Answer: I wrote the modified version of your findpath function using recursion: (defn- dfs [graph goal] (fn search [path visited] (let [current (peek path)] (if (= goal current) [path] (->> current graph keys (remove visited) (mapcat #(search (conj path %) (conj visited %)))))))) (defn findpath "Returns a lazy sequence of all directed paths from start to goal within graph." [graph start goal] ((dfs graph goal) [start] #{start})) Instead of using your hasloop? function, my search function keeps track of visited nodes in order to avoid visiting the same node twice. It seems to work for your settings: user> (def graph {:s {:a 3 :d 4} :a {:s 3 :d 5 :b 4} :b {:a 4 :e 5 :c 4} :c {:b 4} :d {:s 4 :a 5 :e 2} :e {:d 2 :b 5 :f 4} :f {:e 4 :g 1}}) user> (findpath graph :s :g) ([:s :a :b :e :f :g] [:s :a :d :e :f :g] [:s :d :a :b :e :f :g] [:s :d :e :f :g])
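For readers more comfortable outside Clojure, the visited-set approach in the answer can be transcribed roughly as follows (my own Python sketch; find_paths is a hypothetical name, not from the answer):

```python
# Depth-first search over an adjacency-dict graph that tracks visited nodes
# instead of scanning the whole path for loops, mirroring the Clojure answer.
def find_paths(graph, start, goal):
    """Yield every simple path from start to goal."""
    def search(path, visited):
        current = path[-1]
        if current == goal:
            yield list(path)
            return
        for neighbour in graph.get(current, {}):
            if neighbour not in visited:
                yield from search(path + [neighbour], visited | {neighbour})
    yield from search([start], {start})

# The same weighted graph as in the question (weights are unused by DFS).
graph = {"s": {"a": 3, "d": 4}, "a": {"s": 3, "d": 5, "b": 4},
         "b": {"a": 4, "e": 5, "c": 4}, "c": {"b": 4},
         "d": {"s": 4, "a": 5, "e": 2}, "e": {"d": 2, "b": 5, "f": 4},
         "f": {"e": 4, "g": 1}}
```

Running list(find_paths(graph, "s", "g")) yields the same four paths the Clojure version reports.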
{ "domain": "codereview.stackexchange", "id": 2455, "tags": "optimization, algorithm, performance, clojure" }
osx error finding boost
Question: I'm trying to install diamondback on a brand new, fresh out of the box macbook pro, and I'm encountering an error that I've never seen before. I'm getting errors from diamondback/ros/tools/rosboost_cfg/src/rosboost_cfg/rosboost_cfg.py about not being able to find boost. The specific error is rosboost_cfg.rosboost_cfg.BoostError: "Cannot find boost in any of [('/usr', True), ('/usr/local', True)]" Looking in the python script, it looks like this list should at the very least also include /opt/local/include, which is set in $CPATH. Here's the relevant python bit: _search_paths = [(sysroot+'/usr', True), (sysroot+'/usr/local', True), (None if 'INCLUDE_DIRS' not in os.environ else os.environ['INCLUDE_DIRS'], True), (None if 'CPATH' not in os.environ else os.environ['CPATH'], True), (None if 'C_INCLUDE_PATH' not in os.environ else os.environ['C_INCLUDE_PATH'], True), (None if 'CPLUS_INCLUDE_PATH' not in os.environ else os.environ['CPLUS_INCLUDE_PATH'], True), (None if 'ROS_BOOST_ROOT' not in os.environ else os.environ['ROS_BOOST_ROOT'], False)] When I do echo $CPATH I get /opt/local/include (set from .profile). Originally posted by Nick on ROS Answers with karma: 93 on 2011-03-10 Post score: 0 Answer: I figured out the problem. It happens when doing sudo rosinstall instead of just rosinstall. For future reference, rosinstall and sudo don't play well together on OSX currently. Originally posted by Nick with karma: 93 on 2011-03-10 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 5028, "tags": "boost, osx, ros-diamondback, rosbuild" }
Relating Poynting's theorem to Lenz's and Faraday's laws?
Question: In systems similar to a motor, where the armature begins to accelerate and simultaneously a back-emf $-\epsilon$ is induced that reduces the applied current (hence the applied power $P(t)$ is also reduced), or in other systems built on the same principle such as a rail gun, Lenz's law would state the conservation of energy in Faraday's law of induction. However, how does Poynting's theorem relate to such systems? To add to the conservation of energy? Answer: Definitions First, let's start by defining some parameters: $\mu_{o}$ is the permeability of free space $\varepsilon_{o}$ is the permittivity of free space $\mathbf{E}$ is the 3-vector electric field $\mathbf{B}$ is the 3-vector magnetic field $\mathbf{S}$ is the 3-vector Poynting flux (also called the Poynting vector) $\mathbf{j}$ is the 3-vector electric current density $\partial_{\alpha}$ is the partial derivative with respect to parameter $\alpha$ $q_{s}$ charge of particle species $s$ $n_{s}$ number density of particle species $s$ (i.e., number per unit volume) $\mathbf{v}_{s}$ bulk flow velocity of particle species $s$ Background Poynting's theorem is defined mathematically (in differential form) as: $$ \partial_{t} \left( w_{B} + w_{E} \right) + \nabla \cdot \mathbf{S} = - \mathbf{j} \cdot \mathbf{E} \tag{1} $$ where $\partial_{t}$ is the partial time derivative, $w_{B} = B^{2}/\left( 2 \mu_{o} \right)$, $w_{E} = \varepsilon_{o} E^{2}/2$, $\mathbf{S} = \left( \mathbf{E} \times \mathbf{B} \right)/\mu_{o}$, and $\mathbf{j} = \sum_{s} \ q_{s} \ n_{s} \mathbf{v}_{s}$. $^{\mathbf{A}}$ We can often write Poynting's theorem in differential form because the volume over which one integrates (i.e., the surface through which $\mathbf{S}$ is leaving/entering) is generally arbitrary [e.g., see pages 258-264 in Jackson [1999]].
We can define Poynting's theorem in terms of physically significant phrases, like the following: the time rate of change of the energy density of the electromagnetic fields; plus the rate of electromagnetic energy flux flowing out of an arbitrary surface; equals the energy lost due to momentum transfer between particles and fields. We could just as easily describe 1. as the rate of energy transfer per unit volume, 2. as the power flowing out of a volume through a defined surface, and 3. as the rate of work done per unit volume on the charges in the volume element. One thing to note is that when in differential form as in Equation 1, Poynting's theorem is one example of a continuity equation. All continuity equations are expressed as: the time rate of change of a density; plus the rate of flux flowing out of an arbitrary surface; equals sources and losses. In terms of units, a flux is just a density multiplied by a velocity. In simple (loose/careless) terms, the velocity gives the direction and rate while the density supplies the volume and number. how does Poynting's theorem relate to such systems? Poynting's theorem is, in short, a statement of the conservation of electromagnetic energy. The $\left( \mathbf{j} \cdot \mathbf{E} \right)$ term in Equation 1 shows how energy is transformed from electromagnetic (particle mechanical) to particle mechanical (electromagnetic) energy.$^{\mathbf{B}}$ You can see this by recalling that one form for expressing power (i.e., energy per unit time) is given by: $$ P = \mathbf{F} \cdot \mathbf{v} $$ where $\mathbf{F}$ is some force acting on some object and $\mathbf{v}$ is the velocity of said object.
When you look at the $\left( \mathbf{j} \cdot \mathbf{E} \right)$ term, you can see that it can be represented by: $$ \left( \mathbf{j} \cdot \mathbf{E} \right) = \mathbf{E} \cdot \left( \sum_{s} \ q_{s} \ n_{s} \mathbf{v}_{s} \right) \\ \sim \sum_{s} \ \frac{ \mathbf{F} }{ q_{s} } \cdot \left( q_{s} \ n_{s} \mathbf{v}_{s} \right) $$ where I have just rewritten $\mathbf{E}$ as $\mathbf{F}/q$ (from the Lorentz force). Then you can see that there is a term similar to $\mathbf{F} \cdot \mathbf{v}$ within $\left( \mathbf{j} \cdot \mathbf{E} \right)$. Thus, the $\left( \mathbf{j} \cdot \mathbf{E} \right)$ term is clearly a rate of change of energy per unit volume. To add to the conservation of energy? Poynting's theorem is part of the total conservation of energy of a given system. There are numerous ways to treat this and many can be too involved to go into here, but the simple answer is that it defines the conversion of particle energy to/from electromagnetic energy. For instance, one can use Poynting's theorem with the generalized Ohm's law [e.g., see page 572 in Jackson [1999]] when describing the transport properties (e.g., conductivity) of a given system. So in a sense, yes, Poynting's theorem adds to the conservation of energy in that it is one part of that law. Side Notes A. I already converted the expression for current density to a macroscopic form. To see more details on the difference between micro- and macroscopic Maxwell's equations, see pages 248-258 in Jackson [1999]. B. Note that I have included heat (i.e., random kinetic energy) and bulk flow kinetic energy in my use of the term mechanical energy here. References J.D. Jackson, Classical Electrodynamics, Third Edition, John Wiley & Sons, Inc., New York, NY, 1999.
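As a small numerical companion to the definitions above (my own illustration, not part of the answer), consider a vacuum plane wave with $B = E/c$ perpendicular to $\mathbf{E}$: the Poynting flux magnitude $|\mathbf{S}| = EB/\mu_o$ then equals the total field energy density $(w_E + w_B)$ times $c$, i.e. the wave's energy streams at the speed of light.

```python
import math

mu0 = 4 * math.pi * 1e-7        # permeability of free space (H/m)
eps0 = 8.8541878128e-12         # permittivity of free space (F/m)
c = 1 / math.sqrt(mu0 * eps0)   # speed of light recovered from mu0, eps0

E = 100.0                       # electric field amplitude (V/m), arbitrary
B = E / c                       # plane-wave magnetic amplitude (T)

S = E * B / mu0                           # |S| for perpendicular E and B
w = 0.5 * eps0 * E**2 + B**2 / (2 * mu0)  # w_E + w_B from Equation (1)

assert math.isclose(S, w * c, rel_tol=1e-9)  # flux = energy density * c
```

This is just the statement that for a plane wave the two energy densities are equal and the flux term in Poynting's theorem carries that energy at $c$.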
{ "domain": "physics.stackexchange", "id": 28499, "tags": "electromagnetism, energy-conservation, classical-electrodynamics" }
Calculating eigenvalues and eigenstates of an infinite dimensional Hamiltonian
Question: Consider the Hamiltonian, $$H = E_{0} \sum_{m = - \infty}^{\infty}(|m⟩⟨m + 1| + h.c.),$$ where $E_{0}$ is an energy scale, $|m⟩$ are kets which can be used to form a complete basis and h.c. denotes Hermitian conjugate. Find out the eigenvalues of $H$. Next, carry out the same analysis for $$H' = H − E_{0}m|m⟩⟨m|.$$ Compare the eigenfunctions in the two cases. Now, in order to find out the eigenvalues of the first Hamiltonian, I need to get $\lambda$ where, $$ \det(H - \lambda I) = 0$$ This means I need to find the determinant of an infinite dimensional matrix that has only 3 non-zero diagonals - the main one ($-\lambda$), and the ones above and below this one ($E_{0}$). We have a recursion relation here, $$ \det(T_{n}) = -\lambda \det(T_{n-1}) + E_{0} \det(T_{n-2})$$ This is where I am stuck. I tried this - the matrix is infinite, which means in principle it shouldn't make a difference to the determinant if I remove (or add) a couple of rows and columns, i.e., as $n \rightarrow \infty$, $\det(T_{n}) = \det(T_{n-1}) = \det(T_{n-2})$. This gives me $\lambda = E_{0} - 1$. But I am not sure if this is correct. If I use this same technique for the next Hamiltonian too, then I get $\lambda_{n} = E_{0}(n+1) - 1$. Once again, I have no idea if this is correct. As for the eigenfunctions, all I can say after this is that in the first case we have an infinite fold degeneracy while this is not the case for the second one. But that is all, I cannot proceed beyond this. Any help would be appreciated. Answer: The following procedure is only on a formal level. Let $k\in [-\pi,\pi)$ and define $$|k\rangle :=\frac{1}{\sqrt{2\pi}}\sum\limits_{m\in\mathbb Z} e^{ikm}|m\rangle \tag 1 \quad .$$ Note that $|k\rangle$ in a strict sense is not an element of the underlying Hilbert space but should be interpreted as a "generalized" vector. 
We compute $$\langle m|\left(\int_{[-\pi,\pi)} |k\rangle\langle k|\, \mathrm dk\right)|m^\prime\rangle= \frac{1}{2\pi} \int_{[-\pi,\pi)}e^{ik(m-m^\prime)}\, \mathrm d k = \delta_{m,m^\prime} = \langle m|m^\prime\rangle \quad , $$ which means that $$ \mathbb I= \int_{[-\pi,\pi)} |k\rangle\langle k|\, \mathrm dk\tag{2} \quad .$$ On the other hand, we have that $$ \langle k|k^\prime\rangle= \frac{1}{2\pi} \sum\limits_{m\in\mathbb Z}e^{im(k-k^\prime)} = \sum\limits_{n\in \mathbb Z} \delta(k-k^\prime - 2\pi n) \quad ,\tag{3}$$ by an instance of Poisson's summation formula. Now note that $k-k^\prime$ can only be a multiple of $2\pi$ if $k=k^\prime$, in which case the RHS gives a single $\delta(0)$; but if $k\neq k^\prime$ the RHS of $(3)$ vanishes. I guess it is more or less legitimate to thus summarize $(3)$ by $$\langle k|k^\prime\rangle = \delta(k-k^\prime) \quad . \tag{4} $$ With this in mind, you can express the Hamiltonian in this new generalized basis, which will yield $$ H=2 E_0 \int_{[-\pi,\pi)} \cos(k)\,|k\rangle\langle k|\,\mathrm dk \quad . \tag 5$$
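A quick numerical sanity check of the band result (my own addition, not part of the original answer): truncating $H$ to an open chain of $N$ sites gives a tridiagonal matrix whose eigenvalues are exactly $2E_0\cos(\pi j/(N+1))$, which fill the interval $[-2E_0, 2E_0]$ predicted by the $2E_0\cos k$ spectrum of Equation (5) as $N$ grows.

```python
import math

def tridiag_eigenvalues(N, E0=1.0):
    """Closed-form spectrum of the N-site open-chain truncation of H."""
    return [2 * E0 * math.cos(math.pi * j / (N + 1)) for j in range(1, N + 1)]

def check_eigenpair(N, j, E0=1.0):
    """Verify that v[m] = sin(pi*j*m/(N+1)) solves E0*(v[m-1]+v[m+1]) = lam*v[m]."""
    lam = 2 * E0 * math.cos(math.pi * j / (N + 1))
    v = [math.sin(math.pi * j * m / (N + 1)) for m in range(N + 2)]  # v[0] = v[N+1] = 0
    return all(math.isclose(E0 * (v[m - 1] + v[m + 1]), lam * v[m], abs_tol=1e-9)
               for m in range(1, N + 1))

# Every candidate eigenpair satisfies the hopping eigenvalue equation,
# and the whole finite spectrum lies inside the band [-2*E0, 2*E0].
assert all(check_eigenpair(20, j) for j in range(1, 21))
assert all(-2.0 <= e <= 2.0 for e in tridiag_eigenvalues(50))
```

The identity behind check_eigenpair is just $\sin(k(m-1)) + \sin(k(m+1)) = 2\cos(k)\sin(km)$, the discrete analogue of the plane-wave diagonalization above.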
{ "domain": "physics.stackexchange", "id": 95434, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, linear-algebra, eigenvalue" }
Running Coupling Constants At Very Low Temperatures
Question: From Wikipedia Coupling Constants, using QED as an example. I realise that the one-loop beta function in quantum chromodynamics is negative. If a beta function is positive, the corresponding coupling increases with increasing energy. An example is quantum electrodynamics (QED), where one finds by using perturbation theory that the beta function is positive. In particular, at low energies, α ≈ 1/137, whereas at the scale of the Z boson, about 90 GeV, one measures α ≈ 1/127. Moreover, the perturbative beta function tells us that the coupling continues to increase, and QED becomes strongly coupled at high energy. In fact the coupling apparently becomes infinite at some finite energy. My questions are based on pure curiosity (and a total lack of experimental experience, so my apologies if this combination displays naivety on my part). Have we tested coupling constants at the lowest temperature/energy to confirm a reduction at the far end of the energy scale from the LHC? It may be that low temperature experiments have to take any changes in values as a matter of routine, to correspond to theoretical predictions, so "yes, of course!!!" is a perfectly acceptable answer. If a reduction has been observed in the value of any arbitrary constant, at these extremely low temperatures, can we compare this to conditions if the "heat death of the universe" scenario is true and predict what effects will occur as the temperature drops? Answer: The fine structure constant $\alpha\approx\frac1{137}$ appears in the Coulomb force between fundamental charges: $$ \alpha\hbar c = e^2/4\pi\epsilon_0, \quad\text{so}\quad |E_\text{Coulomb}| = \frac{e^2}{4\pi\epsilon_0} \frac1r = \frac{\alpha\hbar c}{r} $$ Quantum electrodynamics is pretty well tested down into the radio frequencies, with techniques like magnetic resonance, and radio frequencies correspond to micro-eV photons. This is zero temperature as compared to the LHC.
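As a quick cross-check of the relation quoted in the answer (my own sketch, using rounded CODATA-style constant values): $\alpha = e^2/(4\pi\epsilon_0\hbar c)$ does come out near $1/137$ at low energy.

```python
import math

# Rounded physical constants (SI units); assumptions of this sketch,
# not values taken from the original post.
e = 1.602176634e-19        # elementary charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
hbar = 1.054571817e-34     # reduced Planck constant (J s)
c = 2.99792458e8           # speed of light (m/s)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
assert math.isclose(alpha, 1 / 137.036, rel_tol=1e-4)  # low-energy value
```

The running toward $\alpha \approx 1/127$ at the Z scale is a quantum-loop effect and is of course not captured by this classical-constants arithmetic.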
{ "domain": "physics.stackexchange", "id": 32024, "tags": "cosmology, experimental-physics, physical-constants" }
If an astrophysical jet contains gamma rays, does it mean that at the source poles there is small gravitational redshift?
Question: If an astrophysical jet contains gamma rays, does it mean that even though the source has an incredibly strong gravitational pull, at the source poles there is small gravitational redshift? Maybe there is not much gravity if the gravitational redshift is so small? Answer: Let's consider a gamma ray emitted from the surface of a $1.4$ solar mass neutron star, assuming the radius is 10 km (on the low end of what is consistent with observations from LIGO and NICER). Let's also assume the spin is small in units of the mass of the object, (a) for simplicity and (b) consistent with observations of known neutron stars. The redshift of a light ray emitted from the surface of a (non-spinning) neutron star obeys \begin{equation} 1+z = \left(1-\frac{R_s}{R_{\rm NS}}\right)^{-1/2} \end{equation} where $R_{\rm NS}=10{\ \rm km}$ is the radius of the neutron star, and \begin{equation} R_s = \frac{2GM}{c^2} = 4.1\ {\rm km} \end{equation} is the Schwarzschild radius of a $1.4$ solar mass object. Plugging in the numbers we find that $z=0.3$. In other words, if the energy of the emitted gamma ray was $100\ {\rm keV}$ (a typical scale associated with the gamma rays observed from 170817, a collision of two neutron stars), then the energy of the gamma ray that is observed far away from the neutron star is $100{\ \rm keV}/(1.3)=77\ {\rm keV}$. So, there is some effect from gravitational redshift, but it is not enough to make a qualitative difference. (note that the exact boundaries between what is called a gamma ray, vs an X-ray, are somewhat fuzzy, and in astronomy what typically matters more is the process producing the radiation more than the radiation itself) Said differently, neutron stars are compact objects, but not so compact that their gravitational field is enough to strip the majority of the energy from emitted radiation.
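The arithmetic in the answer is easy to reproduce; here is a short sketch of mine (constants rounded, not taken from the original post) that recovers the quoted Schwarzschild radius, surface redshift, and observed photon energy.

```python
import math

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
c = 2.99792458e8       # speed of light (m/s)
M_sun = 1.989e30       # solar mass (kg)

M = 1.4 * M_sun        # neutron-star mass from the answer
R_ns = 10e3            # neutron-star radius (m)

R_s = 2 * G * M / c**2                  # Schwarzschild radius, ~4.1 km
z = (1 - R_s / R_ns) ** -0.5 - 1        # surface redshift, ~0.3
E_observed = 100.0 / (1 + z)            # a 100 keV gamma ray seen far away
```

The redshifted energy lands near 77 keV, matching the answer's conclusion that the effect is real but not qualitative.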
{ "domain": "physics.stackexchange", "id": 78866, "tags": "gravity, astrophysics, gravitational-redshift, relativistic-jets" }
Peter Corke Robotics Matlab Toolbox Jacobian in Simulink
Question: I am working on getting the Peter Corke Robotics Toolbox for Matlab connected to a Panda robot via Simulink. The problem I am encountering is the fact that the variables created by the Peter Corke Toolbox are not recognized by Simulink. I want to feed a function block with joint angles and end-effector velocities and then have the block output the joint velocities. This is done via the following multiplication: $\dot{q}=J(q)^{\dagger} \dot{p}$. Executing this multiplication in Matlab is easy and straightforward using the toolbox. You can get $J$ using robot.jacobe(q) and then take the pseudo-inverse and multiply this with the end-effector velocities $\dot{p}$. The problem arises as soon as I try to execute this same multiplication in Simulink. The function block along with its inputs and outputs is shown below: The function block contains the same script as the m-file that correctly outputs the joint velocities in Matlab. When I build the Simulink model, I get a multitude of error messages, caused by the fact that Simulink does not recognize the SerialLink object (shown below) created by the Peter Corke Robotics Toolbox. I've tried converting everything to a struct but then the toolbox's Jacobian function no longer works (unsurprisingly). Is there anybody that has experience with using SerialLink objects in Simulink, or is there a simple way to get the robot data into Simulink? Thanks a lot in advance for any help and/or tips.
Here's a Mathworks post describing loading variables into the Matlab workspace, which might be useful because, Simulink blocks like the Constant block read variables from the base workspace. Their post is trying to load data in Matlab then start Simulink, but you're kind of the opposite in that you're running Simulink and trying to load something. However, I think you could probably use the same technique to load the data if it doesn't exist. For example: if(exist('variableYouNeed', 'var') == false) evalin('base', 'yourMatFile.mat'); end You could selectively load different robots by passing a constant that you use inside your Matlab code (enum/switch) to selectively load different files.
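The velocity mapping $\dot{q}=J(q)^{\dagger}\dot{p}$ itself is independent of Simulink. As a language-neutral illustration (my own sketch using a toy planar 2R arm, not the Panda's Jacobian and not the toolbox's API), the round trip from joint rates to end-effector rates and back is easy to check by hand for the square case, where the pseudo-inverse reduces to the ordinary inverse:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2R arm (rows: end-effector x, y rates)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def solve_2x2(J, pdot):
    """qdot = J^{-1} pdot for the nonsingular 2x2 case (pinv == inverse here)."""
    (a, b), (c, d) = J
    det = a * d - b * c  # equals l1*l2*sin(q2), nonzero away from singularities
    return [( d * pdot[0] - b * pdot[1]) / det,
            (-c * pdot[0] + a * pdot[1]) / det]
```

For the real 7-DOF Panda the Jacobian is 6x7 and a true pseudo-inverse (e.g. Matlab's pinv) is needed, but the data flow through the Simulink function block is the same.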
{ "domain": "robotics.stackexchange", "id": 2434, "tags": "matlab, jacobian, robotics-toolbox" }
Why does it take energy to grow the surface of a drop?
Question: Classical nucleation theory predicts that the growth of small nuclei is thermodynamically disfavoured, on account of the energy required to grow their surfaces. I am struggling to understand why it takes energy to grow a nucleus's surface. I have done my best to expound my understanding of surface tension and classical nucleation theory below. My hope is that someone can identify a mistaken assumption or fault in my reasoning. Surface Tension Consider a drop of water. Molecules on the surface of the droplet have fewer neighbours than molecules in the bulk of the liquid. This gives them a higher potential energy relative to their counterparts away from the surface. We can assign a positive Gibbs free energy $G_s$ to the liquid's surface that is proportional to its surface area, with the constant of proportionality known as the surface tension. It therefore takes work to increase the drop's surface area through deformation. However, it is important to note that water molecules on the surface still have less free energy compared to water in vapour outside of the drop. After all, a molecule on the surface has less potential energy due to cohesive forces with other molecules. $G_s$ is an opportunity cost: it's the free energy difference between the water droplet and the droplet if it were submerged entirely in water. Nucleation Theory Classical nucleation theory says that growth of nuclei, such as water droplets in a supercooled vapour, is governed by two competing processes. The first is the free energy reduction from the vapour to liquid phase transition, which is proportional to the drop's volume. The second is the free energy increase from the formation of the drop's surface. The reasoning goes that, since the drop has surface tension, it must have a positive Gibbs free energy proportional to its surface area. Since the surface area of a small drop grows faster than its volume, (homogeneous) nucleation is not thermodynamically favoured.
Where I'm Confused My issue with the reasoning above is the claim that the formation of the drop's surface has a free energy cost. The preceding discussion on surface tension illustrates that the deformation of a drop of water with a fixed number of molecules increases its free energy. However, as far as I can tell, the addition of a molecule to its surface should still decrease the total free energy. Phrased differently: I understand why it takes work to increase a drop's surface area through deformation. I don't understand why it takes work to increase a drop's surface area by adding molecules. Answer: (This is a model of a well-constructed question–my compliments.) Classical nucleation theory says that growth of nuclei, such as water droplets in a supercooled vapour, is governed by two competing processes. The first is the free energy reduction from the vapour to liquid phase transition, which is proportional to the drop's volume. This is true but incomplete. Recall that phase transitions occur because of an interplay between enthalpy $H$ and a temperature-mediated entropy term $TS$. Nature (contradictorily) prefers strong bonding but also many possibilities, and the temperature controls which aspect wins out (with the lower-entropy phase always seen at the lower temperature at equilibrium.) The full driving force for a phase change is $V\Delta G$, with volume $V$ and free energy change $\Delta G$. 
Note that this term incorporates not only the additional drop volume but also the latent heat $L$ and undercooling/overcooling $\Delta T$ past the equilibrium phase transition temperature $T_\mathrm{t}$: $$V\Delta G=V(\Delta H-T\Delta S)=V\left(L-T\frac{L}{T_\mathrm{t}}\right)=VL\frac{\Delta T}{T_\mathrm{t}},$$ where I've ignored any difference in the specific heat between the two phases (equivalent to taking only the first term of a Taylor-series expansion around $T_\mathrm{t}$ to linearize the response) and I've used the equality $\Delta G=\Delta H-T_\mathrm{t}\Delta S=0$ at $T_\mathrm{t}$ to obtain $\Delta S=\frac{\Delta H}{T_\mathrm{t}}=\frac{L}{T_\mathrm{t}}.$ So the energy benefit from a phase change is low at low temperature excursions $\Delta T$ past $T_\mathrm{t}$. However, the energy penalty from forming additional surface area $A$ remains constant: $\gamma A$, with interface energy $\gamma$. This is why the latter can be larger than the former. It's only when the sum of the two starts to decrease that nucleation is widely expected (as discussed in this answer). Therefore, I don't agree with this part of the question: However, it is important to note that water molecules on the surface still have less free energy compared to water in vapour outside of the drop. After all, a molecule on the surface has less potential energy due to cohesive forces with other molecules. The molecules can obtain a lower enthalpy from attaching to neighbors, but the loss in entropy from being constrained may make the $-T\Delta S$ term positive enough that the free energy is larger than that in the vapor. In this case, molecules that happen to collide to form a nucleate-like cluster (with a surface) will tend to disperse again as relatively unstable. The undercooling simply isn't enough for them to give up the high-entropy possibilities of free translation in the gas phase. Does this help clear up the confusion?
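The competition described above can be made concrete with the standard spherical-nucleus free energy (my own illustration; gamma and dg are order-of-magnitude placeholder values, not data from the answer): $\Delta G_\mathrm{tot}(r) = 4\pi r^2\gamma - \tfrac{4}{3}\pi r^3\,\Delta g$ first rises, because the surface term wins at small $r$, and only falls past the critical radius $r^* = 2\gamma/\Delta g$. Since $\Delta g \propto \Delta T$, a small undercooling pushes $r^*$ out and makes early growth costly, which is exactly the point of the answer.

```python
import math

def delta_G(r, gamma, dg):
    """Surface cost minus volume gain for a spherical nucleus of radius r."""
    return 4 * math.pi * r**2 * gamma - (4 / 3) * math.pi * r**3 * dg

def critical_radius(gamma, dg):
    """Radius where d(delta_G)/dr = 0, i.e. the nucleation barrier top."""
    return 2 * gamma / dg

gamma, dg = 0.1, 1e8   # J/m^2 and J/m^3: rough, hypothetical magnitudes
r_star = critical_radius(gamma, dg)

# Below r* growth costs free energy; above r* it releases free energy.
assert delta_G(0.99 * r_star, gamma, dg) < delta_G(r_star, gamma, dg)
assert delta_G(1.01 * r_star, gamma, dg) < delta_G(r_star, gamma, dg)
```

With these placeholder numbers $r^*$ comes out at the nanometre scale, the familiar size regime for critical nuclei.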
{ "domain": "physics.stackexchange", "id": 95376, "tags": "thermodynamics, statistical-mechanics, surface-tension, nucleation" }
Statistics Calculator for Listed and Grouped Data
Question: I made a statistics calculator based on raw data for my Edexcel IAL Statistics 1 course which I'm going to use in my calculator's MicroPython. I would like some suggestions for ways to further improve my code and become better at Python. Note: MicroPython only supports a subset of the standard library. import math def interpolation_grouped_data(grouped_data, cumulative_frequencies, position): # responsible for using linear interpolation to find the lower quartile, median, and upper quartile of grouped data if cumulative_frequencies[0] > position: # if the position of the data required is not in the first interval, then it is between 0 , and the lowest bound in the first interval mn_cu_freq = 0 mx_cu_freq = cumulative_frequencies[0] mid_cu_freq = position interval_index = 0 else: for index in range(len(cumulative_frequencies) - 1): if cumulative_frequencies[index+1] > position >= cumulative_frequencies[index]: # if the position is within this interval mn_cu_freq = cumulative_frequencies[index] mx_cu_freq = cumulative_frequencies[index + 1] mid_cu_freq = position interval_index = index + 1 break lower_bound = grouped_data[interval_index][0] higher_bound = grouped_data[interval_index][1] return interpolation([mn_cu_freq, mid_cu_freq, mx_cu_freq, lower_bound, higher_bound]) def interpolation(data_for_interpolation): # uses interpolation to find the result, cu represents cumulative mn_cu_freq, mid_cu_freq, mx_cu_freq, lower_bound, higher_bound = data_for_interpolation result = lower_bound + ( ( (mid_cu_freq - mn_cu_freq)/(mx_cu_freq - mn_cu_freq) ) * (higher_bound - lower_bound) ) return result def listed_data_stats(listed_data): # for dealing with listed data Ex: 1,2,3,4 or 5,1,4,2,6,7 # sum of data, number of data, mean sum_listed_data = sum(listed_data) number_of_data = len(listed_data) mean = sum_listed_data / number_of_data # sum of each data squared sum_squared_listed_data = sum([i**2 for i in listed_data]) # variance, and standard deviation variance = 
(sum_squared_listed_data / number_of_data) - (mean)**2 standard_deviation = round(math.sqrt(variance), 5) # median sorted_listed_data = listed_data[:] sorted_listed_data.sort() if number_of_data % 2 == 0: median1 = sorted_listed_data[number_of_data//2] median2 = sorted_listed_data[number_of_data//2 - 1] median = round((median1 + median2)/2, 5) else: median = round(sorted_listed_data[number_of_data//2], 5) # mode m = max([listed_data.count(value) for value in listed_data]) mode = set([str(x) for x in listed_data if listed_data.count(x) == m]) if m>1 else None return sum_listed_data, sum_squared_listed_data, number_of_data, mean, median, mode, round(variance, 5), round(standard_deviation, 5) def grouped_data_stats(grouped_data): # for dealing with grouped data ex: [[lower bound, upper bound, frequency], [...], [...]] etc. in [[0, 10, 16], [10, 15, 18], [15, 20, 50]] in the first list, 0 and 10 represents the interval 0 -> 10, and 16 is the frequency of numbers in this range midpoints = [] cumulative_frequencies = [] sum_x = 0 sum_x_squared = 0 number_of_data = 0 if grouped_data[1][0] - grouped_data[0][1] != 0: # if there are gaps in data gap = (grouped_data[1][0] - grouped_data[0][1])/2 for data in grouped_data: if data[0] != 0: data[0] -= gap data[1] += gap for index, data in enumerate(grouped_data): midpoints.append((data[0] + data[1])/2) # acquires a list of midpoints for the each interval/tuple number_of_data += data[2] # acquires the number of data/ total frequency of all intervals sum_x += (midpoints[index] * data[2]) # gets the sum of all midpoints x frequency sum_x_squared += (midpoints[index]**2 * data[2]) # gets the sum of all midpoints^2 x frequency if index == 0: # if it is the first loop, then add the first value of cumulative frequency to the list cumulative_frequencies.append(data[2]) else: # if it is not, then get the value of the previous cumulative frequency and add to it the frequency of the current data, and append it 
cumulative_frequencies.append(cumulative_frequencies[index-1] + data[2]) # mean mean = sum_x / number_of_data # variance, and standard deviation variance = (sum_x_squared / number_of_data) - (sum_x / number_of_data)**2 # standard_deviation = math.sqrt(variance) # lower quartile, median, and upper quartile, and interquartile range lower_quartile = interpolation_grouped_data(grouped_data, cumulative_frequencies, (25/100) * number_of_data) # performs interpolation to acquire it median = interpolation_grouped_data(grouped_data, cumulative_frequencies, (50/100) * number_of_data) upper_quartile = interpolation_grouped_data(grouped_data, cumulative_frequencies, (75/100) * number_of_data) interquartile_range = upper_quartile - lower_quartile return sum_x, sum_x_squared, number_of_data, mean, variance, standard_deviation, lower_quartile, median, upper_quartile, interquartile_range def statistics(): # checks for what you want choice = input("a for\nInterpolation\nb for\nListed Data\nc for Grouped Data\n: ") if choice == "a": # interpolation mn_cu_freq = mid_cu_freq = mx_cu_freq = lower_bound = higher_bound = None variables = [mn_cu_freq, mid_cu_freq, mx_cu_freq, lower_bound, higher_bound] # values to be inputted for interpolation variables_names = ["mn_cu_freq", "mid_cu_freq", "mx_cu_freq", "lower_bound", "higher_bound"] for index, _ in enumerate(variables): variables[index] = float(input("Enter {}: ".format(variables_names[index]))) print("x = ", interpolation(variables)) elif choice == "b": # listed data statistics listed_data, results = [], [] while True: value = input("Enter Values: ") if value == "x": # enter x when no more data available break value = int(value) listed_data.append(value) results.extend(listed_data_stats(listed_data)) results = [str(value) for value in results] print("", "Sum_x = " + results[0], "Sum_x^2 = " + results[1], "n = " + results[2], "Mean = " + results[3], "Median = " + results[4], "Mode = " + results[5], "Variance = " + results[6], 
"Standard_Deviation = " + results[7], sep="\n") elif choice == "c": # grouped data statistics grouped_data, results = [], [] while True: start_boundary = input("Start Bound: ") if start_boundary == "x": # enter x when no more data available break end_boundary = input("End Bound: ") frequency = input("Frequency: ") grouped_data.append([int(start_boundary), int(end_boundary), int(frequency)]) # each row in the grouped data is a list results.extend(grouped_data_stats(grouped_data)) results = [str(round(value, 5)) for value in results] print("", "Sum_x = " + results[0], "Sum_x^2 = " + results[1], "n = " + results[2], "Mean = " + results[3], "Variance = " + results[4], "Standard Deviation = " + results[5], "Lower Quartile = " + results[6], "Median = " + results[7], "Upper Quartile = " + results[8], "IQR = " + results[9], sep="\n") statistics() Answer: Docstrings def interpolation_grouped_data(grouped_data, cumulative_frequencies, position): # responsible for using linear interpolation to find the lower quartile, median, and upper quartile of grouped data by standard should be written as def interpolation_grouped_data(grouped_data, cumulative_frequencies, position): """ responsible for using linear interpolation to find the lower quartile, median, and upper quartile of grouped data """ Unpacking If grouped_data's second dimension only has two entries, then lower_bound = grouped_data[interval_index][0] higher_bound = grouped_data[interval_index][1] can be lower_bound, higher_bound = grouped_data[interval_index] Multi-line expressions I would find this: result = lower_bound + ( ( (mid_cu_freq - mn_cu_freq)/(mx_cu_freq - mn_cu_freq) ) * (higher_bound - lower_bound) ) more easily legible as result = lower_bound + ( ( (mid_cu_freq - mn_cu_freq)/(mx_cu_freq - mn_cu_freq) ) * (higher_bound - lower_bound) ) Edge cases listed_data_stats does not take into account the edge case of an empty listed_data, which will produce a divide-by-zero. 
Inner lists sum([i**2 for i in listed_data]) should be sum(i**2 for i in listed_data) Similarly for both of these: m = max([listed_data.count(value) for value in listed_data]) mode = set([str(x) for x in listed_data if listed_data.count(x) == m]) if m>1 else None Parens variance = (sum_squared_listed_data / number_of_data) - (mean)**2 does not need parentheses around mean. Equality if grouped_data[1][0] - grouped_data[0][1] != 0: can simply be if grouped_data[1][0] != grouped_data[0][1]: Formatting for print print("", "Sum_x = " + results[0], "Sum_x^2 = " + results[1], "n = " + results[2], "Mean = " + results[3], "Variance = " + results[4], "Standard Deviation = " + results[5], "Lower Quartile = " + results[6], "Median = " + results[7], "Upper Quartile = " + results[8], "IQR = " + results[9], sep="\n") is somewhat of a mess. First of all, your call to grouped_data_stats should not dump its results into a results list. Instead, unpack them; something like xsum, xsum2, n, mean, var, stdev, qlow, med, qhi, iqr = grouped_data_stats(grouped_data) Then for your print, consider separating out your expression onto multiple lines for legibility.
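To make the unpacking suggestion concrete, here is a small sketch. The 10-tuple order is taken from `grouped_data_stats` above; the helper name `format_grouped_report` is my own invention for this example, and it only formats an already-computed stats tuple:

```python
# Sketch of the "unpack, don't index" suggestion from the review above.
# `stats` is assumed to be the 10-tuple returned by grouped_data_stats
# (same order as in the original code); format_grouped_report is a
# hypothetical helper name, not part of the original program.

def format_grouped_report(stats):
    (xsum, xsum2, n, mean, var, stdev, qlow, med, qhi, iqr) = stats
    lines = [
        f"Sum_x = {round(xsum, 5)}",
        f"Sum_x^2 = {round(xsum2, 5)}",
        f"n = {n}",
        f"Mean = {round(mean, 5)}",
        f"Variance = {round(var, 5)}",
        f"Standard Deviation = {round(stdev, 5)}",
        f"Lower Quartile = {round(qlow, 5)}",
        f"Median = {round(med, 5)}",
        f"Upper Quartile = {round(qhi, 5)}",
        f"IQR = {round(iqr, 5)}",
    ]
    return "\n".join(lines)
```

Unpacking into named variables removes the fragile positional indexing (`results[7]`) and makes the print section self-documenting.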
{ "domain": "codereview.stackexchange", "id": 38029, "tags": "python, python-3.x, mathematics, calculator, statistics" }
Intuition about Momentum Maps
Question: I'm studying Classical Mechanics and there is one object that appeared recently in the book that I'm not able to get a physical intuition about. The mathematical definition goes as follows: Let $M$ be a smooth manifold together with a symplectic form $\omega$ and suppose $G$ acts on the left on $M$ such that the action preserves the symplectic form. This means that if $\delta_{g} : M\to M$ is the diffeomorphism associated to $g\in G$, then $$\delta_g^\ast \omega=\omega$$ Now let $\mathfrak{g}$ be the Lie algebra of $G$ and $\langle,\rangle : \mathfrak{g}^\ast\times \mathfrak{g}\to \mathbb{R}$ the pairing $$\langle \varphi, A\rangle = \varphi(A).$$ If we denote by $X^A$ the vector field in $M$ associated to $A$, then one can see that $\eta = X^A\lrcorner \ \omega$ is closed because the action preserves $\omega$. In that case, we define a momentum map as a function $\mu:M\to \mathfrak{g}^\ast$ such that $$d(\langle \mu,A\rangle) = X^A \lrcorner \ \omega.$$ Now, for Classical Mechanics we are interested in the case $M = T^\ast Q$ where $Q$ is the configuration manifold. In that case I assume there should be some good intuition about what momentum maps really are and what they represent. Indeed, even the name invites us to think there are some important implications in Physics behind this definition. So, in Classical Mechanics, what really are the momentum maps defined as above? What do they represent, and what is a good intuition about them? Answer: The equivariant moment map has several applications. Its meaning is that it provides an encoding of how the Lie group $G$ acts on the phase space, and it gives you a way to find the observables corresponding to the conserved quantities/generators of the symmetry $G$. It also defines the process of symplectic reduction to a reduced phase space.
Given that $$ \mathrm{d}(\langle\mu(\dot{}),g\rangle) = \omega(\rho(g),\dot{})$$ where $\rho(g)$ denotes the (Killing) vector field associated to the infinitesimal action of $\mathfrak{g}$, the 1-form $\omega(\rho(g),\dot{})$ is closed due to $\mathrm{d}^2 = 0$. If one assumes the group action to be Hamiltonian, then one assumes that the form is also exact and thus there is a smooth function $f_g$ with $\mathrm{d}f_g = \omega(\rho(g),\dot{})$. A Hamiltonian action is also assumed to give a Lie algebra homomorphism $$ \mathfrak{g} \to C^\infty(M), g \mapsto f_g$$ and its image is precisely the generators of the symmetry in the classical sense. In this way, the moment map provides a coordinate-free description of how the Lie algebra of the symmetry embeds into the full Poisson algebra of observables. For example, for the rotation group, the image is the Lie algebra of angular momentum (thus the name!). One may now use the symmetry to reduce the phase space to a surface on which the conserved quantities are constant. This is done by picking any regular value of $\mu$ and taking its preimage, then dividing out the group action. This surface is invariant under the Hamiltonian flow and gets its own dynamics, allowing us to discard the rest of the phase space if we are only interested in this particular value for the charges generating the symmetry. The moment map thus tells you how to find "surfaces with constant charges" in the total phase space. Of particular importance is the case of a gauge group representing constrained dynamics, where the correct choice is naturally prescribed by the fact that the generators vanish on the constraint surface, so the correct symplectic reduction is $\mu^{-1}(0)/G$. This may be used to give a high-level account of how BRST cohomology obtains the correct algebra of gauge-invariant observables, see this answer of mine.
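For concreteness, the "thus the name" remark can be checked directly in the simplest case: rotations about the $z$-axis acting on $M = T^*\mathbb{R}^3$ with $\omega = \sum_i dq^i \wedge dp_i$. This is a standard textbook computation, sketched here in the answer's conventions:

```latex
X^A = -q^2\,\partial_{q^1} + q^1\,\partial_{q^2}
      - p_2\,\partial_{p_1} + p_1\,\partial_{p_2},
\qquad
X^A \lrcorner\,\omega
   = p_2\,dq^1 - q^2\,dp_1 - p_1\,dq^2 + q^1\,dp_2
   = d\big(q^1 p_2 - q^2 p_1\big),
```

so $\langle \mu, A\rangle = q^1 p_2 - q^2 p_1 = L_z$, the $z$-component of angular momentum, exactly as the "image is the Lie algebra of angular momentum" statement promises.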
{ "domain": "physics.stackexchange", "id": 24571, "tags": "classical-mechanics, mathematical-physics, differential-geometry, hamiltonian-formalism" }
From Liénard-Wiechert to Feynman potential expression
Question: When studying the potential of an uniformly moving charge in vacuum, Feynman proposes to apply a Lorentz transformation on the Coulomb potential, which reads in the rest frame $ \phi'(\mathbf r',t') = \frac{q}{4\pi\epsilon_0} \frac{1}{r'} $, where $ |\mathbf r'| = r' $. In a frame with constant velocity $ \mathbf v $ along the x-axis, he obtains the following expression: $$ \phi(\mathbf r, t) = \frac{\gamma q}{4\pi\epsilon_0} \dfrac{1}{\sqrt{(\gamma(x-vt))^2+y^2+z^2}} \tag 1 $$ by transforming $ \phi = \gamma\left(\phi'+\dfrac{A'_xv}{c^2}\right) $, where $ \gamma = \dfrac{1}{\sqrt{1-\frac{v^2}{c^2}}} $ and the vector potential $ \mathbf A' $ vanishes within the rest frame. Another Lorentz transformation of the time and space coordinates $ (\mathbf r', t') \rightarrow (\mathbf r,t) $ yields (1). I suspect that (1) describes the potential at a given point for the instantaneous time t. What I am wondering is how this formula is connected to the expression of Liénard and Wiechert, namely $$ \phi(\mathbf r, t)=\dfrac{q}{4\pi\epsilon_0}\dfrac{1}{|\mathbf r - \mathbf x(t_{ret})| - \frac{1}{c}\mathbf v(t_{ret})\cdot(\mathbf r - \mathbf x(t_{ret}))} \tag 2, $$ where $ \mathbf x(t_{ret}) $ describes the position of the charge and $ \mathbf v(t_{ret}) = \frac{d}{dt}\mathbf x(t)\bigg|_{t=t_{ret}} $ its velocity at the retarded time $ t_{ret}(\mathbf r,t) = t-\frac{|\mathbf r - \mathbf x(t_{ret})|}{c} $, respectively. In the case of uniform motion, we have $ \mathbf x(t) = (vt,0,0)^\intercal $. How do I get now from (2) to (1)? My idea is to actually calculate an explicit expression for the retarded time and plug it into (2), which should yield (1) if I understand it correctly. 
By asserting that $ c^2(t-t_{ret})^2 = (x-vt_{ret})^2+y^2+z^2 $, $ t_{ret} $ can be found by solving the quadratic equation, leading to the solutions $ t_{ret}^\pm = \gamma\left(\gamma(t-\frac{vx}{c^2})\pm\sqrt{\gamma^2(t-\frac{vx}{c^2})^2-t^2+\frac{r^2}{c^2}}\right) = \gamma\left(\gamma t'\pm\sqrt{\gamma^2t'^2-\tau^2}\right)$ where $ t' $ is the Lorentz transformation of $ t $ and $ \tau = \frac{t}{\gamma} $ looks like some proper time. Plugging this into (2) looks nothing like (1); what am I missing? Answer: If you look at Feynman Volume II, Section 21-6, he walks through this calculation. Your idea and initial assertion look good; the trick is to manage the algebra to get to the final form you want.
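For readers who want the algebra spelled out: writing $\beta = v/c$, $u = x - vt$, $\rho^2 = y^2 + z^2$, and $R = c(t - t_{ret})$, the charge position gives $x - v t_{ret} = u + \beta R$, so the defining relation $R^2 = (u + \beta R)^2 + \rho^2$ is exactly the quadratic mentioned above. Its positive root collapses the Liénard-Wiechert denominator in one line (a sketch, not necessarily Feynman's exact route):

```latex
|\mathbf r - \mathbf x(t_{ret})| - \tfrac{1}{c}\,\mathbf v\cdot(\mathbf r - \mathbf x(t_{ret}))
  = R - \beta\,(u + \beta R)
  = (1-\beta^2)\,R - \beta u
  = \sqrt{u^2 + (1-\beta^2)\,\rho^2}
  = \frac{1}{\gamma}\sqrt{\gamma^2 (x - vt)^2 + y^2 + z^2},
```

which, inserted into (2), gives exactly (1). The square roots of the intermediate quadratic cancel against the $-\beta u$ term, which is why plugging the raw expression for $t_{ret}$ into (2) "looks nothing like (1)" until this simplification is made.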
{ "domain": "physics.stackexchange", "id": 45451, "tags": "electromagnetism, potential, lienard-wiechert" }
Is there any deeper reason behind the conservation of mass?
Question: I have read that behind the conservation of energy or momentum is the Noether theorem with its intimidating maths. Is there any similar deeper foundation behind the conservation of mass? Answer: Mass is not a conserved quantity, except in classical mechanics and its derivatives. As classical mechanics emerges from quantum mechanics and special relativity, the conservation laws on energy and momentum (Noether's theorem in quantum mechanical terms) also define the mass. Everything has energy and momentum and is described by a four-vector, and the "length" of that four-vector is the invariant mass of a particle. Vector algebra has to be used for systems of particles, and the summed vector's length gives the invariant mass of the system.
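The four-vector bookkeeping in the answer can be sketched in a few lines (natural units with c = 1; the photon numbers are illustrative, not from the post):

```python
import math

# Invariant mass from an energy-momentum four-vector (c = 1):
# m^2 = E^2 - |p|^2. For a system, sum the four-vectors first,
# then take the "length" of the summed vector.
def invariant_mass(fourvectors):
    E  = sum(v[0] for v in fourvectors)
    px = sum(v[1] for v in fourvectors)
    py = sum(v[2] for v in fourvectors)
    pz = sum(v[3] for v in fourvectors)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back photons, each massless on its own:
photon_a = (1.0,  1.0, 0.0, 0.0)
photon_b = (1.0, -1.0, 0.0, 0.0)
m_single = invariant_mass([photon_a])            # 0.0: a single photon is massless
m_system = invariant_mass([photon_a, photon_b])  # 2.0: the two-photon system is not
```

This shows why "mass of the system" is not the sum of the constituent masses: the system's invariant mass comes from the length of the summed four-vector, as the answer says.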
{ "domain": "physics.stackexchange", "id": 68523, "tags": "mass, conservation-laws" }
How can define an indicator which measures the degree of similarity between two signals?
Question: The similarity of two signals is calculated by cross correlation. But how do I define an indicator which quantitatively measures the degree of similarity between two signals? Thanks. Answer: Assuming finite power signals: $$ \lVert x \rVert^2 \triangleq \lim_{N \to \infty} \ \frac{1}{2N+1} \sum\limits_{n=-N}^{+N} \big|x[n] \big|^2 \ < +\infty $$ this is a Hilbert Space sorta thingie. define inner product: $$ \langle x,y \rangle \triangleq \lim_{N \to \infty} \ \frac{1}{2N+1} \sum\limits_{n=-N}^{+N} x[n] \cdot \overline{y}[n] $$ where $\overline{y}[n] $ is the complex conjugate of $y[n]$. so this is true about the norm: $$ \lVert x \rVert = \sqrt{\langle x, x \rangle} $$ Cross-Correlation: $$ R_{xy}[k] \triangleq \langle x[n], y[n+k] \rangle $$ Autocorrelation: $$ R_{xx}[k] \triangleq \langle x[n], x[n+k] \rangle \ \le R_{xx}[0] = \lVert x \rVert^2 $$ Normalized Autocorrelation (sometimes called "autocovariance") $$ -1 \le \frac{R_{xx}[k]}{R_{xx}[0]} \triangleq \frac{\langle x[n], x[n+k] \rangle}{\langle x[n], x[n] \rangle} \ \le 1 $$ Normalized Crosscorrelation: $$ -1 \le \frac{R_{xy}[k]}{\lVert x \rVert \lVert y \rVert} \triangleq \frac{\langle x[n], y[n+k] \rangle}{\sqrt{\langle x[n], x[n] \rangle}\sqrt{\langle y[n], y[n] \rangle}} \ \le 1 $$
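For a quantitative indicator in practice, the normalized cross-correlation above can be implemented directly for finite-length signals. A sketch (lags that fall outside the support are simply dropped, which is one of several possible edge-handling conventions):

```python
import math

# Normalized cross-correlation at lag k, following the answer's formulas
# but for finite-length real signals instead of the infinite-support limit.
# Returns a value in [-1, 1]: 1 means identical shape, -1 means inverted.
def norm_xcorr(x, y, k):
    num = sum(x[n] * y[n + k] for n in range(len(x)) if 0 <= n + k < len(y))
    den = math.sqrt(sum(v * v for v in x)) * math.sqrt(sum(v * v for v in y))
    return num / den

a = [1.0, 2.0, 3.0]
same     = norm_xcorr(a, a, 0)                 # ~ 1: a signal matches itself
inverted = norm_xcorr(a, [-v for v in a], 0)   # ~ -1: perfectly anti-correlated
```

The peak of `norm_xcorr` over all lags is then a single scalar similarity score between the two signals.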
{ "domain": "dsp.stackexchange", "id": 4356, "tags": "autocorrelation, cross-correlation, dsp-core" }
Why isn't this a way to transfer information faster than light using quantum entanglement?
Question: Suppose Alice and Bob have entangled particles A and B that are very far apart. Alice uses Particle A to send information through its spin along the x axis, and Bob uses Particle B to receive this information. To start, Alice measures Particle A's spin perpendicular to the x axis, and then continually measures it along axes that get gradually closer to the desired spin state along the x axis. The more measurements taken to get to the desired state, the better, since that reduces the chance of an error occurring at some point. Once Alice has reached the desired spin state for Particle A, Bob measures the spin of Particle B. The images I've given show Particle A being measured with spin to the right, then being rotated counterclockwise, but it doesn't always have to be like this. It could be measured with spin left or right and rotated clockwise or counterclockwise, depending on whether Alice wants to set it to up or down. This process is repeated at regular intervals so that Alice and Bob can make measurements at the right time without communicating, and there are other pairs of entangled particles in case some of them do get misaligned. Ignoring the logistics of actually setting up a system like this, what keeps this from working? Comments in this answer to another thought experiment said that quantum entanglement doesn't last that long. Is that why this system fails too? Answer: Once you measure an entangled state, the "entangledness" gets destroyed and they start behaving like regular spin particles. This is one of the biggest impediments in building quantum computers; they rely heavily on entangled states, and a stray photon or something can accidentally cause one of the particles in the quantum computer to get measured, which destroys the quantum state.
{ "domain": "physics.stackexchange", "id": 86362, "tags": "quantum-mechanics, quantum-entanglement, faster-than-light, thought-experiment" }
Why does filling a compressed air cylinder produce heat?
Question: And the opposite follow-up question: why does opening the air cylinder make the air cooler? What I know is that I can't find these answers using the ideal gas law, because that is an equation of state. I cannot use Charles' law (it requires pressure to remain constant) nor Boyle's law (it requires the temperature to remain constant). Similarly, I cannot use Gay-Lussac's law ($P \propto T$), because that law requires both mass and volume to be constant (when filling an air cylinder I'm adding mass and/or I'm reducing the gas volume). So, where can I find a physical justification for this effect? Answer: Because what you are doing is a flow process, with mass inflow and no mass outflow, you need to use the thermodynamic equation: $dU_{cv} = H_{in}\,dm_{in} - H_{out}\,dm_{out} + \delta Q - \delta W_{shaft}$ If you insulate your air cylinder well enough, $\delta Q = 0$. Assuming that your air cylinder does not deform, $\delta W = 0$. Since you are filling your cylinder with air and assuming no air escapes, $H_{out}\,dm_{out} = 0$. Therefore, the enthalpy of the gas which you are filling adds to the internal energy of the gas in the cylinder, and because the internal energy is positively correlated with temperature, the temperature of the gas in the cylinder rises. $\Delta U_{cv} = H_{in} > 0$, so $\Delta T > 0$. You may apply the reverse for the release of air from the cylinder. In this case: $\Delta U_{cv} = -H_{out} < 0$, so $\Delta T < 0$. http://en.wikipedia.org/wiki/Thermodynamic_system#Flow_process
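A standard special case makes the temperature rise explicit: for an adiabatic, rigid, initially evacuated tank filled from a supply line, the energy balance above reduces to $U_{final} = m\,h_{in}$, i.e. $c_v T_{final} = c_p T_{in}$, so $T_{final} = \gamma T_{in}$ for an ideal gas with constant heat capacities. A quick sketch (the 300 K supply temperature is just an illustrative number):

```python
# Special case of the flow-process energy balance: adiabatic (dQ = 0),
# rigid (dW = 0), initially evacuated tank filled from a line at T_in.
# Then U_final = m * h_in  =>  cv * T_final = cp * T_in
#  =>  T_final = gamma * T_in   (ideal gas, constant heat capacities).
def fill_temperature(T_in, gamma=1.4):
    """Final gas temperature after adiabatically filling an evacuated rigid tank."""
    return gamma * T_in

T_in = 300.0                       # supply air at 300 K (illustrative)
T_final = fill_temperature(T_in)   # ~420 K: noticeably hotter than the supply
```

For air ($\gamma \approx 1.4$) the gas in the tank ends up roughly 40% hotter (in kelvin) than the supply line, which is why the cylinder warms so noticeably during filling.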
{ "domain": "physics.stackexchange", "id": 60941, "tags": "thermodynamics, pressure, temperature" }
Distance between Newton's Rings fringes does not seem linear
Question: On the outer edges of Newton Ring patterns, the fringes are really close together, and much more uneven. Also, the spacing does not seem to decrease linearly at all as you move from the center. Why is this? (If there is maths involved, a diagram would be very helpful for me.) Answer: I am answering your question here, but please provide more information about your goals/experience, as specified by the comments. Primarily, I would like to say that I was planning on answering your question much less in depth than I ended up doing. However, while brushing up, I got carried away and figured out some very interesting calculations concerning your question: In order to aptly portray the reason for this phenomenon, I will begin by addressing why the rings form. In addition, I would like to note that I will be using a convex glass lens as an example for the sake of simplicity (and by your assertions, I am assuming that is what you are familiar with). However, most experimental research done with this phenomenon requires far more complex calculations, with changing radii of curvature [these calculations are sometimes conducted to test the flatness of a glass surface beyond what can be done with a spherometer]. The illuminated rings become visible when the change in path between the two interfering waves is: $\bigg\{\cfrac{\lambda}{2}, \cfrac{3\lambda}{2}, \cfrac{5\lambda}{2},...\bigg\}$ Thus, the path difference between any pair of adjacent rings is $\pm \lambda$ (depending on whether the adjacent ring is on the inside or outside relative to its partner). The rings themselves are formed via thin film interference, by a minuscule layer of air between a curved glass surface and the flat glass surface where it resides. Due to the curvature of the glass surface, the thickness of the air layer does not increase linearly. The farther a point is from the center, the smaller the horizontal distance needed for the vertical thickness to increase by $\lambda$.
Thus, the horizontal distance between two neighboring rings decreases as we observe radially outward from the center. Here is a visual: $\textbf{Mathematical approximation for high order ring separation:}$ This part I believe you will find exceptionally useful, since your objective concerned the small separation of the higher-order fringes. Mathematically speaking, if you want to address the change in distance between adjacent rings, it is possible to make an approximation. If we assume that the order number $\mu \gg 1$ and that $\triangle\mu = 1$, we are allowed to safely assume that the distance between a pair of rings ($\triangle r$) is equal to: $\cfrac{(dr)(\triangle \mu)}{d\mu}$ (I am assuming here that you have taken introductory physics, which is typically taken with first semester calculus as a co-requisite, so this should make sense to you). Due to the fact that we have a standard formula for the radius of a ring of order $\mu$, [which I will not derive, but it is explained in a wikipedia link given to you above in the comments], we can make the following calculations: $\Omega = $ radius of curvature $\mu =$ order $r =$ radius of ring $\triangle r =$ separation of rings $\lambda =$ wavelength $r = \big((\mu - \frac12 )\lambda \Omega\big)^{1/2}$ and from above we know that $\triangle r = \cfrac{(dr)(\triangle \mu)}{d\mu}$ and since $\triangle\mu = 1$, $\triangle r = \cfrac{\Omega\lambda}{2}\big((\mu - \frac12 )\lambda \Omega\big)^{-1/2}$, or if we square our value and then take the square root, we can achieve a much simpler value of: $\triangle r = \sqrt{\cfrac{\lambda \Omega}{4(\mu - \frac12)}}$ Here, I have graphed this function. Notice that the fringe distance between a pair of rings has a horizontal asymptote approaching zero as the order number approaches infinity: It is worth mentioning here that you can do the same calculation for the dark rings, but you would use $\mu$ instead of $\mu - \cfrac12$. Let me know if you have any questions.
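If you want to play with the numbers, the two formulas above can be evaluated directly. A sketch (the sodium-line wavelength and 1 m radius of curvature are illustrative choices, not values from the question):

```python
import math

# Bright-ring radius and adjacent-ring separation from the formulas above:
#   r(mu)  = sqrt((mu - 1/2) * lam * Omega)
#   dr(mu) ~ sqrt(lam * Omega / (4 * (mu - 1/2)))
lam = 589e-9    # wavelength: sodium light, 589 nm (illustrative)
Omega = 1.0     # radius of curvature of the lens, in metres (illustrative)

def ring_radius(mu):
    return math.sqrt((mu - 0.5) * lam * Omega)

def ring_separation(mu):
    return math.sqrt(lam * Omega / (4.0 * (mu - 0.5)))

for mu in (1, 10, 100):
    print(mu, ring_radius(mu), ring_separation(mu))
# The separation shrinks like 1/sqrt(mu): the outer fringes crowd together,
# which is exactly the non-linear spacing the question observes.
```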
{ "domain": "physics.stackexchange", "id": 15490, "tags": "waves, interference, geometric-optics" }
Complexity for merging 3 sorted arrays using this specific algorithm
Question: During an interview I was asked to calculate the big theta complexity for the following algorithm that receives 3 sorted arrays of variable size and returns a new array which has the elements of the original 3 arrays. The algorithm is pretty basic: we set indexes at the beginning of each array and use such indexes for accessing the elements; in that fashion we find the minimum element of the 3 arrays (at the positions given by the indexes), then we insert that element into the resulting array and increase the corresponding index. We repeat until we are done processing every element. My answer was that the complexity was linear, because we are processing n elements and we are doing a constant number of comparisons for finding the minimum element out of the 3 arrays (at the given index positions). Yet, I was told that the complexity is not linear but is higher than nlogn. I have a few ideas, but could someone explain the actual complexity of this algorithm for me? Thanks for your time. Answer: Assuming that you're using some random-access model of computation (i.e., not an ordinary Turing machine) and that comparisons can be done in constant time, the algorithm you describe is linear. Each element of the final array is produced by comparing at most three elements of the original arrays, so each element of the output is produced in constant time. Perhaps you misunderstood the question and they were actually asking about something else? Perhaps they mis-stated their question and they were trying to ask about the complexity of three-way mergesort (sorting an array by splitting into three parts, recursively sorting the parts, then merging them)? Perhaps they were just wrong.
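For concreteness, here is a sketch of the described algorithm with a comparison counter: each output element costs at most 2 comparisons (a 3-way minimum), so the total work is $\Theta(n)$ for $n$ total elements, confirming the linear answer:

```python
# Three-way merge of sorted lists, instrumented with a comparison counter
# to make the linear bound concrete: at most 2 comparisons per output element.
def merge3(a, b, c):
    comparisons = 0
    i = j = k = 0
    out = []
    while i < len(a) or j < len(b) or k < len(c):
        # Collect the current head of each non-exhausted array.
        candidates = []
        if i < len(a): candidates.append((a[i], 0))
        if j < len(b): candidates.append((b[j], 1))
        if k < len(c): candidates.append((c[k], 2))
        best, which = candidates[0]
        for val, w in candidates[1:]:
            comparisons += 1
            if val < best:
                best, which = val, w
        out.append(best)
        if which == 0: i += 1
        elif which == 1: j += 1
        else: k += 1
    return out, comparisons

merged, cmps = merge3([1, 4, 7], [2, 5, 8], [3, 6, 9])
# merged is 1..9 in order; cmps is at most 2 * 9 for the 9 output elements.
```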
{ "domain": "cs.stackexchange", "id": 9530, "tags": "complexity-theory, runtime-analysis" }
Low-pass filtering a clipped signal
Question: In order to downsample a signal sampled at 48 kHz, I implemented an anti-aliasing filter: an elliptic LPF with a cutoff at 16 kHz and an order of 10. Everything looks OK until the input to this filter is a clipped signal, say a clipped sinewave. In that case, the output of the filter has a larger magnitude than the input (less power, but a higher max). Is there any theory to explain this behavior? Answer: Once the signal overshoots the clipping threshold, the low-pass filter tries to smooth out the sharp changes. But in doing so, it introduces some oscillations (the Gibbs phenomenon, as mentioned by Jdip in the comment) or ripples near the transition points. These oscillations cause the filtered signal to have a higher maximum magnitude (peak) than the original input signal, even though the overall power of the signal is reduced by the low-pass filtering.
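Another way to see this, without any filter-design details: hard-clipping a sine redistributes energy into odd harmonics, and the fundamental component of the clipped wave is actually larger than the clip level. If the low-pass filter removes the harmonics, what remains is approximately that fundamental, so the peak grows. A quick numerical check in pure Python (midpoint-rule integration; the clip level 0.7 is an arbitrary illustrative choice):

```python
import math

def clipped_sine(t, clip=0.7):
    """Unit sine, symmetrically hard-clipped at +/- clip."""
    return max(-clip, min(clip, math.sin(t)))

def fundamental_amplitude(clip=0.7, n=200_000):
    # Fourier sine coefficient: b1 = (1/pi) * integral_0^{2pi} f(t) sin(t) dt,
    # approximated with a midpoint sum over one period.
    s = 0.0
    for i in range(n):
        t = 2.0 * math.pi * (i + 0.5) / n
        s += clipped_sine(t, clip) * math.sin(t)
    return 2.0 * s / n

b1 = fundamental_amplitude()
# b1 is about 0.81 for a 0.7 clip level: the fundamental alone already
# exceeds the peak of the clipped input, before any Gibbs ripple is added.
```

So even an ideal brick-wall low-pass would raise the peak here; the elliptic filter's ripple and Gibbs-style oscillations only add to the effect.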
{ "domain": "dsp.stackexchange", "id": 12194, "tags": "filters, lowpass-filter, anti-aliasing-filter" }
Sierpinski’s Gasket Triangle in JavaScript
Question: I wrote the Sierpinski’s Gasket Triangle in JavaScript, but I feel the code can be better, especially from L32 to L47. Could you make it more organized? var canvas = document.getElementById('chaos'); var ctx = canvas.getContext('2d'); const GenerateRand = () => Math.floor(Math.random() * 7); const updateDot = (x, y, point) => { let X = Math.min(x,point.x)+(Math.max(x,point.x)-Math.min(x,point.x))/2; let Y = Math.min(y,point.y)+(Math.max(y,point.y)-Math.min(y,point.y))/2; return {x: X, y: Y}; } const createDot = (obj) => { ctx.beginPath(); ctx.arc(obj.x, obj.y, 1, 0, 2 * Math.PI, false); ctx.lineWidth = 1; ctx.strokeStyle = '#fc3'; ctx.stroke(); } const pA = {x: canvas.width/2, y: 5}; const pB = {x: 5, y: canvas.height-5} const pC = {x: canvas.width-5, y: canvas.height-5} createDot(pA); createDot(pB); createDot(pC); const begin = (iterations) => { let x = canvas.width/4; let y = canvas.height/2; for(let i=0;i<iterations;i++) { createDot({x, y}); let randN = GenerateRand(); if(randN == 1 || randN == 2) { const currentDot = updateDot(x, y, pA); x = currentDot.x; y = currentDot.y; } else if(randN == 3 || randN == 4) { const currentDot = updateDot(x, y, pB); x = currentDot.x; y = currentDot.y; } else if(randN == 5 || randN == 6){ const currentDot = updateDot(x, y, pC); x = currentDot.x; y = currentDot.y; } } } let time=0; let timer = setInterval(() => { if(time >= 500) return clearInterval(timer) begin(500); time++; }, 200); <div> <canvas id="chaos" width="500" height="500"></canvas> </div> Answer: Performance For positive numbers < 2 ^ 31 use num | 0 (bitwise or zero) to floor You are drawing an arc that is 1 pixel in radius, with the stroke width of 1 the diameter is 3 pixels. This covers an area much greater than the point you sample. Use fillRect to draw a single pixel as its much quicker. Better yet as they are all the same color create a single path and use ctx.rect to add to it. Render all rect in one pass at the end of begin function. 
Avoid creating objects needlessly. Create a working object and use that to hold intermediate values. This can greatly reduce memory allocation and GC overheads. Eg the object you return in updateDot is a waste of memory and time. If you test two numbers to find the max or min, knowing either means you also know the other and thus do not need to test for it. The long lines Math.min(p.y, p1.y) + (Math.max(p.y, p1.y) - Math.min(p.y, p1.y)) / 2 can be reduced to a single test and give a significant performance improvement. Style Use const for constants. Eg canvas and ctx should be const. Capitals only for names of objects that are instantiated with the new token. Eg GenerateRand should be generateRand. Avoid repeated code by using functions. Eg you create many instances of an object {x,y}, which would be better done via a function. Spaces between operators, commas, etc. Use === rather than ==. else on the same line as the closing }. The final statement in function begin does not need the test (randN == 5 || randN == 6) (assuming you want a new point each iteration). Code The random number generated is from 0 to 6 and you ignore 0, redrawing the same point 1 in 7 times. You can reduce the random to give 3 values 0,1,2 and perform the correct calculation on that, or use a counter and cycle the points. You could also put the points pA, pB, pC in an array and index them directly via the random number. Rather than use setInterval, use setTimeout. That way you don't need to clear the timer each time. Put magic numbers in one place and name them as constants. You reset the start point each time begin is called (first two lines). Better to just let it keep going. It may also pay to stop the rendering after a fixed number of points have been rendered. The rewrite. This is just an example of the various points outlined above. Also a few modifications: Automatically adjust the number of points rendered to keep the GPU load steady. Stop rendering after a fixed number of points rendered.
The starting points pA,pB,pC are in an array. Magic numbers as constants. Using a single render path to draw all points per render cycle. Using a working point wPoint to hold coordinates rather than create a new point for each point rendered. const ctx = canvas.getContext('2d'); const padding = 5; const renderDelay = 200; const maxTime = 2; // time in ms allowed to render points. const maxPointsToDraw = canvas.width * canvas.height * (1 / 3); var pointsPerRender = 500; // points to render per render pass var totalPoints = 0; // count of total points drawn ctx.fillStyle = '#fc3'; const generateRand = () => Math.random() * 3 | 0; const point = (x, y) => ({x, y}); const drawDot = p => ctx.rect(p.x, p.y, 1, 1); const updateDot = (p, p1) => { p.x = p.x < p1.x ? p.x + (p1.x - p.x) / 2 : p1.x + (p.x - p1.x) / 2; p.y = p.y < p1.y ? p.y + (p1.y - p.y) / 2 : p1.y + (p.y - p1.y) / 2; return p; } const points = [ point(canvas.width / 2, padding), point(padding, canvas.height - padding), point(canvas.width - padding, canvas.height - padding) ]; const wPoint = point(canvas.width / 4, canvas.height / 2); // working point const renderPoints = iterations => { totalPoints += iterations; const now = performance.now(); ctx.beginPath(); while (iterations --) { drawDot(updateDot(wPoint, points[generateRand()])) } ctx.fill(); const time = performance.now() - now; // use render time to tune number points to draw // Calculates approx time per point and then calcs number of points // to render next time based on that speed. // Note that security issues mean time is rounded to much higher // value than 0.001 ms so must test for 0 incase time is zero pointsPerRender = maxTime / ((time ? time : 0.1)/ pointsPerRender); if (totalPoints < maxPointsToDraw) { setTimeout(renderPoints, renderDelay, pointsPerRender | 0); } } renderPoints(pointsPerRender); <canvas id="canvas" width="500" height="500"></canvas>
{ "domain": "codereview.stackexchange", "id": 32953, "tags": "javascript, animation, canvas, fractals" }
Evaluating loss for non classifying convolutional neural network
Question: Sorry if my question is kind of dumb, I am very new to this field. I am trying to create a CNN that plays a variant of chess (for the examples, we'll use chess as it is close enough). My network, which is a policy network, outputs a vector of planes of probabilities (e.g. 1st pawn layer, 2nd rook layer, etc.); each layer contains scalars determining how "good" a move of the piece to that square would be, according to the network. My question is: given the input channels I and the expected outputs (0s everywhere and 1 for the move that was played), how do I calculate the loss for gradient descent? (E.g.: how do I evaluate the "closeness" of one move to another?) My wild guess is to weight each layer with a "wrongness" factor (e.g. if instead of wanting to move the first pawn, it tries to move the queen, it is labelled as really "wrong") and then apply some kind of spatial locality (e.g. if it doesn't move the first pawn but the second, and onto the very right square, it is not very wrong). But is it correct? And in general, how do you compute the loss of a non-classifying convolutional neural network? Answer: My wild guess is to weight each layer with a "wrongness" factor (e.g. if instead of wanting to move the first pawn, it tries to move the queen, it is labelled as really "wrong") and then apply some kind of spatial locality (e.g. if it doesn't move the first pawn but the second, and onto the very right square, it is not very wrong). But is it correct? You can relatively simply teach a policy network to predict human-like moves in your scheme using a database of moves. Actually, your "wrongness" is probably well enough represented by classification (your positive class might be "This is a good move") and the usual log loss that goes with it. After you have trained a policy network, you will want to look in depth at the literature for game-playing bots.
Your policy network might work quite well alongside Monte Carlo Tree Search, provided you have some kind of evaluation heuristic for the resulting position. A reinforcement learning approach to learn from self-play would take you further, enabling the bot to teach itself about good and bad moves/positions, but is too complex to explain in an answer here. I suggest look into the subject after training your network and seeing how good a player you can create using just the policy network and a move search algorithm. And in general , how to compute the loss of a non classifying convolutional neural network ? There are a few common options available for regression, such as mean square error (MSE) i.e. $\frac{1}{2N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$ where $\hat{y}_i$ is your prediction and $y_i$ is the ground truth for each example. If you use this loss function, and want to predict values outside of range 0-1, remember to use a linear output layer (i.e. no activation function after the last layer), so that the network can actually output close to the values you need - that's about the only difference in network architecture you need to care about. In the more general case of game-playing bots, it is usual (but not required) to calculate a predicted "return" or "utility" which is the sum of all rewards that will be gained by continuing to act in a certain way. MSE loss is a good choice for that. Although for zero-sum two player games where the reward is simply win/lose, you can use a sigmoid output layer (predicting chance of a win) and cross-entropy loss, much like a classifier. For your specific case, you can treat your initial policy network as a classifier. This immediately gives you some probability weightings for the predicted move, which you can use to pick the predicted best play, or maybe to guide Monte Carlo Tree Search. 
The kind of network that predicts utility or return from a position (or combination of position and action) is called a value network (or an action-value network) in reinforcement learning . If you have both a policy network and a value network, then you are on the way to creating an actor-critic algorithm.
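The classification view suggested in the answer can be sketched numerically. Everything below (the 6×8×8 plane layout, the move index, the function names) is an illustrative assumption, not something fixed by the question:

```python
import numpy as np

def softmax(logits):
    """Stable softmax over a 1-D array of scores."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def policy_loss(plane_logits, played_index):
    """Cross-entropy (log loss) between the network's move distribution and
    a one-hot target: 1 for the move that was played, 0 everywhere else."""
    probs = softmax(plane_logits.ravel())
    return -np.log(probs[played_index])

# Toy example: 6 piece planes over an 8x8 board, random scores.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 8, 8))
loss = policy_loss(logits, played_index=100)
probs = softmax(logits.ravel())
```

Minimising this loss over a database of (position, played move) pairs trains the network to imitate the recorded players; no hand-crafted per-layer "wrongness" weighting is needed, since the softmax already compares every candidate move against every other.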
{ "domain": "datascience.stackexchange", "id": 1913, "tags": "machine-learning, neural-network, training" }
Does the SVM require lots of features most of the time?
Question: So I know about the curse of dimensionality (too many features, too little data). Say I have a 3000-sample dataset; would 3 features be too few? Answer: So I'll post an answer to my own question. For anyone who comes across this post during the feature-selection / "more features or fewer" process, I don't know what you can do (well, except if you're on Python, then mork's answer has a good way to do feature selection there), but I can tell you what NOT to do. Do not, under any circumstances, determine the best features by training and testing the SVM / statistical model, i.e., "this feature works because it gives more classification accuracy than that one". Don't do it unless it is the only way left. It is a way, and you are free to use it, but if you can try something else, please do, and don't listen to anyone who tells you otherwise. How many features your problem requires depends on how many optimal features you can find. I'll leave it at that. How to find them? That is the million-dollar question. Edit: People are getting confused. When you don't know about the accuracy of your features, it is bad practice to "train" on the data to see how many features your SVM needs. It is better to select features on the basis of some criteria set by your problem. After that, if you want, you may try feature-selection techniques. But remember, removing too many dimensions may also decrease accuracy sometimes.
{ "domain": "datascience.stackexchange", "id": 1657, "tags": "svm, feature-construction" }
What would a clock read that has existed since the Big Bang?
Question: Just what the title says. The clock in question I am assuming to be infinitesimal in size (no spacetime curvature inside the clock). What would the proper time of a single point be at this epoch of the universe, according to the current cosmological models? Is that number 13.7 billion years? Or something else. Does cosmological inflation affect the answer? Answer: We have a pretty good understanding of the evolution of the current state of the universe from a very hot, dense, uniform expanding plasma. If there was (somehow) a stopwatch in that plasma that read 0 at that time, and it moved with the Hubble flow until the present day (which is to say that it moved in such a way that the universe around it appeared isotropic), then it would read about 13.7 billion years today. Where the plasma came from is unclear. It might have come from an inflationary epoch. The inflationary epoch can last for an arbitrarily long time, and the beginning of the inflationary epoch is not (necessarily) the beginning of time, but just another state of the universe. That state might not be isotropic enough for there to be a well-defined elapsed time since the beginning of time, if indeed there was a beginning of time. We just don't know.
{ "domain": "physics.stackexchange", "id": 90339, "tags": "general-relativity, cosmology, cosmological-inflation" }
Why must the leaving group in E1cb be poor?
Question: Typically the leaving group for E1cb is poor (like -OH or -OR) but why must this be the case? The substrate appears in the rate equation so surely a good leaving group would be beneficial? Answer: There is a range of elimination reactions with E1cb at one end, E1 at the other end and E2 in between. It is not uncommon for these different reaction pathways to compete with one another. For example, in some elimination reactions the E1 and E2 pathways can operate in competition with one another. An activation energy is associated with each of these 3 reaction pathways. Whichever pathway has the lowest activation energy will be the major pathway followed. By changing solvent, reaction temperature, relative strength of the nucleophile, relative strength of the base, leaving group stability, etc., we can raise or lower the activation energy for each of these 3 pathways and shift a reaction towards one side of this mechanistic range or the other. Typically the leaving group for E1cb is poor (like -OH or -OR) but why must this be the case? The E1 mechanism involves ejecting a leaving group in its first step, while the E1cb mechanism involves removing a proton in its first step. Let's consider how changing the leaving group can shift an elimination reaction towards one pathway or the other. To a first approximation, changing a leaving group will not affect how hard or easy it is to remove the proton. So it is reasonable to assume that the activation energy for the E1cb process doesn't change as we vary our leaving group. If we use a better leaving group (make our leaving group more stable) that means that we have made ejecting the leaving group a lower energy pathway and the E1 process will become more favorable relative to the other elimination mechanisms. 
If we change our leaving group to one that is an extremely poor leaving group (make our leaving group less stable), then ejecting it becomes a higher energy process and the E1 reaction becomes less competitive with the other reaction pathways. Saying this last sentence differently, when we use a poor leaving group we raise the activation energy for the E1 mechanism. Since the rate of the E1cb process is not affected by the leaving group, its rate remains unchanged. Consequently, a poor leaving group will disfavor the E1 process making the E1cb process more competitive. If the leaving group is bad enough, we can disfavor the E1 process so much that we wind up pushing our reaction all the way over to the E1cb side.
{ "domain": "chemistry.stackexchange", "id": 2653, "tags": "organic-chemistry, reaction-mechanism" }
Does magnitude of a charge influence magnitude of force that individual charge exerts on another charge
Question: Two point charges, q1 and q2, are placed 0.3 m apart on the x-axis, as shown in the figure above. Charge q1 has a value of -3 nanocoulombs and q2 has a value of +4.8 x 10^-8 C. The net electric field at point P is 0. Given that q2 > q1, can it be said that q2 exerts a greater attractive electrostatic force on q1, as the magnitude of q2's charge is greater than the magnitude of q1's charge? Or is it that the attractive forces they exert on each other are equal? Answer: The magnitude of the forces $q_1$ and $q_2$ exert on each other is equal. According to Coulomb's law, the magnitude of the force that a charge $q_1$ will experience due to a charge $q_2$ is $$|\mathbf F_{12}|=k_e{|q_1q_2|\over r^2}\ ,$$ where $k_e$ is Coulomb's constant and $r$ is the distance between the charges. But that equation is symmetrical in $q_1$ and $q_2$. I.e., if you calculate $$|\mathbf F_{21}|=k_e{|q_2q_1|\over r^2}\ ,$$ you get the same numeric value, because $q_{1}q_{2}=q_{2}q_{1}$.
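A quick numerical check of the symmetry argument, using the values from the question (the variable names below are just for illustration):

```python
k_e = 8.9875517923e9   # Coulomb constant, N*m^2/C^2
q1 = -3e-9             # -3 nC
q2 = 4.8e-8            # +4.8e-8 C
r = 0.3                # separation in metres

# Magnitude of the force on q1 due to q2, and on q2 due to q1:
F_12 = k_e * abs(q1 * q2) / r**2
F_21 = k_e * abs(q2 * q1) / r**2
```

Because q1*q2 = q2*q1, the two magnitudes come out identical regardless of which charge is larger, which is Newton's third law seen through Coulomb's law.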
{ "domain": "physics.stackexchange", "id": 16612, "tags": "electromagnetism, forces, electrostatics, electricity, charge" }
Stereoisomers of Hexachlorocyclohexane
Question: What is the basis of naming the various isomers of hexachlorocyclohexane $\alpha$, $\beta$, $\gamma$, etc.? Here is a list of these isomers. Is it just random naming, or is it according to some convention? Answer: I didn't find anything definite. However, I found a PDF from the State's Institution for Protection of the Environment in Baden-Württemberg (Landesanstalt für Umweltschutz) which gives me two or three relevant clues. Along with my intuition and a chat with a labmate, I conclude: There is nothing systematic or conventionalised in the names of the isomers. They were likely named after the order in which they were isolated or their structures were determined. Clues I used: The $\zeta$, $\eta$ and $\theta$ isomers were synthesised last. $\alpha$, $\beta$, $\gamma$ and $\delta$ were characterised first by a guy called Teunis van der Linden, after whom the active insecticide $\gamma$-hexachlorocyclohexane was named lindane. From this I conclude that $\epsilon$ was discovered after $\alpha$ to $\delta$.
{ "domain": "chemistry.stackexchange", "id": 4265, "tags": "nomenclature, stereochemistry" }
Error implementing ros timer
Question: My goal was to make a Time Stamp on each message published from the commlink node and subscribe them to my pd_controller. I need the Time Stamps to apply numerical methods. In this case is a simple differentiation. But as I go further into the book there are lots of cool methods. The problem is that my code compiles, but it does not run. I probably did something wrong, and I could not find a tutorial doing something similar with ROS in order to understand what I made. So, how do I write my own controller? What are the best practices? How do I identify and fix my code? This my pd_controller node #include <ros/ros.h> #include <geometry_msgs/Twist.h> #include <std_msgs/Float32.h> #include <std_msgs/Float64.h> #include <std_msgs/Int32.h> #include "surp/Int32Stamped.h" #include <math.h> surp::Int32Stamped encoder; ros::Subscriber encoder_sub; ros::Publisher vel_pub; geometry_msgs::Twist vel; // subscriber from comm encoder void encoderCallback(const surp::Int32Stamped::ConstPtr& tk){ //Modify this equation to be the Controller Function double timelapse = double(encoder.header.stamp) - double(encoder.header.stamp); double kd = (double(encoder.data) - double(encoder.data))/timelapse; vel.linear.x = float(((encoder.data*234)*5886) + kd); encoder.data = tk->data; encoder.header.stamp = tk->header; vel_pub.publish(vel); } //make publisher for cmd_vel //create a rotary encoder to send cmd_vel int main(int argc, char** argv){ ros::init(argc, argv, "pd_controller"); ros::NodeHandle nh; vel_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1); encoder_sub = nh.subscribe<std_msgs::Int32> ("comm_encoder", 10, &encoderCallback); ros::spin(); return 0; } I get this error: /home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp: In function ‘void encoderCallback(const ConstPtr&)’: /home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp:21:61: error: no match for ‘operator/’ (operand types are ‘double’ and ‘ros::Duration’) double kd = (double(encoder.data) - 
double(encoder.data))/timelapse; ^ /home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp:24:25: error: no match for ‘operator=’ (operand types are ‘std_msgs::Header_<std::allocator<void> >::_stamp_type {aka ros::Time}’ and ‘const _header_type {aka const std_msgs::Header_<std::allocator<void> >}’) encoder.header.stamp = tk->header; ^ In file included from /opt/ros/kinetic/include/ros/ros.h:38:0, from /home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp:1: /opt/ros/kinetic/include/ros/time.h:176:22: note: candidate: ros::Time& ros::Time::operator=(const ros::Time&) class ROSTIME_DECL Time : public TimeBase<Time, Duration> ^ /opt/ros/kinetic/include/ros/time.h:176:22: note: no known conversion for argument 1 from ‘const _header_type {aka const std_msgs::Header_<std::allocator<void> >}’ to ‘const ros::Time&’ pd_controller/CMakeFiles/pd_controller.dir/build.make:62: recipe for target 'pd_controller/CMakeFiles/pd_controller.dir/src/pd_controller.cpp.o' failed make[2]: *** [pd_controller/CMakeFiles/pd_controller.dir/src/pd_controller.cpp.o] Error 1 CMakeFiles/Makefile2:784: recipe for target 'pd_controller/CMakeFiles/pd_controller.dir/all' failed make[1]: *** [pd_controller/CMakeFiles/pd_controller.dir/all] Error 2 Makefile:138: recipe for target 'all' failed make: *** [all] Error 2 Invoking "make -j4 -l4" failed This is my whole work space with all nodes I am applying: https://github.com/renanmb/gopigo_ws/tree/master/src Originally posted by renanmb on ROS Answers with karma: 33 on 2017-08-08 Post score: 0 Original comments Comment by jayess on 2017-08-08: What book are you referring to? Comment by billy on 2017-08-09: You say 'it does not run'? What doesn't run, and how do you know it doesn't run? Have you verified that the topics that trigger your callbacks are being published to? Like the topic "encoder"? Comment by renanmb on 2017-08-09: Yes, I verified.
I have a node called test where I do a version much simpler to test all nodes and my robot. It worked. My pd_controller node does not run (rosrun give error and close) when I tried to use the TimeStamps. I believe I implemented the msg right. So it might be the ros::Timer. Comment by renanmb on 2017-08-09: Right Know I am practicing using Modern Control from Ogata. I have other books that I will study, Introduction to Robotics-Craig, Theory of Applied Robotics - Jazar, Automatic control system - Benajmin C. Kuo, Robots Vision and Control - Peter Corke, Robot Modeling and Control - Mark W. Spong. Comment by jayess on 2017-08-09: You said that it gives errors, can you please post them and anything else that the terminal prints out? Comment by jayess on 2017-08-10: To set timestamps in the header of a message you should be using a ROS data type like ros::Time::now() or nh.now() instead of surp::Int32Stamped Comment by renanmb on 2017-08-10: I got this error on commlink node: [ERROR] [1502413341.933058388]: Client [/test_encoder] wants topic /comm_encoder to have datatype/md5sum [std_msgs/Int32/da5909fbe378aeaf85e547e830cc1bb7], but our version has [surp/Int32Stamped/e7344a45486eefa24d2f337265df37ce]. Dropping connection Comment by renanmb on 2017-08-10: On the code that publishes the Time Stamp I have ros::Time::now() surp::Int32Stamped is my custom msg. I believe I need it to be able to subscribe the topic right? Comment by jayess on 2017-08-10: I've updated my answer addressing the error in your comment. Answer: Line 6 is missing a closing quotation mark: #include "surp/Int32Stamped.h should be #include "surp/Int32Stamped.h" Edit: You're asking a lot of questions in one question. Perhaps asking new, separate questions will get more responses. 
A couple of Google searches turned up these links for issues regarding "best practices": https://github.com/ethz-asl/ros_best_practices/wiki ROS Best Practices When should I split my code into multiple packages, and what's a good way to split it? Roslaunch tips for large projects Best practices for organizing a project [closed] The question "how do I write my own controllers?" is very general and difficult to answer. Try to narrow it down a bit and ask a new question. Edit 2: Are you sure that you've updated your code and re-compiled it? Your repo still shows the same code missing the closing " on line 6, as I pointed out. The message `/home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp: line 9: //std_msgs: No such file or directory` means that it's looking for a directory //std_msgs which would make sense if it thinks that it's part of the include statement on line 6. Either you haven't updated and re-compiled your code or you're not copying and pasting the entire contents of the output of the terminal. Edit 3: You're mixing up data types. Your publisher has the surp::Int32Stamped data type, your calback function encoderCallback is expecting a data type of surp::Int32Stamped in its argument while your subscriber is saying that the data type will be surp::Int32. The error in your comment gives this away. The error in your question is unrelated. You should update your question with the full terminal output. That's the best way to get help from everyone. Originally posted by jayess with karma: 6155 on 2017-08-09 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by renanmb on 2017-08-09: I still want to know, how do I write my own controllers? What are the best practices? I saw a package some people made called controlit, I want to learn how to organize myself to do something like that on the long term. Comment by renanmb on 2017-08-10: That was not the mistake, I already had tha right. Sorry for checking later. 
The Error keep the same. Comment by jayess on 2017-08-10: What do you mean you already had that right? If you fixed your code and it still doesn't work you should update your question. Comment by renanmb on 2017-08-10: The Error message is the same, the code I posted was with a mistake when copied it here. So the question is the same as when I started. One node work the other don't. The error message makes no sense. Comment by renanmb on 2017-08-10: Now I updated my github. The error is this, the datatype thing was because I forgot on node running. I posted everything I have . The error in line 9 should never be happening because that line is commented. I run catkin_make and catkin_make install everytime I change my code. Comment by renanmb on 2017-08-10: commlink node work just fine publishing the message I need and it does not give me this Error: /home/rmb/gopigo_ws/src/pd_controller/src/pd_controller.cpp: line 9: surp::Int32Stamped: command not found Comment by jayess on 2017-08-10: So what is the error? The one in your comment or the one in the question? You're now saying the one in your comment is not an error? Because I still see the same issue mixing of data types that I pointed out in my answer. Comment by renanmb on 2017-08-10: Now I see. When I go to my lab I will try that and see. Comment by renanmb on 2017-08-11: It did not work. The problem was on the makefile. I solved the makefile problem then now I have ros::Time problems I will update my github. Comment by jayess on 2017-08-11: Do you mean your CMakeLists.txt? Because you shouldn't have to touch your Makefile. Also, this fix seems odd (to me) given there error that you posted. Was what you posted the terminal output? Comment by renanmb on 2017-08-11: I forgot to add add my new msg into the makefile . That was the issue. I fixed. Now I have the ros::Time I just want to do a simple differentiation, I have no idea how to use the ros::Time. 
Comment by jayess on 2017-08-11: If you fixed your problem you should write an answer and accept it. If you want help with using ros::Time you should create a new question. Comment by renanmb on 2017-08-11: Just thanks for spending your time. We could not achieve any result with this post. I hope I could delete it.
{ "domain": "robotics.stackexchange", "id": 28560, "tags": "ros" }
How to measure one of the qubits in a two-qubit register?
Question: How do I measure the first qubit of an entangled vector, say \begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \\ \end{pmatrix} is what I get on the end of Deutsch's algorithm. If I get it right, I should now measure the first qubit in this 2-qubit register. How can I do it? Answer: To measure, observe that you are simply projecting a quantum state onto some basis set of vectors. First, I will note that this state is not normalized. Let us first define the following quantum state. $$|\psi_i\rangle = \begin{pmatrix}1\\-1\\0\\0\end{pmatrix}.$$ Then, calculating the corresponding probability yields: $$|\langle \psi_i|\psi_i\rangle|^2 = (1)(1) + (-1)(-1) = 2.$$ So to normalize this state, we will simply divide by $\sqrt{2}$. Thus, we obtain the state: $$|\psi\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}.$$ We now wish to measure this state in the standard basis, and so we wish to project the state onto the set of basis vectors: $$|00\rangle = \begin{pmatrix}1\\0\\0\\0\end{pmatrix}, |01\rangle = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},|10\rangle = \begin{pmatrix}0\\0\\1\\0\end{pmatrix},|11\rangle = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}$$. We will now calculate the probability amplitude of the state collapsing to each of those states. 
That is, we wish to calculate: $$\langle00|\psi\rangle\\=\frac{1}{\sqrt{2}}\begin{pmatrix}1&0&0&0\end{pmatrix}\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}\\=\frac{1}{\sqrt{2}}.$$ $$\langle01|\psi\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}0&1&0&0\end{pmatrix}\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}\\=-\frac{1}{\sqrt{2}}.$$ And although it is trivial to see that the amplitudes of the two remaining states will be zero, I will include the calculations for completeness: $$\langle10|\psi\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}0&0&1&0\end{pmatrix}\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}\\=0.$$ $$\langle11|\psi\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0&1\end{pmatrix}\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}\\=0.$$ And so we see that the probability of obtaining the $|00\rangle$ and $|01\rangle$ states are 0.5 each, and so measurement of the first qubit must yield the $|0\rangle$ state. To see what would happen if you measured the second qubit, simply sample the $|00\rangle$ and $|01\rangle$ states once according to the aforementioned probabilities. Edit: In response to a comment left on this answer, I have added the following note. If you have the state: $$|\psi\rangle = \alpha_0|0\rangle + ... + \alpha_N|N\rangle,$$ then the probability amplitude of obtaining a component of the state, $|\psi_i\rangle$, is given by $\langle \psi_i|\psi\rangle$. Consequently, the probability of measuring a value associated with $|\psi_i\rangle$ is given by $|\langle \psi_i|\psi\rangle|^2$.
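The projection calculation above can be reproduced numerically with NumPy (a sketch of the same arithmetic, not part of the original answer):

```python
import numpy as np

# Normalize the state from the question.
psi = np.array([1.0, -1.0, 0.0, 0.0], dtype=complex)
psi = psi / np.linalg.norm(psi)

# Probabilities of collapsing to |00>, |01>, |10>, |11> in the standard basis.
probs = np.abs(psi) ** 2

# Measuring only the first qubit: outcome 0 corresponds to {|00>, |01>},
# outcome 1 to {|10>, |11>}.
p_first_is_0 = probs[0] + probs[1]
p_first_is_1 = probs[2] + probs[3]
```

As in the worked calculation, the first qubit is found in the $|0\rangle$ state with certainty, while a measurement of the second qubit would give 0 or 1 with probability 1/2 each.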
{ "domain": "quantumcomputing.stackexchange", "id": 1113, "tags": "entanglement, measurement, computational-models, deutsch-jozsa-algorithm" }
Relationship between Energy density and Curvature
Question: I don't know GR, so please keep that in mind while answering. In the Friedmann equations, does energy density have an effect on curvature, or vice versa? Or are they separate things that don't affect each other? For example, can we have an energy density $\rho_0$ that is less than the critical density $\rho_c$ $(\rho_0<\rho_c)$ in a positive-curvature universe? Or a hyperbolic universe with $\rho_0>\rho_c$? In general, it seems they should affect each other, but I couldn't be sure. Answer: The Einstein field equations relate the components of spacetime curvature to the density and flow of energy and momentum, somewhat similarly to how Maxwell’s field equations relate the electromagnetic field to the density and flow of electric charge. Energy density and spacetime curvature are separate but related things. It is common to say that energy density “causes” curvature. However, you can have curvature in places where you don’t have energy density, just as in electromagnetism you can have electromagnetic field in places where you don’t have charge. A homogeneous and isotropic universe can have positive, negative, or zero curvature. Zero curvature corresponds to a critical energy density, positive curvature to greater energy density, and negative curvature to lesser energy density.
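The last paragraph of the answer pins down the question's examples: for a homogeneous, isotropic universe, the sign of the spatial curvature is tied to how the density compares with the critical density. A minimal sketch of that correspondence (the function is illustrative, not a standard API):

```python
def curvature_sign(rho, rho_crit):
    """Sign of the spatial curvature in an FRW universe:
    +1 (closed) if rho > rho_crit, -1 (open/hyperbolic) if rho < rho_crit,
    0 (flat) if they are equal."""
    if rho > rho_crit:
        return +1
    if rho < rho_crit:
        return -1
    return 0
```

So the combinations asked about (rho_0 < rho_c with positive curvature, or a hyperbolic universe with rho_0 > rho_c) are ruled out under the homogeneity and isotropy assumptions of the Friedmann equations.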
{ "domain": "physics.stackexchange", "id": 55051, "tags": "cosmology, universe, curvature" }
How does an electron's wave function change when it moves between energy levels?
Question: I'm taking a class on QM and we're simulating the wave function of an electron in a box at the lowest energy level and I'm supposed to change the simulation to show the wave function for the next energy level. The problem is that I don't quite understand the relationship between position and energy level. I'm fine with the formula for the energy levels, but I don't see how to relate it to $\psi(x)$, as it doesn't even involve $x$. And yet, I know there is a relationship -- if I understand correctly, electron orbitals are just PDFs of position obtained from normalizing and squaring the position wave functions of electrons with different quantum numbers, one of which is energy level, and these orbitals are very obviously different, as can be seen from the many diagrams of them. So, clearly energy level does affect the position function, and, for that matter, so do the other quantum numbers. But what's the specific mathematical relationship, either for a theoretical electron in a box or for an electron in an atom? Like, say I already have an expression for $\psi(x)$ at an arbitrary energy level. How would I modify it to model an electron at a different energy level? I did see these questions: Relationship between Quantum Numbers and the Wave-function Relationship between Energy Level and electron position https://physics.stackexchange.com/questions/355461/energy-eigenfunction-completeness https://physics.stackexchange.com/questions/345244/in-qm-we-have-position-and-momentum-space-what-about-energy-space but none of them answer the question of what the specific mathematical relationship is in a useful form. I'm not really sure if this is a physics or chemistry question, but I'm picturing electron orbitals in atoms so I'm leaning towards it being more chemistry. Answer: You need to go back to the very start. Here, you're kind of asking: I have a solution $\psi_0$, how do I get the next solution $\psi_1$?
The answer is to look at how $\psi_0$ was obtained, and it turns out that that same process which yielded $\psi_0$ will give you all the $\psi_i$'s along with their associated energies $E_i$. Physical systems tend to admit a series of wavefunctions* $$\{\psi_0(x), \psi_1(x), \cdots\},$$ which satisfy the time-independent Schrödinger equation $$\hat{H}\psi_i(x) = E_i \psi_i(x)$$ for all $i$. The way to obtain the wavefunction is to solve the Schrödinger equation. For a particle in a box, you have that $$\hat{H} = \frac{p^2}{2m} = -\frac{\hbar^2}{2m}\left(\frac{\mathrm{d}^2}{\mathrm{d}x^2}\right)$$ (for $0 \leq x \leq L$, $L$ being the length of the box) and so to obtain the wavefunctions you need to solve the differential equation $$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2\psi}{\mathrm{d}x^2} = E\psi, \label{eq:de}\tag{1}$$ which also respect the boundary conditions $\psi(0) = \psi(L) = 0$. It turns out the functions $\psi$ have a general formula of $$\psi_n = \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right),$$ with the energies $$E_n = \frac{n^2\hbar^2\pi^2}{2mL^2}.$$ So, generally speaking, you don't generate one wavefunction from another one: you obtain all of them at the same time by solving the differential equation (eq. $\ref{eq:de}$). The same is true of pretty much any other system, including the hydrogen atom (which gives you the orbitals). It's just that the Hamiltonian is different, so you get a different differential equation, and ultimately different solutions. Now... there are some cases where you can go from one stationary state to another. A notable example is the quantum harmonic oscillator, where you can use 'raising' and 'lowering' operators to go between stationary states; however, this isn't something you can conclude by looking at the final wavefunctions, it's something you have to figure out by studying the form of the Schrödinger equation. So again, you have to go from the start. 
TLDR: The "specific mathematical relationship" you're looking for is that they all satisfy the Schrödinger equation. * To be technically correct, these are stationary states; they are 'special' wavefunctions which do not change over time (in that the expectation values of observable quantities like position and momentum are independent of time). Non-stationary states are perfectly permissible too, and can be constructed as linear combinations of stationary states.
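The general formulas above can be checked numerically; moving to "the next energy level" is just choosing the next $n$ in the same family of solutions, not modifying $\psi_1$. A sketch in natural units ($\hbar = m = L = 1$, chosen only for illustration):

```python
import numpy as np

hbar = 1.0
m = 1.0
L = 1.0

def psi(n, x):
    """Stationary state n of the particle in a box."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    """Energy eigenvalue for quantum number n."""
    return n**2 * hbar**2 * np.pi**2 / (2.0 * m * L**2)

# Check normalization and orthogonality on a fine grid.
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

norm_1 = np.sum(psi(1, x) ** 2) * dx            # should be ~1
overlap_12 = np.sum(psi(1, x) * psi(2, x)) * dx  # orthogonality, ~0
energy_ratio = energy(2) / energy(1)             # E_n scales as n^2
```

Each `psi(n, x)` satisfies the same Schrödinger equation and boundary conditions; only $n$ changes between levels, which is the "specific mathematical relationship" the question asks for.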
{ "domain": "chemistry.stackexchange", "id": 17270, "tags": "quantum-chemistry, energy, electrons, orbitals" }
What does $|V|=O(|E|)$ mean?
Question: I was reading about Dijkstra's algorithm from this Stanford University lecture presentation. On page 18 it says Dijkstra's algorithm is $O(|V|\log|V|+|E|\log|V|)$ and I understand why. But then it says that $O(|V|\log|V|+|E|\log|V|) = O(|E|\log|V|)$, because $|V|=O(|E|)$. What does $|V|=O(|E|)$ mean and why is it so? Answer: First, $|V|$ is the number of vertices and $|E|$ is the number of edges. The point is that if a graph is connected it must have at least $|V|-1$ edges. Therefore, $|V|\leq |E|+1$, so $$|V|\log|V| + |E|\log|V| \leq 2(|E|+1)\log|V|\leq 3|E|\log |V|\,.$$ Writing $|V|=O(|E|)$ is something of an abuse of notation. $O(\cdot)$ is an asymptotic statement about what happens as some variable becomes large, over some infinite family of instances. So, here, we're supposed to imagine that the infinite family of instances is the set of all connected finite graphs. It would have been clearer to just write $|V|\leq |E|+1$.
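The inequality $|V|\leq |E|+1$ for connected graphs can be spot-checked empirically. The generator below (an illustrative helper, not from the lecture) builds a random spanning tree first, so the graph is connected by construction:

```python
import random

def random_connected_graph(n, extra_edges, seed=0):
    """Return the edge set of a connected graph on vertices 0..n-1:
    a random spanning tree plus up to extra_edges additional edges."""
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    # Spanning tree: attach each new vertex to a random earlier one.
    for i in range(1, n):
        a, b = sorted((nodes[i], nodes[rng.randrange(i)]))
        edges.add((a, b))
    # Extra edges, capped by the size of the complete graph.
    target = min(n - 1 + extra_edges, n * (n - 1) // 2)
    while len(edges) < target:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    return edges

# Count violations of |V| <= |E| + 1 over a few sizes; there should be none.
violations = sum(
    1
    for n in (2, 5, 10, 30)
    for extra in (0, 3, 10)
    if n > len(random_connected_graph(n, extra, seed=n * 100 + extra)) + 1
)
```

Since $|V|\leq |E|+1$, the $|V|\log|V|$ term is dominated by (a constant times) $|E|\log|V|$, which is exactly the simplification the slides make.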
{ "domain": "cs.stackexchange", "id": 14467, "tags": "graphs, algorithm-analysis, time-complexity, asymptotics, big-o-notation" }
C - Fast & simple bump allocator
Question: I recently have been very interested in custom allocators, so I decided to make the very basic (this should be faster than malloc) bump allocator. Here is my code in C: #include <stdio.h> #include <stdlib.h> #include <assert.h> #include <sys/mman.h> #include <unistd.h> #define KB(size) ((size_t) size * 1024) #define MB(size) (KB(size) * 1024) #define GB(size) (MB(size) * 1024) #define HEAP_SIZE GB(1) typedef intptr_t word_t; void* free_ptr = NULL; void* start_ptr; word_t end_ptr; void init() { free_ptr = mmap(NULL, HEAP_SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (free_ptr == MAP_FAILED) { printf("unable to map memory\n"); abort(); } start_ptr = free_ptr; end_ptr = (word_t) start_ptr + HEAP_SIZE; } void* bump_alloc(size_t size) { void* new_ptr = free_ptr; free_ptr = (char*) free_ptr + size; return new_ptr; } void free_all_mem() { munmap(start_ptr, HEAP_SIZE); } int main() { init(); int* x = (int*) bump_alloc(sizeof(int)); assert(x != NULL); *x = 10000; printf("x: %d\n", *x); free_all_mem(); } This is my first custom allocator, so could I get some tips on optimization, etc. Answer: could I get some tips on optimization, etc. Alignment loss free_ptr = (char*) free_ptr + size; simply moves the next available allocation so many bytes later. This differs from malloc(), whose allocations meet all possible system alignment needs. Either document that bump_alloc() does not provide aligned allocations or change the code to do so. Error messages I'd expect the error message to go out on stderr - yet it's your call. // printf("unable to map memory\n"); fprintf(stderr, "Unable to map memory\n"); Missing include intptr_t is defined in <stdint.h>. Best to include that rather than rely on a hidden inclusion. Good type math The below avoids int overflow. #define KB(size) ((size_t) size * 1024) Better code would () each macro parameter.
#define KB(size) ((size_t) (size) * 1024) Yet rather than type-casting, which may narrow the math, I'd recommend allowing gentle widening. The below multiplication will occur with the wider of size_t and the type of size. #define KB(size) ((size_t) 1024 * (size)) Unneeded cast, simplify A cast is not needed going from void * to an object pointer. Size the allocation to the de-referenced type: it is easier to code right, review, and maintain. // int* x = (int*) bump_alloc(sizeof(int)); int* x = bump_alloc(sizeof *x);
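The alignment point can be sketched with plain integer arithmetic (Python here just to show the rounding; the names are illustrative): round the bump offset up to the next multiple of the requested alignment before handing it out.

```python
def align_up(addr, align):
    """Round addr up to the next multiple of align (align a power of two)."""
    return (addr + align - 1) & ~(align - 1)

def bump_alloc(heap, size, align=16):
    """heap is a dict holding the current 'free' offset; returns the
    aligned offset of the new allocation and bumps the pointer."""
    p = align_up(heap["free"], align)
    heap["free"] = p + size
    return p

heap = {"free": 1}            # deliberately misaligned starting offset
a = bump_alloc(heap, 10)      # 1 rounds up to 16, free becomes 26
b = bump_alloc(heap, 4)       # 26 rounds up to 32, free becomes 36
```

In the C version, the same rounding would be applied to free_ptr (cast through an unsigned integer type such as uintptr_t) before returning it, which is what makes the allocator safe for any scalar type.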
{ "domain": "codereview.stackexchange", "id": 38476, "tags": "performance, c, mmap" }
Our choice of basis surely cannot effect possible outcomes of a measurement?
Question: Common sense says that, of course, the outcome of a measurement on a quantum system cannot be affected by what base we choose to represent it in. However, while studying QM text, it seems like they sometimes just almost suggest it... My question is quite vague, so let me instead give you an example which I think contains most of my confusion. Consider a system in a basis of simultaneous eigenstates of $L^2$ and $L_z$, with respective eigenvalues $l(l+1)\hbar^2$ and $m\hbar$. Let $\mathbf{\hat{n}}$ be a unit vector in a direction specified by polar angles ($\theta,\phi$). Clearly $L_n = \sin\theta\cos\phi L_x+\sin\theta\sin\phi L_y +\cos\theta L_z$ Now I have two slightly different questions: What are the possible results of a (precise) measurement of $L_n$, $L_n^2$? What are the possible expectation values of $L_n$, $L_n^2$ I would tell you $m\hbar$, $l(l+1)\hbar^2$ or maybe $m \hbar \cos\theta$, but this would mean that my choice of direction in space affected the outcome of the measurement? No idea and makes me question my answer to 1) again... Answer: Phenomena in quantum mechanics may be expressed using any basis (that's the English word for the set of vectors, not a "base"). It doesn't mean that all bases are equally useful for a given situation. In particular, a fundamental postulate of quantum mechanics says that right after every measurement, the system is found in one of the eigenstates of the observable that was just measured. That's why the basis of the eigenstates of $K$ is obviously more useful to describe the measurement of $K$ than other bases. Note that which basis is useful – or which basis describes possible post-measurement state – does depend on the kind of a measurement we decide to perform. This dependence of the "right analysis of the physical system" on the chosen way of observing it is really a main point of quantum mechanics. 
If the physical analysis could be made independently of the nature of observations, the theory would be by definition classical physics, not quantum mechanics. Such independence of the observations could be called "common sense" by someone – but that changes nothing about the fact that Nature contradicts this assumption. $L_n$ always has eigenvalues $m\hbar$ where $m$ is an integer. $L_n^2$ has eigenvalues $m^2\hbar^2$ – because it's just the square of the operator from the previous sentence. This shouldn't be confused with $L^2$ which has the eigenvalues $\ell(\ell+1)\hbar^2$ where $\ell=0,1,2,3,\dots$ $L^2$ may always be measured simultaneously with any $L_n$ and in that case, $m\in\{-\ell, -\ell+1,\dots, \ell-1,\ell\}$. The expectation value of any operator may be any number from the interval between the lowest and highest eigenvalue. So if we measure $L^2$ and $L_n$ at the same moment, then $\langle L_n\rangle$ may be any real number between $-\ell\hbar$ and $+\ell\hbar$. Note that for every $\vec n$, the spectrum of $L_n$ is the same. For all choices of $\vec n$, the operators $L_n$ are conjugate to each other, i.e. $$\exists U\in U({\mathcal H}):\quad L_{n'} = U L_n U^\dagger $$ Here, the operator $U$ is an operator on the Hilbert space that represents a rotation that turns the $\vec n$ axis to $\vec n'$ (passively or actively, one would have to be careful). The corresponding eigenstates of $L_n$ and $L_{n'}$ are also conjugate to each other – but the detailed sets of eigenvectors are different. So when we measure $L_n$, we bring the system to one of the basis vectors of $L_n$ by the measurement, and if we measure $L_{n'}$, the candidate post-measurement states are elements of the basis of different eigenvectors.
{ "domain": "physics.stackexchange", "id": 31146, "tags": "quantum-mechanics, angular-momentum, hilbert-space, measurement-problem, observables" }
Can all animals of the same species crossbreed?
Question: For example, take Canis lupus, the species of dog and wolf. Within their species, can all dog and wolf types crossbreed? We can forget the logistics and assume this is all done through artificial insemination or in vitro methods (let's assume that the in vitro methods would be feasible). If yes to the main question, does cross-breeding stop at the species class, or does cross-breeding extend across other taxonomic classes? From what I gathered, the taxonomic classes seem somewhat loose in their definitions. Perhaps I should ask, is there a rule "in general" to cross-breeding? Let me know if this question ought to be refined in any way. Answer: Species definitions are a somewhat contentious part of biology. There are no hard boundaries in nature that mean "this group here is one species, this group here is another species". Some people don't even believe that species truly exist and there are only gradations of relatedness. That being said, the Biological Species Concept is one of the more popular species definitions. From the linked website: "The biological species concept defines a species as members of populations that actually or potentially interbreed in nature". Thus, by definition, all animals of the same species can and do interbreed in nature. If two individuals breed in nature, they are considered the same species (under this definition). Of course, you can already see some problems with this definition. Under the BSC, are a dog and a wolf the same species? Do dogs and wolves breed "in nature"? Of course, fertile half-dog half-wolf animals exist, but does it even make sense to ask whether a domestic species breeds "in nature". I suppose you can say they can "potentially" breed in nature. But how do you define "potentially" breeding? Asking questions about species can lead to many exciting debates.
{ "domain": "biology.stackexchange", "id": 8477, "tags": "genetics, taxonomy" }
Is there a better way to handle multiple join statements in a linq query?
Question: I am rewriting a VB.NET app into C#. I won't subject you to the original code. I am mainly looking for a better way to handle all the join statements that are being done. I am dealing with a legacy database that I cannot currently change the structure of. (Though I am making my arguments for it.) I am wondering if there is a better / cleaner way to handle multiple join statements. I am using DB-first EF 6.1, so I have more options than were originally available. Here is the original LINQ query converted to C#: (from myObject in _context.MyObjects join myObjectType in _context.MyObjectTypes on myObject.CRTKey equals myObjectType.CRTKey join myObjectsSchedule in _context.MyObjectSchedule on myObject.CRS_Key equals myObjectsSchedule.CRS_Key join myObjectGroup in _context.MyObjectGroups on myObject.ReportKey equals myObjectGroup.ReportKey where myObjectGroup.ReportGroupNameKey == groupNameKey select myObject).Distinct(); This particular example only has three joins; I have a few blocks of code that have in excess of 6 join statements. Is there a better way to handle all the join statements? Answer: I'm not sure what you mean by a 'cleaner way'. The syntax you're using is known as query syntax and is by far the best for readability when you have multiple joins. If you use method syntax to express the same query you'll quickly see what I mean. However, if you use method syntax you'll be able to break those joins up using different methods. For example (a sketch): IQueryable<MyObject> AddGroupJoin(IQueryable<MyObject> query, IQueryable<MyObjectGroup> groups) { if (business_rule) { return query.Join(groups, o => o.ReportKey, g => g.ReportKey, (o, g) => o); } return query; } In this way you can reuse parts of your join and construct them dynamically. Hope this helps.
{ "domain": "codereview.stackexchange", "id": 8830, "tags": "c#, linq" }
Computation of Only Even or Odd Frequency Bins of DFT
Question: I have an algorithm where I am computing the FFT of a large signal. However, I desire only the even or odd terms of the DFT of the signal, but not both. Currently, I discard these undesired terms. Is there a way by which I may appreciably reduce the runtime complexity when only every other DFT term is desired? How might I begin to find such an optimization? If it is unclear, I will look for how the larger, encompassing algorithm could be improved, but I'm starting by examining the smallest, most expensive portion before thinking about more fundamental changes to the system. Answer: Consider a sequence $x[n]$ of length N, and assume $X[k]$ is its N-point DFT given by $$X[k] = \sum_{n=0}^{N-1}{x[n]e^{-j\frac{2\pi}{N}nk}}$$ $k = 0, 1,..., N-1$, which is computed through an N-point FFT. If you wish to compute the even-indexed ($k=0,2,4,...$), or the odd-indexed ($k=1,3,5,...$) indexed samples of $X[k]$, you can proceed with the following: Denote the even indexed samples of $X[k]$ as $X_e[k]$ : \begin{align} X_e[k] = X[2k] &= \sum_{n=0}^{N-1}{x[n]e^{-j\frac{2\pi}{N}n(2k)}} &\scriptstyle{\text{term 2k computes even samples of X[k]}}\\ X_e[k] &= \sum_{n=0}^{N-1}{x[n]e^{-j\frac{2\pi}{N/2}nk}} &\scriptstyle{\text{factor 2 is moved into N/2 term}}\\ X_e[k] &= \sum_{n=0}^{\frac{N}{2}-1}{x[n]e^{-j\frac{2\pi}{N/2}nk}} + \sum_{n=\frac{N}{2}}^{N-1}{x[n]e^{-j\frac{2\pi}{N/2}nk}} &\scriptstyle{\text{Divide the sum into 2 halves}}\\ X_e[k] &= \sum_{n=0}^{\frac{N}{2}-1}{x[n]e^{-j\frac{2\pi}{N/2}nk}} + \sum_{n=0}^{\frac{N}{2}-1}{x[n+\frac{N}{2}]e^{-j\frac{2\pi}{N/2}nk}} &\scriptstyle{\text{adjust the 2nd sum range}}\\ X_e[k] &= \sum_{n=0}^{\frac{N}{2}-1}{ (x[n]+x[n+\frac{N}{2}])e^{-j\frac{2\pi}{N/2}nk}} &\scriptstyle{\text{merge the 2 sums}}\\ \end{align} Final form suggest that the required even samples of N-point DFT of the sequence $x[n]$ can be computed from an N/2-point DFT of a new sequence $x_{half}[n]= x[n]+x[n+ N/2]$ whose length is half that of $x[n]$, assuming N 
is even. The signal $x_{half}[n]$ is simply computed by adding the second half of the signal $x[n]$ onto its first half. (If the length N of signal x[n] is not even, pad a zero to its tail to make it even.) The following Matlab/Octave code gives you the desired even-indexed samples of X[k] without computing a full N-point DFT of x[n]: Xe = fft ( x(1:N/2) + x(N/2 + 1 : end), N/2); Note that because of the addition of the halves before the FFT, efficiency will degrade slightly compared with a pure N/2-point FFT. The case for the odd-indexed samples proceeds similarly but results in a more complex form, which I summarize in the following Matlab script: % Let x[n] be the signal of length N xc = x .* exp(-j*2*pi*[0:N-1]/N) ; % Multiply x[n] with a complex phase term Xo = fft (xc(1:N/2) + xc(N/2 + 1 : N), N/2); % odd indexed samples of X[k] Due to the complex multiplication before the FFT, performance in the odd-index case is further reduced, but it is still preferable to a direct N-point FFT. The following Matlab/Octave excerpt demonstrates the computation of both the even- and the odd-indexed samples. N = 1024; % original sequence length x = randn(1,N); X = fft(x,N); % original DFT of X Xe = fft( x(1:N/2) + x(N/2 + 1: N) , N/2); % Even samples only from N/2 point FFT Xo = fft( ( x(1:N/2) - x(N/2+1:N) ).*exp(-j*pi*[0:2:N-2]/N) , N/2); % Odd samples only from N/2 point FFT figure,stem( abs ( Xe - X(1:2:N)) ); title('Xe - X(1:2:N)'); figure,stem( abs ( Xo - X(2:2:N)) ); title('Xo - X(2:2:N)'); Due to numerical roundoff effects, the difference plotted by the stems will be nonzero but very small...
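The same folding identities can be sanity-checked in a few lines of pure Python. The sketch below uses a naive O(N²) DFT in place of an FFT purely for clarity; the helper name `dft` and the test signal are my own illustrative choices, not part of the original answer.

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, standing in for an FFT just to verify the identities."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 0.0, 1.5]
N = len(x)
X = dft(x)

# Even bins X[2k]: N/2-point DFT of the folded signal x[n] + x[n + N/2]
xh = [x[n] + x[n + N // 2] for n in range(N // 2)]
Xe = dft(xh)

# Odd bins X[2k+1]: fold with a sign flip, then twiddle by e^{-j 2 pi n / N}
xo = [(x[n] - x[n + N // 2]) * cmath.exp(-2j * cmath.pi * n / N)
      for n in range(N // 2)]
Xo = dft(xo)

assert all(abs(Xe[k] - X[2 * k]) < 1e-9 for k in range(N // 2))
assert all(abs(Xo[k] - X[2 * k + 1]) < 1e-9 for k in range(N // 2))
```

Both assertions pass for any input, since they are just restatements of the algebra derived above.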
{ "domain": "dsp.stackexchange", "id": 3807, "tags": "fft, fourier-transform, dft" }
Question when comparing two experimental results
Question: In my textbook it says that if two experimental results vary by less than 3$\sigma$ then they can be considered to have arrived at the same result. My question is how you determine this "x$\sigma$". For example, if I did an experiment to calculate $g$ and my result was $g$=9.79$\pm$0.07, and I want to compare it to $g'$=9.80$\pm$0.22, should I just use $$\frac{g'-g}{error}\,\,\,\,?$$ But then what error do I use? If I use 0.07 I get a difference of 0.14$\sigma$, but if I use 0.22 I get 0.045$\sigma$. Or should I use a combination of both? Answer: Given that both measurements have a 1 std. dev. random error estimate (all bets are off when systematics and model dependencies rear their ugly heads!), you are effectively comparing the difference of the measurements with zero. So the error you use is the error of the difference, which means propagating the error in the usual way (i.e. adding the errors in quadrature): $$ \left[\frac{g' - g}{\sqrt{(\Delta g')^2 + (\Delta g)^2}}\right] \stackrel{?}{\le} 1 \;.$$ If the computed value is less than one, the measurements are in agreement. More than a few, and the measurements clearly disagree. Between 1 and, say, 3 is more ambiguous, but your instructor may want you to treat them as disagreeing so that you can have a yes/no answer.
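Plugging the question's own numbers into this formula makes the conclusion concrete (this is just illustrative arithmetic, nothing more):

```python
import math

g, dg = 9.79, 0.07     # first measurement and its 1-sigma error
gp, dgp = 9.80, 0.22   # second measurement and its 1-sigma error

# Error of the difference: the individual errors added in quadrature
sigma_diff = math.sqrt(dg**2 + dgp**2)   # ~0.231
n_sigma = abs(gp - g) / sigma_diff       # ~0.043

# Well below 1 sigma, so the two measurements of g agree comfortably.
assert n_sigma < 1.0
```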
{ "domain": "physics.stackexchange", "id": 43176, "tags": "measurements, error-analysis, statistics, data-analysis" }
Tool for analyzing ROS network traffic?
Question: We have a robot with 3 onboard computers with an internal LAN. One of the onboard computers creates an adhoc wifi network and forwards packets to the others. We control the robot via a laptop connected over wifi. We are managing to bring down our wifi connection pretty regularly, so I'm trying to figure out which nodes/topics need to be throttled or moved to a different machine. Is there a nifty ROS tool that shows bandwidth usage without affecting the network traffic, or is my best bet using wireshark? EDIT: "rostopic bw" doesn't quite do what I'm looking for, as it creates a new connection to the publisher and then monitors the bandwidth on that connection, which is actually dependent on what else is running. For example, starting up a second "rostopic bw /my_camera/image_raw" significantly slows down the first one. So, I can use it in isolation to figure out what topics are likely to be an issue, but I'd like to be able to observe the whole system as it runs and dies, w/o affecting it. (Or at least the part that's transferring data to/from a given computer) I'm not even sure that I'm asking the right question ... perhaps a better one would be "What's the best way to debug/understand network traffic in a system running ROS?" I'm envisioning some tool that's a mashup of rxgraph, rostopic and ???. If I'm approaching this all wrong, please let me know :) EDIT 2: I took a look at linux_networking, and it looks like it has tools to determine properties of a network and to emulate various network properties, but I didn't see anything that quite fits what I think I need. So, I'm trying to use wireshark (running on the laptop) to monitor packets going over the wifi connection to the robot. I can successfully capture the packets and view their contents. However, I'm stuck on how to determine which packets correspond to which nodes and topics. 
I've seen documentation about the connection header [1] and TCPROS [2], but I'm not sure how this translates into the raw data that I can view in wireshark. Can anybody point me to information about how to set up appropriate filters? [1] http://mirror.umd.edu/roswiki/ROS(2f)Connection(20)Header.html [2] http://mirror.umd.edu/roswiki/ROS(2f)TCPROS.html Originally posted by lindzey on ROS Answers with karma: 1780 on 2012-05-31 Post score: 4 Original comments Comment by Eric Perko on 2012-05-31: Have you looked through the linux_networking stack ( http://ros.org/wiki/linux_networking )? Maybe there is something you can use in there? Comment by MarkyMark2012 on 2012-05-31: I'd go with wireshark and add filtering as needed... Comment by lindzey on 2018-01-25: See https://answers.ros.org/question/61093/how-can-statistics-for-all-active-ros-topics-be-obtained/ for more recent answers. Answer: There are lots of standard linux tools for network diagnostics. Ones I use regularly include ntop, netstat, and wireshark. Adding more statistics counting into the client APIs would be a great feature, for now no one has wanted it enough to spend the time to write it. Originally posted by tfoote with karma: 58457 on 2012-06-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 9615, "tags": "ros, bandwidth" }
Hold and validate EAN code - follow-up
Question: Based on suggestions and some thoughts I made some refactorings of my classes to hold EAN13 codes, I created a contract interface which now is extended by BarCode classes, since there will be other types of BarCodes, like DUN-14, UPC-A... public interface BarCode {} I decide to move all the validations to factory class, so the EAN13 class looked like this: @Embeddable public class Ean13 implements BarCode { @Column(name = "ean_code", nullable = true, length = 13) private String code; public Ean13() { } public Ean13(String code) { this.code = code; } @Override public String toString() { return code; } } I changed the pre-generate exception creating a custom InvalidBarCodeException exception. In the first moment, this was created as a RuntimeException. But I think this is the case where checked is much better suited, because I'm forced to deal with invalid codes. public class InvalidBarCodeException extends Exception { private String code; private static final String INVALID_EAN = "INVALID EAN CODE"; public InvalidBarCodeException(String code) { super(INVALID_EAN + " "+code); this.code = code; } public String getCode() { return code; } } The validation was moved to a Predicate in a separated class: public class BarCodePredicate { public static Predicate<String> isValidEan13() { return p -> isValid(p); } private static boolean isValid(String code) { if (code == null || code.length() != 13) { return false; } if (!CharMatcher.DIGIT.matchesAllOf(code)) { return false; } String codeWithoutVd = code.substring(0, 12); int pretendVd = Integer.valueOf(code.substring(12, 13)); String[] evenOdd = SplitterByIndex.split(codeWithoutVd, idx -> idx % 2 == 0); int evenSum = sumStringDigits(evenOdd[0]); int oddSum = sumStringDigits(evenOdd[1]); int oddFator = oddSum * 3; int sumResult = oddFator + evenSum; int dv = getEanVd(sumResult); if (pretendVd != dv) { return false; } return true; } private static int sumStringDigits(String s) { return s.chars().map(n -> 
Character.getNumericValue(n) ).sum(); } private static int getEanVd(int s) { return 10 - (s % 10); } } So I use it in a factory class: public interface BarCodeFactory { BarCode create(String code) throws InvalidBarCodeException; default boolean isValid(String code, Predicate<String> predicate) { return predicate.test(code); } } public class Ean13Factory implements BarCodeFactory { @Override public BarCode create(String code) throws InvalidBarCodeException { if (!isValid(code, BarCodePredicate.isValidEan13())){ throw new InvalidBarCodeException(code); } return new Ean13(code); } } In the Product class now is simply a set method (it will be changed to BarCode interface): public class Product{ public void setEan(Ean13 ean) { this.ean = ean; } } And invalid codes are treated outside: Product p = new Product(); p.setDescription(name); p.setUrl(url); try { BarCode ean = new Ean13Factory().create(code); //TODO: refactoring. p.setEan((Ean13) ean); } catch (InvalidBarCodeException e) { logInvalidCode(e, code); } Does anyone have other suggestions? Answer: interface BarCode The interface is a good idea, but it shouldn't be just a marker interface. It seems very probable that all the implementations that you will add later, will contain a String code value. So, a method declaration like String getCode(); would be useful here. It will allow to access the value when necessary, without worrying about the concrete BarCode type behind. Over-engineering But I'm also asking myself if a separate class per BarCode type is really necessary here. Should ean just remain a basic field within Product? @Entity public class Product { ... @Column private String eanCode; ... } The idea would be to validate the candidate eanCode value before setting it on the object. Not sure that dedicated classes are necessary, because they are just wrappers for the only field, aren't they? 
Exceptions Handling Taking into account the current listing, the code try { BarCode ean = new Ean13Factory().create(code); p.setEan((Ean13) ean); } catch (InvalidBarCodeException e) { logInvalidCode(e, code); } does exactly the same thing as if it were if (BarCodePredicate.isValid(code) { p.setEan(new Ean13(code)); } else { logInvalidCode(code); } This resembles drastically to what we can find in item#59, page 247 of Effective Java.
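As a language-neutral reference for the rule being validated, here is a standalone Python sketch of the EAN-13 check-digit computation (my own sketch, not derived from the reviewed Java; 4006381333931 is a commonly cited valid EAN-13). Note the outer `% 10` in the check-digit step: a weighted sum that is already a multiple of 10 must map to check digit 0, an edge case worth covering with a unit test in any implementation:

```python
def ean13_check_digit(first12: str) -> int:
    # Weights alternate 1, 3, 1, 3, ... across the first 12 digits (0-indexed).
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10   # the outer % 10 maps 10 -> 0

def is_valid_ean13(code: str) -> bool:
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))

assert is_valid_ean13("4006381333931")       # valid sample code
assert not is_valid_ean13("4006381333930")   # wrong check digit
assert not is_valid_ean13("400638133393")    # too short
```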
{ "domain": "codereview.stackexchange", "id": 17749, "tags": "java, validation" }
Why is the answer to this diffusion example unintuitive?
Question: Imagine a linear decrease in concentration from left to right. Using Fick's first law, $J = -D \frac{\partial \psi}{\partial x}$, for all x from left to right we have the same flux because the decrease is linear. So $J(x) = m$. According to Fick's second law, $\frac{\partial \psi}{\partial t} = D \frac{\partial^2 \psi}{\partial x^2} = -\frac{\partial J}{\partial x}$. So $\frac{\partial J(x)}{\partial x} = 0$, and hence $\frac{\partial \psi}{\partial t} = 0$, since the 2nd derivative of a line is 0. Yet this seems unintuitive. I would expect that as long as there is a concentration gradient, there should be a change in concentration at each point until the concentration is completely uniform. There must be an error in my math or reasoning; where is it? EDIT: To clarify the boundary conditions, imagine a closed box with no outflow or inflow at the edges. Answer: As Ted Bunn said, the linear concentration profile is only a steady state if there is a steady inflow at one end and a steady outflow at the other. This net flow is what preserves the concentration gradient. With the "closed box" boundary condition instead, there is indeed an error in your reasoning, because the linear profile is no longer a steady state. So, to make things explicit, you should have instead: $$\left.J(x)\right|_{t=0} = m$$ $$\left.\frac{\partial\psi}{\partial t}\right|_{t=0} = 0$$ for all x in the interior of the box. However, these results do not imply that $\psi(x)$ is always constant. At time $t=0$, there is a constant flow from left to right, but because the box is closed this means that the concentration at the left edge of the box is decreasing and the concentration at the right edge is increasing (even though it hasn't yet begun to change anywhere in the interior; if you want, you can say that $\partial\psi/\partial t(t=0)$ has the form of two Dirac delta functions). The only way I know to get the full solution is expanding in a Fourier series. For concreteness, say the box extends from $x=-1/2$ to $x=1/2$.
The correct basis of eigenfunctions to use for this boundary condition contains functions whose derivative is zero at the edges of the box, namely $\sin(n \pi x)$ for odd n and $\cos(n \pi x)$ for even n. Since the initial condition is an odd function, the cosines don't appear. Also, for convenience, set the initial slope equal to $\pi^2/4$. $$\psi(x,t=0) = \frac{\pi^2}{8} - \frac{\pi^2}{4} x = \frac{\pi^2}{8} - \sum_{n\text{ odd}} (-1)^{(n-1)/2}\frac{\sin(n\pi x)}{n^2}$$ (where the last equality is from the well-known Fourier series of a triangle wave). Each mode $\sin(n\pi x)$ decays at a rate proportional to $n^2$, so $$\psi(x,t) = \frac{\pi^2}{8} - \sum_{n\text{ odd}} (-1)^{(n-1)/2}\frac{\sin(n\pi x)}{n^2}\, e^{-n^2 t/\tau}$$ where $\tau$ is a time scale that depends on the diffusion constant and dimensions (if you want, I can work out what it actually is, but it's irrelevant for the discussion). If you plot this function at different $t$ values increasing from zero, you can clearly see that the concentration is becoming smoothed out and tending toward a uniform concentration of the average value, $\pi^2/8$. Thus, even though at $t = 0$ it seems like the concentration isn't changing anywhere ($\partial\psi/\partial t = 0$), it immediately begins changing, and diffusion does eventually lead to a uniform concentration. Here are some plots I made, using terms through $n=15$:
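The qualitative conclusion is also easy to confirm with a crude explicit finite-difference simulation of the closed box (zero-flux walls via reflecting ghost cells; the grid size, step count, and r = D·Δt/Δx² = 0.4 below are arbitrary illustrative choices): the initial linear profile relaxes to the uniform average while the total amount of material stays fixed.

```python
n = 50
psi = [1.0 - i / (n - 1) for i in range(n)]   # initial linear profile, 1 -> 0
mean0 = sum(psi) / n                          # conserved average concentration
r = 0.4                                       # D*dt/dx^2, stable for r <= 0.5

for _ in range(20000):
    new = list(psi)
    for i in range(n):
        left = psi[i - 1] if i > 0 else psi[0]        # reflecting ghost cell:
        right = psi[i + 1] if i < n - 1 else psi[-1]  # zero flux at the walls
        new[i] = psi[i] + r * (left - 2 * psi[i] + right)
    psi = new

assert abs(sum(psi) / n - mean0) < 1e-7   # mass is conserved (closed box)
assert max(psi) - min(psi) < 1e-6         # profile has become uniform
```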
{ "domain": "physics.stackexchange", "id": 474, "tags": "fluid-dynamics, diffusion" }
navigation yields velocity on Y axis
Question: Hi all, I am using the navigation stack, move_base, for a simple navigation implementation. My base is driven by two differential wheels, and rotates using the speed difference between the two driving wheels; i.e., only speed on the X axis and rotational speed. From time to time, speed on the Y axis is received on the "cmd_vel" topic, and the base doesn't know what to do with it. How do I configure move_base properly so that it generates velocity commands that suit my robot base? --- linear: x: 0.105263157895 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.157894736842 --- linear: x: 0.1 ***y: -0.1*** z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0 --- Originally posted by dreamcase on ROS Answers with karma: 91 on 2014-06-14 Post score: 0 Answer: I suspect you have your local planner configured for a holonomic base, when your base isn't actually holonomic. Without knowing exactly which local planner you're using, it's difficult to say exactly how to fix this. For example, base_local_planner has a parameter called ~/holonomic_robot that you can set to false to disable this behavior, but I think the configuration is different for the dwa_local_planner. Originally posted by ahendrix with karma: 47576 on 2014-06-14 This answer was ACCEPTED on the original site Post score: 1
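For base_local_planner specifically, the change amounts to one line in the planner's YAML. A sketch (the `holonomic_robot` parameter is documented for `TrajectoryPlannerROS`; the exact namespace and file layout depend on how your move_base configuration is organized):

```yaml
TrajectoryPlannerROS:
  holonomic_robot: false
  # With holonomic_robot false, trajectory rollout stops sampling strafe (y)
  # velocities, so the published cmd_vel carries only linear.x and angular.z,
  # matching a differential-drive base.
```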
{ "domain": "robotics.stackexchange", "id": 18273, "tags": "navigation" }
How to calculate the number of operations required to factor a number $n$ with Shor's Algorithm?
Question: Essentially, how to find the number of clock cycles required to factor a number $n$ on a quantum computer with Shor's Algorithm? An example on any architecture would be helpful. Answer: Making an accurate estimate is difficult, because there are so many factors that it depends on. What circuit constructions are you using to perform the modular exponentiation? How are you performing non-Clifford operations? How are you producing magic states? Are you magic-state-production limited or are you measure-react-delay limited? What kinds of circuit and topological optimizations are you applying? Are you doing on-the-fly optimizations? How effective are they on average? What kind of overhead is there in terms of moving qubits around so that they can interact? The answers to these questions are all in flux as more research is done. For a historical estimate, see the paper "Surface codes: Towards practical large-scale quantum computation". They use a modular exponentiation circuit that uses $\approx 280 N^3$ T gates arranged into $\approx 120 N^3$ layers (separated by Clifford gates, which have negligible cost in comparison). If lots more qubits are available, you can use circuits with better asymptotic complexity. $\tilde{O}(N)$ depth and $\tilde{O}(N^2)$ count are easily achievable with known classical multiplication algorithms, but they may not be better for typical RSA key sizes. There are also many possible constant-factor improvements. For example, Toffoli gates in a compute/uncompute pair only require 4 T gates total instead of 14. Only time will tell what actually gets used.
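To put the quoted scaling in perspective, plugging a 2048-bit modulus into the ≈280·N³ / ≈120·N³ figures gives on the order of 10¹² T gates. This is just arithmetic on the numbers cited above, not an independent resource estimate:

```python
n = 2048                 # bit length of the RSA modulus to factor
t_count = 280 * n**3     # T-gate count using the historical ~280 N^3 scaling
t_depth = 120 * n**3     # number of T layers using the ~120 N^3 scaling

print(f"T gates: {t_count:.2e}")   # T gates: 2.41e+12
print(f"T depth: {t_depth:.2e}")   # T depth: 1.03e+12
```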
{ "domain": "physics.stackexchange", "id": 43834, "tags": "quantum-mechanics, quantum-information, quantum-computer" }
Why is a fermion field complex?
Question: The Lagrangian of a fermion field is \begin{equation} \mathcal{L} = \overline{\psi} (i\gamma_{\mu} \partial^{\mu} - m)\psi \end{equation} It is said that the fermion field $\psi$ is necessarily complex because of the Dirac structure. I don't quite understand this. Why is the fermion field complex from a physical point of view? A complex field has two components, i.e., the real and imaginary components. Does this imply that all fermions are composite particles? For example, an electron is assumed to be a point particle that does not have structure. How can it have two components if it is structureless? Answer: Any type of field can be complex, not only the fermions. The reason is the $U(1)_{EM}$ symmetry, i.e., the electromagnetic interactions. The electric charge is the conserved quantity of the $U(1)_{EM}$ gauge symmetry in nature. A transformation of this symmetry is such that $$ \phi(x) \mapsto e^{iq\theta (x)} \phi (x) \\ \phi^{\dagger}(x) \mapsto e^{-iq\theta (x)} \phi^\dagger (x) $$ where $q$ is the electric charge of the field, and $\theta(x)$ is the gauge parameter. If $\phi(x)$ is a real-valued field, then the first and second equations should be identical, which implies $$ e^{iq\theta(x)} = e^{-iq\theta(x)} $$ This is only true if and only if $q=0$. For complex fields the charge would be opposite for their conjugates. So, complex fields are charged and real fields are neutral. For example, after the electroweak breaking, the Higgs field is neutral therefore a real-valued boson, while W bosons, i.e., $W^\pm_\mu \equiv W^1_\mu \mp i W^2_\mu$, are charged, so complex-valued bosons. Neutrinos are neutral so they are real-valued fermions, but electrons are charged thus complex-valued fermions.
{ "domain": "physics.stackexchange", "id": 50463, "tags": "particle-physics, fermions, dirac-equation, complex-numbers" }
Application of Schur's lemma to proving the completeness of coherent states
Question: I am studying many-body path integral through Altland & Simons's textbook called "Condensed Matter Field Theory," and the book states the completeness of the coherent states as below: $$\int\prod_i\frac{d\bar{\phi_i}d\phi_i}{\pi}e^{-\Sigma_i\bar{\phi_i}\phi_i}\left|\phi\right>\left<\phi\right|=\mathbf{1}_\mathcal{F}\tag{4.7}$$ where $\mathbf{1}_\mathcal{F}$ represents the identity operator in Fock space (I will also use it as a definition of LHS), and the coherent state $\left|\phi\right>$ is chosen as follows: $$\left|\phi\right>=e^{\sum_i\phi_ia_i^\dagger}\left|0\right>\tag{4.1}$$ This statement is proven using Schur's lemma by showing the commutativity of $a_i$ and $\mathbf{1}_\mathcal{F}$ (or $G$-linearity of $\mathbf{1}_\mathcal{F}$), by showing $$a_i\mathbf{1}_\mathcal{F}=\mathbf{1}_\mathcal{F}a_i.$$ Here, I wonder what is the group that is represented by $a_i$ (i.e., what is $G$ in $a: G \rightarrow \mathbf{1}_\mathcal{F}$), and how to show $\{a_i\}_{i\in J}$ irreducibly represents that group. Answer: Note that Schur's Lemma is both formulated for groups and algebras. Altland & Simons use the Heisenberg Lie algebra generated by creation and annihilation operators and the identity operator. The representation is an infinite-dimensional irreducible Fock space. However, the standard formulation of Schur's Lemma assumes a finite-dimensional representation. Nevertheless, it is straightforward to prove eq. (4.7) directly by converting the coherent states into an eigenbasis for the number operators $\hat{n}_i=\hat{a}^{\dagger}_i\hat{a}_i $.
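Independently of Schur's lemma, the single-mode version of (4.7) can be checked directly: sandwiching the integral between $\langle n|$ and $|n\rangle$, the angular integration kills all off-diagonal terms and leaves $(1/n!)\int_0^\infty u^n e^{-u}\,du$ with $u = |\phi|^2$, which must equal 1. A crude numerical quadrature (step count and cutoff below are arbitrary choices) confirms this for the first few Fock states:

```python
import math

def diag_element(n, steps=100_000, umax=60.0):
    # <n| (1/pi) ∫ d²φ e^{-|φ|²} |φ><φ| |n>  reduces, after the angular
    # integral, to (1/n!) ∫_0^∞ uⁿ e^{-u} du with the substitution u = |φ|².
    du = umax / steps
    integral = sum((i * du) ** n * math.exp(-i * du)
                   for i in range(1, steps)) * du
    return integral / math.factorial(n)

# Diagonal matrix elements of the resolution of identity are all 1.
for n in range(6):
    assert abs(diag_element(n) - 1.0) < 1e-3
```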
{ "domain": "physics.stackexchange", "id": 91086, "tags": "hilbert-space, operators, group-theory, commutator, coherent-states" }
Moving Turtlebot to a goal without a map
Question: Hi there! We're two newbies. We installed Ros Electric on a pc acer aspire 5740g running Ubuntu Lucid 10.04. Our task is to move Turtlebot giving vocal commands. We're doing it writing publishers and subscribers in c++. We have a node which does the speech recognition, it publishes on a topic the identified word. When for example it publishes "advance" and "one", the turtlebot should move forward for one meter and then stop, waiting for new commands. We have to make the turtlebot do this without a map, only using odometry data. Our problem is that turtlebot doesn't stop after it has reached the goal. To give you an idea of what we're doing, here's part of the code: geometry_msgs::PoseWithCovarianceStamped msg; double pos_x,pos_y,pos_z,ori_x,ori_y,ori_z,ori_w; void odomcallback(const geometry_msgs::PoseWithCovarianceStamped msg_turtlebot) { msg=msg_turtlebot; pos_x=msg.pose.pose.position.x; pos_y=msg.pose.pose.position.y; pos_z=msg.pose.pose.position.z; ori_x=msg.pose.pose.orientation.x; ori_y=msg.pose.pose.orientation.y; ori_z=msg.pose.pose.orientation.z; ori_w=msg.pose.pose.orientation.w; } int main(int argc, char **argv) { ros::init(argc, argv, "nodo_che_movimenta"); ros::NodeHandle node; ros::Subscriber sub_odom = node.subscribe("odom", 1000, odomcallback); ros::Publisher vel_pub = node.advertise<geometry_msgs::Twist>("cmd_vel", 10); [...] geometry_msgs::Twist vel; ros::Rate rate(10.0); const double a=pos_x+1; while (node.ok()) { ros::spinOnce(); sostituto_quantita=quantita; bool select=true; if (attiva) { printf("niente\n") [...] if(complesso==103 ) { printf("complesso_on\n"); printf("Inserisci quantità \n"); if(sostituto_quantita==1 && select==true) { printf("1 meter\n"); if(pos_x<a) {vel.linear.x=0.3; vel.linear.y=vel.linear.z=0.0; vel.angular.x=vel.angular.y=vel.angular.z=0.0; } else { select=false; } } [...] 
} vel_pub.publish(vel); rate.sleep(); } ros::spin(); return 0; } Originally posted by lascre on ROS Answers with karma: 1 on 2013-06-24 Post score: 0 Original comments Comment by Lucile on 2013-06-24: I think you forgot to stop publishing or to put vel.linear.x = 0.0 after you reached the goal. Comment by lascre on 2013-06-25: We tried to put vel.linear.x=0 instead of the bool variable, but it didn't work. We suppose that the real problem is the condition we put in the if instruction: if(pos_x<a). It is always true since both pos_x and a continously update their values. Comment by Lucile on 2013-06-25: An easy way to find it out is to print 'a' value. Comment by Lucile on 2013-06-25: I am not sure weither it would work or not (I am using python and not C++ for ROS programming), but you could try to remove the ros::spin() and to use ros::spinOnce() at the end of your while loop. Comment by lascre on 2013-06-26: We printed the values both of the odometry and of the target using the printf function and we found out that they don't change at all! Is this a loop? How to find out? We tried also to remove ros::spin() and use ros::spinOnce(), but it didn't serve. It still doesn't want to perform the task.l. Comment by Lucile on 2013-06-26: I just answered on the google group Answer: For just moving the robot, without any information of the environment(map): Run the following command: Topic Name: "/move_base_simple/goal" Message Type: "geometry_msgs/PoseStamped" rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped '{ header: { frame_id: "base_link" }, pose: { position: { x: -1.0, y: 0 }, orientation: { x: 0, y: 0, z: 0, w: 1.0}}}' Originally posted by sudhanshu_mittal with karma: 311 on 2013-11-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14683, "tags": "navigation, odometry, turtlebot, ros-electric" }
Where does a star's angular momentum go as its spin slows down?
Question: So we know that stars slow down as they age. But total angular momentum must be conserved. Where does that angular momentum go? The dissipation of Earth's tides somehow transfers Earth's angular momentum to the moon (as shown in my answer at http://www.quora.com/Has-the-Earths-rotational-period-always-been-24-hours-If-not-what-was-it-before-and-what-caused-the-change). But where does a star's dissipation go? Answer: For single stars (doubles also exhibit the spin-to-orbital angular momentum transfer), Rotation braking states: Stars slowly lose mass by the emission of a stellar wind from the photosphere. The star's magnetic field exerts a torque on the ejected matter, resulting in a steady transfer of angular momentum away from the star. A first-order approximation is that the rotation velocity decreases by the inverse square root of time elapsed.
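The quoted first-order $t^{-1/2}$ spin-down law is easy to play with numerically. The present-day solar values used below (about 2 km/s equatorial rotation at 4.6 Gyr) are illustrative assumptions, not figures from the answer:

```python
# Numerical illustration of the t^(-1/2) magnetic-braking approximation:
# scale today's rotation velocity backward and forward in time.
def rotation_velocity(age_gyr, v_now=2.0, age_now_gyr=4.6):
    """Rough surface rotation velocity (km/s), scaled as t^(-1/2) from today."""
    return v_now * (age_now_gyr / age_gyr) ** 0.5

# Quartering the age doubles the rotation velocity; quadrupling it halves it.
for age in (1.15, 4.6, 18.4):
    print(f"age {age:5.2f} Gyr -> v ~ {rotation_velocity(age):.2f} km/s")
```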
{ "domain": "physics.stackexchange", "id": 3045, "tags": "astronomy, astrophysics, angular-momentum, stars" }
What is the best way to fasten securely a load onto a hollow brick?
Question: Due to the ongoing COVID pandemic, I decided to put a basketball hoop on one of our outside walls, so that my son and I can stretch a bit. The problem is that the wall I plan to put the basketball hoop on is built with hollow bricks (see below). The thickness of the brick is close to 10 cm. Additional specs: the wall gets its fair share of rain during autumn and winter and it's facing the wind, so I want to make sure that no moisture will find its way in. There may be some dunking (we are on average 100 kg each) involved, so I need to make this as robust as possible. I considered the following, but I find them lacking: screw plastic wall plugs, screw anchors, toggle bolts, expansion anchors, hollow wall anchors. In most cases my main worry is that because the brick is hollow the anchors won't find enough support. So, I wonder what is the best practice for this type of brick. I would greatly appreciate any guidance on the matter. Answer: I'd recommend first making a hole to fill the channels in the brick with mortar/concrete locally. This way you will effectively have a solid brick, which allows for a much stronger fastening and a much larger range of applicable anchors. The anchors you show above are all mechanical anchors, which is not the optimal solution when a waterproof result is required. You could consider using a chemical anchor in the form of a threaded rod installed with an appropriate resin. (This will require filling the channels of the brick.)
{ "domain": "engineering.stackexchange", "id": 3730, "tags": "mechanical-engineering, structural-engineering, bolting, fasteners, waterproofing" }
As regards measurement how would a quantum-full-adder perform multiple additions simultaneously?
Question: Here in this video from 15:14 Arvin Ash demonstrates a quantum-full-adder circuit; he goes on further to illustrate how it can perform multiple operations simultaneously via superposition. I took some screenshots, (a) and (b) [not reproduced here]. I have no issues with the concept of multiple operations being performed simultaneously, but how would measurement make sense here (since we have a superposition of all states of the simultaneously performed additions)? Here's what I mean: for example, with Grover's search algorithm we are interested in isolating a single state from a superposition of multiple states; from the video it seems to me we are interested in isolating multiple states from a superposition of multiple states. Since qubits collapse only to a single state after measurement, how would measurement here yield multiple states, or is there a workaround? Answer: Fair warning: I would consider this video typical of Arvin Ash's content, in that it is a mix of correct details and incorrect assertions. E.g. at 16:13 he claims a quantum computer can search 8 items in 1 step, which is just wrong. It's misleading to say there are multiple additions being done at the same time. Suppose I told you I have two numbers, A and B. I tell you there's a 20% chance that A=1 and B=2, a 30% chance that A=3 and B=0 and a 50% chance that A=2 and B=2. Now I tell you that I computed C=B+A. Do you jump into the air and say "WOW! There were 3 cases and you updated all of them! You computed 1+2=3, and 3+0=3, and 2+2=4! You just did 3 additions at the same time!"? It's a mistake to assume that the amount of computational work you'd need to do to track the effects of a change is the same as the amount of computational work that can actually be extracted from applying that change. Yes, a quantum computer can apply one addition and force you to do a million additions to update your state vector tracking its state.
But that's very different from claiming if you have a million additions to do then you can get a quantum computer to do them in one addition. And anyone who is mixing up those two things, without calling it out, is doing a disservice to your understanding. So my answer to your question is: it doesn't do multiple additions at the same time. There's a sense in which it does, but going down that road leads to misunderstanding.
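The answer's A/B example can be written out as explicit bookkeeping, which makes the point concrete: it is the tracker, not the computation, that touches every branch:

```python
# The A/B example as bookkeeping: one addition rule, applied per tracked
# branch. Nothing about this update lets us extract three answers' worth
# of usable work from a single operation.
joint = {(1, 2): 0.20, (3, 0): 0.30, (2, 2): 0.50}  # P(A=a, B=b)

c_dist = {}
for (a, b), p in joint.items():
    c = a + b
    c_dist[c] = c_dist.get(c, 0.0) + p

print(c_dist)  # {3: 0.5, 4: 0.5}
```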
{ "domain": "quantumcomputing.stackexchange", "id": 3176, "tags": "measurement" }
Calculating the maximum total neutrino mass by using cosmological bound
Question: In an article it's written, $$\Omega_{\nu} = \frac{\rho_{\nu}}{\rho_{crit}}=\frac{\sum m_{i,\nu}n_{i,\nu}}{\rho_{crit}} = \frac{\sum m_{\nu}}{93.14h^2eV}$$ Now I am trying to derive this for myself but I could not. Can someone help me? So the values are, $\rho_{crit} = 1.05375 \times 10^{-5}h^2 GeV/c^2~~cm^{-3}$ Total neutrino average number density today: $n_{\nu} = 339.5~cm^{-3}$ I tried to write it like, $$\frac{n_{\nu}\sum m_{\nu}}{\rho_{crit}} = \frac{\sum m_{\nu}}{93.14h^2eV}$$ $$\frac{n_{\nu}}{\rho_{crit}} = \frac{339.5cm^{-3}}{1.05375 \times 10^{-5}h^2 GeV/c^2~~cm^{-3}} = \frac{1}{3.103 \times 10^{-8} h^2 GeV} = \frac{1}{31.0382916 h^2eV}$$ I am missing an additional $1/3$. Where is it coming from? I guess it's a simple question but I couldn't see the answer. Edit: here is the source https://www.google.com/url?sa=t&source=web&rct=j&url=http://pdg.lbl.gov/2019/reviews/rpp2018-rev-neutrinos-in-cosmology.pdf&ved=2ahUKEwiwlvu3lNjmAhXNcJoKHS4RB48QFjACegQIAhAB&usg=AOvVaw0oNgNCVyC98o9jzCu867SY&cshid=1577529872455 Equation 25.2 Answer: The text makes it clear that the summation is over three neutrino species. In which case when you take the number density out of the summation, it is the average density per species that you use, which is one third of 339 cm$^{-3}$.
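A quick numeric check of the accepted answer: dividing the total number density by three species before taking the ratio reproduces the 93.14 factor (the small residual comes from rounding in the quoted inputs):

```python
# Sanity check of the missing 1/3: with three neutrino species, the number
# density per species is n_total/3, and rho_crit/(n_total/3) reproduces the
# ~93 h^2 eV denominator in the quoted formula (h-dependence divided out).
n_total = 339.5            # cm^-3, all three species combined
rho_crit = 1.05375e-5      # h^2 GeV cm^-3

per_species = n_total / 3                  # ~113.2 cm^-3
factor_eV = rho_crit / per_species * 1e9   # GeV -> eV

print(f"{factor_eV:.2f} h^2 eV")  # ~93.1, matching the quoted 93.14
```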
{ "domain": "physics.stackexchange", "id": 63706, "tags": "particle-physics, cosmology, mass, neutrinos" }
Dealing Poker Hands
Question: I thought I'd give a shot at creating my own version of dealing 5-card hands to n players in VBA, printing them to columns and coloring hearts and diamonds red. I felt I might have been a little repetitive and I had to jump through some hoops to avoid ByRef. Anyhow, what can I improve? Option Explicit Public Sub DealCards() 'Just dealing to sheet2 Sheet2.Range("A:Z").Clear Dim numberOfPlayers As Long numberOfPlayers = GetPlayers If numberOfPlayers = 0 Then Exit Sub Dim i As Long Dim myPlayers As Variant ReDim myPlayers(1 To numberOfPlayers, 1 To 6) myPlayers = DealDeck(numberOfPlayers) Sheet2.Range(Cells(1, 1), Cells(6, numberOfPlayers)) = Application.WorksheetFunction.Transpose(myPlayers) Colorize numberOfPlayers End Sub Private Function GetPlayers() As Long Dim result As Long result = Application.InputBox("How many players?", "Number of Players", 2, Type:=1) If result > 9 Or result = 0 Then MsgBox "There aren't enough chairs or players for this game!" GetPlayers = 0 Exit Function End If GetPlayers = result End Function Private Function DealDeck(ByVal numberOfPlayers As Long) As Variant Dim dealHands As Variant ReDim dealHands(1 To numberOfPlayers, 1 To 6) Dim i As Long For i = 1 To numberOfPlayers dealHands(i, 1) = "Player" & i Next Dim myDeck(1 To 52) As Variant Dim hand As Long Dim card As Long Dim handPosition As Long For hand = 1 To numberOfPlayers For handPosition = 2 To 6 TryAgain: card = Int(52 * Rnd + 1) If IsEmpty(myDeck(card)) Then myDeck(card) = dealHands(hand, 1) dealHands(hand, handPosition) = ConvertCards(card) Else: GoTo TryAgain End If Next handPosition Next hand DealDeck = dealHands End Function Private Function ConvertCards(ByVal card As Long) As String Dim club As String club = ChrW(9827) Dim diamond As String diamond = ChrW(9830) Dim heart As String heart = ChrW(9829) Dim spade As String spade = ChrW(9824) Select Case card Case 1 To 13 ConvertCards = club If card = 1 Or card > 10 Then ConvertCards = ConvertCards & FaceCard(card) 
Else: ConvertCards = ConvertCards & card End If Case 14 To 26 ConvertCards = diamond If card = 14 Or card > 23 Then ConvertCards = ConvertCards & FaceCard(card) Else: ConvertCards = ConvertCards & card - 13 End If Case 27 To 39 ConvertCards = heart If card = 27 Or card > 36 Then ConvertCards = ConvertCards & FaceCard(card) Else: ConvertCards = ConvertCards & card - 26 End If Case 40 To 52 ConvertCards = spade If card = 40 Or card > 49 Then ConvertCards = ConvertCards & FaceCard(card) Else: ConvertCards = ConvertCards & card - 39 End If End Select End Function Private Function FaceCard(ByVal card As Long) As String Select Case card Case 1, 14, 27, 40 FaceCard = "A" Case 11, 24, 37, 50 FaceCard = "J" Case 12, 25, 38, 51 FaceCard = "Q" Case 13, 26, 39, 52 FaceCard = "K" End Select End Function Private Sub Colorize(ByVal numberofcolumns As Long) Dim i As Long Dim j As Long For i = 2 To 6 For j = 1 To numberofcolumns If AscW(Left(Cells(i, j), 1)) = 9829 Or AscW(Left(Cells(i, j), 1)) = 9830 Then Cells(i, j).Font.Color = RGB(255, 0, 0) Next j Next i End Sub Answer: Being a Casino Dealer I felt that I have to answer this one. Objective: The dealer is going to deal cards out of a deck to N number of players until each player has a 5 card hand. So what are the logical units? Let's look at the nouns in the Objective for clues. We have a dealer, players, hands, a deck and cards. I think that each of these should be its own class. When writing classes it helps to think of a class as an object. What are the properties and attributes of the object? What actions can the object perform? Aren't properties and attributes just variables? Actions, well an action, that's what methods perform. Playing Cards What are the properties and attributes of a playing card? Rank Suit Name Suit Character Suit Name Color What actions can the Card perform? None really, but it makes sense to have it place itself.
Instead of asking the card what rank, suit, and color are you, we'll just say: card, here is your destination; place yourself there. PlayingCard Class Option Explicit Private CardValues() Private CardCharValues() Private CardSuitNames() Private CardSuitChars() Private CardColors() Public Rank As Integer Public Suit As Integer Private Sub Class_Initialize() CardValues = Array(2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14) CardCharValues = Array(2, 3, 4, 5, 6, 7, 8, 9, 10, "J", "Q", "K", "A") CardSuitNames = Array("Hearts", "Clubs", "Diamonds", "Spades") CardSuitChars = Array(ChrW(9829), ChrW(9827), ChrW(9830), ChrW(9824)) CardColors = Array(RGB(255, 0, 0), RGB(0, 0, 0), RGB(255, 0, 0), RGB(0, 0, 0)) End Sub Public Sub PlaceCard(Desination As Range) Desination.Value = Me.Text Desination.Font.Color = Me.Color End Sub Public Function Color() As Long Color = CardColors(Suit) End Function Public Function Text() As String Text = CardCharValues(Rank) & CardSuitChars(Suit) End Function Public Function Value() As Integer Value = CardValues(Rank) End Function Deck of Cards What is a deck of cards? A deck of cards is a collection of 52 cards. Cards as Collection There are 4 suits with 13 cards in each suit. For i = 0 To 3 For j = 0 To 12 Set card = New PlayingCard card.Rank = j card.Suit = i Cards.Add card Next Next Once you take a card out of the deck, it is gone. You can't use it again. 
Set NextCard = Cards.Item(i) Cards.Remove i DeckOfCards Class Option Explicit Private Cards As Collection Private Sub Class_Initialize() Me.Shuffle End Sub Public Function NextCard() As PlayingCard Dim i As Integer i = Int((Rnd * Cards.Count) + 1) Set NextCard = Cards.Item(i) Cards.Remove i End Function Public Function hasNextCard() As Boolean hasNextCard = Cards.Count End Function Public Sub Shuffle() Dim i As Integer, j As Integer Dim card As PlayingCard Set Cards = New Collection For i = 0 To 3 For j = 0 To 12 Set card = New PlayingCard card.Rank = j card.Suit = i Cards.Add card Next Next End Sub Test Sub DealTenDeckOfCards() Dim deck As New DeckOfCards Dim card As PlayingCard Dim i As Integer, j As Integer Application.ScreenUpdating = False For i = 1 To 10 j = 1 Do While deck.hasNextCard Set card = deck.NextCard card.PlaceCard Cells(i, j) j = j + 1 Loop deck.Shuffle Next Application.ScreenUpdating = True End Sub
{ "domain": "codereview.stackexchange", "id": 20676, "tags": "vba, excel, reinventing-the-wheel, playing-cards" }
Tension in string
Question: If all the pulleys are connected with the same string, why is the tension the same at all points? Shouldn't $T_2 =2T$ and $T_3=2T_2$? Answer: Your remark "same string" says it all. It's a freely moving string (despite any intricacies of the stuff it's wrapped around) -- so tension at any point must equal tension at any other. But here's another way to look at it. And, as is often the case in mechanical situations, "another way" involves energy. Since it's the "same string", suppose you grab it at some (any) point and pull it one foot. Then every other point on the string also moves that exact same one-foot distance. So force ($\equiv$ tension) $\times$ distance is the work/energy involved. Now, if $T$ were different anywhere along the string, apply work to pull it at the small-$T$ point, and extract work at the large-$T$ point. Bingo! You've got more energy out than you put in, and the world's energy crisis is history.
{ "domain": "physics.stackexchange", "id": 41457, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, string" }
MoveIt! - How to execute trajectories backwards
Question: Hello guys, I was wondering if it is possible to execute a trajectory backwards in MoveIt!. So, for example, let's say that I have a manipulator and I want to go from point 1 to point 2 and MoveIt! calculates a plan for a trajectory with 5 waypoints. When the trajectory is executed, the manipulator goes to the first waypoint, then to the second, and so on until it reaches the 5th waypoint. If I want to go from point 2 to point 1 right after I reach point 2, I shouldn't necessarily have to calculate a new trajectory again, right? I should simply be able to use the trajectory calculated from point 1 to point 2 and go from waypoint 5 to waypoint 4 till I reach waypoint 1, right? So, is there any way to reuse the first trajectory calculated in order to follow its path backwards? Thanks in advance! Best regards, José Brito Originally posted by znbrito on ROS Answers with karma: 95 on 2018-06-25 Post score: 0 Answer: This is not supported right now. A semi-manual way could be to write a short function that takes in a JointTrajectory, reverses the order of the JointTrajectoryPoints in it and then re-runs the time parameterisation. One reason I can come up with for why this isn't directly supported (apart from 'no one' needing this sort of functionality badly enough to add it) would be that it's quite an assumption to make that the environment hasn't changed between planning & executing the "to" motion and the time you'd want to make the "return" motion. As MoveIt is primarily intended to perform motion planning in dynamic (or at least not in completely static) scenes, reusing trajectories does not seem like something that would be often done. Originally posted by gvdhoorn with karma: 86574 on 2018-06-25 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by znbrito on 2018-06-25: Thanks a lot @gvdhoorn! 
I found a possible way to do it in this link: https://groups.google.com/forum/#!topic/moveit-users/ewfCUHrxSo8 I will try to spend some time with this! Thanks a lot :)
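The semi-manual approach suggested in the answer (reverse the JointTrajectoryPoints, then re-run the time parameterisation) can be sketched with plain tuples standing in for the real message types. This toy version just mirrors the time stamps about the total duration; a real implementation would rebuild time_from_start with an actual time parameterisation pass:

```python
# Hypothetical sketch of "reverse a trajectory": flip the waypoint order and
# mirror each time stamp about the total duration, so times stay increasing.
def reverse_trajectory(points):
    """points: list of (positions, time_from_start), times increasing from 0."""
    total = points[-1][1]
    return [(pos, total - t) for pos, t in reversed(points)]

forward = [([0.0], 0.0), ([0.3], 1.0), ([0.9], 3.0), ([1.2], 6.0)]
backward = reverse_trajectory(forward)
print(backward)  # [([1.2], 0.0), ([0.9], 3.0), ([0.3], 5.0), ([0.0], 6.0)]
```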
{ "domain": "robotics.stackexchange", "id": 31088, "tags": "moveit, ros-kinetic" }
Is the mathematical expression between the CNOT gates arranged in this way a tensor product or a concatenated product?
Question: Is the mathematical expression between the CNOT gates arranged in this way a tensor product or a concatenated product? That is, $$\text{CNOT}^{\otimes n} \quad \text{or} \quad \prod\limits_{n}{\text{CNOT}}\,?$$ Answer: $CNOT^{\otimes n}$ is a matrix of size $4^n$ and corresponds to the following circuit (diagram not reproduced): $\prod_{i=1}^{n}{CNOT}$ is a matrix of size $4$ and corresponds to the following circuit (diagram not reproduced): The circuit you describe is more something like: $\prod_{i=1}^{n}{I_{2^{i-1}}\otimes CNOT \otimes I_{2^{n-i}}}$ where $I_n$ is the identity matrix of size $n$. Edit: explanation. You apply $CNOT$ gates successively on different qubits. In the circuit you present, when you apply a $CNOT$ gate on 2 qubits, you do not perform any transformation on the other qubits; this is equivalent to performing an identity transformation (which does nothing). The first $CNOT$ gate is applied on the first two qubits and an identity transformation is applied on all other qubits ($n-1$ qubits). The full transformation is then: $CNOT \otimes I^{\otimes n-1} = CNOT \otimes I_{2^{n-1}}$ The second $CNOT$ gate is applied on the 2nd and 3rd qubits and an identity transformation is applied on the first qubit and all the qubits after the third. The full transformation is then: $I \otimes CNOT \otimes I^{\otimes n-2} = I \otimes CNOT \otimes I_{2^{n-2}}$
{ "domain": "quantumcomputing.stackexchange", "id": 4210, "tags": "quantum-gate, textbook-and-exercises" }
Why doesn't a metal get ionized (lose electrons) when heated for melting?
Question: If we heat a metal at high temperatures to melt it, its outermost electrons should also gain some energy so as to get excited to a higher state and eventually become free from the metal atom. But that doesn't happen. WHY? Answer: Because the first ionization energy is much larger than the latent heat of vaporization of the metal, which is much larger than the latent heat of fusion of the metal. Consequently, for a neutral metal, continuous random distribution of the available kinetic energy in the sample by a multitude of individual molecular interactions will result in vaporization before ionization is possible. A comment suggests thermal electron emission, but that is for a charged metal cathode. The energy required to remove extra electrons from the surface of a solid conductor is lower than latent heat of fusion. Consequently, for a charged cathode, continuous random distribution of the available kinetic energy in the sample results in electron emission before melting or vaporization.
{ "domain": "physics.stackexchange", "id": 99238, "tags": "thermodynamics, energy, electrons" }
Did Dialup Modem use closed or open loop power/volume control? How did they determine Tx level?
Question: I have been reading about dialup modems, but one thing I cannot seem to find out about is how implementers determined optimal transmit power/volume. Is this part of the echo cancelation framework? The block diagrams I have seen describing echo cancelation don't seem to mention it. Answer: Did Dialup Modem use closed or open loop power/volume control? Yes. How did they determine Tx level? "Dialup Modem" is a very big term that spans > 50 years of technological development. Early modems for landline telephone very pretty plain binary FSK. So, there wasn't much use for any fine-grained volume control. V.90 modems sense the channel during connection establishment, and equalize (and in reality probably re-adjust during operation). Of course, this also means they'll also adjust power / amplitude to get the most information through the channel at minimum distortion, power consumption and crosstalk. Now, you've got me reading the V.34 standard (which I had before, but because of something else): V.34 already introduced negotiation of TX power, so that's a closed-loop control. But, honestly, for something that has very modified QAM constellations ("superconstellations") and Trellis coding since at least the early 1990s, talking about "open or closed loop TX power control" feels like asking whether a Boeing 747 autopilot has "closed or open loop altitude control"; it ignores that TX power is just one of very many adjusted parameters that effect the waveshape. but one thing I cannot seem to find out about is how implementers determined optimal transmit power/volume. The V.34 standard is freely available at the ITU, maybe check that, it defines how the transmitting modem learns from the receiving modem that it should be adjusting power. Of course, that standard doesn't define how that modem determines it would be better to reduce power – that's up to the individual implementation. 
But we know why you'd want to reduce power: too much power is bad because it drives amplifiers into nonlinearity, and forces modems on adjacent lines that are subject to crosstalk to transmit stronger themselves, thus increasing interference for us. Hence, the receiving V.34 modem will use DSP to determine the power in the intermodulation products in the L1 and L2 line probing signals. L1 seems to be made for this: it has tones in a regular 150 Hz raster, with a few left out. That's where you don't want to "hear" anything if there's no intermodulation due to nonlinearity. L2 is the same signal, but 6 dB weaker - and from the difference in things you shouldn't hear but are hearing, you can calculate (for a sufficiently simple model of your nonlinearities) how far you'd want to reduce the power, before the improvement in distortion becomes less beneficial than the decrease in SNR becomes detrimental.
{ "domain": "dsp.stackexchange", "id": 10766, "tags": "audio, digital-communications, voice, feedback, communication-standard" }
Is obtaining the coordinate representation of momentum operator from commutator more fundamental than generator of translation
Question: Related post: What is the most general expression for the coordinate representation of momentum operator? There are two methods of obtaining the coordinate representation of momentum in quantum mechanics. $$ \langle x|p|y \rangle = -i \delta'(x-y). \tag{1} $$ The first one is from the canonical commutator $$[x,p]=i \tag{2} $$ As shown in Dirac's principles of quantum mechanics 4th edition section 22, actually there is an ambigulity in this procedure. $$ \langle x|p|y \rangle = -i \delta'(x-y) + F' \delta (x-y) \tag{3} $$ where $F':=dF/dx$ is a derivative of a general function $F(x)$ (I slightly modify the equation in Dirac's book). Eq. (3) satisfies the commutator (1), but implies arbitary value of expectation of momentum. As noticed by Dirac, this ambigulity can be removed from a local phase factor $\psi \rightarrow e^{-iF(x)} \psi$. The second one is regarding momentum operator is the generator of translation, as given in Sakurai's modern quantum mechanics 1st edition p54, $$ ( 1- \frac{ i p \Delta x'}{\hbar} ) | \alpha \rangle = \int dx' \mathcal{F} (\Delta x') | x ' \rangle \langle x | \alpha \rangle $$ $$ = \int dx' | x' + \Delta x' \rangle \langle x' | \alpha \rangle $$ $$ = \int dx' | x' \rangle \langle x' - \Delta x' | \alpha \rangle $$ $$ = \int dx' | x' \rangle \left( \langle x'| \alpha \rangle - \Delta x' \frac{ \partial}{\partial x'} \langle x' | \alpha \rangle \right). \tag{1.7.15} $$ Comparision both sides yields $$ p | \alpha \rangle = \int dx' | x' \rangle \left( - i \frac{ \partial}{\partial x'} \langle x'| \alpha \rangle \right) \tag{1.7.16} $$ $$ \langle x' | p | \alpha \rangle = - i \frac{ \partial }{ \partial x'} \langle x'| \alpha \rangle \tag{1.7.17} $$ It seems that the arbitrariness as Eq. (3) is hidden in the second approach. My questions are: (i) Did Dirac already discover the origin of gauge invariance? If we do not consider from the commutator, I may say, there is no gauge invariance. 
A local phase factor would modify the expectation value of momentum, therefore we can only have global phase factor corresponds to the same state. However, since local phase factor comes from commutator, it has to be a redundency. (ii) Is the first method, from commutator, more fundamental than translation generator? Since we can find gauge invariance from the first method. Answer: We interpret OP's question (v4) as: How do we recover the phase ambiguity from the generator of translation method in Ref. 1? Recall that an eigenvector for an operator can be rescaled with a non-zero multiplicative factor. The main point is that the position eigenket $| x \rangle$, which satisfies $$\tag{A} \hat{x}| x \rangle~=~ x| x \rangle, $$ can always be redefined with an $x$-dependent phase factor without destroying the normalization condition $$\tag{B} \langle x | x^{\prime} \rangle ~=~\delta(x-x^{\prime}).$$ So the phase ambiguity is encoded in the different choices of position eigenkets $| x \rangle$. See also e.g. this, this, and this related Phys.SE posts. References: J.J. Sakurai, Modern Quantum Mechanics, 1994, p. 54.
{ "domain": "physics.stackexchange", "id": 13153, "tags": "quantum-mechanics, operators, commutator, gauge-invariance" }
Are the squared absolute values of the eigenvalues of a unitary matrix always 1?
Question: I'm going through the phase estimation algorithm, and wanted to sanity-check my calculations by making sure the state I'd calculated was still normalized. It is, assuming the square of the absolute value of the eigenvalue of the arbitrary unitary operator I'm analyzing equals 1. So, does it? Assuming that the eigenvector of the eigenvalue is normalized. Answer: Good question. The answer turns out to be Yes. You don't even need the vector to be normalized. Watch: Start with the definition of eigenvalues and eigenvectors: $$ \begin{align} U|\psi\rangle &= \lambda |\psi\rangle\\ \end{align} $$ Conjugate and transpose both sides of the equation: $$ \begin{align} \langle\psi|U^\dagger &= \langle \psi| \lambda^*. \end{align} $$ Left multiply each side of line 1 by the corresponding side of line 2. $$ \begin{align} \langle \psi|U^\dagger\cdot U|\psi \rangle &= \langle \psi | \lambda^* \lambda |\psi\rangle \\ \langle \psi |\psi \rangle &= |\lambda |^2 \langle \psi |\psi \rangle \\ c &= |\lambda|^2 c\\ 1 &= |\lambda|^2 \end{align} $$ If $|\psi\rangle $ is normalized, it just means that $c=1$, which makes no difference in this proof because the $c$ was on both sides of the equation and can be divided out.
{ "domain": "quantumcomputing.stackexchange", "id": 402, "tags": "quantum-phase-estimation" }
Does latex protect against acids?
Question: I heard somewhere that some plastics are pretty good at resisting decently strong acids. To what degree is this true? Does latex protect against corrosives? Are plastics good at "resisting" bases or other strong solvents? I'm not a chemist, so I apologize if my question seems dumb. I couldn't find much info on the subject of latex vs corrosives via Google. Also, I don't plan on handling any dangerous chemicals; I'm just curious. Answer: There are all sorts of resources for picking the best gloves for handling any given type of chemical. This resource describes latex as being good for protecting against bases, alcohols, dilute water solutions, and fair against aldehydes, ketones. Indeed if you check the Glove Type and Chemical Use table towards the bottom of the page, you can see that latex fairs pretty well against most acids. These ratings give you an idea of how long it takes to penetrate the glove when exposed to that chemical. As you can also see from this table, neoprene gloves are slightly better for handling acids, and though it's not shown on the table, PVC gloves are also good.
{ "domain": "chemistry.stackexchange", "id": 7271, "tags": "acid-base, safety, plastics" }
Would I see a mushroom cloud if I nuke the sun corona?
Question: The sun corona can reach an extremely high temperature at least a few million degree, say if the temperature of the explosion of a nuclear bomb is roughly equal to the temperature of the corona then would a mushroom cloud forms? p.s: I know that it is extremely difficult to land a hit on the Sun from Earth and even so everything would simply vaporize before then, but I really like to find out if it is possible to grow a mushroom cloud on the corona. Answer: Mushroom clouds happen because the hot gas released by an explosion is buoyant compared to the surrounding colder atmosphere, since it has lower density. Were an explosion happen in an atmosphere of equal temperature I assume one could still get a Rayleigh-Taylor instability due to lower density, but it would be far less pronounced. In the corona the gas is a charged plasma, so the dynamics is going to be very different. The mean-free path of fast particles is very long, so the explosion is going to spread out, and the actual shape will be due to magnetohydrodynamic flows. As far as I know, nobody has studied what it would be shaped like, but it is very doubtful you would get a mushroom cloud. A mushroom cloud would for instance snag a lot of magnetic field lines that would act back on it, likely preventing it from forming a toroidal core and the characteristic mushroom shape. My guess is that you get something like a coronal mass ejection, a blob of plasma that flows outwards.
{ "domain": "physics.stackexchange", "id": 48207, "tags": "temperature, sun, explosions" }
Objects travelling relatively to each other faster than light?
Question: When we say that something is travelling a certain speed, it's really travelling that speed relative to the Earth. When saying the speed of anything, it is, for the most part, relative to something else. That being said. If I have an object moving at half the speed of light, and another moving at just above half the speed of light in the opposite direction, would the second object be moving faster than the speed of light to the first one? Note: I know this question is similar to some other questions like this one. However, with my limited physics knowledge (taking AP Physics class next year) I found the explanation a bit confusing. So, even though this might be a bit similar to other questions, I'm looking for a simpler explanation that could help me understand this and its foundation. Answer: In Special Relativity, we use Lorentz Transformations to add speed. The relevant formula here is $$u = \frac{u^{'}+v}{1+\frac{u^{'}v}{c^2}} $$ where $u^{'}$ and $v$ are the speeds of the objects and $u$ (what you are looking for) is the speed that the object at $u$ sees the object at $v$. This embodies Einstein's postulate that no information can be transferred faster than the speed of light in vacuum. Now using this formula, we can put $u^{'} = 0.5c$ and v = $0.6c$ and still get that $u = \frac{1.1c}{1.3} = 0.85c$. Note that even if $u^{'}=v=c$ we get $u=c$, which tells us that the speed of light in vacuum is the same for all observers (which is really the more-precise text of Einstein's conjecture) Note what you have learned that you can just add the two speeds up is only a good approximation when $v<<c$. P.S. See e.g. here for a simple derivation of the formula, which we get from using the lorentz transformations of time and position. It is only after we realise that time and space can be a combination of each other that we arrive at this.
{ "domain": "physics.stackexchange", "id": 85424, "tags": "special-relativity, velocity, faster-than-light, inertial-frames, relative-motion" }
What's the maximum pressure inside a bombardier beetle?
Question: This question got me wondering about the pressure inside a bombardier beetle. Lots of articles mention pressure, but don't specify the amount of it: One study records the velocity of the spray to be within a range of 325 to a stunning 1950 cm/s. [...] Once the muscles around the reservoir squeeze the first amount of reactants through the valve into the reaction chamber, the resulting explosion causes the pressure to rise rapidly in the reaction chamber, forcing shut the one-way valve. The products of the reaction then exit the chamber with a pop and a puff, and the pressure inside the reaction chamber lowers again, falling below the pressure of the collection reservoir, which is still being squeezed by the reservoir muscles. Calculating from the spray characteristics is beyond me. Answer: According to this study, model data shows a maximum pressure of 110 kPa, that's 16 psi and only 1.086 atm: The three-phase process involved in the beetles explosive secretory discharge (ESD) process. The inlet size is shown as a proportion of the inlet radius. During the first phase of refill and heating (blue), only the inlet valve is open. During the second phase (red), the chamber is closed. Finally, the third phase (marked by an arrow) is the exhaust phase, when only the outlet valve is open. This phase is significantly shorter than the first two phases. This cycle then repeats with phase 1 starting at approximately 5.1 and again at 7.8 ms.
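As a rough sanity check on that number (my own back-of-envelope estimate, not from the study): if the ejected fluid is roughly water-dense, applying Bernoulli's relation $\Delta P \approx \tfrac{1}{2}\rho v^2$ to the top of the quoted spray-velocity range gives the same order of magnitude as the model's peak pressure:

```python
rho = 1000.0  # kg/m^3, assuming the spray is roughly as dense as water
v = 19.5      # m/s, the top of the quoted 325-1950 cm/s range

delta_p = 0.5 * rho * v**2  # dynamic pressure, Pa
print(delta_p / 1000)       # ~190 kPa, same order as the ~110 kPa model value
```

The agreement is only order-of-magnitude, as expected for so crude an estimate (it ignores nozzle geometry, the gas fraction of the spray, and losses).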
{ "domain": "biology.stackexchange", "id": 3556, "tags": "entomology, biochemistry" }
A recursive function that performs a binary search
Question: I created a recursive function that uses binary search to just return true if it finds the value and false if it does not. I'm new to recursion and binary search so please let me know where I can improve upon. /** * Uses binary search O(log n). Returns true if the values is in the value array false if it's not. Recursive */ bool binarySearch( int value, int values[], int left, int right ) { //keep track of when the array will be empty and return false if ( right < left ) { return false; } //Find the middle of the array int mid = ( left + right ) / 2; //compare the value to the middle of the array. If it's a match return true, else move the left position and right position accordingly if( value == values[ mid ] ) { return true; } else if ( value < values[ mid ] ) { right = mid - 1; } else { left = mid + 1; } //return the function return binarySearch( value, values, left, right ); } Answer: You return a boolean to indicate whether the value was found or not. The function would be more useful if you returned an index at which the value was found (or -1 if not found). It's better information for the same amount of work. It appears that left is the leftmost index to consider, and right is the rightmost index to consider. It would be more idiomatic to follow the convention of inclusive-exclusive bounds, with left being the leftmost index and right being just beyond the last element. That way, right - left indicates the number of elements in the array. You've implemented the search using recursion, but it could also be done using just a loop. You would avoid the overhead of function calls and the possibility of stack overflow. int mid = ( left + right ) / 2 is vulnerable to integer overflow if both indices are large. int mid = left + (right - left) / 2 would not overflow.
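Combining the answer's two suggestions, returning an index instead of a boolean and using a loop with inclusive-exclusive bounds, an illustrative sketch in Python (not the original C) might look like:

```python
def binary_search(values, target):
    """Iterative binary search over a sorted list.
    Returns the index of target, or -1 if absent.
    Bounds follow the inclusive-exclusive convention: [left, right)."""
    left, right = 0, len(values)
    while left < right:
        # Written this way to avoid overflow in fixed-width-integer languages
        mid = left + (right - left) // 2
        if values[mid] == target:
            return mid
        elif target < values[mid]:
            right = mid      # exclusive upper bound
        else:
            left = mid + 1
    return -1
```

With `right` exclusive, `right - left` is always the number of elements still under consideration, and the empty range falls out naturally.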
{ "domain": "codereview.stackexchange", "id": 24522, "tags": "algorithm, c, recursion, binary-search" }
Why doesn't gravity mess up the double slit experiment?
Question: So let's say you are doing a double slit experiment. Also, let's use electrons. My question is, won't the gravity of the electron affect the earth, thereby causing decoherence and its wave function to collapse (or, for MWI, entanglement and loss of information to the environment, preventing interference)? The reason why I think this would happen is because you could tell which path the electron took based off its tidal effects on the earth: everything is a detector. Answer: Yes, everything is a detector, but you need to quantify which things your system interacts with (and how strongly). Gravity is in some sense a poor example, because the quantum details of gravity are still an unsettled question (and gravity is a weak force regardless), so let's bypass that red herring by replacing gravity with the electromagnetic field: As your charged electron accelerates one way or another in a Stern-Gerlach apparatus or double-slit experiment, in theory it should radiate electromagnetic waves. Moreover, you would expect to be able to determine its position, by measuring differences in how particles in the environment are affected by the electron's EM field, right? Basically, the reason you still observe interference fringes is because the coupling with the environment is weak. (Whereas if you gradually adjust the experimental parameters to increase the strength of coupling to the environment, then the fringes gradually fade.) Weak means that if you do the math, it isn't possible even in principle to infer sufficient information from the environment. You might enjoy some of Zeilinger's journal papers, such as the experimental demonstration of slit interference fringes with buckyballs (which are over a million times more massive than electrons), including demonstration of gradual decoherence (controlling the strength of interaction with the environment). You could also look at QM papers on weak measurement, or decoherence theory.
{ "domain": "physics.stackexchange", "id": 27650, "tags": "quantum-mechanics, gravity, electrons, double-slit-experiment, decoherence" }
How is Zn not a transition metal?
Question: A transition metal can be defined as an element that possesses an incomplete sub-level in one or more of its oxidation states. In the textbook I'm reading, it claims that zinc is not a transition metal because it has a full $d$-sub-level in all its oxidation states. A quick google reveals that zinc has oxidation states $-2, 0, +1$, which means that zinc (with oxidation number $+1$) has an incomplete $d$-sub-level and is a transition metal. What's going on here? Is my textbook incorrect? Answer: Zinc in the +1 oxidation state is $\text{[Ar]}3d^{10}4s^1$, and even in its highest, most common known oxidation state +2 (which the quoted values above seem to have forgotten) it's still $\text{[Ar]}3d^{10}$. No known zinc species in what we normally consider the realm of chemistry breaks that complete $3d^{10}$ subshell, and we would need a major revamp of our calculations and models if any ever does. Moreover, the thermophysical properties of zinc also betray a loss of transition-metal character. Zinc is just not a transition element.
{ "domain": "chemistry.stackexchange", "id": 12839, "tags": "transition-metals, oxidation-state" }
Help plotting a trip to the Moon!
Question: Take me to the moon! I know this is literally rocket science but, I would like any sources or formulas that would help me plot a trip to the Moon. I'm not sending anything, nor do I ever plan to, I just want to do the next best thing by plotting the trip. So if anyone could give me any info as to how I can plot the trip, that'd be great! :) P.S. I'm perfectly aware that the moon isn't always the same distance away, that it's like hopping onto a taxi in New York City, and that it's a lot of work (I wouldn't mind doing it really), I'd just like to plot the trip. And please don't hate on me for asking, let's just be civil adults here. :) Answer: This website has an in-depth mathematical analysis of the Apollo 11 translunar trajectory. It looks to be a reasonably reliable resource but I've only had a quick skim through, so you should check the sources cited. There should be enough information there to make a computer program to calculate the orbits.
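Before diving into a full Apollo-style trajectory, a common first pass is a Hohmann-style transfer estimate. The sketch below is my own simplification (two-body, coplanar circular orbits, and it ignores the Moon's gravity on the way out), so treat the numbers only as a rough starting point for the plot:

```python
import math

MU_EARTH = 3.986e14       # m^3/s^2, Earth's gravitational parameter
R_LEO = 6.378e6 + 300e3   # m, a 300 km parking orbit (assumed)
R_MOON = 3.844e8          # m, mean Earth-Moon distance

# Transfer ellipse: perigee at the parking orbit, apogee at the Moon's distance
a = (R_LEO + R_MOON) / 2                           # semi-major axis
t_transfer = math.pi * math.sqrt(a**3 / MU_EARTH)  # half an orbital period

v_circ = math.sqrt(MU_EARTH / R_LEO)               # parking-orbit speed
v_peri = math.sqrt(MU_EARTH * (2 / R_LEO - 1 / a)) # perigee speed on transfer
dv_tli = v_peri - v_circ                           # trans-lunar injection burn

print(f"transfer time: {t_transfer / 86400:.1f} days")  # roughly 5 days
print(f"TLI delta-v:  {dv_tli:.0f} m/s")                # roughly 3100 m/s
```

Both figures come out in the right ballpark for Apollo-era missions, which is a good sanity check before attempting a numerically integrated three-body trajectory.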
{ "domain": "astronomy.stackexchange", "id": 1026, "tags": "the-moon, space-travel" }
Why is the force acting down on an object submerged in a fluid only equal to the force of gravity?
Question: I was reading through a solution to the following problem: What acceleration will a completely submerged object experience if its density is three times that of the fluid in which it is submerged? The solution states $F_\text{down} - F_\text{buoyancy} = m_\text{object}a_\text{object}$. This is reasonable so far, but then it states that $F_\text{down} = F_\text{weight}$. This confuses me; if an object is completely submerged then doesn't it have both the force of gravity and the force of the liquid above it pushing it down? In other words, isn't the force pushing it down equal to $F_g + \rho ghA$ where $A$ is the area of the surface of the object, $h$ is the depth at which the object is submerged, and $\rho$ is the density of the liquid? What am I missing? Answer: Expanding on @SebastianRiese's comment, the buoyant force already takes care of the downward force caused by the liquid above. Let's consider the problem from a physical perspective. There is the downward force from gravitation, the downward force from the liquid above, and the upward force from the liquid below. In the diagram, the gravitational force isn't shown but the pressures causing the upward and downward forces from the liquid are shown. The buoyant force is calculated by subtracting these two pressures and multiplying by the cross-sectional area: $F_b = (P_B - P_T)A = (\rho g(y+h) - \rho gy)A = \rho ghA = \rho gV$ Hence, the net force is $F_{net} = F_g - F_b = F_g - (F_B - F_T) = F_g + F_T - F_B = F_{downward} - F_{upward}$ Hence, the buoyant force already takes into account the downward force caused by the liquid above.
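Plugging the original problem's numbers into these relations (object density three times the fluid's), a quick check of the resulting acceleration:

```python
g = 9.81      # m/s^2
ratio = 3.0   # object density / fluid density

# Newton's second law: rho_obj*V*a = rho_obj*V*g - rho_fluid*V*g
# => a = g * (1 - rho_fluid / rho_obj)
a = g * (1 - 1 / ratio)
print(a)  # ~6.54 m/s^2, i.e. 2g/3 downward
```

The object sinks with two-thirds of free-fall acceleration; the "force of the liquid above" never appears separately because it is already folded into the buoyant term.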
{ "domain": "physics.stackexchange", "id": 22126, "tags": "buoyancy" }
Validating a name with C#, ASP.NET Core 3.1 and Regex
Question: This class validates that a name contains only letters and optionally spaces, hyphens and apostrophes. public class NameAttribute : ValidationAttribute { protected override ValidationResult IsValid(object value, ValidationContext validationContext) { if (value == null) return ValidationResult.Success; string name = value.ToString(); var regex = new Regex(@"^[a-zA-Z]+(?:['-][a-zA-Z\s*]+)*$"); return regex.IsMatch(name) ? ValidationResult.Success : new ValidationResult($"The name '{name}' is invalid, it should consist of only letters, and optionally spaces, apostrophes and/or hyphens."); } } One thing I wasn't sure on was the null check at the start, we've got the Required attribute if something is required and I don't want the validation to occur if the value is null. Is this an appropriate way of handling it? Answer: From the separation of concerns perspective it seems fine that you do not want to validate against emptiness. On the other hand, one can ask why you treat an empty string as a valid name. My suggestion is to create a ctor with a parameter, where the consumer of the attribute can define how it should behave in case of empty input. public class NameAttribute : ValidationAttribute { private static readonly Regex NameRegex = new Regex(@"^[a-zA-Z]+(?:['-][a-zA-Z\s*]+)*$", RegexOptions.Compiled); private readonly bool shouldTreatEmptyAsInvalid; public NameAttribute(bool shouldTreatEmptyAsInvalid = true) { this.shouldTreatEmptyAsInvalid = shouldTreatEmptyAsInvalid; } protected override ValidationResult IsValid(object value, ValidationContext validationContext) { if (value == null) return shouldTreatEmptyAsInvalid ? new ValidationResult($"The name is invalid, it should not be empty.") : ValidationResult.Success; return NameRegex.IsMatch(value.ToString()) ?
ValidationResult.Success : new ValidationResult($"The name '{value}' is invalid, it should consist of only letters, and optionally spaces, apostrophes and/or hyphens."); } } I moved the creation of the Regex to a static field, but you should take some measurements in order to be sure which option has the best performance. Please check this MSDN article for other alternatives.
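To see exactly what this pattern accepts, here is a quick check of the same regex in Python (my own illustration, not from the answer). One quirk worth knowing: because `\s` only appears inside the group introduced by `'` or `-`, a name with a bare space such as "Mary Jane" is rejected, even though the error message promises that spaces are allowed:

```python
import re

# Same pattern as the C# attribute
name_re = re.compile(r"^[a-zA-Z]+(?:['-][a-zA-Z\s*]+)*$")

print(bool(name_re.match("O'Brien")))      # True
print(bool(name_re.match("Smith-Jones")))  # True
print(bool(name_re.match("John3")))        # False: digits rejected
print(bool(name_re.match("Mary Jane")))    # False: bare space not allowed
```

Also note that the literal `*` inside the character class means names like "O'B***" pass, which is probably unintended.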
{ "domain": "codereview.stackexchange", "id": 38702, "tags": "c#, regex, asp.net, asp.net-core" }
Filtering spam from retrieved data
Question: I once heard that filtering spam by using blacklists is not a good approach, since some user searching for entries in your dataset may be looking for particular information from the sources blocked. Also it'd become a burden to continuously validate the current state of each spammer blocked, checking if the site/domain still disseminates spam data. Considering that any approach must be efficient and scalable, so as to support filtering on very large datasets, what are the strategies available to get rid of spam in a non-biased manner? Edit: if possible, any example of a strategy, even if just the intuition behind it, would be very welcome along with the answer. Answer: Spam filtering, especially in email, has been revolutionized by neural networks; here are a couple of papers that provide good reading on the subject: On Neural Networks And The Future Of Spam A. C. Cosoi, M. S. Vlad, V. Sgarciu http://ceai.srait.ro/index.php/ceai/article/viewFile/18/8 Intelligent Word-Based Spam Filter Detection Using Multi-Neural Networks Ann Nosseir, Khaled Nagati and Islam Taj-Eddin http://www.ijcsi.org/papers/IJCSI-10-2-1-17-21.pdf Spam Detection using Adaptive Neural Networks: Adaptive Resonance Theory David Ndumiyana, Richard Gotora, and Tarisai Mupamombe http://onlineresearchjournals.org/JPESR/pdf/2013/apr/Ndumiyana%20et%20al.pdf EDIT: The basic intuition behind using a neural network to help with spam filtering is to provide a weight to terms based on how often they are associated with spam. Neural networks can be trained most quickly in a supervised environment, where you explicitly provide the classification of each sentence in the training set.
Without going into the nitty gritty, the basic idea can be illustrated with these sentences: Text = "How is the loss of the Viagra patent going to affect Pfizer", Spam = false Text = "Cheap Viagra Buy Now", Spam = true Text = "Online pharmacy Viagra Cialis Lipitor", Spam = true For a two-stage neural network, the first stage will calculate the likelihood of spam based on whether the word exists in the sentence. So from our example: viagra => 66% buy => 100% Pfizer => 0% etc.. Then for the second stage, the results of the first stage are used as variables in the second stage: viagra & buy => 100% Pfizer & viagra => 0% This basic idea is run for many of the permutations of all the words in your training data. The end result, once trained, is basically just an equation that, based on the context of the words in the sentence, can assign a probability of being spam. Set a spamminess threshold, and filter out any data higher than said threshold.
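The first-stage intuition (per-word spam likelihood from a labelled training set) can be sketched with plain counting. This toy version is my own illustration of the percentages above, not a real neural network:

```python
training = [
    ("How is the loss of the Viagra patent going to affect Pfizer", False),
    ("Cheap Viagra Buy Now", True),
    ("Online pharmacy Viagra Cialis Lipitor", True),
]

def word_spam_probability(word):
    """Estimate P(spam | sentence contains word) from the training set."""
    labels = [spam for text, spam in training
              if word.lower() in text.lower().split()]
    if not labels:
        return None  # word never seen in training
    return sum(labels) / len(labels)

print(word_spam_probability("viagra"))  # 2/3 ~ 66%
print(word_spam_probability("buy"))     # 1.0
print(word_spam_probability("pfizer"))  # 0.0
```

A neural network's training procedure effectively learns weights playing the role of these ratios, plus the second-stage combinations, from many such labelled examples.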
{ "domain": "datascience.stackexchange", "id": 36, "tags": "bigdata, efficiency" }
Producing residues of combinatorics on strings
Question: Basically, this code does multiplication of chosen operations. public static String[] gen8String(String[] pattern1, String[] pattern2){ String[] combinedSubset = new String[90]; //empty array for the subset of 90 strings String combinedString = ""; //string holder for each combined string int index = 0; //used for combinedSubset array int present = 0; //used to check if all 6 characters are present for(int i = 0; i < 15; i++){ for(int j = 0; j < 15; j++){ combinedString = pattern1[i] + pattern2[j]; //combine both 4 letter strings into 8 char length string char[] parsedString = combinedString.toCharArray(); //parse into array //check if all 6 characters are present for(int k = 1; k <= 6; k++) { if(new String(parsedString).contains(k+"")) { present++; } else break; //if all 6 are present, then add it to combined subset if(present == 6) combinedSubset[index++] = combinedString; } present = 0; } } return combinedSubset; } Let's look at an example: Let's say I have 2 strings ABCDEF and ACDEBF. Now before I call gen8string, I will first perform a ${6 \choose 4}$ deletion operation on each string. For instance, I will take ABCDEF, delete any two characters in that string, and I am left with a substring of length 4. How many different such strings can I have (given all the letters are unique)? $15 = {6 \choose 4}$. Now, I will do the same for both strings. Now I am left with 2 string arrays, each of size 15. In gen8String, I am passing in the 2 above-mentioned arrays and I am combining the substrings under the following constraints: I can only combine 2 strings to make one 8-length string (4+4). I can only combine them if there are no overlapping missing characters. What do I mean by the 2nd point? Well, let's say one of the inputs in pattern1 is ABCD and in pattern2 one of the inputs is ACDE. These 2 substrings cannot be combined because there are overlapping missing characters, i.e. the F.
However, if I have ABCD in pattern1 and CDEF in pattern2, these can be combined because all 6 characters are present in the 8-length string at least once. So, no overlapping missing characters. This can be seen by tracing through the code. As a concluding note, this function essentially does a $${{6}\choose{4}} * {{4}\choose{2}} = 90$$ How can I optimize/generalize this code? Ways to improve it? Answer: Think about alternatives Let's say I have 2 strings ABCDEF and ACDEBF. Now before I call gen8string, I will first perform a $\binom 6 4$ deletion operation on each string. For instance, I will take ABCDEF, delete any two characters in that string, and I am left with a substring of length 4. How many different such strings can I have (given all the letters are unique)? $15=\binom 6 4$. Now, I will do the same for both strings. Now I am left with 2 string arrays, each of size 15. Why? You want to generate 90 strings. Why generate 225 strings and then mask out 135 of them? Why not just generate 90 strings directly? ABEFCDEF 001122 ABDFCDEF 001212 ABDECDEF 001221 ... ABCFBDEF 020112 ... ABEFABCD 221100 The six numbers are which string has the letter. So 001122 means AB in the first string, CD in the second string, and EF in both. Iterate from 001122 to 221100 or vice versa. Then you don't need gen8String at all, as all it does is eliminate invalid strings like 2222-- and 22210-. Of course, this would happen outside the code that you posted.
Working with what we have here You have public static String[] gen8String(String[] pattern1, String[] pattern2){ String[] combinedSubset = new String[90]; //emty array for the subset of 90 strings String combinedString = ""; //string holder for each combined string int index = 0; //used for combinedSubset array int present = 0; //used to check if all 6 characters are present for(int i = 0; i < 15; i++){ for(int j = 0; j < 15; j++){ combinedString = pattern1[i] + pattern2[j]; //combine both 4 letter strings into 8 char length string char[] parsedString = combinedString.toCharArray(); //parse into array //check if all 6 characters are present for(int k = 1; k <= 6; k++) { if(new String(parsedString).contains(k+"")) { present++; } else break; //if all 6 are present, then add it to combined subset if(present == 6) combinedSubset[index++] = combinedString; } present = 0; } } return combinedSubset; } Consider public static String[] gen8String(String[] patterns1, String[] patterns2) { String[] results = new String[90]; int resultCount = 0; for (String pattern1 : patterns1) { for (String pattern2 : patterns2) { String combined = pattern1 + pattern2; //check if all 6 characters are present for (int k = 1; combined.contains(k+""); k++) { //if all 6 are present, then add it to the results and end if (k >= 6) { results[resultCount++] = combined; break; } } } } return results; } This changes the names of the parameters. They are collections of patterns, not the patterns themselves. I changed the names of combinedSubset and index to be clearer about how they relate to the results. This allowed me to drop the comments as obsolete. Switching from C-style for loops to range based for loops gets rid of the unnecessary i and j variables. It also gets rid of the magic number 15. I renamed combinedString to combined and moved the declaration to where it was set. Since a single value is never used across iterations of the outer two loops, the scope doesn't need to be any wider. 
We don't need present, as we already have k. If we make it to the sixth iteration, we can update the results and stop the loop. If we gate the loop on the contains check, we can just check the iterations once inside the loop. The original code checked twice on each iteration. Note that your example suggested inputs like ABCD and CDEF. The patterns here are actually like 1234 and 3456. Generalizing Your code only works for four element subsets of a six element set. Consider rewriting so the results are a List. Then you wouldn't have to give a result set size or maintain your own count of the results. The largest value for k (6 here) should probably be a parameter. Or pass in an array or list over which we could iterate. I already modified the for loops to be based on the size of the input parameters, so that's already generalized. And of course the name would need to change. Precalculate Because the current results are so narrow, you could just set them manually. One ninety element array.
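The "generate the 90 strings directly" idea above can be sketched in a few lines (my own illustration in Python, not part of the original answer). Digits 1-6 stand in for the six letters; per the answer's encoding, 0 = first string only, 1 = second string only, 2 = both, which forces exactly two letters in each category:

```python
from itertools import product

letters = "123456"
results = []
for assignment in product("012", repeat=6):
    # Each 4-letter half needs count("0") + count("2") == 4 and
    # count("1") + count("2") == 4, which forces exactly (2, 2, 2):
    counts = (assignment.count("0"), assignment.count("1"), assignment.count("2"))
    if counts != (2, 2, 2):
        continue
    first = "".join(l for l, a in zip(letters, assignment) if a in "02")
    second = "".join(l for l, a in zip(letters, assignment) if a in "12")
    results.append(first + second)

print(len(results))  # 90 = C(6,4) * C(4,2) = 6!/(2! 2! 2!)
```

Every generated string is valid by construction (all six characters appear at least once), so no post-filtering step is needed.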
{ "domain": "codereview.stackexchange", "id": 22796, "tags": "java, performance, algorithm, strings, combinatorics" }
Adding multiple rows in an HTML table using jQuery
Question: I have this function that allows the user to supply a number and that many rows are appended to the existing table: $('#add-row').click(function() { if($('#insert-rows-amnt').val() < 1) { alert('Please enter number of rows to insert'); } else { for(i = 1; i <= $('#insert-rows-amnt').val(); i++) { $('table tr:last').clone(true).appendTo("tbody"); } } }); Where #insert-rows-amnt is an input field and in this case contains the number 5000. This currently takes 48 seconds on my machine to add 5,000 additional rows - is there a quicker way of doing this? I'm not against putting a loading message somewhere if there isn't a way to significantly increase the speed. I'm just curious if this is the most efficient way. Answer: I'd do it like this: $('#add-row').click(function() { var $tbody, $row, additionalRows; var numNewRows = parseInt($('#insert-rows-amnt').val(), 10); if (isNaN(numNewRows) || numNewRows <= 0) { alert('Please enter number of rows to insert'); } else { $tbody = $('table tbody'); $row = $tbody.find('tr:last'); additionalRows = new Array(numNewRows + 1).join($row[0].outerHTML); $tbody.append(additionalRows); } }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <table> <thead> <th>Col1</th> <th>Col2</th> </thead> <tbody> <tr> <td>1</td> <td>2</td> </tr> </tbody> </table> <input type="number" id="insert-rows-amnt" /> <button id="add-row" type="button">Add Rows</button> Firstly, don't trust user input, verify you actually have a number, try to parse it and be explicit. Always use the radix, parseInt() can try to be too clever for its own good. The result could be NaN or a number, so check for these: var numNewRows = parseInt($('#insert-rows-amnt').val(), 10); if (isNaN(numNewRows) || numNewRows <= 0) { If you're using JQuery, cache found objects, finding them isn't free, a search is done every time you pass a string selector, so if you've already found it, use it. 
This isn't going to set the world on fire, but it's a good habit to get into. Onto the meat of the method, there are two methods for inserting HTML quickly. HTML updates are like Excel Worksheet updates, no small amount of processing goes on when you change the dom. Most of the time this is negligible, but if you're inserting a huge amount of html it's something to be aware of. You could create a document fragment and append nodes to it. This gives the benefit of working with objects rather than strings, so it is less error prone. The essence of it is you create the fragment in memory and insert it once. You can create the html as a string and insert it, it's a single dom update, but more error prone as you're string building. This case is simple enough for it to be a good option though.
{ "domain": "codereview.stackexchange", "id": 23312, "tags": "javascript, jquery" }
Studying dynamic elasticity for finite deformations
Question: This is not a question asking for help with a problem but one asking for help where to begin serious study of elasticity, particularly as applied to dynamic systems. Most textbooks about elasticity focus on strain and stress and mention little about how position and energy evolve with time; my interest lies chiefly in looking at how elastic rods, rings etc. respond to a deformation. I was wondering where I should begin in terms of material like textbooks or online lecture series for this. I have looked extensively and found scores of books and websites that are useful for static elasticity, but preferably, as an undergraduate physicist, I'd like an introduction like, perhaps, the Landau & Lifshitz classical mechanics book. Answer: I think that a good place to start is with decent introductory books about continuum mechanics in general, to get an idea about how the framework works in general, how systems are described, and how different quantities are transported. For this I myself really liked as a starting point: Liu, Continuum Mechanics, Berlin, Heidelberg: Springer, 2002 It is by far not the most complete account, but it is a nice, well motivated, readable introduction which gives you the basic concepts. For further reading, I myself like: Haupt, Continuum Mechanics and Theory of Materials, Berlin Heidelberg: Springer, 2000 It is, for my taste, a bit less clear and concise, but the extent is quite superior to Liu's book and I always found it useful for looking up things that were not covered adequately in Liu's work. Finally, if you have a lot of time to invest, I personally suggest Marsden and Hughes, Mathematical Foundations of Elasticity, Mineola: Dover Publications, 1994 This book is very mathematical but it is, in my opinion, worth it, and even by only reading the first chapters you will get a very, very solid basis for the underlying mathematics.
If you understand the framework of continuum mechanics itself, I think the question of which material you actually have can be more easily addressed and understood. I am personally not working on elasticity, so I unfortunately cannot give you any specific works for that, but as the response to your question seemed pretty low, I thought it was a good idea to still post my recommendations.
{ "domain": "physics.stackexchange", "id": 31812, "tags": "classical-mechanics, resource-recommendations, elasticity, stress-strain, non-linear-systems" }
DSP low pass filter (IIR) no longer works when changed to a new MCU
Question: Having trouble understanding why a DSP low-pass filter that was working on the M4 is no longer working on an M7. I recently switched over to a STM32H753ZI from a STM32L432KC. In addition to switching from the L4 to the H7, I am using the PMODI2S2 with the H7 and not the internal ADC like I was when using the L4. The only thing that came to mind would be the difference in sampling rates. I was using a 44.1kHz sampling rate on the L4 and now I am using a 96kHz sampling rate on the H7 using the PMODI2S2. So I re-did the discrete function and put in the new IIR coefficients, and no cigar. Using the H7 with the PMODI2S2 as a passthrough: CODE: #define ARM_MATH_CM7 #include "main.h" #include "arm_math.h" void init_Clock(void); void init_I2S(void); void init_Debugging(void); void init_Interrupt(void); void init_SpeedTest(void); uint32_t RxBuff[4]; uint32_t TxBuff[4]; uint8_t TC_Callback = 0; uint8_t HC_Callback = 0; char uartBuff[8]; float iir_coeffs[5] = {0.00102, 0.002041, 0.00102, 1.908, -0.9116}; //B0, B1, B2, A1, A2 float iir_mono_state[4]; float Rx_Buff_f[8]; float Rx_Buff_f_out[8]; arm_biquad_casd_df1_inst_f32 monoChannel; void DMA1_Stream0_IRQHandler(void) { if (((DMA1 -> LISR) & (DMA_LISR_TCIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CTCIF0; TC_Callback = 1; } else if (((DMA1 -> LISR) & (DMA_LISR_HTIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CHTIF0; HC_Callback = 1; } } int main(void) { init_Clock(); init_I2S(); //init_Debugging(); init_Interrupt(); //init_SpeedTest(); arm_biquad_cascade_df1_init_f32(&monoChannel, 1, iir_coeffs, iir_mono_state); while (1) { if (HC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BS3_HIGH; for (int i = 0; i < 2; i++){ TxBuff[i] = RxBuff[i]; } HC_Callback = 0; } else if (TC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BR3_LOW; for (int i = 2; i < 4; i++){ TxBuff[i] = RxBuff[i]; } TC_Callback = 0; } } } H7 with PMODI2S2 with IIR coefficients using 96kHz sampling rate: Code: #define ARM_MATH_CM7 #include "main.h" #include "arm_math.h" void
init_Clock(void); void init_I2S(void); void init_Debugging(void); void init_Interrupt(void); void init_SpeedTest(void); uint32_t RxBuff[4]; uint32_t TxBuff[4]; uint8_t TC_Callback = 0; uint8_t HC_Callback = 0; char uartBuff[8]; float iir_coeffs[5] = {0.00102, 0.002041, 0.00102, 1.908, -0.9116}; //B0, B1, B2, A1, A2 float iir_mono_state[4]; float Rx_Buff_f[8]; float Rx_Buff_f_out[8]; arm_biquad_casd_df1_inst_f32 monoChannel; void DMA1_Stream0_IRQHandler(void) { if (((DMA1 -> LISR) & (DMA_LISR_TCIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CTCIF0; TC_Callback = 1; } else if (((DMA1 -> LISR) & (DMA_LISR_HTIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CHTIF0; HC_Callback = 1; } } int main(void) { init_Clock(); init_I2S(); //init_Debugging(); init_Interrupt(); //init_SpeedTest(); arm_biquad_cascade_df1_init_f32(&monoChannel, 1, iir_coeffs, iir_mono_state); while (1) { if (HC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BS3_HIGH; for (int i = 0; i < 2; i++){ Rx_Buff_f[i] = (float)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, Rx_Buff_f, Rx_Buff_f_out, 2); for (int i = 0; i < 2; i++){ TxBuff[i] = (uint32_t)Rx_Buff_f_out[i]; } HC_Callback = 0; } else if (TC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BR3_LOW; for (int i = 2; i < 4; i++){ Rx_Buff_f[i] = (float)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, &Rx_Buff_f[2], &Rx_Buff_f_out[2], 2); for (int i = 2; i < 4; i++){ TxBuff[i] = (uint32_t)Rx_Buff_f_out[i]; } TC_Callback = 0; } } } So I thought to myself, since I am using a I2S protocol and since its stereo I tried using a sampling rate of 192kHz just to see what happens: CODE: #define ARM_MATH_CM7 #include "main.h" #include "arm_math.h" void init_Clock(void); void init_I2S(void); void init_Debugging(void); void init_Interrupt(void); void init_SpeedTest(void); uint32_t RxBuff[4]; uint32_t TxBuff[4]; uint8_t TC_Callback = 0; uint8_t HC_Callback = 0; char uartBuff[8]; float iir_coeffs[5] = {0.0002507, 0.0005013, 0.0002507, 1.955, -0.9557}; //B0, B1, B2, A1, A2 float 
iir_mono_state[4]; float Rx_Buff_f[8]; float Rx_Buff_f_out[8]; arm_biquad_casd_df1_inst_f32 monoChannel; void DMA1_Stream0_IRQHandler(void) { if (((DMA1 -> LISR) & (DMA_LISR_TCIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CTCIF0; TC_Callback = 1; } else if (((DMA1 -> LISR) & (DMA_LISR_HTIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CHTIF0; HC_Callback = 1; } } int main(void) { init_Clock(); init_I2S(); //init_Debugging(); init_Interrupt(); //init_SpeedTest(); arm_biquad_cascade_df1_init_f32(&monoChannel, 1, iir_coeffs, iir_mono_state); while (1) { if (HC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BS3_HIGH; for (int i = 0; i < 2; i++){ Rx_Buff_f[i] = (float)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, Rx_Buff_f, Rx_Buff_f_out, 2); for (int i = 0; i < 2; i++){ TxBuff[i] = (uint32_t)Rx_Buff_f_out[i]; } HC_Callback = 0; } else if (TC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BR3_LOW; for (int i = 2; i < 4; i++){ Rx_Buff_f[i] = (float)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, &Rx_Buff_f[2], &Rx_Buff_f_out[2], 2); for (int i = 2; i < 4; i++){ TxBuff[i] = (uint32_t)Rx_Buff_f_out[i]; } TC_Callback = 0; } } } Any ideas? I am not sure if its the M7 or the peripheral in question. This was working on an L4, no problem. UPDATE 1: I recorded the variables in debugger mode to see what is happening. I took three pictures. The first iteration is index 0-2 and the second iteration from 2-4 and the third picture is many iterations afterwards. What I noticed is that RxBuffer and RxBuffer_f are out of sync. I also noticed that many iterations later the RxBuffer_f_out just becomes an int like data type and no longer contain any sort of decimals. UPDATE 2: I also notice that I am using a I2S device that shoots out stereo audio, am I maybe not adding the coefficients properly to the buffers. What I mean by this do I need to adjust the buffers when they come in, like bit shift them or anything along those lines? 
The only thing I know about that PMODI2S2 is that I believe it shoots out 24 bits in a 32 data frame, so I am assuming its padded with zeroes and why not. UPDATE 3: Was playing around with just multiplying the RxBuffer before putting in the TxBuffer and what it did was increase the PK - PK of the signal, however increasing it more caused this: Multiplying the RxBuffer by 2^0 (Passthrough) Multiplying the RxBuffer by 2^1 Multiplying the RxBuffer by 2^2 The last picture looks like the problem I am having, is this maybe an overflow issue? UPDATE 4: Talking to a concerned citizen he mentioned the I2S protocol is a 2's complement data encoded. I know what 2's complement is, however I am not sure if the TxBuff or the Rxbuff needs to be complemented. Anyhow I changed both data type of the TxBuff and the Rxbuff to int32_t datatypes and the problem still insist. UPDATE 5: Tried using the 2's complement or simply just casting it as an int32_t. No luck. CODE: #define ARM_MATH_CM7 #include "main.h" #include "arm_math.h" void init_Clock(void); void init_I2S(void); void init_Debugging(void); void init_Interrupt(void); void init_SpeedTest(void); int32_t RxBuff[4]; int32_t TxBuff[4]; uint8_t TC_Callback = 0; uint8_t HC_Callback = 0; char uartBuff[8]; float32_t iir_coeffs[5] = {0.00102, 0.002041, 0.00102, 1.908, -0.9116}; //B0, B1, B2, A1, A2 float32_t iir_mono_state[4]; float32_t Rx_Buff_f[4]; float32_t Rx_Buff_f_out[4]; arm_biquad_casd_df1_inst_f32 monoChannel; void DMA1_Stream0_IRQHandler(void) { if (((DMA1 -> LISR) & (DMA_LISR_TCIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CTCIF0; TC_Callback = 1; } else if (((DMA1 -> LISR) & (DMA_LISR_HTIF0)) != 0){ DMA1 -> LIFCR |= DMA_LIFCR_CHTIF0; HC_Callback = 1; } } int main(void) { init_Clock(); init_I2S(); //init_Debugging(); init_Interrupt(); //init_SpeedTest(); arm_biquad_cascade_df1_init_f32(&monoChannel, 1, iir_coeffs, iir_mono_state); while (1) { if (HC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BS3_HIGH; for (int i = 0; i < 2; i++){ 
Rx_Buff_f[i] = (float32_t)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, Rx_Buff_f, Rx_Buff_f_out, 2); for (int i = 0; i < 2; i++){ TxBuff[i] = Rx_Buff_f_out[i]; } HC_Callback = 0; } else if (TC_Callback == 1){ // GPIOA->BSRR |= GPIO_BSRR_BR3_LOW; for (int i = 2; i < 4; i++){ Rx_Buff_f[i] = (float32_t)RxBuff[i]; } arm_biquad_cascade_df1_f32(&monoChannel, &Rx_Buff_f[2], &Rx_Buff_f_out[2], 2); for (int i = 2; i < 4; i++){ TxBuff[i] = Rx_Buff_f_out[i]; } TC_Callback = 0; } } } Answer: I2S audio samples are signed two's complement. Just add $2^{N-1}$, where $N$ is the number of bits, to the result, and binary and by $2^N-1$, to get the range to $0\ldots2^{N}-1$, which I think you used to get from the built-in analog-to-digital converter (ADC). Do this both to the data you receive and the data you transmit using I2S. You can optimize the calculation a little and combine the add and the binary and into just a binary xor by $2^{N-1}$. Or start working with signed numbers. If the 24-bit data is not left-aligned in the 32-bit words, you either need to A) extend the sign bit or B) left-shift the received data and right-shift the data to be transmitted. You should consult the ADC and DAC datasheets to confirm how the data is aligned, and if that is configurable, that you have configured it properly on the processor and in the peripherals. In I2S, the audio sample bits should be read starting at the second rising edge of the clock line after a transition on the word select line. If your processor RX and TX are configured to start the data at the first rising edge of the clock line, and you don't bother to correct that, and if the peripherals are configured to use I2S, then you effectively have the data aligned starting at the second MSB (most significant bit) of each 32-bit word. In that case, a shift of one in solution B above should suffice. The first MSB of the 32-bit received words that alternates at the frequency of sinusoidal input is the MSB of the audio sample.
Also, if you switch from using uint32_t to using int32_t, check that your integer↔float conversions support int32_t. C-language type casts should work fine.
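The two's-complement / offset-binary bookkeeping described in the answer can be sketched in plain Python (helper names are illustrative, not part of the original firmware; it assumes the 24-bit sample sits right-aligned in the 32-bit word — left-aligned data would need a shift first):

```python
def i2s_to_signed(word, bits=24):
    """Interpret the low `bits` of an I2S word as a two's-complement value."""
    word &= (1 << bits) - 1
    if word & (1 << (bits - 1)):   # sign bit set -> negative value
        word -= 1 << bits
    return word

def signed_to_offset(value, bits=24):
    """Shift a signed sample into the unsigned 0 .. 2**bits - 1 range.

    Adding 2**(bits-1) and masking is the same as XOR-ing the raw word
    with the sign bit, 2**(bits-1), as the answer notes.
    """
    return (value + (1 << (bits - 1))) & ((1 << bits) - 1)
```

For example, `signed_to_offset(i2s_to_signed(0xFFFFFF))` gives `0x7FFFFF`, identical to `0xFFFFFF ^ 0x800000`.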
{ "domain": "dsp.stackexchange", "id": 9438, "tags": "filters, analog-to-digital, digital-to-analog" }
Is floating in space similar to falling under gravity?
Question: In the case there is no air and your eye are closed, then does falling from the sky under gravity have the same feeling as floating in space? Can our body feel that we are accelerating without the air hitting us. If not how are they different? Also are free fall and zero g the same thing cause when we are falling freely we are accelerating at g towards earth then why would it be called "zero g"? Answer: Yes, they feel the same, and this observation is fundamental to how we think of gravity. Einstein said that not only do they feel the same, they are the same: movement under gravity alone is the same thing as movement under no force at all. The name for this assumption is the equivalence principle, and it underlies General Relativity: because we know that things experiencing no force at all move in straight lines through spacetime, we also know that things moving under gravity alone move in straight lines through spacetime, and this works because what gravity does is to curve spacetime, so that 'straight lines', which are now called geodesics, have properties which straight lines in a flat spacetime do not have, such as intersecting more than once. To be slightly more precise about this: there is (in GR) no local distinction between movement under gravity alone and movement under no force at all: because gravity distorts (curves) spacetime, there are experiments you can do which are not local which will tell you whether you are moving under gravity or under no force. Geometrically, these experiments consist of establishing whether straight lines have the properties you would expect in a flat spacetime or whether they have properties you would expect in a curved spacetime; physically the experiments consist of detecting 'tidal forces' which are forces which cause two separated objects (the being separated is what makes the experiment non-local), initially at rest relative to each other, to want to move away or towards each other over time.
{ "domain": "physics.stackexchange", "id": 58763, "tags": "newtonian-mechanics, newtonian-gravity, free-fall, equivalence-principle" }
How to determine which features matter the most?
Question: I have a large dataset that consists of search results of loans. Someone would input their details like income etc and the results would include a bunch of loans from different companies and different loan types (so there can be more than 1 loan per company). The dataset consists of every unique search and all the corresponding results. I also have a column that shows which loan has been selected at the end by the user per each search. I am looking to find out which features of the loan were most important to users, i.e. try to predict what loan the user will select depending on his/her inputs. What ML model could I use for this? I am unsure how to approach the problem. Answer: I see a couple great answers here! For something like this, I would lean towards Principal Component Analysis (sample code below) and Feature Selection (sample code below). Let's not confuse Feature Selection with Feature Engineering (Data Cleaning & Preprocessing, One-Hot-Encoding, Scaling, Standardizing, Normalizing, etc.) Principal Component Analysis: PCA is a technique for feature extraction — so it combines our input variables in a specific way, then we can drop the “least important” variables while still retaining the most valuable parts of all of the variables! As an added benefit, each of the “new” variables after PCA are all independent of one another. This is a benefit because the assumptions of a linear model require our independent variables to be independent of one another. Here is a nice example of how Principal Component Analysis works. 
import pandas as pd url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"# load dataset into Pandas DataFrame df = pd.read_csv(url, names=['sepal length','sepal width','petal length','petal width','target']) from sklearn.preprocessing import StandardScaler features = ['sepal length', 'sepal width', 'petal length', 'petal width']# Separating out the features x = df.loc[:, features].values# Separating out the target y = df.loc[:,['target']].values# Standardizing the features x = StandardScaler().fit_transform(x) from sklearn.decomposition import PCA pca = PCA(n_components=2) principalComponents = pca.fit_transform(x) principalDf = pd.DataFrame(data = principalComponents, columns = ['principal component 1', 'principal component 2']) finalDf = pd.concat([principalDf, df[['target']]], axis = 1) finalDf Result: principal component 1 principal component 2 target 0 -2.264542 0.505704 Iris-setosa 1 -2.086426 -0.655405 Iris-setosa 2 -2.367950 -0.318477 Iris-setosa 3 -2.304197 -0.575368 Iris-setosa 4 -2.388777 0.674767 Iris-setosa .. ... ... ... 145 1.870522 0.382822 Iris-virginica 146 1.558492 -0.905314 Iris-virginica 147 1.520845 0.266795 Iris-virginica 148 1.376391 1.016362 Iris-virginica 149 0.959299 -0.022284 Iris-virginica Continuing... 
# visualize results import matplotlib.pyplot as plt fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(1,1,1) ax.set_xlabel('Principal Component 1', fontsize = 15) ax.set_ylabel('Principal Component 2', fontsize = 15) ax.set_title('2 component PCA', fontsize = 20) targets = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'] colors = ['r', 'g', 'b'] for target, color in zip(targets,colors): indicesToKeep = finalDf['target'] == target ax.scatter(finalDf.loc[indicesToKeep, 'principal component 1'] , finalDf.loc[indicesToKeep, 'principal component 2'] , c = color , s = 50) ax.legend(targets) ax.grid() Reference: PCA using Python (scikit-learn) | Towards Data Science Feature Selection: A feature in a dataset simply means a column. When we get a dataset, not every column (feature) will have an impact on the output variable. If we add irrelevant features to the model, it will just make the model worse (Garbage In, Garbage Out). This gives rise to the need for feature selection. For a Feature Selection exercise, I like this example quite a lot.
import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline df = pd.read_csv("https://rodeo-tutorials.s3.amazonaws.com/data/credit-data-trainingset.csv") df.head() from sklearn.ensemble import RandomForestClassifier features = np.array(['revolving_utilization_of_unsecured_lines', 'age', 'number_of_time30-59_days_past_due_not_worse', 'debt_ratio', 'monthly_income','number_of_open_credit_lines_and_loans', 'number_of_times90_days_late', 'number_real_estate_loans_or_lines', 'number_of_time60-89_days_past_due_not_worse', 'number_of_dependents']) clf = RandomForestClassifier() clf.fit(df[features], df['serious_dlqin2yrs']) # from the calculated importances, order them from most to least important # and make a barplot so we can visualize what is/isn't important importances = clf.feature_importances_ sorted_idx = np.argsort(importances) padding = np.arange(len(features)) + 0.5 plt.barh(padding, importances[sorted_idx], align='center') plt.yticks(padding, features[sorted_idx]) plt.xlabel("Relative Importance") plt.title("Variable Importance") plt.show() Reference: http://blog.yhat.com/tutorials/5-Feature-Engineering.html
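Before committing to a number of principal components, it helps to look at how much variance each one explains. A NumPy-only sketch (synthetic data, not the loan dataset from the question) computes the explained-variance ratios straight from the covariance eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
# two strongly correlated columns plus one near-constant noise column
data = np.hstack([x,
                  2.0 * x + 0.1 * rng.normal(size=(200, 1)),
                  0.01 * rng.normal(size=(200, 1))])

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
explained_ratio = eigvals / eigvals.sum()    # same idea as PCA's explained_variance_ratio_
```

Here the first component captures nearly all of the variance, which is exactly the signal you would use to decide `n_components` in the scikit-learn pipeline above.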
{ "domain": "datascience.stackexchange", "id": 7136, "tags": "machine-learning, scikit-learn, feature-selection, machine-learning-model" }
How does the view of an observer near an event horizon change as a function of the observer's position, velocity, and acceleration?
Question: I see that this topic has been well covered from the perspective of a) a stationary observer dangling on the end of a rope and and b) an observer free-falling from infinity. What I'm still unsure about are the roles of velocity vs. acceleration. Is it all about velocity? Does an observer who is stationary near the horizon and only just starting to freefall see the same as a)? Also, for an observer who is descending at close to escape velocity but then fires his/her retrorockets to cancel the acceleration and achieve constant velocity, does he/she still see the same as b)? Answer: What you see is what lies on your past light cone. The past light cone only depends on spacetime position, not velocity or higher derivatives. What you see also depends on your velocity in the sense that it is "distorted" in different ways by Doppler shift and aberration. But given a (infinite-precision, omnidirectional) photograph of what someone at a particular location with a particular velocity sees, you can derive what someone at the same location with any other velocity will see just from that picture, without needing to know anything about the 4D spacetime it was originally derived from. Acceleration doesn't affect what an idealized camera/eye sees. Does an observer who is stationary near the horizon and only just starting to freefall see the same as a)? Also, for an observer who is descending at close to escape velocity but then fires his/her retrorockets to cancel the acceleration and achieve constant velocity, does he/she still see the same as b)? Yes to both questions, and also, if you ignore Doppler shift and aberration, all four of them see the same thing when they're at the same spacetime location even if they don't match their velocities.
{ "domain": "physics.stackexchange", "id": 71618, "tags": "general-relativity, event-horizon" }
"Active quantum fluctuations" due to the quantization of the gravitational field?
Question: This is a follow-up to the question In what sense is the word quantum fluctuation used here? In arXiv:0710.3787 it is stated on page 7 that there are "active fluctuations in the intrinsic degrees of freedom of gravity" and on page 13 that "Quantum stress tensor fluctuations lead to passive fluctuations of the gravitational field, which are in addition to the active fluctuations coming from the quantization of the gravitational field itself." I will once again give my definition of quantum fluctuations: "Quantum fluctuation" is an informal name for the fact that: In any quantum mechanical system with states in the Hilbert space $\mathcal H$, if you have a self-adjoint linear (densely defined, maybe?) operator $$\hat{\mathcal{O}}:\mathcal H\supset D(\hat{\mathcal O})\to\mathcal H$$ corresponding to some observable $\mathcal O$ and a state $\vert\psi\rangle\in\mathcal H$ which is not an eigenvector of $\hat{\mathcal O}$, then performing the same measurement on a physical system in the state $\vert\psi\rangle$ will lead to different results. For further discussion of my understanding see [1], [2], [3], [4]. In particular, the word "fluctuation" is misleading since the above fact need not have anything to do with a change in time (or space). Now, what do the "active fluctuations" in the quoted text blocks above refer to? Are they fluctuations in my sense? Are they something else? Answer: The "fluctuations" language is addressed in my answer to the previous question, so in this answer I'll only address the "active" and "passive" language. In classical general relativity, the metric is still dynamic even when/where the stress-energy tensor is zero, which is why gravitational waves are possible. "Active quantum fluctuations" refers to quantum fluctuations in those gravitational-wave degrees of freedom. 
"Passive" refers to fluctuations in the rest of the metric degrees of freedom, those that are tied to fluctuations in the stress-energy tensor as shown in equation (1) of arXiv:1703.05331.
{ "domain": "physics.stackexchange", "id": 82945, "tags": "quantum-mechanics, quantum-field-theory, terminology, quantum-gravity, foundations" }
Latent heat vs temperature of phase transitions?
Question: Is the latent heat associated with phase transitions correlated with the temperature at which they occur? The latent heat is related to the difference in energy between the two phases, and the temperature of the phase transition occurs at the point where the difference in energy between the two phases is comparable to thermal fluctuations. What factors would lead to departures from a linear relationship between the two, e.g. in different materials or different phase transitions in the same material? Or is there an error in my assumptions? Answer: The link between temperature and latent heat associated with any phase transition can be found using classical thermodynamics. Rigorously speaking, phase changes can be described using the Gibbs free energy potential, $G=U+PV-TS$. It follows from the Second Law of Thermodynamics that any closed system (which is one that cannot transfer any mass across its boundaries, but it may involve heat and/or work exchanges with its environment) that evolves from an initial state to an equilibrium state at constant pressure and temperature, will minimize $G$, which is nothing but an extensive property. You can think of Gibbs free energy as a way to physically represent and follow an evolution for most natural systems and find equilibrium states. Equilibrium states are static: once you've reached them, the system will not abandon them unless its conditions are disrupted by an external event. The steady coexistence of material phases is just one example of an equilibrium state.
So, for example, if you have a chemically pure system consisting of two phases not in equilibrium (think about filling up a bottle of water that was originally dry, and closing it leaving a space over the water line, allowing evaporation) and realize that it can be modelled as a closed system at constant ambient temperature and pressure (the system's boundaries are given by the atmosphere) you certainly can calculate its final state by minimizing the system's free energy, which as an extensive property is equal to the sum of its subsystems' free energies: $G_{sys}=G_{w}+G_{v}\Rightarrow \displaystyle\frac{dG_{sys}}{dt}=\displaystyle\frac{dG_{w}}{dt}+\displaystyle\frac{dG_{v}}{dt}\leq0$ Without further information, there isn't too much we can say about the evolution of the system. Here we are in a non-equilibrium situation (that is, the place for transport phenomena-related stuff) and so we have a time response of the system, which is dynamically changing its properties. That means that there is a net mass flux from the liquid phase to the vapour phase as long as the evaporation continues. Nevertheless, given enough time this system will reach a steady, non-changing state, easily recognizable by the fixed liquid level. At this point the free energy of the system will be at a minimum, and become independent of time. So: $\left. \displaystyle\frac{dG_{sys}}{dt} \right|_{eq}=0$ Even though we know the system has reached equilibrium (and there won't be any more spontaneous stuff happening any more) we can study a reversible evolution, that in this case involves a macroscopically undetectable shift of the system's extensive properties while keeping at equilibrium.
Recalling the liquid and vapour phases' free energies: $\left.\displaystyle\frac{dG_{w}}{dt}\right|_{eq}+\left.\displaystyle\frac{dG_{v}}{dt}\right|_{eq}=0\Rightarrow\left.\displaystyle\frac{dG_{w}}{dt}\right|_{eq}=-\left.\displaystyle\frac{dG_{v}}{dt}\right|_{eq}$ Now, we can take a step further and express this reversible process in terms of the mass of each phase and specific properties: $\left.\displaystyle\frac{dG_{w}}{dt}\right|_{eq}=-\left.\displaystyle\frac{dG_{v}}{dt}\right|_{eq}\Rightarrow \left.\displaystyle\frac{d(M\cdot \widehat{g})_{w}}{dt}\right|_{eq}=-\left.\displaystyle\frac{d(M\cdot \widehat{g})_{v}}{dt}\right|_{eq}\\\left.\displaystyle \widehat{g}_{w} \cdot \frac{dM_{w}}{dt}\right|_{eq}+\left.\displaystyle M_w\cdot\frac{d \widehat{g}_{w}}{dt}\right|_{eq}=-\left.\displaystyle \widehat{g}_{v} \cdot \frac{dM_{v}}{dt}\right|_{eq}-\left.\displaystyle M_v\cdot\frac{d \widehat{g}_{v}}{dt}\right|_{eq}$ Just as $G$ is an extensive property, $\widehat{g}$ is an intensive one, like density, viscosity or refractive index. Now both $\widehat{g}_{w}$ and $\widehat{g}_{v}$ are fixed because of macroscopic equilibrium, but that is nothing but a fancy way of saying that the two phases are (reversibly) exchanging matter between them, as the rest of the terms drop out.
Also, the fact that the system is closed means that the increment in one phase's mass can only be the result of a decrease in the other: $\left.\displaystyle \widehat{g}_{w} \cdot \frac{dM_{w}}{dt}\right|_{eq}=-\left.\displaystyle \widehat{g}_{v} \cdot \frac{dM_{v}}{dt}\right|_{eq}\\\left.\displaystyle \frac{dM_{w}}{dt}\right|_{eq}=-\left.\displaystyle \frac{dM_{v}}{dt}\right|_{eq} \Rightarrow (\widehat{g}_{w}-\widehat{g}_{v})\cdot\left.\displaystyle \frac{dM_{w}}{dt}\right|_{eq}=0$ Finally, as the reversible exchange rate can exhibit a non-unique set of values (and in principle, setting it at zero would be a contradiction with this line of reasoning), the conclusion is that the equilibrium specific free energies are equal for both phases: $\widehat{g}_{w}=\widehat{g}_{v}$ This is the very condition that two phases at equilibrium must verify. But what about latent heat and temperature? Firstly, "latent heat" is the informal (engineering-like) name given to the specific enthalpy change in a system which undergoes a phase transformation. So what we are really looking for is a relation between enthalpy change and temperature, and this is easily deduced from the previous reasoning. 
Expressing the specific free energy in terms of enthalpy and entropy at equilibrium: $\widehat{h}_{w}-T_{eq}\cdot\widehat{s}_{w} =\widehat{h}_{v}-T_{eq}\cdot\widehat{s}_{v}\\(\widehat{h}_{v}-\widehat{h}_{w})=\lambda=T_{eq}\cdot(\widehat{s}_{v}-\widehat{s}_{w})$ Making use of an appropriate Maxwell relation, and realizing that (for pure substances) the pressure at phase equilibrium is only a function of temperature: $\displaystyle\frac{{\partial \widehat{s}}}{{\partial \widehat{v}}}=\displaystyle\frac{{\partial P}}{{\partial T}}=\left.\displaystyle\frac{{dP}}{{dT}}\right|_{eq}\Rightarrow \int\partial \widehat{s}=(\widehat{s}_{v}-\widehat{s}_{w})=\int\left.\displaystyle\frac{{dP}}{{dT}}\right|_{eq}\partial \widehat{v}=\left.\displaystyle\frac{{dP}}{{dT}}\right|_{eq}\cdot(\widehat{v}_{v}-\widehat{v}_{w})$ Then: $\left.\displaystyle\frac{{dP}}{{dT}}\right|_{eq}=\displaystyle\frac{\lambda}{T_{eq}\cdot(\widehat{v}_{v}-\widehat{v}_{w})}$ This is the Clausius–Clapeyron relation, and it expresses the slope of the single-component phase-equilibrium pressure curve. EDIT: As latent heat is actually the change in the specific enthalpy of the substance when going from one phase to another, and specific enthalpy is an intensive property (which, by the Gibbs-Duhem equation, gets uniquely determined for a pure substance by its temperature and pressure -and at equilibrium, pressure is a scalar function of saturation temperature- so it's actually a function of temperature only) then yes, latent heat is correlated with the temperature of the phase transition. But that's not the end of the story: the Clausius-Clapeyron relation tells you a way to compute that relation without making use of a calorimeter (i.e. measuring directly the latent heat of the phase transition). You just have to measure saturation pressures, temperatures and densities at equilibrium.
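As a sanity check of the final relation, plugging in textbook steam-table values for water at 100 °C (illustrative numbers, not from the post) reproduces the well-known slope of about 3.6 kPa/K for the vapor-pressure curve:

```python
# Clausius-Clapeyron: dP/dT = latent_heat / (T_eq * (v_vapor - v_liquid))
latent_heat = 2.257e6     # J/kg, enthalpy of vaporization of water at 100 C
T_eq = 373.15             # K, saturation temperature at 1 atm
v_vapor = 1.673           # m^3/kg, saturated-vapor specific volume
v_liquid = 1.043e-3       # m^3/kg, saturated-liquid specific volume

dP_dT = latent_heat / (T_eq * (v_vapor - v_liquid))   # roughly 3.6e3 Pa/K
```

This is exactly the calorimeter-free route the EDIT describes: three measured saturation properties give the slope of the coexistence curve.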
{ "domain": "physics.stackexchange", "id": 4856, "tags": "statistical-mechanics, phase-transition" }
Is there a clear analogy here between volume and entropy? Looking at: $ \frac{dQ}{T} = dS, \frac{dW}{P} = dV$
Question: The first law of thermodynamics states: $$ dE = TdS + PdV $$ And entropy is defined as: $$ \int \frac{dQ}{T} = S \qquad(1)$$ If we make an analogous relation using the second term in the first law, it looks like this: $$ \int \frac{dW}{P} = V \qquad(2) $$ And what does this exactly say? $(1)$ says that the heat added to a system at temperature $T$ results in an increase of entropy $S$. $(2)$ says (if it makes any sense) that the work on a system at certain pressure $P$ results in a change of volume $V$. Like pushing in a piston $dW$ at pressure $P$ and thus a volume change of $ W/P = - V$. Is there a clear analogy here between volume and entropy? Is there any way equation $(2)$ is useful? My goal is to get a better picture of entropy, and since volume is easily visualised it seems that this would be a nice way of visualising entropy. Also the fact that volume and entropy both have a lowerbound $0$. Answer: I’d prefer to write your equations as $$\int\frac{q_\text{rev}}{T}=\Delta S;$$$$\int\frac{w_\text{rev}}{P}=\Delta V;$$ where $q_\text{rev}$ and $w_\text{rev}$ correspond to infinitesimal reversible heat and work, respectively (the presence of “d” might mislead us into thinking that there’s a heat or work state variable that can be differentiated). Note that we’re describing a change in the extensive state variable (entropy and volume), not its absolute value. These equations hold for a closed system in which only mechanical work is considered; the existence of other types of work would violate the second equation. Under these constraints (and the constraint of reversibility), yes, entropy and volume are analogous. Entropy is the “stuff” that’s driven to shift by a temperature imbalance, just as volumes are exchanged due to pressure imbalances. (Note however that for real processes, which are all irreversible, entropy is generated. No analogous behavior exists for volume—at least in classical thermodynamics.)
{ "domain": "physics.stackexchange", "id": 87975, "tags": "thermodynamics, entropy" }
How to add column name
Question: I have a dataset in which some columns have header but the other ones don't have. Therefore, I want to add column name only to those who doesn't have column name. I know how to add column names using header but I would like to know how to do this using index so that I can add column names only to the empty ones. Any suggestions will be appreciated. Answer: Loop through df.columns checking whether there is a column name or if it is empty: import pandas as pd # create df with two empty columns df = pd.DataFrame({'a': [1,2], 'b': [1,2], 'c': [1,2], 'd':[1,2]}) df.columns = ['', 'b', '', 'd'] df with empty columns: b d 0 1 1 1 1 1 2 2 2 2 # new of list columns names must be the same length as number of empty columns empty_col_names = ['x', 'y'] new_col_names = [] count = 0 for item in df.columns: if item == '': new_col_names.append(empty_col_names[count]) count += 1 else: new_col_names.append(item) df.columns = new_col_names df with filled columns: x b y d 0 1 1 1 1 1 2 2 2 2
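The same "fill only the empty names" idea can be factored into a small positional helper (hypothetical function name; plain Python, so it works on any sequence of labels before assigning the result back with `df.columns = ...`):

```python
def fill_empty_names(columns, replacements):
    """Replace empty column labels, in order, leaving named ones alone."""
    it = iter(replacements)
    return [name if name else next(it) for name in columns]

# e.g. for the frame above: df.columns = fill_empty_names(df.columns, ['x', 'y'])
new_cols = fill_empty_names(['', 'b', '', 'd'], ['x', 'y'])
```

As in the loop version, the list of replacement names must be at least as long as the number of empty columns, or `next(it)` raises `StopIteration`.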
{ "domain": "datascience.stackexchange", "id": 7280, "tags": "python, pandas, jupyter" }
Leetcode 54: Spiral Matrix
Question: Problem statement Given a matrix of m x n elements (m rows, n columns), return all elements of the matrix in spiral order. For example, Given the following matrix: [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ] You should return [1,2,3,6,9,8,7,4,5]. My introduction of algorithm This is medium level algorithm on Leetcode.com. I have practiced over ten times from March 2017 to Jan. 2018. I wrote the algorithm in mock interview five times and also watched five peers to work on the algorithm. I experienced the pain to write a for loop inside a while loop four times, I wrote the code with a bug to print one row matrix twice, duplicate the output on last row and first column. And also I watched a few times the peer to struggle with so many bugs. Overall it is an algorithm with a lot of fun to play with. How to write bug free solution in mock interview? First time I had a mock interview on another mock interview platform on Jan. 23, 2018, and it is anonymous one. The interviewer asked me if I can write the solution only using one loop instead of four for loops inside one while loop whereas two for loops to iterate on row, either top or bottom row; two for loops to iterate on last column or first column. I had worked on the algorithm over 10 times on mock interview, but I never come out the completed idea based on limited time concern and buggy for loops. None of my peers came out the idea and wrote the similar ideas, and I only had discussion with one peer before. As an interviewer or interviewee, I did thought about four for loops are problematic as well. One time I complained to the peer in mock interview when I worked on the four for loop of this spiral matrix problem. I told the peer that I like to use extra array to mark visit, so that my four for loop can always go from 0 to rows - 1 or 0 to cols - 1. The code will take extra time to iterate on visited elements but definitely no worry to define the start and end position. 
The peer's advice is not to be a hacker in mock interview, you should always follow the interviewer's hint or advice. That is only time I made very close to this new idea. It is helpful to review all past practices through the code blogs. Here is one of blogs about the practice on spiral matrix algorithm. Analysis of the algorithm One thing I like to do is to write down some analysis of the algorithm before I write any code in mock interview. And it is also very helpful for me to go over various ideas to find the optimal one. I also like to practice this approach when I ask a question on this site. Here are some keywords for the spiral matrix algorithm. Direction - There are four directions. Change direction if need, in the order of clockwise, starting from top left corner (0,0). Range - Stay inside the array Visit - visit each element in the array only once. Do not visit more than once. Order - follow the order of clockwise, start from (0,0). Quick solution with readable code I wrote a C# solution and use extra space to declare an array to mark the element in the matrix is visited or not. To change direction, if the current row and column is out of boundary of matrix or it is visited before. I wrote the solution after the mock interview, I could not believe that I need so many hints in the mock interview after so many practice, one hint for four directions, one hint using extra array for visited array. Here is C# solution. 
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace MatrixSpiralPrint
{
    class Program
    {
        /// <summary>
        /// Leetcode Spiral Matrix
        /// https://leetcode.com/problems/spiral-matrix/description/
        /// </summary>
        /// <param name="args"></param>
        static void Main(string[] args)
        {
            var spiral = MatrixSpiralPrint(new int[,] { { 1, 2, 3 }, { 8, 9, 4 }, { 7, 6, 5 } });
            // note: join with the empty separator, otherwise commas appear in the string
            Debug.Assert(String.Join("", spiral).CompareTo("123456789") == 0);
        }

        /// <summary>
        /// Navigate the direction automatically by checking the boundary and
        /// the visited status of each element.
        /// Only one loop; a good fit for a 20-minute mock interview or
        /// interview setting.
        /// </summary>
        /// <param name="array"></param>
        /// <returns></returns>
        public static int[] MatrixSpiralPrint(int[,] array)
        {
            if (array == null || array.GetLength(0) == 0 || array.GetLength(1) == 0)
            {
                return new int[0];
            }

            int rows = array.GetLength(0);
            int columns = array.GetLength(1);
            var visited = new int[rows, columns];

            int index = 0;
            int totalNumbers = rows * columns;

            var fourDirections = new List<int[]>();
            fourDirections.Add(new int[] { 0, 1 });   // left to right - top row
            fourDirections.Add(new int[] { 1, 0 });   // top to down - last column
            fourDirections.Add(new int[] { 0, -1 });  // right to left - bottom row
            fourDirections.Add(new int[] { -1, 0 });  // bottom up - first column

            int direction = 0;
            int row = 0;
            int col = 0;
            var spiral = new int[totalNumbers];

            while (index < totalNumbers)
            {
                var current = array[row, col];
                spiral[index++] = current;
                visited[row, col] = 1; // mark as visited

                var nextRow = row + fourDirections[direction][0];
                var nextCol = col + fourDirections[direction][1];

                var isOutArrayBoundary = nextRow < 0 || nextRow >= rows || nextCol < 0 || nextCol >= columns;
                if (isOutArrayBoundary || visited[nextRow, nextCol] == 1) // change the direction
                {
                    direction = (direction + 1) % 4; // map to 0 to 3
                }

                row += fourDirections[direction][0];
                col += fourDirections[direction][1];
            }

            return spiral;
        }
    }
}

Answer:

Why use an integer for on/off status?

The values in visited are either 0 or 1. A boolean matrix would be a natural choice for the purpose.

Do you really need the extra storage?

Using the visited array makes the implementation somewhat simpler. Another way, without extra storage, would be keeping track of the number of steps to take in the current direction. Think about the correct values and observe the pattern:

Top: columns
Right: rows - 1
Bottom: columns - 1
Left: rows - 2
Top: columns - 2
Right: rows - 3
Bottom: columns - 3
Left: rows - 4
...

Basically, alternate the column and row counts, decreasing by 1 on every direction change. When the next number of steps is 0, it's time to stop. A bit trickier to write, but it uses constant extra storage.

Handling special cases

The case of 0 rows or 0 columns doesn't need special treatment. The result array will be created empty, and the main loop will not do any steps.

The case of a null value for the array is nonsense input. If that ever happens, it looks suspiciously like a malfunction in the caller. It would be better to signal that there is a problem by throwing an explicit exception.

Naming

fourDirections contains 4 directions. Don't name a collection by its size. Just directions would be better.
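The constant-storage alternative described in the answer, alternating step counts that shrink by one on each turn, can be sketched as follows (my Python sketch, not a translation of the C# above):

```python
def spiral(matrix):
    """Spiral-order traversal with O(1) extra storage.

    Instead of a visited matrix, track how many steps remain in the
    current direction; the counts alternate between column-based and
    row-based and shrink by one on every turn:
    columns, rows - 1, columns - 1, rows - 2, ...
    """
    if not matrix or not matrix[0]:
        return []
    rows, cols = len(matrix), len(matrix[0])
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
    steps = [cols, rows - 1]    # next step counts: [horizontal, vertical]
    row, col, d = 0, -1, 0      # start just left of (0, 0)
    out = []
    while steps[d % 2] > 0:
        for _ in range(steps[d % 2]):
            row += moves[d % 4][0]
            col += moves[d % 4][1]
            out.append(matrix[row][col])
        steps[d % 2] -= 1       # one fewer step next time this axis is used
        d += 1                  # turn clockwise
    return out

# Same matrix as the C# test above:
assert spiral([[1, 2, 3], [8, 9, 4], [7, 6, 5]]) == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The loop terminates when the next step count reaches zero, exactly as the answer describes.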
{ "domain": "codereview.stackexchange", "id": 29280, "tags": "c#, algorithm, programming-challenge, matrix" }
Electromagnetism Ampere's law Application to solenoid
Question: When applying Ampère's law to find the magnetic field strength $B$ inside a current-carrying solenoid, why do we multiply the current in one loop by the number of turns enclosed in the rectangular Amperian loop, even though the current (charge flowing per unit time) is the SAME through all the loops? There is a SINGLE complete circuit. If there were more than one circuit, one comprising each loop, then I think we should add up the currents in the individual loops, BUT in this situation there is a single circuit. I hope my question is clear.

Answer: The answer by @Frecher is correct! I will try to give the physical essence behind your question. Each loop of current generates its own magnetic field, regardless of whether the loops are part of the same circuit or not. Think of two loops which are in the same circuit and have the same current flowing through them. Now separate the two loops very, very far away from each other (you have a very long wire!). Now they don't interact with each other, but they still generate the magnetic field at their own locations. This means each loop must be generating a magnetic field even though the loops are part of the same circuit and the same current flows through them! All of this is actually incorporated automatically in Ampère's law.
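For completeness, here is the standard Amperian-loop bookkeeping this intuition supports (a sketch of the textbook derivation, not part of the original posts). The rectangular loop of side length $L$ is pierced once by each of the $N$ enclosed turns, and every turn carries the same current $I$:

```latex
\oint \vec{B}\cdot d\vec{l} = \mu_0 I_{\text{enc}} = \mu_0 N I
\qquad\Longrightarrow\qquad
B L = \mu_0 N I
\quad\Rightarrow\quad
B = \mu_0 \frac{N}{L} I = \mu_0 n I .
```

The single circuit contributes to $I_{\text{enc}}$ once per crossing of the Amperian surface, which is why the same current is counted $N$ times.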
{ "domain": "physics.stackexchange", "id": 51212, "tags": "electromagnetism" }
Beginner Physics - Explaining longitudinal waves
Question: I am having difficulty grasping the concept of a longitudinal wave. My textbook's definition: "In longitudinal waves, the vibration is backwards and forwards in the direction of motion of the wavefront." If it vibrates backwards and then forwards, would it not end up in the same position it originally was? Do we assume it vibrates forward at a rate faster than that at which it vibrates backwards? Further, what am I supposed to 'visualize' when I think of waves, in a physics sense? What is the purpose of a longitudinal wave? Sorry if I am asking a lot of questions.

Answer: There are two different but connected motions one can speak of when discussing pulse and wave propagation, and I suspect you may not have a clear separation of these two ideas.

One type of motion is the disturbance motion, or wave motion. When you watch a wave move, this is typically what catches your eye. If you see a wave and you watch it travel, you're watching the disturbance travel. There's not actually a physical object that travels; it's the disturbance itself that does.

The other type of motion is the particle motion. This is the motion that each individual bit of material is undergoing. Imagine you tie a small bit of string to a slinky; the particle motion is what that bit of string does.

The key idea is that for a longitudinal wave, the particle motion will be "forward and backward" while the motion of the wave will only be "forward". Take a look at this video of a longitudinal wave and try to identify the two different types of motion. You'll see the backward-and-forward motion of the "particles", as well as the forward-only motion of the disturbances.

If you're still unsure about this, that video link ends with a transverse wave. In those waves, the particle motion is actually perpendicular to the wave motion: if the wave is moving to the right, the particles can be moving up and down. This might actually be an easier example for separating the two types of motion.
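The separation between particle motion and wave motion can also be seen numerically. Below is a small Python sketch (my model, not from the posts above: a longitudinal displacement field $s(x,t) = A\sin(kx - \omega t)$) showing that each particle returns to its rest position every period, while the compression pattern drifts forward at $\omega/k$:

```python
import math

A, k, w = 0.1, 2.0, 10.0          # amplitude, wavenumber, angular frequency
period = 2 * math.pi / w          # time for one particle oscillation
wave_speed = w / k                # speed of the disturbance, not of any particle

def displacement(x, t):
    """Longitudinal displacement of the particle whose rest position is x."""
    return A * math.sin(k * x - w * t)

# Particle motion: after one full period the particle is back where it started.
x0 = 0.3
assert abs(displacement(x0, 0.0) - displacement(x0, period)) < 1e-12

# Wave motion: the crest (phase = pi/2) moves forward by wave_speed * t.
t = 0.05
crest_at_0 = (math.pi / 2) / k            # crest position at t = 0
crest_at_t = (math.pi / 2 + w * t) / k    # crest position at time t
assert abs((crest_at_t - crest_at_0) - wave_speed * t) < 1e-12
```

So the particle ends up exactly where it began, yet the disturbance has moved forward: the two motions are genuinely different things.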
{ "domain": "physics.stackexchange", "id": 11659, "tags": "waves" }
What is the meaning of the invariant $S^{ij} S_{ij}$?
Question: I came across $S^{ij}$ when studying GR and Riemann curvature. $$S^{ij} = \oint_{\text{loop on the surface}} x^i dx^j - x^j dx^i \approx \text{Area Enclosed By The Loop.}$$ What is $S^{ij} S_{ij}$? And does this mean something like "area" in any coordinates, or does this have to be done in Cartesian coordinates to give an "area"? I used this to parameterize a circle and find the area. Then I calculated it in polar coordinates by doing the integral such that $x=r$ and $y=\theta$ and by transforming the tensor, and got the same answer, $2\pi r$. And that's not an area.

Answer: I'm going to take a shot in the dark here and say that this must have come up in the discussion of parallel transport. Yes, it is a tensor. This is fairly simple to prove. Since this is a contour integral, we can perform the following parameterization: $$x^{\rho} \rightarrow x^{\rho}(\tau)$$ where $\tau$ has the usual meaning of proper time measured along the path. You can therefore do the following: $$S^{ij} = \oint x^{i} dx^{j} = \oint x^{i} \frac{dx^{j}}{d\tau} d\tau$$ $\tau$ is a scalar, and the kernel consists of products of two tensors. Therefore, $S^{ij}$ is indeed a tensor. As far as I know, $S^{ij}S_{ij}$ does not have any particular importance, in physics at least. I recommend Weinberg's Gravitation and Cosmology, Chapter: Curvature, for parallel transport.
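As a quick sanity check in Cartesian coordinates (my computation, not from the original exchange): parameterize a circle of radius $r$ by $x = r\cos\tau$, $y = r\sin\tau$, $\tau \in [0, 2\pi]$. Then

```latex
S^{xy} = \oint \left( x\,dy - y\,dx \right)
       = \int_0^{2\pi} \left( r^2\cos^2\tau + r^2\sin^2\tau \right) d\tau
       = 2\pi r^2 ,
```

which is twice the enclosed area $\pi r^2$ and carries units of area as expected. A result of $2\pi r$ (a circumference) would suggest a slip somewhere in the integration rather than a failure of the formula.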
{ "domain": "physics.stackexchange", "id": 76800, "tags": "general-relativity, differential-geometry" }
Is it possible to use a Volterra series to generate subharmonics?
Question: Taylor series can create harmonics of the frequencies in an input signal. I'm wondering if it is likewise possible to use Volterra series to create sub-harmonics of the frequencies in that signal. Some degree of finesse is required in posing the question: Taylor series don't only create harmonics; there's intermodulation distortion as well. Likewise, it's ok with me if a suitable Volterra series doesn't only create sub-harmonics, but also creates something similar to intermodulation distortion, or even some degree of harmonic distortion. However, I would like to exclude trivial cases where the input signal gets so wrecked that it ends up looking like white noise, which would obviously create sub-harmonics (along with every other frequency you can imagine!). I'm not sure how to formalize this requirement precisely, other than to say that I hope it's clear what I'm driving at.

Here's one way to formalize the behaviour I want:

Given an input $\cos(\omega t)$, the output contains $\cos(\frac{\omega}{n} t)$ for some fixed $n$, ideally a natural number.
Given a general input, a sub-harmonic is generated corresponding to every frequency in the input.
Other "intermodulation distortion"-type artifacts are allowed, "within reason".

The answer can be expressed for either discrete or continuous signals.

Answer: Check out this paper. I would have made this a comment, but my rep isn't high enough. Link

It looks like you need multiple inputs to get subharmonics from a Volterra series. The abstract states "Subharmonic generation is a complex nonlinear phenomenon which can arise from nonlinear oscillations, bifurcation and chaos. It is well known that single-input-single-output Volterra series cannot currently be applied to model systems which exhibit subharmonics. A new modeling alternative is introduced in this paper which overcomes these restrictions by using local multiple input single output Volterra models.
The generalized frequency-response functions can then be applied to interpret systems with subharmonics in the frequency domain."
{ "domain": "dsp.stackexchange", "id": 10265, "tags": "discrete-signals, signal-analysis, audio, non-linear, taylor-expansion" }
Time complexity of Hash table lookup
Question: Suppose I have a hash table which stores some strings. Let the index/key of this hash table be the length of the string. What is the time complexity of checking whether a string of length K exists in the hash table? Is it O(1) or O(k)?

Answer: Here's how a hash table usually works: Given an input $x$, you compute its hash $h$. You look at cell $h$ and compare your input to the key $k$ stored there. If the keys match, great. Otherwise, what you do depends on the exact implementation. As you can see, you have to compute the hash of your input as well as compare it to a key stored in the table. Typically we want the hash to depend on the entire input, so computing the hash takes at least linear time in the size of $x$. Similarly, comparing $x$ and $k$ takes non-constant time.
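To make the O(k) cost concrete, here is a toy Python sketch (the hash function and table layout are illustrative, not from the answer): both the hash computation and the final key comparison walk the whole string.

```python
def toy_hash(s, table_size=1024):
    """Polynomial string hash: touches every character, so O(k) for length k."""
    h = 0
    for ch in s:                       # k iterations
        h = (h * 31 + ord(ch)) % table_size
    return h

def lookup(table, key):
    """Hash-table lookup: an O(k) hash plus an O(k) comparison per probed key."""
    bucket = table[toy_hash(key)]      # compute the hash: O(k)
    for stored_key, value in bucket:   # chaining resolves collisions
        if stored_key == key:          # string comparison: up to O(k)
            return value
    return None

# Build a tiny chained table:
table = [[] for _ in range(1024)]
for key, value in [("alpha", 1), ("beta", 2), ("gamma", 3)]:
    table[toy_hash(key)].append((key, value))

assert lookup(table, "beta") == 2
assert lookup(table, "delta") is None
```

Even on a successful probe, the work done is proportional to the key length, which is the answer's point.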
{ "domain": "cs.stackexchange", "id": 15943, "tags": "algorithms" }
How to arrange a sub-array for Quick sorting algorithm?
Question: Algorithm: Quicksort. Idea: divide and conquer. Steps:

1. Find the pivot point in the array, e.g. the first element.
2. Partition the array so that elements smaller than the pivot are on the left side and bigger ones are on the right side.
3. Sort both sub-arrays (recursion).

Question: I know all of the steps, but I don't know how to sort the sub-arrays in step 3. I can't find any example that explains this final part.

Example: if you look at this, you see that both sub-arrays are unsorted, and we should complete the steps with the recursive method to solve this. That is my question; can you explain this for me?

Answer: The conceptually simplest way is to simply use the same algorithm recursively to sort the sub-arrays; hence why it's labeled as recursion. As long as the base cases for very small arrays are correctly defined, this will yield a correct result. As an example, let's Quicksort some alphabet, using the first symbol as pivot:

sort(FHDEBACG) -> sort(DEBAC) F sort(HG)

Then, we recursively sort DEBAC and HG. I'll demonstrate with the former:

sort(DEBAC) -> sort(BAC) D sort(E) -> sort(BAC) D E
sort(BAC) -> sort(A) B sort(C) -> A B C

thereby getting the desired result:

sort(DEBAC) -> A B C D E

In practice, Quicksort implementations are often modified to sort small enough (sub-)arrays with Insertion Sort, which usually exhibits superior performance for sufficiently small input arrays. Arrays and sub-arrays larger than this are sorted, as above, by recursively applying the same algorithm.
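The recursion in step 3 is literally a call back into the same routine. A minimal Python sketch of the trace above (my code; it builds new lists rather than partitioning in place, to keep the recursive structure visible):

```python
def quicksort(items):
    """Quicksort with the first element as pivot, as in the trace above."""
    if len(items) <= 1:          # base case: 0 or 1 elements are already sorted
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]   # step 2: partition
    larger = [x for x in rest if x >= pivot]
    # step 3: the same algorithm sorts both sub-arrays recursively
    return quicksort(smaller) + [pivot] + quicksort(larger)

# The alphabet example from the answer:
assert quicksort(list("FHDEBACG")) == list("ABCDEFGH")
```

The first call partitions FHDEBACG into DEBAC, F, HG, and each recursive call repeats the same three steps until the base case stops the recursion.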
{ "domain": "cs.stackexchange", "id": 18558, "tags": "quicksort" }
ATP cost for gene expression
Question: How would you estimate the number of ATPs required to transcribe, export and translate a single eukaryotic protein?

Answer: The cost of transcribing and translating a hypothetical average gene in yeast has been calculated as 551 activated phosphate bonds ∼P per second (Wagner, 2005).

The median length of a yeast RNA molecule is 1,474 nucleotides, and the median cost of precursor synthesis per nucleotide (derived from the base composition of yeast coding regions) is 49.3 ∼P. With a median mRNA abundance of R = 1.2 mRNA molecules per cell and a median mRNA decay constant of dR = $5.6 \times 10^{-4}\,s^{-1}$, the mRNA synthesis cost works out to $49.3 \times 1{,}474 \times 1.2 \times (5.6 \times 10^{-4}) = 48.8$ ∼P per second and cell. This is a fraction $48.8 / (1.34 \times 10^{7}) = 3.6 \times 10^{-6}$ of the total RNA synthesis cost per second.

The median length of a yeast protein is 385 amino acids, with a combined biosynthesis and polymerization cost of 30.3 ∼P per amino acid. The median abundance is 2,460 protein molecules per cell. No currently available data allow a meaningful estimate of the median protein half-life, but a protein of an intermediate half-life of 10 h (decay constant dP = $1.92 \times 10^{-5}\,s^{-1}$) yields an overall synthesis cost of $30.3 \times 385 \times 2{,}460 \times (1.92 \times 10^{-5}) = 551$ ∼P $s^{-1}$.

For your question about a single gene, the cost would be $49.3 \times 1{,}474$ ∼P for the mRNA and $30.3 \times 385$ ∼P for the translation, which comes to around 84 thousand ∼P. This is probably a misleading statistic, though, since a single mRNA can be translated many times. How the costs of mRNA synthesis and translation are calculated is described in detail in the paper. A large part of the cost comes from the synthesis of the basic building blocks, the nucleotides and the amino acids.

Wagner, A. Energy Constraints on the Evolution of Gene Expression. Mol Biol Evol 22, 1365-1374 (2005).
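The arithmetic quoted from Wagner (2005) can be checked directly; this short Python sketch just reproduces the numbers above:

```python
# Constants quoted from Wagner (2005) in the answer above.
nt_cost, mrna_len = 49.3, 1474          # ~P per nucleotide, median mRNA length
mrna_abundance, mrna_decay = 1.2, 5.6e-4    # molecules per cell, 1/s
aa_cost, protein_len = 30.3, 385        # ~P per amino acid, median protein length
protein_abundance, protein_decay = 2460, 1.92e-5  # molecules per cell, 1/s

# Ongoing synthesis cost rates (per second and cell):
mrna_rate = nt_cost * mrna_len * mrna_abundance * mrna_decay
protein_rate = aa_cost * protein_len * protein_abundance * protein_decay

# One-off cost of one mRNA plus one protein copy:
single_gene = nt_cost * mrna_len + aa_cost * protein_len

assert round(mrna_rate, 1) == 48.8      # ~P per second and cell, as quoted
assert round(protein_rate) == 551       # ~P per second and cell, as quoted
assert round(single_gene) == 84334      # about 84 thousand ~P, as quoted
```

The variable names are mine; the numbers are exactly those cited in the answer.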
{ "domain": "biology.stackexchange", "id": 190, "tags": "biochemistry, molecular-biology, metabolism" }
Why is light speed the only constant speed?
Question: I've been thinking about special relativity and I did the following math (I'm a beginner in relativity, so sorry if the problem has mistakes).

Let's suppose a spaceship travelling at half the speed of light (150,000 km/s) in a vacuum measures a photon that travels 300,000 km in one second. If we are at rest and we apply special relativity to what the spaceship has measured, in comparison with us we find the following:

The 300,000 km for the spaceship is 346,410.16 km for us ($300{,}000 \times \gamma$, with $\gamma \approx 1.155$ at 150,000 km/s), and the 1 second for the spaceship is 1.155 s for us ($1 \times \gamma$). If we divide the results to get the speed ($346{,}410.16 / 1.155$), we get 300,000 km/s, the speed of light, so we see that it's constant and doesn't vary no matter at what speed we measure it.

The problem I found is that if I change those numbers for others, for example if the speed we're measuring (as measured by the spaceship) is 250,000 km/s, that speed stays the same too:

250,000 km for the spaceship is 288,675.13 km for us.
1 s for the spaceship is 1.155 s for us.
$288{,}675.13 / 1.155 = 250{,}000$ km/s.

So 250,000 km/s is behaving the same as the speed of light, being constant. Can someone explain why this is happening?
First we need to clarify: your calculations seem to be correct. If you set the speed of light to 250,000 km/s, then your calculations still seem to be correct, and you are wondering why the calculation works with a different speed of light.

The Maxwell equations set the speed of light to approximately 300,000 km/s in our universe. It is because the vacuum is set up that way: the permittivity and permeability of the vacuum dictate that EM waves propagate through it with exactly this speed.

You may wonder why this is the case, but you just have to accept that EM waves, photons, and every particle without rest mass travel at this speed, because that is how the vacuum is set up in our universe. The vacuum has EM fields everywhere, and photons are just excitations of the field; it is this excitation that propagates at this speed. We do not know why, but perhaps because EM fields need time to become excited, so the propagation of the excitation takes time.

What you are doing is trying to set this characteristic of the universe to a different value, in your case 250,000 km/s. That would require a different vacuum permittivity and permeability. In that case, EM waves would need a different time to propagate through the vacuum. This would still not change special relativity, and any observer would then see light propagate at this 250,000 km/s speed.
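A complementary numeric check (my sketch; it uses the relativistic velocity-addition formula, which the answer does not spell out) shows why only $c$ is frame-independent: scaling distance and time by the same $\gamma$ leaves every ratio unchanged, but an actual change of frame does not treat all speeds alike.

```python
c = 300_000.0  # speed of light in km/s

def add_velocities(u, v):
    """Speed of an object moving at u in frame S', as seen from frame S,
    where S' moves at v relative to S (Lorentz velocity addition)."""
    return (u + v) / (1 + u * v / c**2)

v_ship = 150_000.0  # spaceship speed, km/s

# Light measured at c on the ship is still exactly c for us:
assert abs(add_velocities(c, v_ship) - c) < 1e-9

# A signal at 250,000 km/s on the ship is NOT 250,000 km/s for us:
u = 250_000.0
assert abs(add_velocities(u, v_ship) - u) > 1_000.0  # about 282,353 km/s
```

So the $\gamma$-scaling in the question cancels for any speed by construction; the invariance of $c$ only shows up once the full Lorentz transformation of velocities is used.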
{ "domain": "physics.stackexchange", "id": 48484, "tags": "special-relativity, speed-of-light, speed" }
Meaning and conservation of integrals of momentum with respect to velocity
Question: Mass is usually conserved, momentum is usually conserved, and energy is usually conserved (assuming there is no net force on the system). This gives rise to the natural question: is $\frac{1}{6} m v^3$ conserved? And more generally: is $\frac{1}{n!} m v^n$ conserved? Furthermore, what is the meaning behind these higher-order integrals with respect to velocity? The units of $\frac{1}{6} m v^3$ are $kg\,m^3 s^{-3}$, so it seems as though it could be related to power times distance. Is there a name for this quantity and a good way to think about it?

Answer: The pattern doesn't continue for higher powers of $v$, and we shouldn't expect it to continue, because the individual conservation laws hold for completely different reasons. The total mass (which here I'm considering to be the sum of the rest masses of the particles) is conserved only in the nonrelativistic limit, where energies are small compared to $mc^2$. The total momentum is conserved by spatial translational symmetry. The total energy is conserved by time translational symmetry. There are no more symmetries in a generic system (besides rotations and (Galilean) boosts, which give something different), so we don't expect $\sum mv^3$ to be conserved.

In the relativistic case, things break down even further. Neither $\sum m$ nor $\sum mv^2$ is conserved; instead, they combine into $\sum \gamma m$, which is the total relativistic energy. The quantity $\sum mv$ isn't conserved either and has to be replaced with $\sum \gamma m v$. So the pattern is a lot weaker than it looks at first glance.
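A one-dimensional elastic collision makes the answer's point concrete: $\sum mv$ and $\sum \frac{1}{2}mv^2$ survive the collision, but $\sum mv^3$ does not. A small Python check (my numbers, using the standard elastic-collision formulas):

```python
# 1D elastic collision: a mass m1 = 2 hits a stationary mass m2 = 1.
m1, v1, m2, v2 = 2.0, 1.0, 1.0, 0.0

# Standard elastic-collision outcome for a stationary target:
v1f = (m1 - m2) / (m1 + m2) * v1        # = 1/3
v2f = 2 * m1 / (m1 + m2) * v1           # = 4/3

def total(n, pairs):
    """Sum of m * v**n over the system."""
    return sum(m * v**n for m, v in pairs)

before = [(m1, v1), (m2, v2)]
after = [(m1, v1f), (m2, v2f)]

assert abs(total(1, before) - total(1, after)) < 1e-12   # momentum conserved
assert abs(total(2, before) - total(2, after)) < 1e-12   # kinetic energy conserved
assert abs(total(3, before) - total(3, after)) > 0.01    # sum(m v^3) is NOT conserved
```

With no additional symmetry to protect it, the cubic quantity changes in an ordinary collision, exactly as the Noether-based argument predicts.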
{ "domain": "physics.stackexchange", "id": 40484, "tags": "momentum, energy-conservation, conservation-laws, integration" }
What is the link between equation of a continuous signal versus equation of its sampled form?
Question: I have just started DSP self-learning, and I am a little confused by this end-of-chapter exercise from Chapter 2 of "Understanding Digital Signal Processing", 3rd edition, by Richard G. Lyons.

Consider a continuous time-domain sine wave defined by $$x(t)=\cos(4000\pi t)$$ that was sampled to produce the discrete sine wave sequence defined by $$x(n)=\cos(n \pi/2)$$ What is the sample rate ($f_s$, measured in $\textrm{Hz}$) that would result in the sequence $x(n)$?

There are a few things that I have not understood about this question: Why is the argument of the cosine function so different after sampling? I am not sure why the argument of the cosine after sampling does not include $t$, so that we could get $nT$ where $T$ is the sample period. What determines the argument of the cosine after sampling? Can I determine the frequency of the sampled signal from the argument of the cosine in the $x(n)$ equation? And of course a cheeky one: the answer to the book question, please :)

Answer: The sequence $x[n]$ equals the continuous-time signal $x(t)$ sampled at $t=nT$, where $T=1/f_s$ is the sampling period: $$x[n]=x(nT)=\cos(4000\pi nT)=\cos(4000\pi n/f_s)\tag{1}$$ So if $x[n]=\cos(n\pi/2)$ you just have to compare this argument to the argument in $(1)$ to figure out what $f_s$ is. The normalized frequency of the discrete-time signal $x[n]$ is $\omega_0=\pi/2$ (in radians). It is related to the frequency $f$ in Hertz by $$\omega_0=\frac{2\pi f}{f_s}$$
{ "domain": "dsp.stackexchange", "id": 3989, "tags": "discrete-signals, sampling, continuous-signals" }