How to normalize noisy data
Question: Suppose I have 1-D data with some outliers, and I want to normalize the data to the range [0,1]. I tried calculating the maximum value and the minimum value as follows: with q1, q2, q3 the quartiles of the data, max = q3 + (q3-q1)*1.5 and min = q1 - (q3-q1)*1.5. I used this approach because I have read that data above this maximum or below this minimum is noise. My question is: is what I am doing correct, or is there another way to achieve good results? Thank you for helping. Answer: The question "is what I'm doing correct?" will occur over and over again until you have a firm understanding of statistics. If you are looking for short-term learning, you might pick up some fragments of information here and there, but for long-term learning I'll suggest that you read a good text on introductory statistics. Try this book, or this one on Applied Predictive Modeling. Now to answer the question: yes, what you are doing is correct. I've explained it below. Outliers are observations that fall below the lower limit or above the upper limit. The five-number summary of a continuous (numerical) variable is Min, Q1, Q2, Q3, Max, where the lower and upper limits are defined as Lower Limit = Q1 - 1.5 x IQR and Upper Limit = Q3 + 1.5 x IQR, and IQR is the interquartile range, the difference between the first and third quartiles. Q1 is the first quartile, Q2 is the second quartile (the median), and Q3 is the third quartile. Finally, please always plot your data.
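The quartile-fence recipe from the answer can be sketched in a few lines (a minimal illustration using NumPy; `iqr_normalize` is an illustrative name, and clipping the outliers to the fences, rather than dropping them, is one of several reasonable choices):

```python
import numpy as np

def iqr_normalize(data):
    """Clip values outside the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR),
    then rescale the result to [0, 1]."""
    data = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    clipped = np.clip(data, q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    # min-max scaling on the clipped values (assumes non-constant data)
    return (clipped - clipped.min()) / (clipped.max() - clipped.min())
```

With this choice an extreme outlier lands exactly at 1.0 instead of compressing all the regular points into a tiny interval near 0.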
{ "domain": "datascience.stackexchange", "id": 8254, "tags": "normalization" }
Understanding wave functions
Question: I'm currently writing an essay on the measurement problem, and I'm not quite certain that I've fully understood the purpose of the wave function. Does the following sentence make sense with respect to the wave function: Although the wave function itself is never physically observed, we are given resultant macroscopic events which we can interpret, allowing us to formulate a conclusion. What I mean to say is that although we don't actually see the wave function, we can see the aftermath of what happens in the form of a physical macroscopic event. Is that right? Answer: In short, yes, you can see the "effects" of a wavefunction after some math, but never "see" the "actual wavefunction." Wavefunctions are not physical things. Let me explain: If you multiply a wavefunction by its complex conjugate, you get a function which can tell you the odds of finding a particular particle at a place at a certain time. If you do other math to it, you can get the energies of the particle and other properties (like spin). If you take that wavefunction into a lab, you can make predictions about what your experiment will do, and smile at an unlikely but still predicted result. You should note, however, that a wavefunction alone does not mean probability, it does not mean energy, nor does it mean position. It's a mathematical concept, a collection of marks on a physicist's notebook, as real as a country's Gross Domestic Product or the number of cat GIFs downloaded per click from a particular computer. Like all those things, it is not a physical thing. Anyway, the issue here is that people often feel that these physics concepts must be tangible "things." Wavefunctions, energy, and entropy are not things you directly experience. They are concepts; mathematical constructions we use to account for what can happen in the universe. You can see the effects of entropy, you can see the effects of high or low energies, and you can see the effects of wavefunctions. 
You cannot, however, "see" these concepts because they don't exist like a force or property of matter. These "higher" concepts are just how we keep track of things going on in the universe, and wavefunctions are very complicated accountant's notes. You can try to think of wavefunctions as actual things, bouncing around the universe, but then you'll be flummoxed once and forever, because they are not physical things. If we come up with a better way of tracking the universe, we may one day discard wavefunctions. It could be totally abandoned, becoming entombed in an infrequently visited wikipedia page, a stepping stone to some greater understanding. If that happens, the actual, physical universe would not change at all. We would just understand it better under some other accounting system.
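The "other math" mentioned above is, for position measurements, the Born rule of standard quantum mechanics (stated here for completeness; it is not spelled out in the original answer): the squared modulus of the wavefunction is a probability density,

```latex
P(x,t) \;=\; \Psi^*(x,t)\,\Psi(x,t) \;=\; |\Psi(x,t)|^{2},
\qquad
\int_{-\infty}^{\infty} |\Psi(x,t)|^{2}\,\mathrm{d}x \;=\; 1 .
```

This is the step that turns "a collection of marks on a physicist's notebook" into checkable experimental predictions.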
{ "domain": "physics.stackexchange", "id": 13527, "tags": "quantum-mechanics, wavefunction, measurement-problem, wave-particle-duality, wavefunction-collapse" }
Error while linking OpenCV
Question: Hello everyone, I am using ROS Fuerte and in the project that I use OpenCV with ROS I can compile perfectly. The problem is, if I try to compile a project normal project that uses OpenCV (out of ROS), I get errors while linking: $ make g++ -Wall -DASSERT -O3 -g pkg-config opencv --cflags -o main.o -c main.cpp main.cpp: In function ‘int main(int, char**)’: main.cpp:44:5: warning: ‘key’ may be used uninitialized in this function [-Wuninitialized] g++ pkg-config opencv --libs -o main main.o main.o: In function main': /home/roleiland/Desktop/video_test/main.cpp:29: undefined reference to cvCreateFileCapture' /home/roleiland/Desktop/video_test/main.cpp:39: undefined reference to cvGetCaptureProperty' /home/roleiland/Desktop/video_test/main.cpp:42: undefined reference to cvNamedWindow' /home/roleiland/Desktop/video_test/main.cpp:52: undefined reference to cvShowImage' /home/roleiland/Desktop/video_test/main.cpp:55: undefined reference to cvWaitKey' /home/roleiland/Desktop/video_test/main.cpp:46: undefined reference to cvQueryFrame' /home/roleiland/Desktop/video_test/main.cpp:59: undefined reference to cvReleaseCapture' /home/roleiland/Desktop/video_test/main.cpp:60: undefined reference to `cvDestroyWindow' collect2: ld returned 1 exit status make: *** [main] Error 1 If I execute: $ pkg-config opencv --libs -L/opt/ros/fuerte/lib -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_ts -lopencv_video -lopencv_videostab $ ls /opt/ros/fuerte/lib/ libactionlib.so libopencv_flann.so libopencv_photo.so.2.4 libpcl_io_ply.so.1.5.1 libpcl_segmentation.so libcpp_common.so libopencv_flann.so.2.4 libopencv_photo.so.2.4.2 libpcl_io.so libpcl_segmentation.so.1.5 libflann_cpp-gd.so libopencv_flann.so.2.4.2 libopencv_stitching.so libpcl_io.so.1.5 libpcl_segmentation.so.1.5.1 libflann_cpp_s.a 
libopencv_gpu.so libopencv_stitching.so.2.4 libpcl_io.so.1.5.1 libpcl_surface.so libflann_cpp_s-gd.a libopencv_gpu.so.2.4 libopencv_stitching.so.2.4.2 libpcl_kdtree.so libpcl_surface.so.1.5 libflann_cpp.so libopencv_gpu.so.2.4.2 libopencv_ts.so libpcl_kdtree.so.1.5 libpcl_surface.so.1.5.1 libflann_cpp.so.1.7 libopencv_highgui.so libopencv_ts.so.2.4 libpcl_kdtree.so.1.5.1 libpcl_tracking.so libflann_cpp.so.1.7.1 libopencv_highgui.so.2.4 libopencv_ts.so.2.4.2 libpcl_keypoints.so libpcl_tracking.so.1.5 libflann_s.a libopencv_highgui.so.2.4.2 libopencv_video.so libpcl_keypoints.so.1.5 libpcl_tracking.so.1.5.1 libflann.so libopencv_imgproc.so libopencv_video.so.2.4 libpcl_keypoints.so.1.5.1 libpcl_visualization.so libflann.so.1.7 libopencv_imgproc.so.2.4 libopencv_video.so.2.4.2 libpcl_octree.so libpcl_visualization.so.1.5 libflann.so.1.7.1 libopencv_imgproc.so.2.4.2 libopencv_videostab.so libpcl_octree.so.1.5 libpcl_visualization.so.1.5.1 libmessage_filters.so libopencv_legacy.so libopencv_videostab.so.2.4 libpcl_octree.so.1.5.1 librosbag.so libopencv_calib3d.so libopencv_legacy.so.2.4 libopencv_videostab.so.2.4.2 libpcl_range_image_border_extractor.so librosconsole.so libopencv_calib3d.so.2.4 libopencv_legacy.so.2.4.2 libpcl_common.so libpcl_range_image_border_extractor.so.1.5 libroscpp_serialization.so libopencv_calib3d.so.2.4.2 libopencv_ml.so libpcl_common.so.1.5 libpcl_range_image_border_extractor.so.1.5.1 libroscpp.so libopencv_contrib.so libopencv_ml.so.2.4 libpcl_common.so.1.5.1 libpcl_registration.so libroslib.so libopencv_contrib.so.2.4 libopencv_ml.so.2.4.2 libpcl_features.so libpcl_registration.so.1.5 librospack.so libopencv_contrib.so.2.4.2 libopencv_nonfree.so libpcl_features.so.1.5 libpcl_registration.so.1.5.1 librostime.so libopencv_core.so libopencv_nonfree.so.2.4 libpcl_features.so.1.5.1 libpcl_sample_consensus.so librxtools.so libopencv_core.so.2.4 libopencv_nonfree.so.2.4.2 libpcl_filters.so libpcl_sample_consensus.so.1.5 libtopic_tools.so 
libopencv_core.so.2.4.2 libopencv_objdetect.so libpcl_filters.so.1.5 libpcl_sample_consensus.so.1.5.1 libxmlrpcpp.so libopencv_features2d.so libopencv_objdetect.so.2.4 libpcl_filters.so.1.5.1 libpcl_search.so pkgconfig libopencv_features2d.so.2.4 libopencv_objdetect.so.2.4.2 libpcl_io_ply.so libpcl_search.so.1.5 python2.7 libopencv_features2d.so.2.4.2 libopencv_photo.so libpcl_io_ply.so.1.5 libpcl_search.so.1.5.1 Any idea about what could be wrong? Originally posted by roleiland on ROS Answers with karma: 21 on 2013-05-05 Post score: 1 Answer: I think this is linking order; the library flags for opencv should be specified after the .o files in the linking step: g++ -o main main.o `pkg-config opencv --libs` Originally posted by ahendrix with karma: 47576 on 2016-03-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14071, "tags": "opencv" }
PHP secure login script
Question: I was just wondering how secure my code looked and if I'm overlooking any serious mistakes. Any suggestions/critiques are welcome. This is my relevant login script. login.php <?php if (!isset($_POST['remember']) && isset($_POST['user'])) { if (isset($_COOKIE['remember_me'])) { $past = time() - 100; setcookie('remember_me', gone, $past); } } require_once 'header.php'; error_reporting(0); if (isset($_POST['user'])) { $user = $_POST['user']; $pass = $_POST['pass']; $hide = sanitizeString($_POST['hide']); $userIp = $_SERVER['REMOTE_ADDR']; $now = date('r'); if ($user === "" || $pass === "" || $hide != "") { $error = "<div class='alert alert-danger text-center animated shake'>Not all fields were entered</div>"; } elseif (!preg_match('~^[a-z0-9_.-]+$~i', $user)) { $error = "<div class='alert alert-danger text-center animated shake'>Usernames must be all lowercase. They can only contain letters, numbers, periods, hyphens, and underscores</div>"; } else { $stmt = $con->prepare("SELECT pass, user FROM members WHERE user= (?) 
LIMIT 1"); $stmt->bind_param('s', $user); $stmt->execute(); $result = $stmt->get_result(); $num = $result->num_rows; $row = $result->fetch_array(MYSQLI_ASSOC); if (password_verify($pass, $row['pass']) && $row['user'] == $user) { $stmt = $con->prepare("UPDATE `members` SET `logins`=logins + 1 WHERE user= (?)"); $stmt->bind_param('s', $user); $stmt->execute(); $result = $stmt->get_result(); $_SESSION['user'] = $user; $_SESSION['canary'] = time(); $_SESSION['HTTP_USER_AGENT'] != md5($_SERVER['HTTP_USER_AGENT']); $stmt = $con->prepare("SELECT logins FROM members WHERE user=(?)"); $stmt->bind_param('s', $user); $stmt->execute(); $result = $stmt->get_result(); $num = $result->num_rows; $row = $result->fetch_array(MYSQLI_ASSOC); if ($row['logins'] == 1) { queryMysql("INSERT INTO leave_type (user, hours, leave_start, leave_end, leave_reason, leave_type) VALUES ('$user', '24.00', '2016-01-08T06:30:00', '2016-01-09T06:30:00', 'Example Leave', 'Annual') "); die("<br><br><script>setTimeout(function () { window.location.href='FullscreenForm.php'; // the redirect goes here }, 100); // 5 seconds</script> <div class='alert alert-info text-center animated bounceInDown'>Hi. Since this is your first time logging in Please go to the <a href='FullscreenForm.php'>" . "Edit Profile</a> page and input your information so the app will work correctly. Or just wait and you'll be redirected</div>"); } else { die("<br><br><script>setTimeout(function () { window.location.href='members.php?view=$user'; // the redirect goes here }, 0); </script><div class='alert alert-success text-center animated bounceInDown'>You are now logged in. Please <a href='members.php?view=$user'>" . 
"click here</a> to continue.</div>"); } } else { //queryMysql("INSERT INTO failedlogins (user,pass,hide,ip,time)VALUES('$user', '$pass', '$hide', '$userIp','$now')"); $stmt=$con->prepare("INSERT INTO failedlogins (user,pass,hide,ip,time)VALUES((?), (?), (?), (?), (?))"); $stmt->bind_param('sssss', $user, $pass, $hide, $userIp, $now); $stmt->execute(); $error = "<div class='alert alert-danger text-center animated shake'><span class='error'>Username/Password invalid</span></div>"; } } } header.php (on all pages on site) require_once 'functions.php'; error_reporting(0); $year = time() + 31536000; if ($_POST['remember']) { setcookie('remember_me', $_POST['user'], $year); } elseif (isset($_GET['username'])) { setcookie('remember_me', $_GET['username'], $year); echo " <script type='text/javascript'> $(document).ready(function(){ //Check if the current URL contains '#' if(document.URL.indexOf('#')==-1){ // Set the URL to whatever it was plus '#'. url = document.URL+'#'; location = '#'; //Reload the page location.reload(true); } }); </script>"; } $params = session_get_cookie_params(); setcookie(session_name(), $_COOKIE[session_name()], time() + 60*60*24*3, $params["path"], $params["domain"], $params["secure"], $params["httponly"]); session_start(); if (!isset($_SESSION['canary'])) { session_regenerate_id(true); $_SESSION['canary'] = time(); } // Regenerate session ID every five minutes: if ($_SESSION['canary'] < time() - 300) { session_regenerate_id(true); $_SESSION['canary'] = time(); } //checks if user agent has changed on all page requests and logs out if changed if (isset($_SESSION['HTTP_USER_AGENT'])) { if ($_SESSION['HTTP_USER_AGENT'] != md5($_SERVER['HTTP_USER_AGENT'])) { destroySession(); } } else { $_SESSION['HTTP_USER_AGENT'] = md5($_SERVER['HTTP_USER_AGENT']); } $userstr = ' (Guest)'; $phpself = htmlspecialchars($_SERVER["PHP_SELF"]); if (isset($_SESSION['user'])) { $user = $_SESSION['user']; $loggedin = true; $userstr = " $user"; } else { $loggedin = false; } 
Answer: Security Generally, your code looks secure. Your authentication doesn't seem to contain any giant logic flaws, you mostly use prepared statements, you use bcrypt, etc. Unused security mechanisms and pieces of code Your login does contain quite a bit of code that doesn't seem to be used, which makes it more complex than necessary, which makes it harder to maintain, and thus harder to verify its security. For example, you have a remember_me cookie, which indicates that you have a remember_me functionality, which would need to be checked for security. But you don't actually have that functionality, just the cookie. Another example is the failedLogins table. It suggests that you have some form of brute-force detection / account blocking code. But again, it doesn't seem to be used and thus adds complexity and further attack surface. Yet another example is the odd setcookie that sets $_COOKIE[session_name()] as session_name(). It doesn't seem all that harmful, but also doesn't seem to serve any purpose. Because of all these additional pieces of code that don't do anything, it's hard to see at first glance what pieces of code actually do do something. For example, is the user agent check real, or just partly implemented like some of the functionality above? [It's real - at least from the request after the initial login - but it takes extra effort to see that] Possible future SQL injection For the most part, you use prepared statements, which is great. But then all of a sudden, you have this: queryMysql("INSERT INTO leave_type (user, hours, leave_start, leave_end, leave_reason, leave_type) VALUES ('$user', '24.00', '2016-01-08T06:30:00', '2016-01-09T06:30:00', 'Example Leave', 'Annual') "); Now, you currently do have a regex on the username which would make it impossible to exploit this. But what if your requirements change? What if you want to allow Bill O'Harris to sign up as well? You might change that regex, and then be vulnerable. 
You should really always use prepared statements, not only most of the time. Possible future XSS Same problem as above: if the regex changes and ' is allowed, this is vulnerable to XSS: window.location.href='members.php?view=$user'; // the redirect goes here }, 0); </script><div class='[...] <a href='members.php?view=$user'>" . Misc Even though remember_me is currently unused, you should set it as httpOnly in case it is used in the future. You regenerate the session every X minutes, which is good, but it is more important to regenerate it when the session state changes, especially on login (to avoid session fixation with some less common php.ini settings). It's good to use htmlspecialchars on PHP_SELF, but you should ideally use it when actually echoing PHP_SELF. If you handle other values like this as well, it will be very difficult to determine which values are safe and which are not (eg is $user safe?). It doesn't seem that you have any CSRF protection. Even login pages do need it (to easily avoid exploitation of XSS in the user area, and to avoid information leakage by force-logging a user into an attacker's account). Other Structure You should introduce functions to make your code easier to read, test and reuse. You might also want to add classes, such as a User class. Functions might be User::getByName(), User::increaseLoginCount(), User::firstLogin(), redirectFirstLogin(), redirectLogin(), failedLogin(), checkSessionUserAgent(), etc. You should also return early / add guard clauses to reduce nesting (eg if (!isset($_POST['user'])) { return; }). Naming Try to be consistent with your naming. For example, why is the same thing called user when using POST, but username when using GET? Formatting Your indentation is inconsistent (2 vs 3 vs 4 spaces), as is your use of spaces (sometimes assignments are aligned, sometimes they are not). You don't need () around the ? placeholders; it just makes the queries more difficult to read. Misc Why is the cookie delete check outside of the big if? 
It's confusing, and you need to check isset($_POST['user']) twice because of it. The $row['user'] == $user check is confusing and not necessary. You just selected $row['user'] based on $user, so it will always be equal. "clever" code is never good. I would need to look up / test what $x != md5($x) does to understand it. It looks like a bug to me. remember_me doesn't seem to do anything. What's $hide? Do you really need to count the number of logins? It seems to add two unnecessary queries and quite a bit of unnecessary complexity. You could just add an isFirstLogin field to the members table and select that when selecting pass and user. You never do anything with failedlogins.
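The early-return / guard-clause advice above is language-agnostic; here is a minimal sketch of the pattern (shown in Python rather than PHP for brevity; the form-field names mirror the login script, and reading `hide` as an anti-bot honeypot field is my assumption, since the original code never explains it):

```python
def handle_login(form):
    """Validate a login form with guard clauses instead of nested ifs."""
    if "user" not in form:
        return None  # no submission: nothing to do
    if not form.get("user") or not form.get("pass"):
        return "Not all fields were entered"
    if form.get("hide"):
        # hidden field was filled in: assumed honeypot, treat as a bot
        return "Username/Password invalid"
    # ... look up the user, verify the password hash, etc.
    return "ok"
```

Each failure mode exits immediately, so the happy path stays at a single indentation level and the `isset($_POST['user'])` check only has to appear once.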
{ "domain": "codereview.stackexchange", "id": 19464, "tags": "php, mysql, authentication" }
What is a good resource to learn about oriented matroids in the context of digraphs and optimization?
Question: I am interested in oriented matroids in the context of directed graphs and optimization. Unfortunately, I know very little of the topic. Is there a book, article or other resource that serves as a good introduction to oriented matroids, especially in the context of directed graphs? It's a bonus if the resource is suitable for an (under)graduate level course and is preferably even free. Answer: These notes written by Winfried Hochstättler for an MAA short course on matroids look like a good, short introduction. They also contain pointers to several longer references on oriented matroids.
{ "domain": "cs.stackexchange", "id": 552, "tags": "optimization, reference-request, discrete-mathematics, matroids" }
Calculating Elastic Potential Energy of a Stretched Sheet
Question: Essentially, I'm trying to determine the amount of elastic potential energy stored in a thin, elastic sheet that has undergone some type of stretching (e.g. a flag of stretchy fabric waving in the wind). To keep the post as relevant as possible, in this example the sheet can be rectangular and is just being stretched length-wise and width-wise into a larger rectangle. Since I'm simplifying the sheet's behaviour to follow Hooke's Law, I started off by realizing that Hooke's Law applies to linear deformations, and in this case I essentially have two linear stretches of the whole sheet, one in the x direction and another in the y. If I calculate both stretches, I figure that I can compute the elastic potential energy contributed by each and just add them together to get the total energy. My main questions would be: 1) Are there any missteps in my reasoning, especially in how I treated the overall stretch as 2 independent stretches (i.e. x and y)? 2) Is there any relationship between the elastic coefficient of the sheet when stretched in the x-direction vs. the y-direction? Or would they be independent and need to be determined by experiment? Thanks in advance! :) Answer: Indeed, you can calculate the potential energy by linearly adding together the contributions in x and y. For the forces and elastic coefficients, I think it depends on the nature of the fabric and the orientation of the fabric strands with respect to the axes. In fact, the forces are vector additive, with respect to the components. So, suppose you have a system that can be represented like this, with the directions to be stretched represented by the axes: Then the coefficients along the x and y axes are obviously independent of each other, and might have different values, and so are the components of force. However, if the system represents something like this: Then it is not so obvious that they are independent. 
To simplify, suppose I have one spring with coefficient k, positioned diagonally with respect to the axes. $$\vec F=k\Delta r \hat r = k\Delta r(\frac{\Delta x}{\Delta r} \hat x + \frac{\Delta y}{\Delta r} \hat y) = k\Delta x \hat x + k\Delta y \hat y $$ So, the components of force can be separated, with the same effective coefficient k as the original on both components. And this can be applied to a system of combined springs, just like above, which can represent a sheet of fabric, assuming there is zero or minimal shrinkage or contraction in between when stretched on opposite sides. The above shows that the components of force can be treated independently. But are the coefficients independent? Or does it depend on orientation? To make it short, the directions of stretch can be considered completely independent, and the coefficients depend on the type of material. However, there might be sets of directions which have the same coefficients. For example, most fabric can be represented by a square lattice, and the repeating symmetric thread patterns usually lie along 2 perpendicular directions depending on where the thread is run. Then the directions with similar strengths are usually the directions at mirror angles to the lines of reflection, which are the 2 perpendicular lines of directions mentioned above. Furthermore, my hypothesis is that the result will likely be a Young's modulus type formula, but instead of $$\frac{F}{A} = E\frac{\Delta l}{l_0}$$ you'll get $$\frac{F}{L} = E\frac{\Delta l}{l_0}$$ because it is 2D, where $L$ is the length of the side perpendicular to the length being stretched, $l_0$ is the length being stretched, and $E$ is the elastic constant in the specified direction (with respect to thread orientation). This $E$ should now be "constant" and independent of the length or width of the material, since those are already taken into account in the formula. :) This might apply only to rectangular shapes though, with the direction of force parallel to either side of the rectangle. So the formula is basically just Hooke's law ($F=kx$) but taking into account the length and width, to make the coefficient $E$ more universal, remaining approximately constant even with respect to any change in length or width of the material.
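Putting the answer's pieces together: if shear and transverse contraction are neglected (the answer's own assumption), the asker's plan gives a total stored energy of

```latex
U \;=\; U_x + U_y
\;=\; \tfrac{1}{2}\,k_x\,(\Delta x)^{2} \;+\; \tfrac{1}{2}\,k_y\,(\Delta y)^{2},
\qquad
k_x = \frac{E_x\,L_y}{l_{0,x}}, \quad k_y = \frac{E_y\,L_x}{l_{0,y}},
```

where the effective spring constants follow from the proposed 2-D modulus formula $F/L = E\,\Delta l/l_0$. (The subscripted symbols $k_x$, $E_x$, $l_{0,x}$, etc. are my notation for the per-direction quantities, not the answer's.)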
{ "domain": "physics.stackexchange", "id": 51135, "tags": "energy, potential-energy, elasticity, soft-matter" }
What is the relation between solubility and thermal stability?
Question: The thermal stability of the hydroxides of the alkaline earth metals increases down the group, i.e., Be(OH)2 is less stable than Ba(OH)2. The solubility also increases down the group for these compounds, i.e., Be(OH)2 is less soluble in water than Ba(OH)2. Hence for the hydroxides of group 2 elements the solubility and thermal stability trends are the same, i.e., both increase down the group. But the solubility and thermal stability trends become the inverse of each other for the sulphates of group 2 elements. Does this mean that there is no relation between solubility and thermal stability? Answer: Nobody is able to foresee the solubility of a product. There are some experimental rules, but they all have exceptions that nobody is able to explain. Just have a look at the calcium salts made with the halogens (F, Cl, Br, I). There is a nice analogy among Cl, Br and I, but not F. Look! Calcium chloride $\ce{CaCl2}$, bromide $\ce{CaBr2}$ and iodide $\ce{CaI2}$ are all extremely soluble in water: they all dissolve in less than their own weight of water. But calcium fluoride $\ce{CaF2}$ is among the least soluble substances on Earth. The principal mineral for fluorine is $\ce{CaF2}$, and it can be found everywhere at the surface of the Earth. If this mineral had been even a little soluble in water, rain would have washed it away over geological time. Why is there such a huge difference between calcium fluoride and the other halides? Nobody knows! Another example: potassium perchlorate is the only potassium salt which is very weakly soluble in water. By comparison, sodium perchlorate is extremely soluble in water. Why? Another example: the number of silver compounds which are soluble in water is limited. In organic solvents, it is even worse. But the Handbook of Chemistry and Physics says that silver perchlorate is soluble in toluene. Why? Nobody knows. From time to time, articles are published in journals like the Journal of Chemical Education whose authors proudly display a theory, filled with new parameters, for explaining the solubility of a great many chemicals. But there are always exceptions that they regret not being able to explain.
{ "domain": "chemistry.stackexchange", "id": 13072, "tags": "inorganic-chemistry, solubility, stability" }
What is slinky-approximation?
Question: I was reading the derivation of the wave equation from Berkeley Physics - Waves by Frank S. Crawford Jr. Let $\Delta z$ be a small segment of a continuous string. At equilibrium, the tension is $T_0$ at both ends. When the segment moves upwards, it no longer remains straight. $T_1$ and $T_2$ are the tensions at the two ends, making angles $\theta_1$ and $\theta_2$ with the horizontal. The horizontal components of the tensions at the ends are then $T_1\cos\theta_1$ and $T_2\cos\theta_2$, generally denoted as $T\cos\theta$. Now, in the slinky approximation, $T$ is larger than $T_0$ by a factor of $\dfrac{1}{\cos\theta}$. Therefore, $$T\cos\theta = T_0.$$ Now, what is the slinky approximation? How is it used in the derivation of the wave equation here? Answer: The slinky approximation is essentially the assumption that the extensions we are dealing with (including the equilibrium length) are much greater than the natural length of the spring. For example, this is true in a slinky, which stretches to much greater lengths when you pull it than its natural length. Thus, while we would normally have, for a length $x$ of string, $$ T = k(x-a_0) $$ where $a_0$ is the natural length, under the slinky approximation we can neglect $a_0$. This means that we get a tension approximately proportional to the length, $T \approx kx$, and the ratio of tensions for two different cases is approximately equal to the ratio of lengths. Here, the length of the segment inclined at an angle $\theta$ due to extension is $\dfrac{L_0}{\cos\theta}$, where $L_0$ is the equilibrium length of the segment. The slinky approximation here directly leads to the relation $$T_0 = T \cos\theta $$
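Spelling out the last step (this just makes explicit the ratio argument already stated in the answer): since tension is approximately proportional to length,

```latex
\frac{T}{T_0} \;\approx\; \frac{k\,(L_0/\cos\theta)}{k\,L_0} \;=\; \frac{1}{\cos\theta}
\quad\Longrightarrow\quad
T\cos\theta \;=\; T_0 .
```

So the horizontal component of the tension is the same everywhere along the slinky, which is exactly what the derivation in the question uses.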
{ "domain": "physics.stackexchange", "id": 21216, "tags": "waves" }
Why can't gravity repel things?
Question: Gravity is the result of the curvature of space, so could the curvature actually send objects away from the source? Answer: Apart from the field-theoretical standpoint presented by Stan, one can repel objects in a sense, when taking orbital mechanics into account. The slingshot maneuver extracts angular momentum and energy of an orbiting mass by the use of gravity. The trick here is that the probe's velocity just gets redirected in the planet's reference frame. But this results in an acceleration in the solar rest-frame that the probe essentially tries to leave. As you were asking about curvature, I don't know how and if centrifugal potentials that play a role here are described by general relativity.
{ "domain": "astronomy.stackexchange", "id": 1261, "tags": "gravity, space, general-relativity, space-time, newtonian-gravity" }
Boundedness of a Hamiltonian and when does a Hamiltonian have a spectrum?
Question: In the context of Quantum Field Theory we put restrictions on the potentials we can use. One argument is boundedness. If the potential is unbounded from below, for example $V(\phi) = \phi^3$, then "the field can always decrease its energy further" by becoming more negative. This results in no ground state and no bound states. The analogous argument in the context of Quantum Mechanics is more confusing to me. Consider the example of the hydrogen atom. The potential is not bounded from below, and yet we have a discrete spectrum with a unique finite-energy ground state. Another example where boundedness arguments come up is perturbation theory (in Quantum Mechanics). Consider a finite well for a single particle $$V(x)=\begin{cases}V_0 & |x| >a \\ 0 & \text{otherwise} \end{cases}$$ If we apply a linear perturbation $\delta V = \lambda x$ then we say that perturbation theory can't converge because the Hamiltonian is unbounded and hence has no bound states. And it follows that (this is the point I am not sure is true) the corrections to the energy must diverge to $-\infty$. I am not sure what this statement is saying. Is it that the spectrum is just $-\infty$, or is it saying the Hamiltonian just doesn't have a spectrum at all (whether that's continuous and/or discrete)? I guess what I'm asking for is a more mathematical statement about when a single-particle Hamiltonian has a spectrum. Answer: $\phi^3$ model: This is a nice toy model to illustrate some aspects of perturbative calculations in QFT, but certainly not a fully consistent QFT. Hydrogen atom with $V(r)=-e^2/r$: The spectrum of the potential term alone is just $(-\infty, 0)$ and, of course, unbounded from below. But the full Hamiltonian (including the kinetic term with spectrum $[0,\infty)$) $H = \vec{p}^2/2m + V(r)$ is bounded from below, with the ground state having the lowest energy eigenvalue $E_1=-me^4/2\hbar^2$. 
This can be understood as the best compromise between the contributions of the kinetic energy and the potential energy, which are - in contrast to classical mechanics - related by (a generalized version of) the uncertainty relation. As a final result, $H$ has the well known point spectrum $E_n=-m e^4/2 \hbar^2 n^2$ ($n=1,2,\ldots$) and a continuous spectrum $[0,\infty)$ (corresponding to the scattering states). Finite well potential: Again, the spectrum of the multiplication operator $V(x)$ is just the range of the function $V(x)$, and the spectrum of the kinetic energy $P^2/2m$ is $[0,\infty)$. But $H=P^2/2m+V(x)$ has a point spectrum plus a continuous spectrum for $E \ge V_0$. Hamiltonian $H= P^2/2m + mgX$: Describes e.g. the one-dimensional motion of a particle with mass $m$ in a constant gravitational field (or, with $mg$ replaced by the electric force, in a constant electric field). As expected, there are no bound states and the spectrum is purely continuous. This has nothing to do with perturbation theory.
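The "best compromise" remark can be made quantitative with the usual heuristic estimate (a standard textbook argument, added here for illustration): taking the electron's spatial spread to be $r$, the uncertainty relation forces $p \sim \hbar/r$, so

```latex
E(r) \;\approx\; \frac{\hbar^{2}}{2mr^{2}} \;-\; \frac{e^{2}}{r},
\qquad
\frac{dE}{dr}=0 \;\Rightarrow\; r=\frac{\hbar^{2}}{me^{2}},
\qquad
E_{\min} \;=\; -\frac{me^{4}}{2\hbar^{2}} \;=\; E_{1}.
```

Shrinking $r$ lowers the potential energy but raises the kinetic energy faster, which is exactly why the unbounded potential does not drag the spectrum down to $-\infty$.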
{ "domain": "physics.stackexchange", "id": 92267, "tags": "quantum-mechanics, hilbert-space, potential, hamiltonian, eigenvalue" }
In what cases and with what method does one find a time dependent probability density for a quantum system in an infinite square well?
Question: How can one find the time dependent probability density function of a quantum system given $\Psi(x,t=0)$? Say, $\psi(x) \sim x^4$ for $0 < x < L$. How can one find the time dependent probability density knowing $\psi(x)$? Knowing that the most general solution to the time dependent Schrodinger equation is: $\Psi(x,t) = \sum_{n} a_n \psi_n(x) e^{-i E_n t / \hbar}$ and knowing that $\psi(x)$ has no dependence on $n$, then when one tries to take $P(x,t) = |\Psi(x,t)|^2$, all the time dependent terms drop out and one is left with a time independent probability density. Answer: There are two different concepts that you are mixing up (perhaps partly because you use the same letter $\psi$ to refer to both concepts). There is the initial wave function, which I'll call $f$, so $\Psi(x,0)=f(x)$. Then there are the energy eigenfunctions, $u_n(x)$. It is true that \begin{equation} \Psi(x,t)=\sum_n a_n u_n(x) e^{-i \omega_n t} \end{equation} where I am using $\omega_n = E_n / \hbar$. Now to relate these, we have \begin{equation} \Psi(x,0) = f(x) = \sum_n a_n u_n(x) \end{equation} We can solve for the $a_n$ by using the orthonormality of the $u_n$. Then after we have solved for the $a_n$, the time dependence of $|\Psi(x,t)|^2$ is fixed (just plug those numerical values of $a_n$ into the general formula for $\Psi(x,t)$). The key thing is that $|\Psi(x,t)|^2$ will be time dependent if there is more than one nonzero $a_n$. To see that is true, you can just compute $|\Psi(x,t)|^2$ for an explicit example such as $\Psi(x,t) = a_1 u_1(x) e^{-i \omega_1 t} + a_2 u_2(x) e^{-i \omega_2 t}$. To respond to some comments: To compute $\Psi(x,t)$ you need to know both the $a_n$ and the $u_n(x)$. The way you compute the $u_n(x)$ is by solving the time independent Schrodinger equation $\hat{H} u_n = E_n u_n$. So the $u_n$ will depend on the potential you choose. For the infinite square well potential the $u_n$ are given by $\sin(n\pi x / L)$ (up to normalization).
The way you compute $a_n$ is by using orthonormality, $a_n = \int dx u_n^\star(x) f(x)$. If you want more detail you can find worked examples in any undergrad level quantum mechanics text, such as Griffiths.
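As a hedged numerical sketch of this recipe for the infinite square well (assumptions, not from the original post: $L=1$, $\hbar=m=1$, a crude Riemann-sum quadrature; the $f(x)\sim x^4$ profile is the one from the question):

```python
import cmath
import math

L = 1.0
N_GRID = 2000
dx = L / N_GRID
xs = [i * dx for i in range(N_GRID + 1)]

def u(n, x):
    """Normalized eigenfunction u_n(x) = sqrt(2/L) sin(n pi x / L)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

raw = [x**4 for x in xs]                       # unnormalized f(x) ~ x^4
norm = math.sqrt(sum(r * r for r in raw) * dx)
f = [r / norm for r in raw]                    # normalized initial state

def a(n):
    """Expansion coefficient a_n = integral of u_n(x) f(x) dx."""
    return sum(u(n, x) * fx for x, fx in zip(xs, f)) * dx

coeffs = [a(n) for n in range(1, 51)]
completeness = sum(c * c for c in coeffs)      # tends to 1 as more terms are kept

def prob_density(x, t):
    """|Psi(x,t)|^2 with well energies E_n = n^2 pi^2 / 2 (hbar = m = 1)."""
    psi = sum(c * u(n, x) * cmath.exp(-1j * (n * n * math.pi**2 / 2) * t)
              for n, c in enumerate(coeffs, start=1))
    return abs(psi) ** 2
```

Because several $a_n$ come out nonzero here, evaluating `prob_density` at the same point for two different times gives different values, which is exactly the time dependence the answer describes.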
{ "domain": "physics.stackexchange", "id": 25472, "tags": "quantum-mechanics, homework-and-exercises, wavefunction, probability" }
IntArray implementation
Question: I need to write an IntArray implementation for college. I'm wondering if my code could be better and if it is efficient. In the header file are the methods listed that we need to write. Header file: #ifndef INTARRAY_H_ #define INTARRAY_H_ #include <iostream> using namespace std; class IntArray{ private: int length; int * data; public: IntArray(int size = 0); IntArray(const IntArray& other); IntArray& operator=(const IntArray& original); int getSize() const { return length; }; int& operator[](unsigned int i); void resize(unsigned int size); void insertBefore(int value, int index); friend ostream& operator<<(ostream& out, const IntArray& list); ~IntArray(){ delete[] data; }; }; #endif Cpp file: IntArray::IntArray(int size){ length = size; data = new int[size]; for(int i = 0;i<size;i++){ data[i] = 0; } } IntArray::IntArray(const IntArray& other){ length = other.length; data = new int[length]; for(int i =0;i<length;i++){ data[i] = other.data[i]; } } IntArray& IntArray::operator=(const IntArray& other){ if(this = &other){ return *this; } length = other.length; data = new int[length]; for(int i = 0; i <length;i++){ data[i] = other[i]; } return *this; } void IntArray::resize(unsigned size){ if (size <= length){ length = size; return; } int * temparr = new int[size]; // copy data for (int i = 0; i < length; ++i){ temparr[i] = data[i]; } delete [] data; data = temparr; length = size; } int& IntArray::operator [](unsigned i){ if(i >= length || i < 0){ throw "index out of bounds"; } return data[i]; } void IntArray::insertBefore(int d, int index){ if(index >= length){ throw "index out of bounds"; } resize(length + 1); for(int i = length -1;i>index;i--){ data[i] = data[i-1]; } data[index] = d; } Answer: Main changes: Assuming you keep an int length, check for size >= 0 in constructor IntArray(int size = 0);, and in void IntArray::insertBefore(int d, int index) You thought about delete[] data in the destructor. 
However, there are other places in your code where data needs to be deleted: each time you change data's value, you need to deallocate what data used to point to. I.e., delete[] data before any data = You added data = new int[length]; in the copy constructor, so you've solved the out of bound issue, but introduced a memory leak instead. You need to deallocate data first. Same for the assignment operator. resize had an out-of-bound issue you've fixed Code used to be: void IntArray::resize(unsigned int size){ for(int i=length; i<size; i++){ data[i] = 0; } length = size; } (Note: you're not supposed to edit your code, or put your updated code in a separate section, at the bottom of your question) Opinion-based Maybe use an unsigned type (like std::size_t) instead of the signed int for the length. Pros: You enforce the invariant "the length can only be positive" Cons: Special values (like -1 for "Index not found") will look like regular values, higher risk of underflow. (see Herb Sutter comment at 44:30) Other Superfluous this-> in operator[](int index) Style: pick one of length or getSize() to get the data length, but don't mix the two (Here length is fine; in classes where you'd like to make a check before accessing the value, getSomething() is better.) Add a const version of operator[](int index) Don't add an extra space at the end of operator<< (which you removed by mistake) Going further It depends on the assignment. Optimization: Introducing capacity to reduce the number of allocations/deallocations Add iterators Add convenience methods (bool empty() const;, int& front();) Add typedefs like typedef int value_type (useful for metaprogramming) Add relationship operators (at least == and !=) @glampert made excellent points I forgot, check his answer.
{ "domain": "codereview.stackexchange", "id": 15172, "tags": "c++, array, homework" }
Is there a difference in the 'quality' of a gas if it's heated by electromagnetic radiation as opposed to conduction/convection?
Question: According to this link, "The wavelength at which the $O_2$ molecule most strongly absorbs light is approximately $145$ nm." According to this link, that's in the ultraviolet range of the electromagnetic spectrum. Consider two tanks containing oxygen gas, both equivalent. One tank has a steady stream of $145$ nm ultraviolet light directed at it, while the other has a flame warming the bottom of the tank. The temperature of the oxygen in both tanks will increase. Say we calibrate the experiment so both tanks reach $10^{\circ}$ C (and that nothing explodes). My question is, is there any difference in the oxygen contained in one tank vs. the other? In other words, is there any experiment that could be done to determine whether some oxygen came from one tank or the other? Or, since they are at the same temperature, are they equivalent? The question arises because from what I understand, a gas molecule heats up by radiative absorption by absorbing a photon, which excites the electrons in it, while the mechanism by which it is heated via conduction/convection is different (molecules bumping into each other?), and I'm not sure if this results in a different "quality" of heated gas. Answer: As noted in @Bob_D's comment, 145 nm radiation is going to convert many oxygen (O$_2$) molecules into ozone (O$_3$). Ozone is unstable and eventually reverts to molecular oxygen, but its half-life is long enough that it could be detected for quite some time. The half-life of ozone in a glass vessel initially filled with 10% O$_3$ and 90% O$_2$ at ambient temperature and pressure is about 20 hours. The lifetime is less in some other types of vessels, e.g. it is only about an hour in brass. Ambient surface ozone levels are $<10^{-7}$, so an oxygen tank initially filled with 10% ozone might be distinguishable for about 20 half-lives, i.e. a couple of weeks. Of course, figuring out how long your two tanks would be distinguishable is not so simple.
Even if you specified many more details, it is not trivial to estimate the initial ozone concentrations and their decay. A 172 nm lamp in dry oxygen at 300K can produce 400 g/kWh of ozone (assuming 100% wall plug efficiency). The specific heat of oxygen is 0.92 J/gK, so a crude dimensional estimate is that we might expect the ozone concentration to be $\sim 10^{-4}K^{-1} \Delta T$. Increasing the temperature of oxygen by $10$K with a UV lamp should roughly produce $\sim 0.1\%$ ozone concentration, which could be distinguishable for a week or so.
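The "couple of weeks" and "a week or so" estimates follow from simple exponential decay; here is a sketch with the answer's round numbers treated as assumptions:

```python
import math

# Back-of-envelope numbers from the answer (assumptions, not measurements):
half_life_h = 20.0   # ozone half-life in a glass vessel, hours
ambient = 1e-7       # typical ambient surface ozone fraction (< 1e-7)

def hours_until_indistinguishable(c0, ambient, half_life_h):
    """Time for exponential decay to bring fraction c0 down to ambient."""
    n_half_lives = math.log2(c0 / ambient)
    return n_half_lives * half_life_h

# ~0.1% ozone from a 10 K UV heat-up: distinguishable for about 11 days
t_days = hours_until_indistinguishable(1e-3, ambient, half_life_h) / 24.0
```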
{ "domain": "physics.stackexchange", "id": 94241, "tags": "thermodynamics, electromagnetic-radiation, gas, heat-conduction, convection" }
How do space rovers survive at very low temperatures?
Question: For example, if a rover has a working temperature range of -70 to +120 Celsius, how does it survive and then restore itself if the temperature drops to -150 degrees for several months? Answer: That is a very good question, and depends on the design. There are in general two ranges for components which are temperature sensitive. The operational range gives the temperature at which the component can be actively used. Within the survival range the component should generally take no harm but may not be actively used. Often what is even more demanding on the components than extreme temperatures is temperature cycling. Mars for example usually has a large number of cycles, whereas the Moon has more extreme temperatures but less cycling. In principle there are two ways to handle low temperatures for space rovers: Heating your rover so that all the components that are temperature sensitive stay within the respective limits. You can use active heating by for example using energy from your battery. Depending on the battery and your insulation you may not hold out very long though. Radioisotope heater units (RHU) are another alternative that can be very effective. Increasing the temperature tolerance of your system is another option. This can be done by selection of tolerant components, or potentially extending the range of existing components. Usually the most temperature sensitive component is the battery. Insulation is also possible, but may only get you so far. In general one could say that the thermal design is one of the most critical aspects for space probes, and will often decide about the feasibility of a mission.
{ "domain": "robotics.stackexchange", "id": 128, "tags": "electronics, ugv, reliability" }
Binary Search Tree in Java
Question: Please give me some feedback on how I could make it better or simpler. public class BinarySearchTree { private static TreeNode root; public static void main(String args[]) { BinarySearchTree bst = new BinarySearchTree(); root = bst.addItem(root, 10); bst.addItem(root, 2); bst.addItem(root, 6); bst.addItem(root, 3); bst.addItem(root, 1); bst.addItem(root, 15); bst.addItem(root, 12); bst.addItem(root, 16); System.out.println("** Print BST PRE ORDER **"); bst.printPreOrderTree(root); System.out.println("** Print BST IN ORDER **"); bst.printInOrderTree(root); System.out.println("** Print BST POST ORDER **"); bst.printPostOrderTree(root); } private TreeNode addItem(TreeNode root, int item) { if (root == null) { root = new TreeNode(null, item, null); return root; } else { if (item < root.element) { if (root.left == null) { TreeNode node = addItem(root.left, item); root.left = node; } else { addItem(root.left, item); } } else if (item > root.element) { if (root.right == null) { TreeNode node = addItem(root.right, item); root.right = node; } else { addItem(root.right, item); } } else { System.out.println("Duplicate Item"); } } return root; } private void printPreOrderTree(TreeNode root) { if (root != null) { System.out.println(root.element); printPreOrderTree(root.left); printPreOrderTree(root.right); } } private void printInOrderTree(TreeNode root) { if (root != null) { printInOrderTree(root.left); System.out.println(root.element); printInOrderTree(root.right); } } private void printPostOrderTree(TreeNode root) { if (root != null) { printPostOrderTree(root.left); printPostOrderTree(root.right); System.out.println(root.element); } } private static class TreeNode { TreeNode left; int element; TreeNode right; protected TreeNode(TreeNode left, int element, TreeNode right) { this.left = left; this.element = element; this.right = right; } } } Answer: static vs instance Your variable root should probably not be static. 
Instead, it should be an instance variable so you could have multiple BinarySearchTrees in a single program. node vs root In most places, your naming of node vs root is good. In some places, I found it a bit misleading: In printPreOrderTree(), printInOrderTree() and printPostOrderTree() I'd rather call it node than root. I'd use root only for that type of node for which the system has no parent. element vs item (vs data?) I think you can get rid of one of these terms and only use the other. I'd use item and dismiss element. I find element too abstract, the term could also be a supertype of node. I think item describes the payload of a node better than element. But then again, maybe simply data would be even better? Simpler Code in addItem() Since you don't need to remember the variables, you can simplify the code. Your code: private TreeNode addItem(TreeNode root, int item) { if (root == null) { root = new TreeNode(null, item, null); return root; } else { if (item < root.element) { if (root.left == null) { TreeNode node = addItem(root.left, item); root.left = node; } else { addItem(root.left, item); } } else if (item > root.element) { if (root.right == null) { TreeNode node = addItem(root.right, item); root.right = node; } else { addItem(root.right, item); } } else { System.out.println("Duplicate Item"); } } return root; } Simplified code: private TreeNode addItem(TreeNode root, int item) { if (root == null) return new TreeNode(null, item, null); else if (item < root.element) if (root.left == null) root.left = addItem(root.left, item); else addItem(root.left, item); else if (item > root.element) if (root.right == null) root.right = addItem(root.right, item); else addItem(root.right, item); else System.out.println("Duplicate Item"); return root; } And since the recursive addItem() is called anyway and checking root for null anyway, it can be even simpler: private TreeNode addItem(TreeNode root, int item) { if (root == null) return new TreeNode(null, item, null); 
else if (item < root.element) root.left = addItem(root.left, item); else if (item > root.element) root.right = addItem(root.right, item); else System.out.println("Duplicate Item"); return root; } Use stderr instead of stdout for error messages Your error message System.out.println("Duplicate Item"); should not go to stdout but to stderr, I guess. System.err.println("Duplicate Item"); Use the Visitor pattern? In printPreOrderTree, printInOrderTree and printPostOrderTree, you always visit all nodes. The recursion over the nodes is always the same. What differs is the event placement and the action. If your code gets more complex and you have more such recursions over the tree, you might want to use the Visitor design pattern to separate the recursion from the action to be performed with the node.
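To illustrate the last point without committing to a full Java visitor hierarchy, here is a sketch (in Python, purely for brevity) of separating the recursion over the tree from the per-node action by passing a callable:

```python
# Sketch: one traversal routine serves all three orders; the action
# performed at each node is injected as a callable (a lightweight visitor).
class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right

def insert(node, item):
    """Standard BST insert; duplicates are silently ignored."""
    if node is None:
        return Node(item)
    if item < node.item:
        node.left = insert(node.left, item)
    elif item > node.item:
        node.right = insert(node.right, item)
    return node

def traverse(node, visit, order="in"):
    """Walk the tree, calling visit(item) in pre-, in-, or post-order."""
    if node is None:
        return
    if order == "pre":
        visit(node.item)
    traverse(node.left, visit, order)
    if order == "in":
        visit(node.item)
    traverse(node.right, visit, order)
    if order == "post":
        visit(node.item)

root = None
for x in (10, 2, 6, 3, 1, 15, 12, 16):   # same items as the question
    root = insert(root, x)

in_order, pre_order = [], []
traverse(root, in_order.append)           # in-order yields sorted items
traverse(root, pre_order.append, "pre")
```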
{ "domain": "codereview.stackexchange", "id": 14385, "tags": "java, tree" }
What Is A Hydrogen Bond?
Question: I know this might be a bit of a silly question, but what is a hydrogen bond? I always thought that (say in water for example) the hydrogen was slightly more negative than the other ion (oxygen). But hear me out, this is because the shared electron (between the hydrogen and the oxygen due to their covalent bond) is slightly closer to the hydrogen's positive nucleus (than the oxygen's positive nucleus). And since both are gaining an electron, that additional electron had more of an effect on the overall hydrogen ion (and less so of an effect on the oxygen ion) than a charge of exactly -1. Therefore if the charge of the hydrogen ion is now slightly less than -1 and the charge of the oxygen is slightly more than -1, then the hydrogen is more negative than the oxygen... Wait, if both the hydrogen and the oxygen ions are negative, why would they attract other negative ions from other water molecules? Surely they'd repel. Obviously there is some flaw in my understanding of this and I'm just hoping someone can help me out. Answer: I always thought that [in water] the hydrogen was slightly more negative than the [oxygen]. […] [T]his is because the shared electron (between the hydrogen and the oxygen due to their covalent bond) is slightly closer to the hydrogen’s positive nucleus (than the oxygen’s positive nucleus). You do not consider the relative size of atoms to determine their partial charges. It is not relevant by itself whether an atom is large or small; the only thing that matters is how well the nucleus can attract electrons — this is a value tabulated as electronegativity. The size of an atom in itself, and therefore also the distance from middle of a covalent bond to the nucleus of either atom is irrelevant; however, the size of an atom plays an indirect role since additional shells of core electrons reduce electronegativity by shielding the positive charge of a nucleus.
It is not required to perform a formal analysis of how much influence the core electrons have, since — as I mentioned — the elements’ electronegativities are tabulated in periodic tables and similar resources. Going strictly by electronegativity, oxygen has $3.5$ and hydrogen has $2.1$. Thus, the bonding electrons are polarised much more towards oxygen; this means that oxygen has an electron surplus (partial negative charge) while hydrogen has an electron deficit (and therefore a partial positive charge). Another thing to consider is that a water molecule is a neutral molecule. Therefore, positive and negative partial charges must be somewhat evenly distributed. There is no overall surplus of electron, i.e. there is no overall charge. For an overall charge, you would have to turn to species such as the hydroxide ion $\ce{OH-}$. However, even therein the hydrogen is partially positively charged due to oxygen’s much larger electronegativity. This is not always given: in a peroxide ion $\ce{O2^2-}$, each oxygen atom will carry one single negative charge to give a symmetric molecule. Taking this all into account, it makes sense to consider a hydrogen bond as an electrostatic interaction between a positively charged hydrogen bound to a negatively charged atom and another strongly electronegative, negatively charged atom of a different molecule.
{ "domain": "chemistry.stackexchange", "id": 7148, "tags": "hydrogen-bond" }
Steer to a pose for a differential drive robot
Question: I am working on a diff-drive robot that needs to go and dock into a docking station to charge when the battery is low. The charging socket is on the back of the robot, like a vacuum cleaner, so this movement will have to be done in reverse, much like parking a car. I have a diagram that might help you understand the situation better: From a ROS perspective, all related perception will be in the base_link frame, so that my robot will effectively be at $(0, 0, 0)$ at all times, and the docking station will be detected in this frame. Let's assume for now that the docking station is at $(x_g, y_g, \theta_g)$ as shown in the figure. In the ROS implementation, I'd be working with geometry_msgs/PoseStamped messages. I need to be able to write geometry_msgs/Twist messages to the cmd_vel topic in order to make the robot move. Because this is a diff-drive robot, I know that I can only control linear.x and angular.z fields of the particular Twist message. I would like to know if the control law expressed below is valid or not: /** * pose -> The desired pose (geometry_msgs/PoseStamped) * command -> the resulting joint commands (geometry_msgs/Twist) */ double lin_vel_gain = 0.3; double ang_vel_gain = 0.1; /* Find the linear error */ double linear_error = std::hypot(pose.position.x, pose.position.y); /* Find the angular error */ double angular_error = std::atan2(pose.position.y, pose.position.x); command.linear.x = lin_vel_gain * linear_error; command.angular.z = ang_vel_gain * angular_error; I would also like to know how I can make this better, I was thinking of designing it according to a more classical path-planner, like this: How would I go about obtaining these intermediate poses? Any help, including literature and code samples, is greatly appreciated. Thanks! Answer: To your question around the control math being valid, it appears to be mathematically OK, but it's not clear it will work in practice. Moves are usually profiled where you set an end point.
Your math (a proportional (P) controller) will technically never reach the charger since it will slow down as it approaches and will never fully align with it for the same reason. Also the update rate on locating the charger will have to come in very quickly compared to the motion since theta will change as the robot starts moving. My thoughts are that you should line up the robot using theta (with either a PID controller or by oscillating around the center point) and move towards the charger at a set (slow) speed until the robot hits the charger. Don't be fancy with velocity. Focus on steering and just move slowly. I have accomplished this with a camera that locates an LED on the charger and steers towards it with a slow set/steady forward speed. When the voltage on the charging pin appears, I stop forward motion. If it takes too long, it stops, backs up and tries again. You have more than one question so you should probably break it up.
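A minimal simulation of the suggested strategy (constant slow speed with proportional control on heading only; the gains, timestep, and contact radius below are assumptions, not values from the answer):

```python
import math

DT = 0.05        # integration step [s] (assumed)
V = 0.1          # constant slow approach speed [m/s] (assumed)
K_ANG = 2.0      # heading P-gain (assumed)
CONTACT = 0.05   # "charger hit" radius [m] (assumed)

def drive_to(xg, yg, x=0.0, y=0.0, th=0.0, max_steps=5000):
    """Simulate a unicycle model steering toward (xg, yg) at fixed speed.

    Returns the time to contact in seconds, or None if it never docks.
    """
    for step in range(max_steps):
        dx, dy = xg - x, yg - y
        if math.hypot(dx, dy) < CONTACT:
            return step * DT
        heading_err = math.atan2(dy, dx) - th
        # wrap to [-pi, pi] so the controller always turns the short way
        heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
        w = K_ANG * heading_err          # steer; speed stays constant
        x += V * math.cos(th) * DT
        y += V * math.sin(th) * DT
        th += w * DT
    return None

t = drive_to(1.0, 0.5)   # docks in a little over the straight-line time
```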
{ "domain": "robotics.stackexchange", "id": 38500, "tags": "ros2, control, inverse-kinematics, path-planning, differential-drive" }
Mass changes with speed, so shouldn't that mean reactionless drives are possible?
Question: Increasing the angular momentum of an object will increase the mass of an object. The thought experiment I have in mind is 2 identical discs, left and right: spin left push right off of left stop left spinning spin right pull left back towards right stop right spinning start over I imagine it inch-worming along as each disc has its mass momentarily increased to act on the other. So why is this type of system theoretically impossible? Answer: There's a sense in which mass does increase with speed. However, to increase something's speed, you have to put energy into it. That energy also has mass, it also figures into center-of-mass calculations, and it's exactly equal to the mass gained by the object when it speeds up. So no scheme of this sort can work.
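A back-of-envelope sketch of why the bookkeeping balances: the "extra mass" of the spinning disc is exactly the mass equivalent $\Delta m = E/c^2$ of the kinetic energy you transferred in, and it is tiny for everyday flywheels (the flywheel numbers below are illustrative):

```python
# Mass equivalent of rotational kinetic energy E = I w^2 / 2.
# The energy had to come from somewhere, and that somewhere loses the
# same mass, so the center of mass never shifts for free.
C = 299_792_458.0   # speed of light [m/s]

def spin_mass_gain(inertia, omega):
    """Mass equivalent (kg) of the kinetic energy of a spinning body."""
    energy = 0.5 * inertia * omega**2
    return energy / C**2

# A 1 kg m^2 flywheel at 1000 rad/s stores 0.5 MJ, i.e. roughly
# 5.6e-12 kg of mass equivalent.
dm = spin_mass_gain(1.0, 1000.0)
```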
{ "domain": "physics.stackexchange", "id": 71599, "tags": "speed-of-light, acceleration" }
Is it possible for Hydrogen to lose its electron?
Question: I know the basics of Chemistry and one thing I've always wondered is whether it is possible for Hydrogen to give up its one electron. I know Hydrogen is eager to share its electron through covalent bonds, but is there a scenario where Hydrogen will give up its only electron? Answer: Hydrogen can lose an electron, meaning it can be in the +1 oxidation state. However, just like any other cation or anion it never occurs free in condensed matter, it always is in contact with solvent and/or anions. Moreover, because of the extremely small size of the proton, it is an extremely powerful Lewis acid. Consequently, in common conditions a proton would react with the first electron pair it comes in contact with, up to and including inert gas electron pairs and covalent bond pairs. On the other hand, hydrogen ions are quite easy to generate in electric discharge and/or under extreme heating. In fact, producing and confinement of super-hot plasma, consisting of hydrogen ions and electrons, has been an area of active research for several decades and, well, producing and confinement of relatively cold plasma is not a problem. Confinement of plasma at several billion kelvin, however, is still a problem.
{ "domain": "chemistry.stackexchange", "id": 2495, "tags": "electrons, elements" }
Why can it be important to engineer band alignments/offsets (e.g. CdS buffer layer in CIGS)?
Question: I would like to ask this question by the example of the CdS buffer layer in CIGS solar cells. One paper (https://pubs.rsc.org/en/content/articlelanding/2017/se/c7se00348j#!divAbstract) says that CdS is deployed, because it has "excellent properties for better device performances such as favorable band alignment both to CIGS and ZnO, protection of the absorber surface by complete coverage of its surface (due to close lattice matching) from the subsequent layer deposition process, and removal of natural oxides from the absorber surface by the CBD process". I am particularly interested in the part about "favorable band alignments". So far I didn't really understand what was written about it and while I wait for answers, I will further read the Wikipedia article on "Band offsets" and the book "Heterojunction band offset engineering" (Franciosi, A.; Walle, C.). If someone could explain to me (with only basic knowledge in Fermi-Dirac distribution, and a little bit more detailed knowledge in solar cell physics), what "favorable band alignments" mean, I would be super happy. :-) Also, how relevant is this reason compared to other ones stated in the quote above? Answer: You should look up the material properties, band gap and band offset for these compounds and draw them. I cannot find a diagram in that article but here is another example. You will end up with a diagram like this. Bandgap is the separation between the conduction and valence band. The band offset is a vertical shift between the conduction bands which describes how they line up. In this example the conduction band offset is a problem, it causes a discontinuity in the conduction band which can influence the recombination and current flow. If the conduction bands have zero offset then the discontinuity will disappear. So yes, the greater the conduction band offset the greater the band bending at the interface. To elaborate a bit more on transport characteristics.
How can electrons in the conduction band pass through the potential barrier caused by mismatch of conduction band offsets? They must tunnel through. So this will decrease the conductance. I don’t have a specific resource on that topic.
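A common first-pass estimate of such offsets is Anderson's rule, $\Delta E_c = \chi_a - \chi_b$ from tabulated electron affinities. As a hedged sketch (the affinity values and the sign convention below are rough illustrative assumptions; reported values for CIGS in particular vary with Ga content, and Anderson's rule itself ignores interface effects):

```python
# Anderson's rule sketch: conduction-band offset from electron affinities.
# Affinity values are rough illustrative numbers, NOT vetted literature data.
ELECTRON_AFFINITY_EV = {
    "CdS": 4.5,
    "CIGS": 4.35,   # varies strongly with Ga content (assumption)
    "ZnO": 4.35,
}

def conduction_band_offset(mat_a, mat_b):
    """dEc = chi_a - chi_b; the sign tells you whether the junction forms
    a 'spike' or a 'cliff' in the conduction band (convention chosen here
    purely for illustration)."""
    chi = ELECTRON_AFFINITY_EV
    return chi[mat_a] - chi[mat_b]

offset = conduction_band_offset("CdS", "CIGS")   # small offset in this table
```

A small (near-zero) offset is the "favorable alignment" the question asks about: no large barrier for electrons to tunnel through.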
{ "domain": "physics.stackexchange", "id": 72455, "tags": "solid-state-physics, semiconductor-physics, electronic-band-theory, solar-cells, fermi-energy" }
Beginner PHP calculator
Question: I've spent the last few evenings learning PHP, CSS and HTML through Codecademy and now on Team Treehouse. My goal is to progress with PHP over the next year or two and perhaps obtain Zend cert. Anyway, I've worked on a simple calculator to test what I've learned. I'd like to see, (given the same 'functionality')what more experienced programmers would come up with. I think this would enhance my learning curve. I've used a switch statement, because I'm not sure if it's possible to do something like this: Variable = Math Operator Var1 = User Input Var2 = User Input Variable3 = Var1 Variable Var2 Example, if you entered *, 2, 2, Variable 3 would = 4. I tried a few ways but could not make it work, so had to settle with a switch which seems cumbersome. <?php $input1 = $_GET["num1"]; $input2 = $_GET["num2"]; $input3 = $_GET["symbol"]; $mathinput = "Not Selected"; switch ($input3) { case "*": $input3 = "*"; $mathinput = "Multiplication"; $result = $input1 * $input2; break; case "/": $input3 = "/"; $mathinput = "Division"; $result = $input1 / $input2; break; case "+": $input3 = "+"; $mathinput = "Addition"; $result = $input1 + $input2; break; case "-": $input3 = "-"; $mathinput = "Subtraction"; $result = $input1 - $input2; break; default: $input3 = "0"; $mathinput = "Invalid Operator"; } if ($input3=="0"){ echo '<div id="result"> <p>You have entered a bad operator</p> </div>' ; } else{ echo '<div id="result"> <p>Result: ' . $result . '</p> </div>'; } ?> Answer: No it's not possible writing Variable3 = Var1 Variable Var2 With the Variable as an operator, i decided to use call_user_func to call the BC Math functions http://php.net/manual/en/ref.bc.php I removed the $mathinput since you were not using it anyway. 
Result <?php $input1 = $_GET["num1"]; $input2 = $_GET["num2"]; $operator = $_GET["symbol"]; $operators = [ '+' => 'bcadd', '-' => 'bcsub', '*' => 'bcmul', '/' => 'bcdiv', '%' => 'bcmod', ]; $result = array_key_exists($operator, $operators) && is_numeric($input1) && is_numeric($input2) ? call_user_func($operators[$operator], $input1, $input2) : null; ?> <?php if (!$result) : ?> <div id="result"> <p>Error: Invalid input or arithmetic operator.</p> </div> <?php else: ?> <div id="result"> <p>Result: <?= $result; ?></p> </div> <?php endif; ?> EDIT The array_key_exists function checks if a given key exists in the array. http://php.net/manual/en/function.array-key-exists.php The following is a shorthand if / ternary logic $var = array_key_exists($operator, $operators) && is_numeric($input1) && is_numeric($input2) ? 'all true' : 'conditions not met'; Simply checks if all the conditions are true, if so the above example would set $var to a string with a value of 'all true', if one of the conditions would return false, the $var would be set to 'conditions not met' You could write the above snippet using regular if / else statements like this: if(array_key_exists($operator, $operators) && is_numeric($input1) && is_numeric($input2)) { $var = 'all true'; } else { $var = 'conditions not met'; } Using ternary logic could have some of the following advantages when used properly. Makes coding simple if/else logic quicker Makes code shorter Makes maintaining code quicker, easier
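The dispatch-table idea is not PHP-specific; as a sketch, the same pattern in Python replaces the switch statement with a dict of functions:

```python
import operator

# Map each operator symbol to a function instead of switch-ing on it,
# mirroring the call_user_func dispatch in the PHP answer above.
OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
    "%": operator.mod,
}

def calculate(a, b, symbol):
    """Return a <symbol> b, or None for an unknown operator."""
    fn = OPS.get(symbol)
    if fn is None:
        return None
    return fn(a, b)
```

Adding a new operator is then a one-line change to the table rather than a new switch case.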
{ "domain": "codereview.stackexchange", "id": 18614, "tags": "php, beginner, calculator" }
Create an API from EDA or ML outcome?
Question: I have the following sample dataset (the actual dataset is over 10 million records) Passenger Trip 0 Mark London 1 Mike Girona 2 Michael Paris 3 Max Sydney 4 Martin Amsterdam 5 Martin Barcelona 6 Martin Barcelona 7 Mark London 8 Mark Paris 9 Martin New york 10 Max Sydney 11 Max Paris 12 Max Sydney ... ... ... And I wanted to get the destination most frequently travelled to by a passenger! I was playing around in Jupyter and got the expected data with the following approach: series_px = df_px_dest.groupby('Passenger')['Trip'].apply(lambda x: x.value_counts().head(1)) df_px = series_px.to_frame() df_px.index = df_px.index.set_names(['UName', 'DEST']) df_px.reset_index(inplace=True) def getNextPossibleDestByUser(pxname,df=df_px): return df.query('UName==@pxname')['DEST'].to_string(index=False) While the response is fine, I have a few doubts now: 1) What's the best way to expose the method (say in this case getNextPossibleDestByUser) as an API (pass a customer name as input and get the destination as output)? 2) Whenever the API is called, does that mean all the 10 million records get processed each time? Is there any way to optimise that? 3) Rather than the dataframe (pandas) query approach, can I consider some ml models or utility functions from say scikit to solve the same problem? Answer: What's the best way to expose the method (say in this case getNextPossibleDestByUser) as an API (pass a customer name as input and get the destination as output)? Use Flask. Pretty easy to build an API. Whenever the API is called, does that mean all the 10 million records get processed each time? Is there any way to optimise that? It depends on the exact application but for your case you can take the passenger as an input from the call to your API as POST body or in the arguments and only iterate over the subset of data for that passenger.
Rather than the dataframe (pandas) query approach, can I consider some ML models or utility functions from, say, scikit to solve the same problem? If the problem is, as you illustrated, simple frequency calculations, then it is better to stick to pandas, in my opinion. If your problem is solved by the simple approach, it would not make sense to go for a complicated one.
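Regarding doubt 2 (avoiding re-processing the 10 million rows per request), one common pattern is to precompute the passenger-to-top-destination lookup once at startup and serve each API call from that table. A minimal standard-library sketch of the idea (the records and function names here are hypothetical; in the real app the same precomputation can be done with the pandas groupby shown in the question):

```python
from collections import Counter

# Hypothetical sample of (Passenger, Trip) records; in the real app this
# would be the full 10M-row dataset, loaded once at startup.
records = [
    ("Mark", "London"), ("Mike", "Girona"), ("Mark", "London"),
    ("Mark", "Paris"), ("Max", "Sydney"), ("Max", "Paris"), ("Max", "Sydney"),
]

def build_lookup(records):
    """Scan the data once, keeping only each passenger's most frequent trip."""
    counts = {}
    for passenger, trip in records:
        counts.setdefault(passenger, Counter())[trip] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

TOP_DEST = build_lookup(records)  # built once, reused by every API call

def get_next_possible_dest(passenger):
    # O(1) dictionary lookup per request instead of a full-dataset scan
    return TOP_DEST.get(passenger, "unknown")
```

A Flask route handler would then just call get_next_possible_dest with the name taken from the request arguments; only the startup pass touches all the rows.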
{ "domain": "datascience.stackexchange", "id": 4667, "tags": "python, scikit-learn, pandas" }
In what sense do repeated applications of Grover's operator rotate the state closer to the target?
Question: I'm studying the quantum search algorithm in this book: M.A. Nielsen, I.L. Chuang, "Quantum Computation and Quantum Information", Cambridge Univ. Press (2000) [~p. 252]. To sum up we have a state: $$|\psi \rangle = \cos( \frac{\theta}{2}) |\alpha \rangle + \sin( \frac{\theta}{2})|\beta\rangle$$ with $\theta \in [0, \pi]$ Now we apply an operator called G that performs a rotation: $$G |\psi \rangle = \cos( \frac{3\theta}{2}) |\alpha \rangle + \sin( \frac{3\theta}{2})|\beta\rangle$$ Continued application of $G$ takes the state to: $$G^k |\psi \rangle = \cos( \frac{(2k+1)\theta}{2}) |\alpha \rangle + \sin( \frac{(2k+1)\theta}{2})|\beta\rangle$$ Now the book says: "Repeated application of the Grover iteration rotates the state vector close to $|\beta\rangle$." Why? Probably it is a silly doubt but I can't figure it out. Answer: As was also pointed out in another answer, repeated applications of the Grover operator rotate the state closer to the target $\lvert\beta\rangle$ in the sense that the probability of finding the state in $\lvert\beta\rangle$ increases up to a certain point (or equivalently, the fidelity between the state and $\lvert\beta\rangle$ gets closer to one). More precisely, you can see that this probability is, after $k$ iterations, $$p^{(k)}_\beta\equiv \lvert\langle \beta\rvert G^k\lvert\psi\rangle\rvert^2=\sin^2\left(\frac{(2k+1)\theta}{2}\right).$$ Now, you start with the probability $p^{(0)}_\beta=\sin^2(\theta/2)$. This tells you how close the initial state is to the target. In most basic introductions to Grover's algorithm, you have $\sin(\theta/2)=2^{-n/2}=1/\sqrt N$ with $n$ the number of qubits or $N$ the total dimension of the state space, so that $p_\beta^{(0)}=2^{-n}=1/N$. This is not really important for the discussion though, so let us consider the general case with arbitrary $\theta$. By definition, you know that $\sin(\theta/2)\le1$ (because the overlap of a state with another state can never exceed $1$), so that $\theta\le\pi$.
The question thus becomes: what is the smallest integer $k\ge0$ such that $(2k+1)\theta\sim\pi$? More precisely, we are looking for the $k_0\in\mathbb N$ that minimises the difference between $(2k+1)\theta$ and $\pi$: $$k_0=\operatorname{argmin}_k\,\lvert(2k+1)\theta-\pi\rvert.$$ In other words, you are looking for the odd number $(2k+1)$ that is closest to $\pi/\theta$, which is the same as saying that you are looking for the non-negative integer $k_0$ that is closest to $\pi/2\theta-1/2$. This number is $$k_0=\left\lfloor\frac{\pi}{2\theta}\right\rfloor,$$ that is, the integer part of $\pi/2\theta$ ${}^\dagger$. In summary, $p_\beta^{(k)}$ will keep increasing with $k$ for all $k\le k_0$, after which it reaches its maximum and will start decreasing again (note that you might have $k_0=0$, in which case Grover's algorithm is useless). You might notice that the smaller the initial $\theta$ is, the more Grover's algorithm brings you closer to the target, but also the more steps will be needed to do that. ${}^\dagger$ To see this, write $\frac{\pi}{2\theta}=\left\lfloor\frac{\pi}{2\theta}\right\rfloor+r$, where $0\le r\le1$ is the decimal part of $\pi/2\theta$. If $0\le r\le 1/2$, then $$\frac{\pi}{2\theta}-\frac{1}{2}=\left\lfloor\frac{\pi}{2\theta}\right\rfloor-r'$$ where $0\le r'\le 1/2$, and thus $\left\lfloor\frac{\pi}{2\theta}\right\rfloor$ is the closest integer. If on the other hand $1/2\le r\le1$, then $$\frac{\pi}{2\theta}-\frac{1}{2}=\left\lfloor\frac{\pi}{2\theta}\right\rfloor+r''$$ for some $0\le r''\le1/2$. It follows that, again, the integer closest to $\pi/2\theta-1/2$ is $\left\lfloor\frac{\pi}{2\theta}\right\rfloor$.
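As a quick numerical sanity check of these formulas (plain Python, standard library only), one can evaluate $p^{(k)}_\beta=\sin^2((2k+1)\theta/2)$ and confirm that, within the first oscillation, it peaks at $k_0=\lfloor\pi/2\theta\rfloor$:

```python
import math

def p_beta(k, theta):
    """Probability of measuring the target after k Grover iterations."""
    return math.sin((2 * k + 1) * theta / 2) ** 2

def k0(theta):
    """Predicted optimal iteration count: floor(pi / (2*theta))."""
    return math.floor(math.pi / (2 * theta))

# Example: N = 1024, so sin(theta/2) = 1/sqrt(N)
theta = 2 * math.asin(1 / math.sqrt(1024))
# Brute-force the best k while (2k+1)*theta/2 stays below pi
best_by_search = max(range(50), key=lambda k: p_beta(k, theta))
print(k0(theta), best_by_search)  # both 25 for this N
```

For $N=1024$ this gives $k_0=25$ iterations, taking the success probability from $1/1024$ to above $0.999$.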
{ "domain": "quantumcomputing.stackexchange", "id": 674, "tags": "grovers-algorithm, nielsen-and-chuang" }
Variation of the double slit experiment
Question: The setup goes something like this: the laser gun fires only 1 photon each time, and the only way for the photon to appear on the hidden screen is for it to be reflected from the 2 narrow mirrors (see image below). I was watching a ping pong match and suddenly this popped into my mind. Will there be any interference pattern based on my setup? I argue that 1 photon now does not have the chance to interfere with itself like in the double slits, so there will not be any zebra pattern showing up, but I might be wrong. Also, if I coat both mirrors with Polaroid so that one mirror is left circularly polarized while the other is right circularly polarized, what will appear on the hidden screen, if anything? Answer: As an experimental physicist I would advise you to do the experiment. What the theory predicts for single photons depends on the boundary conditions the wavefunction of the photon has obeyed for the particular experiment. This wavefunction is complex and carries the phase information for building up the classical electromagnetic wave. This should not be surprising, because both the classical wave and the photons it is composed of are solutions of the same Maxwell equations, which in the case of the photon are treated as operators on the wavefunction. Thus, if interference is seen in a classical light experiment, the single photon distributions will build up to the interference pattern. The classical em distribution is the probability density of finding a photon at a screen, and thus it is the modulus squared of the wavefunction of the individual photon. For links look at this answer of mine.
{ "domain": "physics.stackexchange", "id": 33157, "tags": "double-slit-experiment, polarization" }
How do membrane proteins find their target locations?
Question: The question might be asked for any kind of "bound" proteins, but I'd like to restrict it to membrane proteins. Assuming membrane proteins (or their main parts) aren't built in situ but at some distance from the membrane, I wonder by which mechanisms they travel from their generation site to their final destination inside the membrane (inner or outer). Proteins that are distributed roughly evenly (or randomly) over the membrane don't pose a big conceptual problem: they could have gone there just by diffusion, possibly from many generation sites, distributed roughly evenly (or randomly) inside the cell. But what about uneven distributions, where some proteins are more densely and non-randomly packed (significant and functional) at some sites of the membrane than at others, e.g. receptors at postsynaptic densities Na+ channels at action potential initiation sites like the axon hillock or the axon initial segment Na+ channels at nodes of Ranvier? By which mechanisms (forces, signals or structures) are these proteins led to their targets? Maybe it depends, and there are different mechanisms. These I came up with (by contemplating first principles): uneven distribution of generation sites inside the cell (due to what?) uneven distribution of origins of attracting signals inside the membrane (due to what?) some "self-attracting" forces or signals (leading to accumulation by positive feedback) microtubules Which mechanism is — possibly — predominant? Related questions: Life cycle of proteins Pathways of ligand-gated ion channels Distribution of synapses of CA1 neurons Distribution of dendritic spike generating ion channels on the dendritic tree Visual maps of the neuronal membrane Distribution of sodium–potassium pumps Answer: This is a great question. A comprehensive answer would be beyond the scope of an answer on a forum like this.
I will summarize the best I can here, but if you are really interested in this you should look at some of the work by Randy Schekman and Tom Rapoport, who have done a lot of pioneering work in this field and have papers from more than two decades ago on their lab websites. I'll talk about membrane proteins generally, but I'm not sure what the state of the field is for Na+ channels specifically, so I can't comment too much on that particular case. Many of the details of the processes I will mention are still areas of active research, so I will try to stick mainly to what has been well-characterized (to the best of my knowledge). To restate the problem, proteins are generally synthesized in the lumen of the endoplasmic reticulum, which is an aqueous environment similar to the cytosol in many (but not all) ways. However, membrane proteins, which are not stable in aqueous environments, must: 1) Find a way from the ER lumen into a membrane. 2) Get from the ER into the correct membrane so they may perform their cellular function. We will start with step 1, but the key to both is a critical but often underappreciated aspect of protein biology called the signal peptide. The signal peptide is simply an N-terminal sequence of amino acids that precedes what we would normally think of as the beginning of a mature protein. It is relatively short, usually only ~30 amino acids in length. It is cleaved off the mature protein by a protease once the protein is folded and in the membrane. Until that time, the signal peptide serves as a molecular marker that indicates where the nascent protein should be heading and how it should be handled. Not surprisingly, there are many different signal peptides that serve multiple functions, and they are not only used for membrane proteins. So let's say we are in the ER lumen, and have some mRNA coding for a membrane protein that is destined for the plasma membrane. 
The first stretch of amino acids to emerge from the ribosome is the signal sequence, in this case a specific signal sequence indicating that this is to be a plasma membrane protein. Once the signal sequence emerges from the ribosome, it is recognized by a ribonucleoprotein complex (that is, a complex of RNA and protein) called the signal recognition particle. Once the SRP binds to the signal peptide, translation is halted, and the whole complex moves to the ER membrane, where it forms a complex with another large protein complex called the translocon. I can't go into the intricacies of these complexes and their functions in this answer alone, but the simple description is that the translocon contains an ATPase that can insert the membrane protein into the ER membrane as it is translated, with the correct orientation. The hydrolysis of ATP provides energy to move the emerging polypeptide chain into the hydrophobic membrane, where chaperones help it fold. This process is in part driven by the recognition of hydrophobic transmembrane regions of the proteins by the translocon. It can also move soluble, cytosolic proteins across the membrane through a similar mechanism. Now that the protein has been translocated, a peptidase will cleave the signal sequence off the protein. From here, sorting signals will take over. Generally, these are simple sequence motifs in the first transmembrane domain that act similarly to a signal sequence, but they are not cleaved. However, sorting signals can be stretches of peptide throughout the protein too, in some cases. These sequence motifs will be recognized by cell trafficking machinery. Without going into too much detail, the proteins will be gathered into vesicles, and transported to other organelles. Usually, the first stop for proteins is the golgi apparatus, which is typically where many post-translational modifications, such as glycosylation, take place.
I am a biochemist and not a cell biologist, so I am not the most qualified to go into the details of subcellular trafficking. Suffice it to say, once the protein is finished being processed in the golgi, it will be trafficked into other organelles, such as the plasma membrane using vesicle transport as before. From my understanding, the protein will be sorted into the proper vesicles based on its sorting signals, as well as other markers (in some cases, certain post-translational modifications on certain proteins can influence its trafficking). The vesicles recognize the proper destination membrane in part by the lipid composition of that membrane. For example, phosphoinositides have extensive influence on membrane trafficking, and many membranes can be differentiated by their phosphoinositide signature. Anyway, that is a very broad overview of the answer to your question. I'm sorry I can't comment too much on the intricacies of cellular trafficking, I don't quite have the expertise to go through that literature quickly enough to answer your question in a reasonable time frame. I hope this is helpful in pointing you in the right direction, and good luck!
{ "domain": "biology.stackexchange", "id": 8044, "tags": "molecular-biology, cell-biology, proteins, cell-membrane, membrane" }
Why is the Pauli exclusion principle not violated in the two neutron beam interference experiments?
Question: It is my understanding so far that in this kind of experiment, like the one measuring the 4π (i.e. 720° Dirac Belt trick) rotation characteristic of spin-1/2 fermions like neutrons, two neutron beams are polarized via an S-G apparatus to the same quantum spin number. The two separated polarized beams are initially in phase, meaning identical in every aspect. Continuing, one of the two beams is then brought out of phase from the other by forcing it into continuous Larmor precession while the other beam is not forced to precess. The two beams are then combined together in superposition and an interference signal is obtained. I understand that because in the one beam the neutrons are wobbling all the time (Larmor precession), most of the time the beams are never in phase and don't have all four quantum numbers identical, and therefore the Pauli exclusion principle is not violated. Therefore, most of the time a steady noise interference output signal is produced from the two combined neutron beams. However, as these experiments show, for every 4π of Larmor rotation period the two beams get momentarily in phase and a maximum in the signal output is generated due to constructive interference: My question here is, at the points where the maxima in the interference signal are observed as shown above, meaning the two beams are momentarily in phase, do these events not violate the Pauli exclusion principle? The best explanation I could find so far in the literature to resolve my confusion is that mathematically this means the wavefunctions of the two combined fermions must be antisymmetric (antiparallel spin), which leads to the probability amplitude of the interference wavefunction going to zero if the two fermionic particle beams are in the same phase. Thus, according to the above interpretation, IMO the signal output will be like this: But then how can the two beams be in phase and at the same time have destructive interference?
And most importantly, if the two combined neutron beams end up having antiparallel spin because of the Pauli exclusion principle, how then can these experiments measure the 720° rotation Dirac Belt trick characteristic of these fermions (i.e. neutrons)? Would that not totally mess up the experiment? I'm confused, please help. A step-by-step description of such an experiment to measure the 4π phase characteristic of the neutron would be most beneficial for a general audience to understand how this measurement is carried out and therefore why the Pauli exclusion principle is not violated in this experiment. Answer: Why the Pauli exclusion principle doesn't matter here Neutron interference experiments like this are done with no more than one neutron in the interferometer at any given time. Here's a quote from reference 1: All neutron experiments performed until now belong to the field of self-interference where at a given time only one neutron — if at all — is within the interferometer and the next one is not yet released from the... neutron source. The key point is that this interference effect is a single-particle phenomenon. The Pauli exclusion principle prevents two neutrons from occupying the same state. It does not prevent a single neutron from interfering (constructively or destructively) with itself, which is what's happening in these experiments. Like in any single-particle interference experiment, the interference pattern is only evident after accumulating detections of a large number of neutrons, but again: neutrons go through the interferometer one at a time, so the Pauli exclusion principle does not apply. More information about the experiment @rob's answer gives more detail about how the experiment is done.
To complement those details, here's a diagram from reference 2, showing an oblique view of a typical neutron interferometer (see @rob's answer for typical dimensions): As the arrows indicate, each individual neutron enters from the left. At point A, the neutron's wavefunction is diffracted into a two-peaked wavefunction. Those two branches of the wavefunction are diffracted again at points B and C, respectively, so that the wavefunction emerging from the middle slab has four peaks. Two of those peaks converge back together at point D, and their relative phases determine the relative intensities of the two peaks of the wavefunction that emerge downstream of point D. The relative intensities of those two peaks determine the relative probabilities of detecting the neutron at either $C_2$ or $C_3$. The neutron can only be detected by one of them, not by both, because it's just one neutron. But after repeating this with lots of neutrons, the cumulative numbers of detections registered by $C_2$ and $C_3$ tell us what the relative probabilities were. (There is also some probability that neither of those detectors will register the neutron, because the other two branches of the wavefunction, the ones that are shown bypassing the third slab in the figure, also have nonzero amplitudes.) A top view of the interferometer in a $4\pi$-rotation experiment is shown in this diagram from reference 2: The points A,B,C,D are the same as in the previous oblique view. In this case, one of the branches of that single-neutron wavefunction goes through a magnetic field that is tuned to produce a known amount of precession. That affects the relative phase of those two branches of the single-neutron wavefunction at point D. 
By repeating the whole experiment many times with different magnetic-field strengths, we can map out how the relative probabilities at $C_2$ and $C_3$ depend on the magnetic field, and therefore on the amount of precession experienced by one of the two branches of the neutron's wavefunction. Again: these experiments never have more than one neutron at any point. That one neutron is interfering with itself because of the way diffraction caused its wavefunction to split (point A) and then rejoin (point D). (Diffraction itself is a kind of single-particle interference phenomenon, too.) The Pauli exclusion principle prevents two neutrons from occupying the same state at the same time, but in these experiments, that never happens. These experiments are done using only one neutron at a time. The Pauli exclusion principle clearly cannot prevent a single neutron from being in the same state at the same time... because if it did, then neutrons could not exist at all. But how is this possible?? The fact that a single particle can take multiple paths through the interferometer and interfere with itself is something that does not have any analog in everyday experience. It cannot be understood by thinking of a particle as a tiny version of a macroscopic object. Single-particle interference phenomena are one of the hallmarks of quantum physics, like Feynman said in this chapter of The Feynman Lectures on Physics: We choose to examine a phenomenon [namely the single-particle interference phenomenon] which ... has in it the heart of quantum mechanics. In reality, it contains the only mystery. For more information about the single-particle quantum interference phenomenon, see the questions Double Slit experiment with just one photon or electron How do experiments prove that fermion wavefunctions really pick up a minus sign when rotated by $2\pi$? Operational definition of rotation of particle and their answers, and also see the references listed in this other answer.
References Page 21 in Rauch and Werner (2000), Neutron Interferometry: Lessons in Experimental Quantum Mechanics (Clarendon Press) Feng (2020), Neutron Optics: Interference Experiment with Neutrons (http://home.ustc.edu.cn/~feqi/neutron%20interference.pdf)
{ "domain": "physics.stackexchange", "id": 83656, "tags": "quantum-mechanics, particle-physics, experimental-physics, pauli-exclusion-principle, interferometry" }
What is the complexity of splitting a state into a superposition of $n$ computational basis states?
Question: $\newcommand{\bra}[1]{\left<#1\right|}\newcommand{\ket}[1]{\left|#1\right>}\newcommand{\bk}[2]{\left<#1\middle|#2\right>}\newcommand{\bke}[3]{\left<#1\middle|#2\middle|#3\right>}$ I'm looking for (unitary$^1$) transformations$^2$, to create a superposition of any $n$ computational basis states with equal coefficients. I'm further interested in the complexity of these implementations and/or a gate count. Building up on this question, on how to create superpositions of 3 states, I wonder how it scales to split up into any number of $n$ computational basis states. For simplicity let's always start with $\ket{00...0}$. Here are my thoughts: $n=5$: Split $\ket{000}\rightarrow \sqrt\frac25\ket{000}+ \sqrt\frac35\ket{001}$ by a local $Y$-gate , then implement a controlled (trigger when the rightmost bit is $0$) split like $\ket{000}\rightarrow \sqrt{\frac12}(\ket{000}+\ket{010})$ and finally a controlled (trigger when the leftmost bit is $1$) split like $\ket{001}\rightarrow \sqrt{\frac13}(\ket{001}+\ket{011}+\ket{111})$. For reference see here. $n=6$: same as for $n=5$ but after an local split into 2 equal halves on the first (rightmost/top) qubit, 2 controlled splits into 3 states are involved. $n=7$: again split in two parts with the weigths $\sqrt\frac37 :\sqrt\frac47$ and further implement controlled splits... $n=8$: trivial $n=9$: this takes 4 qubits. Split into 3 on the first 2 qubits and then have controlled splits into another 3 on the last 2 qubits. I haven't counted CNOTs and local operations (necessary for answering the scaling question), but the pattern looks like something, how to decompose numbers. Is there a generic way to come up with a circuit to split into $n$ computational basis states? If you can improve my suggested implementations, let me know... Answer: I'm not quite sure why using Grover is cheating, or what the motivation is for discounting measurement. However, there is a way of thinking about an algorithm without either. 
It also runs as $O(m^2)$ where $m=\lceil\log_2(n)\rceil$. Consider the value $n$. We write $n-1$ in binary as $x$, comprising $m$ bits. We know that the first (most significant) bit of $x$ must be 1. We're going to describe an iterative protocol. In the first step, we start with $|0\rangle^{\otimes m}$, and convert the first qubit to $$ \sqrt{\frac{2^{m-1}}{n}}|0\rangle+\sqrt{\frac{n-2^{m-1}}{n}}|1\rangle. $$ Controlled off the first qubit being 0, we apply Hadamard on all other qubits. This has got all the terms $|0y\rangle$ for $y\in\{0,1\}^{m-1}$ correct. Now we just have to worry about the $|1y\rangle$ terms. If the second bit of $x$ is 1, then we essentially just repeat the first step but for the $m-1$ least significant bits, and controlling every operation off the first qubit being in 1. If the second bit of $x$ is 0, then we move onto the third bit, but now controlling everything off the first qubit being 1 and the second qubit being 0 (except that we don't actually need that control because it's always 0 if the first qubit is 1). We can now just repeat this for the entire sequence. For example, if $n=14$ ($x=1101$), you'd do something like \begin{align*} |0\rangle^{\otimes 4}&\rightarrow \sqrt{\frac{6}{14}}|1000\rangle+\frac{1}{\sqrt{14}}(|0000\rangle+|0001\rangle+|0010\rangle+|0011\rangle+|0100\rangle+|0101\rangle+|0110\rangle+|0111\rangle) \\ &\rightarrow \sqrt\frac{6}{14}|1\rangle(\sqrt\frac{2}{6}|100\rangle+\frac{1}{\sqrt{6}}(|000\rangle+|001\rangle+|010\rangle+|011\rangle))+\frac{1}{\sqrt{14}}(\ldots) \\ &\rightarrow\sqrt\frac{6}{14}|1\rangle\sqrt\frac{2}{6}|10\rangle\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)+\ldots \end{align*} To see the complexity, we've got to loop over the $m$ bits of $x$, and each step, at worst, we have to perform a multi-controlled not, which can be decomposed in terms of $O(m)$ controlled-nots. In this circuit diagram, I defined $U$ such that $U(x)|0\rangle=x|0\rangle+\sqrt{1-x^2}|1\rangle$. 
The slices correspond to the stated outputs in the example, and the dashed box hopefully guides the eye about the recursive structure. Note that for the very last gate, $U(1/\sqrt{2})=H$. I should also note that there are some case specific optimisations that can be made (all the gates on the last qubit can be combined as a single, uncontrolled, Hadamard) but that only obfuscates the algorithmic structure.
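As a classical sanity check of the recursion described above (amplitudes only, not a quantum circuit; function names are hypothetical), the sketch below computes the amplitudes the protocol assigns: at each step either all wanted states lie in the $|0\rangle$ half of the current block (the corresponding bit of $x$ is 0, no rotation needed), or a weighted split is performed followed by recursion on both halves, terminating in uniform Hadamard blocks.

```python
import math

def protocol_amplitudes(n):
    """Amplitude vector over 2^m basis states produced by the recursive splitting."""
    dim = 1
    while dim < n:          # smallest power of two >= n (dim = 2**m)
        dim *= 2
    amps = [0.0] * dim

    def split(offset, block, size, weight):
        if size == block:   # whole block wanted: Hadamards give uniform weights
            for i in range(block):
                amps[offset + i] = weight / math.sqrt(block)
            return
        half = block // 2
        if size <= half:    # all wanted states lie in the |0> half: fix this qubit
            split(offset, half, size, weight)
        else:               # rotate, then recurse on both halves
            split(offset, half, half, weight * math.sqrt(half / size))
            split(offset + half, half, size - half,
                  weight * math.sqrt((size - half) / size))

    split(0, dim, n, 1.0)
    return amps
```

For n = 14 this reproduces the worked example: fourteen equal amplitudes $1/\sqrt{14}$ on the first fourteen basis states and zero elsewhere.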
{ "domain": "quantumcomputing.stackexchange", "id": 1484, "tags": "quantum-state, gate-synthesis" }
Why can the level of noise be magnified twice through each numerical differentiation?
Question: I was reading a paper and saw this mentioned there, but I cannot figure out how it can be proven analytically. Answer: Simply put: take independent identically distributed Gaussian noise (one observation in blue, left) and its gradient (in green, right). The amplitude is roughly multiplied by the norm of the gradient filter (here $h=[1\,-1]$), which is $\sqrt{2}$, as you can see from the picture. Their average energy $E$ thus differs by a factor of $\|h\|^2=\sqrt{2}^2 = 2$. As said by Olli, a similar question was answered in 2nd order edge detectors more susceptible to noise?. The part "can be magnified twice" is important, since this does not happen for all noises or all difference filters. More generally, a random signal $x(t)$ passing through a time-invariant filter with frequency response $H(f)$ produces a signal $y(t)$ with PSD: $$ S_y(f) = S_x(f) |H(f)|^2\,. $$ Since the Fourier domain is energy-preserving, you get that $$\|H\|^2=\|h\|^2=2\,.$$ This is a reason why people have developed noise-robust differentiation, often combined with smoothing (e.g. Savitzky–Golay filters). See examples at Differentiation. Note that in the continuous case, you should take care of non-integrable processes; see for instance Variance of white Gaussian noise.
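The factor of 2 is easy to check numerically (standard library only, fixed seed for reproducibility): the first difference of i.i.d. Gaussian noise, i.e. filtering with $h=[1,-1]$, has variance $\operatorname{Var}(x_i-x_{i-1})=2\sigma^2$, matching $\|h\|^2=2$.

```python
import random
import statistics

random.seed(0)
sigma = 1.0
# White (i.i.d.) Gaussian noise sample
x = [random.gauss(0.0, sigma) for _ in range(200_000)]
# First difference = convolution with h = [1, -1]
dx = [b - a for a, b in zip(x, x[1:])]

ratio = statistics.pvariance(dx) / statistics.pvariance(x)
print(f"variance ratio ~ {ratio:.3f}")  # close to ||h||^2 = 2
```

Repeating this with a longer difference filter changes the ratio to that filter's $\|h\|^2$, in line with the PSD formula above.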
{ "domain": "dsp.stackexchange", "id": 4437, "tags": "noise, snr, derivative" }
How to save OctoMap from MoveIt!
Question: I am running Ubuntu 14.04 and ROS Indigo. I am currently using MoveIt! for generating an OctoMap from the robot sensors. How can I save that OctoMap? Or, alternatively, how can I access it in order to obtain occupancy information from specific points? I just know the type of the topic (moveit_msgs/PlanningScene), but I don't know how to work on the map or access it. I know that there is a similar question here in ROSanswers, I checked it, but that does not answer my question since the map is not generated via MoveIt!. So if you know the answer, I would extremely appreciate. Originally posted by HenrySleek on ROS Answers with karma: 53 on 2017-03-06 Post score: 0 Answer: There are multiple ways to do that and it depends on your setup. I assume for you it's probably most easy to call the get_planning_scene ROS service offered by move_group (set the OCTOMAP bit in the components bit field to ask for the octomap). Then, if you want to convert the octomap message to an object of liboctomap, have a look at the conversions.h header in the octomap_msgs package. Originally posted by v4hn with karma: 2950 on 2017-03-08 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by HenrySleek on 2017-03-08: Thank you, I already found a (more complicated) way for getting the octomap_msgs::Octomap from it. Now my problem is with the conversion: I used the following: octomap::AbstractOcTree* my_map = octomap_msgs::binaryMsgToMap(octomap_content), But I have problems with the AbstractOcTree header. Comment by HenrySleek on 2017-03-08: When I build it, I need to include the header < octomap/AbstractOcTree.h >, and along with that all the other headers in the same folder for letting it work. Nevertheless, when I do so, I have the problem that "OcTreeBaseSE.hxx:83:31: error: ‘octomap::point3d’ has no member named ‘norm2’". Comment by HenrySleek on 2017-03-08: Indeed, for a point3d type only .norm is defined, I couldn't find any .norm2. 
So I guess there is a bug in what I have. I didn't directly install it, since it came along with the robot, but in any case if I run 'sudo apt-get install ros-indigo-octomap', I have the latest version. How do I solve it? Comment by hc on 2017-07-27: I would like to manipulate the octomap further rather than saving it. Is there a way to get the octomap at a faster rate than the ros service? Through the planning scene monitor?
{ "domain": "robotics.stackexchange", "id": 27209, "tags": "moveit, octomap" }
In coordinate-free relativity, how do we define a vector?
Question: Relativity can be developed without coordinates: Laurent 1994 (SR), Winitzski 2007 (GR). I would normally define a vector by its transformation properties: it's something whose components change according to a Lorentz transformation when we do a boost from one frame of reference to another. But in a coordinate-free approach, we don't talk about components, and vectors are thought of as immutable. For example, Laurent describes an observer using a timelike unit vector $U$, and then for any other vector $v$, he defines $t$ and $r$ uniquely by $v=tU+r$, where $r$ is orthogonal to $U$. The $(t,r)$ pair is what we would normally think of as the coordinate representation of $v$. In these approaches, how do you define a vector, and how do you differentiate it from things like scalars, pseudovectors, rank-2 tensors, or random objects taken from something that has the structure of a vector space but that in coordinate-dependent descriptions would clearly not transform according to the Lorentz transformation? It seems vacuous to say that a vector is something that lives in the tangent space, since what we mean by that is that it lives in a vector space isomorphic to the tangent space, and any vector space of the same dimension is isomorphic to it. [EDIT] I'm not asking for a definition of a tangent vector. I'm asking what criterion you can use to decide whether a certain object can be described as a tangent vector. For example, how do we know in this coordinate-free context that the four-momentum can be described as a vector, but the magnetic field can't? My normal answer would have been that the magnetic field doesn't transform like a vector, it transforms like a piece of a tensor. But if we can't appeal to that definition, how do we know that the magnetic field doesn't live in the tangent vector space? 
Bertel Laurent, Introduction to spacetime: a first course on relativity Sergei Winitzki, Topics in general relativity, https://sites.google.com/site/winitzki/index/topics-in-general-relativity Answer: Honestly, this coordinate-free GR stuff (Winitzki's pdf in particular) looks like GR as would be taught by a mathematician--very similar to do Carmo's text on Riemannian geometry. In classic (pseudo-)Riemannian geometry, vectors are defined as derivatives of affine parameterized curves, covectors as either maps on vectors to scalars or as gradients of scalar fields. Something like the Riemann tensor is defined as a map on two/three/four vectors spitting out two vectors/one vector/a scalar. Differential geometers love defining everything as a mapping; I consider it almost a fetish, honestly. But it is handy: defining higher-ranked tensors as mappings of vectors means that the tensor inherits the transformation laws of each argument, and as such, once you establish the transformation law for a vector, higher-ranked tensors' transformation laws automatically follow. Edit: I see the question is more how one can figure out a given physical quantity is a vector or higher-ranked tensor. I think the answer there is to look at the quantity's behavior under a change of coordinate chart. But Muphrid, we never chose a coordinate chart in the first place; isn't that how coordinate-free GR works? Yes, but the point of coordinate-free GR is just to delay the choice of the chart as long as possible. There is still a chart, and most results depend on there being a chart, just not on what exactly that chart is. How does looking at a change of chart (when we never chose a chart in the first place) help us? The transition map from one chart to another is a diffeomorphism, and so its differential can be used to push vectors forward or pull covectors back. Hence, the transformation laws that usually characterize vectors and covectors are still there. 
They look like this: let $p \in M$ be a point in our general relativistic manifold. Let $\phi_1: M \to \mathbb R^4$ be a chart, and let $\phi_2 : M \to \mathbb R^4$ be another chart. Then there is a transition map $f : \mathbb R^4 \to \mathbb R^4$ such that $f = \phi_2 \circ \phi_1^{-1}$ that changes between the coordinate charts. Thus, if there is a vector $v \in T_p M$, there is a corresponding vector $v_1 = d\phi_1(v)_p \in \mathbb R^4$ that is the mapping of the original vector into the $\phi_1$ coordinate chart. We can then move $v_1 \to v_2$ by the (edit: differential of the) transition map. But Muphrid, aren't we meant to be working with the actual vector $v$ in the tangent space of $M$ at $p$, not its expression in a chart, $d\phi_1(v)$? You might think so, but (as was drilled to me repeatedly in a differential geometry course) we don't actually know how to do any calculus in anything other than $\mathbb R^n$. So I think there's some sleight of hand going on where "really" what we do all the time is use some chart to move into $\mathbb R^4$ and do the calculus that we need to do. What this means is that, in my opinion, coordinate-free is a bit of a misnomer. There are still coordinate charts all over the place. We just leave them undetermined as long as possible. All the transformation laws that characterize vectors and covectors and other ranks of tensors are still there and still let you determine whether an object is one or the other, because you're always in some chart, and you can always switch between charts.
{ "domain": "physics.stackexchange", "id": 8226, "tags": "mathematical-physics, differential-geometry, mathematics, relativity, vectors" }
How does high-frequency electrolysis of water work?
Question: I have read that by combining the DC current with a high-frequency AC current, the electrolysis of water speeds up. Is this true? In that case, how is less energy wasted as heat? Or does it simply catalyze the process? Answer: First of all, have a look at the wikipedia page on electrolysis of water. I also like this review: Zoulias et al.: A Review on Water Electrolysis, TCJST, 4 (2) (2004) 41-71 Specifically they list a number of actually existing installations (context: renewable energy) and their actually achieved efficiency. Speeding up does not necessarily have anything to do with higher efficiency. In electrolysis it is often the other way round: if you want to squeeze out the maximum free energy, you need to do the reaction infinitely slowly (despite thermodynamics having a dynamic name, it looks at infinitely slow processes). Thus, speeding up usually means that you find a way to put more power through your system. The big issue is to find a way of doing this without losing (too much) efficiency. Pulsed/modulated DC: looking through a few papers I liked this one: Shimizu et al.: A novel method of hydrogen generation by water electrolysis using an ultra-short-pulse power supply, Journal of Applied Electrochemistry (2006) 36:419–423, DOI 10.1007/s10800-005-9090- They aim at avoiding the diffusion-controlled situation by having the pulses short enough so that no depletion zone occurs. Look at these diagrams: So they report one setting where the pulsed electrolysis is actually more efficient than DC in their cell. There's more research going on on this; however, the papers I found reported increased efficiency compared to pure DC electrolysis, but the absolute efficiencies are around 10%. However, compare their numbers with the 80% efficiency cited by the review for an industrial alkaline electrolysis. Note that one big difference is the voltage that is applied: for the DC it is around 1.85 - 2.05 V, so much less overvoltage. 
Note also that when they say that higher voltage speeds up ion transport, then this overvoltage is converted to heat (ions face friction in the medium when they travel) and thus basically lost. Another line that looks real is that if you go to higher temperatures, a (small) part of the energy can be supplied by heat. As heat is cheap, this may help. One point one has to be aware of, though, is that the efficiency calculations may be done with respect to the electric energy only (neglecting the heat input) and thus look artificially nice (like efficiencies of condensing boilers calculated against the lower heating value). I found a bunch of nonsense claims on the internet about the resonance frequency of water helping to split bonds. The first thing to realize here is that there is no single resonance frequency of water. With suitable energy, you can excite rotational, vibrational and electronic states (I left out translation - their transition energies are minute). At room temperature you can say as a rule of thumb that most molecules will be in some excited rotational state, but in the vibrational and electronic ground states. Excitation energies for rotation are in the far infrared or microwave energy/frequency region. Widely used e.g. in the microwave oven at 2.45 GHz ($\approx$ 12 cm). Actually, the whole region is full of bands where water absorbs. Note that microwave heating of water does not cause electrolysis. Vibrational transitions are around 2.9 μm = 105 THz = 3500 cm⁻¹ and 6 μm = 50 THz = 1635 cm⁻¹, with lots of combinations and overtones throughout the near infrared region. Quite exceptionally, the visible region is basically free of water absorption. Electronic transitions (breaking of bonds) need energies in the UV, and here we meet bands that lead to photodissociation, e.g. at 166 nm (taken from Wikipedia). That corresponds to 1.8 PHz = $1.8 \cdot 10^{15}$ Hz. Compare this to the kHz and MHz where your link claims dissociation. 
This doesn't mean that the pulsed DC cannot help, nor that impedance spectroscopy won't give important information. But resonance frequencies in the kHz range are electrical LC-circuit resonances depending on cell and electrode geometries, electrical double layers, etc. - based neither on vibrations nor on breaking of the bonds of the water molecule. To give the "method" you ask about some real-world numbers: at the very end of the Wiki page the energy efficiency for industrial water electrolysis is cited as usually between 50 and 80 %. The paper then proposes to burn the gas in an internal combustion engine. As such a stationary process could be adjusted so that the engine is at its maximum efficiency, we may assume 1/3 or 35% efficiency here. We then need a generator to convert the mechanical energy into electric energy. Fortunately, that step is rather efficient. Say, 95 %. A fuel cell would be more efficient than the combustion-generator combination: ca. 40 - 60 % according to Wikipedia. Unfortunately, battery charging is also not 100% efficient. Let's assume 80–90% (taken from Wikipedia on Li-ion batteries). For batteries that are charged with higher current (or current density), the efficiency is lower. An example would be lead-acid batteries as used in cars. Wiki quotes efficiencies between 50 and 80 %. Taking these numbers together, I conclude that after going once through the cycle of the proposed "perpetuum mobile", 8 - 24 % of the energy is retained in a "useful state" while 76 - 92 % became heat. With a fuel cell, we may be able to "boost" the energy efficiency to 43%. Useful general knowledge (in addition to the law of energy conservation): The US patent system is different from e.g. the German patent system in that in Germany the patent application has to have commercial/industrial usability. This includes a technical argument why it works (according to the physical laws). 
A perpetuum mobile would be rejected on these grounds (of course the inventor could prove his case with a prototype). US patents do not have this technical check. Generators (mechanical -> electric conversion) are sources of current, while batteries (galvanic cells) are sources of voltage.
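The round-trip bookkeeping above is just a product of per-stage efficiencies. A short script (the stage values are the ranges quoted in the answer) reproduces the 8 - 24 % figure:

```python
# Rough round-trip efficiency of the proposed "perpetuum mobile", using the
# stage efficiencies quoted above (as fractions; low and high estimates).
# Stages: electrolysis, combustion engine, generator, battery charging.
stages_low = [0.50, 0.35, 0.95, 0.50]
stages_high = [0.80, 0.35, 0.95, 0.90]

def chain_efficiency(stages):
    """Multiply the per-stage efficiencies of a serial energy-conversion chain."""
    eta = 1.0
    for s in stages:
        eta *= s
    return eta

low = chain_efficiency(stages_low)
high = chain_efficiency(stages_high)
print(f"retained after one cycle: {low:.0%} - {high:.0%}")  # roughly 8% - 24%
```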
{ "domain": "chemistry.stackexchange", "id": 8985, "tags": "electrochemistry, water, electrolysis" }
Designing primers with restriction sites
Question: I want to design a forward and reverse primer that include overhangs with restriction sites, with the DNA used for restriction enzyme cloning. If I want to send my DNA to a synthesis company, should I include the attached overhang with the restriction sites in the DNA? Does it matter if the primers have the overhang of the DNA within their sequence? From the sequence right at the end, what bp sequence would you recommend I use for my primers if I want approx. 20-25 bp for the primer? See below for the cDNA sequence: Nde1 cut site: 5' ... CA|TATG ... 3' Xho1 cut site: 3'...GAGCT|C...5' cDNA with overhangs that have restriction sites, Nde1 (5'->3') and Xho1 (3'->5'):
5' catatg -
ATGGGTTCAAACACTTCCAAAGTGGGTGCTGGTGCAGAAAAACAACAAGTCTATACTCCG
CTAACACAGATCGATTTTTCACAGTCTTTGGTTTCTCAATTGGATTCATCGAAGGAATCA
GACTATGTCACCAAGCAAAATGCAGAAAAGTTCATTGAGAAGAAGGTTTCACAAAGGCTA
TCTAACCTAGAAGTTGAAACGTTAAAGAAGTTTGAAGATACTTTGAACAATTCACTATTA
TCAGACGACGACAAGGATGCCGTTGATGGAATATCATCAAGTTCATTGAATAATCAAATC
GAGTCGTTGAACAAGAAACTAACATTATTTGATCAATTAGAGTTACAAAAGTTGGAGAAA
TATGGGGGTGCCAAAGGTAAATCTGATAAAAAAACCGACAACGGCAGCATTTCTATAAAG
GCAAAATTGACTGAGTGTCTTTTGGCCAATAAGGGCAAGCCATTGAATTGTTACGAAGAG
ATGGAAGAATTCAAGAAGCTCGTTATGGGTTGA
- gagctc 3' Answer: Yes. If you intend to subclone the fragment via classical restriction enzyme cloning you need the overhang. Different restriction enzymes require different lengths of overhang to allow them to sit fully on the strand and then cleave the sequence. NEB has a great page detailing how many bases are needed for the enzymes they sell: https://www.neb.com/tools-and-resources/usage-guidelines/cleavage-close-to-the-end-of-dna-fragments No, it doesn't matter if the overhang appears elsewhere in the DNA, just so long as your restriction sites don't (obviously). 
Here are some considerations I use when designing primers for PCR (though not wholly applicable if you're just having it synthesised): Use the 3+ base pairs that sit the given number of nucleotides beyond your restriction site in the sequence you're PCRing from as your overhang sequence, to allow a few more bases to bind. I.e. with the following primer:

Overhang - Restriction Site - Priming sequence
AAA-GGATCC-ATGACATGCAGTAGC PRIMER
|||        |||||||||||||||
TTT--------TACTGTACGTCATCG TARGET SEQUENCE

Alternatively, I use those 3 or so base pairs to add or remove GC content and manipulate the Tm if the annealing temperatures of the 2 primers aren't ideally matched.
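For the Tm-matching point at the end, a quick back-of-the-envelope check can help. The sketch below uses the simple Wallace rule ($T_m = 2(A{+}T) + 4(G{+}C)$, a rough estimate valid only for short primers) on the first 25 bases of the cDNA above; the choice of 25 bases is illustrative, not a recommendation:

```python
def gc_content(seq):
    """Fraction of G/C bases in a primer's annealing portion."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Rough melting temperature (deg C) by the Wallace rule: 2*(A+T) + 4*(G+C).
    Only a crude estimate, reasonable for primers under ~20-25 nt."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

# First 25 bases of the cDNA quoted in the question (illustrative choice):
fwd = "ATGGGTTCAAACACTTCCAAAGTGG"
print(gc_content(fwd), wallace_tm(fwd))
```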
{ "domain": "biology.stackexchange", "id": 6496, "tags": "molecular-biology, pcr, primer" }
Drift velocity of electrons with changing area
Question: What would happen to the drift velocity if a cylindrical resistor's diameter increases, with a given voltage between its terminals? According to the expressions: \begin{align} R&=\rho\frac{L}{A}\\ I&=neAv_d\\ \Delta V &= IR\\ v_d&=\frac{\Delta V}{\rho L n e}\\ \end{align} The resistivity does not change, and neither does the length of the resistor nor the term $ne$, but the resistance does change, as well as the current, so the area is eliminated from the expression. I wonder if the drift velocity would be the same after increasing the diameter, or if my derivation is wrong. Answer: The drift velocity is the average velocity due to an applied electric field. In a conductor, electrons scatter around at the Fermi velocity but have a net zero average (i.e., equal scattering in all directions). When the electric field is applied, the electrons are given a small velocity in one direction. Thus, we can say, $$ v_\textrm{drift}=\eta E $$ where $\eta$ is some constant. Since the electric field comes from the gradient of a potential that changes over the length of the bar, $L$, this approximates to $$ v_\textrm{drift}\simeq\eta\frac{V}{L} $$ which is similar to what you have. Since there is no factor of $A$ in the latter equation (nor hidden in my $\eta$ here), increasing the area (by increasing the diameter) should not change the drift velocity.
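The cancellation of $A$ can be checked numerically by computing $v_d$ "the long way" through $R$ and $I$. The copper values below are assumed, textbook-style numbers, not from the question:

```python
def drift_via_circuit(delta_v, rho, length, area, n, e=1.602e-19):
    """Compute v_d the long way round: R = rho*L/A, I = dV/R, v_d = I/(n*e*A).
    The two factors of A cancel, as in the derivation above."""
    resistance = rho * length / area   # R = rho * L / A
    current = delta_v / resistance     # I = dV / R
    return current / (n * e * area)    # v_d = I / (n * e * A)

# Assumed rough values for copper:
RHO_CU = 1.7e-8   # resistivity, ohm*m
N_CU = 8.5e28     # free-electron density, m^-3

v_thin = drift_via_circuit(1.0, RHO_CU, 1.0, 1e-6, N_CU)
v_thick = drift_via_circuit(1.0, RHO_CU, 1.0, 4e-6, N_CU)  # 2x the diameter
print(v_thin, v_thick)  # identical: a few mm/s, independent of A
```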
{ "domain": "physics.stackexchange", "id": 16703, "tags": "electric-circuits, electrons, velocity" }
If the wire of the secondary circuit is thicker than that of the primary in a transformer, what type of transformer is this and why?
Question: There is this question in my physics book, and two teachers (a private teacher of a friend of mine and the school teacher) say that it's a step-down transformer, while two other teachers say that it's neither of them, since a transformer's type is only determined by the number of turns. I don't really know which one is correct and why, so if someone could explain I'd appreciate it. Answer: The thickness of the wire determines the maximum current the wire can carry without overheating. Thicker wire means a greater current. With a transformer the power coming into the primary, $P_p = V_pI_p$, is the same as the power coming out of the secondary, $P_s = V_sI_s$ (less a few resistive losses), and this means $VI$ is constant. For a step-down transformer $V_p \gt V_s$ so $I_s \gt I_p$ - the current in the secondary is higher than the current in the primary, so the secondary needs to be wound with thicker wire. For a step-up transformer $V_s \gt V_p$ so $I_p \gt I_s$ - the current in the primary is higher than the current in the secondary, so the primary needs to be wound with thicker wire.
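The current relationship follows directly from power conservation; a tiny sketch with hypothetical numbers (not from the question):

```python
def secondary_current(v_primary, i_primary, v_secondary):
    """Ideal transformer: P_p = P_s, i.e. Vp*Ip = Vs*Is (resistive losses neglected)."""
    return v_primary * i_primary / v_secondary

# Hypothetical step-down example: 240 V primary at 1 A, stepped down to 12 V.
i_s = secondary_current(240.0, 1.0, 12.0)
print(i_s)  # 20.0 A in the secondary, hence the thicker secondary winding
```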
{ "domain": "physics.stackexchange", "id": 87073, "tags": "electric-circuits, electromagnetic-induction" }
Does anybody know what would happen if I changed the input shape of PyTorch models?
Question: In this https://pytorch.org/vision/stable/models.html tutorial it clearly states: All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. Does that mean, for example, that if I want my model to have input size 128x128, or if I calculate a mean and std unique to my dataset, it is going to perform worse or won't work at all? I know that with TensorFlow, if you are loading pretrained models, there is a specific argument input_shape which you can set according to your needs, just like here:

tf.keras.applications.ResNet101(
    include_top=True,
    weights='imagenet',
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=1000,
    **kwargs)

I know that I can pass any shape to those (PyTorch) pretrained models and it works. What I want to understand is: can I change the input shape of those models so that I don't decrease my model's training performance? Answer: Each machine learning model should be trained with a constant input image shape; the bigger the shape, the more information the model can extract, but it also needs a heavier model. A model's parameters will adapt to the datasets it learns on, which means it will perform well with the input shape that it learned. Therefore, to answer your question "can I change the input shape of those models so that I don't decrease my model's training performance?", the answer is no, it will decrease the performance. "I know that I can pass any shape to those (PyTorch) pretrained models and it works." $\Rightarrow$ This happens because the PyTorch team replaced the fixed pooling layers with adaptive average pooling (AdaptiveAvgPool2d), so you can pass any shape of image into the model without any errors.
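The reason any input size "works" is worth seeing concretely: adaptive average pooling averages variable-width bins down to a fixed output length, so the layer after it always sees the same shape. A 1-D pure-Python sketch of the idea (PyTorch's AdaptiveAvgPool2d does the 2-D analogue in C++; the binning here mimics, but is not guaranteed to match, PyTorch's exact scheme):

```python
import math

def adaptive_avg_pool_1d(values, out_size):
    """Average variable-width bins so the output always has out_size entries,
    regardless of len(values): the idea behind adaptive average pooling."""
    n = len(values)
    out = []
    for i in range(out_size):
        start = (i * n) // out_size
        end = math.ceil((i + 1) * n / out_size)
        window = values[start:end]
        out.append(sum(window) / len(window))
    return out

print(adaptive_avg_pool_1d([1, 2, 3, 4], 2))        # [1.5, 3.5]
print(adaptive_avg_pool_1d([1, 2, 3, 4, 5, 6], 2))  # [2.0, 5.0] -- same output length
```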
{ "domain": "ai.stackexchange", "id": 2730, "tags": "deep-learning, pytorch" }
Limited tapes-version TM for pair sum
Question: In the problem of pair-sum we are given a multiset $A$ and a number $\alpha$. We are asked to find whether there is a pair ($2$ numbers) of $A$ s.t. their sum is $\alpha$. Here all numbers are small/constant, $O(1)$: the sum of $2$ small numbers requires $O(1)$ actions, and their size is $O(1)$ bits for representation. I'd like to analyse an efficient algorithm for this. The algorithm sorts $A$ and then iterates over the endpoints. If their sum is less than $\alpha$, we will check the second smallest element and the largest. If their sum is larger, we will check the smallest and the second largest. So on and on. This algorithm takes $O(n\log n)$ due to sorting. However, trying to write it as a single-work-tape TM, I'm having trouble. Assume we have $2$ tapes: one read-only tape for the input, and another read/write tape for the working process. Sorting the array and writing it on the TM takes $O(n\log n)$ by merge sort, using a single tape. However, what about the comparison process itself? If we had $2$ work tapes, or a single tape with $2$ access heads, we could have done it trivially in $O(n)$. But having a single tape seems to be problematic, as I might run back and forth many times, and each run back and forth might take $O(n)$. My question follows: is there a way to implement this algorithm, or any other algorithm for pair-sum, such that it will need $O(n\cdot \log n)$ runtime, on a TM with $2$ tapes: the first read-only and the second read-write? Answer: Summary: There is no need to sort the given numbers, since whether there are two numbers in $A$ such that their sum is $\alpha$ depends only on the set of numbers in $A$. Since the choices for the set of numbers in $A$ are $O(1)$, there is an $O(n)$-time algorithm/TM with one read-only tape. 
Assume that multiset $A$ and a number $\alpha$ are given as $c_{a_1}\square c_{a_2}\square\cdots\square c_{a_m}\square' c_{\alpha}$ as input on the tape, where $c_{a_i}$ stands for the cells that represent $a_i$, the $i$-th number in $A$, as a binary number. $c_{\alpha}$ stands for the cells that represent $\alpha$ as a binary number. $\square$ and $\square'$ are two field separators (neither of them appears in $c_*$). Since "their size is $O(1)$ bits for representation", there is a constant $c\in \mathbb N$ such that each number in $A$ uses at most $c$ cells, i.e., $a_i\in [2^c] = \{0,...,2^c-1\}$. Let us specify Turing machine (TM) $M$ as follows. Given the input as described above, TM $M$ will,
    for each number $x\in[2^c]$:
        check whether $x$ is in $A$. If yes, for each number $y\in \{x+1, x+2, \cdots, 2^c-1\}$:
            check whether $y$ is in $A$. If yes, check whether $x+y=\alpha$. If still yes, halt and accept.
    for each number $x\in[2^c]$:
        check whether $x$ appears in $A$ at least twice. If yes, check whether $2x=\alpha$. If still yes, halt and accept.
    Halt and reject.
Since $c$ is a constant, we can hardcode all "for" loops, $x$, $y$, the result of each check, $x+y$, $2x$, etc. using states and state transitions of $M$. There is no need to alter any tape cell. Each "check" above involves moving the head of $M$ from the start of the input to the end of the input, and then back to the start of the input, which takes $O(n)$ time. The total running time of $M$ is no more than $2^c\cdot 2^c\cdot 2\cdot O(n) + 2\cdot 2^c\cdot O(n)$, which is still $O(n)$. We can improve the algorithm/$M$ so that it could run faster with fewer states. However, that is another task.
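In the ordinary RAM model (as opposed to the TM construction above), the sort-and-two-pointers algorithm from the question can be sketched as:

```python
def has_pair_sum(a, alpha):
    """Sort, then walk two indices inward: O(n log n) overall.
    This is the algorithm described in the question, not the TM above."""
    a = sorted(a)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == alpha:
            return True
        if s < alpha:
            lo += 1   # need a larger sum: advance the small end
        else:
            hi -= 1   # need a smaller sum: retreat the large end
    return False

print(has_pair_sum([3, 1, 4, 1, 5], 9))   # True (4 + 5)
print(has_pair_sum([3, 1, 4, 1, 5], 10))  # False
```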
{ "domain": "cs.stackexchange", "id": 20110, "tags": "algorithms, time-complexity, turing-machines, linear-complexity, tape-complexity" }
Telegram - GetUpdates Process
Question: My code fetches telegrams from a server periodically in the background. The server API is documented here. What do you think of the way I do multithreading by handling my _updatesList?

public class AutoUpdate
{
    public bool IsStarted { get; private set; }

    private readonly Timer _timer;
    private readonly List<Update> _updatesList;
    private readonly object _lock = new object();
    private long _lastUpdateId;

    public AutoUpdate()
    {
        _lastUpdateId = 0;
        _updatesList = new List<Update>();
        TimerCallback getUpdates = GetUpdates;
        _timer = new Timer(getUpdates, null, Timeout.Infinite, Timeout.Infinite);
        IsStarted = false;
    }

    public bool Start()
    {
        if (Monitor.TryEnter(_lock, 500))
        {
            try
            {
                _timer.Change(0, 100);
                IsStarted = true;
            }
            finally
            {
                Monitor.Exit(_lock);
            }
            return true;
        }
        return false;
    }

    public bool Stop()
    {
        if (Monitor.TryEnter(_lock, 500))
        {
            try
            {
                _timer.Change(Timeout.Infinite, Timeout.Infinite);
                IsStarted = false;
            }
            finally
            {
                Monitor.Exit(_lock);
            }
            return true;
        }
        return false;
    }

    public bool Terminate()
    {
        var stop = Stop();
        if (!stop)
            return false;
        _timer.Dispose();
        return true;
    }

    public ICollection<Update> GetLastUpdates()
    {
        lock (_lock)
        {
            // Make a copy for thread safety, then pass it to ReadOnlyCollection
            return new ReadOnlyCollection<Update>(_updatesList.ToArray());
        }
    }

    public void RemoveFromUpdates(Update update)
    {
        lock (_lock)
        {
            _updatesList.Remove(update);
        }
    }

    private void GetUpdates(object state)
    {
        ReturnedResult<List<Update>> responseObject = null;
        if (Monitor.TryEnter(_lock))
        {
            try
            {
                HttpClient hc = new HttpClient();
                HttpContent requestContent = new ObjectContent<UpdateRequest>(new UpdateRequest(), new JsonMediaTypeFormatter());
                var task = hc.PostAsync("https://api.telegram.org/bot<Private Token>/getUpdates", requestContent);
                if (task != null)
                {
                    var result = task.Result;
                    var jsonString = result.Content.ReadAsStringAsync().Result;
                    responseObject = JsonConvert.DeserializeObject<ReturnedResult<List<Update>>>(jsonString,
                        new JsonSerializerSettings
                        {
                            ContractResolver = new CustomPropertyNamesContractResolver { Case = IdentifierCase.UnderscoreSeparator }
                        });
                }
                // Add to the list...
                if (responseObject != null && responseObject.Ok)
                {
                    var updateList = responseObject.Result.Where(w => w.UpdateId > _lastUpdateId);
                    // ReSharper detected: possible multiple enumeration of the same sequence
                    var list = updateList as Update[] ?? updateList.ToArray();
                    var maxId = list.Max(m => (long?) m.UpdateId) ?? 0;
                    _lastUpdateId = Math.Max(maxId, _lastUpdateId);
                    _updatesList.AddRange(list);
                }
            }
            finally
            {
                Monitor.Exit(_lock);
            }
        }
    }
}

Answer:

Disposable

Your class manages a Timer, which you dispose of if Terminate is called. It feels like your class should implement IDisposable and clean up the timer if it hasn't already been done.

Is Started

Does this need to be exposed as a public property? You're setting it in both your Start and Stop methods, but you never check its existing state. Is calling Start twice in a row without calling Stop acceptable? Perhaps Start should return IsStarted, rather than a value to indicate if it was able to acquire the lock.

Uncertain Usage

As I said in my response to your previous code, it is unclear what your use case for this code is. You are protecting the list by creating a copy of it in GetLastUpdates; however, if this is called by multiple threads, each thread could end up with its own copy of the underlying list with the same items in it. Each thread could then decide to perform some processing on the same item in the list. Does this matter? Is it prevented in some way outside of the code you've shown? Some alternative options would be: Calling threads simply pop the first item from the list, then perform processing on it. If the underlying _updateList was a ConcurrentQueue and it was the only way to fetch items, you could change GetUpdates to enqueue items and probably do away with the Monitor lock altogether. Using your existing data structure you would have something like this.
public Update GetFirstItem()
{
    Update update = null;
    if (Monitor.TryEnter(_lock))
    {
        try
        {
            update = _updatesList.FirstOrDefault();
            if (null != update)
            {
                _updatesList.Remove(update);
            }
        }
        finally
        {
            Monitor.Exit(_lock);
        }
    }
    return update;
}

Calling threads pass in a selection delegate to select the item from the list they need to remove, then perform processing on it.

public Update GetFirstMatchingUpdate(Func<Update, bool> searchCriteria)
{
    Update update = null;
    if (Monitor.TryEnter(_lock))
    {
        try
        {
            update = _updatesList.Where(searchCriteria).FirstOrDefault();
            if (null != update)
            {
                _updatesList.Remove(update);
            }
        }
        finally
        {
            Monitor.Exit(_lock);
        }
    }
    return update;
}

Called like this:

var update = au.GetFirstMatchingUpdate(x => x.SomeField == "someText" && x.Id > 3);

Calling threads call the existing GetLastUpdates, find the update they want to process, remove it from the list, then perform processing only if that removal works (i.e. they were the first to remove it). Processing is only actually performed by a single thread (with the contention being between the update and the processor), in which case the problem I've described doesn't exist.
{ "domain": "codereview.stackexchange", "id": 20494, "tags": "c#, multithreading, json, thread-safety, client" }
ip/tcp packet decoding without wireshark
Question: How can we manually read and interpret packets properly without using Wireshark? Now from the Ethernet header I know that the destination MAC address should be at the 5th byte (after converting bits/bytes). So from this data, I thought it would be 4a onwards. However, in reality it's 00:17:f2:d0:4c:82. The same goes for the IP source/destination. For instance, the source should be at bytes 13-16. According to the readings, I guess it should be 0800 onwards. But in reality it is at 0a 32 e7 85, and I don't get why. I am just confused about how to interpret this data correctly, or maybe I am misunderstanding the general header structure. https://ntquan87.wordpress.com/2015/08/17/reading-packet-hex-dumps-manually-no-wireshark/ Answer: My guess is that what you are seeing is a layer-2 Ethernet frame and therefore the preamble is missing. Also the Ethernet checksum seems to be missing. In this case everything seems to line up (the packet type inside the Ethernet frame, the IPv4 version, the IPv4 packet length, the packet type, i.e. TCP, inside the IP packet, ...). Then you'd read your packet as in the picture. The TCP payload is

474554202f20485454502f312e300d0a
557365722d4167656e743a2057676574
2f312e31312e340d0a4163636570743a
202a2f2a0d0a486f73743a207777772e
696574662e6f72670d0a436f6e6e6563
74696f6e3a204b6565702d416c697665
0d0a0d0a

and decodes to:

GET / HTTP/1.0
User-Agent: Wget/1.11.4
Accept: */*
Host: www.ietf.org
Connection: Keep-Alive

which is coherent with the fact that the destination port is 80.
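The byte arithmetic involved is easy to check mechanically. The Python sketch below splits a raw layer-2 header and decodes the payload hex quoted above; the source MAC in the constructed frame is a zero-filled placeholder, since only the destination MAC appears in the question:

```python
payload_hex = (
    "474554202f20485454502f312e300d0a"
    "557365722d4167656e743a2057676574"
    "2f312e31312e340d0a4163636570743a"
    "202a2f2a0d0a486f73743a207777772e"
    "696574662e6f72670d0a436f6e6e6563"
    "74696f6e3a204b6565702d416c697665"
    "0d0a0d0a"
)
payload = bytes.fromhex(payload_hex).decode("ascii")
print(payload.splitlines()[0])  # GET / HTTP/1.0

def parse_ethernet(frame):
    """Split a raw layer-2 frame (no preamble, no FCS):
    bytes 0-5 dst MAC, 6-11 src MAC, 12-13 EtherType (big-endian)."""
    dst = ":".join(f"{b:02x}" for b in frame[0:6])
    src = ":".join(f"{b:02x}" for b in frame[6:12])
    ethertype = int.from_bytes(frame[12:14], "big")
    return dst, src, ethertype

# Hypothetical 14-byte header using the dst MAC from the question,
# a zeroed src MAC, and EtherType 0x0800 (IPv4):
frame = bytes.fromhex("0017f2d04c82") + bytes(6) + bytes.fromhex("0800")
print(parse_ethernet(frame))
```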
{ "domain": "cs.stackexchange", "id": 15909, "tags": "computer-networks, communication-protocols, bit-manipulation, tcp, ip" }
Why is there no piston-style pedal bicycle?
Question: I am interested in a walking bicycle. The concept is a bicycle that you drive not by pedaling in circles but by stepping or striding. This leads me to wonder whether it is better to use a linear or elliptical motion of the leg to drive the bicycle. The closest mechanism I can think of is a piston and crankshaft, like in a car, which converts linear motion to rotation. What are the disadvantages of this mechanism compared to a circular pedaling motion? Answer: The biggest problem with a "two cylinder" crankshaft is that there is a dead spot in the torque curve whenever either "piston" is at TDC or BDC. When you have your feet directly on the pedals of a conventional circular crank, you actually have the ability to apply force over greater than a 180° arc — by a combination of flexing your ankles and pushing forward/backward with your thigh muscles — which eliminates the dead spot. If you try to use a mechanical connecting rod with a crankshaft, you need some other way to eliminate the dead spot. This typically involves using more than two pistons, or incorporating a large flywheel into the mechanism. None of this is practical or desirable on a bicycle. I have seen toy vehicles for small children that use such a mechanism. But invariably, they need to put their feet down on the ground from time to time to push themselves out of the dead spot.
{ "domain": "engineering.stackexchange", "id": 608, "tags": "power, pistons, bicycles" }
What is the best way to synchronize messages?
Question: I wrote a script with two subscribers. The general way to do this is of course subN = rospy.Subscriber("topicN", TypeN, CallbackN) so I have two callbacks: Callback1 for topic1 and Callback2 for topic2. However, my objective is to use the information of both these topics to generate (and publish) another message. I am wondering how I should proceed to use both of these messages. Where should I put the code? In a third callback function? How can I synchronize receiving these two messages? Originally posted by Kansai on ROS Answers with karma: 170 on 2021-03-28 Post score: 0 Original comments Comment by gvdhoorn on 2021-03-29: @Kansai: I'm trying to figure out what leads people to post new questions on topics already discussed on ROS Answers. There are many duplicates of your question (and of @Rufus answer). Could you write a bit about how you approached posting here? Did you search before you posted? If so: how did you search? With which tools? If not: why not? If you did search, and did find previous Q&As: did those not answer your question? Could you describe why not? Answer: You can use a message_filter for this. If you want to synchronize by time, you can use the TimeSynchronizer or ApproximateTimeSynchronizer. Originally posted by Rufus with karma: 1083 on 2021-03-29 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Kansai on 2021-03-29: Thanks! I found this http://wiki.ros.org/message_filters Comment by Kansai on 2021-03-29: Mmmm. I tried the example and even though both topics are publishing (I checked this) the callback function is not being called. Then I found this https://answers.ros.org/question/172676/solved-problem-with-ros-message-filters/
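Conceptually, ApproximateTimeSynchronizer buffers messages per topic and fires a single combined callback once a pair of timestamps agrees within a "slop" tolerance. A stripped-down, pure-Python sketch of that idea (illustrative only; the real implementation lives in message_filters and differs in detail, and all names here are made up):

```python
class ApproxSync:
    """Minimal sketch: pair up (stamp, msg) items from two sources when
    |stamp1 - stamp2| <= slop, then invoke one combined callback."""

    def __init__(self, slop, callback):
        self.slop = slop
        self.callback = callback
        self.buffers = ([], [])  # one queue per topic

    def add(self, topic_idx, stamp, msg):
        self.buffers[topic_idx].append((stamp, msg))
        other = self.buffers[1 - topic_idx]
        for i, (s, m) in enumerate(other):
            if abs(s - stamp) <= self.slop:
                other.pop(i)
                self.buffers[topic_idx].pop()
                pair = (msg, m) if topic_idx == 0 else (m, msg)
                self.callback(*pair)  # one callback, both messages
                return

pairs = []
sync = ApproxSync(slop=0.1, callback=lambda a, b: pairs.append((a, b)))
sync.add(0, 1.00, "scan@1.00")
sync.add(1, 1.04, "image@1.04")  # within slop: combined callback fires
sync.add(1, 2.50, "image@2.50")  # no partner yet: stays buffered
print(pairs)  # [('scan@1.00', 'image@1.04')]
```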
{ "domain": "robotics.stackexchange", "id": 36251, "tags": "ros" }
Why is the Lagrangian specifically a function of $v^2$?
Question: I've been reading "L. D. Landau, E.M. Lifshitz - Mechanics (Volume 1)", and he justifies the Lagrangian being a function of $v^2$ by the fact that space is isotropic - that is, direction does not matter. My question is: could we choose $L$ to be a function of $|v|$ or $v^4$? I know that, if we choose $v^2$ and assume $v_0 \ll 1$ (note: in the book, $v_0$ is a factor that relates one inertial frame of reference to another), we can nicely cancel out $v_0^2$. However, is this the only reason? Could we achieve the same results choosing other "v"s? Answer: The speed $v\equiv |\vec{\bf v}|\geq 0$ is by definition the magnitude of the velocity $\vec{\bf v}$. Any function of the speed $v$, or say of $v^4$, can easily be rewritten as a function of $v^2 \equiv |\vec{\bf v}|^2$, or vice versa. The latter form $f(v^2)$ is preferable when one tries to partially differentiate wrt. a velocity component, in order to avoid square roots. For why the Lagrangian for a free particle is such a function $f(v^2)$, see this related Phys.SE post.
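The square-root point can be made explicit by writing out the partial derivative with respect to one velocity component (a short chain-rule check):

```latex
% Differentiating f(v^2) wrt. a component v_x stays polynomial in the components:
\frac{\partial}{\partial v_x}\, f(v^2)
  = \frac{\partial (v_x^2 + v_y^2 + v_z^2)}{\partial v_x}\, f'(v^2)
  = 2 v_x\, f'(v^2),
% whereas differentiating a function of the speed |\vec v| drags in a square root,
\frac{\partial}{\partial v_x}\, f(|\vec v|)
  = \frac{v_x}{\sqrt{v_x^2 + v_y^2 + v_z^2}}\, f'(|\vec v|),
% which is singular at \vec v = 0.
```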
{ "domain": "physics.stackexchange", "id": 71787, "tags": "classical-mechanics, lagrangian-formalism" }
Veto algorithm in particle decays and Monte Carlo techniques
Question: What is the main concept behind the veto algorithm, and how does it contribute to efficiency in Monte Carlo methods, for example when used in designing a parton shower? Answer: The veto algorithm is an efficient way to sample from functions that are hard to integrate or invert directly. What the veto algorithm does is use a separate over-estimating function, so that we never need the original function's integral or its inverse. Mathematically, if $f(t)$ is the function to be sampled, with no simple $F(t)$ or $F(t)^{-1}$, where $F(t)$ is the integral of $f(t)$, then we use an over-estimating function $G(t)$, with $g(t) = \frac{d}{dt}G(t) \geq f(t)$, to generate the probability distribution $P(t) = f(t) \exp(-\int_0^t dt'\, f(t'))$: trial values are drawn using the easily-handled $g(t)$, and each trial is accepted with probability $f(t)/g(t)$, otherwise it is vetoed and generation continues. For its use in parton shower evolution, we can note how it covers the entire available phase space while avoiding double counting. We follow the same method as stated above, and the over-estimated region of phase space is eliminated by the accept/veto step weighting with the ratio of the true function to the over-estimating function. For further reading on the veto algorithm in parton showers, see section 3.2 of this paper and slides 54 onwards of these slides.
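A minimal toy sketch of the accept/veto loop described above (this is not the code from the cited paper; the rate $f$ and all numbers are made up, and the overestimate is taken constant so its integral is trivially invertible):

```python
import math
import random

def veto_sample(f, g_const, t_max, rng):
    """Draw one 'emission time' distributed as f(t)*exp(-int_0^t f dt'),
    using a constant overestimate g(t) = g_const >= f(t) for all t.
    Trial steps come from the easily-invertible overestimate; each trial
    is kept with probability f(t)/g_const, otherwise it is vetoed and
    the evolution continues from t."""
    t = 0.0
    while True:
        # Next trial time from the overestimate g_const * exp(-g_const * dt):
        t += -math.log(1.0 - rng.random()) / g_const
        if t > t_max:
            return None                       # no emission below the cutoff
        if rng.random() < f(t) / g_const:
            return t                          # accepted; otherwise veto, continue

rng = random.Random(42)
f = lambda t: 0.5 * (1.0 + math.cos(t))  # a toy rate with f(t) <= 1 everywhere
samples = [veto_sample(f, 1.0, 10.0, rng) for _ in range(1000)]
accepted = [s for s in samples if s is not None]
print(len(accepted), min(accepted), max(accepted))
```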
{ "domain": "physics.stackexchange", "id": 60241, "tags": "particle-physics, computational-physics, statistics, beyond-the-standard-model, algorithms" }
Optimising Lucky Number Program
Question: In this, we are given an array of numbers, say 1 2 3 4. We start from a given position, let's say the 1st position. We cancel that number and move forward that many non-cancelled numbers. The number on which we stop goes through the same procedure. This is repeated until only one number is left. That is the lucky number.

1 2 3 4
2 3 4
3 4
3

My code:

import java.util.*;

class ln
{
    public static void main(String[] ar) throws Exception
    {
        Scanner sc = new Scanner(System.in);
        System.out.print("How many nos. will you enter: ");
        int n = sc.nextInt();
        int[] a = new int[n];
        System.out.print("Enter the nos.: ");
        for(int i = 0; i < a.length; i++)
            a[i] = sc.nextInt();
        System.out.print("From which position do you want to start: ");
        int st = sc.nextInt();
        for(int i = 0; i < a.length - 1; i++)
        {
            int j = a[st - 1];
            a[st - 1] = 0;
            int s = (int)Math.signum(j);
            while(j != 0)
            {
                if(a[st - 1] != 0)
                    j -= s;
                st += s;
                if(st > a.length)
                    st = 1;
                else if(st < 1)
                    st = a.length;
            }
        }
        for(int i = 0; i < a.length; i++)
        {
            if(a[i] != 0)
            {
                System.out.println("Final no.: " + a[i]);
                break;
            }
        }
    }
}

Please tell if there is a more elegant way of doing this.

Answer: Yes, there is a more elegant way of doing this. (That's all you wanted to know, isn't it?) First of all, here are some general comments about your code:

Your Scanner should be closed when you are done with it: sc.close();

You use in total six different names for variables, all of them one or two characters long. It is hard to understand their purpose because of the short variable names you have given them. Possible better names could be currentIndex, sign, currentNumber...

Use methods. A method should do one thing and it should do that one thing well. Your main method currently does:

- Receive user input and put it into an array
- Determine the "final number" according to your rules for this array
- Display the result to the user

At least make a separate method for determining the final number for an array.
The method could be something like this:

public static int finalNumber(int[] inputArray)

Use an if-statement to check if you are currently processing the last number. This will get rid of the last for-loop in your current code, which only loops through the array and shows the first non-cancelled number it encounters.

There are some things that can be simplified within your algorithm as well:

- loop x number of times, where x equals the array size
  - get the value at the current index
  - get the sign of the current value
  - modify the current index by + current value
  - while the value at the current index is zero, modify the current index by the value of sign

When modifying the index, you can use the modulo % operator to make sure that it is within the bounds of the array.

Here is what I would do:

public static int luckyNumber(int[] a, int index) {
    List<Integer> ints = new ArrayList<Integer>(a.length);
    for (int i = 0; i < a.length; i++) {
        // Add the values to a list so that it is easy to remove them later
        ints.add(a[i]);
    }
    int lastValue = 0;
    while (!ints.isEmpty()) {
        // Save the index that should be removed later
        int oldIndex = index;
        int currentNumber = ints.get(index);
        index = index + currentNumber;
        // Make sure that the index is within a valid range
        if (index < 0)
            index += ints.size() * Math.abs(index);
        index = index % ints.size();
        // Remove the old index and adjust the index value if the index removed was before the current index.
        lastValue = ints.remove(oldIndex);
        if (oldIndex <= index)
            index--;
    }
    // Once the list is empty, return the last value that was removed
    return lastValue;
}

All these tests will print the lovely number 42:

System.out.println(luckyNumber(new int[]{ 1, 2, 42, 4 }, 0));
System.out.println(luckyNumber(new int[]{ 1, -1, 42, 4 }, 0));
System.out.println(luckyNumber(new int[]{ 1, -1, 2, 42, -4 }, 1));
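For comparison, the same cancellation game can be sketched compactly in Python. This is an illustrative version that handles positive step values only (unlike the Java answer, which also supports negative ones):

```python
def lucky_number(nums, start_index):
    """Play the cancellation game; positive step values only."""
    items = list(nums)
    index = start_index
    last = None
    while items:
        step = items[index]       # number at the current position
        last = items.pop(index)   # cancel it
        if items:
            # After the pop, the next number forward sits at `index`,
            # so moving `step` numbers forward lands at index + step - 1.
            index = (index + step - 1) % len(items)
    return last

print(lucky_number([1, 2, 3, 4], 0))   # worked example from the question: 3
print(lucky_number([1, 2, 42, 4], 0))  # first test case from the answer: 42
```

Removing elements with `pop` makes the "skip cancelled numbers" loop unnecessary, since only live numbers remain in the list.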
{ "domain": "codereview.stackexchange", "id": 5283, "tags": "java, array" }
How does quantization arise in quantum mechanics?
Question:

BACKGROUND

I'm trying to build an intuition for what quantization really means and came up with the following two possible "visualizations":

1. The quantization of the energy levels of a harmonic oscillator is the result of a wave function that is confined in a potential well (namely of quadratic profile). It is the boundary conditions of that well that give rise to standing waves with a discrete number of nodes---hence the quantization.
2. Photons are wave packets, i.e., localized excitations of the electromagnetic field that happen to be traveling at the speed of light.

On the one hand, #1 explains quantization as the result of the boundary conditions, and on the other hand #2 explains it as the localization of an excitation. Both pictures are perfectly understandable from classical wave mechanics and yet we don't think of classical mechanics as quantized.

QUESTION

With the above in mind, what is intrinsically quantized about quantum mechanics? Are my "intuitions" #1 and #2 above contradictory? If not, how are they related?

PS: Regarding #2, a corollary question is: If photons are wave packets of the EM field, how does one explain the fact that a plane, monochromatic wave pervading all of space, is made up of discrete, localized excitations? My question is somewhat distinct from this one in that I'd rather not invoke the Schrödinger equation nor resort to any postulates, but basically build on the two intuitions presented above.

Answer:

First and second quantization

Quantization is a misleading term, since it implies discreteness (e.g., of the energy levels), which is not always the case. In practice (first) quantization refers to describing particles as waves, which in principle allows for discrete spectra, when boundary conditions are present. The electromagnetic waves behave in a similar fashion, exhibiting discrete spectra in resonators. Thus, technically, quantization of the electromagnetic field corresponds to second quantization of particles.
Second quantization arises when dealing with many-particle systems, when the focus is not anymore on the wave nature of the states, but on the number of particles in each state. The discreteness (of particles) is inherent in this approach. For the electromagnetic field this corresponds to the first quantization, and the filling particles, whose number is counted, are referred to as photons. Thus, a photon is not really a particle, but an elementary excitation of the electromagnetic field. Associating a photon with a wave packet is misleading, although it appeals to intuition. (One could argue however that physically observed photons are always wave packets, since to have truly well-defined energy they would have to exist for infinite time, which is not possible.) This logic of quantization is applied to other wave-like fields, such as wave excitations in crystals: phonons (sound), magnons, etc. One speaks sometimes even about diffusons - quantized excitations of a field described by the diffusion equation.

Uncertainty relation

An alternative way to look at quantization is from the point of view of the Heisenberg uncertainty relation. One switches from classical to quantum theory by demanding that canonically conjugate variables cannot be measured simultaneously (e.g., position and momentum $x,p$ can be simultaneously measured in classical mechanics, but not in quantum mechanics). Mathematically this means that the corresponding operators do not commute: $$ [\hat{x}, \hat{p}_x]_- = \imath\hbar \Rightarrow \Delta x\Delta p_x \geq \frac{\hbar}{2}.$$ The discreteness of spectra then shows up as discrete eigenvalues of the operators. This procedure can be applied to anything - particles or fields - as long as we can formulate it in terms of Hamiltonian mechanics and identify effective positions and momenta, on which we then impose the non-commutativity. E.g., for the electromagnetic field, one demands the non-commutativity of the electric and the magnetic fields at a given point.
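Intuition #1 from the question, that confinement produces discreteness, can be checked numerically: diagonalizing a finite-difference Hamiltonian for a quadratic well (units $\hbar=m=\omega=1$; the grid size and box length below are arbitrary illustrative choices) reproduces the discrete ladder $E_n = n + 1/2$.

```python
import numpy as np

# H = -1/2 d^2/dx^2 + 1/2 x^2 on a grid; the confining potential
# turns the spectrum into discrete levels E_n = n + 1/2.
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
# central-difference second-derivative operator
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
       + np.diag(np.ones(n - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)
energies = np.linalg.eigvalsh(H)[:4]
print(np.round(energies, 2))   # close to [0.5, 1.5, 2.5, 3.5]
```

The zero Dirichlet boundary at the box edges plays no role here because the low-lying eigenfunctions already decay to zero well inside the box; it is the quadratic potential that sets the level spacing.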
{ "domain": "physics.stackexchange", "id": 66574, "tags": "quantum-mechanics, discrete" }
Callbacks in pr2 controllers
Question: Hi ros answers users, I'm currently trying to make a custom controller work on pr2. I have two subscribers inside the controller. My problem is that their respective callbacks are not called. Moreover I noticed that the existing controllers on pr2 (namely joint_spline_controller and joint_velocity_controller) don't call spinOnce. Still, they use subscribers and callbacks. So my question is how and when are callbacks called for a controller? Guido Originally posted by Guido on ROS Answers with karma: 514 on 2011-03-11 Post score: 0 Answer: Hi Guido, ROS's spin() function is called external to the controller manager. On the PR2, it's called from pr2_etherCAT, and in Gazebo it's called from pr2_gazebo_plugins. The controller manager itself uses ROS to load and unload controllers, and many other topics are published (such as joint_state), so if spin() weren't being called, there would be many other issues. I'm not sure why your subscription callback isn't being called, but I suspect it's just related to standard ROS issues and unrelated to the controllers. Try using rostopic info, rostopic echo, rosnode ping, and rosnode info. Check that both ends are connected to the same topic, and that messages are being published. One last thing to check: be sure that your controller's init() method finishes. If you still can't find the issue, try posting some code. Best of luck Originally posted by sglaser with karma: 649 on 2011-03-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Guido on 2011-03-23: It was indeed a connection problem between both ends of the topic. I forgot to take into account namespaces, so the topic advertised was /command but the topic to which I should publish is /my_controller/command. I was missing the "/my_controller" part. Thank you for the explanation.
{ "domain": "robotics.stackexchange", "id": 5036, "tags": "ros, pr2-controllers, pr2, callbacks" }
Is the type inference here really complicated?
Question: There's a question on SO asking why in Java the right type doesn't get picked in a concrete case. I know that Java can't do it in such "complicated" cases, but I'm asking myself WHY? The (for simplicity slightly modified) line failing to compile is

Map<String, Number> m = ImmutableMap.builder().build();

and the methods are defined as[1]

class ImmutableMap {
    public static <K1, V1> Builder<K1, V1> builder() {...}
    ...
}

class Builder<K2, V2> {
    public ImmutableMap<K2, V2> build() {...}
    ...
}

The solution K1=K2=String and V1=V2=Number is obvious to everyone but the compiler. There are 4 variables here and I can see 4 trivial equations, so what's the problem with type inference here?

[1] I simplified the code piece from Guava for this example and numbered the type variables to make it (hopefully) clearer.

Answer: This is a common limitation of type inferencing and it has to do with the distinction between parameters and results of a function. Generally type inferencing is done strictly with the parameters passed to a function. Consider just the expression:

ImmutableMap.builder().build();

This has to be a valid expression due to how the language works. This means the type inferencing has to work from this expression alone. There is of course nothing in this expression which reveals what type you are expecting, thus it cannot compile (the type of the expression is not known). It really isn't a question of how complicated the inferencing is, but rather a question of the fundamental structure of the language. It is possible to design a language where the return type becomes an implicit parameter to a function. However, this starts to create a loop in the typing logic in languages where variables can be auto-typed (like C++). The reason this happens is because of how the syntax trees for such languages are built (this may be only in theory as the compiler may not match exactly).
When you have an expression like the above you have a tree that might look something like this:

- Assignment
  - LValue: Map<String, Number> m
  - RValue: FunctionCall( FunctionCall( ImmutableMap, builder ), build )

In such languages the "RValue" is essentially an expression -- where expressions are something that resolves to a value. These are processed on their own, and thus limited to the variables and sub-expressions which occur in them. Type inferencing doesn't usually go up the tree, thus the FunctionCall has no knowledge it is part of the Assignment tree and thus no knowledge of the "LValue" (what it is being assigned to). This is the traditional way to process languages (it even applies to formal languages like lambda calculus). Languages don't have to work this way, but this is the most commonly understood way parsing works. You can certainly make a syntax tree which forwards type information in an assignment, but that leads to the pitfall I mentioned if the "LValue" is auto-typed. If you attempt to allow both sides to infer their type you start entering a constraint-based type system (which again, is possible and I have worked on one before). One practical way of doing this in some languages is to pass the return variables by reference (which isn't really possible in Java). In C++ I often do this where the function requires its return type. Instead of doing:

var = someFunc(...);

I'll do:

someFunc( var, ... );

And this allows the type of "var" to be known by the function (in this case for template resolution).
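The question's "4 trivial equations" can indeed be solved mechanically once the assignment target is allowed into the constraint set. A toy unifier (illustrative only; the convention that lowercase names are type variables is this sketch's, not Java's) resolves them in one pass:

```python
def unify(constraints):
    """Solve equality constraints between type variables and concrete types."""
    subst = {}
    def resolve(t):
        while t in subst:
            t = subst[t]
        return t
    for a, b in constraints:
        a, b = resolve(a), resolve(b)
        if a == b:
            continue
        if a.islower():        # a is a type variable: bind it
            subst[a] = b
        elif b.islower():
            subst[b] = a
        else:
            raise TypeError(f"cannot unify {a} with {b}")
    return {v: resolve(v) for v in subst}

# k1 = k2 and v1 = v2 come from builder()/build(); the assignment
# target Map<String, Number> contributes k2 = String, v2 = Number.
solution = unify([("k1", "k2"), ("v1", "v2"),
                  ("k2", "String"), ("v2", "Number")])
print(solution)
```

Dropping the last two constraints (which is what typing the RValue in isolation amounts to) leaves the system underdetermined, which is exactly the compiler's situation described above.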
{ "domain": "cs.stackexchange", "id": 314, "tags": "programming-languages, typing, java, type-inference" }
Need verification: Hamiltonian formulation states that an atom is bounded to an eigenenergy state due to opposing kinetic and potential energy
Question: I received an explanation from someone who said that electrons in an atom are trapped in an eigenenergy state $E_n$, as per Hamiltonian mechanics, because the KE and PE of the atom balance each other out. The claim was that as the electron cloud moves, the KE oscillates and the PE oscillates, and the summation of that KE and PE oscillation is the eigenenergy. I know about eigenenergy as derivable from Schrodinger's equation, but I have never heard of this explanation. While it sounds plausible, can someone verify whether there is indeed ground for this claim? Answer: The electron in an atom has no defined kinetic energy, because it has no defined velocity. If it had a well-defined velocity $\vec v$ its position would be absolutely undetermined - i.e. it could be anywhere in the universe. It would be interesting for you to see the question Quantum mechanics, and how the law $ΔxΔp≥ℏ/2$ explains the paradox regarding atoms. However, I can do something else to help, and that will show that what you were told is not far from the truth. Namely, I shall calculate the average kinetic energy. I will not complicate myself with higher states, I will do the calculation for the ground hydrogen state. This state has a relatively simple wave-function, of the form $ \psi (r) = Ce^{-r/a_0}$, where C is a constant of normalization and $a_0$ is the Bohr radius. So, what I do is $<E_{K,0}> = -C^2 \frac {\hbar ^2}{2m} \int e^{-r/a_0}\, \nabla^2 e^{-r/a_0}\, 4\pi r^2\, dr$, where $\nabla^2 e^{-r/a_0} = \left( \frac{1}{a_0^2} - \frac{2}{a_0 r} \right) e^{-r/a_0}$, so, using the normalization $C^2 = 1/(\pi a_0^3)$, $<E_{K,0}> = -\frac {\hbar ^2}{2m}\, 4\pi C^2 \int_0^\infty \left( \frac{1}{a_0^2} - \frac{2}{a_0 r} \right) e^{-2r/a_0} r^2\, dr = \frac {\hbar ^2 }{2m a_0^2}$ Introducing the expression of the Bohr radius, $a_0 = \frac {4\pi \epsilon _0 \hbar ^2}{m_e e^2}$, we get the same absolute value as the eigenvalue of the total energy on the ground level, $|E_1| = <E_{K,0}>$. In all, the potential energy is negative, and the total energy is also negative. We have $E_1 = <E_{K,0}> + <E_{P,0}>$, or in another form, $-<E_{P,0}> = -E_1 + <E_{K,0}> = 2<E_{K,0}>$. The kinetic energy balances half of the potential energy.
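These expectation values can be verified numerically. In atomic units ($\hbar=m_e=e^2/4\pi\epsilon_0=1$, so $a_0=1$ and $E_1=-1/2$), a simple radial quadrature over $\psi\propto e^{-r}$ reproduces $\langle E_K\rangle = 1/2$ and $\langle E_P\rangle = -1$ (the grid extent and resolution below are arbitrary choices):

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Ground state of hydrogen in atomic units: psi ~ exp(-r), a0 = 1.
r = np.linspace(1e-6, 40.0, 200_000)
psi = np.exp(-r)
w = 4.0 * np.pi * r**2            # radial volume element
norm = trap(psi**2 * w, r)
# Laplacian of exp(-r): psi'' + (2/r) psi' = (1 - 2/r) exp(-r)
lap = (1.0 - 2.0 / r) * psi
E_kin = trap(psi * (-0.5) * lap * w, r) / norm
E_pot = trap((-1.0 / r) * psi**2 * w, r) / norm
print(round(E_kin, 3), round(E_pot, 3), round(E_kin + E_pot, 3))
```

The ratio $\langle E_K\rangle = -\frac{1}{2}\langle E_P\rangle$ that comes out is the virial theorem for the Coulomb potential.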
{ "domain": "physics.stackexchange", "id": 18868, "tags": "quantum-mechanics, atomic-physics" }
Error when initializing trac_ik on Noetic
Question: Hi! I have used TRAC-IK on ROS Melodic and Kinetic and everything works fine. However, when I ported the same code to ROS Noetic, I found that TRAC-IK could not be initialized, i.e., I got the error Signal: SIGSEGV (Segmentation fault) when execution reaches

TRAC_IK::TRAC_IK tracik_solver(chain_start, chain_end, urdf_param, timeout, eps);

To make sure it wasn't an error in my code, I ran test.cpp separately and ran into the same problem. I'm not sure what is causing this error, or if I'm missing the installation of other libraries. But for now, my code builds fine. Looking forward to any reply! Thanks!

Originally posted by JohnDoe on ROS Answers with karma: 3 on 2022-11-15

Post score: 0

Answer: I believe the error is coming from an incorrect installation of trac_ik. To answer you completely, I tested it in ROS Noetic and found it working. Please see below:

Install trac_ik: Please install from pre-built binaries as shown below:

$ sudo apt install ros-noetic-trac-ik

Install dependency to run an example: We do not need this package. But here, I want to run an example to confirm trac_ik.

$ sudo apt install ros-noetic-pr2-description

Run example: The pr2_arm.launch uses TRAC_IK::TRAC_IK tracik_solver(chain_start, chain_end, urdf_param, timeout, eps)

ravi@dell:~$ roslaunch trac_ik_examples pr2_arm.launch
... logging to /home/ravi/.ros/log/a428d09e-64be-11ek-b98a-bhc0cd415369/roslaunch-dell-13219.log
Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://dell:36633/

SUMMARY
========

PARAMETERS
* /robot_description: <?xml version="1....
* /rosdistro: noetic * /rosversion: 1.15.14 * /trac_ik_tests/chain_end: r_wrist_roll_link * /trac_ik_tests/chain_start: torso_lift_link * /trac_ik_tests/num_samples: 1000 * /trac_ik_tests/timeout: 0.005 * /trac_ik_tests/urdf_param: /robot_description NODES / trac_ik_tests (trac_ik_examples/ik_tests) auto-starting new master process[master]: started with pid [13259] ROS_MASTER_URI=http://localhost:11311 setting /run_id to a428d09e-64be-11ek-b98a-bhc0cd415369 process[rosout-1]: started with pid [13284] started core service [/rosout] process[trac_ik_tests-2]: started with pid [13287] [ INFO] [1668500629.439007277]: Using 7 joints [ INFO] [1668500629.440011185]: *** Testing KDL with 1000 random samples [ INFO] [1668500629.440163447]: 0% done [ INFO] [1668500630.509473437]: 90% done [ INFO] [1668500630.632576062]: KDL found 859 solutions (85.9%) with an average of 0.0011914 secs per sample [ INFO] [1668500630.632594903]: *** Testing TRAC-IK with 1000 random samples [ INFO] [1668500630.632938700]: 0% done [ INFO] [1668500631.733434004]: TRAC-IK found 955 solutions (95.5%) with an average of 0.0010914 secs per sample [trac_ik_tests-2] process has finished cleanly log file: /home/ravi/.ros/log/a428d09e-64be-11ek-b98a-bhc0cd415369/trac_ik_tests-2*.log ^C[rosout-1] killing on exit [master] killing on exit shutting down processing monitor... ... shutting down processing monitor complete done Note Please read below if you get the following error with pr2_arm.launch file RLException: Invalid <param> tag: Cannot load command parameter [robot_description]: no such command [['/opt/ros/noetic/share/xacro/xacro.py', '/opt/ros/noetic/share/pr2_description/robots/pr2.urdf.xacro']]. 
Param xml is <param name="robot_description" command="$(find xacro)/xacro.py '$(find pr2_description)/robots/pr2.urdf.xacro'"/> The traceback for the exception was written to the log file Please change xacro.py to xacro at line#8 of /opt/ros/noetic/share/trac_ik_examples/launch/pr2_arm.launch file as suggested here. Update With "chain_start", "chain_end", as "base_link", and "tool0" respectively, the IK test of kuka lbr iiwa robot using trac_ik works smoothly. Below is the kuka_lbr_iiwa.launch file: <?xml version="1.0"?> <launch> <arg name="num_samples" default="1000" /> <arg name="chain_start" default="base_link" /> <arg name="chain_end" default="tool0" /> <arg name="timeout" default="0.005" /> <param name="robot_description" command="$(find xacro)/xacro --inorder '$(find kuka_lbr_iiwa_support)/urdf/lbr_iiwa_14_r820.xacro'" /> <node name="trac_ik_tests" pkg="trac_ik_examples" type="ik_tests" output="screen"> <param name="num_samples" value="$(arg num_samples)" /> <param name="chain_start" value="$(arg chain_start)" /> <param name="chain_end" value="$(arg chain_end)" /> <param name="timeout" value="$(arg timeout)" /> <param name="urdf_param" value="/robot_description" /> </node> </launch> This is how, I executed it: $ roslaunch trac_ik_examples kuka_lbr_iiwa.launch ... logging to /home/ravi/.ros/log/648856b2-655b-11ed-b98a-bfc0cd455360/roslaunch-dell-115393.log Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. xacro: in-order processing became default in ROS Melodic. You can drop the option. started roslaunch server http://dell:44651/ SUMMARY ======== PARAMETERS * /robot_description: <?xml version="1.... 
* /rosdistro: noetic * /rosversion: 1.15.14 * /trac_ik_tests/chain_end: tool0 * /trac_ik_tests/chain_start: base_link * /trac_ik_tests/num_samples: 1000 * /trac_ik_tests/timeout: 0.005 * /trac_ik_tests/urdf_param: /robot_description NODES / trac_ik_tests (trac_ik_examples/ik_tests) auto-starting new master process[master]: started with pid [115433] ROS_MASTER_URI=http://localhost:11311 setting /run_id to 648856b2-655b-11ed-b98a-bfc0cd455360 process[rosout-1]: started with pid [115460] started core service [/rosout] process[trac_ik_tests-2]: started with pid [115467] [ INFO] [1668567873.100935324]: Using 7 joints [ INFO] [1668567873.102060585]: *** Testing KDL with 1000 random samples [ INFO] [1668567873.107102190]: 0% done [ INFO] [1668567874.109234688]: 30% done [ INFO] [1668567875.367298698]: 70% done [ INFO] [1668567876.307079837]: KDL found 400 solutions (40%) with an average of 0.00320388 secs per sample [ INFO] [1668567876.307104300]: *** Testing TRAC-IK with 1000 random samples [ INFO] [1668567876.307449347]: 0% done [ INFO] [1668567876.837260447]: TRAC-IK found 996 solutions (99.6%) with an average of 0.000525585 secs per sample [trac_ik_tests-2] process has finished cleanly log file: /home/ravi/.ros/log/648856b2-655b-11ed-b98a-bfc0cd455360/trac_ik_tests-2*.log ^C[rosout-1] killing on exit [master] killing on exit shutting down processing monitor... ... shutting down processing monitor complete done Originally posted by ravijoshi with karma: 1744 on 2022-11-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JohnDoe on 2022-11-15: Thank you for your reply and detailed explanation. I followed your instructions and was able to see the same verification results. However, this is not what I want. When I run test.cpp via C++ in IDE and launch the robot description in another terminal, I get the error mentioned above, which does not happen with the ROS Melodic. I am using the kuka lbr iiwa robot. 
For the main function of my test.cpp, I just changed the following parameters based on the sample:

nh.param("chain_start", chain_start, std::string("base_link"));
nh.param("chain_end", chain_end, std::string("tool0"));
nh.param("urdf_param", urdf_param, std::string("/robot_description"));

Besides, the rest is exactly the same as the original test.cpp. Would you please try to call trac-ik in this way? Thanks! Comment by ravijoshi on 2022-11-15: Please see the "Update" section in my answer above. Basically, it works fine with your parameters as well. However, on your computer, most probably the trac_ik is broken. Please install it as mentioned in the answer above. Comment by JohnDoe on 2022-11-15: Thank you for your patient response, which I greatly appreciate. I have tried all your instructions and everything goes well. However, I found the problem when I was trying to share a minimally reproducible example for you. For my original project, there are many components in the find_package of CMakeList, e.g. kdl_parser, trac_ik_lib, nlopt, visualization_msgs, and so on. For the minimal reproducible example project, I just included kdl_parser and trac_ik_lib, and I found that the error encountered above disappeared. Through repeated testing, I found that in ROS Noetic, the nlopt lib and trac_ik_lib conflict with each other, which causes the error I raised in the question and does not occur with ROS Melodic. Anyway, thanks for the guidance and I was able to find this problem in my attempt. PS: I installed nlopt and trac_ik_lib by sudo apt-get install ros-noetic-****. Comment by ravijoshi on 2022-11-15: I am glad that you made it work. Sometimes, creating a "minimally reproducible example" helps to filter the issue, as you did. BTW, the nlopt is automatically installed when using sudo apt install ros-noetic-trac-ik. So, you don't need to worry about it. Otherwise, you have to install libnlopt-dev and libnlopt-cxx0.
I have tested both approaches in Noetic (Ubuntu 20.04 LTS) and found them working. Finally, if this answer helped you to find the issue and the problem is resolved, may I request you to (1) upvote, and (2) mark this answer as accepted? Please look at the top left corner of this answer above. Comment by JohnDoe on 2022-11-16: No problem, I've marked this as the answer. Thanks again for your kindness!
{ "domain": "robotics.stackexchange", "id": 38126, "tags": "ros, ros-melodic, kdl" }
Why does Lamb shift renormalization not affect decay rate?
Question: As a preface, I know there are "more" correct ways to calculate the Lamb shift and decay rate through full blown QED, but this is what's most familiar to me, so I would appreciate an answer in this realm (without going to full QED and talking about poles etc.). When following the Wigner-Weisskopf method for spontaneous decay of a hydrogen atom, we assume an exponential ansatz. Using this method, we can find a real and imaginary exponent for the initial state lifetime. This represents the decay rate, and the Lamb shift. We get a finite lifetime, but the Lamb shift diverges. Typically at this point we say that the Lamb shift needs to be dealt with through renormalization, but we are happy with the finite decay rate (which is apparently quite close to experiment). One typical way to deal with the Lamb shift infinity is to follow Bethe and subtract away the self-energy contribution from a free electron. The remaining integral still diverges, but if we use a cutoff energy of around $mc^2$, then we get a Lamb shift of about 1040 MHz, which is reasonably close to experiment. One way to think about this subtraction is that the bare mass in the Hamiltonian is adjusted to remove the effects of the free-electron self-energy. Now my question: if we are effectively adding mass counter-terms to our Hamiltonian to adjust our final calculation for the Lamb shift frequency, why does this not also affect the decay frequency? Although renormalization is already somewhat strange, I don't understand in this specific instance why we can add terms to change the calculation of infinite quantities, while finite quantities like the decay rate (which do depend on the form of the Hamiltonian, and what specific mass is used) can be taken as their "non-renormalized" (no mass counter term in Hamiltonian) value? Hope that makes sense.
Answer: Mass renormalization is indeed introduced to cure the divergent behaviour of the real part of the self-energy, while the imaginary part (giving the lifetime) is seemingly finite. However, in the naive calculation, it actually contains the unobservable (and infinite) bare mass. The lifetime formula must also be calculated from the renormalized formalism, but in the end this will make no difference, we will end up with formally the same expression; the only difference is that the observed (that is, renormalized) mass $m_\text{obs}$ enters the lifetime formula as well. In the old-fashioned PT (i.e. non-relativistic QM coupled to a quantized electromagnetic field), we find that the renormalized Hamiltonian of the hydrogen atom reads $$ H=H_\text{atom}+H_\text{rad}+H_\text{int} \ , $$ where $$ H_\text{atom}=\frac{P^2}{2m_\text{obs}}-\frac{Z\alpha}{r} $$ is the Hamiltonian of the hydrogen without any field (with eigenstates and eigenvalues $\phi_n$, $E_n$), $$ H_\text{rad}=\frac{1}{2} \int\mathrm{d}^3r\left[\Pi^2+(\nabla\times\vec{A})^2\right] $$ is the Hamiltonian of the field ($\vec{A}$ and $\vec{\Pi}=-\vec{E}_\perp$ being the vector potential and its conjugate momentum, respectively), and $$ H_\text{int}=\frac{e}{m_\text{obs}}\vec{P}\cdot\vec{A}+\frac{e^2}{2m_\text{obs}}A^2+\left(\frac{\delta m}{m_\text{obs}}\right)\frac{P^2}{2m_\text{obs}} $$ is the usual minimal coupling interaction. The only non-standard term is the last one, the mass counterterm, which is introduced to cancel the divergence of the self-energy. In the lowest order and in the dipole approximation, $\delta m$ reads $$ \delta m=\frac{4\alpha}{3\pi}\Lambda \ , $$ as $\Lambda\rightarrow\infty$. See Chapter 11. of Molecular Quantum Electrodynamics by Craig & Thirunamachandran for the derivation of this Hamiltonian with this counterterm (and also Chapter 3.4 of Relativistic Quantum Mechanics and Field Theory by Gross for some details on the following PT calculation). 
When calculating the energy shift and decay rate in the renormalized theory, everything goes on just like in the naive "Rayleigh-Schrödinger PT + dipole approximation" calculation (the $\sim A^2$ term gives an infinite but state-independent "constant" contribution which can be absorbed by the redefinition of zero energy, while the $\sim\vec{P}\cdot\vec{A}$ term yields in second order a state-dependent linear divergence cancelled by the mass counterterm), and we end up with $$ \Delta {\cal{E}}_{n,\Lambda}=\frac{2\alpha}{3\pi m^2_\text{obs}} \sum_m(E_m-E_n)|\langle\phi_n|\vec{P}|\phi_m\rangle|^2 \int_0^\Lambda\mathrm{d}k\frac{1}{E_m-E_n+k} $$ for the $n$-th state. The integrand can become singular for excited states when $k=E_n-E_m$. This must be avoided with an appropriate $i0^+$ prescription, that is, by formally adding a small imaginary part to the energy: $$ \begin{aligned} \Delta {\cal{E}}_{n,\Lambda}&=\frac{2\alpha}{3\pi m^2_\text{obs}} \sum_m(E_m-E_n)|\langle\phi_n|\vec{P}|\phi_m\rangle|^2 \int_0^\Lambda\mathrm{d}k\frac{1}{E_m-E_n+k-i0^+} \\ &=\Delta {{E}}_{n,\Lambda}-\frac{i}{2}\Gamma_n \ , \end{aligned} $$ where $$ \Delta {{E}}_{n,\Lambda}=\frac{2\alpha}{3\pi m^2_\text{obs}} \sum_m(E_m-E_n)|\langle\phi_n|\vec{P}|\phi_m\rangle|^2 {\cal{P}}\int_0^\Lambda\mathrm{d}k\frac{1}{E_m-E_n+k} \ , $$ and $$ \begin{aligned} \Gamma_n&= -\frac{4\alpha}{3 m^2_\text{obs}} \sum_m(E_m-E_n)|\langle\phi_n|\vec{P}|\phi_m\rangle|^2 \int_0^\infty\mathrm{d}k\delta(E_m-E_n+k) \\ &=\frac{4\alpha}{3 m^2_\text{obs}} \sum_{\substack{m \\ E_n>E_m}}(E_n-E_m)|\langle\phi_n|\vec{P}|\phi_m\rangle|^2 \\ &=\frac{4\alpha}{3} \sum_{\substack{m \\ E_n>E_m}}(E_n-E_m)^3|\langle\phi_n|\vec{r}|\phi_m\rangle|^2 \ . \end{aligned} $$ We used $$ \frac{1}{x\pm i0^+}={\cal{P}}\frac{1}{x}\mp i\pi\delta(x) $$ to separate the real and imaginary parts, and we were free to let $\Lambda\rightarrow\infty$ in the decay rate formula. 
The expression for $\Gamma_n=1/\tau_n$ looks just like the one calculated naively, but with $m_\text{obs}$ instead of the bare mass. Note that the decay rate was extracted from an essentially time-independent PT formalism by avoiding poles along the real line, an approach reminiscent of the treatment of resonances. The correct choice of the sign of $i0^+$ is better motivated in the full QED derivation; in this context, not much more can be said other than choosing $-i0^+$ instead of $+i0^+$ is necessary to prevent an unphysical, exponentially growing probability. The real part providing the energy shift is still logarithmically divergent. Bethe simply introduced the cut-off $\Lambda=m$, while in the QED treatment, this divergence is cancelled by the infrared divergence coming from the $F_1$ form factor of the one-loop vertex correction. Note, however, that the violently singular nature of the self-energy (containing both linear and logarithmic divergences) is an artifact of the dipole approximation. If one calculated the one-loop self-energy in the old-fashioned PT but without dipole approximation, then only a single logarithmic divergence should be treated in the mass renormalization, and the remaining parts would be finite; see Lett. Math. Phys. 24 115 (1992) for details.
{ "domain": "physics.stackexchange", "id": 99131, "tags": "atomic-physics, quantum-electrodynamics, renormalization, half-life, lamb-shift" }
Impedance matching in Fresnel's equations
Question: I have Fresnel's equations: \begin{align} \begin{aligned} r_\text{TE} = \dfrac{\eta_2\cos\theta_1 - \eta_1\cos\theta_2}{\eta_2\cos\theta_1 + \eta_1\cos\theta_2}, && t_\text{TE} = \dfrac{2\eta_2\cos\theta_1}{\eta_2\cos\theta_1 + \eta_1\cos\theta_2},\\ r_\text{TM} = \dfrac{\eta_2\cos\theta_2 - \eta_1\cos\theta_1}{\eta_1\cos\theta_1 + \eta_2\cos\theta_2}, && t_\text{TM} = \dfrac{2\eta_2\cos\theta_1}{\eta_1\cos\theta_1 + \eta_2\cos\theta_2}, \end{aligned} \end{align} where $\theta_1$ is the incoming angle, and $\theta_2$ is the transmitted angle from medium 1 with impedance $\eta_1$ to medium 2 with impedance $\eta_2$. These are deduced from Snell's law and the law of reflection, as well as from the continuity relations of the electric and magnetic fields at the interface between the two media. How can it be deduced from these equations that if $\eta_2=\eta_1$ there is no reflection? I'm not seeing it. Any help is very much appreciated. Answer: The answer is that it is only true for non-magnetic media (or media with identical relative permeability). In those cases, the impedances are inversely proportional to the refractive indices, and Snell's law ensures that the angles of incidence and refraction are the same. If you had materials with identical impedance but different refractive indices, then I don't think it is true that there would be zero reflection, except at normal incidence. The relevant relationships are $\eta = (\mu_r \mu_0/\epsilon_r \epsilon_0)^{1/2}$, whereas $n = (\mu_r \epsilon_r)^{1/2}$.
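A quick numeric sketch of the answer's point (the function name and the input values are illustrative, not from the question): for non-magnetic media, equal impedances force equal refractive indices; Snell's law then gives $\theta_2=\theta_1$ and both reflection coefficients vanish at any incidence angle.

```python
import math

def fresnel_r(n1, n2, eta1, eta2, theta1):
    """Reflection amplitudes in the impedance form of Fresnel's equations."""
    theta2 = math.asin(n1 * math.sin(theta1) / n2)   # Snell's law
    c1, c2 = math.cos(theta1), math.cos(theta2)
    r_te = (eta2 * c1 - eta1 * c2) / (eta2 * c1 + eta1 * c2)
    r_tm = (eta2 * c2 - eta1 * c1) / (eta1 * c1 + eta2 * c2)
    return r_te, r_tm

# Non-magnetic media: eta is proportional to 1/n, so eta1 == eta2 implies
# n1 == n2, hence theta2 == theta1 and both numerators cancel.
r_te, r_tm = fresnel_r(n1=1.5, n2=1.5, eta1=251.0, eta2=251.0, theta1=0.6)
print(r_te, r_tm)   # both ~0 (up to floating-point rounding)
```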
{ "domain": "physics.stackexchange", "id": 37583, "tags": "optics, reflection" }
A question about halt (or stop) of Turing machine
Question: I am trying to understand something: a Turing machine has two halting states, $q_{accept}$ and $q_{reject}$. Now, if machine $M$ runs on word $w$ (I hope I write it right...) and the final configuration is $C(w,q_t,\varepsilon)$, with $q_t \in Q\setminus\{q_{accept},q_{reject}\}$, then of course $M$ doesn't accept $w$ and doesn't reject $w$, but my question is: can we say that $M$ stops on $w$? I tried to look for an answer but didn't find one... Thank you! Answer: No, Turing machines are defined differently from finite automata: they don't "stop" at the end of the input; they stop whenever they reach a final state $q\in F$. Usually, there are two final states, $q_{acc},q_{rej}$ – if the machine transitions to one of these final states it stops; but until then, it keeps running. Therefore, $\delta(q,\cdot)$ must be defined for any $q\in Q\setminus F$ and any letter the head points at. This tells us how the machine behaves when in the state $q_t$ (using your symbol, assuming it is not a final state; see below if it is). Regarding halting without accepting/rejecting: the answer is yes. If, say, $F=\{q_{acc},q_{rej}, q_{inconclusive}\}$, and the machine reaches the third final state, we can say it neither accepted nor rejected, but it indeed halted, since it reached a final state in $F$. Of course, $F$ can be as large as $Q$. This may make sense for computing tasks other than yes/no decision problems.
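To make the halting condition concrete, here is a toy simulator (an illustrative sketch, not the formal model): the machine keeps stepping until its current state lies in the set of final states $F$, which may contain more than the two accept/reject states.

```python
def run_tm(delta, word, q0, F, blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until a state in F is reached."""
    tape = dict(enumerate(word))
    head, q = 0, q0
    for _ in range(max_steps):
        if q in F:              # halting = reaching any final state
            return q
        sym = tape.get(head, blank)
        q, write, move = delta[(q, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return None                 # step budget exhausted: "did not halt"

# Example: scan right over 1s, halt in q_acc on the first blank.
delta = {("q0", "1"): ("q0", "1", "R"),
         ("q0", "_"): ("q_acc", "_", "R")}
print(run_tm(delta, "111", "q0", F={"q_acc", "q_rej"}))   # q_acc
```

Adding a third final state such as q_inconclusive to F would make the machine halt without accepting or rejecting, exactly as described in the answer.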
{ "domain": "cs.stackexchange", "id": 6523, "tags": "turing-machines, halting-problem" }
navsat_transform_node has zero orientation output?
Question: I'm working with a robot with no wheel odometry measurements, and have IMU, GPS and motor commands to fuse. Can anyone clarify how the setup should work for this? At the moment I have GPS, IMU and motor command odometry going into a navsat_transform_node, which outputs odometry messages. I'm planning to fuse them with a second copy of the motor commands with an ekf_localization node. My problem is: the odometry/gps messages coming out of the navsat_transform_node all contain zeros for all orientations. I thought the IMU data was being used to set these (and I've checked that the incoming IMU messages are non-zero). Am I doing something wrong, or misinterpreting what navsat_transform_node is doing? Perhaps the IMU is used like the motor command odometry to set only the initial state, then ignored the rest of the time? Even with that, it would be unlikely to have orientation exactly 0 at the start. Do I still need to fuse the odometry/gps with IMU again to get the desired effect, and if so why? Thanks for any insights! Sample odometry/gps message below:

position:
  x: 0.806593844667
  y: -4.15517381765
  z: -0.674990490428
orientation:
  x: 0.0
  y: 0.0
  z: 0.0
  w: 1.0
covariance: [1.259293318748, 0.03294127204511593, 0.41883145581259457, 0.0, 0.0, 0.0, 0.03294127204511593, 1.232013681194764, 0.2798927172566257, 0.0, 0.0, 0.0, 0.41883145581259457, 0.2798927172566257, 4.768693000057238, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist:
  twist:
    linear:
      x: 0.0
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: 0.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Originally posted by charles.fox on ROS Answers with karma: 120 on 2015-05-21 Post score: 2 Original comments Comment by yohtm on 2015-05-21: Can you post the configuration of your navsat_transform_node and ekf_localization nodes?
Answer: The purpose of navsat_transform_node is to convert GPS coordinates into your robot's local coordinate frame. To do this, it needs three inputs: Navsat data. This is typically obtained from a GPS device. The robot's orientation in an earth-fixed frame. navsat_transform_node first converts lat/lon coordinates to UTM coordinates, and then transforms them into your robot's local frame. In order to do this, it needs to know what your heading is in the UTM frame. Conveniently, a magnetometer (such as the one in your IMU) provides this information. Your current odometry pose. This should include your robot's orientation, which does not have to necessarily correspond to the orientation from your IMU (though for most users, it will). The output is simply the lat/lon data converted to your local coordinate frame. The heading should not be used. To answer your final question, yes, you do need to fuse the output topic /odometry/gps with ekf_localization_node. ekf_localization_node (or ukf_localization_node) will fuse multiple input sources; navsat_transform_node serves a very specific purpose, which is to transform GPS data into your local coordinate frame so that you can fuse it. For some more information on configuring your ekf_localization_node instance(s), see section 3 of this page of the tutorials. Originally posted by Tom Moore with karma: 13689 on 2015-05-21 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by M@t on 2016-11-08: FYI, the link in this answer is broken because of the location shift for r_l documentation. I think this is the new location: Integrating GPS data Comment by Tom Moore on 2016-11-09: Like updated. Thanks @M@t.
{ "domain": "robotics.stackexchange", "id": 21750, "tags": "navigation, odometry, gps, navsat-transform-node, ekf-localization" }
What is a good Category Theory-Domain Theory dictionary?
Question: When dealing with the domain theoretic categories (say CPO and $\omega$CPO), I frequently wish for a dictionary for the language of category theory in domain theory. That is, given a concept, say monic arrow, I could look it up in the dictionary and see what are the known characterisations of it in the different domain categories. I realise this wish is too much to hope for, but is there any text or resource approximating it? Answer: The best resource for this is Abramsky and Jung's handbook chapter. I recall they had a table which cross-referenced various constructions and categories of domains, with the entries saying whether the construction worked in that category and what properties it had. However, properties of arrows like being a monic tended not to have terribly slick characterizations, because the availability of flat domains tends to ensure that they are often not terribly different from their set-theoretic counterpart. OTOH, properties that make some use of the order structure (like being an embedding-projection pair) tend to have fairly pretty characterizations. A minor point to watch out for is that there are actually two definitions of CPO in common use! Consumers of domain theory (like me) often prefer to work with omega-chains, since chains are pretty concrete objects; whereas producers of domain theory (like, er, your advisor) tend to prefer to work with directed sets, which are more general and have better algebraic properties. (Offhand I'm not sure if restricting to directed sets having countable base is equivalent to the omega-chain condition.) Something I found very helpful in building this kind of dictionary is to work through the solution of recursive domain equations in some category of things that aren't exactly domains. Two good choices are categories of PERs (eg, in models of polymorphism) and presheaves (eg, for name allocation). 
Metric spaces are another possibility, but I found them to be too similar to domains to help me build intuition.
{ "domain": "cstheory.stackexchange", "id": 401, "tags": "ct.category-theory, semantics, denotational-semantics, domain-theory" }
wrong network adapter with InetAddressFactory.newNonLoopback()
Question: Hi, over the last few days I tried to run android_tutorial_pubsub and ran into several problems. First I had the problem that NodeConfiguration nodeConfiguration = NodeConfiguration.newPrivate(); was executed, which won't let you connect from a different device. So I changed it into something like NodeConfiguration.newPublic(InetAddressFactory.newNonLoopback().getHostAddress()); This gives you the first network device which is not a loopback device, but it does not check what device it is. For me it gave vsnet0 instead of wlan0. So whenever I tried to connect to the subscribed topics or only executed rostopic info /chatter, it would complain about not being able to connect. This is somewhat expected, as the IP address of vsnet0 is not in the same subnet as wlan0 and my other computers which run ROS. I have now hardcoded the IP address for wlan0 from my Android phone, but this is only useful for my development environment, as I can't expect to have the same IP at other places. The question now is: how can I select the network device I want to use instead of just getting the first non-loopback device? Any idea or advice would be helpful. Thank you. Greetings, Markus Originally posted by bajo on ROS Answers with karma: 206 on 2012-10-08 Post score: 0 Original comments Comment by Lorenz on 2012-10-08: Maybe that's a bug in rosjava. Consider filing a ticket on the corresponding bug tracker. To come up with a workaround, I suggest you have a look at the source code of InetAddressFactory: http://mediabox.grasp.upenn.edu/roswiki/doc/api/rosjava/html/InetAddressFactory_8java_source.html Answer: Hi, thank you both for your hints. I managed to get the desired effect by changing the function newNonLoopback() so that it won't return point-to-point interfaces like my phone's radio network adapter vsnet0.
rosjava_core/rosjava/src/main/java/org/ros/address/InetAddressFactory.java

public static InetAddress newNonLoopback() {
  for (InetAddress address : getAllInetAddresses()) {
    // IPv4 only for now.
    boolean p2p;
    try {
      p2p = NetworkInterface.getByInetAddress(address).isPointToPoint();
    } catch (SocketException e) {
      throw new RosRuntimeException(e);
    }
    if (!address.isLoopbackAddress() && isIpv4(address) && !p2p) {
      return address;
    }
  }
  throw new RosRuntimeException("No non-loopback interface found.");
}

I will ask the maintainer of rosjava if he thinks it is wise to include it in his repository so others can use it too. Originally posted by bajo with karma: 206 on 2012-10-10 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Lorenz on 2012-10-10: Nice work. Best if you add a patch to the ticket that was referenced in @ahendrix answer or create a ticket on rosjava.googlecode.com with a patch attached. Comment by damonkohler on 2012-10-10: Please post issues to the rosjava issues list (http://code.google.com/p/rosjava/issues/list). I don't follow the trac tickets @ahendrix mentioned. Comment by bajo on 2012-10-12: I opened a ticket at http://code.google.com/p/rosjava/issues/list as this seems the correct place for it.
{ "domain": "robotics.stackexchange", "id": 11276, "tags": "rosjava, android" }
Using the sicktoolbox_wrapper with LMS1xx
Question: I am trying to connect an LMS111 through my computer's ethernet port, and use it with the sicktoolbox_wrapper. Will the package work properly with an LMS1xx? Are there any changes to the tutorial steps I need to do, particularly concerning the permission configuration of the ethernet port (eth1 in my case), in order to make it work? Originally posted by gpuszko on ROS Answers with karma: 11 on 2013-07-02 Post score: 1 Answer: Hi, for the LMS1xx series I think the driver is here: LMS1xx driver. I have used the driver package with a SICK LMS100. The command for running the driver after connecting it via Ethernet is rosrun LMS1xx LMS100 _host:=192.168.0.1 where 192.168.0.1 is the IP of the laser scanner (I think the IP I used is the default one, not sure though). Change it according to your IP settings. Originally posted by arp with karma: 172 on 2013-07-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by mysteriousmonkey29 on 2014-06-30: Hello, I am in the same situation as the person who asked this question, and tried your solution (I had to modify it to the following: rosrun lms1xx LMS100.launch _host:=(my_ip_adress) in order for it to say it could find the files and stuff). However, now I am getting the error: cannot launch node of type [LMS1xx/LMS1xx_node]: LMS1xx ros path [0]=/opt/ros/hydro/share/ros ros path[1]=/opt/ros/hydro/share ros path[2]=/opt/ros/hydro/stacks no processes to monitor Any idea what could be going wrong? Thanks, Randyh Comment by DavidSuh on 2014-08-26: Has anyone made this work?
{ "domain": "robotics.stackexchange", "id": 14787, "tags": "ros, lidar, sicklms, sicktoolbox-wrapper, sick" }
How is it possible to dissolve zinc oxide safely for skin use, preferably without acid?
Question: Zinc oxide comes as a powder that is insoluble in water but soluble in most acids. I once received a clear liquid containing zinc oxide from an Italian pharmacy as a deodorant. What could they have used to make the $\ce{ZnO}$ soluble and safe for use on the skin? Answer: As I previously stated in a comment, I think that the zinc oxide is not actually dissolved. That would just leave ions which you would hardly consider zinc oxide any more. $$\ce{ZnO + 2HCl <=> Zn^2+ (aq) + 2Cl- (aq) + H2O}$$ Instead I think that there are nanoparticles in use, that appear to be dissolved. In deodorant or other skin products it is likely to have quite a low concentration. An admittedly short search turned up a patent by Marc Paye, in which is stated:[1] [...] (f) 0.01% to 1%, more preferably 0.03% to 0.5% of Nanoparticulates of zinc oxide which functions as an anti-irritant system for anionic surfactants, [...] Or from another patent of the same company:[2] [...] (c) 0.5-10% (particularly 0.5-7.0, and more particularly 0.5-5.0%) of a small particle size zinc oxide, particularly a micronized zinc oxide or a nanoparticle size zinc oxide having a particle size in the range of 20 nanometers-200 microns; [...] I also found, but have not completely read an article that confirmed my theory.[3] As an ingredient in dry deodorants to reduce wetness under the arm, ZnO can be used between 0.05 and 10% by weight with average particle size in the range of 0.02–200 microns. ZnO may be used to provide a pH range desirable for deodorants designed for use on sensitive skins [174]. At least in the last article you should also find some information about how to synthesise these particles. Spoiler: It's nothing you would try at home. Paye, M.; Zinc oxide containing surfactant solution. US6774096 B1, 2004. Hall-Puzio, P.A.; Gale, A.E.V.; Brahms, J.C.F.; Deodorant with small particle zinc oxide. US 6358499 B2, 2002. Moezzi, A.; McDonagh, A. M.; Cortie, M.B.; Chem. Eng. J., 2012, 185–186, 1-22..
{ "domain": "chemistry.stackexchange", "id": 4374, "tags": "solubility, safety" }
Is tum_ardrone available in ROS Indigo?
Question: I am interested in using tum_ardrone package in one of my projects. I currently have a Ubuntu 14.04 machine with ROS Indigo installed. Since there isn't adequate information on tum_ardrone, I wanted to know whether it is available/compatible in ROS Indigo? Originally posted by Anirudh on ROS Answers with karma: 1 on 2015-01-19 Post score: 0 Answer: Yes it works fine. Just use the indigo branch from tum_ardrone's repository on github https://github.com/tum-vision/tum_ardrone/tree/indigo-devel Originally posted by Michael Panayiotou with karma: 61 on 2015-01-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 20615, "tags": "ros, tum-ardrone, ardrone, ros-indigo" }
What does "energy dumped into waves" mean?
Question: In A Very Short Introduction: Black Holes by Katherine Blundell, the author discusses the merger of two black holes in a binary system: The energy released in the merger of two supermassive black holes in a binary system is staggering, potentially more than all the light in all the stars in the visible Universe. Most of this energy is dumped into gravitational waves, ripples in the curvature of spacetime, which propagate across the Universe at the speed of light. The hunt is on for evidence of these waves. What's exactly meant by that? Answer: When two black holes merge, the gravitational mass of the final black hole is lower than the sum of the masses of the two merging black holes. Where has this mass gone? The answer is that it has been turned into energy in the form of gravitational waves. These are produced during the merger process at an accelerating rate, with peak power emitted just as the event horizons of the two black holes merge. The gravitational waves propagate outwards at the speed of light and the energy they carry is lost from the system. Your text is referring to gravitational waves from merging supermassive black holes. These have not yet been detected because the gravitational waves are emitted at frequencies of about twice the orbital frequency of the merging system. For supermassive black holes, these frequencies are far below the 20-2000 Hz sensitivity window of current ground-based gravitational wave detectors and so other means are necessary to search for them.
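For a rough sense of scale, here is a back-of-the-envelope sketch (the black-hole mass and the radiated fraction are assumed, order-of-magnitude values, not from the text): a few percent of the binary's rest-mass energy is carried off by the waves.

```python
M_SUN = 1.989e30           # kg
C = 2.998e8                # m/s

M_total = 2 * 1e8 * M_SUN  # assumed: two 1e8-solar-mass black holes
frac = 0.05                # assumed: ~5% of rest mass radiated in the merger

E_gw = frac * M_total * C**2   # E = (Delta m) c^2
print(f"E_gw ~ {E_gw:.1e} J")  # ~1.8e54 J
```

For comparison, the Sun radiates roughly $10^{44}\,$J of light over its entire lifetime, which is why a single supermassive merger can briefly outshine (in gravitational waves) all the starlight in the visible Universe.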
{ "domain": "astronomy.stackexchange", "id": 6422, "tags": "astrophysics, supermassive-black-hole" }
Why does multiplying by a scalar on both sides of a balanced chemical equation change related quantities?
Question: Stoichiometric coefficients appearing in a balanced chemical equation are unrelated to the actual concentrations of reactants/products at any stage of a chemical reaction. They give information only about the ratio in which the reactants and products are consumed and produced. Multiplying by a scalar on both sides of the equation doesn't change that ratio. Then, why does doing so result in changes in physical quantities related to the reaction, like the equilibrium constant or Gibbs energy? Answer: Maybe because the equilibrium constant, Gibbs energy, enthalpy, etc. are not the fundamental characteristics you think they are. Really, what does enthalpy mean, for example? Say, a reaction $\ce{A + B -> C + D}$ has a certain enthalpy. This means that running a reaction with 1 mole of A and 1 mole of B gives us a certain amount of heat. Now let's multiply everything (including coefficients and enthalpy) by 2. That would be $\ce{2A + 2B -> 2C + 2D}$. In other terms, running a reaction with 2 moles of A and 2 moles of B gives us twice as much heat, just like we would expect. Now think of the equilibrium constant. Say, we have a reaction $\ce{2NO2<->N2O4}$ and the constant just happens to be 4. This means ${\ce{[NO2]^2\over[N2O4]}}=4$. OK, and what if we write this reaction in a different way, like $\ce{4NO2<->2N2O4}$? Obviously, the equilibrium concentrations are still the same, but the expression for the equilibrium constant has changed to ${\ce{[NO2]^4\over[N2O4]^2}}$. That's the square of the previous expression, hence its numeric value would be $4^2=16$. So what? It's not like we can just up and measure this constant with some kind of "constantometer", anyway. It's the real concentrations that matter; and if we calculate the concentrations using either version of the constant, they will turn out the same. This is also in accordance with $\Delta G^\circ = -RT\ln K$. When you square the constant, its logarithm doubles, and so does $\Delta G^\circ$.
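The $\ce{NO2}$ example can be spot-checked in a couple of lines (the concentrations are made up for illustration):

```python
# Suppose the measured equilibrium concentrations happen to be:
NO2, N2O4 = 2.0, 1.0     # mol/L, illustrative values only

K1 = NO2**2 / N2O4       # the ratio used in the answer          -> 4.0
K2 = NO2**4 / N2O4**2    # same state, coefficients doubled      -> 16.0

print(K1, K2)            # 4.0 16.0
```

Either constant describes the same physical equilibrium; solving for the concentrations with K1 or with K2 gives identical results.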
{ "domain": "chemistry.stackexchange", "id": 6122, "tags": "physical-chemistry, thermodynamics, equilibrium, stoichiometry" }
Electric Dipole force exerted by a charged wire on a dipole
Question: Why is the force exerted by a charged wire on a dipole given as $F = P\,\frac{\mathrm{d}E(r)}{\mathrm{d}r}$, where $P$ is the dipole moment? Please explain in simple words and avoid using too many technical derivations. I am a beginner, so I'd appreciate it if you could explain how the formula originates in layman's terms. Thank you. Answer: Without equations: The ideal dipole is made up of two oppositely charged particles infinitely close to each other. So we can immediately deduce that if the electric field does not change along the direction of the dipole, it exerts no force (because the force it exerts on the positive particle will identically cancel that on the negative one). The only way the electric field can exert force on the dipole is when the force it exerts on the positive particle is different from the force on the negative one, so the sum of the two forces is proportional to the difference in electric field between these two adjacent points (because of opposite charge and $F=qE$). Now, as you know from calculus, the limit of the difference quotient of a function between two points as we take the separation to zero is just the derivative. In the case of an infinite wire, the electric field points radially outwards, so the force must too. The electric field also only changes radially, so the dipole will experience more force the more it points radially along the change of the electric field (e.g. if the dipole were parallel to the wire, there would be no change in the electric field "along" the dipole, so it would experience zero force). With equations (if you know multivariate calculus): \begin{align*} \vec{F} &= \vec{F}_+ + \vec{F}_- = q\vec{E}_+ - q\vec{E}_- \\ & = \lim_{d\rightarrow 0} qd \frac{\vec{E}(\vec{x} + \vec{d}) - \vec{E}(\vec{x} - \vec{d})}{d} \\ &= \left( \vec{p}\cdot\vec{\nabla}\right)\vec{E}(\vec{x}) \end{align*} $\left( \vec{p}\cdot\vec{\nabla}\right)$ just says in equations what we said in words: "variation along the dipole". 
The equation also shows that $\vec{F}$ is parallel to $\vec{E}$.
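A finite-difference sketch of the limit above (illustrative constants, assuming a radially oriented dipole in the $1/r$ field of an infinite wire) showing that the two-charge force converges to $p\,\mathrm{d}E/\mathrm{d}r$:

```python
k = 1.0                    # stands in for lambda/(2*pi*eps0), assumed value
q, d, r = 1e-3, 1e-6, 0.5  # charge, separation, distance from the wire

E = lambda x: k / x        # radial field of an infinite line charge ~ 1/r

# Exact force on charges +q at r + d/2 and -q at r - d/2 (both radial):
F_exact = q * (E(r + d / 2) - E(r - d / 2))

# Dipole formula: p * dE/dr, with p = q*d and dE/dr = -k/r**2.
p = q * d
F_formula = p * (-k / r**2)

print(F_exact, F_formula)  # agree to ~(d/r)^2 relative error
```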
{ "domain": "physics.stackexchange", "id": 20779, "tags": "electrostatics, electricity" }
How to use sensor_msgs/Image message?
Question: Hi, I'm a beginner on ROS. I want to use the topic "/usb_cam/image_raw" (this topic comes from the usb_cam package). So I made a package and a subscriber. However, there is still an error in the catkin_make process. How can I write the CMakeLists?

main source:

#include "ros/ros.h"
#include "sensor_msgs/Image.h"

void msgCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  ROS_INFO("height = %d, width = %d", msg->height, msg->width);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cam_data");
  ros::NodeHandle nh;
  ros::Subscriber cam_sub = nh.subscribe("/usb_cam/image_raw", 100, msgCallback);
  ros::spin();
  return 0;
}

CMakeLists:

cmake_minimum_required(VERSION 2.8.3)
project(test_cam)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  roscpp
  sensor_msgs
)

generate_messages(
  DEPENDENCIES
  sensor_msgs
)

catkin_package(
#  INCLUDE_DIRS include
  LIBRARIES test_cam
  CATKIN_DEPENDS roscpp sensor_msgs
  DEPENDS system_lib
)

include_directories(
  ${catkin_INCLUDE_DIRS}
)

add_executable(receive_cam src/receive_cam.cpp)
#target_link_libraries(test_cam ${catkin_LIBRARIES})
#add_dependencies(receive_cam test_cam_cpp)

Originally posted by anonymous28394 on ROS Answers with karma: 13 on 2016-10-31 Post score: 1

Answer: You have several problems: If you are going to use the generate_messages() macro to ensure message dependencies are built, you'll need a find_package call for message_generation; you'll also need to list message_runtime as a CATKIN_DEPENDS in your catkin_package macro. You'll also need to add message_generation as a build dependency and message_runtime as a run dependency in your package.xml. Documentation here. You have what appear to be incorrectly specified dependencies in your catkin_package macro. Is system_lib actually a CMake project containing libraries that you depend on? 
Are you providing the test_cam library for use with other catkin packages? Documentation here. You need to link your executable against libraries that your package depends on with target_link_libraries. Here's a version that should work:

cmake_minimum_required(VERSION 2.8.3)
project(test_cam)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  sensor_msgs
  message_generation
)

generate_messages(
  DEPENDENCIES
  sensor_msgs
)

catkin_package(
  INCLUDE_DIRS include
  CATKIN_DEPENDS roscpp message_runtime sensor_msgs
)

include_directories(
  ${catkin_INCLUDE_DIRS}
)

add_executable(receive_cam src/receive_cam.cpp)
target_link_libraries(receive_cam ${catkin_LIBRARIES})

Originally posted by jarvisschultz with karma: 9031 on 2016-10-31 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2016-10-31: Note: @breeze6478 is writing a subscriber. generate_messages(..) is typically not needed/used in that case, as he depends on sensor_msgs. I think the only problem (but this is a guess) was that generate_messages(..) is there. It's not needed. And of course the test_cam etc. Comment by jarvisschultz on 2016-10-31: That is for sure correct, unless the package is defining custom messages, no call to generate_messages() should be needed. Without more info on the package, it was tough to tell whether I should delete the call or fix the call. Comment by jarvisschultz on 2016-10-31: If he is only writing a subscriber, I should have probably just deleted the call.
{ "domain": "robotics.stackexchange", "id": 26105, "tags": "ros, sensor-msgs, image" }
Are hot stars like O-type stars entirely composed of helium?
Question: Hot stars like O-type stars show no hydrogen in their spectra. Does this mean they are made entirely of helium? Any explanation would be really helpful. Answer: The lines that appear in a star's spectrum mainly reflect its temperature, not its composition; see here. O-type stars start out with the same sort of composition as other stars, that is, they are mainly H and He (approximately 75% and 25% by mass) with traces of heavier elements.
{ "domain": "astronomy.stackexchange", "id": 1187, "tags": "star, spectra, hydrogen, helium" }
Was ChatGPT trained on Stack Overflow data?
Question: Has ChatGPT used highly rated and upvoted questions/answers from Stack Overflow in its training data? For me it makes complete sense to take answers that have upwards of 100 upvotes and include them in your training data, but people around me seem to think this hypothesis doesn't make sense. Is there a way to confirm this? Answer: ChatGPT is in the Large Language Models (LLM) category. The most (in)famous GPT model is probably GPT-3, because since then, researchers realized that LLMs mostly follow a predictable scaling law, thus the more data and the bigger model, the better. It is accurate to say that ChatGPT was trained with Stack Overflow data, but it should be all Stack Overflow instead of just most upvoted answers/comments. The Wikipedia page of GPT-3 and their paper mentions that GPT-3 was trained on multiple datasets, and one of which is the Common Crawl, which basically crawls everything on the Internet. Some data pre-processing was done before training, but the authors did not mention removing the comments, so we can say that it is all Stack Overflow data. If we look at the Common Crawl data in Sep 2022, there is indeed the domain com.stackoverflow in their list. Thus, while ChatGPT was trained on Stack Overflow data, it is trained on all Stack Overflow data instead of just most upvoted answers. However, if you think ChatGPT's code output is of high quality, think again, because Stack Overflow temporarily bans ChatGPT because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers (quoted from the link). Here is the justification of them: The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. 
There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure. EDIT: a comment below also confirmed that GPT-2, the predecessor of GPT-3 and ChatGPT, was trained with Stack Overflow data.
{ "domain": "ai.stackexchange", "id": 3875, "tags": "chat-bots, chatgpt, language-model" }
Finding minimum force to hold a particle in equilibrium
Question: A particle of mass $m$ is suspended from a point A on a vertical wall by means of a light inextensible string of length $b$. Find the magnitude and direction of the minimum force $\boldsymbol P$ that would hold the particle in equilibrium at a distance $a$ from the wall. I found this problem on MathSE and tried to help the OP with it, but with no success. This is my sketch of the situation: So, for fixed lengths $b,a$ we know that $\theta = \arctan\frac{a}{\sqrt{b^2-a^2}}$. Since the sum of the forces must be $0$, the magnitude of $\boldsymbol P$ can be expressed by: $$\|\boldsymbol P\|=\frac{mg-T\cos\theta}{\sin\alpha}=\frac{T\sin\theta}{\cos\alpha}.$$ I really don't know how to get another equation that relates $\alpha,\boldsymbol T$ and $\boldsymbol P$. Also, I'm struggling with expressing equations that show a path leading towards a raw maximization-minimization calculus problem. I appreciate your thoughts on the problem and will be very thankful if you could give me a hint. Answer: There's no such thing as minimum force for equilibrium. There is no range. Just a single value. For equilibrium, there should be no net external forces on the particle, so $\vec{F}_{net} = \vec{P} + \vec{T} + m\vec{g} = 0$. This would mean, in the horizontal direction: $P \cos \alpha = T \sin \theta$, or $P = T \sin \theta / \cos \alpha$. In the vertical direction, $P \sin \alpha + T \cos \theta = m g$. Solve these two, and you should get an equation that relates all the variables in the problem.
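A numeric sketch of the answer's suggestion (all input values below are assumed for illustration): pick the geometry, choose some direction $\alpha$ for $\boldsymbol P$, and solve the two equilibrium equations for $P$ and $T$.

```python
import math

m, g = 2.0, 9.81
b, a = 1.0, 0.6                 # string length and distance from the wall
theta = math.asin(a / b)        # since sin(theta) = a/b
alpha = 0.4                     # assumed direction of P above the horizontal

# Eliminating T from  P cos(alpha) = T sin(theta)  and
# P sin(alpha) + T cos(theta) = m g  gives:
P = m * g * math.sin(theta) / math.cos(alpha - theta)
T = P * math.cos(alpha) / math.sin(theta)

# Both equilibrium conditions hold:
print(math.isclose(P * math.cos(alpha), T * math.sin(theta)),
      math.isclose(P * math.sin(alpha) + T * math.cos(theta), m * g))
```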
{ "domain": "physics.stackexchange", "id": 18234, "tags": "homework-and-exercises, newtonian-mechanics, statics, equilibrium" }
Are reversed reverse preorder traversals equivalent to a postorder traversal?
Question: I was viewing the solutions of other Leetcode users for the classic "post-order traversal of a binary tree" question, when to my surprise, I found a ton of users simply finding the reverse preorder traversal (because it is considerably easier to implement iteratively) and then reversing the output. To my further surprise, I could not find a single counterexample to these conjectures, which I will state clearly: Conjecture 1: The post-order traversal of a tree $T$ is equivalent to the reversed reverse pre-order traversal of $T$. Conjecture 2: The reverse post-order traversal of a tree $T$ is equivalent to the reversed pre-order traversal of $T$. I thought about this plenty, and came up with an extremely informal justification: LRN = reverse(NRL) and RLN = reverse(NLR). Not sure if this is purely coincidental or not. Can anybody provide a proof of either conjecture? Furthermore, if true, do these conjectures extend to any arbitrary graph/digraph? Answer: This can be proven by induction on trees. I give details for conjecture 1 here. It is clearly true for the empty tree and for leaves. Suppose it is true for trees $l$ and $r$. Consider $t$ a node with $l$ and $r$ as left and right children, and $x$ as root. Then, if we denote by $Rpre(t)$ the reverse pre-order of $t$, by $post(t)$ the post-order of $t$, and by $\overline{seq}$ the reversed sequence of $seq$, we get: $$Rpre(t) = x\cdot Rpre(r)\cdot Rpre(l) = x\cdot \overline{post(r)}\cdot \overline{post(l)} = \overline{post(l)\cdot post(r)\cdot x} = \overline{post(t)}$$ Conjecture 2 is proven in the same way. I doubt there is an equivalent property for graphs, because children are unordered in graphs (but I may be wrong…)
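The induction can also be spot-checked mechanically (a small illustrative tree; the helper names are mine, not from the post):

```python
from collections import namedtuple

Node = namedtuple("Node", "val left right")

def preorder(t, mirror=False):
    """NLR, or NRL when mirror=True (the 'reverse pre-order')."""
    if t is None:
        return []
    kids = (t.right, t.left) if mirror else (t.left, t.right)
    return [t.val] + preorder(kids[0], mirror) + preorder(kids[1], mirror)

def postorder(t, mirror=False):
    """LRN, or RLN when mirror=True (the 'reverse post-order')."""
    if t is None:
        return []
    kids = (t.right, t.left) if mirror else (t.left, t.right)
    return postorder(kids[0], mirror) + postorder(kids[1], mirror) + [t.val]

t = Node(1, Node(2, Node(4, None, None), Node(5, None, None)),
            Node(3, None, None))

assert postorder(t) == preorder(t, mirror=True)[::-1]   # conjecture 1
assert postorder(t, mirror=True) == preorder(t)[::-1]   # conjecture 2
```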
{ "domain": "cs.stackexchange", "id": 20043, "tags": "binary-trees, graph-traversal" }
Why are half angles used in the Bloch sphere representation of qubits?
Question: Suppose we have a single qubit with state $| \psi \rangle = \alpha | 0 \rangle + \beta | 1 \rangle$. We know that $|\alpha|^2 + |\beta|^2 = 1$, so we can write $| \alpha | = \cos(\theta)$, $| \beta | = \sin(\theta)$ for some real number $\theta$. Then, since only the relative phase between $\alpha$ and $\beta$ is physical, we can take $\alpha$ to be real. So we can now write $$| \psi \rangle = \cos(\theta) | 0 \rangle + e^{i \phi} \sin(\theta)| 1 \rangle$$ My Question: Why are points on the Bloch sphere usually associated to vectors written as $$| \psi \rangle = \cos(\theta/2) | 0 \rangle + e^{i \phi} \sin(\theta/2)| 1 \rangle$$ instead of as I have written? Why use $\theta /2$ instead of just $\theta$? Answer: It is a convention, chosen so that $\theta$ is the polar angle of the point representing the state in the Bloch sphere. To see where this convention comes from, start from a state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. Remembering the normalisation constraint $|\alpha|^2+|\beta|^2=1$, and assuming without loss of generality $\alpha\in\mathbb R$, a natural way to parametrise the state is by defining an angle $\gamma$ such that $|\alpha|=\alpha=\cos\gamma$ and $|\beta|=\sin\gamma$. A generic state $|\psi\rangle$ thus reads $$|\psi\rangle=\cos\gamma|0\rangle + e^{i\varphi}\sin\gamma|1\rangle,$$ for some phase $\varphi\in\mathbb R$. Remember now that the Bloch sphere coordinates of a generic (pure) state $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$ have the explicit form \begin{align}\newcommand{\on}[1]{\operatorname{#1}}\newcommand{\bs}[1]{\boldsymbol{#1}} x\equiv\langle\psi|\sigma_x|\psi\rangle &= 2\on{Re}(\bar\alpha\beta),\\ y\equiv\langle\psi|\sigma_y|\psi\rangle &= 2\on{Im}(\bar\alpha\beta),\\ z\equiv\langle\psi|\sigma_z|\psi\rangle &= |\alpha|^2 - |\beta|^2.
\end{align} Relating these with our previous parametrisation with $\gamma$ we find $$z=\cos^2\gamma-\sin^2\gamma=\cos(2\gamma).$$ But the canonical way to define spherical coordinates uses $z=\cos\theta$, so if we wish to interpret the coefficients of the state as angles in the Bloch sphere, we have to set $\gamma=\theta/2$. See also the analogous question on physics.SE for more info.
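A quick numerical check of the coordinate formulas above (the sample angles are arbitrary):

```python
import numpy as np

# Numerical check of the Bloch-vector formulas for
# |psi> = cos(gamma)|0> + e^{i phi} sin(gamma)|1>.
gamma, phi = 0.7, 1.3
psi = np.array([np.cos(gamma), np.exp(1j * phi) * np.sin(gamma)])

sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

x = (psi.conj() @ sigma_x @ psi).real
y = (psi.conj() @ sigma_y @ psi).real
z = (psi.conj() @ sigma_z @ psi).real

# z = cos(2*gamma), so the polar angle of the Bloch vector is theta = 2*gamma:
assert np.isclose(z, np.cos(2 * gamma))
assert np.isclose(x, np.sin(2 * gamma) * np.cos(phi))
assert np.isclose(y, np.sin(2 * gamma) * np.sin(phi))
```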
{ "domain": "quantumcomputing.stackexchange", "id": 349, "tags": "quantum-state, bloch-sphere" }
Why are protons more common than hydride ion?
Question: I'm a high school student. I noticed that the $\ce{H+}$ ion is commonly present in my books, while I didn't find any mention of $\ce{H-}$ ions. However, I found on the internet that $\ce{H-}$ also exists, but it is less common. Because hydrogen has just one electron, it can either gain one electron to become $\ce{H-}$ or lose its electron to become $\ce{H+}$, so both should be equally likely to exist. Then why is $\ce{H+}$ more common than $\ce{H-}$? The answer to the question might be obvious to most of the users here with their knowledge, but please share a detailed explanation that is suitable for a high school student. Answer: This is because we live in a world dominated by oxygen and water. In other words, it is an oxidized world. Most metals occur naturally in the form of oxides, silicates, halides, or other derivatives. Hydrogen occurs as $\ce{H+}$. In a hypothetical world dominated by metals, all that could have turned out otherwise. Oxygen would be a scarcity, and would come in the form of metal oxides. Nitrogen would be found in nitrides, hydrogen in hydrides (so, a lot of $\ce{H-}$), and so on. There would be no free water or free oxygen. In our world, it is the other way around. Water is ubiquitous (that is, found pretty much everywhere); oxygen is even more so. $\ce{H-}$ can't exist in their presence. It will quickly react with either and cease to be $\ce{H-}$. It can only exist in an artificial environment. So it goes.
{ "domain": "chemistry.stackexchange", "id": 15871, "tags": "physical-chemistry, ions, electronic-configuration, hydrogen" }
Why, according to Hund's first rule, do electrons occupy orbitals with the same spin when partially filling a subshell?
Question: I get that, because of Coulomb repulsion, the electrons will not initially doubly occupy the same site but will singly occupy the orbitals. But while doing so, how do they know to keep their spins aligned along the same direction, either all up or all down? I read somewhere that by keeping their spins aligned along the same direction they are more shielded from nuclear attraction, but I wonder if that's the only explanation and if it's correct. Answer: To have the right picture in mind, you need to also take into account the Pauli exclusion between the electrons, being fermions, but also, more importantly, do not exclude the nucleus from the picture here! Now, why does rule one hold, you may ask? Well, it clearly cannot be due to the dipole-dipole interaction between electrons, as that is so insanely small (let's say with a dipole moment of one Bohr magneton, in an atom) that it will remain irrelevant for our discussion here. But as you say, it is largely due to the Coulomb interaction; still, that's not the whole story. Let's take a system of two electrons (1 and 2), associated with the wavefunction $\psi$, which will be composed of an orbital part $\phi$ and a spin part $\xi$: $$\psi=\phi_{orb}(\mathbf{r}_1,\mathbf{r}_2)\xi_{spin}(1,2)$$ The above should be antisymmetric, as we're dealing with fermions. With this in mind: if the spins are both up, then $\xi$ is symmetric, so $\phi_{orb}$ has to be antisymmetric, which means that when $r_1=r_2$, $\phi$ has to go through $0$ (antisymmetric functions). This in turn implies that the electrons cannot get close to each other. This line of reasoning may work, by leading to arguments solely along the lines of the Coulomb interaction between electrons, $V_{ee}$, but that's not entirely correct. The correct and complete answer is as follows (I will try to make it as intuitive as I can): the key to the answer lies in the $V_{ne}$ term, i.e. the Coulomb interaction between nucleus and electrons.
First case: if the electrons have opposite spins, then they're allowed to get close to each other, and this means that the one closer to the nucleus will now screen the other electron from the nucleus, so the electron a bit further away will experience a smaller effective nuclear charge. This results in that electron being weakly bound: not favorable! Second case: now if their spins are aligned, then due to the Pauli exclusion principle they cannot get as close to each other as before; in particular, neither of the electrons can get inside the other one's orbit, hence no more screening effect on the nucleus! Consequently we say both electrons here are strongly bound: favorable! It means the overall energy is lowered by having both electrons' spins aligned. Hund's first rule! In short: when the spins are anti-aligned, sometimes one electron will get in between the other electron and the nucleus, hence screening the effective charge of the nucleus. But when they have their spins aligned, they repel each other due to Pauli's principle; this in turn tends to lower the chance of screening configurations occurring, as the electrons will be further apart. Bear in mind, all this doesn't mean that because of Hund's first rule all the electrons will have their spins aligned (not possible); you should just interpret it as: the electrons will have their spins aligned when they can (energetically favorable). Now to decide which orbital states the electrons will occupy, Hund's second rule comes into play, which is another story! Main reference used: The Oxford Solid State Basics
{ "domain": "physics.stackexchange", "id": 30395, "tags": "quantum-mechanics, solid-state-physics, coulombs-law, electronic-band-theory, pauli-exclusion-principle" }
How to convert $K_S$ magnitude to $M_K$
Question: My question is how do I convert a magnitude $K_S$, from a system like 2MASS, to $M_K$, and what are the differences. A use of this $M_K$ can be seen in Delfosse et al. (2000), where the $\log(M/M_\odot) - M_K$ relation can be found. In the end I want to be able to use a $K_S$, and other possible parameters, to derive the mass of the star. Delfosse et al. (2000) - Delfosse, X., Forveille, T., Ségransan, D., et al. 2000, A&A, 364, 217 Answer: Since this seems to be a kind of niche question, I will answer and leave some comments for future souls who may be searching for the same problem. The solution is the basic $M = m + 5(\log_{10}(\text{parallax}) + 1)$, with the parallax in arcsec. The reason I'm going with K instead of V is that its relationship is more independent of the stellar metallicity. Parallaxes can be taken from catalogues like Hipparcos or Gaia. If anyone is searching and gets here with the goal of using $M_K$ in the $\log (M / M_\odot) - M_K$ relation from Delfosse et al. (2000), keep in mind that in the paper the $\log$ in the relation is $\log_{10}$! Delfosse et al. use the "Johnson-Cousins-CIT" system; for that I followed equation (12) of Carpenter, J. M. 2001, AJ, 121, 2851. If you want the $\log g$, I recommend the relation in: Bean, J. L., Sneden, C., Hauschildt, P. H., Johns-Krull, C. M. and Benedict, G. F. (2006), 'Accurate M dwarf metallicities from spectral synthesis: A critical test of model atmospheres', The Astrophysical Journal 652(2), 1604.
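The conversion is just the distance modulus written in terms of parallax. A tiny sketch (the function name is mine):

```python
import math

def absolute_magnitude(apparent_mag, parallax_arcsec):
    """M = m + 5*(log10(parallax) + 1), parallax in arcseconds."""
    return apparent_mag + 5 * (math.log10(parallax_arcsec) + 1)

# A star at 10 pc has parallax 0.1", so M equals m by construction:
print(absolute_magnitude(8.0, 0.1))   # 8.0
# A closer star (larger parallax) is intrinsically fainter than it appears:
print(absolute_magnitude(8.0, 0.2))   # ~9.51
```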
{ "domain": "astronomy.stackexchange", "id": 3979, "tags": "spectroscopy, photometry, elemental-abundances" }
Why is the photon clock equivalent to all clocks?
Question: I can understand why, if the speed of light is invariant, a photon clock would tick slower. I find this explanation very useful in terms of introducing the idea of time dilation (also because it allows the Lorentz formula to be derived intuitively, using only Pythagoras' theorem). But this approach has one important missing concept. A student might say: okay, I get why the photon clock would tick slower, but why is it an intrinsic property of time itself? Why is this not some effect of the mechanics of this specific clock? How are a pendulum clock, an atomic clock, circadian rhythms, a chemical clock, etc... all equivalent to the photon clock? Why is the slowdown of the ticking of the photon clock a probe of the very nature of all clocks and time itself, and not just a probe of the nature of this particular clock (more so if we consider that the explanation relies on the specific mechanism of this clock to work)? For example, some students might reason: a pendulum clock would slow down on the lunar surface, since the gravity is lower and therefore the pendulum would have a larger period, but we don't immediately jump to the conclusion that time itself has slowed down on the Moon with respect to Earth (in fact, ironically, in general relativity it is the other way around), just that the technical features of this particular clock make this happen, because we have altered its functionality by altering the physical environment where it operates. The same could be said of a spring clock submerged in water, for example. But if we don't think that the Moon's gravity slows time with respect to Earth's just because the pendulum clock ticks slower, or that water slows time just because the spring clock ticks slower, then why should we think that moving at a certain relative speed slows the flow of time just because the photon clock ticks slower? Answer: Invoke the principle of relativity.
An inertial observer carries both a light clock and a mechanical wristwatch, which agree when all are at rest. If they don't agree when the inertial observer is moving [with nonzero constant velocity] carrying these clocks, then that observer can distinguish being at rest from traveling with nonzero constant velocity. UPDATE: Q: What makes the photon clock special among all other clocks? A: Simplicity. It's easier to formulate, analyze, and interpret than other clocks. If the principle of relativity holds, it must turn out that one can eventually analyze any clock and get the same result as the light-clock---it probably takes a lot more analysis and interpretation [of the device, the surroundings, and the interactions].
{ "domain": "physics.stackexchange", "id": 91461, "tags": "special-relativity, time, time-dilation, thought-experiment" }
Why is the acceleration vector the spatial gradient of the lapse function?
Question: If we have a Lorentzian manifold $(M, g)$ with a foliation by spacelike surfaces $\Sigma_t$ with unit-normal vector field $n$, we can define the lapse function $N$ by $$ \partial_t = N n + X $$ where $X$ is the shift vector. I have seen several claims that, for any $Y$ tangent to $\Sigma_t$, we have $$ g(\nabla_n n, Y) = \nabla_Y \ln N = N^{-1} \nabla_Y N = N^{-1} Y(N), $$ but I cannot find a proof for this. It is often stated as a trivial consequence of the definitions but I cannot derive it myself. Is there something obvious I am missing? Answer: Write \begin{align*} \langle \nabla_n n, Y\rangle &= -\langle n, \nabla_Y n\rangle-\langle n, [n,Y]\rangle \\ &=-\frac 12 Y \langle n,n\rangle - \langle n, N^{-1} Y(N)\, n\rangle \\ &=N^{-1} Y(N), \end{align*} using $\langle n,n\rangle=-1$ in the last step. You can get the formula for $[n,Y]$ by writing $n=\frac 1N(\partial_t-X)$ and using the coordinate expression for the Lie bracket.
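Spelling out the bracket computation the answer alludes to (assuming coordinates adapted to the foliation, with $Y$ extended so that it stays tangent to the slices):
$$ [n,Y] = \Big[\tfrac{1}{N}(\partial_t - X),\, Y\Big] = \tfrac{1}{N}[\partial_t - X,\, Y] - Y\!\big(\tfrac{1}{N}\big)(\partial_t - X) = \underbrace{\tfrac{1}{N}[\partial_t - X,\, Y]}_{\text{tangent to }\Sigma_t} + \tfrac{Y(N)}{N}\, n, $$
so that $\langle n,[n,Y]\rangle = \tfrac{Y(N)}{N}\langle n,n\rangle = -N^{-1}Y(N)$, which is the term appearing in the chain of equalities.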
{ "domain": "physics.stackexchange", "id": 93430, "tags": "general-relativity, differential-geometry" }
Context-free grammar for tautologies in one variable
Question: Construct a context-free grammar for the set of tautologies in $p$ - that is, the set of formulae in $\{p, \text{true}, \text{false}, \land, \lor, \lnot, (, )\}$ which evaluate to $\text{true}$ for any assignment to $p$. My first step is to construct a CFG for all propositional formulae: $$ S \rightarrow S \land S \mid S \lor S \mid ( S ) \mid \lnot S\mid p \mid \text{true } \mid \text{ false} $$ What can I do from here? Answer: Every formula in $p$ is equivalent to one of four formulas: true, false, $p$, $\lnot p$. This suggests having four different nonterminals, each generating exactly the formulas equivalent to one of these four. Calling these $T,F,P,N$, we have, for example, $$ T \to T \land T \mid T \lor S \mid S \lor T \mid P \lor N \mid N \lor P \mid (T) \mid \lnot F \mid \mathrm{true}, $$ where $S \to T \mid F \mid P \mid N$; this additional nonterminal is only used as shorthand. In the same way, you can write rules that generate all tautologies over any fixed number of variables.
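The four-way classification behind the nonterminals can be sanity-checked by brute force: evaluate a formula under both assignments of $p$. A small sketch (the helper is mine, and it uses Python's and/or/not in place of $\land/\lor/\lnot$):

```python
def truth_class(formula):
    """Classify a formula over p into T (tautology), F (contradiction),
    P (equivalent to p), or N (equivalent to not p) -- the four
    nonterminals of the answer."""
    def value(p):
        expr = (formula.replace('true', ' True ')
                       .replace('false', ' False ')
                       .replace('p', repr(p)))
        return eval(expr)  # formulas use Python's and/or/not syntax
    vt, vf = value(True), value(False)
    return {(True, True): 'T', (False, False): 'F',
            (True, False): 'P', (False, True): 'N'}[(vt, vf)]

assert truth_class('p or not p') == 'T'            # a tautology: class T
assert truth_class('p and not p') == 'F'
assert truth_class('p and (true or false)') == 'P'
assert truth_class('not (p or false)') == 'N'
```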
{ "domain": "cs.stackexchange", "id": 14373, "tags": "context-free, propositional-logic" }
Is it possible to use air entrainment for air conditioning
Question: I have seen this video claiming to decrease the temperature inside a home by 5° using an air-conditioner based on the air entrainment principle (see the video here at 1m6s). EDIT: The video is now private, but you can see a picture of the air-conditioner here. Does such an air-conditioner really work? Answer: This device is not really working using the principles referenced in the other SE article. In this case the air inside the hut is being heated by the sun, and this is really about improving ventilation and letting that heated air out. I think, however, it would perform worse than a fully open window from a heat transfer point of view, though it would also act as a screen for privacy/exclusion reasons. I also have doubts that it works much better with the bottles than without them (i.e. just a series of holes). The article mentions the bottles are used to funnel cold air in, but holes would allow that too. Also, the bottles would cause more interference with the drawing out of hot air if the wind is blowing parallel to the wall than simple holes would. TL;DR: It would certainly help cool the air inside, but I think simpler methods would work better. EDIT: Apparently this question has appeared on multiple SE sites; see: Physics.SE and Skeptics.SE
{ "domain": "engineering.stackexchange", "id": 909, "tags": "heat-transfer, airflow, temperature" }
Limitations of DFA
Question: In this link it is mentioned: A DFA is not powerful enough to recognize many context-free languages because a DFA can't count. But counting is not enough -- consider a language of palindromes, containing strings of the form $ww^R$. Such a language requires more than an ability to count; it requires a stack. I would greatly appreciate it if someone could explain in simple words, or refer me to a tutorial on, what is meant by not being able to count. Moreover, does a DFA not accept $\{0^n1^n \mid n \ge 1\}$ because it can't count, or because it does not have a stack? Any help to clarify the above points is truly appreciated. Answer: A DFA has a finite memory. By finite memory, I mean it cannot store information of unbounded length. Let's take an example: $$ l = \{0^n1^n \mid n > 0\} $$ This is a very simple example of a language which cannot be recognized by a DFA, because to accept a word of the above language, the DFA needs to count the number of 0's seen, and that count can be arbitrarily large, as $n$ has no upper bound. Therefore this language is an irregular language. But if we had an upper bound on $n$, then every word of the language would be of bounded length and the language could be recognized by a DFA (just make that many states in the worst case). Now, let's take another example: $$ l = \{(01)^n \mid n > 0\} $$ The words of language $l$ can be arbitrarily long. But that does not imply that it is an irregular language, i.e. one that cannot be recognized by any DFA, because it has a pattern. The DFA does not need to memorize the input symbols seen several states before the current state; hence, a finite memory can do the job. In your example, $$ l = \{ww^R\} $$ the DFA has to memorize the contents of $w$ first to recognize its reverse, and there is no upper bound on the length of $w$. Hence it requires an infinite memory. This is a very nice tutorial playlist on the theory of computation by Prof. Harry Porter. The first 3 videos should clarify your doubts.
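The second example makes the point concrete: $(01)^n$ needs only a fixed, finite number of states, no matter how long the word is. A sketch of such a DFA (state numbering is mine):

```python
# DFA recognizing (01)^n, n > 0, with four states:
# 0 = start, 1 = just saw '0', 2 = saw a complete "01" block (accepting),
# 3 = dead/reject.
TRANS = {
    (0, '0'): 1, (0, '1'): 3,
    (1, '0'): 3, (1, '1'): 2,
    (2, '0'): 1, (2, '1'): 3,
    (3, '0'): 3, (3, '1'): 3,
}

def accepts(word):
    state = 0
    for ch in word:
        state = TRANS[(state, ch)]
    return state == 2  # only state 2 is accepting

assert accepts('01') and accepts('010101')
assert not accepts('') and not accepts('0') and not accepts('0110')
```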
{ "domain": "cs.stackexchange", "id": 13774, "tags": "automata, finite-automata, counting" }
(Coleman's lecture note) scattering in QFT
Question: I am currently reading Coleman's lecture notes on QFT (https://arxiv.org/abs/1110.5013). I have several questions regarding the scattering theory. Let $\phi$ be a real scalar field, and consider the interaction Hamiltonian $H_I(x)=g\phi(x)\rho(\vec{x})f(t)$, where $f$ represents the adiabatic turning on/off function. The following is pp. 91-92 of the lecture notes: Questions: What is the difference between the vacuum with respect to $H_0$ and the vacuum with respect to $H_0 + H_I$? In the lecture notes, these two vacua are denoted by different notations, $|0\rangle$ and $|0\rangle_P$. What is their difference? A somewhat ad hoc counterterm $a=E_0$ was added to $H_I$ to fix the divergent phase in $\langle0|S|0\rangle$. Adding $a$ will fix the divergent phase, but I think it does not necessarily mean that $\langle0|S|0\rangle =1$. (As described in the lecture notes, the correct expression should be $\langle0|S|0\rangle=e^{-i(\gamma_- +\gamma_+)}$.) Answer: The states $$|0\rangle~\in~{\cal H}_0 \quad\text{and}\quad |0\rangle_P~\in~{\cal H}$$ are the ground states for the Hamiltonians $\hat{H}_0$ and $\hat{H}$, respectively. More explicitly, using Coleman's notation on pp. 72-77, we have in the Schrödinger picture $$ |0 (t)\rangle_S~=~|0 (t)\rangle_S^{\rm in} ~=~e^{-i\gamma_-}|0 (t)\rangle_{P,S}\quad\text{for}\quad t\lesssim -\frac{T}{2}$$ and $$ |0 (t)\rangle_S~=~|0 (t)\rangle_S^{\rm out} ~=~e^{i\gamma_+}|0 (t)\rangle_{P,S}\quad\text{for}\quad t\gtrsim \frac{T}{2}.$$ The phase $e^{-i(\gamma_- +\gamma_+)}$ is apparently absorbed via the $O(T/\Delta)$ counterterm.
{ "domain": "physics.stackexchange", "id": 59901, "tags": "quantum-field-theory, hilbert-space, renormalization, scattering, s-matrix-theory" }
Errors in connecting an Arduino to ROS Groovy
Question: I'm trying to get an Arduino Uno hooked up to ROS Groovy; I've been following the Hello World tutorial for rosserial_arduino. When running rosserial_python serial_node.py /dev/ttyACM0 I'm able to connect for a second, and then I start getting the error: [ERROR] [Walltime: ...] Lost sync with device, restarting... I noticed my Arduino code is still running even though it lost the connection (I have a flashing LED indicator). It's not publishing anything to "Chatter", however - not even once before it disconnects. Please let me know how I might be able to fix this! Thanks! Originally posted by asriraman93 on ROS Answers with karma: 75 on 2013-07-09 Post score: 0 Answer: Hi there, I've come across a similar situation in ROS Indigo. This was mostly due to having the Arduino IDE open with the serial port open. After I uploaded the code to the Arduino, closed the IDE and restarted the rosserial server, the error was gone. Originally posted by bpinaya with karma: 700 on 2017-10-27 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 14857, "tags": "ros, arduino, rosserial, rosserial-python" }
Simulate 2-bit half adder with 2-bit inputs on Arduino without using logic gates
Question: You are to simulate a 2-bit half adder. You are not required to implement a real 2-bit half adder using logic gates; you are going to implement a black-box system using the microcontroller that functions as a 2-bit half adder. Your system will have 4 inputs (2 for the first 2-bit number, and another 2 for the second 2-bit number), and 2 outputs (2-bit output for the sum). Inputs: Each input is controlled via a separate push-down button. The inputs should be toggled each time the buttons are released,as opposed to being LOW while the button is released, and HIGH while the button is pushed. The initial values of inputs will be LOW. Outputs: Each output is controlled via a separate LED. The LED will be off while the output is LOW, and on while the output is HIGH. Since the states of the inputs will be invisible to the naked eye, you are to represent the input states with additional LEDs as well. byte numberOneFirstDigit = 3; byte numberOneSecondDigit = 2; byte numberTwoFirstDigit = 5; byte numberTwoSecondDigit = 4; byte sumFirstDigit=7; byte sumSecondDigit=6; byte carryDigit=12; byte numberOneFirstDigitButton = 9; byte numberOneSecondDigitButton = 8; byte numberTwoFirstDigitButton = 11; byte numberTwoSecondDigitButton= 10; void setup() { pinMode(numberOneFirstDigit,OUTPUT); pinMode(numberOneSecondDigit,OUTPUT); pinMode(numberTwoFirstDigit,OUTPUT); pinMode(numberTwoSecondDigit,OUTPUT); pinMode(sumFirstDigit,OUTPUT); pinMode(sumSecondDigit,OUTPUT); pinMode(carryDigit,OUTPUT); pinMode(numberOneFirstDigitButton,INPUT); pinMode(numberOneSecondDigitButton,INPUT); pinMode(numberTwoFirstDigitButton,INPUT); pinMode(numberTwoSecondDigitButton,INPUT); } void loop() { byte numberOneFirstDigit_state = digitalRead(numberOneFirstDigitButton); byte numberOneSecondDigit_state = digitalRead(numberOneSecondDigitButton); byte numberTwoFirstDigit_state = digitalRead(numberTwoFirstDigitButton); byte numberTwoSecondDigit_state = digitalRead(numberTwoSecondDigitButton); 
if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , LOW); 
digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == LOW && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , LOW); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , HIGH); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == LOW && 
numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , HIGH); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == LOW && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , LOW); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , HIGH); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , LOW); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == LOW && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , LOW); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , 
LOW); digitalWrite(carryDigit , HIGH); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == LOW) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , LOW); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , HIGH); digitalWrite(sumSecondDigit , LOW); digitalWrite(carryDigit , HIGH); } else if(numberOneSecondDigit_state == HIGH && numberOneFirstDigit_state == HIGH && numberTwoSecondDigit_state == HIGH && numberTwoFirstDigit_state == HIGH) { digitalWrite(numberOneFirstDigit , HIGH); digitalWrite(numberOneSecondDigit , HIGH); digitalWrite(numberTwoFirstDigit , HIGH); digitalWrite(numberTwoSecondDigit , HIGH); digitalWrite(sumFirstDigit , LOW); digitalWrite(sumSecondDigit , HIGH); digitalWrite(carryDigit , HIGH); } } I have showed the question and the circuit I did. I am open to any suggestions to improve my code (and maybe the circuit). I want a program which is not too many lines. Answer: Avoid code duplication Your implementation is like that of a truth table: for all possible inputs, you have pre-calculated the output. That is one way of doing it. But instead of having lots of if-then statements, implement it using a look-up table. This is quite easy if you pack all the bits in a single integer: static const byte table[16] = { /* 0b0000 -> */ 0b000, /* 0b0001 -> */ 0b001, /* 0b0010 -> */ 0b010, ... 
/* 0b1101 -> */ 0b100, /* 0b1110 -> */ 0b101, /* 0b1111 -> */ 0b110, }; Then in loop(), you just need to do this (un)packing using a combination of bit shifting, OR and AND operations: void loop() { byte input = digitalRead(numberOneFirstDigitButton) << 0 | digitalRead(numberOneSecondDigitButton) << 1 | digitalRead(numberTwoFirstDigitButton) << 2 | digitalRead(numberTwoSecondDigitButton) << 3; byte result = table[input]; digitalWrite(numberOneFirstDigit, (input >> 0) & 1); digitalWrite(numberOneSecondDigit, (input >> 1) & 1); digitalWrite(numberTwoFirstDigit, (input >> 2) & 1); digitalWrite(numberTwoSecondDigit, (input >> 3) & 1); digitalWrite(sumFirstDigit, (result >> 0) & 1); digitalWrite(sumSecondDigit, (result >> 1) & 1); digitalWrite(carryDigit, (result >> 2) & 1); } Note how this removes a huge amount of code duplication, which makes the code easier to read and easier to maintain. The implementation using a table has a counterpart in digital logic, and that is using a ROM chip to implement combinatorial logic. Alternatively, you can also implement this by parsing the input as two int values, adding them together, and then writing the result back to the output pins: unsigned int numberOne = digitalRead(numberOneFirstDigitButton) << 0 | digitalRead(numberOneSecondDigitButton) << 1; unsigned int numberTwo = digitalRead(numberTwoFirstDigitButton) << 0 | digitalRead(numberTwoSecondDigitButton) << 1; unsigned int result = numberOne + numberTwo; digitalWrite(sumFirstDigit, (result >> 0) & 1); digitalWrite(sumSecondDigit, (result >> 1) & 1); digitalWrite(carryDigit, (result >> 2) & 1); This avoids needing to make a table. Make constants static const or even static constexpr The constants used for the pin numbers, like numberOneFirstDigit, are stored in non-const variables. It is better to make them const, or even better, constexpr.
Also, make it a habit of marking constants and functions that don't have to be accessed by anything outside of the current source file as static. Both of these things can allow the compiler to generate more optimal code (also in size), and const will ensure the compiler generates an error if you accidentally do write to one of these constants. So: static constexpr byte numberOneFirstDigit = 3; ...
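The packing scheme and the table contents are easy to cross-check off-device; a small Python sketch (not Arduino code; the 16-entry layout mirrors the answer's snippet, with number one in bits 0-1 and number two in bits 2-3):

```python
# Rebuild the 16-entry lookup table from the answer: each entry is the
# 3-bit sum of the two packed 2-bit inputs (two sum bits plus a carry bit).
table = [((i & 0b11) + (i >> 2)) & 0b111 for i in range(16)]

def add_via_table(number_one, number_two):
    """Look up number_one + number_two (each 0..3), as the sketch's loop() would."""
    return table[number_one | (number_two << 2)]

# The table agrees with direct addition for every input combination.
for a in range(4):
    for b in range(4):
        assert add_via_table(a, b) == a + b
```

The same check confirms the entries shown above, e.g. index 0b1111 maps to 0b110 (3 + 3 = 6).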
{ "domain": "codereview.stackexchange", "id": 44101, "tags": "c, arduino" }
Pulse Radar using Barker code
Question: I am trying to build a pulse radar using Barker codes. I modulated the Barker code onto a sine wave and added some noise as well. Then I attached a matched filter whose parameters are shown in the image, and then made an attempt to demodulate, which I don't think is right. Can someone please help me with the matched filter: are the coefficients right? And what else can I add or remove to demodulate/detect? Answer: There's a disconnect between what the Barker code itself is, and its nominal autocorrelation function, to how you actually use it to modulate a signal. When using Barker codes in a radar system, you're modulating (aka encoding) some transmitted pulse of pulse length $\tau$. This pulse will then have N-chips, or subpulses, inside of it that correspond to each -1 or +1 of the Barker code. Of course, taking for example the Barker-13 code's series of -1 and +1, performing the autocorrelation does give you the expected result. But in a real system, you don't just have the thirteen individual -1/+1 floating around, you have a no-kidding real signal that you transmit/receive to process. Using a very-low frequency carrier to show how a pulse is modulated, a Barker-13 coded pulse looks like this (real part): The red-orange parts correspond to the -1 and blue to the +1 in the corresponding Barker code. If you modulate this down to baseband, then you would get: These are the signals you would use to define the matched-filter, either at baseband or some intermediate frequency. It is not the thirteen -1/+1 of the Barker code. If you take the proper matched filter (the time-reversed conjugate) and apply it, you get the same response for both: Which is the response expected.
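For reference, the "expected result" of correlating the raw code with itself can be sketched in a few lines of Python (plain lists; the Barker-13 sequence itself is standard):

```python
# Aperiodic autocorrelation of the Barker-13 code. A matched filter
# applied to the modulated pulse reproduces this shape after pulse
# compression: a peak of 13 with all sidelobes bounded by 1 in magnitude.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(code):
    n = len(code)
    return [sum(code[i] * code[i + lag] for i in range(n - lag))
            for lag in range(n)]

acf = autocorr(barker13)
assert acf[0] == 13                       # main peak
assert all(abs(s) <= 1 for s in acf[1:])  # low sidelobes
```

That 13:1 peak-to-sidelobe ratio is exactly what the matched filter is exploiting, whether it is defined at baseband or at an intermediate frequency.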
{ "domain": "dsp.stackexchange", "id": 12434, "tags": "matlab" }
Inapproximability of set cover: can I assume m=poly(n)?
Question: I am trying to show that a certain problem is inapproximable by a reduction from set cover. My reduction transforms an instance with ground set of size $n$ and $m$ sets into an instance of my problem where a certain parameter $r$ is of size $O(n+m)$. I can then show that an instance of set cover where the cover size is s corresponds to an instance of my problem where the size of the optimal solution is $2s$ (or something like this), and vice versa. I would like to invoke Raz-Safra to conclude that my problem is inapproximable up to a factor of $c \log{r}$, for some constant $c$. This would work fine if I could assume that $m$ is bounded by a fixed polynomial of $n$. Does anyone know if it is kosher to assume this? This is certainly true for the family of instances used in the standard NP-hardness proof for set cover, but I am not sure if this remains the case for the kind of PCP reductions employed by Raz and Safra. Answer: Yes, the number of sets m in a set-cover instance is polynomial in the number of elements. By the way -- the state of the art hardness results for Set-Cover are: With Noga Alon and Muli Safra, we showed how to use the Raz-Safra/Arora-Sudan PCP to get a better constant $c$ in the hardness factor $c\log n$. http://people.csail.mit.edu/dmoshkov/papers/k-restrictions/k-rest-full.ps Feige showed how to get the optimal hardness factor $(1-\epsilon)\ln n$, assuming $NP\not\subseteq DTIME(n^{\log\log n})$. http://www.cs.duke.edu/courses/spring07/cps296.2/papers/p634-feige.pdf I recently published a note on how to adapt Feige's reduction to an NP-hardness result (i.e., a result based on $P\neq NP$), assuming a plausible conjecture about PCPs (A conjecture I call "The Projection Games Conjecture" - a specialization of the 1993 "Sliding Scale Conjecture" to projection games). http://eccc.hpi-web.de/report/2011/112/ (I later found out that the reduction gives an optimal tradeoff between $\epsilon$ and the reduction blow-up).
{ "domain": "cstheory.stackexchange", "id": 1123, "tags": "cc.complexity-theory, approximation-hardness, set-cover" }
Energy of orbitals under a crystal field
Question: It is shown in octahedral crystal field splitting that both the $e_g$ orbitals have equal energy. But how can that be possible? The $d_{x^2-y^2}$ is directed along both the axes (x and y) whereas the $d_{z^2}$ is directed along only the z-axis, so shouldn't the latter have less energy than the former? Answer: They are equivalent in energy because symmetry says so, and because the calculations give that as an answer. Okay, that wasn't exactly satisfactory, was it? The next most complicated answer on the road from over-simplified to reality is that the $\mathrm{d_{z^2}}$ extends to both ligands, too: Do not forget the 'ring' in the $xy$ plane. By extending towards the ligands on the $z$ axis further than the $\mathrm{d_{x^2 - y^2}}$ orbitals do to their ligands, it is less stabilised than it looks, and the interaction with the equatorial ligands also destabilises. The really sophisticated and probably most correct answer is that we generally have a wrong view of those two orbitals. They are mathematically equivalent and would actually form a group of three: $\mathrm{d_{x^2 - y^2}}$, $\mathrm{d_{z^2 - x^2}}$ and $\mathrm{d_{y^2 - z^2}}$ — but because we can only have two, we need to add two together and commonly take the two that give '$\mathrm{d_{z^2}}$'. In some images of molecular orbitals you can see that they are much more alike than they initially look.
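The "add two together" remark can be made concrete with one line of standard real-orbital algebra (my own sketch, not part of the original answer): the angular factor of the conventional $d_{z^2}$ orbital is, up to normalization,

```latex
d_{z^2} \;\propto\; 3z^2 - r^2 \;=\; 2z^2 - x^2 - y^2
       \;=\; (z^2 - x^2) + (z^2 - y^2)
       \;\propto\; d_{z^2 - x^2} - d_{y^2 - z^2},
```

so the three lobe patterns listed in the answer are linearly dependent, and any two independent combinations span the same $e_g$ pair; the familiar picture of $d_{z^2}$ as "special" is an artifact of that choice of basis.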
{ "domain": "chemistry.stackexchange", "id": 5291, "tags": "coordination-compounds, orbitals, crystal-field-theory" }
Visualizing friction?
Question: I'm trying to simulate a mecanum wheel using a plugin which dynamically sets directional friction (related question). One of the tricky parts is that it's difficult to assess whether my plugin is having the desired results without a way to visualize the friction— I can show the contact points between the wheels and the ground plane, but fdir1/mu/mu2 are completely opaque. Is there some way to visualize these which I'm not seeing, or a plugin which can add an overlay to gzclient? Thanks Originally posted by mikepurvis on Gazebo Answers with karma: 67 on 2015-06-30 Post score: 2 Answer: As far as I know, there isn't a way to visualize friction forces in the 3D scene, but it should be possible to implement a plugin for that. If all you want is to check the values for mu, mu2 and fdir1, they should be visible on the left panel, just find the link you're interested in and check its collision -> surface -> friction. Unfortunately, you will need to reopen the the panel to refresh the numbers (click again on the link's name and see the Property/Value panel update. I've never tried to change friction dynamically though, so there's a possibility these values might not update (in that case we should create an issue for that). Originally posted by chapulina with karma: 7504 on 2015-07-01 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by mikepurvis on 2015-07-02: It does seem that they update (in the sidebar) in response to my plugin changing them. However, values alone are pretty much meaningless, especially when it's not super clear from the Gazebo documentation what direction the primary friction vector is even relative to.
{ "domain": "robotics.stackexchange", "id": 3797, "tags": "gazebo-plugin" }
Which math books would help in learning SLAM systems?
Question: Recently I started studying papers on SLAM systems by Durrant-Whyte for my research and I'm finding some difficulties in the math (matrices and probability) that is tackled in these papers. Which math books/topics would you recommend I go over before continuing with other papers? Answer: Probability and statistics. Stochastic signal processing. Estimation and Detection theory (I highly recommend that you find a class that uses Harry Van Trees's book and that offers office hours, that you enroll, and study, and that you reserve lots of time in your schedule to take it -- if you can learn that stuff by reading the book you're somewhere in the 99.9th percentile). "Optimal State Estimation" by Dan Simon is a really good Kalman filter book, but if you find yourself just reading the words and not getting the math, then you need to put it down and go study multivariate probability for a while. Matrix math has been mentioned -- but you can know the matrix math up the wazoo, and all it does is make it quicker to formulate the problems. Without the probability and statistics part, the matrix math will just help you screw up quicker, and with more elan.
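To give a flavour of the estimation-theory math these books build up to, here is a minimal scalar Kalman filter in Python (my own illustrative sketch; the noise variances q and r are made-up values, and real SLAM replaces the scalars with the matrix form):

```python
# One predict/update cycle for a static-state scalar model. EKF-SLAM
# generalizes this to a state vector (robot pose + landmarks) and
# covariance matrix, which is where the matrix/probability math comes in.
def kalman_step(x, P, z, q=0.01, r=0.5):
    """x, P: prior mean and variance; z: measurement;
    q, r: process and measurement noise variances (illustrative values)."""
    P = P + q                 # predict: variance grows by process noise
    K = P / (P + r)           # Kalman gain: blend by relative variances
    x = x + K * (z - x)       # update mean toward the measurement
    P = (1 - K) * P           # update (shrink) the variance
    return x, P

x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z)
# x converges toward ~1.0 and P shrinks with each measurement.
```

If the four update lines read as opaque symbol-pushing rather than as a weighted average justified by conditional Gaussian distributions, that is the gap the probability and statistics study fills.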
{ "domain": "robotics.stackexchange", "id": 2145, "tags": "slam, localization, mapping, probability" }
Scaling and incrementing non-zero elements of a NumPy matrix
Question: I have a NumPy matrix C and want to create a copy of it, cPrime, which applies some operation of the original matrix to all of its non-zero values. In the code below, for each non-zero element of C, I multiply by 30 and then add 1: import numpy as np size = 6 C = np.zeros((size,size), dtype=int) C[4][0] = 2 C[4][1] = 5 C[4][2] = 3 C[0][3] = 1 C[1][3] = 1 C[2][3] = 1 C[3][5] = 3 cPrime = np.zeros((size, size),dtype=int) for i in range(size): for j in range(size): if C[i][j] != 0: cPrime[i][j] = C[i][j]*30 + 1 This code works, but it feels inefficient. I'm about 99% sure that there's a really efficient way to achieve this goal, maybe using a masked array, but I haven't been able to figure it out. Answer: This is a typical use case for numpy.where: cPrime = np.where(C, C * 30 + 1, C) This is about twice as fast as (30 * C + 1) * (C != 0) and generalizes more easily to other conditions.
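A quick sanity check of both one-liners against the question's double loop, on the same matrix:

```python
import numpy as np

size = 6
C = np.zeros((size, size), dtype=int)
C[4, 0], C[4, 1], C[4, 2] = 2, 5, 3
C[0, 3], C[1, 3], C[2, 3] = 1, 1, 1
C[3, 5] = 3

# Loop version from the question.
cPrime_loop = np.zeros((size, size), dtype=int)
for i in range(size):
    for j in range(size):
        if C[i, j] != 0:
            cPrime_loop[i, j] = C[i, j] * 30 + 1

# Vectorized versions from the answer: np.where picks elementwise
# between the transformed and original values; the mask product
# zeroes out the "+1" wherever C was zero.
cPrime_where = np.where(C, C * 30 + 1, C)
cPrime_mask = (30 * C + 1) * (C != 0)

assert (cPrime_loop == cPrime_where).all()
assert (cPrime_loop == cPrime_mask).all()
```

Note the `C[4, 1]` tuple indexing, which is the idiomatic NumPy form of the question's `C[4][1]` chained indexing.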
{ "domain": "codereview.stackexchange", "id": 39066, "tags": "python, numpy, vectorization" }
Elementwise iterator adaptor
Question: There are many C++ implementations of the Euclidean vector. I.e., a vector in what is typically a 3- or 4-dimensional space. Something along the lines of struct vec3f { float x, y, z; }; It can of course also be a much more advanced implementation. Since we don't have a standard implementation, nearly every library provides their own. E.g., GLM, Assimp, FBX, etc. It is common to work with ranges of vectors such as std::vector<vec3f> (in computer graphics for instance). Furthermore, it is common to elementwise (by x, y, z) iterate through such a range. E.g., std::vector<vec3f> uv_coordinates; // Given from 3rd party library std::vector<float> raw_uv_coordinates; // Used later for low-level transformations for (const auto& position : uv_coordinates) { raw_uv_coordinates.emplace_back(position.x); raw_uv_coordinates.emplace_back(position.y); // The z-coordinate is discarded since it is not used. } Note how I can't use std::copy since there isn't a one-to-one mapping between uv_coordinates and raw_uv_coordinates. Say you have to support another vector type like struct Vector4 { double data[4]; }; This will further complicate the code. I solve this problem by introducing an iterator adaptor called elementwise. It wraps an iterator to a container of vector elements. E.g., auto first = elementwise(begin(uv_coordinates)); *first++; // Returns the x-coordinate of the 1st element. *first++; // Returns the y-coordinate of the 1st element. *first++; // Returns the z-coordinate of the 1st element. *first++; // Returns the x-coordinate of the 2nd element. ... This can be used to output all the coordinates std::copy( elementwise(begin(uv_coordinates)), elementwise(end(uv_coordinates)), std::ostream_iterator<float>{std::cout, ", "}); elementwise gets information about the underlying vector type (vec3f, Vector4, etc.) through the vector_traits class. Consequently, vector_traits must be specialized for each vector type you wish to use with elementwise. 
Here is the implementation along with all the needed helper classes //////////////////////////////////////////////////////////////////////////////// /// Utility //////////////////////////////////////////////////////////////////////////////// // is_const_iterator // Reference: http://stackoverflow.com/a/5423637/554283 template<typename T> struct is_const_pointer { static const bool value = false; }; template<typename T> struct is_const_pointer<const T*> { static const bool value = true; }; template <typename TIterator> struct is_const_iterator { typedef typename std::iterator_traits<TIterator>::pointer pointer; static const bool value = is_const_pointer<pointer>::value; }; //////////////////////////////////////////////////////////////////////////////// /// Vector Traits //////////////////////////////////////////////////////////////////////////////// enum class vector_indexing_method { brackets, xyz, xyzw }; template<typename Vector> struct vector_traits {}; // Must be specialized. Say you have a vector like // struct vector_xyz { float x, y, z; }; // // Then you must provide the specialization // // template<> // struct vector_traits<vector_xyz> { // using value_type = float; // const static vector_indexing_method indexing_method{vector_indexing_method::xyz}; // const static int num_elements{3}; // }; // Get vector traits from an iterator template<typename Iterator> using vector_traits_from_iterator = vector_traits<typename iterator_traits<Iterator>::value_type>; // Get vector value_type from an iterator template<typename Iterator> struct vector_value_type { using base_value_type = typename vector_traits_from_iterator<Iterator>::value_type; using type = conditional_t<is_const_iterator<Iterator>::value, const base_value_type, base_value_type>; }; template<typename Iterator> using vector_value_type_t = typename vector_value_type<Iterator>::type; //////////////////////////////////////////////////////////////////////////////// /// Elementwise Iterator Base 
//////////////////////////////////////////////////////////////////////////////// template<typename Iterator, vector_indexing_method> struct elementwise_iterator_base {}; // Specialization for brackets template<typename Iterator> struct elementwise_iterator_base<Iterator, vector_indexing_method::brackets> { vector_value_type_t<Iterator>& dereference() const { return (*current)[element]; } Iterator current; int element; }; // Specialization for xyz template<typename Iterator> struct elementwise_iterator_base<Iterator, vector_indexing_method::xyz> { vector_value_type_t<Iterator>& dereference() const { switch (element) { case 0: return current->x; case 1: return current->y; case 2: return current->z; default: assert(false); // Shouldn't be reached. Prevents compiler warnings. } } Iterator current; int element; }; // Specialization for xyzw template<typename Iterator> struct elementwise_iterator_base<Iterator, vector_indexing_method::xyzw> { vector_value_type_t<Iterator>& dereference() const { switch (element) { case 0: return current->x; case 1: return current->y; case 2: return current->z; case 3: return current->w; default: assert(false); // Shouldn't be reached. Prevents compiler warnings. 
} } Iterator current; int element; }; //////////////////////////////////////////////////////////////////////////////// /// Elementwise Iterator //////////////////////////////////////////////////////////////////////////////// // Template class template<typename Iterator, int N = vector_traits_from_iterator<Iterator>::num_elements> class elementwise_iterator : public boost::iterator_facade< elementwise_iterator<Iterator, N>, vector_value_type_t<Iterator>, typename iterator_traits<Iterator>::iterator_category> , public elementwise_iterator_base<Iterator, vector_traits_from_iterator<Iterator>::indexing_method> { public: static_assert(N <= vector_traits_from_iterator<Iterator>::num_elements, "elementwise_iterator: Exceeded vector's num_elements limit."); using difference_type = typename iterator_traits<Iterator>::difference_type; using base = elementwise_iterator_base<Iterator, vector_traits_from_iterator<Iterator>::indexing_method>; using base::current; using base::element; elementwise_iterator() : base{nullptr, 0} {} elementwise_iterator( Iterator current ) : base{current, 0} {} private: friend class boost::iterator_core_access; bool equal( const elementwise_iterator& other ) const { return current == other.current && element == other.element; } void increment() { element = (element + 1) % N; if (!element) ++current; } void decrement() { if (!element) { element = N; --current; } --element; } void advance( difference_type n ) { auto div = std::div(n, N); current += div.quot; element += div.rem; } difference_type distance_to( const elementwise_iterator& other ) const { return (other.current - current) * N - (other.element - element); } }; // Utility functions template<typename Iterator> auto elementwise( Iterator iterator ) { return elementwise_iterator<Iterator>{iterator}; } template<int N, typename Iterator> auto elementwise( Iterator iterator ) { return elementwise_iterator<Iterator, N>{iterator}; } Note that I use boost::iterator_facade to define the 
elementwise_iterator. Currently, the implementation supports element indexing through brackets (E.g., vec[0]) and through direct member access (E.g., vec.x). Additional indexing methods can easily be added by specializing elementwise_iterator_base. Now the example from before can be rewritten to std::vector<vec3f> uv_coordinates; // Given from 3rd party library std::vector<float> raw_uv_coordinates; // Used later for low-level transformations for_each( elementwise<2>(cbegin(uv_coordinates)), elementwise<2>(cend(uv_coordinates)), [&] ( auto& value ) { raw_uv_coordinates.emplace_back(value); }); Note that I have explicitly provided the template parameter N = 2. This clamps the output to the two first elements of each vector. We could even just do std::vector<vec3f> uv_coordinates; // Given from 3rd party library std::vector<float> raw_uv_coordinates; // Used later for low-level transformations copy( elementwise<2>(cbegin(uv_coordinates)), elementwise<2>(cend(uv_coordinates)), back_inserter(raw_uv_coordinates)); In contrast to the earlier code, the above will work with any vector type. The only requirement is that there exists a specialization of vector_traits for the vector type (this can be done once). I've made a code sample to demonstrate further. Herein, you will also find nested adaptations. E.g., a std::reverse_iterator of an elementwise_iterator. Have I missed any edge cases? What do you think of the naming/style/implementation? Any critique is welcome. Answer: I don't write C++, but one thing I've noticed is that your bracing style isn't consistent. 
You have the standard/expected C-style braces: struct is_const_iterator { typedef typename std::iterator_traits<TIterator>::pointer pointer; static const bool value = is_const_pointer<pointer>::value; }; Then you have the one-liner style: template<int N, typename Iterator> auto elementwise( Iterator iterator ) { return elementwise_iterator<Iterator, N>{iterator}; } And then you have the Java-style braces: void decrement() { if (!element) { element = N; --current; } --element; } Perhaps it's nitpicky, but you should strive to make your code look like it was written by one person - this looks like a Java and a C# programmer are fighting over which bracing style the C++ code base should be using: void decrement() { if (!element) { element = N; --current; } --element; } void advance( difference_type n ) { auto div = std::div(n, N); current += div.quot; element += div.rem; } I'd say just pick one, and stick to it ;)
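Returning to the adaptor itself: its behaviour is easiest to pin down with a reference model in another language. A Python generator sketch (the keyword argument n plays the role of the template parameter N):

```python
def elementwise(vectors, n=None):
    """Yield the components of each vector in order, clamped to the
    first n components (n=None means all of them)."""
    for v in vectors:
        for c in (v if n is None else v[:n]):
            yield c

uv_coordinates = [(0.5, 0.25, 9.0), (0.75, 1.0, 9.0)]
# Drop the unused z-coordinate, like elementwise<2> in the question.
raw_uv_coordinates = list(elementwise(uv_coordinates, n=2))
assert raw_uv_coordinates == [0.5, 0.25, 0.75, 1.0]
```

Such a model is also handy as a test oracle for the C++ edge cases: empty ranges, N equal to num_elements, and round-tripping through reverse iteration.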
{ "domain": "codereview.stackexchange", "id": 11225, "tags": "c++, c++11, iterator, boost, c++14" }
Higher dimension operator in free Dirac Lagrangian
Question: When discussing higher dimensional operators in a theory with fermions, why do I never see anyone ever talk about the dimension five operator $\partial_\mu\bar\psi\partial^\mu\psi$? How does the fermion field behave when such a strange operator is in its Lagrangian? $$\mathcal{L}=\bar\psi(i\partial_\mu\gamma^\mu-m)\psi+\frac{1}{\Lambda}\partial_\mu\bar\psi\partial^\mu\psi$$ Since it is still quadratic in $\psi$, I expect this to be fairly easy to analyze. What happens when such a term is added? Answer: This particular extra term may be removed by a field redefinition $$\psi\to \psi' = \psi - K \cdot \gamma^\mu \partial_\mu \psi $$ for an appropriate value of $K\sim 1/\Lambda$, up to terms that are even higher dimension operators. This also modifies the mass. This field redefinition is an explicit off-shell way to realize Vibert's comment that one is just modifying the mass in which he assumed the equations of motion. Mark Wayne is also right that (despite the equivalence, up to even higher-order terms), the propagator violates positivity of quantum mechanics (we really mean the positive norm of states with particle excitations which is linked to positive probabilities, a "must": excitations with negative norm are known as "[bad] ghosts"). If one writes down the full propagator for this (free) theory, one gets some additional poles which have a coefficient of the wrong sign. However, these pathologies occur at $p^2\sim \Lambda^2$ where we expect the theory to break down, anyway: the OP wrote the extra term as a correction in effective field theory that is meant to be used at energies $p^2\ll \Lambda^2$. This pathology with non-positivity may be fixed by new physics near $\Lambda$ and because of the equivalence, indeed, we can see that we may adjust the even higher-order terms (which may also arise from integrating out other fields and interactions) so that the full theory is exactly equivalent to the ordinary massive Dirac fermion and therefore has no ghosts. 
That means that as long as we use this as an effective field theory, knowing that it may have further higher-order modification and new physics in general that kicks in near $p^2\sim \Lambda^2$, it is healthy as an effective field theory. Again, if we wanted to use it as a full theory even at energies of order $\Lambda$, it would be inconsistent. In general, higher-order "free" terms like that – similarly $\square\phi\cdot \square\phi$ for bosons etc. – have been proposed as ways to make the propagators softer in the UV, like $1/p^4$ instead of $1/p^2$, which would improve the convergence of Feynman diagrams. The price is that if such a theory is to be taken seriously near $\Lambda$, the scale determining the size of the higher-dimensional operators, then this theory leads to new negative-norm excitations which is an inconsistency. So the addition of these new terms makes the theory break down "earlier", despite the intriguing softening of the UV problems.
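A sketch of the order-$1/\Lambda$ bookkeeping behind the field redefinition (my own derivation, not from the original answer; I take $K$ purely imaginary, writing $\psi \to \psi + ic\,\gamma^\nu\partial_\nu\psi$ with real $c$, so that the two kinetic cross terms add instead of cancelling; signs are worth re-checking):

```latex
% Substitute \psi \to \psi + i c\, \gamma^\nu \partial_\nu \psi and integrate by parts:
\begin{aligned}
i\bar\psi\gamma^\mu\partial_\mu\psi
  &\;\to\; i\bar\psi\gamma^\mu\partial_\mu\psi
     \;+\; 2c\,\partial_\mu\bar\psi\,\partial^\mu\psi \;+\; \mathcal{O}(c^2),\\
-\,m\bar\psi\psi
  &\;\to\; -\,m\bar\psi\psi \;-\; 2mc\,\bigl(i\bar\psi\gamma^\mu\partial_\mu\psi\bigr)
     \;+\; \text{total derivative} \;+\; \mathcal{O}(c^2).
\end{aligned}
% Choosing c = -1/(2\Lambda) cancels the \tfrac{1}{\Lambda}\partial_\mu\bar\psi\,\partial^\mu\psi
% term, leaving (1 + m/\Lambda)\, i\bar\psi\gamma^\mu\partial_\mu\psi - m\bar\psi\psi
% + \mathcal{O}(1/\Lambda^2). Rescaling the field to restore the canonical kinetic term
% shifts the mass, m \to m/(1 + m/\Lambda) \approx m - m^2/\Lambda.
```

The shift $m \to m - m^2/\Lambda$ is the concrete form of the statement above that the redefinition "also modifies the mass".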
{ "domain": "physics.stackexchange", "id": 5618, "tags": "quantum-field-theory, dirac-equation, 1pi-effective-action" }
How to reconstruct the signal if there were some temporary offsets
Question: I am not a signal processing expert. I have data like this: where the red is what I have and the black is what I want to have. Any idea how to reconstruct this? The code for generating the data is here: close all Ndata = 1000; X = nan(Ndata,1); sigma = 0.5; Y(1) = 50; offset = 0; for i =2:Ndata % consumption X(i,1) = -0.8*rand; if rand<0.01 % supply X(i,2) = 50; end if rand<0.5 % offset change if abs(offset)==0 offset = exprnd(50)*(randi(2)*2-3); else offset = 0; end end X(i,3) = offset ; % noise X(i,4) = sigma*randn; Y(i) = Y(i-1)+sum(X(i,1:2)); Z(i) = Y(i)+sum(X(i,3:4)); end plot(Z,'r') hold on plot(Y) The tricky thing is the fact that there is not only a small white noise, but there are also temporary excesses that are relatively large. Thank you for any advice. Answer: I just had a go at something stupidly simple: applying a median filter. It's not ideal, but it might be a pointer to how to proceed. The top plot is my instance of your plot. The bottom plot shows the "noiseless" line from the top plot, with the median filtered version in black. N2 = 10; for k=1:1000, idx1 = max(1,k-N2); idx2 = min(1000,k+N2); M(k) = median(Z(idx1:idx2)); end
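The same clamped sliding median is a few lines in any language; a Python sketch (half-width 10 as in the MATLAB snippet, with toy data in place of the question's simulation):

```python
from statistics import median

def median_filter(z, half_width=10):
    """Sliding median with edge clamping, mirroring the MATLAB loop."""
    n = len(z)
    out = []
    for k in range(n):
        lo = max(0, k - half_width)
        hi = min(n, k + half_width + 1)
        out.append(median(z[lo:hi]))
    return out

# A spurious offset shorter than half the window is suppressed entirely:
z = [50.0] * 30
for k in range(12, 17):      # 5-sample temporary offset
    z[k] += 40.0
filtered = median_filter(z, half_width=10)
# Every window holds at most 5 offset samples out of up to 21,
# so the median stays at the 50.0 baseline throughout.
```

This is why the median copes with the large temporary excesses where a moving average would not: the outliers never reach the middle of the sorted window.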
{ "domain": "dsp.stackexchange", "id": 3054, "tags": "matlab, noise, filtering" }
Ising model on torus
Question: It's expected that the pressure of the Ising model on a $d$-dimensional discrete torus with side length $L$ converges to the mean-field Ising model pressure as the dimension $d$ goes to infinity. Is there a rigorous proof for this fact? Intuitively this is indeed true. Thanks for any references. Answer: Yes, this was first proved (as far as I know) by Colin J. Thompson in his paper Ising model in the high density limit, published in Communication in Mathematical Physics 36(4), 255-262, 1974. There are also similar results for the magnetization, as well as inequalities relating these quantities on finite-dimensional lattices and in the mean-field approximation. You can also look at the (brief) discussion and the references given in Sections 2.5.4 and 3.10.2 in our book for additional information on this topic. There is also a discussion (with some proofs) of such issues in Section II.14 of Simon's book.
{ "domain": "physics.stackexchange", "id": 81582, "tags": "statistical-mechanics, ising-model" }
Parsing Semver versions
Question: I have written my own take on semantic versioning. Parsing it is not really hard, but I feel like my parsing could be more optimal, more readable and feel more like a parser. Currently, there is this unread method that I don't see in most parsers so if possible I would like to get rid of it, and the two methods readPreRelease() and readBuild() feel too complex. I'm only interested in parsing so my Version class got cleaned of equals, hashCode, compareTo and toString methods and I removed the related tests. If required I could re-add them, but to me that is superfluous in this code review request. For this code review, I would like to: Make the code more readable Let the code go forward and avoid going backwards (remove unread and all those currentPosition - 1), unless necessary. Avoid having so many booleans in the methods readPreRelease() and readBuild(). Write general remarks about the code if any. My code provides three classes: the VersionParser which does the actual parsing. the Version class, which was dumbed down because I don't want that to be code-reviewed, it's rather easy but the goal here is for the parser. The parsing entry point is here, through the valueOf method. the testing class I used to make sure my parsing is correct. Below those classes, you can see the BNF grammar for reference. Please note that I removed comments, as I want my code to be self-explanatory, so if it's unclear, that's something that should be factored in the review. 
import java.util.ArrayList; import java.util.List; import static java.util.Objects.requireNonNull; final class VersionParser { private final String source; private int currentPosition = 0; VersionParser(String source) { this.source = requireNonNull(source); } Version parse() { var major = readNumericIdentifier(); consume('.'); var minor = readNumericIdentifier(); consume('.'); var patch = readNumericIdentifier(); var preRelease = List.<String>of(); if (peek() == '-') { consume('-'); preRelease = readPreReleases(); } var build = List.<String>of(); if (peek() == '+') { consume('+'); build = readBuilds(); } check(isAtEnd(), "Unexpected characters in \"%s\" at %d", source, currentPosition - 1); return new Version(major, minor, patch, preRelease, build); } private boolean isDigit(int c) { return '0' <= c && c <= '9'; } private boolean isAlpha(int c) { return ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z'); } private boolean isNonDigit(int c) { return isAlpha(c) || c == '-'; } private boolean isAtEnd() { return currentPosition >= source.length(); } private int advance() { var c = source.charAt(currentPosition); currentPosition++; return c; } private int peek() { if (isAtEnd()) { return -1; } return source.charAt(currentPosition); } private void unread() { currentPosition--; } private void check(boolean expression, String messageFormat, Object... 
arguments) { if (!expression) { var message = String.format(messageFormat, arguments); throw new IllegalArgumentException(message); } } private void consume(char expected) { check(!isAtEnd(), "Early end in \"%s\"", source); var c = advance(); check(c == expected, "Expected %c, got %c in \"%s\" at position %d", expected, c, source, currentPosition - 1); } private int readNumericIdentifier() { check(!isAtEnd(), "Early end in \"%s\"", source); var start = currentPosition; var c = advance(); check(isDigit(c), "Expected a digit, got %c in \"%s\" at position %d", c, source, currentPosition - 1); if (c == '0') { return 0; } while (!isAtEnd()) { c = advance(); if (!isDigit(c)) { unread(); break; } } var string = source.substring(start, currentPosition); return Integer.parseInt(string); } private List<String> readPreReleases() { var preReleases = new ArrayList<String>(); preReleases.add(readPreRelease()); while (true) { if (peek() != '.') { return preReleases; } consume('.'); preReleases.add(readPreRelease()); } } /* * Basically, should be a valid number (without leading 0, unless for 0) or should contain at least one letter or dash. 
*/ private String readPreRelease() { var start = currentPosition; var isAllDigit = true; var startsWithZero = false; var isEmpty = true; while (!isAtEnd()) { var c = advance(); var isDigit = isDigit(c); var isNonDigit = isNonDigit(c); if (!isDigit && !isNonDigit) { unread(); break; } if (isEmpty && c == '0') { startsWithZero = true; } isEmpty = false; isAllDigit &= isDigit; } check(!isEmpty, "Empty preRelease part in \"%s\" at %d", source, currentPosition - 1); var length = currentPosition - start; var doesNotStartWithZero = !isAllDigit || !startsWithZero || length == 1; check(doesNotStartWithZero, "Numbers may not start with 0 except 0 in \"%s\" at position %d", source, start); return source.substring(start, currentPosition); } private List<String> readBuilds() { var builds = new ArrayList<String>(); builds.add(readBuild()); while (true) { if (peek() != '.') { return builds; } consume('.'); builds.add(readBuild()); } } private String readBuild() { var start = currentPosition; var isEmpty = true; while (!isAtEnd()) { var c = advance(); var isDigit = isDigit(c); var isNonDigit = isNonDigit(c); if (!isDigit && !isNonDigit) { unread(); break; } isEmpty = false; } check(!isEmpty, "Empty build part in \"%s\" at %d", source, currentPosition - 1); return source.substring(start, currentPosition); } } The Version class that hides the parser. import java.util.List; public final class Version { public static Version valueOf(String s) { return new VersionParser(s).parse(); } private final int major; private final int minor; private final int patch; private final List<String> preRelease; private final List<String> build; Version(int major, int minor, int patch, List<String> preRelease, List<String> build) { this.major = major; this.minor = minor; this.patch = patch; this.preRelease = List.copyOf(preRelease); this.build = List.copyOf(build); } // getters, equals, hashCode, toString, compareTo (+ implement Comparable) } The test class to make sure the parsing works. 
Requires Junit and AssertJ. import org.junit.jupiter.params.*; import org.junit.jupiter.params.provider.*; import java.util.*; import static java.util.stream.Collectors.toList; import static org.assertj.core.api.Assertions.*; class VersionTest { @ParameterizedTest @MethodSource("provideCorrectVersions") void testVersion_correct(String correctVersion) { assertThat(be.imgn.common.base.Version.valueOf(correctVersion)) .isNotNull(); } private static List<Arguments> provideCorrectVersions() { var versions = new String[] { "0.0.4", "1.2.3", "10.20.30", "1.1.2-prerelease+meta", "1.1.2+meta", "1.1.2+meta-valid", "1.0.0-alpha", "1.0.0-beta", "1.0.0-alpha.beta", "1.0.0-alpha.beta.1", "1.0.0-alpha.1", "1.0.0-alpha0.valid", "1.0.0-alpha.0valid", "1.0.0-alpha-a.b-c-somethinglong+build.1-aef.1-its-okay", "1.0.0-rc.1+build.1", "2.0.0-rc.1+build.123", "1.2.3-beta", "10.2.3-DEV-SNAPSHOT", "1.2.3-SNAPSHOT-123", "1.0.0", "2.0.0", "1.1.7", "2.0.0+build.1848", "2.0.1-alpha.1227", "1.0.0-alpha+beta", "1.2.3----RC-SNAPSHOT.12.9.1--.12+788", "1.2.3----R-S.12.9.1--.12+meta", "1.2.3----RC-SNAPSHOT.12.9.1--.12", "1.0.0+0.build.1-rc.10000aaa-kk-0.1", "999999999.999999999.999999999", "1.0.0-0A.is.legal" }; return Arrays.stream(versions) .map(Arguments::of) .collect(toList()); } @ParameterizedTest @MethodSource("provideIncorrectVersions") void testVersion_incorrect(String incorrectVersion) { assertThatThrownBy(() -> Version.valueOf(incorrectVersion)) .isInstanceOf(IllegalArgumentException.class) .hasNoSuppressedExceptions(); } private static List<Arguments> provideIncorrectVersions() { var versions = new String[] { "1", "1.2", "1.2.3-0123", "1.2.3-0123.0123", "1.1.2+.123", "1.2.3+", "+invalid", "-invalid", "-invalid+invalid", "-invalid.01", "alpha", "alpha.beta", "alpha.beta.1", "alpha.1", "alpha+beta", "alpha_beta", "alpha.", "alpha..", "beta", "1.0.0-alpha_beta", "-alpha.", "1.0.0-alpha..", "1.0.0-alpha..1", "1.0.0-alpha...1", "1.0.0-alpha....1", "1.0.0-alpha.....1", "1.0.0-alpha......1", 
"1.0.0-alpha.......1", "01.1.1", "1.01.1", "1.1.01", "1.2", "1.2.3.DEV", "1.2-SNAPSHOT", "1.2.31.2.3----RC-SNAPSHOT.12.09.1--..12+788", "1.2-RC-SNAPSHOT", "-1.0.3-gamma+b7718", "+justmeta", "9.8.7+meta+meta", "9.8.7-whatever+meta+meta", "999999999999999999.999999999999999999.999999999999999999", "999999999.999999999.999999999----RC-SNAPSHOT.12.09.1-------------..12" }; return Arrays.stream(versions) .map(Arguments::of) .collect(toList()); } } The Backus–Naur Form grammar, as taken from the semver.org website. <valid semver> ::= <version core> | <version core> "-" <pre-release> | <version core> "+" <build> | <version core> "-" <pre-release> "+" <build> <version core> ::= <major> "." <minor> "." <patch> <major> ::= <numeric identifier> <minor> ::= <numeric identifier> <patch> ::= <numeric identifier> <pre-release> ::= <dot-separated pre-release identifiers> <dot-separated pre-release identifiers> ::= <pre-release identifier> | <pre-release identifier> "." <dot-separated pre-release identifiers> <build> ::= <dot-separated build identifiers> <dot-separated build identifiers> ::= <build identifier> | <build identifier> "." 
<dot-separated build identifiers> <pre-release identifier> ::= <alphanumeric identifier> | <numeric identifier> <build identifier> ::= <alphanumeric identifier> | <digits> <alphanumeric identifier> ::= <non-digit> | <non-digit> <identifier characters> | <identifier characters> <non-digit> | <identifier characters> <non-digit> <identifier characters> <numeric identifier> ::= "0" | <positive digit> | <positive digit> <digits> <identifier characters> ::= <identifier character> | <identifier character> <identifier characters> <identifier character> ::= <digit> | <non-digit> <non-digit> ::= <letter> | "-" <digits> ::= <digit> | <digit> <digits> <digit> ::= "0" | <positive digit> <positive digit> ::= "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" <letter> ::= "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" | "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" ```` Answer: On the whole, the code was easy to read, and the tests were concise (apart from the fully qualified be.imgn.common.base.Version.valueOf call. Here's a few things to think about. Object Lifetime The VersionParser class takes in a String and then provides a parse method which actually does the parsing work. However, this method can only ever be called once. If it's called more than once, then it fails, because the source string has already been parsed and the method assumes that it's only called on a newly constructed parser. This feels wrong. It's relying on the clients knowing too much about how the class works. A better approach might have been to make the class constructor private and have a static parse method for the interface, which spun up the data, if required, and performed the parse. 
Alternately, parse could simply reset processed back to the beginning of the source data and reprocess it, or even return a cached version... Circular Dependency Circular dependencies as a general rule are bad. They have a tendency to result in tightly coupled code that bites you, just as you decide you want to reuse a bit of the code somewhere else. As it stands, you've got a circular dependency. Your Version calls VersionParser which then creates a Version. To me, it seems like a VersionParser might need to know about a Version, in order to construct it, but a Version shouldn't really need to know about a VersionParser. If it's !isDigit && !isNonDigit what is it? One of your goals is self-explanatory code. I found this line less than obvious; my instinct was non-digit is the same as not-a-digit, which is everything that isn't a digit... However that's clearly not the case. A NonDigit appears to be an alpha, or '-'. A better name might help, however you only actually seem to use it in this check. Maybe a method isValidVersionCharacter, which evaluated digits and 'non digits', would be clearer... Check There's quite a lot of ! in your code. For me, this made some of the calls to check awkward to process. check(!isAtEnd(), "Early end in \"%s\"", source); I'm not sure if it's that check sounds a bit like if, so I'm expecting it to perform the print/throw action if the condition is true, or if it's that "Check not is at end" sounds awkward. verify would work better for me I think, because it's closer to assert, so I'm of a mindset that it's expecting the condition to be true, or it will perform the print/throw action (i.e. the opposite of the if processing). I suspect this is very subjective, but possibly something to consider. Since I'm thinking about check, its first parameter is a boolean expression. This is a bit misleading. It doesn't take an expression; it takes in a boolean value that, if it isn't true, will result in an exception.
A better name for the parameter may help with some of my previous misunderstandings. To get a feel for possible approaches for the negatives, I went through the code and noticed several possible small refactorings, with a goal of making small improvements:

- Replaced your while constructs with do..while, since the operation is always performed at least once.
- isAtEnd seemed to result in a lot of !isAtEnd, so I inverted it to isMoreToProcess.
- I think you're missing three invalid test cases, so I added them: "01.0.4", "0.01.4", "0.0.04".
- unread and peek seem to be doing the same thing in different ways. I removed unread to make the approach more straightforward to follow.
- Since I used peek, which checks if we're at the end as part of it, I didn't need to check if we have reached the end during loops.
- The isEmpty variables make your iterations busier than they need to be, so I extracted them from the loop.
- doesNotStartWithZero had a lot of negatives, so I inverted it to numberWithZeroPrefix.
- I removed the construction requirement for the parser, so that parse takes the information it's expected to parse.
This makes calling parse more consistent (you get the same behaviour if you call it once or 5 times) The resulting code: final class VersionParser { private String source; private int currentPosition = 0; Version parse(String source) { this.source = requireNonNull(source); var major = readNumericIdentifier(); consume('.'); var minor = readNumericIdentifier(); consume('.'); var patch = readNumericIdentifier(); var preRelease = List.<String>of(); if (peek() == '-') { consume('-'); preRelease = readPreReleases(); } var build = List.<String>of(); if (peek() == '+') { consume('+'); build = readBuilds(); } check(!isMoreToProcess(), "Unexpected characters in \"%s\" at %d", source, currentPosition - 1); return new Version(major, minor, patch, preRelease, build); } private boolean isDigit(int c) { return '0' <= c && c <= '9'; } private boolean isAlpha(int c) { return ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z'); } private boolean isNonDigit(int c) { return isAlpha(c) || c == '-'; } private boolean isMoreToProcess() { return !(currentPosition >= source.length()); } private int advance() { var c = source.charAt(currentPosition); currentPosition++; return c; } private int peek() { if (!isMoreToProcess()) { return -1; } return source.charAt(currentPosition); } private void check(boolean expression, String messageFormat, Object... 
arguments) { if (!expression) { var message = String.format(messageFormat, arguments); throw new IllegalArgumentException(message); } } private void consume(char expected) { check(isMoreToProcess(), "Early end in \"%s\"", source); var c = advance(); check(c == expected, "Expected %c, got %c in \"%s\" at position %d", expected, c, source, currentPosition - 1); } private boolean isDigitNext() { return isDigit(peek()); } private boolean isValidIdentifierCharacterNext() { var nextCharacter = peek(); return isDigit(nextCharacter) || isNonDigit(nextCharacter); } private int readNumericIdentifier() { check(isMoreToProcess(), "Early end in \"%s\"", source); var start = currentPosition; var c = advance(); check(isDigit(c), "Expected a digit, got %c in \"%s\" at position %d", c, source, currentPosition - 1); if (c == '0') { return 0; } while (isDigitNext()) { advance(); } var string = source.substring(start, currentPosition); return Integer.parseInt(string); } private List<String> readPreReleases() { var preReleases = new ArrayList<String>(); do { preReleases.add(readPreRelease()); if (peek() != '.') { return preReleases; } consume('.'); } while(true); } /* * Basically, should be a valid number (without leading 0, unless for 0) or should contain at least one letter or dash. 
*/ private String readPreRelease() { var start = currentPosition; var isAllDigit = true; check(isValidIdentifierCharacterNext(), "Empty preRelease part in \"%s\" at %d", source, currentPosition - 1); boolean startsWithZero = peek() == '0'; while (isValidIdentifierCharacterNext()) { var c = advance(); isAllDigit &= isDigit(c); } var length = currentPosition - start; var numberWithZeroPrefix = isAllDigit && startsWithZero && length != 1; check(!numberWithZeroPrefix, "Numbers may not start with 0 except 0 in \"%s\" at position %d", source, start); return source.substring(start, currentPosition); } private List<String> readBuilds() { var builds = new ArrayList<String>(); do { builds.add(readBuild()); if (peek() != '.') { return builds; } consume('.'); } while(true); } private String readBuild() { var start = currentPosition; check(isValidIdentifierCharacterNext(), "Empty build part in \"%s\" at %d", source, currentPosition - 1); while (isValidIdentifierCharacterNext()) { advance(); } return source.substring(start, currentPosition); } }
{ "domain": "codereview.stackexchange", "id": 40966, "tags": "java" }
Function to sort items with an array of keys
Question: I'm working on a function to sort two elements using several properties. I want to know if the function is well structured. function comparePropsOfAWithB(a, b, props) { var prop = props[0]; var propA = a[prop]; var propB = b[prop]; if (propA === propB) { var array = props; array.shift(); return props.length ? comparePropsOfAWithB(a, b, props) : false } else if (propA < propB ) { return true } else { return false } } This function is meant to be used alongside the sort method like this: array.sort(function(a,b) { return comparePropsOfAWithB(a,b,["a","b","c"]) }) The intent of the function is to compare two objects using several values to determine if they are different. In the example, the key "a" will be tested, and if it fails, "b" will be tested until the last element of the array is reached. Answer: That's not how the comparison function for Array.sort() works at all: it should return negative, zero, or positive. Eating up all or part of the props argument (due to your array.shift() call) is bad practice. The usage is cumbersome: array.sort(function(a,b) { return comparePropsOfAWithB(a,b,["a","b","c"]) }) Wouldn't it be nicer to write a propertyComparator(…) function that returns a function? array.sort(propertyComparator('a', 'b', 'c')) Something like this: function propertyComparator() { var props = arguments; return function(a, b) { for (var i = 0; i < props.length; i++) { var aProp = a[props[i]]; var bProp = b[props[i]]; if (aProp < bProp) return -1; if (aProp > bProp) return +1; } return 0; }; }
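As a quick usage check (sample data made up for illustration; the factory is repeated so the snippet is self-contained), the multi-key tie-breaking works like this:

```javascript
function propertyComparator() {
  var props = arguments;
  return function (a, b) {
    for (var i = 0; i < props.length; i++) {
      var aProp = a[props[i]];
      var bProp = b[props[i]];
      if (aProp < bProp) return -1;
      if (aProp > bProp) return +1;
    }
    return 0;
  };
}

// Sort by last name; ties are broken by first name.
var rows = [
  { last: "Smith", first: "Zoe" },
  { last: "Jones", first: "Ann" },
  { last: "Smith", first: "Al" }
];
rows.sort(propertyComparator("last", "first"));
// rows is now: Jones/Ann, Smith/Al, Smith/Zoe
```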
{ "domain": "codereview.stackexchange", "id": 14607, "tags": "javascript, recursion, sorting" }
A problem about whitening of a signal
Question: I intend to whiten the spectrum of a time series using a point-wise normalization in the frequency domain. In my first test, it looks pretty good. figure; n = 2e3; t = 1:n; m = n/2+1; f = linspace(0,1,m); x = rand(1,n) - 0.5; y = fft(x); subplot(2,2,1); plot(t,x,'k'); axis tight; xlabel('t/s'); title('original signal x'); subplot(2,2,2); plot(f,abs(y(1:m)),'k'); axis tight; xlabel('f/hz'); title('original spectrum'); y1 = y ./ abs(y); x1 = real(ifft(y1)); y2 = fft(x1); x2 = real(ifft(y2)); subplot(2,2,3); plot(t,x1,'k',t,x2-x1,'r'); axis tight; xlabel('t/s'); title('whitened signal x1(black) and x2-x1(red)'); subplot(2,2,4); plot(f,abs(y1(1:m)),'k',f,abs(y2(1:m)),'r'); axis tight; xlabel('f/hz'); title('whitened spectrum y1(black) and y2(red)'); The spectrum is flat after whitening. But when I windowed the spectrum, something strange happened. figure; n = 2e3; t = 1:n; m = n/2+1; f = linspace(0,1,m); x = rand(1,n) - 0.5; y = fft(x); subplot(2,2,1); plot(t,x,'k'); axis tight; xlabel('t/s'); title('original signal x'); subplot(2,2,2); plot(f,abs(y(1:m)),'k'); axis tight; xlabel('f/hz'); title('original spectrum'); y1 = y ./ abs(y); k = n/10; w = sin(linspace(0,pi/2,k)); y1(1:k) = y1(1:k) .* w; y1(m:-1:m-k+1) = y1(m:-1:m-k+1) .* w; y1(n-m+3:n) = y1(m-1:-1:2); x1 = real(ifft(y1)); y2 = fft(x1); x2 = real(ifft(y2)); subplot(2,2,3); plot(t,x1,'k',t,x2-x1,'r'); axis tight; xlabel('t/s'); title('whitened signal x1(black) and x2-x1(red)'); subplot(2,2,4); plot(f,abs(y1(1:m)),'k',f,abs(y2(1:m)),'r'); axis tight; xlabel('f/hz'); title('whitened spectrum y1(black) and y2(red)'); As shown in subplot 3, x1 and x2 are the same (the difference is zero; see red line), but their spectra in subplot 4 (black and red) are different. After IFFT of y1 to x1 and FFT of x1 to y2, y2 differs from y1. Answer: As suggested by @Drazick, I post this as an answer.
The good code according to the comments: figure; n = 2e3; t = 1:n; m = n/2+1; f = linspace(0,1,m); x = rand(1,n) - 0.5; y = fft(x); subplot(2,2,1); plot(t,x,'k'); axis tight; xlabel('t/s'); title('original signal x'); subplot(2,2,2); plot(f,abs(y(1:m)),'k'); axis tight; xlabel('f/hz'); title('original spectrum'); y1 = y ./ abs(y); x1 = real(ifft(y1)); y2 = fft(x1); x2 = real(ifft(y2)); subplot(2,2,3); plot(t,x1,'k',t,x2-x1,'r'); axis tight; xlabel('t/s'); title('whitened signal x1(black) and x2-x1(red)'); subplot(2,2,4); plot(f,abs(y1(1:m)),'k',f,abs(y2(1:m)),'r'); axis tight; xlabel('f/hz'); title('whitened spectrum y1(black) and y2(red)'); The spectrum is flat after whitening.
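For readers without MATLAB, the unwindowed check can be sketched in NumPy (added here, not part of the original post). The key invariant is that y1 keeps the Hermitian symmetry of y, so ifft(y1) is real up to round-off; in the windowed version of the question that symmetry appears to be broken, because the mirrored half y1(n-m+3:n) = y1(m-1:-1:2) is copied without complex conjugation, so real(ifft(y1)) drops a nonzero imaginary part and fft(x1) no longer reproduces y1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.random(n) - 0.5          # original signal

y = np.fft.fft(x)
y1 = y / np.abs(y)               # point-wise spectral whitening
x1 = np.real(np.fft.ifft(y1))    # whitened time series

# y1 inherits the Hermitian symmetry of y, so ifft(y1) is already real
# (up to round-off) and the extra round trip changes nothing.
y2 = np.fft.fft(x1)
x2 = np.real(np.fft.ifft(y2))

assert np.allclose(np.abs(y1), 1.0)   # flat whitened spectrum
assert np.allclose(np.abs(y2), 1.0)   # still flat after the round trip
assert np.allclose(x1, x2)            # signal unchanged
```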
{ "domain": "dsp.stackexchange", "id": 3503, "tags": "fft, frequency-spectrum, signal-analysis, ifft" }
Prove by induction $T(n) = T(\lfloor\frac{n}{2}\rfloor)+n^2 \in \Theta (\log_2 n)$
Question: Text of the problem: Solve the following recurrence equation and prove it by applying the principle of induction: $T(n) = \begin{cases} 3, \ n \le 2 \\ T(\lfloor\frac{n}{2}\rfloor)+n^2, \ n \ge 3 \end{cases}$ After doing the recursion tree, I find that the complexity (if I'm not wrong) is $ \Theta (\log_2 n) $. But I don't know how to do the induction step. Answer: First of all note that $T(n)$ is indeed not in $\Theta(\log n)$, which makes the proof difficult. You need to understand that if $T(n) = T(n/2) + n^2$, then $T(n) = \Omega(n^2)$, since it uses $n^2$ time in the first "iteration" or "level". You have made a mistake when drawing the call tree. The call tree will look like this: $$ n^2 + \left(\frac{n}{2}\right)^2 + \left(\frac{n}{4}\right)^2 + \left(\frac{n}{8}\right)^2 + \cdots + \left(\frac{n}{2^i}\right)^2$$ You are right that the "tree" (or path) terminates after $\log_2(n)$ calls, so the summation should look like $$\sum_{i=0}^{\log_2 n} \left( \frac{n}{2^i}\right)^2 = \sum_{i=0}^{\log_2 n} \left( \frac{n^2}{2^{2i}}\right) = n^2 \cdot \sum_{i=0}^{\log_2 n} \left( \frac{1}{2^{2i}}\right) = n^2 \cdot c, $$ for some constant $c$. Now, since $T(1) = 3$, let's try to prove by induction that $T(n) \leq 3n^2$. Base case 1: $T(1) = 3 \leq 3\cdot 1^2 = 3$ Base case 2: $T(2) = 3 \leq 3\cdot 2^2 = 12$ Induction hypothesis: $T(n') \leq 3n'^2$ for all $n' < n$. Induction step: $T(n) = T(n/2) + n^2 \leq 3 \left(\frac{n}{2}\right)^2 + n^2 = \frac{3}{4} n^2 + n^2 \leq 3n^2$.
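As a numerical sanity check (added here, not part of the original answer), the recurrence can be evaluated directly and compared against the claimed $\Theta(n^2)$ bounds:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 3 for n <= 2, else T(floor(n/2)) + n^2
    if n <= 2:
        return 3
    return T(n // 2) + n * n

# Consistent with T(n) = Theta(n^2):
# the upper bound T(n) <= 3 n^2 and the lower bound T(n) >= n^2 (for n >= 3).
assert all(T(n) <= 3 * n * n for n in range(1, 5000))
assert all(T(n) >= n * n for n in range(3, 5000))
```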
{ "domain": "cs.stackexchange", "id": 19231, "tags": "algorithms, complexity-theory, time-complexity, algorithm-analysis, induction" }
Ed. Witten's new paper and the simulation of a quantum field theory
Question: Context: Ed. Witten recently wrote a potentially revolutionary paper where he showed that under certain conditions, a Chern-Simons path integral in three dimensions is equivalent to an N = 4 path integral in four dimensions (this is the standard d=4, N=4 super Yang-Mills theory). Speculation: Witten had shown that the Chern-Simons topological quantum field theory can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial. (source: Wikipedia and this paper) Now I haven't completed reading Witten's paper and I wouldn't understand much of it anyway. But the idea is that if a quantum computer can simulate a path integral (or a Chern-Simons TQFT), and since Witten has now shown both of them to be dual descriptions in some sense, a quantum computer, at least theoretically, might be able to simulate a QFT. Also, by extension, Maldacena proposed that the specific field theory Witten is using is dual to type IIB string theory via AdS/CFT, so it may also be possible (only theoretically) to simulate a string theory. Question: What are the technical constraints that a quantum or classical computer faces while simulating a QFT? Also, my speculation is only partially complete; could experts suggest a better description? Thanks! PS. Also thanks to Mitchell Porter who brought up that paper before. Answer: To complement what Joe wrote, and maybe explain this question a bit more (without answering it!): The computational complexity of simulating "realistic" quantum field theories has been considered an open problem for a long time. One of the main problems, as I understand it, is that (3+1)-dimensional QFTs aren't sufficiently well-defined mathematically for it to be clear what model of computation should correspond to them. But the situation is different for the (2+1)-dimensional QFTs called topological quantum field theories (TQFTs).
For those, there is a rigorous mathematical description based on the Jones polynomial, due to Witten from the 1980s. It's that description that led to the deep and celebrated result of Freedman, Kitaev, Larsen, and Wang, which showed that simulating TQFTs is indeed BQP-complete, as one would hope and expect (see Aharonov, Jones, and Landau for a more computer-scientist-friendly version). This remains essentially the only rigorous result we have about the computational complexity of quantum field theory. Now, the questioner is asking whether some new work by Witten could give a handle on the computational complexity of (3+1)-dimensional QFTs. I don't know the answer to that, but it seems obvious that whatever it is, it would involve a significant research effort, and probably not fit within the margins of CS Theory StackExchange. :-) Addendum (Oct. 12, 2013): I just saw this answer again, and I thought I should add a note that, shortly after I posted it, Jordan, Lee, and Preskill released an important paper showing how to simulate "$\phi^4$ theory" (a simple interacting quantum field theory) in quantum polynomial time, in any number of spacetime dimensions. This doesn't directly address the OP's question, but it does render obsolete my comment about Freedman-Kitaev-Larsen-Wang remaining "essentially the only rigorous result we have about the computational complexity of quantum field theory."
{ "domain": "cstheory.stackexchange", "id": 1026, "tags": "quantum-computing" }
How to classify the symmetry $C$, $P$ and $T$?
Question: What is the difference between internal symmetries and space-time symmetries? Where would the $C$, $P$ and $T$ symmetries be classified? Answer: Space-time symmetries are symmetries with respect to coordinate transformations. In non-relativistic physics, space and time are assumed to be Galilean invariant. The Galilean symmetries form a ten-parameter continuous group. These parameters label three space translations, time translation, three rotations and three Galilean boosts. It also includes discrete symmetries such as reflections. In relativistic physics, Galilean symmetry is replaced by Poincare symmetry, another ten-parameter symmetry group, generated by four translations, three rotations and three Lorentz boosts, which also includes reflections. On the other hand, internal symmetries refer to transformations acting on internal degrees of freedom of systems, such as fields, charges, etc., which do not transform space-time points. The classic examples of internal symmetries are gauge symmetries. In quantum field theory, gauge symmetries are normally given by compact semi-simple Lie groups, such as QCD's $SU(3)$ and the electroweak $SU(2)\times U(1)$. When people talk about internal symmetries, they normally refer to gauge symmetries. However (as remarked by AccidentalFourierTransform), gauge symmetries are not symmetries in a strict sense. A symmetry is a transformation relating distinct states of a given system which results in the same physics. Gauge symmetry is only a way of labeling the same state in different ways. It is therefore just a redundancy. See for instance: Gauge symmetry is not a symmetry? The parity transformation (P) and time reversal (T) are coordinate reflections and therefore are space-time transformations. These transformations belong to the Poincare group. Charge conjugation (C) acts only on charges, with no regard to space-time, so it is an internal transformation.
{ "domain": "physics.stackexchange", "id": 40071, "tags": "quantum-field-theory, terminology, symmetry, poincare-symmetry, cpt-symmetry" }
Composition of permutation operators
Question: I'm reading a course on second quantization and they say that the composition of permutation operators is: $$ P_{\sigma} \circ P_{\sigma'} = P_{\sigma' \circ \sigma} $$ But for me it should be: $$ P_{\sigma} \circ P_{\sigma'} = P_{\sigma \circ \sigma '} $$ I take an example. Imagine I work with 3 particles. I have $| \psi \rangle = |1:u_1,2:u_2,3:u_3 \rangle $ I apply : $P_{312} \circ P_{132} $ on it. $$ P_{132} |1:u_1,2:u_2,3:u_3 \rangle = |1:u_1,3:u_2,2:u_3 \rangle=|1:u_1,2:u_3,3:u_2 \rangle$$ Thus: $$ P_{312} \circ P_{132} |1:u_1,2:u_2,3:u_3 \rangle = P_{312} |1:u_1,2:u_3,3:u_2 \rangle = |1:u_3,2:u_2,3:u_1 \rangle$$ I remark that: $P_{312} \circ P_{132} = P_{321}$. And I have the permutations $312 \circ 132 = 321$. So why do we have to "invert" the composition in the first equations written? I don't get it. I don't think it is a mistake from the course I read because they wrote to be careful with the inversion of permutations. Answer: This is a perennial problem in the literature of permutation groups and their action on sets. There are two ways in which permutations can act on a ket $|a_1,a_2,a_3>$. One can interpret $S_1=(12)$ as the instruction to interchange the label in place 1 with that in place 2, or as the instruction to interchange the label with subscript 1 with that of subscript 2. For the first move it makes no difference: in each case $S:|a_1,a_2,a_3> \to |a_2,a_1,a_3>$. It makes a difference at the next move. Does $S_2=(13)$ take $|a_2,a_1,a_3> \to |a_3,a_1,a_2>$ or to $|a_2,a_3,a_1>$? If $S_1\circ S_2=S_3$ in the first interpretation, then $S_2\circ S_1=S_3$ in the second. It's like the way body-fixed and space-fixed rotations obey opposite composition rules in mechanics. Beware also that, for historical reasons, some books read the composition of permutations from left to right -- in $S_1S_2$ they first act by $S_1$ and then by $S_2$. I wrote "$\circ$'' to imply composition in which $S_1\circ S_2$ means first $S_2$ then $S_1$.
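The two conventions described in the answer can be made concrete in a few lines (an illustration added here; permutations are 0-indexed tuples in one-line notation, so the question's 312 is (2, 0, 1)). Acting "by places" composes one way round; acting "by labels" (via the inverse) composes the other way round:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def act_places(p, state):
    """Put into slot i whatever sat in slot p(i)."""
    return tuple(state[p[i]] for i in range(len(p)))

def act_labels(p, state):
    """Move the content of slot i to slot p(i): the inverse action."""
    return act_places(inverse(p), state)

p312, p132 = (2, 0, 1), (0, 2, 1)
assert compose(p312, p132) == (2, 1, 0)   # i.e. 312 o 132 = 321

state = ("u1", "u2", "u3")
for p in permutations(range(3)):
    for q in permutations(range(3)):
        # 'by places' is an anti-homomorphism: P_p P_q = P_{q o p}
        assert act_places(p, act_places(q, state)) == act_places(compose(q, p), state)
        # 'by labels' is a homomorphism: P_p P_q = P_{p o q}
        assert act_labels(p, act_labels(q, state)) == act_labels(compose(p, q), state)
```

Which composition rule holds thus depends on which of the two actions $P_\sigma$ denotes — exactly the ambiguity the answer describes.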
{ "domain": "physics.stackexchange", "id": 39582, "tags": "quantum-mechanics, mathematics" }
Variation on Bellman-Ford Algorithms?
Question: We have a directed graph with 100 vertices, $v_1 \to v_2 \to \cdots \to v_{100}$, and all edge weights are equal to 1. We want to use Bellman-Ford to find all shortest paths from $v_1$ to the other vertices. In each step, this algorithm checks all edges in an arbitrary order. If in some step the shortest distances from $v_1$ to all other vertices do not change, the algorithm halts. The number of steps is related to the checking order of the edges. What are the minimum and maximum numbers of steps for this problem? Solution: 2 and 100. How is this solution achieved? Answer: Since we are checking edges in a different order each time, the time taken will depend on the order of the edges being checked in the Bellman-Ford algorithm. We will get the final result in 2 steps if our order is $v_1v_2, v_2v_3, v_3v_4, \dots, v_{99}v_{100}$. However, we will get the final result in 100 steps if our order is $v_{99}v_{100}, v_{98}v_{99}, \dots, v_2v_3, v_1v_2$. We can prove that this is the worst possible by noting that in the Bellman-Ford algorithm with positive weights, in every iteration at least one shortest distance is calculated correctly. So with this order, in the first iteration we will only determine the shortest distance to $v_2$, others will remain $\infty$; in the second iteration we will determine the shortest distances to $v_2$ and $v_3$ while others still remain $\infty$; and so forth.
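A direct simulation (added here for illustration) confirms both counts, counting a "step" as one full pass over all edges, including the final pass that detects no change:

```python
def passes_until_stable(n, edges):
    """Bellman-Ford on vertices 0..n-1 with unit weights, source 0.
    Returns the number of passes, counting the final no-change pass."""
    INF = float("inf")
    dist = [0] + [INF] * (n - 1)
    passes = 0
    while True:
        passes += 1
        changed = False
        for u, v in edges:
            if dist[u] + 1 < dist[v]:
                dist[v] = dist[u] + 1
                changed = True
        if not changed:
            return passes

n = 100
path = [(i, i + 1) for i in range(n - 1)]       # v1 -> v2 -> ... -> v100
assert passes_until_stable(n, path) == 2        # best order: forward
assert passes_until_stable(n, path[::-1]) == 100  # worst order: reversed
```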
{ "domain": "cs.stackexchange", "id": 6123, "tags": "algorithms, graphs, data-structures, trees, shortest-path" }
sensor_msgs/JointState on Arduino
Question: Hi all, I've got maybe a very noob question. I'm using rosserial on an Arduino Uno board and I'm trying to stream out the readings from wheel encoders as a sensor_msgs/JointState data structure. The problem is that I can't figure out how to properly push back elements in the data structure, and when I try to use the push_back() method I get the following error. error: request for member ‘push_back’ in ‘wheel_odo.sensor_msgs::JointState::name’, which is of non-class type ‘char**’ Can you help me out? Thanks! EDIT: here is the code #include <CAN.h> #include <SPI.h> #include <SerialCommand.h> #include <ros.h> #include <std_msgs/String.h> #include <sensor_msgs/ChannelFloat32.h> #include <sensor_msgs/JointState.h> ros::NodeHandle nh; std_msgs::String str_msg; //sensor_msgs::ChannelFloat32 wheel_odo[4]; sensor_msgs::JointState wheel_odo; char* id = "/myBot"; ros::Publisher sinbot_odometry("sinbot_odometry", &wheel_odo); SerialCommand serialCommand; #define CAN_BUS_SPEED 1000 // 1Mbaud int state = 0; int pin = LOW; byte received[8] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}; byte rec [4] = {0x00, 0x00, 0x00, 0x00}; signed long rec_new; void setup() { // put your setup code here, to run once: pinMode(7, OUTPUT); pinMode(8, OUTPUT); CAN.begin(); CAN.baudConfig(CAN_BUS_SPEED); CAN.setMode(NORMAL); delay(100); Serial.begin(115200); //Baudrate Serial connection wheel_odo.header.frame_id = id; wheel_odo.name.push_back(str_msg); nh.initNode(); nh.advertise(sinbot_odometry); } PS: By the way, what I'm trying to do is stream out the information from 4 wheel encoders using an Arduino Uno board. What I would like to have is the following set of information for each wheel: [ ros_timestamp, arduino_timestamp, encoder_reading] Considering that I'm not willing to implement my own message, is sensor_msgs/JointState the best solution in my case?
EDIT: Also, using the JointState data structure as it is done in this link "answers.ros.org/question/43157/trying-to-use-get-joint-state-with-my-urdf/" is not working for Arduino, and I get error: request for member ‘resize’ in ‘wheel_odo.sensor_msgs::JointState::position’, which is of non-class type ‘float*’ when I try to resize the structure as: wheel_odo.position.resize(7); Originally posted by RagingBit on ROS Answers with karma: 706 on 2014-07-07 Post score: 0 Original comments Comment by McMurdo on 2014-07-07: Please update question with the relevant lines that produce the error. It might be difficult without that. Also look at the API for sensor_msgs::JointState. http://docs.ros.org/api/sensor_msgs/html/msg/JointState.html . Answer: Since the Arduino has far less memory, the message generation for rosserial uses different data structures. In this case, I think it's representing the list of names in the joint state message as a char** instead of a std::vector<std::string>. You should probably put your joint names in a char ** and assign it into your joint_states message. Originally posted by ahendrix with karma: 47576 on 2014-07-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by RagingBit on 2014-07-08: Can you provide an example of how you would do that? Thanks! Comment by McMurdo on 2014-07-08: char *a[] = {"joint_1", "joint_2", "joint_3", "joint_3.2"}; Comment by RagingBit on 2014-07-08: The main problem I'm having is how to allocate memory to my joint space data structure. resize() or push_back() methods seem not to be available on Arduino. Comment by McMurdo on 2014-07-08: Yes, resize and push_back are only applicable to C++ containers. So for a regular array of strings/floats you can only use new / malloc, I guess. Of course the array of strings can be directly specified since it won't change.
{ "domain": "robotics.stackexchange", "id": 18544, "tags": "ros, joint-states" }
Complexity of Turing self-reducibility of Clique problem?
Question: I'm interested in the complexity of self-reducibility of a variant of the clique problem. Namely, the NP-complete problem HALF CLIQUE: given a graph on $N$ nodes, is there a clique of size $N/2$ in the graph? How hard is it to reduce an instance of HALF CLIQUE of input size $2N$ to an instance of size $N$ using a Turing reduction? Is there a Cook reduction? I'm looking for references for upper bounds and lower bounds on the complexity of Turing reductions of this type. Answer: This is not a reference, but an NP-complete problem cannot have a polynomial-time self-reduction which halves the input size unless NP has an $n^{O(\log n)}$-time algorithm, even if we allow Turing reductions. Suppose there is a $p(n)$-time Turing reduction from an NP-complete problem $L$ to itself such that given an input of length $n$, the reduction only makes queries to strings of length at most $n/2$. Then you can convert this reduction to a standalone deterministic algorithm for $L$ by using recursion. By writing down a recurrence relation for the running time of this deterministic algorithm, it is not hard to see that it runs in time $n^{O(\log n)}$.
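The recurrence alluded to in the final sentence can be unrolled explicitly (a sketch added here, not part of the original answer):

```latex
% A(n): running time of the standalone algorithm obtained by answering
% each oracle query of the p(n)-time reduction recursively.
% The reduction makes at most p(n) queries, each of length at most n/2:
A(n) \;\le\; p(n)\cdot A(n/2) + p(n)
% Unrolling to depth \log_2 n, with A(O(1)) = O(1):
A(n) \;\le\; \prod_{i=0}^{\log_2 n} \bigl(p(n/2^i) + 1\bigr)\cdot O(1)
     \;\le\; \bigl(p(n)+1\bigr)^{O(\log n)} \;=\; n^{O(\log n)},
% using that p is a polynomial, so p(n) = n^{O(1)}.
```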
{ "domain": "cstheory.stackexchange", "id": 775, "tags": "cc.complexity-theory, reductions" }
Fetch and append player statistics to an Excel spreadsheet
Question: I am a new programmer in python, and I need a bit of help to structure my code. game_schedule = ( ('Daily Game Schedule', 'daily_game_schedule'), ('Full Game Schedule', 'full_game_schedule'), ) team_standings = ( ('Overall Team Standings', 'overall_team_standings'), ('Playoff Team Standings', 'playoff_team_standings'), ) worksheets = ( ('Game Schedule', game_schedule), ('Cumulative Player Stats', 'cumulative_player_stats'), ('Player Injuries', 'player_injuries'), ('Team Standings', team_standings), ) def create_and_update_worksheets(): """ Add 'Player statistics' if the worksheet is not in file_name. Otherwise, it will update the worksheet itself. """ os.chdir(os.path.dirname(os.path.abspath(__file__))) writer = pd.ExcelWriter(file_name, engine='openpyxl') for key, value in worksheets: start_row = 0 if isinstance(value, tuple): for subkey, subvalue in value: response = send_request('2017-2018-regular', subvalue).content df1 = pd.DataFrame(['Title']) df2 = pd.read_csv(io.StringIO(response.decode('utf-8'))) df1.to_excel(writer, key, startrow=start_row, header=None, \ index=False) df2.to_excel(writer, key, startrow=start_row+2, index=False) start_row += len(df2) + 4 else: response = send_request('2017-2018-regular', value).content df1 = pd.DataFrame(['Title']) df2 = pd.read_csv(io.StringIO(response.decode('utf-8'))) df1.to_excel(writer, key, startrow=start_row, header=None, \ index=False) df2.to_excel(writer, key, startrow=start_row+2, index=False) for sheet in writer.sheets.values(): resize_columns(sheet) writer.save() writer.close() create_and_update_worksheets() I think there is repetitive code in the for loop for key, value in worksheets:. How can I change the structure so that it is not repetitive? 
Answer: As we can see, this piece of code is repeated:

    response = send_request('2017-2018-regular', value).content
    df1 = pd.DataFrame(['Title'])
    df2 = pd.read_csv(io.StringIO(response.decode('utf-8')))
    df1.to_excel(writer, key, startrow=start_row, header=None, \
                 index=False)
    df2.to_excel(writer, key, startrow=start_row+2, index=False)

You can define a function for it. Because `df2` becomes local to the helper, have it return the next starting row; it works like this:

    def create_and_update_worksheets():
        def less_repetitive(value, start_row):
            response = send_request('2017-2018-regular', value).content
            df1 = pd.DataFrame(['Title'])
            df2 = pd.read_csv(io.StringIO(response.decode('utf-8')))
            df1.to_excel(writer, key, startrow=start_row, header=None, \
                         index=False)
            df2.to_excel(writer, key, startrow=start_row+2, index=False)
            # df2 is local now, so hand the updated offset back to the caller
            return start_row + len(df2) + 4
        ...
        for key, value in worksheets:
            ...
            if isinstance(value, tuple):
                for subkey, subvalue in value:
                    start_row = less_repetitive(subvalue, start_row)
            else:
                start_row = less_repetitive(value, start_row)
        ...

Another solution is to change the values in `worksheets` so that they are all tuples of pairs. Since values that are not tuples carry no `subkey`, they can be wrapped as `(('', value),)` — note the trailing comma, because `(('', value))` is just `('', value)` — and as you don't need `subkey`, just replace it with `_`. The code works like this:

    def create_and_update_worksheets():
        ...
        for key, value in worksheets:
            ...
            if not isinstance(value, tuple):
                value = (('', value),)
            for _, subvalue in value:
                response = send_request('2017-2018-regular', subvalue).content
                df1 = pd.DataFrame(['Title'])
                df2 = pd.read_csv(io.StringIO(response.decode('utf-8')))
                df1.to_excel(writer, key, startrow=start_row, header=None, \
                             index=False)
                df2.to_excel(writer, key, startrow=start_row+2, index=False)
                start_row += len(df2) + 4
        ...
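The tuple-wrapping idea in the second solution can be checked on its own, independent of pandas. The sheet names and endpoint strings below are illustrative; the point is that the wrapper needs a trailing comma, since `(('', value))` is just `('', value)`:

```python
worksheets = (
    ('Game Schedule', (('Daily Game Schedule', 'daily_game_schedule'),
                       ('Full Game Schedule', 'full_game_schedule'))),
    ('Player Injuries', 'player_injuries'),
)

def iter_endpoints(worksheets):
    # Normalize every value to a tuple of (label, endpoint) pairs,
    # so a single loop handles both the nested and the plain case.
    for key, value in worksheets:
        if not isinstance(value, tuple):
            value = (('', value),)  # trailing comma: a 1-tuple of pairs
        for _, subvalue in value:
            yield key, subvalue

flat = list(iter_endpoints(worksheets))
```

After normalization, both record shapes come out of the same loop, which is exactly what removes the duplicated branch from the original function.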
{ "domain": "codereview.stackexchange", "id": 27908, "tags": "python, excel, csv, pandas" }
Faraday and Lenz's law
Question: I'm trying to wrap my head around Faraday's and Lenz's laws. Mathematically this implies that $$\oint_{\partial s} (\vec E +\vec v \times \vec B)\cdot d\vec{l} = \frac{d \phi}{dt}$$ where $$ \phi = \int \int \vec B \cdot\vec {dA}.$$ But aren't magnetic fields supposed to have zero flux? So isn't $\phi$ supposed to be zero? Answer: First, Faraday's Law is $$ \mathcal{E} = - \frac{d}{dt} \Phi$$ (with a minus sign). Second, the EMF is defined as $$ \mathcal{E} = \int_{\mathcal{C}} \mathbf{f}\cdot d\mathbf{s}$$ where $\mathbf{f}$ is the force per unit of charge. What you have written is correct in the case where the only force is the electromagnetic force. The term with $\mathbf{v}\times\mathbf{B}$ will be perpendicular to the path for wires (or if your path follows a linear current), since it's perpendicular to the velocity. Third, as already stated in the comments, magnetic flux is $0$ only for closed surfaces. In general, the flux is not $0$ for an arbitrary surface. Think of the magnetic field of a solenoid and choose the surface to be just inside it: all the field lines will pass through the surface in the same direction. By the way, for closed surfaces you won't be able to apply Faraday's Law, since the surfaces have to be bounded, as you have written in your integral. Fourth, you didn't ask explicitly, but Lenz's Law is contained in the minus sign of Faraday's Law. It just says that the induced EMF will induce a magnetic field which goes against the change in flux.
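The minus sign can be sanity-checked numerically. As a minimal sketch (the loop area, field amplitude, and frequency below are made-up values): for a flat loop of area $A$ in a uniform field $B(t) = B_0\sin\omega t$, the flux is $\phi = AB_0\sin\omega t$ and Faraday's Law gives $\mathcal{E} = -AB_0\omega\cos\omega t$.

```python
import math

A, B0, OMEGA = 0.01, 0.5, 100.0  # made-up loop area (m^2), field (T), angular frequency (rad/s)

def flux(t):
    # Flux of the uniform field B(t) = B0*sin(omega*t) through a flat loop of area A.
    return A * B0 * math.sin(OMEGA * t)

def emf_numeric(t, h=1e-7):
    # Faraday's Law, EMF = -dPhi/dt, evaluated with a central difference.
    return -(flux(t + h) - flux(t - h)) / (2 * h)

def emf_analytic(t):
    # The closed-form derivative: EMF = -A*B0*omega*cos(omega*t).
    return -A * B0 * OMEGA * math.cos(OMEGA * t)
```

At any instant the two agree to numerical precision, which is just the statement that the EMF equals minus the rate of change of the flux through the (open, bounded) surface spanning the loop.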
{ "domain": "physics.stackexchange", "id": 75940, "tags": "electromagnetism, electromagnetic-induction, lenz-law" }
Ray diagram of infinity well
Question: An infinity well works on the principle that when two plane mirrors are placed in front of each other, an infinite number of images is produced, thus creating an illusion. I am a bit confused about its ray diagram. Please guide. Answer: Here is a diagram. I'm not sure if it is important to make the separation of the mirrors match the diameter to get a nicer effect.
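The infinite family of images in the ray diagram can also be located by calculation. For plane mirrors at $x=0$ and $x=d$ with the object at $x=a$, repeated reflection places images at $2kd \pm a$ for every integer $k$; a short sketch (the positions used below are illustrative):

```python
def image_positions(d, a, k_max):
    """Image positions for an object at x=a between plane mirrors at x=0 and x=d.

    Repeated reflection in the two mirrors maps the object to 2*k*d + a
    and 2*k*d - a for every integer k; truncate the family at |k| <= k_max.
    """
    pts = set()
    for k in range(-k_max, k_max + 1):
        pts.add(2 * k * d + a)
        pts.add(2 * k * d - a)
    pts.discard(a)  # the object itself is not an image
    return sorted(pts)
```

Each increase in $|k|$ adds a pair of ever more distant (and, in practice, dimmer) images, which is the "infinite tunnel" the ray diagram depicts.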
{ "domain": "physics.stackexchange", "id": 17705, "tags": "geometric-optics" }